Sample records for estimating site specific

  1. Probability-based estimates of site-specific copper water quality criteria for the Chesapeake Bay, USA.

    PubMed

    Arnold, W Ray; Warren-Hicks, William J

    2007-01-01

    The object of this study was to estimate site- and region-specific dissolved copper criteria for a large embayment, the Chesapeake Bay, USA. The intent is to show the utility of 2 copper saltwater quality site-specific criteria estimation models and associated region-specific criteria selection methods. The criteria estimation models and selection methods are simple, efficient, and cost-effective tools for resource managers. The methods are proposed as potential substitutes for the US Environmental Protection Agency's water effect ratio methods. Dissolved organic carbon data and the copper criteria models were used to produce probability-based estimates of site-specific copper saltwater quality criteria. Site- and date-specific criteria estimations were made for 88 sites (n = 5,296) in the Chesapeake Bay. The average and range of estimated site-specific chronic dissolved copper criteria for the Chesapeake Bay were 7.5 and 5.3 to 16.9 microg Cu/L. The average and range of estimated site-specific acute dissolved copper criteria for the Chesapeake Bay were 11.7 and 8.3 to 26.4 microg Cu/L. The results suggest that applicable national and state copper criteria can increase in much of the Chesapeake Bay and remain protective. Virginia Department of Environmental Quality copper criteria near the mouth of the Chesapeake Bay, however, need to decrease to protect species of equal or greater sensitivity to that of the marine mussel, Mytilus sp.

  2. Site-specific estimation of peak-streamflow frequency using generalized least-squares regression for natural basins in Texas

    USGS Publications Warehouse

    Asquith, William H.; Slade, R.M.

    1999-01-01

    The U.S. Geological Survey, in cooperation with the Texas Department of Transportation, has developed a computer program to estimate peak-streamflow frequency for ungaged sites in natural basins in Texas. Peak-streamflow frequency refers to the peak streamflows for recurrence intervals of 2, 5, 10, 25, 50, and 100 years. Peak-streamflow frequency estimates are needed by planners, managers, and design engineers for flood-plain management; for objective assessment of flood risk; for cost-effective design of roads and bridges; and also for the design of culverts, dams, levees, and other flood-control structures. The program estimates peak-streamflow frequency using a site-specific approach and a multivariate generalized least-squares linear regression. A site-specific approach differs from a traditional regional regression approach by developing unique equations to estimate peak-streamflow frequency specifically for the ungaged site. The stations included in the regression are selected using an informal cluster analysis that compares the basin characteristics of the ungaged site to the basin characteristics of all the stations in the database. The program provides several choices for selecting the stations. Selecting the stations using cluster analysis ensures that the stations included in the regression will have the most pertinent information about flooding characteristics of the ungaged site and therefore provide the basis for potentially improved peak-streamflow frequency estimation. An evaluation of the site-specific approach in estimating peak-streamflow frequency for gaged sites indicates that the site-specific approach is at least as accurate as a traditional regional regression approach.
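
    As a rough illustration of the site-specific regression idea in this record, the sketch below (not the USGS program itself) picks the gaged stations most similar to an ungaged site in log-transformed basin characteristics, fits a log-log regression of the 100-year peak on those characteristics, and predicts the peak for the ungaged site. The characteristic names (area_mi2, slope_ftmi, shape), the nearest-neighbor stand-in for the informal cluster analysis, and the use of ordinary rather than generalized least squares are assumptions made for brevity.

```python
import numpy as np

def site_specific_peak_estimate(ungaged, stations, k=15):
    """ungaged: dict of basin characteristics; stations: list of dicts with the
    same characteristics plus 'q100' (100-year peak streamflow, cfs)."""
    keys = ["area_mi2", "slope_ftmi", "shape"]            # illustrative names
    X_all = np.log10([[s[key] for key in keys] for s in stations])
    target = np.log10([ungaged[key] for key in keys])
    # informal "cluster" step: nearest stations in standardized log space
    z = (X_all - X_all.mean(0)) / X_all.std(0)
    zt = (target - X_all.mean(0)) / X_all.std(0)
    nearest = np.argsort(np.linalg.norm(z - zt, axis=1))[:k]
    # log-log regression fit only to the selected stations (OLS stands in
    # here for the generalized least squares used in the actual program)
    X = np.column_stack([np.ones(len(nearest)), X_all[nearest]])
    y = np.log10([stations[i]["q100"] for i in nearest])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 10 ** (np.concatenate(([1.0], target)) @ beta)
```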

  3. Air/Superfund national technical guidance study series, Volume 2. Estimation of baseline air emission at Superfund sites. Interim report(Final)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1989-01-01

    This volume is one in a series of manuals prepared for EPA to assist its Remedial Project Managers in the assessment of the air contaminant pathway and developing input data for risk assessment. The manual provides guidance on developing baseline-emission estimates from hazardous waste sites. Baseline-emission estimates (BEEs) are defined as emission rates estimated for a site in its undisturbed state. Specifically, the manual is intended to: Present a protocol for selecting the appropriate level of effort to characterize baseline air emissions; Assist site managers in designing an approach for BEEs; Describe useful technologies for developing site-specific baseline emission estimates (BEEs); Help site managers select the appropriate technologies for generating site-specific BEEs.

  4. Evaluation of the Enhanced Integrated Climatic Model for modulus-based construction specification for Oklahoma pavements.

    DOT National Transportation Integrated Search

    2013-07-01

    The study provides estimation of site-specific variation in environmental factors that can be used in predicting seasonal and long-term variations in moduli of unbound materials. Using these site-specific estimates, the EICM climatic input files ...

  5. Extensively Parameterized Mutation-Selection Models Reliably Capture Site-Specific Selective Constraint.

    PubMed

    Spielman, Stephanie J; Wilke, Claus O

    2016-11-01

    The mutation-selection model of coding sequence evolution has received renewed attention for its use in estimating site-specific amino acid propensities and selection coefficient distributions. Two computationally tractable mutation-selection inference frameworks have been introduced: One framework employs a fixed-effects, highly parameterized maximum likelihood approach, whereas the other employs a random-effects Bayesian Dirichlet Process approach. While both implementations follow the same model, they appear to make distinct predictions about the distribution of selection coefficients. The fixed-effects framework estimates a large proportion of highly deleterious substitutions, whereas the random-effects framework estimates that all substitutions are either nearly neutral or weakly deleterious. It remains unknown, however, how accurately each method infers evolutionary constraints at individual sites. Indeed, selection coefficient distributions pool all site-specific inferences, thereby obscuring a precise assessment of site-specific estimates. Therefore, in this study, we use a simulation-based strategy to determine how accurately each approach recapitulates the selective constraint at individual sites. We find that the fixed-effects approach, despite its extensive parameterization, consistently and accurately estimates site-specific evolutionary constraint. By contrast, the random-effects Bayesian approach systematically underestimates the strength of natural selection, particularly for slowly evolving sites. We also find that, despite the strong differences between their inferred selection coefficient distributions, the fixed- and random-effects approaches yield surprisingly similar inferences of site-specific selective constraint. We conclude that the fixed-effects mutation-selection framework provides the more reliable software platform for model application and future development. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  6. Congestion and recreation site demand: a model of demand-induced quality effects

    USGS Publications Warehouse

    Douglas, Aaron J.; Johnson, Richard L.

    1993-01-01

    This analysis focuses on problems of estimating site-specific dollar benefits conferred by outdoor recreation sites in the face of congestion costs. Encounters, crowding effects and congestion costs have often been treated by natural resource economists in a piecemeal fashion. In the current paper, encounters and crowding effects are treated systematically. We emphasize the quantitative impact of congestion costs on site-specific estimates of benefits conferred by improvements in outdoor recreation sites. The principal analytic conclusion is that techniques that streamline data requirements produce biased estimates of benefits conferred by site improvements at facilities with significant crowding effects. The principal policy recommendation is that Federal and state agencies should collect and store information on visitation rates, encounter levels and congestion costs at various outdoor recreation sites.

  7. Statistical and Economic Techniques for Site-specific Nematode Management.

    PubMed

    Liu, Zheng; Griffin, Terry; Kirkpatrick, Terrence L

    2014-03-01

    Recent advances in precision agriculture technologies and spatial statistics allow realistic, site-specific estimation of nematode damage to field crops and provide a platform for the site-specific delivery of nematicides within individual fields. This paper reviews the spatial statistical techniques that model correlations among neighboring observations and develop a spatial economic analysis to determine the potential of site-specific nematicide application. The spatial econometric methodology applied in the context of site-specific crop yield response contributes to closing the gap between data analysis and realistic site-specific nematicide recommendations and helps to provide a practical method of site-specifically controlling nematodes.

  8. SITE-SPECIFIC CHARACTERIZATION OF SOIL RADON POTENTIALS

    EPA Science Inventory

    The report presents a theoretical basis for measuring site-specific radon potentials. However, the empirical measurements suggest that the precision of such measurements is marginal, leaving an uncertainty of about a factor of 2 in site-specific estimates. Although this may be us...

  9. Repeatable source, site, and path effects on the standard deviation for empirical ground-motion prediction models

    USGS Publications Warehouse

    Lin, P.-S.; Chiou, B.; Abrahamson, N.; Walling, M.; Lee, C.-T.; Cheng, C.-T.

    2011-01-01

    In this study, we quantify the reduction in the standard deviation for empirical ground-motion prediction models by removing the ergodic assumption. We partition the modeling error (residual) into five components, three of which represent the repeatable source-location-specific, site-specific, and path-specific deviations from the population mean. A variance estimation procedure for these error components is developed for use with a set of recordings from earthquakes not heavily clustered in space. With most source locations and propagation paths sampled only once, we opt to exploit the spatial correlation of residuals to estimate the variances associated with the path-specific and the source-location-specific deviations. The estimation procedure is applied to ground-motion amplitudes from 64 shallow earthquakes in Taiwan recorded at 285 sites with at least 10 recordings per site. The estimated variance components are used to quantify the reduction in aleatory variability that can be used in hazard analysis for a single site and for a single path. For peak ground acceleration and spectral accelerations at periods of 0.1, 0.3, 0.5, 1.0, and 3.0 s, we find that the single-site standard deviations are 9%-14% smaller than the total standard deviation, whereas the single-path standard deviations are 39%-47% smaller.
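
    A minimal sketch of the site-term part of this residual partitioning is given below: it removes the repeatable site-specific deviation (the mean residual at each station) and compares the remaining single-site standard deviation with the total standard deviation. The full study partitions residuals into five components and uses spatial correlation for the path and source-location terms, which this sketch does not attempt.

```python
import numpy as np

def single_site_sigma(residuals, site_ids):
    """residuals: total ground-motion residuals (ln units); site_ids: station
    label for each record. Returns (total sigma, single-site sigma)."""
    residuals = np.asarray(residuals, float)
    site_ids = np.asarray(site_ids)
    total = residuals.std(ddof=1)
    # repeatable site term = mean residual at each station (the study above
    # required at least 10 recordings per station; no such check here)
    site_term = np.array([residuals[site_ids == s].mean() for s in site_ids])
    within = residuals - site_term
    return total, within.std(ddof=1)

# reduction = 1 - single_site / total, comparable to the 9%-14% quoted above
```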

  10. A Comparison of Regional and Site-Specific Volume Estimation Equations

    Treesearch

    Joe P. McClure; Jana Anderson; Hans T. Schreuder

    1987-01-01

    Regression equations for volume by region and site class were examined for loblolly pine. The regressions for the Coastal Plain and Piedmont regions had significantly different slopes. The results showed important practical differences in percentage of confidence intervals containing the true total volume and in percentage of estimates within a specific proportion of...

  11. Incorporating geologic information into hydraulic tomography: A general framework based on geostatistical approach

    NASA Astrophysics Data System (ADS)

    Zha, Yuanyuan; Yeh, Tian-Chyi J.; Illman, Walter A.; Onoe, Hironori; Mok, Chin Man W.; Wen, Jet-Chau; Huang, Shao-Yang; Wang, Wenke

    2017-04-01

    Hydraulic tomography (HT) has become a mature aquifer test technology over the last two decades. It collects nonredundant information of aquifer heterogeneity by sequentially stressing the aquifer at different wells and collecting aquifer responses at other wells during each stress. The collected information is then interpreted by inverse models. Among these models, the geostatistical approaches, built upon the Bayesian framework, first conceptualize hydraulic properties to be estimated as random fields, which are characterized by means and covariance functions. They then use the spatial statistics as prior information with the aquifer response data to estimate the spatial distribution of the hydraulic properties at a site. Since the spatial statistics describe the generic spatial structures of the geologic media at the site rather than site-specific ones (e.g., known spatial distributions of facies, faults, or paleochannels), the estimates are often not optimal. To improve the estimates, we introduce a general statistical framework, which allows the inclusion of site-specific spatial patterns of geologic features. Subsequently, we test this approach with synthetic numerical experiments. Results show that this approach, using conditional mean and covariance that reflect site-specific large-scale geologic features, indeed improves the HT estimates. Afterward, this approach is applied to HT surveys at a kilometer-scale-fractured granite field site with a distinct fault zone. We find that by including fault information from outcrops and boreholes for HT analysis, the estimated hydraulic properties are improved. The improved estimates subsequently lead to better prediction of flow during a different pumping test at the site.

  12. The effect of multiple primary rules on population-based cancer survival

    PubMed Central

    Weir, Hannah K.; Johnson, Christopher J.; Thompson, Trevor D.

    2015-01-01

    Purpose Different rules for registering multiple primary (MP) cancers are used by cancer registries throughout the world, making international data comparisons difficult. This study evaluates the effect of Surveillance, Epidemiology, and End Results (SEER) and International Association of Cancer Registries (IACR) MP rules on population-based cancer survival estimates. Methods Data from five US states and six metropolitan area cancer registries participating in the SEER Program were used to estimate age-standardized relative survival (RS%) for first cancers-only and all first cancers matching the selection criteria according to SEER and IACR MP rules for all cancer sites combined and for the top 25 cancer site groups among men and women. Results During 1995–2008, the percentage of MP cancers (all sites, both sexes) increased 25.4 % by using SEER rules (from 14.6 to 18.4 %) and 20.1 % by using IACR rules (from 13.2 to 15.8 %). More MP cancers were registered among females than among males, and SEER rules registered more MP cancers than IACR rules (15.8 vs. 14.4 % among males; 17.2 vs. 14.5 % among females). The top 3 cancer sites with the largest differences were melanoma (5.8 %), urinary bladder (3.5 %), and kidney and renal pelvis (2.9 %) among males, and breast (5.9 %), melanoma (3.9 %), and urinary bladder (3.4 %) among females. Five-year survival estimates (all sites combined) restricted to first primary cancers-only were higher than estimates by using first site-specific primaries (SEER or IACR rules), and for 11 of 21 sites among males and 11 of 23 sites among females. SEER estimates are comparable to IACR estimates for all site-specific cancers and marginally higher for all sites combined among females (RS 62.28 vs. 61.96 %). Conclusion Survival after diagnosis has improved for many leading cancers. However, cancer patients remain at risk of subsequent cancers. Survival estimates based on first cancers-only exclude a large and increasing number of MP cancers. To produce clinically and epidemiologically relevant and less biased cancer survival estimates, data on all cancers should be included in the analysis. The multiple primary rules (SEER or IACR) used to identify primary cancers do not affect survival estimates if all first cancers matching the selection criteria are used to produce site-specific survival estimates. PMID:23558444

  13. Evaluating Variability and Uncertainty of Geological Strength Index at a Specific Site

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Aladejare, Adeyemi Emman

    2016-09-01

    Geological Strength Index (GSI) is an important parameter for estimating rock mass properties. GSI can be estimated from quantitative GSI chart, as an alternative to the direct observational method which requires vast geological experience of rock. GSI chart was developed from past observations and engineering experience, with either empiricism or some theoretical simplifications. The GSI chart thereby contains model uncertainty which arises from its development. The presence of such model uncertainty affects the GSI estimated from GSI chart at a specific site; it is, therefore, imperative to quantify and incorporate the model uncertainty during GSI estimation from the GSI chart. A major challenge for quantifying the GSI chart model uncertainty is a lack of the original datasets that have been used to develop the GSI chart, since the GSI chart was developed from past experience without referring to specific datasets. This paper intends to tackle this problem by developing a Bayesian approach for quantifying the model uncertainty in GSI chart when using it to estimate GSI at a specific site. The model uncertainty in the GSI chart and the inherent spatial variability in GSI are modeled explicitly in the Bayesian approach. The Bayesian approach generates equivalent samples of GSI from the integrated knowledge of GSI chart, prior knowledge and observation data available from site investigation. Equations are derived for the Bayesian approach, and the proposed approach is illustrated using data from a drill and blast tunnel project. The proposed approach effectively tackles the problem of how to quantify the model uncertainty that arises from using GSI chart for characterization of site-specific GSI in a transparent manner.

  14. Comparison of screening-level and Monte Carlo approaches for wildlife food web exposure modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pastorok, R.; Butcher, M.; LaTier, A.

    1995-12-31

    The implications of using quantitative uncertainty analysis (e.g., Monte Carlo) and site-specific tissue residue data for wildlife exposure modeling were examined with data on trace elements at the Clark Fork River Superfund Site. Exposure of white-tailed deer, red fox, and American kestrel was evaluated using three approaches. First, a screening-level exposure model was based on conservative estimates of exposure parameters, including estimates of dietary residues derived from bioconcentration factors (BCFs) and soil chemistry. A second model without Monte Carlo was based on site-specific data for tissue residues of trace elements (As, Cd, Cu, Pb, Zn) in key dietary species and plausible assumptions for habitat spatial segmentation and other exposure parameters. Dietary species sampled included dominant grasses (tufted hairgrass and redtop), willows, alfalfa, barley, invertebrates (grasshoppers, spiders, and beetles), and deer mice. Third, the Monte Carlo analysis was based on the site-specific residue data and assumed or estimated distributions for exposure parameters. Substantial uncertainties are associated with several exposure parameters, especially BCFs, such that exposure and risk may be greatly overestimated in screening-level approaches. The results of the three approaches are compared with respect to realism, practicality, and data gaps. Collection of site-specific data on trace element concentrations in plants and animals eaten by the target wildlife receptors is a cost-effective way to obtain realistic estimates of exposure. Implications of the results for exposure and risk estimates are discussed relative to use of wildlife exposure modeling and evaluation of remedial actions at Superfund sites.
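
    The sketch below shows the general shape of the third, Monte Carlo approach: sample dietary concentrations, ingestion rates, and body weight from assumed distributions and propagate them through a simple dose equation. All distributions and parameter values are placeholders, not the Clark Fork River inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000                                  # Monte Carlo iterations
# Placeholder distributions -- not the site-specific Clark Fork data.
soil_pb  = rng.lognormal(np.log(400.0), 0.5, n)      # mg/kg lead in soil
plant_pb = rng.lognormal(np.log(15.0), 0.7, n)       # mg/kg lead in forage
ir_soil  = rng.triangular(0.005, 0.01, 0.03, n)      # kg/day soil ingested
ir_plant = rng.triangular(0.3, 0.5, 0.8, n)          # kg/day forage ingested
body_wt  = rng.normal(55.0, 5.0, n)                  # kg body weight (deer)

dose = (soil_pb * ir_soil + plant_pb * ir_plant) / body_wt   # mg/kg-bw/day
print(f"mean dose {dose.mean():.3f}, 95th percentile {np.percentile(dose, 95):.3f}")
```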

  15. Site Specific Probable Maximum Precipitation Estimates and Professional Judgement

    NASA Astrophysics Data System (ADS)

    Hayes, B. D.; Kao, S. C.; Kanney, J. F.; Quinlan, K. R.; DeNeale, S. T.

    2015-12-01

    State and federal regulatory authorities currently rely upon the US National Weather Service Hydrometeorological Reports (HMRs) to determine probable maximum precipitation (PMP) estimates (i.e., rainfall depths and durations) for estimating flooding hazards for relatively broad regions in the US. PMP estimates for the contributing watersheds upstream of vulnerable facilities are used to estimate riverine flooding hazards, while site-specific estimates for small watersheds are appropriate for individual facilities such as nuclear power plants. The HMRs are often criticized for their limitations on basin size, questionable applicability in regions affected by orographic effects, their lack of consistent methods, and generally for their age. HMR-51, which provides generalized PMP estimates for the United States east of the 105th meridian, was published in 1978 and is sometimes perceived as overly conservative. The US Nuclear Regulatory Commission (NRC) is currently reviewing several flood hazard evaluation reports that rely on site-specific PMP estimates that have been commercially developed. As such, NRC has recently investigated key areas of expert judgement via a generic audit and one in-depth site-specific review as they relate to identifying and quantifying actual and potential storm moisture sources, determining storm transposition limits, and adjusting available moisture during storm transposition. Though much of the approach reviewed was considered a logical extension of the HMRs, two key points of expert judgement stood out for further in-depth review. The first relates primarily to small storms and the use of a heuristic for storm-representative dew point adjustment, developed for the Electric Power Research Institute by North American Weather Consultants in 1993, in order to harmonize historic storms for which only 12-hour dew point data were available with more recent storms in a single database. The second issue relates to the use of climatological averages for spatially interpolating 100-year dew point values rather than a more gauge-based approach. Site-specific reviews demonstrated that both issues had the potential to lower the PMP estimate significantly by affecting the in-place and transposed moisture maximization value and, in turn, the final controlling storm for a given basin size and PMP estimate.
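
    The expert judgements discussed in this record feed into a simple piece of arithmetic: the observed storm depth is scaled by the ratio of the maximum precipitable water (derived from the storm-representative or 100-year dew point) to the precipitable water actually available to the storm. A hedged sketch of that scaling, with illustrative numbers only, is:

```python
def moisture_maximized_depth(storm_depth_in, storm_pw_in, max_pw_in):
    """Scale an observed storm depth by the moisture maximization ratio.
    storm_pw_in: precipitable water for the storm-representative dew point;
    max_pw_in:   climatological maximum precipitable water (e.g. from the
    100-year dew point), taken at the transposed location when the storm is
    moved. Units are inches throughout; the values below are made up."""
    return storm_depth_in * (max_pw_in / storm_pw_in)

print(moisture_maximized_depth(20.0, 2.1, 2.6))   # ~24.8 in after maximization
```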

  16. Regression models for estimating salinity and selenium concentrations at selected sites in the Upper Colorado River Basin, Colorado, 2009-2012

    USGS Publications Warehouse

    Linard, Joshua I.; Schaffrath, Keelin R.

    2014-01-01

    Elevated concentrations of salinity and selenium in the tributaries and main-stem reaches of the Colorado River are a water-quality concern and have been the focus of remediation efforts for many years. Land-management practices with the objective of limiting the amount of salt and selenium that reaches the stream have focused on improving the methods by which irrigation water is conveyed and distributed. Federal land managers implement improvements in accordance with the Colorado River Basin Salinity Control Act of 1974, which directs Federal land managers to enhance and protect the quality of water available in the Colorado River. In an effort to assist in evaluating and mitigating the detrimental effects of salinity and selenium, the U.S. Geological Survey, in cooperation with the Bureau of Reclamation, the Colorado River Water Resources District, and the Bureau of Land Management, analyzed salinity and selenium data collected at sites to develop regression models. The study area and sites are on the Colorado River or in one of three small basins in Western Colorado: the White River Basin, the Lower Gunnison River Basin, and the Dolores River Basin. By using data collected from water years 2009 through 2011, regression models able to estimate concentrations were developed for salinity at six sites and selenium at six sites. At a minimum, data from discrete measurement of salinity or selenium concentration, streamflow, and specific conductance at each of the sites were needed for model development. Comparison of the Adjusted R2 and standard error statistics of the two salinity models developed at each site indicated the models using specific conductance as the explanatory variable performed better than those using streamflow. The addition of multiple explanatory variables improved the ability to estimate selenium concentration at several sites compared with use of solely streamflow or specific conductance. The error associated with the log-transformed salinity and selenium estimates is consistent in log space; however, when the estimates are transformed into non-log values, the error increases as the estimates decrease. Continuous streamflow and specific conductance data collected at study sites provide the means to examine temporal variability in constituent concentration and load. The regression models can estimate continuous concentrations or loads on the basis of continuous specific conductance or streamflow data. Similar estimates are available for other sites at the USGS National Real-Time Water Quality Web page (http://nrtwq.usgs.gov) and provide water-resource managers with a means of improving their general understanding of how constituent concentration or load can change annually, seasonally, or in real time.
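
    A hedged sketch of the kind of surrogate regression described here is shown below: concentration is regressed on specific conductance in log space and then retransformed. The smearing correction shown is one common way to handle the retransformation bias the report notes when log estimates are converted back to concentrations; the report's actual model forms and coefficients are not reproduced.

```python
import numpy as np

def fit_log_linear(spec_cond, concentration):
    """Fit ln(C) = b0 + b1*ln(SC) and return a function that predicts C from
    specific conductance, with Duan's smearing factor applied so that the
    retransformed estimates are not systematically low."""
    x, y = np.log(spec_cond), np.log(concentration)
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    smear = np.exp(y - X @ beta).mean()          # retransformation correction
    return lambda sc: smear * np.exp(beta[0] + beta[1] * np.log(sc))
```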

  17. Efficacy of generic allometric equations for estimating biomass: a test in Japanese natural forests.

    PubMed

    Ishihara, Masae I; Utsugi, Hajime; Tanouchi, Hiroyuki; Aiba, Masahiro; Kurokawa, Hiroko; Onoda, Yusuke; Nagano, Masahiro; Umehara, Toru; Ando, Makoto; Miyata, Rie; Hiura, Tsutom

    2015-07-01

    Accurate estimation of tree and forest biomass is key to evaluating forest ecosystem functions and the global carbon cycle. Allometric equations that estimate tree biomass from a set of predictors, such as stem diameter and tree height, are commonly used. Most allometric equations are site specific, usually developed from a small number of trees harvested in a small area, and are either species specific or ignore interspecific differences in allometry. Due to lack of site-specific allometries, local equations are often applied to sites for which they were not originally developed (foreign sites), sometimes leading to large errors in biomass estimates. In this study, we developed generic allometric equations for aboveground biomass and component (stem, branch, leaf, and root) biomass using large, compiled data sets of 1203 harvested trees belonging to 102 species (60 deciduous angiosperm, 32 evergreen angiosperm, and 10 evergreen gymnosperm species) from 70 boreal, temperate, and subtropical natural forests in Japan. The best generic equations provided better biomass estimates than did local equations that were applied to foreign sites. The best generic equations included explanatory variables that represent interspecific differences in allometry in addition to stem diameter, reducing error by 4-12% compared to the generic equations that did not include the interspecific difference. Different explanatory variables were selected for different components. For aboveground and stem biomass, the best generic equations had species-specific wood specific gravity as an explanatory variable. For branch, leaf, and root biomass, the best equations had functional types (deciduous angiosperm, evergreen angiosperm, and evergreen gymnosperm) instead of functional traits (wood specific gravity or leaf mass per area), suggesting importance of other traits in addition to these traits, such as canopy and root architecture. Inclusion of tree height in addition to stem diameter improved the performance of the generic equation only for stem biomass and had no apparent effect on aboveground, branch, leaf, and root biomass at the site level. The development of a generic allometric equation taking account of interspecific differences is an effective approach for accurately estimating aboveground and component biomass in boreal, temperate, and subtropical natural forests.
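
    As an illustration of the form of generic equation described here (stem diameter plus a species-level trait such as wood specific gravity), the sketch below fits ln(aboveground biomass) on ln(diameter) and ln(wood specific gravity). The variable names and the two-predictor form are assumptions made for brevity; the study selected different predictors for different biomass components.

```python
import numpy as np

def fit_generic_allometry(diam_cm, wood_sg, agb_kg):
    """Fit ln(AGB) = a + b*ln(D) + c*ln(WSG); return (coefficients, predictor)."""
    X = np.column_stack([np.ones(len(diam_cm)), np.log(diam_cm), np.log(wood_sg)])
    coef, *_ = np.linalg.lstsq(X, np.log(agb_kg), rcond=None)
    def predict(d, sg):
        return np.exp(coef[0] + coef[1] * np.log(d) + coef[2] * np.log(sg))
    return coef, predict
```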

  18. Time vs. Money: A Quantitative Evaluation of Monitoring Frequency vs. Monitoring Duration.

    PubMed

    McHugh, Thomas E; Kulkarni, Poonam R; Newell, Charles J

    2016-09-01

    The National Research Council has estimated that over 126,000 contaminated groundwater sites are unlikely to achieve low ug/L clean-up goals in the foreseeable future. At these sites, cost-effective, long-term monitoring schemes are needed in order to understand the long-term changes in contaminant concentrations. Current monitoring optimization schemes rely on site-specific evaluations to optimize groundwater monitoring frequency. However, when using linear regression to estimate the long-term zero-order or first-order contaminant attenuation rate, the effect of monitoring frequency and monitoring duration on the accuracy and confidence for the estimated attenuation rate is not site-specific. For a fixed number of monitoring events, doubling the time between monitoring events (e.g., changing from quarterly monitoring to semi-annual monitoring) will double the accuracy of estimated attenuation rate. For a fixed monitoring frequency (e.g., semi-annual monitoring), increasing the number of monitoring events by 60% will double the accuracy of the estimated attenuation rate. Combining these two factors, doubling the time between monitoring events (e.g., quarterly monitoring to semi-annual monitoring) while decreasing the total number of monitoring events by 38% will result in no change in the accuracy of the estimated attenuation rate. However, the time required to collect this dataset will increase by 25%. Understanding that the trade-off between monitoring frequency and monitoring duration is not site-specific should simplify the process of optimizing groundwater monitoring frequency at contaminated groundwater sites. © 2016 The Authors. Groundwater published by Wiley Periodicals, Inc. on behalf of National Ground Water Association.
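
    The frequency-versus-duration trade-off follows directly from the standard error of an OLS slope fitted to equally spaced monitoring events, since the spread of sampling times scales as the spacing times the number of events. The short sketch below reproduces the two scalings quoted above (doubling the spacing, or adding roughly 60% more events, each doubles the accuracy); the noise level sigma is arbitrary because only the ratios matter.

```python
import numpy as np

def slope_se(n_events, spacing_yr, sigma=1.0):
    """Standard error of an attenuation rate estimated by ordinary least squares
    from n equally spaced monitoring events with residual noise sigma."""
    t = spacing_yr * np.arange(n_events)
    return sigma / np.sqrt(((t - t.mean()) ** 2).sum())

base  = slope_se(20, 0.25)      # quarterly monitoring, 20 events (5 years)
wider = slope_se(20, 0.50)      # semi-annual, same number of events
more  = slope_se(32, 0.25)      # quarterly, ~60% more events (8 years)
print(base / wider, base / more)   # both ratios are ~2
```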

  19. Minimum follow-up time required for the estimation of statistical cure of cancer patients: verification using data from 42 cancer sites in the SEER database

    PubMed Central

    Tai, Patricia; Yu, Edward; Cserni, Gábor; Vlastos, Georges; Royce, Melanie; Kunkler, Ian; Vinh-Hung, Vincent

    2005-01-01

    Background: The present commonly used five-year survival rates are not adequate to represent the statistical cure. In the present study, we established the minimum number of years of follow-up required to estimate the statistical cure rate, by using a lognormal distribution of the survival time of those who died of their cancer. We introduced the term threshold year: the follow-up time at which the survival data of patients dying from the specific cancer are mostly covered, leaving less than 2.25% uncovered. This is close enough to cure from that specific cancer. Methods: Data from the Surveillance, Epidemiology and End Results (SEER) database were tested to determine whether the survival times of cancer patients who died of their disease followed the lognormal distribution, using a minimum chi-square method. Patients diagnosed from 1973–1992 in the registries of Connecticut and Detroit were chosen so that a maximum of 27 years was allowed for follow-up to 1999. A total of 49 specific organ sites were tested. The parameters of those lognormal distributions were found for each cancer site. The cancer-specific survival rates at the threshold years were compared with the longest available Kaplan-Meier survival estimates. Results: The cancer-specific survival times of cancer patients who died of their disease from 42 cancer sites out of 49 sites were verified to follow different lognormal distributions. The threshold years validated for statistical cure varied for different cancer sites, from 2.6 years for pancreas cancer to 25.2 years for cancer of the salivary gland. At the threshold year, the statistical cure rates estimated for 40 cancer sites were found to match the actuarial long-term survival rates estimated by the Kaplan-Meier method within six percentage points. For two cancer sites, breast and thyroid, the threshold years were so long that the cancer-specific survival rates could not yet be obtained because the SEER data do not provide sufficiently long follow-up. Conclusion: The present study suggests a certain threshold year must be reached before the statistical cure rate can be estimated for each cancer site. For some cancers, such as breast and thyroid, the 5- or 10-year survival rates inadequately reflect statistical cure rates, and highlight the need for long-term follow-up of these patients. PMID:15904508
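
    A stripped-down sketch of the threshold-year calculation is given below: fit a lognormal to the survival times of patients who died of the specific cancer and take the follow-up time beyond which only roughly the 2.25% tail quoted above remains, i.e. about exp(mu + 2*sigma). The paper fits by a minimum chi-square method; the moment fit here is only for brevity.

```python
import numpy as np

def threshold_year(death_times_yr):
    """death_times_yr: cancer-specific survival times (years) of patients who
    died of the disease. Returns the follow-up time leaving roughly 2.25% of
    those deaths uncovered, assuming the times are lognormally distributed."""
    log_t = np.log(np.asarray(death_times_yr, float))
    mu, sigma = log_t.mean(), log_t.std(ddof=1)
    return float(np.exp(mu + 2.0 * sigma))      # P(T > t) is about 0.023
```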

  20. Can camera traps monitor Komodo dragons a large ectothermic predator?

    PubMed

    Ariefiandy, Achmad; Purwandana, Deni; Seno, Aganto; Ciofi, Claudio; Jessop, Tim S

    2013-01-01

    Camera trapping has greatly enhanced population monitoring of often cryptic and low abundance apex carnivores. Effectiveness of passive infrared camera trapping, and ultimately population monitoring, relies on temperature mediated differences between the animal and its ambient environment to ensure good camera detection. In ectothermic predators such as large varanid lizards, this criterion is presumed less certain. Here we evaluated the effectiveness of camera trapping to potentially monitor the population status of the Komodo dragon (Varanus komodoensis), an apex predator, using site occupancy approaches. We compared site-specific estimates of site occupancy and detection derived using camera traps and cage traps at 181 trapping locations established across six sites on four islands within Komodo National Park, Eastern Indonesia. Detection and site occupancy at each site were estimated using eight competing models that considered site-specific variation in occupancy (ψ) and varied detection probabilities (p) according to detection method, site and survey number using a single season site occupancy modelling approach. The most parsimonious model [ψ (site), p (site*survey); ω = 0.74] suggested that site occupancy estimates differed among sites. Detection probability varied as an interaction between site and survey number. Our results indicate that, overall, camera traps produced similar estimates of detection and site occupancy to cage traps, irrespective of being paired, or unpaired, with cage traps. Whilst one site showed some evidence that detection was affected by trapping method, detection was too low to produce an accurate occupancy estimate. Overall, as camera trapping is logistically more feasible it may provide, with further validation, an alternative method for evaluating long-term site occupancy patterns in Komodo dragons, and potentially other large reptiles, aiding conservation of this species.
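
    For orientation, the sketch below fits the simplest single-season occupancy model, psi(.) p(.), by maximum likelihood from a site-by-survey detection matrix. The paper's candidate models additionally let occupancy vary by site and detection vary by method, site, and survey; those covariates are omitted here, and the constant-psi, constant-p form is an assumption made for brevity.

```python
import numpy as np
from scipy.optimize import minimize

def fit_occupancy(det_hist):
    """det_hist: (n_sites, n_surveys) array of 0/1 detections.
    Returns maximum-likelihood estimates (psi_hat, p_hat)."""
    det_hist = np.asarray(det_hist)
    n_sites, n_surveys = det_hist.shape
    d = det_hist.sum(axis=1)                    # detections per site

    def negloglik(theta):
        psi, p = 1.0 / (1.0 + np.exp(-theta))   # logit -> probability
        # sites with detections must be occupied; all-zero sites may be
        # occupied-but-missed or genuinely unoccupied
        lik = psi * p ** d * (1 - p) ** (n_surveys - d) + (d == 0) * (1 - psi)
        return -np.log(lik).sum()

    result = minimize(negloglik, x0=np.zeros(2))
    return 1.0 / (1.0 + np.exp(-result.x))
```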

  1. Can Camera Traps Monitor Komodo Dragons a Large Ectothermic Predator?

    PubMed Central

    Ariefiandy, Achmad; Purwandana, Deni; Seno, Aganto; Ciofi, Claudio; Jessop, Tim S.

    2013-01-01

    Camera trapping has greatly enhanced population monitoring of often cryptic and low abundance apex carnivores. Effectiveness of passive infrared camera trapping, and ultimately population monitoring, relies on temperature mediated differences between the animal and its ambient environment to ensure good camera detection. In ectothermic predators such as large varanid lizards, this criterion is presumed less certain. Here we evaluated the effectiveness of camera trapping to potentially monitor the population status of the Komodo dragon (Varanus komodoensis), an apex predator, using site occupancy approaches. We compared site-specific estimates of site occupancy and detection derived using camera traps and cage traps at 181 trapping locations established across six sites on four islands within Komodo National Park, Eastern Indonesia. Detection and site occupancy at each site were estimated using eight competing models that considered site-specific variation in occupancy (ψ) and varied detection probabilities (p) according to detection method, site and survey number using a single season site occupancy modelling approach. The most parsimonious model [ψ (site), p (site*survey); ω = 0.74] suggested that site occupancy estimates differed among sites. Detection probability varied as an interaction between site and survey number. Our results indicate that, overall, camera traps produced similar estimates of detection and site occupancy to cage traps, irrespective of being paired, or unpaired, with cage traps. Whilst one site showed some evidence that detection was affected by trapping method, detection was too low to produce an accurate occupancy estimate. Overall, as camera trapping is logistically more feasible it may provide, with further validation, an alternative method for evaluating long-term site occupancy patterns in Komodo dragons, and potentially other large reptiles, aiding conservation of this species. PMID:23527027

  2. Computer software to estimate timber harvesting system production, cost, and revenue

    Treesearch

    Dr. John E. Baumgras; Dr. Chris B. LeDoux

    1992-01-01

    Large variations in timber harvesting cost and revenue can result from the differences between harvesting systems, the variable attributes of harvesting sites and timber stands, or changing product markets. Consequently, system- and site-specific estimates of production rates and costs are required to improve estimates of harvesting revenue. This paper describes...

  3. Estimates of hydraulic properties from a one-dimensional numerical model of vertical aquifer-system deformation, Lorenzi site, Las Vegas, Nevada

    USGS Publications Warehouse

    Pavelko, Michael T.

    2004-01-01

    Land subsidence related to aquifer-system compaction and ground-water withdrawals has been occurring in Las Vegas Valley, Nevada, since the 1930's, and by the late 1980's some areas in the valley had subsided more than 5 feet. Since the late 1980's, seasonal artificial-recharge programs have lessened the effects of summertime pumping on aquifer-system compaction, but the long-term trend of compaction continues in places. Since 1994, the U.S. Geological Survey has continuously monitored water-level changes in three piezometers and vertical aquifer-system deformation with a borehole extensometer at the Lorenzi site in Las Vegas, Nevada. A one-dimensional, numerical, ground-water flow model of the aquifer system below the Lorenzi site was developed for the period 1901-2000, to estimate aquitard vertical hydraulic conductivity, aquitard inelastic skeletal specific storage, and aquitard and aquifer elastic skeletal specific storage. Aquifer water-level data were used in the model as the aquifer-system stresses that controlled simulated vertical aquifer-system deformation. Nonlinear-regression methods were used to calibrate the model, utilizing estimated and measured aquifer-system deformation data to minimize a weighted least-squares objective function, and estimate optimal property values. Model results indicate that at the Lorenzi site, aquitard vertical hydraulic conductivity is 3 x 10-6 feet per day, aquitard inelastic skeletal specific storage is 4 x 10-5 per foot, aquitard elastic skeletal specific storage is 5 x 10-6 per foot, and aquifer elastic skeletal specific storage is 3 x 10-7 per foot. Regression statistics indicate that the model and data provided sufficient information to estimate the target properties, the model adequately simulated observed data, and the estimated property values are accurate and unique.

  4. National Stormwater Calculator - Version 1.1 (Model)

    EPA Science Inventory

    EPA’s National Stormwater Calculator (SWC) is a desktop application that estimates the annual amount of rainwater and frequency of runoff from a specific site anywhere in the United States (including Puerto Rico). The SWC estimates runoff at a site based on available information ...

  5. Secondary plant succession on disturbed sites at Yucca Mountain, Nevada

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Angerer, J.P.; Ostler, W.K.; Gabbert, W.D.

    1994-12-01

    This report presents the results of a study of secondary plant succession on disturbed sites created during initial site investigations in the late 1970s and early 1980s at Yucca Mountain, NV. Specific study objectives were to determine the rate and success of secondary plant succession, identify plant species found in disturbances that may be suitable for site-specific reclamation, and to identify environmental variables that influence succession on disturbed sites. During 1991 and 1992, fifty-seven disturbed sites were located. Vegetation parameters, disturbance characteristics and environmental variables were measured at each site. Disturbed site vegetation parameters were compared to those of undisturbed sites to determine the status of disturbed site plant succession. Vegetation on disturbed sites, after an average of ten years, was different from undisturbed areas. Ambrosia dumosa, Chrysothamnus teretifolius, Hymenoclea salsola, Gutierrezia sarothrae, Atriplex confertifolia, Atriplex canescens, and Stephanomeria pauciflora were the most dominant species across all disturbed sites. With the exception of A. dumosa, these species were generally minor components of the undisturbed vegetation. Elevation, soil compaction, soil potassium, and amounts of sand and gravel in the soil were found to be significant environmental variables influencing the species composition and abundance of perennial plants on disturbed sites. The recovery rate for disturbed site secondary succession was estimated. Using a linear function (which would represent optimal conditions), the recovery rate for perennial plant cover, regardless of which species comprised the cover, was estimated to be 20 years. However, when a logarithmic function (which would represent probable conditions) was used, the recovery rate was estimated to be 845 years. Recommendations for future studies and site-specific reclamation of disturbances are presented.
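
    The recovery-rate estimates quoted above amount to fitting disturbed-site cover against time with a linear and a logarithmic model and solving each for the time at which cover reaches the undisturbed level. A minimal sketch of that calculation (data and target cover supplied by the user) is:

```python
import numpy as np

def recovery_times(age_yr, cover_pct, target_cover_pct):
    """Return (t_linear, t_logarithmic): years to reach the undisturbed cover
    under a linear ("optimal") and a logarithmic ("probable") recovery model."""
    t = np.asarray(age_yr, float)
    c = np.asarray(cover_pct, float)
    slope_lin, icpt_lin = np.polyfit(t, c, 1)            # cover = a + b*t
    slope_log, icpt_log = np.polyfit(np.log(t), c, 1)    # cover = a + b*ln(t)
    t_lin = (target_cover_pct - icpt_lin) / slope_lin
    t_log = np.exp((target_cover_pct - icpt_log) / slope_log)
    return t_lin, t_log
```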

  6. Estimating Children's Soil/Dust Ingestion Rates through Retrospective Analyses of Blood Lead Biomonitoring from the Bunker Hill Superfund Site in Idaho.

    PubMed

    von Lindern, Ian; Spalinger, Susan; Stifelman, Marc L; Stanek, Lindsay Wichers; Bartrem, Casey

    2016-09-01

    Soil/dust ingestion rates are important variables in assessing children's health risks in contaminated environments. Current estimates are based largely on soil tracer methodology, which is limited by analytical uncertainty, small sample size, and short study duration. The objective was to estimate site-specific soil/dust ingestion rates through reevaluation of the lead absorption dose-response relationship using new bioavailability data from the Bunker Hill Mining and Metallurgical Complex Superfund Site (BHSS) in Idaho, USA. The U.S. Environmental Protection Agency (EPA) in vitro bioavailability methodology was applied to archived BHSS soil and dust samples. Using age-specific biokinetic slope factors, we related bioavailable lead from these sources to children's blood lead levels (BLLs) monitored during cleanup from 1988 through 2002. Quantitative regression analyses and exposure assessment guidance were used to develop candidate soil/dust source partition scenarios estimating lead intake, allowing estimation of age-specific soil/dust ingestion rates. These ingestion rate and bioavailability estimates were simultaneously applied to the U.S. EPA Integrated Exposure Uptake Biokinetic Model for Lead in Children to determine those combinations best approximating observed BLLs. Absolute soil and house dust bioavailability averaged 33% (SD ± 4%) and 28% (SD ± 6%), respectively. Estimated BHSS age-specific soil/dust ingestion rates are 86-94 mg/day for 6-month- to 2-year-old children and 51-67 mg/day for 2- to 9-year-old children. Soil/dust ingestion rate estimates for 1- to 9-year-old children at the BHSS are lower than those commonly used in human health risk assessment. A substantial component of children's exposure comes from sources beyond the immediate home environment. von Lindern I, Spalinger S, Stifelman ML, Stanek LW, Bartrem C. 2016. Estimating children's soil/dust ingestion rates through retrospective analyses of blood lead biomonitoring from the Bunker Hill Superfund Site in Idaho. Environ Health Perspect 124:1462-1470; http://dx.doi.org/10.1289/ehp.1510144.

  7. RAETRAD-F: VERSION 1.1 USER'S GUIDE FOR ANALYZING SITE-SPECIFIC MEASUREMENTS OF SOIL RADON POTENTIAL CATEGORY FOR FLORIDA HOUSES

    EPA Science Inventory

    The document describes RAETRAD-F (RAdon Emanation and TRAnsport into Dwellings--Florida), a computer code that provides a simple way to analyze site-specific soil measurements to estimate upper-limit indoor radon concentrations in a reference house on a site. The code uses data f...

  8. Site Transfer Functions of Three-Component Ground Motion in Western Turkey

    NASA Astrophysics Data System (ADS)

    Ozgur Kurtulmus, Tevfik; Akyol, Nihal; Camyildiz, Murat; Gungor, Talip

    2015-04-01

    Because of the high seismicity accommodating crustal deformation and the deep graben structures on which large urbanized and industrialized cities of western Turkey are built, site-specific seismic hazard assessment is increasingly crucial. Characterizing source, site, and path effects is important both for assessing the seismic hazard of a specific region and for generating or renewing building codes. In this study, we evaluated three-component recordings for micro- and moderate-size earthquakes with local magnitudes ranging between 2.0 and 5.6. This dataset was used for site transfer function estimation, utilizing two different spectral ratio approaches, the Standard Spectral Ratio (SSR) and the Horizontal to Vertical Spectral Ratio (HVSR), and a Generalized Inversion Technique (GIT), to highlight the site-specific seismic hazard potential of the deep basin structures of the region. The obtained transfer functions revealed that the sites located near the basin edges are characterized by broader HVSR curves. Broad HVSR peaks could be attributed to the complexity of wave propagation related to significant 2D/3D velocity variations at the sediment-bedrock interface near the basin edges. Comparison of HVSR and SSR estimates for the sites located on the grabens showed that SSR estimates give larger values at lower frequencies, which could be attributed to lateral variations in regional velocity and attenuation values caused by basin geometry and edge effects. However, large amplitude values of vertical-component GIT site transfer functions were observed at varying frequency ranges for some of the stations. These results imply that the vertical component of ground motion is not amplification free. Contamination of HVSR site transfer function estimates at different frequency bands could be related to complexities in the wave field caused by deep or shallow heterogeneities in the region, such as differences in basin geometries, fracturing, and fluid saturation along different propagation paths. The results also show that, even if a site is located on a horst, the presence of weathered zones near the surface could cause moderate frequency-dependent site effects.

  9. Multiscale Structure of UXO Site Characterization: Spatial Estimation and Uncertainty Quantification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ostrouchov, George; Doll, William E.; Beard, Les P.

    2009-01-01

    Unexploded ordnance (UXO) site characterization must consider both how the contamination is generated and how we observe that contamination. Within the generation and observation processes, dependence structures can be exploited at multiple scales. We describe a conceptual site characterization process, the dependence structures available at several scales, and consider their statistical estimation aspects. It is evident that most of the statistical methods that are needed to address the estimation problems are known but their application-specific implementation may not be available. We demonstrate estimation at one scale and propose a representation for site contamination intensity that takes full account of uncertainty, is flexible enough to answer regulatory requirements, and is a practical tool for managing detailed spatial site characterization and remediation. The representation is based on point process spatial estimation methods that require modern computational resources for practical application. These methods have provisions for including prior and covariate information.

  10. Hierarchical Bayes estimation of species richness and occupancy in spatially replicated surveys

    USGS Publications Warehouse

    Kery, M.; Royle, J. Andrew

    2008-01-01

    1. Species richness is the most widely used biodiversity metric, but cannot be observed directly as, typically, some species are overlooked. Imperfect detectability must therefore be accounted for to obtain unbiased species-richness estimates. When richness is assessed at multiple sites, two approaches can be used to estimate species richness: either estimating for each site separately, or pooling all samples. The first approach produces imprecise estimates, while the second loses site-specific information. 2. In contrast, a hierarchical Bayes (HB) multispecies site-occupancy model benefits from the combination of information across sites without losing site-specific information and also yields occupancy estimates for each species. The heart of the model is an estimate of the incompletely observed presence-absence matrix, a centrepiece of biogeography and monitoring studies. We illustrate the model using Swiss breeding bird survey data, and compare its estimates with the widely used jackknife species-richness estimator and raw species counts. 3. Two independent observers each conducted three surveys in 26 1-km(2) quadrats, and detected 27-56 (total 103) species. The average estimated proportion of species detected after three surveys was 0.87 under the HB model. Jackknife estimates were less precise (less repeatable between observers) than raw counts, but HB estimates were as repeatable as raw counts. The combination of information in the HB model thus resulted in species-richness estimates presumably at least as unbiased as previous approaches that correct for detectability, but without costs in precision relative to uncorrected, biased species counts. 4. Total species richness in the entire region sampled was estimated at 113.1 (CI 106-123); species detectability ranged from 0.08 to 0.99, illustrating very heterogeneous species detectability; and species occupancy was 0.06-0.96. Even after six surveys, absolute bias in observed occupancy was estimated at up to 0.40. 5. Synthesis and applications. The HB model for species-richness estimation combines information across sites and enjoys more precise, and presumably less biased, estimates than previous approaches. It also yields estimates of several measures of community size and composition. Covariates for occupancy and detectability can be included. We believe it has considerable potential for monitoring programmes as well as in biogeography and community ecology.

  11. Sierra Nevada meadows: species alpha diversity

    Treesearch

    Raymond D. Ratliff

    1993-01-01

    Plant species diversity refers to variety and abundance; it does not necessarily relate to meadow health but may provide information important in an ecosystem context. Monitoring to detect change in diversity usually begins with estimating alpha (within) diversity of plant communities. Because few such estimates exist for meadow site classes or specific sites of the...

  12. Estimation of Ecosystem Parameters of the Community Land Model with DREAM: Evaluation of the Potential for Upscaling Net Ecosystem Exchange

    NASA Astrophysics Data System (ADS)

    Hendricks Franssen, H. J.; Post, H.; Vrugt, J. A.; Fox, A. M.; Baatz, R.; Kumbhar, P.; Vereecken, H.

    2015-12-01

    Estimation of net ecosystem exchange (NEE) by land surface models is strongly affected by uncertain ecosystem parameters and initial conditions. A possible approach is the estimation of plant functional type (PFT) specific parameters for sites with measurement data like NEE and application of the parameters at other sites with the same PFT and no measurements. This upscaling strategy was evaluated in this work for sites in Germany and France. Ecosystem parameters and initial conditions were estimated with NEE time series of one year in length, or a time series of only one season. The DREAM(zs) algorithm was used for the estimation of parameters and initial conditions. DREAM(zs) is not limited to Gaussian distributions and can condition to large time series of measurement data simultaneously. DREAM(zs) was used in combination with the Community Land Model (CLM) v4.5. Parameter estimates were evaluated by model predictions at the same site for an independent verification period. In addition, the parameter estimates were evaluated at other, independent sites situated >500 km away with the same PFT. The main conclusions are: i) simulations with estimated parameters reproduced the NEE measurement data better in the verification periods, including the annual NEE sum (23% improvement), annual NEE cycle, and average diurnal NEE course (error reduction by a factor of 1.6); ii) estimated parameters based on seasonal NEE data outperformed estimated parameters based on yearly data; iii) in addition, those seasonal parameters were often also significantly different from their yearly equivalents; iv) estimated parameters were significantly different if initial conditions were estimated together with the parameters. We conclude that estimated PFT-specific parameters improve land surface model predictions significantly at independent verification sites and for independent verification periods, so that their potential for upscaling is demonstrated. However, simulation results also indicate that the estimated parameters possibly mask other model errors. This would imply that their application at climatic time scales would not improve model predictions. A central question is whether the integration of many different data streams (e.g., biomass, remotely sensed LAI) could solve the problems indicated here.

  13. True versus Apparent Malaria Infection Prevalence: The Contribution of a Bayesian Approach

    PubMed Central

    Claes, Filip; Van Hong, Nguyen; Torres, Kathy; Mao, Sokny; Van den Eede, Peter; Thi Thinh, Ta; Gamboa, Dioni; Sochantha, Tho; Thang, Ngo Duc; Coosemans, Marc; Büscher, Philippe; D'Alessandro, Umberto; Berkvens, Dirk; Erhart, Annette

    2011-01-01

    Aims: To present a new approach for estimating the “true prevalence” of malaria and apply it to datasets from Peru, Vietnam, and Cambodia. Methods: Bayesian models were developed for estimating both the malaria prevalence using different diagnostic tests (microscopy, PCR & ELISA), without the need of a gold standard, and the tests' characteristics. Several sources of information, i.e. data, expert opinions and other sources of knowledge, can be integrated into the model. This approach, resulting in an optimal and harmonized estimate of malaria infection prevalence with no conflict between the different sources of information, was tested on data from Peru, Vietnam and Cambodia. Results: Malaria sero-prevalence was relatively low in all sites, with ELISA showing the highest estimates. The sensitivity of microscopy and ELISA were statistically lower in Vietnam than in the other sites. Similarly, the specificities of microscopy, ELISA and PCR were significantly lower in Vietnam than in the other sites. In Vietnam and Peru, microscopy was closer to the “true” estimate than the other 2 tests, while as expected ELISA, with its lower specificity, usually overestimated the prevalence. Conclusions: Bayesian methods are useful for analyzing prevalence results when no gold standard diagnostic test is available. Though some results are expected, e.g. PCR more sensitive than microscopy, a standardized and context-independent quantification of the diagnostic tests' characteristics (sensitivity and specificity) and the underlying malaria prevalence may be useful for comparing different sites. Indeed, the use of a single diagnostic technique could strongly bias the prevalence estimation. This limitation can be circumvented by using a Bayesian framework taking into account the imperfect characteristics of the currently available diagnostic tests. As discussed in the paper, this approach may further support global malaria burden estimation initiatives. PMID:21364745
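
    The simplest deterministic counterpart of this Bayesian correction is the Rogan-Gladen adjustment, which converts an apparent prevalence into a true prevalence given a test's sensitivity and specificity. The sketch below shows that point correction only; the paper's model instead places priors on the test characteristics and combines microscopy, PCR, and ELISA without a gold standard.

```python
def true_prevalence(apparent, sensitivity, specificity):
    """Rogan-Gladen point estimate of the true prevalence from an apparent
    (test-positive) prevalence and the test's sensitivity and specificity."""
    return (apparent + specificity - 1.0) / (sensitivity + specificity - 1.0)

# Illustrative numbers only: 15% ELISA-positive with Se = 0.90, Sp = 0.95
print(true_prevalence(0.15, 0.90, 0.95))   # ~0.118, i.e. ~11.8% true prevalence
```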

  14. Estimating Children's Soil/Dust Ingestion Rates through ...

    EPA Pesticide Factsheets

    Background: Soil/dust ingestion rates are important variables in assessing children’s health risks in contaminated environments. Current estimates are based largely on soil tracer methodology, which is limited by analytical uncertainty, small sample size, and short study duration. Objectives: The objective was to estimate site-specific soil/dust ingestion rates through reevaluation of the lead absorption dose–response relationship using new bioavailability data from the Bunker Hill Mining and Metallurgical Complex Superfund Site (BHSS) in Idaho, USA. Methods: The U.S. Environmental Protection Agency (EPA) in vitro bioavailability methodology was applied to archived BHSS soil and dust samples. Using age-specific biokinetic slope factors, we related bioavailable lead from these sources to children’s blood lead levels (BLLs) monitored during cleanup from 1988 through 2002. Quantitative regression analyses and exposure assessment guidance were used to develop candidate soil/dust source partition scenarios estimating lead intake, allowing estimation of age-specific soil/dust ingestion rates. These ingestion rate and bioavailability estimates were simultaneously applied to the U.S. EPA Integrated Exposure Uptake Biokinetic Model for Lead in Children to determine those combinations best approximating observed BLLs. Results: Absolute soil and house dust bioavailability averaged 33% (SD ± 4%) and 28% (SD ± 6%), respectively. Estimated BHSS age-specific soil/du

  15. Use of instantaneous streamflow measurements to improve regression estimates of index flow for the summer month of lowest streamflow in Michigan

    USGS Publications Warehouse

    Holtschlag, David J.

    2011-01-01

    In Michigan, index flow Q50 is a streamflow characteristic defined as the minimum of median flows for July, August, and September. The state of Michigan uses index flow estimates to help regulate large (greater than 100,000 gallons per day) water withdrawals to prevent adverse effects on characteristic fish populations. At sites where long-term streamgages are located, index flows are computed directly from continuous streamflow records as GageQ50. In an earlier study, a multiple-regression equation was developed to estimate index flows IndxQ50 at ungaged sites. The index equation explains about 94 percent of the variability of index flows at 147 (index) streamgages by use of six explanatory variables describing soil type, aquifer transmissivity, land cover, and precipitation characteristics. This report extends the results of the previous study, by use of Monte Carlo simulations, to evaluate alternative flow estimators, DiscQ50, IntgQ50, SiteQ50, and AugmQ50. The Monte Carlo simulations treated each of the available index streamgages, in turn, as a miscellaneous site where streamflow conditions are described by one or more instantaneous measurements of flow. In the simulations, instantaneous flows were approximated by daily mean flows at the corresponding site. All estimators use information that can be obtained from instantaneous flow measurements and contemporaneous daily mean flow data from nearby long-term streamgages. The efficacy of these estimators was evaluated over a set of measurement intensities in which the number of simulated instantaneous flow measurements ranged from 1 to 100 at a site. The discrete measurement estimator DiscQ50 is based on a simple linear regression developed between information on daily mean flows at five or more streamgages near the miscellaneous site and their corresponding GageQ50 index flows. The regression relation then was used to compute a DiscQ50 estimate at the miscellaneous site by use of the simulated instantaneous flow measurement. This process was repeated to develop a set of DiscQ50 estimates for all simulated instantaneous measurements; a weighted DiscQ50 estimate was then formed from this set. Results indicated that the expected value of this weighted estimate was more precise than the IndxQ50 estimate for all measurement intensities evaluated. The integrated index-flow estimator, IntgQ50, was formed by computing a weighted average of the index estimate IndxQ50 and the DiscQ50 estimate. Results indicated that the IntgQ50 estimator was more precise than the DiscQ50 estimator at low measurement intensities of one to two measurements. At greater measurement intensities, the precision of the IntgQ50 estimator converges to that of the DiscQ50 estimator. Neither the DiscQ50 nor the IntgQ50 estimator provided site-specific estimates. In particular, although expected values of DiscQ50 and IntgQ50 estimates converge with increasing measurement intensity, they do not necessarily converge to the site-specific value of Q50. The site estimator of flow, SiteQ50, was developed to facilitate this convergence at higher measurement intensities. This is accomplished by use of the median of simulated instantaneous flow values for each measurement intensity level. A weighted estimate of the median and information associated with the IntgQ50 estimate was used to form the SiteQ50 estimate.
Initial simulations indicate that the SiteQ50 estimator generally has greater precision than the IntgQ50 estimator at measurement intensities greater than 3; however, additional analysis is needed to identify streamflow conditions under which instantaneous measurements will produce estimates that generally converge to the index flows. A preliminary augmented index regression equation was developed, which contains the index regression estimate and two additional variables associated with base-flow recession characteristics. When these recession variables were estimated as the medians of recession parameters compute
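    The report forms IntgQ50 as a weighted average of the regression estimate IndxQ50 and the measurement-based estimate DiscQ50; the abstract does not state the weighting scheme, so the sketch below assumes simple inverse-variance weights, a common way to combine independent estimates of the same quantity. All numeric values are hypothetical.

    ```python
    def inverse_variance_weighted(estimates, variances):
        """Combine independent estimates of the same quantity with inverse-variance weights."""
        weights = [1.0 / v for v in variances]
        total = sum(weights)
        value = sum(w * e for w, e in zip(weights, estimates)) / total
        return value, 1.0 / total  # combined estimate and its variance

    # Hypothetical values: a regression estimate (IndxQ50) and a measurement-based estimate (DiscQ50).
    indx_q50, var_indx = 12.0, 9.0     # cubic feet per second, illustrative only
    disc_q50, var_disc = 10.5, 4.0
    intg_q50, var_intg = inverse_variance_weighted([indx_q50, disc_q50], [var_indx, var_disc])
    print(f"IntgQ50 = {intg_q50:.2f}, variance = {var_intg:.2f}")
    ```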

  16. Estimating Children’s Soil/Dust Ingestion Rates through Retrospective Analyses of Blood Lead Biomonitoring from the Bunker Hill Superfund Site in Idaho

    PubMed Central

    von Lindern, Ian; Spalinger, Susan; Stifelman, Marc L.; Stanek, Lindsay Wichers; Bartrem, Casey

    2016-01-01

    Background: Soil/dust ingestion rates are important variables in assessing children’s health risks in contaminated environments. Current estimates are based largely on soil tracer methodology, which is limited by analytical uncertainty, small sample size, and short study duration. Objectives: The objective was to estimate site-specific soil/dust ingestion rates through reevaluation of the lead absorption dose–response relationship using new bioavailability data from the Bunker Hill Mining and Metallurgical Complex Superfund Site (BHSS) in Idaho, USA. Methods: The U.S. Environmental Protection Agency (EPA) in vitro bioavailability methodology was applied to archived BHSS soil and dust samples. Using age-specific biokinetic slope factors, we related bioavailable lead from these sources to children’s blood lead levels (BLLs) monitored during cleanup from 1988 through 2002. Quantitative regression analyses and exposure assessment guidance were used to develop candidate soil/dust source partition scenarios estimating lead intake, allowing estimation of age-specific soil/dust ingestion rates. These ingestion rate and bioavailability estimates were simultaneously applied to the U.S. EPA Integrated Exposure Uptake Biokinetic Model for Lead in Children to determine those combinations best approximating observed BLLs. Results: Absolute soil and house dust bioavailability averaged 33% (SD ± 4%) and 28% (SD ± 6%), respectively. Estimated BHSS age-specific soil/dust ingestion rates are 86–94 mg/day for 6-month- to 2-year-old children and 51–67 mg/day for 2- to 9-year-old children. Conclusions: Soil/dust ingestion rate estimates for 1- to 9-year-old children at the BHSS are lower than those commonly used in human health risk assessment. A substantial component of children’s exposure comes from sources beyond the immediate home environment. Citation: von Lindern I, Spalinger S, Stifelman ML, Stanek LW, Bartrem C. 2016. Estimating children’s soil/dust ingestion rates through retrospective analyses of blood lead biomonitoring from the Bunker Hill Superfund Site in Idaho. Environ Health Perspect 124:1462–1470; http://dx.doi.org/10.1289/ehp.1510144 PMID:26745545

  17. Natural attenuation software (NAS): Assessing remedial strategies and estimating timeframes

    USGS Publications Warehouse

    Mendez, E.; Widdowson, M.; Chapelle, F.; Casey, C.

    2005-01-01

    Natural Attenuation Software (NAS) is a screening tool to estimate remediation timeframes for monitored natural attenuation (MNA) and to assist in decision-making on the level of source zone treatment in conjunction with MNA using site-specific remediation objectives. Natural attenuation processes modeled by NAS include advection, dispersion, sorption, non-aqueous phase liquid (NAPL) dissolution, and biodegradation of either petroleum hydrocarbons or chlorinated ethylenes. Newly implemented enhancements to NAS, designed to maximize its utility for site managers, are described. NAS has expanded source contaminant specification options to include chlorinated ethanes and chlorinated methanes, and to allow for the analysis of any other user-defined contaminants that may be subject to microbially-mediated transformations (heavy metals, radioisotopes, etc.). Included is the capability to model co-mingled plumes, with constituents from multiple contaminant categories. To enable comparison of remediation timeframe estimates between MNA and specific engineered remedial actions, NAS was modified to incorporate an estimation technique for timeframes associated with pump-and-treat remediation technology for comparison to MNA. This is an abstract of a paper presented at the 8th International In Situ and On-Site Bioremediation Symposium (Baltimore, MD, 6/6-9/2005).
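    NAS itself couples transport, NAPL dissolution, and biodegradation. Purely as a back-of-envelope illustration of how a remediation timeframe follows from a decay assumption, the sketch below uses simple first-order kinetics, which is not the full NAS formulation; the rate constant and concentrations are hypothetical.

    ```python
    import math

    def time_to_reach_goal(c0, c_goal, k_per_year):
        """Years for a concentration to decay from c0 to c_goal under first-order kinetics."""
        return math.log(c0 / c_goal) / k_per_year

    # Illustrative numbers: 5 mg/L decaying at 0.3 per year toward a 0.005 mg/L cleanup goal.
    print(f"{time_to_reach_goal(5.0, 0.005, 0.3):.1f} years")
    ```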

  18. Improving the S-Shape Solar Radiation Estimation Method for Supporting Crop Models

    PubMed Central

    Fodor, Nándor

    2012-01-01

    In line with the critical comments formulated in relation to the S-shape global solar radiation estimation method, the original formula was improved via a 5-step procedure. The improved method was compared to four reference methods on a large North American database. According to the investigated error indicators, the final 7-parameter S-shape method has the same or even better estimation efficiency than the original formula. The improved formula is able to provide radiation estimates with a particularly low error pattern index (PIdoy), which is especially important concerning the usability of the estimated radiation values in crop models. Using site-specific calibration, the radiation estimates of the improved S-shape method caused an average of 2.72 ± 1.02% (α = 0.05) relative error in the calculated biomass. Using only readily available site-specific metadata, the radiation estimates caused less than 5% relative error in the crop model calculations when they were used for locations in the middle, plain territories of the USA. PMID:22645451

  19. 10 CFR 960.3-1-4-2 - Site nomination for characterization.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... testing of core samples for the evaluation of geochemical and engineering rock properties, and chemical... industrial activities; and extrapolations of regional data to estimate site-specific characteristics and...

  20. 10 CFR 960.3-1-4-2 - Site nomination for characterization.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... testing of core samples for the evaluation of geochemical and engineering rock properties, and chemical... industrial activities; and extrapolations of regional data to estimate site-specific characteristics and...

  1. 10 CFR 960.3-1-4-2 - Site nomination for characterization.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... testing of core samples for the evaluation of geochemical and engineering rock properties, and chemical... industrial activities; and extrapolations of regional data to estimate site-specific characteristics and...

  2. 10 CFR 960.3-1-4-2 - Site nomination for characterization.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... testing of core samples for the evaluation of geochemical and engineering rock properties, and chemical... industrial activities; and extrapolations of regional data to estimate site-specific characteristics and...

  3. Evaluation of agricultural best-management practices in the Conestoga River headwaters, Pennsylvania; hydrology of a small carbonate site near Ephrata, Pennsylvania, prior to implementation of nutrient management

    USGS Publications Warehouse

    Koerkle, E.H.; Hall, D.W.; Risser, D.W.; Lietman, P.L.; Chichester, D.C.

    1997-01-01

    The U.S. Geological Survey, in cooperation with the U.S. Department of Agriculture and Pennsylvania Department of Environmental Protection, investigated the effects of agricultural best-management practices on water quality in the Conestoga River headwaters watershed. This report describes environmental factors and the surface-water and ground-water quality of one 47.5-acre field site, Field-Site 2, from October 1984 through September 1986, prior to implementation of nutrient management. The site is partially terraced agricultural cropland underlain by carbonate rock. Twenty-seven acres are terraced, pipe-drained, and under no-till cultivation. The remaining acreage is under minimum-till cultivation. Corn is the primary crop. The average annual rate of fertilization at the site was 480 pounds per acre of nitrogen and 110 pounds per acre of phosphorus. An unconfined limestone and dolomitic aquifer underlies the site. Depth to bedrock ranges from 5 to 30 feet below land surface. Estimated specific yields range from 0.05 to 0.10, specific capacities of wells range from less than 1 to about 20 gallons per minute per foot of drawdown, and estimates of transmissivities range from 10 to 10,000 square feet per day. Average ground-water recharge was estimated to be about 23 inches per year. The specific capacity and transmissivity data indicate that two aquifer regimes are present at the site. Wells drilled into dolomites in the eastern part of the site have larger specific capacities (averaging 20 gallons per minute per foot of drawdown) relative to specific capacities (averaging less than 1 gallon per minute per foot of drawdown) of wells drilled into limestones in the western part of the site. Median concentrations of soil-soluble nitrate and soluble phosphorus in the top 4 feet of silt- or silty-clay-loam soil ranged from 177 to 329 and 8.5 to 35 pounds per acre, respectively. Measured runoff from the pipe-drained terraces ranged from 10 to 48,000 cubic feet and was 1.7 and 0.8 percent, respectively, of the 1985 and 1986 annual precipitation. An estimated 90,700 cubic feet of surface runoff carried 87 pounds of total nitrogen and 37 pounds of total phosphorus, or less than 0.65 percent of the amount of either nutrient applied during the study period. Rainfall on the snow-covered, frozen ground produced more than half of the runoff and the nitrogen and phosphorus loads measured in pipe-drained runoff. Graphical and regression analyses of surface runoff suggest that (1) mean-storm concentrations of total nitrogen species and total phosphorus decreased with increasing time between a runoff event and the last previous nutrient application, and (2) mean total-phosphorus concentrations approached a baseline value (estimated at 2 to 5 milligrams per liter) after several months without nutrient applications. Dissolved nitrate concentrations in ground water in wells unaffected by an on-site ammonia spill ranged from 7.4 to 100 milligrams per liter. Average annual additions and removals of nitrogen were estimated. Nitrogen was added to the site by applications of manure and commercial fertilizer nitrogen, as well as by precipitation and ground water entering across the western site boundary. These sources of nitrogen accounted for 95, 3, 1, and 1 percent, respectively, of estimated additions.
Nitrogen was removed from the site in harvested crops, by ground-water discharge, by volatilization, and in surface runoff, which accounted for 42, 28, 29, and less than 1 percent, respectively, of estimated removals.

  4. Estimating pathway-specific contributions to biodegradation in aquifers based on dual isotope analysis: theoretical analysis and reactive transport simulations.

    PubMed

    Centler, Florian; Heße, Falk; Thullner, Martin

    2013-09-01

    At field sites with varying redox conditions, different redox-specific microbial degradation pathways contribute to total contaminant degradation. The identification of pathway-specific contributions to total contaminant removal is of high practical relevance, yet difficult to achieve with current methods. Current stable-isotope-fractionation-based techniques focus on the identification of dominant biodegradation pathways under constant environmental conditions. We present an approach based on dual stable isotope data to estimate the individual contributions of two redox-specific pathways. We apply this approach to carbon and hydrogen isotope data obtained from reactive transport simulations of an organic contaminant plume in a two-dimensional aquifer cross section to test the applicability of the method. To take aspects typically encountered at field sites into account, additional simulations addressed the effects of transverse mixing, diffusion-induced stable-isotope fractionation, heterogeneities in the flow field, and mixing in sampling wells on isotope-based estimates for aerobic and anaerobic pathway contributions to total contaminant biodegradation. Results confirm the general applicability of the presented estimation method which is most accurate along the plume core and less accurate towards the fringe where flow paths receive contaminant mass and associated isotope signatures from the core by transverse dispersion. The presented method complements the stable-isotope-fractionation-based analysis toolbox. At field sites with varying redox conditions, it provides a means to identify the relative importance of individual, redox-specific degradation pathways. © 2013.
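    The paper derives pathway-specific contributions from carbon and hydrogen isotope fractionation in a reactive transport setting. As a strongly simplified sketch of the underlying idea only, the code below assumes the observed (apparent) enrichment factors are mixtures of pathway-specific enrichment factors weighted by the pathway contributions, and solves the resulting one-variable least-squares problem; all enrichment factor values are hypothetical and the mixing assumption is an illustration, not the authors' derivation.

    ```python
    import numpy as np

    # Pathway-specific enrichment factors (per mil), hypothetical values for an aerobic
    # and an anaerobic degradation pathway, for carbon and hydrogen respectively.
    eps = np.array([[-1.5, -4.0],     # carbon:   [aerobic, anaerobic]
                    [-30.0, -80.0]])  # hydrogen: [aerobic, anaerobic]

    eps_observed = np.array([-3.0, -60.0])  # apparent enrichment factors from field data (hypothetical)

    # Assume eps_observed = F_a * eps_aerobic + (1 - F_a) * eps_anaerobic, with 0 <= F_a <= 1.
    a = eps[:, 0] - eps[:, 1]
    b = eps_observed - eps[:, 1]
    f_aerobic = float(np.clip(np.dot(a, b) / np.dot(a, a), 0.0, 1.0))  # least squares in one variable
    print(f"aerobic fraction ~ {f_aerobic:.2f}, anaerobic fraction ~ {1 - f_aerobic:.2f}")
    ```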

  5. Estimating microcystin levels at recreational sites in western Lake Erie and Ohio

    USGS Publications Warehouse

    Francy, Donna S.; Brady, Amie M. G.; Ecker, Christopher D.; Graham, Jennifer L.; Stelzer, Erin A.; Struffolino, Pamela; Loftin, Keith A.

    2016-01-01

    Cyanobacterial harmful algal blooms (cyanoHABs) and associated toxins, such as microcystin, are a major global water-quality issue. Water-resource managers need tools to quickly predict when and where toxin-producing cyanoHABs will occur. This could be done by using site-specific models that estimate the potential for elevated toxin concentrations that cause public health concerns. In this study, samples were collected at three Ohio lakes to identify environmental and water-quality factors and to develop linear-regression models to estimate microcystin levels. Measures of the algal community (phycocyanin, cyanobacterial biovolume, and cyanobacterial gene concentrations) and pH were most strongly correlated with microcystin concentrations. Cyanobacterial genes were quantified for general cyanobacteria, general Microcystis and Dolichospermum, and for microcystin synthetase (mcyE) for Microcystis, Dolichospermum, and Planktothrix. For phycocyanin, the relations differed between sites and between hand-held measurements made on-site and nearby continuous-monitor measurements for the same site. Continuous measurements of parameters such as phycocyanin, pH, and temperature over multiple days showed the highest correlations to microcystin concentrations. The development of models with high R2 values (0.81–0.90), sensitivities (92%), and specificities (100%) for estimating microcystin concentrations above or below the Ohio Recreational Public Health Advisory level of 6 μg L−1 was demonstrated for one site; these statistics may change as more data are collected in subsequent years. This study showed that models could be developed to estimate exceedance of a microcystin threshold concentration at a recreational freshwater lake site, with potential to expand their use to provide relevant public health information to water-resource managers and the public for both recreational and drinking waters.
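    The study developed site-specific linear-regression models and reports sensitivity and specificity for classifying exceedance of the 6 μg/L advisory level. The sketch below shows one plausible workflow under stated assumptions (ordinary least squares on log-transformed microcystin with phycocyanin and pH as predictors, then thresholding), fitted to synthetic data; it is not the study's fitted model or coefficient set.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Synthetic training data: phycocyanin (relative fluorescence units), pH, and log10 microcystin (ug/L).
    n = 120
    phyco = rng.uniform(0.5, 20.0, n)
    ph = rng.uniform(7.5, 9.5, n)
    log_mc = -1.0 + 0.08 * phyco + 0.30 * (ph - 8.0) + rng.normal(0, 0.25, n)

    # Ordinary least squares via numpy's least-squares solver.
    X = np.column_stack([np.ones(n), phyco, ph])
    beta, *_ = np.linalg.lstsq(X, log_mc, rcond=None)

    def predict_exceedance(phyco_val, ph_val, threshold_ugL=6.0):
        """Predict microcystin and whether it exceeds the recreational advisory threshold."""
        pred = beta @ np.array([1.0, phyco_val, ph_val])
        return 10 ** pred, 10 ** pred > threshold_ugL

    print(predict_exceedance(15.0, 9.2))
    ```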

  6. Enhancement of regional wet deposition estimates based on modeled precipitation inputs

    Treesearch

    James A. Lynch; Jeffery W. Grimm; Edward S. Corbett

    1996-01-01

    Application of a variety of two-dimensional interpolation algorithms to precipitation chemistry data gathered at scattered monitoring sites, for the purpose of estimating precipitation-borne ionic inputs for specific points or regions, has failed to produce accurate estimates. The accuracy of these estimates is particularly poor in areas of high topographic relief....

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duberstein, Corey A.; Simmons, Mary Ann; Sackschewsky, Michael R.

    Mitigation threshold guidelines for the Hanford Site are based on habitat requirements of the sage sparrow (Amphispiza belli) and only apply to areas with a mature sagebrush (Artemisia tridentata) overstory and a native understory. The sage sparrow habitat requirements are based on literature values and are not specific to the Hanford Site. To refine these guidelines for the Site, a multi-year study was undertaken to quantify habitat characteristics of sage sparrow territories. These characteristics were then used to develop a habitat suitability index (HSI) model which can be used to estimate the habitat value of specific locations on the Site.

  8. Simulations in site error estimation for direction finders

    NASA Astrophysics Data System (ADS)

    López, Raúl E.; Passi, Ranjit M.

    1991-08-01

    The performance of an algorithm for the recovery of site-specific errors of direction finder (DF) networks is tested under controlled simulated conditions. The simulations show that the algorithm has some inherent shortcomings for the recovery of site errors from the measured azimuth data. These limitations are fundamental to the problem of site error estimation using azimuth information. Several ways of resolving or ameliorating these basic complications are tested by means of simulations. From these it appears that, for the effective implementation of the site error determination algorithm, one should design the networks with at least four DFs, improve the alignment of the antennas, and increase the gain of the DFs as much as is compatible with other operational requirements. The use of a nonzero initial estimate of the site errors when working with data from networks of four or more DFs also improves the accuracy of the site error recovery. Even for networks of three DFs, reasonable site error corrections could be obtained if the antennas could be well aligned.

  9. WHAT ARE THE BEST MEANS TO ASSESS SITES AND MOVE TOWARD CLOSURE, USING APPROPRIATE SITE SPECIFIC RISK EVALUATIONS?

    EPA Science Inventory

    To facilitate evaluation of existing site characterization data, ORD has developed on-line tools and models that integrate data and models into innovative applications. Forty calculators have been developed in four groups: parameter estimators, models, scientific demos and unit ...

  10. ESTIMATING THE RATE OF NATURAL BIOATTENUATION OF GROUND WATER CONTAMINANTS BY A MASS CONSERVATION APPROACH

    EPA Science Inventory

    Recent field and experimental research has shown that certain classes of subsurface contaminants can biodegrade at many sites. A number of site-specific factors influence the rate of biodegradation, which helps determine the ultimate extent of contamination at these sites. The...

  11. Empirical evidence for site coefficients in building code provisions

    USGS Publications Warehouse

    Borcherdt, R.D.

    2002-01-01

    Site-response coefficients, Fa and Fv, used in U.S. building code provisions are based on empirical data for motions up to 0.1 g. For larger motions they are based on theoretical and laboratory results. The Northridge earthquake of 17 January 1994 provided a significant new set of empirical data up to 0.5 g. These data together with recent site characterizations based on shear-wave velocity measurements provide empirical estimates of the site coefficients at base accelerations up to 0.5 g for Site Classes C and D. These empirical estimates of Fa and Fv, as well as their decrease with increasing base acceleration level, are consistent at the 95 percent confidence level with those in present building code provisions, with the exception of estimates for Fa at levels of 0.1 and 0.2 g, which are less than the lower confidence bound by amounts up to 13 percent. The site-coefficient estimates are consistent at the 95 percent confidence level with those of several other investigators for base accelerations greater than 0.3 g. These consistencies and present code procedures indicate that changes in the site coefficients are not warranted. Empirical results for base accelerations greater than 0.2 g confirm the need for both a short- and a mid- or long-period site coefficient to characterize site response for purposes of estimating site-specific design spectra.

  12. Comparison of electrofishing techniques to detect larval lampreys in wadeable streams in the Pacific Northwest

    USGS Publications Warehouse

    Dunham, Jason B.; Chelgren, Nathan D.; Heck, Michael P.; Clark, Steven M.

    2013-01-01

    We evaluated the probability of detecting larval lampreys using different methods of backpack electrofishing in wadeable streams in the U.S. Pacific Northwest. Our primary objective was to compare capture of lampreys using electrofishing with standard settings for salmon and trout to settings specifically adapted for capture of lampreys. Field work consisted of removal sampling by means of backpack electrofishing in 19 sites in streams representing a broad range of conditions in the region. Captures of lampreys at these sites were analyzed with a modified removal-sampling model and Bayesian estimation to measure the relative odds of capture using the lamprey-specific settings compared with the standard salmonid settings. We found that the odds of capture were 2.66 (95% credible interval, 0.87–78.18) times greater for the lamprey-specific settings relative to standard salmonid settings. When estimates of capture probability were applied to estimating the probabilities of detection, we found high (>0.80) detectability when the actual number of lampreys in a site was greater than 10 individuals and effort was at least two passes of electrofishing, regardless of the settings used. Further work is needed to evaluate key assumptions in our approach, including the evaluation of individual-specific capture probabilities and population closure. For now our results suggest comparable results are possible for detection of lampreys by using backpack electrofishing with salmonid- or lamprey-specific settings.
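    The detection probabilities quoted above follow from the per-pass capture probability under a removal-sampling framework. The sketch below evaluates that relation assuming population closure and equal, independent catchability across individuals and passes (assumptions the authors note need further evaluation); the numeric values are illustrative.

    ```python
    def detection_probability(n_individuals, capture_prob, n_passes):
        """P(detect at least one larva) assuming closure and equal, independent catchability."""
        p_miss_one = (1.0 - capture_prob) ** n_passes        # one individual missed on every pass
        return 1.0 - p_miss_one ** n_individuals             # at least one individual captured

    # Example: with 10 larvae present, a modest per-pass capture probability, and two passes,
    # detection is already high, consistent with the >0.80 detectability reported above.
    print(round(detection_probability(10, 0.15, 2), 3))
    ```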

  13. Quantifying the uncertainty in site amplification modeling and its effects on site-specific seismic-hazard estimation in the upper Mississippi embayment and adjacent areas

    USGS Publications Warehouse

    Cramer, C.H.

    2006-01-01

    The Mississippi embayment, located in the central United States, and its thick deposits of sediments (over 1 km in places) have a large effect on earthquake ground motions. Several previous studies have addressed how these thick sediments might modify probabilistic seismic-hazard maps. The high seismic hazard associated with the New Madrid seismic zone makes it particularly important to quantify the uncertainty in modeling site amplification to better represent earthquake hazard in seismic-hazard maps. The methodology of the Memphis urban seismic-hazard-mapping project (Cramer et al., 2004) is combined with the reference profile approach of Toro and Silva (2001) to better estimate seismic hazard in the Mississippi embayment. Improvements over previous approaches include using the 2002 national seismic-hazard model, fully probabilistic hazard calculations, calibration of site amplification with improved nonlinear soil-response estimates, and estimates of uncertainty. Comparisons are made with the results of several previous studies, and estimates of uncertainty inherent in site-amplification modeling for the upper Mississippi embayment are developed. I present new seismic-hazard maps for the upper Mississippi embayment with the effects of site geology incorporating these uncertainties.

  14. Using Multisite Experiments to Study Cross-Site Variation in Treatment Effects: A Hybrid Approach with Fixed Intercepts and A Random Treatment Coefficient

    ERIC Educational Resources Information Center

    Bloom, Howard S.; Raudenbush, Stephen W.; Weiss, Michael J.; Porter, Kristin

    2017-01-01

    The present article considers a fundamental question in evaluation research: "By how much do program effects vary across sites?" The article first presents a theoretical model of cross-site impact variation and a related estimation model with a random treatment coefficient and fixed site-specific intercepts. This approach eliminates…
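    The hybrid model described above combines fixed site-specific intercepts with a random treatment coefficient. A minimal sketch of fitting such a model with statsmodels on synthetic multisite data follows; the data-generating values, variable names, and sample sizes are illustrative assumptions, not the article's application.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(3)

    # Synthetic multisite trial: 20 sites, fixed site intercepts, and a treatment effect
    # that varies across sites (random coefficient), as in the hybrid model described above.
    sites, n_per_site = 20, 60
    rows = []
    for s in range(sites):
        site_intercept = rng.normal(50, 5)
        site_effect = 4.0 + rng.normal(0, 2.0)            # cross-site impact variation
        treat = rng.integers(0, 2, n_per_site)
        y = site_intercept + site_effect * treat + rng.normal(0, 8, n_per_site)
        rows.append(pd.DataFrame({"site": s, "treat": treat, "y": y}))
    data = pd.concat(rows, ignore_index=True)

    # Fixed site-specific intercepts via C(site); random treatment coefficient via re_formula.
    model = sm.MixedLM.from_formula("y ~ C(site) + treat", data=data,
                                    groups="site", re_formula="0 + treat")
    result = model.fit()
    print(result.params["treat"])   # average treatment effect across sites
    print(result.cov_re)            # estimated cross-site variance of the treatment effect
    ```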

  15. Rapid earthquake hazard and loss assessment for Euro-Mediterranean region

    NASA Astrophysics Data System (ADS)

    Erdik, Mustafa; Sesetyan, Karin; Demircioglu, Mine; Hancilar, Ufuk; Zulfikar, Can; Cakti, Eser; Kamer, Yaver; Yenidogan, Cem; Tuzun, Cuneyt; Cagnan, Zehra; Harmandar, Ebru

    2010-10-01

    The almost-real time estimation of ground shaking and losses after a major earthquake in the Euro-Mediterranean region was performed in the framework of the Joint Research Activity 3 (JRA-3) component of the EU FP6 Project entitled "Network of Research Infra-structures for European Seismology, NERIES". This project consists of finding the most likely location of the earthquake source by estimating the fault rupture parameters on the basis of rapid inversion of data from on-line regional broadband stations. It also includes an estimation of the spatial distribution of selected site-specific ground motion parameters at engineering bedrock through region-specific ground motion prediction equations (GMPEs) or physical simulation of ground motion. By using the Earthquake Loss Estimation Routine (ELER) software, the multi-level methodology developed for real time estimation of losses is capable of incorporating regional variability and sources of uncertainty stemming from GMPEs, fault finiteness, site modifications, inventory of physical and social elements subjected to earthquake hazard and the associated vulnerability relationships.

  16. Dose-response relationship between cigarette smoking and site-specific cancer risk: protocol for a systematic review with an original design combining umbrella and traditional reviews.

    PubMed

    Lugo, Alessandra; Bosetti, Cristina; Peveri, Giulia; Rota, Matteo; Bagnardi, Vincenzo; Gallus, Silvano

    2017-11-01

    Only a limited number of meta-analyses providing risk curve functions of dose-response relationships between various smoking-related variables and cancer-specific risk are available. To identify all relevant original publications on the issue, we will conduct a series of comprehensive systematic reviews based on three subsequent literature searches: (1) an umbrella review, to identify meta-analyses, pooled analyses and systematic reviews published before 28 April 2017 on the association between cigarette smoking and the risk of 28 (namely all) malignant neoplasms; (2) for each cancer site, an updated review of original publications on the association between cigarette smoking and cancer risk, starting from the last available comprehensive review identified through the umbrella review; and (3) a review of all original articles on the association between cigarette smoking and site-specific cancer risk included in the publications identified through the umbrella review and the updated reviews. The primary outcomes of interest will be (1) the excess incidence/mortality of various cancers for smokers compared with never smokers; and (2) the dose-response curves describing the association between smoking intensity, duration and time since stopping and incidence/mortality for various cancers. For each cancer site, we will perform a meta-analysis by pooling study-specific estimates for smoking status. We will also estimate the dose-response curves for other smoking-related variables through random-effects meta-regression models based on a non-linear dose-response relationship framework. Ethics approval is not required for this study. Main results will be published in peer-reviewed journals and will also be included in a publicly available website. We will therefore provide the most complete and updated estimates on the association between various measures of cigarette smoking and site-specific cancer risk. This will allow us to obtain precise estimates on the cancer burden attributable to cigarette smoking. This protocol was registered in the International Prospective Register of Systematic Reviews (CRD42017063991).
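    The protocol plans to pool study-specific estimates for smoking status before moving to dose-response meta-regression. A minimal sketch of standard random-effects pooling of log relative risks (DerSimonian-Laird) is below; the study-specific relative risks and standard errors are hypothetical placeholders, not results from the review.

    ```python
    import math

    def dersimonian_laird(log_rr, se):
        """Random-effects pooling of study-specific log relative risks (DerSimonian-Laird)."""
        w = [1.0 / s ** 2 for s in se]                        # fixed-effect weights
        fixed = sum(wi * yi for wi, yi in zip(w, log_rr)) / sum(w)
        q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, log_rr))
        c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
        tau2 = max(0.0, (q - (len(log_rr) - 1)) / c)          # between-study variance
        w_star = [1.0 / (s ** 2 + tau2) for s in se]
        pooled = sum(wi * yi for wi, yi in zip(w_star, log_rr)) / sum(w_star)
        se_pooled = math.sqrt(1.0 / sum(w_star))
        return (math.exp(pooled),
                math.exp(pooled - 1.96 * se_pooled),
                math.exp(pooled + 1.96 * se_pooled))

    # Hypothetical study-specific relative risks for smokers vs never smokers.
    log_rr = [math.log(x) for x in (2.1, 1.8, 2.6, 1.5)]
    se = [0.15, 0.20, 0.25, 0.30]
    print(dersimonian_laird(log_rr, se))   # pooled RR with a 95% confidence interval
    ```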

  17. Solid Cancer Incidence in the Techa River Incidence Cohort: 1956-2007.

    PubMed

    Davis, F G; Yu, K L; Preston, D; Epifanova, S; Degteva, M; Akleyev, A V

    2015-07-01

    Previously reported studies of the Techa River Cohort have established associations between radiation dose and the occurrence of solid cancers and leukemia (non-CLL) that appear to be linear in dose response. These analyses include 17,435 cohort members alive and not known to have had cancer prior to January 1, 1956, who lived in areas near the river or in Chelyabinsk City at some time between 1956 and the end of 2007; they utilized individualized dose estimates computed using the Techa River Dosimetry System 2009 and included five more years of follow-up. The median and mean dose estimates based on these doses are consistently higher than those based on earlier Techa River Dosimetry System 2000 dose estimates. This article includes new site-specific cancer risk estimates and risk estimates adjusted for available information on smoking. There is a statistically significant (P = 0.02) linear trend in the smoking-adjusted all-solid-cancer incidence risks, with an excess relative risk (ERR) of 0.077 per 100 mGy and a 95% confidence interval of 0.013-0.15. Examination of site-specific risks revealed statistically significant radiation dose effects only for cancers of the esophagus and uterus, with ERR per 100 mGy estimates in excess of 0.10. Esophageal cancer risk estimates were modified by ethnicity and sex, but not smoking. While the solid cancer rates are attenuated when esophageal cancer is removed (ERR = 0.063 per 100 mGy), a dose-response relationship is present and it remains likely that radiation exposure has increased the risks for most solid cancers in the cohort despite the lack of power to detect statistically significant risks for specific sites.
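    The reported risks come from a linear excess-relative-risk model. The sketch below only illustrates the model form, showing how an ERR of 0.077 per 100 mGy scales a baseline rate at selected doses; it is not a re-analysis of the cohort data.

    ```python
    def relative_rate(dose_mgy, err_per_100mgy=0.077):
        """Linear ERR model: rate(dose) = baseline * (1 + ERR_per_100mGy * dose / 100)."""
        return 1.0 + err_per_100mgy * dose_mgy / 100.0

    for dose in (50, 100, 250, 500):
        print(dose, "mGy ->", round(relative_rate(dose), 3), "times the baseline solid cancer rate")
    ```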

  18. An evaluation of methods for estimating decadal stream loads

    NASA Astrophysics Data System (ADS)

    Lee, Casey J.; Hirsch, Robert M.; Schwarz, Gregory E.; Holtschlag, David J.; Preston, Stephen D.; Crawford, Charles G.; Vecchia, Aldo V.

    2016-11-01

    Effective management of water resources requires accurate information on the mass, or load, of water-quality constituents transported from upstream watersheds to downstream receiving waters. Despite this need, no single method has been shown to consistently provide accurate load estimates among different water-quality constituents, sampling sites, and sampling regimes. We evaluate the accuracy of several load estimation methods across a broad range of sampling and environmental conditions. This analysis uses random sub-samples drawn from temporally-dense data sets of total nitrogen, total phosphorus, nitrate, and suspended-sediment concentration, and includes measurements of specific conductance, which was used as a surrogate for dissolved solids concentration. Methods considered include linear interpolation and ratio estimators, regression-based methods historically employed by the U.S. Geological Survey, and newer flexible techniques including Weighted Regressions on Time, Season, and Discharge (WRTDS) and a generalized non-linear additive model. No single method is identified to have the greatest accuracy across all constituents, sites, and sampling scenarios. Most methods provide accurate estimates of specific conductance (used as a surrogate for total dissolved solids or specific major ions) and total nitrogen; lower accuracy is observed for the estimation of nitrate, total phosphorus and suspended sediment loads. Methods that allow for flexibility in the relation between concentration and flow conditions, specifically Beale's ratio estimator and WRTDS, exhibit greater estimation accuracy and lower bias. Evaluation of methods across simulated sampling scenarios indicates that (1) high-flow sampling is necessary to produce accurate load estimates, (2) extrapolation of sample data through time or across more extreme flow conditions reduces load estimate accuracy, and (3) WRTDS and methods that use a Kalman filter or smoothing to correct for departures between individual modeled and observed values benefit most from more frequent water-quality sampling.
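    Beale's ratio estimator is one of the better-performing methods named above. A minimal, unstratified sketch of it follows, in which the bias-corrected ratio of sampled loads to sampled flows is scaled by the mean flow over the whole estimation period; stratification by flow condition, as typically applied in practice, is omitted, and the sample values are hypothetical.

    ```python
    import numpy as np

    def beale_mean_daily_load(sample_load, sample_flow, mean_flow_all_days):
        """Unstratified Beale ratio estimator of mean daily load (e.g., kg/day).

        sample_load: daily loads (concentration * flow) on sampled days
        sample_flow: daily mean flows on the same sampled days
        mean_flow_all_days: mean daily flow over the full estimation period
        """
        load = np.asarray(sample_load, dtype=float)
        flow = np.asarray(sample_flow, dtype=float)
        n = load.size
        ml, mq = load.mean(), flow.mean()
        s_lq = np.cov(load, flow, ddof=1)[0, 1]
        s_qq = flow.var(ddof=1)
        bias_correction = (1.0 + s_lq / (n * ml * mq)) / (1.0 + s_qq / (n * mq * mq))
        return mean_flow_all_days * (ml / mq) * bias_correction

    # Hypothetical sampled days (loads in kg/day, flows in m^3/s) and a period-mean flow.
    loads = [120, 300, 80, 950, 60, 400]
    flows = [10, 22, 8, 60, 6, 30]
    print(round(beale_mean_daily_load(loads, flows, mean_flow_all_days=18.0), 1), "kg/day")
    ```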

  19. An evaluation of methods for estimating decadal stream loads

    USGS Publications Warehouse

    Lee, Casey; Hirsch, Robert M.; Schwarz, Gregory E.; Holtschlag, David J.; Preston, Stephen D.; Crawford, Charles G.; Vecchia, Aldo V.

    2016-01-01

    Effective management of water resources requires accurate information on the mass, or load, of water-quality constituents transported from upstream watersheds to downstream receiving waters. Despite this need, no single method has been shown to consistently provide accurate load estimates among different water-quality constituents, sampling sites, and sampling regimes. We evaluate the accuracy of several load estimation methods across a broad range of sampling and environmental conditions. This analysis uses random sub-samples drawn from temporally-dense data sets of total nitrogen, total phosphorus, nitrate, and suspended-sediment concentration, and includes measurements of specific conductance, which was used as a surrogate for dissolved solids concentration. Methods considered include linear interpolation and ratio estimators, regression-based methods historically employed by the U.S. Geological Survey, and newer flexible techniques including Weighted Regressions on Time, Season, and Discharge (WRTDS) and a generalized non-linear additive model. No single method is identified to have the greatest accuracy across all constituents, sites, and sampling scenarios. Most methods provide accurate estimates of specific conductance (used as a surrogate for total dissolved solids or specific major ions) and total nitrogen; lower accuracy is observed for the estimation of nitrate, total phosphorus and suspended sediment loads. Methods that allow for flexibility in the relation between concentration and flow conditions, specifically Beale's ratio estimator and WRTDS, exhibit greater estimation accuracy and lower bias. Evaluation of methods across simulated sampling scenarios indicates that (1) high-flow sampling is necessary to produce accurate load estimates, (2) extrapolation of sample data through time or across more extreme flow conditions reduces load estimate accuracy, and (3) WRTDS and methods that use a Kalman filter or smoothing to correct for departures between individual modeled and observed values benefit most from more frequent water-quality sampling.

  20. How to estimate site index for oaks in the Missouri Ozarks

    Treesearch

    Robert A. McQuilkin

    1978-01-01

    How well does a certain tree species grow on a specific tract of land? Foresters traditionally answer this question in terms of "site index"--the average height of dominant and codominant trees at age 50 years in fully stocked, even-aged stands. Site index is widely used as an index of site quality because it is easy to measure and because it correlates well...

  1. Errors in Representing Regional Acid Deposition with Spatially Sparse Monitoring: Case Studies of the Eastern US Using Model Predictions

    EPA Science Inventory

    The current study uses case studies of model-estimated regional precipitation and wet ion deposition to estimate errors in corresponding regional values derived from the means of site-specific values within regions of interest located in the eastern US. The mean of model-estimate...

  2. Recharge at the Hanford Site: Status report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gee, G.W.

    A variety of field programs designed to evaluate recharge and other water balance components, including precipitation, infiltration, evaporation, and water storage changes, have been carried out at the Hanford Site since 1970. Data from these programs have indicated that a wide range of recharge rates can occur depending upon specific site conditions. Present evidence suggests that minimum recharge occurs where soils are fine-textured and surfaces are vegetated with deep-rooted plants. Maximum recharge occurs where coarse soils or gravels exist at the surface and soils are kept bare. Recharge can occur in areas where shallow-rooted plants dominate the surface, particularly where soils are coarse-textured. Recharge estimates have been made for the site using simulation models. A US Geological Survey model that attempts to account for climate variability, soil storage parameters, and plant factors has calculated recharge values ranging from near zero to an average of about 1 cm/yr for the Hanford Site. UNSAT-H, a deterministic model developed for the site, appears to be the best code available for estimating recharge on a site-specific basis. Appendix I contains precipitation data from January 1979 to June 1987. 42 refs., 11 figs., 11 tabs.

  3. A-BOMB SURVIVOR SITE-SPECIFIC RADIOGENIC CANCER RISKS ESTIMATES

    EPA Science Inventory

    A draft manuscript is being prepared that describes ways to improve estimates of risk from radiation that have been derived from A-bomb survivors. The work has been published in the journal Radiation Research volume 169, pages 87-98.

  4. Estimating national crop yield potential and the relevance of weather data sources

    NASA Astrophysics Data System (ADS)

    Van Wart, Justin

    2011-12-01

    To determine where, when, and how to increase yields, researchers often analyze the yield gap (Yg), the difference between actual current farm yields and crop yield potential. Crop yield potential (Yp) is the yield of a crop cultivar grown under specific management, limited only by temperature and solar radiation, or additionally by precipitation in the case of water-limited yield potential (Yw). Yp and Yw are critical components of Yg estimations but are very difficult to quantify, especially at larger scales, because management data and especially daily weather data are scarce. A protocol was developed to estimate Yp and Yw at national scales using site-specific weather, soils and management data. Protocol procedures and inputs were evaluated to determine how to improve the accuracy of Yp, Yw and Yg estimates. The protocol was also used to evaluate raw, site-specific and gridded weather database sources for use in simulations of Yp or Yw. The protocol was applied to estimate crop Yp in US irrigated maize and Chinese irrigated rice and Yw in US rainfed maize and German rainfed wheat. These crops and countries account for >20% of global cereal production. The results have significant implications for past and future studies of Yp, Yw and Yg. Accuracy of national long-term average Yp and Yw estimates was significantly improved if (i) > 7 years of simulations were performed for irrigated sites and > 15 years for rainfed sites, (ii) > 40% of nationally harvested area was within 100 km of all simulation sites, (iii) observed weather data coupled with satellite-derived solar radiation data were used in simulations, and (iv) planting and harvesting dates were specified within +/- 7 days of farmers' actual practices. These are much higher standards than have been applied in national estimates of Yp and Yw, and this protocol is a substantial step in making such estimates more transparent, robust, and straightforward. Finally, this protocol may be a useful tool for understanding yield trends and directing research and development efforts aimed at providing for a secure and stable future food supply.

  5. SITE-SPECIFIC MEASUREMENTS OF RESIDENTIAL RADON PROTECTION CATEGORY

    EPA Science Inventory

    The report describes a series of benchmark measurements of soil radon potential at seven Florida sites and compares the measurements with regional estimates of radon potential from the Florida radon protection map. The measurements and map were developed under the Florida Radon R...

  6. Sensitivity analysis of the near-road dispersion model RLINE - An evaluation at Detroit, Michigan

    NASA Astrophysics Data System (ADS)

    Milando, Chad W.; Batterman, Stuart A.

    2018-05-01

    The development of accurate and appropriate exposure metrics for health effect studies of traffic-related air pollutants (TRAPs) remains challenging and important given that traffic has become the dominant urban exposure source and that exposure estimates can affect estimates of associated health risk. Exposure estimates obtained using dispersion models can overcome many of the limitations of monitoring data, and such estimates have been used in several recent health studies. This study examines the sensitivity of exposure estimates produced by dispersion models to meteorological, emission and traffic allocation inputs, focusing on applications to health studies examining near-road exposures to TRAP. Daily average concentrations of CO and NOx predicted using the Research Line source model (RLINE) and a spatially and temporally resolved mobile source emissions inventory are compared to ambient measurements at near-road monitoring sites in Detroit, MI, and are used to assess the potential for exposure measurement error in cohort and population-based studies. Sensitivity of exposure estimates is assessed by comparing nominal and alternative model inputs using statistical performance evaluation metrics and three sets of receptors. The analysis shows considerable sensitivity to meteorological inputs; generally the best performance was obtained using data specific to each monitoring site. An updated emission factor database provided some improvement, particularly at near-road sites, while the use of site-specific diurnal traffic allocations did not improve performance compared to simpler default profiles. Overall, this study highlights the need for appropriate inputs, especially meteorological inputs, to dispersion models aimed at estimating near-road concentrations of TRAPs. It also highlights the potential for systematic biases that might affect analyses that use concentration predictions as exposure measures in health studies.

  7. Site Amplification Characteristics of the Several Seismic Stations at Jeju Island, in Korea, using S-wave Energy, Background Noise, and Coda waves from the East Japan earthquake (Mar. 11th, 2011) Series.

    NASA Astrophysics Data System (ADS)

    Seong-hwa, Y.; Wee, S.; Kim, J.

    2016-12-01

    Observed ground motions are composed of three main factors: seismic source, seismic wave attenuation, and site amplification. Among these, site amplification is an important factor and should be considered to estimate soil-structure dynamic interaction with more reliability. Though various estimation methods have been suggested, this study used the method of Castro et al. (1997) for estimating site amplification. This method has recently been extended to background noise, coda waves and S waves for estimating site amplification. This study applied the Castro et al. (1997) method to three different types of seismic waves, that is, S-wave energy, background noise, and coda waves. The study analysed more than 200 ground-motion (acceleration) records from the East Japan earthquake (March 11, 2011) series at seismic stations on Jeju Island (JJU, SGP, HALB, SSP and GOS; Fig. 1), in Korea. The results showed that most of the seismic stations gave similar results among the three types of seismic energy. Each station showed its own site amplification characteristics in low, high and specific resonance frequency ranges. Comparison of this study to other studies can give us much information about the dynamic amplification characteristics of domestic sites and about site classification.

  8. Diurnal variations of the Martian surface layer meteorological parameters during the first 45 sols at two Viking Lander sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sutton, J.L.; Leovy, C.B.; Tillman, J.E.

    1978-12-01

    Wind speed, ambient and surface temperatures from both Viking Landers have been used to compute bulk Richardson numbers and Monin-Obukhov lengths during the earliest phase of the Mars missions. These parameters are used to estimate drag and heat transfer coefficients, friction velocities and surface heat fluxes at the two sites. The principal uncertainty is in the specification of the roughness length. Maximum heat fluxes occur near local noon at both sites, and are estimated to be in the range 15--20 W m⁻² at the Viking 1 site and 10--15 W m⁻² at the Viking 2 site. Maximum values of friction velocity occur in late morning at Viking 1 and are estimated to be 0.4--0.6 m s⁻¹. They occur shortly after dawn at the Viking 2 site, where peak values are estimated to be in the range 0.25--0.35 m s⁻¹. Extension of these calculations to later times during the mission will require allowance for dust opacity effects in the estimation of surface temperature and in the correction of radiation errors of the Viking 2 temperature sensor.
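    The analysis above computes bulk Richardson numbers and friction velocities from wind speed and air/surface temperatures. The sketch below evaluates the bulk Richardson number and a neutral-limit friction velocity from a log wind profile with Mars gravity and hypothetical measurement values; the roughness length is the main assumed quantity, as the abstract itself notes.

    ```python
    import math

    G_MARS = 3.71          # m/s^2
    VON_KARMAN = 0.4

    def bulk_richardson(theta_air, theta_surface, wind_speed, z):
        """Bulk Richardson number between the surface and measurement height z (m)."""
        theta_mean = 0.5 * (theta_air + theta_surface)
        return G_MARS * (theta_air - theta_surface) * z / (theta_mean * wind_speed ** 2)

    def friction_velocity_neutral(wind_speed, z, z0):
        """Neutral-stability friction velocity from a log wind profile; z0 is the roughness length."""
        return VON_KARMAN * wind_speed / math.log(z / z0)

    # Hypothetical midday values: sensor at 1.6 m above a strongly heated surface.
    print(round(bulk_richardson(theta_air=230.0, theta_surface=260.0, wind_speed=4.0, z=1.6), 3))
    print(round(friction_velocity_neutral(wind_speed=4.0, z=1.6, z0=0.01), 2), "m/s")
    ```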

  9. Utility theoretic approach to estimating the demand for and benefits from recreational fishing: the impact of acid rain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaw, W.D. Jr.

    1985-01-01

    The Adirondack State Park has been hit especially hard by acid precipitation. Acid precipitation impacts particular species of fish at various high-altitude sites in the park. The author estimates consumer surplus measures for changes in a variable used to proxy the stock size of these species at specific sites. To do this he first estimates the individual's demand for a recreation site as a function of site prices and the site's characteristics. The demand function for the individual is derived from a utility function. A travel cost approach is used to estimate an individual's share of total fishing time spent at five fishing sites. The shares are estimated by maximum likelihood, and the results indicate that price and the three characteristics do explain the allocation of the individual's time spent at the various sites selected for the analysis. Finally, consumer surplus measures for a reduction in the catch rates of the species most likely to be affected by acid precipitation are calculated. The meaning of these measures in the context of a model that assumes weak separability is examined. These reductions in catch rates can be linked to changes in the level of acid precipitation in the park, and this provides a method for quantifying the impact of acid precipitation on recreational fishing.

  10. Cancer incidence attributable to inadequate physical activity in Alberta in 2012

    PubMed Central

    Brenner, Darren R.; Poirier, Abbey E.; Grundy, Anne; Khandwala, Farah; McFadden, Alison; Friedenreich, Christine M.

    2017-01-01

    Background: Physical inactivity has been consistently associated with increased risk of colorectal, endometrial, breast (in postmenopausal women), prostate, lung and ovarian cancers. The objective of the current analysis was to estimate the proportion and absolute number of site-specific cancer cases attributable to inadequate physical activity in Alberta in 2012. Methods: We used population attributable risks to estimate the proportion of each site-specific cancer attributable to inactivity. Relative risk estimates were obtained from the epidemiological literature, and prevalence estimates were calculated with the use of data from the Canadian Community Health Survey cycle 2.1 (2003). Respondents who acquired 1.5-2.9 kcal/kg per day and less than 1.5 kcal/kg per day of physical activity were classified as moderately active and inactive, respectively, and both levels were considered inadequate for mitigating cancer risks. We obtained age-, sex- and site-specific cancer incidence data from the Alberta Cancer Registry for 2012. Results: About 59%-75% of men and 69%-78% of women did not engage in adequate physical activity. Overall, 13.8% of cancers across all associated cancers were estimated to be attributable to inadequate physical activity, representing 7.2% of all cancers diagnosed in Alberta in 2012. Suboptimal levels of physical activity had a greater impact among women: the proportion of all associated cancers attributable to inadequate physical activity was 18.3% for women and 9.9% for men. Interpretation: A substantial proportion of cancer cases diagnosed in Alberta were estimated to be attributable to inadequate physical activity. With the high prevalence of physical inactivity among adults in the province, developing strategies to increase physical activity levels could have a notable impact on reducing future cancer burden in Alberta. PMID:28468830
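    The analysis above rests on population attributable risks. The sketch below applies Levin's standard formula for multiple exposure categories; the prevalence and relative-risk values are hypothetical placeholders rather than the Alberta inputs, and attributable cases follow by multiplying the fraction by a site's incidence count.

    ```python
    def population_attributable_fraction(prevalences, relative_risks):
        """Levin's formula for multiple exposure categories relative to the fully active group."""
        excess = sum(p * (rr - 1.0) for p, rr in zip(prevalences, relative_risks))
        return excess / (1.0 + excess)

    # Hypothetical values: 40% inactive (RR 1.3) and 30% only moderately active (RR 1.1)
    # for a given cancer site; attributable cases follow from the site's incidence count.
    paf = population_attributable_fraction([0.40, 0.30], [1.3, 1.1])
    incident_cases = 2000
    print(f"PAF = {paf:.1%}, attributable cases ~ {paf * incident_cases:.0f}")
    ```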

  11. External quality-assurance results for the National Atmospheric Deposition Program/National Trends Network during 1991

    USGS Publications Warehouse

    Nilles, M.A.; Gordon, J.D.; Schroder, L.J.; Paulin, C.E.

    1995-01-01

    The U.S. Geological Survey used four programs in 1991 to provide external quality assurance for the National Atmospheric Deposition Program/National Trends Network (NADP/NTN). An intersite-comparison program was used to evaluate onsite pH and specific-conductance determinations. The effects of routine sample handling, processing, and shipping of wet-deposition samples on analyte determinations and an estimated precision of analyte values and concentrations were evaluated in the blind-audit program. Differences between analytical results and an estimate of the analytical precision of four laboratories routinely measuring wet deposition were determined by an interlaboratory-comparison program. Overall precision estimates for the precipitation-monitoring system were determined for selected sites by a collocated-sampler program. Results of the intersite-comparison program indicated that 93 and 86 percent of the site operators met the NADP/NTN accuracy goal for pH determinations during the two intersite-comparison studies completed during 1991. The results also indicated that 96 and 97 percent of the site operators met the NADP/NTN accuracy goal for specific-conductance determinations during the two 1991 studies. The effects of routine sample handling, processing, and shipping, determined in the blind-audit program, indicated significant positive bias (α = 0.01) for calcium, magnesium, sodium, potassium, chloride, nitrate, and sulfate. Significant negative bias (α = 0.01) was determined for hydrogen ion and specific conductance. Only ammonium determinations were not biased. A Kruskal-Wallis test indicated that there were no significant (α = 0.01) differences in analytical results from the four laboratories participating in the interlaboratory-comparison program. Results from the collocated-sampler program indicated the median relative error for cation concentration and deposition exceeded eight percent at most sites, whereas the median relative error for sample volume, sulfate, and nitrate concentration at all sites was less than four percent. The median relative error for hydrogen ion concentration and deposition ranged from 4.6 to 18.3 percent at the four sites and, as indicated in previous years of the study, was inversely proportional to the acidity of the precipitation at a given site. Overall, collocated-sampling error typically was five times that of laboratory error estimates for most analytes.

  12. Preliminary Evaluation of Method to Monitor Landfills Resilience against Methane Emission

    NASA Astrophysics Data System (ADS)

    Chusna, Noor Amalia; Maryono, Maryono

    2018-02-01

    Methane emission from landfill sites contributes to global warming, and improper methane treatment can pose an explosion hazard. Stakeholders and city governments in Indonesia have found it difficult to monitor the resilience of landfills against methane emission, and the management of methane gas has long been a challenging issue for waste management services and operations. Landfills are a significant contributor to anthropogenic methane emissions. This study conducted a preliminary evaluation of methods to manage methane gas emission by assessing the LandGEM and IPCC methods. From this preliminary evaluation, the study found that the IPCC method is based on the availability of current and historical country-specific data on the waste disposed of in landfills, whereas LandGEM is an automated tool for estimating emission rates of total landfill gas, accounting for methane, carbon dioxide, and other gases. LandGEM can be used either with site-specific data to estimate emissions at a site or with default parameters if no site-specific data are available. Both methods could be used to monitor methane emission from landfill sites in the cities of Central Java.
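
    The kind of first-order decay calculation that LandGEM-style tools automate can be sketched briefly. The decay rate k, methane generation potential L0, and waste tonnages below are placeholder values, not LandGEM or IPCC defaults.

        import math

        # First-order decay sketch of landfill methane generation (LandGEM-style).
        # k, L0, and the yearly waste tonnages are hypothetical placeholders.
        k = 0.05      # decay rate, 1/yr (hypothetical)
        L0 = 100.0    # CH4 generation potential, m3 CH4 per Mg of waste (hypothetical)
        waste_by_year = {2015: 50000, 2016: 55000, 2017: 60000}  # Mg accepted per year

        def methane_generation(target_year):
            """Sum each year's waste contribution, decayed to the target year."""
            total = 0.0
            for year, mass in waste_by_year.items():
                age = target_year - year
                if age >= 0:
                    total += k * L0 * mass * math.exp(-k * age)
            return total  # m3 CH4 per year

        print("Estimated CH4 generation in 2018: %.0f m3/yr" % methane_generation(2018))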

  13. The time-course of protection of the RTS,S vaccine against malaria infections and clinical disease.

    PubMed

    Penny, Melissa A; Pemberton-Ross, Peter; Smith, Thomas A

    2015-11-04

    Recent publications have reported follow-up of the RTS,S/AS01 malaria vaccine candidate Phase III trials at 11 African sites for 32 months (or longer). This includes site- and time-specific estimates of incidence and efficacy against clinical disease with four different vaccination schedules. These data allow estimation of the time-course of protection against infection associated with two different ages of vaccination, both with and without a booster dose. Using an ensemble of individual-based stochastic models, each trial cohort in the Phase III trial was simulated assuming many different hypothetical profiles for the vaccine efficacy against infection in time, for both the primary course and boosting dose and including the potential for either exponential or non-exponential decay. The underlying profile of protection was determined by Bayesian fitting of these model predictions to the site- and time-specific incidence of clinical malaria over 32 months (or longer) of follow-up. Using the same stochastic models, projections of clinical efficacy in each of the sites were modelled and compared to available observed trial data. The initial protection of RTS,S immediately following three doses is estimated as providing an efficacy against infection of 65 % (when immunizing infants aged 6-12 weeks old) and 91 % (immunizing children aged 5-17 months old at first vaccination). This protection decays relatively rapidly, with an approximately exponential decay for the 6-12 weeks old cohort (with a half-life of 7.2 months); for the 5-17 months old cohort a biphasic decay with a similar half-life is predicted, with an initial rapid decay followed by a slower decay. The boosting dose was estimated to return protection to an efficacy against infection of 50-55 % for both cohorts. Estimates of clinical efficacy by trial site are consistent with those reported in the trial for all cohorts. The site- and time-specific clinical observations from the RTS,S/AS01 trial data allowed a reasonably precise estimation of the underlying vaccine protection against infection which is consistent with common underlying efficacy and decay rates across the trial sites. This calibration suggests that the decay in efficacy against clinical disease is more rapid than that against infection because of age-shifts in the incidence of disease. The dynamical models predict that clinical effectiveness will continue to decay and that likely effects beyond the time-scale of the trial will be small.
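
    The exponential and biphasic decay profiles discussed above can be written compactly. The sketch below takes the reported initial efficacies (65% and 91%) and the 7.2-month half-life from the abstract, but the biphasic mixing weight and slow-phase half-life are hypothetical, since those values are not given here.

        import math

        # Sketch of exponential vs. biphasic decay of efficacy against infection.
        # The v0 values and the 7.2-month half-life come from the abstract; the
        # biphasic weight w and the slow-phase half-life are hypothetical.

        def exponential_efficacy(t_months, v0=0.65, half_life=7.2):
            return v0 * math.exp(-math.log(2) * t_months / half_life)

        def biphasic_efficacy(t_months, v0=0.91, w=0.6, hl_fast=7.2, hl_slow=36.0):
            fast = math.exp(-math.log(2) * t_months / hl_fast)
            slow = math.exp(-math.log(2) * t_months / hl_slow)
            return v0 * (w * fast + (1 - w) * slow)

        for t in (0, 6, 12, 24, 32):
            print(t, round(exponential_efficacy(t), 2), round(biphasic_efficacy(t), 2))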

  14. A logistic regression equation for estimating the probability of a stream flowing perennially in Massachusetts

    USGS Publications Warehouse

    Bent, Gardner C.; Archfield, Stacey A.

    2002-01-01

    A logistic regression equation was developed for estimating the probability of a stream flowing perennially at a specific site in Massachusetts. The equation provides city and town conservation commissions and the Massachusetts Department of Environmental Protection with an additional method for assessing whether streams are perennial or intermittent at a specific site in Massachusetts. This information is needed to assist these environmental agencies, who administer the Commonwealth of Massachusetts Rivers Protection Act of 1996, which establishes a 200-foot-wide protected riverfront area extending along the length of each side of the stream from the mean annual high-water line along each side of perennial streams, with exceptions in some urban areas. The equation was developed by relating the verified perennial or intermittent status of a stream site to selected basin characteristics of naturally flowing streams (no regulation by dams, surface-water withdrawals, ground-water withdrawals, diversion, waste-water discharge, and so forth) in Massachusetts. Stream sites used in the analysis were identified as perennial or intermittent on the basis of review of measured streamflow at sites throughout Massachusetts and on visual observation at sites in the South Coastal Basin, southeastern Massachusetts. Measured or observed zero flow(s) during months of extended drought as defined by the 310 Code of Massachusetts Regulations (CMR) 10.58(2)(a) were not considered when designating the perennial or intermittent status of a stream site. The database used to develop the equation included a total of 305 stream sites (84 intermittent- and 89 perennial-stream sites in the State, and 50 intermittent- and 82 perennial-stream sites in the South Coastal Basin). Stream sites included in the database had drainage areas that ranged from 0.14 to 8.94 square miles in the State and from 0.02 to 7.00 square miles in the South Coastal Basin. Results of the logistic regression analysis indicate that the probability of a stream flowing perennially at a specific site in Massachusetts can be estimated as a function of (1) drainage area (cube root), (2) drainage density, (3) areal percentage of stratified-drift deposits (square root), (4) mean basin slope, and (5) location in the South Coastal Basin or the remainder of the State. Although the equation developed provides an objective means for estimating the probability of a stream flowing perennially at a specific site, the reliability of the equation is constrained by the data used to develop the equation. The equation may not be reliable for (1) drainage areas less than 0.14 square mile in the State or less than 0.02 square mile in the South Coastal Basin, (2) streams with losing reaches, or (3) streams draining the southern part of the South Coastal Basin and the eastern part of the Buzzards Bay Basin and the entire area of Cape Cod and the Islands Basins.
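
    The abstract lists the predictors of the logistic equation but not its fitted coefficients, so the sketch below shows only the structural form of such a model, with made-up coefficients.

        import math

        # Structural sketch of a perennial-stream logistic regression.
        # Coefficients b0..b5 are hypothetical, not the published values.
        def prob_perennial(drainage_area_mi2, drainage_density, pct_stratified_drift,
                           mean_basin_slope, in_south_coastal_basin):
            b0, b1, b2, b3, b4, b5 = -2.0, 1.5, 0.3, 0.2, 0.1, -0.8  # hypothetical
            x = (b0
                 + b1 * drainage_area_mi2 ** (1.0 / 3.0)   # cube root of drainage area
                 + b2 * drainage_density
                 + b3 * math.sqrt(pct_stratified_drift)    # square root of percent stratified drift
                 + b4 * mean_basin_slope
                 + b5 * (1 if in_south_coastal_basin else 0))
            return 1.0 / (1.0 + math.exp(-x))

        print("P(perennial) = %.2f" % prob_perennial(2.5, 1.8, 20.0, 5.0, False))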

  15. On the applicability of surrogate-based Markov chain Monte Carlo-Bayesian inversion to the Community Land Model: Case studies at flux tower sites: SURROGATE-BASED MCMC FOR CLM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan

    2016-07-04

    The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions, respectively. In our previous work, a subset of hydrological parameters has been identified to have significant impact on surface energy fluxes at selected flux tower sites based on parameter screening and sensitivity analysis, which indicate that the parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesian model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of uncertainty of the parameters being inverted, conditional on climatologically-average latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. Analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.

  16. On the applicability of surrogate-based MCMC-Bayesian inversion to the Community Land Model: Case studies at Flux tower sites

    DOE PAGES

    Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan; ...

    2016-06-01

    The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions, respectively. In our previous work, a subset of hydrological parameters has been identified to have significant impact on surface energy fluxes at selected flux tower sites based on parameter screening and sensitivity analysis, which indicate that the parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesian model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of uncertainty of the parameters being inverted, conditional on climatologically average latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. As a result, analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.

  17. On the applicability of surrogate-based MCMC-Bayesian inversion to the Community Land Model: Case studies at Flux tower sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan

    The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions, respectively. In our previous work, a subset of hydrological parameters has been identified to have significant impact on surface energy fluxes at selected flux tower sites based on parameter screening and sensitivity analysis, which indicate that the parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesian model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of uncertainty of the parameters being inverted, conditional on climatologically average latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. As a result, analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.

  18. On the applicability of surrogate-based Markov chain Monte Carlo-Bayesian inversion to the Community Land Model: Case studies at flux tower sites

    NASA Astrophysics Data System (ADS)

    Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan; Ren, Huiying; Liu, Ying; Swiler, Laura

    2016-07-01

    The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions, respectively. In our previous work, a subset of hydrological parameters has been identified to have significant impact on surface energy fluxes at selected flux tower sites based on parameter screening and sensitivity analysis, which indicate that the parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesian model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of uncertainty of the parameters being inverted, conditional on climatologically average latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. Analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.
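
    The workflow common to the four records above (a fast surrogate of CLM sampled with Markov chain Monte Carlo to obtain a posterior conditioned on climatological latent heat flux) can be reduced to a toy one-parameter example. The quadratic surrogate, prior bounds, observation, and error below are hypothetical stand-ins, not the CLM emulator or flux-tower data.

        import math
        import random

        # Toy Metropolis sampler calibrating one parameter of a hypothetical surrogate
        # model against a climatologically averaged latent heat flux observation.
        # The surrogate form, prior bounds, observation, and error are all made up.
        random.seed(0)

        def surrogate_lh_flux(theta):
            # Hypothetical quadratic surrogate standing in for a CLM emulator.
            return 60.0 + 25.0 * theta - 10.0 * theta ** 2

        obs_flux, obs_sigma = 75.0, 5.0   # hypothetical observed mean flux (W/m2) and error

        def log_posterior(theta):
            if not 0.0 <= theta <= 2.0:   # uniform prior bounds (hypothetical)
                return float("-inf")
            resid = (surrogate_lh_flux(theta) - obs_flux) / obs_sigma
            return -0.5 * resid ** 2      # Gaussian likelihood, flat prior inside bounds

        samples, theta = [], 1.0
        for _ in range(20000):
            proposal = theta + random.gauss(0.0, 0.1)
            log_alpha = log_posterior(proposal) - log_posterior(theta)
            if log_alpha >= 0 or random.random() < math.exp(log_alpha):
                theta = proposal
            samples.append(theta)

        post = sorted(samples[5000:])     # drop burn-in
        median = post[len(post) // 2]
        lo, hi = post[int(0.025 * len(post))], post[int(0.975 * len(post))]
        print("posterior median %.2f, 95%% credibility interval (%.2f, %.2f)" % (median, lo, hi))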

  19. Estimation of roughness coefficients for natural stream channels with vegetated banks

    USGS Publications Warehouse

    Coon, William F.

    1998-01-01

    Roughness coefficients for 21 stream sites in New York state are presented. The site-specific relation between roughness coefficent and flow depth varies in a predictable manner, depending on energy gradient, relative smoothness (Rd50), and channel-vegetation density. The percentage of wetted perimeter that is vegetated is a useful indicator of when streambank vegetation can affect the roughness coefficient. To estimate the magnitude of this effect requires evaluation of the density and percent of submergence of vegetation.

  20. Estimating site occupancy and detection probability parameters for meso- and large mammals in a coastal ecosystem

    USGS Publications Warehouse

    O'Connell, Allan F.; Talancy, Neil W.; Bailey, Larissa L.; Sauer, John R.; Cook, Robert; Gilbert, Andrew T.

    2006-01-01

    Large-scale, multispecies monitoring programs are widely used to assess changes in wildlife populations but they often assume constant detectability when documenting species occurrence. This assumption is rarely met in practice because animal populations vary across time and space. As a result, detectability of a species can be influenced by a number of physical, biological, or anthropogenic factors (e.g., weather, seasonality, topography, biological rhythms, sampling methods). To evaluate some of these influences, we estimated site occupancy rates using species-specific detection probabilities for meso- and large terrestrial mammal species on Cape Cod, Massachusetts, USA. We used model selection to assess the influence of different sampling methods and major environmental factors on our ability to detect individual species. Remote cameras detected the most species (9), followed by cubby boxes (7) and hair traps (4) over a 13-month period. Estimated site occupancy rates were similar among sampling methods for most species when detection probabilities exceeded 0.15, but we question estimates obtained from methods with detection probabilities between 0.05 and 0.15, and we consider methods with lower probabilities unacceptable for occupancy estimation and inference. Estimated detection probabilities can be used to accommodate variation in sampling methods, which allows for comparison of monitoring programs using different protocols. Vegetation and seasonality produced species-specific differences in detectability and occupancy, but differences were not consistent within or among species, which suggests that our results should be considered in the context of local habitat features and life history traits for the target species. We believe that site occupancy is a useful state variable and suggest that monitoring programs for mammals using occupancy data consider detectability prior to making inferences about species distributions or population change.
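
    A minimal single-season occupancy model with constant occupancy (psi) and detection probability (p) can be fit by maximizing the usual zero-inflated binomial likelihood, which is the type of model underlying the approach described above. The detection histories below are fabricated and do not come from the Cape Cod data.

        import math
        from itertools import product

        # Minimal single-season occupancy model: constant psi and p, fit by grid search.
        # Detection histories (1 = detected, 0 = not detected) are fabricated.
        histories = [
            [1, 0, 1], [0, 0, 0], [1, 1, 0], [0, 0, 0], [0, 1, 0],
            [0, 0, 0], [1, 1, 1], [0, 0, 0], [1, 0, 0], [0, 0, 0],
        ]

        def log_likelihood(psi, p):
            ll = 0.0
            for h in histories:
                k, d = len(h), sum(h)
                prob = psi * p ** d * (1 - p) ** (k - d)
                if d == 0:
                    prob += 1 - psi   # all-zero history: occupied-but-missed or truly absent
                ll += math.log(prob)
            return ll

        grid = [i / 100 for i in range(1, 100)]
        psi_hat, p_hat = max(product(grid, grid), key=lambda t: log_likelihood(*t))
        print("psi_hat = %.2f, p_hat = %.2f" % (psi_hat, p_hat))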

  1. On the design of paleoenvironmental data networks for estimating large-scale patterns of climate

    NASA Astrophysics Data System (ADS)

    Kutzbach, J. E.; Guetter, P. J.

    1980-09-01

    Guidelines are determined for the spatial density and location of climatic variables (temperature and precipitation) that are appropriate for estimating the continental- to hemispheric-scale pattern of atmospheric circulation (sea-level pressure). Because instrumental records of temperature and precipitation simulate the climatic information that is contained in certain paleoenvironmental records (tree-ring, pollen, and written-documentary records, for example), these guidelines provide useful sampling strategies for reconstructing the pattern of atmospheric circulation from paleoenvironmental records. The statistical analysis uses a multiple linear regression model. The sampling strategies consist of changes in site density (from 0.5 to 2.5 sites per million square kilometers) and site location (from western North American sites only to sites in Japan, North America, and western Europe) of the climatic data. The results showed that the accuracy of specification of the pattern of sea-level pressure: (1) is improved if sites with climatic records are spread as uniformly as possible over the area of interest; (2) increases with increasing site density-at least up to the maximum site density used in this study; (3) is improved if sites cover an area that extends considerably beyond the limits of the area of interest. The accuracy of specification was lower for independent data than for the data that were used to develop the regression model; some skill was found for almost all sampling strategies.

  2. Estimating cotton nitrogen nutrition status using leaf greenness and ground cover information

    USDA-ARS?s Scientific Manuscript database

    Assessing nitrogen (N) status is important from economic and environmental standpoints. To date, many spectral indices to estimate cotton chlorophyll or N content have been purely developed using statistical analysis approach where they are often subject to site-specific problems. This study describ...

  3. Benefit transfer with limited data: An application to recreational fishing losses from surface mining

    EPA Science Inventory

    The challenges of applying benefit transfer models to policy sites are often underestimated. Analysts commonly need to estimate site-specific effects for areas that lack data on the number of people who use the resource, intensity of use, and other relevant variables. Yet, the be...

  4. NEW APPROACHES TO ESTIMATION OF SOLID-WASTE QUANTITY AND COMPOSITION

    EPA Science Inventory

    Efficient and statistically sound sampling protocols for estimating the quantity and composition of solid waste over a stated period of time in a given location, such as a landfill site or at a specific point in an industrial or commercial process, are essential to the design ...

  5. PROTOCOL - A COMPUTERIZED SOLID WASTE QUANTITY AND COMPOSITION ESTIMATION SYSTEM: OPERATIONAL MANUAL

    EPA Science Inventory

    The assumptions of traditional sampling theory often do not fit the circumstances when estimating the quantity and composition of solid waste arriving at a given location, such as a landfill site, or at a specific point in an industrial or commercial process. The investigator oft...

  6. VERIFICATION OF SIMPLIFIED PROCEDURES FOR SITE- SPECIFIC SO2 AND NOX CONTROL COST ESTIMATES

    EPA Science Inventory

    The report documents results of an evaluation to verify the accuracy of simplified procedures for estimating sulfur dioxide (SO2) and nitrogen oxides (NOx) retrofit control costs and performance for 200 SO2-emitting coal-fired power plants in the 31-state eastern region. Initially...

  7. DESIGN OF AQUIFER REMEDIATION SYSTEMS: (2) Estimating site-specific performance and benefits of partial source removal

    EPA Science Inventory

    A Lagrangian stochastic model is proposed as a tool that can be utilized in forecasting remedial performance and estimating the benefits (in terms of flux and mass reduction) derived from a source zone remedial effort. The stochastic functional relationships that describe the hyd...

  8. ACCURACY OF THE 1992 NATIONAL LAND COVER DATASET AREA ESTIMATES: AN ANALYSIS AT MULTIPLE SPATIAL EXTENTS

    EPA Science Inventory

    Abstract for poster presentation:

    Site-specific accuracy assessments evaluate fine-scale accuracy of land-use/land-cover (LULC) datasets but provide little insight into accuracy of area estimates of LULC classes derived from sampling units of varying size. Additiona...

  9. Comparison of the Current Center of Site Annual Neshap Dose Modeling at the Savannah River Site with Other Assessment Methods.

    PubMed

    Minter, Kelsey M; Jannik, G Timothy; Stagich, Brooke H; Dixon, Kenneth L; Newton, Joseph R

    2018-04-01

    The U.S. Environmental Protection Agency (EPA) requires the use of the model CAP88 to estimate the total effective dose (TED) to an offsite maximally exposed individual (MEI) for demonstrating compliance with 40 CFR 61, Subpart H: The National Emission Standards for Hazardous Air Pollutants (NESHAP) regulations. For NESHAP compliance at the Savannah River Site (SRS), the EPA, the U.S. Department of Energy (DOE), South Carolina's Department of Health and Environmental Control, and SRS approved a dose assessment method in 1991 that models all radiological emissions as if originating from a generalized center of site (COS) location at two allowable stack heights (0 m and 61 m). However, due to changes in SRS missions, radiological emissions are no longer evenly distributed about the COS. An area-specific simulation of the 2015 SRS radiological airborne emissions was conducted to compare to the current COS method. The results produced a slightly higher dose estimate (2.97 × 10 mSv vs. 2.22 × 10 mSv), marginally changed the overall MEI location, and noted that H-Area tritium emissions dominated the dose. Thus, an H-Area dose model was executed as a potential simplification of the area-specific simulation by adopting the COS methodology and modeling all site emissions from a single location in H-Area using six stack heights that reference stacks specific to the tritium production facilities within H-Area. This "H-Area Tritium Stacks" method produced a small increase in TED estimates (3.03 × 10 mSv vs. 2.97 × 10 mSv) when compared to the area-specific simulation. This suggests that the current COS method is still appropriate for demonstrating compliance with NESHAP regulations but that changing to the H-Area Tritium Stacks assessment method may now be a more appropriate representation of operations at SRS.

  10. Sorption of uranium (VI) on homoionic sodium smectite experimental study and surface complexation modeling.

    PubMed

    Korichi, Smain; Bensmaili, Aicha

    2009-09-30

    This paper is an extension of a previous paper where the natural and purified clay in the homoionic Na form were physico-chemically characterized (doi:10.1016/j.clay.2008.04.014). In this study, the adsorption behavior of U(VI) on a purified Na-smectite suspension is studied using batch adsorption experiments and surface complexation modeling (double layer model). The sorption of uranium was investigated as a function of pH, uranium concentration, solid to liquid ratio, effect of natural organic matter (NOM), and NaNO3 background electrolyte concentration. Using the MINTEQA2 program, the speciation of uranium was calculated as a function of pH and uranium concentration. Model-predicted U(VI) aqueous speciation suggests that the important aqueous species at [U(VI)] = 1 mg/L over the pH range 3-7 include UO2^2+, UO2OH^+, and (UO2)3(OH)5^+. The concentration of UO2^2+ decreased and that of (UO2)3(OH)5^+ increased with increasing pH. The potentiometric titration values and uptake of uranium in the sodium smectite suspension were simulated with the FITEQL 4.0 program using a two-site model, which is composed of silicate and aluminum reaction sites. We compare the acidity constant values obtained by potentiometric titration of the purified sodium smectite with those obtained from single oxides (quartz and alpha-alumina), taking into account the surface heterogeneity and the complex nature of natural colloids. We investigate uranium sorption onto purified Na-smectite assuming low, intermediate, and high edge site surfaces, which are estimated as percentages of the specific surface area. The sorption data are interpreted and modeled as a function of edge site surface. A relationship between uranium sorption and total site concentration was confirmed and explained through variation in the estimated edge site surface value. The modeling study shows that convergence during DLM modeling is related to the best estimate of the edge site surface derived from the N2-BET specific surface area, SSA_BET (and thus the total edge site concentration). The specific surface area should be at least 80-100 m2/g for smectite clays in order to reach convergence during the modeling. The range of 10-20% of SSA_BET was used to estimate the values of edge site surface that led to convergence during modeling. Agreement between the experimental data and model predictions is reasonable when 15% of SSA_BET is used as the edge site surface. However, the predicted U(VI) adsorption underestimated and overestimated the experimental observations at 10% and 20% of the measured SSA_BET, respectively. The dependence of uranium sorption modeling results on specific surface area and edge site surface is useful for describing and predicting U(VI) retardation as a function of chemical conditions in field-scale reactive transport simulations. Therefore this approach can be used in environmental quality assessment.
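
    The edge-site estimate described above is essentially an arithmetic step: take a fraction of the measured N2-BET surface area and convert it to a site concentration using a site density and a solid-to-liquid ratio. The site density and solid concentration below are hypothetical; only the 15% edge-site fraction follows the abstract.

        # Sketch of estimating edge site concentration from a fraction of the N2-BET
        # specific surface area. Site density and solid concentration are hypothetical.
        AVOGADRO = 6.022e23

        ssa_bet_m2_per_g = 90.0       # example measured N2-BET specific surface area
        edge_fraction = 0.15          # 15% of SSA_BET attributed to edge sites (per abstract)
        site_density_per_nm2 = 2.3    # hypothetical edge site density, sites/nm2
        solid_g_per_l = 2.0           # hypothetical solid-to-liquid ratio, g/L

        edge_area_m2_per_g = edge_fraction * ssa_bet_m2_per_g
        sites_per_g = site_density_per_nm2 * 1e18 * edge_area_m2_per_g   # 1 m2 = 1e18 nm2
        edge_site_conc_mol_per_l = sites_per_g * solid_g_per_l / AVOGADRO
        print("edge site concentration ~ %.2e mol/L" % edge_site_conc_mol_per_l)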

  11. Preliminary assessment of chloride concentrations, loads, and yields in selected watersheds along the Interstate 95 corridor, southeastern Connecticut, 2008-09

    USGS Publications Warehouse

    Brown, Craig J.; Mullaney, John R.; Morrison, Jonathan; Mondazzi, Remo

    2011-01-01

    Water-quality conditions were assessed to evaluate potential effects of road-deicer applications on stream-water quality in four watersheds along Interstate 95 (I-95) in southeastern Connecticut from November 1, 2008, through September 30, 2009. This preliminary study is part of a four-year cooperative study by the U.S. Geological Survey (USGS), the Federal Highway Administration (FHWA), and the Connecticut Department of Transportation (ConnDOT). Streamflow and water quality were studied at four watersheds: Four Mile River, Oil Mill Brook, Stony Brook, and Jordan Brook. Water-quality samples were collected and specific conductance was measured continuously at paired water-quality monitoring sites upstream and downstream from I-95. Specific conductance values were related to chloride (Cl) concentrations to assist in determining the effects of road-deicing operations on the levels of Cl in the streams. Streamflow and water-quality data were compared with weather data and with the timing, amount, and composition of deicers applied to state highways. Grab samples were collected during winter stormwater-runoff events, such as winter storms or periods of rain or warm temperatures in which melting takes place, and periodically during the spring and summer. Cl concentrations at the eight water-quality monitoring sites were well below the U.S. Environmental Protection Agency (USEPA) recommended chronic and acute Cl toxicity criteria of 230 and 860 milligrams per liter (mg/L), respectively. Specific conductance and estimated Cl concentrations in streams, particularly at sites downstream from I-95, peaked during discharge events in the winter and early spring as a result of deicers applied to roads and washed off by stormwater or meltwater. During winter storms, deicing activities, or subsequent periods of melting, specific conductance and estimated Cl concentrations peaked as high as 703 microsiemens per centimeter (µS/cm) and 160 mg/L at the downstream sites. During most of the spring and summer, specific conductance and estimated Cl concentrations decreased during discharge events because the low-ionic strength of stormwater had a diluting effect on stream-water quality. However, peaks in specific conductance and estimated Cl concentrations at Jordan Brook and Stony Brook corresponded to peaks in streamflow well after winter snow or ice events; these delayed peaks in Cl concentration likely resulted from deicing salts that remained in melting snow piles and (or) that were flushed from soils and shallow groundwater, then discharged downstream. Cl loads in streams generally were highest in the winter and early spring. The estimated load for the period of record at the four monitoring sites downstream from I-95 ranged from 0.33 ton per day (ton/d) at the Stony Brook watershed to 0.59 ton/d at the Jordan Brook watershed. The Cl yields ranged from 0.07 ton per day per square mile ((ton/d)/mi2) at Oil Mill Brook, one of the least developed watersheds, to 0.21 (ton/d)/mi2 at Jordan Brook, the watershed with the highest percentage of urban development and impervious surfaces. The median estimates of Cl load from atmospheric deposition ranged from 11 to 19 tons, and contributed 4.3 to 7.1 percent of the Cl load in streamflow from the watershed areas. A comparison of the Cl load input and output estimates indicates that less Cl is leaving the watersheds than is entering through atmospheric deposition and application of deicers.
The lag time between introduction of Cl to the watershed and transport to the stream, and uncertainty in the load estimates may be the reasons for this discrepancy. In addition, estimates of direct infiltration of Cl to groundwater from atmospheric deposition, deicer applications, and septic-tank drainfields to groundwater were outside the scope of the November 2008 to September 2009 assessment. However, increased concentrations of ions were observed between upstream and downstream sites and could result from deicer appli
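
    The load and yield figures reported above follow from routine unit conversions of concentration, streamflow, and drainage area. The sketch below uses hypothetical inputs and the usual approximate factor for converting (mg/L)(ft3/s) to tons per day.

        # Sketch of chloride load and yield calculations. Inputs are hypothetical,
        # not values from the Connecticut study.
        CFS_MGL_TO_TONS_PER_DAY = 0.0027   # approx. (mg/L)*(ft3/s) -> tons/day

        def chloride_load_tons_per_day(conc_mg_per_l, streamflow_cfs):
            return conc_mg_per_l * streamflow_cfs * CFS_MGL_TO_TONS_PER_DAY

        def chloride_yield(load_tons_per_day, drainage_area_mi2):
            return load_tons_per_day / drainage_area_mi2   # (ton/d)/mi2

        conc, flow, area = 45.0, 3.2, 4.8    # hypothetical mg/L, ft3/s, mi2
        load = chloride_load_tons_per_day(conc, flow)
        print("load = %.2f ton/d, yield = %.3f (ton/d)/mi2" % (load, chloride_yield(load, area)))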

  12. Radioactive liquid wastes discharged to ground in the 200 Areas during 1976

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mirabella, J.E.

    An overall summary is presented giving the radioactive liquid wastes discharged to ground during 1976 and since startup (for both total and decayed depositions) within the Production and Waste Management Division control zone (200 Area plateau). Overall summaries are also presented for 200 East Area and for 200 West Area. The data contain an estimate of the radioactivity discharged to individual ponds, cribs and specific retention sites within the Production and Waste Management Division during 1976 and from startup through December 31, 1976; an estimate of the decayed activities from startup through 1976; the location and reference drawings of each disposal site; and the usage dates of each disposal site. The estimates for the radioactivity discharged and for decayed activities discharged from startup through December 31, 1976 are based upon Item 4 of the Bibliography. The volume of liquid discharged to the ponds also includes major nonradioactive streams. The wastes discharged during 1976 to each active disposal site are detailed on a month-to-month basis, along with the monthly maximum concentration and average concentration data. An estimate of the radioactivity discharged to each active site along with the remaining decayed activities is given.

  13. Integrating ecosystems measurements from multiple eddy-covariance sites to a simple model of ecosystem process - Are there possibilities for a uniform model calibration?

    NASA Astrophysics Data System (ADS)

    Minunno, Francesco; Peltoniemi, Mikko; Launiainen, Samuli; Mäkelä, Annikki

    2014-05-01

    Biogeochemical models quantify the material and energy flux exchanges between biosphere, atmosphere, and soil; however, there is still considerable uncertainty in model structure and parametrization. The increasing availability of data from multiple sources provides useful information for model calibration and validation at different space and time scales. We calibrated the simplified ecosystem process model PRELES to data from multiple sites. Our objective was to compare a multi-site calibration with site-specific calibrations, in order to test whether PRELES is a model of general applicability and how well one parameterization can predict ecosystem fluxes. Model calibration and evaluation were carried out by means of Bayesian methods; Bayesian calibration (BC) and Bayesian model comparison (BMC) were used to quantify the uncertainty in model parameters and model structure. Evapotranspiration (ET) and gross primary production (GPP) measurements collected at 9 sites in Finland and Sweden were used in the study; half of the dataset was used for model calibration and half for the comparative analyses. Ten BCs were performed: the model was calibrated independently for each of the nine sites (site-specific calibrations), and a multi-site calibration was achieved using the data from all the sites in one BC. Then nine BMCs were carried out, one for each site, using output from the multi-site and site-specific versions of PRELES. Similar estimates were obtained for the parameters to which model outputs are most sensitive. Not surprisingly, the joint posterior distribution achieved through the multi-site calibration was characterized by lower uncertainty, because more data were involved in the calibration process. No significant differences were found between the predictions of the multi-site and site-specific versions of PRELES, and after BMC we concluded that the model can be used reliably at regional scale to simulate carbon and water fluxes of boreal forests. Despite being a simple model, PRELES provided good estimates of GPP and ET; only at one site did the multi-site version of PRELES underestimate water fluxes. Our study implies convergence of GPP and water processes in the boreal zone to the extent that plausible prediction is possible with a simple model using a global parameterization.

  14. Assessment of risks to ground-feeding songbirds from lead in the Coeur d'Alene Basin, Idaho, USA.

    PubMed

    Sample, Bradley E; Hansen, James A; Dailey, Anne; Duncan, Bruce

    2011-10-01

    Previous assessment of ecological risks within the Coeur d'Alene River Basin identified Pb as a key risk driver for ground-feeding songbirds. Because this conclusion was based almost exclusively on literature data, its strength was determined to range from low to moderate. With the support of the US Environmental Protection Agency (USEPA), the US Fish and Wildlife Service collected site-specific data to address the uncertainty associated with Pb risks to songbirds. These data, plus those from the previous Coeur d'Alene Basin ecological risk assessment, were integrated, and risks to ground-feeding songbirds were reevaluated. These site-specific data were also used to develop updated preliminary remedial goals (PRGs) for Pb in soils that would be protective of songbirds. Available data included site-specific Pb concentrations in blood, liver, and ingesta from 3 songbird species (American robin, song sparrow, and Swainson's thrush), colocated soil data, and soil data from other locations in the basin. Semi-log regression models based on the association between soil Pb and tissue Pb concentrations were applied to measured soil concentrations from the previous risk assessment to estimate Pb exposures in riparian and adjacent upland habitats throughout the Coeur d'Alene Basin. Measured and estimated tissue or dietary exposure was tabulated for 3 areas plus the reference, and then compared to multiple effects measures. As many as 6 exposure-effect metrics were available for assessing risk in any one area. Analyses of site-specific tissue- and diet-based exposure data indicate that exposure of ground-feeding songbirds to Pb in the Coeur d'Alene Basin is sufficient to result in adverse effects. Because this conclusion is based on multiple exposure-effect metrics that include site-specific data, the strength of this conclusion is high. Ecological PRGs were developed by integrating the site-specific regression models with tissue and dietary effect levels to create exposure models, which were solved for the soil concentration that produced an exposure estimate equal to the effect level (i.e., the ecological PRG). The lowest PRG obtained for any species' exposure-effect measure combination was 490 mg/kg for subclinical effects due to Pb in the blood of American robins; the highest was 7200 mg/kg for severe clinical effects due to Pb in the blood of song sparrows. Because the lowest ground-feeding songbird PRG was comparable to multiple cleanup goals developed for the basin (i.e., soil invertebrates, wildlife populations, and human health), in addition to the site-specific cleanup level of 530 mg Pb/kg sediment for the protection of waterfowl (USEPA 2002) the USEPA has made a risk-management determination that a site-specific Pb cleanup level of 530 mg/kg in soil would be protective of songbirds in the Coeur d'Alene Basin. Copyright © 2011 SETAC.
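
    The PRG derivation described above amounts to inverting an exposure model. The sketch below assumes a semi-log form (tissue Pb linear in the natural log of soil Pb) with made-up coefficients and a made-up effect threshold; the fitted Coeur d'Alene models are not reproduced in the abstract.

        import math

        # Sketch of deriving a soil preliminary remedial goal (PRG) by inverting a
        # semi-log soil-to-tissue exposure model. Coefficients and the effect level
        # are hypothetical.
        a, b = -2.5, 1.1    # hypothetical: blood Pb = a + b * ln(soil Pb)

        def blood_pb(soil_pb_mg_kg):
            return a + b * math.log(soil_pb_mg_kg)

        def soil_prg(effect_level_blood_pb):
            # Solve effect = a + b * ln(soil) for the soil concentration.
            return math.exp((effect_level_blood_pb - a) / b)

        effect_level = 4.3  # hypothetical blood Pb effect threshold
        print("PRG ~ %.0f mg Pb/kg soil" % soil_prg(effect_level))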

  15. 76 FR 40352 - National Nuclear Security Administration; Amended Record of Decision: Site-Wide Environmental...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-08

    ... similar to those estimated for transportation of radioactive material in other DOE NEPA documents. The air... radiological materials located at civilian sites worldwide. Part of the GTRI mission is implemented through... specific actions analyzed in DOE/EIS-0380-SA-02 include packaging the sealed sources (sometimes with a part...

  16. 78 FR 31941 - Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-28

    ... burden 810 RISE Staff Pre-Test 157 1 .25 39 RISE Staff Post-Test 157 1 .25 39 RISE burden 78 Estimated... addition, evaluation plans were developed to support rigorous site-specific and cross-site studies to... and Lesbian Center's Recognize Intervene Support Empower (RISE) project. A third phase of the study...

  17. Prediction of earthquake ground motion at rock sites in Japan: evaluation of empirical and stochastic approaches for the PEGASOS Refinement Project

    NASA Astrophysics Data System (ADS)

    Edwards, Benjamin; Fäh, Donat

    2017-11-01

    Strong ground-motion databases used to develop ground-motion prediction equations (GMPEs) and calibrate stochastic simulation models generally include relatively few recordings on what can be considered as engineering rock or hard rock. Ground-motion predictions for such sites are therefore susceptible to uncertainty and bias, which can then propagate into site-specific hazard and risk estimates. In order to explore this issue we present a study investigating the prediction of ground motion at rock sites in Japan, where a wide range of recording-site types (from soil to very hard rock) are available for analysis. We employ two approaches: empirical GMPEs and stochastic simulations. The study is undertaken in the context of the PEGASOS Refinement Project (PRP), a Senior Seismic Hazard Analysis Committee (SSHAC) Level 4 probabilistic seismic hazard analysis of Swiss nuclear power plants, commissioned by swissnuclear and running from 2008 to 2013. In order to reduce the impact of site-to-site variability and expand the available data set for rock and hard-rock sites we adjusted Japanese ground-motion data (recorded at sites with 110 m/s < Vs30 < 2100 m/s) to a common hard-rock reference. This was done through deconvolution of: (i) empirically derived amplification functions and (ii) the theoretical 1-D SH amplification between the bedrock and surface. Initial comparison of a Japanese GMPE's predictions with data recorded at rock and hard-rock sites showed systematic overestimation of ground motion. A further investigation of five global GMPEs' prediction residuals as a function of quarter-wavelength velocity showed that they all presented systematic misfit trends, leading to overestimation of median ground motions at rock and hard-rock sites in Japan. In an alternative approach, a stochastic simulation method was tested, allowing the direct incorporation of site-specific Fourier amplification information in forward simulations. We use an adjusted version of the model developed for Switzerland during the PRP. The median simulation prediction at true rock and hard-rock sites (Vs30 > 800 m/s) was found to be comparable (within expected levels of epistemic uncertainty) to predictions using an empirical GMPE, with reduced residual misfit. As expected, due to including site-specific information in the simulations, the reduction in misfit could be isolated to a reduction in the site-related within-event uncertainty. The results of this study support the use of finite or pseudo-finite fault stochastic simulation methods in estimating strong ground motions in regions of weak and moderate seismicity, such as central and northern Europe. Furthermore, it indicates that weak-motion data has the potential to allow estimation of between- and within-site variability in ground motion, which is a critical issue in site-specific seismic hazard analysis, particularly for safety critical structures.

  18. Regional flood probabilities

    USGS Publications Warehouse

    Troutman, Brent M.; Karlinger, Michael R.

    2003-01-01

    The T-year annual maximum flood at a site is defined to be the streamflow that has probability 1/T of being exceeded in any given year, and for a group of sites the corresponding regional flood probability (RFP) is the probability that at least one site will experience a T-year flood in any given year. The RFP depends on the number of sites of interest and on the spatial correlation of flows among the sites. We present a Monte Carlo method for obtaining the RFP and demonstrate that spatial correlation estimates used in this method may be obtained with rank transformed data and therefore that knowledge of the at-site peak flow distribution is not necessary. We examine the extent to which the estimates depend on specification of a parametric form for the spatial correlation function, which is known to be nonstationary for peak flows. It is shown in a simulation study that use of a stationary correlation function to compute RFPs yields satisfactory estimates for certain nonstationary processes. Application of asymptotic extreme value theory is examined, and a methodology for separating channel network and rainfall effects on RFPs is suggested. A case study is presented using peak flow data from the state of Washington. For 193 sites in the Puget Sound region it is estimated that a 100-year flood will occur on the average every 4.5 years.
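
    The Monte Carlo idea in this paper, drawing spatially correlated annual maxima and counting years in which at least one site exceeds its T-year quantile, can be sketched with an exchangeable Gaussian dependence structure. The number of sites, correlation, and recurrence interval below are illustrative choices, not the Washington study values.

        import random
        import statistics

        # Monte Carlo sketch of a regional flood probability (RFP): the chance that
        # at least one of n sites sees a T-year flood in a given year. Uses a simple
        # equicorrelated Gaussian model; n, rho, and T are illustrative.
        random.seed(1)
        n_sites, rho, T, n_years = 10, 0.5, 100, 100000
        z_threshold = statistics.NormalDist().inv_cdf(1 - 1 / T)   # marginal T-year quantile

        hits = 0
        for _ in range(n_years):
            common = random.gauss(0, 1)
            year_max = max(rho ** 0.5 * common + (1 - rho) ** 0.5 * random.gauss(0, 1)
                           for _ in range(n_sites))
            hits += year_max > z_threshold
        rfp = hits / n_years
        print("RFP ~ %.3f, i.e. a T-year flood somewhere about every %.1f years" % (rfp, 1 / rfp))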

  19. The precision of wet atmospheric deposition data from national atmospheric deposition program/national trends network sites determined with collocated samplers

    USGS Publications Warehouse

    Nilles, M.A.; Gordon, J.D.; Schroder, L.J.

    1994-01-01

    A collocated, wet-deposition sampler program has been operated since October 1988 by the U.S. Geological Survey to estimate the overall sampling precision of wet atmospheric deposition data collected at selected sites in the National Atmospheric Deposition Program and National Trends Network (NADP/NTN). A duplicate set of wet-deposition sampling instruments was installed adjacent to existing sampling instruments at four different NADP/NTN sites for each year of the study. Wet-deposition samples from collocated sites were collected and analysed using standard NADP/NTN procedures. Laboratory analyses included determinations of pH, specific conductance, and concentrations of major cations and anions. The estimates of precision included all variability in the data-collection system, from the point of sample collection through storage in the NADP/NTN database. Sampling precision was determined from the absolute value of differences in the analytical results for the paired samples in terms of median relative and absolute difference. The median relative difference for Mg2+, Na+, K+ and NH4+ concentration and deposition was quite variable between sites and exceeded 10% at most sites. Relative error for analytes whose concentrations typically approached laboratory method detection limits were greater than for analytes that did not typically approach detection limits. The median relative difference for SO42- and NO3- concentration, specific conductance, and sample volume at all sites was less than 7%. Precision for H+ concentration and deposition ranged from less than 10% at sites with typically high levels of H+ concentration to greater than 30% at sites with low H+ concentration. Median difference for analyte concentration and deposition was typically 1.5-2-times greater for samples collected during the winter than during other seasons at two northern sites. Likewise, the median relative difference in sample volume for winter samples was more than double the annual median relative difference at the two northern sites. Bias accounted for less than 25% of the collocated variability in analyte concentration and deposition from weekly collocated precipitation samples at most sites.
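
    The precision metric used here, the median relative (and absolute) difference between paired collocated samples, is simple to compute. The paired concentrations below are fabricated examples, not NADP/NTN data.

        import statistics

        # Sketch of the collocated-sampler precision metrics: median relative
        # difference (percent) and median absolute difference. Data are fabricated.
        primary = [1.20, 0.85, 2.10, 0.40, 1.75, 0.95]
        collocated = [1.10, 0.90, 2.00, 0.48, 1.60, 1.00]

        def median_relative_difference_pct(a, b):
            rel = [abs(x - y) / ((x + y) / 2.0) * 100.0 for x, y in zip(a, b)]
            return statistics.median(rel)

        def median_absolute_difference(a, b):
            return statistics.median([abs(x - y) for x, y in zip(a, b)])

        print("median relative difference = %.1f%%" % median_relative_difference_pct(primary, collocated))
        print("median absolute difference = %.3f" % median_absolute_difference(primary, collocated))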

  20. Initial Validation of NDVI time series from AVHRR, VEGETATION, and MODIS

    NASA Technical Reports Server (NTRS)

    Morisette, Jeffrey T.; Pinzon, Jorge E.; Brown, Molly E.; Tucker, Jim; Justice, Christopher O.

    2004-01-01

    The paper will address Theme 7: Multi-sensor opportunities for VEGETATION. We present analysis of a long-term vegetation record derived from three moderate resolution sensors: AVHRR, VEGETATION, and MODIS. While empirically based manipulation can ensure agreement between the three data sets, there is a need to validate the series. This paper uses atmospherically corrected ETM+ data available over the EOS Land Validation Core Sites as an independent data set with which to compare the time series. We use ETM+ data from 15 globally distributed sites, 7 of which contain repeat coverage in time. These high-resolution data are compared to the values of each sensor by spatially aggregating the ETM+ to each specific sensor's spatial coverage. The aggregated ETM+ value provides a point estimate for a specific site on a specific date. The standard deviation of that point estimate is used to construct a confidence interval for that point estimate. The values from each moderate resolution sensor are then evaluated with respect to that confidence interval. Results show that AVHRR, VEGETATION, and MODIS data can be combined to assess temporal uncertainties and address data continuity issues and that the atmospherically corrected ETM+ data provide an independent source with which to compare that record. The final product is a consistent time series climate record that links historical observations to current and future measurements.
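
    The validation step described above, aggregating fine-resolution ETM+ pixels to a coarse-sensor footprint and building a confidence interval around the point estimate, reduces to a mean and an interval based on the standard deviation. The pixel values and the coarse-sensor NDVI below are fabricated.

        import math
        import random
        import statistics

        # Sketch of aggregating ETM+ NDVI pixels to a coarse footprint and forming a
        # confidence interval for the point estimate. All values are fabricated.
        random.seed(2)
        etm_pixels = [random.gauss(0.62, 0.05) for _ in range(400)]  # NDVI within footprint

        mean_ndvi = statistics.fmean(etm_pixels)
        sd = statistics.stdev(etm_pixels)
        half_width = 1.96 * sd / math.sqrt(len(etm_pixels))          # ~95% interval for the mean
        lo, hi = mean_ndvi - half_width, mean_ndvi + half_width

        coarse_sensor_ndvi = 0.60   # hypothetical MODIS/AVHRR/VEGETATION value
        print("aggregated ETM+ NDVI = %.3f (95%% CI %.3f-%.3f)" % (mean_ndvi, lo, hi))
        print("coarse value inside CI:", lo <= coarse_sensor_ndvi <= hi)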

  1. Transposable Elements and DNA Methylation Create in Embryonic Stem Cells Human-Specific Regulatory Sequences Associated with Distal Enhancers and Noncoding RNAs

    PubMed Central

    Glinsky, Gennadi V.

    2015-01-01

    Despite significant progress in the structural and functional characterization of the human genome, understanding of the mechanisms underlying the genetic basis of human phenotypic uniqueness remains limited. Here, I report that transposable element-derived sequences, most notably LTR7/HERV-H, LTR5_Hs, and L1HS, harbor 99.8% of the candidate human-specific regulatory loci (HSRL) with putative transcription factor-binding sites in the genome of human embryonic stem cells (hESC). A total of 4,094 candidate HSRL display selective and site-specific binding of critical regulators (NANOG [Nanog homeobox], POU5F1 [POU class 5 homeobox 1], CCCTC-binding factor [CTCF], Lamin B1), and are preferentially located within the matrix of transcriptionally active DNA segments that are hypermethylated in hESC. hESC-specific NANOG-binding sites are enriched near the protein-coding genes regulating brain size, pluripotency long noncoding RNAs, hESC enhancers, and 5-hydroxymethylcytosine-harboring regions immediately adjacent to binding sites. Sequences of only 4.3% of hESC-specific NANOG-binding sites are present in Neanderthals’ genome, suggesting that a majority of these regulatory elements emerged in Modern Humans. Comparisons of estimated creation rates of novel TF-binding sites revealed that there was 49.7-fold acceleration of creation rates of NANOG-binding sites in genomes of Chimpanzees compared with the mouse genomes and further 5.7-fold acceleration in genomes of Modern Humans compared with the Chimpanzees genomes. Preliminary estimates suggest that emergence of one novel NANOG-binding site detectable in hESC required 466 years of evolution. Pathway analysis of coding genes that have hESC-specific NANOG-binding sites within gene bodies or near gene boundaries revealed their association with physiological development and functions of nervous and cardiovascular systems, embryonic development, behavior, as well as development of a diverse spectrum of pathological conditions such as cancer, diseases of cardiovascular and reproductive systems, metabolic diseases, multiple neurological and psychological disorders. A proximity placement model is proposed explaining how a 33–47% excess of NANOG, CTCF, and POU5F1 proteins immobilized on a DNA scaffold may play a functional role at distal regulatory elements. PMID:25956794

  2. Estimation of water flux in urban area using eddy covariance measurements in Riverside, Southern California

    USDA-ARS?s Scientific Manuscript database

    Micrometeorological methods can directly measure the sensible and latent heat fluxes at specific sites and provide robust estimates of the evaporative fraction (EF), which is the fraction of available surface energy contained in latent heat. Across a vegetation coverage gradient in an urban area, an empir...

  3. PROTOCOL - A COMPUTERIZED SOLID WASTE QUANTITY AND COMPOSITION ESTIMATION SYSTEM. Project Summary (EPA/600/S2-91/005)

    EPA Science Inventory

    The assumptions of traditional sampling theory often do not fit the circumstances when estimating the quantity and composition of solid waste arriving at a given location, such as a landfill site, or at a specific point in an industrial or commercial process. The investigator oft...

  4. Electrocorticographic high gamma activity versus electrical cortical stimulation mapping of naming.

    PubMed

    Sinai, Alon; Bowers, Christopher W; Crainiceanu, Ciprian M; Boatman, Dana; Gordon, Barry; Lesser, Ronald P; Lenz, Frederick A; Crone, Nathan E

    2005-07-01

    Subdural electrocorticographic (ECoG) recordings in patients undergoing epilepsy surgery have shown that functional activation is associated with event-related broadband gamma activity in a higher frequency range (>70 Hz) than previously studied in human scalp EEG. To investigate the utility of this high gamma activity (HGA) for mapping language cortex, we compared its neuroanatomical distribution with functional maps derived from electrical cortical stimulation (ECS), which remains the gold standard for predicting functional impairment after surgery for epilepsy, tumours or vascular malformations. Thirteen patients had undergone subdural electrode implantation for the surgical management of intractable epilepsy. Subdural ECoG signals were recorded while each patient verbally named sequentially presented line drawings of objects, and estimates of event-related HGA (80-100 Hz) were made at each recording site. Routine clinical ECS mapping used a subset of the same naming stimuli at each cortical site. If ECS disrupted mouth-related motor function, i.e. if it affected the mouth, lips or tongue, naming could not be tested with ECS at the same cortical site. Because naming during ECoG involved these muscles of articulation, the sensitivity and specificity of ECoG HGA were estimated relative to both ECS-induced impairments of naming and ECS disruption of mouth-related motor function. When these estimates were made separately for 12 electrode sites per patient (the average number with significant HGA), the specificity of ECoG HGA with respect to ECS was 78% for naming and 81% for mouth-related motor function, and equivalent sensitivities were 38% and 46%, respectively. When ECS maps of naming and mouth-related motor function were combined, the specificity and sensitivity of ECoG HGA with respect to ECS were 84% and 43%, respectively. This study indicates that event-related ECoG HGA during confrontation naming predicts ECS interference with naming and mouth-related motor function with good specificity but relatively low sensitivity. Its favourable specificity suggests that ECoG HGA can be used to construct a preliminary functional map that may help identify cortical sites of lower priority for ECS mapping. Passive recordings of ECoG gamma activity may be done simultaneously at all electrode sites without the risk of after-discharges associated with ECS mapping, which must be done sequentially at pairs of electrodes. We discuss the relative merits of these two functional mapping techniques.
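
    Sensitivity and specificity of the ECoG map relative to ECS, treating the ECS result at each electrode site as the reference, are ordinary confusion-matrix quantities. The per-site labels below are fabricated for illustration.

        # Sketch of per-site sensitivity and specificity of ECoG high gamma activity
        # (HGA) relative to electrical cortical stimulation (ECS) as the reference.
        # The per-electrode labels are fabricated.
        sites = [
            # (ECS positive for naming/motor disruption, ECoG HGA significant)
            (True, True), (True, False), (False, False), (False, False),
            (True, False), (False, True), (False, False), (True, True),
            (False, False), (False, False), (True, False), (False, False),
        ]

        tp = sum(ecs and hga for ecs, hga in sites)
        fn = sum(ecs and not hga for ecs, hga in sites)
        tn = sum(not ecs and not hga for ecs, hga in sites)
        fp = sum(not ecs and hga for ecs, hga in sites)

        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        print("sensitivity = %.0f%%, specificity = %.0f%%" % (100 * sensitivity, 100 * specificity))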

  5. National Stormwater Calculator

    EPA Pesticide Factsheets

    EPA’s National Stormwater Calculator (SWC) is a desktop application that estimates the annual amount of rainwater and frequency of runoff from a specific site anywhere in the United States (including Puerto Rico).

  6. Effects of capillarity and microtopography on wetland specific yield

    USGS Publications Warehouse

    Sumner, D.M.

    2007-01-01

    Hydrologic models aid in describing water flows and levels in wetlands. Frequently, these models use a specific yield conceptualization to relate water flows to water level changes. Traditionally, a simple conceptualization of specific yield is used, composed of two constant values for above- and below-surface water levels and neglecting the effects of soil capillarity and land surface microtopography. The effects of capillarity and microtopography on specific yield were evaluated at three wetland sites in the Florida Everglades. The effect of capillarity on specific yield was incorporated based on the fillable pore space within a soil moisture profile at hydrostatic equilibrium with the water table. The effect of microtopography was based on areal averaging of topographically varying values of specific yield. The results indicate that a more physically-based conceptualization of specific yield incorporating capillary and microtopographic considerations can be substantially different from the traditional two-part conceptualization, and from simpler conceptualizations incorporating only capillarity or only microtopography. For the sites considered, traditional estimates of specific yield could under- or overestimate the more physically based estimates by a factor of two or more. The results suggest that consideration of both capillarity and microtopography is important to the formulation of specific yield in physically based hydrologic models of wetlands. © 2007, The Society of Wetland Scientists.
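
    As a rough illustration of the capillarity part of this conceptualization, the sketch below computes specific yield as the water released per unit decline of a water table in hydrostatic equilibrium with a van Genuchten soil; the retention parameters and depths are hypothetical, not values from the Everglades sites. Microtopography would enter by areally averaging the result over the distribution of local water-table depths.

      import numpy as np

      # Hypothetical van Genuchten retention parameters (not site values)
      theta_r, theta_s, alpha, n = 0.10, 0.45, 3.5, 1.8   # alpha in 1/m
      m = 1.0 - 1.0 / n

      def theta(psi):
          """Volumetric water content for pressure head psi (m, negative above the water table)."""
          psi = np.asarray(psi, dtype=float)
          se = (1.0 + (alpha * np.abs(psi)) ** n) ** (-m)
          return np.where(psi >= 0.0, theta_s, theta_r + (theta_s - theta_r) * se)

      def stored_water(wt_depth, z_bottom=3.0, dz=0.001):
          """Water stored (m) from land surface to z_bottom for a water table at
          wt_depth, assuming hydrostatic equilibrium (psi = z - wt_depth)."""
          z = np.arange(0.0, z_bottom, dz)      # depth below land surface
          return theta(z - wt_depth).sum() * dz

      def specific_yield(wt_depth, drop=0.05):
          """Sy = water released per unit area per unit decline of the water table."""
          return (stored_water(wt_depth) - stored_water(wt_depth + drop)) / drop

      for d in (0.2, 0.5, 1.0):
          print(f"water table {d:.1f} m deep: Sy = {specific_yield(d):.3f}")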

  7. The frequency of company-sponsored alcohol brand-related sites on Facebook™-2012.

    PubMed

    Nhean, Siphannay; Nyborn, Justin; Hinchey, Danielle; Valerio, Heather; Kinzel, Kathryn; Siegel, Michael; Jernigan, David H

    2014-06-01

    This research provides an estimate of the frequency of company-sponsored alcohol brand-related sites on Facebook™. We conducted a systematic overview of the extent of alcohol brand-related sites on Facebook™ in 2012. We conducted a 2012 Facebook™ search for sites specifically related to 898 alcohol brands across 16 different alcoholic beverage types. Descriptive statistics were produced using Microsoft SQL Server. We identified 1,017 company-sponsored alcohol-brand related sites on Facebook™. Our study advances previous literature by providing a systematic overview of the extent of alcohol brand sites on Facebook™.

  8. Pest management in Douglas-fir seed orchards: a microcomputer decision method

    Treesearch

    James B. Hoy; Michael I. Haverty

    1988-01-01

    The computer program described provides a Douglas-fir seed orchard manager (user) with a quantitative method for making insect pest management decisions on a desk-top computer. The decision system uses site-specific information such as estimates of seed crop size, insect attack rates, insecticide efficacy and application costs, weather, and crop value. At sites where...

  9. Comparing nocturnal eddy covariance measurements to estimates of ecosystem respiration made by scaling chamber measurements at six coniferous boreal sites

    USGS Publications Warehouse

    Lavigne, M.B.; Ryan, M.G.; Anderson, D.E.; Baldocchi, D.D.; Crill, P.M.; Fitzjarrald, D.R.; Goulden, M.L.; Gower, S.T.; Massheder, J.M.; McCaughey, J.H.; Rayment, M.; Striegl, Robert G.

    1997-01-01

    During the growing season, nighttime ecosystem respiration emits 30–100% of the daytime net photosynthetic uptake of carbon, and therefore measurements of rates and understanding of its control by the environment are important for understanding net ecosystem exchange. Ecosystem respiration can be measured at night by eddy covariance methods, but the data may not be reliable because of low turbulence or other methodological problems. We used relationships between woody tissue, foliage, and soil respiration rates and temperature, with temperature records collected on site to estimate ecosystem respiration rates at six coniferous BOREAS sites at half-hour or 1-hour intervals, and then compared these estimates to nocturnal measurements of CO2 exchange by eddy covariance. Soil surface respiration was the largest source of CO2 at all sites (48–71%), and foliar respiration made a large contribution to ecosystem respiration at all sites (25–43%). Woody tissue respiration contributed only 5–15% to ecosystem respiration. We estimated error for the scaled chamber predictions of ecosystem respiration by using the uncertainty associated with each respiration parameter and respiring biomass value. There was substantial uncertainty in estimates of foliar and soil respiration because of the spatial variability of specific respiration rates. In addition, more attention needs to be paid to estimating foliar respiration during the early part of the growing season, when new foliage is growing, and to determining seasonal trends of soil surface respiration. Nocturnal eddy covariance measurements were poorly correlated to scaled chamber estimates of ecosystem respiration (r²=0.06–0.27) and were consistently lower than scaled chamber predictions (by 27% on average for the six sites). The bias in eddy covariance estimates of ecosystem respiration will alter estimates of gross assimilation in the light and of net ecosystem exchange rates over extended periods.
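
    A minimal sketch of the chamber-scaling step, assuming a simple Q10 temperature response for each component; the reference rates and Q10 values below are placeholders rather than BOREAS parameters.

      import numpy as np

      def q10_respiration(t_air, r_ref, q10, t_ref=10.0):
          """Respiration rate at temperature t_air given a rate r_ref at t_ref (Q10 model)."""
          return r_ref * q10 ** ((t_air - t_ref) / 10.0)

      # Placeholder half-hourly air temperatures (deg C) over one night
      t_series = np.linspace(12.0, 6.0, 16)

      # Placeholder component parameters, already expressed per unit ground area
      components = {
          "soil":    dict(r_ref=2.0, q10=2.0),   # umol CO2 m-2 s-1 at 10 C
          "foliage": dict(r_ref=1.0, q10=2.2),
          "wood":    dict(r_ref=0.3, q10=1.9),
      }

      ecosystem = sum(q10_respiration(t_series, **p) for p in components.values())
      for name, p in components.items():
          share = q10_respiration(t_series, **p).mean() / ecosystem.mean()
          print(f"{name:8s} contributes {100 * share:.0f}% of scaled ecosystem respiration")
      print(f"mean scaled ecosystem respiration: {ecosystem.mean():.2f} umol m-2 s-1")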

  10. Estimation of salt loads for the Dolores River in the Paradox Valley, Colorado, 1980–2015

    USGS Publications Warehouse

    Mast, M. Alisa

    2017-07-13

    Regression models that relate total dissolved solids (TDS) concentrations to specific conductance were used to estimate salt loads for two sites on the Dolores River in the Paradox Valley in western Colorado. The salt-load estimates will be used by the Bureau of Reclamation to evaluate salt loading to the river coming from the Paradox Valley and the effect of the Paradox Valley Unit (PVU), a project designed to reduce the salinity of the Colorado River. A second-order polynomial provided the best fit of the discrete data for both sites on the river. The largest bias occurred in samples with elevated sulfate concentrations (greater than 500 milligrams per liter), which were associated with short-duration runoff events in late summer and fall. Comparison of regression models from a period of time before operation began at the PVU and three periods after operation began suggests the relation between TDS and specific conductance has not changed over time. Net salt gain through the Paradox Valley was estimated as the TDS load at the downstream site minus the load at the upstream site. The mean annual salt gain was 137,900 tons per year prior to operation of the PVU (1980–1993) and 43,300 tons per year after the PVU began operation (1997–2015). The difference in annual salt gain in the river between the pre-PVU and post-PVU periods was 94,600 tons per year, which represents a nearly 70 percent reduction in salt loading to the river.
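
    A short sketch of the two-step calculation described here (a second-order polynomial relating specific conductance to TDS, then conversion of concentration and streamflow to a daily salt load); all numbers are made up for illustration.

      import numpy as np

      # Hypothetical paired discrete samples: specific conductance (uS/cm) and TDS (mg/L)
      sc_samples  = np.array([500., 1200., 2500., 4000., 6000., 8000.])
      tds_samples = np.array([300.,  750., 1700., 2900., 4600., 6500.])

      # Second-order polynomial regression TDS = f(SC)
      tds_model = np.poly1d(np.polyfit(sc_samples, tds_samples, deg=2))

      # Daily salt load from continuous SC and streamflow records
      # load [tons/day] ~= TDS [mg/L] * Q [ft3/s] * 0.0027 (standard unit conversion)
      def daily_salt_load(sc, discharge_cfs):
          return tds_model(sc) * discharge_cfs * 0.0027

      upstream   = daily_salt_load(sc=3000., discharge_cfs=120.)
      downstream = daily_salt_load(sc=5200., discharge_cfs=135.)
      print(f"net salt gain through the reach: {downstream - upstream:.0f} tons/day")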

  11. Nationwide summary of US Geological Survey regional regression equations for estimating magnitude and frequency of floods for ungaged sites, 1993

    USGS Publications Warehouse

    Jennings, M.E.; Thomas, W.O.; Riggs, H.C.

    1994-01-01

    For many years, the U.S. Geological Survey (USGS) has been involved in the development of regional regression equations for estimating flood magnitude and frequency at ungaged sites. These regression equations are used to transfer flood characteristics from gaged to ungaged sites through the use of watershed and climatic characteristics as explanatory or predictor variables. Generally these equations have been developed on a statewide or metropolitan area basis as part of cooperative study programs with specific State Departments of Transportation or specific cities. The USGS, in cooperation with the Federal Highway Administration and the Federal Emergency Management Agency, has compiled all the current (as of September 1993) statewide and metropolitan area regression equations into a micro-computer program titled the National Flood Frequency Program. This program includes regression equations for estimating flood-peak discharges and techniques for estimating a typical flood hydrograph for a given recurrence interval peak discharge for unregulated rural and urban watersheds. These techniques should be useful to engineers and hydrologists for planning and design applications. This report summarizes the statewide regression equations for rural watersheds in each State, summarizes the applicable metropolitan area or statewide regression equations for urban watersheds, describes the National Flood Frequency Program for making these computations, and provides much of the reference information on the extrapolation variables needed to run the program.

  12. Can Sap Flow Help Us to Better Understand Transpiration Patterns in Landscapes?

    NASA Astrophysics Data System (ADS)

    Hassler, S. K.; Weiler, M.; Blume, T.

    2017-12-01

    Transpiration is a key process in the hydrological cycle and a sound understanding and quantification of transpiration and its spatial variability is essential for management decisions and for improving the parameterisation of hydrological and soil-vegetation-atmosphere transfer models. At the tree scale, transpiration is commonly estimated by measuring sap flow. Besides evaporative demand and water availability, tree-specific characteristics such as species, size or social status, stand-specific characteristics such as basal area or stand density and site-specific characteristics such as geology, slope position or aspect control sap flow of individual trees. However, little is known about the relative importance or the dynamic interplay of these controls. We studied these influences with multiple linear regression models to explain the variability of sap velocity measurements in 61 beech and oak trees, located at 24 sites spread over a 290 km²-catchment in Luxembourg. For each of 132 consecutive days of the growing season of 2014 we applied linear models to the daily spatial pattern of sap velocity and determined the importance of the different predictors. By upscaling sap velocities to the tree level with the help of species-dependent empirical estimates for sapwood area we also examined patterns of sap flow as a more direct representation of transpiration. Results indicate that a combination of mainly tree- and site-specific factors controls sap velocity patterns in this landscape, namely tree species, tree diameter, geology and aspect. For sap flow, the site-specific predictors provided the largest contribution to the explained variance, however, in contrast to the sap velocity analysis, geology was more important than aspect. Spatial variability of atmospheric demand and soil moisture explained only a small fraction of the variance. However, the temporal dynamics of the explanatory power of the tree-specific characteristics, especially species, were correlated to the temporal dynamics of potential evaporation. We conclude that spatial representation of transpiration in models could benefit from including patterns according to tree and site characteristics.
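
    A hedged sketch of the per-day regression idea: one linear model per day with tree- and site-level predictors, using the absolute standardized coefficient as a rough importance proxy (the study's actual importance measure is not stated in the abstract); the data below are simulated.

      import numpy as np
      import pandas as pd
      from sklearn.linear_model import LinearRegression
      from sklearn.preprocessing import StandardScaler

      def daily_importance(df):
          """Fit one linear model per day of sap velocity vs tree/site predictors and
          return |standardized coefficient| per predictor as a rough importance proxy."""
          records = []
          for day, grp in df.groupby("day"):
              x = pd.get_dummies(grp[["species", "diameter", "geology", "aspect"]],
                                 columns=["species", "geology", "aspect"], drop_first=True)
              x_std = StandardScaler().fit_transform(x.astype(float))
              model = LinearRegression().fit(x_std, grp["sap_velocity"])
              records.append(pd.Series(np.abs(model.coef_), index=x.columns, name=day))
          return pd.DataFrame(records)

      # Simulated stand-in for the tree/site table (one row per tree and day)
      rng = np.random.default_rng(0)
      trees = pd.DataFrame({
          "tree": np.arange(61),
          "species": rng.choice(["beech", "oak"], 61),
          "diameter": rng.uniform(20, 60, 61),
          "geology": rng.choice(["marl", "schist", "sandstone"], 61),
          "aspect": rng.choice(["N", "S"], 61),
      })
      df = trees.merge(pd.DataFrame({"day": np.arange(5)}), how="cross")
      df["sap_velocity"] = (0.05 * df["diameter"] + (df["species"] == "oak") * 1.5
                            + rng.normal(0, 0.5, len(df)))

      print(daily_importance(df).mean().sort_values(ascending=False))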

  13. Probabilistic estimation of long-term volcanic hazard under evolving tectonic conditions in a 1 Ma timeframe

    NASA Astrophysics Data System (ADS)

    Jaquet, O.; Lantuéjoul, C.; Goto, J.

    2017-10-01

    Risk assessments in relation to the siting of potential deep geological repositories for radioactive wastes demand the estimation of long-term tectonic hazards such as volcanicity and rock deformation. Owing to their tectonic situation, such evaluations concern many industrial regions around the world. For sites near volcanically active regions, a prevailing source of uncertainty is related to volcanic hazard. For specific situations, in particular in relation to geological repository siting, the requirements for the assessment of volcanic and tectonic hazards have to be expanded to 1 million years. At such time scales, tectonic changes are likely to influence volcanic hazard and therefore a particular stochastic model needs to be developed for the estimation of volcanic hazard. The concepts and theoretical basis of the proposed model are given and a methodological illustration is provided using data from the Tohoku region of Japan.

  14. Estimation of Freely-Dissolved Concentrations of Polychlorinated Biphenyls, 2,3,7,8-Substituted Congeners and Homologs of Polychlorinated dibenzo-p-dioxins and Dibenzofurans in Water for Development of Total Maximum Daily Loadings for the Bluestone River Watershed, Virginia and West Virginia

    USGS Publications Warehouse

    Gale, Robert W.

    2007-01-01

    The Commonwealth of Virginia Department of Environmental Quality, working closely with the State of West Virginia Department of Environmental Protection and the U.S. Environmental Protection Agency is undertaking a polychlorinated biphenyl source assessment study for the Bluestone River watershed. The study area extends from the Bluefield area of Virginia and West Virginia, targets the Bluestone River and tributaries suspected of contributing to polychlorinated biphenyl, polychlorinated dibenzo-p-dioxin and dibenzofuran contamination, and includes sites near confluences of Big Branch, Brush Fork, and Beaver Pond Creek. The objectives of this study were to gather information about the concentrations, patterns, and distribution of these contaminants at specific study sites to expand current knowledge about polychlorinated biphenyl impacts and to identify potential new sources of contamination. Semipermeable membrane devices were used to integratively accumulate the dissolved fraction of the contaminants at each site. Performance reference compounds were added prior to deployment and used to determine site-specific sampling rates, enabling estimations of time-weighted average water concentrations during the deployed period. Minimum estimated concentrations of polychlorinated biphenyl congeners in water were about 1 picogram per liter per congener, and total concentrations at study sites ranged from 130 to 18,000 picograms per liter. The lowest concentration was 130 picograms per liter, about threefold greater than total hypothetical concentrations from background levels in field blanks. Polychlorinated biphenyl concentrations in water fell into three groups of sites: low (130-350 picograms per liter); medium (640-3,500 picograms per liter); and high (11,000-18,000 picograms per liter). Concentrations at the high sites, Beacon Cave and Beaverpond Branch at the Resurgence, were about four- to sixfold higher than concentrations estimated for the medium group of sites. Minimum estimated concentrations of polychlorinated dibenzo-p-dioxin and dibenzofuran congeners in water were about 0.2 to 1 femtograms per liter. Estimated total concentrations of 2,3,7,8-substituted congeners in water at study sites ranged from less than 1 to 22,000 femtograms per liter and less than 1 to 2,300 femtograms per liter for polychlorinated dibenzo-p-dioxin and dibenzofuran congeners, respectively. Total concentrations of 2,3,7,8-substituted congeners in water were comprised largely of octachlorodibenzo-p-dioxin and dibenzofuran, with less than 10 percent of the total contributed by concentrations of other congeners, mainly 2,3,7,8-heptachlorodibenzo-p-dioxin and dibenzofuran. Of special interest for this study was 2,3,7,8-tetrachlorodibenzo-p-dioxin with a regulatory surface water-quality criterion of 1,200 femtograms per liter. Estimated concentrations in water ranged from 0.5 to 41 femtograms per liter. Concentrations in water were less than 5 femtograms per liter at all study sites, except the Bluefield Westside Sewage Treatment Plant, with an estimated concentration of 41 femtograms per liter. Estimated total concentrations of homologs of polychlorinated dibenzo-p-dioxins and dibenzofurans in water at the study sites ranged from 3,200 to 36,000 femtograms per liter and 210-4,800 femtograms per liter, respectively. Again, homologs of polychlorinated dibenzo-p-dioxins and dibenzofurans in water were comprised largely of octachlorodibenzo-p-dioxin and dibenzofuran.

  15. Concentration-Dependent Multiple Binding Sites on Saliva-Treated Hydroxyapatite for Streptococcus sanguis

    PubMed Central

    Gibbons, R. J.; Moreno, E. C.; Etherden, I.

    1983-01-01

    The influence of bacterial cell concentration on estimates of the number of binding sites and the affinity for the adsorption of a strain of Streptococcus sanguis to saliva-treated hydroxyapatite was determined, and the possible presence of multiple binding sites for this organism was tested. The range of concentrations of available bacteria varied from 4.7 × 10⁶ to 5,960 × 10⁶ cells per ml. The numbers of adsorbed bacteria increased over the entire range tested, but a suggestion of a break in an otherwise smooth adsorption isotherm was evident. Values for the number of binding sites and the affinity varied considerably depending upon the range of available bacterial concentrations used to estimate them; high correlation coefficients were obtained in all cases. The use of low bacterial cell concentrations yielded lower values for the number of sites and much higher values for the affinity constant than did the use of high bacterial cell concentrations. When data covering the entire range of bacterial concentrations were employed, values for the number of sites and the affinity were similar to those obtained by using only high bacterial cell concentrations. The simplest explanation for these results is that there are multiple binding sites for S. sanguis on saliva-treated hydroxyapatite surfaces. When present in low concentration, the streptococci evidently attach to more specific high-affinity sites which become saturated when higher bacterial concentrations are employed. The possibility of multiple binding sites was substantiated by comparing estimates of the adsorption parameters from a computer-simulated isotherm with those derived from the experimentally generated isotherm. A mathematical model describing bacterial adsorption to binary binding sites was further evidence for the existence of at least two classes of binding sites for S. sanguis. Far fewer streptococci adsorbed to experimental pellicles prepared from saliva depleted of bacterial aggregating activity when low numbers of streptococci were used, but the magnitude of this difference was considerably less when high streptococcal concentrations were employed. This suggests an association between salivary components which possess bacterial-aggregating activity and bacterial adsorption to high-affinity specific binding sites on saliva-treated hydroxyapatite surfaces. PMID:6822416
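
    A small sketch of how a two-class (high-affinity plus low-affinity) Langmuir-type isotherm can be fitted to adsorption data of this kind; the concentrations and parameter values are synthetic, not the paper's measurements.

      import numpy as np
      from scipy.optimize import curve_fit

      def two_site_isotherm(c, n1, k1, n2, k2):
          """Adsorbed cells as the sum of two independent Langmuir-type binding classes."""
          return n1 * k1 * c / (1.0 + k1 * c) + n2 * k2 * c / (1.0 + k2 * c)

      # Synthetic data generated from a known two-site model plus noise
      rng = np.random.default_rng(1)
      c_avail = np.array([5., 10., 50., 100., 500., 1000., 3000., 6000.])  # 10^6 cells/mL
      adsorbed = two_site_isotherm(c_avail, 4.0, 0.05, 40.0, 0.0004)
      adsorbed *= 1 + rng.normal(0, 0.03, c_avail.size)

      params, _ = curve_fit(two_site_isotherm, c_avail, adsorbed,
                            p0=[3., 0.03, 30., 0.001], maxfev=20000)
      n1, k1, n2, k2 = params
      print(f"high-affinity sites: N = {n1:.1f}, K = {k1:.3g}")
      print(f"low-affinity sites:  N = {n2:.1f}, K = {k2:.3g}")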

  16. Considerations in Phase Estimation and Event Location Using Small-aperture Regional Seismic Arrays

    NASA Astrophysics Data System (ADS)

    Gibbons, Steven J.; Kværna, Tormod; Ringdal, Frode

    2010-05-01

    The global monitoring of earthquakes and explosions at decreasing magnitudes necessitates the fully automatic detection, location and classification of an ever increasing number of seismic events. Many seismic stations of the International Monitoring System are small-aperture arrays designed to optimize the detection and measurement of regional phases. Collaboration with operators of mines within regional distances of the ARCES array, together with waveform correlation techniques, has provided an unparalleled opportunity to assess the ability of a small-aperture array to provide robust and accurate direction and slowness estimates for phase arrivals resulting from well-constrained events at sites of repeating seismicity. A significant reason for the inaccuracy of current fully-automatic event location estimates is the use of f-k slowness estimates measured in variable frequency bands. The variability of slowness and azimuth measurements for a given phase from a given source region is reduced by the application of almost any constant frequency band. However, the frequency band resulting in the most stable estimates varies greatly from site to site. Situations are observed in which regional P arrivals from two sites, far closer than the theoretical resolution of the array, result in highly distinct populations in slowness space. This means that the f-k estimates, even at relatively low frequencies, can be sensitive to source and path-specific characteristics of the wavefield and should be treated with caution when inferring a geographical backazimuth under the assumption of a planar wavefront arriving along the great-circle path. Moreover, different frequency bands are associated with different biases meaning that slowness and azimuth station corrections (commonly denoted SASCs) cannot be calibrated, and should not be used, without reference to the frequency band employed. We demonstrate an example where fully-automatic locations based on a source-region specific fixed-parameter template are more stable than the corresponding analyst reviewed estimates. The reason is that the analyst selects a frequency band and analysis window which appears optimal for each event. In this case, the frequency band which produces the most consistent direction estimates has neither the best SNR nor the greatest beam-gain, and is therefore unlikely to be chosen by an analyst without calibration data.

  17. Maxine: A spreadsheet for estimating dose from chronic atmospheric radioactive releases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jannik, Tim; Bell, Evaleigh; Dixon, Kenneth

    MAXINE is an EXCEL© spreadsheet, which is used to estimate dose to individuals for routine and accidental atmospheric releases of radioactive materials. MAXINE does not contain an atmospheric dispersion model, but rather doses are estimated using air and ground concentrations as input. Minimal input is required to run the program, and site-specific parameters are used when possible. Complete code description, verification of models, and user's manual have been included.

  18. PHYSICOCHEMICAL PROPERTY CALCULATIONS

    EPA Science Inventory

    Computer models have been developed to estimate a wide range of physical-chemical properties from molecular structure. The SPARC modeling system approaches calculations as site specific reactions (pKa, hydrolysis, hydration) and `whole molecule' properties (vapor pressure, boilin...

  19. Inference about species richness and community structure using species-specific occupancy models in the National Swiss Breeding Bird Survey MHB

    USGS Publications Warehouse

    Kery, M.; Royle, J. Andrew; Thomson, David L.; Cooch, Evan G.; Conroy, Michael J.

    2009-01-01

    Species richness is the most widely used biodiversity measure. Virtually always, it cannot be observed but needs to be estimated because some species may be present but remain undetected. This fact is commonly ignored in ecology and management, although it will bias estimates of species richness and related parameters such as occupancy, turnover or extinction rates. We describe a species community modeling strategy based on species-specific models of occurrence, from which estimates of important summaries of community structure, e.g., species richness, occupancy, or measures of similarity among species or sites, are derived by aggregating indicators of occurrence for all species observed in the sample, and for the estimated complement of unobserved species. We use data augmentation for an efficient Bayesian approach to estimation and prediction under this model based on MCMC in WinBUGS. For illustration, we use the Swiss breeding bird survey (MHB) that conducts 2–3 territory-mapping surveys in a systematic sample of 267 1 km² units on quadrat-specific routes averaging 5.1 km to obtain species-specific estimates of occupancy, and estimates of species richness of all diurnal species free of distorting effects of imperfect detectability. We introduce into our model species-specific covariates relevant to occupancy (elevation, forest cover, route length) and sampling (season, effort). From 1995 to 2004, 185 diurnal breeding bird species were known in Switzerland, and an additional 13 bred 1–3 times since 1900. 134 species were observed during MHB surveys in 254 quadrats surveyed in 2001, and our estimate of 169.9 (95% CI 151–195) therefore appeared sensible. The observed number of species ranged from 4 to 58 (mean 32.8), but with an estimated 0.7–11.2 (mean 2.6) further, unobserved species, the estimated proportion of detected species was 0.48–0.98 (mean 0.91). As is well known, species richness declined at higher elevation and fell above the timberline, and most species showed some preferred elevation. Route length had clear effects on occupancy, suggesting it is a proxy for the size of the effectively sampled area. Detection probability of most species showed clear seasonal patterns and increased with greater survey effort; these are important results for the planning of focused surveys. The main benefit of our model, and its implementation in WinBUGS for which we provide code, is its conceptual simplicity. Species richness is naturally expressed as the sum of occurrences of individual species. Information about species is combined across sites, which yields greater efficiency or may even enable estimation for sites with very few observed species in the first place. At the same time, species detections are clearly segregated into a true state process (occupancy) and an observation process (detection, given occupancy), and covariates can be readily introduced, which provides for efficient introduction of such additional information as well as sharp testing of such relationships.
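
    The model described here is a Bayesian community model fitted by data augmentation and MCMC in WinBUGS; as a much simpler illustration of the core idea of separating occupancy from detection, the sketch below fits a single-species occupancy model by maximum likelihood to simulated detection histories.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import expit

      rng = np.random.default_rng(42)
      n_sites, n_visits = 254, 3
      psi_true, p_true = 0.6, 0.4                    # occupancy and per-visit detection
      z = rng.random(n_sites) < psi_true             # true occupancy state
      y = (rng.random((n_sites, n_visits)) < p_true) & z[:, None]
      detections = y.sum(axis=1)                     # detection history summary per site

      def neg_log_lik(params):
          psi, p = expit(params)                     # parameters on the logit scale
          det = detections > 0
          ll_det = (np.log(psi) + detections[det] * np.log(p)
                    + (n_visits - detections[det]) * np.log(1 - p))
          ll_non = np.log(psi * (1 - p) ** n_visits + (1 - psi))
          return -(ll_det.sum() + (~det).sum() * ll_non)

      fit = minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
      psi_hat, p_hat = expit(fit.x)
      print(f"naive occupancy  : {np.mean(detections > 0):.2f}")
      print(f"estimated psi, p : {psi_hat:.2f}, {p_hat:.2f}")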

  20. Leachate migration from an in-situ oil-shale retort near Rock Springs, Wyoming

    USGS Publications Warehouse

    Glover, Kent C.

    1988-01-01

    Hydrogeologic factors influencing leachate movement from an in-situ oil-shale retort near Rock Springs, Wyoming, were investigated through models of ground-water flow and solute transport. Leachate, indicated by the conservative ion thiocyanate, has been observed ½ mile downgradient from the retort. The contaminated aquifer is part of the Green River Formation and consists of thin, permeable layers of tuff and sandstone interbedded with oil shale. Most solute migration has occurred in an 8-foot sandstone at the top of the aquifer. Ground-water flow in the study area is complexly three dimensional and is characterized by large vertical variations in hydraulic head. The solute-transport model was used to predict the concentration of thiocyanate at a point where ground water discharges to the land surface. Leachate with peak concentrations of thiocyanate--45 milligrams per liter or approximately one-half the initial concentration of retort water--was estimated to reach the discharge area during January 1985. This report describes many of the advantages, as well as the problems, of site-specific studies. Data such as the distribution of thin, permeable beds or fractures might introduce an unmanageable degree of complexity to basin-wide studies but can be incorporated readily into site-specific models. Solute migration in the study area occurs primarily in thin, permeable beds rather than in oil-shale strata. Because of this behavior, leachate traveled far greater distances than might otherwise have been expected. The detail possible in site-specific models permits more accurate prediction of solute transport than is possible with basin-wide models. A major problem in site-specific studies is identifying model boundaries that permit the accurate estimation of aquifer properties. If the quantity of water flowing through a study area cannot be determined prior to modeling, the hydraulic conductivity and ground-water velocity will be poorly estimated.

  1. Leachate migration from an in situ oil-shale retort near Rock Springs, Wyoming

    USGS Publications Warehouse

    Glover, K.C.

    1986-01-01

    Geohydrologic factors influencing leachate movement from an in situ oil shale retort near Rock Springs, Wyoming, were investigated by developing models of groundwater flow and solute transport. Leachate, indicated by the conservative ion thiocyanate, has been observed 1/2 mi downgradient from the retort. The contaminated aquifer is part of the Green River Formation and consists of thin, permeable layers of tuff and sandstone interbedded with oil shale. Most solute migration has occurred in an 8-ft sandstone at the top of the aquifer. Groundwater flow in the study area is complexly 3-D and is characterized by large vertical variations in hydraulic head. The solute transport model was used to predict the concentration of thiocyanate at a point where groundwater discharges to the land surface. Leachates with peak concentrations of thiocyanate--45 mg/L or approximately one-half the initial concentration of retort water--were estimated to reach the discharge area during January 1985. Advantages as well as the problems of site specific studies are described. Data such as the distribution of thin permeable beds or fractures may introduce an unmanageable degree of complexity to basin-wide studies but can be incorporated readily in site specific models. Solute migration in the study area primarily occurs in thin permeable beds rather than in oil shale strata. Because of this behavior, leachate traveled far greater distances than might otherwise have been expected. The detail possible in site specific models permits more accurate prediction of solute transport than is possible with basin-wide models. A major problem in site specific studies is identifying model boundaries that permit the accurate estimation of aquifer properties. If the quantity of water flowing through a study area cannot be determined prior to modeling, the hydraulic conductivity and groundwater velocity will be estimated poorly. (Author's abstract)

  2. Evaluating meteorological data from weather stations, and from satellites and global models for a multi-site epidemiological study.

    PubMed

    Colston, Josh M; Ahmed, Tahmeed; Mahopo, Cloupas; Kang, Gagandeep; Kosek, Margaret; de Sousa Junior, Francisco; Shrestha, Prakash Sunder; Svensen, Erling; Turab, Ali; Zaitchik, Benjamin

    2018-04-21

    Longitudinal and time series analyses are needed to characterize the associations between hydrometeorological parameters and health outcomes. Earth Observation (EO) climate data products derived from satellites and global model-based reanalysis have the potential to be used as surrogates in situations and locations where weather-station based observations are inadequate or incomplete. However, these products often lack direct evaluation at specific sites of epidemiological interest. Standard evaluation metrics of correlation, agreement, bias and error were applied to a set of ten hydrometeorological variables extracted from two quasi-global, commonly used climate data products - the Global Land Data Assimilation System (GLDAS) and Climate Hazards Group InfraRed Precipitation with Stations (CHIRPS) - to evaluate their performance relative to weather-station derived estimates at the specific geographic locations of the eight sites in a multi-site cohort study. These metrics were calculated for both daily estimates and 7-day averages and for a rotavirus-peak-season subset. Then the variables from the two sources were each used as predictors in longitudinal regression models to test their association with rotavirus infection in the cohort after adjusting for covariates. The availability and completeness of station-based validation data varied depending on the variable and study site. The performance of the two gridded climate models varied considerably within the same location and for the same variable across locations, according to different evaluation criteria and for the peak-season compared to the full dataset in ways that showed no obvious pattern. They also differed in the statistical significance of their association with the rotavirus outcome. For some variables, the station-based records showed a strong association while the EO-derived estimates showed none, while for others, the opposite was true. Researchers wishing to utilize publicly available climate data - whether EO-derived or station based - are advised to recognize their specific limitations both in the analysis and the interpretation of the results. Epidemiologists engaged in prospective research into environmentally driven diseases should install their own weather monitoring stations at their study sites whenever possible, in order to circumvent the constraints of choosing between distant or incomplete station data or unverified EO estimates. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
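
    A minimal sketch of the kind of station-versus-gridded comparison described (correlation, bias, error, and an agreement index); the exact metric definitions used in the study are not given in the abstract, and the series below are hypothetical.

      import numpy as np

      def evaluate(station, gridded):
          """Compare a gridded (EO/reanalysis) series against station observations."""
          station, gridded = np.asarray(station, float), np.asarray(gridded, float)
          err = gridded - station
          r = np.corrcoef(station, gridded)[0, 1]            # correlation
          bias = err.mean()                                   # mean bias
          rmse = np.sqrt((err ** 2).mean())                   # error magnitude
          denom = (np.abs(gridded - station.mean()) + np.abs(station - station.mean())) ** 2
          d = 1.0 - (err ** 2).sum() / denom.sum()            # Willmott index of agreement
          return dict(r=r, bias=bias, rmse=rmse, agreement=d)

      # Hypothetical daily maximum temperatures at one study site
      station = np.array([30.1, 31.4, 29.8, 32.0, 33.2, 31.0, 28.9])
      gridded = np.array([29.0, 30.8, 30.5, 31.2, 32.0, 30.1, 29.5])
      print(evaluate(station, gridded))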

  3. 77 FR 74421 - Approval and Promulgation of Air Quality Implementation Plans for PM2.5

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-14

    ... calculation of future year PM 2.5 design values using the SMAT assumptions contained in the modeled guidance\\4... components. Future PM 2.5 design values at specified monitoring sites were estimated by adding the future... nonattainment area, all future site-specific PM 2.5 design values were below the concentration specified in the...

  4. Water-quality characteristics, trends, and nutrient and sediment loads of streams in the Treyburn development area, North Carolina, 1988–2009

    USGS Publications Warehouse

    Fine, Jason M.; Harned, Douglas A.; Oblinger, Carolyn J.

    2013-01-01

    Streamflow and water-quality data, including concentrations of nutrients, metals, and pesticides, were collected from October 1988 through September 2009 at six sites in the Treyburn development study area. A review of water-quality data for streams in and near a 5,400-acre planned, mixed-use development in the Falls Lake watershed in the upper Neuse River Basin of North Carolina indicated only small-scale changes in water quality since the previous assessment of data collected from 1988 to 1998. Loads and yields were estimated for sediment and nutrients, and temporal trends were assessed for specific conductance, pH, and concentrations of dissolved oxygen, suspended sediment, and nutrients. Water-quality conditions for the Little River tributary and Mountain Creek may reflect development within these basins. The nitrogen and phosphorus concentrations at the Treyburn sites are low compared to sites nationally. The herbicides atrazine, metolachlor, prometon, and simazine were detected frequently at Mountain Creek and Little River tributary but concentrations are low compared to sites nationally. Little River tributary had the lowest median suspended-sediment yield over the 1988–2009 study period, whereas Flat River tributary had the largest median yield. The yields estimated for suspended sediment and nutrients were low compared to yields estimated for other basins in the Southeastern United States. Recent increasing trends were detected in total nitrogen concentration and suspended-sediment concentrations for Mountain Creek, and an increasing trend was detected in specific conductance for Little River tributary. Decreasing trends were detected in dissolved nitrite plus nitrate nitrogen, total ammonia plus organic nitrogen, sediment, and specific conductance for Flat River tributary. Water chemical concentrations, loads, yields, and trends for the Treyburn study sites reflect some effects of upstream development. These measures of water quality are generally low, however, compared to regional and national averages.

  5. Rapid-estimation method for assessing scour at highway bridges

    USGS Publications Warehouse

    Holnbeck, Stephen R.

    1998-01-01

    A method was developed by the U.S. Geological Survey for rapid estimation of scour at highway bridges using limited site data and analytical procedures to estimate pier, abutment, and contraction scour depths. The basis for the method was a procedure recommended by the Federal Highway Administration for conducting detailed scour investigations, commonly referred to as the Level 2 method. Using pier, abutment, and contraction scour results obtained from Level 2 investigations at 122 sites in 10 States, envelope curves and graphical relations were developed that enable determination of scour-depth estimates at most bridge sites in a matter of a few hours. Rather than using complex hydraulic variables, surrogate variables more easily obtained in the field were related to calculated scour-depth data from Level 2 studies. The method was tested by having several experienced individuals apply the method in the field, and results were compared among the individuals and with previous detailed analyses performed for the sites. Results indicated that the variability in predicted scour depth among individuals applying the method generally was within an acceptable range, and that conservatively greater scour depths generally were obtained by the rapid-estimation method compared to the Level 2 method. The rapid-estimation method is considered most applicable for conducting limited-detail scour assessments and as a screening tool to determine those bridge sites that may require more detailed analysis. The method is designed to be applied only by a qualified professional possessing knowledge and experience in the fields of bridge scour, hydraulics, and flood hydrology, and having specific expertise with the Level 2 method.

  6. Red-shouldered hawk occupancy surveys in central Minnesota, USA

    USGS Publications Warehouse

    Henneman, C.; McLeod, M.A.; Andersen, D.E.

    2007-01-01

    Forest-dwelling raptors are often difficult to detect because many species occur at low density or are secretive. Broadcasting conspecific vocalizations can increase the probability of detecting forest-dwelling raptors and has been shown to be an effective method for locating raptors and assessing their relative abundance. Recent advances in statistical techniques based on presence-absence data use probabilistic arguments to derive probability of detection when it is <1 and to provide a model and likelihood-based method for estimating proportion of sites occupied. We used these maximum-likelihood models with data from red-shouldered hawk (Buteo lineatus) call-broadcast surveys conducted in central Minnesota, USA, in 1994-1995 and 2004-2005. Our objectives were to obtain estimates of occupancy and detection probability 1) over multiple sampling seasons (yr), 2) incorporating within-season time-specific detection probabilities, 3) with call type and breeding stage included as covariates in models of probability of detection, and 4) with different sampling strategies. We visited individual survey locations 2-9 times per year, and estimates of both probability of detection (range = 0.28-0.54) and site occupancy (range = 0.81-0.97) varied among years. Detection probability was affected by inclusion of a within-season time-specific covariate, call type, and breeding stage. In 2004 and 2005 we used survey results to assess the effect that number of sample locations, double sampling, and discontinued sampling had on parameter estimates. We found that estimates of probability of detection and proportion of sites occupied were similar across different sampling strategies, and we suggest ways to reduce sampling effort in a monitoring program.

  7. Prospective CO2 saline resource estimation methodology: Refinement of existing US-DOE-NETL methods based on data availability

    DOE PAGES

    Goodman, Angela; Sanguinito, Sean; Levine, Jonathan S.

    2016-09-28

    Carbon storage resource estimation in subsurface saline formations plays an important role in establishing the scale of carbon capture and storage activities for governmental policy and commercial project decision-making. Prospective CO2 resource estimation of large regions or subregions, such as a basin, occurs at the initial screening stages of a project using only limited publicly available geophysical data, i.e. prior to project-specific site selection data generation. As the scale of investigation is narrowed and selected areas and formations are identified, prospective CO2 resource estimation can be refined and uncertainty narrowed when site-specific geophysical data are available. Here, we refine the United States Department of Energy – National Energy Technology Laboratory (US-DOE-NETL) methodology as the scale of investigation is narrowed from very large regional assessments down to selected areas and formations that may be developed for commercial storage. In addition, we present a new notation that explicitly identifies differences between data availability and data sources used for geologic parameters and efficiency factors as the scale of investigation is narrowed. This CO2 resource estimation method is available for screening formations in a tool called CO2-SCREEN.
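
    The abstract does not reproduce the equations, but the US-DOE-NETL prospective saline storage estimate is volumetric in form (area x thickness x porosity x CO2 density x efficiency factor); a minimal sketch under that assumption, with placeholder inputs.

      def prospective_co2_storage(area_m2, gross_thickness_m, porosity,
                                  co2_density_kg_m3, storage_efficiency):
          """Volumetric prospective storage estimate, G = A * h * phi * rho * E,
          returned in megatonnes of CO2."""
          mass_kg = (area_m2 * gross_thickness_m * porosity
                     * co2_density_kg_m3 * storage_efficiency)
          return mass_kg / 1e9   # kg -> Mt

      # Placeholder basin-scale inputs (not values from the paper)
      estimate = prospective_co2_storage(
          area_m2=5.0e9,            # ~5,000 km2 assessment area
          gross_thickness_m=100.0,
          porosity=0.15,
          co2_density_kg_m3=700.0,  # supercritical CO2 at depth
          storage_efficiency=0.02,  # low-percentile efficiency factor
      )
      print(f"prospective CO2 storage resource: {estimate:,.0f} Mt")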

  8. Estimating soil moisture exceedance probability from antecedent rainfall

    NASA Astrophysics Data System (ADS)

    Cronkite-Ratcliff, C.; Kalansky, J.; Stock, J. D.; Collins, B. D.

    2016-12-01

    The first storms of the rainy season in coastal California, USA, add moisture to soils but rarely trigger landslides. Previous workers proposed that antecedent rainfall, the cumulative seasonal rain from October 1 onwards, had to exceed specific amounts in order to trigger landsliding. Recent monitoring of soil moisture upslope of historic landslides in the San Francisco Bay Area shows that storms can cause positive pressure heads once soil moisture values exceed a threshold of volumetric water content (VWC). We propose that antecedent rainfall could be used to estimate the probability that VWC exceeds this threshold. A major challenge to estimating the probability of exceedance is that rain gauge records are frequently incomplete. We developed a stochastic model to impute (infill) missing hourly precipitation data. This model uses nearest neighbor-based conditional resampling of the gauge record using data from nearby rain gauges. Using co-located VWC measurements, imputed data can be used to estimate the probability that VWC exceeds a specific threshold for a given antecedent rainfall. The stochastic imputation model can also provide an estimate of uncertainty in the exceedance probability curve. Here we demonstrate the method using soil moisture and precipitation data from several sites located throughout Northern California. Results show a significant variability between sites in the sensitivity of VWC exceedance probability to antecedent rainfall.
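
    A simplified sketch of the end product described here, an empirical exceedance-probability curve for VWC as a function of antecedent rainfall with a bootstrap uncertainty band; it omits the nearest-neighbor imputation step and uses synthetic data.

      import numpy as np

      rng = np.random.default_rng(7)

      # Synthetic paired records: seasonal antecedent rainfall (mm) and volumetric water content
      antecedent = rng.uniform(0, 600, 2000)
      vwc = 0.15 + 0.25 * (1 - np.exp(-antecedent / 200)) + rng.normal(0, 0.04, antecedent.size)
      threshold = 0.35          # VWC above which positive pressure heads can develop

      bins = np.arange(0, 650, 50)
      for lo, hi in zip(bins[:-1], bins[1:]):
          in_bin = (antecedent >= lo) & (antecedent < hi)
          exceed = vwc[in_bin] > threshold
          # bootstrap a rough 90% band on the exceedance probability
          boot = [rng.choice(exceed, exceed.size, replace=True).mean() for _ in range(500)]
          p, p_lo, p_hi = exceed.mean(), np.percentile(boot, 5), np.percentile(boot, 95)
          print(f"{lo:3.0f}-{hi:3.0f} mm antecedent rain: P(VWC > {threshold}) = "
                f"{p:.2f} ({p_lo:.2f}-{p_hi:.2f})")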

  9. Prospective CO2 saline resource estimation methodology: Refinement of existing US-DOE-NETL methods based on data availability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goodman, Angela; Sanguinito, Sean; Levine, Jonathan S.

    Carbon storage resource estimation in subsurface saline formations plays an important role in establishing the scale of carbon capture and storage activities for governmental policy and commercial project decision-making. Prospective CO2 resource estimation of large regions or subregions, such as a basin, occurs at the initial screening stages of a project using only limited publicly available geophysical data, i.e. prior to project-specific site selection data generation. As the scale of investigation is narrowed and selected areas and formations are identified, prospective CO2 resource estimation can be refined and uncertainty narrowed when site-specific geophysical data are available. Here, we refine the United States Department of Energy – National Energy Technology Laboratory (US-DOE-NETL) methodology as the scale of investigation is narrowed from very large regional assessments down to selected areas and formations that may be developed for commercial storage. In addition, we present a new notation that explicitly identifies differences between data availability and data sources used for geologic parameters and efficiency factors as the scale of investigation is narrowed. This CO2 resource estimation method is available for screening formations in a tool called CO2-SCREEN.

  10. Preference heterogeneity in a count data model of demand for off-highway vehicle recreation

    Treesearch

    Thomas P Holmes; Jeffrey E Englin

    2010-01-01

    This paper examines heterogeneity in the preferences for OHV recreation by applying the random parameters Poisson model to a data set of off-highway vehicle (OHV) users at four National Forest sites in North Carolina. The analysis develops estimates of individual consumer surplus and finds that estimates are systematically affected by the random parameter specification...

  11. QUANTIFICATION OF METHANE EMISSIONS AND DISCUSSION OF NITROUS OXIDE, AND AMMONIA EMISSIONS FROM SEPTIC TANKS, LATRINES, AND STAGNANT OPEN SEWERS OF THE WORLD

    EPA Science Inventory

    The report gives results of a first attempt to estimate global and country-specific methane (CH4) emissons from sewers and on-site wastewater treatment systems, including latrines and septic sewage tanks. It follows a report that includes CH4 and nitrous oxide (N2O) estimates fro...

  12. Prediction of TF target sites based on atomistic models of protein-DNA complexes

    PubMed Central

    Angarica, Vladimir Espinosa; Pérez, Abel González; Vasconcelos, Ana T; Collado-Vides, Julio; Contreras-Moreira, Bruno

    2008-01-01

    Background: The specific recognition of genomic cis-regulatory elements by transcription factors (TFs) plays an essential role in the regulation of coordinated gene expression. Studying the mechanisms determining binding specificity in protein-DNA interactions is thus an important goal. Most current approaches for modeling TF specific recognition rely on the knowledge of large sets of cognate target sites and consider only the information contained in their primary sequence. Results: Here we describe a structure-based methodology for predicting sequence motifs starting from the coordinates of a TF-DNA complex. Our algorithm combines information regarding the direct and indirect readout of DNA into an atomistic statistical model, which is used to estimate the interaction potential. We first measure the ability of our method to correctly estimate the binding specificities of eight prokaryotic and eukaryotic TFs that belong to different structural superfamilies. Secondly, the method is applied to two homology models, finding that sampling of interface side-chain rotamers remarkably improves the results. Thirdly, the algorithm is compared with a reference structural method based on contact counts, obtaining comparable predictions for the experimental complexes and more accurate sequence motifs for the homology models. Conclusion: Our results demonstrate that atomic-detail structural information can be feasibly used to predict TF binding sites. The computational method presented here is universal and might be applied to other systems involving protein-DNA recognition. PMID:18922190

  13. Developing ecologically based PCB, pesticide, and metal remedial goals for an impacted northeast wooded swamp

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rury, P.M.; Turton, D.J.

    Historically, remedial goals at hazardous waste sites have been developed based on human health risk estimates. As the disciplines of remedial investigation, risk assessment, and remedial design have evolved, there has been a shift toward the development of remedial goals that are protective of both human health and the environment. This has increased the need for sound quantitative ecological risk methodologies from which to derive ecologically protective remedial goals. The foundation of many ecological risk assessment models is the bioconcentration or bioaccumulation factor that estimates the partitioning of the compound of concern between the media (e.g., water, soil, or food) and the organism. Simple dietary food-chain models are then used to estimate the dose and resulting risk to higher trophic levels. For a Superfund site that encompassed a northeastern wooded swamp, a PCB, pesticide, and metal uptake and toxicity study was conducted on the earthworm commonly known as the red wiggler (Eisenea foetida). The study resulted in site-specific sediment to earthworm bioconcentration factors for PCBs and a range of pesticides and metals. In addition, largemouth bass and yellow perch were collected from an impacted pond to identify PCB and pesticide concentrations in mink (Mustela vison) prey. Utilizing the empirical data and site-specific bioconcentration factors in food-chain models, potential risks to the American woodcock (Scolopax minor) and mink were assessed, and ecologically protective PCB, pesticide, and metal remedial goals for the sediments of the wooded swamp were developed.
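
    A rough sketch of the dietary food-chain step described here, using a generic wildlife exposure form (dose = prey concentration x intake / body weight); the BCF, intake rate, and toxicity reference value below are placeholders, not the study's values.

      def daily_dose(sediment_conc_mg_kg, bcf, intake_kg_d, body_weight_kg):
          """Dietary dose (mg/kg body weight/day) for one prey item, estimated from
          sediment concentration via a sediment-to-prey bioconcentration factor."""
          prey_conc = sediment_conc_mg_kg * bcf        # concentration in the prey tissue
          return prey_conc * intake_kg_d / body_weight_kg

      # Hypothetical woodcock exposure to a PCB via earthworms
      dose = daily_dose(sediment_conc_mg_kg=2.0,   # sediment PCB concentration
                        bcf=1.5,                    # sediment-to-earthworm BCF
                        intake_kg_d=0.05,           # earthworm ingestion rate (kg/day)
                        body_weight_kg=0.2)
      trv = 0.5                                      # hypothetical toxicity reference value (mg/kg-bw/d)
      print(f"dose = {dose:.2f} mg/kg-bw/d, hazard quotient = {dose / trv:.1f}")
      # A protective sediment remedial goal follows by solving for the sediment
      # concentration at which the hazard quotient equals 1.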

  14. Tree-, stand- and site-specific controls on landscape-scale patterns of transpiration

    NASA Astrophysics Data System (ADS)

    Hassler, Sibylle; Weiler, Markus; Blume, Theresa

    2017-04-01

    Transpiration is a key process in the hydrological cycle and a sound understanding and quantification of transpiration and its spatial variability is essential for management decisions as well as for improving the parameterisation of hydrological and soil-vegetation-atmosphere transfer models. For individual trees, transpiration is commonly estimated by measuring sap flow. Besides evaporative demand and water availability, tree-specific characteristics such as species, size or social status control sap flow amounts of individual trees. Within forest stands, properties such as species composition, basal area or stand density additionally affect sap flow, for example via competition mechanisms. Finally, sap flow patterns might also be influenced by landscape-scale characteristics such as geology, slope position or aspect because they affect water and energy availability; however, little is known about the dynamic interplay of these controls. We studied the relative importance of various tree-, stand- and site-specific characteristics with multiple linear regression models to explain the variability of sap velocity measurements in 61 beech and oak trees, located at 24 sites spread over a 290 km²-catchment in Luxembourg. For each of 132 consecutive days of the growing season of 2014 we modelled the daily sap velocities of these 61 trees and determined the importance of the different predictors. Results indicate that a combination of tree-, stand- and site-specific factors controls sap velocity patterns in the landscape, namely tree species, tree diameter, the stand density, geology and aspect. Compared to these predictors, spatial variability of atmospheric demand and soil moisture explains only a small fraction of the variability in the daily datasets. However, the temporal dynamics of the explanatory power of the tree-specific characteristics, especially species, are correlated to the temporal dynamics of potential evaporation. Thus, transpiration estimates at the landscape scale would benefit from not only considering hydro-meteorological drivers, but also including tree, stand and site characteristics in order to improve the spatial representation of transpiration for hydrological and soil-vegetation-atmosphere transfer models.

  15. Does Infection Site Matter? A Systematic Review of Infection Site Mortality in Sepsis.

    PubMed

    Motzkus, Christine A; Luckmann, Roger

    2017-09-01

    Sepsis treatment protocols emphasize source control with empiric antibiotics and fluid resuscitation. Previous reviews have examined the impact of infection site and specific pathogens on mortality from sepsis; however, no recent review has addressed the infection site. This review focuses on the impact of infection site on hospital mortality among patients with sepsis. The PubMed database was searched for articles from 2001 to 2014. Studies were eligible if they included (1) one or more statistical models with hospital mortality as the outcome and considered infection site for inclusion in the model and (2) adult patients with sepsis, severe sepsis, or septic shock. Data abstracted included stage of sepsis, infection site, and raw and adjusted effect estimates. Nineteen studies were included. Infection sites most studied included respiratory (n = 19), abdominal (n = 19), genitourinary (n = 18), and skin and soft tissue infections (n = 11). Several studies found a statistically significant lower mortality risk for genitourinary infections on hospital mortality when compared to respiratory infections. Based on studies included in this review, the impact of infection site in patients with sepsis on hospital mortality could not be reliably estimated. Misclassification among infections and disease states remains a serious possibility in studies on this topic.

  16. A k-nearest neighbor approach for estimation of single-tree biomass

    Treesearch

    Lutz Fehrmann; Christoph Kleinn

    2007-01-01

    Allometric biomass models are typically site and species specific. They are mostly based on a low number of independent variables such as diameter at breast height and tree height. Because of relatively small datasets, their validity is limited to the set of conditions of the study, such as site conditions and diameter range. One challenge in the context of the current...
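
    A minimal sketch of the general k-nearest-neighbor idea for single-tree biomass (not the authors' specific implementation), using simulated reference trees and scikit-learn.

      import numpy as np
      from sklearn.neighbors import KNeighborsRegressor

      rng = np.random.default_rng(3)

      # Synthetic reference trees: diameter at breast height (cm), height (m), biomass (kg)
      dbh = rng.uniform(8, 60, 300)
      height = 1.3 + 0.6 * dbh + rng.normal(0, 2.0, dbh.size)
      biomass = 0.07 * dbh ** 2.4 * (1 + rng.normal(0, 0.1, dbh.size))

      # Predict a new tree's biomass from its k most similar reference trees
      x = np.column_stack([dbh, height])
      knn = KNeighborsRegressor(n_neighbors=5, weights="distance").fit(x, biomass)

      target = np.array([[32.0, 21.0]])     # a new tree: 32 cm DBH, 21 m tall
      print(f"k-NN biomass estimate: {knn.predict(target)[0]:.0f} kg")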

  17. Evaluation of a simple, point-scale hydrologic model in simulating soil moisture using the Delaware environmental observing system

    NASA Astrophysics Data System (ADS)

    Legates, David R.; Junghenn, Katherine T.

    2018-04-01

    Many local weather station networks that measure a number of meteorological variables (i.e., mesonetworks) have recently been established, with soil moisture occasionally being part of the suite of measured variables. These mesonetworks provide data from which detailed estimates of various hydrological parameters, such as precipitation and reference evapotranspiration, can be made which, when coupled with simple surface characteristics available from soil surveys, can be used to obtain estimates of soil moisture. The question is: Can meteorological data be used with a simple hydrologic model to accurately estimate daily soil moisture at a mesonetwork site? Using a state-of-the-art mesonetwork that also includes soil moisture measurements across the US State of Delaware, the efficacy of a simple, modified Thornthwaite/Mather-based daily water balance model based on these mesonetwork observations to estimate site-specific soil moisture is determined. Results suggest that the model works reasonably well for most well-drained sites and provides good qualitative estimates of measured soil moisture, often near the accuracy of the soil moisture instrumentation. The model exhibits particular trouble in that it cannot properly simulate the slow drainage that occurs in poorly drained soils after heavy rains, and interception loss, resulting from grass not being short cropped as expected, also adversely affects the simulation. However, the model could be tuned to accommodate some non-standard siting characteristics.
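
    A minimal sketch of a Thornthwaite/Mather-style daily bucket of the kind referred to here, assuming the standard exponential drying function; the available water capacity and forcing are placeholders, not Delaware station values.

      import numpy as np

      def tm_soil_moisture(precip, pet, awc=100.0, s0=None):
          """Daily Thornthwaite/Mather-style bucket: soil moisture storage (mm) given
          precipitation and reference ET (mm/day) and available water capacity awc."""
          s = awc if s0 is None else s0
          out = []
          for p, e in zip(precip, pet):
              if p >= e:
                  s = min(awc, s + (p - e))        # wetting: excess beyond awc becomes surplus
              else:
                  s = s * np.exp(-(e - p) / awc)   # drying: exponential soil-moisture depletion
              out.append(s)
          return np.array(out)

      # Placeholder mesonetwork-style forcing for ten days (mm/day)
      precip = np.array([0., 12., 0., 0., 25., 0., 0., 0., 5., 0.])
      pet    = np.array([4.,  3., 5., 5.,  3., 4., 5., 5., 4., 4.])
      print(np.round(tm_soil_moisture(precip, pet, awc=120.0), 1))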

  18. Evaluating noninvasive genetic sampling techniques to estimate large carnivore abundance.

    PubMed

    Mumma, Matthew A; Zieminski, Chris; Fuller, Todd K; Mahoney, Shane P; Waits, Lisette P

    2015-09-01

    Monitoring large carnivores is difficult because of intrinsically low densities and can be dangerous if physical capture is required. Noninvasive genetic sampling (NGS) is a safe and cost-effective alternative to physical capture. We evaluated the utility of two NGS methods (scat detection dogs and hair sampling) to obtain genetic samples for abundance estimation of coyotes, black bears and Canada lynx in three areas of Newfoundland, Canada. We calculated abundance estimates using program capwire, compared sampling costs, and the cost/sample for each method relative to species and study site, and performed simulations to determine the sampling intensity necessary to achieve abundance estimates with coefficients of variation (CV) of <10%. Scat sampling was effective for both coyotes and bears and hair snags effectively sampled bears in two of three study sites. Rub pads were ineffective in sampling coyotes and lynx. The precision of abundance estimates was dependent upon the number of captures/individual. Our simulations suggested that ~3.4 captures/individual will result in a < 10% CV for abundance estimates when populations are small (23-39), but fewer captures/individual may be sufficient for larger populations. We found scat sampling was more cost-effective for sampling multiple species, but suggest that hair sampling may be less expensive at study sites with limited road access for bears. Given the dependence of sampling scheme on species and study site, the optimal sampling scheme is likely to be study-specific warranting pilot studies in most circumstances. © 2015 John Wiley & Sons Ltd.

  19. ACID RAIN MODELING

    EPA Science Inventory

    This paper provides an overview of existing statistical methodologies for the estimation of site-specific and regional trends in wet deposition. The interaction of atmospheric processes and emissions tend to produce wet deposition data patterns that show large spatial and tempora...

  20. Remedial options for creosote-contaminated sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, W.J.; Delshad, M.; Oolman, T.

    2000-03-31

    Free-phase DNAPL recovery operations are becoming increasingly prevalent at creosote-contaminated aquifer sites. This paper illustrates the potential of both classical and innovative recovery methods. The UTCHEM multiphase flow and transport numerical simulator was used to predict the migration of creosote DNAPL during a hypothetical spill event, during long-term redistribution after the spill, and for a variety of subsequent free-phase DNAPL recovery operations. The physical parameters used for the DNAPL and the aquifer in the model are estimates for a specific creosote DNAPL site. Other simulations were also conducted using physical parameters that are typical of a trichloroethene (TCE) DNAPL. Dramatic differences in DNAPL migration were observed between these simulations.

  1. Simulation of groundwater flow and streamflow depletion in the Branch Brook, Merriland River, and parts of the Mousam River watersheds in southern Maine

    USGS Publications Warehouse

    Nielsen, Martha G.; Locke, Daniel B.

    2015-01-01

    The study evaluated two different methods of calculating in-stream flow requirements for Branch Brook and the Merriland River—a set of statewide equations used to calculate monthly median flows and the MOVE.1 record-extension technique applied to site-specific streamflow measurements. The August median in-stream flow requirement in the Merriland River was calculated as 7.18 ft3/s using the statewide equations but was 3.07 ft3/s using the MOVE.1 analysis. In Branch Brook, the August median in-stream flow requirements were calculated as 20.3 ft3/s using the statewide equations and 11.8 ft3/s using the MOVE.1 analysis. In each case, using site-specific data yields an estimate of in-stream flow that is much lower than the estimate provided by the statewide equations.
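
    MOVE.1 (Maintenance Of Variance Extension, type 1) fits a line-of-organic-correlation relation between concurrent flows at the short-record site and a long-record index gage and then transfers the index record to the site. The Python sketch below captures that idea only; the log transformation and the variable names are assumptions for illustration, not details taken from the report.

      import numpy as np

      def move1_fit(index_concurrent, site_concurrent):
          """Fit MOVE.1 (line of organic correlation) on concurrent daily flows."""
          x = np.log(np.asarray(index_concurrent, dtype=float))   # long-record index gage
          y = np.log(np.asarray(site_concurrent, dtype=float))    # short-record study site
          slope = np.sign(np.corrcoef(x, y)[0, 1]) * y.std(ddof=1) / x.std(ddof=1)
          intercept = y.mean() - slope * x.mean()
          return intercept, slope

      def move1_extend(intercept, slope, index_flows):
          """Estimate flows at the study site from the full index-gage record."""
          return np.exp(intercept + slope * np.log(np.asarray(index_flows, dtype=float)))

      # An August median in-stream flow estimate would then be np.median() of the
      # extended August flows at the site.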

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seiple, Timothy E.; Coleman, André M.; Skaggs, Richard L.

    Within the United States and Puerto Rico, publicly owned treatment works (POTWs) process 130.5 GL/d (34.5 Bgal/d) of wastewater, producing sludge as a waste product. Emerging technologies offer novel waste-to-energy pathways through whole sludge conversion into biofuels. Assessing the feasibility, scalability and tradeoffs of various energy conversion pathways is difficult in the absence of highly spatially resolved estimates of sludge production. In this study, average wastewater solids concentrations and removal rates, and site-specific daily average influent flow are used to estimate site-specific annual sludge production on a dry weight basis for >15,000 POTWs. Current beneficial uses, regional production hotspots and feedstock aggregation potential are also assessed. Analyses indicate 1) POTWs capture 12.56 Tg/y (13.84 MT/y) of dry solids; 2) 50% are not beneficially utilized; and 3) POTWs can support seven regions that aggregate >910 Mg/d (1000 T/d) of sludge within a travel distance of 100 km.
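
    The estimate described in this record is essentially influent flow times solids concentration times removal rate. The snippet below shows that arithmetic for a single hypothetical plant; the influent TSS and the removal fraction are illustrative assumptions, not figures from the study.

      # Back-of-the-envelope dry solids production for one hypothetical POTW.
      influent_flow_L_per_day = 40e6      # a 40 ML/d plant (assumed)
      influent_tss_mg_per_L = 220.0       # assumed influent total suspended solids
      solids_removal_fraction = 0.85      # assumed fraction of solids captured as sludge

      dry_solids_kg_per_day = (influent_flow_L_per_day
                               * influent_tss_mg_per_L
                               * solids_removal_fraction
                               * 1e-6)    # mg -> kg
      print(f"{dry_solids_kg_per_day:.0f} kg dry solids per day")   # about 7,480 kg/d here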

  3. [Nitrogen stress measurement of canola based on multi-spectral charged coupled device imaging sensor].

    PubMed

    Feng, Lei; Fang, Hui; Zhou, Wei-Jun; Huang, Min; He, Yong

    2006-09-01

    Site-specific variable nitrogen application is one of the major precision crop production management operations. Obtaining sufficient crop nitrogen stress information is essential for achieving effective site-specific nitrogen applications. The present paper describes the development of a multi-spectral nitrogen deficiency sensor, which uses three channels (green, red, near-infrared) of crop images to determine the nitrogen level of canola. This sensor assesses nitrogen stress by means of the estimated SPAD value of the canola, based on canopy reflectance sensed using the three channels (green, red, near-infrared) of the multi-spectral camera. The core of this investigation is the calibration between the multi-spectral measurements and the crop nitrogen levels measured using a SPAD 502 chlorophyll meter. Based on the results obtained from this study, it can be concluded that a multi-spectral CCD camera can provide sufficient information to perform reasonable estimation of SPAD values during field operations.
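
    The record does not give the calibration form. One common, minimal approach is to compute a vegetation index from the green and near-infrared channels and fit a linear relation to SPAD readings; the Python sketch below follows that assumption, and both the index choice and the linear form are illustrative rather than taken from the paper.

      import numpy as np

      def green_ndvi(nir, green):
          """Green NDVI from near-infrared and green band reflectances."""
          nir, green = np.asarray(nir, float), np.asarray(green, float)
          return (nir - green) / (nir + green)

      def calibrate_spad(index_values, spad_readings):
          """Least-squares linear calibration: SPAD ~ a + b * vegetation index."""
          b, a = np.polyfit(np.asarray(index_values, float),
                            np.asarray(spad_readings, float), deg=1)
          return a, b

      def estimate_spad(a, b, index_values):
          """Apply the calibration to new imagery-derived index values."""
          return a + b * np.asarray(index_values, float)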

  4. Measuring and modeling near surface reflected and emitted radiation fluxes at the FIFE site

    NASA Technical Reports Server (NTRS)

    Blad, Blaine L.; Norman, John M.; Walter-Shea, Elizabeth; Starks, Patrick; Vining, Roel; Hays, Cynthia

    1988-01-01

    Research was conducted during the four Intensive Field Campaigns (IFC) of the FIFE project in 1987. The research was done on a tall grass prairie with specific measurement sites on and near the Konza Prairie in Kansas. Measurements were made to help meet the following objectives: determination of the variability in reflected and emitted radiation fluxes in selected spectral wavebands as a function of topography and vegetative community; development of techniques to account for slope and sun angle effects on the radiation fluxes; estimation of shortwave albedo and net radiation fluxes using the reflected and emitted spectral measurements described; estimation of leaf and canopy spectral properties from calculated normalized differences coupled with off-nadir measurements using inversion techniques; estimation of plant water status at several locations with indices utilizing plant temperature and other environmental parameters; and determination of relationships between estimated plant water status and measured soil water content. Results are discussed.

  5. U.S. Department of Energy worker health risk evaluation methodology for assessing risks associated with environmental restoration and waste management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blaylock, B.P.; Legg, J.; Travis, C.C.

    1995-06-01

    This document describes a worker health risk evaluation methodology for assessing risks associated with Environmental Restoration (ER) and Waste Management (WM). The methodology is appropriate for estimating worker risks across the Department of Energy (DOE) Complex at both programmatic and site-specific levels. This document supports the worker health risk methodology used to perform the human health risk assessment portion of the DOE Programmatic Environmental Impact Statement (PEIS), although it has applications beyond the PEIS, such as installation-wide worker risk assessments, screening-level assessments, and site-specific assessments.

  6. Nest success, cause-specific nest failure, and hatchability of aquatic birds at selenium-contaminated Kesterson Reservoir and a reference site

    USGS Publications Warehouse

    Ohlendorf, Harry M.; Hothem, Roger L.; Welsh, Daniel

    1989-01-01

    During 1983-1985, we studied the reproductive success of several species of aquatic birds (coots, ducks, shorebirds, and grebes) nesting at two sites in Merced County, California: a selenium-contaminated site (Kesterson Reservoir) and a nearby reference site (Volta Wildlife Area). We used a computer program (MICROMORT) developed for the analysis of radiotelemetry data (Heisey and Fuller 1985) to estimate nest success and cause-specific failure rates, and then compared these parameters and hatchability between sites and among years. Nest success and causes of failure varied by species, site, and year. The most important causes of nest failure were usually predation, desertion, and water-level changes. However, embryotoxicosis (mortality, deformity, and lack of embryonic development) was the most important cause of nest failure in Eared Grebes (Podiceps nigricollis) at Kesterson Reservoir. Embryotoxicosis also reduced the hatchability of eggs of all other species at Kesterson in one or more years; embryonic mortality occurred rarely at Volta, and abnormalities were not observed.

  7. Near infrared spectroscopy for the nondestructive estimation of clear wood properties of Pinus taeda L. from the southern United States

    Treesearch

    Laurence R. Schimleck; P. David Jones; Alexander Clark; Richard F. Daniels; Gary F. Peter

    2005-01-01

    The estimation of specific gravity (SG), modulus of elasticity (MOE), and modulus of rupture (MOR) of loblolly pine (Pinus taeda L.) clear wood samples from a diverse range of sites across the southern United States was investigated using near infrared (NIR) spectroscopy. NIR spectra were obtained from the radial and cross sectional (original, rough...

  8. Improving the Navy’s Passive Underwater Acoustic Monitoring of Marine Mammal Populations

    DTIC Science & Technology

    2013-09-30

    passive acoustic monitoring: Correcting humpback whale call detections for site-specific and time-dependent environmental characteristics,” JASA Exp...marine mammal species using passive acoustic monitoring, with application to obtaining density estimates of transiting humpback whale populations in...minimize the variance of the density estimates, 3) to apply the numerical modeling methods for humpback whale vocalizations to understand distortions

  9. Prediction of Cancer Incidence and Mortality in Korea, 2018.

    PubMed

    Jung, Kyu-Won; Won, Young-Joo; Kong, Hyun-Joo; Lee, Eun Sook

    2018-04-01

    This study aimed to report on cancer incidence and mortality for the year 2018 to estimate Korea's current cancer burden. Cancer incidence data from 1999 to 2015 were obtained from the Korea National Cancer Incidence Database, and cancer mortality data from 1993 to 2016 were acquired from Statistics Korea. Cancer incidence and mortality were projected by fitting a linear regression model to observed age-specific cancer rates against observed years, then multiplying the projected age-specific rates by the age-specific population. The Joinpoint regression model was used to determine the year at which the linear trend changed significantly, and only data from the most recent trend were used. A total of 204,909 new cancer cases and 82,155 cancer deaths are expected to occur in Korea in 2018. The most common cancer sites are projected to be lung, followed by stomach, colorectal, breast and liver. These five cancers represent half of the overall burden of cancer in Korea. For mortality, the most common sites are projected to be lung, followed by liver, colorectal, stomach and pancreas. The incidence rates of all cancers in Korea are estimated to decrease gradually, mainly due to a decrease in thyroid cancer. These up-to-date estimates of the cancer burden in Korea could be an important resource for planning and evaluation of cancer-control programs.
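
    As a hedged sketch of the projection step described above (fit a linear trend to the age-specific rates of the latest Joinpoint segment, then scale by the projected population), the snippet below shows the arithmetic for a single age group; the function and argument names are illustrative.

      import numpy as np

      def project_cases(years, age_specific_rates, target_year, target_population):
          """Project cases for one age group from a linear trend in its rates.

          years              : calendar years in the most recent trend segment
          age_specific_rates : incidence rates per 100,000 for that age group
          target_population  : projected population of the age group in target_year
          """
          slope, intercept = np.polyfit(years, age_specific_rates, deg=1)
          projected_rate = max(intercept + slope * target_year, 0.0)   # rates cannot be negative
          return projected_rate / 1e5 * target_population

      # The national total is the sum of such projections over all age groups and cancer sites.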

  10. Are Synonymous Sites in Primates and Rodents Functionally Constrained?

    PubMed

    Price, Nicholas; Graur, Dan

    2016-01-01

    It has been claimed that synonymous sites in mammals are under selective constraint. Furthermore, in many studies the selective constraint at such sites in primates was claimed to be more stringent than that in rodents. Given the larger effective population sizes in rodents than in primates, the theoretical expectation is that selection in rodents would be more effective than that in primates. To resolve this contradiction between expectations and observations, we used processed pseudogenes as a model for strict neutral evolution, and estimated selective constraint on synonymous sites using the rate of substitution at pseudosynonymous and pseudononsynonymous sites in pseudogenes as the neutral expectation. After controlling for the effects of GC content, our results were similar to those from previous studies, i.e., synonymous sites in primates exhibited evidence for higher selective constraint than those in rodents. Specifically, our results indicated that in primates up to 24% of synonymous sites could be under purifying selection, while in rodents synonymous sites evolved neutrally. To further control for shifts in GC content, we estimated selective constraint at fourfold degenerate sites using a maximum parsimony approach. This allowed us to estimate selective constraint using mutational patterns that cause a shift in GC content (GT ↔ TG, CT ↔ TC, GA ↔ AG, and CA ↔ AC) and ones that do not (AT ↔ TA and CG ↔ GC). Using this approach, we found that synonymous sites evolve neutrally in both primates and rodents. Apparent deviations from neutrality were caused by a higher rate of C → A and C → T mutations in pseudogenes. Such differences are most likely caused by the shift in GC content experienced by pseudogenes. We conclude that previous estimates according to which 20-40% of synonymous sites in primates were under selective constraint were most likely artifacts of the biased pattern of mutation.

  11. Construction of a North American Cancer Survival Index to Measure Progress of Cancer Control Efforts

    PubMed Central

    Weir, Hannah K; Mariotto, Angela; Wilson, Reda; Nishri, Diane

    2017-01-01

    Introduction: Population-based cancer survival data provide insight into the effectiveness of health care delivery. Comparing survival for all cancer sites combined is challenging, because the primary cancer site and age distribution of patients may differ among areas or change over time. Cancer survival indices (CSIs) are summary measures of survival for cancers of all sites combined and are used in England and Europe to monitor temporal trends and examine geographic differences in survival. We describe the construction of the North American Cancer Survival Index and demonstrate how it can be used to compare survival by geographic area and by race. Methods: We used data from 36 US cancer registries to estimate relative survival ratios for people diagnosed with cancer from 2006 through 2012 to create the CSI: the weighted sum of age-standardized, site-specific, relative survival ratios, with weights derived from the distribution of incident cases by sex and primary site from 2006 through 2008. The CSI was calculated for 32 registries for all races, 31 registries for whites, and 12 registries for blacks. Results: The survival estimates standardized by age only versus age-, sex-, and site-standardized (CSI) were 64.1% (95% confidence interval [CI], 64.1%–64.2%) and 63.9% (95% CI, 63.8%–63.9%), respectively, for the United States for all races combined. The inter-registry ranges in unstandardized and CSI estimates decreased from 12.3% to 5.0% for whites, and from 5.4% to 3.9% for blacks. We found less inter-registry variation in CSI estimates than in unstandardized all-sites survival estimates, but disparities by race persisted. Conclusions: CSIs calculated for different jurisdictions or periods are directly comparable, because they are standardized by age, sex, and primary site. A national CSI could be used to measure temporal progress in meeting public health objectives, such as Healthy People 2030. PMID:28910593
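
    The CSI defined here is a weighted sum of site-specific relative survival ratios. The toy snippet below shows that aggregation step only; the sites, weights, and survival values are made up for illustration and are not taken from the study.

      def cancer_survival_index(site_survival, site_weights):
          """Weighted sum of age-standardized, site-specific relative survival ratios.

          site_survival : dict mapping (sex, site) -> age-standardized relative survival
          site_weights  : dict mapping (sex, site) -> share of incident cases (sums to 1)
          """
          return sum(site_weights[key] * site_survival[key] for key in site_weights)

      # Toy numbers only:
      survival = {("F", "breast"): 0.90, ("M", "lung"): 0.18, ("M", "prostate"): 0.97}
      weights = {("F", "breast"): 0.40, ("M", "lung"): 0.25, ("M", "prostate"): 0.35}
      print(cancer_survival_index(survival, weights))   # about 0.74 with these toy inputs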

  12. Repeat 24-hour recalls and locally developed food composition databases: a feasible method to estimate dietary adequacy in a multi-site preconception maternal nutrition RCT.

    PubMed

    Lander, Rebecca L; Hambidge, K Michael; Krebs, Nancy F; Westcott, Jamie E; Garces, Ana; Figueroa, Lester; Tejeda, Gabriela; Lokangaka, Adrien; Diba, Tshilenge S; Somannavar, Manjunath S; Honnayya, Ranjitha; Ali, Sumera A; Khan, Umber S; McClure, Elizabeth M; Thorsten, Vanessa R; Stolka, Kristen B

    2017-01-01

    Background: Our aim was to utilize a feasible quantitative methodology to estimate the dietary adequacy of >900 first-trimester pregnant women in poor rural areas of the Democratic Republic of the Congo, Guatemala, India and Pakistan. This paper outlines the dietary methods used. Methods: Local nutritionists were trained at the sites by the lead study nutritionist and received ongoing mentoring throughout the study. Training topics focused on the standardized conduct of repeat multiple-pass 24-hr dietary recalls, including interview techniques, estimation of portion sizes, and construction of a unique site-specific food composition database (FCDB). Each FCDB was based on 13 food groups and included values for moisture, energy, 20 nutrients (i.e., macro- and micronutrients), and phytate (an anti-nutrient). Nutrient values for individual foods or beverages were taken from recently developed FAO-supported regional food composition tables or the USDA national nutrient database. Appropriate adjustments for differences in moisture and application of nutrient retention and yield factors after cooking were applied, as needed. Generic recipes for mixed dishes consumed by the study population were compiled at each site, followed by calculation of a median recipe per 100 g. Each recipe's nutrient values were included in the FCDB. Final site FCDB checks were planned according to FAO/INFOODS guidelines. Discussion: This dietary strategy provides the opportunity to assess estimated mean group usual energy and nutrient intakes and estimated prevalence of the population 'at risk' of inadequate intakes in first-trimester pregnant women living in four low- and middle-income countries. While challenges and limitations exist, this methodology demonstrates the practical application of a quantitative dietary strategy for a large international multi-site nutrition trial, providing within- and between-site comparisons. Moreover, it provides an excellent opportunity for local capacity building and each site FCDB can be easily modified for additional research activities conducted in other populations living in the same area.

  13. Efficient Site-Specific Labeling of Proteins via Cysteines

    PubMed Central

    Kim, Younggyu; Ho, Sam O.; Gassman, Natalie R.; Korlann, You; Landorf, Elizabeth V.; Collart, Frank R.; Weiss, Shimon

    2011-01-01

    Methods for chemical modifications of proteins have been crucial for the advancement of proteomics. In particular, site-specific covalent labeling of proteins with fluorophores and other moieties has permitted the development of a multitude of assays for proteome analysis. A common approach for such a modification is solvent-accessible cysteine labeling using thiol-reactive dyes. Cysteine is very attractive for site-specific conjugation due to its relative rarity throughout the proteome and the ease of its introduction into a specific site along the protein's amino acid chain. This is achieved by site-directed mutagenesis, most often without perturbing the protein's function. Bottlenecks in this reaction, however, include the maintenance of reactive thiol groups without oxidation before the reaction, and the effective removal of unreacted molecules prior to fluorescence studies. Here, we describe an efficient, specific, and rapid procedure for cysteine labeling starting from well-reduced proteins in the solid state. The efficacy and specificity of the improved procedure are estimated using a variety of single-cysteine proteins and thiol-reactive dyes. Based on UV/vis absorbance spectra, coupling efficiencies are typically in the range 70–90%, and specificities are better than ~95%. The labeled proteins are evaluated using fluorescence assays, proving that the covalent modification does not alter their function. In addition to maleimide-based conjugation, this improved procedure may be used for other thiol-reactive conjugations such as haloacetyl, alkyl halide, and disulfide interchange derivatives. This facile and rapid procedure is well suited for high throughput proteome analysis. PMID:18275130

  14. Efficient site-specific labeling of proteins via cysteines.

    PubMed

    Kim, Younggyu; Ho, Sam O; Gassman, Natalie R; Korlann, You; Landorf, Elizabeth V; Collart, Frank R; Weiss, Shimon

    2008-03-01

    Methods for chemical modifications of proteins have been crucial for the advancement of proteomics. In particular, site-specific covalent labeling of proteins with fluorophores and other moieties has permitted the development of a multitude of assays for proteome analysis. A common approach for such a modification is solvent-accessible cysteine labeling using thiol-reactive dyes. Cysteine is very attractive for site-specific conjugation due to its relative rarity throughout the proteome and the ease of its introduction into a specific site along the protein's amino acid chain. This is achieved by site-directed mutagenesis, most often without perturbing the protein's function. Bottlenecks in this reaction, however, include the maintenance of reactive thiol groups without oxidation before the reaction, and the effective removal of unreacted molecules prior to fluorescence studies. Here, we describe an efficient, specific, and rapid procedure for cysteine labeling starting from well-reduced proteins in the solid state. The efficacy and specificity of the improved procedure are estimated using a variety of single-cysteine proteins and thiol-reactive dyes. Based on UV/vis absorbance spectra, coupling efficiencies are typically in the range 70-90%, and specificities are better than approximately 95%. The labeled proteins are evaluated using fluorescence assays, proving that the covalent modification does not alter their function. In addition to maleimide-based conjugation, this improved procedure may be used for other thiol-reactive conjugations such as haloacetyl, alkyl halide, and disulfide interchange derivatives. This facile and rapid procedure is well suited for high throughput proteome analysis.

  15. Understanding the Socioeconomic Effects of Wildfires on Western U.S. Public Lands

    NASA Astrophysics Data System (ADS)

    Sanchez, J. J.; Srivastava, L.; Marcos-Martinez, R.

    2017-12-01

    Climate change has resulted in the increased severity and frequency of forest disturbances due to wildfires, droughts, pests and diseases that compromise the sustainable provision of forest ecosystem services (e.g., water quantity and quality, carbon sequestration, recreation). A better understanding of the environmental and socioeconomic consequences of forest disturbances (i.e., wildfires) could improve the management and protection of public lands. We used a single-site benefit transfer function and spatially explicit information for demographic, socioeconomic, and site-specific characteristics to estimate the monetized value of market and non-market ecosystem services provided by forests on Western US public lands. These estimates are then used to approximate the costs of forest disturbances caused by wildfires of varying frequency and intensity, and across sites with heterogeneous characteristics and protection and management strategies. Our analysis provides credible estimates of the benefits of the forest for land management by the United States Forest Service, thereby assisting forest managers in planning resourcing and budgeting priorities.

  16. Archaeology Through Computational Linguistics: Inscription Statistics Predict Excavation Sites of Indus Valley Artifacts.

    PubMed

    Recchia, Gabriel L; Louwerse, Max M

    2016-11-01

    Computational techniques comparing co-occurrences of city names in texts allow the relative longitudes and latitudes of cities to be estimated algorithmically. However, these techniques have not been applied to estimate the provenance of artifacts with unknown origins. Here, we estimate the geographic origin of artifacts from the Indus Valley Civilization, applying methods commonly used in cognitive science to the Indus script. We show that these methods can accurately predict the relative locations of archeological sites on the basis of artifacts of known provenance, and we further apply these techniques to determine the most probable excavation sites of four sealings of unknown provenance. These findings suggest that inscription statistics reflect historical interactions among locations in the Indus Valley region, and they illustrate how computational methods can help localize inscribed archeological artifacts of unknown origin. The success of this method offers opportunities for the cognitive sciences in general and for computational anthropology specifically. Copyright © 2015 Cognitive Science Society, Inc.

  17. Part 2. Development of Enhanced Statistical Methods for Assessing Health Effects Associated with an Unknown Number of Major Sources of Multiple Air Pollutants.

    PubMed

    Park, Eun Sug; Symanski, Elaine; Han, Daikwon; Spiegelman, Clifford

    2015-06-01

    A major difficulty with assessing source-specific health effects is that source-specific exposures cannot be measured directly; rather, they need to be estimated by a source-apportionment method such as multivariate receptor modeling. The uncertainty in source apportionment (uncertainty in source-specific exposure estimates and model uncertainty due to the unknown number of sources and identifiability conditions) has been largely ignored in previous studies. Also, spatial dependence of multipollutant data collected from multiple monitoring sites has not yet been incorporated into multivariate receptor modeling. The objectives of this project are (1) to develop a multipollutant approach that incorporates both sources of uncertainty in source-apportionment into the assessment of source-specific health effects and (2) to develop enhanced multivariate receptor models that can account for spatial correlations in the multipollutant data collected from multiple sites. We employed a Bayesian hierarchical modeling framework consisting of multivariate receptor models, health-effects models, and a hierarchical model on latent source contributions. For the health model, we focused on the time-series design in this project. Each combination of number of sources and identifiability conditions (additional constraints on model parameters) defines a different model. We built a set of plausible models with extensive exploratory data analyses and with information from previous studies, and then computed posterior model probability to estimate model uncertainty. Parameter estimation and model uncertainty estimation were implemented simultaneously by Markov chain Monte Carlo (MCMC*) methods. We validated the methods using simulated data. We illustrated the methods using PM2.5 (particulate matter ≤ 2.5 μm in aerodynamic diameter) speciation data and mortality data from Phoenix, Arizona, and Houston, Texas. The Phoenix data included counts of cardiovascular deaths and daily PM2.5 speciation data from 1995-1997. The Houston data included respiratory mortality data and 24-hour PM2.5 speciation data sampled every six days from a region near the Houston Ship Channel in years 2002-2005. We also developed a Bayesian spatial multivariate receptor modeling approach that, while simultaneously dealing with the unknown number of sources and identifiability conditions, incorporated spatial correlations in the multipollutant data collected from multiple sites into the estimation of source profiles and contributions based on the discrete process convolution model for multivariate spatial processes. This new modeling approach was applied to 24-hour ambient air concentrations of 17 volatile organic compounds (VOCs) measured at nine monitoring sites in Harris County, Texas, during years 2000 to 2005. Simulation results indicated that our methods were accurate in identifying the true model and estimated parameters were close to the true values. The results from our methods agreed in general with previous studies on the source apportionment of the Phoenix data in terms of estimated source profiles and contributions. However, we had a greater number of statistically insignificant findings, which was likely a natural consequence of incorporating uncertainty in the estimated source contributions into the health-effects parameter estimation. 
For the Houston data, a model with five sources (that seemed to be Sulfate-Rich Secondary Aerosol, Motor Vehicles, Industrial Combustion, Soil/Crustal Matter, and Sea Salt) showed the highest posterior model probability among the candidate models considered when fitted simultaneously to the PM2.5 and mortality data. There was a statistically significant positive association between respiratory mortality and same-day PM2.5 concentrations attributed to one of the sources (probably industrial combustion). The Bayesian spatial multivariate receptor modeling approach applied to the VOC data led to a highest posterior model probability for a model with five sources (that seemed to be refinery, petrochemical production, gasoline evaporation, natural gas, and vehicular exhaust) among several candidate models, with the number of sources varying between three and seven and with different identifiability conditions. Our multipollutant approach assessing source-specific health effects is more advantageous than a single-pollutant approach in that it can estimate total health effects from multiple pollutants and can also identify emission sources that are responsible for adverse health effects. Our Bayesian approach can incorporate not only uncertainty in the estimated source contributions, but also model uncertainty that has not been addressed in previous studies on assessing source-specific health effects. The new Bayesian spatial multivariate receptor modeling approach enables predictions of source contributions at unmonitored sites, minimizing exposure misclassification and providing improved exposure estimates along with their uncertainty estimates, as well as accounting for uncertainty in the number of sources and identifiability conditions.
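
    The study selects among candidate models (three to seven sources, under different identifiability conditions) by posterior model probability. The Python snippet below is a hedged sketch of the final normalization step only: in the study the marginal likelihoods come from MCMC output, whereas here a BIC-style approximation is mentioned as one common stand-in and the numerical values are purely illustrative.

      import numpy as np

      def posterior_model_probabilities(log_marginal_likelihoods, prior_probs=None):
          """Posterior probability of each candidate model (e.g., different source counts).

          log_marginal_likelihoods : one value per model; -BIC/2 is a common rough stand-in
          when full marginal likelihoods are unavailable.
          """
          logml = np.asarray(log_marginal_likelihoods, dtype=float)
          if prior_probs is None:
              prior_probs = np.full(logml.size, 1.0 / logml.size)   # equal prior model weight
          log_post = logml + np.log(prior_probs)
          log_post -= log_post.max()                 # stabilize before exponentiating
          post = np.exp(log_post)
          return post / post.sum()

      # Illustrative values for models with 3, 4, 5, 6, and 7 sources:
      print(posterior_model_probabilities([-5210.4, -5188.9, -5180.2, -5183.7, -5195.1]))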

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paton, Ian

    The Rocky Flats Environmental Technology Site (RFETS) is a Department of Energy facility located approximately 16 miles northwest of Denver, Colorado. Processing and fabrication of nuclear weapons components occurred at Rocky Flats from 1952 through 1989. Operations at the Site included the use of several radionuclides, including plutonium-239/240 (Pu), americium-241 (Am), and various uranium (U) isotopes, as well as several types of chlorinated solvents. The historic operations resulted in legacy contamination, including contaminated facilities, process waste lines, buried wastes and surface soil contamination. Decontamination and removal of buildings at the site was completed in late 2005, culminating more than ten years of active environmental remediation work. The Corrective Action Decision/Record of Decision was subsequently approved in 2006, signifying regulatory approval and closure of the site. The use of RFETS as a National Wildlife Refuge is scheduled to be in full operation by 2012. Developing a plan for remediating the different types of radionuclide contaminants present in the RFETS environment required understanding the different environmental transport pathways for the various actinides. Developing this understanding was the primary objective of the Actinide Migration Evaluation (AME) project. Findings from the AME studies were used in the development of RFETS remediation strategies. The AME project focused on issues of actinide behavior and mobility in surface water, groundwater, air, soil and biota at RFETS. For the purposes of the AME studies, actinide elements addressed included Pu, Am, and U. The AME program, funded by DOE, brought together personnel with a broad range of relevant expertise in technical investigations. The AME advisory panel identified research investigations and approaches that could be used to solve issues related to actinide migration at the Site. An initial step of the AME was to develop a conceptual model to provide a qualitative description of the relationships among potential actinide sources and transport pathways at RFETS. One conceptual model was developed specifically for plutonium and americium, because of their similar geochemical and transport properties. A separate model was developed for uranium because of its different properties and mobility in the environment. These conceptual models were guidelines for quantitative analyses described in the RFETS Pathway Analysis Report, which used existing data from the literature as well as site-specific analyses, including field, laboratory and modeling studies to provide quantitative estimates of actinide migration in the RFETS environment. For pathways where more than one method was used to estimate offsite loads for a specific pathway, the method yielding the highest estimated off-site load was used for comparison purposes. For all actinides studied, for pre-remediation conditions, air and surface water were identified to be the dominant transport mechanisms. The estimated annual airborne plutonium-239/240 load transported off site exceeded the surface water load by roughly a factor of 40. However, despite being the largest transport pathway, airborne radionuclide concentrations at the monitoring location with the highest measurements during the period studied were less than two percent of the allowable 10 milli-rem standard governing DOE facilities. Estimated actinide loads for other pathways were much less.
    Shallow groundwater was approximately two orders of magnitude lower, or 1/100 of the load conveyed in surface water. The estimated biological pathway load for plutonium was approximately five orders of magnitude less, or 1/100,000, of the load estimated for surface water. The pathway analysis results were taken into consideration during subsequent remediation activities that occurred at the site. For example, when the 903 Pad area was remediated to address elevated concentrations of Pu and Am in the surface soil, portable tent structures were constructed to prevent wind and water erosion from occurring while remediation activities took place. Following remediation of the 903 Pad and surrounding area, coconut erosion blankets were installed to mitigate erosion effects while vegetation was reestablished [2]. These measures were effective tools to address the primary transport mechanisms identified, coupling the scientific understanding of the site with the remediation strategy.

  19. Development of estimation method for crop yield using MODIS satellite imagery data and process-based model for corn and soybean in US Corn-Belt region

    NASA Astrophysics Data System (ADS)

    Lee, J.; Kang, S.; Jang, K.; Ko, J.; Hong, S.

    2012-12-01

    Crop productivity is associated with food security; hence, several models have been developed to estimate crop yield by combining remote sensing data with carbon cycle processes. In the present study, we attempted to estimate crop GPP and NPP using an algorithm based on the LUE model and a simplified respiration model. The states of Iowa and Illinois were chosen as the study site for estimating crop yield over a 5-year period (2006-2010), as they form the main Corn-Belt area in the US. The present study focuses on developing crop-specific parameters for corn and soybean to estimate crop productivity and on yield mapping using satellite remote sensing data. We utilized 10 km spatial resolution daily meteorological data from WRF to provide meteorological variables on cloudy days, whereas on clear-sky days MODIS-based meteorological data were utilized to estimate daily GPP, NPP, and biomass. County-level statistics on yield, area harvested, and production were used to test model-predicted crop yield. The estimated input meteorological variables from MODIS and WRF showed good agreement with ground observations from 6 Ameriflux tower sites in 2006. For example, correlation coefficients ranged from 0.93 to 0.98 for Tmin and Tavg, from 0.68 to 0.85 for daytime mean VPD, and from 0.85 to 0.96 for daily shortwave radiation. We developed a county-specific crop conversion coefficient, i.e., the ratio of yield to biomass on DOY 260, and then validated the estimated county-level crop yield against the statistical yield data. The estimated corn and soybean yields at the county level ranged from 671 g m-2 y-1 to 1393 g m-2 y-1 and from 213 g m-2 y-1 to 421 g m-2 y-1, respectively. The county-specific yield estimation mostly showed errors of less than 10%. Furthermore, we estimated crop yields at the state level, which were validated against the statistics data and showed errors of less than 1%. Further analysis of the crop conversion coefficient was conducted for DOY 200 and DOY 280. For the case of DOY 280, crop yield estimation showed better accuracy for soybean at the county level. Though the case of DOY 200 resulted in less accuracy (i.e., 20% mean bias), it provides a useful tool for early forecasting of crop yield. We improved the spatial accuracy of estimated crop yield at the county level by developing county-specific crop conversion coefficients. Our results indicate that aboveground crop biomass can be estimated successfully with the simple LUE and respiration models combined with MODIS data and that county-specific conversion coefficients can differ across counties. Hence, applying region-specific conversion coefficients is necessary to estimate crop yield with better accuracy.
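
    The LUE-based chain described above (meteorology and fPAR to GPP, GPP to NPP and biomass, biomass to yield via a conversion coefficient) can be summarized in a few lines of Python. The sketch below is a hedged illustration only: the maximum light-use efficiency, the temperature and VPD ramps, the respiration fraction, and the conversion coefficient are placeholder values, not the study's calibrated crop-specific parameters.

      def daily_gpp(par, fpar, tmin, vpd, eps_max=2.5):
          """GPP (g C m-2 d-1) = eps_max * temperature scalar * VPD scalar * fPAR * PAR."""
          t_scalar = min(max((tmin + 8.0) / 20.0, 0.0), 1.0)     # assumed Tmin ramp (deg C)
          vpd_scalar = min(max((4.0 - vpd) / 3.0, 0.0), 1.0)     # assumed VPD ramp (kPa)
          return eps_max * t_scalar * vpd_scalar * fpar * par    # par in MJ m-2 d-1

      def season_yield(gpp_series, resp_fraction=0.45, conversion_coeff=0.45):
          """Yield ~ conversion coefficient * biomass accumulated by DOY 260."""
          npp = [(1.0 - resp_fraction) * g for g in gpp_series]  # simplified respiration model
          biomass = sum(npp)
          return conversion_coeff * biomass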

  20. Multi-scale occupancy estimation and modelling using multiple detection methods

    USGS Publications Warehouse

    Nichols, James D.; Bailey, Larissa L.; O'Connell, Allan F.; Talancy, Neil W.; Grant, Evan H. Campbell; Gilbert, Andrew T.; Annand, Elizabeth M.; Husband, Thomas P.; Hines, James E.

    2008-01-01

    Occupancy estimation and modelling based on detection–nondetection data provide an effective way of exploring change in a species’ distribution across time and space in cases where the species is not always detected with certainty. Today, many monitoring programmes target multiple species, or life stages within a species, requiring the use of multiple detection methods. When multiple methods or devices are used at the same sample sites, animals can be detected by more than one method. We develop occupancy models for multiple detection methods that permit simultaneous use of data from all methods for inference about method-specific detection probabilities. Moreover, the approach permits estimation of occupancy at two spatial scales: the larger scale corresponds to species’ use of a sample unit, whereas the smaller scale corresponds to presence of the species at the local sample station or site. We apply the models to data collected on two different vertebrate species: striped skunks Mephitis mephitis and red salamanders Pseudotriton ruber. For striped skunks, large-scale occupancy estimates were consistent between two sampling seasons. Small-scale occupancy probabilities were slightly lower in the late winter/spring when skunks tend to conserve energy, and movements are limited to males in search of females for breeding. There was strong evidence of method-specific detection probabilities for skunks. As anticipated, large- and small-scale occupancy areas completely overlapped for red salamanders. The analyses provided weak evidence of method-specific detection probabilities for this species. Synthesis and applications. Increasingly, many studies are utilizing multiple detection methods at sampling locations. The modelling approach presented here makes efficient use of detections from multiple methods to estimate occupancy probabilities at two spatial scales and to compare detection probabilities associated with different detection methods. The models can be viewed as another variation of Pollock's robust design and may be applicable to a wide variety of scenarios where species occur in an area but are not always near the sampled locations. The estimation approach is likely to be especially useful in multispecies conservation programmes by providing efficient estimates using multiple detection devices and by providing device-specific detection probability estimates for use in survey design.

  1. Development of National Program of Cancer Registries SAS Tool for Population-Based Cancer Relative Survival Analysis.

    PubMed

    Dong, Xing; Zhang, Kevin; Ren, Yuan; Wilson, Reda; O'Neil, Mary Elizabeth

    2016-01-01

    Studying population-based cancer survival by leveraging the high-quality cancer incidence data collected by the Centers for Disease Control and Prevention's National Program of Cancer Registries (NPCR) can offer valuable insight into the cancer burden and impact in the United States. We describe the development and validation of a SAS macro tool that calculates population-based cancer site-specific relative survival estimates comparable to those obtained through SEER*Stat. The NPCR relative survival analysis SAS tool (NPCR SAS tool) was developed based on the relative survival method and SAS macros developed by Paul Dickman. NPCR cancer incidence data from 25 states submitted in November 2012 were used, specifically cases diagnosed from 2003 to 2010 with follow-up through 2010. Decennial and annual complete life tables published by the National Center for Health Statistics (NCHS) for 2000 through 2009 were used. To assess comparability between the 2 tools, 5-year relative survival rates were calculated for 25 cancer sites by sex, race, and age group using the NPCR SAS tool and the National Cancer Institute's SEER*Stat 8.1.5 software. A module to create data files for SEER*Stat was also developed for the NPCR SAS tool. Comparison of the results produced by both SAS and SEER*Stat showed comparable and reliable relative survival estimates for NPCR data. For a majority of the sites, the net differences between the NPCR SAS tool and SEER*Stat-produced relative survival estimates ranged from -0.1% to 0.1%. The estimated standard errors were highly comparable between the 2 tools as well. The NPCR SAS tool will allow researchers to accurately calculate cancer 5-year relative survival estimates that are comparable to those produced by SEER*Stat for NPCR data. Comparison of output from the NPCR SAS tool and SEER*Stat provided additional quality control capabilities for evaluating data prior to producing NPCR relative survival estimates.

  2. Chemical characterization and quantitative assessment of source-specific health risk of trace metals in PM1.0 at a road site of Delhi, India.

    PubMed

    Prakash, Jai; Lohia, Tarachand; Mandariya, Anil K; Habib, Gazala; Gupta, Tarun; Gupta, Sanjay K

    2018-03-01

    This study presents the concentration of submicron aerosol (PM1.0) collected during November 2009 to March 2010 at two road sites near the Indian Institute of Technology Delhi campus. In winter, PM1.0 composed 83% of PM2.5, indicating the dominance of combustion activity-generated particles. Principal component analysis (PCA) identified secondary aerosol formation as a dominant process in enhancing aerosol concentration at the receptor site, along with biomass burning, vehicle exhaust, road dust, engine and tire wear, and secondary ammonia. The non-carcinogenic and excess cancer risks for adults and children were estimated from the trace element data set available for the road site and for an elevated site from another parallel work. The decrease in average hazard quotient (HQ) for children and adults was estimated in the following order: Mn > Cr > Ni > Pb > Zn > Cu at both the road and elevated sites. For children, the mean HQs were at safe levels for Cu, Ni, Zn, and Pb; however, values exceeded the safe limit for Cr and Mn at the road site. The average highest hazard index values for children and adults were estimated as 22 and 10, respectively, for the road site and 7 and 3 for the elevated site. The road-site average excess cancer risk (ECR) of Cr and Ni was close to the tolerable limit (10⁻⁴) for adults, and it was 13-16 times higher than the safe limit (10⁻⁶) for children. The ECR of Ni for adults and children was 102 and 14 times higher at the road site compared to the elevated site. Overall, the observed ECR values far exceed the acceptable level.
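
    The screening arithmetic behind HQ and ECR values such as those reported here is simple; the Python sketch below shows it in a hedged form, with the exposure adjustment and all numeric inputs treated as illustrative assumptions rather than the study's values.

      def hazard_quotient(exposure_conc_ug_m3, reference_conc_ug_m3):
          """Non-carcinogenic hazard quotient: exposure divided by reference concentration."""
          return exposure_conc_ug_m3 / reference_conc_ug_m3

      def excess_cancer_risk(exposure_conc_ug_m3, inhalation_unit_risk_per_ug_m3,
                             exposure_years=24, averaging_years=70):
          """Lifetime excess cancer risk = time-adjusted concentration * inhalation unit risk."""
          adjusted = exposure_conc_ug_m3 * exposure_years / averaging_years
          return adjusted * inhalation_unit_risk_per_ug_m3

      # Toy inputs only; a hazard index is the sum of per-metal HQs, and values above 1
      # (or ECR above roughly 1e-6 to 1e-4) flag potential concern.
      print(hazard_quotient(exposure_conc_ug_m3=0.012, reference_conc_ug_m3=0.008))
      print(excess_cancer_risk(exposure_conc_ug_m3=0.02, inhalation_unit_risk_per_ug_m3=2.4e-4))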

  3. Survival, growth, and movement of subadult humpback chub, Gila cypha, in the Little Colorado River, Arizona

    USGS Publications Warehouse

    Dzul, Maria C.; Yackulic, Charles B.; Stone, Dennis M.; Van Haverbeke, David R.

    2016-01-01

    Ecologists estimate vital rates, such as growth and survival, to better understand population dynamics and identify sensitive life history parameters for species or populations of concern. Here, we assess spatiotemporal variation in growth, movement, density, and survival of subadult humpback chub living in the Little Colorado River, Grand Canyon, AZ, during 2001–2002 and 2009–2013. We divided the Little Colorado River into three reaches and used a multistate mark-recapture model to determine rates of movement and differences in survival and density between sites for different cohorts. Additionally, site-specific and year-specific effects on growth were evaluated using a linear model. Results indicate that summer growth was higher for upstream sites compared with downstream sites. In contrast, there was not a consistent spatial pattern across years in winter growth; however, river-wide winter growth was negatively related to the duration of floods from 1 October to 15 May. Apparent survival was estimated to be lower at the most downstream site compared with the upstream sites; however, this could be due in part to increased emigration into the Colorado River at downstream sites. Furthermore, the 2010 cohort (i.e., fish that were age 1 in 2010) exhibited high apparent survival relative to other years. Movement between reaches varied with year, and some years exhibited preferential upstream displacement. Improving our understanding of spatiotemporal effects on age-1 humpback chub survival can help inform current management efforts to translocate humpback chub into new locations and give us a better understanding of the factors that may limit this tributary's carrying capacity for humpback chub.

  4. Radionuclide transfer in marine coastal ecosystems, a modelling study using metabolic processes and site data.

    PubMed

    Konovalenko, L; Bradshaw, C; Kumblad, L; Kautsky, U

    2014-07-01

    This study implements new site-specific data and an improved process-based transport model for 26 elements (Ac, Ag, Am, Ca, Cl, Cm, Cs, Ho, I, Nb, Ni, Np, Pa, Pb, Pd, Po, Pu, Ra, Se, Sm, Sn, Sr, Tc, Th, U, Zr), and validates model predictions with site measurements and literature data. The model was applied in the safety assessment of a planned nuclear waste repository in Forsmark, Öregrundsgrepen (Baltic Sea). Radionuclide transport models are central in radiological risk assessments to predict radionuclide concentrations in biota and doses to humans. Usually concentration ratios (CRs), the ratio of the measured radionuclide concentration in an organism to the concentration in water, drive such models. However, CRs vary with space and time, and CR estimates for many organisms are lacking. In the model used in this study, radionuclides were assumed to follow the circulation of organic matter in the ecosystem, regulated by radionuclide-specific mechanisms and the metabolic rates of the organisms. Most input parameters were represented by log-normally distributed probability density functions (PDFs) to account for parameter uncertainty. Generally, modelled CRs for grazers, benthos, zooplankton and fish for the 26 elements were in good agreement with site-specific measurements. The uncertainty was reduced when the model was parameterized with site data, and modelled CRs were most similar to measured values for particle-reactive elements and for primary consumers. This study clearly demonstrated that it is necessary to validate models with more than just a few elements (e.g. Cs, Sr) in order to make them robust. The use of PDFs as input parameters, rather than averages or best estimates, enabled the estimation of the probable range of modelled CR values for the organism groups, an improvement over models that only estimate means. Using a mechanistic model that is constrained by ecological processes enables (i) the evaluation of the relative importance of food and water uptake pathways and processes such as assimilation and excretion, (ii) the possibility to extrapolate within element groups (a common requirement in many risk assessments when initial model parameters are scarce) and (iii) predictions of radionuclide uptake in the ecosystem after changes in ecosystem structure or environmental conditions. These features are important for the long-term (>1000 year) risk assessments that need to be considered for a deep nuclear waste repository. Copyright © 2013. Published by Elsevier Ltd.
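
    The record notes that lognormal PDFs on the inputs yield a probable range of CRs rather than a single value. The sketch below illustrates only that idea, with a deliberately simplified steady-state CR (uptake rate over loss rate); it is not the authors' process-based food-web model, and every parameter value is an assumption.

      import numpy as np

      def cr_range(median_uptake, gsd_uptake, median_loss, gsd_loss, n=10000, seed=0):
          """Propagate lognormal parameter uncertainty into a concentration-ratio range."""
          rng = np.random.default_rng(seed)
          uptake = rng.lognormal(np.log(median_uptake), np.log(gsd_uptake), n)   # e.g. L/kg/d
          loss = rng.lognormal(np.log(median_loss), np.log(gsd_loss), n)         # e.g. 1/d
          cr = uptake / loss                          # simplified steady-state concentration ratio
          return np.percentile(cr, [5, 50, 95])       # probable range rather than a point value

      print(cr_range(median_uptake=2.0, gsd_uptake=1.8, median_loss=0.05, gsd_loss=2.0))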

  5. Tree-, stand- and site-specific controls on landscape-scale patterns of transpiration

    NASA Astrophysics Data System (ADS)

    Kathrin Hassler, Sibylle; Weiler, Markus; Blume, Theresa

    2018-01-01

    Transpiration is a key process in the hydrological cycle, and a sound understanding and quantification of transpiration and its spatial variability is essential for management decisions as well as for improving the parameterisation and evaluation of hydrological and soil-vegetation-atmosphere transfer models. For individual trees, transpiration is commonly estimated by measuring sap flow. Besides evaporative demand and water availability, tree-specific characteristics such as species, size or social status control sap flow amounts of individual trees. Within forest stands, properties such as species composition, basal area or stand density additionally affect sap flow, for example via competition mechanisms. Finally, sap flow patterns might also be influenced by landscape-scale characteristics such as geology and soils, slope position or aspect because they affect water and energy availability; however, little is known about the dynamic interplay of these controls. We studied the relative importance of various tree-, stand- and site-specific characteristics with multiple linear regression models to explain the variability of sap velocity measurements in 61 beech and oak trees, located at 24 sites across a 290 km2 catchment in Luxembourg. For each of 132 consecutive days of the growing season of 2014 we modelled the daily sap velocity and derived sap flow patterns of these 61 trees, and we determined the importance of the different controls. Results indicate that a combination of mainly tree- and site-specific factors controls sap velocity patterns in the landscape, namely tree species, tree diameter, geology and aspect. For sap flow we included only the stand- and site-specific predictors in the models to ensure variable independence. Of those, geology and aspect were most important. Compared to these predictors, spatial variability of atmospheric demand and soil moisture explains only a small fraction of the variability in the daily datasets. However, the temporal dynamics of the explanatory power of the tree-specific characteristics, especially species, are correlated to the temporal dynamics of potential evaporation. We conclude that transpiration estimates on the landscape scale would benefit from not only consideration of hydro-meteorological drivers, but also tree, stand and site characteristics in order to improve the spatial and temporal representation of transpiration for hydrological and soil-vegetation-atmosphere transfer models.

  6. Identification of Cyclin-dependent Kinase 1 Specific Phosphorylation Sites by an In Vitro Kinase Assay.

    PubMed

    Cui, Heying; Loftus, Kyle M; Noell, Crystal R; Solmaz, Sozanne R

    2018-05-03

    Cyclin-dependent kinase 1 (Cdk1) is a master controller for the cell cycle in all eukaryotes and phosphorylates an estimated 8-13% of the proteome; however, the number of identified targets for Cdk1, particularly in human cells, is still low. The identification of Cdk1-specific phosphorylation sites is important, as they provide mechanistic insights into how Cdk1 controls the cell cycle. Cell cycle regulation is critical for faithful chromosome segregation, and defects in this complicated process lead to chromosomal aberrations and cancer. Here, we describe an in vitro kinase assay that is used to identify Cdk1-specific phosphorylation sites. In this assay, a purified protein is phosphorylated in vitro by commercially available human Cdk1/cyclin B. Successful phosphorylation is confirmed by SDS-PAGE, and phosphorylation sites are subsequently identified by mass spectrometry. We also describe purification protocols that yield highly pure and homogeneous protein preparations suitable for the kinase assay, and a binding assay for the functional verification of the identified phosphorylation sites, which probes the interaction between a classical nuclear localization signal (cNLS) and its nuclear transport receptor karyopherin α. To aid with experimental design, we review approaches for the prediction of Cdk1-specific phosphorylation sites from protein sequences. Together these protocols present a very powerful approach that yields Cdk1-specific phosphorylation sites and enables mechanistic studies into how Cdk1 controls the cell cycle. Since this method relies on purified proteins, it can be applied to any model organism and yields reliable results, especially when combined with cell functional studies.

  7. MRI-based intelligence quotient (IQ) estimation with sparse learning.

    PubMed

    Wang, Liye; Wee, Chong-Yaw; Suk, Heung-Il; Tang, Xiaoying; Shen, Dinggang

    2015-01-01

    In this paper, we propose a novel framework for IQ estimation using Magnetic Resonance Imaging (MRI) data. In particular, we devise a new feature selection method based on an extended dirty model for jointly considering both element-wise sparsity and group-wise sparsity. Meanwhile, due to the absence of a large dataset with consistent scanning protocols for IQ estimation, we integrate multiple datasets scanned from different sites with different scanning parameters and protocols. As a result, there is large variability across these datasets. To address this issue, we design a two-step procedure for 1) first identifying the possible scanning site for each testing subject and 2) then estimating the testing subject's IQ by using a specific estimator designed for that scanning site. We perform two experiments to test the performance of our method by using the MRI data collected from 164 typically developing children between 6 and 15 years old. In the first experiment, we use a multi-kernel Support Vector Regression (SVR) for estimating IQ values, and obtain an average correlation coefficient of 0.718 and an average root mean square error of 8.695 between the true IQs and the estimated ones. In the second experiment, we use a single-kernel SVR for IQ estimation, and achieve an average correlation coefficient of 0.684 and an average root mean square error of 9.166. All these results show the effectiveness of using imaging data for IQ prediction, which is rarely done in the field to our knowledge.
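
    A hedged sketch of the two-step procedure (site identification, then a site-specific estimator) is given below using scikit-learn; a single-kernel SVR stands in for the paper's multi-kernel version, the dirty-model feature selection is omitted, and all parameter choices are illustrative assumptions.

      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC, SVR

      def fit_site_classifier(features, site_labels):
          """Step 1: learn to identify the likely scanning site of a new subject."""
          clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
          return clf.fit(features, site_labels)

      def fit_site_specific_regressors(features, iq_scores, site_labels):
          """Step 2: train one IQ regressor per scanning site."""
          models = {}
          for site in np.unique(site_labels):
              mask = site_labels == site
              reg = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
              models[site] = reg.fit(features[mask], iq_scores[mask])
          return models

      def predict_iq(site_classifier, site_models, new_features):
          """new_features: array of shape (1, n_features) for a single testing subject."""
          site = site_classifier.predict(new_features)[0]
          return site_models[site].predict(new_features)[0]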

  8. A simplified gross primary production and evapotranspiration model for boreal coniferous forests - is a generic calibration sufficient?

    NASA Astrophysics Data System (ADS)

    Minunno, F.; Peltoniemi, M.; Launiainen, S.; Aurela, M.; Lindroth, A.; Lohila, A.; Mammarella, I.; Minkkinen, K.; Mäkelä, A.

    2015-07-01

    The problem of model complexity has been lively debated in environmental sciences as well as in the forest modelling community. Simple models are less input demanding and their calibration involves a lower number of parameters, but they might be suitable only at the local scale. In this work we calibrated a simplified ecosystem process model (PRELES) to data from multiple sites, and we tested whether PRELES can be used at the regional scale to estimate the carbon and water fluxes of boreal conifer forests. We compared a multi-site (M-S) calibration with site-specific (S-S) calibrations. Model calibrations and evaluations were carried out by means of Bayesian methods; Bayesian calibration (BC) and Bayesian model comparison (BMC) were used to quantify the uncertainty in model parameters and model structure. To evaluate model performance, BMC results were combined with a more classical analysis of model-data mismatch (M-DM). Evapotranspiration (ET) and gross primary production (GPP) measurements collected at 10 sites in Finland and Sweden were used in the study. Calibration results showed that similar estimates were obtained for the parameters to which model outputs are most sensitive. No significant differences were encountered in the predictions of the multi-site and site-specific versions of PRELES, with the exception of a site with agricultural history (Alkkia). Although PRELES predicted GPP better than evapotranspiration, we concluded that the model can be reliably used at the regional scale to simulate the carbon and water fluxes of boreal forests. Our analyses also underlined the importance of using long and carefully collected flux datasets in model calibration. In fact, even a single site can provide model calibrations that can be applied at a wider spatial scale, since it covers a wide range of variability in climatic conditions.

  9. Surface conformations of an anti-ricin aptamer and its affinity for ricin determined by atomic force microscopy and surface plasmon resonance.

    PubMed

    Wang, B; Lou, Z; Park, B; Kwon, Y; Zhang, H; Xu, B

    2015-01-07

    We used atomic force microscopy (AFM) and surface plasmon resonance (SPR) to study the surface conformations of an anti-ricin aptamer and its specific binding affinity for ricin molecules. The effect of surface modification of the Au(111) substrate on the aptamer affinity was also estimated. The AFM topography images had a resolution high enough to distinguish different aptamer conformations. The specific binding site on the aptamer molecule was clearly located by the AFM recognition images. The aptamer on a Au(111) surface modified with carboxymethylated dextran (CD) showed both similarities to and differences from the one without CD modification. The influence of CD modification was evaluated using AFM images of various aptamer conformations on the Au(111) surface. The affinity between ricin and the anti-ricin aptamer was estimated using the off-rate values measured by AFM and SPR. The SPR measurements of the ricin sample were conducted in the range from 83.3 pM to 8.33 nM, and the limit of detection was estimated as 25 pM (1.5 ng/mL). The off-rate values of the ricin-aptamer interactions were estimated using single-molecule dynamic force spectroscopy (DFS) and SPR as (7.3 ± 0.4) × 10⁻⁴ s⁻¹ and (1.82 ± 0.067) × 10⁻² s⁻¹, respectively. The results show that single-molecule measurements can yield reaction parameters that differ from those obtained in bulk solution. In the AFM single-molecule measurements, the various conformations of the aptamer immobilized on the gold surface determined the availability of each specific binding site to the ricin molecules. The SPR bulk-solution measurements averaged the signals from specific and non-specific interactions. AFM images and DFS measurements provide more specific information on the interactions of individual aptamer and ricin molecules.

  10. The public health impact of malaria vaccine RTS,S in malaria endemic Africa: country-specific predictions using 18 month follow-up Phase III data and simulation models.

    PubMed

    Penny, Melissa A; Galactionova, Katya; Tarantino, Michael; Tanner, Marcel; Smith, Thomas A

    2015-07-29

    The RTS,S/AS01 malaria vaccine candidate recently completed Phase III trials in 11 African sites. Recommendations for its deployment will partly depend on predictions of public health impact in endemic countries. Previous predictions used only limited information on underlying vaccine properties and did not consider country-specific contextual data. Each Phase III trial cohort was simulated explicitly using an ensemble of individual-based stochastic models and many hypothetical vaccine profiles. The true profile was estimated by Bayesian fitting of these models to the site- and time-specific incidence of clinical malaria in both trial arms over 18 months of follow-up. Health impacts of implementation via two vaccine schedules in 43 endemic sub-Saharan African countries, using country-specific prevalence, access to care, immunisation coverage and demography data, were predicted via weighted averaging over many simulations. Initial efficacy against infection after three doses was approximately 65% (when immunising 6-12-week-old infants) and 80% (children 5-17 months old), with a 1-year half-life (exponential decay). Either schedule will avert substantial disease, but the predicted impact depends strongly on the decay rate of vaccine effects and the average transmission intensity. For the first time, Phase III site- and time-specific data were available to estimate both the underlying profile of RTS,S/AS01 and likely country-specific health impacts. Initial efficacy will probably be high but will decay rapidly. Adding RTS,S to existing control programs, assuming continuation of current levels of malaria exposure and of health system performance, will potentially avert 100-580 malaria deaths and 45,000-80,000 clinical episodes per 100,000 fully vaccinated children over an initial 10-year phase.
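
    As a quick numerical check on the profile summarised above, an initial efficacy with a 1-year exponential half-life decays as follows; the 80% starting value and the time grid are illustrative only.

    ```python
    import numpy as np

    def efficacy(t_years, e0=0.80, half_life_years=1.0):
        """Efficacy against infection t years after the third dose (exponential decay)."""
        return e0 * np.exp(-np.log(2.0) * t_years / half_life_years)

    print([round(float(efficacy(t)), 3) for t in (0, 1, 2, 3)])   # [0.8, 0.4, 0.2, 0.1]
    ```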

  11. A Model-based Approach to Scaling GPP and NPP in Support of MODIS Land Product Validation

    NASA Astrophysics Data System (ADS)

    Turner, D. P.; Cohen, W. B.; Gower, S. T.; Ritts, W. D.

    2003-12-01

    Global products from the Earth-orbiting MODIS sensor include land cover, leaf area index (LAI), FPAR, 8-day gross primary production (GPP), and annual net primary production (NPP) at 1 km spatial resolution. The BigFoot Project was designed specifically to validate MODIS land products and has initiated ground measurements at 9 sites representing a wide array of vegetation types. An ecosystem process model (Biome-BGC) is used to generate estimates of GPP and NPP for each 5 km x 5 km BigFoot site. Model inputs include land cover and LAI (from Landsat ETM+), daily meteorological data (from a centrally located eddy covariance flux tower), and soil characteristics. Model-derived outputs are validated against field-measured NPP and flux-tower-derived GPP. The resulting GPP and NPP estimates are then aggregated to 1 km resolution for direct spatial comparison with the corresponding MODIS products. At the high-latitude sites (tundra and boreal forest), the MODIS GPP phenology closely tracks the BigFoot GPP, but there is a high bias in the MODIS GPP. At the temperate-zone sites, problems with the timing and magnitude of the MODIS FPAR introduce differences between MODIS GPP and the validation data at some sites. However, the MODIS LAI/FPAR data are currently being reprocessed (Collection 4) and new comparisons will be made for 2002. The BigFoot scaling approach permits precise overlap in spatial and temporal resolution between the MODIS products and BigFoot products, and thus permits the evaluation of specific components of the MODIS NPP algorithm. These components include meteorological inputs from the NASA Data Assimilation Office, LAI and FPAR from other MODIS algorithms, and biome-specific parameters for base respiration rate and light use efficiency.
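
    The spatial aggregation step, averaging fine-resolution model output up to the 1 km MODIS grid, can be sketched as a simple block mean; the 25 m cell size and the use of a plain mean are assumptions for illustration, not BigFoot specifics.

    ```python
    import numpy as np

    fine = np.random.rand(200, 200)      # 200 x 200 cells of 25 m = one 5 km x 5 km site
    block = 40                           # 40 cells of 25 m = 1 km
    coarse = fine.reshape(200 // block, block, 200 // block, block).mean(axis=(1, 3))
    print(coarse.shape)                  # (5, 5): one aggregated value per 1-km cell
    ```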

  12. Risk assessment of water quality in three North Carolina, USA, streams supporting federally endangered freshwater mussels (Unionidae)

    USGS Publications Warehouse

    Ward, S.; Augspurger, T.; Dwyer, F.J.; Kane, C.; Ingersoll, C.G.

    2007-01-01

    Water quality data were collected from three drainages supporting the endangered Carolina heelsplitter (Lasmigona decorata) and dwarf wedgemussel (Alasmidonta heterodon) to determine the potential for impaired water quality to limit the recovery of these freshwater mussels in North Carolina, USA. Total recoverable copper, total residual chlorine, and total ammonia nitrogen were measured every two months for approximately a year at sites bracketing wastewater sources and mussel habitat. These data and state monitoring datasets were compared with ecological screening values, including estimates of chemical concentrations likely to be protective of mussels, and with federal ambient water quality criteria to assess site risks following a hazard quotient approach. In one drainage, the site-specific ammonia ecological screening value for acute exposures was exceeded in 6% of the samples, and 15% of samples exceeded the chronic ecological screening value; ammonia concentrations were generally below levels of concern in the other drainages. Copper exceeded ecological screening values most frequently in all drainages (exceeding the acute ecological screening values in 65-94% of samples). Chlorine concentrations exceeding the acute water quality criterion were observed in 14% and 35% of samples in two of the three drainages. Ecological screening values were exceeded most frequently in the Goose Creek and Upper Tar River drainages; concentrations rarely exceeded ecological screening values in the Swift Creek drainage except for copper. The site-specific risk assessment approach provides valuable information (including site-specific risk estimates and ecological screening values for protection) that can be applied through regulatory and nonregulatory means to improve water quality for mussels where risks are indicated and pollutant threats persist. © 2007 SETAC.
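
    The hazard quotient screening used above reduces to dividing a measured concentration by the corresponding screening value and flagging quotients above 1. The concentrations and screening values in this sketch are invented for illustration (numerator and denominator in the same units), not values from the study.

    ```python
    measured = {"copper": 6.2, "ammonia_N": 1.2, "chlorine": 8.0}    # measured concentrations
    screening = {"copper": 3.8, "ammonia_N": 2.5, "chlorine": 11.0}  # screening values (same units)

    for analyte, conc in measured.items():
        hq = conc / screening[analyte]                               # hazard quotient
        status = "potential risk" if hq > 1 else "below screening value"
        print(f"{analyte}: HQ = {hq:.2f} ({status})")
    ```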

  13. Results of double ring infiltrometer investigations, Rocky Mountain Arsenal, July 13-15, 1983

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    This investigation was the result of the continued interest in infiltration potential in certain portions of the RMA. Previous estimates of infiltration were based on data presented in Soil Survey of Adams County, Co. (USDA Soil Conservation Service and Colorado Ag. Experiment Station, 1974). In order to obtain more site-specific data, ten sites were selected by RMA personnel where double-ring infiltrometers would be installed and then left in place. These sites are in the South Plants Area and in Basin A.

  14. Adaptive Sampling approach to environmental site characterization at Joliet Army Ammunition Plant: Phase 2 demonstration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bujewski, G.E.; Johnson, R.L.

    1996-04-01

    Adaptive sampling programs provide real opportunities to save considerable time and money when characterizing hazardous waste sites. This Strategic Environmental Research and Development Program (SERDP) project demonstrated two decision-support technologies, SitePlanner™ and Plume™, that can facilitate the design and deployment of an adaptive sampling program. A demonstration took place at Joliet Army Ammunition Plant (JAAP), and was unique in that it was tightly coupled with ongoing Army characterization work at the facility, with close scrutiny by both state and federal regulators. The demonstration was conducted in partnership with the Army Environmental Center's (AEC) Installation Restoration Program and AEC's Technology Development Program. AEC supported researchers from Tufts University who demonstrated innovative field analytical techniques for the analysis of TNT and DNT. SitePlanner™ is an object-oriented database specifically designed for site characterization that provides an effective way to compile, integrate, manage and display site characterization data as it is being generated. Plume™ uses a combination of Bayesian analysis and geostatistics to provide technical staff with the ability to quantitatively merge soft and hard information for an estimate of the extent of contamination. Plume™ provides an estimate of contamination extent, measures the uncertainty associated with the estimate, determines the value of additional sampling, and locates additional samples so that their value is maximized.

  15. Short-term and long-term evapotranspiration rates at ecological restoration sites along a large river receiving rare flow events

    USGS Publications Warehouse

    Shanafield, Margaret; Jurado, Hugo Gutierrez; Burgueño, Jesús Eliana Rodríguez; Hernández, Jorge Ramírez; Jarchow, Christopher; Nagler, Pamela L.

    2017-01-01

    Many large rivers around the world no longer flow to their deltas, owing to ever greater water withdrawals and diversions for human needs. However, the importance of riparian ecosystems is drawing increasing recognition, leading to the allocation of environmental flows to restore river processes. Accurate estimates of riparian plant evapotranspiration (ET) are needed to understand how the riverine system responds to these rare events and whether environmental flows achieve their goals. In 2014, historic environmental flows were released into the Lower Colorado River at Morelos Dam (Mexico); this once perennial but now dry reach is the final stretch of the river before the Colorado River Delta. One of the primary goals was to supply native vegetation restoration sites along the reach with water to help seedlings establish and to boost groundwater levels to foster the planted saplings. Patterns in ET before, during, and after the flows are useful for evaluating whether this goal was met and for understanding the role that ET plays in this now ephemeral river system. Here, diurnal fluctuations in groundwater levels and MODIS data were used to compare estimates of ET at three native vegetation restoration sites during the 2014 planned flow events, while MODIS data were used to evaluate long-term (2002-2016) ET responses to restoration efforts at these sites. Overall, ET was generally 0-10 mm d⁻¹ across sites, and although daily ET values from groundwater data were highly variable, weekly averaged estimates were highly correlated with MODIS-derived estimates at most sites. The influence of the 2014 flow events was not immediately apparent in the results, although the process of clearing vegetation and planting native vegetation at the restoration sites was clearly visible in the results.
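
    The abstract does not spell out how ET was derived from the diurnal groundwater fluctuations; a common choice for that kind of data is the White (1932) water-table method, sketched here under assumed values for specific yield, nighttime recovery rate, and daily water-table decline.

    ```python
    def white_et_mm(specific_yield, recovery_rate_mm_per_hr, net_daily_decline_mm):
        """Daily ET (mm) = Sy * (24 h * nighttime recovery rate + net daily water-table decline)."""
        return specific_yield * (24.0 * recovery_rate_mm_per_hr + net_daily_decline_mm)

    # Assumed values: Sy = 0.15, nighttime recovery 1.2 mm/h, net decline 10 mm over the day.
    print(white_et_mm(0.15, 1.2, 10.0))   # 0.15 * (28.8 + 10) = 5.82 mm/day
    ```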

  16. Accounting for substitution and spatial heterogeneity in a labelled choice experiment.

    PubMed

    Lizin, S; Brouwer, R; Liekens, I; Broeckx, S

    2016-10-01

    Many environmental valuation studies using stated preference techniques are single-site studies that ignore essential spatial aspects, including possible substitution effects. In this paper, substitution effects are captured explicitly through the design of a labelled choice experiment and the inclusion of different distance variables in the choice model specification. We test the effect of spatial heterogeneity on welfare estimates and transfer errors for minor and major river restoration works, and the transferability of river-specific utility functions, accounting for key variables such as site visitation, spatial clustering and income. River-specific utility functions appear to be transferable, resulting in low transfer errors. Ignoring spatial heterogeneity, however, increases transfer errors. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Using RSAT to scan genome sequences for transcription factor binding sites and cis-regulatory modules.

    PubMed

    Turatsinze, Jean-Valery; Thomas-Chollier, Morgane; Defrance, Matthieu; van Helden, Jacques

    2008-01-01

    This protocol shows how to detect putative cis-regulatory elements and regions enriched in such elements with the regulatory sequence analysis tools (RSAT) web server (http://rsat.ulb.ac.be/rsat/). The approach applies to known transcription factors whose binding specificity is represented by position-specific scoring matrices, using the program matrix-scan. The detection of individual binding sites is known to return many false predictions; however, results can be strongly improved by estimating P values and by searching for combinations of sites (homotypic and heterotypic models). We illustrate the detection of sites and enriched regions with a study case, the upstream sequence of the Drosophila melanogaster gene even-skipped. The protocol is also tested on random control sequences to evaluate the reliability of the predictions. Each task requires a few minutes of computation time on the server, and the complete protocol can be executed in about one hour.
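
    Scoring a sequence against a position-specific scoring matrix is the core operation behind matrix-scan. The sketch below scans a toy sequence with a toy 4-bp log-odds matrix and reports windows above a score threshold; RSAT's background models and P-value estimation are not reproduced here.

    ```python
    import numpy as np

    bases = "ACGT"
    # toy log-odds PSSM for a 4-bp motif: rows = motif positions, columns = A, C, G, T
    pssm = np.array([[ 1.2, -0.8, -0.8, -0.5],
                     [-0.9,  1.1, -0.7, -0.6],
                     [-0.6, -0.7,  1.0, -0.9],
                     [-0.5, -0.8, -0.9,  1.3]])
    seq = "GGACGTAACGT"

    def scan(seq, pssm, threshold=2.0):
        """Return (position, score) for every window scoring at or above the threshold."""
        w = pssm.shape[0]
        hits = []
        for i in range(len(seq) - w + 1):
            score = sum(pssm[j, bases.index(seq[i + j])] for j in range(w))
            if score >= threshold:
                hits.append((i, round(score, 2)))
        return hits

    print(scan(seq, pssm))   # [(2, 4.6), (7, 4.6)] for this toy example
    ```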

  18. Central blood pressure in children and adolescents: non-invasive development and testing of novel transfer functions.

    PubMed

    Cai, T Y; Qasem, A; Ayer, J G; Butlin, M; O'Meagher, S; Melki, C; Marks, G B; Avolio, A; Celermajer, D S; Skilton, M R

    2017-12-01

    Central blood pressure can be estimated from peripheral pulses in adults using generalised transfer functions (TFs). We sought to create and test age-specific, non-invasively developed TFs in children, with comparison to a pre-existing adult TF. We studied healthy children from two sites at two time points, 8 and 14 years of age, split by site into development and validation groups. Radial and carotid pressure waveforms were obtained by applanation tonometry. Central systolic pressure was derived from carotid waveforms calibrated to brachial mean and diastolic pressures. Age-specific TFs created in the development groups (n=50) were tested in the validation groups aged 8 (n=137) and 14 years (n=85). At 8 years of age, the age-specific TF estimated 82, 99 and 100% of central systolic pressure values within 5, 10 and 15 mm Hg of their measured values, respectively. This TF overestimated central systolic pressure by 2.2 (s.d. 3.7) mm Hg, whereas the adult TF underestimated it by 5.6 (s.d. 3.9) mm Hg. At 14 years of age, the age-specific TF estimated 60, 87 and 95% of values within 5, 10 and 15 mm Hg of their measured values, respectively. This TF underestimated central systolic pressure by 0.5 (s.d. 6.7) mm Hg, while the adult TF underestimated it by 6.8 (s.d. 6.0) mm Hg. In conclusion, age-specific TFs predict central systolic pressure measured at the carotid artery in children more accurately than an existing adult TF.
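
    The accuracy summary used above (the percentage of estimates falling within 5, 10 and 15 mm Hg of the measured value, plus the mean bias) is straightforward to reproduce; the simulated arrays below are placeholders, not study data.

    ```python
    import numpy as np

    def pct_within(estimated, measured, thresholds=(5, 10, 15)):
        """Percentage of estimates within each threshold (mm Hg) of the measured value."""
        err = np.abs(np.asarray(estimated) - np.asarray(measured))
        return {t: round(100.0 * float(np.mean(err <= t)), 1) for t in thresholds}

    rng = np.random.default_rng(1)
    measured = rng.normal(100.0, 8.0, 137)               # placeholder central SBP values
    estimated = measured + rng.normal(2.2, 3.7, 137)     # mimics a +2.2 (s.d. 3.7) mm Hg bias
    print(pct_within(estimated, measured), round(float(np.mean(estimated - measured)), 2))
    ```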

  19. A computer program for predicting recharge with a master recession curve

    USGS Publications Warehouse

    Heppner, Christopher S.; Nimmo, John R.

    2005-01-01

    Water-table fluctuations occur in unconfined aquifers owing to ground-water recharge following precipitation and infiltration, and to ground-water discharge to streams between storm events. Ground-water recharge can be estimated from well hydrograph data using the water-table fluctuation (WTF) principle, which states that recharge is equal to the product of the water-table rise and the specific yield of the subsurface porous medium. The water-table rise, however, must be expressed relative to the water level that would have occurred in the absence of recharge, which requires a means of estimating the recession pattern of the water table at the site. For a given site there is often a characteristic relation between the water-table elevation and the water-table decline rate following a recharge event. A computer program was written that extracts the relation between decline rate and water-table elevation from well hydrograph data and uses it to construct a master recession curve (MRC). The MRC is a characteristic water-table recession hydrograph representing the average behavior of a declining water table at that site. The program then calculates recharge using the WTF method by comparing the measured well hydrograph with the hydrograph predicted by the MRC and multiplying the difference at each time step by the specific yield. This approach can be used to estimate recharge in a continuous fashion from long-term well records. Presented here is a description of the code, including the WTF theory, and instructions for running it to estimate recharge with continuous well hydrograph data.
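
    A minimal sketch of that calculation is shown below: at each time step the measured water level is compared with the level projected by a master recession curve, and any excess rise is multiplied by the specific yield. The linear decline-rate relation and all numeric values are assumptions for illustration, not output of the USGS program.

    ```python
    def wtf_recharge(levels_m, dt_days, specific_yield, mrc_rate):
        """levels_m: water-table elevations (m); mrc_rate(h) -> decline rate (m/day)."""
        recharge = 0.0
        for h_prev, h_now in zip(levels_m[:-1], levels_m[1:]):
            predicted = h_prev - mrc_rate(h_prev) * dt_days   # MRC-projected level
            rise = h_now - predicted
            if rise > 0:                                      # only rises above the MRC count
                recharge += specific_yield * rise
        return recharge

    mrc = lambda h: 0.02 + 0.01 * (h - 10.0)                  # assumed decline-rate relation
    levels = [10.50, 10.47, 10.60, 10.55, 10.52]              # daily water levels (m)
    print(round(wtf_recharge(levels, 1.0, 0.15, mrc), 4))     # recharge in metres of water
    ```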

  20. Overwintering strategies of migratory birds: a novel approach for estimating seasonal movement patterns of residents and transients

    USGS Publications Warehouse

    Ruiz-Gutierrez, Viviana; Kendall, William L.; Saracco, James F.; White, Gary C.

    2016-01-01

    Our understanding of movement patterns in wildlife populations has played an important role in current ecological knowledge and can inform landscape conservation decisions. Direct measures of movement can be obtained using marked individuals, but this requires tracking individuals across a landscape or multiple sites. We demonstrate how movements can be estimated indirectly using single-site capture–mark–recapture (CMR) data with a multi-state open robust design with state uncertainty model (MSORD-SU). We treat residence and transience as two phenotypic states of overwintering migrants and use time- and state-dependent probabilities of site entry and persistence as indirect measures of movement. We applied the MSORD-SU to data on eight species of overwintering Neotropical birds collected in 14 countries between 2002 and 2011. In addition to entry and persistence probabilities, we estimated the proportions of residents at a study site and mean residence times. We identified overwintering movement patterns and residence times that contrasted with prior categorizations of territoriality. Most species showed evidence of residents entering sites at multiple time intervals, with transients tending to enter between peak resident movement times. Persistence and the proportion of residents varied by latitude, but were not always positively correlated for a given species. Synthesis and applications: Our results suggest that migratory songbirds commonly move among habitats during the overwintering period. Substantial proportions of populations appear to be comprised of transient individuals, and residents tend to persist at specific sites for relatively short periods of time. This information on persistence and movement patterns should be explored for specific habitats to guide landscape management on the wintering grounds, such as determining which habitats are conserved or restored as part of certification programmes of tropical agroforestry crops. We suggest that research and conservation efforts on Neotropical migrant songbirds focus on identifying landscape configurations and regional habitat networks that support these diverse overwintering strategies to secure full life cycle conservation.

  1. Development of a calculation method for estimating specific energy distribution in complex radiation fields.

    PubMed

    Sato, Tatsuhiko; Watanabe, Ritsuko; Niita, Koji

    2006-01-01

    Estimation of the specific energy distribution in a human body exposed to complex radiation fields is of great importance in the planning of long-term space missions and heavy ion cancer therapies. With the aim of developing a tool for this estimation, the specific energy distributions in liquid water around the tracks of several HZE particles with energies up to 100 GeV n⁻¹ were calculated by performing track structure simulation with the Monte Carlo technique. In the simulation, the targets were assumed to be spherical sites with diameters from 1 nm to 1 μm. An analytical function to reproduce the simulation results was developed in order to predict the distributions of all kinds of heavy ions over a wide energy range. The incorporation of this function into the Particle and Heavy Ion Transport code System (PHITS) enables us to calculate the specific energy distributions in complex radiation fields in a short computational time.

  2. Coral bleaching response index: a new tool to standardize and compare susceptibility to thermal bleaching.

    PubMed

    Swain, Timothy D; Vega-Perkins, Jesse B; Oestreich, William K; Triebold, Conrad; DuBois, Emily; Henss, Jillian; Baird, Andrew; Siple, Margaret; Backman, Vadim; Marcelino, Luisa

    2016-07-01

    As coral bleaching events become more frequent and intense, our ability to predict and mitigate future events depends upon our capacity to interpret patterns within previous episodes. Responses to thermal stress vary among coral species; however the diversity of coral assemblages, environmental conditions, assessment protocols, and severity criteria applied in the global effort to document bleaching patterns creates challenges for the development of a systemic metric of taxon-specific response. Here, we describe and validate a novel framework to standardize bleaching response records and estimate their measurement uncertainties. Taxon-specific bleaching and mortality records (2036) of 374 coral taxa (during 1982-2006) at 316 sites were standardized to average percent tissue area affected and a taxon-specific bleaching response index (taxon-BRI) was calculated by averaging taxon-specific response over all sites where a taxon was present. Differential bleaching among corals was widely variable (mean taxon-BRI = 25.06 ± 18.44%, ±SE). Coral response may differ because holobionts are biologically different (intrinsic factors), they were exposed to different environmental conditions (extrinsic factors), or inconsistencies in reporting (measurement uncertainty). We found that both extrinsic and intrinsic factors have comparable influence within a given site and event (60% and 40% of bleaching response variance of all records explained, respectively). However, when responses of individual taxa are averaged across sites to obtain taxon-BRI, differential response was primarily driven by intrinsic differences among taxa (65% of taxon-BRI variance explained), not conditions across sites (6% explained), nor measurement uncertainty (29% explained). Thus, taxon-BRI is a robust metric of intrinsic susceptibility of coral taxa. Taxon-BRI provides a broadly applicable framework for standardization and error estimation for disparate historical records and collection of novel data, allowing for unprecedented accuracy in parameterization of mechanistic and predictive models and conservation plans. © 2016 John Wiley & Sons Ltd.
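
    The taxon-BRI aggregation itself is a simple average of standardized per-site responses; the records in this sketch are invented for illustration, and the standardization to percent tissue area affected is assumed to have already been done.

    ```python
    from collections import defaultdict

    records = [                                   # (taxon, site, % tissue area affected)
        ("Acropora sp.", "site_A", 60.0),
        ("Acropora sp.", "site_B", 45.0),
        ("Porites sp.",  "site_A", 10.0),
        ("Porites sp.",  "site_C", 20.0),
    ]

    by_taxon = defaultdict(list)
    for taxon, _site, affected in records:
        by_taxon[taxon].append(affected)

    # taxon-BRI: mean response of each taxon over all sites where it was recorded
    taxon_bri = {t: sum(v) / len(v) for t, v in by_taxon.items()}
    print(taxon_bri)    # {'Acropora sp.': 52.5, 'Porites sp.': 15.0}
    ```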

  3. Coral bleaching response index: a new tool to standardize and compare susceptibility to thermal bleaching

    PubMed Central

    SWAIN, TIMOTHY D.; VEGA-PERKINS, JESSE B.; OESTREICH, WILLIAM K.; TRIEBOLD, CONRAD; DUBOIS, EMILY; HENSS, JILLIAN; BAIRD, ANDREW; SIPLE, MARGARET; BACKMAN, VADIM; MARCELINO, LUISA

    2017-01-01

    As coral bleaching events become more frequent and intense, our ability to predict and mitigate future events depends upon our capacity to interpret patterns within previous episodes. Responses to thermal stress vary among coral species; however the diversity of coral assemblages, environmental conditions, assessment protocols, and severity criteria applied in the global effort to document bleaching patterns creates challenges for the development of a systemic metric of taxon-specific response. Here, we describe and validate a novel framework to standardize bleaching response records and estimate their measurement uncertainties. Taxon-specific bleaching and mortality records (2036) of 374 coral taxa (during 1982–2006) at 316 sites were standardized to average percent tissue area affected and a taxon-specific bleaching response index (taxon-BRI) was calculated by averaging taxon-specific response over all sites where a taxon was present. Differential bleaching among corals was widely variable (mean taxon-BRI = 25.06 ± 18.44%, ± SE). Coral response may differ because holobionts are biologically different (intrinsic factors), they were exposed to different environmental conditions (extrinsic factors), or inconsistencies in reporting (measurement uncertainty). We found that both extrinsic and intrinsic factors have comparable influence within a given site and event (60% and 40% of bleaching response variance of all records explained, respectively). However, when responses of individual taxa are averaged across sites to obtain taxon-BRI, differential response was primarily driven by intrinsic differences among taxa (65% of taxon-BRI variance explained), not conditions across sites (6% explained), nor measurement uncertainty (29% explained). Thus, taxon-BRI is a robust metric of intrinsic susceptibility of coral taxa. Taxon-BRI provides a broadly applicable framework for standardization and error estimation for disparate historical records and collection of novel data, allowing for unprecedented accuracy in parameterization of mechanistic and predictive models and conservation plans. PMID:27074334

  4. Variation in abundance of Pacific Blue Mussel (Mytilus trossulus) in the Northern Gulf of Alaska, 2006-2015

    NASA Astrophysics Data System (ADS)

    Bodkin, James L.; Coletti, Heather A.; Ballachey, Brenda E.; Monson, Daniel H.; Esler, Daniel; Dean, Thomas A.

    2018-01-01

    Mussels are conspicuous and ecologically important components of nearshore marine communities around the globe. Pacific blue mussels (Mytilus trossulus) are common residents of intertidal habitats in protected waters of the North Pacific, serving as a conduit of primary production to a wide range of nearshore consumers including predatory invertebrates, sea ducks, shorebirds, sea otters, humans, and other terrestrial mammals. We monitored seven metrics of intertidal Pacific blue mussel abundance at five sites in each of three regions across the northern Gulf of Alaska: Katmai National Park and Preserve (Katmai) (2006-2015), Kenai Fjords National Park (Kenai Fjords) (2008-2015) and western Prince William Sound (WPWS) (2007-2015). Metrics included estimates of percent cover at two tide heights in randomly selected rocky intertidal habitat and, in selected mussel beds, the density of large mussels (≥ 20 mm), the density of all mussels > 2 mm estimated from cores extracted from those beds, bed size, and the total abundance of large and of all mussels (the product of density and bed size). We evaluated whether these measures of mussel abundance differed among sites or regions, whether mussel abundance varied over time, and whether temporal patterns in abundance were site-specific or synchronous at regional or Gulf-wide spatial scales. We found that, for all metrics, mussel abundance varied on a site-by-site basis. After accounting for site differences, we found similar temporal patterns in several measures of abundance (both percent cover metrics, large mussel density, large mussel abundance, and mussel abundance estimated from cores), in which abundance was initially high, declined significantly over several years, and subsequently recovered. Averaged across all sites, we documented declines of 84% in large mussel abundance through 2013, with recovery to 41% of initial abundance by 2015. These findings suggest that factors operating across the northern Gulf of Alaska were affecting mussel survival and subsequently abundance. In contrast, the density of primarily small mussels obtained from cores (an index of recruitment) varied markedly by site but did not show meaningful temporal trends. We interpret this to indicate that settlement was driven by site-specific features rather than Gulf-wide factors. By extension, we hypothesize that the temporal changes in mussel abundance we observed were not a result of temporal variation in larval supply leading to variation in recruitment, but instead suggest mortality as a primary demographic factor driving mussel abundance. Our results highlight the need to better understand the underlying mechanisms of change in mussels, as well as the implications of that change for nearshore consumers.

  5. Variation in abundance of Pacific Blue Mussel (Mytilus trossulus) in the Northern Gulf of Alaska, 2006–2015

    USGS Publications Warehouse

    Bodkin, James L.; Coletti, Heather A.; Ballachey, Brenda E.; Monson, Daniel; Esler, Daniel N.; Dean, Thomas A.

    2017-01-01

    Mussels are conspicuous and ecologically important components of nearshore marine communities around the globe. Pacific blue mussels (Mytilus trossulus) are common residents of intertidal habitats in protected waters of the North Pacific, serving as a conduit of primary production to a wide range of nearshore consumers including predatory invertebrates, sea ducks, shorebirds, sea otters, humans, and other terrestrial mammals. We monitored seven metrics of intertidal Pacific blue mussel abundance at five sites in each of three regions across the northern Gulf of Alaska: Katmai National Park and Preserve (Katmai) (2006–2015), Kenai Fjords National Park (Kenai Fjords) (2008–2015) and western Prince William Sound (WPWS) (2007–2015). Metrics included estimates of percent cover at two tide heights in randomly selected rocky intertidal habitat and, in selected mussel beds, the density of large mussels (≥ 20 mm), the density of all mussels > 2 mm estimated from cores extracted from those beds, bed size, and the total abundance of large and of all mussels (the product of density and bed size). We evaluated whether these measures of mussel abundance differed among sites or regions, whether mussel abundance varied over time, and whether temporal patterns in abundance were site-specific or synchronous at regional or Gulf-wide spatial scales. We found that, for all metrics, mussel abundance varied on a site-by-site basis. After accounting for site differences, we found similar temporal patterns in several measures of abundance (both percent cover metrics, large mussel density, large mussel abundance, and mussel abundance estimated from cores), in which abundance was initially high, declined significantly over several years, and subsequently recovered. Averaged across all sites, we documented declines of 84% in large mussel abundance through 2013, with recovery to 41% of initial abundance by 2015. These findings suggest that factors operating across the northern Gulf of Alaska were affecting mussel survival and subsequently abundance. In contrast, the density of primarily small mussels obtained from cores (an index of recruitment) varied markedly by site but did not show meaningful temporal trends. We interpret this to indicate that settlement was driven by site-specific features rather than Gulf-wide factors. By extension, we hypothesize that the temporal changes in mussel abundance we observed were not a result of temporal variation in larval supply leading to variation in recruitment, but instead suggest mortality as a primary demographic factor driving mussel abundance. Our results highlight the need to better understand the underlying mechanisms of change in mussels, as well as the implications of that change for nearshore consumers.

  6. National Stormwater Calculator: Low Impact Development ...

    EPA Pesticide Factsheets

    The National Stormwater Calculator (NSC) makes it easy to estimate runoff reduction when planning a new development or redevelopment site with low impact development (LID) stormwater controls. The Calculator is currently deployed as a Windows desktop application. It is organized as a wizard-style application that walks the user through the steps necessary to perform runoff calculations on a single urban sub-catchment of 10 acres or less. Using an interactive map, the user can select the sub-catchment location, and the Calculator automatically acquires hydrologic data for the site. A new LID cost estimation module has been developed for the Calculator. This project involved programming cost curves into the existing Calculator desktop application. The integration of cost components of LID controls into the Calculator increases functionality and will promote greater use of the Calculator as a stormwater management and evaluation tool. The addition of the cost estimation module allows planners and managers to evaluate LID controls by comparing project cost estimates with predicted LID control performance. Cost estimation is based on user-identified size (or auto-sizing to achieve volume control or treatment of a defined design storm), the configuration of the LID control infrastructure, and other key project- and site-specific variables, including whether the project is being applied as part of new development or redevelopment.

  7. An assessment of correlations between chlorinated VOC concentrations in tree tissue and groundwater for phytoscreening applications.

    PubMed

    Duncan, Candice M; Brusseau, Mark L

    2018-03-01

    The majority of prior phytoscreening applications have employed the method as a tool to qualitatively determine the presence of contamination in the subsurface. Although qualitative data are quite useful, this study explores the potential for using phytoscreening quantitatively. The existence of site-specific and non-site-specific (master) correlations between VOC concentrations in tree tissue and groundwater is investigated using data collected from several phytoscreening studies. The aggregated data comprise 100 measurements collected from 12 sites that span a wide range of site conditions. Significant site-specific correlations are observed between tetrachloroethene (PCE) and trichloroethene (TCE) concentrations measured in tree tissue and those measured in groundwater for three sites. A moderately significant correlation (r² = 0.56) exists for the entire aggregate data set. Parsing the data by groundwater depth produced a highly significant correlation (r² = 0.88) for sites with shallow (< 4 m) groundwater. Such a significant correlation for data collected by different investigators from multiple sites with a wide range of tree species and subsurface conditions indicates that groundwater concentration is the predominant factor mediating tree-tissue concentrations at these sites, likely because trees tap groundwater directly under such shallow conditions. This master correlation may provide reasonable order-of-magnitude estimates of VOC concentrations in groundwater for such sites, thereby allowing the use of phytoscreening in a more quantitative mode. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. On the choice of statistical models for estimating occurrence and extinction from animal surveys

    USGS Publications Warehouse

    Dorazio, R.M.

    2007-01-01

    In surveys of natural animal populations the number of animals that are present and available to be detected at a sample location is often low, resulting in few or no detections. Low detection frequencies are especially common in surveys of imperiled species; however, the choice of sampling method and protocol also may influence the size of the population that is vulnerable to detection. In these circumstances, probabilities of animal occurrence and extinction will generally be estimated more accurately if the models used in data analysis account for differences in abundance among sample locations and for the dependence between site-specific abundance and detection. Simulation experiments are used to illustrate conditions wherein these types of models can be expected to outperform alternative estimators of population site occupancy and extinction. ?? 2007 by the Ecological Society of America.

  9. Daily estimates of soil ingestion in children.

    PubMed Central

    Stanek, E J; Calabrese, E J

    1995-01-01

    Soil ingestion estimates play an important role in risk assessment of contaminated sites, and estimates of soil ingestion in children are of special interest. Current estimates of soil ingestion are trace-element specific and vary widely among elements. Although expressed as daily estimates, the actual estimates have been constructed by averaging soil ingestion over a study period of several days. The wide variability has resulted in uncertainty as to which method of estimation of soil ingestion is best. We developed a methodology for calculating a single estimate of soil ingestion for each subject for each day. Because the daily soil ingestion estimate represents the median estimate of eligible daily trace-element-specific soil ingestion estimates for each child, this median estimate is not trace-element specific. Summary estimates for individuals and weeks are calculated using these daily estimates. Using this methodology, the median daily soil ingestion estimate for 64 children participating in the 1989 Amherst soil ingestion study is 13 mg/day or less for 50% of the children and 138 mg/day or less for 95% of the children. Mean soil ingestion estimates (for up to an 8-day period) were 45 mg/day or less for 50% of the children, whereas 95% of the children reported a mean soil ingestion of 208 mg/day or less. Daily soil ingestion estimates were used subsequently to estimate the mean and variance in soil ingestion for each child and to extrapolate a soil ingestion distribution over a year, assuming that soil ingestion followed a log-normal distribution. PMID:7768230
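
    The per-day estimator described above amounts to taking the median across the eligible tracer-specific estimates for a given child-day, so the daily value is not tied to any single tracer. The tracer names and values below are illustrative only.

    ```python
    import statistics

    day_estimates_mg = {                     # tracer -> soil ingestion estimate (mg/day)
        "Al": 22.0, "Si": 35.0, "Ti": 140.0, "Y": 18.0, "Zr": 30.0,
    }
    daily_ingestion = statistics.median(day_estimates_mg.values())
    print(daily_ingestion)                   # 30.0 mg/day for this child-day
    ```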

  10. Mass absorption efficiency of elemental carbon over Van Vihar National Park, Bhopal, India: Temporal variability and implications to estimates of black carbon radiative forcing

    NASA Astrophysics Data System (ADS)

    Samiksha, S.; Raman, R. S.; Singh, A.

    2016-12-01

    It is now well recognized that black carbon (a component of aerosols that is similar but not identical to elemental carbon) is an important contributor to global warming, second only to CO2. However, the most popular methods for estimation of black carbon rely on accurate estimates of its mass absorption efficiency (MAE) to convert optical attenuation measurements to black carbon concentrations. Often a constant, manufacturer-specified MAE is used for this purpose. Recent literature has unequivocally established that MAE shows large spatio-temporal heterogeneities, because MAE depends on the emission sources, chemical composition, and mixing state of aerosols. In this study, ambient PM2.5 samples were collected over an ecologically sensitive zone (Van Vihar National Park) in Bhopal, Central India for two years (1 January 2012 to 31 December 2013). Samples were collected on Teflon, Nylon, and Tissue quartz filter substrates. Punches of the quartz fibre filters were analysed for organic and elemental carbon (OC/EC) by a thermal-optical-transmittance/reflectance (TOT/TOR) analyser operating with a 632 nm laser diode. Teflon filters were also used to independently measure PM2.5 attenuation (at 370 nm and 800 nm) by transmissometry. Site-specific mass absorption efficiency (MAE) for elemental carbon over the study site will be derived using a combination of measurements from the TOT/TOR analyser and the transmissometer. An assessment of site-specific MAE values, their temporal variability, and the implications for black carbon radiative forcing will be discussed.

  11. Investigating the effects of the fixed and varying dispersion parameters of Poisson-gamma models on empirical Bayes estimates.

    PubMed

    Lord, Dominique; Park, Peter Young-Jin

    2008-07-01

    Traditionally, transportation safety analysts have used the empirical Bayes (EB) method to improve the estimate of the long-term mean of individual sites; to correct for the regression-to-the-mean (RTM) bias in before-after studies; and to identify hotspots or high-risk locations. The EB method combines two different sources of information: (1) the expected number of crashes estimated via crash prediction models, and (2) the observed number of crashes at individual sites. Crash prediction models have traditionally been estimated using a negative binomial (NB) (or Poisson-gamma) modeling framework because of the over-dispersion commonly found in crash data. A weight factor is used to assign the relative influence of each source of information on the EB estimate; this factor is estimated from the mean and variance functions of the NB model. Given recent findings that the dispersion parameter can depend on the covariates of NB models, especially for traffic-flow-only models, and can vary across time periods, there is a need to determine how these models may affect EB estimates. The objectives of this study are to examine how commonly used functional forms as well as fixed and time-varying dispersion parameters affect the EB estimates. To accomplish the study objectives, several traffic-flow-only crash prediction models were estimated using a sample of rural three-legged intersections located in California. Two types of aggregated and time-specific models were produced: (1) the traditional NB model with a fixed dispersion parameter, and (2) the generalized NB model (GNB) with a time-varying dispersion parameter that also depends on the covariates of the model. Several statistical methods were used to compare the fitting performance of the various functional forms. The results of the study show that the selection of the functional form of NB models has an important effect on EB estimates in terms of estimated values, weight factors, and dispersion parameters. Time-specific models with a varying dispersion parameter provide better statistical performance in terms of goodness-of-fit (GOF) than aggregated multi-year models. Furthermore, the identification of hazardous sites using the EB method can be significantly affected when a GNB model with a time-varying dispersion parameter is used. Thus, erroneously selecting a functional form may lead to selecting the wrong sites for treatment. The study concludes that transportation safety analysts should not automatically use an existing functional form for modeling motor vehicle crashes without conducting rigorous analyses to estimate the most appropriate functional form linking crashes with traffic flow.
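
    The EB combination described above is a weighted average of the model-predicted mean and the observed count. For a negative binomial model with inverse dispersion parameter phi (variance = mu + mu^2/phi, equivalently w = 1/(1 + alpha*mu) when the dispersion is written as alpha), the weight is w = 1/(1 + mu/phi). The numbers below are illustrative, not from the study.

    ```python
    def eb_estimate(mu_pred, observed, phi):
        """EB estimate = w * predicted mean + (1 - w) * observed count, w = 1/(1 + mu/phi)."""
        w = 1.0 / (1.0 + mu_pred / phi)
        return w * mu_pred + (1.0 - w) * observed, w

    est, w = eb_estimate(mu_pred=2.5, observed=6, phi=1.8)
    print(round(w, 3), round(est, 3))   # w approx. 0.419, EB estimate approx. 4.535
    ```

    A covariate- or time-dependent dispersion parameter changes phi from site to site and period to period, which is how the choice of functional form feeds through to the weights and hence to hotspot rankings.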

  12. Estimating occupancy rates with imperfect detection under complex survey designs

    EPA Science Inventory

    Monitoring the occurrence of specific amphibian species is of interest. Typically, the monitoring design is a complex design that involves stratification and unequal probability of selection. When conducting field visits to selected sites, a common problem is that during a singl...

  13. MAXIMIZE THE EFFICIENCY OF PUMP AND TREAT SYSTEMS

    EPA Science Inventory

    This paper focuses on methodology for determining the extent of hydraulic control and the remediation effectiveness of site-specific pump and treat systems. Maximum potential well yield is estimated on the basis of hydraulic characteristics described by the Cooper and Jacob equation. A ma...
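
    The abstract names the Cooper and Jacob equation as the basis for the well-yield estimate. One common rearrangement, drawdown s = (2.3 Q / 4 pi T) * log10(2.25 T t / r^2 S) solved for Q at an allowable drawdown, is sketched below; all numeric values are assumptions for illustration, not values from the paper.

    ```python
    import math

    def cooper_jacob_yield(T, S, s_max, t_days, r):
        """T: transmissivity (m^2/day), S: storativity, s_max: allowable drawdown (m),
        t_days: pumping time (days), r: well radius (m). Returns Q in m^3/day."""
        log_term = math.log10(2.25 * T * t_days / (r ** 2 * S))
        return 4.0 * math.pi * T * s_max / (2.3 * log_term)

    print(round(cooper_jacob_yield(T=50.0, S=1e-4, s_max=3.0, t_days=30.0, r=0.15), 1))  # ~89.3 m^3/day
    ```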

  14. EPA Science Matters Newsletter: Stormwater Calculator Helps Communities Take Action to Reduce Runoff (Published April 2014)

    EPA Pesticide Factsheets

    Learn about the Stormwater Calculator, which provides estimates of stormwater runoff from a specific site. Users can input any location within the U.S. and select different scenarios to see how they affect runoff volumes.

  15. Continuous water-quality monitoring and regression analysis to estimate constituent concentrations and loads in the Sheyenne River, North Dakota, 1980-2006

    USGS Publications Warehouse

    Ryberg, Karen R.

    2007-01-01

    This report presents the results of a study by the U.S. Geological Survey, done in cooperation with the North Dakota State Water Commission, to estimate water-quality constituent concentrations at seven sites on the Sheyenne River, N. Dak. Regression analysis of water-quality data collected in 1980-2006 was used to estimate concentrations of hardness, dissolved solids, calcium, magnesium, sodium, and sulfate. The explanatory variables examined for the regression relations were continuously monitored streamflow, specific conductance, and water temperature. For the conditions observed in 1980-2006, streamflow was a significant explanatory variable for some constituents, specific conductance was a significant explanatory variable for all of the constituents, and water temperature was not a statistically significant explanatory variable for any of the constituents in this study. The regression relations were evaluated using common measures of variability, including R², the proportion of variability in the estimated constituent concentration explained by the explanatory variables and the regression equation. R² values ranged from 0.784 for calcium to 0.997 for dissolved solids. The regression relations also were evaluated by calculating the median relative percentage difference (RPD) between the measured constituent concentration and the concentration estimated by the regression equations. Median RPDs ranged from 1.7 for dissolved solids to 11.5 for sulfate. The regression relations also may be used to estimate daily constituent loads. The relations should be monitored for change over time, especially at sites 2 and 3, which have a short period of record. In addition, caution should be used when the Sheyenne River is affected by ice or when upstream sites are affected by isolated storm runoff; almost all of the outliers and highly influential samples removed from the analysis were collected during periods when the Sheyenne River might have been affected by ice.
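
    The workflow above, regressing a constituent concentration on continuously monitored specific conductance and summarising skill with R² and the median relative percent difference, can be sketched as follows; the synthetic data stand in for the Sheyenne River records.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    spec_cond = rng.uniform(400, 1600, 60)                             # specific conductance (uS/cm)
    dissolved_solids = 0.62 * spec_cond + 20 + rng.normal(0, 15, 60)   # synthetic concentrations (mg/L)

    slope, intercept = np.polyfit(spec_cond, dissolved_solids, 1)      # simple linear regression
    estimated = slope * spec_cond + intercept

    ss_res = np.sum((dissolved_solids - estimated) ** 2)
    ss_tot = np.sum((dissolved_solids - dissolved_solids.mean()) ** 2)
    r_squared = 1 - ss_res / ss_tot
    median_rpd = np.median(100 * np.abs(estimated - dissolved_solids)
                           / ((estimated + dissolved_solids) / 2))     # relative percent difference
    print(round(float(r_squared), 3), round(float(median_rpd), 2))
    ```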

  16. An integrated modelling and multicriteria analysis approach to managing nitrate diffuse pollution: 2. A case study for a chalk catchment in England.

    PubMed

    Koo, B K; O'Connell, P E

    2006-04-01

    The site-specific land use optimisation methodology, suggested by the authors in the first part of this two-part paper, has been applied to the River Kennet catchment at Marlborough, Wiltshire, UK, for a case study. The Marlborough catchment (143 km²) is an agriculture-dominated rural area over a deep chalk aquifer that is vulnerable to nitrate pollution from agricultural diffuse sources. For evaluation purposes, the catchment was discretised into a network of 1 km × 1 km grid cells. For each of the arable-land grid cells, seven land use alternatives (four arable-land alternatives and three grassland alternatives) were evaluated for their environmental and economic potential. For environmental evaluation, nitrate leaching rates of land use alternatives were estimated using SHETRAN simulations and groundwater pollution potential was evaluated using the DRASTIC index. For economic evaluation, economic gross margins were estimated using a simple agronomic model based on nitrogen response functions and agricultural land classification grades. In order to see whether the site-specific optimisation is efficient at the catchment scale, land use optimisation was carried out for four optimisation schemes (i.e. using four sets of criterion weights). Consequently, four land use scenarios were generated and the site-specifically optimised land use scenario was evaluated as the best compromise solution between long-term nitrate pollution and agronomy at the catchment scale.

  17. Predicting Trihalomethanes (THMs) in the New York City Water Supply

    NASA Astrophysics Data System (ADS)

    Mukundan, R.; Van Dreason, R.

    2013-12-01

    Chlorine, a commonly used disinfectant in most water supply systems, can combine with organic carbon to form disinfection byproducts, including carcinogenic trihalomethanes (THMs). We used water quality data from 24 monitoring sites within the New York City (NYC) water supply distribution system, measured between January 2009 and April 2012, to develop site-specific empirical models for predicting total trihalomethane (TTHM) levels. Terms in the models included various combinations of the following water quality parameters: total organic carbon, pH, specific conductivity, and water temperature. Reasonable estimates of TTHM levels were achieved, with an overall R² of about 0.87 and predicted values within 5 μg/L of measured values. The relative importance of factors affecting TTHM formation was estimated by ranking the model regression coefficients. Site-specific models showed improved performance statistics compared with a single model for the entire system, most likely because the single model did not consider locational differences in the water treatment process. Although never out of compliance in 2011, TTHM levels in the water supply increased following tropical storms Irene and Lee, with 45% of the samples exceeding the 80 μg/L Maximum Contaminant Level (MCL) in October and November. This increase was explained by changes in water quality parameters, particularly the increase in total organic carbon concentration and pH during this period.

  18. Site-specific high-resolution models of the monsoon for Africa and Asia

    NASA Astrophysics Data System (ADS)

    Bryson, R. A.; Bryson, R. U.

    2000-11-01

    Using the macrophysical climate model of Bryson [Bryson, R.A., 1992. A macrophysical model of the Holocene intertropical convergence and jetstream positions and rainfall for the Saharan region. Meteorol. Atmos. Phys., 47, pp. 247-258], it is possible to calculate the monthly latitude of the jetstream and the latitude of the subtropical anticyclones. From these and modern climatic data, it is possible to model the two-century mean latitude of the intertropical convergence (ITC) month by month and to estimate the monthly monsoon rainfall using the ITC-rainfall model of Ilesanmi [Ilesanmi, O.O., 1971. An empirical formulation of an ITD rainfall model for the tropics — a case study of Nigeria. J. Appl. Meteorol., 10, pp. 882-891] and similar relationships. The only inputs to this model are calculated radiation and atmospheric optical depth estimated from a database of global volcanicity. Recent work has shown that it is possible to extend these estimates to both precipitation and temperature at specific sites, even in mountainous terrain. Testing of the model against archaeological records and climatic proxies is now underway, as is refinement of the fundamental model. Preliminary indications are that the timing of fluctuations in the local climate is very well modeled. Especially well matched are the modeled Nile flood, based on calculated rainfall on the Blue and White Nile watersheds, and the level of Lake Moeris [Hassan, F., 1985. Holocene lakes and prehistoric settlements of the Western Faiyum, Egypt. J. Archaeol. Res., 13, pp. 483-501]. Modeled precipitation histories for specific sites in China, Thailand, the Arabian Peninsula, and North Africa will be presented and contrasted with the simulated rainfall history of Mesopotamia.

  19. Dual-mode fluorophore-doped nickel nitrilotriacetic acid-modified silica nanoparticles combine histidine-tagged protein purification with site-specific fluorophore labeling.

    PubMed

    Kim, Sung Hoon; Jeyakumar, M; Katzenellenbogen, John A

    2007-10-31

    We present the first example of a fluorophore-doped nickel chelate surface-modified silica nanoparticle that functions in a dual mode, combining histidine-tagged protein purification with site-specific fluorophore labeling. Tetramethylrhodamine (TMR)-doped silica nanoparticles, estimated to contain 700-900 TMRs per ca. 23 nm particle, were surface-modified with nitrilotriacetic acid (NTA), producing TMR-SiO2-NTA-Ni2+. Silica-embedded TMR retains a very high quantum yield, is resistant to quenching by buffer components, and is modestly quenched, and only to a certain depth (ca. 2 nm), by surface-attached Ni2+. When exposed to a bacterial lysate containing the estrogen receptor alpha ligand binding domain (ERα) as a minor component, these beads showed very high binding specificity, enabling protein purification in one step. The capacity and specificity of these beads for binding a His-tagged protein were characterized by electrophoresis, radiometric counting, and MALDI-TOF MS. ERα bound to TMR-SiO2-NTA-Ni2+ beads in a site-specific manner exhibited good activity for ligand binding and for ligand-induced binding to coactivators in solution FRET experiments and in protein microarray fluorometric and FRET assays. This dual-mode TMR-SiO2-NTA-Ni2+ system represents a powerful combination of one-step histidine-tagged protein purification and site-specific labeling with multiple fluorophore species.

  20. Photophysics and photochemistry of dyes bound to human serum albumin are determined by the dye localization.

    PubMed

    Alarcón, Emilio; Edwards, Ana Maria; Aspee, Alexis; Moran, Faustino E; Borsarelli, Claudio D; Lissi, Eduardo A; Gonzalez-Nilo, Danilo; Poblete, Horacio; Scaiano, J C

    2010-01-01

    The photophysics and photochemistry of rose bengal (RB) and methylene blue (MB) bound to human serum albumin (HSA) have been investigated under a variety of experimental conditions. Distribution of the dyes between the external solvent and the protein has been estimated by physical separation and fluorescence measurements. The main localization of protein-bound dye molecules was estimated by the intrinsic fluorescence quenching, displacement of fluorescent probes bound to specific protein sites, and by docking modelling. All the data indicate that, at low occupation numbers, RB binds strongly to the HSA site I, while MB localizes predominantly in the protein binding site II. This different localization explains the observed differences in the dyes' photochemical behaviour. In particular, the environment provided by site I is less polar and considerably less accessible to oxygen. The localization of RB in site I also leads to an efficient quenching of the intrinsic protein fluorescence (ascribed to the nearby Trp residue) and the generation of intra-protein singlet oxygen, whose behaviour is different to that observed in the external solvent or when it is generated by bound MB.

  1. Empirical evidence for acceleration-dependent amplification factors

    USGS Publications Warehouse

    Borcherdt, R.D.

    2002-01-01

    Site-specific amplification factors, Fa and Fv, used in current U.S. building codes decrease with increasing base acceleration level, as implied by the Loma Prieta earthquake at 0.1g and extrapolated using numerical models and laboratory results. The Northridge earthquake recordings of 17 January 1994 and subsequent geotechnical data permit empirical estimates of amplification at base acceleration levels up to 0.5g. Distance measures and normalization procedures used to infer amplification ratios from soil-rock pairs in predetermined azimuth-distance bins significantly influence the dependence of amplification estimates on base acceleration. Factors inferred using a hypocentral distance norm do not show a statistically significant dependence on base acceleration. Factors inferred using norms implied by the attenuation functions of Abrahamson and Silva show a statistically significant decrease with increasing base acceleration. The decrease is statistically more significant for stiff clay and sandy soil (site class D) sites than for stiffer sites underlain by gravelly soils and soft rock (site class C). The decrease in amplification with increasing base acceleration is more pronounced for the short-period amplification factor, Fa, than for the midperiod factor, Fv.
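
    As a rough illustration of the empirical approach described above, the short sketch below forms amplification ratios from paired soil and rock recordings and regresses their logarithm against base (rock) acceleration to test for a decreasing trend. The pairing, the toy acceleration values, and the log-linear form are assumptions made for illustration; they are not Borcherdt's (2002) binning or normalization procedure.

```python
# Hypothetical sketch: regress log amplification ratio on rock (base) acceleration
# to test whether amplification decreases with increasing base acceleration.
# The soil/rock pairing and values below are invented, not taken from Borcherdt (2002).
import numpy as np
from scipy import stats

# toy soil/rock peak-ground-acceleration pairs (g), one pair per azimuth-distance bin
rock_pga = np.array([0.05, 0.10, 0.15, 0.25, 0.35, 0.50])
soil_pga = np.array([0.11, 0.19, 0.26, 0.38, 0.47, 0.60])

amplification = soil_pga / rock_pga            # short-period amplification proxy (Fa-like)
slope, intercept, r, p, se = stats.linregress(rock_pga, np.log(amplification))

print(f"d ln(F)/d a_base = {slope:.2f} +/- {se:.2f}, p = {p:.3f}")
# A significantly negative slope would indicate amplification decreasing with base acceleration.
```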

  2. Impact of wildfires on regional air pollution | Science Inventory ...

    EPA Pesticide Factsheets

    We examine the impact of wildfires and agricultural/prescribed burning on regional air pollution and the Air Quality Index (AQI) between 2006 and 2013. We define daily regional air pollution using monitoring sites for ozone (n=1595), PM2.5 collected by the Federal Reference Method (n=1058), and constituents of PM2.5 from the Interagency Monitoring of PROtected Visual Environments (IMPROVE) network (n=264), and use satellite image analysis from the NOAA Hazard Mapping System (HMS) to determine days on which visible smoke plumes are detected in the vertical column above the monitoring site. To examine the impact of smoke from these fires on regional air pollution we use a two-stage approach, accounting for within-site (first-stage) and between-site (second-stage) variation. At the first stage we estimate a monitor-specific plume-day effect describing the relative change in pollutant concentrations on days impacted by a smoke plume while accounting for the confounding effects of season and temperature. At the second stage we combine the monitor-specific plume-day effects with a Bayesian hierarchical model and estimate a pooled, nationally averaged effect. HMS visible smoke plumes were detected on 6% of ozone, 8% of PM2.5, and 6% of IMPROVE network monitoring days. Our preliminary results indicate that the long-range transport of air pollutants from wildfires and prescribed burns increases ozone concentrations by 11% and PM2.5 mass by 34%. On all of the days where monitoring sites were AQI ...
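
    A minimal sketch of the two-stage approach is given below, under stated assumptions: stage one fits a monitor-specific regression of log concentration on a plume-day indicator with temperature and season as covariates, and stage two pools the monitor-specific effects with inverse-variance weights as a simple stand-in for the Bayesian hierarchical model. The simulated monitors and effect sizes are invented for illustration.

```python
# Minimal two-stage sketch (not the EPA implementation): stage 1 fits a
# monitor-specific plume-day effect on log concentration, controlling for
# temperature and season; stage 2 pools the effects with inverse-variance
# weights as a simple stand-in for a Bayesian hierarchical model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

def simulate_monitor(n_days=365, true_effect=0.25):
    doy = np.arange(n_days)
    temp = 15 + 10 * np.sin(2 * np.pi * doy / 365) + rng.normal(0, 2, n_days)
    plume = rng.random(n_days) < 0.07                     # ~7% smoke-plume days
    log_conc = 2.0 + 0.02 * temp + true_effect * plume + rng.normal(0, 0.3, n_days)
    return pd.DataFrame({"log_conc": log_conc, "temp": temp,
                         "season": pd.cut(doy, 4, labels=False), "plume": plume.astype(int)})

effects, variances = [], []
for _ in range(20):                                       # 20 hypothetical monitors
    df = simulate_monitor()
    fit = smf.ols("log_conc ~ plume + temp + C(season)", data=df).fit()   # stage 1
    effects.append(fit.params["plume"])
    variances.append(fit.bse["plume"] ** 2)

w = 1 / np.asarray(variances)                             # stage 2: precision weighting
pooled = np.sum(w * np.asarray(effects)) / np.sum(w)
print(f"pooled plume-day effect: {np.expm1(pooled) * 100:.1f}% increase in concentration")
```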

  3. Estimation of demographic parameters in a tiger population from long-term camera trap data

    USGS Publications Warehouse

    Karanth, K. Ullas; Nichols, James D.; O'Connell, Allan F.; Nichols, James D.; Karanth, K. Ullas

    2011-01-01

    Chapter 7 (Karanth et al.) illustrated the use of camera trapping in combination with closed population capture–recapture (CR) models to estimate densities of tigers Panthera tigris. Such estimates can be very useful for investigating variation across space for a particular species (e.g., Karanth et al. 2004) or variation among species at a specific location. In addition, estimates of density obtained at the same site(s) over multiple years are very useful for understanding and managing populations of large carnivores. Such multi-year studies can yield estimates of rates of change in abundance. Additionally, because the fates of marked individuals are tracked through time, biologists can delve deeper into factors driving changes in abundance such as rates of survival, recruitment and movement (Williams et al. 2002). Fortunately, modern CR approaches permit the modeling of populations that change between sampling occasions as a result of births, deaths, immigration and emigration (Pollock et al. 1990; Nichols 1992). Some of these early “open population” models focused on estimation of survival rates and, to a lesser extent, abundance, but more recent models permit estimation of recruitment and movement rates as well.

  4. Performance of species occurrence estimators when basic assumptions are not met: a test using field data where true occupancy status is known

    USGS Publications Warehouse

    Miller, David A. W.; Bailey, Larissa L.; Grant, Evan H. Campbell; McClintock, Brett T.; Weir, Linda A.; Simons, Theodore R.

    2015-01-01

    Our results demonstrate that even small probabilities of misidentification and among-site detection heterogeneity can have severe effects on estimator reliability if ignored. We challenge researchers to place greater attention on both heterogeneity and false positives when designing and analysing occupancy studies. We provide 9 specific recommendations for the design, implementation and analysis of occupancy studies to better meet this challenge.

  5. Prediction of Cancer Incidence and Mortality in Korea, 2018

    PubMed Central

    Jung, Kyu-Won; Won, Young-Joo; Kong, Hyun-Joo; Lee, Eun Sook

    2018-01-01

    Purpose This study aimed to report on cancer incidence and mortality for the year 2018 to estimate Korea’s current cancer burden. Materials and Methods Cancer incidence data from 1999 to 2015 were obtained from the Korea National Cancer Incidence Database, and cancer mortality data from 1993 to 2016 were acquired from Statistics Korea. Cancer incidence and mortality were projected by fitting a linear regression model to observed age-specific cancer rates against observed years, then multiplying the projected age-specific rates by the age-specific population. The Joinpoint regression model was used to determine the year in which the linear trend changed significantly, and only data from the most recent trend period were used. Results A total of 204,909 new cancer cases and 82,155 cancer deaths are expected to occur in Korea in 2018. The most common cancer sites are expected to be lung, followed by stomach, colorectal, breast, and liver. These five cancers represent half of the overall burden of cancer in Korea. For mortality, the most common sites are expected to be lung, followed by liver, colorectal, stomach, and pancreas. Conclusion The incidence rates of all cancers in Korea are estimated to decrease gradually, mainly due to a decrease in thyroid cancer. These up-to-date estimates of the cancer burden in Korea could be an important resource for planning and evaluation of cancer-control programs. PMID:29566480
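
    The projection step described in the Materials and Methods can be illustrated with a small sketch: fit a linear trend to observed age-specific rates over the most recent trend period and multiply the extrapolated rate by the projected age-specific population. The joinpoint selection itself is omitted and all numbers are invented for illustration.

```python
# Illustrative sketch of the projection step only (joinpoint detection omitted):
# fit a linear trend to observed age-specific rates over the most recent trend
# period, extrapolate to the target year, and multiply by the projected
# age-specific population. All numbers below are invented for illustration.
import numpy as np

years = np.arange(2006, 2016)                      # observed years in the latest trend period
rate_40s = np.array([95, 97, 100, 104, 107, 109, 112, 114, 118, 121])  # per 100,000
pop_40s_2018 = 8_500_000                           # projected population aged 40-49

slope, intercept = np.polyfit(years, rate_40s, 1)  # linear regression of rate on year
rate_2018 = intercept + slope * 2018               # projected age-specific rate
cases_2018 = rate_2018 * pop_40s_2018 / 100_000    # expected new cases in this age group

print(f"projected rate {rate_2018:.1f}/100,000 -> {cases_2018:,.0f} cases")
# Repeating this for every age group, sex, and cancer site and summing gives the
# national estimate reported in the abstract.
```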

  6. Prefusion F-specific antibodies determine the magnitude of RSV neutralizing activity in human sera.

    PubMed

    Ngwuta, Joan O; Chen, Man; Modjarrad, Kayvon; Joyce, M Gordon; Kanekiyo, Masaru; Kumar, Azad; Yassine, Hadi M; Moin, Syed M; Killikelly, April M; Chuang, Gwo-Yu; Druz, Aliaksandr; Georgiev, Ivelin S; Rundlet, Emily J; Sastry, Mallika; Stewart-Jones, Guillaume B E; Yang, Yongping; Zhang, Baoshan; Nason, Martha C; Capella, Cristina; Peeples, Mark E; Ledgerwood, Julie E; McLellan, Jason S; Kwong, Peter D; Graham, Barney S

    2015-10-14

    Respiratory syncytial virus (RSV) is estimated to claim more lives among infants <1 year old than any other single pathogen, except malaria, and poses a substantial global health burden. Viral entry is mediated by a type I fusion glycoprotein (F) that transitions from a metastable prefusion (pre-F) to a stable postfusion (post-F) trimer. A highly neutralization-sensitive epitope, antigenic site Ø, is found only on pre-F. We determined what fraction of neutralizing (NT) activity in human sera is dependent on antibodies specific for antigenic site Ø or other antigenic sites on F in healthy subjects from ages 7 to 93 years. Adsorption of individual sera with stabilized pre-F protein removed >90% of NT activity and depleted binding antibodies to both F conformations. In contrast, adsorption with post-F removed ~30% of NT activity, and binding antibodies to pre-F were retained. These findings were consistent across all age groups. Protein competition neutralization assays with pre-F mutants in which sites Ø or II were altered to knock out binding of antibodies to the corresponding sites showed that these sites accounted for ~35 and <10% of NT activity, respectively. Binding competition assays with monoclonal antibodies (mAbs) indicated that the amount of site Ø-specific antibodies correlated with NT activity, whereas the magnitude of binding competed by site II mAbs did not correlate with neutralization. Our results indicate that RSV NT activity in human sera is primarily derived from pre-F-specific antibodies, and therefore, inducing or boosting NT activity by vaccination will be facilitated by using pre-F antigens that preserve site Ø. Copyright © 2015, American Association for the Advancement of Science.

  7. Analysis of stream quality in the Yampa River Basin, Colorado and Wyoming

    USGS Publications Warehouse

    Wentz, Dennis A.; Steele, Timothy Doak

    1980-01-01

    Historic data show no significant water-temperature changes since 1951 for the Little Snake or Yampa Rivers, the two major streams of the Yampa River basin in Colorado and Wyoming. Regional analyses indicate that harmonic-mean temperature is negatively correlated with altitude. No change in specific conductance since 1951 was noted for the Little Snake River; however, specific conductance in the Yampa River has increased 14% since that time and is attributed to increased agricultural and municipal use of water. Site-specific relationships between major inorganic constituents and specific conductance for the Little Snake and Yampa Rivers were similar to regional relationships developed from both historic and recent (1975) data. These relationships provide a means for estimating concentrations of major inorganic constituents from specific conductance, which is easily measured. Trace-element and nutrient data collected from August 1975 through September 1976 at 92 sites in the Yampa River basin indicate that water-quality degradation occurred upstream from 3 sites. The degradation resulted from underground drainage from pyritic materials that probably are associated with coal at one site, discharge from powerplant cooling-tower blowdown water at a second site, and runoff from a small watershed containing a gas field at the third site. Ambient concentrations of dissolved and total iron and manganese frequently exceeded proposed Colorado water-quality standards. The concentrations of many dissolved and total trace elements and nutrients were greatest during March 1976. These were associated with larger suspended-sediment concentrations and smaller pH values than at other times of the year. (USGS)

  8. Assessing FPAR Source and Parameter Optimization Scheme in Application of a Diagnostic Carbon Flux Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, D P; Ritts, W D; Wharton, S

    2009-02-26

    The combination of satellite remote sensing and carbon cycle models provides an opportunity for regional to global scale monitoring of terrestrial gross primary production, ecosystem respiration, and net ecosystem production. FPAR (the fraction of photosynthetically active radiation absorbed by the plant canopy) is a critical input to diagnostic models; however, little is known about the relative effectiveness of FPAR products from different satellite sensors or about the sensitivity of flux estimates to different parameterization approaches. In this study, we used multiyear observations of carbon flux at four eddy covariance flux tower sites within the conifer biome to evaluate these factors. FPAR products from the MODIS and SeaWiFS sensors, and the effects of single-site vs. cross-site parameter optimization, were tested with the CFLUX model. The SeaWiFS FPAR product showed greater dynamic range across sites and resulted in slightly reduced flux estimation errors relative to the MODIS product when using cross-site optimization. With site-specific parameter optimization, the flux model was effective in capturing seasonal and interannual variation in the carbon fluxes at these sites. The cross-site prediction errors were lower when using parameters from a cross-site optimization compared to parameter sets from optimization at single sites. These results support the practice of multisite optimization within a biome for parameterization of diagnostic carbon flux models.
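
    The single-site versus cross-site optimization contrast can be sketched with a toy light-use-efficiency model (GPP = epsilon x FPAR x PAR); this is not the CFLUX model, and the site data and parameter values are invented, but the sketch shows how one parameter set can be fit either to each site alone or to all sites jointly.

```python
# Hedged sketch of single-site vs. cross-site parameter optimization for a toy
# light-use-efficiency GPP model (GPP = epsilon * FPAR * PAR); this is not the
# CFLUX model, only an illustration of the two optimization strategies compared.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)

def make_site(true_eps):
    par = rng.uniform(5, 12, 200)                  # incident PAR (MJ m-2 d-1)
    fpar = rng.uniform(0.4, 0.9, 200)              # satellite FPAR
    gpp_obs = true_eps * fpar * par + rng.normal(0, 0.5, 200)  # "tower" GPP
    return par, fpar, gpp_obs

sites = [make_site(eps) for eps in (1.1, 1.3, 1.2, 1.25)]      # four conifer sites

def sse(eps, site_list):
    # sum of squared errors of the toy model over one or more sites
    return sum(np.sum((gpp - eps * fpar * par) ** 2) for par, fpar, gpp in site_list)

eps_cross = minimize_scalar(sse, bounds=(0.1, 3.0), args=(sites,), method="bounded").x
eps_single = [minimize_scalar(sse, bounds=(0.1, 3.0), args=([s],), method="bounded").x
              for s in sites]

print(f"cross-site epsilon: {eps_cross:.2f}; single-site epsilons: "
      + ", ".join(f"{e:.2f}" for e in eps_single))
# Cross-site parameters would then be applied to sites left out of the optimization
# to compare prediction errors, as in the study.
```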

  9. Regional ITS architecture guidance : developing, using, and maintaining an ITS architecture for your region

    DOT National Transportation Integrated Search

    1998-08-01

    The purpose of this TechBrief is to discuss one of the Long Term Pavement Performance (LTPP) program's stringent data requirements - site-specific measurements for estimating pavement loadings - and to illustrate the effects of traffic loading data e...

  10. Risk Prediction Models for Other Cancers or Multiple Sites

    Cancer.gov

    Developing statistical models that estimate the probability of developing cancers at other or multiple sites over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling on behavioral changes to decrease risk.

  11. APPROXIMATION OF BIODEGRADATION RATE CONSTANTS FOR MONOAROMATIC HYDROCARBONS (BTEX) IN GROUND WATER

    EPA Science Inventory

    Two methods were used to approximate site-specific biodegradation rates of monoaromatic hydrocarbons (benzene, toluene, ethylbenzene, and xylenes [BTEX]) dissolved in ground water. Both use data from monitoring wells and the hydrologic properties of the aquifer to estimate a biode...

  12. Inversion of multi-frequency electromagnetic induction data for 3D characterization of hydraulic conductivity

    USGS Publications Warehouse

    Brosten, Troy R.; Day-Lewis, Frederick D.; Schultz, Gregory M.; Curtis, Gary P.; Lane, John W.

    2011-01-01

    Electromagnetic induction (EMI) instruments provide rapid, noninvasive, and spatially dense data for characterization of soil and groundwater properties. Data from multi-frequency EMI tools can be inverted to provide quantitative electrical conductivity estimates as a function of depth. In this study, multi-frequency EMI data collected across an abandoned uranium mill site near Naturita, Colorado, USA, are inverted to produce vertical distribution of electrical conductivity (EC) across the site. The relation between measured apparent electrical conductivity (ECa) and hydraulic conductivity (K) is weak (correlation coefficient of 0.20), whereas the correlation between the depth dependent EC obtained from the inversions, and K is sufficiently strong to be used for hydrologic estimation (correlation coefficient of − 0.62). Depth-specific EC values were correlated with co-located K measurements to develop a site-specific ln(EC)–ln(K) relation. This petrophysical relation was applied to produce a spatially detailed map of K across the study area. A synthetic example based on ECa values at the site was used to assess model resolution and correlation loss given variations in depth and/or measurement error. Results from synthetic modeling indicate that optimum correlation with K occurs at ~ 0.5 m followed by a gradual correlation loss of 90% at 2.3 m. These results are consistent with an analysis of depth of investigation (DOI) given the range of frequencies, transmitter–receiver separation, and measurement errors for the field data. DOIs were estimated at 2.0 ± 0.5 m depending on the soil conductivities. A 4-layer model, with varying thicknesses, was used to invert the ECa to maximize available information within the aquifer region for improved correlations with K. Results show improved correlation between K and the corresponding inverted EC at similar depths, underscoring the importance of inversion in using multi-frequency EMI data for hydrologic estimation.
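
    A minimal sketch of the petrophysical step is shown below: a ln(EC)-ln(K) regression is fit to co-located calibration points and then applied to inverted EC values to map hydraulic conductivity. The calibration values are invented, and the EMI inversion itself is not reproduced.

```python
# Sketch of the petrophysical step: fit a site-specific ln(EC)-ln(K) relation from
# co-located measurements, then apply it to inverted EC values to map hydraulic
# conductivity. Values are invented; the EMI inversion itself is not shown.
import numpy as np

# co-located calibration data: inverted EC (mS/m) and measured K (m/d)
ec_cal = np.array([12.0, 18.0, 25.0, 33.0, 40.0, 55.0])
k_cal = np.array([8.0, 5.5, 3.2, 2.1, 1.5, 0.8])

# linear regression in log-log space: ln(K) = a * ln(EC) + b
a, b = np.polyfit(np.log(ec_cal), np.log(k_cal), 1)

def predict_k(ec):
    """Apply the fitted ln(EC)-ln(K) relation to inverted EC values."""
    return np.exp(a * np.log(ec) + b)

ec_grid = np.array([15.0, 22.0, 30.0, 48.0])     # inverted EC at mapped locations
print("estimated K (m/d):", np.round(predict_k(ec_grid), 2))
```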

  13. Inversion of multi-frequency electromagnetic induction data for 3D characterization of hydraulic conductivity

    USGS Publications Warehouse

    Brosten, T.R.; Day-Lewis, F. D.; Schultz, G.M.; Curtis, G.P.; Lane, J.W.

    2011-01-01

    Electromagnetic induction (EMI) instruments provide rapid, noninvasive, and spatially dense data for characterization of soil and groundwater properties. Data from multi-frequency EMI tools can be inverted to provide quantitative electrical conductivity estimates as a function of depth. In this study, multi-frequency EMI data collected across an abandoned uranium mill site near Naturita, Colorado, USA, are inverted to produce vertical distribution of electrical conductivity (EC) across the site. The relation between measured apparent electrical conductivity (ECa) and hydraulic conductivity (K) is weak (correlation coefficient of 0.20), whereas the correlation between the depth dependent EC obtained from the inversions, and K is sufficiently strong to be used for hydrologic estimation (correlation coefficient of -0.62). Depth-specific EC values were correlated with co-located K measurements to develop a site-specific ln(EC)-ln(K) relation. This petrophysical relation was applied to produce a spatially detailed map of K across the study area. A synthetic example based on ECa values at the site was used to assess model resolution and correlation loss given variations in depth and/or measurement error. Results from synthetic modeling indicate that optimum correlation with K occurs at ~0.5 m followed by a gradual correlation loss of 90% at 2.3 m. These results are consistent with an analysis of depth of investigation (DOI) given the range of frequencies, transmitter-receiver separation, and measurement errors for the field data. DOIs were estimated at 2.0 ± 0.5 m depending on the soil conductivities. A 4-layer model, with varying thicknesses, was used to invert the ECa to maximize available information within the aquifer region for improved correlations with K. Results show improved correlation between K and the corresponding inverted EC at similar depths, underscoring the importance of inversion in using multi-frequency EMI data for hydrologic estimation. © 2011.

  14. Municipal wastewater sludge as a sustainable bioresource in the United States

    DOE PAGES

    Seiple, Timothy E.; Coleman, André M.; Skaggs, Richard L.

    2017-04-20

    Within the United States and Puerto Rico, publicly owned treatment works (POTWs) process 130.5 Gl/d (34.5 Bgal/d) of wastewater, producing sludge as a waste product. Emerging technologies offer novel waste-to-energy pathways through whole sludge conversion into biofuels. Assessing the feasibility, scalability and tradeoffs of various energy conversion pathways is difficult in the absence of highly spatially resolved estimates of sludge production. In this study, average wastewater solids concentrations and removal rates, and site specific daily average influent flow are used to estimate site specific annual sludge production on a dry weight basis for >15,000 POTWs. Current beneficial uses, regional production hotspots and feedstock aggregation potential are also assessed. Analyses indicate 1) POTWs capture 12.56 Tg/y (13.84 MT/y) of dry solids; 2) 50% are not beneficially utilized, and 3) POTWs can support seven regions that aggregate >910 Mg/d (1000 T/d) of sludge within a travel distance of 100 km.
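
    The site-specific estimate described above reduces to a simple product for each POTW: dry solids per day equal influent flow times influent solids concentration times the solids removal fraction. The sketch below uses placeholder values, not the study's parameters.

```python
# Back-of-envelope sketch of the sludge estimate for one POTW: dry solids produced
# per year = average daily influent flow x influent solids concentration x solids
# removal fraction x 365. The numbers are placeholders, not the study's parameters.
influent_flow_m3_per_day = 40_000      # site-specific daily average influent flow
solids_mg_per_L = 250                  # average influent suspended-solids concentration
removal_fraction = 0.85                # fraction of solids captured as sludge

# mg/L is numerically g/m3, so multiplying by 1e-3 converts to kg per m3 of flow
dry_solids_kg_per_day = influent_flow_m3_per_day * solids_mg_per_L * 1e-3 * removal_fraction
dry_solids_Mg_per_year = dry_solids_kg_per_day * 365 / 1000

print(f"{dry_solids_Mg_per_year:,.0f} Mg/y of dry solids")
# Summing such estimates over >15,000 POTWs gives the national dry-solids figure,
# and spatial clustering of the site totals identifies aggregation regions.
```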

  15. Morphology and the Strength of Intermolecular Contact in Protein Crystals

    NASA Technical Reports Server (NTRS)

    Matsuura, Yoshiki; Chernov, Alexander A.

    2002-01-01

    The strengths of intermolecular contacts (macrobonds) in four lysozyme crystals were estimated based on the strengths of individual intermolecular interatomic interaction pairs. The periodic bond chain of these macrobonds accounts for the morphology of protein crystals as shown previously. Further in this paper, the surface area of contact, polar coordinate representation of contact site, Coulombic contribution on the macrobond strength, and the surface energy of the crystal have been evaluated. Comparing location of intermolecular contacts in different polymorphic crystal modifications, we show that these contacts can form a wide variety of patches on the molecular surface. The patches are located practically everywhere on this surface except for the concave active site. The contacts frequently include water molecules, with specific intermolecular hydrogen-bonds on the background of non-specific attractive interactions. The strengths of macrobonds are also compared to those of other protein complex systems. Making use of the contact strengths and taking into account bond hydration we also estimated crystal-water interfacial energies for different crystal faces.

  16. Technical Review of SRS Dose Reconstruction Methods Used By CDC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simpkins, Ali, A

    2005-07-20

    At the request of the Centers for Disease Control and Prevention (CDC), a subcontractor, Advanced Technologies and Laboratories International, Inc. (ATL), issued a draft report estimating offsite dose as a result of Savannah River Site operations for the period 1954-1992 in support of Phase III of the SRS Dose Reconstruction Project. The doses reported by ATL differed from those previously estimated by Savannah River Site (SRS) dose modelers for a variety of reasons, but primarily because (1) ATL used different source terms, (2) ATL considered trespasser/poacher scenarios and (3) ATL did not consistently use site-specific parameters or correct usage parameters. The receptors with the highest doses from atmospheric and liquid pathways were no more than about a factor of four greater than the dose values previously reported by SRS. A complete set of technical comments has also been included.

  17. Normalization of energy-dependent gamma survey data.

    PubMed

    Whicker, Randy; Chambers, Douglas

    2015-05-01

    Instruments and methods for normalization of energy-dependent gamma radiation survey data to a less energy-dependent basis of measurement are evaluated based on relevant field data collected at 15 different sites across the western United States along with a site in Mongolia. Normalization performance is assessed relative to measurements with a high-pressure ionization chamber (HPIC) due to its "flat" energy response and accurate measurement of the true exposure rate from both cosmic and terrestrial radiation. While analytically ideal for normalization applications, cost and practicality disadvantages have increased demand for alternatives to the HPIC. Regression analysis on paired measurements between energy-dependent sodium iodide (NaI) scintillation detectors (5-cm by 5-cm crystal dimensions) and the HPIC revealed highly consistent relationships among sites not previously impacted by radiological contamination (natural sites). A resulting generalized data normalization factor based on the average sensitivity of NaI detectors to naturally occurring terrestrial radiation (0.56 nGy h⁻¹ HPIC per nGy h⁻¹ NaI), combined with the calculated site-specific estimate of cosmic radiation, produced reasonably accurate predictions of HPIC readings at natural sites. Normalization against two potential alternative instruments (a tissue-equivalent plastic scintillator and an energy-compensated NaI detector) did not perform better than the sensitivity-adjustment approach at natural sites. Each approach produced unreliable estimates of HPIC readings at radiologically impacted sites, though normalization against the plastic scintillator or energy-compensated NaI detector can address incompatibilities between different energy-dependent instruments with respect to estimation of soil radionuclide levels. The appropriate data normalization method depends on the nature of the site, expected duration of the project, survey objectives, and considerations of cost and practicality.
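
    The sensitivity-adjustment normalization amounts to scaling the terrestrial portion of the NaI reading by the 0.56 factor and adding the site-specific cosmic estimate. The sketch below assumes the cosmic contribution to the NaI reading itself is negligible and uses placeholder dose rates.

```python
# Sketch of the sensitivity-adjustment normalization described in the abstract:
# the terrestrial component of the NaI reading is scaled by the average NaI-to-HPIC
# sensitivity factor (0.56) and a site-specific cosmic-ray dose rate is added.
# The cosmic estimate and readings below are placeholders, not measured values.
def predict_hpic(nai_total_nGy_h, cosmic_nGy_h, nai_cosmic_response_nGy_h=0.0):
    """Estimate the HPIC exposure-rate reading from an energy-dependent NaI survey value.

    nai_total_nGy_h: gross NaI reading; nai_cosmic_response_nGy_h: the part of the
    NaI reading attributable to cosmic radiation (assumed small here).
    """
    terrestrial_nai = nai_total_nGy_h - nai_cosmic_response_nGy_h
    return 0.56 * terrestrial_nai + cosmic_nGy_h

# example: 120 nGy/h NaI reading at a site with an estimated 40 nGy/h cosmic component
print(f"predicted HPIC: {predict_hpic(120.0, 40.0):.0f} nGy/h")
```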

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greenberg, Jim; Penuelas, J.; Guenther, Alex B.

    To survey landscape-scale fluxes of biogenic gases, a 100-meter Teflon tube was attached to a tethered balloon as a sampling inlet for a fast response Proton Transfer Reaction Mass Spectrometer (PTRMS). Along with meteorological instruments deployed on the tethered balloon and at 3 m, and outputs from a regional weather model, these observations were used to estimate landscape-scale biogenic volatile organic compound fluxes with two micrometeorological techniques: mixed layer variance and surface layer gradients. This highly mobile sampling system was deployed at four field sites near Barcelona to estimate landscape-scale BVOC emission factors in a relatively short period (3 weeks). The two micrometeorological techniques agreed within the uncertainty of the flux measurements at all four sites even though the locations had considerable heterogeneity in species distribution and complex terrain. The observed fluxes were significantly different than emissions predicted with an emission model using site-specific emission factors and land-cover characteristics. Considering the wide range in reported BVOC emission factors for individual vegetation species (more than an order of magnitude), this flux estimation technique is useful for constraining BVOC emission factors used as model inputs.

  19. Quality-assurance results for field pH and specific-conductance measurements, and for laboratory analysis, National Atmospheric Deposition Program and National Trends Network; January 1980-September 1984

    USGS Publications Warehouse

    Schroder, L.J.; Brooks, M.H.; Malo, B.A.; Willoughby, T.C.

    1986-01-01

    Five intersite comparison studies for the field determination of pH and specific conductance, using simulated-precipitation samples, were conducted by the U.S.G.S. for the National Atmospheric Deposition Program and National Trends Network. These comparisons were performed to estimate the precision of pH and specific conductance determinations made by sampling-site operators. Simulated-precipitation samples were prepared from nitric acid and deionized water. The estimated standard deviation for site-operator determination of pH was 0.25 for pH values ranging from 3.79 to 4.64; the estimated standard deviation for specific conductance was 4.6 microsiemens/cm at 25 °C for specific-conductance values ranging from 10.4 to 59.0 microsiemens/cm at 25 °C. Performance-audit samples with known analyte concentrations were prepared by the U.S.G.S. and distributed to the National Atmospheric Deposition Program's Central Analytical Laboratory. The differences between the National Atmospheric Deposition Program and National Trends Network-reported analyte concentrations and the known analyte concentrations were calculated, and the bias and precision were determined. For 1983, concentrations of calcium, magnesium, sodium, and chloride were biased at the 99% confidence limit; concentrations of potassium and sulfate were unbiased at the 99% confidence limit. Four analytical laboratories routinely analyzing precipitation were evaluated in their analysis of identical natural- and simulated-precipitation samples. Analyte bias for each laboratory was examined using analysis of variance coupled with Duncan's multiple-range test on data produced by these laboratories from the analysis of identical simulated-precipitation samples. Analyte precision for each laboratory has been estimated by calculating a pooled variance for each analyte. Interlaboratory comparability results may be used to normalize natural-precipitation chemistry data obtained from two or more of these laboratories. (Author's abstract)

  20. Localization of premature ventricular contractions from the papillary muscles using the standard 12-lead electrocardiogram: a feasibility study using a novel cardiac isochrone positioning system.

    PubMed

    van Dam, Peter M; Boyle, Noel G; Laks, Michael M; Tung, Roderick

    2016-12-01

    The precise localization of the site of origin of a premature ventricular contraction (PVC) prior to ablation can facilitate the planning and execution of the electrophysiological procedure. In clinical practice, the targeted ablation site is estimated from the standard 12-lead ECG. The accuracy of this qualitative estimation has limitations, particularly in the localization of PVCs originating from the papillary muscles. Clinical available electrocardiographic imaging (ECGi) techniques that incorporate patient-specific anatomy may improve the localization of these PVCs, but require body surface maps with greater specificity for the epicardium. The purpose of this report is to demonstrate that a novel cardiac isochrone positioning system (CIPS) program can accurately detect the specific location of the PVC on the papillary muscle using only a 12-lead ECG. Cardiac isochrone positioning system uses three components: (i) endocardial and epicardial cardiac anatomy and torso geometry derived from MRI, (ii) the patient-specific electrode positions derived from an MRI model registered 3D image, and (iii) the 12-lead ECG. CIPS localizes the PVC origin by matching the anatomical isochrone vector with the ECG vector. The predicted PVC origin was compared with the site of successful ablation or stimulation. Three patients who underwent electrophysiological mapping and ablation of PVCs originating from the papillary muscles were studied. CIPS localized the PVC origin for all three patients to the correct papillary muscle and specifically to the base, mid, or apical region. A simplified form of ECGi utilizing only 12 standard electrocardiographic leads may facilitate accurate localization of the origin of papillary muscle PVCs. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2016. For Permissions, please email: journals.permissions@oup.com.

  1. MRI-Based Intelligence Quotient (IQ) Estimation with Sparse Learning

    PubMed Central

    Wang, Liye; Wee, Chong-Yaw; Suk, Heung-Il; Tang, Xiaoying; Shen, Dinggang

    2015-01-01

    In this paper, we propose a novel framework for IQ estimation using Magnetic Resonance Imaging (MRI) data. In particular, we devise a new feature selection method based on an extended dirty model for jointly considering both element-wise sparsity and group-wise sparsity. Meanwhile, due to the absence of a large dataset with consistent scanning protocols for IQ estimation, we integrate multiple datasets scanned from different sites with different scanning parameters and protocols. As a result, there is large variability across these datasets. To address this issue, we design a two-step procedure for 1) first identifying the possible scanning site for each testing subject and 2) then estimating the testing subject’s IQ by using a specific estimator designed for that scanning site. We perform two experiments to test the performance of our method by using the MRI data collected from 164 typically developing children between 6 and 15 years old. In the first experiment, we use a multi-kernel Support Vector Regression (SVR) for estimating IQ values, and obtain an average correlation coefficient of 0.718 and also an average root mean square error of 8.695 between the true IQs and the estimated ones. In the second experiment, we use a single-kernel SVR for IQ estimation, and achieve an average correlation coefficient of 0.684 and an average root mean square error of 9.166. All these results show the effectiveness of using imaging data for IQ prediction, which, to our knowledge, is rarely done in the field. PMID:25822851
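
    The two-step procedure can be sketched with scikit-learn on synthetic data: a classifier first assigns the test subject to the most likely scanning site, and a site-specific support vector regressor then estimates IQ. The features, kernels, and sample sizes below are invented and do not reproduce the paper's extended dirty-model feature selection.

```python
# Two-step sketch of the site-aware estimation scheme (synthetic data, not the
# paper's features or kernels): step 1 identifies the likely scanning site of a
# test subject; step 2 applies an SVR trained only on that site's subjects.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n_per_site, n_features = 80, 30

X, y, site = [], [], []
for s, (offset, scale) in enumerate([(0.0, 10.0), (1.5, 12.0)]):   # two scanning sites
    Xi = rng.normal(offset, 1.0, (n_per_site, n_features))          # site-shifted MRI features
    yi = 100 + scale * Xi[:, 0] + rng.normal(0, 5, n_per_site)      # synthetic IQ scores
    X.append(Xi); y.append(yi); site.append(np.full(n_per_site, s))
X, y, site = np.vstack(X), np.concatenate(y), np.concatenate(site)

site_clf = LogisticRegression(max_iter=1000).fit(X, site)           # step 1: site identification
site_models = {s: SVR(kernel="rbf", C=10.0).fit(X[site == s], y[site == s])
               for s in np.unique(site)}                             # step 2: site-specific SVRs

x_test = rng.normal(1.5, 1.0, (1, n_features))                       # new subject (site-1-like)
s_hat = int(site_clf.predict(x_test)[0])
print(f"assigned site {s_hat}, estimated IQ {site_models[s_hat].predict(x_test)[0]:.1f}")
```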

  2. Assessment of fish assemblages and minimum sampling effort required to determine biotic integrity of large rivers in southern Idaho, 2002

    USGS Publications Warehouse

    Maret, Terry R.; Ott, D.S.

    2004-01-01

    width was determined to be sufficient for collecting an adequate number of fish to estimate species richness and evaluate biotic integrity. At most sites, about 250 fish were needed to effectively represent 95 percent of the species present. Fifty-three percent of the sites assessed, using an IBI developed specifically for large Idaho rivers, received scores of less than 50, indicating poor biotic integrity.

  3. Repeat 24-hour recalls and locally developed food composition databases: a feasible method to estimate dietary adequacy in a multi-site preconception maternal nutrition RCT

    PubMed Central

    Lander, Rebecca L.; Hambidge, K. Michael; Krebs, Nancy F.; Westcott, Jamie E.; Garces, Ana; Figueroa, Lester; Tejeda, Gabriela; Lokangaka, Adrien; Diba, Tshilenge S.; Somannavar, Manjunath S.; Honnayya, Ranjitha; Ali, Sumera A.; Khan, Umber S.; McClure, Elizabeth M.; Thorsten, Vanessa R.; Stolka, Kristen B.

    2017-01-01

    ABSTRACT Background: Our aim was to utilize a feasible quantitative methodology to estimate the dietary adequacy of >900 first-trimester pregnant women in poor rural areas of the Democratic Republic of the Congo, Guatemala, India and Pakistan. This paper outlines the dietary methods used. Methods: Local nutritionists were trained at the sites by the lead study nutritionist and received ongoing mentoring throughout the study. Training topics focused on the standardized conduct of repeat multiple-pass 24-hr dietary recalls, including interview techniques, estimation of portion sizes, and construction of a unique site-specific food composition database (FCDB). Each FCDB was based on 13 food groups and included values for moisture, energy, 20 nutrients (i.e. macro- and micronutrients), and phytate (an anti-nutrient). Nutrient values for individual foods or beverages were taken from recently developed FAO-supported regional food composition tables or the USDA national nutrient database. Appropriate adjustments for differences in moisture and application of nutrient retention and yield factors after cooking were applied, as needed. Generic recipes for mixed dishes consumed by the study population were compiled at each site, followed by calculation of a median recipe per 100 g. Each recipe’s nutrient values were included in the FCDB. Final site FCDB checks were planned according to FAO/INFOODS guidelines. Discussion: This dietary strategy provides the opportunity to assess estimated mean group usual energy and nutrient intakes and estimated prevalence of the population ‘at risk’ of inadequate intakes in first-trimester pregnant women living in four low- and middle-income countries. While challenges and limitations exist, this methodology demonstrates the practical application of a quantitative dietary strategy for a large international multi-site nutrition trial, providing within- and between-site comparisons. Moreover, it provides an excellent opportunity for local capacity building and each site FCDB can be easily modified for additional research activities conducted in other populations living in the same area. PMID:28469549

  4. Cancer incidence estimates at the national and district levels in Colombia.

    PubMed

    Piñeros, Marion; Ferlay, Jacques; Murillo, Raúl

    2006-01-01

    To estimate national and district cancer incidence for 18 major cancer sites in Colombia. National and district incidence was estimated by applying a set of age, sex and site-specific incidence/mortality ratios, obtained from a population-based cancer registry, to national and regional mortality. The work was done in Bogotá (Colombia) and Lyon (France) between May 2003 and August 2004. The annual total number of cases expected (all cancers but skin) was 17 819 in men and 18 772 in women. Among males the most frequent cancers were those of the prostate (45.8 per 100 000), stomach (36.0), and lung (20.0). In females the most frequent were those of the cervix uteri (36.8 per 100 000), breast (30.0), and stomach (20.7). Districts with the lowest death certification coverage yielded the highest incidence rates. In the absence of national population-based cancer registry data, estimates of incidence provide valuable information at national and regional levels. As mortality data are an important source for the estimation, the quality of death certification should be considered as a possible cause of bias.
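
    The estimation method reduces to multiplying national mortality in each age-sex-site stratum by the incidence/mortality ratio observed in a population-based registry for that stratum. The ratios and death counts in the sketch below are invented for illustration.

```python
# Sketch of the incidence estimation method: multiply national mortality by the
# age-, sex-, and site-specific incidence/mortality (I/M) ratio observed in a
# population-based registry. The ratios and death counts below are invented.
# (site, sex, age group) -> I/M ratio from the registry
im_ratio = {
    ("stomach", "M", "60-69"): 1.4,
    ("cervix",  "F", "40-49"): 3.5,   # better-prognosis cancers have higher I/M ratios
}

# national deaths for the same strata, from vital statistics
national_deaths = {
    ("stomach", "M", "60-69"): 820,
    ("cervix",  "F", "40-49"): 310,
}

estimated_incidence = {k: round(d * im_ratio[k]) for k, d in national_deaths.items()}
for stratum, cases in estimated_incidence.items():
    print(stratum, "-> estimated new cases:", cases)
```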

  5. Estimated Perennial Streams of Idaho and Related Geospatial Datasets

    USGS Publications Warehouse

    Rea, Alan; Skinner, Kenneth D.

    2009-01-01

    The perennial or intermittent status of a stream has bearing on many regulatory requirements. Because of changing technologies over time, cartographic representation of perennial/intermittent status of streams on U.S. Geological Survey (USGS) topographic maps is not always accurate and (or) consistent from one map sheet to another. Idaho Administrative Code defines an intermittent stream as one having a 7-day, 2-year low flow (7Q2) less than 0.1 cubic feet per second. To establish consistency with the Idaho Administrative Code, the USGS developed regional regression equations for Idaho streams for several low-flow statistics, including 7Q2. Using these regression equations, the 7Q2 streamflow may be estimated for naturally flowing streams anywhere in Idaho to help determine perennial/intermittent status of streams. Using these equations in conjunction with a Geographic Information System (GIS) technique known as weighted flow accumulation allows for an automated and continuous estimation of 7Q2 streamflow at all points along a stream, which in turn can be used to determine if a stream is intermittent or perennial according to the Idaho Administrative Code operational definition. The selected regression equations were applied to create continuous grids of 7Q2 estimates for the eight low-flow regression regions of Idaho. By applying the 0.1 ft3/s criterion, the perennial streams have been estimated in each low-flow region. Uncertainty in the estimates is shown by identifying a 'transitional' zone, corresponding to flow estimates of 0.1 ft3/s plus and minus one standard error. Considerable additional uncertainty exists in the model of perennial streams presented in this report. The regression models provide overall estimates based on general trends within each regression region. These models do not include local factors such as a large spring or a losing reach that may greatly affect flows at any given point. Site-specific flow data, assuming a sufficient period of record, generally would be considered to represent flow conditions better at a given site than flow estimates based on regionalized regression models. The geospatial datasets of modeled perennial streams are considered a first-cut estimate, and should not be construed to override site-specific flow data.
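
    The classification step can be sketched as applying the 0.1 ft3/s criterion to gridded 7Q2 estimates, with a transitional band of one standard error around the threshold. The multiplicative standard-error factor below is a placeholder assumption; the report derives the uncertainty from the regional regression equations.

```python
# Sketch of the classification step: apply the 0.1 ft3/s criterion to gridded 7Q2
# estimates and flag a "transitional" band of one standard error around the
# threshold. The multiplicative standard-error factor is a placeholder, and the
# 7Q2 values are invented; the report's grids come from weighted flow accumulation.
import numpy as np

q7q2 = np.array([0.02, 0.06, 0.09, 0.12, 0.30, 1.50])   # estimated 7Q2 along a stream (ft3/s)
se_factor = 1.6                                           # assumed one-standard-error factor

lower, upper = 0.1 / se_factor, 0.1 * se_factor           # transitional band around 0.1 ft3/s
status = np.where(q7q2 < lower, "intermittent",
                  np.where(q7q2 > upper, "perennial", "transitional"))
for q, s in zip(q7q2, status):
    print(f"7Q2 = {q:5.2f} ft3/s -> {s}")
```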

  6. Quantitative tolerance values for common stream benthic macroinvertebrates in the Yangtze River Delta, Eastern China.

    PubMed

    Qin, Chun-Yan; Zhou, Jin; Cao, Yong; Zhang, Yong; Hughes, Robert M; Wang, Bei-Xin

    2014-09-01

    Aquatic organisms' tolerance to water pollution is widely used to monitor and assess freshwater ecosystem health. Tolerance values (TVs) estimated based on statistical analyses of species-environment relationships are more objective than those assigned by expert opinion. Region-specific TVs are the basis for developing accurate bioassessment metrics, particularly in developing countries, where both aquatic biota and their responses to human disturbances have been poorly documented. We used principal component analysis to derive a synthetic gradient for four stressor variables (total nitrogen, total phosphorus, dissolved oxygen, and % silt) based on 286 sampling sites in the Taihu Lake and Qiantang River basins (Yangtze River Delta), China. We used the scores of taxa on the first principal component (PC1), which explained 49.8% of the variance, to estimate the tolerance values (TV(r)) of 163 macroinvertebrate taxa that were collected from at least 20 sites, 81 of which were not included in the Hilsenhoff TV lists (TV(h)) of 1987. All estimates were scaled into the range of 1-10 as in TV(h). Of all the taxa with different TVs, 46.3% of TV(r) were lower and 52.4% were higher than TV(h). TV(r) were significantly (p < 0.01), but weakly (r(2) = 0.34), correlated with TV(h). Seven biotic metrics based on TV(r) were more strongly correlated with the main stressors and were more effective at discriminating reference sites from impacted sites than those based on TV(h). Our results highlight the importance of developing region-specific TVs for macroinvertebrate-based bioassessment and their potential to facilitate the assessment of streams in China, particularly in the Yangtze River Delta.
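
    A minimal sketch of the tolerance-value derivation follows: a synthetic stressor gradient is taken as the first principal component of the site-level stressors, each taxon is scored as its abundance-weighted mean site score on PC1, and the scores are rescaled to 1-10. The use of abundance-weighted averaging and all data below are assumptions for illustration.

```python
# Sketch of deriving region-specific tolerance values: build a synthetic stressor
# gradient with PCA on site-level stressors, score each taxon as its abundance-
# weighted mean site score on PC1, then rescale the scores to 1-10. Whether the
# authors used weighted averaging is an assumption; data here are synthetic.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
n_sites, n_taxa = 286, 6

stressors = rng.normal(size=(n_sites, 4))                 # TN, TP, DO, % silt (standardized)
pc1 = PCA(n_components=1).fit_transform(stressors)[:, 0]  # synthetic pollution gradient

abund = rng.poisson(3, size=(n_sites, n_taxa))            # taxon abundances by site
taxon_score = (abund * pc1[:, None]).sum(axis=0) / abund.sum(axis=0)  # weighted mean PC1 score

tv = 1 + 9 * (taxon_score - taxon_score.min()) / np.ptp(taxon_score)  # rescale to 1-10
print(np.round(tv, 1))
```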

  7. Phenotypic dissection of bone mineral density reveals skeletal site specificity and facilitates the identification of novel loci in the genetic regulation of bone mass attainment.

    PubMed

    Kemp, John P; Medina-Gomez, Carolina; Estrada, Karol; St Pourcain, Beate; Heppe, Denise H M; Warrington, Nicole M; Oei, Ling; Ring, Susan M; Kruithof, Claudia J; Timpson, Nicholas J; Wolber, Lisa E; Reppe, Sjur; Gautvik, Kaare; Grundberg, Elin; Ge, Bing; van der Eerden, Bram; van de Peppel, Jeroen; Hibbs, Matthew A; Ackert-Bicknell, Cheryl L; Choi, Kwangbom; Koller, Daniel L; Econs, Michael J; Williams, Frances M K; Foroud, Tatiana; Zillikens, M Carola; Ohlsson, Claes; Hofman, Albert; Uitterlinden, André G; Davey Smith, George; Jaddoe, Vincent W V; Tobias, Jonathan H; Rivadeneira, Fernando; Evans, David M

    2014-06-01

    Heritability of bone mineral density (BMD) varies across skeletal sites, reflecting different relative contributions of genetic and environmental influences. To quantify the degree to which common genetic variants tag and environmental factors influence BMD, at different sites, we estimated the genetic (rg) and residual (re) correlations between BMD measured at the upper limbs (UL-BMD), lower limbs (LL-BMD) and skull (SK-BMD), using total-body DXA scans of ∼ 4,890 participants recruited by the Avon Longitudinal Study of Parents and their Children (ALSPAC). Point estimates of rg indicated that appendicular sites have a greater proportion of shared genetic architecture (LL-/UL-BMD rg = 0.78) between them, than with the skull (UL-/SK-BMD rg = 0.58 and LL-/SK-BMD rg = 0.43). Likewise, the residual correlation between BMD at appendicular sites (r(e) = 0.55) was higher than the residual correlation between SK-BMD and BMD at appendicular sites (r(e) = 0.20-0.24). To explore the basis for the observed differences in rg and re, genome-wide association meta-analyses were performed (n ∼ 9,395), combining data from ALSPAC and the Generation R Study identifying 15 independent signals from 13 loci associated at genome-wide significant level across different skeletal regions. Results suggested that previously identified BMD-associated variants may exert site-specific effects (i.e. differ in the strength of their association and magnitude of effect across different skeletal sites). In particular, variants at CPED1 exerted a larger influence on SK-BMD and UL-BMD when compared to LL-BMD (P = 2.01 × 10(-37)), whilst variants at WNT16 influenced UL-BMD to a greater degree when compared to SK- and LL-BMD (P = 2.31 × 10(-14)). In addition, we report a novel association between RIN3 (previously associated with Paget's disease) and LL-BMD (rs754388: β = 0.13, SE = 0.02, P = 1.4 × 10(-10)). Our results suggest that BMD at different skeletal sites is under a mixture of shared and specific genetic and environmental influences. Allowing for these differences by performing genome-wide association at different skeletal sites may help uncover new genetic influences on BMD.

  8. Phenotypic Dissection of Bone Mineral Density Reveals Skeletal Site Specificity and Facilitates the Identification of Novel Loci in the Genetic Regulation of Bone Mass Attainment

    PubMed Central

    Estrada, Karol; St Pourcain, Beate; Heppe, Denise H. M.; Warrington, Nicole M.; Oei, Ling; Ring, Susan M.; Kruithof, Claudia J.; Timpson, Nicholas J.; Wolber, Lisa E.; Reppe, Sjur; Gautvik, Kaare; Grundberg, Elin; Ge, Bing; van der Eerden, Bram; van de Peppel, Jeroen; Hibbs, Matthew A.; Ackert-Bicknell, Cheryl L.; Choi, Kwangbom; Koller, Daniel L.; Econs, Michael J.; Williams, Frances M. K.; Foroud, Tatiana; Carola Zillikens, M.; Ohlsson, Claes; Hofman, Albert; Uitterlinden, André G.; Davey Smith, George; Jaddoe, Vincent W. V.; Tobias, Jonathan H.; Rivadeneira, Fernando; Evans, David M.

    2014-01-01

    Heritability of bone mineral density (BMD) varies across skeletal sites, reflecting different relative contributions of genetic and environmental influences. To quantify the degree to which common genetic variants tag and environmental factors influence BMD, at different sites, we estimated the genetic (rg) and residual (re) correlations between BMD measured at the upper limbs (UL-BMD), lower limbs (LL-BMD) and skull (SK-BMD), using total-body DXA scans of ∼4,890 participants recruited by the Avon Longitudinal Study of Parents and their Children (ALSPAC). Point estimates of rg indicated that appendicular sites have a greater proportion of shared genetic architecture (LL-/UL-BMD rg = 0.78) between them, than with the skull (UL-/SK-BMD rg = 0.58 and LL-/SK-BMD rg = 0.43). Likewise, the residual correlation between BMD at appendicular sites (re = 0.55) was higher than the residual correlation between SK-BMD and BMD at appendicular sites (re = 0.20–0.24). To explore the basis for the observed differences in rg and re, genome-wide association meta-analyses were performed (n∼9,395), combining data from ALSPAC and the Generation R Study identifying 15 independent signals from 13 loci associated at genome-wide significant level across different skeletal regions. Results suggested that previously identified BMD-associated variants may exert site-specific effects (i.e. differ in the strength of their association and magnitude of effect across different skeletal sites). In particular, variants at CPED1 exerted a larger influence on SK-BMD and UL-BMD when compared to LL-BMD (P = 2.01×10−37), whilst variants at WNT16 influenced UL-BMD to a greater degree when compared to SK- and LL-BMD (P = 2.31×10−14). In addition, we report a novel association between RIN3 (previously associated with Paget's disease) and LL-BMD (rs754388: β = 0.13, SE = 0.02, P = 1.4×10−10). Our results suggest that BMD at different skeletal sites is under a mixture of shared and specific genetic and environmental influences. Allowing for these differences by performing genome-wide association at different skeletal sites may help uncover new genetic influences on BMD. PMID:24945404

  9. Evaluation and application of site-specific data to revise the first-order decay model for estimating landfill gas generation and emissions at Danish landfills.

    PubMed

    Mou, Zishen; Scheutz, Charlotte; Kjeldsen, Peter

    2015-06-01

    Methane (CH₄) generated from low-organic waste degradation at four Danish landfills was estimated by three first-order decay (FOD) landfill gas (LFG) generation models (LandGEM, IPCC, and Afvalzorg). Actual waste data from Danish landfills were applied to fit the waste categories required by the IPCC and Afvalzorg models. In general, the single-phase model, LandGEM, significantly overestimated CH₄ generation, because its default values for the key parameters are too high for low-organic waste scenarios. The key parameters were the biochemical CH₄ potential (BMP) and the CH₄ generation rate constant (k-value). In comparison to the IPCC model, the Afvalzorg model was more suitable for estimating CH₄ generation at Danish landfills, because it defined more appropriate waste categories than the traditional municipal solid waste (MSW) fractions. Moreover, the Afvalzorg model could better show the influence of not only the total disposed waste amount, but also the various waste categories. By using laboratory-determined BMPs and k-values for shredder, sludge, mixed bulky waste, and street-cleaning waste, the Afvalzorg model was revised. The revised model estimated smaller cumulative CH₄ generation at the four Danish landfills (from the start of disposal until 2020 and until 2100). Through a CH₄ mass balance approach, fugitive CH₄ emissions from whole sites and a specific cell for shredder waste were aggregated based on the revised Afvalzorg model outcomes. Aggregated results were in good agreement with field measurements, indicating that the revised Afvalzorg model could provide practical and accurate estimation of Danish LFG emissions. This study is valuable for both researchers and engineers aiming to predict, control, and mitigate fugitive CH₄ emissions from landfills receiving low-organic waste. Landfill operators use first-order decay (FOD) models to estimate methane (CH₄) generation. A single-phase model (LandGEM) and a traditional model (IPCC) could result in overestimation when handling a low-organic waste scenario. Site-specific data were important and capable of calibrating key parameter values in FOD models. The comparison of the revised Afvalzorg model outcomes and field measurements at four Danish landfills provided a guideline for revising the Pollutants Release and Transfer Registers (PRTR) model, as well as indicating noteworthy waste fractions that could emit CH₄ at modern landfills.
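
    A multi-phase first-order decay calculation in the spirit of the Afvalzorg approach can be sketched as follows: each waste category has its own BMP and decay constant, and annual generation from waste landfilled in year t0 is approximated as M x BMP x k x exp(-k(t - t0)). The parameter values and disposal amounts below are illustrative, not the laboratory-determined values from the study.

```python
# Minimal multi-phase first-order decay (FOD) sketch in the spirit of the Afvalzorg
# approach: each waste category has its own CH4 potential (BMP) and decay constant k,
# and annual generation from waste landfilled in year t0 is M * BMP * k * exp(-k*(t-t0)).
# Parameter values are illustrative, not the laboratory-determined values in the paper.
import math

categories = {
    # category: (BMP, m3 CH4 per Mg wet waste; k, 1/yr)
    "shredder":        (15.0, 0.10),
    "sludge":          (25.0, 0.20),
    "mixed_bulky":     (35.0, 0.08),
    "street_cleaning": (10.0, 0.15),
}

# tonnes disposed per year and category: {year: {category: Mg}}
disposal = {2000: {"shredder": 20_000, "sludge": 5_000},
            2005: {"mixed_bulky": 30_000, "street_cleaning": 8_000}}

def ch4_generated(year):
    """Annual CH4 generation (m3/yr) in a given year from all waste in place."""
    total = 0.0
    for t0, masses in disposal.items():
        if t0 > year:
            continue
        for cat, mass in masses.items():
            bmp, k = categories[cat]
            total += mass * bmp * k * math.exp(-k * (year - t0))
    return total

print(f"CH4 in 2020: {ch4_generated(2020):,.0f} m3/yr")
```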

  10. Estimating evapotranspiration in natural and constructed wetlands

    USGS Publications Warehouse

    Lott, R. Brandon; Hunt, Randall J.

    2001-01-01

    Difficulties in accurately calculating evapotranspiration (ET) in wetlands can lead to inaccurate water balances—information important for many compensatory mitigation projects. Simple meteorological methods or off-site ET data often are used to estimate ET, but these approaches do not include potentially important site-specific factors such as plant community, root-zone water levels, and soil properties. The objective of this study was to compare a commonly used meteorological estimate of potential evapotranspiration (PET) with direct measurements of ET (lysimeters and water-table fluctuations) and small-scale root-zone geochemistry in a natural and constructed wetland system. Unlike what has been commonly noted, the results of the study demonstrated that the commonly used Penman combination method of estimating PET underestimated the ET that was measured directly in the natural wetland over most of the growing season. This result is likely due to surface heterogeneity and related roughness effects not included in the simple PET estimate. The meteorological method more closely approximated season-long measured ET rates in the constructed wetland but may overestimate the ET rate late in the growing season. ET rates also were temporally variable in wetlands over a range of time scales because they can be influenced by the relation of the water table to the root zone and the timing of plant senescence. Small-scale geochemical sampling of the shallow root zone was able to provide an independent evaluation of ET rates, supporting the identification of higher ET rates in the natural wetlands and differences in temporal ET rates due to the timing of senescence. These discrepancies illustrate potential problems with extrapolating off-site estimates of ET or single measurements of ET from a site over space or time.
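
    For reference, a common textbook form of the Penman combination equation is sketched below; the Shuttleworth (1993) open-water wind function is used here as an assumption and is not necessarily the exact formulation applied in the study.

```python
# Hedged sketch of a Penman combination estimate of potential ET (the general form
# the abstract refers to); the Shuttleworth (1993) open-water wind function is used
# as a common textbook choice and may differ from the study's exact formulation.
# Inputs are daily values; output is mm/day.
import math

def penman_pet(t_mean_c, rn_mj_m2_d, u2_m_s, rh_frac, gamma_kpa_c=0.066, lam_mj_kg=2.45):
    """Penman combination PET (mm/day) from temperature, net radiation, wind, humidity."""
    es = 0.6108 * math.exp(17.27 * t_mean_c / (t_mean_c + 237.3))     # sat. vapor pressure, kPa
    ea = rh_frac * es                                                  # actual vapor pressure
    delta = 4098.0 * es / (t_mean_c + 237.3) ** 2                      # slope of es curve, kPa/C
    e_aero = 6.43 * (1.0 + 0.536 * u2_m_s) * (es - ea) / lam_mj_kg     # aerodynamic term, mm/day
    e_rad = rn_mj_m2_d / lam_mj_kg                                     # radiation term, mm/day
    return (delta * e_rad + gamma_kpa_c * e_aero) / (delta + gamma_kpa_c)

print(f"{penman_pet(t_mean_c=20.0, rn_mj_m2_d=14.0, u2_m_s=2.0, rh_frac=0.6):.1f} mm/day")
```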

  11. An assessment of catalytic residue 3D ensembles for the prediction of enzyme function.

    PubMed

    Žváček, Clemens; Friedrichs, Gerald; Heizinger, Leonhard; Merkl, Rainer

    2015-11-04

    The central element of each enzyme is the catalytic site, which commonly catalyzes a single biochemical reaction with high specificity. It was unclear to us how often sites that catalyze the same or highly similar reactions evolved on different, i. e. non-homologous protein folds and how similar their 3D poses are. Both similarities are key criteria for assessing the usability of pose comparison for function prediction. We have analyzed the SCOP database on the superfamily level in order to estimate the number of non-homologous enzymes possessing the same function according to their EC number. 89% of the 873 substrate-specific functions (four digit EC number) assigned to mono-functional, single-domain enzymes were only found in one superfamily. For a reaction-specific grouping (three digit EC number), this value dropped to 35%, indicating that in approximately 65% of all enzymes the same function evolved in two or more non-homologous proteins. For these isofunctional enzymes, structural similarity of the catalytic sites may help to predict function, because neither high sequence similarity nor identical folds are required for a comparison. To assess the specificity of catalytic 3D poses, we compiled the redundancy-free set ENZ_SITES, which comprises 695 sites, whose composition and function are well-defined. We compared their poses with the help of the program Superpose3D and determined classification performance. If the sites were from different superfamilies, the number of true and false positive predictions was similarly high, both for a coarse and a detailed grouping of enzyme function. Moreover, classification performance did not improve drastically, if we additionally used homologous sites to predict function. For a large number of enzymatic functions, dissimilar sites evolved that catalyze the same reaction and it is the individual substrate that determines the arrangement of the catalytic site and its local environment. These substrate-specific requirements turn the comparison of catalytic residues into a weak classifier for the prediction of enzyme function.

  12. ASSESSING THE ACCURACY OF NATIONAL LAND COVER DATASET AREA ESTIMATES AT MULTIPLE SPATIAL EXTENTS

    EPA Science Inventory

    Site specific accuracy assessments provide fine-scale evaluation of the thematic accuracy of land use/land cover (LULC) datasets; however, they provide little insight into LULC accuracy across varying spatial extents. Additionally, LULC data are typically used to describe lands...

  13. UNCERTAINTY AND THE JOHNSON-ETTINGER MODEL FOR VAPOR INTRUSION CALCULATIONS

    EPA Science Inventory

    The Johnson-Ettinger Model is widely used for assessing the impacts of contaminated vapors on residential air quality. Typical use of this model relies on a suite of estimated data, with few site-specific measurements. Software was developed to provide the public with automate...

  14. Hydrologic downscaling of soil moisture using global data without site-specific calibration

    USDA-ARS?s Scientific Manuscript database

    Numerous applications require fine-resolution (10-30 m) soil moisture patterns, but most satellite remote sensing and land-surface models provide coarse-resolution (9-60 km) soil moisture estimates. The Equilibrium Moisture from Topography, Vegetation, and Soil (EMT+VS) model downscales soil moistu...

  15. Comprehensive analysis of proton range uncertainties related to patient stopping-power-ratio estimation using the stoichiometric calibration

    PubMed Central

    Yang, M; Zhu, X R; Park, PC; Titt, Uwe; Mohan, R; Virshup, G; Clayton, J; Dong, L

    2012-01-01

    The purpose of this study was to analyze factors affecting proton stopping-power-ratio (SPR) estimations and range uncertainties in proton therapy planning using the standard stoichiometric calibration. The SPR uncertainties were grouped into five categories according to their origins and then estimated based on previously published reports or measurements. For the first time, the impact of tissue composition variations on SPR estimation was assessed and the uncertainty estimates of each category were determined for low-density (lung), soft, and high-density (bone) tissues. A composite, 95th percentile water-equivalent-thickness uncertainty was calculated from multiple beam directions in 15 patients with various types of cancer undergoing proton therapy. The SPR uncertainties (1σ) were quite different (ranging from 1.6% to 5.0%) in different tissue groups, although the final combined uncertainty (95th percentile) for different treatment sites was fairly consistent at 3.0–3.4%, primarily because soft tissue is the dominant tissue type in human body. The dominant contributing factor for uncertainties in soft tissues was the degeneracy of Hounsfield Numbers in the presence of tissue composition variations. To reduce the overall uncertainties in SPR estimation, the use of dual-energy computed tomography is suggested. The values recommended in this study based on typical treatment sites and a small group of patients roughly agree with the commonly referenced value (3.5%) used for margin design. By using tissue-specific range uncertainties, one could estimate the beam-specific range margin by accounting for different types and amounts of tissues along a beam, which may allow for customization of range uncertainty for each beam direction. PMID:22678123

  16. Air-water exchange of PAHs and OPAHs at a superfund mega-site.

    PubMed

    Tidwell, Lane G; Blair Paulik, L; Anderson, Kim A

    2017-12-15

    Chemical fate is a concern at environmentally contaminated sites, but characterizing that fate can be difficult. Identifying and quantifying the movement of chemicals at the air-water interface are important steps in characterizing chemical fate. Superfund sites are often suspected sources of air pollution due to legacy sediment and water contamination. A quantitative assessment of polycyclic aromatic hydrocarbon (PAH) and oxygenated PAH (OPAH) diffusive flux in a river system that contains a Superfund Mega-site, and passes through residential, urban and agricultural land, has not been reported before. Here, passive sampling devices (PSDs) were used to measure 60 PAHs and 22 OPAHs in air and water. From these concentrations, the magnitude and direction of contaminant flux between these two compartments were calculated. The magnitude of PAH flux was greater at sites near or within the Superfund Mega-site than outside of the Superfund Mega-site. The largest net individual PAH deposition at a single site was for naphthalene, at a rate of -14,200 (±5,780) (ng/m²)/day. The estimated one-year total flux of phenanthrene was -7.9×10⁵ (ng/m²)/year. Human health risks associated with inhalation of vapor-phase PAHs and dermal exposure to PAHs in water were assessed by calculating benzo[a]pyrene equivalent concentrations. Excess lifetime cancer risk estimates show potential increased risk associated with exposure to PAHs at sites within and in close proximity to the Superfund Mega-site. Specifically, estimated excess lifetime cancer risk associated with dermal exposure and inhalation of PAHs was above 1 in 1 million within the Superfund Mega-site. The predominant depositional flux profile observed in this study suggests that the river water in this Superfund site is largely a sink for airborne PAHs, rather than a source. Copyright © 2017 Elsevier B.V. All rights reserved.
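
    The flux calculation summarized above pairs the air and water concentrations measured by the PSDs. A minimal sketch of the widely used two-film (Whitman) formulation is given below; the concentrations, dimensionless Henry's law constant, and mass-transfer velocity are invented placeholders, not values from the study.

      # Minimal two-film sketch of net air-water diffusive flux for one compound.
      # Positive flux = volatilization (water to air); negative = deposition.
      # All numeric inputs below are hypothetical, not data from the study.
      def net_flux(c_water_ng_m3, c_air_ng_m3, henry_dimensionless, k_ol_m_per_day):
          """Net flux in (ng/m^2)/day from the departure from air-water equilibrium."""
          return k_ol_m_per_day * (c_water_ng_m3 - c_air_ng_m3 / henry_dimensionless)

      flux = net_flux(c_water_ng_m3=40.0, c_air_ng_m3=15.0,
                      henry_dimensionless=0.02, k_ol_m_per_day=0.5)
      print(f"net flux: {flux:.1f} (ng/m^2)/day")   # negative here: net deposition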

  17. Heat, chloride, and specific conductance as ground water tracers near streams

    USGS Publications Warehouse

    Cox, M.H.; Su, G.W.; Constantz, J.

    2007-01-01

    Commonly measured water quality parameters were compared to heat as tracers of stream water exchange with ground water. Temperature, specific conductance, and chloride were sampled at various frequencies in the stream and adjacent wells over a 2-year period. Strong seasonal variations in stream water were observed for temperature and specific conductance. In observation wells where the temperature response correlated to stream water, chloride and specific conductance values were similar to stream water values as well, indicating significant stream water exchange with ground water. At sites where ground water temperature fluctuations were negligible, chloride and/or specific conductance values did not correlate to stream water values, indicating that ground water was not significantly influenced by exchange with stream water. Best-fit simulation modeling was performed at two sites to derive temperature-based estimates of hydraulic conductivities of the alluvial sediments between the stream and wells. These estimates were used in solute transport simulations for a comparison of measured and simulated values for chloride and specific conductance. Simulation results showed that hydraulic conductivities vary seasonally and annually. This variability was a result of seasonal changes in temperature-dependent hydraulic conductivity and scouring or clogging of the streambed. Specific conductance fits were good, while chloride data were difficult to fit due to the infrequent (quarterly) stream water chloride measurements during the study period. Combined analyses of temperature, chloride, and specific conductance led to improved quantification of the spatial and temporal variability of stream water exchange with shallow ground water in an alluvial system. © 2007 National Ground Water Association.

  18. Recharge rates and aquifer hydraulic characteristics for selected drainage basins in middle and east Tennessee

    USGS Publications Warehouse

    Hoos, A.B.

    1990-01-01

    Quantitative information concerning aquifer hydrologic and hydraulic characteristics is needed to manage the development of ground-water resources. These characteristics are poorly defined for the bedrock aquifers in Middle and East Tennessee where demand for water is increasing. This report presents estimates of recharge rate, storage coefficient, diffusivity, and transmissivity for representative drainage basins in Middle and East Tennessee, as determined from analyses of stream-aquifer interactions. The drainage basins have been grouped according to the underlying major aquifer, then statistical descriptions applied to each group, in order to define the areal distribution of these characteristics. Aquifer recharge rates are estimated for representative low, average, and high flow years for 63 drainage basins using hydrograph analysis techniques. Net annual recharge during average flow years for all basins ranges from 4.1 to 16.8 in/yr (inches per year), with a mean value of 7.3 in/yr. In general, recharge rates are highest for basins underlain by the Blue Ridge aquifer (mean value 11.7 in/yr) and lowest for basins underlain by the Central Basin aquifer (mean value 5.6 in/yr). Mean recharge values for the Cumberland Plateau, Highland Rim, and Valley and Ridge aquifers are 6.5, 7.4, and 6.6 in/yr, respectively. Gravity drainage characterizes ground-water flow in most surficial bedrock aquifers in Tennessee. Accordingly, a gravity yield analysis, which compares concurrent water-level and streamflow hydrographs, was used to estimate aquifer storage coefficient for nine study basins. The basin estimates range from 0.002 to 0.140; however, most estimates are within a narrow range of values, from 0.01 to 0.025. Accordingly, storage coefficient is estimated to be 0.01 for all aquifers in Middle and East Tennessee, with the exception of the aquifer in the inner part of the Central Basin, for which storage coefficient is estimated to be 0.002. Estimates of aquifer hydraulic diffusivity are derived from estimates of the streamflow recession index and drainage density for 75 drainage basins; values range from 3,300 to 130,000 ft^2/d (feet squared per day). Basin-specific and site-specific estimates of transmissivity are computed from estimates of hydraulic diffusivity and specific-capacity test data, respectively. Basin-specific, or areal, estimates of transmissivity range from 22 to 1,300 ft^2/d, with a mean of 240 ft^2/d. In general, areal transmissivity is highest for basins underlain by the Cumberland Plateau aquifer (mean value 480 ft^2/d) and lowest for basins underlain by the Central Basin aquifer (mean value 79 ft^2/d). Mean transmissivity values for the Highland Rim, Valley and Ridge, and Blue Ridge aquifers are 320, 140, and 120 ft^2/d, respectively. Site-specific estimates of transmissivity, computed from specific-capacity data from 118 test wells in Middle and East Tennessee, range from 2 to 93,000 ft^2/d, with a mean of 2,600 ft^2/d. Mean transmissivity values for the Cumberland Plateau, Highland Rim, Central Basin, Valley and Ridge, and Blue Ridge aquifers are 2,800, 1,200, 7,800, 390, and 650 ft^2/d, respectively.

  19. Use of NMR logging to obtain estimates of hydraulic conductivity in the High Plains aquifer, Nebraska, USA

    USGS Publications Warehouse

    Dlubac, Katherine; Knight, Rosemary; Song, Yi-Qiao; Bachman, Nate; Grau, Ben; Cannia, Jim; Williams, John

    2013-01-01

    Hydraulic conductivity (K) is one of the most important parameters of interest in groundwater applications because it quantifies the ease with which water can flow through an aquifer material. Hydraulic conductivity is typically measured by conducting aquifer tests or wellbore flow (WBF) logging. Of interest in our research is the use of proton nuclear magnetic resonance (NMR) logging to obtain information about water-filled porosity and pore space geometry, the combination of which can be used to estimate K. In this study, we acquired a suite of advanced geophysical logs, aquifer tests, WBF logs, and sidewall cores at the field site in Lexington, Nebraska, which is underlain by the High Plains aquifer. We first used two empirical equations developed for petroleum applications to predict K from NMR logging data: the Schlumberger-Doll Research equation (KSDR) and the Timur-Coates equation (KT-C), with the standard empirical constants determined for consolidated materials. We upscaled our NMR-derived K estimates to the scale of the WBF-logging K (KWBF-logging) estimates for comparison. All the upscaled KT-C estimates were within an order of magnitude of KWBF-logging and all of the upscaled KSDR estimates were within 2 orders of magnitude of KWBF-logging. We optimized the fit between the upscaled NMR-derived K and KWBF-logging estimates to determine a set of site-specific empirical constants for the unconsolidated materials at our field site. We conclude that reliable estimates of K can be obtained from NMR logging data, thus providing an alternate method for obtaining estimates of K at high levels of vertical resolution.
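
    In their commonly cited forms, the two empirical transforms named above are simple power laws in NMR porosity and either the logarithmic-mean T2 or the free-fluid/bound-fluid ratio. The sketch below uses typical published constants for consolidated rock, not the site-specific constants fit in the study, and it yields permeability (millidarcies) rather than hydraulic conductivity; the unit conversion is a separate step.

      # Commonly cited forms of the two NMR permeability transforms; the constants
      # are typical published values for consolidated materials, not the
      # site-specific constants determined in the study.
      def k_sdr(phi, t2_lm_ms, c=4.0):
          """Schlumberger-Doll Research form: k ~ C * phi**4 * T2LM**2 (phi as a fraction, k in mD)."""
          return c * phi ** 4 * t2_lm_ms ** 2

      def k_timur_coates(phi_pu, ffv, bvi, c=10.0):
          """Timur-Coates form: k ~ ((phi/C)**2 * (FFV/BVI))**2 (phi in percent, k in mD)."""
          return ((phi_pu / c) ** 2 * (ffv / bvi)) ** 2

      print(k_sdr(phi=0.30, t2_lm_ms=200.0))                 # illustrative inputs only
      print(k_timur_coates(phi_pu=30.0, ffv=0.25, bvi=0.05))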

  20. Comparison and continuous estimates of fecal coliform and Escherichia coli bacteria in selected Kansas streams, May 1999 through April 2002

    USGS Publications Warehouse

    Rasmussen, Patrick P.; Ziegler, Andrew C.

    2003-01-01

    The sanitary quality of water and its use as a public-water supply and for recreational activities, such as swimming, wading, boating, and fishing, can be evaluated on the basis of fecal coliform and Escherichia coli (E. coli) bacteria densities. This report describes the overall sanitary quality of surface water in selected Kansas streams, the relation between fecal coliform and E. coli, the relation between turbidity and bacteria densities, and how continuous bacteria estimates can be used to evaluate the water-quality conditions in selected Kansas streams. Samples for fecal coliform and E. coli were collected at 28 surface-water sites in Kansas. Of the 318 samples collected, 18 percent exceeded the current Kansas Department of Health and Environment (KDHE) secondary contact recreational, single-sample criterion for fecal coliform (2,000 colonies per 100 milliliters of water). Of the 219 samples collected during the recreation months (April 1 through October 31), 21 percent exceeded the current (2003) KDHE single-sample fecal coliform criterion for secondary contact recreation (2,000 colonies per 100 milliliters of water) and 36 percent exceeded the U.S. Environmental Protection Agency (USEPA) recommended single-sample primary contact recreational criterion for E. coli (576 colonies per 100 milliliters of water). Comparisons of fecal coliform and E. coli criteria indicated that more than one-half of the streams sampled could exceed USEPA recommended E. coli criteria more frequently than the current KDHE fecal coliform criteria. In addition, the ratios of E. coli to fecal coliform (EC/FC) were smallest for sites with slightly saline water (specific conductance greater than 1,000 microsiemens per centimeter at 25 degrees Celsius), indicating that E. coli may not be a good indicator of sanitary quality for those streams. Enterococci bacteria may provide a more accurate assessment of the potential for swimming-related illnesses in these streams. Ratios of EC/FC and linear regression models were developed for estimating E. coli densities on the basis of measured fecal coliform densities for six individual and six groups of surface-water sites. Regression models developed for the six individual surface-water sites and six groups of sites explain at least 89 percent of the variability in E. coli densities. The EC/FC ratios and regression models are site specific and make it possible to convert historic fecal coliform bacteria data to estimated E. coli densities for the selected sites. The EC/FC ratios can be used to estimate E. coli for any range of historical fecal coliform densities, and in some cases with less error than the regression models. The basin- and statewide regression models explained at least 93 percent of the variance and best represent the sites where a majority of the data used to develop the models were collected (Kansas and Little Arkansas Basins). Comparison of the current (2003) KDHE geometric-mean primary contact criterion for fecal coliform bacteria of 200 col/100 mL to the 2002 USEPA recommended geometric-mean criterion of 126 col/100 mL for E. coli results in an EC/FC ratio of 0.63. The geometric-mean EC/FC ratio for all sites except Rattlesnake Creek (site 21) is 0.77, indicating that considerably more than 63 percent of the fecal coliform is E. coli. This potentially could lead to more exceedances of the recommended E. coli criterion, where the water now meets the current (2003) 200-col/100 mL fecal coliform criterion.
In this report, turbidity was found to be a reliable estimator of bacteria densities. Regression models are provided for estimating fecal coliform and E. coli bacteria densities using continuous turbidity measurements. Prediction intervals also are provided to show the uncertainty associated with using the regression models. Eighty percent of all measured sample densities and individual turbidity-based estimates from the regression models were in agreement as exceedi
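
    The turbidity-based models described above are of the same general form as the EC/FC relations: regressions fit in log space and applied in real time. A minimal, hypothetical sketch follows; the coefficients are invented for illustration and are not values from the report.

      # Hypothetical illustration of a site-specific turbidity regression; the
      # coefficients b0 and b1 are placeholders, not values from the report.
      import math

      def estimate_bacteria(turbidity_ntu, b0=0.5, b1=1.2):
          """Estimated bacteria density (colonies/100 mL) from a log10-log10 model."""
          return 10 ** (b0 + b1 * math.log10(turbidity_ntu))

      print(round(estimate_bacteria(50.0)))   # estimate at 50 NTU (illustrative only)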

  1. Global atmospheric emissions and transport of polycyclic aromatic hydrocarbons: Evaluation of modeling and transboundary pollution

    NASA Astrophysics Data System (ADS)

    Shen, Huizhong; Tao, Shu

    2014-05-01

    Global atmospheric emissions of 16 polycyclic aromatic hydrocarbons (PAHs) from 69 major sources were estimated for a period from 1960 to 2030. Regression models and a technology split method were used to estimate country- and time-specific emission factors, resulting in a new estimate of PAH emission-factor variation among different countries and over time. PAH emissions in 2007 were spatially resolved to 0.1° × 0.1° grids based on a newly developed global high-resolution fuel combustion inventory (PKU-FUEL-2007). MOZART-4 (The Model for Ozone and Related Chemical Tracers, version 4) was applied to simulate the global tropospheric transport of benzo(a)pyrene (BaP), one of the high-molecular-weight carcinogenic PAHs, at a horizontal resolution of 1.875° (longitude) × 1.8947° (latitude). The reaction with the OH radical, gas/particle partitioning, wet deposition, dry deposition, and dynamic soil/ocean-air exchange of PAHs were considered. The simulation was validated by observations at both background and non-background sites, including the Alert site in the Canadian High Arctic, EMEP sites in Europe, and 254 other urban/rural sites reported in the literature. Key factors affecting long-range transport of BaP were addressed, and transboundary pollution was discussed.

  2. Application of Acoustic and Optic Methods for Estimating Suspended-Solids Concentrations in the St. Lucie River Estuary, Florida

    USGS Publications Warehouse

    Patino, Eduardo; Byrne, Michael J.

    2004-01-01

    Acoustic and optic methods were applied to estimate suspended-solids concentrations in the St. Lucie River Estuary, southeastern Florida. Acoustic Doppler velocity meters were installed at the North Fork, Speedy Point, and Steele Point sites within the estuary. These sites provide varying flow, salinity, water-quality, and channel cross-sectional characteristics. The monitoring site at Steele Point was not used in the analyses because repeated instrument relocations (due to bridge construction) prevented a sufficient number of samples from being collected at the various locations. Acoustic and optic instruments were installed to collect water velocity, acoustic backscatter strength (ABS), and turbidity data that were used to assess the feasibility of estimating suspended-solids concentrations in the estuary. Other data collected at the monitoring sites include tidal stage, salinity, temperature, and periodic discharge measurements. Regression analyses were used to determine the relations of suspended-solids concentration to ABS and suspended-solids concentration to turbidity at the North Fork and Speedy Point sites. For samples used in regression analyses, measured suspended-solids concentrations at the North Fork and Speedy Point sites ranged from 3 to 37 milligrams per liter, and organic content ranged from 50 to 83 percent. Corresponding salinity for these samples ranged from 0.12 to 22.7 parts per thousand, and corresponding temperature ranged from 19.4 to 31.8 °C. Relations determined using this technique are site specific and only describe suspended-solids concentrations at locations where data were collected. The suspended-solids concentration to ABS relation resulted in correlation coefficients of 0.78 and 0.63 at the North Fork and Speedy Point sites, respectively. The suspended-solids concentration to turbidity relation resulted in correlation coefficients of 0.73 and 0.89 at the North Fork and Speedy Point sites, respectively. The adequacy of the empirical equations seems to be limited by the number and distribution of suspended-solids samples collected throughout the expected concentration range at the North Fork and Speedy Point sites. Additionally, the ABS relations for both sites seem to overestimate at the low end and underestimate at the high end of the concentration range. Based on the sensitivity analysis, temperature had a greater effect than salinity on estimated suspended-solids concentrations. Temperature also appeared to affect ABS data, perhaps by changing the absorptive and reflective characteristics of the suspended material. Salinity and temperature had no observed effects on the turbidity relation at the North Fork and Speedy Point sites. Estimates of suspended-solids concentrations using ABS data were less 'erratic' than estimates using turbidity data. Combining ABS and turbidity data into one equation did not improve the accuracy of results and, therefore, was not considered.

  3. Trends in port-site metastasis after laparoscopic resection of incidental gallbladder cancer: A systematic review.

    PubMed

    Berger-Richardson, David; Chesney, Tyler R; Englesakis, Marina; Govindarajan, Anand; Cleary, Sean P; Swallow, Carol J

    2017-03-01

    The risk of port-site metastasis after laparoscopic removal of incidental gallbladder cancer was previously estimated to be 14-30%. The present study was designed to determine the incidence of port-site metastasis in incidental gallbladder cancer in the modern era (2000-2014) versus the historic era (1991-1999). We also investigated the site of port-site metastasis. Using PRISMA, a systematic review was conducted to identify papers that addressed the development of port-site metastasis after laparoscopic resection of incidental gallbladder cancer. Studies that described cancer-specific outcomes in ≥5 patients were included. A validated quality appraisal tool was used, and a weighted estimate of the incidence of port-site metastasis was calculated. Based on data extracted from 27 papers that met inclusion criteria, the incidence of port-site metastasis in incidental gallbladder cancer has decreased from 18.6% prior to 2000 (95% confidence interval 15.3-21.9%, n = 7) to 10.3% since then (95% confidence interval 7.9-12.7%, n = 20) (P < .001). The extraction site is at significantly higher risk than nonextraction sites. The incidence of port-site metastasis in incidental gallbladder cancer has decreased but remains high relative to other primary tumors. Any preoperative finding that raises the suspicion of gallbladder cancer should prompt further investigation and referral to a hepato-pancreato-biliary specialist. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Variation in detection among passive infrared triggered-cameras used in wildlife research

    USGS Publications Warehouse

    Damm, Philip E.; Grand, James B.; Barnett, Steven W.

    2010-01-01

    Precise and accurate estimates of demographics such as age structure, productivity, and density are necessary in determining habitat and harvest management strategies for wildlife populations. Surveys using automated cameras are becoming an increasingly popular tool for estimating these parameters. However, most camera studies fail to incorporate detection probabilities, leading to parameter underestimation. The objective of this study was to determine the sources of heterogeneity in detection for trail cameras that incorporate a passive infrared (PIR) triggering system sensitive to heat and motion. Images were collected at four baited sites within the Conecuh National Forest, Alabama, using three cameras at each site operating continuously over the same seven-day period. Detection was estimated for four groups of animals based on taxonomic group and body size. Our hypotheses of detection considered variation among bait sites and cameras. The best model (w=0.99) estimated different rates of detection for each camera in addition to different detection rates for four animal groupings. Factors that explain this variability might include poor manufacturing tolerances, variation in PIR sensitivity, animal behavior, and species-specific infrared radiation. Population surveys using trail cameras with PIR systems must incorporate detection rates for individual cameras. Incorporating time-lapse triggering systems into survey designs should eliminate issues associated with PIR systems.

  5. Evaluating MODIS satellite versus terrestrial data driven productivity estimates in Austria

    NASA Astrophysics Data System (ADS)

    Petritsch, R.; Boisvenue, C.; Pietsch, S. A.; Hasenauer, H.; Running, S. W.

    2009-04-01

    Sensors, such as the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA's Terra satellite, are developed for monitoring global and/or regional ecosystem fluxes like net primary production (NPP). Although these systems should allow us to assess carbon sequestration issues, forest management impacts, etc., relatively little is known about the consistency and accuracy of the resulting satellite-driven estimates versus production estimates derived from ground data. In this study we compare the following NPP estimation methods: (i) NPP estimates as derived from MODIS and available on the internet; (ii) estimates resulting from the offline version of the MODIS algorithm; (iii) estimates using regional meteorological data within the offline algorithm; (iv) NPP estimates from a species-specific biogeochemical ecosystem model adopted for Alpine conditions; and (v) NPP estimates calculated from individual tree measurements. Single-tree measurements were available from 624 forested sites across Austria, but only the data from 165 sample plots included all the necessary information for performing the comparison at the plot level. To ensure independence of satellite-driven and ground-based predictions, only latitude and longitude for each site were used to obtain MODIS estimates. Along with the comparison of the different methods, we discuss problems such as the differing dates of field campaigns (<1999) and acquisition of satellite images (2000-2005) or incompatible productivity definitions within the methods, and come up with a framework for combining terrestrial and satellite-data-based productivity estimates. On average, MODIS estimates agreed well with the output of the model's self-initialization (spin-up), and biomass increment calculated from tree measurements is not significantly different from model results; however, correlations between satellite-derived and terrestrial estimates are relatively poor. The different scales (9 km² for MODIS versus 1,000 m² for the sample plots), together with the heterogeneous landscape, may explain the low correlation, particularly as the correlation increases when strongly fragmented sites are left out.

  6. Quantum and classical dynamics of water dissociation on Ni(111): A test of the site-averaging model in dissociative chemisorption of polyatomic molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Bin; Department of Chemical Physics, University of Science and Technology of China, Hefei 230026; Guo, Hua, E-mail: hguo@unm.edu

    Recently, we reported the first highly accurate nine-dimensional global potential energy surface (PES) for water interacting with a rigid Ni(111) surface, built on a large number of density functional theory points [B. Jiang and H. Guo, Phys. Rev. Lett. 114, 166101 (2015)]. Here, we investigate site-specific reaction probabilities on this PES using a quasi-seven-dimensional quantum dynamical model. It is shown that the site-specific reactivity is largely controlled by the topography of the PES instead of the barrier height alone, underscoring the importance of multidimensional dynamics. In addition, the full-dimensional dissociation probability is estimated by averaging fixed-site reaction probabilities with appropriate weights. To validate this model and gain insights into the dynamics, additional quasi-classical trajectory calculations in both full and reduced dimensions have also been performed and important dynamical factors such as the steering effect are discussed.
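
    The site-averaging step described above amounts to a weighted sum of fixed-site reaction probabilities. A minimal sketch follows; the weights and probabilities are placeholders, not values from the paper.

      # Sketch of the site-averaging model: approximate the full dissociation
      # probability as a weighted sum of fixed-site probabilities. All numbers
      # below are placeholders, not results from the paper.
      def site_average(site_probabilities, weights):
          assert abs(sum(weights) - 1.0) < 1e-9, "weights should sum to 1"
          return sum(w * p for w, p in zip(weights, site_probabilities))

      p_full = site_average([0.12, 0.05, 0.02], [0.5, 0.3, 0.2])
      print(p_full)   # 0.079 for these illustrative inputs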

  7. REAL-TIME MODELING AND MEASUREMENT OF MOBILE SOURCE POLLUTANT CONCENTRATIONS FOR ESTIMATING HUMAN EXPOSURES IN COMMUNITIES NEAR ROADWAYS

    EPA Science Inventory

    The United States Environmental Protection Agency's (EPA) National Exposure Research Laboratory (NERL) is pursuing a project to improve the methodology for real-time site specific modeling of human exposure to pollutants from motor vehicles. The overall project goal is to deve...

  8. Independent data validation of an in vitro method for prediction of relative bioavailability of arsenic in contaminated soils

    EPA Science Inventory

    In vitro bioaccessibility assays (IVBA) estimate arsenic (As) relative bioavailability (RBA) in contaminated soils to improve the accuracy of site-specific human exposure assessments and risk calculations. For an IVBA assay to gain acceptance for use in risk assessment, it must ...

  9. Applying WEPP technologies to western alkaline surface coal mines

    Treesearch

    J. Q. Wu; S. Dun; H. Rhee; X. Liu; W. J. Elliot; T. Golnar; J. R. Frankenberger; D. C. Flanagan; P. W. Conrad; R. L. McNearny

    2011-01-01

    One aspect of planning surface mining operations, regulated by the National Pollutant Discharge Elimination System (NPDES), is estimating potential environmental impacts during mining operations and the reclamation period that follows. Practical computer simulation tools are effective for evaluating site-specific sediment control and reclamation plans for the NPDES....

  10. Field test of the superconducting gravimeter as a hydrologic sensor.

    PubMed

    Wilson, Clark R; Scanlon, Bridget; Sharp, John; Longuevergne, Laurent; Wu, Hongqiu

    2012-01-01

    We report on a field test of a transportable version of a superconducting gravimeter (SG) intended for groundwater storage monitoring. The test was conducted over a 6-month period at a site adjacent to a well in the recharge zone of the karstic Edwards Aquifer, a major groundwater resource in central Texas. The purpose of the study was to assess requirements for unattended operation of the SG in a field setting and to obtain a gravimetric estimate of aquifer specific yield. The experiment confirmed successful operation of the SG, but water level changes were small (<0.3 m) leading to uncertainty in the estimate of specific yield. Barometric pressure changes were the dominant cause of both water level variations and non-tidal gravity changes. The specific yield estimate (0.26) is larger than most published values and dependent mainly on low frequency variations in residual gravity and water level time series. © 2011, The Author(s). Ground Water © 2011, National Ground Water Association.
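
    The gravimetric specific-yield estimate mentioned above can be illustrated with the infinite-slab (Bouguer plate) approximation, in which a change in stored water of Sy·Δh produces a gravity change of 2πGρw·Sy·Δh (about 41.9 µGal per meter of free water). The sketch below uses invented gravity and water-level changes, not the study's observations.

      # Slab-approximation sketch of a gravimetric specific-yield estimate.
      # Inputs are illustrative, not the study's data.
      import math

      G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
      RHO_W = 1000.0       # density of water, kg/m^3

      def specific_yield(delta_g_microgal, delta_head_m):
          """Sy = delta_g / (2*pi*G*rho_w*delta_h); 1 microGal = 1e-8 m/s^2."""
          delta_g = delta_g_microgal * 1e-8
          return delta_g / (2.0 * math.pi * G * RHO_W * delta_head_m)

      print(round(specific_yield(3.0, 0.28), 2))   # ~0.26 for these invented inputs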

  11. Predatory fish depletion and recovery potential on Caribbean reefs.

    PubMed

    Valdivia, Abel; Cox, Courtney Ellen; Bruno, John Francis

    2017-03-01

    The natural, prehuman abundance of most large predators is unknown because of the lack of historical data and a limited understanding of the natural factors that control their populations. Determining the supportable predator biomass at a given location (that is, the predator carrying capacity) would help managers to optimize protection and would provide site-specific recovery goals. We assess the relationship between predatory reef fish biomass and several anthropogenic and environmental variables at 39 reefs across the Caribbean to (i) estimate their roles in determining local predator biomass and (ii) determine site-specific recovery potential if fishing were eliminated. We show that predatory reef fish biomass tends to be higher in marine reserves but is strongly negatively related to human activities, especially coastal development. However, human activities and natural factors, including reef complexity and prey abundance, explain more than 50% of the spatial variation in predator biomass. Comparing site-specific predator carrying capacities to field observations, we infer that current predatory reef fish biomass is 60 to 90% lower than the potential supportable biomass in most sites, even within most marine reserves. We also found that the scope for recovery varies among reefs by at least an order of magnitude. This suggests that we could underestimate unfished biomass at sites that provide ideal conditions for predators or greatly overestimate that of seemingly predator-depleted sites that may have never supported large predator populations because of suboptimal environmental conditions.

  12. Photoaffinity labeling of protoporphyrinogen oxidase, the molecular target of diphenylether-type herbicides.

    PubMed

    Camadro, J M; Matringe, M; Thome, F; Brouillet, N; Mornet, R; Labbe, P

    1995-05-01

    Diphenylether-type herbicides are extremely potent inhibitors of protoporphyrinogen oxidase, a membrane-bound enzyme involved in the heme and chlorophyll biosynthesis pathways. Tritiated acifluorfen and a diazoketone derivative of tritiated acifluorfen were specifically bound to a single class of high-affinity binding sites on yeast mitochondrial membranes with apparent dissociation constants of 7 nM and 12.5 nM, respectively. The maximum density of specific binding sites, determined by Scatchard analysis, was 3 pmol.mg^-1 protein. Protoporphyrinogen oxidase specific activity was estimated to be 2500 nmol protoporphyrinogen oxidized h^-1.mol^-1 enzyme. The diazoketone derivative of tritiated acifluorfen was used to specifically photolabel yeast protoporphyrinogen oxidase. The specifically labeled polypeptide in wild-type mitochondrial membranes had an apparent molecular mass of 55 kDa, identical to the molecular mass of the purified enzyme. This photolabeled polypeptide was not detected in a protoporphyrinogen-oxidase-deficient yeast strain, but the membranes contained an equivalent amount of inactive immunoreactive protoporphyrinogen oxidase protein.

  13. Site-specific cancer risk in the Baltic cohort of Chernobyl cleanup workers, 1986–2007

    PubMed Central

    Rahu, Kaja; Hakulinen, Timo; Smailyte, Giedre; Stengrevics, Aivars; Auvinen, Anssi; Inskip, Peter D.; Boice, John D.; Rahu, Mati

    2013-01-01

    Objective To assess site-specific cancer risk in the Baltic cohort of Chernobyl cleanup workers 1986–2007. Methods The Baltic cohort includes 17,040 men from Estonia, Latvia and Lithuania who participated in the environmental cleanup after the accident at the Chernobyl Nuclear Power Station in 1986–1991, and who were followed for cancer incidence until the end of 2007. Cancer cases diagnosed in the cohort and in the male population of each country were identified from the respective national cancer registers. The proportional incidence ratio (PIR) with 95% confidence interval (CI) was used to estimate the site-specific cancer risk in the cohort. For comparison and as it was possible, the site-specific standardized incidence ratio (SIR) was calculated for the Estonian sub-cohort, which was not feasible for the other countries. Results Overall, 756 cancer cases were reported during 1986–2007. A higher proportion of thyroid cancers in relation to the male population was found (PIR=2.76; 95%CI 1.63–4.36), especially among those who started their mission shortly after the accident, in April–May 1986 (PIR=6.38; 95% CI 2.34–13.89). Also, an excess of oesophageal cancers was noted (PIR=1.52; 95% CI 1.06–2.11). No increased PIRs for leukaemia or radiation-related cancer sites combined were observed. PIRs and SIRs for the Estonian sub-cohort demonstrated the same site-specific cancer risk pattern. Conclusion Consistent evidence of an increase in radiation-related cancers in the Baltic cohort was not observed with the possible exception of thyroid cancer, where conclusions are hampered by known medical examination including thyroid screening among cleanup workers. PMID:23683549
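
    The proportional incidence ratio used above compares the share of a given cancer site among all cancers in the cohort with the corresponding share in the reference population. A minimal sketch with invented counts (not the cohort or registry data used in the study) is:

      # Proportional incidence ratio (PIR) sketch; all counts are invented.
      def pir(cohort_site, cohort_all, reference_site, reference_all):
          return (cohort_site / cohort_all) / (reference_site / reference_all)

      print(round(pir(12, 600, 2000, 300000), 2))   # 3.0 for these invented counts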

  14. Site-specific cancer risk in the Baltic cohort of Chernobyl cleanup workers, 1986-2007.

    PubMed

    Rahu, Kaja; Hakulinen, Timo; Smailyte, Giedre; Stengrevics, Aivars; Auvinen, Anssi; Inskip, Peter D; Boice, John D; Rahu, Mati

    2013-09-01

    To assess site-specific cancer risk in the Baltic cohort of Chernobyl cleanup workers, 1986-2007. The Baltic cohort includes 17,040 men from Estonia, Latvia and Lithuania who participated in the environmental cleanup after the accident at the Chernobyl Nuclear Power Station in 1986-1991 and who were followed up for cancer incidence until the end of 2007. Cancer cases diagnosed in the cohort and in the male population of each country were identified from the respective national cancer registers. The proportional incidence ratio (PIR) with 95% confidence interval (CI) was used to estimate the site-specific cancer risk in the cohort. For comparison and as it was possible, the site-specific standardised incidence ratio (SIR) was calculated for the Estonian sub-cohort, which was not feasible for the other countries. Overall, 756 cancer cases were reported during 1986-2007. A higher proportion of thyroid cancers in relation to the male population was found (PIR=2.76; 95%CI 1.63-4.36), especially among those who started their mission shortly after the accident, in April-May 1986 (PIR=6.38; 95%CI 2.34-13.89). Also, an excess of oesophageal cancers was noted (PIR=1.52; 95% CI 1.06-2.11). No increased PIRs for leukaemia or radiation-related cancer sites combined were observed. PIRs and SIRs for the Estonian sub-cohort demonstrated the same site-specific cancer risk pattern. Consistent evidence of an increase in radiation-related cancers in the Baltic cohort was not observed with the possible exception of thyroid cancer, where conclusions are hampered by known medical examination including thyroid screening among cleanup workers. Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. Life cycle based risk assessment of recycled materials in roadway construction.

    PubMed

    Carpenter, A C; Gardner, K H; Fopiano, J; Benson, C H; Edil, T B

    2007-01-01

    This paper uses a life-cycle assessment (LCA) framework to characterize comparative environmental impacts from the use of virgin aggregate and recycled materials in roadway construction. To evaluate site-specific human toxicity potential (HTP) in a more robust manner, metals release data from a demonstration site were combined with an unsaturated contaminant transport model to predict long-term impacts to groundwater. The LCA determined that there were reduced energy and water consumption, air emissions, Pb, Hg and hazardous waste generation and non-cancer HTP when bottom ash was used in lieu of virgin crushed rock. Conversely, using bottom ash instead of virgin crushed rock increased the cancer HTP risk due to potential leachate generation by the bottom ash. At this scale of analysis, the trade-offs are clearly between the cancer HTP (higher for bottom ash) and all of the other impacts listed above (lower for bottom ash). The site-specific analysis predicted that the contaminants (Cd, Cr, Se and Ag for this study) transported from the bottom ash to the groundwater resulted in very low unsaturated zone contaminant concentrations over a 200 year period due to retardation in the vadose zone. The level of contaminants predicted to reach the groundwater after 200 years was significantly less than groundwater maximum contaminant levels (MCL) set by the US Environmental Protection Agency for drinking water. Results of the site-specific contaminant release estimates vary depending on numerous site and material specific factors. However, the combination of the LCA and the site specific analysis can provide an appropriate context for decision making. Trade-offs are inherent in making decisions about recycled versus virgin material use, and regulatory frameworks should recognize and explicitly acknowledge these trade-offs in decision processes.

  16. Evaluation and application of regional turbidity-sediment regression models in Virginia

    USGS Publications Warehouse

    Hyer, Kenneth; Jastram, John D.; Moyer, Douglas; Webber, James S.; Chanat, Jeffrey G.

    2015-01-01

    Conventional thinking has long held that turbidity-sediment surrogate-regression equations are site specific and that regression equations developed at a single monitoring station should not be applied to another station; however, few studies have evaluated this issue in a rigorous manner. If robust regional turbidity-sediment models can be developed successfully, their applications could greatly expand the usage of these methods. Suspended sediment load estimation could occur as soon as flow and turbidity monitoring commence at a site, suspended sediment sampling frequencies for various projects potentially could be reduced, and special-project applications (sediment monitoring following dam removal, for example) could be significantly enhanced. The objective of this effort was to investigate the turbidity-suspended sediment concentration (SSC) relations at all available USGS monitoring sites within Virginia to determine whether meaningful turbidity-sediment regression models can be developed by combining the data from multiple monitoring stations into a single model, known as a “regional” model. Following the development of the regional model, additional objectives included a comparison of predicted SSCs between the regional model and commonly used site-specific models, as well as an evaluation of why specific monitoring stations did not fit the regional model.

  17. Stochastic empirical loading and dilution model (SELDM) version 1.0.0

    USGS Publications Warehouse

    Granato, Gregory E.

    2013-01-01

    The Stochastic Empirical Loading and Dilution Model (SELDM) is designed to transform complex scientific data into meaningful information about the risk of adverse effects of runoff on receiving waters, the potential need for mitigation measures, and the potential effectiveness of such management measures for reducing these risks. The U.S. Geological Survey developed SELDM in cooperation with the Federal Highway Administration to help develop planning-level estimates of event mean concentrations, flows, and loads in stormwater from a site of interest and from an upstream basin. Planning-level estimates are defined as the results of analyses used to evaluate alternative management measures; planning-level estimates are recognized to include substantial uncertainties (commonly orders of magnitude). SELDM uses information about a highway site, the associated receiving-water basin, precipitation events, stormflow, water quality, and the performance of mitigation measures to produce a stochastic population of runoff-quality variables. SELDM provides input statistics for precipitation, prestorm flow, runoff coefficients, and concentrations of selected water-quality constituents from National datasets. Input statistics may be selected on the basis of the latitude, longitude, and physical characteristics of the site of interest and the upstream basin. The user also may derive and input statistics for each variable that are specific to a given site of interest or a given area. SELDM is a stochastic model because it uses Monte Carlo methods to produce the random combinations of input variable values needed to generate the stochastic population of values for each component variable. SELDM calculates the dilution of runoff in the receiving waters and the resulting downstream event mean concentrations and annual average lake concentrations. Results are ranked, and plotting positions are calculated, to indicate the level of risk of adverse effects caused by runoff concentrations, flows, and loads on receiving waters by storm and by year. Unlike deterministic hydrologic models, SELDM is not calibrated by changing values of input variables to match a historical record of values. Instead, input values for SELDM are based on site characteristics and representative statistics for each hydrologic variable. Thus, SELDM is an empirical model based on data and statistics rather than theoretical physiochemical equations. SELDM is a lumped parameter model because the highway site, the upstream basin, and the lake basin each are represented as a single homogeneous unit. Each of these source areas is represented by average basin properties, and results from SELDM are calculated as point estimates for the site of interest. Use of the lumped parameter approach facilitates rapid specification of model parameters to develop planning-level estimates with available data. The approach allows for parsimony in the required inputs to and outputs from the model and flexibility in the use of the model. For example, SELDM can be used to model runoff from various land covers or land uses by using the highway-site definition as long as representative water quality and impervious-fraction data are available.
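
    As a conceptual illustration only (this is not SELDM's actual algorithm, distributions, or parameter values), the core dilution step is a flow-weighted mass balance applied to stochastically generated runoff and upstream values, with the resulting population of downstream concentrations ranked to read off risk levels:

      # Conceptual sketch of stochastic dilution: mix a runoff event-mean
      # concentration with upstream flow and concentration over random draws.
      # Distributions and parameters are invented, not SELDM's.
      import random

      def downstream_emc(c_runoff, q_runoff, c_upstream, q_upstream):
          """Flow-weighted event-mean concentration downstream of the site."""
          return (c_runoff * q_runoff + c_upstream * q_upstream) / (q_runoff + q_upstream)

      random.seed(1)
      samples = sorted(
          downstream_emc(random.lognormvariate(3.0, 0.8),    # runoff EMC
                         random.lognormvariate(-1.0, 0.6),   # runoff flow
                         random.lognormvariate(1.5, 0.5),    # upstream EMC
                         random.lognormvariate(1.0, 0.7))    # upstream flow
          for _ in range(10000)
      )
      print("median:", samples[5000], "90th percentile:", samples[9000])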

  18. Regression models to estimate real-time concentrations of selected constituents in two tributaries to Lake Houston near Houston, Texas, 2005-07

    USGS Publications Warehouse

    Oden, Timothy D.; Asquith, William H.; Milburn, Matthew S.

    2009-01-01

    In December 2005, the U.S. Geological Survey in cooperation with the City of Houston, Texas, began collecting discrete water-quality samples for nutrients, total organic carbon, bacteria (total coliform and Escherichia coli), atrazine, and suspended sediment at two U.S. Geological Survey streamflow-gaging stations upstream from Lake Houston near Houston (08068500 Spring Creek near Spring, Texas, and 08070200 East Fork San Jacinto River near New Caney, Texas). The data from the discrete water-quality samples collected during 2005-07, in conjunction with monitored real-time data already being collected - physical properties (specific conductance, pH, water temperature, turbidity, and dissolved oxygen), streamflow, and rainfall - were used to develop regression models for predicting water-quality constituent concentrations for inflows to Lake Houston. Rainfall data were obtained from a rain gage monitored by Harris County Homeland Security and Emergency Management and colocated with the Spring Creek station. The leaps and bounds algorithm was used to find the best subsets of possible regression models (minimum residual sum of squares for a given number of variables). The potential explanatory or predictive variables included discharge (streamflow), specific conductance, pH, water temperature, turbidity, dissolved oxygen, rainfall, and time (to account for seasonal variations inherent in some water-quality data). The response variables at each site were nitrite plus nitrate nitrogen, total phosphorus, organic carbon, Escherichia coli, atrazine, and suspended sediment. The explanatory variables provide easily measured quantities as a means to estimate concentrations of the various constituents under investigation, with accompanying estimates of measurement uncertainty. Each regression equation can be used to estimate concentrations of a given constituent in real time. In conjunction with estimated concentrations, constituent loads were estimated by multiplying the estimated concentration by the corresponding streamflow and applying the appropriate conversion factor. By computing loads from estimated constituent concentrations, a continuous record of estimated loads can be available for comparison to total maximum daily loads. The regression equations presented in this report are site specific to the Spring Creek and East Fork San Jacinto River streamflow-gaging stations; however, the methods that were developed and documented could be applied to other tributaries to Lake Houston for estimating real-time water-quality data for streams entering Lake Houston.
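
    The load computation described above is a unit conversion applied to the estimated concentration and the concurrent streamflow. One common form (shown here as an illustration with invented inputs, not the report's equations) multiplies milligrams per liter by cubic feet per second and a factor of roughly 0.0027 to obtain short tons per day:

      # Illustrative load estimate from an estimated concentration and streamflow.
      # Inputs are invented; the 0.0027 factor converts (mg/L)*(ft^3/s) to short
      # tons/day (28.317 L/ft^3 * 86,400 s/day / 1e6 mg/kg / 907.2 kg/ton).
      def load_tons_per_day(conc_mg_per_l, flow_cfs):
          return 0.0027 * conc_mg_per_l * flow_cfs

      print(load_tons_per_day(conc_mg_per_l=1.5, flow_cfs=220.0))   # ~0.89 tons/day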

  19. Serotype-Specific Changes in Invasive Pneumococcal Disease after Pneumococcal Conjugate Vaccine Introduction: A Pooled Analysis of Multiple Surveillance Sites

    PubMed Central

    Feikin, Daniel R.; Kagucia, Eunice W.; Loo, Jennifer D.; Link-Gelles, Ruth; Puhan, Milo A.; Cherian, Thomas; Levine, Orin S.; Whitney, Cynthia G.; O’Brien, Katherine L.; Moore, Matthew R.

    2013-01-01

    Background Vaccine-serotype (VT) invasive pneumococcal disease (IPD) rates declined substantially following introduction of 7-valent pneumococcal conjugate vaccine (PCV7) into national immunization programs. Increases in non-vaccine-serotype (NVT) IPD rates occurred in some sites, presumably representing serotype replacement. We used a standardized approach to describe serotype-specific IPD changes among multiple sites after PCV7 introduction. Methods and Findings Of 32 IPD surveillance datasets received, we identified 21 eligible databases with rate data ≥2 years before and ≥1 year after PCV7 introduction. Expected annual rates of IPD absent PCV7 introduction were estimated by extrapolation using either Poisson regression modeling of pre-PCV7 rates or averaging pre-PCV7 rates. To estimate whether changes in rates had occurred following PCV7 introduction, we calculated site specific rate ratios by dividing observed by expected IPD rates for each post-PCV7 year. We calculated summary rate ratios (RRs) using random effects meta-analysis. For children <5 years old, overall IPD decreased by year 1 post-PCV7 (RR 0·55, 95% CI 0·46–0·65) and remained relatively stable through year 7 (RR 0·49, 95% CI 0·35–0·68). Point estimates for VT IPD decreased annually through year 7 (RR 0·03, 95% CI 0·01–0·10), while NVT IPD increased (year 7 RR 2·81, 95% CI 2·12–3·71). Among adults, decreases in overall IPD also occurred but were smaller and more variable by site than among children. At year 7 after introduction, significant reductions were observed (18–49 year-olds [RR 0·52, 95% CI 0·29–0·91], 50–64 year-olds [RR 0·84, 95% CI 0·77–0·93], and ≥65 year-olds [RR 0·74, 95% CI 0·58–0·95]). Conclusions Consistent and significant decreases in both overall and VT IPD in children occurred quickly and were sustained for 7 years after PCV7 introduction, supporting use of PCVs. Increases in NVT IPD occurred in most sites, with variable magnitude. These findings may not represent the experience in low-income countries or the effects after introduction of higher valency PCVs. High-quality, population-based surveillance of serotype-specific IPD rates is needed to monitor vaccine impact as more countries, including low-income countries, introduce PCVs and as higher valency PCVs are used. Please see later in the article for the Editors' Summary PMID:24086113
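
    The rate-ratio construction described above divides the observed post-introduction rate by an expected rate extrapolated from the pre-introduction trend. The sketch below fits a simple log-linear trend by least squares as a stand-in for the Poisson regression used in the analysis; all rates are invented, not surveillance data.

      # Invented-data sketch of the observed/expected rate-ratio construction:
      # extrapolate a log-linear pre-vaccine trend and divide observed rates by it.
      import math

      pre_years = [0, 1, 2, 3]
      pre_rates = [24.0, 22.5, 23.1, 21.8]          # cases per 100,000 (invented)

      n = len(pre_years)
      mx = sum(pre_years) / n
      my = sum(math.log(r) for r in pre_rates) / n
      slope = (sum((x - mx) * (math.log(y) - my) for x, y in zip(pre_years, pre_rates))
               / sum((x - mx) ** 2 for x in pre_years))
      intercept = my - slope * mx

      def expected(year):
          return math.exp(intercept + slope * year)

      observed_year5 = 11.0                          # invented post-introduction rate
      print(round(observed_year5 / expected(5), 2))  # rate ratio for year 5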

  20. Algorithm Estimates Microwave Water-Vapor Delay

    NASA Technical Reports Server (NTRS)

    Robinson, Steven E.

    1989-01-01

    The "profile" algorithm is an improved algorithm that uses water-vapor-radiometer data to produce estimates of microwave delays caused by water vapor in the troposphere; its accuracy equals or exceeds that of conventional linear algorithms. It does not require site-specific or weather-dependent empirical parameters, needing only standard meteorological data, latitude, and altitude for use in conjunction with published standard atmospheric data. The basic premise of the profile algorithm is that the wet-path delay is closely approximated by the solution to a simplified version of the nonlinear delay problem, generated numerically from each radiometer observation and the simultaneous meteorological data.

  1. Joint-specific risk of impaired function in fibrodysplasia ossificans progressiva (FOP).

    PubMed

    Pignolo, Robert J; Durbin-Johnson, Blythe P; Rocke, David M; Kaplan, Frederick S

    2018-04-01

    Fibrodysplasia ossificans progressiva (FOP) causes progressive disability due to heterotopic ossification from episodic flare-ups. Using data from 500 FOP patients (representing 63% of all known patients world-wide), age- and joint-specific risks of new joint involvement were estimated using parametric and nonparametric statistical methods. Compared to data from a 1994 survey of 44 individuals with FOP, our current estimates of age- and joint-specific risks of new joint involvement are more accurate (narrower confidence limits), based on a wider range of ages, and have less bias due to its greater comprehensiveness (captures over three-fifths of the known FOP patients worldwide). For the neck, chest, abdomen, and upper back, the estimated hazard decreases over time. For the jaw, lower back, shoulder, elbow, wrist, fingers, hip, knee, ankle, and foot, the estimated hazard increases initially then either plateaus or decreases. At any given time and for any anatomic site, the data indicate which joints are at risk. This study of approximately 63% of the world's known population of FOP patients provides a refined estimate of risk for new involvement at any joint at any age, as well as the proportion of patients with uninvolved joints at any age. Importantly, these joint-specific survival curves can be used to facilitate clinical trial design and to determine if potential treatments can modify the predicted trajectory of progressive joint dysfunction. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Regression model development and computational procedures to support estimation of real-time concentrations and loads of selected constituents in two tributaries to Lake Houston near Houston, Texas, 2005-9

    USGS Publications Warehouse

    Lee, Michael T.; Asquith, William H.; Oden, Timothy D.

    2012-01-01

    In December 2005, the U.S. Geological Survey (USGS), in cooperation with the City of Houston, Texas, began collecting discrete water-quality samples for nutrients, total organic carbon, bacteria (Escherichia coli and total coliform), atrazine, and suspended sediment at two USGS streamflow-gaging stations that represent watersheds contributing to Lake Houston (08068500 Spring Creek near Spring, Tex., and 08070200 East Fork San Jacinto River near New Caney, Tex.). Data from the discrete water-quality samples collected during 2005–9, in conjunction with continuously monitored real-time data that included streamflow and other physical water-quality properties (specific conductance, pH, water temperature, turbidity, and dissolved oxygen), were used to develop regression models for the estimation of concentrations of water-quality constituents of substantial source watersheds to Lake Houston. The potential explanatory variables included discharge (streamflow), specific conductance, pH, water temperature, turbidity, dissolved oxygen, and time (to account for seasonal variations inherent in some water-quality data). The response variables (the selected constituents) at each site were nitrite plus nitrate nitrogen, total phosphorus, total organic carbon, E. coli, atrazine, and suspended sediment. The explanatory variables provide easily measured quantities to serve as potential surrogate variables to estimate concentrations of the selected constituents through statistical regression. Statistical regression also facilitates accompanying estimates of uncertainty in the form of prediction intervals. Each regression model potentially can be used to estimate concentrations of a given constituent in real time. Among other regression diagnostics, the diagnostics used as indicators of general model reliability and reported herein include the adjusted R-squared, the residual standard error, residual plots, and p-values. Adjusted R-squared values for the Spring Creek models ranged from 0.582 to 0.922 (dimensionless). The residual standard errors ranged from 0.073 to 0.447 (base-10 logarithm). Adjusted R-squared values for the East Fork San Jacinto River models ranged from 0.253 to 0.853 (dimensionless). The residual standard errors ranged from 0.076 to 0.388 (base-10 logarithm). In conjunction with estimated concentrations, constituent loads can be estimated by multiplying the estimated concentration by the corresponding streamflow and by applying the appropriate conversion factor. The regression models presented in this report are site specific, that is, they are specific to the Spring Creek and East Fork San Jacinto River streamflow-gaging stations; however, the general methods that were developed and documented could be applied to most perennial streams for the purpose of estimating real-time water-quality data.

  3. Dominant controls of transpiration along a hillslope transect inferred from ecohydrological measurements and thermodynamic limits

    NASA Astrophysics Data System (ADS)

    Renner, Maik; Hassler, Sibylle K.; Blume, Theresa; Weiler, Markus; Hildebrandt, Anke; Guderle, Marcus; Schymanski, Stanislaus J.; Kleidon, Axel

    2016-05-01

    We combine ecohydrological observations of sap flow and soil moisture with thermodynamically constrained estimates of atmospheric evaporative demand to infer the dominant controls of forest transpiration in complex terrain. We hypothesize that daily variations in transpiration are dominated by variations in atmospheric demand, while site-specific controls, including limiting soil moisture, act on longer timescales. We test these hypotheses with data of a measurement setup consisting of five sites along a valley cross section in Luxembourg. Both hillslopes are covered by forest dominated by European beech (Fagus sylvatica L.). Two independent measurements are used to estimate stand transpiration: (i) sap flow and (ii) diurnal variations in soil moisture, which were used to estimate the daily root water uptake. Atmospheric evaporative demand is estimated through thermodynamically constrained evaporation, which only requires absorbed solar radiation and temperature as input data without any empirical parameters. Both transpiration estimates are strongly correlated to atmospheric demand at the daily timescale. We find that neither vapor pressure deficit nor wind speed add to the explained variance, supporting the idea that they are dependent variables on land-atmosphere exchange and the surface energy budget. Estimated stand transpiration was in a similar range at the north-facing and the south-facing hillslopes despite the different aspect and the largely different stand composition. We identified an inverse relationship between sap flux density and the site-average sapwood area per tree as estimated by the site forest inventories. This suggests that tree hydraulic adaptation can compensate for heterogeneous conditions. However, during dry summer periods differences in topographic factors and stand structure can cause spatially variable transpiration rates. We conclude that absorption of solar radiation at the surface forms a dominant control for turbulent heat and mass exchange and that vegetation across the hillslope adjusts to this constraint at the tree and stand level. These findings should help to improve the description of land-surface-atmosphere exchange at regional scales.

  4. Ground Motion Uncertainty and Variability (single-station sigma): Insights from Euroseistest, Greece

    NASA Astrophysics Data System (ADS)

    Ktenidou, O. J.; Roumelioti, Z.; Abrahamson, N. A.; Cotton, F.; Pitilakis, K.

    2014-12-01

    Despite recent improvements in networks and data, the global aleatory uncertainty (sigma) in GMPEs is still large. One reason is the ergodic approach, where we combine data in space to make up for lack of data in time. By estimating the systematic site response, we can make site-specific GMPEs and use a lower, site-specific uncertainty: single-station sigma. In this study we use the EUROSEISTEST database (http://euroseisdb.civil.auth.gr), which has two distinct advantages: good existing knowledge of site conditions at all stations, and careful relocation of the recorded events. Constraining the site and source parameters as best we can, we minimise the within- and between-event components of the global, ergodic sigma. Following that, knowledge of the site response from empirical and theoretical approaches permits us to move on to single-station sigma. The variability per site is not clearly correlated to the site class. We show that in some cases knowledge of Vs30 is not sufficient, and that site-specific data are needed to capture the response, possibly due to 2D/3D effects from complex geometry. Our values of single-station sigma are low compared to the literature. This may be due to the good ray coverage we have in all directions for small, nearby records. Indeed, our single-station sigma values are similar to published single-path values, which means that they may correspond to a fully (rather than partially) non-ergodic approach. We find larger ground motion variability for short distances and small magnitudes. This may be related to the uncertainty in depth, which affects nearby records more, or to stress drop, which causes trade-offs between the source and site terms at small magnitudes.
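
    In the usual variance-decomposition notation, single-station sigma removes the repeatable site-to-site term from the within-event variability: phi_ss = sqrt(phi^2 - phi_S2S^2) and sigma_ss = sqrt(tau^2 + phi_ss^2). The values in the sketch below are assumed for illustration and are not the components estimated in this study.

      # Standard single-station sigma bookkeeping with assumed (illustrative) values.
      import math

      tau = 0.35        # between-event standard deviation (natural-log units), assumed
      phi = 0.55        # within-event standard deviation, assumed
      phi_s2s = 0.30    # site-to-site (repeatable site term) variability, assumed

      phi_ss = math.sqrt(phi ** 2 - phi_s2s ** 2)     # single-station within-event
      sigma_ss = math.sqrt(tau ** 2 + phi_ss ** 2)    # single-station sigma
      sigma_ergodic = math.sqrt(tau ** 2 + phi ** 2)  # ergodic sigma, for comparison
      print(round(phi_ss, 3), round(sigma_ss, 3), round(sigma_ergodic, 3))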

  5. Sugar-binding sites of the HA1 subcomponent of Clostridium botulinum type C progenitor toxin.

    PubMed

    Nakamura, Toshio; Tonozuka, Takashi; Ide, Azusa; Yuzawa, Takayuki; Oguma, Keiji; Nishikawa, Atsushi

    2008-02-22

    Clostridium botulinum type C 16S progenitor toxin contains a hemagglutinin (HA) subcomponent, designated HA1, which appears to play an important role in the effective internalization of the toxin in gastrointestinal epithelial cells and in creating a broad specificity for the oligosaccharide structure that corresponds to various targets. In this study, using the recombinant protein fused to glutathione S-transferase, we investigated the binding specificity of the HA1 subcomponent to sugars and estimated the binding sites of HA1 based on X-ray crystallography and soaking experiments using various sugars. N-Acetylneuraminic acid, N-acetylgalactosamine, and galactose effectively inhibited the binding that occurs between glutathione S-transferase-HA1 and mucins, whereas N-acetylglucosamine and glucose did not inhibit it. The crystal structures of HA1 in complex with N-acetylneuraminic acid, N-acetylgalactosamine, and galactose were also determined. There are two sugar-binding sites, sites I and II. Site I corresponds to the electron densities noted for all sugars and is located at the C-terminal beta-trefoil domain, while site II corresponds to the electron densities noted only for galactose. An aromatic amino acid residue, Trp176, at site I has a stacking interaction with the hexose ring of the sugars. On the other hand, there is no aromatic residue at site II; thus, the interaction with galactose seems to be poor. The double mutant W176A at site I and D271F at site II has no avidity for N-acetylneuraminic acid but has avidity for galactose. In this report, the binding specificity of botulinum type C 16S toxin HA1 to various sugars is demonstrated based on its structural features.

  6. Sediment data sources and estimated annual suspended-sediment loads of rivers and streams in Colorado

    USGS Publications Warehouse

    Elliott, J.G.; DeFeyter, K.L.

    1986-01-01

    Sources of sediment data collected by several government agencies through water year 1984 are summarized for Colorado. The U.S. Geological Survey has collected suspended-sediment data at 243 sites; these data are stored in the U.S. Geological Survey's water data storage and retrieval system. The U.S. Forest Service has collected suspended-sediment and bedload data at an additional 225 sites, and most of these data are stored in the U.S. Environmental Protection Agency's water-quality-control information system. Additional unpublished sediment data are in the possession of the collecting entities. Annual suspended-sediment loads were computed for 133 U.S. Geological Survey sediment-data-collection sites using the daily mean water-discharge/sediment-transport-curve method. Sediment-transport curves were derived for each site by one of three techniques: (1) least-squares linear regression of all pairs of suspended-sediment and corresponding water-discharge data; (2) least-squares linear regression of data sets subdivided on the basis of hydrograph season; and (3) graphical fit to a logarithm-logarithm plot of the data. The curve-fitting technique used for each site depended on site-specific characteristics. Sediment-data sources and estimates of annual loads of suspended, bed, and total sediment from several other reports also are summarized. (USGS)
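    A minimal sketch of technique (1), a log-log sediment-transport curve applied to daily mean discharge; the sample values are invented and the standard log-regression bias correction is omitted. The 0.0027 factor is the usual conversion from ft³/s times mg/L to tons per day.

```python
import numpy as np

# paired samples: instantaneous water discharge (ft3/s) and suspended-sediment
# concentration (mg/L) -- values are illustrative, not from the report
q_sample = np.array([120.0, 340.0, 560.0, 980.0, 2100.0])
c_sample = np.array([35.0, 90.0, 150.0, 310.0, 800.0])

# technique (1): least-squares fit of log10(C) on log10(Q)
b, a = np.polyfit(np.log10(q_sample), np.log10(c_sample), 1)

# apply the transport curve to a daily mean discharge record
q_daily = np.array([150.0, 400.0, 1200.0, 800.0])    # ft3/s
c_daily = 10 ** (a + b * np.log10(q_daily))          # mg/L
load_tons_per_day = 0.0027 * q_daily * c_daily       # tons/day of suspended sediment
print(round(load_tons_per_day.sum(), 1), "tons over the 4 days")
```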

  7. Describing Site Amplification for Surface Waves in Realistic Basins

    NASA Astrophysics Data System (ADS)

    Bowden, D. C.; Tsai, V. C.

    2017-12-01

    Standard characterizations of site-specific site response assume a vertically-incident shear wave; given a 1D velocity profile, amplification and resonances can be calculated based on conservation of energy. A similar approach can be applied to surface waves, resulting in an estimate of amplification relative to a hard rock site that is different in terms of both amount of amplification and frequency. This prediction of surface-wave site amplification has been well validated through simple simulations, and in this presentation we explore the extent to which a 1D profile can explain observed amplifications in more realistic scenarios. Comparisons of various simple 2D and 3D simulations, for example, allow us to explore the effect of different basin shapes and the relative importance of effects such as focusing, conversion of wave-types and lateral surface wave resonances. Additionally, the 1D estimates for vertically-incident shear waves and for surface waves are compared to spectral ratios of historic events in deep sedimentary basins to demonstrate the appropriateness of the two different predictions. This difference in amplification responses between the wave types implies that a single measurement of site response, whether analytically calculated from 1D models or empirically observed, is insufficient for regions where surface waves play a strong role.
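    A minimal sketch of the energy-conservation amplification for a vertically incident shear wave relative to a hard-rock reference, i.e., the square root of the impedance ratio; the layer properties are illustrative, and the frequency-dependent surface-wave amplification discussed in the abstract is not reproduced here.

```python
import numpy as np

def sh_amplification(rho_rock, vs_rock, rho_soil, vs_soil):
    """Energy-conservation amplification of a vertically incident shear wave,
    relative to the hard-rock reference (square root of the impedance ratio)."""
    return np.sqrt((rho_rock * vs_rock) / (rho_soil * vs_soil))

# illustrative values: rock 2600 kg/m3 at 2000 m/s; basin sediment 1900 kg/m3 at 300 m/s
print(round(sh_amplification(2600, 2000, 1900, 300), 2))   # ~3x amplification
```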

  8. Supply of human allograft tissue in Canada.

    PubMed

    Lakey, Jonathan R T; Mirbolooki, Mohammadreza; Rogers, Christina; Mohr, Jim

    2007-01-01

    There is relatively little known about the supply for allograft tissues in Canada. The major aim of this study is to quantify the current or "Known Supply" of human allograft tissue (bone, tendons, soft tissue, cardiovascular, ocular and skin) from known tissue banks in Canada, to estimate the "Unknown Supply" of human allograft tissue available to Canadian users from other sources, and to investigate the nature and source of these tissue products. Two surveys were developed; one for tissue banks processing one or more tissue types and the other specific to eye banks. Thirty nine sites were initially identified as potential tissue bank respondent sites. Of the 39 sites, 29 sites indicated that they were interested in participating or would consider completing the survey. A survey package and a self-addressed courier envelope were couriered to each of 29 sites. A three week response time was indicated. The project consultants conducted telephone and email follow-up for incomplete data. Unknown supply was estimated by 5 methods. Twenty-eight of 29 sites (97%) completed and returned surveys. Over the past year, respondents reported a total of 5,691 donors (1,550 living and 4,141 cadaveric donors). Including cancellous ground bone, there were 10,729 tissue products produced by the respondent banks. Of these, 71% were produced by accredited banks and 32% were ocular tissues. Total predicted shortfall of allograft tissues was 31,860-66,481 grafts. Through estimating Current supply, and compiling additional qualitative information, this study has provided a snapshot of the current Canadian supply and shortfall of allograft tissue grafts.

  9. Estimation of methane emission rate changes using age-defined waste in a landfill site.

    PubMed

    Ishii, Kazuei; Furuichi, Toru

    2013-09-01

    Long-term methane emissions from landfill sites are often predicted by first-order decay (FOD) models, in which the default coefficients of the methane generation potential and the methane generation rate given by the Intergovernmental Panel on Climate Change (IPCC) are usually used. However, previous studies have demonstrated the large uncertainty in these coefficients because they are derived from a calibration procedure under ideal steady-state conditions, not actual landfill site conditions. In this study, the coefficients in the FOD model were estimated by a new approach to predict more precise long-term methane generation by considering region-specific conditions. In the new approach, age-defined waste samples, which had been under actual landfill site conditions, were collected in Hokkaido, Japan (a cold region), and the time series data on the age-defined waste samples' methane generation potential were used to estimate the coefficients in the FOD model. The degradation coefficients were 0.0501/y and 0.0621/y for paper and food waste, and the methane generation potentials were 214.4 mL/g-wet waste and 126.7 mL/g-wet waste for paper and food waste, respectively. These coefficients were compared with the default coefficients given by the IPCC. Although the degradation coefficient for food waste was smaller than the default value, the other coefficients were within the range of the default coefficients. With these new coefficients to calculate methane generation, the long-term methane emissions from the landfill site were estimated at 1.35×10⁴ m³ CH₄, which corresponds to approximately 2.53% of the total carbon dioxide emissions in the city (5.34×10⁵ t CO₂/y). Copyright © 2013 Elsevier Ltd. All rights reserved.
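    A small sketch of the first-order decay relationship underlying such estimates, using the degradation coefficients and generation potentials reported above for a single hypothetical cohort of waste; a full landfill calculation would sum over every deposition year and waste fraction.

```python
import numpy as np

def fod_methane(wet_mass_tonnes, l0_m3_per_tonne, k_per_yr, years_since_deposit):
    """First-order decay methane generation rate (m3 CH4/yr) for one cohort of
    waste deposited in a single year."""
    return wet_mass_tonnes * l0_m3_per_tonne * k_per_yr * np.exp(-k_per_yr * years_since_deposit)

# coefficients reported in the study; 1 mL/g-wet is equivalent to 1 m3 per tonne of wet waste
paper = dict(k=0.0501, l0=214.4)
food = dict(k=0.0621, l0=126.7)

t = np.arange(0, 30)                                    # years after deposition
q = (fod_methane(1000.0, paper["l0"], paper["k"], t)    # 1,000 t of paper waste
     + fod_methane(1000.0, food["l0"], food["k"], t))   # 1,000 t of food waste
print(f"year-0 rate: {q[0]:.0f} m3 CH4/yr; 30-yr total: {q.sum():.0f} m3")
```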

  10. Reduced uncertainty of regional scale CLM predictions of net carbon fluxes and leaf area indices with estimated plant-specific parameters

    NASA Astrophysics Data System (ADS)

    Post, Hanna; Hendricks Franssen, Harrie-Jan; Han, Xujun; Baatz, Roland; Montzka, Carsten; Schmidt, Marius; Vereecken, Harry

    2016-04-01

    Reliable estimates of carbon fluxes and states at regional scales are required to reduce uncertainties in regional carbon balance estimates and to support decision making in environmental politics. In this work the Community Land Model version 4.5 (CLM4.5-BGC) was applied at a high spatial resolution (1 km²) for the Rur catchment in western Germany. In order to improve the model-data consistency of net ecosystem exchange (NEE) and leaf area index (LAI) for this study area, five plant functional type (PFT)-specific CLM4.5-BGC parameters were estimated with time series of half-hourly NEE data for one year in 2011/2012, using the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm, a Markov chain Monte Carlo (MCMC) approach. The parameters were estimated separately for four different plant functional types (needleleaf evergreen temperate tree, broadleaf deciduous temperate tree, C3-grass and C3-crop) at four different sites. The four sites are located inside or close to the Rur catchment. We evaluated modeled NEE for one year in 2012/2013 with NEE measured at seven eddy covariance sites in the catchment, including the four parameter estimation sites. Modeled LAI was evaluated by means of LAI derived from remotely sensed RapidEye images acquired on about 18 days in 2011/2012. Performance indices were based on a comparison between measurements and (i) a reference run with CLM default parameters, and (ii) a 60-instance CLM ensemble with parameters sampled from the DREAM posterior probability density functions (pdfs). The difference between the observed and simulated NEE sums was reduced by 23% when estimated parameters were used as input instead of default parameters. The mean absolute difference between modeled and measured LAI was reduced by 59% on average. Simulated LAI was not only improved in terms of the absolute value but in some cases also in terms of the timing (beginning of vegetation onset), which was directly related to a substantial improvement of the NEE estimates in spring. In order to obtain a more comprehensive estimate of the model uncertainty, a second CLM ensemble was set up, in which initial conditions and atmospheric forcings were perturbed in addition to the parameter estimates. This resulted in very high standard deviations (STD) of the modeled annual NEE sums for the C3-grass and C3-crop PFTs, ranging between 24.1 and 225.9 gC m⁻² y⁻¹, compared with STD = 0.1-3.4 gC m⁻² y⁻¹ (effect of parameter uncertainty only, without additional perturbation of initial states and atmospheric forcings). The higher spread of modeled NEE for the C3-crop and C3-grass PFTs indicated that the model uncertainty was notably higher for those PFTs than for the forest PFTs. Our findings highlight the potential of parameter and uncertainty estimation to support the understanding and further development of land surface models such as CLM.

  11. piggyBac- and PhiC31-Mediated Genetic Transformation of the Asian Tiger Mosquito, Aedes albopictus (Skuse)

    PubMed Central

    Labbé, Geneviève M. C.; Nimmo, Derric D.; Alphey, Luke

    2010-01-01

    Background The Asian tiger mosquito, Aedes albopictus (Skuse), is a vector of several arboviruses including dengue and chikungunya. This highly invasive species originating from Southeast Asia has travelled the world in the last 30 years and is now established in Europe, North and South America, Africa, the Middle East and the Caribbean. In the absence of a vaccine or antiviral drugs, efficient mosquito control strategies are crucial. Conventional control methods have so far failed to control Ae. albopictus adequately. Methodology/Principal Findings Germline transformation of Aedes albopictus was achieved by micro-injection of embryos with a piggyBac-based transgene carrying a 3xP3-ECFP marker and an attP site, combined with piggyBac transposase mRNA and piggyBac helper plasmid. Five independent transgenic lines were established, corresponding to an estimated transformation efficiency of 2–3%. Three lines were re-injected with a second-phase plasmid carrying an attB site and a 3xP3-DsRed2 marker, combined with PhiC31 integrase mRNA. Successful site-specific integration was observed in all three lines with an estimated transformation efficiency of 2–6%. Conclusions/Significance Both piggyBac- and site-specific PhiC31-mediated germline transformation of Aedes albopictus were successfully achieved. This is the first report of Ae. albopictus germline transformation and engineering, a key step towards studying and controlling this species using novel molecular techniques and genetic control strategies. PMID:20808959

  12. Spatial and temporal Brook Trout density dynamics: Implications for conservation, management, and monitoring

    USGS Publications Warehouse

    Wagner, Tyler; Jefferson T. Deweber,; Jason Detar,; Kristine, David; John A. Sweka,

    2014-01-01

    Many potential stressors to aquatic environments operate over large spatial scales, prompting the need to assess and monitor both site-specific and regional dynamics of fish populations. We used hierarchical Bayesian models to evaluate the spatial and temporal variability in density and capture probability of age-1 and older Brook Trout Salvelinus fontinalis from three-pass removal data collected at 291 sites over a 37-year time period (1975–2011) in Pennsylvania streams. There was high between-year variability in density, with annual posterior means ranging from 2.1 to 10.2 fish/100 m²; however, there was no significant long-term linear trend. Brook Trout density was positively correlated with elevation and negatively correlated with percent developed land use in the network catchment. Probability of capture did not vary substantially across sites or years but was negatively correlated with mean stream width. Because of the low spatiotemporal variation in capture probability and a strong correlation between first-pass CPUE (catch/min) and three-pass removal density estimates, the use of an abundance index based on first-pass CPUE could represent a cost-effective alternative to conducting multiple-pass removal sampling for some Brook Trout monitoring and assessment objectives. Single-pass indices may be particularly relevant for monitoring objectives that do not require precise site-specific estimates, such as regional monitoring programs that are designed to detect long-term linear trends in density.
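    For readers unfamiliar with removal sampling, the sketch below shows a simple constant-capture-probability profile-likelihood estimator for a three-pass depletion sample; it is a maximum-likelihood stand-in, not the hierarchical Bayesian model used in the study, and the catch numbers are invented.

```python
import numpy as np
from scipy.special import gammaln

def removal_mle(catches):
    """Profile-likelihood estimate of abundance N and capture probability p
    for k-pass removal sampling with constant p (closed population)."""
    c = np.asarray(catches, dtype=float)
    k, total = len(c), c.sum()
    best = (-np.inf, None, None)
    for p in np.linspace(0.01, 0.99, 981):
        q = 1.0 - p
        n_hat = total / (1.0 - q**k)               # conditional estimate of N given p
        # multinomial log-likelihood (gamma functions allow non-integer N)
        ll = (gammaln(n_hat + 1) - gammaln(n_hat - total + 1) - gammaln(c + 1).sum()
              + np.sum(c * np.log(p * q ** np.arange(k)))
              + (n_hat - total) * k * np.log(q))
        if ll > best[0]:
            best = (ll, n_hat, p)
    return best[1], best[2]

n_hat, p_hat = removal_mle([38, 21, 12])           # illustrative pass catches
print(round(n_hat), round(p_hat, 2))
```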

  13. Inferring invasive species abundance using removal data from management actions

    USGS Publications Warehouse

    Davis, Amy J.; Hooten, Mevin B.; Miller, Ryan S.; Farnsworth, Matthew L.; Lewis, Jesse S.; Moxcey, Michael; Pepin, Kim M.

    2016-01-01

    Evaluation of the progress of management programs for invasive species is crucial for demonstrating impacts to stakeholders and strategic planning of resource allocation. Estimates of abundance before and after management activities can serve as a useful metric of population management programs. However, many methods of estimating population size are too labor intensive and costly to implement, posing restrictive levels of burden on operational programs. Removal models are a reliable method for estimating abundance before and after management using data from the removal activities exclusively, thus requiring no work in addition to management. We developed a Bayesian hierarchical model to estimate abundance from removal data accounting for varying levels of effort, and used simulations to assess the conditions under which reliable population estimates are obtained. We applied this model to estimate site-specific abundance of an invasive species, feral swine (Sus scrofa), using removal data from aerial gunning in 59 site/time-frame combinations (480–19,600 acres) throughout Oklahoma and Texas, USA. Simulations showed that abundance estimates were generally accurate when effective removal rates (removal rate accounting for total effort) were above 0.40. However, when abundances were small (<50), the effective removal rate needed to accurately estimate abundance was considerably higher (0.70). Based on our post-validation method, 78% of our site/time-frame estimates were accurate. To use this modeling framework it is important to have multiple removals (more than three) within a time frame during which demographic changes are minimized (i.e., a closed population; ≤3 months for feral swine). Our results show that the probability of accurately estimating abundance from this model improves with increased sampling effort (8+ flight hours across the 3-month window is best) and increased removal rate. Based on the inverse relationship between inaccurate abundances and inaccurate removal rates, we suggest auxiliary information that could be collected and included in the model as covariates (e.g., habitat effects, differences between pilots) to improve accuracy of removal rates and hence abundance estimates.

  14. Estimating the magnitude of peak flows for streams in Kentucky for selected recurrence intervals

    USGS Publications Warehouse

    Hodgkins, Glenn A.; Martin, Gary R.

    2003-01-01

    This report gives estimates of, and presents techniques for estimating, the magnitude of peak flows for streams in Kentucky for recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years. A flowchart in this report guides the user to the appropriate estimates and (or) estimating techniques for a site on a specific stream. Estimates of peak flows are given for 222 U.S. Geological Survey streamflow-gaging stations in Kentucky. In the development of the peak-flow estimates at gaging stations, a new generalized skew coefficient was calculated for the State. This single statewide value of 0.011 (with a standard error of prediction of 0.520) is more appropriate for Kentucky than the national skew isoline map in Bulletin 17B of the Interagency Advisory Committee on Water Data. Regression equations are presented for estimating the peak flows on ungaged, unregulated streams in rural drainage basins. The equations were developed by use of generalized-least-squares regression procedures at 187 U.S. Geological Survey gaging stations in Kentucky and 51 stations in surrounding States. Kentucky was divided into seven flood regions. Total drainage area is used in the final regression equations as the sole explanatory variable, except in Regions 1 and 4 where main-channel slope also was used. The smallest average standard errors of prediction were in Region 3 (from -13.1 to +15.0 percent) and the largest average standard errors of prediction were in Region 5 (from -37.6 to +60.3 percent). One section of this report describes techniques for estimating peak flows for ungaged sites on gaged, unregulated streams in rural drainage basins. Another section references two previous U.S. Geological Survey reports for peak-flow estimates on ungaged, unregulated, urban streams. Estimating peak flows at ungaged sites on regulated streams is beyond the scope of this report, because peak flows on regulated streams are dependent upon variable human activities.

  15. Improved Heat-Stress Algorithm

    NASA Technical Reports Server (NTRS)

    Teets, Edward H., Jr.; Fehn, Steven

    2007-01-01

    NASA Dryden presents an improved and automated site-specific algorithm for heat-stress approximation using standard atmospheric measurements routinely obtained from the Edwards Air Force Base weather detachment. Heat stress, which is the net heat load a worker may be exposed to, is officially measured using a thermal-environment monitoring system to calculate the wet-bulb globe temperature (WBGT). This instrument uses three independent thermometers to measure the wet-bulb, dry-bulb, and black-globe temperatures. By using these improvements, a more realistic WBGT estimate can now be produced. This is extremely useful for researchers and other employees who are working on outdoor projects that are distant from the areas that the Web system monitors. Most importantly, the improved WBGT estimations will make outdoor work sites safer by reducing the likelihood of heat stress.
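    The underlying quantity is the standard outdoor WBGT weighting of the three thermometer readings; the site-specific part of the algorithm, approximating the natural wet-bulb and globe temperatures from routine weather data, is not shown here, and the readings below are illustrative.

```python
def wbgt_outdoor(t_wet_bulb, t_globe, t_dry_bulb):
    """Outdoor wet-bulb globe temperature (deg C) from the standard weighting of
    natural wet-bulb, black-globe, and dry-bulb temperatures."""
    return 0.7 * t_wet_bulb + 0.2 * t_globe + 0.1 * t_dry_bulb

# illustrative desert-afternoon readings (deg C)
print(round(wbgt_outdoor(t_wet_bulb=22.0, t_globe=45.0, t_dry_bulb=38.0), 1))  # -> 28.2
```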

  16. Are camera surveys useful for assessing recruitment in white-tailed deer?

    DOE PAGES

    Chitwood, M. Colter; Lashley, Marcus A.; Kilgo, John C.; ...

    2016-12-27

    Camera surveys commonly are used by managers and hunters to estimate white-tailed deer Odocoileus virginianus density and demographic rates. Though studies have documented biases and inaccuracies in the camera survey methodology, camera traps remain popular due to ease of use, cost-effectiveness, and ability to survey large areas. Because recruitment is a key parameter in ungulate population dynamics, there is a growing need to test the effectiveness of camera surveys for assessing fawn recruitment. At Savannah River Site, South Carolina, we used six years of camera-based recruitment estimates (i.e. fawn:doe ratio) to predict concurrently collected annual radiotag-based survival estimates. The coefficient of determination (R) was 0.445, indicating some support for the viability of cameras to reflect recruitment. Here, we added two years of data from Fort Bragg Military Installation, North Carolina, which improved R to 0.621 without accounting for site-specific variability. Also, we evaluated the correlation between year-to-year changes in recruitment and survival using the Savannah River Site data; R was 0.758, suggesting that camera-based recruitment could be useful as an indicator of the trend in survival. Because so few researchers concurrently estimate survival and camera-based recruitment, examining this relationship at larger spatial scales while controlling for numerous confounding variables remains difficult. We believe that future research should test the validity of our results from other areas with varying deer and camera densities, as site (e.g. presence of feral pigs Sus scrofa) and demographic (e.g. fawn age at time of camera survey) parameters may have a large influence on detectability. Until such biases are fully quantified, we urge researchers and managers to use caution when advocating the use of camera-based recruitment estimates.

  18. The National Flood Frequency Program, version 3 : a computer program for estimating magnitude and frequency of floods for ungaged sites

    USGS Publications Warehouse

    Ries, Kernell G.; Crouse, Michele Y.

    2002-01-01

    For many years, the U.S. Geological Survey (USGS) has been developing regional regression equations for estimating flood magnitude and frequency at ungaged sites. These regression equations are used to transfer flood characteristics from gaged to ungaged sites through the use of watershed and climatic characteristics as explanatory or predictor variables. Generally, these equations have been developed on a statewide or metropolitan-area basis as part of cooperative study programs with specific State Departments of Transportation. In 1994, the USGS released a computer program titled the National Flood Frequency Program (NFF), which compiled all the available USGS regression equations for estimating the magnitude and frequency of floods in the United States and Puerto Rico. NFF was developed in cooperation with the Federal Highway Administration and the Federal Emergency Management Agency. Since the initial release of NFF, the USGS has produced new equations for many areas of the Nation. A new version of NFF has been developed that incorporates these new equations and provides additional functionality and ease of use. NFF version 3 provides regression-equation estimates of flood-peak discharges for unregulated rural and urban watersheds, flood-frequency plots, and plots of typical flood hydrographs for selected recurrence intervals. The program also provides weighting techniques to improve estimates of flood-peak discharges for gaging stations and ungaged sites. The information provided by NFF should be useful to engineers and hydrologists for planning and design applications. This report describes the flood-regionalization techniques used in NFF and provides guidance on the applicability and limitations of the techniques. The NFF software and the documentation for the regression equations included in NFF are available at http://water.usgs.gov/software/nff.html.

  19. Study of the cell activity in three-dimensional cell culture by using Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Arunngam, Pakajiraporn; Mahardika, Anggara; Hiroko, Matsuyoshi; Andriana, Bibin Bintang; Tabata, Yasuhiko; Sato, Hidetoshi

    2018-02-01

    The purpose of this study is to develop an estimation technique for local cell activity in a cultured 3D cell aggregate with gelatin hydrogel microspheres by using Raman spectroscopy. It is an invaluable technique allowing real-time, nondestructive, and noninvasive measurement. Cells in the body generally exist in 3D structures, in which physiological cell-cell interactions enhance cell survival and biological functions. Although a 3D cell aggregate is a good model of the cells in living tissues, it has been difficult to estimate their physiological conditions because there is no effective technique for observing intact cells in the 3D structure. In this study, cell aggregates were formed by MC3T3-E1 (pre-osteoblast) cells and gelatin hydrogel microspheres. Under appropriate conditions, MC3T3-E1 cells can differentiate into osteoblasts. We assume that the activity of the cells would differ according to location in the aggregate, because cells near the surface of the aggregate have more access to oxygen and nutrients. A Raman imaging technique was applied to measure a 3D image of the aggregate. The concentration of hydroxyapatite (HA) generated by osteoblasts was estimated from a strong band at 950-970 cm⁻¹ assigned to PO₄³⁻ in HA. It reflects the activity of a specific site in the cell aggregate. The cell density at this site was analyzed by multivariate analysis of the 3D Raman image; hence, the ratio between band intensity and cell density at the site represents the cell activity.

  20. Multiple-methods investigation of recharge at a humid-region fractured rock site, Pennsylvania, USA

    USGS Publications Warehouse

    Heppner, C.S.; Nimmo, J.R.; Folmar, G.J.; Gburek, W.J.; Risser, D.W.

    2007-01-01

    Lysimeter-percolate and well-hydrograph analyses were combined to evaluate recharge for the Masser Recharge Site (central Pennsylvania, USA). In humid regions, aquifer recharge through an unconfined low-porosity fractured-rock aquifer can cause large-magnitude water-table fluctuations over short time scales. The unsaturated hydraulic characteristics of the subsurface porous media control the magnitude and timing of these fluctuations. Data from multiple sets of lysimeters at the site show a highly seasonal pattern of percolate and exhibit variability due to both installation factors and hydraulic property heterogeneity. Individual event analysis of well hydrograph data reveals the primary influences on water-table response, namely rainfall depth, rainfall intensity, and initial water-table depth. Spatial and seasonal variability in well response is also evident. A new approach for calculating recharge from continuous water-table elevation records using a master recession curve (MRC) is demonstrated. The recharge estimated by the MRC approach, assuming a constant specific yield, is less seasonal than the recharge estimate resulting from the lysimeter analysis. Partial reconciliation of the two recharge estimates is achieved by considering a conceptual model of flow processes in the highly heterogeneous underlying fractured porous medium. © Springer-Verlag 2007.
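    A simplified sketch of the water-table-fluctuation idea behind the MRC approach: event recharge equals specific yield times the rise of the water table above the level an extrapolated recession would have reached. A constant recession rate stands in for a true master recession curve, and all values are illustrative.

```python
import numpy as np

def event_recharge(heads_m, sy, recession_m_per_step):
    """Recharge (m) for one event from a water-table record, using a constant
    recession rate as a stand-in for a master recession curve."""
    h = np.asarray(heads_m, dtype=float)
    # level the water table would have reached by continuing the recession
    projected = h[0] - recession_m_per_step * np.arange(len(h))
    rise = (h - projected).max()
    return sy * rise

heads = [10.00, 9.99, 10.25, 10.40, 10.35, 10.30]   # daily water-table elevations (m)
print(round(event_recharge(heads, sy=0.01, recession_m_per_step=0.01), 4))  # m of recharge
```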

  1. Prediction of drilling site-specific interaction of industrial acoustic stimuli and endangered whales: Beaufort Sea (1985). Final report, July 1985-March 1986

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miles, P.R.; Malme, C.I.; Shepard, G.W.

    1986-10-01

    Research was performed during the first year (1985) of the two-year project investigating potential responsiveness of bowhead and gray whales to underwater sounds associated with offshore oil-drilling sites in the Alaskan Beaufort Sea. The underwater acoustic environment and sound propagation characteristics of five offshore sites were determined. Estimates of industrial noise levels versus distance from those sites are provided. LGL Ltd. (bowhead) and BBN (gray whale) jointly present zones of responsiveness of these whales to typical underwater sounds (drillship, dredge, tugs, drilling at gravel island). An annotated bibliography regarding the potential effects of offshore industrial noise on bowhead whales in the Beaufort Sea is included.

  2. Estimating the burden of rubella virus infection and congenital rubella syndrome through a rubella immunity assessment among pregnant women in the Democratic Republic of the Congo: Potential impact on vaccination policy.

    PubMed

    Alleman, Mary M; Wannemuehler, Kathleen A; Hao, Lijuan; Perelygina, Ludmila; Icenogle, Joseph P; Vynnycky, Emilia; Fwamba, Franck; Edidi, Samuel; Mulumba, Audry; Sidibe, Kassim; Reef, Susan E

    2016-12-12

    Rubella-containing vaccines (RCV) are not yet part of the Democratic Republic of the Congo's (DRC) vaccination program; however, RCV introduction is planned before 2020. Because documentation of DRC's historical burden of rubella virus infection and congenital rubella syndrome (CRS) has been minimal, estimates of the burden of rubella virus infection and of CRS would help inform the country's strategy for RCV introduction. A rubella antibody seroprevalence assessment was conducted using serum collected during 2008-2009 from 1605 pregnant women aged 15-46 years attending 7 antenatal care sites in 3 of DRC's provinces. Estimates of age- and site-specific rubella antibody seroprevalence, population, and fertility rates were used in catalytic models to estimate the incidence of CRS per 100,000 live births and the number of CRS cases born in 2013 in DRC. Overall, 84% (95% CI 82, 86) of the women tested were estimated to be rubella antibody seropositive. The association between age and estimated antibody seroprevalence, adjusting for study site, was not significant (p=0.10). Differences in overall estimated seroprevalence by study site were observed, indicating variation by geographical area (p⩽0.03 for all). Estimated seroprevalence was similar for women declaring residence in urban (84%) versus rural (83%) settings (p=0.67). In 2013 for DRC nationally, the estimated incidence of CRS was 69/100,000 live births (95% CI 0, 186), corresponding to 2886 infants (95% CI 342, 6395) born with CRS. In the 3 provinces, rubella virus transmission is endemic, and most viral exposure and seroconversion occurs before age 15 years. However, approximately 10-20% of the women were susceptible to rubella virus infection and thus at risk for having an infant with CRS. This analysis can guide plans for introduction of RCV in DRC. Per World Health Organization recommendations, introduction of RCV should be accompanied by a campaign targeting all children 9 months to 14 years of age as well as vaccination of women of childbearing age through routine services. Published by Elsevier Ltd.
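    A toy version of the catalytic-model step: a constant force of infection is fit to age-specific seroprevalence, from which susceptibility at childbearing ages follows. The age groups and prevalences below are invented, and the published estimates combined this step with fertility and population data in a more detailed model.

```python
import numpy as np
from scipy.optimize import curve_fit

def seroprev(age, foi):
    """Simple catalytic model: P(seropositive at age a) = 1 - exp(-foi * a)."""
    return 1.0 - np.exp(-foi * age)

# illustrative age-group midpoints and observed seroprevalence (not study data)
ages = np.array([17.0, 22.0, 27.0, 32.0, 37.0, 42.0])
prev = np.array([0.78, 0.86, 0.91, 0.94, 0.96, 0.98])

(foi,), _ = curve_fit(seroprev, ages, prev, p0=[0.1])
susceptible_at_25 = 1.0 - seroprev(25.0, foi)
print(f"force of infection: {foi:.3f}/yr, susceptible at age 25: {susceptible_at_25:.0%}")
```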

  3. How Single-site Mutation Affects HP Lattice Proteins

    NASA Astrophysics Data System (ADS)

    Shi, Guangjie; Landau, David P.; Vogel, Thomas; Wüst, Thomas; Li, Ying Wai

    2014-03-01

    We developed a heuristic method based on Wang-Landau and multicanonical sampling for determining the ground-state degeneracy of HP lattice proteins. Our algorithm allowed the most precise estimations of the (sometimes substantial) ground-state degeneracies of some widely studied HP sequences. We investigated the effects of single-site mutation on specific long HP lattice proteins comprehensively, including structural changes in ground states, changes in ground-state degeneracy, and thermodynamic properties of the systems. Both extremely sensitive and insensitive cases have been observed; consequently, properties such as specific heat and tortuosity may be either largely unaffected or may change significantly due to mutation. More interestingly, mutation can even induce a lower ground-state energy in a few cases. Supported by NSF.

  4. 76 FR 38621 - Takes of Marine Mammals Incidental to Specified Activities; Marine Geophysical Survey in the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-01

    ... by LGL Ltd., Environmental Research Associates (LGL), on behalf of NSF and L-DEO. The NMFS Biological... must set forth the permissible methods of taking, other means of effecting the least practicable... scientific information and estimation methodology. The alternative method of conducting site-specific...

  5. Solar heating for a restaurant--North Little Rock, Arkansas

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Hot-water consumption of a large building affects solar-energy system design. Continual demand for hot water at the restaurant makes storage less important than at other sites. Storage capacity of the system installed in December 1979 equals the estimated daily hot-water requirement. The report describes equipment specifications and modifications to the existing building heating and hot-water systems.

  6. Market Definition For Hardwood Timber in the Southern Appalachians

    Treesearch

    Jeffrey P. Prestemon; John M. Pye; Karen Lee Abt; David N. Wear

    1999-01-01

    Direct estimation of aggregate hardwood supply is seriously complicated by the diversity of prices, species, and site conditions in hardwood stands. An alternative approach is to aggregate regional supply based on stumpage values of individual stands, arguably the real driver of harvest decisions. Complicating this approach is that species-specific prices are only...

  7. The hydrologic bench-mark program; a standard to evaluate time-series trends in selected water-quality constituents for streams in Georgia

    USGS Publications Warehouse

    Buell, G.R.; Grams, S.C.

    1985-01-01

    Significant temporal trends in monthly pH, specific conductance, total alkalinity, hardness, total nitrite-plus-nitrate nitrogen, and total phosphorus measurements at five stream sites in Georgia were identified using a rank correlation technique, the seasonal Kendall test and slope estimator. These sites include a U.S. Geological Survey Hydrologic Bench-Mark site, Falling Creek near Juliette, and four periodic water-quality monitoring sites. Comparison of raw data trends with streamflow-residual trends and, where applicable, with chemical-discharge trends (instantaneous fluxes) shows that some of these trends are responses to factors other than changing streamflow. Percentages of forested, agricultural, and urban cover within each basin did not change much during the periods of water-quality record, and therefore these non-flow-related trends are not obviously related to changes in land cover or land use. Flow-residual water-quality trends at the Hydrologic Bench-Mark site and at the Chattooga River site probably indicate basin responses to changes in the chemical quality of atmospheric deposition. These two basins are predominantly forested and have received little recent human use. Observed trends at the other three sites probably indicate basin responses to various land uses and water uses associated with agricultural and urban land or to changes in specific uses. (USGS)
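    A compact sketch of the seasonal Kendall test used here: the Mann-Kendall S statistic is computed per season, summed across seasons, and tested with a continuity-corrected normal approximation. Ties and serial correlation are ignored, and the monthly series below are invented.

```python
import numpy as np

def mann_kendall_s(x):
    """Mann-Kendall S statistic and its variance for one season's series."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var = n * (n - 1) * (2 * n + 5) / 18.0      # variance ignoring ties
    return s, var

def seasonal_kendall(series_by_season):
    """Seasonal Kendall test: sum S and Var(S) over seasons, then apply a
    continuity-corrected normal approximation for the Z statistic."""
    s_tot = var_tot = 0.0
    for x in series_by_season:
        s, v = mann_kendall_s(x)
        s_tot += s
        var_tot += v
    if s_tot > 0:
        z = (s_tot - 1) / np.sqrt(var_tot)
    elif s_tot < 0:
        z = (s_tot + 1) / np.sqrt(var_tot)
    else:
        z = 0.0
    return s_tot, z

# illustrative: six years of January and July specific-conductance values
jan = [210, 215, 220, 226, 231, 240]
jul = [180, 184, 183, 190, 196, 199]
print(seasonal_kendall([jan, jul]))
```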

  8. Evaluation of Two rK39 Dipstick Tests, Direct Agglutination Test, and Indirect Fluorescent Antibody Test for Diagnosis of Visceral Leishmaniasis in a New Epidemic Site in Highland Ethiopia

    PubMed Central

    Cañavate, Carmen; Herrero, Merce; Nieto, Javier; Cruz, Israel; Chicharro, Carmen; Aparicio, Pilar; Mulugeta, Abate; Argaw, Daniel; Blackstock, Anna J.; Alvar, Jorge; Bern, Caryn

    2011-01-01

    We assessed the performance characteristics of two rK39 immunochromatographic tests, a direct agglutination test (DAT), and an indirect immunofluorescent antibody test (IFAT) in the site of a new epidemic of visceral leishmaniasis (VL) in northwestern Ethiopia. The study population was composed of 179 patients with suspected VL and 67 controls. The sensitivities of Kalazar Detect®, DiaMed-IT Leish®, DAT, and IFAT in 35 polymerase chain reaction–confirmed VL cases were 94.3%, 91.4%, 91.4%, and 100%, respectively, and the specificities were 98.5%, 94%, 98.5%, and 98.5%, respectively. In a Bayesian latent class analysis of all 246 specimens, the estimated sensitivities were 90.5%, 89%, 88.8%, and 96% for Kalazar Detect®, DiaMed-IT Leish®, DAT, and IFAT, respectively; DAT showed the highest estimated specificity (97.4%). Both rK39 immunochromatographic tests perform as well as DAT, and are suitable for VL diagnosis in first-level health centers in this area of Ethiopia. PMID:21212210

  9. Municipal wastewater sludge as a sustainable bioresource in the United States.

    PubMed

    Seiple, Timothy E; Coleman, André M; Skaggs, Richard L

    2017-07-15

    Within the United States and Puerto Rico, publicly owned treatment works (POTWs) process 130.5 Gl/d (34.5 Bgal/d) of wastewater, producing sludge as a waste product. Emerging technologies offer novel waste-to-energy pathways through whole-sludge conversion into biofuels. Assessing the feasibility, scalability, and tradeoffs of various energy conversion pathways is difficult in the absence of highly spatially resolved estimates of sludge production. In this study, average wastewater solids concentrations and removal rates, and site-specific daily average influent flow, are used to estimate site-specific annual sludge production on a dry weight basis for >15,000 POTWs. Current beneficial uses, regional production hotspots, and feedstock aggregation potential are also assessed. Analyses indicate that (1) POTWs capture 12.56 Tg/y (13.84 MT/y) of dry solids; (2) 50% are not beneficially utilized; and (3) POTWs can support seven regions that aggregate >910 Mg/d (1000 T/d) of sludge within a travel distance of 100 km. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
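    The site-level arithmetic is essentially flow times solids concentration times a removal fraction; the sketch below shows that calculation for one hypothetical plant, with values that are not from the study.

```python
def annual_dry_sludge_tonnes(flow_m3_per_day, influent_tss_mg_per_l, removal_fraction):
    """Annual dry-solids production (tonnes/yr) for one POTW from average daily
    influent flow, influent solids concentration, and solids removal fraction."""
    # mg/L is equivalent to g/m3, so flow (m3/d) * TSS (g/m3) gives grams of solids per day
    grams_per_day = flow_m3_per_day * influent_tss_mg_per_l * removal_fraction
    return grams_per_day * 365.0 / 1e6   # grams -> tonnes

# illustrative mid-size plant: 40,000 m3/d, 220 mg/L influent TSS, 60% captured as sludge
print(round(annual_dry_sludge_tonnes(40_000, 220, 0.60)))   # ~1,927 tonnes/yr dry solids
```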

  10. Estimates of monthly streamflow characteristics at selected sites in the upper Missouri River basin, Montana, base period water years 1937-86

    USGS Publications Warehouse

    Parrett, Charles; Johnson, D.R.; Hull, J.A.

    1989-01-01

    Estimates of streamflow characteristics (monthly mean flow that is exceeded 90, 80, 50, and 20 percent of the time for all years of record and mean monthly flow) were made and are presented in tabular form for 312 sites in the Missouri River basin in Montana. Short-term gaged records were extended to the base period of water years 1937-86 and were used to estimate monthly streamflow characteristics at 100 sites. Data from 47 gaged sites were used in regression analysis relating the streamflow characteristics to basin characteristics and to active-channel width. The basin-characteristics equations, with standard errors of 35% to 97%, were used to estimate streamflow characteristics at 179 ungaged sites. The channel-width equations, with standard errors of 36% to 103%, were used to estimate characteristics at 138 ungaged sites. Streamflow measurements were correlated with concurrent streamflows at nearby gaged sites to estimate streamflow characteristics at 139 ungaged sites. In a test using 20 pairs of gages, the standard errors ranged from 31% to 111%. At 139 ungaged sites, the estimates from two or more of the methods were weighted and combined in accordance with the variance of the individual methods. When estimates from three methods were combined, the standard errors ranged from 24% to 63%. A drainage-area-ratio adjustment method was used to estimate monthly streamflow characteristics at seven ungaged sites. The reliability of the drainage-area-ratio adjustment method was estimated to be about equal to that of the basin-characteristics method. The estimates were checked for reliability. Estimates of monthly streamflow characteristics from gaged records were considered to be most reliable, and estimates at sites with actual flow records from 1937-86 were considered to be completely reliable (zero error). Weighted-average estimates were considered to be the most reliable estimates made at ungaged sites. (USGS)
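    A small sketch of the variance-weighted combination of estimates from multiple methods; the flows and standard errors are invented, and the report's actual weighting may have been carried out on log-transformed values given its percentage standard errors.

```python
import numpy as np

def variance_weighted_estimate(estimates, std_errors):
    """Combine independent estimates of the same streamflow statistic,
    weighting each inversely by its error variance."""
    x = np.asarray(estimates, dtype=float)
    var = np.asarray(std_errors, dtype=float) ** 2
    w = 1.0 / var
    combined = np.sum(w * x) / np.sum(w)
    combined_se = np.sqrt(1.0 / np.sum(w))
    return combined, combined_se

# illustrative: the same monthly mean flow (ft3/s) estimated by three methods
est = [120.0, 150.0, 135.0]                    # basin characteristics, channel width, correlation
se = [0.45 * 120, 0.60 * 150, 0.35 * 135]      # standard errors expressed as fractions of each estimate
print(variance_weighted_estimate(est, se))
```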

  11. The use of magnetic resonance sounding for quantifying specific yield and transmissivity in hard rock aquifers: The example of Benin

    NASA Astrophysics Data System (ADS)

    Vouillamoz, J. M.; Lawson, F. M. A.; Yalo, N.; Descloitres, M.

    2014-08-01

    Hundreds of thousands of boreholes have been drilled in hard rocks of Africa and Asia for supplying human communities with drinking water. Despite the common use of geophysics for improving the siting of boreholes, a significant number of drilled holes do not deliver enough water to be equipped (e.g. 40% on average in Benin). As compared to other non-invasive geophysical methods, magnetic resonance sounding (MRS) is selective for groundwater. However, this distinctive feature has not been fully used in previously published studies for quantifying the drainable groundwater in hard rocks (i.e. the specific yield) and the short-term productivity of the aquifer (i.e. the transmissivity). We present in this paper a comparison of MRS results (i.e. the water content and pore-size parameter) with both specific yield and transmissivity calculated from long-duration pumping tests. We conducted our experiments at six sites located in different hard rock groups in Benin, thus providing a unique data set to assess the usefulness of MRS in hard rock aquifers. We found that the MRS water content is about twice the specific yield. We also found that the MRS pore-size parameter is well correlated with the specific yield. Thus we proposed two linear equations for calculating the specific yield from the MRS water content (with an uncertainty of about 10%) and from the pore-size parameter (with an uncertainty of about 20%). The latter has the advantage of defining a so-called MRS cutoff time for identifying non-drainable MRS water content and thus a low groundwater reserve. We eventually propose a nonlinear equation for calculating the specific yield using the MRS water content and the pore-size parameter jointly, but this approach has to be confirmed with further investigations. This study also confirmed that aquifer transmissivity can be estimated from MRS results with an uncertainty of about 70%. We conclude that MRS can be usefully applied for estimating aquifer specific yield and transmissivity in weathered hard rock aquifers. Our results will contribute to the improvement of well siting and groundwater management in hard rocks.
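    Purely as an illustration of the reported relationship that the MRS water content is about twice the specific yield, the sketch below applies that factor; the paper's actual linear and nonlinear equations, with their fitted coefficients, are not reproduced here.

```python
def specific_yield_from_mrs(mrs_water_content):
    """Illustrative use of the reported relationship that the MRS water content
    is roughly twice the specific yield in these weathered hard-rock aquifers."""
    return 0.5 * mrs_water_content

# an MRS water content of 4% (0.04) would then imply a specific yield of about 2%
print(specific_yield_from_mrs(0.04))
```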

  12. Evapotranspiration based on equilibrated relative humidity (ETRHEQ): Evaluation over the continental U.S.

    NASA Astrophysics Data System (ADS)

    Rigden, Angela J.; Salvucci, Guido D.

    2015-04-01

    A novel method of estimating evapotranspiration (ET), referred to as the ETRHEQ method, is further developed, validated, and applied across the U.S. from 1961 to 2010. The ETRHEQ method estimates the surface conductance to water vapor transport, which is the key rate-limiting parameter of typical ET models, by choosing the surface conductance that minimizes the vertical variance of the calculated relative humidity profile averaged over the day. The ETRHEQ method, which was previously tested at five AmeriFlux sites, is modified for use at common weather stations and further validated at 20 AmeriFlux sites that span a wide range of climates and limiting factors. Averaged across all sites, the daily latent heat flux RMSE is ~26 W·m⁻² (or 15%). The method is applied across the U.S. at 305 weather stations and spatially interpolated using ANUSPLIN software. Gridded annual mean ETRHEQ ET estimates are compared with four data sets, including water balance-derived ET, machine-learning ET estimates based on FLUXNET data, North American Land Data Assimilation System project phase 2 ET, and a benchmark product that integrates 14 global ET data sets, with RMSEs ranging from 8.7 to 12.5 cm·yr⁻¹. The ETRHEQ method relies only on data measured at weather stations, an estimate of vegetation height derived from land cover maps, and an estimate of soil thermal inertia. These data requirements allow it to have greater spatial coverage than direct measurements, greater historical coverage than satellite methods, significantly less parameter specification than most land surface models, and no requirement for calibration.

  13. Relations of water-quality constituent concentrations to surrogate measurements in the lower Platte River corridor, Nebraska, 2007 through 2011

    USGS Publications Warehouse

    Schaepe, Nathaniel J.; Soenksen, Philip J.; Rus, David L.

    2014-01-01

    The lower Platte River, Nebraska, provides drinking water, irrigation water, and in-stream flows for recreation, wildlife habitat, and vital habitats for several threatened and endangered species. The U.S. Geological Survey (USGS), in cooperation with the Lower Platte River Corridor Alliance (LPRCA) developed site-specific regression models for water-quality constituents at four sites (Shell Creek near Columbus, Nebraska [USGS site 06795500]; Elkhorn River at Waterloo, Nebr. [USGS site 06800500]; Salt Creek near Ashland, Nebr. [USGS site 06805000]; and Platte River at Louisville, Nebr. [USGS site 06805500]) in the lower Platte River corridor. The models were developed by relating continuously monitored water-quality properties (surrogate measurements) to discrete water-quality samples. These models enable existing web-based software to provide near-real-time estimates of stream-specific constituent concentrations to support natural resources management decisions. Since 2007, USGS, in cooperation with the LPRCA, has continuously monitored four water-quality properties seasonally within the lower Platte River corridor: specific conductance, water temperature, dissolved oxygen, and turbidity. During 2007 through 2011, the USGS and the Nebraska Department of Environmental Quality collected and analyzed discrete water-quality samples for nutrients, major ions, pesticides, suspended sediment, and bacteria. These datasets were used to develop the regression models. This report documents the collection of these various water-quality datasets and the development of the site-specific regression models. Regression models were developed for all four monitored sites. Constituent models for Shell Creek included nitrate plus nitrite, total phosphorus, orthophosphate, atrazine, acetochlor, suspended sediment, and Escherichia coli (E. coli) bacteria. Regression models that were developed for the Elkhorn River included nitrate plus nitrite, total Kjeldahl nitrogen, total phosphorus, orthophosphate, chloride, atrazine, acetochlor, suspended sediment, and E. coli. Models developed for Salt Creek included nitrate plus nitrite, total Kjeldahl nitrogen, suspended sediment, and E. coli. Lastly, models developed for the Platte River site included total Kjeldahl nitrogen, total phosphorus, sodium, metolachlor, atrazine, acetochlor, suspended sediment, and E. coli.

  14. Estimating national landfill methane emissions: an application of the 2006 Intergovernmental Panel on Climate Change Waste Model in Panama.

    PubMed

    Weitz, Melissa; Coburn, Jeffrey B; Salinas, Edgar

    2008-05-01

    This paper estimates national methane emissions from solid waste disposal sites in Panama over the time period 1990-2020 using both the 2006 Intergovernmental Panel on Climate Change (IPCC) Waste Model spreadsheet and the default emissions estimate approach presented in the 1996 IPCC Good Practice Guidelines. The IPCC Waste Model has the ability to calculate emissions from a variety of solid waste disposal site types, taking into account country- or region-specific waste composition and climate information, and can be used with a limited amount of data. Countries with detailed data can also run the model with country-specific values. The paper discusses methane emissions from solid waste disposal; explains the differences between the two methodologies in terms of data needs, assumptions, and results; describes solid waste disposal circumstances in Panama; and presents the results of this analysis. It also demonstrates the Waste Model's ability to incorporate landfill gas recovery data and to make projections. Methane emissions estimates from the former default method are 25 Gg in 1994 and range from 23.1 Gg in 1990 to a projected 37.5 Gg in 2020. The Waste Model estimates are 26.7 Gg in 1994, ranging from 24.6 Gg in 1990 to 41.6 Gg in 2020. Emissions estimates for Panama produced by the new model were, on average, 8% higher than estimates produced by the former default methodology. The increased estimate can be attributed to the inclusion of all solid waste disposal in Panama (as opposed to only disposal in managed landfills), but the increase was offset somewhat by the different default factors and regional waste values between the 1996 and 2006 IPCC guidelines, and the use of the first-order decay model with a time delay for waste degradation in the IPCC Waste Model.

  15. Site-Specific Reference Person Parameters and Derived Concentration Standards for the Savannah River Site

    DOE PAGES

    Stone, Daniel K.; Higley, Kathryn A.; Jannik, G. Timothy

    2014-05-01

    The U.S. Department of Energy Order 458.1 states that compliance with the 1 mSv annual dose constraint to a member of the public may be demonstrated by calculating dose to the maximally exposed individual (MEI) or to a representative person. Historically, the MEI concept was used for dose compliance at the Savannah River Site (SRS) using adult dose coefficients and adult male usage parameters. For future compliance, SRS plans to use the representative person concept for dose estimates to members of the public. The representative person dose will be based on the reference person dose coefficients from the U.S. DOE Derived Concentration Technical Standard and on usage parameters specific to SRS for the reference and typical person. Usage parameters and dose coefficients were determined for inhalation, ingestion, and external exposure pathways. The parameters for the representative person were used to calculate and tabulate SRS-specific derived concentration standards (DCSs) for the pathways not included in DOE-STD-1196-2011.

  16. Epidemiological burden of postmenopausal osteoporosis in Italy from 2010 to 2020: estimations from a disease model.

    PubMed

    Piscitelli, P; Brandi, M; Cawston, H; Gauthier, A; Kanis, J A; Compston, J; Borgström, F; Cooper, C; McCloskey, E

    2014-11-01

    The article describes the adaptation of a model to estimate the burden of postmenopausal osteoporosis in women aged 50 years and over in Italy between 2010 and 2020. For this purpose, a validated postmenopausal osteoporosis disease model developed for Sweden was adapted to Italy. For each year of the study, the 'incident cohort' (women experiencing a first osteoporotic fracture) was identified and run through a Markov model using 1-year cycles until 2020. Health states were based on the number of fractures and deaths. Fracture by site (hip, clinical vertebral, non-hip non-vertebral) was tracked for each health state. Transition probabilities reflected fracture site-specific risk of death and subsequent fractures. Model inputs specific to Italy included population size and life tables from 1970 to 2020, incidence of hip fracture and BMD by age in the general population (mean and standard deviation). The model estimated that the number of postmenopausal osteoporotic women would increase from 3.3 million to 3.7 million between 2010 and 2020 (+14.3%). Assuming unchanged incidence rates by age group over time, the model predicted the overall number of osteoporotic fractures to increase from 285.0 to 335.8 thousand fractures between 2010 and 2020 (+17.8%). The estimated expected increases in hip, vertebral and non-hip non-vertebral fractures were 22.3, 17.2 and 16.3%, respectively. Due to demographic changes, the burden of fractures is expected to increase markedly by 2020.

  17. Data compilation, synthesis, and calculations used for organic-carbon storage and inventory estimates for mineral soils of the Mississippi River Basin

    USGS Publications Warehouse

    Buell, Gary R.; Markewich, Helaine W.

    2004-01-01

    U.S. Geological Survey investigations of environmental controls on carbon cycling in soils and sediments of the Mississippi River Basin (MRB), an area of 3.3 × 10⁶ square kilometers (km²), have produced an assessment tool for estimating the storage and inventory of soil organic carbon (SOC) by using soil-characterization data from Federal, State, academic, and literature sources. The methodology is based on the linkage of site-specific SOC data (pedon data) to the soil-association map units of the U.S. Department of Agriculture State Soil Geographic (STATSGO) and Soil Survey Geographic (SSURGO) digital soil databases in a geographic information system. The collective pedon database assembled from individual sources presently contains 7,321 pedon records representing 2,581 soil series. SOC storage, in kilograms per square meter (kg/m²), is calculated for each pedon at standard depth intervals from 0 to 10, 10 to 20, 20 to 50, and 50 to 100 centimeters. The site-specific storage estimates are then regionalized to produce national-scale (STATSGO) and county-scale (SSURGO) maps of SOC to a specified depth. Based on this methodology, the mean SOC storage for the top meter of mineral soil in the MRB is approximately 10 kg/m², and the total inventory is approximately 32.3 Pg (1 petagram = 10⁹ metric tons).
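    The pedon-level storage calculation is bulk density times organic-carbon fraction times interval thickness, discounted for coarse fragments and summed over the standard depth intervals; the sketch below shows that arithmetic for a hypothetical pedon.

```python
def pedon_soc_kg_per_m2(horizons):
    """Organic-carbon storage (kg C / m2) for one pedon, summed over depth
    intervals: bulk density * organic-carbon fraction * thickness, reduced by
    the coarse-fragment fraction."""
    total = 0.0
    for h in horizons:
        total += (h["bulk_density_g_cm3"] * 1000.0       # -> kg soil per m3
                  * h["oc_percent"] / 100.0               # -> kg C per kg soil
                  * h["thickness_m"]
                  * (1.0 - h["coarse_frag_fraction"]))
    return total

# illustrative pedon for the standard 0-10, 10-20, 20-50, and 50-100 cm intervals
pedon = [
    {"bulk_density_g_cm3": 1.10, "oc_percent": 2.5, "thickness_m": 0.10, "coarse_frag_fraction": 0.05},
    {"bulk_density_g_cm3": 1.25, "oc_percent": 1.6, "thickness_m": 0.10, "coarse_frag_fraction": 0.05},
    {"bulk_density_g_cm3": 1.40, "oc_percent": 0.9, "thickness_m": 0.30, "coarse_frag_fraction": 0.10},
    {"bulk_density_g_cm3": 1.50, "oc_percent": 0.4, "thickness_m": 0.50, "coarse_frag_fraction": 0.10},
]
print(round(pedon_soc_kg_per_m2(pedon), 1), "kg C/m2 in the top meter")
```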

  18. Spatially explicit population estimates for black bears based on cluster sampling

    USGS Publications Warehouse

    Humm, J.; McCown, J. Walter; Scheick, B.K.; Clark, Joseph D.

    2017-01-01

    We estimated abundance and density of the 5 major black bear (Ursus americanus) subpopulations (i.e., Eglin, Apalachicola, Osceola, Ocala-St. Johns, Big Cypress) in Florida, USA with spatially explicit capture-mark-recapture (SCR) by extracting DNA from hair samples collected at barbed-wire hair sampling sites. We employed a clustered sampling configuration with sampling sites arranged in 3 × 3 clusters spaced 2 km apart within each cluster and cluster centers spaced 16 km apart (center to center). We surveyed all 5 subpopulations encompassing 38,960 km² during 2014 and 2015. Several landscape variables, most associated with forest cover, helped refine density estimates for the 5 subpopulations we sampled. Detection probabilities were affected by site-specific behavioral responses coupled with individual capture heterogeneity associated with sex. Model-averaged bear population estimates ranged from 120 (95% CI = 59–276) bears or a mean 0.025 bears/km² (95% CI = 0.011–0.44) for the Eglin subpopulation to 1,198 bears (95% CI = 949–1,537) or 0.127 bears/km² (95% CI = 0.101–0.163) for the Ocala-St. Johns subpopulation. The total population estimate for our 5 study areas was 3,916 bears (95% CI = 2,914–5,451). The clustered sampling method coupled with information on land cover was efficient and allowed us to estimate abundance across extensive areas that would not have been possible otherwise. Clustered sampling combined with spatially explicit capture-recapture methods has the potential to provide rigorous population estimates for a wide array of species that are extensive and heterogeneous in their distribution.

  19. Ngram time series model to predict activity type and energy cost from wrist, hip and ankle accelerometers: implications of age

    PubMed Central

    Strath, Scott J; Kate, Rohit J; Keenan, Kevin G; Welch, Whitney A; Swartz, Ann M

    2016-01-01

    To develop and test time series single site and multi-site placement models, we used wrist, hip and ankle processed accelerometer data to estimate energy cost and type of physical activity in adults. Ninety-nine subjects in three age groups (18–39, 40–64, 65 + years) performed 11 activities while wearing three triaxial accelerometers: one each on the non-dominant wrist, hip, and ankle. During each activity net oxygen cost (METs) was assessed. The time series of accelerometer signals were represented in terms of uniformly discretized values called bins. A Support Vector Machine was used for activity classification with bins and every pair of bins used as features. Bagged decision tree regression was used for net metabolic cost prediction. To evaluate model performance we employed the jackknife leave-one-out cross validation method. Single accelerometer and multi-accelerometer site model estimates across and within age groups revealed similar accuracy, with a bias range of −0.03 to 0.01 METs, bias percent of −0.8 to 0.3%, and a rMSE range of 0.81–1.04 METs. Multi-site accelerometer location models improved activity type classification over single site location models from a low of 69.3% to a maximum of 92.8% accuracy. For each accelerometer site location model, or combined site location model, percent classification accuracy decreased as a function of age group, or when young age group models were generalized to older age groups. Specific age group models on average performed better than when all age groups were combined. This time series approach shows promising results for predicting energy cost and activity type. Differences in prediction across age groups, the lack of generalizability across age groups, and the better performance of age-group-specific models need to be considered as analytic calibration procedures to detect energy cost and activity type are further developed. PMID:26449155
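    As a rough illustration of the two modelling steps described above (bin-style features fed to a support vector machine for activity type, and bagged decision-tree regression for net metabolic cost, evaluated with leave-one-subject-out cross validation), the sketch below uses scikit-learn on synthetic data. The per-window histogram features, group structure, and MET values are placeholders and only loosely follow the paper's bin and bin-pair construction.

```python
# Rough sketch of the two modelling steps on synthetic data: an SVM for
# activity-type classification and bagged decision trees for net METs,
# both evaluated with leave-one-subject-out cross validation.
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import BaggingRegressor
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score, cross_val_predict

rng = np.random.default_rng(0)
n_subjects, windows_per_subject, n_bins = 20, 30, 16

# Synthetic "bin" features: a histogram of discretized acceleration per window.
X = rng.random((n_subjects * windows_per_subject, n_bins))
subjects = np.repeat(np.arange(n_subjects), windows_per_subject)          # grouping for leave-one-subject-out
activity = rng.integers(0, 4, size=X.shape[0])                            # 4 activity-type labels
mets = 1.5 + 2.0 * X[:, :4].sum(axis=1) + rng.normal(0, 0.3, X.shape[0])  # synthetic net MET cost

logo = LeaveOneGroupOut()

clf = SVC(kernel="rbf", C=1.0)
acc = cross_val_score(clf, X, activity, groups=subjects, cv=logo)
print(f"activity classification accuracy: {acc.mean():.2f}")

reg = BaggingRegressor(n_estimators=50, random_state=0)  # default base estimator is a decision tree
pred = cross_val_predict(reg, X, mets, groups=subjects, cv=logo)
bias = (pred - mets).mean()
rmse = np.sqrt(((pred - mets) ** 2).mean())
print(f"METs bias: {bias:.2f}, rMSE: {rmse:.2f}")
```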

  20. Use of visual range measurements to predict fine particulate matter exposures in Southwest Asia and Afghanistan.

    PubMed

    Masri, Shahir; Garshick, Eric; Hart, Jaime; Bouhamra, Walid; Koutrakis, Petros

    2017-01-01

    Military personnel deployed to Southwest Asia and Afghanistan were exposed to high levels of ambient particulate matter (PM). However, quantitative ambient exposure data for conducting health studies are limited due to a lack of PM monitoring stations. Since visual range (VR) is proportional to particle light extinction, VR can serve as a surrogate for PM2.5 (particulate matter with an aerodynamic diameter ≤2.5 µm) concentrations. We used data on VR, relative humidity (RH), and PM2.5 ground measurements collected in Kuwait from 2004 to 2005 to establish the relationship between PM2.5 and VR. Model validation obtained by regressing trimester-average PM2.5 predictions against PM2.5 measurements in Kuwait produced an r² value of 0.84. Cross validation of urban and rural sites in Kuwait also revealed good model fit. We applied this relationship to location-specific visibility data at 104 regional sites from 2000 to 2012 to estimate monthly average PM2.5 concentrations. Monthly averages at sites in Iraq, Afghanistan, United Arab Emirates, Kuwait, Djibouti, and Qatar ranged from 10 to 365 µg/m³ during this period, while site averages ranged from 22 to 80 µg/m³, indicating considerable spatial and temporal heterogeneity in ambient PM2.5 across these regions. These data support the use of historical visibility data to estimate location-specific PM2.5 concentrations for application in epidemiological studies. This study demonstrates the ability to use airport visibility to estimate PM2.5 concentrations in Southwest Asia and Afghanistan. This supports the use of historical and ongoing visibility data to estimate PM2.5 exposure in this region of the world, where PM exposure information is otherwise scarce. This is of high utility to epidemiologists investigating the relationship between chronic exposure to PM2.5 and respiratory diseases among deployed military personnel stationed at various military bases throughout the region. Such information will enable the drafting of improved policies relating to military health.
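    The visibility-to-PM2.5 step can be sketched briefly. The snippet below assumes the Koschmieder relation (extinction ≈ 3.912 / visual range) and regresses PM2.5 on extinction and relative humidity; the functional form, synthetic data, and coefficients are assumptions for illustration, not the fitted model from this study.

```python
# Illustrative sketch of predicting PM2.5 from visual range (VR).
# Extinction is taken as b_ext ~ 3.912 / VR (Koschmieder relation); PM2.5 is then
# regressed on extinction and relative humidity. Data and form are placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 200
vr_km = rng.uniform(1, 40, n)      # visual range, km
rh = rng.uniform(10, 90, n)        # relative humidity, %
b_ext = 3.912 / vr_km              # light extinction, 1/km

# Synthetic "ground truth" PM2.5 (ug/m^3) just to make the example runnable.
pm25 = 80 * b_ext * (1 - rh / 200) + rng.normal(0, 5, n)

X = np.column_stack([b_ext, rh])
model = LinearRegression().fit(X, pm25)
print(f"r^2 = {model.score(X, pm25):.2f}")

# Apply the fitted relation to a site-month with VR = 8 km and RH = 35%.
print("predicted PM2.5 (ug/m^3):", round(model.predict([[3.912 / 8, 35]])[0], 1))
```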

  1. Peak flood estimation using gene expression programming

    NASA Astrophysics Data System (ADS)

    Zorn, Conrad R.; Shamseldin, Asaad Y.

    2015-12-01

    As a case study for the Auckland Region of New Zealand, this paper investigates the potential use of gene-expression programming (GEP) in predicting specific return period events in comparison to the established and widely used Regional Flood Estimation (RFE) method. Initially calibrated to 14 gauged sites, the GEP-derived model was further validated against 10- and 100-year flood events, with relative errors of 29% and 18%, respectively. This compares with the RFE method, which gave errors of 48% and 44% for the same flood events. While the effectiveness of GEP in predicting specific return period events is made apparent, it is argued that the derived equations should be used in conjunction with existing methodologies rather than as a replacement.

  2. Bioaccumulation of organochlorines in crows from an indian open waste dumping site: evidence for direct transfer of dioxin-like congeners from the contaminated soil.

    PubMed

    Watanabe, Michio X; Iwata, Hisato; Watanabe, Mafumi; Tanabe, Shinsuke; Subramanian, Annamalai; Yoneda, Kumiko; Hashimoto, Takuma

    2005-06-15

    To assess the significance of waste dumping sites as a source of chemical contamination to ecosystems, we analyzed the residue levels of polychlorinated dibenzo-p-dioxins (PCDDs), polychlorinated dibenzofurans (PCDFs), polychlorinated biphenyls (PCBs), and other organochlorines in the breast muscle of crows from a dumping site in the south of Chennai city, South India. Crows from the dumping site contained significantly higher total TEQs (60 +/- 27 pg/g lipid wt) than those from the reference sites (26 +/- 18 pg/g lipid wt). In particular, certain dioxin-like coplanar PCB congeners (Co-PCBs), such as CB-77 and CB-105, whose source is commercial PCBs, were significantly higher in crows from the dumping site than those from the reference sites. Profiles of PCDDs/DFs and Co-PCBs in crows from the dumping site were similar to those of soil at the same site, which was confirmed by principal component analysis. Furthermore, significant positive correlations were obtained between the congener-specific bioconcentration factors (BCFs) of PCDDs/DFs estimated from concentrations in crows and soil from the dumping site and the theoretical BCFs calculated from water-particle and lipid-water partitioning coefficients. On the other hand, the estimated BCFs had significant negative correlations with the molecular weight of PCDDs/DFs, indicating that molecular size limits their bioaccumulation. These results suggest that dioxin-like congeners in the soil of the dumping site were transferred directly to the crows through the ingestion of on-site garbage contaminated with soil, rather than through trophic transfer in the ecosystem. The present study provides insight into the ecological impacts of dumping sites.

  3. Water-quality characteristics, including sodium-adsorption ratios, for four sites in the Powder River drainage basin, Wyoming and Montana, water years 2001-2004

    USGS Publications Warehouse

    Clark, Melanie L.; Mason, Jon P.

    2006-01-01

    The U.S. Geological Survey, in cooperation with the Wyoming Department of Environmental Quality, monitors streams throughout the Powder River structural basin in Wyoming and parts of Montana for potential effects of coalbed natural gas development. Specific conductance and sodium-adsorption ratios may be larger in coalbed waters than in stream waters that may receive the discharge waters. Therefore, continuous water-quality instruments for specific conductance were installed and discrete water-quality samples were collected to characterize water quality during water years 2001-2004 at four sites in the Powder River drainage basin: Powder River at Sussex, Wyoming; Crazy Woman Creek near Arvada, Wyoming; Clear Creek near Arvada, Wyoming; and Powder River at Moorhead, Montana. During water years 2001-2004, the median specific conductance of 2,270 microsiemens per centimeter at 25 degrees Celsius (µS/cm) in discrete samples from the Powder River at Sussex, Wyoming, was larger than the median specific conductance of 1,930 µS/cm in discrete samples collected downstream from the Powder River at Moorhead, Montana. The median specific conductance was smallest in discrete samples from Clear Creek (1,180 µS/cm), which has a dilution effect on the specific conductance for the Powder River at Moorhead, Montana. The daily mean specific conductance from continuous water-quality instruments during the irrigation season showed the same spatial pattern as specific conductance values for the discrete samples. Dissolved sodium, sodium-adsorption ratios, and dissolved solids generally showed the same spatial pattern as specific conductance. The largest median sodium concentration (274 milligrams per liter) and the largest range of sodium-adsorption ratios (3.7 to 21) were measured in discrete samples from the Powder River at Sussex, Wyoming. Median concentrations of sodium and sodium-adsorption ratios were substantially smaller in Crazy Woman Creek and Clear Creek, which tend to decrease sodium concentrations and sodium-adsorption ratios at the Powder River at Moorhead, Montana. Dissolved-solids concentrations in discrete samples were closely correlated with specific conductance values; Pearson's correlation coefficients were 0.98 or greater for all four sites. Regression equations for discrete values of specific conductance and sodium-adsorption ratios were statistically significant (p-values <0.001) at all four sites. The strongest relation (R²=0.92) was at the Powder River at Sussex, Wyoming. Relations on Crazy Woman Creek (R²=0.91) and Clear Creek (R²=0.83) also were strong. The relation between specific conductance and sodium-adsorption ratios was weakest (R²=0.65) at the Powder River at Moorhead, Montana; however, the relation was still significant. These data indicate that values of specific conductance are useful for estimating sodium-adsorption ratios. A regression model called LOADEST was used to estimate dissolved-solids loads for the four sites. The average daily mean dissolved-solids loads varied among the sites during water year 2004. The largest average daily mean dissolved-solids load was calculated for the Powder River at Moorhead, Montana. Although the smallest concentrations of dissolved solids were in samples from Clear Creek, the smallest average daily mean dissolved-solids load was calculated for Crazy Woman Creek. The largest loads occurred during spring runoff, and the smallest loads occurred in late summer, when streamflows typically were smallest. Dissolved-solids loads may be smaller than average during water years 2001-2004 because of smaller than average streamflow as a result of drought conditions.
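    Because dissolved solids and sodium-adsorption ratios track specific conductance so closely at these sites, the kind of regression reported above can be illustrated with a simple least-squares fit. The paired values below are synthetic, and the linear form is only one plausible choice; the published site-specific equations are not reproduced here.

```python
# Minimal sketch: estimating sodium-adsorption ratio (SAR) from specific
# conductance (SC) with ordinary least squares. Data are synthetic; the
# published site-specific regressions may use a different functional form.
import numpy as np

rng = np.random.default_rng(2)
sc = rng.uniform(500, 3500, 60)                    # specific conductance, uS/cm
sar = 0.005 * sc + 1.0 + rng.normal(0, 1.0, 60)    # synthetic SAR values

slope, intercept = np.polyfit(sc, sar, 1)
predicted = slope * sc + intercept
r2 = 1 - np.sum((sar - predicted) ** 2) / np.sum((sar - sar.mean()) ** 2)

print(f"SAR = {slope:.4f} * SC + {intercept:.2f}   (R^2 = {r2:.2f})")
print("estimated SAR at SC = 2,270 uS/cm:", round(slope * 2270 + intercept, 1))
```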

  4. Turbulent Kinetic Energy (TKE) Budgets Using 5-beam Doppler Profilers

    NASA Astrophysics Data System (ADS)

    Guerra, M. A.; Thomson, J. M.

    2016-12-01

    Field observations of turbulence parameters are important for the development of hydrodynamic models, understanding contaminant mixing, and predicting sediment transport. The turbulent kinetic energy (TKE) budget quantifies where turbulence is being produced, dissipated or transported at a specific site. The Nortek Signature 5-beam AD2CP was used to measure velocities at high sampling rates (up to 8 Hz) at Admiralty Inlet and Rich Passage in Puget Sound, WA, USA. Raw along-beam velocity data are quality controlled and used to estimate TKE spectra, spatial structure functions, and Reynolds stress tensors. Exceptionally low Doppler noise in the data enables clear observations of the inertial sub-range of isotropic turbulence in both the frequency TKE spectra and the spatial structure functions. From these, TKE dissipation rates are estimated following Kolmogorov's theory of turbulence. The TKE production rates are estimated using Reynolds stress tensors together with the vertical shear in the mean flow. The Reynolds stress tensors are estimated following the methodology of Dewey and Stringer (2007), which is significantly improved by inclusion of the 5th beam (as opposed to the conventional 4). These turbulence parameters are used to study the TKE budget along the water column at the two sites. Ebb and flood production and dissipation rates are compared through the water column at both sites. At Admiralty Inlet, dissipation exceeds production during ebb while the opposite occurs during flood because of the proximity to a lateral headland. At Rich Passage, production exceeds dissipation through the water column for all tidal conditions due to a vertical sill in the vicinity of the measurement site.
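    For readers unfamiliar with structure-function dissipation estimates, the sketch below shows the usual Kolmogorov-scaling fit: the second-order along-beam structure function D(r) is regressed against r^(2/3), and the slope is converted to a dissipation rate with the commonly used constant C² ≈ 2.1, the intercept absorbing Doppler noise. The velocity profile is synthetic, and this is not the authors' processing code.

```python
# Sketch of a TKE dissipation-rate estimate from along-beam velocities using the
# second-order structure function and Kolmogorov 2/3 scaling. Velocities are
# synthetic; C2 ~ 2.1 is the commonly used empirical constant.
import numpy as np

rng = np.random.default_rng(3)
dz = 0.5                                   # bin size along the beam, m
v = rng.normal(0, 0.05, (600, 40))         # turbulent along-beam velocity, 600 pings x 40 bins

max_sep = 8                                # maximum separation, in bins
r = np.arange(1, max_sep + 1) * dz
D = np.array([np.mean((v[:, lag:] - v[:, :-lag]) ** 2) for lag in range(1, max_sep + 1)])

# D(r) = C2 * eps^(2/3) * r^(2/3): regress D on r^(2/3); the intercept absorbs Doppler noise.
A, noise = np.polyfit(r ** (2.0 / 3.0), D, 1)
C2 = 2.1
eps = (max(A, 0.0) / C2) ** 1.5            # dissipation rate, m^2/s^3
print(f"epsilon ~ {eps:.2e} m^2/s^3 (noise offset {noise:.2e} m^2/s^2)")
```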

  5. Predatory fish depletion and recovery potential on Caribbean reefs

    PubMed Central

    Valdivia, Abel; Cox, Courtney Ellen; Bruno, John Francis

    2017-01-01

    The natural, prehuman abundance of most large predators is unknown because of the lack of historical data and a limited understanding of the natural factors that control their populations. Determining the supportable predator biomass at a given location (that is, the predator carrying capacity) would help managers to optimize protection and would provide site-specific recovery goals. We assess the relationship between predatory reef fish biomass and several anthropogenic and environmental variables at 39 reefs across the Caribbean to (i) estimate their roles in determining local predator biomass and (ii) determine site-specific recovery potential if fishing were eliminated. We show that predatory reef fish biomass tends to be higher in marine reserves but is strongly negatively related to human activities, especially coastal development. However, human activities and natural factors, including reef complexity and prey abundance, explain more than 50% of the spatial variation in predator biomass. Comparing site-specific predator carrying capacities to field observations, we infer that current predatory reef fish biomass is 60 to 90% lower than the potential supportable biomass in most sites, even within most marine reserves. We also found that the scope for recovery varies among reefs by at least an order of magnitude. This suggests that we could underestimate unfished biomass at sites that provide ideal conditions for predators or greatly overestimate that of seemingly predator-depleted sites that may have never supported large predator populations because of suboptimal environmental conditions. PMID:28275730

  6. Domain-specific interactions between MLN8237 and human serum albumin estimated by STD and WaterLOGSY NMR, ITC, spectroscopic, and docking techniques.

    PubMed

    Yang, Hongqin; Liu, Jiuyang; Huang, Yanmei; Gao, Rui; Tang, Bin; Li, Shanshan; He, Jiawei; Li, Hui

    2017-03-30

    Alisertib (MLN8237) is an orally administered inhibitor of Aurora A kinase. This small-molecule inhibitor is in clinical or pre-clinical development for the treatment of advanced malignancies. The present study provides a detailed characterization of the interaction of MLN8237 with the drug transport protein human serum albumin (HSA). STD and WaterLOGSY nuclear magnetic resonance (NMR) binding studies were conducted first to confirm the binding of MLN8237 to HSA. In the ligand orientation assay, the binding sites of MLN8237 were validated through two site-specific spy molecules (warfarin sodium and ibuprofen, which are two known site-selective probes) by using STD and WaterLOGSY NMR competition techniques. These competition experiments demonstrate that both spy molecules do not compete with MLN8237 for the specific binding site. The AutoDock-based blind docking study identifies the hydrophobic subdomain IB of the protein as the probable binding site for MLN8237. Thermodynamic investigations by isothermal titration calorimetry (ITC) reveal that the non-covalent interaction between MLN8237 and HSA (binding constant was approximately 10⁵ M⁻¹) is driven mainly by favorable entropy and unfavorable enthalpy. In addition, synchronous fluorescence, circular dichroism (CD), and 3D fluorescence spectroscopy suggest that MLN8237 may induce conformational changes in HSA.

  7. Domain-specific interactions between MLN8237 and human serum albumin estimated by STD and WaterLOGSY NMR, ITC, spectroscopic, and docking techniques

    PubMed Central

    Yang, Hongqin; Liu, Jiuyang; Huang, Yanmei; Gao, Rui; Tang, Bin; Li, Shanshan; He, Jiawei; Li, Hui

    2017-01-01

    Alisertib (MLN8237) is an orally administered inhibitor of Aurora A kinase. This small-molecule inhibitor is in clinical or pre-clinical development for the treatment of advanced malignancies. The present study provides a detailed characterization of the interaction of MLN8237 with the drug transport protein human serum albumin (HSA). STD and WaterLOGSY nuclear magnetic resonance (NMR) binding studies were conducted first to confirm the binding of MLN8237 to HSA. In the ligand orientation assay, the binding sites of MLN8237 were validated through two site-specific spy molecules (warfarin sodium and ibuprofen, which are two known site-selective probes) by using STD and WaterLOGSY NMR competition techniques. These competition experiments demonstrate that both spy molecules do not compete with MLN8237 for the specific binding site. The AutoDock-based blind docking study identifies the hydrophobic subdomain IB of the protein as the probable binding site for MLN8237. Thermodynamic investigations by isothermal titration calorimetry (ITC) reveal that the non-covalent interaction between MLN8237 and HSA (binding constant was approximately 10⁵ M⁻¹) is driven mainly by favorable entropy and unfavorable enthalpy. In addition, synchronous fluorescence, circular dichroism (CD), and 3D fluorescence spectroscopy suggest that MLN8237 may induce conformational changes in HSA. PMID:28358124

  8. Domain-specific interactions between MLN8237 and human serum albumin estimated by STD and WaterLOGSY NMR, ITC, spectroscopic, and docking techniques

    NASA Astrophysics Data System (ADS)

    Yang, Hongqin; Liu, Jiuyang; Huang, Yanmei; Gao, Rui; Tang, Bin; Li, Shanshan; He, Jiawei; Li, Hui

    2017-03-01

    Alisertib (MLN8237) is an orally administered inhibitor of Aurora A kinase. This small-molecule inhibitor is in clinical or pre-clinical development for the treatment of advanced malignancies. The present study provides a detailed characterization of the interaction of MLN8237 with the drug transport protein human serum albumin (HSA). STD and WaterLOGSY nuclear magnetic resonance (NMR) binding studies were conducted first to confirm the binding of MLN8237 to HSA. In the ligand orientation assay, the binding sites of MLN8237 were validated through two site-specific spy molecules (warfarin sodium and ibuprofen, which are two known site-selective probes) by using STD and WaterLOGSY NMR competition techniques. These competition experiments demonstrate that both spy molecules do not compete with MLN8237 for the specific binding site. The AutoDock-based blind docking study identifies the hydrophobic subdomain IB of the protein as the probable binding site for MLN8237. Thermodynamic investigations by isothermal titration calorimetry (ITC) reveal that the non-covalent interaction between MLN8237 and HSA (binding constant was approximately 10⁵ M⁻¹) is driven mainly by favorable entropy and unfavorable enthalpy. In addition, synchronous fluorescence, circular dichroism (CD), and 3D fluorescence spectroscopy suggest that MLN8237 may induce conformational changes in HSA.

  9. AFB/open cycle gas turbine conceptual design study

    NASA Technical Reports Server (NTRS)

    Dickinson, T. W.; Tashjian, R.

    1983-01-01

    Applications of coal-fired atmospheric fluidized bed (AFB) gas turbine systems in industrial cogeneration are identified. Based on site-specific conceptual designs, the potential benefits of the AFB/gas turbine system were compared with those of an atmospheric fluidized bed steam boiler/steam turbine system. The application of these cogeneration systems at four industrial plant sites is reviewed. A performance and benefit analysis was made along with a study of the representativeness of the sites, both in regard to their own industry and compared to industry as a whole. A site was selected for the conceptual design, which included detailed site definition, AFB/gas turbine and AFB/steam turbine cogeneration system designs, detailed cost estimates, and comparative performance and benefit analysis. Market and benefit analyses identified the potential market penetration for the cogeneration technologies and quantified the potential benefits.

  10. AFB/open cycle gas turbine conceptual design study

    NASA Astrophysics Data System (ADS)

    Dickinson, T. W.; Tashjian, R.

    1983-09-01

    Applications of coal-fired atmospheric fluidized bed (AFB) gas turbine systems in industrial cogeneration are identified. Based on site-specific conceptual designs, the potential benefits of the AFB/gas turbine system were compared with those of an atmospheric fluidized bed steam boiler/steam turbine system. The application of these cogeneration systems at four industrial plant sites is reviewed. A performance and benefit analysis was made along with a study of the representativeness of the sites, both in regard to their own industry and compared to industry as a whole. A site was selected for the conceptual design, which included detailed site definition, AFB/gas turbine and AFB/steam turbine cogeneration system designs, detailed cost estimates, and comparative performance and benefit analysis. Market and benefit analyses identified the potential market penetration for the cogeneration technologies and quantified the potential benefits.

  11. Emissions from prescribed fire in temperate forest in south-east Australia: implications for carbon accounting

    NASA Astrophysics Data System (ADS)

    Possell, M.; Jenkins, M.; Bell, T. L.; Adams, M. A.

    2014-09-01

    We estimated emissions of carbon, as CO2-equivalents, from planned fire at four sites in a south-eastern Australian forest. Emission estimates were calculated using measurements of fuel load and carbon content of different fuel types, before and after burning, and determination of fuel-specific emission factors. Median estimates of emissions for the four sites ranged from 20 to 139 t CO2-e ha⁻¹. Variability in estimates was a consequence of different burning efficiencies of each fuel type at the four sites. Higher emissions resulted from more fine fuel (twigs, decomposing matter, near-surface live and leaf litter) or coarse woody debris (CWD; >25 mm diameter) being consumed. In order to assess the effect of estimating emissions when only a few fuel variables are known, Monte-Carlo simulations were used to create seven scenarios where input parameter values were replaced by probability density functions. Calculation methods were: (1) all measured data were constrained between measured maximum and minimum values for each variable; (2) as for (1) except the proportion of carbon within a fuel type was constrained between 0 and 1; (3) as for (2) but losses of mass caused by fire were replaced with burning efficiency factors constrained between 0 and 1; and (4) emissions were calculated using default values in the Australian National Greenhouse Accounts (NGA), National Inventory Report 2011, as appropriate for our sites. Effects of including CWD in calculations were assessed for calculation Methods 1, 2, and 3 but not for Method 4, as the NGA does not consider this fuel type. The simulations demonstrate that the probability of estimating the true median emissions declines strongly as the amount of information available declines. Including CWD in scenarios increased uncertainty in calculations because CWD is the most variable contributor to fuel load. Inclusion of CWD in scenarios generally increased the amount of carbon lost. We discuss the implications of these simulations and how emission estimates from prescribed burns in temperate Australian forests could be improved.
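    The scenario analysis above amounts to propagating assumed distributions for fuel load, carbon content, and burning efficiency through the emission calculation. The sketch below is a minimal Monte-Carlo version of that idea; the fuel types, distributions, and conversion (which ignores non-CO2 gases) are illustrative assumptions rather than the study's inputs.

```python
# Sketch of a Monte-Carlo fire-emission estimate: fuel load, carbon fraction, and
# burning efficiency are drawn from assumed distributions per fuel type and
# combined into CO2-equivalent emissions per hectare. All inputs are illustrative.
import numpy as np

rng = np.random.default_rng(4)
n = 10_000
C_TO_CO2 = 44.0 / 12.0

# fuel type: ((load t/ha: low, high), (carbon fraction: low, high), (burning efficiency: low, high))
fuel_types = {
    "fine fuel":           ((5, 15),  (0.45, 0.55), (0.6, 0.95)),
    "coarse woody debris": ((10, 60), (0.45, 0.55), (0.1, 0.6)),
}

total = np.zeros(n)
for name, ((fl_lo, fl_hi), (cf_lo, cf_hi), (be_lo, be_hi)) in fuel_types.items():
    load = rng.uniform(fl_lo, fl_hi, n)        # t dry matter per ha
    cfrac = rng.uniform(cf_lo, cf_hi, n)       # carbon fraction of dry matter
    beff = rng.uniform(be_lo, be_hi, n)        # fraction of the fuel consumed
    total += load * cfrac * beff * C_TO_CO2    # t CO2-e per ha (non-CO2 gases ignored)

lo, med, hi = np.percentile(total, [2.5, 50, 97.5])
print(f"emissions: median {med:.0f} t CO2-e/ha (95% interval {lo:.0f}-{hi:.0f})")
```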

  12. Clearing the waters: Evaluating the need for site-specific field fluorescence corrections based on turbidity measurements

    USGS Publications Warehouse

    Saraceno, John F.; Shanley, James B.; Downing, Bryan D.; Pellerin, Brian A.

    2017-01-01

    In situ fluorescent dissolved organic matter (fDOM) measurements have gained increasing popularity as a proxy for dissolved organic carbon (DOC) concentrations in streams. One challenge to accurate fDOM measurements in many streams is light attenuation due to suspended particles. Downing et al. (2012) evaluated the need for corrections to compensate for particle interference on fDOM measurements using a single sediment standard in a laboratory study. The application of those results to a large river improved unfiltered field fDOM accuracy. We tested the same correction equation in a headwater tropical stream and found that it overcompensated fDOM when turbidity exceeded ∼300 formazin nephelometric units (FNU). Therefore, we developed a site-specific, field-based fDOM correction equation through paired in situ fDOM measurements of filtered and unfiltered streamwater. The site-specific correction increased fDOM accuracy up to a turbidity as high as 700 FNU, the maximum observed in this study. The difference in performance between the laboratory-based correction equation of Downing et al. (2012) and our site-specific, field-based correction equation likely arises from differences in particle size distribution between the sediment standard used in the lab (silt) and that observed in our study (fine to medium sand), particularly during high flows. Therefore, a particle interference correction equation based on a single sediment type may not be ideal when field sediment size is significantly different. Given that field fDOM corrections for particle interference under turbid conditions are a critical component in generating accurate DOC estimates, we describe a way to develop site-specific corrections.
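    One way to build such a site-specific correction is to fit an attenuation model to paired filtered (reference) and unfiltered fDOM readings across the observed turbidity range and then invert it for field data. The exponential form, coefficients, and data in the sketch below are assumptions for illustration; they are not the correction equation derived in this study or in Downing et al. (2012).

```python
# Sketch of deriving a site-specific particle-interference correction for fDOM:
# fit an attenuation model to paired filtered (reference) and unfiltered readings,
# then invert it to correct field data. The exponential form is an assumption.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(5)
turbidity = rng.uniform(0, 700, 80)                    # FNU
fdom_filtered = rng.uniform(20, 60, 80)                # "true" fDOM of filtered water, QSU
fdom_unfiltered = fdom_filtered * np.exp(-0.002 * turbidity) + rng.normal(0, 0.5, 80)

def attenuation(turb, k):
    return np.exp(-k * turb)

k_fit, _ = curve_fit(attenuation, turbidity, fdom_unfiltered / fdom_filtered, p0=[0.001])

def correct_fdom(fdom_measured, turb):
    """Apply the fitted site-specific correction to unfiltered field fDOM."""
    return fdom_measured / attenuation(turb, k_fit[0])

print(f"fitted attenuation coefficient k = {k_fit[0]:.4f} per FNU")
print("corrected fDOM at 40 QSU and 500 FNU:", round(correct_fdom(40.0, 500.0), 1))
```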

  13. Low-flow characteristics of streams in Virginia

    USGS Publications Warehouse

    Hayes, Donald C.

    1991-01-01

    Streamflow data were collected and low-flow characteristics computed for 715 gaged sites in Virginia. Annual minimum average 7-consecutive-day flows range from 0 to 2,195 cubic feet per second for a 2-year recurrence interval and from 0 to 1,423 cubic feet per second for a 10-year recurrence interval. Drainage areas range from 0.17 to 7,320 square miles. Existing and discontinued gaged sites are separated into three types: long-term continuous-record sites, short-term continuous-record sites, and partial-record sites. Low-flow characteristics for long-term continuous-record sites are determined from frequency curves of annual minimum average 7-consecutive-day flows. Low-flow characteristics for short-term continuous-record sites are estimated by relating daily mean base-flow discharge values at a short-term site to concurrent daily mean discharge values at nearby long-term continuous-record sites having similar basin characteristics. Low-flow characteristics for partial-record sites are estimated by relating base-flow measurements to daily mean discharge values at long-term continuous-record sites. Information from the continuous-record sites and partial-record sites in Virginia is used to develop two techniques for estimating low-flow characteristics at ungaged sites. A flow-routing method is developed to estimate low-flow values at ungaged sites on gaged streams. Regional regression equations are developed for estimating low-flow values at ungaged sites on ungaged streams. The flow-routing method consists of transferring low-flow characteristics from a gaged site, either upstream or downstream, to a desired ungaged site. A simple drainage-area proration is used to transfer values when there are no major tributaries between the gaged and ungaged sites. Standard errors of estimate for 108 test sites are 19 percent of the mean for estimates of low-flow characteristics having a 2-year recurrence interval and 52 percent of the mean for estimates of low-flow characteristics having a 10-year recurrence interval. A more complex transfer method must be used when major tributaries enter the stream between the gaged and ungaged sites. Twenty-four stream networks are analyzed, and predictions are made for 84 sites. Standard errors of estimate are 15 percent of the mean for estimates of low-flow characteristics having a 2-year recurrence interval and 22 percent of the mean for estimates of low-flow characteristics having a 10-year recurrence interval. Regional regression equations were developed for estimating low-flow values at ungaged sites on ungaged streams. The State was divided into eight regions on the basis of physiography and geographic grouping of the residuals computed in regression analyses. Basin characteristics that were significant in the regression analysis were drainage area, rock type, and strip-mined area. Standard errors of prediction range from 60 to 139 percent for estimates of low-flow characteristics having a 2-year recurrence interval and from 90 to 172 percent for estimates of low-flow characteristics having a 10-year recurrence interval.
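    The simple drainage-area proration mentioned above is just a ratio transfer, shown below as a minimal sketch; the straight (exponent 1) proration and all numbers are illustrative, not values from the report.

```python
# Minimal sketch of the drainage-area proration used to transfer a low-flow
# characteristic from a gaged site to a nearby ungaged site on the same stream.
# The exponent of 1.0 (straight proration) and all values are illustrative.

def prorate_low_flow(q_gaged_cfs, area_gaged_mi2, area_ungaged_mi2, exponent=1.0):
    """Transfer a low-flow characteristic (e.g., the 7-day, 2-year low flow) by drainage-area ratio."""
    return q_gaged_cfs * (area_ungaged_mi2 / area_gaged_mi2) ** exponent

# Example: gaged 7Q2 of 12 ft^3/s on 150 mi^2; the ungaged site drains 95 mi^2.
print(f"estimated 7Q2 at the ungaged site: {prorate_low_flow(12.0, 150.0, 95.0):.1f} ft^3/s")
```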

  14. Comprehensive analysis of proton range uncertainties related to stopping-power-ratio estimation using dual-energy CT imaging

    NASA Astrophysics Data System (ADS)

    Li, B.; Lee, H. C.; Duan, X.; Shen, C.; Zhou, L.; Jia, X.; Yang, M.

    2017-09-01

    The dual-energy CT-based (DECT) approach holds promise in reducing the overall uncertainty in proton stopping-power-ratio (SPR) estimation as compared to the conventional stoichiometric calibration approach. The objective of this study was to analyze the factors contributing to uncertainty in SPR estimation using the DECT-based approach and to derive a comprehensive estimate of the range uncertainty associated with SPR estimation in treatment planning. Two state-of-the-art DECT-based methods were selected and implemented on a Siemens SOMATOM Force DECT scanner. The uncertainties were first divided into five independent categories. The uncertainty associated with each category was estimated for lung, soft and bone tissues separately. A single composite uncertainty estimate was eventually determined for three tumor sites (lung, prostate and head-and-neck) by weighting the relative proportion of each tissue group for that specific site. The uncertainties associated with the two selected DECT methods were found to be similar, therefore the following results applied to both methods. The overall uncertainty (1σ) in SPR estimation with the DECT-based approach was estimated to be 3.8%, 1.2% and 2.0% for lung, soft and bone tissues, respectively. The dominant factor contributing to uncertainty in the DECT approach was the imaging uncertainties, followed by the DECT modeling uncertainties. Our study showed that the DECT approach can reduce the overall range uncertainty to approximately 2.2% (2σ) in clinical scenarios, in contrast to the previously reported 1%.
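    A minimal sketch of the bookkeeping implied above: independent category uncertainties are combined in quadrature for each tissue group, and tissue-group values are then weighted into a composite for a treatment site. The category values, tissue weights, and the weighting scheme itself are placeholders, not the study's numbers or method.

```python
# Sketch of combining independent SPR uncertainty categories per tissue group
# (root sum of squares) and weighting tissue groups into a site composite.
# Category values and tissue weights are placeholders, not the study's numbers.
import math

# per-category 1-sigma uncertainties (%) for each tissue group (illustrative)
categories = {
    "lung": [3.5, 1.0, 0.8, 0.5, 0.3],
    "soft": [0.9, 0.5, 0.4, 0.3, 0.2],
    "bone": [1.6, 0.8, 0.6, 0.4, 0.3],
}
tissue_sigma = {t: math.sqrt(sum(u * u for u in us)) for t, us in categories.items()}

# assumed proportion of the beam path in each tissue group for one site (illustrative)
site_weights = {"prostate": {"lung": 0.0, "soft": 0.85, "bone": 0.15}}

for site, w in site_weights.items():
    composite = math.sqrt(sum(w[t] * tissue_sigma[t] ** 2 for t in w))
    print(f"{site}: composite SPR uncertainty ~ {composite:.1f}% (1 sigma)")
```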

  15. Estimating groundwater recharge uncertainty from joint application of an aquifer test and the water-table fluctuation method

    NASA Astrophysics Data System (ADS)

    Delottier, H.; Pryet, A.; Lemieux, J. M.; Dupuy, A.

    2018-05-01

    Specific yield and groundwater recharge of unconfined aquifers are both essential parameters for groundwater modeling and sustainable groundwater development, yet the collection of reliable estimates of these parameters remains challenging. Here, a joint approach combining an aquifer test with application of the water-table fluctuation (WTF) method is presented to estimate these parameters and quantify their uncertainty. The approach requires two wells: an observation well instrumented with a pressure probe for long-term monitoring and a pumping well, located in the vicinity, for the aquifer test. The derivative of observed drawdown levels highlights the necessity to represent delayed drainage from the unsaturated zone when interpreting the aquifer test results. Groundwater recharge is estimated with an event-based WTF method in order to minimize the transient effects of flow dynamics in the unsaturated zone. The uncertainty on groundwater recharge is obtained by the propagation of the uncertainties on specific yield (Bayesian inference) and groundwater recession dynamics (regression analysis) through the WTF equation. A major portion of the uncertainty on groundwater recharge originates from the uncertainty on the specific yield. The approach was applied to a site in Bordeaux (France). Groundwater recharge was estimated to be 335 mm with an associated uncertainty of 86.6 mm at 2σ. By the use of cost-effective instrumentation and parsimonious methods of interpretation, the replication of such a joint approach should be encouraged to provide reliable estimates of specific yield and groundwater recharge over a region of interest. This is necessary to reduce the predictive uncertainty of groundwater management models.
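    The event-based WTF calculation and its uncertainty propagation can be sketched in a few lines: recharge is specific yield times the water-table rise plus the recession that would have occurred without recharge, and uncertainty follows from sampling the specific-yield and recession estimates. The distributions and event values below are illustrative assumptions, not the Bordeaux data.

```python
# Sketch of an event-based water-table fluctuation (WTF) recharge estimate with
# Monte-Carlo propagation of uncertainty in specific yield (Sy) and the recession
# rate. All distributions and event values are illustrative.
import numpy as np

rng = np.random.default_rng(6)
n = 20_000

observed_rise_m = 0.9                        # water-table rise during one recharge event
event_days = 3.0                             # duration of the event

sy = rng.normal(0.06, 0.01, n)               # specific yield, e.g., posterior from the aquifer test
recession = rng.normal(0.02, 0.005, n)       # recession rate (m/day) from regression analysis

# Recharge = Sy * (observed rise + recession that would have occurred without recharge)
recharge_mm = sy * (observed_rise_m + recession * event_days) * 1000.0

mean, sd = recharge_mm.mean(), recharge_mm.std()
print(f"event recharge ~ {mean:.0f} mm +/- {2 * sd:.0f} mm (2 sigma)")
```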

  16. Estimation of distribution overlap of urn models.

    PubMed

    Hampton, Jerrad; Lladser, Manuel E

    2012-01-01

    A classical problem in statistics is estimating the expected coverage of a sample, which has had applications in gene expression, microbial ecology, optimization, and even numismatics. Here we consider a related extension of this problem to random samples of two discrete distributions. Specifically, we estimate what we call the dissimilarity probability of a sample, i.e., the probability of a draw from one distribution not being observed in n draws from another distribution. We show our estimator of dissimilarity to be a U-statistic and a uniformly minimum variance unbiased estimator of dissimilarity over the largest appropriate range of n. Furthermore, despite the non-Markovian nature of our estimator when applied sequentially over n, we show it converges uniformly in probability to the dissimilarity parameter, and we present criteria when it is approximately normally distributed and admits a consistent jackknife estimator of its variance. As proof of concept, we analyze V35 16S rRNA data to discern between various microbial environments. Other potential applications concern any situation where dissimilarity of two discrete distributions may be of interest. For instance, in SELEX experiments, each urn could represent a random RNA pool and each draw a possible solution to a particular binding site problem over that pool. The dissimilarity of these pools is then related to the probability of finding binding site solutions in one pool that are absent in the other.

  17. HLA Amino Acid Polymorphisms and Kidney Allograft Survival

    PubMed Central

    Kamoun, Malek; McCullough, Keith P.; Maiers, Martin; Fernandez Vina, Marcelo A.; Li, Hongzhe; Teal, Valerie; Leichtman, Alan B.; Merion, Robert M.

    2017-01-01

    Background The association of HLA mismatching with kidney allograft survival has been well established. We examined whether amino acid (AA) mismatches (MMs) at the antigen recognition site of HLA molecules represent independent and incremental risk factors for kidney graft failure (GF) beyond those MMs assessed at the antigenic (2-digit) specificity. Methods Data on 240 024 kidney transplants performed between 1987 and 2009 were obtained from the Scientific Registry of Transplant Recipients. We imputed HLA-A, -B, and -DRB1 alleles and corresponding AA polymorphisms from antigenic specificity through the application of statistical and population genetics inferences. GF risk was evaluated using Cox proportional-hazards regression models adjusted for covariates including patient and donor risk factors and HLA antigen MMs. Results We show that estimated AA MMs at particular positions in the peptide-binding pockets of HLA-DRB1 molecule account for a significant incremental risk that was independent of the well-known association of HLA antigen MMs with graft survival. A statistically significant linear relationship between the estimated number of AA MMs and risk of GF was observed for HLA-DRB1 in deceased donor and living donor transplants. This relationship was strongest during the first 12 months after transplantation (hazard ratio, 1.30 per 15 DRB1 AA MM; P < 0.0001). Conclusions This study shows that independent of the well-known association of HLA antigen (2-digit specificity) MMs with kidney graft survival, estimated AA MMs at peptide-binding sites of the HLA-DRB1 molecule account for an important incremental risk of GF. PMID:28221244

  18. Estimating animal populations and body sizes from burrows: Marine ecologists have their heads buried in the sand

    NASA Astrophysics Data System (ADS)

    Schlacher, Thomas A.; Lucrezi, Serena; Peterson, Charles H.; Connolly, Rod M.; Olds, Andrew D.; Althaus, Franziska; Hyndes, Glenn A.; Maslo, Brooke; Gilby, Ben L.; Leon, Javier X.; Weston, Michael A.; Lastra, Mariano; Williams, Alan; Schoeman, David S.

    2016-06-01

    Most ecological studies require knowledge of animal abundance, but it can be challenging and destructive of habitat to obtain accurate density estimates for cryptic species, such as crustaceans that tunnel deeply into the seafloor, beaches, or mudflats. Such fossorial species are, however, widely used in environmental impact assessments, requiring sampling techniques that are reliable, efficient, and environmentally benign for these species and environments. Counting and measuring the entrances of burrows made by cryptic species is commonly employed to index population and body sizes of individuals. The fundamental premise is that burrow metrics consistently predict density and size. Here we review the evidence for this premise. We also review criteria for selecting among sampling methods: burrow counts, visual censuses, and physical collections. A simple 1:1 correspondence between the number of holes and population size cannot be assumed. Occupancy rates, indexed by the slope of regression models, vary widely between species and among sites for the same species. Thus, 'average' or 'typical' occupancy rates should not be extrapolated from site- or species-specific field validations and then be used as conversion factors in other situations. Predictions of organism density made from burrow counts often have large uncertainty, ranging from half to double the predicted mean value. Whether such prediction uncertainty is 'acceptable' depends on investigators' judgements regarding the desired detectable effect sizes. Regression models predicting body size from burrow entrance dimensions are more precise, but parameter estimates of most models are specific to species and subject to site-to-site variation within species. These results emphasise the need to undertake thorough field validations of indirect census techniques that include tests of how sensitive predictive models are to changes in habitat conditions or human impacts. In addition, new technologies (e.g. drones, thermal-, acoustic- or chemical sensors) should be used to enhance visual census techniques of burrows and surface-active animals.

  19. Remote sensing-based estimation of annual soil respiration at two contrasting forest sites

    NASA Astrophysics Data System (ADS)

    Huang, Ni; Gu, Lianhong; Black, T. Andrew; Wang, Li; Niu, Zheng

    2015-11-01

    Soil respiration (Rs), an important component of the global carbon cycle, can be estimated using remotely sensed data, but the accuracy of this technique has not been thoroughly investigated. In this study, we proposed a methodology for the remote estimation of annual Rs at two contrasting FLUXNET forest sites (a deciduous broadleaf forest and an evergreen needleleaf forest). A version of Akaike's information criterion was used to select the best model from a range of models for annual Rs estimation based on the remotely sensed data products from the Moderate Resolution Imaging Spectroradiometer and a root-zone soil moisture product derived from assimilation of the NASA Advanced Microwave Scanning Radiometer soil moisture products and a two-layer Palmer water balance model. We found that the Arrhenius-type function based on nighttime land surface temperature (LST-night) was the best model when both explanatory power and model complexity were considered at the Missouri Ozark and BC-Campbell River 1949 Douglas-fir sites. In addition, a multicollinearity problem among LST-night, root-zone soil moisture, and the plant photosynthesis factor was effectively avoided by selecting the LST-night-driven model. Cross validation showed that temporal variation in Rs was captured by the LST-night-driven model with a mean absolute error below 1 µmol CO2 m⁻² s⁻¹ at both forest sites. An obvious overestimation that occurred in 2005 and 2007 at the Missouri Ozark site reduced the evaluation accuracy of cross validation because of summer drought. However, no significant difference was found between the Arrhenius-type function driven by LST-night and the function considering LST-night and root-zone soil moisture. This finding indicated that the contribution of soil moisture to Rs was relatively small in our multiyear data set. To predict intersite Rs, maximum leaf area index (LAImax) was used as an upscaling factor to calibrate the site-specific reference respiration rates. Independent validation demonstrated that the model incorporating LST-night and LAImax efficiently predicted the spatial and temporal variabilities of Rs. Based on the Arrhenius-type function using LST-night as an input parameter, the rates of annual C release from Rs were 894-1027 g C m⁻² yr⁻¹ at the BC-Campbell River 1949 Douglas-fir site and 818-943 g C m⁻² yr⁻¹ at the Missouri Ozark site. The ratio between annual Rs estimates based on remotely sensed data and the total annual ecosystem respiration from eddy covariance measurements fell within the range reported in previous studies. Our results demonstrated that estimating annual Rs based on remote sensing data products was possible at deciduous and evergreen forest sites.
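    The temperature response described above can be illustrated with a generic Arrhenius-type fit driven by nighttime LST; the exact functional form, reference respiration, and activation energy used in the study are not reproduced here, so the parameters and synthetic data below are placeholders.

```python
# Sketch of fitting an Arrhenius-type soil-respiration model driven by nighttime
# land surface temperature (LST-night). The functional form and parameter values
# are generic stand-ins, not the study's fitted model.
import numpy as np
from scipy.optimize import curve_fit

R_GAS = 8.314          # J mol^-1 K^-1
T_REF = 283.15         # reference temperature, K (10 degrees C)

def arrhenius_rs(t_kelvin, r_ref, e_a):
    """Soil respiration (umol CO2 m^-2 s^-1) as an Arrhenius function of temperature."""
    return r_ref * np.exp((e_a / R_GAS) * (1.0 / T_REF - 1.0 / t_kelvin))

rng = np.random.default_rng(7)
lst_night = rng.uniform(270, 300, 120)                                    # K
rs_obs = arrhenius_rs(lst_night, 2.0, 55_000) + rng.normal(0, 0.3, 120)   # synthetic observations

(r_ref, e_a), _ = curve_fit(arrhenius_rs, lst_night, rs_obs, p0=[1.0, 50_000])
print(f"R_ref = {r_ref:.2f} umol m^-2 s^-1, apparent activation energy = {e_a / 1000:.0f} kJ/mol")

# Annual soil C release (g C m^-2 yr^-1) from daily LST-night values.
daily_lst = rng.uniform(270, 300, 365)
annual_c = (arrhenius_rs(daily_lst, r_ref, e_a) * 86400 * 12e-6).sum()    # umol CO2 -> g C
print(f"annual soil C release ~ {annual_c:.0f} g C m^-2 yr^-1")
```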

  20. 76 FR 18827 - Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-05

    ... individuals. Time to conduct study: 90 minutes. Estimated travel time to and from site: 30 minutes. Estimated... minutes. Estimated travel time to and from site: 30 minutes. Estimated floater burden: 84 hours (24 x 210... study: 90 minutes. Estimated travel time to and from site: 30 minutes. Estimated participant burden: 50...

  1. Volcanic Surface Deformation in Dominica From GPS Geodesy: Results From the 2007 NSF- REU Site

    NASA Astrophysics Data System (ADS)

    Murphy, R.; James, S.; Styron, R. H.; Turner, H. L.; Ashlock, A.; Cavness, C.; Collier, X.; Fauria, K.; Feinstein, R.; Staisch, L.; Williams, B.; Mattioli, G. S.; Jansma, P. E.; Cothren, J.

    2007-12-01

    GPS measurements have been collected on the island of Dominica in the Lesser Antilles between 2001 and 2007, with five month-long campaigns completed in June of each year, supported in part by an NSF REU Site award for the past two years. All GPS data were collected using dual-frequency, code-phase receivers and geodetic-quality antennas, primarily choke rings. Three consecutive 24 hr observation days were normally obtained for each site. Precise station positions were estimated with GIPSY-OASIS II using an absolute point positioning strategy and final, precise orbits, clocks, earth orientation parameters, and x-files. All position estimates were updated to ITRF05 and a revised Caribbean Euler pole was used to place our observations in a CAR-fixed frame. Time series were created to determine the velocity of each station. Forward and inverse elastic half-space models with planar (i.e. dike) and Mogi (i.e. point) sources were investigated. Inverse modeling was completed using a downhill simplex method of function minimization. Selected site velocities were used to create appropriate models for specific regions of Dominica, which correspond to known centers of pre-historic volcanic or recent shallow, seismic activity. Because of the current distribution of GPS sites with robust velocity estimates, we limit our models to possible magmatic activity in the northern region of the island, proximal to the volcanic centers of Morne Diablotins and Morne aux Diables, and in the southern region, proximal to the volcanic centers of Soufriere and Morne Plat Pays. Surface deformation data from the northernmost sites may be fit with the development of a several km-long dike trending approximately northeast-southwest. Activity in the southern volcanic centers is best modeled by an expanding point source at approximately 1 km depth.
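    For context, the Mogi point source referred to above has a closed-form surface-displacement solution in an elastic half-space, sketched below; the source depth, volume change, Poisson ratio, and station locations are hypothetical, and the campaign's actual downhill-simplex inversion over dike and point sources is not reproduced.

```python
# Sketch of a Mogi point-source forward model: surface displacements produced by
# a volume change dV at depth d in an elastic half-space with Poisson ratio nu.
# Source parameters and GPS site locations below are hypothetical.
import numpy as np

def mogi_displacements(x_km, y_km, src_x_km, src_y_km, depth_km, dv_km3, nu=0.25):
    """Radial (ur) and vertical (uz) surface displacements, returned in mm."""
    dx = (x_km - src_x_km) * 1e3
    dy = (y_km - src_y_km) * 1e3
    r = np.hypot(dx, dy)                      # horizontal distance from the source, m
    d = depth_km * 1e3
    dv = dv_km3 * 1e9                         # volume change, m^3
    denom = np.pi * (r**2 + d**2) ** 1.5
    uz = (1.0 - nu) * dv * d / denom          # vertical displacement, m
    ur = (1.0 - nu) * dv * r / denom          # radial displacement, m
    return ur * 1e3, uz * 1e3                 # mm

# Hypothetical GPS sites around a source 1 km deep inflating by 0.0002 km^3.
sites_x = np.array([0.5, 2.0, 4.0, 6.0])
sites_y = np.zeros(4)
ur, uz = mogi_displacements(sites_x, sites_y, 0.0, 0.0, 1.0, 0.0002)
for x, h, v in zip(sites_x, ur, uz):
    print(f"site at {x:.1f} km: horizontal {h:.1f} mm, vertical {v:.1f} mm")
```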

  2. Evaluating the All-Ages Lead Model Using SiteSpecific Data: Approaches and Challenges

    EPA Science Inventory

    Lead (Pb) exposure continues to be a problem in the United States. Even after years of progress in reducing environmental levels, CDC estimates at least 500,000 U.S. children ages 1-5 years have blood Pb levels (BLL) above the CDC reference level of 5 µg/dL. Childhood Pb ex...

  3. Improved Aviation Readiness and Inventory Reductions Through Repair Cycle Time Reductions Using Modeling and Simulation.

    DTIC Science & Technology

    1997-12-01

    three NADEP’s within the continental United States and fleet repair sites in Italy and Japan. These facilities are located to support specific...number order. This same morning, P&E’s have a last opportunity to edit the induction file through the Planner and Estimator Cancellation Program ( PECAN

  4. Length and Rate of Individual Participation in Various Activities on Recreation Sites and Areas

    Treesearch

    Gary L. Tyre; George A. James

    1971-01-01

    While statistically reliable methods exist for estimating recreation use on large areas, they often prove prohibitively expensive. Inexpensive alternatives involving the length and rate of individual participation in specific activities are presented, together with data and statistics on the recreational use of three large areas on the National Forests. This...

  5. 76 FR 81943 - Agency Information Collection Request. 30-Day Public Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-29

    ... performance of the agency's functions; (2) the accuracy of the estimated burden; (3) ways to enhance the... full clearance to collect data using site-specific instruments with a 3-year expiration date. [[Page.../WAIT Training.. 533 1 42/60 373 Princeton Center for Leadership Traning 533 1 36/60 320 (PCLT)/TeenPEP...

  6. Epiphyte Water Retention and Evaporation in Native and Invaded Tropical Montane Cloud Forests in Hawaii

    NASA Astrophysics Data System (ADS)

    Mudd, R. G.; Giambelluca, T. W.

    2006-12-01

    Epiphyte water retention was quantified at two montane cloud forest sites in Hawai'i Volcanoes National Park, one native and the other invaded by an alien tree species. Water storage elements measured included all epiphytic mosses, leafy liverworts, and filmy ferns. Tree surface area was estimated and a careful survey was taken to account for all epiphytes in the sample area of the forest. Samples were collected and analyzed in the lab for epiphyte water retention capacity (WRC). Based on the volume of the different kinds of epiphytes and their corresponding WRC, forest stand water retention capacity for each survey area was estimated. Evaporation from the epiphyte mass was quantified using artificial reference samples attached to trees that were weighed at intervals to determine changes in stored water on days without significant rain or fog. In addition, a soil moisture sensor was wrapped in an epiphyte sample and left in the forest for a 6-day period. Epiphyte biomass at the Native Site and Invaded Site were estimated to be 2.89 t ha⁻¹ and 1.05 t ha⁻¹, respectively. Average WRC at the Native Site and Invaded Site were estimated at 1.45 mm and 0.68 mm, respectively. The difference is likely due to the presence of the invasive Psidium cattleianum at the Invaded Site because its smooth stem surface is unable to support a significant epiphytic layer. The evaporation rate from the epiphyte mass near water storage capacity (WSC) for the forest stand at the Native Site was measured at 0.38 mm day⁻¹, which represented 10.6% of the total ET from the forest canopy at the Native Site during the period. The above research has been recently complemented by a thorough investigation of the WSC of all water storage elements (tree stems, tree leaves, shrubs, grasses, litter, fallen branches, and epiphytes) at six forested sites at different elevations within, above, and below the zone of frequent cloud-cover. The goal of this study was to create an inexpensive and efficient methodology for acquiring estimates of above-ground water retention in different types of forests by means of minimally-destructive sampling and surveying. The results of this work serve as baseline data providing a range of possible values of the water retention of specific forest elements and the entire above-ground total where no values have been previously recorded.

  7. Comparison of organ dose and dose equivalent using ray tracing of male and female Voxel phantoms to space flight phantom torso data

    NASA Astrophysics Data System (ADS)

    Kim, Myung-Hee; Qualls, Garry; Slaba, Tony; Cucinotta, Francis A.

    Phantom torso experiments have been flown on the space shuttle and International Space Station (ISS), providing validation data for radiation transport models of organ dose and dose equivalents. We describe results for space radiation organ doses using a new human geometry model based on detailed voxel phantom models, denoted for males and females as MAX (Male Adult voXel) and FAX (Female Adult voXel), respectively. These models represent the human body with much higher fidelity than the CAMERA model currently used at NASA. The MAX and FAX models were implemented for the evaluation of directional body-shielding mass for over 1500 target points of major organs. Radiation exposure to solar particle events (SPE), trapped protons, and galactic cosmic rays (GCR) was assessed at each specific site in the human body by coupling space radiation transport models with the detailed body-shielding mass of the MAX/FAX phantoms. The development of multiple-point body-shielding distributions at each organ site made it possible to estimate the mean and variance of space dose equivalents at the specific organ. For the estimate of doses to the blood forming organs (BFOs), active marrow distributions in adults were accounted for at bone marrow sites over the human body. We compared the current model results to space shuttle and ISS phantom torso experiments and to calculations using the CAMERA model.

  8. Comparison of Organ Dose and Dose Equivalent Using Ray Tracing of Male and Female Voxel Phantoms to Space Flight Phantom Torso Data

    NASA Technical Reports Server (NTRS)

    Kim, Myung-Hee Y.; Qualls, Garry D.; Cucinotta, Francis A.

    2008-01-01

    Phantom torso experiments have been flown on the space shuttle and International Space Station (ISS), providing validation data for radiation transport models of organ dose and dose equivalents. We describe results for space radiation organ doses using a new human geometry model based on detailed voxel phantom models, denoted for males and females as MAX (Male Adult voXel) and FAX (Female Adult voXel), respectively. These models represent the human body with much higher fidelity than the CAMERA model currently used at NASA. The MAX and FAX models were implemented for the evaluation of directional body-shielding mass for over 1500 target points of major organs. Radiation exposure to solar particle events (SPE), trapped protons, and galactic cosmic rays (GCR) was assessed at each specific site in the human body by coupling space radiation transport models with the detailed body-shielding mass of the MAX/FAX phantoms. The development of multiple-point body-shielding distributions at each organ site made it possible to estimate the mean and variance of space dose equivalents at the specific organ. For the estimate of doses to the blood forming organs (BFOs), active marrow distributions in adults were accounted for at bone marrow sites over the human body. We compared the current model results to space shuttle and ISS phantom torso experiments and to calculations using the CAMERA model.

  9. Estimates of projection overlap and zones of convergence within frontal-striatal circuits.

    PubMed

    Averbeck, Bruno B; Lehman, Julia; Jacobson, Moriah; Haber, Suzanne N

    2014-07-16

    Frontal-striatal circuits underlie important decision processes, and pathology in these circuits is implicated in many psychiatric disorders. Studies have shown a topographic organization of cortical projections into the striatum. However, work has also shown that there is considerable overlap in the striatal projection zones of nearby cortical regions. To characterize this in detail, we quantified the complete striatal projection zones from 34 cortical injection locations in rhesus monkeys. We first fit a statistical model that showed that the projection zone of a cortical injection site could be predicted with considerable accuracy using a cross-validated model estimated on only the other injection sites. We then examined the fraction of overlap in striatal projection zones as a function of distance between cortical injection sites, and found that there was a highly regular relationship. Specifically, nearby cortical locations had as much as 80% overlap, and the amount of overlap decayed exponentially as a function of distance between the cortical injection sites. Finally, we found that some portions of the striatum received inputs from all the prefrontal regions, making these striatal zones candidates as information-processing hubs. Thus, the striatum is a site of convergence that allows integration of information spread across diverse prefrontal cortical areas.
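    The exponential fall-off of overlap with cortical distance is the kind of relationship that can be summarized with a two-parameter fit, sketched below; the distances, overlap fractions, and fitted parameters are synthetic stand-ins, not the tracing data.

```python
# Sketch of fitting the exponential decay of striatal projection-zone overlap with
# cortical inter-site distance: overlap(d) = a * exp(-d / lam).
# Distances, overlap fractions, and fitted parameters here are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def overlap_model(distance_mm, a, lam):
    return a * np.exp(-distance_mm / lam)

rng = np.random.default_rng(8)
distance = rng.uniform(0.5, 20, 100)                                    # mm between injection sites
overlap = overlap_model(distance, 0.8, 5.0) + rng.normal(0, 0.03, 100)  # shared fraction of projection zones

(a_fit, lam_fit), _ = curve_fit(overlap_model, distance, overlap, p0=[1.0, 3.0])
print(f"overlap at zero separation ~ {a_fit:.2f}; e-folding distance ~ {lam_fit:.1f} mm")
```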

  10. Probabilistic inference of ecohydrological parameters using observations from point to satellite scales

    NASA Astrophysics Data System (ADS)

    Bassiouni, Maoya; Higgins, Chad W.; Still, Christopher J.; Good, Stephen P.

    2018-06-01

    Vegetation controls on soil moisture dynamics are challenging to measure and translate into scale- and site-specific ecohydrological parameters for simple soil water balance models. We hypothesize that empirical probability density functions (pdfs) of relative soil moisture or soil saturation encode sufficient information to determine these ecohydrological parameters. Further, these parameters can be estimated through inverse modeling of the analytical equation for soil saturation pdfs, derived from the commonly used stochastic soil water balance framework. We developed a generalizable Bayesian inference framework to estimate ecohydrological parameters consistent with empirical soil saturation pdfs derived from observations at point, footprint, and satellite scales. We applied the inference method to four sites with different land cover and climate assuming (i) an annual rainfall pattern and (ii) a wet season rainfall pattern with a dry season of negligible rainfall. The Nash-Sutcliffe efficiencies of the analytical model's fit to soil observations ranged from 0.89 to 0.99. The coefficient of variation of posterior parameter distributions ranged from < 1 to 15 %. The parameter identifiability was not significantly improved in the more complex seasonal model; however, small differences in parameter values indicate that the annual model may have absorbed dry season dynamics. Parameter estimates were most constrained for scales and locations at which soil water dynamics are more sensitive to the fitted ecohydrological parameters of interest. In these cases, model inversion converged more slowly but ultimately provided better goodness of fit and lower uncertainty. Results were robust using as few as 100 daily observations randomly sampled from the full records, demonstrating the advantage of analyzing soil saturation pdfs instead of time series to estimate ecohydrological parameters from sparse records. Our work combines modeling and empirical approaches in ecohydrology and provides a simple framework to obtain scale- and site-specific analytical descriptions of soil moisture dynamics consistent with soil moisture observations.
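
    The following is a minimal sketch of the inference idea, assuming a two-parameter Beta distribution as a stand-in for the analytical soil-saturation pdf of the stochastic water-balance framework that the study actually inverts; the synthetic observations, flat priors, and grid ranges are all assumptions made for illustration.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        s_obs = rng.beta(2.5, 6.0, size=200)   # synthetic relative soil saturation in (0, 1)

        # Grid-based posterior for Beta(a, b) with flat priors, standing in for the
        # analytical soil-saturation pdf of the stochastic soil water balance framework.
        a_grid = np.linspace(0.5, 8.0, 120)
        b_grid = np.linspace(0.5, 12.0, 120)
        log_post = np.array([[stats.beta.logpdf(s_obs, a, b).sum() for b in b_grid]
                             for a in a_grid])
        post = np.exp(log_post - log_post.max())
        post /= post.sum()

        # Posterior means of the two ecohydrological stand-in parameters.
        a_mean = (post.sum(axis=1) * a_grid).sum()
        b_mean = (post.sum(axis=0) * b_grid).sum()
        print(f"posterior means: a ~ {a_mean:.2f}, b ~ {b_mean:.2f}")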

  11. Engineering applications of strong ground motion simulation

    NASA Astrophysics Data System (ADS)

    Somerville, Paul

    1993-02-01

    The formulation, validation and application of a procedure for simulating strong ground motions for use in engineering practice are described. The procedure uses empirical source functions (derived from near-source strong motion recordings of small earthquakes) to provide a realistic representation of effects such as source radiation that are difficult to model at high frequencies due to their partly stochastic behavior. Wave propagation effects are modeled using simplified Green's functions that are designed to transfer empirical source functions from their recording sites to those required for use in simulations at a specific site. The procedure has been validated against strong motion recordings of both crustal and subduction earthquakes. For the validation process we choose earthquakes whose source models (including a spatially heterogeneous distribution of the slip of the fault) are independently known and which have abundant strong motion recordings. A quantitative measurement of the fit between the simulated and recorded motion in this validation process is used to estimate the modeling and random uncertainty associated with the simulation procedure. This modeling and random uncertainty is one part of the overall uncertainty in estimates of ground motions of future earthquakes at a specific site derived using the simulation procedure. The other contribution to uncertainty is that due to uncertainty in the source parameters of future earthquakes that affect the site, which is estimated from a suite of simulations generated by varying the source parameters over their ranges of uncertainty. In this paper, we describe the validation of the simulation procedure for crustal earthquakes against strong motion recordings of the 1989 Loma Prieta, California, earthquake, and for subduction earthquakes against the 1985 Michoacán, Mexico, and Valparaiso, Chile, earthquakes. We then show examples of the application of the simulation procedure to the estimation of the design response spectra for crustal earthquakes at a power plant site in California and for subduction earthquakes in the Seattle-Portland region. We also demonstrate the use of simulation methods for modeling the attenuation of strong ground motion, and show evidence of the effect of critical reflections from the lower crust in causing the observed flattening of the attenuation of strong ground motion from the 1988 Saguenay, Quebec, and 1989 Loma Prieta earthquakes.

  12. Replication and Comparison of the Newly Proposed ADOS-2, Module 4 Algorithm in ASD Without ID: A Multi-site Study.

    PubMed

    Pugliese, Cara E; Kenworthy, Lauren; Bal, Vanessa Hus; Wallace, Gregory L; Yerys, Benjamin E; Maddox, Brenna B; White, Susan W; Popal, Haroon; Armour, Anna Chelsea; Miller, Judith; Herrington, John D; Schultz, Robert T; Martin, Alex; Anthony, Laura Gutermuth

    2015-12-01

    Recent updates have been proposed to the Autism Diagnostic Observation Schedule-2 Module 4 diagnostic algorithm. This new algorithm, however, has not yet been validated in an independent sample without intellectual disability (ID). This multi-site study compared the original and revised algorithms in individuals with ASD without ID. The revised algorithm demonstrated increased sensitivity, but lower specificity in the overall sample. Estimates were highest for females, individuals with a verbal IQ below 85 or above 115, and ages 16 and older. Best practice diagnostic procedures should include the Module 4 in conjunction with other assessment tools. Balancing needs for sensitivity and specificity depending on the purpose of assessment (e.g., clinical vs. research) and demographic characteristics mentioned above will enhance its utility.

  13. Use of state cancer surveillance data to estimate the cancer burden in disaster-affected areas--Hurricane Katrina, 2005.

    PubMed

    Joseph, Djenaba A; Wingo, Phyllis A; King, Jessica B; Pollack, Lori A; Richardson, Lisa C; Wu, Xiaocheng; Chen, Vivien; Austin, Harland D; Rogers, Deirdre; Cook, Janice

    2007-01-01

    The objective of this study was to estimate the burden of cancer in counties affected by Hurricane Katrina using population-based cancer registry data, and to discuss issues related to cancer patients who have been displaced by disasters. The cancer burden was assessed in 75 counties in Louisiana, Alabama, and Mississippi that were designated by the Federal Emergency Management Agency as eligible for individual and public assistance. Data from the National Program of Cancer Registries were used to determine three-year average annual age-adjusted incidence rates and case counts during the diagnosis years 2000-2002 for Louisiana and Alabama. Expected rates and counts for the most-affected counties in Mississippi were estimated by direct, age-specific calculation using the 2000-2002 county level populations and the site-, sex-, race-, and age-specific cancer incidence rates for Louisiana. An estimated 23,549 persons with a new diagnosis of cancer in the past year resided in the disaster-affected counties. Fifty-eight percent of the cases were cancers of the lung/bronchus, colon/rectum, female breast, and prostate. Eleven of the top 15 cancer sites by sex and black/white race in disaster counties had >50% of cases diagnosed at the regional or distant stage. Sizable populations of persons with a recent cancer diagnosis were potentially displaced by Hurricane Katrina. Cancer patients required special attention to access records in order to confirm diagnosis and staging, minimize disruption in treatment, and ensure coverage of care. Cancer registry data can be used to provide disaster planners and clinicians with estimates of the number of cancer patients, many of whom may be undergoing active treatment.
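
    A minimal sketch of the direct age-specific calculation mentioned above: expected case counts for an affected county are obtained by applying reference age-specific incidence rates to the county's age-specific population. The rates and population figures below are illustrative placeholders, not registry data.

        # Age-specific incidence rates (cases per 100,000 per year) from a reference
        # registry, and county population by the same age groups -- both hypothetical.
        rates_per_100k = {"0-19": 15.0, "20-49": 80.0, "50-64": 550.0, "65+": 1900.0}
        county_pop = {"0-19": 60000, "20-49": 95000, "50-64": 40000, "65+": 25000}

        # Direct method: expected cases = sum over age groups of rate * population.
        expected_cases = sum(rates_per_100k[g] / 1e5 * county_pop[g] for g in rates_per_100k)
        print(f"expected annual cases: {expected_cases:.0f}")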

  14. Statistical approach for the retrieval of phytoplankton community structures from in situ fluorescence measurements.

    PubMed

    Wang, Shengqiang; Xiao, Cong; Ishizaka, Joji; Qiu, Zhongfeng; Sun, Deyong; Xu, Qian; Zhu, Yuanli; Huan, Yu; Watanabe, Yuji

    2016-10-17

    Knowledge of phytoplankton community structures is important to the understanding of various marine biogeochemical processes and ecosystems. Fluorescence excitation spectra (F(λ)) provide great potential for studying phytoplankton communities because their spectral variability depends on changes in the pigment compositions related to distinct phytoplankton groups. Commercial spectrofluorometers have been developed to analyze phytoplankton communities by measuring the field F(λ), but estimations using the default methods are not always accurate because of their strong dependence on norm spectra, which are obtained by culturing pure algae of a given group and are assumed to be constant. In this study, we proposed a novel approach for estimating the chlorophyll a (Chl a) fractions of brown algae, cyanobacteria, green algae and cryptophytes based on a data set collected in the East China Sea (ECS) and the Tsushima Strait (TS), with concurrent measurements of in vivo F(λ) and phytoplankton communities derived from pigment analysis. The new approach blends various statistical features by computing the band ratios and continuum-removed spectra of F(λ) without requiring a priori knowledge of the norm spectra. The model evaluations indicate that our approach yields good estimations of the Chl a fractions, with root-mean-square errors of 0.117, 0.078, 0.072 and 0.060 for brown algae, cyanobacteria, green algae and cryptophytes, respectively. The statistical analysis shows that the models are generally robust to uncertainty in F(λ). We recommend using a site-specific model for more accurate estimations. To develop a site-specific model in the ECS and TS, approximately 26 samples are sufficient for using our approach, but this conclusion needs to be validated in additional regions. Overall, our approach provides a useful technical basis for estimating phytoplankton communities from measurements of F(λ).
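
    A minimal sketch, under stated assumptions, of the kind of spectral features the approach computes: a simplified continuum removal (a straight-line baseline rather than a convex hull) and one band ratio from a hypothetical F(λ) spectrum; the wavelengths and values are invented for illustration.

        import numpy as np

        # Hypothetical fluorescence excitation spectrum sampled at a few wavelengths (nm).
        wavelengths = np.array([400, 420, 440, 460, 480, 500, 520, 540], dtype=float)
        f_lambda = np.array([0.30, 0.48, 0.75, 0.70, 0.55, 0.52, 0.40, 0.22])

        # Simplified continuum removal: divide by the straight line joining the spectrum
        # endpoints (full implementations usually use the convex hull of the spectrum).
        continuum = np.interp(wavelengths, [wavelengths[0], wavelengths[-1]],
                              [f_lambda[0], f_lambda[-1]])
        f_cr = f_lambda / continuum

        # Example band-ratio feature (the wavelength pair is chosen arbitrarily here).
        band_ratio = f_lambda[wavelengths == 440][0] / f_lambda[wavelengths == 500][0]
        print(f"band ratio 440/500: {band_ratio:.2f}")
        print("continuum-removed spectrum:", np.round(f_cr, 2))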

  15. Use of Visual Range Measurements to Predict PM2.5 Exposures in Southwest Asia and Afghanistan

    PubMed Central

    Masri, Shahir; Garshick, Eric; Hart, Jaime; Bouhamra, Walid; Koutrakis, Petros

    2016-01-01

    Military personnel deployed to Southwest Asia and Afghanistan were exposed to high levels of ambient particulate matter (PM) indicating the potential for exposure-related health effects. However, historical quantitative ambient PM exposure data for conducting epidemiological health studies are unavailable due to a lack of monitoring stations. Since visual range is proportional to particle light extinction (scattering and absorption), visibility can serve as a surrogate for PM2.5 concentrations where ground measurements are not available. We used data on visibility, relative humidity (RH), and PM2.5 ground measurements collected in Kuwait from years 2004 to 2005 to establish the relationship between PM2.5 and visibility. Model validation obtained by regressing trimester average PM2.5 predictions against PM2.5 measurements in Kuwait produced an r2 value of 0.84. Cross validation of urban and rural sites in Kuwait also revealed good model fit. We applied this relationship to location-specific visibility data at 104 regional sites between years 2000 and 2012 to estimate monthly average PM2.5 concentrations. Monthly averages at sites in Iraq, Afghanistan, United Arab Emirates, Kuwait, Djibouti, and Qatar ranged from 10 to 365 µg/m3 during this period, while site averages ranged from 22 to 80 µg/m3, indicating considerable spatial and temporal heterogeneity in ambient PM2.5 across these regions. These data support the use of historical visibility data to estimate location-specific PM2.5 concentrations for use in future epidemiological studies in the region. PMID:27700621
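
    The sketch below illustrates the general form of such a visibility-based estimate: light extinction is derived from visual range via the Koschmieder relation and regressed against co-located PM2.5 measurements. The data and fitted coefficients are hypothetical and are not the model from the study, which also accounts for relative humidity.

        import numpy as np

        # Hypothetical co-located observations: visual range (km) and PM2.5 (ug/m3).
        visual_range_km = np.array([2.0, 4.0, 6.0, 10.0, 15.0, 25.0])
        pm25_obs = np.array([210.0, 105.0, 72.0, 45.0, 30.0, 18.0])

        # Koschmieder relation: extinction (1/km) ~ 3.912 / visual range.
        b_ext = 3.912 / visual_range_km

        # Fit PM2.5 = slope * b_ext + intercept (RH adjustment omitted for brevity).
        slope, intercept = np.polyfit(b_ext, pm25_obs, 1)
        pm25_pred = slope * (3.912 / 8.0) + intercept   # predict for 8 km visibility
        print(f"slope={slope:.1f}, intercept={intercept:.1f}, PM2.5 @ 8 km ~ {pm25_pred:.0f} ug/m3")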

  16. Clustering and Bayesian hierarchical modeling for the definition of informative prior distributions in hydrogeology

    NASA Astrophysics Data System (ADS)

    Cucchi, K.; Kawa, N.; Hesse, F.; Rubin, Y.

    2017-12-01

    In order to reduce uncertainty in the prediction of subsurface flow and transport processes, practitioners should use all data available. However, classic inverse modeling frameworks typically only make use of information contained in in-situ field measurements to provide estimates of hydrogeological parameters. Such hydrogeological information about an aquifer is difficult and costly to acquire. In this data-scarce context, the transfer of ex-situ information coming from previously investigated sites can be critical for improving predictions by better constraining the estimation procedure. Bayesian inverse modeling provides a coherent framework to represent such ex-situ information by virtue of the prior distribution and combine them with in-situ information from the target site. In this study, we present an innovative data-driven approach for defining such informative priors for hydrogeological parameters at the target site. Our approach consists in two steps, both relying on statistical and machine learning methods. The first step is data selection; it consists in selecting sites similar to the target site. We use clustering methods for selecting similar sites based on observable hydrogeological features. The second step is data assimilation; it consists in assimilating data from the selected similar sites into the informative prior. We use a Bayesian hierarchical model to account for inter-site variability and to allow for the assimilation of multiple types of site-specific data. We present the application and validation of the presented methods on an established database of hydrogeological parameters. Data and methods are implemented in the form of an open-source R-package and therefore facilitate easy use by other practitioners.

  17. Estimating economic thresholds for site-specific weed control using manual weed counts and sensor technology: an example based on three winter wheat trials.

    PubMed

    Keller, Martina; Gutjahr, Christoph; Möhring, Jens; Weis, Martin; Sökefeld, Markus; Gerhards, Roland

    2014-02-01

    Precision experimental design uses the natural heterogeneity of agricultural fields and combines sensor technology with linear mixed models to estimate the effect of weeds, soil properties and herbicide on yield. These estimates can be used to derive economic thresholds. Three field trials are presented using the precision experimental design in winter wheat. Weed densities were determined by manual sampling and bi-spectral cameras, and yield and soil properties were mapped. Galium aparine, other broad-leaved weeds and Alopecurus myosuroides reduced yield by 17.5, 1.2 and 12.4 kg ha⁻¹ plant⁻¹ m² in one trial. The determined thresholds for site-specific weed control with independently applied herbicides were 4, 48 and 12 plants m⁻², respectively. Spring drought reduced yield effects of weeds considerably in one trial, since water became yield limiting. A negative herbicide effect on the crop was negligible, except in one trial, in which the herbicide mixture tended to reduce yield by 0.6 t ha⁻¹. Bi-spectral cameras for weed counting were of limited use and still need improvement. Nevertheless, large weed patches were correctly identified. The current paper presents a new approach to conducting field trials and deriving decision rules for weed control in farmers' fields. © 2013 Society of Chemical Industry.
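
    As a back-of-the-envelope illustration of how such economic thresholds arise, the sketch below computes a break-even weed density, i.e. the density at which the value of the avoided yield loss equals the treatment cost. The yield-loss coefficient is the value reported above for Alopecurus myosuroides, but the grain price, efficacy, and cost figures are assumptions, so the resulting number is illustrative only.

        # Hypothetical economics for one weed species in winter wheat.
        yield_loss_kg_per_plant_m2 = 12.4   # kg/ha yield loss per weed plant per m2 (from the abstract)
        grain_price_eur_per_kg = 0.18       # assumed grain price
        herbicide_efficacy = 0.90           # assumed fraction of weeds controlled
        treatment_cost_eur_per_ha = 35.0    # assumed herbicide plus application cost

        # Break-even density: treatment pays off when the avoided loss >= treatment cost.
        threshold_plants_m2 = treatment_cost_eur_per_ha / (
            yield_loss_kg_per_plant_m2 * grain_price_eur_per_kg * herbicide_efficacy)
        print(f"economic threshold ~ {threshold_plants_m2:.1f} plants/m2")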

  18. Input-variable sensitivity assessment for sediment transport relations

    NASA Astrophysics Data System (ADS)

    Fernández, Roberto; Garcia, Marcelo H.

    2017-09-01

    A methodology to assess input-variable sensitivity for sediment transport relations is presented. The Mean Value First Order Second Moment Method (MVFOSM) is applied to two bed load transport equations showing that it may be used to rank all input variables in terms of how their specific variance affects the overall variance of the sediment transport estimation. In sites where data are scarce or nonexistent, the results obtained may be used to (i) determine what variables would have the largest impact when estimating sediment loads in the absence of field observations and (ii) design field campaigns to specifically measure those variables for which a given transport equation is most sensitive; in sites where data are readily available, the results would allow quantifying the effect that the variance associated with each input variable has on the variance of the sediment transport estimates. An application of the method to two transport relations using data from a tropical mountain river in Costa Rica is implemented to exemplify the potential of the method in places where input data are limited. Results are compared against Monte Carlo simulations to assess the reliability of the method and validate its results. For both of the sediment transport relations used in the sensitivity analysis, accurate knowledge of sediment size was found to have more impact on sediment transport predictions than precise knowledge of other input variables such as channel slope and flow discharge.
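
    A minimal numerical sketch of the MVFOSM idea under stated assumptions: for a generic power-law transport relation with arbitrary coefficients (not one of the relations used in the paper), the variance of the estimate is approximated from first derivatives evaluated at the mean input values, and each input's contribution to that variance is used to rank the inputs.

        import numpy as np

        def qs(slope, discharge, d50):
            # Generic power-law stand-in for a bed load relation (coefficients arbitrary).
            return 0.05 * slope**1.5 * discharge**1.2 * d50**-0.8

        # Mean values and standard deviations of the inputs (hypothetical).
        means = {"slope": 0.02, "discharge": 15.0, "d50": 0.03}
        stdevs = {"slope": 0.004, "discharge": 3.0, "d50": 0.01}

        # MVFOSM: Var(qs) ~ sum_i (d qs / d x_i)^2 * Var(x_i), derivatives at the means.
        contributions = {}
        for name in means:
            x = dict(means)
            h = 1e-6 * means[name]
            x[name] = means[name] + h
            dqs_dx = (qs(**x) - qs(**means)) / h
            contributions[name] = (dqs_dx * stdevs[name]) ** 2

        total_var = sum(contributions.values())
        for name, c in sorted(contributions.items(), key=lambda kv: -kv[1]):
            print(f"{name:9s} contributes {100 * c / total_var:.0f}% of the estimate variance")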

  19. Breeding site selection by coho salmon (Oncorhynchus kisutch) in relation to large wood additions and factors that influence reproductive success

    USGS Publications Warehouse

    Clark, Steven M.; Dunham, Jason B.; McEnroe, Jeffery R.; Lightcap, Scott W.

    2014-01-01

    The fitness of female Pacific salmon (Oncorhynchus spp.) with respect to breeding behavior can be partitioned into at least four fitness components: survival to reproduction, competition for breeding sites, success of egg incubation, and suitability of the local environment near breeding sites for early rearing of juveniles. We evaluated the relative influences of habitat features linked to these fitness components with respect to selection of breeding sites by coho salmon (Oncorhynchus kisutch). We also evaluated associations between breeding site selection and additions of large wood, as the latter were introduced into the study system as a means of restoring habitat conditions to benefit coho salmon. We used a model selection approach to organize specific habitat features into groupings reflecting fitness components and influences of large wood. Results of this work suggest that female coho salmon likely select breeding sites based on a wide range of habitat features linked to all four hypothesized fitness components. More specifically, model parameter estimates indicated that breeding site selection was most strongly influenced by proximity to pool-tail crests and deeper water (mean and maximum depths). Linkages between large wood and breeding site selection were less clear. Overall, our findings suggest that breeding site selection by coho salmon is influenced by a suite of fitness components in addition to the egg incubation environment, which has been the emphasis of much work in the past.

  20. Quantification of amine functional groups and their influence on OM/OC in the IMPROVE network

    NASA Astrophysics Data System (ADS)

    Kamruzzaman, Mohammed; Takahama, Satoshi; Dillner, Ann M.

    2018-01-01

    Recently, we developed a method using FT-IR spectroscopy coupled with partial least squares (PLS) regression to measure the four most abundant organic functional groups, aliphatic C-H, alcohol OH, carboxylic acid OH and carbonyl C=O, in atmospheric particulate matter. These functional groups are summed to estimate organic matter (OM) while the carbon from the functional groups is summed to estimate organic carbon (OC). With this method, OM and OM/OC can be estimated for each sample rather than relying on one assumed value to convert OC measurements to OM. This study continues the development of the FT-IR and PLS method for estimating OM and OM/OC by including the amine functional group. Amines are ubiquitous in the atmosphere and come from motor vehicle exhaust, animal husbandry, biomass burning, and vegetation among other sources. In this study, calibration standards for amines are produced by aerosolizing individual amine compounds and collecting them on PTFE filters using an IMPROVE sampler, thereby mimicking the filter media and collection geometry of ambient samples. The moles of amine functional group on each standard and a narrow range of amine-specific wavenumbers in the FT-IR spectra (wavenumber range 1550-1500 cm⁻¹) are used to develop a PLS calibration model. The PLS model is validated using three methods: prediction of a set of laboratory standards not included in the model, a peak height analysis and a PLS model with a broader wavenumber range. The model is then applied to the ambient samples collected throughout 2013 from 16 IMPROVE sites in the USA. Urban sites have higher amine concentrations than most rural sites, but amine functional groups account for a lower fraction of OM at urban sites. Amine concentrations, contributions to OM and seasonality vary by site and sample. Amine has a small impact on the annual average OM/OC for urban sites, but for some rural sites, including amine in the OM/OC calculations increased OM/OC by 0.1 or more.
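
    A minimal sketch of the calibration step described above, using scikit-learn's PLSRegression on a narrow spectral window; the synthetic spectra stand in for FT-IR scans of laboratory amine standards, and the band shape, noise level, and number of components are assumptions.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(1)
        n_standards, n_wavenumbers = 30, 60      # spectra restricted to a narrow amine window

        # Known amine amounts on each laboratory standard (arbitrary units).
        moles_amine = rng.uniform(0.0, 2.0, n_standards)

        # Synthetic absorbance spectra: a broad amine band scaled by amount, plus noise.
        band = np.exp(-0.5 * ((np.arange(n_wavenumbers) - 30) / 8.0) ** 2)
        spectra = np.outer(moles_amine, band) + rng.normal(0, 0.02, (n_standards, n_wavenumbers))

        # Fit the PLS calibration and check how well it recovers the known amounts.
        pls = PLSRegression(n_components=3)
        pls.fit(spectra, moles_amine)
        predicted = pls.predict(spectra).ravel()
        rmse = np.sqrt(np.mean((predicted - moles_amine) ** 2))
        print(f"calibration RMSE: {rmse:.3f} (same units as moles_amine)")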

  1. Modelling probabilities of heavy precipitation by regional approaches

    NASA Astrophysics Data System (ADS)

    Gaal, L.; Kysely, J.

    2009-09-01

    Extreme precipitation events are associated with large negative consequences for human society, mainly as they may trigger floods and landslides. The recent series of flash floods in central Europe (affecting several isolated areas) on June 24-28, 2009, the worst one over several decades in the Czech Republic as to the number of persons killed and the extent of damage to buildings and infrastructure, is an example. Estimates of growth curves and design values (corresponding e.g. to 50-yr and 100-yr return periods) of precipitation amounts, together with their uncertainty, are important in hydrological modelling and other applications. The interest in high quantiles of precipitation distributions is also related to possible climate change effects, as climate model simulations tend to project increased severity of precipitation extremes in a warmer climate. The present study compares - in terms of Monte Carlo simulation experiments - several methods of modelling probabilities of precipitation extremes that make use of ‘regional approaches’: the estimation of distributions of extremes takes into account data in a ‘region’ (‘pooling group’), in which one may assume that the distributions at individual sites are identical apart from a site-specific scaling factor (the condition is referred to as ‘regional homogeneity’). In other words, all data in a region - often weighted in some way - are taken into account when estimating the probability distribution of extremes at a given site. The advantage is that sampling variations in the estimates of model parameters and high quantiles are to a large extent reduced compared to the single-site analysis. We focus on the ‘region-of-influence’ (ROI) method which is based on the identification of unique pooling groups (forming the database for the estimation) for each site under study. The similarity of sites is evaluated in terms of a set of site attributes related to the distributions of extremes. The issue of the size of the region is linked with a built-in test on regional homogeneity of data. Once a pooling group is delineated, weights based on a dissimilarity measure are assigned to individual sites involved in a pooling group, and all (weighted) data are employed in the estimation of model parameters and high quantiles at a given location. The ROI method is compared with the Hosking-Wallis (HW) regional frequency analysis, which is based on delineating fixed regions (instead of flexible pooling groups) and assigning unit weights to all sites in a region. The comparison of the performance of the individual regional models makes use of data on annual maxima of 1-day precipitation amounts at 209 stations covering the Czech Republic, with altitudes ranging from 150 to 1490 m a.s.l. We conclude that the ROI methodology is superior to the HW analysis, particularly for very high quantiles (100-yr return values). Another advantage of the ROI approach is that subjective decisions - unavoidable when fixed regions in the HW analysis are formed - may efficiently be suppressed, and almost all settings of the ROI method may be justified by results of the simulation experiments. The differences between (any) regional method and single-site analysis are very pronounced and suggest that the at-site estimation is highly unreliable. The ROI method is then applied to estimate high quantiles of precipitation amounts at individual sites. The estimates and their uncertainty are compared with those from a single-site analysis. We focus on the eastern part of the Czech Republic, i.e. an area with complex orography and a particularly pronounced role of Mediterranean cyclones in producing precipitation extremes. The design values are compared with precipitation amounts recorded during the recent heavy precipitation events, including the one associated with the flash flood on June 24, 2009. We also show that the ROI methodology may easily be transferred to the analysis of precipitation extremes in climate model outputs. It efficiently reduces (random) variations in the estimates of parameters of the extreme value distributions in individual gridboxes that result from large spatial variability of heavy precipitation, and represents a straightforward tool for ‘weighting’ data from neighbouring gridboxes within the estimation procedure. The study is supported by the Grant Agency of AS CR under project B300420801.
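
    A highly simplified sketch of the pooling idea behind such regional approaches (index-flood scaling of annual maxima with similarity-based weights), omitting the homogeneity test and distribution fitting of the full ROI procedure; all station data, site attributes, and weights below are synthetic.

        import numpy as np

        rng = np.random.default_rng(2)

        # Synthetic annual maximum 1-day precipitation (mm) at a target site and 5 donor sites.
        sites = {name: rng.gumbel(loc, 0.25 * loc, 40)
                 for name, loc in {"target": 40, "s1": 38, "s2": 55,
                                   "s3": 42, "s4": 70, "s5": 45}.items()}
        # Scaled site attributes used to measure similarity (e.g., altitude, mean annual precip).
        attrs = {"target": (0.3, 0.4), "s1": (0.35, 0.45), "s2": (0.7, 0.6),
                 "s3": (0.28, 0.38), "s4": (0.9, 0.9), "s5": (0.4, 0.5)}

        target_attr = np.array(attrs["target"])
        pooled, weights = [], []
        for name, data in sites.items():
            w = 1.0 / (1.0 + np.linalg.norm(np.array(attrs[name]) - target_attr))  # closer = heavier
            pooled.append(data / data.mean())      # index-flood scaling by the at-site mean
            weights.append(np.full(data.size, w))

        pooled = np.concatenate(pooled)
        weights = np.concatenate(weights)

        # Weighted empirical 99th percentile of the pooled growth curve, rescaled to the target site.
        order = np.argsort(pooled)
        cdf = np.cumsum(weights[order]) / weights.sum()
        growth_q99 = pooled[order][np.searchsorted(cdf, 0.99)]
        print(f"~100-yr 1-day precipitation at the target site: {growth_q99 * sites['target'].mean():.0f} mm")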

  2. Site-specific Seismic Hazard Assessment to Establish Elastic Design Properties for Oman Museum-Across Ages, Manah, Sultanate of Oman

    NASA Astrophysics Data System (ADS)

    El Hussain, I. W.

    2017-12-01

    The current study provides a site-specific deterministic seismic hazard assessment (DSHA) at the site selected for establishing the Oman Museum-Across Ages in the Manah area, as a part of a comprehensive geotechnical and seismological plan to design the facilities accordingly. The DSHA first defines the seismic sources that might influence the site and assesses the maximum possible earthquake magnitude for each of them. By assuming each of these maximum earthquakes to occur at a location placing them at the closest distances to the site, the ground motion is predicted utilizing empirical ground motion prediction equations. The local site effects are evaluated by determining the fundamental frequency of the soft soil using the HVSR technique and by estimating amplification spectra using the soil characteristics (mainly shear-wave velocity). Shear-wave velocity has been evaluated using the MASW technique. The maximum amplification value of 2.1 at spectral period 0.06 sec is observed at the ground surface, while the largest amplification value at the top of the conglomerate layer (at 5 m depth) is 1.6 for a spectral period of 0.04 sec. The maximum median 5% damped peak ground acceleration is found to be 0.263g at a spectral period of 0.1 sec. Keywords: DSHA; Site Effects; HVSR; MASW; PGA; Spectral Period

  3. The Nubian Complex of Dhofar, Oman: an African middle stone age industry in Southern Arabia.

    PubMed

    Rose, Jeffrey I; Usik, Vitaly I; Marks, Anthony E; Hilbert, Yamandu H; Galletti, Christopher S; Parton, Ash; Geiling, Jean Marie; Cerný, Viktor; Morley, Mike W; Roberts, Richard G

    2011-01-01

    Despite the numerous studies proposing early human population expansions from Africa into Arabia during the Late Pleistocene, no archaeological sites have yet been discovered in Arabia that resemble a specific African industry, which would indicate demographic exchange across the Red Sea. Here we report the discovery of a buried site and more than 100 new surface scatters in the Dhofar region of Oman belonging to a regionally-specific African lithic industry--the late Nubian Complex--known previously only from the northeast and Horn of Africa during Marine Isotope Stage 5, ∼128,000 to 74,000 years ago. Two optically stimulated luminescence age estimates from the open-air site of Aybut Al Auwal in Oman place the Arabian Nubian Complex at ∼106,000 years ago, providing archaeological evidence for the presence of a distinct northeast African Middle Stone Age technocomplex in southern Arabia sometime in the first half of Marine Isotope Stage 5.

  4. Wind/Tornado Guidelines Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ng, D.S.; Holman, G.S.

    1991-10-01

    This report documents the strategy employed to develop recommended wind/tornado hazard design guidelines for a New Production Reactor (NPR) currently planned for either the Idaho National Engineering Laboratory (INEL) or the Savannah River (SR) site. The Wind/Tornado Working Group (WTWG), comprising six nationally recognized experts in structural engineering, wind engineering, and meteorology, formulated an independent set of guidelines based on site-specific wind/tornado hazard curves and state-of-the-art tornado missile technology. The basic philosophy was to select realistic wind and missile load specifications, and to meet performance goals by applying conservative structural response evaluation and acceptance criteria. Simplified probabilistic risk analyses (PRAs) for wind speeds and missile impact were performed to estimate annual damage risk frequencies for both the INEL and SR sites. These PRAs indicate that the guidelines will lead to facilities that meet the US Department of Energy (DOE) design requirements and that the Nuclear Regulatory Commission guidelines adopted by the DOE for design are adequate to meet the NPR safety goals.

  5. Modeling Patient-Specific Magnetic Drug Targeting Within the Intracranial Vasculature

    PubMed Central

    Patronis, Alexander; Richardson, Robin A.; Schmieschek, Sebastian; Wylie, Brian J. N.; Nash, Rupert W.; Coveney, Peter V.

    2018-01-01

    Drug targeting promises to substantially enhance future therapies, for example through the focussing of chemotherapeutic drugs at the site of a tumor, thus reducing the exposure of healthy tissue to unwanted damage. Promising work on the steering of medication in the human body employs magnetic fields acting on nanoparticles made of paramagnetic materials. We develop a computational tool to aid in the optimization of the physical parameters of these particles and the magnetic configuration, estimating the fraction of particles reaching a given target site in a large patient-specific vascular system for different physiological states (heart rate, cardiac output, etc.). We demonstrate the excellent computational performance of our model by its application to the simulation of paramagnetic-nanoparticle-laden flows in a circle of Willis geometry obtained from an MRI scan. The results suggest a strong dependence of the particle density at the target site on the strength of the magnetic forcing and the velocity of the background fluid flow. PMID:29725303

  6. The Nubian Complex of Dhofar, Oman: An African Middle Stone Age Industry in Southern Arabia

    PubMed Central

    Rose, Jeffrey I.; Usik, Vitaly I.; Marks, Anthony E.; Hilbert, Yamandu H.; Galletti, Christopher S.; Parton, Ash; Geiling, Jean Marie; Černý, Viktor; Morley, Mike W.; Roberts, Richard G.

    2011-01-01

    Despite the numerous studies proposing early human population expansions from Africa into Arabia during the Late Pleistocene, no archaeological sites have yet been discovered in Arabia that resemble a specific African industry, which would indicate demographic exchange across the Red Sea. Here we report the discovery of a buried site and more than 100 new surface scatters in the Dhofar region of Oman belonging to a regionally-specific African lithic industry - the late Nubian Complex - known previously only from the northeast and Horn of Africa during Marine Isotope Stage 5, ∼128,000 to 74,000 years ago. Two optically stimulated luminescence age estimates from the open-air site of Aybut Al Auwal in Oman place the Arabian Nubian Complex at ∼106,000 years ago, providing archaeological evidence for the presence of a distinct northeast African Middle Stone Age technocomplex in southern Arabia sometime in the first half of Marine Isotope Stage 5. PMID:22140561

  7. Modeling Patient-Specific Magnetic Drug Targeting Within the Intracranial Vasculature.

    PubMed

    Patronis, Alexander; Richardson, Robin A; Schmieschek, Sebastian; Wylie, Brian J N; Nash, Rupert W; Coveney, Peter V

    2018-01-01

    Drug targeting promises to substantially enhance future therapies, for example through the focussing of chemotherapeutic drugs at the site of a tumor, thus reducing the exposure of healthy tissue to unwanted damage. Promising work on the steering of medication in the human body employs magnetic fields acting on nanoparticles made of paramagnetic materials. We develop a computational tool to aid in the optimization of the physical parameters of these particles and the magnetic configuration, estimating the fraction of particles reaching a given target site in a large patient-specific vascular system for different physiological states (heart rate, cardiac output, etc.). We demonstrate the excellent computational performance of our model by its application to the simulation of paramagnetic-nanoparticle-laden flows in a circle of Willis geometry obtained from an MRI scan. The results suggest a strong dependence of the particle density at the target site on the strength of the magnetic forcing and the velocity of the background fluid flow.

  8. Monitoring suspended sediment and associated trace element and nutrient fluxes in large river basins in the USA

    USGS Publications Warehouse

    Horowitz, A.J.

    2004-01-01

    In 1996, the US Geological Survey converted its occurrence and distribution-based National Stream Quality Accounting Network (NASQAN) to a national, flux-based water-quality monitoring programme. The main objective of the revised programme is to characterize large USA river basins by measuring the fluxes of selected constituents at critical nodes in various basins. Each NASQAN site was instrumented to determine daily discharge, but water and suspended sediment samples are collected no more than 12-15 times per year. Due to the limited sampling programme, annual suspended sediment fluxes were determined from site-specific sediment rating (transport) curves. As no significant relationship could be found between either discharge or suspended sediment concentration (SSC) and suspended sediment chemistry, trace element and nutrient fluxes are estimated using site-specific mean or median chemical levels determined from a number of samples collected over a period of years, and under a variety of flow conditions.
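
    A minimal sketch of a rating-curve flux estimate of the kind described above: a log-log regression of SSC on discharge is fitted to sparse samples and applied to a daily discharge record. All values are synthetic, and the sketch omits bias correction and the mean/median chemistry step used for the trace element and nutrient fluxes.

        import numpy as np

        rng = np.random.default_rng(3)

        # Sparse paired samples: discharge (m3/s) and suspended-sediment concentration (mg/L).
        q_sampled = np.array([50, 120, 300, 800, 1500, 2500, 4000], dtype=float)
        ssc_sampled = 0.4 * q_sampled**1.1 * rng.lognormal(0, 0.2, q_sampled.size)

        # Fit the rating curve log(SSC) = log(a) + b*log(Q).
        b, log_a = np.polyfit(np.log(q_sampled), np.log(ssc_sampled), 1)

        # Apply to a synthetic daily discharge record to estimate the annual sediment flux.
        q_daily = rng.lognormal(mean=np.log(400), sigma=0.8, size=365)
        ssc_daily = np.exp(log_a) * q_daily**b                 # mg/L

        # Daily load (tonnes) = 0.0864 * Q (m3/s) * SSC (mg/L); sum over the year.
        flux_tonnes = np.sum(0.0864 * q_daily * ssc_daily)
        print(f"rating-curve exponent b = {b:.2f}, annual flux ~ {flux_tonnes:,.0f} t")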

  9. How should forensic anthropologists correct national weather service temperature data for use in estimating the postmortem interval?

    PubMed

    Dabbs, Gretchen R

    2015-05-01

    This study examines the correlation between site-specific and retrospectively collected temperature data from the National Weather Service (NWS) over an extended time period. Using iButtonLink thermochrons (model DS1921G), hourly temperature readings were collected at 15 sites (1 validation; 14 experimental) from December 2010 to January 2012. Comparison between the site-specific temperature data and data retrieved from an official reporter of NWS temperature data shows statistically significant differences between the two in 71.4% (10/14) of cases. The difference ranged between 0.04 and 2.81°C. Examination of both regression and simple adjustment of the mean difference over extended periods (1, 2, 3, 4, 5, 6, & 9 months) suggests that on the timescale typical in forensic anthropology cases neither method of correction is consistent or reliable and that forensic anthropologists would be better suited using uncorrected NWS temperature data when the postmortem interval is extended. © 2015 American Academy of Forensic Sciences.
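
    A short sketch of the two correction strategies compared in the study, applied to hypothetical paired temperatures from a site and the nearest NWS reporter: a simple mean-offset adjustment versus a linear regression of site temperature on NWS temperature; the data are simulated, not the thermochron records.

        import numpy as np

        rng = np.random.default_rng(4)

        # Hypothetical paired daily mean temperatures (deg C): NWS station vs. study site.
        t_nws = rng.normal(18, 7, 180)
        t_site = 0.95 * t_nws + 1.2 + rng.normal(0, 0.8, 180)

        # Strategy 1: adjust NWS data by the mean site-minus-NWS difference.
        offset = np.mean(t_site - t_nws)
        t_adj_offset = t_nws + offset

        # Strategy 2: regress site temperature on NWS temperature and predict.
        slope, intercept = np.polyfit(t_nws, t_site, 1)
        t_adj_regress = slope * t_nws + intercept

        for name, t_adj in [("mean offset", t_adj_offset), ("regression", t_adj_regress)]:
            rmse = np.sqrt(np.mean((t_adj - t_site) ** 2))
            print(f"{name:12s} RMSE vs. site temperature: {rmse:.2f} deg C")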

  10. A new approach for continuous estimation of baseflow using discrete water quality data: Method description and comparison with baseflow estimates from two existing approaches

    USGS Publications Warehouse

    Miller, Matthew P.; Johnson, Henry M.; Susong, David D.; Wolock, David M.

    2015-01-01

    Understanding how watershed characteristics and climate influence the baseflow component of stream discharge is a topic of interest to both the scientific and water management communities. Therefore, the development of baseflow estimation methods is a topic of active research. Previous studies have demonstrated that graphical hydrograph separation (GHS) and conductivity mass balance (CMB) methods can be applied to stream discharge data to estimate daily baseflow. While CMB is generally considered to be a more objective approach than GHS, its application across broad spatial scales is limited by a lack of high frequency specific conductance (SC) data. We propose a new method that uses discrete SC data, which are widely available, to estimate baseflow at a daily time step using the CMB method. The proposed approach involves the development of regression models that relate discrete SC concentrations to stream discharge and time. Regression-derived CMB baseflow estimates were more similar to baseflow estimates obtained using a CMB approach with measured high frequency SC data than were the GHS baseflow estimates at twelve snowmelt dominated streams and rivers. There was a near perfect fit between the regression-derived and measured CMB baseflow estimates at sites where the regression models were able to accurately predict daily SC concentrations. We propose that the regression-derived approach could be applied to estimate baseflow at large numbers of sites, thereby enabling future investigations of watershed and climatic characteristics that influence the baseflow component of stream discharge across large spatial scales.
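
    The sketch below illustrates the conductivity-mass-balance step that the regression-derived SC estimates feed into. The regression is reduced to a simple SC-versus-discharge fit, the end-member concentrations are picked crudely from the predicted series, and all values are synthetic, so this is a schematic of the approach rather than the authors' model.

        import numpy as np

        rng = np.random.default_rng(5)

        # Synthetic daily discharge (m3/s) and sparse discrete SC samples (uS/cm).
        q_daily = rng.lognormal(np.log(20), 0.6, 365)
        idx = np.sort(rng.choice(365, 24, replace=False))          # roughly biweekly sampling
        sc_sampled = 600 - 180 * np.log(q_daily[idx]) / np.log(q_daily.max()) + rng.normal(0, 15, 24)

        # Regression of SC on log-discharge (time terms omitted for brevity), then
        # prediction of daily SC from the full discharge record.
        b1, b0 = np.polyfit(np.log(q_daily[idx]), sc_sampled, 1)
        sc_daily = b0 + b1 * np.log(q_daily)

        # Conductivity mass balance with crude end members taken from the predicted SC series.
        sc_baseflow, sc_runoff = sc_daily.max(), sc_daily.min()
        baseflow = q_daily * (sc_daily - sc_runoff) / (sc_baseflow - sc_runoff)
        print(f"baseflow index ~ {baseflow.sum() / q_daily.sum():.2f}")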

  11. Application of dimensionless sediment rating curves to predict suspended-sediment concentrations, bedload, and annual sediment loads for rivers in Minnesota

    USGS Publications Warehouse

    Ellison, Christopher A.; Groten, Joel T.; Lorenz, David L.; Koller, Karl S.

    2016-10-27

    Consistent and reliable sediment data are needed by Federal, State, and local government agencies responsible for monitoring water quality, planning river restoration, quantifying sediment budgets, and evaluating the effectiveness of sediment reduction strategies. Heightened concerns about excessive sediment in rivers and the challenge to reduce costs and eliminate data gaps have guided Federal and State interests in pursuing alternative methods for measuring suspended and bedload sediment. Simple and dependable data collection and estimation techniques are needed to generate hydraulic and water-quality information for areas where data are unavailable or difficult to collect.
    The U.S. Geological Survey, in cooperation with the Minnesota Pollution Control Agency and the Minnesota Department of Natural Resources, completed a study to evaluate the use of dimensionless sediment rating curves (DSRCs) to accurately predict suspended-sediment concentrations (SSCs), bedload, and annual sediment loads for selected rivers and streams in Minnesota based on data collected during 2007 through 2013. This study included the application of DSRC models developed for a small group of streams located in the San Juan River Basin near Pagosa Springs in southwestern Colorado to rivers in Minnesota. Regionally based DSRC models for Minnesota also were developed and compared to DSRC models from Pagosa Springs, Colorado, to evaluate which model provided more accurate predictions of SSCs and bedload in Minnesota.
    Multiple measures of goodness-of-fit were developed to assess the effectiveness of DSRC models in predicting SSC and bedload for rivers in Minnesota. More than 600 dimensionless ratio values of SSC, bedload, and streamflow were evaluated and delineated according to Pfankuch stream stability categories of “good/fair” and “poor” to develop four Minnesota-based DSRC models. The basis for Pagosa Springs and Minnesota DSRC model effectiveness was founded on measures of goodness-of-fit that included proximity of the model(s) fitted line to the 95-percent confidence intervals of the site-specific model, Nash-Sutcliffe Efficiency values, model biases, and deviation of annual sediment loads from each model to the annual sediment loads calculated from measured data.
    Composite plots comparing Pagosa Springs DSRCs, Minnesota DSRCs, site-specific regression models, and measured data indicated that regionally developed DSRCs (Minnesota DSRC models) more closely approximated measured data for nearly every site. Pagosa Springs DSRC models had markedly larger exponents (slopes) when compared to the Minnesota DSRC models and the site-specific regression models, and over-represent SSC and bedload at streamflows exceeding bankfull. The Nash-Sutcliffe Efficiency values for the Minnesota DSRC model for suspended-sediment concentrations closely matched Nash-Sutcliffe Efficiency values of the site-specific regression models for 12 out of 16 sites. Nash-Sutcliffe Efficiency values associated with Minnesota DSRCs were greater than those associated with Pagosa Springs DSRCs for every site except the Whitewater River near Beaver, Minnesota site. Pagosa Springs DSRC models were less accurate than the mean of the measured data at predicting SSC values for one-half of the sites for good/fair stability sites and one-half of the sites for poor stability sites. Relative model biases were calculated and determined to be substantial (greater than 5 percent) for Pagosa Springs and Minnesota models, with Minnesota models having a lower mean model bias. For predicted annual suspended-sediment loads (SSL), the Minnesota DSRC models for good/fair and poor stream stability sites more closely approximated the annual SSLs calculated from the measured data as compared to the Pagosa Springs DSRC model.
    Results of data analyses indicate that DSRC models developed using data collected in Minnesota were more effective at compensating for differences in individual stream characteristics across a variety of basin sizes and flow regimes than DSRC models developed using data collected for Pagosa Springs, Colorado. Minnesota DSRC models retained a substantial portion of the unique sediment signatures for most rivers, although deviations were observed for streams with limited sediment supply and for rivers in southeastern Minnesota, which had markedly larger regression exponents. Compared to Pagosa Springs DSRC models, Minnesota DSRC models had regression slopes that more closely matched the slopes of site-specific regression models, had greater Nash-Sutcliffe Efficiency values, had lower model biases, and approximated measured annual sediment loads more closely. The results presented in this report indicate that regionally based DSRCs can be used to estimate reasonably accurate values of SSC and bedload.
    Practitioners are cautioned that DSRC reliability is dependent on representative measures of bankfull streamflow, SSC, and bedload. It is, therefore, important that samples of SSC and bedload, which will be used for estimating SSC and bedload at the bankfull streamflow, are collected over a range of conditions that includes the ascending and descending limbs of the event hydrograph. The use of DSRC models may have substantial limitations for certain conditions. For example, DSRC models should not be used to predict SSC and sediment loads for extreme streamflows, such as those that exceed twice the bankfull streamflow value because this constitutes conditions beyond the realm of current (2016) empirical modeling capability. Also, if relations between SSC and streamflow and between bedload and streamflow are not statistically significant, DSRC models should not be used to predict SSC or bedload, as this could result in large errors. For streams that do not violate these conditions, DSRC estimates of SSC and bedload can be used for stream restoration planning and design, and for estimating annual sediment loads for streams where little or no sediment data are available.

  12. National Stormwater Calculator: Low Impact Development ...

    EPA Pesticide Factsheets

    Stormwater discharges continue to cause impairment of our Nation’s waterbodies. EPA has developed the National Stormwater Calculator (SWC) to help support local, state, and national stormwater management objectives to reduce runoff through infiltration and retention using green infrastructure practices as low impact development (LID) controls. The primary focus of the SWC is to inform site developers on how well they can meet a desired stormwater retention target with and without the use of green infrastructure. It can also be used by landscapers and homeowners. Platform. The SWC is a Windows-based desktop program that requires an internet connection. A mobile web application version that will be compatible with all operating systems is currently being developed and is expected to be released in the fall of 2017. Cost Module. An LID cost estimation module within the application allows planners and managers to evaluate LID controls based on comparison of regional and national project planning level cost estimates (capital and average annual maintenance) and predicted LID control performance. Cost estimation is accomplished based on user-identified size configuration of the LID control infrastructure and other key project and site-specific variables. This includes whether the project is being applied as part of new development or redevelopment and if there are existing site constraints. Climate Scenarios. The SWC allows users to consider how runoff may vary based

  13. Computational Predictions Provide Insights into the Biology of TAL Effector Target Sites

    PubMed Central

    Grau, Jan; Wolf, Annett; Reschke, Maik; Bonas, Ulla; Posch, Stefan; Boch, Jens

    2013-01-01

    Transcription activator-like (TAL) effectors are injected into host plant cells by Xanthomonas bacteria to function as transcriptional activators for the benefit of the pathogen. The DNA binding domain of TAL effectors is composed of conserved amino acid repeat structures containing repeat-variable diresidues (RVDs) that determine DNA binding specificity. In this paper, we present TALgetter, a new approach for predicting TAL effector target sites based on a statistical model. In contrast to previous approaches, the parameters of TALgetter are estimated from training data computationally. We demonstrate that TALgetter successfully predicts known TAL effector target sites and often yields a greater number of predictions that are consistent with up-regulation in gene expression microarrays than an existing approach, Target Finder of the TALE-NT suite. We study the binding specificities estimated by TALgetter and find that different RVDs differ in their importance for transcriptional activation. In subsequent studies, the predictions of TALgetter indicate a previously unreported positional preference of TAL effector target sites relative to the transcription start site. In addition, several TAL effectors are predicted to bind to the TATA-box, which might constitute one general mode of transcriptional activation by TAL effectors. Scrutinizing the predicted target sites of TALgetter, we propose several novel TAL effector virulence targets in rice and sweet orange. TAL-mediated induction of the candidates is supported by gene expression microarrays. Validity of these targets is also supported by functional analogy to known TAL effector targets, by an over-representation of TAL effector targets with similar function, or by a biological function related to pathogen infection. Hence, these predicted TAL effector virulence targets are promising candidates for studying the virulence function of TAL effectors. TALgetter is implemented as part of the open-source Java library Jstacs, and is freely available as a web-application and a command line program. PMID:23526890

  14. Analysis of dam-passage survival of yearling and subyearling Chinook salmon and juvenile steelhead at The Dalles Dam, Oregon, 2010

    USGS Publications Warehouse

    Beeman, John W.; Kock, Tobias J.; Perry, Russell W.; Smith, Steven G.

    2011-01-01

    We performed a series of analyses of mark-recapture data from a study at The Dalles Dam during 2010 to determine if model assumptions for estimation of juvenile salmonid dam-passage survival were met and if results were similar to those using the University of Washington's newly developed ATLAS software. The study was conducted by the Pacific Northwest National Laboratory and used acoustic telemetry of yearling Chinook salmon, juvenile steelhead, and subyearling Chinook salmon released at three sites according to the new virtual/paired-release statistical model. This was the first field application of the new model, and the results were used to measure compliance with minimum survival standards set forth in a recent Biological Opinion. Our analyses indicated that most model assumptions were met. The fish groups mixed in time and space, and no euthanized tagged fish were detected. Estimates of reach-specific survival were similar in fish tagged by each of the six taggers during the spring, but not in the summer. Tagger effort was unevenly allocated temporally during tagging of subyearling Chinook salmon in the summer; the difference in survival estimates among taggers was more likely a result of a temporal trend in actual survival than of tagger effects. The reach-specific survival of fish released at the three sites was not equal in the reaches they had in common for juvenile steelhead or subyearling Chinook salmon, violating one model assumption. This violation did not affect the estimate of dam-passage survival, because data from the common reaches were not used in its calculation. Contrary to expectation, precision of survival estimates was not improved by using the most parsimonious model of recapture probabilities instead of the fully parameterized model. Adjusting survival estimates for differences in fish travel times and tag lives increased the dam-passage survival estimate for yearling Chinook salmon by 0.0001 and for juvenile steelhead by 0.0004. The estimate was unchanged for subyearling Chinook salmon. The tag-life-adjusted dam-passage survival estimates from our analyses were 0.9641 (standard error [SE] 0.0096) for yearling Chinook salmon, 0.9534 (SE 0.0097) for juvenile steelhead, and 0.9404 (SE 0.0091) for subyearling Chinook salmon. These were within 0.0001 of estimates made by the University of Washington using the ATLAS software. Contrary to the intent of the virtual/paired-release model to adjust estimates of the paired-release model downward in order to account for differential handling mortality rates between release groups, random variation in survival estimates may result in an upward adjustment of survival relative to estimates from the paired-release model. Further investigation of this property of the virtual/paired-release model likely would prove beneficial. In addition, we suggest that differential selective pressures near release sites of the two control groups could bias estimates of dam-passage survival from the virtual/paired-release model.

  15. Techniques for estimating 7-day, 10-year low-flow characteristics for ungaged sites on streams in Mississippi

    USGS Publications Warehouse

    Telis, Pamela A.

    1992-01-01

    Mississippi State water laws require that the 7-day, 10-year low-flow characteristic (7Q10) of streams be used as a criterion for issuing waste-discharge permits to dischargers to streams and for limiting withdrawals of water from streams. This report presents techniques for estimating the 7Q10 for ungaged sites on streams in Mississippi based on the availability of base-flow discharge measurements at the site, location of nearby gaged sites on the same stream, and drainage area of the ungaged site. These techniques may be used to estimate the 7Q10 at sites on natural, unregulated or partially regulated, and non-tidal streams. Low-flow characteristics for streams in the Mississippi River alluvial plain were not estimated because the annual low-flow data exhibit decreasing trends with time. Also presented are estimates of the 7Q10 for 493 gaged sites on Mississippi streams. Techniques for estimating the 7Q10 have been developed for ungaged sites with base-flow discharge measurements, for ungaged sites on gaged streams, and for ungaged sites on ungaged streams. For an ungaged site with one or more base-flow discharge measurements, base-flow discharge data at the ungaged site are related to concurrent discharge data at a nearby gaged site. For ungaged sites on gaged streams, several methods of transferring the 7Q10 from a gaged site to an ungaged site were developed; the resulting 7Q10 values are based on drainage area prorations for the sites. For ungaged sites on ungaged streams, the 7Q10 is estimated from a map developed for this study that shows the unit 7Q10 (7Q10 per square mile of drainage area) for ungaged basins in the State. The mapped values were estimated from the unit 7Q10 determined for nearby gaged basins, adjusted on the basis of the geology and topography of the ungaged basins.
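
    A one-line illustration of the drainage-area proration idea used for ungaged sites on gaged streams; the areas, the gaged-site 7Q10, and the proration exponent below are hypothetical, and the report's actual transfer methods differ in detail.

        # Hypothetical gaged-site 7Q10 and drainage areas (square miles).
        q7q10_gaged = 12.0          # ft3/s at the gaged site
        area_gaged = 250.0
        area_ungaged = 180.0

        # Simple drainage-area proration of the low-flow statistic (exponent of 1 assumed).
        q7q10_ungaged = q7q10_gaged * (area_ungaged / area_gaged) ** 1.0
        print(f"estimated 7Q10 at the ungaged site: {q7q10_ungaged:.1f} ft3/s")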

  16. Estimation of local extreme suspended sediment concentrations in California Rivers.

    PubMed

    Tramblay, Yves; Saint-Hilaire, André; Ouarda, Taha B M J; Moatar, Florentina; Hecht, Barry

    2010-09-01

    The total amount of suspended sediment load carried by a stream during a year is usually transported during one or several extreme events related to high river flow and intense rainfall, leading to very high suspended sediment concentrations (SSCs). In this study, quantiles of SSC derived from annual maxima and the 99th percentile of SSC series are estimated locally in a site-specific approach using regional information. Analyses of relationships between physiographic characteristics and the selected indicators were undertaken using the localities within a 5-km radius draining to each sampling site. Multiple regression models were built to test the regional estimation for these indicators of suspended sediment transport. To assess the accuracy of the estimates, a Jack-Knife re-sampling procedure was used to compute the relative bias and root mean square error of the models. Results show that for the 19 stations considered in California, the extreme SSCs can be estimated with 40-60% uncertainty, depending on the presence of flow regulation in the basin. This modelling approach is likely to prove functional in other Mediterranean climate watersheds since it appears useful in California, where geologic, climatic, physiographic, and land-use conditions are highly variable. Copyright 2010 Elsevier B.V. All rights reserved.

  17. Estimation Of Organ Doses From Solar Particle Events For Future Space Exploration Missions

    NASA Technical Reports Server (NTRS)

    Kim, Myung-Hee; Cucinotta, Francis A.

    2006-01-01

    Radiation protection practices define the effective dose as a weighted sum of equivalent dose over major organ sites for radiation cancer risks. Since a crew personnel dosimeter does not make direct measurement of the effective dose, it has been estimated with skin-dose measurements and radiation transport codes for ISS and STS missions. If sufficient protection is not provided near solar maximum, the radiation risk can be significant due to exposure to sporadic solar particle events (SPEs) as well as to the continuous galactic cosmic radiation (GCR) on future exploratory-class and long-duration missions. For accurate estimates of overall fatal cancer risks from SPEs, the specific doses at various blood forming organs (BFOs) were considered, because proton fluences and doses vary considerably across marrow regions. Previous estimates of BFO doses from SPEs have used an average body-shielding distribution for the bone marrow based on the computerized anatomical man model (CAM). With the development of an 82-point body-shielding distribution at BFOs, the mean and variance of SPE doses in the major active marrow regions (head and neck, chest, abdomen, pelvis and thighs) will be presented. Consideration of the detailed distribution of bone marrow sites is one of many requirements to improve the estimation of effective doses for radiation cancer risks.

  18. Theoretical model of ruminant adipose tissue metabolism in relation to the whole animal.

    PubMed

    Baldwin, R L; Yang, Y T; Crist, K; Grichting, G

    1976-09-01

    Based on theoretical considerations and experimental data, estimates of contributions of adipose tissue to energy expenditures in a lactating cow and a growing steer were developed. The estimates indicate that adipose energy expenditures range between 5 and 10% of total animal heat production, depending on productive function and diet. These energy expenditures can be partitioned among maintenance (3%), lipogenesis (1-5%) and lipolysis and triglyceride resynthesis (less than 1.0%). Specific sites at which acute and chronic effectors can act to produce changes in adipose function, and changes in adipose function produced by diet and during pregnancy, lactation and aging were discussed with emphasis being placed on the need for additional, definitive studies of specific interactions among pregnancy, diet, age, lactation and growth in producing ruminants.

  19. White paper updating conclusions of 1998 ILAW performance assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MANN, F.M.

    The purpose of this document is to provide a comparison of the estimated immobilized low-activity waste (LAW) disposal system performance against established performance objectives using the best estimates for parameters and models to describe the system. The principal advances in knowledge since the last performance assessment (known as the 1998 ILAW PA [Mann 1998a]) have been in site-specific information and data on the waste form performance for BNFL, Inc. relevant glass formulations. The white paper also estimates the maximum release rates for technetium and other key radionuclides and chemicals from the waste form. Finally, this white paper provides limited information on the impact of changes in waste form loading.

  20. Estimating the Economic Potential of Offshore Wind in the United States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beiter, P.; Musial, W.; Smith, A.

    The potential for cost reduction and market deployment for offshore wind varies considerably within the United States. This analysis estimates the future economic viability of offshore wind at more than 7,000 sites under a variety of electric sector and cost reduction scenarios. Identifying the economic potential of offshore wind at a high geospatial resolution can capture the significant variation in local offshore resource quality, costs, and revenue potential. In estimating economic potential, this article applies a method initially developed in Brown et al. (2015) to offshore wind and estimates the sensitivity of results under a variety of most likely electric sector scenarios. For the purposes of this analysis, a theoretical framework is developed introducing a novel offshore resource classification system that is analogous to established resource classifications from the oil and gas sector. Analyzing economic potential within this framework can help establish a refined understanding across industries of the technology and site-specific risks and opportunities associated with future offshore wind development. The results of this analysis are intended to inform the development of the U.S. Department of Energy's offshore wind strategy.

  1. A simulation of Earthquake Loss Estimation in Southeastern Korea using HAZUS and the local site classification Map

    NASA Astrophysics Data System (ADS)

    Kang, S.; Kim, K.

    2013-12-01

    Regionally varying seismic hazards can be estimated using an earthquake loss estimation system (e.g. HAZUS-MH). Estimates for actual earthquakes help federal and local authorities develop rapid, effective recovery measures. Estimates for scenario earthquakes help in designing a comprehensive earthquake hazard mitigation plan. Local site characteristics influence the ground motion. Although direct measurements are desirable to construct a site-amplification map, such data are expensive and time-consuming to collect. Thus we derived a site classification map of the southern Korean Peninsula using geologic and geomorphologic data, which are readily available for the entire southern Korean Peninsula. Class B sites (mainly rock) are predominant in the area, although localized areas of softer soils are found along major rivers and seashores. The site classification map was compared with independent site classification studies to confirm that it effectively represents the local behavior of site amplification during an earthquake. We then estimated the losses due to a magnitude 6.7 scenario earthquake in Gyeongju, southeastern Korea, with and without the site classification map. Significant differences in loss estimates were observed. The loss estimated without the site classification map decreased uniformly with increasing epicentral distance, while the loss estimated with the site classification map varied from region to region, reflecting both epicentral distance and local site effects. The major cause of the large loss expected in Gyeongju is the short epicentral distance. Pohang Nam-Gu is located farther from the earthquake source region. Nonetheless, the loss estimates in the remote city are as large as those in Gyeongju and are attributed to the site effect of soft soil found widely in the area.

  2. Development, Application, and Implementation of RAMCAP to Characterize Nuclear Power Plant Risk From Terrorism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaertner, John P.; Teagarden, Grant A.

    2006-07-01

    In response to increased interest in risk-informed decision making regarding terrorism, EPRI and ERIN Engineering were selected by U.S. DHS and ASME to develop and demonstrate the RAMCAP method for nuclear power plant (NPP) risk assessment. The objective is to characterize plant-specific NPP risk for risk management opportunities and to provide consistent information for DHS decision making. This paper is an update of this project presented at the American Nuclear Society (ANS) International Topical Meeting on Probabilistic Safety Analysis (PSA05) in September 2005. The method uses a characterization of risk as a function of Consequence, Vulnerability, and Threat. For each site, worst-case scenarios are developed for each of sixteen benchmark threats. Nuclear RAMCAP hypothesizes that the intent of the perpetrator is to cause offsite radiological consequences. Specific targets are the reactor core, the spent fuel pool, and nuclear spent fuel in a dry storage facility (ISFSI). Results for each scenario are presented as conditional risk for financial loss, early fatalities and early injuries. Expected consequences for each scenario are quantified, while vulnerability is estimated on a relative likelihood scale. Insights for other societal risks are provided. Although threat frequencies are not provided, target attractiveness and threat deterrence are estimated. To assure efficiency, completeness, and consistency, results are documented using standard RAMCAP Evaluator software. Trial applications were successfully performed at four plant sites. Implementation at all other U.S. commercial sites is underway, supported by the Nuclear Sector Coordinating Council (NSCC). Insights from RAMCAP results at 23 U.S. plants completed to date have been compiled and presented to the NSCC. Results are site-specific. Physical security barriers, an armed security force, preparedness for design-basis threats, rugged design against natural hazards, multiple barriers between fuel and environment, accident mitigation capability, severe accident management procedures, and offsite emergency plans are risk-beneficial against all threat types. (authors)

  3. TH-A-19A-06: Site-Specific Comparison of Analytical and Monte Carlo Based Dose Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schuemann, J; Grassberger, C; Paganetti, H

    2014-06-15

    Purpose: To investigate the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict dose distributions and to verify currently used uncertainty margins in proton therapy. Methods: Dose distributions predicted by an analytical pencil-beam algorithm were compared with Monte Carlo simulations (MCS) using TOPAS. 79 complete patient treatment plans were investigated for 7 disease sites (liver, prostate, breast, medulloblastoma spine and whole brain, lung and head and neck). A total of 508 individual passively scattered treatment fields were analyzed for field-specific properties. Comparisons based on target coverage indices (EUD, D95, D90 and D50) were performed. Range differences were estimated for the distal position of the 90% dose level (R90) and the 50% dose level (R50). Two-dimensional distal dose surfaces were calculated and the root mean square differences (RMSD), average range difference (ARD) and average distal dose degradation (ADD), the distance between the distal position of the 80% and 20% dose levels (R80-R20), were analyzed. Results: We found target coverage indices calculated by TOPAS to generally be around 1–2% lower than predicted by the analytical algorithm. Differences in R90 predicted by TOPAS and the planning system can be larger than currently applied range margins in proton therapy for small regions distal to the target volume. We estimate new site-specific range margins (R90) for analytical dose calculations considering total range uncertainties and uncertainties from dose calculation alone based on the RMSD. Our results demonstrate that a reduction of currently used uncertainty margins is feasible for liver, prostate and whole brain fields even without introducing MC dose calculations. Conclusion: Analytical dose calculation algorithms predict dose distributions within clinical limits for more homogeneous patient sites (liver, prostate, whole brain). However, we recommend treatment plan verification using Monte Carlo simulations for patients with complex geometries.
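
    A minimal sketch of comparing two distal-range surfaces (for example, R90 evaluated at each lateral position of a field) by their root mean square difference and average range difference, as used above. The grid, values, and offsets are synthetic placeholders, not results from the study.

    import numpy as np

    def range_surface_stats(r90_analytical, r90_monte_carlo):
        """Return the root mean square difference (RMSD) and average range
        difference (ARD) between two 2-D distal-range surfaces."""
        diff = np.asarray(r90_monte_carlo) - np.asarray(r90_analytical)
        return np.sqrt(np.mean(diff ** 2)), np.mean(diff)

    # Hypothetical R90 surfaces (cm water-equivalent) on a 10 x 10 lateral grid.
    rng = np.random.default_rng(1)
    analytical = 15.0 + rng.normal(0.0, 0.1, (10, 10))
    monte_carlo = analytical - 0.3 + rng.normal(0.0, 0.2, (10, 10))
    print(range_surface_stats(analytical, monte_carlo))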

  4. Identification of site frequencies from building records

    USGS Publications Warehouse

    Celebi, M.

    2003-01-01

    A simple procedure to identify site frequencies using earthquake response records from roofs and basements of buildings is presented. For this purpose, data from five different buildings are analyzed using only spectral analysis techniques. Additional data such as free-field records in close proximity to the buildings and site characterization data are also used to estimate site frequencies and thereby to provide convincing evidence and confirmation of the site frequencies inferred from the building records. Furthermore, a simple code formula is used to calculate site frequencies and compare them with the identified site frequencies from records. Results show that the simple procedure is effective in identifying site frequencies and provides relatively reliable estimates of site frequencies when compared with other methods. Therefore, the simple procedure for estimating site frequencies using earthquake records can be useful in adding to the database of site frequencies. Such databases can be used to better estimate site frequencies of those sites with similar geological structures.
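
    A minimal sketch of the kind of spectral-ratio calculation such a procedure relies on: divide the amplitude spectrum of one record by that of a reference record and take the frequency of the peak ratio as the site-frequency estimate. The smoothing window, frequency band, and synthetic records are illustrative assumptions, not the paper's exact procedure.

    import numpy as np

    def peak_spectral_ratio_frequency(record, reference, dt):
        """Frequency (Hz) at which the smoothed ratio of the two amplitude spectra
        peaks, restricted to a plausible site-frequency band."""
        freqs = np.fft.rfftfreq(len(record), d=dt)
        ratio = np.abs(np.fft.rfft(record)) / (np.abs(np.fft.rfft(reference)) + 1e-12)
        smooth = np.convolve(ratio, np.ones(5) / 5.0, mode="same")  # light smoothing
        band = (freqs > 0.2) & (freqs < 20.0)
        return freqs[band][np.argmax(smooth[band])]

    # Synthetic example: white-noise free-field record plus a 2.5 Hz resonance in the basement record.
    dt = 0.01
    t = np.arange(0.0, 60.0, dt)
    rng = np.random.default_rng(2)
    free_field = rng.normal(0.0, 1.0, t.size)
    basement = free_field + 0.8 * np.sin(2.0 * np.pi * 2.5 * t)
    print(peak_spectral_ratio_frequency(basement, free_field, dt))  # close to 2.5 Hz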

  5. Urban scale air quality modelling using detailed traffic emissions estimates

    NASA Astrophysics Data System (ADS)

    Borrego, C.; Amorim, J. H.; Tchepel, O.; Dias, D.; Rafael, S.; Sá, E.; Pimentel, C.; Fontes, T.; Fernandes, P.; Pereira, S. R.; Bandeira, J. M.; Coelho, M. C.

    2016-04-01

    The atmospheric dispersion of NOx and PM10 was simulated with a second generation Gaussian model over a medium-size south-European city. Microscopic traffic models calibrated with GPS data were used to derive typical driving cycles for each road link, while instantaneous emissions were estimated applying a combined Vehicle Specific Power/Co-operative Programme for Monitoring and Evaluation of the Long-range Transmission of Air Pollutants in Europe (VSP/EMEP) methodology. Site-specific background concentrations were estimated using time series analysis and a low-pass filter applied to local observations. Air quality modelling results are compared against measurements at two locations for a 1 week period. 78% of the results are within a factor of two of the observations for 1-h average concentrations, increasing to 94% for daily averages. Correlation significantly improves when background is added, with an average of 0.89 for the 24 h record. The results highlight the potential of detailed traffic and instantaneous exhaust emissions estimates, together with filtered urban background, to provide accurate input data to Gaussian models applied at the urban scale.
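
    A minimal sketch of deriving a site-specific background concentration by low-pass filtering an observed time series, as described above. The centred moving average, 24-hour window, and synthetic hourly data are illustrative assumptions; the study's actual filter may differ.

    import numpy as np

    def lowpass_background(hourly_concentration, window_hours=24):
        """Crude low-pass filter: a centred moving average that keeps the slowly
        varying (background) component of an hourly concentration series."""
        kernel = np.ones(window_hours) / window_hours
        return np.convolve(hourly_concentration, kernel, mode="same")

    # Hypothetical hourly NOx observations: slow background plus traffic spikes and noise.
    rng = np.random.default_rng(3)
    hours = np.arange(24 * 7)
    observed = 20.0 + 10.0 * np.sin(2.0 * np.pi * hours / (24 * 7)) + rng.gamma(2.0, 4.0, hours.size)
    print(lowpass_background(observed)[:5].round(1))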

  6. Estimated contributions of primary and secondary organic aerosol from fossil fuel combustion during the CalNex and Cal-Mex campaigns

    NASA Astrophysics Data System (ADS)

    Guzman-Morales, J.; Frossard, A. A.; Corrigan, A. L.; Russell, L. M.; Liu, S.; Takahama, S.; Taylor, J. W.; Allan, J.; Coe, H.; Zhao, Y.; Goldstein, A. H.

    2014-05-01

    Observations during CalNex and Cal-Mex field campaigns at Bakersfield, Pasadena, Tijuana, and on board the R/V Atlantis show a substantial contribution of fossil fuel emissions to the ambient particle organic mass (OM). At least two fossil fuel combustion (FFC) factors with a range of contributions of oxidized organic functional groups were identified at each site and accounted for 60-88% of the total OM. Additional marine, vegetative detritus, and biomass burning or biogenic sources contribute up to 40% of the OM. Comparison of the FTIR spectra of four different unburned fossil fuels (gasoline, diesel, motor oil, and ship diesel) with PMF factors from ambient samples shows absorbance peaks from the fuels are retained in organic aerosols, with the spectra of all of the FFC factors containing at least three of the four characteristic alkane peaks observed in fuel standards at 2954, 2923, 2869 and 2855 cm-1. Based on this spectral similarity, we estimate the primary OM from FFC sources for each site to be 16-20%, with secondary FFC OM accounting for an additional 42-62%. Two other methods for estimating primary OM that use carbon monoxide (CO) and elemental carbon (EC) as tracers of primary organic mass were investigated, but both approaches were problematic for the CalNex and Cal-Mex urban sites because they were influenced by multiple emission sources that had site-specific and variable initial ratios to OM. For example, using the ΔPOM/ΔCO ratio of 0.0094 μg ppbv-1 proposed by other studies produces unrealistically high estimates of primary FFC OM of 55-100%.

  7. Inferring invasive species abundance using removal data from management actions.

    PubMed

    Davis, Amy J; Hooten, Mevin B; Miller, Ryan S; Farnsworth, Matthew L; Lewis, Jesse; Moxcey, Michael; Pepin, Kim M

    2016-10-01

    Evaluation of the progress of management programs for invasive species is crucial for demonstrating impacts to stakeholders and strategic planning of resource allocation. Estimates of abundance before and after management activities can serve as a useful metric of population management programs. However, many methods of estimating population size are too labor-intensive and costly to implement, posing restrictive levels of burden on operational programs. Removal models are a reliable method for estimating abundance before and after management using data from the removal activities exclusively, thus requiring no work in addition to management. We developed a Bayesian hierarchical model to estimate abundance from removal data accounting for varying levels of effort, and used simulations to assess the conditions under which reliable population estimates are obtained. We applied this model to estimate site-specific abundance of an invasive species, feral swine (Sus scrofa), using removal data from aerial gunning in 59 site/time-frame combinations (480-19,600 acres) throughout Oklahoma and Texas, USA. Simulations showed that abundance estimates were generally accurate when effective removal rates (removal rate accounting for total effort) were above 0.40. However, when abundances were small (<50), the effective removal rate needed to estimate abundance accurately was considerably higher (0.70). Based on our post-validation method, 78% of our site/time-frame estimates were accurate. To use this modeling framework, it is important to have multiple removals (more than three) within a time frame during which demographic changes are minimized (i.e., a closed population; ≤3 months for feral swine). Our results show that the probability of accurately estimating abundance from this model improves with increased sampling effort (8+ flight hours across the 3-month window is best) and increased removal rate. Based on the inverse relationship between inaccurate abundances and inaccurate removal rates, we suggest auxiliary information that could be collected and included in the model as covariates (e.g., habitat effects, differences between pilots) to improve accuracy of removal rates and hence abundance estimates. © 2016 by the Ecological Society of America.
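
    A minimal sketch of the removal-model idea: sequential removal counts from a closed population, with a capture probability that scales with effort, carry enough information to estimate abundance by maximum likelihood. This brute-force, non-Bayesian version with made-up removal counts and flight hours only illustrates the principle; it is not the hierarchical model used in the study.

    import math
    import numpy as np

    def removal_loglik(N, q, removals, effort):
        """Log-likelihood of sequential removals from a closed population of size N,
        with per-pass capture probability 1 - exp(-q * effort)."""
        remaining, ll = N, 0.0
        for c, e in zip(removals, effort):
            if c > remaining:
                return -np.inf
            p = 1.0 - math.exp(-q * e)
            ll += (math.lgamma(remaining + 1) - math.lgamma(c + 1) - math.lgamma(remaining - c + 1)
                   + c * math.log(p) + (remaining - c) * math.log(1.0 - p))
            remaining -= c
        return ll

    def removal_estimate(removals, effort):
        """Grid-search maximum-likelihood estimate of abundance N and catchability q."""
        total = sum(removals)
        best = max((removal_loglik(N, q, removals, effort), N, q)
                   for N in range(total, total * 5)
                   for q in np.linspace(0.01, 1.0, 100))
        return best[1], best[2]

    # Hypothetical aerial-gunning removals (animals) and effort (flight hours) per pass.
    print(removal_estimate([60, 35, 18, 9], [3.0, 3.0, 2.5, 2.5]))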

  8. Development of Preliminary Remediation Goals for Indoor Dust at the Colonie FUSRAP Site - 12273

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watters, David J.; Opdyke, Clifford P.; Moore, James T.

    2012-07-01

    The Colonie FUSRAP Site is located in the Town of Colonie, Albany County, New York. The U.S. Army Corps of Engineers is currently addressing environmental contamination associated with the Site under the Comprehensive Environmental Response, Compensation and Liability Act (CERCLA) process as part of the Formerly Utilized Sites Remedial Action Program (FUSRAP). Soil remediation activities have been substantially completed at the Colonie FUSRAP Site and its vicinity properties under the FUSRAP. A study unrelated to FUSRAP was recently performed by an independent party to establish the distribution of DU contamination in various media in the environs of the Site. As part of this study, dust samples were collected in residences and businesses in the immediate vicinity of the Site. These samples were collected in non-living areas such as basement window sills and garages. Many of these samples tested positive for DU. An assessment was performed to establish preliminary remediation goals (PRGs) for indoor dust in non-living areas of residential homes and businesses in the vicinity of the Site. The results of this assessment provide estimates of dose-based, carcinogenic risk-based, and noncarcinogenic-based PRGs derived from a hypothetical exposure scenario with reasonable levels of conservatism. Ultimately, the PRGs will be compared to results of dust sampling and analyses in residences and businesses in proximity to the Site to determine whether a response action is appropriate. This assessment estimates PRGs for DU-contaminated dust in non-living areas of residences in the vicinity of the Colonie FUSRAP site based on a reasonably conservative exposure scenario. Estimated PRGs based on residential receptors are considered to be conservatively representative of workers in nearby businesses based on the considerably longer exposure duration of residents relative to workers. This assessment provides reasonably conservative estimates of PRGs for DU-contaminated dust in non-living areas within residences in the vicinity of the Site. It should be noted that the PRGs include hypothetical exposures resulting from activities in both the living areas and non-living areas of a residence. The PRGs are derived and presented in terms of DU dust concentration in non-living areas to facilitate comparison to results of a planned Site Investigation that will characterize concentrations of DU in dust in non-living areas. It is important to recognize that the exposure assumptions used to derive these PRGs are based on average dust DU concentrations in non-living areas. It is inappropriate to compare these PRGs to the dust DU concentration in an isolated small area. The ongoing Site Investigation addresses this consideration and is designed to provide reasonable estimates of average dust DU concentrations in non-living areas of vicinity properties. In order to accomplish this, sampling is conducted in accordance with EPA Guidance for the Sampling and Analysis of Lead in Indoor Residential Dust for Use in the Integrated Exposure Uptake Biokinetic (IEUBK) Model (EPA 2008), which specifically addresses estimating average contaminant concentrations in dust. Four (4) large-area samples are collected from each VP in accordance with this guidance. It is anticipated that the results of the Site Investigation will be used in conjunction with the results of this assessment, and/or subsequent assessments, to establish whether or not a response action is appropriate. (authors)

  9. Proceedings: ISEA Bioavailability Symposium, Durham, North Carolina Use of InVitro Bioaccessibility/Relative Bioavailability Estimates for Metals in Regulatory Settings: What is Needed?

    EPA Science Inventory

    Oral ingestion of soil and dust is a key pathway for human exposures to metal and metalloid contaminants. It is widely recognized that the site-specific bioavailability of metals in soil and dust may be reduced relative to the metal bioavailability in media such as water and food...

  10. Stand level height-diameter mixed effects models: parameters fitted using loblolly pine but calibrated for sweetgum

    Treesearch

    Curtis L. Vanderschaaf

    2008-01-01

    Mixed effects models can be used to obtain site-specific parameters through the use of model calibration that often produces better predictions of independent data. This study examined whether parameters of a mixed effect height-diameter model estimated using loblolly pine plantation data but calibrated using sweetgum plantation data would produce reasonable...

  11. A reference skeletal dosimetry model for an adult male radionuclide therapy patient based on three-dimensional imaging and paired-image radiation transport

    NASA Astrophysics Data System (ADS)

    Shah, Amish P.

    The need for improved patient-specificity of skeletal dose estimates is widely recognized in radionuclide therapy. Current clinical models for marrow dose are based on skeletal mass estimates from a variety of sources and linear chord-length distributions that do not account for particle escape into cortical bone. To predict marrow dose, these clinical models use a scheme that requires separate calculations of cumulated activity and radionuclide S values. Selection of an appropriate S value is generally limited to one of only three sources, all of which use as input the trabecular microstructure of an individual measured 25 years ago, and the tissue mass derived from different individuals measured 75 years ago. Our study proposed a new modeling approach to marrow dosimetry, the Paired Image Radiation Transport (PIRT) model, which properly accounts for both the trabecular microstructure and the cortical macrostructure of each skeletal site in a reference male radionuclide patient. The PIRT model, as applied within EGSnrc, requires two sets of input geometry: (1) an infinite voxel array of segmented microimages of the spongiosa acquired via microCT; and (2) a segmented ex-vivo CT image of the bone site macrostructure defining both the spongiosa (marrow, endosteum, and trabeculae) and the cortical bone cortex. Our study also proposed revising reference skeletal dosimetry models for the adult male cancer patient. Skeletal site-specific radionuclide S values were obtained for a 66-year-old male reference patient. The derivation of total skeletal S values was unique in that the necessary skeletal mass and electron dosimetry calculations were formulated from the same source bone site over the entire skeleton. We conclude that paired-image radiation-transport techniques provide an adoptable method by which the intricate, anisotropic trabecular microstructure of the skeletal site and the physical size and shape of the bone can be handled together, for improved compilation of reference radionuclide S values. We also conclude that this comprehensive model for the adult male cancer patient should be implemented for use in patient-specific calculations for radionuclide dosimetry of the skeleton.

  12. Site specific probability of passive acoustic detection of humpback whale calls from single fixed hydrophones.

    PubMed

    Helble, Tyler A; D'Spain, Gerald L; Hildebrand, John A; Campbell, Gregory S; Campbell, Richard L; Heaney, Kevin D

    2013-09-01

    Passive acoustic monitoring of marine mammal calls is an increasingly important method for assessing population numbers, distribution, and behavior. A common mistake in the analysis of marine mammal acoustic data is formulating conclusions about these animals without first understanding how environmental properties such as bathymetry, sediment properties, water column sound speed, and ocean acoustic noise influence the detection and character of vocalizations in the acoustic data. The approach in this paper is to use Monte Carlo simulations with a full wave field acoustic propagation model to characterize the site specific probability of detection of six types of humpback whale calls at three passive acoustic monitoring locations off the California coast. Results show that the probability of detection can vary by factors greater than ten when comparing detections across locations, or comparing detections at the same location over time, due to environmental effects. Effects of uncertainties in the inputs to the propagation model are also quantified, and the model accuracy is assessed by comparing calling statistics amassed from 24,690 humpback units recorded in the month of October 2008. Under certain conditions, the probability of detection can be estimated with uncertainties sufficiently small to allow for accurate density estimates.
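
    A minimal sketch of a Monte Carlo detection-probability estimate: draw random call ranges and noise levels, apply a transmission-loss model, and count the fraction of calls whose received level exceeds the noise by a detector threshold. The spherical-spreading loss, noise statistics, source level, and threshold are illustrative placeholders for the full wave-field propagation model and measured noise used in the study.

    import numpy as np

    def detection_probability(source_level_db, threshold_db, max_range_m,
                              noise_mean_db=75.0, noise_sd_db=5.0, n_trials=100_000, seed=0):
        """Monte Carlo probability that a call within max_range_m is detected,
        using simple spherical spreading, TL = 20*log10(r), as the loss model."""
        rng = np.random.default_rng(seed)
        r = max_range_m * np.sqrt(rng.uniform(0.0, 1.0, n_trials))   # uniform in area
        noise = rng.normal(noise_mean_db, noise_sd_db, n_trials)
        received = source_level_db - 20.0 * np.log10(np.maximum(r, 1.0))
        return float(np.mean(received - noise >= threshold_db))

    # Hypothetical humpback call: 170 dB source level, 10 dB threshold, 20 km monitoring radius.
    print(detection_probability(170.0, 10.0, 20_000.0))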

  13. Social Media Use and Access to Digital Technology in US Young Adults in 2016.

    PubMed

    Villanti, Andrea C; Johnson, Amanda L; Ilakkuvan, Vinu; Jacobs, Megan A; Graham, Amanda L; Rath, Jessica M

    2017-06-07

    In 2015, 90% of US young adults with Internet access used social media. Digital and social media are highly prevalent modalities through which young adults explore identity formation, and by extension, learn and transmit norms about health and risk behaviors during this developmental life stage. The purpose of this study was to provide updated estimates of social media use from 2014 to 2016 and correlates of social media use and access to digital technology in data collected from a national sample of US young adults in 2016. Young adult participants aged 18-24 years in Wave 7 (October 2014, N=1259) and Wave 9 (February 2016, N=989) of the Truth Initiative Young Adult Cohort Study were asked about use frequency for 11 social media sites and access to digital devices, in addition to sociodemographic characteristics. Regular use was defined as using a given social media site at least weekly. Weighted analyses estimated the prevalence of use of each social media site, overlap between regular use of specific sites, and correlates of using a greater number of social media sites regularly. Bivariate analyses identified sociodemographic correlates of access to specific digital devices. In 2014, 89.42% (weighted n, 1126/1298) of young adults reported regular use of at least one social media site. This increased to 97.5% (weighted n, 965/989) of young adults in 2016. Among regular users of social media sites in 2016, the top five sites were Tumblr (85.5%), Vine (84.7%), Snapchat (81.7%), Instagram (80.7%), and LinkedIn (78.9%). Respondents reported regularly using an average of 7.6 social media sites, with 85% using 6 or more sites regularly. Overall, 87% of young adults reported access or use of a smartphone with Internet access, 74% a desktop or laptop computer with Internet access, 41% a tablet with Internet access, 29% a smart TV or video game console with Internet access, 11% a cell phone without Internet access, and 3% none of these. Access to all digital devices with Internet was lower in those reporting a lower subjective financial situation; there were also significant differences in access to specific digital devices with Internet by race, ethnicity, and education. The high mean number of social media sites used regularly and the substantial overlap in use of multiple social media sites reflect the rapidly changing social media environment. Mobile devices are a primary channel for social media, and our study highlights disparities in access to digital technologies with Internet access among US young adults by race/ethnicity, education, and subjective financial status. Findings from this study may guide the development and implementation of future health interventions for young adults delivered via the Internet or social media sites. ©Andrea C Villanti, Amanda L Johnson, Vinu Ilakkuvan, Megan A Jacobs, Amanda L Graham, Jessica M Rath. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 07.06.2017.

  14. Social Media Use and Access to Digital Technology in US Young Adults in 2016

    PubMed Central

    Johnson, Amanda L; Ilakkuvan, Vinu; Jacobs, Megan A; Graham, Amanda L; Rath, Jessica M

    2017-01-01

    Background In 2015, 90% of US young adults with Internet access used social media. Digital and social media are highly prevalent modalities through which young adults explore identity formation, and by extension, learn and transmit norms about health and risk behaviors during this developmental life stage. Objective The purpose of this study was to provide updated estimates of social media use from 2014 to 2016 and correlates of social media use and access to digital technology in data collected from a national sample of US young adults in 2016. Methods Young adult participants aged 18-24 years in Wave 7 (October 2014, N=1259) and Wave 9 (February 2016, N=989) of the Truth Initiative Young Adult Cohort Study were asked about use frequency for 11 social media sites and access to digital devices, in addition to sociodemographic characteristics. Regular use was defined as using a given social media site at least weekly. Weighted analyses estimated the prevalence of use of each social media site, overlap between regular use of specific sites, and correlates of using a greater number of social media sites regularly. Bivariate analyses identified sociodemographic correlates of access to specific digital devices. Results In 2014, 89.42% (weighted n, 1126/1298) of young adults reported regular use of at least one social media site. This increased to 97.5% (weighted n, 965/989) of young adults in 2016. Among regular users of social media sites in 2016, the top five sites were Tumblr (85.5%), Vine (84.7%), Snapchat (81.7%), Instagram (80.7%), and LinkedIn (78.9%). Respondents reported regularly using an average of 7.6 social media sites, with 85% using 6 or more sites regularly. Overall, 87% of young adults reported access or use of a smartphone with Internet access, 74% a desktop or laptop computer with Internet access, 41% a tablet with Internet access, 29% a smart TV or video game console with Internet access, 11% a cell phone without Internet access, and 3% none of these. Access to all digital devices with Internet was lower in those reporting a lower subjective financial situation; there were also significant differences in access to specific digital devices with Internet by race, ethnicity, and education. Conclusions The high mean number of social media sites used regularly and the substantial overlap in use of multiple social media sites reflect the rapidly changing social media environment. Mobile devices are a primary channel for social media, and our study highlights disparities in access to digital technologies with Internet access among US young adults by race/ethnicity, education, and subjective financial status. Findings from this study may guide the development and implementation of future health interventions for young adults delivered via the Internet or social media sites. PMID:28592394

  15. Regression modeling of gas-particle partitioning of atmospheric oxidized mercury from temperature data

    NASA Astrophysics Data System (ADS)

    Cheng, Irene; Zhang, Leiming; Blanchard, Pierrette

    2014-10-01

    Models describing the partitioning of atmospheric oxidized mercury (Hg(II)) between the gas and fine particulate phases were developed as a function of temperature. The models were derived from regression analysis of the gas-particle partitioning parameters, defined by a partition coefficient (Kp) and Hg(II) fraction in fine particles (fPBM) and temperature data from 10 North American sites. The generalized model, log(1/Kp) = 12.69-3485.30(1/T) (R2 = 0.55; root-mean-square error (RMSE) of 1.06 m3/µg for Kp), predicted the observed average Kp at 7 of the 10 sites. Discrepancies between the predicted and observed average Kp were found at the sites impacted by large Hg sources because the model had not accounted for the different mercury speciation profile and aerosol compositions of different sources. Site-specific equations were also generated from average Kp and fPBM corresponding to temperature interval data. The site-specific models were more accurate than the generalized Kp model at predicting the observations at 9 of the 10 sites as indicated by RMSE of 0.22-0.5 m3/µg for Kp and 0.03-0.08 for fPBM. Both models reproduced the observed monthly average values, except for a peak in Hg(II) partitioning observed during summer at two locations. Weak correlations between the site-specific model Kp or fPBM and observations suggest the role of aerosol composition, aerosol water content, and relative humidity factors on Hg(II) partitioning. The use of local temperature data to parameterize Hg(II) partitioning in the proposed models potentially improves the estimation of mercury cycling in chemical transport models and elsewhere.
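
    A minimal sketch of applying the generalized regression quoted above to predict Kp from temperature, and of converting it to a particle-phase fraction with the standard partitioning relation fPBM = Kp*PM / (1 + Kp*PM). The base-10 logarithm and the 10 ug/m3 fine-particle concentration are assumptions for illustration.

    import numpy as np

    def kp_from_temperature(temp_kelvin):
        """Generalized model from the abstract, log10(1/Kp) = 12.69 - 3485.30*(1/T),
        solved for Kp in m3/ug (base-10 logarithm assumed)."""
        return 10.0 ** -(12.69 - 3485.30 / np.asarray(temp_kelvin, dtype=float))

    def particle_fraction(kp, pm_ug_m3):
        """Fraction of Hg(II) in the particle phase: fPBM = Kp*PM / (1 + Kp*PM)."""
        return kp * pm_ug_m3 / (1.0 + kp * pm_ug_m3)

    temps_k = np.array([263.0, 278.0, 293.0])      # winter, spring, summer temperatures
    kp = kp_from_temperature(temps_k)
    print(kp.round(3), particle_fraction(kp, pm_ug_m3=10.0).round(2))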

  16. Ecosystem Model Performance at Wetlands: Results from the North American Carbon Program Site Synthesis

    NASA Astrophysics Data System (ADS)

    Sulman, B. N.; Desai, A. R.; Schroeder, N. M.; NACP Site Synthesis Participants

    2011-12-01

    Northern peatlands contain a significant fraction of the global carbon pool, and their responses to hydrological change are likely to be important factors in future carbon cycle-climate feedbacks. Global-scale carbon cycle modeling studies typically use general ecosystem models with coarse spatial resolution, often without peatland-specific processes. Here, seven ecosystem models were used to simulate CO2 fluxes at three field sites in Canada and the northern United States, including two nutrient-rich fens and one nutrient-poor, sphagnum-dominated bog, from 2002-2006. Flux residuals (simulated - observed) were positively correlated with measured water table for both gross ecosystem productivity (GEP) and ecosystem respiration (ER) at the two fen sites for all models, and were positively correlated with water table at the bog site for the majority of models. Modeled diurnal cycles at fen sites agreed well with eddy covariance measurements overall. Eddy covariance GEP and ER were higher during dry periods than during wet periods, while model results predicted either the opposite relationship or no significant difference. At the bog site, eddy covariance GEP had no significant dependence on water table, while models predicted higher GEP during wet periods. All models significantly over-estimated GEP at the bog site, and all but one over-estimated ER at the bog site. Carbon cycle models in peatland-rich regions could be improved by incorporating better models or measurements of hydrology and by inhibiting GEP and ER rates under saturated conditions. Bogs and fens likely require distinct treatments in ecosystem models due to differences in nutrients, peat properties, and plant communities.

  17. An Intraoperative Site-specific Bone Density Device: A Pilot Test Case.

    PubMed

    Arosio, Paolo; Moschioni, Monica; Banfi, Luca Maria; Di Stefano, Anilo Alessio

    2015-08-01

    This paper reports a case of all-on-four rehabilitation where bone density at implant sites was assessed both through preoperative computed tomographic (CT) scans and using a micromotor working as an intraoperative bone density measurement device. Implant-supported rehabilitation is a predictable treatment option for tooth replacement whose success depends on the clinician's experience, the implant characteristics and location and patient-related factors. Among the latter, bone density is a determinant for the achievement of primary implant stability and, eventually, for implant success. The ability to measure bone density at the placement site before implant insertion could be important in the clinical setting. A patient complaining of masticatory impairment was presented with a plan calling for extraction of all her compromised teeth, followed by implant rehabilitation. A week before surgery, she underwent CT examination, and the bone density on the CT scans was measured. When the implant osteotomies were created, the bone density was again measured with a micromotor endowed with an instantaneous torque-measuring system. The implant placement protocols were adapted for each implant, according to the intraoperative measurements, and the patient was rehabilitated following an all-on-four immediate loading protocol. The bone density device provided valuable information beyond that obtained from CT scans, allowing for site-specific, intraoperative assessment of bone density immediately before implant placement and an estimation of primary stability just after implant insertion. Measuring jaw-bone density could help clinicians to select implant-placement protocols and loading strategies based on site-specific bone features.

  18. Precipitation Depth-Duration-Frequency Analysis for the Nevada National Security Site and Surrounding Areas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Li; Miller, Julianne J.

    Accurate precipitation frequency data are important for Environmental Management Soils Activities on the Nevada National Security Site (NNSS). These data are important for environmental assessments performed for regulatory closure of Soils Corrective Action Unit (CAU) Sites, as well as engineering mitigation designs and post-closure monitoring strategies to assess and minimize potential contaminant migration from Soils CAU Sites. Although the National Oceanic and Atmospheric Administration (NOAA) Atlas 14 (Bonnin et al., 2011) provides precipitation frequency data for the NNSS area, the NNSS-specific observed precipitation data were not consistent with the NOAA Atlas 14 predicted data. This is primarily due to the NOAA Atlas 14 products being produced from analyses without including the approximately 30 NNSS precipitation gage records, several of which approach or exceed 50 years of record. Therefore, a study of precipitation frequency that incorporated the NNSS precipitation gage records into the NOAA Atlas 14 dataset was performed specifically for the NNSS to derive more accurate site-specific precipitation data products. Precipitation frequency information, such as the depth-duration-frequency (DDF) relationships, is required to generate synthetic standard design storm hydrographs and assess actual precipitation events. In this study, the actual long-term NNSS precipitation gage records, some of which are the longest gage records in southern and central Nevada, were analyzed to allow for more accurate precipitation DDF estimates to be developed for the NNSS. Gridded maps of precipitation frequency for the NNSS and surrounding areas were then produced.
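
    A minimal sketch of a single-station depth-duration-frequency calculation: fit a distribution to the annual-maximum depths for one duration and read off depths at the desired return periods. The Gumbel distribution, method-of-moments fit, and synthetic 24-hour maxima are assumptions for illustration; NOAA Atlas 14 and the NNSS study use more elaborate regionalized methods.

    import numpy as np

    def gumbel_ddf_depths(annual_maxima, return_periods=(2, 10, 25, 50, 100)):
        """Fit a Gumbel distribution to annual-maximum depths (method of moments)
        and return the depth for each return period T via the quantile
        x_T = mu - beta * ln(-ln(1 - 1/T))."""
        x = np.asarray(annual_maxima, dtype=float)
        beta = np.sqrt(6.0) * x.std(ddof=1) / np.pi
        mu = x.mean() - 0.5772 * beta
        t = np.asarray(return_periods, dtype=float)
        return mu - beta * np.log(-np.log(1.0 - 1.0 / t))

    # Hypothetical 50-year record of 24-hour annual maximum precipitation (inches).
    rng = np.random.default_rng(5)
    maxima = rng.gumbel(loc=0.9, scale=0.35, size=50)
    print(gumbel_ddf_depths(maxima).round(2))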

  19. Assessing the Efficacy of Restricting Access to Barbecue Charcoal for Suicide Prevention in Taiwan: A Community-Based Intervention Trial

    PubMed Central

    Chen, Ying-Yeh; Chen, Feng; Chang, Shu-Sen; Wong, Jacky; Yip, Paul S F

    2015-01-01

    Objective Charcoal-burning suicide has recently been spreading to many Asian countries. There have also been several cases involving this new method of suicide in Western countries. Restricting access to suicide means is one of the few suicide-prevention measures that have been supported by empirical evidence. The current study aims to assess the effectiveness of a community intervention program that restricts access to charcoal to prevent suicide in Taiwan. Methods and Findings A quasi-experimental design is used to compare method-specific (charcoal-burning suicide, non-charcoal-burning suicide) and overall suicide rates in New Taipei City (the intervention site, with a population of 3.9 million) with two other cities (Taipei City and Kaohsiung City, the control sites, each with 2.7 million residents) before (Jan 1st 2009- April 30th 2012) and after (May 1st 2012-Dec. 31st 2013) the initiation of a charcoal-restriction program on May 1st 2012. The program mandates the removal of barbecue charcoal from open shelves to locked storage in major retail stores in New Taipei City. No such restriction measure was implemented in the two control sites. Generalized linear regression models incorporating secular trends were used to compare the changes in method-specific and overall suicide rates before and after the initiation of the restriction measure. A simulation approach was used to estimate the number of lives saved by the intervention. Compared with the pre-intervention period, the estimated rate reduction of charcoal-burning suicide in New Taipei City was 37% (95% CI: 17%, 50%) after the intervention. Taking secular trends into account, the reduction was 30% (95% CI: 14%, 44%). No compensatory rise in non-charcoal-burning suicide was observed in New Taipei City. No significant reduction in charcoal-burning suicide was observed in the other two control sites. The simulation approach estimated that 91 (95%CI [55, 128]) lives in New Taipei City were saved during the 20 months of the intervention. Conclusion Our results demonstrate that the charcoal-restriction program reduced method-specific and overall suicides. This study provides strong empirical evidence that restricting the accessibility of common lethal methods of suicide can effectively reduce suicide rates. PMID:26305374

  20. Sediment Transport and Dust Flux in Disturbed and Undisturbed Dryland Ecosystems: From Site Specific Estimates to Trends Across Gradients of Woody Plant Cover

    NASA Astrophysics Data System (ADS)

    Field, J. P.; Breshears, D. D.; Whicker, J. J.; Zou, C. B.; Allen, C. D.

    2007-12-01

    Aeolian sediment transport and associated dust flux are important processes in dryland ecosystems where vegetation cover is inherently sparse relative to more mesic ecosystems. Aeolian processes in dryland ecosystems are strongly influenced by the spatial density of roughness elements, which is largely determined by woody plant height and spacing. Despite the global extent of dryland ecosystems, relatively few measurements of aeolian sediment transport have been made within these systems, and these few existing measurements have not been systematically evaluated with respect to gradients of woody plant cover. We report measured aeolian sediment transport in undisturbed and disturbed semiarid grasslands in southern Arizona. To place our estimates in a broader context, we compared our site-specific findings to other recently published measurements of aeolian sediment transport in disturbed and undisturbed dryland ecosystems. We propose a new conceptual framework for dryland aeolian sediment transport and dust flux as a function of woody plant cover that integrates our site-specific data with the broader literature base. Our findings suggest that for relatively undisturbed ecosystems, shrublands have inherently greater aeolian sediment transport and associated dust flux than grasslands, woodlands and forests due to wake interference flow associated with the height and spacing of woody roughness elements. Furthermore, the proposed framework suggests that for disturbed ecosystems, the upper bound for aeolian sediment transport increases as a function of decreasing woody plant cover. As a result, aeolian sediment transport spans a relatively small range in woodlands and forests, an intermediate range in shrublands, and the largest range in grasslands. Our framework is applicable both within locations and across broad gradients.

  1. Impact of a Saharan dust intrusion over southern Spain on DNI estimation with sky cameras

    NASA Astrophysics Data System (ADS)

    Alonso-Montesinos, J.; Barbero, J.; Polo, J.; López, G.; Ballestrín, J.; Batlles, F. J.

    2017-12-01

    To operate Central Tower Solar Power (CTSP) plants properly, solar collector systems must be able to work under varied weather conditions. Therefore, knowing the state of the atmosphere, and more specifically the level of incident radiation, provides essential operational information for adapting the electricity production system to atmospheric conditions. In this work, we analyze the impact of a strong Saharan dust intrusion on the direct normal irradiance (DNI) registered at two sites 35 km apart in southeastern Spain: the University of Almería (UAL) and the Plataforma Solar de Almería (PSA). DNI can be input into the European Solar Radiation Atlas (ESRA) clear sky procedure to derive Linke turbidity values, which proved to be extremely high at the UAL. By using the Simple Model of the Atmospheric Radiative Transfer of Sunshine (SMARTS) at the PSA site, AERONET data from PSA and assuming dust-dominated aerosol, DNI estimates agreed closely with the measured DNI values. At the UAL site, a SMARTS simulation of the DNI values also seemed to be compatible with dust-dominated aerosol.

  2. L-325 Sagebrush Habitat Mitigation Project: FY2009 Compensation Area Monitoring Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Durham, Robin E.; Sackschewsky, Michael R.

    2009-09-29

    Annual monitoring in support of the Fluor Daniel Hanford Company (Fluor) Mitigation Action Plan (MAP) for Project L-325, Electrical Utility Upgrades was conducted in June 2009. MAP guidelines defined mitigation success for this project as 3000 established sagebrush transplants on a 4.5 ha mitigation site after five monitoring years. Annual monitoring results suggest that an estimated 2130 sagebrush transplants currently grow on the site. Additional activities in support of this project included gathering sagebrush seed and securing a local grower to produce between 2250 and 2500 10-in3 tublings for outplanting during the early winter months of FY2010. If the minimum number of seedlings grown for this planting meets quality specifications, and planting conditions are favorable, conservative survival estimates indicate the habitat mitigation goals outlined in the MAP will be met in FY2014.

  3. Interactive computation of coverage regions for indoor wireless communication

    NASA Astrophysics Data System (ADS)

    Abbott, A. Lynn; Bhat, Nitin; Rappaport, Theodore S.

    1995-12-01

    This paper describes a system which assists in the strategic placement of RF base stations within buildings. Known as the site modeling tool (SMT), this system allows the user to display graphical floor plans and to select base station transceiver parameters, including location and orientation, interactively. The system then computes and highlights estimated coverage regions for each transceiver, enabling the user to assess the total coverage within the building. For single-floor operation, the user can choose between distance-dependent and partition-dependent path-loss models. Similar path-loss models are also available for the case of multiple floors. This paper describes the method used by the system to estimate coverage for both directional and omnidirectional antennas. The site modeling tool is intended to be simple to use by individuals who are not experts at wireless communication system design, and is expected to be very useful in the specification of indoor wireless systems.
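
    A minimal sketch of the distance-dependent path-loss estimate such a tool could use for single-floor coverage: received power follows a log-distance model, and a grid point counts as covered when the predicted power exceeds a receiver sensitivity. The path-loss exponent, 1-m reference loss, transmit power, and sensitivity are illustrative assumptions, not the SMT's actual model parameters.

    import numpy as np

    def coverage_map(tx_xy, tx_power_dbm, grid_x, grid_y,
                     n_exponent=3.5, pl_d0_db=40.0, sensitivity_dbm=-70.0):
        """Log-distance path loss, PL(d) = PL(d0) + 10*n*log10(d/d0) with d0 = 1 m;
        returns a boolean grid of points where received power >= sensitivity."""
        xx, yy = np.meshgrid(grid_x, grid_y)
        d = np.maximum(np.hypot(xx - tx_xy[0], yy - tx_xy[1]), 1.0)
        received = tx_power_dbm - (pl_d0_db + 10.0 * n_exponent * np.log10(d))
        return received >= sensitivity_dbm

    # Hypothetical 50 m x 30 m floor with a 10 dBm base station at (10 m, 15 m).
    x = np.arange(0.0, 50.0, 1.0)
    y = np.arange(0.0, 30.0, 1.0)
    covered = coverage_map((10.0, 15.0), 10.0, x, y)
    print(f"{covered.mean():.0%} of grid points covered")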

  4. Mars sample return: Site selection and sample acquisition study

    NASA Technical Reports Server (NTRS)

    Nickle, N. (Editor)

    1980-01-01

    Various vehicle and mission options were investigated for the continued exploration of Mars; the cost of a minimum sample return mission was estimated; options and concepts were synthesized into program possibilities; and recommendations for the next Mars mission were made to the Planetary Program office. Specific sites and all relevant spacecraft and ground-based data were studied in order to determine: (1) the adequacy of presently available data for identifying landing sites for a sample return mission that would assure the acquisition of material from the most important geologic provinces of Mars; (2) the degree of surface mobility required to assure sample acquisition for these sites; (3) techniques to be used in the selection and drilling of rock samples; and (4) the degree of mobility required at the two Viking sites to acquire these samples.

  5. External quality-assurance results for the national atmospheric deposition program/national trends network, 2000-2001

    USGS Publications Warehouse

    Wetherbee, Gregory A.; Latysh, Natalie E.; Gordon, John D.

    2004-01-01

    Five external quality-assurance programs were operated by the U.S. Geological Survey for the National Atmospheric Deposition Program/National Trends Network (NADP/NTN) from 2000 through 2001 (study period): the intersite-comparison program, the blind-audit program, the field-audit program, the interlaboratory-comparison program, and the collocated-sampler program. Each program is designed to measure specific components of the total error inherent in NADP/NTN wet-deposition measurements. The intersite-comparison program assesses the variability and bias of pH and specific-conductance determinations made by NADP/NTN site operators with respect to accuracy goals. The accuracy goals are statistically based using the median of all of the measurements obtained for each of four intersite-comparison studies. The percentage of site operators responding on time that met the pH accuracy goals ranged from 84.2 to 90.5 percent. In these same four intersite-comparison studies, 88.9 to 99.0 percent of the site operators met the accuracy goals for specific conductance. The blind-audit program evaluates the effects of routine sample handling, processing, and shipping on the chemistry of weekly precipitation samples. The blind-audit data for the study period indicate that sample handling introduced a small amount of sulfate contamination and slight changes to hydrogen-ion content of the precipitation samples. The magnitudes of the paired differences are not environmentally significant to NADP/NTN data users. The field-audit program (also known as the 'field-blank program') was designed to measure the effects of field exposure, handling, and processing on the chemistry of NADP/NTN precipitation samples. The results indicate potential low-level contamination of NADP/NTN samples with calcium, ammonium, chloride, and nitrate. Less sodium contamination was detected by the field-audit data than in previous years. Statistical analysis of the paired differences shows that contaminant ions are entrained into the solutions from the field-exposed buckets, but the positive bias that results from the minor amount of contamination appears to affect the analytical results by less than 6 percent. An interlaboratory-comparison program is used to estimate the analytical variability and bias of participating laboratories, especially the NADP Central Analytical Laboratory (CAL). Statistical comparison of the analytical results of participating laboratories implies that analytical data from the various monitoring networks can be compared. Bias was identified in the CAL data for ammonium, chloride, nitrate, sulfate, hydrogen-ion, and specific-conductance measurements, but the absolute value of the bias was less than analytical minimum reporting limits for all constituents except ammonium and sulfate. Control charts show brief time periods when the CAL's analytical precision for sodium, ammonium, and chloride was not within the control limits. Data for the analysis of ultrapure deionized-water samples indicated that the laboratories are maintaining good control of laboratory contamination. Estimated analytical precision among the laboratories indicates that the magnitudes of chemical-analysis errors are not environmentally significant to NADP data users. Overall precision of the precipitation-monitoring system used by the NADP/NTN was estimated by evaluation of samples from collocated monitoring sites at CA99, CO08, and NH02. 
Precision defined by the median of the absolute percent difference (MAE) was estimated to be approximately 10 percent or less for calcium, magnesium, sodium, chloride, nitrate, sulfate, specific conductance, and sample volume. The MAE values for ammonium and hydrogen-ion concentrations were estimated to be less than 10 percent for CA99 and NH02 but nearly 20 percent for ammonium concentration and about 17 percent for hydrogen-ion concentration for CO08. As in past years, the variability in the collocated-site data for sam
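
    A minimal sketch of the collocated-sampler precision statistic described above: the median of the absolute percent difference between paired weekly measurements. Using the pair mean as the denominator, and the paired sulfate concentrations shown, are assumptions for illustration.

    import numpy as np

    def median_absolute_percent_difference(primary, collocated):
        """Median of |x - y| / ((x + y) / 2) * 100 over paired weekly samples."""
        x = np.asarray(primary, dtype=float)
        y = np.asarray(collocated, dtype=float)
        return float(np.median(200.0 * np.abs(x - y) / (x + y)))

    # Hypothetical paired sulfate concentrations (mg/L) from a collocated site.
    primary = [1.10, 0.85, 2.40, 0.60, 1.75]
    collocated = [1.05, 0.90, 2.30, 0.66, 1.80]
    print(f"{median_absolute_percent_difference(primary, collocated):.1f}%")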

  6. A method to correct sampling ghosts in historic near-infrared Fourier transform spectrometer (FTS) measurements

    NASA Astrophysics Data System (ADS)

    Dohe, S.; Sherlock, V.; Hase, F.; Gisi, M.; Robinson, J.; Sepúlveda, E.; Schneider, M.; Blumenstock, T.

    2013-08-01

    The Total Carbon Column Observing Network (TCCON) has been established to provide ground-based remote sensing measurements of the column-averaged dry air mole fractions (DMF) of key greenhouse gases. To ensure network-wide consistency, biases between Fourier transform spectrometers at different sites have to be well controlled. Errors in interferogram sampling can introduce significant biases in retrievals. In this study we investigate a two-step scheme to correct these errors. In the first step the laser sampling error (LSE) is estimated by determining the sampling shift which minimises the magnitude of the signal intensity in selected, fully absorbed regions of the solar spectrum. The LSE is estimated for every day with measurements which meet certain selection criteria to derive the site-specific time series of the LSEs. In the second step, this sequence of LSEs is used to resample all the interferograms acquired at the site, and hence correct the sampling errors. Measurements acquired at the Izaña and Lauder TCCON sites are used to demonstrate the method. At both sites the sampling error histories show changes in LSE due to instrument interventions (e.g. realignment). Estimated LSEs are in good agreement with sampling errors inferred from the ratio of primary and ghost spectral signatures in optically bandpass-limited tungsten lamp spectra acquired at Lauder. The original time series of Xair and XCO2 (XY: column-averaged DMF of the target gas Y) at both sites show discrepancies of 0.2-0.5% due to changes in the LSE associated with instrument interventions or changes in the measurement sample rate. After resampling, discrepancies are reduced to 0.1% or less at Lauder and 0.2% at Izaña. In the latter case, coincident changes in interferometer alignment may also have contributed to the residual difference. In the future the proposed method will be used to correct historical spectra at all TCCON sites.

  7. Risk assessment guidance for Superfund: Volume 1 -- Human health evaluation manual. Supplement to Part A: Community involvement in Superfund risk assessments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1999-03-01

    The purpose of the guidance document is to provide the site team--risk assessor, remedial project manager (RPM), and community involvement coordinator--with information to improve community involvement in the Superfund risk assessment process. Specifically, the document: provides suggestions for how Superfund staff and community members can work together during the early stages of Superfund cleanup; identifies where, within the framework of the human health risk assessment methodology, community input can augment and improve EPA's estimates of exposure and risk; recommends questions the site team should ask the community; and illustrates why community involvement is valuable during the human health assessment at Superfund sites.

  8. Automatic detection of patient identification and positioning errors in radiation therapy treatment using 3-dimensional setup images.

    PubMed

    Jani, Shyam S; Low, Daniel A; Lamb, James M

    2015-01-01

    To develop an automated system that detects patient identification and positioning errors between 3-dimensional computed tomography (CT) and kilovoltage CT planning images. Planning kilovoltage CT images were collected for head and neck (H&N), pelvis, and spine treatments with corresponding 3-dimensional cone beam CT and megavoltage CT setup images from TrueBeam and TomoTherapy units, respectively. Patient identification errors were simulated by registering setup and planning images from different patients. For positioning errors, setup and planning images were misaligned by 1 to 5 cm in the 6 anatomical directions for H&N and pelvis patients. Spinal misalignments were simulated by misaligning to adjacent vertebral bodies. Image pairs were assessed using commonly used image similarity metrics as well as custom-designed metrics. Linear discriminant analysis classification models were trained and tested on the imaging datasets, and misclassification error (MCE), sensitivity, and specificity parameters were estimated using 10-fold cross-validation. For patient identification, our workflow produced MCE estimates of 0.66%, 1.67%, and 0% for H&N, pelvis, and spine TomoTherapy images, respectively. Sensitivity and specificity ranged from 97.5% to 100%. MCEs of 3.5%, 2.3%, and 2.1% were obtained for TrueBeam images of the above sites, respectively, with sensitivity and specificity estimates between 95.4% and 97.7%. MCEs for 1-cm H&N/pelvis misalignments were 1.3%/5.1% and 9.1%/8.6% for TomoTherapy and TrueBeam images, respectively. Two-centimeter MCE estimates were 0.4%/1.6% and 3.1%/3.2%, respectively. MCEs for vertebral body misalignments were 4.8% and 3.6% for TomoTherapy and TrueBeam images, respectively. Patient identification and gross misalignment errors can be robustly and automatically detected using 3-dimensional setup images of different energies across 3 commonly treated anatomical sites. Copyright © 2015 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
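
    The classification step is standard enough to sketch. The fragment below trains a linear discriminant analysis model with 10-fold cross-validation in scikit-learn and reports misclassification error, sensitivity, and specificity; the feature matrix and labels are random placeholders, not the image-similarity metrics used in the study.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.metrics import confusion_matrix
    from sklearn.model_selection import cross_val_predict

    # Placeholder data: one row of image-similarity features per planning/setup
    # image pair; y = 1 flags a mismatched pair (wrong patient or gross shift).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = rng.integers(0, 2, size=200)

    pred = cross_val_predict(LinearDiscriminantAnalysis(), X, y, cv=10)
    tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
    mce = (fp + fn) / len(y)              # misclassification error
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    print(f"MCE={mce:.3f}  sens={sensitivity:.3f}  spec={specificity:.3f}")
    ```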

  9. Aquifer response to stream-stage and recharge variations. II. Convolution method and applications

    NASA Astrophysics Data System (ADS)

    Barlow, P. M.; DeSimone, L. A.; Moench, A. F.

    2000-05-01

    In this second of two papers, analytical step-response functions, developed in the companion paper for several cases of transient hydraulic interaction between a fully penetrating stream and a confined, leaky, or water-table aquifer, are used in the convolution integral to calculate aquifer heads, streambank seepage rates, and bank storage that occur in response to stream-stage fluctuations and basinwide recharge or evapotranspiration. Two computer programs developed on the basis of these step-response functions and the convolution integral are applied to the analysis of hydraulic interaction of two alluvial stream-aquifer systems in the northeastern and central United States. These applications demonstrate the utility of the analytical functions and computer programs for estimating aquifer and streambank hydraulic properties, recharge rates, streambank seepage rates, and bank storage. Analysis of the water-table aquifer adjacent to the Blackstone River in Massachusetts suggests that the very shallow depth of water table and associated thin unsaturated zone at the site cause the aquifer to behave like a confined aquifer (negligible specific yield). This finding is consistent with previous studies that have shown that the effective specific yield of an unconfined aquifer approaches zero when the capillary fringe, where sediment pores are saturated by tension, extends to land surface. Under this condition, the aquifer's response is determined by elastic storage only. Estimates of horizontal and vertical hydraulic conductivity, specific yield, specific storage, and recharge for a water-table aquifer adjacent to the Cedar River in eastern Iowa, determined by the use of analytical methods, are in close agreement with those estimated by use of a more complex, multilayer numerical model of the aquifer. Streambank leakance of the semipervious streambank materials also was estimated for the site. The streambank-leakance parameter may be considered to be a general (or lumped) parameter that accounts not only for the resistance of flow at the river-aquifer boundary, but also for the effects of partial penetration of the river and other near-stream flow phenomena not included in the theoretical development of the step-response functions.
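
    The convolution step can be written in a few lines. The sketch below discretises the convolution integral as a superposition of step responses weighted by stream-stage increments; the arrays and their units are placeholders, not inputs to the published computer programs.

    ```python
    import numpy as np

    def aquifer_head_response(step_response, stage):
        """Discrete form of the convolution integral: superpose unit step
        responses U(t - k) weighted by stream-stage increments dH(k).
        Both arrays are illustrative placeholders."""
        d_stage = np.diff(stage, prepend=stage[0])   # stage change at each time step
        return np.convolve(d_stage, step_response)[: stage.size]

    # e.g. head rise at an observation well for a 1 m step in stream stage
    print(aquifer_head_response(np.array([0.1, 0.3, 0.5, 0.6]),
                                np.array([0.0, 1.0, 1.0, 1.0])))
    ```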

  10. Characterization of air manganese exposure estimates for residents in two Ohio towns

    PubMed Central

    Colledge, Michelle A.; Julian, Jaime R.; Gocheva, Vihra V.; Beseler, Cheryl L.; Roels, Harry A.; Lobdell, Danelle T.; Bowler, Rosemarie M.

    2016-01-01

    This study was conducted to derive receptor-specific outdoor exposure concentrations of total suspended particulate (TSP) and respirable (aerodynamic diameter ≤ 10 μm) air manganese (air-Mn) for East Liverpool and Marietta (Ohio) in the absence of facility emissions data, but where long-term air measurements were available. Our “site-surface area emissions method” used U.S. Environmental Protection Agency’s (EPA) AERMOD (AMS/EPA Regulatory Model) dispersion model and air measurement data to estimate concentrations for residential receptor sites in the two communities. Modeled concentrations were used to create ratios between receptor points and calibrated using measured data from local air monitoring stations. Estimated outdoor air-Mn concentrations were derived for individual study subjects in both towns. The mean estimated long-term air-Mn exposure levels for total suspended particulate were 0.35 μg/m3 (geometric mean [GM]) and 0.88 μg/m3 (arithmetic mean [AM]) in East Liverpool (range: 0.014–6.32 μg/m3) and 0.17 μg/m3 (GM) and 0.21 μg/m3 (AM) in Marietta (range: 0.03–1.61 μg/m3). Modeled results compared well with averaged ambient air measurements from local air monitoring stations. Exposure to respirable Mn particulate matter (PM10; PM <10 μm) was higher in Marietta residents. Implications: Few available studies evaluate long-term health outcomes from inhalational manganese (Mn) exposure in residential populations, due in part to challenges in measuring individual exposures. Local long-term air measurements provide the means to calibrate models used in estimating long-term exposures. Furthermore, this combination of modeling and ambient air sampling can be used to derive receptor-specific exposure estimates even in the absence of source emissions data for use in human health outcome studies. PMID:26211636
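
    The ratio-and-calibrate idea reduces to a one-line calculation: the modelled receptor-to-monitor concentration ratio is scaled by the observed long-term mean at the monitoring station. The sketch below uses made-up values and hypothetical argument names; it is not the study's implementation.

    ```python
    def receptor_estimate(c_model_receptor, c_model_monitor, c_monitor_obs):
        """Scale the modelled receptor-to-monitor ratio by the measured
        long-term mean at the monitoring station (all names hypothetical)."""
        return (c_model_receptor / c_model_monitor) * c_monitor_obs

    # Made-up values: modelled 0.8 and 1.2 (relative units), measured 0.35 ug/m3
    print(receptor_estimate(0.8, 1.2, 0.35))   # ~0.23 ug/m3 at this receptor
    ```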

  11. Species richness and occupancy estimation in communities subject to temporary emigration

    USGS Publications Warehouse

    Kery, M.; Royle, J. Andrew; Plattner, M.; Dorazio, R.M.

    2009-01-01

    Species richness is the most common biodiversity metric, although typically some species remain unobserved. Therefore, estimates of species richness and related quantities should account for imperfect detectability. Community dynamics can often be represented as superposition of species-specific phenologies (e.g., in taxa with well-defined flight [insects], activity [rodents], or vegetation periods [plants]). We develop a model for such predictably open communities wherein species richness is expressed as the sum over observed and unobserved species of estimated species-specific and site-specific occurrence indicators and where seasonal occurrence is modeled as a species-specific function of time. Our model is a multispecies extension of a multistate model with one unobservable state and represents a parsimonious way of dealing with a widespread form of "temporary emigration." For illustration we use Swiss butterfly monitoring data collected under a robust design (RD); species were recorded on 13 transects during two secondary periods within ≤ 7 primary sampling periods. We compare estimates with those under a variation of the model applied to standard data, where secondary samples are pooled. The latter model yielded unrealistically high estimates of total community size of 274 species. In contrast, estimates were similar under models applied to RD data with constant (122) or seasonally varying (126) detectability for each species, but the former was more parsimonious and therefore used for inference. Per transect, 6-44 (mean 21.1) species were detected. Species richness estimates averaged 29.3; therefore only 71% (range 32-92%) of all species present were ever detected. In any primary period, 0.4-5.6 species present were overlooked. Detectability varied by species and averaged 0.88 per primary sampling period. Our modeling framework is extremely flexible; extensions such as covariates for the occurrence or detectability of individual species are easy. It should be useful for communities with a predictable form of temporary emigration where rigorous estimation of community metrics has proved challenging so far.

  12. Characterizing low affinity epibatidine binding to α4β2 nicotinic acetylcholine receptors with ligand depletion and nonspecific binding

    PubMed Central

    2011-01-01

    Background Along with high affinity binding of epibatidine (Kd1≈10 pM) to α4β2 nicotinic acetylcholine receptor (nAChR), low affinity binding of epibatidine (Kd2≈1-10 nM) to an independent binding site has been reported. Studying this low affinity binding is important because it might contribute understanding about the structure and synthesis of α4β2 nAChR. The binding behavior of epibatidine and α4β2 AChR raises a question about interpreting binding data from two independent sites with ligand depletion and nonspecific binding, both of which can affect equilibrium binding of [3H]epibatidine and α4β2 nAChR. If modeled incorrectly, ligand depletion and nonspecific binding lead to inaccurate estimates of binding constants. Fitting total equilibrium binding as a function of total ligand accurately characterizes a single site with ligand depletion and nonspecific binding. The goal of this study was to determine whether this approach is sufficient with two independent high and low affinity sites. Results Computer simulations of binding revealed complexities beyond fitting total binding for characterizing the second, low affinity site of α4β2 nAChR. First, distinguishing low-affinity specific binding from nonspecific binding was a potential problem with saturation data. Varying the maximum concentration of [3H]epibatidine, simultaneously fitting independently measured nonspecific binding, and varying α4β2 nAChR concentration were effective remedies. Second, ligand depletion helped identify the low affinity site when nonspecific binding was significant in saturation or competition data, contrary to a common belief that ligand depletion always is detrimental. Third, measuring nonspecific binding without α4β2 nAChR distinguished better between nonspecific binding and low-affinity specific binding under some circumstances of competitive binding than did presuming nonspecific binding to be residual [3H]epibatidine binding after adding a large concentration of cold competitor. Fourth, nonspecific binding of a heterologous competitor changed estimates of high and low inhibition constants but did not change the ratio of those estimates. Conclusions Investigating the low affinity site of α4β2 nAChR with equilibrium binding when ligand depletion and nonspecific binding are present likely needs special attention to experimental design and data interpretation beyond fitting total binding data. Manipulation of maximum ligand and receptor concentrations and intentionally increasing ligand depletion are potentially helpful approaches. PMID:22112852
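
    A minimal numerical sketch of the two-site model with ligand depletion and linear nonspecific binding is shown below: free ligand is obtained from the mass-balance constraint and then used to compute total binding. The parameter values are illustrative only (roughly 10 pM and a few nM affinities expressed in nM units), not fitted constants from the study.

    ```python
    from scipy.optimize import brentq

    def total_binding(L_total, Bmax1, Kd1, Bmax2, Kd2, ns):
        """Two independent sites plus linear nonspecific binding, with ligand
        depletion handled through the mass balance L_total = F + B(F)."""
        def mass_balance(F):
            bound = Bmax1 * F / (Kd1 + F) + Bmax2 * F / (Kd2 + F) + ns * F
            return F + bound - L_total
        F = brentq(mass_balance, 0.0, L_total)          # free ligand at equilibrium
        return Bmax1 * F / (Kd1 + F) + Bmax2 * F / (Kd2 + F) + ns * F

    # Concentrations in nM: ~10 pM high-affinity and ~5 nM low-affinity sites
    print(total_binding(1.0, Bmax1=0.05, Kd1=0.01, Bmax2=0.2, Kd2=5.0, ns=0.02))
    ```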

  13. Replication and Comparison of the Newly Proposed ADOS-2, Module 4 Algorithm in ASD without ID: A Multi-site Study

    PubMed Central

    Pugliese, Cara E.; Kenworthy, Lauren; Bal, Vanessa Hus; Wallace, Gregory L; Yerys, Benjamin E; Maddox, Brenna B.; White, Susan W.; Popal, Haroon; Armour, Anna Chelsea; Miller, Judith; Herrington, John D.; Schultz, Robert T.; Martin, Alex; Anthony, Laura Gutermuth

    2015-01-01

    Recent updates have been proposed to the Autism Diagnostic Observation Schedule-2 Module 4 diagnostic algorithm. This new algorithm, however, has not yet been validated in an independent sample without intellectual disability (ID). This multi-site study compared the original and revised algorithms in individuals with ASD without ID. The revised algorithm demonstrated increased sensitivity, but lower specificity in the overall sample. Estimates were highest for females, individuals with a verbal IQ below 85 or above 115, and ages 16 and older. Best practice diagnostic procedures should include the Module 4 in conjunction with other assessment tools. Balancing needs for sensitivity and specificity depending on the purpose of assessment (e.g., clinical vs. research) and demographic characteristics mentioned above will enhance its utility. PMID:26385796

  14. Tuberculous Lymphadenitis Is Associated with Enhanced Baseline and Antigen-Specific Induction of Type 1 and Type 17 Cytokines and Reduced Interleukin-1β (IL-1β) and IL-18 at the Site of Infection.

    PubMed

    Kathamuthu, Gokul Raj; Moideen, Kadar; Baskaran, Dhanaraj; Banurekha, Vaithilingam V; Nair, Dina; Sekar, Gomathi; Sridhar, Rathinam; Vidyajayanthi, Bharathi; Gajendraraj, Ganeshan; Parandhaman, Dinesh Kumar; Srinivasan, Alena; Babu, Subash

    2017-05-01

    Tuberculous lymphadenitis (TBL) is characterized by an expansion of Th1 and Th17 cells with altered serum levels of proinflammatory cytokines. However, the cytokine profile at the site of infection, i.e., the affected lymph nodes, has not been examined in detail. To estimate the baseline and mycobacterial antigen-stimulated concentrations of type 1, type 17, and other proinflammatory cytokines in patients with TBL ( n = 14), we examined both the baseline and the antigen-specific concentrations of these cytokines before and after chemotherapy and compared them with those in individuals with pulmonary tuberculosis (PTB) ( n = 14). In addition, we also compared the cytokine responses in whole blood and those in the lymph nodes of TBL individuals. We observed significantly enhanced baseline and antigen-specific levels of type 1 cytokines (gamma interferon [IFN-γ] and tumor necrosis factor alpha [TNF-α]) and a type 17 cytokine (interleukin-17 [IL-17]) and significantly diminished baseline and antigen-specific levels of proinflammatory cytokines (IL-1β and IL-18) in the whole blood of TBL individuals compared to those in the whole blood of PTB individuals. Moreover, we also observed a pattern of baseline and antigen-specific cytokine production at the site of infection (lymph node) similar to that in the whole blood of TBL individuals. Following standard antituberculosis (anti-TB) treatment, we observed alterations in the baseline and/or antigen-specific levels of IFN-γ, TNF-α, IL-1β, and IL-18. TBL is therefore characterized by enhanced baseline and antigen-specific production of type 1 and type 17 cytokines and reduced baseline and antigen-specific production of IL-1β and IL-18 at the site of infection. Copyright © 2017 American Society for Microbiology.

  15. Yields of Soviet underground nuclear explosions at Novaya Zemlya, 1964-1976, from seismic body and surface waves

    PubMed Central

    Sykes, Lynn R.; Wiggins, Graham C.

    1986-01-01

    Surface and body wave magnitudes are determined for 15 U.S.S.R. underground nuclear weapons tests conducted at Novaya Zemlya between 1964 and 1976 and are used to estimate yields. These events include the largest underground explosions detonated by the Soviet Union. A histogram of body wave magnitude (mb) values indicates a clustering of explosions at a few specific yields. The most pronounced cluster consists of six explosions of yield near 500 kilotons. Several of these seem to be tests of warheads for major strategic systems that became operational in the late 1970s. The largest Soviet underground explosion is estimated to have a yield of 3500 ± 600 kilotons, somewhat smaller than the yield of the largest U.S. underground test. A preliminary estimation of the significance of tectonic release is made by measuring the amplitude of Love waves. The bias in mb for Novaya Zemlya relative to the Nevada test site is about 0.35, nearly identical to that of the eastern Kazakhstan test site relative to Nevada. PMID:16593645
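
    Yield estimation from body wave magnitude typically inverts a relation of the form mb = a + b*log10(Y), after correcting for test-site bias. The sketch below is a generic illustration with placeholder constants; it does not reproduce the calibration used in the paper.

    ```python
    def yield_from_mb(mb, a=4.45, b=0.75, bias=0.35):
        """Invert a generic magnitude-yield relation, mb = a + b*log10(Y in kt),
        after removing a test-site bias term. All constants are placeholders,
        not the paper's calibration."""
        return 10 ** ((mb - bias - a) / b)

    print(round(yield_from_mb(6.5)))   # yield in kilotons for a made-up mb value
    ```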

  16. Monitoring in the nearshore: A process for making reasoned decisions

    USGS Publications Warehouse

    Bodkin, James L.; Dean, T.A.

    2003-01-01

    Over the past several years, a conceptual framework for the GEM nearshore monitoring program has been developed through a series of workshops. However, details of the proposed monitoring program, e.g. what to sample, where to sample, when to sample and at how many sites, have yet to be determined. In FY 03 we were funded under Project 03687 to outline a process whereby specific alternatives to monitoring are developed and presented to the EVOS Trustee Council for consideration. As part of this process, two key elements are required before reasoned decisions can be made. These are: 1) a comprehensive historical perspective of locations and types of past studies conducted in the nearshore marine communities within Gulf of Alaska, and 2) estimates of costs for each element of a proposed monitoring program. We have developed a GIS database that details available information from past studies of selected nearshore habitats and species in the Gulf of Alaska and provide a visual means of selecting sites based (in part) on the locations for which historical data of interest are available. We also provide cost estimates for specific monitoring plan alternatives and outline several alternative plans that can be accomplished within reasonable budgetary constraints. The products that we will provide are: 1) A GIS database and maps showing the location and types of information available from the nearshore in the Gulf of Alaska; 2) A list of several specific monitoring alternatives that can be conducted within reasonable budgetary constraints; and 3) Cost estimates for proposed tasks to be conducted as part of the nearshore program. Because data compilation and management will not be completed until late in FY03 we are requesting support for close-out of this project in FY 04.

  17. Non-steroidal Anti-inflammatory Drugs and Cancer Risk in Women: Results from the Women’s Health Initiative

    PubMed Central

    Brasky, Theodore M.; Liu, Jingmin; White, Emily; Peters, Ulrike; Potter, John D.; Walter, Roland B.; Baik, Christina S.; Lane, Dorothy S.; Manson, JoAnn E.; Vitolins, Mara Z.; Allison, Matthew A.; Tang, Jean Y.; Wactawski-Wende, Jean

    2017-01-01

    The use of non-steroidal anti-inflammatory drugs (NSAIDs) has been associated with reduced risks of cancers at several sites in some studies; however, we recently reported no association between their use and total cancer risk in women in a prospective study. Here we examine the association between NSAIDs and total and site-specific cancer incidence in the large, prospective Women’s Health Initiative (WHI). 129,013 women were recruited to participate in the WHI at 40 US clinical centers from 1993 to 1998 and followed prospectively. After 9.7 years of follow-up, 12,998 incident, first primary, invasive cancers were diagnosed. NSAID use was systematically collected at study visits. We used Cox proportional hazards regression models to estimate multivariable-adjusted hazard ratios (HR) and 95% confidence intervals (CI) for associations between NSAIDs use and total and site-specific cancer risk. Relative to non-use, consistent use (i.e., use at baseline and year 3 of follow-up) of any NSAID was not associated with total cancer risk (HR 1.00, 95% CI: 0.94–1.06). Results for individual NSAIDs were similar to the aggregate measure. In site-specific analyses, NSAIDs were associated with reduced risks of colorectal cancer, ovarian cancer, and melanoma. Our study confirms a chemopreventive benefit for colorectal cancer in women and gives preliminary evidence for a reduction of the risk of some rarer cancers. NSAIDs’ benefit on cancer risk was limited to specific sites and not evident when total cancer risk was examined. This information may be of importance when NSAIDs are considered as chemopreventive agents. PMID:24599876
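
    As an illustration of the modelling step, the fragment below fits a Cox proportional hazards model with the lifelines package and prints hazard ratios; the data frame and column names are invented placeholders, not WHI variables.

    ```python
    import pandas as pd
    from lifelines import CoxPHFitter

    # Hypothetical analysis table: follow-up time (years), event indicator,
    # consistent NSAID use, and one covariate. Column names are placeholders.
    df = pd.DataFrame({
        "time":  [5.2, 9.7, 3.1, 8.8, 9.7, 6.4, 7.5, 2.2],
        "event": [1, 0, 1, 0, 0, 1, 0, 1],
        "nsaid_consistent": [1, 0, 0, 1, 1, 0, 1, 0],
        "age":   [62, 55, 70, 66, 59, 64, 61, 68],
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="time", event_col="event")
    print(cph.summary["exp(coef)"])   # hazard ratios for NSAID use and age
    ```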

  18. Influences of Availability on Parameter Estimates from Site Occupancy Models with Application to Submersed Aquatic Vegetation

    USGS Publications Warehouse

    Gray, Brian R.; Holland, Mark D.; Yi, Feng; Starcevich, Leigh Ann Harrod

    2013-01-01

    Site occupancy models are commonly used by ecologists to estimate the probabilities of species site occupancy and of species detection. This study addresses the influence on site occupancy and detection estimates of variation in species availability among surveys within sites. Such variation in availability may result from temporary emigration, nonavailability of the species for detection, and sampling sites spatially when species presence is not uniform within sites. We demonstrate, using Monte Carlo simulations and aquatic vegetation data, that variation in availability and heterogeneity in the probability of availability may yield biases in the expected values of the site occupancy and detection estimates that have traditionally been associated with low-detection probabilities and heterogeneity in those probabilities. These findings confirm that the effects of availability may be important for ecologists and managers, and that where such effects are expected, modification of sampling designs and/or analytical methods should be considered. Failure to limit the effects of availability may preclude reliable estimation of the probability of site occupancy.

  19. Analysis of High Frequency Site-Specific Nitrogen and Oxygen Isotopic Composition of Atmospheric Nitrous Oxide at Mace Head, Ireland

    NASA Astrophysics Data System (ADS)

    McClellan, M. J.; Harris, E. J.; Olszewski, W.; Ono, S.; Prinn, R. G.

    2014-12-01

    Atmospheric nitrous oxide (N2O) significantly impacts Earth's climate due to its dual role as an inert potent greenhouse gas in the troposphere and as a reactive source of ozone-destroying nitrogen oxides in the stratosphere. However, there remain significant uncertainties in the global budget of this gas. The marked spatial divide in its reactivity means that all stages in the N2O life cycle—emission, transport, and destruction—must be examined to understand the overall effect of N2O on climate. Source and sink processes of N2O lead to varying concentrations of N2O isotopologues (14N14N16O, 14N15N16O, 15N14N16O, and 14N14N18O being measured) due to preferential isotopic production and elimination in different environments. Estimation of source and sink fluxes can be improved by combining isotopically resolved N2O observations with simulations using a chemical transport model with reanalysis meteorology and treatments of isotopic signatures of specific surface sources and stratospheric intrusions. We present the first few months of site-specific nitrogen and oxygen isotopic composition data from the Stheno-TILDAS instrument (Harris et al., 2013) at Mace Head, Ireland and compare these to results from MOZART-4 (Model for Ozone and Related Chemical Tracers, version 4) chemical transport model runs including N2O isotopic fractionation processes and reanalysis meteorological fields (NCEP/NCAR, MERRA, and GEOS-5). This study forms the basis for future inverse modeling experiments that will improve the accuracy of isotopically differentiated N2O emission and loss estimates. Ref: Harris, E., D. Nelson, W. Olszewski, M. Zahniser, K. Potter, B. McManus, A. Whitehill, R. Prinn, and S. Ono, Development of a spectroscopic technique for continuous online monitoring of oxygen and site-specific nitrogen isotopic composition of atmospheric nitrous oxide, Analytical Chemistry, 2013; DOI: 10.1021/ac403606u.

  20. Independent Review of Simulation of Net Infiltration for Present-Day and Potential Future Climates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Review Panel: Soroosh Sorooshian, Ph.D., Panel Chairperson, University of California, Irvine; Jan M. H. Hendrickx, Ph.D., New Mexico Institute of Mining and Technology; Binayak P. Mohanty, Ph.D., Texas A&M University

    The DOE Office of Civilian Radioactive Waste Management (OCRWM) tasked Oak Ridge Institute for Science and Education (ORISE) with providing an independent expert review of the documented model and prediction results for net infiltration of water into the unsaturated zone at Yucca Mountain. The specific purpose of the model, as documented in the report MDL-NBS-HS-000023, Rev. 01, is “to provide a spatial representation, including epistemic and aleatory uncertainty, of the predicted mean annual net infiltration at the Yucca Mountain site ...” (p. 1-1). The expert review panel assembled by ORISE concluded that the model report does not provide a technically credible spatial representation of net infiltration at Yucca Mountain. Specifically, the ORISE Review Panel found that: • A critical lack of site-specific meteorological, surface, and subsurface information prevents verification of (i) the net infiltration estimates, (ii) the uncertainty estimates of parameters caused by their spatial variability, and (iii) the assumptions used by the modelers (ranges and distributions) for the characterization of parameters. The paucity of site-specific data used by the modeling team for model implementation and validation is a major deficiency in this effort. • The model does not incorporate at least one potentially important hydrologic process. Subsurface lateral flow is not accounted for by the model, and the assumption that the effect of subsurface lateral flow is negligible is not adequately justified. This issue is especially critical for the wetter climate periods. This omission may be one reason the model results appear to underestimate net infiltration beneath wash environments and therefore imprecisely represent the spatial variability of net infiltration. • While the model uses assumptions consistently, such as uniform soil depths and a constant vegetation rooting depth, such assumptions may not be appropriate for this net infiltration simulation because they oversimplify a complex landscape and associated hydrologic processes, especially since the model assumptions have not been adequately corroborated by field and laboratory observations at Yucca Mountain.

  1. A-to-I RNA editing occurs at over a hundred million genomic sites, located in a majority of human genes.

    PubMed

    Bazak, Lily; Haviv, Ami; Barak, Michal; Jacob-Hirsch, Jasmine; Deng, Patricia; Zhang, Rui; Isaacs, Farren J; Rechavi, Gideon; Li, Jin Billy; Eisenberg, Eli; Levanon, Erez Y

    2014-03-01

    RNA molecules transmit the information encoded in the genome and generally reflect its content. Adenosine-to-inosine (A-to-I) RNA editing by ADAR proteins converts a genomically encoded adenosine into inosine. It is known that most RNA editing in human takes place in the primate-specific Alu sequences, but the extent of this phenomenon and its effect on transcriptome diversity are not yet clear. Here, we analyzed large-scale RNA-seq data and detected ∼1.6 million editing sites. As detection sensitivity increases with sequencing coverage, we performed ultradeep sequencing of selected Alu sequences and showed that the scope of editing is much larger than anticipated. We found that virtually all adenosines within Alu repeats that form double-stranded RNA undergo A-to-I editing, although most sites exhibit editing at only low levels (<1%). Moreover, using high coverage sequencing, we observed editing of transcripts resulting from residual antisense expression, doubling the number of edited sites in the human genome. Based on bioinformatic analyses and deep targeted sequencing, we estimate that there are over 100 million human Alu RNA editing sites, located in the majority of human genes. These findings set the stage for exploring how this primate-specific massive diversification of the transcriptome is utilized.
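
    Site-level editing is usually quantified directly from read counts, since inosine is read as guanosine by the sequencer. The sketch below computes that proportion for made-up counts; it is a generic illustration, not the pipeline used in the study.

    ```python
    def editing_level(a_reads, g_reads):
        """A-to-I editing level at one site from RNA-seq counts, since inosine
        is read as guanosine: level = G / (A + G). Counts are made up."""
        return g_reads / (a_reads + g_reads)

    print(editing_level(990, 10))   # 1% editing, typical of weakly edited Alu sites
    ```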

  2. Estimating site occupancy rates when detection probabilities are less than one

    USGS Publications Warehouse

    MacKenzie, D.I.; Nichols, J.D.; Lachman, G.B.; Droege, S.; Royle, J. Andrew; Langtimm, C.A.

    2002-01-01

    Nondetection of a species at a site does not imply that the species is absent unless the probability of detection is 1. We propose a model and likelihood-based method for estimating site occupancy rates when detection probabilities are less than 1. Simulation results indicate that the model provides reasonable, largely unbiased estimates of occupancy rates for moderate detection probabilities (>0.3). We estimated site occupancy rates for two anuran species at 32 wetland sites in Maryland, USA, from data collected during 2000 as part of an amphibian monitoring program, Frogwatch USA. Site occupancy rates were estimated as 0.49 for American toads (Bufo americanus), a 44% increase over the proportion of sites at which they were actually observed, and as 0.85 for spring peepers (Pseudacris crucifer), slightly above the observed proportion of 0.83.
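
    The core of the approach is a zero-inflated detection likelihood: a site is occupied with probability psi, and an occupied site yields a detection on each visit with probability p. A minimal maximum-likelihood sketch with made-up detection histories follows; it is a generic single-season illustration, not the authors' software.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Made-up detection histories: rows are sites, columns are repeat visits.
    Y = np.array([[1, 0, 1], [0, 0, 0], [1, 1, 0], [0, 0, 0], [0, 1, 1]])

    def neg_log_lik(theta):
        psi, p = 1 / (1 + np.exp(-np.asarray(theta)))   # occupancy and detection
        detected = psi * np.prod(p ** Y * (1 - p) ** (1 - Y), axis=1)
        never = (Y.sum(axis=1) == 0) * (1 - psi)        # all-zero histories
        return -np.log(detected + never).sum()

    fit = minimize(neg_log_lik, x0=[0.0, 0.0])
    print("psi, p =", 1 / (1 + np.exp(-fit.x)))
    ```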

  3. Comparison of hoop-net trapping and visual surveys to monitor abundance of the Rio Grande cooter (Pseudemys gorzugi).

    PubMed

    Mali, Ivana; Duarte, Adam; Forstner, Michael R J

    2018-01-01

    Abundance estimates play an important part in the regulatory and conservation decision-making process. It is important to correct monitoring data for imperfect detection when using these data to track spatial and temporal variation in abundance, especially in the case of rare and elusive species. This paper presents the first attempt to estimate abundance of the Rio Grande cooter ( Pseudemys gorzugi ) while explicitly considering the detection process. Specifically, in 2016 we monitored this rare species at two sites along the Black River, New Mexico via traditional baited hoop-net traps and less invasive visual surveys to evaluate the efficacy of these two sampling designs. We fitted the Huggins closed-capture estimator to estimate capture probabilities using the trap data and distance sampling models to estimate detection probabilities using the visual survey data. We found that only the visual survey with the highest number of observed turtles resulted in similar abundance estimates to those estimated using the trap data. However, the estimates of abundance from the remaining visual survey data were highly variable and often underestimated abundance relative to the estimates from the trap data. We suspect this pattern is related to changes in the basking behavior of the species and, thus, the availability of turtles to be detected even though all visual surveys were conducted when environmental conditions were similar. Regardless, we found that riverine habitat conditions limited our ability to properly conduct visual surveys at one site. Collectively, this suggests visual surveys may not be an effective sample design for this species in this river system. When analyzing the trap data, we found capture probabilities to be highly variable across sites and between age classes and that recapture probabilities were much lower than initial capture probabilities, highlighting the importance of accounting for detectability when monitoring this species. Although baited hoop-net traps seem to be an effective sampling design, it is important to note that this method required a relatively high trap effort to reliably estimate abundance. This information will be useful when developing a larger-scale, long-term monitoring program for this species of concern.

  4. Techniques for estimating monthly mean streamflow at gaged sites and monthly streamflow duration characteristics at ungaged sites in central Nevada

    USGS Publications Warehouse

    Hess, G.W.; Bohman, L.R.

    1996-01-01

    Techniques for estimating monthly mean streamflow at gaged sites and monthly streamflow duration characteristics at ungaged sites in central Nevada were developed using streamflow records at six gaged sites and basin physical and climatic characteristics. Streamflow data at gaged sites were related by regression techniques to concurrent flows at nearby gaging stations so that monthly mean streamflows for periods of missing or no record can be estimated for gaged sites in central Nevada. The standard error of estimate for relations at these sites ranged from 12 to 196 percent. Also, monthly streamflow data for selected percent exceedence levels were used in regression analyses with basin and climatic variables to determine relations for ungaged basins for annual and monthly percent exceedence levels. Analyses indicate that the drainage area and percent of drainage area at altitudes greater than 10,000 feet are the most significant variables. For the annual percent exceedence, the standard error of estimate of the relations for ungaged sites ranged from 51 to 96 percent and standard error of prediction for ungaged sites ranged from 96 to 249 percent. For the monthly percent exceedence values, the standard error of estimate of the relations ranged from 31 to 168 percent, and the standard error of prediction ranged from 115 to 3,124 percent. Reliability and limitations of the estimating methods are described.
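
    The ungaged-site relations are ordinary regressions of log-transformed exceedance flows on basin characteristics. The sketch below fits such a relation to invented data for the two most significant variables named in the abstract; the numbers and coefficients are placeholders, not the report's equations.

    ```python
    import numpy as np

    # Invented basins: drainage area (mi^2), % of area above 10,000 ft, and the
    # 50-percent-exceedance monthly flow (cfs). None of these are report values.
    area   = np.array([12.0, 55.0, 140.0, 300.0, 620.0, 41.0])
    pct_hi = np.array([ 2.0,  8.0,  15.0,  30.0,  40.0,  5.0])
    q50    = np.array([ 0.8,  3.5,  11.0,  30.0,  75.0,  2.4])

    X = np.column_stack([np.ones_like(area), np.log(area), np.log(pct_hi + 1)])
    coef, *_ = np.linalg.lstsq(X, np.log(q50), rcond=None)
    print("ln(Q50) coefficients:", coef)   # intercept and the two elasticities
    ```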

  5. A revised logistic regression equation and an automated procedure for mapping the probability of a stream flowing perennially in Massachusetts

    USGS Publications Warehouse

    Bent, Gardner C.; Steeves, Peter A.

    2006-01-01

    A revised logistic regression equation and an automated procedure were developed for mapping the probability of a stream flowing perennially in Massachusetts. The equation provides city and town conservation commissions and the Massachusetts Department of Environmental Protection a method for assessing whether streams are intermittent or perennial at a specific site in Massachusetts by estimating the probability of a stream flowing perennially at that site. This information could assist the environmental agencies who administer the Commonwealth of Massachusetts Rivers Protection Act of 1996, which establishes a 200-foot-wide protected riverfront area extending from the mean annual high-water line along each side of a perennial stream, with exceptions for some urban areas. The equation was developed by relating the observed intermittent or perennial status of a stream site to selected basin characteristics of naturally flowing streams (defined as having no regulation by dams, surface-water withdrawals, ground-water withdrawals, diversion, wastewater discharge, and so forth) in Massachusetts. This revised equation differs from the equation developed in a previous U.S. Geological Survey study in that it is solely based on visual observations of the intermittent or perennial status of stream sites across Massachusetts and on the evaluation of several additional basin and land-use characteristics as potential explanatory variables in the logistic regression analysis. The revised equation estimated more accurately the intermittent or perennial status of the observed stream sites than the equation from the previous study. Stream sites used in the analysis were identified as intermittent or perennial based on visual observation during low-flow periods from late July through early September 2001. The database of intermittent and perennial streams included a total of 351 naturally flowing (no regulation) sites, of which 85 were observed to be intermittent and 266 perennial. Stream sites included in the database had drainage areas that ranged from 0.04 to 10.96 square miles. Of the 66 stream sites with drainage areas greater than 2.00 square miles, 2 sites were intermittent and 64 sites were perennial. Thus, stream sites with drainage areas greater than 2.00 square miles were assumed to flow perennially, and the database used to develop the logistic regression equation included only those stream sites with drainage areas less than 2.00 square miles. The database for the equation included 285 stream sites that had drainage areas less than 2.00 square miles, of which 83 sites were intermittent and 202 sites were perennial. Results of the logistic regression analysis indicate that the probability of a stream flowing perennially at a specific site in Massachusetts can be estimated as a function of four explanatory variables: (1) drainage area (natural logarithm), (2) areal percentage of sand and gravel deposits, (3) areal percentage of forest land, and (4) region of the state (eastern region or western region). Although the equation provides an objective means of determining the probability of a stream flowing perennially at a specific site, the reliability of the equation is constrained by the data used in its development. 
The equation is not recommended for (1) losing stream reaches or (2) streams whose ground-water contributing areas do not coincide with their surface-water drainage areas, such as many streams draining the Southeast Coastal Region-the southern part of the South Coastal Basin, the eastern part of the Buzzards Bay Basin, and the entire area of the Cape Cod and the Islands Basins. If the equation were used on a regulated stream site, the estimated intermittent or perennial status would reflect the natural flow conditions for that site. An automated mapping procedure was developed to determine the intermittent or perennial status of stream sites along reaches throughout a basin. The procedure delineates the drainage area boundaries, determines values for the four explanatory variables, and solves the equation for estimating the probability of a stream flowing perennially at two locations on a headwater (first-order) stream reach-one near its confluence or end point and one near its headwaters or start point. The automated procedure then determines the intermittent or perennial status of the reach on the basis of the calculated probability values and a probability cutpoint (a stream is considered to flow perennially at a cutpoint of 0.56 or greater for this study) for the two locations or continues to loop upstream or downstream between locations less than and greater than the cutpoint of 0.56 to determine the transition point from an intermittent to a perennial stream. If the first-order stream reach is determined to be intermittent, the procedure moves to the next downstream reach and repeats the same process. The automated procedure then moves to the next first-order stream and repeats the process until the entire basin is mapped. A map of the intermittent and perennial stream reaches in the Shawsheen River Basin is provided on a CD-ROM that accompanies this report. The CD-ROM also contains ArcReader 9.0, a freeware product, that allows a user to zoom in and out, set a scale, pan, turn on and off map layers (such as a USGS topographic map), and print a map of the stream site with a scale bar. Maps of the intermittent and perennial stream reaches in Massachusetts will provide city and town conservation commissions and the Massachusetts Department of Environmental Protection with an additional method for assessing the intermittent or perennial status of stream sites.
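
    The published equation has the standard logistic form in the four explanatory variables, with a 0.56 probability cutpoint for classifying a site as perennial. The sketch below shows that form with hypothetical coefficients; the report's fitted values are not reproduced here.

    ```python
    import math

    def perennial_probability(drainage_area_mi2, pct_sand_gravel, pct_forest, eastern):
        """p = 1 / (1 + exp(-(b0 + b1*ln(DA) + b2*SG + b3*FOR + b4*REGION))).
        The coefficients below are hypothetical placeholders, not the report's
        fitted values."""
        b0, b1, b2, b3, b4 = -0.5, 1.2, 0.03, 0.02, -0.4
        z = (b0 + b1 * math.log(drainage_area_mi2) + b2 * pct_sand_gravel
             + b3 * pct_forest + b4 * (1 if eastern else 0))
        return 1.0 / (1.0 + math.exp(-z))

    p = perennial_probability(1.5, 20.0, 70.0, eastern=True)
    print(p, "perennial" if p >= 0.56 else "intermittent")   # 0.56 cutpoint
    ```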

  6. Economic decision making and the application of nonparametric prediction models

    USGS Publications Warehouse

    Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.

    2007-01-01

    Sustained increases in energy prices have focused attention on gas resources in low permeability shale or in coals that were previously considered economically marginal. Daily well deliverability is often relatively small, although the estimates of the total volumes of recoverable resources in these settings are large. Planning and development decisions for extraction of such resources must be area-wide because profitable extraction requires optimization of scale economies to minimize costs and reduce risk. For an individual firm the decision to enter such plays depends on reconnaissance level estimates of regional recoverable resources and on cost estimates to develop untested areas. This paper shows how simple nonparametric local regression models, used to predict technically recoverable resources at untested sites, can be combined with economic models to compute regional scale cost functions. The context of the worked example is the Devonian Antrim shale gas play, Michigan Basin. One finding relates to selection of the resource prediction model to be used with economic models. Models which can best predict aggregate volume over larger areas (many hundreds of sites) may lose granularity in the distribution of predicted volumes at individual sites. This loss of detail affects the representation of economic cost functions and may affect economic decisions. Second, because some analysts consider unconventional resources to be ubiquitous, the selection and order of specific drilling sites may, in practice, be determined by extraneous factors. The paper also shows that when these simple prediction models are used to strategically order drilling prospects, the gain in gas volume over volumes associated with simple random site selection amounts to 15 to 20 percent. It also discusses why the observed benefit of updating predictions from results of new drilling, as opposed to following static predictions, is somewhat smaller. Copyright 2007, Society of Petroleum Engineers.

  7. Development of a new family of normalized modulus reduction and material damping curves

    NASA Astrophysics Data System (ADS)

    Darendeli, Mehmet Baris

    2001-12-01

    As part of various research projects [including the SRS (Savannah River Site) Project AA891070, EPRI (Electric Power Research Institute) Project 3302, and ROSRINE (Resolution of Site Response Issues from the Northridge Earthquake) Project], numerous geotechnical sites were drilled and sampled. Intact soil samples over a depth range of several hundred meters were recovered from 20 of these sites. These soil samples were tested in the laboratory at The University of Texas at Austin (UTA) to characterize the materials dynamically. The presence of a database accumulated from testing these intact specimens motivated a re-evaluation of empirical curves employed in the state of practice. The weaknesses of empirical curves reported in the literature were identified and the necessity of developing an improved set of empirical curves was recognized. This study focused on developing the empirical framework that can be used to generate normalized modulus reduction and material damping curves. This framework is composed of simple equations, which incorporate the key parameters that control nonlinear soil behavior. The data collected over the past decade at The University of Texas at Austin are statistically analyzed using First-order, Second-moment Bayesian Method (FSBM). The effects of various parameters (such as confining pressure and soil plasticity) on dynamic soil properties are evaluated and quantified within this framework. One of the most important aspects of this study is estimating not only the mean values of the empirical curves but also estimating the uncertainty associated with these values. This study provides the opportunity to handle uncertainty in the empirical estimates of dynamic soil properties within the probabilistic seismic hazard analysis framework. A refinement in site-specific probabilistic seismic hazard assessment is expected to materialize in the near future by incorporating the results of this study into state of practice.
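
    Normalized modulus reduction curves of this kind are commonly expressed as a hyperbolic function of shear strain relative to a reference strain that shifts with confining pressure and plasticity. The sketch below shows that generic form with illustrative parameter values; it is not the dissertation's fitted model.

    ```python
    def modulus_reduction(gamma, gamma_ref, a=0.92):
        """Generic hyperbolic normalized modulus reduction curve,
        G/Gmax = 1 / (1 + (gamma/gamma_ref)**a); gamma_ref typically grows
        with confining pressure and plasticity. Values are illustrative."""
        return 1.0 / (1.0 + (gamma / gamma_ref) ** a)

    for strain in (1e-4, 1e-3, 1e-2, 1e-1, 1.0):   # shear strain, percent
        print(strain, round(modulus_reduction(strain, gamma_ref=0.05), 3))
    ```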

  8. Modeling clear-sky solar radiation across a range of elevations in Hawai‘i: Comparing the use of input parameters at different temporal resolutions

    NASA Astrophysics Data System (ADS)

    Longman, Ryan J.; Giambelluca, Thomas W.; Frazier, Abby G.

    2012-01-01

    Estimates of clear sky global solar irradiance using the parametric model SPCTRAL2 were tested against clear sky radiation observations at four sites in Hawai`i using daily, mean monthly, and 1 year mean model parameter settings. Atmospheric parameters in SPCTRAL2 and similar models are usually set at site-specific values and are not varied to represent the effects of fluctuating humidity, aerosol amount and type, or ozone concentration, because time-dependent atmospheric parameter estimates are not available at most sites of interest. In this study, we sought to determine the added value of using time dependent as opposed to fixed model input parameter settings. At the AERONET site, Mauna Loa Observatory (MLO) on the island of Hawai`i, where daily measurements of atmospheric optical properties and hourly solar radiation observations are available, use of daily rather than 1 year mean aerosol parameter values reduced mean bias error (MBE) from 18 to 10 W m-2 and root mean square error from 25 to 17 W m-2. At three stations in the HaleNet climate network, located at elevations of 960, 1640, and 2590 m on the island of Maui, where aerosol-related parameter settings were interpolated from observed values for AERONET sites at MLO (3397 m) and Lāna`i (20 m), and precipitable water was estimated using radiosonde-derived humidity profiles from nearby Hilo, the model performed best when using constant 1 year mean parameter values. At HaleNet Station 152, for example, MBE was 18, 10, and 8 W m-2 for daily, monthly, and 1 year mean parameters, respectively.
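
    The comparison statistics reported above are simple to compute. The fragment below evaluates mean bias error and root mean square error for a modelled-versus-observed pair of series; the numbers are made up.

    ```python
    import numpy as np

    def mbe_rmse(modelled, observed):
        """Mean bias error and root mean square error (model minus observation)."""
        err = np.asarray(modelled) - np.asarray(observed)
        return err.mean(), np.sqrt((err ** 2).mean())

    # Made-up clear-sky irradiances in W m-2
    print(mbe_rmse([820.0, 905.0, 760.0], [810.0, 890.0, 755.0]))
    ```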

  9. Quantifying Construction Site Sediment Discharge Risk and Treatment Potential

    NASA Astrophysics Data System (ADS)

    Ferrell, L.; Beighley, R. E.

    2006-12-01

    Dealing with soil erosion and sediment transport can be a significant challenge during the construction process due to the potentially large spatial and temporal extent and conditions of bare soils. Best Management Practices (BMPs) are commonly used to eliminate or reduce offsite discharge of sediment. However, few efforts have investigated the time-varying risk of sediment discharge from construction sites, which often have dynamic soil conditions and the potential for less than optimal BMP installations. The goal of this research is to improve the design, implementation and effectiveness of sediment and erosion control at construction sites using site-specific, temporal distributions of sediment discharge risk. Sediment risk is determined from individual factors leading to sediment export, such as rainfall frequency, the adequacy of BMP installations, and the extent and duration of bare soil conditions. This research specifically focuses on quantifying: (a) the effectiveness of temporary sediment and erosion control BMPs in preventing, containing, and/or treating construction site sediment discharge at varying levels of "proper" installation, and (b) sediment discharge potential from construction sites during different phases of construction (e.g., disturbed-earth operations). BMPs are evaluated at selected construction sites in southern California and at the Soil Erosion Research Laboratory (SERL) in the Civil and Environmental Engineering department at San Diego State University. SERL experiments are performed on a 3-m by 10-m tilting soil bed with soil depths up to 1 meter, slopes ranging from 0 to 50 percent, and rainfall rates up to 150 mm/hr (6 in/hr). BMP performance is assessed based on experiments where BMPs are installed per manufacturer specifications, potentially less-than-optimal installations, and no-treatment conditions. Soil conditions are also varied to represent site conditions during different phases of construction (i.e., loose lifts, stock piles, temporary roads, finished grade, others). Preliminary site monitoring, experimental results, and a conceptual model for estimating the time-dependent risk of sediment discharge over the duration of a construction project are presented.

  10. Subsistence strategies in Argentina during the late Pleistocene and early Holocene

    NASA Astrophysics Data System (ADS)

    Martínez, Gustavo; Gutiérrez, María A.; Messineo, Pablo G.; Kaufmann, Cristian A.; Rafuse, Daniel J.

    2016-07-01

    This paper highlights regional and temporal variation in the presence and exploitation of faunal resources from different regions of Argentina during the late Pleistocene and early Holocene. Specifically, the faunal analysis considered here includes the zooarchaeological remains from all sites older than 7500 14C years BP. We include quantitative information for each reported species (genus, family, or order) and we use the number of identified specimens (NISP per taxon and NISP_total per site) as the quantitative measure of taxonomic abundance. The taxonomic richness (Ntaxa_total and Ntaxa_exploited) and the taxonomic heterogeneity or Shannon-Wiener index are estimated in order to consider dietary generalization or specialization, and ternary diagrams are used to categorize subsistence patterns of particular sites and regions. The archaeological database is composed of 78 sites which are represented by 110 stratigraphic contexts. Our results demonstrate that although some quantitative differences between regions are observed, artiodactyls (camelids and deer) were the most frequently consumed animal resource in Argentina. Early hunter-gatherers did not follow a specialized predation strategy on megamammals. A variety of subsistence systems, operating in parallel with a strong regional emphasis, is shown, according to specific environmental conditions and cultural trajectories.
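
    The heterogeneity measure used here is the Shannon-Wiener index computed from the proportional NISP of each taxon. A minimal sketch with invented counts follows; taxon names and values are placeholders.

    ```python
    import numpy as np

    def shannon_wiener(nisp_counts):
        """Taxonomic heterogeneity H' = -sum(p_i * ln p_i), with p_i the
        proportion of NISP_total contributed by taxon i. Counts are invented."""
        p = np.asarray(nisp_counts, dtype=float)
        p = p / p.sum()
        p = p[p > 0]
        return -(p * np.log(p)).sum()

    print(shannon_wiener([120, 45, 10, 3]))   # e.g. camelid, deer, rodent, bird NISP
    ```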

  11. Probabilistic Assessment of Above Zone Pressure Predictions at a Geologic Carbon Storage Site

    PubMed Central

    Namhata, Argha; Oladyshkin, Sergey; Dilmore, Robert M.; Zhang, Liwei; Nakles, David V.

    2016-01-01

    Carbon dioxide (CO2) storage into geological formations is regarded as an important mitigation strategy for anthropogenic CO2 emissions to the atmosphere. This study first simulates the leakage of CO2 and brine from a storage reservoir through the caprock. Then, we estimate the resulting pressure changes at the zone overlying the caprock also known as Above Zone Monitoring Interval (AZMI). A data-driven approach of arbitrary Polynomial Chaos (aPC) Expansion is then used to quantify the uncertainty in the above zone pressure prediction based on the uncertainties in different geologic parameters. Finally, a global sensitivity analysis is performed with Sobol indices based on the aPC technique to determine the relative importance of different parameters on pressure prediction. The results indicate that there can be uncertainty in pressure prediction locally around the leakage zones. The degree of such uncertainty in prediction depends on the quality of site specific information available for analysis. The scientific results from this study provide substantial insight that there is a need for site-specific data for efficient predictions of risks associated with storage activities. The presented approach can provide a basis of optimized pressure based monitoring network design at carbon storage sites. PMID:27996043
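
    The study derives Sobol indices from an arbitrary Polynomial Chaos surrogate; as a simpler stand-in, the sketch below estimates first-order Sobol indices by plain Monte Carlo (a pick-and-freeze estimator) for a toy pressure response. The surrogate, parameter names, and ranges are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def pressure(perm, poro, thick):
        """Toy above-zone pressure surrogate, not a reservoir simulator."""
        return 0.8 * perm + 0.3 * poro ** 2 + 0.1 * perm * thick

    n = 20_000
    A = rng.uniform(0, 1, size=(n, 3))      # base parameter sample
    B = rng.uniform(0, 1, size=(n, 3))      # independent resample
    yA, yB = pressure(*A.T), pressure(*B.T)
    var_y = np.concatenate([yA, yB]).var()

    for i, name in enumerate(["permeability", "porosity", "thickness"]):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                 # vary only parameter i
        Si = np.mean(yB * (pressure(*ABi.T) - yA)) / var_y   # first-order index
        print(name, round(Si, 2))
    ```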

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robichaud, R.; Fields, J.; Roberts, J. O.

    The U.S. Environmental Protection Agency (EPA) launched the RE-Powering America's Land initiative to encourage development of renewable energy (RE) on potentially contaminated land and mine sites. EPA is collaborating with the U.S. Department of Energy's (DOE's) National Renewable Energy Laboratory (NREL) to evaluate RE options at Naval Station (NAVSTA) Newport in Newport, Rhode Island, where multiple contaminated areas pose a threat to human health and the environment. Designated a Superfund site on the National Priorities List in 1989, the base is committed to working toward reducing its dependency on fossil fuels, decreasing its carbon footprint, and implementing RE projects where feasible. The Naval Facilities Engineering Service Center (NFESC) partnered with NREL in February 2009 to investigate the potential for wind energy generation at a number of Naval and Marine bases on the East Coast. NAVSTA Newport was one of several bases chosen for a detailed, site-specific wind resource investigation. NAVSTA Newport, in conjunction with NREL and NFESC, has been actively engaged in assessing the wind resource through several ongoing efforts. This report focuses on the wind resource assessment, the estimated energy production of wind turbines, and a survey of potential wind turbine options based upon the site-specific wind resource.

  13. Probabilistic Assessment of Above Zone Pressure Predictions at a Geologic Carbon Storage Site

    NASA Astrophysics Data System (ADS)

    Namhata, Argha; Oladyshkin, Sergey; Dilmore, Robert M.; Zhang, Liwei; Nakles, David V.

    2016-12-01

    Carbon dioxide (CO2) storage into geological formations is regarded as an important mitigation strategy for anthropogenic CO2 emissions to the atmosphere. This study first simulates the leakage of CO2 and brine from a storage reservoir through the caprock. Then, we estimate the resulting pressure changes at the zone overlying the caprock also known as Above Zone Monitoring Interval (AZMI). A data-driven approach of arbitrary Polynomial Chaos (aPC) Expansion is then used to quantify the uncertainty in the above zone pressure prediction based on the uncertainties in different geologic parameters. Finally, a global sensitivity analysis is performed with Sobol indices based on the aPC technique to determine the relative importance of different parameters on pressure prediction. The results indicate that there can be uncertainty in pressure prediction locally around the leakage zones. The degree of such uncertainty in prediction depends on the quality of site specific information available for analysis. The scientific results from this study provide substantial insight that there is a need for site-specific data for efficient predictions of risks associated with storage activities. The presented approach can provide a basis of optimized pressure based monitoring network design at carbon storage sites.

  14. Exploring Site-Specific N-Glycosylation Microheterogeneity of Haptoglobin using Glycopeptide CID Tandem Mass Spectra and Glycan Database Search

    PubMed Central

    Chandler, Kevin Brown; Pompach, Petr; Goldman, Radoslav

    2013-01-01

    Glycosylation is a common protein modification with a significant role in many vital cellular processes and human diseases, making the characterization of protein-attached glycan structures important for understanding cell biology and disease processes. Direct analysis of protein N-glycosylation by tandem mass spectrometry of glycopeptides promises site-specific elucidation of N-glycan microheterogeneity, something which detached N-glycan and de-glycosylated peptide analyses cannot provide. However, successful implementation of direct N-glycopeptide analysis by tandem mass spectrometry remains a challenge. In this work, we consider algorithmic techniques for the analysis of LC-MS/MS data acquired from glycopeptide-enriched fractions of enzymatic digests of purified proteins. We implement a computational strategy which takes advantage of the properties of CID fragmentation spectra of N-glycopeptides, matching the MS/MS spectra to peptide-glycan pairs from protein sequences and glycan structure databases. Significantly, we also propose a novel false-discovery-rate estimation technique to estimate and manage the number of false identifications. We use a human glycoprotein standard, haptoglobin, digested with trypsin and GluC, enriched for glycopeptides using HILIC chromatography, and analyzed by LC-MS/MS to demonstrate our algorithmic strategy and evaluate its performance. Our software, GlycoPeptideSearch (GPS), assigned glycopeptide identifications to 246 of the spectra at false-discovery-rate 5.58%, identifying 42 distinct haptoglobin peptide-glycan pairs at each of the four haptoglobin N-linked glycosylation sites. We further demonstrate the effectiveness of this approach by analyzing plasma-derived haptoglobin, identifying 136 N-linked glycopeptide spectra at false-discovery-rate 0.4%, representing 15 distinct glycopeptides on at least three of the four N-linked glycosylation sites. The software, GlycoPeptideSearch, is available for download from http://edwardslab.bmcb.georgetown.edu/GPS. PMID:23829323
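
    The paper's false-discovery-rate estimator is not reproduced here; as a generic stand-in, the sketch below shows the common decoy-based estimate, the ratio of decoy to target identifications above a score threshold, with made-up scores.

    ```python
    def decoy_fdr(target_scores, decoy_scores, threshold):
        """Generic decoy-based FDR at a score threshold:
        FDR ~ (decoy hits >= threshold) / (target hits >= threshold)."""
        t = sum(s >= threshold for s in target_scores)
        d = sum(s >= threshold for s in decoy_scores)
        return d / t if t else 0.0

    print(decoy_fdr([0.9, 0.8, 0.7, 0.4], [0.5, 0.3], threshold=0.45))   # 1/3
    ```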

  15. Consequence assessment for Airborne Releases of SO2 from the Y-12 Pilot Dechlorination Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pendergrass, W.R.

    The Atmospheric Turbulence and Diffusion Division was requested by the Department of Energy's Oak Ridge Operations Office to conduct a consequence assessment for potential atmospheric releases of SO{sub 2} from the Y-12 Pilot Dechlorination Facility. The focus of the assessment was to identify "worst-case" meteorology posing the highest concentration exposure potential for both on-site and off-site populations. A series of plausible SO{sub 2} release scenarios was provided by Y-12 for the consequence assessment. Each scenario was evaluated for predictions of downwind concentration, estimates of a five-minute time-weighted average, and an estimate of the dimensions of the puff. The highest hazard potential was associated with Scenario 1, in which a total of eight SO{sub 2} cylinders are released internally to the Pilot Facility and exhausted through the emergency venting system. A companion effort was also conducted to evaluate the potential impact of SO{sub 2} releases from the Pilot Facility on the population of Oak Ridge. While specific transport trajectory data are not available for the Pilot Facility, extrapolations based on the Oak Ridge Site Survey and climatological records from the Y-12 meteorological program do not indicate a potential for impact on the city of Oak Ridge. Steering by local topographical features severely limits the potential impact areas. Because of the lack of specific observational data, both tracer and meteorological, only inferences can be made concerning impact zones. It is recommended that the Department of Energy Oak Ridge Operations examine the potential for off-site impact and develop the background data needed to prepare impact zones for releases of hazardous materials from the Y-12 facility.

  16. Rapid assessment of forest canopy and light regime using smartphone hemispherical photography.

    PubMed

    Bianchi, Simone; Cahalan, Christine; Hale, Sophie; Gibbons, James Michael

    2017-12-01

    Hemispherical photography (HP), implemented with cameras equipped with "fisheye" lenses, is a widely used method for describing forest canopies and light regimes. A promising technological advance is the availability of low-cost fisheye lenses for smartphone cameras. However, smartphone camera sensors cannot record a full hemisphere. We investigate whether smartphone HP is a cheaper and faster but still adequate operational alternative to traditional cameras for describing forest canopies and light regimes. We collected hemispherical pictures with both smartphone and traditional cameras at 223 forest sample points, across different overstory species and canopy densities. The smartphone image acquisition followed a faster and simpler protocol than that for the traditional camera. We automatically thresholded all images. We processed the traditional camera images for Canopy Openness (CO) and Site Factor estimation. For smartphone images, we took two pictures with different orientations per point and used two processing protocols: (i) we estimated and averaged total canopy gap from the two single pictures, and (ii) we merged the two pictures to form images closer to full hemispheres and estimated CO and Site Factors from them. We compared the same parameters obtained from the different cameras and estimated generalized linear mixed models (GLMMs) between them. Total canopy gap estimated from the first processing protocol for smartphone pictures was on average significantly higher than CO estimated from traditional camera images, although with a consistent bias. Canopy Openness and Site Factors estimated from the merged smartphone pictures of the second processing protocol were on average significantly higher than those from traditional camera images, although with relatively small absolute differences and scatter. Smartphone HP is an acceptable alternative to HP using traditional cameras, providing similar results with a faster and cheaper methodology. Smartphone outputs can be used directly for ecological studies, or converted with specific models for a better comparison to traditional cameras.
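
    One processing step lends itself to a short sketch: computing the total canopy gap fraction from an already-thresholded (binary) hemispherical image, restricted to the circular image area. The array, its size, and the roughly 30% sky cover are placeholders; real images would first need thresholding and lens-projection handling.

        import numpy as np

        def gap_fraction(binary_sky):
            """Fraction of sky pixels (value 1) within the circular image area.

            binary_sky: 2-D array after thresholding, 1 = sky, 0 = canopy.
            """
            h, w = binary_sky.shape
            cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
            radius = min(h, w) / 2.0
            yy, xx = np.mgrid[0:h, 0:w]
            inside = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
            return binary_sky[inside].mean()

        # Synthetic 200 x 200 binary image with roughly 30% sky pixels.
        rng = np.random.default_rng(2)
        img = (rng.random((200, 200)) < 0.3).astype(int)
        print(round(gap_fraction(img), 3))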

  17. Exposure to benzene in a pooled analysis of petroleum industry case-control studies.

    PubMed

    Glass, D C; Schnatter, A R; Tang, G; Armstrong, T W; Rushton, L

    2017-11-01

    Cases of lymphohematopoietic cancer from three petroleum industry cohorts, matched to controls from the respective cohort, were pooled into a single study. Average benzene exposure was quantitatively estimated in ppm for each job based on measured data from the relevant country, adjusted for the specific time period, site, and job exposure characteristics, and the certainty of each exposure estimate was scored. The probability of dermal exposure and of peak exposure was also assessed. Before risk was examined, an exposure estimate comparison and rationalisation exercise was performed across the studies to ensure accuracy and consistency of approach. This article evaluates the final exposure estimates and their use in the risk assessments. Overall benzene exposure estimates were low: 90% of participants accumulated less than 20 ppm-years. Mean cumulative exposure was estimated as 5.15 ppm-years, mean duration was 22 years, and mean exposure intensity was 0.2 ppm. 46% of participants were allocated a peak exposure (>3 ppm at least weekly). 40% of participants had a high probability of dermal exposure (based on the relative probability of at least weekly exposure). There were differences in mean intensity of exposure, probability of peak, and/or dermal exposure associated with job category, job site, and decade of exposure. Terminal Operators handling benzene-containing products were the most highly exposed group, followed by Tanker Drivers carrying gasoline. Exposures were higher around 1940-1950 and lower in more recent decades. Overall confidence in the exposure estimates was highest for recently held jobs and for white-collar jobs. We used sensitivity analyses, which included and excluded case-sets on the basis of exposure certainty scores, to inform the risk assessment. The above analyses demonstrated that the different patterns of exposure across the three studies are largely attributable to differences in jobs, site types, and time frames rather than to the study of origin. This provides reassurance that the previous rationalisation of exposures achieved inter-study consistency and that the data could be confidently pooled.

  18. Near-infrared spectrometry allows fast and extensive predictions of functional traits from dry leaves and branches.

    PubMed

    Costa, Flávia R C; Lang, Carla; Almeida, Danilo R A; Castilho, Carolina V; Poorter, Lourens

    2018-05-16

    The linking of individual functional traits to ecosystem processes is the basis for making generalizations in ecology, but the measurement of individual values is laborious and time-consuming, preventing large-scale trait mapping. Also, in hyper-diverse systems, errors occur because identification is difficult, and species-level values ignore intra-specific variation. To allow extensive trait mapping at the individual level, we evaluated the potential of Fourier-transform near-infrared spectrometry (FT-NIR) to adequately describe 14 traits that are key for plant carbon, water, and nutrient balance. FT-NIR absorption spectra (1,000-2,500 nm) were obtained from dry leaves and branches of 1,324 trees of 432 species from a hyper-diverse Amazonian forest. FT-NIR spectra were related to measured traits for the same plants using partial least squares regressions. A further 80 plants were collected from a different site to evaluate model applicability across sites. Relative prediction error (RMSE_rel) was calculated as the percentage of the trait value range represented by the final model RMSE. The key traits used in most functional trait studies (specific leaf area, leaf dry matter content, wood density, and wood dry matter content) can be well predicted by the model (R2 = 0.69-0.78, RMSE_rel = 9-11%), while leaf density, xylem proportion, bark density, and bark dry matter content can be moderately well predicted (R2 = 0.53-0.61, RMSE_rel = 14-17%). Community-weighted means of all traits were well estimated with NIR, as was the shape of the frequency distribution of the community values for the above key traits. The model developed at the core site provided good estimations of the key traits of a different site. An evaluation of the sampling effort indicated that 400 or fewer individuals may be sufficient for establishing a good local model. We conclude that FT-NIR is an easy, fast, and cheap method for large-scale estimation of individual plant traits, which was previously impossible. The ability to use dry intact leaves and branches unlocks the potential for using herbarium material to estimate functional traits, thus advancing our knowledge of community and ecosystem functioning from local to global scales. © 2018 by the Ecological Society of America.
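
    The spectra-to-trait step is, in essence, a partial least squares regression of trait values on absorption spectra. The sketch below shows that workflow on synthetic data with scikit-learn; the band count, sample sizes, and the relative-RMSE summary (percentage of the trait range) are illustrative, not the study's data.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(3)

        # Synthetic stand-ins: 500 "leaves" x 300 spectral bands and one trait (e.g., SLA).
        n_samples, n_bands = 500, 300
        spectra = rng.normal(size=(n_samples, n_bands))
        trait = 2.0 * spectra[:, 40] - 1.5 * spectra[:, 180] + rng.normal(scale=0.5, size=n_samples)

        X_tr, X_te, y_tr, y_te = train_test_split(spectra, trait, test_size=0.25, random_state=0)
        pls = PLSRegression(n_components=10).fit(X_tr, y_tr)
        pred = pls.predict(X_te).ravel()

        r2 = 1 - np.sum((y_te - pred) ** 2) / np.sum((y_te - y_te.mean()) ** 2)
        rmse_rel = 100 * np.sqrt(np.mean((y_te - pred) ** 2)) / (y_te.max() - y_te.min())
        print(f"R2 = {r2:.2f}, RMSE_rel = {rmse_rel:.1f}% of trait range")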

  19. Using dynamic N-mixture models to test cavity limitation on northern flying squirrel demographic parameters using experimental nest box supplementation.

    PubMed

    Priol, Pauline; Mazerolle, Marc J; Imbeau, Louis; Drapeau, Pierre; Trudeau, Caroline; Ramière, Jessica

    2014-06-01

    Dynamic N-mixture models have been recently developed to estimate demographic parameters of unmarked individuals while accounting for imperfect detection. We propose an application of the Dail and Madsen (2011: Biometrics, 67, 577-587) dynamic N-mixture model in a manipulative experiment using a before-after control-impact design (BACI). Specifically, we tested the hypothesis of cavity limitation of a cavity specialist species, the northern flying squirrel, using nest box supplementation on half of 56 trapping sites. Our main purpose was to evaluate the impact of an increase in cavity availability on flying squirrel population dynamics in deciduous stands in northwestern Québec with the dynamic N-mixture model. We compared abundance estimates from this recent approach with those from classic capture-mark-recapture models and generalized linear models. We compared apparent survival estimates with those from Cormack-Jolly-Seber (CJS) models. Average recruitment rate was 6 individuals per site after 4 years. Nevertheless, we found no effect of cavity supplementation on apparent survival and recruitment rates of flying squirrels. Contrary to our expectations, initial abundance was not affected by conifer basal area (food availability) and was negatively affected by snag basal area (cavity availability). Northern flying squirrel population dynamics are not influenced by cavity availability at our deciduous sites. Consequently, we suggest that this species should not be considered an indicator of old forest attributes in our study area, especially in view of apparent wide population fluctuations across years. Abundance estimates from N-mixture models were similar to those from capture-mark-recapture models, although the latter had greater precision. Generalized linear mixed models produced lower abundance estimates, but revealed the same relationship between abundance and snag basal area. Apparent survival estimates from N-mixture models were higher and less precise than those from CJS models. However, N-mixture models can be particularly useful to evaluate management effects on animal populations, especially for species that are difficult to detect in situations where individuals cannot be uniquely identified. They also allow investigating the effects of covariates at the site level, when low recapture rates would require restricting classic CMR analyses to a subset of sites with the most captures.
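
    For readers unfamiliar with N-mixture models, the sketch below writes out the likelihood of the simpler static binomial N-mixture model (repeated counts at closed sites), which is the building block that the dynamic Dail-Madsen model extends; it is not the dynamic model itself, and the simulated dimensions and parameter values are arbitrary.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import binom, poisson

        def nmixture_negloglik(params, counts, n_max=50):
            """Negative log-likelihood of the static binomial N-mixture model.

            counts: sites x visits array of individuals counted on each visit.
            params: [log(lambda), logit(p)].
            """
            lam = np.exp(params[0])
            p = 1.0 / (1.0 + np.exp(-params[1]))
            N = np.arange(n_max + 1)
            prior = poisson.pmf(N, lam)                  # P(N) for each latent abundance
            ll = 0.0
            for y in counts:                             # marginalise over latent N per site
                lik_given_N = np.prod(binom.pmf(y[:, None], N[None, :], p), axis=0)
                ll += np.log(np.sum(prior * lik_given_N) + 1e-300)
            return -ll

        # Simulated data: 56 sites, 3 visits, true lambda = 4 and detection p = 0.4.
        rng = np.random.default_rng(4)
        N_true = rng.poisson(4, size=56)
        counts = rng.binomial(N_true[:, None], 0.4, size=(56, 3))

        fit = minimize(nmixture_negloglik, x0=[0.0, 0.0], args=(counts,), method="Nelder-Mead")
        print(np.exp(fit.x[0]), 1.0 / (1.0 + np.exp(-fit.x[1])))   # estimated lambda and p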

  20. Estimating site occupancy rates for aquatic plants using spatial sub-sampling designs when detection probabilities are less than one

    USGS Publications Warehouse

    Nielson, Ryan M.; Gray, Brian R.; McDonald, Lyman L.; Heglund, Patricia J.

    2011-01-01

    Estimation of site occupancy rates when detection probabilities are <1 is well established in wildlife science. Data from multiple visits to a sample of sites are used to estimate detection probabilities and the proportion of sites occupied by focal species. In this article we describe how site occupancy methods can be applied to estimate occupancy rates of plants and other sessile organisms. We illustrate this approach and the pitfalls of ignoring incomplete detection using spatial data for 2 aquatic vascular plants collected under the Upper Mississippi River's Long Term Resource Monitoring Program (LTRMP). Site occupancy models considered include: a naïve model that ignores incomplete detection, a simple site occupancy model assuming a constant occupancy rate and a constant probability of detection across sites, several models that allow site occupancy rates and probabilities of detection to vary with habitat characteristics, and mixture models that allow for unexplained variation in detection probabilities. We used information theoretic methods to rank competing models and bootstrapping to evaluate the goodness-of-fit of the final models. Results of our analysis confirm that ignoring incomplete detection can result in biased estimates of occupancy rates. Estimates of site occupancy rates for 2 aquatic plant species were 19–36% higher compared to naive estimates that ignored probabilities of detection <1. Simulations indicate that final models have little bias when 50 or more sites are sampled, and little gains in precision could be expected for sample sizes >300. We recommend applying site occupancy methods for monitoring presence of aquatic species.
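
    The core idea, estimating occupancy (psi) and detection (p) jointly from repeat-visit detection histories, can be written down compactly. The sketch below fits the simplest constant-psi, constant-p occupancy model by maximum likelihood on simulated data; the covariate and mixture extensions described in the article are not included, and all numbers are illustrative.

        import numpy as np
        from scipy.optimize import minimize

        def occupancy_negloglik(params, det_hist):
            """Negative log-likelihood of the constant-psi, constant-p occupancy model.

            det_hist: sites x visits binary detection histories (1 = detected).
            """
            psi = 1.0 / (1.0 + np.exp(-params[0]))
            p = 1.0 / (1.0 + np.exp(-params[1]))
            y = det_hist.sum(axis=1)
            k = det_hist.shape[1]
            # Sites with >= 1 detection are certainly occupied; all-zero sites are either
            # occupied but missed on every visit, or truly unoccupied.
            ll_detected = np.log(psi) + y * np.log(p) + (k - y) * np.log(1 - p)
            ll_allzero = np.log(psi * (1 - p) ** k + (1 - psi))
            return -np.sum(np.where(y > 0, ll_detected, ll_allzero))

        # Simulated data: 120 sites, 3 visits, true psi = 0.6 and p = 0.35.
        rng = np.random.default_rng(5)
        z = rng.random(120) < 0.6
        det = ((rng.random((120, 3)) < 0.35) & z[:, None]).astype(int)

        fit = minimize(occupancy_negloglik, x0=[0.0, 0.0], args=(det,), method="Nelder-Mead")
        print(1.0 / (1.0 + np.exp(-fit.x)))   # estimated psi and p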

  1. Assessing hypotheses about nesting site occupancy dynamics

    USGS Publications Warehouse

    Bled, Florent; Royle, J. Andrew; Cam, Emmanuelle

    2011-01-01

    Hypotheses about habitat selection developed in the evolutionary ecology framework assume that individuals, under some conditions, select breeding habitat based on expected fitness in different habitats. The relationship between habitat quality and fitness may be reflected by the breeding success of individuals, which may in turn be used to assess habitat quality. Habitat quality may also be assessed via local density: if high-quality sites are preferentially used, high density may reflect high-quality habitat. Here we assessed whether site occupancy dynamics vary with site surrogates for habitat quality. We modeled nest site use probability in a seabird subcolony (the Black-legged Kittiwake, Rissa tridactyla) over a 20-year period. We estimated site persistence (an occupied site remains occupied from time t to t + 1) and colonization through two subprocesses: first colonization (site creation at the timescale of the study) and recolonization (a site is colonized again after being deserted). Our model explicitly incorporated site-specific and neighboring breeding success and conspecific density in the neighborhood. Our results provided evidence that reproductively "successful" sites have a higher persistence probability than "unsuccessful" ones. Analyses of site fidelity in marked birds and of survival probability showed that high site persistence predominantly reflects site fidelity, not immediate colonization by new owners after emigration or death of previous owners. There is a negative quadratic relationship between local density and persistence probability. First colonization probability decreases with density, whereas recolonization probability is constant. This highlights the importance of distinguishing initial colonization from recolonization to understand site occupancy. All dynamics varied positively with neighboring breeding success. We found evidence of a positive interaction between site-specific and neighboring breeding success. We addressed local population dynamics using a site occupancy approach integrating hypotheses developed in behavioral ecology to account for individual decisions. This allows development of models of population and metapopulation dynamics that explicitly incorporate ecological and evolutionary processes.

  2. Estimating rates of local extinction and colonization in colonial species and an extension to the metapopulation and community levels

    USGS Publications Warehouse

    Barbraud, C.; Nichols, J.D.; Hines, J.E.; Hafner, H.

    2003-01-01

    Coloniality has mainly been studied from an evolutionary perspective, but relatively few studies have developed methods for modelling colony dynamics. Changes in number of colonies over time provide a useful tool for predicting and evaluating the responses of colonial species to management and to environmental disturbance. Probabilistic Markov process models have been recently used to estimate colony site dynamics using presence-absence data when all colonies are detected in sampling efforts. Here, we define and develop two general approaches for the modelling and analysis of colony dynamics for sampling situations in which all colonies are, and are not, detected. For both approaches, we develop a general probabilistic model for the data and then constrain model parameters based on various hypotheses about colony dynamics. We use Akaike's Information Criterion (AIC) to assess the adequacy of the constrained models. The models are parameterised with conditional probabilities of local colony site extinction and colonization. Presence-absence data arising from Pollock's robust capture-recapture design provide the basis for obtaining unbiased estimates of extinction, colonization, and detection probabilities when not all colonies are detected. This second approach should be particularly useful in situations where detection probabilities are heterogeneous among colony sites. The general methodology is illustrated using presence-absence data on two species of herons (Purple Heron, Ardea purpurea and Grey Heron, Ardea cinerea). Estimates of the extinction and colonization rates showed interspecific differences and strong temporal and spatial variations. We were also able to test specific predictions about colony dynamics based on ideas about habitat change and metapopulation dynamics. We recommend estimators based on probabilistic modelling for future work on colony dynamics. We also believe that this methodological framework has wide application to problems in animal ecology concerning metapopulation and community dynamics.

  3. Surface vapor conductance derived from the ETRHEQ: Dependence on environmental variables and similarity to Oren's stomatal stress model for vapor pressure deficit

    NASA Astrophysics Data System (ADS)

    Salvucci, G.; Rigden, A. J.

    2015-12-01

    Daily time series of evapotranspiration and surface conductance to water vapor were estimated using the ETRHEQ method (Evapotranspiration from Relative Humidity at Equilibrium). ETRHEQ has been previously compared with AmeriFlux site-level measurements of ET at daily and seasonal time scales, with watershed water balance estimates, and with various benchmark ET data sets. The ETRHEQ method uses meteorological data collected at common weather stations and estimates the surface conductance by minimizing the vertical variance of the calculated relative humidity profile averaged over the day. The key advantage of the ETRHEQ method is that it does not require knowledge of the surface state (soil moisture, stomatal conductance, leaf area index, etc.) or site-specific calibration. The daily estimates of conductance from 229 weather stations for 53 years were analyzed for dependence on environmental variables known to impact stomatal conductance and soil diffusivity: surface temperature, surface vapor pressure deficit, solar radiation, antecedent precipitation (as a surrogate for soil moisture), and a seasonal vegetation greenness index. At each site the summertime (JJAS) conductance values estimated from ETRHEQ were fitted to a multiplicative Jarvis-type stress model. Functional dependence was not prescribed, but instead fitted using flexible piecewise-linear splines. The resulting stress functions reproduce the time series of conductance across a wide range of ecosystems and climates. The VPD stress term resembles that proposed by Oren (i.e., 1 - m*log(VPD)), with VPD measured in kilopascals. The equivalent value of m derived from our spline fits at each station varied over a remarkably small range of 0.58 to 0.62, in agreement with Oren's original analysis based on leaf and tree-level measurements.
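
    The quoted VPD response has a simple closed form, g = g_ref * (1 - m * log(VPD)) with the natural logarithm, so m can be recovered by ordinary least squares in the predictor log(VPD). The sketch below does this on synthetic conductance data; the g_ref value, noise level, and VPD range are invented, and the study's spline fitting is not reproduced.

        import numpy as np

        rng = np.random.default_rng(6)

        # Synthetic daily data: VPD in kPa and conductance following Oren's form
        # g = g_ref * (1 - m * log(VPD)) with g_ref = 10 and m = 0.6, plus noise.
        vpd = rng.uniform(0.3, 3.0, 500)
        g = 10.0 * (1 - 0.6 * np.log(vpd)) + rng.normal(scale=0.5, size=500)

        # Linear least squares with g = a + b * log(VPD), so g_ref = a and m = -b / a.
        A = np.column_stack([np.ones_like(vpd), np.log(vpd)])
        a, b = np.linalg.lstsq(A, g, rcond=None)[0]
        print(f"g_ref = {a:.2f}, m = {-b / a:.2f}")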

  4. A Bayesian-Based Novel Methodology to Generate Reliable Site Response Mapping Sensitive to Data Uncertainties

    NASA Astrophysics Data System (ADS)

    Chakraborty, A.; Goto, H.

    2017-12-01

    The 2011 off the Pacific coast of Tohoku earthquake caused severe damage in many areas far inside the mainland because of site amplification. The Furukawa district in Miyagi Prefecture, Japan, recorded significant spatial differences in ground motion even at sub-kilometer scales. The site responses in the damage zone far exceeded the levels in the hazard maps. One reason for the mismatch is that maps follow only the mean value at the measurement locations, with no regard to the data uncertainties, and thus are not always reliable. Our research objective is to develop a methodology that incorporates data uncertainties in mapping and to propose a reliable map. The methodology is based on hierarchical Bayesian modeling of normally distributed site responses in space, where the mean (μ), site-specific variance (σ²), and between-sites variance (s²) parameters are treated as unknowns with prior distributions. The observation data are artificially created site responses with varying means and variances for 150 seismic events across 50 locations in one-dimensional space. Spatially autocorrelated random effects were added to the mean (μ) using a conditionally autoregressive (CAR) prior. Inferences on the unknown parameters are drawn from the posterior distribution using Markov chain Monte Carlo methods. The goal is to find reliable estimates of μ that are sensitive to uncertainties. During initial trials, we observed that the tau (= 1/s²) parameter of the CAR prior controls the estimation of μ. Using the constraint s = 1/(k×σ), five spatial models with varying k-values were created. We define reliability as measured by the model likelihood and propose the maximum-likelihood model as highly reliable. The model with maximum likelihood was selected using a 5-fold cross-validation technique. The results show that the maximum-likelihood model (μ*) follows the site-specific mean at low uncertainties and converges to the model mean at higher uncertainties (Fig. 1). This result is significant because it successfully incorporates the effect of data uncertainties in mapping. This approach can be applied to any research field that uses mapping techniques. The methodology is now being applied to real records from a very dense seismic network in the Furukawa district, Miyagi Prefecture, Japan, to generate a reliable map of the site responses.
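
    The qualitative behaviour reported here, trusting the site mean when its uncertainty is small and shrinking toward the regional mean when it is large, falls out of even the simplest normal-normal hierarchical model. The sketch below computes that precision-weighted posterior mean; it is not the full CAR-based spatial model, and all numbers are invented.

        import numpy as np

        def shrunken_site_mean(site_mean, site_se, regional_mean, regional_sd):
            """Posterior mean of a site response under a normal-normal hierarchical model.

            The estimate is a precision-weighted average of the observed site mean
            (weight 1/site_se^2) and the regional mean (weight 1/regional_sd^2).
            """
            w_site = 1.0 / site_se ** 2
            w_region = 1.0 / regional_sd ** 2
            return (w_site * site_mean + w_region * regional_mean) / (w_site + w_region)

        # Two hypothetical sites with the same observed amplification (2.0) but very
        # different measurement uncertainty; the regional mean is 1.2.
        precise = shrunken_site_mean(2.0, site_se=0.05, regional_mean=1.2, regional_sd=0.3)
        noisy = shrunken_site_mean(2.0, site_se=1.0, regional_mean=1.2, regional_sd=0.3)
        print(precise, noisy)   # the precise site stays near 2.0, the noisy one is pulled toward 1.2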

  5. RETROFIT COSTS FOR SO2 AND NOX CONTROL OPTIONS AT 200 COAL-FIRED PLANTS, VOLUME IV - SITE SPECIFIC STUDIES FOR MO, MS, NC, NH, NJ, NY, OH

    EPA Science Inventory

    The report gives results of a study, the objective of which was to significantly improve engineering cost estimates currently being used to evaluate the economic effects of applying SO2 and NOx controls at 200 large SO2-emitting coal-fired utility plants. To accomplish the object...

  6. RETROFIT COSTS FOR SO2 AND NOX CONTROL OPTIONS AT 200 COAL-FIRED PLANTS, VOLUME V - SITE SPECIFIC STUDIES FOR PA, SC, TN, VA, WI, WV

    EPA Science Inventory

    The report gives results of a study, the objective of which was to significantly improve engineering cost estimates currently being used to evaluate the economic effects of applying SO2 and NOx controls at 200 large SO2-emitting coal-fired utility plants. To accomplish the object...

  7. RETROFIT COSTS FOR SO2 AND NOX CONTROL OPTIONS AT 200 COAL-FIRED PLANTS, VOLUME II - SITE SPECIFIC STUDIES FOR AL, DE. FL, GA, IL

    EPA Science Inventory

    The report gives results of a study, the objective of which was to significantly improve engineering cost estimates currently being used to evaluate the economic effects of applying SO2 and NOx controls at 200 large SO2-emitting coal-fired utility plants. To accomplish the object...

  8. RETROFIT COSTS FOR SO2 AND NOX CONTROL OPTIONS AT 200 COAL-FIRED PLANTS, VOLUME III - SITE SPECIFIC STUDIES FOR IN, KY, MA, MD, MI, MN

    EPA Science Inventory

    The report gives results of a study, the objective of which was to significantly improve engineering cost estimates currently being used to evaluate the economic effects of applying SO2 and NOx controls at 200 large SO2-emitting coal-fired utility plants. To accomplish the object...

  9. Site-specific critical acid load estimates for forest soils in the Osborn Creek watershed, Michigan

    Treesearch

    Trevor Hobbs; Jason Lynch; Randy Kolka

    2017-01-01

    Anthropogenic acid deposition has the potential to accelerate leaching of soil cations, and in turn, deplete nutrients essential to forest vegetation. The critical load concept, employing a simple mass balance (SMB) approach, is often used to model this process. In an evaluation under the U.S. Forest Service Watershed Condition Framework program, soils in all 6th level...

  10. Haditha General Hospital Under the Economic Support Fund Program Haditha, Iraq

    DTIC Science & Technology

    2009-06-23

    disease from the use of the restrooms. Photos 10 and 11. Heart monitors and defibrillator machines (left) and standing...350 kilometers west of Baghdad, Haditha is a river-side community with an estimated population of 150,000. The hospital, located in the heart of...medical equipment requiring electricity; specifically, several heart monitors and defibrillator machines (Site Photo 10). This equipment appeared

  11. The United Kingdom SATMaP program

    NASA Technical Reports Server (NTRS)

    Towshend, J. R.; Cushnie, J.; Atkinson, P.; Hardy, J. R.; Wilson, A.; Harrison, A.; Baker, J. R.; Jackson, M.

    1983-01-01

    Data from test tapes from the United States (specifically the August Arkansas scene) and the first tape of the UK test site, which came from ESRIN, are analyzed. Methods for estimating spatial resolution are discussed, and some preliminary results are included. The characteristics of the ESRIN data are examined, and the utility of the various spectral bands of the Thematic Mapper for land cover mapping is outlined.

  12. Mapping migratory flyways in Asia using dynamic Brownian bridge movement models

    USGS Publications Warehouse

    Palm, E.C.; Newman, S.H.; Prosser, Diann J.; Xiao, Xiangming; Luo, Ze; Batbayar, Nyambayar; Balachandran, Sivananinthaperumal; Takekawa, John Y.

    2015-01-01

    The dynamic Brownian bridge movement model improves our understanding of flyways by estimating relative use of regions in the flyway while providing detailed, quantitative information on migration timing and population connectivity including uncertainty between locations. This model effectively quantifies the relative importance of different migration corridors and stopover sites and may help prioritize specific areas in flyways for conservation of waterbird populations.

  13. Site-specific thromboembolism: a novel animal model for stroke.

    PubMed

    Ringer, Andrew J; Guterman, Lee R; Hopkins, L Nelson

    2004-02-01

    To develop a technique for site-specific placement of a thrombus of predetermined volume in an animal model for the purpose of evaluating methods of intravascular thrombolysis and clot retrieval. Six swine were subjected to thrombus injection bilaterally in the ascending pharyngeal artery (APA). Each animal underwent transfemoral angiography while under general anesthesia. A nondetachable balloon catheter and a 3-French microcatheter were then advanced into the common carotid artery through a 7-French guide catheter. With the microcatheter in the proximal APA and the balloon inflated proximally, a bolus of preformed thrombus composed of 0.9 mL of autologous blood and 0.1 mL of bovine thrombin (200 IU/mL) was injected through the microcatheter while local flow arrest was maintained for 15 min. The balloon was deflated and removed. The occluded arteries were observed by serial angiography for 3 hr and then resected for gross examination and hematoxylin and eosin staining. Each APA was occluded angiographically and did not recanalize during the 3-hr observation period. Persistent, proximal progression of thrombus to the superior thyroid artery origin occurred in three animals. Gross inspection revealed that the resected arteries contained thrombus in the proximal APA but not in the common carotid artery. Histologic examination revealed organized thrombus, without evidence of intimal injury. Our model provides a simple, reliable method for site-specific injection of a thrombus of predetermined volume. Site-specific placement is important for evaluation of the efficacy of thrombolytic agents and techniques. Angiographic evidence of brain revascularization can be used to grade revascularization and clot volume. The ability to specifically localize and estimate clot volume makes our model well suited for the evaluation and comparison of thrombolytic agents and endovascular techniques.

  14. Identification and correction of systematic error in high-throughput sequence data

    PubMed Central

    2011-01-01

    Background A feature common to all DNA sequencing technologies is the presence of base-call errors in the sequenced reads. The implications of such errors are application specific, ranging from minor informatics nuisances to major problems affecting biological inferences. Recently developed "next-gen" sequencing technologies have greatly reduced the cost of sequencing, but have been shown to be more error prone than previous technologies. Both position specific (depending on the location in the read) and sequence specific (depending on the sequence in the read) errors have been identified in Illumina and Life Technology sequencing platforms. We describe a new type of systematic error that manifests as statistically unlikely accumulations of errors at specific genome (or transcriptome) locations. Results We characterize and describe systematic errors using overlapping paired reads from high-coverage data. We show that such errors occur in approximately 1 in 1000 base pairs, and that they are highly replicable across experiments. We identify motifs that are frequent at systematic error sites, and describe a classifier that distinguishes heterozygous sites from systematic error. Our classifier is designed to accommodate data from experiments in which the allele frequencies at heterozygous sites are not necessarily 0.5 (such as in the case of RNA-Seq), and can be used with single-end datasets. Conclusions Systematic errors can easily be mistaken for heterozygous sites in individuals, or for SNPs in population analyses. Systematic errors are particularly problematic in low coverage experiments, or in estimates of allele-specific expression from RNA-Seq data. Our characterization of systematic error has allowed us to develop a program, called SysCall, for identifying and correcting such errors. We conclude that correction of systematic errors is important to consider in the design and interpretation of high-throughput sequencing experiments. PMID:22099972

  15. Plastic surgeons’ self-reported operative infection rates at a Canadian academic hospital

    PubMed Central

    Ng, Wendy KY; Kaur, Manraj Nirmal; Thoma, Achilleas

    2014-01-01

    BACKGROUND: Surgical site infection rates are of great interest to patients, surgeons, hospitals and third-party payers. While previous studies have reported hospital-acquired infection rates aggregated across all surgical services, no overall infection rates focusing specifically on plastic surgery have been reported in the literature. OBJECTIVE: To estimate the reported surgical site infection rate in plastic surgery procedures over a 10-year period at an academic hospital in Canada. METHODS: A review was conducted on reported plastic surgery surgical site infection rates from 2003 to 2013, based on procedures performed in the main operating room. For comparison, prospective infection surveillance data over an eight-year period (2005 to 2013) for nonplastic surgery procedures were reviewed to estimate the overall operative surgical site infection rates. RESULTS: A total of 12,183 plastic surgery operations were performed from 2003 to 2013, with 96 surgical site infections reported, corresponding to a net operative infection rate of 0.79%. There was a 0.49% surgeon-reported infection rate for implant-based procedures. For nonplastic surgery procedures, surgical site infection rates ranged from 0.04% for cataract surgery to 13.36% for high-risk abdominal hysterectomies. DISCUSSION: The plastic surgery infection rate at the study institution was found to be <1%. This rate was equal to, or somewhat less than, the surgical site infection rates reported for other surgical services. However, these results do not report patterns of infection rates germane to procedures, season, age groups, or sex. To provide more in-depth knowledge of this topic, multicentre studies should be conducted. PMID:25535460

  16. Great horned owl (Bubo virginianus) dietary exposure to PCDD/DF in the Tittabawassee River floodplain in Midland, Michigan, USA.

    PubMed

    Coefield, Sarah J; Zwiernik, Matthew J; Fredricks, Timothy B; Seston, Rita M; Nadeau, Michael W; Tazelaar, Dustin L; Moore, Jeremy N; Kay, Denise P; Roark, Shaun A; Giesy, John P

    2010-10-01

    Soils and sediments in the floodplain of the Tittabawassee River downstream of Midland, Michigan, USA, contain elevated concentrations of polychlorinated dibenzofurans (PCDF) and polychlorinated dibenzo-p-dioxins (PCDD). As a long-lived, resident top predator, the great horned owl (Bubo virginianus; GHO) has the potential to be exposed to bioaccumulative compounds such as PCDD/DF. Site-specific components of the GHO diet were collected along 115 km of the Tittabawassee, Pine, Chippewa, and Saginaw Rivers during 2005 and 2006. The site-specific GHO biomass-based diet was dominated by cottontail rabbits (Sylvilagus floridanus) and muskrats (Ondatra zibethicus). Incidental soil ingestion and cottontail rabbits were the primary contributors of PCDD/DF to the GHO diet. The great horned owl daily dietary exposure estimates were greater in the study area (SA) (3.3 to 5.0 ng 2,3,7,8-TCDD equivalents (TEQ(WHO-Avian))/kg body wt/d) than in the reference area (RA) (0.07 ng TEQ(WHO-Avian)/kg body wt/d). Hazard quotients (HQs) based on central tendency estimates of the average daily dose and the no-observable-adverse-effect level (NOAEL) for the screech owl and uncertainty factors were <1.0 for both the RA and the SA. Hazard quotients based on upper-end estimates of the average daily dose and NOAEL were <1.0 in the RA and up to 3.4 in the SA. Environ. Toxicol. Chem. 2010;29:2350-2362. © 2010 SETAC.
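
    The hazard quotient arithmetic behind these comparisons is straightforward: the estimated daily dose is divided by a toxicity reference value derived from the NOAEL and any uncertainty factors. The sketch below uses placeholder toxicity values, not the study's actual reference values.

        def hazard_quotient(daily_dose, noael, uncertainty_factor=1.0):
            """HQ = estimated daily dose / (NOAEL-based toxicity reference value)."""
            trv = noael / uncertainty_factor
            return daily_dose / trv

        # Illustrative only: an upper-end dose of 5.0 ng TEQ/kg body wt/d against a
        # hypothetical NOAEL of 14 ng TEQ/kg body wt/d with an uncertainty factor of 10.
        print(hazard_quotient(5.0, noael=14.0, uncertainty_factor=10.0))   # about 3.6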

  17. Burden of disease from toxic waste sites in India, Indonesia, and the Philippines in 2010.

    PubMed

    Chatham-Stephens, Kevin; Caravanos, Jack; Ericson, Bret; Sunga-Amparo, Jennifer; Susilorini, Budi; Sharma, Promila; Landrigan, Philip J; Fuller, Richard

    2013-07-01

    Prior calculations of the burden of disease from toxic exposures have not included estimates of the burden from toxic waste sites due to the absence of exposure data. We developed a disability-adjusted life year (DALY)-based estimate of the disease burden attributable to toxic waste sites. We focused on three low- and middle-income countries (LMICs): India, Indonesia, and the Philippines. Sites were identified through the Blacksmith Institute's Toxic Sites Identification Program, a global effort to identify waste sites in LMICs. At least one of eight toxic chemicals was sampled in environmental media at each site, and the population at risk estimated. By combining estimates of disease incidence from these exposures with population data, we calculated the DALYs attributable to exposures at each site. We estimated that in 2010, 8,629,750 persons were at risk of exposure to industrial pollutants at 373 toxic waste sites in the three countries, and that these exposures resulted in 828,722 DALYs, with a range of 814,934-1,557,121 DALYs, depending on the weighting factor used. This disease burden is comparable to estimated burdens for outdoor air pollution (1,448,612 DALYs) and malaria (725,000 DALYs) in these countries. Lead and hexavalent chromium collectively accounted for 99.2% of the total DALYs for the chemicals evaluated. Toxic waste sites are responsible for a significant burden of disease in LMICs. Although some factors, such as unidentified and unscreened sites, may cause our estimate to be an underestimate of the actual burden of disease, other factors, such as extrapolation of environmental sampling to the entire exposed population, may result in an overestimate of the burden of disease attributable to these sites. Toxic waste sites are a major, and heretofore underrecognized, global health problem.

  18. NESHAP Dose-Release Factor Isopleths for Five Source-to-Receptor Distances from the Center of Site and H-Area for all Compass Sectors at SRS using CAP88-PC Version 4.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trimor, P.

    The Environmental Protection Agency (EPA) requires the use of the computer model CAP88-PC to estimate the total effective doses (TED) for demonstrating compliance with 40 CFR 61, Subpart H (EPA 2006), the National Emission Standards for Hazardous Air Pollutants (NESHAP) regulations. As such, CAP88 Version 4.0 was used to calculate the receptor dose due to routine atmospheric releases at the Savannah River Site (SRS). To support these estimates, NESHAP dose-release factors (DRFs) have been supplied to Environmental Compliance and Area Closure Projects (EC&ACP) for many years. DRFs represent the dose to a maximum receptor exposed to 1 Ci of a specified radionuclide being released into the atmosphere. They are periodically updated to include changes in the CAP88 version, input parameter values, site meteorology, and location of the maximally exposed individual (MEI). This report presents the DRFs of tritium oxide released at two onsite locations, center-of-site (COS) and H-Area, at 0 ft. elevation to maximally exposed individuals (MEIs) located 1000, 3000, 6000, 9000, and 12000 meters from the release areas for 16 compass sectors. The analysis makes use of area-specific meteorological data (Viner 2014).

  19. Emissions from prescribed fires in temperate forest in south-east Australia: implications for carbon accounting

    NASA Astrophysics Data System (ADS)

    Possell, M.; Jenkins, M.; Bell, T. L.; Adams, M. A.

    2015-01-01

    We estimated emissions of carbon, as equivalent CO2 (CO2e), from planned fires in four sites in a south-eastern Australian forest. Emission estimates were calculated using measurements of fuel load and carbon content of different fuel types, before and after burning, and determination of fuel-specific emission factors. Median estimates of emissions for the four sites ranged from 20 to 139 Mg CO2e/ha. Variability in estimates was a consequence of the different burning efficiencies of each fuel type at the four sites. Higher emissions resulted from more fine fuel (twigs, decomposing matter, near-surface live and leaf litter) or coarse woody debris (CWD; > 25 mm diameter) being consumed. In order to assess the effect of declining information quantity and the inclusion of coarse woody debris when estimating emissions, Monte Carlo simulations were used to create seven scenarios where input parameter values were replaced by probability density functions. Calculation methods were (1) all measured data were constrained between measured maximum and minimum values for each variable; (2) as in (1) except the proportion of carbon within a fuel type was constrained between 0 and 1; (3) as in (2) but losses of mass caused by fire were replaced with burning efficiency factors constrained between 0 and 1; and (4) emissions were calculated using default values in the Australian National Greenhouse Accounts (NGA), National Inventory Report 2011, as appropriate for our sites. Effects of including CWD in calculations were assessed for calculation Methods 1, 2, and 3 but not for Method 4, as the NGA does not consider this fuel type. Simulations demonstrate that the probability of estimating true median emissions declines strongly as the amount of information available declines. Including CWD in scenarios increased uncertainty in calculations because CWD is the most variable contributor to fuel load. Inclusion of CWD in scenarios generally increased the amount of carbon lost. We discuss implications of these simulations and how emission estimates from prescribed burns in temperate Australian forests could be improved.
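
    The scenario calculations amount to propagating distributions for fuel consumption and carbon content through a simple product, CO2e = fuel consumed x carbon fraction x 44/12. The Monte Carlo sketch below does this for one hypothetical site; the distributions are invented, and non-CO2 greenhouse gases are ignored for brevity.

        import numpy as np

        rng = np.random.default_rng(7)
        n = 100_000

        # Invented per-hectare input distributions for a single site.
        fine_fuel_consumed = rng.uniform(5, 15, n)      # Mg dry matter / ha
        cwd_consumed = rng.uniform(0, 40, n)            # coarse woody debris, highly variable
        carbon_fraction = rng.uniform(0.45, 0.55, n)    # proportion of dry mass that is carbon
        co2_per_c = 44.0 / 12.0                         # mass of CO2 per unit mass of carbon

        emissions = (fine_fuel_consumed + cwd_consumed) * carbon_fraction * co2_per_c
        print(np.percentile(emissions, [2.5, 50, 97.5]))   # Mg CO2e / ha with uncertainty bounds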

  20. Estimates of inorganic nitrogen wet deposition from precipitation for the conterminous United States, 1955-84

    USGS Publications Warehouse

    Gronberg, Jo Ann M.; Ludtke, Amy S.; Knifong, Donna L.

    2014-01-01

    The U.S. Geological Survey’s National Water-Quality Assessment program requires nutrient input information for analysis of national and regional assessment of water quality. Historical data are needed to lengthen the data record for assessment of trends in water quality. This report provides estimates of inorganic nitrogen deposition from precipitation for the conterminous United States for 1955–56, 1961–65, and 1981–84. The estimates were derived from ammonium, nitrate, and inorganic nitrogen concentrations in atmospheric wet deposition and precipitation-depth data. This report documents the sources of these data and the methods that were used to estimate the inorganic nitrogen deposition. Tabular datasets, including the analytical results, precipitation depth, and calculated site-specific precipitation-weighted concentrations, and raster datasets of nitrogen from wet deposition are provided as appendixes in this report.
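
    The site-specific precipitation-weighted concentration mentioned here is a depth-weighted average, sum(c_i * p_i) / sum(p_i), and wet deposition follows from concentration times precipitation depth. The sketch below shows both calculations with made-up weekly samples.

        import numpy as np

        def precip_weighted_mean(concentrations, depths):
            """Precipitation-weighted mean concentration: sum(c_i * p_i) / sum(p_i)."""
            c = np.asarray(concentrations, dtype=float)
            p = np.asarray(depths, dtype=float)
            return np.sum(c * p) / np.sum(p)

        def wet_deposition(concentrations, depths):
            """Deposition (kg/ha) from concentrations in mg/L and precipitation depth in mm.

            1 mg/L deposited by 1 mm of rain over 1 ha is 0.01 kg/ha.
            """
            c = np.asarray(concentrations, dtype=float)
            p = np.asarray(depths, dtype=float)
            return 0.01 * np.sum(c * p)

        # Hypothetical weekly samples at one site: nitrate-N concentration and rainfall depth.
        conc = [0.4, 0.7, 0.2, 0.5]    # mg N / L
        rain = [12.0, 5.0, 30.0, 8.0]  # mm
        print(precip_weighted_mean(conc, rain), wet_deposition(conc, rain))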

  1. Comparison of unmanned aircraft systems (UAS) to LiDAR for streambank erosion measurement at the site-specific and river network scales

    NASA Astrophysics Data System (ADS)

    Hamshaw, S. D.; Dewoolkar, M. M.; Rizzo, D.; ONeil-Dunne, J.; Frolik, J.

    2016-12-01

    Measurement of rates and extent of streambank erosion along river corridors is an important component of many catchment studies and necessary for engineering projects such as river restoration, hazard assessment, and total maximum daily load (TMDL) development. A variety of methods have been developed to quantify streambank erosion, including bank pins, ground surveys, photogrammetry, LiDAR, and analytical models. However, these methods are not only resource-intensive, but many are feasible and appropriate only for site-specific studies and not practical for erosion estimates at larger scales. Recent advancements in unmanned aircraft systems (UAS) and photogrammetry software provide capabilities for more rapid and economical quantification of streambank erosion and deposition at multiple scales (from site-specific to river network). At the site-specific scale, the capability of UAS to quantify streambank erosion was compared to terrestrial laser scanning (TLS) and RTK-GPS ground survey and assessed at seven streambank monitoring sites in central Vermont. Across all sites, the UAS-derived bank topography had mean errors of 0.21 m compared to TLS and GPS data. The highest accuracies were achieved in early spring conditions, where mean errors approached 10 cm. The cross-sectional area of bank erosion at a typical, vegetated streambank site was found to be reliably calculated within 10% of the actual value for erosion areas greater than 3.5 m2. At the river-network scale, 20 km of river corridor along the New Haven, Winooski, and Mad Rivers was flown on multiple dates with UAS and used to generate digital elevation models (DEMs) that were then compared for change detection analysis. Airborne LiDAR data collected prior to the UAS surveys were also compared to UAS data to determine multi-year rates of bank erosion. UAS-based photogrammetry for generation of fine-scale topographic data shows promise for the monitoring of streambank erosion at both the individual-site and river-network scales in areas that are not densely covered with vegetation year-round.
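
    Change detection between survey dates reduces to differencing co-registered DEMs and summing cell volumes above a level-of-detection threshold. The sketch below illustrates that on toy surfaces; the cell size, threshold, and grids are placeholders rather than the study's data.

        import numpy as np

        def bank_erosion_volume(dem_before, dem_after, cell_size, threshold=0.2):
            """Erosion volume (m^3) from two co-registered DEMs of a streambank.

            Cells count only where the elevation drop exceeds a level-of-detection
            threshold (m), which suppresses noise in the photogrammetric surfaces.
            """
            diff = dem_before - dem_after                 # positive where material was lost
            eroded = np.where(diff > threshold, diff, 0.0)
            return float(np.sum(eroded)) * cell_size ** 2

        # Toy surfaces: a 100 x 100 grid of 0.1 m cells with 0.5 m of loss over one corner.
        before = np.full((100, 100), 10.0)
        after = before.copy()
        after[:30, :40] -= 0.5
        print(bank_erosion_volume(before, after, cell_size=0.1))   # 30 * 40 * 0.5 * 0.01 = 6 m^3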

  2. Evapotranspiration from areas of native vegetation in west-central Florida

    USGS Publications Warehouse

    Bidlake, W.R.; Woodham, W.M.; Lopez, M.A.

    1993-01-01

    A study was made to examine the suitability of three different micrometeorological methods for estimating evapotranspiration from selected areas of native vegetation in west-central Florida and to estimate annual evapotranspiration from those areas. Evapotranspiration was estimated using the energy-balance Bowen ratio and eddy correlation methods. Potential evapotranspiration was computed using the Penman equation. The energy-balance Bowen ratio method was used to estimate diurnal evapotranspiration at unforested sites and yielded reasonable results; however, measurements indicated that the magnitudes of air temperature and vapor-pressure gradients above the forested sites were too small to obtain reliable evapotranspiration measurements with the energy-balance Bowen ratio system. Analysis of the surface energy balance indicated that sensible and latent heat fluxes computed using standard eddy correlation computation methods did not adequately account for available energy. Eddy correlation data were combined with the equation for the surface energy balance to yield two additional estimates of evapotranspiration. Daily potential evapotranspiration and evapotranspiration estimated using the energy-balance Bowen ratio method were not correlated at an unforested, dry prairie site, but they were correlated at a marsh site. Estimates of annual evapotranspiration for sites within the four vegetation types, which were based on energy-balance Bowen ratio and eddy correlation measurements, were 1,010 millimeters for dry prairie sites, 990 millimeters for marsh sites, 1,060 millimeters for pine flatwood sites, and 970 millimeters for a cypress swamp site.
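
    The energy-balance Bowen ratio method referred to here partitions available energy using gradient measurements: beta = gamma * dT / de and LE = (Rn - G) / (1 + beta). The sketch below applies those two formulas with invented midday values; it is a single-time-step illustration, not the field procedure.

        def bowen_ratio_latent_heat(rn, g, dt, de, gamma=0.066):
            """Latent heat flux (W/m^2) from the energy-balance Bowen ratio method.

            rn, g  : net radiation and soil heat flux (W/m^2)
            dt, de : air temperature (deg C) and vapor pressure (kPa) differences
                     between two measurement heights
            gamma  : psychrometric constant (kPa per deg C)
            """
            beta = gamma * dt / de
            return (rn - g) / (1.0 + beta)

        # Midday example with invented gradients.
        le = bowen_ratio_latent_heat(rn=550.0, g=60.0, dt=0.8, de=0.12)
        # 1 W/m^2 sustained for a day is roughly 0.035 mm/day of evaporation at 2.45 MJ/kg.
        print(le, le * 0.0353)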

  3. Comprehensive Characterization of a Tidal Energy Site (Invited)

    NASA Astrophysics Data System (ADS)

    Polagye, B. L.; Thomson, J. M.; Bassett, C. S.; Epler, J.; Northwest National Marine Renewable Energy Center

    2010-12-01

    Northern Admiralty Inlet, Puget Sound, Washington, is the proposed location of a pilot tidal energy project. Site-specific characterization of the physical and biological environment is required for device engineering and environmental analysis. However, the deep water and strong currents which make the site attractive for tidal energy development also pose unique challenges to collecting comprehensive information. This talk focuses on efforts to optimally site hydrokinetic turbines and estimate their acoustic impact, based on 18 months of field data collected to date. Additional characterization efforts being undertaken by the University of Washington branch of the Northwest National Marine Renewable Energy Center and its partners include marine mammal presence and behavior, water quality, seabed geology, and biofouling potential. Because kinetic power density varies with the cube of horizontal current velocity, an accurate map of spatial current variations is required to optimally site hydrokinetic turbines. Acoustic Doppler profilers deployed on the seabed show operationally meaningful variations in flow characteristics (e.g., power density, directionality, vertical shear) and tidal harmonic constituents over length scales of less than 100 m. This is attributed, in part, to the proximity of the site to a headland. Because of these variations, interpolation between stationary measurement locations introduces potentially high uncertainty. The use of shipboard acoustic Doppler profilers is shown to be an effective tool for mapping peak currents and, combined with information from seabed profilers, may be able to resolve power density variations in the project area. Because noise levels from operating turbines are expected to exceed regulatory thresholds for incidental harassment of marine mammals known to be present in the project area, an estimate of the acoustic footprint is required to permit the pilot project. This requires site-specific descriptions of pre-existing ambient noise levels and the transmission loss (or practical spreading) at frequencies of interest. Recording hydrophones deployed on the seabed are used to quantify ambient noise, but are contaminated by self-noise during periods of strong currents. An empirical estimate of transmission loss is obtained from a source of opportunity, a passenger ferry which operates for more than twelve hours each day. By comparing recorded sound pressure levels against the locations of the passenger ferry and other vessels (logged by an AIS receiver), the empirical transmission loss and source level for the ferry are obtained. Measurements of current velocity and underwater noise can be made with routine oceanographic instruments and techniques. Other measurements will be more challenging, such as high-resolution sampling of current structure upstream and downstream of an operating device tens of meters off the seabed. Innovative approaches are required for cost-effective characterization of tidal energy sites and monitoring of operating projects.
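
    The cube-law dependence mentioned above is why small spatial differences in current speed matter so much for siting. The sketch below computes mean kinetic power density, P = 0.5 * rho * v^3, for two hypothetical profiler locations; the velocity series are synthetic.

        import numpy as np

        def mean_power_density(speeds, rho=1025.0):
            """Mean kinetic power density (W/m^2) of a current speed series: 0.5 * rho * v^3."""
            v = np.asarray(speeds, dtype=float)
            return 0.5 * rho * np.mean(v ** 3)

        # Two hypothetical profiler locations about 100 m apart with slightly different speeds.
        rng = np.random.default_rng(8)
        site_a = np.abs(rng.normal(1.5, 0.8, 10_000))   # m/s
        site_b = 1.10 * site_a                          # 10% faster currents
        print(mean_power_density(site_a), mean_power_density(site_b))
        # A 10% difference in speed yields about 33% more power density (1.1**3 = 1.331).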

  4. Evaluation of Fuzzy-Logic Framework for Spatial Statistics Preserving Methods for Estimation of Missing Precipitation Data

    NASA Astrophysics Data System (ADS)

    El Sharif, H.; Teegavarapu, R. S.

    2012-12-01

    Spatial interpolation methods used to estimate missing precipitation data at a site are seldom checked for their ability to preserve site and regional statistics. Such statistics are primarily defined by spatial correlations and other site-to-site statistics in a region. Preservation of site and regional statistics represents a means of assessing the validity of missing precipitation estimates at a site. This study evaluates the efficacy of a fuzzy-logic methodology for infilling missing historical daily precipitation data in preserving site and regional statistics. Rain gauge sites in the state of Kentucky, USA, are used as a case study for evaluation of this newly proposed method in comparison to traditional data infilling techniques. Several error and performance measures are used to evaluate the methods and the trade-offs between accuracy of estimation and preservation of site and regional statistics.

  5. Use of USLE/GIS methodology for predicting soil loss in a semiarid agricultural watershed.

    PubMed

    Erdogan, Emrah H; Erpul, Günay; Bayramin, Ilhami

    2007-08-01

    The Universal Soil Loss Equation (USLE) is an erosion model for estimating the average soil loss that would generally result from splash, sheet, and rill erosion on agricultural plots. Recently, the use of USLE has been extended into a useful tool for predicting soil losses and planning control practices in agricultural watersheds through the effective integration of GIS-based procedures that estimate the factor values on a grid-cell basis. This study was performed in the Kazan Watershed, located in central Anatolia, Turkey, to predict soil erosion risk with the USLE/GIS methodology for planning conservation measures at the site. Rain erosivity (R), soil erodibility (K), and cover management factor (C) values of the model were calculated from the erosivity map, soil map, and land use map of Turkey, respectively. R values were site-specifically corrected using DEM and climatic data. The topographical and hydrological effects on soil loss were characterized by the LS factor, evaluated with the flow accumulation tool using the DEM and watershed delineation techniques. From the resulting soil loss map of the watershed, the magnitude of soil erosion was estimated for the different soil units and land uses, and the most erosion-prone areas, where irreversible soil losses occur, were located within the Kazan watershed. This could be very useful for selecting restoration practices to control soil erosion at the most severely affected sites.
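
    The USLE itself is a per-cell product of factor layers, A = R * K * LS * C * P, which is why it maps naturally onto GIS rasters. The sketch below evaluates that product on toy numpy arrays standing in for the factor maps; all values are illustrative, not those of the Kazan watershed.

        import numpy as np

        def usle_soil_loss(r, k, ls, c, p):
            """Average annual soil loss A (t/ha/yr) per grid cell: A = R * K * LS * C * P."""
            return r * k * ls * c * p

        # Toy 3 x 3 rasters standing in for the GIS factor layers of a small watershed.
        R = np.full((3, 3), 600.0)                       # rainfall erosivity (MJ mm / (ha h yr))
        K = np.array([[0.02, 0.03, 0.03],
                      [0.02, 0.04, 0.05],
                      [0.03, 0.04, 0.04]])               # soil erodibility (t ha h / (ha MJ mm))
        LS = np.array([[0.5, 1.2, 2.0],
                       [0.8, 1.5, 3.5],
                       [0.6, 1.0, 2.5]])                 # slope length-steepness factor (-)
        C = np.full((3, 3), 0.2)                         # cover management factor (-)
        P = np.ones((3, 3))                              # support practice factor (-)

        A = usle_soil_loss(R, K, LS, C, P)
        print(A.round(1))          # per-cell soil loss
        print(A.mean().round(1))   # simple watershed average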

  6. Transient Inverse Calibration of Site-Wide Groundwater Model to Hanford Operational Impacts from 1943 to 1996--Alternative Conceptual Model Considering Interaction with Uppermost Basalt Confined Aquifer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vermeul, Vincent R.; Cole, Charles R.; Bergeron, Marcel P.

    2001-08-29

    The baseline three-dimensional transient inverse model for the estimation of site-wide scale flow parameters, including their uncertainties, using data on the transient behavior of the unconfined aquifer system over the entire historical period of Hanford operations, has been modified to account for the effects of basalt intercommunication between the Hanford unconfined aquifer and the underlying upper basalt confined aquifer. Both the baseline and alternative conceptual models (ACM-1) considered only the groundwater flow component and corresponding observational data in the 3-D transient inverse calibration efforts. Subsequent efforts will examine both groundwater flow and transport. Comparisons of goodness-of-fit measures and parameter estimation results for the ACM-1 transient inverse calibrated model with those from previous site-wide groundwater modeling efforts illustrate that the new 3-D transient inverse model approach will strengthen the technical defensibility of the final model(s) and provide the ability to incorporate uncertainty in predictions related to both conceptual model and parameter uncertainty. These results, however, indicate that additional improvements to the conceptual model framework are required. An investigation was initiated at the end of this basalt inverse modeling effort to determine whether facies-based zonation would improve specific yield parameter estimation results (ACM-2). A description of the justification and methodology used to develop this zonation is discussed.

  7. A Mw 6.3 earthquake scenario in the city of Nice (southeast France): ground motion simulations

    NASA Astrophysics Data System (ADS)

    Salichon, Jérome; Kohrs-Sansorny, Carine; Bertrand, Etienne; Courboulex, Françoise

    2010-07-01

    The southern Alps-Ligurian basin junction is one of the most seismically active zones of western Europe. Constant microseismicity and moderate-size events (3.5 < M < 5) are regularly recorded. The last reported historical event took place in February 1887 and reached an estimated magnitude between 6 and 6.5, causing human losses and extensive damage (intensity X, Medvedev-Sponheuer-Karnik). Such an event, occurring nowadays, could have critical consequences given the high density of population living on the French and Italian Riviera. We study the case of an offshore Mw 6.3 earthquake located at the place where two moderate-size events (Mw 4.5) occurred recently and where a morphotectonic feature has been detected by a bathymetric survey. We used a stochastic empirical Green's function (EGF) summation method to produce a population of realistic accelerograms on rock and soil sites in the city of Nice. The ground motion simulations are calibrated on a rock site with a set of ground motion prediction equations (GMPEs) in order to estimate a reasonable stress-drop ratio between the February 25th, 2001, Mw 4.5 event taken as an EGF and the target earthquake. Our results show that the combination of the GMPE and EGF techniques is an interesting tool for site-specific strong ground motion estimation.

  8. Binding of the cyclic AMP receptor protein of Escherichia coli and DNA bending at the P4 promoter of pBR322.

    PubMed

    Brierley, I; Hoggett, J G

    1992-07-01

    The binding of the Escherichia coli cyclic AMP receptor protein (CRP) to its specific site on the P4 promoter of pBR322 has been studied by gel electrophoresis. Binding to the P4 site was about 40-50-fold weaker than to the principal CRP site on the lactose promoter at both low (0.01 M) and high (0.1 M) ionic strengths. CRP-induced bending at the P4 site was investigated from the mobilities of CRP bound to circularly permuted P4 fragments. The estimated bending angle, based on comparison with Zinkel & Crothers [(1990) Biopolymers 29, 29-38] A-tract bending standards, was found to be approximately 96 degrees, similar to that found for binding to the lac site. These observations suggest that there is not a simple relationship between strength of CRP binding and the extent of induced bending for different CRP sites. The apparent centre of bending in P4 is displaced about 6-8 bp away from the conserved TGTGA sequence and the P4 transcription start site.

  9. Results of external quality-assurance program for the National Atmospheric Deposition Program and National Trends Network during 1985

    USGS Publications Warehouse

    Brooks, M.H.; Schroder, L.J.; Willoughby, T.C.

    1988-01-01

    External quality assurance monitoring of the National Atmospheric Deposition Program (NADP) and National Trends Network (NTN) was performed by the U.S. Geological Survey during 1985. The monitoring consisted of three primary programs: (1) an intersite comparison program designed to assess the precision and accuracy of onsite pH and specific conductance measurements made by NADP and NTN site operators; (2) a blind audit sample program designed to assess the effect of routine field handling on the precision and bias of NADP and NTN wet deposition data; and (3) an interlaboratory comparison program designed to compare analytical data from the laboratory processing NADP and NTN samples with data produced by other laboratories routinely analyzing wet deposition samples and to provide estimates of individual laboratory precision. An average of 94% of the site operators participated in the four voluntary intersite comparisons during 1985. A larger percentage of participating site operators met the accuracy goal for specific conductance measurements (average, 87%) than for pH measurements (average, 67%). Overall precision was dependent on the actual specific conductance of the test solution and independent of the pH of the test solution. Data for the blind audit sample program indicated slight positive biases resulting from routine field handling for all analytes except specific conductance. These biases were not large enough to be significant for most data users. Data for the blind audit sample program also indicated that decreases in hydrogen ion concentration were accompanied by decreases in specific conductance. Precision estimates derived from the blind audit sample program indicate that the major source of uncertainty in wet deposition data is the routine field handling that each wet deposition sample receives. Results of the interlaboratory comparison program were similar to results of previous years ' evaluations, indicating that the participating laboratories produced comparable data when they analyzed identical wet deposition samples, and that the laboratory processing NADP and NTN samples achieved the best analyte precision of the participating laboratories. (Author 's abstract)

  10. Constructing a framework for risk analyses of climate change effects on the water budget of differently sloped vineyards with a numeric simulation using the Monte Carlo method coupled to a water balance model

    PubMed Central

    Hofmann, Marco; Lux, Robert; Schultz, Hans R.

    2014-01-01

    Grapes for wine production are a highly climate sensitive crop and vineyard water budget is a decisive factor in quality formation. In order to conduct risk assessments for climate change effects in viticulture models are needed which can be applied to complete growing regions. We first modified an existing simplified geometric vineyard model of radiation interception and resulting water use to incorporate numerical Monte Carlo simulations and the physical aspects of radiation interactions between canopy and vineyard slope and azimuth. We then used four regional climate models to assess for possible effects on the water budget of selected vineyard sites up 2100. The model was developed to describe the partitioning of short-wave radiation between grapevine canopy and soil surface, respectively, green cover, necessary to calculate vineyard evapotranspiration. Soil water storage was allocated to two sub reservoirs. The model was adopted for steep slope vineyards based on coordinate transformation and validated against measurements of grapevine sap flow and soil water content determined down to 1.6 m depth at three different sites over 2 years. The results showed good agreement of modeled and observed soil water dynamics of vineyards with large variations in site specific soil water holding capacity (SWC) and viticultural management. Simulated sap flow was in overall good agreement with measured sap flow but site-specific responses of sap flow to potential evapotranspiration were observed. The analyses of climate change impacts on vineyard water budget demonstrated the importance of site-specific assessment due to natural variations in SWC. The improved model was capable of describing seasonal and site-specific dynamics in soil water content and could be used in an amended version to estimate changes in the water budget of entire grape growing areas due to evolving climatic changes. PMID:25540646

  11. Standardizing Nasal Nitric Oxide Measurement as a Test for Primary Ciliary Dyskinesia

    PubMed Central

    Hazucha, Milan J.; Chawla, Kunal K.; Baker, Brock R.; Shapiro, Adam J.; Brown, David E.; LaVange, Lisa M.; Horton, Bethany J.; Qaqish, Bahjat; Carson, Johnny L.; Davis, Stephanie D.; Dell, Sharon D.; Ferkol, Thomas W.; Atkinson, Jeffrey J.; Olivier, Kenneth N.; Sagel, Scott D.; Rosenfeld, Margaret; Milla, Carlos; Lee, Hye-Seung; Krischer, Jeffrey; Zariwala, Maimoona A.; Knowles, Michael R.

    2013-01-01

    Rationale: Several studies suggest that nasal nitric oxide (nNO) measurement could be a test for primary ciliary dyskinesia (PCD), but the procedure and interpretation have not been standardized. Objectives: To use a standard protocol for measuring nNO to establish a disease-specific cutoff value at one site, and then validate at six other sites. Methods: At the lead site, nNO was prospectively measured in individuals later confirmed to have PCD by ciliary ultrastructural defects (n = 143) or DNAH11 mutations (n = 6); and in 78 healthy and 146 disease control subjects, including individuals with asthma (n = 37), cystic fibrosis (n = 77), and chronic obstructive pulmonary disease (n = 32). A disease-specific cutoff value was determined, using generalized estimating equations (GEEs). Six other sites prospectively measured nNO in 155 consecutive individuals enrolled for evaluation for possible PCD. Measurements and Main Results: At the lead site, nNO values in PCD (mean ± standard deviation, 20.7 ± 24.1 nl/min; range, 1.5–207.3 nl/min) only rarely overlapped with the nNO values of healthy control subjects (304.6 ± 118.8; 125.5–867.0 nl/min), asthma (267.8 ± 103.2; 125.0–589.7 nl/min), or chronic obstructive pulmonary disease (223.7 ± 87.1; 109.7–449.1 nl/min); however, there was overlap with cystic fibrosis (134.0 ± 73.5; 15.6–386.1 nl/min). The disease-specific nNO cutoff value was defined at 77 nl/minute (sensitivity, 0.98; specificity, >0.999). At six other sites, this cutoff identified 70 of the 71 (98.6%) participants with confirmed PCD. Conclusions: Using a standardized protocol in multicenter studies, nNO measurement accurately identifies individuals with PCD, and supports its usefulness as a test to support the clinical diagnosis of PCD. PMID:24024753
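
    For readers who want to see how a below-cutoff decision rule like the one above is scored, the sketch below computes sensitivity and specificity on synthetic nNO values. The distributions, sample sizes, and the placement of the 77 nl/min threshold are illustrative assumptions; this is not the study's GEE-based derivation of the cutoff.

```python
import numpy as np

def sensitivity_specificity(pcd_values, control_values, cutoff):
    """Sensitivity and specificity of a disease-specific nNO cutoff:
    a measurement counts as 'PCD-positive' when nNO falls below the cutoff."""
    pcd_values = np.asarray(pcd_values, dtype=float)
    control_values = np.asarray(control_values, dtype=float)
    sensitivity = np.mean(pcd_values < cutoff)       # true positives / all PCD
    specificity = np.mean(control_values >= cutoff)  # true negatives / all controls
    return sensitivity, specificity

# Hypothetical nNO values (nl/min), not the study's measurements.
rng = np.random.default_rng(0)
pcd = rng.gamma(shape=2.0, scale=12.0, size=150)          # low nNO, as in PCD
controls = rng.normal(loc=300.0, scale=110.0, size=200)   # healthy-range nNO
print(sensitivity_specificity(pcd, controls, cutoff=77.0))
```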

  12. Multiple approaches to assess the safety of artisanal marine food in a tropical estuary.

    PubMed

    Padovan, A C; Neave, M J; Munksgaard, N C; Gibb, K S

    2017-03-01

    In this study, metal and metalloid concentrations and pathogens were measured in shellfish at different locations in a tropical estuary, including sites impacted by sewage and industry. Oyster, mangrove snails and mud snails did not exceed Australian and New Zealand Food Standards maximum levels for copper, lead or estimated inorganic arsenic at any site although copper concentrations in oysters and mud snails exceeded generally expected levels at some locations. Bacterial community composition in shellfish was species-specific regardless of location and different to the surrounding water and sediment. In the snails Telescopium telescopium, Terebralia palustris and Nerita balteata, some bacterial taxa differed between sites, but not in Saccostrea cucullata oysters. The abundance of potential human pathogens was very low and pathogen abundance or diversity was not associated with site classification, i.e. sewage impact, industry impact and reference.

  13. Development of a liquefaction hazard screening tool for caltrans bridge sites

    USGS Publications Warehouse

    Knudsen, K.-L.; Bott, J.D.J.; Woods, M.O.; McGuire, T.L.

    2009-01-01

    We have developed a liquefaction hazard screening tool for the California Department of Transportation (Caltrans) that is being used to evaluate the liquefaction hazard to approximately 13,000 bridge sites in California. Because of the large number of bridge sites to be evaluated, we developed a tool that makes use of parameters not typically considered in site-specific liquefaction investigations. We assessed geologic, topographic, seismic hazard, and subsurface conditions at about 100 sites of past liquefaction in California. Among the parameters we found common to many of these sites are: (a) low elevations, (b) proximity to a water body, and (c) presence of geologically youthful deposits or artificial fill materials. The nature of the study necessitated the use of readily available data, preferably datasets that are consistent across the state. The screening tool we provided to Caltrans makes use of the following parameters: (1) proximity to a water body, (2) whether the bridge crosses a water body, (3) the age of site geologic materials and the environment in which the materials were deposited, as discerned from available digital geologic maps, (4) probabilistic shaking estimates, (5) the site elevation, and (6) information from available liquefaction hazard maps [covering the 9-county San Francisco Bay Area and Ventura County] and California Geological Survey (CGS) Zones of Required Investigation. For bridge sites at which subsurface boring data were available (from CGS' existing database), we calculated Displacement Potential Index values using a methodology developed by Allison Faris and Jiaer Wu. Caltrans' staff will use this hazard-screening tool, along with other tools focused on bridges and foundations, to prioritize site-specific investigations. © 2009 ASCE.

  14. Reducing Contingency through Sampling at the Luckey FUSRAP Site - 13186

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frothingham, David; Barker, Michelle; Buechi, Steve

    2013-07-01

    Typically, the greatest risk in developing accurate cost estimates for the remediation of hazardous, toxic, and radioactive waste sites is the uncertainty in the estimated volume of contaminated media requiring remediation. Efforts to address this risk in the remediation cost estimate can result in large cost contingencies that are often considered unacceptable when budgeting for site cleanups. Such was the case for the Luckey Formerly Utilized Sites Remedial Action Program (FUSRAP) site near Luckey, Ohio, which had significant uncertainty surrounding the estimated volume of site soils contaminated with radium, uranium, thorium, beryllium, and lead. Funding provided by the American Recovery and Reinvestment Act (ARRA) allowed the U.S. Army Corps of Engineers (USACE) to conduct additional environmental sampling and analysis at the Luckey Site between November 2009 and April 2010, with the objective to further delineate the horizontal and vertical extent of contaminated soils in order to reduce the uncertainty in the soil volume estimate. Investigative work included radiological, geophysical, and topographic field surveys, subsurface borings, and soil sampling. Results from the investigative sampling were used in conjunction with Argonne National Laboratory's Bayesian Approaches for Adaptive Spatial Sampling (BAASS) software to update the contaminated soil volume estimate for the site. This updated volume estimate was then used to update the project cost-to-complete estimate using the USACE Cost and Schedule Risk Analysis process, which develops cost contingencies based on project risks. An investment of $1.1 M of ARRA funds for additional investigative work resulted in a reduction of 135,000 in-situ cubic meters (177,000 in-situ cubic yards) in the estimated base soil volume. This refinement of the estimated soil volume resulted in a $64.3 M reduction in the estimated project cost-to-complete, through a reduction in the uncertainty in the contaminated soil volume estimate and the associated contingency costs. (authors)

  15. Economic decision making and the application of nonparametric prediction models

    USGS Publications Warehouse

    Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.

    2008-01-01

    Sustained increases in energy prices have focused attention on gas resources in low-permeability shale or in coals that were previously considered economically marginal. Daily well deliverability is often relatively small, although the estimates of the total volumes of recoverable resources in these settings are often large. Planning and development decisions for extraction of such resources must be areawide because profitable extraction requires optimization of scale economies to minimize costs and reduce risk. For an individual firm, the decision to enter such plays depends on reconnaissance-level estimates of regional recoverable resources and on cost estimates to develop untested areas. This paper shows how simple nonparametric local regression models, used to predict technically recoverable resources at untested sites, can be combined with economic models to compute regional-scale cost functions. The context of the worked example is the Devonian Antrim-shale gas play in the Michigan basin. One finding relates to selection of the resource prediction model to be used with economic models. Models chosen because they can best predict aggregate volume over larger areas (many hundreds of sites) smooth out granularity in the distribution of predicted volumes at individual sites. This loss of detail affects the representation of economic cost functions and may affect economic decisions. Second, because some analysts consider unconventional resources to be ubiquitous, the selection and order of specific drilling sites may, in practice, be determined arbitrarily by extraneous factors. The analysis shows a 15-20% gain in gas volume when these simple models are applied to order drilling prospects strategically rather than to choose drilling locations randomly. Copyright © 2008 Society of Petroleum Engineers.
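
    As a rough illustration of nonparametric local prediction, the sketch below implements a one-dimensional Nadaraya-Watson kernel regression. The distance variable, the hypothetical recoverable volumes, and the Gaussian kernel are assumptions made for illustration only; they are not the estimator or data used in the study.

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_query, bandwidth):
    """Kernel-weighted (Nadaraya-Watson) local regression: each prediction is a
    weighted average of nearby observations, with Gaussian distance weights."""
    x_train = np.asarray(x_train, dtype=float)
    y_train = np.asarray(y_train, dtype=float)
    preds = []
    for xq in np.atleast_1d(x_query):
        w = np.exp(-0.5 * ((x_train - xq) / bandwidth) ** 2)
        preds.append(np.sum(w * y_train) / np.sum(w))
    return np.array(preds)

# Hypothetical data: recoverable gas volume (MMcf) at tested sites versus
# position along a trend (km); predictions are made at two untested positions.
x = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0])
y = np.array([120.0, 140.0, 180.0, 160.0, 90.0, 70.0, 60.0])
print(nadaraya_watson(x, y, x_query=[5.0, 9.0], bandwidth=2.0))
```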

  16. Redefinition and global estimation of basal ecosystem respiration rate

    NASA Astrophysics Data System (ADS)

    Yuan, Wenping; Luo, Yiqi; Li, Xianglan; Liu, Shuguang; Yu, Guirui; Zhou, Tao; Bahn, Michael; Black, Andy; Desai, Ankur R.; Cescatti, Alessandro; Marcolla, Barbara; Jacobs, Cor; Chen, Jiquan; Aurela, Mika; Bernhofer, Christian; Gielen, Bert; Bohrer, Gil; Cook, David R.; Dragoni, Danilo; Dunn, Allison L.; Gianelle, Damiano; Grünwald, Thomas; Ibrom, Andreas; Leclerc, Monique Y.; Lindroth, Anders; Liu, Heping; Marchesini, Luca Belelli; Montagnani, Leonardo; Pita, Gabriel; Rodeghiero, Mirco; Rodrigues, Abel; Starr, Gregory; Stoy, Paul C.

    2011-12-01

    Basal ecosystem respiration rate (BR), the ecosystem respiration rate at a given temperature, is a common and important parameter in empirical models for quantifying ecosystem respiration (ER) globally. Numerous studies have indicated that BR varies in space. However, many empirical ER models still use a global constant BR largely due to the lack of a functional description for BR. In this study, we redefined BR to be ecosystem respiration rate at the mean annual temperature. To test the validity of this concept, we conducted a synthesis analysis using 276 site-years of eddy covariance data, from 79 research sites located at latitudes ranging from ˜3°S to ˜70°N. Results showed that mean annual ER rate closely matches ER rate at mean annual temperature. Incorporation of site-specific BR into global ER model substantially improved simulated ER compared to an invariant BR at all sites. These results confirm that ER at the mean annual temperature can be considered as BR in empirical models. A strong correlation was found between the mean annual ER and mean annual gross primary production (GPP). Consequently, GPP, which is typically more accurately modeled, can be used to estimate BR. A light use efficiency GPP model (i.e., EC-LUE) was applied to estimate global GPP, BR and ER with input data from MERRA (Modern Era Retrospective-Analysis for Research and Applications) and MODIS (Moderate resolution Imaging Spectroradiometer). The global ER was 103 Pg C yr -1, with the highest respiration rate over tropical forests and the lowest value in dry and high-latitude areas.
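
    A minimal sketch of the idea, assuming a simple Q10-type respiration model: if ER(T) = BR * Q10**((T - T_MAT)/10), then redefining BR as ER at the site mean annual temperature makes BR the intercept of a log-linear fit. The model form, parameter values, and synthetic data below are illustrative assumptions, not the empirical ER models or flux-tower data used in the study.

```python
import numpy as np

def fit_br_q10(er, temp, t_mat):
    """Estimate basal respiration BR (ER at the mean annual temperature t_mat)
    and Q10 from paired ER/temperature observations, using the model
        ER(T) = BR * Q10**((T - t_mat) / 10)
    via a linear fit in log space: ln ER = ln BR + ln Q10 * (T - t_mat)/10."""
    er = np.asarray(er, dtype=float)
    temp = np.asarray(temp, dtype=float)
    x = (temp - t_mat) / 10.0
    slope, intercept = np.polyfit(x, np.log(er), 1)
    return np.exp(intercept), np.exp(slope)   # BR, Q10

# Synthetic illustration: generate noisy ER from a known BR and Q10, then recover them.
rng = np.random.default_rng(0)
temp = rng.uniform(-5, 30, 500)               # temperature, deg C
t_mat = temp.mean()                           # mean annual temperature
er_obs = 3.0 * 2.1 ** ((temp - t_mat) / 10.0) * np.exp(rng.normal(0, 0.1, temp.size))
br, q10 = fit_br_q10(er_obs, temp, t_mat)
print(f"BR ~ {br:.2f} umol CO2 m-2 s-1, Q10 ~ {q10:.2f}")
```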

  17. Stability of detectability over 17 years at a single site and other lizard detection comparisons from Guam

    USGS Publications Warehouse

    Rodda, Gordon H.; Dean-Bradley, Kathryn; Campbell, Earl W.; Fritts, Thomas H.; Lardner, Bjorn; Yackel Adams, Amy A.; Reed, Robert N.

    2015-01-01

    To obtain quantitative information about population dynamics from counts of animals, the per capita detectabilities of each species must remain constant over the course of monitoring. We characterized lizard detection constancy for four species over 17 yr from a single site in northern Guam, a relatively benign situation because detection was relatively easy and we were able to hold constant the site, habitat type, species, season, and sampling method. We monitored two species of diurnal terrestrial skinks (Carlia ailanpalai [Curious Skink], Emoia caeruleocauda [Pacific Bluetailed Skink]) using glueboards placed on the ground in the shade for 3 h on rainless mornings, yielding 10,286 skink captures. We additionally monitored two species of nocturnal arboreal geckos (Hemidactylus frenatus [Common House Gecko]; Lepidodactylus lugubris [Mourning Gecko]) on the basis of 15,212 sightings. We compared these count samples to a series of complete censuses we conducted from four or more total removal plots (everything removed to mineral soil) totaling 400 m2 (about 1% of study site) in each of the years 1995, 1999, and 2012, providing time-stamped quantification of detectability for each species. Unfortunately, the actual population trajectories taken by the four species were masked by unexplained variation in detectability. This observation of debilitating latent variability in lizard detectability under nearly ideal conditions undercuts our trust in population estimation techniques that fail to quantify venue-specific detectability, rely on pooled detection probability estimates, or assume that modulation in predefined environmental covariates suffices for estimating detectability.

  18. Redefinition and global estimation of basal ecosystem respiration rate

    USGS Publications Warehouse

    Yuan, W.; Luo, Y.; Li, X.; Liu, S.; Yu, G.; Zhou, T.; Bahn, M.; Black, A.; Desai, A.R.; Cescatti, A.; Marcolla, B.; Jacobs, C.; Chen, J.; Aurela, M.; Bernhofer, C.; Gielen, B.; Bohrer, G.; Cook, D.R.; Dragoni, D.; Dunn, A.L.; Gianelle, D.; Grünwald, T.; Ibrom, A.; Leclerc, M.Y.; Lindroth, A.; Liu, H.; Marchesini, L.B.; Montagnani, L.; Pita, G.; Rodeghiero, M.; Rodrigues, A.; Starr, G.; Stoy, Paul C.

    2011-01-01

    Basal ecosystem respiration rate (BR), the ecosystem respiration rate at a given temperature, is a common and important parameter in empirical models for quantifying ecosystem respiration (ER) globally. Numerous studies have indicated that BR varies in space. However, many empirical ER models still use a global constant BR largely due to the lack of a functional description for BR. In this study, we redefined BR to be ecosystem respiration rate at the mean annual temperature. To test the validity of this concept, we conducted a synthesis analysis using 276 site-years of eddy covariance data, from 79 research sites located at latitudes ranging from ∼3°S to ∼70°N. Results showed that mean annual ER rate closely matches ER rate at mean annual temperature. Incorporation of site-specific BR into global ER model substantially improved simulated ER compared to an invariant BR at all sites. These results confirm that ER at the mean annual temperature can be considered as BR in empirical models. A strong correlation was found between the mean annual ER and mean annual gross primary production (GPP). Consequently, GPP, which is typically more accurately modeled, can be used to estimate BR. A light use efficiency GPP model (i.e., EC-LUE) was applied to estimate global GPP, BR and ER with input data from MERRA (Modern Era Retrospective-Analysis for Research and Applications) and MODIS (Moderate resolution Imaging Spectroradiometer). The global ER was 103 Pg C yr −1, with the highest respiration rate over tropical forests and the lowest value in dry and high-latitude areas.

  19. Coupling heat and chemical tracer experiments for estimating heat transfer parameters in shallow alluvial aquifers.

    PubMed

    Wildemeersch, S; Jamin, P; Orban, P; Hermans, T; Klepikova, M; Nguyen, F; Brouyère, S; Dassargues, A

    2014-11-15

    Geothermal energy systems, closed or open, are increasingly considered for heating and/or cooling buildings. The efficiency of such systems depends on the thermal properties of the subsurface. Therefore, feasibility and impact studies performed prior to their installation should include a field characterization of thermal properties and a heat transfer model using parameter values measured in situ. However, there is a lack of in situ experiments and methodology for performing such a field characterization, especially for open systems. This study presents an in situ experiment designed for estimating heat transfer parameters in shallow alluvial aquifers, with a focus on the specific heat capacity. This experiment consists of simultaneously injecting hot water and a chemical tracer into the aquifer and monitoring the evolution of groundwater temperature and concentration in the recovery well (and possibly in other piezometers located down gradient). Temperature and concentrations are then used for estimating the specific heat capacity. The first method for estimating this parameter is based on modeling, in series, the chemical tracer and temperature breakthrough curves at the recovery well. The second method is based on an energy balance. The values of specific heat capacity estimated with the two methods (2.30 and 2.54 MJ/m3/K) for the experimental site in the alluvial aquifer of the Meuse River (Belgium) are almost identical and consistent with values found in the literature. Temperature breakthrough curves in other piezometers are not required for estimating the specific heat capacity. However, they highlight that heat transfer in the alluvial aquifer of the Meuse River is complex and contrasted, with different dominant processes depending on depth, leading to significant vertical heat exchange between the upper and lower parts of the aquifer. Furthermore, these temperature breakthrough curves could be included in the calibration of a complex heat transfer model for estimating the entire set of heat transfer parameters and their spatial distribution by inverse modeling. Copyright © 2014 Elsevier B.V. All rights reserved.
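
    To illustrate why pairing a heat injection with a conservative tracer constrains the volumetric heat capacity, the sketch below uses the simplifying assumption of purely advective transport, so that the retardation of the thermal breakthrough relative to the tracer approximates (rho*c)_bulk / (porosity * (rho*c)_water). The breakthrough curves and porosity are synthetic, and the study's series-modeling and energy-balance procedures are more elaborate than this.

```python
import numpy as np

RHO_C_WATER = 4.19e6  # volumetric heat capacity of water, J/m3/K

def bulk_heat_capacity(t_tracer, c_tracer, t_temp, dT, porosity):
    """Rough estimate of the aquifer bulk volumetric heat capacity from the
    ratio of peak arrival times of heat and a conservative tracer, assuming
    purely advective transport (a simplification for illustration only)."""
    t_peak_tracer = t_tracer[np.argmax(c_tracer)]
    t_peak_heat = t_temp[np.argmax(dT)]
    r_th = t_peak_heat / t_peak_tracer                 # thermal retardation factor
    return r_th * porosity * RHO_C_WATER               # J/m3/K

# Synthetic breakthrough curves (time in hours since injection).
t = np.linspace(0, 48, 500)
c = np.exp(-0.5 * ((t - 6.0) / 1.5) ** 2)     # tracer peaks near 6 h
dT = np.exp(-0.5 * ((t - 12.0) / 3.0) ** 2)   # temperature rise peaks near 12 h
print(f"(rho*c)_bulk ~ {bulk_heat_capacity(t, c, t, dT, porosity=0.30) / 1e6:.2f} MJ/m3/K")
```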

  20. ESTIMATING PROPORTION OF AREA OCCUPIED UNDER COMPLEX SURVEY DESIGNS

    EPA Science Inventory

    Estimating proportion of sites occupied, or proportion of area occupied (PAO) is a common problem in environmental studies. Typically, field surveys do not ensure that occupancy of a site is made with perfect detection. Maximum likelihood estimation of site occupancy rates when...

  1. Height intercept for estimating site index in young ponderosa pine plantations and natural stands

    Treesearch

    William W. Oliver

    1972-01-01

    Site index is difficult to estimate with any reliability in ponderosa pine (Pinus ponderosa Laws.) stands below 20 years old. A method of estimating site index based on 4-year height intercepts (total length of the first four internodes above breast height) is described. Equations based on two sets of published site-index curves were developed. They...

  2. A phased approach to induced seismicity risk management

    DOE PAGES

    White, Joshua A.; Foxall, William

    2014-01-01

    This work describes strategies for assessing and managing induced seismicity risk during each phase of a carbon storage project. We consider both nuisance and damage potential from induced earthquakes, as well as the indirect risk of enhancing fault leakage pathways. A phased approach to seismicity management is proposed, in which operations are continuously adapted based on available information and an on-going estimate of risk. At each project stage, specific recommendations are made for (a) monitoring and characterization, (b) modeling and analysis, and (c) site operations. The resulting methodology can help lower seismic risk while ensuring site operations remain practical and cost-effective.

  3. Estimating Carbon Flux Phenology with Satellite-Derived Land Surface Phenology and Climate Drivers for Different Biomes: A Synthesis of AmeriFlux Observations

    PubMed Central

    Zhu, Wenquan; Chen, Guangsheng; Jiang, Nan; Liu, Jianhong; Mou, Minjie

    2013-01-01

    Carbon Flux Phenology (CFP) can affect the interannual variation in Net Ecosystem Exchange (NEE) of carbon between terrestrial ecosystems and the atmosphere. In this study, we proposed a methodology to estimate CFP metrics with satellite-derived Land Surface Phenology (LSP) metrics and climate drivers for 4 biomes (i.e., deciduous broadleaf forest, evergreen needleleaf forest, grasslands and croplands), using 159 site-years of NEE and climate data from 32 AmeriFlux sites and MODIS vegetation index time-series data. LSP metrics combined with optimal climate drivers can explain the variability in Start of Carbon Uptake (SCU) by more than 70% and End of Carbon Uptake (ECU) by more than 60%. The Root Mean Square Error (RMSE) of the estimations was within 8.5 days for both SCU and ECU. The estimation performance for this methodology was primarily dependent on the optimal combination of the LSP retrieval methods, the explanatory climate drivers, the biome types, and the specific CFP metric. This methodology has a potential for allowing extrapolation of CFP metrics for biomes with a distinct and detectable seasonal cycle over large areas, based on synoptic multi-temporal optical satellite data and climate data. PMID:24386441

  4. TSS concentration in sewers estimated from turbidity measurements by means of linear regression accounting for uncertainties in both variables.

    PubMed

    Bertrand-Krajewski, J L

    2004-01-01

    In order to replace traditional sampling and analysis techniques, turbidimeters can be used to estimate TSS concentration in sewers by means of sensor- and site-specific empirical equations established by linear regression of on-site turbidity values T with TSS concentrations C measured in corresponding samples. As the ordinary least-squares method is not able to account for measurement uncertainties in both the T and C variables, an appropriate regression method is used to resolve this difficulty and to correctly evaluate the uncertainty in TSS concentrations estimated from measured turbidity. The regression method is described, including detailed calculations of the variances and covariance of the regression parameters. An example of application is given for a calibrated turbidimeter used in a combined sewer system, with data collected during three dry-weather days. In order to show how the established regression could be used, an independent 24-hour-long dry-weather turbidity data series recorded at a 2 min time interval is used, transformed into estimated TSS concentrations, and compared to TSS concentrations measured in samples. The comparison is satisfactory and suggests that turbidity measurements could replace traditional samples. Further developments, including wet-weather periods and other types of sensors, are suggested.
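
    A minimal sketch of a regression that allows for errors in both variables is given below, using the closed-form Deming estimator with a known error-variance ratio. The calibration pairs are hypothetical, and this simple weighting differs from the detailed variance/covariance treatment described above.

```python
import numpy as np

def deming_fit(x, y, delta=1.0):
    """Deming (errors-in-variables) regression of y on x, assuming the ratio of
    error variances delta = var(err_y) / var(err_x) is known; a simple stand-in
    for the uncertainty-aware regression described in the abstract."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    sxx = np.mean((x - mx) ** 2)
    syy = np.mean((y - my) ** 2)
    sxy = np.mean((x - mx) * (y - my))
    slope = (syy - delta * sxx
             + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    intercept = my - slope * mx
    return slope, intercept

# Hypothetical calibration data: turbidity T (NTU) versus sampled TSS C (mg/L).
turbidity = np.array([12, 25, 40, 80, 150, 260, 310], dtype=float)
tss = np.array([18, 35, 60, 115, 220, 390, 450], dtype=float)
a, b = deming_fit(turbidity, tss, delta=1.0)
print(f"C ~ {a:.2f} * T + {b:.2f}  (mg/L)")
```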

  5. Estimating Carbon Flux Phenology with Satellite-Derived Land Surface Phenology and Climate Drivers for Different Biomes: A Synthesis of AmeriFlux Observations

    DOE PAGES

    Zhu, Wenquan; Chen, Guangsheng; Jiang, Nan; ...

    2013-12-27

    Carbon Flux Phenology (CFP) can affect the interannual variation in Net Ecosystem Exchange (NEE) of carbon between terrestrial ecosystems and the atmosphere. In this paper, we proposed a methodology to estimate CFP metrics with satellite-derived Land Surface Phenology (LSP) metrics and climate drivers for 4 biomes (i.e., deciduous broadleaf forest, evergreen needleleaf forest, grasslands and croplands), using 159 site-years of NEE and climate data from 32 AmeriFlux sites and MODIS vegetation index time-series data. LSP metrics combined with optimal climate drivers can explain the variability in Start of Carbon Uptake (SCU) by more than 70% and End of Carbon Uptake (ECU) by more than 60%. The Root Mean Square Error (RMSE) of the estimations was within 8.5 days for both SCU and ECU. The estimation performance for this methodology was primarily dependent on the optimal combination of the LSP retrieval methods, the explanatory climate drivers, the biome types, and the specific CFP metric. In conclusion, this methodology has a potential for allowing extrapolation of CFP metrics for biomes with a distinct and detectable seasonal cycle over large areas, based on synoptic multi-temporal optical satellite data and climate data.

  6. Specific Conductance and Dissolved-Solids Characteristics for the Green River and Muddy Creek, Wyoming, Water Years 1999-2008

    USGS Publications Warehouse

    Clark, Melanie L.; Davidson, Seth L.

    2009-01-01

    Southwestern Wyoming is an area of diverse scenery, wildlife, and natural resources that is actively undergoing energy development. The U.S. Department of the Interior's Wyoming Landscape Conservation Initiative is a long-term science-based effort to assess and enhance aquatic and terrestrial habitats at a landscape scale, while facilitating responsible energy development through local collaboration and partnerships. Water-quality monitoring has been conducted by the U.S. Geological Survey on the Green River near Green River, Wyoming, and Muddy Creek near Baggs, Wyoming. This monitoring, which is being conducted in cooperation with State and other Federal agencies and as part of the Wyoming Landscape Conservation Initiative, is in response to concerns about potentially increased dissolved solids in the Colorado River Basin as a result of energy development. Because of the need to provide real-time dissolved-solids concentrations for the Green River and Muddy Creek on the World Wide Web, the U.S. Geological Survey developed regression equations to estimate dissolved-solids concentrations on the basis of continuous specific conductance using relations between measured specific conductance and dissolved-solids concentrations. Specific conductance and dissolved-solids concentrations were less varied and generally lower for the Green River than for Muddy Creek. The median dissolved-solids concentration for the site on the Green River was 318 milligrams per liter, and the median concentration for the site on Muddy Creek was 943 milligrams per liter. Dissolved-solids concentrations ranged from 187 to 594 milligrams per liter in samples collected from the Green River during water years 1999-2008. Dissolved-solids concentrations ranged from 293 to 2,485 milligrams per liter in samples collected from Muddy Creek during water years 2006-08. The differences in dissolved-solids concentrations in samples collected from the Green River compared to samples collected from Muddy Creek reflect the different basin characteristics. Relations between specific conductance and dissolved-solids concentrations were statistically significant for the Green River (p-value less than 0.001) and Muddy Creek (p-value less than 0.001); therefore, specific conductance can be used to estimate dissolved-solids concentrations. Using continuous specific conductance values to estimate dissolved solids in real-time on the World Wide Web increases the amount and improves the timeliness of data available to water managers for assessing dissolved-solids concentrations in the Colorado River Basin.
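
    The sketch below shows the general shape of such a site-specific relation: an ordinary least-squares fit of dissolved-solids concentration against specific conductance from discrete samples, then applied to a continuous specific-conductance record to produce estimated concentrations. All numbers are hypothetical, and the published site regressions may use a different functional form.

```python
import numpy as np

# Hypothetical paired measurements from discrete samples:
# specific conductance (uS/cm) and dissolved-solids concentration (mg/L).
sc_samples = np.array([310, 420, 515, 640, 780, 905, 1100], dtype=float)
tds_samples = np.array([195, 265, 330, 410, 505, 590, 720], dtype=float)

# Ordinary least-squares fit of the site-specific relation TDS = m * SC + b.
m, b = np.polyfit(sc_samples, tds_samples, 1)

# Apply the relation to a continuous specific-conductance record to obtain
# real-time dissolved-solids estimates (values are illustrative only).
sc_continuous = np.array([350, 360, 372, 390, 415], dtype=float)
tds_estimated = m * sc_continuous + b
print(np.round(tds_estimated, 1))
```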

  7. What constitutes a nesting attempt? Variation in criteria causes bias and hinders comparisons across studies

    USGS Publications Warehouse

    Garcia, V.; Conway, C.J.

    2009-01-01

    Because reliable estimates of nesting success are very important to avian studies, the definition of a “successful nest” and the use of different analytical methods to estimate success have received much attention. By contrast, variation in the criteria used to determine whether an occupied site that did not produce offspring contained a nesting attempt is a source of bias that has been largely ignored. This problem is especially severe in studies that deal with species whose nest contents are relatively inaccessible because observers cannot determine whether or not an egg was laid for a large proportion of occupied sites. Burrowing Owls (Athene cunicularia) often lay their eggs ≥3 m below ground, so past Burrowing Owl studies have used a variety of criteria to determine whether a nesting attempt was initiated. We searched the literature to document the extent of that variation and examined how that variation influenced estimates of daily nest survival. We found 13 different sets of criteria used by previous authors and applied each criterion to our data set of 1,300 occupied burrows. We found significant variation in estimates of daily nest survival depending on the criteria used. Moreover, differences in daily nest survival among populations were apparent using some sets of criteria but not others. These inconsistencies may lead to incorrect conclusions and invalidate comparisons of the productivity and relative site quality among populations. We encourage future authors working on cavity-, canopy-, or burrow-nesting birds to provide specific details on the criteria they used to identify a nesting attempt.

  8. Mark-recapture using tetracycline and genetics reveal record-high bear density

    USGS Publications Warehouse

    Peacock, E.; Titus, K.; Garshelis, D.L.; Peacock, M.M.; Kuc, M.

    2011-01-01

    We used tetracycline biomarking, augmented with genetic methods, to estimate the size of an American black bear (Ursus americanus) population on an island in Southeast Alaska. We marked 132 and 189 bears that consumed remote, tetracycline-laced baits in 2 different years, respectively, and observed 39 marks in 692 bone samples subsequently collected from hunters. We genetically analyzed hair samples from bait sites to determine the sex of marked bears, facilitating derivation of sex-specific population estimates. We obtained harvest samples from beyond the study area to correct for emigration. We estimated a density of 155 independent bears/100 km2, which is equivalent to the highest recorded for this species. This high density appears to be maintained by abundant, accessible natural food. Our population estimate (approx. 1,000 bears) could be used as a baseline and to set hunting quotas. The refined biomarking method for abundance estimation is a useful alternative where physical captures or DNA-based estimates are precluded by cost or logistics. Copyright © 2011 The Wildlife Society.
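
    As a simplified illustration of biomarking-based abundance estimation, the sketch below applies the single-session Chapman-modified Lincoln-Petersen estimator. The inputs are hypothetical, and the study's actual analysis additionally handled two marking years, sex-specific marks, and emigration, which this sketch ignores.

```python
import math

def chapman_estimate(n_marked, n_sampled, n_recaptured):
    """Chapman-modified Lincoln-Petersen abundance estimator with its usual
    approximate variance; a single-session simplification for illustration."""
    n_hat = (n_marked + 1) * (n_sampled + 1) / (n_recaptured + 1) - 1
    var = ((n_marked + 1) * (n_sampled + 1)
           * (n_marked - n_recaptured) * (n_sampled - n_recaptured)) \
          / ((n_recaptured + 1) ** 2 * (n_recaptured + 2))
    return n_hat, math.sqrt(var)

# Hypothetical inputs (not the study's counts): 200 marked animals,
# 500 harvest samples screened, 95 marks detected.
n_hat, se = chapman_estimate(200, 500, 95)
print(f"N ~ {n_hat:.0f} +/- {1.96 * se:.0f} (approximate 95% CI half-width)")
```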

  9. Introducing Computed Tomography Standards for Age Estimation of Modern Australian Subadults Using Postnatal Ossification Timings of Select Cranial and Cervical Sites.

    PubMed

    Lottering, Nicolene; MacGregor, Donna M; Alston, Clair L; Watson, Debbie; Gregory, Laura S

    2016-01-01

    Contemporary, population-specific ossification timings of the cranium are lacking in current literature due to challenges in obtaining large repositories of documented subadult material, forcing Australian practitioners to rely on North American, arguably antiquated reference standards for age estimation. This study assessed the temporal pattern of ossification of the cranium and provides recalibrated probabilistic information for age estimation of modern Australian children. Fusion status of the occipital and frontal bones, atlas, and axis was scored using a modified two- to four-tier system from cranial/cervical DICOM datasets of 585 children aged birth to 10 years. Transition analysis was applied to elucidate maximum-likelihood estimates between consecutive fusion stages, in conjunction with Bayesian statistics to calculate credible intervals for age estimation. Results demonstrate significant sex differences in skeletal maturation (p < 0.05) and earlier timings in comparison with major literary sources, underscoring the requisite of updated standards for age estimation of modern individuals. © 2015 American Academy of Forensic Sciences.

  10. Estimation of Constituent Concentrations, Loads, and Yields in Streams of Johnson County, Northeast Kansas, Using Continuous Water-Quality Monitoring and Regression Models, October 2002 through December 2006

    USGS Publications Warehouse

    Rasmussen, Teresa J.; Lee, Casey J.; Ziegler, Andrew C.

    2008-01-01

    Johnson County is one of the most rapidly developing counties in Kansas. Population growth and expanding urban land use affect the quality of county streams, which are important for human and environmental health, water supply, recreation, and aesthetic value. This report describes estimates of streamflow and constituent concentrations, loads, and yields in relation to watershed characteristics in five Johnson County streams using continuous in-stream sensor measurements. Specific conductance, pH, water temperature, turbidity, and dissolved oxygen were monitored in five watersheds from October 2002 through December 2006. These continuous data were used in conjunction with discrete water samples to develop regression models for continuously estimating concentrations of other constituents. Continuous regression-based concentrations were estimated for suspended sediment, total suspended solids, dissolved solids and selected major ions, nutrients (nitrogen and phosphorus species), and fecal-indicator bacteria. Continuous daily, monthly, seasonal, and annual loads were calculated from concentration estimates and streamflow. The data are used to describe differences in concentrations, loads, and yields and to explain these differences relative to watershed characteristics. Water quality at the five monitoring sites varied according to hydrologic conditions; contributing drainage area; land use (including degree of urbanization); relative contributions from point and nonpoint constituent sources; and human activity within each watershed. Dissolved oxygen (DO) concentrations were less than the Kansas aquatic-life-support criterion of 5.0 mg/L less than 10 percent of the time at all sites except Indian Creek, which had DO concentrations less than the criterion about 15 percent of the time. Concentrations of suspended sediment, chloride (winter only), indicator bacteria, and pesticides were substantially larger during periods of increased streamflow. Suspended-sediment concentration was nearly always largest at the Mill Creek site. The Mill Creek watershed is undergoing rapid development that likely contributed to larger sustained sediment concentrations. During most of the time, the smallest sediment concentrations occurred at the Indian Creek site, the most urban of the monitored sites, likely because most of the streamflow originates from wastewater-treatment facilities located just upstream from the monitoring site. However, estimated annual suspended-sediment load and yield were largest annually at the Indian Creek site because of substantial contributions during storm runoff. At least 90 percent of the total annual sediment load in 2005?06 at all five monitoring sites occurred in less than 2 percent of the time, generally associated with large storm runoff. About 50 percent of the 2005 sediment load at the Blue River site occurred during a single 3-day storm, the equivalent of less than 1 percent of the time. Suspended-sediment concentration is statistically related to other water-quality constituents, and these relations have potential implications for implementation of best management practices because, if sediment concentrations are decreased, concentrations of sediment-associated constituents such as suspended solids, some nutrients, and bacteria will also likely decrease. Chloride concentrations were largest at the Indian and Mill Creek sites, the two most urban stream sites which also are most affected by road-salt runoff and wastewater-treatment-facility discharges. 
Two chloride runoff occurrences in January–February 2005 accounted for 19 percent of the total chloride load in Indian Creek in 2005. Escherichia coli density at the Indian Creek site was nearly always the largest of the five sites, with a median density more than double that of any other site and 15 times the density at the Blue River site, which is primarily nonurban. More than 97 percent of the fecal coliform bacteria load at the Indian Creek site and near the B

  11. Exposure Assessment of Livestock Carcass Management ...

    EPA Pesticide Factsheets

    This report describes relative exposures and hazards for different livestock carcass management options in the event of a natural disaster. It presents a quantitative exposure assessment in which livestock carcass management options are ranked relative to one another for a hypothetical site setting, a standardized set of environmental conditions (e.g., meteorology), and a single set of assumptions about how the carcass management options are designed and implemented. These settings, conditions, and assumptions are not necessarily representative of site-specific carcass management efforts. Therefore, the exposure assessment should not be interpreted as estimating levels of chemical and microbial exposure that can be expected to result from the management options evaluated. The intent of the relative rankings is to support scientifically based livestock carcass management decisions that consider potential hazards to human health, livestock, and the environment. This exposure assessment also provides information to support choices about mitigation measures to minimize or eliminate specific exposure pathways.

  12. Methanol production from Eucalyptus wood chips. Working Document 9. Economics of producing methanol from Eucalyptus in Central Florida

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fishkind, H.H.

    1982-06-01

    A detailed feasibility study of producing methanol from Eucalyptus in Central Florida encompasses all phases of production - from seedling to delivery of finished methanol. The project includes the following components: (1) production of 55 million, high quality, Eucalyptus seedlings through tissue culture; (2) establishment of a Eucalyptus energy plantation on approximately 70,000 acres; and (3) engineering for a 100 million gallon-per-year methanol production facility. In addition, the potential environmental impacts of the whole project were examined, safety and health aspects of producing and using methanol were analyzed, and site-specific cost estimates were made. The economics of the project are presented here. Each of the three major components of the project - tissue culture lab, energy plantation, and methanol refinery - is examined individually. In each case a site-specific analysis of the potential return on investment was conducted.

  13. Adsorption properties of HKUST-1 toward hydrogen and other small molecules monitored by IR.

    PubMed

    Bordiga, S; Regli, L; Bonino, F; Groppo, E; Lamberti, C; Xiao, B; Wheatley, P S; Morris, R E; Zecchina, A

    2007-06-07

    Among microporous systems, metal-organic frameworks are considered promising materials for molecular adsorption. In this contribution, infrared spectroscopy is successfully applied to highlight the positive role played by coordinatively unsaturated Cu2+ ions in HKUST-1, acting as specific interaction sites. A properly activated material, obtained after solvent removal, is characterized by a high fraction of coordinatively unsaturated Cu2+ ions acting as preferential adsorption sites that show specific activities towards some of the most common gaseous species (NO, CO2, CO, N2 and H2). From a temperature-dependent IR study, it has been estimated that the H2 adsorption energy is as high as 10 kJ mol-1. A very complex spectral evolution has been observed upon lowering the temperature. A further peculiarity of this material is the fact that it promotes ortho-para conversion of the adsorbed H2 species.

  14. Electrochemical performance of PVA stabilized nickel ferrite nanoparticles via microwave route

    NASA Astrophysics Data System (ADS)

    William, J. Johnson; Babu, I. Manohara; Muralidharan, G.

    2017-05-01

    Nickel ferrite nanoparticles were effectively synthesized through a microwave route, with PVA used as a stabilizer. The cubic inverse spinel crystal structure was identified from the X-ray diffraction pattern. The FTIR spectrum identified the octahedral-site vibrations of the Ni2+ ions and the tetrahedral-site vibrations of the Fe3+ ions, which further confirms the formation of nickel ferrite nanoparticles. Nano-granular morphology was observed by scanning electron microscopy, and the tuning of morphology was clearly seen in the SEM images. The electrochemical performance of the nickel ferrite nanoparticles was studied using cyclic voltammetry and chronopotentiometry. The highest specific capacitance of 459 F g-1 was achieved through cyclic voltammetry at 2 mV s-1 for NF10. Non-linearity was observed in chronopotentiometry, which confirms the pseudocapacitive nature of the nickel ferrite nanoparticles. The estimated specific capacitance was 341 F g-1 at 2.5 A g-1.
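
    The chronopotentiometry estimate follows the standard galvanostatic relation C_sp = I * dt / (m * dV). The sketch below applies it with a hypothetical electrode mass, voltage window, and discharge time chosen purely to show the arithmetic; they are not the measured values from this work.

```python
def specific_capacitance_gcd(current_a, discharge_time_s, mass_g, delta_v):
    """Specific capacitance (F/g) from a galvanostatic charge-discharge curve,
    using the usual relation C_sp = I * dt / (m * dV)."""
    return current_a * discharge_time_s / (mass_g * delta_v)

# Hypothetical example: a 1 mg electrode discharged at 2.5 A/g (2.5 mA)
# over a 0.5 V window in 68.2 s yields about 341 F/g.
print(specific_capacitance_gcd(current_a=2.5e-3, discharge_time_s=68.2,
                               mass_g=1e-3, delta_v=0.5))
```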

  15. Pediatric Price Transparency: Still Opaque With Opportunities for Improvement.

    PubMed

    Faherty, Laura J; Wong, Charlene A; Feingold, Jordyn; Li, Joan; Town, Robert; Fieldston, Evan; Werner, Rachel M

    2017-10-01

    Price transparency is gaining importance as families' portion of health care costs rise. We describe (1) online price transparency data for pediatric care on children's hospital Web sites and state-based price transparency Web sites, and (2) the consumer experience of obtaining an out-of-pocket estimate from children's hospitals for a common procedure. From 2015 to 2016, we audited 45 children's hospital Web sites and 38 state-based price transparency Web sites, describing availability and characteristics of health care prices and personalized cost estimate tools. Using secret shopper methodology, we called children's hospitals and submitted online estimate requests posing as a self-paying family requesting an out-of-pocket estimate for a tonsillectomy-adenoidectomy. Eight children's hospital Web sites (18%) listed prices. Twelve (27%) provided personalized cost estimate tool (online form n = 5 and/or phone number n = 9). All 9 hospitals with a phone number for estimates provided the estimated patient liability for a tonsillectomy-adenoidectomy (mean $6008, range $2622-$9840). Of the remaining 36 hospitals without a dedicated price estimate phone number, 21 (58%) provided estimates (mean $7144, range $1200-$15 360). Two of 4 hospitals with online forms provided estimates. Fifteen (39%) state-based Web sites distinguished between prices for pediatric and adult care. One had a personalized cost estimate tool. Meaningful prices for pediatric care were not widely available online through children's hospital or state-based price transparency Web sites. A phone line or online form for price estimates were effective strategies for hospitals to provide out-of-pocket price information. Opportunities exist to improve pediatric price transparency. Copyright © 2017 by the American Academy of Pediatrics.

  16. Alternative methods of salt disposal at the seven salt sites for a nuclear waste repository

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1987-02-01

    This study discusses the various alternative salt management techniques for the disposal of excess mined salt at seven potentially acceptable nuclear waste repository sites: Deaf Smith and Swisher Counties, Texas; Richton and Cypress Creek Domes, Mississippi; Vacherie Dome, Louisiana; and Davis and Lavender Canyons, Utah. Because the repository development involves the underground excavation of corridors and waste emplacement rooms, in either bedded or domed salt formations, excess salt will be mined and must be disposed of offsite. The salt disposal alternatives examined for all the sites include commercial use, ocean disposal, deep well injection, landfill disposal, and underground mine disposal. These alternatives (and other site-specific disposal methods) are reviewed, using estimated amounts of excavated, backfilled, and excess salt. Methods of transporting the excess salt are discussed, along with possible impacts of each disposal method and potential regulatory requirements. A preferred method of disposal is recommended for each potentially acceptable repository site. 14 refs., 5 tabs.

  17. Influence of rice straw burning on the levels of polycyclic aromatic hydrocarbons in agricultural county of Taiwan.

    PubMed

    Lai, Chia-Hsiang; Chen, Kang-Shin; Wang, Hsin-Kai

    2009-01-01

    Atmospheric particulate and polycyclic aromatic hydrocarbon (PAH) size distributions were measured at Jhu-Shan (a rural site) and Sin-Gang (a town site) in central Taiwan during rice-straw burning and non-burning periods. Rice-straw burning accounted for a roughly 58% (34%) increment in the concentrations of total PAHs. Combustion-related PAHs during burning periods were 1.54-2.57 times higher than those during non-burning periods. The mass median diameter (MMD) of 0.88-1.21 microm in the particulate phase suggested that rice-straw burning generated the increase in coarse particle number. Chemical mass balance (CMB) receptor model analyses showed that the primary pollution sources at the two sites were similar. However, rice-straw burning emission was specifically identified as a significant source of PAHs during burning periods at the two sites. Open burning of rice straw was estimated to contribute approximately 6.3%-24.6% to total atmospheric PAHs at the two sites.

  18. Inter-comparison of source apportionment of PM10 using PMF and CMB in three sites nearby an industrial area in central Italy

    NASA Astrophysics Data System (ADS)

    Cesari, Daniela; Donateo, Antonio; Conte, Marianna; Contini, Daniele

    2016-12-01

    Receptor models (RMs), based on chemical composition of particulate matter (PM), such as Chemical Mass Balance (CMB) and Positive Matrix Factorization (PMF), represent useful tools for determining the impact of PM sources on air quality. This information is useful, especially in areas influenced by anthropogenic activities, to plan mitigation strategies for environmental management. Recent inter-comparison of source apportionment (SA) results showed that one of the difficulties in the comparison of estimated source contributions is the compatibility of the sources, i.e. the chemical profiles of factor/sources used in receptor models. This suggests that SA based on integration of several RMs could give more stable and reliable solutions with respect to a single model. The aim of this work was to perform inter-comparison of PMF (using PMF3.0 and PMF5.0 codes) and CMB outputs, focusing on both source chemical profiles and estimates of source contributions. The dataset included 347 daily PM10 samples collected at three sites in central Italy located near industrial emissions. Samples were chemically analysed for the concentrations of 21 chemical species (NH4+, Ca2+, Mg2+, Na+, K+, Mg2+, SO42-, NO3-, Cl-, Si, Al, Ti, V, Mn, Fe, Ni, Cu, Zn, Br, EC, and OC) used as input of RMs. The approach identified 9 factor/sources: marine, traffic, resuspended dust, biomass burning, secondary sulphate, secondary nitrate, crustal, coal combustion power plant and harbour-industrial. Results showed that the application of constraints in PMF5.0 improved interpretability of profiles and comparability of estimated source contributions with stoichiometric calculations. The inter-comparison of PMF and CMB gave significant differences for secondary nitrate, biomass burning, and harbour-industrial sources, due to non-compatibility of these source profiles that have local specificities. When these site-dependent specificities were taken into account, optimising the input source profiles of CMB, a significant improvement in the comparison of the estimated source contributions with PMF was obtained.
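
    As a crude analogue of the factor-analytic step in PMF, the sketch below runs a non-negative matrix factorization on a synthetic samples-by-species matrix. True PMF additionally weights every data point by its measurement uncertainty, and the codes used in the study (PMF3.0/PMF5.0, CMB) implement far more than this; the matrix sizes and three-source structure here are assumptions for illustration only.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Synthetic example: 347 daily samples x 21 chemical species, built from
# 3 hypothetical source profiles (real input would be measured concentrations).
true_profiles = rng.dirichlet(np.ones(21), size=3)               # sources x species
contributions = rng.gamma(shape=2.0, scale=1.0, size=(347, 3))   # samples x sources
X = np.clip(contributions @ true_profiles + rng.normal(0, 0.01, (347, 21)), 0, None)

# Non-negative factorization X ~ G (contributions) x F (profiles).
model = NMF(n_components=3, init="nndsvda", max_iter=1000, random_state=0)
G = model.fit_transform(X)     # estimated source contributions per sample
F = model.components_          # estimated source chemical profiles
print(G.shape, F.shape)        # (347, 3) (3, 21)
```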

  19. A surface complexation and ion exchange model of Pb and Cd competitive sorption on natural soils

    NASA Astrophysics Data System (ADS)

    Serrano, Susana; O'Day, Peggy A.; Vlassopoulos, Dimitri; García-González, Maria Teresa; Garrido, Fernando

    2009-02-01

    The bioavailability and fate of heavy metals in the environment are often controlled by sorption reactions on the reactive surfaces of soil minerals. We have developed a non-electrostatic equilibrium model (NEM) with both surface complexation and ion exchange reactions to describe the sorption of Pb and Cd in single- and binary-metal systems over a range of pH and metal concentration. Mineralogical and exchange properties of three different acidic soils were used to constrain surface reactions in the model and to estimate surface densities for sorption sites, rather than treating them as adjustable parameters. Soil heterogeneity was modeled with >FeOH and >SOH functional groups, representing Fe- and Al-oxyhydroxide minerals and phyllosilicate clay mineral edge sites, and two ion exchange sites (X - and Y -), representing clay mineral exchange. An optimization process was carried out using the entire experimental sorption data set to determine the binding constants for Pb and Cd surface complexation and ion exchange reactions. Modeling results showed that the adsorption of Pb and Cd was distributed between ion exchange sites at low pH values and specific adsorption sites at higher pH values, mainly associated with >FeOH sites. Modeling results confirmed the greater tendency of Cd to be retained on exchange sites compared to Pb, which had a higher affinity than Cd for specific adsorption on >FeOH sites. Lead retention on >FeOH occurred at lower pH than for Cd, suggesting that Pb sorbs to surface hydroxyl groups at pH values at which Cd interacts only with exchange sites. The results from the binary system (both Pb and Cd present) showed that Cd retained in >FeOH sites decreased significantly in the presence of Pb, while the occupancy of Pb in these sites did not change in the presence of Cd. As a consequence of this competition, Cd was shifted to ion exchange sites, where it competes with Pb and possibly Ca (from the background electrolyte). Sorption on >SOH functional groups increased with increasing pH but was small compared to >FeOH sites, with little difference between single- and binary-metal systems. Model reactions and conditional sorption constants for Pb and Cd sorption were tested on a fourth soil that was not used for model optimization. The same reactions and constants were used successfully without adjustment by estimating surface site concentrations from soil mineralogy. The model formulation developed in this study is applicable to acidic mineral soils with low organic matter content. Extension of the model to soils of different composition may require selection of surface reactions that account for differences in clay and oxide mineral composition and organic matter content.

  20. External quality-assurance results for the National Atmospheric Deposition Program / National Trends Network and Mercury Deposition Network, 2004

    USGS Publications Warehouse

    Wetherbee, Gregory A.; Latysh, Natalie E.; Greene, Shannon M.

    2006-01-01

    The U.S. Geological Survey (USGS) used five programs to provide external quality-assurance monitoring for the National Atmospheric Deposition Program/National Trends Network (NADP/NTN) and two programs to provide external quality-assurance monitoring for the NADP/Mercury Deposition Network (NADP/MDN) during 2004. An intersite-comparison program was used to estimate accuracy and precision of field-measured pH and specific-conductance. The variability and bias of NADP/NTN data attributed to field exposure, sample handling and shipping, and laboratory chemical analysis were estimated using the sample-handling evaluation (SHE), field-audit, and interlaboratory-comparison programs. Overall variability of NADP/NTN data was estimated using a collocated-sampler program. Variability and bias of NADP/MDN data attributed to field exposure, sample handling and shipping, and laboratory chemical analysis were estimated using a system-blank program and an interlaboratory-comparison program. In two intersite-comparison studies, approximately 89 percent of NADP/NTN site operators met the pH measurement accuracy goals, and 94.7 to 97.1 percent of NADP/NTN site operators met the accuracy goals for specific conductance. Field chemistry measurements were discontinued by NADP at the end of 2004. As a result, the USGS intersite-comparison program also was discontinued at the end of 2004. Variability and bias in NADP/NTN data due to sample handling and shipping were estimated from paired-sample concentration differences and specific conductance differences obtained for the SHE program. Median absolute errors (MAEs) equal to less than 3 percent were indicated for all measured analytes except potassium and hydrogen ion. Positive bias was indicated for most of the measured analytes except for calcium, hydrogen ion and specific conductance. Negative bias for hydrogen ion and specific conductance indicated loss of hydrogen ion and decreased specific conductance from contact of the sample with the collector bucket. Field-audit results for 2004 indicate dissolved analyte loss in more than one-half of NADP/NTN wet-deposition samples for all analytes except chloride. Concentrations of contaminants also were estimated from field-audit data. On the basis of 2004 field-audit results, at least 25 percent of the 2004 NADP/NTN concentrations for sodium, potassium, and chloride were lower than the maximum sodium, potassium, and chloride contamination likely to be found in 90 percent of the samples with 90-percent confidence. Variability and bias in NADP/NTN data attributed to chemical analysis by the NADP Central Analytical Laboratory (CAL) were comparable to the variability and bias estimated for other laboratories participating in the interlaboratory-comparison program for all analytes. Variability in NADP/NTN ammonium data evident in 2002-03 was reduced substantially during 2004. Sulfate, hydrogen-ion, and specific conductance data reported by CAL during 2004 were positively biased. A significant (a = 0.05) bias was identified for CAL sodium, potassium, ammonium, and nitrate data, but the absolute values of the median differences for these analytes were less than the method detection limits. No detections were reported for CAL analyses of deionized-water samples, indicating that contamination was not a problem for CAL. Control charts show that CAL data were within statistical control during at least 90 percent of 2004. 
Most 2004 CAL interlaboratory-comparison results for synthetic wet-deposition solutions were within ±10 percent of the most probable values (MPVs) for solution concentrations except for chloride, nitrate, sulfate, and specific conductance results from one sample in November and one specific conductance result in December. Overall variability of NADP/NTN wet-deposition measurements was estimated during water year 2004 by the median absolute errors for weekly wet-deposition sample concentrations and precipitation measurements for tw

  1. Can we reliably estimate managed forest carbon dynamics using remotely sensed data?

    NASA Astrophysics Data System (ADS)

    Smallman, Thomas Luke; Exbrayat, Jean-Francois; Bloom, A. Anthony; Williams, Mathew

    2015-04-01

    Forests are an important part of the global carbon cycle, serving as both a large store of carbon and currently as a net sink of CO2. Forest biomass varies significantly in time and space, linked to climate, soils, natural disturbance and human impacts. This variation means that the global distribution of forest biomass and its dynamics is poorly quantified. Terrestrial ecosystem models (TEMs) are rarely evaluated for their predictions of forest carbon stocks and dynamics, due to a lack of knowledge of site-specific factors such as disturbance dates and/or management interventions. In this regard, managed forests present a valuable opportunity for model calibration and improvement. Spatially explicit datasets of planting dates, species and yield classification, in combination with remote sensing data and an appropriate data assimilation (DA) framework, can reduce prediction uncertainty and error. We use a Bayesian approach to calibrate the data assimilation linked ecosystem carbon (DALEC) model using a Metropolis-Hastings Markov chain Monte Carlo (MH-MCMC) framework. Forest management information is incorporated into the data assimilation framework as part of ecological and dynamic constraints (EDCs). The key advantage here is that DALEC simulates a full carbon balance, not just the living biomass, and that both parameter and prediction uncertainties are estimated as part of the DA analysis. DALEC has been calibrated at two managed forests, in the USA (Pinus taeda; Duke Forest) and UK (Picea sitchensis; Griffin Forest). At each site DALEC is calibrated twice (exp1 & exp2). Both calibrations (exp1 & exp2) assimilated MODIS LAI and HWSD estimates of carbon stored in soil organic matter, in addition to common management information and prior knowledge included in parameter priors and the EDCs. Calibration exp1 also utilises multiple site-level estimates of carbon storage in multiple pools. By comparing simulations we determine the impact of site-level observations on prediction uncertainty and error, and which observations are key to constraining ecosystem processes. Preliminary simulations indicate that DALEC calibration exp1 accurately simulated the assimilated observations for forest and soil carbon stock estimates including, critically for forestry, standing wood stocks (R2 = 0.92, bias = -4.46 MgC ha-1, RMSE = 5.80 MgC ha-1). The results from exp1 indicate the model is able to find parameters that are consistent with both the EDCs and the observations. In the absence of site-level stock observations (exp2), DALEC accurately estimates foliage and fine root pools, while the median estimates of above-ground litter and wood stocks (R2 = 0.92, bias = -48.30 MgC ha-1, RMSE = 50.30 MgC ha-1) are over- and underestimated, respectively, although the site-level observations remain within model uncertainty. These results indicate that we can estimate managed forest dynamics using remotely sensed data, particularly as remotely sensed above-ground biomass maps become available to provide constraints to correct biases in woody accumulation.
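
    The calibration described above rests on Metropolis-Hastings MCMC: propose new parameter values, accept or reject them according to the posterior ratio, and read parameter and prediction uncertainty off the retained chain. The sketch below is an illustration only, not the authors' DALEC code; the one-pool carbon model, prior bounds, observations, and step sizes are hypothetical stand-ins.

        import numpy as np

        rng = np.random.default_rng(42)

        # Hypothetical "observations": wood carbon stocks (MgC/ha) at a few stand ages.
        obs_years = np.array([5, 10, 15, 20])
        obs_stock = np.array([20.0, 55.0, 95.0, 130.0])
        obs_sigma = 10.0  # assumed observation uncertainty

        def simulate(params, n_years=20):
            """Toy one-pool model: annual wood increment minus first-order turnover."""
            wood_increment, turnover = params
            stock, out = 0.0, []
            for _ in range(n_years):
                stock += wood_increment - turnover * stock
                out.append(stock)
            return np.array(out)

        def log_prior(params):
            """Flat prior within plausible bounds (a stand-in for EDC-style constraints)."""
            wood_increment, turnover = params
            if 0.0 < wood_increment < 20.0 and 0.0 < turnover < 0.2:
                return 0.0
            return -np.inf

        def log_likelihood(params):
            sim = simulate(params)[obs_years - 1]
            return -0.5 * np.sum(((sim - obs_stock) / obs_sigma) ** 2)

        def log_posterior(params):
            lp = log_prior(params)
            return lp + log_likelihood(params) if np.isfinite(lp) else -np.inf

        # Metropolis-Hastings random-walk sampler.
        current = np.array([5.0, 0.05])
        current_lp = log_posterior(current)
        step = np.array([0.5, 0.005])
        chain = []
        for _ in range(20000):
            proposal = current + rng.normal(0.0, step)
            proposal_lp = log_posterior(proposal)
            if np.log(rng.random()) < proposal_lp - current_lp:
                current, current_lp = proposal, proposal_lp
            chain.append(current.copy())

        chain = np.array(chain[5000:])  # discard burn-in
        print("posterior median (wood increment, turnover):", np.median(chain, axis=0))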

  2. Assessing Forest NPP: BIOME-BGC Predictions versus BEF Derived Estimates

    NASA Astrophysics Data System (ADS)

    Hasenauer, H.; Pietsch, S. A.; Petritsch, R.

    2007-05-01

    Forest productivity has always been a major issue within sustainable forest management. While in the past terrestrial forest inventory data have been the major source for assessing forest productivity, recent developments in ecosystem modeling offer an alternative approach using ecosystem models such as Biome-BGC to estimate Net Primary Production (NPP). In this study we compare two terrestrially driven approaches for assessing NPP: (i) estimates from a species-specific adaptation of the biogeochemical ecosystem model BIOME-BGC calibrated for Alpine conditions; and (ii) NPP estimates derived from inventory data using biomass expansion factors (BEF). The forest inventory data come from 624 sample plots across Austria and consist of repeated individual tree observations and include growth as well as soil and humus information. These locations are covered with spruce, beech, oak, pine and larch stands, thus addressing the main Austrian forest types. A total of 144 locations were previously used in a validation effort to produce species-specific parameter estimates of the ecosystem model. The remaining 480 sites are from the Austrian National Forest Soil Survey carried out at the Federal Research and Training Centre for Forests, Natural Hazards and Landscape (BFW). Using diameter at breast height (dbh) and height (h), the volume and subsequently the biomass of individual trees were calculated, aggregated to the whole forest stand, and compared with the model output. Regression analyses were performed for both volume and biomass estimates.
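
    The inventory-based side of this comparison converts tree measurements to volume and then to biomass through expansion factors. The sketch below only illustrates that chain of calculation; the form factor, wood density, and BEF values are placeholders, not the species-specific coefficients used in the study.

        import math

        def stem_volume_m3(dbh_cm, height_m, form_factor=0.45):
            """Stem volume from dbh and height via a simple form-factor approach."""
            basal_area_m2 = math.pi * (dbh_cm / 200.0) ** 2
            return basal_area_m2 * height_m * form_factor

        def tree_biomass_kg(dbh_cm, height_m, wood_density_kg_m3=430.0, bef=1.3):
            """Above-ground biomass = stem volume * basic wood density * biomass expansion factor."""
            return stem_volume_m3(dbh_cm, height_m) * wood_density_kg_m3 * bef

        # Aggregate a hypothetical plot of measured trees to a per-hectare value.
        trees = [(32.0, 24.0), (28.5, 22.0), (41.0, 27.5)]  # (dbh_cm, height_m)
        plot_area_ha = 0.05
        biomass_t_ha = sum(tree_biomass_kg(d, h) for d, h in trees) / 1000.0 / plot_area_ha
        print(f"plot above-ground biomass: {biomass_t_ha:.1f} t/ha")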

  3. Two-generation analysis of pollen flow across a landscape. V. A stepwise approach for extracting factors contributing to pollen structure.

    Treesearch

    R. J. Dyer; R. D. Westfall; V. L. Sork; P. E. Smouse

    2004-01-01

    Patterns of pollen dispersal are central to both the ecology and evolution of plant populations. However, the mechanisms controlling either the dispersal process itself or our estimation of that process may be influenced by site-specific factors such as local forest structure and nonuniform adult genetic structure. Here, we present an extension of the AMOVA model...

  4. Optimizing fish and stream-water mercury metrics for calculation of fish bioaccumulation factors

    Treesearch

    Paul Bradley; Karen Riva Murray; Barbara C. Scudder Elkenberry; Christopher D. Knightes; Celeste A. Journey; Mark A. Brigham

    2016-01-01

    Mercury (Hg) bioaccumulation factors (BAFs; ratios of Hg in fish [Hgfish] to Hg in water [Hgwater]) are used to develop Total Maximum Daily Load and water quality criteria for Hg-impaired waters. Protection of wildlife and human health depends directly on the accuracy of site-specific estimates of Hgfish and Hgwater and the predictability of the relation between these...
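
    As a minimal illustration of the ratio itself (not of the optimization analysis in the study), the snippet below computes a BAF from one fish-tissue and one stream-water Hg concentration; the unit convention shown is an assumption.

        import math

        def bioaccumulation_factor(hg_fish_mg_per_kg, hg_water_ng_per_l):
            """BAF (L/kg) = Hg concentration in fish tissue / Hg concentration in water.

            Units here (fish tissue in mg/kg, filtered water in ng/L) are one common
            convention and are assumptions, not necessarily those of the cited study.
            """
            hg_fish_ng_per_kg = hg_fish_mg_per_kg * 1.0e6
            return hg_fish_ng_per_kg / hg_water_ng_per_l

        baf = bioaccumulation_factor(hg_fish_mg_per_kg=0.30, hg_water_ng_per_l=1.2)
        print(f"BAF = {baf:.3g} L/kg, log10(BAF) = {math.log10(baf):.2f}")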

  5. Interaction of benzo[a]pyrene diol epoxide isomers with human serum albumin: Site specific characterisation of adducts and associated kinetics

    NASA Astrophysics Data System (ADS)

    Motwani, Hitesh V.; Westberg, Emelie; Törnqvist, Margareta

    2016-11-01

    Carcinogenicity of benzo[a]pyrene {B[a]P, a polycyclic aromatic hydrocarbon (PAH)} involves DNA modification by B[a]P diol epoxide (BPDE) metabolites. Adducts to serum albumin (SA) are not repaired, unlike DNA adducts, and are therefore considered advantageous for assessing the in vivo dose of BPDEs. In the present work, kinetic experiments were performed in relation to the dose (i.e. concentration over time) of different BPDE isomers, where human SA (hSA) was incubated with the respective BPDEs under physiological conditions. A liquid chromatography (LC) tandem mass spectrometry methodology was employed to characterise the respective BPDE adducts at histidine and lysine. This strategy made it possible to structurally distinguish between the adducts from racemic anti- and syn-BPDE and between (+)- and (-)-anti-BPDE, which had not been attained previously. The adduct levels quantified by LC-UV and the estimated rate of disappearance of BPDEs in the presence of hSA gave insight into the reactivity of the diol epoxides towards the N-sites on SA. The structure-specific method and dosimetry described in this work could be used for accurate estimation of the in vivo dose of the BPDEs following exposure to B[a]P, primarily in dose-response studies of genotoxicity, e.g. in mice, to aid in quantitative risk assessment of PAHs.

  6. Statistical adjustment of culture-independent diagnostic tests for trend analysis in the Foodborne Diseases Active Surveillance Network (FoodNet), USA.

    PubMed

    Gu, Weidong; Dutta, Vikrant; Patrick, Mary; Bruce, Beau B; Geissler, Aimee; Huang, Jennifer; Fitzgerald, Collette; Henao, Olga

    2018-03-19

    Culture-independent diagnostic tests (CIDTs) are increasingly used to diagnose Campylobacter infection in the Foodborne Diseases Active Surveillance Network (FoodNet). Because CIDTs have different performance characteristics compared with culture, which has been used historically and is still used to diagnose campylobacteriosis, adjustment of cases diagnosed by CIDT is needed to compare with culture-confirmed cases for monitoring incidence trends. We identified the necessary parameters for CIDT adjustment using culture as the gold standard, and derived formulas to calculate positive predictive values (PPVs). We conducted a literature review and meta-analysis to examine the variability in CIDT performance and Campylobacter prevalence applicable to FoodNet sites. We then developed a Monte Carlo method to estimate test-type and site-specific PPVs with their associated uncertainties. The uncertainty in our estimated PPVs was largely derived from uncertainty about the specificity of CIDTs and low prevalence of Campylobacter in tested samples. Stable CIDT-adjusted incidences of Campylobacter cases from 2012 to 2015 were observed compared with a decline in culture-confirmed incidence. We highlight the lack of data on the total numbers of tested samples as one of the main limitations for CIDT adjustment. Our results demonstrate the importance of adjusting CIDTs for understanding trends in Campylobacter incidence in FoodNet.
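
    A Monte Carlo adjustment of this kind draws test sensitivity, specificity, and prevalence from assumed distributions, computes the positive predictive value from Bayes' rule, and propagates the spread into the adjusted case count. The sketch below is illustrative only; the beta distributions and case count are hypothetical, not FoodNet values.

        import numpy as np

        rng = np.random.default_rng(0)
        n_draws = 100_000

        # Hypothetical beta distributions for CIDT sensitivity, specificity, and
        # Campylobacter prevalence among tested samples (placeholder parameters).
        sensitivity = rng.beta(90, 10, n_draws)    # centered near 0.90
        specificity = rng.beta(970, 30, n_draws)   # centered near 0.97
        prevalence = rng.beta(5, 95, n_draws)      # centered near 0.05

        # PPV from Bayes' rule, treating culture as the gold standard.
        ppv = (sensitivity * prevalence) / (
            sensitivity * prevalence + (1.0 - specificity) * (1.0 - prevalence)
        )

        cidt_positive_reports = 500  # hypothetical count of CIDT-positive reports
        adjusted_cases = cidt_positive_reports * ppv

        lo, med, hi = np.percentile(adjusted_cases, [2.5, 50, 97.5])
        print(f"adjusted case count: {med:.0f} (95% interval {lo:.0f}-{hi:.0f})")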

  7. Coordinating across scales: Building a regional marsh bird monitoring program from national and state Initiatives

    USGS Publications Warehouse

    Shriver, G.W.; Sauer, J.R.

    2008-01-01

    Salt marsh breeding bird populations (rails, bitterns, sparrows, etc.) in eastern North America are high conservation priorities in need of site-specific and regional monitoring designed to detect population changes over time. The present status and trends of these species are unknown, but anecdotal evidence of declines in many of the species has raised conservation concerns. Most of these species are listed as conservation priorities in comprehensive wildlife plans throughout the eastern U.S. National Wildlife Refuges, National Park Service units, and other wildlife conservation areas provide important salt marsh habitat. To meet management needs for these areas, and to assist regional conservation planning, survey designs are being developed to estimate abundance and population trends for these breeding bird species. The primary purpose of this project is to develop a hierarchical sampling frame for salt marsh birds in Bird Conservation Region (BCR) 30 that will provide the ability to estimate species population abundances at 1) specific sites (e.g., National Parks and National Wildlife Refuges), 2) within states or regions, and 3) within BCR 30. The entire breeding ranges of the Saltmarsh Sharp-tailed and Coastal Plain Swamp sparrows are within BCR 30, providing an opportunity to detect population trends within the entire breeding ranges of two priority species.

  8. Estimated monthly percentile discharges at ungaged sites in the Upper Yellowstone River Basin in Montana

    USGS Publications Warehouse

    Parrett, Charles; Hull, J.A.

    1986-01-01

    Once-monthly streamflow measurements were used to estimate selected percentile discharges on flow-duration curves of monthly mean discharge for 40 ungaged stream sites in the upper Yellowstone River basin in Montana. The estimation technique was a modification of the concurrent-discharge method previously described and used by H.C. Riggs to estimate annual mean discharge. The modified technique is based on the relationship of various mean seasonal discharges to the required discharges on the flow-duration curves. The mean seasonal discharges are estimated from the monthly streamflow measurements, and the percentile discharges are calculated from regression equations. The regression equations, developed from streamflow records at nine gaging stations, indicated a significant log-linear relationship between mean seasonal discharge and various percentile discharges. The technique was tested at two discontinued streamflow-gaging stations; the differences between estimated monthly discharges and those determined from the discharge record ranged from -31 to +27 percent at one site and from -14 to +85 percent at the other. The estimates at one site were unbiased, and the estimates at the other site were consistently larger than the recorded values. Based on the test results, the probable average error of the technique was ±30 percent for the 21 sites measured during the first year of the program and ±50 percent for the 19 sites measured during the second year. (USGS)
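
    The modified concurrent-discharge technique amounts to fitting a log-linear relation between mean seasonal discharge and a percentile discharge at gaged stations, then applying that relation to the mean seasonal discharge estimated from once-monthly measurements at an ungaged site. A minimal sketch with made-up station data (not the report's nine stations) follows.

        import numpy as np

        # Hypothetical gaged-station values: mean seasonal discharge and the
        # 50th-percentile monthly discharge, both in cubic feet per second.
        mean_seasonal_q = np.array([12.0, 35.0, 80.0, 150.0, 320.0, 610.0, 900.0, 1500.0, 2400.0])
        q50_monthly = np.array([7.0, 22.0, 50.0, 95.0, 210.0, 400.0, 640.0, 1050.0, 1700.0])

        # Fit the log-linear relation log10(Q50) = a + b * log10(Qseasonal).
        b, a = np.polyfit(np.log10(mean_seasonal_q), np.log10(q50_monthly), deg=1)

        def estimate_q50(mean_seasonal_estimate):
            """Apply the fitted relation to a mean seasonal discharge estimated at an
            ungaged site from once-monthly measurements."""
            return 10.0 ** (a + b * np.log10(mean_seasonal_estimate))

        print(f"fitted relation: log10(Q50) = {a:.3f} + {b:.3f} * log10(Qseasonal)")
        print(f"estimated Q50 for Qseasonal = 250 ft^3/s: {estimate_q50(250.0):.0f} ft^3/s")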

  9. Estimation of the epidemiological burden of human papillomavirus-related cancers and non-malignant diseases in men in Europe: a review

    PubMed Central

    2012-01-01

    Background The role of human papillomavirus (HPV) in malignant and non-malignant genital diseases in women is well known and the corresponding epidemiological burden has been widely described. However, less is known about the role of HPV in anal, penile and head and neck cancer, and the burden of malignant and non-malignant HPV-related diseases in men. The objective of this review is to estimate the epidemiological burden of HPV-related cancers and non-malignant diseases in men in Europe. Methods The annual number of new HPV-related cancers in men in Europe was estimated using Eurostat population data and applying cancer incidence rates published by the International Agency for Research on Cancer. The number of cancer cases attributable to HPV, and specifically to HPV16/18, was calculated based on the most relevant prevalence estimates. The annual number of new cases of genital warts was calculated from the most robust European studies; and latest HPV6/11 prevalence estimates were then applied. A literature review was also performed to retrieve exhaustive data on HPV infection at all anatomical sites under study, as well as incidence and prevalence of external genital warts, recurrent respiratory papillomatosis and HPV-related cancer trends in men in Europe. Results A total of 72,694 new cancer cases at HPV-related anatomical sites were estimated to occur each year in men in Europe. 17,403 of these cancer cases could be attributable to HPV, with 15,497 of them specifically attributable to HPV16/18. In addition, between 286,682 and 325,722 new cases of genital warts attributable to HPV6/11 were estimated to occur annually in men in Europe. Conclusions The overall estimated epidemiological burden of HPV-related cancers and non-malignant diseases is high in men in Europe. Approximately 30% of all new cancer cases attributable to HPV16/18 that occur yearly in Europe were estimated to occur in men. As in women, the vast majority of HPV-positive cancer in men is related to HPV16/18, while almost all HPV-related non-malignant diseases are due to HPV6/11. A substantial number of these malignant and non-malignant diseases may potentially be prevented by quadrivalent HPV vaccination. PMID:22260541
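
    The estimation chain in the Methods is essentially multiplicative: an incidence rate applied to a population gives new cases, and HPV and HPV16/18 prevalence fractions then give attributable cases. The numbers in the sketch below are placeholders chosen only to show the arithmetic, not the study's inputs.

        # Illustrative arithmetic only; rate, population, and prevalence values are assumptions.
        incidence_per_100k = 1.0            # cancer incidence at one anatomical site, per 100,000 men
        male_population = 360_000_000       # assumed male population at risk
        hpv_prevalence_in_tumours = 0.30    # assumed fraction of tumours positive for HPV
        hpv16_18_share = 0.85               # assumed share of HPV-positive tumours due to HPV16/18

        new_cases = incidence_per_100k / 100_000 * male_population
        hpv_attributable = new_cases * hpv_prevalence_in_tumours
        hpv16_18_cases = hpv_attributable * hpv16_18_share
        print(round(new_cases), round(hpv_attributable), round(hpv16_18_cases))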

  10. CO2 storage capacity estimation: Methodology and gaps

    USGS Publications Warehouse

    Bachu, S.; Bonijoly, D.; Bradshaw, J.; Burruss, R.; Holloway, S.; Christensen, N.P.; Mathiassen, O.M.

    2007-01-01

    Implementation of CO2 capture and geological storage (CCGS) technology at the scale needed to achieve a significant and meaningful reduction in CO2 emissions requires knowledge of the available CO2 storage capacity. CO2 storage capacity assessments may be conducted at various scales; in decreasing order of size and increasing order of resolution, these are the country, basin, regional, local, and site-specific scales. Estimation of the CO2 storage capacity in depleted oil and gas reservoirs is straightforward and is based on recoverable reserves, reservoir properties and in situ CO2 characteristics. In the case of CO2-EOR, the CO2 storage capacity can be roughly evaluated on the basis of worldwide field experience or more accurately through numerical simulations. Determination of the theoretical CO2 storage capacity in coal beds is based on coal thickness and CO2 adsorption isotherms, and recovery and completion factors. Evaluation of the CO2 storage capacity in deep saline aquifers is very complex because four trapping mechanisms that act at different rates are involved and, at times, all mechanisms may be operating simultaneously. The level of detail and resolution required in the data make reliable and accurate estimation of CO2 storage capacity in deep saline aquifers practical only at the local and site-specific scales. This paper follows a previous one on issues and development of standards for CO2 storage capacity estimation, and provides a clear set of definitions and methodologies for the assessment of CO2 storage capacity in geological media. Notwithstanding the defined methodologies suggested for estimating CO2 storage capacity, major challenges lie ahead because of lack of data, particularly for coal beds and deep saline aquifers, lack of knowledge about the coefficients that reduce storage capacity from theoretical to effective and to practical, and lack of knowledge about the interplay between various trapping mechanisms at work in deep saline aquifers. © 2007 Elsevier Ltd. All rights reserved.

  11. Annualized earthquake loss estimates for California and their sensitivity to site amplification

    USGS Publications Warehouse

    Chen, Rui; Jaiswal, Kishor; Bausch, D; Seligson, H; Wills, C.J.

    2016-01-01

    Input datasets for annualized earthquake loss (AEL) estimation for California were updated recently by the scientific community, and include the National Seismic Hazard Model (NSHM), site-response model, and estimates of shear-wave velocity. Additionally, the Federal Emergency Management Agency's loss estimation tool, Hazus, was updated to include the most recent census and economic exposure data. These enhancements necessitated a revisit to our previous AEL estimates and a study of the sensitivity of AEL estimates to alternate inputs for site amplification. The NSHM ground motions for a uniform site condition are modified to account for the effect of local near-surface geology. The site conditions are approximated in three ways: (1) by VS30 (time-averaged shear-wave velocity in the upper 30 m) value obtained from a geology- and topography-based map consisting of 15 VS30 groups, (2) by site classes categorized according to National Earthquake Hazards Reduction Program (NEHRP) site classification, and (3) by a uniform NEHRP site class D. In case 1, ground motions are amplified using the Seyhan and Stewart (2014) semiempirical nonlinear amplification model. In cases 2 and 3, ground motions are amplified using the 2014 version of the NEHRP site amplification factors, which are also based on the Seyhan and Stewart model but are approximated to facilitate their use for building code applications. Estimated AELs are presented at multiple resolutions, starting with the state-level assessment and followed by detailed assessments for counties, metropolitan statistical areas (MSAs), and cities. The AEL estimate at the state level is approximately $3.7 billion, 70% of which is contributed by the Los Angeles-Long Beach-Santa Ana, San Francisco-Oakland-Fremont, and Riverside-San Bernardino-Ontario MSAs. The statewide AEL estimate is insensitive to alternate assumptions of site amplification. However, we note significant differences in AEL estimates among the three sensitivity cases for smaller geographic units.

  12. Quantitative PET/CT scanner performance characterization based upon the society of nuclear medicine and molecular imaging clinical trials network oncology clinical simulator phantom.

    PubMed

    Sunderland, John J; Christian, Paul E

    2015-01-01

    The Clinical Trials Network (CTN) of the Society of Nuclear Medicine and Molecular Imaging (SNMMI) operates a PET/CT phantom imaging program using the CTN's oncology clinical simulator phantom, designed to validate scanners at sites that wish to participate in oncology clinical trials. Since its inception in 2008, the CTN has collected 406 well-characterized phantom datasets from 237 scanners at 170 imaging sites covering the spectrum of commercially available PET/CT systems. The combined and collated phantom data describe a global profile of quantitative performance and variability of PET/CT data used in both clinical practice and clinical trials. Individual sites filled and imaged the CTN oncology PET phantom according to detailed instructions. Standard clinical reconstructions were requested and submitted. The phantom itself contains uniform regions suitable for scanner calibration assessment, lung fields, and 6 hot spheric lesions with diameters ranging from 7 to 20 mm at a 4:1 contrast ratio with primary background. The CTN Phantom Imaging Core evaluated the quality of the phantom fill and imaging and measured background standardized uptake values to assess scanner calibration and maximum standardized uptake values of all 6 lesions to review quantitative performance. Scanner make-and-model-specific measurements were pooled and then subdivided by reconstruction to create scanner-specific quantitative profiles. Different makes and models of scanners predictably demonstrated different quantitative performance profiles including, in some cases, small calibration bias. Differences in site-specific reconstruction parameters increased the quantitative variability among similar scanners, with postreconstruction smoothing filters being the most influential parameter. Quantitative assessment of this intrascanner variability over this large collection of phantom data gives, for the first time, estimates of reconstruction variance introduced into trials from allowing trial sites to use their preferred reconstruction methodologies. Predictably, time-of-flight-enabled scanners exhibited less size-based partial-volume bias than non-time-of-flight scanners. The CTN scanner validation experience over the past 5 y has generated a rich, well-curated phantom dataset from which PET/CT make-and-model and reconstruction-dependent quantitative behaviors were characterized for the purposes of understanding and estimating scanner-based variances in clinical trials. These results should make it possible to identify and recommend make-and-model-specific reconstruction strategies to minimize measurement variability in cancer clinical trials. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.

  13. Estimates of External Validity Bias When Impact Evaluations Select Sites Nonrandomly

    ERIC Educational Resources Information Center

    Bell, Stephen H.; Olsen, Robert B.; Orr, Larry L.; Stuart, Elizabeth A.

    2016-01-01

    Evaluations of educational programs or interventions are typically conducted in nonrandomly selected samples of schools or districts. Recent research has shown that nonrandom site selection can yield biased impact estimates. To estimate the external validity bias from nonrandom site selection, we combine lists of school districts that were…

  14. Analysis of low flows and selected methods for estimating low-flow characteristics at partial-record and ungaged stream sites in western Washington

    USGS Publications Warehouse

    Curran, Christopher A.; Eng, Ken; Konrad, Christopher P.

    2012-01-01

    Regional low-flow regression models for estimating Q7,10 at ungaged stream sites are developed from the records of daily discharge at 65 continuous gaging stations (including 22 discontinued gaging stations) for the purpose of evaluating explanatory variables. By incorporating the base-flow recession time constant τ as an explanatory variable in the regression model, the root-mean-square error for estimating Q7,10 at ungaged sites can be lowered to 72 percent (for known values of τ), which is 42 percent less than if only basin area and mean annual precipitation are used as explanatory variables. If partial-record sites are included in the regression data set, τ must be estimated from pairs of discharge measurements made during continuous periods of declining low flows. Eight measurement pairs are optimal for estimating τ at partial-record sites, and result in a 25-percent reduction in the root-mean-square error. A low-flow survey strategy that includes paired measurements at partial-record sites requires additional effort and planning beyond a standard strategy, but could be used to enhance regional estimates of τ and potentially reduce the error of regional regression models for estimating low-flow characteristics at ungaged sites.
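
    Under an assumed exponential base-flow recession, Q(t) = Q0·exp(-t/τ), each pair of discharge measurements made during a continuous period of declining low flow yields one estimate of τ, and several pairs can be combined into a site value. A minimal sketch with hypothetical measurement pairs:

        import math
        from statistics import median

        def recession_time_constant_days(q_start, q_end, elapsed_days):
            """Base-flow recession constant tau (days), assuming exponential recession
            Q(t) = Q0 * exp(-t / tau) between two measurements made while low flow declines."""
            if not (q_start > q_end > 0):
                raise ValueError("expects declining, positive discharges")
            return elapsed_days / math.log(q_start / q_end)

        # Hypothetical measurement pairs at a partial-record site: (Q1, Q2, days apart).
        pairs = [(14.0, 9.5, 12), (11.0, 8.2, 10), (9.0, 6.1, 14), (7.5, 5.9, 9),
                 (6.8, 4.9, 11), (5.5, 4.2, 8), (4.8, 3.5, 12), (4.1, 3.2, 9)]

        tau_estimates = [recession_time_constant_days(q1, q2, dt) for q1, q2, dt in pairs]
        print(f"median tau estimate: {median(tau_estimates):.1f} days")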

  15. Estimation of Leakage Potential of Selected Sites in Interstate and Tri-State Canals Using Geostatistical Analysis of Selected Capacitively Coupled Resistivity Profiles, Western Nebraska, 2004

    USGS Publications Warehouse

    Vrabel, Joseph; Teeple, Andrew; Kress, Wade H.

    2009-01-01

    With increasing demands for reliable water supplies and availability estimates, groundwater flow models often are developed to enhance understanding of surface-water and groundwater systems. Specific hydraulic variables must be known or calibrated for the groundwater-flow model to accurately simulate current or future conditions. Surface geophysical surveys, along with selected test-hole information, can provide an integrated framework for quantifying hydrogeologic conditions within a defined area. In 2004, the U.S. Geological Survey, in cooperation with the North Platte Natural Resources District, performed a surface geophysical survey using a capacitively coupled resistivity technique to map the lithology within the top 8 meters of the near-surface for 110 kilometers of the Interstate and Tri-State Canals in western Nebraska and eastern Wyoming. Assuming that leakage between the surface-water and groundwater systems is affected primarily by the sediment directly underlying the canal bed, leakage potential was estimated from the simple vertical mean of inverse-model resistivity values for depth levels with geometrically increasing layer thickness, which resulted in mean-resistivity values biased toward the surface. This method generally produced reliable results, but an improved analysis method was needed to account for situations where confining units, composed of less permeable material, underlie units with greater permeability. In this report, prepared by the U.S. Geological Survey in cooperation with the North Platte Natural Resources District, the authors use geostatistical analysis to develop the minimum-unadjusted method to compute a relative leakage potential based on the minimum resistivity value in a vertical column of the resistivity model. The minimum-unadjusted method considers the effects of homogeneous confining units. The minimum-adjusted method also is developed to incorporate the effect of local lithologic heterogeneity on water transmission. Seven sites with differing geologic contexts were selected following review of the capacitively coupled resistivity data collected in 2004. A reevaluation of these sites using the mean, minimum-unadjusted, and minimum-adjusted methods was performed to compare the different approaches for estimating leakage potential. Five of the seven sites contained underlying confining units, for which the minimum-unadjusted and minimum-adjusted methods accounted for the confining-unit effect. Estimates of overall leakage potential were lower for the minimum-unadjusted and minimum-adjusted methods than those estimated by the mean method. For most sites, the local heterogeneity adjustment procedure of the minimum-adjusted method resulted in slightly larger overall leakage-potential estimates. In contrast to the mean method, the two minimum-based methods allowed the least permeable areas to control the overall vertical permeability of the subsurface. The minimum-adjusted method refined leakage-potential estimation by additionally including local lithologic heterogeneity effects.
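
    The contrast between the mean method and the minimum-unadjusted method can be shown with a single hypothetical resistivity column: averaging lets high-resistivity layers mask a confining unit, whereas taking the column minimum lets the least-permeable layer control the indicator. The values below are illustrative, and the mapping of the indicator to a leakage-potential category is only sketched, not the report's calibrated procedure.

        import numpy as np

        # Hypothetical vertical column of inverse-model resistivity (ohm-m) beneath the
        # canal bed, one value per depth layer; a low-resistivity (clay-rich) confining
        # layer sits partway down the column.
        column_resistivity = np.array([85.0, 70.0, 60.0, 12.0, 15.0, 55.0, 65.0])

        def mean_method(column):
            """Vertical mean of resistivity: high-resistivity layers can mask a confining unit."""
            return float(np.mean(column))

        def minimum_unadjusted_method(column):
            """Let the least-permeable (lowest-resistivity) layer control the column."""
            return float(np.min(column))

        print("mean-method indicator:   ", mean_method(column_resistivity))
        print("minimum-method indicator:", minimum_unadjusted_method(column_resistivity))
        # The minimum-based indicator is much lower, reflecting the confining layer's
        # control on vertical leakage that the simple mean obscures.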

  16. Estimating peak discharges, flood volumes, and hydrograph shapes of small ungaged urban streams in Ohio

    USGS Publications Warehouse

    Sherwood, J.M.

    1986-01-01

    Methods are presented for estimating peak discharges, flood volumes and hydrograph shapes of small (less than 5 sq mi) urban streams in Ohio. Examples of how to use the various regression equations and estimating techniques also are presented. Multiple-regression equations were developed for estimating peak discharges having recurrence intervals of 2, 5, 10, 25, 50, and 100 years. The significant independent variables affecting peak discharge are drainage area, main-channel slope, average basin-elevation index, and basin-development factor. Standard errors of regression and prediction for the peak discharge equations range from ±37% to ±41%. An equation also was developed to estimate the flood volume of a given peak discharge. Peak discharge, drainage area, main-channel slope, and basin-development factor were found to be the significant independent variables affecting flood volumes for given peak discharges. The standard error of regression for the volume equation is ±52%. A technique is described for estimating the shape of a runoff hydrograph by applying a specific peak discharge and the estimated lagtime to a dimensionless hydrograph. An equation for estimating the lagtime of a basin was developed. Two variables, main-channel length divided by the square root of the main-channel slope and basin-development factor, have a significant effect on basin lagtime. The standard error of regression for the lagtime equation is ±48%. The data base for the study was established by collecting rainfall-runoff data at 30 basins distributed throughout several metropolitan areas of Ohio. Five to eight years of data were collected at a 5-min record interval. The USGS rainfall-runoff model A634 was calibrated for each site. The calibrated models were used in conjunction with long-term rainfall records to generate a long-term streamflow record for each site. Each annual peak-discharge record was fitted to a Log-Pearson Type III frequency curve. Multiple-regression techniques were then used to analyze the peak discharge data as a function of the basin characteristics of the 30 sites. (Author's abstract)
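
    Applying a peak discharge and lagtime to a dimensionless hydrograph is a simple rescaling of its ordinates. The sketch below uses made-up dimensionless ordinates, not the dimensionless hydrograph from the report, to show the mechanics.

        # Illustrative dimensionless ordinates (t/lagtime, Q/Qpeak); placeholder values only.
        dimensionless_hydrograph = [
            (0.00, 0.00), (0.25, 0.12), (0.50, 0.43), (0.75, 0.83), (1.00, 1.00),
            (1.25, 0.88), (1.50, 0.64), (2.00, 0.30), (2.50, 0.12), (3.00, 0.03),
        ]

        def scale_hydrograph(peak_discharge_cfs, lagtime_hours):
            """Convert dimensionless ordinates (t/lagtime, Q/Qpeak) to a site hydrograph."""
            return [(t_ratio * lagtime_hours, q_ratio * peak_discharge_cfs)
                    for t_ratio, q_ratio in dimensionless_hydrograph]

        # Example: a hypothetical peak of 850 ft^3/s and an estimated basin lagtime of 1.8 hours.
        for time_hr, q_cfs in scale_hydrograph(850.0, 1.8):
            print(f"t = {time_hr:4.2f} h  Q = {q_cfs:6.1f} ft^3/s")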

  17. Path spectra derived from inversion of source and site spectra for earthquakes in Southern California

    NASA Astrophysics Data System (ADS)

    Klimasewski, A.; Sahakian, V. J.; Baltay, A.; Boatwright, J.; Fletcher, J. B.; Baker, L. M.

    2017-12-01

    A large source of epistemic uncertainty in Ground Motion Prediction Equations (GMPEs) is derived from the path term, currently represented as a simple geometric spreading and intrinsic attenuation term. Including additional physical relationships between the path properties and predicted ground motions would produce more accurate and precise, region-specific GMPEs by reclassifying some of the random, aleatory uncertainty as epistemic. This study focuses on regions of Southern California, using data from the Anza network and Southern California Seismic Network to create a catalog of events of magnitude 2.5 and larger from 1998 to 2016. The catalog encompasses regions of varying geology and therefore varying path and site attenuation. Within this catalog of events, we investigate several collections of event region-to-station pairs, each of which share similar origin locations and stations so that all events have similar paths. Compared with a simple regional GMPE, these paths consistently have high or low residuals. By working with events that have the same path, we can isolate source and site effects, and focus on the remaining residual as path effects. We decompose the recordings into source and site spectra for each unique event and site in our greater Southern California regional database using the inversion method of Andrews (1986). This model represents each natural-log record spectrum as the sum of its natural-log event and site spectra, while constraining each record to a reference site or Brune source spectrum. We estimate a regional, path-specific anelastic attenuation (Q) and site attenuation (t*) from the inversion site spectra and corner frequency from the inversion event spectra. We then compute the residuals between the observed record data and the inversion model prediction (event*site spectra). This residual is representative of path effects, likely anelastic attenuation along the path that varies from the regional median attenuation. We examine the residuals for our different sets independently to see how path terms differ between event-to-station collections. The path-specific information gained from this can inform development of terms for regional GMPEs, through understanding of these seismological phenomena.
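
    An Andrews-style decomposition treats each log record spectrum as the sum of a log event term and a log site term and solves the resulting linear system by least squares, with one reference site fixed to remove the trade-off between event and site terms. A minimal single-frequency sketch on synthetic data (not the Anza/SCSN catalog) follows; the remaining misfit plays the role of the path residual.

        import numpy as np

        # Synthetic setup: n_events events recorded at n_sites stations, one spectral
        # amplitude per record at a single frequency (a real inversion repeats this
        # for each frequency band).
        rng = np.random.default_rng(1)
        n_events, n_sites = 6, 4
        true_event = rng.uniform(0.5, 2.0, n_events)
        true_site = rng.uniform(0.7, 1.5, n_sites)
        records = np.log(np.outer(true_event, true_site)) + rng.normal(0, 0.05, (n_events, n_sites))

        # Build the design matrix encoding log(record) = log(event term) + log(site term).
        rows, rhs = [], []
        for i in range(n_events):
            for j in range(n_sites):
                row = np.zeros(n_events + n_sites)
                row[i] = 1.0             # event term
                row[n_events + j] = 1.0  # site term
                rows.append(row)
                rhs.append(records[i, j])

        # Constraint row: fix the first station as the reference site (log amplification = 0),
        # otherwise the event/site split is non-unique.
        constraint = np.zeros(n_events + n_sites)
        constraint[n_events] = 1.0
        rows.append(constraint)
        rhs.append(0.0)

        solution, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
        log_event, log_site = solution[:n_events], solution[n_events:]
        residuals = records - (log_event[:, None] + log_site[None, :])  # "path" residuals
        print("site terms (relative to reference):", np.round(np.exp(log_site), 3))
        print("RMS residual:", round(float(np.sqrt((residuals ** 2).mean())), 3))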

  18. Ground Motion Prediction Models for Caucasus Region

    NASA Astrophysics Data System (ADS)

    Jorjiashvili, Nato; Godoladze, Tea; Tvaradze, Nino; Tumanova, Nino

    2016-04-01

    Ground motion prediction models (GMPMs) relate ground motion intensity measures to variables describing earthquake source, path, and site effects. Estimation of expected ground motion is fundamental to earthquake hazard assessment. The most commonly used parameters in attenuation relations are peak ground acceleration and spectral acceleration, because these parameters give useful information for seismic hazard assessment. Development of the Georgian Digital Seismic Network began in 2003. In this study, new GMP models are obtained from new data recorded by the Georgian seismic network and by networks in neighboring countries. The models are estimated in the classical statistical way, by regression analysis. Site ground conditions are additionally considered because the same earthquake recorded at the same distance may cause different damage depending on ground conditions. Empirical ground-motion prediction models (GMPMs) require adjustment to make them appropriate for site-specific scenarios; however, the process of making such adjustments remains a challenge. This work presents a holistic framework for the development of a peak ground acceleration (PGA) or spectral acceleration (SA) GMPE that is easily adjustable to different seismological conditions and does not suffer from the practical problems associated with adjustments in the response spectral domain.

  19. Mapping Application for Penguin Populations and Projected Dynamics (MAPPPD): Data and Tools for Dynamic Management and Decision Support

    NASA Technical Reports Server (NTRS)

    Humphries, G. R. W.; Naveen, R.; Schwaller, M.; Che-Castaldo, C.; McDowall, P.; Schrimpf, M.; Schrimpf, Michael; Lynch, H. J.

    2017-01-01

    The Mapping Application for Penguin Populations and Projected Dynamics (MAPPPD) is a web-based, open access, decision-support tool designed to assist scientists, non-governmental organizations and policy-makers working to meet the management objectives as set forth by the Commission for the Conservation of Antarctic Marine Living Resources (CCAMLR) and other components of the Antarctic Treaty System (ATS) (that is, Consultative Meetings and the ATS Committee on Environmental Protection). MAPPPD was designed specifically to complement existing efforts such as the CCAMLR Ecosystem Monitoring Program (CEMP) and the ATS site guidelines for visitors. The database underlying MAPPPD includes all publicly available (published and unpublished) count data on emperor, gentoo, Adelie, and chinstrap penguins in Antarctica. Penguin population models are used to assimilate available data into estimates of abundance for each site and year. Results are easily aggregated across multiple sites to obtain abundance estimates over any user-defined area of interest. A front-end web interface located at www.penguinmap.com provides free and ready access to the most recent count and modelled data, and can act as a facilitator for data transfer between scientists and Antarctic stakeholders to help inform management decisions for the continent.

  20. A simple algorithm for identifying periods of snow accumulation on a radiometer

    NASA Astrophysics Data System (ADS)

    Lapo, Karl E.; Hinkelman, Laura M.; Landry, Christopher C.; Massmann, Adam K.; Lundquist, Jessica D.

    2015-09-01

    Downwelling solar, Qsi, and longwave, Qli, irradiances at the earth's surface are the primary energy inputs for many hydrologic processes, and uncertainties in measurements of these two terms confound evaluations of estimated irradiances and negatively impact hydrologic modeling. Observations of Qsi and Qli in cold environments are subject to conditions that create additional uncertainties not encountered in other climates, specifically the accumulation of snow on uplooking radiometers. To address this issue, we present an automated method for estimating these periods of snow accumulation. Our method is based on forest interception of snow and uses common meteorological observations. In this algorithm, snow accumulation must exceed a threshold to obscure the sensor and is only removed through scouring by wind or melting. The algorithm is evaluated at two sites representing different mountain climates: (1) Snoqualmie Pass, Washington (maritime) and (2) the Senator Beck Basin Study Area, Colorado (continental). The algorithm agrees well with time-lapse camera observations at the Washington site and with multiple measurements at the Colorado site, with 70-80% of observed snow accumulation events correctly identified. We suggest using the method for quality controlling irradiance observations in snow-dominated climates where regular, daily maintenance is not possible.
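
    A minimal sketch of the flagging logic described above, with all thresholds and variable names as illustrative assumptions rather than the published algorithm's values: snow is added to the sensor when precipitation falls at cold temperatures, and the store is cleared only by strong wind or warm temperatures.

        from dataclasses import dataclass

        @dataclass
        class MetRecord:
            precip_mm: float   # precipitation during the time step
            air_temp_c: float  # air temperature
            wind_m_s: float    # wind speed

        def flag_snow_on_sensor(records, accumulation_threshold_mm=2.0,
                                rain_snow_temp_c=1.5, melt_temp_c=3.0, scour_wind_m_s=6.0):
            """Return a boolean flag per time step indicating likely snow on the radiometer.

            Snow accumulates on the dome when precipitation falls at cold temperatures;
            the sensor is considered obscured once stored snow exceeds a threshold, and
            stored snow is removed only by wind scour or by melt at warm temperatures.
            """
            stored_snow_mm = 0.0
            flags = []
            for rec in records:
                if rec.precip_mm > 0 and rec.air_temp_c <= rain_snow_temp_c:
                    stored_snow_mm += rec.precip_mm
                if rec.wind_m_s >= scour_wind_m_s or rec.air_temp_c >= melt_temp_c:
                    stored_snow_mm = 0.0
                flags.append(stored_snow_mm >= accumulation_threshold_mm)
            return flags

        obs = [MetRecord(1.2, -3.0, 2.0), MetRecord(1.5, -2.5, 1.5),
               MetRecord(0.0, -1.0, 3.0), MetRecord(0.0, 4.0, 2.0)]
        print(flag_snow_on_sensor(obs))  # [False, True, True, False]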
