Sample records for source term estimation

  1. A novel integrated approach for the hazardous radioactive dust source terms estimation in future nuclear fusion power plants.

    PubMed

    Poggi, L A; Malizia, A; Ciparisse, J F; Gaudio, P

    2016-10-01

    An open issue still under investigation by several international entities working in the safety and security field for the foreseen nuclear fusion reactors is the estimation of source terms that are a hazard for the operators and the public, and for the machine itself in terms of efficiency and integrity in case of severe accident scenarios. Source term estimation is a key safety issue to be addressed in future reactor safety assessments, and the estimates available at this time are not sufficiently satisfactory. The lack of neutronic data, along with the insufficiently accurate methodologies used until now, calls for an integrated methodology for source term estimation that can provide predictions with adequate accuracy. This work proposes a complete methodology to estimate dust source terms, starting from a broad information gathering. The large number of parameters that can influence dust source term production is reduced with statistical tools using a combination of screening, sensitivity analysis, and uncertainty analysis. Finally, a preliminary and simplified methodology for predicting dust source term production in future devices is presented.

  2. Estimation of the caesium-137 source term from the Fukushima Daiichi nuclear power plant using a consistent joint assimilation of air concentration and deposition observations

    NASA Astrophysics Data System (ADS)

    Winiarek, Victor; Bocquet, Marc; Duhanyan, Nora; Roustan, Yelva; Saunier, Olivier; Mathieu, Anne

    2014-01-01

    Inverse modelling techniques can be used to estimate the amount of radionuclides and the temporal profile of the source term released in the atmosphere during the accident of the Fukushima Daiichi nuclear power plant in March 2011. In Winiarek et al. (2012b), the lower bounds of the caesium-137 and iodine-131 source terms were estimated with such techniques, using activity concentration measurements. The importance of an objective assessment of prior errors (the observation errors and the background errors) was emphasised for a reliable inversion. In such a critical context, where the meteorological conditions can make the source term partly unobservable and where only a few observations are available, such prior estimation techniques are mandatory, the retrieved source term being very sensitive to this estimation. We propose to extend the use of these techniques to the estimation of prior errors when assimilating observations from several data sets. The aim is to compute an estimate of the caesium-137 source term jointly using all available data about this radionuclide, such as activity concentrations in the air, but also daily fallout measurements and total cumulated fallout measurements. It is crucial to properly and simultaneously estimate the background errors and the prior errors relative to each data set. A proper estimation of prior errors is also a necessary condition to reliably estimate the a posteriori uncertainty of the estimated source term. Using such techniques, we retrieve a total released quantity of caesium-137 in the interval 11.6-19.3 PBq with an estimated standard deviation range of 15-20% depending on the method and the data sets. The “blind” time intervals of the source term have also been strongly mitigated compared to the first estimations with only activity concentration data.

  3. Bayesian estimation of a source term of radiation release with approximately known nuclide ratios

    NASA Astrophysics Data System (ADS)

    Tichý, Ondřej; Šmídl, Václav; Hofman, Radek

    2016-04-01

    We are concerned with the estimation of a source term in the case of an accidental release from a known location, e.g. a power plant. Usually, the source term of an accidental release of radiation comprises a mixture of nuclides. The gamma dose rate measurements do not provide direct information on the source term composition. However, physical properties of the respective nuclides (deposition properties, decay half-life) can be used when uncertain information on nuclide ratios is available, e.g. from the known reactor inventory. The proposed method is based on a linear inverse model where the observation vector y arises as a linear combination y = Mx of a source-receptor-sensitivity (SRS) matrix M and the source term x. The task is to estimate the unknown source term x. The problem is ill-conditioned and further regularization is needed to obtain a reasonable solution. In this contribution, we assume that the nuclide ratios of the release are known with some degree of uncertainty. This knowledge is used to form the prior covariance matrix of the source term x. Due to the uncertainty in the ratios, the diagonal elements of the covariance matrix are considered to be unknown. Positivity of the source term estimate is guaranteed by using a multivariate truncated Gaussian distribution. Following the Bayesian approach, we estimate all parameters of the model from the data, so that y, M, and the known ratios are the only inputs of the method. Since the inference of the model is intractable, we follow the variational Bayes method, yielding an iterative algorithm for estimation of all model parameters. Performance of the method is studied on a simulated 6-hour power plant release where 3 nuclides are released and 2 nuclide ratios are approximately known. A comparison with a method with unknown nuclide ratios is given to demonstrate the usefulness of the proposed approach. This research is supported by the EEA/Norwegian Financial Mechanism under project MSMT-28477/2014 Source-Term Determination of Radionuclide Releases by Inverse Atmospheric Dispersion Modelling (STRADI).
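
    A minimal numerical sketch of the linear inverse model above (not the authors' variational Bayes algorithm): the ill-conditioned system y = Mx is solved as a Tikhonov-regularized nonnegative least-squares problem, which enforces the positivity of the source term. The SRS matrix, the synthetic release, and the regularization weight below are invented for illustration.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Illustrative dimensions: 40 dose rate observations, 12 source term elements.
    rng = np.random.default_rng(0)
    n_obs, n_src = 40, 12

    M = rng.lognormal(mean=-2.0, sigma=1.0, size=(n_obs, n_src))  # hypothetical SRS matrix
    x_true = np.zeros(n_src)
    x_true[[2, 5, 7]] = [3.0, 1.0, 0.5]                  # a sparse synthetic release
    y = M @ x_true + 0.01 * rng.standard_normal(n_obs)   # noisy observations

    # Regularized nonnegative solve via system augmentation:
    # min ||y - Mx||^2 + lam * ||x||^2  subject to  x >= 0.
    lam = 0.1
    A = np.vstack([M, np.sqrt(lam) * np.eye(n_src)])
    b = np.concatenate([y, np.zeros(n_src)])
    x_hat, _ = nnls(A, b)
    print(np.round(x_hat, 2))
    ```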

  4. Source Term Estimation of Radioxenon Released from the Fukushima Dai-ichi Nuclear Reactors Using Measured Air Concentrations and Atmospheric Transport Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eslinger, Paul W.; Biegalski, S.; Bowyer, Ted W.

    2014-01-01

    Systems designed to monitor airborne radionuclides released from underground nuclear explosions detected radioactive fallout from the Fukushima Daiichi nuclear accident in March 2011. Atmospheric transport modeling (ATM) of plumes of noble gases and particulates was performed soon after the accident to determine plausible detection locations of any radioactive releases to the atmosphere. We combine sampling data from multiple International Monitoring System (IMS) locations in a new way to estimate the magnitude and time sequence of the releases. Dilution factors from the modeled plume at five different detection locations were combined with 57 atmospheric concentration measurements of ¹³³Xe taken from March 18 to March 23 to estimate the source term. This approach estimates that 59% of the 1.24 × 10¹⁹ Bq of ¹³³Xe present in the reactors at the time of the earthquake was released to the atmosphere over a three-day period. Source term estimates from combinations of detection sites have lower spread than estimates based on measurements at single detection sites. Sensitivity cases based on data from four or more detection locations bound the source term between 35% and 255% of the available xenon inventory.

  5. Watershed nitrogen and phosphorus balance: The upper Potomac River basin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jaworski, N.A.; Groffman, P.M.; Keller, A.A.

    1992-01-01

    Nitrogen and phosphorus mass balances were estimated for the portion of the Potomac River basin watershed located above Washington, D.C. The total nitrogen (N) balance included seven input source terms, six sinks, and one 'change-in-storage' term, but was simplified to five input terms and three output terms. The phosphorus (P) balance had four input and three output terms. The estimated balances are based on watershed data from seven information sources. Major sources of nitrogen are animal waste and atmospheric deposition. The major sources of phosphorus are animal waste and fertilizer. The major sink for nitrogen is combined denitrification, volatilization, and change-in-storage. The major sink for phosphorus is change-in-storage. River exports of N and P were 17% and 8%, respectively, of the total N and P inputs. Over 60% of the N and P were volatilized or stored. The major input and output terms on the budget are estimated from direct measurements, but the change-in-storage term is calculated by difference. The factors regulating retention and storage processes are discussed and research needs are identified.

  6. Prioritized packet video transmission over time-varying wireless channel using proactive FEC

    NASA Astrophysics Data System (ADS)

    Kumwilaisak, Wuttipong; Kim, JongWon; Kuo, C.-C. Jay

    2000-12-01

    Quality of video transmitted over time-varying wireless channels relies heavily on the coordinated effort to cope with both channel and source variations dynamically. Given the priority of each source packet and the estimated channel condition, an adaptive protection scheme based on joint source-channel criteria is investigated via proactive forward error correction (FEC). With proactive FEC in Reed Solomon (RS)/Rate-compatible punctured convolutional (RCPC) codes, we study a practical algorithm to match the relative priority of source packets and instantaneous channel conditions. The channel condition is estimated to capture the long-term fading effect in terms of the averaged SNR over a preset window. Proactive protection is performed for each packet based on the joint source-channel criteria with special attention to the accuracy, time-scale match, and feedback delay of channel status estimation. The overall gain of the proposed protection mechanism is demonstrated in terms of the end-to-end wireless video performance.
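
    A toy sketch of the matching idea described above: a packet's protection level is chosen jointly from its priority and the windowed average SNR used as the channel state estimate. The RCPC rate table and SNR thresholds are invented placeholders, not values from the paper.

    ```python
    # Hypothetical RCPC code rates, strongest (most redundancy) first; 1.0 = no FEC.
    RCPC_RATES = [1/3, 1/2, 2/3, 3/4, 1.0]

    def select_fec_rate(packet_priority: int, avg_snr_db: float) -> float:
        """Pick a code rate from the estimated long-term channel SNR
        (averaged over a preset window) and the packet's priority.
        Priority 0 is the most important (e.g., base-layer) packet."""
        # Coarse channel state from the windowed average SNR (invented thresholds).
        if avg_snr_db < 5:
            channel_level = 0      # bad channel
        elif avg_snr_db < 15:
            channel_level = 1      # fair channel
        else:
            channel_level = 2      # good channel
        # High priority on a bad channel maps to the strongest code.
        index = min(packet_priority + channel_level, len(RCPC_RATES) - 1)
        return RCPC_RATES[index]

    # Base-layer packet on a fading channel gets rate 1/3;
    # a low-priority packet on a good channel gets no FEC.
    print(select_fec_rate(0, 3.0), select_fec_rate(2, 20.0))
    ```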

  7. Attenuation Tomography of Northern California and the Yellow Sea / Korean Peninsula from Coda-source Normalized and Direct Lg Amplitudes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ford, S R; Dreger, D S; Phillips, W S

    2008-07-16

    Inversions for regional attenuation (1/Q) of Lg are performed in two different regions. The path attenuation component of the Lg spectrum is isolated using the coda-source normalization method, which corrects the Lg spectral amplitude for the source using the stable, coda-derived source spectra. Tomographic images of Northern California agree well with one-dimensional (1-D) Lg Q estimated from five different methods. We note there is some tendency for tomographic smoothing to increase Q relative to targeted 1-D methods. For example, in the San Francisco Bay Area, which has high attenuation relative to the rest of its region, Q is overestimated by approximately 30. Coda-source normalized attenuation tomography is also carried out for the Yellow Sea/Korean Peninsula (YSKP), where output parameters (site, source, and path terms) are compared with those from the amplitude tomography method of Phillips et al. (2005) as well as a new method that ties the source term to the MDAC formulation (Walter and Taylor, 2001). The source terms show similar scatter between the coda-source corrected and MDAC source perturbation methods, whereas the amplitude method has the greatest correlation with estimated true source magnitude. The coda-source better represents the source spectra compared to the estimated magnitude and could be the cause of the scatter. The similarity in the source terms between the coda-source and MDAC-linked methods shows that the latter method may approximate the effect of the former, and therefore could be useful in regions without coda-derived sources. The site terms from the MDAC-linked method correlate slightly with global Vs30 measurements. While the coda-source and amplitude ratio methods do not correlate with Vs30 measurements, they do correlate with one another, which provides confidence that the two methods are consistent. The path Q⁻¹ values are very similar between the coda-source and amplitude ratio methods except for small differences in the Daxing'anling Mountains, in the northern YSKP. However, there is one large difference between the MDAC-linked method and the others in the region near stations TJN and INCN, which points to site effects as the cause of the difference.

  8. Estimation of errors in the inverse modeling of accidental release of atmospheric pollutant: Application to the reconstruction of the cesium-137 and iodine-131 source terms from the Fukushima Daiichi power plant

    NASA Astrophysics Data System (ADS)

    Winiarek, Victor; Bocquet, Marc; Saunier, Olivier; Mathieu, Anne

    2012-03-01

    A major difficulty when inverting the source term of an atmospheric tracer dispersion problem is the estimation of the prior errors: those of the atmospheric transport model, those ascribed to the representativity of the measurements, those that are instrumental, and those attached to the prior knowledge on the variables one seeks to retrieve. In the case of an accidental release of pollutant, the reconstructed source is sensitive to these assumptions. This sensitivity makes the quality of the retrieval dependent on the methods used to model and estimate the prior errors of the inverse modeling scheme. We propose to use an estimation method for the errors' amplitude based on the maximum likelihood principle. Under semi-Gaussian assumptions, it takes into account, without approximation, the positivity assumption on the source. We apply the method to the estimation of the Fukushima Daiichi source term using activity concentrations in the air. The results are compared to an L-curve estimation technique and to Desroziers's scheme. The total reconstructed activities significantly depend on the chosen method. Because of the poor observability of the Fukushima Daiichi emissions, these methods provide lower bounds for cesium-137 and iodine-131 reconstructed activities. These lower bound estimates, 1.2 × 10¹⁶ Bq for cesium-137, with an estimated standard deviation range of 15%-20%, and 1.9-3.8 × 10¹⁷ Bq for iodine-131, with an estimated standard deviation range of 5%-10%, are of the same order of magnitude as those provided by the Japanese Nuclear and Industrial Safety Agency and about 5 to 10 times less than the Chernobyl atmospheric releases.
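
    The maximum likelihood principle mentioned above can be illustrated in a simplified linear Gaussian setting (dropping the positivity constraint the paper retains): with y = Mx + ε, x ~ N(0, b·I) and ε ~ N(0, r·I), the marginal likelihood is y ~ N(0, b·MMᵀ + r·I), and the error amplitudes (r, b) are chosen to maximize it. A minimal grid-search sketch with invented data:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_obs, n_src = 50, 10
    M = rng.random((n_obs, n_src))                       # toy transport operator
    y = M @ rng.exponential(1.0, n_src) + 0.1 * rng.standard_normal(n_obs)

    def log_marginal(r: float, b: float) -> float:
        # y ~ N(0, S) with S = b * M M^T + r * I (positivity ignored here).
        S = b * (M @ M.T) + r * np.eye(n_obs)
        _, logdet = np.linalg.slogdet(S)
        return -0.5 * (logdet + y @ np.linalg.solve(S, y))

    # Grid search over the two amplitude hyperparameters.
    grid = np.logspace(-3, 1, 25)
    r_ml, b_ml = max(((r, b) for r in grid for b in grid),
                     key=lambda p: log_marginal(*p))
    print(f"ML error amplitudes: r={r_ml:.3g}, b={b_ml:.3g}")
    ```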

  9. The Funding of Long-Term Care in Canada: What Do We Know, What Should We Know?

    PubMed

    Grignon, Michel; Spencer, Byron G

    2018-06-01

    Long-term care is a growing component of health care spending, but how much is spent and who bears the cost are uncertain, and the measures vary depending on the source used. We drew on regularly published series and ad hoc publications to compile preferred estimates of the share of long-term care spending in total health care spending, the private share of long-term care spending, and the share of residential care within long-term care. For each series, we compared estimates obtainable from published sources (CIHI [Canadian Institute for Health Information] and OECD [Organization for Economic Cooperation and Development]) with our preferred estimates. We conclude that using published series without adjustment would lead to spurious conclusions on the level and evolution of spending on long-term care in Canada, as well as on the distribution of costs between private and public funders and between residential and home care.

  10. Source inventory for Department of Energy solid low-level radioactive waste disposal facilities: What it means and how to get one of your own

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, M.A.

    1991-12-31

    In conducting a performance assessment for a low-level waste (LLW) disposal facility, one of the important considerations for determining the source term, which is defined as the amount of radioactivity being released from the facility, is the quantity of radioactive material present. This quantity, which will be referred to as the source inventory, is generally estimated through a review of historical records and waste tracking systems at the LLW facility. In theory, estimating the total source inventory for Department of Energy (DOE) LLW disposal facilities should be possible by reviewing the national database maintained for LLW operations, the Solid Waste Information Management System (SWIMS), or through the annual report that summarizes the SWIMS data, the Integrated Data Base (IDB) report. However, in practice, there are some difficulties in making this estimate. This is not unexpected, since the SWIMS and the IDB were not developed with the goal of developing a performance assessment source term in mind. The practical shortcomings of using the existing data to develop a source term for DOE facilities will be discussed in this paper.

  11. Estimation of the Cesium-137 Source Term from the Fukushima Daiichi Power Plant Using Air Concentration and Deposition Data

    NASA Astrophysics Data System (ADS)

    Winiarek, Victor; Bocquet, Marc; Duhanyan, Nora; Roustan, Yelva; Saunier, Olivier; Mathieu, Anne

    2013-04-01

    A major difficulty when inverting the source term of an atmospheric tracer dispersion problem is the estimation of the prior errors: those of the atmospheric transport model, those ascribed to the representativeness of the measurements, the instrumental errors, and those attached to the prior knowledge on the variables one seeks to retrieve. In the case of an accidental release of pollutant, and especially in a situation of sparse observability, the reconstructed source is sensitive to these assumptions. This sensitivity makes the quality of the retrieval dependent on the methods used to model and estimate the prior errors of the inverse modeling scheme. In Winiarek et al. (2012), we proposed to use an estimation method for the errors' amplitude based on the maximum likelihood principle. Under semi-Gaussian assumptions, it takes into account, without approximation, the positivity assumption on the source. We applied the method to the estimation of the Fukushima Daiichi cesium-137 and iodine-131 source terms using activity concentrations in the air. The results were compared to an L-curve estimation technique and to Desroziers's scheme. In addition to the estimates of released activities, we provided the related uncertainties (12 PBq with a standard deviation of 15-20% for cesium-137 and 190-380 PBq with a standard deviation of 5-10% for iodine-131). We also showed that, because of the low number of available observations (a few hundred), and even though orders of magnitude were consistent, the reconstructed activities significantly depended on the method used to estimate the prior errors. In order to use more data, we propose to extend the methods to the use of several data types, such as activity concentrations in the air and fallout measurements. The idea is to simultaneously estimate the prior errors related to each dataset, in order to fully exploit the information content of each one. Using the activity concentration measurements, but also daily fallout data from prefectures and cumulated deposition data over a region lying approximately 150 km around the nuclear power plant, we can use a few thousand data points in our inverse modeling algorithm to reconstruct the cesium-137 source term. To improve the parameterization of removal processes, rainfall fields have also been corrected using outputs from the mesoscale meteorological model WRF and ground-station rainfall data. As expected, the different methods yield closer results as the number of data increases. Reference: Winiarek, V., M. Bocquet, O. Saunier, A. Mathieu (2012), Estimation of errors in the inverse modeling of accidental release of atmospheric pollutant: Application to the reconstruction of the cesium-137 and iodine-131 source terms from the Fukushima Daiichi power plant, J. Geophys. Res., 117, D05122, doi:10.1029/2011JD016932.

  12. Using Satellite Observations to Evaluate the AeroCOM Volcanic Emissions Inventory and the Dispersal of Volcanic SO2 Clouds in MERRA

    NASA Technical Reports Server (NTRS)

    Hughes, Eric J.; Krotkov, Nickolay; da Silva, Arlindo; Colarco, Peter

    2015-01-01

    Simulation of volcanic emissions in climate models requires information that describes the injection of the emissions into the atmosphere. While the total amount of gases and aerosols released from a volcanic eruption can be readily estimated from satellite observations, information about the source parameters, like injection altitude, eruption time, and duration, is often not directly known. The AeroCOM volcanic emissions inventory provides estimates of eruption source parameters and has been used to initialize volcanic emissions in reanalysis projects, like MERRA. The AeroCOM volcanic emission inventory provides an eruption's daily SO2 flux and plume-top altitude, yet an eruption can be very short-lived, lasting only a few hours, and can emit clouds at multiple altitudes. Case studies comparing the satellite-observed dispersal of volcanic SO2 clouds to simulations in MERRA have shown mixed results. Some cases, such as Okmok (2008), show good agreement with the observations, while for other eruptions, such as Sierra Negra (2005), the observed initial SO2 mass is half of that in the simulations. In other cases, such as Soufriere Hills (2006), the initial SO2 amount agrees with the observations but shows very different dispersal rates. In the aviation hazards community, deriving accurate source terms is crucial for monitoring and short-term (24-h) forecasting of volcanic clouds. Back-trajectory methods have been developed which use satellite observations and transport models to estimate the injection altitude, eruption time, and eruption duration of observed volcanic clouds. These methods can provide eruption timing estimates at a 2-hour temporal resolution and estimate the altitude and depth of a volcanic cloud. To better understand the differences between MERRA simulations and volcanic SO2 observations, back-trajectory methods are used to estimate the source term parameters for a few volcanic eruptions and compared to their corresponding entries in the AeroCOM volcanic emission inventory. The nature of these mixed results is discussed with respect to the source term estimates.

  13. INEEL Subregional Conceptual Model Report Volume 3: Summary of Existing Knowledge of Natural and Anthropogenic Influences on the Release of Contaminants to the Subsurface Environment from Waste Source Terms at the INEEL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paul L. Wichlacz

    2003-09-01

    This source-term summary document is intended to describe the current understanding of contaminant source terms and the conceptual model for potential source-term release to the environment at the Idaho National Engineering and Environmental Laboratory (INEEL), as presented in published INEEL reports. The document presents a generalized conceptual model of the sources of contamination and describes the general categories of source terms, primary waste forms, and factors that affect the release of contaminants from the waste form into the vadose zone and Snake River Plain Aquifer. Where the information has previously been published and is readily available, summaries of the inventory of contaminants are also included. Uncertainties that affect the estimation of the source term release are also discussed where they have been identified by the Source Term Technical Advisory Group. Areas in which additional information is needed (i.e., research needs) are also identified.

  14. DESIGN OF AQUIFER REMEDIATION SYSTEMS: (2) Estimating site-specific performance and benefits of partial source removal

    EPA Science Inventory

    A Lagrangian stochastic model is proposed as a tool that can be utilized in forecasting remedial performance and estimating the benefits (in terms of flux and mass reduction) derived from a source zone remedial effort. The stochastic functional relationships that describe the hyd...

  15. Bayesian source term estimation of atmospheric releases in urban areas using LES approach.

    PubMed

    Xue, Fei; Kikumoto, Hideki; Li, Xiaofeng; Ooka, Ryozo

    2018-05-05

    The estimation of source information from limited measurements of a sensor network is a challenging inverse problem, which can be viewed as an assimilation process of the observed concentration data and the predicted concentration data. When dealing with releases in built-up areas, the predicted data are generally obtained by the Reynolds-averaged Navier-Stokes (RANS) equations, which yield building-resolving results; however, RANS-based models are outperformed by large-eddy simulation (LES) in the predictions of both airflow and dispersion. Therefore, it is important to explore the possibility of improving the estimation of the source parameters by using the LES approach. In this paper, a novel source term estimation method is proposed based on the LES approach using Bayesian inference. The source-receptor relationship is obtained by solving the adjoint equations constructed using the time-averaged flow field simulated by the LES approach based on the gradient diffusion hypothesis. A wind tunnel experiment with a constant point source downwind of a single building model is used to evaluate the performance of the proposed method, which is compared with that of the existing method using a RANS model. The results show that the proposed method reduces the errors of source location and releasing strength by 77% and 28%, respectively. Copyright © 2018 Elsevier B.V. All rights reserved.

  16. The need for harmonization of methods for finding locations and magnitudes of air pollution sources using observations of concentrations and wind fields

    NASA Astrophysics Data System (ADS)

    Hanna, Steven R.; Young, George S.

    2017-01-01

    What do the terms "top-down", "inverse", "backwards", "adjoint", "sensor data fusion", "receptor", "source term estimation (STE)", to name several appearing in the current literature, have in common? These varied terms are used by different disciplines to describe the same general methodology - the use of observations of air pollutant concentrations and knowledge of wind fields to identify air pollutant source locations and/or magnitudes. Academic journals are publishing increasing numbers of papers on this topic. Examples of scenarios related to this growing interest, ordered from small scale to large scale, are: use of real-time samplers to quickly estimate the location of a toxic gas release by a terrorist at a large public gathering (e.g., Haupt et al., 2009);

  17. Source term model evaluations for the low-level waste facility performance assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yim, M.S.; Su, S.I.

    1995-12-31

    The estimation of release of radionuclides from various waste forms to the bottom boundary of the waste disposal facility (source term) is one of the most important aspects of LLW facility performance assessment. In this work, several currently used source term models are comparatively evaluated for the release of carbon-14 based on a test case problem. The models compared include PRESTO-EPA-CPG, IMPACTS, DUST and NEFTRAN-II. Major differences in assumptions and approaches between the models are described and key parameters are identified through sensitivity analysis. The source term results from different models are compared and other concerns or suggestions are discussed.

  18. Bayesian source term determination with unknown covariance of measurements

    NASA Astrophysics Data System (ADS)

    Belal, Alkomiet; Tichý, Ondřej; Šmídl, Václav

    2017-04-01

    Determination of a source term of release of a hazardous material into the atmosphere is a very important task for emergency response. We are concerned with the problem of estimation of the source term in the conventional linear inverse problem, y = Mx, where the relationship between the vector of observations y and the unknown source term x is described using the source-receptor-sensitivity (SRS) matrix M. Since the system is typically ill-conditioned, the problem is recast as an optimization problem: minimize (y - Mx)ᵀR⁻¹(y - Mx) + xᵀB⁻¹x over the source term x. The first term minimizes the error of the measurements with covariance matrix R, and the second term is a regularization of the source term. Different types of regularization arise for different choices of the matrices R and B; for example, Tikhonov regularization takes the covariance matrix B to be the identity matrix multiplied by a scalar parameter. In this contribution, we adopt a Bayesian approach to make inference on the unknown source term x as well as the unknown R and B. We assume the prior on x to be Gaussian with zero mean and unknown diagonal covariance matrix B. The covariance matrix of the likelihood, R, is also unknown. We consider two potential choices for the structure of the matrix R: the first is a diagonal matrix, and the second is a locally correlated structure using information on the topology of the measuring network. Since the inference of the model is intractable, an iterative variational Bayes algorithm is used for simultaneous estimation of all model parameters. The practical usefulness of our contribution is demonstrated by applying the resulting algorithm to real data from the European Tracer Experiment (ETEX). This research is supported by the EEA/Norwegian Financial Mechanism under project MSMT-28477/2014 Source-Term Determination of Radionuclide Releases by Inverse Atmospheric Dispersion Modelling (STRADI).
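
    For fixed R and B, the quadratic objective above has the closed-form minimizer x̂ = (MᵀR⁻¹M + B⁻¹)⁻¹MᵀR⁻¹y. A minimal numpy sketch of this building block (the paper's variational Bayes iteration additionally re-estimates R and B, which is not shown here):

    ```python
    import numpy as np

    def map_source_term(M: np.ndarray, y: np.ndarray,
                        R: np.ndarray, B: np.ndarray) -> np.ndarray:
        """Minimizer of (y - Mx)^T R^-1 (y - Mx) + x^T B^-1 x
        for fixed covariance matrices R and B."""
        Rinv_M = np.linalg.solve(R, M)               # R^-1 M
        A = M.T @ Rinv_M + np.linalg.inv(B)          # M^T R^-1 M + B^-1
        return np.linalg.solve(A, Rinv_M.T @ y)      # A^-1 M^T R^-1 y

    # Tikhonov regularization as a special case: B^-1 = alpha * I.
    rng = np.random.default_rng(2)
    M = rng.random((30, 8))
    y = M @ rng.exponential(1.0, 8) + 0.05 * rng.standard_normal(30)
    alpha = 0.1
    x_hat = map_source_term(M, y, R=0.05**2 * np.eye(30), B=(1 / alpha) * np.eye(8))
    print(np.round(x_hat, 2))
    ```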

  19. Generation of GHS Scores from TEST and online sources ...

    EPA Pesticide Factsheets

    Alternatives assessment frameworks such as DfE (Design for the Environment) evaluate chemical alternatives in terms of human health effects, ecotoxicity, and fate. T.E.S.T. (Toxicity Estimation Software Tool) can be utilized to evaluate human health in terms of acute oral rat toxicity, developmental toxicity, endocrine activity, and mutagenicity. It can be used to evaluate ecotoxicity (in terms of acute fathead minnow toxicity) and fate (in terms of bioconcentration factor). It can also be used to estimate a variety of key physicochemical properties such as melting point, boiling point, vapor pressure, water solubility, and bioconcentration factor. A web-based version of T.E.S.T. is currently being developed to allow predictions to be made from other web tools. Online data sources such as NCCT's Chemistry Dashboard, REACH dossiers, or ChemHat.org can also be utilized to obtain GHS (Globally Harmonized System) scores for comparing alternatives. The purpose of this talk is to show how GHS score data can be obtained from literature sources and from T.E.S.T. These data will be used to compare chemical alternatives in the alternatives assessment dashboard (a 2018 CSS product).

  20. Spent fuel radionuclide source-term model for assessing spent fuel performance in geological disposal. Part I: Assessment of the instant release fraction

    NASA Astrophysics Data System (ADS)

    Johnson, Lawrence; Ferry, Cécile; Poinssot, Christophe; Lovera, Patrick

    2005-11-01

    A source-term model for the short-term release of radionuclides from spent nuclear fuel (SNF) has been developed. It provides quantitative estimates of the fraction of various radionuclides that are expected to be released rapidly (the instant release fraction, or IRF) when water contacts the UO2 or MOX fuel after container breaching in a geological repository. The estimates are based on correlation of leaching data for radionuclides with fuel burnup and fission gas release. Extrapolation of the data to higher fuel burnup values is based on examination of data on fuel restructuring, such as rim development, and on fission gas release data, which permits bounding IRF values to be estimated assuming that radionuclide releases will be less than fission gas release. The consideration of long-term solid-state changes influencing the IRF prior to canister breaching is addressed by evaluating alpha self-irradiation enhanced diffusion, which may gradually increase the accumulation of fission products at grain boundaries.

  1. Low birth weight and air pollution in California: Which sources and components drive the risk?

    PubMed

    Laurent, Olivier; Hu, Jianlin; Li, Lianfa; Kleeman, Michael J; Bartell, Scott M; Cockburn, Myles; Escobedo, Loraine; Wu, Jun

    2016-01-01

    Intrauterine growth restriction has been associated with exposure to air pollution, but there is a need to clarify which sources and components are most likely responsible. This study investigated the associations between low birth weight (LBW, <2500 g) in term born infants (≥37 gestational weeks) and air pollution by source and composition in California, over the period 2001-2008. Complementary exposure models were used: an empirical Bayesian kriging model for the interpolation of ambient pollutant measurements, a source-oriented chemical transport model (using California emission inventories) that estimated fine and ultrafine particulate matter (PM2.5 and PM0.1, respectively) mass concentrations (4 km × 4 km) by source and composition, a line-source roadway dispersion model at fine resolution, and traffic index estimates. Birth weight was obtained from California birth certificate records. A case-cohort design was used. Five controls per term LBW case were randomly selected (without covariate matching or stratification) from among term births. The resulting datasets were analyzed by logistic regression with a random effect by hospital, using generalized additive mixed models adjusted for race/ethnicity, education, maternal age and household income. In total 72,632 singleton term LBW cases were included. Term LBW was positively and significantly associated with interpolated measurements of ozone but not total fine PM or nitrogen dioxide. No significant association was observed between term LBW and primary PM from all sources grouped together. A positive significant association was observed for secondary organic aerosols. Exposure to elemental carbon (EC), nitrates and ammonium were also positively and significantly associated with term LBW, but only for exposure during the third trimester of pregnancy. Significant positive associations were observed between term LBW risk and primary PM emitted by on-road gasoline and diesel or by commercial meat cooking sources. Primary PM from wood burning was inversely associated with term LBW. Significant positive associations were also observed between term LBW and ultrafine particle numbers modeled with the line-source roadway dispersion model, traffic density and proximity to roadways. This large study based on complementary exposure metrics suggests that not only primary pollution sources (traffic and commercial meat cooking) but also EC and secondary pollutants are risk factors for term LBW. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Source terms, shielding calculations and soil activation for a medical cyclotron.

    PubMed

    Konheiser, J; Naumann, B; Ferrari, A; Brachem, C; Müller, S E

    2016-12-01

    Calculations of the shielding and estimates of soil activation for a medical cyclotron are presented in this work. Based on the neutron source term from the ¹⁸O(p,n)¹⁸F reaction produced by a 28 MeV proton beam, neutron and gamma dose rates outside the building were estimated with the Monte Carlo code MCNP6 (Goorley et al 2012 Nucl. Technol. 180 298-315). The neutron source term was calculated with the MCNP6 code and the FLUKA code (Ferrari et al 2005 INFN/TC_05/11, SLAC-R-773), as well as with data supplied by the manufacturer. MCNP and FLUKA calculations yielded comparable results, while the neutron yield obtained using the manufacturer-supplied information is about a factor of 5 smaller. The difference is attributed to missing channels in the manufacturer-supplied neutron source term, which considers only the ¹⁸O(p,n)¹⁸F reaction, whereas the MCNP and FLUKA calculations include additional neutron reaction channels. Soil activation was calculated using the FLUKA code. The estimated dose rate based on MCNP6 calculations in the public area is about 0.035 µSv h⁻¹ and thus significantly below the reference value of 0.5 µSv h⁻¹ (2011 Strahlenschutzverordnung, 9. Auflage vom 01.11.2011, Bundesanzeiger Verlag). After 5 years of continuous beam operation and a subsequent decay time of 30 d, the activity concentration of the soil is about 0.34 Bq g⁻¹.

  3. Automated source term and wind parameter estimation for atmospheric transport and dispersion applications

    NASA Astrophysics Data System (ADS)

    Bieringer, Paul E.; Rodriguez, Luna M.; Vandenberghe, Francois; Hurst, Jonathan G.; Bieberbach, George; Sykes, Ian; Hannan, John R.; Zaragoza, Jake; Fry, Richard N.

    2015-12-01

    Accurate simulations of the atmospheric transport and dispersion (AT&D) of hazardous airborne materials rely heavily on the source term parameters necessary to characterize the initial release and the meteorological conditions that drive the downwind dispersion. In many cases the source parameters are not known and are consequently based on rudimentary assumptions. This is particularly true of accidental releases and the intentional releases associated with terrorist incidents. When available, meteorological observations are often not representative of the conditions at the location of the release, and the use of these non-representative meteorological conditions can result in significant errors in the hazard assessments downwind of the sensors, even when the other source parameters are accurately characterized. Here, we describe a computationally efficient methodology to characterize both the release source parameters and the low-level winds (e.g., winds near the surface) required to produce a refined downwind hazard. This methodology, known as the Variational Iterative Refinement Source Term Estimation (STE) Algorithm (VIRSA), consists of a combination of modeling systems. These systems include a back-trajectory based source inversion method, a forward Gaussian puff dispersion model, and a variational refinement algorithm that uses both a simple forward AT&D model that is a surrogate for the more complex Gaussian puff model and a formal adjoint of this surrogate model. The back-trajectory based method is used to calculate a “first guess” source estimate based on the available observations of the airborne contaminant plume and atmospheric conditions. The variational refinement algorithm is then used to iteratively refine the first-guess STE parameters and meteorological variables. The algorithm has been evaluated across a wide range of scenarios of varying complexity. It has been shown to improve the source parameters for location by several hundred percent (normalized by the distance from source to the closest sampler) and to improve mass estimates by several orders of magnitude. Furthermore, it also has the ability to operate in scenarios with inconsistencies between the wind and airborne contaminant sensor observations, and to adjust the wind to provide a better match between the hazard prediction and the observations.
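
    A highly simplified sketch of the two-stage idea above (crude first guess, then iterative refinement against the observations), using an invented 2-D Gaussian plume as the forward model in place of VIRSA's Gaussian puff model and adjoint machinery; the sensor layout, source parameters, and centroid-based first guess are all illustrative.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Toy steady-state Gaussian plume: source at (x0, y0) with strength q,
    # wind along +x; the lateral spread is a crude power law of downwind distance.
    def plume(params, sensors):
        x0, y0, q = params
        dx = sensors[:, 0] - x0
        dy = sensors[:, 1] - y0
        dx = np.where(dx > 1.0, dx, np.nan)        # only downwind sensors see the plume
        sigma_y = 0.2 * dx**0.9
        c = q / (2 * np.pi * sigma_y**2) * np.exp(-dy**2 / (2 * sigma_y**2))
        return np.nan_to_num(c)                    # upwind sensors read zero

    rng = np.random.default_rng(4)
    sensors = rng.uniform(0, 100, size=(25, 2))
    true = np.array([10.0, 55.0, 50.0])            # hidden (x0, y0, q)
    obs = plume(true, sensors) * (1 + 0.05 * rng.standard_normal(25))

    # "First guess": concentration-weighted sensor centroid, a crude stand-in
    # for the back-trajectory step; strength starts at an arbitrary value.
    w = obs / obs.sum()
    guess = np.array([sensors[:, 0] @ w - 5.0, sensors[:, 1] @ w, 1.0])

    # Iterative refinement: minimize the observation misfit.
    cost = lambda p: np.sum((plume(p, sensors) - obs) ** 2)
    result = minimize(cost, guess, method="Nelder-Mead")
    print(np.round(result.x, 1))                   # refined (x0, y0, q)
    ```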

  4. Time-frequency approach to underdetermined blind source separation.

    PubMed

    Xie, Shengli; Yang, Liu; Yang, Jun-Mei; Zhou, Guoxu; Xiang, Yong

    2012-02-01

    This paper presents a new time-frequency (TF) underdetermined blind source separation approach based on the Wigner-Ville distribution (WVD) and the Khatri-Rao product to separate N non-stationary sources from M (M < N) mixtures. First, an improved method is proposed for estimating the mixing matrix, where the negative value of the auto WVD of the sources is fully considered. Then, after extracting all the auto-term TF points, the auto WVD value of the sources at every auto-term TF point can be found exactly with the proposed approach, no matter how many active sources there are, as long as N ≤ 2M-1. Further discussion about the extraction of auto-term TF points is made, and finally the numerical simulation results are presented to show the superiority of the proposed algorithm by comparing it with existing ones.

  5. A new DOD and DOA estimation method for MIMO radar

    NASA Astrophysics Data System (ADS)

    Gong, Jian; Lou, Shuntian; Guo, Yiduo

    2018-04-01

    The battlefield electromagnetic environment is becoming more and more complex, and MIMO radar will inevitably be affected by coherent and non-stationary noise. To solve this problem, an angle estimation method based on the oblique projection operator and Toeplitz matrix reconstruction is proposed. Through Toeplitz matrix reconstruction, non-stationary noise is transformed into Gaussian white noise, and then the oblique projection operator is used to separate independent and correlated sources. Finally, simulations are carried out to verify the performance of the proposed algorithm in terms of angle estimation performance and source overload.

  6. Atmospheric Tracer Inverse Modeling Using Markov Chain Monte Carlo (MCMC)

    NASA Astrophysics Data System (ADS)

    Kasibhatla, P.

    2004-12-01

    In recent years, there has been an increasing emphasis on the use of Bayesian statistical estimation techniques to characterize the temporal and spatial variability of atmospheric trace gas sources and sinks. The applications have been varied in terms of the particular species of interest, as well as in terms of the spatial and temporal resolution of the estimated fluxes. However, one common characteristic has been the use of relatively simple statistical models for describing the measurement and chemical transport model error statistics and prior source statistics. For example, multivariate normal probability distribution functions (pdfs) are commonly used to model these quantities, and inverse source estimates are derived for fixed values of pdf parameters. While the advantage of this approach is that closed-form analytical solutions for the a posteriori pdfs of interest are available, it is worth exploring Bayesian analysis approaches which allow for a more general treatment of error and prior source statistics. Here, we present an application of the Markov Chain Monte Carlo (MCMC) methodology to an atmospheric tracer inversion problem to demonstrate how more general statistical models for errors can be incorporated into the analysis in a relatively straightforward manner. The MCMC approach to Bayesian analysis, which has found wide application in a variety of fields, is a statistical simulation approach that involves computing moments of interest of the a posteriori pdf by efficiently sampling this pdf. The specific inverse problem that we focus on is the annual mean CO2 source/sink estimation problem considered by the TransCom3 project. TransCom3 was a collaborative effort involving various modeling groups that followed a common modeling and analysis protocol. As such, this problem provides a convenient case study to demonstrate the applicability of the MCMC methodology to atmospheric tracer source/sink estimation problems.
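
    A minimal random-walk Metropolis sketch of the MCMC approach on a toy linear tracer inversion (the transport operator, priors, and step size are invented; the real TransCom3 problem is far larger). The point of sampling rather than using the closed-form Gaussian solution is that non-Gaussian errors or priors could be swapped in without changing the machinery.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_obs, n_flux = 20, 4
    H = rng.random((n_obs, n_flux))                # toy transport operator
    s_true = np.array([2.0, -1.0, 0.5, 1.5])       # sources (+) and sinks (-)
    y = H @ s_true + 0.1 * rng.standard_normal(n_obs)

    def log_post(s, sigma_obs=0.1, sigma_prior=5.0):
        # Gaussian likelihood and a broad Gaussian prior on the fluxes.
        resid = y - H @ s
        return -0.5 * (resid @ resid / sigma_obs**2 + s @ s / sigma_prior**2)

    # Random-walk Metropolis sampling of the a posteriori pdf.
    s = np.zeros(n_flux)
    samples = []
    for it in range(20000):
        prop = s + 0.05 * rng.standard_normal(n_flux)
        if np.log(rng.random()) < log_post(prop) - log_post(s):
            s = prop
        if it >= 5000:                             # discard burn-in
            samples.append(s)
    print(np.round(np.mean(samples, axis=0), 2))   # posterior mean flux estimate
    ```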

  7. Mass discharge assessment at a brominated DNAPL site: Effects of known DNAPL source mass removal

    NASA Astrophysics Data System (ADS)

    Johnston, C. D.; Davis, G. B.; Bastow, T. P.; Woodbury, R. J.; Rao, P. S. C.; Annable, M. D.; Rhodes, S.

    2014-08-01

    Management and closure of contaminated sites is increasingly being proposed on the basis of mass flux of dissolved contaminants in groundwater. Better understanding of the links between source mass removal and contaminant mass fluxes in groundwater would allow greater acceptance of this metric in dealing with contaminated sites. Our objectives here were to show how measurements of the distribution of contaminant mass flux and the overall mass discharge emanating from the source under undisturbed groundwater conditions could be related to the processes and extent of source mass depletion. In addition, these estimates of mass discharge were sought in the application of agreed remediation targets set in terms of pumped groundwater quality from offsite wells. Results are reported from field studies conducted over a 5-year period at a brominated DNAPL (tetrabromoethane, TBA; and tribromoethene, TriBE) site located in suburban Perth, Western Australia. Groundwater fluxes (qw; L³/L²/T) and mass fluxes (Jc; M/L²/T) of dissolved brominated compounds were simultaneously estimated by deploying Passive Flux Meters (PFMs) in wells in a heterogeneous layered aquifer. PFMs were deployed in control plane (CP) wells immediately down-gradient of the source zone, before (2006) and after (2011) 69-85% of the source mass was removed, mainly by groundwater pumping from the source zone. The high-resolution (26-cm depth interval) measures of qw and Jc along the source CP allowed investigation of the DNAPL source-zone architecture and impacts of source mass removal. Comparable estimates of total mass discharge (MD; M/T) across the source zone CP reduced from 104 g day⁻¹ to 24-31 g day⁻¹ (70-77% reductions). Importantly, this mass discharge reduction was consistent with the estimated proportion of source mass remaining at the site (15-31%). That is, a linear relationship between mass discharge and source mass is suggested. The spatial detail of groundwater and mass flux distributions also provided further evidence of the source zone architecture and DNAPL mass depletion processes. This was especially apparent in different mass-depletion rates from distinct parts of the CP. High mass fluxes and groundwater fluxes located near the base of the aquifer dominated in terms of the dissolved mass flux in the profile, although not in terms of concentrations. Reductions observed in Jc and MD were used to better target future remedial efforts. Integration of the observations from the PFM deployments and the source mass depletion provided a basis for establishing flux-based management criteria for the site.

  8. Mass discharge assessment at a brominated DNAPL site: Effects of known DNAPL source mass removal.

    PubMed

    Johnston, C D; Davis, G B; Bastow, T P; Woodbury, R J; Rao, P S C; Annable, M D; Rhodes, S

    2014-08-01

    Management and closure of contaminated sites is increasingly being proposed on the basis of mass flux of dissolved contaminants in groundwater. Better understanding of the links between source mass removal and contaminant mass fluxes in groundwater would allow greater acceptance of this metric in dealing with contaminated sites. Our objectives here were to show how measurements of the distribution of contaminant mass flux and the overall mass discharge emanating from the source under undisturbed groundwater conditions could be related to the processes and extent of source mass depletion. In addition, these estimates of mass discharge were sought in the application of agreed remediation targets set in terms of pumped groundwater quality from offsite wells. Results are reported from field studies conducted over a 5-year period at a brominated DNAPL (tetrabromoethane, TBA; and tribromoethene, TriBE) site located in suburban Perth, Western Australia. Groundwater fluxes (qw; L³/L²/T) and mass fluxes (Jc; M/L²/T) of dissolved brominated compounds were simultaneously estimated by deploying Passive Flux Meters (PFMs) in wells in a heterogeneous layered aquifer. PFMs were deployed in control plane (CP) wells immediately down-gradient of the source zone, before (2006) and after (2011) 69-85% of the source mass was removed, mainly by groundwater pumping from the source zone. The high-resolution (26-cm depth interval) measures of qw and Jc along the source CP allowed investigation of the DNAPL source-zone architecture and impacts of source mass removal. Comparable estimates of total mass discharge (MD; M/T) across the source zone CP reduced from 104 g day⁻¹ to 24-31 g day⁻¹ (70-77% reductions). Importantly, this mass discharge reduction was consistent with the estimated proportion of source mass remaining at the site (15-31%). That is, a linear relationship between mass discharge and source mass is suggested. The spatial detail of groundwater and mass flux distributions also provided further evidence of the source zone architecture and DNAPL mass depletion processes. This was especially apparent in different mass-depletion rates from distinct parts of the CP. High mass fluxes and groundwater fluxes located near the base of the aquifer dominated in terms of the dissolved mass flux in the profile, although not in terms of concentrations. Reductions observed in Jc and MD were used to better target future remedial efforts. Integration of the observations from the PFM deployments and the source mass depletion provided a basis for establishing flux-based management criteria for the site. Copyright © 2013 Elsevier B.V. All rights reserved.

  9. Extending the Lincoln-Petersen estimator for multiple identifications in one source.

    PubMed

    Köse, T; Orman, M; Ikiz, F; Baksh, M F; Gallagher, J; Böhning, D

    2014-10-30

    The Lincoln-Petersen estimator is one of the most popular estimators used in capture-recapture studies. It was developed for a sampling situation in which two sources independently identify members of a target population. For each of the two sources, it is determined if a unit of the target population is identified or not. This leads to a 2 × 2 table with frequencies f11, f10, f01, f00 indicating the number of units identified by both sources, by the first but not the second source, by the second but not the first source, and not identified by either of the two sources, respectively. However, f00 is unobserved, so that the 2 × 2 table is incomplete and the Lincoln-Petersen estimator provides an estimate for f00. In this paper, we consider a generalization of this situation for which one source provides not only a binary identification outcome but also a count outcome of how many times a unit has been identified. Using a truncated Poisson count model, truncating multiple identifications larger than two, we propose a maximum likelihood estimator of the Poisson parameter and, ultimately, of the population size. This estimator shows benefits, in comparison with Lincoln-Petersen's, in terms of bias and efficiency. It is possible to test the homogeneity assumption that is not testable in the Lincoln-Petersen framework. The approach is applied to surveillance data on syphilis from Izmir, Turkey. Copyright © 2014 John Wiley & Sons, Ltd.
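
    The classical two-source setup that this paper generalizes fits in a few lines; a sketch with invented counts, including the standard Chapman bias-corrected variant. The paper's extension, which replaces the binary second outcome with a truncated Poisson count model, is not attempted here.

    ```python
    def lincoln_petersen(f11: int, f10: int, f01: int) -> float:
        """Classical estimate of the total population size N from a 2x2
        capture-recapture table with unobserved cell f00:
        N_hat = n1 * n2 / m, with n1, n2 the source totals and m the overlap."""
        n1 = f11 + f10      # identified by source 1
        n2 = f11 + f01      # identified by source 2
        return n1 * n2 / f11

    def chapman(f11: int, f10: int, f01: int) -> float:
        """Bias-corrected (Chapman) variant, finite even when f11 = 0."""
        return (f11 + f10 + 1) * (f11 + f01 + 1) / (f11 + 1) - 1

    # Invented counts: 60 units seen by both sources, 90 only by the first,
    # 40 only by the second; the unobserved cell f00 is estimated by difference.
    n_hat = lincoln_petersen(60, 90, 40)
    f00_hat = n_hat - (60 + 90 + 40)
    print(round(n_hat), round(f00_hat), round(chapman(60, 90, 40)))
    ```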

  10. Localization of sound sources in a room with one microphone

    NASA Astrophysics Data System (ADS)

    Peić Tukuljac, Helena; Lissek, Hervé; Vandergheynst, Pierre

    2017-08-01

    Estimation of the location of sound sources is usually done using microphone arrays. Such settings provide an environment where we know the difference between the received signals among different microphones in terms of phase or attenuation, which enables localization of the sound sources. In our solution we exploit the properties of the room transfer function in order to localize a sound source inside a room with only one microphone. The shape of the room and the position of the microphone are assumed to be known. The design guidelines and limitations of the sensing matrix are given. The implementation is based on sparsity in terms of the voxels in the room that are occupied by a source. What is especially interesting about our solution is that we provide localization of the sound sources not only in the horizontal plane, but in terms of full 3D coordinates inside the room.

  11. Assessment of source-specific health effects associated with an unknown number of major sources of multiple air pollutants: a unified Bayesian approach.

    PubMed

    Park, Eun Sug; Hopke, Philip K; Oh, Man-Suk; Symanski, Elaine; Han, Daikwon; Spiegelman, Clifford H

    2014-07-01

    There has been increasing interest in assessing health effects associated with multiple air pollutants emitted by specific sources. A major difficulty with achieving this goal is that the pollution source profiles are unknown and source-specific exposures cannot be measured directly; rather, they need to be estimated by decomposing ambient measurements of multiple air pollutants. This estimation process, called multivariate receptor modeling, is challenging because of the unknown number of sources and unknown identifiability conditions (model uncertainty). The uncertainty in source-specific exposures (source contributions), as well as the uncertainty in the number of major pollution sources and identifiability conditions, has been largely ignored in previous studies. A multipollutant approach that can deal with model uncertainty in multivariate receptor models while simultaneously accounting for parameter uncertainty in estimated source-specific exposures in the assessment of source-specific health effects is presented in this paper. The methods are applied to daily ambient air measurements of the chemical composition of fine particulate matter (PM2.5), weather data, and counts of cardiovascular deaths from 1995 to 1997 for Phoenix, AZ, USA. Our approach for evaluating source-specific health effects yields not only estimates of source contributions, along with their uncertainties and associated health effect estimates, but also estimates of model uncertainty (posterior model probabilities) that have been ignored in previous studies. The results from our methods agreed in general with those from the previously conducted workshop/studies on the source apportionment of PM health effects in terms of the number of major contributing sources, estimated source profiles, and contributions. However, some of the adverse source-specific health effects identified in the previous studies were not statistically significant in our analysis, probably because we incorporated into the estimation of the health effects parameters the uncertainty in estimated source contributions that had been ignored in the previous studies. © The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  12. Evaluation of Long-term Performance of Enhanced Anaerobic Source Zone Bioremediation using mass flux

    NASA Astrophysics Data System (ADS)

    Haluska, A.; Cho, J.; Hatzinger, P.; Annable, M. D.

    2017-12-01

    Chlorinated ethene DNAPL source zones in groundwater act as potential long-term sources of contamination as they dissolve, yielding concentrations well above MCLs and posing an ongoing public health risk. Enhanced bioremediation has been applied to treat many source zones with significant promise, but the long-term sustainability of this technology has not been thoroughly assessed. This study evaluated the long-term effectiveness of enhanced anaerobic source zone bioremediation at chloroethene-contaminated sites to determine whether the treatment prevented contaminant rebound and removed NAPL from the source zone. Long-term performance was evaluated based on achieving MCL-based contaminant mass fluxes in parent compound concentrations during different monitoring periods. Groundwater concentration versus time data were compiled for six sites, and post-remedial contaminant mass flux data were then measured using passive flux meters at wells both within and down-gradient of the source zone. Post-remedial mass flux data were then combined with pre-remedial water quality data to estimate pre-remedial mass flux. This information was used to characterize a DNAPL dissolution source strength function, such as the power law model or the equilibrium streamtube model. The six sites characterized for this study were (1) Former Charleston Air Force Base, Charleston, SC; (2) Dover Air Force Base, Dover, DE; (3) Treasure Island Naval Station, San Francisco, CA; (4) Former Raritan Arsenal, Edison, NJ; (5) Naval Air Station, Jacksonville, FL; and (6) Former Naval Air Station, Alameda, CA. Contaminant mass fluxes decreased for all the sites by the end of the post-treatment monitoring period, and rebound was limited within the source zone. Post-remedial source strength function estimates suggest that decreases in contaminant mass flux will continue to occur at these sites, but a mass flux based on MCL levels may never be exceeded. Thus, site clean-up goals should be evaluated as order-of-magnitude reductions. Additionally, sites may require monitoring for a minimum of 5 years in order to sufficiently evaluate remedial performance. The study shows that enhanced anaerobic source zone bioremediation contributed to a modest reduction of source zone contaminant mass discharge and appears to have mitigated rebound of chlorinated ethenes.

  13. Trends in Mortality of Tuberculosis Patients in the United States: The Long-term Perspective

    PubMed Central

    Barnes, Richard F.W.; Moore, Maria Luisa; Garfein, Richard S.; Brodine, Stephanie; Strathdee, Steffanie A.; Rodwell, Timothy C.

    2011-01-01

    PURPOSE To describe long-term trends in TB mortality and to compare trends estimated from two different sources of public health surveillance data. METHODS Trends and changes in trend were estimated by joinpoint regression. Comparisons between datasets were made by fitting a Poisson regression model. RESULTS Since 1900, TB mortality rates estimated from death certificates have declined steeply, except for a period of no change in the 1980s. This decade had long-term consequences resulting in more TB deaths in later years than would have occurred had there been no flattening of the trend. Recent trends in TB mortality estimated from National Tuberculosis Surveillance System (NTSS) data, which record all-cause mortality, differed from trends based on death certificates. In particular, NTSS data showed TB mortality rates flattening since 2002. CONCLUSIONS Estimates of trends in TB mortality vary by data source, and therefore interpretation of the success of control efforts will depend upon the surveillance dataset used. The datasets may be subject to different biases that vary with time. One dataset showed a sustained improvement in the control of TB since the early 1990s while the other indicated that the rate of TB mortality was no longer declining. PMID:21820320

  14. Advanced Reactor PSA Methodologies for System Reliability Analysis and Source Term Assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grabaskas, D.; Brunett, A.; Passerini, S.

    Beginning in 2015, a project was initiated to update and modernize the probabilistic safety assessment (PSA) of the GE-Hitachi PRISM sodium fast reactor. This project is a collaboration between GE-Hitachi and Argonne National Laboratory (Argonne), funded in part by the U.S. Department of Energy. Specifically, the role of Argonne is to assess the reliability of passive safety systems, complete a mechanistic source term calculation, and provide component reliability estimates. The assessment of passive system reliability focused on the performance of the Reactor Vessel Auxiliary Cooling System (RVACS) and the inherent reactivity feedback mechanisms of the metal fuel core. The mechanistic source term assessment attempted to provide a sequence-specific source term evaluation to quantify offsite consequences. Lastly, the reliability assessment focused on components specific to the sodium fast reactor, including electromagnetic pumps, intermediate heat exchangers, the steam generator, and sodium valves and piping.

  15. Source term estimation of radioxenon released from the Fukushima Dai-ichi nuclear reactors using measured air concentrations and atmospheric transport modeling.

    PubMed

    Eslinger, P W; Biegalski, S R; Bowyer, T W; Cooper, M W; Haas, D A; Hayes, J C; Hoffman, I; Korpach, E; Yi, J; Miley, H S; Rishel, J P; Ungar, K; White, B; Woods, V T

    2014-01-01

    Systems designed to monitor airborne radionuclides released from underground nuclear explosions detected radioactive fallout across the northern hemisphere resulting from the Fukushima Dai-ichi Nuclear Power Plant accident in March 2011. Sampling data from multiple International Monitoring System locations are combined with atmospheric transport modeling to estimate the magnitude and time sequence of releases of (133)Xe. Modeled dilution factors at five different detection locations were combined with 57 atmospheric concentration measurements of (133)Xe taken from March 18 to March 23 to estimate the source term. This analysis suggests that 92% of the 1.24 × 10(19) Bq of (133)Xe present in the three operating reactors at the time of the earthquake was released to the atmosphere over a 3 d period. An uncertainty analysis bounds the release estimates to 54-129% of available (133)Xe inventory. Copyright © 2013 Elsevier Ltd. All rights reserved.
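
    The estimation step described above is, at its core, a linear inversion: measured concentrations are modeled as dilution factors (from the transport model) times period-wise release rates. The Python sketch below shows that structure with a nonnegativity constraint; the matrix and release rates are synthetic stand-ins, not the paper's values.

        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(0)
        # Hypothetical dilution factors (s/m^3): 57 station-time samples x 6 release periods.
        D = rng.lognormal(mean=-30.0, sigma=1.0, size=(57, 6))
        q_true = np.array([1e17, 5e17, 2e17, 1e16, 5e15, 1e15])   # Bq released per period
        c = D @ q_true * rng.lognormal(0.0, 0.2, 57)              # noisy concentrations

        # Nonnegative least squares keeps the retrieved release rates physical.
        q_est, _ = nnls(D, c)
        print(np.round(q_est / q_true, 2))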

  16. The Iterative Reweighted Mixed-Norm Estimate for Spatio-Temporal MEG/EEG Source Reconstruction.

    PubMed

    Strohmeier, Daniel; Bekhti, Yousra; Haueisen, Jens; Gramfort, Alexandre

    2016-10-01

    Source imaging based on magnetoencephalography (MEG) and electroencephalography (EEG) allows for the non-invasive analysis of brain activity with high temporal and good spatial resolution. As the bioelectromagnetic inverse problem is ill-posed, constraints are required. For the analysis of evoked brain activity, spatial sparsity of the neuronal activation is a common assumption. It is often taken into account using convex constraints based on the l1-norm. The resulting source estimates are however biased in amplitude and often suboptimal in terms of source selection due to high correlations in the forward model. In this work, we demonstrate that an inverse solver based on a block-separable penalty with a Frobenius norm per block and an l0.5-quasinorm over blocks addresses both of these issues. For solving the resulting non-convex optimization problem, we propose the iterative reweighted Mixed Norm Estimate (irMxNE), an optimization scheme based on iterative reweighted convex surrogate optimization problems, which are solved efficiently using a block coordinate descent scheme and an active set strategy. We compare the proposed sparse imaging method to the dSPM and the RAP-MUSIC approach based on two MEG data sets. We provide empirical evidence based on simulations and analysis of MEG data that the proposed method improves on the standard Mixed Norm Estimate (MxNE) in terms of amplitude bias, support recovery, and stability.
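
    The reweighting idea behind irMxNE can be illustrated in its simplest scalar form: an l0.5-type penalty is minimized by repeatedly solving weighted least-squares surrogates whose weights come from the previous iterate. This toy Python sketch (random matrix, hypothetical sparse signal, no blocks and no MEG forward model) shows only that majorize-minimize mechanism, not the authors' solver.

        import numpy as np

        rng = np.random.default_rng(6)
        A = rng.normal(size=(50, 120))                 # toy forward model
        x_true = np.zeros(120); x_true[[7, 40, 90]] = [3.0, -2.0, 4.0]
        y = A @ x_true + rng.normal(0, 0.05, 50)

        lam, eps = 0.5, 1e-3
        x = np.linalg.pinv(A) @ y                      # minimum-norm starting point
        for _ in range(30):
            # Weighted-l2 surrogate of the l0.5 penalty, reweighted each iteration.
            w = lam * 0.25 * (np.abs(x) + eps) ** -1.5
            x = np.linalg.solve(A.T @ A + np.diag(w), A.T @ y)
        print(np.flatnonzero(np.abs(x) > 0.1))         # recovered support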

  17. Simultaneous optical flow and source estimation: Space–time discretization and preconditioning

    PubMed Central

    Andreev, R.; Scherzer, O.; Zulehner, W.

    2015-01-01

    We consider the simultaneous estimation of an optical flow field and an illumination source term in a movie sequence. The particular optical flow equation is obtained by assuming that the image intensity is a conserved quantity up to possible sources and sinks which represent varying illumination. We formulate this problem as an energy minimization problem and propose a space–time simultaneous discretization for the optimality system in saddle-point form. We investigate a preconditioning strategy that renders the discrete system well-conditioned uniformly in the discretization resolution. Numerical experiments complement the theory. PMID:26435561

  18. LONG TERM HYDROLOGICAL IMPACT ASSESSMENT (LTHIA)

    EPA Science Inventory

    LTHIA is a universal Urban Sprawl analysis tool that is available to all at no charge through the Internet. It estimates impacts on runoff, recharge and nonpoint source pollution resulting from past or proposed land use changes. It gives long-term average annual runoff for a lan...

  19. Physical/chemical closed-loop water-recycling

    NASA Technical Reports Server (NTRS)

    Herrmann, Cal C.; Wydeven, Theodore

    1991-01-01

    Water needs, water sources, and means for recycling water are examined in terms appropriate to the water quality requirements of a small crew and spacecraft intended for long duration exploration missions. Inorganic, organic, and biological hazards are estimated for waste water sources. Sensitivities to these hazards for human uses are estimated. The water recycling processes considered are humidity condensation, carbon dioxide reduction, waste oxidation, distillation, reverse osmosis, pervaporation, electrodialysis, ion exchange, carbon sorption, and electrochemical oxidation. Limitations and applications of these processes are evaluated in terms of water quality objectives. Computerized simulation of some of these chemical processes is examined. Recommendations are made for development of new water recycling technology and improvement of existing technology for near term application to life support systems for humans in space. The technological developments are equally applicable to water needs on Earth, in regions where extensive water recycling is needed or where advanced water treatment is essential to meet EPA health standards.

  20. Instantaneous and time-averaged dispersion and measurement models for estimation theory applications with elevated point source plumes

    NASA Technical Reports Server (NTRS)

    Diamante, J. M.; Englar, T. S., Jr.; Jazwinski, A. H.

    1977-01-01

    Estimation theory, which originated in guidance and control research, is applied to the analysis of air quality measurements and atmospheric dispersion models to provide reliable area-wide air quality estimates. A method for low dimensional modeling (in terms of the estimation state vector) of the instantaneous and time-average pollutant distributions is discussed. In particular, the fluctuating plume model of Gifford (1959) is extended to provide an expression for the instantaneous concentration due to an elevated point source. Individual models are also developed for all parameters in the instantaneous and the time-average plume equations, including the stochastic properties of the instantaneous fluctuating plume.

  1. Nutrients in waters on the inner shelf between Cape Charles and Cape Hatteras

    NASA Technical Reports Server (NTRS)

    Wong, G. T. F.; Todd, J. F.

    1981-01-01

    The distribution of nutrients in the shelf waters of the southern tip of the Middle Atlantic Bight was investigated. It is concluded that the outflow of freshwater from the Chesapeake Bay is a potential source of nutrients to the adjacent shelf waters. However, a quantitative estimation of its importance cannot yet be made because (1) there are other sources of nutrients to the study area and these sources cannot yet be quantified and (2) the concentrations of nutrients in the outflow from Chesapeake Bay exhibit significant short-term and long-term temporal variabilities.

  2. Generation of GHS Scores from TEST and online sources

    EPA Science Inventory

    Alternatives assessment frameworks such as DfE (Design for the Environment) evaluate chemical alternatives in terms of human health effects, ecotoxicity, and fate. T.E.S.T. (Toxicity Estimation Software Tool) can be utilized to evaluate human health in terms of acute oral rat tox...

  3. LS-APC v1.0: a tuning-free method for the linear inverse problem and its application to source-term determination

    NASA Astrophysics Data System (ADS)

    Tichý, Ondřej; Šmídl, Václav; Hofman, Radek; Stohl, Andreas

    2016-11-01

    Estimation of pollutant releases into the atmosphere is an important problem in the environmental sciences. It is typically formalized as an inverse problem using a linear model that can explain observable quantities (e.g., concentrations or deposition values) as a product of the source-receptor sensitivity (SRS) matrix obtained from an atmospheric transport model multiplied by the unknown source-term vector. Since this problem is typically ill-posed, current state-of-the-art methods are based on regularization of the problem and solution of the resulting optimization problem. This procedure depends on manual settings of uncertainties that are often very poorly quantified, effectively making them tuning parameters. We formulate a probabilistic model that has the same maximum likelihood solution as the conventional method using pre-specified uncertainties. Replacement of the maximum likelihood solution by full Bayesian estimation also allows estimation of all tuning parameters from the measurements. The estimation procedure is based on the variational Bayes approximation, which is evaluated by an iterative algorithm. The resulting method is thus very similar to the conventional approach, but with the possibility to also estimate all tuning parameters from the observations. The proposed algorithm is tested and compared with the standard methods on data from the European Tracer Experiment (ETEX), where advantages of the new method are demonstrated. A MATLAB implementation of the proposed algorithm is available for download.
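
    The tuning-parameter problem the abstract targets is easy to reproduce: in the conventional regularized solution x = (M'M + lam*I)^-1 M'y, the retrieved source depends on a manually chosen lam, whereas LS-APC infers such parameters from the data. The Python sketch below (synthetic SRS matrix and source term, plain Tikhonov regularization rather than the LS-APC algorithm itself) shows how strongly the answer varies with that manual setting.

        import numpy as np

        rng = np.random.default_rng(1)
        M = rng.normal(size=(40, 24))                 # synthetic SRS matrix
        x_true = np.zeros(24); x_true[8:12] = [2.0, 5.0, 3.0, 1.0]   # source-term vector
        y = M @ x_true + rng.normal(0, 0.1, 40)       # synthetic observations

        def tikhonov(M, y, lam):
            # Conventional regularized inversion; lam encodes the prior uncertainty.
            return np.linalg.solve(M.T @ M + lam * np.eye(M.shape[1]), M.T @ y)

        for lam in (1e-3, 1e-1, 1e1):                 # retrieval varies with the tuning
            print(lam, np.round(tikhonov(M, y, lam)[8:12], 2))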

  4. Peak fitting and integration uncertainties for the Aerodyne Aerosol Mass Spectrometer

    NASA Astrophysics Data System (ADS)

    Corbin, J. C.; Othman, A.; Haskins, J. D.; Allan, J. D.; Sierau, B.; Worsnop, D. R.; Lohmann, U.; Mensah, A. A.

    2015-04-01

    The errors inherent in the fitting and integration of the pseudo-Gaussian ion peaks in Aerodyne High-Resolution Aerosol Mass Spectrometers (HR-AMS's) have not been previously addressed as a source of imprecision for these instruments. This manuscript evaluates the significance of these uncertainties and proposes a method for their estimation in routine data analysis. Peak-fitting uncertainties, the most complex source of integration uncertainties, are found to be dominated by errors in m/z calibration. These calibration errors comprise significant amounts of both imprecision and bias, and vary in magnitude from ion to ion. The magnitude of these m/z calibration errors is estimated for an exemplary data set, and used to construct a Monte Carlo model which reproduced well the observed trends in fits to the real data. The empirically-constrained model is used to show that the imprecision in the fitted height of isolated peaks scales linearly with the peak height (i.e., as n^1), thus contributing a constant-relative-imprecision term to the overall uncertainty. This constant relative imprecision term dominates the Poisson counting imprecision term (which scales as n^0.5) at high signals. The previous HR-AMS uncertainty model therefore underestimates the overall fitting imprecision. The constant relative imprecision in fitted peak height for isolated peaks in the exemplary data set was estimated as ~4% and the overall peak-integration imprecision was approximately 5%. We illustrate the importance of this constant relative imprecision term by performing Positive Matrix Factorization (PMF) on a synthetic HR-AMS data set with and without its inclusion. Finally, the ability of an empirically-constrained Monte Carlo approach to estimate the fitting imprecision for an arbitrary number of known overlapping peaks is demonstrated. Software is available upon request to estimate these error terms in new data sets.
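
    One natural way to combine the two error terms the abstract describes is in quadrature: a Poisson counting term scaling as n^0.5 plus a fitting term scaling as n^1 with a constant relative coefficient (the ~4% figure above). The combined form below is our reading of that description, not code from the paper.

        import numpy as np

        def peak_sigma(n, alpha=0.04):
            # Quadrature sum of Poisson counting (n**0.5) and constant-relative
            # peak-fitting (alpha*n) imprecision for an isolated peak of n ions.
            return np.sqrt(n + (alpha * n) ** 2)

        for n in (1e2, 1e4, 1e6):
            s = peak_sigma(n)
            print(f"n={n:.0e}: sigma={s:.1f}, relative={s/n:.2%}")

    At low signal the result tracks counting statistics; above roughly n = 1/alpha^2 ions the constant ~4% relative term dominates, which is why a counting-only model underestimates the imprecision of strong peaks.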

  5. Basic repository source term and data sheet report: Lavender Canyon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1988-01-01

    This report is one of a series describing studies undertaken in support of the US Department of Energy Civilian Radioactive Waste Management (CRWM) Program. This study contains the derivation of values for environmental source terms and resources consumed for a CRWM repository. Estimates include heavy construction equipment; support equipment; shaft-sinking equipment; transportation equipment; and consumption of fuel, water, electricity, and natural gas. Data are presented for construction and operation at an assumed site in Lavender Canyon, Utah. 3 refs; 6 tabs.

  6. Generation of Alternative Assessment Scores using TEST and online data sources

    EPA Science Inventory

    Alternatives assessment frameworks such as DfE (Design for the Environment) evaluate chemical alternatives in terms of human health effects, ecotoxicity, and fate. T.E.S.T. (Toxicity Estimation Software Tool) can be utilized to evaluate human health in terms of acute oral rat tox...

  7. Source term evaluation model for high-level radioactive waste repository with decay chain build-up.

    PubMed

    Chopra, Manish; Sunny, Faby; Oza, R B

    2016-09-18

    A source term model based on a two-component leach flux concept is developed for a high-level radioactive waste repository. The long-lived radionuclides associated with high-level waste may give rise to a build-up of activity because of radioactive decay chains. The ingrowth of progeny is incorporated in the model using Bateman decay chain build-up equations. The model is applied to different radionuclides present in the high-level radioactive waste which form part of decay chains (4n to 4n + 3 series), and the activity of the parent and daughter radionuclides leaching out of the waste matrix is estimated. Two cases are considered: one in which only the parent is initially present in the waste and another in which the daughters are also initially present in the waste matrix. The incorporation of in situ production of daughter radionuclides in the source is important for realistic estimates. It is shown that the inclusion of decay chain build-up is essential to avoid underestimation of the radiological impact assessment of the repository. The model can be a useful tool for evaluating the source term of the radionuclide transport models used for the radiological impact assessment of high-level radioactive waste repositories.
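
    For the first case (only the parent initially present), the Bateman solution gives each chain member's activity in closed form. The Python sketch below implements that textbook solution for a straight chain with distinct decay constants; the three half-lives are invented for illustration and are not from the paper.

        import numpy as np

        def chain_activity(t, n0, lam):
            # Bateman solution: activity of each member of a straight decay chain,
            # starting from n0 atoms of the parent only (no initial daughters).
            acts = []
            for i in range(len(lam)):
                s = sum(np.exp(-lam[j] * t) /
                        np.prod([lam[k] - lam[j] for k in range(i + 1) if k != j])
                        for j in range(i + 1))
                acts.append(lam[i] * n0 * np.prod(lam[:i]) * s)
            return acts

        # Hypothetical 3-member chain with half-lives of 30 y, 5 y and 1 y:
        lam = np.log(2) / np.array([30.0, 5.0, 1.0])   # decay constants, 1/y
        print([f"{a:.3e}" for a in chain_activity(10.0, 1e20, lam)])

    The second case (daughters also initially present) adds an analogous Bateman sum per initial inventory, which is exactly the build-up the abstract says must not be neglected.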

  8. Detection and Estimation of 2-D Distributions of Greenhouse Gas Source Concentrations and Emissions over Complex Urban Environments and Industrial Sites

    NASA Astrophysics Data System (ADS)

    Zaccheo, T. S.; Pernini, T.; Dobler, J. T.; Blume, N.; Braun, M.

    2017-12-01

    This work highlights the use of the greenhouse-gas laser imaging tomography experiment (GreenLITE™) data in conjunction with a sparse tomography approach to identify and quantify both urban and industrial sources of CO2 and CH4. The GreenLITE™ system provides a user-defined set of time-sequenced intersecting chords or integrated column measurements at a fixed height through a quasi-horizontal plane of interest. This plane, with unobstructed views along the lines of sight, may range from complex industrial facilities to a small city scale or urban sector. The continuous, time-phased absorption measurements are converted to column concentrations and combined with a plume-based model to estimate the 2-D distribution of gas concentration over extended areas ranging from 0.04-25 km². Finally, these 2-D maps of concentration are combined with ancillary meteorological and atmospheric data to identify potential emission sources and provide first-order estimates of their associated fluxes. In this presentation, we will provide a brief overview of the systems and results from both controlled release experiments and a long-term system deployment in Paris, FR. These results provide a quantitative assessment of the system's ability to detect and estimate CO2 and CH4 sources, and demonstrate its ability to perform long-term autonomous monitoring and quantification of either persistent or sporadic emissions that may have both health and safety as well as environmental impacts.

  9. Estimating tree species richness from forest inventory plot data

    Treesearch

    Ronald E. McRoberts; Dacia M. Meneguzzo

    2007-01-01

    Montreal Process Criterion 1, Conservation of Biological Diversity, expresses species diversity in terms of number of forest dependent species. Species richness, defined as the total number of species present, is a common metric for analyzing species diversity. A crucial difficulty in estimating species richness from sample data obtained from sources such as inventory...

  10. CONTRIBUTIONS OF CURRENT YEAR PHOTOSYNTHATE TO FINE ROOTS ESTIMATED USING A 13C-DEPLETED CO2 SOURCE

    EPA Science Inventory

    The quantification of root turnover is necessary for a complete understanding of plant carbon (C) budgets, especially in terms of impacts of global climate change. To improve estimates of root turnover, we present a method to distinguish current- from prior-year allocation of ca...

  11. IDENTIFYING THE COGNITIVE AND VASCULAR EFFECTS OF AIR POLLUTION SOURCES AND MIXTURES IN THE FRAMINGHAM OFFSPRING AND THIRD GENERATION COHORTS

    EPA Science Inventory

    We will estimate health risks associated with short- and long-term exposure to individual air pollutants, sources and air pollution mixtures within the Framingham Offspring and Third Generation populations. We will address which individual and area-level factors, measuring vul...

  12. Audio visual speech source separation via improved context dependent association model

    NASA Astrophysics Data System (ADS)

    Kazemi, Alireza; Boostani, Reza; Sobhanmanesh, Fariborz

    2014-12-01

    In this paper, we exploit the non-linear relation between a speech source and its associated lip video as a source of extra information to propose an improved audio-visual speech source separation (AVSS) algorithm. The audio-visual association is modeled using a neural associator which estimates the visual lip parameters from a temporal context of acoustic observation frames. We define an objective function based on mean square error (MSE) measure between estimated and target visual parameters. This function is minimized for estimation of the de-mixing vector/filters to separate the relevant source from linear instantaneous or time-domain convolutive mixtures. We have also proposed a hybrid criterion which uses AV coherency together with kurtosis as a non-Gaussianity measure. Experimental results are presented and compared in terms of visually relevant speech detection accuracy and output signal-to-interference ratio (SIR) of source separation. The suggested audio-visual model significantly improves relevant speech classification accuracy compared to existing GMM-based model and the proposed AVSS algorithm improves the speech separation quality compared to reference ICA- and AVSS-based methods.

  13. The Application of Function Points to Predict Source Lines of Code for Software Development

    DTIC Science & Technology

    1992-09-01

    there are some disadvantages. Software estimating tools are expensive. A single tool may cost more than $15,000 due to the high market value of the...term and Lang variables simultaneously only added marginal improvements over models with these terms included singularly. Using all the available

  14. An adaptive Bayesian inference algorithm to estimate the parameters of a hazardous atmospheric release

    NASA Astrophysics Data System (ADS)

    Rajaona, Harizo; Septier, François; Armand, Patrick; Delignon, Yves; Olry, Christophe; Albergel, Armand; Moussafir, Jacques

    2015-12-01

    In the eventuality of an accidental or intentional atmospheric release, the reconstruction of the source term using measurements from a set of sensors is an important and challenging inverse problem. A rapid and accurate estimation of the source allows faster and more efficient action for first-response teams, in addition to providing better damage assessment. This paper presents a Bayesian probabilistic approach to estimate the location and the temporal emission profile of a pointwise source. The release rate is evaluated analytically by using a Gaussian assumption on its prior distribution, and is enhanced with a positivity constraint to improve the estimation. The source location is obtained by means of an advanced iterative Monte Carlo technique called Adaptive Multiple Importance Sampling (AMIS), which uses a recycling process at each iteration to accelerate its convergence. The proposed methodology is tested using synthetic and real concentration data in the framework of the Fusion Field Trials 2007 (FFT-07) experiment. The quality of the obtained results is comparable to those coming from the Markov Chain Monte Carlo (MCMC) algorithm, a popular Bayesian method used for source estimation. Moreover, the adaptive processing of the AMIS provides a better sampling efficiency by reusing all the generated samples.
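
    Stripped of the adaptation and sample recycling that define AMIS, the core of such a method is importance sampling of the source location against a dispersion forward model. The Python sketch below runs one plain importance-sampling stage with a toy isotropic decay kernel standing in for the real dispersion model; everything here is hypothetical and far simpler than the paper's algorithm.

        import numpy as np

        rng = np.random.default_rng(2)

        def forward(src, sensors):
            # Toy dispersion kernel: concentration decays with distance from the source.
            d = np.linalg.norm(sensors - src, axis=1)
            return 1.0 / (1.0 + d**2)

        sensors = rng.uniform(0, 10, size=(20, 2))
        obs = forward(np.array([6.0, 3.0]), sensors) + rng.normal(0, 0.01, 20)

        samples = rng.uniform(0, 10, size=(5000, 2))       # draws from a uniform prior
        loglik = np.array([-np.sum((obs - forward(s, sensors))**2) / (2 * 0.01**2)
                           for s in samples])
        w = np.exp(loglik - loglik.max()); w /= w.sum()    # normalized importance weights
        print("posterior mean location:", np.round(samples.T @ w, 2))

    AMIS would now refit the proposal to these weighted samples and iterate, recomputing the weights so that all past draws keep contributing.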

  15. Physical/chemical closed-loop water-recycling for long-duration missions

    NASA Technical Reports Server (NTRS)

    Herrmann, Cal C.; Wydeven, Ted

    1990-01-01

    Water needs, water sources, and means for recycling water are examined in terms appropriate to the water quality requirements of a small crew and spacecraft intended for long duration exploration missions. Inorganic, organic, and biological hazards are estimated for waste water sources. Sensitivities to these hazards for human uses are estimated. The water recycling processes considered are humidity condensation, carbon dioxide reduction, waste oxidation, distillation, reverse osmosis, pervaporation, electrodialysis, ion exchange, carbon sorption, and electrochemical oxidation. Limitations and applications of these processes are evaluated in terms of water quality objectives. Computerized simulation of some of these chemical processes is examined. Recommendations are made for development of new water recycling technology and improvement of existing technology for near term application to life support systems for humans in space. The technological developments are equally applicable to water needs on Earth, in regions where extensive water recycling is needed or where advanced water treatment is essential to meet EPA health standards.

  16. Estimating Differences in Area-Level Impacts of Various Recruiting Resources: Can Different Recruiting Areas and Years Be Pooled?

    DTIC Science & Technology

    1983-08-01

    Local Leads (Qualified and Interested) from LAMS Advertising (Based on FY82 Experience) Table 7 - Long Term Elasticities for Navy-Sourced NOIC Leads...Area Level Elasticities for Total NOIC Leads (Regardless of Source of Advertising) for FY79, FY80 (FY80: 146,465) Appendix - Table 1a - Comparison of...of national leads (e.g., NOIC leads from a Navy source or from Joint DOD advertising (JADOR) sources), and for local leads. An Appendix

  17. Neutron crosstalk between liquid scintillators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Verbeke, J. M.; Prasad, M. K.; Snyderman, N. J.

    2015-05-01

    We propose a method to quantify the fractions of neutrons scattering between liquid scintillators. Using a spontaneous fission source, this method can be utilized to quickly characterize an array of liquid scintillators in terms of crosstalk. The point model theory due to Feynman is corrected to account for these multiple scatterings. Using spectral information measured by the liquid scintillators, fractions of multiple scattering can be estimated, and mass reconstruction of fissile materials under investigation can be improved. Monte Carlo simulations of mono-energetic neutron sources were performed to estimate neutron crosstalk. A californium source in an array of liquid scintillators was modeled to illustrate the improvement of the mass reconstruction.

  18. Refinement of Regional Distance Seismic Moment Tensor and Uncertainty Analysis for Source-Type Identification

    DTIC Science & Technology

    2011-09-01

    an NSS that lies in this negative explosion positive CLVD quadrant due to the large degree of tectonic release in this event that reversed the phase...Mellman (1986) in their analysis of fundamental mode Love and Rayleigh wave amplitude and phase for nuclear and tectonic release source terms, and...(1986). Estimating explosion and tectonic release source parameters of underground nuclear explosions from Rayleigh and Love wave observations, Air

  19. The value of long-term stream invertebrate data collected by citizen scientists

    Treesearch

    Patrick M. Edwards; Stefano Goffredo

    2016-01-01

    The purpose of this investigation was to systematically examine the variability associated with temporally-oriented invertebrate data collected by citizen scientists and consider the value of such data for use in stream management. Variability in invertebrate data was estimated for three sources of variation: sampling, within-reach spatial and long-term temporal. Long-...

  20. POGO-FAN: Remarkable Empirical Indicators for the Local Chemical Production of Smog-Ozone and NOx-Sensitivity of Air Parcels

    NASA Astrophysics Data System (ADS)

    Chatfield, R. B.; Browell, E. V.; Brune, W. H.; Crawford, J. H.; Esswein, R.; Fried, A.; Olson, J. R.; Shetter, R. E.; Singh, H. B.

    2006-12-01

    We propose and evaluate two related and surprisingly simple empirical estimators for the local chemical production term for photochemical ozone; each uses two moderate-technology chemical measurements and a measurement of ultraviolet light. We nickname the techniques POGO-FAN: Production of Ozone by Gauging Oxidation: Formaldehyde and NO. (1) A non-linear function of a single three-factor index variable, j(HCHO=>rads) [HCHO] [NO], seems to provide a good estimator of the largest single term in the production of smog ozone, the HOO+NO term, over a very wide range of situations. (2) By considering empirical contour plots summarizing isopleths of HOO+NO using j(HCHO=>rads) [HCHO] and [NO] separately as coordinates, we provide a slightly more complex 2-d indicator of smog ozone production that additionally allows an estimate of the NOx-sensitivity or NOx-saturation (i.e., VOC-sensitivity) of sampled air parcels. Roughly 85 to more than 90% of the variance is explained. The correspondence to "EKMA" contour plots, which estimate afternoon ozone from morning mixes of organics and NOx, is not coincidental. We utilize a broad set of urban-plume, regionally polluted, and cleaner NASA DC-8 PBL samples from the Intercontinental Transport Experiment-North America (INTEX-NA), in which each of the variables was measured, to help establish our relationship. The estimator is described in terms of asymptotic smog photochemistry theory; primarily, this suggests appropriate statistical approaches that can capture some of the complex interrelations of the lower-tropospheric smog mix through correlation of reactive mixture components. HCHO is not only an important source of HOO radicals; more importantly, it serves as a "gauge" of all photochemical processing of volatile organic compounds. It probably captures information related to coincident VOC sources of various compounds and parallels in photochemical processing. Constrained modeling of observed atmospheric concentrations suggests that ozone production from the HOO+NO reaction and from other peroxy radical reactions (ROO+NO), and thus total ozone production, are closely related. Additionally, modeling allows us to follow ozone production and NOx-sensitivity throughout the varying photolytic cycle.

  1. Feasibility of Active Monitoring for Plate Coupling Using ACROSS

    NASA Astrophysics Data System (ADS)

    Yamaoka, K.; Watanabe, T.; Ikuta, R.

    2004-12-01

    The detectability of temporal changes in waves reflected from the boundary of subducting plates in the Tokai district using active sources is studied. Based on rock experiments, changes in the intensity of the reflected wave can be caused by changes in coupling between the subducting and overriding plates. ACROSS (Accurately Controlled Routinely Operated Signal System), which consists of sinusoidal vibration sources and receivers, has been proven to provide data of excellent signal resolution. The following technical issues must be overcome to monitor the signal returned from the boundaries of subducting plates: (1) long-term operation of the source; (2) detection of temporal change; (3) accurate estimation of source functions and their temporal change. The first two issues have already been overcome. We have already completed a long-term operation experiment with the ACROSS system in Awaji, Japan. The operation was carried out for 15 months with only minor troubles, and continuous signals were successfully obtained throughout the experiment. In the experiment we developed a technique to monitor the temporal change of travel time with a resolution of several tens of microseconds. The third issue is one of the most difficult problems for practical monitoring using artificial sources. In the 15-month experiment we corrected the source function using the records of seismometers deployed around the source. We also estimate the efficiency of reflected-wave detection using the ACROSS system. We use data from a seismic exploration experiment with blasts carried out above the subducting plate in the Tokai district. A clear reflection from the surface of the Philippine Sea Plate is observed in the waveform. Assuming that an ACROSS source is installed at the same place as the blast source, the detectability of temporal variation of the reflected wave can be estimated. As we have measured the variation of signal amplitude with distance from an ACROSS source, the ground noise at seismic stations (receivers) provides the signal-to-noise ratio for the ACROSS signal, and the resolution can be estimated from the signal-to-noise ratio alone. We surveyed the noise level at locations where the reflection from the boundary of the subducting Philippine Sea Plate can be detected. The results show that the resolution will be better than 1% in amplitude and 0.1 millisecond in travel time for one week of stacking using a three-unit source and ten-element receiver arrays.

  2. RADTRAD: A simplified model for RADionuclide Transport and Removal And Dose estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humphreys, S.L.; Miller, L.A.; Monroe, D.K.

    1998-04-01

    This report documents the RADTRAD computer code developed for the U.S. Nuclear Regulatory Commission (NRC) Office of Nuclear Reactor Regulation (NRR) to estimate transport and removal of radionuclides and dose at selected receptors. The document includes a users' guide to the code, a description of the technical basis for the code, the quality assurance and code acceptance testing documentation, and a programmers' guide. The RADTRAD code can be used to estimate the containment release using either the NRC TID-14844 or NUREG-1465 source terms and assumptions, or a user-specified table. In addition, the code can account for a reduction in the quantity of radioactive material due to containment sprays, natural deposition, filters, and other natural and engineered safety features. The RADTRAD code uses a combination of tables and/or numerical models of source term reduction phenomena to determine the time-dependent dose at user-specified locations for a given accident scenario. The code system also provides the inventory, decay chain, and dose conversion factor tables needed for the dose calculation. The RADTRAD code can be used to assess occupational radiation exposures, typically in the control room; to estimate site boundary doses; and to estimate dose attenuation due to modification of a facility or accident sequence.

  3. Modified ensemble Kalman filter for nuclear accident atmospheric dispersion: prediction improved and source estimated.

    PubMed

    Zhang, X L; Su, G F; Yuan, H Y; Chen, J G; Huang, Q Y

    2014-09-15

    Atmospheric dispersion models play an important role in nuclear power plant accident management. A reliable estimation of the radioactive material distribution at short range (about 50 km) is urgently needed for population sheltering and evacuation planning. However, the meteorological data and the source term, which greatly influence the accuracy of the atmospheric dispersion models, are usually poorly known in the early phase of the emergency. In this study, a modified ensemble Kalman filter data assimilation method in conjunction with a Lagrangian puff-model is proposed to simultaneously improve the model prediction and reconstruct the source terms for short-range atmospheric dispersion using the off-site environmental monitoring data. Four main uncertainty parameters are considered: source release rate, plume rise height, wind speed and wind direction. Twin experiments show that the method effectively improves the predicted concentration distribution, and the temporal profiles of source release rate and plume rise height are also successfully reconstructed. Moreover, the time lag in the response of the ensemble Kalman filter is shortened. The method proposed here can be a useful tool not only in nuclear power plant accident emergency management but also in other similar situations where hazardous material is released into the atmosphere. Copyright © 2014 Elsevier B.V. All rights reserved.
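
    The assimilation step in such a scheme is the standard stochastic ensemble Kalman filter update: the state vector (concentrations augmented with uncertain parameters such as release rate and plume rise) is nudged toward perturbed observations using ensemble covariances. The generic Python sketch below shows that update only; it is not the paper's modified filter, and the numbers are arbitrary.

        import numpy as np

        def enkf_update(ens, obs, obs_err, H):
            # Stochastic EnKF analysis step; ens has shape (n_state, n_members).
            n = ens.shape[1]
            A = ens - ens.mean(axis=1, keepdims=True)
            HE = H @ ens
            HA = HE - HE.mean(axis=1, keepdims=True)
            P_hh = HA @ HA.T / (n - 1) + np.diag(obs_err**2)
            P_xh = A @ HA.T / (n - 1)
            K = P_xh @ np.linalg.inv(P_hh)                    # Kalman gain
            pert = obs[:, None] + obs_err[:, None] * np.random.randn(len(obs), n)
            return ens + K @ (pert - HE)

        ens = np.random.randn(4, 50) + 5.0    # 4 state variables, 50 members
        H = np.eye(2, 4)                      # observe the first two variables
        post = enkf_update(ens, np.array([6.0, 4.5]), np.array([0.2, 0.2]), H)
        print(np.round(post.mean(axis=1), 2))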

  4. An Inventory of Post-Compulsory Education and Training Programs in the U.S. and Sources of Support.

    ERIC Educational Resources Information Center

    Wagner, Alan P.

    In terms of both dollars and number of participants, the scope of postcompulsory or lifelong learning in the U.S. is extensive. This report enumerates the number of participants in each lifelong learning program, estimates the cost of each program, indicates its funding sources, and describes program participants' demographic and economic…

  5. Approaches to Refining Estimates of Global Burden and Economics of Dengue

    PubMed Central

    Shepard, Donald S.; Undurraga, Eduardo A.; Betancourt-Cravioto, Miguel; Guzmán, María G.; Halstead, Scott B.; Harris, Eva; Mudin, Rose Nani; Murray, Kristy O.; Tapia-Conyer, Roberto; Gubler, Duane J.

    2014-01-01

    Dengue presents a formidable and growing global economic and disease burden, with around half the world's population estimated to be at risk of infection. There is wide variation and substantial uncertainty in current estimates of dengue disease burden and, consequently, on economic burden estimates. Dengue disease varies across time, geography and persons affected. Variations in the transmission of four different viruses and interactions among vector density and host's immune status, age, pre-existing medical conditions, all contribute to the disease's complexity. This systematic review aims to identify and examine estimates of dengue disease burden and costs, discuss major sources of uncertainty, and suggest next steps to improve estimates. Economic analysis of dengue is mainly concerned with costs of illness, particularly in estimating total episodes of symptomatic dengue. However, national dengue disease reporting systems show a great diversity in design and implementation, hindering accurate global estimates of dengue episodes and country comparisons. A combination of immediate, short-, and long-term strategies could substantially improve estimates of disease and, consequently, of economic burden of dengue. Suggestions for immediate implementation include refining analysis of currently available data to adjust reported episodes and expanding data collection in empirical studies, such as documenting the number of ambulatory visits before and after hospitalization and including breakdowns by age. Short-term recommendations include merging multiple data sources, such as cohort and surveillance data to evaluate the accuracy of reporting rates (by health sector, treatment, severity, etc.), and using covariates to extrapolate dengue incidence to locations with no or limited reporting. Long-term efforts aim at strengthening capacity to document dengue transmission using serological methods to systematically analyze and relate to epidemiologic data. As promising tools for diagnosis, vaccination, vector control, and treatment are being developed, these recommended steps should improve objective, systematic measures of dengue burden to strengthen health policy decisions. PMID:25412506

  6. Approaches to refining estimates of global burden and economics of dengue.

    PubMed

    Shepard, Donald S; Undurraga, Eduardo A; Betancourt-Cravioto, Miguel; Guzmán, María G; Halstead, Scott B; Harris, Eva; Mudin, Rose Nani; Murray, Kristy O; Tapia-Conyer, Roberto; Gubler, Duane J

    2014-11-01

    Dengue presents a formidable and growing global economic and disease burden, with around half the world's population estimated to be at risk of infection. There is wide variation and substantial uncertainty in current estimates of dengue disease burden and, consequently, on economic burden estimates. Dengue disease varies across time, geography and persons affected. Variations in the transmission of four different viruses and interactions among vector density and host's immune status, age, pre-existing medical conditions, all contribute to the disease's complexity. This systematic review aims to identify and examine estimates of dengue disease burden and costs, discuss major sources of uncertainty, and suggest next steps to improve estimates. Economic analysis of dengue is mainly concerned with costs of illness, particularly in estimating total episodes of symptomatic dengue. However, national dengue disease reporting systems show a great diversity in design and implementation, hindering accurate global estimates of dengue episodes and country comparisons. A combination of immediate, short-, and long-term strategies could substantially improve estimates of disease and, consequently, of economic burden of dengue. Suggestions for immediate implementation include refining analysis of currently available data to adjust reported episodes and expanding data collection in empirical studies, such as documenting the number of ambulatory visits before and after hospitalization and including breakdowns by age. Short-term recommendations include merging multiple data sources, such as cohort and surveillance data to evaluate the accuracy of reporting rates (by health sector, treatment, severity, etc.), and using covariates to extrapolate dengue incidence to locations with no or limited reporting. Long-term efforts aim at strengthening capacity to document dengue transmission using serological methods to systematically analyze and relate to epidemiologic data. As promising tools for diagnosis, vaccination, vector control, and treatment are being developed, these recommended steps should improve objective, systematic measures of dengue burden to strengthen health policy decisions.

  7. Source apportionment and a novel approach of estimating regional contributions to ambient PM2.5 in Haikou, China.

    PubMed

    Liu, Baoshuang; Li, Tingkun; Yang, Jiamei; Wu, Jianhui; Wang, Jiao; Gao, Jixin; Bi, Xiaohui; Feng, Yinchang; Zhang, Yufen; Yang, Haihang

    2017-04-01

    A novel approach was developed to estimate regional contributions to ambient PM2.5 in Haikou, China. In this paper, the investigation was divided into two main steps. The first step: analysing the characteristics of the chemical compositions of ambient PM2.5, as well as the source profiles, and then conducting source apportionments by using the CMB and CMB-Iteration models. The second step: the development of estimation approaches for regional contributions in terms of local features of Haikou and the results of source apportionment, and estimating regional contributions to ambient PM2.5 in Haikou by this new approach. The results indicate that secondary sulphate, resuspended dust and vehicle exhaust were the major sources of ambient PM2.5 in Haikou, contributing 9.9-21.4%, 10.1-19.0% and 10.5-20.2%, respectively. Regional contributions to ambient PM2.5 in Haikou in spring, autumn and winter were 22.5%, 11.6% and 32.5%, respectively. The regional contribution in summer was assumed to be zero according to the better atmospheric quality and the assumptions of this new estimation approach. The higher regional contribution in winter might be mainly attributable to the transport of polluted air originating in mainland China, especially from the north, where coal is burned for heating in winter. Copyright © 2017 Elsevier Ltd. All rights reserved.
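
    At its simplest, the chemical mass balance (CMB) step expresses the measured species vector as source profiles times source contributions and solves for the contributions. The Python sketch below uses ordinary nonnegative least squares with made-up profiles; the real CMB weights species by their uncertainties (effective variance), which this toy omits.

        import numpy as np
        from scipy.optimize import nnls

        # Hypothetical profiles: mass fraction of each species in each source's PM2.5.
        #             dust   vehicle  sulphate
        F = np.array([[0.20,  0.02,   0.00],    # Si
                      [0.05,  0.30,   0.00],    # EC
                      [0.02,  0.25,   0.00],    # OC
                      [0.01,  0.01,   0.27]])   # sulphate ion
        c = np.array([5.1, 9.0, 7.2, 6.8])      # ambient species concentrations, ug/m^3

        s, resid = nnls(F, c)                   # source contributions, ug/m^3
        print(dict(zip(["dust", "vehicle", "sulphate"], np.round(s, 1))))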

  8. Possible Dual Earthquake-Landslide Source of the 13 November 2016 Kaikoura, New Zealand Tsunami

    NASA Astrophysics Data System (ADS)

    Heidarzadeh, Mohammad; Satake, Kenji

    2017-10-01

    A complicated earthquake (Mw 7.8) in terms of rupture mechanism occurred on the NE coast of South Island, New Zealand, on 13 November 2016 (UTC) in a complex tectonic setting comprising a transition strike-slip zone between two subduction zones. The earthquake generated a moderate tsunami with a zero-to-crest amplitude of 257 cm at the near-field tide gauge station of Kaikoura. Spectral analysis of the tsunami observations showed dual peaks at 3.6-5.7 and 5.7-56 min, which we attribute to the potential landslide and earthquake sources of the tsunami, respectively. Tsunami simulations showed that a source model with slip on an offshore plate-interface fault reproduces the near-field tsunami observation in terms of amplitude, but fails in terms of tsunami period. On the other hand, a source model without offshore slip fails to reproduce the first peak, but the later phases are reproduced well in terms of both amplitude and period. It can be inferred that an offshore source needs to be involved, but it must be smaller in size than the plate-interface slip, which most likely points to a confined submarine landslide source, consistent with the dual-peak tsunami spectrum. We estimated the dimension of the potential submarine landslide at 8-10 km.
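
    The dual-peak diagnostic above comes from a power spectrum of the de-tided gauge record. The Python sketch below applies Welch's method to a synthetic record containing a short-period (landslide-like) and a long-period (earthquake-like) oscillation; the periods and amplitudes are invented, not the Kaikoura data.

        import numpy as np
        from scipy.signal import welch, detrend, find_peaks

        dt = 60.0                                    # 1-min sampling, s
        t = np.arange(0, 6 * 3600, dt)
        eta = (0.5 * np.sin(2 * np.pi * t / (4.5 * 60)) +    # 4.5-min component
               1.0 * np.sin(2 * np.pi * t / (20.0 * 60)) +   # 20-min component
               0.1 * np.random.randn(t.size))                # noise

        f, pxx = welch(detrend(eta), fs=1 / dt, nperseg=256)
        idx, _ = find_peaks(pxx)
        top = idx[np.argsort(pxx[idx])[-2:]]         # two strongest spectral peaks
        print("dominant periods (min):", np.round(1 / f[top] / 60, 1))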

  9. Hanford Environmental Dose Reconstruction Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cannon, S.D.; Finch, S.M.

    1992-10-01

    The objective of the Hanford Environmental Dose Reconstruction (HEDR) Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The independent Technical Steering Panel (TSP) provides technical direction. The project is divided into the following technical tasks, which correspond to the path radionuclides followed from release to impact on humans (dose estimates): Source Terms, Environmental Transport, Environmental Monitoring Data, Demography, Food Consumption, and Agriculture, and Environmental Pathways and Dose Estimates.

  10. Long-term trends in California mobile source emissions and ambient concentrations of black carbon and organic aerosol.

    PubMed

    McDonald, Brian C; Goldstein, Allen H; Harley, Robert A

    2015-04-21

    A fuel-based approach is used to assess long-term trends (1970-2010) in mobile source emissions of black carbon (BC) and organic aerosol (OA, including both primary emissions and secondary formation). The main focus of this analysis is the Los Angeles Basin, where a long record of measurements is available to infer trends in ambient concentrations of BC and organic carbon (OC), with OC used here as a proxy for OA. Mobile source emissions and ambient concentrations have decreased similarly, reflecting the importance of on- and off-road engines as sources of BC and OA in urban areas. In 1970, the on-road sector accounted for ∼90% of total mobile source emissions of BC and OA (primary + secondary). Over time, as on-road engine emissions have been controlled, the relative importance of off-road sources has grown. By 2010, off-road engines were estimated to account for 37 ± 20% and 45 ± 16% of total mobile source contributions to BC and OA, respectively, in the Los Angeles area. This study highlights both the success of efforts to control on-road emission sources, and the importance of considering off-road engine and other VOC source contributions when assessing long-term emission and ambient air quality trends.

  11. High-Energy, High-Pulse-Rate Light Sources for Enhanced Time-Resolved Tomographic PIV of Unsteady and Turbulent Flows

    DTIC Science & Technology

    2017-07-31


  12. Inverse modeling of the Chernobyl source term using atmospheric concentration and deposition measurements

    NASA Astrophysics Data System (ADS)

    Evangeliou, Nikolaos; Hamburger, Thomas; Cozic, Anne; Balkanski, Yves; Stohl, Andreas

    2017-07-01

    This paper describes the results of an inverse modeling study for the determination of the source term of the radionuclides 134Cs, 137Cs and 131I released after the Chernobyl accident. The accident occurred on 26 April 1986 in the Former Soviet Union and released about 10^19 Bq of radioactive materials that were transported as far away as the USA and Japan. Thereafter, several attempts to assess the magnitude of the emissions were made that were based on the knowledge of the core inventory and the levels of the spent fuel. More recently, when modeling tools were further developed, inverse modeling techniques were applied to the Chernobyl case for source term quantification. However, because radioactivity is a sensitive topic for the public and attracts a lot of attention, high-quality measurements, which are essential for inverse modeling, were not made available except for a few sparse activity concentration measurements far from the source and far from the main direction of the radioactive fallout. For the first time, we apply Bayesian inversion of the Chernobyl source term using not only activity concentrations but also deposition measurements from the most recent public data set. These observations refer to a data rescue attempt that started more than 10 years ago, with the final goal of providing the available measurements to anyone interested. With regard to our inverse modeling results, emissions of 134Cs were estimated to be 80 PBq, or 30-50 % higher than previously published. From the released amount of 134Cs, about 70 PBq were deposited all over Europe. Similar to 134Cs, emissions of 137Cs were estimated as 86 PBq, on the same order as previously reported results. Finally, 131I emissions of 1365 PBq were found, which are about 10 % less than the prior total releases. The inversion pushes the injection heights of the three radionuclides to higher altitudes (up to about 3 km) than previously assumed (≈ 2.2 km) in order to better match both concentration and deposition observations over Europe. The results of the present inversion were confirmed using an independent Eulerian model, for which deposition patterns were also improved when using the estimated posterior releases. Although the independent model tends to underestimate deposition in countries that are not in the main direction of the plume, it reproduces country levels of deposition very efficiently. The results were also tested for robustness against different setups of the inversion through sensitivity runs. The source term data from this study are publicly available.

  13. Stochastic Estimation and Non-Linear Wall-Pressure Sources in a Separating/Reattaching Flow

    NASA Technical Reports Server (NTRS)

    Naguib, A.; Hudy, L.; Humphreys, W. M., Jr.

    2002-01-01

    Simultaneous wall-pressure and PIV measurements are used to study the conditional flow field associated with surface-pressure generation in a separating/reattaching flow established over a fence-with-splitter-plate geometry. The conditional flow field is captured using linear and quadratic stochastic estimation based on the occurrence of positive and negative pressure events in the vicinity of the mean reattachment location. The results shed light on the dominant flow structures associated with significant wall-pressure generation. Furthermore, analysis based on the individual terms in the stochastic estimation expansion shows that both the linear and non-linear flow sources of the coherent (conditional) velocity field are equally important contributors to the generation of the conditional surface pressure.
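
    Both estimators named above are regressions of the velocity field on the wall-pressure signal: linear stochastic estimation (LSE) keeps only the term proportional to p, while quadratic stochastic estimation (QSE) adds a p^2 term. The Python sketch below fits both coefficients for a single velocity component from synthetic simultaneous records; the signals are fabricated for illustration.

        import numpy as np

        rng = np.random.default_rng(3)
        p = rng.normal(size=2000)                    # wall-pressure record
        u = 1.5 * p + 0.8 * p**2 + rng.normal(0, 0.3, p.size)   # synthetic velocity

        # Quadratic stochastic estimation: least-squares fit of u on [p, p^2];
        # dropping the second column reduces this to linear stochastic estimation.
        X = np.column_stack([p, p**2])
        coef, *_ = np.linalg.lstsq(X, u, rcond=None)
        print("linear term:", round(coef[0], 2), " quadratic term:", round(coef[1], 2))

    Comparing the fitted linear and quadratic contributions, conditioned on pressure events, is the same comparison the abstract draws between linear and non-linear sources of the conditional field.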

  14. Estimates of long-term mean-annual nutrient loads considered for use in SPARROW models of the Midcontinental region of Canada and the United States, 2002 base year

    USGS Publications Warehouse

    Saad, David A.; Benoy, Glenn A.; Robertson, Dale M.

    2018-05-11

    Streamflow and nutrient concentration data needed to compute nitrogen and phosphorus loads were compiled from Federal, State, Provincial, and local agency databases and also from selected university databases. The nitrogen and phosphorus loads are necessary inputs to Spatially Referenced Regressions on Watershed Attributes (SPARROW) models. SPARROW models are a way to estimate the distribution, sources, and transport of nutrients in streams throughout the Midcontinental region of Canada and the United States. After screening the data, approximately 1,500 sites sampled by 34 agencies were identified as having suitable data for calculating the long-term mean-annual nutrient loads required for SPARROW model calibration. These final sites represent a wide range in watershed sizes, types of nutrient sources, and land-use and watershed characteristics in the Midcontinental region of Canada and the United States.

  15. Upper and lower bounds of ground-motion variabilities: implication for source properties

    NASA Astrophysics Data System (ADS)

    Cotton, Fabrice; Reddy-Kotha, Sreeram; Bora, Sanjay; Bindi, Dino

    2017-04-01

    One of the key challenges of seismology is to analyse the physical factors that control earthquake and ground-motion variabilities. Such analysis is particularly important for calibrating physics-based simulations and seismic hazard estimations at high frequencies. Within the framework of ground-motion prediction equation (GMPE) development, ground-motion residuals (differences between recorded ground motions and the values predicted by a GMPE) are computed. The exponential growth of seismological near-source records and modern GMPE analysis techniques allow these residuals to be partitioned into between-event and within-event components. In particular, the between-event term quantifies all those repeatable source effects (e.g., related to stress-drop or kappa-source variability) which have not been accounted for by the magnitude-dependent term of the model. In this presentation, we first discuss the between-event variabilities computed both in the Fourier and response-spectra domains, using recent high-quality global accelerometric datasets (e.g. NGA-West2, RESORCE, KiK-net). These analyses lead to the assessment of upper bounds for the ground-motion variability. Then, we compare these upper bounds with lower bounds estimated by analysing seismic sequences which occurred on specific fault systems (e.g., located in Central Italy or in Japan). We show that the lower bounds of between-event variabilities are surprisingly large, which indicates a large variability of earthquake dynamic properties even within the same fault system. Finally, these upper and lower bounds of ground-shaking variability are discussed in terms of the variability of earthquake physical properties (e.g., stress-drop and kappa-source).
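
    The partition named above writes each total residual as a between-event term plus a within-event term. As a toy illustration of the decomposition (a mixed-effects regression is used in practice), the Python sketch below simulates residuals with known between-event (tau) and within-event (phi) standard deviations and recovers them by a simple method of moments; all numbers are invented.

        import numpy as np

        rng = np.random.default_rng(4)
        n_eq, n_rec = 30, 20
        tau, phi = 0.35, 0.55                  # true between-/within-event std (ln units)
        B = rng.normal(0, tau, n_eq)           # repeatable source effect per event
        res = B[:, None] + rng.normal(0, phi, (n_eq, n_rec))   # total residuals

        B_hat = res.mean(axis=1)               # between-event terms (one per event)
        W = res - B_hat[:, None]               # within-event residuals
        print("tau ~", B_hat.std(ddof=1).round(2), " phi ~", W.std(ddof=1).round(2))

    The estimate of tau is slightly inflated by the within-event noise that survives averaging (phi^2 / n_rec), which is why mixed-effects estimators are preferred on real, unbalanced data sets.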

  16. Heat source reconstruction from noisy temperature fields using an optimised derivative Gaussian filter

    NASA Astrophysics Data System (ADS)

    Delpueyo, D.; Balandraud, X.; Grédiac, M.

    2013-09-01

    The aim of this paper is to present a post-processing technique based on a derivative Gaussian filter to reconstruct heat source fields from temperature fields measured by infrared thermography. Heat sources can be deduced from temperature variations thanks to the heat diffusion equation. Filtering and differentiating are key issues which are closely related here because the temperature fields which are processed are unavoidably noisy. We focus here only on the diffusion term because it is the most difficult term to estimate in the procedure, the reason being that it involves spatial second derivatives (a Laplacian for isotropic materials). This quantity can be reasonably estimated using a convolution of the temperature variation fields with second derivatives of a Gaussian function. The study is first based on synthetic temperature variation fields corrupted by added noise. The filter is optimised in order to reconstruct the heat source fields as well as possible. The influence of both the dimension and the level of a localised heat source is discussed. The obtained results are also compared with another type of processing based on an averaging filter. The second part of this study presents an application to experimental temperature fields measured with an infrared camera on a thin plate made of aluminium alloy. Heat sources are generated with an electric heating patch glued on the specimen surface. Heat source fields reconstructed from measured temperature fields are compared with the imposed heat sources. The obtained results illustrate the relevance of the derivative Gaussian filter for reliably extracting heat sources from noisy temperature fields in the experimental thermomechanics of materials.
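
    The key operation, convolving a noisy field with second derivatives of a Gaussian to obtain a smoothed Laplacian, is available directly in scipy.ndimage. The sketch below applies it to a synthetic noisy temperature field; sigma, the pixel pitch and the field itself are arbitrary choices, not the paper's settings.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def smoothed_laplacian(T, sigma_px, pitch):
            # Convolution with the second derivative of a Gaussian, one axis at a
            # time; the sum approximates the Laplacian of the denoised field.
            d2x = gaussian_filter(T, sigma_px, order=(0, 2))
            d2y = gaussian_filter(T, sigma_px, order=(2, 0))
            return (d2x + d2y) / pitch**2

        x = np.linspace(-1.0, 1.0, 200)
        X, Y = np.meshgrid(x, x)
        T = np.exp(-(X**2 + Y**2) / 0.05) + 0.02 * np.random.randn(200, 200)  # noisy field, K
        lap = smoothed_laplacian(T, sigma_px=4, pitch=2e-4)   # 0.2 mm pixels
        print(lap.shape, round(float(lap[100, 100]), 1))

    Multiplying such a Laplacian by the thermal conductivity gives the diffusion term of the heat equation, which is the quantity the abstract identifies as the hardest to estimate.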

  17. Highlighting Uncertainty and Recommendations for Improvement of Black Carbon Biomass Fuel-Based Emission Inventories in the Indo-Gangetic Plain Region.

    PubMed

    Soneja, Sutyajeet I; Tielsch, James M; Khatry, Subarna K; Curriero, Frank C; Breysse, Patrick N

    2016-03-01

    Black carbon (BC) is a major contributor to hydrological cycle change and glacial retreat within the Indo-Gangetic Plain (IGP) and surrounding region. However, significant variability exists among estimates of regional BC concentration. Existing inventories within the IGP suffer from limited representation of rural sources, reliance on idealized point source estimates (e.g., utilization of emission factors or fuel-use estimates for cooking along with demographic information), and difficulty in distinguishing sources. Inventory development utilizes two approaches, termed top down and bottom up, which rely on various sources including transport models, emission factors, and remote sensing applications. Large discrepancies exist for BC source attribution throughout the IGP depending on the approach utilized. Cooking with biomass fuels, a major contributor to BC production, shows great source apportionment variability. Areas recognized as requiring attention to improve emission inventory estimates, tied to research on cookstove and biomass fuel use, include emission factors, particulate matter speciation, and better quantification of regional/economic sectors. However, limited attention has been given to understanding ambient small-scale spatial variation of BC between cooking and non-cooking periods in low-resource environments. Understanding the indoor-to-outdoor relationship of BC emissions due to cooking at a local level is a top priority for improving emission inventories, as many health and climate applications rely upon accurate emission inventories.

  18. The uncertainty of nitrous oxide emissions from grazed grasslands: A New Zealand case study

    NASA Astrophysics Data System (ADS)

    Kelliher, Francis M.; Henderson, Harold V.; Cox, Neil R.

    2017-01-01

    Agricultural soils emit nitrous oxide (N2O), a greenhouse gas and the primary source of nitrogen oxides which deplete stratospheric ozone. Agriculture has been estimated to be the largest anthropogenic N2O source. In New Zealand (NZ), pastoral agriculture uses half the land area. To estimate the annual N2O emissions from NZ's agricultural soils, the nitrogen (N) inputs have been determined and multiplied by an emission factor (EF), the mass fraction of N inputs emitted as N2O-N. To estimate the associated uncertainty, we developed an analytical method. For comparison, another estimate was determined by Monte Carlo numerical simulation. For both methods, expert judgement was used to estimate the N input uncertainty. The EF uncertainty was estimated by meta-analysis of the results from 185 NZ field trials. For the analytical method, assuming a normal distribution and independence of the terms used to calculate the emissions (correlation = 0), the estimated 95% confidence limit was ±57%. When there was a normal distribution and an estimated correlation of 0.4 between N input and EF, the latter inferred from experimental data involving six NZ soils, the analytical method estimated a 95% confidence limit of ±61%. The EF data from 185 NZ field trials had a logarithmic normal distribution. For the Monte Carlo method, assuming a logarithmic normal distribution for EF, a normal distribution for the other terms and independence of all terms, the estimated 95% confidence limits were -32% and +88%, or ±60% on average. With the same distribution assumptions and a correlation of 0.4 between N input and EF, the Monte Carlo method estimated 95% confidence limits of -34% and +94%, or ±64% on average. For the analytical and Monte Carlo methods, EF uncertainty accounted for 95% and 83% of the emissions uncertainty when the correlation between N input and EF was 0 and 0.4, respectively. As the first uncertainty analysis of an agricultural soils N2O emissions inventory using "country-specific" field trials to estimate EF uncertainty, this is a potentially informative case study for the international scientific community.
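
    The Monte Carlo calculation described above can be pictured with a short sketch: a normally distributed N input and a lognormally distributed EF, correlated through a Gaussian copula, are multiplied to give an emissions sample whose percentiles yield asymmetric confidence limits. All numerical values below are hypothetical placeholders, not the study's data.

      import numpy as np

      rng = np.random.default_rng(42)
      n = 100_000
      rho = 0.4                       # assumed correlation between N input and EF

      # Hypothetical placeholder values (not the study's data):
      N_mean, N_sd = 1.0e6, 1.0e5     # N input: normal (t N per year)
      EF_median, EF_gsd = 0.01, 1.8   # EF: lognormal (median, geometric SD)

      # Correlated standard normals (Gaussian copula via Cholesky)
      z1, z0 = rng.standard_normal((2, n))
      z2 = rho * z1 + np.sqrt(1.0 - rho**2) * z0

      N_input = N_mean + N_sd * z1                      # normal N input
      EF = EF_median * np.exp(np.log(EF_gsd) * z2)      # lognormal EF
      emissions = N_input * EF

      lo, mid, hi = np.percentile(emissions, [2.5, 50.0, 97.5])
      print(f"95% limits: {100*(lo/mid - 1):+.0f}% / {100*(hi/mid - 1):+.0f}%")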

  19. Bremsstrahlung Dose Yield for High-Intensity Short-Pulse Laser–Solid Experiments

    DOE PAGES

    Liang, Taiee; Bauer, Johannes M.; Liu, James C.; ...

    2016-12-01

    A bremsstrahlung source term has been developed by the Radiation Protection (RP) group at SLAC National Accelerator Laboratory for high-intensity short-pulse laser–solid experiments between 10^17 and 10^22 W cm^-2. This source term couples the particle-in-cell plasma code EPOCH and the radiation transport code FLUKA to estimate the bremsstrahlung dose yield from laser–solid interactions. EPOCH characterizes the energy distribution, angular distribution, and laser-to-electron conversion efficiency of the hot electrons from laser–solid interactions, and FLUKA utilizes this hot electron source term to calculate a bremsstrahlung dose yield (mSv per J of laser energy on target). The goal of this paper is to provide RP guidelines and hazard analysis for high-intensity laser facilities. A comparison of the calculated bremsstrahlung dose yields with radiation measurement data is also made.

  1. Evaluation of stormwater micropollutant source control and end-of-pipe control strategies using an uncertainty-calibrated integrated dynamic simulation model.

    PubMed

    Vezzaro, L; Sharma, A K; Ledin, A; Mikkelsen, P S

    2015-03-15

    The estimation of micropollutant (MP) fluxes in stormwater systems is a fundamental prerequisite when preparing strategies to reduce stormwater MP discharges to natural waters. Dynamic integrated models can be important tools in this step, as they can be used to integrate the limited data provided by monitoring campaigns and to evaluate the performance of different strategies based on model simulation results. This study presents an example in which six different control strategies, including both source control and end-of-pipe treatment, were compared. The comparison focused on fluxes of heavy metals (copper, zinc) and organic compounds (fluoranthene). MP fluxes were estimated by using an integrated dynamic model in combination with stormwater quality measurements. MP sources were identified by using GIS land usage data, runoff quality was simulated by using a conceptual accumulation/washoff model, and a stormwater retention pond was simulated by using a dynamic treatment model based on MP inherent properties. Uncertainty in the results was estimated with a pseudo-Bayesian method. Despite the great uncertainty in the MP fluxes estimated by the runoff quality model, it was possible to compare the six scenarios in terms of discharged MP fluxes, compliance with water quality criteria, and sediment accumulation. Source-control strategies achieved better results in terms of reduction of MP emissions, but all the simulated strategies failed to fulfil the criteria based on emission limit values. The results presented in this study show how the efficiency of MP pollution control strategies can be quantified by combining advanced modeling tools (an integrated stormwater quality model with uncertainty calibration). Copyright © 2014 Elsevier Ltd. All rights reserved.
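
    The runoff quality component is described as a conceptual accumulation/washoff model; the generic form of such a model is sketched below (exponential dry-weather buildup, first-order rainfall-driven washoff). The rate constants and the storm profile are illustrative assumptions, not calibrated values from the study.

      import numpy as np

      def buildup_washoff(rain_mm_h, dt_h=1.0, k_acc=0.05, m_max=10.0, k_wash=0.2):
          """Generic exponential buildup / first-order washoff model for one
          surface; all rate constants here are illustrative, not calibrated."""
          m = 0.0                  # pollutant mass on the surface (e.g. kg/ha)
          washed = []
          for r in rain_mm_h:
              m += k_acc * (m_max - m) * dt_h               # dry-weather buildup
              w = m * (1.0 - np.exp(-k_wash * r * dt_h))    # rain-driven washoff
              m -= w
              washed.append(w)
          return np.array(washed)

      # Two dry days around a 6-hour, 5 mm/h storm
      storm = np.r_[np.zeros(48), 5.0 * np.ones(6), np.zeros(48)]
      event_load = buildup_washoff(storm)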

  2. Use of Multiple Data Sources to Estimate the Economic Cost of Dengue Illness in Malaysia

    PubMed Central

    Shepard, Donald S.; Undurraga, Eduardo A.; Lees, Rosemary Susan; Halasa, Yara; Lum, Lucy Chai See; Ng, Chiu Wan

    2012-01-01

    Dengue represents a substantial burden in many tropical and sub-tropical regions of the world. We estimated the economic burden of dengue illness in Malaysia. Information about economic burden is needed for setting health policy priorities, but accurate estimation is difficult because of incomplete data. We overcame this limitation by merging multiple data sources to refine our estimates, including an extensive literature review, discussion with experts, review of data from health and surveillance systems, and implementation of a Delphi process. Because Malaysia has a passive surveillance system, the number of dengue cases is under-reported. Using an adjusted estimate of total dengue cases, we estimated an economic burden of dengue illness of US$56 million (Malaysian Ringgit MYR196 million) per year, which is approximately US$2.03 (Malaysian Ringgit 7.14) per capita. The overall economic burden of dengue would be even higher if we included costs associated with dengue prevention and control, dengue surveillance, and long-term sequelae of dengue. PMID:23033404

  3. Use of multiple data sources to estimate the economic cost of dengue illness in Malaysia.

    PubMed

    Shepard, Donald S; Undurraga, Eduardo A; Lees, Rosemary Susan; Halasa, Yara; Lum, Lucy Chai See; Ng, Chiu Wan

    2012-11-01

    Dengue represents a substantial burden in many tropical and sub-tropical regions of the world. We estimated the economic burden of dengue illness in Malaysia. Information about economic burden is needed for setting health policy priorities, but accurate estimation is difficult because of incomplete data. We overcame this limitation by merging multiple data sources to refine our estimates, including an extensive literature review, discussion with experts, review of data from health and surveillance systems, and implementation of a Delphi process. Because Malaysia has a passive surveillance system, the number of dengue cases is under-reported. Using an adjusted estimate of total dengue cases, we estimated an economic burden of dengue illness of US$56 million (Malaysian Ringgit MYR196 million) per year, which is approximately US$2.03 (Malaysian Ringgit 7.14) per capita. The overall economic burden of dengue would be even higher if we included costs associated with dengue prevention and control, dengue surveillance, and long-term sequelae of dengue.

  4. A large and persistent carbon sink in the world's forests

    Treesearch

    Yude Pan; Richard A. Birdsey; Jingyun Fang; Richard Houghton; Pekka E. Kauppi; Werner A. Kurz; Oliver L. Phillips; Anatoly Shvidenko; Simon L. Lewis; Josep G. Canadell; Philippe Ciais; Robert B. Jackson; Stephen W. Pacala; A. David McGuire; Shilong Piao; Aapo Rautiainen; Stephen Sitch; Daniel Hayes

    2011-01-01

    The terrestrial carbon sink has been large in recent decades, but its size and location remain uncertain. Using forest inventory data and long-term ecosystem carbon studies, we estimate a total forest sink of 2.4 ± 0.4 petagrams of carbon per year (Pg C year-1) globally for 1990 to 2007. We also estimate a source of 1.3 ± 0.7 Pg...

  5. Estimating the Benefits of the Air Force Purchasing and Supply Chain Management Initiative

    DTIC Science & Technology

    2008-01-01

    sector, known as strategic sourcing. The Customer Relationship Management (CRM) initiative provides a single customer point of contact for all... Customer Relationship Management initiative. Commodity council: a term used to describe a cross-functional sourcing group charged with formulating a... initiative has four major components, all based on commercial best practices (Gabreski, 2004): commodity councils, customer relationship management

  6. Hanford Environmental Dose Reconstruction Project. Monthly report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cannon, S.D.; Finch, S.M.

    1992-10-01

    The objective of the Hanford Environmental Dose Reconstruction (HEDR) Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The independent Technical Steering Panel (TSP) provides technical direction. The project is divided into the following technical tasks, which correspond to the path radionuclides followed from release to impact on humans (dose estimates): Source Terms; Environmental Transport; Environmental Monitoring Data; Demography, Food Consumption, and Agriculture; and Environmental Pathways and Dose Estimates.

  7. Production and use of estimates for monitoring progress in the health sector: the case of Bangladesh

    PubMed Central

    Ahsan, Karar Zunaid; Tahsina, Tazeen; Iqbal, Afrin; Ali, Nazia Binte; Chowdhury, Suman Kanti; Huda, Tanvir M.; Arifeen, Shams El

    2017-01-01

    Background: In order to support progress towards the post-2015 development agenda for the health sector, the importance of high-quality and timely estimates has become evident both globally and at the country level. Objective and Methods: Based on a desk review, key informant interviews and expert panel discussions, the paper critically reviews health estimates from both local sources (i.e. nationally generated information from the government and other agencies) and global sources (mostly modeled or interpolated estimates developed by international organizations based on different sources of information), and assesses the country capacity and monitoring strategies to meet the increasing data demand in the coming years. Primarily, this paper provides a situation analysis of Bangladesh in terms of the production and use of health estimates for monitoring progress towards the post-2015 development goals for the health sector. Results: The analysis reveals that Bangladesh is data rich, particularly from household surveys and health facility assessments. Practices of data utilization also exist, with wide acceptability of survey results for informing policy, programme review and course corrections. Despite high data availability from multiple sources, the country's capacity for providing regular updates of major global health estimates/indicators remains low. Major challenges include limited human resources, limited capacity to generate quality data, and the multiplicity of data sources, where discrepancies and lack of linkages among different data sources (among local sources and between local and global estimates) present emerging challenges for the interpretation of the resulting estimates. Conclusion: To fulfill the increased data requirements of the post-2015 era, Bangladesh needs to invest more in electronic data capture and routine health information systems. Streamlining of data sources, integration of parallel information systems into a common platform, and capacity building for data generation and analysis are recommended as priority actions for Bangladesh in the coming years. In addition to automation of routine health information systems, establishing an Indicator Reference Group for Bangladesh to analyze data, building country capacity in data quality assessment and triangulation, and feeding into global, inter-agency estimates for better reporting would address a number of the challenges mentioned, in both the short and the long run. PMID:28532305

  8. Inverse modelling of radionuclide release rates using gamma dose rate observations

    NASA Astrophysics Data System (ADS)

    Hamburger, Thomas; Stohl, Andreas; von Haustein, Christoph; Thummerer, Severin; Wallner, Christian

    2014-05-01

    Severe accidents in nuclear power plants such as the historical accident in Chernobyl 1986 or the more recent disaster in the Fukushima Dai-ichi nuclear power plant in 2011 have drastic impacts on the population and environment. The hazardous consequences reach out on a national and continental scale. Environmental measurements and methods to model the transport and dispersion of the released radionuclides serve as a platform to assess the regional impact of nuclear accidents - both for research purposes and, more importantly, to determine the immediate threat to the population. However, the assessments of regional radionuclide activity concentrations and of individual exposure to radiation dose are subject to several uncertainties, for example the accurate model representation of wet and dry deposition. One of the most significant uncertainties, however, results from the estimation of the source term, that is, the time-dependent quantification of the released spectrum of radionuclides during the course of the nuclear accident. The quantification of the source terms of severe nuclear accidents may either remain uncertain (e.g. Chernobyl, Devell et al., 1995) or rely on rather rough estimates of released key radionuclides given by the operators. Precise measurements are mostly missing due to practical limitations during the accident. Inverse modelling can be used to realise a feasible estimation of the source term (Davoine and Bocquet, 2007): existing point measurements of radionuclide activity concentrations are combined with atmospheric transport models, and the release rates of radionuclides at the accident site are then obtained by improving the agreement between the modelled and observed concentrations (Stohl et al., 2012). The accuracy of the method, and hence of the resulting source term, depends amongst others on the availability, reliability and the resolution in time and space of the observations. Radionuclide activity concentrations are observed on a relatively sparse grid, and the temporal resolution of available data may be low, on the order of hours or a day. Gamma dose rates, on the other hand, are observed routinely on a much denser grid and at higher temporal resolution. Gamma dose rate measurements contain no explicit information on the observed spectrum of radionuclides and have to be interpreted carefully. Nevertheless, they provide valuable information for the inverse evaluation of the source term due to their availability (Saunier et al., 2013). We present a new inversion approach combining an atmospheric dispersion model and observations of radionuclide activity concentrations and gamma dose rates to obtain the source term of radionuclides. We use the Lagrangian particle dispersion model FLEXPART (Stohl et al., 1998; Stohl et al., 2005) to model the atmospheric transport of the released radionuclides. The gamma dose rates are calculated from the modelled activity concentrations. The inversion method uses a Bayesian formulation considering uncertainties for the a priori source term and the observations (Eckhardt et al., 2008). The a priori information on the source term is a first guess. The gamma dose rate observations are used with inverse modelling to improve this first guess and to retrieve a reliable source term. The details of this method will be presented at the conference. This work is funded by the Bundesamt für Strahlenschutz BfS, Forschungsvorhaben 3612S60026. References: Davoine, X. and Bocquet, M., Atmos. Chem. Phys., 7, 1549-1564, 2007. Devell, L., et al., OCDE/GD(96)12, 1995. Eckhardt, S., et al., Atmos. Chem. Phys., 8, 3881-3897, 2008. Saunier, O., et al., Atmos. Chem. Phys., 13, 11403-11421, 2013. Stohl, A., et al., Atmos. Environ., 32, 4245-4264, 1998. Stohl, A., et al., Atmos. Chem. Phys., 5, 2461-2474, 2005. Stohl, A., et al., Atmos. Chem. Phys., 12, 2313-2343, 2012.
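
    For the linear Gaussian case that underlies this kind of Bayesian inversion, the posterior source term minimises J(x) = (y - Hx)^T R^-1 (y - Hx) + (x - x_a)^T B^-1 (x - x_a), where H holds the source-receptor sensitivities, x_a is the first-guess source term, and B and R are the prior and observation error covariances. The sketch below solves this in a twin experiment with a random H standing in for dispersion-model output; the cited method additionally uses, e.g., smoothness constraints that are omitted here.

      import numpy as np

      def bayesian_source_inversion(H, y, x_a, B, R):
          """Posterior mean and covariance of the source term x for the
          linear Gaussian model y = Hx + noise with prior N(x_a, B)."""
          Ri = np.linalg.inv(R)
          A = H.T @ Ri @ H + np.linalg.inv(B)              # posterior precision
          b = H.T @ Ri @ y + np.linalg.solve(B, x_a)
          x_hat = np.linalg.solve(A, b)
          return x_hat, np.linalg.inv(A)

      # Twin experiment: 24 hourly release rates seen through a random
      # source-receptor matrix (a stand-in for FLEXPART output)
      rng = np.random.default_rng(1)
      H = rng.random((60, 24))
      x_true = np.exp(rng.normal(0.0, 1.0, 24))
      y = H @ x_true + rng.normal(0.0, 0.1, 60)
      x_hat, P = bayesian_source_inversion(
          H, y, x_a=np.ones(24), B=4.0 * np.eye(24), R=0.01 * np.eye(60))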

  9. Fundamental Rotorcraft Acoustic Modeling From Experiments (FRAME)

    NASA Technical Reports Server (NTRS)

    Greenwood, Eric

    2011-01-01

    A new methodology is developed for constructing helicopter source noise models for use in mission planning tools from experimental measurements of helicopter external noise radiation. The models are constructed by applying a parameter identification method to an assumed analytical model of the rotor harmonic noise sources. This new method allows individual rotor harmonic noise sources to be identified and characterized in terms of their individual non-dimensional governing parameters. The method is applied to both wind tunnel measurements and ground noise measurements of two-bladed rotors. The method is shown to match the parametric trends of main rotor harmonic noise, allowing accurate estimates of the dominant rotorcraft noise sources to be made for operating conditions based on a small number of measurements taken at different operating conditions. The ability of this method to estimate changes in noise radiation due to changes in ambient conditions is also demonstrated.

  10. Real time estimation of generation, extinction and flow of muscle fibre action potentials in high density surface EMG.

    PubMed

    Mesin, Luca

    2015-02-01

    Developing a real-time method to estimate the generation, extinction and propagation of muscle fibre action potentials from two-dimensional, high-density surface electromyograms (EMG). A multi-frame generalization of an optical flow technique including a source term is considered. A model describing generation, extinction and propagation of action potentials is fitted to epochs of surface EMG. The algorithm is tested on simulations of high-density surface EMG (inter-electrode distance equal to 5 mm) from finite-length fibres generated using a multi-layer volume conductor model. The flow and source term estimated from interference EMG reflect the anatomy of the muscle, i.e. the direction of the fibres (2° average estimation error) and the positions of the innervation zone and tendons under the electrode grid (mean errors of about 1 and 2 mm, respectively). The global conduction velocity of the action potentials from motor units under the detection system is also obtained from the estimated flow. The processing time is about 1 ms per channel for an EMG epoch of duration 150 ms. A new real-time image processing algorithm is proposed to investigate muscle anatomy and activity. Potential applications are proposed in prosthesis control, automatic detection of optimal channels for EMG index extraction and biofeedback. Copyright © 2014 Elsevier Ltd. All rights reserved.
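
    A minimal sketch of the underlying idea (not the paper's multi-frame algorithm): the optical-flow brightness-constancy equation augmented with a source term, I_t + v_x I_x + v_y I_y = s, can be fitted by least squares over a patch, yielding one flow vector and one source value per patch.

      import numpy as np

      def flow_with_source(I0, I1, dt=1.0):
          """Fit I_t + vx*I_x + vy*I_y = s over one patch by least squares,
          returning a single (vx, vy, s) triple for the whole patch."""
          Iy, Ix = np.gradient(I0)                  # spatial derivatives
          It = (I1 - I0) / dt                       # temporal derivative
          A = np.column_stack([Ix.ravel(), Iy.ravel(), -np.ones(I0.size)])
          (vx, vy, s), *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
          return vx, vy, s

      # Demo: a blob translating by about (1.0, 0.5) pixels per frame
      yy, xx = np.mgrid[0:64, 0:64].astype(float)
      I0 = np.exp(-((xx - 30.0)**2 + (yy - 30.0)**2) / 50.0)
      I1 = np.exp(-((xx - 31.0)**2 + (yy - 30.5)**2) / 50.0)
      vx, vy, s = flow_with_source(I0, I1)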

  11. Variational Iterative Refinement Source Term Estimation Algorithm Assessment for Rural and Urban Environments

    NASA Astrophysics Data System (ADS)

    Delle Monache, L.; Rodriguez, L. M.; Meech, S.; Hahn, D.; Betancourt, T.; Steinhoff, D.

    2016-12-01

    It is necessary to accurately estimate the initial source characteristics in the event of an accidental or intentional release of a Chemical, Biological, Radiological, or Nuclear (CBRN) agent into the atmosphere. Accurate estimation of the source characteristics is important because they are often unknown, and Atmospheric Transport and Dispersion (AT&D) models rely heavily on these estimates to create hazard assessments. To correctly assess the source characteristics in an operational environment where time is critical, the National Center for Atmospheric Research (NCAR) has developed a Source Term Estimation (STE) method known as the Variational Iterative Refinement STE Algorithm (VIRSA). VIRSA consists of a combination of modeling systems: an AT&D model, its corresponding STE model, a Hybrid Lagrangian-Eulerian Plume Model (H-LEPM), and its mathematical adjoint model. In an operational scenario where information regarding the infrastructure of a city is available, the AT&D model used is the Urban Dispersion Model (UDM), and the system is referred to as uVIRSA. In all other scenarios, where city infrastructure information is not readily available, the AT&D model used is the Second-order Closure Integrated PUFF model (SCIPUFF), and the system is referred to as sVIRSA. VIRSA was originally developed using SCIPUFF 2.4 for the Defense Threat Reduction Agency and integrated into the Hazard Prediction and Assessment Capability and the Joint Program for Information Systems Joint Effects Model. The results discussed here are the verification and validation of the upgraded system with SCIPUFF 3.0 and the newly implemented UDM capability. To verify uVIRSA and sVIRSA, synthetic concentration observation scenarios were created in urban and rural environments, and the results of this verification are shown. Finally, we validate the STE performance of uVIRSA using scenarios from the Joint Urban 2003 (JU03) experiment held in Oklahoma City, and validate the performance of sVIRSA using scenarios from the FUsing Sensor Integrated Observing Network (FUSION) Field Trial 2007 (FFT07) held at Dugway Proving Ground in rural Utah.

  12. Galaxy–galaxy lensing estimators and their covariance properties

    DOE PAGES

    Singh, Sukhdeep; Mandelbaum, Rachel; Seljak, Uros; ...

    2017-07-21

    Here, we study the covariance properties of real space correlation function estimators – primarily galaxy–shear correlations, or galaxy–galaxy lensing – using SDSS data for both shear catalogues and lenses (specifically the BOSS LOWZ sample). Using mock catalogues of lenses and sources, we disentangle the various contributions to the covariance matrix and compare them with a simple analytical model. We show that not subtracting the lensing measurement around random points from the measurement around the lens sample is equivalent to performing the measurement using the lens density field instead of the lens overdensity field. While the measurement using the lens density field is unbiased (in the absence of systematics), its error is significantly larger due to an additional term in the covariance. Therefore, this subtraction should be performed regardless of its beneficial effects on systematics. Comparing the error estimates from data and mocks for estimators that involve the overdensity, we find that the errors are dominated by the shape noise and lens clustering, that empirically estimated covariances (jackknife and standard deviation across mocks) are consistent with theoretical estimates, and that both the connected parts of the four-point function and the supersample covariance can be neglected for the current levels of noise. While the trade-off between different terms in the covariance depends on the survey configuration (area, source number density), the diagnostics that we use in this work should be useful for future works to test their empirically determined covariances.

  14. Galaxy-galaxy lensing estimators and their covariance properties

    NASA Astrophysics Data System (ADS)

    Singh, Sukhdeep; Mandelbaum, Rachel; Seljak, Uroš; Slosar, Anže; Vazquez Gonzalez, Jose

    2017-11-01

    We study the covariance properties of real space correlation function estimators - primarily galaxy-shear correlations, or galaxy-galaxy lensing - using SDSS data for both shear catalogues and lenses (specifically the BOSS LOWZ sample). Using mock catalogues of lenses and sources, we disentangle the various contributions to the covariance matrix and compare them with a simple analytical model. We show that not subtracting the lensing measurement around random points from the measurement around the lens sample is equivalent to performing the measurement using the lens density field instead of the lens overdensity field. While the measurement using the lens density field is unbiased (in the absence of systematics), its error is significantly larger due to an additional term in the covariance. Therefore, this subtraction should be performed regardless of its beneficial effects on systematics. Comparing the error estimates from data and mocks for estimators that involve the overdensity, we find that the errors are dominated by the shape noise and lens clustering, that empirically estimated covariances (jackknife and standard deviation across mocks) are consistent with theoretical estimates, and that both the connected parts of the four-point function and the supersample covariance can be neglected for the current levels of noise. While the trade-off between different terms in the covariance depends on the survey configuration (area, source number density), the diagnostics that we use in this work should be useful for future works to test their empirically determined covariances.

  15. An initial SPARROW model of land use and in-stream controls on total organic carbon in streams of the conterminous United States

    USGS Publications Warehouse

    Shih, Jhih-Shyang; Alexander, Richard B.; Smith, Richard A.; Boyer, Elizabeth W.; Shwarz, Grogory E.; Chung, Susie

    2010-01-01

    Watersheds play many important roles in the carbon cycle: (1) they are a site for both terrestrial and aquatic carbon dioxide (CO2) removal through photosynthesis; (2) they transport living and decomposing organic carbon in streams and groundwater; and (3) they store organic carbon for widely varying lengths of time as a function of many biogeochemical factors. Using the U.S. Geological Survey (USGS) Spatially Referenced Regression on Watershed Attributes (SPARROW) model, along with long-term monitoring data on total organic carbon (TOC), this research quantitatively estimates the sources, transport, and fate of the long-term mean annual load of TOC in streams of the conterminous United States. The model simulations use surrogate measures of the major terrestrial and aquatic sources of organic carbon to estimate the long-term mean annual load of TOC in streams. The estimated carbon sources in the model are associated with four land uses (urban, cultivated, forest, and wetlands) and autochthonous fixation of carbon (stream photosynthesis). Stream photosynthesis is determined by reach-level application of an empirical model of stream chlorophyll based on total phosphorus concentration, and a mechanistic model of photosynthetic rate based on chlorophyll, average daily solar irradiance, water-column light attenuation, and reach dimensions. It was found that the estimate of in-stream photosynthesis is a major contributor to the mean annual TOC load per unit of drainage area (that is, yield) in large streams, with a median share of about 60 percent of the total mean annual carbon load in streams with mean flows above 500 cubic feet per second. The interquartile range of the model predictions of TOC from in-stream photosynthesis is from 0.1 to 0.4 grams of carbon per square meter per day (g C m^-2 day^-1) for the approximately 62,000 stream reaches in the continental United States, which compares favorably with the reported literature range for net carbon fixation by phytoplankton in lakes and streams. The largest contributors per unit of drainage area to the mean annual stream TOC load among the terrestrial sources are, in descending order: wetlands, urban lands, mixed forests, agricultural lands, evergreen forests, and deciduous forests. It was found that the SPARROW model estimates of TOC contributions to streams associated with these land uses are also consistent with literature estimates. SPARROW model calibration results are used to simulate the delivery of TOC loads to the coastal areas of seven major regional drainages. It was found that stream photosynthesis is the largest source of the TOC yields (about 50 percent) delivered to coastal waters in two of the seven regional drainages (the Pacific Northwest and Mississippi-Atchafalaya-Red River basins), whereas terrestrial sources are dominant (greater than 60 percent) in all other regions (North Atlantic, South Atlantic-Gulf, California, Texas-Gulf, and Great Lakes).

  16. Cramer-Rao bound analysis of wideband source localization and DOA estimation

    NASA Astrophysics Data System (ADS)

    Yip, Lean; Chen, Joe C.; Hudson, Ralph E.; Yao, Kung

    2002-12-01

    In this paper, we derive the Cramér-Rao bound (CRB) for wideband source localization and DOA estimation. The resulting CRB formula can be decomposed into two terms: one that depends on the signal characteristics and one that depends on the array geometry. For a uniformly spaced circular array (UCA), a concise analytical form of the CRB can be given by using an algebraic approximation. We further define a DOA beamwidth based on the resulting CRB formula. The DOA beamwidth can be used to design the angular sampling spacing for the maximum-likelihood (ML) algorithm. For a randomly distributed array, we use an elliptical model to determine the largest and smallest effective beamwidths. The effective beamwidth and the CRB analysis of source localization allow us to design an efficient algorithm for the ML estimator. Finally, our simulation results for the Approximated Maximum Likelihood (AML) algorithm are shown to match the CRB analysis well at high SNR.

  17. Bayesian inverse modeling and source location of an unintended 131I release in Europe in the fall of 2011

    NASA Astrophysics Data System (ADS)

    Tichý, Ondřej; Šmídl, Václav; Hofman, Radek; Šindelářová, Kateřina; Hýža, Miroslav; Stohl, Andreas

    2017-10-01

    In the fall of 2011, iodine-131 (131I) was detected at several radionuclide monitoring stations in central Europe. After investigation, the International Atomic Energy Agency (IAEA) was informed by Hungarian authorities that 131I had been released from the Institute of Isotopes Ltd. in Budapest, Hungary. It was reported that a total activity of 342 GBq of 131I was emitted between 8 September and 16 November 2011. In this study, we use the ambient concentration measurements of 131I to determine the location of the release as well as its magnitude and temporal variation. As the location of the release and an estimate of the source strength eventually became known, this accident represents a realistic test case for inversion models. For our source reconstruction, we use no prior knowledge. Instead, we estimate the source location and emission variation using only the available 131I measurements. Subsequently, we use the partial information about the source term available from the Hungarian authorities to validate our results. For the source determination, we first perform backward runs of atmospheric transport models and obtain source-receptor sensitivity (SRS) matrices for each grid cell of our study domain. We use two dispersion models, FLEXPART and HYSPLIT, driven with meteorological analysis data from the Global Forecast System (GFS) and from the European Centre for Medium-Range Weather Forecasts (ECMWF). Second, we use a recently developed inverse method, least-squares with adaptive prior covariance (LS-APC), to determine the 131I emissions and their temporal variation from the measurements and the computed SRS matrices. For each grid cell of our simulation domain, we evaluate the probability that the release was generated in that cell using Bayesian model selection. The model selection procedure also provides information about the most suitable dispersion model for the source term reconstruction. Third, we select the most probable location of the release with its associated source term and perform a forward model simulation to study the consequences of the iodine release. The results of these procedures are compared with the known release location and the reported information about its time variation. We find that our algorithm successfully located the actual release site. The estimated release period is also in agreement with the values reported by the IAEA, and the reported total released activity of 342 GBq is within the 99% confidence interval of the posterior distribution of our most likely model.

  18. Inverse modelling for real-time estimation of radiological consequences in the early stage of an accidental radioactivity release.

    PubMed

    Pecha, Petr; Šmídl, Václav

    2016-11-01

    A stepwise sequential assimilation algorithm is proposed based on an optimisation approach for recursive parameter estimation and tracking of radioactive plume propagation in the early stage of a radiation accident. Predictions of the radiological situation in each time step of the plume propagation are driven by an existing short-term meteorological forecast and the assimilation procedure manipulates the model parameters to match the observations incoming concurrently from the terrain. Mathematically, the task is a typical ill-posed inverse problem of estimating the parameters of the release. The proposed method is designated as a stepwise re-estimation of the source term release dynamics and an improvement of several input model parameters. It results in a more precise determination of the adversely affected areas in the terrain. The nonlinear least-squares regression methodology is applied for estimation of the unknowns. The fast and adequately accurate segmented Gaussian plume model (SGPM) is used in the first stage of direct (forward) modelling. The subsequent inverse procedure infers (re-estimates) the values of important model parameters from the actual observations. Accuracy and sensitivity of the proposed method for real-time forecasting of the accident propagation is studied. First, a twin experiment generating noiseless simulated "artificial" observations is studied to verify the minimisation algorithm. Second, the impact of the measurement noise on the re-estimated source release rate is examined. In addition, the presented method can be used as a proposal for more advanced statistical techniques using, e.g., importance sampling. Copyright © 2016 Elsevier Ltd. All rights reserved.
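
    The twin-experiment step can be illustrated with nonlinear least squares: synthetic observations are generated from a plume model with a known release rate, noise is added, and the parameters are re-estimated. The sketch below uses a deliberately crude steady-state Gaussian plume with linear dispersion growth as a stand-in for the SGPM; all coefficients are illustrative assumptions, not values from the paper.

      import numpy as np
      from scipy.optimize import least_squares

      def plume(params, x, y):
          """Crude steady Gaussian plume at ground level for a ground release;
          q = release rate, u = wind speed; sigma_y, sigma_z grow linearly
          with downwind distance x (toy dispersion, unlike the SGPM)."""
          q, u = params
          sy, sz = 0.08 * x, 0.06 * x
          return q / (2.0 * np.pi * u * sy * sz) * np.exp(-y**2 / (2.0 * sy**2))

      # Twin experiment: "observations" from known parameters, plus noise
      rng = np.random.default_rng(7)
      xs = np.linspace(200.0, 2000.0, 30)           # downwind distances (m)
      ys = rng.uniform(-300.0, 300.0, 30)           # crosswind offsets (m)
      obs = plume([5.0, 3.0], xs, ys) * (1.0 + rng.normal(0.0, 0.05, 30))

      fit = least_squares(lambda p: plume(p, xs, ys) - obs,
                          x0=[1.0, 1.0], bounds=([0.0, 0.1], [np.inf, 30.0]))
      q_hat, u_hat = fit.x                          # re-estimated release rate, wind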

  19. A New Generation of Leaching Tests – The Leaching Environmental Assessment Framework

    EPA Science Inventory

    Provides an overview of newly released leaching tests that provide a more accurate source term when estimating environmental release of metals and other constituents of potential concern (COPCs). The Leaching Environmental Assessment Framework (LEAF) methods have been (1) develo...

  20. Multisource Estimation of Long-term Global Terrestrial Surface Radiation

    NASA Astrophysics Data System (ADS)

    Peng, L.; Sheffield, J.

    2017-12-01

    Land surface net radiation is the essential energy source at the Earth's surface. It determines the surface energy budget and its partitioning, drives the hydrological cycle by providing available energy, and supplies heat, light, and energy for biological processes. The individual components of net radiation have changed historically due to natural and anthropogenic climate change and land use change. Decadal variations in radiation, such as global dimming or brightening, have important implications for the hydrological and carbon cycles. In order to assess the trends and variability of net radiation and evapotranspiration, there is a need for accurate estimates of long-term terrestrial surface radiation. While large progress has been made in measuring the top-of-atmosphere energy budget, huge discrepancies exist among ground observations, satellite retrievals, and reanalysis fields of surface radiation, due to the sparseness of observational networks, the difficulty of measuring from space, and uncertainty in algorithm parameters. To overcome the weaknesses of single-source datasets, we propose a multi-source merging approach to fully utilize and combine multiple datasets of the individual radiation components separately, as they are complementary in space and time. First, we conduct diagnostic analysis of multiple satellite and reanalysis datasets based on in-situ measurements such as the Global Energy Balance Archive (GEBA), existing validation studies, and other information such as network density and consistency with other meteorological variables. Then, we calculate the optimal weighted average of multiple datasets by minimizing the variance of the error between in-situ measurements and other observations. Finally, we quantify the uncertainties in the estimates of surface net radiation and employ physical constraints based on the surface energy balance to reduce these uncertainties. The final dataset is evaluated in terms of its long-term variability and the attribution of that variability to changes in individual components. The goal of this study is to provide a merged observational benchmark for large-scale diagnostic analyses, remote sensing and land surface modeling.
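
    The merging step described above amounts to a minimum-variance (inverse-variance) weighted average of the candidate datasets. The sketch below shows this for co-located estimates from three hypothetical products; in practice the error variances would come from validation against in-situ networks such as GEBA, and all numbers here are placeholders.

      import numpy as np

      def merge_inverse_variance(estimates, error_variances):
          """Minimum-variance weighted average of co-located estimates from
          several products; weights are proportional to 1/variance."""
          est = np.asarray(estimates, dtype=float)
          var = np.asarray(error_variances, dtype=float)
          w = (1.0 / var) / np.sum(1.0 / var, axis=0)       # weights sum to 1
          merged = np.sum(w * est, axis=0)
          merged_var = 1.0 / np.sum(1.0 / var, axis=0)      # error of the merge
          return merged, merged_var

      # Three products over two grid cells (W m^-2); values are placeholders
      merged, mvar = merge_inverse_variance(
          estimates=[[150.0, 180.0], [140.0, 175.0], [155.0, 185.0]],
          error_variances=[[25.0, 30.0], [16.0, 20.0], [36.0, 40.0]])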

  1. Inverse modelling of radionuclide release rates using gamma dose rate observations

    NASA Astrophysics Data System (ADS)

    Hamburger, Thomas; Evangeliou, Nikolaos; Stohl, Andreas; von Haustein, Christoph; Thummerer, Severin; Wallner, Christian

    2015-04-01

    Severe accidents in nuclear power plants such as the historical accident in Chernobyl 1986 or the more recent disaster in the Fukushima Dai-ichi nuclear power plant in 2011 have drastic impacts on the population and environment. Observations and dispersion modelling of the released radionuclides help to assess the regional impact of such nuclear accidents. Modelling the increase in regional radionuclide activity concentrations that results from nuclear accidents is subject to a multiplicity of uncertainties. One of the most significant uncertainties is the estimation of the source term, that is, the time-dependent quantification of the released spectrum of radionuclides during the course of the nuclear accident. The quantification of the source term may either remain uncertain (e.g. Chernobyl, Devell et al., 1995) or rely on estimates given by the operators of the nuclear power plant. Precise measurements are mostly missing due to practical limitations during the accident. The release rates of radionuclides at the accident site can be estimated using inverse modelling (Davoine and Bocquet, 2007). The accuracy of the method depends, amongst others, on the availability, reliability and the resolution in time and space of the observations used. Radionuclide activity concentrations are observed on a relatively sparse grid, and the temporal resolution of available data may be low, on the order of hours or a day. Gamma dose rates, on the other hand, are observed routinely on a much denser grid and at higher temporal resolution, and therefore provide a wider basis for inverse modelling (Saunier et al., 2013). We present a new inversion approach which combines an atmospheric dispersion model and observations of radionuclide activity concentrations and gamma dose rates to obtain the source term of radionuclides. We use the Lagrangian particle dispersion model FLEXPART (Stohl et al., 1998; Stohl et al., 2005) to model the atmospheric transport of the released radionuclides. The inversion method uses a Bayesian formulation considering uncertainties for the a priori source term and the observations (Eckhardt et al., 2008; Stohl et al., 2012). The a priori information on the source term is a first guess. The gamma dose rate observations are used to improve the first guess and to retrieve a reliable source term. The details of this method will be presented at the conference. This work is funded by the Bundesamt für Strahlenschutz BfS, Forschungsvorhaben 3612S60026. References: Davoine, X. and Bocquet, M., Atmos. Chem. Phys., 7, 1549-1564, 2007. Devell, L., et al., OCDE/GD(96)12, 1995. Eckhardt, S., et al., Atmos. Chem. Phys., 8, 3881-3897, 2008. Saunier, O., et al., Atmos. Chem. Phys., 13, 11403-11421, 2013. Stohl, A., et al., Atmos. Environ., 32, 4245-4264, 1998. Stohl, A., et al., Atmos. Chem. Phys., 5, 2461-2474, 2005. Stohl, A., et al., Atmos. Chem. Phys., 12, 2313-2343, 2012.

  2. Detailed source term estimation of atmospheric release during the Fukushima Dai-ichi nuclear power plant accident by coupling atmospheric and oceanic dispersion models

    NASA Astrophysics Data System (ADS)

    Katata, Genki; Chino, Masamichi; Terada, Hiroaki; Kobayashi, Takuya; Ota, Masakazu; Nagai, Haruyasu; Kajino, Mizuo

    2014-05-01

    Temporal variations of the release amounts of radionuclides during the Fukushima Dai-ichi Nuclear Power Plant (FNPP1) accident and their dispersion process are essential for evaluating the environmental impacts and resultant radiological doses to the public. Here, we estimated a detailed time trend of atmospheric releases during the accident by combining environmental monitoring data with coupled atmospheric and oceanic dispersion simulations by WSPEEDI-II (Worldwide version of System for Prediction of Environmental Emergency Dose Information) and SEA-GEARN, developed by the authors. New schemes for wet, dry, and fog depositions of radioactive iodine gas (I2 and CH3I) and other particles (I-131, Te-132, Cs-137, and Cs-134) were incorporated into WSPEEDI-II. The deposition calculated by WSPEEDI-II was used as input data for the ocean dispersion calculations by SEA-GEARN. A reverse estimation method based on simulations by both models, assuming a unit release rate (1 Bq h^-1), was adopted to estimate the source term at FNPP1 using air dose rates and air and sea surface concentrations. The results suggested that the major releases of radionuclides from FNPP1 occurred in the following periods during March 2011: the afternoon of the 12th, when the venting and hydrogen explosion occurred at Unit 1; the morning of the 13th, after the venting event at Unit 3; midnight on the 14th, when several openings of the SRV (steam relief valve) were conducted at Unit 2; the morning and night of the 15th; and the morning of the 16th. The modified WSPEEDI-II using the newly estimated source term well reproduced local and regional patterns of air dose rate and surface deposition of I-131 and Cs-137 obtained by airborne observations. Our dispersion simulations also revealed that the most highly contaminated areas around FNPP1 were created from the 15th to the 16th of March by complicated interactions among rainfall (wet deposition), plume movements, phase properties (gas or particle) of I-131, and release rates associated with reactor pressure variations in Units 2 and 3.
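
    The reverse estimation idea can be sketched compactly: if each column of a matrix M holds the modelled response of the monitoring network to a unit release (1 Bq/h) in one time segment, the observations y satisfy y ≈ Mq, and the non-negative segment release rates q can be recovered by non-negative least squares. The matrix below is random, standing in for actual WSPEEDI-II/SEA-GEARN unit-release runs; this is an illustration of the principle, not the authors' procedure.

      import numpy as np
      from scipy.optimize import nnls

      # Columns of M: modelled network response to a unit release (1 Bq/h)
      # in each time segment; here random, standing in for unit-release runs.
      rng = np.random.default_rng(3)
      M = rng.random((40, 12))                  # 40 observations, 12 segments
      q_true = rng.exponential(1.0e12, 12)      # "true" release rates (Bq/h)
      y = M @ q_true * (1.0 + rng.normal(0.0, 0.05, 40))

      q_hat, residual = nnls(M, y)              # non-negative release rates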

  3. Impact of earthquake source complexity and land elevation data resolution on tsunami hazard assessment and fatality estimation

    NASA Astrophysics Data System (ADS)

    Muhammad, Ario; Goda, Katsuichiro

    2018-03-01

    This study investigates the impact of model complexity in source characterization and digital elevation model (DEM) resolution on the accuracy of tsunami hazard assessment and fatality estimation through a case study in Padang, Indonesia. Two types of earthquake source models, i.e. complex and uniform slip models, are adopted by considering three resolutions of DEMs, i.e. 150 m, 50 m, and 10 m. For each of the three grid resolutions, 300 complex source models are generated using new statistical prediction models of earthquake source parameters developed from extensive finite-fault models of past subduction earthquakes, whilst 100 uniform slip models are constructed with variable fault geometry without slip heterogeneity. The results highlight that significant changes to tsunami hazard and fatality estimates are observed with regard to earthquake source complexity and grid resolution. Coarse resolution (i.e. 150 m) leads to inaccurate tsunami hazard prediction and fatality estimation, whilst 50-m and 10-m resolutions produce similar results. However, velocity and momentum flux are sensitive to the grid resolution and hence, at least 10-m grid resolution needs to be implemented when considering flow-based parameters for tsunami hazard and risk assessments. In addition, the results indicate that the tsunami hazard parameters and fatality number are more sensitive to the complexity of earthquake source characterization than the grid resolution. Thus, the uniform models are not recommended for probabilistic tsunami hazard and risk assessments. Finally, the findings confirm that uncertainties of tsunami hazard level and fatality in terms of depth, velocity and momentum flux can be captured and visualized through the complex source modeling approach. From tsunami risk management perspectives, this indeed creates big data, which are useful for making effective and robust decisions.

  4. EVALUATION OF ALTERNATIVE GAUSSIAN PLUME DISPERSION MODELING TECHNIQUES IN ESTIMATING SHORT-TERM SULFUR DIOXIDE CONCENTRATIONS

    EPA Science Inventory

    A routinely applied atmospheric dispersion model was modified to evaluate alternative modeling techniques which allowed for more detailed source data, onsite meteorological data, and several dispersion methodologies. These were evaluated with hourly SO2 concentrations measured at...

  5. Long-term carbon loss in fragmented Neotropical forests.

    PubMed

    Pütz, Sandro; Groeneveld, Jürgen; Henle, Klaus; Knogge, Christoph; Martensen, Alexandre Camargo; Metz, Markus; Metzger, Jean Paul; Ribeiro, Milton Cezar; de Paula, Mateus Dantas; Huth, Andreas

    2014-10-07

    Tropical forests play an important role in the global carbon cycle, as they store a large amount of carbon (C). Tropical deforestation has been identified as a major source of CO2 emissions, though biomass loss due to fragmentation (the creation of additional forest edges) has been largely overlooked as an additional CO2 source. Here, through the combination of remote sensing and knowledge of ecological processes, we present long-term carbon loss estimates due to fragmentation of Neotropical forests: within 10 years the Brazilian Atlantic Forest has lost 69 (±14) Tg C, and the Amazon 599 (±120) Tg C, due to fragmentation alone. For all tropical forests, we estimate emissions of up to 0.2 Pg C yr^-1, or 9 to 24% of the annual global C loss due to deforestation. In conclusion, tropical forest fragmentation increases carbon loss and should be accounted for when attempting to understand the role of vegetation in the global carbon balance.

  6. Solar Radiation Pressure Estimation and Analysis of a GEO Class of High Area-to-Mass Ratio Debris Objects

    NASA Technical Reports Server (NTRS)

    Kelecy, Tom; Payne, Tim; Thurston, Robin; Stansbery, Gene

    2007-01-01

    A population of deep space objects is thought to be high area-to-mass ratio (AMR) debris having origins from sources in the geosynchronous orbit (GEO) belt. Typical AMR values have been observed to range anywhere from 1's to 10's of m^2/kg; hence, higher-than-average solar radiation pressure effects result in long-term migration of eccentricity (0.1-0.6) and inclination over time. However, the orientation-dependent dynamics of the debris also result in time-varying solar radiation forces about the average, which complicate short-term orbit determination processing. Orbit determination results are presented for several of these debris objects and highlight their unique and varied dynamic attributes. Estimation of the solar pressure dynamics over time scales suitable for resolving the shorter-term dynamics improves the orbit estimation, and hence the orbit predictions needed to conduct follow-up observations.

  7. Associations between Source-Specific Fine Particulate Matter and Emergency Department Visits for Respiratory Disease in Four U.S. Cities

    PubMed Central

    Krall, Jenna R.; Mulholland, James A.; Russell, Armistead G.; Balachandran, Sivaraman; Winquist, Andrea; Tolbert, Paige E.; Waller, Lance A.; Sarnat, Stefanie Ebelt

    2016-01-01

    Background: Short-term exposure to ambient fine particulate matter (PM2.5) concentrations has been associated with increased mortality and morbidity. Determining which sources of PM2.5 are most toxic can help guide targeted reduction of PM2.5. However, conducting multicity epidemiologic studies of sources is difficult because source-specific PM2.5 is not directly measured, and source chemical compositions can vary between cities. Objectives: We determined how the chemical composition of primary ambient PM2.5 sources varies across cities. We estimated associations between source-specific PM2.5 and respiratory disease emergency department (ED) visits and examined between-city heterogeneity in estimated associations. Methods: We used source apportionment to estimate daily concentrations of primary source-specific PM2.5 for four U.S. cities. For sources with similar chemical compositions between cities, we applied Poisson time-series regression models to estimate associations between source-specific PM2.5 and respiratory disease ED visits. Results: We found that PM2.5 from biomass burning, diesel vehicle, gasoline vehicle, and dust sources was similar in chemical composition between cities, but PM2.5 from coal combustion and metal sources varied across cities. We found some evidence of positive associations of respiratory disease ED visits with biomass burning PM2.5; associations with diesel and gasoline PM2.5 were frequently imprecise or consistent with the null. We found little evidence of associations with dust PM2.5. Conclusions: We introduced an approach for comparing the chemical compositions of PM2.5 sources across cities and conducted one of the first multicity studies of source-specific PM2.5 and ED visits. Across four U.S. cities, among the primary PM2.5 sources assessed, biomass burning PM2.5 was most strongly associated with respiratory health. Citation: Krall JR, Mulholland JA, Russell AG, Balachandran S, Winquist A, Tolbert PE, Waller LA, Sarnat SE. 2017. Associations between source-specific fine particulate matter and emergency department visits for respiratory disease in four U.S. cities. Environ Health Perspect 125:97–103; http://dx.doi.org/10.1289/EHP271 PMID:27315241

  8. Associations between Source-Specific Fine Particulate Matter and Emergency Department Visits for Respiratory Disease in Four U.S. Cities.

    PubMed

    Krall, Jenna R; Mulholland, James A; Russell, Armistead G; Balachandran, Sivaraman; Winquist, Andrea; Tolbert, Paige E; Waller, Lance A; Sarnat, Stefanie Ebelt

    2017-01-01

    Short-term exposure to ambient fine particulate matter (PM2.5) concentrations has been associated with increased mortality and morbidity. Determining which sources of PM2.5 are most toxic can help guide targeted reduction of PM2.5. However, conducting multicity epidemiologic studies of sources is difficult because source-specific PM2.5 is not directly measured, and source chemical compositions can vary between cities. We determined how the chemical composition of primary ambient PM2.5 sources varies across cities. We estimated associations between source-specific PM2.5 and respiratory disease emergency department (ED) visits and examined between-city heterogeneity in estimated associations. We used source apportionment to estimate daily concentrations of primary source-specific PM2.5 for four U.S. cities. For sources with similar chemical compositions between cities, we applied Poisson time-series regression models to estimate associations between source-specific PM2.5 and respiratory disease ED visits. We found that PM2.5 from biomass burning, diesel vehicle, gasoline vehicle, and dust sources was similar in chemical composition between cities, but PM2.5 from coal combustion and metal sources varied across cities. We found some evidence of positive associations of respiratory disease ED visits with biomass burning PM2.5; associations with diesel and gasoline PM2.5 were frequently imprecise or consistent with the null. We found little evidence of associations with dust PM2.5. We introduced an approach for comparing the chemical compositions of PM2.5 sources across cities and conducted one of the first multicity studies of source-specific PM2.5 and ED visits. Across four U.S. cities, among the primary PM2.5 sources assessed, biomass burning PM2.5 was most strongly associated with respiratory health. Citation: Krall JR, Mulholland JA, Russell AG, Balachandran S, Winquist A, Tolbert PE, Waller LA, Sarnat SE. 2017. Associations between source-specific fine particulate matter and emergency department visits for respiratory disease in four U.S. cities. Environ Health Perspect 125:97-103; http://dx.doi.org/10.1289/EHP271.

  9. A hierarchical modeling approach to estimate regional acute health effects of particulate matter sources

    PubMed Central

    Krall, J. R.; Hackstadt, A. J.; Peng, R. D.

    2017-01-01

    Exposure to particulate matter (PM) air pollution has been associated with a range of adverse health outcomes, including cardiovascular disease (CVD) hospitalizations and other clinical parameters. Determining which sources of PM, such as traffic or industry, are most associated with adverse health outcomes could help guide future recommendations aimed at reducing harmful pollution exposure for susceptible individuals. Information obtained from multisite studies, which is generally more precise than information from a single location, is critical to understanding how PM impacts health and to informing local strategies for reducing individual-level PM exposure. However, few methods exist to perform multisite studies of PM sources, which are not generally directly observed, and adverse health outcomes. We developed SHARE, a hierarchical modeling approach that facilitates reproducible, multisite epidemiologic studies of PM sources. SHARE is a two-stage approach that first summarizes information about PM sources across multiple sites. Then, this information is used to determine how community-level (i.e. county- or city-level) health effects of PM sources should be pooled to estimate regional-level health effects. SHARE is a type of population value decomposition that aims to separate out regional-level features from site-level data. Unlike previous approaches for multisite epidemiologic studies of PM sources, the SHARE approach allows the specific PM sources identified to vary by site. Using data from 2000–2010 for 63 northeastern US counties, we estimated regional-level health effects associated with short-term exposure to major types of PM sources. We found PM from secondary sulfate, traffic, and metals sources was most associated with CVD hospitalizations. PMID:28098412
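
    The pooling step in a two-stage approach like SHARE can be pictured as combining county-level effect estimates into a regional estimate. The sketch below shows plain inverse-variance pooling, which is only the simplest version of what the hierarchical model does; all numbers are invented.

        # Hedged sketch: inverse-variance pooling of county-level health-effect
        # estimates into one regional estimate. SHARE itself is richer than this.
        import numpy as np

        beta = np.array([0.008, 0.012, 0.005, 0.010])  # county log-rate effects (invented)
        se = np.array([0.004, 0.006, 0.003, 0.005])    # their standard errors (invented)

        w = 1.0 / se**2
        beta_pooled = (w * beta).sum() / w.sum()       # regional estimate
        se_pooled = w.sum() ** -0.5                    # its standard error
        print(beta_pooled, se_pooled)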

  10. Modeling and Forecasting Influenza-like Illness (ILI) in Houston, Texas Using Three Surveillance Data Capture Mechanisms.

    PubMed

    Paul, Susannah; Mgbere, Osaro; Arafat, Raouf; Yang, Biru; Santos, Eunice

    2017-01-01

    Objective: The objective was to forecast and validate prediction estimates of influenza activity in Houston, TX, using four years of historical influenza-like illness (ILI) data from three surveillance data capture mechanisms. Background: Using novel surveillance methods and historical data to estimate future trends of influenza-like illness can lead to early detection of increases and decreases in influenza activity. Anticipating surges gives public health professionals more time to prepare and increase prevention efforts. Methods: Data were obtained from three surveillance systems with diverse data capture mechanisms: Flu Near You, ILINet, and hospital emergency center (EC) visits. Autoregressive integrated moving average (ARIMA) models were fitted to data from each source for week 27 of 2012 through week 26 of 2016 and used to forecast influenza-like activity for the subsequent 10 weeks. Estimates were then compared to actual ILI percentages for the same period. Results: Forecasted estimates had wide confidence intervals that crossed zero. The forecasted trend direction differed by data source, resulting in a lack of consensus about future influenza activity. ILINet forecasted estimates and actual percentages had the smallest differences; ILINet performed best when forecasting influenza activity in Houston, TX. Conclusion: Though the three forecasted estimates did not agree on trend directions, and thus were considered imprecise predictors of long-term ILI activity based on existing data, pooling predictions and careful interpretation may be helpful for short-term intervention efforts. Further work is needed to improve forecast accuracy, considering the promise forecasting holds for seasonal influenza prevention and control, and for pandemic preparedness.
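
    The forecasting workflow described above (fit an ARIMA model to four seasons of weekly ILI, project 10 weeks ahead, compare against observed values) can be sketched as follows; the ARIMA order and file layout are assumptions, since the abstract does not state them.

        # Hedged sketch: 10-week-ahead ILI forecast with a simple ARIMA model.
        import pandas as pd
        from statsmodels.tsa.arima.model import ARIMA

        ili = pd.read_csv("ilinet_weekly.csv", index_col="week")["pct_ili"]  # hypothetical
        train, test = ili[:-10], ili[-10:]

        fit = ARIMA(train, order=(1, 0, 1)).fit()      # order is an assumption
        fc = fit.get_forecast(steps=10).summary_frame(alpha=0.05)

        # Compare forecasts (with 95% intervals) against the held-out weeks
        print(fc[["mean", "mean_ci_lower", "mean_ci_upper"]])
        print("MAE:", abs(test.values - fc["mean"].values).mean())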

  11. Parametrized energy spectrum of cosmic-ray protons with kinetic energies down to 1 GeV

    NASA Technical Reports Server (NTRS)

    Tan, L. C.

    1985-01-01

    A new estimation of the interstellar proton spectrum is made in which the source term of primary protons is taken from shock acceleration theory and the cosmic ray propagation calculation is based on a proposed nonuniform galactic disk model.

  12. Bounding the role of black carbon in the climate system: A scientific assessment

    NASA Astrophysics Data System (ADS)

    Bond, T. C.; Doherty, S. J.; Fahey, D. W.; Forster, P. M.; Berntsen, T.; DeAngelo, B. J.; Flanner, M. G.; Ghan, S.; Kärcher, B.; Koch, D.; Kinne, S.; Kondo, Y.; Quinn, P. K.; Sarofim, M. C.; Schultz, M. G.; Schulz, M.; Venkataraman, C.; Zhang, H.; Zhang, S.; Bellouin, N.; Guttikunda, S. K.; Hopke, P. K.; Jacobson, M. Z.; Kaiser, J. W.; Klimont, Z.; Lohmann, U.; Schwarz, J. P.; Shindell, D.; Storelvmo, T.; Warren, S. G.; Zender, C. S.

    2013-06-01

    Black carbon aerosol plays a unique and important role in Earth's climate system. Black carbon is a type of carbonaceous material with a unique combination of physical properties. This assessment provides an evaluation of black-carbon climate forcing that is comprehensive in its inclusion of all known and relevant processes and that is quantitative in providing best estimates and uncertainties of the main forcing terms: direct solar absorption; influence on liquid, mixed phase, and ice clouds; and deposition on snow and ice. These effects are calculated with climate models, but when possible, they are evaluated with both microphysical measurements and field observations. Predominant sources are combustion related, namely, fossil fuels for transportation, solid fuels for industrial and residential uses, and open burning of biomass. Total global emissions of black carbon using bottom-up inventory methods are 7500 Gg yr-1 in the year 2000 with an uncertainty range of 2000 to 29000. However, global atmospheric absorption attributable to black carbon is too low in many models and should be increased by a factor of almost 3. After this scaling, the best estimate for the industrial-era (1750 to 2005) direct radiative forcing of atmospheric black carbon is +0.71 W m-2 with 90% uncertainty bounds of (+0.08, +1.27) W m-2. Total direct forcing by all black carbon sources, without subtracting the preindustrial background, is estimated as +0.88 (+0.17, +1.48) W m-2. Direct radiative forcing alone does not capture important rapid adjustment mechanisms. A framework is described and used for quantifying climate forcings, including rapid adjustments. The best estimate of industrial-era climate forcing of black carbon through all forcing mechanisms, including clouds and cryosphere forcing, is +1.1 W m-2 with 90% uncertainty bounds of +0.17 to +2.1 W m-2. Thus, there is a very high probability that black carbon emissions, independent of co-emitted species, have a positive forcing and warm the climate. We estimate that black carbon, with a total climate forcing of +1.1 W m-2, is the second most important human emission in terms of its climate forcing in the present-day atmosphere; only carbon dioxide is estimated to have a greater forcing. Sources that emit black carbon also emit other short-lived species that may either cool or warm climate. Climate forcings from co-emitted species are estimated and used in the framework described herein. When the principal effects of short-lived co-emissions, including cooling agents such as sulfur dioxide, are included in net forcing, energy-related sources (fossil fuel and biofuel) have an industrial-era climate forcing of +0.22 (-0.50 to +1.08) W m-2 during the first year after emission. For a few of these sources, such as diesel engines and possibly residential biofuels, warming is strong enough that eliminating all short-lived emissions from these sources would reduce net climate forcing (i.e., produce cooling). When open burning emissions, which emit high levels of organic matter, are included in the total, the best estimate of net industrial-era climate forcing by all short-lived species from black-carbon-rich sources becomes slightly negative (-0.06 W m-2 with 90% uncertainty bounds of -1.45 to +1.29 W m-2). The uncertainties in net climate forcing from black-carbon-rich sources are substantial, largely due to lack of knowledge about cloud interactions with both black carbon and co-emitted organic carbon.
In prioritizing potential black-carbon mitigation actions, non-science factors, such as technical feasibility, costs, policy design, and implementation feasibility play important roles. The major sources of black carbon are presently in different stages with regard to the feasibility for near-term mitigation. This assessment, by evaluating the large number and complexity of the associated physical and radiative processes in black-carbon climate forcing, sets a baseline from which to improve future climate forcing estimates.

  13. Bounding the Role of Black Carbon in the Climate System: a Scientific Assessment

    NASA Technical Reports Server (NTRS)

    Bond, T. C.; Doherty, S. J.; Fahey, D. W.; Forster, P. M.; Berntsen, T.; DeAngelo, B. J.; Flanner, M. G.; Ghan, S.; Kärcher, B.; Koch, D.; et al.

    2013-01-01

    Black carbon aerosol plays a unique and important role in Earth's climate system. Black carbon is a type of carbonaceous material with a unique combination of physical properties. This assessment provides an evaluation of black-carbon climate forcing that is comprehensive in its inclusion of all known and relevant processes and that is quantitative in providing best estimates and uncertainties of the main forcing terms: direct solar absorption; influence on liquid, mixed phase, and ice clouds; and deposition on snow and ice. These effects are calculated with climate models, but when possible, they are evaluated with both microphysical measurements and field observations. Predominant sources are combustion related, namely, fossil fuels for transportation, solid fuels for industrial and residential uses, and open burning of biomass. Total global emissions of black carbon using bottom-up inventory methods are 7500 Gg/yr in the year 2000 with an uncertainty range of 2000 to 29000. However, global atmospheric absorption attributable to black carbon is too low in many models and should be increased by a factor of almost 3. After this scaling, the best estimate for the industrial-era (1750 to 2005) direct radiative forcing of atmospheric black carbon is +0.71 W/sq m with 90% uncertainty bounds of (+0.08, +1.27) W/sq m. Total direct forcing by all black carbon sources, without subtracting the preindustrial background, is estimated as +0.88 (+0.17, +1.48) W/sq m. Direct radiative forcing alone does not capture important rapid adjustment mechanisms. A framework is described and used for quantifying climate forcings, including rapid adjustments. The best estimate of industrial-era climate forcing of black carbon through all forcing mechanisms, including clouds and cryosphere forcing, is +1.1 W/sq m with 90% uncertainty bounds of +0.17 to +2.1 W/sq m. Thus, there is a very high probability that black carbon emissions, independent of co-emitted species, have a positive forcing and warm the climate. We estimate that black carbon, with a total climate forcing of +1.1 W/sq m, is the second most important human emission in terms of its climate forcing in the present-day atmosphere; only carbon dioxide is estimated to have a greater forcing. Sources that emit black carbon also emit other short-lived species that may either cool or warm climate. Climate forcings from co-emitted species are estimated and used in the framework described herein. When the principal effects of short-lived co-emissions, including cooling agents such as sulfur dioxide, are included in net forcing, energy-related sources (fossil fuel and biofuel) have an industrial-era climate forcing of +0.22 (-0.50 to +1.08) W/sq m during the first year after emission. For a few of these sources, such as diesel engines and possibly residential biofuels, warming is strong enough that eliminating all short-lived emissions from these sources would reduce net climate forcing (i.e., produce cooling). When open burning emissions, which emit high levels of organic matter, are included in the total, the best estimate of net industrial-era climate forcing by all short-lived species from black-carbon-rich sources becomes slightly negative (-0.06 W/sq m with 90% uncertainty bounds of -1.45 to +1.29 W/sq m). The uncertainties in net climate forcing from black-carbon-rich sources are substantial, largely due to lack of knowledge about cloud interactions with both black carbon and co-emitted organic carbon. 
In prioritizing potential black-carbon mitigation actions, non-science factors, such as technical feasibility, costs, policy design, and implementation feasibility play important roles. The major sources of black carbon are presently in different stages with regard to the feasibility for near-term mitigation. This assessment, by evaluating the large number and complexity of the associated physical and radiative processes in black-carbon climate forcing, sets a baseline from which to improve future climate forcing estimates.

  14. Estimating air emissions from ships: Meta-analysis of modelling approaches and available data sources

    NASA Astrophysics Data System (ADS)

    Miola, Apollonia; Ciuffo, Biagio

    2011-04-01

    Maritime transport plays a central role in the transport sector's sustainability debate. Its contribution to air pollution and greenhouse gases is significant. An effective policy strategy to regulate air emissions requires their robust estimation in terms of quantification and location. This paper provides a critical analysis of the ship emission modelling approaches and data sources available, identifying their limits and constraints. It classifies the main methodologies on the basis of the approach followed (bottom-up or top-down) for the evaluation and geographic characterisation of emissions. The analysis highlights the uncertainty of results from the different methods. This is mainly due to the level of uncertainty connected with the sources of information that are used as inputs to the different studies. This paper describes the sources of the information required for these analyses, paying particular attention to AIS data and to the possible problems associated with their use. One way of reducing the overall uncertainty in the results could be the simultaneous use of different sources of information. This paper presents an alternative methodology based on this approach. As a final remark, it can be expected that new approaches to the problem together with more reliable data sources over the coming years could give more impetus to the debate on the global impact of maritime traffic on the environment that, currently, has only reached agreement via the "consensus" estimates provided by IMO (2009).

  15. Evaluation of PCB sources and releases for identifying priorities to reduce PCBs in Washington State (USA).

    PubMed

    Davies, Holly; Delistraty, Damon

    2016-02-01

    Polychlorinated biphenyls (PCBs) are ubiquitously distributed in the environment and produce multiple adverse effects in humans and wildlife. As a result, the purpose of our study was to characterize PCB sources in anthropogenic materials and releases to the environment in Washington State (USA) in order to formulate recommendations to reduce PCB exposures. Methods included review of relevant publications (e.g., open literature, industry studies and reports, federal and state government databases), scaling of PCB sources from national or county estimates to state estimates, and communication with industry associations and private and public utilities. Recognizing high associated uncertainty due to incomplete data, we strived to provide central tendency estimates for PCB sources. In terms of mass (high to low), PCB sources include lamp ballasts, caulk, small capacitors, large capacitors, and transformers. For perspective, these sources (200,000-500,000 kg) overwhelm PCBs estimated to reside in the Puget Sound ecosystem (1500 kg). Annual releases of PCBs to the environment (high to low) are attributed to lamp ballasts (400-1500 kg), inadvertent generation by industrial processes (900 kg), caulk (160 kg), small capacitors (3-150 kg), large capacitors (10-80 kg), pigments and dyes (0.02-31 kg), and transformers (<2 kg). Recommendations to characterize the extent of PCB distribution and decrease exposures include assessment of PCBs in buildings (e.g., schools) and replacement of these materials, development of Best Management Practices (BMPs) to contain PCBs, reduction of inadvertent generation of PCBs in consumer products, expansion of environmental monitoring and public education, and research to identify specific PCB congener profiles in human tissues.

  16. Bayesian Inference for Source Term Estimation: Application to the International Monitoring System Radionuclide Network

    DTIC Science & Technology

    2014-10-01

    ...of accuracy and precision), compared with the simpler measurement model that does not use multipliers. Significance for defence... 3) Bayesian experimental design for receptor placement in order to maximize the expected information in the measured concentration data for... applications of the Bayesian inferential methodology for source reconstruction have used high-quality concentration data from well-designed atmospheric

  17. Integration of measurements with atmospheric dispersion models: Source term estimation for dispersal of (239)Pu due to non-nuclear detonation of high explosive

    NASA Astrophysics Data System (ADS)

    Edwards, L. L.; Harvey, T. F.; Freis, R. P.; Pitovranov, S. E.; Chernokozhin, E. V.

    1992-10-01

    The accuracy associated with assessing the environmental consequences of an accidental release of radioactivity is highly dependent on our knowledge of the source term characteristics and, in the case when the radioactivity is condensed on particles, the particle size distribution, all of which are generally poorly known. This paper reports on the development of a numerical technique that integrates the radiological measurements with atmospheric dispersion modeling, resulting in more accurate estimates of the particle-size distribution and particle injection height when compared with measurements of high explosive dispersal of (239)Pu. The estimation model is based on a non-linear least squares regression scheme coupled with the ARAC three-dimensional atmospheric dispersion models. The viability of the approach is evaluated by estimation of ADPIC model input parameters such as the ADPIC particle size mean aerodynamic diameter, the geometric standard deviation, and largest size. Additionally, we estimate an optimal 'coupling coefficient' between the particles and an explosive cloud rise model. The experimental data are taken from the Clean Slate 1 field experiment conducted during 1963 at the Tonopah Test Range in Nevada. The regression technique optimizes the agreement between the measured and model-predicted concentrations of (239)Pu by varying the model input parameters within their respective ranges of uncertainties. The technique generally estimated the measured concentrations within a factor of 1.5, with the worst estimate being within a factor of 5, very good in view of the complexity of the concentration measurements, the uncertainties associated with the meteorological data, and the limitations of the models. The best fit also suggests a smaller mean diameter and a smaller geometric standard deviation on the particle size, as well as a slightly weaker particle-to-cloud coupling than previously reported.
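
    The estimation loop described above can be pictured as nonlinear least squares wrapped around a forward dispersion run. In the sketch below, run_dispersion_model is a hypothetical stand-in for the ARAC/ADPIC chain, the three tuned inputs follow the abstract, and fitting log concentrations reflects the factor-of-N way the agreement is reported.

        # Hedged sketch: tune dispersion-model inputs so predicted (239)Pu
        # concentrations match measurements, in a least-squares sense.
        import numpy as np
        from scipy.optimize import least_squares

        measured = np.array([3.1, 1.4, 0.6, 0.2])   # invented sampler readings

        def run_dispersion_model(params):
            mmad, gsd, coupling = params            # the three tuned inputs
            # placeholder forward model standing in for an ADPIC run
            return coupling * np.exp(-np.arange(4) * gsd / mmad)

        def residuals(params):
            # log residuals, so misfit is measured in multiplicative factors
            return np.log(run_dispersion_model(params)) - np.log(measured)

        fit = least_squares(residuals, x0=[5.0, 2.0, 1.0],
                            bounds=([1.0, 1.2, 0.5], [50.0, 4.0, 1.5]))
        print(fit.x)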

  18. A large and persistent carbon sink in the world's forests

    USGS Publications Warehouse

    Pan, Y.; Birdsey, R.A.; Fang, J.; Houghton, R.; Kauppi, P.E.; Kurz, W.A.; Phillips, O.L.; Shvidenko, A.; Lewis, S.L.; Canadell, J.G.; Ciais, P.; Jackson, R.B.; Pacala, S.W.; McGuire, A.D.; Piao, S.; Rautiainen, A.; Sitch, S.; Hayes, D.

    2011-01-01

    The terrestrial carbon sink has been large in recent decades, but its size and location remain uncertain. Using forest inventory data and long-term ecosystem carbon studies, we estimate a total forest sink of 2.4 ± 0.4 petagrams of carbon per year (Pg C year-1) globally for 1990 to 2007. We also estimate a source of 1.3 ± 0.7 Pg C year-1 from tropical land-use change, consisting of a gross tropical deforestation emission of 2.9 ± 0.5 Pg C year-1 partially compensated by a carbon sink in tropical forest regrowth of 1.6 ± 0.5 Pg C year-1. Together, the fluxes comprise a net global forest sink of 1.1 ± 0.8 Pg C year-1, with tropical estimates having the largest uncertainties. Our total forest sink estimate is equivalent in magnitude to the terrestrial sink deduced from fossil fuel emissions and land-use change sources minus ocean and atmospheric sinks.
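
    The abstract's budget terms can be checked directly (all values in Pg C per year): the land-use source is the gross deforestation emission minus the regrowth sink, and the net global sink is the total forest sink minus that source.

        # Budget arithmetic from the abstract (Pg C per year)
        total_forest_sink = 2.4
        gross_tropical_deforestation = 2.9
        tropical_regrowth_sink = 1.6

        land_use_source = gross_tropical_deforestation - tropical_regrowth_sink
        net_global_forest_sink = total_forest_sink - land_use_source
        print(land_use_source, net_global_forest_sink)   # 1.3 and 1.1, as stated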

  19. The scope and control of attention: Sources of variance in working memory capacity.

    PubMed

    Chow, Michael; Conway, Andrew R A

    2015-04-01

    Working memory capacity is a strong positive predictor of many cognitive abilities, across various domains. The pattern of positive correlations across domains has been interpreted as evidence for a unitary source of inter-individual differences in behavior. However, recent work suggests that there are multiple sources of variance contributing to working memory capacity. The current study (N = 71) investigates individual differences in the scope and control of attention, in addition to the number and resolution of items maintained in working memory. Latent variable analyses indicate that the scope and control of attention reflect independent sources of variance and each account for unique variance in general intelligence. Also, estimates of the number of items maintained in working memory are consistent across tasks and related to general intelligence whereas estimates of resolution are task-dependent and not predictive of intelligence. These results provide insight into the structure of working memory, as well as intelligence, and raise new questions about the distinction between number and resolution in visual short-term memory.

  20. Rating curve estimation of nutrient loads in Iowa rivers

    USGS Publications Warehouse

    Stenback, G.A.; Crumpton, W.G.; Schilling, K.E.; Helmers, M.J.

    2011-01-01

    Accurate estimation of nutrient loads in rivers and streams is critical for many applications including determination of sources of nutrient loads in watersheds, evaluating long-term trends in loads, and estimating loading to downstream waterbodies. Since in many cases nutrient concentrations are measured on a weekly or monthly frequency, there is a need to estimate concentration and loads during periods when no data are available. The objectives of this study were to: (i) document the performance of a multiple regression model to predict loads of nitrate and total phosphorus (TP) in Iowa rivers and streams; (ii) determine whether there is any systematic bias in the load prediction estimates for nitrate and TP; and (iii) evaluate streamflow and concentration factors that could affect the load prediction efficiency. A commonly cited rating curve regression is utilized to estimate riverine nitrate and TP loads for rivers in Iowa with watershed areas ranging from 17.4 to over 34,600 km2. Forty-nine nitrate and 44 TP datasets each comprising 5-22 years of approximately weekly to monthly concentrations were examined. Three nitrate data sets had sample collection frequencies averaging about three samples per week. The accuracy and precision of annual and long-term riverine load prediction was assessed by direct comparison of rating curve load predictions with observed daily loads. Significant positive bias of annual and long-term nitrate loads was detected. Long-term rating curve nitrate load predictions exceeded observed loads by 25% or more at 33% of the 49 measurement sites. No bias was found for TP load prediction, although 15% of the 44 cases either underestimated or overestimated observed long-term loads by more than 25%. The rating curve was found to poorly characterize nitrate and phosphorus variation in some rivers. © 2010.
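
    A rating-curve load estimator of the kind evaluated above can be sketched in a few lines: regress log concentration on log flow using the sparse sample days, then predict concentration for every day and integrate to loads. The ln C = b0 + b1 ln Q form and the exp(s^2/2) retransformation correction are common choices, not necessarily the paper's exact specification.

        # Hedged sketch: rating-curve (log-log regression) nutrient load estimate.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        d = pd.read_csv("site_data.csv")        # hypothetical: daily flow, sparse conc
        obs = d.dropna(subset=["conc"])

        X = sm.add_constant(np.log(obs["flow"]))
        fit = sm.OLS(np.log(obs["conc"]), X).fit()

        # Predict every day; exp(s^2/2) corrects retransformation bias when
        # exponentiating a log-scale prediction (one standard choice).
        smear = np.exp(fit.mse_resid / 2.0)
        conc_hat = smear * np.exp(fit.params["const"] + fit.params["flow"] * np.log(d["flow"]))

        daily_load = conc_hat * d["flow"] * 86.4   # mg/L * m3/s -> kg/day
        print("estimated total load (kg):", daily_load.sum())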

  1. LIGHT NONAQUEOUS-PHASE LIQUID HYDROCARBON WEATHERING AT SOME JP-4 FUEL RELEASE SITES

    EPA Science Inventory

    A fuel weathering study was conducted for database entries to estimate natural light, nonaqueousphase
    liquid weathering and source-term reduction rates for use in natural attenuation models. A range of BTEX
    weathering rates from mobile LNAPL plumes at eight field sites with...

  2. DEVELOPMENT AND VALIDATION OF AN AIR-TO-BEEF FOOD CHAIN MODEL FOR DIOXIN-LIKE COMPOUNDS

    EPA Science Inventory

    A model for predicting concentrations of dioxin-like compounds in beef is developed and tested. The key premise of the model is that concentrations of these compounds in air are the source term, or starting point, for estimating beef concentrations. Vapor-phase concentrations t...

  3. Unifying error structures in commonly used biotracer mixing models.

    PubMed

    Stock, Brian C; Semmens, Brice X

    2016-10-01

    Mixing models are statistical tools that use biotracers to probabilistically estimate the contribution of multiple sources to a mixture. These biotracers may include contaminants, fatty acids, or stable isotopes, the latter of which are widely used in trophic ecology to estimate the mixed diet of consumers. Bayesian implementations of mixing models using stable isotopes (e.g., MixSIR, SIAR) are regularly used by ecologists for this purpose, but basic questions remain about when each is most appropriate. In this study, we describe the structural differences between common mixing model error formulations in terms of their assumptions about the predation process. We then introduce a new parameterization that unifies these mixing model error structures, as well as implicitly estimates the rate at which consumers sample from source populations (i.e., consumption rate). Using simulations and previously published mixing model datasets, we demonstrate that the new error parameterization outperforms existing models and provides an estimate of consumption. Our results suggest that the error structure introduced here will improve future mixing model estimates of animal diet. © 2016 by the Ecological Society of America.
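
    The basic idea of a biotracer mixing model can be conveyed with a toy two-source, one-tracer version: each candidate source proportion implies a predicted mixture signature, and the data weight the proportions through a Gaussian likelihood. This sketch uses the simple residual-error variant with invented numbers; it does not reproduce the unified error structure the paper introduces.

        # Toy two-source stable-isotope mixing model on a grid of proportions.
        import numpy as np

        mu_src = np.array([-25.0, -12.0])               # source d13C means (invented)
        mix = np.array([-18.2, -17.5, -19.1, -18.8])    # consumer values (invented)
        sigma = 1.0                                     # assumed residual s.d.

        p = np.linspace(0.0, 1.0, 1001)                 # proportion of source 1
        pred = p[:, None] * mu_src[0] + (1 - p[:, None]) * mu_src[1]

        loglik = -0.5 * (((mix[None, :] - pred) / sigma) ** 2).sum(axis=1)
        post = np.exp(loglik - loglik.max())
        post /= post.sum()
        print("posterior mean of p1:", (p * post).sum())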

  4. Revisiting the social cost of carbon.

    PubMed

    Nordhaus, William D

    2017-02-14

    The social cost of carbon (SCC) is a central concept for understanding and implementing climate change policies. This term represents the economic cost caused by an additional ton of carbon dioxide emissions or its equivalent. The present study presents updated estimates based on a revised DICE model (Dynamic Integrated model of Climate and the Economy). The study estimates that the SCC is $31 per ton of CO2 in 2010 US$ for the current period (2015). For the central case, the real SCC grows at 3% per year over the period to 2050. The paper also compares the estimates with those from other sources.
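
    The growth claim is easy to make concrete: $31 per ton in 2015, compounded at 3% per year, implies roughly $87 per ton (2010 US$) by 2050.

        # Arithmetic implied by the abstract
        scc_2015 = 31.0
        scc_2050 = scc_2015 * 1.03 ** (2050 - 2015)
        print(round(scc_2050))   # ~87 dollars per ton of CO2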

  5. Revisiting the social cost of carbon

    NASA Astrophysics Data System (ADS)

    Nordhaus, William D.

    2017-02-01

    The social cost of carbon (SCC) is a central concept for understanding and implementing climate change policies. This term represents the economic cost caused by an additional ton of carbon dioxide emissions or its equivalent. The present study presents updated estimates based on a revised DICE model (Dynamic Integrated model of Climate and the Economy). The study estimates that the SCC is $31 per ton of CO2 in 2010 US$ for the current period (2015). For the central case, the real SCC grows at 3% per year over the period to 2050. The paper also compares the estimates with those from other sources.

  6. On increasing stability in the two dimensional inverse source scattering problem with many frequencies

    NASA Astrophysics Data System (ADS)

    Entekhabi, Mozhgan Nora; Isakov, Victor

    2018-05-01

    In this paper, we will study the increasing stability in the inverse source problem for the Helmholtz equation in the plane when the source term is assumed to be compactly supported in a bounded domain Ω with a sufficiently smooth boundary. Using the Fourier transform in the frequency domain, bounds for the Hankel functions and for scattering solutions in the complex plane, improving bounds for the analytic continuation, and the exact observability for the wave equation led us to our goals: a sharp uniqueness result and an increasing stability estimate as the wave number interval grows.
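
    For orientation, a standard formulation of this problem (assumed here; the paper's precise setup may differ) reads, in LaTeX:

        \Delta u + k^2 u = -f \quad \text{in } \mathbb{R}^2, \qquad
        \operatorname{supp} f \subset \Omega, \qquad
        \lim_{r \to \infty} \sqrt{r}\,(\partial_r u - i k u) = 0,

    and "increasing stability" means that the bound on \|f\|_{L^2(\Omega)} in terms of boundary data over wave numbers k in (0, K) deteriorates only mildly (e.g., logarithmically) as K grows.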

  7. Regulatory Technology Development Plan - Sodium Fast Reactor. Mechanistic Source Term - Metal Fuel Radionuclide Release

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grabaskas, David; Bucknor, Matthew; Jerden, James

    2016-02-01

    The development of an accurate and defensible mechanistic source term will be vital for the future licensing efforts of metal fuel, pool-type sodium fast reactors. To assist in the creation of a comprehensive mechanistic source term, the current effort sought to estimate the release fraction of radionuclides from metal fuel pins to the primary sodium coolant during fuel pin failures at a variety of temperature conditions. These release estimates were based on the findings of an extensive literature search, which reviewed past experimentation and reactor fuel damage accidents. Data sources for each radionuclide of interest were reviewed to establish release fractions, along with possible release dependencies, and the corresponding uncertainty levels. Although the current knowledge base is substantial, and radionuclide release fractions were established for the elements deemed important for the determination of offsite consequences following a reactor accident, gaps were found pertaining to several radionuclides. First, there is uncertainty regarding the transport behavior of several radionuclides (iodine, barium, strontium, tellurium, and europium) during metal fuel irradiation to high burnup levels. The migration of these radionuclides within the fuel matrix and bond sodium region can greatly affect their release during pin failure incidents. Post-irradiation examination of existing high burnup metal fuel can likely resolve this knowledge gap. Second, data regarding the radionuclide release from molten high burnup metal fuel in sodium is sparse, which makes the assessment of radionuclide release from fuel melting accidents at high fuel burnup levels difficult. This gap could be addressed through fuel melting experimentation with samples from the existing high burnup metal fuel inventory.

  8. Associations of Mortality with Long-Term Exposures to Fine and Ultrafine Particles, Species and Sources: Results from the California Teachers Study Cohort

    PubMed Central

    Hu, Jianlin; Goldberg, Debbie; Reynolds, Peggy; Hertz, Andrew; Bernstein, Leslie; Kleeman, Michael J.

    2015-01-01

    Background: Although several cohort studies report associations between chronic exposure to fine particles (PM2.5) and mortality, few have studied the effects of chronic exposure to ultrafine (UF) particles. In addition, few studies have estimated the effects of the constituents of either PM2.5 or UF particles. Methods: We used a statewide cohort of > 100,000 women from the California Teachers Study who were followed from 2001 through 2007. Exposure data at the residential level were provided by a chemical transport model that computed pollutant concentrations from > 900 sources in California. Besides particle mass, monthly concentrations of 11 species and 8 sources or primary particles were generated at 4-km grids. We used a Cox proportional hazards model to estimate the association between the pollutants and all-cause, cardiovascular, ischemic heart disease (IHD), and respiratory mortality. Results: We observed statistically significant (p < 0.05) associations of IHD with PM2.5 mass, nitrate, elemental carbon (EC), copper (Cu), and secondary organics and the sources gas- and diesel-fueled vehicles, meat cooking, and high-sulfur fuel combustion. The hazard ratio estimate of 1.19 (95% CI: 1.08, 1.31) for IHD in association with a 10-μg/m3 increase in PM2.5 is consistent with findings from the American Cancer Society cohort. We also observed significant positive associations between IHD and several UF components including EC, Cu, metals, and mobile sources. Conclusions: Using an emissions-based model with a 4-km spatial scale, we observed significant positive associations between IHD mortality and both fine and ultrafine particle species and sources. Our results suggest that the exposure model effectively measured local exposures and facilitated the examination of the relative toxicity of particle species. Citation: Ostro B, Hu J, Goldberg D, Reynolds P, Hertz A, Bernstein L, Kleeman MJ. 2015. Associations of mortality with long-term exposures to fine and ultrafine particles, species and sources: results from the California Teachers Study cohort. Environ Health Perspect 123:549–556; http://dx.doi.org/10.1289/ehp.1408565 PMID:25633926

  9. Uncertainty assessment of source attribution of PM(2.5) and its water-soluble organic carbon content using different biomass burning tracers in positive matrix factorization analysis--a case study in Beijing, China.

    PubMed

    Tao, Jun; Zhang, Leiming; Zhang, Renjian; Wu, Yunfei; Zhang, Zhisheng; Zhang, Xiaoling; Tang, Yixi; Cao, Junji; Zhang, Yuanhang

    2016-02-01

    Daily PM2.5 samples were collected at an urban site in Beijing during four one-month periods in 2009-2010, with each period in a different season. Samples were subject to chemical analysis for various chemical components including major water-soluble ions, organic carbon (OC) and water-soluble organic carbon (WSOC), element carbon (EC), trace elements, anhydrosugar levoglucosan (LG), and mannosan (MN). Three sets of source profiles of PM2.5 were first identified through positive matrix factorization (PMF) analysis using single or combined biomass tracers - non-sea salt potassium (nss-K(+)), LG, and a combination of nss-K(+) and LG. The six major source factors of PM2.5 included secondary inorganic aerosol, industrial pollution, soil dust, biomass burning, traffic emission, and coal burning, which were estimated to contribute 31±37%, 39±28%, 14±14%, 7±7%, 5±6%, and 4±8%, respectively, to PM2.5 mass if using the nss-K(+) source profiles, 22±19%, 29±17%, 20±20%, 13±13%, 12±10%, and 4±6%, respectively, if using the LG source profiles, and 21±17%, 31±18%, 19±19%, 11±12%, 14±11%, and 4±6%, respectively, if using the combined nss-K(+) and LG source profiles. The uncertainties in the estimation of biomass burning contributions to WSOC due to the different choices of biomass burning tracers were around 3% annually and up to 24% seasonally in terms of absolute percentage contributions, or on a factor of 1.7 annually and up to a factor of 3.3 seasonally in terms of the actual concentrations. The uncertainty from the major source (e.g. industrial pollution) was on a factor of 1.9 annually and up to a factor of 2.5 seasonally in the estimated WSOC concentrations. Copyright © 2015 Elsevier B.V. All rights reserved.
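
    The decomposition underlying PMF can be illustrated with plain non-negative matrix factorization: the days-by-species matrix X is approximated by G F, with G the daily source contributions and F the source profiles. The sketch below uses sklearn's NMF purely for illustration; EPA PMF additionally weights by measurement uncertainties, which this does not do.

        # Hedged sketch: factor-analytic source apportionment as X ~ G @ F.
        import numpy as np
        from sklearn.decomposition import NMF

        rng = np.random.default_rng(2)
        X = rng.random((120, 15))       # hypothetical days x species matrix

        nmf = NMF(n_components=6, init="nndsvda", max_iter=500, random_state=0)
        G = nmf.fit_transform(X)        # daily source contributions (120 x 6)
        F = nmf.components_             # source chemical profiles (6 x 15)
        print(G.shape, F.shape)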

  10. Estimating the mass variance in neutron multiplicity counting - A comparison of approaches

    NASA Astrophysics Data System (ADS)

    Dubi, C.; Croft, S.; Favalli, A.; Ocherashvili, A.; Pedersen, B.

    2017-12-01

    In the standard practice of neutron multiplicity counting, the first three sampled factorial moments of the event triggered neutron count distribution are used to quantify the three main neutron source terms: the spontaneous fissile material effective mass, the relative (α,n) production and the induced fission source responsible for multiplication. This study compares three methods to quantify the statistical uncertainty of the estimated mass: the bootstrap method, propagation of variance through moments, and statistical analysis of cycle data method. Each of the three methods was implemented on a set of four different NMC measurements, held at the JRC-laboratory in Ispra, Italy, sampling four different Pu samples in a standard Plutonium Scrap Multiplicity Counter (PSMC) well counter.
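
    The starting quantities named above are the reduced factorial moments of the triggered count distribution, which map onto singles, doubles, and triples rates. A sketch of that first step with an invented histogram (the point-model equations relating moments to mass are not reproduced here):

        # Reduced factorial moments of a neutron multiplicity histogram.
        import numpy as np

        n = np.arange(8)                                       # multiplicity 0..7
        freq = np.array([500, 320, 150, 60, 20, 6, 2, 1.0])    # invented counts
        p = freq / freq.sum()

        m1 = (n * p).sum()                                # singles
        m2 = (n * (n - 1) * p).sum() / 2.0                # doubles
        m3 = (n * (n - 1) * (n - 2) * p).sum() / 6.0      # triples
        print(m1, m2, m3)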

  11. Estimating the mass variance in neutron multiplicity counting - A comparison of approaches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dubi, C.; Croft, S.; Favalli, A.

    In the standard practice of neutron multiplicity counting, the first three sampled factorial moments of the event triggered neutron count distribution are used to quantify the three main neutron source terms: the spontaneous fissile material effective mass, the relative (α,n) production and the induced fission source responsible for multiplication. This study compares three methods to quantify the statistical uncertainty of the estimated mass: the bootstrap method, propagation of variance through moments, and statistical analysis of cycle data method. Each of the three methods was implemented on a set of four different NMC measurements, held at the JRC-laboratory in Ispra, Italy, sampling four different Pu samples in a standard Plutonium Scrap Multiplicity Counter (PSMC) well counter.

  12. Estimating the mass variance in neutron multiplicity counting - A comparison of approaches

    DOE PAGES

    Dubi, C.; Croft, S.; Favalli, A.; ...

    2017-09-14

    In the standard practice of neutron multiplicity counting, the first three sampled factorial moments of the event triggered neutron count distribution are used to quantify the three main neutron source terms: the spontaneous fissile material effective mass, the relative (α,n) production and the induced fission source responsible for multiplication. This study compares three methods to quantify the statistical uncertainty of the estimated mass: the bootstrap method, propagation of variance through moments, and statistical analysis of cycle data method. Each of the three methods was implemented on a set of four different NMC measurements, held at the JRC-laboratory in Ispra, Italy, sampling four different Pu samples in a standard Plutonium Scrap Multiplicity Counter (PSMC) well counter.

  13. Desert Dust Outbreaks in Southern Europe: Contribution to Daily PM₁₀ Concentrations and Short-Term Associations with Mortality and Hospital Admissions.

    PubMed

    Stafoggia, Massimo; Zauli-Sajani, Stefano; Pey, Jorge; Samoli, Evangelia; Alessandrini, Ester; Basagaña, Xavier; Cernigliaro, Achille; Chiusolo, Monica; Demaria, Moreno; Díaz, Julio; Faustini, Annunziata; Katsouyanni, Klea; Kelessis, Apostolos G; Linares, Cristina; Marchesi, Stefano; Medina, Sylvia; Pandolfi, Paolo; Pérez, Noemí; Querol, Xavier; Randi, Giorgia; Ranzi, Andrea; Tobias, Aurelio; Forastiere, Francesco

    2016-04-01

    Evidence on the association between short-term exposure to desert dust and health outcomes is controversial. We aimed to estimate the short-term effects of particulate matter ≤ 10 μm (PM10) on mortality and hospital admissions in 13 Southern European cities, distinguishing between PM10 originating from the desert and from other sources. We identified desert dust advection days in multiple Mediterranean areas for 2001-2010 by combining modeling tools, back-trajectories, and satellite data. For each advection day, we estimated PM10 concentrations originating from desert, and computed PM10 from other sources by difference. We fitted city-specific Poisson regression models to estimate the association between PM from different sources (desert and non-desert) and daily mortality and emergency hospitalizations. Finally, we pooled city-specific results in a random-effects meta-analysis. On average, 15% of days were affected by desert dust at ground level (desert PM10 > 0 μg/m3). Most episodes occurred in spring-summer, with increasing gradient of both frequency and intensity north-south and west-east of the Mediterranean basin. We found significant associations of both PM10 concentrations with mortality. Increases of 10 μg/m3 in non-desert and desert PM10 (lag 0-1 days) were associated with increases in natural mortality of 0.55% (95% CI: 0.24, 0.87%) and 0.65% (95% CI: 0.24, 1.06%), respectively. Similar associations were estimated for cardio-respiratory mortality and hospital admissions. PM10 originating from the desert was positively associated with mortality and hospitalizations in Southern Europe. Policy measures should aim at reducing population exposure to anthropogenic airborne particles even in areas with large contribution from desert dust advections. Stafoggia M, Zauli-Sajani S, Pey J, Samoli E, Alessandrini E, Basagaña X, Cernigliaro A, Chiusolo M, Demaria M, Díaz J, Faustini A, Katsouyanni K, Kelessis AG, Linares C, Marchesi S, Medina S, Pandolfi P, Pérez N, Querol X, Randi G, Ranzi A, Tobias A, Forastiere F, MED-PARTICLES Study Group. 2016. Desert dust outbreaks in Southern Europe: contribution to daily PM10 concentrations and short-term associations with mortality and hospital admissions. Environ Health Perspect 124:413-419; http://dx.doi.org/10.1289/ehp.1409164.
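
    The final pooling step can be sketched with a DerSimonian-Laird random-effects estimator (assumed here; the abstract does not name the estimator), applied to invented city-specific percent increases per 10 μg/m3.

        # Hedged sketch: random-effects meta-analysis of city-specific estimates.
        import numpy as np

        beta = np.array([0.55, 0.70, 0.40, 0.90, 0.60]) / 100.0  # invented effects
        se = np.array([0.20, 0.30, 0.25, 0.40, 0.22]) / 100.0    # invented SEs

        w = 1.0 / se**2
        beta_fe = (w * beta).sum() / w.sum()                     # fixed-effect mean
        q = (w * (beta - beta_fe) ** 2).sum()                    # heterogeneity
        tau2 = max(0.0, (q - (len(beta) - 1)) / (w.sum() - (w**2).sum() / w.sum()))

        w_re = 1.0 / (se**2 + tau2)
        beta_re = (w_re * beta).sum() / w_re.sum()
        print("pooled % increase per 10 ug/m3:", 100 * beta_re)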

  14. Part 2. Development of Enhanced Statistical Methods for Assessing Health Effects Associated with an Unknown Number of Major Sources of Multiple Air Pollutants.

    PubMed

    Park, Eun Sug; Symanski, Elaine; Han, Daikwon; Spiegelman, Clifford

    2015-06-01

    A major difficulty with assessing source-specific health effects is that source-specific exposures cannot be measured directly; rather, they need to be estimated by a source-apportionment method such as multivariate receptor modeling. The uncertainty in source apportionment (uncertainty in source-specific exposure estimates and model uncertainty due to the unknown number of sources and identifiability conditions) has been largely ignored in previous studies. Also, spatial dependence of multipollutant data collected from multiple monitoring sites has not yet been incorporated into multivariate receptor modeling. The objectives of this project are (1) to develop a multipollutant approach that incorporates both sources of uncertainty in source-apportionment into the assessment of source-specific health effects and (2) to develop enhanced multivariate receptor models that can account for spatial correlations in the multipollutant data collected from multiple sites. We employed a Bayesian hierarchical modeling framework consisting of multivariate receptor models, health-effects models, and a hierarchical model on latent source contributions. For the health model, we focused on the time-series design in this project. Each combination of number of sources and identifiability conditions (additional constraints on model parameters) defines a different model. We built a set of plausible models with extensive exploratory data analyses and with information from previous studies, and then computed posterior model probability to estimate model uncertainty. Parameter estimation and model uncertainty estimation were implemented simultaneously by Markov chain Monte Carlo (MCMC) methods. We validated the methods using simulated data. We illustrated the methods using PM2.5 (particulate matter ≤ 2.5 μm in aerodynamic diameter) speciation data and mortality data from Phoenix, Arizona, and Houston, Texas. The Phoenix data included counts of cardiovascular deaths and daily PM2.5 speciation data from 1995-1997. The Houston data included respiratory mortality data and 24-hour PM2.5 speciation data sampled every six days from a region near the Houston Ship Channel in years 2002-2005. We also developed a Bayesian spatial multivariate receptor modeling approach that, while simultaneously dealing with the unknown number of sources and identifiability conditions, incorporated spatial correlations in the multipollutant data collected from multiple sites into the estimation of source profiles and contributions based on the discrete process convolution model for multivariate spatial processes. This new modeling approach was applied to 24-hour ambient air concentrations of 17 volatile organic compounds (VOCs) measured at nine monitoring sites in Harris County, Texas, during years 2000 to 2005. Simulation results indicated that our methods were accurate in identifying the true model and estimated parameters were close to the true values. The results from our methods agreed in general with previous studies on the source apportionment of the Phoenix data in terms of estimated source profiles and contributions. However, we had a greater number of statistically insignificant findings, which was likely a natural consequence of incorporating uncertainty in the estimated source contributions into the health-effects parameter estimation. 
For the Houston data, a model with five sources (that seemed to be Sulfate-Rich Secondary Aerosol, Motor Vehicles, Industrial Combustion, Soil/Crustal Matter, and Sea Salt) showed the highest posterior model probability among the candidate models considered when fitted simultaneously to the PM2.5 and mortality data. There was a statistically significant positive association between respiratory mortality and same-day PM2.5 concentrations attributed to one of the sources (probably industrial combustion). The Bayesian spatial multivariate receptor modeling approach applied to the VOC data led to a highest posterior model probability for a model with five sources (that seemed to be refinery, petrochemical production, gasoline evaporation, natural gas, and vehicular exhaust) among several candidate models, with the number of sources varying between three and seven and with different identifiability conditions. Our multipollutant approach assessing source-specific health effects is more advantageous than a single-pollutant approach in that it can estimate total health effects from multiple pollutants and can also identify emission sources that are responsible for adverse health effects. Our Bayesian approach can incorporate not only uncertainty in the estimated source contributions, but also model uncertainty that has not been addressed in previous studies on assessing source-specific health effects. The new Bayesian spatial multivariate receptor modeling approach enables predictions of source contributions at unmonitored sites, minimizing exposure misclassification and providing improved exposure estimates along with their uncertainty estimates, as well as accounting for uncertainty in the number of sources and identifiability conditions.
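
    One ingredient above, posterior model probabilities across candidate numbers of sources, can be approximated crudely with BIC weights; this shortcut stands in for the full MCMC computation and all numbers are invented.

        # Hedged sketch: BIC-based approximate posterior model probabilities.
        import numpy as np

        loglik = np.array([-5120.0, -5060.0, -5020.0, -5012.0, -5010.0])  # invented
        k = np.array([30, 40, 50, 60, 70])     # parameter counts (invented)
        n_obs = 1500

        bic = -2.0 * loglik + k * np.log(n_obs)
        w = np.exp(-0.5 * (bic - bic.min()))
        w /= w.sum()
        for n_src, prob in zip(range(3, 8), w):
            print(n_src, "sources:", round(prob, 3))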

  15. Detection of a Moving Gas Source and Estimation of its Concentration Field with a Sensing Aerial Vehicle Integration of Theoretical Controls and Computational Fluids

    DTIC Science & Technology

    2016-07-21

    constants. The model (2.42) is popular for simulation of the UAV motion [60], [61], [62] because it models the aircraft response to... inputs to the dynamic model (2.42). The concentration sensors onboard the UAV record (simulated) concentration data according to its spatial location... vehicle dynamics and guidance, and the onboard sensor modeling. Subject terms: state estimation; UAVs; mobile sensors; grid adaptation; plume

  16. Bradley Fighting Vehicle Gunnery: An Analysis of Engagement Strategies for the M242 25-mm Automatic Gun

    DTIC Science & Technology

    1993-03-01

    source for this estimate of eight rounds per BMP target. According to analyst Donna Quirido, AMSAA does not provide or support any such estimate (30... engagement or, in the case of the Bradley, stabilization inaccuracies. According to Helgert, these errors give rise to aim-wander, a term that derives from... the same area. (6:14-5) The resulting approximation to the truncated normal integral has a maximum relative error of 0.0075. Using Polya-Williams, an

  17. EEG source localization: Sensor density and head surface coverage.

    PubMed

    Song, Jasmine; Davey, Colin; Poulsen, Catherine; Luu, Phan; Turovets, Sergei; Anderson, Erik; Li, Kai; Tucker, Don

    2015-12-30

    The accuracy of EEG source localization depends on a sufficient sampling of the surface potential field, an accurate conducting volume estimation (head model), and a suitable and well-understood inverse technique. The goal of the present study is to examine the effect of sampling density and coverage on the ability to accurately localize sources, using common linear inverse weight techniques, at different depths. Several inverse methods are examined, using popular head conductivity values. Simulation studies were employed to examine the effect of spatial sampling of the potential field at the head surface, in terms of sensor density and coverage of the inferior and superior head regions. In addition, the effects of sensor density and coverage are investigated in the source localization of epileptiform EEG. Greater sensor density improves source localization accuracy. Moreover, across all sampling density and inverse methods, adding samples on the inferior surface improves the accuracy of source estimates at all depths. More accurate source localization of EEG data can be achieved with high spatial sampling of the head surface electrodes. The most accurate source localization is obtained when the voltage surface is densely sampled over both the superior and inferior surfaces. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
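
    The "linear inverse weight" techniques compared above share the form of a regularized linear inverse. A toy minimum-norm estimate, with a random stand-in for the leadfield matrix:

        # Toy Tikhonov-regularized minimum-norm inverse; the leadfield is random,
        # purely to illustrate the estimator's form.
        import numpy as np

        rng = np.random.default_rng(0)
        n_sensors, n_sources = 64, 500
        L = rng.standard_normal((n_sensors, n_sources))  # hypothetical leadfield

        x_true = np.zeros(n_sources)
        x_true[42] = 1.0                                 # one active source
        y = L @ x_true + 0.01 * rng.standard_normal(n_sensors)

        lam = 1e-2
        # Minimum-norm estimate: x = L^T (L L^T + lam I)^{-1} y
        x_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), y)
        print("estimated peak at source index:", int(np.abs(x_hat).argmax()))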

  18. Probabilistic estimation of long-term volcanic hazard under evolving tectonic conditions in a 1 Ma timeframe

    NASA Astrophysics Data System (ADS)

    Jaquet, O.; Lantuéjoul, C.; Goto, J.

    2017-10-01

    Risk assessments in relation to the siting of potential deep geological repositories for radioactive wastes demand the estimation of long-term tectonic hazards such as volcanicity and rock deformation. Owing to their tectonic situation, such evaluations concern many industrial regions around the world. For sites near volcanically active regions, a prevailing source of uncertainty is related to volcanic hazard. For specific situations, in particular in relation to geological repository siting, the requirements for the assessment of volcanic and tectonic hazards have to be expanded to 1 million years. At such time scales, tectonic changes are likely to influence volcanic hazard and therefore a particular stochastic model needs to be developed for the estimation of volcanic hazard. The concepts and theoretical basis of the proposed model are given and a methodological illustration is provided using data from the Tohoku region of Japan.

  19. Lesion contrast and detection using sonoelastographic shear velocity imaging: preliminary results

    NASA Astrophysics Data System (ADS)

    Hoyt, Kenneth; Parker, Kevin J.

    2007-03-01

    This paper assesses lesion contrast and detection using sonoelastographic shear velocity imaging. Shear wave interference patterns, termed crawling waves, for a two-phase medium were simulated assuming plane wave conditions. Shear velocity estimates were computed using a spatial autocorrelation algorithm that operates in the direction of shear wave propagation for a given kernel size. Contrast was determined by analyzing shear velocity estimate transition between mediums. Experimental results were obtained using heterogeneous phantoms with spherical inclusions (5 or 10 mm in diameter) characterized by elevated shear velocities. Two vibration sources were applied to opposing phantom edges and scanned (orthogonal to shear wave propagation) with an ultrasound scanner equipped for sonoelastography. Demodulated data were saved and transferred to an external computer for processing shear velocity images. Simulation results demonstrate that shear velocity transition between contrasting mediums is governed by both estimator kernel size and source vibration frequency. Experimental results from phantoms further indicate that decreasing estimator kernel size produces a corresponding decrease in shear velocity estimate transition between background and inclusion material, albeit with an increase in estimator noise. Overall, results demonstrate the ability to generate high-contrast shear velocity images using sonoelastographic techniques and detect millimeter-sized lesions.

  20. Long-Term Temporal Trends of Polychlorinated Biphenyls and Their Controlling Sources in China.

    PubMed

    Zhao, Shizhen; Breivik, Knut; Liu, Guorui; Zheng, Minghui; Jones, Kevin C; Sweetman, Andrew J

    2017-03-07

    Polychlorinated biphenyls (PCBs) are industrial organic contaminants identified as persistent, bioaccumulative, toxic (PBT), and subject to long-range transport (LRT) with global scale significance. This study focuses on a reconstruction and prediction for China of long-term emission trends of intentionally and unintentionally produced (UP) ∑7PCBs (UP-PCBs, from the manufacture of steel, cement and sinter iron) and their re-emissions from secondary sources (e.g., soils and vegetation) using a dynamic fate model (BETR-Global). Contemporary emission estimates combined with predictions from the multimedia fate model suggest that primary sources still dominate, although unintentional sources are predicted to become a main contributor from 2035 for PCB-28. Imported e-waste is predicted to play an increasing role until 2020-2030 on a national scale due to the decline of intentionally produced (IP) emissions. Hypothetical emission scenarios suggest that China could become a potential source to neighboring regions with a net output of ∼0.4 t year-1 by around 2050. However, future emission scenarios and hence model results will be dictated by the efficiency of control measures.

  1. Sample Based Unit Liter Dose Estimates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    JENSEN, L.

    The Tank Waste Characterization Program has taken many core samples, grab samples, and auger samples from the single-shell and double-shell tanks during the past 10 years. Consequently, the amount of sample data available has increased, both in terms of quantity of sample results and the number of tanks characterized. More and better data are available than when the current radiological and toxicological source terms used in the Basis for Interim Operation (BIO) (FDH 1999a) and the Final Safety Analysis Report (FSAR) (FDH 1999b) were developed. The Nuclear Safety and Licensing (NS and L) organization wants to use the new data to upgrade the radiological and toxicological source terms used in the BIO and FSAR. The NS and L organization requested assistance in producing a statistically based process for developing the source terms. This report describes the statistical techniques used and the assumptions made to support the development of a new radiological source term for liquid and solid wastes stored in single-shell and double-shell tanks. The results given in this report are a revision to similar results given in an earlier version of the document (Jensen and Wilmarth 1999). The main difference between the results in this document and the earlier version is that the dose conversion factors (DCF) for converting μCi/g or μCi/L to Sv/L (sieverts per liter) have changed. There are now two DCFs, one based on ICRP-68 and one based on ICRP-71 (Brevick 2000).
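
    The conversion at the heart of a unit liter dose is simple: an activity concentration times a dose conversion factor gives dose per liter. The numbers below are invented placeholders, not values from the report.

        # Illustration only: activity concentration (uCi/L) x DCF (Sv/uCi) = Sv/L.
        activity_uci_per_l = 12.0     # hypothetical 137Cs concentration
        dcf_sv_per_uci = 3.7e-7       # hypothetical dose conversion factor
        unit_liter_dose = activity_uci_per_l * dcf_sv_per_uci
        print(unit_liter_dose, "Sv/L")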

  2. Improvement in error propagation in the Shack-Hartmann-type zonal wavefront sensors.

    PubMed

    Pathak, Biswajit; Boruah, Bosanta R

    2017-12-01

    Estimation of the wavefront from measured slope values is an essential step in a Shack-Hartmann-type wavefront sensor. Using an appropriate estimation algorithm, these measured slopes are converted into wavefront phase values. Hence, accuracy in wavefront estimation lies in proper interpretation of these measured slope values using the chosen estimation algorithm. There are two important sources of errors associated with the wavefront estimation process, namely, the slope measurement error and the algorithm discretization error. The former type is due to the noise in the slope measurements or to the detector centroiding error, and the latter is a consequence of solving equations of a basic estimation algorithm adopted onto a discrete geometry. These errors deserve particular attention, because they decide the preference of a specific estimation algorithm for wavefront estimation. In this paper, we investigate these two important sources of errors associated with the wavefront estimation algorithms of Shack-Hartmann-type wavefront sensors. We consider the widely used Southwell algorithm and the recently proposed Pathak-Boruah algorithm [J. Opt. 16, 055403 (2014); doi:10.1088/2040-8978/16/5/055403] and perform a comparative study between the two. We find that the latter algorithm is inherently superior to the Southwell algorithm in terms of the error propagation performance. We also conduct experiments that further establish the correctness of the comparative study between the said two estimation algorithms.
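
    Zonal estimation of the kind discussed above amounts to solving a least-squares system that links neighboring phase values to measured slopes. A 1-D toy version follows (the Southwell geometry proper is 2-D, so this is only the shape of the idea):

        # 1-D zonal wavefront reconstruction from noisy slopes by least squares.
        import numpy as np

        n, h = 32, 1.0
        phi_true = np.sin(np.linspace(0.0, 2.0 * np.pi, n))
        rng = np.random.default_rng(1)
        slopes = np.diff(phi_true) / h + 0.01 * rng.standard_normal(n - 1)

        # Difference operator D with D @ phi ~ slopes
        D = (np.eye(n - 1, n, 1) - np.eye(n - 1, n)) / h
        phi_hat, *_ = np.linalg.lstsq(D, slopes, rcond=None)
        phi_hat += phi_true.mean() - phi_hat.mean()   # fix the unobservable piston
        print("rms error:", np.sqrt(((phi_hat - phi_true) ** 2).mean()))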

  3. Relaxation approximations to second-order traffic flow models by high-resolution schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nikolos, I.K.; Delis, A.I.; Papageorgiou, M.

    2015-03-10

    A relaxation-type approximation of second-order non-equilibrium traffic models, written in conservation or balance law form, is considered. Using the relaxation approximation, the nonlinear equations are transformed into a semi-linear diagonalizable problem with linear characteristic variables and stiff source terms, with the attractive feature that neither Riemann solvers nor characteristic decompositions are needed. In particular, it is only necessary to provide the flux and source term functions and an estimate of the characteristic speeds. To discretize the resulting relaxation system, high-resolution reconstructions in space are considered. Emphasis is given to a fifth-order WENO scheme and its performance. The computations reported demonstrate the simplicity and versatility of relaxation schemes as numerical solvers.
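
    A minimal sketch of the relaxation idea, assuming a scalar example (Burgers' equation stands in for a traffic model, and first-order upwind transport stands in for the fifth-order WENO reconstruction used in the paper): the characteristic variables w± = v ± √a·u advect at the frozen speeds ±√a, and the stiff source term is integrated exactly.

```python
import numpy as np

f = lambda u: 0.5 * u ** 2        # Burgers flux, a stand-in for a traffic flux
a, eps = 1.5, 1e-8                # a >= max f'(u)^2 (subcharacteristic condition)
n = 400
dx = 1.0 / n
x = (np.arange(n) + 0.5) * dx
u = np.where(x < 0.5, 1.0, 0.0)   # Riemann initial data (a shock)
v = f(u)
c = np.sqrt(a)
dt = 0.4 * dx / c                 # CFL number 0.4
for _ in range(200):
    wp, wm = v + c * u, v - c * u                   # characteristic variables
    wp -= (c * dt / dx) * (wp - np.roll(wp, 1))     # upwind transport, speed +c
    wm += (c * dt / dx) * (np.roll(wm, -1) - wm)    # upwind transport, speed -c
    u, v = (wp - wm) / (2 * c), (wp + wm) / 2
    v = f(u) + (v - f(u)) * np.exp(-dt / eps)       # exact stiff relaxation step
print(u[::40].round(2))           # shock propagating right at speed ~0.5
```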

  4. SISSY: An efficient and automatic algorithm for the analysis of EEG sources based on structured sparsity.

    PubMed

    Becker, H; Albera, L; Comon, P; Nunes, J-C; Gribonval, R; Fleureau, J; Guillotel, P; Merlet, I

    2017-08-15

    Over the past decades, a multitude of different brain source imaging algorithms have been developed to identify the neural generators underlying the surface electroencephalography measurements. While most of these techniques focus on determining the source positions, only a small number of recently developed algorithms provide an indication of the spatial extent of the distributed sources. In a recent comparison of brain source imaging approaches, the VB-SCCD algorithm has been shown to be one of the most promising algorithms among these methods. However, this technique suffers from several problems: it leads to amplitude-biased source estimates, it has difficulties in separating close sources, and it has a high computational complexity due to its implementation using second order cone programming. To overcome these problems, we propose to include an additional regularization term that imposes sparsity in the original source domain and to solve the resulting optimization problem using the alternating direction method of multipliers. Furthermore, we show that the algorithm yields more robust solutions by taking into account the temporal structure of the data. We also propose a new method to automatically threshold the estimated source distribution, which makes it possible to delineate the active brain regions. The new algorithm, called Source Imaging based on Structured Sparsity (SISSY), is analyzed by means of realistic computer simulations and is validated on the clinical data of four patients. Copyright © 2017 Elsevier Inc. All rights reserved.
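
    For illustration, a minimal sketch of the ADMM building block behind such solvers, applied to the plain l1-regularized least-squares problem min 0.5‖Ax − b‖² + λ‖x‖₁; SISSY additionally includes a structured, total-variation-like penalty over the cortical mesh, which this sketch omits. Sizes and data are synthetic.

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=300):
    """ADMM for min 0.5*||Ax-b||^2 + lam*||x||_1 (scaled dual form)."""
    n = A.shape[1]
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))   # factor once, reuse
    Atb = A.T @ b
    z, u = np.zeros(n), np.zeros(n)
    soft = lambda t, k: np.sign(t) * np.maximum(np.abs(t) - k, 0.0)
    for _ in range(iters):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        z = soft(x + u, lam / rho)       # proximal step imposes sparsity
        u += x - z                       # scaled dual update
    return z

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 200))
x_true = np.zeros(200); x_true[[5, 60, 150]] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.flatnonzero(admm_lasso(A, b, lam=0.5)))   # ~ [5, 60, 150]
```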

  5. Three-Dimensional Model Synthesis of the Global Methane Cycle

    NASA Technical Reports Server (NTRS)

    Fung, I.; Prather, M.; John, J.; Lerner, J.; Matthews, E.

    1991-01-01

    A synthesis of the global methane cycle is presented in an attempt to generate an accurate global methane budget. Methane-flux measurements, energy data, and agricultural statistics are merged with databases of land-surface characteristics and anthropogenic activities. The sources and sinks of methane are estimated based on atmospheric methane composition and variations, and a global 3D transport model simulates the corresponding atmospheric responses. The geographic and seasonal variations of candidate budgets are compared with observational data, and the available observations are used to constrain the plausible methane budgets. The preferred budget includes annual destruction rates and annual emissions for various sources. The lack of direct flux measurements in many of the source regions makes a unique determination of each term impossible. OH oxidation is found to be the largest single term, although more measurements of this and other terms are recommended.

  6. Inverse modelling-based reconstruction of the Chernobyl source term available for long-range transport

    NASA Astrophysics Data System (ADS)

    Davoine, X.; Bocquet, M.

    2007-03-01

    The reconstruction of the Chernobyl accident source term has previously been carried out using core inventories, but also through repeated comparisons between model simulations and measurements of activity concentrations or deposited activity. The approach presented in this paper is based on inverse modelling techniques. It relies both on the activity concentration measurements and on the adjoint of a chemistry-transport model. The location of the release is assumed to be known, and the aim is to retrieve a source term available for long-range transport that depends both on time and altitude. The method relies on the maximum entropy on the mean principle and exploits source positivity. The inversion results are mainly sensitive to two tuning parameters, a mass scale and the scale of the prior errors in the inversion. To overcome this difficulty, we resort to the statistical L-curve method to estimate balanced values for these two parameters. Once this is done, many of the retrieved features of the source are robust within a reasonable range of parameter values. Our results favour the acknowledged three-step scenario, with a strong initial release (26 to 27 April), followed by a weak emission period of four days (28 April-1 May) and again a release, longer but less intense than the initial one (2 May-6 May). The retrieved quantities of iodine-131, caesium-134 and caesium-137 that have been released are in good agreement with the latest reported estimations. Yet, a larger share of the total released activity is ascribed to the first period and a smaller share to the third. Finer chronological details are obtained, such as a sequence of eruptive episodes in the first two days, likely related to the modulation of the boundary layer diurnal cycle. In addition, the first two-day release surges are found to have effectively reached an altitude up to the top of the domain (5000 m).
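
    A minimal sketch of the L-curve principle, here on a generic Tikhonov-regularized inversion rather than the paper's maximum-entropy setting: sweep the regularization weight, trace the log residual norm against the log solution norm, and keep the corner of maximum curvature. All data are synthetic.

```python
import numpy as np

def l_curve_corner(G, y, lams):
    """Pick the regularization weight at the L-curve's maximum curvature."""
    pts = []
    for lam in lams:
        q = np.linalg.solve(G.T @ G + lam ** 2 * np.eye(G.shape[1]), G.T @ y)
        pts.append((np.log(np.linalg.norm(G @ q - y)),
                    np.log(np.linalg.norm(q))))
    r, s = np.array(pts).T
    r1, s1 = np.gradient(r), np.gradient(s)      # discrete curvature of the
    r2, s2 = np.gradient(r1), np.gradient(s1)    # parametric curve (r, s)
    kappa = (r1 * s2 - s1 * r2) / (r1 ** 2 + s1 ** 2) ** 1.5
    return lams[np.nanargmax(kappa)]

rng = np.random.default_rng(2)
U, _, Vt = np.linalg.svd(rng.standard_normal((60, 40)), full_matrices=False)
G = U @ np.diag(10.0 ** -np.linspace(0, 4, 40)) @ Vt    # ill-conditioned
y = G @ rng.standard_normal(40) + 1e-4 * rng.standard_normal(60)
print(l_curve_corner(G, y, np.logspace(-8, 1, 60)))
```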

  7. Survey on the Performance of Source Localization Algorithms.

    PubMed

    Fresno, José Manuel; Robles, Guillermo; Martínez-Tarifa, Juan Manuel; Stewart, Brian G

    2017-11-18

    The localization of emitters using an array of sensors or antennas is a problem that arises in several applications. There exist different techniques for source localization, which can be classified into multilateration, received signal strength (RSS) and proximity methods. The performance of multilateration techniques relies on measured time variables: the time of flight (ToF) of the emission from the emitter to the sensor, the time differences of arrival (TDoA) of the emission between sensors and the pseudo-time of flight (pToF) of the emission to the sensors. The multilateration algorithms presented and compared in this paper can be classified as iterative and non-iterative methods. Both standard least squares (SLS) and hyperbolic least squares (HLS) are iterative and based on the Newton-Raphson technique to solve the non-linear equation system. The metaheuristic technique particle swarm optimization (PSO) used for source localisation is also studied. This optimization technique estimates the source position as the optimum of an objective function based on HLS and is also iterative in nature. Three non-iterative algorithms, namely the hyperbolic positioning algorithms (HPA), the maximum likelihood estimator (MLE) and the Bancroft algorithm, are also presented. A non-iterative combined algorithm, MLE-HLS, based on MLE and HLS, is further proposed in this paper. The performance of all algorithms is analysed and compared in terms of accuracy in the localization of the position of the emitter and in terms of computational time. The analysis is also undertaken with three different sensor layouts since the positions of the sensors affect the localization; several source positions are also evaluated to make the comparison more robust. The analysis is carried out using theoretical time differences, as well as including errors due to the effect of digital sampling of the time variables. It is shown that the most balanced algorithm, yielding better results than the other algorithms in terms of accuracy and short computational time, is the combined MLE-HLS algorithm.
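
    For illustration, a minimal hyperbolic least-squares (HLS) sketch: TDoA measurements referenced to sensor 0 define residuals that an iterative solver minimizes over candidate source positions. SciPy's least_squares stands in for the Newton-Raphson iteration described above; the geometry and propagation speed are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

c = 343.0                                           # propagation speed, m/s
sensors = np.array([[0, 0], [10, 0], [0, 10], [10, 10]], float)
src = np.array([3.0, 7.0])                          # true emitter (unknown)
toa = np.linalg.norm(sensors - src, axis=1) / c
tdoa = toa[1:] - toa[0]                             # measured time differences

def residuals(p):
    d = np.linalg.norm(sensors - p, axis=1) / c     # predicted ToF to each sensor
    return (d[1:] - d[0]) - tdoa                    # misfit of predicted TDoA

est = least_squares(residuals, x0=np.array([5.0, 5.0])).x
print(est)                                          # ~ [3, 7]
```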

  8. Maximizing the spatial representativeness of NO2 monitoring data using a combination of local wind-based sectoral division and seasonal and diurnal correction factors.

    PubMed

    Donnelly, Aoife; Naughton, Owen; Misstear, Bruce; Broderick, Brian

    2016-10-14

    This article describes a new methodology for increasing the spatial representativeness of individual monitoring sites. Air pollution levels at a given point are influenced by emission sources in the immediate vicinity. Since emission sources are rarely uniformly distributed around a site, concentration levels will inevitably be most affected by the sources in the prevailing upwind direction. The methodology provides a means of capturing this effect and providing additional information regarding source/pollution relationships. It divides the air quality data from a given monitoring site into a number of sectors or wedges based on wind direction and estimates annual mean values for each sector, thus optimising the information that can be obtained from a single monitoring station. The method corrects short-term data for diurnal and seasonal variations in concentrations (which can produce uneven weighting of data within each sector) and for the uneven frequency of wind directions. Significant improvements in correlations between the air quality data and the spatial air quality indicators were obtained after application of the correction factors. This suggests the application of these techniques would be of significant benefit in land-use regression modelling studies. Furthermore, the method was found to be very useful for estimating long-term mean values and wind direction sector values using only short-term monitoring data. The methods presented in this article can result in cost savings by minimising the number of monitoring sites required for air quality studies while also capturing a greater degree of variability in spatial characteristics. In this way, more reliable, but also more expensive, monitoring techniques can be used in preference to a higher number of low-cost but less reliable techniques. The methods described in this article have applications in local air quality management, source receptor analysis, land-use regression mapping and modelling, and population exposure studies.
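
    A minimal sketch of the sector-division step with a diurnal correction factor (the full method also applies seasonal factors and corrects for the uneven frequency of wind directions); the synthetic hourly data and the eight 45° sectors are illustrative.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "no2": rng.gamma(4, 5, 8760),            # synthetic hourly NO2 values
    "wdir": rng.uniform(0, 360, 8760),       # hourly wind direction, degrees
    "hour": np.tile(np.arange(24), 365),
})
# Normalize by the mean diurnal profile so sectors sampled mostly at
# certain hours are not biased by the diurnal concentration cycle.
diurnal = df.groupby("hour")["no2"].mean()
df["no2_corr"] = df["no2"] * diurnal.mean() / df["hour"].map(diurnal)
df["sector"] = (df["wdir"] // 45).astype(int)    # eight 45-degree wedges
print(df.groupby("sector")["no2_corr"].mean())   # per-sector annual means
```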

  9. Monitoring Anthropogenic Ocean Sound from Shipping Using an Acoustic Sensor Network and a Compressive Sensing Approach.

    PubMed

    Harris, Peter; Philip, Rachel; Robinson, Stephen; Wang, Lian

    2016-03-22

    Monitoring ocean acoustic noise has been the subject of considerable recent study, motivated by the desire to assess the impact of anthropogenic noise on marine life. A combination of measuring ocean sound using an acoustic sensor network and modelling sources of sound and sound propagation has been proposed as an approach to estimating the acoustic noise map within a region of interest. However, strategies for developing a monitoring network are not well established. In this paper, considerations for designing a network are investigated using a simulated scenario based on the measurement of sound from ships in a shipping lane. Using models for the sources of the sound and for sound propagation, a noise map is calculated and measurements of the noise map by a sensor network within the region of interest are simulated. A compressive sensing algorithm, which exploits the sparsity of the representation of the noise map in terms of the sources, is used to estimate the locations and levels of the sources and thence the entire noise map within the region of interest. It is shown that although the spatial resolution to which the sound sources can be identified is generally limited, estimates of aggregated measures of the noise map can be obtained that are more reliable than those provided by other approaches.
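
    For illustration, a minimal sketch of the compressive-sensing step under simple assumptions: sensor levels y are modelled as y = As, where column j of A is a propagation kernel from candidate source cell j along the lane and s is a sparse vector of source levels; orthogonal matching pursuit then recovers the few active cells. The 1/r spreading kernel is a stand-in for a real propagation model.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit for y ~ A s with k nonzero entries."""
    idx, r = [], y.copy()
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ r))))     # best-matching cell
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        r = y - A[:, idx] @ coef                        # update residual
    s = np.zeros(A.shape[1]); s[idx] = coef
    return s

rng = np.random.default_rng(4)
cells = np.linspace(0, 10_000, 200)                     # discretized lane, m
sensors = rng.uniform(0, 10_000, 25)
A = 1.0 / (np.abs(sensors[:, None] - cells[None, :]) + 100.0)
s_true = np.zeros(200); s_true[[30, 120]] = [5e5, 3e5]  # two ships
y = A @ s_true + 0.01 * rng.standard_normal(25)
print(np.flatnonzero(omp(A, y, k=2)))                   # cells near 30 and 120
```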

  10. Revisiting the radionuclide atmospheric dispersion event of the Chernobyl disaster - modelling sensitivity and data assimilation

    NASA Astrophysics Data System (ADS)

    Roustan, Yelva; Duhanyan, Nora; Bocquet, Marc; Winiarek, Victor

    2013-04-01

    A sensitivity study of the numerical model, as well as an inverse modelling approach applied to the atmospheric dispersion issues after the Chernobyl disaster, are both presented in this paper. On the one hand, the robustness of the source term reconstruction through advanced data assimilation techniques was tested. On the other hand, the classical approaches for sensitivity analysis were enhanced by the use of an optimised forcing field which otherwise is known to be strongly uncertain. The POLYPHEMUS air quality system was used to perform the simulations of radionuclide dispersion. Activity concentrations in air and deposited to the ground of iodine-131, caesium-137 and caesium-134 were considered. The impacts of the implemented parameterizations of the physical processes (dry and wet deposition, vertical turbulent diffusion), of the forcing fields (meteorology and source terms) and of the numerical configuration (horizontal resolution) were investigated in the sensitivity study of the model. A four-dimensional variational scheme (4D-Var) based on the approximate adjoint of the chemistry transport model was used to invert the source term. The data assimilation is performed with measurements of activity concentrations in air extracted from the Radioactivity Environmental Monitoring (REM) database. For most of the investigated configurations of the sensitivity study, the statistics comparing the model results to the field measurements of concentrations in air are clearly improved when using a reconstructed source term. For the ground-deposited concentrations, an improvement is only seen in the case of a satisfactorily modelled episode. Through these studies, the source term and the meteorological fields are shown to have a major impact on the activity concentrations in air. These studies also reinforce the case for using a reconstructed source term instead of the usual estimated one. A more detailed parameterization of the deposition process also seems able to improve the simulation results. For deposited activities the results are more complex, probably due to a strong sensitivity to some of the meteorological fields, which remain quite uncertain.

  11. Toward a Mechanistic Source Term in Advanced Reactors: A Review of Past U.S. SFR Incidents, Experiments, and Analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bucknor, Matthew; Brunett, Acacia J.; Grabaskas, David

    In 2015, as part of a Regulatory Technology Development Plan (RTDP) effort for sodium-cooled fast reactors (SFRs), Argonne National Laboratory investigated the current state of knowledge of source term development for a metal-fueled, pool-type SFR. This paper provides a summary of past domestic metal-fueled SFR incidents and experiments and highlights information relevant to source term estimations that were gathered as part of the RTDP effort. The incidents described in this paper include fuel pin failures at the Sodium Reactor Experiment (SRE) facility in July of 1959, the Fermi I meltdown that occurred in October of 1966, and the repeated melting of a fuel element within an experimental capsule at the Experimental Breeder Reactor II (EBR-II) from November 1967 to May 1968. The experiments described in this paper include the Run-Beyond-Cladding-Breach tests that were performed at EBR-II in 1985 and a series of severe transient overpower tests conducted at the Transient Reactor Test Facility (TREAT) in the mid-1980s.

  12. Emergency Preparedness technology support to the Health and Safety Executive (HSE), Nuclear Installations Inspectorate (NII) of the United Kingdom. Appendix A

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Kula, K.R.

    1994-03-01

    The Nuclear Installations Inspectorate (NII) of the United Kingdom (UK) suggested the use of an accident progression logic model method developed by Westinghouse Savannah River Company (WSRC) and Science Applications International Corporation (SAIC) for K Reactor to predict the magnitude and timing of radioactivity releases (the source term). Predicted releases are output from the personal computer-based model in a level-of-confidence format. Additional technical discussions eventually led to a request from the NII to develop a proposal for assembling a similar technology to predict source terms for the UK's advanced gas-cooled reactor (AGR) type. To respond to this request, WSRC is submitting a proposal to provide contractual assistance as specified in the Scope of Work. The work will produce, document, and transfer technology associated with a Decision-Oriented Source Term Estimator for Emergency Preparedness (DOSE-EP) for the NII to apply to AGRs in the United Kingdom. This document, Appendix A, is a part of this proposal.

  13. CONCENTRATIONS AND ESTIMATED LOADS OF NITROGEN CONTRIBUTED BY TWO ADJACENT WETLAND STREAMS WITH DIFFERENT FLOW-SOURCE TERMS IN WATKINSVILLE, GA

    EPA Science Inventory

    Inorganic, fixed nitrogen from agricultural settings often is introduced to first-order streams via surface runoff and shallow ground-water flow. Best management practices for limiting the flux of fixed N to surface waters often include buffers such as wetlands. However, the eff...

  14. A simulated approach to estimating PM10 and PM2.5 concentrations downwind from cotton gins

    USDA-ARS?s Scientific Manuscript database

    Cotton gins are required to obtain operating permits from state air pollution regulatory agencies (SAPRA), which regulate the amount of particulate matter that can be emitted. Industrial Source Complex Short Term version 3 (ISCST3) is the Gaussian dispersion model currently used by some SAPRAs to pr...

  15. Hyperedge bundling: Data, source code, and precautions to modeling-accuracy bias to synchrony estimates.

    PubMed

    Wang, Sheng H; Lobier, Muriel; Siebenhühner, Felix; Puoliväli, Tuomas; Palva, Satu; Palva, J Matias

    2018-06-01

    It has not been well documented that MEG/EEG functional connectivity graphs estimated with zero-lag-free interaction metrics are severely confounded by a multitude of spurious interactions (SI), i.e., the false-positive "ghosts" of true interactions [1], [2]. These SI are caused by the multivariate linear mixing between sources, and thus they pose a severe challenge to the validity of connectivity analysis. Due to the complex nature of signal mixing and the SI problem, there is a need to intuitively demonstrate how the SI are discovered and how they can be attenuated using a novel approach that we termed hyperedge bundling. Here we provide a dataset and software with which readers can perform simulations in order to better understand the theory and the solution to SI. We include the supplementary material of [1] that is not directly relevant to hyperedge bundling per se but reflects important properties of the MEG source model and the functional connectivity graphs. For example, the gyri of dorsal-lateral cortices are the most accurately modeled areas; the sulci of the inferior temporal and frontal cortices and the insula have the least modeling accuracy. Importantly, we found that the interaction estimates are heavily biased by the modeling accuracy between regions, which means the estimates cannot be straightforwardly interpreted as the coupling between brain regions. This raises a red flag: the conventional method of thresholding graphs by estimate values is rather suboptimal, because the measured topology of the graph reflects the geometric properties of the source model instead of the cortical interactions under investigation.

  16. Analysis of accident sequences and source terms at waste treatment and storage facilities for waste generated by U.S. Department of Energy Waste Management Operations, Volume 3: Appendixes C-H

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mueller, C.; Nabelssi, B.; Roglans-Ribas, J.

    1995-04-01

    This report contains the Appendices for the Analysis of Accident Sequences and Source Terms at Waste Treatment and Storage Facilities for Waste Generated by the U.S. Department of Energy Waste Management Operations. The main report documents the methodology, computational framework, and results of facility accident analyses performed as a part of the U.S. Department of Energy (DOE) Waste Management Programmatic Environmental Impact Statement (WM PEIS). The accident sequences potentially important to human health risk are specified, their frequencies are assessed, and the resultant radiological and chemical source terms are evaluated. A personal computer-based computational framework and database have been developed that provide these results as input to the WM PEIS for calculation of human health risk impacts. This report summarizes the accident analyses and aggregates the key results for each of the waste streams. Source terms are estimated and results are presented for each of the major DOE sites and facilities by WM PEIS alternative for each waste stream. The appendices identify the potential atmospheric release of each toxic chemical or radionuclide for each accident scenario studied. They also provide discussion of specific accident analysis data and guidance used or consulted in this report.

  17. Assessing risk of non-compliance of phosphorus standards for lakes in England and Wales

    NASA Astrophysics Data System (ADS)

    Duethmann, D.; Anthony, S.; Carvalho, L.; Spears, B.

    2009-04-01

    High population densities, use of inorganic fertilizer and intensive livestock agriculture have increased phosphorus loads to lakes, and accelerated eutrophication is a major pressure for many lakes. The EC Water Framework Directive (WFD) requires that good chemical and ecological quality is restored in all surface water bodies by 2015. Total phosphorus (TP) standards for lakes in England and Wales have been agreed recently, and our aim was to estimate what percentage of lakes in England and Wales is at risk of failing these standards. With measured lake phosphorus concentrations only being available for a small number of lakes, such an assessment had to be model based. The study also makes a source apportionment of phosphorus inputs into lakes. Phosphorus loads were estimated from a range of sources including agricultural loads, sewage effluents, septic tanks, diffuse urban sources, atmospheric deposition, groundwater and bank erosion. Lake phosphorus concentrations were predicted using the Vollenweider model, and the model framework was satisfactorily tested against available observed lake concentration data. Even though predictions for individual lakes remain uncertain, results for a population of lakes are considered sufficiently robust. A scenario analysis was carried out to investigate to what extent reductions in phosphorus loads would increase the number of lakes achieving good ecological status in terms of TP standards. Applying the model to all lakes in England and Wales greater than 1 ha, it was calculated that under current conditions roughly two thirds of the lakes would fail good ecological status with respect to phosphorus. According to our estimates, agricultural phosphorus loads are the most frequent dominant source for the majority of catchments, but diffuse urban runoff is also important in many lakes. Sewage effluents are the most frequent dominant source for large lake catchments greater than 100 km². An evaluation in terms of total load can be misleading as to which sources need to be tackled by catchment management for most of the lakes: for example, sewage effluents are responsible for the majority of the total load but are the dominant source in only a small number of larger lake catchments. If loads from all sources were halved, this would potentially increase the number of complying lakes to two thirds, but would require substantial measures to reduce phosphorus inputs to lakes. For agriculture, the required changes would have to go beyond improvements in agricultural practice and would need to include reducing the intensity of land use. The time required for many lakes to respond to reduced nutrient loading is likely to extend beyond the current timelines of the WFD due to internal loading and biological resistances.
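
    For reference, a commonly used form of the Vollenweider-type relationship between load and in-lake concentration is sketched below; the paper does not state which variant was applied, so the exact form here is illustrative.

```latex
\overline{TP}_{\mathrm{lake}}
  = \frac{L_{TP}}{q_s\,\bigl(1 + \sqrt{\tau_w}\bigr)},
\qquad
q_s = \frac{\bar{z}}{\tau_w}
```

    where L_TP is the areal phosphorus load (mg m⁻² yr⁻¹), q_s the areal hydraulic load (m yr⁻¹), z̄ the mean depth (m) and τ_w the water residence time (yr).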

  1. Bayesian inversion of seismic and electromagnetic data for marine gas reservoir characterization using multi-chain Markov chain Monte Carlo sampling

    NASA Astrophysics Data System (ADS)

    Ren, Huiying; Ray, Jaideep; Hou, Zhangshuan; Huang, Maoyi; Bao, Jie; Swiler, Laura

    2017-12-01

    In this study we developed an efficient Bayesian inversion framework for interpreting marine seismic Amplitude Versus Angle and Controlled-Source Electromagnetic data for marine reservoir characterization. The framework uses a multi-chain Markov-chain Monte Carlo sampler, which is a hybrid of the DiffeRential Evolution Adaptive Metropolis and Adaptive Metropolis samplers. The inversion framework is tested by estimating reservoir-fluid saturations and porosity based on marine seismic and Controlled-Source Electromagnetic data. The multi-chain Markov-chain Monte Carlo sampler is scalable in terms of the number of chains, and is useful for computationally demanding Bayesian model calibration in scientific and engineering problems. As a demonstration, the approach is used to efficiently and accurately estimate the porosity and saturations in a representative layered synthetic reservoir. The results indicate that the joint inversion of seismic Amplitude Versus Angle and Controlled-Source Electromagnetic data provides better estimation of reservoir saturations than the seismic Amplitude Versus Angle-only inversion, especially for the parameters in deep layers. The performance of the inversion approach for various levels of noise in observational data was evaluated; reasonable estimates can be obtained with noise levels up to 25%. Sampling efficiency due to the use of multiple chains was also checked and was found to have almost linear scalability.
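
    A minimal single-chain adaptive Metropolis sketch, the "AM" half of the hybrid sampler described above (the DiffeRential Evolution half additionally builds proposals from differences between states of parallel chains, which this sketch omits); the toy two-parameter Gaussian posterior stands in for the porosity/saturation posterior.

```python
import numpy as np

def adaptive_metropolis(log_post, x0, n=5000, eps=1e-6):
    """Metropolis sampler whose proposal covariance adapts to the chain."""
    x = np.asarray(x0, dtype=float)
    d = x.size
    lp = log_post(x)
    chain = [x.copy()]
    cov = np.eye(d)
    rng = np.random.default_rng(5)
    for i in range(n):
        if i > 100 and i % 50 == 0:                    # periodic adaptation
            cov = np.cov(np.array(chain).T) + eps * np.eye(d)
        prop = rng.multivariate_normal(x, (2.38 ** 2 / d) * cov)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:       # Metropolis accept
            x, lp = prop, lp_prop
        chain.append(x.copy())
    return np.array(chain)

# Toy posterior: a correlated 2-D Gaussian.
P = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))
samples = adaptive_metropolis(lambda t: -0.5 * t @ P @ t, [3.0, -3.0])
print(samples[1000:].mean(axis=0))   # ~ [0, 0] after burn-in
```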

  2. Design of an Air Pollution Monitoring Campaign in Beijing for Application to Cohort Health Studies.

    PubMed

    Vedal, Sverre; Han, Bin; Xu, Jia; Szpiro, Adam; Bai, Zhipeng

    2017-12-15

    No cohort studies in China on the health effects of long-term air pollution exposure have employed exposure estimates at the fine spatial scales desirable for cohort studies with individual-level health outcome data. Here we assess an array of modern air pollution exposure estimation approaches for assigning within-city exposure estimates in Beijing for individual pollutants and pollutant sources to individual members of a cohort. Issues considered in selecting specific monitoring data or new monitoring campaigns include: needed spatial resolution, exposure measurement error and its impact on health effect estimates, spatial alignment and compatibility with the cohort, and feasibility and expense. Sources of existing data largely include administrative monitoring data, predictions from air dispersion or chemical transport models and remote sensing (specifically satellite) data. New air monitoring campaigns include additional fixed site monitoring, snapshot monitoring, passive badge or micro-sensor saturation monitoring and mobile monitoring, as well as combinations of these. Each of these has relative advantages and disadvantages. It is concluded that a campaign in Beijing that at least includes a mobile monitoring component, when coupled with currently available spatio-temporal modeling methods, should be strongly considered. Such a campaign is economical and capable of providing the desired fine-scale spatial resolution for pollutants and sources.

  3. Mapping water availability, projected use and cost in the western United States

    NASA Astrophysics Data System (ADS)

    Tidwell, Vincent C.; Moreland, Barbara D.; Zemlick, Katie M.; Roberts, Barry L.; Passell, Howard D.; Jensen, Daniel; Forsgren, Christopher; Sehlke, Gerald; Cook, Margaret A.; King, Carey W.; Larsen, Sara

    2014-05-01

    New demands for water can be satisfied through a variety of source options. In some basins surface and/or groundwater may be available through permitting with the state water management agency (termed unappropriated water); alternatively, water might be purchased and transferred out of its current use to another (termed appropriated water), or non-traditional water sources can be captured and treated (e.g., wastewater). The relative availability and cost of each source are key factors in the development decision. Unfortunately, these measures are location dependent, with no consistent or comparable set of data available for evaluating competing water sources. With the help of western water managers, water availability was mapped for over 1200 watersheds throughout the western US. Five water sources were individually examined, including unappropriated surface water, unappropriated groundwater, appropriated water, municipal wastewater and brackish groundwater. Also mapped was the projected change in consumptive water use from 2010 to 2030. The associated costs to acquire, convey and treat the water, as necessary, were estimated for each of the five sources. These metrics were developed to support regional water planning and policy analysis, with initial application to electric transmission planning in the western US.

  4. Estimation of the time-dependent radioactive source-term from the Fukushima nuclear power plant accident using atmospheric transport modelling

    NASA Astrophysics Data System (ADS)

    Schoeppner, M.; Plastino, W.; Budano, A.; De Vincenzi, M.; Ruggieri, F.

    2012-04-01

    Several nuclear reactors at the Fukushima Dai-ichi power plant were severely damaged by the Tōhoku earthquake and the subsequent tsunami in March 2011. Due to the extremely difficult on-site situation it has not been possible to directly determine the emissions of radioactive material. However, during the following days and weeks radionuclides including caesium-137 and iodine-131 were detected at monitoring stations throughout the world. Atmospheric transport models are able to simulate the worldwide dispersion of particles according to location, time and meteorological conditions following the release. The Lagrangian atmospheric transport model Flexpart is used by many authorities and has been proven to make valid predictions in this regard. The Flexpart software was first ported to a local cluster computer at the Grid Lab of INFN and the Department of Physics of University of Roma Tre (Rome, Italy) and subsequently to the European Mediterranean Grid (EUMEDGRID). With this computing power available it has been possible to simulate the transport of particles originating from the Fukushima Dai-ichi plant site. Using the time series of the sampled concentration data and the assumption that the Fukushima accident was the only source of these radionuclides, the time-dependent source term has been estimated for the fourteen days following the accident using the atmospheric transport model. A reasonable agreement has been obtained between the modelling results and the estimated radionuclide release rates from the Fukushima accident.
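
    As an illustration of the inversion step in such studies, the sketch below assumes the sampled concentrations are linear in the hourly release rates, y = Mq, with M a source-receptor matrix (here a random matrix stands in for the transport-model sensitivities), and enforces source positivity with non-negative least squares. A real application would add regularization and an error model.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(6)
n_obs, n_hours = 120, 48
# Column j of M: modelled concentration response at each sampler to a
# unit release in hour j (synthetic stand-in for model sensitivities).
M = rng.lognormal(mean=-2.0, sigma=1.0, size=(n_obs, n_hours))
q_true = np.zeros(n_hours); q_true[10:20] = 5.0     # a 10-hour release pulse
y = M @ q_true + 0.05 * rng.standard_normal(n_obs)  # noisy observations
q_est, _ = nnls(M, y)                               # positivity-constrained fit
print(q_est.round(1))                               # nonzero near hours 10-19
```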

  5. Estimating watershed degradation over the last century and its impact on water-treatment costs for the world’s large cities

    PubMed Central

    McDonald, Robert I.; Weber, Katherine F.; Padowski, Julie; Boucher, Tim; Shemie, Daniel

    2016-01-01

    Urban water systems are impacted by land use within their source watersheds, as it affects raw water quality and thus the costs of water treatment. However, global estimates of the effect of land cover change on urban water-treatment costs have been hampered by a lack of global information on urban source watersheds. Here, we use a unique map of the urban source watersheds for 309 large cities (population > 750,000), combined with long-term data on anthropogenic land-use change in their source watersheds and data on water-treatment costs. We show that anthropogenic activity is highly correlated with sediment and nutrient pollution levels, which is in turn highly correlated with treatment costs. Over our study period (1900–2005), median population density has increased by a factor of 5.4 in urban source watersheds, whereas ranching and cropland use have increased by a factor of 3.4 and 2.0, respectively. Nearly all (90%) of urban source watersheds have had some level of watershed degradation, with the average pollutant yield of urban source watersheds increasing by 40% for sediment, 47% for phosphorus, and 119% for nitrogen. We estimate the degradation of watersheds over our study period has impacted treatment costs for 29% of cities globally, with operation and maintenance costs for impacted cities increasing on average by 53 ± 5% and replacement capital costs increasing by 44 ± 14%. We discuss why this widespread degradation might be occurring, and strategies cities have used to slow natural land cover loss. PMID:27457941

  6. Selective advantage of implementing optimal contributions selection and timescales for the convergence of long-term genetic contributions.

    PubMed

    Howard, David M; Pong-Wong, Ricardo; Knap, Pieter W; Kremer, Valentin D; Woolliams, John A

    2018-05-10

    Optimal contributions selection (OCS) provides animal breeders with a framework for maximising genetic gain at a predefined rate of inbreeding. Simulation studies have indicated that the selective advantage of OCS derives from breeding decisions being more closely aligned with estimates of the Mendelian sampling terms of selection candidates, rather than with estimated breeding values (EBV). This study represents the first attempt to assess the source of the selective advantage provided by OCS using a commercial pig population, by testing three hypotheses: (1) OCS places more emphasis on estimated Mendelian sampling terms than on EBV in determining which animals are selected as parents; (2) OCS places more emphasis on estimated Mendelian sampling terms than on EBV in determining which of those parents are selected to make a long-term genetic contribution (r); and (3) OCS places more emphasis on estimated Mendelian sampling terms than on EBV in determining the magnitude of r. The population studied also provided an opportunity to investigate the convergence of r over time. Selection intensity limited the number of males available for analysis, but females provided some evidence that the selective advantage of the OCS algorithm resulted from greater weight being placed on the estimated Mendelian sampling terms during decision-making. Male r were found to converge initially at a faster rate than female r, with approximately 90% convergence achieved within seven generations across both sexes. This study of commercial data provides some support to results from theoretical and simulation studies that the source of the selective advantage of OCS is the Mendelian sampling terms. The implication that genomic selection (GS) improves the estimation of Mendelian sampling terms should allow even greater genetic gains at a predefined rate of inbreeding, once the synergistic benefits of combining OCS and GS are realised.

  7. Microseismic source locations with deconvolution migration

    NASA Astrophysics Data System (ADS)

    Wu, Shaojiang; Wang, Yibo; Zheng, Yikang; Chang, Xu

    2018-03-01

    Identifying and locating microseismic events are critical problems in hydraulic fracturing monitoring for unconventional resources exploration. In contrast to active seismic data, microseismic data are usually recorded with unknown source excitation time and source location. In this study, we introduce deconvolution migration by combining deconvolution interferometry with interferometric cross-correlation migration (CCM). This method avoids the need for the source excitation time and enhances both the spatial resolution and robustness by eliminating the squared source-wavelet term from CCM. The proposed algorithm is divided into the following three steps: (1) generate the virtual gathers by deconvolving the master trace with all other traces in the microseismic gather to remove the unknown excitation time; (2) migrate the virtual gather to obtain a single image of the source location; and (3) stack all of these images together to obtain the final image estimate of the source location. We test the proposed method on a complex synthetic data set and a field data set from surface hydraulic fracturing monitoring, and compare the results with those obtained by interferometric CCM. The results demonstrate that the proposed method can obtain a 50 per cent higher spatial resolution image of the source location, and a more robust estimation with smaller localization errors, especially in the presence of velocity model errors. This method is also beneficial for source mechanism inversion and global seismology applications.
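
    A minimal sketch of step (1), the deconvolution-interferometry step, assuming a common wavelet and water-level regularization: deconvolving each trace by a master trace cancels the unknown source wavelet and excitation time, leaving relative delays that the migration step would then focus.

```python
import numpy as np

def deconvolve_gather(traces, master, wl=1e-2):
    """Frequency-domain deconvolution of each trace by the master trace."""
    F = np.fft.rfft(traces, axis=1)
    G = np.fft.rfft(master)
    denom = np.abs(G) ** 2
    denom = np.maximum(denom, wl * denom.max())       # water-level stabilization
    return np.fft.irfft(F * np.conj(G) / denom, traces.shape[1], axis=1)

rng = np.random.default_rng(7)
wavelet = np.diff(np.exp(-np.linspace(-3, 3, 40) ** 2))   # arbitrary wavelet
traces = np.zeros((5, 512))
for k, delay in enumerate([60, 75, 90, 110, 140]):         # unknown arrivals
    traces[k, delay:delay + 39] = wavelet
virtual = deconvolve_gather(traces, master=traces[0])
print(np.argmax(virtual, axis=1))    # relative delays: [0, 15, 30, 50, 80]
```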

  8. Revisiting the social cost of carbon

    PubMed Central

    Nordhaus, William D.

    2017-01-01

    The social cost of carbon (SCC) is a central concept for understanding and implementing climate change policies. This term represents the economic cost caused by an additional ton of carbon dioxide emissions or its equivalent. The present study presents updated estimates based on a revised DICE model (Dynamic Integrated model of Climate and the Economy). The study estimates that the SCC is $31 per ton of CO2 in 2010 US$ for the current period (2015). For the central case, the real SCC grows at 3% per year over the period to 2050. The paper also compares the estimates with those from other sources. PMID:28143934

  9. Rebound of a coal tar creosote plume following partial source zone treatment with permanganate.

    PubMed

    Thomson, N R; Fraser, M J; Lamarche, C; Barker, J F; Forsey, S P

    2008-11-14

    The long-term management of dissolved plumes originating from a coal tar creosote source is a technical challenge. For some sites stabilization of the source may be the best practical solution to decrease the contaminant mass loading to the plume and associated off-site migration. At the bench-scale, the deposition of manganese oxides, a permanganate reaction byproduct, has been shown to cause pore plugging and the formation of a manganese oxide layer adjacent to the non-aqueous phase liquid creosote, which reduces post-treatment mass transfer and hence mass loading from the source. The objective of this study was to investigate the potential of partial permanganate treatment to reduce the ability of a coal tar creosote source zone to generate a multi-component plume at the pilot-scale over both the short-term (weeks to months) and the long-term (years) at a site where >10 years of comprehensive synoptic plume baseline data are available. A series of preliminary bench-scale experiments was conducted to support this pilot-scale investigation. The results from the bench-scale experiments indicated that if sufficient mass removal of the reactive compounds is achieved, then the effective solubility, aqueous concentration and rate of mass removal of the more abundant non-reactive coal tar creosote compounds such as biphenyl and dibenzofuran can be increased. Manganese oxide formation and deposition caused an order-of-magnitude decrease in hydraulic conductivity. Approximately 125 kg of permanganate were delivered into the pilot-scale source zone over 35 days, and based on mass balance estimates <10% of the initial reactive coal tar creosote mass in the source zone was oxidized. Mass discharge estimated at a down-gradient fence line indicated a >35% reduction for all monitored compounds except biphenyl, dibenzofuran and fluoranthene 150 days after treatment, which is consistent with the bench-scale experimental results. Pre- and post-treatment soil core data indicated a highly variable and random spatial distribution of mass within the source zone and provided no insight into the mass removed for any of the monitored species. The down-gradient plume was monitored approximately 1, 2 and 4 years following treatment. The data collected at 1 and 2 years post-treatment showed a decrease in mass discharge (10 to 60%) and/or total plume mass (0 to 55%); however, by 4 years post-treatment there was a rebound in both mass discharge and total plume mass for all monitored compounds to pre-treatment values or higher. The variability of the data collected was too large to resolve subtle changes in plume morphology, particularly near the source zone, that would provide insight into how the manganese oxides formed and deposited during treatment affected mass transfer and/or flow by-passing. Overall, the results from this pilot-scale investigation indicate that there was a significant but short-term (months) reduction of mass emanating from the source zone as a result of permanganate treatment, but there was no long-term (years) impact on the ability of this coal tar creosote source zone to generate a multi-component plume.

  10. User's guide for RAM. Volume II. Data preparation and listings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, D.B.; Novak, J.H.

    1978-11-01

    The information presented in this user's guide is directed to air pollution scientists having an interest in applying air quality simulation models. RAM is a method of estimating short-term dispersion using the Gaussian steady-state model. These algorithms can be used for estimating air quality concentrations of relatively nonreactive pollutants for averaging times from an hour to a day from point and area sources. The algorithms are applicable for locations with level or gently rolling terrain where a single wind vector for each hour is a good approximation to the flow over the source area considered. Calculations are performed for each hour. Hourly meteorological data required are wind direction, wind speed, temperature, stability class, and mixing height. Emission information required of point sources consists of source coordinates, emission rate, physical height, stack diameter, stack gas exit velocity, and stack gas temperature. Emission information required of area sources consists of southwest corner coordinates, source side length, total area emission rate and effective area source-height. Computation time is kept to a minimum by the manner in which concentrations from area sources are estimated using a narrow plume hypothesis and using the area source squares as given rather than breaking down all sources into an area of uniform elements. Options are available to the user to allow use of three different types of receptor locations: (1) those whose coordinates are input by the user, (2) those whose coordinates are determined by the model and are downwind of significant point and area sources where maxima are likely to occur, and (3) those whose coordinates are determined by the model to give good area coverage of a specific portion of the region. Computation time is also decreased by keeping the number of receptors to a minimum. Volume II presents RAM example outputs, typical run streams, variable glossaries, and Fortran source codes.
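
    For illustration, a minimal Gaussian steady-state plume calculation of the kind RAM performs each hour for a point source, with ground reflection included; the power-law sigma curves are crude stand-ins for RAM's stability-class parameterizations.

```python
import numpy as np

def ground_conc(Q, u, x, y, H, a=0.08, b=0.06):
    """Ground-level concentration of a Gaussian plume with ground reflection.

    Q: emission rate (g/s), u: wind speed (m/s), x: downwind distance (m),
    y: crosswind offset (m), H: effective stack height (m).
    """
    sig_y, sig_z = a * x ** 0.9, b * x ** 0.85     # illustrative sigma curves
    return (Q / (np.pi * u * sig_y * sig_z)
            * np.exp(-0.5 * (y / sig_y) ** 2)
            * np.exp(-0.5 * (H / sig_z) ** 2))

# 10 g/s release, 4 m/s wind, 50 m effective stack height, 2 km downwind:
print(ground_conc(Q=10.0, u=4.0, x=2000.0, y=0.0, H=50.0), "g/m^3")
```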

  11. Evaluation of the site effect with Heuristic Methods

    NASA Astrophysics Data System (ADS)

    Torres, N. N.; Ortiz-Aleman, C.

    2017-12-01

    The seismic site response in an area depends mainly on the local geological and topographical conditions. Estimating variations in ground motion can contribute significantly to seismic hazard assessment, helping to reduce human and economic losses. Site response estimation can be posed as a parameterized inversion problem that allows source and path effects to be separated. Generalized inversion (Field and Jacob, 1995) is one of the alternative methods to estimate the local seismic response, and involves solving a strongly non-linear multiparametric problem. In this work, local seismic response was estimated using global optimization methods (Genetic Algorithms and Simulated Annealing), which allowed us to increase the range of explored solutions in a nonlinear search, as compared to other conventional linear methods. Using VEOX Network velocity records collected from August 2007 to March 2009, the source, path and site parameters corresponding to the S-wave amplitude spectra of the velocity records are estimated. We can establish that the parameters resulting from this simultaneous inversion approach show excellent agreement, not only in terms of the adjustment between observed and calculated spectra, but also when compared to previous work by several authors.

  12. Influence of Iterative Reconstruction Algorithms on PET Image Resolution

    NASA Astrophysics Data System (ADS)

    Karpetas, G. E.; Michail, C. M.; Fountos, G. P.; Valais, I. G.; Nikolopoulos, D.; Kandarakis, I. S.; Panayiotakis, G. S.

    2015-09-01

    The aim of the present study was to assess the image quality of PET scanners through a thin layer chromatography (TLC) plane source. The source was simulated using a previously validated Monte Carlo model. The model was developed using the GATE MC package, and reconstructed images were obtained with the STIR software for tomographic image reconstruction. The simulated PET scanner was the GE DiscoveryST. A plane source consisting of a TLC plate was simulated as a layer of silica gel on an aluminum (Al) foil substrate, immersed in an 18F-FDG bath solution (1 MBq). Image quality was assessed in terms of the modulation transfer function (MTF). MTF curves were estimated from transverse reconstructed images of the plane source. Images were reconstructed with the maximum likelihood estimation (MLE) OSMAPOSL algorithm, the ordered subsets separable paraboloidal surrogate (OSSPS) algorithm, the median root prior (MRP) algorithm, and OSMAPOSL with a quadratic prior. OSMAPOSL reconstruction was assessed using fixed subsets and various iterations, as well as various beta (hyper)parameter values. MTF values were found to increase with increasing iterations. MTF also improved with lower beta values. The simulated PET evaluation method, based on the TLC plane source, can be useful in the resolution assessment of PET scanners.
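
    A minimal sketch of MTF estimation from a plane-source image under simple assumptions: average the reconstructed slice along the source direction to obtain a line spread function (LSF), then take the normalized magnitude of its Fourier transform; a Gaussian profile stands in for the reconstructed image.

```python
import numpy as np

def mtf_from_lsf(lsf, pixel_mm):
    """MTF as the normalized magnitude spectrum of a line spread function."""
    spec = np.abs(np.fft.rfft(lsf - lsf.min()))
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_mm)      # cycles per mm
    return freqs, spec / spec[0]                       # normalize to f = 0

x = np.arange(-64, 64) * 1.5                           # 1.5 mm pixels
lsf = np.exp(-0.5 * (x / 4.0) ** 2)                    # ~4 mm sigma profile
freqs, mtf = mtf_from_lsf(lsf, pixel_mm=1.5)
print(freqs[mtf > 0.1].max(), "cycles/mm at MTF = 0.1")
```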

  13. Toward a Mechanistic Source Term in Advanced Reactors: Characterization of Radionuclide Transport and Retention in a Sodium Cooled Fast Reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brunett, Acacia J.; Bucknor, Matthew; Grabaskas, David

    A vital component of the U.S. reactor licensing process is an integrated safety analysis in which a source term representing the release of radionuclides during normal operation and accident sequences is analyzed. Historically, source term analyses have utilized bounding, deterministic assumptions regarding radionuclide release. However, advancements in technical capabilities and the knowledge state have enabled the development of more realistic and best-estimate retention and release models such that a mechanistic source term assessment can be expected to be a required component of future licensing of advanced reactors. Recently, as part of a Regulatory Technology Development Plan effort for sodium cooled fast reactors (SFRs), Argonne National Laboratory has investigated the current state of knowledge of potential source terms in an SFR via an extensive review of previous domestic experiments, accidents, and operation. As part of this work, the significant sources and transport processes of radionuclides in an SFR have been identified and characterized. This effort examines all stages of release and source term evolution, beginning with release from the fuel pin and ending with retention in containment. Radionuclide sources considered in this effort include releases originating both in-vessel (e.g. in-core fuel, primary sodium, cover gas cleanup system, etc.) and ex-vessel (e.g. spent fuel storage, handling, and movement). Releases resulting from a primary sodium fire are also considered as a potential source. For each release group, dominant transport phenomena are identified and qualitatively discussed. The key product of this effort was the development of concise, inclusive diagrams that illustrate the release and retention mechanisms at a high level, where unique schematics have been developed for in-vessel, ex-vessel and sodium fire releases. This review effort has also found that despite the substantial range of phenomena affecting radionuclide release, the current state of knowledge is extensive, and in most areas may be sufficient. Several knowledge gaps were identified, such as uncertainty in release from molten fuel and availability of thermodynamic data for lanthanides and actinides in liquid sodium. However, the overall findings suggest that high retention rates can be expected within the fuel and primary sodium for all radionuclides other than noble gases.

  16. Preventing Bandwidth Abuse at the Router through Sending Rate Estimate-based Active Queue Management

    DTIC Science & Technology

    2007-06-01

    behavior is growing in the Internet. These non-responsive sources can monopolize network bandwidth and starve the “congestion friendly” flows. Without...unnecessarily complex because most of the flows in the Internet are short flows usually termed as “web mice” [7]. Moreover, having a separate queue for each

  17. The 2016 Al-Mishraq sulphur plant fire: Source and health risk area estimation

    NASA Astrophysics Data System (ADS)

    Björnham, Oscar; Grahn, Håkan; von Schoenberg, Pontus; Liljedahl, Birgitta; Waleij, Annica; Brännström, Niklas

    2017-11-01

    On October 20, 2016, Daesh (Islamic State) set fire to the Al-Mishraq sulphur production site as the battle of Mosul in northern Iraq intensified. An extensive plume of toxic sulphur dioxide and hydrogen sulphide caused numerous casualties. The intensity of the SO2 release reached levels comparable to minor volcanic eruptions, and the plume was observed by several satellites. By investigating measurement data from instruments on the MetOp-A, MetOp-B, Aura, and Suomi satellites, we estimated the time-dependent source term at 161 kilotonnes of sulphur dioxide released into the atmosphere over seven days. A long-range dispersion model was used to simulate the atmospheric transport over the Middle East. The ground level concentrations predicted by the simulation were compared with observations from the Turkey National Air Quality Monitoring Network. Finally, a probit analysis of the simulated data provided an estimate of the health risk area, which was compared to reported urgent medical treatments.
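
    The health-risk area step rests on a probit dose-response model. A minimal sketch follows; the toxic-load form and the coefficients a, b, n are generic placeholders for an SO2-like irritant, not the study's parameters.

        import numpy as np
        from scipy.stats import norm

        def probit_casualty_fraction(conc_mg_m3, minutes, a=-15.0, b=2.0, n=1.0):
            """Fraction of an exposed population affected, via a classic probit model.

            Pr = a + b * ln(C**n * t) (toxic load); response = Phi(Pr - 5).
            Coefficients a, b, n are placeholders, not the study's values.
            """
            toxic_load = (conc_mg_m3 ** n) * minutes
            pr = a + b * np.log(toxic_load)
            return norm.cdf(pr - 5.0)

        print(probit_casualty_fraction(50.0, 60.0))   # placeholder exposure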

  18. Using recorded sound spectra profile as input data for real-time short-term urban road-traffic-flow estimation.

    PubMed

    Torija, Antonio J; Ruiz, Diego P

    2012-10-01

    Road traffic has a heavy impact on the urban sound environment, constituting the main source of noise and widely dominating its spectral composition. In this context, our research investigates the use of recorded sound spectra as input data for the development of real-time short-term road traffic flow estimation models. For this, a series of models based on multilayer perceptron neural networks, multiple linear regression, and the Fisher linear discriminant were implemented to estimate road traffic flow as well as to classify it according to the composition of heavy vehicles and motorcycles/mopeds. In view of the results, multilayer perceptron-based models using the 50-400 Hz and 1-2.5 kHz frequency ranges as input variables successfully estimated urban road traffic flow, with an average percentage of explained variance of 86%, while classification of the urban road traffic flow achieved an average success rate of 96.1%.
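
    As a rough illustration of the estimation side, a small feed-forward network can be fitted to band-level features. Everything below (the data, band count, and network size) is a synthetic placeholder consistent with the abstract's band selection, not the authors' model.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        # X: rows of band levels restricted to ~50-400 Hz and 1-2.5 kHz (dB);
        # y: vehicles counted in the same short interval. Both are synthetic.
        rng = np.random.default_rng(0)
        X = rng.normal(60.0, 8.0, size=(500, 12))          # 12 spectral bands
        y = 5.0 + 0.8 * X[:, 3] + rng.normal(0, 2, 500)    # fake traffic flow

        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(16,),
                                           max_iter=2000, random_state=0))
        model.fit(X[:400], y[:400])
        print(model.score(X[400:], y[400:]))   # R^2, cf. explained variance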

  19. Moment Tensor Analysis of Shallow Sources

    NASA Astrophysics Data System (ADS)

    Chiang, A.; Dreger, D. S.; Ford, S. R.; Walter, W. R.; Yoo, S. H.

    2015-12-01

    A potential issue for moment tensor inversion of shallow seismic sources is that some moment tensor components have vanishing amplitudes at the free surface, which can result in bias in the moment tensor solution. The effects of the free surface on the stability of the moment tensor method become important as we continue to investigate and improve the capabilities of regional full moment tensor inversion for source-type identification and discrimination. It is important to understand these free surface effects on discriminating shallow explosive sources for nuclear monitoring purposes. They may also be important in natural systems that have shallow seismicity, such as volcanoes and geothermal systems. In this study, we apply the moment tensor based discrimination method to the HUMMING ALBATROSS quarry blasts. These shallow chemical explosions, at approximately 10 m depth and recorded at up to several kilometers' distance, represent rather severe source-station geometry in terms of vanishing traction issues. We show that the method is capable of recovering a predominantly explosive source mechanism, and that the combined waveform and first motion method enables the unique discrimination of these events. Recovering the correct yield using seismic moment estimates from moment tensor inversion remains challenging, but we can begin to put error bounds on our moment estimates using the NSS technique.

  20. Mismatch and G-Stack Modulated Probe Signals on SNP Microarrays

    PubMed Central

    Binder, Hans; Fasold, Mario; Glomb, Torsten

    2009-01-01

    Background Single nucleotide polymorphism (SNP) arrays are important tools widely used for genotyping and copy number estimation. This technology utilizes the specific affinity of fragmented DNA for binding to surface-attached oligonucleotide DNA probes. We analyze the variability of the probe signals of Affymetrix GeneChip SNP arrays as a function of the probe sequence to identify relevant sequence motifs which potentially cause systematic biases in genotyping and copy number estimates. Methodology/Principal Findings The probe design of GeneChip SNP arrays enables us to disentangle different sources of intensity modulation, such as the number of mismatches per duplex; matched and mismatched base pairings, including nearest and next-nearest neighbors; and their position along the probe sequence. The effect of probe sequence was estimated in terms of triple motifs with central matches and mismatches, which include all 256 combinations of possible base pairings. The probe/target interactions on the chip can be decomposed into nearest neighbor contributions which correlate well with free energy terms of DNA/DNA interactions in solution. The effect of mismatches is about twice as large as that of canonical pairings. Runs of guanines (G) and the particular type of mismatched pairings formed in cross-allelic probe/target duplexes constitute sources of systematic biases of the probe signals, with consequences for genotyping and copy number estimates. The poly-G effect seems to be related to the crowded arrangement of probes, which facilitates complex formation between neighboring probes with at least three adjacent G's in their sequence. Conclusions The applied method of “triple-averaging” represents a model-free approach to estimating the mean intensity contributions of different sequence motifs, which can be applied in calibration algorithms to correct signal values for sequence effects. Rules for appropriate sequence corrections are suggested. PMID:19924253
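
    The "triple-averaging" idea is essentially a grouped mean: record each probe's intensity against the central triple motif it carries, then average per motif. A pandas sketch under that reading; the column names and values are invented.

        import pandas as pd

        # Hypothetical table: one row per probe, with its central triple motif
        # and a log-intensity value (placeholder data, not array measurements).
        df = pd.DataFrame({
            "triple_motif": ["ACG", "ACG", "TGC", "TGC", "GGG"],
            "log_intensity": [1.20, 1.10, 0.95, 1.05, 1.60],
        })

        # Mean contribution per motif, usable as a sequence-effect correction.
        motif_means = df.groupby("triple_motif")["log_intensity"].mean()
        corrected = df["log_intensity"] - df["triple_motif"].map(motif_means)
        print(motif_means)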

  1. Comparison of risk estimates for selected diseases and causes of death.

    PubMed

    Merrill, R M; Kessler, L G; Udler, J M; Rasband, G C; Feuer, E J

    1999-02-01

    Lifetime risk estimates of disease are limited by long-term extrapolation of data and are less relevant to individuals who have already lived a period of time without the disease but are approaching the age at which the disease becomes common. In contrast, short-term age-conditional risk estimates, such as the risk of developing a disease in the next 10 years among those alive and free of the disease at a given age, are less restricted by long-term extrapolation of current rates and can present patients with risk information tailored to their age. This study focuses on short-term age-conditional risk estimates for a broad set of important chronic diseases and nondisease causes of death among white and black men and women. The Feuer et al. (1993, Journal of the National Cancer Institute) [15] method was applied to data from a variety of sources to obtain risk estimates for selected cancers, myocardial infarction, diabetes mellitus, multiple sclerosis, Alzheimer's disease, and death from motor vehicle accidents, homicide or legal intervention, and suicide. Acute deaths from suicide, homicide or legal intervention, and fatal motor vehicle accidents dominate the risk picture for persons in their 20s, with only diabetes mellitus and end-stage renal disease therapy (for blacks only) having similar levels of risk in this age range. Late in life, cancer, acute myocardial infarction, Alzheimer's disease, and stroke become most common. The chronic diseases affecting the population later in life represent the most likely diseases someone will face. Several interesting differences in disease and death risks among age-specific race and gender subgroups of the population were derived and reported. Presentation of risk estimates for a broad set of chronic diseases and nondisease causes of death within short-term age ranges among population subgroups provides tailored information that may lead to better-informed prevention, screening, and control behaviors and more efficient allocation of health resources.
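
    A short-term age-conditional risk of this kind can be sketched with a simple discrete life table: accumulate the chance of onset each year while discounting survivors by both onset and competing mortality. The hazards below are invented placeholders, not the study's rates.

        import numpy as np

        def short_term_risk(incidence, mortality, age, horizon=10):
            """P(disease within `horizon` years | alive and disease-free at `age`).

            incidence[a] / mortality[a]: annual hazards of disease onset and of
            death from competing causes at age a (hypothetical inputs).
            """
            surv, risk = 1.0, 0.0
            for a in range(age, age + horizon):
                risk += surv * incidence[a]                       # onset this year
                surv *= (1.0 - incidence[a]) * (1.0 - mortality[a])
            return risk

        inc = np.full(120, 0.002)    # 0.2%/yr onset at every age (placeholder)
        mort = np.full(120, 0.010)   # 1%/yr competing mortality (placeholder)
        print(short_term_risk(inc, mort, age=40))   # ~0.019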

  2. Characterization of air manganese exposure estimates for residents in two Ohio towns

    PubMed Central

    Colledge, Michelle A.; Julian, Jaime R.; Gocheva, Vihra V.; Beseler, Cheryl L.; Roels, Harry A.; Lobdell, Danelle T.; Bowler, Rosemarie M.

    2016-01-01

    This study was conducted to derive receptor-specific outdoor exposure concentrations of total suspended particulate (TSP) and respirable (dae ≤ 10 μm) air manganese (air-Mn) for East Liverpool and Marietta (Ohio) in the absence of facility emissions data, but where long-term air measurements were available. Our “site-surface area emissions method” used U.S. Environmental Protection Agency’s (EPA) AERMOD (AMS/EPA Regulatory Model) dispersion model and air measurement data to estimate concentrations for residential receptor sites in the two communities. Modeled concentrations were used to create ratios between receptor points and calibrated using measured data from local air monitoring stations. Estimated outdoor air-Mn concentrations were derived for individual study subjects in both towns. The mean estimated long-term air-Mn exposure levels for total suspended particulate were 0.35 μg/m3 (geometric mean [GM]) and 0.88 μg/m3 (arithmetic mean [AM]) in East Liverpool (range: 0.014–6.32 μg/m3) and 0.17 μg/m3 (GM) and 0.21 μg/m3 (AM) in Marietta (range: 0.03–1.61 μg/m3). Modeled results compared well with averaged ambient air measurements from local air monitoring stations. Exposure to respirable Mn particulate matter (PM10; PM <10 μm) was higher in Marietta residents. Implications Few available studies evaluate long-term health outcomes from inhalational manganese (Mn) exposure in residential populations, due in part to challenges in measuring individual exposures. Local long-term air measurements provide the means to calibrate models used in estimating long-term exposures. Furthermore, this combination of modeling and ambient air sampling can be used to derive receptor-specific exposure estimates even in the absence of source emissions data for use in human health outcome studies. PMID:26211636
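
    One way to read the described calibration is as a ratio method: the dispersion model supplies relative concentrations between receptor points and the monitoring station, and the measured long-term mean anchors the absolute scale. The sketch below is that reading only; the names and single-monitor setup are assumptions.

        def calibrated_receptor_conc(modeled_receptor, modeled_at_monitor,
                                     measured_at_monitor):
            """Scale dispersion-model output by a monitor-based calibration ratio.

            AERMOD-type output with a nominal emission rate gives the spatial
            pattern; the measured long-term mean at the monitor sets the scale.
            """
            ratio = modeled_receptor / modeled_at_monitor   # receptor/monitor
            return ratio * measured_at_monitor              # calibrated, ug/m3

        print(calibrated_receptor_conc(0.8, 1.2, 0.21))     # placeholder values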

  3. Development of an atmospheric N2O isotopocule model and optimization procedure, and application to source estimation

    NASA Astrophysics Data System (ADS)

    Ishijima, K.; Takigawa, M.; Sudo, K.; Toyoda, S.; Yoshida, N.; Röckmann, T.; Kaiser, J.; Aoki, S.; Morimoto, S.; Sugawara, S.; Nakazawa, T.

    2015-07-01

    This paper presents the development of an atmospheric N2O isotopocule model based on a chemistry-coupled atmospheric general circulation model (ACTM). We also describe a simple method to optimize the model and present its use in estimating the isotopic signatures of surface sources at the hemispheric scale. Data obtained from ground-based observations, measurements of firn air, and balloon and aircraft flights were used to optimize the long-term trends, interhemispheric gradients, and photolytic fractionation, respectively, in the model. This optimization successfully reproduced realistic spatial and temporal variations of atmospheric N2O isotopocules throughout the atmosphere from the surface to the stratosphere. The very small gradients associated with vertical profiles through the troposphere and the latitudinal and vertical distributions within each hemisphere were also reasonably simulated. The results of the isotopic characterization of the global total sources were generally consistent with previous one-box model estimates, indicating that the observed atmospheric trend is the dominant factor controlling the source isotopic signature. However, hemispheric estimates were different from those generated by a previous two-box model study, mainly due to the model accounting for the interhemispheric transport and latitudinal and vertical distributions of tropospheric N2O isotopocules. Comparisons of time series of atmospheric N2O isotopocule ratios between our model and observational data from several laboratories revealed the need for a more systematic and elaborate intercalibration of the standard scales used in N2O isotopic measurements in order to capture a more complete and precise picture of the temporal and spatial variations in atmospheric N2O isotopocule ratios. This study highlights the possibility that inverse estimation of surface N2O fluxes, including the isotopic information as additional constraints, could be realized.

  4. Development of an atmospheric N2O isotopocule model and optimization procedure, and application to source estimation

    NASA Astrophysics Data System (ADS)

    Ishijima, K.; Takigawa, M.; Sudo, K.; Toyoda, S.; Yoshida, N.; Röckmann, T.; Kaiser, J.; Aoki, S.; Morimoto, S.; Sugawara, S.; Nakazawa, T.

    2015-12-01

    This work presents the development of an atmospheric N2O isotopocule model based on a chemistry-coupled atmospheric general circulation model (ACTM). We also describe a simple method to optimize the model and present its use in estimating the isotopic signatures of surface sources at the hemispheric scale. Data obtained from ground-based observations, measurements of firn air, and balloon and aircraft flights were used to optimize the long-term trends, interhemispheric gradients, and photolytic fractionation, respectively, in the model. This optimization successfully reproduced realistic spatial and temporal variations of atmospheric N2O isotopocules throughout the atmosphere from the surface to the stratosphere. The very small gradients associated with vertical profiles through the troposphere and the latitudinal and vertical distributions within each hemisphere were also reasonably simulated. The results of the isotopic characterization of the global total sources were generally consistent with previous one-box model estimates, indicating that the observed atmospheric trend is the dominant factor controlling the source isotopic signature. However, hemispheric estimates were different from those generated by a previous two-box model study, mainly due to the model accounting for the interhemispheric transport and latitudinal and vertical distributions of tropospheric N2O isotopocules. Comparisons of time series of atmospheric N2O isotopocule ratios between our model and observational data from several laboratories revealed the need for a more systematic and elaborate intercalibration of the standard scales used in N2O isotopic measurements in order to capture a more complete and precise picture of the temporal and spatial variations in atmospheric N2O isotopocule ratios. This study highlights the possibility that inverse estimation of surface N2O fluxes, including the isotopic information as additional constraints, could be realized.

  5. Yield and Depth Estimation of Selected NTS Nuclear and SPE Chemical Explosions Using Source Equalization by Modeling Local and Regional Seismograms (Invited)

    NASA Astrophysics Data System (ADS)

    Saikia, C. K.; Roman-nieves, J. I.; Woods, M. T.

    2013-12-01

    Source parameters of nuclear and chemical explosions are often estimated by matching either the corner frequency and spectral level of a single event, or the spectral ratio when spectra from two events are available and the source parameters of one are known. In this study, we propose an alternative method in which waveforms from two or more events can be simultaneously equalized by setting to zero the difference of the processed seismograms recorded at one station from any two individual events. The method involves convolving the equivalent Mueller-Murphy displacement source time function (MMDSTF) of one event with the seismogram of the second event and vice versa, and then computing their difference seismogram. The MMDSTF is computed at the elastic radius, including both near-field and far-field terms. For this method to yield accurate source parameters, an inherent assumption is that the Green's functions from source to receiver are the same for any paired events. Within the frequency band of the seismic data, this is a reasonable assumption, a conclusion based on comparison of Green's functions computed for flat-earth models at source depths ranging from 100 m to 1 km. Frequency domain analysis of the initial P wave is, however, sensitive to the depth phase interaction and, if tracked meticulously, can help estimate the event depth. We applied this method to the local waveforms recorded from the three SPE shots and precisely determined their yields. These high-frequency seismograms exhibit significant lateral path effects in spectrogram analysis and 3D numerical computations, but the source equalization technique is independent of such variations as long as the instrument characteristics are well preserved. We are currently estimating the uncertainty in the derived source parameters, treating the yields of the SPE shots as unknown. We also collected regional waveforms from 95 NTS explosions at regional stations ALQ, ANMO, CMB, COR, JAS, LON, PAS, PFO, and RSSD. We are currently employing a station-based analysis using the equalization technique to estimate depths and yields of many events relative to those of the announced explosions, and to develop their relationship with Mw and M0 for the NTS explosions.
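
    The equalization identity is simple to state in code: if two co-located events share a Green's function g, then (g*m_a)*m_b equals (g*m_b)*m_a, so cross-convolving each seismogram with the other event's trial source time function and differencing should vanish for the correct parameters. The sketch assumes equal-length traces and STFs supplied as sampled arrays; it is illustrative, not the authors' implementation.

        import numpy as np

        def equalization_residual(seis_a, seis_b, stf_a, stf_b):
            """Difference of cross-convolved seismograms for a pair of events.

            seis_x: recorded trace of event x at one station; stf_x: trial
            (Mueller-Murphy-type) source time function at the same sample
            rate. The residual tends to zero when both STFs are correct and
            the path Green's functions are shared.
            """
            a_proc = np.convolve(seis_a, stf_b, mode="full")
            b_proc = np.convolve(seis_b, stf_a, mode="full")
            return a_proc - b_proc   # minimize e.g. np.sum(res**2) over parameters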

  6. Global solutions and finite time blow-up for fourth order nonlinear damped wave equation

    NASA Astrophysics Data System (ADS)

    Xu, Runzhang; Wang, Xingchang; Yang, Yanbing; Chen, Shaohua

    2018-06-01

    In this paper, we study the initial boundary value problem and global well-posedness for a class of fourth order wave equations with a nonlinear damping term and a nonlinear source term, introduced to describe the dynamics of a suspension bridge. Global existence, a decay estimate, and blow-up of the solution at both the subcritical (E(0) < d) and critical (E(0) = d) initial energy levels are obtained. Moreover, we prove finite time blow-up of the solution at the supercritical initial energy level (E(0) > 0).

  7. Association of long-term exposure to local industry- and traffic-specific particulate matter with arterial blood pressure and incident hypertension.

    PubMed

    Fuks, Kateryna B; Weinmayr, Gudrun; Hennig, Frauke; Tzivian, Lilian; Moebus, Susanne; Jakobs, Hermann; Memmesheimer, Michael; Kälsch, Hagen; Andrich, Silke; Nonnemacher, Michael; Erbel, Raimund; Jöckel, Karl-Heinz; Hoffmann, Barbara

    2016-08-01

    Long-term exposure to fine particulate matter (PM2.5) may lead to increased blood pressure (BP). The role of industry- and traffic-specific PM2.5 remains unclear. We investigated the associations of residential long-term source-specific PM2.5 exposure with arterial BP and incident hypertension in the population-based Heinz Nixdorf Recall cohort study. We defined hypertension as systolic BP ≥ 140 mmHg, diastolic BP ≥ 90 mmHg, or current use of BP-lowering medication. Long-term concentrations of PM2.5 from all local sources (PM2.5ALL), local industry (PM2.5IND), and traffic (PM2.5TRA) were modeled with a dispersion and chemistry transport model (EURAD-CTM) at 1 km2 resolution. We performed a cross-sectional analysis with BP and prevalent hypertension at baseline, using linear and logistic regression, respectively, and a longitudinal analysis with incident hypertension at 5-year follow-up, using Poisson regression with robust variance estimation. We adjusted for age, sex, body mass index, lifestyle, education, and major road proximity. Change in BP (mmHg), odds ratio (OR), and relative risk (RR) for hypertension were calculated per 1 μg/m3 of exposure concentration. PM2.5ALL was highly correlated with PM2.5IND (Spearman's ρ = 0.92) and moderately with PM2.5TRA (ρ = 0.42). In adjusted cross-sectional analysis with 4539 participants, we found positive associations of PM2.5ALL with systolic (0.42 [95% CI: 0.03, 0.80]) and diastolic (0.25 [0.04, 0.46]) BP. Higher, but less precise, estimates were found for PM2.5IND (systolic: 0.55 [-0.05, 1.14]; diastolic: 0.35 [0.03, 0.67]) and PM2.5TRA (systolic: 0.88 [-1.55, 3.31]; diastolic: 0.41 [-0.91, 1.73]). We found a crude positive association of PM2.5TRA with prevalence (OR 1.41 [1.10, 1.80]) and incidence of hypertension (RR 1.38 [1.03, 1.85]), attenuating after adjustment (OR 1.19 [0.90, 1.58] and RR 1.28 [0.94, 1.72]). We found no association of PM2.5ALL and PM2.5IND with hypertension. Long-term exposures to all-source and industry-specific PM2.5 were positively related to BP. We could not separate the effects of industry-specific PM2.5 from all-source PM2.5. Estimates with traffic-specific PM2.5 were generally higher but inconclusive.

  8. Estimation of the sensitive volume for gravitational-wave source populations using weighted Monte Carlo integration

    NASA Astrophysics Data System (ADS)

    Tiwari, Vaibhav

    2018-07-01

    The population analysis and estimation of merger rates of compact binaries is one of the important topics in gravitational wave astronomy. The primary ingredient in these analyses is the population-averaged sensitive volume. Typically, the sensitive volume of a given search to a given simulated source population is estimated by drawing signals from the population model and adding them to the detector data as injections. Subsequently, the injections, which are simulated gravitational waveforms, are searched for by the search pipelines and their signal-to-noise ratio (SNR) is determined. The sensitive volume is estimated by Monte Carlo (MC) integration from the total number of injections added to the data, the number of injections that cross a chosen SNR threshold, and the astrophysical volume in which the injections are placed. So far, only fixed population models have been used in the estimation of binary black hole (BBH) merger rates. However, as the scope of population analysis broadens in terms of the methodologies and source properties considered, due to an increase in the number of observed gravitational wave (GW) signals, the procedure will need to be repeated multiple times at a large computational cost. In this letter we address the problem by performing a weighted MC integration. We show how a single set of generic injections can be weighted to estimate the sensitive volume for multiple population models, thereby greatly reducing the computational cost. The weights in this MC integral are the ratios of the output probabilities, determined by the population model and standard cosmology, to the injection probability, determined by the distribution function of the generic injections. Unlike analytical/semi-analytical methods, which usually estimate sensitive volume using single detector sensitivity, the method is accurate within statistical errors, comes at no added cost, and requires minimal computational resources.
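
    A minimal, self-normalized version of that importance-weighted estimator follows; the array names, SNR threshold, and self-normalization choice are illustrative assumptions.

        import numpy as np

        def sensitive_volume(snr, p_pop, p_inj, v_total, threshold=8.0):
            """Weighted MC estimate of the population-averaged sensitive volume.

            snr:    recovered SNR of each generic injection
            p_pop:  density of each injection under the target population model
            p_inj:  density under the distribution the injections were drawn from
            v_total: astrophysical volume containing the injections
            Reweighting by p_pop/p_inj lets one injection set serve many
            population models (self-normalized importance-sampling form).
            """
            w = p_pop / p_inj
            found = snr > threshold
            return v_total * np.sum(w * found) / np.sum(w)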

  9. Desert Dust Outbreaks in Southern Europe: Contribution to Daily PM10 Concentrations and Short-Term Associations with Mortality and Hospital Admissions

    PubMed Central

    Stafoggia, Massimo; Zauli-Sajani, Stefano; Pey, Jorge; Samoli, Evangelia; Alessandrini, Ester; Basagaña, Xavier; Cernigliaro, Achille; Chiusolo, Monica; Demaria, Moreno; Díaz, Julio; Faustini, Annunziata; Katsouyanni, Klea; Kelessis, Apostolos G.; Linares, Cristina; Marchesi, Stefano; Medina, Sylvia; Pandolfi, Paolo; Pérez, Noemí; Querol, Xavier; Randi, Giorgia; Ranzi, Andrea; Tobias, Aurelio; Forastiere, Francesco

    2015-01-01

    Background: Evidence on the association between short-term exposure to desert dust and health outcomes is controversial. Objectives: We aimed to estimate the short-term effects of particulate matter ≤ 10 μm (PM10) on mortality and hospital admissions in 13 Southern European cities, distinguishing between PM10 originating from the desert and from other sources. Methods: We identified desert dust advection days in multiple Mediterranean areas for 2001–2010 by combining modeling tools, back-trajectories, and satellite data. For each advection day, we estimated PM10 concentrations originating from the desert, and computed PM10 from other sources by difference. We fitted city-specific Poisson regression models to estimate the association between PM10 from different sources (desert and non-desert) and daily mortality and emergency hospitalizations. Finally, we pooled city-specific results in a random-effects meta-analysis. Results: On average, 15% of days were affected by desert dust at ground level (desert PM10 > 0 μg/m3). Most episodes occurred in spring–summer, with a gradient of increasing frequency and intensity from north to south and from west to east across the Mediterranean basin. We found significant associations of both PM10 fractions with mortality. Increases of 10 μg/m3 in non-desert and desert PM10 (lag 0–1 days) were associated with increases in natural mortality of 0.55% (95% CI: 0.24, 0.87%) and 0.65% (95% CI: 0.24, 1.06%), respectively. Similar associations were estimated for cardio-respiratory mortality and hospital admissions. Conclusions: PM10 originating from the desert was positively associated with mortality and hospitalizations in Southern Europe. Policy measures should aim at reducing population exposure to anthropogenic airborne particles even in areas with a large contribution from desert dust advections. PMID:26219103
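
    The pooling step can be sketched with the standard DerSimonian-Laird random-effects estimator; the inputs below are placeholders, not the study's city-specific estimates.

        import numpy as np

        def dersimonian_laird(beta, se):
            """Pool city-specific effect estimates with a random-effects model.

            beta: per-city estimates (e.g. log relative risk per 10 ug/m3);
            se: their standard errors. Standard moment estimator for tau^2.
            """
            w = 1.0 / se**2
            fixed = np.sum(w * beta) / np.sum(w)
            q = np.sum(w * (beta - fixed) ** 2)
            df = len(beta) - 1
            tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
            w_re = 1.0 / (se**2 + tau2)
            pooled = np.sum(w_re * beta) / np.sum(w_re)
            return pooled, np.sqrt(1.0 / np.sum(w_re))

        beta = np.array([0.0055, 0.0065, 0.0040])   # placeholder city estimates
        se = np.array([0.0016, 0.0021, 0.0030])
        print(dersimonian_laird(beta, se))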

  10. Annual global tree cover estimated by fusing optical and SAR satellite observations

    NASA Astrophysics Data System (ADS)

    Feng, M.; Sexton, J. O.; Channan, S.; Townshend, J. R.

    2017-12-01

    Tree cover, defined structurally as the proportional, vertically projected area of vegetation (including leaves, stems, and branches) of woody plants above a given height, affects terrestrial energy and water exchanges, photosynthesis and transpiration, net primary production, and carbon and nutrient fluxes. Tree cover provides a measurable attribute upon which forest cover may be defined. Changes in tree cover over time can be used to monitor and retrieve site-specific histories of forest disturbance, succession, and degradation. Measurements of Earth's tree cover have been produced at regional, national, and global extents. However, most representations are static, and those for which multiple time periods have been produced are neither intended nor adequate for consistent, long-term monitoring. Moreover, although a substantial proportion of change has been shown to occur at resolutions below 250 m, existing long-term, Landsat-resolution datasets are either produced as static layers or with annual, five-, or ten-year temporal resolution. We have developed an algorithm to retrieve seamless, consistent, sub-hectare resolution estimates of tree canopy cover from optical and radar satellite data sources (e.g., Landsat, Sentinel-2, and ALOS-PALSAR). Our approach to estimation enables assimilation of multiple data sources and produces estimates of both cover and its uncertainty at the scale of pixels. It generated the world's first Landsat-based percent tree cover dataset in 2013. Our previous algorithms are being adapted to produce prototype percent-tree and water-cover layers globally in 2000, 2005, and 2010, as well as annually over North and South America from 2010 to 2015, from passive-optical (Landsat and Sentinel-2) and SAR measurements. Generating a global, annual dataset is beyond the scope of this effort; however, North and South America contain all of the world's major biomes and so offer the complete global range of environmental sources of error and uncertainty.

  11. Long-term financing needs for HIV control in sub-Saharan Africa in 2015–2050: a modelling study

    PubMed Central

    Atun, Rifat; Chang, Angela Y; Ogbuoji, Osondu; Silva, Sachin; Resch, Stephen; Hontelez, Jan; Bärnighausen, Till

    2016-01-01

    Objectives To estimate the present value of current and future funding needed for HIV treatment and prevention in 9 sub-Saharan African (SSA) countries that account for 70% of HIV burden in Africa under different scenarios of intervention scale-up. To analyse the gaps between current expenditures and funding obligation, and discuss the policy implications of future financing needs. Design We used the Goals module from Spectrum, and applied the most up-to-date cost and coverage data to provide a range of estimates for future financing obligations. The four different scale-up scenarios vary by treatment initiation threshold and service coverage level. We compared the model projections to current domestic and international financial sources available in selected SSA countries. Results In the 9 SSA countries, the estimated resources required for HIV prevention and treatment in 2015–2050 range from US$98 billion to maintain current coverage levels for treatment and prevention with eligibility for treatment initiation at CD4 count of <500/mm3 to US$261 billion if treatment were to be extended to all HIV-positive individuals and prevention scaled up. With the addition of new funding obligations for HIV—which arise implicitly through commitment to achieve higher than current treatment coverage levels—overall financial obligations (sum of debt levels and the present value of the stock of future HIV funding obligations) would rise substantially. Conclusions Investing upfront in scale-up of HIV services to achieve high coverage levels will reduce HIV incidence, prevention and future treatment expenditures by realising long-term preventive effects of ART to reduce HIV transmission. Future obligations are too substantial for most SSA countries to be met from domestic sources alone. New sources of funding, in addition to domestic sources, include innovative financing. Debt sustainability for sustained HIV response is an urgent imperative for affected countries and donors. PMID:26948960
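
    The "present value of future funding obligations" is an ordinary discounting calculation. A minimal sketch follows; the 3% discount rate is a common health-economics convention, not necessarily the study's stated choice.

        import numpy as np

        def present_value(annual_costs, rate=0.03):
            """Discount a stream of projected annual funding needs (year 0 first)."""
            years = np.arange(len(annual_costs))
            return np.sum(np.asarray(annual_costs) / (1.0 + rate) ** years)

        # e.g. a flat hypothetical US$10 billion/yr over 2015-2050 (36 years)
        print(present_value([10e9] * 36))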

  12. Identifying Atmospheric Pollutant Sources Using Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Paes, F. F.; Campos, H. F.; Luz, E. P.; Carvalho, A. R.

    2008-05-01

    The estimation of area source pollutant strength is a relevant issue in atmospheric environment studies. It constitutes an inverse problem in atmospheric pollution dispersion. In the inverse analysis, an area source domain is considered, where the strength of the area source term is assumed unknown. The inverse problem is solved using a supervised artificial neural network: a multilayer perceptron. The connection weights of the neural network are computed with the delta-rule learning process. The neural network inversion is compared with results from a standard inverse analysis (regularized inverse solution). In the regularization method, the inverse problem is formulated as a non-linear optimization problem, in which the objective function is given by the squared difference between the measured pollutant concentrations and the mathematical model output, associated with a regularization operator. In our numerical experiments, the forward problem is addressed by a source-receptor scheme, where a regressive Lagrangian model is applied to compute the transition matrix. Second-order maximum entropy regularization is used, and the regularization parameter is calculated by the L-curve technique. The objective function is minimized employing a deterministic scheme (a quasi-Newton algorithm) [1] and a stochastic technique (PSO: particle swarm optimization) [2]. The inverse problem methodology is tested with synthetic observational data from six measurement points in the physical domain. The best inverse solutions were obtained with neural networks. References: [1] D. R. Roberti, D. Anfossi, H. F. Campos Velho, G. A. Degrazia (2005): Estimating Emission Rate and Pollutant Source Location, Ciencia e Natura, p. 131-134. [2] E.F.P. da Luz, H.F. de Campos Velho, J.C. Becceneri, D.R. Roberti (2007): Estimating Atmospheric Area Source Strength Through Particle Swarm Optimization. Inverse Problems, Design and Optimization Symposium IPDO-2007, April 16-18, Miami (FL), USA, vol 1, p. 354-359.
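
    The regularized branch of the comparison reduces to penalized least squares on the source-receptor transition matrix. The sketch below swaps the maximum entropy penalty for a simpler second-difference (Tikhonov-style) smoothness operator so the solution stays closed-form; all names and inputs are illustrative.

        import numpy as np

        def regularized_source_estimate(K, y, lam):
            """Regularized least-squares inversion for area-source strengths.

            Solves min_s ||K s - y||^2 + lam ||L s||^2 with a second-difference
            smoothness operator L (a stand-in for the maximum entropy penalty).
            K: source-receptor transition matrix; y: measured concentrations.
            """
            n = K.shape[1]
            L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
            A = K.T @ K + lam * (L.T @ L)
            return np.linalg.solve(A, K.T @ y)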

  13. A Subspace Pursuit–based Iterative Greedy Hierarchical Solution to the Neuromagnetic Inverse Problem

    PubMed Central

    Babadi, Behtash; Obregon-Henao, Gabriel; Lamus, Camilo; Hämäläinen, Matti S.; Brown, Emery N.; Purdon, Patrick L.

    2013-01-01

    Magnetoencephalography (MEG) is an important non-invasive method for studying activity within the human brain. Source localization methods can be used to estimate spatiotemporal activity from MEG measurements with high temporal resolution, but the spatial resolution of these estimates is poor due to the ill-posed nature of the MEG inverse problem. Recent developments in source localization methodology have emphasized temporal as well as spatial constraints to improve source localization accuracy, but these methods can be computationally intense. Solutions emphasizing spatial sparsity hold tremendous promise, since the underlying neurophysiological processes generating MEG signals are often sparse in nature, whether in the form of focal sources, or distributed sources representing large-scale functional networks. Recent developments in the theory of compressed sensing (CS) provide a rigorous framework to estimate signals with sparse structure. In particular, a class of CS algorithms referred to as greedy pursuit algorithms can provide both high recovery accuracy and low computational complexity. Greedy pursuit algorithms are difficult to apply directly to the MEG inverse problem because of the high-dimensional structure of the MEG source space and the high spatial correlation in MEG measurements. In this paper, we develop a novel greedy pursuit algorithm for sparse MEG source localization that overcomes these fundamental problems. This algorithm, which we refer to as the Subspace Pursuit-based Iterative Greedy Hierarchical (SPIGH) inverse solution, exhibits very low computational complexity while achieving very high localization accuracy. We evaluate the performance of the proposed algorithm using comprehensive simulations, as well as the analysis of human MEG data during spontaneous brain activity and somatosensory stimuli. These studies reveal substantial performance gains provided by the SPIGH algorithm in terms of computational complexity, localization accuracy, and robustness. PMID:24055554

  14. Associations of short-term exposure to traffic-related air pollution with cardiovascular and respiratory hospital admissions in London, UK.

    PubMed

    Samoli, Evangelia; Atkinson, Richard W; Analitis, Antonis; Fuller, Gary W; Green, David C; Mudway, Ian; Anderson, H Ross; Kelly, Frank J

    2016-05-01

    There is evidence of adverse associations between short-term exposure to traffic-related pollution and health, but little is known about the relative contributions of the various sources and particulate constituents. For each day in 2011-2012 in London, UK, over 100 air pollutant metrics were assembled using monitors, modelling and chemical analyses. We selected a priori metrics indicative of traffic sources: general traffic, petrol exhaust, diesel exhaust and non-exhaust (mineral dust, brake and tyre wear). Using Poisson regression models, controlling for time-varying confounders, we derived effect estimates for cardiovascular and respiratory hospital admissions at prespecified lags and evaluated the sensitivity of estimates to multipollutant modelling and effect modification by season. For single day exposure, we found consistent associations of adult (15-64 years) cardiovascular and paediatric (0-14 years) respiratory admissions with elemental and black carbon (EC/BC), ranging from 0.56% to 1.65% increase per IQR change, and to a lesser degree with carbon monoxide (CO) and aluminium (Al). The average of the past 7 days' EC/BC exposure was associated with elderly (65+ years) cardiovascular admissions. The indicated associations were higher during the warm period of the year. Although effect estimates were sensitive to the adjustment for other pollutants, they remained consistent in direction, indicating independence of associations from different sources, especially between diesel and petrol engines, as well as mineral dust. Our results suggest that exhaust-related pollutants are associated with increased numbers of adult cardiovascular and paediatric respiratory hospitalisations. More extensive monitoring in urban centres is required to further elucidate the associations.

  15. Associations of short-term exposure to traffic-related air pollution with cardiovascular and respiratory hospital admissions in London, UK

    PubMed Central

    Samoli, Evangelia; Atkinson, Richard W; Analitis, Antonis; Fuller, Gary W; Green, David C; Mudway, Ian; Anderson, H Ross; Kelly, Frank J

    2016-01-01

    Objectives There is evidence of adverse associations between short-term exposure to traffic-related pollution and health, but little is known about the relative contributions of the various sources and particulate constituents. Methods For each day in 2011–2012 in London, UK, over 100 air pollutant metrics were assembled using monitors, modelling and chemical analyses. We selected a priori metrics indicative of traffic sources: general traffic, petrol exhaust, diesel exhaust and non-exhaust (mineral dust, brake and tyre wear). Using Poisson regression models, controlling for time-varying confounders, we derived effect estimates for cardiovascular and respiratory hospital admissions at prespecified lags and evaluated the sensitivity of estimates to multipollutant modelling and effect modification by season. Results For single day exposure, we found consistent associations of adult (15–64 years) cardiovascular and paediatric (0–14 years) respiratory admissions with elemental and black carbon (EC/BC), ranging from 0.56% to 1.65% increase per IQR change, and to a lesser degree with carbon monoxide (CO) and aluminium (Al). The average of the past 7 days' EC/BC exposure was associated with elderly (65+ years) cardiovascular admissions. The indicated associations were higher during the warm period of the year. Although effect estimates were sensitive to the adjustment for other pollutants, they remained consistent in direction, indicating independence of associations from different sources, especially between diesel and petrol engines, as well as mineral dust. Conclusions Our results suggest that exhaust-related pollutants are associated with increased numbers of adult cardiovascular and paediatric respiratory hospitalisations. More extensive monitoring in urban centres is required to further elucidate the associations. PMID:26884048

  16. Parameterizing unresolved obstacles with source terms in wave modeling: A real-world application

    NASA Astrophysics Data System (ADS)

    Mentaschi, Lorenzo; Kakoulaki, Georgia; Vousdoukas, Michalis; Voukouvalas, Evangelos; Feyen, Luc; Besio, Giovanni

    2018-06-01

    Parameterizing the dissipative effects of small, unresolved coastal features is fundamental to improving the skill of wave models. The established technique for dealing with this problem consists in reducing the amount of energy advected within the propagation scheme, and is currently available only for regular grids. To find a more general approach, Mentaschi et al. (2015b) formulated a technique based on source terms and validated it on synthetic case studies. This technique separates the parameterization of the unresolved features from the energy advection, and can therefore be applied to any numerical scheme and to any type of mesh. Here we developed an open-source library for estimating the transparency coefficients needed by this approach from bathymetric data, for any type of mesh. The spectral wave model WAVEWATCH III was used to show that in a real-world domain, such as the Caribbean Sea, the proposed approach has skill comparable to, and sometimes better than, that of the established propagation-based technique.

  17. Monitoring the size and protagonists of the drug market: combining supply and demand data sources and estimates.

    PubMed

    Rossi, Carla

    2013-06-01

    The size of the illicit drug market is an important indicator for assessing the societal impact of a significant part of the illegal economy and for evaluating drug policy and law enforcement interventions. The extent of illicit drug use and of the drug market can essentially only be estimated by indirect methods, based on indirect measures and on data from various sources such as administrative data sets and surveys. The combined use of several methodologies and data sets makes it possible to reduce the biases and inaccuracies of estimates obtained on the basis of each of them separately. This approach has been applied to Italian data. The estimation methods applied are capture-recapture methods with latent heterogeneity and multiplier methods. Several data sets have been used, both administrative and survey data sets. First, the retail dealer prevalence was estimated on the basis of administrative data, and then the user prevalence by multiplier methods. Using information about the behaviour of dealers and consumers from survey data, the average amount of a substance used or sold and the average unit cost were estimated, allowing the size of the drug market to be estimated. The estimates were obtained using both a supply-side approach and a demand-side approach and were compared. These results are in turn used for estimating the interception rate for the different substances, in terms of the value of the substance seized with respect to the total value of the substance to be sold at retail prices.
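
    The demand-side multiplier calculation is arithmetic: users times average annual amount times unit retail cost, then compared against a supply-side figure. The numbers below are invented placeholders, not estimates for any real substance or country.

        def market_size(prevalence, mean_amount_g, unit_cost):
            """Demand-side market value: users x mean annual amount x unit cost."""
            return prevalence * mean_amount_g * unit_cost

        demand = market_size(prevalence=300_000, mean_amount_g=50.0, unit_cost=60.0)
        supply = 1.1e9   # hypothetical supply-side estimate, for comparison
        print(demand, supply, demand / supply)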

  18. Estimates of ground level TSP, SO2 and HCl for a municipal waste incinerator to be located at Tynes Bay, Bermuda

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kent Simmons, J.A.; Knap, A.H.

    1991-04-01

    The computer model Industrial Source Complex Short Term (ISCST) was used to study the stack emissions from a refuse incinerator proposed for the island of Bermuda. The model predicts that the highest ground level pollutant concentrations will occur near Prospect, 800 m to 1,000 m due south of the stack. The authors installed a portable laboratory and instruments at Prospect to begin making baseline air quality measurements. By comparing the model's estimates of the incinerator contribution to the background levels measured at the site, they predicted that stack emissions would not cause an increase in TSP or SO2. The incinerator will, however, be a significant source of HCl in Bermuda's air, with ambient levels approaching air quality guidelines.
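
    ISCST-type models are built on the steady-state Gaussian plume kernel; a minimal ground-level version with total surface reflection is sketched below. Passing the dispersion coefficients in directly, rather than deriving them from stability class and downwind distance, is a simplification.

        import numpy as np

        def ground_level_conc(q_g_s, u_m_s, sigma_y, sigma_z, y, h_stack):
            """Ground-level Gaussian plume concentration with surface reflection.

            C(x, y, 0) = Q / (pi * u * sy * sz)
                         * exp(-y^2 / (2 sy^2)) * exp(-H^2 / (2 sz^2)),
            the factor 2 from reflection folded into 1/pi. sy, sz normally
            come from stability class and distance; here they are inputs.
            """
            lateral = np.exp(-(y ** 2) / (2.0 * sigma_y ** 2))
            vertical = np.exp(-(h_stack ** 2) / (2.0 * sigma_z ** 2))
            return q_g_s / (np.pi * u_m_s * sigma_y * sigma_z) * lateral * vertical

        print(ground_level_conc(100.0, 4.0, 60.0, 30.0, y=0.0, h_stack=40.0))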

  19. Bayesian inversion of seismic and electromagnetic data for marine gas reservoir characterization using multi-chain Markov chain Monte Carlo sampling

    DOE PAGES

    Ren, Huiying; Ray, Jaideep; Hou, Zhangshuan; ...

    2017-10-17

    In this paper we developed an efficient Bayesian inversion framework for interpreting marine seismic Amplitude Versus Angle and Controlled-Source Electromagnetic data for marine reservoir characterization. The framework uses a multi-chain Markov chain Monte Carlo sampler, which is a hybrid of the DiffeRential Evolution Adaptive Metropolis and Adaptive Metropolis samplers. The inversion framework is tested by estimating reservoir-fluid saturations and porosity based on marine seismic and Controlled-Source Electromagnetic data. The multi-chain Markov chain Monte Carlo approach is scalable in terms of the number of chains, and is useful for computationally demanding Bayesian model calibration in scientific and engineering problems. As a demonstration, the approach is used to efficiently and accurately estimate the porosity and saturations in a representative layered synthetic reservoir. The results indicate that the joint inversion of seismic Amplitude Versus Angle and Controlled-Source Electromagnetic data provides better estimation of reservoir saturations than the seismic Amplitude Versus Angle inversion alone, especially for the parameters in deep layers. The performance of the inversion approach for various levels of noise in the observational data was evaluated; reasonable estimates can be obtained with noise levels up to 25%. The sampling efficiency due to the use of multiple chains was also checked and found to have almost linear scalability.
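
    For orientation, a bare-bones single-chain random-walk Metropolis sampler is sketched below as a stand-in for the hybrid DREAM/adaptive-Metropolis multi-chain sampler the paper uses; log_post would wrap the forward models and priors for parameters such as porosity and saturations.

        import numpy as np

        def metropolis(log_post, x0, n_steps, step=0.1, seed=0):
            """Minimal random-walk Metropolis sampler (single chain, sketch)."""
            rng = np.random.default_rng(seed)
            x = np.asarray(x0, dtype=float)
            lp = log_post(x)
            chain = np.empty((n_steps, x.size))
            for i in range(n_steps):
                prop = x + step * rng.standard_normal(x.size)
                lp_prop = log_post(prop)
                if np.log(rng.random()) < lp_prop - lp:   # accept/reject
                    x, lp = prop, lp_prop
                chain[i] = x
            return chain

        # Toy target: standard normal posterior in two dimensions.
        chain = metropolis(lambda x: -0.5 * np.sum(x**2), [0.3, 0.7],
                           n_steps=5000, step=0.5)
        print(chain.mean(axis=0))   # near zero for this target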

  1. Investigating the effects of the fixed and varying dispersion parameters of Poisson-gamma models on empirical Bayes estimates.

    PubMed

    Lord, Dominique; Park, Peter Young-Jin

    2008-07-01

    Traditionally, transportation safety analysts have used the empirical Bayes (EB) method to improve the estimate of the long-term mean of individual sites; to correct for the regression-to-the-mean (RTM) bias in before-after studies; and to identify hotspots or high risk locations. The EB method combines two different sources of information: (1) the expected number of crashes estimated via crash prediction models, and (2) the observed number of crashes at individual sites. Crash prediction models have traditionally been estimated using a negative binomial (NB) (or Poisson-gamma) modeling framework due to the over-dispersion commonly found in crash data. A weight factor is used to assign the relative influence of each source of information on the EB estimate. This factor is estimated using the mean and variance functions of the NB model. Given recent findings that the dispersion parameter can depend upon the covariates of NB models, especially for traffic flow-only models, and can vary across time periods, there is a need to determine how these models may affect EB estimates. The objectives of this study are to examine how commonly used functional forms as well as fixed and time-varying dispersion parameters affect the EB estimates. To accomplish the study objectives, several traffic flow-only crash prediction models were estimated using a sample of rural three-legged intersections located in California. Two types of aggregated and time-specific models were produced: (1) the traditional NB model with a fixed dispersion parameter and (2) the generalized NB model (GNB) with a time-varying dispersion parameter, which is also dependent upon the covariates of the model. Several statistical methods were used to compare the fitting performance of the various functional forms. The results of the study show that the selection of the functional form of NB models has an important effect on EB estimates in terms of estimated values, weight factors, and dispersion parameters. Time-specific models with a varying dispersion parameter provide better statistical performance in terms of goodness-of-fit (GOF) than aggregated multi-year models. Furthermore, the identification of hazardous sites using the EB method can be significantly affected when a GNB model with a time-varying dispersion parameter is used. Thus, erroneously selecting a functional form may lead to selecting the wrong sites for treatment. The study concludes that transportation safety analysts should not automatically use an existing functional form for modeling motor vehicle crashes without conducting rigorous analyses to estimate the most appropriate functional form linking crashes with traffic flow.
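
    The EB combination itself is a one-line shrinkage formula. Under the usual NB parameterization with variance mu + mu^2/phi, the weight factor is w = phi/(phi + mu); the numbers below are placeholders, not the study's data.

        def eb_estimate(mu, x, phi):
            """Empirical Bayes long-term mean for one site under an NB model.

            mu: model-predicted mean crash count; x: observed count;
            phi: NB dispersion parameter (Var = mu + mu^2/phi).
            """
            w = phi / (phi + mu)
            return w * mu + (1.0 - w) * x

        print(eb_estimate(mu=2.4, x=6, phi=3.0))   # shrinks 6 toward 2.4 -> 4.0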

  2. Clinical and Biochemical Characteristics of Brain-Dead Donors as Predictors of Early- and Long-Term Renal Function After Transplant.

    PubMed

    Kwiatkowska, Ewa; Domański, Leszek; Bober, Joanna; Safranow, Krzysztof; Pawlik, Andrzej; Ciechanowski, Kazimierz; Wiśniewska, Magda; Kędzierska, Karolina

    2017-08-01

    Organs from brain-dead donors are the main source of allografts for transplant. Comparisons between living-donor and brain-dead donor kidneys show that the latter are more likely to demonstrate delayed graft function and to have lower long-term survival. This study aimed to assess the effects of various clinical and biochemical donor factors on early- and long-term renal function after transplant. We analyzed data from kidney recipients treated between 2006 and 2008 who received organs from brain-dead donors. Data from 54 donors and 89 recipients were analyzed. No relation was observed between donor sodium concentration and the presence of delayed graft function. Donor height was positively correlated with creatinine clearance in recipients in the 1 to 3 months after renal transplant. Donor diastolic blood pressure was negatively correlated with estimated glomerular filtration rate throughout the observation period. Donor age was negatively correlated with the allograft recipient's estimated glomerular filtration rate throughout 4 years of observation. Donor estimated glomerular filtration rate was positively correlated with that of the recipient throughout 3 years of observation. The results of this study indicate that various factors associated with allograft donors may influence graft function.

  3. An audit of the global carbon budget: identifying and reducing sources of uncertainty

    NASA Astrophysics Data System (ADS)

    Ballantyne, A. P.; Tans, P. P.; Marland, G.; Stocker, B. D.

    2012-12-01

    Uncertainties in our carbon accounting practices may limit our ability to objectively verify emission reductions on regional scales. Furthermore, uncertainties in the global C budget must be reduced to benchmark Earth System Models that incorporate carbon-climate interactions. Here we present an audit of the global C budget in which we try to identify the sources of uncertainty for its major terms. The atmospheric growth rate of CO2 has increased significantly over the last 50 years, while the uncertainty in calculating the global atmospheric growth rate has been reduced from 0.4 ppm/yr to 0.2 ppm/yr (95% confidence). Although we have greatly reduced global CO2 growth rate uncertainties, there remain regions, such as the Southern Hemisphere, Tropics, and Arctic, where changes in regional sources/sinks will remain difficult to detect without additional observations. Increases in fossil fuel (FF) emissions are the primary factor driving the increase in the global CO2 growth rate; however, our confidence in FF emission estimates has actually gone down. Based on a comparison of multiple estimates, FF emissions have increased from 2.45 ± 0.12 PgC/yr in 1959 to 9.40 ± 0.66 PgC/yr in 2010. Major sources of increasing FF emission uncertainty are increased emissions from emerging economies, such as China and India, as well as subtle differences in accounting practices. Lastly, we evaluate emission estimates from Land Use Change (LUC). Although relative errors in emission estimates from LUC are quite high (2 sigma ~ 50%), LUC emissions have remained fairly constant in recent decades. We evaluate the three commonly used approaches to estimating LUC emissions (bookkeeping, satellite imagery, and model simulations) to identify their main sources of error and their ability to detect net emissions from LUC.

  4. Analysis of percent density estimates from digital breast tomosynthesis projection images

    NASA Astrophysics Data System (ADS)

    Bakic, Predrag R.; Kontos, Despina; Zhang, Cuiping; Yaffe, Martin J.; Maidment, Andrew D. A.

    2007-03-01

    Women with dense breasts have an increased risk of breast cancer. Breast density is typically measured as the percent density (PD), the percentage of non-fatty (i.e., dense) tissue in breast images. Mammographic PD estimates vary, in part, due to the projective nature of mammograms. Digital breast tomosynthesis (DBT) is a novel radiographic method in which 3D images of the breast are reconstructed from a small number of projection (source) images acquired at different positions of the x-ray focus. DBT provides superior visualization of breast tissue and has improved sensitivity and specificity compared to mammography. Our long-term goal is to test the hypothesis that PD obtained from DBT is superior in estimating cancer risk compared with other modalities. As a first step, we have analyzed the PD estimates from DBT source projections, since the results are independent of the reconstruction method. We estimated PD from MLO mammograms (PD_M) and from individual DBT projections (PD_T). We observed good agreement between PD_M and PD_T from the central projection images of 40 women. This suggests that variations in breast positioning, dose, and scatter between mammography and DBT do not negatively affect PD estimation. The PD_T estimated from individual DBT projections of nine women varied with the angle between the projections. This variation is caused by the 3D arrangement of the breast dense tissue and the acquisition geometry.
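
    At its core, PD is a ratio of segmented areas. The sketch below assumes binary masks are already available from some segmentation step; it is the definition in code, not the authors' pipeline.

        import numpy as np

        def percent_density(dense_mask, breast_mask):
            """PD (%) = dense-tissue area / total breast area, from binary masks."""
            return 100.0 * np.count_nonzero(dense_mask) / np.count_nonzero(breast_mask)

        breast = np.ones((100, 100), dtype=bool)     # placeholder segmentation
        dense = np.zeros_like(breast)
        dense[:30] = True                            # top 30 rows "dense"
        print(percent_density(dense, breast))        # 30.0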

  5. Pesticides exposure assessment of kettleman city using the industrial source complex short-term model version 3.

    PubMed

    Tao, Jing; Barry, Terrell; Segawa, Randy; Neal, Rosemary; Tuli, Atac

    2013-01-01

    Kettleman City, California, reported a higher than expected number of birth defect cases between 2007 and 2010, raising the concern of the community and government agencies. A pesticide exposure evaluation was conducted as part of a complete assessment of community chemical exposure. Nineteen pesticides that potentially cause birth defects were investigated. The Industrial Source Complex Short-Term Model Version 3 (ISCST3) was used to estimate off-site air concentrations associated with pesticide applications within 8 km of the community from late 2006 to 2009. Health screening levels were designed to indicate potential health effects and were used for preliminary health evaluations of the estimated air concentrations. A tiered approach was adopted. The first tier modeled simple, hypothetical worst-case situations for each of the 19 pesticides. The second tier modeled specific applications of the pesticides whose estimated concentrations exceeded health screening levels in the first tier. The pesticide use report database of the California Department of Pesticide Regulation provided application information. Weather input data were summarized from the measurements of a local weather station in the California Irrigation Management Information System. The ISCST3 modeling results showed that during the target period, only two application days of one pesticide (methyl isothiocyanate) produced air concentration estimates above the health screening level for developmental effects at the boundary of Kettleman City. These results suggest that the likelihood of birth defects caused by pesticide exposure was low.

  6. Developing a comprehensive time series of GDP per capita for 210 countries from 1950 to 2015

    PubMed Central

    2012-01-01

    Background: Income has been extensively studied and utilized as a determinant of health. There are several sources of income expressed as gross domestic product (GDP) per capita, but there are no time series that are complete for the years between 1950 and 2015 for the 210 countries for which data exist. It is in the interest of population health research to establish a global time series that is complete from 1950 to 2015. Methods: We collected GDP per capita estimates expressed in either constant US dollar terms or international dollar terms (corrected for purchasing power parity) from seven sources. We applied several stages of models, including ordinary least-squares regressions and mixed effects models, to complete each of the seven source series from 1950 to 2015. The three US dollar and four international dollar series were each averaged to produce two new GDP per capita series. Results and discussion: Nine complete series from 1950 to 2015 for 210 countries are available for use. These series can serve various analytical purposes and can illustrate myriad economic trends and features. The derivation of the two new series allows researchers to avoid any series-specific biases that may exist. The modeling approach used is flexible and will allow for yearly updating as new estimates are produced by the source series. Conclusion: GDP per capita is a necessary tool in population health research, and our development and implementation of a new method has allowed for the most comprehensive known time series to date. PMID:22846561
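
    A minimal sketch of the completion idea, not the paper's actual multi-stage pipeline: here each source series is extended over 1950-2015 with a simple OLS fit of log GDP per capita on year, and the completed series are then averaged into a composite. The data are invented.

        import numpy as np
        import pandas as pd

        def complete_series(s, years=range(1950, 2016)):
            """Fill missing years with an OLS fit of log(GDP pc) on year."""
            s = s.reindex(years).astype(float)
            known = s.dropna()
            slope, intercept = np.polyfit(known.index, np.log(known.values), 1)
            fitted = np.exp(intercept + slope * np.asarray(years))
            return s.fillna(pd.Series(fitted, index=years))

        # Two invented sources observing one country only over 1960-2000;
        # the second source runs systematically 10% higher.
        obs = pd.Series([1000 * 1.02 ** (y - 1960) for y in range(1960, 2001)],
                        index=range(1960, 2001))
        composite = pd.concat(
            [complete_series(obs), complete_series(obs * 1.1)], axis=1
        ).mean(axis=1)
        print(composite.loc[[1950, 1985, 2015]].round(1))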

  7. A Parameter Identification Method for Helicopter Noise Source Identification and Physics-Based Semi-Empirical Modeling

    NASA Technical Reports Server (NTRS)

    Greenwood, Eric, II; Schmitz, Fredric H.

    2010-01-01

    A new physics-based parameter identification method for rotor harmonic noise sources is developed using an acoustic inverse simulation technique. This method identifies individual rotor harmonic noise sources and characterizes them in terms of their individual non-dimensional governing parameters. It is applied to both wind tunnel measurements and ground noise measurements of two-bladed rotors. The method is shown to match the parametric trends of main rotor Blade-Vortex Interaction (BVI) noise, allowing accurate estimates of BVI noise at other operating conditions to be made from a small number of measurements.

  8. Long-term trends of ambient particulate matter emission source contributions and the accountability of control strategies in Hong Kong over 1998-2008

    NASA Astrophysics Data System (ADS)

    Yuan, Zibing; Yadav, Varun; Turner, Jay R.; Louie, Peter K. K.; Lau, Alexis Kai Hon

    2013-09-01

    Despite extensive emission control measures targeting motor vehicles and to a lesser extent other sources, annual-average PM10 mass concentrations in Hong Kong have remained relatively constant for the past several years and for some air quality metrics, such as the frequency of poor visibility days, conditions have degraded. The underlying drivers for these long-term trends were examined by performing source apportionment on eleven years (1998-2008) of data for seven monitoring sites in the Hong Kong PM10 chemical speciation network. Nine factors were resolved using Positive Matrix Factorization. These factors were assigned to emission source categories that were classified as local (operationally defined as within the Hong Kong Special Administrative Region) or non-local based on temporal and spatial patterns in the source contribution estimates. This data-driven analysis provides strong evidence that local controls on motor vehicle emissions have been effective in reducing motor vehicle-related ambient PM10 burdens with annual-average contributions at neighborhood- and larger-scale monitoring stations decreasing by ~6 μg m-3 over the eleven-year period. However, this improvement has been offset by an increase in annual-average contributions from non-local contributions, especially secondary sulfate and nitrate, of ~8 μg m-3 over the same time period. As a result, non-local source contributions to urban-scale PM10 have increased from 58% in 1998 to 70% in 2008. Most of the motor vehicle-related decrease and non-local source driven increase occurred over the period 1998-2004 with more modest changes thereafter. Non-local contributions increased most dramatically for secondary sulfate and secondary nitrate factors and thus combustion-related control strategies, including but not limited to power plants, are needed for sources located in the Pearl River Delta and more distant regions to improve air quality conditions in Hong Kong. PMF-resolved source contribution estimates were also used to examine differential contributions of emission source categories during high PM episodes compared to study-average behavior. While contributions from all source categories increased to some extent on high PM days, the increases were disproportionately high for the non-local sources. Thus, controls on emission sources located outside the Hong Kong Special Administrative Region will be needed to effectively decrease the frequency and severity of high PM episodes.
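
    PMF itself weights each observation by its measurement uncertainty; as a rough unweighted analog, scikit-learn's non-negative matrix factorization below factors a synthetic (samples x species) concentration matrix into nine non-negative source contributions and profiles. All data are simulated.

        import numpy as np
        from sklearn.decomposition import NMF

        # Unweighted analog of PMF: X ~ G @ F with G, F non-negative.
        rng = np.random.default_rng(0)
        n_samples, n_species, n_factors = 365, 20, 9
        G_true = rng.gamma(2.0, 1.0, (n_samples, n_factors))
        F_true = rng.gamma(2.0, 1.0, (n_factors, n_species))
        X = G_true @ F_true + rng.normal(0, 0.1, (n_samples, n_species)).clip(0)

        model = NMF(n_components=n_factors, init="nndsvda", max_iter=500,
                    random_state=0)
        G = model.fit_transform(X)   # source contribution time series
        F = model.components_        # source (factor) chemical profiles
        print(G.shape, F.shape)      # (365, 9) (9, 20)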

  9. Explosion Source Characteristics in Frozen and Unfrozen Rock

    DTIC Science & Technology

    2008-09-30

    ...Alaska in August 2006 to provide empirical data on seismically estimated yield from explosions in frozen rock. Laboratory studies have demonstrated that ... can alter seismic yield. Central Alaska has abrupt lateral boundaries in discontinuous permafrost, and we detonated 3 shots in frozen, saturated rock... Subject terms: seismic attenuation, seismic propagation, seismic characterization.

  10. Geo-social media as a proxy for hydrometeorological data for streamflow estimation and to improve flood monitoring

    NASA Astrophysics Data System (ADS)

    Restrepo-Estrada, Camilo; de Andrade, Sidgley Camargo; Abe, Narumi; Fava, Maria Clara; Mendiondo, Eduardo Mario; de Albuquerque, João Porto

    2018-02-01

    Floods are one of the most devastating types of worldwide disasters in terms of human, economic, and social losses. If authoritative data are scarce, or unavailable for some periods, other sources of information are required to improve streamflow estimation and early flood warnings. Georeferenced social media messages are increasingly being regarded as an alternative source of information for coping with flood risks. However, existing studies have mostly concentrated on the links between geo-social media activity and flooded areas. Thus, there is still a gap in research with regard to the use of social media as a proxy for rainfall-runoff estimations and flood forecasting. To address this, we propose using a transformation function that creates a proxy variable for rainfall by analysing geo-social media messages and rainfall measurements from authoritative sources, which are later incorporated within a hydrological model for streamflow estimation. We found that the combined use of official rainfall values with the social media proxy variable as input for the Probability Distributed Model (PDM) improved streamflow simulations for flood monitoring. The combination of authoritative sources and transformed geo-social media data during flood events achieved a 71% degree of accuracy and a 29% underestimation rate in a comparison with real streamflow measurements. This is a significant improvement on the respective values of 39% and 58% achieved when only authoritative data were used for the modelling. This result is clear evidence of the potential of derived geo-social media data as a proxy for environmental variables to improve flood early-warning systems.
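
    A minimal sketch of the proxy idea, not the paper's actual transformation function: flood-related message counts are linearly rescaled to the range of the gauge record and used only where authoritative rainfall is missing. All arrays are invented.

        import numpy as np

        def rainfall_proxy(msg_counts, rain_obs):
            counts = np.asarray(msg_counts, float)
            rain = np.asarray(rain_obs, float)
            valid = ~np.isnan(rain)
            # Crude linear mapping from message counts to the rainfall range.
            scale = rain[valid].max() / max(counts.max(), 1.0)
            proxy = counts * scale
            return np.where(valid, rain, proxy)  # gauge data wins when present

        rain = np.array([0.0, 2.5, np.nan, np.nan, 1.0])   # mm/h, gaps at hours 2-3
        msgs = np.array([0, 5, 12, 8, 2])                  # geotagged messages/h
        print(rainfall_proxy(msgs, rain))                  # gaps filled from messages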

  11. Demand for Long-Term Care Insurance in China.

    PubMed

    Wang, Qun; Zhou, Yi; Ding, Xinrui; Ying, Xiaohua

    2017-12-22

    The aim of this study was to estimate willingness to pay (WTP) for long-term care insurance (LTCI) and to explore the determinants of demand for LTCI in China. We collected data from a household survey conducted in Qinghai and Zhejiang on a sample of 1842 households. We relied on contingent valuation methods to elicit the demand for LTCI and on random effects logistic regression to analyze the factors associated with the demand for LTCI. Complementarily, we used document analysis to compare the LTCI designed in this study with the current LTCI policies in the pilot cities. More than 90% of the respondents expressed a willingness to buy LTCI. The median WTP for LTCI was estimated at 370.14 RMB/year, accounting for 2.29% of average annual per capita disposable income. Price, age, education status, and income were significantly associated with demand for LTCI. Most pilot cities were found to rely mainly on Urban Employees Basic Medical Insurance funds as the financing source for LTCI. Considering that financing is one of the greatest challenges in the development of China's LTCI, we suggest that policy makers consider individual contributions as an important and feasible source of financing for LTCI.

  12. Comparison of SOC estimates and uncertainties from aerosol chemical composition and gas phase data in Atlanta

    NASA Astrophysics Data System (ADS)

    Pachon, Jorge E.; Balachandran, Sivaraman; Hu, Yongtao; Weber, Rodney J.; Mulholland, James A.; Russell, Armistead G.

    2010-10-01

    In the Southeastern US, organic carbon (OC) comprises about 30% of the PM2.5 mass. A large fraction of OC is estimated to be of secondary origin. Long-term estimates of secondary organic carbon (SOC) and their uncertainties are necessary in the evaluation of air quality policy effectiveness and in epidemiologic studies. Four methods to estimate SOC and the respective uncertainties are compared utilizing PM2.5 chemical composition and gas phase data available in Atlanta from 1999 to 2007. The elemental carbon (EC) tracer and the regression methods, which rely on the use of tracer species of primary and secondary OC formation, provided intermediate estimates of SOC as 30% of OC. The other two methods, chemical mass balance (CMB) and positive matrix factorization (PMF), solve mass balance equations to estimate primary and secondary fractions based on source profiles and statistically derived common factors, respectively. CMB had the highest estimate of SOC (46% of OC) while PMF led to the lowest (26% of OC). The comparison of SOC uncertainties, estimated based on propagation of errors, showed the regression method to have the lowest uncertainty among the four methods. We compared the estimates with the water-soluble fraction of the OC, which has been suggested as a surrogate of SOC when biomass burning is negligible, and found a similar trend with SOC estimates from the regression method. The regression method also showed the strongest correlation with daily SOC estimates from CMB using molecular markers. The regression method thus shows advantages over the other methods in the calculation of a long-term series of SOC estimates.
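
    Of the four methods, the EC tracer method is simple enough to show in a few lines: EC is taken as a tracer for primary combustion OC, and OC in excess of an assumed primary OC/EC ratio is attributed to secondary formation. The concentrations and the primary ratio below are invented.

        import numpy as np

        oc = np.array([4.2, 5.1, 6.8, 3.9])   # ug C m-3, total organic carbon
        ec = np.array([1.1, 1.3, 1.2, 1.0])   # ug C m-3, elemental carbon
        oc_ec_primary = 2.0                    # assumed primary OC/EC ratio

        # Secondary OC = total OC minus inferred primary OC, floored at zero.
        soc = np.clip(oc - oc_ec_primary * ec, 0.0, None)
        print(soc)                   # [2.  2.5 4.4 1.9]
        print(soc.sum() / oc.sum())  # SOC fraction of OC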

  13. Joint Application of Concentrations and Isotopic Signatures to Investigate the Global Atmospheric Carbon Monoxide Budget: Inverse Modeling Approach

    NASA Astrophysics Data System (ADS)

    Park, K.; Emmons, L. K.; Mak, J. E.

    2007-12-01

    Carbon monoxide is not only an important component for determining the atmospheric oxidizing capacity but also a key trace gas in the atmospheric chemistry of the Earth's background environment. The global CO cycle and its change are closely related to both the change of the CO mixing ratio and the change of source strength. Previously, most top-down estimation techniques for the global CO budget have applied CO concentrations alone. Since CO from certain sources has a unique isotopic signature, its isotopes provide additional information to constrain its sources. Thus, coupling the concentration and isotope information makes it possible to tightly constrain the CO flux by source and allows better estimates of the global CO budget. MOZART4 (Model for Ozone And Related chemical Tracers), a 3-D global chemical transport model developed at NCAR, MPI for Meteorology, and NOAA/GFDL, is used to simulate the global CO concentration and its isotopic signature. A tracer version of MOZART4, which tags C16O and C18O from each region and each source, was also developed to track their contributions to the atmosphere efficiently. Based on the nine-year simulation results, we analyze the influence of each CO source on the isotopic signature and the concentration. The evaluations focus especially on the oxygen isotope of CO (δ18O), which has not yet been studied extensively. To validate the model performance, CO concentrations and isotopic signatures measured at MPI, NIWA, and our laboratory are compared to the modeled results. MOZART4 reproduced the observational data fairly well, especially in the mid- to high-latitude Northern Hemisphere. Bayesian inversion techniques have been used to estimate the global CO budget by combining observed and modeled CO concentrations; however, previous studies show significant differences in their estimates of CO source strengths. Because isotopic signatures are independent tracers that carry source information in addition to the CO mixing ratio, jointly applying the isotope and concentration information is expected to provide more precise optimization of the CO budget. Our accumulated long-term CO isotope measurements also lend greater confidence to the inversions. Beyond the general benefit of adding isotope data to the inverse modeling, each CO isotope offers a distinct advantage in top-down estimation: combustion sources such as fossil fuel use show δ18O values clearly different from those of natural sources, and the methane-derived source can be easily separated using δ13C. Inversions of the two major CO sources therefore respond with different sensitivity to the different isotopes. To maximize the strengths of using isotope data in the inverse modeling analysis, various coupling schemes combining [CO], δ18O, and δ13C have been investigated to enhance the credibility of the CO budget optimization.
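
    A toy version of why the isotopic signature adds leverage: with the total CO source and its δ18O both observed, two source strengths follow from a 2x2 mass balance instead of being underdetermined. The end-member signatures and observations below are invented, not values from the study.

        import numpy as np

        d18O_fossil, d18O_bio = 23.5, 0.0   # permil, assumed end-member signatures
        Q_total, d18O_obs = 100.0, 8.0      # Tg/yr and permil, assumed observations

        # Row 1: mass balance; row 2: isotope-weighted mass balance.
        A = np.array([[1.0, 1.0],
                      [d18O_fossil, d18O_bio]])
        b = np.array([Q_total, Q_total * d18O_obs])
        q_fossil, q_bio = np.linalg.solve(A, b)
        print(q_fossil, q_bio)  # ~34.0 and ~66.0 Tg/yr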

  15. #nowplaying Madonna: a large-scale evaluation on estimating similarities between music artists and between movies from microblogs.

    PubMed

    Schedl, Markus

    2012-01-01

    Different term weighting techniques such as [Formula: see text] or BM25 have been used intensively for manifold text-based information retrieval tasks. Their use for modeling term profiles for named entities, and the subsequent calculation of similarities between these named entities, has been studied to a much smaller extent. The recent trend of microblogging has made available massive amounts of information about almost every topic around the world. Microblogs therefore represent a valuable source for text-based named entity modeling. In this paper, we present a systematic and comprehensive evaluation of different term weighting measures, normalization techniques, query schemes, index term sets, and similarity functions for the task of inferring similarities between named entities, based on data extracted from microblog posts. We analyze several thousand combinations of choices for the above-mentioned dimensions, which influence the similarity calculation process, and we investigate how they impact the quality of the similarity estimates. Evaluation is performed using three real-world data sets: two collections of microblogs related to music artists and one related to movies. For the music collections, we present results of genre classification experiments using genre information from allmusic.com as a benchmark. For the movie collection, we present results of multi-class classification experiments using categories from IMDb as a benchmark. We show that microblogs can indeed be exploited to model named entity similarity with remarkable accuracy, provided the correct settings for the analyzed aspects are used. We further compare the results to those obtained when using Web pages as the data source.
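
    One point in the evaluated design space, sketched with scikit-learn: TF-IDF term profiles are built from the posts mentioning each entity and compared with cosine similarity. The three "artists" and their posts are invented placeholders.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        # One aggregated pseudo-document of posts per named entity (invented).
        posts_per_artist = {
            "artist_a": "synth pop dance hit single tour pop",
            "artist_b": "pop dance single remix club",
            "artist_c": "delta blues guitar slide acoustic",
        }
        names = list(posts_per_artist)
        profiles = TfidfVectorizer().fit_transform(posts_per_artist.values())
        sim = cosine_similarity(profiles)
        for i, a in enumerate(names):
            for j in range(i + 1, len(names)):
                print(a, names[j], round(float(sim[i, j]), 2))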

  16. Uncertainties for seismic moment tensors and applications to nuclear explosions, volcanic events, and earthquakes

    NASA Astrophysics Data System (ADS)

    Tape, C.; Alvizuri, C. R.; Silwal, V.; Tape, W.

    2017-12-01

    When considered as a point source, a seismic source can be characterized in terms of its origin time, hypocenter, moment tensor, and source time function. The seismologist's task is to estimate these parameters--and their uncertainties--from three-component ground motion recorded at irregularly spaced stations. We will focus on one portion of this problem: the estimation of the moment tensor and its uncertainties. With magnitude estimated separately, we are left with five parameters describing the normalized moment tensor. A lune of normalized eigenvalue triples can be used to visualize the two parameters (lune longitude and lune latitude) describing the source type, while the conventional strike, dip, and rake angles can be used to characterize the orientation. Slight modifications of these five parameters lead to a uniform parameterization of moment tensors--uniform in the sense that equal volumes in the coordinate domain of the parameterization correspond to equal volumes of moment tensors. For a moment tensor m that we have inferred from seismic data for an earthquake, we define P(V) to be the probability that the true moment tensor for the earthquake lies in the neighborhood of m that has fractional volume V. The average value of P(V) is then a measure of our confidence in our inference of m. The calculation of P(V) requires knowing both the probability P(w) and the fractional volume V(w) of the set of moment tensors within a given angular radius w of m. We apply this approach to several different data sets, including nuclear explosions from the Nevada Test Site, volcanic events from Uturuncu (Bolivia), and earthquakes. Several challenges remain: choosing an appropriate misfit function, handling time shifts between data and synthetic waveforms, and extending the uncertainty estimation to include more source parameters (e.g., hypocenter and source time function).
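
    In the notation above, these definitions fit together as follows (a restatement of the abstract's definitions, not an addition to them):

        P\bigl(V(w)\bigr) = P(w), \qquad \bar{P} = \int_{0}^{1} P(V)\, dV

    Under a completely uninformative posterior, P(V) = V and \bar{P} = 1/2, while \bar{P} approaching 1 indicates a tightly constrained m.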

  17. Remediation and its effect represented on long term monitoring data at a chlorinated ethenes contaminated site, Wonju, Korea

    NASA Astrophysics Data System (ADS)

    Lee, Seong-Sun; Lee, Seung Hyun; Lee, Kang-Kun

    2016-04-01

    Research on contamination by chlorinated ethenes such as trichloroethylene (TCE) at an industrial complex in Wonju, Korea, was carried out based on 17 rounds of groundwater quality data collected from 2009 to 2015. Remediation technologies such as soil vapor extraction, soil flushing, biostimulation, and pump-and-treat have been applied to eliminate the TCE contaminant sources and to prevent the migration of the TCE plume from remediation target zones to the groundwater discharge area, a stream. The remediation efficiency of the remedial actions was evaluated by tracing a time series of plume evolution and temporal mass discharge at three transects (Source, Transect-1, Transect-2) assigned along the groundwater flow path. Also, based on the long-term monitoring data, the dissolved TCE concentration and the mass of residual TCE in the initial stage of disposal were estimated to evaluate the efficiency of in situ remediation. The results of temporal and spatial monitoring before the remedial actions showed that a TCE plume originating from the main and local source zones continued to be discharged to the stream. However, from the end of the intensive remedial actions of 2012-2013, the aqueous concentrations of the TCE plume at and around the main source areas decreased significantly. In particular, during the intensive remediation period, the average mass discharge at the source transect decreased from an early value of 26.58 g/day to 4.99 g/day. The estimated initial dissolved concentration and residual mass of TCE in the initial stage of disposal decreased rapidly after the intensive remedial action in 2013 and are expected to decrease continuously from the end of the remedial actions to 2020. This study demonstrates that long-term monitoring data are useful in assessing the effectiveness of remedial actions at sites contaminated with chlorinated ethenes. Acknowledgements: This project is supported by the Korea Ministry of Environment under "The GAIA Project (173-092-009)" and "R&D Project on Environmental Management of Geologic CO2 storage" from the KEITI (Project number: 2014001810003).

  18. Accuracy of advanced cancer patients' life expectancy estimates: The role of race and source of life expectancy information.

    PubMed

    Trevino, Kelly M; Zhang, Baohui; Shen, Megan J; Prigerson, Holly G

    2016-06-15

    The objective of this study was to examine the source of advanced cancer patients' information about their prognosis and determine whether this source of information could explain racial disparities in the accuracy of patients' life expectancy estimates (LEEs). Coping With Cancer was a prospective, longitudinal, multisite study of terminally ill cancer patients followed until death. In structured interviews, patients reported their LEEs and the sources of these estimates (ie, medical providers, personal beliefs, religious beliefs, and other). The accuracy of LEEs was calculated through a comparison of patients' self-reported LEEs with their actual survival. The sample for this analysis included 229 patients: 31 black patients and 198 white patients. Only 39.30% of the patients estimated their life expectancy within 12 months of their actual survival. Black patients were more likely to have an inaccurate LEE than white patients. A minority of the sample (18.3%) reported that a medical provider was the source of their LEEs; none of the black patients (0%) based their LEEs on a medical provider. Black race remained a significant predictor of an inaccurate LEE, even after the analysis had been controlled for sociodemographic characteristics and the source of LEEs. The majority of advanced cancer patients have an inaccurate understanding of their life expectancy. Black patients with advanced cancer are more likely to have an inaccurate LEE than white patients. Medical providers are not the source of information for LEEs for most advanced cancer patients and especially for black patients. The source of LEEs does not explain racial differences in LEE accuracy. Additional research into the mechanisms underlying racial differences in prognostic understanding is needed. Cancer 2016;122:1905-12. © 2016 The Authors. Cancer published by Wiley Periodicals, Inc. on behalf of American Cancer Society. This is an open access article under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs License, which permits use and distribution in any medium, provided the original work is properly cited, the use is non-commercial and no modifications or adaptations are made.

  19. Application of an improved spectral decomposition method to examine earthquake source scaling in Southern California

    NASA Astrophysics Data System (ADS)

    Trugman, Daniel T.; Shearer, Peter M.

    2017-04-01

    Earthquake source spectra contain fundamental information about the dynamics of earthquake rupture. However, the inherent tradeoffs in separating source and path effects, when combined with limitations in recorded signal bandwidth, make it challenging to obtain reliable source spectral estimates for large earthquake data sets. We present here a stable and statistically robust spectral decomposition method that iteratively partitions the observed waveform spectra into source, receiver, and path terms. Unlike previous methods of its kind, our new approach provides formal uncertainty estimates and does not assume self-similar scaling in earthquake source properties. Its computational efficiency allows us to examine large data sets (tens of thousands of earthquakes) that would be impractical to analyze using standard empirical Green's function-based approaches. We apply the spectral decomposition technique to P wave spectra from five areas of active contemporary seismicity in Southern California: the Yuha Desert, the San Jacinto Fault, and the Big Bear, Landers, and Hector Mine regions of the Mojave Desert. We show that the source spectra are generally consistent with an increase in median Brune-type stress drop with seismic moment but that this observed deviation from self-similar scaling is both model dependent and varies in strength from region to region. We also present evidence for significant variations in median stress drop and stress drop variability on regional and local length scales. These results both contribute to our current understanding of earthquake source physics and have practical implications for the next generation of ground motion prediction assessments.
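
    A stripped-down sketch of the iterative partitioning (the path term and the method's uncertainty machinery are omitted): in log space each observed spectrum is modeled as a source term plus a receiver term, and alternating residual averages converge to the least-squares split, with a zero-mean constraint on receiver terms resolving the additive trade-off. All data are synthetic.

        import numpy as np

        rng = np.random.default_rng(1)
        n_eq, n_sta, n_frq = 50, 10, 30
        src_true = rng.normal(0.0, 1.0, (n_eq, n_frq))
        rec_true = rng.normal(0.0, 0.3, (n_sta, n_frq))
        obs = (src_true[:, None, :] + rec_true[None, :, :]
               + rng.normal(0.0, 0.1, (n_eq, n_sta, n_frq)))  # log spectra

        src = np.zeros((n_eq, n_frq))
        rec = np.zeros((n_sta, n_frq))
        for _ in range(20):
            rec = (obs - src[:, None, :]).mean(axis=0)
            rec -= rec.mean(axis=0)          # zero-mean receiver constraint
            src = (obs - rec[None, :, :]).mean(axis=1)

        resid = obs - src[:, None, :] - rec[None, :, :]
        print(np.sqrt((resid ** 2).mean()))  # ~0.1, i.e. the noise level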

  20. What is the impact of different VLBI analysis setups of the tropospheric delay on precipitable water vapor trends?

    NASA Astrophysics Data System (ADS)

    Balidakis, Kyriakos; Nilsson, Tobias; Heinkelmann, Robert; Glaser, Susanne; Zus, Florian; Deng, Zhiguo; Schuh, Harald

    2017-04-01

    The quality of the parameters estimated by global navigation satellite systems (GNSS) and very long baseline interferometry (VLBI) is degraded by erroneous meteorological observations applied to model the propagation delay in the electrically neutral atmosphere. For early VLBI sessions with poor geometry, unsuitable constraints imposed on the a priori tropospheric gradients are an additional source of difficulty in VLBI analysis. Climate change indicators deduced from geodetic analysis, such as long-term precipitable water vapor (PWV) trends, are therefore strongly affected. In this contribution we investigate the impact of different modeling and parameterization of the tropospheric propagation delay on estimates of long-term PWV trends from geodetic VLBI analysis. We address the influence of the meteorological data source, and of the a priori non-hydrostatic delays and gradients employed in the VLBI processing, on the estimated PWV trends. In particular, we assess the effect of employing temperature and pressure from (i) homogenized in situ observations, (ii) the model levels of the ERA Interim reanalysis numerical weather model, and (iii) our own blind model in the style of GPT2w with enhanced parameterization, calculated using the latter data set. Furthermore, we utilize non-hydrostatic delays and gradients estimated from (i) a GNSS reprocessing at GeoForschungsZentrum Potsdam, rigorously considering tropospheric ties, and (ii) direct ray-tracing through ERA Interim, as additional observations. To evaluate the above, the least-squares module of the VieVS@GFZ VLBI software was appropriately modified. Additionally, we study the noise characteristics of the non-hydrostatic delays and gradients estimated from our VLBI and GNSS analyses as well as from ray-tracing. We have modified the Theil-Sen estimator to robustly deduce PWV trends from VLBI, GNSS, ray-tracing, and direct numerical integration in ERA Interim. We disseminate all our solutions in the latest Tropo-SINEX format.
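
    The unmodified Theil-Sen estimator (the abstract's modification is not reproduced here) is available in SciPy: the trend is the median of the slopes over all pairs of epochs, which makes it robust to outlying PWV values. The series below is synthetic.

        import numpy as np
        from scipy.stats import theilslopes

        rng = np.random.default_rng(2)
        t = np.arange(1990.0, 2015.0, 0.25)                       # epochs, years
        pwv = 15 + 0.02 * (t - t[0]) + rng.normal(0, 1, t.size)   # mm, synthetic
        pwv[::40] += 8                                            # inject outliers

        slope, intercept, lo, hi = theilslopes(pwv, t)
        print(f"PWV trend: {slope:.3f} mm/yr (95% CI {lo:.3f} to {hi:.3f})")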

  1. Bounding the Role of Black Carbon in the Climate System: A Scientific Assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bond, Tami C.; Doherty, Sarah J.; Fahey, D. W.

    2013-06-06

    Black carbon aerosol plays a unique and important role in Earth's climate system. Black carbon is a type of carbonaceous material with a unique combination of physical properties. Predominant sources are combustion related; namely, fossil fuels for transportation, solid fuels for industrial and residential uses, and open burning of biomass. Total global emissions of black carbon using bottom-up inventory methods are 7500 Gg yr-1 in the year 2000 with an uncertainty range of 2000 to 29000. This assessment provides an evaluation of black-carbon climate forcing that is comprehensive in its inclusion of all known and relevant processes and that is quantitative in providing best estimates and uncertainties of the main forcing terms: direct solar absorption; influence on liquid, mixed-phase, and ice clouds; and deposition on snow and ice. These effects are calculated with models, but when possible, they are evaluated with both microphysical measurements and field observations. Global atmospheric absorption attributable to black carbon is too low in many models and should be increased by about 60%. After this scaling, the best estimate for the industrial-era (1750 to 2005) direct radiative forcing of black carbon is +0.43 W m-2 with 90% uncertainty bounds of (+0.17, +0.68) W m-2. Total direct forcing by all black carbon sources in the present day is estimated as +0.49 (+0.20, +0.76) W m-2. Direct radiative forcing alone does not capture important rapid adjustment mechanisms. A framework is described and used for quantifying climate forcings and their rapid responses and feedbacks. The best estimate of industrial-era (1750 to 2005) climate forcing of black carbon through all forcing mechanisms is +0.77 W m-2 with 90% uncertainty bounds of +0.06 to +1.53 W m-2. Thus, there is a 96% probability that black carbon emissions, independent of co-emitted species, have a positive forcing and warm the climate. With a value of +0.77 W m-2, black carbon is likely the second most important individual climate-forcing agent in the industrial era, following carbon dioxide. Sources that emit black carbon also emit other short-lived species that may either cool or warm climate. Climate forcings from co-emitted species are estimated and used in the framework described herein. When the principal effects of co-emissions, including cooling agents such as sulfur dioxide, are included in net forcing, energy-related sources (fossil fuel and biofuel) have a net climate forcing of +0.004 (-0.62 to +0.57) W m-2 during the first year after emission. For a few of these sources, such as diesel engines and possibly residential biofuels, warming is strong enough that eliminating all emissions from these sources would reduce net climate forcing (i.e., produce cooling). When open burning emissions, which emit high levels of organic matter, are included in the total, the best estimate of net industrial-era climate forcing by all black-carbon-rich sources becomes slightly negative (-0.08 W m-2 with 90% uncertainty bounds of -1.23 to +0.81 W m-2). The uncertainties in net climate forcing from black-carbon-rich sources are substantial, largely due to lack of knowledge about cloud interactions with both black carbon and co-emitted organic carbon. In prioritizing potential black-carbon mitigation actions, non-science factors, such as technical feasibility, costs, policy design, and implementation feasibility, play important roles. The major sources of black carbon are presently in different stages with regard to the feasibility of near-term mitigation. This assessment, by evaluating the large number and complexity of the associated physical and radiative processes in black-carbon climate forcing, sets a baseline from which to improve future climate forcing estimates.

  2. Algae Biofuels Co-Location Assessment Tool for Canada

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2011-11-29

    The Algae Biofuels Co-Location Assessment Tool for Canada uses chemical stoichiometry to estimate nitrogen, phosphorus, and carbon atom availability from waste water and carbon dioxide emission streams, and the requirements for those same elements to produce a unit of algae. This information is then combined to find the limiting nutrient and to estimate the potential productivity associated with waste water and carbon dioxide sources. Output is visualized in terms of distributions or spatial locations. Distances between points of interest in the model are calculated using the great circle distance equation, and the smallest distances are found by an exhaustive search-and-sort algorithm.
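
    The great circle distance equation can be implemented, for example, in its haversine form; the sketch below is a generic implementation, not code from the tool itself.

        import math

        def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
            """Great-circle distance between two lat/lon points (haversine)."""
            p1, p2 = math.radians(lat1), math.radians(lat2)
            dp = math.radians(lat2 - lat1)
            dl = math.radians(lon2 - lon1)
            a = (math.sin(dp / 2) ** 2
                 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
            return 2 * radius_km * math.asin(math.sqrt(a))

        # Example: Vancouver to Toronto, roughly 3360 km.
        print(round(great_circle_km(49.25, -123.12, 43.65, -79.38)))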

  3. Measurement of Fukushima Aerosol Debris in Sequim and Richland, WA and Ketchikan, AK

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miley, Harry S.; Bowyer, Ted W.; Engelmann, Mark D.

    2013-05-01

    Aerosol collections were initiated at several locations by PNNL shortly after the Great East Japan Earthquake of March 2011. Aerosol samples were transferred to laboratory high-resolution gamma spectrometers for analysis. Similar to treaty monitoring stations operating across the Northern Hemisphere, iodine and other isotopes that could be volatilized at high temperature were detected. Though these locations are not far apart, they have significant variations with respect to water, mountain-range placement, and local topography. Variation in the computed source terms will be shown to bound the variability of this approach to source estimation.

  4. Generalizing Observational Study Results: Applying Propensity Score Methods to Complex Surveys

    PubMed Central

    DuGoff, Eva H; Schuler, Megan; Stuart, Elizabeth A

    2014-01-01

    Objective: To provide a tutorial for using propensity score methods with complex survey data. Data Sources: Simulated data and the 2008 Medical Expenditure Panel Survey. Study Design: Using simulation, we compared the following methods for estimating the treatment effect: a naïve estimate (ignoring both survey weights and propensity scores), survey weighting, propensity score methods (nearest neighbor matching, weighting, and subclassification), and propensity score methods in combination with survey weighting. Methods are compared in terms of bias and 95 percent confidence interval coverage. In Example 2, we used these methods to estimate the effect on health care spending of having a generalist versus a specialist as a usual source of care. Principal Findings: In general, combining a propensity score method and survey weighting is necessary to achieve unbiased treatment effect estimates that are generalizable to the original survey target population. Conclusions: Propensity score methods are an essential tool for addressing confounding in observational studies. Ignoring survey weights may lead to results that are not generalizable to the survey target population. This paper clarifies the appropriate inferences for different propensity score methods and suggests guidelines for selecting an appropriate propensity score method based on a researcher's goal. PMID:23855598
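
    A minimal simulated sketch of the headline recommendation, propensity score weighting combined with survey weights; the variable names and data-generating process are invented.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(3)
        n = 5000
        x = rng.normal(size=(n, 2))                        # confounders
        svy_w = rng.uniform(0.5, 3.0, n)                   # survey design weights
        treat = rng.binomial(1, 1 / (1 + np.exp(-x @ np.array([0.8, -0.5]))))
        y = 2.0 * treat + x @ np.array([1.0, 1.0]) + rng.normal(size=n)  # effect = 2

        # Inverse-probability-of-treatment weights from estimated propensities,
        # multiplied by the survey weights so the estimate generalizes to the
        # survey target population.
        ps = LogisticRegression().fit(x, treat).predict_proba(x)[:, 1]
        iptw = np.where(treat == 1, 1 / ps, 1 / (1 - ps))
        w = iptw * svy_w
        ate = (np.average(y[treat == 1], weights=w[treat == 1])
               - np.average(y[treat == 0], weights=w[treat == 0]))
        print(round(ate, 2))  # close to 2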

  6. Gridded National Inventory of U.S. Methane Emissions

    NASA Technical Reports Server (NTRS)

    Maasakkers, Joannes D.; Jacob, Daniel J.; Sulprizio, Melissa P.; Turner, Alexander J.; Weitz, Melissa; Wirth, Tom; Hight, Cate; DeFigueiredo, Mark; Desai, Mausami; Schmeltz, Rachel

    2016-01-01

    We present a gridded inventory of US anthropogenic methane emissions with 0.1 deg x 0.1 deg spatial resolution, monthly temporal resolution, and detailed scale-dependent error characterization. The inventory is designed to be consistent with the 2016 US Environmental Protection Agency (EPA) Inventory of US Greenhouse Gas Emissions and Sinks (GHGI) for 2012. The EPA inventory is available only as national totals for different source types. We use a wide range of databases at the state, county, local, and point source level to disaggregate the inventory and allocate the spatial and temporal distribution of emissions for individual source types. Results show large differences with the EDGAR v4.2 global gridded inventory commonly used as a priori estimate in inversions of atmospheric methane observations. We derive grid-dependent error statistics for individual source types from comparison with the Environmental Defense Fund (EDF) regional inventory for Northeast Texas. These error statistics are independently verified by comparison with the California Greenhouse Gas Emissions Measurement (CALGEM) grid-resolved emission inventory. Our gridded, time-resolved inventory provides an improved basis for inversion of atmospheric methane observations to estimate US methane emissions and interpret the results in terms of the underlying processes.

  7. Gridded national inventory of U.S. methane emissions

    DOE PAGES

    Maasakkers, Joannes D.; Jacob, Daniel J.; Sulprizio, Melissa P.; ...

    2016-11-16

    Here we present a gridded inventory of US anthropogenic methane emissions with 0.1° × 0.1° spatial resolution, monthly temporal resolution, and detailed scale-dependent error characterization. The inventory is designed to be consistent with the 2016 US Environmental Protection Agency (EPA) Inventory of US Greenhouse Gas Emissions and Sinks (GHGI) for 2012. The EPA inventory is available only as national totals for different source types. We use a wide range of databases at the state, county, local, and point source level to disaggregate the inventory and allocate the spatial and temporal distribution of emissions for individual source types. Results show large differences with the EDGAR v4.2 global gridded inventory commonly used as a priori estimate in inversions of atmospheric methane observations. We derive grid-dependent error statistics for individual source types from comparison with the Environmental Defense Fund (EDF) regional inventory for Northeast Texas. These error statistics are independently verified by comparison with the California Greenhouse Gas Emissions Measurement (CALGEM) grid-resolved emission inventory. Finally, our gridded, time-resolved inventory provides an improved basis for inversion of atmospheric methane observations to estimate US methane emissions and interpret the results in terms of the underlying processes.

  8. Gridded National Inventory of U.S. Methane Emissions.

    PubMed

    Maasakkers, Joannes D; Jacob, Daniel J; Sulprizio, Melissa P; Turner, Alexander J; Weitz, Melissa; Wirth, Tom; Hight, Cate; DeFigueiredo, Mark; Desai, Mausami; Schmeltz, Rachel; Hockstad, Leif; Bloom, Anthony A; Bowman, Kevin W; Jeong, Seongeun; Fischer, Marc L

    2016-12-06

    We present a gridded inventory of US anthropogenic methane emissions with 0.1° × 0.1° spatial resolution, monthly temporal resolution, and detailed scale-dependent error characterization. The inventory is designed to be consistent with the 2016 US Environmental Protection Agency (EPA) Inventory of US Greenhouse Gas Emissions and Sinks (GHGI) for 2012. The EPA inventory is available only as national totals for different source types. We use a wide range of databases at the state, county, local, and point source level to disaggregate the inventory and allocate the spatial and temporal distribution of emissions for individual source types. Results show large differences with the EDGAR v4.2 global gridded inventory commonly used as a priori estimate in inversions of atmospheric methane observations. We derive grid-dependent error statistics for individual source types from comparison with the Environmental Defense Fund (EDF) regional inventory for Northeast Texas. These error statistics are independently verified by comparison with the California Greenhouse Gas Emissions Measurement (CALGEM) grid-resolved emission inventory. Our gridded, time-resolved inventory provides an improved basis for inversion of atmospheric methane observations to estimate US methane emissions and interpret the results in terms of the underlying processes.
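
    Schematically, the disaggregation step allocates a national total for one source type across grid cells in proportion to an activity proxy (for example, county-level facility counts mapped to cells); the tiny grid and numbers below are invented.

        import numpy as np

        national_total = 2.4e9                        # kg CH4/yr, one source type
        proxy = np.array([[0.0, 3.0], [5.0, 2.0]])    # activity proxy per cell

        # Proportional allocation: cells share the national total by proxy weight.
        grid_emissions = national_total * proxy / proxy.sum()
        print(grid_emissions)           # kg CH4/yr per cell, sums to the total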

  9. Long-Term Stability of Radio Sources in VLBI Analysis

    NASA Technical Reports Server (NTRS)

    Engelhardt, Gerald; Thorandt, Volkmar

    2010-01-01

    Positional stability of radio sources is an important requirement for modeling only one source position for the complete span of VLBI data, presently more than 20 years. The stability of radio sources can be verified by analyzing time series of radio source coordinates. One approach is a statistical test for normal distribution of the residuals to the weighted mean for each radio source component of the time series. Systematic phenomena in the time series can thus be detected. Nevertheless, an inspection of rate estimates and weighted root-mean-square (WRMS) variations about the mean is also necessary. On the basis of the time series computed by the BKG group in the frame of the ICRF2 working group, 226 stable radio sources with an axis stability of 10 μas could be identified. They include 100 ICRF2 axes-defining sources, which are determined independently of the method applied in the ICRF2 working group. 29 stable radio sources with a source structure index of less than 3.0 can also be used to increase the number of 295 ICRF2 defining sources.
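
    A sketch of the normality screen described above for a single coordinate component, using SciPy's D'Agostino-Pearson test on the residuals about the weighted mean; the series and its scatter are synthetic.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)
        pos = rng.normal(0.0, 40e-6, 120)   # arcsec offsets, synthetic scatter
        sig = np.full(pos.size, 40e-6)      # formal errors

        wmean = np.average(pos, weights=1 / sig**2)
        resid = (pos - wmean) / sig         # normalized residuals
        stat, p = stats.normaltest(resid)   # tests for normal distribution
        print(f"p = {p:.2f} -> {'stable' if p > 0.05 else 'suspect'}")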

  10. Use of Satellite Observations for Long-Term Exposure Assessment of Global Concentrations of Fine Particulate Matter

    PubMed Central

    Martin, Randall V.; Brauer, Michael; Boys, Brian L.

    2014-01-01

    Background: More than a decade of satellite observations offers global information about the trend and magnitude of human exposure to fine particulate matter (PM2.5). Objective: In this study, we developed improved global exposure estimates of ambient PM2.5 mass and trend using PM2.5 concentrations inferred from multiple satellite instruments. Methods: We combined three satellite-derived PM2.5 sources to produce global PM2.5 estimates at about 10 km × 10 km from 1998 through 2012. For each source, we related total column retrievals of aerosol optical depth to near-ground PM2.5 using the GEOS–Chem chemical transport model to represent local aerosol optical properties and vertical profiles. We collected 210 global ground-based PM2.5 observations from the literature to evaluate our satellite-based estimates with values measured in areas other than North America and Europe. Results: We estimated that global population-weighted ambient PM2.5 concentrations increased 0.55 μg/m3/year (95% CI: 0.43, 0.67) (2.1%/year; 95% CI: 1.6, 2.6) from 1998 through 2012. Increasing PM2.5 in some developing regions drove this global change, despite decreasing PM2.5 in some developed regions. The estimated proportion of the population of East Asia living above the World Health Organization (WHO) Interim Target-1 of 35 μg/m3 increased from 51% in 1998–2000 to 70% in 2010–2012. In contrast, the North American proportion above the WHO Air Quality Guideline of 10 μg/m3 fell from 62% in 1998–2000 to 19% in 2010–2012. We found significant agreement between satellite-derived estimates and ground-based measurements outside North America and Europe (r = 0.81; n = 210; slope = 0.68). The low bias in satellite-derived estimates suggests that true global concentrations could be even greater. Conclusions: Satellite observations provide insight into global long-term changes in ambient PM2.5 concentrations. Satellite-derived estimates and ground-based PM2.5 observations from this study are available for public use. Citation: van Donkelaar A, Martin RV, Brauer M, Boys BL. 2015. Use of satellite observations for long-term exposure assessment of global concentrations of fine particulate matter. Environ Health Perspect 123:135–143; http://dx.doi.org/10.1289/ehp.1408646 PMID:25343779

  11. Recent Approaches to Estimate Associations Between Source-Specific Air Pollution and Health.

    PubMed

    Krall, Jenna R; Strickland, Matthew J

    2017-03-01

    Estimating health effects associated with source-specific exposure is important for better understanding how pollution impacts health and for developing policies to better protect public health. Although epidemiologic studies of sources can be informative, these studies are challenging to conduct because source-specific exposures (e.g., particulate matter from vehicles) often are not directly observed and must be estimated. We reviewed recent studies that estimated associations between pollution sources and health to identify methodological developments designed to address important challenges. Notable advances in epidemiologic studies of sources include approaches for (1) propagating uncertainty in source estimation into health effect estimates, (2) assessing regional and seasonal variability in emissions sources and source-specific health effects, and (3) addressing potential confounding in estimated health effects. Novel methodological approaches to address challenges in studies of pollution sources, particularly evaluation of source-specific health effects, are important for determining how source-specific exposure impacts health.

  12. Multi-Fidelity Uncertainty Propagation for Cardiovascular Modeling

    NASA Astrophysics Data System (ADS)

    Fleeter, Casey; Geraci, Gianluca; Schiavazzi, Daniele; Kahn, Andrew; Marsden, Alison

    2017-11-01

    Hemodynamic models are successfully employed in the diagnosis and treatment of cardiovascular disease with increasing frequency. However, their widespread adoption is hindered by our inability to account for uncertainty stemming from multiple sources, including boundary conditions, vessel material properties, and model geometry. In this study, we propose a stochastic framework which leverages three cardiovascular model fidelities: 3D, 1D and 0D models. 3D models are generated from patient-specific medical imaging (CT and MRI) of aortic and coronary anatomies using the SimVascular open-source platform, with fluid structure interaction simulations and Windkessel boundary conditions. 1D models consist of a simplified geometry automatically extracted from the 3D model, while 0D models are obtained from equivalent circuit representations of blood flow in deformable vessels. Multi-level and multi-fidelity estimators from Sandia's open-source DAKOTA toolkit are leveraged to reduce the variance in our estimated output quantities of interest while maintaining a reasonable computational cost. The performance of these estimators in terms of computational cost reductions is investigated for a variety of output quantities of interest, including global and local hemodynamic indicators. Sandia National Labs is a multimission laboratory managed and operated by NTESS, LLC, for the U.S. DOE under contract DE-NA0003525. Funding for this project provided by NIH-NIBIB R01 EB018302.
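
    A two-fidelity control-variate sketch of the variance-reduction idea: a cheap low-fidelity mean over many samples corrects a high-fidelity mean over few samples. The study uses DAKOTA's multilevel/multifidelity estimators across 3D/1D/0D models; the stand-in functions below merely play the roles of expensive and cheap models.

        import numpy as np

        rng = np.random.default_rng(4)

        def q_high(z):   # stand-in for an expensive high-fidelity output
            return np.sin(z) + 0.1 * z**2

        def q_low(z):    # stand-in for a cheap, correlated low-fidelity output
            return z - z**3 / 6 + 0.1 * z**2

        z_few = rng.normal(size=50)         # few affordable high-fidelity runs
        z_many = rng.normal(size=50_000)    # many cheap low-fidelity runs

        qh, ql = q_high(z_few), q_low(z_few)
        c = np.cov(qh, ql)
        alpha = c[0, 1] / c[1, 1]           # control-variate weight
        estimate = qh.mean() + alpha * (q_low(z_many).mean() - ql.mean())
        print(estimate)   # lower-variance estimate of E[q_high] than qh.mean()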

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, J.A.; Brasseur, G.P.; Zimmerman, P.R.

    Using the hydroxyl radical field calibrated to the methyl chloroform observations, the globally averaged release of methane and its spatial and temporal distribution were investigated. Two source function models of the spatial and temporal distribution of the flux of methane to the atmosphere were developed. The first model was based on the assumption that methane is emitted as a proportion of net primary productivity (NPP). With the average hydroxyl radical concentration fixed, the methane source term was computed as {approximately}623 Tg CH{sub 4}, giving an atmospheric lifetime for methane {approximately}8.3 years. The second model identified source regions for methane frommore » rice paddies, wetlands, enteric fermentation, termites, and biomass burning based on high-resolution land use data. This methane source distribution resulted in an estimate of the global total methane source of {approximately}611 Tg CH{sub 4}, giving an atmospheric lifetime for methane {approximately}8.5 years. The most significant difference between the two models were predictions of methane fluxes over China and South East Asia, the location of most of the world's rice paddies. Using a recent measurement of the reaction rate of hydroxyl radical and methane leads to estimates of the global total methane source for SF1 of {approximately}524 Tg CH{sub 4} giving an atmospheric lifetime of {approximately}10.0 years and for SF2{approximately}514 Tg CH{sub 4} yielding a lifetime of {approximately}10.2 years.« less

  14. Explanation of temporal clustering of tsunami sources using the epidemic-type aftershock sequence model

    USGS Publications Warehouse

    Geist, Eric L.

    2014-01-01

    Temporal clustering of tsunami sources is examined in terms of a branching process model. It previously was observed that there are more short interevent times between consecutive tsunami sources than expected from a stationary Poisson process. The epidemic‐type aftershock sequence (ETAS) branching process model is fitted to tsunami catalog events, using the earthquake magnitude of the causative event from the Centennial and Global Centroid Moment Tensor (CMT) catalogs and tsunami sizes above a completeness level as a mark to indicate that a tsunami was generated. The ETAS parameters are estimated using the maximum‐likelihood method. The interevent distribution associated with the ETAS model provides a better fit to the data than the Poisson model or other temporal clustering models. When tsunamigenic conditions (magnitude threshold, submarine location, dip‐slip mechanism) are applied to the Global CMT catalog, ETAS parameters are obtained that are consistent with those estimated from the tsunami catalog. In particular, the dip‐slip condition appears to result in a near zero magnitude effect for triggered tsunami sources. The overall consistency between results from the tsunami catalog and that from the earthquake catalog under tsunamigenic conditions indicates that ETAS models based on seismicity can provide the structure for understanding patterns of tsunami source occurrence. The fractional rate of triggered tsunami sources on a global basis is approximately 14%.
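
    For reference, the temporal ETAS conditional intensity in its standard form (the study fits a marked variant, and parameter naming varies by author; M_c denotes the completeness magnitude):

        \lambda(t) = \mu + \sum_{i:\, t_i < t} \frac{K\, e^{\alpha (m_i - M_c)}}{(t - t_i + c)^{p}}

    Here \mu is the background (Poisson) rate and K, \alpha, c, and p are triggering parameters estimated by maximum likelihood; the near-zero magnitude effect reported above for dip-slip sources corresponds to \alpha close to zero.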

  15. Prediction of Down-Gradient Impacts of DNAPL Source Depletion Using Tracer Techniques

    NASA Astrophysics Data System (ADS)

    Basu, N. B.; Fure, A. D.; Jawitz, J. W.

    2006-12-01

    Four simplified DNAPL source depletion models that have been discussed in the literature recently are evaluated for the prediction of long-term effects of source depletion under natural gradient flow. These models are simple in form (a power function equation is an example) but are shown here to serve as mathematical analogs to complex multiphase flow and transport simulators. One of the source depletion models, the equilibrium streamtube model, is shown to be relatively easily parameterized using non-reactive and reactive tracers. Non-reactive tracers are used to characterize the aquifer heterogeneity while reactive tracers are used to describe the mean DNAPL mass and its distribution. This information is then used in a Lagrangian framework to predict source remediation performance. In a Lagrangian approach the source zone is conceptualized as a collection of non-interacting streamtubes with hydrodynamic and DNAPL heterogeneity represented by the variation of the travel time and DNAPL saturation among the streamtubes. The travel time statistics are estimated from the non-reactive tracer data while the DNAPL distribution statistics are estimated from the reactive tracer data. The combined statistics are used to define an analytical solution for contaminant dissolution under natural gradient flow. The tracer prediction technique compared favorably with results from a multiphase flow and transport simulator UTCHEM in domains with different hydrodynamic heterogeneity (variance of the log conductivity field = 0.2, 1 and 3).
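
    The power function model referred to above is commonly written, in one standard form, as

        \frac{C_s(t)}{C_0} = \left( \frac{M(t)}{M_0} \right)^{\Gamma}

    where C_s is the flux-averaged concentration leaving the source zone, M the DNAPL mass remaining, and \Gamma an exponent that, in the equilibrium streamtube picture, reflects the joint statistics of the travel times and DNAPL saturations estimated from the tracer data.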

  16. Interagency Nuclear Safety Review Panel: Biomedical and Environmental Effects Subpanel report for Galileo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anspaugh, L.R.; Blanton, J.O.; Bollinger, L.J.

    1989-10-01

    This report of the Biomedical and Environmental Effects Subpanel (BEES) of the Interagency Nuclear Safety Review Panel (INSRP), for the Galileo space mission addresses the possible radiological consequences of postulated accidents that release radioactivity into the environment. This report presents estimates of the consequences and uncertainties given that the source term is released into the environment. 10 refs., 6 tabs.

  17. Detailed source term estimation of the atmospheric release for the Fukushima Daiichi Nuclear Power Station accident by coupling simulations of atmospheric dispersion model with improved deposition scheme and oceanic dispersion model

    NASA Astrophysics Data System (ADS)

    Katata, G.; Chino, M.; Kobayashi, T.; Terada, H.; Ota, M.; Nagai, H.; Kajino, M.; Draxler, R.; Hort, M. C.; Malo, A.; Torii, T.; Sanada, Y.

    2014-06-01

    Temporal variations in the amount of radionuclides released into the atmosphere during the Fukushima Dai-ichi Nuclear Power Station (FNPS1) accident and their atmospheric and marine dispersion are essential to evaluate the environmental impacts and resultant radiological doses to the public. In this paper, we estimate a detailed time trend of atmospheric releases during the accident by combining environmental monitoring data with atmospheric model simulations from WSPEEDI-II (Worldwide version of System for Prediction of Environmental Emergency Dose Information), and simulations from the oceanic dispersion model SEA-GEARN-FDM, both developed by the authors. A sophisticated deposition scheme, which deals with dry and fogwater depositions, cloud condensation nuclei (CCN) activation and subsequent wet scavenging due to mixed-phase cloud microphysics (in-cloud scavenging) for radioactive iodine gas (I2 and CH3I) and other particles (CsI, Cs, and Te), was incorporated into WSPEEDI-II to improve the surface deposition calculations. The fallout to the ocean surface calculated by WSPEEDI-II was used as input data for the SEA-GEARN-FDM calculations. Reverse and inverse source-term estimation methods based on coupling the simulations from both models were adopted, using air dose rates and concentrations, and sea surface concentrations. The results revealed that the major releases of radionuclides due to the FNPS1 accident occurred in the following periods during March 2011: the afternoon of 12 March due to the wet venting and hydrogen explosion at Unit 1, the morning of 13 March after the venting event at Unit 3, midnight of 14 March when the SRV (Safety Relief Valve) at Unit 2 was opened three times, the morning and night of 15 March, and the morning of 16 March. According to the simulation results, the highest radioactive contamination areas around FNPS1 were created from 15 to 16 March by complicated interactions among rainfall, plume movements, and the temporal variation of release rates associated with reactor pressure changes in Units 2 and 3. The modified WSPEEDI-II simulation using the new source term reproduced local and regional patterns of cumulative surface deposition of total 131I and 137Cs and air dose rate obtained by airborne surveys. The new source term was also tested using three atmospheric dispersion models (MLDP0, HYSPLIT, and NAME) for regional and global calculations and showed good agreement between calculated and observed air concentration and surface deposition of 137Cs in East Japan. Moreover, the HYSPLIT model using the new source term also reproduced the plume arrivals in several other countries, showing a good correlation with measured air concentration data. A large part of the deposition pattern of total 131I and 137Cs in East Japan was explained by in-cloud particulate scavenging. However, for the regional scale contaminated areas, there were large uncertainties due to the overestimation of rainfall amounts and the underestimation of fogwater and drizzle depositions. The computations showed that approximately 27% of the 137Cs discharged from FNPS1 was deposited on land in East Japan, mostly in forest areas.
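
    The reverse estimation step described here exploits the linearity of atmospheric dispersion: a simulation run with a unit release rate predicts concentrations at the monitoring points, and the actual release rate follows from the ratio of observed to unit-run values. A minimal sketch of that ratio estimator (the station values are placeholders; this illustrates the general idea, not the WSPEEDI-II implementation):

        import numpy as np

        def release_rate_from_unit_run(observed, modeled_unit):
            """Estimate source strength Q assuming observed ~ Q * modeled_unit,
            where modeled_unit is the prediction for a unit release rate (1 Bq/h)."""
            observed = np.asarray(observed, dtype=float)
            modeled_unit = np.asarray(modeled_unit, dtype=float)
            valid = modeled_unit > 0
            return np.mean(observed[valid] / modeled_unit[valid])

        # Placeholder: three stations' measured air concentrations vs unit-release predictions.
        q_est = release_rate_from_unit_run(observed=[4.1e2, 9.5e1, 2.2e2],
                                           modeled_unit=[4.0e-12, 1.0e-12, 2.1e-12])
        print(f"estimated release rate ~ {q_est:.2e} Bq/h")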

  18. Performance evaluation of the Champagne source reconstruction algorithm on simulated and real M/EEG data.

    PubMed

    Owen, Julia P; Wipf, David P; Attias, Hagai T; Sekihara, Kensuke; Nagarajan, Srikantan S

    2012-03-01

    In this paper, we present an extensive performance evaluation of a novel source localization algorithm, Champagne. It is derived in an empirical Bayesian framework that yields sparse solutions to the inverse problem. It is robust to correlated sources and learns the statistics of non-stimulus-evoked activity to suppress the effect of noise and interfering brain activity. We tested Champagne on both simulated and real M/EEG data. The source locations used for the simulated data were chosen to test the performance on challenging source configurations. In simulations, we found that Champagne outperforms the benchmark algorithms in terms of both the accuracy of the source localizations and the correct estimation of source time courses. We also demonstrate that Champagne is more robust to correlated brain activity present in real MEG data and is able to resolve many distinct and functionally relevant brain areas with real MEG and EEG data. Copyright © 2011 Elsevier Inc. All rights reserved.

  19. A modified approach for estimating the aquatic critical load of acid deposition in northern Saskatchewan, Canada

    NASA Astrophysics Data System (ADS)

    Whitfield, Colin J.; Mowat, Aidan C.; Scott, Kenneth A.; Watmough, Shaun A.

    2016-12-01

    Acid-sensitive ecosystems are found in northern Saskatchewan, which lies downwind of major sulphur (S) and nitrogen (N) emissions sources associated with the oil sands extraction industry. In order to protect these ecosystems against acidification, tolerance to acid deposition must be quantified. The suitability of the central empirical relationship used in the Steady-State Water Chemistry (SSWC) model to predict historical sulphate (SO4) concentrations was investigated, and an alternate approach for determining aquatic critical loads of acidity (CL(A)) was employed for the study lakes (n = 260). Critical loads of acidity were often low, with median values of 12-16 mmolc m-2 yr-1, with the lower value reflecting a region-specific limit for acid-neutralizing capacity identified in this study. Uncertain levels of atmospheric deposition in the region, however, are problematic for characterizing acidification risk. Accurate S and chloride (Cl) deposition estimates are needed to identify catchment sources (and sinks) of these elements in the new approach for CL(A) calculation. Likewise, accurate depiction of atmospheric deposition levels can prove useful for evaluation of the lake runoff estimates on which estimates of CL(A) are contingent. While CL(A) values are low and exceedance may occur according to projected increases in S deposition in the near-term, S retention appears to be an important feature in many catchments, and the risk of acidification may be overstated should long-term S retention be occurring in peatlands.

  20. Regularized Semiparametric Estimation for Ordinary Differential Equations

    PubMed Central

    Li, Yun; Zhu, Ji; Wang, Naisyin

    2015-01-01

    Ordinary differential equations (ODEs) are widely used in modeling dynamic systems and have ample applications in the fields of physics, engineering, economics and biological sciences. The ODE parameters often possess physiological meanings and can help scientists gain better understanding of the system. One key interest is thus to well estimate these parameters. Ideally, constant parameters are preferred due to their easy interpretation. In reality, however, constant parameters can be too restrictive such that even after incorporating error terms, there could still be unknown sources of disturbance that lead to poor agreement between observed data and the estimated ODE system. In this paper, we address this issue and accommodate short-term interferences by allowing parameters to vary with time. We propose a new regularized estimation procedure on the time-varying parameters of an ODE system so that these parameters could change with time during transitions but remain constants within stable stages. We found, through simulation studies, that the proposed method performs well and tends to have less variation in comparison to the non-regularized approach. On the theoretical front, we derive finite-sample estimation error bounds for the proposed method. Applications of the proposed method to modeling the hare-lynx relationship and the measles incidence dynamic in Ontario, Canada lead to satisfactory and meaningful results. PMID:26392639

  1. Total Phosphorus Loads for Selected Tributaries to Sebago Lake, Maine

    USGS Publications Warehouse

    Hodgkins, Glenn A.

    2001-01-01

    The streamflow and water-quality data-collection networks of the Portland Water District (PWD) and the U.S. Geological Survey (USGS) as of February 2000 were analyzed in terms of their applicability for estimating total phosphorus loads for selected tributaries to Sebago Lake in southern Maine. The long-term unit-area mean annual flows for the Songo River and for small, ungaged tributaries are similar to the long-term unit-area mean annual flows for the Crooked River and other gaged tributaries to Sebago Lake, based on a regression equation that estimates mean annual streamflows in Maine. Unit-area peak streamflows of Sebago Lake tributaries can be quite different, based on a regression equation that estimates peak streamflows for Maine. Crooked River had a statistically significant positive relation (Kendall's Tau test, p = 0.0004) between streamflow and total phosphorus concentration. Panther Run had a statistically significant negative relation (p = 0.0015). Significant positive relations may indicate contributions from nonpoint sources or sediment resuspension, whereas significant negative relations may indicate dilution of point sources. Total phosphorus concentrations were significantly larger in the Crooked River than in the Songo River (Wilcoxon rank-sum test, p < 0.0001). Evidence was insufficient, however, to indicate that phosphorus concentrations from medium-sized drainage basins, at a significance level of 0.05, were different from each other or that concentrations in small-sized drainage basins were different from each other (Kruskal-Wallis test, p = 0.0980, 0.1265). All large- and medium-sized drainage basins were sampled for total phosphorus approximately monthly. Although not all small drainage basins were sampled, they may be well represented by the small drainage basins that were sampled. If the tributaries gaged by PWD had adequate streamflow data, the current PWD tributary monitoring program would probably produce total phosphorus loading data that would represent all gaged and ungaged tributaries to Sebago Lake. Outside the PWD tributary-monitoring program, the largest ungaged tributary to Sebago Lake contains 1.5 percent of the area draining to the lake. In the absence of unique point or nonpoint sources of phosphorus, ungaged tributaries are unlikely to have total phosphorus concentrations that differ significantly from those in the small tributaries that have concentration data. The regression method, also known as the rating-curve method, was used to estimate the annual total phosphorus load for Crooked River, Northwest River, and Rich Mill Pond Outlet for water years 1996-98. The MOVE.1 method was used to estimate daily streamflows for the regression method at Northwest River and Rich Mill Pond Outlet, where streamflows were not continuously monitored. An averaging method also was used to compute annual loads at the three sites. The difference between the regression estimate and the averaging estimate for each of the three tributaries was consistent with what was expected from previous studies.
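
    The regression (rating-curve) method named here fits a log-log concentration-streamflow relation on sampled days and then applies it to the continuous daily flow record to integrate an annual load. A minimal sketch with synthetic data (an operational version, as in LOADEST-type tools, would also apply a retransformation bias correction, omitted here):

        import numpy as np

        rng = np.random.default_rng(1)

        # Synthetic calibration samples: flow (m^3/s) and total P concentration (mg/L).
        q_s = rng.lognormal(2.0, 0.8, 36)
        c_s = 0.02 * q_s ** 0.3 * rng.lognormal(0.0, 0.2, 36)

        # Rating curve: ln(C) = b0 + b1 * ln(Q), fitted by least squares.
        X = np.column_stack([np.ones_like(q_s), np.log(q_s)])
        b, *_ = np.linalg.lstsq(X, np.log(c_s), rcond=None)

        # Apply to a daily flow record and integrate the annual load.
        q_d = rng.lognormal(2.0, 0.8, 365)
        c_d = np.exp(b[0] + b[1] * np.log(q_d))          # mg/L = g/m^3
        load_kg = np.sum(c_d * q_d * 86400.0) / 1e3      # g/day summed over a year -> kg
        print(f"b0 = {b[0]:.2f}, b1 = {b[1]:.2f}; annual load ~ {load_kg:.0f} kg")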

  2. Estimating Uncertainty in Long Term Total Ozone Records from Multiple Sources

    NASA Technical Reports Server (NTRS)

    Frith, Stacey M.; Stolarski, Richard S.; Kramarova, Natalya; McPeters, Richard D.

    2014-01-01

    Total ozone measurements derived from the TOMS and SBUV backscattered solar UV instrument series cover the period from late 1978 to the present. As the SBUV series of instruments comes to an end, we look to the 10 years of data from the AURA Ozone Monitoring Instrument (OMI) and two years of data from the Ozone Mapping Profiler Suite (OMPS) on board the Suomi National Polar-orbiting Partnership satellite to continue the record. When combining these records to construct a single long-term data set for analysis we must estimate the uncertainty in the record resulting from potential biases and drifts in the individual measurement records. In this study we present a Monte Carlo analysis used to estimate uncertainties in the Merged Ozone Dataset (MOD), constructed from the Version 8.6 SBUV2 series of instruments. We extend this analysis to incorporate OMI and OMPS total ozone data into the record and investigate the impact of multiple overlapping measurements on the estimated error. We also present an updated column ozone trend analysis and compare the size of statistical error (error from variability not explained by our linear regression model) to that from instrument uncertainty.
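
    The Monte Carlo error analysis sketched in this record can be pictured as repeatedly perturbing each instrument's record with a random calibration drift, re-merging the records, and examining the spread of the recovered trend. A toy version with three synthetic overlapping records (the drift magnitude, record lengths, and flat truth are all assumptions):

        import numpy as np

        rng = np.random.default_rng(0)
        years = np.arange(1979, 2014, dtype=float)
        truth = np.zeros_like(years)                  # flat true ozone anomaly, %

        segments = [(0, 15), (12, 28), (25, 35)]      # index spans of three instruments
        trends = []
        for _ in range(2000):
            merged = np.full_like(years, np.nan)
            for lo, hi in segments:
                drift = rng.normal(0.0, 0.05)         # assumed 1-sigma drift, %/yr
                seg = truth[lo:hi] + drift * (years[lo:hi] - years[lo])
                merged[lo:hi] = np.nanmean([merged[lo:hi], seg], axis=0)  # average overlaps
            trends.append(np.polyfit(years, merged, 1)[0] * 10.0)        # %/decade
        print(f"2-sigma trend uncertainty from drifts: {2 * np.std(trends):.2f} %/decade")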

  3. Hanford Environmental Dose Reconstruction Project monthly report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McMakin, A.H.; Cannon, S.D.; Finch, S.M.

    1992-09-01

    The objective of the Hanford Environmental Dose Reconstruction (HEDR) Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The Technical Steering Panel (TSP) consists of experts in environmental pathways, epidemiology, surface-water transport, ground-water transport, statistics, demography, agriculture, meteorology, nuclear engineering, radiation dosimetry, and cultural anthropology. Included are appointed members representing the states of Oregon, Washington, and Idaho, a representative of Native American tribes, and an individual representing the public. The project is divided into the following technical tasks, which correspond to the path radionuclides followed from release to impact on humans (dose estimates): Source Terms; Environmental Transport; Environmental Monitoring Data; Demography, Food Consumption, and Agriculture; and Environmental Pathways and Dose Estimates.

  4. Source signature estimation from multimode surface waves via mode-separated virtual real source method

    NASA Astrophysics Data System (ADS)

    Gao, Lingli; Pan, Yudi

    2018-05-01

    The correct estimation of the seismic source signature is crucial to exploration geophysics. Based on seismic interferometry, the virtual real source (VRS) method provides a model-independent way for source signature estimation. However, when encountering multimode surface waves, which are commonly seen in the shallow seismic survey, strong spurious events appear in seismic interferometric results. These spurious events introduce errors in the virtual-source recordings and reduce the accuracy of the source signature estimated by the VRS method. In order to estimate a correct source signature from multimode surface waves, we propose a mode-separated VRS method. In this method, multimode surface waves are mode separated before seismic interferometry. Virtual-source recordings are then obtained by applying seismic interferometry to each mode individually. Therefore, artefacts caused by cross-mode correlation are excluded in the virtual-source recordings and the estimated source signatures. A synthetic example showed that a correct source signature can be estimated with the proposed method, while strong spurious oscillation occurs in the estimated source signature if we do not apply mode separation first. We also applied the proposed method to a field example, which verified its validity and effectiveness in estimating seismic source signature from shallow seismic shot gathers containing multimode surface waves.

  5. Data-optimized source modeling with the Backwards Liouville Test–Kinetic method

    DOE PAGES

    Woodroffe, J. R.; Brito, T. V.; Jordanova, V. K.; ...

    2017-09-14

    In the standard practice of neutron multiplicity counting, the first three sampled factorial moments of the event-triggered neutron count distribution are used to quantify the three main neutron source terms: the spontaneous fissile material effective mass, the relative (α,n) production, and the induced fission source responsible for multiplication. Our study compares three methods to quantify the statistical uncertainty of the estimated mass: the bootstrap method, propagation of variance through moments, and the statistical analysis of cycle data method. Each of the three methods was implemented on a set of four different NMC measurements, held at the JRC laboratory in Ispra, Italy, sampling four different Pu samples in a standard Plutonium Scrap Multiplicity Counter (PSMC) well counter.

  6. Magnetostrophic balance in planetary dynamos - Predictions for Neptune's magnetosphere

    NASA Technical Reports Server (NTRS)

    Curtis, S. A.; Ness, N. F.

    1986-01-01

    With the purpose of estimating Neptune's magnetic field and its implications for nonthermal Neptune radio emissions, a new scaling law for planetary magnetic fields was developed in terms of externally observable parameters (the planet's mean density, radius, mass, rotation rate, and internal heat source luminosity). From a comparison of theory and observations by Voyager it was concluded that planetary dynamos are two-state systems with either zero intrinsic magnetic field (for planets with low internal heat source) or (for planets with the internal heat source sufficiently strong to drive convection) a magnetic field near the upper bound determined from magnetostrophic balance. It is noted that mass loading of the Neptune magnetosphere by Triton may play an important role in the generation of nonthermal radio emissions.

  7. Continuous fermentation of food waste leachate for the production of volatile fatty acids and potential as a denitrification carbon source.

    PubMed

    Kim, Hakchan; Kim, Jaai; Shin, Seung Gu; Hwang, Seokhwan; Lee, Changsoo

    2016-05-01

    This study investigated the simultaneous effects of hydraulic retention time (HRT) and pH on the continuous production of VFAs from food waste leachate using response surface analysis. The response surface approximations (R² = 0.895, p < 0.05) revealed that pH has a dominant effect on the specific VFA production (PTVFA) within the explored space (1-4-day HRT, pH 4.5-6.5). The estimated maximum PTVFA was 0.26 g total VFAs/g CODf at 2.14-day HRT and pH 6.44, and the approximation was experimentally validated by running triplicate reactors under the estimated optimum conditions. The mixture of the filtrates recovered from these reactors was tested as a denitrification carbon source and demonstrated superior performance in terms of reaction rate and lag length relative to other chemicals, including acetate and methanol. The overall results provide helpful information for better design and control of continuous fermentation for producing waste-derived VFAs, an alternative carbon source for denitrification. Copyright © 2016 Elsevier Ltd. All rights reserved.
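
    Response surface analysis of this kind fits a full quadratic in the two factors and reads the optimum off the stationary point of the fitted surface. A minimal sketch using a synthetic response constructed to peak near the reported optimum (the design points and coefficients are illustrative, not the study's data):

        import numpy as np

        rng = np.random.default_rng(3)
        hrt = rng.uniform(1.0, 4.0, 20)                 # days
        ph = rng.uniform(4.5, 6.5, 20)
        y = 0.26 - 0.02 * (hrt - 2.14) ** 2 - 0.05 * (ph - 6.44) ** 2 + rng.normal(0, 0.005, 20)

        # Fit y = b0 + b1*h + b2*p + b3*h^2 + b4*p^2 + b5*h*p.
        X = np.column_stack([np.ones_like(hrt), hrt, ph, hrt ** 2, ph ** 2, hrt * ph])
        b, *_ = np.linalg.lstsq(X, y, rcond=None)

        # Stationary point: set both partial derivatives of the fitted quadratic to zero.
        A = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
        h_opt, p_opt = np.linalg.solve(A, -b[1:3])
        print(f"fitted optimum: HRT = {h_opt:.2f} d, pH = {p_opt:.2f}")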

  8. Unrecognized astrometric confusion in the Galactic Centre

    NASA Astrophysics Data System (ADS)

    Plewa, P. M.; Sari, R.

    2018-06-01

    The Galactic Centre is a crowded stellar field and frequent unrecognized events of source confusion, which involve undetected faint stars, are expected to introduce astrometric noise on a sub-mas level. This confusion noise is the main non-instrumental effect limiting the astrometric accuracy and precision of current near-infrared imaging observations and the long-term monitoring of individual stellar orbits in the vicinity of the central supermassive black hole. We self-consistently simulate the motions of the known and the yet unidentified stars to characterize this noise component and show that a likely consequence of source confusion is a bias in estimates of the stellar orbital elements, as well as the inferred mass and distance of the black hole, in particular if stars are being observed at small projected separations from it, such as the star S2 during pericentre passage. Furthermore, we investigate modelling the effect of source confusion as an additional noise component that is time-correlated, demonstrating a need for improved noise models to obtain trustworthy estimates of the parameters of interest (and their uncertainties) in future astrometric studies.

  9. Estimating source parameters from deformation data, with an application to the March 1997 earthquake swarm off the Izu Peninsula, Japan

    NASA Astrophysics Data System (ADS)

    Cervelli, P.; Murray, M. H.; Segall, P.; Aoki, Y.; Kato, T.

    2001-06-01

    We have applied two Monte Carlo optimization techniques, simulated annealing and random cost, to the inversion of deformation data for fault and magma chamber geometry. These techniques involve an element of randomness that permits them to escape local minima and ultimately converge to the global minimum of misfit space. We have tested the Monte Carlo algorithms on two synthetic data sets. We have also compared them to one another in terms of their efficiency and reliability. We have applied the bootstrap method to estimate confidence intervals for the source parameters, including the correlations inherent in the data. Additionally, we present methods that use the information from the bootstrapping procedure to visualize the correlations between the different model parameters. We have applied these techniques to GPS, tilt, and leveling data from the March 1997 earthquake swarm off of the Izu Peninsula, Japan. Using the two Monte Carlo algorithms, we have inferred two sources, a dike and a fault, that fit the deformation data and the patterns of seismicity and that are consistent with the regional stress field.
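
    Simulated annealing, one of the two optimizers used here, escapes local minima by sometimes accepting moves that increase the misfit, with an acceptance probability that decays as a temperature parameter is cooled. A generic sketch of the accept/reject loop (the two-parameter toy misfit below stands in for the dislocation-model misfit of the study):

        import numpy as np

        def simulated_annealing(misfit, x0, step, n_iter=5000, t0=1.0, cooling=0.999, seed=0):
            """Minimize `misfit` by random perturbation with Metropolis acceptance."""
            rng = np.random.default_rng(seed)
            x = np.asarray(x0, dtype=float)
            f = misfit(x)
            best_x, best_f, temp = x.copy(), f, t0
            for _ in range(n_iter):
                cand = x + rng.normal(0.0, step, size=x.size)
                fc = misfit(cand)
                if fc < f or rng.random() < np.exp(-(fc - f) / temp):   # Metropolis rule
                    x, f = cand, fc
                    if f < best_f:
                        best_x, best_f = x.copy(), f
                temp *= cooling
            return best_x, best_f

        # Toy multimodal misfit with known minimum near (3.0, 1.5); placeholder for a real one.
        toy_misfit = lambda p: np.sum((p - np.array([3.0, 1.5])) ** 2) + 0.3 * np.sum(np.sin(5 * p)) ** 2
        x_best, f_best = simulated_annealing(toy_misfit, x0=[0.0, 0.0], step=0.3)
        print(f"best parameters: {x_best.round(2)}, misfit {f_best:.3f}")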

  10. A new experimental method for the determination of the effective orifice area based on the acoustical source term

    NASA Astrophysics Data System (ADS)

    Kadem, L.; Knapp, Y.; Pibarot, P.; Bertrand, E.; Garcia, D.; Durand, L. G.; Rieu, R.

    2005-12-01

    The effective orifice area (EOA) is the most commonly used parameter to assess the severity of aortic valve stenosis as well as the performance of valve substitutes. Particle image velocimetry (PIV) may be used for in vitro estimation of valve EOA. In the present study, we propose a new and simple method based on Howe’s developments of Lighthill’s aero-acoustic theory. This method is based on an acoustical source term (AST) to estimate the EOA from the transvalvular flow velocity measurements obtained by PIV. The EOAs measured by the AST method downstream of three sharp-edged orifices were in excellent agreement with the EOAs predicted from the potential flow theory used as the reference method in this study. Moreover, the AST method was more accurate than other conventional PIV methods based on streamlines, inflexion point or vorticity to predict the theoretical EOAs. The superiority of the AST method is likely due to the nonlinear form of the AST. There was also an excellent agreement between the EOAs measured by the AST method downstream of the three sharp-edged orifices as well as downstream of a bioprosthetic valve with those obtained by the conventional clinical method based on Doppler-echocardiographic measurements of transvalvular velocity. The results of this study suggest that this new simple PIV method provides an accurate estimation of the aortic valve flow EOA. This new method may thus be used as a reference method to estimate the EOA in experimental investigation of the performance of valve substitutes and to validate Doppler-echocardiographic measurements under various physiologic and pathologic flow conditions.

  11. Developing population models with data from marked individuals

    USGS Publications Warehouse

    Hae Yeong Ryu; Kevin T. Shoemaker; Eva Kneip; Anna Pidgeon; Patricia Heglund; Brooke Bateman; Wayne E. Thogmartin; Reşit Akçakaya

    2016-01-01

    Population viability analysis (PVA) is a powerful tool for biodiversity assessments, but its use has been limited because of the requirements for fully specified population models such as demographic structure, density-dependence, environmental stochasticity, and specification of uncertainties. Developing a fully specified population model from commonly available data sources – notably, mark–recapture studies – remains complicated due to lack of practical methods for estimating fecundity, true survival (as opposed to apparent survival), natural temporal variability in both survival and fecundity, density-dependence in the demographic parameters, and uncertainty in model parameters. We present a general method that estimates all the key parameters required to specify a stochastic, matrix-based population model, constructed using a long-term mark–recapture dataset. Unlike standard mark–recapture analyses, our approach provides estimates of true survival rates and fecundities, their respective natural temporal variabilities, and density-dependence functions, making it possible to construct a population model for long-term projection of population dynamics. Furthermore, our method includes a formal quantification of parameter uncertainty for global (multivariate) sensitivity analysis. We apply this approach to 9 bird species and demonstrate the feasibility of using data from the Monitoring Avian Productivity and Survivorship (MAPS) program. Bias-correction factors for raw estimates of survival and fecundity derived from mark–recapture data (apparent survival and juvenile:adult ratio, respectively) were non-negligible, and corrected parameters were generally more biologically reasonable than their uncorrected counterparts. Our method allows the development of fully specified stochastic population models using a single, widely available data source, substantially reducing the barriers that have until now limited the widespread application of PVA. This method is expected to greatly enhance our understanding of the processes underlying population dynamics and our ability to analyze viability and project trends for species of conservation concern.
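
    The end product of the approach described here is a stochastic matrix population model. A minimal sketch of projecting such a model, a two-stage (juvenile/adult) bird life cycle with year-to-year variability in survival and fecundity, where all vital rates are placeholders rather than MAPS-derived estimates:

        import numpy as np

        rng = np.random.default_rng(7)

        def project(n0, years=50):
            """Project a 2-stage matrix model with environmental stochasticity."""
            n = np.array(n0, dtype=float)                     # [juveniles, adults]
            for _ in range(years):
                f = max(rng.normal(1.2, 0.3), 0.0)            # young per adult female
                s_j = np.clip(rng.normal(0.3, 0.05), 0, 1)    # juvenile survival
                s_a = np.clip(rng.normal(0.6, 0.05), 0, 1)    # adult survival
                A = np.array([[0.0, f],                       # adults produce juveniles
                              [s_j, s_a]])                    # recruitment, adult survival
                n = A @ n
            return n

        sizes = np.array([project([50.0, 100.0]).sum() for _ in range(1000)])
        print(f"median population after 50 yr: {np.median(sizes):.0f}; "
              f"P(size < 20): {(sizes < 20).mean():.2f}")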

  12. Calibrating Treasure Valley Groundwater Model using MODFLOW

    NASA Astrophysics Data System (ADS)

    Hernandez, J.; Tan, K.

    2016-12-01

    Groundwater plays an especially important role in Idaho: according to the Idaho Department of Environmental Quality (2011), groundwater supplies 95% of the state's drinking water. The USGS estimates that Idaho withdraws 117 million cubic meters (95,000 acre-feet) per year from groundwater sources for domestic usage, which includes drinking water. The same report from the USGS also estimates that Idaho withdraws 5,140 million cubic meters (4,170,000 acre-feet) per year from groundwater sources for irrigation usage. Quantifying and managing that resource and estimating groundwater levels in the future is important for a variety of socio-economic reasons. As the population within the Treasure Valley continues to grow, the demand for clean usable groundwater increases. The objective of this study was to develop and calibrate a groundwater model with the purpose of understanding short- and long-term effects of existing and alternative land use scenarios on groundwater changes. Hydrologic simulations were done using the MODFLOW-2000 model. The model was calibrated for the predevelopment period by reproducing and comparing groundwater levels from the years before 1925, using steady-state boundary conditions representing no change in land use. Depending on the reliability of the groundwater source, the economic growth of the area can be constrained or allowed to flourish. Mismanagement of the groundwater source can impact its sustainability and quality and could hamper development by increasing operation and maintenance costs. Proper water management is critical because groundwater is such a limited resource.

  13. Long-term changes in nitrate conditions over the 20th century in two Midwestern Corn Belt streams

    USGS Publications Warehouse

    Kelly, Valerie J.; Stets, Edward G.; Crawford, Charles G.

    2015-01-01

    Long-term changes in nitrate concentration and flux between the middle of the 20th century and the first decade of the 21st century were estimated for the Des Moines River and the Middle Illinois River, two Midwestern Corn Belt streams, using a novel weighted regression approach that is able to detect subtle changes in solute transport behavior over time. The results show that the largest changes in flow-normalized concentration and flux occurred between 1960 and 1980 in both streams, with smaller or negligible changes between 1980 and 2004. Contrasting patterns were observed between (1) nitrate export linked to non-point sources, explicitly, runoff of synthetic fertilizer or other surface sources, and (2) nitrate export presumably associated with point sources such as urban wastewater or confined livestock feeding facilities, with each of these modes of transport important under different domains of streamflow. Surface runoff was estimated to be consistently most important under high-flow conditions during the spring in both rivers. Nitrate export may also have been considerable in the Des Moines River even under some conditions during the winter when flows are generally lower, suggesting the influence of point sources during this time. Similar results were shown for the Middle Illinois River, which is subject to significant influence of wastewater from the Chicago area, where elevated nitrate concentrations were associated with the lowest flows during the winter and fall. By modeling concentration directly, this study highlights the complex relationship between concentration and streamflow that has evolved in these two basins over the last 50 years. This approach provides insights about changing conditions that only become observable when stationarity in the relationship between concentration and streamflow is not assumed.
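
    The weighted regression referred to here (the WRTDS approach) re-fits, at each estimation point, a linear model of ln(C) on time, ln(Q), and seasonal terms, with observations down-weighted by their distance in all three dimensions. A minimal sketch of one such locally weighted fit (the window half-widths and synthetic data are assumptions):

        import numpy as np

        def tricube(d, h):
            u = np.clip(np.abs(d) / h, 0.0, 1.0)
            return (1.0 - u ** 3) ** 3

        def wrtds_estimate(t0, q0, t, q, c, h_t=7.0, h_q=2.0, h_s=0.5):
            """Locally weighted fit of ln(C) ~ [1, t, ln q, sin 2*pi*t, cos 2*pi*t]."""
            season = np.abs(t - t0) % 1.0
            season = np.minimum(season, 1.0 - season)        # circular seasonal distance
            w = tricube(t - t0, h_t) * tricube(np.log(q / q0), h_q) * tricube(season, h_s)
            X = np.column_stack([np.ones_like(t), t, np.log(q),
                                 np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
            sw = np.sqrt(w)
            beta, *_ = np.linalg.lstsq(X * sw[:, None], np.log(c) * sw, rcond=None)
            x0 = np.array([1.0, t0, np.log(q0), np.sin(2 * np.pi * t0), np.cos(2 * np.pi * t0)])
            return np.exp(x0 @ beta)

        rng = np.random.default_rng(2)
        t = np.sort(rng.uniform(1950.0, 2004.0, 400))        # decimal years of samples
        q = rng.lognormal(3.0, 0.7, 400)                     # streamflow
        c = 2.0 * q ** 0.2 * np.exp(0.3 * np.sin(2 * np.pi * t)) * rng.lognormal(0, 0.15, 400)
        print(f"estimate at 1970, median flow: {wrtds_estimate(1970.0, 20.0, t, q, c):.2f} mg/L")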

  14. Chemical characteristic and toxicity assessment of particle associated PAHs for the short-term anthropogenic activity event: During the Chinese New Year's Festival in 2013.

    PubMed

    Shi, Guo-Liang; Liu, Gui-Rong; Tian, Ying-Ze; Zhou, Xiao-Yu; Peng, Xing; Feng, Yin-Chang

    2014-06-01

    PM10 and PM2.5 samples were simultaneously collected during a period covering the Chinese New Year's (CNY) Festival. The concentrations of particulate matter (PM) and 16 polycyclic aromatic hydrocarbons (PAHs) were measured. The possible source contributions and toxicity risks were estimated for the Festival and non-Festival periods. According to the diagnostic ratios and Multilinear Engine 2 (ME2), three sources were identified and their contributions were calculated: vehicle emission (48.97% for PM10, 53.56% for PM2.5), biomass & coal combustion (36.83% for PM10, 28.76% for PM2.5), and cook emission (22.29% for PM10, 27.23% for PM2.5). An interesting result was that, although the PAHs are not directly emitted by the fireworks displays, they were still indirectly influenced by the biomass combustion associated with the fireworks. Additionally, the toxicity risks of the different sources were estimated by the Multilinear Engine 2-BaP equivalent (ME2-BaPE) approach: vehicle emission (54.01% for PM10, 55.42% for PM2.5), cook emission (25.59% for PM10, 29.05% for PM2.5), and biomass & coal combustion (20.90% for PM10, 14.28% for PM2.5). It is worth noting that the toxicity contribution of cook emission was considerable during the Festival period. The findings provide useful information for protecting urban human health and for developing effective air quality control strategies during special short-term anthropogenic activity events. Copyright © 2014 Elsevier B.V. All rights reserved.
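
    The BaP-equivalent weighting behind the ME2-BaPE apportionment converts each PAH concentration into a benzo[a]pyrene-equivalent via toxic equivalency factors (TEFs) and sums the results. A minimal sketch (both the TEF values and the concentrations are illustrative; published TEF schemes differ):

        # BaP-equivalent concentration: BaPeq = sum_i C_i * TEF_i
        tef = {"benzo[a]pyrene": 1.0, "benzo[a]anthracene": 0.1,
               "benzo[b]fluoranthene": 0.1, "indeno[1,2,3-cd]pyrene": 0.1,
               "chrysene": 0.01, "fluoranthene": 0.001, "pyrene": 0.001}

        conc_ng_m3 = {"benzo[a]pyrene": 2.1, "benzo[a]anthracene": 3.4,   # placeholder values
                      "chrysene": 4.0, "fluoranthene": 8.2, "pyrene": 7.5}

        bap_eq = sum(c * tef.get(name, 0.0) for name, c in conc_ng_m3.items())
        print(f"BaP-equivalent concentration: {bap_eq:.2f} ng/m^3")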

  15. Long-Term Exposure to Transportation Noise in Relation to Development of Obesity—a Cohort Study

    PubMed Central

    Eriksson, Charlotta; Lind, Tomas; Mitkovskaya, Natalya; Wallas, Alva; Ögren, Mikael; Östenson, Claes-Göran; Pershagen, Göran

    2017-01-01

    Background: Exposure to transportation noise is widespread and has been associated with obesity in some studies. However, the evidence from longitudinal studies is limited and little is known about effects of combined exposure to different noise sources. Objectives: The aim of this longitudinal study was to estimate the association between exposure to noise from road traffic, railways, or aircraft and the development of obesity markers. Methods: We assessed individual long-term exposure to road traffic, railway, and aircraft noise based on residential histories in a cohort of 5,184 men and women from Stockholm County. Noise levels were estimated at the most exposed façade of each dwelling. Waist circumference, weight, and height were measured at recruitment and after an average of 8.9 y of follow-up. Extensive information on potential confounders was available from repeated questionnaires and registers. Results: Waist circumference increased 0.04 cm/y (95% CI: 0.02, 0.06) and 0.16 cm/y (95% CI: 0.14, 0.17) per 10 dB Lden in relation to road traffic and aircraft noise, respectively. No corresponding association was seen for railway noise. Weight gain was only related to aircraft noise exposure. A similar pattern occurred for incidence rate ratios (IRRs) of central obesity and overweight. The IRR of central obesity increased from 1.22 (95% CI: 1.08, 1.39) in those exposed to only one source of transportation noise to 2.26 (95% CI: 1.55, 3.29) among those exposed to all three sources. Conclusion: Our results link transportation noise exposure to development of obesity and suggest that combined exposure from different sources may be particularly harmful. https://doi.org/10.1289/EHP1910 PMID:29161230

  16. Where did all the Nitrogen go? Use of Watershed-Scale Budgets to Quantify Nitrogen Inputs, Storages, and Losses.

    NASA Astrophysics Data System (ADS)

    Boyer, E. W.; Goodale, C. L.; Howarth, R. W.; VanBreemen, N.

    2001-12-01

    Inputs of nitrogen (N) to aquatic and terrestrial ecosystems have increased during recent decades, primarily from the production and use of fertilizers, the planting of N-fixing crops, and the combustion of fossil fuels. We present mass-balanced budgets of N for 16 catchments along a latitudinal profile from Maine to Virginia, which encompass a range of climatic variability and are major drainages to the coast of the North Atlantic Ocean. We quantify inputs of N to each catchment from atmospheric deposition, application of nitrogenous fertilizers, biological nitrogen fixation by crops and trees, and import of N in agricultural products (food and feed). We relate these input terms to losses of N (total, organic, and nitrate) in streamflow. The importance of the relative N sources to N exports varies widely by watershed and is related to land use. Atmospheric deposition was the largest source of N to the forested catchments of northern New England (e.g., Penobscot and Kennebec); import of N in food was the largest source of N to the more populated regions of southern New England (e.g., Charles and Blackstone); and agricultural inputs were the dominant N sources in the Mid-Atlantic region (e.g., Schuylkill and Potomac). In all catchments, N inputs greatly exceed outputs, implying additional loss terms (e.g., denitrification or volatilization and transport of animal wastes), or changes in internal N stores (e.g., accumulation of N in vegetation, soil, or groundwater). We use our N budgets and several modeling approaches to constrain estimates of the fate of this excess N, including estimates of N storage in accumulating woody biomass, N losses due to in-stream denitrification, and more. This work is an effort of the SCOPE Nitrogen Project.

  17. Water quality monitoring records for estimating tap water arsenic and nitrate: a validation study.

    PubMed

    Searles Nielsen, Susan; Kuehn, Carrie M; Mueller, Beth A

    2010-01-28

    Tap water may be an important source of exposure to arsenic and nitrate. Obtaining and analyzing samples in the context of large studies of health effects can be expensive. As an alternative, studies might estimate contaminant levels in individual homes by using publicly available water quality monitoring records, either alone or in combination with geographic information systems (GIS). We examined the validity of records-based methods in Washington State, where arsenic and nitrate contamination is prevalent but generally observed at modest levels. Laboratory analysis of samples from 107 homes (median 0.6 microg/L arsenic, median 0.4 mg/L nitrate as nitrogen) served as our "gold standard." Using Spearman's rho we compared these measures to estimates obtained using only the homes' street addresses and recent and/or historical measures from publicly monitored water sources within specified distances (radii) ranging from one half mile to 10 miles. Agreement improved as distance decreased, but the proportion of homes for which we could estimate summary measures also decreased. When including all homes, agreement was 0.05-0.24 for arsenic (8 miles), and 0.31-0.33 for nitrate (6 miles). Focusing on the closest source yielded little improvement. Agreement was greatest among homes with private wells. For homes on a water system, agreement improved considerably if we included only sources serving the relevant system (rho = 0.29 for arsenic, rho = 0.60 for nitrate). Historical water quality databases show some promise for categorizing epidemiologic study participants in terms of relative tap water nitrate levels. Nonetheless, such records-based methods must be used with caution, and their use for arsenic may be limited.

  18. Characterization of particulate emissions from Australian open-cut coal mines: Toward improved emission estimates.

    PubMed

    Richardson, Claire; Rutherford, Shannon; Agranovski, Igor

    2018-06-01

    Given the significance of mining as a source of particulates, accurate characterization of emissions is important for the development of appropriate emission estimation techniques for use in modeling predictions and to inform regulatory decisions. The currently available emission estimation methods for Australian open-cut coal mines relate primarily to total suspended particulates and PM10 (particulate matter with an aerodynamic diameter <10 μm), and limited data are available relating to the PM2.5 (<2.5 μm) size fraction. To provide an initial analysis of the appropriateness of the currently available emission estimation techniques, this paper presents results of sampling completed at three open-cut coal mines in Australia. The monitoring data demonstrate that the particulate size fraction varies for different mining activities, and that the region in which the mine is located influences the characteristics of the particulates emitted to the atmosphere. The proportion of fine particulates in the sample increased with distance from the source, with the coarse fraction being a more significant proportion of total suspended particulates close to the source of emissions. In terms of particulate composition, the results demonstrate that the particulate emissions are predominantly sourced from naturally occurring geological material, and coal comprises less than 13% of the overall emissions. The size fractionation exhibited by the sampling data sets is similar to that adopted in current Australian emission estimation methods but differs from the size fractionation presented in the U.S. Environmental Protection Agency methodology. Development of region-specific emission estimation techniques for PM10 and PM2.5 from open-cut coal mines is necessary to allow accurate prediction of particulate emissions to inform regulatory decisions and for use in modeling predictions. Comprehensive air quality monitoring was undertaken, and corresponding recommendations were provided.

  19. Assessment of infrasound signals recorded on seismic stations and infrasound arrays in the western United States using ground truth sources

    NASA Astrophysics Data System (ADS)

    Park, Junghyun; Hayward, Chris; Stump, Brian W.

    2018-06-01

    Ground truth sources in Utah during 2003-2013 are used to assess the contribution of temporal atmospheric conditions to infrasound detection and the predictive capabilities of atmospheric models. Ground truth sources consist of 28 long duration static rocket motor burn tests and 28 impulsive rocket body demolitions. Automated infrasound detections from a hybrid of regional seismometers and infrasound arrays use a combination of short-term time average/long-term time average ratios and spectral analyses. These detections are grouped into station triads using a Delaunay triangulation network and then associated to estimate phase velocity and azimuth to filter signals associated with a particular source location. The resulting range and azimuth distribution from sources to detecting stations varies seasonally and is consistent with predictions based on seasonal atmospheric models. Impulsive signals from rocket body detonations are observed at greater distances (>700 km) than the extended duration signals generated by the rocket burn test (up to 600 km). Infrasound energy attenuation associated with the two source types is quantified as a function of range and azimuth from infrasound amplitude measurements. Ray-tracing results using Ground-to-Space atmospheric specifications are compared to these observations and illustrate the degree to which the time variations in characteristics of the observations can be predicted over a multiple year time period.
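
    The automated detector described here combines a short-term average/long-term average (STA/LTA) energy ratio with spectral checks; a detection is declared when the ratio crosses a threshold. A minimal sketch of the STA/LTA core (the window lengths, threshold, and synthetic transient are typical choices, not those of the study):

        import numpy as np

        def sta_lta(x, fs, sta_win=2.0, lta_win=60.0):
            """STA/LTA on squared amplitude via moving averages; returns the ratio trace."""
            e = np.asarray(x, dtype=float) ** 2
            n_sta, n_lta = int(sta_win * fs), int(lta_win * fs)
            sta = np.convolve(e, np.ones(n_sta) / n_sta, mode="same")
            lta = np.convolve(e, np.ones(n_lta) / n_lta, mode="same")
            return sta / np.maximum(lta, 1e-12)

        rng = np.random.default_rng(5)
        fs = 20.0                                        # samples/s
        x = rng.normal(0.0, 1.0, int(600 * fs))          # 10 min of noise
        x[6000:6400] += 5.0 * np.sin(2 * np.pi * 1.0 * np.arange(400) / fs)  # 1 Hz transient
        ratio = sta_lta(x, fs)
        print(f"samples above STA/LTA threshold 4: {int((ratio > 4).sum())}")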

  20. Solving ill-posed control problems by stabilized finite element methods: an alternative to Tikhonov regularization

    NASA Astrophysics Data System (ADS)

    Burman, Erik; Hansbo, Peter; Larson, Mats G.

    2018-03-01

    Tikhonov regularization is one of the most commonly used methods for the regularization of ill-posed problems. In the setting of finite element solutions of elliptic partial differential control problems, Tikhonov regularization amounts to adding suitably weighted least squares terms of the control variable, or derivatives thereof, to the Lagrangian determining the optimality system. In this note we show that the stabilization methods for discretely ill-posed problems developed in the setting of convection-dominated convection-diffusion problems, can be highly suitable for stabilizing optimal control problems, and that Tikhonov regularization will lead to less accurate discrete solutions. We consider some inverse problems for Poisson’s equation as an illustration and derive new error estimates both for the reconstruction of the solution from the measured data and reconstruction of the source term from the measured data. These estimates include both the effect of the discretization error and error in the measurements.
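
    In the discrete setting, Tikhonov regularization of an ill-posed system Au = f minimizes ||Au - f||^2 + alpha * ||Lu||^2, which is equivalent to solving the normal equations (A^T A + alpha L^T L) u = A^T f. A minimal sketch on a Gaussian-blur test operator, showing the under/over-regularization trade-off the note is concerned with (the operator, noise level, and alpha values are illustrative):

        import numpy as np

        rng = np.random.default_rng(4)
        n = 100
        x = np.linspace(0.0, 1.0, n)

        # Ill-posed forward operator: row-normalized Gaussian blur.
        A = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.03 ** 2))
        A /= A.sum(axis=1, keepdims=True)

        u_true = np.sin(2 * np.pi * x) * (x > 0.2) * (x < 0.8)   # "source term" to recover
        f = A @ u_true + rng.normal(0.0, 1e-3, n)                # noisy measurements

        L = np.eye(n)                                            # standard Tikhonov penalty
        for alpha in [1e-8, 1e-4, 1e-1]:
            u = np.linalg.solve(A.T @ A + alpha * L.T @ L, A.T @ f)
            print(f"alpha = {alpha:.0e}: reconstruction error {np.linalg.norm(u - u_true):.3f}")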

  1. Volcanic stratospheric sulfur injections and aerosol optical depth from 500 BCE to 1900 CE

    NASA Astrophysics Data System (ADS)

    Toohey, Matthew; Sigl, Michael

    2017-11-01

    The injection of sulfur into the stratosphere by explosive volcanic eruptions is the cause of significant climate variability. Based on sulfate records from a suite of ice cores from Greenland and Antarctica, the eVolv2k database includes estimates of the magnitudes and approximate source latitudes of major volcanic stratospheric sulfur injection (VSSI) events from 500 BCE to 1900 CE, constituting an update of prior reconstructions and an extension of the record by 1000 years. The database incorporates improvements to the ice core records (in terms of synchronisation and dating) and refinements to the methods used to estimate VSSI from ice core records, and it includes first estimates of the random uncertainties in VSSI values. VSSI estimates for many of the largest eruptions, including Samalas (1257), Tambora (1815), and Laki (1783), are within 10 % of prior estimates. A number of strong events are included in eVolv2k which are largely underestimated or not included in earlier VSSI reconstructions, including events in 540, 574, 682, and 1108 CE. The long-term annual mean VSSI from major volcanic eruptions is estimated to be ˜ 0.5 Tg [S] yr-1, ˜ 50 % greater than a prior reconstruction due to the identification of more events and an increase in the magnitude of many intermediate events. A long-term latitudinally and monthly resolved stratospheric aerosol optical depth (SAOD) time series is reconstructed from the eVolv2k VSSI estimates, and the resulting global mean SAOD is found to be similar (within 33 %) to a prior reconstruction for most of the largest eruptions. The long-term (500 BCE-1900 CE) average global mean SAOD estimated from the eVolv2k VSSI estimates including a constant background injection of stratospheric sulfur is ˜ 0.014, 30 % greater than a prior reconstruction. These new long-term reconstructions of past VSSI and SAOD variability give context to recent volcanic forcing, suggesting that the 20th century was a period of somewhat weaker than average volcanic forcing, with current best estimates of 20th century mean VSSI and SAOD values being 25 and 14 % less, respectively, than the mean of the 500 BCE to 1900 CE period. The reconstructed VSSI and SAOD data are available at https://doi.org/10.1594/WDCC/eVolv2k_v2.

  2. Engineering description of the ascent/descent bet product

    NASA Technical Reports Server (NTRS)

    Seacord, A. W., II

    1986-01-01

    The Ascent/Descent output product is produced in the OPIP routine from three files which constitute its input. One of these, OPIP.IN, contains mission specific parameters. Meteorological data, such as atmospheric wind velocities, temperatures, and density, are obtained from the second file, the Corrected Meteorological Data File (METDATA). The third file is the TRJATTDATA file which contains the time-tagged state vectors that combine trajectory information from the Best Estimate of Trajectory (BET) filter, LBRET5, and Best Estimate of Attitude (BEA) derived from IMU telemetry. Each term in the two output data files (BETDATA and the Navigation Block, or NAVBLK) are defined. The description of the BETDATA file includes an outline of the algorithm used to calculate each term. To facilitate describing the algorithms, a nomenclature is defined. The description of the nomenclature includes a definition of the coordinate systems used. The NAVBLK file contains navigation input parameters. Each term in NAVBLK is defined and its source is listed. The production of NAVBLK requires only two computational algorithms. These two algorithms, which compute the terms DELTA and RSUBO, are described. Finally, the distribution of data in the NAVBLK records is listed.

  3. An efficient and stable hydrodynamic model with novel source term discretization schemes for overland flow and flood simulations

    NASA Astrophysics Data System (ADS)

    Xia, Xilin; Liang, Qiuhua; Ming, Xiaodong; Hou, Jingming

    2017-05-01

    Numerical models solving the full 2-D shallow water equations (SWEs) have been increasingly used to simulate overland flows and better understand the transient flow dynamics of flash floods in a catchment. However, there still exist key challenges that have not yet been resolved for the development of fully dynamic overland flow models, related to (1) the difficulty of maintaining numerical stability and accuracy in the limit of disappearing water depth and (2) inaccurate estimation of velocities and discharges on slopes as a result of strong nonlinearity of friction terms. This paper aims to tackle these key research challenges and present a new numerical scheme for accurately and efficiently modeling large-scale transient overland flows over complex terrains. The proposed scheme features a novel surface reconstruction method (SRM) to correctly compute slope source terms and maintain numerical stability at small water depth, and a new implicit discretization method to handle the highly nonlinear friction terms. The resulting shallow water overland flow model is first validated against analytical and experimental test cases and then applied to simulate a hypothetical rainfall event in the 42 km2 Haltwhistle Burn, UK.
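
    The friction nonlinearity such a scheme must handle comes from the Manning term in the momentum equation, which stiffens as depth vanishes. A common pointwise semi-implicit treatment, shown here as a sketch of the general idea rather than the paper's specific discretization, keeps |q| explicit and q implicit, giving a stable closed-form update:

        # Momentum balance with Manning friction:  dq/dt = a - g n^2 |q| q / h^(7/3).
        # Semi-implicit update:  q_new = (q + dt*a) / (1 + dt g n^2 |q| / h^(7/3)).
        G = 9.81

        def friction_update(q, h, a, dt, n_manning=0.03, h_min=1e-6):
            """One semi-implicit friction step for unit discharge q (m^2/s) at depth h (m)."""
            h = max(h, h_min)                     # guard against vanishing depth
            denom = 1.0 + dt * G * n_manning ** 2 * abs(q) / h ** (7.0 / 3.0)
            return (q + dt * a) / denom

        q = 0.5                                   # initial unit discharge, m^2/s
        for _ in range(10):
            q = friction_update(q, h=0.01, a=G * 0.01 * 0.05, dt=1.0)   # slope source, S0 = 0.05
        print(f"q after 10 steps of thin sheet flow: {q:.4f} m^2/s")    # relaxes to Manning equilibrium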

  4. Methods for assessing long-term mean pathogen count in drinking water and risk management implications.

    PubMed

    Englehardt, James D; Ashbolt, Nicholas J; Loewenstine, Chad; Gadzinski, Erik R; Ayenu-Prah, Albert Y

    2012-06-01

    Recently pathogen counts in drinking and source waters were shown theoretically to have the discrete Weibull (DW) or closely related discrete growth distribution (DGD). The result was demonstrated versus nine short-term and three simulated long-term water quality datasets. These distributions are highly skewed such that available datasets seldom represent the rare but important high-count events, making estimation of the long-term mean difficult. In the current work the methods, and data record length, required to assess long-term mean microbial count were evaluated by simulation of representative DW and DGD waterborne pathogen count distributions. Also, microbial count data were analyzed spectrally for correlation and cycles. In general, longer data records were required for more highly skewed distributions, conceptually associated with more highly treated water. In particular, 500-1,000 random samples were required for reliable assessment of the population mean ±10%, though 50-100 samples produced an estimate within one log (45%) below. A simple correlated first order model was shown to produce count series with 1/f signal, and such periodicity over many scales was shown in empirical microbial count data, for consideration in sampling. A tiered management strategy is recommended, including a plan for rapid response to unusual levels of routinely-monitored water quality indicators.
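
    The sampling difficulty this record quantifies can be reproduced directly: the discrete Weibull with survival function P(X >= k) = q^(k^beta) can be sampled by flooring a continuous Weibull variate, and for highly skewed cases the sample mean converges slowly. A minimal sketch (the q and beta values are illustrative, not fitted to any water-quality dataset):

        import numpy as np

        def sample_discrete_weibull(q, beta, size, rng):
            """X = floor(W) where W has survival q**(x**beta), so P(X >= k) = q**(k**beta)."""
            u = rng.uniform(size=size)
            return np.floor((np.log(u) / np.log(q)) ** (1.0 / beta)).astype(int)

        rng = np.random.default_rng(11)
        q, beta = 0.6, 0.35                     # beta < 1 gives a heavy right tail
        for n in [50, 100, 500, 1000, 10000]:
            counts = sample_discrete_weibull(q, beta, n, rng)
            print(f"n = {n:6d}: sample mean = {counts.mean():8.2f}")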

  5. Overview of seismic potential in the central and eastern United States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schweig, E.S.

    1995-12-31

    The seismic potential of any region can be framed in terms of the locations of source zones, the frequency of earthquake occurrence for each source, and the maximum size of earthquake that can be expected from each source. As delineated by modern and historical seismicity, the most important seismic source zones affecting the eastern United States include the New Madrid and Wabash Valley seismic zones of the central U.S., the southern Appalachians and Charleston, South Carolina, areas in the southeast, and the northern Appalachians and Adirondacks in the northeast. The most prominent of these, in terms of current seismicity and historical seismic moment release, is the New Madrid seismic zone, which produced three earthquakes of moment magnitude ≥ 8 in 1811 and 1812. The frequency of earthquake recurrence can be examined using the instrumental record, the historical record, and the geological record. Each record covers a unique time period and has a different scale of temporal resolution and completeness of the data set. The Wabash Valley is an example where the long-term geological record indicates a greater potential than the instrumental and historical records. This points to the need to examine all of the evidence in any region in order to obtain credible estimates of earthquake hazards. Although earthquake hazards may be dominated by mid-magnitude 6 earthquakes within the mapped seismic source zones, the 1994 Northridge, California, earthquake is just the most recent example of the danger of assuming future events will occur on faults known to have had past events, and of how destructive such an earthquake can be.

  6. Energy resources - cornucopia or empty barrel?

    USGS Publications Warehouse

    McCabe, P.J.

    1998-01-01

    Over the last 25 yr, considerable debate has continued about the future supply of fossil fuel. On one side are those who believe we are rapidly depleting resources and that the resulting shortages will have a profound impact on society. On the other side are those who see no impending crisis because long-term trends are for cheaper prices despite rising production. The concepts of resources and reserves have historically created considerable misunderstanding in the minds of many nongeologists. Hubbert-type predictions of energy production assume that there is a finite supply of energy that is measurable; however, estimates of resources and reserves are inventories of the amounts of a fossil fuel perceived to be available over some future period of time. As those resources/reserves are depleted over time, additional amounts of fossil fuels are inventoried. Throughout most of this century, for example, crude oil reserves in the United States have represented a 10-14-yr supply. For the last 50 yr, crude oil resource estimates have represented about a 60-70-yr supply for the United States. Division of reserve or resource estimates by current or projected annual consumption therefore is circular in reasoning and can lead to highly erroneous conclusions. Production histories of fossil fuels are driven more by demand than by the geologic abundance of the resource. Examination of some energy resources with well-documented histories leads to two conceptual models that relate production to price. The closed-market model assumes that there is only one source of energy available. Although the price may initially fall because of economies of scale, over the long term prices rise as the energy source is depleted and becomes progressively more expensive to extract. By contrast, the open-market model assumes that there is a variety of available energy sources and that competition among them leads to long-term stable or falling prices. At the moment, the United States and the world approximate the open-market model, but in the long run the supply of fossil fuel is finite, and prices inevitably will rise unless alternate energy sources substitute for fossil energy supplies; however, there appears little reason to suspect that long-term price trends will rise significantly over the next few decades.

  7. Land Water Storage within the Congo Basin Inferred from GRACE Satellite Gravity Data

    NASA Technical Reports Server (NTRS)

    Crowley, John W.; Mitrovica, Jerry X.; Bailey, Richard C.; Tamisiea, Mark E.; Davis, James L.

    2006-01-01

    GRACE satellite gravity data are used to estimate terrestrial (surface plus ground) water storage within the Congo Basin in Africa for the period April 2002 - May 2006. These estimates exhibit significant seasonal (30 ± 6 mm of equivalent water thickness) and long-term trends, the latter yielding a total loss of approximately 280 km^3 of water over the 50-month span of data. We also combine GRACE and precipitation data sets (CMAP, TRMM) to explore the relative contributions of the source term to the seasonal hydrological balance within the Congo Basin. We find that the seasonal water storage tends to saturate for anomalies greater than 30-44 mm of equivalent water thickness. Furthermore, precipitation contributed roughly three times the peak water storage after anomalously rainy seasons, in early 2003 and 2005, implying an approximately 60-70% loss from runoff and evapotranspiration. Finally, a comparison of residual land water storage (monthly estimates minus best-fitting trends) in the Congo and Amazon Basins shows an anticorrelation, in agreement with the 'see-saw' variability inferred by others from runoff data.
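
    Seasonal amplitudes and long-term trends of the kind quoted here come from a harmonic-plus-trend least-squares fit to the monthly storage series. A minimal sketch of that decomposition on a synthetic record (the amplitudes, trend, and noise level are illustrative):

        import numpy as np

        rng = np.random.default_rng(6)
        t = np.arange(2002.3, 2006.4, 1.0 / 12.0)          # monthly epochs, decimal years

        # Synthetic storage series (mm equivalent water): trend + annual cycle + noise.
        y = -15.0 * (t - t[0]) + 30.0 * np.sin(2 * np.pi * t) + rng.normal(0.0, 5.0, t.size)

        # Fit y = a + b*(t - t0) + c*sin(2 pi t) + d*cos(2 pi t).
        X = np.column_stack([np.ones_like(t), t - t[0],
                             np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        print(f"trend: {coef[1]:.1f} mm/yr, annual amplitude: {np.hypot(coef[2], coef[3]):.1f} mm")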

  8. Glottal aerodynamics in compliant, life-sized vocal fold models

    NASA Astrophysics Data System (ADS)

    McPhail, Michael; Dowell, Grant; Krane, Michael

    2013-11-01

    This talk presents high-speed PIV measurements in compliant, life-sized models of the vocal folds. A clearer understanding of the fluid-structure interaction of voiced speech, how it produces sound, and how it varies with pathology is required to improve clinical diagnosis and treatment of vocal disorders. Physical models of the vocal folds can answer questions regarding the fundamental physics of speech, as well as the ability of clinical measures to detect the presence and extent of disorder. Flow fields were recorded in the supraglottal region of the models to estimate terms in the equations of fluid motion, and their relative importance. Experiments were conducted over a range of driving pressures; flow rates, measured with a ball flowmeter, and subglottal pressures, measured with a micro-manometer, are reported for each case. Imaging of vocal fold motion, vector fields showing glottal jet behavior, and terms estimated by control volume analysis will be presented. The use of these results for comparison with clinical measures, and for the estimation of aeroacoustic source strengths, will be discussed. We acknowledge support from NIH R01 DC005642.

  9. Towards Personal Exposures: How Technology Is Changing Air Pollution and Health Research.

    PubMed

    Larkin, A; Hystad, P

    2017-12-01

    We present a review of emerging technologies and how these can transform personal air pollution exposure assessment and subsequent health research. Estimating personal air pollution exposures is currently split broadly into methods for modeling exposures for large populations versus measuring exposures for small populations. Air pollution sensors, smartphones, and air pollution models capitalizing on big/new data sources offer tremendous opportunity for unifying these approaches and improving long-term personal exposure prediction at scales needed for population-based research. A multi-disciplinary approach is needed to combine these technologies to not only estimate personal exposures for epidemiological research but also determine drivers of these exposures and new prevention opportunities. While available technologies can revolutionize air pollution exposure research, ethical, privacy, logistical, and data science challenges must be met before widespread implementations occur. Available technologies and related advances in data science can improve long-term personal air pollution exposure estimates at scales needed for population-based research. This will advance our ability to evaluate the impacts of air pollution on human health and develop effective prevention strategies.

  10. Boreal forest soil erosion and soil-atmosphere carbon exchange

    NASA Astrophysics Data System (ADS)

    Billings, S. A.; Harden, J. W.; O'Donnell, J.; Sierra, C. A.

    2013-12-01

    Erosion may become an increasingly important agent of change in boreal systems with climate warming, due to enhanced ice wedge degradation and increases in the frequency and intensity of stand-replacing fires. Ice wedge degradation can induce ground surface subsidence and lateral movement of mineral soil downslope, and fire can result in the loss of O horizons and live roots, with associated increases in wind- and water-promoted erosion until vegetation re-establishment. It is well-established that soil erosion can induce significant atmospheric carbon (C) source and sink terms, with the strength of these terms dependent on the fate of eroded soil organic carbon (SOC) and the extent to which SOC oxidation and production characteristics change with erosion. In spite of the large SOC stocks in the boreal system and the high probability that boreal soil profiles will experience enhanced erosion in the coming decades, no one has estimated the influence of boreal erosion on the atmospheric C budget, a phenomenon that can serve as a positive or negative feedback to climate. We employed an interactive erosion model that permits the user to define 1) profile characteristics, 2) the erosion rate, and 3) the extent to which each soil layer at an eroding site retains its pre-erosion SOC oxidation and production rates (nox and nprod=0, respectively) vs. adopts the oxidation and production rates of previous, non-eroded soil layers (nox and nprod=1, respectively). We parameterized the model using soil profile characteristics observed at a recently burned site in interior Alaska (Hess Creek), defining SOC content and turnover times. We computed the degree to which post-burn erosion of mineral soil generates an atmospheric C sink or source while varying erosion rates and assigning multiple values of nox and nprod between 0 and 1, providing insight into the influence of erosion rate, SOC oxidation, and SOC production on C dynamics in this and similar profiles. Varying nox and nprod did not induce meaningful changes in model estimates of atmospheric C source or sink strength, likely due to the low turnover rate of SOC in this system. However, variation in mineral soil erosion rates induced large shifts in the source and sink strengths for atmospheric C; after 50 y of mineral soil erosion at 5 cm y-1, we observed a maximum C source of 35 kg C m-2 and negligible sink strength. Doubling the erosion rate approximately doubled the source strength. Scaling these estimates to the region requires estimates of the area undergoing mineral soil erosion in forests similar to those modeled. We suggest that erosion is an important but little studied feature of fire-driven boreal systems that will influence atmospheric CO2 budgets.

  11. A Novel Strategy of Ambiguity Correction for the Improved Faraday Rotation Estimator in Linearly Full-Polarimetric SAR Data.

    PubMed

    Li, Jinhui; Ji, Yifei; Zhang, Yongsheng; Zhang, Qilei; Huang, Haifeng; Dong, Zhen

    2018-04-10

    Spaceborne synthetic aperture radar (SAR) missions operating at low frequencies, such as L-band or P-band, are significantly influenced by the ionosphere. As one of the serious ionospheric effects, Faraday rotation (FR) is a remarkable distortion source for polarimetric SAR (PolSAR) applications. Various published FR estimators, along with an improved one, have been introduced to address this issue, all of which are implemented by processing a set of PolSAR real data. Performance analysis shows that the improved estimator is the most robust, especially in terms of system noise. However, all published estimators, including the improved one, suffer from a potential FR angle (FRA) ambiguity. A novel strategy for ambiguity correction in these FR estimators is proposed and presented as a processing flow, divided into pixel-level and image-level correction. The former has not previously been recognized and is therefore considered in particular. Finally, validation experiments show the prominent performance of the proposed strategy.
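
    For orientation, below is a sketch of the classic first-order FR estimator of Bickel and Bates, the family on which published estimators build; the paper's improved estimator is not reproduced here, and the array layout is an assumption for illustration. Because np.angle returns a principal value, the estimate wraps outside (-45, 45] degrees, which is precisely the FRA ambiguity the proposed strategy corrects.

    ```python
    import numpy as np

    def faraday_rotation(S):
        """Bickel-Bates FR estimate from a linear-pol scattering matrix.

        S[..., 0, 0] = HH, S[..., 0, 1] = HV,
        S[..., 1, 0] = VH, S[..., 1, 1] = VV  (complex arrays).
        """
        hh, hv = S[..., 0, 0], S[..., 0, 1]
        vh, vv = S[..., 1, 0], S[..., 1, 1]
        # Circular-basis cross terms (up to a constant factor):
        z12 = (hv - vh) + 1j * (hh + vv)
        z21 = -(hv - vh) + 1j * (hh + vv)
        # One-look estimate; averaging z12*conj(z21) over a window reduces noise.
        return 0.25 * np.angle(z12 * np.conj(z21))  # radians, principal value
    ```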

  12. Development of a Ballistic Impact Detection System

    DTIC Science & Technology

    2004-09-01

    …body surface remains the largest variable to overcome. The snug fit of the body armour stabilizes the sensors and their response…

  13. Delivery of Modular Lethality via a Parent-Child Concept

    DTIC Science & Technology

    2015-02-01

    …the downrange distance to the target, … the time of flight, … the distance of the thruster force from the body center of gravity… velocity and time of flight can be estimated or measured in flight. These values can be collected in a term, and the two components of lateral…

  14. A Source Term for Wave Attenuation by Sea Ice in WAVEWATCH III (registered trademark): IC4

    DTIC Science & Technology

    2017-06-07

    …Diamonds indicate active, moored AWACs. Circle indicates location of R/V Sikuliaq. Thick magenta and white lines indicate path of R/V Sikuliaq (past and future ship position, respectively)…

  15. Acoustic Impact of Short-Term Ocean Variability in the Okinawa Trough

    DTIC Science & Technology

    2010-01-20

    …nature run: Generalized Digital Environment Model (GDEM) 3.0 climatology[1], Modular Ocean Data Assimilation System (MODAS) synthetic profiles[2], Navy… potentially preferred for a particular class of applications, and thus a possible source of sound speed for estimates of acoustic transmission. Three (GDEM, MODAS, and NCODA) are statistical products, and the other three are dynamic forecasts from NCOM. GDEM is a climatology based solely on historical…

  16. How Big Was It? Getting at Yield

    NASA Astrophysics Data System (ADS)

    Pasyanos, M.; Walter, W. R.; Ford, S. R.

    2013-12-01

    One of the most coveted pieces of information in the wake of a nuclear test is the explosive yield. Determining the yield from remote observations, however, is not trivial. For instance, recorded seismic amplitudes, used to estimate the yield, are significantly modified by the intervening media, which vary widely and must be properly accounted for. Even after correcting for propagation effects such as geometrical spreading, attenuation, and station site terms, getting from the resulting source term to a yield depends on the specifics of the explosion source model, including material properties and depth. Some formulas are based on the assumption that the explosion has a standard depth-of-burial, and observed amplitudes can vary if the actual test is significantly overburied or underburied. We will consider the complications and challenges of making these determinations using a number of standard, more traditional methods and a more recent method that we have developed using regional waveform envelopes. We will do this comparison for recent declared nuclear tests from the DPRK. We will also compare the methods using older explosions at the Nevada Test Site with announced yields, materials, and depths, so that actual performance can be measured. In all cases, we also strive to quantify realistic uncertainties on the yield estimation.
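
    The magnitude-to-yield step can be illustrated with a generic relation of the form mb = a + b*log10(W). The coefficients below are placeholders for a fully coupled hard-rock setting, chosen for illustration only; real values depend on emplacement medium, depth of burial, and regional calibration.

    ```python
    # Hedged sketch: invert a generic magnitude-yield relation for yield W (kt).
    A, B = 4.45, 0.75   # hypothetical coefficients, illustration only

    def yield_from_mb(mb, a=A, b=B):
        """Invert mb = a + b*log10(W) for W in kilotons."""
        return 10.0 ** ((mb - a) / b)

    # With b = 0.75, a one-unit magnitude error maps into a factor of
    # 10**(1/0.75) ~ 21 in yield, which is why propagation and source
    # corrections matter so much.
    ```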

  17. Surgeon and Hospital Volume as Quality Indicators for CABG in Taiwan: Examining Hazard to Mortality and Accounting for Unobserved Heterogeneity

    PubMed Central

    Hockenberry, Jason M; Lien, Hsien-Ming; Chou, Shin-Yi

    2010-01-01

    Objective To investigate whether provider volume has an impact on the hazard of mortality for coronary artery bypass grafting (CABG) patients in Taiwan. Data Sources/Study Setting Multiple sources of linked data from the National Health Insurance Program in Taiwan. Study Design The linked data were used to identify 27,463 patients who underwent CABG without concomitant angioplasty or valve procedures and the surgeon and hospital volumes. Generalized estimating equations and hazard models were estimated to assess the impact of volume on mortality. The hazard modeling technique used accounts for bias stemming from unobserved heterogeneity. Principal Findings Both surgeon and hospital volume quartiles are inversely related to the hazard of mortality after CABG. Patients whose surgeon is in the three higher volume quartiles have lower 1-, 3-, 6-, and 12-month mortality after CABG, while only those having their procedure performed at the highest quartile of volume hospitals have lower mortality outcomes. Conclusions Mortality outcomes are related to provider CABG volume in Taiwan. Unobserved heterogeneity is a concern in the volume–outcome relationship; after accounting for it, surgeon volume effects on short-term mortality are large. Using models controlling for unobserved heterogeneity and examining longer term mortality may still differentiate provider quality by volume. PMID:20662948

  18. Long-term health effects of Vietnam-era military service: A quasi-experiment using Australian conscription lotteries.

    PubMed

    Johnston, David W; Shields, Michael A; Siminski, Peter

    2016-01-01

    This paper estimates the long-term health effects of Vietnam-era military service using Australia's National conscription lotteries for identification. Our primary contribution is the quality and breadth of our health outcomes. We use several administrative sources, containing a near-universe of records on mortality (1994-2011), cancer diagnoses (1982-2008), and emergency hospital presentations (2005-2010). We also analyse a range of self-reported morbidity indicators (2006-2009). We find no significant long-term effects on mortality, cancer or emergency hospital visits. In contrast, we find significant detrimental effects on a number of morbidity measures. Hearing and mental health appear to be particularly affected. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. A novel pathway of direct methane production and emission by eukaryotes including plants, animals and fungi: An overview

    NASA Astrophysics Data System (ADS)

    Liu, Jiangong; Chen, Huai; Zhu, Qiuan; Shen, Yan; Wang, Xue; Wang, Meng; Peng, Changhui

    2015-08-01

    Methane (CH4) is a powerful greenhouse gas with a global warming potential 28 times that of carbon dioxide (CO2). CH4 is responsible for approximately 20% of the Earth's warming since pre-industrial times. Knowledge of the sources of CH4 is crucial due to the recent substantial interannual variability of growth rates and uncertainties regarding individual sources. The prevailing paradigm is that methanogenesis carried out by methanogenic archaea occurs primarily under strictly anaerobic conditions. However, in the past decade, studies have confirmed direct CH4 release from three important kingdoms of eukaryotes (Plantae, Animalia and Fungi), even in the presence of oxygen. This novel CH4 production pathway has been aptly termed 'aerobic CH4 production' to distinguish it from the well-known anaerobic CH4 production pathway, which involves catalytic activity by methanogenic archaeal enzymes. In this review, we collated recent experimental evidence from the published literature and documented this novel pathway of direct CH4 production and emission by eukaryotes. The mechanisms involved in this pathway may be related to protective strategies of eukaryotes in response to changing environmental stresses, with CH4 a by-product or end-product during or at the end of the process(es) that originates from organic methyl-type compounds. Based on the existing, albeit uncertain estimates, plants seem to contribute less to the global CH4 budget (3-24%) compared to previous estimates (10-37%). We still lack estimates of CH4 emissions by animals and fungi. Overall, there is an urgent need to identify the precursors for this novel CH4 source and improve our understanding of the mechanisms of direct CH4 production and the impacts of environmental stresses. An estimate of this new CH4 source, which was not considered as a CH4 source by the Intergovernmental Panel on Climate Change (IPCC) (2013), could be useful for better quantification of the global CH4 budget.

  20. Medicaid's Role in the Many Markets for Health Care

    PubMed Central

    Quinn, Kevin; Kitchener, Martin

    2007-01-01

    To illuminate Medicaid's growing role as a health care purchaser, we estimated Medicaid spending and market shares for 30 markets defined by provider category of service. For approximately 15 markets, our estimates are more detailed than the data available from standard sources. Two-thirds of Medicaid spending occurs in markets where the program has a modest market share. The other one-third occurs in markets that Medicaid dominates, especially in the areas of long-term care (LTC), mental retardation, and mental health. We explore the implications of the different roles for payment policy, industry organization, data availability, and quality of care. PMID:17722752

  1. Global carbon - nitrogen - phosphorus cycle interactions: A key to solving the atmospheric CO2 balance problem?

    NASA Technical Reports Server (NTRS)

    Peterson, B. J.; Mellillo, J. M.

    1984-01-01

    If all reported biotic sinks of atmospheric CO2 were added, a value of about 0.4 Gt C/yr would be found. For each category, a very high (non-conservative) estimate was used. This still does not provide a sufficient basis for achieving a balance between the sources and sinks of atmospheric CO2. The bulk of the discrepancy lies in a combination of errors in the major terms, the greatest being in the net biotic release and ocean uptake segments, but smaller errors or biases may exist in calculations of the rate of atmospheric CO2 increase and total fossil fuel use as well. The reason why biotic sinks are not capable of balancing the CO2 increase via nutrient-matching in the short term is apparent from a comparison of the stoichiometry of the sources and sinks. The burning of fossil fuels and forest biomass releases much more CO2-carbon than is sequestered as organic carbon.

  2. From carbon sink to carbon source: extensive peat oxidation in insular Southeast Asia since 1990

    NASA Astrophysics Data System (ADS)

    Miettinen, Jukka; Hooijer, Aljosja; Vernimmen, Ronald; Liew, Soo Chin; Page, Susan E.

    2017-02-01

    Tropical peatlands of the western part of insular Southeast Asia have experienced extensive land cover changes since 1990. Typically involving drainage, these land cover changes have resulted in increased peat oxidation in the upper peat profile. In this paper we provide current (2015) and cumulative carbon emissions estimates since 1990 from peat oxidation in Peninsular Malaysia, Sumatra and Borneo, utilizing newly published peatland land cover information and the recently agreed Intergovernmental Panel on Climate Change (IPCC) peat oxidation emission values for tropical peatland areas. Our results highlight the change of one of the Earth’s most efficient long-term carbon sinks to a short-term emission source, with cumulative carbon emissions since 1990 estimated to have been in the order of 2.5 Gt C. Current (2015) levels of emissions are estimated at around 146 Mt C yr-1, with a range of 132-159 Mt C yr-1 depending on the selection of emissions factors for different land cover types. 44% (or 64 Mt C yr-1) of the emissions come from industrial plantations (mainly oil palm and Acacia pulpwood), followed by 34% (49 Mt C yr-1) of emissions from small-holder areas. Thus, altogether 78% of current peat oxidation emissions come from managed land cover types. Although based on the latest information, these estimates may still include considerable, yet currently unquantifiable, uncertainties (e.g. due to uncertainties in the extent of peatlands and drainage networks) which need to be focused on in future research. In comparison, fire induced carbon dioxide emissions over the past ten years for the entire equatorial Southeast Asia region have been estimated to average 122 Mt C yr-1 (www.globalfiredata.org/_index.html). The results emphasise that whilst reducing emissions from peat fires is important, urgent efforts are also needed to mitigate the constantly high level of emissions arising from peat drainage, regardless of fire occurrence.
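
    The bookkeeping behind such estimates reduces to summing drained area times an IPCC emission factor over land cover classes. The areas and factors below are placeholders for illustration, not the values used in the paper.

    ```python
    # Hedged sketch of an area-times-emission-factor inventory.
    emission_factors = {            # t C per ha per yr (hypothetical values)
        "industrial_plantation": 10.0,
        "smallholder": 8.0,
        "degraded_forest": 4.0,
    }
    areas_ha = {                    # drained peat area per class (hypothetical)
        "industrial_plantation": 6.4e6,
        "smallholder": 6.1e6,
        "degraded_forest": 8.0e6,
    }

    total_t_c = sum(areas_ha[k] * emission_factors[k] for k in areas_ha)
    shares = {k: areas_ha[k] * emission_factors[k] / total_t_c for k in areas_ha}
    print(f"total: {total_t_c / 1e6:.0f} Mt C/yr", shares)
    ```

    Note how the range quoted above (132-159 Mt C/yr) arises naturally from this structure: it reflects the choice of emission factors per land cover class rather than the area mapping alone.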

  3. Bayesian and “Anti-Bayesian” Biases in Sensory Integration for Action and Perception in the Size–Weight Illusion

    PubMed Central

    Brayanov, Jordan B.

    2010-01-01

    Which is heavier: a pound of lead or a pound of feathers? This classic trick question belies a simple but surprising truth: when lifted, the pound of lead feels heavier—a phenomenon known as the size–weight illusion. To estimate the weight of an object, our CNS combines two imperfect sources of information: a prior expectation, based on the object's appearance, and direct sensory information from lifting it. Bayes' theorem (or Bayes' law) defines the statistically optimal way to combine multiple information sources for maximally accurate estimation. Here we asked whether the mechanisms for combining these information sources produce statistically optimal weight estimates for both perceptions and actions. We first studied the ability of subjects to hold one hand steady when the other removed an object from it, under conditions in which sensory information about the object's weight sometimes conflicted with prior expectations based on its size. Since the ability to steady the supporting hand depends on the generation of a motor command that accounts for lift timing and object weight, hand motion can be used to gauge biases in weight estimation by the motor system. We found that these motor system weight estimates reflected the integration of prior expectations with real-time proprioceptive information in a Bayesian, statistically optimal fashion that discounted unexpected sensory information. This produces a motor size–weight illusion that consistently biases weight estimates toward prior expectations. In contrast, when subjects compared the weights of two objects, their perceptions defied Bayes' law, exaggerating the value of unexpected sensory information. This produces a perceptual size–weight illusion that biases weight perceptions away from prior expectations. We term this effect “anti-Bayesian” because the bias is opposite that seen in Bayesian integration. Our findings suggest that two fundamentally different strategies for the integration of prior expectations with sensory information coexist in the nervous system for weight estimation. PMID:20089821
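
    The statistically optimal combination referenced above has a simple closed form when the prior and the sensory likelihood are both Gaussian. A minimal sketch with our own variable names:

    ```python
    # Precision-weighted (Bayesian) combination of prior and sensory estimate.
    def bayes_weight_estimate(mu_prior, var_prior, mu_sense, var_sense):
        """Minimum-variance combination of two Gaussian estimates."""
        w = var_sense / (var_prior + var_sense)     # weight on the prior
        mu_post = w * mu_prior + (1 - w) * mu_sense
        var_post = (var_prior * var_sense) / (var_prior + var_sense)
        return mu_post, var_post

    # A Bayesian (motor-like) estimate is pulled toward the prior; the
    # perceptual "anti-Bayesian" bias reported above corresponds to moving
    # the estimate away from the prior, overweighting unexpected evidence.
    ```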

  4. Systematic review of the incidence of sudden cardiac death in the United States.

    PubMed

    Kong, Melissa H; Fonarow, Gregg C; Peterson, Eric D; Curtis, Anne B; Hernandez, Adrian F; Sanders, Gillian D; Thomas, Kevin L; Hayes, David L; Al-Khatib, Sana M

    2011-02-15

    The need for consistent and current data describing the true incidence of sudden cardiac arrest (SCA) and/or sudden cardiac death (SCD) was highlighted during the most recent Sudden Cardiac Arrest Thought Leadership Alliance's (SCATLA) Think Tank meeting of national experts with broad representation of key stakeholders, including thought leaders and representatives from the American College of Cardiology, American Heart Association, and the Heart Rhythm Society. As such, to evaluate the true magnitude of this public health problem, we performed a systematic literature search in MEDLINE using the MeSH headings, "death, sudden" OR the terms "sudden cardiac death" OR "sudden cardiac arrest" OR "cardiac arrest" OR "cardiac death" OR "sudden death" OR "arrhythmic death." Study selection criteria included peer-reviewed publications of primary data used to estimate SCD incidence in the U.S. We used Web of Science's Cited Reference Search to evaluate the impact of each primary estimate on the medical literature by determining the number of times each "primary source" has been cited. The estimated U.S. annual incidence of SCD varied widely from 180,000 to >450,000 among 6 included studies. These different estimates were in part due to different data sources (with data age ranging from 1980 to 2007), definitions of SCD, case ascertainment criteria, methods of estimation/extrapolation, and sources of case ascertainment. The true incidence of SCA and/or SCD in the U.S. remains unclear, with a wide range in the available estimates, many of which are badly dated. As reliable estimates of SCD incidence are important for improving risk stratification and prevention, future efforts are clearly needed to establish uniform definitions of SCA and SCD and then to prospectively and precisely capture cases of SCA and SCD in the overall U.S. population. Copyright © 2011 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  5. Comparative assessment of the global fate and transport pathways of long-chain perfluorocarboxylic acids (PFCAs) and perfluorocarboxylates (PFCs) emitted from direct sources.

    PubMed

    Armitage, James M; Macleod, Matthew; Cousins, Ian T

    2009-08-01

    A global-scale multispecies mass balance model was used to simulate the long-term fate and transport of perfluorocarboxylic acids (PFCAs) with eight to thirteen carbons (C8-C13) and their conjugate bases, the perfluorocarboxylates (PFCs). The main purpose of this study was to assess the relative long-range transport (LRT) potential of each conjugate pair, collectively termed PFC(A)s, considering emissions from direct sources (i.e., manufacturing and use) only. Overall LRT potential (atmospheric + oceanic) varied as a function of chain length and depended on assumptions regarding pKa and mode of entry. Atmospheric transport makes a relatively higher contribution to overall LRT potential for PFC(A)s with longer chain length, which reflects the increasing trend in the air-water partition coefficient (K(AW)) of the neutral PFCA species with chain length. Model scenarios using estimated direct emissions of the C8, C9, and C11 PFC(A)s indicate that the mass fluxes to the Arctic marine environment associated with oceanic transport are in excess of mass fluxes from indirect sources (i.e., atmospheric transport of precursor substances such as fluorotelomer alcohols and subsequent degradation to PFCAs). Modeled concentrations of C8 and C9 in the abiotic environment are broadly consistent with available monitoring data in surface ocean waters. Furthermore, the modeled concentration ratios of C8 to C9 are reconcilable with the homologue pattern frequently observed in biota, assuming a positive correlation between bioaccumulation potential and chain length. Modeled concentration ratios of C11 to C10 are more difficult to reconcile with monitoring data in both source and remote regions. Our model results for C11 and C10 therefore imply that either (i) indirect sources are dominant or (ii) estimates of direct emission are not accurate for these homologues.

  6. An optimized inverse modelling method for determining the location and strength of a point source releasing airborne material in urban environment

    NASA Astrophysics Data System (ADS)

    Efthimiou, George C.; Kovalets, Ivan V.; Venetsanos, Alexandros; Andronopoulos, Spyros; Argyropoulos, Christos D.; Kakosimos, Konstantinos

    2017-12-01

    An improved inverse modelling method to estimate the location and the emission rate of an unknown stationary point source of passive atmospheric pollutant in a complex urban geometry is incorporated in the Computational Fluid Dynamics code ADREA-HF and presented in this paper. The key improvement in relation to the previous version of the method lies in a two-step segregated approach. At first, only the source coordinates are analysed using a correlation function of measured and calculated concentrations. In the second step, the source rate is identified by minimizing a quadratic cost function. The validation of the new algorithm is performed by simulating the MUST wind tunnel experiment. A grid-independent flow field solution is first attained by applying successive refinements of the computational mesh, and the final wind flow is validated against the measurements quantitatively and qualitatively. The old and new versions of the source term estimation method are tested on a coarse and a fine mesh. The new method appeared to be more robust, giving satisfactory estimations of source location and emission rate on both grids. The performance of the old version of the method varied between failure and success and appeared to be sensitive to the selection of the model error magnitude that needs to be inserted in its quadratic cost function. The performance of the method depends also on the number and the placement of sensors constituting the measurement network. Of significant interest for the practical application of the method in urban settings is the number of concentration sensors required to obtain a 'satisfactory' determination of the source. The probability of obtaining a satisfactory solution, according to specified criteria, by the new method has been assessed as a function of the number of sensors that constitute the measurement network.
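
    A minimal sketch of the two-step segregated inversion, assuming precomputed unit-rate dispersion fields for a set of candidate source locations; this is a strong simplification of the ADREA-HF implementation, for illustration only.

    ```python
    import numpy as np

    def locate_and_rate(C_unit, c_obs):
        """Two-step source estimation.

        C_unit[k]: sensor concentrations produced by a unit-rate source at
                   candidate location k (from precomputed dispersion runs).
        c_obs:     measured sensor concentrations.
        """
        # Step 1: location = candidate maximizing the correlation between
        # modelled and measured concentrations.
        corr = [np.corrcoef(c_mod, c_obs)[0, 1] for c_mod in C_unit]
        k_best = int(np.argmax(corr))
        # Step 2: with location fixed, the rate minimizing the quadratic cost
        # ||q * c_mod - c_obs||^2 has the closed form below.
        c_mod = C_unit[k_best]
        q = float(c_mod @ c_obs) / float(c_mod @ c_mod)
        return k_best, q
    ```

    Separating the steps this way is what removes the sensitivity to the model-error magnitude noted above: the correlation in step 1 is invariant to the unknown rate, so no error weighting enters until the rate fit.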

  7. Global Xenon-133 Emission Inventory Caused by Medical Isotope Production and Derived from the Worldwide Technetium-99m Demand

    NASA Astrophysics Data System (ADS)

    Kalinowski, Martin B.; Grosch, Martina; Hebel, Simon

    2014-03-01

    Emissions from medical isotope production are the most important source of background for atmospheric radioxenon measurements, which are an essential part of nuclear explosion monitoring. This article presents a new approach for estimating the global annual radioxenon emission inventory caused by medical isotope production, using the amount of Tc-99m applications in hospitals as the basis. Tc-99m is the most commonly used isotope in radiology and dominates medical isotope production. This paper presents the first estimate of the global production of Tc-99m. Depending on the production and transport scenario, global xenon emissions of 11-45 PBq/year can be derived from the global isotope demand. The lower end of this estimate is in good agreement with other estimates that make use of reported releases and realistic process simulations, which supports the validity of the complementary assessment method proposed in this paper. The method may be of relevance for future emission scenarios and for estimating the contribution to the global source term from countries and operators that do not make sufficient radioxenon release information available. It depends on sound data on medical treatments with radiopharmaceuticals and on technical information about the supplier's production process. This might help in understanding the apparent underestimation of the global emission inventory found by atmospheric transport modelling.

  8. Looking inside the microseismic cloud using seismic interferometry

    NASA Astrophysics Data System (ADS)

    Matzel, E.; Rhode, A.; Morency, C.; Templeton, D. C.; Pyle, M. L.

    2015-12-01

    Microseismicity provides a direct means of measuring the physical characteristics of active tectonic features such as fault zones. Thousands of microquakes are often associated with an active site. This cloud of microseismicity helps define the tectonically active region. By processing it with novel geophysical techniques, we can isolate the energy sensitive to the faulting region itself. The virtual seismometer method (VSM) is a technique of seismic interferometry that provides precise estimates of the Green's function (GF) between earthquakes. In many ways the converse of ambient noise correlation, it is very sensitive to the source parameters (location, mechanism and magnitude) and to the Earth structure in the source region. In a region with 1000 microseisms, we can calculate roughly 500,000 waveforms sampling the active zone. At the same time, VSM collapses the computation domain down to the size of the cloud of microseismicity, often by 2-3 orders of magnitude. In simple terms, VSM involves correlating the waveforms from a pair of events recorded at an individual station and then stacking the results over all stations to obtain the final result. In the far-field, when most of the stations in a network fall along a line between the two events, the result is an estimate of the GF between the two, modified by the source terms. In this geometry each earthquake is effectively a virtual seismometer recording all the others. When applied to microquakes, this alignment is often not met, and we also need to address the effects of the geometry between the two microquakes relative to each seismometer. Nonetheless, the technique is quite robust and highly sensitive to the microseismic cloud. Using data from the Salton Sea geothermal region, we demonstrate the power of the technique, illustrating our ability to scale the technique from the far-field, where sources are well separated, to the near field, where their locations fall within each other's uncertainty ellipse. VSM provides better illumination of the complex subsurface by generating precise, high frequency estimates of the GF and resolution of seismic properties between every pair of events. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
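
    In schematic form, VSM for one event pair reduces to a correlate-and-stack loop over stations; windowing, normalization, and the geometry corrections discussed above are omitted, so this is a sketch rather than the authors' processing chain.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def vsm_pair(traces_a, traces_b):
        """Stacked inter-event correlation for one event pair.

        traces_a, traces_b: (n_stations, n_samples) arrays holding the
        records of events A and B at the same stations.
        """
        stack = None
        for u_a, u_b in zip(traces_a, traces_b):
            # cross-correlation of the two event records at one station
            xc = fftconvolve(u_a, u_b[::-1], mode="full")
            stack = xc if stack is None else stack + xc
        return stack / len(traces_a)   # estimate related to the inter-event GF
    ```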

  9. Long-term financing needs for HIV control in sub-Saharan Africa in 2015-2050: a modelling study.

    PubMed

    Atun, Rifat; Chang, Angela Y; Ogbuoji, Osondu; Silva, Sachin; Resch, Stephen; Hontelez, Jan; Bärnighausen, Till

    2016-03-06

    To estimate the present value of current and future funding needed for HIV treatment and prevention in 9 sub-Saharan African (SSA) countries that account for 70% of HIV burden in Africa under different scenarios of intervention scale-up. To analyse the gaps between current expenditures and funding obligation, and discuss the policy implications of future financing needs. We used the Goals module from Spectrum, and applied the most up-to-date cost and coverage data to provide a range of estimates for future financing obligations. The four different scale-up scenarios vary by treatment initiation threshold and service coverage level. We compared the model projections to current domestic and international financial sources available in selected SSA countries. In the 9 SSA countries, the estimated resources required for HIV prevention and treatment in 2015-2050 range from US$98 billion to maintain current coverage levels for treatment and prevention with eligibility for treatment initiation at CD4 count of <500/mm(3) to US$261 billion if treatment were to be extended to all HIV-positive individuals and prevention scaled up. With the addition of new funding obligations for HIV--which arise implicitly through commitment to achieve higher than current treatment coverage levels--overall financial obligations (sum of debt levels and the present value of the stock of future HIV funding obligations) would rise substantially. Investing upfront in scale-up of HIV services to achieve high coverage levels will reduce HIV incidence, prevention and future treatment expenditures by realising long-term preventive effects of ART to reduce HIV transmission. Future obligations are too substantial for most SSA countries to be met from domestic sources alone. New sources of funding, in addition to domestic sources, include innovative financing. Debt sustainability for sustained HIV response is an urgent imperative for affected countries and donors. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
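
    The funding-obligation arithmetic is standard discounting of a projected cost stream; the discount rate and the flat cost stream below are placeholders, not the paper's inputs.

    ```python
    # Hedged sketch: present value of a stream of annual funding needs.
    def present_value(costs_by_year, rate=0.03, base_year=2015):
        """costs_by_year: {year: cost}; returns PV at base_year."""
        return sum(c / (1 + rate) ** (y - base_year)
                   for y, c in costs_by_year.items())

    # Example (hypothetical): a flat US$5 billion/yr obligation over 2015-2050.
    pv = present_value({y: 5e9 for y in range(2015, 2051)})
    print(f"PV: US${pv / 1e9:.0f} billion")
    ```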

  10. Probabilistic Volcanic Hazard and Risk Assessment

    NASA Astrophysics Data System (ADS)

    Marzocchi, W.; Neri, A.; Newhall, C. G.; Papale, P.

    2007-08-01

    Quantifying Long- and Short-Term Volcanic Hazard: Building Up a Common Strategy for Italian Volcanoes, Erice Italy, 8 November 2006 The term ``hazard'' can lead to some misunderstanding. In English, hazard has the generic meaning ``potential source of danger,'' but for more than 30 years [e.g., Fournier d'Albe, 1979], hazard has been also used in a more quantitative way, that reads, ``the probability of a certain hazardous event in a specific time-space window.'' However, many volcanologists still use ``hazard'' and ``volcanic hazard'' in purely descriptive and subjective ways. A recent meeting held in November 2006 at Erice, Italy, entitled ``Quantifying Long- and Short-Term Volcanic Hazard: Building up a Common Strategy for Italian Volcanoes'' (http://www.bo.ingv.it/erice2006) concluded that a more suitable term for the estimation of quantitative hazard is ``probabilistic volcanic hazard assessment'' (PVHA).

  11. A two-dimensional analytical model of vapor intrusion involving vertical heterogeneity.

    PubMed

    Yao, Yijun; Verginelli, Iason; Suuberg, Eric M

    2017-05-01

    In this work, we present an analytical chlorinated vapor intrusion (CVI) model that can estimate source-to-indoor air concentration attenuation by simulating two-dimensional (2-D) vapor concentration profile in vertically heterogeneous soils overlying a homogenous vapor source. The analytical solution describing the 2-D soil gas transport was obtained by applying a modified Schwarz-Christoffel mapping method. A partial field validation showed that the developed model provides results (especially in terms of indoor emission rates) in line with the measured data from a case involving a building overlying a layered soil. In further testing, it was found that the new analytical model can very closely replicate the results of three-dimensional (3-D) numerical models at steady state in scenarios involving layered soils overlying homogenous groundwater sources. By contrast, by adopting a two-layer approach (capillary fringe and vadose zone) as employed in the EPA implementation of the Johnson and Ettinger model, the spatially and temporally averaged indoor concentrations in the case of groundwater sources can be higher than the ones estimated by the numerical model up to two orders of magnitude. In short, the model proposed in this work can represent an easy-to-use tool that can simulate the subsurface soil gas concentration in layered soils overlying a homogenous vapor source while keeping the simplicity of an analytical approach that requires much less computational effort.

  12. Low-flow characteristics of Virginia streams

    USGS Publications Warehouse

    Austin, Samuel H.; Krstolic, Jennifer L.; Wiegand, Ute

    2011-01-01

    Low-flow annual non-exceedance probabilities (ANEP), called probability-percent chance (P-percent chance) flow estimates, regional regression equations, and transfer methods are provided describing the low-flow characteristics of Virginia streams. Statistical methods are used to evaluate streamflow data. Analysis of Virginia streamflow data collected from 1895 through 2007 is summarized. Methods are provided for estimating low-flow characteristics of gaged and ungaged streams. The 1-, 4-, 7-, and 30-day average streamgaging station low-flow characteristics for 290 long-term, continuous-record, streamgaging stations are determined, adjusted for instances of zero flow using a conditional probability adjustment method, and presented for non-exceedance probabilities of 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.05, 0.02, 0.01, and 0.005. Stream basin characteristics computed using spatial data and a geographic information system are used as explanatory variables in regional regression equations to estimate annual non-exceedance probabilities at gaged and ungaged sites and are summarized for 290 long-term, continuous-record streamgaging stations, 136 short-term, continuous-record streamgaging stations, and 613 partial-record streamgaging stations. Regional regression equations for six physiographic regions use basin characteristics to estimate 1-, 4-, 7-, and 30-day average low-flow annual non-exceedance probabilities at gaged and ungaged sites. Weighted low-flow values that combine computed streamgaging station low-flow characteristics and annual non-exceedance probabilities from regional regression equations provide improved low-flow estimates. Regression equations developed using the Maintenance of Variance with Extension (MOVE.1) method describe the line of organic correlation (LOC) with an appropriate index site for low-flow characteristics at 136 short-term, continuous-record streamgaging stations and 613 partial-record streamgaging stations. Monthly streamflow statistics computed on the individual daily mean streamflows of selected continuous-record streamgaging stations and curves describing flow-duration are presented. Text, figures, and lists are provided summarizing low-flow estimates, selected low-flow sites, delineated physiographic regions, basin characteristics, regression equations, error estimates, definitions, and data sources. This study supersedes previous studies of low flows in Virginia.
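
    The MOVE.1 step can be sketched compactly: the line of organic correlation has slope sy/sx (signed by the correlation), unlike the ordinary least-squares slope r*sy/sx. Variable names are ours; inputs would typically be log-transformed concurrent flows at the index and short-record sites.

    ```python
    import numpy as np

    def move1(x, y):
        """MOVE.1 / line of organic correlation: return (slope, intercept)."""
        r = np.corrcoef(x, y)[0, 1]
        m = np.sign(r) * np.std(y, ddof=1) / np.std(x, ddof=1)
        b = np.mean(y) - m * np.mean(x)
        return m, b
    ```

    Unlike ordinary least squares, the LOC preserves the variance of the estimated record, which matters when the extended record feeds frequency analyses such as the non-exceedance probabilities above.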

  13. Optimal Search for an Astrophysical Gravitational-Wave Background

    NASA Astrophysics Data System (ADS)

    Smith, Rory; Thrane, Eric

    2018-04-01

    Roughly every 2-10 min, a pair of stellar-mass black holes merge somewhere in the Universe. A small fraction of these mergers are detected as individually resolvable gravitational-wave events by advanced detectors such as LIGO and Virgo. The rest contribute to a stochastic background. We derive the statistically optimal search strategy (producing minimum credible intervals) for a background of unresolved binaries. Our method applies Bayesian parameter estimation to all available data. Using Monte Carlo simulations, we demonstrate that the search is both "safe" and effective: it is not fooled by instrumental artifacts such as glitches and it recovers simulated stochastic signals without bias. Given realistic assumptions, we estimate that the search can detect the binary black hole background with about 1 day of design sensitivity data versus ≈40 months using the traditional cross-correlation search. This framework independently constrains the merger rate and black hole mass distribution, breaking a degeneracy present in the cross-correlation approach. The search provides a unified framework for population studies of compact binaries, which is cast in terms of hyperparameter estimation. We discuss a number of extensions and generalizations, including application to other sources (such as binary neutron stars and continuous-wave sources), simultaneous estimation of a continuous Gaussian background, and applications to pulsar timing.

  14. Hanford Environmental Dose Reconstruction Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McMakin, A.H.; Cannon, S.D.; Finch, S.M.

    1992-07-01

    The objective of the Hanford Environmental Dose Reconstruction (HEDR) Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The Technical Steering Panel (TSP) consists of experts in environmental pathways, epidemiology, surface-water transport, ground-water transport, statistics, demography, agriculture, meteorology, nuclear engineering, radiation dosimetry, and cultural anthropology. Included are appointed technical members representing the states of Oregon, Washington, and Idaho, a representative of Native American tribes, and an individual representing the public. The project is divided into the following technical tasks, which correspond to the path radionuclides followed from release to impact on humans (dose estimates): source terms; environmental transport; environmental monitoring data; demography, food consumption, and agriculture; and environmental pathways and dose estimates. Progress is discussed.

  16. Interaction between air pollution dispersion and residential heating demands

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lipfert, F.W.; Moskowitz, P.D.; Dungan, J.

    The effect of the short-term correlation between a specific emission (sulfur dioxide from residential space heating) and air-pollution dispersion rates on the accuracy of model estimates of urban air pollution on a seasonal or annual basis is analyzed. Hourly climatological and residential emission estimates for six U.S. cities and a simplified area-source dispersion model based on a circular receptor grid are used. The effect on annual average concentration estimates is found to be slight (approximately +/- 12 percent), while maximum hourly concentrations vary considerably more, since maximum heat demand and worst-case dispersion are not coincident. Accounting for the correlations between heating demand and dispersion makes it possible to differentiate the air pollution potential of coastal and interior cities.
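
    The mechanism analyzed above is a covariance effect: the annual mean of an hourly product E*D differs from the product of the annual means when emissions and poor dispersion co-occur. A toy demonstration on synthetic hourly series (illustration only, not the study's data):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    hours = 8760
    cold = np.sin(np.linspace(0, 2 * np.pi, hours))   # crude seasonal proxy
    # Heating emissions E and a dispersion factor D (concentration per unit
    # emission) both peak in the cold season, so they covary.
    E = 1.0 + 0.5 * cold + 0.1 * rng.standard_normal(hours)
    D = 1.0 + 0.3 * cold + 0.1 * rng.standard_normal(hours)
    # mean(E*D) exceeds mean(E)*mean(D) by cov(E, D):
    print(np.mean(E * D), np.mean(E) * np.mean(D))
    ```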

  17. Assessing the Gap Between Top-down and Bottom-up Measured Methane Emissions in Indianapolis, IN.

    NASA Astrophysics Data System (ADS)

    Prasad, K.; Lamb, B. K.; Cambaliza, M. O. L.; Shepson, P. B.; Stirm, B. H.; Salmon, O. E.; Lavoie, T. N.; Lauvaux, T.; Ferrara, T.; Howard, T.; Edburg, S. L.; Whetstone, J. R.

    2014-12-01

    Releases of methane (CH4) from the natural gas supply chain in the United States account for approximately 30% of total US CH4 emissions. However, large questions remain regarding the accuracy of current inventories of methane emissions from natural gas usage. In this paper, we describe results from top-down and bottom-up measurements of methane emissions from the large, isolated city of Indianapolis. The top-down results are based on aircraft mass balance and tower-based inverse modeling methods, while the bottom-up results are based on direct component sampling at metering and regulating stations, surface enclosure measurements of surveyed pipeline leaks, and tracer/modeling methods for other urban sources. Mobile mapping of urban methane concentrations was also used to identify significant sources and showed an urban-wide low-level enhancement of methane. The residual difference between top-down and bottom-up measured emissions is large and cannot be fully explained by the uncertainties in the top-down and bottom-up emission measurements and estimates. Thus, the residual appears to be attributable, at least in part, to a significant, widespread diffuse source. Analyses are included to estimate the size and nature of this diffuse source.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    None, None

    Operations of Sandia National Laboratories, Nevada (SNL/NV) at the Tonopah Test Range (TTR) resulted in no planned point radiological releases during 1996. Other releases from SNL/NV included diffuse transuranic sources consisting of the three Clean Slate sites. Air emissions from these sources result from wind resuspension of near-surface transuranic contaminated soil particulates. The total area of contamination has been estimated to exceed 20 million square meters. Soil contamination was documented in an aerial survey program in 1977 (EG&G 1979). Surface contamination levels were generally found to be below 400 pCi/g of combined plutonium-238, plutonium-239, plutonium-240, and americium-241 (i.e., transuranic) activity. Hot spot areas contain up to 43,000 pCi/g of transuranic activity. Recent measurements confirm the presence of significant levels of transuranic activity in the surface soil. An annual diffuse source term of 0.39 Ci of transuranic material was calculated for the cumulative release from all three Clean Slate sites. A maximally exposed individual dose of 1.1 mrem/yr at the TTR airport area was estimated based on the 1996 diffuse source release amounts and site-specific meteorological data. A population dose of 0.86 person-rem/yr was calculated for the local residents. Both dose values were attributable to inhalation of transuranic contaminated dust.

  19. [Estimation of urban non-point source pollution loading and its factor analysis in the Pearl River Delta].

    PubMed

    Liao, Yi-Shan; Zhuo, Mu-Ning; Li, Ding-Qiang; Guo, Tai-Long

    2013-08-01

    In the Pearl River Delta region, urban rivers have been seriously polluted, and the input of non-point source pollution materials, such as chemical oxygen demand (COD), into rivers cannot be neglected. During 2009-2010, water quality at eight catchments in the Fenjiang River of Foshan city was monitored, and COD loads for eight rivulet sewages were calculated under different rainfall conditions. Rainfall and land-use type both played important roles in COD loading, with rainfall having the greater influence. Consequently, a COD loading formula was constructed, defined as a function of runoff and land-use type derived from the SCS model and a land-use map. COD loading could be evaluated and predicted with this formula; the mean simulation accuracy for single rainfall events was 75.51%, and long-term simulation accuracy was better than that for single events. In 2009, the estimated COD load and loading intensity were 8 053 t and 339 kg/(hm^2·a), respectively, and industrial land was identified as the main COD pollution source area. Severe non-point source pollution such as COD in the Fenjiang River must receive more attention in the future.
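
    The runoff input to such a loading formula is commonly the SCS curve-number relation; a sketch follows, with the loading step indicated only schematically since the paper's fitted coefficients are not given here. The name `emc` (event mean concentration) is a hypothetical placeholder.

    ```python
    # SCS curve-number event runoff (metric form).
    def scs_runoff(p_mm, cn):
        """Event runoff depth Q (mm) from rainfall P (mm) and curve number CN."""
        s = 25400.0 / cn - 254.0      # potential maximum retention (mm)
        ia = 0.2 * s                  # initial abstraction
        if p_mm <= ia:
            return 0.0
        return (p_mm - ia) ** 2 / (p_mm + 0.8 * s)

    # A loading estimate would then combine runoff with a land-use-specific
    # concentration, e.g. load = scs_runoff(P, CN) * area * emc.
    ```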

  20. Estimating the indirect costs associated with the expected number of cancer cases in Mexico by 2020.

    PubMed

    Gutiérrez-Delgado, Cristina; Armas-Texta, Daniel; Reynoso-Noverón, Nancy; Meneses-García, Abelardo; Mohar-Betancourt, Alejandro

    2016-04-01

    To estimate the indirect costs generated by adults with cancer in Mexico from 2002-2020. Using information from national sources and the national cancer incidence from GLOBOCAN, we estimated income lost due to premature death (ILPD), short-term benefits (STBs), disability pensions (DPs), and opportunity costs for the carer (OCCs) generated by patients with cancer. Amounts were reported in Mexican pesos. We estimated 23 359 deaths and 216 679 new cases of cancer by 2020, which would be associated with a total indirect cost of 20.15 billion Mexican pesos. Men are expected to generate 54.9% of these costs. ILPD is expected to comprise the highest percentage of the cost (60%), followed by OCCs (22%), STBs (17%) and DPs (1%). From an economic perspective, the results emphasize the need to strengthen preventive interventions and early detection of cancer among adults to reduce its effect on the productivity of Mexico.

  1. Using water-quality profiles to characterize seasonal water quality and loading in the upper Animas River basin, southwestern Colorado

    USGS Publications Warehouse

    Leib, Kenneth J.; Mast, M. Alisa; Wright, Winfield G.

    2003-01-01

    One of the important types of information needed to characterize water quality in streams affected by historical mining is the seasonal pattern of toxic trace-metal concentrations and loads. Seasonal patterns in water quality are estimated in this report using a technique called water-quality profiling. Water-quality profiling allows land managers and scientists to assess priority areas to be targeted for characterization and(or) remediation by quantifying the timing and magnitude of contaminant occurrence. Streamflow and water-quality data collected at 15 sites in the upper Animas River Basin during water years 1991-99 were used to develop water-quality profiles. Data collected at each sampling site were used to develop ordinary least-squares regression models for streamflow and constituent concentrations. Streamflow was estimated by correlating instantaneous streamflow measured at ungaged sites with continuous streamflow records from streamflow-gaging stations in the subbasin. Water-quality regression models were developed to estimate hardness and dissolved cadmium, copper, and zinc concentrations based on streamflow and seasonal terms. Results from the regression models were used to calculate water-quality profiles for streamflow, constituent concentrations, and loads. Quantification of cadmium, copper, and zinc loads in a stream segment in Mineral Creek (sites M27 to M34) was presented as an example application of water-quality profiling. The application used a method of mass accounting to quantify the portion of metal loading in the segment derived from uncharacterized sources during different seasonal periods. During May, uncharacterized sources contributed nearly 95 percent of the cadmium load, 0 percent of the copper load (or uncharacterized sources also are attenuated), and about 85 percent of the zinc load at M34. During September, uncharacterized sources contributed about 86 percent of the cadmium load, 0 percent of the copper load (or uncharacterized sources also are attenuated), and about 52 percent of the zinc load at M34. Characterized sources accounted for more of the loading gains estimated in the example reach during September, possibly indicating the presence of diffuse inputs during snowmelt runoff. The results indicate that metal sources in the upper Animas River Basin may change substantially with season, regardless of the source.
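
    A sketch of the kind of regression model described above: log concentration regressed on log streamflow plus first-order seasonal terms. This is a common form for such models; the report's exact model structure may differ.

    ```python
    import numpy as np

    def fit_concentration_model(q, c, t_years):
        """ln(C) = b0 + b1*ln(Q) + b2*sin(2*pi*t) + b3*cos(2*pi*t)."""
        G = np.column_stack([
            np.ones_like(q),               # intercept
            np.log(q),                     # log-streamflow term
            np.sin(2 * np.pi * t_years),   # seasonal sine term
            np.cos(2 * np.pi * t_years),   # seasonal cosine term
        ])
        b, *_ = np.linalg.lstsq(G, np.log(c), rcond=None)
        return b
    ```

    Loads then follow as concentration times streamflow times a unit conversion, so the fitted model and a continuous streamflow record together yield seasonal load profiles like those used in the Mineral Creek example.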

  2. Uncertainty principles for inverse source problems for electromagnetic and elastic waves

    NASA Astrophysics Data System (ADS)

    Griesmaier, Roland; Sylvester, John

    2018-06-01

    In isotropic homogeneous media, far fields of time-harmonic electromagnetic waves radiated by compactly supported volume currents, and elastic waves radiated by compactly supported body force densities can be modelled in very similar fashions. Both are projected restricted Fourier transforms of vector-valued source terms. In this work we generalize two types of uncertainty principles recently developed for far fields of scalar-valued time-harmonic waves in Griesmaier and Sylvester (2017 SIAM J. Appl. Math. 77 154–80) to this vector-valued setting. These uncertainty principles yield stability criteria and algorithms for splitting far fields radiated by collections of well-separated sources into the far fields radiated by individual source components, and for the restoration of missing data segments. We discuss proper regularization strategies for these inverse problems, provide stability estimates based on the new uncertainty principles, and comment on reconstruction schemes. A numerical example illustrates our theoretical findings.

  3. Information spreading by a combination of MEG source estimation and multivariate pattern classification.

    PubMed

    Sato, Masashi; Yamashita, Okito; Sato, Masa-Aki; Miyawaki, Yoichi

    2018-01-01

    To understand information representation in human brain activity, it is important to investigate its fine spatial patterns at high temporal resolution. One possible approach is to use source estimation of magnetoencephalography (MEG) signals. Previous studies have mainly quantified accuracy of this technique according to positional deviations and dispersion of estimated sources, but it remains unclear how accurately MEG source estimation restores information content represented by spatial patterns of brain activity. In this study, using simulated MEG signals representing artificial experimental conditions, we performed MEG source estimation and multivariate pattern analysis to examine whether MEG source estimation can restore information content represented by patterns of cortical current in source brain areas. Classification analysis revealed that the corresponding artificial experimental conditions were predicted accurately from patterns of cortical current estimated in the source brain areas. However, accurate predictions were also possible from brain areas whose original sources were not defined. Searchlight decoding further revealed that this unexpected prediction was possible across wide brain areas beyond the original source locations, indicating that information contained in the original sources can spread through MEG source estimation. This phenomenon of "information spreading" may easily lead to false-positive interpretations when MEG source estimation and classification analysis are combined to identify brain areas that represent target information. Real MEG data analyses also showed that presented stimuli were able to be predicted in the higher visual cortex at the same latency as in the primary visual cortex, also suggesting that information spreading took place. These results indicate that careful inspection is necessary to avoid false-positive interpretations when MEG source estimation and multivariate pattern analysis are combined.
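
    The classification step itself is generic; below is a minimal scikit-learn sketch on synthetic 'source-space' patterns. All names and numbers are illustrative, not the study's pipeline, and real analyses would use estimated cortical currents rather than random data.

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(1)
    n_trials, n_sources = 200, 50
    y = rng.integers(0, 2, n_trials)                 # two artificial conditions
    X = rng.standard_normal((n_trials, n_sources))   # background noise
    X[y == 1, :5] += 0.8                             # signal in 5 'sources'
    print(cross_val_score(LinearSVC(), X, y, cv=5).mean())
    ```

    Run on currents estimated outside the true source locations, above-chance scores from exactly this kind of classifier are what flag the information spreading described above.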

  4. Information spreading by a combination of MEG source estimation and multivariate pattern classification

    PubMed Central

    Sato, Masashi; Yamashita, Okito; Sato, Masa-aki

    2018-01-01

    To understand information representation in human brain activity, it is important to investigate its fine spatial patterns at high temporal resolution. One possible approach is to use source estimation of magnetoencephalography (MEG) signals. Previous studies have mainly quantified the accuracy of this technique according to positional deviations and dispersion of estimated sources, but it remains unclear how accurately MEG source estimation restores the information content represented by spatial patterns of brain activity. In this study, using simulated MEG signals representing artificial experimental conditions, we performed MEG source estimation and multivariate pattern analysis to examine whether MEG source estimation can restore information content represented by patterns of cortical current in source brain areas. Classification analysis revealed that the corresponding artificial experimental conditions were predicted accurately from patterns of cortical current estimated in the source brain areas. However, accurate predictions were also possible from brain areas whose original sources were not defined. Searchlight decoding further revealed that this unexpected prediction was possible across wide brain areas beyond the original source locations, indicating that information contained in the original sources can spread through MEG source estimation. This phenomenon of “information spreading” may easily lead to false-positive interpretations when MEG source estimation and classification analysis are combined to identify brain areas that represent target information. Real MEG data analyses also showed that presented stimuli could be predicted in the higher visual cortex at the same latency as in the primary visual cortex, again suggesting that information spreading took place. These results indicate that careful inspection is necessary to avoid false-positive interpretations when MEG source estimation and multivariate pattern analysis are combined. PMID:29912968

  5. Microwave implementation of two-source energy balance approach for estimating evapotranspiration

    NASA Astrophysics Data System (ADS)

    Holmes, Thomas R. H.; Hain, Christopher R.; Crow, Wade T.; Anderson, Martha C.; Kustas, William P.

    2018-02-01

    A newly developed microwave (MW) land surface temperature (LST) product is used to substitute thermal infrared (TIR)-based LST in the Atmosphere-Land Exchange Inverse (ALEXI) modeling framework for estimating evapotranspiration (ET) from space. ALEXI implements a two-source energy balance (TSEB) land surface scheme in a time-differential approach, designed to minimize sensitivity to absolute biases in input records of LST through the analysis of the rate of temperature change in the morning. Thermal infrared retrievals of the diurnal LST curve, traditionally from geostationary platforms, are hindered by cloud cover, reducing model coverage on any given day. This study tests the utility of diurnal temperature information retrieved from a constellation of satellites with microwave radiometers that together provide six to eight observations of Ka-band brightness temperature per location per day. This represents the first ever attempt at a global implementation of ALEXI with MW-based LST and is intended as the first step towards providing all-weather capability to the ALEXI framework. The analysis is based on 9-year-long, global records of ALEXI ET generated using both MW- and TIR-based diurnal LST information as input. In this study, the MW-LST (MW-based LST) sampling is restricted to the same clear-sky days as in the IR-based implementation to be able to analyze the impact of changing the LST dataset separately from the impact of sampling all-sky conditions. The results show that long-term bulk ET estimates from both LST sources agree well, with a spatial correlation of 92 % for total ET in the Europe-Africa domain and agreement in seasonal (3-month) totals of 83-97 % depending on the time of year. Most importantly, the ALEXI-MW (MW-based ALEXI) also matches ALEXI-IR (IR-based ALEXI) very closely in terms of 3-month inter-annual anomalies, demonstrating its ability to capture the development and extent of drought conditions. Weekly ET output from the two parallel ALEXI implementations is further compared to a common ground measured reference provided by the Fluxnet consortium. Overall, the two model implementations generate similar performance metrics (correlation and RMSE) for all but the most challenging sites in terms of spatial heterogeneity and level of aridity. It is concluded that a constellation of MW satellites can effectively be used to provide LST for estimating ET through ALEXI, which is an important step towards all-sky satellite-based retrieval of ET using an energy balance framework.

  6. Aura OMI observations of changes in SO2 and NO2 emissions at local, regional and global scales

    NASA Astrophysics Data System (ADS)

    Krotkov, N. A.; McLinden, C. A.; Li, C.; Lamsal, L. N.; Celarier, E. A.; Marchenko, S. V.; Swartz, W.; Bucsela, E. J.; Joiner, J.; Duncan, B. N.; Boersma, K. F.; Veefkind, P.; Levelt, P.; Fioletov, V.; Dickerson, R. R.; He, H.; Lu, Z.; Streets, D. G.

    2015-12-01

    Space-based pollution monitoring from current and planned satellite UV-Vis spectrometers plays an increasingly important role in studies of tropospheric chemistry and in air quality applications that help mitigate anthropogenic and natural impacts on sensitive ecosystems and human health. We present long-term changes in tropospheric SO2 and NO2 over some of the most polluted industrialized regions of the world observed by the Ozone Monitoring Instrument (OMI) onboard NASA's Aura satellite. Using OMI data, we identified about 400 SO2 "hot spots" and estimated emissions from them. In many regions, emissions and ambient pollution levels have decreased significantly, such as over the eastern US, Europe and China. OMI observed about a 50% reduction in SO2 and NO2 pollution over the North China Plain in 2012-2014 that can be attributed to both government efforts to restrain emissions from the power and industrial sectors and the economic slowdown. While much smaller in absolute terms, India's SO2 and NO2 emissions from coal power plants and smelters are growing at a fast pace, increasing by about 200% and 50%, respectively, from 2005 to 2014. Over Europe and the US, OMI-observed trends agree well with those from available in situ measurements of surface concentrations, deposition and emissions data. However, for some regions (e.g., Mexico, the Middle East) the emission inventories may be incomplete, and OMI can provide emission estimates for missing sources, such as the SO2 sources observed over the Persian Gulf. It is essential to continue long-term overlapping satellite data records of air quality with increased spatial and temporal resolution so that point pollution sources can be resolved using an oversampling technique. We discuss how Aura OMI pollution measurements and emission estimates will be continued with the US JPSS and European Sentinel series for the next 20 years and further enhanced by the addition of three geostationary UV-VIS instruments.

  7. Evaluating changes in water quality with respect to nonpoint source nutrient management strategies in the Chesapeake Bay Watershed

    NASA Astrophysics Data System (ADS)

    Keisman, J.; Sekellick, A.; Blomquist, J.; Devereux, O. H.; Hively, W. D.; Johnston, M.; Moyer, D.; Sweeney, J.

    2014-12-01

    Chesapeake Bay is a eutrophic ecosystem with periodic hypoxia and anoxia, algal blooms, diminished submerged aquatic vegetation, and degraded stocks of marine life. Knowledge of the effectiveness of actions taken across the watershed to reduce nitrogen (N) and phosphorus (P) loads to the bay (i.e. "best management practices" or BMPs) is essential to its restoration. While nutrient inputs from point sources (e.g. wastewater treatment plants and other industrial and municipal operations) are tracked, inputs from nonpoint sources, including atmospheric deposition, farms, lawns, septic systems, and stormwater, are difficult to measure. Estimating reductions in nonpoint source inputs attributable to BMPs requires compilation and comparison of data on water quality, climate, land use, point source discharges, and BMP implementation. To explore the relation of changes in nonpoint source inputs and BMP implementation to changes in water quality, a subset of small watersheds (those containing at least 10 years of water quality monitoring data) within the Chesapeake Watershed were selected for study. For these watersheds, data were compiled on geomorphology, demographics, land use, point source discharges, atmospheric deposition, and agricultural practices such as livestock populations, crop acres, and manure and fertilizer application. In addition, data on BMP implementation for 1985-2012 were provided by the Environmental Protection Agency Chesapeake Bay Program Office (CBPO) and the U.S. Department of Agriculture. A spatially referenced nonlinear regression model (SPARROW) provided estimates attributing N and P loads associated with receiving waters to different nutrient sources. A recently developed multiple regression technique ("Weighted Regressions on Time, Discharge and Season" or WRTDS) provided an enhanced understanding of long-term trends in N and P loads and concentrations. A suite of deterministic models developed by the CBPO was used to estimate expected nutrient load reductions attributable to BMPs. Further quantification of the relation of land-based nutrient sources and BMPs to water quality in the bay and its tributaries must account for inconsistency in BMP data over time and uncertainty regarding BMP locations and effectiveness.
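    As a rough illustration of the WRTDS idea mentioned above, the regression surface commonly given for it relates log concentration to time, log discharge, and season; the sketch below fits a single unweighted version of that surface, whereas actual WRTDS re-fits it locally with weights on time, discharge, and season:

    ```python
    import numpy as np

    # Unweighted sketch of the regression surface underlying WRTDS
    # (illustrative only; the real method uses locally weighted fits).
    def wrtds_design(t_years, q):
        return np.column_stack([
            np.ones_like(t_years),
            t_years,                      # long-term trend
            np.log(q),                    # discharge dependence
            np.sin(2 * np.pi * t_years),  # season
            np.cos(2 * np.pi * t_years),
        ])

    def fit_wrtds_like(t_years, q, conc):
        X = wrtds_design(t_years, q)
        beta, *_ = np.linalg.lstsq(X, np.log(conc), rcond=None)
        return beta
    ```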

  8. Inverse analysis and regularisation in conditional source-term estimation modelling

    NASA Astrophysics Data System (ADS)

    Labahn, Jeffrey W.; Devaud, Cecile B.; Sipkens, Timothy A.; Daun, Kyle J.

    2014-05-01

    Conditional Source-term Estimation (CSE) obtains the conditional species mass fractions by inverting a Fredholm integral equation of the first kind. In the present work, a Bayesian framework is used to compare two different regularisation methods: zeroth-order temporal Tikhonov regularisation and first-order spatial Tikhonov regularisation. The objectives of the current study are: (i) to elucidate the ill-posedness of the inverse problem; (ii) to understand the origin of the perturbations in the data and quantify their magnitude; (iii) to quantify the uncertainty in the solution using different priors; and (iv) to determine the regularisation method best suited to this problem. A singular value decomposition shows that the current inverse problem is ill-posed. Perturbations to the data may be caused by the use of a discrete mixture fraction grid for calculating the mixture fraction PDF. The magnitude of the perturbations is estimated using a box filter and the uncertainty in the solution is determined based on the width of the credible intervals. The width of the credible intervals is significantly reduced with the inclusion of a smoothing prior and the recovered solution is in better agreement with the exact solution. The credible intervals for temporal and spatial smoothing are shown to be similar. Credible intervals for temporal smoothing depend on the solution from the previous time step and a smooth solution is not guaranteed. For spatial smoothing, the credible intervals are not dependent upon a previous solution and better predict characteristics for higher mixture fraction values. These characteristics make spatial smoothing a promising alternative method for recovering a solution from the CSE inversion process.
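    A hedged sketch of the inversion being compared: once the Fredholm equation is discretised to a linear system Ax = b, the zeroth- and first-order Tikhonov solutions differ only in the regularisation operator L. The implementation below is generic, not the authors' code:

    ```python
    import numpy as np

    # Tikhonov-regularised solution of a discretised first-kind Fredholm
    # equation A x = b. order=0 uses L = I (zeroth-order); order=1 uses a
    # first-difference operator, which favours smooth solutions.
    def tikhonov(A, b, lam, order=0):
        n = A.shape[1]
        if order == 0:
            L = np.eye(n)
        else:
            L = np.diff(np.eye(n), axis=0)  # (n-1, n) first differences
        lhs = A.T @ A + lam**2 * (L.T @ L)
        return np.linalg.solve(lhs, A.T @ b)
    ```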

  9. Health care reform and change in public-private mix of financing: a Korean case.

    PubMed

    Jeong, Hyoung-Sun

    2005-10-01

    The objective of this paper is to examine the changes in the Korean health care system invoked by the reform (in the latter part of 2000) in regard to the separation of drug prescription and dispensation, especially from the point of view of the public-private financing mix. It seeks particularly to estimate and analyse the relative financing mix in terms of both modes of production and types of medical provider. The data used to estimate health care expenditure financed by out-of-pocket expenditure by were sourced from the National Health and Nutritional Survey (conducted by interviewing representatives of households) and the General Household Survey (a household diary survey). National Health Insurance data, etc. were used to estimate health expenditure financed by public sources. This study concentrates on the short-run empirical links between the reform and the public-private mix in finance. The reform increased remarkably the public share in total health expenditure. This public share increase has been prominent particularly in the case of expenditure on drugs since the reform has absorbed much of the previously uncovered drugs into the National Health Insurance coverage. However, a higher public share in medical goods than in out-patient care would raise an issue in terms of prioritization of benefit packages. The five-fold increase in the public share of expenditure at pharmacies reflects not only the fact that drugs previously not covered by NHI are covered now but also the fact that prescribed drugs are currently purchased mainly at pharmacies, as opposed to in doctors' clinics, as a result of the reform.

  10. The European Infrasound Bulletin

    NASA Astrophysics Data System (ADS)

    Pilger, Christoph; Ceranna, Lars; Ross, J. Ole; Vergoz, Julien; Le Pichon, Alexis; Brachet, Nicolas; Blanc, Elisabeth; Kero, Johan; Liszka, Ludwik; Gibbons, Steven; Kvaerna, Tormod; Näsholm, Sven Peter; Marchetti, Emanuele; Ripepe, Maurizio; Smets, Pieter; Evers, Laslo; Ghica, Daniela; Ionescu, Constantin; Sindelarova, Tereza; Ben Horin, Yochai; Mialle, Pierrick

    2018-05-01

    The European Infrasound Bulletin highlights infrasound activity produced mostly by anthropogenic sources, recorded all over Europe and collected in the course of the ARISE and ARISE2 projects (Atmospheric dynamics Research InfraStructure in Europe). Data includes high-frequency (> 0.7 Hz) infrasound detections at 24 European infrasound arrays from nine different national institutions complemented with infrasound stations of the International Monitoring System for the Comprehensive Nuclear-Test-Ban Treaty (CTBT). Data were acquired during 16 years of operation (from 2000 to 2015) and processed to identify and locate ˜ 48,000 infrasound events within Europe. The source locations of these events were derived by combining at least two corresponding station detections per event. Comparisons with ground-truth sources, e.g., Scandinavian mining activity, are provided as well as comparisons with the CTBT Late Event Bulletin (LEB). Relocation is performed using ray-tracing methods to estimate celerity and back-azimuth corrections for source location based on meteorological wind and temperature values for each event derived from European Centre for Medium-range Weather Forecast (ECMWF) data. This study focuses on the analysis of repeating, man-made infrasound events (e.g., mining blasts and supersonic flights) and on the seasonal, weekly and diurnal variation of the infrasonic activity of sources in Europe. Drawing comparisons to previous studies shows that improvements in terms of detection, association and location are made within this study due to increasing the station density and thus the number of events and determined source regions. This improves the capability of the infrasound station network in Europe to more comprehensively estimate the activity of anthropogenic infrasound sources in Europe.

  11. Towards next generation time-domain diffuse optics devices

    NASA Astrophysics Data System (ADS)

    Dalla Mora, Alberto; Contini, Davide; Arridge, Simon R.; Martelli, Fabrizio; Tosi, Alberto; Boso, Gianluca; Farina, Andrea; Durduran, Turgut; Martinenghi, Edoardo; Torricelli, Alessandro; Pifferi, Antonio

    2015-03-01

    Diffuse optics is growing in terms of applications, ranging from oximetry to mammography, molecular imaging, quality assessment of food and pharmaceuticals, wood optics, and the physics of random media. Time-domain (TD) approaches, although appealing in terms of quantitation and depth sensitivity, are presently limited to large fiber-based systems with a limited number of source-detector pairs. We present a miniaturized TD source-detector probe embedding integrated laser sources and single-photon detectors. Some electronics are still external (e.g. power supply, pulse generators, timing electronics), yet full integration on-board using already proven technologies is feasible. The novel devices were successfully validated on heterogeneous phantoms, showing performances comparable to large state-of-the-art TD rack-based systems. Simulations provide numerical evidence that stacking many compact TD source-detector pairs in a dense, null source-detector distance arrangement could yield about one decade higher contrast on the brain cortex compared with a continuous-wave (CW) approach. Further, a 3-fold increase in the maximum depth (down to 6 cm) is estimated, opening accessibility to new organs such as the lung or the heart. Finally, these new technologies show the way towards compact and wearable TD probes with orders of magnitude reduction in size and cost, for a widespread use of TD devices in real life.

  12. MEqTrees Telescope and Radio-sky Simulations and CPU Benchmarking

    NASA Astrophysics Data System (ADS)

    Shanmugha Sundaram, G. A.

    2009-09-01

    MEqTrees is a Python-based implementation of the classical Measurement Equation, wherein the various 2×2 Jones matrices are parametrized representations in the spatial and sky domains for any generic radio telescope. Customized simulations of radio-source sky models and corrupt Jones terms are demonstrated based on a policy framework, with performance estimates derived for array configurations, "dirty"-map residuals and processing power requirements for such computations on conventional platforms.
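    The classical Measurement Equation (RIME) that MEqTrees parametrizes can be illustrated with a toy 2×2 Jones chain, V_pq = J_p B J_q^H; the gain values below are invented for illustration and are not from MEqTrees itself:

    ```python
    import numpy as np

    # Toy 2x2 Measurement Equation: visibility for antenna pair (p, q) is
    # V_pq = J_p B J_q^H, with B the source brightness matrix and J_p, J_q
    # the per-antenna Jones chains (here a single gain term each).
    def visibility(J_p, B, J_q):
        return J_p @ B @ J_q.conj().T

    I, Q, U, V = 1.0, 0.1, 0.05, 0.0
    B = np.array([[I + Q, U + 1j * V],        # linear-feed brightness matrix
                  [U - 1j * V, I - Q]])
    G_p = np.diag([1.02 + 0.01j, 0.98])       # illustrative gain Jones terms
    G_q = np.diag([0.99, 1.01 - 0.02j])
    print(visibility(G_p, B, G_q))
    ```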

  13. Advances in audio source separation and multisource audio content retrieval

    NASA Astrophysics Data System (ADS)

    Vincent, Emmanuel

    2012-06-01

    Audio source separation aims to extract the signals of individual sound sources from a given recording. In this paper, we review three recent advances which improve the robustness of source separation in real-world challenging scenarios and enable its use for multisource content retrieval tasks, such as automatic speech recognition (ASR) or acoustic event detection (AED) in noisy environments. We present a Flexible Audio Source Separation Toolkit (FASST) and discuss its advantages compared to earlier approaches such as independent component analysis (ICA) and sparse component analysis (SCA). We explain how cues as diverse as harmonicity, spectral envelope, temporal fine structure or spatial location can be jointly exploited by this toolkit. We subsequently present the uncertainty decoding (UD) framework for the integration of audio source separation and audio content retrieval. We show how the uncertainty about the separated source signals can be accurately estimated and propagated to the features. Finally, we explain how this uncertainty can be efficiently exploited by a classifier, both at the training and the decoding stage. We illustrate the resulting performance improvements in terms of speech separation quality and speaker recognition accuracy.

  14. Water resources management: Hydrologic characterization through hydrograph simulation may bias streamflow statistics

    NASA Astrophysics Data System (ADS)

    Farmer, W. H.; Kiang, J. E.

    2017-12-01

    The development, deployment and maintenance of water resources management infrastructure and practices rely on hydrologic characterization, which requires an understanding of local hydrology. With regards to streamflow, this understanding is typically quantified with statistics derived from long-term streamgage records. However, a fundamental problem is how to characterize local hydrology without the luxury of streamgage records, a problem that complicates water resources management at ungaged locations and for long-term future projections. This problem has typically been addressed through the development of point estimators, such as regression equations, to estimate particular statistics. Physically-based precipitation-runoff models, which are capable of producing simulated hydrographs, offer an alternative to point estimators. The advantage of simulated hydrographs is that they can be used to compute any number of streamflow statistics from a single source (the simulated hydrograph) rather than relying on a diverse set of point estimators. However, the use of simulated hydrographs introduces a degree of model uncertainty that is propagated through to estimated streamflow statistics and may have drastic effects on management decisions. We compare the accuracy and precision of streamflow statistics (e.g. the mean annual streamflow, the annual maximum streamflow exceeded in 10% of years, and the minimum seven-day average streamflow exceeded in 90% of years, among others) derived from point estimators (e.g. regressions, kriging, machine learning) to that of statistics derived from simulated hydrographs across the continental United States. Initial results suggest that the error introduced through hydrograph simulation may substantially bias the resulting hydrologic characterization.
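    The statistics named above can be computed directly from any hydrograph, observed or simulated; a minimal sketch (assuming a pandas Series of daily flows with a DatetimeIndex) follows:

    ```python
    import pandas as pd

    # Streamflow statistics of the kinds named in the abstract, computed
    # from a daily hydrograph q (pandas Series indexed by date).
    def streamflow_stats(q):
        mean_annual = q.resample("YS").mean().mean()
        # annual maximum streamflow exceeded in 10% of years
        q_max10 = q.resample("YS").max().quantile(0.90)
        # minimum 7-day average streamflow exceeded in 90% of years
        seven_day_min = q.rolling(7).mean().resample("YS").min()
        q7_90 = seven_day_min.quantile(0.10)
        return mean_annual, q_max10, q7_90
    ```

    Whether such statistics come from point estimators or from a simulated hydrograph, the same functions apply; the paper's concern is the bias the simulation step may introduce upstream of this calculation.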

  15. Multi-Scale Analysis of Trends in Northeastern Temperate Forest Springtime Phenology

    NASA Astrophysics Data System (ADS)

    Moon, M.; Melaas, E. K.; Sulla-menashe, D. J.; Friedl, M. A.

    2017-12-01

    The timing of spring leaf emergence is highly variable in many ecosystems, exerts first-order control on growing season length, and significantly modulates seasonally-integrated photosynthesis. Numerous studies have reported trends toward earlier spring phenology in temperate forests, with some papers indicating that this trend is also leading to increased carbon uptake. At broad spatial scales, however, most of these studies have used data from coarse spatial resolution instruments such as MODIS, which does not resolve ecologically important landscape-scale patterns in phenology. In this work, we examine how long-term trends in spring phenology differ across three data sources acquired at different scales of measurement at the Harvard Forest in central Massachusetts. Specifically, we compared trends in the timing of phenology based on long-term in-situ measurements of phenology, estimates based on eddy-covariance measurements of net carbon uptake transition dates, and two sources of satellite-based remote sensing (MODIS and Landsat) land surface phenology (LSP) data. Our analysis focused on the flux footprint surrounding the Harvard Forest Environmental Measurements (EMS) tower. Our results reveal clearly defined trends toward earlier springtime phenology in Landsat LSP and in the timing of tower-based net carbon uptake. However, we find no statistically significant trend in springtime phenology measured from MODIS LSP data products, possibly because the time series of MODIS observations is relatively short (13 years). The trend in tower-based transition data exhibited a larger negative value than the trend derived from Landsat LSP data (-0.42 and -0.28 days per year for 21 and 28 years, respectively). More importantly, these results have two key implications regarding how changes in spring phenology are impacting carbon uptake at landscape scale. First, long-term trends in spring phenology can be quite different depending on what data source is used to estimate the trend. Second, the response of carbon uptake to climate change may be more sensitive than the response of land surface phenology itself.

  16. Software Transition Project Retrospectives and the Application of SEL Effort Estimation Model and Boehm's COCOMO to Complex Software Transition Projects

    NASA Technical Reports Server (NTRS)

    McNeill, Justin

    1995-01-01

    The Multimission Image Processing Subsystem (MIPS) at the Jet Propulsion Laboratory (JPL) has managed transitions of application software sets from one operating system and hardware platform to multiple operating systems and hardware platforms. As a part of these transitions, cost estimates were generated from the personal experience of in-house developers and managers to calculate the total effort required for such projects. Productivity measures have been collected for two such transitions, one very large and the other relatively small in terms of source lines of code. These estimates used a cost estimation model similar to the Software Engineering Laboratory (SEL) Effort Estimation Model. Experience in transitioning software within JPL MIPS has uncovered a high incidence of interface complexity. Interfaces, both internal and external to individual software applications, have contributed to software transition project complexity, and thus to scheduling difficulties and larger-than-anticipated design work on software to be ported.
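    For reference, Boehm's basic COCOMO, one of the models named in the title, is a simple power law in thousands of source lines of code; the coefficients below are the published organic-mode values, and the SEL effort model has an analogous power-law form:

    ```python
    # Basic COCOMO effort model (Boehm). Coefficients a=2.4, b=1.05 are the
    # published "organic"-mode values; other modes use different constants.
    def cocomo_basic(kloc, a=2.4, b=1.05):
        """Return estimated effort in person-months for kloc thousand lines."""
        return a * kloc ** b

    # e.g. a hypothetical 50 KLOC transition effort estimate:
    print(f"{cocomo_basic(50):.1f} person-months")
    ```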

  17. Estimating the number of injecting drug users in Scotland's HCV-diagnosed population using capture-recapture methods.

    PubMed

    McDonald, S A; Hutchinson, S J; Schnier, C; McLeod, A; Goldberg, D J

    2014-01-01

    In countries maintaining national hepatitis C virus (HCV) surveillance systems, a substantial proportion of individuals report no risk factors for infection. Our goal was to estimate the proportion of diagnosed HCV antibody-positive persons in Scotland (1991-2010) who probably acquired infection through injecting drug use (IDU), by combining data on IDU risk from four linked data sources using log-linear capture-recapture methods. Of 25,521 HCV-diagnosed individuals, 14,836 (58%) reported IDU risk with their HCV diagnosis. Log-linear modelling estimated a further 2484 HCV-diagnosed individuals with IDU risk, giving an estimated prevalence of 83%. Stratified analyses indicated variation across birth cohort, with estimated prevalence as low as 49% in persons born before 1960 and greater than 90% for those born since 1960. These findings provide public-health professionals with a more complete profile of Scotland's HCV-infected population in terms of transmission route, which is essential for targeting educational, prevention and treatment interventions.
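    The capture-recapture logic reduces, in the two-source case, to the classical Lincoln-Petersen closed form sketched below; the study itself fitted log-linear models to the overlaps among four linked data sources, which additionally allow for dependence between sources:

    ```python
    # Two-source capture-recapture (Lincoln-Petersen) sketch, illustrating
    # the principle behind the log-linear estimates in the abstract.
    def lincoln_petersen(n1, n2, m):
        """n1, n2: cases flagged in each source; m: cases flagged in both.
        Returns the estimated total number of cases, observed or not."""
        if m == 0:
            raise ValueError("no overlap between sources; estimator undefined")
        return n1 * n2 / m
    ```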

  18. Correcting STIS CCD Point-Source Spectra for CTE Loss

    NASA Technical Reports Server (NTRS)

    Goudfrooij, Paul; Bohlin, Ralph C.; Maiz-Apellaniz, Jesus

    2006-01-01

    We review the on-orbit spectroscopic observations that are being used to characterize the Charge Transfer Efficiency (CTE) of the STIS CCD in spectroscopic mode. We parameterize the CTE-related loss for spectrophotometry of point sources in terms of dependencies on the brightness of the source, the background level, the signal in the PSF outside the standard extraction box, and the time of observation. Primary constraints on our correction algorithm are provided by measurements of the CTE loss rates for simulated spectra (images of a tungsten lamp taken through slits oriented along the dispersion axis) combined with estimates of CTE losses for actual spectra of spectrophotometric standard stars in the first-order CCD modes. For point-source spectra at the standard reference position at the CCD center, CTE losses as large as 30% are corrected to within approximately 1% RMS after application of the algorithm presented here, rendering the Poisson noise associated with the source detection itself the dominant contributor to the total flux calibration uncertainty.

  19. Burden Calculator: a simple and open analytical tool for estimating the population burden of injuries.

    PubMed

    Bhalla, Kavi; Harrison, James E

    2016-04-01

    Burden of disease and injury methods can be used to summarise and compare the effects of conditions in terms of disability-adjusted life years (DALYs). Burden estimation methods are not inherently complex. However, as commonly implemented, the methods include complex modelling and estimation. To provide a simple and open-source software tool that allows estimation of incidence-based DALYs due to injury, given data on incidence of deaths and non-fatal injuries. The tool includes a default set of estimation parameters, which can be replaced by users. The tool was written in Microsoft Excel. All calculations and values can be seen and altered by users. The parameter sets currently used in the tool are based on published sources. The tool is available without charge online at http://calculator.globalburdenofinjuries.org. To use the tool with the supplied parameter sets, users need only paste a table of population and injury case data organised by age, sex and external cause of injury into a specified location in the tool. Estimated DALYs can be read or copied from tables and figures in another part of the tool. In some contexts, a simple and user-modifiable burden calculator may be preferable to undertaking a more complex study to estimate the burden of disease. The tool and the parameter sets required for its use can be improved by user innovation, by studies comparing DALYs estimates calculated in this way and in other ways, and by shared experience of its use.
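    The underlying arithmetic such a calculator exposes is straightforward; a minimal sketch, assuming no discounting or age weighting, is:

    ```python
    # Incidence-based DALY arithmetic: DALY = YLL + YLD
    # (years of life lost plus years lived with disability).
    def dalys(deaths, life_expectancy_at_death,
              nonfatal_cases, disability_weight, avg_duration_years):
        yll = deaths * life_expectancy_at_death
        yld = nonfatal_cases * disability_weight * avg_duration_years
        return yll + yld

    # e.g. 10 deaths at mean remaining life expectancy of 40 y, plus 200
    # cases with disability weight 0.1 lasting 0.5 y on average:
    print(dalys(10, 40, 200, 0.1, 0.5))  # 400 + 10 = 410 DALYs
    ```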

  20. Multitaper scan-free spectrum estimation using a rotational shear interferometer.

    PubMed

    Lepage, Kyle; Thomson, David J; Kraut, Shawn; Brady, David J

    2006-05-01

    Multitaper methods for a scan-free spectrum estimation that uses a rotational shear interferometer are investigated. Before source spectra can be estimated the sources must be detected. A source detection algorithm based upon the multitaper F-test is proposed. The algorithm is simulated, with additive, white Gaussian detector noise. A source with a signal-to-noise ratio (SNR) of 0.71 is detected 2.9 degrees from a source with a SNR of 70.1, with a significance level of 10^-4, approximately 4 orders of magnitude more significant than the source detection obtained with a standard detection algorithm. Interpolation and the use of prewhitening filters are investigated in the context of rotational shear interferometer (RSI) source spectra estimation. Finally, a multitaper spectrum estimator is proposed, simulated, and compared with untapered estimates. The multitaper estimate is found via simulation to distinguish a spectral feature with a SNR of 1.6 near a large spectral feature. The SNR of 1.6 spectral feature is not distinguished by the untapered spectrum estimate. The findings are consistent with the strong capability of the multitaper estimate to reduce out-of-band spectral leakage.
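    A basic multitaper spectrum estimate, the core ingredient here, averages eigenspectra computed with Slepian (DPSS) tapers. The sketch below omits the F-test and all RSI-specific processing, and its normalisation is schematic:

    ```python
    import numpy as np
    from scipy.signal.windows import dpss

    # Basic multitaper spectrum estimate: average of eigenspectra obtained
    # by tapering the signal with k Slepian (DPSS) sequences.
    def multitaper_psd(x, nw=4.0, k=7, fs=1.0):
        n = len(x)
        tapers = dpss(n, nw, k)                   # (k, n) Slepian tapers
        spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
        psd = spectra.mean(axis=0) / fs           # average over tapers
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        return freqs, psd
    ```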

  1. Multitaper scan-free spectrum estimation using a rotational shear interferometer

    NASA Astrophysics Data System (ADS)

    Lepage, Kyle; Thomson, David J.; Kraut, Shawn; Brady, David J.

    2006-05-01

    Multitaper methods for a scan-free spectrum estimation that uses a rotational shear interferometer are investigated. Before source spectra can be estimated the sources must be detected. A source detection algorithm based upon the multitaper F-test is proposed. The algorithm is simulated, with additive, white Gaussian detector noise. A source with a signal-to-noise ratio (SNR) of 0.71 is detected 2.9° from a source with a SNR of 70.1, with a significance level of 10-4, ˜4 orders of magnitude more significant than the source detection obtained with a standard detection algorithm. Interpolation and the use of prewhitening filters are investigated in the context of rotational shear interferometer (RSI) source spectra estimation. Finally, a multitaper spectrum estimator is proposed, simulated, and compared with untapered estimates. The multitaper estimate is found via simulation to distinguish a spectral feature with a SNR of 1.6 near a large spectral feature. The SNR of 1.6 spectral feature is not distinguished by the untapered spectrum estimate. The findings are consistent with the strong capability of the multitaper estimate to reduce out-of-band spectral leakage.

  2. Atmospheric particulate emissions from dry abrasive blasting using coal slag.

    PubMed

    Kura, Bhaskar; Kambham, Kalpalatha; Sangameswaran, Sivaramakrishnan; Potana, Sandhya

    2006-08-01

    Coal slag is one of the most widely used abrasives in dry abrasive blasting. Atmospheric emissions from this process include particulate matter (PM) and heavy metals, such as chromium, lead, manganese, and nickel. Quantities and characteristics of PM emissions depend on abrasive characteristics and process parameters. Emission factors are key inputs for estimating emissions. Experiments were conducted to study the effect of blast pressure, abrasive feed rate, and initial surface contamination on total PM (TPM) emission factors for coal slag. Rusted and painted mild steel surfaces were used as base plates. Blasting was carried out in an enclosed chamber, and PM was collected from an exhaust duct using U.S. Environmental Protection Agency source sampling methods for stationary sources. Results showed that there is a significant effect of blast pressure, feed rate, and surface contamination on TPM emissions. Mathematical equations were developed to estimate emission factors in terms of mass of emissions per unit mass of abrasive used, as well as mass of emissions per unit of surface area cleaned. These equations will help industries estimate PM emissions based on blast pressure and abrasive feed rate. In addition, emissions can be reduced by choosing optimum operating conditions.

  3. Relating the variability of tone-burst otoacoustic emission and auditory brainstem response latencies to the underlying cochlear mechanics

    NASA Astrophysics Data System (ADS)

    Verhulst, Sarah; Shera, Christopher A.

    2015-12-01

    Forward and reverse cochlear latency and its relation to the frequency tuning of the auditory filters can be assessed using tone bursts (TBs). Otoacoustic emissions (TBOAEs) estimate the cochlear roundtrip time, while auditory brainstem responses (ABRs) to the same stimuli aim at measuring the auditory filter buildup time. Latency ratios are generally close to two and controversy exists about the relationship of this ratio to cochlear mechanics. We explored why the two methods provide different estimates of filter buildup time, and ratios with large inter-subject variability, using a time-domain model for OAEs and ABRs. We compared latencies for twenty models, in which all parameters but the cochlear irregularities responsible for reflection-source OAEs were identical, and found that TBOAE latencies were much more variable than ABR latencies. Multiple reflection-sources generated within the evoking stimulus bandwidth were found to shape the TBOAE envelope and complicate the interpretation of TBOAE latency and TBOAE/ABR ratios in terms of auditory filter tuning.

  4. A radiometric Bode's Law: Predictions for Uranus

    NASA Technical Reports Server (NTRS)

    Desch, M. D.; Kaiser, M. L.

    1984-01-01

    The magnetospheres of three planets, Earth, Jupiter, and Saturn, are known to be sources of intense, nonthermal radio bursts. The emissions from these sources undergo pronounced long-term intensity fluctuations that are caused by the solar wind interaction with the magnetosphere of each planet. Determinations by spacecraft of the low-frequency radio spectra and radiation beam geometry now permit a reliable assessment of the overall efficiency of the solar wind in stimulating these emissions. Earlier estimates of how magnetospheric radio output scales with the solar wind energy input must be revised greatly, with the result that, while the efficiency is much lower than previously thought, it is remarkably uniform from planet to planet. A radiometric Bode's Law is formulated from which a planet's magnetic moment can be estimated from its radio emission output. When the radiometric scaling law is applied to Uranus, the low-frequency radio power is likely to be measurable by the Voyager 2 spacecraft as it approaches the planet.

  5. Mobile sensing of point-source fugitive methane emissions using Bayesian inference: the determination of the likelihood function

    NASA Astrophysics Data System (ADS)

    Zhou, X.; Albertson, J. D.

    2016-12-01

    Natural gas is considered a bridge fuel towards clean energy due to its potentially lower greenhouse gas emissions compared with other fossil fuels. Despite numerous efforts, an efficient and cost-effective approach to monitor fugitive methane emissions along the natural gas production-supply chain has not yet been developed. Recently, mobile methane measurement has been introduced, which applies a Bayesian approach to probabilistically infer methane emission rates and update estimates recursively when new measurements become available. However, the likelihood function, especially the error term which determines the shape of the estimate uncertainty, has not been rigorously defined and evaluated with field data. To address this issue, we performed a series of near-source (< 30 m) controlled methane release experiments using a specialized vehicle mounted with fast-response methane analyzers and a GPS unit. Methane concentrations were measured at two different heights along mobile traversals downwind of the sources, and concurrent wind and temperature data were recorded by nearby 3-D sonic anemometers. With known methane release rates, the measurements were used to determine the functional form and the parameterization of the likelihood function in the Bayesian inference scheme under different meteorological conditions.
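    The recursive Bayesian scheme referred to can be sketched on a grid of candidate emission rates; the Gaussian likelihood, unit dispersion factor, and numbers below are placeholders, and determining the proper likelihood is precisely what the field experiments address:

    ```python
    import numpy as np

    # Grid-based recursive Bayesian update of a source emission rate q.
    # c_model_per_unit_q stands in for a dispersion-model prediction of
    # concentration per unit emission rate (placeholder value here).
    def update_posterior(prior, q_grid, c_obs, c_model_per_unit_q, sigma):
        c_pred = q_grid * c_model_per_unit_q
        like = np.exp(-0.5 * ((c_obs - c_pred) / sigma) ** 2)
        post = prior * like
        return post / post.sum()

    q_grid = np.linspace(0.0, 2.0, 201)                  # candidate rates (g/s, say)
    posterior = np.full_like(q_grid, 1.0 / q_grid.size)  # flat prior
    for c_obs in [0.8, 1.1, 0.95]:                       # successive downwind samples
        posterior = update_posterior(posterior, q_grid, c_obs,
                                     c_model_per_unit_q=1.0, sigma=0.2)
    print(q_grid[posterior.argmax()])                    # MAP emission-rate estimate
    ```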

  6. Volcanic eruption source parameters from active and passive microwave sensors

    NASA Astrophysics Data System (ADS)

    Montopoli, Mario; Marzano, Frank S.; Cimini, Domenico; Mereu, Luigi

    2016-04-01

    It is well known in the volcanology community that precise information on the source parameters characterising an eruption is of predominant interest for the initialization of Volcanic Transport and Dispersion Models (VTDM). The source parameters of main interest are the top altitude of the volcanic plume, the flux of the mass ejected at the emission source, which is strictly related to the cloud-top altitude, the distribution of volcanic mass concentration along the vertical column, as well as the duration of the eruption and the erupted volume. Usually, the combination of a-posteriori field and numerical studies allows constraining the eruption source parameters for a given volcanic event, thus making possible the forecast of ash dispersion and deposition from future volcanic eruptions. So far, remote sensors working at visible and infrared channels (cameras and radiometers) have mainly been used to detect, track and provide estimates of the concentration content and the prevailing size of the particles propagating within ash clouds up to several thousand kilometres from the source, as well as to check, a-posteriori, the accuracy of the VTDM outputs, thus testing the initial choice made for the source parameters. Acoustic waves (infrasound) and microwave fixed-scan radar (VOLDORAD) have also been used to infer source parameters. In this work we focus on the role of sensors operating at microwave wavelengths as complementary tools for real-time estimation of source parameters. Microwaves benefit from operability during night and day and from a relatively negligible sensitivity to the presence of (non-precipitating) weather clouds, at the cost of limited coverage and coarser spatial resolution compared with infrared sensors. Thanks to these advantages, the products from microwave sensors are expected to be sensitive mostly to the whole path traversed along the tephra cloud, making microwaves particularly appealing for estimates close to the volcanic emission source. Near the source, the cloud optical thickness is expected to be large enough to induce saturation effects at the infrared sensor receiver, thus defeating the brightness-temperature-difference methods for ash-cloud identification. In this light, case studies at Eyjafjallajökull, Iceland (5-10 May 2010), Etna, Italy (23 November 2013) and Calbuco, Chile (23 April 2015) are analysed in terms of source parameter estimates (mainly the cloud-top altitude and the mass flux rate) from ground-based microwave weather radar (9.6 GHz) and satellite low-Earth-orbit microwave radiometers (50-183 GHz). Special attention is given to the advantages and limitations of microwave-derived products with respect to more conventional tools.
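    One widely used empirical link between the two source parameters highlighted here (plume-top altitude and mass flux) is the Mastin et al. (2009) relation H = 2.00 V^0.241, with H the plume height above the vent in km and V the dense-rock-equivalent volumetric flow rate in m^3/s. It is shown below as context, not necessarily the relation these authors use:

    ```python
    # Mass eruption rate from plume-top height via the Mastin et al. (2009)
    # empirical fit H = 2.00 * V**0.241 (H in km above vent, V in m^3/s DRE).
    def mass_eruption_rate(plume_height_km, magma_density=2500.0):
        v_dre = (plume_height_km / 2.00) ** (1.0 / 0.241)  # m^3/s
        return v_dre * magma_density                       # kg/s

    print(f"{mass_eruption_rate(9.0):.2e} kg/s")  # e.g. a 9 km plume
    ```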

  7. Effects of volcano topography on seismic broad-band waveforms

    NASA Astrophysics Data System (ADS)

    Neuberg, Jürgen; Pointer, Tim

    2000-10-01

    Volcano seismology often deals with rather shallow seismic sources and seismic stations deployed in their near field. The complex stratigraphy on volcanoes and near-field source effects have a strong impact on the seismic wavefield, complicating the interpretation techniques that are usually employed in earthquake seismology. In addition, as most volcanoes have a pronounced topography, the interference of the seismic wavefield with the stress-free surface results in severe waveform perturbations that affect seismic interpretation methods. In this study we deal predominantly with the surface effects, but take into account the impact of a typical volcano stratigraphy as well as near-field source effects. We derive a correction term for plane seismic waves and a plane-free surface such that for smooth topographies the effect of the free surface can be totally removed. Seismo-volcanic sources radiate energy in a broad frequency range with a correspondingly wide range of different Fresnel zones. A 2-D boundary element method is employed to study how the size of the Fresnel zone is dependent on source depth, dominant wavelength and topography in order to estimate the limits of the plane wave approximation. This approximation remains valid if the dominant wavelength does not exceed twice the source depth. Further aspects of this study concern particle motion analysis to locate point sources and the influence of the stratigraphy on particle motions. Furthermore, the deployment strategy of seismic instruments on volcanoes, as well as the direct interpretation of the broad-band waveforms in terms of pressure fluctuations in the volcanic plumbing system, are discussed.

  8. Exact relations for energy transfer in self-gravitating isothermal turbulence

    NASA Astrophysics Data System (ADS)

    Banerjee, Supratik; Kritsuk, Alexei G.

    2017-11-01

    Self-gravitating isothermal supersonic turbulence is analyzed in the asymptotic limit of large Reynolds numbers. Based on the inviscid invariance of total energy, an exact relation is derived for homogeneous (not necessarily isotropic) turbulence. A modified definition for the two-point energy correlation functions is used to comply with the requirement of detailed energy equipartition in the acoustic limit. In contrast to the previous relations (S. Galtier and S. Banerjee, Phys. Rev. Lett. 107, 134501 (2011), 10.1103/PhysRevLett.107.134501; S. Banerjee and S. Galtier, Phys. Rev. E 87, 013019 (2013), 10.1103/PhysRevE.87.013019), the current exact relation shows that the pressure dilatation terms play practically no role in the energy cascade. Both the flux and source terms are written in terms of two-point differences. Sources enter the relation in a form of mixed second-order structure functions. Unlike the kinetic and thermodynamic potential energies, the gravitational contribution is absent from the flux term. An estimate shows that, for the isotropic case, the correlation between density and gravitational acceleration may play an important role in modifying the energy transfer in self-gravitating turbulence. The exact relation is also written in an alternative form in terms of two-point correlation functions, which is then used to describe scale-by-scale energy budget in spectral space.

  9. The use of gravimetric data from GRACE mission in the understanding of polar motion variations

    NASA Astrophysics Data System (ADS)

    Seoane, L.; Nastula, J.; Bizouard, C.; Gambis, D.

    2009-08-01

    Tesseral coefficients C21 and S21 derived from Gravity Recovery and Climate Experiment (GRACE) observations allow computation of the mass term of the polar-motion excitation function. This independent estimation can improve the geophysical models and, in addition, help identify unmodelled phenomena. In this paper, we validate the polar motion excitation derived from GRACE's last release (GRACE Release 4) computed by different institutes: GeoForschungsZentrum (GFZ), Potsdam, Germany; Center for Space Research (CSR), Austin, USA; Jet Propulsion Laboratory (JPL), Pasadena, USA, and the Groupe de Recherche en Géodésie Spatiale (GRGS), Toulouse, France. For this purpose, we compare these excitation functions first to the mass term obtained from observed Earth's rotation variations free of the motion term and, second, to the mass term estimated from geophysical fluids models. We confirm the large improvement of the CSR solution, and we show that the GRGS estimate is also well correlated with the geodetic observations. Significant discrepancies exist between the solutions of each centre. The source of these differences is probably related to the data processing strategy. We also consider residuals computed after removing the geophysical models or the gravimetric solutions from the geodetic mass term. We show that the residual excitation based on models is smoother than the gravimetric data, which are still noisy. Still, they are comparable for the χ2 component. It appears that χ2 residual signals using GFZ and JPL data have less variability. Finally, for assessing the impact of the choice of geophysical fluids models on our results, we checked two different oceanic excitation series. We show significant differences in the residuals correlations, especially for χ1, which is more sensitive to the oceanic signals.

  10. Accident Source Terms for Pressurized Water Reactors with High-Burnup Cores Calculated using MELCOR 1.8.5.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gauntt, Randall O.; Goldmann, Andrew; Kalinich, Donald A.

    2016-12-01

    In this study, risk-significant pressurized-water reactor severe accident sequences are examined using MELCOR 1.8.5 to explore the range of fission product releases to the reactor containment building. Advances in the understanding of fission product release and transport behavior and severe accident progression are used to render best-estimate analyses of selected accident sequences. Particular emphasis is placed on estimating the effects of high fuel burnup, in contrast with low burnup, on fission product releases to the containment. Supporting this emphasis, recent data on fission product release from high-burnup (HBU) fuel from the French VERCOR project are used in this study. The results of these analyses are treated as samples from a population of accident sequences in order to employ approximate order-statistics characterization of the results. These trends and tendencies are then compared to the NUREG-1465 alternative source term prescription used today for regulatory applications. In general, greater differences are observed between the state-of-the-art calculations for either HBU or low-burnup (LBU) fuel and the NUREG-1465 containment release fractions than exist between HBU and LBU release fractions. Current analyses suggest that retention of fission products within the vessel and the reactor coolant system (RCS) is greater than contemplated in the NUREG-1465 prescription, and that, overall, release fractions to the containment are therefore lower across the board in the present analyses than suggested in NUREG-1465. The decreased volatility of Cs2MoO4 compared to CsI or CsOH increases the predicted RCS retention of cesium, and as a result, cesium and iodine do not follow identical behaviors with respect to distribution among vessel, RCS, and containment. With respect to the regulatory alternative source term, greater differences are observed between the NUREG-1465 prescription and both HBU and LBU predictions than exist between HBU and LBU analyses. Additionally, current analyses suggest that the NUREG-1465 release fractions are conservative by about a factor of 2 in terms of release fractions, and that release durations for the in-vessel and late in-vessel release periods are in fact longer than the NUREG-1465 durations. It is currently planned that a subsequent report will further characterize these results using more refined statistical methods, permitting a more precise reformulation of the NUREG-1465 alternative source term for both LBU and HBU fuels; the most important finding is that the NUREG-1465 formula appears to embody significant conservatism compared to current best-estimate analyses. ACKNOWLEDGEMENTS: This work was supported by the United States Nuclear Regulatory Commission, Office of Nuclear Regulatory Research. The authors would like to thank Dr. Ian Gauld and Dr. Germina Ilas, of Oak Ridge National Laboratory, for their contributions to this work. In addition to development of core fission product inventory and decay heat information for use in MELCOR models, their insights related to fuel management practices and resulting effects on spatial distribution of fission products in the core were instrumental in completion of our work.

  11. Evaluation of Intercontinental Transport of Ozone Using Full-tagged, Tagged-N and Sensitivity Methods

    NASA Astrophysics Data System (ADS)

    Guo, Y.; Liu, J.; Mauzerall, D. L.; Emmons, L. K.; Horowitz, L. W.; Fan, S.; Li, X.; Tao, S.

    2014-12-01

    Long-range transport of ozone is of great concern, yet the source-receptor relationships derived previously depend strongly on the source attribution techniques used. Here we describe a new tagged ozone mechanism (full-tagged), the design of which seeks to take into account the combined effects of emissions of ozone precursors, CO, NOx and VOCs, from a particular source, while keeping the current state of chemical equilibrium unchanged. We label emissions from the target source (A) and background (B). When two species from A and B sources react with each other, half of the resulting products are labeled A, and half B. Thus the impact of a given source on downwind regions is recorded through tagged chemistry. We then incorporate this mechanism into the Model for Ozone and Related chemical Tracers (MOZART-4) to examine the impact of anthropogenic emissions within North America, Europe, East Asia and South Asia on ground-level ozone downwind of source regions during 1999-2000. We compare our results with two previously used methods -- the sensitivity and tagged-N approaches. The ozone attributed to a given source by the full-tagged method is more widely distributed spatially, but has weaker seasonal variability than that estimated by the other methods. On a seasonal basis, for most source/receptor pairs, the full-tagged method estimates the largest amount of tagged ozone, followed by the sensitivity and tagged-N methods. In terms of trans-Pacific influence of ozone pollution, the full-tagged method estimates the strongest impact of East Asian (EA) emissions on the western U.S. (WUS) in MAM and JJA (~3 ppbv), which is substantially different in magnitude and seasonality from tagged-N and sensitivity studies. This difference results from the full-tagged method accounting for the maintenance of peroxy radicals (e.g., CH3O2, CH3CO3, and HO2), in addition to NOy, as effective reservoirs of EA source impact across the Pacific, allowing for a significant contribution to ozone formation over WUS (particularly in summer). Thus, the full-tagged method, with its clear discrimination of source and background contributions on a per-reaction basis, provides unique insights into the critical role of VOCs (and additional reactive nitrogen species) in determining the nonlinear inter-continental influence of ozone pollution.
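    The half-and-half attribution rule described above can be illustrated with a toy bookkeeping function; species names and amounts are invented for illustration, and this is not the MOZART-4 implementation:

    ```python
    # Toy "full-tagged" bookkeeping: when a tagged-A species reacts with a
    # tagged-B species, half of the product is attributed to each tag,
    # leaving the total chemistry unchanged.
    def attribute_product(prod_a, prod_b, reacted_a, reacted_b):
        total = reacted_a + reacted_b
        prod_a += 0.5 * total
        prod_b += 0.5 * total
        return prod_a, prod_b

    o3_a, o3_b = 0.0, 0.0
    o3_a, o3_b = attribute_product(o3_a, o3_b, reacted_a=2.0, reacted_b=2.0)
    print(o3_a, o3_b)  # equal split of the 4 units formed in an A+B step
    ```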

  12. Area estimation of environmental phenomena from NOAA-n satellite data. [TIROS N satellite

    NASA Technical Reports Server (NTRS)

    Tappan, G. (Principal Investigator); Miller, G. E.

    1982-01-01

    A technique for documenting changes in size of NOAA-n pixels, in order to calibrate the data for use in performing area calculations, is described. Based on Earth-satellite geometry, a function for calculating the effective pixel size, measured in terms of ground area, of any given pixel was derived. The equation is an application of the law of sines plus an arc-length formula. Effective pixel dimensions for the NOAA 6 and 7 satellites for all pixels between nadir and the extreme view angles are presented. The NOAA 6 data were used to estimate the areas of several lakes, with an accuracy within 5%. Sources of error are discussed.
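    The law-of-sines construction described can be sketched directly; the altitude and IFOV values below are nominal illustrations, not taken from the report:

    ```python
    import numpy as np

    # For a satellite at altitude H viewing at scan angle theta off nadir,
    # the law of sines in the triangle (Earth centre, satellite, ground
    # point) gives the local zenith angle zeta, the Earth-central angle
    # gamma = zeta - theta, and the arc length R*gamma from nadir. The
    # along-scan pixel size is the arc-length difference across one IFOV.
    R = 6371.0     # Earth radius, km
    H = 833.0      # nominal NOAA-n orbit altitude, km (illustrative)
    IFOV = 1.3e-3  # instantaneous field of view, rad (illustrative)

    def ground_arc(theta):
        zeta = np.arcsin((R + H) / R * np.sin(theta))  # local zenith angle
        gamma = zeta - theta                           # Earth-central angle
        return R * gamma                               # km from nadir

    def along_scan_pixel_km(theta):
        return ground_arc(theta + IFOV / 2) - ground_arc(theta - IFOV / 2)

    print(along_scan_pixel_km(0.0), along_scan_pixel_km(np.deg2rad(55)))
    ```

    At nadir this reduces to roughly H times the IFOV (about 1.1 km for the values assumed here), growing several-fold toward the edge of the scan.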

  13. Hanford Environmental Dose Reconstruction Project monthly report, November 1992

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cannon, S.D.; Finch, S.M.

    1992-12-31

    The objective of the Hanford Environmental Dose Reconstruction (HEDR) Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The TSP consists of experts in environmental pathways, epidemiology, surface-water transport, ground-water transport, statistics, demography, agriculture, meteorology, nuclear engineering, radiation dosimetry, and cultural anthropology. Included are appointed members representing the states of Oregon, Washington, and Idaho, a representative of Native American tribes, and an individual representing the public. The project is divided into the following technical tasks: source terms; environmental transport; environmental monitoring data; demography, food consumption and agriculture; and environmental pathways and dose estimates.

  14. Hanford Environmental Dose Reconstruction Project monthly report, November 1992

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cannon, S.D.; Finch, S.M.

    1992-01-01

    The objective of the Hanford Environmental Dose Reconstruction (HEDR) Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The TSP consists of experts in environmental pathways, epidemiology, surface-water transport, ground-water transport, statistics, demography, agriculture, meteorology, nuclear engineering, radiation dosimetry, and cultural anthropology. Included are appointed members representing the states of Oregon, Washington, and Idaho, a representative of Native American tribes, and an individual representing the public. The project is divided into the following technical tasks: source terms; environmental transport; environmental monitoring data; demography, food consumption and agriculture; and environmental pathways and dose estimates.

  15. Energy performance of a ventilation system for a block of apartments with a ground source heat pump as generation system

    NASA Astrophysics Data System (ADS)

    Lucchi, M.; Lorenzini, M.; Valdiserri, P.

    2017-01-01

    This work presents a numerical simulation of the annual performance of two different systems: a traditional one composed of a gas boiler-chiller pair and one consisting of a ground source heat pump (GSHP), both coupled to two thermal storage tanks. The systems serve a block of flats located in northern Italy and are assessed over a typical weather year, covering both the heating and cooling seasons. The air handling unit (AHU) coupled with the GSHP exhibits excellent characteristics in terms of temperature control and has high performance parameters (EER and COP), which make its running costs about 30% lower than those estimated for the traditional plant.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woodroffe, J. R.; Brito, T. V.; Jordanova, V. K.

    In the standard practice of neutron multiplicity counting, the first three sampled factorial moments of the event-triggered neutron count distribution are used to quantify the three main neutron source terms: the effective mass of spontaneously fissioning material, the relative (α,n) production, and the induced-fission source responsible for multiplication. Our study compares three methods to quantify the statistical uncertainty of the estimated mass: the bootstrap method, propagation of variance through the moments, and the statistical analysis of cycle data method. Each of the three methods was implemented on a set of four different NMC measurements, performed at the JRC laboratory in Ispra, Italy, sampling four different Pu samples in a standard Plutonium Scrap Multiplicity Counter (PSMC) well counter.
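
    The factorial-moment bookkeeping and the bootstrap uncertainty estimate lend themselves to a compact sketch. The Python fragment below is illustrative only: synthetic Poisson counts stand in for measured trigger data, and the mass calibration itself is omitted.

        import numpy as np

        def factorial_moments(counts, order=3):
            """First `order` sampled factorial moments m_k = E[n(n-1)...(n-k+1)]
            of an event-triggered neutron count distribution."""
            n = np.asarray(counts, dtype=float)
            term = np.ones_like(n)
            moments = []
            for k in range(1, order + 1):
                term = term * (n - (k - 1))   # n, n(n-1), n(n-1)(n-2), ...
                moments.append(term.mean())
            return moments

        def bootstrap_sd(counts, estimator, n_boot=2000, seed=0):
            """Bootstrap standard deviation of any statistic of the count data."""
            rng = np.random.default_rng(seed)
            counts = np.asarray(counts)
            stats = [estimator(rng.choice(counts, size=counts.size, replace=True))
                     for _ in range(n_boot)]
            return np.std(stats, ddof=1)

        # Synthetic stand-in for measured trigger counts; for a Poisson distribution
        # the factorial moments are simply lambda, lambda^2, lambda^3.
        counts = np.random.default_rng(1).poisson(2.0, size=5000)
        m1, m2, m3 = factorial_moments(counts)
        sd_m2 = bootstrap_sd(counts, lambda c: factorial_moments(c)[1])
        print(m1, m2, m3, sd_m2)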

  16. Characterization of Spatial Impact of Particles Emitted from a Cement Material Production Facility on Outdoor Particle Deposition in the Surrounding Community.

    PubMed

    Yu, Chang Ho; Fan, Zhihua Tina; McCandlish, Elizabeth; Stern, Alan H; Lioy, Paul J

    2011-10-01

    The objective of this study was to estimate the contribution of a facility that processes steel production slag into raw material for cement production to local outdoor particle deposition in Camden, NJ. A dry deposition sampler that can house four 37-mm quartz fiber filters was developed and used for the collection of atmospheric particle deposits. Two rounds of particle collection (3-4 weeks each) were conducted in 8-11 locations 200-800 m downwind of the facility. Background samples were concurrently collected in a remote area located ∼2 km upwind from the facility. In addition, duplicate surface wipe samples were collected side-by-side from each of the 13 locations within the same sampling area during the first deposition sampling period. One composite source material sample was also collected from a pile stored in the facility. Both the bulk of the source material and the <38 μm fraction subsample were analyzed to obtain the elemental source profile. The particle deposition flux in the study area was higher (24-83 mg/m2·day) than at the background sites (13-17 mg/m2·day). The concentration of Ca, a major element in the cement source production material, was found to exponentially decrease with increasing downwind distance from the facility (P < 0.05). The ratio of Ca/Al, an indicator of Ca enrichment due to anthropogenic sources in a given sample, showed a similar trend. These observations suggest a significant contribution of the facility to the local particle deposition. The contribution of the facility to outdoor deposited particle mass was further estimated by three independent models using the measurements obtained from this study. The estimated contributions to particle deposition in the study area were 1.8-7.4% from the regression analysis of the Ca concentration in particle deposition samples against the distance from the facility, 0-11% from the U.S. Environmental Protection Agency (EPA) Chemical Mass Balance (CMB) source-receptor model, and 7.6-13% from the EPA Industrial Source Complex Short Term (ISCST3) dispersion model using the particle-size-adjusted permit-based emissions estimates.
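
    The regression used for the first contribution estimate above is a plain log-linear fit. A minimal Python sketch with hypothetical distances and Ca concentrations (not the study's data):

        import numpy as np

        # Hypothetical deposition samples: downwind distance (m) vs Ca concentration
        distance = np.array([200.0, 300.0, 450.0, 600.0, 800.0])
        ca_conc = np.array([41.0, 30.5, 19.8, 14.2, 8.9])    # e.g. mg/g of deposit

        # Fit C(x) = C0 * exp(-k x) by ordinary least squares on log concentration
        slope, intercept = np.polyfit(distance, np.log(ca_conc), 1)
        c0, k = np.exp(intercept), -slope
        print(f"C0 = {c0:.1f}, decay constant k = {k:.2e} per metre")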

  17. Methods for Cloud Cover Estimation

    NASA Technical Reports Server (NTRS)

    Glackin, D. L.; Huning, J. R.; Smith, J. H.; Logan, T. L.

    1984-01-01

    Several methods for cloud cover estimation are described, relevant to assessing the performance of a ground-based network of solar observatories. The methods rely on ground and satellite data sources and provide meteorological or climatological information. One means of acquiring long-term observations of solar oscillations is the establishment of a ground-based network of solar observatories. Criteria for station site selection are gross cloudiness, accurate transparency information, and seeing; together these determine the duty cycle of the network, that is, the fraction of time during which solar observations are possible. Alternative methods for computing this duty cycle are discussed. The duty cycle, or alternatively a time history of solar visibility from the network, can then be input to a model to determine the effect of duty cycle on derived solar seismology parameters. Cloudiness from space is studied to examine various means by which the duty cycle might be computed. Cloudiness, and to some extent transparency, can potentially be estimated from satellite data.

  18. Potential health risks from postulated accidents involving the Pu-238 RTG on the Ulysses solar exploration mission

    NASA Technical Reports Server (NTRS)

    Goldman, Marvin; Hoover, Mark D.; Nelson, Robert C.; Templeton, William; Bollinger, Lance; Anspaugh, Lynn

    1991-01-01

    Potential radiation impacts from the launch of the Ulysses solar exploration experiment were evaluated using eight postulated accident scenarios. Lifetime individual dose estimates rarely exceeded 1 mrem. Most of the potential health effects would come from inhalation exposures immediately after an accident, rather than from ingestion of contaminated food or water, or from inhalation of resuspended plutonium from contaminated ground. For local Florida accidents (that is, during the first minute after launch), an average source-term accident was estimated to cause a total added cancer risk of up to 0.2 deaths. For accidents at later times after launch, a worldwide cancer risk of up to three cases was calculated (with a four-in-a-million probability). Upper-bound estimates were calculated to be about 10 times higher.

  19. Fish-Eye Observing with Phased Array Radio Telescopes

    NASA Astrophysics Data System (ADS)

    Wijnholds, S. J.

    The radio astronomical community is currently developing and building several new radio telescopes based on phased array technology. These telescopes provide a large field of view that may in principle span a full hemisphere. This makes calibration and imaging very challenging tasks due to the complex source structures and direction-dependent radio wave propagation effects. In this thesis, calibration and imaging methods are developed based on least squares estimation of instrument and source parameters. Monte Carlo simulations and actual observations with several prototypes show that this model-based approach provides statistically and computationally efficient solutions. The error analysis provides a rigorous mathematical framework to assess the imaging performance of current and future radio telescopes in terms of the effective noise, which is the combined effect of propagated calibration errors, noise in the data, and source confusion.

  1. Inverse source problems in elastodynamics

    NASA Astrophysics Data System (ADS)

    Bao, Gang; Hu, Guanghui; Kian, Yavar; Yin, Tao

    2018-04-01

    We are concerned with time-dependent inverse source problems in elastodynamics. The source term is supposed to be the product of a spatial function and a temporal function with compact support. We present frequency-domain and time-domain approaches to show uniqueness in determining the spatial function from wave fields on a large sphere over a finite time interval. The stability estimate of the temporal function from the data of one receiver and the uniqueness result using partial boundary data are proved. Our arguments rely heavily on the use of the Fourier transform, which motivates inversion schemes that can be easily implemented. A Landweber iterative algorithm for recovering the spatial function and a non-iterative inversion scheme based on the uniqueness proof for recovering the temporal function are proposed. Numerical examples are presented in both two and three dimensions.
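
    The Landweber iteration mentioned for the spatial function is the classical scheme x_{k+1} = x_k + ω Aᵀ(y − A x_k). Below is a generic Python sketch on a toy linear system; the paper's actual operator comes from the elastic wave equation and is not reproduced here.

        import numpy as np

        def landweber(A, y, n_iter=200, omega=None):
            """Landweber iteration x_{k+1} = x_k + omega * A^T (y - A x_k) for the
            linear(ised) problem A x = y; converges for 0 < omega < 2 / ||A||^2."""
            if omega is None:
                omega = 1.0 / np.linalg.norm(A, 2) ** 2
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                x = x + omega * A.T @ (y - A @ x)
            return x

        # Toy ill-conditioned system standing in for the discretized field-to-source map
        rng = np.random.default_rng(7)
        A = rng.normal(size=(60, 40)) @ np.diag(np.linspace(1.0, 1e-3, 40))
        x_true = np.zeros(40)
        x_true[5:10] = 1.0
        y = A @ x_true + 1e-3 * rng.normal(size=60)
        print(np.round(landweber(A, y)[:12], 2))   # the block of ones is approximately recovered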

  2. Repeat immigration: A previously unobserved source of heterogeneity?

    PubMed

    Aradhya, Siddartha; Scott, Kirk; Smith, Christopher D

    2017-07-01

    Register data allow for nuanced analyses of heterogeneities between sub-groups which are not observable in other data sources. One heterogeneity for which register data are particularly useful is the identification of unique migration histories of immigrant populations, a group of interest across disciplines. Years since migration is a commonly used measure of integration in studies seeking to understand the outcomes of immigrants. This study constructs detailed migration histories to test whether misclassified migrations may mask important heterogeneities. In doing so, we identify a previously understudied group of migrants called repeat immigrants, and show that they differ systematically from permanent immigrants. In addition, we quantify the degree to which migration information is misreported in the registers. The analysis is carried out in two steps. First, we estimate income trajectories for repeat immigrants and permanent immigrants to understand the degree to which they differ. Second, we test data validity by cross-referencing migration information with changes in income to determine whether there are inconsistencies indicating misreporting. From the first part of the analysis, the results indicate that repeat immigrants systematically differ from permanent immigrants in terms of income trajectories. Furthermore, income trajectories differ based on the way in which years since migration is calculated. The second part of the analysis suggests that misreported migration events, while present, are negligible. Repeat immigrants differ in terms of income trajectories, and may differ in terms of other outcomes as well. Furthermore, this study underlines that Swedish registers provide a reliable data source for analyzing groups which are unidentifiable in other data sources.

  3. A source number estimation method for single optical fiber sensor

    NASA Astrophysics Data System (ADS)

    Hu, Junpeng; Huang, Zhiping; Su, Shaojing; Zhang, Yimeng; Liu, Chunwu

    2015-10-01

    The single-channel blind source separation (SCBSS) technique is of great significance in many fields, such as optical fiber communication, sensor detection, and image processing. Realizing blind source separation (BSS) from the data received by a single optical fiber sensor has a wide range of applications. The performance of many BSS algorithms and signal processing methods is degraded by inaccurate source number estimation. Many excellent algorithms have been proposed for source number estimation in array signal processing, where multiple sensors are available, but they cannot be applied directly to the single-sensor condition. This paper presents a source number estimation method for data received by a single optical fiber sensor. By a delay process, the single-sensor data are converted to a multi-dimensional form and the data covariance matrix is constructed; the estimation algorithms used in array signal processing can then be utilized. The information theoretic criteria (ITC) based methods, represented by AIC and MDL, and Gerschgorin's disk estimation (GDE) are introduced to estimate the source number of the single optical fiber sensor's received signal. To improve the performance of these estimation methods at low signal-to-noise ratio (SNR), a smoothing process is applied to the data covariance matrix, which reduces the fluctuation and uncertainty of its eigenvalues. Simulation results show that the ITC-based methods cannot estimate the source number effectively under colored noise. The GDE method, although its performance is poor at low SNR, is able to accurately estimate the number of sources under colored noise. The experiments also show that the proposed method can be applied to estimate the source number of single-sensor received data.
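
    A minimal Python sketch of the pipeline described here — delay embedding, a segment-averaged ("smoothed") covariance, then an information-theoretic criterion — using MDL on synthetic data. The parameter choices are assumptions, and note that each real sinusoid occupies two dimensions of the signal subspace.

        import numpy as np

        def delay_embed(x, dim):
            """Turn single-sensor data into pseudo-array snapshots via time delays."""
            n_snap = len(x) - dim + 1
            return np.stack([x[i:i + n_snap] for i in range(dim)])   # (dim, n_snap)

        def smoothed_covariance(x, dim, n_seg=8):
            """Average covariance matrices over data segments to damp the
            eigenvalue fluctuations (the 'smoothing' step for low SNR)."""
            segs = np.array_split(np.asarray(x, dtype=float), n_seg)
            Rs = [(lambda X: X @ X.T / X.shape[1])(delay_embed(s, dim)) for s in segs]
            return np.mean(Rs, axis=0)

        def mdl(eigvals, n_snap):
            """Minimum description length criterion over candidate source numbers."""
            lam = np.sort(eigvals)[::-1]
            p = len(lam)
            scores = []
            for k in range(p):
                tail = lam[k:]
                ratio = np.exp(np.mean(np.log(tail))) / np.mean(tail)  # geometric/arithmetic
                scores.append(-n_snap * (p - k) * np.log(ratio)
                              + 0.5 * k * (2 * p - k) * np.log(n_snap))
            return int(np.argmin(scores))

        # Two sinusoidal sources in white noise, seen by one "sensor"
        rng = np.random.default_rng(0)
        t = np.arange(4096)
        x = np.sin(0.2 * t) + 0.7 * np.sin(0.43 * t + 1.0) + 0.5 * rng.normal(size=t.size)
        R = smoothed_covariance(x, dim=10)
        n_snap = 4096 // 8 - 10 + 1
        # Each real sinusoid occupies two signal-subspace dimensions, so 4 is expected:
        print("estimated signal subspace dimension:", mdl(np.linalg.eigvalsh(R), n_snap))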

  4. Long-term air pollution exposure and cardio- respiratory mortality: a review

    PubMed Central

    2013-01-01

    Current day concentrations of ambient air pollution have been associated with a range of adverse health effects, particularly mortality and morbidity due to cardiovascular and respiratory diseases. In this review, we summarize the evidence from epidemiological studies on long-term exposure to fine and coarse particles, nitrogen dioxide (NO2) and elemental carbon on mortality from all-causes, cardiovascular disease and respiratory disease. We also summarize the findings on potentially susceptible subgroups across studies. We identified studies through a search in the databases Medline and Scopus and previous reviews until January 2013 and performed a meta-analysis if more than five studies were available for the same exposure metric. There is a significant number of new studies on long-term air pollution exposure, covering a wider geographic area, including Asia. These recent studies support associations found in previous cohort studies on PM2.5. The pooled effect estimate expressed as excess risk per 10 μg/m3 increase in PM2.5 exposure was 6% (95% CI 4, 8%) for all-cause and 11% (95% CI 5, 16%) for cardiovascular mortality. Long-term exposure to PM2.5 was more associated with mortality from cardiovascular disease (particularly ischemic heart disease) than from non-malignant respiratory diseases (pooled estimate 3% (95% CI −6, 13%)). Significant heterogeneity in PM2.5 effect estimates was found across studies, likely related to differences in particle composition, infiltration of particles indoors, population characteristics and methodological differences in exposure assessment and confounder control. All-cause mortality was significantly associated with elemental carbon (pooled estimate per 1 μg/m3 6% (95% CI 5, 7%)) and NO2 (pooled estimate per 10 μg/m3 5% (95% CI 3, 8%)), both markers of combustion sources. There was little evidence for an association between long term coarse particulate matter exposure and mortality, possibly due to the small number of studies and limitations in exposure assessment. Across studies, there was little evidence for a stronger association among women compared to men. In subjects with lower education and obese subjects a larger effect estimate for mortality related to fine PM was found, though the evidence for differences related to education has been weakened in more recent studies. PMID:23714370
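
    As a rough illustration of the pooling step behind figures like "6% (95% CI 4, 8%)", here is a minimal Python sketch with made-up study estimates. A fixed-effect inverse-variance pool is shown for brevity; given the significant heterogeneity reported, the review itself would typically rely on a random-effects model.

        import numpy as np

        # Made-up per-study excess risks (%) per 10 ug/m3 PM2.5 with 95% CIs
        est = np.array([4.0, 8.0, 6.0, 13.0, 5.0])
        lo = np.array([1.0, 3.0, 2.0, 4.0, 1.0])
        hi = np.array([7.0, 13.0, 10.0, 22.0, 9.0])

        # Work on the log relative risk scale; SE recovered from the CI width
        beta = np.log1p(est / 100.0)
        se = (np.log1p(hi / 100.0) - np.log1p(lo / 100.0)) / (2 * 1.96)
        w = 1.0 / se ** 2                     # inverse-variance weights
        pooled = np.sum(w * beta) / np.sum(w)
        pooled_se = np.sqrt(1.0 / np.sum(w))
        print(f"pooled excess risk: {100 * np.expm1(pooled):.1f}% "
              f"(95% CI {100 * np.expm1(pooled - 1.96 * pooled_se):.1f}, "
              f"{100 * np.expm1(pooled + 1.96 * pooled_se):.1f})")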

  5. [Methodological approaches of a social budget of disability].

    PubMed

    Fardeau, M

    1994-01-01

    By gathering data from different sources, it may be possible to estimate the French social budget of disability. In 1990, approximately 126.9 million FF were devoted by the nation to its disabled population. One quarter of this amount is "in kind", financing training centers, nursing homes for the disabled... The three remaining quarters are composed of "cash benefits" (disability allowances, work accident annuities,...). The approach makes it possible to assess disability in economic terms.

  6. Long-term biases in geomagnetic K and aa indices

    USGS Publications Warehouse

    Love, J.J.

    2011-01-01

    Analysis is made of the geomagnetic-activity aa index and its source K-index data from groups of ground-based observatories in Britain and Australia, 1868.0-2009.0, solar cycles 11-23. The K data show persistent biases, especially for high (low) K-activity levels at British (Australian) observatories. From examination of multiple subsets of the K data we infer that the biases are not predominantly the result of changes in observatory location, localized induced magnetotelluric currents, changes in magnetometer technology, or the modernization of K-value estimation methods. Instead, the biases appear to be artifacts of the latitude-dependent scaling used to assign K values to particular local levels of geomagnetic activity. The biases are not effectively removed by the weighting factors used to estimate aa. We show that long-term averages of the aa index, such as annual averages, are dominated by medium-level geomagnetic activity having K values of 3 and 4. © 2011 Author(s).

  7. Species and temperature predictions in a semi-industrial MILD furnace using a non-adiabatic conditional source-term estimation formulation

    NASA Astrophysics Data System (ADS)

    Labahn, Jeffrey William; Devaud, Cecile

    2017-05-01

    A Reynolds-Averaged Navier-Stokes (RANS) simulation of the semi-industrial International Flame Research Foundation (IFRF) furnace is performed using a non-adiabatic Conditional Source-term Estimation (CSE) formulation. This represents the first time that a CSE formulation, which accounts for the effect of radiation on the conditional reaction rates, has been applied to a large scale semi-industrial furnace. The objective of the current study is to assess the capabilities of CSE to accurately reproduce the velocity field, temperature, species concentration and nitrogen oxides (NOx) emission for the IFRF furnace. The flow field is solved using the standard k-ε turbulence model and detailed chemistry is included. NOx emissions are calculated using two different methods. Predicted velocity profiles are in good agreement with the experimental data. The predicted peak temperature occurs closer to the centreline, as compared to the experimental observations, suggesting that the mixing between the fuel jet and vitiated air jet may be overestimated. Good agreement between the species concentrations, including NOx, and the experimental data is observed near the burner exit. Farther downstream, the centreline oxygen concentration is found to be underpredicted. Predicted NOx concentrations are in good agreement with experimental data when calculated using the method of Peters and Weber. The current study indicates that RANS-CSE can accurately predict the main characteristics seen in a semi-industrial IFRF furnace.

  8. The Herschel Virgo Cluster Survey. XVII. SPIRE point-source catalogs and number counts

    NASA Astrophysics Data System (ADS)

    Pappalardo, Ciro; Bendo, George J.; Bianchi, Simone; Hunt, Leslie; Zibetti, Stefano; Corbelli, Edvige; di Serego Alighieri, Sperello; Grossi, Marco; Davies, Jonathan; Baes, Maarten; De Looze, Ilse; Fritz, Jacopo; Pohlen, Michael; Smith, Matthew W. L.; Verstappen, Joris; Boquien, Médéric; Boselli, Alessandro; Cortese, Luca; Hughes, Thomas; Viaene, Sebastien; Bizzocchi, Luca; Clemens, Marcel

    2015-01-01

    Aims: We present three independent catalogs of point sources extracted from SPIRE images at 250, 350, and 500 μm, acquired with the Herschel Space Observatory as a part of the Herschel Virgo Cluster Survey (HeViCS). The catalogs have been cross-correlated to consistently extract the photometry at SPIRE wavelengths for each object. Methods: Sources have been detected using an iterative loop. The source positions are determined by estimating the likelihood of each peak on the maps being a real source, according to the criterion defined in the sourceExtractorSussextractor task. The flux densities are estimated using sourceExtractorTimeline, a timeline-based point source fitter that also determines the width of the Gaussian that best reproduces the source considered. Afterwards, each source is subtracted from the maps by removing a Gaussian function at every position, with a full width at half maximum equal to that estimated in sourceExtractorTimeline. This procedure improves the robustness of our algorithm in terms of source identification. We calculate the completeness and the flux accuracy by injecting artificial sources in the timeline and estimate the reliability of the catalog using a permutation method. Results: The HeViCS catalogs contain about 52 000, 42 200, and 18 700 sources selected at 250, 350, and 500 μm above 3σ and are ~75%, 62%, and 50% complete at flux densities of 20 mJy at 250, 350, 500 μm, respectively. We then measured source number counts at 250, 350, and 500 μm and compared them with previous data and semi-analytical models. We also cross-correlated the catalogs with the Sloan Digital Sky Survey to investigate the redshift distribution of the nearby sources. From this cross-correlation, we selected ~2000 sources with reliable fluxes and a high signal-to-noise ratio, finding an average redshift z ~ 0.3 ± 0.22 and 0.25 (16-84 percentile). Conclusions: The number counts at 250, 350, and 500 μm show an increase in the slope below 200 mJy, indicating a strong evolution in the number density of galaxies at these fluxes. In general, models tend to overpredict the counts at brighter flux densities, underlining the importance of studying the Rayleigh-Jeans part of the spectral energy distribution to refine the theoretical recipes of the models. Our iterative method for source identification allowed the detection of a family of 500 μm sources that are not foreground objects belonging to Virgo and are not found in other catalogs. Herschel is an ESA space observatory with science instruments provided by European-led principal investigator consortia and with an important participation from NASA. The 250, 350, 500 μm, and the total catalogs are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/573/A129
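
    The iterative detect-and-subtract loop described under Methods can be caricatured in a few lines. The Python sketch below is a toy version (fixed circular Gaussian beam, raw peak amplitude instead of a timeline fit) and not the HeViCS sourceExtractorSussextractor/sourceExtractorTimeline pipeline.

        import numpy as np

        def gauss2d(shape, x0, y0, amp, fwhm):
            """Circular 2-D Gaussian with peak `amp` at (x0, y0)."""
            y, x = np.indices(shape)
            s = fwhm / 2.355
            return amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * s ** 2))

        def extract_sources(image, fwhm=3.0, nsigma=5.0, max_iter=500):
            """Iteratively take the brightest peak, record it, and subtract a
            Gaussian of the instrumental FWHM from the residual map."""
            resid = image.copy()
            catalog = []
            for _ in range(max_iter):
                iy, ix = np.unravel_index(np.argmax(resid), resid.shape)
                amp = resid[iy, ix]
                if amp < nsigma * np.std(resid):
                    break                      # nothing significant left
                catalog.append((ix, iy, amp))
                resid -= gauss2d(resid.shape, ix, iy, amp, fwhm)
            return catalog, resid

        # Recover three injected point sources from a noisy synthetic map:
        rng = np.random.default_rng(1)
        img = rng.normal(0.0, 1.0, (128, 128))
        for x0, y0, a in [(30, 40, 12.0), (90, 20, 8.0), (64, 100, 20.0)]:
            img += gauss2d(img.shape, x0, y0, a, 3.0)
        catalog, _ = extract_sources(img)
        print(sorted(catalog, key=lambda s: -s[2]))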

  9. Design and implementation of wireless dose logger network for radiological emergency decision support system.

    PubMed

    Gopalakrishnan, V; Baskaran, R; Venkatraman, B

    2016-08-01

    A decision support system (DSS) is implemented in the Radiological Safety Division, Indira Gandhi Centre for Atomic Research, to provide guidance for emergency decision making in case of an inadvertent nuclear accident. Real-time gamma dose rate measurements around the stack are used to estimate the radioactive release rate (source term) by inverse calculation. A wireless gamma dose logging network was designed, implemented, and installed around the Madras Atomic Power Station reactor stack to continuously acquire the environmental gamma dose rate, and the details are presented in the paper. The network uses XBee-Pro wireless modules and a PSoC controller for wireless interfacing, and the data are logged at the base station. A LabVIEW-based program is developed to receive the data, display it on the Google Map, plot the data over the time scale, and register the data in a file to share with the DSS software. The DSS at the base station evaluates the real-time source term to assess radiation impact.
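
    For a single release under fixed meteorology, the inverse calculation reduces to a linear least squares problem. A Python sketch with assumed dilution factors and dose rates (illustrative numbers only, not values from the installed system):

        import numpy as np

        # Assumed dilution/dose-conversion factors c_i linking a unit release rate
        # to the net gamma dose rate at each logger (from a dispersion model for
        # the current wind), and the measured net dose rates d_i -- all made up:
        c = np.array([2.1e-6, 1.4e-6, 0.6e-6, 3.3e-6])   # (uSv/h) per (Bq/s)
        d = np.array([4.4e-2, 2.8e-2, 1.3e-2, 6.5e-2])   # uSv/h above background

        # Least squares release rate for the one-parameter linear model d = c * Q:
        Q = np.dot(c, d) / np.dot(c, c)
        print(f"estimated source term: {Q:.2e} Bq/s")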

  10. Performance Analysis of Physical Layer Security of Opportunistic Scheduling in Multiuser Multirelay Cooperative Networks

    PubMed Central

    Shim, Kyusung; Do, Nhu Tri; An, Beongku

    2017-01-01

    In this paper, we study the physical layer security (PLS) of opportunistic scheduling for uplink scenarios of multiuser multirelay cooperative networks. To this end, we propose a low-complexity source relay selection scheme with comparable secrecy performance, called the proposed source relay selection (PSRS) scheme. Specifically, the PSRS scheme first selects the least vulnerable source and then selects the relay that maximizes the system secrecy capacity for the given selected source. Additionally, the maximal ratio combining (MRC) technique and the selection combining (SC) technique are considered at the eavesdropper, respectively. Investigating the system performance in terms of secrecy outage probability (SOP), closed-form expressions of the SOP are derived. The developed analysis is corroborated through Monte Carlo simulation. Numerical results show that the PSRS scheme significantly improves the secrecy performance of the system compared to that of the random source relay selection scheme, but does not outperform the optimal joint source relay selection (OJSRS) scheme. However, the PSRS scheme drastically reduces the required amount of channel state information (CSI) estimation compared to that required by the OJSRS scheme, especially in dense cooperative networks. PMID:28212286

  11. The Fukushima releases: an inverse modelling approach to assess the source term by using gamma dose rate observations

    NASA Astrophysics Data System (ADS)

    Saunier, Olivier; Mathieu, Anne; Didier, Damien; Tombette, Marilyne; Quélo, Denis; Winiarek, Victor; Bocquet, Marc

    2013-04-01

    The Chernobyl nuclear accident and, more recently, the Fukushima accident highlighted that the largest source of error in consequence assessment is the source term estimation, including the time evolution of the release rate and its distribution between radioisotopes. Inverse modelling methods have proved to be efficient in assessing the source term in accidental situations (Gudiksen, 1989; Krysta and Bocquet, 2007; Stohl et al., 2011; Winiarek et al., 2012). These methods combine environmental measurements and atmospheric dispersion models. They have recently been applied to the Fukushima accident. Most existing approaches are designed to use air sampling measurements (Winiarek et al., 2012) and some of them also use deposition measurements (Stohl et al., 2012; Winiarek et al., 2013). During the Fukushima accident, such measurements were far less numerous and not as well distributed within Japan as the dose rate measurements. To efficiently document the evolution of the contamination, gamma dose rate measurements were numerous, well distributed within Japan, and offered a high temporal frequency. However, dose rate data are not as easy to use as air sampling measurements, and until now they had not been used in inverse modelling approaches. Indeed, dose rate data result from all the gamma emitters present in the ground and in the atmosphere in the vicinity of the receptor. They do not allow one to determine the isotopic composition or to distinguish the plume contribution from wet deposition. The presented approach proposes a way to use dose rate measurements in an inverse modelling approach without the need for a priori information on emissions. The method proved to be efficient and reliable when applied to the Fukushima accident. The emissions of the 8 main isotopes Xe-133, Cs-134, Cs-136, Cs-137, Ba-137m, I-131, I-132 and Te-132 have been assessed. The Daiichi power plant events (such as ventings, explosions…) known to have caused atmospheric releases are well identified in the retrieved source term, except for the unit 3 explosion, for which no measurement was available. Comparisons between simulations of atmospheric dispersion and deposition using the retrieved source term show a good agreement with environmental observations. Moreover, an important outcome of this study is that the method proved to be perfectly suited to crisis management and should contribute to improving our response in case of a nuclear accident.
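
    In the linear setting, retrieving a release-rate time profile from observations amounts to solving y ≈ Hq with q ≥ 0. A toy Python sketch using non-negative least squares; the operator H here is random, whereas in the study it would come from an atmospheric dispersion and dose model, together with carefully estimated prior errors.

        import numpy as np
        from scipy.optimize import nnls

        # Hypothetical observation operator H: column j holds the dose rates that a
        # unit release during interval j would produce at every station/time.
        rng = np.random.default_rng(5)
        H = np.abs(rng.normal(size=(120, 24)))
        q_true = np.zeros(24)
        q_true[[5, 6, 11]] = [3.0, 5.0, 2.0]             # release pulses
        y = H @ q_true + 0.05 * rng.normal(size=120)     # "observed" dose rates

        # Non-negativity keeps the retrieved release rates physical:
        q_hat, _ = nnls(H, y)
        print(np.round(q_hat, 2))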

  12. An Empirical Temperature Variance Source Model in Heated Jets

    NASA Technical Reports Server (NTRS)

    Khavaran, Abbas; Bridges, James

    2012-01-01

    An acoustic analogy approach is implemented that models the sources of jet noise in heated jets. The equivalent sources of turbulent mixing noise are recognized as the differences between the fluctuating and Favre-averaged Reynolds stresses and enthalpy fluxes. While in a conventional acoustic analogy only Reynolds stress components are scrutinized for their noise generation properties, it is now accepted that a comprehensive source model should include the additional entropy source term. Following Goldstein's generalized acoustic analogy, the set of Euler equations is divided into two sets of equations that govern a non-radiating base flow plus its residual components. When the base flow is considered as a locally parallel mean flow, the residual equations may be rearranged to form an inhomogeneous third-order wave equation. A general solution is subsequently written using a Green's function method, while all non-linear terms are treated as the equivalent sources of aerodynamic sound and are modeled accordingly. In a previous study, a specialized Reynolds-averaged Navier-Stokes (RANS) solver was implemented to compute the variance of thermal fluctuations that determines the enthalpy flux source strength. The main objective here is to present an empirical model capable of providing a reasonable estimate of the stagnation temperature variance in a jet. Such a model is parameterized as a function of the mean stagnation temperature gradient in the jet, and is evaluated using commonly available RANS solvers. The ensuing thermal source distribution is compared with measurements as well as computational results from a dedicated RANS solver that employs an enthalpy variance and dissipation rate model. Turbulent mixing noise predictions are presented for a wide range of jet temperature ratios from 1.0 to 3.20.

  13. Long-term continuous field survey to assess nutrient emission impact from irrigated paddy field into river catchment

    NASA Astrophysics Data System (ADS)

    Kogure, Kanami; Aichi, Masaatsu; Zessner, Matthias

    2017-04-01

    In order to achieve a good river environment, it is very important to understand and control the behavior of nutrients such as nitrogen and phosphorus. Since the impact of urban and industrial activities can be reduced by wastewater treatment, pollution from point sources is relatively well controlled; nutrient emission from agricultural activity is therefore a dominant pollution source for river systems. In many countries in Asia and Africa, rice is widely cultivated and paddy fields cover large areas; in Japan, 54% of the arable land is occupied by irrigated paddy fields. While paddy fields can deteriorate river water quality due to fertilization, it has also been suggested that paddy fields can purify water. We carried out a field survey in the middle reach of the Tone River Basin with a focus on a paddy field, IM. The objectives of the research are 1) understanding the water and nutrient balance in the paddy field, and 2) data collection for assessing nutrient emission. The field survey was conducted from June 2015 to October 2016, covering two summer flooding seasons. All inputs and outputs of water, N and P were measured to quantify the water and nutrient balance in the paddy field. By measuring the water quality and flow rate of inflow, outflow, infiltrating water, ground water and flooding water, we tried to quantitatively understand the water, N and P cycles in a paddy field, including seasonal trends and changes accompanying rain events and agricultural activities such as fertilization. Concerning the water balance, the infiltration rate was estimated by the following equation: Infiltration = Irrigation water + Precipitation - Evapotranspiration - Outflow. We estimated the mean daily water balance during the flooding season; infiltration is 11.9 mm/day in our estimation for summer 2015. The daily water reduction depth (WRD) is the sum of evapotranspiration and infiltration; WRD is 21.5 mm/day in IM, which agrees with the average value in previous research. Regarding the nutrient balance, we estimated an annual N and P balance. The N and P surpluses are calculated as the difference between input and output in the paddy field. For the nutrient balance in 2015, the surplus between input as fertilizer and output as rice product is negative. However, by taking into account the input via irrigation water as a nutrient source, the N and P inputs and outputs balance to within errors of 9% and 14%, respectively. The results of this long-term continuous survey suggest that irrigation water is one of the nutrient sources in rice cultivation.
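
    The daily balance above is simple arithmetic. The following Python lines reproduce the reported 11.9 mm/day infiltration and 21.5 mm/day WRD, using assumed (illustrative) input values chosen to be consistent with those results, not measurements from the survey.

        # Daily water balance for the flooded paddy (all terms in mm/day).
        irrigation, precipitation, evapotranspiration, outflow = 18.0, 4.0, 9.6, 0.5

        infiltration = irrigation + precipitation - evapotranspiration - outflow
        water_reduction_depth = evapotranspiration + infiltration

        print(infiltration)            # 11.9 mm/day, as reported for summer 2015
        print(water_reduction_depth)   # 21.5 mm/day (WRD = ET + infiltration)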

  14. Exercise and insulin resistance in youth: a meta-analysis.

    PubMed

    Fedewa, Michael V; Gist, Nicholas H; Evans, Ellen M; Dishman, Rod K

    2014-01-01

    The prevalence of obesity and diabetes is increasing among children, adolescents, and adults. Although estimates of the efficacy of exercise training on fasting insulin and insulin resistance have been provided for adults, similar estimates have not been provided for youth. This systematic review and meta-analysis provides a quantitative estimate of the effectiveness of exercise training on fasting insulin and insulin resistance in children and adolescents. Potential sources were limited to peer-reviewed articles published before June 25, 2013, and gathered from the PubMed, SPORTDiscus, Physical Education Index, and Web of Science online databases. Analysis was limited to randomized controlled trials by using combinations of the terms adolescent, child, pediatric, youth, exercise training, physical activity, diabetes, insulin, randomized trial, and randomized controlled trial. The authors assessed 546 sources, of which 4.4% (24 studies) were eligible for inclusion. Thirty-two effects were used to estimate the effect of exercise training on fasting insulin, with 15 effects measuring the effect on insulin resistance. Estimated effects were independently calculated by multiple authors, and conflicts were resolved before calculating the overall effect. Based on the cumulative results from these studies, a small to moderate effect was found for exercise training on fasting insulin and improving insulin resistance in youth (Hedges' d effect size = 0.48 [95% confidence interval: 0.22-0.74], P < .001 and 0.31 [95% confidence interval: 0.06-0.56], P < .05, respectively). These results support the use of exercise training in the prevention and treatment of type 2 diabetes.
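
    For reference, a minimal Python sketch of the Hedges' d computation underlying such pooled effects, applied to made-up group summaries (a negative d here simply means lower fasting insulin in the exercise arm):

        import numpy as np

        def hedges_d(m1, s1, n1, m2, s2, n2):
            """Bias-corrected standardized mean difference (Hedges' d) and the
            half-width of its 95% confidence interval."""
            sp = np.sqrt(((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2))
            d = (m1 - m2) / sp                       # Cohen's d
            j = 1.0 - 3.0 / (4.0 * (n1 + n2) - 9.0)  # small-sample correction
            se = np.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2.0 * (n1 + n2)))
            return j * d, 1.96 * se

        # Made-up fasting insulin summaries (exercise arm vs control, uU/mL)
        d, half_ci = hedges_d(9.8, 3.1, 30, 11.4, 3.4, 28)
        print(f"d = {d:.2f} +/- {half_ci:.2f}")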

  15. Long Term Association of Tropospheric Trace gases over Pakistan by exploiting satellite observations and development of Econometric Regression based Model

    NASA Astrophysics Data System (ADS)

    Zeb, Naila; Fahim Khokhar, Muhammad; Khan, Saud Ahmed; Noreen, Asma; Murtaza, Rabbia

    2017-04-01

    Air pollution is a key environmental issue for Pakistan, which ranks among the most polluted countries in the region. Ongoing rapid economic growth without adequate countermeasures is leading to worsening air quality over time. This study aims to monitor the long-term atmospheric composition and the association of trace gases over Pakistan. Tropospheric concentrations of CO, TOC, NO2 and HCHO derived from multiple satellite instruments are used, covering the years 2005 to 2014; the study provides the first such database of tropospheric trace gases over Pakistan. Spatio-temporal assessment identified hotspots and possible sources of trace gases over Pakistan. High concentrations of trace gases are mainly observed over the Punjab region, which may be attributed to its metropolitan importance: it is the major agricultural, industrialized and urbanized sector of the country, home to nearly 60% of Pakistan's population. The expected sources are agricultural fires, biomass/fossil fuel burning for heating, urbanization, industrialization and meteorological variations. Seasonal variability is examined to explore seasonal patterns over the decade, and well-defined seasonal cycles of trace gases are observed over the whole study period. The observed seasonal patterns also show noteworthy associations among trace gases, which are further explored by different statistical tests. The seasonal Mann-Kendall test is applied to test the significance of trends in the series, whereas correlation is used to measure the strength of association among trace gases. Strong correlation is observed, especially between CO and TOC. The partial Mann-Kendall test is used to isolate the impact of each covariate on the long-term trends of CO and TOC by partialling out each correlated trace gas (covariate). It is observed that TOC, NO2 and HCHO have a significant impact on the long-term trend of CO, whereas the long-term increase of TOC over the region depends critically on NO2 concentrations. Furthermore, to explore causal relations, regression analysis is employed to estimate models for CO and TOC, which numerically estimate the long-term association of trace gases over the region.
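
    A bare-bones Python sketch of the (seasonal) Mann-Kendall statistic on a synthetic monthly series; the significance test and the partial variant are omitted here.

        import numpy as np

        def mann_kendall_s(x):
            """Mann-Kendall S statistic: the sum of signs of all pairwise
            differences; S > 0 suggests an increasing trend."""
            x = np.asarray(x)
            return int(sum(np.sign(x[j] - x[i])
                           for i in range(len(x)) for j in range(i + 1, len(x))))

        def seasonal_mann_kendall_s(x, period=12):
            """Seasonal variant: S is computed within each calendar month and then
            summed, so the seasonal cycle itself cannot masquerade as a trend."""
            x = np.asarray(x)
            return sum(mann_kendall_s(x[m::period]) for m in range(period))

        # Ten years of synthetic monthly CO columns: seasonal cycle + weak drift
        rng = np.random.default_rng(6)
        t = np.arange(120)
        co = 1.0 + 0.001 * t + 0.1 * np.sin(2 * np.pi * t / 12) + 0.02 * rng.normal(size=120)
        print(seasonal_mann_kendall_s(co))   # clearly positive for this upward drift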

  16. Observer aging and long-term avian survey data quality

    PubMed Central

    Farmer, Robert G; Leonard, Marty L; Mills Flemming, Joanna E; Anderson, Sean C

    2014-01-01

    Long-term wildlife monitoring involves collecting time series data, often using the same observers over multiple years. Aging-related changes to these observers may be an important, under-recognized source of error that can bias management decisions. In this study, we used data from two large, independent bird surveys, the Atlas of the Breeding Birds of Ontario (“OBBA”) and the North American Breeding Bird Survey (“BBS”), to test for age-related observer effects in long-term time series of avian presence and abundance. We then considered the effect of such aging phenomena on current population trend estimates. We found significantly fewer detections among older versus younger observers for 13 of 43 OBBA species, and declines in detection as an observer ages for 4 of 6 vocalization groups comprising 59 of 64 BBS species. Consistent with hearing loss influencing this pattern, we also found evidence for increasingly severe detection declines with increasing call frequency among nine high-pitched bird species (OBBA); however, there were also detection declines at other frequencies, suggesting important additional effects of aging, independent of hearing loss. We lastly found subtle, significant relationships between some species' published population trend estimates and (1) their corresponding vocalization frequency (n ≥ 22 species) and (2) their estimated declines in detectability among older observers (n = 9 high-frequency, monotone species), suggesting that observer aging can negatively bias long-term monitoring data for some species in part through hearing loss effects. We recommend that survey designers and modelers account for observer age where possible. PMID:25360286

  17. Signature of inverse Compton emission from blazars

    NASA Astrophysics Data System (ADS)

    Gaur, Haritma; Mohan, Prashanth; Wierzcholska, Alicja; Gu, Minfeng

    2018-01-01

    Blazars are classified into high-, intermediate- and low-energy-peaked sources based on the location of their synchrotron peak. This lies in the infra-red/optical to ultra-violet bands for low- and intermediate-peaked blazars. The transition from synchrotron to inverse Compton emission falls in the X-ray bands for such sources. We present the spectral and timing analysis of 14 low- and intermediate-energy-peaked blazars observed with XMM-Newton over 31 epochs. Parametric fits to the X-ray spectra help constrain the possible location of the transition from the high-energy end of the synchrotron to the low-energy end of the inverse Compton emission. In seven sources in our sample, we infer such a transition and constrain the break energy in the range 0.6-10 keV. The Lomb-Scargle periodogram is used to estimate the power spectral density (PSD) shape. It is well described by a power law in a majority of light curves, the index being flatter than generally expected for active galactic nuclei, ranging here between 0.01 and 1.12, possibly due to short observation durations resulting in an absence of long-term trends. A toy model involving synchrotron self-Compton and external Compton (EC; disc, broad-line region, torus) mechanisms is used to estimate magnetic field strengths of ≤0.03-0.88 G in sources displaying the energy break and to infer a prominent EC contribution. The variability time-scale being shorter than the synchrotron cooling time implies steeper PSD slopes, which are inferred in these sources.

  18. Solar-powered irrigation systems. Technical progress report, July 1977--January 1978

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    1978-02-28

    Dispersed solar thermal power systems applied to farm irrigation energy needs are analyzed. The 17 western states, containing 84% of nationwide irrigated croplands and consuming 93% of nationwide irrigation energy, have been selected to determine where solar irrigation systems can compete most favorably with conventional energy sources. Financial analysis of farms, according to size and ownership, was accomplished to permit realistic comparative analyses of system lifetime costs. The market potential of optimized systems has been estimated for the 17-state region for near-term (1985) and intermediate-term (2000) applications. Technical, economic, and institutional factors bearing on penetration and capture of this market are being identified.

  19. An Empirical State Error Covariance Matrix for Batch State Estimation

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it follows directly how to determine the standard empirical state error covariance matrix. This matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty. Also, in its most straightforward form, the technique only requires supplemental calculations to be added to existing batch algorithms. The generation of this direct, empirical form of the state error covariance matrix is independent of the dimensionality of the observations. Mixed degrees of freedom for an observation set are allowed. As is the case with any simple, empirical sample variance problem, the presented approach offers an opportunity (at least in the case of weighted least squares) to investigate confidence interval estimates for the error covariance matrix elements. The diagonal or variance terms of the error covariance matrix have a particularly simple form to associate with either a multiple-degree-of-freedom chi-square distribution (more approximate) or with a gamma distribution (less approximate). The off-diagonal or covariance terms of the matrix are less clear in their statistical behavior. However, the off-diagonal covariance matrix elements still lend themselves to standard confidence interval error analysis. The distributional forms associated with the off-diagonal terms are more varied and, perhaps, more approximate than those associated with the diagonal terms. Using a simple weighted least squares sample problem, results obtained through use of the proposed technique are presented. The example consists of a simple, two-observer triangulation problem with range-only measurements. Variations of this problem reflect an ideal case (perfect knowledge of the range errors) and a mismodeled case (incorrect knowledge of the range errors).
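
    One common way to realize the idea of letting actual residuals drive the covariance — a sketch of the general concept, not necessarily the paper's exact construction — is to scale the traditional WLS covariance by the average weighted residual variance. A minimal Python illustration:

        import numpy as np

        def wls_with_empirical_covariance(H, y, W):
            """Weighted least squares estimate plus two covariances: the
            traditional (H^T W H)^-1, which only maps the *assumed* measurement
            errors into state space, and an empirical version scaled by the
            average weighted residual variance, so the *actual* residuals set
            the size of the uncertainty."""
            P_traditional = np.linalg.inv(H.T @ W @ H)
            x = P_traditional @ H.T @ W @ y
            r = y - H @ x                               # measurement residuals
            m, n = H.shape
            s2 = (r @ W @ r) / (m - n)                  # average weighted residual variance
            return x, P_traditional, s2 * P_traditional

        # Toy case where the assumed noise (sigma = 0.1) understates the truth (0.3):
        rng = np.random.default_rng(2)
        H = rng.normal(size=(40, 3))
        y = H @ np.array([1.0, -2.0, 0.5]) + 0.3 * rng.normal(size=40)
        W = np.eye(40) / 0.1 ** 2
        x, P_trad, P_emp = wls_with_empirical_covariance(H, y, W)
        print(np.sqrt(np.diag(P_trad)))   # too-optimistic formal sigmas
        print(np.sqrt(np.diag(P_emp)))    # roughly 3x larger, as the residuals demand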

  1. Seismic Yield Estimates of UTTR Surface Explosions

    NASA Astrophysics Data System (ADS)

    Hayward, C.; Park, J.; Stump, B. W.

    2016-12-01

    Since 2007 the Utah Test and Training Range (UTTR) has used explosive demolition as a method to destroy excess solid rocket motors ranging in size from 19 tons to less than 2 tons. From 2007 to 2014, 20 high-quality seismic stations within 180 km recorded most of the more than 200 demolitions. This provides an interesting dataset for examining seismic source scaling for surface explosions. Based upon observer records, shots were of 4 sizes, corresponding to the sizes of the rocket motors. Instrument corrections for the stations were quality controlled by examining the P-wave amplitudes of all magnitude 6.5-8 earthquakes from 30 to 90 degrees away. For each station recording, the instrument-corrected RMS seismic amplitude in the first 10 seconds after the P onset was calculated. Waveforms at any given station are nearly identical for all the observed explosions. The observed RMS amplitudes were fit to a model including a combined distance and station correction term, a shot-size term, and an error term. The observed seismic yield relationship is RMS = k * Weight^(2/3). Estimated yields for the largest shots vary by about 50% from the stated weights, with a nearly normal distribution.
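
    A Python sketch of the scaling fit with hypothetical corrected amplitudes; the exponent recovered from these made-up numbers is close to 2/3 by construction.

        import numpy as np

        # Hypothetical distance/station-corrected RMS amplitudes for four shot sizes
        weight_tons = np.array([2.0, 6.0, 10.0, 19.0])
        rms_amp = np.array([0.8, 1.7, 2.4, 3.6])     # arbitrary corrected units

        # Fit RMS = k * W^b in log space; cube-root scaling predicts b = 2/3
        b, log_k = np.polyfit(np.log(weight_tons), np.log(rms_amp), 1)
        print(f"b = {b:.2f}, k = {np.exp(log_k):.2f}")

        # Invert for yield with b fixed at 2/3:
        k = np.exp(np.mean(np.log(rms_amp) - (2.0 / 3.0) * np.log(weight_tons)))
        print("estimated weights (tons):", np.round((rms_amp / k) ** 1.5, 1))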

  2. Assessment of short-term PM2.5-related mortality due to different emission sources in the Yangtze River Delta, China

    NASA Astrophysics Data System (ADS)

    Wang, Jiandong; Wang, Shuxiao; Voorhees, A. Scott; Zhao, Bin; Jang, Carey; Jiang, Jingkun; Fu, Joshua S.; Ding, Dian; Zhu, Yun; Hao, Jiming

    2015-12-01

    Air pollution is a major environmental risk to health. In this study, short-term premature mortality due to particulate matter with an aerodynamic diameter of 2.5 μm or less (PM2.5) in the Yangtze River Delta (YRD) is estimated by using PC-based human health benefits software. The economic loss is assessed by using the willingness-to-pay (WTP) method. The contributions of each region, sector and gaseous precursor are also determined by employing the brute-force method. The results show that, in the YRD in 2010, the short-term premature deaths caused by PM2.5 are estimated to be 13,162 (95% confidence interval (CI): 10,761-15,554), while the economic loss is 22.1 (95% CI: 18.1-26.1) billion Chinese Yuan. The industrial and residential sectors contributed the most, accounting for more than 50% of the total economic loss. Emissions of primary PM2.5 and NH3 are major contributors to the health-related loss in winter, while the contribution of gaseous precursors such as SO2 and NOx is higher than that of primary PM2.5 in summer.
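
    Short-term health impact calculations of this kind typically rest on a log-linear concentration-response function. A Python sketch with illustrative coefficients only (not the study's values):

        import numpy as np

        def excess_deaths(beta, delta_c, baseline_deaths):
            """Short-term excess mortality from a log-linear concentration-response
            function: dM = M0 * (exp(beta * dC) - 1)."""
            return baseline_deaths * (np.exp(beta * delta_c) - 1.0)

        # Illustrative only: a C-R coefficient of ~0.4% per 10 ug/m3 (beta = 4e-4),
        # a 35 ug/m3 PM2.5 increment, and an assumed baseline death count.
        print(round(excess_deaths(4e-4, 35.0, 150000.0), 1))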

  3. Estimating predictive hydrological uncertainty by dressing deterministic and ensemble forecasts; a comparison, with application to Meuse and Rhine

    NASA Astrophysics Data System (ADS)

    Verkade, J. S.; Brown, J. D.; Davids, F.; Reggiani, P.; Weerts, A. H.

    2017-12-01

    Two statistical post-processing approaches for estimation of predictive hydrological uncertainty are compared: (i) 'dressing' of a deterministic forecast by adding a single, combined estimate of both hydrological and meteorological uncertainty and (ii) 'dressing' of an ensemble streamflow forecast by adding an estimate of hydrological uncertainty to each individual streamflow ensemble member. Both approaches aim to produce an estimate of the 'total uncertainty' that captures both the meteorological and hydrological uncertainties. They differ in the degree to which they make use of statistical post-processing techniques. In the 'lumped' approach, both sources of uncertainty are lumped by post-processing deterministic forecasts using their verifying observations. In the 'source-specific' approach, the meteorological uncertainties are estimated by an ensemble of weather forecasts. These ensemble members are routed through a hydrological model and a realization of the probability distribution of hydrological uncertainties (only) is then added to each ensemble member to arrive at an estimate of the total uncertainty. The techniques are applied to one location in the Meuse basin and three locations in the Rhine basin. Resulting forecasts are assessed for their reliability and sharpness, as well as compared in terms of multiple verification scores including the relative mean error, Brier Skill Score, Mean Continuous Ranked Probability Skill Score, Relative Operating Characteristic Score and Relative Economic Value. The dressed deterministic forecasts are generally more reliable than the dressed ensemble forecasts, but the latter are sharper. On balance, however, they show similar quality across a range of verification metrics, with the dressed ensembles coming out slightly better. Some additional analyses are suggested. Notably, these include statistical post-processing of the meteorological forecasts in order to increase their reliability, thus increasing the reliability of the streamflow forecasts produced with ensemble meteorological forcings.
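
    A minimal Python sketch of the "dressing" step in the source-specific approach: each streamflow ensemble member is perturbed with draws from an error distribution standing in for the hydrological uncertainty (here resampled synthetic past errors; a real implementation would condition the error model on the forecast).

        import numpy as np

        rng = np.random.default_rng(4)

        def dress_ensemble(members, hydro_errors, n_draws=20):
            """'Dress' each streamflow member with draws from the hydrological
            error distribution, so the final ensemble carries meteorological and
            hydrological uncertainty together."""
            return np.concatenate([m + rng.choice(hydro_errors, size=n_draws, replace=True)
                                   for m in members])

        members = np.array([310.0, 335.0, 298.0, 352.0])   # met-driven forecasts, m3/s
        past_errors = rng.normal(0.0, 25.0, size=500)      # stand-in hydrological errors
        total = dress_ensemble(members, past_errors)
        print(total.mean(), np.percentile(total, [10, 90]))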

  4. Bottom friction optimization for a better barotropic tide modelling

    NASA Astrophysics Data System (ADS)

    Boutet, Martial; Lathuilière, Cyril; Son Hoang, Hong; Baraille, Rémy

    2015-04-01

    At a regional scale, barotropic tides are the dominant source of variability in currents and water heights. A precise representation of these processes is essential because of their great impact on human activities (submersion risks, marine renewable energies, ...). The identified sources of error for tide modelling at a regional scale are the following: bathymetry, boundary forcing, and dissipation due to bottom friction. Nevertheless, bathymetric databases are nowadays known with good accuracy, especially over shelves, and global tide models perform better than ever. The most promising avenue for improvement is thus the representation of bottom friction. The method used to estimate bottom friction is the simultaneous perturbation stochastic approximation (SPSA), which approximates the gradient from a fixed number of cost function measurements, regardless of the dimension of the vector to be estimated. Each cost function measurement is obtained by randomly perturbing every component of the parameter vector. An important feature of SPSA is its relative ease of implementation; in particular, the method does not require the development of tangent linear and adjoint versions of the circulation model. Experiments are carried out to estimate bottom friction with the HYbrid Coordinate Ocean Model (HYCOM) in barotropic mode (one isopycnal layer). The study area is the Northeastern Atlantic margin, which is characterized by strong currents and intense dissipation. Bottom friction is parameterized with a quadratic term, and the friction coefficient is computed from the water height and the bottom roughness; the latter parameter is the one to be estimated. The assimilated data are the available tide gauge observations. First, the bottom roughness is estimated taking into account bottom sediment nature and bathymetric ranges. Then, it is estimated with geographical degrees of freedom. Finally, the impact of estimating a mixed quadratic/linear friction is evaluated.
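
    SPSA itself is easy to state in code: two cost evaluations per iteration give a gradient estimate in any dimension. A generic Python sketch on a toy quadratic misfit; the gains and the cost function are placeholders, not the HYCOM setup.

        import numpy as np

        rng = np.random.default_rng(3)

        def spsa(cost, theta0, n_iter=200, a=0.3, c=0.01, alpha=0.602, gamma=0.101):
            """Simultaneous perturbation stochastic approximation: each iteration
            estimates the gradient from only two cost evaluations, whatever the
            dimension of theta, so no tangent-linear or adjoint model is needed."""
            theta = np.asarray(theta0, dtype=float).copy()
            for k in range(1, n_iter + 1):
                ak, ck = a / k ** alpha, c / k ** gamma
                delta = rng.choice([-1.0, 1.0], size=theta.size)  # Bernoulli +/-1
                ghat = (cost(theta + ck * delta) - cost(theta - ck * delta)) / (2 * ck) / delta
                theta -= ak * ghat
            return theta

        # Toy stand-in for "tide gauge misfit as a function of the roughness values":
        z0_true = np.array([3e-3, 1e-2, 5e-3])
        cost = lambda z0: np.sum((z0 - z0_true) ** 2)
        print(spsa(cost, np.full(3, 2e-2)))   # approaches z0_true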

  5. Economic impact of Tegaderm chlorhexidine gluconate (CHG) dressing in critically ill patients.

    PubMed

    Thokala, Praveen; Arrowsmith, Martin; Poku, Edith; Martyn-St James, Marissa; Anderson, Jeff; Foster, Steve; Elliott, Tom; Whitehouse, Tony

    2016-09-01

    To estimate the economic impact of a Tegaderm™ chlorhexidine gluconate (CHG) gel dressing compared with a standard intravenous (i.v.) dressing (defined as a non-antimicrobial transparent film dressing), used for insertion site care of short-term central venous and arterial catheters (intravascular catheters) in adult critical care patients, using a cost-consequence model populated with data from published sources. A decision-analytical cost-consequence model was developed which assigned each patient with an indwelling intravascular catheter and a standard dressing a baseline risk of associated dermatitis, local infection at the catheter insertion site, and catheter-related bloodstream infection (CRBSI), estimated from published secondary sources. The risks of these events for patients with a Tegaderm CHG dressing were estimated by applying the effectiveness parameters from the clinical review to the baseline risks. Costs were accrued through the costs of the intervention (i.e., Tegaderm CHG or standard i.v. dressing) and hospital treatment costs, which depended on whether the patients had local dermatitis, local infection or CRBSI. Total costs were estimated as mean values of 10,000 probabilistic sensitivity analysis (PSA) runs. Tegaderm CHG resulted in an average cost saving of £77 per patient in an intensive care unit. Tegaderm CHG also has a 98.5% probability of being cost-saving compared to standard i.v. dressings. The analyses suggest that Tegaderm CHG is a cost-saving strategy to reduce CRBSI, and the results were robust to sensitivity analyses.
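
    The cost-consequence logic reduces to expected-value arithmetic. A Python sketch with illustrative risks and costs (not the model's published inputs); the paper additionally propagates parameter uncertainty through 10,000 PSA draws.

        # Illustrative inputs only: per-patient event risks with a standard
        # dressing, relative risks with the CHG dressing, and unit costs in GBP.
        base_risk = {"dermatitis": 0.02, "local_infection": 0.04, "crbsi": 0.015}
        rel_risk = {"dermatitis": 1.2, "local_infection": 0.5, "crbsi": 0.4}
        cost_event = {"dermatitis": 150.0, "local_infection": 400.0, "crbsi": 9000.0}
        cost_dressing = {"standard": 5.0, "chg": 12.0}

        def expected_cost(dressing):
            """Dressing cost plus risk-weighted event treatment costs."""
            rr = rel_risk if dressing == "chg" else {k: 1.0 for k in base_risk}
            return cost_dressing[dressing] + sum(
                base_risk[k] * rr[k] * cost_event[k] for k in base_risk)

        saving = expected_cost("standard") - expected_cost("chg")
        print(f"expected saving per patient: GBP {saving:.2f}")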

  6. Direct-location versus verbal report methods for measuring auditory distance perception in the far field.

    PubMed

    Etchemendy, Pablo E; Spiousas, Ignacio; Calcagno, Esteban R; Abregú, Ezequiel; Eguia, Manuel C; Vergara, Ramiro O

    2018-06-01

    In this study we evaluated whether a method of direct location is an appropriate response method for measuring auditory distance perception of far-field sound sources. We designed an experimental set-up that allows participants to indicate the distance at which they perceive the sound source by moving a visual marker. We termed this method Cross-Modal Direct Location (CMDL), since the response procedure involves the visual modality while the stimulus is presented through the auditory modality. Three experiments were conducted with sound sources located from 1 to 6 m. The first one compared the perceived distances obtained using either the CMDL device or verbal report (VR), which is the response method most frequently used for reporting auditory distance in the far field, and found differences in response compression and bias. In Experiment 2, participants reported visual distance estimates to the visual marker that were found to be highly accurate. Then, we asked the same group of participants to report VR estimates of auditory distance and found that the spatial visual information obtained from the previous task did not influence their reports. Finally, Experiment 3 compared the same responses as Experiment 1 but with the methods interleaved, showing a weak, but complex, mutual influence. However, the estimates obtained with each method remained statistically different. Our results show that the auditory distance psychophysical functions obtained with the CMDL method are less susceptible to the previously reported underestimation for distances over 2 m.

  7. Moment tensor analysis of very shallow sources

    DOE PAGES

    Chiang, Andrea; Dreger, Douglas S.; Ford, Sean R.; ...

    2016-10-11

    An issue for moment tensor (MT) inversion of shallow seismic sources is that some components of the Green's functions have vanishing amplitudes at the free surface, which can result in bias in the MT solution. The effects of the free surface on the stability of the MT method become important as we continue to investigate and improve the capabilities of regional full MT inversion for source-type identification and discrimination. It is important to understand free-surface effects on discriminating shallow explosive sources for nuclear monitoring purposes. It may also be important in natural systems that have very shallow seismicity, such as volcanic and geothermal systems. We examine the effects of the free surface on the MT via synthetic testing and apply the MT-based discrimination method to three quarry blasts from the HUMMING ALBATROSS experiment. These shallow chemical explosions at ~10 m depth, recorded up to several kilometers distance, represent a rather severe source-station geometry in terms of free-surface effects. We show that the method is capable of recovering a predominantly explosive source mechanism, and that the combined waveform and first-motion method enables the unique discrimination of these events. Furthermore, recovering the design yield using seismic moment estimates from MT inversion remains challenging, but we can begin to put error bounds on our moment estimates using the network sensitivity solution technique.

  8. Depth to the bottom of magnetic sources (DBMS) from aeromagnetic data of Central India using modified centroid method for fractal distribution of sources

    NASA Astrophysics Data System (ADS)

    Bansal, A. R.; Anand, S.; Rajaram, M.; Rao, V.; Dimri, V. P.

    2012-12-01

    The depth to the bottom of the magnetic sources (DBMS) may be used as an estimate of the Curie-point depth. The DBMS can also be interpreted in terms of the thermal structure of the crust. The thermal structure of the crust is a sensitive parameter that depends on many properties of the crust, e.g., modes of deformation, depths of brittle and ductile deformation zones, regional heat-flow variations, seismicity, subsidence/uplift patterns and the maturity of organic matter in sedimentary basins. The conventional centroid method of DBMS estimation assumes a random, uniform, uncorrelated distribution of sources; to overcome this limitation, a modified centroid method based on a fractal distribution of sources has been proposed. We applied this modified centroid method to aeromagnetic data of the central Indian region, selecting 29 half-overlapping blocks of dimension 200 km x 200 km covering different parts of central India. Shallower values of the DBMS are found for the western and southern portions of the Indian shield. DBMS values range from as shallow as the middle crust in the southwestern Deccan trap to probably deeper than the Moho in the Chhatisgarh basin. In a few places the DBMS is close to the Moho depth found from seismic studies, and in other places it is shallower than the Moho. The DBMS values indicate the complex nature of the Indian crust.
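    In the centroid approach, the top depth Zt and centroid depth Z0 of the magnetic layer are obtained from slopes of the radially averaged amplitude spectrum A(k) — ln(A/k) versus k at low wavenumbers for Z0, and ln A versus k at higher wavenumbers for Zt — and the basal depth follows as Zb = 2·Z0 − Zt; the fractal modification rescales the spectrum by a power of k before fitting. A schematic sketch under those assumptions (the wavenumber windows, the correction form, and the synthetic layer spectrum are illustrative):

```python
import numpy as np

def dbms_centroid(k, A, beta=0.0, lo=slice(2, 10), hi=slice(60, 160)):
    """Depth to bottom of magnetic sources, Zb = 2*Z0 - Zt.

    k: radial wavenumber (rad/km); A: radially averaged amplitude spectrum.
    beta: fractal exponent; beta = 0 recovers the conventional random-source
    method (the k**(beta/2) correction used here is schematic).
    """
    A = A * k**(beta / 2.0)
    z0 = -np.polyfit(k[lo], np.log(A[lo] / k[lo]), 1)[0]   # centroid depth
    zt = -np.polyfit(k[hi], np.log(A[hi]), 1)[0]           # top depth
    return 2.0 * z0 - zt

# Synthetic spectrum for a magnetic layer between 2 km (top) and 30 km
# (bottom): A(k) ~ exp(-k*Zt) - exp(-k*Zb). The low-wavenumber approximation
# biases the estimate slightly below the true 30 km.
k = np.linspace(0.005, 1.2, 240)
A = np.exp(-2.0 * k) - np.exp(-30.0 * k)
print(f"estimated DBMS ~ {dbms_centroid(k, A):.1f} km (true bottom: 30 km)")
```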

  9. Determining forage availability and use patterns for bison in the Hayden Valley of Yellowstone National Park

    USGS Publications Warehouse

    Olenicki, Thomas J.; Irby, Lynn R.

    2005-01-01

    4. Estimate annual production and standing crop available during non-growing seasons for herbaceous and shrub layers in major habitat types in the Hayden Valley. Our efforts to describe forage use by bison focused on assessing finer-scale habitat use in a core summer range for bison in YNP. We also collected information on bison food habits and forage quality to begin to explain the "whys" of bison distribution. Short-term impacts of bison forage utilization were addressed by comparing standing biomass in plots protected from grazing with plots exposed to grazing. Historical data were not available to directly address long-term effects of ungulate foraging in the Hayden Valley, but we were able to indirectly assess some aspects of this question by determining the frequency of repeat grazing over a 3-year period and the rate at which trees along the margins of the Hayden Valley were being killed by bison rubbing. The third objective, determining the relative efficacy of different vegetation monitoring approaches, was accomplished by comparing estimates of standing biomass and biomass utilization obtained via conventional exclosure techniques with estimates based on remote sensing techniques (ground-based and satellite-borne multi-spectral radiometry [MSR]). We addressed efficacy in terms of precision and accuracy of estimates, reliability, and logistical costs at different coverage scales. The fourth objective, estimation of forage available for ungulates in the Hayden Valley, was achieved using conventional exclosure methodology and remote sensing. We were able to estimate herbaceous biomass production during 3 different years. Exclosures allowed us to estimate changes in standing crop of herbaceous vegetation at the plant community (conventional cover types, moisture/plant growth form groups, and communities defined by dominant graminoids) and catena (a repeating sequence of communities tied to landscape physiognomy) scales. We developed empirical approaches that allowed us to estimate standing biomass of herbaceous plants from reflectance data obtained from ground-based and satellite-borne MSR units. We demonstrated the potential to estimate biomass of shrubs using the same approaches. We did not have the time and resources to complete vegetation maps that would optimize estimates from remote sources, but we have outlined procedures that can be followed in the future to obtain biomass estimates at the landscape scale.

  10. A posteriori error estimates in voice source recovery

    NASA Astrophysics Data System (ADS)

    Leonov, A. S.; Sorokin, V. N.

    2017-12-01

    The inverse problem of voice source pulse recovery from a segment of a speech signal is under consideration. A special mathematical model relating these quantities is used for the solution. A variational method for solving the inverse problem of voice source recovery is proposed for a new parametric class of sources, namely piecewise-linear sources (PWL-sources). A technique for a posteriori numerical error estimation of the obtained solutions is also presented. A computer study of the adequacy of the adopted speech production model with PWL-sources is performed by solving the inverse problem for various types of voice signals, together with a corresponding study of the a posteriori error estimates. Numerical experiments on speech signals show satisfactory properties of the proposed a posteriori error estimates, which represent upper bounds on the possible errors in solving the inverse problem. The most probable error in determining the source-pulse shapes is estimated at about 7-8% for the investigated speech material. A posteriori error estimates can also be used as a quality criterion for the obtained voice source pulses in application to speaker recognition.

  11. Estimation of source location and ground impedance using a hybrid multiple signal classification and Levenberg-Marquardt approach

    NASA Astrophysics Data System (ADS)

    Tam, Kai-Chung; Lau, Siu-Kit; Tang, Shiu-Keung

    2016-07-01

    A microphone array signal processing method for locating a stationary point source over a locally reactive ground and for estimating ground impedance is examined in detail in the present study. A non-linear least squares approach using the Levenberg-Marquardt method is proposed to overcome the problem of unknown ground impedance. The multiple signal classification method (MUSIC) is used to give the initial estimate of the source location, while the technique of forward-backward spatial smoothing is adopted as a pre-processor for the source localization to minimize the effects of source coherence. The accuracy and robustness of the proposed signal processing method are examined. Results show that source localization in the horizontal direction by MUSIC is satisfactory. However, source coherence drastically reduces the accuracy of the source height estimate. Further application of the Levenberg-Marquardt method, with the MUSIC results as initial inputs, significantly improves the accuracy of the source height estimation. The proposed method provides effective and robust estimation of the ground surface impedance.
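    MUSIC itself is compact to express: form the sample covariance of the array snapshots, split its eigenvectors into signal and noise subspaces, and scan a steering-vector grid for directions nearly orthogonal to the noise subspace. The sketch below is a minimal, hypothetical uniform-line-array bearing example; it omits the paper's ground-reflection model and forward-backward smoothing step:

```python
import numpy as np

rng = np.random.default_rng(1)
M, d, wavelen = 8, 0.5, 1.0        # 8-mic line array, half-wavelength spacing
theta_true = 20.0                  # source bearing (degrees), hypothetical

def steer(theta_deg):
    k = 2 * np.pi / wavelen
    return np.exp(1j * k * d * np.arange(M) * np.sin(np.radians(theta_deg)))

# Simulate snapshots: one narrowband source plus sensor noise.
T = 400
s = rng.normal(size=T) + 1j * rng.normal(size=T)
X = np.outer(steer(theta_true), s) + 0.1 * (rng.normal(size=(M, T))
                                            + 1j * rng.normal(size=(M, T)))
R = X @ X.conj().T / T             # sample covariance

# Noise subspace = eigenvectors beyond the n_src largest eigenvalues.
n_src = 1
_, V = np.linalg.eigh(R)           # eigenvalues ascending
En = V[:, :M - n_src]
grid = np.linspace(-90, 90, 721)
P = [1.0 / np.linalg.norm(En.conj().T @ steer(t))**2 for t in grid]
print(f"MUSIC peak at {grid[int(np.argmax(P))]:.2f} deg")
```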

  12. Aerosol characterization over the southeastern United States using high resolution aerosol mass spectrometry: spatial and seasonal variation of aerosol composition, sources, and organic nitrates

    NASA Astrophysics Data System (ADS)

    Xu, L.; Suresh, S.; Guo, H.; Weber, R. J.; Ng, N. L.

    2015-04-01

    We deployed a High-Resolution Time-of-Flight Aerosol Mass Spectrometer (HR-ToF-AMS) and an Aerosol Chemical Speciation Monitor (ACSM) to characterize the chemical composition of submicron non-refractory particles (NR-PM1) in the southeastern US. Measurements were performed at both rural and urban sites in the greater Atlanta area, GA and Centreville, AL for approximately one year, as part of the Southeastern Center for Air Pollution and Epidemiology study (SCAPE) and the Southern Oxidant and Aerosol Study (SOAS). Organic aerosol (OA) accounts for more than half of the NR-PM1 mass concentration regardless of sampling site and season. Positive matrix factorization (PMF) analysis of the HR-ToF-AMS measurements identified various OA sources, depending on location and season. Hydrocarbon-like OA (HOA) and cooking OA (COA) have important but not dominant contributions to total OA at the urban sites. Biomass burning OA (BBOA) concentration shows a distinct seasonal variation with a larger enhancement in winter than summer. We find a good correlation between BBOA and brown carbon, indicating biomass burning is an important source of brown carbon, although an additional, unidentified brown carbon source is likely present at the rural Yorkville site. Isoprene-derived OA (Isoprene-OA) is only deconvolved in warmer months and contributes 18-36% of total OA. The presence of the Isoprene-OA factor at urban sites is more likely from local production in the presence of NOx than transport from rural sites. More-oxidized and less-oxidized oxygenated organic aerosol (MO-OOA and LO-OOA, respectively) are dominant fractions (47-79%) of OA at all sites. MO-OOA correlates well with ozone in summer, but not in winter, indicating MO-OOA sources may vary with season. LO-OOA, which reaches a daily maximum at night, correlates better with the estimated nitrate functionality from organic nitrates than with total nitrates. Based on the HR-ToF-AMS measurements, we estimate that the nitrate functionality from organic nitrates contributes 63-100% of total measured nitrates in summer. Further, the contribution of organic nitrates to total OA is estimated to be 5-12% in summer, suggesting that organic nitrates are important components of the ambient aerosol in the southeastern US. The spatial distribution of OA is investigated by comparing simultaneous HR-ToF-AMS measurements with ACSM measurements at two different sampling sites. OA is found to be spatially homogeneous in summer, possibly due to stagnant air masses and a dominant amount of regional SOA in the southeastern US. The homogeneity is less in winter, which is likely due to spatial variation of primary emissions. We observed that the seasonality of OA concentration shows a clear urban/rural contrast. While OA exhibits weak seasonal variation at the urban sites, its concentration is higher in summer than winter at the rural sites. This observation from our year-long measurements is consistent with 14 years of organic carbon (OC) data from the SouthEastern Aerosol Research and Characterization (SEARCH) network. The comparison between short-term measurements with advanced instruments and long-term measurements of basic air quality indicators not only tests the robustness of the short-term measurements but also provides insights into interpreting long-term measurements. We find that OA factors resolved from PMF analysis of the HR-ToF-AMS measurements have distinctly different diurnal variations. The compensation of OA factors with different diurnal trends is one possible reason for the repeatedly observed, relatively flat OA diurnal profile in the southeastern US. In addition, analysis of the long-term measurements shows that the correlation between OC and sulfate is substantially higher in summer than winter. This seasonality could be partly due to the effects of sulfate on isoprene SOA formation, as revealed by the short-term, intensive measurements.
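    PMF factorizes the matrix of mass spectra X (time × m/z) as X ≈ G·F with non-negative factor time series G and profiles F; AMS studies typically solve the measurement-uncertainty-weighted problem with the PMF2 or ME-2 solvers. As a rough, unweighted stand-in, scikit-learn's NMF illustrates the decomposition on synthetic data (the factor count, matrix sizes, and distributions below are all invented):

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(7)

# Synthetic "AMS" matrix: 500 time points x 120 m/z channels built from
# 4 non-negative source profiles -- a stand-in for real HOA/COA/BBOA/OOA data.
n_t, n_mz, n_factors = 500, 120, 4
G_true = rng.gamma(2.0, 1.0, size=(n_t, n_factors))     # factor time series
F_true = rng.gamma(1.5, 1.0, size=(n_factors, n_mz))    # factor spectra
X = G_true @ F_true + rng.normal(0, 0.05, size=(n_t, n_mz)).clip(0)

# Unweighted NMF; real PMF minimizes sum(((X - GF)/sigma)**2) with per-point
# measurement uncertainties sigma.
model = NMF(n_components=n_factors, init='nndsvda', max_iter=500)
G = model.fit_transform(X)        # factor contributions vs time
F = model.components_             # factor mass-spectral profiles
recon_err = np.linalg.norm(X - G @ F) / np.linalg.norm(X)
print(f"{n_factors} factors, relative reconstruction error {recon_err:.3f}")
```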

  13. Aerosol characterization over the southeastern United States using high-resolution aerosol mass spectrometry: spatial and seasonal variation of aerosol composition and sources with a focus on organic nitrates

    NASA Astrophysics Data System (ADS)

    Xu, L.; Suresh, S.; Guo, H.; Weber, R. J.; Ng, N. L.

    2015-07-01

    We deployed a High-Resolution Time-of-Flight Aerosol Mass Spectrometer (HR-ToF-AMS) and an Aerosol Chemical Speciation Monitor (ACSM) to characterize the chemical composition of submicron non-refractory particulate matter (NR-PM1) in the southeastern USA. Measurements were performed in both rural and urban sites in the greater Atlanta area, Georgia (GA), and Centreville, Alabama (AL), for approximately 1 year as part of the Southeastern Center for Air Pollution and Epidemiology study (SCAPE) and Southern Oxidant and Aerosol Study (SOAS). Organic aerosol (OA) accounts for more than half of NR-PM1 mass concentration regardless of sampling sites and seasons. Positive matrix factorization (PMF) analysis of HR-ToF-AMS measurements identified various OA sources, depending on location and season. Hydrocarbon-like OA (HOA) and cooking OA (COA) have important, but not dominant, contributions to total OA in urban sites (i.e., 21-38 % of total OA depending on site and season). Biomass burning OA (BBOA) concentration shows a distinct seasonal variation with a larger enhancement in winter than summer. We find a good correlation between BBOA and brown carbon, indicating biomass burning is an important source for brown carbon, although an additional, unidentified brown carbon source is likely present at the rural Yorkville site. Isoprene-derived OA factor (isoprene-OA) is only deconvolved in warmer months and contributes 18-36 % of total OA. The presence of isoprene-OA factor in urban sites is more likely from local production in the presence of NOx than transport from rural sites. More-oxidized and less-oxidized oxygenated organic aerosol (MO-OOA and LO-OOA, respectively) are dominant fractions (47-79 %) of OA in all sites. MO-OOA correlates well with ozone in summer but not in winter, indicating MO-OOA sources may vary with seasons. LO-OOA, which reaches a daily maximum at night, correlates better with estimated nitrate functionality from organic nitrates than total nitrates. Based on the HR-ToF-AMS measurements, we estimate that the nitrate functionality from organic nitrates contributes 63-100 % to the total measured nitrates in summer. Furthermore, the contribution of organic nitrates to total OA is estimated to be 5-12 % in summer, suggesting that organic nitrates are important components in the ambient aerosol in the southeastern USA. The spatial distribution of OA is investigated by comparing simultaneous HR-ToF-AMS measurements with ACSM measurements at two different sampling sites. OA is found to be spatially homogeneous in summer, possibly due to stagnant air mass and a dominant amount of regional secondary organic aerosol (SOA) in the southeastern USA. The homogeneity is less in winter, which is likely due to spatial variation of primary emissions. We observe that the seasonality of OA concentration shows a clear urban/rural contrast. While OA exhibits weak seasonal variation in the urban sites, its concentration is higher in summer than winter for rural sites. This observation from our year-long measurements is consistent with 14 years of organic carbon (OC) data from the SouthEastern Aerosol Research and Characterization (SEARCH) network. The comparison between short-term measurements with advanced instruments and long-term measurements of basic air quality indicators not only tests the robustness of the short-term measurements but also provides insights into interpreting long-term measurements. We find that OA factors resolved from PMF analysis on HR-ToF-AMS measurements have distinctly different diurnal variations. The compensation of OA factors with different diurnal trends is one possible reason for the repeatedly observed, relatively flat OA diurnal profile in the southeastern USA. In addition, analysis of long-term measurements shows that the correlation between OC and sulfate is substantially stronger in summer than winter. This seasonality could be partly due to the effects of sulfate on isoprene SOA formation as revealed by the short-term intensive measurements.

  14. Using NDACC column measurements of carbonyl sulfide to estimate its sources and sinks

    NASA Astrophysics Data System (ADS)

    Wang, Yuting; Marshall, Julia; Palm, Mathias; Deutscher, Nicholas; Roedenbeck, Christian; Warneke, Thorsten; Notholt, Justus; Baker, Ian; Berry, Joe; Suntharalingam, Parvadha; Jones, Nicholas; Mahieu, Emmanuel; Lejeune, Bernard; Hannigan, James; Conway, Stephanie; Strong, Kimberly; Campbell, Elliott; Wolf, Adam; Kremser, Stefanie

    2016-04-01

    Carbonyl sulfide (OCS) is taken up by plants during photosynthesis through a pathway similar to that of carbon dioxide (CO2), but is not emitted by respiration, and thus holds great promise as an additional constraint on the carbon cycle. It might act as a tracer of photosynthesis, a way to separate gross primary productivity (GPP) from the net ecosystem exchange (NEE) that is typically derived from flux modeling. However, the estimates of OCS sources and sinks still have significant uncertainties, which make it difficult to use OCS as a photosynthetic tracer, and the existing long-term surface-based measurements are sparse. The NDACC-IRWG measures the absorption of OCS in the atmosphere and provides a potential long-term database of OCS total/partial columns, which can be used to evaluate OCS fluxes. We have retrieved OCS columns from several NDACC sites around the globe and compared them to model simulations with OCS land fluxes based on the Simple Biosphere model (SiB). The disagreement between the measurements and the forward simulations indicates that (1) the OCS land fluxes from SiB are too low in the northern boreal region and (2) the ocean fluxes need to be optimized. A statistical linear flux model describing OCS was developed in the TM3 inversion system and used to estimate the OCS fluxes. We performed flux inversions using only NOAA OCS surface measurements as an observational constraint and with both surface and NDACC OCS column measurements, and assessed the differences. The posterior uncertainties of the inverted OCS fluxes decreased with the inclusion of NDACC data compared to those using surface data only, and could be further reduced if more NDACC sites were included.

  15. Constructing a Measure of Private-pay Nursing Home Days.

    PubMed

    Thomas, Kali S; Silver, Benjamin; Gozalo, Pedro L; Dosa, David; Grabowski, David C; Makineni, Rajesh; Mor, Vincent

    2018-05-01

    Nursing home (NH) care is financed through multiple sources. Although Medicaid is the predominant payer for NH care, over 20% of residents pay out-of-pocket for their care. Despite this large percentage, an accepted measure of private-pay NH occupancy has not been established and little is known about the types of facilities and the long-term care markets that cater to this population. To describe 2 novel measures of private-pay utilization in the NH setting, including the proportion of privately financed residents and resident days, and examine their construct validity. Retrospective descriptive analysis of US NHs in 2007-2009. We used Medicare claims, Medicare Enrollment records, and the Minimum Data Set to create measures of private-pay resident prevalence and proportion of privately financed NH days. We compared our estimates of private-pay utilization to payer data collected in the NH annual certification survey and evaluated the relationships of our measures with facility characteristics. Our measures of private-pay resident prevalence and private-pay days are highly correlated (r=0.83, P<0.001 and r=0.83, P<0.001, respectively) with the rate of "other payer" reported in the annual certification survey. We also observed a significantly higher proportion of private-pay residents and days in higher quality facilities. This new methodology provides estimates of private-pay resident prevalence and resident days. These measures were correlated with estimates using other data sources and validated against measures of facility quality. These data set the stage for additional work to examine questions related to NH payment, quality of care, and responses to changes in the long-term care market.

  16. The use of polar organic compounds to estimate the contribution of domestic solid fuel combustion and biogenic sources to ambient levels of organic carbon and PM2.5 in Cork Harbour, Ireland.

    PubMed

    Kourtchev, Ivan; Hellebust, Stig; Bell, Jennifer M; O'Connor, Ian P; Healy, Robert M; Allanic, Arnaud; Healy, David; Wenger, John C; Sodeau, John R

    2011-05-01

    PM(2.5) samples collected at Cork Harbour, Ireland during summer, autumn, late autumn and winter, 2008-2009 were analyzed for polar organic compounds that are useful markers for aerosol source characterization. The determined compounds include tracers for biomass burning primary particles and fungal spores, as well as markers for secondary organic aerosol (SOA) from isoprene, α-/β-pinene, and d-limonene. Seasonal and temporal variations and other characteristic features of the detected tracers are discussed in terms of aerosol sources and processes. The biogenic species were detected only during the summer period, when the contributions of isoprene SOA and fungal spores to the PM(2.5) organic carbon (OC) were estimated to be 1.6% and 1%, respectively. The biomass burning markers, and in particular levoglucosan, were present in all samples and attributed to the combustion of cellulose-containing fuels including wood, peat, and bituminous and smokeless coal. The contribution of domestic solid fuel (DSF) burning to the measured OC mass concentration was estimated at 10.8, 50, 66.4 and 74.9% for the summer, autumn, late autumn and winter periods, respectively, based on factors derived from a series of burning experiments on locally available fuels. Application of an alternative approach, namely principal component analysis-multiple linear regression (PCA-MLR), to the measured concentrations of the polar organic marker compounds, used in conjunction with real-time air quality data, provided similar trends and estimates for DSF combustion during all seasons except summer. This study clearly demonstrates that, despite the ban on the sale of bituminous coal in Cork and other large urban areas in Ireland, DSF combustion is still the major source of OC during the autumn and winter periods and also makes a significant contribution to PM(2.5) levels. The developed marker approach for estimating the contribution of DSF combustion to ambient OC concentrations can, in principle, also be applied to other locations. Copyright © 2011 Elsevier B.V. All rights reserved.

  17. Independent evaluation of point source fossil fuel CO2 emissions to better than 10%

    PubMed Central

    Turnbull, Jocelyn Christine; Keller, Elizabeth D.; Norris, Margaret W.; Wiltshire, Rachael M.

    2016-01-01

    Independent estimates of fossil fuel CO2 (CO2ff) emissions are key to ensuring that emission reductions and regulations are effective and provide needed transparency and trust. Point source emissions are a key target because a small number of power plants represent a large portion of total global emissions. Currently, emission rates are known only from self-reported data. Atmospheric observations have the potential to meet the need for independent evaluation, but useful results from this method have been elusive, due to challenges in distinguishing CO2ff emissions from the large and varying CO2 background and in relating atmospheric observations to emission flux rates with high accuracy. Here we use time-integrated observations of the radiocarbon content of CO2 (14CO2) to quantify the recently added CO2ff mole fraction at surface sites surrounding a point source. We demonstrate that both fast-growing plant material (grass) and CO2 collected by absorption into sodium hydroxide solution provide excellent time-integrated records of atmospheric 14CO2. These time-integrated samples allow us to evaluate emissions over a period of days to weeks with only a modest number of measurements. Applying the same time integration in an atmospheric transport model eliminates the need to resolve highly variable short-term turbulence. Together these techniques allow us to independently evaluate point source CO2ff emission rates from atmospheric observations with uncertainties of better than 10%. This uncertainty represents an improvement by a factor of 2 over current bottom-up inventory estimates and previous atmospheric observation estimates and allows reliable independent evaluation of emissions. PMID:27573818
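    The underlying calculation is a two-component Δ14C mass balance: fossil carbon is 14C-free (Δ14C = −1000‰), so the depletion of the observed Δ14C below background gives the recently added fossil mole fraction directly. A worked sketch with illustrative numbers, not the paper's observations:

```python
def co2ff(co2_obs, d14c_obs, d14c_bg, d14c_ff=-1000.0):
    """Fossil-fuel CO2 mole fraction (ppm) from a Delta14C mass balance.

    Derived from CO2obs = CO2bg + CO2ff and the corresponding 14C balance:
        CO2ff = CO2obs * (D_obs - D_bg) / (D_ff - D_bg)
    co2_obs  : observed total CO2 (ppm)
    d14c_obs : observed Delta14C of CO2 (permil)
    d14c_bg  : background Delta14C (permil)
    d14c_ff  : Delta14C of fossil carbon; -1000 permil (14C-free) by definition
    """
    return co2_obs * (d14c_obs - d14c_bg) / (d14c_ff - d14c_bg)

# Illustrative: 410 ppm observed with a 5 permil depletion below background
# yields roughly 2 ppm of recently added fossil CO2.
print(f"CO2ff ~ {co2ff(410.0, 15.0, 20.0):.2f} ppm")
```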

  18. Independent evaluation of point source fossil fuel CO2 emissions to better than 10%.

    PubMed

    Turnbull, Jocelyn Christine; Keller, Elizabeth D; Norris, Margaret W; Wiltshire, Rachael M

    2016-09-13

    Independent estimates of fossil fuel CO2 (CO2ff) emissions are key to ensuring that emission reductions and regulations are effective and provide needed transparency and trust. Point source emissions are a key target because a small number of power plants represent a large portion of total global emissions. Currently, emission rates are known only from self-reported data. Atmospheric observations have the potential to meet the need for independent evaluation, but useful results from this method have been elusive, due to challenges in distinguishing CO2ff emissions from the large and varying CO2 background and in relating atmospheric observations to emission flux rates with high accuracy. Here we use time-integrated observations of the radiocarbon content of CO2 ((14)CO2) to quantify the recently added CO2ff mole fraction at surface sites surrounding a point source. We demonstrate that both fast-growing plant material (grass) and CO2 collected by absorption into sodium hydroxide solution provide excellent time-integrated records of atmospheric (14)CO2. These time-integrated samples allow us to evaluate emissions over a period of days to weeks with only a modest number of measurements. Applying the same time integration in an atmospheric transport model eliminates the need to resolve highly variable short-term turbulence. Together these techniques allow us to independently evaluate point source CO2ff emission rates from atmospheric observations with uncertainties of better than 10%. This uncertainty represents an improvement by a factor of 2 over current bottom-up inventory estimates and previous atmospheric observation estimates and allows reliable independent evaluation of emissions.

  19. Quantifying Sources and Fluxes of Aquatic Carbon in U.S. Streams and Reservoirs Using Spatially Referenced Regression Models

    NASA Astrophysics Data System (ADS)

    Boyer, E. W.; Smith, R. A.; Alexander, R. B.; Schwarz, G. E.

    2004-12-01

    Organic carbon (OC) is a critical water quality characteristic in riverine systems that is an important component of the aquatic carbon cycle and energy balance. Examples of processes controlled by OC interactions are the complexation of trace metals, enhancement of the solubility of hydrophobic organic contaminants, formation of trihalomethanes in drinking water, and absorption of visible and UV radiation. Organic carbon can also have indirect effects on water quality by influencing internal processes of aquatic ecosystems (e.g., photosynthesis and autotrophic and heterotrophic activity). The importance of organic matter dynamics for water quality has been recognized, but challenges remain in quantitatively addressing OC processes over broad spatial scales in a hydrological context. In this study, we apply spatially referenced watershed models (SPARROW) to statistically estimate long-term mean-annual rates of dissolved and total organic carbon export in streams and reservoirs across the conterminous United States. We make use of a GIS framework for the analysis, describing sources, transport, and transformations of organic matter from spatial databases providing characterizations of climate, land use, primary productivity, topography, soils, and geology. This approach is useful because it illustrates spatial patterns of organic carbon fluxes in streamflow, highlighting hot spots (e.g., organic-rich environments in the southeastern coastal plain). Further, our simulations provide estimates of the relative contributions to streams from allochthonous and autochthonous sources. We quantify surface water fluxes of OC, with estimates of uncertainty, in relation to the overall US carbon budget; our simulations highlight that aquatic sources and sinks of OC may be a more significant component of regional carbon cycling than was previously thought. Further, we are using our simulations to explore the potential role of climate and other changes in the terrestrial environment on OC fluxes in aquatic systems.

  20. Algorithms for System Identification and Source Location.

    NASA Astrophysics Data System (ADS)

    Nehorai, Arye

    This thesis deals with several topics in least squares estimation and applications to source location. It begins with a derivation of a mapping between Wiener theory and Kalman filtering for nonstationary autoregressive moving average (ARMA) processes. Applying time domain analysis, connections are found between time-varying state space realizations and the input-output impulse response via matrix fraction description (MFD). Using these connections, the whitening filters are derived by the two approaches, and the Kalman gain is expressed in terms of Wiener theory. Next, fast estimation algorithms are derived in a unified way as special cases of the Conjugate Direction Method. The fast algorithms included are the block Levinson, fast recursive least squares, ladder (or lattice) and fast Cholesky algorithms. The results give a novel derivation and interpretation for all these methods, which are efficient alternatives to available recursive system identification algorithms. Multivariable identification algorithms are usually designed only for left MFD models. In this work, recursive multivariable identification algorithms are derived for right MFD models with diagonal denominator matrices. The algorithms are of prediction error and model reference type. Convergence analysis results obtained by the Ordinary Differential Equation (ODE) method are presented along with simulations. Sources of energy can be located by estimating time differences of arrival (TDOAs) of waves between receivers. A new method for TDOA estimation is proposed for multiple unknown ARMA sources and additive correlated receiver noise. The method is based on a formula that uses only the receiver cross-spectra and the source poles. Two algorithms are suggested that allow tradeoffs between computational complexity and accuracy. A new time delay model is derived and used to show the applicability of the methods for non-integer TDOAs. Results from simulations illustrate the performance of the algorithms. The last chapter analyzes the response of exact least squares predictors for the enhancement of sinusoids in additive colored noise. Using the matrix inversion lemma and the Christoffel-Darboux formula, the frequency response and amplitude gain of the sinusoids are expressed as functions of the signal and noise characteristics. The results generalize the available white noise case.
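    The cross-spectral TDOA idea generalizes the basic cross-correlation estimator: correlate the two receiver signals, take the peak lag, and refine it to a non-integer delay. A minimal, hypothetical two-receiver sketch with a fractional true delay and parabolic peak interpolation (the thesis's pole-based multi-source method is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(3)
fs, n = 8000.0, 4096
true_delay = 12.25 / fs            # non-integer delay: 12.25 samples

# Hypothetical source: white noise; receiver 2 is a fractionally delayed
# copy of receiver 1 (delay applied in the frequency domain) plus noise.
s = rng.normal(size=n)
S = np.fft.rfft(s)
f = np.fft.rfftfreq(n, 1 / fs)
x1 = s + 0.05 * rng.normal(size=n)
x2 = (np.fft.irfft(S * np.exp(-2j * np.pi * f * true_delay), n)
      + 0.05 * rng.normal(size=n))

# Circular cross-correlation via FFT; peak index gives the integer-lag TDOA.
R = np.fft.irfft(np.fft.rfft(x2) * np.conj(np.fft.rfft(x1)), n)
k = int(np.argmax(R))
lag = k if k <= n // 2 else k - n
# Parabolic interpolation around the peak for a sub-sample (non-integer)
# refinement; this is approximate for a sinc-shaped correlation peak.
y0, y1, y2 = R[k - 1], R[k], R[(k + 1) % n]
frac = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
est = (lag + frac) / fs
print(f"estimated TDOA {est * 1e3:.4f} ms (true {true_delay * 1e3:.4f} ms)")
```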

  1. California Drought Recovery Assessment Using GRACE Satellite Gravimetry Information

    NASA Astrophysics Data System (ADS)

    Love, C. A.; Aghakouchak, A.; Madadgar, S.; Tourian, M. J.

    2015-12-01

    California has been experiencing its most extreme drought in recent history due to a combination of record high temperatures and exceptionally low precipitation. An estimate of when the drought can be expected to end is needed for risk mitigation and water management. A crucial component of drought recovery assessments is the estimation of the terrestrial water storage (TWS) deficit. Previous studies on drought recovery have been limited to surface water hydrology (precipitation and/or runoff) for estimating changes in TWS, neglecting the contribution of groundwater deficits to the recovery time of the system. Groundwater requires more time to recover than surface water storage; therefore, the inclusion of groundwater storage in drought recovery assessments is essential for understanding the long-term vulnerability of a region. Here we assess the probability, for varying timescales, of California's current TWS deficit returning to its long-term historical mean. Our method consists of deriving the region's fluctuations in TWS from changes in the gravity field observed by NASA's Gravity Recovery and Climate Experiment (GRACE) satellites. We estimate the probability that meteorological inputs (precipitation minus evaporation and runoff) over different time spans (e.g., 3, 6, or 12 months) will balance the current GRACE-derived TWS deficit. This method improves upon previous techniques because the GRACE-derived water deficit comprises all hydrologic sources, including surface water, groundwater, and snow cover. With this empirical probability assessment we expect to improve current estimates of California's drought recovery time, thereby improving risk mitigation.
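    The recovery probability described here reduces to an empirical exceedance count: how often, in the historical record, did the net flux (precipitation minus evaporation and runoff) accumulated over an n-month window meet or exceed the current storage deficit. A minimal sketch with synthetic monthly fluxes (all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic monthly net-flux anomaly P - E - R (km^3/month), ~13 years.
net_flux = rng.normal(1.0, 8.0, size=160)
deficit = 40.0                     # current GRACE-derived TWS deficit (km^3)

def recovery_prob(flux, deficit, window):
    """Empirical P(sum of net flux over `window` months >= deficit)."""
    sums = np.convolve(flux, np.ones(window), mode='valid')  # rolling sums
    return (sums >= deficit).mean()

for months in (3, 6, 12, 24):
    print(f"P(recovery within {months:2d} months) = "
          f"{recovery_prob(net_flux, deficit, months):.2f}")
```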

  2. Global Economic Impact of Dental Diseases.

    PubMed

    Listl, S; Galloway, J; Mossey, P A; Marcenes, W

    2015-10-01

    Reporting the economic burden of oral diseases is important to evaluate the societal relevance of preventing and addressing oral diseases. In addition to treatment costs, there are indirect costs to consider, mainly in terms of productivity losses due to absenteeism from work. The purpose of the present study was to estimate the direct and indirect costs of dental diseases worldwide to approximate the global economic impact. Estimation of direct treatment costs was based on a systematic approach. For estimation of indirect costs, an approach suggested by the World Health Organization's Commission on Macroeconomics and Health was employed, which factored in 2010 values of gross domestic product per capita as provided by the International Monetary Fund and oral burden of disease estimates from the 2010 Global Burden of Disease Study. Direct treatment costs due to dental diseases worldwide were estimated at US$298 billion yearly, corresponding to an average of 4.6% of global health expenditure. Indirect costs due to dental diseases worldwide amounted to US$144 billion yearly, corresponding to economic losses within the range of the 10 most frequent global causes of death. Within the limitations of currently available data sources and methodologies, these findings suggest that the global economic impact of dental diseases amounted to US$442 billion in 2010. Improvements in population oral health may imply substantial economic benefits not only in terms of reduced treatment costs but also because of fewer productivity losses in the labor market. © International & American Associations for Dental Research 2015.

  3. On the scale dependence of earthquake stress drop

    NASA Astrophysics Data System (ADS)

    Cocco, Massimo; Tinti, Elisa; Cirella, Antonella

    2016-10-01

    We discuss the debated issue of scale dependence in earthquake source mechanics with the goal of providing supporting evidence to foster the adoption of a coherent interpretative framework. We examine the heterogeneous distribution of source and constitutive parameters during individual ruptures and their scaling with earthquake size. We discuss evidence that slip, slip-weakening distance and breakdown work scale with seismic moment and are interpreted as scale-dependent parameters. We integrate our estimates of earthquake stress drop, computed through a pseudo-dynamic approach, with many others available in the literature for both point sources and finite fault models. We obtain a picture of earthquake stress drop scaling with seismic moment over an exceptionally broad range of earthquake sizes (-8 < MW < 9). Our results confirm that stress drop values are scattered over three orders of magnitude and emphasize the lack of corroborating evidence that stress drop scales with seismic moment. We discuss these results in terms of scale invariance of stress drop with source dimension and analyse the interpretation of this outcome in terms of self-similarity. Geophysicists are presently unable to provide physical explanations of dynamic self-similarity relying on deterministic descriptions of micro-scale processes. We conclude that the interpretation of the self-similar behaviour of stress drop scaling is strongly model dependent. We emphasize that it relies on a geometric description of source heterogeneity through the statistical properties of initial stress or fault-surface topography, of which only the latter is constrained by observations.
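    For point-source estimates of the kind compiled here, stress drop is commonly computed from the seismic moment and an inferred source radius with the Eshelby circular-crack relation Δσ = (7/16)·M0/r³; uncertainty in r (usually inferred from corner frequencies) enters cubed, which is one driver of the three-orders-of-magnitude scatter. A worked example with assumed values:

```python
def stress_drop(m0, radius):
    """Eshelby circular-crack stress drop (Pa): (7/16) * M0 / r**3."""
    return 7.0 / 16.0 * m0 / radius**3

def moment_from_mw(mw):
    """Seismic moment (N m) from Mw = (2/3) * (log10(M0) - 9.1)."""
    return 10 ** (1.5 * mw + 9.1)

# Assumed example: an Mw 5 earthquake with a 1 km source radius.
m0 = moment_from_mw(5.0)           # ~4.0e16 N m
print(f"M0 = {m0:.2e} N m, "
      f"stress drop = {stress_drop(m0, 1000.0) / 1e6:.1f} MPa")
```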

  4. Reconstructing source terms from atmospheric concentration measurements: Optimality analysis of an inversion technique

    NASA Astrophysics Data System (ADS)

    Turbelin, Grégory; Singh, Sarvesh Kumar; Issartel, Jean-Pierre

    2014-12-01

    In the event of an accidental or intentional contaminant release in the atmosphere, it is imperative, for managing emergency response, to diagnose the release parameters of the source from measured data. Reconstruction of the source information exploiting measured data is called an inverse problem. To solve such a problem, several techniques are currently being developed. The first part of this paper provides a detailed description of one of them, known as the renormalization method. This technique, proposed by Issartel (2005), has been derived using an approach different from that of standard inversion methods and gives a linear solution to the continuous Source Term Estimation (STE) problem. In the second part of this paper, the discrete counterpart of this method is presented. By using matrix notation, common in data assimilation and suitable for numerical computing, it is shown that the discrete renormalized solution belongs to a family of well-known inverse solutions (minimum weighted norm solutions), which can be computed by using the concept of generalized inverse operator. It is shown that, when the weight matrix satisfies the renormalization condition, this operator satisfies the criteria used in geophysics to define good inverses. Notably, by means of the Model Resolution Matrix (MRM) formalism, we demonstrate that the renormalized solution fulfils optimal properties for the localization of single point sources. Throughout the article, the main concepts are illustrated with data from a wind tunnel experiment conducted at the Environmental Flow Research Centre at the University of Surrey, UK.
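    In the discrete setting described here, with measurement vector mu = H s for source vector s, the minimum weighted-norm family has the closed form s = W^-1 H^T (H W^-1 H^T)^-1 mu for a weight matrix W. The sketch below illustrates that generalized-inverse step with a random stand-in for the transport operator H and a simple sensitivity-based diagonal weight; the paper's renormalization condition determines W differently (via the Model Resolution Matrix), so this is schematic only:

```python
import numpy as np

rng = np.random.default_rng(2)

n_obs, n_grid = 8, 200             # few detectors, many candidate source cells
H = np.abs(rng.normal(size=(n_obs, n_grid)))   # stand-in transport operator
s_true = np.zeros(n_grid)
s_true[57] = 5.0                   # single point source in cell 57
mu = H @ s_true                    # noise-free measurements

def min_weighted_norm(H, mu, w):
    """s = W^-1 H^T (H W^-1 H^T)^-1 mu for diagonal W = diag(w)."""
    HW = H / w                     # H @ W^-1 column-wise
    return HW.T @ np.linalg.solve(HW @ H.T, mu)

# Unweighted (w = 1) minimum-norm solution vs a visibility-based weighting
# (an assumption standing in for the renormalization condition).
s_plain = min_weighted_norm(H, mu, np.ones(n_grid))
s_weighted = min_weighted_norm(H, mu, H.sum(axis=0))
print("argmax plain   :", int(np.argmax(s_plain)), "(true cell: 57)")
print("argmax weighted:", int(np.argmax(s_weighted)), "(true cell: 57)")
```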

  5. Passage relevance models for genomics search.

    PubMed

    Urbain, Jay; Frieder, Ophir; Goharian, Nazli

    2009-03-19

    We present a passage relevance model for integrating syntactic and semantic evidence of biomedical concepts and topics using a probabilistic graphical model. Component models of topics, concepts, terms, and document are represented as potential functions within a Markov Random Field. The probability of a passage being relevant to a biologist's information need is represented as the joint distribution across all potential functions. Relevance model feedback of top ranked passages is used to improve distributional estimates of query concepts and topics in context, and a dimensional indexing strategy is used for efficient aggregation of concept and term statistics. By integrating multiple sources of evidence including dependencies between topics, concepts, and terms, we seek to improve genomics literature passage retrieval precision. Using this model, we are able to demonstrate statistically significant improvements in retrieval precision using a large genomics literature corpus.

  6. Inverse Estimation of California Methane Emissions and Their Uncertainties using FLEXPART-WRF

    NASA Astrophysics Data System (ADS)

    Cui, Y.; Brioude, J. F.; Angevine, W. M.; McKeen, S. A.; Peischl, J.; Nowak, J. B.; Henze, D. K.; Bousserez, N.; Fischer, M. L.; Jeong, S.; Liu, Z.; Michelsen, H. A.; Santoni, G.; Daube, B. C.; Kort, E. A.; Frost, G. J.; Ryerson, T. B.; Wofsy, S. C.; Trainer, M.

    2015-12-01

    Methane (CH4) has a large global warming potential and mediates global tropospheric chemistry. In California, CH4 emissions estimates derived from "top-down" methods based on atmospheric observations have been found to be greater than expected from "bottom-up" population-apportioned national and state inventories. Differences between bottom-up and top-down estimates suggest that the understanding of California's CH4 sources is incomplete, leading to uncertainty in the application of regulations to mitigate regional CH4 emissions. In this study, we use airborne measurements from the California Research at the Nexus of Air Quality and Climate Change (CalNex) campaign in 2010 to estimate CH4 emissions in the South Coast Air Basin (SoCAB), which includes California's largest metropolitan area (Los Angeles), and in the Central Valley, California's main agricultural and livestock management area. Measurements from 12 daytime flights, prior information from national and regional official inventories (e.g., US EPA's National Emission Inventory, the California Air Resources Board inventories, the Liu et al. Hybrid Inventory, and the California Greenhouse Gas Emissions Measurement dataset), and the FLEXPART-WRF transport model are used in our mesoscale Bayesian inverse system. We compare our optimized posterior CH4 inventory to the prior bottom-up inventories in terms of total emissions (Mg CH4/hr) and the spatial distribution of the emissions (0.1 degree), and quantify uncertainties in our posterior estimates. Our inversions show that the oil and natural gas industry (extraction, processing and distribution) is the main source accounting for the gap between top-down and bottom-up inventories over the SoCAB, while dairy farms are the largest CH4 source in the Central Valley. CH4 emissions of dairy farms in the San Joaquin Valley and variations of CH4 emissions in the rice-growing regions of the Sacramento Valley are quantified and discussed. We also estimate CO and NH3 surface fluxes and use their observed correlation with CH4 mixing ratio to further evaluate our CH4 total emission estimates and understand the spatial distribution of CH4 emissions.

  7. An alternative subspace approach to EEG dipole source localization

    NASA Astrophysics Data System (ADS)

    Xu, Xiao-Liang; Xu, Bobby; He, Bin

    2004-01-01

    In the present study, we investigate a new approach to electroencephalography (EEG) three-dimensional (3D) dipole source localization by using a non-recursive subspace algorithm called FINES. In estimating source dipole locations, the present approach employs projections onto a subspace spanned by a small set of particular vectors (FINES vector set) in the estimated noise-only subspace instead of the entire estimated noise-only subspace in the case of classic MUSIC. The subspace spanned by this vector set is, in the sense of principal angle, closest to the subspace spanned by the array manifold associated with a particular brain region. By incorporating knowledge of the array manifold in identifying FINES vector sets in the estimated noise-only subspace for different brain regions, the present approach is able to estimate sources with enhanced accuracy and spatial resolution, thus enhancing the capability of resolving closely spaced sources and reducing estimation errors. The present computer simulations show, in EEG 3D dipole source localization, that compared to classic MUSIC, FINES has (1) better resolvability of two closely spaced dipolar sources and (2) better estimation accuracy of source locations. In comparison with RAP-MUSIC, FINES' performance is also better for the cases studied when the noise level is high and/or correlations among dipole sources exist.

  8. Assessing bisphenol A (BPA) exposure risk from long-term dietary intakes in Taiwan.

    PubMed

    Chen, Wei-Yu; Shen, Yi-Pei; Chen, Szu-Chieh

    2016-02-01

    Dietary intake is the major bisphenol A (BPA) exposure route in humans and a cause of BPA-related adverse effects. The large-scale exposure risk of humans to BPA through dietary sources in Taiwan is less well studied. The aim of this study was to assess the average daily dose (ADD) and hazard quotient (HQ) of BPA exposure risk from long-term dietary intake of BPA, as well as BPA concentrations, in different age-sex groups in Taiwan. We reanalyzed the BPA concentrations of regular daily food sources (rice, poultry, livestock, seafood, protein, fruits, and vegetables) and used a national dietary survey to estimate the contribution of variance to ADDs and the potential human health effect for different age-sex groups. The daily consumption of chicken, pork/beef, and seafood was estimated at 33.77 (male)/22.65 (female), 91.70 (M)/66.35 (F), and 54.15 (M)/40.78 (F) g/day, respectively. The highest BPA ADD was found in the 6-9 years age group (95% CI=0.0006-0.0027 mg/kg-bw/day), whereas the lowest BPA ADD was in the ≥65 years age group (0.0002-0.0020 mg/kg-bw/day). Based on the latest EFSA guidelines (0.004 mg/kg-bw/day), the 97.5th percentile HQ of BPA intake in different age-sex groups in Taiwan posed no risk through dietary intake. However, a combination of multiple exposure routes and long-term exposure in specific populations may be of concern in the future. Copyright © 2015 Elsevier B.V. All rights reserved.
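    The two risk metrics follow standard exposure-assessment arithmetic: ADD = Σ(concentration × intake)/body weight, and HQ = ADD/TDI with the EFSA tolerable daily intake of 0.004 mg/kg-bw/day as the reference. A worked sketch in which the concentrations and intakes are placeholders, not the study's measured values:

```python
# Illustrative only: BPA concentrations (mg/kg food) and intakes (g/day)
# below are placeholders, not the study's measurements.
TDI = 0.004                   # EFSA tolerable daily intake, mg/kg-bw/day
bw = 25.0                     # body weight (kg), e.g. the 6-9 years group

foods = {                     # name: (concentration mg/kg, intake g/day)
    "chicken": (0.010, 30.0),
    "pork/beef": (0.005, 60.0),
    "seafood": (0.020, 40.0),
}

# ADD = sum(conc * intake) / body weight, with intake converted g -> kg.
add = sum(c * (g / 1000.0) for c, g in foods.values()) / bw
hq = add / TDI
print(f"ADD = {add:.5f} mg/kg-bw/day, HQ = {hq:.3f} "
      f"({'below' if hq < 1 else 'above'} the level of concern)")
```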

  9. Osmium Isotopic Evolution of the Mantle Sources of Precambrian Ultramafic Rocks

    NASA Astrophysics Data System (ADS)

    Gangopadhyay, A.; Walker, R. J.

    2006-12-01

    The Os isotopic composition of the modern mantle, as recorded collectively by ocean island basalts, mid-oceanic ridge basalts (MORB) and abyssal peridotites, is evidently highly heterogeneous (γOs(I) ranging from <-10 to >+25). One important question, therefore, is how and when the Earth's mantle developed such large-scale Os isotopic heterogeneities. Previous Os isotopic studies of ancient ultramafic systems, including komatiites and picrites, have shown that the Os isotopic heterogeneity of the terrestrial mantle can be traced as far back as the late Archean (~2.7-2.8 Ga). This observation is based on the initial Os isotopic ratios obtained for the mantle sources of some of the ancient ultramafic rocks, determined through analyses of numerous Os-rich whole-rock and/or mineral samples. In some cases, the closed-system behavior of these ancient ultramafic rocks was demonstrated via the generation of isochrons of precise ages, consistent with those obtained from other radiogenic isotopic systems. Thus, a compilation of the published initial 187Os/188Os ratios reported for the mantle sources of komatiitic and picritic rocks is now possible that covers a large range of geologic time, spanning from the Mesozoic (ca. 89 Ma Gorgona komatiites) to the Mid-Archean (e.g., ca. 3.3 Ga Commondale komatiites), which provides a comprehensive picture of the Os isotopic evolution of their mantle sources through geologic time. Several Precambrian komatiite/picrite systems are characterized by suprachondritic initial 187Os/188Os ratios (e.g., Belingwe, Kostomuksha, Pechenga). Such long-term enrichments in 187Os of the mantle sources for these rocks may be explained via recycling of old mafic oceanic crust or incorporation of putative suprachondritic outer-core materials entrained into their mantle sources. The relative importance of the two processes for some modern mantle-derived systems (e.g., Hawaiian picrites) is an issue of substantial debate. Importantly, however, the high-precision initial Os isotopic compositions of the majority of ultramafic systems show strikingly uniform initial 187Os/188Os ratios, consistent with their derivation from sources that had an Os isotopic evolution trajectory very similar to that of carbonaceous chondrites. In addition, the Os isotopic evolution trajectories of the mantle sources for most komatiites show resolvably lower average Re/Os than that estimated for the Primitive Upper Mantle (PUM), yet significantly higher than that obtained in some estimates for the modern convecting upper mantle, as determined via analyses of abyssal peridotites. One possibility is that most of the komatiites sample mantle sources that are unique relative to the sources of abyssal peridotites and MORB. Previous arguments that komatiites originate via large extents of partial melting of relatively deep upper mantle, or even lower mantle materials, could therefore implicate a source that is different from the convecting upper mantle. If so, this source is remarkably uniform in its long-term Re/Os, and it shows moderate depletion in Re relative to the PUM. Alternatively, if the komatiites are generated within the convective upper mantle through relatively large extents of partial melting, they may provide a better estimate of the Os isotopic composition of the convective upper mantle than that obtained via analyses of MORB, abyssal peridotites and ophiolites.

  10. An examination of sources of sensitivity of consumer surplus estimates in travel cost models.

    PubMed

    Blaine, Thomas W; Lichtkoppler, Frank R; Bader, Timothy J; Hartman, Travis J; Lucente, Joseph E

    2015-03-15

    We examine the sensitivity of estimates of recreation demand using the Travel Cost Method (TCM) to four factors. Three of the four have been routinely and widely discussed in the TCM literature: a) Poisson versus negative binomial regression; b) application of the Englin correction to account for endogenous stratification; c) truncation of the data set to eliminate outliers. A fourth issue we address has not been widely modeled: the potential effect on recreation demand of the interaction between income and travel cost. We provide a straightforward comparison of all four factors, analyzing the impact of each on regression parameters and consumer surplus estimates. Truncation has a modest effect on estimates obtained from the Poisson models but a radical effect on the estimates obtained by way of the negative binomial. Inclusion of an income-travel cost interaction term generally produces a more conservative, but not statistically significantly different, estimate of consumer surplus in both Poisson and negative binomial models. It also generates broader confidence intervals. Application of truncation, the Englin correction and the income-travel cost interaction produced the most conservative estimates of consumer surplus and eliminated the statistical difference between the Poisson and the negative binomial. Use of the income-travel cost interaction term reveals that for visitors who face relatively low travel costs, the relationship between income and travel demand is negative, while it is positive for those who face high travel costs. This provides an explanation of the ambiguities in the findings regarding the role of income widely observed in the TCM literature. Our results suggest that policies that reduce access to publicly owned resources inordinately impact local low-income recreationists and are contrary to environmental justice. Copyright © 2014 Elsevier Ltd. All rights reserved.
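    In count-data TCM models the per-trip consumer surplus is −1/β_tc, the negative reciprocal of the travel-cost coefficient; with an income × travel-cost interaction, that coefficient (and hence CS) varies with income. A schematic Poisson fit on simulated data using statsmodels (all parameter values invented; the Englin correction and the negative binomial variant are omitted):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 2000

tc = rng.uniform(5, 100, n)     # travel cost ($)
inc = rng.uniform(20, 120, n)   # income ($1000s)
# Simulated trips: negative cost effect plus a cost-income interaction.
lam = np.exp(1.5 - 0.03 * tc + 0.004 * inc + 0.0001 * tc * inc)
trips = rng.poisson(lam)

X = sm.add_constant(np.column_stack([tc, inc, tc * inc]))
fit = sm.GLM(trips, X, family=sm.families.Poisson()).fit()
b_tc, b_int = fit.params[1], fit.params[3]

# Per-trip consumer surplus = -1 / (marginal travel-cost effect), which
# varies with income through the interaction term.
for income in (30.0, 90.0):
    cs = -1.0 / (b_tc + b_int * income)
    print(f"income ${income:.0f}k: consumer surplus per trip ~ ${cs:.2f}")
```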

  11. Development of a Distributed Source Containment Transport, Transformation, and Fate (CTT&F) Sub-Model for Military Installations

    DTIC Science & Technology

    2007-08-01

    The model includes soil erodibility terms from the Universal Soil Loss Equation (USLE) for estimating the overland sediment transport capacity (for both the x and y directions), with q = unit flow rate of water = v_a·h [L²/T], v_c = critical velocity for overland erosion [L/T], K = USLE soil erodibility factor, C = USLE soil cover factor, P = USLE soil management practice factor, and B_e = width of the eroding surface in the flow direction [L]. In channels, sediment particles can be...

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The feasibility of constructing a 25-50 MWe geothermal power plant using low salinity hydrothermal fluid as the energy source was assessed. Here, the geotechnical aspects of geothermal power generation and their relationship to environmental impacts in the Imperial Valley of California were investigated. Geology, geophysics, hydrogeology, seismicity and subsidence are discussed in terms of the availability of data, state-of-the-art analytical techniques, historical and technical background and interpretation of current data. Estimates of the impact of these geotechnical factors on the environment in the Imperial Valley, if geothermal development proceeds, are discussed.

  13. Use of Online Sources of Information by Dental Practitioners: Findings from The Dental Practice-Based Research Network

    PubMed Central

    Funkhouser, Ellen; Agee, Bonita S.; Gordan, Valeria V.; Rindal, D. Brad; Fellows, Jeffrey L.; Qvist, Vibeke; McClelland, Jocelyn; Gilbert, Gregg H.

    2013-01-01

    Objectives Estimate the proportion of dental practitioners who use online sources of information for practice guidance. Methods From a survey of 657 dental practitioners in The Dental Practice Based Research Network, four indicators of online use for practice guidance were calculated: read journals online, obtained continuing education (CDE) through online sources, rated an online source as most influential, and reported frequently using an online source for guidance. Demographics, journals read, and use of various sources of information for practice guidance in terms of frequency and influence were ascertained for each. Results Overall, 21% (n=138) were classified into one of the four indicators of online use: 14% (n=89) rated an online source as most influential and 13% (n=87) reported frequently using an online source for guidance; few practitioners (5%, n=34) read journals online, fewer (3%, n=17) obtained CDE through online sources. Use of online information sources varied considerably by region and practice characteristics. In general, the 4 indicators represented practitioners with as many differences as similarities to each other and to offline users. Conclusion A relatively small proportion of dental practitioners use information from online sources for practice guidance. Variation exists regarding practitioners’ use of online source resources and how they rate the value of offline information sources for practice guidance. PMID:22994848

  14. The source, discharge, and chemical characteristics of water from Agua Caliente Spring, Palm Springs, California

    USGS Publications Warehouse

    Brandt, Justin; Catchings, Rufus D.; Christensen, Allen H.; Flint, Alan L.; Gandhok, Gini; Goldman, Mark R.; Halford, Keith J.; Langenheim, V.E.; Martin, Peter; Rymer, Michael J.; Schroeder, Roy A.; Smith, Gregory A.; Sneed, Michelle; Martin, Peter

    2011-01-01

    Agua Caliente Spring, in downtown Palm Springs, California, has been used for recreation and medicinal therapy for hundreds of years and currently (2008) is the source of hot water for the Spa Resort owned by the Agua Caliente Band of the Cahuilla Indians. The Agua Caliente Spring is located about 1,500 feet east of the eastern front of the San Jacinto Mountains on the southeast-sloping alluvial plain of the Coachella Valley. The objectives of this study were to (1) define the geologic structure associated with the Agua Caliente Spring; (2) define the source(s), and possibly the age(s), of water discharged by the spring; (3) ascertain the seasonal and longer-term variability of the natural discharge, water temperature, and chemical characteristics of the spring water; (4) evaluate whether water-level declines in the regional aquifer will influence the temperature of the spring discharge; and, (5) estimate the quantity of spring water that leaks out of the water-collector tank at the spring orifice.

  15. High Latitude Dust Sources, Transport Pathways and Impacts

    NASA Astrophysics Data System (ADS)

    Bullard, J. E.; Baddock, M. C.; Darlington, E.; Mockford, T.; Van-Soest, M.

    2017-12-01

    Estimates from field studies, remote sensing and modelling all suggest around 5% of global dust emissions originate in the high latitudes (≥50°N and ≥40°S), a similar proportion to that from the USA (excluding Alaska) or Australia. This paper identifies contemporary sources of dust within the high latitudes and their role within local, regional and hemispherical environmental systems. Field data and remote sensing analyses are used to identify the environmental and climatic conditions that characterize high latitude dust sources in both hemispheres. Examples from Arctic and sub-Arctic dust sources are used to demonstrate and explain the different regional relationships among dust emissions, glacio-fluvial dynamics and snow cover. The relative timing of dust input to high latitude terrestrial, cryospheric and marine systems determines its short to medium term environmental impact. This is highlighted through quantifying the importance of locally-redistributed dust as a nutrient input to high latitude soils and lakes in West Greenland.

  16. Potential sources of precipitation in Lake Baikal basin

    NASA Astrophysics Data System (ADS)

    Shukurov, K. A.; Mokhov, I. I.

    2017-11-01

    Based on long-term measurements at 23 meteorological stations in the Russian part of the Lake Baikal basin, the probabilities of daily precipitation of different intensities and their contributions to the total precipitation are estimated. Using the trajectory model HYSPLIT_4, the 10-day backward trajectories of air parcels, the heights of these trajectories, and the distribution of specific humidity along the trajectories are calculated for each meteorological station for the period 1948-2016. The average field of the power of potential sources of daily precipitation (less than 10 mm) for all meteorological stations in the Russian part of the Lake Baikal basin was obtained using the CWT (concentration weighted trajectory) method. The areas from which water vapor can be transported to the Lake Baikal basin within 10 days have been identified, as well as the regions hosting the most and least powerful potential sources. The fields of the mean height of air-parcel trajectories and the mean specific humidity along the trajectories are compared with the field of the mean power of potential sources.
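    The CWT step above has a compact implementation. The sketch below is a minimal, illustrative version (the grid construction, the variable names, and the use of a 2-D histogram of trajectory points as a residence-time proxy are our assumptions, not details taken from the study): each grid cell receives the residence-time-weighted mean of the receptor values carried by the trajectories that crossed it.

      import numpy as np

      def cwt_field(trajectories, values, lon_edges, lat_edges):
          """Concentration Weighted Trajectory (CWT) field.

          trajectories : list of (lons, lats) coordinate arrays, one per event
          values       : per-trajectory receptor value (here, daily precipitation)
          Returns a (nlat, nlon) grid holding, for each cell, the residence-time-
          weighted mean of the receptor values of the trajectories crossing it.
          """
          nlat, nlon = len(lat_edges) - 1, len(lon_edges) - 1
          weighted = np.zeros((nlat, nlon))
          residence = np.zeros((nlat, nlon))
          for (lons, lats), c in zip(trajectories, values):
              # Residence time per cell, counted in trajectory time steps.
              h, _, _ = np.histogram2d(lats, lons, bins=[lat_edges, lon_edges])
              weighted += c * h
              residence += h
          with np.errstate(divide="ignore", invalid="ignore"):
              return np.where(residence > 0, weighted / residence, np.nan)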

  17. Analysis and optimization of minor actinides transmutation blankets with regards to neutron and gamma sources

    NASA Astrophysics Data System (ADS)

    Kooyman, Timothée; Buiron, Laurent; Rimpault, Gérald

    2017-09-01

    Heterogeneous loading of minor actinides in radial blankets is a potential solution for implementing minor actinides transmutation in fast reactors. However, to compensate for the lower flux level experienced by the blankets, the fraction of minor actinides loaded in the blankets must be increased to maintain acceptable performance. This severely increases the decay heat and neutron source of the blanket assemblies, both before and after irradiation, by more than an order of magnitude in the case of the neutron source, for instance. We propose here an optimization methodology for blanket design with regard to various parameters, such as the local spectrum or the mass to be loaded, with the objective of minimizing the final neutron source of the spent assembly while maximizing the transmutation performance of the blankets. In a first stage, an analysis of the various contributors to the long- and short-term neutron and gamma sources is carried out; in a second stage, relevant estimators are designed for use in the effective optimization process, which is performed in the last step. A comparison with core calculations is finally done for completeness and validation purposes. It is found that the use of a moderated spectrum in the blankets can be beneficial in terms of the final neutron and gamma sources without impacting minor actinides transmutation performance, compared to the more energetic spectra that could be achieved using metallic fuel, for instance. It is also confirmed that, if possible, the use of hydrides as moderating material in the blankets is a promising option to limit the total minor actinides inventory in the fuel cycle. If not, it appears that focus should be put on an increased residence time for the blankets rather than an increase in the acceptable neutron source for handling and reprocessing.

  18. On-farm estimation of energy balance in dairy cows using only frequent body weight measurements and body condition score.

    PubMed

    Thorup, V M; Edwards, D; Friggens, N C

    2012-04-01

    Precise energy balance estimates for individual cows are of great importance to monitor health, reproduction, and feed management. Energy balance is usually calculated as energy input minus output (EB(inout)), requiring measurements of feed intake and energy output sources (milk, maintenance, activity, growth, and pregnancy). Except for milk yield, direct measurements of the other sources are difficult to obtain in practice, and estimates contain considerable error sources, limiting on-farm use. Alternatively, energy balance can be estimated from body reserve changes (EB(body)) using body weight (BW) and body condition score (BCS). Automated weighing systems exist and new technology performing semi-automated body condition scoring has emerged, so frequent automated BW and BCS measurements are feasible. We present a method to derive individual EB(body) estimates from frequently measured BW and BCS and evaluate the performance of the estimated EB(body) against the traditional EB(inout) method. From 76 Danish Holstein and Jersey cows, parity 1 or 2+, on a glycerol-rich or a whole grain-rich total mixed ration, BW was measured automatically at each milking. The BW was corrected for the weight of milk produced and for gutfill. Changes in BW and BCS were used to calculate changes in body protein, body lipid, and EB(body) during the first 150 d in milk. The EB(body) was compared with the traditional EB(inout) by isolating the term within EB(inout) associated with most uncertainty, that is, feed energy content (FEC): FEC = (EB(body) + EMilk + EMaintenance + EActivity)/dry matter intake, where the energy requirements are for milk produced (EMilk), maintenance (EMaintenance), and activity (EActivity). Estimated FEC agreed well with FEC values derived from tables (the mean estimate was 0.21 MJ of effective energy/kg of dry matter, or 2.2%, higher than the mean table value). Further, the FEC profile did not suggest systematic bias in EB(body) with stage of lactation. The EB(body) estimated from daily BW, adjusted for milk and meal-related gutfill and combined with frequent BCS, can provide a successful tool. This offers a pragmatic solution to on-farm calculation of energy balance with the perspective of improved precision under commercial conditions.
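    As a minimal illustration of the identity above (the function and variable names are ours, and the numbers in the example are illustrative placeholders, not the paper's values):

      def feed_energy_content(eb_body, e_milk, e_maintenance, e_activity, dmi):
          """Feed energy content implied by body-reserve-based energy balance:
          FEC = (EB_body + E_milk + E_maintenance + E_activity) / DMI.
          All energy arguments share one unit (e.g. MJ of effective energy per
          day); dmi is dry matter intake in kg/day."""
          return (eb_body + e_milk + e_maintenance + e_activity) / dmi

      # Illustrative only: a cow mobilizing 10 MJ/day of body reserves
      # (EB_body = -10) while producing milk, maintaining and moving as below.
      print(feed_energy_content(-10.0, 95.0, 40.0, 5.0, 21.0))  # ~6.2 MJ/kg DM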

  19. Mineral dust transport in the Arctic modelled with FLEXPART

    NASA Astrophysics Data System (ADS)

    Groot Zwaaftink, Christine; Grythe, Henrik; Stohl, Andreas

    2016-04-01

    Aeolian transport of mineral dust is suggested to play an important role in many processes. For instance, mineral aerosols affect the radiation balance of the atmosphere, and mineral deposits influence ice sheet mass balances and terrestrial and ocean ecosystems. While much effort has gone into modelling global dust transport, relatively little attention has been given to mineral dust in the Arctic. Even though this region is more remote from the world's major dust sources and dust concentrations may be lower than elsewhere, the effects of mineral dust on, for instance, the radiation balance can be highly relevant. Furthermore, there are substantial local sources of dust in or close to the Arctic (e.g., in Iceland), whose impact on Arctic dust concentrations has not been studied in detail. We therefore aim to estimate the contributions of different source regions to mineral dust in the Arctic. We have developed a dust mobilization routine in combination with the Lagrangian dispersion model FLEXPART to make such estimates. The lack of detail on soil properties in many areas requires a simple routine for global simulations; however, we have paid special attention to the dust sources in Iceland. The mobilization routine does account for topography, snow cover and soil moisture effects, in addition to meteorological parameters. FLEXPART, driven with operational meteorological data from the European Centre for Medium-Range Weather Forecasts, was used to perform a three-year global dust simulation for the years 2010 to 2012. We assess the model performance in terms of surface concentration and deposition at several locations spread over the globe. We will discuss how deposition and dust load patterns in the Arctic change through the seasons depending on the source of the dust. Important source regions for mineral dust found in the Arctic are not only the major desert areas, such as the Sahara, but also local bare-soil regions. From our model results, it appears that the total dust load in the Arctic atmosphere is dominated by dust from Africa and Asia. However, in the lower atmosphere, local sources also contribute strongly to dust concentrations. Especially from Iceland, significant amounts of dust are mobilized. These local sources, with relatively shallow transport of dust, also affect the spatial distribution of dust deposition. For instance, model estimates show that in autumn and winter most of the deposited dust in Greenland originates from sources north of 60 degrees latitude.

  20. Sensitivity to experimental data of pollutant site mean concentration in stormwater runoff.

    PubMed

    Mourad, M; Bertrand-Krajewski, J L; Chebbo, G

    2005-01-01

    Urban wet weather discharges are known to be a great source of pollutants for receiving waters, whose protection requires the estimation of long-term discharged pollutant loads. Pollutant loads can be estimated by multiplying a site mean concentration (SMC) by the total runoff volume during a given period of time. The estimation of the SMC value as a weighted mean value with event runoff volumes as weights is affected by uncertainties due to the variability of event mean concentrations and to the number of events used. This study, carried out on 13 catchments, gives orders of magnitude of these uncertainties and shows the limitations of the usual practice of using few measured events. The results obtained show that it is not possible to propose a standard minimal number of events to be measured on any catchment in order to evaluate the SMC value with a given uncertainty.
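    A minimal sketch of the SMC estimator described above, together with a bootstrap over events showing how its spread grows when only a few events are measured (names and the bootstrap design are ours, not the study's procedure):

      import numpy as np

      def site_mean_concentration(emc, volumes):
          """Runoff-volume-weighted mean of event mean concentrations."""
          emc, volumes = np.asarray(emc, float), np.asarray(volumes, float)
          return np.sum(emc * volumes) / np.sum(volumes)

      def smc_spread(emc, volumes, n_events, n_boot=5000, seed=0):
          """Standard deviation of the SMC estimated from n_events resampled events."""
          rng = np.random.default_rng(seed)
          emc, volumes = np.asarray(emc, float), np.asarray(volumes, float)
          idx = rng.integers(0, len(emc), size=(n_boot, n_events))
          estimates = (emc[idx] * volumes[idx]).sum(axis=1) / volumes[idx].sum(axis=1)
          return estimates.std()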

  1. Hanford Environmental Dose Reconstruction Project. Monthly report, December 1991

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finch, S.M.; McMakin, A.H.

    1991-12-31

    The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The project is being managed and conducted by the Pacific Northwest Laboratory (PNL) under the direction of an independent Technical Steering Panel (TSP). The TSP consists of experts in environmental pathways, epidemiology, surface-water transport, ground-water transport, statistics, demography, agriculture, meteorology, nuclear engineering, radiation dosimetry, and cultural anthropology. Included are appointed technical members representing the states of Oregon and Washington, a representative of Native American tribes, and an individual representing the public. The project is divided into the following technical tasks, which correspond to the path radionuclides followed, from release to impact on humans (dose estimates): Source Terms; Environmental Transport; Environmental Monitoring Data; Demographics, Agriculture, and Food Habits; and Environmental Pathways and Dose Estimates.

  2. Hanford Environmental Dose Reconstruction Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finch, S.M.; McMakin, A.H.

    1991-01-01

    The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The project is being managed and conducted by the Pacific Northwest Laboratory (PNL) under the direction of an independent Technical Steering Panel (TSP). The TSP consists of experts in environmental pathways, epidemiology, surface-water transport, ground-water transport, statistics, demography, agriculture, meteorology, nuclear engineering, radiation dosimetry, and cultural anthropology. Included are appointed technical members representing the states of Oregon and Washington, a representative of Native American tribes, and an individual representing the public. The project is divided into the following technical tasks, which correspond to the path radionuclides followed, from release to impact on humans (dose estimates): Source Terms; Environmental Transport; Environmental Monitoring Data; Demographics, Agriculture, and Food Habits; and Environmental Pathways and Dose Estimates.

  3. Evaluation of severe accident risks: Quantification of major input parameters: MAACS (MELCOR Accident Consequence Code System) input

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sprung, J.L.; Jow, H-N; Rollstin, J.A.

    1990-12-01

    Estimation of offsite accident consequences is the customary final step in a probabilistic assessment of the risks of severe nuclear reactor accidents. Recently, the Nuclear Regulatory Commission reassessed the risks of severe accidents at five US power reactors (NUREG-1150). Offsite accident consequences for NUREG-1150 source terms were estimated using the MELCOR Accident Consequence Code System (MACCS). Before these calculations were performed, most MACCS input parameters were reviewed, and for each parameter reviewed, a best-estimate value was recommended. This report presents the results of these reviews. Specifically, recommended values and the basis for their selection are presented for MACCS atmospheric and biospheric transport, emergency response, food pathway, and economic input parameters. Dose conversion factors and health effect parameters are not reviewed in this report. 134 refs., 15 figs., 110 tabs.

  4. A priori Estimates for 3D Incompressible Current-Vortex Sheets

    NASA Astrophysics Data System (ADS)

    Coulombel, J.-F.; Morando, A.; Secchi, P.; Trebeschi, P.

    2012-04-01

    We consider the free boundary problem for current-vortex sheets in ideal incompressible magneto-hydrodynamics. It is known that current-vortex sheets may be at most weakly (neutrally) stable due to the existence of surface waves solutions to the linearized equations. The existence of such waves may yield a loss of derivatives in the energy estimate of the solution with respect to the source terms. However, under a suitable stability condition satisfied at each point of the initial discontinuity and a flatness condition on the initial front, we prove an a priori estimate in Sobolev spaces for smooth solutions with no loss of derivatives. The result of this paper gives some hope for proving the local existence of smooth current-vortex sheets without resorting to a Nash-Moser iteration. Such result would be a rigorous confirmation of the stabilizing effect of the magnetic field on Kelvin-Helmholtz instabilities, which is well known in astrophysics.

  5. Recharge estimation in semi-arid karst catchments: Central West Bank, Palestine

    NASA Astrophysics Data System (ADS)

    Jebreen, Hassan; Wohnlich, Stefan; Wisotzky, Frank; Banning, Andre; Niedermayr, Andrea; Ghanem, Marwan

    2018-03-01

    Knowledge of groundwater recharge constitutes a valuable tool for sustainable management in karst systems. In this respect, a quantitative evaluation of groundwater recharge can be considered a pre-requisite for the optimal operation of groundwater resources systems, particularly in semi-arid areas. This paper examines the processes affecting recharge in Palestinian aquifers. The Central Western Catchment is one of the main water supply sources in the West Bank. Potential recharge rates are estimated using the chloride mass balance (CMB) and empirical recharge equations over the catchment. The results show spatialized recharge rates ranging from 111 to 216 mm/year, representing 19-37% of the long-term mean annual rainfall. Using water balance models and climatological data (e.g., solar radiation, monthly temperature, average monthly relative humidity and precipitation), actual evapotranspiration (AET) is estimated. The mean annual AET was about 66-70% of precipitation.
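    The chloride mass balance referenced above reduces to a one-line calculation. A minimal sketch follows; the numbers in the example are illustrative, not the study's data:

      def cmb_recharge(precip_mm, cl_precip, cl_groundwater):
          """Chloride mass balance: recharge = P * Cl_P / Cl_GW (mm/year).
          Assumes chloride reaches the aquifer only via precipitation (including
          dry fallout) and is concentrated by evapotranspiration; concentrations
          are in mg/L."""
          return precip_mm * cl_precip / cl_groundwater

      # Illustrative numbers only: 580 mm/yr of rain at 4 mg/L chloride over
      # groundwater at 14 mg/L gives ~166 mm/yr of recharge, i.e. within the
      # 111-216 mm/yr range reported above.
      print(cmb_recharge(580.0, 4.0, 14.0))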

  6. Hyperspectral image reconstruction for x-ray fluorescence tomography

    DOE PAGES

    Gürsoy, Doǧa; Biçer, Tekin; Lanzirotti, Antonio; ...

    2015-01-01

    A penalized maximum-likelihood estimation is proposed to perform hyperspectral (spatio-spectral) image reconstruction for X-ray fluorescence tomography. The approach minimizes a Poisson-based negative log-likelihood of the observed photon counts, and uses a penalty term that encourages local continuity of model parameter estimates in both the spatial and spectral dimensions simultaneously. The performance of the reconstruction method is demonstrated with experimental data acquired from a seed of Arabidopsis thaliana collected at the 13-ID-E microprobe beamline at the Advanced Photon Source. The resulting element distribution estimates with the proposed approach show significantly better reconstruction quality than conventional analytical inversion approaches, and allow for a high data compression factor, which can reduce data acquisition times remarkably. In particular, this technique provides the capability to tomographically reconstruct full energy-dispersive spectra without the reconstruction artifacts that impact the interpretation of results.
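    A minimal sketch of such an objective, under our own assumptions about array shapes and penalty form (squared neighbour differences along every axis; the paper's exact penalty and forward projector are not reproduced here):

      import numpy as np

      def penalized_nll(x, counts, forward, beta):
          """Poisson negative log-likelihood with a quadratic smoothness penalty.

          x       : hyperspectral estimate, shape (nz, ny, nx) (spectral, spatial, spatial)
          counts  : observed photon counts for each measurement
          forward : function mapping x to expected counts (the tomographic projector)
          beta    : penalty weight; differences along each axis penalize jumps in
                    both the spatial and spectral dimensions simultaneously.
          """
          lam = np.clip(forward(x), 1e-12, None)       # expected counts, kept positive
          nll = np.sum(lam - counts * np.log(lam))     # Poisson data term (up to a constant)
          penalty = sum(np.sum(np.diff(x, axis=a) ** 2) for a in range(x.ndim))
          return nll + beta * penalty

      # Toy demo with a trivial identity "projector" on a 2x4x4 volume:
      rng = np.random.default_rng(0)
      truth = rng.poisson(20.0, size=(2, 4, 4)).astype(float)
      print(penalized_nll(truth, truth, lambda v: v, beta=0.1))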

  7. The StreamCat Dataset: Accumulated Attributes for NHDPlusV2 Catchments (Version 2.1) for the Conterminous United States: Base Flow Index

    EPA Pesticide Factsheets

    This dataset represents the base flow index values within individual, local NHDPlusV2 catchments and upstream, contributing watersheds. Attributes of the landscape layer were calculated for every local NHDPlusV2 catchment and accumulated to provide watershed-level metrics. (See Supplementary Info for Glossary of Terms) The base-flow index (BFI) grid for the conterminous United States was developed to estimate (1) BFI values for ungaged streams, and (2) ground-water recharge throughout the conterminous United States (see Source_Information). Estimates of BFI values at ungaged streams and BFI-based ground-water recharge estimates are useful for interpreting relations between land use and water quality in surface and ground water. The BFI (%) was summarized by local catchment and by watershed to produce local catchment-level and watershed-level metrics as a continuous data type (see Data Structure and Attribute Information for a description).

  8. Toward quantitative forecasts of volcanic ash dispersal: Using satellite retrievals for optimal estimation of source terms

    NASA Astrophysics Data System (ADS)

    Zidikheri, Meelis J.; Lucas, Christopher; Potts, Rodney J.

    2017-08-01

    Airborne volcanic ash is a hazard to aviation. There is an increasing demand for quantitative forecasts of ash properties such as ash mass load to allow airline operators to better manage the risks of flying through airspace likely to be contaminated by ash. In this paper we show how satellite-derived mass load information at times prior to the issuance of the latest forecast can be used to estimate various model parameters that are not easily obtained by other means such as the distribution of mass of the ash column at the volcano. This in turn leads to better forecasts of ash mass load. We demonstrate the efficacy of this approach using several case studies.

  9. Emissions of microplastic fibers from microfiber fleece during domestic washing.

    PubMed

    Pirc, U; Vidmar, M; Mozer, A; Kržan, A

    2016-11-01

    Microplastics are found in marine and freshwater environments; however, their specific sources are not yet well understood. Understanding sources will be of key importance in efforts to reduce emissions into the environment. We examined the emissions of microfibers from domestic washing of a new microfiber polyester fleece textile. Analysis of released fibers collected with a 200 μm filter during 10 mild, successive washing cycles showed that emission initially decreased and then stabilized at approx. 0.0012 wt%. This value is our estimate of the long-term release of fibers during each washing. Use of detergent and softener did not significantly influence emission. Release of fibers during tumble drying was approx. 3.5 times higher than during washing.

  10. Satellite lidar and radar: Key components of the future climate observing system

    NASA Astrophysics Data System (ADS)

    Winker, D. M.

    2017-12-01

    Cloud feedbacks represent the dominant source of uncertainty in estimates of climate sensitivity, and aerosols represent the largest source of uncertainty in climate forcing. Both observation of long-term changes and observational constraints on the processes responsible for those changes are necessary. The existing 30-year record of passive satellite observations, however, has not yet provided constraints that significantly reduce these uncertainties. We now have more than a decade of experience with active sensors flying in the A-Train. These new observations have demonstrated the strengths of active sensors and the benefits of continued and more advanced active sensors. This talk will discuss the multiple roles for active sensors as an essential component of a global climate observing system.

  11. A comprehensive experimental characterization of the iPIX gamma imager

    NASA Astrophysics Data System (ADS)

    Amgarou, K.; Paradiso, V.; Patoz, A.; Bonnet, F.; Handley, J.; Couturier, P.; Becker, F.; Menaa, N.

    2016-08-01

    The results of more than 280 different experiments aimed at exploring the main features and performance of a newly developed gamma imager, called iPIX, are summarized in this paper. iPIX is designed to quickly localize radioactive sources while estimating the ambient dose equivalent rate at the measurement point. It integrates a 1 mm thick CdTe detector directly bump-bonded to a Timepix chip, a tungsten coded-aperture mask, and a mini RGB camera. It also represents a major technological breakthrough in terms of lightness, compactness, usability, response sensitivity, and angular resolution. As an example of its key strengths, an 241Am source with a dose rate of only a few nSv/h can be localized in less than one minute.

  12. Estimates of Power Plant NOx Emissions and Lifetimes from OMI NO2 Satellite Retrievals

    NASA Technical Reports Server (NTRS)

    de Foy, Benjamin; Lu, Zifeng; Streets, David G.; Lamsal, Lok N.; Duncan, Bryan N.

    2015-01-01

    Isolated power plants with well characterized emissions serve as an ideal test case of methods to estimate emissions using satellite data. In this study we evaluate the Exponentially-Modified Gaussian (EMG) method and the box model method based on mass balance for estimating known NOx emissions from satellite retrievals made by the Ozone Monitoring Instrument (OMI). We consider 29 power plants in the USA which have large NOx plumes that do not overlap with other sources and which have emissions data from the Continuous Emission Monitoring System (CEMS). This enables us to identify constraints required by the methods, such as which wind data to use and how to calculate background values. We found that the lifetimes estimated by the methods are too short to be representative of the chemical lifetime. Instead, we introduce a separate lifetime parameter to account for the discrepancy between estimates using real data and those that theory would predict. In terms of emissions, the EMG method required averages from multiple years to give accurate results, whereas the box model method gave accurate results for individual ozone seasons.
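    The EMG approach evaluated above is commonly formulated (following Beirle et al., 2011) as a Gaussian source convolved with a downwind exponential decay. The sketch below is our illustrative rendering of that line-density model; the paper's exact parameterization may differ, and all names are ours.

      import numpy as np
      from scipy.special import erfc

      def emg_line_density(x, a, x0, mu, sigma, b):
          """Exponentially modified Gaussian model of an along-wind line density.

          a  : total plume burden        x0    : e-folding decay length
          mu : Gaussian source location  sigma : Gaussian source width
          b  : background level
          """
          shape = (1.0 / (2.0 * x0)) * np.exp(
              mu / x0 + sigma**2 / (2.0 * x0**2) - x / x0
          ) * erfc((mu + sigma**2 / x0 - x) / (np.sqrt(2.0) * sigma))
          return a * shape + b

      # With wind speed u, the fit yields an effective lifetime tau = x0 / u
      # and an emission rate E = a / tau, as discussed in the abstract above.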

  13. A review of global terrestrial evapotranspiration: Observation, modeling, climatology, and climatic variability

    NASA Astrophysics Data System (ADS)

    Wang, Kaicun; Dickinson, Robert E.

    2012-06-01

    This review surveys the basic theories, observational methods, satellite algorithms, and land surface models for terrestrial evapotranspiration, E (or λE, i.e., latent heat flux), including a perspective on long-term variability and trends. The basic theories used to estimate E are the Monin-Obukhov similarity theory (MOST), the Bowen ratio method, and the Penman-Monteith equation. The latter two theoretical expressions combine MOST with the surface energy balance. Estimates of E can differ substantially between these three approaches because of their use of different input data. Surface and satellite-based measurement systems can provide accurate estimates of the diurnal, daily, and annual variability of E, but their estimation of variability on longer time scales is largely not established. A reasonable estimate of E as a global mean can be obtained from a surface water budget method, but its regional distribution is still rather uncertain. Current land surface models provide widely differing ratios of the transpiration by vegetation to total E. This source of uncertainty therefore limits the capability of models to provide the sensitivities of E to precipitation deficits and land cover change.
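    For reference, the Penman-Monteith combination equation mentioned above is commonly written (in its standard form; the review's exact notation may differ) as

      \lambda E = \frac{\Delta\,(R_n - G) + \rho_a c_p\,(e_s - e_a)/r_a}{\Delta + \gamma\left(1 + r_s/r_a\right)}

    where Δ is the slope of the saturation vapour pressure curve, R_n the net radiation, G the ground heat flux, ρ_a the air density, c_p the specific heat of air, (e_s − e_a) the vapour pressure deficit, r_a the aerodynamic resistance, r_s the surface resistance, and γ the psychrometric constant.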

  14. Human-Induced Long-Term Shifts in Gull Diet from Marine to Terrestrial Sources in North America's Coastal Pacific: More Evidence from More Isotopes (δ2H, δ34S).

    PubMed

    Hobson, Keith A; Blight, Louise K; Arcese, Peter

    2015-09-15

    Measurements of naturally occurring stable isotopes in the tissues of seabirds and their prey are a powerful tool for investigating long-term changes in marine food webs. Recent isotopic (δ15N, δ13C) evidence from feathers of Glaucous-winged Gulls (Larus glaucescens) has shown that over the last 150 years this species shifted from a midtrophic marine diet to one including lower-trophic marine prey and/or more terrestrial or freshwater foods. However, long-term isotopic patterns of δ15N and δ13C cannot distinguish the relative importance of lower-trophic-level marine foods from that of terrestrial sources. We examined 48 feather stable-hydrogen (δ2H) and stable-sulfur (δ34S) isotope values from this same 150-year feather set and found additional isotopic evidence supporting the hypothesis that gulls shifted to terrestrial and/or freshwater prey. Mean feather δ2H and δ34S values (± SD) declined from -2.5 ± 21.4 ‰ and 18.9 ± 2.7 ‰, respectively, in the earliest period (1860-1915; n = 12) to -35.5 ± 15.5 ‰ and 14.8 ± 2.4 ‰, respectively, for the period 1980-2009 (n = 12). We estimated an increase of ∼30% in dependence on terrestrial/freshwater sources. These results are consistent with the hypothesis that gulls increased terrestrial food inputs in response to declining forage fish availability.

  15. Estimation of daily PM10 concentrations in Italy (2006-2012) using finely resolved satellite data, land use variables and meteorology.

    PubMed

    Stafoggia, Massimo; Schwartz, Joel; Badaloni, Chiara; Bellander, Tom; Alessandrini, Ester; Cattani, Giorgio; De' Donato, Francesca; Gaeta, Alessandra; Leone, Gianluca; Lyapustin, Alexei; Sorek-Hamer, Meytar; de Hoogh, Kees; Di, Qian; Forastiere, Francesco; Kloog, Itai

    2017-02-01

    Health effects of air pollution, especially particulate matter (PM), have been widely investigated. However, most studies rely on a few monitors located in urban areas for short-term assessments, or on land-use/dispersion modelling for long-term evaluations, again mostly in cities. Recently, the availability of finely resolved satellite data has provided an opportunity to estimate daily concentrations of air pollutants over wide spatio-temporal domains. Italy lacks a robust, validated, highly resolved spatio-temporal model of particulate matter. The complex topography and the mixture of natural and anthropogenic sources are challenges that are difficult to address. We combined finely resolved data on Aerosol Optical Depth (AOD) from the Multi-Angle Implementation of Atmospheric Correction (MAIAC) algorithm, ground-level PM10 measurements, land-use variables and meteorological parameters in a four-stage mixed-model framework to derive estimates of daily PM10 concentrations on a 1-km2 grid over Italy for the years 2006-2012. We checked the performance of our models by applying 10-fold cross-validation (CV) for each year. Our models displayed good fit, with mean CV-R2 = 0.65 and little bias (average slope of predicted vs. observed PM10 = 0.99). Out-of-sample predictions were more accurate in Northern Italy (Po valley) and large conurbations (e.g. Rome), for background monitoring stations, and in the winter season. The resulting concentration maps showed the highest average PM10 levels in specific areas (Po river valley, main industrial and metropolitan areas), with decreasing trends over time. Our daily predictions of PM10 concentrations across the whole of Italy will allow, for the first time, estimation of the long-term and short-term effects of air pollution nationwide, even in areas lacking monitoring data.
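    The first calibration stage of such frameworks is often a linear mixed model of monitor PM10 on AOD with day-level random effects. The sketch below is a generic illustration of that one stage only, on synthetic data with illustrative column names; it is not the paper's four-stage model.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)
      n, n_days = 400, 40
      df = pd.DataFrame({
          "day": rng.integers(0, n_days, n),
          "aod": rng.gamma(2.0, 0.1, n),
          "temperature": rng.normal(12, 8, n),
      })
      # Day-varying AOD-PM10 relationship, the reason for day-level random slopes.
      day_slope = rng.normal(60, 10, n_days)
      df["pm10"] = (15 + day_slope[df["day"]] * df["aod"]
                    + 0.2 * df["temperature"] + rng.normal(0, 5, n))

      # Random intercept and AOD slope per day, fixed effects for meteorology.
      model = smf.mixedlm("pm10 ~ aod + temperature", df,
                          groups="day", re_formula="~aod")
      print(model.fit().summary())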

  16. Sediment Transport Variability in Global Rivers: Implications for the Interpretation of Paleoclimate Signals

    NASA Astrophysics Data System (ADS)

    Syvitski, J. P.; Hutton, E. W.

    2001-12-01

    A new numerical approach (HydroTrend, v.2) allows the daily flux of sediment to be estimated for any river, whether gauged or not. The model can be driven by actual climate measurements (precipitation, temperature) or by statistical estimates of climate (modeled climate, remotely sensed climate). In both cases, the character (e.g. soil depth, relief, vegetation index) of the drainage terrain is needed to complete the model domain. The HydroTrend approach allows us to examine the effects of climate on the supply of sediment to continental margins, and the nature of supply variability. A new relationship is defined as Qs = Ψ(s) Qs-bar (Q/Q-bar)^(c ± σ), where Qs-bar is the long-term sediment load, Q-bar is the long-term discharge, c and σ are the mean and standard deviation of the inter-annual variability of the rating exponent, and Ψ captures the measurement errors associated with Q and Qs and the annual transients affecting the supply of sediment, including sediment and water sources and river (flood wave) dynamics. Smaller-discharge rivers have larger values of s, and s asymptotes to a small but consistent value for larger-discharge rivers. The coefficient c is directly proportional to the long-term suspended load (Qs-bar) and basin relief (R), and inversely proportional to mean annual temperature (T); σ is directly proportional to the mean annual discharge. The long-term sediment load is given by Qs-bar = a R^1.5 A^0.5 TT, where a is a global constant, A is basin area, and TT is a function of mean annual temperature. This new approach provides estimates of sediment flux at the dynamic (daily) level and provides a means to experiment on the sensitivity of marine sedimentary deposits in recording a paleoclimate signal. In addition, the method provides spatial estimates of the flux of sediment to the coastal zone at the global scale.
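    A minimal sketch of the rating relationship quoted above (all parameter values, and the lognormal form chosen for Ψ, are illustrative assumptions rather than values from the paper): daily loads scale as a power of normalized discharge, with the exponent drawn around c and multiplicative noise of spread s.

      import numpy as np

      def daily_sediment_load(q_daily, qs_bar, c=1.4, sigma=0.1, s=0.2, seed=0):
          """Sample daily loads from Qs = Psi * Qs_bar * (Q/Q_bar)**C."""
          rng = np.random.default_rng(seed)
          q_daily = np.asarray(q_daily, float)
          q_bar = q_daily.mean()
          C = rng.normal(c, sigma)                        # this year's rating exponent
          psi = rng.lognormal(0.0, s, size=q_daily.size)  # measurement/transient noise
          return psi * qs_bar * (q_daily / q_bar) ** C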

  17. The impact of fish and the commercial marine harvest on the ocean iron cycle.

    PubMed

    Moreno, Allison R; Haffa, Arlene L M

    2014-01-01

    Although iron is the fourth most abundant element in the Earth's crust, bioavailable iron limits marine primary production in about one third of the ocean. This lack of iron availability has implications in climate change because the removal of carbon dioxide from the atmosphere by phytoplankton requires iron. Using literature values for global fish biomass estimates, and elemental composition data, we estimate that fish biota store between 0.7-7 × 10^11 g of iron. Additionally, the global fish population recycles through excretion between 0.4-1.5 × 10^12 g of iron per year, which is of a similar magnitude as major recognized sources of iron (e.g. dust, sediments, ice sheet melting). In terms of biological impact this iron could be superior to dust inputs due to the distributed deposition and to the greater solubility of fecal pellets compared to inorganic minerals. To estimate a loss term due to anthropogenic activity, the total commercial catch for 1950 to 2010 was obtained from the Food and Agriculture Organization of the United Nations. Marine catch data were separated by taxa. High and low end values for elemental composition were obtained for each taxonomic category from the literature and used to calculate iron per mass of total harvest over time. The marine commercial catch is estimated to have removed 1-6 × 10^9 g of iron in 1950, the lowest values on record. There is an annual increase to 0.7-3 × 10^10 g in 1996, which declines to 0.6-2 × 10^10 g in 2010. While small compared to the total iron terms in the cycle, these could have compounding effects on distribution and concentration patterns globally over time. These storage, recycling, and export terms of biotic iron are not currently included in ocean iron mass balance calculations. These data suggest that fish and anthropogenic activity should be included in global oceanic iron cycles.

  18. Nutation determination using the Global Positioning System

    NASA Astrophysics Data System (ADS)

    Yao, Kunliang; Capitaine, Nicole; Umnig, Elke; Weber, Robert

    2012-08-01

    VLBI observation of extragalactic radio sources is the only technique that allows high-accuracy determination of nutation on a regular basis. However, this is limited to periods of nutation greater than about 30 days due to the current resolution of VLBI estimation. It is therefore important to use another technique to improve nutation at shorter periods. It has been shown by Rothacher et al. (1999) and Weber & Rothacher (2001) that GPS is a potential technique for the determination of the short-period terms of nutation. The method, which is based on the estimation of nutation rates with respect to an a priori model, is limited to nutation terms in the higher frequency range (with periods up to about 21 days) due to deficiencies in the modeling of the satellite orbits. The high accuracy and high time resolution now achieved by GPS observations give us the possibility of estimating the nutation variations with respect to the IAU2000A nutation, with an expected precision of 10 microarcseconds (μas). The purpose of our study is to use recent GPS observations obtained by 140 IGS stations (IGS08 Core Reference Frame sites included) to estimate the short-period nutations. Two methods are applied: one is to investigate the retrograde diurnal term of polar motion with nutation fixed to the IAU 2006/2000 precession-nutation, using the CNES/GRGS software GINS/DYNAMO at Observatoire de Paris; the other is to investigate the nutation time derivative, with polar motion fixed, using the Bernese GPS software at the Vienna University of Technology. In this poster, we report on our preliminary results with a data set covering a period of 3 years (2009-2011), with appropriate time resolutions, and on the comparison between the two approaches.

  19. The impacts of non-renewable and renewable energy on CO2 emissions in Turkey.

    PubMed

    Bulut, Umit

    2017-06-01

    As a result of great increases in CO2 emissions over the last few decades, many papers in the energy economics literature have examined the relationship between renewable energy and CO2 emissions, because, as a clean energy source, renewable energy can reduce CO2 emissions and solve the environmental problems stemming from their increase. These papers, however, employ fixed-parameter estimation methods and ignore the time-varying effects of non-renewable and renewable energy consumption/production on greenhouse gas emissions. To fill this gap in the literature, this paper examines the effects of non-renewable and renewable energy on CO2 emissions in Turkey over the period 1970-2013 by employing both fixed-parameter and time-varying-parameter estimation methods. The estimations reveal that CO2 emissions are positively related to both non-renewable and renewable energy in Turkey. Since policy makers expect renewable energy to decrease CO2 emissions, this paper argues that renewable energy has not been able to satisfy those expectations, even though fewer CO2 emissions arise from producing electricity from renewable sources. In conclusion, the paper argues that policy makers should implement long-term energy policies in Turkey.

  20. Measuring the scale dependence of intrinsic alignments using multiple shear estimates

    NASA Astrophysics Data System (ADS)

    Leonard, C. Danielle; Mandelbaum, Rachel

    2018-06-01

    We present a new method for measuring the scale dependence of the intrinsic alignment (IA) contamination to the galaxy-galaxy lensing signal, which takes advantage of multiple shear estimation methods applied to the same source galaxy sample. By exploiting the resulting correlation of both shape noise and cosmic variance, our method can provide an increase in the signal-to-noise of the measured IA signal as compared to methods which rely on the difference of the lensing signal from multiple photometric redshift bins. For a galaxy-galaxy lensing measurement which uses LSST sources and DESI lenses, the signal-to-noise on the IA signal from our method is predicted to improve by a factor of ˜2 relative to the method of Blazek et al. (2012), for pairs of shear estimates which yield substantially different measured IA amplitudes and highly correlated shape noise terms. We show that statistical error necessarily dominates the measurement of intrinsic alignments using our method. We also consider a physically motivated extension of the Blazek et al. (2012) method which assumes that all nearby galaxy pairs, rather than only excess pairs, are subject to IA. In this case, the signal-to-noise of the method of Blazek et al. (2012) is improved.

  1. Reliable video transmission over fading channels via channel state estimation

    NASA Astrophysics Data System (ADS)

    Kumwilaisak, Wuttipong; Kim, JongWon; Kuo, C.-C. Jay

    2000-04-01

    Transmission of continuous media such as video over time-varying wireless communication channels can benefit from the use of adaptation techniques in both source and channel coding. An adaptive feedback-based wireless video transmission scheme is investigated in this research, with special emphasis on feedback-based adaptation. To be more specific, an interactive adaptive transmission scheme is developed by letting the receiver estimate the channel state information and send it back to the transmitter. By utilizing the feedback information, the transmitter is capable of adapting the level of protection by changing the flexible RCPC (rate-compatible punctured convolutional) code ratio depending on the instantaneous channel condition. The wireless channel is modeled as a fading channel, where the long-term and short-term fading effects are modeled as log-normal fading and Rayleigh flat fading, respectively. Then, its state (mainly the long-term fading portion) is tracked and predicted by using an adaptive LMS (least mean squares) algorithm. By utilizing the delayed feedback on the channel condition, the adaptation performance of the proposed scheme is first evaluated in terms of the error probability and the throughput. It is then extended to incorporate variable-size packets of ITU-T H.263+ video with the error resilience option. Finally, the end-to-end performance of wireless video transmission is compared against several non-adaptive protection schemes.
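    As a minimal sketch of the adaptive tracking idea described above, the snippet below implements a generic normalized-LMS one-step predictor of the slow (shadowing) fading component; all parameter values, and the normalized variant itself, are our illustrative choices rather than details from the paper.

      import numpy as np

      def nlms_predict(x, order=4, mu=0.5, eps=1e-8):
          """One-step prediction of a slowly varying channel state with normalized LMS."""
          w = np.zeros(order)
          preds = np.zeros_like(x, dtype=float)
          for n in range(order, len(x)):
              u = x[n - order:n][::-1]          # most recent samples first
              preds[n] = w @ u                  # predicted long-term fading sample
              e = x[n] - preds[n]               # prediction error (from delayed feedback)
              w += mu * e * u / (u @ u + eps)   # normalized LMS weight update
          return preds

      # Illustrative use: track a slowly drifting log-normal shadowing process (in dB).
      rng = np.random.default_rng(0)
      shadow_db = np.cumsum(rng.normal(0, 0.1, 2000))
      pred = nlms_predict(shadow_db)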

  2. Constraining the Long-Term Average of Earthquake Recurrence Intervals From Paleo- and Historic Earthquakes by Assimilating Information From Instrumental Seismicity

    NASA Astrophysics Data System (ADS)

    Zoeller, G.

    2017-12-01

    Paleo- and historic earthquakes are the most important source of information for the estimation of long-term recurrence intervals in fault zones, because sequences of paleoearthquakes cover more than one seismic cycle. On the other hand, these events are often rare, dating uncertainties are enormous, and the problem of missing or misinterpreted events creates additional difficulties. Given these shortcomings, long-term recurrence interval estimates are usually unstable unless additional information is included. In the present study, we assume that the time to the next major earthquake depends on the rate of small and intermediate events between the large ones, in terms of a "clock-change" model that leads to a Brownian Passage Time distribution for recurrence intervals. We take advantage of an earlier finding that the aperiodicity of this distribution can be related to the Gutenberg-Richter b-value, which is usually around one and can be estimated easily from instrumental seismicity in the region under consideration. This allows the uncertainties in the estimation of the mean recurrence interval to be reduced significantly, especially for short paleoearthquake sequences and high dating uncertainties. We present illustrative case studies from Southern California and compare the method with the commonly used approach of exponentially distributed recurrence times assuming a stationary Poisson process.
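    For concreteness, the Brownian Passage Time (inverse Gaussian) density underlying the clock-change model has a closed form; the sketch below evaluates it, with the aperiodicity alpha playing the role the study ties to the b-value. The parameter values in the demo are illustrative.

      import numpy as np

      def bpt_pdf(t, mu, alpha):
          """Brownian Passage Time density with mean mu and aperiodicity alpha."""
          t = np.asarray(t, float)
          return np.sqrt(mu / (2.0 * np.pi * alpha**2 * t**3)) * np.exp(
              -((t - mu) ** 2) / (2.0 * mu * alpha**2 * t)
          )

      # e.g. a 300-year mean recurrence interval with alpha = 0.5:
      print(bpt_pdf(np.array([150.0, 300.0, 600.0]), mu=300.0, alpha=0.5))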

  3. A framework for estimating the determinants of spatial and temporal variation in vital rates and inferring the occurrence of unobserved extreme events

    PubMed Central

    Vincenzi, Simone; Jesenšek, Dušan; Crivelli, Alain J.

    2018-01-01

    We develop a general framework that combines long-term tag–recapture data and powerful statistical and modelling techniques to investigate how population, environmental and climate factors determine variation in vital rates and population dynamics in an animal species, using as a case study the population of brown trout living in Upper Volaja (Western Slovenia). This population has been monitored since 2004. Upper Volaja is a sink, receiving individuals from a source population living above a waterfall. We estimate the numerical contribution of the source population on the sink population and test the effects of temperature, population density and extreme events on variation in vital rates among 2647 individually tagged brown trout. We found that individuals dispersing downstream from the source population help maintain high population densities in the sink population despite poor recruitment. The best model of survival for individuals older than juveniles includes additive effects of birth cohort and sampling occasion. Fast growth of older cohorts and higher population densities in 2004–2005 suggest very low population densities in the late 1990s, which we hypothesize were caused by a flash flood that strongly reduced population size and created the habitat conditions for faster individual growth and transient higher population densities after the extreme event. PMID:29657746

  4. Mercury in the Gulf of Mexico: sources to receptors.

    PubMed

    Harris, Reed; Pollman, Curtis; Landing, William; Evans, David; Axelrad, Donald; Hutchinson, David; Morey, Steven L; Rumbold, Darren; Dukhovskoy, Dmitry; Adams, Douglas H; Vijayaraghavan, Krish; Holmes, Christopher; Atkinson, R Dwight; Myers, Tom; Sunderland, Elsie

    2012-11-01

    Gulf of Mexico (Gulf) fisheries account for 41% of the U.S. marine recreational fish catch and 16% of the nation's marine commercial fish landings. Mercury (Hg) concentrations are elevated in some fish species in the Gulf, including king mackerel, sharks, and tilefish. All five Gulf states have fish consumption advisories based on Hg. Per-capita fish consumption in the Gulf region is elevated compared to the U.S. national average, and recreational fishers in the region have a potential for greater MeHg exposure due to higher levels of fish consumption. Atmospheric wet Hg deposition is estimated to be higher in the Gulf region compared to most other areas in the U.S., but the largest source of Hg to the Gulf as a whole is the Atlantic Ocean (>90%) via large flows associated with the Loop Current. Redistribution of atmospheric, Atlantic and terrestrial Hg inputs to the Gulf occurs via large scale water circulation patterns, and further work is needed to refine estimates of the relative importance of these Hg sources in terms of contributing to fish Hg levels in different regions of the Gulf. Measurements are needed to better quantify external loads, in-situ concentrations, and fluxes of total Hg and methylmercury in the water column, sediments, and food web.

  5. A framework for estimating the determinants of spatial and temporal variation in vital rates and inferring the occurrence of unobserved extreme events.

    PubMed

    Vincenzi, Simone; Jesenšek, Dušan; Crivelli, Alain J

    2018-03-01

    We develop a general framework that combines long-term tag-recapture data and powerful statistical and modelling techniques to investigate how population, environmental and climate factors determine variation in vital rates and population dynamics in an animal species, using as a case study the population of brown trout living in Upper Volaja (Western Slovenia). This population has been monitored since 2004. Upper Volaja is a sink, receiving individuals from a source population living above a waterfall. We estimate the numerical contribution of the source population on the sink population and test the effects of temperature, population density and extreme events on variation in vital rates among 2647 individually tagged brown trout. We found that individuals dispersing downstream from the source population help maintain high population densities in the sink population despite poor recruitment. The best model of survival for individuals older than juveniles includes additive effects of birth cohort and sampling occasion. Fast growth of older cohorts and higher population densities in 2004-2005 suggest very low population densities in the late 1990s, which we hypothesize were caused by a flash flood that strongly reduced population size and created the habitat conditions for faster individual growth and transient higher population densities after the extreme event.

  6. A new modeling approach for assessing the contribution of industrial and traffic emissions to ambient NOx concentrations

    NASA Astrophysics Data System (ADS)

    Chen, Shimon; Yuval; Broday, David M.

    2018-01-01

    The Optimized Dispersion Model (ODM) is uniquely capable of incorporating emission estimates, ambient air quality monitoring data and meteorology to provide reliable, high-resolution (in both time and space) air quality estimates using non-linear regression. However, it was previously not capable of describing the effects of emissions from elevated sources. We formulated an additional term to extend the ODM such that these sources can be accounted for, and implemented it in modeling the fine spatiotemporal patterns of ambient NOx concentrations over the coastal plain of Israel. The diurnal and seasonal variation in the contribution of industry to ambient NOx is presented, as well as its spatial features. Although industrial stacks are responsible for 88% of the NOx emissions in the study area, their contribution to ambient NOx levels is generally about 2%, with a maximal upper bound of 27%. Meteorology has a major role in this source allocation, with the highest impact of industry in the summer months, when the wind is blowing inland past the coastal stacks and vertical mixing is substantial. The new ODM outperforms both Inverse-Distance-Weighting (IDW) interpolation and a previous ODM version in predicting ambient NOx concentrations. The performance of the new model is thoroughly assessed.

  7. A quantitative approach to combine sources in stable isotope mixing models

    EPA Science Inventory

    Stable isotope mixing models, used to estimate source contributions to a mixture, typically yield highly uncertain estimates when there are many sources and relatively few isotope elements. Previously, ecologists have either accepted the uncertain contribution estimates for indiv...
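    The arithmetic behind such mixing models is a small linear system: with n isotope elements one can solve exactly for at most n+1 source contributions, which is why estimates become highly uncertain when sources outnumber that limit and why combining sources helps. A minimal sketch of the fully determined case follows; all isotope signatures below are illustrative.

      import numpy as np

      def source_fractions(source_signatures, mixture_signature):
          """source_signatures: (n_isotopes, n_sources); mixture_signature: (n_isotopes,).
          Solves the determined case n_sources = n_isotopes + 1; the extra row
          enforces that the fractional contributions sum to 1."""
          A = np.vstack([source_signatures, np.ones(source_signatures.shape[1])])
          b = np.append(mixture_signature, 1.0)
          return np.linalg.solve(A, b)

      sources = np.array([[-26.0, -12.0, -20.0],    # d13C of three sources
                          [  4.0,  10.0,   7.0]])   # d15N of the same sources
      mix = np.array([-18.8, 7.3])
      print(source_fractions(sources, mix))         # -> [0.2, 0.3, 0.5]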

  8. Estimates of Radiation Effects on Cancer Risks in the Mayak Worker, Techa River and Atomic Bomb Survivor Studies.

    PubMed

    Preston, Dale L; Sokolnikov, Mikhail E; Krestinina, Lyudmila Yu; Stram, Daniel O

    2017-04-01

    For almost 50 years, the Life Span Study cohort of atomic bomb survivors has been the primary source of the quantitative estimates of cancer and non-cancer risks that form the basis of international radiation protection standards. However, the long-term follow-up and extensive individual dose reconstruction for the Russian Mayak worker cohort (MWC) and Techa River cohort (TRC) are providing quantitative information about radiation effects on cancer risks that complements the atomic bomb survivor-based risk estimates. The MWC, which includes ~26 000 men and women who began working at Mayak between 1948 and 1982, is the primary source for estimates of the effects of plutonium on cancer risks and also provides information on the effects of low-dose-rate external gamma exposures. The TRC consists of ~30 000 men and women of all ages who received low-dose-rate, low-dose exposures as a consequence of Mayak's release of radioactive material into the Techa River. The TRC data are of interest because the exposures are broadly similar to those experienced by populations exposed as a consequence of nuclear accidents such as Chernobyl. This presentation describes the strengths and limitations of these three cohorts, outlines and compares recent solid cancer and leukemia risk estimates, and discusses why information from the Mayak and Techa River studies might play a role in the development and refinement of the radiation risk estimates that form the basis for radiation protection standards.

  9. Closed-Form 3-D Localization for Single Source in Uniform Circular Array with a Center Sensor

    NASA Astrophysics Data System (ADS)

    Bae, Eun-Hyon; Lee, Kyun-Kyung

    A novel closed-form algorithm is presented for estimating the 3-D location (azimuth angle, elevation angle, and range) of a single source in a uniform circular array (UCA) with a center sensor. Based on the centrosymmetry of the UCA and noncircularity of the source, the proposed algorithm decouples and estimates the 2-D direction of arrival (DOA), i.e. azimuth and elevation angles, and then estimates the range of the source. Notwithstanding a low computational complexity, the proposed algorithm provides an estimation performance close to that of the benchmark estimator 3-D MUSIC.

  10. Historical (1850-2000) gridded anthropogenic and biomass burning emissions of reactive gases and aerosols: methodology and application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lamarque, J. F.; Bond, Tami C.; Eyring, Veronika

    2010-08-11

    We present and discuss a new dataset of gridded emissions covering the historical period (1850-2000) in decadal increments at a horizontal resolution of 0.5° in latitude and longitude. The primary purpose of this inventory is to provide consistent gridded emissions of reactive gases and aerosols for use in chemistry model simulations needed by climate models for the Climate Model Intercomparison Program #5 (CMIP5) in support of the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment report. Our best estimate for the year 2000 inventory represents a combination of existing regional and global inventories to capture the best information available at this point; 40 regions and 12 sectors were used to combine the various sources. The historical reconstruction of each emitted compound, for each region and sector, was then forced to agree with our 2000 estimate, ensuring continuity between past and 2000 emissions. Application of these emissions in two chemistry-climate models is used to test their ability to capture long-term changes in atmospheric ozone, carbon monoxide and aerosol distributions. The simulated long-term change in Northern mid-latitude surface and mid-tropospheric ozone is not quite as rapid as observed. However, stations outside this latitude band show much better agreement in both the present-day values and the long-term trend. The model simulations consistently underestimate the carbon monoxide trend, while capturing the long-term trend at the Mace Head station. The simulated sulfate and black carbon deposition over Greenland is in very good agreement with the ice-core observations spanning the simulation period. Finally, aerosol optical depth and additional aerosol diagnostics are shown to be in good agreement with previously published estimates.

  11. Interaction between Chronic Obstructive Pulmonary Disease (COPD) and other important health conditions and measurable air pollution

    NASA Astrophysics Data System (ADS)

    Blagev, D. P.; Mendoza, D. L.; Rea, S.; Sorensen, J.

    2015-12-01

    Adverse health effects have been associated with urban pollutant exposure arising from close proximity to highly emitting sources and atmospheric mixing. The relative dose and time effects of air pollution exposure on various diseases remain unknown. This study compares the increased risk of health complications when patients are exposed to short-term high levels of air pollution vs. longer-term exposure to lower levels of air pollution. We used the electronic medical record of an integrated hospital system based in Utah, Intermountain Healthcare, to identify a cohort of patients with Chronic Obstructive Pulmonary Disease (COPD) who were seen between 2009 and 2014. We determined patient demographics as well as comorbidity data and healthcare utilization. To determine the approximate air pollution dose and time exposure, we used the Hestia highly resolved emissions inventory for Salt Lake County, Utah, in conjunction with emissions based on the National Emissions Inventory (NEI). Hourly emissions of CO2 and criteria air pollutants were gridded at a 0.002° x 0.002° resolution for the study years. The resulting emissions were transported using the CALPUFF and AERMOD dispersion models to estimate air pollutant concentrations at an hourly 0.002° x 0.002° resolution. Additionally, pollutant concentrations were estimated at each patient's home and work address to estimate exposure. Multivariate analysis adjusting for patient demographics, comorbidities, and severity of COPD was performed to determine the association between air pollution exposure and the risk of hospitalization or emergency department (ED) visit for COPD exacerbation, and an equivalency estimate for air pollution exposure was developed. We noted associations between air pollution levels for each pollutant and hospitalizations and ED visits for COPD and other patient comorbidities. We also present an equivalency estimate for dose of air pollution exposure and health outcomes. These findings highlight the spatial and temporal heterogeneity of pollutant emissions and exposures and the associated health effects.

  13. LOOKING FOR GRANULATION AND PERIODICITY IMPRINTS IN THE SUNSPOT TIME SERIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lopes, Ilídio; Silva, Hugo G., E-mail: ilidio.lopes@tecnico.ulisboa.pt, E-mail: hgsilva@uevora.pt

    2015-05-10

    The sunspot activity is the end result of the cyclic destruction and regeneration of magnetic fields by the dynamo action. We propose a new method to analyze the daily sunspot area data recorded since 1874. By computing the power spectral density of the daily data series using the Mexican hat wavelet, we found a power spectrum with a well-defined shape, characterized by three features. The first term is the 22 yr solar magnetic cycle, estimated in our work to be 18.43 yr. The second term is related to the daily volatility of sunspots. This term is most likely produced by the turbulent motions linked to the solar granulation. The last term corresponds to a periodic source associated with the solar magnetic activity, for which the maximum power spectral density occurs at 22.67 days. This value is part of the 22–27 day periodicity region that shows an above-average intensity in the power spectra. The origin of this 22.67 day periodic process is not clearly identified, and there is a possibility that it is produced by convective flows inside the star. The study clearly shows a north–south asymmetry. The 18.43 yr periodic source is correlated between the two hemispheres, but the 22.67 day one is not. It is shown that toward large timescales an excess occurs in the northern hemisphere, especially near the previous two periodic sources. To further investigate the 22.67 day periodicity, we performed a Lomb–Scargle spectral analysis. The study suggests that this periodicity is distinct from others found nearby.
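
    A minimal sketch of this kind of analysis, assuming a synthetic daily series and using the Mexican hat wavelet from PyWavelets (the authors' data handling and spectral pipeline are not reproduced here):

```python
# Illustrative sketch, not the authors' code: scale-wise wavelet power of a
# daily time series using the Mexican hat ('mexh') continuous wavelet.
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.arange(50 * 365)                           # ~50 years of daily samples
series = (np.sin(2 * np.pi * t / 22.67)           # short-period component
          + np.sin(2 * np.pi * t / (18.43 * 365)) # long magnetic-cycle term
          + rng.normal(0.0, 1.0, t.size))         # "granulation-like" noise

scales = np.geomspace(2, 4096, 60)
coeffs, freqs = pywt.cwt(series, scales, 'mexh', sampling_period=1.0)

# Scale-wise power: average squared coefficient, one value per scale.
power = (np.abs(coeffs) ** 2).mean(axis=1)
for f, p in zip(freqs[:5], power[:5]):
    print(f"period {1.0 / f:9.1f} d   power {p:.3g}")
```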

  14. Path spectra derived from inversion of source and site spectra for earthquakes in Southern California

    NASA Astrophysics Data System (ADS)

    Klimasewski, A.; Sahakian, V. J.; Baltay, A.; Boatwright, J.; Fletcher, J. B.; Baker, L. M.

    2017-12-01

    A large source of epistemic uncertainty in Ground Motion Prediction Equations (GMPEs) is derived from the path term, currently represented as a simple geometric spreading and intrinsic attenuation term. Including additional physical relationships between the path properties and predicted ground motions would produce more accurate and precise, region-specific GMPEs by reclassifying some of the random, aleatory uncertainty as epistemic. This study focuses on regions of Southern California, using data from the Anza network and Southern California Seismic Network to create a catalog of events of magnitude 2.5 and larger from 1998 to 2016. The catalog encompasses regions of varying geology and therefore varying path and site attenuation. Within this catalog of events, we investigate several collections of event region-to-station pairs, each of which shares similar origin locations and stations so that all events have similar paths. Compared with a simple regional GMPE, these paths consistently have high or low residuals. By working with events that have the same path, we can isolate source and site effects and focus on the remaining residual as path effects. We decompose the recordings into source and site spectra for each unique event and site in our greater Southern California regional database using the inversion method of Andrews (1986). This model represents each natural-log record spectrum as the sum of its natural-log event and site spectra, while constraining each record to a reference site or Brune source spectrum. We estimate a regional, path-specific anelastic attenuation (Q) and site attenuation (t*) from the inversion site spectra, and corner frequency from the inversion event spectra. We then compute the residuals between the observed record data and the inversion model prediction (event × site spectra). This residual is representative of path effects, likely anelastic attenuation along the path that varies from the regional median attenuation. We examine the residuals for our different sets independently to see how path terms differ between event-to-station collections. The path-specific information gained from this can inform the development of path terms for regional GMPEs, through understanding of these seismological phenomena.
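
    Per frequency, an Andrews (1986)-style decomposition is a linear least-squares problem. A hedged sketch follows; the reference-site constraint and the indexing scheme are illustrative assumptions, not the authors' code:

```python
# Each log record spectrum is modeled as log(event term) + log(site term),
# with one reference site pinned to zero to remove the event/site trade-off.
import numpy as np

def decompose(log_records, ev_idx, st_idx, n_ev, n_st, ref_site=0):
    """log_records[k] ~ event[ev_idx[k]] + site[st_idx[k]] (natural logs)."""
    log_records = np.asarray(log_records, dtype=float)
    ev_idx, st_idx = np.asarray(ev_idx), np.asarray(st_idx)
    n_rec = log_records.size
    G = np.zeros((n_rec + 1, n_ev + n_st))
    G[np.arange(n_rec), ev_idx] = 1.0
    G[np.arange(n_rec), n_ev + st_idx] = 1.0
    G[n_rec, n_ev + ref_site] = 1.0        # constraint: reference site = 0
    d = np.append(log_records, 0.0)
    m, *_ = np.linalg.lstsq(G, d, rcond=None)
    events, sites = m[:n_ev], m[n_ev:]
    resid = log_records - events[ev_idx] - sites[st_idx]  # "path" residual
    return events, sites, resid
```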

  15. Acoustic Source Bearing Estimation (ASBE) computer program development

    NASA Technical Reports Server (NTRS)

    Wiese, Michael R.

    1987-01-01

    A new bearing estimation algorithm (Acoustic Source Analysis Technique, ASAT) and an acoustic analysis computer program (Acoustic Source Bearing Estimation, ASBE), both developed by Computer Sciences Corporation for NASA Langley Research Center, are described. The ASBE program is used by the Acoustics Division/Applied Acoustics Branch and the Instrument Research Division/Electro-Mechanical Instrumentation Branch to analyze acoustic data and estimate the azimuths from which the source signals radiated. Included are the input and output from a benchmark test case.

  16. CO2 fluxes from a tropical neighborhood: sources and sinks

    NASA Astrophysics Data System (ADS)

    Velasco, E.; Roth, M.; Tan, S.; Quak, M.; Britter, R.; Norford, L.

    2011-12-01

    Cities are the main contributors to the rise of CO2 in the atmosphere. The CO2 released from the various emission sources is typically quantified by a bottom-up aggregation process that accounts for emission factors and fossil fuel consumption data. This approach does not consider the heterogeneity and variability of the urban emission sources, and error propagation can result in large uncertainties. In this context, direct measurements of CO2 fluxes that include all major and minor anthropogenic and natural sources and sinks from a specific district can be used to evaluate emission inventories. This study reports and compares CO2 fluxes measured directly using the eddy covariance method with emissions estimated from emission factors and activity data for a residential neighborhood of Singapore, a highly populated and urbanized tropical city. The flux measurements were conducted during one year. No seasonal variability was found, a consequence of the constant climate conditions of tropical locations, but a clear diurnal pattern with morning and late afternoon peaks in phase with the rush-hour traffic was observed. The magnitude of the fluxes throughout daylight hours is modulated by the urban vegetation, which is abundant in terms of biomass but not of land cover (15%). Even though the carbon uptake by vegetation is significant, it does not exceed the anthropogenic emissions, and the monitored district is a net CO2 source of 20.3 t km⁻² day⁻¹ on average. The carbon uptake by vegetation is investigated as the difference between the estimated emissions and the measured fluxes during daytime.
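
    For readers unfamiliar with the eddy covariance calculation itself, a minimal sketch follows: the turbulent flux over an averaging block is the covariance of the fluctuations of vertical wind speed and CO2 density. The data, block length, and units below are invented for illustration:

```python
# Reynolds decomposition: flux = <w'c'>, with w' and c' the fluctuations of
# vertical wind speed w and CO2 density c about their block means.
import numpy as np

def ec_flux(w, c):
    """Block-averaged eddy flux <w'c'> (e.g., mg CO2 m-2 s-1)."""
    wp = w - w.mean()       # w': fluctuation about the block mean
    cp = c - c.mean()       # c'
    return np.mean(wp * cp)

rng = np.random.default_rng(1)
w = rng.normal(0.0, 0.3, 18000)                  # 30 min at 10 Hz, m/s
c = 700 + 0.5 * w + rng.normal(0.0, 2.0, w.size) # correlated scalar, mg/m3
print(f"flux = {ec_flux(w, c):.3f}")
```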

  17. Electromagnetic fields and the public: EMF standards and estimation of risk

    NASA Astrophysics Data System (ADS)

    Grigoriev, Yury

    2010-04-01

    Mobile communications are a relatively new and additional source of electromagnetic exposure for the population. Standard daily mobile-phone use is known to increase RF-EMF (radiofrequency electromagnetic field) exposure to the brains of users of all ages, whilst mobile-phone base stations and base-station units for cordless phones can regularly increase the exposure of large numbers of the population to RF-EMF in everyday life. The need to determine appropriate standards stipulating the maximum acceptable short-term and long-term RF-EMF levels encountered by the public, and to set such levels as general guidelines, is of great importance in order to help preserve the health of the general public and of the next generation.

  18. Scoping estimates of the LDEF satellite induced radioactivity

    NASA Technical Reports Server (NTRS)

    Armstrong, Tony W.; Colborn, B. L.

    1990-01-01

    The Long Duration Exposure Facility (LDEF) satellite was recovered after almost six years in space. It was well-instrumented with ionizing radiation dosimeters, including thermoluminescent dosimeters, plastic nuclear track detectors, and a variety of metal foil samples for measuring nuclear activation products. The extensive LDEF radiation measurements provide the type of radiation environments and effects data needed to evaluate and help resolve uncertainties in present radiation models and calculational methods. A calculational program was established to aid in LDEF data interpretation and to utilize LDEF data for assessing the accuracy of current models. A summary of the calculational approach is presented. The purpose of the reported calculations is to obtain a general indication of: (1) the importance of different space radiation sources (trapped, galactic, and albedo protons, and albedo neutrons); (2) the importance of secondary particles; and (3) the spatial dependence of the radiation environments and effects expected within the spacecraft. The calculational method uses the High Energy Transport Code (HETC) to estimate the importance of different sources and secondary particles in terms of fluence, absorbed dose in tissue and silicon, and induced radioactivity as a function of depth in aluminum.

  19. Estimated water use in Mississippi, 1980

    USGS Publications Warehouse

    Callahan, J.A.

    1980-01-01

    Large quantities of good quality ground and surface water are readily available in nearly all parts of Mississippi, and there is also an abundant supply of saline water in the estuaries along the Mississippi Gulf Coast. The total estimated water use in the State in 1980 from groundwater and surface water was 3532 million gallons/day (mgd), including 662 mgd of saline water. Freshwater used from all sources in Mississippi during the period 1975 through 1980 increased from 2510 mgd to > 2870 mgd, a 14% increase. Although modest increases of freshwater use may be expected in public, self-supplied industrial, and thermoelectric supplies, large future increases in the use of freshwater may be expected primarily as a result of growth in irrigation and aquaculture. Management and protection of the quantity and quality of the available freshwater supply are often problems associated with increased use. Water use data, both temporal and spatial, are needed by the State of Mississippi to provide for intelligent, long-term management of the resources; one table gives data on the principal categories of water use, sources, and use by county. (Lantz-PTT)

  20. Nitrogen emissions, deposition, and monitoring in the Western United States

    USGS Publications Warehouse

    Fenn, M.E.; Haeuber, R.; Tonnesen, G.S.; Baron, Jill S.; Grossman-Clarke, S.; Hope, D.; Jaffe, D.A.; Copeland, S.; Geiser, L.; Rueth, H.M.; Sickman, J.O.

    2003-01-01

    Nitrogen (N) deposition in the western United States ranges from 1 to 4 kilograms (kg) per hectare (ha) per year over much of the region to as high as 30 to 90 kg per ha per year downwind of major urban and agricultural areas. Primary N emissions sources are transportation, agriculture, and industry. Emissions of N as ammonia are about 50% as great as emissions of N as nitrogen oxides. An unknown amount of N deposition to the West Coast originates from Asia. Nitrogen deposition has increased in the West because of rapid increases in urbanization, population, distance driven, and large concentrated animal feeding operations. Studies of ecological effects suggest that emissions reductions are needed to protect sensitive ecosystem components. Deposition rates are unknown for most areas in the West, although reasonable estimates are available for sites in California, the Colorado Front Range, and central Arizona. National monitoring networks provide long-term wet deposition data and, more recently, estimated dry deposition data at remote sites. However, there is little information for many areas near emissions sources.

  1. Non-Gaussian probabilistic MEG source localisation based on kernel density estimation

    PubMed Central

    Mohseni, Hamid R.; Kringelbach, Morten L.; Woolrich, Mark W.; Baker, Adam; Aziz, Tipu Z.; Probert-Smith, Penny

    2014-01-01

    There is strong evidence to suggest that data recorded from magnetoencephalography (MEG) follows a non-Gaussian distribution. However, existing standard methods for source localisation model the data using only second order statistics, and therefore use the inherent assumption of a Gaussian distribution. In this paper, we present a new general method for non-Gaussian source estimation of stationary signals for localising brain activity from MEG data. By providing a Bayesian formulation for MEG source localisation, we show that the source probability density function (pdf), which is not necessarily Gaussian, can be estimated using multivariate kernel density estimators. In the case of Gaussian data, the solution of the method is equivalent to that of widely used linearly constrained minimum variance (LCMV) beamformer. The method is also extended to handle data with highly correlated sources using the marginal distribution of the estimated joint distribution, which, in the case of Gaussian measurements, corresponds to the null-beamformer. The proposed non-Gaussian source localisation approach is shown to give better spatial estimates than the LCMV beamformer, both in simulations incorporating non-Gaussian signals, and in real MEG measurements of auditory and visual evoked responses, where the highly correlated sources are known to be difficult to estimate. PMID:24055702
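
    As a toy illustration of the building block named above (not the paper's localisation pipeline), a multivariate kernel density estimate of a clearly non-Gaussian sample, using SciPy's Gaussian KDE as a stand-in for the estimator:

```python
# Estimate the pdf of a bimodal 2-D "source amplitude" sample by KDE.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
samples = np.vstack([rng.normal(-2.0, 0.5, (500, 2)),
                     rng.normal(+2.0, 0.5, (500, 2))]).T   # shape (2, N)

kde = gaussian_kde(samples)          # bandwidth chosen by Scott's rule
grid = np.array([[-2.0, 0.0, 2.0],   # three evaluation points (x coords)
                 [-2.0, 0.0, 2.0]])  # (y coords)
print(kde(grid))                     # pdf values: high, low, high
```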

  2. Effectiveness and Cost-Effectiveness of Antidepressants in Primary Care: A Multiple Treatment Comparison Meta-Analysis and Cost-Effectiveness Model

    PubMed Central

    Ramsberg, Joakim; Asseburg, Christian; Henriksson, Martin

    2012-01-01

    Objective To determine the effectiveness and cost-effectiveness, over a one-year time horizon, of pharmacological first-line treatment in primary care for patients with moderate to severe depression. Design A multiple treatment comparison meta-analysis was employed to determine the relative efficacy in terms of remission of 10 antidepressants (citalopram, duloxetine, escitalopram, fluoxetine, fluvoxamine, mirtazapine, paroxetine, reboxetine, sertraline and venlafaxine). The estimated remission rates were then applied in a decision-analytic model in order to estimate costs and quality of life with different treatments at one year. Data Sources Meta-analyses of remission rates from randomised controlled trials, and cost and quality-of-life data from published sources. Results The most favourable pharmacological treatment in terms of remission was escitalopram, with an 8- to 12-week probability of remission of 0.47. Despite a high acquisition cost, this clinical effectiveness translated into escitalopram being both more effective and having a lower total cost than all other comparators from a societal perspective. From a healthcare perspective, the cost per QALY of escitalopram was €3732 compared with venlafaxine. Conclusion Of the investigated antidepressants, escitalopram has the highest probability of remission and is the most effective and cost-effective pharmacological treatment in a primary care setting, when evaluated over a one-year time horizon. Small differences in remission rates may be important when assessing costs and cost-effectiveness of antidepressants. PMID:22876296

  3. Heading Estimation for Pedestrian Dead Reckoning Based on Robust Adaptive Kalman Filtering.

    PubMed

    Wu, Dongjin; Xia, Linyuan; Geng, Jijun

    2018-06-19

    Pedestrian dead reckoning (PDR) using smartphone-embedded micro-electro-mechanical system (MEMS) sensors plays a key role in ubiquitous localization indoors and outdoors. However, as a relative localization method, it suffers from error accumulation, which prevents long-term independent operation. Heading estimation error is one of the main sources of location error, and therefore, in order to improve the location tracking performance of the PDR method in complex environments, an approach based on robust adaptive Kalman filtering (RAKF) for estimating accurate headings is proposed. In our approach, outputs from gyroscope, accelerometer, and magnetometer sensors are fused in a Kalman filter (KF) in which heading measurements derived from accelerations and magnetic field data are used to correct the states integrated from angular rates. In order to identify and control measurement outliers, a maximum likelihood-type estimator (M-estimator)-based model is used. Moreover, an adaptive factor is applied to resist the negative effects of state model disturbances. Extensive experiments under static and dynamic conditions were conducted in indoor environments. The experimental results demonstrate that the proposed approach provides more accurate heading estimates and supports more robust and dynamic adaptive location tracking, compared with methods based on conventional KF.
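
    A heavily simplified one-state sketch of the idea (gyro-integrated heading corrected by magnetometer-derived heading, with a Huber-type reweighting of outliers and a simple adaptive covariance inflation); the gains, thresholds, and noise levels are invented, and this is not the authors' filter:

```python
import numpy as np

def rakf_heading(gyro_rate, mag_heading, dt, q=1e-4, r=0.05, k0=1.5):
    """1-state heading KF with robust (M-estimator-like) and adaptive terms."""
    x, P = mag_heading[0], 1.0
    out = []
    for w, z in zip(gyro_rate, mag_heading):
        x, P = x + w * dt, P + q                    # predict: integrate rate
        v = (z - x + np.pi) % (2 * np.pi) - np.pi   # wrapped innovation
        t = abs(v) / np.sqrt(P + r)                 # normalized innovation
        r_eff = r if t <= k0 else r * (t / k0)      # downweight outliers
        P_eff = P if t <= k0 else P * (t / k0)      # adaptive inflation
        K = P_eff / (P_eff + r_eff)
        x, P = x + K * v, (1.0 - K) * P_eff         # update
        out.append(x)
    return np.array(out)
```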

  4. Assessing the Impact of Source-Zone Remediation Efforts at the Contaminant-Plume Scale Through Analysis of Contaminant Mass Discharge

    PubMed Central

    Brusseau, M. L.; Hatton, J.; DiGuiseppi, W.

    2011-01-01

    The long-term impact of source-zone remediation efforts was assessed for a large site contaminated by trichloroethene. The impact of the remediation efforts (soil vapor extraction and in-situ chemical oxidation) was assessed through analysis of plume-scale contaminant mass discharge, which was measured using a high-resolution data set obtained from 23 years of operation of a large pump-and-treat system. The initial contaminant mass discharge peaked at approximately 7 kg/d, and then declined to approximately 2 kg/d. This latter value was sustained for several years prior to the initiation of source-zone remediation efforts. The contaminant mass discharge in 2010, measured several years after completion of the two source-zone remediation actions, was approximately 0.2 kg/d, which is ten times lower than the value prior to source-zone remediation. The time-continuous contaminant mass discharge data can be used to evaluate the impact of the source-zone remediation efforts on reducing the time required to operate the pump-and-treat system, and to estimate the cost savings associated with the decreased operational period. While significant reductions have been achieved, it is evident that the remediation efforts have not completely eliminated contaminant mass discharge and associated risk. Remaining contaminant mass contributing to the current mass discharge is hypothesized to comprise poorly-accessible mass in the source zones, as well as aqueous (and sorbed) mass present in the extensive lower-permeability units located within and adjacent to the contaminant plume. The fate of these sources is an issue of critical import to the remediation of chlorinated-solvent contaminated sites, and development of methods to address these sources will be required to achieve successful long-term management of such sites and to ultimately transition them to closure. PMID:22115080

  5. Size distribution, directional source contributions and pollution status of PM from Chengdu, China during a long-term sampling campaign.

    PubMed

    Shi, Guo-Liang; Tian, Ying-Ze; Ma, Tong; Song, Dan-Lin; Zhou, Lai-Dong; Han, Bo; Feng, Yin-Chang; Russell, Armistead G

    2017-06-01

    Long-term and synchronous monitoring of PM10 and PM2.5 was conducted in Chengdu in China from 2007 to 2013. The levels, variations, compositions and size distributions were investigated. The sources were quantified by two-way and three-way receptor models (PMF2, ME2-2way and ME2-3way). Consistent results were found: the primary source categories contributed 63.4% (PMF2), 64.8% (ME2-2way) and 66.8% (ME2-3way) to PM10, and contributed 60.9% (PMF2), 65.5% (ME2-2way) and 61.0% (ME2-3way) to PM2.5. Secondary sources contributed 31.8% (PMF2), 32.9% (ME2-2way) and 31.7% (ME2-3way) to PM10, and 35.0% (PMF2), 33.8% (ME2-2way) and 36.0% (ME2-3way) to PM2.5. The size distribution of source categories was estimated better by the ME2-3way method. The three-way model can simultaneously consider chemical species, temporal variability and PM sizes, while a two-way model independently computes datasets of different sizes. A method called source directional apportionment (SDA) was employed to quantify the contributions from various directions for each source category. Crustal dust from east-north-east (ENE) contributed the most to both PM10 (12.7%) and PM2.5 (9.7%) in Chengdu, followed by crustal dust from south-east (SE) for PM10 (9.8%) and secondary nitrate & secondary organic carbon from ENE for PM2.5 (9.6%). Source contributions from different directions are associated with meteorological conditions, source locations and emission patterns during the sampling period. These findings and methods provide useful tools to better understand PM pollution status and to develop effective pollution control strategies.
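
    As a rough analogue of the receptor modeling described above (not the PMF2/ME-2 solvers themselves), a non-negative matrix factorization of a samples-by-species concentration matrix; scikit-learn's NMF stands in, with factor contributions and profiles playing the same roles:

```python
# X (samples x species) ~ G (contributions) @ F (profiles), all non-negative.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(3)
true_profiles = rng.random((4, 12))              # 4 sources, 12 species
true_contrib = rng.random((300, 4))              # 300 samples
X = true_contrib @ true_profiles + 0.01 * rng.random((300, 12))

model = NMF(n_components=4, init='nndsvda', max_iter=500, random_state=0)
G = model.fit_transform(X)                       # source contributions
F = model.components_                            # source profiles

# Total mass attributed to factor k: sum_ij G[i,k] * F[k,j].
share = G.sum(axis=0) * F.sum(axis=1)
print(share / share.sum())                       # fractional apportionment
```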

  6. Estimating the prevalence of 26 health-related indicators at neighbourhood level in the Netherlands using structured additive regression.

    PubMed

    van de Kassteele, Jan; Zwakhals, Laurens; Breugelmans, Oscar; Ameling, Caroline; van den Brink, Carolien

    2017-07-01

    Local policy makers increasingly need information on health-related indicators at smaller geographic levels like districts or neighbourhoods. Although more large data sources have become available, direct estimates of the prevalence of a health-related indicator cannot be produced for neighbourhoods for which only small samples or no samples are available. Small area estimation provides a solution, but unit-level models for binary-valued outcomes that can handle both non-linear effects of the predictors and spatially correlated random effects in a unified framework are rarely encountered. We used data on 26 binary-valued health-related indicators collected on 387,195 persons in the Netherlands. We associated the health-related indicators at the individual level with a set of 12 predictors obtained from national registry data. We formulated a structured additive regression model for small area estimation. The model captured potential non-linear relations between the predictors and the outcome through additive terms in a functional form using penalized splines and included a term that accounted for spatially correlated heterogeneity between neighbourhoods. The registry data were used to predict individual outcomes which in turn are aggregated into higher geographical levels, i.e. neighbourhoods. We validated our method by comparing the estimated prevalences with observed prevalences at the individual level and by comparing the estimated prevalences with direct estimates obtained by weighting methods at municipality level. We estimated the prevalence of the 26 health-related indicators for 415 municipalities, 2599 districts and 11,432 neighbourhoods in the Netherlands. We illustrate our method on overweight data and show that there are distinct geographic patterns in the overweight prevalence. Calibration plots show that the estimated prevalences agree very well with observed prevalences at the individual level. The estimated prevalences agree reasonably well with the direct estimates at the municipal level. Structured additive regression is a useful tool to provide small area estimates in a unified framework. We are able to produce valid nationwide small area estimates of 26 health-related indicators at neighbourhood level in the Netherlands. The results can be used for local policy makers to make appropriate health policy decisions.
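
    A minimal sketch of the modeling idea, assuming statsmodels' GAM interface as a stand-in for the authors' implementation; the predictors, penalties, and data below are invented:

```python
# Binomial structured additive model: penalized-spline smooths of two
# continuous registry predictors; individual predictions would then be
# aggregated to neighbourhood level, as in the paper's workflow.
import numpy as np
import statsmodels.api as sm
from statsmodels.gam.api import GLMGam, BSplines

rng = np.random.default_rng(4)
n = 5000
age = rng.uniform(18, 90, n)
income = rng.normal(0.0, 1.0, n)
logit = -1.0 + 0.03 * (age - 50) - 0.4 * income**2      # non-linear truth
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

x_smooth = np.column_stack([age, income])
bs = BSplines(x_smooth, df=[8, 8], degree=[3, 3])       # spline bases
gam = GLMGam(y, exog=np.ones((n, 1)), smoother=bs,
             family=sm.families.Binomial(), alpha=[10.0, 10.0])
res = gam.fit()
p_hat = res.fittedvalues   # individual-level predicted prevalences
# Averaging p_hat within each neighbourhood gives the small-area estimate.
print(p_hat[:5])
```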

  7. Uncertainty quantification of surface-water/groundwater exchange estimates in large wetland systems using Python

    NASA Astrophysics Data System (ADS)

    Hughes, J. D.; Metz, P. A.

    2014-12-01

    Most watershed studies include observation-based water budget analyses to develop first-order estimates of significant flow terms. Surface-water/groundwater (SWGW) exchange is typically assumed to be equal to the residual of the sum of inflows and outflows in a watershed. These estimates of SWGW exchange, however, are highly uncertain as a result of the propagation of uncertainty inherent in the calculation or processing of the other terms of the water budget, such as stage-area-volume relations, and uncertainties associated with land-cover based evapotranspiration (ET) rate estimates. Furthermore, the uncertainty of estimated SWGW exchanges can be magnified in large wetland systems that transition from dry to wet during wet periods. Although it is well understood that observation-based estimates of SWGW exchange are uncertain it is uncommon for the uncertainty of these estimates to be directly quantified. High-level programming languages like Python can greatly reduce the effort required to (1) quantify the uncertainty of estimated SWGW exchange in large wetland systems and (2) evaluate how different approaches for partitioning land-cover data in a watershed may affect the water-budget uncertainty. We have used Python with the Numpy, Scipy.stats, and pyDOE packages to implement an unconstrained Monte Carlo approach with Latin Hypercube sampling to quantify the uncertainty of monthly estimates of SWGW exchange in the Floral City watershed of the Tsala Apopka wetland system in west-central Florida, USA. Possible sources of uncertainty in the water budget analysis include rainfall, ET, canal discharge, and land/bathymetric surface elevations. Each of these input variables was assigned a probability distribution based on observation error or spanning the range of probable values. The Monte Carlo integration process exposes the uncertainties in land-cover based ET rate estimates as the dominant contributor to the uncertainty in SWGW exchange estimates. We will discuss the uncertainty of SWGW exchange estimates using an ET model that partitions the watershed into open water and wetland land-cover types. We will also discuss the uncertainty of SWGW exchange estimates calculated using ET models partitioned into additional land-cover types.
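
    A compact sketch of that workflow under invented distributions: pyDOE's Latin Hypercube sampler drives a Monte Carlo water budget whose residual is the SWGW exchange, and the spread of the residual quantifies its uncertainty:

```python
# Latin Hypercube Monte Carlo for a toy monthly water budget (units: mm).
import numpy as np
from scipy import stats
from pyDOE import lhs

n = 10000
u = lhs(3, samples=n)                            # stratified uniforms in [0,1]^3
rain = stats.norm(180.0, 15.0).ppf(u[:, 0])      # rainfall
et = stats.norm(120.0, 30.0).ppf(u[:, 1])        # ET: dominant uncertainty
canal = stats.norm(25.0, 5.0).ppf(u[:, 2])       # canal discharge

storage_change = 20.0                            # assumed known
swgw = storage_change - (rain - et - canal)      # exchange as budget residual
print(f"SWGW exchange: mean {swgw.mean():.1f} mm, "
      f"95% interval [{np.percentile(swgw, 2.5):.1f}, "
      f"{np.percentile(swgw, 97.5):.1f}] mm")
```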

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balsa Terzic, Gabriele Bassi

    In this paper we discuss representations of charged particle densities in particle-in-cell (PIC) simulations, analyze the sources and profiles of the intrinsic numerical noise, and present efficient methods for its removal. We devise two alternative estimation methods for the charged particle distribution which represent a significant improvement over the Monte Carlo cosine expansion used in the 2D code of Bassi, designed to simulate coherent synchrotron radiation (CSR) in charged particle beams. The improvement is achieved by employing an alternative beam density estimation to the Monte Carlo cosine expansion. The representation is first binned onto a finite grid, after which two grid-based methods are employed to approximate particle distributions: (i) truncated fast cosine transform (TFCT); and (ii) thresholded wavelet transform (TWT). We demonstrate that these alternative methods represent a staggering upgrade over the original Monte Carlo cosine expansion in terms of efficiency, while the TWT approximation also provides an appreciable improvement in accuracy. The improvement in accuracy comes from a judicious removal of the numerical noise enabled by the wavelet formulation. The TWT method is then integrated into Bassi's CSR code and benchmarked against the original version. We show that the new density estimation method provides superior performance in terms of efficiency and spatial resolution, thus enabling high-fidelity simulations of CSR effects, including the microbunching instability.
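
    Hedged sketches of the two grid-based ideas named above, applied to an invented noisy gridded density; SciPy's DCT stands in for the fast cosine transform and PyWavelets for the wavelet thresholding:

```python
# (i) TFCT-like: keep only low-order cosine modes.
# (ii) TWT-like: soft-threshold wavelet detail coefficients.
import numpy as np
from scipy.fft import dctn, idctn
import pywt

rng = np.random.default_rng(5)
x, y = np.meshgrid(np.linspace(-3, 3, 128), np.linspace(-3, 3, 128))
density = np.exp(-(x**2 + y**2) / 2) + 0.05 * rng.normal(size=x.shape)

C = dctn(density, norm='ortho')
C[16:, :] = 0.0                       # zero all but the leading 16x16 modes
C[:, 16:] = 0.0
tfct = idctn(C, norm='ortho')

coeffs = pywt.wavedec2(density, 'db4', level=4)
thr = 0.05 * np.abs(coeffs[-1][-1]).max()
coeffs = [coeffs[0]] + [tuple(pywt.threshold(d, thr, 'soft') for d in lvl)
                        for lvl in coeffs[1:]]
twt = pywt.waverec2(coeffs, 'db4')
print(np.abs(tfct - twt).max())       # the two denoised grids broadly agree
```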

  9. Beam current enhancement of microwave plasma ion source utilizing double-port rectangular cavity resonator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Yuna; Park, Yeong-Shin; Jo, Jong-Gab

    2012-02-15

    A microwave plasma ion source with a rectangular cavity resonator has been examined to improve ion beam current by changing the wave launcher type from single-port to double-port. The cavity resonators with double-port and single-port wave launchers are designed to get a resonance effect at the TE-103 mode and TE-102 mode, respectively. In order to confirm that the cavities are acting as resonators, the microwave power for breakdown is measured and compared with the E-field strength estimated from the HFSS (High Frequency Structure Simulator) simulation. Langmuir probe measurements show that the double-port cavity enhances the central density of the plasma ion source by modifying the non-uniform plasma density profile of the single-port cavity. Correspondingly, the beam current from the plasma ion source utilizing the double-port resonator is measured to be higher than that utilizing the single-port resonator. Moreover, the enhancement in plasma density and ion beam current utilizing the double-port resonator is more pronounced as higher microwave power is applied to the plasma ion source. Therefore, the rectangular cavity resonator utilizing the double port is expected to enhance the performance of the plasma ion source in terms of ion beam extraction.

  10. Beam current enhancement of microwave plasma ion source utilizing double-port rectangular cavity resonator.

    PubMed

    Lee, Yuna; Park, Yeong-Shin; Jo, Jong-Gab; Yang, J J; Hwang, Y S

    2012-02-01

    A microwave plasma ion source with a rectangular cavity resonator has been examined to improve ion beam current by changing the wave launcher type from single-port to double-port. The cavity resonators with double-port and single-port wave launchers are designed to get a resonance effect at the TE-103 mode and TE-102 mode, respectively. In order to confirm that the cavities are acting as resonators, the microwave power for breakdown is measured and compared with the E-field strength estimated from the HFSS (High Frequency Structure Simulator) simulation. Langmuir probe measurements show that the double-port cavity enhances the central density of the plasma ion source by modifying the non-uniform plasma density profile of the single-port cavity. Correspondingly, the beam current from the plasma ion source utilizing the double-port resonator is measured to be higher than that utilizing the single-port resonator. Moreover, the enhancement in plasma density and ion beam current utilizing the double-port resonator is more pronounced as higher microwave power is applied to the plasma ion source. Therefore, the rectangular cavity resonator utilizing the double port is expected to enhance the performance of the plasma ion source in terms of ion beam extraction.

  11. Improving bioaerosol exposure assessments of composting facilities — Comparative modelling of emissions from different compost ages and processing activities

    NASA Astrophysics Data System (ADS)

    Taha, M. P. M.; Drew, G. H.; Tamer, A.; Hewings, G.; Jordinson, G. M.; Longhurst, P. J.; Pollard, S. J. T.

    We present bioaerosol source term concentrations from passive and active composting sources and compare emissions from green waste compost aged 1, 2, 4, 6, 8, 12 and 16 weeks. Results reveal that the age of compost has little effect on the bioaerosol concentrations emitted from passive windrow sources. However, emissions from turning compost during the early stages may be higher than during the later stages of the composting process. The bioaerosol emissions from passive sources were in the range of 10³-10⁴ cfu m⁻³, with releases from active sources typically 1-log higher. We propose improvements to current risk assessment methodologies by examining emission rates and the differences between two air dispersion models for the prediction of downwind bioaerosol concentrations at off-site points of exposure. The SCREEN3 model provides a more precautionary estimate of the source depletion curves of bioaerosol emissions in comparison to ADMS 3.3. The results from both models predict that bioaerosol concentrations decrease to below typical background concentrations before 250 m, the distance at which the regulator in England and Wales may require a risk assessment to be completed.

  12. Guiding optimal biofuels:

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paap, Scott M.; West, Todd H.; Manley, Dawn Kataoka

    2013-01-01

    In the current study, processes to produce either ethanol or a representative fatty acid ethyl ester (FAEE) via the fermentation of sugars liberated from lignocellulosic materials pretreated in acid or alkaline environments are analyzed in terms of economic and environmental metrics. Simplified process models are introduced and employed to estimate process performance, and Monte Carlo analyses were carried out to identify key sources of uncertainty and variability. We find that the near-term performance of processes to produce FAEE is significantly worse than that of ethanol production processes for all metrics considered, primarily due to poor fermentation yields and higher electricity demands for aerobic fermentation. In the longer term, the reduced cost and energy requirements of FAEE separation processes will be at least partially offset by inherent limitations in the relevant metabolic pathways that constrain the maximum yield potential of FAEE from biomass-derived sugars.

  13. A Multigroup Method for the Calculation of Neutron Fluence with a Source Term

    NASA Technical Reports Server (NTRS)

    Heinbockel, J. H.; Clowdsley, M. S.

    1998-01-01

    Current research under this grant involves the development of a multigroup method for the calculation of low-energy evaporation neutron fluences associated with the Boltzmann equation. This research will enable one to predict radiation exposure under a variety of circumstances. Knowledge of radiation exposure in a free-space environment is a necessity for space travel, high-altitude space planes, and satellite design. This is because certain radiation environments can cause damage to biological and electronic systems, involving both short-term and long-term effects. By having a priori knowledge of the environment, one can use prediction techniques to estimate radiation damage to such systems. Appropriate shielding can be designed to protect both humans and electronic systems that are exposed to a known radiation environment. This is the goal of the current research efforts involving the multigroup method and the Green's function approach.

  14. Broadband Phase Retrieval for Image-Based Wavefront Sensing

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H.

    2007-01-01

    A focus-diverse phase-retrieval algorithm has been shown to perform adequately for the purpose of image-based wavefront sensing when (1) broadband light (typically spanning the visible spectrum) is used in forming the images by use of an optical system under test and (2) the assumption of monochromaticity is applied to the broadband image data. Heretofore, it had been assumed that in order to obtain adequate performance, it is necessary to use narrowband or monochromatic light. Some background information, including definitions of terms and a brief description of pertinent aspects of image-based phase retrieval, is prerequisite to a meaningful summary of the present development. Phase retrieval is a general term used in optics to denote estimation of optical imperfections or aberrations of an optical system under test. The term image-based wavefront sensing refers to a general class of algorithms that recover optical phase information, and phase-retrieval algorithms constitute a subset of this class. In phase retrieval, one utilizes the measured response of the optical system under test to produce a phase estimate. The optical response of the system is defined as the image of a point-source object, which could be a star or a laboratory point source. The phase-retrieval problem is characterized as image-based in the sense that a charge-coupled-device camera, preferably of scientific imaging quality, is used to collect image data where the optical system would normally form an image. In a variant of phase retrieval, denoted phase-diverse phase retrieval [which can include focus-diverse phase retrieval (in which various defocus planes are used)], an additional known aberration (or an equivalent diversity function) is superimposed as an aid in estimating unknown aberrations by use of an image-based wavefront-sensing algorithm. Image-based phase retrieval differs from other wavefront-sensing methods, such as interferometry, shearing interferometry, curvature wavefront sensing, and Shack-Hartmann sensing, all of which entail disadvantages in comparison with image-based methods. The main disadvantages of these non-image-based methods are the complexity of the test equipment and the need for a wavefront reference.

  15. Nonparametric Stochastic Model for Uncertainty Quantification of Short-term Wind Speed Forecasts

    NASA Astrophysics Data System (ADS)

    AL-Shehhi, A. M.; Chaouch, M.; Ouarda, T.

    2014-12-01

    Wind energy is increasing in importance as a renewable energy source due to its potential role in reducing carbon emissions. It is a safe, clean, and inexhaustible source of energy. The amount of wind energy generated by wind turbines is closely related to the wind speed. Wind speed forecasting plays a vital role in the wind energy sector in terms of wind turbine optimal operation, wind energy dispatch and scheduling, efficient energy harvesting, etc. It is also considered during the planning, design, and assessment of any proposed wind project. Therefore, accurate prediction of wind speed carries particular importance and plays a significant role in the wind industry. Many methods have been proposed in the literature for short-term wind speed forecasting. These methods are usually based on modeling historical fixed time intervals of the wind speed data and using them for future prediction. They mainly include statistical models such as the ARMA and ARIMA models, physical models, for instance numerical weather prediction, and artificial intelligence techniques, for example support vector machines and neural networks. In this paper, we are interested in estimating hourly wind speed measures in the United Arab Emirates (UAE). More precisely, we predict hourly wind speed using a nonparametric kernel estimation of the regression and volatility functions pertaining to a nonlinear autoregressive model with an ARCH component, which includes an unknown nonlinear regression function and volatility function, as already discussed in the literature. The unknown nonlinear regression function describes the dependence between the value of the wind speed at time t and its historical values at times t-1, t-2, ..., t-d. This function plays a key role in predicting the hourly wind speed process. The volatility function, i.e., the conditional variance given the past, measures the risk associated with this prediction. Since the regression and volatility functions are supposed to be unknown, they are estimated using nonparametric kernel methods. In addition to the pointwise hourly wind speed forecasts, a confidence interval is provided, which allows the uncertainty around the forecasts to be quantified.
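
    An illustrative Nadaraya-Watson sketch of the two estimated quantities (conditional mean and conditional spread given the last d hours); the Gaussian kernel, bandwidth, and data are assumptions, not the authors' estimator:

```python
# m(x) = sum_i K((x - X_i)/h) Y_i / sum_i K((x - X_i)/h), with the weighted
# residual spread standing in for the volatility term.
import numpy as np

def nw_predict(history, d=3, h=1.0):
    """Predict the next value from lag-vectors of length d."""
    X = np.array([history[i:i + d] for i in range(len(history) - d)])
    Y = np.array(history[d:])
    x0 = np.array(history[-d:])
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * h ** 2))  # kernel weights
    m = np.sum(w * Y) / np.sum(w)                  # conditional mean
    v = np.sum(w * (Y - m) ** 2) / np.sum(w)       # conditional variance
    return m, np.sqrt(v)

rng = np.random.default_rng(6)
speeds = 6 + np.sin(np.arange(500) * 2 * np.pi / 24) + rng.normal(0, 0.5, 500)
mean, sd = nw_predict(list(speeds), d=3, h=0.8)
print(f"next-hour forecast {mean:.2f} m/s, +/- {1.96 * sd:.2f} (95% band)")
```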

  16. Detection, Source Location, and Analysis of Volcano Infrasound

    NASA Astrophysics Data System (ADS)

    McKee, Kathleen F.

    The study of volcano infrasound focuses on low frequency sound from volcanoes, how volcanic processes produce it, and the path it travels from the source to our receivers. In this dissertation we focus on detecting, locating, and analyzing infrasound from a number of different volcanoes using a variety of analysis techniques. These works will help inform future volcano monitoring using infrasound with respect to infrasonic source location, signal characterization, volatile flux estimation, and back-azimuth to source determination. Source location is an important component of the study of volcano infrasound and of its application to volcano monitoring. Semblance is a forward grid search technique and a common source location method in infrasound studies as well as seismology. We evaluated the effectiveness of semblance in the presence of significant topographic features for explosions of Sakurajima Volcano, Japan, while taking into account temperature and wind variations. We show that topographic obstacles at Sakurajima cause a semblance source location offset of 360-420 m to the northeast of the actual source location. In addition, we found that despite the consistent offset in source location, semblance can still be a useful tool for determining periods of volcanic activity. Infrasonic signal characterization follows signal detection and source location in volcano monitoring in that it informs us of the type of volcanic activity detected. In large volcanic eruptions the lowermost portion of the eruption column is momentum-driven and termed the volcanic jet or gas-thrust zone. This turbulent fluid flow perturbs the atmosphere and produces a sound similar to that of jet and rocket engines, known as jet noise. We deployed an array of infrasound sensors near an accessible, less hazardous fumarolic jet at Aso Volcano, Japan as an analogue to large, violent volcanic eruption jets. We recorded volcanic jet noise at 57.6° from vertical, a recording angle not normally feasible in volcanic environments. The fumarolic jet noise was found to have a sustained, low-amplitude signal with a spectral peak between 7 and 10 Hz. From thermal imagery we measured the jet temperature (~260 °C) and estimated the jet diameter (~2.5 m). From the estimated jet diameter, an assumed Strouhal number of 0.19, and the jet noise peak frequency, we estimated the jet velocity to be 79-132 m/s. We then used published gas data to estimate the volatile flux at 160-270 kg/s (14,000-23,000 t/d). These estimates are typically difficult to obtain in volcanic environments, but provide valuable information on the eruption. At regional and global length scales we use infrasound arrays to detect signals and determine their source back-azimuths. A ground coupled airwave (GCA) occurs when an incident acoustic pressure wave encounters the Earth's surface and part of the energy of the wave is transferred to the ground. GCAs are commonly observed from sources such as volcanic eruptions, bolides, meteors, and explosions. They have been observed to have retrograde particle motion. When recorded on collocated seismo-acoustic sensors, the phase between the infrasound and seismic signals is 90°. If the sensors are separated, wind noise is usually incoherent and an additional phase is added due to the sensor separation. We utilized the additional phase and the characteristic particle motion to determine a unique back-azimuth solution to an acoustic source. The additional phase will be different depending on the direction from which a wave arrives.
Our technique was tested using synthetic seismo-acoustic data from a coupled Earth-atmosphere 3D finite difference code and then applied to two well-constrained volcanic datasets: Mount St. Helens, USA, and Mount Pagan, Commonwealth of the Northern Mariana Islands. The results from our method are within 1°-5° of the actual back-azimuths and of those determined by traditional infrasound array processing. Ours is a new method to detect and determine the back-azimuth of infrasonic signals, which will be useful when financial and spatial resources are limited.
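
    A back-of-envelope check of the jet-velocity numbers quoted above, using the Strouhal relation St = fD/U, i.e. U = fD/St; the published 79-132 m/s range presumably also folds in uncertainty in the diameter estimate, which is why the lower bound differs:

```python
# Jet velocity from the Strouhal relation with the reported peak
# frequencies, jet diameter, and assumed Strouhal number.
D = 2.5        # jet diameter, m (from thermal imagery)
St = 0.19      # assumed Strouhal number
for f in (7.0, 10.0):          # spectral peak range, Hz
    print(f"f = {f:4.1f} Hz -> U = {f * D / St:6.1f} m/s")
# f =  7.0 Hz -> U =   92.1 m/s
# f = 10.0 Hz -> U =  131.6 m/s
```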

  17. A hemispherical Langmuir probe array detector for angular resolved measurements on droplet-based laser-produced plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gambino, Nadia, E-mail: gambinon@ethz.ch; Brandstätter, Markus; Rollinger, Bob

    2014-09-15

    In this work, a new diagnostic tool for laser-produced plasmas (LPPs) is presented. The detector is based on a multiple array of six motorized Langmuir probes. It allows the dynamics of an LPP to be measured in terms of charged particle detection, with particular attention to droplet-based LPP sources for EUV lithography. The system design permits temporally resolving the angular and radial plasma charge distribution and obtaining a hemispherical mapping of the ions and electrons around the droplet plasma. Understanding these dynamics is fundamental to improving the debris mitigation techniques for droplet-based LPP sources. The device has been developed, built, and employed at the Laboratory for Energy Conversion, ETH Zürich. The experimental results have been obtained on the droplet-based LPP source ALPS II. For the first time, 2D mappings of the ion kinetic energy distribution around the droplet plasma have been obtained with an array of multiple Langmuir probes. These measurements show an anisotropic expansion of the ions in terms of kinetic energy and amount of ion charge around the droplet target. First estimates of the plasma density and electron temperature were also obtained from the analysis of the probe current signals.

  18. Cost-effectiveness of rosuvastatin 20 mg for the prevention of cardiovascular morbidity and mortality: a Swedish economic evaluation of the JUPITER trial.

    PubMed

    Ohsfeldt, Robert L; Olsson, Anders G; Jensen, Marie M; Gandhi, Sanjay K; Paulsson, Thomas

    2012-01-01

    This study estimated the long-term health outcomes, healthcare costs, and cost-effectiveness of rosuvastatin 20 mg therapy in primary prevention of major cardiovascular disease (CVD) in a Swedish population. Based on data from the JUPITER trial, long-term CVD outcomes with rosuvastatin vs no active treatment were estimated for patients with an elevated baseline CVD risk (Framingham CVD score >20%, a sub-population of the JUPITER population) and for a population similar to the total JUPITER population. Using a decision-analytic model, trial CVD event rates were combined with epidemiological and cost data specific to Sweden. First and subsequent CVD events and death were estimated over a lifetime perspective. The observed relative risk reduction was extrapolated beyond the trial duration. Incremental effectiveness was measured as life-years gained (LYG) and quality-adjusted life-years (QALYs) gained. Treating 100,000 patients with rosuvastatin 20 mg was estimated to avoid 14,692 CVD events over the lifetime (8021 non-fatal MIs, 3228 non-fatal strokes, and 4924 CVD deaths) compared to placebo. This translated into an estimated gain of 42,122 QALYs and 36,865 total life-years. Rosuvastatin was both more effective and less costly over a lifetime perspective, and is therefore a dominant alternative to no treatment in the assessed population. Using the overall JUPITER population, rosuvastatin was dominant for the lifetime horizon. In the sensitivity analysis, rosuvastatin was the dominant treatment strategy over a 20-year time horizon, and cost-effective with an incremental cost-effectiveness ratio (cost per QALY) of SEK 1783 over a 10-year time horizon. Some model inputs were derived from literature or other data sources, but uncertainty was controlled by sensitivity analyses. Results indicate that rosuvastatin 20 mg treatment is a cost-effective option vs no treatment in patients with Framingham CVD risk >20% in Sweden and might even be cost saving from a long-term perspective.
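
    For readers outside health economics, the arithmetic behind "dominant" and "cost per QALY" is simple; the numbers below are invented, not the JUPITER model inputs:

```python
# ICER = (cost difference) / (QALY difference); a strategy that is cheaper
# and more effective "dominates" and needs no ICER.
def icer(cost_new, cost_old, qaly_new, qaly_old):
    d_cost, d_qaly = cost_new - cost_old, qaly_new - qaly_old
    if d_cost <= 0 and d_qaly > 0:
        return "dominant (cheaper and more effective)"
    return f"SEK {d_cost / d_qaly:,.0f} per QALY"

print(icer(101_500, 100_000, 8.42, 7.58))   # ~SEK 1,786/QALY, hypothetical
print(icer(99_000, 100_000, 8.42, 7.58))    # dominant case
```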

  19. The characterization of the distant blazar GB6 J1239+0443 from flaring and low activity periods

    DOE PAGES

    Pacciani, L.; Donnarumma, I.; Denney, K. D.; ...

    2012-08-27

    In 2008, AGILE and Fermi detected gamma-ray flaring activity from the unidentified EGRET source 3EG J1236+0457, recently associated with a flat spectrum radio quasar (GB6 J1239+0443) at z = 1.762. The optical counterpart of the gamma-ray source underwent a flux enhancement of a factor of 15–30 in six years, and of ~10 in six months. Here, we interpret this flare-up in terms of a transition from accretion-disc-dominated emission to synchrotron-jet-dominated emission. We analysed a Sloan Digital Sky Survey (SDSS) archival optical spectrum taken during a period of low radio and optical activity of the source. We estimated the mass of the central black hole using the width of the C IV emission line. We also investigated SDSS archival optical photometric data and ultraviolet GALEX observations to estimate the thermal disc emission contribution of GB6 J1239+0443. Our analysis of the gamma-ray data taken during the flaring episodes indicates a flat gamma-ray spectrum, with an extension of up to 15 GeV and no statistically relevant sign of absorption from the broad-line region, suggesting that the blazar zone is located beyond the broad-line region. This result is confirmed by the modelling of the broad-band spectral energy distribution (well constrained by the available multiwavelength data) of the flaring activity periods and by the accretion disc luminosity and black hole mass estimated by us using archival data.
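
    A hedged illustration of a single-epoch virial mass estimate from the C IV line width, using the Vestergaard & Peterson (2006) calibration as one common choice; the inputs below are placeholders, not the values measured for GB6 J1239+0443:

```python
# log10(M_BH/M_sun) = 6.66 + log10[(FWHM/1e3 km/s)^2 (lambda*L_1350/1e44)^0.53]
import math

def mbh_civ(fwhm_kms, l1350_erg_s):
    """Single-epoch C IV virial black-hole mass (VP06 calibration)."""
    return 6.66 + math.log10((fwhm_kms / 1e3) ** 2
                             * (l1350_erg_s / 1e44) ** 0.53)

print(f"log10 M_BH ~ {mbh_civ(4000.0, 3e45):.2f}")   # placeholder inputs
```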

  20. Quantifying Organic Matter in Surface Waters of the United States and Delivery to the Coastal Zone

    NASA Astrophysics Data System (ADS)

    Boyer, E. W.; Alexander, R. B.; Smith, R. A.; Shih, J.

    2012-12-01

    Organic carbon (OC) is a critical water quality characteristic in surface waters. It is an important component of the energy balance and food chains in freshwater and estuarine aquatic ecosystems, is significant in the mobilization and transport of contaminants along flow paths, and is associated with the formation of known carcinogens in drinking water supplies. The importance of OC dynamics for water quality has been recognized, but challenges remain in quantitatively addressing the processes controlling OC fluxes over broad spatial scales in a hydrological context, while considering upstream-downstream linkages along flow paths. Here, we: 1) quantified lateral OC fluxes in rivers, streams, and reservoirs across the nation from headwaters to the coasts; 2) partitioned how much of the organic carbon stored in lakes, rivers, and streams comes from allochthonous sources (produced in the terrestrial landscape) versus autochthonous sources (produced in-stream by primary production); 3) estimated the delivery of dissolved and total forms of organic carbon to coastal estuaries and embayments; and 4) considered seasonal factors affecting the temporal variation in OC responses. To accomplish this, we developed national-scale models of organic carbon in U.S. surface waters using the spatially referenced regression on watersheds (SPARROW) technique. The modeling approach uses mechanistic formulations, imposes mass balance constraints, and provides a formal parameter estimation structure to statistically estimate the sources and fate of OC in terrestrial and aquatic ecosystems. We calibrated and evaluated the model with statistical estimates of OC loads observed at a network of monitoring stations across the nation, and further explored factors controlling the seasonal dynamics of OC based on these long-term monitoring data. Our results illustrate spatial patterns and magnitudes of OC loadings in rivers, highlighting hot spots and suggesting the origins of the OC delivered to each location. Further, our results yield quantitative estimates of aquatic OC fluxes for large water regions and for the nation, providing a refined estimate of the role of surface-water fluxes of OC in relation to regional and national carbon budgets. Finally, we are using our simulations to explore the role of OC, in relation to other nutrients, in contributing to the acidification and eutrophication of coastal waters.

  1. Parameter estimation for groundwater models under uncertain irrigation data

    USGS Publications Warehouse

    Demissie, Yonas; Valocchi, Albert J.; Cai, Ximing; Brozovic, Nicholas; Senay, Gabriel; Gebremichael, Mekonnen

    2015-01-01

    The success of modeling groundwater is strongly influenced by the accuracy of the model parameters that are used to characterize the subsurface system. However, the presence of uncertainty and possibly bias in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when the standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the concept of generalized least-squares method with the weight of the objective function depending on the level of pumping uncertainty and iteratively adjusted during the parameter optimization process. We have conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of ordinary least-squares (OLS) and IUWLS calibration methods under different levels of uncertainty of irrigation data and calibration conditions. The result from the OLS method shows the presence of statistically significant (p < 0.05) bias in estimated parameters and model predictions that persist despite calibrating the models to different calibration data and sample sizes. However, by directly accounting for the irrigation pumping uncertainties during the calibration procedures, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration processes.
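
    A conceptual sketch of the weighting idea under strong simplifications: a generic linear model and a single weighted solve, whereas the paper additionally re-adjusts the weights iteratively during parameter optimization; all matrices below are invented:

```python
# Weighted least squares in which the effective data variance adds the
# pumping (source/sink) uncertainty propagated through a sensitivity matrix.
import numpy as np

def iuwls(J, d, sigma_obs, S_p, sigma_pump):
    """J: data sensitivity to parameters; S_p: data sensitivity to pumping."""
    var = sigma_obs**2 + (S_p @ np.diag(sigma_pump**2) @ S_p.T).diagonal()
    W = np.diag(1.0 / var)
    return np.linalg.solve(J.T @ W @ J, J.T @ W @ d)

rng = np.random.default_rng(8)
J = rng.normal(size=(50, 3))          # head sensitivities to 3 parameters
S_p = rng.normal(size=(50, 5))        # head sensitivities to 5 wells
m_true = np.array([2.0, -1.0, 0.5])
d = J @ m_true + S_p @ rng.normal(0, 0.2, 5) + rng.normal(0, 0.05, 50)
print(iuwls(J, d, 0.05, S_p, np.full(5, 0.2)))   # ~[2.0, -1.0, 0.5]
```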

  2. Selection bias and patterns of confounding in cohort studies: the case of the NINFEA web-based birth cohort.

    PubMed

    Pizzi, Costanza; De Stavola, Bianca L; Pearce, Neil; Lazzarato, Fulvio; Ghiotti, Paola; Merletti, Franco; Richiardi, Lorenzo

    2012-11-01

    Several studies have examined the effects of sample selection on exposure-outcome association estimates in cohort studies, but the reasons why this selection may induce bias have not been fully explored. Our objective was to investigate how sample selection in the web-based NINFEA birth cohort may change the confounding patterns present in the source population. The characteristics of the NINFEA participants (n=1105) were compared with those of the wider source population, the Piedmont Birth Registry (PBR) (n=36 092), and the association of two exposures (parity and educational level) with two outcomes (low birth weight and birth by caesarean section), while controlling for other risk factors, was studied. Specifically, the associations among measured risk factors within each dataset were examined and the exposure-outcome estimates compared in terms of relative ORs. The associations of educational level with the other risk factors (alcohol consumption, folic acid intake, maternal age, pregnancy weight gain, previous miscarriages) partly differed between PBR and NINFEA. This was not observed for parity. Overall, the exposure-outcome estimates derived from NINFEA differed only moderately from those obtained in PBR, with relative ORs ranging between 0.74 and 1.03. Sample selection in cohort studies may alter the confounding patterns originally present in the general population. However, this does not necessarily introduce selection bias in the exposure-outcome estimates, as sample selection may reduce some of the residual confounding present in the general population.

  3. Common sources and estimated intake of plant sterols in the Spanish diet.

    PubMed

    Jiménez-Escrig, Antonio; Santos-Hidalgo, Ana B; Saura-Calixto, Fulgencio

    2006-05-03

    Plant sterols (PS) are minor lipid components of plants, which may have potential health benefits, mainly based on their cholesterol-lowering effect. The aim of this study was to determine the composition and content of PS in plant-based foods commonly consumed in Spain and to estimate the PS intake in the Spanish diet. For this purpose, the PS content was determined using a modern methodology that measures free, esterified, and glycosidic sterol forms. Second, the intake of PS was estimated using the Spanish National Food Consumption data. The daily per-person intake of PS (campesterol, beta-sitosterol, stigmasterol, and stigmastanol) in the Spanish diet was estimated at 276 mg, the largest component being beta-sitosterol (79.7%). Other unknown compounds, tentatively identified as PS, may constitute a considerable additional intake (99 mg). When the daily PS intake among European diets was compared in terms of campesterol, beta-sitosterol, stigmasterol, and stigmastanol, the PS intake in the Spanish diet was in the same range as that of other countries such as Finland (15.7% higher) or The Netherlands (equal). However, some qualitative differences in the PS sources were detected, namely the predominant brown bread and vegetable fat consumption in the northern diets versus the white bread and vegetable oil consumption in the Spanish diet. These differences may help to provide a link between the consumption of PS and the health effects of the diet.

  4. Estimating Domestic Values for EQ-5D Health States Using Survey Data From External Sources.

    PubMed

    Chuang, Ling-Hsiang; Zarate, Victor; Kind, Paul

    2009-02-01

    Health status measures used to quantify outcomes for economic evaluation must be capable of representing health gain in a single index, usually calibrated in terms of the social preferences elicited from "the relevant population." The general problem faced in the majority of countries where social preferences are required for cost-effectiveness analysis is the absence of a value set based on domestic data sources. This article establishes a methodology for estimating domestic visual analog scale (VAS)-based values for EQ-5D health states by adjusting data sets from countries where valuation studies have been carried out. Building upon the relationship between the values for respondents' real health states and hypothetical health states, 2 models are investigated. One assumes that the link between VAS scores for real and hypothetical health states is constant across 2 countries (R1), whereas the other adopts the assumption that the relationship of VAS scores for hypothetical health states between 2 countries functionally corresponds to variation in scores for real health states (R2). Data from national UK and US population surveys were selected to test both methods. The R2 model performed better in generating estimated scores that were closer to observed values. The R2 model seems to offer a viable method for estimating domestic values of health. Such a method could help to bridge the gap between countries as well as regions within a country.

  5. An evolutive real-time source inversion based on a linear inverse formulation

    NASA Astrophysics Data System (ADS)

    Sanchez Reyes, H. S.; Tago, J.; Cruz-Atienza, V. M.; Metivier, L.; Contreras Zazueta, M. A.; Virieux, J.

    2016-12-01

    Finite source inversion is a stepping stone to unveiling earthquake rupture. It is used in ground motion prediction, and its results shed light on the seismic cycle for better tectonic understanding. It is not yet used for quasi-real-time analysis. Nowadays, significant progress has been made on approaches to earthquake imaging, thanks to new data acquisition and methodological advances. However, most of these techniques are posterior procedures applied once seismograms are available. Incorporating source parameter estimation into early warning systems would require updating the source build-up while recording data. In order to move toward this dynamic estimation, we developed a kinematic source inversion formulated in the time domain, in which seismograms are linearly related to the slip distribution on the fault through convolutions with Green's functions previously estimated and stored (Perton et al., 2016). These convolutions are performed in the time domain as we progressively increase the time window of records at each station specifically. The selected unknowns are the spatio-temporal slip-rate distribution, to keep the forward problem linear with respect to the unknowns, as promoted by Fan and Shearer (2014). Through the spatial extension of the expected rupture zone, we progressively build up the slip-rate when adding new data by assuming rupture causality. This formulation is based on the adjoint-state method for efficiency (Plessix, 2006). The inverse problem is non-unique and, in most cases, underdetermined. While standard regularization terms are used for stabilizing the inversion, we avoid strategies based on parameter reduction, which would lead to an unwanted non-linear relationship between parameters and seismograms in our progressive build-up. Rise time, rupture velocity, and other quantities can be extracted later as attributes from the slip-rate inversion we perform. Satisfactory results are obtained on a synthetic example (Figure 1) proposed by the Source Inversion Validation project (Mai et al., 2011). A real case application is currently being explored. Our specific formulation, combined with simple prior information, as well as the numerical results obtained so far, yields interesting perspectives for a real-time implementation.
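
    A minimal sketch of the linear forward problem described above, under simplifying assumptions: a station's seismogram is the sum of stored Green's functions convolved with per-subfault slip-rate series, so the design matrix is built from shifted copies of the Green's functions and the inversion reduces to regularized linear least squares (plain Tikhonov damping standing in for the paper's regularization and causality constraints). Names and shapes are illustrative.

    ```python
    import numpy as np

    # d = G @ s, where s stacks the slip-rate samples of every subfault and G
    # contains time-shifted copies of each subfault's Green's function, which
    # is exactly a discrete convolution written as a matrix product.

    def build_design_matrix(greens, n_t_slip):
        """greens[k]: Green's function (length n_t_g) for subfault k."""
        n_sub, n_t_g = len(greens), len(greens[0])
        n_t_d = n_t_g + n_t_slip - 1            # full convolution length
        G = np.zeros((n_t_d, n_sub * n_t_slip))
        for k, g in enumerate(greens):
            for j in range(n_t_slip):
                G[j:j + n_t_g, k * n_t_slip + j] = g   # shifted copies
        return G

    def invert_slip_rate(G, d, lam=1e-2):
        # Regularized normal equations: (G^T G + lam I) s = G^T d
        A = G.T @ G + lam * np.eye(G.shape[1])
        return np.linalg.solve(A, G.T @ d)
    ```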

  6. Effect of Carbon-Cycle Uncertainty on Estimates of the 1.5 °C Carbon Budget

    NASA Astrophysics Data System (ADS)

    Mengis, N.; Jalbert, J.; Partanen, A. I.; Matthews, D.

    2017-12-01

    In December 2015, the participants of COP21 agreed to pursue efforts to limit the global temperature increase to 1.5 °C relative to the preindustrial level. A robust estimate of the carbon budget for this temperature target is one precondition for well-informed political discussions. These estimates, however, depend on Earth system models and need to account for model-inherent uncertainties. Here, we quantify the effect of carbon cycle uncertainty within an intermediate-complexity Earth system model. Using a Bayesian inversion approach, we obtain a probabilistic estimate for the 1.5 °C carbon budget of 66 PgC, with a range of 20 to 112 PgC. This estimate is in good agreement with the IPCC's estimate and additionally provides a probabilistic range accounting for uncertainties in the natural carbon sinks. Furthermore, our results suggest that, for a long-term temperature stabilization at 1.5 °C, negative fossil fuel emissions on the order of 1 PgC yr-1 would be needed. Two effects cause the fossil fuel emissions during temperature stabilization to turn negative: (1) the reduced uptake potential of the natural carbon sinks, which arises from increasing ocean temperatures and from the fact that the land turns from a net carbon sink to a source, and (2) the residual positive anthropogenic forcing in the extended scenario, which remains as high as 2.5 W m-2 until the end of 2200. In contrast to previous studies, our results suggest the need for negative fossil fuel emissions for a long-term temperature stabilization, to compensate for residual anthropogenic forcing and a decreasing natural carbon sink potential.

  7. Joint use of epidemiological and hospital medico-administrative data to estimate prevalence. Application to French data on breast cancer.

    PubMed

    Colonna, Marc; Mitton, Nicolas; Schott, Anne-Marie; Remontet, Laurent; Olive, Frédéric; Gomez, Frédéric; Iwaz, Jean; Polazzi, Stéphanie; Bossard, Nadine; Trombert, Béatrice

    2012-04-01

    Estimate complete, limited-duration, and hospital prevalence of breast cancer in a French Département covered by a population-based cancer registry and in whole France using complementary information sources. Incidence data from a cancer registry, national incidence estimations for France, mortality data, and hospital medico-administrative data were used to estimate the three prevalence indices. The methods included a modelling of epidemiological data and a specific process of data extraction from medico-administrative databases. Limited-duration prevalence at 33 years was a proxy for complete prevalence only in patients aged less than 70 years. In 2007 and in women older than 15 years, the limited-duration prevalence at 33 years rate per 100,000 women was estimated at 2372 for Département Isère and 2354 for whole France. The latter rate corresponded to 613,000 women. The highest rate corresponded to women aged 65-74 years (6161 per 100,000 in whole France). About one third of the 33-year limited-duration prevalence cases were diagnosed five years before and about one fourth were hospitalized for breast-cancer-related care (i.e., hospital prevalence). In 2007, the rate of hospitalized women was 557 per 100,000 in whole France. Among the 120,310 women hospitalized for breast-cancer-related care in 2007, about 13% were diagnosed before 2004. Limited-duration prevalence (long- and short-term), and hospital prevalence are complementary indices of cancer prevalence. Their efficient direct or indirect estimations are essential to reflect the burden of the disease and forecast median- and long-term medical, economic, and social patient needs, especially after the initial treatment. Copyright © 2011 Elsevier Ltd. All rights reserved.
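
    As a worked toy example of the counting principle behind limited-duration prevalence: cases diagnosed in each year of the look-back window count toward prevalence at the index date only if still alive, so prevalence is incidence weighted by survival and summed. The numbers below are hypothetical placeholders, not values from the study.

    ```python
    import numpy as np

    # Limited-duration prevalence over a 33-year look-back window:
    # sum over diagnosis years of (new cases that year) x (probability of
    # surviving from that year to the index date).
    years_back = np.arange(33)            # 0 = index year, 32 = 33 years ago
    incidence = np.full(33, 3000.0)       # new cases per year (toy numbers)
    survival = 0.97 ** years_back         # survival to the index date (toy)

    prevalent_cases = np.sum(incidence * survival)
    ```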

  8. Accuracy-preserving source term quadrature for third-order edge-based discretization

    NASA Astrophysics Data System (ADS)

    Nishikawa, Hiroaki; Liu, Yi

    2017-09-01

    In this paper, we derive a family of source term quadrature formulas for preserving third-order accuracy of the node-centered edge-based discretization for conservation laws with source terms on arbitrary simplex grids. A three-parameter family of source term quadrature formulas is derived, and as a subset, a one-parameter family of economical formulas is identified that does not require second derivatives of the source term. Among the economical formulas, a unique formula is then derived that does not require gradients of the source term at neighbor nodes, thus leading to a significantly smaller discretization stencil for source terms. All the formulas derived in this paper do not require a boundary closure, and therefore can be directly applied at boundary nodes. Numerical results are presented to demonstrate third-order accuracy at interior and boundary nodes for one-dimensional grids and linear triangular/tetrahedral grids over straight and curved geometries.

  9. Estimating and Testing the Sources of Evoked Potentials in the Brain.

    ERIC Educational Resources Information Center

    Huizenga, Hilde M.; Molenaar, Peter C. M.

    1994-01-01

    The source of an event-related brain potential (ERP) is estimated from multivariate measures of ERP on the head under several mathematical and physical constraints on the parameters of the source model. Statistical aspects of estimation are discussed, and new tests are proposed. (SLD)

  10. Detailed source term estimation of the atmospheric release for the Fukushima Daiichi Nuclear Power Station accident by coupling simulations of an atmospheric dispersion model with an improved deposition scheme and oceanic dispersion model

    NASA Astrophysics Data System (ADS)

    Katata, G.; Chino, M.; Kobayashi, T.; Terada, H.; Ota, M.; Nagai, H.; Kajino, M.; Draxler, R.; Hort, M. C.; Malo, A.; Torii, T.; Sanada, Y.

    2015-01-01

    Temporal variations in the amount of radionuclides released into the atmosphere during the Fukushima Daiichi Nuclear Power Station (FNPS1) accident and their atmospheric and marine dispersion are essential to evaluate the environmental impacts and resultant radiological doses to the public. In this paper, we estimate the detailed atmospheric releases during the accident using a reverse estimation method which calculates the release rates of radionuclides by comparing measurements of air concentration of a radionuclide or its dose rate in the environment with the ones calculated by atmospheric and oceanic transport, dispersion and deposition models. The atmospheric and oceanic models used are WSPEEDI-II (Worldwide version of System for Prediction of Environmental Emergency Dose Information) and SEA-GEARN-FDM (Finite difference oceanic dispersion model), both developed by the authors. A sophisticated deposition scheme, which deals with dry and fog-water depositions, cloud condensation nuclei (CCN) activation, and subsequent wet scavenging due to mixed-phase cloud microphysics (in-cloud scavenging) for radioactive iodine gas (I2 and CH3I) and other particles (CsI, Cs, and Te), was incorporated into WSPEEDI-II to improve the surface deposition calculations. The results revealed that the major releases of radionuclides due to the FNPS1 accident occurred in the following periods during March 2011: the afternoon of 12 March due to the wet venting and hydrogen explosion at Unit 1, midnight of 14 March when the SRV (safety relief valve) was opened three times at Unit 2, the morning and night of 15 March, and the morning of 16 March. According to the simulation results, the highest radioactive contamination areas around FNPS1 were created from 15 to 16 March by complicated interactions among rainfall, plume movements, and the temporal variation of release rates. The simulation by WSPEEDI-II using the new source term reproduced the local and regional patterns of cumulative surface deposition of total 131I and 137Cs and air dose rate obtained by airborne surveys. The new source term was also tested using three atmospheric dispersion models (Modèle Lagrangien de Dispersion de Particules d'ordre zéro: MLDP0, Hybrid Single Particle Lagrangian Integrated Trajectory Model: HYSPLIT, and Met Office's Numerical Atmospheric-dispersion Modelling Environment: NAME) for regional and global calculations, and the calculated results showed good agreement with observed air concentration and surface deposition of 137Cs in eastern Japan.
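
    A minimal sketch of the reverse estimation idea described above, under strong simplifying assumptions: if a dispersion model provides the concentration each station would see per unit release in each time segment, the release-rate vector follows from a least-squares fit to the observations. Names and shapes are illustrative, not the WSPEEDI-II implementation.

    ```python
    import numpy as np

    # c_unit[i, j]: modeled concentration at station i for a unit release in
    # time segment j (one dispersion-model run per segment). The release rates
    # q then satisfy c_obs ≈ c_unit @ q; non-negativity is enforced crudely by
    # clipping rather than by a constrained solver.

    def estimate_release_rates(c_unit, c_obs):
        """c_unit: (n_stations, n_segments) unit-release responses
        c_obs : (n_stations,) observed air concentrations (or dose rates)"""
        q, *_ = np.linalg.lstsq(c_unit, c_obs, rcond=None)
        return np.clip(q, 0.0, None)   # release rates cannot be negative
    ```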

  11. Integration versus apartheid in post-Roman Britain: a response to Thomas et Al. (2008).

    PubMed

    Pattison, John E

    2011-12-01

    The genetic surveys of the population of Britain conducted by Weale et al. and Capelli et al. produced estimates of the Germani immigration into Britain during the early Anglo-Saxon period, c.430-c.730. These estimates are considerably higher than the estimates of archaeologists. A possible explanation suggests that an apartheid-like social system existed in the early Anglo-Saxon kingdoms resulting in the Germani breeding more quickly than the Britons. Thomas et al. attempted to model this suggestion and showed that it was a possible explanation if all Anglo-Saxon kingdoms had such a system for up to 400 years. I noted that their explanation ignored the probability that Germani have been arriving in Britain for at least the past three millennia, including Belgae and Roman soldiers, and not only during the early Anglo-Saxon period. I produced a population model for Britain taking into account this long term, low level migration that showed that the estimates could be reconciled without the need for introducing an apartheid-like system. In turn, Thomas et al. responded, criticizing my model and arguments, which they considered persuasively written but wanting in terms of methodology, data sources, underlying assumptions, and application. Here, I respond in detail to those criticisms and argue that it is still unnecessary to introduce an apartheid-like system in order to reconcile the different estimates of Germani arrivals. A point of confusion is that geneticists are interested in ancestry, while archaeologists are interested in ethnicity: it is the bones, not the burial rites, which are important in the present context.

  12. Estimating the distribution of colored dissolved organic matter during the Southern Ocean Gas Exchange Experiment using four-dimensional variational data assimilation

    NASA Astrophysics Data System (ADS)

    Del Castillo, C. E.; Dwivedi, S.; Haine, T. W. N.; Ho, D. T.

    2017-03-01

    We diagnosed the effect of various physical processes on the distribution of mixed-layer colored dissolved organic matter (CDOM) and a sulfur hexafluoride (SF6) tracer during the Southern Ocean Gas Exchange Experiment (SO GasEx). The biochemical upper ocean state estimate uses in situ and satellite biochemical and physical data in the study region, including CDOM (absorption coefficient and spectral slope), SF6, hydrography, and sea level anomaly. Modules for photobleaching of CDOM and surface transport of SF6 were coupled with an ocean circulation model for this purpose. The observed spatial and temporal variations in CDOM were captured by the state estimate without including any new biological source term for CDOM, assuming it to be negligible over the 26 days of the state estimate. Thermocline entrainment and photobleaching acted to diminish the mixed-layer CDOM with time scales of 18 and 16 days, respectively. Lateral advection of CDOM played a dominant role and increased the mixed-layer CDOM with a time scale of 12 days, whereas lateral diffusion of CDOM was negligible. A Lagrangian view on the CDOM variability was demonstrated by using the SF6 as a weighting function to integrate the CDOM fields. This and similar data assimilation methods can be used to provide reasonable estimates of optical properties, and other physical parameters over the short-term duration of a research cruise, and help in the tracking of tracer releases in large-scale oceanographic experiments, and in oceanographic process studies.

  13. Estimating the Distribution of Colored Dissolved Organic Matter During the Southern Ocean Gas Exchange Experiment Using Four-Dimensional Variational Data Assimilation

    NASA Technical Reports Server (NTRS)

    Del Castillo, C. E.; Dwivedi, S.; Haine, T. W. N.; Ho, D. T.

    2017-01-01

    We diagnosed the effect of various physical processes on the distribution of mixed-layer colored dissolved organic matter (CDOM) and a sulfur hexafluoride (SF6) tracer during the Southern Ocean Gas Exchange Experiment (SO GasEx). The biochemical upper ocean state estimate uses in situ and satellite biochemical and physical data in the study region, including CDOM (absorption coefficient and spectral slope), SF6, hydrography, and sea level anomaly. Modules for photobleaching of CDOM and surface transport of SF6 were coupled with an ocean circulation model for this purpose. The observed spatial and temporal variations in CDOM were captured by the state estimate without including any new biological source term for CDOM, assuming it to be negligible over the 26 days of the state estimate. Thermocline entrainment and photobleaching acted to diminish the mixed-layer CDOM with time scales of 18 and 16 days, respectively. Lateral advection of CDOM played a dominant role and increased the mixed-layer CDOM with a time scale of 12 days, whereas lateral diffusion of CDOM was negligible. A Lagrangian view on the CDOM variability was demonstrated by using the SF6 as a weighting function to integrate the CDOM fields. This and similar data assimilation methods can be used to provide reasonable estimates of optical properties, and other physical parameters over the short-term duration of a research cruise, and help in the tracking of tracer releases in large-scale oceanographic experiments, and in oceanographic process studies.

  14. Examining effective use of data sources and modeling algorithms for improving biomass estimation in a moist tropical forest of the Brazilian Amazon

    Treesearch

    Yunyun Feng; Dengsheng Lu; Qi Chen; Michael Keller; Emilio Moran; Maiza Nara dos-Santos; Edson Luis Bolfe; Mateus Batistella

    2017-01-01

    Previous research has explored the potential to integrate lidar and optical data in aboveground biomass (AGB) estimation, but how different data sources, vegetation types, and modeling algorithms influence AGB estimation is poorly understood. This research conducts a comparative analysis of different data sources and modeling approaches in improving AGB estimation....

  15. Sources and contents of air pollution affecting term low birth weight in Los Angeles County, California, 2001-2008.

    PubMed

    Laurent, Olivier; Hu, Jianlin; Li, Lianfa; Cockburn, Myles; Escobedo, Loraine; Kleeman, Michael J; Wu, Jun

    2014-10-01

    Low birth weight (LBW, <2500 g) has been associated with exposure to air pollution, but it is still unclear which sources or components of air pollution might be in play. The association between ultrafine particles and LBW has never been studied. To study the relationships between LBW in term born infants and exposure to particles by size fraction, source and chemical composition, and complementary components of air pollution in Los Angeles County (California, USA) over the period 2001-2008. Birth certificates (n=960,945) were geocoded to maternal residence. Primary particulate matter (PM) concentrations by source and composition were modeled. Measured fine PM, nitrogen dioxide and ozone concentrations were interpolated using empirical Bayesian kriging. Traffic indices were estimated. Associations between LBW and air pollution metrics were examined using generalized additive models, adjusting for maternal age, parity, race/ethnicity, education, neighborhood income, gestational age and infant sex. Increased LBW risks were associated with the mass of primary fine and ultrafine PM, with several major sources (especially gasoline, wood burning and commercial meat cooking) of primary PM, and chemical species in primary PM (elemental and organic carbon, potassium, iron, chromium, nickel, and titanium but not lead or arsenic). Increased LBW risks were also associated with total fine PM mass, nitrogen dioxide and local traffic indices (especially within 50 m from home), but not with ozone. Stronger associations were observed in infants born to women with low socioeconomic status, chronic hypertension, diabetes and a high body mass index. This study supports previously reported associations between traffic-related pollutants and LBW and suggests other pollution sources and components, including ultrafine particles, as possible risk factors. Copyright © 2014 Elsevier Inc. All rights reserved.

  16. Investigation of Magnetotelluric Source Effect Based on Twenty Years of Telluric and Geomagnetic Observation

    NASA Astrophysics Data System (ADS)

    Kis, A.; Lemperger, I.; Wesztergom, V.; Menvielle, M.; Szalai, S.; Novák, A.; Hada, T.; Matsukiyo, S.; Lethy, A. M.

    2016-12-01

    The magnetotelluric method is widely applied for the investigation of subsurface structures by imaging the spatial distribution of electric conductivity. The method is based on the experimental determination of the surface electromagnetic impedance tensor (Z) from surface geomagnetic and telluric registrations in two perpendicular orientations. In practical exploration, the accurate estimation of Z necessitates the application of robust statistical methods for two reasons: (1) the geomagnetic and telluric time series are contaminated by man-made noise components, and (2) the non-homogeneous behavior of ionospheric current systems in the period range of interest (ELF-ULF and longer periods) results in systematic deviation of the impedance of individual time windows. Robust statistics mitigate both loads on Z for the purpose of subsurface investigations. However, accurate analysis of the long-term temporal variation of the first and second statistical moments of Z may provide valuable information about the characteristics of the ionospheric source current systems. Temporal variation of the extent, spatial variability, and orientation of the ionospheric source currents has specific effects on the surface impedance tensor. The twenty-year-long geomagnetic and telluric recordings of the Nagycenk Geophysical Observatory provide a unique opportunity to reconstruct the so-called magnetotelluric source effect and obtain information about the spatial and temporal behavior of ionospheric source currents at mid-latitudes. A detailed investigation of the time series of the surface electromagnetic impedance tensor has been carried out in different frequency classes of the ULF range. The presentation aims to provide a brief review of our results related to long-term periodic modulations, up to the solar-cycle scale, and to eventual deviations of the electromagnetic impedance and thus of the reconstructed equivalent ionospheric source effects.

  17. Ghrelin and cholecystokinin in term and preterm human breast milk.

    PubMed

    Kierson, Jennifer A; Dimatteo, Darlise M; Locke, Robert G; Mackley, Amy B; Spear, Michael L

    2006-08-01

    To determine whether ghrelin and cholecystokinin (CCK) are present in significant quantities in term and preterm human breast milk, and to identify their source. Samples were collected from 10 mothers who delivered term infants and 10 mothers who delivered preterm infants. Estimated fat content was measured. Ghrelin and CCK levels were measured in whole and skim breast milk samples using radioimmunoassays (RIA). Reverse transcriptase-polymerase chain reaction (RT-PCR) was performed using RNA from human mammary epithelial cells (hMECs) and mammary gland with primers specific to ghrelin. The median ghrelin level in whole breast milk was 2125 pg/ml, which is significantly higher than normal plasma levels. There was a direct correlation between whole milk ghrelin levels and estimated milk fat content (r=0.84, p<0.001). Both the mammary gland and hMECs produced ghrelin. While CCK was detected in some samples, levels were insignificant. Infant gestational age, birthweight, maternal age, and maternal pre-pregnancy body mass index did not significantly affect the results. Ghrelin, but not CCK, is present in breast milk. Since the mammary gland produces ghrelin message, and ghrelin levels in breast milk are higher than those found in plasma, we conclude that ghrelin is produced and secreted by the breast.

  18. Biomass burning contributions estimated by synergistic coupling of daily and hourly aerosol composition records.

    PubMed

    Nava, S; Lucarelli, F; Amato, F; Becagli, S; Calzolai, G; Chiari, M; Giannoni, M; Traversi, R; Udisti, R

    2015-04-01

    Biomass burning (BB) is a significant source of particulate matter (PM) in many parts of the world. Whereas numerous studies demonstrate the relevance of BB emissions in central and northern Europe, this source has been quantified in only a few cities in southern European countries. In this work, the application of Positive Matrix Factorisation (PMF) allowed a clear identification and quantification of an unexpectedly high biomass burning contribution in Tuscany (central Italy), at the most polluted site of the PATOS project. At this urban background site, BB accounted for 37% of the mass of PM10 (particulate matter with aerodynamic diameter <10 μm) as an annual average, and more than 50% during winter, being the main cause of all the PM10 limit exceedances. Due to the chemical complexity of BB emissions, an accurate assessment of this source contribution is not always easily achievable using just a single tracer. The present work takes advantage of the combination of a long-term daily data set, characterized by an extended chemical speciation, with a short-term, high-time-resolution (1-hour), size-segregated data set obtained by PIXE analyses of streaker samples. The hourly time pattern of the BB source, characterised by a periodic behaviour with peaks starting at about 6 p.m. and lasting through the evening and night, and its strong seasonality, with higher values in the winter period, clearly confirmed the hypothesis of a domestic heating source (also excluding important contributions from wildfires and agricultural waste burning). Copyright © 2014 Elsevier B.V. All rights reserved.
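
    An illustrative stand-in for the factorization step: scikit-learn's NMF decomposes a (samples x species) concentration matrix into non-negative source contributions and source profiles. True PMF additionally weights each entry by its measurement uncertainty, which plain NMF does not, so this is an analogy rather than the method used in the study.

    ```python
    import numpy as np
    from sklearn.decomposition import NMF

    def factorize_pm(X, n_sources=5):
        """X: non-negative (samples x species) concentration matrix."""
        model = NMF(n_components=n_sources, init="nndsvda", max_iter=500)
        W = model.fit_transform(X)    # (samples x sources) contributions
        H = model.components_         # (sources x species) profiles
        return W, H

    # A biomass-burning factor would appear as a profile enriched in its
    # characteristic tracers (e.g. levoglucosan, potassium), with wintertime
    # and evening peaks in the corresponding contribution time series.
    ```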

  19. Preliminary investigation of processes that affect source term identification. Environmental Restoration Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wickliff, D.S.; Solomon, D.K.; Farrow, N.D.

    Solid Waste Storage Area (SWSA) 5 is known to be a significant source of contaminants, especially tritium ({sup 3}H), to the White Oak Creek (WOC) watershed. For example, Solomon et al. (1991) estimated the total {sup 3}H discharge in Melton Branch (most of which originates in SWSA 5) for the 1988 water year to be 1210 Ci. A critical issue for making decisions concerning remedial actions at SWSA 5 is knowing whether the annual contaminant discharge is increasing or decreasing. Because (1) the magnitude of the annual contaminant discharge is highly correlated to the amount of annual precipitation (Solomon et al., 1991) and (2) a significant lag may exist between the time of peak contaminant release from primary sources (i.e., waste trenches) and the time of peak discharge into streams, short-term stream monitoring by itself is not sufficient for predicting future contaminant discharges. In this study we use {sup 3}H to examine the link between contaminant release from primary waste sources and contaminant discharge into streams. By understanding and quantifying subsurface transport processes, realistic predictions of future contaminant discharge, along with an evaluation of the effectiveness of remedial action alternatives, will be possible. The objectives of this study are (1) to characterize the subsurface movement of contaminants (primarily {sup 3}H) with an emphasis on the effects of matrix diffusion; (2) to determine the relative strength of primary vs secondary sources; and (3) to establish a methodology capable of determining whether the {sup 3}H discharge from SWSA 5 to streams is increasing or decreasing.

  20. Preliminary investigation of processes that affect source term identification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wickliff, D.S.; Solomon, D.K.; Farrow, N.D.

    Solid Waste Storage Area (SWSA) 5 is known to be a significant source of contaminants, especially tritium ({sup 3}H), to the White Oak Creek (WOC) watershed. For example, Solomon et al. (1991) estimated the total {sup 3}H discharge in Melton Branch (most of which originates in SWSA 5) for the 1988 water year to be 1210 Ci. A critical issue for making decisions concerning remedial actions at SWSA 5 is knowing whether the annual contaminant discharge is increasing or decreasing. Because (1) the magnitude of the annual contaminant discharge is highly correlated to the amount of annual precipitation (Solomon et al., 1991) and (2) a significant lag may exist between the time of peak contaminant release from primary sources (i.e., waste trenches) and the time of peak discharge into streams, short-term stream monitoring by itself is not sufficient for predicting future contaminant discharges. In this study we use {sup 3}H to examine the link between contaminant release from primary waste sources and contaminant discharge into streams. By understanding and quantifying subsurface transport processes, realistic predictions of future contaminant discharge, along with an evaluation of the effectiveness of remedial action alternatives, will be possible. The objectives of this study are (1) to characterize the subsurface movement of contaminants (primarily {sup 3}H) with an emphasis on the effects of matrix diffusion; (2) to determine the relative strength of primary vs secondary sources; and (3) to establish a methodology capable of determining whether the {sup 3}H discharge from SWSA 5 to streams is increasing or decreasing.

  1. Major 20th century changes of the content and chemical speciation of organic carbon archived in Alpine ice cores: Implications for the long-term change of organic aerosol over Europe

    NASA Astrophysics Data System (ADS)

    Legrand, M.; Preunkert, S.; May, B.; Guilhermet, J.; Hoffman, H.; Wagenbach, D.

    2013-05-01

    Dissolved organic carbon (DOC) and an extended array of organic compounds were investigated in an Alpine ice core covering the 1920-1988 time period. Based on this, a reconstruction was made of the long-term trends of water-soluble organic carbon (WSOC) aerosol in the European atmosphere. It is shown that light mono- and dicarboxylates, humic-like substances, and formaldehyde account together for more than half of the DOC content of ice. This extended chemical speciation of DOC is used to estimate the DOC fraction present in ice that is related to WSOC aerosol and its change over the past. It is suggested that after World War II, the WSOC levels have been enhanced by a factor of 2 and 3 in winter and summer, respectively. In summer, the fossil fuel contribution to the enhancement is estimated to be rather small, suggesting that it arises mainly from an increase in biogenic sources of WSOC.

  2. Functional Analysis in Long-Term Operation of High Power UV-LEDs in Continuous Fluoro-Sensing Systems for Hydrocarbon Pollution

    PubMed Central

    Arques-Orobon, Francisco Jose; Nuñez, Neftali; Vazquez, Manuel; Gonzalez-Posadas, Vicente

    2016-01-01

    This work analyzes the long-term functionality of HP (high-power) UV-LEDs (ultraviolet light-emitting diodes) as the exciting light source in non-contact, continuous 24/7 real-time fluoro-sensing pollutant identification in inland water. Fluorescence is an effective alternative in the detection and identification of hydrocarbons. HP UV-LEDs are more advantageous than classical light sources (xenon and mercury lamps) and help in the development of a low-cost, non-contact, and compact system for continuous real-time fieldwork. This work analyzes the wavelength, the output optical power, the effects of viscosity and temperature of the water pollutants, and the functional consistency over long-term HP UV-LED operation. To accomplish the latter, the degradation of two types of 365 nm HP UV-LEDs was analyzed under two continuous real-system working-mode conditions, by means of temperature-accelerated life tests (ALTs). These tests estimate a mean life of 6200 h under continuous working conditions and of 66,000 h under cycled working conditions (30 s ON and 30 s OFF), over 7 years of 24/7 operating life for hydrocarbon pollution monitoring. In addition, the durability in the face of internal and external system parameter variations is evaluated. PMID:26927113

  3. Functional Analysis in Long-Term Operation of High Power UV-LEDs in Continuous Fluoro-Sensing Systems for Hydrocarbon Pollution.

    PubMed

    Arques-Orobon, Francisco Jose; Nuñez, Neftali; Vazquez, Manuel; Gonzalez-Posadas, Vicente

    2016-02-26

    This work analyzes the long-term functionality of HP (high-power) UV-LEDs (ultraviolet light-emitting diodes) as the exciting light source in non-contact, continuous 24/7 real-time fluoro-sensing pollutant identification in inland water. Fluorescence is an effective alternative in the detection and identification of hydrocarbons. HP UV-LEDs are more advantageous than classical light sources (xenon and mercury lamps) and help in the development of a low-cost, non-contact, and compact system for continuous real-time fieldwork. This work analyzes the wavelength, the output optical power, the effects of viscosity and temperature of the water pollutants, and the functional consistency over long-term HP UV-LED operation. To accomplish the latter, the degradation of two types of 365 nm HP UV-LEDs was analyzed under two continuous real-system working-mode conditions, by means of temperature-accelerated life tests (ALTs). These tests estimate a mean life of 6200 h under continuous working conditions and of 66,000 h under cycled working conditions (30 s ON and 30 s OFF), over 7 years of 24/7 operating life for hydrocarbon pollution monitoring. In addition, the durability in the face of internal and external system parameter variations is evaluated.

  4. Antineutrino analysis for continuous monitoring of nuclear reactors: Sensitivity study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, Christopher; Erickson, Anna

    This paper explores the various contributors to uncertainty in predictions of the antineutrino source term, which is used for reactor antineutrino experiments and is proposed as a safeguard mechanism for future reactor installations. The errors introduced during simulation of the reactor burnup cycle by variation in nuclear reaction cross sections, operating power, and other factors are combined with those from experimental and predicted antineutrino yields resulting from fissions, then evaluated and compared. The most significant contributor to uncertainty in the reactor antineutrino source term, when the reactor was modeled in 3D fidelity with assembly-level heterogeneity, was found to be the uncertainty on the antineutrino yields. Using the reactor simulation uncertainty data, the dedicated observation of a rigorously modeled small, fast reactor by a few-ton near-field detector was estimated to offer a reduction of the uncertainty on antineutrino yields in the 3.0-6.5 MeV range to a few percent for the primary power-producing fuel isotopes, even with zero prior knowledge of the yields.

  5. A high-order relaxation method with projective integration for solving nonlinear systems of hyperbolic conservation laws

    NASA Astrophysics Data System (ADS)

    Lafitte, Pauline; Melis, Ward; Samaey, Giovanni

    2017-07-01

    We present a general, high-order, fully explicit relaxation scheme which can be applied to any system of nonlinear hyperbolic conservation laws in multiple dimensions. The scheme consists of two steps. In a first (relaxation) step, the nonlinear hyperbolic conservation law is approximated by a kinetic equation with stiff BGK source term. Then, this kinetic equation is integrated in time using a projective integration method. After taking a few small (inner) steps with a simple, explicit method (such as direct forward Euler) to damp out the stiff components of the solution, the time derivative is estimated and used in an (outer) Runge-Kutta method of arbitrary order. We show that, with an appropriate choice of inner step size, the time step restriction on the outer time step is similar to the CFL condition for the hyperbolic conservation law. Moreover, the number of inner time steps is also independent of the stiffness of the BGK source term. We discuss stability and consistency, and illustrate with numerical results (linear advection, Burgers' equation and the shallow water and Euler equations) in one and two spatial dimensions.
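
    A minimal sketch of projective integration for a stiff ODE du/dt = f(u): a few small inner forward-Euler steps damp the stiff modes, the time derivative is then estimated from the last two inner iterates, and one large outer step is taken. A first-order outer step is shown for clarity; the paper embeds this derivative estimate in a Runge-Kutta method of arbitrary order, and the toy relaxation problem below is an assumption for illustration.

    ```python
    import numpy as np

    def projective_forward_euler(f, u, dt_inner, n_inner, dt_outer):
        for _ in range(n_inner):              # inner damping steps
            u_prev = u
            u = u + dt_inner * f(u)
        dudt = (u - u_prev) / dt_inner        # derivative estimate after damping
        return u + (dt_outer - n_inner * dt_inner) * dudt   # projective step

    # Example: stiff relaxation du/dt = -(u - 1)/eps with eps = 1e-4; the outer
    # step of 0.1 is ~1000x larger than the stiff time scale.
    eps = 1e-4
    f = lambda u: -(u - 1.0) / eps
    u = np.array([0.0])
    for _ in range(10):
        u = projective_forward_euler(f, u, dt_inner=eps, n_inner=3, dt_outer=0.1)
    ```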

  6. Collective odor source estimation and search in time-variant airflow environments using mobile robots.

    PubMed

    Meng, Qing-Hao; Yang, Wei-Xing; Wang, Yang; Zeng, Ming

    2011-01-01

    This paper addresses the collective odor source localization (OSL) problem in a time-varying airflow environment using mobile robots. A novel OSL methodology which combines odor-source probability estimation and multiple robots' search is proposed. The estimation phase consists of two steps: firstly, the separate probability-distribution map of odor source is estimated via Bayesian rules and fuzzy inference based on a single robot's detection events; secondly, the separate maps estimated by different robots at different times are fused into a combined map by way of distance based superposition. The multi-robot search behaviors are coordinated via a particle swarm optimization algorithm, where the estimated odor-source probability distribution is used to express the fitness functions. In the process of OSL, the estimation phase provides the prior knowledge for the searching while the searching verifies the estimation results, and both phases are implemented iteratively. The results of simulations for large-scale advection-diffusion plume environments and experiments using real robots in an indoor airflow environment validate the feasibility and robustness of the proposed OSL method.
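
    A minimal sketch of the search phase described above: a particle swarm whose fitness is the current odor-source probability map. The Gaussian bump used as the map here is a hypothetical stand-in; in the paper the map is built by Bayesian and fuzzy fusion of the robots' detection events. The PSO coefficients are conventional textbook values, not the paper's tuning.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def prob_map(p):                      # hypothetical stand-in fitness
        return np.exp(-np.sum((p - np.array([8.0, 3.0]))**2) / 4.0)

    n, w, c1, c2 = 6, 0.7, 1.5, 1.5       # robots; inertia; cognitive/social
    x = rng.uniform(0, 10, (n, 2))        # robot positions
    v = np.zeros((n, 2))
    pbest = x.copy()
    gbest = x[np.argmax([prob_map(p) for p in x])].copy()

    for _ in range(50):
        for i in range(n):
            r1, r2 = rng.random(2)
            v[i] = w * v[i] + c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (gbest - x[i])
            x[i] += v[i]
            if prob_map(x[i]) > prob_map(pbest[i]):
                pbest[i] = x[i].copy()
            if prob_map(x[i]) > prob_map(gbest):
                gbest = x[i].copy()
    # gbest converges toward the mode of the probability map (the likely source).
    ```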

  7. Collective Odor Source Estimation and Search in Time-Variant Airflow Environments Using Mobile Robots

    PubMed Central

    Meng, Qing-Hao; Yang, Wei-Xing; Wang, Yang; Zeng, Ming

    2011-01-01

    This paper addresses the collective odor source localization (OSL) problem in a time-varying airflow environment using mobile robots. A novel OSL methodology which combines odor-source probability estimation and multiple robots’ search is proposed. The estimation phase consists of two steps: firstly, the separate probability-distribution map of odor source is estimated via Bayesian rules and fuzzy inference based on a single robot’s detection events; secondly, the separate maps estimated by different robots at different times are fused into a combined map by way of distance based superposition. The multi-robot search behaviors are coordinated via a particle swarm optimization algorithm, where the estimated odor-source probability distribution is used to express the fitness functions. In the process of OSL, the estimation phase provides the prior knowledge for the searching while the searching verifies the estimation results, and both phases are implemented iteratively. The results of simulations for large-scale advection–diffusion plume environments and experiments using real robots in an indoor airflow environment validate the feasibility and robustness of the proposed OSL method. PMID:22346650

  8. Short-term international migration trends in England and Wales from 2004 to 2009.

    PubMed

    Whitworth, Simon; Loukas, Konstantinos; McGregor, Ian

    2011-01-01

    Short-term migration estimates for England and Wales are the latest addition to the Office for National Statistics (ONS) migration statistics. This article discusses definitions of short-term migration and the methodology that is used to produce the estimates. Some of the estimates and the changes in the estimates over time are then discussed. The article includes previously unpublished short-term migration statistics and therefore helps to give a more complete picture of the size and characteristics of short-term international migration for England and Wales than has previously been possible. ONS have identified a clear user requirement for short-term migration estimates at local authority (LA) level. Consequently, attention is also paid to the progress that has been made and future work that is planned to distribute England and Wales short-term migration estimates to LA level.

  9. General theory of remote gaze estimation using the pupil center and corneal reflections.

    PubMed

    Guestrin, Elias Daniel; Eizenman, Moshe

    2006-06-01

    This paper presents a general theory for the remote estimation of the point-of-gaze (POG) from the coordinates of the centers of the pupil and corneal reflections. Corneal reflections are produced by light sources that illuminate the eye and the centers of the pupil and corneal reflections are estimated in video images from one or more cameras. The general theory covers the full range of possible system configurations. Using one camera and one light source, the POG can be estimated only if the head is completely stationary. Using one camera and multiple light sources, the POG can be estimated with free head movements, following the completion of a multiple-point calibration procedure. When multiple cameras and multiple light sources are used, the POG can be estimated following a simple one-point calibration procedure. Experimental and simulation results suggest that the main sources of gaze estimation errors are the discrepancy between the shape of real corneas and the spherical corneal shape assumed in the general theory, and the noise in the estimation of the centers of the pupil and corneal reflections. A detailed example of a system that uses the general theory to estimate the POG on a computer screen is presented.

  10. Head movement compensation in real-time magnetoencephalographic recordings.

    PubMed

    Little, Graham; Boe, Shaun; Bardouille, Timothy

    2014-01-01

    Neurofeedback- and brain-computer interface (BCI)-based interventions can be implemented using real-time analysis of magnetoencephalographic (MEG) recordings. Head movement during MEG recordings, however, can lead to inaccurate estimates of brain activity, reducing the efficacy of the intervention. Most real-time applications in MEG have utilized analyses that do not correct for head movement. Effective means of correcting for head movement are needed to optimize the use of MEG in such applications. Here we provide preliminary validation of a novel analysis technique, real-time source estimation (rtSE), that measures head movement and generates corrected current-source time course estimates in real time. rtSE was applied while recording a calibrated phantom to determine phantom position localization accuracy and source amplitude estimation accuracy under stationary and moving conditions. Results were compared to off-line analysis methods to assess the validity of the rtSE technique. The rtSE method allowed for accurate estimation of current source activity at the source level in real time, and accounted for movement of the source due to changes in phantom position. The rtSE technique requires modifications and specialized analysis of the following MEG workflow steps: data acquisition, head position estimation, source localization, and real-time source estimation. This work explains the technical details and validates each of these steps.

  11. Equations for determining aircraft motions for accident data

    NASA Technical Reports Server (NTRS)

    Bach, R. E., Jr.; Wingrove, R. C.

    1980-01-01

    Procedures for determining a comprehensive accident scenario from a limited data set are reported. The analysis techniques accept and process data from either an Air Traffic Control radar tracking system or a foil flight data recorder. Local meteorological information at the time of the accident and aircraft performance data are also utilized. Equations for the desired aircraft motions and forces are given in terms of elements of the measurement set and certain of their time derivatives. The principal assumption made is that aircraft side force and side-slip angle are negligible. An estimation procedure is outlined for use with each data source. For the foil case, a discussion of exploiting measurement redundancy is given. Since either formulation requires estimates of measurement time derivatives, an algorithm for least squares smoothing is provided.
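
    The reconstruction above needs time derivatives of noisy measured signals (e.g. radar positions) before the motion and force equations can be evaluated. A sliding least-squares polynomial fit (Savitzky-Golay) is one standard way to implement the least-squares smoothing the paper calls for; the sample rate, window length, polynomial order, and toy altitude track below are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.signal import savgol_filter

    dt = 0.5                                      # radar sample interval, s
    t = np.arange(0.0, 60.0, dt)
    rng = np.random.default_rng(1)
    alt = 3000.0 - 5.0 * t + rng.normal(0.0, 15.0, t.size)   # noisy altitude

    # Fit a cubic in a 21-sample sliding window; deriv=1 returns the smoothed
    # first derivative directly (here, the sink rate in m/s).
    alt_smooth = savgol_filter(alt, window_length=21, polyorder=3)
    sink_rate = savgol_filter(alt, window_length=21, polyorder=3,
                              deriv=1, delta=dt)
    ```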

  12. Prompt radiation, shielding and induced radioactivity in a high-power 160 MeV proton linac

    NASA Astrophysics Data System (ADS)

    Magistris, Matteo; Silari, Marco

    2006-06-01

    CERN is designing a 160 MeV proton linear accelerator, both for a future intensity upgrade of the LHC and as a possible first stage of a 2.2 GeV superconducting proton linac. A first estimate of the required shielding was obtained by means of a simple analytical model. The source terms and the attenuation lengths used in the present study were calculated with the Monte Carlo cascade code FLUKA. Detailed FLUKA simulations were performed to investigate the contribution of neutron skyshine and backscattering to the expected dose rate in the areas around the linac tunnel. An estimate of the induced radioactivity in the magnets, vacuum chamber, the cooling system and the concrete shield was performed. A preliminary thermal study of the beam dump is also discussed.
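
    A minimal sketch of the kind of point-source, line-of-sight shielding estimate a simple analytical model provides: the dose rate attenuates exponentially with shield thickness and falls off with the square of the distance. In the paper the source terms and attenuation lengths come from FLUKA; the function and units below are generic placeholders, not values from the study.

    ```python
    import numpy as np

    def dose_rate(H0, r, d, lam):
        """H0 : source term (unshielded dose rate normalized to 1 m)
        r  : distance from source to scoring point, m
        d  : shield thickness along the line of sight, m
        lam: attenuation length of the shield material, m"""
        return H0 * np.exp(-d / lam) / r**2

    # e.g. increasing the concrete thickness from 2 to 4 attenuation lengths
    # reduces the transmitted dose rate by a further factor exp(2) ≈ 7.4.
    ```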

  13. Model documentation report: Residential sector demand module of the national energy modeling system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    This report documents the objectives, analytical approach, and development of the National Energy Modeling System (NEMS) Residential Sector Demand Module. The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, and FORTRAN source code. This reference document provides a detailed description for energy analysts, other users, and the public. The NEMS Residential Sector Demand Module is currently used for mid-term forecasting purposes and energy policy analysis over the forecast horizon of 1993 through 2020. The model generates forecasts of energy demand for the residential sector by service, fuel, and Census Division. Policy impacts resulting from new technologies, market incentives, and regulatory changes can be estimated using the module. 26 refs., 6 figs., 5 tabs.

  14. An Interactive Computer Package for Use with Simulation Models Which Performs Multidimensional Sensitivity Analysis by Employing the Techniques of Response Surface Methodology.

    DTIC Science & Technology

    1984-12-01

    The total sum of squares at the center points minus the correction factor for the mean at the center points gives the pure-error sum of squares (SS_pe = Y'Y - n1*Ybar^2), where n1 is the number of center points; the lack-of-fit sum of squares is the remainder of the residual (SS_lack = SS_res - SS_pe). The sum of squares due to pure error estimates sigma^2, and the sum of squares due to lack-of-fit estimates sigma^2 plus a bias term if the fitted model is inadequate. ANOVA for Response Surface Methodology (Source; d.f.; SS; MS): Regression; n; b'X'Y; b'X'Y/n. Residual; m-n; Y'Y - b'X'Y; (Y'Y - b'X'Y)/(m-n). Pure Error; n1-1; Y'Y - n1*Ybar^2; SS_pe/(n1-1).

  15. Precision Orbit Derived Atmospheric Density: Development and Performance

    NASA Astrophysics Data System (ADS)

    McLaughlin, C.; Hiatt, A.; Lechtenberg, T.; Fattig, E.; Mehta, P.

    2012-09-01

    Precision orbit ephemerides (POE) are used to estimate atmospheric density along the orbits of CHAMP (Challenging Minisatellite Payload) and GRACE (Gravity Recovery and Climate Experiment). The densities are calibrated against accelerometer derived densities and considering ballistic coefficient estimation results. The 14-hour density solutions are stitched together using a linear weighted blending technique to obtain continuous solutions over the entire mission life of CHAMP and through 2011 for GRACE. POE derived densities outperform the High Accuracy Satellite Drag Model (HASDM), Jacchia 71 model, and NRLMSISE-2000 model densities when comparing cross correlation and RMS with accelerometer derived densities. Drag is the largest error source for estimating and predicting orbits for low Earth orbit satellites. This is one of the major areas that should be addressed to improve overall space surveillance capabilities; in particular, catalog maintenance. Generally, density is the largest error source in satellite drag calculations and current empirical density models such as Jacchia 71 and NRLMSISE-2000 have significant errors. Dynamic calibration of the atmosphere (DCA) has provided measurable improvements to the empirical density models and accelerometer derived densities of extremely high precision are available for a few satellites. However, DCA generally relies on observations of limited accuracy and accelerometer derived densities are extremely limited in terms of measurement coverage at any given time. The goal of this research is to provide an additional data source using satellites that have precision orbits available using Global Positioning System measurements and/or satellite laser ranging. These measurements strike a balance between the global coverage provided by DCA and the precise measurements of accelerometers. The temporal resolution of the POE derived density estimates is around 20-30 minutes, which is significantly worse than that of accelerometer derived density estimates. However, major variations in density are observed in the POE derived densities. These POE derived densities in combination with other data sources can be assimilated into physics based general circulation models of the thermosphere and ionosphere with the possibility of providing improved density forecasts for satellite drag analysis. POE derived density estimates were initially developed using CHAMP and GRACE data so comparisons could be made with accelerometer derived density estimates. This paper presents the results of the most extensive calibration of POE derived densities compared to accelerometer derived densities and provides the reasoning for selecting certain parameters in the estimation process. The factors taken into account for these selections are the cross correlation and RMS performance compared to the accelerometer derived densities and the output of the ballistic coefficient estimation that occurs simultaneously with the density estimation. This paper also presents the complete data set of CHAMP and GRACE results and shows that the POE derived densities match the accelerometer densities better than empirical models or DCA. This paves the way to expand the POE derived densities to include other satellites with quality GPS and/or satellite laser ranging observations.
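
    A minimal sketch of the linear weighted blending used to stitch the overlapping density solutions: where two consecutive solutions cover the same epochs, each point is a convex combination of the two, with the weight ramping linearly from the older solution to the newer one across the overlap. The array layout is an illustrative assumption.

    ```python
    import numpy as np

    def blend(seg_a, seg_b, n_overlap):
        """seg_a, seg_b: consecutive density series; the last n_overlap samples
        of seg_a cover the same epochs as the first n_overlap samples of seg_b."""
        w = np.linspace(0.0, 1.0, n_overlap)       # 0 -> keep a, 1 -> keep b
        blended = (1.0 - w) * seg_a[-n_overlap:] + w * seg_b[:n_overlap]
        return np.concatenate([seg_a[:-n_overlap], blended, seg_b[n_overlap:]])
    ```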

  16. Regional Earthquake Shaking and Loss Estimation

    NASA Astrophysics Data System (ADS)

    Sesetyan, K.; Demircioglu, M. B.; Zulfikar, C.; Durukal, E.; Erdik, M.

    2009-04-01

    This study, conducted under the JRA-3 component of the EU NERIES Project, develops a methodology and software (ELER) for the rapid estimation of earthquake shaking and losses in the Euro-Mediterranean region. This multi-level methodology, developed together with researchers from Imperial College, NORSAR, and ETH-Zurich, is capable of incorporating regional variability and sources of uncertainty stemming from ground motion predictions, fault finiteness, site modifications, the inventory of physical and social elements subjected to earthquake hazard, and the associated vulnerability relationships. GRM Risk Management, Inc. of Istanbul serves as subcontractor for the coding of the ELER software. The methodology encompasses the following general steps: 1. Finding the most likely location of the source of the earthquake using a regional seismotectonic data base and basic source parameters and, if and when possible, estimating fault rupture parameters from rapid inversion of data from on-line stations. 2. Estimation of the spatial distribution of selected ground motion parameters through region-specific ground motion attenuation relationships and shear wave velocity distributions (Shake Mapping). 4. Incorporation of strong ground motion and other empirical macroseismic data for the improvement of the Shake Map. 5. Estimation of the losses (damage, casualty, and economic) at different levels of sophistication (0, 1, and 2) commensurate with the availability of the inventory of the human-built environment (Loss Mapping). Both the Level 0 (similar to the PAGER system of USGS) and Level 1 analyses of the ELER routine are based on obtaining intensity distributions analytically and estimating the total number of casualties and their geographic distribution, either using regionally adjusted intensity-casualty or magnitude-casualty correlations (Level 0) or using regional building inventory data bases (Level 1). For given basic source parameters the intensity distributions can be computed using: (a) regional intensity attenuation relationships, (b) intensity correlations with attenuation-relationship-based PGV, PGA, and spectral amplitudes, and (c) intensity correlations with a synthetic Fourier amplitude spectrum. In the Level 1 analysis, EMS98-based building vulnerability relationships are used for regional estimates of building damage and casualty distributions. Results obtained from pilot applications of the Level 0 and Level 1 analysis modes of the ELER software to the 1999 M 7.4 Kocaeli, 1995 M 6.1 Dinar, and 2007 M 5.4 Bingol earthquakes, in terms of ground shaking and losses, are presented, and comparisons with the observed losses are made. The regional earthquake shaking and loss information is intended for dissemination in a timely manner to related agencies for the planning and coordination of post-earthquake emergency response. However, the same software can also be used for scenario earthquake loss estimation and related Monte-Carlo-type simulations.
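
    A minimal sketch of a Level 0-style chain as described above: a regional intensity attenuation relationship turns source parameters into an intensity field, and an intensity-casualty correlation turns intensity into expected losses. Every functional form and coefficient below is a hypothetical placeholder, not a relationship used in ELER.

    ```python
    import numpy as np

    def intensity(mag, r_km, a=1.5, b=3.0, c=0.002):
        # Hypothetical intensity attenuation: grows with magnitude, decays
        # with log-distance plus an anelastic term.
        return a * mag - b * np.log10(r_km + 1.0) - c * r_km

    def expected_casualty_rate(I, i0=7.0, k=0.3):
        # Hypothetical intensity-casualty correlation: zero below intensity
        # i0, then increasing with shaking intensity.
        return np.where(I > i0, k * (I - i0) ** 2, 0.0) / 100.0

    r = np.linspace(1.0, 200.0, 200)     # distances from the source, km
    I = intensity(7.4, r)                # e.g. an M 7.4 scenario event
    rates = expected_casualty_rate(I)    # casualty rate per exposed person
    ```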

  17. An evaluation of data-driven motion estimation in comparison to the usage of external-surrogates in cardiac SPECT imaging

    PubMed Central

    Mukherjee, Joyeeta Mitra; Hutton, Brian F; Johnson, Karen L; Pretorius, P Hendrik; King, Michael A

    2014-01-01

    Motion estimation methods in single photon emission computed tomography (SPECT) can be classified into methods which depend on just the emission data (data-driven) or those that use some other source of information such as an external surrogate. The surrogate-based methods estimate the motion exhibited externally, which may not correlate exactly with the movement of organs inside the body. The accuracy of data-driven strategies, on the other hand, is affected by the type and timing of motion occurrence during acquisition, the source distribution, and various degrading factors such as attenuation, scatter, and system spatial resolution. The goal of this paper is to investigate the performance of two data-driven motion estimation schemes based on the rigid-body registration of projections of motion-transformed source distributions to the acquired projection data for cardiac SPECT studies. Comparison is also made of six intensity-based registration metrics to an external surrogate-based method. In the data-driven schemes, a partially reconstructed heart is used as the initial source distribution. The partially reconstructed heart has inaccuracies due to limited-angle artifacts resulting from using only a part of the SPECT projections acquired while the patient maintained the same pose. The performance of different cost functions in quantifying consistency with the SPECT projection data in the data-driven schemes was compared for clinically realistic patient motion occurring as discrete pose changes, one or two times during acquisition. The six intensity-based metrics studied were mean-squared difference (MSD), mutual information (MI), normalized mutual information (NMI), pattern intensity (PI), normalized cross-correlation (NCC), and entropy of the difference (EDI). Quantitative and qualitative analysis of the performance is reported using Monte-Carlo simulations of a realistic heart phantom including degradation factors such as attenuation, scatter, and system spatial resolution. Further, the visual appearance of motion-corrected images using data-driven motion estimates was compared to that obtained using the external motion-tracking system in patient studies. Pattern intensity and normalized mutual information cost functions were observed to have the best performance in terms of lowest average position error and stability with degradation of image quality of the partial reconstruction in simulations. In all patients, the visual quality of PI-based estimation was either significantly better than or comparable to NMI-based estimation. The best visual quality was obtained with PI-based estimation in 1 of the 5 patient studies, and with external-surrogate-based correction in 3 out of 5 patients. In the remaining patient study there was little motion and all methods yielded similar visual image quality. PMID:24107647
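
    Minimal sketches of three of the intensity-based cost functions compared above, evaluated between an acquired projection p and a reprojection q of the motion-transformed source distribution. The histogram-based MI estimate is shown in its simplest form; the study's implementations may differ in detail.

    ```python
    import numpy as np

    def msd(p, q):                          # mean-squared difference
        return np.mean((p - q)**2)

    def ncc(p, q):                          # normalized cross-correlation
        pa, qa = p - p.mean(), q - q.mean()
        return np.sum(pa * qa) / (np.sqrt(np.sum(pa**2) * np.sum(qa**2)) + 1e-12)

    def mutual_information(p, q, bins=32):  # histogram estimate of MI
        hist, _, _ = np.histogram2d(p.ravel(), q.ravel(), bins=bins)
        pxy = hist / hist.sum()
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)
        nz = pxy > 0
        return np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))
    ```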

  18. Improved estimation of sediment source contributions by concentration-dependent Bayesian isotopic mixing model

    NASA Astrophysics Data System (ADS)

    Ram Upadhayay, Hari; Bodé, Samuel; Griepentrog, Marco; Bajracharya, Roshan Man; Blake, Will; Cornelis, Wim; Boeckx, Pascal

    2017-04-01

    The implementation of compound-specific stable isotope (CSSI) analyses of biotracers (e.g. fatty acids, FAs) as constraints on sediment-source contributions has become increasingly relevant for understanding the origin of sediments in catchments. CSSI fingerprinting of sediment uses the CSSI signatures of biotracers as input to an isotopic mixing model (IMM) to apportion source soil contributions. So far, source studies have relied on the assumption of linear mixing of the CSSI signatures of the sources in the sediment, without accounting for potential effects of source biotracer concentrations. Here we evaluated the effect of FA concentrations in sources on the accuracy of source contribution estimates in artificial soil mixtures of three well-separated land-use sources. Soil samples from the land-use sources were mixed to create three groups of artificial mixtures with known source contributions. Sources and artificial mixtures were analysed for δ13C of FAs using gas chromatography-combustion-isotope ratio mass spectrometry. The source contributions to the mixtures were estimated using MixSIAR, a Bayesian isotopic mixing model, both with and without concentration dependence. The concentration-dependent MixSIAR provided the closest estimates to the known source contributions of the artificial mixtures (mean absolute error, MAE = 10.9%, and standard error, SE = 1.4%). In contrast, the concentration-independent MixSIAR with post-mixing correction of tracer proportions, based on the aggregated FA concentrations of the sources, biased the source contributions (MAE = 22.0%, SE = 3.4%). This study highlights the importance of accounting for the potential effect of source FA concentrations on isotopic mixing in sediments, which adds realism to the mixing model and allows more accurate estimates of source contributions to the mixture. The potential influence of FA concentration on the CSSI signature of sediments is an important underlying factor that determines whether the isotopic signature of a given source is observable even after equilibrium. Inclusion of the FA concentrations of the sources in the IMM formulation should therefore be standard procedure for accurate estimation of source contributions. The post-model correction approach that dominates CSSI fingerprinting causes bias, especially if the FA concentrations of the sources differ substantially.
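
    The core of concentration-dependent mixing can be written in a few lines. The sketch below, using hypothetical δ13C values and FA concentrations, shows how ignoring concentration weighting shifts the predicted mixture signature; a full MixSIAR analysis additionally infers the source proportions and their uncertainties:

        import numpy as np

        def mixture_delta(f, delta, conc):
            # Concentration-dependent mixing: each source's isotopic signature is
            # weighted by its mass fraction f_i AND its tracer concentration C_i:
            #     delta_mix = sum(f_i * C_i * delta_i) / sum(f_i * C_i)
            # Passing conc = ones recovers the linear, concentration-independent model.
            f, delta, conc = map(np.asarray, (f, delta, conc))
            w = f * conc
            return np.sum(w * delta) / np.sum(w)

        # Three hypothetical land-use sources: delta13C of one FA (permil) and its
        # concentration in the source soil (arbitrary units); source 1 is FA-rich
        delta = np.array([-30.0, -27.0, -24.0])
        conc = np.array([5.0, 1.0, 2.0])
        f_true = np.array([0.2, 0.5, 0.3])   # known mixture proportions

        print("concentration-dependent :", mixture_delta(f_true, delta, conc))
        print("concentration-independent:", mixture_delta(f_true, delta, np.ones(3)))

    The two printed signatures differ by nearly 1 permil for these illustrative numbers, which is the kind of discrepancy that propagates into biased source apportionment when the linear model is inverted.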

  19. Estimation of Attitude and External Acceleration Using Inertial Sensor Measurement During Various Dynamic Conditions

    PubMed Central

    Lee, Jung Keun; Park, Edward J.; Robinovitch, Stephen N.

    2012-01-01

    This paper proposes a Kalman filter-based attitude (i.e., roll and pitch) estimation algorithm using an inertial sensor composed of a triaxial accelerometer and a triaxial gyroscope. In particular, the proposed algorithm has been developed for accurate attitude estimation during dynamic conditions, in which external acceleration is present. Although external acceleration is the main source of attitude estimation error, and despite the need for its accurate estimation in many applications, this problem, which can be critical for attitude estimation, has not been addressed explicitly in the literature. Accordingly, this paper addresses the combined estimation problem of attitude and external acceleration. Experimental tests were conducted to verify the performance of the proposed algorithm in various dynamic condition settings and to provide further insight into the variations in estimation accuracy. Furthermore, two different approaches for dealing with the estimation problem during dynamic conditions were compared: a threshold-based switching approach versus an acceleration model-based approach. Based on an external acceleration model, the proposed algorithm was capable of estimating accurate attitudes and external accelerations for short accelerated periods, showing its high effectiveness during short-term fast dynamic conditions. Conversely, when the testing condition involved prolonged high external accelerations, the proposed algorithm exhibited gradually increasing errors. However, as soon as the condition returned to static or quasi-static conditions, the algorithm was able to stabilize the estimation error, regaining its high estimation accuracy. PMID:22977288
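
    The fusion problem underlying such algorithms can be illustrated with a simple complementary filter. The sketch below is not the paper's Kalman filter with an explicit external-acceleration model; it only shows the basic trade-off between gyro integration (accurate short-term, drifts long-term) and the accelerometer gravity reference (stable long-term, corrupted by external acceleration):

        import numpy as np

        def accel_attitude(ax, ay, az):
            # Roll and pitch from the gravity direction sensed by the accelerometer;
            # valid only when external acceleration is negligible
            roll = np.arctan2(ay, az)
            pitch = np.arctan2(-ax, np.hypot(ay, az))
            return roll, pitch

        def complementary_filter(gyro, accel, dt, alpha=0.98):
            # gyro: (N,3) rad/s, accel: (N,3) m/s^2; returns (N,2) roll/pitch in rad.
            # Treats gyro x/y rates as roll/pitch rates (small-angle simplification).
            att = np.zeros((len(gyro), 2))
            att[0] = accel_attitude(*accel[0])
            for k in range(1, len(gyro)):
                pred = att[k - 1] + gyro[k, :2] * dt            # gyro propagation
                meas = np.array(accel_attitude(*accel[k]))      # gravity correction
                att[k] = alpha * pred + (1 - alpha) * meas
            return att

    A Kalman filter with an external-acceleration model effectively makes the blending weight adaptive, trusting the accelerometer less whenever the estimated external acceleration is large, which is what allows accurate estimates during short accelerated periods.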

  20. Assessing the short-term clock drift of early broadband stations with burst events of the 26 s persistent and localized microseism

    NASA Astrophysics Data System (ADS)

    Xie, J.; Ni, S.; Chu, R.; Xia, Y.

    2017-12-01

    Accurate seismometer clocks play an important role in seismological studies including earthquake location and tomography. However, some seismic stations may have clock drifts larger than 1 second, especially in the early days of the global seismic network. The 26 s Persistent Localized (PL) microseism event in the Gulf of Guinea sometimes excites strong and coherent signals and can be used as a repeating source for assessing the stability of seismometer clocks. Taking station GSC/TS in southern California, USA, as an example, the 26 s PL signal can be easily observed in the ambient Noise Cross-correlation Function (NCF) between GSC/TS and a remote station. The variation of the travel time of this 26 s signal in the NCF is used to infer the clock error. A drastic clock error is detected during June 1992. This short-term clock error is confirmed by both teleseismic and local earthquake records, with a magnitude of ±25 s. Using the 26 s PL source, the clock can be validated for historical records of sparsely distributed stations, where the usual NCF of short-period microseisms (<20 s) might be less effective because of attenuation over long interstation distances. However, this method suffers from a cycle-skipping problem, since the 26 s period makes measured shifts ambiguous modulo one cycle, and should be verified with teleseismic/local P waves. A location change of the 26 s PL source may also influence the measured clock drift; using regional stations with stable clocks, we estimate the possible location change of the source.
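
    A minimal Python sketch of the travel-time measurement described above, assuming the reference and daily NCFs are available as plain arrays; the synthetic example uses a narrow-band 26 s wavelet, and the comment flags the cycle-skipping caveat:

        import numpy as np

        def clock_shift(ncf_ref, ncf_day, dt):
            # Estimate the time shift (s) of a daily NCF relative to a reference
            # NCF from the peak of their cross-correlation. With a quasi-
            # monochromatic 26 s source, shifts are only resolved modulo ~26 s
            # (the cycle-skipping ambiguity noted in the abstract).
            cc = np.correlate(ncf_day, ncf_ref, mode="full")
            lag = np.argmax(cc) - (len(ncf_ref) - 1)
            return lag * dt

        # Synthetic check: a 26 s narrow-band wavelet delayed by 4 s
        dt = 1.0
        t = np.arange(0, 600, dt)
        ref = np.sin(2 * np.pi * t / 26) * np.exp(-((t - 300) / 60) ** 2)
        day = np.sin(2 * np.pi * (t - 4) / 26) * np.exp(-((t - 304) / 60) ** 2)
        print(clock_shift(ref, day, dt))  # ~4 s, up to the 26 s cycle ambiguity

    Tracking this shift day by day yields a clock-drift time series; jumps larger than a fraction of a cycle are the cases that must be checked against teleseismic or local P-wave arrivals.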
