Science.gov

Sample records for distributed source term

  1. Spatial distribution of HTO activity in unsaturated soil depth in the vicinity of long-term release source

    SciTech Connect

    Golubev, A.; Golubeva, V.; Mavrin, S.

    2015-03-15

    Previous studies reported a correlation between the HTO activity distribution in the unsaturated soil layer and long-term atmospheric releases of HTO in the vicinity of the Savannah River Site. The Tritium Working Group of the BIOMASS Programme performed a model-model intercomparison study of HTO transport from the atmosphere to unsaturated soil and evaluated the HTO activity distribution in the unsaturated soil layer in the vicinity of permanent atmospheric sources. The Tritium Working Group also reported such a correlation; however, it concluded that experimental data sets are needed to confirm the correlation and to validate the associated computer models. (authors)

  2. Design parameters and source terms: Volume 3, Source terms

    SciTech Connect

    Not Available

    1987-10-01

    The Design Parameters and Source Terms Document was prepared in accordance with a DOE request and to provide data for the environmental impact study to be performed in the future for the Deaf Smith County, Texas, site for a nuclear waste repository in salt. This document updates a previous unpublished report by Stearns Catalytic Corporation (SCC), entitled ''Design Parameters and Source Terms for a Two-Phase Repository in Salt,'' 1985, to the level of the Site Characterization Plan - Conceptual Design Report. The previous unpublished SCC study identifies the data needs for the Environmental Assessment effort for seven possible salt repository sites. 11 refs., 9 tabs.

  3. Long-term measurements of particle number size distributions and the relationships with air mass history and source apportionment in the summer of Beijing

    NASA Astrophysics Data System (ADS)

    Wang, Z. B.; Hu, M.; Wu, Z. J.; Yue, D. L.; He, L. Y.; Huang, X. F.; Liu, X. G.; Wiedensohler, A.

    2013-02-01

    A series of long-term and temporary measurements were conducted to study the improvement of air quality in Beijing during the Olympic Games period (8-24 August 2008). To evaluate the actions taken to improve the air quality, comparisons of particle number and volume size distributions for August 2008 and 2004-2007 were performed. The total particle number and volume concentrations were 14 000 cm⁻³ and 37 μm³ cm⁻³ in August 2008, respectively. These were reductions of 41% and 35% compared with the mean values of August 2004-2007. A cluster analysis of air mass history and a source apportionment were performed to explore the reasons for the reduction of particle concentrations. Back trajectories were classified into five major clusters. Air masses from the south are always associated with pollution events during the summertime in Beijing. In August 2008, the frequency of air masses arriving from the south was twice as high as the average of the previous years; however, these southerly air masses did not result in elevated particle volume concentrations in Beijing. This result implies that air mass history was not the key factor explaining the reduced particle number and volume concentrations during the Beijing 2008 Olympic Games. Four factors influencing particle concentrations were found using a positive matrix factorization (PMF) model and identified as local traffic emissions, remote traffic emissions, combustion sources, and secondary transformation. The reductions in the four sources were calculated to be 47%, 44%, 43% and 30%, respectively. The significant reductions in particle number and volume concentrations may be attributed to the actions taken to limit primary emissions, especially from traffic and combustion sources.

  4. Long-term measurements of particle number size distributions and the relationships with air mass history and source apportionment in the summer of Beijing

    NASA Astrophysics Data System (ADS)

    Wang, Z. B.; Hu, M.; Wu, Z. J.; Yue, D. L.; He, L. Y.; Huang, X. F.; Liu, X. G.; Wiedensohler, A.

    2013-10-01

    A series of long-term and temporary measurements were conducted to study the improvement of air quality in Beijing during the Olympic Games period (8-24 August 2008). To evaluate the actions taken to improve the air quality, comparisons of particle number and volume size distributions for August 2008 and 2004-2007 were performed. The total particle number and volume concentrations were 14 000 cm⁻³ and 37 μm³ cm⁻³ in August 2008, respectively. These were reductions of 41% and 35% compared with the mean values of August 2004-2007. A cluster analysis of air mass history and a source apportionment were performed to explore the reasons for the reduction of particle concentrations. Back trajectories were classified into five major clusters. Air masses from the south are always associated with pollution events during the summertime in Beijing. In August 2008, the frequency of air masses arriving from the south was 1.3 times higher than the average of the previous years, which however did not result in elevated particle volume concentrations in Beijing. Therefore, the reduced particle number and volume concentrations during the 2008 Beijing Olympic Games cannot be explained by meteorological conditions alone. Four factors influencing particle concentrations were found using a positive matrix factorization (PMF) model and identified as local traffic emissions, remote traffic emissions, combustion sources, and secondary transformation. The reductions in the four sources were calculated to be 47%, 44%, 43% and 30%, respectively. The significant reductions in particle number and volume concentrations may be attributed to the actions taken to limit primary emissions, especially from traffic and combustion sources.
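
    The factor-analytic step described above can be illustrated with a small sketch. The code below uses scikit-learn's NMF as a stand-in for PMF (true PMF additionally weights each data point by its measurement uncertainty); the data, factor count and settings are illustrative assumptions, not values from the study.

```python
# Minimal sketch: factor-analytic source apportionment in the spirit of PMF.
# NMF (scikit-learn) stands in for PMF; true PMF also weights each data point
# by its measurement uncertainty. All data below are synthetic placeholders.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_samples, n_size_bins, n_factors = 200, 30, 4

# Synthetic "true" factor profiles (size-distribution shapes) and contributions.
true_profiles = rng.random((n_factors, n_size_bins))
true_contrib = rng.gamma(shape=2.0, scale=1.0, size=(n_samples, n_factors))
X = true_contrib @ true_profiles + 0.01 * rng.random((n_samples, n_size_bins))

# Factorize X ≈ G · F with non-negative G (contributions) and F (profiles).
model = NMF(n_components=n_factors, init="nndsvda", max_iter=1000, random_state=0)
G = model.fit_transform(X)          # time series of factor contributions
F = model.components_               # factor profiles (candidate sources)

# The mean contribution of each factor in two periods (e.g. August 2008 vs. the
# 2004-2007 mean) could then be compared to quantify per-source reductions.
print("reconstruction error:", model.reconstruction_err_)
```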

  5. Infrared image processing devoted to thermal non-contact characterization-Applications to Non-Destructive Evaluation, Microfluidics and 2D source term distribution for multispectral tomography

    NASA Astrophysics Data System (ADS)

    Batsale, Jean-Christophe; Pradere, Christophe

    2015-11-01

    The cost of IR cameras is steadily decreasing. Beyond the preliminary calibration step and the overall instrumentation, infrared image processing is then one of the key steps for applications in very broad domains. Generally, the IR images come from the transient temperature field related to the emission of a black surface in response to external or internal heating (active IR thermography). The first applications were devoted to so-called thermal Non-Destructive Evaluation methods, considering a thin sample and 1D transient heat diffusion through the sample (transverse diffusion). With simplified assumptions on the transverse diffusion, in-plane diffusion and transport phenomena can also be considered. A general equation can be applied to balance the heat transfer at the pixel scale, or between groups of pixels, in order to estimate several fields of thermophysical properties (heterogeneous in-plane diffusivity fields, flow distributions, source terms). There are many possible strategies for processing the large amount of spatially and temporally distributed data (prior integral transformation of the images, compression, elimination of non-useful areas, etc.), generally based on the need to analyse the spatial and temporal derivatives of the temperature field. Several illustrative examples related to the Non-Destructive Evaluation of heterogeneous solids, the thermal characterization of chemical reactions in microfluidic channels and the design of systems for multispectral tomography will be presented.

  6. Phase 1 immobilized low-activity waste operational source term

    SciTech Connect

    Burbank, D.A.

    1998-03-06

    This report presents an engineering analysis of the Phase 1 privatization feeds to establish an operational source term for storage and disposal of immobilized low-activity waste packages at the Hanford Site. The source term information is needed to establish a preliminary estimate of the numbers of remote-handled and contact-handled waste packages. A discussion of the uncertainties and their impact on the source term and waste package distribution is also presented. It should be noted that this study is concerned with operational impacts only. Source terms used for accident scenarios would differ due to alpha and beta radiation which were not significant in this study.

  7. HTGR Mechanistic Source Terms White Paper

    SciTech Connect

    Wayne Moe

    2010-07-01

    The primary purposes of this white paper are: (1) to describe the proposed approach for developing event-specific mechanistic source terms for HTGR design and licensing, (2) to describe the technology development programs required to validate the design methods used to predict these mechanistic source terms, and (3) to obtain agreement from the NRC that, subject to appropriate validation through the technology development program, the approach for developing event-specific mechanistic source terms is acceptable.

  8. SOURCE TERMS FOR AVERAGE DOE SNF CANISTERS

    SciTech Connect

    K. L. Goluoglu

    2000-06-09

    The objective of this calculation is to generate source terms for each type of Department of Energy (DOE) spent nuclear fuel (SNF) canister that may be disposed of at the potential repository at Yucca Mountain. The scope of this calculation is limited to generating source terms for average DOE SNF canisters, and is not intended to be used for subsequent calculations requiring bounding source terms. This calculation is to be used in future Performance Assessment calculations, or other shielding or thermal calculations requiring average source terms.

  9. Source term calculations for assessing radiation dose to equipment

    SciTech Connect

    Denning, R.S.; Freeman-Kelly, R.; Cybulskis, P.; Curtis, L.A.

    1989-07-01

    This study examines results of analyses performed with the Source Term Code Package to develop updated source terms using NUREG-0956 methods. The updated source terms are to be used to assess the adequacy of current regulatory source terms used as the basis for equipment qualification. Time-dependent locational distributions of radionuclides within a containment following a severe accident have been developed. The Surry reactor has been selected in this study as representative of PWR containment designs. Similarly, the Peach Bottom reactor has been used to examine radionuclide distributions in boiling water reactors. The time-dependent inventory of each key radionuclide is provided in terms of its activity in curies. The data are to be used by Sandia National Laboratories to perform shielding analyses to estimate radiation dose to equipment in each containment design. See NUREG/CR-5175, ''Beta and Gamma Dose Calculations for PWR and BWR Containments.'' 6 refs., 11 tabs.

  10. Calculation of source terms for NUREG-1150

    SciTech Connect

    Breeding, R.J.; Williams, D.C.; Murfin, W.B.; Amos, C.N.; Helton, J.C.

    1987-10-01

    The source terms estimated for NUREG-1150 are generally based on the Source Term Code Package (STCP), but the actual source term calculations used in computing risk are performed by much smaller codes which are specific to each plant. This was done because the method of estimating the uncertainty in risk for NUREG-1150 requires hundreds of source term calculations for each accident sequence. This is clearly impossible with a large, detailed code like the STCP. The small plant-specific codes are based on simple algorithms and utilize adjustable parameters. The values of the parameters appearing in these codes are derived from the available STCP results. To determine the uncertainty in the estimation of the source terms, these parameters were varied as specified by an expert review group. This method was used to account for the uncertainties in the STCP results and the uncertainties in phenomena not considered by the STCP.

  11. SOURCE TERMS FOR HLW GLASS CANISTERS

    SciTech Connect

    J.S. Tang

    2000-08-15

    This calculation is prepared by the Monitored Geologic Repository (MGR) Waste Package Design Section. The objective of this calculation is to determine the source terms, including radionuclide inventory, decay heat, and radiation sources due to gamma rays and neutrons, for the high-level radioactive waste (HLW) from the West Valley Demonstration Project (WVDP), Savannah River Site (SRS), Hanford Site (HS), and Idaho National Engineering and Environmental Laboratory (INEEL). This calculation also determines the source terms of the canister containing the SRS HLW glass and immobilized plutonium. The scope of this calculation is limited to source terms for a time period out to one million years. The results of this calculation may be used to carry out performance assessment of the potential repository and to evaluate radiation environments surrounding the waste packages (WPs). This calculation was performed in accordance with the Development Plan ''Source Terms for HLW Glass Canisters'' (Ref. 7.24).

  12. Assessing sensitivity of source term estimation

    NASA Astrophysics Data System (ADS)

    Long, Kerrie J.; Haupt, Sue Ellen; Young, George S.

    2010-04-01

    Source term estimation algorithms compute unknown atmospheric transport and dispersion modeling variables from concentration observations made by sensors in the field. Insufficient spatial and temporal resolution in the meteorological data as well as inherent uncertainty in the wind field data make source term estimation and the prediction of subsequent transport and dispersion extremely difficult. This work addresses the question: how many sensors are necessary in order to successfully estimate the source term and meteorological variables required for atmospheric transport and dispersion modeling? The source term estimation system presented here uses a robust optimization technique - a genetic algorithm (GA) - to find the combination of source location, source height, source strength, surface wind direction, surface wind speed, and time of release that produces a concentration field that best matches the sensor observations. The approach is validated using the Gaussian puff as the dispersion model in identical twin numerical experiments. The limits of the system are tested by incorporating additive and multiplicative noise into the synthetic data. The minimum requirements for data quantity and quality are determined by an extensive grid sensitivity analysis. Finally, a metric is developed for quantifying the minimum number of sensors necessary to accurately estimate the source term and to obtain the relevant wind information.
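
    A minimal sketch of the idea follows, assuming a toy Gaussian puff forward model and a small evolutionary loop standing in for the genetic algorithm; the parameter ranges, puff-growth law and sensor layout are invented for illustration and are not those of the cited system.

```python
# Minimal sketch of GA-style source term estimation with a Gaussian puff forward
# model. Illustrative only; the cited system estimates additional variables and
# uses a full GA. All constants and parameter ranges here are assumptions.
import numpy as np

rng = np.random.default_rng(1)
sensors = rng.uniform(200.0, 2000.0, size=(20, 2))   # sensor x, y positions [m]
t_obs = 600.0                                         # time after release [s]

def puff_conc(params, xy, t):
    """Ground-level concentration of a single Gaussian puff over flat terrain."""
    x0, y0, q, u, theta = params                      # location, mass, wind speed/direction
    dx = xy[:, 0] - (x0 + u * t * np.cos(theta))
    dy = xy[:, 1] - (y0 + u * t * np.sin(theta))
    sig = 0.08 * u * t + 10.0                         # crude puff-growth law (assumed)
    norm = q / ((2.0 * np.pi) ** 1.5 * sig ** 3)
    return 2.0 * norm * np.exp(-(dx ** 2 + dy ** 2) / (2.0 * sig ** 2))

truth = np.array([100.0, 50.0, 5.0e3, 3.0, 0.3])
obs = puff_conc(truth, sensors, t_obs) * (1 + 0.05 * rng.standard_normal(len(sensors)))

lo = np.array([0.0, 0.0, 1.0e2, 1.0, -np.pi])
hi = np.array([500.0, 500.0, 1.0e4, 6.0, np.pi])

def cost(p):
    return np.sum((puff_conc(p, sensors, t_obs) - obs) ** 2)

# Tiny (mu + lambda) evolutionary loop standing in for the genetic algorithm.
pop = rng.uniform(lo, hi, size=(40, 5))
for _ in range(200):
    parents = pop[np.argsort([cost(p) for p in pop])[:10]]
    children = parents[rng.integers(0, 10, 40)] + 0.02 * (hi - lo) * rng.standard_normal((40, 5))
    pop = np.clip(children, lo, hi)

best = min(pop, key=cost)
print("estimated [x0, y0, Q, u, theta]:", np.round(best, 2))
```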

  13. Dose distributions in regions containing beta sources: Irregularly shaped source distributions in homogeneous media

    SciTech Connect

Werner, B.L.

    1991-11-01

    Methods are introduced by which dose rate distributions due to nonuniform, irregularly shaped distributions of beta emitters can be calculated using dose rate distributions for uniform, spherical source distributions. The dose rate distributions can be written in the MIRD formalism.
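
    For context, the standard MIRD dose-rate relation that such kernel-superposition results feed into can be written as below; this is the textbook form, not necessarily the paper's exact notation.

```latex
\dot{D}(r_T, t) \;=\; \sum_{r_S} A(r_S, t)\, S(r_T \leftarrow r_S)
```

    where A(r_S, t) is the time-dependent activity in source region r_S and S(r_T ← r_S) is the absorbed dose rate in target region r_T per unit activity in r_S.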

  14. Implications for Long-Term Mantle History of the Restricted Distribution of Large Igneous Province (LIP) Plume Sources at the Core-Mantle Boundary (CMB)

    NASA Astrophysics Data System (ADS)

    Burke, K.; Steinberger, B.; Torsvik, T. H.; Smethurst, M. A.

    2008-12-01

    We have found, by rotating LIPs of the past 300 My to their eruption sites in a paleomagnetic reference frame corrected for true polar wander, that those sites concentrate vertically above the margins at the CMB of the two Large Low Shear Wave Velocity Provinces (LLSVPs) of the deep mantle (Torsvik et al. 2006). This surprising discovery of narrow (< 200 wide) Plume Generation Zones, stable for at least 300 My on the CMB at the LLSVP margins, is consistent with the idea that the LLSVPs are compositionally (and probably also thermally) distinct dense bodies (each making up ca. 1 percent of mantle mass) rather than thermally buoyant "superplumes". The "centers of mass" of the two LLSVPs are antipodally disposed close to the equator, an intriguing possible further indication of long-term stability, because the positively elevated part of the residual geoid, which matches the LLSVPs and therefore also appears to have been stable for at least 300 My, finds an analog in the areoid of Mars, whose elevated regions are themselves antipodal on the equator. Because some volcanoes of Mars perhaps > 3.8 My in age are concentrated on the rims of the elevated areoid, it is worth considering the implications of the possible isolation of the LLSVPs from the rest of the mantle through most of Earth history. If the 2 percent of mantle mass that makes up the LLSVPs has escaped being involved in making ocean floor, it will be more Fe-rich and denser than the average mantle. If it has also escaped being involved in making continent, it will be richer in U, Th and K, and hotter. It will have distinctive noble gas concentrations and could be the source (by diffusion) of the Earth's current 3He flux (Burke et al. 2008). If a velocity change attributable to a perovskite/post-perovskite transition can be mapped consistently both within and outside the LLSVPs, it will help in testing the idea that the interiors of LLSVPs are hotter than the rest of the deep mantle.

  15. Subsurface Shielding Source Term Specification Calculation

    SciTech Connect

S. Su

    2001-04-12

    The purpose of this calculation is to establish appropriate and defensible waste-package radiation source terms for use in repository subsurface shielding design. This calculation supports the shielding design for the waste emplacement and retrieval system, and subsurface facility system. The objective is to identify the limiting waste package and specify its associated source terms including source strengths and energy spectra. Consistent with the Technical Work Plan for Subsurface Design Section FY 01 Work Activities (CRWMS M&O 2001, p. 15), the scope of work includes the following: (1) Review source terms generated by the Waste Package Department (WPD) for various waste forms and waste package types, and compile them for shielding-specific applications. (2) Determine acceptable waste package specific source terms for use in subsurface shielding design, using a reasonable and defensible methodology that is not unduly conservative. This calculation is associated with the engineering and design activity for the waste emplacement and retrieval system, and subsurface facility system. The technical work plan for this calculation is provided in CRWMS M&O 2001. Development and performance of this calculation conforms to the procedure, AP-3.12Q, Calculations.

  16. BWR Source Term Generation and Evaluation

    SciTech Connect

    J.C. Ryman

    2003-07-31

    This calculation is a revision of a previous calculation (Ref. 7.5) that bears the same title and has the document identifier BBAC00000-01717-0210-00006 REV 01. The purpose of this revision is to remove TBV (to-be-verified)-4110 associated with the output files of the previous version (Ref. 7.30). The purpose of this and the previous calculation is to generate source terms for a representative boiling water reactor (BWR) spent nuclear fuel (SNF) assembly for the first one million years after the SNF is discharged from the reactors. This calculation includes an examination of several ways to represent BWR assemblies and operating conditions in SAS2H in order to quantify the effects these representations may have on source terms. These source terms provide information characterizing the neutron and gamma spectra in particles per second, the decay heat in watts, and radionuclide inventories in curies. Source terms are generated for a range of burnups and enrichments (see Table 2) that are representative of the waste stream and stainless steel (SS) clad assemblies. During this revision, it was determined that the burnups used for the computer runs of the previous revision were actually about 1.7% less than the stated, or nominal, burnups. See Section 6.6 for a discussion of how to account for this effect before using any source terms from this calculation. The source term due to the activation of corrosion products deposited on the surfaces of the assembly from the coolant is also calculated. The results of this calculation support many areas of the Monitored Geologic Repository (MGR), which include thermal evaluation, radiation dose determination, radiological safety analyses, surface and subsurface facility designs, and total system performance assessment. This includes MGR items classified as Quality Level 1, for example, the Uncanistered Spent Nuclear Fuel Disposal Container (Ref. 7.27, page 7). Therefore, this calculation is subject to the requirements of the

  17. Hazardous constituent source term. Revision 2

    SciTech Connect

    Not Available

    1994-11-17

    The Department of Energy (DOE) has several facilities that either generate and/or store transuranic (TRU) waste from weapons program research and production. Much of this waste also contains hazardous waste constituents as regulated under Subtitle C of the Resource Conservation and Recovery Act (RCRA). Toxicity characteristic metals in the waste principally include lead, occurring in leaded rubber gloves and shielding. Other RCRA metals may occur as contaminants in pyrochemical salt, soil, debris, and sludge and solidified liquids, as well as in equipment resulting from decontamination and decommissioning activities. Volatile organic compounds (VOCs) contaminate many waste forms as a residue adsorbed on surfaces or occur in sludge and solidified liquids. Due to the presence of these hazardous constituents, applicable disposal regulations include land disposal restrictions established by the Hazardous and Solid Waste Amendments (HSWA). The DOE plans to dispose of TRU-mixed waste from the weapons program in the Waste Isolation Pilot Plant (WIPP) by demonstrating no-migration of hazardous constituents. This paper documents the current technical basis for methodologies proposed to develop a post-closure RCRA hazardous constituent source term. For the purposes of demonstrating no-migration, the hazardous constituent source term is defined as the quantities of hazardous constituents that are available for transport after repository closure. Development of the source term is only one of several activities that will be involved in the no-migration demonstration. The demonstration will also include uncertainty and sensitivity analyses of contaminant transport.

  18. Spiral arms as cosmic ray source distributions

    NASA Astrophysics Data System (ADS)

    Werner, M.; Kissmann, R.; Strong, A. W.; Reimer, O.

    2015-04-01

    The Milky Way is a spiral galaxy with (or without) a bar-like central structure. There is evidence that the distribution of suspected cosmic ray sources, such as supernova remnants, is associated with the spiral arm structure of galaxies. It is not yet clearly understood what effect such a cosmic ray source distribution has on particle transport in our Galaxy. We investigate and measure how the propagation of Galactic cosmic rays is affected by a cosmic ray source distribution associated with spiral arm structures. We use the PICARD code to perform high-resolution 3D simulations of electrons and protons in galactic propagation scenarios that include four-arm and two-arm logarithmic spiral cosmic ray source distributions with and without a central bar structure, as well as the spiral arm configuration of the NE2001 model for the distribution of free electrons in the Milky Way. Results of these simulations are compared to an axisymmetric radial source distribution. Also, effects on the cosmic ray flux and spectra due to different positions of the Earth relative to the spiral structure are studied. We find that high energy electrons are strongly confined to their sources and the obtained spectra largely depend on the Earth's position relative to the spiral arms. Similar findings have been obtained for low energy protons and electrons, albeit at smaller magnitude. We find that even fractional contributions of a spiral arm component to the total cosmic ray source distribution influence the spectra at the Earth. This is apparent when compared to an axisymmetric radial source distribution as well as with respect to the Earth's position relative to the spiral arm structure. We demonstrate that the presence of a Galactic bar manifests itself as an overall excess of low energy electrons at the Earth. Using a spiral arm geometry as a cosmic ray source distribution offers a genuinely new quality of modeling and is used to explain features in cosmic ray spectra at the Earth.
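
    As a rough illustration of what a spiral-arm-shaped source distribution looks like, the sketch below builds a four-arm logarithmic-spiral source density on a galactocentric grid; the pitch angle, arm width and radial envelope are assumed values, not the PICARD or NE2001 parameters used in the paper.

```python
# Sketch of a four-arm logarithmic-spiral cosmic-ray source density (illustrative;
# arm parameters are assumptions, not the values used in the cited work).
import numpy as np

def spiral_source_density(x, y, n_arms=4, pitch_deg=12.0, r0=3.0, width=0.6):
    """Relative cosmic-ray source density at galactocentric (x, y) in kpc."""
    r = np.hypot(x, y)
    phi = np.arctan2(y, x)
    tan_pitch = np.tan(np.radians(pitch_deg))
    density = np.zeros_like(r)
    for k in range(n_arms):
        # Azimuth of arm k at radius r for a log spiral r = r0*exp(tan(pitch)*(phi - phi_k)).
        phi_arm = np.log(np.maximum(r, 1e-3) / r0) / tan_pitch + 2.0 * np.pi * k / n_arms
        dphi = np.angle(np.exp(1j * (phi - phi_arm)))   # angular offset wrapped to [-pi, pi]
        density += np.exp(-(r * dphi) ** 2 / (2.0 * width ** 2))
    return density * np.exp(-r / 8.0)                   # radial fall-off toward the Galactic edge

xs = np.linspace(-15.0, 15.0, 301)
X, Y = np.meshgrid(xs, xs)
rho = spiral_source_density(X, Y)
print("peak-to-mean density contrast:", rho.max() / rho.mean())
```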

  19. Source distribution dependent scatter correction for PVI

    SciTech Connect

Barney, J.S.; Harrop, R.; Dykstra, C.J. (School of Computing Science; TRIUMF, Vancouver, British Columbia)

    1993-08-01

    Source distribution dependent scatter correction methods which incorporate different amounts of information about the source position and material distribution have been developed and tested. The techniques use image to projection integral transformation incorporating varying degrees of information on the distribution of scattering material, or convolution subtraction methods, with some information about the scattering material included in one of the convolution methods. To test the techniques, the authors apply them to data generated by Monte Carlo simulations which use geometric shapes or a voxelized density map to model the scattering material. Source position and material distribution have been found to have some effect on scatter correction. An image to projection method which incorporates a density map produces accurate scatter correction but is computationally expensive. Simpler methods, both image to projection and convolution, can also provide effective scatter correction.
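
    A minimal sketch of the convolution-subtraction family of methods mentioned above, assuming a Gaussian scatter kernel and a fixed scatter fraction; both values are placeholders, not kernels derived from the Monte Carlo simulations in the cited work.

```python
# Minimal sketch of iterative convolution-subtraction scatter correction for a
# projection (kernel width and scatter fraction are assumed placeholder values).
import numpy as np
from scipy.ndimage import gaussian_filter

def convolution_subtraction(projection, scatter_fraction=0.3, kernel_sigma=8.0, n_iter=3):
    """Iteratively estimate and subtract scatter from a measured projection."""
    primary = projection.copy()
    for _ in range(n_iter):
        scatter = scatter_fraction * gaussian_filter(primary, kernel_sigma)
        primary = np.clip(projection - scatter, 0.0, None)
    return primary, scatter

true_activity = np.zeros((128, 128))
true_activity[40:90, 50:80] = 100.0                               # toy "true" projection
measured = true_activity + 0.3 * gaussian_filter(true_activity, 8.0)  # add synthetic scatter

corrected, scatter_est = convolution_subtraction(measured)
print("mean counts before/after correction:", measured.mean(), corrected.mean())
```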

  20. Design parameters and source terms: Volume 2, Source terms: Revision 0

    SciTech Connect

    Not Available

    1987-10-01

    The Design Parameters and Source Terms Document was prepared in accordance with a DOE request and to provide data for the environmental impact study to be performed in the future for the Deaf Smith County, Texas, site for a nuclear waste repository in salt. This document updates a previous unpublished report by Stearns Catalytic Corporation (SCC), entitled ''Design Parameters and Source Terms for a Two-Phase Repository in Salt,'' 1985, to the level of the Site Characterization Plan - Conceptual Design Report. The previous unpublished SCC study identifies the data needs for the Environmental Assessment effort for seven possible salt repository sites. 2 tabs.

  1. Design parameters and source terms: Volume 2, Source terms: Revision 0

    SciTech Connect

    Not Available

    1987-09-01

    The Design Parameters and Source Terms Document was prepared in accordance with a DOE request and to provide data for the environmental impact study to be performed in the future for the Deaf Smith County, Texas, site for a nuclear waste repository in salt. This document updates a previous unpublished report to the level of the Site Characterization Plan - Conceptual Design Report (SCP-CDR). The previous study identifies the data needs for the Environmental Assessment effort for seven possible salt repository sites. Volume 2 contains tables of source terms.

  2. Source term evaluation for combustion modeling

    NASA Technical Reports Server (NTRS)

    Sussman, Myles A.

    1993-01-01

    A modification is developed for application to the source terms used in combustion modeling. The modification accounts for the error of the finite difference scheme in regions where chain-branching chemical reactions produce exponential growth of species densities. The modification is first applied to a one-dimensional scalar model problem. It is then generalized to multiple chemical species, and used in quasi-one-dimensional computations of shock-induced combustion in a channel. Grid refinement studies demonstrate the improved accuracy of the method using this modification. The algorithm is applied in two spatial dimensions and used in simulations of steady and unsteady shock-induced combustion. Comparisons with ballistic range experiments give confidence in the numerical technique and the 9-species hydrogen-air chemistry model.

  3. Improved source term estimation using blind outlier detection

    NASA Astrophysics Data System (ADS)

    Martinez-Camara, Marta; Bejar Haro, Benjamin; Vetterli, Martin; Stohl, Andreas

    2014-05-01

    Emissions of substances into the atmosphere are produced in situations such as volcano eruptions, nuclear accidents or pollutant releases. It is necessary to know the source term - how the magnitude of these emissions changes with time - in order to predict the consequences of the emissions, such as high radioactivity levels in a populated area or high concentrations of volcanic ash in an aircraft flight corridor. However, in general, we know neither how much material was released in total, nor the relative variation of emission strength with time. Hence, estimating the source term is a crucial task. Estimating the source term generally involves solving an ill-posed linear inverse problem using datasets of sensor measurements. Several so-called inversion methods have been developed for this task. Unfortunately, objective quantitative evaluation of the performance of inversion methods is difficult because the ground truth is unknown for practically all the available measurement datasets. In this work we use the European Tracer Experiment (ETEX) - a rare example of an experiment where the ground truth is available - to develop and to test new source estimation algorithms. Knowledge of the ground truth grants us access to the additive error term. We show that the distribution of this error is heavy-tailed, which means that some measurements are outliers. We also show that precisely these outliers severely degrade the performance of traditional inversion methods. Therefore, we develop blind outlier detection algorithms specifically suited to the source estimation problem. Then, we propose new inversion methods that combine traditional regularization techniques with blind outlier detection. Such hybrid methods reduce the error of reconstruction of the source term by up to 45% with respect to previously proposed methods.
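
    The hybrid idea (regularized inversion plus outlier handling) can be sketched as below, assuming a Tikhonov-regularized least-squares solve with a simple MAD-based rejection rule; the cited work develops dedicated blind outlier detection algorithms, which this toy rule only approximates.

```python
# Minimal sketch: Tikhonov-regularized source term inversion with a simple
# MAD-based outlier rejection step (illustrative, not the cited algorithms).
import numpy as np

def invert_with_outlier_rejection(M, y, alpha=1e-2, mad_threshold=4.0):
    """Solve y ≈ M x for a nonnegative source term, discarding outlying sensors."""
    n = M.shape[1]
    keep = np.ones(len(y), dtype=bool)
    for _ in range(5):
        A = np.vstack([M[keep], np.sqrt(alpha) * np.eye(n)])   # Tikhonov augmentation
        b = np.concatenate([y[keep], np.zeros(n)])
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        x = np.clip(x, 0.0, None)                              # crude nonnegativity
        resid = y - M @ x
        mad = np.median(np.abs(resid - np.median(resid))) + 1e-12
        keep = np.abs(resid - np.median(resid)) < mad_threshold * 1.4826 * mad
    return x, keep

rng = np.random.default_rng(2)
M = rng.random((300, 40))                    # synthetic source-receptor sensitivity matrix
x_true = np.zeros(40); x_true[10:15] = 5.0
y = M @ x_true + 0.05 * rng.standard_normal(300)
y[::37] += 20.0                              # heavy-tailed outliers

x_est, kept = invert_with_outlier_rejection(M, y)
print("kept sensors:", kept.sum(), "of", len(y))
```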

  4. TRIGA MARK-II source term

    SciTech Connect

Usang, M. D.; Hamzah, N. S.; Abi, M. J. B.; Rawi, M. Z. M.; Abu, M. P.

    2014-02-12

    ORIGEN 2.2 is employed to obtain data on the γ source term and the radioactivity of irradiated TRIGA fuel. The fuel composition is specified in grams for use as input data. Three types of fuel are irradiated in the reactor, each differing from the others in the amount of uranium relative to the total weight. Each fuel is irradiated for 365 days with a 50-day time step. We obtain results on the total radioactivity of the fuel, the composition of activated materials, the composition of fission products and the photon spectrum of the burned fuel. We investigate the differences in results between the BWR and PWR libraries for ORIGEN. Finally, we compare the composition of major nuclides after 1 year of irradiation from both ORIGEN libraries with results from WIMS. We found only minor disagreements between the yields of the PWR and BWR libraries. In comparison with WIMS, the errors are somewhat more pronounced. To overcome these errors, the irradiation power used in ORIGEN could be increased slightly, so that the differences between the ORIGEN and WIMS yields are reduced. A more permanent solution is to use a different code altogether to simulate burnup, such as DRAGON or ORIGEN-S. The results of this study are essential for the design of radiation shielding for the fuel.

  5. TRIGA MARK-II source term

    NASA Astrophysics Data System (ADS)

Usang, M. D.; Hamzah, N. S.; Abi, M. J. B.; Rawi, M. Z. M.; Abu, M. P.

    2014-02-01

    ORIGEN 2.2 is employed to obtain data on the γ source term and the radioactivity of irradiated TRIGA fuel. The fuel composition is specified in grams for use as input data. Three types of fuel are irradiated in the reactor, each differing from the others in the amount of uranium relative to the total weight. Each fuel is irradiated for 365 days with a 50-day time step. We obtain results on the total radioactivity of the fuel, the composition of activated materials, the composition of fission products and the photon spectrum of the burned fuel. We investigate the differences in results between the BWR and PWR libraries for ORIGEN. Finally, we compare the composition of major nuclides after 1 year of irradiation from both ORIGEN libraries with results from WIMS. We found only minor disagreements between the yields of the PWR and BWR libraries. In comparison with WIMS, the errors are somewhat more pronounced. To overcome these errors, the irradiation power used in ORIGEN could be increased slightly, so that the differences between the ORIGEN and WIMS yields are reduced. A more permanent solution is to use a different code altogether to simulate burnup, such as DRAGON or ORIGEN-S. The results of this study are essential for the design of radiation shielding for the fuel.
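
    As a much-simplified illustration of the irradiation-and-decay bookkeeping that ORIGEN performs for thousands of nuclides and full decay chains, the single-nuclide sketch below computes activity after constant-rate production followed by cooling; all numbers are placeholders, not TRIGA fuel data.

```python
# Single-nuclide irradiation/decay sketch (placeholder numbers, not TRIGA data).
import numpy as np

def activity_after_irradiation(production_rate, half_life_s, t_irr_s, t_cool_s):
    """Activity [Bq] after constant-rate production for t_irr_s, then cooling for t_cool_s."""
    lam = np.log(2.0) / half_life_s
    a_end_of_irradiation = production_rate * (1.0 - np.exp(-lam * t_irr_s))  # saturation activity
    return a_end_of_irradiation * np.exp(-lam * t_cool_s)

# Example: a fission product produced at 1e12 atoms/s with a 30-day half-life,
# irradiated for 365 days and cooled for 50 days.
A = activity_after_irradiation(1.0e12, 30 * 86400.0, 365 * 86400.0, 50 * 86400.0)
print(f"activity after cooling: {A:.3e} Bq")
```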

  6. Atmospheric distribution and sources of nonmethane hydrocarbons

    NASA Technical Reports Server (NTRS)

    Singh, Hanwant B.; Zimmerman, Patrick B.

    1992-01-01

    The paper discusses the atmospheric distribution of natural and man-made nonmethane hydrocarbons (NMHCs), the major species of airborne NMHCs, and their sources and sinks. Particular attention is given to the techniques for measuring atmospheric NMHCs; diurnal and seasonal variations of atmospheric NMHCs and differences between rural, urban, and marine environments; latitudinal and vertical distributions; and available stratospheric NMHC measurements. A formula defining the atmospheric lifetime of a NMHC from its reaction rates with OH and O3 is presented.
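
    The lifetime formula referred to is presumably of the standard form

```latex
\tau_{\mathrm{NMHC}} \;=\; \frac{1}{k_{\mathrm{OH}}[\mathrm{OH}] + k_{\mathrm{O_3}}[\mathrm{O_3}]}
```

    where k_OH and k_O3 are the rate constants for reaction with OH and O3 and the bracketed quantities are their ambient concentrations; the exact expression used in the paper is not reproduced here.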

  7. Distributed transform coding via source-splitting

    NASA Astrophysics Data System (ADS)

    Yahampath, Pradeepa

    2012-12-01

    Transform coding (TC) is one of the best known practical methods for quantizing high-dimensional vectors. In this article, a practical approach to distributed TC of jointly Gaussian vectors is presented. This approach, referred to as source-split distributed transform coding (SP-DTC), can be used to easily implement two terminal transform codes for any given rate-pair. The main idea is to apply source-splitting using orthogonal-transforms, so that only Wyner-Ziv (WZ) quantizers are required for compression of transform coefficients. This approach however requires optimizing the bit allocation among dependent sets of WZ quantizers. In order to solve this problem, a low-complexity tree-search algorithm based on analytical models for transform coefficient quantization is developed. A rate-distortion (RD) analysis of SP-DTCs for jointly Gaussian sources is presented, which indicates that these codes can significantly outperform the practical alternative of independent TC of each source, whenever there is a strong correlation between the sources. For practical implementation of SP-DTCs, the idea of using conditional entropy constrained (CEC) quantizers followed by Slepian-Wolf coding is explored. Experimental results obtained with SP-DTC designs based on both CEC scalar quantizers and CEC trellis-coded quantizers demonstrate that actual implementations of SP-DTCs can achieve RD performance close to the analytically predicted limits.
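
    For context on the bit-allocation problem mentioned above, the sketch below shows only the classical high-rate allocation across KLT coefficients of a single Gaussian source; the cited work instead optimizes allocation among dependent Wyner-Ziv quantizers via a tree search, which is not reproduced here.

```python
# Context sketch: classical high-rate bit allocation across transform coefficients
# of a single correlated Gaussian source (not the distributed/WZ case of the paper).
import numpy as np

rng = np.random.default_rng(3)
# Correlated Gaussian vectors -> KLT from the sample covariance eigen-decomposition.
cov = np.fromfunction(lambda i, j: 0.9 ** np.abs(i - j), (8, 8))
x = rng.multivariate_normal(np.zeros(8), cov, size=10000)
eigvals, eigvecs = np.linalg.eigh(np.cov(x.T))
coeff_var = eigvals[::-1]                        # transform-coefficient variances, descending

R_avg = 2.0                                      # target bits per coefficient
geo_mean = np.exp(np.mean(np.log(coeff_var)))
bits = R_avg + 0.5 * np.log2(coeff_var / geo_mean)
bits = np.clip(bits, 0.0, None)                  # no negative rates
print("bit allocation per KLT coefficient:", np.round(bits, 2))
```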

  8. Open Source Live Distributions for Computer Forensics

    NASA Astrophysics Data System (ADS)

    Giustini, Giancarlo; Andreolini, Mauro; Colajanni, Michele

    Current distributions of open source forensic software provide digital investigators with a large set of heterogeneous tools. Their use is not always focused on the target and requires high technical expertise. We present a new GNU/Linux live distribution, named CAINE (Computer Aided INvestigative Environment) that contains a collection of tools wrapped up into a user friendly environment. The CAINE forensic framework introduces novel important features, aimed at filling the interoperability gap across different forensic tools. Moreover, it provides a homogeneous graphical interface that drives digital investigators during the acquisition and analysis of electronic evidence, and it offers a semi-automatic mechanism for the creation of the final report.

  9. Sensitivity analysis of distributed volcanic source inversion

    NASA Astrophysics Data System (ADS)

    Cannavo', Flavio; Camacho, Antonio G.; González, Pablo J.; Puglisi, Giuseppe; Fernández, José

    2016-04-01

    A recently proposed algorithm (Camacho et al., 2011) claims to rapidly estimate magmatic sources from surface geodetic data without any a priori assumption about source geometry. The algorithm takes advantage of the fast calculation offered by analytical models and adds the capability to model free-shape distributed sources. Assuming homogeneous elastic conditions, the approach can determine general geometrical configurations of pressure and/or density sources and/or sliding structures corresponding to prescribed values of anomalous density, pressure and slip. These source bodies are described as aggregations of elemental point sources for pressure, density and slip, and they fit the whole data set (subject to some 3D regularity conditions). Although some examples and applications have already been presented to demonstrate the ability of the algorithm to reconstruct a magma pressure source (e.g. Camacho et al., 2011; Cannavò et al., 2015), a systematic analysis of the sensitivity and reliability of the algorithm is still lacking. In this explorative work we present results from a large statistical test designed to evaluate the advantages and limitations of the methodology by assessing its sensitivity to the free and constrained parameters involved in inversions. In particular, besides the source parameters, we focus on the ground deformation network topology and the noise in measurements. The proposed analysis can be used for a better interpretation of the algorithm results in real-case applications. Camacho, A. G., González, P. J., Fernández, J. & Berrino, G. (2011) Simultaneous inversion of surface deformation and gravity changes by means of extended bodies with a free geometry: Application to deforming calderas. J. Geophys. Res. 116. Cannavò F., Camacho A.G., González P.J., Mattia M., Puglisi G., Fernández J. (2015) Real Time Tracking of Magmatic Intrusions by means of Ground Deformation Modeling during Volcanic Crises, Scientific Reports, 5 (10970) doi:10.1038/srep

  10. Design parameters and source terms: Volume 3, Source terms: Revision 0

    SciTech Connect

    Not Available

    1987-09-01

    The Design Parameters and Source Terms Document was prepared in accordance with a DOE request and to provide data for the environmental impact study to be performed in the future for the Deaf Smith County, Texas, site for a nuclear waste repository in salt. This document updates a previous unpublished report to the level of the Site Characterization Plan - Conceptual Design Report (SCP-CDR). The previous unpublished SCC study identifies the data needs for the Environmental Assessment effort for seven possible salt repository sites.

  11. Particle size distribution of indoor aerosol sources

    SciTech Connect

    Shah, K.B.

    1990-10-24

    As concern about Indoor Air Quality (IAQ) has grown in recent years, it has become necessary to determine the nature of particles produced by different indoor aerosol sources and the typical concentrations that these sources tend to produce. These data are important for predicting the dose of particles to people exposed to these sources, and they will also enable effective mitigation procedures to be taken. Further, they will also help in designing appropriate air cleaners. A new state-of-the-art technique, the DMPS (Differential Mobility Particle Sizer) system, is used to determine the particle size distributions of a number of sources. This system employs the electrical mobility characteristics of these particles and is very effective in the 0.01-1.0 μm size range. A modified system that can measure particle sizes in the lower size range, down to 3 nm, was also used. Experimental results for various aerosol sources are presented in the ensuing chapters. 37 refs., 20 figs., 2 tabs.

  12. Bremsstrahlung source term estimation for high energy electron accelerators

    NASA Astrophysics Data System (ADS)

    Nayak, M. K.; Sahu, T. K.; Nair, H. G.; Nandedkar, R. V.; Bandyopadhyay, Tapas; Tripathi, R. M.; Hannurkar, P. R.; Sharma, D. N.

    2015-08-01

    Thick-target bremsstrahlung source terms for 450 MeV and 550 MeV electrons are experimentally determined using the booster synchrotron of the Indus facility at the Raja Ramanna Centre for Advanced Technology, Indore, India. The source term is also simulated using the EGSnrc Monte Carlo code. Results from experiment and simulation are found to be in very good agreement. Based on the agreement between experimental and simulated data, the source term is determined up to 3000 MeV by simulation. The paper also describes studies carried out on the variation of the source term when a thin target is considered in place of the thick target used in earlier studies.

  13. State of the hydrologic source term

    SciTech Connect

    Kersting, A.

    1996-12-01

    The Underground Test Area (UGTA) Operable Unit was defined by the U.S. Department of Energy, Nevada Operations Office to characterize and potentially remediate groundwaters impacted by nuclear testing at the Nevada Test Site (NTS). Between 1955 and 1992, 828 nuclear devices were detonated underground at the NTS (DOE, 1994). Approximately one third of the nuclear tests were detonated at or below the standing water table and the remainder were located above the water table in the vadose zone. As a result, the distribution of radionuclides in the subsurface and, in particular, the availability of radionuclides for transport away from individual test cavities are major concerns at the NTS. The approach taken is to carry out field-based studies of both groundwaters and host rocks within the near-field in order to develop a detailed understanding of the present-day concentration and spatial distribution of constituent radionuclides. Understanding the current distribution of contamination within the near-field, and the conditions under which and processes by which the radionuclides were transported, makes it possible to predict future transport behavior. The results of these studies will be integrated with archival research, experiments and geochemical modeling for complete characterization.

  14. Constraining Source Redshift Distributions with Gravitational Lensing

    NASA Astrophysics Data System (ADS)

    Wittman, D.; Dawson, W. A.

    2012-09-01

    We introduce a new method for constraining the redshift distribution of a set of galaxies, using weak gravitational lensing shear. Instead of using observed shears and redshifts to constrain cosmological parameters, we ask how well the shears around clusters can constrain the redshifts, assuming fixed cosmological parameters. This provides a check on photometric redshifts, independent of source spectral energy distribution properties and therefore free of confounding factors such as misidentification of spectral breaks. We find that ~40 massive (σ_v = 1200 km s⁻¹) cluster lenses are sufficient to determine the fraction of sources in each of six coarse redshift bins to ~11%, given weak (20%) priors on the masses of the highest-redshift lenses, tight (5%) priors on the masses of the lowest-redshift lenses, and only modest (20%-50%) priors on calibration and evolution effects. Additional massive lenses drive down uncertainties as N_lens^(-1/2), but the improvement slows as one is forced to use lenses further down the mass function. Future large surveys contain enough clusters to reach 1% precision in the bin fractions if the tight lens-mass priors can be maintained for large samples of lenses. In practice this will be difficult to achieve, but the method may be valuable as a complement to other more precise methods because it is based on different physics and therefore has different systematic errors.

  15. Stochastic Models for the Distribution of Index Terms.

    ERIC Educational Resources Information Center

    Nelson, Michael J.

    1989-01-01

    Presents a probability model of the occurrence of index terms used to derive discrete distributions which are mixtures of Poisson and negative binomial distributions. These distributions give better fits than the simpler Zipf distribution, have the advantage of being more explanatory, and can incorporate a time parameter if necessary. (25…

  16. Scoping Analysis of Source Term and Functional Containment Attenuation Factors

    SciTech Connect

    Pete Lowry

    2012-02-01

    In order to meet future regulatory requirements, the Next Generation Nuclear Plant (NGNP) Project must fully establish and validate the mechanistic modular high temperature gas-cooled reactor (HTGR) source term. This is not possible at this stage in the project, as significant uncertainties in the final design remain unresolved. In the interim, however, there is a need to establish an approximate characterization of the source term. The NGNP team developed a simplified parametric model to establish mechanistic source term estimates for a set of proposed HTGR configurations.

  17. Scoping Analysis of Source Term and Functional Containment Attenuation Factors

    SciTech Connect

    Pete Lowry

    2012-01-01

    In order to meet future regulatory requirements, the Next Generation Nuclear Plant (NGNP) Project must fully establish and validate the mechanistic modular high temperature gas-cooled reactor (HTGR) source term. This is not possible at this stage in the project, as significant uncertainties in the final design remain unresolved. In the interim, however, there is a need to establish an approximate characterization of the source term. The NGNP team developed a simplified parametric model to establish mechanistic source term estimates for a set of proposed HTGR configurations.

  18. Scoping Analysis of Source Term and Functional Containment Attenuation Factors

    SciTech Connect

    Pete Lowry

    2012-10-01

    In order to meet future regulatory requirements, the Next Generation Nuclear Plant (NGNP) Project must fully establish and validate the mechanistic modular high temperature gas-cooled reactor (HTGR) source term. This is not possible at this stage in the project, as significant uncertainties in the final design remain unresolved. In the interim, however, there is a need to establish an approximate characterization of the source term. The NGNP team developed a simplified parametric model to establish mechanistic source term estimates for a set of proposed HTGR configurations.

  19. Revised accident source terms for light-water reactors

    SciTech Connect

    Soffer, L.

    1995-02-01

    This paper presents revised accident source terms for light-water reactors incorporating the severe accident research insights gained in this area over the last 15 years. Current LWR accident source terms used for licensing date from 1962 and are contained in Regulatory Guides 1.3 and 1.4. These specify that 100% of the core inventory of noble gases and 25% of the iodine fission products are assumed to be instantaneously available for release from the containment. The chemical form of the iodine fission products is also assumed to be predominantly elemental iodine. These assumptions have strongly affected present nuclear air cleaning requirements by emphasizing rapid actuation of spray systems and filtration systems optimized to retain elemental iodine. A proposed revision of reactor accident source terms and some implications for nuclear air cleaning requirements was presented at the 22nd DOE/NRC Nuclear Air Cleaning Conference. A draft report was issued by the NRC for comment in July 1992. Extensive comments were received, with the most significant comments involving (a) release fractions for both volatile and non-volatile species in the early in-vessel release phase, (b) gap release fractions of the noble gases, iodine and cesium, and (c) the timing and duration of the release phases. The final source term report is expected to be issued in late 1994. Although the revised source terms are intended primarily for future plants, current nuclear power plants may request use of revised accident source term insights in licensing as well. This paper emphasizes additional information obtained since the 22nd Conference, including studies on fission product removal mechanisms, results obtained from improved severe accident code calculations, and resolution of major comments, and their impact upon the revised accident source terms. Revised accident source terms for both BWRs and PWRs are presented.

  20. Source term identification in atmospheric modelling via sparse optimization

    NASA Astrophysics Data System (ADS)

    Adam, Lukas; Branda, Martin; Hamburger, Thomas

    2015-04-01

    Inverse modelling plays an important role in identifying the amount of harmful substances released into the atmosphere during major incidents such as power plant accidents or volcano eruptions. Another possible application of inverse modelling lies in monitoring CO2 emission limits, where only observations at certain places are available and the task is to estimate the total releases at given locations. This gives rise to minimizing the discrepancy between the observations and the model predictions. There are two standard ways of solving such problems. In the first one, this discrepancy is regularized by adding additional terms. Such terms may include Tikhonov regularization, distance from a priori information or a smoothing term. The resulting, usually quadratic, problem is then solved via standard optimization solvers. The second approach assumes that the error term has a (normal) distribution and makes use of Bayesian modelling to identify the source term. Instead of following the above-mentioned approaches, we utilize techniques from the field of compressive sensing. Such techniques look for the sparsest solution (the solution with the smallest number of nonzeros) of a linear system, where a maximal allowed error term may be added to this system. Even though this field is well developed, with many possible solution techniques, most of them do not consider even the simplest constraints which are naturally present in atmospheric modelling. One such example is the nonnegativity of release amounts. We believe that the concept of a sparse solution is natural in both the problem of identifying the source location and that of identifying the time process of the source release. In the first case, it is usually assumed that there are only a few release points and the task is to find them. In the second case, the time window is usually much longer than the duration of the actual release. In both cases, the optimal solution should contain a large amount of zeros, giving rise to the
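
    A minimal sketch of a sparsity-promoting, nonnegative inversion in this spirit, using scikit-learn's Lasso with positive=True as a stand-in for the dedicated compressive-sensing formulations discussed in the abstract; the sensitivity matrix and release profile are synthetic.

```python
# Minimal sketch of sparse, nonnegative source term identification (illustrative).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)
n_obs, n_times = 200, 96                          # sensor readings vs. release time bins
M = rng.random((n_obs, n_times))                  # synthetic source-receptor sensitivity matrix
x_true = np.zeros(n_times); x_true[30:36] = 4.0   # short release inside a long time window
y = M @ x_true + 0.05 * rng.standard_normal(n_obs)

# L1 penalty promotes sparsity; positive=True enforces nonnegative release amounts.
model = Lasso(alpha=0.02, positive=True, max_iter=50000)
model.fit(M, y)
x_est = model.coef_
print("nonzero release bins:", np.flatnonzero(x_est > 1e-3))
```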

  1. Laser induced heat source distribution in bio-tissues

    NASA Astrophysics Data System (ADS)

    Li, Xiaoxia; Fan, Shifu; Zhao, Youquan

    2006-09-01

    During numerical simulation of laser-tissue thermal interaction, the light fluence rate distribution must be formulated and incorporated as the source term in the heat transfer equation. Usually the solution of the light radiative transport equation is given for extreme conditions such as full absorption (Lambert-Beer law), full scattering (Kubelka-Munk theory), or predominant scattering (diffusion approximation). But under specific conditions, these solutions introduce different errors. The commonly used Monte Carlo simulation (MCS) is more universal and exact, but it has difficulty dealing with dynamic parameters and fast simulation, and its area partition pattern has limits when the FEM (finite element method) is applied to solve the bio-heat transfer partial differential equation. Laser heat source plots from the above methods show considerable differences from MCS. To address this problem, after analyzing the different optical effects such as reflection, scattering and absorption on laser-induced heat generation in bio-tissue, a new approach was developed that combines a modified beam-broadening model with the diffusion approximation model. First, the scattering coefficient is replaced by the reduced scattering coefficient in the beam-broadening model, which is more reasonable when scattering is treated as anisotropic. Second, the attenuation coefficient is replaced by the effective attenuation coefficient in scattering-dominated turbid bio-tissue. The computational results of the modified method were compared with Monte Carlo simulation and showed that the model provides more reasonable predictions of the heat source term distribution than previous methods. Such research is useful for explaining the physical characteristics of the heat source in the heat transfer equation, establishing an effective photo-thermal model, and providing a theoretical benchmark for related laser medicine experiments.
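
    A minimal sketch of a depth-dependent heat source term of the kind discussed above, combining Beer-Lambert-type absorption with the diffusion-approximation effective attenuation coefficient; the optical coefficients are assumed placeholders, not the tissue data of the paper.

```python
# Depth-dependent laser heat source sketch (placeholder optical coefficients).
import numpy as np

def heat_source(z_m, fluence_rate_w_m2, mu_a, mu_s_prime):
    """Volumetric heat source S(z) [W/m^3] in scattering-dominated tissue."""
    mu_eff = np.sqrt(3.0 * mu_a * (mu_a + mu_s_prime))   # effective attenuation coefficient
    return mu_a * fluence_rate_w_m2 * np.exp(-mu_eff * z_m)

z = np.linspace(0.0, 5e-3, 100)                          # depth from 0 to 5 mm
S = heat_source(z, fluence_rate_w_m2=1.0e4, mu_a=50.0, mu_s_prime=1000.0)
print("heat source at surface vs. 5 mm [W/m^3]:", S[0], S[-1])
```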

  2. Source Term Model for Steady Micro Jets in a Navier-Stokes Computer Code

    NASA Technical Reports Server (NTRS)

    Waithe, Kenrick A.

    2005-01-01

    A source term model for steady micro jets was implemented into a non-proprietary Navier-Stokes computer code, OVERFLOW. The source term models the mass flow and momentum created by a steady blowing micro jet. The model is obtained by adding the momentum and mass flow created by the jet to the Navier-Stokes equations. The model was tested by comparing with data from numerical simulations of a single, steady micro jet on a flat plate in two and three dimensions. The source term model predicted the velocity distribution well compared to the two-dimensional plate using a steady mass flow boundary condition, which was used to simulate a steady micro jet. The model was also compared to two three-dimensional flat plate cases using a steady mass flow boundary condition to simulate a steady micro jet. The three-dimensional comparison included a case with a grid generated to capture the circular shape of the jet and a case without a grid generated for the micro jet. The case without the jet grid mimics the application of the source term. The source term model compared well with both of the three-dimensional cases. Comparisons of velocity distribution were made before and after the jet and Mach and vorticity contours were examined. The source term model allows a researcher to quickly investigate different locations of individual or several steady micro jets. The researcher is able to conduct a preliminary investigation with minimal grid generation and computational time.
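
    A minimal sketch of the general idea of representing a jet as cell-wise mass and momentum sources, using invented numbers; OVERFLOW's actual source term model and its calibration are not reproduced here.

```python
# Sketch: a steady micro jet as per-unit-volume mass and momentum sources added
# to the cell of a finite-volume solver (illustrative numbers only).
import numpy as np

def micro_jet_sources(mdot_kg_s, jet_velocity_m_s, jet_angle_rad, cell_volume_m3):
    """Per-unit-volume mass and momentum sources for the jet-containing cell."""
    s_mass = mdot_kg_s / cell_volume_m3
    s_mom_x = mdot_kg_s * jet_velocity_m_s * np.cos(jet_angle_rad) / cell_volume_m3
    s_mom_y = mdot_kg_s * jet_velocity_m_s * np.sin(jet_angle_rad) / cell_volume_m3
    return s_mass, np.array([s_mom_x, s_mom_y])

# Example: 0.1 g/s jet at 200 m/s pitched 30 degrees from the wall, 1 mm^3 cell.
s_mass, s_mom = micro_jet_sources(1.0e-4, 200.0, np.radians(30.0), 1.0e-9)
print("mass source [kg/(m^3 s)]:", s_mass, " momentum source [N/m^3]:", s_mom)
```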

  3. Problem solving as intelligent retrieval from distributed knowledge sources

    NASA Technical Reports Server (NTRS)

    Chen, Zhengxin

    1987-01-01

    Distributed computing in intelligent systems is investigated from a different perspective. From the viewpoint that problem solving can be viewed as intelligent knowledge retrieval, the use of distributed knowledge sources in intelligent systems is proposed.

  4. Estimating the releasable source term for Type B packages

    SciTech Connect

    Anderson, B.L.; Carlson, R.W.; Osgood, N.

    1995-11-01

    The release rate criteria for Type B packages designed to transport radioactive materials are given in Title 10 of the Code of Federal Regulations (10 CFR 71). Before the maximum allowable volumetric leakage rate that corresponds to the regulatory release rate can be calculated, an estimate of the releasable source term activity density (concentration of releasable radioactive material) is required. This work provides methods for estimating the releasable source term for packages holding various content types. The content types considered include: (1) radioactive liquids; (2) radioactive gases; (3) radioactive powders and dispersible solids; (4) non-dispersible radioactive solids; and (5) irradiated nuclear fuel rods. The numbers given, especially as related to the source term for packages transporting irradiated fuel rods, are preliminary and are subject to change upon development of improved methods and/or upon review of additional experimental data.

  5. Source term and radiological consequences of the Chernobyl accident

    SciTech Connect

    Mourad, R.; Snell, V.

    1987-01-01

    The objective of this work is to assess the source term and to evaluate the maximum hypothetical individual doses in European countries (including the Soviet Union) from the Chernobyl accident through the analyses of measurements of meteorological data, radiation fields, and airborne and deposited activity in these countries. Applying this information to deduce the source term involves a reversal of the techniques of nuclear accident analysis, which estimate the off-site consequences of postulated accidents. In this study the authors predict the quantities of radionuclides that, if released at Chernobyl and following the calculated trajectories, would explain and unify the observed radiation levels and radionuclide concentrations as measured by European countries and the Soviet Union. The simulation uses the PEAR microcomputer program following the methodology described in Canadian Standards Association standard N288.2. The study was performed before the Soviets published their estimate of the source term and the two results are compared.

  6. Flowsheets and source terms for radioactive waste projections

    SciTech Connect

    Forsberg, C.W.

    1985-03-01

    Flowsheets and source terms used to generate radioactive waste projections in the Integrated Data Base (IDB) Program are given. Volumes of each waste type generated per unit product throughput have been determined for the following facilities: uranium mining, UF6 conversion, uranium enrichment, fuel fabrication, boiling-water reactors (BWRs), pressurized-water reactors (PWRs), and fuel reprocessing. Source terms for DOE/defense wastes have been developed. Expected wastes from typical decommissioning operations for each facility type have been determined. All wastes are also characterized by isotopic composition at time of generation and by general chemical composition. 70 references, 21 figures, 53 tables.

  7. Spallation Neutron Source Accident Terms for Environmental Impact Statement Input

    SciTech Connect

    Devore, J.R.; Harrington, R.M.

    1998-08-01

    This report is about accidents with the potential to release radioactive materials into the environment surrounding the Spallation Neutron Source (SNS). As shown in Chap. 2, the inventories of radioactivity at the SNS are dominated by the target facility. Source terms for a wide range of target facility accidents, from anticipated events to worst-case beyond-design-basis events, are provided in Chaps. 3 and 4. The most important criterion applied to these accident source terms is that they should not underestimate potential release. Therefore, conservative methodology was employed for the release estimates. Although the source terms are very conservative, excessive conservatism has been avoided by basing the releases on physical principles. Since it is envisioned that the SNS facility may eventually (after about 10 years) be expanded and modified to support a 4-MW proton beam operational capability, the source terms estimated in this report are applicable to a 4-MW operating proton beam power unless otherwise specified. This is bounding with regard to the 1-MW facility that will be built and operated initially. See further discussion below in Sect. 1.2.

  8. Common Calibration Source for Monitoring Long-term Ozone Trends

    NASA Technical Reports Server (NTRS)

    Kowalewski, Matthew

    2004-01-01

    Accurate long-term satellite measurements are crucial for monitoring the recovery of the ozone layer. The slow pace of the recovery and the limited lifetimes of satellite monitoring instruments demand that datasets from multiple observation systems be combined to provide the long-term accuracy needed. A fundamental component of accurately monitoring long-term trends is the calibration of these various instruments. NASA's Radiometric Calibration and Development Facility at the Goddard Space Flight Center has provided resources to minimize calibration biases between multiple instruments through the use of a common calibration source and standardized procedures traceable to national standards. The Facility's 50 cm barium sulfate integrating sphere has been used as a common calibration source for both US and international satellite instruments, including the Total Ozone Mapping Spectrometer (TOMS), Solar Backscatter Ultraviolet 2 (SBUV/2) instruments, Shuttle SBUV (SSBUV), Ozone Mapping Instrument (OMI), Global Ozone Monitoring Experiment (GOME) (ESA), Scanning Imaging SpectroMeter for Atmospheric ChartographY (SCIAMACHY) (ESA), and others. We will discuss the advantages of using a common calibration source and its effects on long-term ozone data sets. In addition, sphere calibration results from various instruments will be presented to demonstrate the accuracy of the long-term characterization of the source itself.

  9. Long-Term Stability of Radio Sources in VLBI Analysis

    NASA Technical Reports Server (NTRS)

    Engelhardt, Gerald; Thorandt, Volkmar

    2010-01-01

    Positional stability of radio sources is an important requirement for modeling of only one source position for the complete length of VLBI data of presently more than 20 years. The stability of radio sources can be verified by analyzing time series of radio source coordinates. One approach is a statistical test for normal distribution of residuals to the weighted mean for each radio source component of the time series. Systematic phenomena in the time series can thus be detected. Nevertheless, an inspection of rate estimation and weighted root-mean-square (WRMS) variations about the mean is also necessary. On the basis of the time series computed by the BKG group in the frame of the ICRF2 working group, 226 stable radio sources with an axis stability of 10 μas could be identified. They include 100 ICRF2 axes-defining sources which are determined independently of the method applied in the ICRF2 working group. 29 stable radio sources with a source structure index of less than 3.0 can also be used to increase the number of 295 ICRF2 defining sources.
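
    As a sketch of the per-source statistics described (weighted mean, WRMS about the mean, and a normality test of the residuals), the snippet below uses SciPy's D'Agostino-Pearson test as a stand-in for whatever specific test the authors applied; the simulated coordinate time series is hypothetical.

```python
import numpy as np
from scipy import stats

def stability_metrics(coord, sigma):
    """Weighted mean, WRMS about the mean, and a normality-test p-value for
    the residuals of one radio-source coordinate time series.
    coord, sigma : position offsets and their formal errors (same units)."""
    coord, sigma = np.asarray(coord), np.asarray(sigma)
    w = 1.0 / sigma**2
    mean = np.sum(w * coord) / np.sum(w)
    resid = coord - mean
    wrms = np.sqrt(np.sum(w * resid**2) / np.sum(w))
    # D'Agostino-Pearson test: a small p-value hints at systematic behaviour.
    _, p_value = stats.normaltest(resid / sigma)
    return mean, wrms, p_value

rng = np.random.default_rng(0)
sigma = rng.uniform(0.05, 0.2, size=200)           # formal errors [mas], hypothetical
coord = rng.normal(0.0, 1.0, size=200) * sigma     # a simulated stable source
print(stability_metrics(coord, sigma))
```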

  10. 10 CFR 50.67 - Accident source term.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 1 2014-01-01 2014-01-01 false Accident source term. 50.67 Section 50.67 Energy NUCLEAR... analyzed in the safety analysis report. 1 The fission product release assumed for these calculations should... meltdown of the core with subsequent release of appreciable quantities of fission products. (2) The NRC...

  11. 10 CFR 50.67 - Accident source term.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 1 2012-01-01 2012-01-01 false Accident source term. 50.67 Section 50.67 Energy NUCLEAR... analyzed in the safety analysis report. 1 The fission product release assumed for these calculations should... meltdown of the core with subsequent release of appreciable quantities of fission products. (2) The NRC...

  12. 10 CFR 50.67 - Accident source term.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 1 2011-01-01 2011-01-01 false Accident source term. 50.67 Section 50.67 Energy NUCLEAR... analyzed in the safety analysis report. 1 The fission product release assumed for these calculations should... meltdown of the core with subsequent release of appreciable quantities of fission products. (2) The NRC...

  13. 10 CFR 50.67 - Accident source term.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 1 2013-01-01 2013-01-01 false Accident source term. 50.67 Section 50.67 Energy NUCLEAR... analyzed in the safety analysis report. 1 The fission product release assumed for these calculations should... meltdown of the core with subsequent release of appreciable quantities of fission products. (2) The NRC...

  14. Disposal Unit Source Term (DUST) data input guide

    SciTech Connect

    Sullivan, T.M.

    1993-05-01

    Performance assessment of a low-level waste (LLW) disposal facility begins with an estimation of the rate at which radionuclides migrate out of the facility (i.e., the source term). The focus of this work is to develop a methodology for calculating the source term. In general, the source term is influenced by the radionuclide inventory, the wasteforms and containers used to dispose of the inventory, and the physical processes that lead to release from the facility (fluid flow, container degradation, wasteform leaching, and radionuclide transport). The computer code DUST (Disposal Unit Source Term) has been developed to model these processes. This document presents the models used to calculate release from a disposal facility, verification of the model, and instructions on the use of the DUST code. In addition to DUST, a preprocessor, DUSTIN, which helps the code user create input decks for DUST, and a post-processor, GRAFXT, which takes selected output files and plots them on the computer terminal, have been written. Use of these codes is also described.

  15. 10 CFR 50.67 - Accident source term.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 1 2010-01-01 2010-01-01 false Accident source term. 50.67 Section 50.67 Energy NUCLEAR... analyzed in the safety analysis report. 1 The fission product release assumed for these calculations should... meltdown of the core with subsequent release of appreciable quantities of fission products. (2) The NRC...

  16. Open source portal to distributed image repositories

    NASA Astrophysics Data System (ADS)

    Tao, Wenchao; Ratib, Osman M.; Kho, Hwa; Hsu, Yung-Chao; Wang, Cun; Lee, Cason; McCoy, J. M.

    2004-04-01

    In a large institution's PACS, patient data may often reside in multiple separate systems. While most systems tend to be DICOM compliant, none of them offer the flexibility of seamless integration of multiple DICOM sources through a single access point. We developed a generic portal system with a web-based interactive front-end as well as an application programming interface (API) that allows both web users and client applications to query and retrieve image data from multiple DICOM sources. A set of software tools was developed to allow access to several DICOM archives through a single point of access. An interactive web-based front-end allows users to search image data seamlessly across the different archives and to display the results or route the image data to another DICOM-compliant destination. An XML-based API allows other software programs to benefit from this portal to query and retrieve image data as well. Various techniques are employed to minimize the performance overhead inherent in DICOM. The system is integrated with a hospital-wide HIPAA-compliant authentication and auditing service that provides centralized management of access to patient medical records. The system is provided under an open source free license and was developed using open-source components (Apache Tomcat for the web server, MySQL for the database, OJB for object/relational data mapping, etc.). The portal paradigm offers a convenient and effective solution for accessing multiple image data sources in a given healthcare enterprise and can easily be extended to multiple institutions through appropriate security and encryption mechanisms.

  17. Pressure distribution in unsteady sink and source flows.

    PubMed

    Voropayev, S I

    2015-05-01

    Basic flow generated in a viscous incompressible fluid by a "point" sink (source) of mass is revisited. In practice, such flow can be modeled by sucking (pushing) fluid from a thin tube with a small porous sphere at one end. Intuitively, by sucking (pushing) fluid, one creates low (high) pressure near the origin, and a positive (negative) radial pressure gradient drives the fluid to (from) the origin. A simple analysis, however, shows that the pressure distribution for both steady flows is the same. Then a question arises: How does the fluid "know" in what direction to flow? To explain this "paradox" an unsteady flow is considered and the pressure terms responsible for the flow direction are derived. PMID:26066255
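
    The abstract does not reproduce the expressions, but the standard potential-flow argument that leads to such a result can be sketched for a point source of time-dependent strength Q(t): the steady part of the pressure depends only on Q squared and is identical for a sink and a source, while the unsteady term changes sign with the direction of pumping.

```latex
% Sketch of the standard potential-flow result (not taken verbatim from the paper):
% point source of volume flux Q(t) in an incompressible fluid.
\[
  \phi(r,t) = -\frac{Q(t)}{4\pi r}, \qquad
  u_r(r,t) = \frac{\partial \phi}{\partial r} = \frac{Q(t)}{4\pi r^{2}} .
\]
% Unsteady Bernoulli equation along a radial streamline:
\[
  \frac{\partial \phi}{\partial t} + \frac{u_r^{2}}{2} + \frac{p}{\rho}
  = \frac{p_\infty}{\rho}
  \quad\Longrightarrow\quad
  p(r,t) = p_\infty + \frac{\rho\,\dot{Q}}{4\pi r}
                    - \frac{\rho\,Q^{2}}{32\pi^{2} r^{4}} .
\]
% The Q^2 term is the same for a sink and a source; only the \dot{Q} term
% distinguishes the direction of the flow.
```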

  18. Experimental quantum-key distribution with an untrusted source.

    PubMed

    Peng, Xiang; Jiang, Hao; Xu, Bingjie; Ma, Xiongfeng; Guo, Hong

    2008-09-15

    The photon statistics of a quantum-key-distribution (QKD) source are crucial for security analysis. We propose a practical method, requiring only a beam splitter and a photodetector, to monitor the photon statistics of a QKD source. By implementing it in a plug-and-play QKD system, we show that the method is highly practical. The final secure key rate is 52 bit/s, compared to 78 bit/s when the source is treated as a trusted source.

  19. Dose distributions in regions containing beta sources: Uniform spherical source regions in homogeneous media

    SciTech Connect

    Werner, B.L.; Rahman, M.; Salk, W.N. ); Kwok, C.S. )

    1991-11-01

    The energy-averaged transport model for the calculation of dose rate distributions is applied to uniform, spherical source distributions in homogeneous media for radii smaller than the electron range. The model agrees well with Monte Carlo based calculations for source distributions with radii greater than half the continuous slowing down approximation range. The dose rate distributions can be written in the medical internal radiation dose (MIRD) formalism.

  20. Phonatory sound sources in terms of Lagrangian Coherent Structures

    NASA Astrophysics Data System (ADS)

    McPhail, Michael; Krane, Michael

    2015-11-01

    Lagrangian Coherent Structures (LCS) are used to identify sound sources in phonation. Currently, it is difficult to causally relate changes in airflow topology from voice disorders to changes in voiced sound production. LCS reveals a flow's topology by decomposing the flow into regions of distinct dynamics. The aeroacoustic sources can be written in terms of the motion of the boundaries of these distinct regions. Breaking down the flow into constituent parts shows how each distinct region contributes to sound production. This approach provides a framework to connect changes in anatomy from a voice disorder to measurable changes in the resulting sound. This approach is presented for simulations of some canonical cases of vortex sound generation, and a two-dimensional simulation of phonation. We acknowledge NIH grant 2R01DC005642.

  1. A nuclear source term analysis for spacecraft power systems

    SciTech Connect

    McCulloch, W.H.

    1998-12-01

    All US space missions involving on board nuclear material must be approved by the Office of the President. To be approved the mission and the hardware systems must undergo evaluations of the associated nuclear health and safety risk. One part of these evaluations is the characterization of the source terms, i.e., the estimate of the amount, physical form, and location of nuclear material, which might be released into the environment in the event of credible accidents. This paper presents a brief overview of the source term analysis by the Interagency Nuclear Safety Review Panel for the NASA Cassini Space Mission launched in October 1997. Included is a description of the Energy Interaction Model, an innovative approach to the analysis of potential releases from high velocity impacts resulting from launch aborts and reentries.

  2. Tetrodotoxin: Chemistry, Toxicity, Source, Distribution and Detection

    PubMed Central

    Bane, Vaishali; Lehane, Mary; Dikshit, Madhurima; O’Riordan, Alan; Furey, Ambrose

    2014-01-01

    Tetrodotoxin (TTX) is a naturally occurring toxin that has been responsible for human intoxications and fatalities. Its usual route of toxicity is via the ingestion of contaminated puffer fish which are a culinary delicacy, especially in Japan. TTX was believed to be confined to regions of South East Asia, but recent studies have demonstrated that the toxin has spread to regions in the Pacific and the Mediterranean. There is no known antidote to TTX which is a powerful sodium channel inhibitor. This review aims to collect pertinent information available to date on TTX and its analogues with a special emphasis on the structure, aetiology, distribution, effects and the analytical methods employed for its detection. PMID:24566728

  3. Sources and distributions of dark matter

    SciTech Connect

    Sikivie, P.

    1995-12-31

    In the first section, the author tries to convey a sense of the variety of observational inputs that tell about the existence and the spatial distribution of dark matter in the universe. In the second section, he briefly reviews the four main dark matter candidates, taking note of each candidate's status in the world of particle physics, its production in the early universe, its effect upon large scale structure formation and the means by which it may be detected. Section 3 concerns the energy spectrum of (cold) dark matter particles on earth as may be observed some day in a direct detection experiment. It is a brief account of work done in collaboration with J. Ipser and, more recently, with I. Tkachev and Y. Wang.

  4. Apparent LFE Magnitude-Frequency Distributions and the Tremor Source

    NASA Astrophysics Data System (ADS)

    Rubin, A. M.; Bostock, M. G.

    2015-12-01

    Over a decade since its discovery, it is disconcerting that we know so little about the kinematics of the tremor source. One could say we are hampered by low signal-to-noise ratio, but often the LFE signal is large and the "noise" is just other LFEs, often nearly co-located. Here we exploit this feature to better characterize the tremor source. A quick examination of LFE catalogs shows, unsurprisingly, that detected magnitudes are large when the background tremor amplitude is large. A simple interpretation is that small LFEs are missed when tremor is loud. An unanswered question is whether, in addition, there is a paucity of small LFEs when tremor is loud. Because we have both the LFE Green's function (from stacks) and some minimum bound on the overall LFE rate (from our catalogs), tremor waveforms provide a consistency check on any assumed magnitude-frequency (M-f) distribution. Beneath southern Vancouver Island, the magnitudes of >10^5 LFEs range from about 1.2-2.4 (Bostock et al. 2015). Interpreted in terms of a power-law distribution, the b-value is >5. But missed small events make even this large value only a lower bound. Binning by background tremor amplitude, and assuming a time-invariant M-f distribution, the b-value increases to >7, implying (e.g.) more than 10 million M>1.2 events for every M=2.2 event. Such numbers are inconsistent with the observed modest increase in tremor amplitude with LFE magnitude, as well as with geodetically-allowable slips. Similar considerations apply to exponential and log-normal moment-frequency distributions. Our preliminary interpretation is that when LFE magnitudes are large, the same portion of the fault is producing larger LFEs, rather than a greater rate of LFEs pulled from the same distribution. If correct, this distinguishes LFEs from repeating earthquakes, where larger background fault slip rates lead not to larger earthquakes but to more frequent earthquakes of similar magnitude. One possible explanation, that LFEs
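
    A minimal sketch of the Gutenberg-Richter bookkeeping behind the quoted figures (the catalog analysis itself is of course more involved):

```python
# With log10 N(>=M) = a - b*M, the ratio of event counts across one magnitude
# unit is 10**b, so b > 7 implies more than 10^7 M>=1.2 events per M>=2.2 event.
def count_ratio(b_value, m_small=1.2, m_large=2.2):
    """Number of events above m_small per event above m_large for a power-law
    (Gutenberg-Richter) magnitude-frequency distribution."""
    return 10.0 ** (b_value * (m_large - m_small))

for b in (1.0, 5.0, 7.0):
    print(f"b = {b}: {count_ratio(b):.3g} small events per large event")
```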

  5. Actinide Source Term Program, position paper. Revision 1

    SciTech Connect

    Novak, C.F.; Papenguth, H.W.; Crafts, C.C.; Dhooge, N.J.

    1994-11-15

    The Actinide Source Term represents the quantity of actinides that could be mobilized within WIPP brines and could migrate with the brines away from the disposal room vicinity. This document presents the various proposed methods for estimating this source term, with a particular focus on defining these methods and evaluating the defensibility of the models for mobile actinide concentrations. The conclusions reached in this document are: the 92 PA "expert panel" model for mobile actinide concentrations is not defensible; and, although it is extremely conservative, the "inventory limits" model is the only existing defensible model for the actinide source term. The model effort in progress, "chemical modeling of mobile actinide concentrations", supported by a laboratory effort that is also in progress, is designed to provide a reasonable description of the system and be scientifically realistic and supplant the "inventory limits" model.

  6. Short and long term representation of an unfamiliar tone distribution

    PubMed Central

    Diercks, Charlette; Troje, Nikolaus F.; Cuddy, Lola L.

    2016-01-01

    We report on a study conducted to extend our knowledge about the process of gaining a mental representation of music. Several studies, inspired by research on the statistical learning of language, have investigated statistical learning of sequential rules underlying tone sequences. Given that the mental representation of music correlates with distributional properties of music, we tested whether participants are able to abstract distributional information contained in tone sequences to form a mental representation. For this purpose, we created an unfamiliar music genre defined by an underlying tone distribution, to which 40 participants were exposed. Our stimuli allowed us to differentiate between sensitivity to the distributional properties contained in test stimuli and long term representation of the distributional properties of the music genre overall. Using a probe tone paradigm and a two-alternative forced choice discrimination task, we show that listeners are able to abstract distributional properties of music through mere exposure into a long term representation of music. This lends support to the idea that statistical learning is involved in the process of gaining musical knowledge. PMID:27635355

  7. Short and long term representation of an unfamiliar tone distribution.

    PubMed

    Cui, Anja X; Diercks, Charlette; Troje, Nikolaus F; Cuddy, Lola L

    2016-01-01

    We report on a study conducted to extend our knowledge about the process of gaining a mental representation of music. Several studies, inspired by research on the statistical learning of language, have investigated statistical learning of sequential rules underlying tone sequences. Given that the mental representation of music correlates with distributional properties of music, we tested whether participants are able to abstract distributional information contained in tone sequences to form a mental representation. For this purpose, we created an unfamiliar music genre defined by an underlying tone distribution, to which 40 participants were exposed. Our stimuli allowed us to differentiate between sensitivity to the distributional properties contained in test stimuli and long term representation of the distributional properties of the music genre overall. Using a probe tone paradigm and a two-alternative forced choice discrimination task, we show that listeners are able to abstract distributional properties of music through mere exposure into a long term representation of music. This lends support to the idea that statistical learning is involved in the process of gaining musical knowledge. PMID:27635355

  9. Contamination on LDEF: Sources, distribution, and history

    NASA Technical Reports Server (NTRS)

    Pippin, Gary; Crutcher, Russ

    1993-01-01

    An introduction to contamination effects observed on the Long Duration Exposure Facility (LDEF) is presented. The activities reported are part of Boeing's obligation to the LDEF Materials Special Investigation Group. The contamination films and particles had minimal influence on the thermal performance of the LDEF. Some specific areas did have large changes in optical properties. Films also interfered with recession rate determination by reacting with the oxygen or physically shielding underlying material. Generally, contaminant films lessen the measured recession rate relative to 'clean' surfaces. On-orbit generation of particles may be an issue for sensitive optics. Deposition on lenses may lead to artifacts on photographic images or cause sensors to respond inappropriately. Particles in the line of sight of sensors can cause stray light to be scattered into sensors. Particles also represent a hazard for mechanisms in that they can physically block and/or increase friction or wear on moving surfaces. LDEF carried a rather complex mixture of samples and support hardware into orbit. The experiments were assembled under a variety of conditions and time constraints and stored for up to five years before launch. The structure itself was so large that it could not be baked after the interior was painted with Chemglaze Z-306 polyurethane-based black paint. Any analysis of the effects of molecular and particulate contamination must account for a complex array of sources, wide variation in processes over time, and extreme variation in environment from ground to launch to flight. Surface conditions at certain locations on LDEF were established by outgassing of molecular species from particular materials onto adjacent surfaces, followed by alteration of those species due to exposure to atomic oxygen and/or solar radiation.

  10. Adaptive Source Coding Schemes for Geometrically Distributed Integer Alphabets

    NASA Technical Reports Server (NTRS)

    Cheung, K-M.; Smyth, P.

    1993-01-01

    We revisit the Gallager and van Voorhis optimal source coding scheme for geometrically distributed non-negative integer alphabets and show that the various subcodes in the popular Rice algorithm can be derived from the Gallager and van Voorhis code.
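
    As a sketch of the coding scheme referred to, the snippet below implements a plain Golomb encoder together with the Gallager and van Voorhis rule for choosing the code parameter from the geometric-source parameter theta; the Rice subcodes are the special cases where the parameter is a power of two. The helper names are ours.

```python
import math

def optimal_golomb_m(theta):
    """Gallager & van Voorhis rule: for a geometric source P(n) = (1-theta)*theta**n,
    the optimal Golomb parameter is the smallest integer m with
    theta**m + theta**(m+1) <= 1."""
    m = 1
    while theta**m + theta**(m + 1) > 1.0:
        m += 1
    return m

def golomb_encode(n, m):
    """Golomb codeword for non-negative integer n with parameter m:
    a unary-coded quotient followed by a truncated-binary remainder."""
    q, r = divmod(n, m)
    code = "1" * q + "0"                 # unary part, '0' terminates
    if m == 1:
        return code                      # no remainder bits needed
    b = math.ceil(math.log2(m))
    cutoff = (1 << b) - m                # remainders below this use b-1 bits
    if r < cutoff:
        return code + format(r, f"0{b - 1}b")
    return code + format(r + cutoff, f"0{b}b")

theta = 0.8
m = optimal_golomb_m(theta)              # a Rice subcode whenever m is a power of two
print(m, [golomb_encode(n, m) for n in range(6)])
```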

  11. An approach to distribution short-term load forecasting

    SciTech Connect

    Stratton, R.C.; Gaustad, K.L.

    1995-03-01

    This paper reports on the developments and findings of the Distribution Short-Term Load Forecaster (DSTLF) research activity. The objective of this research is to develop a distribution short-term load forecasting technology consisting of a forecasting method, a development methodology, the theories necessary to support the required technical components, and the hardware and software tools required to perform the forecast. The DSTLF consists of four major components: monitored endpoint load forecaster (MELF), nonmonitored endpoint load forecaster (NELF), topological integration forecaster (TIF), and a dynamic tuner. These components interact to provide short-term forecasts at various points in the distribution system, e.g., feeder, line section, and endpoint. This paper discusses the DSTLF methodology and the MELF component. MELF, based on artificial neural network technology, predicts distribution endpoint loads an hour, a day, and a week in advance. Predictions are developed using time, calendar, historical load, and weather data. The overall DSTLF architecture and a prototype MELF module for retail endpoints have been developed. Future work will focus on refining and extending MELF and developing NELF and TIF capabilities.
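
    A minimal sketch of an MELF-style endpoint forecaster, assuming (as the abstract suggests) features built from time, calendar, lagged load, and weather data; scikit-learn's MLPRegressor and the synthetic load history are stand-ins, not the actual DSTLF implementation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Synthetic hourly history standing in for a monitored endpoint (hypothetical).
hours = np.arange(24 * 365)
hour_of_day = hours % 24
day_of_week = (hours // 24) % 7
temperature = 10 + 10 * np.sin(2 * np.pi * hours / (24 * 365)) + rng.normal(0, 2, hours.size)
load = (50 + 20 * np.sin(2 * np.pi * hour_of_day / 24)
        + 5 * (day_of_week < 5) + 0.8 * temperature + rng.normal(0, 3, hours.size))

# Features in the spirit of MELF: time, calendar, weather, and day-lagged load.
lag = 24
X = np.column_stack([hour_of_day[lag:], day_of_week[lag:],
                     temperature[lag:], load[:-lag]])
y = load[lag:]

split = -24 * 7                                # hold out the last week
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])
print("held-out MAE:", np.abs(model.predict(X[split:]) - y[split:]).mean())
```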

  12. Trace Metal Source Terms in Carbon Sequestration Environments

    SciTech Connect

    Karamalidis, Athanasios; Torres, Sharon G.; Hakala, Jacqueline A.; Shao, Hongbo; Cantrell, Kirk J.; Carroll, Susan A.

    2013-01-01

    Carbon dioxide sequestration in deep saline and depleted oil geologic formations is feasible and promising; however, possible CO2 or CO2-saturated brine leakage to overlying aquifers may pose environmental and health impacts. The purpose of this study was to experimentally define a range of concentrations that can be used as the trace element source term for reservoirs and leakage pathways in risk simulations. Storage source terms for trace metals are needed to evaluate the impact of brines leaking into overlying drinking water aquifers. The trace metal release was measured from cements and sandstones, shales, carbonates, evaporites, and basalts from the Frio, In Salah, Illinois Basin, Decatur, Lower Tuscaloosa, Weyburn-Midale, Bass Islands, and Grand Ronde carbon sequestration geologic formations. Trace metal dissolution was tracked by measuring solution concentrations over time under conditions (e.g., pressures, temperatures, and initial brine compositions) specific to the sequestration projects. Existing metrics for maximum contaminant levels (MCLs) for drinking water as defined by the U.S. Environmental Protection Agency (U.S. EPA) were used to categorize the relative significance of metal concentration changes in storage environments because of the presence of CO2. Results indicate that Cr and Pb released from sandstone reservoir and shale cap rocks exceed the MCLs by an order of magnitude, while Cd and Cu were at or below drinking water thresholds. In carbonate reservoirs As exceeds the MCLs by an order of magnitude, while Cd, Cu, and Pb were at or below drinking water standards. Results from this study can be used as a reasonable estimate of the trace element source term for reservoirs and leakage pathways in risk simulations to further evaluate the impact of leakage on groundwater quality.

  13. Trace metal source terms in carbon sequestration environments.

    PubMed

    Karamalidis, Athanasios K; Torres, Sharon G; Hakala, J Alexandra; Shao, Hongbo; Cantrell, Kirk J; Carroll, Susan

    2013-01-01

    Carbon dioxide sequestration in deep saline and depleted oil geologic formations is feasible and promising; however, possible CO(2) or CO(2)-saturated brine leakage to overlying aquifers may pose environmental and health impacts. The purpose of this study was to experimentally define a range of concentrations that can be used as the trace element source term for reservoirs and leakage pathways in risk simulations. Storage source terms for trace metals are needed to evaluate the impact of brines leaking into overlying drinking water aquifers. The trace metal release was measured from cements and sandstones, shales, carbonates, evaporites, and basalts from the Frio, In Salah, Illinois Basin, Decatur, Lower Tuscaloosa, Weyburn-Midale, Bass Islands, and Grand Ronde carbon sequestration geologic formations. Trace metal dissolution was tracked by measuring solution concentrations over time under conditions (e.g., pressures, temperatures, and initial brine compositions) specific to the sequestration projects. Existing metrics for maximum contaminant levels (MCLs) for drinking water as defined by the U.S. Environmental Protection Agency (U.S. EPA) were used to categorize the relative significance of metal concentration changes in storage environments because of the presence of CO(2). Results indicate that Cr and Pb released from sandstone reservoir and shale cap rocks exceed the MCLs by an order of magnitude, while Cd and Cu were at or below drinking water thresholds. In carbonate reservoirs As exceeds the MCLs by an order of magnitude, while Cd, Cu, and Pb were at or below drinking water standards. Results from this study can be used as a reasonable estimate of the trace element source term for reservoirs and leakage pathways in risk simulations to further evaluate the impact of leakage on groundwater quality. PMID:23215015

  14. Trace Metal Source Terms in Carbon Sequestration Environments

    SciTech Connect

    Karamalidis, Athanasios K; Torres, Sharon G; Hakala, J Alexandra; Shao, Hongbo; Cantrell, Kirk J; Carroll, Susan

    2012-02-05

    Carbon dioxide sequestration in deep saline and depleted oil geologic formations is feasible and promising, however, possible CO₂ or CO₂-saturated brine leakage to overlying aquifers may pose environmental and health impacts. The purpose of this study was to experimentally define trace metal source terms from the reaction of supercritical CO₂, storage reservoir brines, reservoir and cap rocks. Storage reservoir source terms for trace metals are needed to evaluate the impact of brines leaking into overlying drinking water aquifers. The trace metal release was measured from sandstones, shales, carbonates, evaporites, basalts and cements from the Frio, In Salah, Illinois Basin – Decatur, Lower Tuscaloosa, Weyburn-Midale, Bass Islands and Grand Ronde carbon sequestration geologic formations. Trace metal dissolution is tracked by measuring solution concentrations over time under conditions (e.g. pressures, temperatures, and initial brine compositions) specific to the sequestration projects. Existing metrics for Maximum Contaminant Levels (MCLs) for drinking water as defined by the U.S. Environmental Protection Agency (U.S. EPA) were used to categorize the relative significance of metal concentration changes in storage environments due to the presence of CO₂. Results indicate that Cr and Pb released from sandstone reservoir and shale cap rock exceed the MCLs by an order of magnitude while Cd and Cu were at or below drinking water thresholds. In carbonate reservoirs As exceeds the MCLs by an order of magnitude, while Cd, Cu, and Pb were at or below drinking water standards. Results from this study can be used as a reasonable estimate of the reservoir and caprock source term to further evaluate the impact of leakage on groundwater quality.

  16. Methodology for a bounding estimate of activation source-term.

    PubMed

    Culp, Todd

    2013-02-01

    Sandia National Laboratories' Z-Machine is the world's most powerful electrical device, and experiments have been conducted that make it the world's most powerful radiation source. Because Z-Machine is used for research, an assortment of materials can be placed into the machine; these materials can be subjected to a range of nuclear reactions, producing an assortment of activation products. A methodology was developed to provide a systematic approach to evaluate different materials to be introduced into the machine as wire arrays. This methodology is based on experiment specific characteristics, physical characteristics of specific radionuclides, and experience with Z-Machine. This provides a starting point for bounding calculations of radionuclide source-term that can be used for work planning, development of work controls, and evaluating materials for introduction into the machine.

  17. Development of alternate methods of determining integrated SMR source terms

    SciTech Connect

    Barry, Kenneth

    2014-06-10

    The Nuclear Energy Institute (NEI) Small Modular Reactor (SMR) Licensing Task Force (TF) has been evaluating licensing issues unique and important to iPWRs, ranking these issues, and developing NEI position papers for submittal to the U.S. Nuclear Regulatory Commission (NRC) during the past three years. Papers have been developed and submitted to the NRC in a range of areas including: Price-Anderson Act, NRC annual fees, security, modularity, and staffing. In December, 2012, NEI completed a draft position paper on SMR source terms and participated in an NRC public meeting presenting a summary of this paper, which was subsequently submitted to the NRC. One important conclusion of the source term paper was the evaluation and selection of high importance areas where additional research would have a significant impact on source terms. The highest ranked research area was iPWR containment aerosol natural deposition. The NRC accepts the use of existing aerosol deposition correlations in Regulatory Guide 1.183, but these were developed for large light water reactor (LWR) containments. Application of these correlations to an iPWR design has resulted in greater than a ten-fold reduction of containment airborne aerosol inventory as compared to large LWRs. Development and experimental justification of containment aerosol natural deposition correlations specifically for the unique iPWR containments is expected to result in a large reduction of design basis and beyond-design-basis accident source terms with concomitantly smaller dose to workers and the public. Therefore, NRC acceptance of iPWR containment aerosol natural deposition correlations will directly support the industry’s goal of reducing the Emergency Planning Zone (EPZ) for SMRs. Based on the results in this work, it is clear that thermophoresis is relatively unimportant for iPWRs. Gravitational settling is well understood, and may be the dominant process for a dry environment. Diffusiophoresis and enhanced

  18. Near term climate projections for invasive species distributions

    USGS Publications Warehouse

    Jarnevich, C.S.; Stohlgren, T.J.

    2009-01-01

    Climate change and invasive species pose important conservation issues separately, and should be examined together. We used existing long term climate datasets for the US to project potential climate change into the future at a finer spatial and temporal resolution than the climate change scenarios generally available. These fine scale projections, along with new species distribution modeling techniques to forecast the potential extent of invasive species, can provide useful information to aid conservation and invasive species management efforts. We created habitat suitability maps for Pueraria montana (kudzu) under current climatic conditions and potential average conditions up to 30 years in the future. We examined how the potential distribution of this species will be affected by changing climate, and the management implications associated with these changes. Our models indicated that P. montana may increase its distribution particularly in the Northeast with climate change and may decrease in other areas. © 2008 Springer Science+Business Media B.V.

  19. Carbon-14 Source Terms and Generation in Fusion Power Cores

    NASA Astrophysics Data System (ADS)

    Khripunov, V. I.; Kurbatov, D. K.; Subbotin, M. L.

    2008-12-01

    A consecutive study of the source terms of 14C, as the major contributor to the external costs of fusion, and of its production rate was performed by system and neutron activation analysis. It shows that the specific 14C activity induced in the low-activation structural materials, coolants, and breeders suggested for future fusion power reactor cores depends significantly on the assumed nitrogen content. The determined range of specific 14C activity, ~2-20 TBq/GW(e)a, induced in near-term water-cooled and gas-cooled designs and in advanced liquid-lithium and lithium-lead self-cooled fusion power reactors is given in the paper and compared with the natural 14C background and with artificial 14C sources such as fission power reactors and nuclear tests. It is recommended to keep the nitrogen content below 0.01 wt.% in the beryllium multipliers and in the structural materials, including SiC/SiC composites. Then, for environmental and waste disposal reasons, the 14C generation in fusion power blankets will have negligible impact on the cost.

  20. Long-Term Stability of the NIST Standard Ultrasonic Source.

    PubMed

    Fick, Steven E

    2008-01-01

    The National Institute of Standards and Technology (NIST) Standard Ultrasonic Source (SUS) is a system comprising a transducer capable of output power levels up to 1 W at multiple frequencies between 1 MHz and 30 MHz, and an electrical impedance-matching network that allows the system to be driven by a conventional 50 Ω rf (radio-frequency) source. It is designed to allow interlaboratory replication of ultrasonic power levels with high accuracy using inexpensive readily available ancillary equipment. The SUS was offered for sale for 14 years (1985 to 1999). Each system was furnished with data for the set of calibration points (combinations of power level and frequency) specified by the customer. Of the systems that had been ordered with some calibration points in common, three were returned more than once to NIST for recalibration. Another system retained at NIST has been recalibrated periodically since 1984. The collective data for these systems comprise 9 calibration points and 102 measurements spanning a 17 year interval ending in 2001, the last year NIST ultrasonic power measurement services were available to the public. These data have been analyzed to compare variations in output power with frequency, power level, and time elapsed since the first calibration. The results verify the claim, made in the instruction sheet furnished with every SUS, that "long-term drift, if any, in the calibration of NIST Standard Sources is insignificant compared to the uncertainties associated with a single measurement of ultrasonic power by any method available at NIST." PMID:27096127

  1. Bayesian estimation of a source term of radiation release with approximately known nuclide ratios

    NASA Astrophysics Data System (ADS)

    Tichý, Ondřej; Šmídl, Václav; Hofman, Radek

    2016-04-01

    We are concerned with estimation of a source term in the case of an accidental release from a known location, e.g. a power plant. Usually, the source term of an accidental release of radiation comprises a mixture of nuclides. The gamma dose rate measurements do not provide direct information on the source term composition. However, physical properties of the respective nuclides (deposition properties, decay half-life) can be used when uncertain information on nuclide ratios is available, e.g. from the known reactor inventory. The proposed method is based on a linear inverse model where the observation vector y arises as a linear combination y = Mx of a source-receptor-sensitivity (SRS) matrix M and the source term x. The task is to estimate the unknown source term x. The problem is ill-conditioned and further regularization is needed to obtain a reasonable solution. In this contribution, we assume that the nuclide ratios of the release are known with some degree of uncertainty. This knowledge is used to form the prior covariance matrix of the source term x. Due to uncertainty in the ratios, the diagonal elements of the covariance matrix are considered unknown. Positivity of the source term estimate is guaranteed by using a multivariate truncated Gaussian distribution. Following the Bayesian approach, we estimate all parameters of the model from the data, so that y, M, and the known ratios are the only inputs of the method. Since exact inference in the model is intractable, we follow the Variational Bayes method, yielding an iterative algorithm for estimation of all model parameters. Performance of the method is studied on a simulated 6-hour power plant release where 3 nuclides are released and 2 nuclide ratios are approximately known. A comparison with the method with unknown nuclide ratios is given to demonstrate the usefulness of the proposed approach. This research is supported by EEA/Norwegian Financial Mechanism under project MSMT-28477/2014 Source-Term Determination of Radionuclide Releases
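
    The Variational Bayes algorithm itself is not reproduced in the abstract; the sketch below captures only the structure of the problem, a non-negative, ridge-type inversion of y = Mx in which the regularization strength is built from approximately known nuclide ratios. The SRS matrix, ratios, and noise level are simulated.

```python
import numpy as np
from scipy.optimize import nnls

def estimate_source_term(M, y, prior_ratio, ratio_sigma):
    """Simplified stand-in for the described method: non-negative, ridge-type
    inversion of y = M x, with per-nuclide regularization built from the
    approximately known ratios (not the paper's Variational Bayes scheme)."""
    n = M.shape[1]
    # Prior standard deviation per nuclide: scaled by the expected ratio and
    # inflated by its uncertainty, so more uncertain ratios are penalized less.
    prior_sd = prior_ratio * (1.0 + ratio_sigma)
    L = np.diag(1.0 / prior_sd)              # Tikhonov matrix from the prior covariance
    # Solve min ||Mx - y||^2 + ||Lx||^2 subject to x >= 0 by stacking.
    A = np.vstack([M, L])
    b = np.concatenate([y, np.zeros(n)])
    x, _ = nnls(A, b)
    return x

rng = np.random.default_rng(2)
M = rng.uniform(0.0, 1.0, size=(40, 3))          # hypothetical SRS matrix
x_true = np.array([5.0, 2.5, 1.0])               # true release of 3 nuclides
y = M @ x_true + rng.normal(0, 0.05, size=40)    # simulated gamma dose rates
print(estimate_source_term(M, y, prior_ratio=np.array([5.0, 2.5, 1.0]),
                           ratio_sigma=np.array([0.2, 0.2, 0.2])))
```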

  2. An experimental and calculated dose distribution in water around CDC K-type caesium-137 sources.

    PubMed

    Diffey, B L; Klevenhagen, S C

    1975-05-01

    The radiation distribution in water around caesium-137 K-type sources has been measured and the experimental results used to provide data for an expression for dose calculations which may be conveniently applied in computer programs. The calculated absorbed dose rate obtained in this manner is estimated to be within 3% of the actual dose rate for any point in water up to 8 cm from the source. It is also suggested that the strength of a brachytherapy source be expressed in terms of an experimental exposure rate at some well-defined distance since this quantity may be determined more precisely and with less ambiguity than source activity.

  3. Long-term staff scheduling with regular temporal distribution.

    PubMed

    Carrasco, Rafael C

    2010-11-01

    Although optimal staff scheduling often requires elaborate computational methods, those cases which are not highly constrained can be efficiently solved using simpler approaches. This paper describes how a simple procedure, combining random and greedy strategies with heuristics, has been successfully applied in a Spanish hospital to assign guard shifts to the physicians in a department. In this case, the employees prefer that their guard duties are regularly distributed in time. The workload distribution must also satisfy some constraints: in particular, the distribution of duties among the staff must be uniform when a number of tasks and shift types (including some unfrequent and aperiodic types, such as those scheduled during long weekends) are considered. Furthermore, the composition of teams should be varied, in the sense that no particular pairing should dominate the assignments. The procedure proposed is able to find suitable solutions when the number of employees available for every task is not small compared to the number required at every shift. The software is distributed under the terms of the GNU General Public License.
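
    A toy version of such a random-greedy assignment, assuming one guard duty per day and a preference for the physician with the fewest duties and the longest time since the last duty (random tie-break); the actual procedure and its constraints are more elaborate.

```python
import random

def schedule_guards(staff, days, per_day=1, seed=0):
    """Toy greedy scheduler: each day, assign the physicians whose workload is
    lowest and whose previous duty lies furthest in the past, breaking ties at
    random. This spreads duties regularly in time and keeps the workload uniform."""
    rng = random.Random(seed)
    last_duty = {s: -10**9 for s in staff}       # effectively "never on duty"
    count = {s: 0 for s in staff}
    roster = {}
    for day in range(days):
        order = sorted(staff, key=lambda s: (count[s], last_duty[s], rng.random()))
        chosen = order[:per_day]
        for s in chosen:
            last_duty[s] = day
            count[s] += 1
        roster[day] = chosen
    return roster, count

roster, count = schedule_guards(["A", "B", "C", "D", "E"], days=30)
print(count)                                     # near-uniform workload
print([roster[d] for d in range(7)])             # regular temporal spacing
```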

  4. The brightness and spatial distributions of terrestrial radio sources

    NASA Astrophysics Data System (ADS)

    Offringa, A. R.; de Bruyn, A. G.; Zaroubi, S.; Koopmans, L. V. E.; Wijnholds, S. J.; Abdalla, F. B.; Brouw, W. N.; Ciardi, B.; Iliev, I. T.; Harker, G. J. A.; Mellema, G.; Bernardi, G.; Zarka, P.; Ghosh, A.; Alexov, A.; Anderson, J.; Asgekar, A.; Avruch, I. M.; Beck, R.; Bell, M. E.; Bell, M. R.; Bentum, M. J.; Best, P.; Bîrzan, L.; Breitling, F.; Broderick, J.; Brüggen, M.; Butcher, H. R.; de Gasperin, F.; de Geus, E.; de Vos, M.; Duscha, S.; Eislöffel, J.; Fallows, R. A.; Ferrari, C.; Frieswijk, W.; Garrett, M. A.; Grießmeier, J.; Hassall, T. E.; Horneffer, A.; Iacobelli, M.; Juette, E.; Karastergiou, A.; Klijn, W.; Kondratiev, V. I.; Kuniyoshi, M.; Kuper, G.; van Leeuwen, J.; Loose, M.; Maat, P.; Macario, G.; Mann, G.; McKean, J. P.; Meulman, H.; Norden, M. J.; Orru, E.; Paas, H.; Pandey-Pommier, M.; Pizzo, R.; Polatidis, A. G.; Rafferty, D.; Reich, W.; van Nieuwpoort, R.; Röttgering, H.; Scaife, A. M. M.; Sluman, J.; Smirnov, O.; Sobey, C.; Tagger, M.; Tang, Y.; Tasse, C.; Veen, S. ter; Toribio, C.; Vermeulen, R.; Vocks, C.; van Weeren, R. J.; Wise, M. W.; Wucknitz, O.

    2013-10-01

    Faint undetected sources of radio-frequency interference (RFI) might become visible in long radio observations when they are consistently present over time. Thereby, they might obstruct the detection of the weak astronomical signals of interest. This issue is especially important for Epoch of Reionization (EoR) projects that try to detect the faint redshifted H I signals from the time of the earliest structures in the Universe. We explore the RFI situation at 30-163 MHz by studying brightness histograms of visibility data observed with Low-Frequency Array (LOFAR), similar to radio-source-count analyses that are used in cosmology. An empirical RFI distribution model is derived that allows the simulation of RFI in radio observations. The brightness histograms show an RFI distribution that follows a power-law distribution with an estimated exponent around -1.5. With several assumptions, this can be explained with a uniform distribution of terrestrial radio sources whose radiation follows existing propagation models. Extrapolation of the power law implies that the current LOFAR EoR observations should be severely RFI limited if the strength of RFI sources remains strong after time integration. This is in contrast with actual observations, which almost reach the thermal noise and are thought not to be limited by RFI. Therefore, we conclude that it is unlikely that there are undetected RFI sources that will become visible in long observations. Consequently, there is no indication that RFI will prevent an EoR detection with LOFAR.
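
    As an illustration of how such an empirical power-law model might be used to simulate RFI, the sketch below draws source brightnesses from p(S) proportional to S^-1.5 by inverse-transform sampling; the flux limits and units are hypothetical, only the exponent follows the abstract.

```python
import numpy as np

def sample_rfi_brightness(n, alpha=1.5, s_min=0.1, s_max=1000.0, seed=0):
    """Draw RFI source brightnesses from p(S) ~ S**(-alpha) on [s_min, s_max]
    by inverse-transform sampling (arbitrary flux units, hypothetical limits)."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=n)
    a = 1.0 - alpha
    return (s_min**a + u * (s_max**a - s_min**a)) ** (1.0 / a)

samples = sample_rfi_brightness(100_000)
hist, edges = np.histogram(samples, bins=np.logspace(-1, 3, 30))
# Counts per logarithmic bin should fall off roughly as S**(1 - alpha).
print(hist[:10])
```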

  5. Continuous-variable quantum key distribution with Gaussian source noise

    SciTech Connect

    Shen Yujie; Peng Xiang; Yang Jian; Guo Hong

    2011-05-15

    Source noise affects the security of continuous-variable quantum key distribution (CV QKD) and is difficult to analyze. We propose a model to characterize Gaussian source noise through introducing a neutral party (Fred) who induces the noise with a general unitary transformation. Without knowing Fred's exact state, we derive the security bounds for both reverse and direct reconciliations and show that the bound for reverse reconciliation is tight.

  6. Sources and distribution of silt, south Texas shelf

    SciTech Connect

    Mazzullo, J.; Crisp, J.

    1984-09-01

    Fourier grain shape and mineralogic analyses were conducted on the coarse silt fraction of the surficial sediments on the south Texas continental shelf to determine the sources and distribution of the silt. The distribution patterns were evaluated in light of the late Pleistocene paleogeography and modern hydrodynamic conditions prevailing on the shelf to determine whether the coarse silt fraction was relict, palimpsest, or modern in origin. The authors' findings are presented.

  7. Electric Transport Traction Power Supply System With Distributed Energy Sources

    NASA Astrophysics Data System (ADS)

    Abramov, E. Y.; Schurov, N. I.; Rozhkova, M. V.

    2016-04-01

    The paper addresses the problem of leveling the daily load curve of a traction substation (TSS) for urban electric transport. A circuit for a traction power supply system (TPSS) with a distributed autonomous energy source (AES) based on photovoltaic (PV) and energy storage (ES) units is presented. A power-flow distribution algorithm for leveling the daily traction load curve is also introduced, and an implemented experimental model of the power supply system is described.
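
    A rule-based sketch of daily load-curve leveling with PV and storage, assuming hourly steps and hypothetical powers and capacities; it illustrates the idea of distributing the power flow, not the algorithm actually used in the paper.

```python
import numpy as np

def level_daily_load(load, pv, capacity, p_max):
    """Rule-based leveling: the energy storage discharges when traction demand
    (net of PV) exceeds the daily mean and charges when it is below, within
    power and state-of-charge limits. Returns the substation load profile."""
    target = load.mean()
    soc = capacity / 2.0                         # start half full [kWh]
    substation = np.zeros_like(load)
    for t, (p_load, p_pv) in enumerate(zip(load, pv)):
        p_es = np.clip(p_load - p_pv - target, -p_max, p_max)   # + discharge, - charge
        p_es = np.clip(p_es, -(capacity - soc), soc)            # respect capacity (1 h steps)
        soc -= p_es
        substation[t] = p_load - p_pv - p_es
    return substation

hours = np.arange(24)
load = (400 + 250 * np.exp(-0.5 * ((hours - 8) / 2.0) ** 2)
            + 300 * np.exp(-0.5 * ((hours - 18) / 2.0) ** 2))   # traction demand [kW]
pv = 150 * np.clip(np.sin(np.pi * (hours - 6) / 12), 0, None)   # PV output [kW]
leveled = level_daily_load(load, pv, capacity=600.0, p_max=200.0)
print(round(load.ptp()), round(leveled.ptp()))                  # peak-to-valley before/after
```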

  8. Tank waste source term inventory validation. Volume 1. Letter report

    SciTech Connect

    Brevick, C.H.; Gaddis, L.A.; Johnson, E.D.

    1995-04-28

    The sample data for selection of 11 radionuclides and 24 chemical analytes were extracted from six separate sample data sets, were arranged in a tabular format and were plotted on scatter plots for all of the 149 single-shell tanks, the 24 double-shell tanks and the four aging waste tanks. The solid and liquid sample data was placed in separate tables and plots. The sample data and plots were compiled from the following data sets: characterization raw sample data, recent core samples, D. Braun data base, Wastren (Van Vleet) data base, TRAC and HTCE inventories. This document is Volume I of the Letter Report entitled Tank Waste Source Term Inventory Validation.

  9. A comparison of world-wide uses of severe reactor accident source terms

    SciTech Connect

    Ang, M.L.; Frid, W.; Kersting, E.J.; Friederichs, H.G.; Lee, R.Y.; Meyer-Heine, A.; Powers, D.A.; Soda, K.; Sweet, D.

    1994-09-01

    The definitions of source terms to reactor containments and source terms to the environment are discussed. A comparison is made between the TID-14844 example source term and the alternative source term described in NUREG-1465. Comparisons of these source terms to the containments and those used in France, Germany, Japan, Sweden, and the United Kingdom are made. Source terms to the environment calculated in NUREG-1500 and WASH-1400 are discussed. Again, these source terms are compared to those now being used in France, Germany, Japan, Sweden, and the United Kingdom. It is concluded that source terms to the containment suggested in NUREG-1465 are not greatly more conservative than those used in other countries. Technical bases for the source terms are similar. The regulatory use of the current understanding of radionuclide behavior varies among countries.

  10. Accident source terms for light-water nuclear power plants using high-burnup or MOX fuel.

    SciTech Connect

    Salay, Michael; Gauntt, Randall O.; Lee, Richard Y.; Powers, Dana Auburn; Leonard, Mark Thomas

    2011-01-01

    Representative accident source terms patterned after the NUREG-1465 Source Term have been developed for high burnup fuel in BWRs and PWRs and for MOX fuel in a PWR with an ice-condenser containment. These source terms have been derived using nonparametric order statistics to develop distributions for the timing of radionuclide release during four accident phases and for release fractions of nine chemical classes of radionuclides as calculated with the MELCOR 1.8.5 accident analysis computer code. The accident phases are those defined in the NUREG-1465 Source Term - gap release, in-vessel release, ex-vessel release, and late in-vessel release. Important differences among the accident source terms derived here and the NUREG-1465 Source Term are not attributable to either fuel burnup or use of MOX fuel. Rather, differences among the source terms are due predominantly to improved understanding of the physics of core meltdown accidents. Heat losses from the degrading reactor core prolong the process of in-vessel release of radionuclides. Improved understanding of the chemistries of tellurium and cesium under reactor accidents changes the predicted behavior characteristics of these radioactive elements relative to what was assumed in the derivation of the NUREG-1465 Source Term. An additional radionuclide chemical class has been defined to account for release of cesium as cesium molybdate which enhances molybdenum release relative to other metallic fission products.

  11. Method for image reconstruction of moving radionuclide source distribution

    DOEpatents

    Stolin, Alexander V.; McKisson, John E.; Lee, Seung Joon; Smith, Mark Frederick

    2012-12-18

    A method for image reconstruction of moving radionuclide distributions. Its particular embodiment is for single photon emission computed tomography (SPECT) imaging of awake animals, though its techniques are general enough to be applied to other moving radionuclide distributions as well. The invention eliminates motion and blurring artifacts for image reconstructions of moving source distributions. This opens new avenues in the area of small animal brain imaging with radiotracers, which can now be performed without the perturbing influences of anesthesia or physical restraint on the biological system.

  12. Depositional controls, distribution, and effectiveness of world's petroleum source rocks

    SciTech Connect

    Klemme, H.D.; Ulmishek, G.F.

    1989-03-01

    Six stratigraphic intervals representing one-third of Phanerozoic time contain source rocks that have provided more than 90% of the world's discovered oil and gas reserves (in barrels of oil equivalent). The six intervals include (1) Silurian (generated 9% of the world's reserves); (2) Upper Devonian-Tournaisian (8% of reserves); (3) Pennsylvanian-Lower Permian (8% of reserves); (4) Upper Jurassic (25% of reserves); (5) middle Cretaceous (29% of reserves); and (6) Oligocene-Miocene (12.5% of reserves). This uneven distribution of source rocks in time has no immediately obvious cyclicity, nor are the intervals exactly repeatable in the commonality of factors that controlled the formation of source rocks. In this study, source rocks of the six intervals have been mapped worldwide together with oil and gas reserves generated by these rocks. Analysis of the maps shows that the main factors affecting deposition of these source rocks and their spatial distribution and effectiveness in generating hydrocarbon reserves are geologic age, global and regional tectonics, paleogeography, climate, and biologic evolution. The effect of each of the factors on geologic setting and quality of source rocks has been analyzed. Compilation of data on maturation time for these source rocks demonstrated that the majority of discovered oil and gas is very young: more than 80% of the world's oil and gas reserves have been generated since Aptian time, and nearly half of the world's hydrocarbons have been generated and trapped since the Oligocene.

  13. Distribution of airborne particles from multi-emission source.

    PubMed

    Kemppainen, Sari; Tervahattu, Heikki; Kikuchi, Ryunosuke

    2003-06-01

    The purpose of this work was to study the distribution of airborne particles in the surroundings of an iron and steel factory in southern Finland. Several sources of particulate emissions lie side by side, causing heavy dust loading to the environment. This complicated multi-pollutant situation was studied mainly by SEM/EDX methodology. Particles accumulated on Scots pine bark were identified and quantitatively measured according to their element content, size and shape. As a result, distribution maps of particulate elements were drawn and the amount of different particle types along the study lines was plotted. Particulate emissions from the industrial or energy production processes were not the main dust source. Most emissions were produced from the clinker crusher. Numerous stockpiles of the industrial wastes and raw materials also gave rise to particulate emissions as a result of wind erosion. It was concluded that SEM/EDX methodology is a useful tool for studying the distribution of particulate pollutants.

  14. Coarse Grid Modeling of Turbine Film Cooling Flows Using Volumetric Source Terms

    NASA Technical Reports Server (NTRS)

    Heidmann, James D.; Hunter, Scott D.

    2001-01-01

    The recent trend in numerical modeling of turbine film cooling flows has been toward higher fidelity grids and more complex geometries. This trend has been enabled by the rapid increase in computing power available to researchers. However, the turbine design community requires fast turnaround time in its design computations, rendering these comprehensive simulations ineffective in the design cycle. The present study describes a methodology for implementing a volumetric source term distribution in a coarse grid calculation that can model the small-scale and three-dimensional effects present in turbine film cooling flows. This model could be implemented in turbine design codes or in multistage turbomachinery codes such as APNASA, where the computational grid size may be larger than the film hole size. Detailed computations of a single row of 35 deg round holes on a flat plate have been obtained for blowing ratios of 0.5, 0.8, and 1.0, and density ratios of 1.0 and 2.0 using a multiblock grid system to resolve the flows on both sides of the plate as well as inside the hole itself. These detailed flow fields were spatially averaged to generate a field of volumetric source terms for each conservative flow variable. Solutions were also obtained using three coarse grids having streamwise and spanwise grid spacings of 3d, 1d, and d/3. These coarse grid solutions used the integrated hole exit mass, momentum, energy, and turbulence quantities from the detailed solutions as volumetric source terms. It is shown that a uniform source term addition over a distance from the wall on the order of the hole diameter is able to predict adiabatic film effectiveness better than a near-wall source term model, while strictly enforcing correct values of integrated boundary layer quantities.
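
    As a rough sketch of the volumetric source term idea described above (not the study's or APNASA's implementation; all dimensions, flow values, and variable names are assumed), the integrated coolant mass flow from one hole can be spread uniformly over coarse-grid cells within about one hole diameter of the wall:

        import numpy as np

        # Hypothetical coarse-grid column of cells above the cooled wall.
        cell_heights = np.full(20, 0.5e-3)           # cell height [m]
        cell_volume  = cell_heights * 1.0e-6         # cell volume [m^3] (1 mm x 1 mm plan area assumed)
        y_faces = np.concatenate(([0.0], np.cumsum(cell_heights)))
        y_centers = 0.5 * (y_faces[:-1] + y_faces[1:])

        d_hole = 2.0e-3                              # film hole diameter [m]
        mdot_hole = 1.0e-4                           # coolant mass flow from the detailed solution [kg/s]

        # Distribute the integrated hole-exit mass flux uniformly over cells whose
        # centers lie within one hole diameter of the wall.
        in_layer = y_centers <= d_hole
        layer_volume = cell_volume[in_layer].sum()
        mass_source = np.where(in_layer, mdot_hole / layer_volume, 0.0)   # [kg/(s m^3)]

        print(mass_source)   # volumetric source term added to the continuity equation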

  15. Production, Distribution, and Applications of Californium-252 Neutron Sources

    SciTech Connect

    Balo, P.A.; Knauer, J.B.; Martin, R.C.

    1999-10-03

    The radioisotope {sup 252}Cf is routinely encapsulated into compact, portable, intense neutron sources with a 2.6-year half-life. A source the size of a person's little finger can emit up to 10{sup 11} neutrons/s. Californium-252 is used commercially as a reliable, cost-effective neutron source for prompt gamma neutron activation analysis (PGNAA) of coal, cement, and minerals, as well as for detection and identification of explosives, land mines, and unexploded military ordnance. Other uses are neutron radiography, nuclear waste assays, reactor start-up sources, calibration standards, and cancer therapy. The inherent safety of source encapsulations is demonstrated by 30 years of experience and by U.S. Bureau of Mines tests of source survivability during explosions. The production and distribution center for the U.S. Department of Energy (DOE) Californium Program is the Radiochemical Engineering Development Center (REDC) at Oak Ridge National Laboratory (ORNL). DOE sells {sup 252}Cf to commercial reencapsulators domestically and internationally.

  16. Production, distribution and applications of californium-252 neutron sources.

    PubMed

    Martin, R C; Knauer, J B; Balo, P A

    2000-01-01

    The radioisotope 252Cf is routinely encapsulated into compact, portable, intense neutron sources with a 2.6-yr half-life. A source the size of a person's little finger can emit up to 10(11) neutrons s(-1). Californium-252 is used commercially as a reliable, cost-effective neutron source for prompt gamma neutron activation analysis (PGNAA) of coal, cement and minerals, as well as for detection and identification of explosives, land mines and unexploded military ordnance. Other uses are neutron radiography, nuclear waste assays, reactor start-up sources, calibration standards and cancer therapy. The inherent safety of source encapsulations is demonstrated by 30 yr of experience and by US Bureau of Mines tests of source survivability during explosions. The production and distribution center for the US Department of Energy (DOE) Californium Program is the Radiochemical Engineering Development Center (REDC) at Oak Ridge National Laboratory (ORNL). DOE sells 252Cf to commercial reencapsulators domestically and internationally. Sealed 252Cf sources are also available for loan to agencies and subcontractors of the US government and to universities for educational, research and medical applications. The REDC has established the Californium User Facility (CUF) for Neutron Science to make its large inventory of 252Cf sources available to researchers for irradiations inside uncontaminated hot cells. Experiments at the CUF include a land mine detection system, neutron damage testing of solid-state detectors, irradiation of human cancer cells for boron neutron capture therapy experiments and irradiation of rice to induce genetic mutations.

  17. Security of quantum key distribution with light sources that are not independently and identically distributed

    NASA Astrophysics Data System (ADS)

    Nagamatsu, Yuichi; Mizutani, Akihiro; Ikuta, Rikizo; Yamamoto, Takashi; Imoto, Nobuyuki; Tamaki, Kiyoshi

    2016-04-01

    Although quantum key distribution (QKD) is theoretically secure, there is a gap between the theory and practice. In fact, real-life QKD may not be secure because component devices in QKD systems may deviate from the theoretical models assumed in security proofs. To solve this problem, it is necessary to construct the security proof under realistic assumptions on the source and measurement unit. In this paper, we prove the security of a QKD protocol under practical assumptions on the source that accommodate fluctuation of the phase and intensity modulations. As long as our assumptions hold, it does not matter how the phase and intensity are distributed or whether their distributions over different pulses are independent and identically distributed. Our work shows that practical sources can be safely employed in QKD experiments.

  18. Verification test calculations for the Source Term Code Package

    SciTech Connect

    Denning, R S; Wooton, R O; Alexander, C A; Curtis, L A; Cybulskis, P; Gieseke, J A; Jordan, H; Lee, K W; Nicolosi, S L

    1986-07-01

    The purpose of this report is to demonstrate the reasonableness of the Source Term Code Package (STCP) results. Hand calculations have been performed spanning a wide variety of phenomena within the context of a single accident sequence, a loss of all ac power with late containment failure, in the Peach Bottom (BWR) plant, and compared with STCP results. The report identifies some of the limitations of the hand calculation effort. The processes involved in a core meltdown accident are complex and coupled. Hand calculations by their nature must deal with gross simplifications of these processes. Their greatest strength is as an indicator that a computer code contains an error, for example that it doesn't satisfy basic conservation laws, rather than in showing the analysis accurately represents reality. Hand calculations are an important element of verification but they do not satisfy the need for code validation. The code validation program for the STCP is a separate effort. In general the hand calculation results show that models used in the STCP codes (e.g., MARCH, TRAP-MELT, VANESA) obey basic conservation laws and produce reasonable results. The degree of agreement and significance of the comparisons differ among the models evaluated. 20 figs., 26 tabs.

  19. Source terms for plutonium aerosolization from nuclear weapon accidents

    SciTech Connect

    Stephens, D.R.

    1995-07-01

    The source term literature was reviewed to estimate aerosolized and respirable release fractions for accidents involving plutonium in high-explosive (HE) detonation and in fuel fires. For HE detonation, all estimates are based on the total amount of Pu. For fuel fires, all estimates are based on the amount of Pu oxidized. I based my estimates for HE detonation primarily upon the results from the Roller Coaster experiment. For hydrocarbon fuel fire oxidation of plutonium, I based lower bound values on laboratory experiments which represent accident scenarios with very little turbulence and updraft of a fire. Expected values for aerosolization were obtained from the Vixen A field tests, which represent a realistic case for modest turbulence and updraft, and for respirable fractions from some laboratory experiments involving large samples of Pu. Upper bound estimates for credible accidents are based on experiments involving combustion of molten plutonium droplets. In May of 1991 the DOE Pilot Safety Study Program established a group of experts to estimate the fractions of plutonium which would be aerosolized and respirable for certain nuclear weapon accident scenarios.

  20. Understanding the electrical behavior of the action potential in terms of elementary electrical sources.

    PubMed

    Rodriguez-Falces, Javier

    2015-03-01

    A concept of major importance in human electrophysiology studies is the process by which activation of an excitable cell results in a rapid rise and fall of the electrical membrane potential, the so-called action potential. Hodgkin and Huxley proposed a model to explain the ionic mechanisms underlying the formation of action potentials. However, this model is unsuitably complex for teaching purposes. In addition, the Hodgkin and Huxley approach describes the shape of the action potential only in terms of ionic currents, i.e., it is unable to explain the electrical significance of the action potential or describe the electrical field arising from this source using basic concepts of electromagnetic theory. The goal of the present report was to propose a new model to describe the electrical behavior of the action potential in terms of elementary electrical sources (in particular, dipoles). The efficacy of this model was tested through a closed-book written exam. The proposed model increased the ability of students to appreciate the distributed character of the action potential and also to recognize that this source spreads out along the fiber as a function of space. In addition, the new approach allowed students to realize that the amplitude and sign of the extracellular electrical potential arising from the action potential are determined by the spatial derivative of this intracellular source. The proposed model, which incorporates intuitive graphical representations, has improved students' understanding of the electrical potentials generated by bioelectrical sources and has heightened their interest in bioelectricity.
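
    A brief numerical sketch of the dipole-source idea (the fiber geometry, potential shape, and variable names are assumptions, not the article's teaching materials): the elementary source density along the fiber is taken as the spatial derivative of an idealized intracellular action potential, and its sign changes mark the depolarization and repolarization fronts.

        import numpy as np

        x = np.linspace(-20.0, 20.0, 401)                 # position along the fiber [mm]
        # Idealized intracellular action potential profile (hypothetical shape).
        v_m = 100.0 * np.exp(-(x / 3.0) ** 2) - 80.0      # membrane potential [mV]

        # In the dipole-source description, the elementary source density is
        # proportional to the spatial derivative of the intracellular potential.
        dv_dx = np.gradient(v_m, x)                        # [mV/mm]

        # Sign changes of dv_dx mark the depolarization and repolarization fronts,
        # i.e., the two opposed dipole layers travelling along the fiber.
        print("peak source density:", dv_dx.max(), "mV/mm")
        print("trough source density:", dv_dx.min(), "mV/mm")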

  1. Challenges in defining a radiologic and hydrologic source term for underground nuclear test centers, Nevada Test Site, Nye County, Nevada

    SciTech Connect

    Smith, D.K.

    1995-06-01

    The compilation of a radionuclide inventory for long-lived radioactive contaminants residual from nuclear testing provides a partial measure of the radiologic source term at the Nevada Test Site. The radiologic source term also includes potentially mobile short-lived radionuclides excluded from the inventory. The radiologic source term for tritium is known with accuracy and is equivalent to the hydrologic source term within the saturated zone. Definition of the total hydrologic source term for fission and activation products that have high activities for decades following underground testing involves knowledge and assumptions which are presently unavailable. Systematic investigation of the behavior of fission products, activation products and actinides under saturated or partially saturated conditions is imperative to define a representative total hydrologic source term. This is particularly important given the heterogeneous distribution of radionuclides within testing centers. Data quality objectives which emphasize a combination of measurements and credible estimates of the hydrologic source term are a priority for near-field investigations at the Nevada Test Site.

  2. Secure quantum key distribution with an uncharacterized source.

    PubMed

    Koashi, Masato; Preskill, John

    2003-02-01

    We prove the security of the Bennett-Brassard (BB84) quantum key distribution protocol for an arbitrary source whose averaged states are basis independent, a condition that is automatically satisfied if the source is suitably designed. The proof is based on the observation that, to an adversary, the key extraction process is equivalent to a measurement in the sigma(x) basis performed on a pure sigma(z)-basis eigenstate. The dependence of the achievable key length on the bit error rate is the same as that established by Shor and Preskill [Phys. Rev. Lett. 85, 441 (2000)].

  3. Space distribution of extragalactic sources - Cosmology versus evolution

    NASA Technical Reports Server (NTRS)

    Cavaliere, A.; Maccacaro, T.

    1990-01-01

    Alternative cosmologies have been recurrently invoked to explain in terms of global spacetime structure the apparent large increase, with increasing redshift, in the average luminosity of active galactic nuclei. These models interestingly seek to avoid the complexities of the canonical interpretation in terms of intrinsic population evolutions in a Friedmann universe. However, a problem of consistency for these cosmologies is pointed out, since they have to include also other classes of extragalactic sources, such as clusters of galaxies and BL Lac objects, for which there is preliminary evidence of a different behavior.

  4. Nonpoint source pollution: a distributed water quality modeling approach.

    PubMed

    León, L F; Soulis, E D; Kouwen, N; Farquhar, G J

    2001-03-01

    A distributed water quality model for nonpoint source pollution modeling in agricultural watersheds is described in this paper. A water quality component was developed for WATFLOOD (a flood forecast hydrological model) to deal with sediment and nutrient transport. The model uses a distributed group response unit approach for water quantity and quality modeling. Runoff, sediment yield and soluble nutrient concentrations are calculated separately for each land cover class, weighted by area and then routed downstream. With data extracted using Geographical Information Systems (GIS) technology for a local watershed, the model is calibrated for the hydrologic response and validated for the water quality component. The transferability of model parameters to other watersheds, especially those in remote areas without enough data for calibration, is a major problem in diffuse modeling. With the connection to GIS and the group response unit approach used in this paper, model portability increases substantially, which will improve nonpoint source modeling at the watershed-scale level.
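
    The group response unit idea can be illustrated with a toy calculation (hypothetical land-cover classes, coefficients, and element size; not the WATFLOOD code): per-class runoff and nutrient loads are computed separately and combined as area-weighted sums before routing downstream.

        # Hypothetical group response unit (GRU) aggregation for one grid element.
        land_cover = {
            # class: (area fraction, runoff [mm], nutrient concentration [mg/L])
            "cropland": (0.50, 12.0, 3.5),
            "forest":   (0.30,  8.0, 0.6),
            "urban":    (0.20, 20.0, 1.8),
        }

        element_area_km2 = 25.0

        # Area-weighted runoff for the element.
        runoff_mm = sum(frac * q for frac, q, _ in land_cover.values())

        # Load per class = runoff volume x concentration, area-weighted and summed.
        load_kg = sum(
            frac * element_area_km2 * 1e6 * (q / 1000.0) * c / 1000.0
            for frac, q, c in land_cover.values()
        )

        print(f"element runoff: {runoff_mm:.1f} mm, nutrient load: {load_kg:.0f} kg")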

  5. Distributed policy based access to networked heterogeneous ISR data sources

    NASA Astrophysics Data System (ADS)

    Bent, G.; Vyvyan, D.; Wood, David; Zerfos, Petros; Calo, Seraphin

    2010-04-01

    Within a coalition environment, ad hoc Communities of Interest (CoI's) come together, perhaps for only a short time, with different sensors, sensor platforms, data fusion elements, and networks to conduct a task (or set of tasks) with different coalition members taking different roles. In such a coalition, each organization will have its own inherent restrictions on how it will interact with the others. These are usually stated as a set of policies, including security and privacy policies. The capability that we want to enable for a coalition operation is to provide access to information from any coalition partner in conformance with the policies of all. One of the challenges in supporting such ad-hoc coalition operations is that of providing efficient access to distributed sources of data, where the applications requiring the data do not have knowledge of the location of the data within the network. To address this challenge the International Technology Alliance (ITA) program has been developing the concept of a Dynamic Distributed Federated Database (DDFD), also known as a Gaian Database. This type of database provides a means for accessing data across a network of distributed heterogeneous data sources where access to the information is controlled by a mixture of local and global policies. We describe how a network of disparate ISR elements can be expressed as a DDFD and how this approach enables sensor and other information sources to be discovered autonomously or semi-autonomously and combined or fused in accordance with formally defined local and global policies.

  6. Modeling contaminant concentration distributions in China's centralized source waters.

    PubMed

    Wu, Rui; Qian, Song S; Hao, Fanghua; Cheng, Hongguang; Zhu, Dangsheng; Zhang, Jianyong

    2011-07-15

    Characterizing contaminant occurrences in China's centralized source waters can provide an understanding of source water quality for stakeholders. The single-factor (i.e., worst contaminant) water-quality assessment method, commonly used in Chinese official analysis and publications, provides a qualitative summary of the country's water-quality status but does not specify the extent and degree of specific contaminant occurrences at the national level. Such information is needed for developing scientifically sound management strategies. This article presents a Bayesian hierarchical modeling approach for estimating contaminant concentration distributions in China's centralized source waters using arsenic and fluoride as examples. The data used are from the most recent national census of centralized source waters in 2006. The article uses three commonly used source water stratification methods to establish alternative hierarchical structures reflecting alternative model assumptions as well as competing management needs in characterizing pollutant occurrences. The results indicate that the probability of arsenic exceeding the standard of 0.05 mg/L is about 0.96-1.68% and the probability of fluoride exceeding 1 mg/L is about 9.56-9.96% nationally, both with strong spatial patterns. The article also discusses the use of the Bayesian approach for establishing a source water-quality information management system as well as other applications of our methods. PMID:21692445
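
    As a simplified stand-in for the hierarchical approach described above (not the authors' model; the counts, prior strength, and province labels are invented), an empirical-Bayes beta-binomial calculation shows how province-level exceedance probabilities shrink toward the pooled national rate.

        import numpy as np

        # Hypothetical exceedance counts (sources exceeding 0.05 mg/L arsenic)
        # per province: (n_exceeding, n_sampled).
        provinces = {"A": (3, 210), "B": (1, 180), "C": (6, 240), "D": (0, 150)}

        # Shared Beta(a, b) prior estimated from the pooled data, then
        # province-level posterior means that shrink small-sample provinces
        # toward the national exceedance rate.
        k = np.array([v[0] for v in provinces.values()], dtype=float)
        n = np.array([v[1] for v in provinces.values()], dtype=float)
        pooled_rate = k.sum() / n.sum()
        prior_strength = 50.0                       # assumed prior sample size
        a, b = pooled_rate * prior_strength, (1 - pooled_rate) * prior_strength

        posterior_mean = (k + a) / (n + a + b)
        for name, p in zip(provinces, posterior_mean):
            print(f"province {name}: P(exceed) ~ {p:.3%}")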

  7. Distribution of convective sources and their contribution to the TTL

    NASA Astrophysics Data System (ADS)

    Tzella, Alexandra; Legras, Bernard

    2010-05-01

    The tropical tropopause layer (TTL) is a key region that controls the exchanges between the troposphere and the stratosphere. Although it is well-known that deep convection is driving the renewal of compounds in the TTL, there exists no detailed study of the spatial and temporal distribution of these sources and their relative contribution. This study combines the Lagrangian point of view with high-resolution data from cloud tops (CLAUS dataset). We obtain the ensemble of convective sources by determining the location in both time and space where each TTL parcel has been detrained from. By examining the relative importance of the sources, we find that these are not only highly-localized but also that a small sub-ensemble exhibits a strong signature lasting for a whole season. As parcels rise within the TTL, they also experience strong horizontal mixing within the tropical latitude band. The transport between the time of detrainment and the altitudes where the parcels are well-mixed is determined by a transit function that characterizes the distribution of life times of parcels within the TTL. We will discuss how the sources and transit function change with the seasonal cycle and interannual variability by ENSO.

  8. Robust video transmission with distributed source coded auxiliary channel.

    PubMed

    Wang, Jiajun; Majumdar, Abhik; Ramchandran, Kannan

    2009-12-01

    We propose a novel solution to the problem of robust, low-latency video transmission over lossy channels. Predictive video codecs, such as MPEG and H.26x, are very susceptible to prediction mismatch between encoder and decoder or "drift" when there are packet losses. These mismatches lead to a significant degradation in the decoded quality. To address this problem, we propose an auxiliary codec system that sends additional information alongside an MPEG or H.26x compressed video stream to correct for errors in decoded frames and mitigate drift. The proposed system is based on the principles of distributed source coding and uses the (possibly erroneous) MPEG/H.26x decoder reconstruction as side information at the auxiliary decoder. The distributed source coding framework depends upon knowing the statistical dependency (or correlation) between the source and the side information. We propose a recursive algorithm to analytically track the correlation between the original source frame and the erroneous MPEG/H.26x decoded frame. Finally, we propose a rate-distortion optimization scheme to allocate the rate used by the auxiliary encoder among the encoding blocks within a video frame. We implement the proposed system and present extensive simulation results that demonstrate significant gains in performance both visually and objectively (on the order of 2 dB in PSNR over forward error correction based solutions and 1.5 dB in PSNR over intrarefresh based solutions for typical scenarios) under tight latency constraints.

  9. Multiple sparse volumetric priors for distributed EEG source reconstruction.

    PubMed

    Strobbe, Gregor; van Mierlo, Pieter; De Vos, Maarten; Mijović, Bogdan; Hallez, Hans; Van Huffel, Sabine; López, José David; Vandenberghe, Stefaan

    2014-10-15

    We revisit the multiple sparse priors (MSP) algorithm implemented in the statistical parametric mapping software (SPM) for distributed EEG source reconstruction (Friston et al., 2008). In the present implementation, multiple cortical patches are introduced as source priors based on a dipole source space restricted to a cortical surface mesh. In this note, we present a technique to construct volumetric cortical regions to introduce as source priors by restricting the dipole source space to a segmented gray matter layer and using a region growing approach. This extension makes it possible to reconstruct brain structures besides the cortical surface and facilitates the use of more realistic volumetric head models including more layers, such as cerebrospinal fluid (CSF), compared to the standard 3-layered scalp-skull-brain head models. We illustrated the technique with ERP data and anatomical MR images in 12 subjects. Based on the segmented gray matter for each of the subjects, cortical regions were created and introduced as source priors for MSP-inversion assuming two types of head models: the standard 3-layered scalp-skull-brain head models and extended 4-layered head models including CSF. We compared these models with the current implementation by assessing the free energy corresponding with each of the reconstructions using Bayesian model selection for group studies. Strong evidence was found in favor of the volumetric MSP approach compared to the MSP approach based on cortical patches for both types of head models. Overall, the strongest evidence was found in favor of the volumetric MSP reconstructions based on the extended head models including CSF. These results were verified by comparing the reconstructed activity. The use of volumetric cortical regions as source priors is a useful complement to the present implementation as it allows more complex head models and volumetric source priors to be introduced in future studies.
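
    The region-growing step used to build volumetric priors can be sketched generically as a 6-connected flood fill over a segmented gray-matter mask (the mask, seed, and size limit below are placeholders, not the SPM/MSP implementation):

        import numpy as np
        from collections import deque

        def grow_region(gray_mask, seed, max_voxels=500):
            """6-connected region growing on a boolean gray-matter mask."""
            region = np.zeros_like(gray_mask, dtype=bool)
            queue = deque([seed])
            offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
            while queue and region.sum() < max_voxels:
                v = queue.popleft()
                if region[v] or not gray_mask[v]:
                    continue
                region[v] = True
                for d in offsets:
                    nb = tuple(np.add(v, d))
                    if all(0 <= nb[i] < gray_mask.shape[i] for i in range(3)):
                        queue.append(nb)
            return region

        # Toy example: a random gray-matter mask and one seed voxel.
        mask = np.random.rand(32, 32, 32) > 0.3
        mask[16, 16, 16] = True                      # ensure the seed voxel is in the mask
        prior_region = grow_region(mask, seed=(16, 16, 16))
        print(prior_region.sum(), "voxels in this volumetric source prior")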

  10. Atmospheric PAHs in North China: Spatial distribution and sources.

    PubMed

    Zhang, Yanjun; Lin, Yan; Cai, Jing; Liu, Yue; Hong, Linan; Qin, Momei; Zhao, Yifan; Ma, Jin; Wang, Xuesong; Zhu, Tong; Qiu, Xinghua; Zheng, Mei

    2016-09-15

    Polycyclic aromatic hydrocarbons (PAHs), formed through incomplete combustion processes, have adverse health effects. To investigate the spatial distribution and sources of PAHs in North China, PAHs collected by passive sampling at 90 gridded sites from June to September 2011 were analyzed. The average concentration of the sum of fifteen PAHs in North China is 220±14 ng/m(3), with the highest in Shanxi, followed by Shandong and Hebei, and then the Beijing-Tianjin area. Major sources of PAHs are identified for each region of North China: coking processes for Shanxi, biomass burning for Hebei and Shandong, and coal combustion for the Beijing-Tianjin area. An emission inventory is combined with back-trajectory analysis to study the influence of emissions from surrounding areas on receptor sites. The Shanxi and Beijing-Tianjin areas are more influenced by nearby sources, while regional sources have more impact on the Hebei and Shandong areas. Results from this study suggest the areas where local emissions should be the major target for control and the areas where both local and regional sources should be considered for PAH abatement in North China.

  12. Volatile Organic Compounds: Characteristics, distribution and sources in urban schools

    NASA Astrophysics Data System (ADS)

    Mishra, Nitika; Bartsch, Jennifer; Ayoko, Godwin A.; Salthammer, Tunga; Morawska, Lidia

    2015-04-01

    Long-term exposure to organic pollutants, both inside and outside school buildings, may affect children's health and influence their learning performance. Since children spend a significant amount of time in school, air quality, especially in classrooms, plays a key role in determining the health risks associated with exposure at schools. Within this context, the present study investigated the ambient concentrations of Volatile Organic Compounds (VOCs) in 25 primary schools in Brisbane with the aim of quantifying the indoor and outdoor VOC concentrations, identifying VOC sources and their contributions, and, based on these, proposing mitigation measures to reduce VOC exposure in schools. One of the most important findings is the occurrence of indoor sources, indicated by an I/O ratio >1 in 19 schools. Principal Component Analysis with Varimax rotation was used to identify common sources of VOCs, and source contributions were calculated using an Absolute Principal Component Scores technique. The results showed that petrol vehicle exhaust contributed 47% of outdoor VOCs, whereas indoors cleaning products had the highest contribution (41%), followed by air fresheners and art and craft activities. These findings point to the need for a range of basic precautions during the selection, use and storage of cleaning products and materials to reduce the risk from these sources.

  13. CMP reflection imaging via interferometry of distributed subsurface sources

    NASA Astrophysics Data System (ADS)

    Kim, D.; Brown, L. D.; Quiros, D. A.

    2015-12-01

    The theoretical foundations of recovering body wave energy via seismic interferometry are well established. However, in practice such recovery remains problematic. Here, synthetic seismograms computed for subsurface sources are used to evaluate the geometrical combinations of realistic ambient source and receiver distributions that result in useful recovery of virtual body waves. This study illustrates how surface receiver arrays that span a limited distribution of sources can be processed to produce virtual shot gathers and, in turn, CMP gathers that can be effectively stacked with traditional normal moveout corrections. To verify the feasibility of the approach in practice, seismic recordings of 50 aftershocks following the magnitude 5.8 Virginia earthquake of August 2011 have been processed using seismic interferometry to produce seismic reflection images of the crustal structure above and beneath the aftershock cluster. Although monotonic noise proved to be problematic by significantly reducing the number of usable recordings, the edited dataset resulted in stacked seismic sections characterized by coherent reflections that resemble those seen on a nearby conventional reflection survey. In particular, "virtual" reflections at travel times of 3 to 4 seconds suggest reflectors at approximately 7 to 12 km depth that would seem to correspond to imbricate thrust structures formed during the Appalachian orogeny. The approach described here represents a promising new means of body wave imaging of 3D structure that can be applied to a wide array of geologic and energy problems. Unlike other imaging techniques using natural sources, this technique does not require precise source locations or times. It can thus exploit aftershocks too small for conventional analyses. This method can be applied to any type of microseismic cloud, whether tectonic, volcanic or man-made.

  14. Mapping the source distribution of microseisms using noise covariogram envelopes

    NASA Astrophysics Data System (ADS)

    Sadeghisorkhani, Hamzeh; Gudmundsson, Ólafur; Roberts, Roland; Tryggvason, Ari

    2016-06-01

    We introduce a method for mapping the noise-source distribution of microseisms which uses information from the full length of covariograms (cross-correlations). We derive a forward calculation based on the plane-wave assumption in 2-D, to formulate an iterative, linearized inversion of covariogram envelopes in the time domain. The forward calculation involves bandpass filtering of the covariograms. The inversion exploits the well-known feature of noise cross-correlation, that is, an anomaly in the noise field that is oblique to the interstation direction appears as cross-correlation amplitude at a smaller time lag than the in-line, surface wave arrival. Therefore, the inversion extracts more information from the covariograms than that contained at the expected surface wave arrival, and this allows us to work with few stations to find the propagation directions of incoming energy. The inversion is naturally applied to data that retain physical units that are not amplitude normalized in any way. By dividing a network into groups of stations, we can constrain the source location by triangulation. We demonstrate results of the method with synthetic data and one year (2012) of data from the Swedish National Seismic Network and also look at the seasonal variation of source distribution around Scandinavia. After preprocessing and cross-correlation, the stations are divided into five groups of 9-12 stations. We invert the envelopes of each group in eight period ranges between 2 and 25 s. Results show that the noise sources at short periods (less than 12 s) lie predominantly in the North Atlantic Ocean and the Barents Sea, and at longer periods the energy appears to have a broader distribution. The strongly anisotropic source distribution in this area is estimated to cause significant biases of velocity measurements compared to the level of heterogeneity in the region. The amplitude of the primary microseisms varies little over the year, but secondary microseisms are much
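
    The envelope that the inversion operates on can be sketched with generic signal-processing steps (the synthetic records, band limits, and variable names below are assumptions; the actual station pairing, preprocessing, and inversion are omitted):

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert, correlate

        fs = 20.0                                    # sampling rate [Hz]
        t = np.arange(0, 3600.0, 1.0 / fs)
        # Hypothetical ambient-noise records at two stations.
        rng = np.random.default_rng(0)
        rec_a = rng.standard_normal(t.size)
        rec_b = np.roll(rec_a, 40) + 0.5 * rng.standard_normal(t.size)

        # Cross-correlate (covariogram), bandpass to a microseism period band,
        # then take the analytic-signal envelope used by the inversion.
        cc = correlate(rec_a, rec_b, mode="full", method="fft")
        b, a = butter(4, [1.0 / 12.0, 1.0 / 5.0], btype="bandpass", fs=fs)  # 5-12 s periods
        cc_band = filtfilt(b, a, cc)
        envelope = np.abs(hilbert(cc_band))
        lags = (np.arange(cc.size) - (t.size - 1)) / fs
        print("envelope peak at lag", lags[envelope.argmax()], "s")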

  15. The Impact of Source Distribution on Scalar Transport over Forested Hills

    NASA Astrophysics Data System (ADS)

    Ross, Andrew N.; Harman, Ian N.

    2015-08-01

    Numerical simulations of neutral flow over a two-dimensional, isolated, forested ridge are conducted to study the effects of scalar source distribution on scalar concentrations and fluxes over forested hills. Three different constant-flux sources are considered that span a range of idealized but ecologically important source distributions: a source at the ground, one uniformly distributed through the canopy, and one decaying with depth in the canopy. A fourth source type, where the in-canopy source depends on both the wind speed and the difference in concentration between the canopy and a reference concentration on the leaf, designed to mimic deposition, is also considered. The simulations show that the topographically-induced perturbations to the scalar concentration and fluxes are quantitatively dependent on the source distribution. The net impact is a balance of different processes affecting both advection and turbulent mixing, and can be significant even for moderate topography. Sources that have significant input in the deep canopy or at the ground exhibit a larger magnitude advection and turbulent flux-divergence terms in the canopy. The flows have identical velocity fields and so the differences are entirely due to the different tracer concentration fields resulting from the different source distributions. These in-canopy differences lead to larger spatial variations in above-canopy scalar fluxes for sources near the ground compared to cases where the source is predominantly located near the canopy top. Sensitivity tests show that the most significant impacts are often seen near to or slightly downstream of the flow separation or reattachment points within the canopy flow. The qualitative similarities to previous studies using periodic hills suggest that important processes occurring over isolated and periodic hills are not fundamentally different. The work has important implications for the interpretation of flux measurements over forests, even in

  16. Extending Marine Species Distribution Maps Using Non-Traditional Sources

    PubMed Central

    Moretzsohn, Fabio; Gibeaut, James

    2015-01-01

    Background: Traditional sources of species occurrence data such as peer-reviewed journal articles and museum-curated collections are included in species databases after rigorous review by species experts and evaluators. The distribution maps created in this process are an important component of species survival evaluations, and are used to adapt, extend and sometimes contract polygons used in the distribution mapping process. New information: During an IUCN Red List Gulf of Mexico Fishes Assessment Workshop held at The Harte Research Institute for Gulf of Mexico Studies, a session included an open discussion on the topic of including other sources of species occurrence data. During the last decade, advances in portable electronic devices and applications enable 'citizen scientists' to record images, location and data about species sightings, and submit that data to larger species databases. These applications typically generate point data. Attendees of the workshop expressed an interest in how that data could be incorporated into existing datasets, how best to ascertain the quality and value of that data, and what other alternate data sources are available. This paper addresses those issues, and provides recommendations to ensure quality data use. PMID:25941453

  17. Long-term Trend of Solar Coronal Hole Distribution from 1975 to 2014

    NASA Astrophysics Data System (ADS)

    Fujiki, K.; Tokumaru, M.; Hayashi, K.; Satonaka, D.; Hakamada, K.

    2016-08-01

    We developed an automated prediction technique for coronal holes using potential magnetic field extrapolation in the solar corona to construct a database of coronal holes appearing from 1975 February to 2015 July (Carrington rotations 1625 to 2165). Coronal holes are labeled with the location, size, and average magnetic field of each coronal hole on the photosphere and source surface. As a result, we identified 3335 coronal holes and found that the long-term distribution of coronal holes shows a pattern similar to the well-known magnetic butterfly diagram, and that polar/low-latitude coronal holes tend to decrease/increase in the last solar minimum relative to the previous two minima.

  18. 7 CFR 1822.268 - Rates, terms, and source of funds.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 12 2010-01-01 2010-01-01 false Rates, terms, and source of funds. 1822.268 Section... Site Loan Policies, Procedures, and Authorizations § 1822.268 Rates, terms, and source of funds. (a... security interest of the Government. (c) Source of funds. Loans under this subpart will be made as...

  19. 7 CFR 1822.268 - Rates, terms, and source of funds.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 12 2012-01-01 2012-01-01 false Rates, terms, and source of funds. 1822.268 Section... Site Loan Policies, Procedures, and Authorizations § 1822.268 Rates, terms, and source of funds. (a... security interest of the Government. (c) Source of funds. Loans under this subpart will be made as...

  20. Streamlined Genome Sequence Compression using Distributed Source Coding

    PubMed Central

    Wang, Shuang; Jiang, Xiaoqian; Chen, Feng; Cui, Lijuan; Cheng, Samuel

    2014-01-01

    We aim at developing a streamlined genome sequence compression algorithm to support alternative miniaturized sequencing devices, which have limited communication, storage, and computation power. Existing techniques that require a heavy client (encoder side) cannot be applied. To tackle this challenge, we carefully examined distributed source coding theory and developed a customized reference-based genome compression protocol to meet the low-complexity need at the client side. Based on the variation between source and reference, our protocol adaptively picks either syndrome coding or hash coding to compress subsequences of changing code length. Our experimental results showed promising performance of the proposed method when compared with the state-of-the-art algorithm (GRS). PMID:25520552

  1. Long-term optical behavior of 114 extragalactic sources

    NASA Astrophysics Data System (ADS)

    Pica, A. J.; Pollock, J. T.; Smith, A. G.; Leacock, R. J.; Edwards, P. L.; Scott, R. L.

    1980-11-01

    Photographic observations of over 200 quasars and related objects have been obtained at the Rosemary Hill Observatory since 1968. Twenty that are optically violent variables were reported on by Pollock et al. (1979). This paper presents data for 114 less active sources, 58 of which exhibit optical variations at a confidence level of 95% or greater. Light curves are given for the 26 most active sources. In addition, the overall monitoring program at the Observatory is reviewed, and information on the status of 206 objects is provided.

  2. Local tsunamis and distributed slip at the source

    USGS Publications Warehouse

    Geist, E.L.; Dmowska, R.

    1999-01-01

    Variations in the local tsunami wave field are examined in relation to heterogeneous slip distributions that are characteristic of many shallow subduction zone earthquakes. Assumptions inherent in calculating the coseismic vertical displacement field that defines the initial condition for tsunami propagation are examined. By comparing the seafloor displacement from uniform slip to that from an ideal static crack, we demonstrate that dip-directed slip variations significantly affect the initial cross-sectional wave profile. Because of the hydrodynamic stability of tsunami wave forms, these effects directly impact estimates of maximum runup from the local tsunami. In most cases, an assumption of uniform slip in the dip direction significantly underestimates the maximum amplitude and leading wave steepness of the local tsunami. Whereas dip-directed slip variations affect the initial wave profile, strike-directed slip variations result in wavefront-parallel changes in amplitude that are largely preserved during propagation from the source region toward shore, owing to the effects of refraction. Tests of discretizing slip distributions indicate that small fault surface elements of dimensions similar to the source depth can acceptably approximate the vertical displacement field in comparison to continuous slip distributions. Crack models for tsunamis generated by shallow subduction zone earthquakes indicate that a rupture intersecting the free surface results in approximately twice the average slip. Therefore, the observation of higher slip associated with tsunami earthquakes relative to typical subduction zone earthquakes of the same magnitude suggests that tsunami earthquakes involve rupture of the seafloor, whereas rupture of deeper subduction zone earthquakes may be imbedded and not reach the seafloor.

  3. Reservoir, seal, and source rock distribution in Essaouira Rift Basin

    SciTech Connect

    Ait Salem, A. )

    1994-07-01

    The Essaouira onshore basin is an important hydrocarbon generating basin, which is situated in western Morocco. There are seven oil and gas-with-condensate fields; six are from Jurassic reservoirs and one from a Triassic reservoir. As a segment of the Atlantic passive continental margin, the Essaouira basin was subjected to several post-Hercynian basin deformation phases, which resulted in distribution, in space and time, of reservoir, seal, and source rock. These basin deformations are synsedimentary infilling of major half grabens with continental red beds and evaporites associated with the rifting phase, emplacement of a thick postrifting Jurassic and Cretaceous sedimentary wedge during thermal subsidence, salt movements, and structural deformations in relation to the Atlas emergence. The widely extending lower Oxfordian shales are the only Jurassic shale beds penetrated and recognized as potential and mature source rocks. However, facies analysis and mapping suggested the presence of untested source rocks in Dogger marine shales and Triassic to Liassic lacustrine shales. Rocks with adequate reservoir characteristics were encountered in Triassic/Liassic fluvial sands, upper Liassic dolomites, and upper Oxfordian sandy dolomites. The seals are provided by Liassic salt for the lower reservoirs and Middle to Upper Jurassic anhydrite for the upper reservoirs. Recent exploration studies demonstrate that many prospective structures remain untested.

  4. CHALLENGES IN SOURCE TERM MODELING OF DECONTAMINATION AND DECOMMISSIONING WASTES.

    SciTech Connect

    SULLIVAN, T.M.

    2006-08-01

    Development of real-time predictive modeling to identify the dispersion and/or source(s) of airborne weapons of mass destruction including chemical, biological, radiological, and nuclear material in urban environments is needed to improve response to potential releases of these materials via either terrorist or accidental means. These models will also prove useful in defining airborne pollution dispersion in urban environments for pollution management/abatement programs. Predicting gas flow in an urban setting on a scale of less than a few kilometers is a complicated and challenging task due to the irregular flow paths that occur along streets and alleys and around buildings of different sizes and shapes, i.e., ''urban canyons''. In addition, air exchange between the outside and buildings and subway areas further complicate the situation. Transport models that are used to predict dispersion of WMD/CBRN materials or to back track the source of the release require high-density data and need defensible parameterizations of urban processes. Errors in the data or any of the parameter inputs or assumptions will lead to misidentification of the airborne spread or source release location(s). The need for these models to provide output in a real-time fashion if they are to be useful for emergency response provides another challenge. To improve the ability of New York City's (NYC's) emergency management teams and first response personnel to protect the public during releases of hazardous materials, the New York City Urban Dispersion Program (UDP) has been initiated. This is a four year research program being conducted from 2004 through 2007. This paper will discuss ground level and subway Perfluorocarbon tracer (PFT) release studies conducted in New York City. The studies released multiple tracers to study ground level and vertical transport of contaminants. This paper will discuss the results from these tests and how these results can be used for improving transport models

  5. Radiological and chemical source terms for Solid Waste Operations Complex. Revision 1

    SciTech Connect

    Boothe, G.F.

    1994-06-03

    The purpose of this document is to describe the radiological and chemical source terms for the major projects of the Solid Waste Operations Complex (SWOC), including Project W-112, Project W-133 and Project W-100 (WRAP 2A). For purposes of this document, the term "source term" means the design basis inventory. All of the SWOC source terms involve the estimation of the radiological and chemical contents of various waste packages from different waste streams, and the inventories of these packages within facilities or within a scope of operations. The composition of some of the waste is not known precisely; consequently, conservative assumptions were made to ensure that the source term represents a bounding case (i.e., it is expected that the source term would not be exceeded). As better information is obtained on the radiological and chemical contents of waste packages and more accurate facility specific models are developed, this document should be revised as appropriate. Radiological source terms are needed to perform shielding and external dose calculations, to estimate routine airborne releases, to perform release calculations and dose estimates for safety documentation, to calculate the maximum possible fire loss and specific source terms for individual fire areas, etc. Chemical source terms (i.e., inventories of combustible, flammable, explosive or hazardous chemicals) are used to determine combustible loading, fire protection requirements, personnel exposures to hazardous chemicals from routine and accident conditions, and a wide variety of other safety and environmental requirements.

  6. Time-dependent plane-wave spectrum representations for radiation from volume source distributions

    NASA Astrophysics Data System (ADS)

    Heyman, Ehud

    1996-02-01

    A new time-domain spectral theory for radiation from a time-dependent source distribution is presented. The full spectral representation is based on a Radon transform of the source distribution in the four-dimensional space-time domain and consists of time-dependent plane waves that propagate in all space directions and with all (spectral) propagation speeds vκ. This operation, termed the slant stack transform, involves projection of the time-dependent source distribution along planes normal to the spectral propagation direction and stacking them with a progressive delay corresponding to the spectral propagation speed vκ along this direction. Outside the source domain, this three-fold representation may be contracted into a two-fold representation consisting of time-dependent plane waves that satisfy the spectral constraint vκ=c with c being the medium velocity. In the two-fold representation, however, the complete spectral representation involves both propagating time-dependent plane waves and evanescent time-dependent plane waves. We explore the separate role of these spectral constituents in establishing the causal field, and determine the space-time regions where the field is described only by the propagating spectrum. The spectral theory is presented here for scalar wave fields, but it may readily be extended to vector electromagnetic fields.
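
    In schematic form (the notation below is chosen here for illustration and is not quoted from the paper), the slant stack operation described above projects a source distribution q(r, t) onto planes normal to the spectral direction and stacks them with a delay set by the spectral speed:

        \tilde{q}(\hat{\boldsymbol{\kappa}}, v_\kappa, \tau)
          = \int_{V} q\!\left(\mathbf{r},\; \tau + \frac{\hat{\boldsymbol{\kappa}}\cdot\mathbf{r}}{v_\kappa}\right) \mathrm{d}^{3}r ,

    so that each plane \hat{\boldsymbol{\kappa}}\cdot\mathbf{r} = \text{const} contributes at a delay proportional to its distance along \hat{\boldsymbol{\kappa}} divided by the spectral speed v_\kappa.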

  7. [Spatial distribution and pollution source identification of agricultural non-point source pollution in Fujiang watershed].

    PubMed

    Ding, Xiao-Wen; Shen, Zhen-Yao

    2012-11-01

    In order to provide regulatory support for management and control of non-point source (NPS) pollution in the Fujiang watershed, agricultural NPS pollution was simulated, spatial distribution characteristics of NPS pollution were analyzed, and the primary pollution sources were identified using an export coefficient model (ECM) and a geographic information system (GIS). Agricultural NPS total nitrogen (TN) loading of the research area was 9.11 x 10(4) t in 2010, and the average loading intensity was 3.10 t x km(-2). Agricultural NPS TN loading was mainly distributed over dry lands, Mianyang city and gentle slope areas; high loading intensity areas were dry lands, Deyang city and gentle slope areas. Agricultural land use, with a contribution rate of 62.12%, was the most important pollution source; fertilizer loss from dry lands, with a contribution rate of 50.49%, was the most prominent. Improving methods of agricultural cultivation, implementing the "farm land returning to woodland" policy, and enhancing treatment efficiency of domestic sewage and livestock wastewater are effective measures.
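
    The export coefficient model named above amounts to summing land-use areas times per-use export coefficients; the sketch below uses made-up coefficients and areas purely to illustrate the bookkeeping (the actual Fujiang coefficients are not reproduced here).

        # Export coefficient model (ECM): L = sum_i E_i * A_i
        # Hypothetical export coefficients [t km^-2 yr^-1] and areas [km^2].
        export_coeff = {"dry_land": 2.9, "paddy_field": 1.2, "woodland": 0.24, "grassland": 0.4}
        area_km2     = {"dry_land": 9000, "paddy_field": 6500, "woodland": 11000, "grassland": 2800}

        tn_load_t = sum(export_coeff[u] * area_km2[u] for u in export_coeff)
        intensity = tn_load_t / sum(area_km2.values())
        print(f"TN load: {tn_load_t:.0f} t/yr, mean intensity: {intensity:.2f} t/km^2")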

  8. Reference-frame-independent quantum key distribution with source flaws

    NASA Astrophysics Data System (ADS)

    Wang, Can; Sun, Shi-Hai; Ma, Xiang-Chun; Tang, Guang-Zhao; Liang, Lin-Mei

    2015-10-01

    Compared with the traditional protocols of quantum key distribution (QKD), the reference-frame-independent (RFI)-QKD protocol has been generally proved to be very useful and practical, since its experimental implementation can be simplified without the alignment of a reference frame. In most RFI-QKD systems, the encoding states are always taken to be perfect, which, however, is not practical in realizations. In this paper, we consider the security of RFI QKD with source flaws based on the loss-tolerant method proposed by Tamaki et al. [Phys. Rev. A 90, 052314 (2014), 10.1103/PhysRevA.90.052314]. As the six-state protocol can be realized with four states, we show that the RFI-QKD protocol can also be performed with only four encoding states instead of six encoding states in its standard version. Furthermore, the numerical simulation results show that source flaws in the key-generation basis (Z basis) will reduce the key rate but are loss tolerant, while those in the X and Y bases have almost no effect, and the key rate remains almost the same even when they are very large. Hence, our method and results will have important significance in practical experiments, especially in earth-to-satellite or chip-to-chip quantum communications.

  9. Source-rock distribution model of the periadriatic region

    SciTech Connect

    Zappaterra, E. )

    1994-03-01

    The Periadriatic area is a mosaic of geological provinces comprised of spatially and temporally similar tectonic-sedimentary cycles. Tectonic evolution progressed from a Triassic-Early Jurassic (Liassic) continental rifting stage on the northern edge of the African craton, through an Early Jurassic (Middle Liassic)-Late Cretaceous/Eocene oceanic rifting stage and passive margin formation, to a final continental collision and active margin deformation stage in the Late Cretaceous/Eocene to Holocene. Extensive shallow-water carbonate platform deposits covered large parts of the Periadriatic region in the Late Triassic. Platform breakup and development of a platform-to-basin carbonate shelf morphology began in the Late Triassic and extended through the Cretaceous. On the basis of this paleogeographic evolution, the regional geology of the Periadriatic region can be expressed in terms of three main Upper Triassic-Paleogene sedimentary sequences: (A), the platform sequence; (B), the platform to basin sequence; and (C), the basin sequence. These sequences developed during the initial rifting and subsequent passive-margin formation tectonic stages. The principal Triassic source basins and most of the surface hydrocarbon indications and economically important oil fields of the Periadriatic region are associated with sequence B areas. No major hydrocarbon accumulations can be directly attributed to the Jurassic-Cretaceous epioceanic and intraplatform source rock sequences. The third episode of source bed deposition characterizes the final active margin deformation stage and is represented by Upper Tertiary organic-rich terrigenous units, mostly gas-prone. These are essentially associated with turbiditic and flysch sequences of foredeep basins and have generated the greater part of the commercial biogenic gases of the Periadriatic region. 82 refs., 11 figs., 2 tabs.

  10. Human Term Placenta as a Source of Hematopoietic Cells

    PubMed Central

    Serikov, Vladimir; Hounshell, Catherine; Larkin, Sandra; Green, William; Ikeda, Hirokazu; Walters, Mark C.

    2012-01-01

    The main barrier to a broader clinical application of umbilical cord blood (UCB) transplantation is its limited cellular content. Thus, the discovery of hematopoietic progenitor cells in murine placental tissue led us to investigate whether the human placenta contains hematopoietic cells and sites of hematopoiesis, and to develop a procedure for processing and storing placental hematopoietic cells for transplantation. Here we show that the human placenta contains large numbers of CD34-expressing hematopoietic cells, with the potential to provide a cellular yield several-fold greater than that of a typical UCB harvest. Cells from fresh or cryopreserved placental tissue generated erythroid and myeloid colonies in culture, and also produced lymphoid cells after transplantation in immunodeficient mice. These results suggest that human placenta could become an important new source of hematopoietic cells for allogeneic transplantation. PMID:19429852

  11. Environmental radiation safety: source term modification by soil aerosols. Interim report

    SciTech Connect

    Moss, O.R.; Allen, M.D.; Rossignol, E.J.; Cannon, W.C.

    1980-08-01

    The goal of this project is to provide information useful in estimating hazards related to the use of a pure refractory oxide of /sup 238/Pu as a power source in some of the space vehicles to be launched during the next few years. Although the sources are designed and built to withstand re-entry into the earth's atmosphere, and to impact with the earth's surface without releasing any plutonium, the possibility that such an event might produce aerosols composed of soil and /sup 238/PuO/sub 2/ cannot be absolutely excluded. This report presents the results of our most recent efforts to measure the degree to which the plutonium aerosol source term might be modified in a terrestrial environment. The five experiments described represent our best effort to use the original experimental design to study the change in the size distribution and concentration of a /sup 238/PuO/sub 2/ aerosol due to coagulation with an aerosol of clay or sandy loam soil.

  12. Source term estimation during incident response to severe nuclear power plant accidents

    SciTech Connect

    McKenna, T.J.; Glitter, J.G.

    1988-10-01

    This document presents a method of source term estimation that reflects the current understanding of source term behavior and that can be used during an event. The various methods of estimating radionuclide release to the environment (source terms) as a result of an accident at a nuclear power reactor are discussed. The major factors affecting potential radionuclide releases off site (source terms) as a result of nuclear power plant accidents are described. The quantification of these factors based on plant instrumentation also is discussed. A range of accident conditions from those within the design basis to the most severe accidents possible are included in the text. A method of gross estimation of accident source terms and their consequences off site is presented. 39 refs., 48 figs., 19 tabs.

  13. 78 FR 6318 - SourceGas Distribution LLC; Notice of Petition for Rate Approval

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-30

    ... Energy Regulatory Commission SourceGas Distribution LLC; Notice of Petition for Rate Approval Take notice that on January 15, 2013, SourceGas Distribution LLC (SourceGas) filed a rate election pursuant to section 284.123(b)(1) of the Commissions regulations. SourceGas states the rate election...

  14. 78 FR 41398 - SourceGas Distribution LLC; Notice of Filing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-10

    ... Energy Regulatory Commission SourceGas Distribution LLC; Notice of Filing Take notice that on June 27, 2013, SourceGas Distribution LLC (SourceGas) filed a Rate Election and revised Statement of Operating... and 284.224). SourceGas proposes to revise its fuel reimbursement quantity percentage to reflect...

  15. Processes driving short-term temporal dynamics of small mammal distribution in human-disturbed environments.

    PubMed

    Martineau, Julie; Pothier, David; Fortin, Daniel

    2016-07-01

    As the impact of anthropogenic activities intensifies worldwide, an increasing proportion of landscape is converted to early successional stages every year. To understand and anticipate the global effects of the human footprint on wildlife, assessing short-term changes in animal populations in response to disturbance events is becoming increasingly important. We used isodar habitat selection theory to reveal the consequences of timber harvesting on the ecological processes that control the distribution dynamics of a small mammal, the red-backed vole (Myodes gapperi). The abundance of voles was estimated in pairs of cut and uncut forest stands, prior to logging and up to 2 years afterwards. A week after logging, voles did not display any preference between cut and uncut stands, and a non-significant isodar indicated that their distribution was not driven by density-dependent habitat selection. One month after harvesting, however, juvenile abundance increased in cut stands, whereas the highest proportions of reproductive females were observed in uncut stands. This distribution pattern appears to result from interference competition, with juveniles moving into cuts where there was weaker competition with adults. In fact, the emergence of source-sink dynamics between uncut and cut stands, driven by interference competition, could explain why the abundance of red-backed voles became lower in cut (the sink) than uncut (the source) stands 1-2 years after logging. Our study demonstrates that the influences of density-dependent habitat selection and interference competition in shaping animal distribution can vary frequently, and for several months, following anthropogenic disturbance. PMID:27003700

  16. Spatial distribution and migration of nonylphenol in groundwater following long-term wastewater irrigation.

    PubMed

    Wang, Shiyu; Wu, Wenyong; Liu, Fei; Yin, Shiyang; Bao, Zhe; Liu, Honglu

    2015-01-01

    Seen as a solution to water shortages, wastewater reuse for crop irrigation does, however, pose a risk owing to the potential release of organic contaminants into soil and water. The frequency of detection (FOD), concentration, and migration of nonylphenol (NP) isomers in reclaimed water (FODRW), surface water (FODSW), and groundwater (FODGW) were investigated in a long-term wastewater irrigation area in Beijing. The FODRW, FODSW and FODGW of any or all of 12 NP isomers were 66.7% to 100%, 76.9% to 100% and 13.3% to 60%, respectively. The mean (±standard deviation) NP concentrations of the reclaimed water, surface water, and groundwater (NPRW, NPSW, NPGW, respectively) were 469.4±73.4 ng L(-1), 694.6±248.7 ng L(-1) and 244.4±230.8 ng L(-1), respectively. The existence of external pollution sources during water transmission and distribution resulted in NPSW exceeding NPRW. NP distribution in groundwater was related to the duration and quantity of wastewater irrigation and the sources of aquifer recharge, and was seen to decrease with increasing aquifer depth. Higher riverside infiltration rates nearby led to higher FODGW values. The migration rate of NP isomers was classified as high, moderate or low.

  17. Spatial distribution and migration of nonylphenol in groundwater following long-term wastewater irrigation

    NASA Astrophysics Data System (ADS)

    Wang, Shiyu; Wu, Wenyong; Liu, Fei; Yin, Shiyang; Bao, Zhe; Liu, Honglu

    2015-06-01

    Seen as a solution to water shortages, wastewater reuse for crop irrigation does, however, pose a risk owing to the potential release of organic contaminants into soil and water. The frequency of detection (FOD), concentration, and migration of nonylphenol (NP) isomers in reclaimed water (FODRW), surface water (FODSW), and groundwater (FODGW) were investigated in a long-term wastewater irrigation area in Beijing. The FODRW, FODSW and FODGW of any or all of 12 NP isomers were 66.7% to 100%, 76.9% to 100% and 13.3% to 60%, respectively. The mean (± standard deviation) NP concentrations of the reclaimed water, surface water, and groundwater (NPRW, NPSW, NPGW, respectively) were 469.4 ± 73.4 ng L-1, 694.6 ± 248.7 ng L-1 and 244.4 ± 230.8 ng L-1, respectively. The existence of external pollution sources during water transmission and distribution resulted in NPSW exceeding NPRW. NP distribution in groundwater was related to the duration and quantity of wastewater irrigation and the sources of aquifer recharge, and was seen to decrease with increasing aquifer depth. Higher riverside infiltration rates nearby led to higher FODGW values. The migration rate of NP isomers was classified as high, moderate or low.

  18. Spatial Distribution of Soil Fauna In Long Term No Tillage

    NASA Astrophysics Data System (ADS)

    Corbo, J. Z. F.; Vieira, S. R.; Siqueira, G. M.

    2012-04-01

    The soil is a complex system constituted by living beings and organic and mineral particles, whose components define its physical, chemical and biological properties. Soil fauna plays an important role in the soil and may both reflect and interfere with its functionality. These organisms' populations may be influenced by management practices, fertilization, liming and porosity, among other factors. Such changes may reduce the composition and distribution of the soil fauna community. Thus, this study aimed to determine the spatial variability of soil fauna in a consolidated no-tillage system. The experimental area is located at the Instituto Agronômico in Campinas (São Paulo, Brazil). The sampling was conducted in a Rhodic Eutrudox under a no-tillage system; 302 points distributed over a 3.2-hectare area in a regular grid of 10.00 m x 10.00 m were sampled. The soil fauna was sampled with the pitfall trap method, and the traps remained in the area for seven days. Data were analyzed using descriptive statistics to determine the main statistical moments (mean, variance, coefficient of variation, standard deviation, skewness and kurtosis). Geostatistical tools were used to determine the spatial variability of the attributes using the experimental semivariogram. For the biodiversity analysis, the Shannon and Pielou indexes and richness were calculated for each sample. Geostatistics proved to be a useful tool for mapping the spatial variability of groups of the epigeal soil fauna. The family Formicidae proved to be the most abundant and dominant in the study area. The descriptive statistics showed that all studied attributes presented a lognormal frequency distribution for the groups of the epigeal soil fauna. The exponential model was the best suited to the data, both for the groups of epigeal soil fauna (Acari, Araneae, Coleoptera, Formicidae and Coleoptera larvae) and for the other biodiversity indexes. The sampling scheme (10.00 m x 10.00 m) was not sufficient to detect the spatial
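
    The geostatistical analysis mentioned above rests on the experimental semivariogram, gamma(h) = (1/(2 N(h))) * sum [z(x_i) - z(x_i + h)]^2 over pairs separated by lag h. The sketch below computes an isotropic experimental semivariogram from scattered counts; the coordinates and values are synthetic placeholders, not data from this study.

        # Minimal isotropic experimental semivariogram: mean squared difference / 2 per lag bin.
        # Sample coordinates and values below are synthetic, for illustration only.
        import numpy as np

        def experimental_semivariogram(xy, z, lag, n_lags):
            """xy: (n, 2) coordinates, z: (n,) values, lag: bin width, n_lags: number of bins."""
            d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)   # pairwise distances
            sq = 0.5 * (z[:, None] - z[None, :]) ** 2                      # semivariances
            iu = np.triu_indices_from(d, k=1)                              # unique pairs only
            d, sq = d[iu], sq[iu]
            centers, gammas = [], []
            for k in range(n_lags):
                mask = (d >= k * lag) & (d < (k + 1) * lag)
                if mask.any():
                    centers.append((k + 0.5) * lag)
                    gammas.append(sq[mask].mean())
            return np.array(centers), np.array(gammas)

        rng = np.random.default_rng(0)
        xy = rng.uniform(0, 180, size=(302, 2))              # synthetic point locations (m)
        z = np.log1p(rng.poisson(5, size=302)).astype(float)  # synthetic log-transformed counts
        h, g = experimental_semivariogram(xy, z, lag=10.0, n_lags=9)
        print(np.c_[h, g])

    A model (e.g. exponential, as selected in the study above) would then be fitted to the (h, gamma) pairs before kriging.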

  19. Identification of aerospace acoustic sources using sparse distributed associative memory

    NASA Technical Reports Server (NTRS)

    Scott, E. A.; Fuller, C. R.; O'Brien, W. F.

    1990-01-01

    A pattern recognition system has been developed to classify five different aerospace acoustic sources. In this paper the performance of two new classifiers, an associative memory classifier and a neural network classifier, is compared to the performance of a previously designed system. Sources are classified using features calculated from the time and frequency domain. Each classifier undergoes a training period where it learns to classify sources correctly based on a set of known sources. After training the classifier is tested with unknown sources. Results show that over 96 percent of sources were identified correctly with the new associative memory classifier. The neural network classifier identified over 81 percent of the sources correctly.

  20. Source term model evaluations for the low-level waste facility performance assessment

    SciTech Connect

    Yim, M.S.; Su, S.I.

    1995-12-31

    The estimation of release of radionuclides from various waste forms to the bottom boundary of the waste disposal facility (source term) is one of the most important aspects of LLW facility performance assessment. In this work, several currently used source term models are comparatively evaluated for the release of carbon-14 based on a test case problem. The models compared include PRESTO-EPA-CPG, IMPACTS, DUST and NEFTRAN-II. Major differences in assumptions and approaches between the models are described and key parameters are identified through sensitivity analysis. The source term results from different models are compared and other concerns or suggestions are discussed.

  1. Source-term reevaluation for US commercial nuclear power reactors: a status report

    SciTech Connect

    Herzenberg, C.L.; Ball, J.R.; Ramaswami, D.

    1984-12-01

    Only results that had been discussed publicly, had been published in the open literature, or were available in preliminary reports as of September 30, 1984, are included here. More than 20 organizations are participating in source-term programs, which have been undertaken to examine severe accident phenomena in light-water power reactors (including the chemical and physical behavior of fission products under accident conditions), update and reevaluate source terms, and resolve differences between predictions and observations of radiation releases and related phenomena. Results from these source-term activities have been documented in over 100 publications to date.

  2. Source terms and attenuation lengths for estimating shielding requirements or dose analyses of proton therapy accelerators.

    PubMed

    Sheu, Rong-Jiun; Lai, Bo-Lun; Lin, Uei-Tyng; Jiang, Shiang-Huei

    2013-08-01

    Proton therapy accelerators in the energy range of 100-300 MeV could potentially produce intense secondary radiation, which must be carefully evaluated and shielded for the purpose of radiation safety in a densely populated hospital. Monte Carlo simulations are generally the most accurate method for accelerator shielding design. However, simplified approaches such as the commonly used point-source line-of-sight model are usually preferable on many practical occasions, especially for scoping shielding design or quick sensitivity studies. This work provides a set of reliable shielding data with reasonable coverage of common target and shielding materials for 100-300 MeV proton accelerators. The shielding data, including source terms and attenuation lengths, were derived from a consistent curve fitting process of a number of depth-dose distributions within the shield, which were systematically calculated by using MCNPX for various beam-target shield configurations. The general characteristics and qualities of this data set are presented. Possible applications in cases of single- and double-layer shielding are considered and demonstrated.
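
    The point-source line-of-sight model referred to above is usually written as H = H0(theta) * exp(-d/lambda(theta)) / r^2, where H0 is the fitted source term at angle theta, lambda the attenuation length, d the slant shield thickness (often expressed as an areal density) and r the source-to-receptor distance. A minimal sketch of that estimate follows; the numerical values are hypothetical, not the fitted data of this paper.

        # Point-source line-of-sight shielding estimate: H = H0(theta) * exp(-d/lambda) / r^2.
        # H0 (source term) and the attenuation length are hypothetical fitted values.
        import math

        def transmitted_dose(h0_sv_m2_per_proton, atten_length_g_cm2, density_g_cm3,
                             shield_thickness_cm, distance_m):
            """Dose per proton behind a shield along one line of sight."""
            areal_density = density_g_cm3 * shield_thickness_cm            # g/cm^2
            attenuation = math.exp(-areal_density / atten_length_g_cm2)
            return h0_sv_m2_per_proton * attenuation / distance_m ** 2

        # Example: hypothetical 90-degree source term, thick iron target, concrete shield.
        dose = transmitted_dose(h0_sv_m2_per_proton=2.0e-15, atten_length_g_cm2=120.0,
                                density_g_cm3=2.35, shield_thickness_cm=200.0, distance_m=5.0)
        print(f"{dose:.3e} Sv per proton")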

  3. Open Source assimilation tool for distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Richard, Julien; Giangola-Murzyn, Agathe; Tchiguirinskaia, Ioulia; Schertzer, Daniel

    2013-04-01

    An advanced GIS data assimilation interface is required to obtain a distributed hydrological model that is both transportable from catchment to catchment and easily adaptable to the data resolution. This tool covers the cartographic data as well as the linked information data. In the case of the Multi-Hydro-Version2 model (A. Giangola-Murzyn et al., 2012), several types of information are distributed on a regular grid. The grid cell size has to be chosen by the user and each cell has to be filled with information. To be as realistic as possible, the Multi-Hydro model takes several data sets into account. The assimilation tool (MH-AssimTool) therefore has to be able to import all of this information. The needed flexibility in study area and grid size requires a GIS interface that is easy to learn and practical to use. The chosen solution is a main window for geographical visualisation together with hierarchical menus coupled with checkboxes. For example, geographical information such as the topography or the land use can be visualized in the main window. The other data, such as soil conductivity, geology or initial moisture, are requested through several pop-up windows. Once the needed information has been imported, MH-AssimTool prepares the data automatically. For the topography data conversion, if the resolution is too low, an interpolation is performed during processing. As a result, all converted data are at a suitable resolution for the modelling. Like Multi-Hydro, MH-AssimTool is open source. It is coded in Visual Basic coupled with a GIS library. The interface is built in such a way that it can be used by a non-specialist. We will illustrate the efficiency of the tool with case studies of peri-urban catchments of widely different sizes and characteristics. We will also explain some parts of the coding of the interface.

  4. Source terms: an investigation of uncertainties, magnitudes, and recommendations for research. [PWR; BWR

    SciTech Connect

    Levine, S.; Kaiser, G. D.; Arcieri, W. C.; Firstenberg, H.; Fulford, P. J.; Lam, P. S.; Ritzman, R. L.; Schmidt, E. R.

    1982-03-01

    The purpose of this document is to assess the state of knowledge and expert opinions that exist about fission product source terms from potential nuclear power plant accidents. This is so that recommendations can be made for research and analyses which have the potential to reduce the uncertainties in these estimated source terms and to derive improved methods for predicting their magnitudes. The main reasons for writing this report are to indicate the major uncertainties involved in defining realistic source terms that could arise from severe reactor accidents, to determine which factors would have the most significant impact on public risks and emergency planning, and to suggest research and analyses that could result in the reduction of these uncertainties. Source terms used in the conventional consequence calculations in the licensing process are not explicitly addressed.

  5. Accident source terms for boiling water reactors with high burnup cores.

    SciTech Connect

    Gauntt, Randall O.; Powers, Dana Auburn; Leonard, Mark Thomas

    2007-11-01

    The primary objective of this report is to provide the technical basis for development of recommendations for updates to the NUREG-1465 Source Term for BWRs that will extend its applicability to accidents involving high burnup (HBU) cores. However, a secondary objective is to re-examine the fundamental characteristics of the prescription for fission product release to containment described by NUREG-1465. This secondary objective is motivated by an interest in understanding the extent to which research into the release and behaviors of radionuclides under accident conditions has altered best-estimate calculations of the integral response of BWRs to severe core damage sequences and the resulting radiological source terms to containment. This report, therefore, documents specific results of fission product source term analyses that will form the basis for the HBU supplement to NUREG-1465. However, commentary is also provided on observed differences between the composite results of the source term calculations performed here and those reflected in NUREG-1465 itself.

  6. Future prospects for ECR ion sources with improved charge state distributions

    SciTech Connect

    Alton, G.D.

    1995-12-31

    Despite the steady advance in the technology of the ECR ion source, present art forms have not yet reached their full potential in terms of charge state and intensity within a particular charge state, in part because of the narrow-bandwidth, single-frequency microwave radiation used to heat the plasma electrons. This article identifies fundamentally important methods which may enhance the performance of ECR ion sources through the use of: (1) a tailored magnetic field configuration (spatial domain) in combination with single-frequency microwave radiation to create a large, uniformly distributed ECR "volume", or (2) broadband frequency-domain techniques (variable-frequency, broad-band frequency, or multiple-discrete-frequency microwave radiation), derived from standard TWT technology, to transform the resonant plasma "surfaces" of traditional ECR ion sources into resonant plasma "volumes". The creation of a large ECR plasma "volume" permits coupling of more power into the plasma, resulting in the heating of a much larger electron population to higher energies, thereby producing higher charge state ions and much higher intensities within a particular charge state than possible in present forms of the source. The ECR ion source concepts described in this article offer exciting opportunities to significantly advance the state of the art of ECR technology and, as a consequence, open new opportunities in fundamental and applied research and for a variety of industrial applications.

  7. Sources and distribution of carbon within the Yangtze River system

    NASA Astrophysics Data System (ADS)

    Wu, Y.; Zhang, J.; Liu, S. M.; Zhang, Z. F.; Yao, Q. Z.; Hong, G. H.; Cooper, L.

    2007-01-01

    Dissolved, particulate, soil and plant samples were collected from the Yangtze River (Changjiang) system in May 1997 and May 2003 to determine the sources and distribution of organic and inorganic matter within the river system. Average dissolved organic carbon (DOC) concentrations within the main stream were 105 μM C in 1997 and 108 μM C in 2003. Particulate organic carbon (POC) ranged from 0.5% to 2.5% of total suspended matter (TSM). Both dissolved inorganic carbon (DIC) and particulate inorganic carbon (PIC) concentrations decreased from upper to lower reaches of the river, within the ranges 1.2-2.7 mM and 0.08-4.3% of TSM, respectively. δ13C and δ15N values for tributaries and the main stream varied from -26.8‰ to -25.1‰ and 2.8‰ to 6.0‰, respectively. A large spatial variation in particulate organic matter (POM) is recorded along the main stream, probably due to the contributions of TSM from major tributaries and POM input from local vegetation sources. The dominance of C-3 plants throughout the entire basin is indicated by δ13C and δ15N values, which range from -28.8‰ to -24.3‰ and from -0.9‰ to 5.5‰, respectively. The δ13C and δ15N values of organic matter within surface soil from alongside tributaries and the main stream vary from -28.9‰ to -24.3‰ and 2.7‰ to 4.5‰, respectively. Although these differences are subtle, there is a slight enrichment of 15N in soils along the main stream. Various approaches, such as C/N and stable isotopes, were used to trace the sources of organic matter within the river. Riverine POM is mostly derived from soil; the contribution from phytoplankton is minor and difficult to trace via the composition of particles. POC flux has decreased from >5 × 10^6 t yr^-1 during the period 1960-1980 to about 2 × 10^6 t yr^-1 in 1997. This trend can be explained by decreasing sediment load within the Yangtze River. The export of TOC from the Yangtze River at the end of the 20th Century is approximately

  8. 77 FR 28374 - SourceGas Distribution LLC; Notice of Compliance Filing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-14

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF ENERGY Federal Energy Regulatory Commission SourceGas Distribution LLC; Notice of Compliance Filing Take notice that on April 30, 2012, SourceGas Distribution LLC (SourceGas) filed a revised Statement of Operating...

  9. 78 FR 56685 - SourceGas Distribution LLC; Notice of Application

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-13

    ... Energy Regulatory Commission SourceGas Distribution LLC; Notice of Application Take notice that on August 27, 2013, SourceGas Distribution LLC (SourceGas), 600 12th Street, Suite 300, Golden, Colorado 80401, filed in Docket No. CP13-540-000 an application pursuant to section 7(f) of the Natural Gas Act...

  10. Chemotaxis Increases the Residence Time Distribution of Bacteria in Granular Media Containing Distributed Contaminant Sources

    NASA Astrophysics Data System (ADS)

    Adadevoh, J.; Triolo, S.; Ramsburg, C. A.; Ford, R.

    2015-12-01

    The use of chemotactic bacteria in bioremediation has the potential to increase access to, and biotransformation of, contaminant mass within the subsurface environment. This laboratory-scale study aimed to understand and quantify the influence of chemotaxis on residence times of pollutant-degrading bacteria within homogeneous treatment zones. Focus was placed on a continuous flow sand-packed column system in which a uniform distribution of naphthalene crystals created distributed sources of dissolved phase contaminant. A 10 mL pulse of Pseudomonas putida G7, which is chemotactic to naphthalene, and Pseudomonas putida G7 Y1, a non-chemotactic mutant strain, were simultaneously introduced into the sand-packed column at equal concentrations. Breakthrough curves obtained for the bacteria from column experiments conducted with and without naphthalene were used to quantify the effect of chemotaxis on transport parameters. In the presence of the chemoattractant, longitudinal dispersivity of PpG7 increased by a factor of 3 and percent recovery decreased from 21% to 12%. The results imply that pore-scale chemotaxis responses are evident at an interstitial fluid velocity of 1.7 m/d, which is within the range of typical groundwater flow. Within the context of bioremediation, chemotaxis may work to enhance bacterial residence times in zones of contamination thereby improving treatment.

  11. Shielding analysis of proton therapy accelerators: a demonstration using Monte Carlo-generated source terms and attenuation lengths.

    PubMed

    Lai, Bo-Lun; Sheu, Rong-Jiun; Lin, Uei-Tyng

    2015-05-01

    Monte Carlo simulations are generally considered the most accurate method for complex accelerator shielding analysis. Simplified models based on point-source line-of-sight approximation are often preferable in practice because they are intuitive and easy to use. A set of shielding data, including source terms and attenuation lengths for several common targets (iron, graphite, tissue, and copper) and shielding materials (concrete, iron, and lead) were generated by performing Monte Carlo simulations for 100-300 MeV protons. Possible applications and a proper use of the data set were demonstrated through a practical case study, in which shielding analysis on a typical proton treatment room was conducted. A thorough and consistent comparison between the predictions of our point-source line-of-sight model and those obtained by Monte Carlo simulations for a 360° dose distribution around the room perimeter showed that the data set can yield fairly accurate or conservative estimates for the transmitted doses, except for those near the maze exit. In addition, this study demonstrated that appropriate coupling between the generated source term and empirical formulae for radiation streaming can be used to predict a reasonable dose distribution along the maze. This case study proved the effectiveness and advantage of applying the data set to a quick shielding design and dose evaluation for proton therapy accelerators. PMID:25811254

  12. Shielding analysis of proton therapy accelerators: a demonstration using Monte Carlo-generated source terms and attenuation lengths.

    PubMed

    Lai, Bo-Lun; Sheu, Rong-Jiun; Lin, Uei-Tyng

    2015-05-01

    Monte Carlo simulations are generally considered the most accurate method for complex accelerator shielding analysis. Simplified models based on point-source line-of-sight approximation are often preferable in practice because they are intuitive and easy to use. A set of shielding data, including source terms and attenuation lengths for several common targets (iron, graphite, tissue, and copper) and shielding materials (concrete, iron, and lead) were generated by performing Monte Carlo simulations for 100-300 MeV protons. Possible applications and a proper use of the data set were demonstrated through a practical case study, in which shielding analysis on a typical proton treatment room was conducted. A thorough and consistent comparison between the predictions of our point-source line-of-sight model and those obtained by Monte Carlo simulations for a 360° dose distribution around the room perimeter showed that the data set can yield fairly accurate or conservative estimates for the transmitted doses, except for those near the maze exit. In addition, this study demonstrated that appropriate coupling between the generated source term and empirical formulae for radiation streaming can be used to predict a reasonable dose distribution along the maze. This case study proved the effectiveness and advantage of applying the data set to a quick shielding design and dose evaluation for proton therapy accelerators.

  13. [Distribution and sources of arsenic in Yangzonghai Lake, China].

    PubMed

    Zhang, Yu-Xi; Xiang, Xiao-Ping; Zhang, Ying; Chen, Xi; Liu, Jing-Tao; Wang, Jin-Cui; Zhang, Yuan-Jing; Sun, Ji-Chao

    2012-11-01

    By collecting water and sediment samples from Yangzonghai Lake and analyzing the total amount and speciation of arsenic, the spatial distribution of arsenic in surface water and sediments was analyzed, the current status of arsenic pollution was assessed, and the anthropogenic contribution rate and the arsenic reserve in the lake were calculated. Meanwhile, the sources of arsenic were investigated. The results indicated that the total arsenic content in Yangzonghai Lake was 71.96-101.2 microg x L(-1) in April 2010, and increased slightly with depth. Dissolved arsenic content was 68.14-96.72 microg x L(-1), with As(III) accounting for 32%. The health risk level of arsenic in the water was 4.77 x 10(-4) - 6.66 x 10(-4) a(-1), posing a considerable threat to the surrounding environment. Arsenic content in sediments lay between 6.05 and 396.49 mg x kg(-1). In sediments at depths of 0-2, 2-4, 4-6, 6-8 and 8-10 cm, the average arsenic contents were 155.66, 52.01, 29.78, 19.22 and 17.52 mg x kg(-1), respectively. Arsenic in sediments at 0-2 cm had the highest accumulation degree, with the maximum geoaccumulation index reaching 5. At deeper depths, the accumulation degree of arsenic was significantly lower. The average contents of the seven arsenic fractions in the sediments, in descending order, were residual fraction, humic acid fraction, oxide fraction, strong organic fraction, ion exchange fraction, water soluble fraction and carbonate fraction. With increasing sediment depth, the percentage of bioavailable arsenic decreased, and the percentage of residual-fraction arsenic increased rapidly. The anthropogenic contribution rate of arsenic in sediments was highest at the 0-2 cm depth, with an average of 81.94%. This rate was much lower at deeper depths. Currently, the total arsenic reserve in the water and sediments of Yangzonghai Lake is 70.65 t, of which 82.68% was contributed by human activities. The phosphate fertilizer plant on the south bank made the

  14. Strategies for satellite-based monitoring of CO2 from distributed area and point sources

    NASA Astrophysics Data System (ADS)

    Schwandner, Florian M.; Miller, Charles E.; Duren, Riley M.; Natraj, Vijay; Eldering, Annmarie; Gunson, Michael R.; Crisp, David

    2014-05-01

    and sensor provides the full range of temporal sampling needed to characterize distributed area and point source emissions. For instance, point source emission patterns will vary with source strength, wind speed and direction. Because wind speed, direction and other environmental factors change rapidly, short term variabilities should be sampled. For detailed target selection and pointing verification, important lessons have already been learned and strategies devised during JAXA's GOSAT mission (Schwandner et al, 2013). The fact that competing spatial and temporal requirements drive satellite remote sensing sampling strategies dictates a systematic, multi-factor consideration of potential solutions. Factors to consider include vista, revisit frequency, integration times, spatial resolution, and spatial coverage. No single satellite-based remote sensing solution can address this problem for all scales. It is therefore of paramount importance for the international community to develop and maintain a constellation of atmospheric CO2 monitoring satellites that complement each other in their temporal and spatial observation capabilities: Polar sun-synchronous orbits (fixed local solar time, no diurnal information) with agile pointing allow global sampling of known distributed area and point sources like megacities, power plants and volcanoes with daily to weekly temporal revisits and moderate to high spatial resolution. Extensive targeting of distributed area and point sources comes at the expense of reduced mapping or spatial coverage, and the important contextual information that comes with large-scale contiguous spatial sampling. Polar sun-synchronous orbits with push-broom swath-mapping but limited pointing agility may allow mapping of individual source plumes and their spatial variability, but will depend on fortuitous environmental conditions during the observing period. These solutions typically have longer times between revisits, limiting their ability to resolve

  15. WATER QUALITY IN SOURCE WATER, TREATMENT, AND DISTRIBUTION SYSTEMS

    EPA Science Inventory

    Most drinking water utilities practice the multiple-barrier concept as the guiding principle for providing safe water. This chapter discusses multiple barriers as they relate to the basic criteria for selecting and protecting source waters, including known and potential sources ...

  16. Source Term Model for Vortex Generator Vanes in a Navier-Stokes Computer Code

    NASA Technical Reports Server (NTRS)

    Waithe, Kenrick A.

    2004-01-01

    A source term model for an array of vortex generators was implemented into a non-proprietary Navier-Stokes computer code, OVERFLOW. The source term models the side force created by a vortex generator vane. The model is obtained by introducing a side force to the momentum and energy equations that can adjust its strength automatically based on the local flow. The model was tested and calibrated by comparing data from numerical simulations and experiments of a single low profile vortex generator vane on a flat plate. In addition, the model was compared to experimental data of an S-duct with 22 co-rotating, low profile vortex generators. The source term model allowed a grid reduction of about seventy percent when compared with the numerical simulations performed on a fully gridded vortex generator on a flat plate without adversely affecting the development and capture of the vortex created. The source term model was able to predict the shape and size of the stream-wise vorticity and velocity contours very well when compared with both numerical simulations and experimental data. The peak vorticity and its location were also predicted very well when compared to numerical simulations and experimental data. The circulation predicted by the source term model matches the prediction of the numerical simulation. The source term model predicted the engine fan face distortion and total pressure recovery of the S-duct with 22 co-rotating vortex generators very well. The source term model allows a researcher to quickly investigate different locations of individual or a row of vortex generators. The researcher is able to conduct a preliminary investigation with minimal grid generation and computational time.
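
    Schematically, a vane source term model of this kind augments the discretized momentum and energy equations with a body force confined to the grid cells occupied by the vane. One generic way to write it (our notation, not necessarily the exact form implemented in OVERFLOW) is

        \[
        \frac{\partial(\rho\mathbf{u})}{\partial t}
          + \nabla\cdot\!\left(\rho\,\mathbf{u}\mathbf{u} + p\,\mathbf{I} - \boldsymbol{\tau}\right)
          = \mathbf{f}_{\mathrm{VG}},
        \qquad
        \frac{\partial(\rho E)}{\partial t}
          + \nabla\cdot\!\left[(\rho E + p)\,\mathbf{u} - \boldsymbol{\tau}\cdot\mathbf{u}\right]
          = \mathbf{f}_{\mathrm{VG}}\cdot\mathbf{u},
        \]

    with the magnitude of f_VG proportional to the local dynamic pressure and the vane area, and its direction set by the local velocity and the vane orientation, so that the modeled side force adapts automatically to the local flow as described above.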

  17. Distribution, sources and health risk assessment of mercury in kindergarten dust

    NASA Astrophysics Data System (ADS)

    Sun, Guangyi; Li, Zhonggen; Bi, Xiangyang; Chen, Yupeng; Lu, Shuangfang; Yuan, Xin

    2013-07-01

    Mercury (Hg) contamination in urban areas is a topical issue in environmental research. In this study, the distribution, sources and health risk of Hg in dust from 69 kindergartens in Wuhan, China, were investigated. In comparison with most other cities, the concentrations of total mercury (THg) and methylmercury (MeHg) were significantly elevated, ranging from 0.15 to 10.59 mg kg-1 and from 0.64 to 3.88 μg kg-1, respectively. Among the five different urban areas, the educational area had the highest concentrations of THg and MeHg. GIS mapping was used to identify the hot-spot areas and assess the potential pollution sources of Hg. The emissions of coal-fired power plants and coking plants were the main sources of THg in the dust, whereas the contributions of municipal solid waste (MSW) landfills and iron- and steel-smelting-related industries were not significant. However, the emission from MSW landfills was considered to be an important source of MeHg in the studied area. The health risk assessment indicated that, in terms of Hg contamination, the kindergarten dust posed a considerable adverse health risk to the children living in the educational area (hazard index (HI) = 6.89).
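
    For orientation, a hazard index of this kind is, in the usual non-carcinogenic screening framework, the sum of hazard quotients, each the ratio of an average daily dose for one exposure route to the corresponding reference dose. The sketch below shows only the dust-ingestion route; every numerical value is a generic placeholder, not an exposure factor or concentration from this study.

        # Hazard quotient / hazard index sketch for ingestion of Hg-contaminated dust.
        # HQ = ADD / RfD; HI = sum of HQs over exposure routes (only ingestion shown).
        # All numbers below are illustrative placeholders.

        def add_ingestion(conc_mg_kg, ing_rate_mg_day, ef_days_yr, ed_yr, bw_kg, at_days):
            """Average daily dose (mg per kg body weight per day) via dust ingestion."""
            return (conc_mg_kg * ing_rate_mg_day * 1e-6 * ef_days_yr * ed_yr) / (bw_kg * at_days)

        conc = 5.0          # mg Hg per kg dust (placeholder)
        rfd_ing = 3.0e-4    # reference dose for ingestion, mg/(kg*day) (placeholder)
        add = add_ingestion(conc, ing_rate_mg_day=200.0, ef_days_yr=180.0,
                            ed_yr=6.0, bw_kg=15.0, at_days=6.0 * 365.0)
        hq = add / rfd_ing
        print(f"ADD = {add:.2e} mg/(kg*day), HQ = {hq:.2f}")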

  18. Observation-based source terms in the third-generation wave model WAVEWATCH

    NASA Astrophysics Data System (ADS)

    Zieger, Stefan; Babanin, Alexander V.; Erick Rogers, W.; Young, Ian R.

    2015-12-01

    Measurements collected during the AUSWEX field campaign, at Lake George (Australia), resulted in new insights into the processes of wind wave interaction and whitecapping dissipation, and consequently new parameterizations of the input and dissipation source terms. The new nonlinear wind input term developed accounts for dependence of the growth on wave steepness, airflow separation, and for negative growth rate under adverse winds. The new dissipation terms feature the inherent breaking term, a cumulative dissipation term and a term due to production of turbulence by waves, which is particularly relevant for decaying seas and for swell. The latter is consistent with the observed decay rate of ocean swell. This paper describes these source terms implemented in WAVEWATCH III ®and evaluates the performance against existing source terms in academic duration-limited tests, against buoy measurements for windsea-dominated conditions, under conditions of extreme wind forcing (Hurricane Katrina), and against altimeter data in global hindcasts. Results show agreement by means of growth curves as well as integral and spectral parameters in the simulations and hindcast.
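
    For context, WAVEWATCH III solves a balance equation for the wave action density spectrum N = F/sigma, and parameterizations of the kind evaluated above enter through the source terms on its right-hand side; schematically,

        \[
        \frac{\partial N}{\partial t}
          + \nabla_{x}\cdot(\dot{\mathbf{x}}\,N)
          + \frac{\partial}{\partial k}(\dot{k}\,N)
          + \frac{\partial}{\partial \theta}(\dot{\theta}\,N)
          = \frac{S_{in} + S_{nl} + S_{ds}}{\sigma},
        \]

    where S_in is the wind input, S_nl the nonlinear four-wave interactions, and S_ds the dissipation (here comprising the inherent breaking, cumulative, and wave-turbulence production terms described above); additional shallow-water source terms are omitted in this sketch.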

  19. Acetone in the atmosphere: Distribution, sources, and sinks

    NASA Technical Reports Server (NTRS)

    Singh, H. B.; O'Hara, D.; Herlth, D.; Sachse, W.; Blake, D. R.; Bradshaw, J. D.; Kanakidou, M.; Crutzen, P. J.

    1994-01-01

    Acetone (CH3COCH3) was found to be the dominant nonmethane organic species present in the atmosphere sampled primarily over eastern Canada (0-6 km, 35 deg-65 deg N) during ABLE3B (July to August 1990). A concentration range of 357 to 2310 ppt (= 10(exp -12) v/v) with a mean value of 1140 +/- 413 ppt was measured. Under extremely clean conditions, generally involving Arctic flows, lowest (background) mixing ratios of 550 +/- 100 ppt were present in much of the troposphere studied. Correlations between atmospheric mixing ratios of acetone and select species such as C2H2, CO, C3H8, C2Cl4 and isoprene provided important clues to its possible sources and to the causes of its atmospheric variability. Biomass burning as a source of acetone has been identified for the first time. By using atmospheric data and three-dimensional photochemical models, a global acetone source of 40-60 Tg (= 10(exp 12) g)/yr is estimated to be present. Secondary formation from the atmospheric oxidation of precursor hydrocarbons (principally propane, isobutane, and isobutene) provides the single largest source (51%). The remainder is attributable to biomass burning (26%), direct biogenic emissions (21%), and primary anthropogenic emissions (3%). Atmospheric removal of acetone is estimated to be due to photolysis (64%), reaction with OH radicals (24%), and deposition (12%). Model calculations also suggest that acetone photolysis contributed significantly to PAN formation (100-200 ppt) in the middle and upper troposphere of the sampled region and may be important globally. While the source-sink equation appears to be roughly balanced, much more atmospheric and source data, especially from the southern hemisphere, are needed to reliably quantify the atmospheric budget of acetone.

  20. A study of numerical methods for hyperbolic conservation laws with stiff source terms

    NASA Technical Reports Server (NTRS)

    Leveque, R. J.; Yee, H. C.

    1988-01-01

    The proper modeling of nonequilibrium gas dynamics is required in certain regimes of hypersonic flow. For inviscid flow this gives a system of conservation laws coupled with source terms representing the chemistry. Often a wide range of time scales is present in the problem, leading to numerical difficulties as in stiff systems of ordinary differential equations. Stability can be achieved by using implicit methods, but other numerical difficulties are observed. The behavior of typical numerical methods on a simple advection equation with a parameter-dependent source term was studied. Two approaches to incorporate the source term were utilized: MacCormack type predictor-corrector methods with flux limiters, and splitting methods in which the fluid dynamics and chemistry are handled in separate steps. Various comparisons over a wide range of parameter values were made. In the stiff case where the solution contains discontinuities, incorrect numerical propagation speeds are observed with all of the methods considered. This phenomenon is studied and explained.
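
    A minimal illustration of the splitting approach discussed above: the model problem in this line of work is a linear advection equation with a stiff relaxation source term, u_t + u_x = -mu*u*(u-1)*(u-1/2). The sketch below alternates a first-order upwind advection step with a pointwise backward-Euler source step solved by Newton iteration; parameters are illustrative only and this is not the authors' exact scheme.

        # Fractional-step (splitting) treatment of u_t + u_x = -mu*u*(u-1)*(u-0.5).
        # Upwind advection step followed by an implicit (backward Euler + Newton) source step.
        import numpy as np

        def advect_upwind(u, c):
            """One upwind step for u_t + u_x = 0 with CFL number c (0 < c <= 1), periodic BCs."""
            return u - c * (u - np.roll(u, 1))

        def source_step(u, mu, dt, iters=8):
            """Backward Euler for u' = -mu*u*(u-1)*(u-0.5), solved pointwise by Newton."""
            v = u.copy()
            for _ in range(iters):
                f = v + dt * mu * v * (v - 1.0) * (v - 0.5) - u
                df = 1.0 + dt * mu * (3.0 * v**2 - 3.0 * v + 0.5)
                v -= f / df
            return v

        nx, cfl, mu = 200, 0.8, 1000.0            # large mu makes the source stiff
        x = np.linspace(0.0, 1.0, nx, endpoint=False)
        dt = cfl * (x[1] - x[0])
        u = np.where(x < 0.3, 1.0, 0.0)           # step initial data (a travelling front)
        for _ in range(100):
            u = source_step(advect_upwind(u, cfl), mu, dt)
        print("front location after 100 steps:", x[np.argmin(np.abs(u - 0.5))])

    In the stiff regime this simple scheme reproduces the pathology discussed above: numerical dissipation smears the front, the source drives the smeared values back to equilibrium, and the discontinuity can propagate at the wrong speed.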

  1. 30 CFR 872.12 - Where do moneys distributed from the Fund and other sources go?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 3 2011-07-01 2011-07-01 false Where do moneys distributed from the Fund and other sources go? 872.12 Section 872.12 Mineral Resources OFFICE OF SURFACE MINING RECLAMATION AND... AND INDIAN TRIBES § 872.12 Where do moneys distributed from the Fund and other sources go? (a)...

  2. 30 CFR 872.12 - Where do moneys distributed from the Fund and other sources go?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 3 2010-07-01 2010-07-01 false Where do moneys distributed from the Fund and other sources go? 872.12 Section 872.12 Mineral Resources OFFICE OF SURFACE MINING RECLAMATION AND... AND INDIAN TRIBES § 872.12 Where do moneys distributed from the Fund and other sources go? (a)...

  3. 77 FR 10490 - SourceGas Distribution LLC; Notice of Filing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-22

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF ENERGY Federal Energy Regulatory Commission SourceGas Distribution LLC; Notice of Filing Take notice that on February 14, 2012, SourceGas Distribution LLC submitted a revised baseline filing of their Statement of...

  4. 16 CFR Table 4 to Part 1512 - Relative Energy Distribution of Sources

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 16 Commercial Practices 2 2012-01-01 2012-01-01 false Relative Energy Distribution of Sources 4... SUBSTANCES ACT REGULATIONS REQUIREMENTS FOR BICYCLES Pt. 1512, Table 4 Table 4 to Part 1512—Relative Energy Distribution of Sources Wave length (nanometers) Relative energy 380 9.79 390 12.09 400 14.71 410 17.68 420...

  5. 16 CFR Table 4 to Part 1512 - Relative Energy Distribution of Sources

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 16 Commercial Practices 2 2013-01-01 2013-01-01 false Relative Energy Distribution of Sources 4... SUBSTANCES ACT REGULATIONS REQUIREMENTS FOR BICYCLES Pt. 1512, Table 4 Table 4 to Part 1512—Relative Energy Distribution of Sources Wave length (nanometers) Relative energy 380 9.79 390 12.09 400 14.71 410 17.68 420...

  6. 16 CFR Table 4 to Part 1512 - Relative Energy Distribution of Sources

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 16 Commercial Practices 2 2014-01-01 2014-01-01 false Relative Energy Distribution of Sources 4... SUBSTANCES ACT REGULATIONS REQUIREMENTS FOR BICYCLES Pt. 1512, Table 4 Table 4 to Part 1512—Relative Energy Distribution of Sources Wave length (nanometers) Relative energy 380 9.79 390 12.09 400 14.71 410 17.68 420...

  7. 16 CFR Table 4 to Part 1512 - Relative Energy Distribution of Sources

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 16 Commercial Practices 2 2010-01-01 2010-01-01 false Relative Energy Distribution of Sources 4... SUBSTANCES ACT REGULATIONS REQUIREMENTS FOR BICYCLES Pt. 1512, Table 4 Table 4 to Part 1512—Relative Energy Distribution of Sources Wave length (nanometers) Relative energy 380 9.79 390 12.09 400 14.71 410 17.68 420...

  8. 16 CFR Table 4 to Part 1512 - Relative Energy Distribution of Sources

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 16 Commercial Practices 2 2011-01-01 2011-01-01 false Relative Energy Distribution of Sources 4... SUBSTANCES ACT REGULATIONS REQUIREMENTS FOR BICYCLES Pt. 1512, Table 4 Table 4 to Part 1512—Relative Energy Distribution of Sources Wave length (nanometers) Relative energy 380 9.79 390 12.09 400 14.71 410 17.68 420...

  9. Uncertainties associated with the definition of a hydrologic source term for the Nevada Test Site

    SciTech Connect

    Smith, D.K.; Esser, B.K.; Thompson, J.L.

    1995-05-01

    The U.S. Department of Energy, Nevada Operations Office (DOE/NV) Environmental Restoration Division is seeking to evaluate groundwater contamination resulting from 30 years of underground nuclear testing at the Nevada Test Site (NTS). This evaluation requires knowledge about what radioactive materials are in the groundwater and how they are transported through the underground environment. This information coupled with models of groundwater flow (flow paths and flow rates) will enable predictions of the arrival of each radionuclide at a selected receptor site. Risk assessment models will then be used to calculate the expected environmental and human doses. The accuracy of our predictions depends on the validity of our hydrologic and risk assessment models and on the quality of the data for radionuclide concentrations in ground water at each underground nuclear test site. This paper summarizes what we currently know about radioactive material in NTS groundwater and suggests how we can best use our limited knowledge to proceed with initial modeling efforts. The amount of a radionuclide available for transport in groundwater at the site of an underground nuclear test is called the hydrologic source term. The radiologic source term is the total amount of residual radionuclides remaining after an underground nuclear test. The hydrologic source term is smaller than the radiologic source term because some or most of the radionuclide residual cannot be transported by groundwater. The radiologic source term has been determined for each of the underground nuclear tests fired at the NTS; however, the hydrologic source term has been estimated from measurements at only a few sites.

  10. The planetary distribution of heat sources and sinks during FGGE

    NASA Technical Reports Server (NTRS)

    Johnson, D. R.; Wei, M. Y.

    1985-01-01

    Heating distributions from analyses of the National Meteorological Center and European Centre for Medium-Range Weather Forecasts data sets, the methods used and problems involved in the inference of diabatic heating, the relationship between differential heating and energy transport, and recommendations on the inference of heat sources and heat sinks at the planetary scale are discussed.

  11. Monitoring Design for Source Identification in Water Distribution Systems

    EPA Science Inventory

    The design of sensor networks for the purpose of monitoring for contaminants in water distribution systems is currently an active area of research. Much of the effort has been directed at the contamination detection problem and the expression of public health protection objective...

  12. The long-term problems of contaminated land: Sources, impacts and countermeasures

    SciTech Connect

    Baes, C.F. III

    1986-11-01

    This report examines the various sources of radiological land contamination; its extent; its impacts on man, agriculture, and the environment; countermeasures for mitigating exposures; radiological standards; alternatives for achieving land decontamination and cleanup; and possible alternatives for utilizing the land. The major potential sources of extensive long-term land contamination with radionuclides, in order of decreasing extent, are nuclear war, detonation of a single nuclear weapon (e.g., a terrorist act), serious reactor accidents, and nonfission nuclear weapons accidents that disperse the nuclear fuels (termed ''broken arrows'').

  13. Design parameters and source terms: Volume 1, Design parameters: Revision 0

    SciTech Connect

    Not Available

    1987-10-01

    The Design Parameters and Source Terms Document was prepared in accordance with DOE request and to provide data for the environmental impact study to be performed in the future for the Deaf Smith County, Texas site for a nuclear waste repository in salt. This document updates a previous unpublished report by Stearns Catalytic Corporation (SCC), entitled ''Design Parameters and Source Terms for a Two-Phase Repository in Salt,'' 1985, to the level of the Site Characterization Plan - Conceptual Design Report. The previous unpublished SCC Study identifies the data needs for the Environmental Assessment effort for seven possible Salt Repository sites.

  14. A study of numerical methods for hyperbolic conservation laws with stiff source terms

    NASA Technical Reports Server (NTRS)

    Leveque, R. J.; Yee, H. C.

    1990-01-01

    In the present study of the behavior of typical numerical methods in the case of a model advection equation having a parameter-dependent source term, two approaches to the incorporation of the source terms are used: MacCormack-type predictor-corrector methods with flux limiters, and splitting methods in which the fluid dynamics and chemistry are handled in separate steps. The latter are found to perform slightly better. The model scalar equation is used to show that the incorrectness of the propagation speeds of discontinuities observed in the stiff case is due to the introduction of nonequilibrium values through numerical dissipation in the advection step.

  15. The Fukushima releases: an inverse modelling approach to assess the source term by using gamma dose rate observations

    NASA Astrophysics Data System (ADS)

    Saunier, Olivier; Mathieu, Anne; Didier, Damien; Tombette, Marilyne; Quélo, Denis; Winiarek, Victor; Bocquet, Marc

    2013-04-01

    The Chernobyl nuclear accident and, more recently, the Fukushima accident highlighted that the largest source of error in consequence assessment is the source term estimation, including the time evolution of the release rate and its distribution between radioisotopes. Inverse modelling methods have proved to be efficient for assessing the source term in accidental situations (Gudiksen, 1989; Krysta and Bocquet, 2007; Stohl et al., 2011; Winiarek et al., 2012). These methods combine environmental measurements and atmospheric dispersion models. They have recently been applied to the Fukushima accident. Most existing approaches are designed to use air sampling measurements (Winiarek et al., 2012) and some of them also use deposition measurements (Stohl et al., 2012; Winiarek et al., 2013). During the Fukushima accident, such measurements were far less numerous and not as well distributed within Japan as the dose rate measurements. To efficiently document the evolution of the contamination, gamma dose rate measurements were numerous, well distributed within Japan, and offered a high temporal frequency. However, dose rate data are not as easy to use as air sampling measurements, and until now they have not been used in inverse modelling approaches. Indeed, dose rate data result from all the gamma emitters present in the ground and in the atmosphere in the vicinity of the receptor. They do not allow one to determine the isotopic composition or to distinguish the plume contribution from wet deposition. The presented approach proposes a way to use dose rate measurements in an inverse modelling approach without the need for a priori information on emissions. The method proved to be efficient and reliable when applied to the Fukushima accident. The emissions of the 8 main isotopes Xe-133, Cs-134, Cs-136, Cs-137, Ba-137m, I-131, I-132 and Te-132 have been assessed. The Daiichi power plant events (such as ventings, explosions…) known to have caused atmospheric releases are well identified in
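
    Methodologically, inverse source term estimation of this kind is usually cast as a regularized linear least-squares problem: the unknown release rates sigma (discretized in time, per isotope) are related to the observations y through a source-receptor matrix H computed with the dispersion (and, for dose rates, dose conversion) model, and one minimizes ||y - H sigma||^2 plus a regularization term, often under a non-negativity constraint. The sketch below uses entirely synthetic H, y and weights; it illustrates the generic formulation, not the operational setup of this study.

        # Tikhonov-regularized, non-negative inverse estimation of a release-rate time series:
        #   minimize ||y - H sigma||^2 + alpha * ||sigma||^2   subject to sigma >= 0,
        # solved with projected gradient descent. All inputs are synthetic placeholders.
        import numpy as np

        rng = np.random.default_rng(1)
        n_obs, n_times = 120, 24
        H = np.abs(rng.normal(size=(n_obs, n_times))) * 1e-3        # synthetic source-receptor matrix
        true_sigma = np.zeros(n_times)
        true_sigma[6:10] = 5.0e8                                    # synthetic release pulse (Bq/h)
        y = H @ true_sigma * (1.0 + 0.05 * rng.normal(size=n_obs))  # noisy synthetic observations

        alpha = 1e-4 * np.trace(H.T @ H) / n_times                  # crude regularization weight
        A = H.T @ H + alpha * np.eye(n_times)
        b = H.T @ y
        sigma = np.zeros(n_times)
        step = 1.0 / np.linalg.norm(A, 2)                           # safe gradient step size
        for _ in range(5000):
            sigma = np.maximum(0.0, sigma - step * (A @ sigma - b))  # projected gradient step
        print("retrieved total release:", sigma.sum(), " true:", true_sigma.sum())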

  16. Acoustic Source Localization via Distributed Sensor Networks using Tera-scale Optical-Core Devices

    SciTech Connect

    Imam, Neena; Barhen, Jacob; Wardlaw, Michael

    2008-01-01

    For real-time acoustic source localization applications, one of the primary challenges is the considerable growth in computational complexity associated with the emergence of ever larger, active or passive, distributed sensor networks. The complexity of the calculations needed to achieve accurate source localization increases dramatically with the size of sensor arrays, resulting in substantial growth of computational requirements that cannot be met with standard hardware. One option to meet this challenge builds upon the emergence of digital optical-core devices. The objective of this work was to explore the implementation of key building block algorithms used in underwater source localization on an optical-core digital processing platform recently introduced by Lenslet Inc. The authors investigate key concepts of threat-detection algorithms such as Time Difference Of Arrival (TDOA) estimation via sensor data correlation in the time domain, with the purpose of implementation on the optical-core processor. They illustrate their results with the aid of numerical simulation and actual optical hardware runs. The major accomplishments of this research, in terms of computational speedup and numerical accuracy achieved via the deployment of optical processing technology, should be of substantial interest to the acoustic signal processing community.
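
    The TDOA building block mentioned above is, in its simplest time-domain form, the lag that maximizes the cross-correlation between two sensor channels. A minimal sketch with a synthetic delayed transient follows; the sampling rate, waveform and delay are arbitrary choices for illustration.

        # Time-difference-of-arrival (TDOA) estimate via time-domain cross-correlation.
        # Synthetic example: the same transient arrives at sensor B 4 ms after sensor A.
        import numpy as np

        fs = 2000.0                                       # sampling rate (Hz), arbitrary
        t = np.arange(0, 1.0, 1.0 / fs)
        rng = np.random.default_rng(0)
        pulse = np.exp(-((t - 0.3) / 0.01) ** 2) * np.sin(2 * np.pi * 400 * t)
        delay_samples = 8                                  # 4 ms at 2 kHz
        sig_a = pulse + 0.05 * rng.normal(size=t.size)
        sig_b = np.roll(pulse, delay_samples) + 0.05 * rng.normal(size=t.size)

        corr = np.correlate(sig_b, sig_a, mode="full")     # correlation over all lags
        lags = np.arange(-t.size + 1, t.size)
        tdoa = lags[np.argmax(corr)] / fs
        print(f"estimated TDOA: {tdoa * 1e3:.2f} ms (true 4.00 ms)")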

  17. An Alternative Treatment of Trace Chemical Constituents in Calculated Chemical Source Terms for Hanford Tank Farms Safety Analyses

    SciTech Connect

    Huckaby, James L.

    2006-09-26

    Hanford Site high-level radioactive waste tank accident analyses require chemical waste toxicity source terms to assess potential accident consequences. Recent reviews of the current methodology used to generate source terms, and the need to periodically update the source terms, have brought scrutiny to the manner in which trace waste constituents are included in the source terms. This report examines the importance of trace constituents to the chemical waste source terms, which are calculated as sums of fractions (SOFs), and recommends three changes to the manner in which trace constituents are included in the calculation of SOFs.
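
    For reference, a sum of fractions of this kind is simply SOF = sum_i C_i / L_i, the ratio of each constituent's concentration in the release to its toxicological limit, summed over constituents; a value above 1 indicates that the composite limit is exceeded. A minimal sketch follows; the constituents, concentrations and limits are placeholder values, not Hanford waste data.

        # Sum-of-fractions (SOF) chemical toxicity screening: SOF = sum_i C_i / L_i.
        # Concentrations and limits below are illustrative placeholders only.

        def sum_of_fractions(concentrations_mg_m3, limits_mg_m3):
            return sum(concentrations_mg_m3[k] / limits_mg_m3[k] for k in concentrations_mg_m3)

        conc  = {"ammonia": 12.0, "nitrous_oxide": 30.0, "trace_organic_x": 0.02}
        limit = {"ammonia": 35.0, "nitrous_oxide": 90.0, "trace_organic_x": 0.5}

        sof = sum_of_fractions(conc, limit)
        print(f"SOF = {sof:.2f}  ->  {'exceeds' if sof > 1.0 else 'within'} the composite limit")

    How trace constituents (the small C_i terms) are carried in such a sum is exactly the question the report addresses.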

  18. Using natural archives to track sources and long-term trends of pollution: an introduction

    USGS Publications Warehouse

    Jules Blais,; Rosen, Michael R.; John Smol,

    2015-01-01

    This book explores the myriad ways that environmental archives can be used to study the distribution and long-term trajectories of contaminants. The volume first focuses on reviews that examine the integrity of the historic record, including factors related to hydrology, post-depositional diffusion, and mixing processes. This is followed by a series of chapters dealing with the diverse archives available for long-term studies of environmental pollution.

  19. The distribution and source of boulders on asteroid 4179 Toutatis

    NASA Astrophysics Data System (ADS)

    Jiang, Yun; Ji, Jianghui; Huang, Jiangchuan; Marchi, Simone; Li, Yuan; Ip, Wing-Huen

    2016-01-01

    Boulders are ubiquitous on the surfaces of asteroids, and their spatial and size distributions provide information on the geological evolution and collisional history of parent bodies. We identify more than 200 boulders on near-Earth asteroid 4179 Toutatis based on images obtained during the Chang'e-2 flyby. The cumulative boulder size frequency distribution (SFD) gives a power index of -4.4 +/- 0.1, which is clearly steeper than those of boulders on Itokawa and Eros, indicating a much higher degree of fragmentation. Correlation analyses with craters suggest that most boulders cannot be produced solely by cratering, but are probably surviving fragments from the parent body of Toutatis, accreted after its breakup. Similar to Itokawa, Toutatis probably has a rubble-pile structure, but has a different preservation state of boulders.

  20. Atomic clouds as distributed sources for the Io plasma torus.

    PubMed

    Brown, R A; Ip, W H

    1981-09-25

    Several recent developments have implications for the neutral particle environment of Jupiter. Very hot sulfur ions have been detected in the Io torus with gyrospeeds comparable to the corotation speed, a phenomenon that would result from a neutral sulfur cloud. Current evidence supports the hypothesis that extensive neutral clouds of oxygen and sulfur exist in the Jupiter magnetosphere and that they are important sources of ions and energy for the Io torus.

  1. Optimal source codes for geometrically distributed integer alphabets

    NASA Technical Reports Server (NTRS)

    Gallager, R. G.; Van Voorhis, D. C.

    1975-01-01

    An approach is shown for using the Huffman algorithm indirectly to prove the optimality of a code for an infinite alphabet if an estimate concerning the nature of the code can be made. Attention is given to nonnegative integers with a geometric probability assignment. The particular distribution considered arises in run-length coding and in encoding protocol information in data networks. Questions of redundancy of the optimal code are also investigated.
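
    The optimal prefix codes for geometrically distributed nonnegative integers established by this line of work are Golomb codes, the family used in run-length coding. The sketch below is a generic Golomb encoder for illustration, not code from the paper; the example parameter m = 3 for theta = 0.8 follows the usual optimality condition theta^m + theta^(m+1) <= 1 < theta^(m-1) + theta^m.

        def golomb_encode(n: int, m: int) -> str:
            """Golomb code for nonnegative integer n with parameter m.
            Quotient in unary, remainder in truncated binary."""
            q, r = divmod(n, m)
            out = "1" * q + "0"                      # unary part
            b = m.bit_length()
            if m & (m - 1) == 0:                     # m is a power of two: plain binary (Rice code)
                return out + format(r, f"0{b - 1}b") if m > 1 else out
            cutoff = (1 << b) - m                    # truncated binary remainder
            if r < cutoff:
                return out + format(r, f"0{b - 1}b")
            return out + format(r + cutoff, f"0{b}b")

        # Example: for a geometric distribution with theta = 0.8 the optimal
        # Golomb parameter is m = 3.
        print([golomb_encode(n, 3) for n in range(6)])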

  2. Experimental Investigation and 3D Finite Element Prediction of Temperature Distribution during Travelling Heat Sourced from Oxyacetylene Flame

    NASA Astrophysics Data System (ADS)

    Umar Alkali, Adam; Lenggo Ginta, Turnad; Majdi Abdul-Rani, Ahmad

    2015-04-01

    This paper presents 3D transient finite element modelling of the workpiece temperature field produced by a travelling heat source from an oxyacetylene flame. The proposed model corresponds to a preheat-only test applicable to thermally enhanced machining with the oxyacetylene flame as the heat source. The FEA model as well as the experimental test investigated the surface temperature distribution on 316L stainless steel at scanning speeds of 100 mm/min, 125 mm/min, 160 mm/min, 200 mm/min and 250 mm/min. The heat source parameters held constant are: lead distance Ld = 10 mm, focus height Fh = 7.5 mm, oxygen gas pressure Poxy = 15 psi and acetylene gas pressure Pacty = 25 psi. Experimental validation of the temperature field induced on type 316L stainless steel reveals that the temperature increases as the travelling speed decreases.

  3. Occurrence of arsenic contamination in Canada: sources, behavior and distribution.

    PubMed

    Wang, Suiling; Mulligan, Catherine N

    2006-08-01

    Recently there have been increasing concerns about arsenic-related problems. Occurrence of arsenic contamination has been reported worldwide. In Canada, the main natural arsenic sources are weathering and erosion of arsenic-containing rocks and soil, while tailings from historic and recent gold mine operations and wood preservative facilities are the principal anthropogenic sources. Across Canada, the 24-h average concentration of arsenic in the atmosphere is generally less than 0.3 microg/m3. Arsenic concentrations in natural uncontaminated soil and sediments range from 4 to 150 mg/kg. In uncontaminated surface and ground waters, the arsenic concentration ranges from 0.001 to 0.005 mg/L. As a result of anthropogenic inputs, elevated arsenic levels, ten to a thousand times above the Interim Maximum Acceptable Concentration (IMAC), have been reported in air, soil and sediment, surface water and groundwater, and biota in several regions. Most of this arsenic is in toxic inorganic forms. It is critical to recognize that such contamination imposes serious harmful effects on various aquatic and terrestrial organisms and, ultimately, on human health. Serious incidences of acute and chronic arsenic poisoning have been revealed. Through examination of the available literature and screening and selection of existing data, this paper provides an analysis of the currently available information on recognized problem areas, and an overview of current knowledge of the principal hydrogeochemical processes of arsenic transport and transformation. However, a more detailed understanding of local sources of arsenic and mechanisms of arsenic release is required. More extensive studies will be required for building practical guidance on avoiding and reducing arsenic contamination. Bioremediation and hyperaccumulation are emerging innovative technologies for the remediation of arsenic contaminated sites. Natural attenuation may be utilized as a potential in situ remedial option. Further

  4. High resolution stationary digital breast tomosynthesis using distributed carbon nanotube x-ray source array

    PubMed Central

    Qian, Xin; Tucker, Andrew; Gidcumb, Emily; Shan, Jing; Yang, Guang; Calderon-Colon, Xiomara; Sultana, Shabana; Lu, Jianping; Zhou, Otto; Spronk, Derrek; Sprenger, Frank; Zhang, Yiheng; Kennedy, Don; Farbizio, Tom; Jing, Zhenxue

    2012-01-01

    binning, the projection resolution along the scanning direction increased from 4.0 cycles/mm [at 10% modulation-transfer-function (MTF)] in DBT to 5.1 cycles/mm in s-DBT at a magnification factor of 1.08. The improvement is more pronounced for faster scanning speeds, wider angular coverage, and smaller detector pixel sizes. The scanning speed depends on the detector, the number of views, and the imaging dose. With 240 ms detector readout time, the s-DBT system scanning time is 6.3 s for a 15-view, 100 mAs scan regardless of the angular coverage. The scanning speed can be reduced to less than 4 s when detectors become faster. Initial phantom studies showed good quality reconstructed images. Conclusions: A prototype s-DBT scanner has been developed and evaluated by retrofitting the Selenia rotating gantry DBT scanner with a spatially distributed CNT x-ray source array. Preliminary results show that it improves system spatial resolution substantially by eliminating image blur due to x-ray focal spot motion. The scanning speed of the s-DBT system is independent of angular coverage and can be increased with a faster detector without image degradation. The accelerated lifetime measurement demonstrated the long-term stability of the CNT x-ray source array, with typical clinical operation lifetime over 3 years. PMID:22482630

  5. High resolution stationary digital breast tomosynthesis using distributed carbon nanotube x-ray source array

    SciTech Connect

    Qian Xin; Tucker, Andrew; Gidcumb, Emily; Shan Jing; Yang Guang; Calderon-Colon, Xiomara; Sultana, Shabana; Lu Jianping; Zhou, Otto; Spronk, Derrek; Sprenger, Frank; Zhang Yiheng; Kennedy, Don; Farbizio, Tom; Jing Zhenxue

    2012-04-15

    , the projection resolution along the scanning direction increased from 4.0 cycles/mm [at 10% modulation-transfer-function (MTF)] in DBT to 5.1 cycles/mm in s-DBT at a magnification factor of 1.08. The improvement is more pronounced for faster scanning speeds, wider angular coverage, and smaller detector pixel sizes. The scanning speed depends on the detector, the number of views, and the imaging dose. With 240 ms detector readout time, the s-DBT system scanning time is 6.3 s for a 15-view, 100 mAs scan regardless of the angular coverage. The scanning speed can be reduced to less than 4 s when detectors become faster. Initial phantom studies showed good quality reconstructed images. Conclusions: A prototype s-DBT scanner has been developed and evaluated by retrofitting the Selenia rotating gantry DBT scanner with a spatially distributed CNT x-ray source array. Preliminary results show that it improves system spatial resolution substantially by eliminating image blur due to x-ray focal spot motion. The scanning speed of the s-DBT system is independent of angular coverage and can be increased with a faster detector without image degradation. The accelerated lifetime measurement demonstrated the long-term stability of the CNT x-ray source array, with typical clinical operation lifetime over 3 years.

  6. Parameterization of unresolved obstacles in wave modelling: A source term approach

    NASA Astrophysics Data System (ADS)

    Mentaschi, L.; Pérez, J.; Besio, G.; Mendez, F. J.; Menendez, M.

    2015-12-01

    In the present work we introduce two source terms for the parameterization of energy dissipation due to unresolved obstacles in spectral wave models. The proposed approach differs from the classical one based on spatial propagation schemes because it provides a local representation of phenomena such as unresolved wave energy dissipation. This source-term-based approach has the advantage of decoupling the unresolved-obstacle parameterization from the spatial propagation scheme, so that the parameterization does not need to be reformulated, reimplemented and revalidated for each propagation scheme. Furthermore, it opens the way to parameterizations of other unresolved sheltering effects, such as rotation and redistribution of wave energy over frequencies. The proposed source terms estimate, respectively, the local energy dissipation and the shadow effect due to unresolved obstacles. The source terms were validated through synthetic case studies, showing their ability to reproduce wave dynamics comparable to those of high-resolution models. The analysis of high-resolution stationary wave simulations may help to better diagnose and study the effects of unresolved obstacles, providing estimates of transparency coefficients for each spectral component and allowing the unresolved effects of rotation and redistribution of wave energy over frequencies to be understood and modelled.
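
    To make the idea of a local, source-term-based obstacle parameterization concrete, the sketch below applies a per-cell energy sink driven by a transparency coefficient for each spectral component. It is only an illustrative approximation under stated assumptions (given transparency coefficients alpha, an explicit time step, and a simple crossing-fraction scaling); it is not the formulation used in the paper.

        import numpy as np

        def unresolved_obstacle_sink(E, alpha, cg, dx, dt):
            """Local sink mimicking wave energy lost to unresolved obstacles.

            E     : energy spectrum in one grid cell, shape (n_freq, n_dir)
            alpha : transparency per spectral component in [0, 1]
                    (1 = fully transparent, 0 = fully blocking); assumed known
            cg    : group velocity per frequency [m/s], shape (n_freq,)
            dx    : grid cell size [m]
            dt    : time step [s]
            """
            crossing = np.clip(cg[:, None] * dt / dx, 0.0, 1.0)   # fraction of the cell crossed per step
            S = -(1.0 - alpha) * crossing * E / dt                # local source term dE/dt
            return E + S * dt

        # Tiny usage example with 2 frequencies and 3 directions.
        E = np.ones((2, 3))
        alpha = np.array([[0.9, 0.5, 1.0], [0.9, 0.5, 1.0]])
        print(unresolved_obstacle_sink(E, alpha, cg=np.array([5.0, 8.0]), dx=1000.0, dt=60.0))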

  7. ACT-ARA: Code System for the Calculation of Changes in Radiological Source Terms with Time

    1988-02-01

    The program calculates the source term activity as a function of time for parent isotopes as well as daughters. Also, at each time, the "probable release" is produced. Finally, the program determines the time integrated probable release for each isotope over the time period of interest.
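
    The parent/daughter bookkeeping such a code performs can be illustrated with the two-member Bateman solution below. This is a minimal sketch of the underlying physics, not ACT-ARA code; the example nuclides and initial activity are illustrative.

        import numpy as np

        def parent_daughter_activity(A1_0, lam1, lam2, t):
            """Activities of a parent/daughter pair (two-member Bateman solution).

            A1_0 : initial parent activity [Bq], no daughter present at t = 0
            lam1, lam2 : decay constants [1/s] of parent and daughter (lam1 != lam2)
            t    : time array [s]
            """
            A1 = A1_0 * np.exp(-lam1 * t)
            A2 = A1_0 * lam2 / (lam2 - lam1) * (np.exp(-lam1 * t) - np.exp(-lam2 * t))
            return A1, A2

        # Example: Te-132 (half-life ~3.2 d) decaying to I-132 (~2.3 h), initial 1e12 Bq.
        lam = lambda t_half: np.log(2.0) / t_half
        t = np.linspace(0.0, 10 * 86400.0, 200)
        A_parent, A_daughter = parent_daughter_activity(1e12, lam(3.2 * 86400), lam(2.3 * 3600), t)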

  8. Short-Term Memory Stages in Sign vs. Speech: The Source of the Serial Span Discrepancy

    ERIC Educational Resources Information Center

    Hall, Matthew L.; Bavelier, Daphne

    2011-01-01

    Speakers generally outperform signers when asked to recall a list of unrelated verbal items. This phenomenon is well established, but its source has remained unclear. In this study, we evaluate the relative contribution of the three main processing stages of short-term memory--perception, encoding, and recall--in this effect. The present study…

  9. Sensitivities to source-term parameters of emergency planning zone boundaries for waste management facilities

    SciTech Connect

    Mueller, C.J.

    1995-07-01

    This paper reviews the key parameters comprising airborne radiological and chemical release source terms, discusses the ranges over which the values of these parameters occur for plausible but severe waste management facility accidents, and relates the concomitant sensitivities of emergency planning zone boundaries predicated on calculated distances to early severe health effects.

  10. Utilities for master source code distribution: MAX and Friends

    NASA Technical Reports Server (NTRS)

    Felippa, Carlos A.

    1988-01-01

    MAX is a program for the manipulation of FORTRAN master source code (MSC). This is a technique by which one maintains one and only one master copy of a FORTRAN program under a program development system, which for MAX is assumed to be VAX/VMS. The master copy is not intended to be directly compiled. Instead it must be pre-processed by MAX to produce compilable instances. These instances may correspond to different code versions (for example, double precision versus single precision), different machines (for example, IBM, CDC, Cray) or different operating systems (i.e., VAX/VMS versus VAX/UNIX). The advantage of using a master source is more pronounced in complex application programs that are developed and maintained over many years and are to be transported to and executed in several computer environments. The version lag problem that plagues many such programs is avoided by this approach. MAX is complemented by several auxiliary programs that perform nonessential functions. The ensemble is collectively known as MAX and Friends. All of these programs, including MAX, are executed as foreign VAX/VMS commands and can easily be hidden in customized VMS command procedures.
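
    The master-source idea can be illustrated with the toy preprocessor below, which emits a compilable instance by keeping only the conditional blocks that match a chosen target. The *IF/*END directive syntax, written in Python here, is hypothetical and for illustration only; it is not MAX's actual directive language.

        import re

        def instantiate(master_lines, target):
            """Emit a compilable instance from master source lines.

            Lines between '*IF <name>' and '*END' are kept only when <name> == target.
            Directive lines themselves are never emitted.
            """
            out, keep = [], True
            for line in master_lines:
                m = re.match(r"\*IF\s+(\w+)", line)
                if m:
                    keep = (m.group(1) == target)
                elif line.strip() == "*END":
                    keep = True
                elif keep:
                    out.append(line)
            return out

        master = [
            "      PROGRAM DEMO",
            "*IF DOUBLE",
            "      DOUBLE PRECISION X",
            "*END",
            "*IF SINGLE",
            "      REAL X",
            "*END",
            "      END",
        ]
        print("\n".join(instantiate(master, "DOUBLE")))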

  11. Laboratory experiments designed to provide limits on the radionuclide source term for the NNWSI Project

    SciTech Connect

    Oversby, V.M.; McCright, R.D.

    1984-11-01

    The Nevada Nuclear Waste Storage Investigations Project is investigating the suitability of the tuffaceous rocks at Yucca Mountain Nevada for potential use as a high-level nuclear waste repository. The horizon under investigation lies above the water table, and therefore offers a setting that differs substantially from other potential repository sites. The unsaturated zone environment allows a simple, but effective, waste package design. The source term for radionuclide release from the waste package will be based on laboratory experiments that determine the corrosion rates and mechanisms for the metal container and the dissolution rate of the waste form under expected long term conditions. This paper describes the present status of laboratory results and outlines the approach to be used in combining the data to develop a realistic source term for release of radionuclides from the waste package. 16 refs., 3 figs., 1 tab.

  12. Severe accident source term characteristics for selected Peach Bottom sequences predicted by the MELCOR Code

    SciTech Connect

    Carbajo, J.J.

    1993-09-01

    The purpose of this report is to compare in-containment source terms developed for NUREG-1159, which used the Source Term Code Package (STCP), with those generated by MELCOR to identify significant differences. For this comparison, two short-term depressurized station blackout sequences (with a dry cavity and with a flooded cavity) and a Loss-of-Coolant Accident (LOCA) concurrent with complete loss of the Emergency Core Cooling System (ECCS) were analyzed for the Peach Bottom Atomic Power Station (a BWR-4 with a Mark I containment). The results indicate that for the sequences analyzed, the two codes predict similar total in-containment release fractions for each of the element groups. However, the MELCOR/CORBH Package predicts significantly longer times for vessel failure and reduced energy of the released material for the station blackout sequences (when compared to the STCP results). MELCOR also calculated smaller releases into the environment than STCP for the station blackout sequences.

  13. Low-level radioactive waste source terms for the 1992 integrated data base

    SciTech Connect

    Loghry, S L; Kibbey, A H; Godbee, H W; Icenhour, A S; DePaoli, S M

    1995-01-01

    This technical manual presents updated generic source terms (i.e., unitized amounts and radionuclide compositions) which have been developed for use in the Integrated Data Base (IDB) Program of the U.S. Department of Energy (DOE). These source terms were used in the IDB annual report, Integrated Data Base for 1992: Spent Fuel and Radioactive Waste Inventories, Projections, and Characteristics, DOE/RW-0006, Rev. 8, October 1992. They are useful as a basis for projecting future amounts (volume and radioactivity) of low-level radioactive waste (LLW) shipped for disposal at commercial burial grounds or sent for storage at DOE solid-waste sites. Commercial fuel cycle LLW categories include boiling-water reactor, pressurized-water reactor, fuel fabrication, and uranium hexafluoride (UF6) conversion. Commercial nonfuel cycle LLW includes institutional/industrial (I/I) waste. The LLW from DOE operations is categorized as uranium/thorium, fission product, induced activity, tritium, alpha, and "other". Fuel cycle commercial LLW source terms are normalized on the basis of net electrical output [MW(e)-year], except for UF6 conversion, which is normalized on the basis of heavy metal requirement [metric tons of initial heavy metal]. The nonfuel cycle commercial LLW source term is normalized on the basis of volume (cubic meters) and radioactivity (curies) for each subclass within the I/I category. The DOE LLW is normalized in a manner similar to that for commercial I/I waste. The revised source terms are based on the best available historical data through 1992.

  14. Fukushima Daiichi reactor source term attribution using cesium isotope ratios from contaminated environmental samples

    DOE PAGES

    Snow, Mathew S.; Snyder, Darin C.; Delmore, James E.

    2016-01-18

    Source term attribution of environmental contamination following the Fukushima Daiichi Nuclear Power Plant (FDNPP) disaster is complicated by a large number of possible similar emission source terms (e.g. FDNPP reactor cores 1–3 and spent fuel ponds 1–4). Cesium isotopic analyses can be utilized to discriminate between environmental contamination from different FDNPP source terms and, if samples are sufficiently temporally resolved, potentially provide insights into the extent of reactor core damage at a given time. Rice, soil, mushroom, and soybean samples taken 100–250 km from the FDNPP site were dissolved using microwave digestion. Radiocesium was extracted and purified using two sequential ammonium molybdophosphate-polyacrylonitrile columns, following which 135Cs/137Cs isotope ratios were measured using thermal ionization mass spectrometry (TIMS). Results were compared with data reported previously from locations to the northwest of FDNPP and 30 km to the south of FDNPP. 135Cs/137Cs isotope ratios from samples 100–250 km to the southwest of the FDNPP site show a consistent value of 0.376 ± 0.008. 135Cs/137Cs versus 134Cs/137Cs correlation plots suggest that radiocesium to the southwest is derived from a mixture of FDNPP reactor cores 1, 2, and 3. Conclusions from the cesium isotopic data are in agreement with those derived independently based upon the event chronology combined with meteorological conditions at the time of the disaster. In conclusion, cesium isotopic analyses provide a powerful tool for source term discrimination of environmental radiocesium contamination at the FDNPP site. For higher precision source term attribution and forensic determination of the FDNPP core conditions based upon cesium, analyses of a larger number of samples from locations to the north and south of the FDNPP site (particularly time-resolved air filter samples) are needed.

  15. Simulation of dose distribution for iridium-192 brachytherapy source type-H01 using MCNPX

    NASA Astrophysics Data System (ADS)

    Purwaningsih, Anik

    2014-09-01

    Dosimetric data for a brachytherapy source should be known before it is used for clinical treatment. The Iridium-192 source type H01, manufactured by PRR-BATAN for brachytherapy, does not yet have established dosimetric data. The radial dose function and the anisotropic dose distribution are primary characteristics of a brachytherapy source. The dose distribution for the Iridium-192 source type H01 was obtained from the dose calculation formalism recommended in the AAPM TG-43U1 report using the MCNPX 2.6.0 Monte Carlo simulation code. To assess the effect of the cavity in the Iridium-192 type H01 caused by the manufacturing process, calculations were also performed for an Iridium-192 type H01 source without the cavity. The calculated radial dose function and anisotropic dose distribution for the Iridium-192 source type H01 were compared with those of another Iridium-192 source model.

  16. Simulation of dose distribution for iridium-192 brachytherapy source type-H01 using MCNPX

    SciTech Connect

    Purwaningsih, Anik

    2014-09-30

    Dosimetric data for a brachytherapy source should be known before it is used for clinical treatment. The Iridium-192 source type H01, manufactured by PRR-BATAN for brachytherapy, does not yet have established dosimetric data. The radial dose function and the anisotropic dose distribution are primary characteristics of a brachytherapy source. The dose distribution for the Iridium-192 source type H01 was obtained from the dose calculation formalism recommended in the AAPM TG-43U1 report using the MCNPX 2.6.0 Monte Carlo simulation code. To assess the effect of the cavity in the Iridium-192 type H01 caused by the manufacturing process, calculations were also performed for an Iridium-192 type H01 source without the cavity. The calculated radial dose function and anisotropic dose distribution for the Iridium-192 source type H01 were compared with those of another Iridium-192 source model.

  17. Fusion of chemical, biological, and meteorological observations for agent source term estimation and hazard refinement

    NASA Astrophysics Data System (ADS)

    Bieringer, Paul E.; Rodriguez, Luna M.; Sykes, Ian; Hurst, Jonathan; Vandenberghe, Francois; Weil, Jeffrey; Bieberbach, George, Jr.; Parker, Steve; Cabell, Ryan

    2011-05-01

    Chemical and biological (CB) agent detection and effective use of these observations in hazard assessment models are key elements of our nation's CB defense program that seeks to ensure that Department of Defense (DoD) operations are minimally affected by a CB attack. Accurate hazard assessments rely heavily on the source term parameters necessary to characterize the release in the transport and dispersion (T&D) simulation. Unfortunately, these source parameters are often not known and based on rudimentary assumptions. In this presentation we describe an algorithm that utilizes variational data assimilation techniques to fuse CB and meteorological observations to characterize agent release source parameters and provide a refined hazard assessment. The underlying algorithm consists of a combination of modeling systems, including the Second order Closure Integrated PUFF model (SCIPUFF), its corresponding Source Term Estimation (STE) model, a hybrid Lagrangian-Eulerian Plume Model (LEPM), its formal adjoint, and the software infrastructure necessary to link them. SCIPUFF and its STE model are used to calculate a "first guess" source estimate. The LEPM and corresponding adjoint are then used to iteratively refine this release source estimate using variational data assimilation techniques. This algorithm has undergone preliminary testing using virtual "single realization" plume release data sets from the Virtual THreat Response Emulation and Analysis Testbed (VTHREAT) and data from the FUSION Field Trials 2007 (FFT07). The end-to-end prototype of this system that has been developed to illustrate its use within the United States (US) Joint Effects Model (JEM) will be demonstrated.
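
    A heavily simplified sketch of the "first guess plus variational refinement" pattern described above is given below: a quadratic cost combining a background (first-guess) term and an observation misfit is minimized over source parameters. The forward operator H, the weights and the parameter vector are hypothetical; the actual system uses SCIPUFF, the LEPM and its formal adjoint rather than a generic optimizer.

        import numpy as np
        from scipy.optimize import minimize

        # Hypothetical linearized forward operator: predicted sensor readings are H @ q,
        # where q holds the source parameters to be refined.
        rng = np.random.default_rng(1)
        H = rng.uniform(0.0, 1.0, size=(50, 3))
        y_obs = H @ np.array([2.0, 0.5, 1.0]) + rng.normal(0.0, 0.05, 50)

        def cost(q, q_first_guess, b_inv=1.0, r_inv=10.0):
            """Variational cost: background (first-guess) term + observation misfit."""
            background = b_inv * np.sum((q - q_first_guess) ** 2)
            misfit = r_inv * np.sum((H @ q - y_obs) ** 2)
            return background + misfit

        q_fg = np.array([1.0, 1.0, 1.0])                  # "first guess" source estimate
        res = minimize(cost, q_fg, args=(q_fg,), method="L-BFGS-B",
                       bounds=[(0.0, None)] * 3)          # keep source parameters physical
        print("refined source parameters:", res.x.round(2))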

  18. Characterizing short-term stability for Boolean networks over any distribution of transfer functions

    NASA Astrophysics Data System (ADS)

    Seshadhri, C.; Smith, Andrew M.; Vorobeychik, Yevgeniy; Mayo, Jackson R.; Armstrong, Robert C.

    2016-07-01

    We present a characterization of short-term stability of Kauffman's NK (random) Boolean networks under arbitrary distributions of transfer functions. Given such a Boolean network where each transfer function is drawn from the same distribution, we present a formula that determines whether short-term chaos (damage spreading) will happen. Our main technical tool, which enables the formal proof of this formula, is the Fourier analysis of Boolean functions, which describes such functions as multilinear polynomials over the inputs. Numerical simulations on mixtures of threshold functions and nested canalyzing functions demonstrate the formula's correctness.
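
    A minimal simulation of the damage-spreading notion analyzed above is sketched below: two copies of a random NK Boolean network, differing in a single node, are run forward and their Hamming distance is tracked. The transfer functions here are drawn uniformly at random, which for K = 3 lies in the chaotic regime; this illustrates the phenomenon only, not the paper's Fourier-analytic criterion.

        import numpy as np

        def nk_step(state, inputs, tables):
            """One synchronous update of an NK Boolean network.
            inputs : (N, K) indices of each node's regulators
            tables : (N, 2**K) lookup tables, one row per node
            """
            idx = np.sum(state[inputs] << np.arange(inputs.shape[1]), axis=1)
            return tables[np.arange(len(state)), idx]

        rng = np.random.default_rng(2)
        N, K, T = 1000, 3, 20
        inputs = rng.integers(0, N, size=(N, K))
        tables = rng.integers(0, 2, size=(N, 2 ** K))     # transfer functions drawn uniformly

        x = rng.integers(0, 2, size=N)
        y = x.copy()
        y[0] ^= 1                                         # flip one node ("damage")
        for _ in range(T):
            x, y = nk_step(x, inputs, tables), nk_step(y, inputs, tables)
        print("Hamming distance after", T, "steps:", int(np.sum(x != y)))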

  19. Characterizing short-term stability for Boolean networks over any distribution of transfer functions.

    PubMed

    Seshadhri, C; Smith, Andrew M; Vorobeychik, Yevgeniy; Mayo, Jackson R; Armstrong, Robert C

    2016-07-01

    We present a characterization of short-term stability of Kauffman's NK (random) Boolean networks under arbitrary distributions of transfer functions. Given such a Boolean network where each transfer function is drawn from the same distribution, we present a formula that determines whether short-term chaos (damage spreading) will happen. Our main technical tool which enables the formal proof of this formula is the Fourier analysis of Boolean functions, which describes such functions as multilinear polynomials over the inputs. Numerical simulations on mixtures of threshold functions and nested canalyzing functions demonstrate the formula's correctness. PMID:27575142

  20. Characterizing short-term stability for Boolean networks over any distribution of transfer functions.

    PubMed

    Seshadhri, C; Smith, Andrew M; Vorobeychik, Yevgeniy; Mayo, Jackson R; Armstrong, Robert C

    2016-07-01

    We present a characterization of short-term stability of Kauffman's NK (random) Boolean networks under arbitrary distributions of transfer functions. Given such a Boolean network where each transfer function is drawn from the same distribution, we present a formula that determines whether short-term chaos (damage spreading) will happen. Our main technical tool which enables the formal proof of this formula is the Fourier analysis of Boolean functions, which describes such functions as multilinear polynomials over the inputs. Numerical simulations on mixtures of threshold functions and nested canalyzing functions demonstrate the formula's correctness.

  1. Prediction of short-term distributions of load extremes of offshore wind turbines

    NASA Astrophysics Data System (ADS)

    Wang, Ying-guang

    2016-09-01

    This paper proposes a new methodology to select an optimal threshold level to be used in the peak over threshold (POT) method for the prediction of short-term distributions of load extremes of offshore wind turbines. Such an optimal threshold level is found based on the estimation of the variance-to-mean ratio for the occurrence of peak values, which characterizes the Poisson assumption. A generalized Pareto distribution is then fitted to the extracted peaks over the optimal threshold level, and the distribution parameters are estimated by the maximum spacing estimation method. This methodology is applied to estimate the short-term distributions of load extremes of the blade bending moment and the tower base bending moment at the mudline of a monopile-supported 5 MW offshore wind turbine as an example. The accuracy of the POT method using the optimal threshold level is shown to be better, in terms of the distribution fitting, than that of the POT methods using empirical threshold levels. The comparisons among the short-term extreme response values predicted by using the POT method with the optimal threshold levels and with the empirical threshold levels and by using direct simulation results further substantiate the validity of the proposed new methodology.
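
    The peak-over-threshold workflow can be sketched as below: extract exceedances above a threshold, fit a generalized Pareto distribution, and read off a short-term extreme estimate. This sketch uses a synthetic response, a simplistic quantile-based threshold, and scipy's maximum-likelihood fit rather than the maximum spacing estimator and the optimal threshold selection proposed in the paper.

        import numpy as np
        from scipy.stats import genpareto

        rng = np.random.default_rng(3)
        response = rng.gumbel(loc=10.0, scale=2.0, size=50_000)   # synthetic load response

        threshold = np.quantile(response, 0.95)                   # simplistic threshold choice
        exceedances = response[response > threshold] - threshold

        # Maximum-likelihood GPD fit to the exceedances (location fixed at 0).
        shape, loc, scale = genpareto.fit(exceedances, floc=0.0)

        # Short-term extreme estimate: load exceeded on average once per set of peaks.
        p_exceed = 1.0 / len(exceedances)
        extreme = threshold + genpareto.ppf(1.0 - p_exceed, shape, loc=0.0, scale=scale)
        print(f"GPD shape={shape:.3f}, scale={scale:.3f}, extreme estimate={extreme:.2f}")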

  2. A Systematic Search for Short-term Variability of EGRET Sources

    NASA Technical Reports Server (NTRS)

    Wallace, P. M.; Griffis, N. J.; Bertsch, D. L.; Hartman, R. C.; Thompson, D. J.; Kniffen, D. A.; Bloom, S. D.

    2000-01-01

    The 3rd EGRET Catalog of High-energy Gamma-ray Sources contains 170 unidentified sources, and there is great interest in the nature of these sources. One means of determining source class is the study of flux variability on time scales of days; pulsars are believed to be stable on these time scales while blazars are known to be highly variable. In addition, previous work has demonstrated that 3EG J0241-6103 and 3EG J1837-0606 are candidates for a new gamma-ray source class. These sources near the Galactic plane display transient behavior but cannot be associated with any known blazars. Although many instances of flaring AGN have been reported, the EGRET database has not been systematically searched for occurrences of short-timescale (approximately 1 day) variability. These considerations have led us to conduct a systematic search for short-term variability in EGRET data, covering all viewing periods through proposal cycle 4. Six 3EG catalog sources are reported here to display variability on short time scales; four of them are unidentified. In addition, three non-catalog variable sources are discussed.

  3. Fire Source Localization Based on Distributed Temperature Sensing by a Dual-Line Optical Fiber System.

    PubMed

    Sun, Miao; Tang, Yuquan; Yang, Shuang; Li, Jun; Sigrist, Markus W; Dong, Fengzhong

    2016-01-01

    We propose a method for localizing a fire source using an optical fiber distributed temperature sensor system. A section of two parallel optical fibers employed as the sensing element is installed near the ceiling of a closed room in which the fire source is located. By measuring the temperature of hot air flows, the problem of three-dimensional fire source localization is transformed to two dimensions. The method of the source location is verified with experiments using burning alcohol as fire source, and it is demonstrated that the method represents a robust and reliable technique for localizing a fire source also for long sensing ranges. PMID:27275822

  4. Fire Source Localization Based on Distributed Temperature Sensing by a Dual-Line Optical Fiber System.

    PubMed

    Sun, Miao; Tang, Yuquan; Yang, Shuang; Li, Jun; Sigrist, Markus W; Dong, Fengzhong

    2016-06-06

    We propose a method for localizing a fire source using an optical fiber distributed temperature sensor system. A section of two parallel optical fibers employed as the sensing element is installed near the ceiling of a closed room in which the fire source is located. By measuring the temperature of hot air flows, the problem of three-dimensional fire source localization is transformed to two dimensions. The method of the source location is verified with experiments using burning alcohol as fire source, and it is demonstrated that the method represents a robust and reliable technique for localizing a fire source also for long sensing ranges.

  5. Fire Source Localization Based on Distributed Temperature Sensing by a Dual-Line Optical Fiber System

    PubMed Central

    Sun, Miao; Tang, Yuquan; Yang, Shuang; Li, Jun; Sigrist, Markus W.; Dong, Fengzhong

    2016-01-01

    We propose a method for localizing a fire source using an optical fiber distributed temperature sensor system. A section of two parallel optical fibers employed as the sensing element is installed near the ceiling of a closed room in which the fire source is located. By measuring the temperature of hot air flows, the problem of three-dimensional fire source localization is transformed to two dimensions. The method of the source location is verified with experiments using burning alcohol as fire source, and it is demonstrated that the method represents a robust and reliable technique for localizing a fire source also for long sensing ranges. PMID:27275822

  6. Search for correlated radio and optical events in long-term studies of extragalactic sources

    NASA Technical Reports Server (NTRS)

    Pomphrey, R. B.; Smith, A. G.; Leacock, R. J.; Olsson, C. N.; Scott, R. L.; Pollock, J. T.; Edwards, P.; Dent, W. A.

    1976-01-01

    For the first time, long-term records of radio and optical fluxes of a large sample of variable extragalactic sources have been assembled and compared, with linear cross-correlation analysis being used to reinforce the visual comparisons. Only in the case of the BL Lac object OJ 287 is the correlation between radio and optical records strong. In the majority of cases there is no evidence of significant correlation, although nine sources show limited or weak evidence of correlation. The results do not support naive extrapolation of the expanding source model. The general absence of strong correlation between the radio and optical regions has important implications for the energetics of events occurring in such sources.

  7. Efficiency of core light injection from sources in the cladding - Bulk distribution

    NASA Astrophysics Data System (ADS)

    Egalon, Claudio O.; Rogowski, Robert S.

    1992-04-01

    The behavior of the power efficiency of an optical fiber with bulk distribution of sources in its cladding is analyzed. Marcuse's (1988) results for weakly guiding cylindrical fibers with fluorescent sources uniformly distributed in the cladding are confirmed for the bulk distribution case. It is found that power efficiency increases with wavelength and with difference in refractive indices. A new independent variable for the bulk distribution is found, and it is shown that the power efficiency does not always increase with the V number.

  8. Efficiency of core light injection from sources in the cladding - Bulk distribution

    NASA Technical Reports Server (NTRS)

    Egalon, Claudio O.; Rogowski, Robert S.

    1991-01-01

    The behavior of the power efficiency of an optical fiber with bulk distribution of sources in its cladding is analyzed. Marcuse's (1988) results for weakly guiding cylindrical fibers with fluorescent sources uniformly distributed in the cladding are confirmed for the bulk distribution case. It is found that power efficiency increases with wavelength and with difference in refractive indices. A new independent variable for the bulk distribution is found, and it is shown that the power efficiency does not always increase with the V number.

  9. Reconstruction of far-field tsunami amplitude distributions from earthquake sources

    USGS Publications Warehouse

    Geist, Eric L.; Parsons, Thomas E.

    2016-01-01

    The probability distribution of far-field tsunami amplitudes is explained in relation to the distribution of seismic moment at subduction zones. Tsunami amplitude distributions at tide gauge stations follow a similar functional form, well described by a tapered Pareto distribution that is parameterized by a power-law exponent and a corner amplitude. Distribution parameters are first established for eight tide gauge stations in the Pacific, using maximum likelihood estimation. A procedure is then developed to reconstruct the tsunami amplitude distribution that consists of four steps: (1) define the distribution of seismic moment at subduction zones; (2) establish a source-station scaling relation from regression analysis; (3) transform the seismic moment distribution to a tsunami amplitude distribution for each subduction zone; and (4) mix the transformed distribution for all subduction zones to an aggregate tsunami amplitude distribution specific to the tide gauge station. The tsunami amplitude distribution is adequately reconstructed for four tide gauge stations using globally constant seismic moment distribution parameters established in previous studies. In comparisons to empirical tsunami amplitude distributions from maximum likelihood estimation, the reconstructed distributions consistently exhibit higher corner amplitude values, implying that in most cases, the empirical catalogs are too short to include the largest amplitudes. Because the reconstructed distribution is based on a catalog of earthquakes that is much larger than the tsunami catalog, it is less susceptible to the effects of record-breaking events and more indicative of the actual distribution of tsunami amplitudes.
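
    The tapered Pareto fit that underlies the station-level estimation can be sketched as below: a negative log-likelihood for the tapered Pareto form (power-law exponent beta and corner amplitude theta) is minimized numerically. The data here are synthetic and the parameterization follows the common tapered Pareto form; this is an assumption-laden illustration, not the authors' code or catalog.

        import numpy as np
        from scipy.optimize import minimize

        def neg_log_lik(params, x, xmin):
            """Negative log-likelihood of the tapered Pareto distribution.
            Survival: S(x) = (xmin/x)**beta * exp((xmin - x)/theta), x >= xmin.
            """
            beta, theta = params
            if beta <= 0 or theta <= 0:
                return np.inf
            pdf = (beta / x + 1.0 / theta) * (xmin / x) ** beta * np.exp((xmin - x) / theta)
            return -np.sum(np.log(pdf))

        rng = np.random.default_rng(4)
        xmin = 0.05                                   # observation threshold (m), illustrative
        # Synthetic amplitudes: a Pareto body with a mild exponential taper.
        x = xmin * (1 + rng.pareto(1.0, 2000)) * np.exp(-rng.uniform(0, 0.1, 2000))
        x = np.clip(x, xmin, None)

        res = minimize(neg_log_lik, x0=[1.0, 1.0], args=(x, xmin), method="Nelder-Mead")
        beta_hat, corner_hat = res.x
        print(f"power-law exponent ~ {beta_hat:.2f}, corner amplitude ~ {corner_hat:.3f} m")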

  10. Reconstruction of Far-Field Tsunami Amplitude Distributions from Earthquake Sources

    NASA Astrophysics Data System (ADS)

    Geist, Eric L.; Parsons, Tom

    2016-04-01

    The probability distribution of far-field tsunami amplitudes is explained in relation to the distribution of seismic moment at subduction zones. Tsunami amplitude distributions at tide gauge stations follow a similar functional form, well described by a tapered Pareto distribution that is parameterized by a power-law exponent and a corner amplitude. Distribution parameters are first established for eight tide gauge stations in the Pacific, using maximum likelihood estimation. A procedure is then developed to reconstruct the tsunami amplitude distribution that consists of four steps: (1) define the distribution of seismic moment at subduction zones; (2) establish a source-station scaling relation from regression analysis; (3) transform the seismic moment distribution to a tsunami amplitude distribution for each subduction zone; and (4) mix the transformed distribution for all subduction zones to an aggregate tsunami amplitude distribution specific to the tide gauge station. The tsunami amplitude distribution is adequately reconstructed for four tide gauge stations using globally constant seismic moment distribution parameters established in previous studies. In comparisons to empirical tsunami amplitude distributions from maximum likelihood estimation, the reconstructed distributions consistently exhibit higher corner amplitude values, implying that in most cases, the empirical catalogs are too short to include the largest amplitudes. Because the reconstructed distribution is based on a catalog of earthquakes that is much larger than the tsunami catalog, it is less susceptible to the effects of record-breaking events and more indicative of the actual distribution of tsunami amplitudes.

  11. Measurements of Infrared and Acoustic Source Distributions in Jet Plumes

    NASA Technical Reports Server (NTRS)

    Agboola, Femi A.; Bridges, James; Saiyed, Naseem

    2004-01-01

    The aim of this investigation was to use the linear phased array (LPA) microphones and infrared (IR) imaging to study the effects of advanced nozzle-mixing techniques on jet noise reduction. Several full-scale engine nozzles were tested at varying power cycles with the linear phased array set up parallel to the jet axis. The array consisted of 16 sparsely distributed microphones. The phased array microphone measurements were taken at a distance of 51.0 ft (15.5 m) from the jet axis, and the results were used to obtain relative overall sound pressure levels from one nozzle design to the other. The IR imaging system was used to acquire real-time dynamic thermal patterns of the exhaust jet from the nozzles tested. The IR camera measured the IR radiation from the nozzle exit to a distance of six fan diameters (X/D_fan = 6) along the jet plume axis. The images confirmed the expected jet plume mixing intensity, and the phased array results showed the differences in sound pressure level with respect to nozzle configurations. The results show the effects of changes in the exit nozzle configurations on both the flows' mixing patterns and radiant energy dissipation patterns. By comparing the results from these two measurements, a relationship between noise reduction and core/bypass flow mixing is demonstrated.

  12. Decoy-state quantum key distribution with a leaky source

    NASA Astrophysics Data System (ADS)

    Tamaki, Kiyoshi; Curty, Marcos; Lucamarini, Marco

    2016-06-01

    In recent years, there has been a great effort to prove the security of quantum key distribution (QKD) with a minimum number of assumptions. Besides its intrinsic theoretical interest, this would allow for larger tolerance against device imperfections in the actual implementations. However, even in this device-independent scenario, one assumption seems unavoidable, that is, the presence of a protected space devoid of any unwanted information leakage in which the legitimate parties can privately generate, process and store their classical data. In this paper we relax this unrealistic and hardly feasible assumption and introduce a general formalism to tackle the information leakage problem in most of existing QKD systems. More specifically, we prove the security of optical QKD systems using phase and intensity modulators in their transmitters, which leak the setting information in an arbitrary manner. We apply our security proof to cases of practical interest and show key rates similar to those obtained in a perfectly shielded environment. Our work constitutes a fundamental step forward in guaranteeing implementation security of quantum communication systems.

  13. An altitude and distance correction to the source fluence distribution of TGFs

    PubMed Central

    Nisi, R S; Østgaard, N; Gjesteland, T; Collier, A B

    2014-01-01

    The source fluence distribution of terrestrial gamma ray flashes (TGFs) has been extensively discussed in recent years, but few have considered how the TGF fluence distribution at the source, as estimated from satellite measurements, depends on the distance from the satellite foot point and the assumed production altitude. As the absorption of the TGF photons increases significantly with lower source altitude and larger distance between the source and the observing satellite, these might be important factors. We have addressed the issue by using the tropopause pressure distribution as an approximation of the TGF production altitude distribution and World Wide Lightning Location Network spheric measurements to determine the distance. The study is made possible by the increased number of Ramaty High Energy Solar Spectroscopic Imager (RHESSI) TGFs found in the second catalog of the RHESSI data. One finding is that the TGF/lightning ratio for the tropics probably has an annual variability due to an annual variability in the Brewer-Dobson circulation. The main result is an indication that the altitude distribution and distance should be considered when investigating the source fluence distribution of TGFs, as this leads to a softening of the inferred distribution of source brightness. PMID:26167434

  14. Controlling temporal solitary waves in the generalized inhomogeneous coupled nonlinear Schrödinger equations with varying source terms

    NASA Astrophysics Data System (ADS)

    Yang, Yunqing; Yan, Zhenya; Mihalache, Dumitru

    2015-05-01

    In this paper, we study the families of solitary-wave solutions to the inhomogeneous coupled nonlinear Schrödinger equations with space- and time-modulated coefficients and source terms. By means of the similarity reduction method and Möbius transformations, many types of novel temporal solitary-wave solutions of this nonlinear dynamical system are analytically found under some constraint conditions, such as the bright-bright, bright-dark, dark-dark, periodic-periodic, W-shaped, and rational wave solutions. In particular, we find that the localized rational-type solutions can exhibit both bright-bright and bright-dark wave profiles by choosing different families of free parameters. Moreover, we analyze the relationships among the group-velocity dispersion profiles, gain or loss distributions, external potentials, and inhomogeneous source profiles, which provide the necessary constraint conditions to control the emerging wave dynamics. Finally, a series of numerical simulations are performed to show the robustness to propagation of some of the analytically obtained solitary-wave solutions. The vast class of exact solutions of inhomogeneous coupled nonlinear Schrödinger equations with source terms might be used in the study of the soliton structures in twin-core optical fibers and two-component Bose-Einstein condensates.

  15. Accident source terms for Light-Water Nuclear Power Plants. Final report

    SciTech Connect

    Soffer, L.; Burson, S.B.; Ferrell, C.M.; Lee, R.Y.; Ridgely, J.N.

    1995-02-01

    In 1962 the US Atomic Energy Commission published TID-14844, "Calculation of Distance Factors for Power and Test Reactors", which specified a release of fission products from the core to the reactor containment for a postulated accident involving "substantial meltdown of the core". This "source term", the basis for the NRC's Regulatory Guides 1.3 and 1.4, has been used to determine compliance with the NRC's reactor site criteria, 10 CFR Part 100, and to evaluate other important plant performance requirements. During the past 30 years substantial additional information on fission product releases has been developed based on significant severe accident research. This document utilizes this research by providing more realistic estimates of the "source term" release into containment, in terms of timing, nuclide types, quantities and chemical form, given a severe core-melt accident. This revised "source term" is to be applied to the design of future light water reactors (LWRs). Current LWR licensees may voluntarily propose applications based upon it.

  16. Source and Distribution of Calcium in Mercury's Exosphere

    NASA Astrophysics Data System (ADS)

    Killen, Rosemary M.; Hahn, Joseph M.

    2014-11-01

    Mercury is surrounded by a surface-bounded exosphere with six known components: H, He, Na, K, Ca, and Mg. Observations of the Ca exosphere by MESSENGER show a source concentrated on the dawn side that varies in a periodic way with that planet's true anomaly. The time variation in that Ca signal repeats every Mercury year (Burger et al., Icarus, 2014). We show that this pattern can be explained by impact vaporization by interplanetary dust. Our models of this scenario show that much of the seasonal variation in Ca is due to Mercury's substantial radial motion through the interplanetary dust cloud that results from Mercury's large orbital eccentricity (e=0.2). The seasonal Ca variation is enhanced further by Mercury's large orbital inclination (7° relative to the ecliptic), which causes additional periodic variations in the dust infall rate as Mercury's vertical motion carries it repeatedly across the dust-disk's midplane. However, an additional contribution near true anomaly 20° is required in addition to the contribution from the interplanetary dust disk. This anomaly is close to but not coincident with Mercury's true anomaly as it crosses comet 2P/Encke's orbital plane. The lack of exact correspondence may indicate the width of the potential stream or a previous cometary orbit. We note that the Encke meteor storms hit Earth at true anomaly angles ±20 degrees before and after where these two orbit planes cross. The temperature of the atomic calcium cannot be due to the impact vapor but must be imparted by an additional mechanism such as dissociation of a calcium-bearing molecule or ionization followed by recombination.

  17. Analytic solutions of the time-dependent quasilinear diffusion equation with source and loss terms

    SciTech Connect

    Hassan, M.H.A. ); Hamza, E.A. )

    1993-08-01

    A simplified one-dimensional quasilinear diffusion equation describing the time evolution of collisionless ions in the presence of ion-cyclotron-resonance heating, sources, and losses is solved analytically for all harmonics of the ion cyclotron frequency. Simple time-dependent distribution functions which are initially Maxwellian and vanish at high energies are obtained and calculated numerically for the first four harmonics of resonance heating. It is found that the strongest ion tail of the resulting anisotropic distribution function is driven by heating at the second harmonic followed by heating at the fundamental frequency.

  18. Differential dose contributions on total dose distribution of (125)I brachytherapy source.

    PubMed

    Camgöz, B; Yeğin, G; Kumru, M N

    2010-01-01

    This work provides an improvement of the approach using Monte Carlo simulation for the Amersham Model 6711 (125)I brachytherapy seed source, which is well known from many theoretical and experimental studies. The source, which has a simple geometry, was investigated with respect to the criteria of the AAPM TG-43 Report. The approach offered by this study involves determining the differential dose contributions from virtual partitions of the massive radioactive element of the source to the total dose at an analytical calculation point. Some brachytherapy seeds contain multiple radioactive elements, so the dose at any point is the total of the separate doses from each element. For clinical treatments it is important to know the angular and radial dose distributions around a source located in cancerous tissue. The interior geometry of a source affects the dose characteristics of the distribution. Dose information on the inner geometrical structure of a brachytherapy source cannot be acquired by experimental methods because of the physical limits of materials and geometry in healthy tissue, so Monte Carlo simulation is the required approach. The EGSnrc Monte Carlo simulation software was used. In the simulation design, the radioactive source was divided into 10 rings, partitioned but not separate from each other. All differential sources were simulated for dose calculation, and the shape of the resulting dose distribution was compared with that of a single complete source. The anisotropy function was also examined mathematically.
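
    The partitioning idea can be illustrated with the toy calculation below, which sums contributions from ring sub-sources to the dose at a point using only an inverse-square geometric factor. The partition geometry and activity split are assumptions for illustration; the actual study uses full EGSnrc radiation transport rather than this simple factor.

        import numpy as np

        def total_dose(point, ring_centers, ring_activity, dose_rate_const=1.0):
            """Sum differential dose contributions from ring sub-sources.

            point        : (x, y, z) calculation point [cm]
            ring_centers : (n, 3) centers of the n virtual ring partitions [cm]
            ring_activity: (n,) activity assigned to each partition [Bq]
            Returns an inverse-square estimate only (real dosimetry would include
            attenuation, scatter, and the TG-43 geometry function).
            """
            r = np.linalg.norm(np.asarray(point) - ring_centers, axis=1)
            contributions = dose_rate_const * ring_activity / r ** 2
            return contributions.sum(), contributions

        # 10 rings stacked along a 3.0 mm active length, activity split equally.
        z = np.linspace(-0.15, 0.15, 10)
        centers = np.column_stack([np.zeros(10), np.zeros(10), z])
        total, per_ring = total_dose((1.0, 0.0, 0.0), centers, np.full(10, 0.1))
        print("total:", round(total, 4), "per-ring:", per_ring.round(4))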

  19. Differential dose contributions on total dose distribution of (125)I brachytherapy source.

    PubMed

    Camgöz, B; Yeğin, G; Kumru, M N

    2010-01-01

    This work provides an improvement of the approach using Monte Carlo simulation for the Amersham Model 6711 (125)I brachytherapy seed source, which is well known by many theoretical and experimental studies. The source which has simple geometry was researched with respect to criteria of AAPM Tg-43 Report. The approach offered by this study involves determination of differential dose contributions that come from virtual partitions of a massive radioactive element of the studied source to a total dose at analytical calculation point. Some brachytherapy seeds contain multi-radioactive elements so the dose at any point is a total of separate doses from each element. It is momentous to know well the angular and radial dose distributions around the source that is located in cancerous tissue for clinical treatments. Interior geometry of a source is effective on dose characteristics of a distribution. Dose information of inner geometrical structure of a brachytherapy source cannot be acquired by experimental methods because of limits of physical material and geometry in the healthy tissue, so Monte Carlo simulation is a required approach of the study. EGSnrc Monte Carlo simulation software was used. In the design of a simulation, the radioactive source was divided into 10 rings, partitioned but not separate from each other. All differential sources were simulated for dose calculation, and the shape of dose distribution was determined comparatively distribution of a single-complete source. In this work anisotropy function was examined also mathematically. PMID:24376927

  20. GEOCHEMISTRY OF PAHS IN AQUATIC ENVIRONMENTS: A SYNTHESIS OF DISTRIBUTION, SOURCE, PERSISTENCE, PARTITIONING AND BIOAVAILABILITY

    EPA Science Inventory

    On the basis of their distributions, sources, persistence, partitioning and bioavailability, polycyclic aromatic hydrocarbons (PAHs) are a unique class of persistent organic pollutants (POPs) contaminating the aquatic environment. They are of particular interest to geochemists an...

  1. 26 CFR 1.316-2 - Sources of distribution in general.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... earnings and profits. In determining the source of a distribution, consideration should be given first, to.... (b) If the earnings and profits of the taxable year (computed as of the close of the year...

  2. 26 CFR 1.316-2 - Sources of distribution in general.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... earnings and profits. In determining the source of a distribution, consideration should be given first, to.... (b) If the earnings and profits of the taxable year (computed as of the close of the year...

  3. 26 CFR 1.316-2 - Sources of distribution in general.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... earnings and profits. In determining the source of a distribution, consideration should be given first, to.... (b) If the earnings and profits of the taxable year (computed as of the close of the year...

  4. 26 CFR 1.316-2 - Sources of distribution in general.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... and profits. In determining the source of a distribution, consideration should be given first, to the... earnings and profits of the taxable year (computed as of the close of the year without diminution by...

  5. Long-Term Probability Distribution of Wind Turbine Planetary Bearing Loads (Poster)

    SciTech Connect

    Jiang, Z.; Xing, Y.; Guo, Y.; Dong, W.; Moan, T.; Gao, Z.

    2013-04-01

    Among the various causes of bearing damage and failure, metal fatigue of the rolling contact surface is the dominant failure mechanism. The fatigue life is associated with the load conditions under which wind turbines operate in the field. Therefore, it is important to understand the long-term distribution of the bearing loads under various environmental conditions. The National Renewable Energy Laboratory's 750-kW Gearbox Reliability Collaborative wind turbine is studied in this work. A decoupled analysis using several computer codes is carried out. The global aero-elastic simulations are performed using HAWC2. The time series of the drivetrain loads and motions from the global dynamic analysis are fed to a drivetrain model in SIMPACK. The time-varying internal pressure distribution along the raceway is obtained analytically. A series of probability distribution functions are then used to fit the long-term statistical distribution at different locations along raceways. The long-term distribution of the bearing raceway loads are estimated under different environmental conditions. Finally, the bearing fatigue lives are calculated.
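
    The final step described above (fitting distributions to loads in each environmental condition and assembling a long-term distribution) can be sketched as below. The load samples, the choice of a Weibull candidate and the bin probabilities are all hypothetical; the study's raceway load analysis is considerably more detailed.

        import numpy as np
        from scipy.stats import weibull_min

        rng = np.random.default_rng(5)

        # Hypothetical short-term bearing load samples for three wind-speed bins,
        # and the long-term probability of each bin (from a site wind distribution).
        bins = {"8 m/s": rng.gamma(4.0, 2.0, 5000),
                "12 m/s": rng.gamma(6.0, 2.5, 5000),
                "16 m/s": rng.gamma(8.0, 3.0, 5000)}
        p_bin = {"8 m/s": 0.5, "12 m/s": 0.35, "16 m/s": 0.15}

        fits = {k: weibull_min.fit(v, floc=0.0) for k, v in bins.items()}

        def long_term_exceedance(x):
            """P(load > x) mixed over environmental conditions."""
            return sum(p_bin[k] * weibull_min.sf(x, *fits[k]) for k in bins)

        print("P(load > 30) =", float(long_term_exceedance(30.0)))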

  6. Identifying synonymy between SNOMED clinical terms of varying length using distributional analysis of electronic health records.

    PubMed

    Henriksson, Aron; Conway, Mike; Duneld, Martin; Chapman, Wendy W

    2013-01-01

    Medical terminologies and ontologies are important tools for natural language processing of health record narratives. To account for the variability of language use, synonyms need to be stored in a semantic resource as textual instantiations of a concept. Developing such resources manually is, however, prohibitively expensive and likely to result in low coverage. To facilitate and expedite the process of lexical resource development, distributional analysis of large corpora provides a powerful data-driven means of (semi-)automatically identifying semantic relations, including synonymy, between terms. In this paper, we demonstrate how distributional analysis of a large corpus of electronic health records - the MIMIC-II database - can be employed to extract synonyms of SNOMED CT preferred terms. A distinctive feature of our method is its ability to identify synonymous relations between terms of varying length.
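
    A minimal sketch of the distributional idea is given below: terms (including multiword terms pre-joined into single tokens) are represented by co-occurrence vectors over a small corpus, and candidate synonyms are ranked by cosine similarity. This is a generic illustration with toy sentences, not the MIMIC-II pipeline or the models used in the paper.

        import numpy as np
        from collections import defaultdict

        def cooccurrence_vectors(docs, vocab, window=4):
            """Symmetric co-occurrence counts within a +/- window of tokens.
            Multiword terms are assumed to be pre-joined, e.g. 'shortness_of_breath'."""
            index = {w: i for i, w in enumerate(vocab)}
            vecs = defaultdict(lambda: np.zeros(len(vocab)))
            for doc in docs:
                toks = doc.split()
                for i, t in enumerate(toks):
                    for j in range(max(0, i - window), min(len(toks), i + window + 1)):
                        if j != i and toks[j] in index:
                            vecs[t][index[toks[j]]] += 1.0
            return vecs

        def cosine(a, b):
            denom = np.linalg.norm(a) * np.linalg.norm(b)
            return float(a @ b / denom) if denom else 0.0

        docs = ["patient reports chest pain and dyspnea",
                "patient reports chest pain and shortness_of_breath",
                "no dyspnea or chest pain today",
                "no shortness_of_breath or chest pain today"]
        vocab = sorted({w for d in docs for w in d.split()})
        vecs = cooccurrence_vectors(docs, vocab)
        print("similarity(dyspnea, shortness_of_breath) =",
              round(cosine(vecs["dyspnea"], vecs["shortness_of_breath"]), 3))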

  7. Identifying Synonymy between SNOMED Clinical Terms of Varying Length Using Distributional Analysis of Electronic Health Records

    PubMed Central

    Henriksson, Aron; Conway, Mike; Duneld, Martin; Chapman, Wendy W.

    2013-01-01

    Medical terminologies and ontologies are important tools for natural language processing of health record narratives. To account for the variability of language use, synonyms need to be stored in a semantic resource as textual instantiations of a concept. Developing such resources manually is, however, prohibitively expensive and likely to result in low coverage. To facilitate and expedite the process of lexical resource development, distributional analysis of large corpora provides a powerful data-driven means of (semi-)automatically identifying semantic relations, including synonymy, between terms. In this paper, we demonstrate how distributional analysis of a large corpus of electronic health records – the MIMIC-II database – can be employed to extract synonyms of SNOMED CT preferred terms. A distinctive feature of our method is its ability to identify synonymous relations between terms of varying length. PMID:24551362

  8. On the application of subcell resolution to conservation laws with stiff source terms

    NASA Technical Reports Server (NTRS)

    Chang, Shih-Hung

    1989-01-01

    LeVeque and Yee recently investigated a one-dimensional scalar conservation law with stiff source terms modeling the reacting flow problems and discovered that for the very stiff case most of the current finite difference methods developed for non-reacting flows would produce wrong solutions when there is a propagating discontinuity. A numerical scheme, essentially nonoscillatory/subcell resolution - characteristic direction (ENO/SRCD), is proposed for solving conservation laws with stiff source terms. This scheme is a modification of Harten's ENO scheme with subcell resolution, ENO/SR. The locations of the discontinuities and the characteristic directions are essential in the design. Strang's time-splitting method is used and time evolutions are done by advancing along the characteristics. Numerical experiment using this scheme shows excellent results on the model problem of LeVeque and Yee. Comparisons of the results of ENO, ENO/SR, and ENO/SRCD are also presented.
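
    The time-splitting idea referred to above can be sketched compactly: alternate a half step of the stiff reaction ODE, a full advection step, and another half reaction step. The sketch below uses plain first-order upwind advection and a generic cubic relaxation source integrated implicitly; the grid, stiffness parameter, and source form are illustrative assumptions, and the scheme is deliberately far simpler than ENO/SRCD.

      # Hedged sketch: Strang splitting for u_t + a*u_x = s(u) with a stiff source.
      # First-order upwind advection + implicit ODE solve; not the ENO/SRCD scheme.
      import numpy as np
      from scipy.integrate import solve_ivp

      a, mu = 1.0, 1000.0                  # advection speed, stiffness (assumed)
      nx, cfl = 200, 0.8
      x = np.linspace(0.0, 1.0, nx, endpoint=False)
      dx = x[1] - x[0]
      dt = cfl * dx / a
      u = np.where(x < 0.3, 1.0, 0.0)      # step initial data

      def source(t, u):
          return -mu * u * (u - 1.0) * (u - 0.5)

      def react(u, tau):
          # Stiff source handled implicitly over a (half) time step.
          return solve_ivp(source, (0.0, tau), u, method="Radau", rtol=1e-6).y[:, -1]

      def advect(u, dt):
          return u - a * dt / dx * (u - np.roll(u, 1))   # upwind, periodic

      for _ in range(100):
          u = react(u, 0.5 * dt)
          u = advect(u, dt)
          u = react(u, 0.5 * dt)

      print("front is now near x =", x[np.argmin(np.abs(u - 0.5))])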

  9. Lattice Boltzmann method for n-dimensional nonlinear hyperbolic conservation laws with the source term.

    PubMed

    Wang, Zhenghua; Shi, Baochang; Xiang, Xiuqiao; Chai, Zhenhua; Lu, Jianhua

    2011-03-01

    It is important for nonlinear hyperbolic conservation laws (NHCL) to have a simulation scheme with high-order accuracy, simple computation, and a non-oscillatory character. In this paper, a unified and novel lattice Boltzmann model is presented for solving n-dimensional NHCL with the source term. By introducing the high-order source term of the explicit lattice Boltzmann method (LBM) and the optimum dimensionless relaxation time varied with the specific issues, the effects of space and time resolutions on the accuracy and stability of the model are investigated for the different problems in one to three dimensions. Both the theoretical analysis and numerical simulation validate that the results by the proposed LBM have second-order accuracy in both space and time, which agree well with the analytical solutions.
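
    For orientation, the toy sketch below shows the bare mechanics of a lattice Boltzmann update for a source-bearing equation, using a minimal D1Q2 BGK model of linear advection with a constant source; u is recovered as the zeroth moment of the two populations. The equilibria, relaxation time, and source treatment are textbook-style illustrative choices, not the higher-order, n-dimensional model proposed in the paper.

      # Hedged sketch: minimal D1Q2 lattice Boltzmann model for u_t + a*u_x = S.
      import numpy as np

      nx, a, S = 200, 0.5, 0.01
      c, tau, dt = 1.0, 0.9, 1.0            # lattice speed, relaxation time, step
      x = np.arange(nx)
      u = np.exp(-0.01 * (x - 50.0) ** 2)   # initial pulse

      def feq(u):
          # Equilibria chosen so that f+ + f- = u and c*(f+ - f-) = a*u.
          return 0.5 * u * (1.0 + a / c), 0.5 * u * (1.0 - a / c)

      f_plus, f_minus = feq(u)
      for _ in range(100):
          eq_p, eq_m = feq(f_plus + f_minus)                 # u = zeroth moment
          f_plus += -(f_plus - eq_p) / tau + 0.5 * dt * S    # collide + source
          f_minus += -(f_minus - eq_m) / tau + 0.5 * dt * S
          f_plus = np.roll(f_plus, 1)                        # stream right
          f_minus = np.roll(f_minus, -1)                     # stream left

      print("pulse peak is now near x =", int(np.argmax(f_plus + f_minus)))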

  10. Long-term monitoring of airborne nickel (Ni) pollution in association with some potential source processes in the urban environment.

    PubMed

    Kim, Ki-Hyun; Shon, Zang-Ho; Mauulida, Puteri T; Song, Sang-Keun

    2014-09-01

    The environmental behavior and pollution status of nickel (Ni) were investigated in seven major cities in Korea over a 13-year time span (1998-2010). The mean concentrations of Ni measured during the whole study period fell within the range of 3.71 ng m(-3) (Gwangju: GJ) to 12.6 ng m(-3) (Incheon: IC). Although Ni values showed good comparability on a relatively large spatial scale, the values in most cities (6 out of 7) were subject to moderate reductions over the study period. To assess the effect of major sources on the long-term distribution of Ni, the relationship between Ni concentrations and potential source processes such as non-road transportation sources (e.g., ship and aircraft emissions) was examined for cities with port and airport facilities. The potential impact of long-range transport of Asian dust particles in controlling Ni levels was also evaluated. The overall results suggest that the Ni levels were subject to gradual reductions over the study period irrespective of changes in such localized non-road source activities. The pollution of Ni at all the study sites was maintained well below the international threshold (Directive 2004/107/EC) value of 20 ng m(-3).

  11. Relative contribution of DNAPL dissolution and matrix diffusion to the long-term persistence of chlorinated solvent source zones.

    PubMed

    Seyedabbasi, Mir Ahmad; Newell, Charles J; Adamson, David T; Sale, Thomas C

    2012-06-01

    The relative contribution of dense non-aqueous phase liquid (DNAPL) dissolution versus matrix diffusion processes to the longevity of chlorinated source zones was investigated. Matrix diffusion is being increasingly recognized as an important non-DNAPL component of source behavior over time, and understanding the persistence of contaminants that have diffused into lower permeability units can impact remedial decision-making. In this study, a hypothetical DNAPL source zone architecture consisting of several different sized pools and fingers originally developed by Anderson et al. (1992) was adapted to include defined low permeability layers. A coupled dissolution-diffusion model was developed to allow diffusion into these layers while in contact with DNAPL, followed by diffusion out of these same layers after complete DNAPL dissolution. This exercise was performed for releases of equivalent masses (675 kg) of three different compounds, including chlorinated solvents with solubilities ranging from low (tetrachloroethene (PCE)), moderate (trichloroethene (TCE)) to high (dichloromethane (DCM)). The results of this simple modeling exercise demonstrate that matrix diffusion can be a critical component of source zone longevity and may represent a longer-term contributor to source longevity (i.e., longer time maintaining concentrations above MCLs) than DNAPL dissolution alone at many sites. For the hypothetical TCE release, the simulation indicated that dissolution of DNAPL would take approximately 38 years, while the back diffusion from low permeability zones could maintain the source for an additional 83 years. This effect was even more dramatic for the higher solubility DCM (97% of longevity due to matrix diffusion), while the lower solubility PCE showed a more equal contribution from DNAPL dissolution vs. matrix diffusion. Several methods were used to describe the resulting source attenuation curves, including a first-order decay model which showed that half-life of
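
    To make the longevity comparison concrete, a back-of-the-envelope sketch: if the aqueous concentration sustained by back diffusion is assumed to decline roughly first order once the DNAPL is gone, the additional years above the drinking-water limit follow from C(t) = C0*exp(-k*t). The starting concentration and half-life below are illustrative placeholders, not values from the study; the 0.005 mg/L figure is the U.S. MCL for TCE.

      # Hedged sketch: years a back-diffusion-dominated source stays above the MCL,
      # assuming first-order decay (illustrative numbers, not the paper's results).
      import math

      C0_mg_per_L = 5.0        # assumed concentration when DNAPL is fully dissolved
      MCL_mg_per_L = 0.005     # TCE maximum contaminant level (5 micrograms/L)
      half_life_yr = 12.0      # assumed first-order half-life of back diffusion

      k = math.log(2.0) / half_life_yr
      t_above_mcl = math.log(C0_mg_per_L / MCL_mg_per_L) / k
      print(f"source persists ~{t_above_mcl:.0f} more years above the MCL")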

  12. Analytical source term optimization for radioactive releases with approximate knowledge of nuclide ratios

    NASA Astrophysics Data System (ADS)

    Hofman, Radek; Seibert, Petra; Kovalets, Ivan; Andronopoulos, Spyros

    2015-04-01

    We are concerned with source term retrieval in the case of an accident in a nuclear power plant with off-site consequences. The goal is to optimize atmospheric dispersion model inputs using inverse modeling of gamma dose rate measurements (instantaneous or time-integrated). These are the most abundant type of measurements provided by various radiation monitoring networks across Europe and available continuously in near-real time. Usually, the source term of an accidental release comprises a mixture of nuclides. Unfortunately, gamma dose rate measurements do not provide direct information on the source term composition; however, physical properties of the respective nuclides (deposition properties, decay half-life) can yield some insight. In the method presented, we assume that nuclide ratios are known at least approximately, e.g. from nuclide specific observations or reactor inventory and assumptions on the accident type. The source term can be in multiple phases, each being characterized by constant nuclide ratios. The method is an extension of a well-established source term inversion approach based on the optimization of an objective function (minimization of a cost function). This function has two quadratic terms: the mismatch between model and measurements weighted by an observation error covariance matrix, and the deviation of the solution from a first guess weighted by the first-guess error covariance matrix. For simplicity, both error covariance matrices are approximated as diagonal. Analytical minimization of the cost function leads to a linear system of equations. Possible negative parts of the solution are iteratively removed by means of first-guess error variance reduction. Nuclide ratios enter the problem in the form of additional linear equations, where the deviations from prescribed ratios are weighted by factors; the corresponding error variance allows us to control how strongly we want to impose the prescribed ratios. This introduces some freedom into the
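
    In generic notation (the symbols below are placeholders, not the authors' exact formulation), a cost function of the kind described, combining model-data mismatch, deviation from a first guess, and weighted nuclide-ratio constraints, can be written as

      J(x) = (y - Mx)^T R^{-1} (y - Mx)
           + (x - x_b)^T B^{-1} (x - x_b)
           + sum_k w_k (a_k^T x)^2

    where y holds the gamma dose rate observations, M is the source-receptor (dispersion) operator, x the unknown release rates, x_b the first guess, R and B the diagonal observation and first-guess error covariance matrices, and each linear constraint a_k^T x = 0 encodes a prescribed nuclide ratio whose weight w_k controls how strongly that ratio is imposed.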

  13. Distribution of Short-Term and Lifetime Predicted Risks of Cardiovascular Diseases in Peruvian Adults

    PubMed Central

    Quispe, Renato; Bazo-Alvarez, Juan Carlos; Burroughs Peña, Melissa S; Poterico, Julio A; Gilman, Robert H; Checkley, William; Bernabé-Ortiz, Antonio; Huffman, Mark D; Miranda, J Jaime

    2015-01-01

    Background Short-term risk assessment tools for prediction of cardiovascular disease events are widely recommended in clinical practice and are used largely for single time-point estimations; however, persons with low predicted short-term risk may have higher risks across longer time horizons. Methods and Results We estimated short-term and lifetime cardiovascular disease risk in a pooled population from 2 studies of Peruvian populations. Short-term risk was estimated using the atherosclerotic cardiovascular disease Pooled Cohort Risk Equations. Lifetime risk was evaluated using the algorithm derived from the Framingham Heart Study cohort. Using previously published thresholds, participants were classified into 3 categories: low short-term and low lifetime risk, low short-term and high lifetime risk, and high short-term predicted risk. We also compared the distribution of these risk profiles across educational level, wealth index, and place of residence. We included 2844 participants (50% men, mean age 55.9 years [SD 10.2 years]) in the analysis. Approximately 1 of every 3 participants (34% [95% CI 33 to 36]) had a high short-term estimated cardiovascular disease risk. Among those with a low short-term predicted risk, more than half (54% [95% CI 52 to 56]) had a high lifetime predicted risk. Short-term and lifetime predicted risks were higher for participants with lower versus higher wealth indexes and educational levels and for those living in urban versus rural areas (P<0.01). These results were consistent by sex. Conclusions These findings highlight potential shortcomings of using short-term risk tools for primary prevention strategies because a substantial proportion of Peruvian adults were classified as low short-term risk but high lifetime risk. Vulnerable adults, such as those from low socioeconomic status and those living in urban areas, may need greater attention regarding cardiovascular preventive strategies. PMID:26254303

  14. Final report on shipping-cask sabotage source-term investigation

    SciTech Connect

    Schmidt, E W; Walters, M A; Trott, B D; Gieseke, J A

    1982-10-01

    A need existed to estimate the source term resulting from a sabotage attack on a spent nuclear fuel shipping cask. An experimental program sponsored by the US NRC and conducted at Battelle's Columbus Laboratories was designed to meet that need. In the program a precision shaped charge was fired through a subscale model cask loaded with segments of spent PWR fuel rods and the radioactive material released was analyzed. This report describes these experiments and presents their results.

  15. Response of a viscoelastic halfspace to subsurface distributed acoustic sources with application to medical diagnosis

    NASA Astrophysics Data System (ADS)

    Royston, Thomas J.; Yazicioglu, Yigit; Loth, Francis

    2003-04-01

    The response within and at the surface of an isotropic viscoelastic medium to subsurface distributed low audible frequency acoustic sources is considered. Spherically and cylindrically distributed sources are approximated as arrays of infinitesimal point sources. Analytical approximations for the acoustic field radiating from these sources are then obtained as a summation of tractable point source expressions. These theoretical approximations are compared to computational finite element predictions and experimental studies in selected cases. The objective is to better understand low audible frequency sound propagation in soft biological tissue caused by subsurface sources. Distributed acoustic sources could represent vibratory motion of the vascular wall caused by turbulent blood flow past a constriction (stenosis). Additionally, focused vibratory stimulation using a dynamic radiation force caused by interfering ultrasound beams effectively creates a distributed subsurface acoustic source. A dynamic radiation force has been investigated as a means of probing subsurface tissue anomalies, including calcified vascular plaque and tumorous growths. In these cases, there is an interest in relating acoustic measurements at the skin surface and within the medium to the underlying flow/constriction environment or tissue anomaly. [Research supported by NIH NCRR 14250 and Whitaker Foundation BME RG 01-0198.]

  16. Elevated Natural Source Water Ammonia and Nitrification in the Distribution Systems of Four Water Utilities

    EPA Science Inventory

    Nitrification in drinking water distribution systems is a concern of many drinking water systems. Although chloramination as a source of nitrification (i.e., addition of excess ammonia or breakdown of chloramines) has drawn the most attention, many source waters contain signific...

  17. 26 CFR 1.316-2 - Sources of distribution in general.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 4 2010-04-01 2010-04-01 false Sources of distribution in general. 1.316-2 Section 1.316-2 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES Definitions; Constructive Ownership of Stock § 1.316-2 Sources...

  18. Correlated Sources in Distributed Networks--Data Transmission, Common Information Characterization and Inferencing

    ERIC Educational Resources Information Center

    Liu, Wei

    2011-01-01

    Correlation is often present among observations in a distributed system. This thesis deals with various design issues when correlated data are observed at distributed terminals, including: communicating correlated sources over interference channels, characterizing the common information among dependent random variables, and testing the presence of…

  19. Numerical analysis of atomic density distribution in arc driven negative ion sources

    SciTech Connect

    Yamamoto, T. Shibata, T.; Hatayama, A.; Kashiwagi, M.; Hanada, M.; Sawada, K.

    2014-02-15

    The purpose of this study is to calculate the atomic (H⁰) density distribution in the JAEA 10 ampere negative ion source. A collisional radiative model is developed for the calculation of the H⁰ density distribution. The non-equilibrium feature of the electron energy distribution function (EEDF), which mainly determines the H⁰ production rate, is included by substituting the EEDF calculated from a 3D electron transport analysis. In this paper, the H⁰ production rate, the ionization rate, and the density distribution in the source chamber are calculated. In the region where high energy electrons exist, the H⁰ production and the ionization are enhanced. The calculated H⁰ density distribution without the effect of H⁰ transport is relatively small in the upper region. In the next step, this effect should be taken into account to obtain a more realistic H⁰ distribution.

  20. Balancing the source terms in a SPH model for solving the shallow water equations

    NASA Astrophysics Data System (ADS)

    Xia, Xilin; Liang, Qiuhua; Pastor, Manuel; Zou, Weilie; Zhuang, Yan-Feng

    2013-09-01

    A shallow flow generally features complex hydrodynamics induced by complicated domain topography and geometry. A numerical scheme with well-balanced flux and source term gradients is therefore essential before a shallow flow model can be applied to simulate real-world problems. The issue of source term balancing has been exhaustively investigated in grid-based numerical approaches, e.g. discontinuous Galerkin finite element methods and finite volume Godunov-type methods. In recent years, a relatively new computational method, smoothed particle hydrodynamics (SPH), has started to gain popularity in solving the shallow water equations (SWEs). However, the well-balanced problem has not been fully investigated and resolved in the context of SPH. This work aims to discuss the well-balanced problem caused by a standard SPH discretization of the SWEs with slope source terms and to derive a corrected SPH algorithm that is able to preserve the lake-at-rest solution. In order to enhance the shock capturing capability of the resulting SPH model, the Monotone Upwind-centered Scheme for Conservation Laws (MUSCL) is also explored and applied to enable Riemann solver based artificial viscosity. The new SPH model is validated against several idealized benchmark tests and a real-world dam-break case, and promising results are obtained.
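
    For reference, the balance the corrected discretization must preserve can be stated compactly; the 1-D form below is a standard statement of the shallow water equations with a bed-slope source term, not a derivation of the paper's SPH operators:

      dh/dt + d(hu)/dx = 0
      d(hu)/dt + d(hu^2 + (1/2) g h^2)/dx = -g h dz_b/dx

    In the lake-at-rest state (u = 0 and h + z_b = constant), the pressure-gradient flux term must exactly cancel the slope source term; a discretization that fails to reproduce this cancellation generates spurious currents over uneven beds, which is the well-balanced problem the corrected SPH algorithm addresses.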

  1. Conditioning and long-term storage of spent radium sources in Turkey.

    PubMed

    Osmanlioglu, Ahmet Erdal

    2006-06-30

    Conditioning of radium sources is required before long-term interim storage to avoid the release of radioactive material and to limit radiation exposure. In this study, containment of the radium sources was achieved by high integrity encapsulation designed to control the radon emanation problem. The capsules were made of Type 316 austenitic stainless steel with dimensions of 22 mm diameter and 160 mm height. The gas pressures caused by encapsulation of different amounts of (226)Ra were determined. The maximum gas pressure was found to be 10 atm for 900 mCi of (226)Ra in one capsule at 20 degrees C. A lead shielding device was designed to limit radiation exposure. A 200 l drum was used as a conditioned waste package for the radium sources and represents a Type A package under the IAEA transport regulations. PMID:16386365

  2. Extended Tonks-Langmuir-type model with non-Boltzmann-distributed electrons and cold ion sources

    NASA Astrophysics Data System (ADS)

    Kamran, M.; Kuhn, S.; Tskhakaya, D. D.; Khan, M.; Khan

    2013-04-01

    A general formalism for calculating the potential distribution Φ(z) in the quasineutral region of a new class of plane Tonks-Langmuir (TL)-type bounded-plasma-system (BPS) models differing from the well-known 'classical' TL model (Tonks, L. and Langmuir, I. 1929 A general theory of the plasma of an arc. Phys. Rev. 34, 876) by allowing for arbitrary (but still cold) ion sources and arbitrary electron distributions is developed. With individual particles usually undergoing microscopic collision/sink/source (CSS) events, extensive use is made here of the basic kinetic-theory concept of 'CSS-free trajectories' (i.e., the characteristics of the kinetic equation). Two types of electron populations, occupying the 'type-t' and 'type-p' domains of electron phase space, are distinguished. By definition, the type-t and type-p domains are made up of phase points lying on type-t ('trapped') CSS-free trajectories (not intersecting the walls and closing on themselves) and type-p ('passing') ones (starting at one of the walls and ending at the other). This work being the first step, it is assumed that ε ≡ λ_D/l → 0+ (where λ_D and l are a typical Debye length and a typical ionization length, respectively) so that the system exhibits a finite quasineutral 'plasma' region and two infinitesimally thin 'sheath' regions associated with the 'sheath-edge singularities' |dΦ/dz|_(z → ±z_s) → ∞. The potential in the plasma region is required to satisfy a plasma equation (quasineutrality condition) of the form n_i{Φ} = n_e(Φ), where the electron density n_e(Φ) is given and the ion density n_i{Φ} is expressed in terms of trajectory integrals of the ion kinetic equation, with the ions produced by electron-impact ionization of cold neutrals. While previous TL-type models were characterized by electrons diffusing under the influence of frequent collisions with the neutral background particles and approximated by Maxwellian (Riemann, K.-U. 2006 Plasma-sheath transition in the

  3. Low-level waste disposal performance assessments - Total source-term analysis

    SciTech Connect

    Wilhite, E.L.

    1995-12-31

    Disposal of low-level radioactive waste at Department of Energy (DOE) facilities is regulated by DOE. DOE Order 5820.2A establishes policies, guidelines, and minimum requirements for managing radioactive waste. Requirements for disposal of low-level waste emplaced after September 1988 include providing reasonable assurance of meeting stated performance objectives by completing a radiological performance assessment. Recently, the Defense Nuclear Facilities Safety Board issued Recommendation 94-2, "Conformance with Safety Standards at Department of Energy Low-Level Nuclear Waste and Disposal Sites." One of the elements of the recommendation is that low-level waste performance assessments do not include the entire source term because low-level waste emplaced prior to September 1988, as well as other DOE sources of radioactivity in the ground, are excluded. DOE has developed and issued guidance for preliminary assessments of the impact of including the total source term in performance assessments. This paper will present issues resulting from the inclusion of all DOE sources of radioactivity in performance assessments of low-level waste disposal facilities.

  4. Detailed dose distribution prediction of Cf-252 brachytherapy source with boron loading dose enhancement.

    PubMed

    Ghassoun, J; Mostacci, D; Molinari, V; Jehouani, A

    2010-02-01

    The purpose of this work is to evaluate the dose rate distribution and to determine the boron effect on dose rate distribution for (252)Cf brachytherapy source. This study was carried out using a Monte Carlo simulation. To validate the Monte Carlo computer code, the dosimetric parameters were determined following the updated TG-43 formalism and compared with current literature data. The validated computer code was then applied to evaluate the neutron and photon dose distribution and to illustrate the boron loading effect.
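
    For orientation, the TG-43 formalism referred to above factors the photon dose rate around a sealed source as below (standard AAPM notation; the neutron dose component of a Cf-252 source is handled separately and is not covered by this expression):

      D_dot(r, theta) = S_K * Lambda * [G_L(r, theta) / G_L(r0, theta0)] * g_L(r) * F(r, theta),
      with r0 = 1 cm and theta0 = 90 degrees,

    where S_K is the air-kerma strength, Lambda the dose-rate constant, G_L the line-source geometry function, g_L(r) the radial dose function, and F(r, theta) the 2-D anisotropy function.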

  5. Impact of routine episodic emissions on the expected frequency distribution of emissions from oil and gas production sources.

    NASA Astrophysics Data System (ADS)

    Smith, N.; Blewitt, D.; Hebert, L. B.

    2015-12-01

    In coordination with oil and gas operators, we developed a high resolution (< 1 min) simulation of temporal variability in well-pad oil and gas emissions over a year. We include routine emissions from condensate tanks, dehydrators, pneumatic devices, fugitive leaks and liquids unloading. We explore the variability in natural gas emissions from these individual well-pad sources, and find that routine short-term episodic emissions such as tank flashing and liquids unloading result in the appearance of a skewed, or 'fat-tail' distribution of emissions, from an individual well-pad over time. Additionally, we explore the expected variability in emissions from multiple wells with different raw gas composition, gas/liquids production volumes and control equipment. Differences in well-level composition, production volume and control equipment translate into differences in well-level emissions leading to a fat-tail distribution of emissions in the absence of operational upsets. Our results have several implications for recent studies focusing on emissions from oil and gas sources. Time scale of emission estimates are important and have important policy implications. Fat tail distributions may not be entirely driven by avoidable mechanical failures, and are expected to occur under routine operational conditions from short-duration emissions (e.g., tank flashing, liquid unloading). An understanding of the expected distribution of emissions for a particular population of wells is necessary to evaluate whether the observed distribution is more skewed than expected. Temporal variability in well-pad emissions make comparisons to annual average emissions inventories difficult and may complicate the interpretation of long-term ambient fenceline monitoring data. Sophisticated change detection algorithms will be necessary to identify when true operational upsets occur versus routine short-term emissions.
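
    The emergence of a skewed ("fat-tail") distribution from routine episodic events can be reproduced with a toy simulation: a steady fugitive baseline plus short, infrequent, high-rate events standing in for tank flashing and liquids unloading. The rates, frequencies, and durations below are invented for illustration and do not correspond to the operator data used in this work.

      # Hedged sketch: routine episodic events producing a skewed emission
      # distribution for a single well pad (all numbers are illustrative).
      import numpy as np

      rng = np.random.default_rng(1)
      minutes = 365 * 24 * 60
      emissions = np.full(minutes, 0.05)               # baseline fugitives, kg/hr

      # Tank flashing: a few short events per day with exponentially sized rates.
      for s in rng.integers(0, minutes - 30, size=3 * 365):
          emissions[s:s + 15] += rng.exponential(2.0)

      # Liquids unloading: rare, longer, much larger events.
      for s in rng.integers(0, minutes - 120, size=25):
          emissions[s:s + 60] += rng.exponential(20.0)

      hourly = emissions.reshape(-1, 60).mean(axis=1)
      print("mean kg/hr   :", round(hourly.mean(), 3))
      print("median kg/hr :", round(np.median(hourly), 3))
      print("99.9th pct   :", round(np.percentile(hourly, 99.9), 2))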

  6. Automated source term and wind parameter estimation for atmospheric transport and dispersion applications

    NASA Astrophysics Data System (ADS)

    Bieringer, Paul E.; Rodriguez, Luna M.; Vandenberghe, Francois; Hurst, Jonathan G.; Bieberbach, George; Sykes, Ian; Hannan, John R.; Zaragoza, Jake; Fry, Richard N.

    2015-12-01

    Accurate simulations of the atmospheric transport and dispersion (AT&D) of hazardous airborne materials rely heavily on the source term parameters necessary to characterize the initial release and meteorological conditions that drive the downwind dispersion. In many cases the source parameters are not known and are consequently based on rudimentary assumptions. This is particularly true of accidental releases and the intentional releases associated with terrorist incidents. When available, meteorological observations are often not representative of the conditions at the location of the release and the use of these non-representative meteorological conditions can result in significant errors in the hazard assessments downwind of the sensors, even when the other source parameters are accurately characterized. Here, we describe a computationally efficient methodology to characterize both the release source parameters and the low-level winds (e.g., winds near the surface) required to produce a refined downwind hazard. This methodology, known as the Variational Iterative Refinement Source Term Estimation (STE) Algorithm (VIRSA), consists of a combination of modeling systems. These systems include a back-trajectory based source inversion method, a forward Gaussian puff dispersion model, a variational refinement algorithm that uses both a simple forward AT&D model that is a surrogate for the more complex Gaussian puff model and a formal adjoint of this surrogate model. The back-trajectory based method is used to calculate a "first guess" source estimate based on the available observations of the airborne contaminant plume and atmospheric conditions. The variational refinement algorithm is then used to iteratively refine the first guess STE parameters and meteorological variables. The algorithm has been evaluated across a wide range of scenarios of varying complexity. It has been shown to improve the source parameters for location by several hundred percent (normalized by the

  7. Vertical distributions of lightning sources and flashes over Kennedy Space Center, Florida

    NASA Astrophysics Data System (ADS)

    Hansen, Amanda E.; Fuelberg, Henry E.; Pickering, Kenneth E.

    2010-07-01

    Warm season vertical distributions of lightning sources and flash segments are presented using data from the lightning detection and ranging network at Kennedy Space Center, Fla. We emphasize the percentage of sources/flash segments at each level compared to the vertical total and present the distributions as a function of storm top above ground level (AGL). The vertical profiles of sources and flash segments are compared with each other and with those from previous studies. Results indicate that storms with tops higher than ˜10 km AGL often have a bimodal or multiple peak distribution of percentage sources and flash segments. However, distributions for storms with tops lower than ˜10 km AGL exhibit only a single dominant peak. Temporal variations in the vertical distributions of flash percentages are examined for four clusters of storms occurring on different days. Results reveal considerable storm-to-storm and intrastorm variability. However, two similarities are observed between the four cases: (1) maximum flash density (flash segments km-3) occurs as the maximum storm top is reached and (2) as the storms increase in intensity, both maximum flash density and flash segment percentage increase in altitude, and then both decrease in altitude as the storms decay. The distributions are useful for understanding lightning characteristics as a function of storm evolution, specifying the vertical distribution of lightning-produced nitrogen oxides in chemical transport models and verifying model-simulated lightning.

  8. Size distribution, mixing state and source apportionment of black carbon aerosol in London during wintertime

    NASA Astrophysics Data System (ADS)

    Liu, D.; Allan, J. D.; Young, D. E.; Coe, H.; Beddows, D.; Fleming, Z. L.; Flynn, M. J.; Gallagher, M. W.; Harrison, R. M.; Lee, J.; Prevot, A. S. H.; Taylor, J. W.; Yin, J.; Williams, P. I.; Zotter, P.

    2014-09-01

    Black carbon aerosols (BC) at a London urban site were characterised in both winter- and summertime 2012 during the Clean Air for London (ClearfLo) project. Positive matrix factorisation (PMF) factors of organic aerosol mass spectra measured by a high-resolution aerosol mass spectrometer (HR-AMS) showed traffic-dominant sources in summer but in winter the influence of additional non-traffic sources became more important, mainly from solid fuel sources (SF). Measurements using a single particle soot photometer (SP2, DMT), showed the traffic-dominant BC exhibited an almost uniform BC core size (Dc) distribution with very thin coating thickness throughout the detectable range of Dc. However, the size distribution of Dc (project average mass median Dc = 149 ± 22 nm in winter, and 120 ± 6 nm in summer) and BC coating thickness varied significantly in winter. A novel methodology was developed to attribute the BC number concentrations and mass abundances from traffic (BCtr) and from SF (BCsf), by using a 2-D histogram of the particle optical properties as a function of BC core size, as measured by the SP2. The BCtr and BCsf showed distinctly different Dc distributions and coating thicknesses, with BCsf displaying larger Dc and larger coating thickness compared to BCtr. BC particles from different sources were also apportioned by applying a multiple linear regression between the total BC mass and each AMS-PMF factor (BC-AMS-PMF method), and also attributed by applying the absorption spectral dependence of carbonaceous aerosols to 7-wavelength Aethalometer measurements (Aethalometer method). Air masses that originated from westerly (W), southeasterly (SE), and easterly (E) sectors showed BCsf fractions that ranged from low to high, and whose mass median Dc values were 137 ± 10 nm, 143 ± 11 nm and 169 ± 29 nm, respectively. The corresponding bulk relative coating thickness of BC (coated particle size/BC core - Dp/Dc) for these same sectors was 1.28 ± 0.07, 1.45 ± 0
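
    The BC-AMS-PMF attribution mentioned above can be illustrated with a minimal regression sketch: total BC mass is regressed on the time series of the organic-aerosol PMF factors, and the fitted coefficients split BC between traffic and solid fuel. The synthetic two-factor time series below are assumptions for illustration only, not ClearfLo data.

      # Hedged sketch of the BC-AMS-PMF idea: regress total BC mass on PMF factor
      # time series and apportion BC by the fitted contributions (synthetic data).
      import numpy as np

      rng = np.random.default_rng(2)
      n = 500
      traffic_oa = rng.gamma(2.0, 1.0, n)       # stand-ins for AMS-PMF factors
      solidfuel_oa = rng.gamma(2.0, 1.5, n)

      # Synthetic "measured" BC built from the factors plus noise (demo only).
      bc_total = 0.8 * traffic_oa + 0.3 * solidfuel_oa + rng.normal(0.0, 0.1, n)

      X = np.column_stack([traffic_oa, solidfuel_oa])
      coef, *_ = np.linalg.lstsq(X, bc_total, rcond=None)

      share_sf = (coef[1] * solidfuel_oa).sum() / (X @ coef).sum()
      print(f"fitted coefficients: {coef.round(2)}, solid-fuel BC share ~ {share_sf:.0%}")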

  9. Size distribution, mixing state and source apportionments of black carbon aerosols in London during winter time

    NASA Astrophysics Data System (ADS)

    Liu, D.; Allan, J. D.; Young, D. E.; Coe, H.; Beddows, D.; Fleming, Z. L.; Flynn, M. J.; Gallagher, M. W.; Harrison, R. M.; Lee, J.; Prevot, A. S. H.; Taylor, J. W.; Yin, J.; Williams, P. I.; Zotter, P.

    2014-06-01

    Black carbon aerosols (BC) at a London urban site were characterized in both winter and summer time 2012 during the Clean Air for London (ClearfLo) project. Positive matrix factorization (PMF) factors of organic aerosol mass spectra measured by a high resolution aerosol mass spectrometer (HR-AMS) showed traffic-dominant sources in summer but in winter the influence of additional non-traffic sources became more important, mainly from solid fuel sources (SF). Measurements using a single particle soot photometer (SP2, DMT), showed the traffic-dominant BC exhibited an almost uniform BC core size (Dc) distribution with very thin coating thickness throughout the detectable range of Dc. However the size distribution of Dc (project average mass median Dc = 149 ± 22 nm in winter, and 120 ± 6 nm in summer) and BC coating thickness varied significantly in winter. A novel methodology was developed to attribute the BC number concentrations and mass abundances from traffic (BCtr) and from SF (BCsf), by using a 2-D histogram of the particle optical properties as a function of BC core size, as measured by the SP2. The BCtr and BCsf showed distinctly different Dc distributions and coating thicknesses, with BCsf displaying larger Dc and larger coating thickness compared to BCtr. BC particles from different sources were also apportioned by applying a multiple linear regression between the total BC mass and each AMS-PMF factor (BC-AMS-PMF method), and also attributed by applying the absorption spectral dependence of carbonaceous aerosols to 7-wavelength Aethalometer measurements (Aethalometer method). Air masses that originated from westerly (W), southeasterly (SE), or easterly (E) sectors showed BCsf fractions that ranged from low to high, and whose mass median Dc values were 137 ± 10 nm, 143 ± 11 nm, and 169 ± 29 nm respectively. The corresponding bulk relative coating thickness of BC (coated particle size / BC core - Dp / Dc) for these same sectors was 1.28 ± 0.07, 1.45 ± 0

  10. Trace elements in particulate matter from metropolitan regions of Northern China: Sources, concentrations and size distributions.

    PubMed

    Pan, Yuepeng; Tian, Shili; Li, Xingru; Sun, Ying; Li, Yi; Wentworth, Gregory R; Wang, Yuesi

    2015-12-15

    Public concerns over airborne trace elements (TEs) in metropolitan areas are increasing, but long-term and multi-site observations of size-resolved aerosol TEs in China are still lacking. Here, we identify highly elevated levels of atmospheric TEs in megacities and industrial sites in a Beijing-Tianjin-Hebei urban agglomeration relative to background areas, with the annual mean values of As, Pb, Ni, Cd and Mn exceeding the acceptable limits of the World Health Organization. Despite the spatial variability in concentrations, the size distribution pattern of each trace element was quite similar across the region. Crustal elements of Al and Fe were mainly found in coarse particles (2.1-9 μm), whereas the main fraction of toxic metals, such as Cu, Zn, As, Se, Cd and Pb, was found in submicron particles (<1.1 μm). These toxic metals were enriched by over 100-fold relative to the Earth's crust. The size distributions of Na, Mg, K, Ca, V, Cr, Mn, Ni, Mo and Ba were bimodal, with two peaks at 0.43-0.65 μm and 4.7-5.8 μm. The combination of the size distribution information, principal component analysis and air mass back trajectory model offered a robust technique for distinguishing the main sources for airborne TEs, e.g., soil dust, fossil fuel combustion and industrial emissions, at different sites. In addition, higher elemental concentrations coincided with westerly flow, indicating that polluted soil and fugitive dust were major sources of TEs on the regional scale. However, the contribution of coal burning, iron industry/oil combustion and non-ferrous smelters to atmospheric metal pollution in Northern China should be given more attention. Considering that the concentrations of heavy metals associated with fine particles in the target region were significantly higher than those in other Asian sites, the implementations of strict environmental standards in China are required to reduce the amounts of these hazardous pollutants released into the atmosphere.
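
    A compact illustration of the principal component step mentioned above: standardize the sample-by-element concentration matrix and inspect the leading eigenvector of its correlation matrix, which groups elements that co-vary (crustal versus combustion-related, for instance). The element list and synthetic data below are placeholders, not the measured concentrations.

      # Hedged sketch: PCA on a standardized element concentration matrix to group
      # co-varying trace elements (synthetic data standing in for measurements).
      import numpy as np

      rng = np.random.default_rng(3)
      elements = ["Al", "Fe", "Mn", "Zn", "Pb", "As", "Cd"]

      crustal = rng.lognormal(0.0, 0.5, 300)        # hidden "soil dust" source
      combustion = rng.lognormal(0.0, 0.7, 300)     # hidden "combustion" source
      mixing = np.array([[1.0, 0.1], [0.9, 0.2], [0.6, 0.4],    # Al, Fe, Mn
                         [0.2, 0.9], [0.1, 1.0], [0.1, 0.8], [0.05, 0.9]])
      X = np.column_stack([crustal, combustion]) @ mixing.T
      X += rng.normal(0.0, 0.05, X.shape)

      Z = (X - X.mean(axis=0)) / X.std(axis=0)      # standardize columns
      eigval, eigvec = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
      pc1 = eigvec[:, np.argmax(eigval)]
      for el, w in zip(elements, pc1):
          print(f"{el}: PC1 loading {w:+.2f}")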

  11. Trace elements in particulate matter from metropolitan regions of Northern China: Sources, concentrations and size distributions.

    PubMed

    Pan, Yuepeng; Tian, Shili; Li, Xingru; Sun, Ying; Li, Yi; Wentworth, Gregory R; Wang, Yuesi

    2015-12-15

    Public concerns over airborne trace elements (TEs) in metropolitan areas are increasing, but long-term and multi-site observations of size-resolved aerosol TEs in China are still lacking. Here, we identify highly elevated levels of atmospheric TEs in megacities and industrial sites in a Beijing-Tianjin-Hebei urban agglomeration relative to background areas, with the annual mean values of As, Pb, Ni, Cd and Mn exceeding the acceptable limits of the World Health Organization. Despite the spatial variability in concentrations, the size distribution pattern of each trace element was quite similar across the region. Crustal elements of Al and Fe were mainly found in coarse particles (2.1-9 μm), whereas the main fraction of toxic metals, such as Cu, Zn, As, Se, Cd and Pb, was found in submicron particles (<1.1 μm). These toxic metals were enriched by over 100-fold relative to the Earth's crust. The size distributions of Na, Mg, K, Ca, V, Cr, Mn, Ni, Mo and Ba were bimodal, with two peaks at 0.43-0.65 μm and 4.7-5.8 μm. The combination of the size distribution information, principal component analysis and air mass back trajectory model offered a robust technique for distinguishing the main sources for airborne TEs, e.g., soil dust, fossil fuel combustion and industrial emissions, at different sites. In addition, higher elemental concentrations coincided with westerly flow, indicating that polluted soil and fugitive dust were major sources of TEs on the regional scale. However, the contribution of coal burning, iron industry/oil combustion and non-ferrous smelters to atmospheric metal pollution in Northern China should be given more attention. Considering that the concentrations of heavy metals associated with fine particles in the target region were significantly higher than those in other Asian sites, the implementations of strict environmental standards in China are required to reduce the amounts of these hazardous pollutants released into the atmosphere. PMID

  12. Voltage management of distribution networks with high penetration of distributed photovoltaic generation sources

    NASA Astrophysics Data System (ADS)

    Alyami, Saeed

    Installation of photovoltaic (PV) units could lead to great challenges to the existing electrical systems. Issues such as voltage rise, protection coordination, islanding detection, harmonics, increased or changed short-circuit levels, etc., need to be carefully addressed before we can see a wide adoption of this environmentally friendly technology. Voltage rise or overvoltage issues are of particular importance to be addressed for deploying more PV systems to distribution networks. This dissertation proposes a comprehensive solution to deal with the voltage violations in distribution networks, from controlling PV power outputs and electricity consumption of smart appliances in real time to optimal placement of PVs at the planning stage. The dissertation is composed of three parts: the literature review, the work that has already been done and the future research tasks. An overview on renewable energy generation and its challenges are given in Chapter 1. The overall literature survey, motivation and the scope of study are also outlined in the chapter. Detailed literature reviews are given in the rest of chapters. The overvoltage and undervoltage phenomena in typical distribution networks with integration of PVs are further explained in Chapter 2. Possible approaches for voltage quality control are also discussed in this chapter, followed by the discussion on the importance of the load management for PHEVs and appliances and its benefits to electric utilities and end users. A new real power capping method is presented in Chapter 3 to prevent overvoltage by adaptively setting the power caps for PV inverters in real time. The proposed method can maintain voltage profiles below a pre-set upper limit while maximizing the PV generation and fairly distributing the real power curtailments among all the PV systems in the network. As a result, each of the PV systems in the network has equal opportunity to generate electricity and shares the responsibility of voltage
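
    A toy sketch of the real-power capping idea described above (not the dissertation's algorithm): whenever the highest feeder voltage exceeds the limit, every inverter lowers its cap by the same fraction of its rating so the curtailment is shared, and the caps are relaxed again when there is voltage headroom. The linear voltage sensitivity is a crude stand-in for a real network model, and all numbers are assumed.

      # Hedged sketch: equal-share adaptive real-power capping for PV inverters.
      import numpy as np

      rating_kw = np.array([5.0, 5.0, 10.0])      # three PV systems (assumed)
      cap_kw = rating_kw.copy()                   # start uncapped
      available_kw = np.array([4.8, 5.0, 9.5])    # irradiance-limited output now
      v_limit, v_base = 1.05, 1.00                # per-unit limit and base voltage
      sens = np.array([0.004, 0.005, 0.006])      # assumed dV per injected kW

      for _ in range(50):
          injected = np.minimum(available_kw, cap_kw)
          v_max = v_base + sens @ injected        # crude highest feeder voltage
          if v_max > v_limit:
              cap_kw -= 0.05 * rating_kw          # tighten all caps together
          else:
              cap_kw += 0.01 * rating_kw          # relax when there is headroom
          cap_kw = np.clip(cap_kw, 0.0, rating_kw)

      print("final caps (kW):", cap_kw.round(2), " max voltage:", round(v_max, 3))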

  13. Review of uncertainty sources affecting the long-term predictions of space debris evolutionary models

    NASA Astrophysics Data System (ADS)

    Dolado-Perez, J. C.; Pardini, Carmen; Anselmo, Luciano

    2015-08-01

    Since the launch of Sputnik-I in 1957, the amount of space debris in Earth's orbit has increased continuously. Historically, besides abandoned intact objects (spacecraft and orbital stages), the primary sources of space debris in Earth's orbit were (i) accidental and intentional break-ups which produced long-lasting debris and (ii) debris released intentionally during the operation of launch vehicle orbital stages and spacecraft. In the future, fragments generated by collisions are expected to become a significant source as well. In this context, and from a purely mathematical point of view, the orbital debris population in Low Earth Orbit (LEO) should be intrinsically unstable, due to the physics of mutual collisions and the relative ineffectiveness of natural sink mechanisms above ~700 km. Therefore, the real question should not be "if", but "when" the exponential growth of the space debris population is supposed to start. From a practical point of view, and in order to answer the previous question, since the end of the 1980's several sophisticated long-term debris evolutionary models have been developed. Unfortunately, the predictions performed with such models, in particular beyond a few decades, are affected by considerable uncertainty. Such uncertainty comes from a relatively large number of variables that, being either under the partial control of modellers or completely outside their control, introduce a variability into the long-term simulation of the space debris population which cannot be captured with standard Monte Carlo statistics. The objective of this paper is to present and discuss many of the uncertainty sources affecting the long-term predictions done with evolutionary models, in order to serve as a roadmap for the uncertainty and the statistical robustness analysis of the long-term evolution of the space debris population.

  14. Development of a tool dedicated to the evaluation of hydrogen term source for technological Wastes: assumptions, physical models, and validation

    SciTech Connect

    Lamouroux, C.

    2013-07-01

    In radioactive waste packages, hydrogen is generated, on the one hand, from the radiolysis of the wastes (mainly organic materials) and, on the other hand, from the radiolysis of the water content of the cement matrix. In order to assess hydrogen generation, two tools based on operational models have been developed. One is dedicated to the determination of the hydrogen source term arising from the radiolysis of the wastes: the STORAGE tool (Simulation Tool Of Emission Radiolysis Gas); the other deals with the hydrogen source term produced by radiolysis of the cement matrices (the Damar tool). The approach used by the STORAGE tool for assessing the production rate of radiolysis gases is divided into five steps: 1) Specification of the data packages, in particular, inventories and radiological materials defined for a package medium; 2) Determination of radiochemical yields for the different constituents and the associated laws of behavior; this determination of radiochemical yields is made from the PRELOG database, in which radiochemical yields under different irradiation conditions have been compiled; 3) Definition of hypotheses concerning the composition and the distribution of contamination inside the package to allow assessment of the power absorbed by the constituents; 4) Summation of all the contributions; and finally, 5) validation calculations by comparison with a reduced sampling of packages. Comparisons with measured values confirm the conservative character of the methodology and give confidence in the safety margins for the safety analysis report.
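
    The core arithmetic behind such radiolysis gas assessments can be shown in a few lines: a radiolytic yield G (molecules per 100 eV of absorbed energy) multiplied by the power absorbed by the material gives a hydrogen production rate. The G-value and absorbed power below are generic placeholders, not PRELOG data or STORAGE tool output.

      # Hedged sketch: hydrogen generation rate from a radiolytic yield (G-value)
      # and the power absorbed by the waste material (illustrative numbers only).
      HUNDRED_EV_J = 100.0 * 1.602e-19    # joules in one "100 eV" unit
      AVOGADRO = 6.022e23
      MOLAR_VOLUME_L = 22.4               # liters per mole of ideal gas at STP

      g_h2 = 0.45                # assumed G(H2), molecules per 100 eV absorbed
      absorbed_power_w = 0.02    # assumed power absorbed by the waste, watts

      molecules_per_s = g_h2 * absorbed_power_w / HUNDRED_EV_J
      liters_per_year = molecules_per_s * 3600 * 24 * 365 / AVOGADRO * MOLAR_VOLUME_L
      print(f"H2 generation ~ {molecules_per_s:.2e} molecules/s "
            f"~ {liters_per_year:.1f} L/year at STP")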

  15. Evaluation of severe accident risks: Methodology for the containment, source term, consequence, and risk integration analyses; Volume 1, Revision 1

    SciTech Connect

    Gorham, E.D.; Breeding, R.J.; Brown, T.D.; Harper, F.T.; Helton, J.C.; Murfin, W.B.; Hora, S.C.

    1993-12-01

    NUREG-1150 examines the risk to the public from five nuclear power plants. The NUREG-1150 plant studies are Level III probabilistic risk assessments (PRAs) and, as such, they consist of four analysis components: accident frequency analysis, accident progression analysis, source term analysis, and consequence analysis. This volume summarizes the methods utilized in performing the last three components and the assembly of these analyses into an overall risk assessment. The NUREG-1150 analysis approach is based on the following ideas: (1) general and relatively fast-running models for the individual analysis components, (2) well-defined interfaces between the individual analysis components, (3) use of Monte Carlo techniques together with an efficient sampling procedure to propagate uncertainties, (4) use of expert panels to develop distributions for important phenomenological issues, and (5) automation of the overall analysis. Many features of the new analysis procedures were adopted to facilitate a comprehensive treatment of uncertainty in the complete risk analysis. Uncertainties in the accident frequency, accident progression and source term analyses were included in the overall uncertainty assessment. The uncertainties in the consequence analysis were not included in this assessment. A large effort was devoted to the development of procedures for obtaining expert opinion and the execution of these procedures to quantify parameters and phenomena for which there is large uncertainty and divergent opinions in the reactor safety community.

  16. Intensity distribution of the X-ray source for the AXAF VETA-I mirror test

    NASA Technical Reports Server (NTRS)

    Zhao, Ping; Kellogg, Edwin M.; Schwartz, Daniel A.; Shao, Yibo; Fulton, M. A.

    1993-01-01

    Intensity distribution measurements of the X-ray source for the AXAF VETA-I mirror test are reported. During the VETA-I test, microscope pictures were taken for each used anode immediately after it was brought out of the source chamber. The source sizes and the intensity distribution structures are shown. They are compared and shown to agree with the results from pinhole camera measurements. It is demonstrated that under operating conditions characteristic of the VETA-I test, all the source sizes have an FWHM of less than 0.45 mm. For a source of this size at 528 m away, the angular size to VETA is less than 0.17 arcsec, which is small compared to the on-ground VETA angular resolution. These results were crucial for VETA data analysis and for obtaining the on-ground and predicted in-orbit VETA point response function.
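
    The quoted angular size follows from small-angle geometry: an extent s viewed from distance d subtends s/d radians, or about 206265*s/d arcseconds. As a check,

      theta ~ s / d = 0.45e-3 m / 528 m ~ 8.5e-7 rad ~ 0.17-0.18 arcsec,

    so a source at or just below the 0.45 mm FWHM bound subtends roughly the stated 0.17 arcsec at the 528 m test distance, small compared with the VETA-I angular resolution.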

  17. The impact of light source spectral power distribution on sky glow

    NASA Astrophysics Data System (ADS)

    Luginbuhl, Christian B.; Boley, Paul A.; Davis, Donald R.

    2014-05-01

    The effect of light source spectral power distribution on the visual brightness of anthropogenic sky glow is described. Under visual adaptation levels relevant to observing the night sky, namely with dark-adapted (scotopic) vision, blue-rich (“white”) sources produce a dramatically greater sky brightness than yellow-rich sources. High correlated color temperature LEDs and metal halide sources produce a visual brightness up to 8× brighter than low-pressure sodium and 3× brighter than high-pressure sodium when matched lumen-for-lumen and observed nearby. Though the sky brightness arising from blue-rich sources decreases more strongly with distance, the visual sky glow resulting from such sources remains significantly brighter than from yellow sources out to the limits of this study at 300 km.

  18. Distribution of terminal electron-accepting processes in an aquifer having multiple contaminant sources

    USGS Publications Warehouse

    McMahon, P.B.; Bruce, B.W.

    1997-01-01

    Concentrations of electron acceptors, electron donors, and H2 in groundwater were measured to determine the distribution of terminal electron-accepting processes (TEAPs) in an alluvial aquifer having multiple contaminant sources. Upgradient contaminant sources included two separate hydrocarbon point sources, one of which contained the fuel oxygenate methyl tert-butyl ether (MTBE). Infiltrating river water was a source of dissolved NO3, SO4, and organic carbon (DOC) to the downgradient part of the aquifer. Groundwater downgradient from the MTBE source had larger concentrations of electron acceptors (dissolved O2 and SO4) and smaller concentrations of TEAP end products (dissolved inorganic C, Fe2+ and CH4) than groundwater downgradient from the other hydrocarbon source, suggesting that MTBE was not as suitable for supporting TEAPs as the other hydrocarbons. Measurements of dissolved H2 indicated that SO4 reduction predominated in the aquifer during a period of high water levels in the aquifer and river. The predominant TEAP shifted to Fe3+ reduction in upgradient areas after water levels receded but remained SO4 reducing downgradient near the river. This distribution of TEAPs is the opposite of what is commonly observed in aquifers having a single contaminant point source and probably reflects the input of DOC and SO4 to the aquifer from the river. Results of this study indicate that the distribution of TEAPs in aquifers having multiple contaminant sources depends on the composition and location of the contaminants and on the availability of electron acceptors.

  19. Spurious Behavior of Shock-Capturing Methods: Problems Containing Stiff Source Terms and Discontinuities

    NASA Technical Reports Server (NTRS)

    Yee, Helen M. C.; Kotov, D. V.; Wang, Wei; Shu, Chi-Wang

    2013-01-01

    The goal of this paper is to relate numerical dissipations that are inherited in high order shock-capturing schemes with the onset of wrong propagation speed of discontinuities. For pointwise evaluation of the source term, previous studies indicated that the phenomenon of wrong propagation speed of discontinuities is connected with the smearing of the discontinuity caused by the discretization of the advection term. The smearing introduces a nonequilibrium state into the calculation. Thus as soon as a nonequilibrium value is introduced in this manner, the source term turns on and immediately restores equilibrium, while at the same time shifting the discontinuity to a cell boundary. The present study is to show that the degree of wrong propagation speed of discontinuities is highly dependent on the accuracy of the numerical method. The manner in which the smearing of discontinuities is contained by the numerical method and the overall amount of numerical dissipation being employed play major roles. Moreover, employing finite time steps and grid spacings that are below the standard Courant-Friedrichs-Lewy (CFL) limit on shock-capturing methods for compressible Euler and Navier-Stokes equations containing stiff reacting source terms and discontinuities reveals surprising counter-intuitive results. Unlike non-reacting flows, for stiff reactions with discontinuities, employing a time step and grid spacing that are below the CFL limit (based on the homogeneous part or non-reacting part of the governing equations) does not guarantee a correct solution of the chosen governing equations. Instead, depending on the numerical method, time step and grid spacing, the numerical simulation may lead to (a) the correct solution (within the truncation error of the scheme), (b) a divergent solution, (c) a wrong propagation speed of discontinuities solution or (d) other spurious solutions that are solutions of the discretized counterparts but are not solutions of the governing equations

  20. Intensity distribution of the x ray source for the AXAF VETA-I mirror test

    NASA Technical Reports Server (NTRS)

    Zhao, Ping; Kellogg, Edwin M.; Schwartz, Daniel A.; Shao, Yibo; Fulton, M. Ann

    1992-01-01

    The X-ray generator for the AXAF VETA-I mirror test is an electron impact X-ray source with various anode materials. The source sizes of different anodes and their intensity distributions were measured with a pinhole camera before the VETA-I test. The pinhole camera consists of a 30 micrometers diameter pinhole for imaging the source and a Microchannel Plate Imaging Detector with 25 micrometers FWHM spatial resolution for detecting and recording the image. The camera has a magnification factor of 8.79, which enables measuring the detailed spatial structure of the source. The spot size, the intensity distribution, and the flux level of each source were measured with different operating parameters. During the VETA-I test, microscope pictures were taken for each used anode immediately after it was brought out of the source chamber. The source sizes and the intensity distribution structures are clearly shown in the pictures. They are compared and agree with the results from the pinhole camera measurements. This paper presents the results of the above measurements. The results show that under operating conditions characteristic of the VETA-I test, all the source sizes have a FWHM of less than 0.45 mm. For a source of this size at 528 meters away, the angular size to VETA is less than 0.17 arcsec which is small compared to the on ground VETA angular resolution (0.5 arcsec, required and 0.22 arcsec, measured). Even so, the results show the intensity distributions of the sources have complicated structures. These results were crucial for the VETA data analysis and for obtaining the on ground and predicted in orbit VETA Point Response Function.

  1. Source and long-term behavior of transuranic aerosols in the WIPP environment.

    PubMed

    Thakur, P; Lemons, B G

    2016-10-01

    The source and long-term behavior of transuranic aerosols ((239+240)Pu, (238)Pu, and (241)Am) in ambient air samples collected at and near the Waste Isolation Pilot Plant (WIPP) deep geologic repository site were investigated using historical data from an independent monitoring program conducted by the Carlsbad Environmental Monitoring and Research Center and an oversight monitoring program conducted by the management and operating contractor for WIPP at and near the facility. An analysis of historical data indicates frequent detections of (239+240)Pu and (241)Am, whereas (238)Pu is detected infrequently. Peaks in (239+240)Pu and (241)Am concentrations in ambient air generally occur in the March to June timeframe, which is when strong and gusty winds in the area frequently give rise to blowing dust. Long-term measurements of plutonium isotopes (1985-2015) in the WIPP environment suggest that the resuspension of previously contaminated soils is likely the primary source of plutonium in the ambient air samples from WIPP and its vicinity. There is no evidence that WIPP is a source of environmental contamination that can be considered significant by any health-based standard. PMID:27394421

  2. The Multimedia Environmental Pollutant Assessment System (MEPAS)®: Source-term release formulations

    SciTech Connect

    Streile, G.P.; Shields, K.D.; Stroh, J.L.; Bagaasen, L.M.; Whelan, G.; McDonald, J.P.; Droppo, J.G.; Buck, J.W.

    1996-11-01

    This report is one of a series of reports that document the mathematical models in the Multimedia Environmental Pollutant Assessment System (MEPAS). Developed by Pacific Northwest National Laboratory for the US Department of Energy, MEPAS is an integrated impact assessment software implementation of physics-based fate and transport models in air, soil, and water media. Outputs are estimates of exposures and health risk assessments for radioactive and hazardous pollutants. Each of the MEPAS formulation documents covers a major MEPAS component such as source-term, atmospheric, vadose zone/groundwater, surface water, and health exposure/health impact assessment. Other MEPAS documentation reports cover the sensitivity/uncertainty formulations and the database parameter constituent property estimation methods. The pollutant source-term release component is documented in this report. MEPAS simulates the release of contaminants from a source, transport through the air, groundwater, surface water, or overland pathways, and transfer through food chains and exposure pathways to the exposed individual or population. For human health impacts, risks are computed for carcinogens and hazard quotients for noncarcinogens. MEPAS is implemented on a desktop computer with a user-friendly interface that allows the user to define the problem, input the required data, and execute the appropriate models for both deterministic and probabilistic analyses.

  3. Source and long-term behavior of transuranic aerosols in the WIPP environment.

    PubMed

    Thakur, P; Lemons, B G

    2016-10-01

    The source and long-term behavior of transuranic aerosols ((239+240)Pu, (238)Pu, and (241)Am) in ambient air samples collected at and near the Waste Isolation Pilot Plant (WIPP) deep geologic repository site were investigated using historical data from an independent monitoring program conducted by the Carlsbad Environmental Monitoring and Research Center and an oversight monitoring program conducted by the management and operating contractor for WIPP at and near the facility. An analysis of historical data indicates frequent detections of (239+240)Pu and (241)Am, whereas (238)Pu is detected infrequently. Peaks in (239+240)Pu and (241)Am concentrations in ambient air generally occur in the March to June timeframe, which is when strong and gusty winds in the area frequently give rise to blowing dust. Long-term measurements of plutonium isotopes (1985-2015) in the WIPP environment suggest that the resuspension of previously contaminated soils is likely the primary source of plutonium in the ambient air samples from WIPP and its vicinity. There is no evidence that WIPP is a source of environmental contamination that can be considered significant by any health-based standard.

  4. Effect of tissue inhomogeneities on dose distributions from Cf-252 brachytherapy source.

    PubMed

    Ghassoun, J

    2013-01-01

    The Monte Carlo method was used to determine the effect of tissue inhomogeneities on the dose distribution from a Cf-252 brachytherapy source. Neutron and gamma-ray fluences, energy spectra and dose rate distributions were determined in both homogeneous and inhomogeneous phantoms. Simulations were performed using the MCNP5 code. The results were compared with experimentally measured values published in the literature. Results showed a significant change in neutron dose rate distributions in the presence of heterogeneities; however, the effect on gamma-ray dose distributions is minimal.

  5. Sources and distribution of late Pleistocene sand, northern Gulf of Mexico Shelf

    SciTech Connect

    Mazzullo, J.M.; Bates, C.; Reutter, D.; Withers, K.

    1985-02-01

    A completed 3-yr study of the sources and consequent distribution of late Pleistocene sand on the northern Gulf shelf clarifies the shelf's paleogeography and the identification of alluvial valleys. Techniques used to determine the sources of sand are: the Fourier technique (which differentiated sands from different source terranes on the basis of the shapes of quartz sand grains), mineralogic analysis (which identified the composition of the source terranes that contributed each quartz-shape type), and an evaluation of the source terranes drained by each of the southern US rivers (thereby linking each shape type to a particular river). These data and the mapped distribution of sand deposited on the shelf by each of these rivers during the late Pleistocene lowstand indicate that the distribution patterns have not been modified to any great extent by modern shelf currents, and thus record the late Pleistocene paleogeography of the shelf. These distributions show, among other things, the locations of the late Pleistocene alluvial valleys of each of the southern US rivers, and identify the sources of shelf-edge deltas off the coasts of Texas and Louisiana that were detected by shallow seismic analysis.

  6. Technical considerations related to interim source-term assumptions for emergency planning and equipment qualification. [PWR; BWR]

    SciTech Connect

    Niemczyk, S.J.; McDowell-Boyer, L.M.

    1982-09-01

    The source terms recommended in the current regulatory guidance for many considerations of light water reactor (LWR) accidents were developed a number of years ago when understandings of many of the phenomena pertinent to source term estimation were relatively primitive. The purpose of the work presented here was to develop more realistic source term assumptions which could be used for interim regulatory purposes for two specific considerations, namely, equipment qualification and emergency planning. The overall approach taken was to adopt assumptions and models previously proposed for various aspects of source term estimation and to modify those assumptions and models to reflect recently gained insights into, and data describing, the release and transport of radionuclides during and after LWR accidents. To obtain illustrative estimates of the magnitudes of the source terms, the results of previous calculations employing the adopted assumptions and models were utilized and were modified to account for the effects of the recent insights and data.

  7. Characterization and Source Term Assessments of Radioactive Particles from Marshall Islands Using Non-Destructive Analytical Techniques

    SciTech Connect

    Jernstrom, J; Eriksson, M; Simon, R; Tamborini, G; Bildstein, O; Carlos-Marquez, R; Kehl, S R; Betti, M; Hamilton, T

    2005-06-11

    A considerable fraction of radioactivity entering the environment from different nuclear events is associated with particles. The impact of these events can only be fully assessed where there is some knowledge about the mobility of particle-bound radionuclides entering the environment. The behavior of particulate radionuclides depends on several factors, including the physical, chemical and redox state of the environment, the characteristics of the particles (e.g., the chemical composition, crystallinity and particle size) and the oxidative state of radionuclides contained in the particles. Six plutonium-containing particles stemming from Runit Island soil (Marshall Islands) were characterized using non-destructive analytical and microanalytical methods. By determining the activity of {sup 239,240}Pu and {sup 241}Am isotopes from their gamma peaks, structural information related to the Pu matrix was obtained and the source term was revealed. Composition and elemental distribution in the particles were studied with synchrotron radiation based micro X-ray fluorescence (SR-{mu}-XRF) spectrometry. A scanning electron microscope equipped with an energy-dispersive X-ray detector (SEM-EDX) and a secondary ion mass spectrometer (SIMS) were used to examine particle surfaces. Based on the elemental composition, the particles were divided into two groups: particles with a plain Pu matrix, and particles in which the plutonium is included in an Si/O-rich matrix and is more heterogeneously distributed. All of the particles were identified as fragments of initial weapons material. Because the particles contain plutonium with a low {sup 240}Pu/{sup 239}Pu atomic ratio ({approx}2-6%), corresponding to weapons-grade plutonium, the source term was identified as one of the safety tests conducted in the history of Runit Island.

  8. The integration of renewable energy sources into electric power distribution systems. Volume 2, Utility case assessments

    SciTech Connect

    Zaininger, H.W.; Ellis, P.R.; Schaefer, J.C.

    1994-06-01

    Electric utility distribution system impacts associated with the integration of renewable energy sources such as photovoltaics (PV) and wind turbines (WT) are considered in this project. The impacts are expected to vary from site to site according to the following characteristics: (1) the local solar insolation and/or wind characteristics; (2) renewable energy source penetration level; (3) whether battery or other energy storage systems are applied; and (4) local utility distribution design standards and planning practices. Small, distributed renewable energy sources are connected to the utility distribution system like other, similar kW- and MW-scale equipment and loads. Residential applications are expected to be connected to single-phase 120/240-V secondaries. Larger kW-scale applications may be connected to three-phase secondaries, and larger hundred-kW and MW-scale applications, such as MW-scale windfarms or PV plants, may be connected to electric utility primary systems via customer-owned primary and secondary collection systems. Small, distributed renewable energy sources installed on utility distribution systems will also produce nonsite-specific utility generation system benefits such as energy and capacity displacement benefits, in addition to the local site-specific distribution system benefits. Although generation system benefits are not site-specific, they are utility-specific, and they vary significantly among utilities in different regions. In addition, transmission system benefits, environmental benefits and other benefits may apply. These benefits also vary significantly among utilities and regions. Seven utility case studies considering PV, WT, and battery storage were conducted to identify a range of potential renewable energy source distribution system applications.

  9. Operational source term estimation and ensemble prediction for the Grimsvoetn 2011 event

    NASA Astrophysics Data System (ADS)

    Maurer, Christian; Arnold, Delia; Klonner, Robert; Wotawa, Gerhard

    2014-05-01

    The ESA-funded international project VAST (Volcanic Ash Strategic Initiative Team) includes a focus on realistic source term estimation for volcanic eruptions as well as on an estimate of the forecast uncertainty in the resulting atmospheric dispersion calculations, which partly derives from the forecast uncertainty in the meteorological input data. SEVIRI earth observation data, from which the total atmospheric column ash content can be estimated, serve as a basis for the source term estimation. In an operational environment, the already available EUMETCAST VOLE product may be used. Further, an a priori source term is needed, which can be coarsely estimated from information on previous eruptions and/or constrained with observations of the eruption column. The link between observations and the a priori source is established by runs of the atmospheric transport model FLEXPART for individual emission periods and a predefined number of vertical levels. By minimizing the differences between observations and model results, the a posteriori source term can be derived for a given time interval as a function of height. Such a result is shown for a first test case, the eruption of the Grimsvoetn volcano in Iceland in May 2011. Once the dispersion calculations are optimized as far as possible with regard to the source term, the uncertainty stemming from the forecast uncertainty of the numerical weather prediction model remains, adding to the unavoidable model errors. Since it is impossible to perform FLEXPART runs for all 50 members of the Integrated Forecasting System (IFS) of ECMWF due to computational (time and storage) constraints, the number of members is restricted to five (at most seven) representative runs via cluster analysis. The approach follows Klonner (2012), who demonstrated that considering only the wind components on a pressure level (e.g., 400 hPa) makes it possible to find clusters and
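
    As a rough illustration of the cluster analysis step described above, the sketch below groups ensemble members by their wind components on a single pressure level and keeps the member nearest each cluster centre as a representative run. The arrays u and v (one flattened 400 hPa wind field per member) are hypothetical inputs, and scikit-learn's KMeans stands in for whatever clustering is used operationally; this is not the VAST implementation itself.

        import numpy as np
        from sklearn.cluster import KMeans

        def representative_members(u, v, n_clusters=5, seed=0):
            """Pick one representative ensemble member per cluster.

            u, v: arrays of shape (n_members, n_gridpoints) holding the 400 hPa
            wind components of each member (hypothetical inputs).
            Returns indices of the members closest to the cluster centres.
            """
            features = np.hstack([u, v])                # one feature vector per member
            km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(features)
            reps = []
            for c in range(n_clusters):
                members = np.where(km.labels_ == c)[0]  # members assigned to cluster c
                dists = np.linalg.norm(features[members] - km.cluster_centers_[c], axis=1)
                reps.append(int(members[np.argmin(dists)]))
            return sorted(reps)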

  10. Parameterized source term in the diffusion approximation for enhanced near-field modeling of collimated light

    NASA Astrophysics Data System (ADS)

    Jia, Mengyu; Wang, Shuang; Chen, Xueying; Gao, Feng; Zhao, Huijuan

    2016-03-01

    Most analytical methods for describing light propagation in turbid media exhibit low effectiveness in the near field of a collimated source. Motivated by the Charge Simulation Method in electromagnetic theory as well as established discrete-source-based modeling, we have reported an improved explicit model, referred to as the "Virtual Source" (VS) diffuse approximation (DA), which inherits the mathematical simplicity of the DA while considerably extending its validity in modeling near-field photon migration in low-albedo media. In this model, the collimated light in the standard DA is approximated as multiple isotropic point sources (VS) distributed along the incident direction. For performance enhancement, a fitting procedure between the calculated and realistic reflectances is adopted in the near field to optimize the VS parameters (intensities and locations). To be practically applicable, an explicit 2VS-DA model is established based on closed-form derivations of the VS parameters for the typical ranges of the optical parameters. The proposed VS-DA model is validated by comparison with Monte Carlo simulations, and further introduced into the image reconstruction of the Laminar Optical Tomography system.

  11. A Comprehensive Probabilistic Tsunami Hazard Assessment: Multiple Sources and Short-Term Interactions

    NASA Astrophysics Data System (ADS)

    Anita, G.; Selva, J.; Laura, S.

    2011-12-01

    We develop a comprehensive and total probabilistic tsunami hazard assessment (TotPTHA), in which many different possible source types contribute to the definition of the total tsunami hazard at given target sites. In a multi-hazard and multi-risk perspective, such an innovative approach makes it possible, in principle, to consider all possible tsunamigenic sources, from seismic events to slides, asteroids, volcanic eruptions, etc. In this respect, we also formally introduce and discuss the treatment of interaction/cascade effects in the TotPTHA analysis. We demonstrate how external triggering events may induce significant temporary variations in the tsunami hazard. Because of this, such effects should always be considered, at least in short-term applications, to obtain unbiased analyses. Finally, we prove the feasibility of the TotPTHA and of the treatment of interaction/cascade effects by applying this methodology to an ideal region with realistic characteristics (Neverland).

  12. Reconstructing source terms from atmospheric concentration measurements: Optimality analysis of an inversion technique

    NASA Astrophysics Data System (ADS)

    Turbelin, Grégory; Singh, Sarvesh Kumar; Issartel, Jean-Pierre

    2014-12-01

    In the event of an accidental or intentional contaminant release in the atmosphere, it is imperative, for managing emergency response, to diagnose the release parameters of the source from measured data. Reconstruction of the source information exploiting measured data is called an inverse problem. To solve such a problem, several techniques are currently being developed. The first part of this paper provides a detailed description of one of them, known as the renormalization method. This technique, proposed by Issartel (2005), has been derived using an approach different from that of standard inversion methods and gives a linear solution to the continuous Source Term Estimation (STE) problem. In the second part of this paper, the discrete counterpart of this method is presented. By using matrix notation, common in data assimilation and suitable for numerical computing, it is shown that the discrete renormalized solution belongs to a family of well-known inverse solutions (minimum weighted norm solutions), which can be computed by using the concept of generalized inverse operator. It is shown that, when the weight matrix satisfies the renormalization condition, this operator satisfies the criteria used in geophysics to define good inverses. Notably, by means of the Model Resolution Matrix (MRM) formalism, we demonstrate that the renormalized solution fulfils optimal properties for the localization of single point sources. Throughout the article, the main concepts are illustrated with data from a wind tunnel experiment conducted at the Environmental Flow Research Centre at the University of Surrey, UK.
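
    As a sketch of the minimum weighted norm family of solutions mentioned above, the snippet below computes s = W^{-1} H^T (H W^{-1} H^T)^{-1} mu for an underdetermined linear system H s = mu. The matrices H (source-receptor sensitivities, e.g. from adjoint runs), the measurement vector mu and the weight matrix W are hypothetical placeholders; the renormalization condition on W discussed in the paper is not enforced here.

        import numpy as np

        def min_weighted_norm_solution(H, mu, W):
            """Solution of H s = mu minimizing s^T W s (W positive definite).

            s = W^{-1} H^T (H W^{-1} H^T)^{-1} mu, i.e. a weighted generalized inverse.
            """
            Winv_Ht = np.linalg.solve(W, H.T)   # W^{-1} H^T
            gram = H @ Winv_Ht                  # H W^{-1} H^T
            return Winv_Ht @ np.linalg.solve(gram, mu)

        # Hypothetical toy problem: 3 measurements, 10 candidate source cells.
        rng = np.random.default_rng(0)
        H = rng.random((3, 10))
        mu = rng.random(3)
        W = np.eye(10)
        s = min_weighted_norm_solution(H, mu, W)
        print(np.allclose(H @ s, mu))           # the retrieved source reproduces the data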

  13. Analysis of source term modeling for low-level radioactive waste performance assessments

    SciTech Connect

    Icenhour, A.S.

    1995-03-01

    Site-specific radiological performance assessments are required for the disposal of low-level radioactive waste (LLW) at both commercial and US Department of Energy facilities. This work explores source term modeling of LLW disposal facilities by using two state-of-the-art computer codes, SOURCE1 and SOURCE2. An overview of the performance assessment methodology is presented, and the basic processes modeled in the SOURCE1 and SOURCE2 codes are described. Comparisons are made between the two advective models for a variety of radionuclides, transport parameters, and waste-disposal technologies. These comparisons show that, in general, the zero-order model predicts undecayed cumulative fractions leached that are slightly greater than or equal to those of the first-order model. For long-lived radionuclides, results from the two models eventually reach the same value. By contrast, for short-lived radionuclides, the zero-order model predicts a slightly higher undecayed cumulative fraction leached than does the first-order model. A new methodology, based on sensitivity and uncertainty analyses, is developed for predicting intruder scenarios. This method is demonstrated for {sup 137}Cs in a tumulus-type disposal facility. The sensitivity and uncertainty analyses incorporate input-parameter uncertainty into the evaluation of a potential time of intrusion and the remaining radionuclide inventory. Finally, conclusions from this study are presented, and recommendations for continuing work are made.
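
    The comparison between the zero-order and first-order release models can be illustrated with a small sketch. The rate constants below are hypothetical and the formulas are the generic textbook forms (zero-order: fraction released grows linearly until exhaustion; first-order: exponential approach to complete release), not the actual SOURCE1/SOURCE2 implementations.

        import numpy as np

        t = np.linspace(0.0, 300.0, 301)        # years
        k = 0.01                                # hypothetical fractional release rate (1/yr)

        f_zero = np.minimum(k * t, 1.0)         # zero-order undecayed cumulative fraction leached
        f_first = 1.0 - np.exp(-k * t)          # first-order undecayed cumulative fraction leached

        # For equal rate constants the zero-order curve is always >= the first-order
        # curve, and both approach 1 at long times, consistent with the trends above.
        print(np.all(f_zero >= f_first), f_zero[100], f_first[100])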

  14. Short-term spatial change in a volcanic tremor source during the 2011 Kirishima eruption

    NASA Astrophysics Data System (ADS)

    Matsumoto, Satoshi; Shimizu, Hiroshi; Matsushima, Takeshi; Uehira, Kenji; Yamashita, Yusuke; Nakamoto, Manami; Miyazaki, Masahiro; Chikura, Hiromi

    2013-04-01

    Volcanic tremors are indicators of magmatic behavior, which is strongly related to volcanic eruptions and activity. Detection of spatial and temporal variations in the source location is important for understanding the mechanism of volcanic eruptions. However, short-term temporal variations within a tremor event have not always been detected by seismic array observations around volcanoes. Here, we show that volcanic tremor sources were activated at both the top (i.e., the crater) and the lower end of the conduit, by analyzing seismograms from a dense seismic array 3 km from the Shinmoedake crater, Kirishima volcano, Japan. We observed changes in the seismic ray direction during a volcanic tremor sequence, and inferred two major sources of the tremor from the slowness vectors of the approaching waves. One was located in a shallow region beneath the Shinmoedake crater. The other was found in a direction N30°W from the array, pointing to a location above a pressure source. The fine spatial and temporal characteristics of volcanic tremors suggest an interaction between deep and shallow conduits.

  15. Long-term storage life of light source modules by temperature cycling accelerated life test

    NASA Astrophysics Data System (ADS)

    Ningning, Sun; Manqing, Tan; Ping, Li; Jian, Jiao; Xiaofeng, Guo; Wentao, Guo

    2014-05-01

    Light source modules are the most crucial and fragile devices affecting the life and reliability of the interferometric fiber optic gyroscope (IFOG). While the light-emitting chips were stable in most cases, the module packaging proved to be less satisfactory. In long-term storage or in the working environment, the ambient temperature changes constantly, and the packaging and coupling performance of light source modules are therefore likely to degrade slowly because the materials at the bonding interface have different coefficients of thermal expansion. A constant-temperature accelerated life test cannot evaluate the impact of temperature variation on the performance of a module package, so a temperature cycling accelerated life test was studied. The main failure mechanism affecting light source modules is package failure due to solder fatigue, including fiber coupling shift, loss of cooling efficiency and thermal resistor degradation, so the Norris-Landzberg model was used to model solder fatigue life and determine the activation energy related to the solder fatigue failure mechanism. By analyzing the test data, the activation energy was determined, and the mean life of light source modules in different storage environments with continuously changing temperature was then simulated, providing direct reference data for the storage life prediction of the IFOG.
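
    A minimal sketch of the Norris-Landzberg acceleration-factor calculation mentioned above is given below. The exponents, activation energy and cycle conditions are hypothetical placeholders (in practice the activation energy is fitted from the accelerated test data, as in the abstract); this is an illustration of the model form, not the authors' fitted parameters.

        import math

        K_B = 8.617e-5  # Boltzmann constant, eV/K

        def norris_landzberg_af(dT_test, dT_use, f_test, f_use,
                                Tmax_test, Tmax_use, Ea, m=1.0 / 3.0, n=1.9):
            """Acceleration factor AF = N_use / N_test for thermal-cycling solder fatigue.

            dT_*: temperature swing per cycle (K); f_*: cycling frequency (cycles/day);
            Tmax_*: maximum absolute temperature in the cycle (K); Ea: activation
            energy (eV); m, n: empirical exponents (placeholder values).
            """
            return ((f_use / f_test) ** m
                    * (dT_test / dT_use) ** n
                    * math.exp((Ea / K_B) * (1.0 / Tmax_use - 1.0 / Tmax_test)))

        # Hypothetical example: -40/+85 C test cycling vs. a mild diurnal storage swing.
        af = norris_landzberg_af(dT_test=125.0, dT_use=20.0, f_test=24.0, f_use=1.0,
                                 Tmax_test=358.0, Tmax_use=308.0, Ea=0.12)
        print(f"storage life ~ test life x {af:.0f}")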

  16. Numerical Dissipation and Wrong Propagation Speed of Discontinuities for Stiff Source Terms

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Kotov, D. V.; Sjoegreen, B.

    2012-01-01

    In compressible turbulent combustion/nonequilibrium flows, the construction of numerical schemes for (a) stable and accurate simulation of turbulence with strong shocks and (b) obtaining the correct propagation speed of discontinuities for stiff reacting terms on coarse grids shares one important ingredient: minimization of numerical dissipation while maintaining numerical stability. Here, "coarse grids" means the standard mesh density required for accurate simulation of typical non-reacting flows. These dual requirements of achieving both numerical stability and accuracy with zero or minimal numerical dissipation are most often in conflict for existing schemes that were designed for non-reacting flows. The goal of this paper is to relate the numerical dissipation inherent in a selected set of high-order shock-capturing schemes to the onset of wrong propagation speeds of discontinuities as a function of the stiffness of the source term and the grid spacing.
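
    The wrong-speed phenomenon itself can be demonstrated with a classic model problem: linear advection of a step between two stable equilibria with a very stiff source term. A minimal sketch follows, using first-order upwind advection and operator splitting with the stiff source treated in its limiting form as a projection onto the nearest equilibrium; this reproduces the grid- and CFL-dependent front speed discussed above. It is an illustration of the phenomenon, not one of the high-order schemes analysed in the paper.

        import numpy as np

        def stiff_step_advection(nx=100, cfl=0.8, t_end=0.3):
            """Advect a step (exact front speed = 1) with upwind + a stiff bistable source.

            In the stiff limit the source step projects u to the nearest equilibrium
            (0 or 1). Numerical dissipation smears the front into intermediate values,
            which the projection snaps to the wrong state, so the computed front moves
            at ~1/cfl for cfl > 0.5 and stalls for cfl < 0.5 instead of moving at speed 1.
            """
            dx = 1.0 / nx
            dt = cfl * dx
            x = (np.arange(nx) + 0.5) * dx
            u = np.where(x < 0.3, 1.0, 0.0)
            t = 0.0
            while t < t_end:
                u[1:] = u[1:] - cfl * (u[1:] - u[:-1])   # upwind advection, inflow u = 1 at x = 0
                u = np.where(u >= 0.5, 1.0, 0.0)         # stiff source step (projection)
                t += dt
            return x, u

        x, u = stiff_step_advection(cfl=0.8)             # also try cfl=0.4: the front stalls
        print("computed front at x = %.2f (exact: 0.60)" % x[np.argmax(u < 0.5)])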

  17. EXPERIENCES FROM THE SOURCE-TERM ANALYSIS OF A LOW AND INTERMEDIATE LEVEL RADWASTE DISPOSAL FACILITY

    SciTech Connect

    Park,Jin Beak; Park, Joo-Wan; Lee, Eun-Young; Kim, Chang-Lak

    2003-02-27

    Enhancements of the computer code SAGE for evaluating the Korean concept for a LILW disposal facility are discussed. Several features of source term analysis are embedded in SAGE to analyze: (1) the effects of the degradation mode of an engineered barrier, (2) the effects of dispersion phenomena in the unsaturated zone, and (3) the effects of a time-dependent sorption coefficient in the unsaturated zone. IAEA's Vault Safety Case (VSC) approach is used to demonstrate the capability of this assessment code. Results from MASCOT are used for comparison purposes. These enhancements of the safety assessment code SAGE can contribute to a realistic evaluation of the Korean LILW disposal concept in the near future.

  18. Basic repository source term and data sheet report: Deaf Smith County

    SciTech Connect

    Not Available

    1987-01-01

    This report is one of a series describing studies undertaken in support of the US Department of Energy Civilian Radioactive Waste Management (CRWM) Program. This study contains the derivation of values for environmental source terms and resources consumed for a CRWM repository. Estimates include heavy construction equipment; support equipment; shaft-sinking equipment; transportation equipment; and consumption of fuel, water, electricity, and natural gas. Data are presented for construction and operation at an assumed site in Deaf Smith County, Texas. 2 refs., 6 tabs.

  19. Design parameters and source terms: Volume 1, Design parameters: Revision 0

    SciTech Connect

    Not Available

    1987-09-01

    The Design Parameters and Source Terms Document was prepared in accordance with DOE request and to provide data for the environmental impact study to be performed in the future for the Deaf Smith County, Texas site for a nuclear waste repository in salt. This document updates a previous unpublished report to the level of the Site Characterization Plan - Conceptual Design Report, SCP-CDR. The previous unpublished SCC Study identified the data needs for the Environmental Assessment effort for seven possible salt repository sites.

  20. The source term and waste optimization of molten salt reactors with processing

    SciTech Connect

    Gat, U.; Dodds, H.L.

    1993-07-01

    The source term of a molten salt reactor (MSR) with fuel processing is reduced by the ratio of processing time to refueling time as compared to solid fuel reactors. The reduction, which can be one to two orders of magnitude, is due to removal of the long-lived fission products. The waste from MSRs can be optimized with respect to its chemical composition, concentration, mixture, shape, and size. The actinides and long-lived isotopes can be separated out and returned to the reactor for transmutation. These features make MSRs more acceptable and simpler in operation and handling.

  1. DWPF Algorithm for Calculation of Source Terms and Consequences for EXCEL

    1997-02-11

    The DWPFAST software application algorithm is an Excel spreadsheet, with optional macros, designed to calculate the radiological source terms and consequences due to postulated accident progressions in non-reactor nuclear facilities (currently it is being used for DWPF). Upon input of a multi-character accident progression identification code and basic facility data, the algorithm calculates individual accident segment releases, overall facility releases, and radiological consequences for various receptors, for up to 13 individual radionuclides. The algorithm was designed to support probabilistic safety assessments (PSAs).

  2. Inverse modelling-based reconstruction of the Chernobyl source term available for long-range transport

    NASA Astrophysics Data System (ADS)

    Davoine, X.; Bocquet, M.

    2007-03-01

    The reconstruction of the Chernobyl accident source term has been previously carried out using core inventories, but also back and forth confrontations between model simulations and activity concentration or deposited activity measurements. The approach presented in this paper is based on inverse modelling techniques. It relies both on the activity concentration measurements and on the adjoint of a chemistry-transport model. The location of the release is assumed to be known, and one is looking for a source term available for long-range transport that depends both on time and altitude. The method relies on the maximum entropy on the mean principle and exploits source positivity. The inversion results are mainly sensitive to two tuning parameters, a mass scale and the scale of the prior errors in the inversion. To overcome this hardship, we resort to the statistical L-curve method to estimate balanced values for these two parameters. Once this is done, many of the retrieved features of the source are robust within a reasonable range of parameter values. Our results favour the acknowledged three-step scenario, with a strong initial release (26 to 27 April), followed by a weak emission period of four days (28 April-1 May) and again a release, longer but less intense than the initial one (2 May-6 May). The retrieved quantities of iodine-131, caesium-134 and caesium-137 that have been released are in good agreement with the latest reported estimations. Yet, a stronger apportionment of the total released activity is ascribed to the first period and less to the third one. Finer chronological details are obtained, such as a sequence of eruptive episodes in the first two days, likely related to the modulation of the boundary layer diurnal cycle. In addition, the first two-day release surges are found to have effectively reached an altitude up to the top of the domain (5000 m).

  3. Inverse modelling-based reconstruction of the Chernobyl source term available for long-range transport

    NASA Astrophysics Data System (ADS)

    Davoine, X.; Bocquet, M.

    2007-01-01

    The reconstruction of the Chernobyl accident source term has been previously carried out using core inventories, but also back and forth confrontations between model simulations and activity concentration or deposited activity measurements. The approach presented in this paper is based on inverse modelling techniques. It relies both on the activity concentration measurements and on the adjoint of a chemistry-transport model. The location of the release is assumed to be known, and one is looking for a source term available for long-range transport that depends both on time and altitude. The method relies on the maximum entropy on the mean principle and exploits source positivity. The inversion results are mainly sensitive to two tuning parameters, a mass scale and the scale of the prior errors in the inversion. To overcome this hardship, we resort to the statistical L-curve method to estimate balanced values for these two parameters. Once this is done, many of the retrieved features of the source are robust within a reasonable range of parameter values. Our results favour the acknowledged three-step scenario, with a strong initial release (26 to 27 April), followed by a weak emission period of four days (28 April-1 May) and again a release, longer but less intense than the initial one (2 May-6 May). The retrieved quantities of iodine-131, caesium-134 and caesium-137 that have been released are in good agreement with the latest reported estimations. Yet, a stronger apportionment of the total released activity is ascribed to the first period and less to the third one. Finer chronological details are obtained, such as a sequence of eruptive episodes in the first two days, likely related to the modulation of the boundary layer diurnal cycle. In addition, the first two-day release surges are found to have effectively reached an altitude up to the top of the domain (5000 m).

  4. User's Manual for the SOURCE1 and SOURCE2 Computer Codes: Models for Evaluating Low-Level Radioactive Waste Disposal Facility Source Terms (Version 2.0)

    SciTech Connect

    Icenhour, A.S.; Tharp, M.L.

    1996-08-01

    The SOURCE1 and SOURCE2 computer codes calculate source terms (i.e. radionuclide release rates) for performance assessments of low-level radioactive waste (LLW) disposal facilities. SOURCE1 is used to simulate radionuclide releases from tumulus-type facilities. SOURCE2 is used to simulate releases from silo-, well-, well-in-silo-, and trench-type disposal facilities. The SOURCE codes (a) simulate the degradation of engineered barriers and (b) provide an estimate of the source term for LLW disposal facilities. This manual summarizes the major changes that have been effected since the codes were originally developed.

  5. Distribution and source of (129)I, (239,240)Pu, (137)Cs in the environment of Lithuania.

    PubMed

    Ežerinskis, Ž; Hou, X L; Druteikienė, R; Puzas, A; Šapolaitė, J; Gvozdaitė, R; Gudelis, A; Buivydas, Š; Remeikis, V

    2016-01-01

    Fifty-five soil samples collected in the Lithuanian territory in 2011 and 2012 were analyzed for (129)I, (137)Cs and Pu isotopes in order to investigate the level and distribution of artificial radioactivity in Lithuania. The activities and the atomic ratios of (238)Pu/(239,240)Pu, (129)I/(127)I and (131)I/(137)Cs were used to identify the origin of these radionuclides. The (238)Pu/(239+240)Pu and (240)Pu/(239)Pu ratios in the soil samples analyzed varied in the ranges of 0.02-0.18 and 0.18-0.24, respectively, suggesting global fallout as the major source of Pu in Lithuania. Values of 10(-9) to 10(-6) for the (129)I/(127)I atomic ratio revealed that the source of (129)I in Lithuania is global fallout in most cases, though several sampling sites show a possible impact of reprocessing releases. The estimated (129)I/(131)I ratio in soil samples from the southern part of Lithuania shows a negligible input from Chernobyl fallout. No correlation of the (137)Cs and Pu isotopes with (129)I was observed, indicating different source terms. The results demonstrate an uneven distribution of these radionuclides across the Lithuanian territory and several sources of contamination, i.e., the Chernobyl accident, reprocessing releases and global fallout.

  6. Regulatory Technology Development Plan Sodium Fast Reactor. Mechanistic Source Term Development

    SciTech Connect

    Grabaskas, David S.; Brunett, Acacia Joann; Bucknor, Matthew D.; Sienicki, James J.; Sofu, Tanju

    2015-02-28

    Construction and operation of a nuclear power installation in the U.S. requires licensing by the U.S. Nuclear Regulatory Commission (NRC). A vital part of this licensing process and integrated safety assessment entails the analysis of a source term (or source terms) that represents the release of radionuclides during normal operation and accident sequences. Historically, nuclear plant source term analyses have utilized deterministic, bounding assessments of the radionuclides released to the environment. Significant advancements in technical capabilities and the knowledge state have enabled the development of more realistic analyses such that a mechanistic source term (MST) assessment is now expected to be a requirement of advanced reactor licensing. This report focuses on the state of development of an MST for a sodium fast reactor (SFR), with the intent of aiding in the process of MST definition by qualitatively identifying and characterizing the major sources and transport processes of radionuclides. Due to common design characteristics among current U.S. SFR vendor designs, a metal-fuel, pool-type SFR has been selected as the reference design for this work, with all phenomenological discussions geared toward this specific reactor configuration. This work also aims to identify the key gaps and uncertainties in the current knowledge state that must be addressed for SFR MST development. It is anticipated that this knowledge state assessment can enable the coordination of technology and analysis tool development discussions such that any knowledge gaps may be addressed. Sources of radionuclides considered in this report include releases originating both in-vessel and ex-vessel, including in-core fuel, primary sodium and cover gas cleanup systems, and spent fuel movement and handling. Transport phenomena affecting various release groups are identified and qualitatively discussed, including fuel pin and primary coolant retention, and behavior in the cover gas and

  7. Temperature distribution of air source heat pump barn with different air flow

    NASA Astrophysics Data System (ADS)

    He, X.; Li, J. C.; Zhao, G. Q.

    2016-08-01

    There are two types of airflow in a tobacco curing barn: rising air and falling air. The two differ in structural layout and working principle, which affects the temperature field and velocity distribution inside the barn. To compare the temperature and airflow distributions of the two arrangements, and thereby identify the configuration with the more uniform temperature and velocity fields, the air source heat pump tobacco barn was taken as the investigated subject, a relevant mathematical model was established, and the thermodynamics of the two types of curing barn were analysed and compared using Fluent. The results provide a reasonable basis for the chamber arrangement and outlet selection of an air source heat pump tobacco barn.

  8. The Steady Distribution of Moisture Beneath A Two-Dimensional Surface Source

    NASA Astrophysics Data System (ADS)

    Martinez, M. J.; McTigue, D. F.

    1991-06-01

    The steady distribution of moisture beneath a two-dimensional strip source is analyzed by applying the quasi-linear approximation. The source is described by specifying either the moisture content or the infiltration rate. A water table is specified at some depth D below the surface, the depth varying from shallow to semi-infinite. Numerical solutions are determined, via the boundary integral equation method, as a function of the material inverse sorptive length α, the width of the strip source 2L, and the depth to the water table. The moisture introduced at the source is broadly spread below the surface when αL ≪ 1, for which absorption by capillary forces is dominant over gravity-induced flow. Conversely, the distribution becomes fingerlike along the vertical when αL ≫ 1, where gravity is dominant over absorption. For a source described by specifying the moisture content, the presence of a water table at finite depth influences the infiltration through the source when αD is less than about 4; infiltration rates obtained assuming the water table depth is semi-infinite are of sufficient accuracy for greater values of αD. When the source is described by a specified infiltration flux, the maximum allowable value of this flux for which the material beneath the source remains unsaturated is determined as a function of nondimensional sorptive number and depth to the water table.

  9. The distribution of moisture beneath a two-dimensional surface source

    NASA Astrophysics Data System (ADS)

    Martinez, M. J.; McTigue, D. F.

    1991-03-01

    The distribution of moisture beneath a 2-D strip source is analyzed by applying the quasi-linear approximation. The source is described by specifying either the moisture content or the infiltration rate. A water table is specified at some depth, D, below the surface, the depth varying from shallow to semi-infinite. Numerical solutions are determined, via the boundary integral equation method, as a function of material sorptivity, alpha, the width of the strip source, 2L, and the depth to the water table. The moisture introduced at the source is broadly spread below the surface when alpha L ≪ 1, for which absorption by capillary forces is dominant over gravity induced flow. Conversely, the distribution becomes finger-like along the vertical when alpha L ≫ 1, where gravity is dominant over absorption. For a source described by specifying the moisture content, the presence of a water table at finite depth influences the infiltration through the source when alpha D is less than about 4; infiltration rates obtained when the water table depth is semi-infinite are of sufficient accuracy for greater values of alpha D. When the source is described by a specified infiltration flux, the maximum allowable value of this flux for which the material beneath the source remains unsaturated is determined as a function of nondimensional sorptivity and depth to the water table.

  10. Higher-order-in-spin interaction Hamiltonians for binary black holes from source terms of Kerr geometry in approximate ADM coordinates

    SciTech Connect

    Hergt, Steven; Schaefer, Gerhard

    2008-05-15

    The Kerr metric outside the ergosphere is transformed into Arnowitt-Deser-Misner coordinates up to the orders 1/r{sup 4} and a{sup 2}, respectively, in radial coordinate r and reduced angular momentum variable a, starting from the Kerr solution in quasi-isotropic as well as harmonic coordinates. The distributional source terms for the approximate solution are calculated. To leading order in linear momenta, higher-order-in-spin interaction Hamiltonians for black hole binaries are derived.

  11. Constraints on galactic distributions of gamma-ray burst sources from BATSE observations

    NASA Technical Reports Server (NTRS)

    Hakkila, Jon; Meegan, Charles A.; Pendleton, Geoffrey N.; Fishman, Gerald J.; Wilson, Robert B.; Paciesas, William S.; Brock, Martin N.; Horack, John M.

    1994-01-01

    The paradigm that gamma-ray bursts originate from Galactic sources is studied in detail using the angular and intensity distributions observed by the Burst and Transient Source Experiment (BATSE) on NASA's Compton Gamma Ray Observatory (CGRO). Monte Carlo models of gamma-ray burst spatial distributions and luminosity functions are used to simulate bursts, which are then folded through mathematical models of BATSE selection effects. The observed and computed angular intensity distributions are analyzed using modifications of standard statistical homogeneity and isotropy studies. Analysis of the BATSE angular and intensity distributions greatly constrains the origins and luminosities of burst sources. In particular, it appears that no single population of sources confined to a Galactic disk, halo, or localized spiral arm satisfactorily explains BATSE observations and that effects of the burst luminosity function are secondary when considering such models. One family of models that still satisfies BATSE observations comprises sources located in an extended spherical Galactic corona. Coronal models are limited to small ranges of burst luminosity and core radius, and the allowed parameter space for such models shrinks with each new burst BATSE observes. Multiple-population models of bursts are found to work only if (1) the primary population accounts for the general isotropy and inhomogeneity seen in the BATSE observations and (2) secondary populations either have characteristics similar to the primary population or contain numbers that are small relative to the primary population.

  12. Computer simulation of PPF distribution under blue and red LED light source for plant growth.

    PubMed

    Takita, S; Okamoto, K; Yanagi, T

    1996-12-01

    The superimposed luminescence spectra of a blue light-emitting diode (LED) and a red LED correspond well to the light absorption spectrum of chlorophyll. If these two kinds of LED are used as a light source, various plant cultivation experiments become possible. Cultivation experiments using such light sources are becoming increasingly common, and in such experiments it is very important to know the distribution of the photosynthetic photon flux (PPF), which exerts an important influence on photosynthesis. Therefore, we have developed a computer simulation system that can visualize the PPF distribution under a light source equipped with blue and red LEDs. In this system, an LED is assumed to be a point light source, and only the photons emitted directly from the LED are considered. The simulation system can display a perspective view of the PPF distribution, transverse and longitudinal sections of the distribution, and a contour map of the distribution. Moreover, a contour map of the ratio of the PPF emitted by the blue LEDs to that emitted by the blue and red LEDs together can be displayed. Because the representation uses colored lines scaled to the magnitude of the PPF, a user can readily understand and evaluate the state of the PPF.
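
    A minimal sketch of the direct-photon PPF calculation described above follows, assuming (as in the abstract) that each LED is an isotropic point source and that only direct photons are counted, so the contribution of one LED to a horizontal plane is its photon flux times cos(theta) / (4 pi r^2). LED positions, heights and photon fluxes are hypothetical, and this is not the authors' simulation system.

        import numpy as np

        def ppf_map(led_xy, photon_flux, height, x, y):
            """PPF (umol m^-2 s^-1) on a horizontal plane below point-source LEDs.

            led_xy: (n, 2) LED positions (m); photon_flux: (n,) photon flux per LED
            (umol s^-1); height: LED height above the plane (m); x, y: grid axes (m).
            """
            X, Y = np.meshgrid(x, y)
            ppf = np.zeros_like(X)
            for (lx, ly), phi in zip(led_xy, photon_flux):
                r2 = (X - lx) ** 2 + (Y - ly) ** 2 + height ** 2
                cos_theta = height / np.sqrt(r2)          # incidence angle on the plane
                ppf += phi * cos_theta / (4.0 * np.pi * r2)
            return ppf

        # Hypothetical 2 + 2 arrangement of blue and red LEDs, 0.3 m above the canopy.
        x = y = np.linspace(-0.2, 0.2, 81)
        blue = ppf_map(np.array([[-0.05, -0.05], [0.05, 0.05]]), [3.0, 3.0], 0.3, x, y)
        red = ppf_map(np.array([[-0.05, 0.05], [0.05, -0.05]]), [5.0, 5.0], 0.3, x, y)
        blue_fraction = blue / (blue + red)               # ratio mapped by the contour display
        print(float((blue + red).max()), float(blue_fraction.mean()))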

  13. Characterization and reconstruction of planar sources that generate identical intensity distributions in the Fraunhofer zone.

    PubMed

    Martínez-Herrero, R; Mejías, P M

    1981-12-01

    A general explicit form of the correlation functions of all the partially coherent quasi-monochromatic sources that generate identical intensity distributions in the far (Fraunhofer) zone is given. The common characteristic part of all of these correlation functions is pointed out. It is also shown that any source whose correlation function in some region Omega depends only on the coordinate difference can be reconstructed, in a unique way, from intensity data in the far zone.

  14. Contribution of higher order terms in electron-acoustic solitary waves with vortex electron distribution

    NASA Astrophysics Data System (ADS)

    Demiray, Hilmi

    2014-12-01

    The basic equations describing the nonlinear electron-acoustic waves in a plasma composed of a cold electron fluid, hot electrons obeying a trapped/vortex-like distribution, and stationary ions, in the long-wave limit, are re-examined through the use of the modified PLK method. Introducing the concept of strained coordinates and expanding the field variables into a power series of the smallness parameter ɛ, a set of evolution equations is obtained for the various order terms in the perturbation expansion. The evolution equation for the lowest order term in the perturbation expansion is characterized by the conventional modified Korteweg-deVries (mKdV) equation, whereas the evolution equations for the higher order terms in the expansion are described by the degenerate (linearized) mKdV equation. By studying the localized traveling wave solution to the evolution equations, the strained coordinate for this order is determined so as to remove possible secularities that might occur in the solution. It is observed that the coefficient of the strained coordinate for this order corresponds to the correction term in the wave speed. The numerical results reveal that the contribution of the second-order term to the wave amplitude is about 20%, which cannot be ignored.

  15. Quantifying the Combined Effect of Radiation Therapy and Hyperthermia in Terms of Equivalent Dose Distributions

    SciTech Connect

    Kok, H. Petra; Crezee, Johannes; Franken, Nicolaas A.P.; Barendsen, Gerrit W.

    2014-03-01

    Purpose: To develop a method to quantify the therapeutic effect of radiosensitization by hyperthermia; to this end, a numerical method was proposed to convert radiation therapy dose distributions with hyperthermia to equivalent dose distributions without hyperthermia. Methods and Materials: Clinical intensity modulated radiation therapy plans were created for 15 prostate cancer cases. To simulate a clinically relevant heterogeneous temperature distribution, hyperthermia treatment planning was performed for heating with the AMC-8 system. The temperature-dependent parameters α (Gy{sup −1}) and β (Gy{sup −2}) of the linear–quadratic model for prostate cancer were estimated from the literature. No thermal enhancement was assumed for normal tissue. The intensity modulated radiation therapy plans and temperature distributions were exported to our in-house-developed radiation therapy treatment planning system, APlan, and equivalent dose distributions without hyperthermia were calculated voxel by voxel using the linear–quadratic model. Results: The planned average tumor temperatures T90, T50, and T10 in the planning target volume were 40.5°C, 41.6°C, and 42.4°C, respectively. The planned minimum, mean, and maximum radiation therapy doses were 62.9 Gy, 76.0 Gy, and 81.0 Gy, respectively. Adding hyperthermia yielded an equivalent dose distribution with an extended 95% isodose level. The equivalent minimum, mean, and maximum doses reflecting the radiosensitization by hyperthermia were 70.3 Gy, 86.3 Gy, and 93.6 Gy, respectively, for a linear increase of α with temperature. This can be considered similar to a dose escalation with a substantial increase in tumor control probability for high-risk prostate carcinoma. Conclusion: A model to quantify the effect of combined radiation therapy and hyperthermia in terms of equivalent dose distributions was presented. This model is particularly instructive to estimate the potential effects of interaction from different
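
    A minimal voxel-wise sketch of the dose-conversion idea follows: the biological effect with hyperthermia, alpha(T)*D + beta(T)*D^2, is equated to alpha37*Deq + beta37*Deq^2 and solved for the equivalent dose Deq. The linear temperature dependence of alpha, the parameter values and the example doses are hypothetical placeholders (not the study's fitted values), and fractionation effects are ignored for simplicity.

        import numpy as np

        def equivalent_dose(dose, temp, alpha37=0.15, beta37=0.05, d_alpha_dT=0.05):
            """Equivalent dose (Gy) without hyperthermia from the linear-quadratic model.

            Solves alpha37*Deq + beta37*Deq**2 = alpha(T)*dose + beta(T)*dose**2, with
            alpha increasing linearly above 37 C and beta held constant (assumptions).
            """
            alpha_t = alpha37 + d_alpha_dT * np.maximum(temp - 37.0, 0.0)
            effect = alpha_t * dose + beta37 * dose ** 2
            # positive root of beta37*Deq^2 + alpha37*Deq - effect = 0
            return (-alpha37 + np.sqrt(alpha37 ** 2 + 4.0 * beta37 * effect)) / (2.0 * beta37)

        dose = np.array([60.0, 70.0, 80.0])          # Gy, hypothetical voxel doses
        print(equivalent_dose(dose, 41.5))           # equivalent doses exceed the physical doses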

  16. The Analytical Repository Source-Term (AREST) model: Description and documentation

    SciTech Connect

    Liebetrau, A.M.; Apted, M.J.; Engel, D.W.; Altenhofen, M.K.; Strachan, D.M.; Reid, C.R.; Windisch, C.F.; Erikson, R.L.; Johnson, K.I.

    1987-10-01

    The geologic repository system consists of several components, one of which is the engineered barrier system. The engineered barrier system interfaces with natural barriers that constitute the setting of the repository. A model that simulates the releases from the engineered barrier system into the natural barriers of the geosphere, called a source-term model, is an important component of any model for assessing the overall performance of the geologic repository system. The Analytical Repository Source-Term (AREST) model being developed is one such model. This report describes the current state of development of the AREST model and the code in which the model is implemented. The AREST model consists of three component models and five process models that describe the post-emplacement environment of a waste package. All of these components are combined within a probabilistic framework. The component models are a waste package containment (WPC) model that simulates the corrosion and degradation processes which eventually result in waste package containment failure; a waste package release (WPR) model that calculates the rates of radionuclide release from the failed waste package; and an engineered system release (ESR) model that controls the flow of information among all AREST components and process models and combines release output from the WPR model with failure times from the WPC model to produce estimates of total release. 167 refs., 40 figs., 12 tabs.

  17. Accident source terms for pressurized water reactors with high-burnup cores calculated using MELCOR 1.8.5.

    SciTech Connect

    Gauntt, Randall O.; Powers, Dana Auburn; Ashbaugh, Scott G.; Leonard, Mark Thomas; Longmire, Pamela

    2010-04-01

    In this study, risk-significant pressurized-water reactor severe accident sequences are examined using MELCOR 1.8.5 to explore the range of fission product releases to the reactor containment building. Advances in the understanding of fission product release and transport behavior and severe accident progression are used to render best estimate analyses of selected accident sequences. Particular emphasis is placed on estimating the effects of high fuel burnup in contrast with low burnup on fission product releases to the containment. Supporting this emphasis, recent data available on fission product release from high-burnup (HBU) fuel from the French VERCOR project are used in this study. The results of these analyses are treated as samples from a population of accident sequences in order to employ approximate order statistics characterization of the results. These trends and tendencies are then compared to the NUREG-1465 alternative source term prescription used today for regulatory applications. In general, greater differences are observed between the state-of-the-art calculations for either HBU or low-burnup (LBU) fuel and the NUREG-1465 containment release fractions than exist between HBU and LBU release fractions. Current analyses suggest that retention of fission products within the vessel and the reactor coolant system (RCS) are greater than contemplated in the NUREG-1465 prescription, and that, overall, release fractions to the containment are therefore lower across the board in the present analyses than suggested in NUREG-1465. The decreased volatility of Cs2MoO4 compared to CsI or CsOH increases the predicted RCS retention of cesium, and as a result, cesium and iodine do not follow identical behaviors with respect to distribution among vessel, RCS, and containment. With respect to the regulatory alternative source term, greater differences are observed between the NUREG-1465 prescription and both HBU and LBU predictions than exist between HBU and LBU

  18. Measurement of High Frequency Perturbations to the Ion Velocity Distribution in the HELIX Helicon Plasma Source

    NASA Astrophysics Data System (ADS)

    Kline, J. L.; Boivin, R. F.; Franck, C.; Klinger, T.; Scime, E. E.

    2001-10-01

    Using lasers to measure plasma parameters has become more common in recent years. Lasers can provide information about plasma parameters without perturbing the plasma. The most common technique for ion parameter measurements is Laser Induced Fluorescence (LIF). LIF typically measures the ion velocity distribution and provides information about ion temperatures and ion flows in the plasma. More recently, Skiff and Anderegg [1987] and Safarty et al. [1996] have shown that measurements of the perturbed ion velocity distribution can provide wave number information for waves propagating in a plasma, owing to the non-local nature of the dielectric tensor. In the past two years, attempts have been made to measure the perturbed ion velocity distribution function at frequencies relevant to helicon plasma sources. The objective of the measurements is to identify electrostatic oscillations associated with the slow wave, or "Trivelpiece-Gould modes," in helicon plasma sources. Past efforts to measure the perturbed ion velocity distribution function have been unsuccessful due to technical difficulties associated with measuring the cross correlation of the photon and reference signals. Using a high frequency SR544 Stanford Research lock-in amplifier, high frequency perturbations to the ion velocity distribution in a helicon source have been measured. Perturbed ion velocity distribution measurements, along with the related theory, will be presented.

  19. Source-term development for a contaminant plume for use by multimedia risk assessment models

    SciTech Connect

    Whelan, Gene; McDonald, John P.; Taira, Randal Y.; Gnanapragasam, Emmanuel K.; Yu, Charley; Lew, Christine S.; Mills, William B.

    1999-12-01

    Multimedia modelers from the U.S. Environmental Protection Agency (EPA) and the U.S. Department of Energy (DOE) are collaborating to conduct a comprehensive and quantitative benchmarking analysis of four intermedia models: DOE's Multimedia Environmental Pollutant Assessment System (MEPAS), EPA's MMSOILS, EPA's PRESTO, and DOE's RESidual RADioactivity (RESRAD). These models represent typical analytically, semi-analytically, and empirically based tools that are utilized in human risk and endangerment assessments for use at installations containing radioactive and/or hazardous contaminants. Although the benchmarking exercise traditionally emphasizes the application and comparison of these models, the establishment of a Conceptual Site Model (CSM) should be viewed with equal importance. This paper reviews an approach for developing a CSM of an existing, real-world, Sr-90 plume at DOE's Hanford installation in Richland, Washington, for use in a multimedia-based benchmarking exercise between MEPAS, MMSOILS, PRESTO, and RESRAD. In an unconventional move for analytically based modeling, the benchmarking exercise will begin with the plume as the source of contamination. The source and release mechanism are developed and described within the context of performing a preliminary risk assessment utilizing these analytical models. By beginning with the plume as the source term, this paper reviews a typical process and procedure an analyst would follow in developing a CSM for use in a preliminary assessment using this class of analytical tool.

  20. Marine litter on Mediterranean shores: Analysis of composition, spatial distribution and sources in north-western Adriatic beaches.

    PubMed

    Munari, Cristina; Corbau, Corinne; Simeoni, Umberto; Mistri, Michele

    2016-03-01

    Marine litter is one descriptor in the EU Marine Strategy Framework Directive (MSFD). This study provides the first account of an MSFD indicator (Trends in the amount of litter deposited on coastlines) for the north-western Adriatic. Five beaches were sampled in 2015. Plastic dominated in terms of abundance, followed by paper and other groups. The average density was 0.2 litter items m(-2), but at one beach it rose to 0.57 items m(-2). The major categories were cigarette butts, unrecognizable plastic pieces, bottle caps, and others. The majority of marine litter came from land-based sources: shoreline and recreational activities, smoke-related activities and dumping. Sea-based sources contributed less. The abundance and distribution of litter seemed to be particularly influenced by beach users, reflecting inadequate disposal practices. The solution to these problems involves implementation and enforcement of local educational and management policies. PMID:26725754

  1. Marine litter on Mediterranean shores: Analysis of composition, spatial distribution and sources in north-western Adriatic beaches.

    PubMed

    Munari, Cristina; Corbau, Corinne; Simeoni, Umberto; Mistri, Michele

    2016-03-01

    Marine litter is one descriptor in the EU Marine Strategy Framework Directive (MSFD). This study provides the first account of an MSFD indicator (Trends in the amount of litter deposited on coastlines) for the north-western Adriatic. Five beaches were sampled in 2015. Plastic dominated in terms of abundance, followed by paper and other groups. The average density was 0.2 litter items m(-2), but at one beach it rose to 0.57 items m(-2). The major categories were cigarette butts, unrecognizable plastic pieces, bottle caps, and others. The majority of marine litter came from land-based sources: shoreline and recreational activities, smoke-related activities and dumping. Sea-based sources contributed less. The abundance and distribution of litter seemed to be particularly influenced by beach users, reflecting inadequate disposal practices. The solution to these problems involves implementation and enforcement of local educational and management policies.

  2. Impact of the differential fluence distribution of brachytherapy sources on the spectroscopic dose-rate constant

    SciTech Connect

    Malin, Martha J.; Bartol, Laura J.; DeWerd, Larry A. E-mail: ladewerd@wisc.edu

    2015-05-15

    Purpose: To investigate why dose-rate constants for {sup 125}I and {sup 103}Pd seeds computed using the spectroscopic technique, Λ{sub spec}, differ from those computed with standard Monte Carlo (MC) techniques. A potential cause of these discrepancies is the spectroscopic technique’s use of approximations of the true fluence distribution leaving the source, φ{sub full}. In particular, the fluence distribution used in the spectroscopic technique, φ{sub spec}, approximates the spatial, angular, and energy distributions of φ{sub full}. This work quantified the extent to which each of these approximations affects the accuracy of Λ{sub spec}. Additionally, this study investigated how the simplified water-only model used in the spectroscopic technique impacts the accuracy of Λ{sub spec}. Methods: Dose-rate constants as described in the AAPM TG-43U1 report, Λ{sub full}, were computed with MC simulations using the full source geometry for each of 14 different {sup 125}I and 6 different {sup 103}Pd source models. In addition, the spectrum emitted along the perpendicular bisector of each source was simulated in vacuum using the full source model and used to compute Λ{sub spec}. Λ{sub spec} was compared to Λ{sub full} to verify the discrepancy reported by Rodriguez and Rogers. Using MC simulations, a phase space of the fluence leaving the encapsulation of each full source model was created. The spatial and angular distributions of φ{sub full} were extracted from the phase spaces and were qualitatively compared to those used by φ{sub spec}. Additionally, each phase space was modified to reflect one of the approximated distributions (spatial, angular, or energy) used by φ{sub spec}. The dose-rate constant resulting from using approximated distribution i, Λ{sub approx,i}, was computed using the modified phase space and compared to Λ{sub full}. For each source, this process was repeated for each approximation in order to determine which approximations used in

  3. Detecting Long-term Changes in Point Source Fossil CO2 Emissions with Tree Ring Archives

    NASA Astrophysics Data System (ADS)

    Keller, E. D.; Turnbull, J. C.; Norris, M. W.

    2015-12-01

    We examine the utility of tree ring 14C archives for detecting long-term changes in fossil CO2 emissions from a point source. Trees assimilate carbon from the atmosphere during photosynthesis, in the process faithfully recording the average atmospheric 14C content over the growing season in each annual tree ring. Using 14C as a proxy for fossil CO2, we examine interannual variability over six years of fossil CO2 observations between 2004 and 2012 from two trees growing near the Kapuni Natural Gas Plant in rural Taranaki, New Zealand. We quantify the amount of variability that can be attributed to transport and meteorology by simulating constant point source fossil CO2 emissions over the observation period with the atmospheric transport model WindTrax. We then calculate the amount of change in emissions that we can detect with new observations over annual or multi-year time periods given both a measurement uncertainty of 1 ppm and the modelled variation in transport. In particular, we ask: what is the minimum amount of change in emissions that we can detect using this method, given a reference period of six years? We find that changes of 42% or more could be detected in a new sample from one year at the pine tree, or 22% in the case of four years of new samples. This threshold lowers and the method becomes more practical with a larger signal; for point sources 10 times the magnitude of the Kapuni plant (a typical size for large electricity generation point sources worldwide), it would be possible to detect sustained emissions changes on the order of 10% given suitable meteorology and observations.
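
    A rough sense of such a detection threshold can be had from a standard two-sample comparison: the year-to-year scatter (measurement plus modelled transport variability) sets the smallest emissions change distinguishable from the reference period. The sketch below is only a generic illustration of that idea, not the WindTrax-based analysis in the study; the 15% scatter value and the 95% confidence criterion are assumptions chosen for the example.

```python
# Illustrative detectability estimate: smallest fractional change in emissions
# distinguishable from an n_ref-year reference period, given combined measurement
# and transport scatter. Not the authors' method; sigma_total and z are assumed.
import math

def min_detectable_change(sigma_total: float, n_ref: int, n_new: int, z: float = 1.96) -> float:
    """Two-sample threshold, expressed as a fraction of the mean signal (~95% confidence)."""
    se = sigma_total * math.sqrt(1.0 / n_ref + 1.0 / n_new)
    return z * se

print(round(min_detectable_change(0.15, n_ref=6, n_new=1), 2))  # ~0.32 with one new year
print(round(min_detectable_change(0.15, n_ref=6, n_new=4), 2))  # ~0.19 with four new years
```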

  4. GIS Based Distributed Runoff Predictions in Variable Source Area Watersheds Employing the SCS-Curve Number

    NASA Astrophysics Data System (ADS)

    Steenhuis, T. S.; Mendoza, G.; Lyon, S. W.; Gerard Marchant, P.; Walter, M. T.; Schneiderman, E.

    2003-04-01

    Because the traditional Soil Conservation Service Curve Number (SCS-CN) approach continues to be ubiquitously used in GIS-based water quality models, new application methods are needed that are consistent with variable source area (VSA) hydrological processes in the landscape. We developed, within an integrated GIS modeling environment, a distributed approach for applying the traditional SCS-CN equation to watersheds where VSA hydrology is a dominant process. Spatial representation of hydrologic processes is important for watershed planning because restricting potentially polluting activities from runoff source areas is fundamental to controlling non-point source pollution. The methodology presented here uses the traditional SCS-CN method to predict runoff volume and spatial extent of saturated areas and uses a topographic index to distribute runoff source areas through watersheds. The resulting distributed CN-VSA method was incorporated in an existing GWLF water quality model and applied to sub-watersheds of the Delaware basin in the Catskill Mountains region of New York State. We found that the distributed CN-VSA approach provided a physically-based method that gives realistic results for watersheds with VSA hydrology.
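
    As a point of reference for the runoff-volume step described above, the standard SCS-CN relation computes direct runoff Q from rainfall P and a curve number CN. The sketch below shows only that textbook equation (US customary units, Ia = 0.2S); it is not the authors' distributed CN-VSA implementation, which additionally spreads the saturated source area across the watershed with a topographic index.

```python
# Standard SCS-CN runoff equation (textbook form, inches); the CN value and the
# Ia = 0.2*S convention are illustrative assumptions, not values from the study.
def scs_cn_runoff(p_in: float, cn: float, ia_ratio: float = 0.2) -> float:
    """Direct runoff depth Q (inches) for rainfall P (inches) and curve number CN."""
    s = 1000.0 / cn - 10.0          # potential maximum retention (inches)
    ia = ia_ratio * s               # initial abstraction
    if p_in <= ia:
        return 0.0
    return (p_in - ia) ** 2 / (p_in - ia + s)

print(round(scs_cn_runoff(3.0, cn=75), 2))  # ~0.96 in of runoff from 3 in of rain
```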

  5. Bacterial Composition in a Metropolitan Drinking Water Distribution System Utilizing Different Source Waters

    EPA Science Inventory

    The microbial community structure was investigated from bulk phase water samples of multiple collection sites from two service areas within the Cincinnati drinking water distribution system (DWDS). Each area is associated with a different primary source of water (i.e., groundwat...

  6. 76 FR 77223 - SourceGas Distribution LLC; Notice of Petition for Rate Approval and Revised Statement of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-12

    ... to utilize rates that are the same as those contained in SourceGas' transportation rate schedules for... Energy Regulatory Commission SourceGas Distribution LLC; Notice of Petition for Rate Approval and Revised Statement of Operating Conditions Take notice that on December 1, 2011, SourceGas Distribution LLC...

  7. 77 FR 40609 - SourceGas Distribution LLC; Notice of Petition for Rate Approval and Revised Statement of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-10

    ... utilize rates that are the same as those contained in SourceGas' transportation rate schedules for... Energy Regulatory Commission SourceGas Distribution LLC; Notice of Petition for Rate Approval and Revised Statement of Operating Conditions Take notice that on June 29, 2012, SourceGas Distribution LLC...

  8. Improvement of capabilities of the Distributed Electrochemistry Modeling Tool for investigating SOFC long term performance

    SciTech Connect

    Gonzalez Galdamez, Rinaldo A.; Recknagle, Kurtis P.

    2012-04-30

    This report provides an overview of the work performed for Solid Oxide Fuel Cell (SOFC) modeling during the 2012 Winter/Spring Science Undergraduate Laboratory Internship at Pacific Northwest National Laboratory (PNNL). A brief introduction to the concept, operating basics and applications of fuel cells is given for a general audience. Further details are given regarding the modifications and improvements to the Distributed Electrochemistry (DEC) Modeling tool developed by PNNL engineers to model SOFC long-term performance. Within this analysis, a literature review of anode degradation mechanisms is presented, and future plans for implementing these mechanisms in the DEC modeling tool are also proposed.

  9. Transient Flows and Stratification of an Enclosure Containing Both a Localised and Distributed Source of Buoyancy

    NASA Astrophysics Data System (ADS)

    Partridge, Jamie; Linden, Paul

    2014-11-01

    We examine the transient flow and stratification in a naturally ventilated enclosure containing both a localised and distributed source of buoyancy. Both sources of buoyancy are located at the base of the enclosure to represent a building where there is a distributed heat flux from the floor, for example from a sun patch, that competes with a localised heat source within the space. The steady conditions of the space are controlled purely by the geometry of the enclosure and the ratio of the distributed and localised buoyancy fluxes Ψ, and are independent of the order in which the buoyancy fluxes are introduced into the space. However, the order in which the sources are introduced, such as delaying the introduction of the localised source, alters the transients significantly. To investigate this problem, small-scale experiments were conducted and compared to a 'perfect-mixing' model of the transients. How the stratification evolves in time, in particular how long it takes to reach steady conditions, is key to understanding what can be expected in real buildings. The transient evolution of the interior stratification is reported here and compared to the theoretical model.

  10. [Soil Heavy Metal Spatial Distribution and Source Analysis Around an Aluminum Plant in Baotou].

    PubMed

    Zhang, Lian-ke; Li, Hai-peng; Huang, Xue-min; Li, Yu-mei; Jiao, Kun-ling; Sun, Peng; Wang, Wei-da

    2016-03-15

    Soil within 500 m of an aluminum plant in Baotou was studied. A total of 64 soil samples were taken from the 0-5 cm, 5-20 cm, 20-40 cm and 40-60 cm layers, and the contents of Cu, Pb, Zn, Cr, Cd, Ni and Mn were measured. Correlation analysis and principal component analysis were used to identify the sources of these heavy metals in the soils. The results suggested that the contents of Cu, Pb, Zn, Cr, Cd, Ni and Mn in the study area were 32.9, 50.35, 69.92, 43.78, 0.54, 554.42 and 36.65 mg · kg⁻¹, respectively, all exceeding the soil background values for Inner Mongolia. The spatial distribution showed that, horizontally, heavy metals were clearly enriched to the southwest, while vertically the content was highest in the surface soil (0 to 5 cm), decreased with increasing depth, and stabilized below a depth of about 20 cm. Source analysis indicated that Cu, Zn, Cr and Mn might be influenced by the aluminum plant and the surrounding industrial activity, Pb and Cd might be mainly related to road transportation, and Ni may reflect both agricultural activities and soil parent material. PMID:27337911
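
    The statistical step described (correlation analysis followed by principal component analysis on a samples-by-metals matrix) can be sketched in a few lines. The data below are random placeholders, not the Baotou measurements; the code only shows the mechanics of standardizing the matrix and reading off the variance explained by the leading components.

```python
# Hedged sketch of correlation + PCA source apportionment on a 64-sample x 7-metal
# concentration matrix; the data are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
X = rng.lognormal(mean=1.0, sigma=0.5, size=(64, 7))     # 64 samples x 7 metals

corr = np.corrcoef(X, rowvar=False)                      # metal-metal correlation matrix
Z = (X - X.mean(axis=0)) / X.std(axis=0)                 # standardize before PCA
eigvals, eigvecs = np.linalg.eigh(np.cov(Z, rowvar=False))
order = np.argsort(eigvals)[::-1]
explained = eigvals[order] / eigvals.sum()
print(explained[:3])     # fraction of variance carried by the first three components
```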

  11. [Soil Heavy Metal Spatial Distribution and Source Analysis Around an Aluminum Plant in Baotou].

    PubMed

    Zhang, Lian-ke; Li, Hai-peng; Huang, Xue-min; Li, Yu-mei; Jiao, Kun-ling; Sun, Peng; Wang, Wei-da

    2016-03-15

    Soil within 500 m of an aluminum plant in Baotou was studied. A total of 64 soil samples were taken from the 0-5 cm, 5-20 cm, 20-40 cm and 40-60 cm layers, and the contents of Cu, Pb, Zn, Cr, Cd, Ni and Mn were measured. Correlation analysis and principal component analysis were used to identify the sources of these heavy metals in the soils. The results suggested that the contents of Cu, Pb, Zn, Cr, Cd, Ni and Mn in the study area were 32.9, 50.35, 69.92, 43.78, 0.54, 554.42 and 36.65 mg · kg⁻¹, respectively, all exceeding the soil background values for Inner Mongolia. The spatial distribution showed that, horizontally, heavy metals were clearly enriched to the southwest, while vertically the content was highest in the surface soil (0 to 5 cm), decreased with increasing depth, and stabilized below a depth of about 20 cm. Source analysis indicated that Cu, Zn, Cr and Mn might be influenced by the aluminum plant and the surrounding industrial activity, Pb and Cd might be mainly related to road transportation, and Ni may reflect both agricultural activities and soil parent material.

  12. Spatial distribution and luminosity function of OH/IR maser sources

    NASA Astrophysics Data System (ADS)

    Tong, Y.; Sun, J.; Xie, S.-D.; Yang, X.-X.

    1984-12-01

    Published observational data on 127 OH-maser sources for which visible or IR identifications and distance estimates are available (mainly from the list of Engels, 1979) are analyzed statistically to determine their Galactic distribution and luminosity function. The results are presented graphically and discussed. A density distribution with a steep peak at about 7.5 kpc from the Galactic center and FWHM 2.1 kpc, similar to that found for Mira variables by Glass et al. (1982) and markedly different from that of Baud et al. (1979 and 1981) for unidentified objects, is observed. The luminosity function is found to be rho(L) = 189.67 L^(-1.79), like that for unidentified objects, despite the fact that the observed luminosity range (0.16-1000 Jy kpc^2) of the identified sources is wider than that determined by Bowers (1978) for unidentified sources.

  13. DOA Estimation of Coherently Distributed Sources Based on Block-Sparse Constraint

    NASA Astrophysics Data System (ADS)

    Gan, Lu; Wang, Xiao Qing; Liao, Hong Shu

    In this letter, a new method is proposed to solve the direction-of-arrivals (DOAs) estimation problem of coherently distributed sources based on the block-sparse signal model of compressed sensing (CS) and the convex optimization theory. We make use of a certain number of point sources and the CS array architecture to establish the compressive version of the discrete model of coherently distributed sources. The central DOA and the angular spread can be estimated simultaneously by solving a convex optimization problem which employs a joint norm constraint. As a result we can avoid the two-dimensional search used in conventional algorithms. Furthermore, the multiple-measurement-vectors (MMV) scenario is also considered to achieve robust estimation. The effectiveness of our method is confirmed by simulation results.
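
    The joint-norm (block-sparse) constraint mentioned above is usually handled with block soft-thresholding, the proximal operator of the mixed l2,1 norm, inside an iterative convex solver. The sketch below shows only that operator in isolation; it is not the full compressed-sensing DOA estimator of the letter, and the grouping and threshold are illustrative.

```python
# Block (group) soft-thresholding: each group of coefficients is shrunk toward zero
# in l2 norm. This is the building block behind l2,1-regularized block-sparse recovery.
import numpy as np

def block_soft_threshold(x: np.ndarray, groups, tau: float) -> np.ndarray:
    out = np.zeros_like(x)
    for g in groups:
        norm = np.linalg.norm(x[g])
        if norm > tau:
            out[g] = (1.0 - tau / norm) * x[g]
    return out

x = np.array([3.0, 4.0, 0.1, -0.1])
groups = [[0, 1], [2, 3]]                           # two blocks of two coefficients each
print(block_soft_threshold(x, groups, tau=1.0))     # weak block zeroed, strong one shrunk
```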

  14. FDTD verification of deep-set brain tumor hyperthermia using a spherical microwave source distribution

    SciTech Connect

    Dunn, D.; Rappaport, C.M.; Terzuoli, A.J. Jr.

    1996-10-01

    Although use of noninvasive microwave hyperthermia to treat cancer is problematic in many human body structures, careful selection of the source electric field distribution around the entire surface of the head can generate a tightly focused global power density maximum at the deepest point within the brain. An analytic prediction of the optimum volume field distribution in a layered concentric head model based on summing spherical harmonic modes is derived and presented. This ideal distribution is then verified using a three-dimensional finite difference time domain (FDTD) simulation with a discretized, MRI-based head model excited by the spherical source. The numerical computation gives a dissipated power pattern very similar to the analytic prediction. This study demonstrates that microwave hyperthermia can theoretically be a feasible cancer treatment modality for tumors in the head, providing a well-resolved hot-spot at depth without overheating any other healthy tissue.

  15. Long-term observations of aerosol size distributions in semi-clean and polluted savannah in South Africa

    NASA Astrophysics Data System (ADS)

    Vakkari, V.; Beukes, J. P.; Laakso, H.; Mabaso, D.; Pienaar, J. J.; Kulmala, M.; Laakso, L.

    2012-09-01

    originates from regional wild fires, while at Marikana domestic heating in the informal settlements is the main source. Air mass history analysis for Botsalano identified four regional scale source areas in Southern Africa and enabled the differentiation between fresh and aged rural background aerosol originating from the clean sector, i.e., western sector with very few large anthropogenic sources. Comparison to size distributions published for other comparable environments in Northern Hemisphere shows Southern African savannah to have a unique combination of sources and meteorological parameters. The observed strong link between combustion and seasonal variation is comparable only to the Amazon basin; however the lack of long-term observations in the Amazonas does not allow a quantitative comparison. All the data presented in the figures, as well as the time series of monthly mean and median size distributions are included in numeric form as a Supplement to provide a reference point for the aerosol modelling community.

  16. Long-term observations of aerosol size distributions in semi-clean and polluted savannah in South Africa

    NASA Astrophysics Data System (ADS)

    Vakkari, V.; Beukes, J. P.; Laakso, H.; Mabaso, D.; Pienaar, J. J.; Kulmala, M.; Laakso, L.

    2013-02-01

    concentration originates from regional wild fires, while at Marikana domestic heating in the informal settlements is the main source. Air mass history analysis for Botsalano identified four regional scale source areas in southern Africa and enabled the differentiation between fresh and aged rural background aerosol originating from the clean sector, i.e., western sector with very few large anthropogenic sources. Comparison to size distributions published for other comparable environments in Northern Hemisphere shows southern African savannah to have a unique combination of sources and meteorological parameters. The observed strong link between combustion and seasonal variation is comparable only to the Amazon basin; however, the lack of long-term observations in the Amazonas does not allow a quantitative comparison. All the data presented in the figures, as well as the time series of monthly mean and median size distributions are included in numeric form as a Supplement to provide a reference point for the aerosol modelling community.

  17. CCN frequency distributions and aerosol chemical composition from long-term observations at European ACTRIS supersites

    NASA Astrophysics Data System (ADS)

    Decesari, Stefano; Rinaldi, Matteo; Schmale, Julia Yvonne; Gysel, Martin; Fröhlich, Roman; Poulain, Laurent; Henning, Silvia; Stratmann, Frank; Facchini, Maria Cristina

    2016-04-01

    Cloud droplet number concentration is regulated by the availability of aerosol acting as cloud condensation nuclei (CCN). Predicting the air concentrations of CCN involves knowledge of all physical and chemical processes that contribute to shape the particle size distribution and determine aerosol hygroscopicity. The relevance of specific atmospheric processes (e.g., nucleation, coagulation, condensation of secondary organic and inorganic aerosol, etc.) is time- and site-dependent, therefore the availability of long-term, time-resolved aerosol observations at locations representative of diverse environments is strategic for the validation of state-of-the-art chemical transport models suited to predict CCN concentrations. We focused on long-term (year-long) datasets of CCN and of aerosol composition data including black carbon, and inorganic as well as organic compounds from the Aerosol Chemical Speciation Monitor (ACSM) at selected ACTRIS supersites (http://www.actris.eu/). We discuss here the joint frequency distribution of CCN levels and of aerosol chemical components concentrations for two stations: an alpine site (Jungfraujoch, CH) and a central European rural site (Melpitz, DE). The CCN frequency distributions at Jungfraujoch are broad and generally correlated with the distributions of the concentrations of aerosol chemical components (e.g., high CCN concentrations are most frequently found for high organic matter or black carbon concentrations, and vice versa), which can be explained as an effect of the strong seasonality in the aerosol characteristics at the mountain site. The CCN frequency distributions in Melpitz show a much weaker overlap with the distributions of BC concentrations or other chemical compounds. However, especially at high CCN concentration levels, a statistical correlation with organic matter (OM) concentration can be observed. For instance, the number of CCN (with particle diameter between 20 and 250 nm) at a supersaturation of 0.7% is

  18. Evaluation of severe accident risks: Quantification of major input parameters. Experts' determination of source term issues: Volume 2, Revision 1, Part 4

    SciTech Connect

    Harper, F.T.; Breeding, R.J.; Brown, T.D.; Gregory, J.J.; Jow, H.N.; Payne, A.C.; Gorham, E.D.; Amos, C.N.; Helton, J.; Boyd, G.

    1992-06-01

    In support of the Nuclear Regulatory Commission's (NRC's) assessment of the risk from severe accidents at commercial nuclear power plants in the US reported in NUREG-1150, the Severe Accident Risk Reduction Program (SAARP) has completed a revised calculation of the risk to the general public from severe accidents at five nuclear power plants: Surry, Sequoyah, Zion, Peach Bottom and Grand Gulf. The emphasis in this risk analysis was not on determining a point estimate of risk, but on determining the distribution of risk and assessing the uncertainties that account for the breadth of this distribution. Off-site risk is initiated by events both internal and external to the power station. Much of this important input to the logic models was generated by expert panels. This document presents the distributions, and the rationale supporting them, for the questions posed to the Source Term Panel.

  19. Long-term accounting for raindrop size distribution variations improves quantitative precipitation estimation by weather radar

    NASA Astrophysics Data System (ADS)

    Hazenberg, Pieter; Leijnse, Hidde; Uijlenhoet, Remko

    2016-04-01

    Weather radars provide information on the characteristics of precipitation at high spatial and temporal resolution. Unfortunately, rainfall measurements by radar are affected by multiple error sources. The current study is focused on the impact of variations of the raindrop size distribution on radar rainfall estimates. Such variations lead to errors in the estimated rainfall intensity (R) and specific attenuation (k) when using fixed relations for the conversion of the observed reflectivity (Z) into R and k. For non-polarimetric radar, this error source has received relatively little attention compared to other error sources. We propose to link the parameters of the Z-R and Z-k relations directly to those of the normalized gamma DSD. The benefit of this procedure is that it reduces the number of unknown parameters. In this work, the DSD parameters are obtained using 1) surface observations from a Parsivel and Thies LPM disdrometer, and 2) a Monte Carlo optimization procedure using surface rain gauge observations. The impact of both approaches for a given precipitation type is assessed for 45 days of summertime precipitation observed in The Netherlands. Accounting for DSD variations using disdrometer observations leads to an improved radar QPE product as compared to applying climatological Z-R and Z-k relations. This especially holds for situations where widespread stratiform precipitation is observed. The best results are obtained when the DSD parameters are optimized. However, the optimized Z-R and Z-k relations show an unrealistic variability that arises from uncorrected error sources. As such, the optimization approach does not result in a realistic DSD shape but instead also accounts for uncorrected error sources resulting in the best radar rainfall adjustment. Therefore, to further improve the quality of precipitation estimates by weather radar, use should be made either of polarimetric radar or of an extended network of disdrometers.
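
    The conversion at issue is the power-law reflectivity-rain rate relation Z = a*R^b, whose coefficients depend on the drop size distribution. The snippet below applies a common fixed-coefficient (Marshall-Palmer-type) version of that conversion purely for illustration; the paper's point is precisely that a and b should instead be tied to the observed DSD parameters.

```python
# Fixed-coefficient Z-R conversion for illustration; a = 200, b = 1.6 are common
# defaults, not the DSD-derived values advocated in the abstract.
def rain_rate_from_dbz(dbz: float, a: float = 200.0, b: float = 1.6) -> float:
    """Rain rate R (mm/h) from radar reflectivity in dBZ via Z = a * R**b."""
    z_lin = 10.0 ** (dbz / 10.0)          # reflectivity factor Z in mm^6 m^-3
    return (z_lin / a) ** (1.0 / b)

print(round(rain_rate_from_dbz(30.0), 2))   # ~2.7 mm/h at 30 dBZ
```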

  20. Effective petroleum source rocks of the world: Stratigraphic distribution and controlling depositional factors

    SciTech Connect

    Klemme, H.D. ); Ulmishek, G.F. )

    1991-12-01

    Six stratigraphic intervals, representing one-third of Phanerozoic time, contain petroleum source rocks that have provided more than 90% of the world's discovered original reserves of oil and gas (in barrels of oil equivalent). The six intervals are (1) Silurian (generated 9% of the world's reserves), (2) Upper Devonian-Tournaisian (8% of reserves), (3) Pennsylvanian-Lower Permian (8% of reserves), (4) Upper Jurassic (25% of reserves), (5) middle Cretaceous (29% of reserves), and (6) Oligocene-Miocene (12.5% of reserves). This uneven distribution of source rocks varies from interval to interval. Maps that show facies, structural forms, and petroleum source rocks were prepared for this study. Analysis of the maps indicates that several primary factors controlled the areal distribution of source rocks, their geochemical type, and their effectiveness (i.e., the amounts of discovered original conventionally recoverable reserves of oil and gas generated by these rocks). These factors are geologic age, paleolatitude of the depositional areas, structural forms in which the deposition of source rocks occurred, and the evolution of biota. The maturation time of these source rocks demonstrates that the majority of discovered oil and gas is very young; almost 70% of the world's original reserves of oil and gas has been generated since the Coniacian, and nearly 50% of the world's petroleum has been generated and trapped since the Oligocene.

  1. Sources and distribution of surficial silt, eastern US Atlantic continental shelf

    SciTech Connect

    Leschak, P.; Prusak, D.; Mazzullo, J.

    1985-01-01

    A study of the source and distribution of the surficial sediment on the eastern US Atlantic continental shelf between Cape Hatteras and Nova Scotia reveals that there are three major sources of silt on this shelf. One source is the unconsolidated Cretaceous and Tertiary coastal plain strata, represented by well-rounded and equant quartz grains which have been smoothed and rounded by chemical processes. A second source is the late Pleistocene glacial deposits, represented by highly angular and elongate quartz grains produced by mechanical fracture and breakage. A third source is the Paleozoic and Mesozoic lithified sedimentary and crystalline rocks of the Appalachians, represented by moderately angular and irregular quartz grains with crystalline and pseudocrystalline nodes. The distributional patterns of the three silt types, relative to those of coarser sediments from the same sources, show evidence of extensive reworking of fine-grained sediments on the shelf since the Holocene transgression. In general, coastal plain-derived silt dominates the shelf sediments from Cape Cod southward to Cape Hatteras; glacially-transported silt dominates the shelf sediments north of Cape Cod; and sedimentary/crystalline-derived silt is found in variable quantities throughout the entire study area. These three silt types were presumably redistributed by late Pleistocene as well as modern processes of sedimentation on the shelf.

  2. Improved treatment of source terms in TVD scheme for shallow water equations

    NASA Astrophysics Data System (ADS)

    Tseng, Ming-Hseng

    2004-06-01

    A number of high-resolution schemes have been recently developed to solve the homogeneous form of the shallow water equations. However, most approximate Riemann solvers experience difficulties with natural river applications if the irregular bed topography is not handled correctly. Based on the finite-difference flux-limited total variation diminishing (TVD) scheme, this paper develops a simple approach to handle the source terms for the one-dimensional open channel flow simulation with rapidly varying bed topography. Conclusions on the validity of the operator-splitting approach, the eigenvector-projection approach, and the proposed approach are presented. Analytical solution, experimental data, and available numerical result comparisons are shown to demonstrate the accuracy, robustness, stability, simplicity, and applicability of the proposed model.

  3. ACT: a program for calculation of the changes in radiological source terms with time

    SciTech Connect

    Woolfolk, S.W.

    1985-08-12

    The program ACT calculates the source term activity from a set of initial activities as a function of discrete time steps. This calculation accounts for the in-growth of daughter products. ACT also calculates ''Probable Release'', which is the activity at a given time multiplied by both the fraction released and the probability of the release. The ''Probable Release'' calculation assumes not only that the fraction released is a single step function of time, but also that the probability of release is zero for a limited period and can thereafter be described by the ''Wisconsin Regression'' function with time as the independent variable. Finally, the program calculates the time-integrated sum of the ''Probable Release'' for each isotope. This program is intended to support analysis of releases from radioactive waste disposal sites such as those required by 40 CFR 191.
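
    The bookkeeping described lends itself to a short numerical sketch: decay an initial activity over discrete time steps and weight it by the release fraction and release probability. The example below uses simple exponential decay of a single nuclide with placeholder step-function and probability curves; it omits daughter in-growth and the actual ''Wisconsin Regression'' form used by ACT.

```python
# Hedged sketch of a probable-release calculation: A(t) * f(t) * p(t) at discrete times.
# Single-nuclide decay only; the release fraction and probability below are placeholders.
import math

def probable_release(a0, half_life, times, frac_released, prob_release):
    lam = math.log(2.0) / half_life
    return [a0 * math.exp(-lam * t) * frac_released(t) * prob_release(t) for t in times]

times = [0, 100, 500, 1000]                       # years
frac = lambda t: 1e-4 if t >= 100 else 0.0        # single step function of time
prob = lambda t: 0.0 if t < 300 else 0.01         # zero for a limited period, then constant
print(probable_release(1000.0, 5730.0, times, frac, prob))
```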

  4. Update to the NARAC NNPP Non-Reactor Source Term Products

    SciTech Connect

    Vogt, P

    2009-06-29

    Recent updates to NARAC plots for NNPP require a modification to your iClient database. The steps you need to take are described below. Implementation of the non-reactor source terms in February 2009 included four plots, the traditional three instantaneous plots (1-3) and a new Gamma Dose Rate: 1. Particulate Air Concentration 2. Total Ground Deposition 3. Whole Body Inhalation Dose Rate (CEDE Rate) 4. Gamma Dose Rate. These plots were all initially implemented as instantaneous output generated 30 minutes after the release time. Recently, Bettis and NAVSEA have requested that the Whole Body CEDE Rate plot be changed to an integrated dose valid at two hours. This is consistent with the earlier conversion of the Thyroid Dose Rate plot to a 2-hour Integrated Thyroid Dose for the Reactor and Criticality accidents.

  5. Microbial characterization for the Source-Term Waste Test Program (STTP) at Los Alamos

    SciTech Connect

    Leonard, P.A.; Strietelmeier, B.A.; Pansoy-Hjelvik, M.E.; Villarreal, R.

    1999-04-01

    The effects of microbial activity on the performance of the proposed underground nuclear waste repository, the Waste Isolation Pilot Plant (WIPP) at Carlsbad, New Mexico are being studied at Los Alamos National Laboratory (LANL) as part of an ex situ large-scale experiment. Actual actinide-containing waste is being used to predict the effect of potential brine inundation in the repository in the distant future. The study conditions are meant to simulate what might exist should the underground repository be flooded hundreds of years after closure as a result of inadvertent drilling into brine pockets below the repository. The Department of Energy (DOE) selected LANL to conduct the Actinide Source-Term Waste Test Program (STTP) to confirm the predictive capability of computer models being developed at Sandia National Laboratory.

  6. Methodology and tools for source term assessment in case of emergency.

    PubMed

    Herviou, Karine; Calmtorp, Christer

    2004-01-01

    By looking at the state of the power plant's fission product barriers and critical safety systems, the magnitude of a potential radioactive release can be predicted in a timely manner, allowing an emergency response to be executed even before a release occurs. This is the perspective in which the ASTRID methodology and tool are being developed. The methodology maps out, for several reactor and containment types, the relevant process parameters and indicators, what to calculate and how, and a structured way to summarise and conclude on the potential source term and likely time projections. A computer tool is proposed to support the methodology and to suit different user situations, both on-site and off-site, as well as different staff sizes, priorities and work orders. The output from such an assessment is intended, first, to give a basis for decisions on necessary urgent protective actions before a release and, second, to provide input for the sophisticated dispersion calculation codes. PMID:15238656

  7. Overlap functions, spectroscopic factors, and asymptotic normalization coefficients generated by a shell-model source term

    SciTech Connect

    Timofeyuk, N. K.

    2010-06-15

    Overlap functions for one-nucleon removal are calculated as solutions of the inhomogeneous equation. The source term for this equation is generated by the 0ℏω no-core shell-model wave functions and the effective nucleon-nucleon (NN) interactions that fit oscillator matrix elements derived from the NN scattering data. For the lightest A<=4 nuclei this method gives reasonable agreement with exact ab initio calculations. For 4

  8. Resolution of USQ regarding source term in the 232-Z waste incinerator building

    SciTech Connect

    Westsik, G.

    1995-12-31

    The 232-Z waste incinerator at the Hanford plutonium finishing facility was used to incinerate plutonium-bearing combustible materials generated during normal plant operations. Nondestructive analysis performed after the incinerator ceased operations indicated high plutonium loading in exhaust ductwork near the incinerator glove box, while the incinerator was found to have only low quantities. Measurements following a campaign to remove some of the ductwork resulted in a markedly higher assay value for the incinerator glove box itself. Subsequent assays confirmed the most recent results and pointed to a potential further underestimation of the holdup, in part because of attenuation by fire brick that could not be seen but was thought to be present. Resolution of the raised concerns entailed forming a task team to perform further assay based on gamma and neutron NDA methods. This paper is a discussion of the unreviewed safety question regarding the source term in this area.

  9. The integration of renewable energy sources into electric power distribution systems. Volume 1: National assessment

    SciTech Connect

    Barnes, P.R.; Van Dyke, J.W.; Tesche, F.M.; Zaininger, H.W.

    1994-06-01

    Renewable energy technologies such as photovoltaic, solar thermal electricity, and wind turbine power are environmentally beneficial sources of electric power generation. The integration of renewable energy sources into electric power distribution systems can provide additional economic benefits because of a reduction in the losses associated with transmission and distribution lines. Benefits associated with the deferment of transmission and distribution investment may also be possible for cases where there is a high correlation between peak circuit load and renewable energy electric generation, such as photovoltaic systems in the Southwest. Case studies were conducted with actual power distribution system data for seven electric utilities with the participation of those utilities. Integrating renewable energy systems into electric power distribution systems increased the value of the benefits by about 20 to 55% above central station benefits in the national regional assessment. In the case studies presented in Vol. II, the range was larger: from a few percent to near 80% for a case where costly investments were deferred. In general, additional savings of at least 10 to 20% can be expected by integrating at the distribution level. Wind energy systems were found to be economical in good wind resource regions, whereas photovoltaic systems costs are presently a factor of 2.5 too expensive under the most favorable conditions.

  10. The integration of renewable energy sources into electric power distribution systems. Volume 1: National assessment

    NASA Astrophysics Data System (ADS)

    Barnes, P. R.; Vandyke, J. W.; Tesche, F. M.; Zaininger, H. W.

    1994-06-01

    Renewable energy technologies such as photovoltaic, solar thermal electricity, and wind turbine power are environmentally beneficial sources of electric power generation. The integration of renewable energy sources into electric power distribution systems can provide additional economic benefits because of a reduction in the losses associated with transmission and distribution lines. Benefits associated with the deferment of transmission and distribution investment may also be possible for cases where there is a high correlation between peak circuit load and renewable energy electric generation, such as photovoltaic systems in the Southwest. Case studies were conducted with actual power distribution system data for seven electric utilities with the participation of those utilities. Integrating renewable energy systems into electric power distribution systems increased the value of the benefits by about 20 to 55% above central station benefits in the national regional assessment. In the case studies presented in Vol. 2, the range was larger: from a few percent to near 80% for a case where costly investments were deferred. In general, additional savings of at least 10 to 20% can be expected by integrating at the distribution level. Wind energy systems were found to be economical in good wind resource regions, whereas photovoltaic systems costs are presently a factor of 2.5 too expensive under the most favorable conditions.

  11. A simplified radionuclide source term for total-system performance assessment; Yucca Mountain Site Characterization Project

    SciTech Connect

    Wilson, M.L.

    1991-11-01

    A parametric model for releases of radionuclides from spent-nuclear-fuel containers in a waste repository is presented. The model is appropriate for use in preliminary total-system performance assessments of the potential repository site at Yucca Mountain, Nevada; for this reason it is simpler than the models used for detailed studies of waste-package performance. Terms are included for releases from the spent fuel pellets, from the pellet/cladding gap and the grain boundaries within the fuel pellets, from the cladding of the fuel rods, and from the radioactive fuel-assembly parts. Multiple barriers are considered, including the waste container, the fuel-rod cladding, the thermal ''dry-out'', and the waste form itself. The basic formulas for release from a single fuel rod or container are extended to formulas for expected releases for the whole repository by using analytic expressions for probability distributions of some important parameters. 39 refs., 4 figs., 4 tabs.

  12. A simple method for estimating potential source term bypass fractions from confinement structures

    SciTech Connect

    Kalinich, D.A.; Paddleford, D.F.

    1997-07-01

    Confinement structures house many of the operating processes at the Savannah River Site (SRS). Under normal operating conditions, a confinement structure in conjunction with its associated ventilation systems prevents the release of radiological material to the environment. However, under potential accident conditions, the performance of the ventilation systems and integrity of the structure may be challenged. In order to calculate the radiological consequences associated with a potential accident (e.g., fires, explosions, spills), it is necessary to determine the fraction of the source term initially generated by the accident that escapes from the confinement structure to the environment. While it would be desirable to estimate the potential bypass fraction using sophisticated control-volume/flow path computer codes (e.g., CONTAIN, MELCOR) in order to take as much credit as possible for the mitigative effects of the confinement structure, there are many instances where using such codes is not tractable due to limits on the level of effort allotted to perform the analysis. Moreover, the current review environment, with its emphasis on deterministic/bounding rather than probabilistic/best-estimate analysis, discourages using analytical techniques that require the consideration of a large number of parameters. Discussed herein is a simplified control-volume/flow path approach for calculating the source term bypass fraction that is amenable to solution in a spreadsheet or with a commercial mathematical solver (e.g., MathCad or Mathematica). It considers the effects of wind and fire pressure gradients on the structure, ventilation system operation, and Halon discharges. Simple models are used to characterize the engineered and non-engineered flow paths. By making judicious choices for the limited set of problem parameters, the results from this approach can be defended as bounding and conservative.
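
    In the same spirit as the simplified control-volume/flow path approach, a first-cut bypass fraction can be written as the ratio of leakage outflow to total outflow from a single confinement volume held at some pressure difference. The orifice-style flow law, the discharge coefficient, the leak area and the ventilation flow in the sketch below are all assumptions for illustration, not values from the SRS analysis.

```python
# Single-volume flow split: fraction of airborne source term leaving through an
# unfiltered leak path rather than the engineered ventilation path. All inputs are
# illustrative assumptions.
import math

def path_flow(c: float, area_m2: float, dp_pa: float, rho: float = 1.2) -> float:
    """Volumetric flow (m^3/s) through a leak path treated as a simple orifice."""
    return c * area_m2 * math.sqrt(2.0 * abs(dp_pa) / rho)

q_leak = path_flow(c=0.6, area_m2=0.01, dp_pa=250.0)   # non-engineered leakage path
q_vent = 2.0                                           # filtered ventilation flow (m^3/s)
print(round(q_leak / (q_leak + q_vent), 3))            # bypass fraction ~0.06
```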

  13. Wearable Sensor Localization Considering Mixed Distributed Sources in Health Monitoring Systems.

    PubMed

    Wan, Liangtian; Han, Guangjie; Wang, Hao; Shu, Lei; Feng, Nanxing; Peng, Bao

    2016-03-12

    In health monitoring systems, the base station (BS) and the wearable sensors communicate with each other to construct a virtual multiple input and multiple output (VMIMO) system. In real applications, the signal that the BS received is a distributed source because of the scattering, reflection, diffraction and refraction in the propagation path. In this paper, a 2D direction-of-arrival (DOA) estimation algorithm for incoherently-distributed (ID) and coherently-distributed (CD) sources is proposed based on multiple VMIMO systems. ID and CD sources are separated through the second-order blind identification (SOBI) algorithm. The traditional estimating signal parameters via the rotational invariance technique (ESPRIT)-based algorithm is valid only for one-dimensional (1D) DOA estimation for the ID source. By constructing the signal subspace, two rotational invariant relationships are constructed. Then, we extend the ESPRIT to estimate 2D DOAs for ID sources. For DOA estimation of CD sources, two rotational invariance relationships are constructed based on the application of generalized steering vectors (GSVs). Then, the ESPRIT-based algorithm is used for estimating the eigenvalues of two rotational invariance matrices, which contain the angular parameters. The expressions of azimuth and elevation for ID and CD sources have closed forms, which means that the spectrum peak searching is avoided. Therefore, compared to the traditional 2D DOA estimation algorithms, the proposed algorithm has significantly lower computational complexity. The intersecting point of two rays, which come from two different directions measured by two uniform rectangular arrays (URA), can be regarded as the location of the biosensor (wearable sensor). Three BSs adopting the smart antenna (SA) technique cooperate with each other to locate the wearable sensors using the angulation positioning method. Simulation results demonstrate the effectiveness of the proposed algorithm.

  14. Wearable Sensor Localization Considering Mixed Distributed Sources in Health Monitoring Systems

    PubMed Central

    Wan, Liangtian; Han, Guangjie; Wang, Hao; Shu, Lei; Feng, Nanxing; Peng, Bao

    2016-01-01

    In health monitoring systems, the base station (BS) and the wearable sensors communicate with each other to construct a virtual multiple input and multiple output (VMIMO) system. In real applications, the signal that the BS received is a distributed source because of the scattering, reflection, diffraction and refraction in the propagation path. In this paper, a 2D direction-of-arrival (DOA) estimation algorithm for incoherently-distributed (ID) and coherently-distributed (CD) sources is proposed based on multiple VMIMO systems. ID and CD sources are separated through the second-order blind identification (SOBI) algorithm. The traditional estimating signal parameters via the rotational invariance technique (ESPRIT)-based algorithm is valid only for one-dimensional (1D) DOA estimation for the ID source. By constructing the signal subspace, two rotational invariant relationships are constructed. Then, we extend the ESPRIT to estimate 2D DOAs for ID sources. For DOA estimation of CD sources, two rotational invariance relationships are constructed based on the application of generalized steering vectors (GSVs). Then, the ESPRIT-based algorithm is used for estimating the eigenvalues of two rotational invariance matrices, which contain the angular parameters. The expressions of azimuth and elevation for ID and CD sources have closed forms, which means that the spectrum peak searching is avoided. Therefore, compared to the traditional 2D DOA estimation algorithms, the proposed algorithm has significantly lower computational complexity. The intersecting point of two rays, which come from two different directions measured by two uniform rectangular arrays (URA), can be regarded as the location of the biosensor (wearable sensor). Three BSs adopting the smart antenna (SA) technique cooperate with each other to locate the wearable sensors using the angulation positioning method. Simulation results demonstrate the effectiveness of the proposed algorithm. PMID:26985896

  15. Wearable Sensor Localization Considering Mixed Distributed Sources in Health Monitoring Systems.

    PubMed

    Wan, Liangtian; Han, Guangjie; Wang, Hao; Shu, Lei; Feng, Nanxing; Peng, Bao

    2016-01-01

    In health monitoring systems, the base station (BS) and the wearable sensors communicate with each other to construct a virtual multiple input and multiple output (VMIMO) system. In real applications, the signal that the BS received is a distributed source because of the scattering, reflection, diffraction and refraction in the propagation path. In this paper, a 2D direction-of-arrival (DOA) estimation algorithm for incoherently-distributed (ID) and coherently-distributed (CD) sources is proposed based on multiple VMIMO systems. ID and CD sources are separated through the second-order blind identification (SOBI) algorithm. The traditional estimating signal parameters via the rotational invariance technique (ESPRIT)-based algorithm is valid only for one-dimensional (1D) DOA estimation for the ID source. By constructing the signal subspace, two rotational invariant relationships are constructed. Then, we extend the ESPRIT to estimate 2D DOAs for ID sources. For DOA estimation of CD sources, two rotational invariance relationships are constructed based on the application of generalized steering vectors (GSVs). Then, the ESPRIT-based algorithm is used for estimating the eigenvalues of two rotational invariance matrices, which contain the angular parameters. The expressions of azimuth and elevation for ID and CD sources have closed forms, which means that the spectrum peak searching is avoided. Therefore, compared to the traditional 2D DOA estimation algorithms, the proposed algorithm has significantly lower computational complexity. The intersecting point of two rays, which come from two different directions measured by two uniform rectangular arrays (URA), can be regarded as the location of the biosensor (wearable sensor). Three BSs adopting the smart antenna (SA) technique cooperate with each other to locate the wearable sensors using the angulation positioning method. Simulation results demonstrate the effectiveness of the proposed algorithm. PMID:26985896

  16. Independent assessment of MELCOR as a severe accident thermal-hydraulic/source term analysis tool

    SciTech Connect

    Madni, I.K.; Eltawila, F.

    1994-01-01

    MELCOR is a fully integrated computer code that models all phases of the progression of severe accidents in light water reactor nuclear power plants, and is being developed for the US Nuclear Regulatory Commission (NRC) by Sandia National Laboratories (SNL). Brookhaven National Laboratory (BNL) has a program with the NRC called ''MELCOR Verification, Benchmarking, and Applications,'' whose aim is to provide independent assessment of MELCOR as a severe accident thermal-hydraulic/source term analysis tool. The scope of this program is to perform quality control verification on all released versions of MELCOR, to benchmark MELCOR against more mechanistic codes and experimental data from severe fuel damage tests, and to evaluate the ability of MELCOR to simulate long-term severe accident transients in commercial LWRs, by applying the code to model both BWRs and PWRs. Under this program, BNL provided input to the NRC-sponsored MELCOR Peer Review, and is currently contributing to the MELCOR Cooperative Assessment Program (MCAP). This paper presents a summary of MELCOR assessment efforts at BNL and their contribution to NRC goals with respect to MELCOR.

  17. Long-term aerosol measurements in Gran Canaria, Canary Islands: Particle concentration, sources and elemental composition

    NASA Astrophysics Data System (ADS)

    Gelado-Caballero, María D.; López-García, Patricia; Prieto, Sandra; Patey, Matthew D.; Collado, Cayetano; Hernández-Brito, José J.

    2012-02-01

    There are very few sets of long-term measurements of aerosol concentrations over the North Atlantic Ocean, yet such data is invaluable in quantifying atmospheric dust inputs to this ocean region. We present an 8-year record of total suspended particles (TSP) collected at three stations on Gran Canaria Island, Spain (Taliarte at sea level, Tafira 269 m above sea level (a.s.l.) and Pico de la Gorra 1930 m a.s.l.). Using wet and dry deposition measurements, the mean dust flux was calculated at 42.3 mg m-2 d-1. Air mass back trajectories (HYSPLIT, NOAA) suggested that the Sahara desert is the major source of African dust (dominant during 32-50% of days), while the Sahel desert was the major source only 2-10% of the time (maximum in summer). Elemental composition ratios of African samples indicate that, despite the homogeneity of the dust in collected samples, some signatures of the bedrocks can still be detected. Differences were found for the Sahel, Central Sahara and North of Sahara regions in Ti/Al, Mg/Al and Ca/Al ratios, respectively. Elements often associated with pollution (Pb, Cd, Ni, Zn) appeared to share a common origin, while Cu may have a predominantly local source, as suggested by a decrease in the enrichment factor (EF) of Cu during dust events. The inter-annual variability of dust concentrations is investigated in this work. During winter, African dust concentration measurements at the Pico de la Gorra station were found to correlate with the North Atlantic Oscillation (NAO) index.
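
    A standard way to separate crustal from pollution-derived elements, consistent with the enrichment factor (EF) mentioned above, is to normalize each element to Al in both the sample and a crustal reference. The sketch below shows that ratio calculation only; the reference composition and sample numbers are invented for illustration.

```python
# Crustal enrichment factor EF = (X/Al)_sample / (X/Al)_crust; values well above 1 are
# usually read as non-crustal (pollution) contributions. All numbers here are hypothetical.
def enrichment_factor(x_sample: float, al_sample: float,
                      x_crust: float, al_crust: float) -> float:
    return (x_sample / al_sample) / (x_crust / al_crust)

# Hypothetical aerosol concentrations (ng m-3) and crustal abundances (wt %)
print(round(enrichment_factor(x_sample=5.0, al_sample=800.0,
                              x_crust=0.002, al_crust=8.0), 1))   # EF = 25 -> enriched
```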

  18. Shared and Distributed Memory Parallel Security Analysis of Large-Scale Source Code and Binary Applications

    SciTech Connect

    Quinlan, D; Barany, G; Panas, T

    2007-08-30

    Many forms of security analysis on large scale applications can be substantially automated but the size and complexity can exceed the time and memory available on conventional desktop computers. Most commercial tools are understandably focused on such conventional desktop resources. This paper presents research work on the parallelization of security analysis of both source code and binaries within our Compass tool, which is implemented using the ROSE source-to-source open compiler infrastructure. We have focused on both shared and distributed memory parallelization of the evaluation of rules implemented as checkers for a wide range of secure programming rules, applicable to desktop machines, networks of workstations and dedicated clusters. While Compass as a tool focuses on source code analysis and reports violations of an extensible set of rules, the binary analysis work uses the exact same infrastructure but is less well developed into an equivalent final tool.

  19. Multi-Sensor Integration to Map Odor Distribution for the Detection of Chemical Sources.

    PubMed

    Gao, Xiang; Acar, Levent

    2016-01-01

    This paper addresses the problem of mapping odor distribution derived from a chemical source using multi-sensor integration and reasoning system design. Odor localization is the problem of finding the source of an odor or other volatile chemical. Most localization methods require a mobile vehicle to follow an odor plume along its entire path, which is time consuming and may be especially difficult in a cluttered environment. To solve both of the above challenges, this paper proposes a novel algorithm that combines data from odor and anemometer sensors, and combines sensor data from different positions. Initially, a multi-sensor integration method, together with the path of airflow, was used to map the pattern of odor particle movement. Then, more sensors are introduced at specific regions to determine the probable location of the odor source. Finally, the results of odor source location simulation and a real experiment are presented. PMID:27384568

  20. Multi-Sensor Integration to Map Odor Distribution for the Detection of Chemical Sources

    PubMed Central

    Gao, Xiang; Acar, Levent

    2016-01-01

    This paper addresses the problem of mapping odor distribution derived from a chemical source using multi-sensor integration and reasoning system design. Odor localization is the problem of finding the source of an odor or other volatile chemical. Most localization methods require a mobile vehicle to follow an odor plume along its entire path, which is time consuming and may be especially difficult in a cluttered environment. To solve both of the above challenges, this paper proposes a novel algorithm that combines data from odor and anemometer sensors, and combines sensor data from different positions. Initially, a multi-sensor integration method, together with the path of airflow, was used to map the pattern of odor particle movement. Then, more sensors are introduced at specific regions to determine the probable location of the odor source. Finally, the results of odor source location simulation and a real experiment are presented. PMID:27384568

  1. Geometric-optic synthesis of single-reflector antennas with distributed sources

    NASA Astrophysics Data System (ADS)

    Westcott, B. S.; Brickell, F.

    1984-02-01

    Previous systematic treatments of reflector synthesis have usually assumed that the feed is a point source producing a spherical wave, or a line source producing a cylindrical wave. To cater for a more general source, such as a feed array or a feed/subreflector system, within the existing methodology of geometric-optic synthesis, it is convenient to define a source aperture over which the field can be arbitrarily specified. The basic equations necessary to synthesize a single reflector to meet a prespecified output aperture field distribution are derived under general conditions, and the relationship with existing work is indicated. The Monge-Ampere partial differential equation occupies a central role in the discussion.

  2. Polycyclic aromatic hydrocarbon and elemental carbon size distributions in Los Angeles aerosol: Source resolution and deposition velocities

    SciTech Connect

    Venkataraman, C.

    1992-01-01

    Particulate PAH size distributions for several species were measured, for the first time, at three ambient sites in Los Angeles. PAH size distributions in automobile exhaust were also measured by sampling aerosol in two traffic tunnels. A low flow impactor was used to minimize sampling losses in combination with a high resolution analysis method based on HPLC and fluorescence detection. Elemental carbon size distributions were measured using a thermal evolution method and flame ionization detection. Differences in ambient concentrations and size distributions are explained in terms of location within the basin, seasonal variations and differences in species reactivity and volatility. Differences between tunnel and ambient size distributions are explained in terms of gas to particle conversion. A particle morphology study confirmed that the structure of primary particles (0.05-0.5 µm) is similar to soot agglomerates while the accumulation mode particles (0.5-1 µm) are coated with a film of liquid aerosol. PAH profiles were estimated for the automobile source from the traffic tunnel measurements. These were used along with a characteristic PAH profile for meat cooking to apportion ambient aerosol PAH concentrations at Pico Rivera and Upland. Model estimates show that the Pico Rivera site is dominated by auto emissions which account for over 90% of all PAH (except chrysene, 75%) and CO concentrations measured at the site. 61% of the EC concentration was explained by the model and attributed to auto emissions. In contrast, meat cooking operations contributed significantly (20 to 80%) to the concentrations of 2-4 ring PAH measured at Upland. The 5-ring and larger PAH were attributed to auto emissions at this site as well.

  3. Atmospheric moisture transports to the Arctic from different reanalyses: comparative assessment and analysis of source terms

    NASA Astrophysics Data System (ADS)

    Dufour, Ambroise; Zolina, Olga; Gulev, Sergey

    2014-05-01

    Accurate knowledge of the Arctic heat and moisture balances is critically important for understanding mechanisms of polar climate change and the observed amplification of the Arctic warming. Basic characteristics of the atmosphere in the Arctic region show quite a large spread across the modern-era and first-generation reanalyses, which has prevented effective use of reanalyses for the assessment of atmospheric moisture and heat transports and analysis of variability in the source terms. We used an Eulerian approach to derive and intercompare estimates of the moisture transports in the atmosphere from 5 reanalyses (ERA-Interim, MERRA, NCEP-CFSR, JRA-25, NCEP-1). The computational procedure involved decomposition of the velocity and moisture fields into mean conditions and variations around the mean. This concept allowed for the further association of the mean and eddy transports with large-scale circulation modes (mean component) and synoptic transients (eddy component). The latter was associated with the characteristics of cyclone activity derived from the same reanalyses using a state-of-the-art numerical algorithm for cyclone identification and tracking. Atmospheric moisture transport is most intense over the GIN Sea and the North European basin; however, over this area of most intense transports, the contributions from the eddy and mean transport components are not correlated, hinting at different patterns of variability in moisture fluxes due to cyclone activity and mean circulation. Decadal-scale variability in the atmospheric moisture transports has been further associated with the Arctic-scale and regional differences between local precipitation and evaporation, as well as with the magnitude of the storage terms. Potential mechanisms of variability in these terms are discussed.

  4. Estimating distributions of long-term particulate matter and manganese exposures for residents of Toronto, Canada

    NASA Astrophysics Data System (ADS)

    Clayton, C. A.; Pellizzari, E. D.; Rodes, C. E.; Mason, R. E.; Piper, L. L.

    Methylcyclopentadienyl manganese tricarbonyl (MMT), a manganese-based gasoline additive, has been used in Canadian gasoline for about 20 yr. Because MMT potentially increases manganese levels in particulate matter resulting from automotive exhausts, a population-based study conducted in Toronto, Canada assessed the levels of personal manganese exposures. Integrated 3-day particulate matter (PM 2.5) exposure measurements, obtained for 922 participant periods over the course of a year (September 1995-August 1996), were analyzed for several constituent elements, including Mn. The 922 measurements included 542 participants who provided a single 3-day observation plus 190 participants who provided two observations (in two different months). In addition to characterizing the distributions of 3-day average exposures, which can be estimated directly from the data, including the second observation for some participants enabled us to use a model-based approach to estimate the long-term (i.e. annual) exposure distributions for PM 2.5 mass and Mn. The model assumes that individuals' 3-day average exposure measurements within a given month are lognormally distributed and that the correlation between 3-day log-scale measurements k months apart (after seasonal adjustment) depends only on the lag time, k, and not on the time of year. The approach produces a set of simulated annual exposures from which an annual distribution can be inferred using estimated correlations and monthly means and variances (log scale) as model inputs. The model appeared to perform reasonably well for the overall population distribution of PM 2.5 exposures (mean=28 μg m-3). For example, the model predicted the 95th percentile of the annual distribution to be 62.9 μg m-3 while the corresponding percentile estimated for the 3-day data was 86.6 μg m-3. The assumptions of the model did not appear to hold for the overall population of Mn exposures (mean=13.1 ng m-3). Since the population included
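
    A minimal sketch of the model-based simulation approach is shown below: individual monthly log-scale exposures are drawn from a multivariate normal distribution with a lag-dependent correlation, back-transformed, and averaged into annual exposures. The seasonal means, log-scale variances, and correlation decay are invented values, not the fitted Toronto parameters.

```python
# Sketch of the model-based approach: monthly (log-scale) exposures are
# multivariate normal with a lag-dependent correlation, and annual exposure
# is the average of the back-transformed monthly values.
import numpy as np

rng = np.random.default_rng(1)
months = 12
mu = np.log(25.0) + 0.2 * np.sin(2 * np.pi * np.arange(months) / 12)  # seasonal log-mean
sigma = np.full(months, 0.5)                                           # log-scale SD
lags = np.abs(np.subtract.outer(np.arange(months), np.arange(months)))
rho = 0.6 ** lags                                                      # corr(k) = 0.6^k

cov = rho * np.outer(sigma, sigma)
n_people = 10000
log_monthly = rng.multivariate_normal(mu, cov, size=n_people)
annual = np.exp(log_monthly).mean(axis=1)        # simulated annual averages

print(f"annual mean  {annual.mean():.1f} ug/m3")
print(f"95th pctile  {np.percentile(annual, 95):.1f} ug/m3")
```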

  5. Identification and quantification of mixed sources of oil spills based on distributions and isotope profiles of long-chain n-alkanes.

    PubMed

    Li, Yun; Xiong, Yongqiang

    2009-12-01

    Combined with quantitative determination of the concentration and isotopic composition of petroleum hydrocarbons, weathering simulation experiments on artificially mixed oils and their two end-member oils were performed to identify and quantify mixed sources. The >C(18) n-alkanes show no appreciable losses during short-term weathering. An approach based on the distribution of long-chain n-alkanes (>C(18)) is suggested for estimating the contribution of each source to a mixed oil. The stable carbon isotope profile of individual n-alkanes is a powerful tool for differentiating the sources of oil spills, but it cannot accurately apportion each contribution because of relatively large analytical error.
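
    The proportion estimate reduces to a one-parameter least-squares problem once the two end-member n-alkane distributions are known: the mixture is modeled as f*A + (1-f)*B and f is solved for directly. A sketch with invented concentration profiles is given below.

```python
# Sketch of estimating the mixing proportion of two end-member oils from
# long-chain n-alkane (>C18) distributions by least squares.
# Concentration values are invented for illustration.
import numpy as np

# Relative concentrations of n-C19..n-C30 (arbitrary units) in the end members.
end_a = np.array([8.0, 7.5, 7.0, 6.2, 5.5, 4.8, 4.0, 3.3, 2.7, 2.1, 1.6, 1.2])
end_b = np.array([2.0, 2.4, 3.0, 3.8, 4.5, 5.2, 5.8, 6.1, 6.0, 5.5, 4.8, 4.0])

true_f = 0.65
mixture = true_f * end_a + (1 - true_f) * end_b
mixture += 0.1 * np.random.default_rng(2).standard_normal(mixture.size)  # analytical noise

# Least-squares estimate of f: minimize ||mixture - (f*A + (1-f)*B)||^2.
d = end_a - end_b
f_hat = np.dot(mixture - end_b, d) / np.dot(d, d)
print(f"estimated contribution of end member A: {f_hat:.2f}")
```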

  6. Multi-term approximation to the Boltzmann transport equation for electron energy distribution functions in nitrogen

    NASA Astrophysics Data System (ADS)

    Feng, Yue

    Plasma has many significant applications owing to its composition of both positively and negatively charged particles. The energy distribution function is important in plasma science since it characterizes the ability of the plasma to drive chemical reactions, physical outcomes, and various applications. The Boltzmann Transport Equation is an important kinetic equation that provides an accurate basis for characterizing the distribution function in both energy and space. This dissertation research proposes a multi-term approximation to solve the Boltzmann Transport Equation by treating the relaxation process using an expansion of the electron distribution function in Legendre polynomials. The elastic and 29 inelastic cross sections for electron collisions with nitrogen molecules (N2) and singly ionized nitrogen molecules (N2+) have been used in this application of the Boltzmann Transport Equation. Different numerical methods have been considered to compare the results. The numerical methods discussed in this thesis are the implicit time-independent method, the time-dependent Euler method, the time-dependent Runge-Kutta method, and finally the implicit time-dependent relaxation method, generating the 4-way grid with a matrix solver. The results show that the implicit time-dependent relaxation method is the most accurate and stable method for obtaining reliable results. The results match the published experimental data rather well.
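
    The multi-term Legendre-expansion solver itself is beyond a short sketch, but the stability argument for implicit time stepping of a relaxation term can be illustrated in a few lines. The sketch below applies backward Euler to a toy relaxation model df/dt = -(f - f_M)/tau on an energy grid; the Maxwellian target, relaxation time, and grid are invented, and none of this reproduces the dissertation's collision operators.

```python
# Heavily simplified sketch of implicit (backward-Euler) time stepping of a
# relaxation term, df/dt = -(f - f_M)/tau, on an energy grid. It only
# illustrates why an implicit update stays stable even when dt >> tau.
import numpy as np

energy = np.linspace(0.01, 50.0, 500)          # eV
kT = 2.0                                        # target temperature (eV)
f_maxwell = np.sqrt(energy) * np.exp(-energy / kT)
f_maxwell /= np.trapz(f_maxwell, energy)        # normalized Maxwellian EEDF

f = np.ones_like(energy)
f /= np.trapz(f, energy)                        # arbitrary initial EEDF

tau, dt, steps = 1.0e-6, 5.0e-6, 20             # dt larger than tau, still stable
for _ in range(steps):
    f = (f + (dt / tau) * f_maxwell) / (1.0 + dt / tau)   # backward-Euler update

print("max deviation from Maxwellian:", np.abs(f - f_maxwell).max())
```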

  7. Statistical Measurement of the Gamma-Ray Source-count Distribution as a Function of Energy

    NASA Astrophysics Data System (ADS)

    Zechlin, Hannes-S.; Cuoco, Alessandro; Donato, Fiorenza; Fornengo, Nicolao; Regis, Marco

    2016-08-01

    Statistical properties of photon count maps have recently been proven as a new tool to study the composition of the gamma-ray sky with high precision. We employ the 1-point probability distribution function of six years of Fermi-LAT data to measure the source-count distribution dN/dS and the diffuse components of the high-latitude gamma-ray sky as a function of energy. To that aim, we analyze the gamma-ray emission in five adjacent energy bands between 1 and 171 GeV. It is demonstrated that the source-count distribution as a function of flux is compatible with a broken power law up to energies of ~50 GeV. The index below the break is between 1.95 and 2.0. For higher energies, a simple power law fits the data, with an index of 2.2 (+0.7/-0.3) in the energy band between 50 and 171 GeV. Upper limits on further possible breaks as well as the angular power of unresolved sources are derived. We find that point-source populations probed by this method can explain 83 (+7/-13)% (81 (+52/-19)%) of the extragalactic gamma-ray background between 1.04 and 1.99 GeV (50 and 171 GeV). The method has excellent capabilities for constraining the gamma-ray luminosity function and the spectra of unresolved blazars.
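
    A broken power-law dN/dS of the kind fitted here is easy to work with numerically; the sketch below evaluates such a distribution and integrates it to obtain cumulative source counts N(>S). The normalization, break flux, and indices are illustrative placeholders, not the fitted Fermi-LAT values.

```python
# Sketch of a broken power-law differential source-count distribution dN/dS
# and the implied number of sources above a flux threshold, by numerical
# integration on a log-spaced grid.
import numpy as np

def dnds(s, s_break=1e-8, n1=2.6, n2=1.95, norm=1.0):
    """Differential source counts: index n1 above the break, n2 below it."""
    s = np.asarray(s, dtype=float)
    return np.where(s >= s_break,
                    norm * (s / s_break) ** (-n1),
                    norm * (s / s_break) ** (-n2))

def n_above(s_min, s_max=1e-5, **kw):
    """N(>S_min): integrate dN/dS between S_min and S_max."""
    s = np.logspace(np.log10(s_min), np.log10(s_max), 2000)
    return np.trapz(dnds(s, **kw), s)

for s_min in (1e-10, 1e-9, 1e-8):
    print(f"N(> {s_min:.0e} cm^-2 s^-1) = {n_above(s_min):.3g} (arbitrary norm)")
```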

  8. The Galactic Distribution of Massive Star Formation from the Red MSX Source Survey

    NASA Astrophysics Data System (ADS)

    Figura, Charles C.; Urquhart, J. S.

    2013-01-01

    Massive stars inject enormous amounts of energy into their environments in the form of UV radiation and molecular outflows, creating HII regions and enriching local chemistry. These effects provide feedback mechanisms that aid in regulating star formation in the region, and may trigger the formation of subsequent generations of stars. Understanding the mechanics of massive star formation presents an important key to understanding this process and its role in shaping the dynamics of galactic structure. The Red MSX Source (RMS) survey is a multi-wavelength investigation of ~1200 massive young stellar objects (MYSO) and ultra-compact HII (UCHII) regions identified from a sample of colour-selected sources from the Midcourse Space Experiment (MSX) point source catalog and Two Micron All Sky Survey. We present a study of over 900 MYSO and UCHII regions investigated by the RMS survey. We review the methods used to determine distances, and investigate the radial galactocentric distribution of these sources in context with the observed structure of the galaxy. The distribution of MYSO and UCHII regions is found to be spatially correlated with the spiral arms and galactic bar. We examine the radial distribution of MYSOs and UCHII regions and find variations in the star formation rate between the inner and outer Galaxy and discuss the implications for star formation throughout the galactic disc.

  9. Neuroimaging Evidence for Agenda-Dependent Monitoring of Different Features during Short-Term Source Memory Tests

    ERIC Educational Resources Information Center

    Mitchell, Karen J.; Raye, Carol L.; McGuire, Joseph T.; Frankel, Hillary; Greene, Erich J.; Johnson, Marcia K.

    2008-01-01

    A short-term source monitoring procedure with functional magnetic resonance imaging assessed neural activity when participants made judgments about the format of 1 of 4 studied items (picture, word), the encoding task performed (cost, place), or whether an item was old or new. The results support findings from long-term memory studies showing that…

  10. An efficient central DOA tracking algorithm for multiple incoherently distributed sources

    NASA Astrophysics Data System (ADS)

    Hassen, Sonia Ben; Samet, Abdelaziz

    2015-12-01

    In this paper, we develop a new tracking method for the direction of arrival (DOA) parameters assuming multiple incoherently distributed (ID) sources. The new approach is based on a simple covariance fitting optimization technique exploiting the central and noncentral moments of the source angular power densities to estimate the central DOAs. The current estimates are treated as measurements provided to a Kalman filter that models the dynamics of the directional changes of the moving sources. The covariance-fitting-based algorithm and Kalman filtering are then combined into an adaptive tracking algorithm. Our algorithm is compared to the fast approximated power iteration-total least squares-ESPRIT (FAPI-TLS-ESPRIT) algorithm, which uses the TLS-ESPRIT method and subspace updating via the FAPI algorithm. The proposed algorithm offers excellent DOA tracking performance and outperforms the FAPI-TLS-ESPRIT method, especially at low signal-to-noise ratio (SNR) values. The performance of both methods improves as the SNR increases, more markedly for FAPI-TLS-ESPRIT, and degrades as the number of sources increases. Our method depends on the form of the angular distribution function when tracking the central DOAs, and the more widely the sources are spaced, the more accurately the proposed method tracks the DOAs.
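
    The tracking stage can be illustrated independently of the covariance-fitting front end: the sketch below feeds noisy central-DOA estimates (synthetic here, standing in for the estimator output) to a constant-velocity Kalman filter for a single moving source. The trajectory and noise levels are invented.

```python
# Minimal Kalman-filter DOA tracker: state = [DOA, DOA rate], measurements
# are noisy central-DOA estimates from an upstream estimator.
import numpy as np

rng = np.random.default_rng(3)
T, dt = 200, 0.1
true_doa = 20.0 + 10.0 * np.sin(0.05 * np.arange(T))   # degrees
meas = true_doa + 1.5 * rng.standard_normal(T)          # noisy estimates

F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity state transition
H = np.array([[1.0, 0.0]])               # we observe the DOA only
Q = np.diag([1e-3, 1e-2])                # process noise
R = np.array([[1.5 ** 2]])               # measurement noise

x = np.array([meas[0], 0.0])
P = np.eye(2)
tracked = []
for z in meas:
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    tracked.append(x[0])

rmse = np.sqrt(np.mean((np.array(tracked) - true_doa) ** 2))
print(f"RMS tracking error: {rmse:.2f} deg")
```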

  11. Long-term Satellite Observations of Asian Dust Storm: Source, Pathway, and Interannual Variability

    NASA Technical Reports Server (NTRS)

    Hsu, N. Christina

    2008-01-01

    between Deep Blue retrievals of aerosol optical thickness and those directly from AERONET sunphotometers over desert and semi-desert regions. New Deep Blue products will allow scientists to determine quantitatively the aerosol properties near sources using high spatial resolution measurements from SeaWiFS and MODIS-like instruments. Long-term satellite measurements (1998 - 2007) from SeaWiFS will be utilized to investigate the interannual variability of source, pathway, and dust loading associated with the Asian dust storm outbreaks. In addition, monthly averaged aerosol optical thickness during the springtime from SeaWiFS will also be compared with the MODIS Deep Blue products.

  12. Calculating method for confinement time and charge distribution of ions in electron cyclotron resonance sources

    SciTech Connect

    Dougar-Jabon, V.D.; Umnov, A.M.; Kutner, V.B.

    1996-03-01

    It is common knowledge that the electrostatic pit in the core plasma of electron cyclotron resonance sources exerts strict control over the generation of ions in high charge states. This work aims at finding the dependence of ion lifetime on charge state in the core region and at elaborating a numerical model of ion charge dispersion, not only for the core plasma but for extracted beams as well. The calculated data are in good agreement with the experimental results on charge distributions and beam current magnitudes for beams extracted from the 14 GHz DECRIS source. © 1996 American Institute of Physics.

  13. Simulating of the measurement-device independent quantum key distribution with phase randomized general sources

    PubMed Central

    Wang, Qin; Wang, Xiang-Bin

    2014-01-01

    We present a model on the simulation of the measurement-device independent quantum key distribution (MDI-QKD) with phase randomized general sources. It can be used to predict experimental observations of a MDI-QKD with linear channel loss, simulating corresponding values for the gains, the error rates in different bases, and also the final key rates. Our model is applicable to MDI-QKD with an arbitrary probabilistic mixture of different photon states or using any coding scheme. Therefore, it is useful in characterizing and evaluating the performance of the MDI-QKD protocol, making it a valuable tool for studying quantum key distribution. PMID:24728000

  14. Reactive hydro- and chlorocarbons in the troposphere and lower stratosphere: sources, distributions, and chemical impact

    NASA Astrophysics Data System (ADS)

    Scheeren, H. A.

    2003-09-01

    The work presented in this thesis focuses on measurements of chemically reactive C2-C7 non-methane hydrocarbons (NMHC) and C1-C2 chlorocarbons with atmospheric lifetimes of a few hours up to about a year. The group of reactive chlorocarbons includes the most abundant atmospheric species with large natural sources, namely chloromethane (CH3Cl), dichloromethane (CH2Cl2), and trichloromethane (CHCl3), together with tetrachloroethylene (C2Cl4), which has mainly anthropogenic sources. The NMHC and chlorocarbons are present at relatively low abundances in our atmosphere (10^-12 to 10^-9 mol mol^-1 of air). Nevertheless, they play a key role in atmospheric photochemistry. For example, the oxidation of NMHC plays a dominant role in the formation of ozone in the troposphere, while the photolysis of chlorocarbons contributes to enhanced ozone depletion in the stratosphere. In spite of their important role, however, their global source and sink budgets are still poorly understood. Hence, this study aims at improving our understanding of the sources, distribution, and chemical role of reactive NMHC and chlorocarbons in the troposphere and lower stratosphere. To meet this aim, a comprehensive data set of selected C2-C7 NMHC and chlorocarbons has been analyzed, derived from six aircraft measurement campaigns with two different jet aircraft (the Dutch TUD/NLR Cessna Citation PH-LAB and the German DLR Falcon) conducted between 1995 and 2001 (STREAM 1995, 1997, and 1998, LBA-CLAIRE 1998, INDOEX 1999, MINOS 2001). The NMHC and chlorocarbons were detected by gas chromatography (GC-FID/ECD) in pre-concentrated whole-air samples collected in stainless steel canisters on board the measurement aircraft. The measurement locations include tropical (Maldives/Indian Ocean and Surinam), midlatitude (Western Europe and Canada) and polar regions (Lapland/northern Sweden) between the equator and about 70ºN, covering different seasons and pollution levels in the troposphere and lower stratosphere. Of

  15. Source terms released into the environment for a station blackout severe accident at the Peach Bottom Atomic Power Station

    SciTech Connect

    Carbajo, J.J.

    1995-07-01

    This study calculates source terms released into the environment at the Peach Bottom Atomic Power Station after containment failure during a postulated low-pressure, short-term station blackout severe accident. The severe accident analysis code MELCOR, version 1.8.1, was used in these calculations. Source terms were calculated for three different containment failure modes. The largest environmental releases occur for early containment failure at the drywell liner in contact with the cavity by liner melt-through. This containment failure mode is very likely to occur when the cavity is dry during this postulated severe accident sequence.

  16. Baseline distribution and sources of linear alkyl benzenes (LABs) in surface sediments from Brunei Bay, Brunei.

    PubMed

    Alkhadher, Sadeq Abullah Abdo; Zakaria, Mohamad Pauzi; Yusoff, Fatimah Md; Kannan, Narayanan; Suratman, Suhaimi; Keshavarzifard, Mehrzad; Magam, Sami Muhsen; Masood, Najat; Vaezzadeh, Vahab; Sani, Muhamad Shirwan Abdullah

    2015-12-15

    Sewage pollution is one of the major concerns of coastal and shoreline settlements in Southeast Asia, especially Brunei. The distribution and sources of LABs as sewage molecular markers were evaluated in surface sediments collected from Brunei Bay. The samples were extracted, fractionated and analyzed using gas chromatography-mass spectrometry (GC-MS). LABs concentrations ranged from 7.1 to 41.3 ng g(-1) dry weight (dw) in surficial sediments from Brunei Bay. The results showed that LAB concentrations varied with the intensity of LAB inputs and the degree of anthropogenic influence along Brunei Bay in recent years. The ratio of Internal to External isomers (I/E ratio) of LABs in sediment samples ranged from 0.56 to 2.17 across the Brunei Bay stations, indicating that the study areas were receiving primary and secondary effluents. This is the first study carried out to assess the distribution and sources of LABs in surface sediments from Brunei Bay, Brunei. PMID:26478457

  17. Theoretical and measured electric field distributions within an annular phased array: consideration of source antennas.

    PubMed

    Zhang, Y; Joines, W T; Jirtle, R L; Samulski, T V

    1993-08-01

    The magnitude of E-field patterns generated by an annular array prototype device has been calculated and measured. Two models were used to describe the radiating sources: a simple linear dipole and a stripline antenna model. The stripline model includes detailed geometry of the actual antennas used in the prototype and an estimate of the antenna current based on microstrip transmission line theory. This more detailed model yields better agreement with the measured field patterns, reducing the rms discrepancy by a factor of about 6 (from approximately 23% to 4%) in the central region of interest where the SAR is within 25% of the maximum. We conclude that accurate modeling of source current distributions is important for determining SAR distributions associated with such heating devices. PMID:8258444

  19. Dynamical changes of ion current distribution for a Penning discharge source using a Langmuir probe array.

    PubMed

    Li, M; Xiang, W; Xiao, K X; Chen, L

    2012-02-01

    A parallel-plate electrode and a 9-tip Langmuir probe array located 1 mm behind the extraction exit of a cold-cathode Penning ion source are employed to measure the total current and the dynamical changes of the ion current in the 2D profile, respectively. With the ion source operated from a 500 V DC power supply, the parallel-plate electrode and the Langmuir probe array are driven by a bias voltage ranging from -200 V to 200 V. The dependence of the total current and of the dynamical changes of the ion current in the 2D profile on the bias voltage is presented. The experimental results show that the ion current distribution is axially symmetric and approximately unimodal.

  20. Electron energy distribution function by using probe method in electron cyclotron resonance multicharged ion source

    SciTech Connect

    Kumakura, Sho Kurisu, Yosuke; Kimura, Daiju; Yano, Keisuke; Imai, Youta; Sato, Fuminobu; Kato, Yushi; Iida, Toshiyuki

    2014-02-15

    We are constructing a tandem-type electron cyclotron resonance (ECR) ion source (ECRIS). High-energy electrons in the ECRIS plasma affect the electron energy distribution and generate multicharged ions. In this study, we measure the electron energy distribution function (EEDF) in the low-energy region (≤100 eV) of the ECRIS plasma at extremely low pressures (10^-3–10^-5 Pa) using a cylindrical Langmuir probe. The results show that the EEDF correlates with the electron density and temperature obtained from conventional probe analysis. In addition, we confirm that the tail of the EEDF spreads to higher energies as the pressure rises and that high-energy electrons are present in the ECR multicharged ion source plasma. The effective temperature estimated from the experimentally obtained EEDF is larger than the electron temperature obtained from the conventional method.

  1. Electron energy distribution function by using probe method in electron cyclotron resonance multicharged ion source.

    PubMed

    Kumakura, Sho; Kurisu, Yosuke; Kimura, Daiju; Yano, Keisuke; Imai, Youta; Sato, Fuminobu; Kato, Yushi; Iida, Toshiyuki

    2014-02-01

    We are constructing a tandem-type electron cyclotron resonance (ECR) ion source (ECRIS). High-energy electrons in the ECRIS plasma affect the electron energy distribution and generate multicharged ions. In this study, we measure the electron energy distribution function (EEDF) in the low-energy region (≤100 eV) of the ECRIS plasma at extremely low pressures (10(-3) to 10(-5) Pa) using a cylindrical Langmuir probe. The results show that the EEDF correlates with the electron density and temperature obtained from conventional probe analysis. In addition, we confirm that the tail of the EEDF spreads to higher energies as the pressure rises and that high-energy electrons are present in the ECR multicharged ion source plasma. The effective temperature estimated from the experimentally obtained EEDF is larger than the electron temperature obtained from the conventional method.

  2. A Well-Balanced Path-Integral f-Wave Method for Hyperbolic Problems with Source Terms.

    PubMed

    Leveque, Randall J

    2011-07-01

    Systems of hyperbolic partial differential equations with source terms (balance laws) arise in many applications where it is important to compute accurate time-dependent solutions modeling small perturbations of equilibrium solutions in which the source terms balance the hyperbolic part. The f-wave version of the wave-propagation algorithm is one approach, but requires the use of a particular averaged value of the source terms at each cell interface in order to be "well balanced" and exactly maintain steady states. A general approach to choosing this average is developed using the theory of path conservative methods. A scalar advection equation with a decay or growth term is introduced as a model problem for numerical experiments.
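
    A minimal sketch of the f-wave idea for a scalar balance law q_t + (u q)_x = -lambda*q with u > 0 is given below: the source term is averaged onto each cell interface and subtracted from the flux difference before the fluctuation is propagated, so the discrete steady state in which the flux divergence balances the averaged source is preserved exactly. The grid, parameters, and steady profile are illustrative choices, not taken from the paper.

```python
# f-wave sketch for q_t + (u q)_x = -lam*q with u > 0: the interface-averaged
# source is subtracted from the flux difference, so a discrete steady state
# is maintained to machine precision.
import numpy as np

nx, L = 100, 1.0
dx = L / nx
u, lam = 1.0, 2.0
dt = 0.8 * dx / u

# Discrete steady state of this scheme: u*(Q[i]-Q[i-1]) = -lam*dx*(Q[i]+Q[i-1])/2
q = np.empty(nx)
q[0] = 1.0
r = (1.0 - 0.5 * lam * dx / u) / (1.0 + 0.5 * lam * dx / u)
for i in range(1, nx):
    q[i] = r * q[i - 1]

q0 = q.copy()
for _ in range(200):
    qm, qp = q[:-1], q[1:]                 # left/right states at interfaces
    psi = -lam * 0.5 * (qm + qp)           # interface-averaged source
    z = u * (qp - qm) - dx * psi           # f-wave fluctuation
    q[1:] = q[1:] - dt / dx * z            # all waves move right since u > 0
    # inflow boundary: q[0] stays at its steady value

print("max drift from steady state:", np.abs(q - q0).max())
```

    Because the fluctuation vanishes identically on that discrete steady state, the reported drift is at round-off level, which is exactly the well-balanced property discussed in the abstract.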

  3. Long-term Science Data Curation Using a Digital Object Model and Open-Source Frameworks

    NASA Astrophysics Data System (ADS)

    Pan, J.; Lenhardt, W.; Wilson, B. E.; Palanisamy, G.; Cook, R. B.

    2010-12-01

    Scientific digital content, including Earth Science observations and model output, has become more heterogeneous in format and more distributed across the Internet. In addition, data and metadata are becoming necessarily linked internally and externally on the Web. As a result, such content has become more difficult for providers to manage and preserve and for users to locate, understand, and consume. Specifically, it is increasingly harder to deliver relevant metadata and data processing lineage information along with the actual content consistently. Readme files, data quality information, production provenance, and other descriptive metadata are often separated at the storage level as well as in the data search and retrieval interfaces available to a user. Critical archival metadata, such as audit trails and integrity checks, are often even more difficult for users to access, if they exist at all. We investigate the use of several open-source software frameworks to address these challenges. We use the Fedora Commons Framework and its digital object abstraction as the repository, the Drupal CMS as the user interface, and the Islandora module as the connector from Drupal to the Fedora Repository. With the digital object model, descriptive and provenance metadata can be formally associated with data content, as can external references and other auxiliary information. Changes to an object are formally audited, and digital contents are versioned and have checksums automatically computed. Further, relationships among objects are formally expressed with RDF triples. Data replication, recovery, and metadata export are supported with standard protocols, such as OAI-PMH. We provide a tentative comparative analysis of the chosen software stack with the Open Archival Information System (OAIS) reference model, along with our initial results with the existing terrestrial ecology data collections at NASA’s ORNL Distributed Active Archive Center for
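
    As a small illustration of the last two points, an inter-object relationship and a checksum can be recorded as RDF statements with a library such as rdflib. The namespace, identifiers, and predicates below are invented for illustration; they are not the actual Fedora or ORNL DAAC vocabulary.

```python
# Sketch of expressing repository relationships as RDF triples, in the spirit
# of the digital-object model described above. Names are hypothetical.
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/repo/")
g = Graph()
g.bind("ex", EX)

dataset = EX["dataset/ecosystem-flux-2009"]
readme = EX["object/readme-42"]

g.add((dataset, EX.hasDescriptiveMetadata, readme))          # relationship between objects
g.add((dataset, EX.checksum, Literal("9f86d081884c7d65")))   # truncated example digest

print(g.serialize(format="turtle"))
```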

  4. Influence of the electron source distribution on field-aligned currents

    NASA Technical Reports Server (NTRS)

    Bruening, K.; Goertz, C. K.

    1985-01-01

    The field-aligned current density above a discrete auroral arc has been deduced from the downward electron flux and magnetic field measurements onboard the rocket Porcupine flight 4. Both measurements show that the field-aligned current density is, in spite of decreasing peak energies towards the edge of the arc, about 4 times higher there than in the center of the arc. This can be explained by using the single particle description for an anisotropic electron source distribution.

  5. Empirical tests of Zipf's law mechanism in open source Linux distribution.

    PubMed

    Maillart, T; Sornette, D; Spaeth, S; von Krogh, G

    2008-11-21

    Zipf's power law is a ubiquitous empirical regularity found in many systems, thought to result from proportional growth. Here, we establish empirically the usually assumed ingredients of stochastic growth models that have been previously conjectured to be at the origin of Zipf's law. We use exceptionally detailed data on the evolution of open source software projects in Linux distributions, which offer a remarkable example of a growing complex self-organizing adaptive system, exhibiting Zipf's law over four full decades.
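
    The standard empirical check for Zipf's law is to rank the sizes and estimate the slope of log(size) against log(rank); a slope near -1 corresponds to Zipf's law. The sketch below does this on synthetic heavy-tailed data standing in for the package-link counts.

```python
# Rank-size check for Zipf's law on synthetic Pareto-distributed "sizes".
import numpy as np

rng = np.random.default_rng(4)
sizes = 1.0 + rng.pareto(1.0, size=5000)        # heavy-tailed synthetic sizes
sizes = np.sort(sizes)[::-1]                    # rank 1 = largest
ranks = np.arange(1, sizes.size + 1)

slope, intercept = np.polyfit(np.log(ranks), np.log(sizes), 1)
print(f"rank-size slope = {slope:.2f}  (Zipf's law corresponds to roughly -1)")
```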

  6. Unveiling the Gamma-Ray Source Count Distribution Below the Fermi Detection Limit with Photon Statistics

    NASA Astrophysics Data System (ADS)

    Zechlin, Hannes-S.; Cuoco, Alessandro; Donato, Fiorenza; Fornengo, Nicolao; Vittino, Andrea

    2016-08-01

    The source-count distribution as a function of their flux, {dN}/{dS}, is one of the main quantities characterizing gamma-ray source populations. We employ statistical properties of the Fermi Large Area Telescope (LAT) photon counts map to measure the composition of the extragalactic gamma-ray sky at high latitudes (| b| ≥slant 30°) between 1 and 10 GeV. We present a new method, generalizing the use of standard pixel-count statistics, to decompose the total observed gamma-ray emission into (a) point-source contributions, (b) the Galactic foreground contribution, and (c) a truly diffuse isotropic background contribution. Using the 6 yr Fermi-LAT data set (P7REP), we show that the {dN}/{dS} distribution in the regime of so far undetected point sources can be consistently described with a power law with an index between 1.9 and 2.0. We measure {dN}/{dS} down to an integral flux of ˜ 2× {10}-11 {{cm}}-2 {{{s}}}-1, improving beyond the 3FGL catalog detection limit by about one order of magnitude. The overall {dN}/{dS} distribution is consistent with a broken power law, with a break at {2.1}-1.3+1.0× {10}-8 {{cm}}-2 {{{s}}}-1. The power-law index {n}1={3.1}-0.5+0.7 for bright sources above the break hardens to {n}2=1.97+/- 0.03 for fainter sources below the break. A possible second break of the {dN}/{dS} distribution is constrained to be at fluxes below 6.4× {10}-11 {{cm}}-2 {{{s}}}-1 at 95% confidence level. The high-latitude gamma-ray sky between 1 and 10 GeV is shown to be composed of ˜25% point sources, ˜69.3% diffuse Galactic foreground emission, and ˜6% isotropic diffuse background.

  7. Distributed Lag Analyses of Daily Hospital Admissions and Source-Apportioned Fine Particle Air Pollution

    PubMed Central

    Lall, Ramona; Ito, Kazuhiko; Thurston, George D.

    2011-01-01

    Background: Past time-series studies of the health effects of fine particulate matter [aerodynamic diameter ≤ 2.5 μm (PM2.5)] have used chemically nonspecific PM2.5 mass. However, PM2.5 is known to vary in chemical composition with source, and health impacts may vary accordingly. Objective: We tested the association between source-specific daily PM2.5 mass and hospital admissions in a time-series investigation that considered both single-lag and distributed-lag models. Methods: Daily PM2.5 speciation measurements collected in midtown Manhattan were analyzed via positive matrix factorization source apportionment. Daily and distributed-lag generalized linear models of Medicare respiratory and cardiovascular hospital admissions during 2001–2002 considered PM2.5 mass and PM2.5 from five sources: transported sulfate, residual oil, traffic, steel metal works, and soil. Results: Source-related PM2.5 (specifically steel and traffic) was significantly associated with hospital admissions, whereas total PM2.5 mass was not. Steel metal works–related PM2.5 was associated with respiratory admissions for multiple-lag days, especially during the cleanup efforts at the World Trade Center. Traffic-related PM2.5 was consistently associated with same-day cardiovascular admissions across disease-specific subcategories. PM2.5 constituents associated with each source (e.g., elemental carbon with traffic) were likewise associated with admissions in a consistent manner. Mean effects of distributed-lag models were significantly greater than were maximum single-day effect models for both steel- and traffic-related PM2.5. Conclusions: Past analyses that have considered only PM2.5 mass or only maximum single-day lag effects have likely underestimated PM2.5 health effects by not considering source-specific and distributed-lag effects. Differing lag structures and disease specificity observed for steel-related versus traffic-related PM2.5 raise the possibility of distinct mechanistic pathways of
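
    A distributed-lag model of the kind used here can be sketched as a Poisson regression of daily counts on a small set of lagged exposure columns; the overall effect is the sum of the lag coefficients. The example below uses statsmodels on simulated data and omits the temperature, season, and day-of-week adjustments that a real analysis such as this one would include.

```python
# Distributed-lag Poisson regression sketch: daily admission counts regressed
# on PM2.5 at lags 0..3. Data are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(5)
n, max_lag = 730, 3
pm = 20 + 8 * rng.random(n) + 5 * np.sin(2 * np.pi * np.arange(n) / 365)  # ug/m3, always positive

betas = np.array([0.010, 0.006, 0.003, 0.001])          # per 10 ug/m3, lags 0..3
lagged = np.column_stack([np.roll(pm, k) for k in range(max_lag + 1)])
lagged = lagged[max_lag:]                               # drop rows with wrapped lags
log_mu = np.log(20.0) + lagged @ (betas / 10.0)
y = rng.poisson(np.exp(log_mu))

X = sm.add_constant(
    pd.DataFrame(lagged, columns=[f"pm_lag{k}" for k in range(max_lag + 1)]))
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print(fit.params)
print("sum of lag coefficients (distributed-lag effect):", fit.params.iloc[1:].sum())
```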

  8. Long-term fluctuations of hailstorms in South Moravia, Czech Republic: synthesis of different data sources

    NASA Astrophysics Data System (ADS)

    Chromá, Kateřina; Brázdil, Rudolf; Dolák, Lukáš; Řezníčková, Ladislava; Valášek, Hubert; Zahradníček, Pavel

    2016-04-01

    Hailstorms are natural phenomena that cause great material damage at present, as they did in the past. In Moravia (eastern part of the Czech Republic), systematic meteorological observations generally started in the latter half of the 19th century. Therefore, in order to create long-term series of hailstorms, it is necessary to search for other sources of information. Different types of documentary evidence are used in historical climatology, such as annals, chronicles, diaries, private letters, newspapers etc. Besides these, institutional documentary evidence of economic and administrative character (e.g. taxation records) has particular importance. This study aims to create a long-term series of hailstorms in South Moravia using various types of documentary evidence (taxation records, family archives, chronicles and newspapers being the most important) and systematic meteorological observations in the station network. Although available hailstorm data cover the 1541-2014 period, incomplete documentary evidence allows reasonable analysis of fluctuations in hailstorm frequency only since the 1770s. The series compiled from documentary data and systematic meteorological observations is used to identify periods of lower and higher hailstorm frequency. Existing data may also be used for the study of spatial hailstorm variability. Basic uncertainties of the compiled hailstorm series are discussed. Despite some bias in the hailstorm data, the South Moravian hailstorm series significantly extends our knowledge about this phenomenon in the south-eastern part of the Czech Republic. The study is a part of the research project "Hydrometeorological extremes in Southern Moravia derived from documentary evidence" supported by the Grant Agency of the Czech Republic, reg. no. 13-19831S.

  9. [Influence of water source switching on water quality in drinking water distribution system].

    PubMed

    Wang, Yang; Niu, Zhang-bin; Zhang, Xiao-jian; Chen, Chao; He, Wen-jie; Han, Hong-da

    2007-10-01

    This study investigates how physical and chemical water quality in the distribution system changed during the switching of water sources in city A. Because of the source switching, the water quality became chemically unstable. Owing to the differences between the two sources, pH decreased from 7.54 to 7.18, alkalinity from 188 mg x L(-1) to 117 mg x L(-1), chloride (Cl(-)) from 310 mg x L(-1) to 132 mg x L(-1), and conductance from 0.176 S x m(-1) to 0.087 S x m(-1), while calcium and magnesium ions decreased to 15 mg x L(-1) and 11 mg x L(-1), respectively. Residual chlorine varied as chlorine demand increased and water consumption decreased at night, and the changes in pH, alkalinity and residual chlorine raised iron concentrations to a peak of 0.4 mg x L(-1), exceeding the standard. The influence of these changes in water parameters on the chemical stability of water in the drinking water distribution system is analyzed, and control countermeasures are proposed: increasing pH, dosing phosphate, and improving water quality in the distribution system, especially residual chlorine.

  10. Use of source distributions for evaluating theoretical aerodynamics of thin finite wings at supersonic speeds

    NASA Technical Reports Server (NTRS)

    Evvard, John C

    1950-01-01

    A series of publications on the source-distribution methods for evaluating the aerodynamics of thin wings at supersonic speeds is summarized, extended, and unified. Included in the first part are the derivations of (a) the linearized partial-differential equation for unsteady flow at a substantially constant Mach number; (b) the source-distribution solution for the perturbation-velocity potential that satisfies the boundary conditions of tangential flow at the surface and in the plane of the wing; and (c) the integral equation for determining the strength and location of sources to describe the interaction effects (as represented by upwash) of the bottom and top wing surfaces through the region between the finite wing boundary and the foremost Mach wave. The second part deals with steady-state thin-wing problems. The third part approximates the integral equation for unsteady upwash and includes a solution of the approximate equation. Expressions are then derived to evaluate the load distributions for time-dependent finite-wing motions.
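
    For orientation, the classical planar source-distribution solution that such methods build on can be written compactly. The following is the standard linearized supersonic thin-wing result in modern notation (reproduced here as background, not quoted from the report), where w is the prescribed upwash and the integral runs over the part tau of the wing plane inside the upstream Mach cone from the field point:

```latex
% Linearized supersonic source-distribution solution for the perturbation
% potential on the wing plane (textbook form, stated as an assumption here):
\[
  \phi(x, y, 0^{\pm}) \;=\; \mp \frac{1}{\pi}
  \iint_{\tau}
  \frac{w(\xi, \eta)\, d\xi\, d\eta}
       {\sqrt{(x-\xi)^{2} - \beta^{2}\,(y-\eta)^{2}}},
  \qquad \beta^{2} = M^{2} - 1 .
\]
```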

  11. Experimental measurement-device-independent quantum key distribution with imperfect sources

    NASA Astrophysics Data System (ADS)

    Tang, Zhiyuan; Wei, Kejin; Bedroya, Olinka; Qian, Li; Lo, Hoi-Kwong

    2016-04-01

    Measurement-device-independent quantum key distribution (MDI-QKD), which is immune to all detector side-channel attacks, is the most promising solution to the security issues in practical quantum key distribution systems. Although several experimental demonstrations of MDI-QKD have been reported, they all make one crucial but not yet verified assumption, that is, there are no flaws in state preparation. Such an assumption is unrealistic and security loopholes remain in the source. Here we present a MDI-QKD experiment with the modulation error taken into consideration. By applying the loss-tolerant security proof by Tamaki et al. [Phys. Rev. A 90, 052314 (2014)], 10.1103/PhysRevA.90.052314, we distribute secure keys over fiber links up to 40 km with imperfect sources, which would not have been possible under previous security proofs. By simultaneously closing loopholes at the detectors and a critical loophole—modulation error in the source, our work shows the feasibility of secure QKD with practical imperfect devices.

  12. Source contributions to the regional distribution of secondary particulate matter in California

    NASA Astrophysics Data System (ADS)

    Ying, Qi; Kleeman, Michael J.

    Source contributions to PM2.5 nitrate, sulfate and ammonium ion concentrations in California's San Joaquin Valley (SJV) (4-6 January 1996) and South Coast Air Basin (SoCAB) surrounding Los Angeles (23-25 September 1996) were predicted using a three-dimensional source-oriented Eulerian air quality model. The air quality model tracks the formation of PM2.5 nitrate, sulfate and ammonium ion from primary particles and precursor gases emitted from different sources through a mathematical simulation of emission, chemical reaction, gas-to-particle conversion, transport and deposition. The observed PM2.5 nitrate, sulfate and ammonium ion concentrations, and the mass distribution of nitrate, sulfate and ammonium ion as a function of particle size have been successfully reproduced by the model simulation. Approximately 45-57% of the PM2.5 nitrate and 34-40% of the PM2.5 ammonium ion in the SJV is formed from precursor gaseous species released from sources upwind of the valley. In the SoCAB, approximately 83% of the PM2.5 nitrate and 82% of the PM2.5 ammonium ion is formed from precursor gaseous species released from sources within the air basin. In the SJV, transportation-related sources contribute approximately 24-30% of the PM2.5 nitrate (diesel engines ~13.5-17.0%, catalyst equipped gasoline engines ~10.2-12.8% and non-catalyst equipped gasoline engines ~0.3-0.4%). In the SoCAB, transportation-related sources directly contribute approximately 67% of the PM2.5 nitrate (diesel engines 34.6%, non-catalyst equipped gasoline engines 4.7% and catalyst equipped gasoline engines 28.1%). PM2.5 ammonium ion concentrations in the SJV were dominated by area (including animal) NH3 sources (16.7-25.3%), soil (7.2-10.9%), fertilizer NH3 sources (11.4-17.3%) and point NH3 sources (14.3-21.7%). In the SoCAB, ammonium ion is mainly associated with animal sources (28.2%) and catalyst equipped gasoline engines (16.2%). In both regions, the majority of the relatively low PM2.5 sulfate

  13. Polycyclic Aromatic Hydrocarbons in the Dagang Oilfield (China): Distribution, Sources, and Risk Assessment

    PubMed Central

    Jiao, Haihua; Rui, Xiaoping; Wu, Shanghua; Bai, Zhihui; Zhuang, Xuliang; Huang, Zhanbin

    2015-01-01

    The levels of 16 polycyclic aromatic hydrocarbons (PAHs) were investigated in 27 upper layer (0–25 cm) soil samples collected from the Dagang Oilfield (China) in April 2013 to estimate their distribution, possible sources, and potential risks posed. The total concentrations of PAHs (∑PAHs) varied between 103.6 µg·kg−1 and 5872 µg·kg−1, with a mean concentration of 919.8 µg·kg−1; increased concentrations were noted along a gradient from arable desert soil (mean 343.5 µg·kg−1), to oil well areas (mean of 627.3 µg·kg−1), to urban and residential zones (mean of 1856 µg·kg−1). Diagnostic ratios showed diverse sources of PAHs, including petroleum, liquid fossil fuels, and biomass combustion sources. Combustion sources were most significant for PAHs in arable desert soils and residential zones, while petroleum sources were a significant source of PAHs in oilfield areas. Based on their carcinogenicity, PAHs were classified as carcinogenic (B) or not classified/non-carcinogenic (NB). The total concentrations of carcinogenic PAHs (∑BPAHs) varied from 13.3 µg·kg−1 to 4397 µg·kg−1 across all samples, with a mean concentration of 594.4 µg·kg−1. The results suggest that oilfield soil is subject to a certain level of ecological environment risk. PMID:26016436

  14. Polycyclic aromatic hydrocarbons in the dagang oilfield (china): distribution, sources, and risk assessment.

    PubMed

    Jiao, Haihua; Rui, Xiaoping; Wu, Shanghua; Bai, Zhihui; Zhuang, Xuliang; Huang, Zhanbin

    2015-06-01

    The levels of 16 polycyclic aromatic hydrocarbons (PAHs) were investigated in 27 upper layer (0-25 cm) soil samples collected from the Dagang Oilfield (China) in April 2013 to estimate their distribution, possible sources, and potential risks posed. The total concentrations of PAHs (∑PAHs) varied between 103.6 µg·kg(-1) and 5872 µg·kg(-1), with a mean concentration of 919.8 µg·kg(-1); increased concentrations were noted along a gradient from arable desert soil (mean 343.5 µg·kg(-1)), to oil well areas (mean of 627.3 µg·kg(-1)), to urban and residential zones (mean of 1856 µg·kg(-1)). Diagnostic ratios showed diverse sources of PAHs, including petroleum, liquid fossil fuels, and biomass combustion sources. Combustion sources were most significant for PAHs in arable desert soils and residential zones, while petroleum sources were a significant source of PAHs in oilfield areas. Based on their carcinogenicity, PAHs were classified as carcinogenic (B) or not classified/non-carcinogenic (NB). The total concentrations of carcinogenic PAHs (∑BPAHs) varied from 13.3 µg·kg(-1) to 4397 µg·kg(-1) across all samples, with a mean concentration of 594.4 µg·kg(-1). The results suggest that oilfield soil is subject to a certain level of ecological environment risk. PMID:26016436

  15. Long-Term Bacterial Dynamics in a Full-Scale Drinking Water Distribution System

    PubMed Central

    Prest, E. I.; Weissbrodt, D. G.; Hammes, F.; van Loosdrecht, M. C. M.; Vrouwenvelder, J. S.

    2016-01-01

    Large seasonal variations in microbial drinking water quality can occur in distribution networks, but are often not taken into account when evaluating results from short-term water sampling campaigns. Temporal dynamics in bacterial community characteristics were investigated during a two-year drinking water monitoring campaign in a full-scale distribution system operating without detectable disinfectant residual. A total of 368 water samples were collected on a biweekly basis at the water treatment plant (WTP) effluent and at one fixed location in the drinking water distribution network (NET). The samples were analysed for heterotrophic plate counts (HPC), Aeromonas plate counts, adenosine-tri-phosphate (ATP) concentrations, and flow cytometric (FCM) total and intact cell counts (TCC, ICC), water temperature, pH, conductivity, total organic carbon (TOC) and assimilable organic carbon (AOC). Multivariate analysis of the large dataset was performed to explore correlative trends between microbial and environmental parameters. The WTP effluent displayed considerable seasonal variations in TCC (from 90 × 10^3 cells mL-1 in winter time up to 455 × 10^3 cells mL-1 in summer time) and in bacterial ATP concentrations (<1–3.6 ng L-1), which were congruent with water temperature variations. These fluctuations were not detected with HPC and Aeromonas counts. The water in the network was predominantly influenced by the characteristics of the WTP effluent. The increase in ICC between the WTP effluent and the network sampling location was small (34 × 10^3 cells mL-1 on average) compared to seasonal fluctuations in ICC in the WTP effluent. Interestingly, the extent of bacterial growth in the NET was inversely correlated to AOC concentrations in the WTP effluent (Pearson’s correlation factor r = -0.35), and positively correlated with water temperature (r = 0.49). Collecting a large dataset at high frequency over a two year period enabled the characterization of previously

  16. Balancing continuous-variable quantum key distribution with source-tunable linear optics cloning machine

    NASA Astrophysics Data System (ADS)

    Guo, Ying; Lv, Geli; Zeng, Guihua

    2015-11-01

    We show that the tolerable excess noise can be dynamically balanced in source preparation while inserting a tunable linear optics cloning machine (LOCM) for balancing the secret key rate and the maximal transmission distance of continuous-variable quantum key distribution (CVQKD). The intensities of source noise are sensitive to the tunable LOCM and can be stabilized to the suitable values to eliminate the impact of channel noise and defeat the potential attacks even in the case of the degenerated linear optics amplifier (LOA). The LOCM-additional noise can be elegantly employed by the reference partner of reconciliation to regulate the secret key rate and the transmission distance. Simulation results show that there is a considerable improvement in the secret key rate of the LOCM-based CVQKD while providing a tunable LOCM for source preparation with the specified parameters in suitable ranges.

  17. Heralded single-photon sources for quantum-key-distribution applications

    NASA Astrophysics Data System (ADS)

    Schiavon, Matteo; Vallone, Giuseppe; Ticozzi, Francesco; Villoresi, Paolo

    2016-01-01

    Single-photon sources (SPSs) are a fundamental building block for optical implementations of quantum information protocols. Among SPSs, multiple crystal heralded single-photon sources seem to give the best compromise between high pair production rate and low multiple photon events. In this work, we study their performance in a practical quantum-key-distribution experiment, by evaluating the achievable key rates. The analysis focuses on the two different schemes, symmetric and asymmetric, proposed for the practical implementation of heralded single-photon sources, with attention on the performance of their composing elements. The analysis is based on the protocol proposed by Bennett and Brassard in 1984 and on its improvement exploiting decoy state technique. Finally, a simple way of exploiting the postselection mechanism for a passive, one decoy state scheme is evaluated.

  18. A review of the environmental distribution, fate, and control of tetrabromobisphenol A released from sources.

    PubMed

    Malkoske, Tyler; Tang, Yulin; Xu, Wenying; Yu, Shuili; Wang, Hongtao

    2016-11-01

    Tetrabromobisphenol A (TBBPA) is a high-use brominated flame retardant (BFR) that raises concerns of widespread pollution and harm to human and ecological health. BFR manufacturing, TBBPA-based product manufacturing, e-waste recycling, and wastewater treatment plants have been identified as the main emission point sources. This paper discusses the occurrence, distribution, and fate of TBBPA from source to the environment. After release to the environment, TBBPA may undergo adsorption, photolysis, and biological degradation. Exposure of humans and biota is also discussed, along with the role of treatment and regulations in reducing the release of TBBPA to the environment and the associated exposure risks. In general, this review found that stronger enforcement of existing legislation and investment in the treatment of e-waste plastics and wastewater from emission point sources could be effective in reducing the release of, and exposure to, TBBPA in the environment. PMID:27325014

  19. 3D modeling of the electron energy distribution function in negative hydrogen ion sources.

    PubMed

    Terasaki, R; Fujino, I; Hatayama, A; Mizuno, T; Inoue, T

    2010-02-01

    For optimization and accurate prediction of H- ion production in negative ion sources, analysis of the electron energy distribution function (EEDF) is necessary. We are developing a numerical code which analyzes the EEDF in the tandem-type arc-discharge source. It is a three-dimensional Monte Carlo simulation code with realistic geometry and magnetic configuration. Coulomb collisions between electrons are treated with the "binary collision" model, and collisions with hydrogen species are treated with the "null-collision" method. We applied this code to the analysis of the JAEA 10 A negative ion source. The numerical result shows that the obtained EEDF is in good agreement with experimental results.
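
    The "null-collision" free-flight sampling mentioned above can be illustrated compactly: flight times are drawn with a constant trial frequency nu_max that bounds the total collision frequency, and at each event a real process is selected with probability nu_i(E)/nu_max, otherwise the event is a null collision and nothing changes. The collision-frequency shapes, energy range, and nu_max below are invented placeholders, not the cross sections used in the code described above.

```python
# Null-collision Monte Carlo sketch: constant trial frequency NU_MAX, with
# real collision types chosen in proportion to their energy-dependent rates.
import numpy as np

rng = np.random.default_rng(6)

def nu_elastic(E):      # collision frequencies (s^-1) vs energy (eV); made up
    return 1.0e8 * np.sqrt(E) / (1.0 + 0.1 * E)

def nu_excitation(E):   # threshold process with a 10 eV onset; made up
    return 2.0e7 * np.sqrt(np.maximum(E - 10.0, 0.0))

NU_MAX = 3.0e8          # must bound nu_elastic + nu_excitation over the energy range

def next_event(E):
    """Return (flight_time, event_type) for an electron of energy E."""
    t = -np.log(rng.random()) / NU_MAX
    r = rng.random() * NU_MAX
    if r < nu_elastic(E):
        return t, "elastic"
    if r < nu_elastic(E) + nu_excitation(E):
        return t, "excitation"
    return t, "null"

counts = {"elastic": 0, "excitation": 0, "null": 0}
for _ in range(100000):
    _, kind = next_event(E=rng.uniform(0.1, 30.0))
    counts[kind] += 1
print(counts)
```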

  20. Codon information value and codon transition-probability distributions in short-term evolution

    NASA Astrophysics Data System (ADS)

    Jiménez-Montaño, M. A.; Coronel-Brizio, H. F.; Hernández-Montoya, A. R.; Ramos-Fernández, A.

    2016-07-01

    To understand the way the Genetic Code and the physical-chemical properties of coded amino acids affect accepted amino acid substitutions in short-term protein evolution, taking into account only overall amino acid conservation, we consider an underlying codon-level model. This model employs codon pair-substitution frequencies from an empirical matrix in the literature, modified for single-base mutations only. Ordering the degenerated codons according to their codon information value (Volkenstein, 1979), we found that three-fold and most of four-fold degenerated codons, which have low codon values, were best fitted to rank-frequency distributions with constant failure rate (exponentials). In contrast, almost all two-fold degenerated codons, which have high codon values, were best fitted to rank-frequency distributions with variable failure rate (inverse power-laws). Six-fold degenerated codons are considered to be doubly assigned. The exceptional behavior of some codons, including non-degenerate codons, is discussed.
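
    The model comparison described above can be mimicked in a few lines: a rank-frequency vector is fitted both with an exponential (constant failure rate, linear in rank on a log scale) and with an inverse power law (variable failure rate, linear in log rank), and the better fit is selected by residual sum of squares. The frequencies below are invented, not codon substitution data.

```python
# Compare exponential vs inverse power-law rank-frequency fits on log scale.
import numpy as np

freqs = np.sort(np.array([0.40, 0.22, 0.14, 0.09, 0.06, 0.045, 0.03, 0.015]))[::-1]
ranks = np.arange(1, freqs.size + 1)
logf = np.log(freqs)

# exponential: log f = a - b * rank      (linear in rank)
coef_exp = np.polyfit(ranks, logf, 1)
rss_exp = np.sum((np.polyval(coef_exp, ranks) - logf) ** 2)

# inverse power law: log f = a - b * log(rank)   (linear in log rank)
coef_pow = np.polyfit(np.log(ranks), logf, 1)
rss_pow = np.sum((np.polyval(coef_pow, np.log(ranks)) - logf) ** 2)

print(f"exponential RSS {rss_exp:.4f}   power-law RSS {rss_pow:.4f}")
print("better fit:", "exponential" if rss_exp < rss_pow else "inverse power law")
```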

  1. Long term care financing in four OECD countries: fiscal burden and distributive effects.

    PubMed

    Karlsson, Martin; Mayhew, Les; Rickayzen, Ben

    2007-01-01

    This paper compares long term care (LTC) systems in four OECD countries (UK, Japan, Sweden and Germany). In the UK, provision is means tested, so that out of pocket payments depend on levels of income, savings and assets. In Sweden, where the system is wholly tax-financed, provision is essentially free at the point of use. In Germany and Japan, provision is financed from recently introduced compulsory insurance schemes, although the details of how each scheme operates and the distributive consequences differ somewhat. The paper analyses the effects of importing the other three countries' systems for financing LTC into the UK, focussing on both the distributive consequences and the tax burden. It finds that the German system would not be an improvement on the current UK system, because it uses a regressive method of financing. Therefore, the discussion of possible alternatives to the present UK system could be restricted to a general tax-based system as used in Sweden or the compulsory insurance system as used in Japan. The results suggest that all three systems would imply increased taxes in the UK. PMID:16564108

  3. Source apportionment of ambient fine particle size distribution using positive matrix factorization in Erfurt, Germany

    PubMed Central

    Yue, Wei; Stölzel, Matthias; Cyrys, Josef; Pitz, Mike; Heinrich, Joachim; Kreyling, Wolfgang G.; Wichmann, H.-Erich; Peters, Annette; Wang, Sheng; Hopke, Philip K.

    2008-01-01

    Particle size distribution data collected between September 1997 and August 2001 in Erfurt, Germany were used to investigate the sources of ambient particulate matter by positive matrix factorization (PMF). A total of 29,313 hourly averaged particle size distribution measurements covering the size range of 0.01 to 3.0 μm were included in the analysis. The particle number concentrations (cm−3) for the 9 channels in the ultrafine range, and mass concentrations (ng m−3) for the 41 size bins in the accumulation mode and particle up to 3 μm in aerodynamic diameter were used in the PMF. The analysis was performed separately for each season. Additional analyses were performed including calculations of the correlations of factor contributions with gaseous pollutants (O3, NO, NO2, CO and SO2) and particle composition data (sulfate, organic carbon and elemental carbon), estimating the contributions of each factor to the total number and mass concentration, identifying the directional locations of the sources using the conditional probability function, and examining the diurnal patterns of factor scores. These results were used to assist in the interpretation of the factors. Five factors representing particles from airborne soil, ultrafine particles from local traffic, secondary aerosols from local fuel combustion, particles from remote traffic sources, and secondary aerosols from multiple sources were identified in all seasons. PMID:18433834
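
    A minimal sketch of factor-analytic source apportionment on an hours-by-size-bins matrix is shown below. Scikit-learn's NMF is used only as a convenient stand-in for PMF (true PMF additionally weights each matrix entry by its measurement uncertainty), and the data are random placeholders rather than the Erfurt measurements.

```python
# Factor-analytic decomposition X ~ G @ F of a non-negative data matrix,
# with NMF standing in for PMF. Data are synthetic placeholders.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(7)
n_hours, n_size_bins, n_factors = 500, 50, 5

# Synthetic non-negative data matrix: hours x size bins.
true_profiles = rng.random((n_factors, n_size_bins))
true_contribs = rng.gamma(2.0, 1.0, size=(n_hours, n_factors))
X = true_contribs @ true_profiles + 0.01 * rng.random((n_hours, n_size_bins))

model = NMF(n_components=n_factors, init="nndsvda", max_iter=500, random_state=0)
G = model.fit_transform(X)     # factor contributions (hours x factors)
F = model.components_          # factor profiles (factors x size bins)

rel_error = np.linalg.norm(X - G @ F) / np.linalg.norm(X)
print(f"relative reconstruction error: {rel_error:.3f}")
```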

  4. Performance metrics and variance partitioning reveal sources of uncertainty in species distribution models

    USGS Publications Warehouse

    Watling, James I.; Brandt, Laura A.; Bucklin, David N.; Fujisaki, Ikuko; Mazzotti, Frank J.; Romanach, Stephanie; Speroterra, Carolina

    2015-01-01

    Species distribution models (SDMs) are widely used in basic and applied ecology, making it important to understand sources and magnitudes of uncertainty in SDM performance and predictions. We analyzed SDM performance and partitioned variance among prediction maps for 15 rare vertebrate species in the southeastern USA using all possible combinations of seven potential sources of uncertainty in SDMs: algorithms, climate datasets, model domain, species presences, variable collinearity, CO2 emissions scenarios, and general circulation models. The choice of modeling algorithm was the greatest source of uncertainty in SDM performance and prediction maps, with some additional variation in performance associated with the comprehensiveness of the species presences used for modeling. Other sources of uncertainty that have received attention in the SDM literature such as variable collinearity and model domain contributed little to differences in SDM performance or predictions in this study. Predictions from different algorithms tended to be more variable at northern range margins for species with more northern distributions, which may complicate conservation planning at the leading edge of species' geographic ranges. The clear message emerging from this work is that researchers should use multiple algorithms for modeling rather than relying on predictions from a single algorithm, invest resources in compiling a comprehensive set of species presences, and explicitly evaluate uncertainty in SDM predictions at leading range margins.

  5. Radiation Therapy Photon Beams Dose Conformation According to Dose Distribution Around Intracavitary-Applied Brachytherapy Sources

    SciTech Connect

    Jurkovic, Slaven Zauhar, Gordana; Faj, Dario; Radojcic, Deni Smilovic; Svabic, Manda

    2010-04-01

    Intracavitary application of brachytherapy sources followed by external beam radiation is essential for the local treatment of carcinoma of the cervix. Because brachytherapy sources deliver very high doses to the central portion of the target volume, this part of the target volume must be shielded while being irradiated by photon beams. Several shielding techniques are available, from a rectangular block and a standard cervix wedge to more precise, customized step-wedge filters. Because the shape of a step-wedge filter has usually been calculated from an effective attenuation coefficient, an approach that accounts more precisely for scattered radiation is suggested. The method was verified under simulated clinical conditions using film dosimetry. Measured data for various compensators were compared to the numerically determined sum of the dose distribution around the brachytherapy sources and that of the compensated beam. Improvements in the total dose distribution using our method are demonstrated. Agreement between calculation and measurement was within 3%. The sensitivity of the method to source displacement during treatment has also been investigated.

  6. Effect of tissue inhomogeneity on dose distribution of point sources of low-energy electrons.

    PubMed

    Kwok, C S; Bialobzyski, P J; Yu, S K; Prestwich, W V

    1990-01-01

    Perturbation in dose distributions of point sources of low-energy electrons at planar interfaces of cortical bone (CB) and red marrow (RM) was investigated experimentally and by Monte Carlo codes EGS and the TIGER series. Ultrathin LiF thermoluminescent dosimeters were used to measure the dose distributions of point sources of 204Tl and 147Pm in RM. When the point sources were at 12 mg/cm2 from a planar interface of CB and RM equivalent plastics, dose enhancement ratios in RM averaged over the region 0-12 mg/cm2 from the interface were measured to be 1.08 +/- 0.03 (SE) and 1.03 +/- 0.03 (SE) for 204Tl and 147Pm, respectively. The Monte Carlo codes predicted 1.05 +/- 0.02 and 1.01 +/- 0.02 for the two nuclides, respectively. However, EGS gave consistently 3% higher dose in the dose scoring region than the TIGER series when point sources of monoenergetic electrons up to 0.75 MeV energy were considered in the homogeneous RM situation or in the CB and RM heterogeneous situation. By means of the TIGER series, it was demonstrated that aluminum, which is normally assumed to be equivalent to CB in radiation dosimetry, leads to an overestimation of backscattering of low-energy electrons in soft tissue at a CB-soft-tissue interface by as much as a factor of 2.

  7. Regional Sources of Nitrous Oxide over the United States: Seasonal Variation and Spatial Distribution

    SciTech Connect

    Miller, S. M.; Kort, E. A.; Hirsch, A. I.; Dlugokencky, E. J.; Andrews, A. E.; Xu, X.; Tian, H.; Nehrkorn, T.; Eluszkiewicz, J.; Michalak, A. M.; Wofsy, S. C.

    2012-01-01

    This paper presents top-down constraints on the magnitude, spatial distribution, and seasonality of nitrous oxide (N2O) emissions over the central United States. We analyze data from tall towers in 2004 and 2008 using a high resolution Lagrangian particle dispersion model paired with both geostatistical and Bayesian inversions. Our results indicate peak N2O emissions in June with a strong seasonal cycle. The spatial distribution of sources closely mirrors data on fertilizer application with particularly large N2O sources over the US Cornbelt. Existing inventories for N2O predict emissions that differ substantially from the inverse model results in both seasonal cycle and magnitude. We estimate a total annual N2O budget over the central US of 0.9-1.2 TgN/yr and an extrapolated budget for the entire US and Canada of 2.1-2.6 TgN/yr. By this estimate, the US and Canada account for 12-15% of the total global N2O source or 32-39% of the global anthropogenic source as reported by the Intergovernmental Panel on Climate Change in 2007.

  8. Analysis of electron energy distribution function in the Linac4 H⁻ source.

    PubMed

    Mochizuki, S; Mattei, S; Nishida, K; Hatayama, A; Lettry, J

    2016-02-01

    To understand the Electron Energy Distribution Function (EEDF) in the Radio Frequency Inductively Coupled Plasmas (RF-ICPs) in hydrogen negative ion sources, a detailed analysis of the EEDFs using numerical simulation and a theoretical approach based on the Boltzmann equation has been performed. It is shown that the EEDF of RF-ICPs consists of two parts: a low-energy part, which obeys a Maxwellian distribution, and a high-energy part, which deviates from a Maxwellian distribution. These simulation results have been confirmed to be reasonable by the analytical approach. The results suggest that it is possible to enhance the dissociation of molecules and the resultant H(-) negative ion production by reducing the gas pressure. PMID:26931990

  9. Analysis of electron energy distribution function in the Linac4 H⁻ source.

    PubMed

    Mochizuki, S; Mattei, S; Nishida, K; Hatayama, A; Lettry, J

    2016-02-01

    To understand the Electron Energy Distribution Function (EEDF) in the Radio Frequency Inductively Coupled Plasmas (RF-ICPs) in hydrogen negative ion sources, a detailed analysis of the EEDFs using numerical simulation and a theoretical approach based on the Boltzmann equation has been performed. It is shown that the EEDF of RF-ICPs consists of two parts: a low-energy part, which obeys a Maxwellian distribution, and a high-energy part, which deviates from a Maxwellian distribution. These simulation results have been confirmed to be reasonable by the analytical approach. The results suggest that it is possible to enhance the dissociation of molecules and the resultant H(-) negative ion production by reducing the gas pressure.

  10. Assessing the complexity of short-term heartbeat interval series by distribution entropy.

    PubMed

    Li, Peng; Liu, Chengyu; Li, Ke; Zheng, Dingchang; Liu, Changchun; Hou, Yinglong

    2015-01-01

    Complexity of heartbeat interval series is typically measured by entropy. Recent studies have found that sample entropy (SampEn) or fuzzy entropy (FuzzyEn) quantifies essentially the randomness, which may not be uniformly identical to complexity. Additionally, these entropy measures are heavily dependent on the predetermined parameters and confined to data length. Aiming at improving the robustness of complexity assessment for short-term RR interval series, this study developed a novel measure--distribution entropy (DistEn). The DistEn took full advantage of the inherent information underlying the vector-to-vector distances in the state space by probability density estimation. Performances of DistEn were examined by theoretical data and experimental short-term RR interval series. Results showed that DistEn correctly ranked the complexity of simulated chaotic series and Gaussian noise series. The DistEn had relatively lower sensitivity to the predetermined parameters and showed stability even for quantifying the complexity of extremely short series. Analysis further showed that the DistEn indicated the loss of complexity in both healthy aging and heart failure patients (both p < 0.01), whereas neither the SampEn nor the FuzzyEn achieved comparable results (all p ≥ 0.05). This study suggested that the DistEn would be a promising measure for prompt clinical examination of cardiovascular function.
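
    The DistEn idea summarized above can be written down compactly: embed the series in an m-dimensional state space, take all inter-vector Chebyshev distances, estimate their empirical density with a fixed-bin histogram, and compute the normalized Shannon entropy of that density. The sketch below follows this recipe; the embedding dimension and bin count are illustrative choices, not the paper's settings.

```python
import numpy as np
from scipy.spatial.distance import pdist

def dist_en(x, m=2, bins=512):
    """Distribution entropy (DistEn) sketch for a 1-D series."""
    x = np.asarray(x, dtype=float)
    n = len(x) - m + 1
    vectors = np.array([x[i:i + m] for i in range(n)])  # state-space vectors
    d = pdist(vectors, metric="chebyshev")              # all pairwise distances
    p, _ = np.histogram(d, bins=bins)                   # empirical density of distances
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log2(p)) / np.log2(bins)      # normalized Shannon entropy

# Example: an irregular (white-noise) series gives a relatively high DistEn value
print(dist_en(np.random.randn(300)))
```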

  11. Two-dimensional extended fluid model for a dc glow discharge with nonlocal ionization source term

    NASA Astrophysics Data System (ADS)

    Rafatov, Ismail; Bogdanov, Eugeny; Kudryavtsev, Anatoliy

    2013-09-01

    Numerical techniques applied to gas discharge plasma modelling are generally grouped into fluid and kinetic (particle) methods, and their combinations, which lead to hybrid models. Hybrid models usually employ a Monte Carlo method to simulate fast electron dynamics, while slow plasma species are described as fluids. However, since the fast electrons' contribution to these models is limited to deriving the ionization rate distribution, their effect can be expressed by an analytical approximation of the ionization source function, which is then integrated into the fluid model. In the context of this approach, we incorporated the effect of fast electrons into the ``extended fluid model'' of a glow discharge, using two spatial dimensions. Slow electrons, ions and excited neutral species are described by the fluid plasma equations. Slow electron transport (diffusion and mobility) coefficients as well as electron-induced reaction rates are determined from the solutions of the electron Boltzmann equation. The self-consistent electric field is calculated using the Poisson equation. We carried out test calculations for a discharge in argon gas. Comparison with the experimental data as well as with hybrid model results exhibits good applicability of the proposed model. The work was supported by the joint research grant from the Scientific and Technical Research Council of Turkey (TUBITAK) 212T164 and the Russian Foundation for Basic Research (RFBR).

  12. Mercury in soil near a long-term air emission source in southeastern Idaho

    USGS Publications Warehouse

    Abbott, M.L.; Susong, D.D.; Olson, M.; Krabbenhoft, D.P.

    2003-01-01

    At the Idaho National Engineering and Environmental Laboratory in southeastern Idaho, a 500 °C fluidized bed calciner was intermittently operated for 37 years, with measured Hg emission rates of 9-11 g/h. Surface soil was sampled at 57 locations around the facility to determine the spatial distribution of Hg fallout and surface Hg variability, and to predict the total residual Hg mass in the soil from historical emissions. Measured soil concentrations were slightly higher (p<0.05) within 5 km of the source but were overall very low (15-20 ng/g) compared to background Hg levels published for similar soils in the USA (50-70 ng/g). Concentrations decreased 4%/cm with depth and were found to be twice as high under shrubs and in depressions. Mass balance calculations accounted for only 2.5-20% of the estimated total Hg emitted over the 37-year calciner operating history. These results suggest that much of the Hg deposited from calciner operations may have been reduced in the soil and re-emitted as Hg(0) to the global atmospheric pool.

  13. From Source to City: Particulate Matter Concentration and Size Distribution Data from an Icelandic Dust Storm

    NASA Astrophysics Data System (ADS)

    Thorsteinsson, T.; Mockford, T.; Bullard, J. E.

    2015-12-01

    Dust storms are the source of particulate matter in 20%-25% of the cases in which the PM10 health limit is exceeded in Reykjavik, which occurred approximately 20 times a year in 2005-2010. Some of the most active source areas for dust storms in Iceland, contributing to the particulate matter load in Reykjavik, are on the south coast of Iceland, with more than 20 dust storm days per year (in 2002-2011). Measurements of particulate matter concentration and size distribution were recorded at Markarfljot in May and June 2015. Markarfljot is a glacial river that is fed by Eyjafjallajokull and Myrdalsjokull, and the downstream sandur areas have been shown to be significant dust sources. Particulate matter concentration during dust storms was recorded on the sandur area using a TSI DustTrak DRX Aerosol Monitor 8533, and particle size data were recorded using a TSI Optical Particle Sizer 3330 (OPS). Wind speed was measured using cup anemometers at five heights. Particle size measured at the source area shows extremely fine dust production, with the PM1 concentration reaching over 5000 μg/m3 and accounting for most of the mass. This is potentially due to sand particles chipping during saltation instead of breaking uniformly. Dust events occurring during easterly winds were captured by two permanent PM10 aerosol monitoring stations in Reykjavik (140 km west of Markarfljot), suggesting the regional nature of these events. OPS measurements from Reykjavik also provide an interesting comparison of particle size distribution from source to city. Dust storms contribute to the particulate matter pollution in Reykjavik, and their small particle size, at least from this source area, might be a serious health concern.

  14. Source term identification of environmental radioactive Pu/U particles by their characterization with non-destructive spectrochemical analytical techniques

    NASA Astrophysics Data System (ADS)

    Eriksson, M.; Osán, J.; Jernström, J.; Wegrzynek, D.; Simon, R.; Chinea-Cano, E.; Markowicz, A.; Bamford, S.; Tamborini, G.; Török, S.; Falkenberg, G.; Alsecz, A.; Dahlgaard, H.; Wobrauschek, P.; Streli, C.; Zoeger, N.; Betti, M.

    2005-04-01

    Six radioactive particles stemming from the Thule area (NW Greenland) were investigated by gamma-ray and L X-ray spectrometry based on radioactive disintegration, scanning electron microscopy coupled with energy-dispersive and wavelength-dispersive X-ray spectrometers, and synchrotron-radiation-based techniques such as microscopic X-ray fluorescence, microscopic X-ray absorption near-edge structure (μ-XANES) and combined X-ray absorption and fluorescence microtomography. Additionally, one particle from Mururoa atoll was examined by microtomography. From the results obtained, it was found that U and Pu were mixed in the particles. The U/Pu intensity ratios in the Thule particles varied between 0.05 and 0.36. The results from the microtomography showed that the U/Pu ratio was not homogeneously distributed. The 241Am/238+239+240Pu activity ratios varied between 0.13 and 0.17, indicating that the particles originate from different source terms. The oxidation states of U and Pu as determined by μ-XANES showed that U(IV) is the preponderant species; for Pu, two types of particles could be distinguished. One set had about 90% Pu(IV), while in the other the ratio Pu(IV)/Pu(VI) was about one third.

  15. A Source-Term Based Boundary Layer Bleed/Effusion Model for Passive Shock Control

    NASA Technical Reports Server (NTRS)

    Baurle, Robert A.; Norris, Andrew T.

    2011-01-01

    A modeling framework for boundary layer effusion has been developed based on the use of source (or sink) terms instead of the usual practice of specifying bleed directly as a boundary condition. This framework allows the surface boundary condition (i.e. isothermal wall, adiabatic wall, slip wall, etc.) to remain unaltered in the presence of bleed. This approach also lends itself to easily permit the addition of empirical models for second order effects that are not easily accounted for by simply defining effective transpiration values. Two effusion models formulated for supersonic flows have been implemented into this framework; the Doerffer/Bohning law and the Slater formulation. These models were applied to unit problems that contain key aspects of the flow physics applicable to bleed systems designed for hypersonic air-breathing propulsion systems. The ability of each model to predict bulk bleed properties was assessed, as well as the response of the boundary layer as it passes through and downstream of a porous bleed system. The model assessment was performed with and without the presence of shock waves. Three-dimensional CFD simulations that included the geometric details of the porous plate bleed systems were also carried out to supplement the experimental data, and provide additional insights into the bleed flow physics. Overall, both bleed formulations fared well for the tests performed in this study. However, the sample of test problems considered in this effort was not large enough to permit a comprehensive validation of the models.

  16. High order finite difference methods with subcell resolution for advection equations with stiff source terms

    SciTech Connect

    Wang, Wei; Shu, Chi-Wang; Yee, H.C.; Sjögreen, Björn

    2012-01-01

    A new high order finite-difference method utilizing the idea of Harten's ENO subcell resolution method is proposed for chemically reactive flows and combustion. In reaction problems, when the reaction time scale is very small, e.g., orders of magnitude smaller than the fluid dynamics time scales, the governing equations become very stiff. A wrong propagation speed of discontinuities may occur due to the under-resolved numerical solution in both space and time. The proposed method is a modified fractional step method which solves the convection step and the reaction step separately. In the convection step, any high order shock-capturing method can be used. In the reaction step, an ODE solver is applied, but with the computed flow variables in the shock region modified by the Harten subcell resolution idea. For numerical experiments, a fifth-order finite-difference WENO scheme and its anti-diffusion WENO variant are considered. A wide range of 1D and 2D scalar and Euler system test cases are investigated. Studies indicate that for the considered test cases, the new method maintains high order accuracy in space for smooth flows, and for stiff source terms with discontinuities, it can capture the correct propagation speed of discontinuities on very coarse meshes with reasonable CFL numbers.
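
    The splitting framework the method builds on can be illustrated with a toy scalar problem. The sketch below shows only the fractional-step structure (a convection sweep followed by a separate stiff-reaction step), with a first-order upwind sweep standing in for the WENO scheme and the LeVeque-Yee model source term as the reaction; it omits the subcell-resolution correction itself, and every numerical choice is illustrative.

```python
import numpy as np

def advect_upwind(u, a, dx, dt):
    """First-order upwind convection sweep (stand-in for a high-order WENO sweep)."""
    return u - a * dt / dx * (u - np.roll(u, 1))

def react(u, dt, eps=1e-3):
    """Stiff reaction step, s(u) = -u(u-1)(u-1/2)/eps, advanced with small explicit substeps."""
    for _ in range(20):
        u = u - (dt / 20.0) * u * (u - 1.0) * (u - 0.5) / eps
    return u

nx, a = 200, 1.0
dx = 1.0 / nx
dt = 0.5 * dx / a
u = np.where(np.linspace(0.0, 1.0, nx) < 0.3, 1.0, 0.0)  # step initial data

# Fractional-step update: convection sweep, then reaction step, each time step
for _ in range(100):
    u = react(advect_upwind(u, a, dx, dt), dt)
```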

  17. Implementation of a source term control program in a mature boiling water reactor.

    PubMed

    Vargo, G J; Jarvis, A J; Remark, J F

    1991-06-01

    The implementation and results of a source term control program implemented at the James A. FitzPatrick Nuclear Power Plant (JAF), a mature boiling water reactor (BWR) facility that has been in commercial operation since 1975, are discussed. Following a chemical decontamination of the reactor water recirculation piping in the Reload 8/Cycle 9 refueling outage in 1988, hydrogen water chemistry (HWC) and feedwater Zn addition were implemented. This is the first application of both HWC and feedwater Zn addition in a BWR facility. The radiological benefits and impacts of combined operation of HWC and feedwater Zn addition at JAF during Cycle 9 are detailed and summarized. The implementation of hydrogen water chemistry resulted in a significant transport of corrosion products within the reactor coolant system that was greater than anticipated. Feedwater Zn addition appears to be effective in controlling buildup of other activated corrosion products such as 60Co on reactor water recirculation piping; however, adverse impacts were encountered. The major adverse impact of feedwater Zn addition is the production of 65Zn that is released during plant outages and operational transients. PMID:2032839

  18. On the application of ENO scheme with subcell resolution to conservation laws with stiff source terms

    NASA Technical Reports Server (NTRS)

    Chang, Shih-Hung

    1991-01-01

    Two approaches are used to extend the essentially non-oscillatory (ENO) schemes to treat conservation laws with stiff source terms. One approach is the application of the Strang time-splitting method. Here the basic ENO scheme and the Harten modification using subcell resolution (SR), ENO/SR scheme, are extended this way. The other approach is a direct method and a modification of the ENO/SR. Here the technique of ENO reconstruction with subcell resolution is used to locate the discontinuity within a cell and the time evolution is then accomplished by solving the differential equation along characteristics locally and advancing in the characteristic direction. This scheme is denoted ENO/SRCD (subcell resolution - characteristic direction). All the schemes are tested on the equation of LeVeque and Yee (NASA-TM-100075, 1988) modeling reacting flow problems. Numerical results show that these schemes handle this intriguing model problem very well, especially with ENO/SRCD which produces perfect resolution at the discontinuity.

  19. Distribution of Practice and Metacognition in Learning and Long-Term Retention of a Discrete Motor Task

    ERIC Educational Resources Information Center

    Dail, Teresa K.; Christina, Robert W.

    2004-01-01

    This study examined judgments of learning and the long-term retention of a discrete motor task (golf putting) as a function of practice distribution. The results indicated that participants in the distributed practice group performed more proficiently than those in the massed practice group during both acquisition and retention phases. No…

  20. 10 CFR 32.74 - Manufacture and distribution of sources or devices containing byproduct material for medical use.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 1 2011-01-01 2011-01-01 false Manufacture and distribution of sources or devices... SPECIFIC DOMESTIC LICENSES TO MANUFACTURE OR TRANSFER CERTAIN ITEMS CONTAINING BYPRODUCT MATERIAL Generally Licensed Items § 32.74 Manufacture and distribution of sources or devices containing byproduct material...

  1. 10 CFR 32.74 - Manufacture and distribution of sources or devices containing byproduct material for medical use.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 1 2013-01-01 2013-01-01 false Manufacture and distribution of sources or devices... SPECIFIC DOMESTIC LICENSES TO MANUFACTURE OR TRANSFER CERTAIN ITEMS CONTAINING BYPRODUCT MATERIAL Specifically Licensed Items § 32.74 Manufacture and distribution of sources or devices containing...

  2. 10 CFR 32.74 - Manufacture and distribution of sources or devices containing byproduct material for medical use.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 1 2014-01-01 2014-01-01 false Manufacture and distribution of sources or devices... SPECIFIC DOMESTIC LICENSES TO MANUFACTURE OR TRANSFER CERTAIN ITEMS CONTAINING BYPRODUCT MATERIAL Specifically Licensed Items § 32.74 Manufacture and distribution of sources or devices containing...

  3. Semi-implicit and fully implicit shock-capturing methods for hyperbolic conservation laws with stiff source terms

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Shinn, Judy L.

    1987-01-01

    Some numerical aspects of finite-difference algorithms for nonlinear multidimensional hyperbolic conservation laws with stiff nonhomogeneous (source) terms are discussed. If the stiffness is entirely dominated by the source term, a semi-implicit shock-capturing method is proposed provided that the Jacobian of the source terms possesses certain properties. The proposed semi-implicit method can be viewed as a variant of the Bussing and Murman point-implicit scheme with a more appropriate numerical dissipation for the computation of strong shock waves. However, if the stiffness is not solely dominated by the source terms, a fully implicit method would be a better choice. The situation is more complicated for problems in more than one dimension, and the presence of stiff source terms further complicates the solution procedures for alternating direction implicit (ADI) methods. Several alternatives are discussed. The primary motivation for constructing these schemes was to address flows in thermal and chemical nonequilibrium in the hypersonic regime. Due to the unique structure of the eigenvalues and eigenvectors for fluid flows of this type, the computation can be simplified, thus providing a more efficient solution procedure than one might have anticipated.
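
    The point-implicit idea mentioned above (treating only the stiff source implicitly, via a local linearization, so the source does not limit the explicit time step) can be shown on a scalar toy problem. The relaxation source and all parameter values below are illustrative and are not taken from the paper.

```python
import numpy as np

def point_implicit_step(u, dt, R, s, dsdu):
    """One step of u_t = R(u) + s(u) with the stiff source s linearized about u:
    (1 - dt * ds/du) * du = dt * (R(u) + s(u))."""
    return u + dt * (R(u) + s(u)) / (1.0 - dt * dsdu(u))

eps = 1e-6                          # stiff relaxation time scale
s = lambda u: (1.0 - u) / eps       # relaxation toward the equilibrium u = 1
dsdu = lambda u: -1.0 / eps
R = lambda u: 0.0 * u               # no convective contribution in this toy case

u = 0.0
for _ in range(10):
    u = point_implicit_step(u, dt=1e-3, R=R, s=s, dsdu=dsdu)
print(u)  # approaches 1 even though dt is far larger than eps
```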

  4. Long-term mechanical life testing of polymeric post insulators for distribution and a comparison to porcelain

    SciTech Connect

    Cherney, E.A. )

    1988-07-01

    The paper presents the results and analyses of long-term cantilever strength tests on polymeric line post insulators. The time-to-failure data for static cantilever loads are represented by the Weibull distribution. The life distribution, obtained from the maximum likelihood estimates of the accelerated failure times, fits an exponential model. An extrapolation of the life distribution to normal loads provides an estimate of the strength rating and mechanical equivalence to porcelain line post insulators.
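
    The type of fit described above (maximum-likelihood estimation of a Weibull life distribution from accelerated time-to-failure data) is straightforward to sketch; the failure times below are made up solely for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical times-to-failure (hours) from an accelerated cantilever-load test
ttf = np.array([310.0, 480.0, 560.0, 700.0, 910.0, 1150.0, 1400.0, 1800.0])

# Maximum-likelihood two-parameter Weibull fit (location fixed at zero)
shape, loc, scale = stats.weibull_min.fit(ttf, floc=0)
print(f"Weibull shape (beta) = {shape:.2f}, scale (eta) = {scale:.0f} h")

# Survival probability at a chosen service duration under the fitted model
print("P(survive 2000 h) =", stats.weibull_min.sf(2000.0, shape, loc=0, scale=scale))
```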

  5. Sources/sinks analysis with satellite sensing for exploring global atmospheric CO2 distributions

    NASA Astrophysics Data System (ADS)

    Shim, C.; Nassar, R.; Kim, J.

    2010-12-01

    There is growing interest in CO2 budget analysis since space-borne measurements of the global CO2 distribution have been conducted (e.g., the GOSAT project). Here we simulated the global CO2 distribution to estimate individual source/sink contributions. The chemical transport model GEOS-Chem was used to simulate the global CO2 distribution with updated global sources/sinks at 2° x 2.5° horizontal resolution. In addition, 3-D emissions from aviation and chemical oxidation of CO are implemented. The model-simulated CO2 amounts were compared with the GOSAT column-averaged CO2 (SWIR L2 data) from April 2009 to May 2010. The seasonal cycles of CO2 concentration were compared, and the regional patterns of CO2 distribution are explained by the model with a systematic difference of 1-2% in the CO2 concentration. In other work, the GEOS-Chem CO2 concentrations show reasonable agreement with GLOBALVIEW-CO2. We further estimated the source/sink contributions to the global CO2 budget through 9 tagged CO2 tracers (fossil fuels, ocean exchanges, biomass burning, biofuel burning, balanced biosphere, net terrestrial exchange, ship emissions, aviation emissions, and oxidation of carbon precursors) over the years 2005-2009. The global CO2 concentration shows an increase of 2.1 ppbv/year, for which human fossil fuel and cement emissions are the main driving force (5.0 ppbv/year). Net terrestrial and oceanic exchange of CO2 are the main sinks (-2.1 ppbv/year and -0.7 ppbv/year, respectively). Our model results will help to suggest the level of reduction in global human CO2 emissions that could control the global CO2 trend in the 21st century.

  6. Heavy metals in soils from a typical county in Shanxi Province, China: Levels, sources and spatial distribution.

    PubMed

    Pan, Li-bo; Ma, Jin; Wang, Xian-liang; Hou, Hong

    2016-04-01

    The concentrations of As, Cd, Cr, Cu, Pb, Ni, Zn, and Hg in 128 surface soil samples from Xiangfen County, Shanxi Province, China were measured. The concentrations of these eight heavy metals were lower than the critical values in the national soil quality standard. However, these concentrations were found to be slightly higher than their background values in soils in Shanxi Province, indicating enrichment of these metals in soils in Xiangfen County, especially for Hg and Cd. Principal component analysis coupled with cluster analysis was used to analyze the data and identify possible sources of these heavy metals; the results showed that the eight heavy metals in soils from Xiangfen County came from three different sources. Lead, Cd, Cu and Zn mainly arose from agricultural practices and vehicle emissions. Arsenic and Ni arose mainly from parent materials. Industrial practices were the main sources of Cr and Hg. The spatial distribution of the heavy metals varied greatly, and was closely correlated to local anthropogenic activities. This study will be helpful not only for improving local soil environmental quality but will also provide a basis for effectively targeting policies to protect soils from long-term heavy metal accumulation. PMID:26807946

  7. Heavy metals in soils from a typical county in Shanxi Province, China: Levels, sources and spatial distribution.

    PubMed

    Pan, Li-bo; Ma, Jin; Wang, Xian-liang; Hou, Hong

    2016-04-01

    The concentrations of As, Cd, Cr, Cu, Pb, Ni, Zn, and Hg in 128 surface soil samples from Xiangfen County, Shanxi Province, China were measured. The concentrations of these eight heavy metals were lower than the critical values in the national soil quality standard. However, these concentrations were found to be slightly higher than their background values in soils in Shanxi Province, indicating enrichment of these metals in soils in Xiangfen County, especially for Hg and Cd. Principal component analysis coupled with cluster analysis was used to analyze the data and identify possible sources of these heavy metals; the results showed that the eight heavy metals in soils from Xiangfen County came from three different sources. Lead, Cd, Cu and Zn mainly arose from agricultural practices and vehicle emissions. Arsenic and Ni arose mainly from parent materials. Industrial practices were the main sources of Cr and Hg. The spatial distribution of the heavy metals varied greatly, and was closely correlated to local anthropogenic activities. This study will be helpful not only for improving local soil environmental quality but will also provide a basis for effectively targeting policies to protect soils from long-term heavy metal accumulation.

  8. Passive sources for the Bennett-Brassard 1984 quantum-key-distribution protocol with practical signals

    SciTech Connect

    Curty, Marcos; Ma Xiongfeng; Luetkenhaus, Norbert; Lo, Hoi-Kwong

    2010-11-15

    Most experimental realizations of quantum key distribution are based on the Bennett-Brassard 1984 (the so-called BB84) protocol. In a typical optical implementation of this scheme, the sender uses an active source to produce the required BB84 signal states. While active state preparation of BB84 signals is a simple and elegant solution in principle, in practice passive state preparation might be desirable in some scenarios, for instance, in those experimental setups operating at high transmission rates. Passive schemes might also be more robust against side-channel attacks than active sources. Typical passive devices involve parametric down-conversion. In this paper, we show that both coherent light and practical single-photon sources are also suitable for passive generation of BB84 signal states. Our method does not require any externally driven element, but only linear optical components and photodetectors. In the case of coherent light, the resulting key rate is similar to the one delivered by an active source. When the sender uses practical single-photon sources, however, the distance covered by a passive transmitter might be longer than that of an active configuration.

  9. Acoustic Emission Source Location Using a Distributed Feedback Fiber Laser Rosette

    PubMed Central

    Huang, Wenzhu; Zhang, Wentao; Li, Fang

    2013-01-01

    This paper proposes an approach for acoustic emission (AE) source localization in a large marble stone using distributed feedback (DFB) fiber lasers. The aim of this study is to detect damage in structures such as those found in civil applications. The directional sensitivity of the DFB fiber laser is investigated by calculating a location coefficient using digital signal analysis. Autocorrelation is used to extract the location coefficient from a periodic AE signal, and wavelet packet energy is calculated to obtain the location coefficient of a burst AE source. Normalization is applied to eliminate the influence of the distance and intensity of the AE source. A new location algorithm based on the location coefficient is then presented and tested to determine the location of an AE source using a Delta (Δ) DFB fiber laser rosette configuration. The advantages of the proposed algorithm over traditional methods based on fiber Bragg gratings (FBGs) include a higher strain resolution for AE detection and the ability to take two different types of AE source into account for location. PMID:24141266

  10. THE ENVIRONMENT AND DISTRIBUTION OF EMITTING ELECTRONS AS A FUNCTION OF SOURCE ACTIVITY IN MARKARIAN 421

    SciTech Connect

    Mankuzhiyil, Nijil; Ansoldi, Stefano; Tavecchio, Fabrizio

    2011-05-20

    For the high-frequency-peaked BL Lac object Mrk 421, we study the variation of the spectral energy distribution (SED) as a function of source activity, from quiescent to active. We use a fully automatized χ²-minimization procedure, instead of the 'eyeball' procedure more commonly used in the literature, to model nine SED data sets with a one-zone synchrotron self-Compton (SSC) model and examine how the model parameters vary with source activity. The latter issue can finally be addressed now, because simultaneous broadband SEDs (spanning from optical to very high energy photon) have finally become available. Our results suggest that in Mrk 421 the magnetic field (B) decreases with source activity, whereas the electron spectrum's break energy (γ_br) and the Doppler factor (δ) increase; the other SSC parameters turn out to be uncorrelated with source activity. In the SSC framework, these results are interpreted in a picture where the synchrotron power and peak frequency remain constant with varying source activity, through a combination of decreasing magnetic field and increasing number density of γ ≤ γ_br electrons: since this leads to an increased electron-photon scattering efficiency, the resulting Compton power increases, and so does the total (= synchrotron plus Compton) emission.

  11. Acoustic emission source location using a distributed feedback fiber laser rosette.

    PubMed

    Huang, Wenzhu; Zhang, Wentao; Li, Fang

    2013-01-01

    This paper proposes an approach for acoustic emission (AE) source localization in a large marble stone using distributed feedback (DFB) fiber lasers. The aim of this study is to detect damage in structures such as those found in civil applications. The directional sensitivity of the DFB fiber laser is investigated by calculating a location coefficient using digital signal analysis. Autocorrelation is used to extract the location coefficient from a periodic AE signal, and wavelet packet energy is calculated to obtain the location coefficient of a burst AE source. Normalization is applied to eliminate the influence of the distance and intensity of the AE source. A new location algorithm based on the location coefficient is then presented and tested to determine the location of an AE source using a Delta (Δ) DFB fiber laser rosette configuration. The advantages of the proposed algorithm over traditional methods based on fiber Bragg gratings (FBGs) include a higher strain resolution for AE detection and the ability to take two different types of AE source into account for location. PMID:24141266

  12. Spatial distribution of the plasma parameters in the RF negative ion source prototype for fusion

    SciTech Connect

    Lishev, S.; Schiesko, L.; Wünderlich, D.; Fantz, U.

    2015-04-08

    A numerical model, based on the fluid plasma theory, has been used for description of the spatial distribution of the plasma parameters (electron density and temperature, plasma potential as well as densities of the three types of positive hydrogen ions) in the IPP prototype RF negative hydrogen ion source. The model covers the driver and the expansion plasma region of the source with their actual size and accounts for the presence of the magnetic filter field with its actual value and location as well as for the bias potential applied to the plasma grid. The obtained results show that without a magnetic filter the two 2D geometries considered, respectively, with an axial symmetry and a planar one, represent accurately the complex 3D structure of the source. The 2D model with a planar symmetry (where the E×B and diamagnetic drifts could be involved in the description) has been used for analysis of the influence, via the charged-particle and electron-energy fluxes, of the magnetic filter and of the bias potential on the spatial structure of the plasma parameters in the source. Benchmarking of results from the code to experimental data shows that the model reproduces the general trend in the axial behavior of the plasma parameters in the source.

  13. Spatial distribution of the plasma parameters in the RF negative ion source prototype for fusion

    NASA Astrophysics Data System (ADS)

    Lishev, S.; Schiesko, L.; Wünderlich, D.; Fantz, U.

    2015-04-01

    A numerical model, based on the fluid plasma theory, has been used for description of the spatial distribution of the plasma parameters (electron density and temperature, plasma potential as well as densities of the three types of positive hydrogen ions) in the IPP prototype RF negative hydrogen ion source. The model covers the driver and the expansion plasma region of the source with their actual size and accounts for the presence of the magnetic filter field with its actual value and location as well as for the bias potential applied to the plasma grid. The obtained results show that without a magnetic filter the two 2D geometries considered, respectively, with an axial symmetry and a planar one, represent accurately the complex 3D structure of the source. The 2D model with a planar symmetry (where the E×B and diamagnetic drifts could be involved in the description) has been used for analysis of the influence, via the charged-particle and electron-energy fluxes, of the magnetic filter and of the bias potential on the spatial structure of the plasma parameters in the source. Benchmarking of results from the code to experimental data shows that the model reproduces the general trend in the axial behavior of the plasma parameters in the source.

  14. Management of Ultimate Risk of Nuclear Power Plants by Source Terms - Lessons Learned from the Chernobyl Accident

    SciTech Connect

    Genn Saji

    2006-07-01

    The term 'ultimate risk' is used here to describe the probabilities and radiological consequences that should be incorporated in siting, containment design and accident management of nuclear power plants for hypothetical accidents. It is closely related to the source terms specified in siting criteria, which assure an adequate separation of the radioactive inventories of the plants from the public in the event of a hypothetical and severe accident. The author would like to point out that current source terms, which are based on information from the Windscale accident (1957) through TID-14844, are very outdated and do not incorporate lessons learned from either the Three Mile Island (TMI, 1979) or the Chernobyl (1986) accident, two of the most severe accidents ever experienced. As a result of the observations of benign radionuclides released at TMI, the technical community in the US felt that a more realistic evaluation of severe reactor accident source terms was necessary. Against this background, the 'source term research project' was organized in 1984 to respond to these challenges. Unfortunately, soon after the final report from this project was released, the Chernobyl accident occurred. Due to the enormous consequences of that accident, the once optimistic prospects of establishing a more realistic source term were completely shattered. The Chernobyl accident, with its human death toll and the dispersion of a large part of the fission fragment inventories into the environment, significantly degraded the public's acceptance of nuclear energy throughout the world. In spite of this, nuclear communities have been prudent in responding to the public's anxiety about the ultimate safety of nuclear plants, since many unknown points still remained concerning the mechanism of the Chernobyl accident. In order to resolve some of these mysteries, the author has performed a scoping study of the dispersion and deposition

  15. Size distributions and source function of sea spray aerosol over the South China Sea

    NASA Astrophysics Data System (ADS)

    Chu, Yingjia; Sheng, Lifang; Liu, Qian; Zhao, Dongliang; Jia, Nan; Kong, Yawen

    2016-08-01

    The number concentrations of aerosol particles in the radius range of 0.06-5 μm and meteorological parameters were measured on board during a cruise in the South China Sea from August 25 to October 12, 2012. Effective fluxes at a reference height of 10 m were estimated by the steady-state dry deposition method based on the observed data, and the influences of different air masses on the flux are discussed in this paper. The number size distribution was bimodal, with an average total number concentration of (1.50 ± 0.76)×103 cm-3. The two mode radii were 0.099 µm and 0.886 µm, both within the accumulation mode. A typical daily average size distribution was compared with that measured in the Bay of Bengal. Over the whole radius range, the number concentrations were in agreement with each other; the modes were more distinct in this study than those obtained in the Bay of Bengal. The size distribution of the fluxes was fitted with the sum of a log-normal and a power-law distribution. The impact of different air masses was mainly on the flux magnitude rather than on the shape of the spectral distribution. A semiempirical source function applicable in the radius range of 0.06 µm < r80 < 0.3 µm, with the wind speed varying from 1.00 m s-1 to 10.00 m s-1, was derived.
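
    The 'sum of log-normal and power-law' fit of the flux spectrum can be made concrete with a least-squares sketch. The functional form follows the description above, but the synthetic data, parameter names and starting values are purely illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def flux_model(r, A, r0, sigma, B, k):
    """Sum of a log-normal mode and a power law (parameter names are illustrative)."""
    lognormal = A * np.exp(-0.5 * (np.log(r / r0) / np.log(sigma)) ** 2)
    powerlaw = B * r ** (-k)
    return lognormal + powerlaw

# r: particle radius bins (um); F: synthetic effective flux per bin with 5% noise
r = np.logspace(np.log10(0.06), np.log10(0.3), 15)
F = flux_model(r, 1e4, 0.1, 1.8, 50.0, 2.0) * (1 + 0.05 * np.random.randn(r.size))

popt, _ = curve_fit(flux_model, r, F, p0=[1e4, 0.1, 1.8, 50.0, 2.0])
print(dict(zip(["A", "r0", "sigma", "B", "k"], np.round(popt, 3))))
```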

  16. Assessment and application of clustering techniques to atmospheric particle number size distribution for the purpose of source apportionment

    NASA Astrophysics Data System (ADS)

    Salimi, F.; Ristovski, Z.; Mazaheri, M.; Laiman, R.; Crilley, L. R.; He, C.; Clifford, S.; Morawska, L.

    2014-11-01

    Long-term measurements of particle number size distribution (PNSD) produce a very large number of observations and their analysis requires an efficient approach in order to produce results in the least possible time and with maximum accuracy. Clustering techniques are a family of sophisticated methods that have been recently employed to analyse PNSD data; however, very little information is available comparing the performance of different clustering techniques on PNSD data. This study aims to apply several clustering techniques (i.e. K means, PAM, CLARA and SOM) to PNSD data, in order to identify and apply the optimum technique to PNSD data measured at 25 sites across Brisbane, Australia. A new method, based on the Generalised Additive Model (GAM) with a basis of penalised B-splines, was proposed to parameterise the PNSD data and the temporal weight of each cluster was also estimated using the GAM. In addition, each cluster was associated with its possible source based on the results of this parameterisation, together with the characteristics of each cluster. The performances of four clustering techniques were compared using the Dunn index and Silhouette width validation values and the K means technique was found to have the highest performance, with five clusters being the optimum. Therefore, five clusters were found within the data using the K means technique. The diurnal occurrence of each cluster was used together with other air quality parameters, temporal trends and the physical properties of each cluster, in order to attribute each cluster to its source and origin. The five clusters were attributed to three major sources and origins, including regional background particles, photochemically induced nucleated particles and vehicle generated particles. Overall, clustering was found to be an effective technique for attributing each particle size spectrum to its source and the GAM was suitable to parameterise the PNSD data. These two techniques can help

  17. Assessment and application of clustering techniques to atmospheric particle number size distribution for the purpose of source apportionment

    NASA Astrophysics Data System (ADS)

    Salimi, F.; Ristovski, Z.; Mazaheri, M.; Laiman, R.; Crilley, L. R.; He, C.; Clifford, S.; Morawska, L.

    2014-06-01

    Long-term measurements of particle number size distribution (PNSD) produce a very large number of observations and their analysis requires an efficient approach in order to produce results in the least possible time and with maximum accuracy. Clustering techniques are a family of sophisticated methods which have been recently employed to analyse PNSD data; however, very little information is available comparing the performance of different clustering techniques on PNSD data. This study aims to apply several clustering techniques (i.e. K-means, PAM, CLARA and SOM) to PNSD data, in order to identify and apply the optimum technique to PNSD data measured at 25 sites across Brisbane, Australia. A new method, based on the Generalised Additive Model (GAM) with a basis of penalised B-splines, was proposed to parameterise the PNSD data and the temporal weight of each cluster was also estimated using the GAM. In addition, each cluster was associated with its possible source based on the results of this parameterisation, together with the characteristics of each cluster. The performances of four clustering techniques were compared using the Dunn index and silhouette width validation values and the K-means technique was found to have the highest performance, with five clusters being the optimum. Therefore, five clusters were found within the data using the K-means technique. The diurnal occurrence of each cluster was used together with other air quality parameters, temporal trends and the physical properties of each cluster, in order to attribute each cluster to its source and origin. The five clusters were attributed to three major sources and origins, including regional background particles, photochemically induced nucleated particles and vehicle generated particles. Overall, clustering was found to be an effective technique for attributing each particle size spectrum to its source and the GAM was suitable to parameterise the PNSD data. These two techniques can help
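
    The clustering-with-validation step described above can be sketched as follows: K-means is run over normalized size spectra for a range of cluster numbers and the silhouette width is used to pick the optimum. The synthetic spectra are placeholders, and the Dunn index and the GAM parameterisation used in the paper are omitted.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Rows stand in for normalized particle number size distributions (hypothetical data)
rng = np.random.default_rng(1)
X = rng.lognormal(mean=0.0, sigma=0.5, size=(1000, 40))
X = X / X.sum(axis=1, keepdims=True)

scores = {}
for k in range(2, 9):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)  # higher = better-separated clusters

best_k = max(scores, key=scores.get)
print("optimum number of clusters:", best_k)
```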

  18. Temperature distribution in tissues from a regular array of hot source implants: an analytical approximation.

    PubMed

    Haider, S A; Cetas, T C; Roemer, R B

    1993-05-01

    An approximate analytical model based upon the bioheat transfer equation is derived and used to calculate temperatures within a perfused region implanted regularly with dielectrically coated hot source implants; for example, hot water tubes, electrically heated rods, or inductively heated ferromagnetic implants. The effect of a regular array of mutually parallel heat sources of cylindrical shape is approximated by idealizing one of the boundary conditions. The solution, as could be expected, is in terms of modified Bessel functions. In calculating the temperature of each thermoregulating source in the array, the steady state power balance is enforced. An important feature of the model is that the finite implant diameter and its dielectric coating can be incorporated. The effect of the thickness and thermal conductivity of the coating on the source and tissue temperatures, along with various other interesting features, is deduced from this model. The analytically calculated implant and tissue temperatures are compared with those of a numerical 3-D finite difference model. The analytical model is also used to define a range of parameters such that minimal therapeutic temperatures will be achieved in the implanted volume without exceeding prescribed maximum temperatures. This approach leads to a simple means of selecting implant spacing and regulation temperatures of hot source methods prospectively.
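
    For a single infinite cylindrical hot source in uniformly perfused tissue, the steady-state bioheat equation gives a temperature rise that decays with the modified Bessel function K0(r/λ), where λ = sqrt(k / (w_b c_b)). The sketch below evaluates that single-implant case only; the idealized geometry, the uncoated implant and all parameter values are assumptions for illustration and do not reproduce the paper's array model.

```python
import numpy as np
from scipy.special import k0

k_t = 0.5      # tissue thermal conductivity, W/(m K)   (illustrative)
w_b = 0.5      # blood perfusion, kg/(m^3 s)            (illustrative)
c_b = 3800.0   # blood specific heat, J/(kg K)
lam = np.sqrt(k_t / (w_b * c_b))     # perfusion length scale, m

r0 = 0.0005    # implant radius, m
dT_s = 8.0     # implant surface temperature rise above arterial blood, K

r = np.linspace(r0, 0.02, 200)
dT = dT_s * k0(r / lam) / k0(r0 / lam)   # K0 radial decay of the temperature rise
print(f"temperature rise at 5 mm: {np.interp(0.005, r, dT):.2f} K")
```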

  19. Long-term monitoring of molecular markers can distinguish different seasonal patterns of fecal indicating bacteria sources.

    PubMed

    Riedel, Timothy E; Thulsiraj, Vanessa; Zimmer-Faust, Amity G; Dagit, Rosi; Krug, Jenna; Hanley, Kaitlyn T; Adamek, Krista; Ebentier, Darcy L; Torres, Robert; Cobian, Uriel; Peterson, Sophie; Jay, Jennifer A

    2015-03-15

    Elevated levels of fecal indicator bacteria (FIB) have been observed at Topanga Beach, CA, USA. To identify the FIB sources, a microbial source tracking study using a dog-, a gull- and two human-associated molecular markers was conducted at 10 sites over 21 months. Historical data suggest that episodic discharge from the lagoon at the mouth of Topanga Creek is the main source of bacteria to the beach. A decline in creek FIB/markers downstream from upper watershed development and a sharp increase in FIB/markers at the lagoon sites suggest sources are local to the lagoon. At the lagoon and beach, human markers are detected sporadically, dog marker peaks in abundance mid-winter, and gull marker is chronically elevated. Varied seasonal patterns of FIB and source markers were identified showing the importance of applying a suite of markers over long-term spatial and temporal sampling to identify a complex combination of sources of contamination.

  20. Efficient construction of high-resolution TVD conservative schemes for equations with source terms: application to shallow water flows

    NASA Astrophysics Data System (ADS)

    Burguete, J.; García-Navarro, P.

    2001-09-01

    High-resolution total variation diminishing (TVD) schemes are widely used for the numerical approximation of hyperbolic conservation laws. Their extension to equations with source terms involving spatial derivatives is not obvious. In this work, efficient ways of constructing conservative schemes from the conservative, non-conservative or characteristic form of the equations are described in detail. An upwind, as opposed to a pointwise, treatment of the source terms is adopted here, and a new technique is proposed in which source terms are included in the flux limiter functions to get a complete second-order compact scheme. A new correction to fix the entropy problem is also presented and a robust treatment of the boundary conditions according to the discretization used is stated.
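
    The pointwise-versus-upwind distinction for source terms can be illustrated on the scalar model problem u_t + a u_x = s(x). With a first-order upwind flux, evaluating the source at cell centres leaves a residual at the exact steady state, whereas a compatible interface ("upwinded") evaluation preserves the steady state discretely. This is only a toy analogue of the second-order TVD shallow-water schemes discussed above; all choices below are illustrative.

```python
import numpy as np

a, nx = 1.0, 100
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
dt = 0.8 * dx / a

s = np.cos(2.0 * np.pi * x)                      # source term s(x)
S = np.sin(2.0 * np.pi * x) / (2.0 * np.pi)      # primitive of s
u_exact = S / a                                  # steady state of a * u_x = s(x)

def step(u, source):
    """One first-order upwind step (a > 0) with a given source discretization."""
    un = u.copy()
    un[1:] = u[1:] - a * dt / dx * (u[1:] - u[:-1]) + dt * source[1:]
    un[0] = u_exact[0]                           # inflow pinned to the steady state
    return un

s_point = s                                      # pointwise: source at cell centres
s_upwind = np.r_[s[0], (S[1:] - S[:-1]) / dx]    # interface ("upwinded") evaluation

u1, u2 = u_exact.copy(), u_exact.copy()
for _ in range(2000):
    u1, u2 = step(u1, s_point), step(u2, s_upwind)

print("pointwise drift from steady state:", np.abs(u1 - u_exact).max())
print("upwinded  drift from steady state:", np.abs(u2 - u_exact).max())
```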

  1. The distribution of polarized radio sources >15 μJy IN GOODS-N

    SciTech Connect

    Rudnick, L.; Owen, F. N.

    2014-04-10

    We present deep Very Large Array observations of the polarization of radio sources in the GOODS-N field at 1.4 GHz at resolutions of 1.6'' and 10''. At 1.6'', we find that the peak flux cumulative number count distribution is N(>p) ∼ 45 (p/30 μJy)^(-0.6) per square degree above a detection threshold of 14.5 μJy. This represents a break from the steeper slopes at higher flux densities, resulting in fewer sources predicted for future surveys with the Square Kilometer Array and its precursors. It provides a significant challenge for using background rotation measures (RMs) to study clusters of galaxies or individual galaxies. Most of the polarized sources are well above our detection limit, and they are also radio galaxies that are well-resolved even at 10'', with redshifts from ∼0.2-1.9. We determined a total polarized flux for each source by integrating the 10'' polarized intensity maps, as will be done by upcoming surveys such as POSSUM. These total polarized fluxes are a factor of two higher, on average, than the peak polarized flux at 1.6''; this would increase the number counts by ∼50% at a fixed flux level. The detected sources have RMs with a characteristic rms scatter of ∼11 rad m^-2 around the local Galactic value, after eliminating likely outliers. The median fractional polarization from all total intensity sources does not continue the trend of increasing at lower flux densities, as seen for stronger sources. The changes in the polarization characteristics seen at these low fluxes likely represent the increasing dominance of star-forming galaxies.
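
    The cumulative count law quoted above can be applied directly to estimate the expected number of polarized sources in a survey; the survey area and detection threshold below are arbitrary examples.

```python
def n_per_sq_deg(p_microjy):
    """Cumulative polarized source counts, N(>p) ~ 45 * (p / 30 uJy)**(-0.6) per deg^2."""
    return 45.0 * (p_microjy / 30.0) ** (-0.6)

area_deg2 = 10.0   # hypothetical survey area
threshold = 50.0   # detection threshold in uJy
print(area_deg2 * n_per_sq_deg(threshold), "polarized sources expected")
```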

  2. A census of molecular hydrogen outflows and their sources along the Orion A molecular ridge. Characteristics and overall distribution

    NASA Astrophysics Data System (ADS)

    Davis, C. J.; Froebrich, D.; Stanke, T.; Megeath, S. T.; Kumar, M. S. N.; Adamson, A.; Eislöffel, J.; Gredel, R.; Khanzadyan, T.; Lucas, P.; Smith, M. D.; Varricatt, W. P.

    2009-03-01

    Aims: A census of molecular hydrogen flows across the entire Orion A giant molecular cloud is sought. With this paper we aim to associate each flow with its progenitor and associated molecular core, so that the characteristics of the outflows and outflow sources can be established. Methods: We present wide-field near-infrared images of Orion A, obtained with the Wide Field Camera, WFCAM, on the United Kingdom Infrared Telescope. Broad-band K and narrow-band H2 1-0S(1) images of a contiguous ~8 square degree region are compared to mid-IR photometry from the Spitzer Space Telescope and (sub)millimetre dust-continuum maps obtained with the MAMBO and SCUBA bolometer arrays. Using previously-published H2 images, we also measured proper motions for H2 features in 33 outflows, and use these data to help associate flows with existing sources and/or dust cores. Results: Together these data give a detailed picture of dynamical star formation across this extensive region. We increase the number of known H2 outflows to 116. A total of 111 H2 flows were observed with Spitzer; outflow sources are identified for 72 of them (12 more H2 flows have tentative progenitors). The MAMBO 1200 μm maps cover 97 H2 flows; 57 of them (59%) are associated with Spitzer sources and either dust cores or extended 1200 μm emission. The H2 jets are widely distributed and randomly orientated. The jets do not appear to be orthogonal to large-scale filaments or even to the small-scale cores associated with the outflow sources (at least when traced with the 11´´ resolution of the 1200 μm MAMBO observations). Moreover, H2 jet lengths (L) and opening angles (θ) are not obviously correlated with indicators of outflow source age - source spectral index, α (measured from mid-IR photometry), or (sub)millimetre core flux. It seems clear that excitation requirements limit the usefulness of H2 as a tracer of L and θ (though jet position angles are well defined). Conclusions: We demonstrate that H2 jet

  3. Models for Deploying Open Source and Commercial Software to Support Earth Science Data Processing and Distribution

    NASA Astrophysics Data System (ADS)

    Yetman, G.; Downs, R. R.

    2011-12-01

    Software deployment is needed to process and distribute scientific data throughout the data lifecycle. Developing software in-house can take software development teams away from other software development projects and can require efforts to maintain the software over time. Adopting and reusing software and system modules that have been previously developed by others can reduce in-house software development and maintenance costs and can contribute to the quality of the system being developed. A variety of models are available for reusing and deploying software and systems that have been developed by others. These deployment models include open source software, vendor-supported open source software, commercial software, and combinations of these approaches. Deployment in Earth science data processing and distribution has demonstrated the advantages and drawbacks of each model. Deploying open source software offers advantages for developing and maintaining scientific data processing systems and applications. By joining an open source community that is developing a particular system module or application, a scientific data processing team can contribute to aspects of the software development without having to commit to developing the software alone. Communities of interested developers can share the work while focusing on activities that utilize in-house expertise and addresses internal requirements. Maintenance is also shared by members of the community. Deploying vendor-supported open source software offers similar advantages to open source software. However, by procuring the services of a vendor, the in-house team can rely on the vendor to provide, install, and maintain the software over time. Vendor-supported open source software may be ideal for teams that recognize the value of an open source software component or application and would like to contribute to the effort, but do not have the time or expertise to contribute extensively. Vendor-supported software may

  4. North Slope, Alaska: Source rock distribution, richness, thermal maturity, and petroleum charge

    USGS Publications Warehouse

    Peters, K.E.; Magoon, L.B.; Bird, K.J.; Valin, Z.C.; Keller, M.A.

    2006-01-01

    Four key marine petroleum source rock units were identified, characterized, and mapped in the subsurface to better understand the origin and distribution of petroleum on the North Slope of Alaska. These marine source rocks, from oldest to youngest, include four intervals: (1) Middle-Upper Triassic Shublik Formation, (2) basal condensed section in the Jurassic-Lower Cretaceous Kingak Shale, (3) Cretaceous pebble shale unit, and (4) Cretaceous Hue Shale. Well logs for more than 60 wells and total organic carbon (TOC) and Rock-Eval pyrolysis analyses for 1183 samples in 125 well penetrations of the source rocks were used to map the present-day thickness of each source rock and the quantity (TOC), quality (hydrogen index), and thermal maturity (Tmax) of the organic matter. Based on assumptions related to carbon mass balance and regional distributions of TOC, the present-day source rock quantity and quality maps were used to determine the extent of fractional conversion of the kerogen to petroleum and to map the original TOC (TOCo) and the original hydrogen index (HIo) prior to thermal maturation. The quantity and quality of oil-prone organic matter in Shublik Formation source rock generally exceeded that of the other units prior to thermal maturation (commonly TOCo > 4 wt.% and HIo > 600 mg hydrocarbon/g TOC), although all are likely sources for at least some petroleum on the North Slope. We used Rock-Eval and hydrous pyrolysis methods to calculate expulsion factors and petroleum charge for each of the four source rocks in the study area. Without attempting to identify the correct methods, we conclude that calculations based on Rock-Eval pyrolysis overestimate expulsion factors and petroleum charge because low pressure and rapid removal of thermally cracked products by the carrier gas retards cross-linking and pyrobitumen formation that is otherwise favored by natural burial maturation. Expulsion factors and petroleum charge based on hydrous pyrolysis may also be high

  5. Effect of seasonal and long-term changes in stress on sources of water to wells

    USGS Publications Warehouse

    Reilly, Thomas E.; Pollock, David W.

    1995-01-01

    The source of water to wells is ultimately the location where the water flowing to a well enters the boundary surface of the ground-water system. In ground-water systems that receive most of their water from areal recharge, the location of the water entering the system is at the water table. The area contributing recharge to a discharging well is the surface area that defines the location of the water entering the groundwater system. Water entering the system at the water table flows to the well and is eventually discharged from the well. Many State agencies are currently (1994) developing wellhead-protection programs. The thrust of some of these programs is to protect water supplies by determining the areas contributing recharge to water-supply wells and by specifying regulations to minimize the opportunity for contamination of the recharge water by activities at the land surface. In the analyses of ground-water flow systems, steady-state average conditions are frequently used to simplify the problem and make a solution tractable. Recharge is usually cyclic in nature, however, having seasonal cycles and longer term climatic cycles. A hypothetical system is quantitatively analyzed to show that, in many cases, these cyclic changes in the recharge rates apparently do not significantly affect the location and size of the areas contributing recharge to wells. The ratio of the mean travel time to the length of the cyclic stress period appears to indicate whether the transient effects of the cyclic stress must be explicitly represented in the analysis of contributing areas to wells. For the cases examined, if the ratio of the mean travel time to the period of the cyclic stress was much greater than one, then the transient area contributing recharge to wells was similar to the area calculated using an average steady-state condition. Noncyclic long-term transient changes in water use, however, and cyclic stresses on systems with ratios less than 1 can and do affect the

  6. Long term structural health monitoring by distributed fiber-optic sensing

    NASA Astrophysics Data System (ADS)

    Persichetti, G.; Minardo, A.; Testa, G.; Bernini, R.

    2012-04-01

    Structural health monitoring (SHM) systems make it possible to detect unusual structural behaviors that indicate a malfunction in the structure, that is, an unhealthy structural condition. Depending on the complexity level of the SHM system, it can even perform the diagnosis and the prognosis steps, supplying the required information to carry out the most suitable action. While standard SHM systems are based on the use of point sensors (e.g., strain gauges, crackmeters, tiltmeters, etc.), there is an increasing interest towards the use of distributed optical fiber sensors, in which the whole structure is monitored by use of a single optical fiber. In particular, distributed optical fiber sensors based on stimulated Brillouin scattering (SBS) allow the strain to be detected in a fully distributed manner, with a spatial resolution in the meter or submeter range, and a sensing length that can reach tens of km. These features, which have no performance equivalent among the traditional electronic sensors, are to be considered extremely valuable. When the sensors are appropriately installed on the most significant structural members, this system can lead to the comprehension of the real static behaviour of the structure rather than merely measuring the local strain level on one of its members. In addition, the sensor required by Brillouin technology is an inexpensive, telecom-grade optical fiber that shares most of the typical advantages of other fiber-optic sensors, such as high resistance to moisture and corrosion, immunity to electromagnetic fields and potential for long-term monitoring. In this work, we report the result of a test campaign performed on a concrete bridge. In particular, the tests were performed with a portable prototype based on Brillouin Optical Time-Domain Analysis (BOTDA) [1,2]. This type of analysis makes use of a pulsed laser light and a frequency-shifted continuous-wave (CW) laser light, launched simultaneously at the two opposite ends of an optical fiber
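    A minimal Python sketch of how a Brillouin frequency shift (BFS) profile is typically converted to distributed strain; the strain and temperature coefficients are generic literature values for standard single-mode fiber at 1550 nm, not calibration constants from this test campaign:

      import numpy as np

      C_EPS = 0.05e6   # Hz per microstrain (~500 MHz per 1% strain), assumed typical value
      C_T   = 1.0e6    # Hz per degC, assumed typical value

      def strain_profile(bfs_measured_hz, bfs_reference_hz, delta_T_degC=0.0):
          """Strain (microstrain) from the BFS change, after removing a temperature term."""
          d_nu = np.asarray(bfs_measured_hz) - np.asarray(bfs_reference_hz)
          return (d_nu - C_T * delta_T_degC) / C_EPS

      # Illustrative 4-point profile (Hz): reference (unloaded) vs. loaded structure.
      ref    = np.array([10.850e9, 10.851e9, 10.849e9, 10.850e9])
      loaded = np.array([10.852e9, 10.856e9, 10.859e9, 10.851e9])
      print(strain_profile(loaded, ref))   # microstrain at each sensing point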

  7. Analytical calculation of the skin temperature distribution due to subcutaneous heat production in a spherical heat source.

    PubMed

    Gustafsson, S E; Nilsson, S K; Torell, L M

    1975-03-01

    An analytical solution of the thermal conductivity equation describing the surface temperature distribution over a buried heat source is given in tabular form. The solution is applicable to experimental models for studies of the surface temperature over an implanted artificial heat source. The results can also be used for the analysis of the skin temperature over biological heat sources such as breast tumours.

  8. Scaling Relations Between Mainshock Source Parameters and Aftershock Distributions for Use in Aftershock Forecasting

    NASA Astrophysics Data System (ADS)

    Donovan, J.; Jordan, T. H.

    2010-12-01

    Aftershocks are often used to delineate the mainshock rupture zone retrospectively. In aftershock forecasting, on the other hand, the problem is to use mainshock rupture area to determine the aftershock zone prospectively. The procedures for this type of prediction are not as well developed and have been restricted to simple parameterizations such as the Utsu-Seki (1955) scaling relation between mainshock energy and aftershock area (Ogata and Zhuang, 2006). With a focus on improving current forecasting methods, we investigate the relationship between spatial source parameters that can be rapidly computed (spatial centroid and characteristic dimensions) and corresponding spatial measures of the aftershock distribution. For a set of about 30 large events, we either extracted source parameters from the McGuire et al. (2002) finite moment tensor (FMT) catalog, or computed them from the online SRCMOD database (Mai, 2004). We identified aftershocks with windowing and scale-free methods, and computed both L1 and L2 measures of their distributions. Our comparisons produce scaling relations among the characteristic dimensions that can be used to initiate aftershock forecasts. By using rapidly-determined source parameters, we can decrease the forecasting latency and thus improve the probability gain of the forecasting methods.
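    A minimal Python sketch of the kind of spatial centroid and L1/L2 characteristic dimensions mentioned above, computed here as the mean absolute and root-mean-square epicentral distances on a local flat-Earth projection; these particular definitions and the coordinates are illustrative assumptions, not the authors' exact measures:

      import numpy as np

      def characteristic_dimensions(lats, lons):
          lats, lons = np.asarray(lats), np.asarray(lons)
          lat0, lon0 = lats.mean(), lons.mean()             # spatial centroid (deg)
          km_per_deg = 111.2
          dx = (lons - lon0) * km_per_deg * np.cos(np.radians(lat0))
          dy = (lats - lat0) * km_per_deg
          r = np.hypot(dx, dy)                              # epicentral distances (km)
          L1 = np.mean(np.abs(r))                           # L1 measure
          L2 = np.sqrt(np.mean(r ** 2))                     # L2 measure
          return (lat0, lon0), L1, L2

      # Hypothetical aftershock epicenters.
      centroid, L1, L2 = characteristic_dimensions(
          [35.01, 35.05, 34.98, 35.10], [-118.20, -118.25, -118.15, -118.30])
      print(centroid, round(L1, 1), round(L2, 1))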

  9. Distributed watershed modeling of design storms to identify nonpoint source loading areas

    SciTech Connect

    Endreny, T.A.; Wood, E.F.

    1999-03-01

    Watershed areas that generate nonpoint source (NPS) polluted runoff need to be identified prior to the design of basin-wide water quality projects. Current watershed-scale NPS models lack a variable source area (VSA) hydrology routine, and are therefore unable to identify spatially dynamic runoff zones. The TOPLATS model used a watertable-driven VSA hydrology routine to identify runoff zones in a 17.5 km² agricultural watershed in central Oklahoma. Runoff areas were identified in a static modeling framework as a function of prestorm watertable depth and also in a dynamic modeling framework by simulating basin response to 2, 10, and 25 yr return period 6 h design storms. Variable source area expansion occurred throughout the duration of each 6 h storm and total runoff area increased with design storm intensity. Basin-average runoff rates of 1 mm h⁻¹ provided little insight into runoff extremes while the spatially distributed analysis identified saturation excess zones with runoff rates equaling effective precipitation. The intersection of agricultural landcover areas with these saturation excess runoff zones targeted the priority potential NPS runoff zones that should be validated with field visits. These intersected areas, labeled as potential NPS runoff zones, were mapped within the watershed to demonstrate spatial analysis options available in TOPLATS for managing complex distributions of watershed runoff. TOPLATS concepts in spatial saturation excess runoff modelling should be incorporated into NPS management models.
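    A minimal Python sketch of watertable-driven saturation-excess (variable source area) runoff mapping in the spirit of the approach above; this simplified cell-by-cell storage-deficit rule is illustrative only and is not the TOPLATS code:

      import numpy as np

      def saturation_excess(deficit_mm, eff_precip_mm):
          """deficit_mm: prestorm watertable/storage deficit per cell (mm).
          eff_precip_mm: effective precipitation depth of the design storm (mm).
          Returns per-cell runoff (mm) and the boolean saturated-area mask."""
          deficit = np.asarray(deficit_mm, dtype=float)
          runoff = np.clip(eff_precip_mm - deficit, 0.0, None)   # excess over remaining storage
          saturated = runoff > 0.0
          return runoff, saturated

      deficit = np.array([[0.0, 5.0, 30.0], [2.0, 12.0, 60.0]])  # mm, illustrative grid
      runoff, mask = saturation_excess(deficit, eff_precip_mm=20.0)
      print(runoff)        # mm of saturation-excess runoff per cell
      print(mask.mean())   # fraction of the basin acting as a runoff source area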

  10. Residues, Distributions, Sources, and Ecological Risks of OCPs in the Water from Lake Chaohu, China

    PubMed Central

    Liu, Wen-Xiu; He, Wei; Qin, Ning; Kong, Xiang-Zhen; He, Qi-Shuang; Ouyang, Hui-Ling; Yang, Bin; Wang, Qing-Mei; Yang, Chen; Jiang, Yu-Jiao; Wu, Wen-Jing; Xu, Fu-Liu

    2012-01-01

    The levels of 18 organochlorine pesticides (OCPs) in the water from Lake Chaohu were measured by a solid phase extraction-gas chromatography-mass spectrometer detector. The spatial and temporal distribution, possible sources, and potential ecological risks of the OCPs were analyzed. The annual mean concentration for the OCPs in Lake Chaohu was 6.99 ng/L. Aldrin, HCHs, and DDTs accounted for large proportions of the OCPs. The spatial pollution followed the order of Central Lakes > Western Lakes > Eastern Lakes and water area. The sources of the HCHs were mainly from the historical usage of lindane. DDTs were degraded under aerobic conditions, and the main sources were from the use of technical DDTs. The ecological risks of 5 OCPs were assessed by the species sensitivity distribution (SSD) method in the order of heptachlor > γ-HCH > p,p′-DDT > aldrin > endrin. The combined risks at all sampling sites were in the order of MS > JC > ZM > TX, and those of different species were crustaceans > fish > insects and spiders. Overall, the ecological risks of OCP contaminants on aquatic animals were very low. PMID:23251107
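    A minimal Python sketch of the species sensitivity distribution (SSD) step: fitting a log-normal SSD to species toxicity endpoints and deriving HC5 and the potentially affected fraction (PAF); the toxicity values are invented for illustration and are not the endpoints used in the study:

      import numpy as np
      from scipy import stats

      tox_ug_L = np.array([0.9, 2.5, 4.0, 11.0, 35.0, 120.0])   # hypothetical LC50s (ug/L)

      mu = np.mean(np.log10(tox_ug_L))
      sigma = np.std(np.log10(tox_ug_L), ddof=1)
      hc5 = 10 ** stats.norm.ppf(0.05, loc=mu, scale=sigma)     # 5th percentile of the SSD

      def paf(ambient_ug_L):
          """Fraction of species whose endpoint is exceeded at the ambient level."""
          return stats.norm.cdf(np.log10(ambient_ug_L), loc=mu, scale=sigma)

      # Ambient level of 0.007 ug/L (= 7 ng/L), of the order of the lake mean quoted above.
      print(f"HC5 ~ {hc5:.2f} ug/L, PAF at 0.007 ug/L ~ {paf(0.007):.4f}")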

  11. Organic micropollutants in coastal waters from NW Mediterranean Sea: sources distribution and potential risk.

    PubMed

    Sánchez-Avila, Juan; Tauler, Romà; Lacorte, Silvia

    2012-10-01

    This study provides a first estimation of the sources, distribution and risk of organic micropollutants (OMPs) in coastal waters from NW Mediterranean Sea. Polycyclic aromatic hydrocarbons, polychlorinated biphenyls, organochlorinated pesticides, polybrominated diphenyl ethers, phthalates and alkylphenols were analyzed by solid phase extraction and gas chromatography coupled to tandem mass spectrometry (SPE-GC-EI-MS/MS). River waters and wastewater treatment plant effluents discharging to the sea were identified as the main sources of OMPs to coastal waters, with an estimated input amount of around 25,800 g d(-1). The concentration of ΣOMPs in coastal areas ranged from 17.4 to 8442 ng L(-1), and was the highest in port waters, followed by coastal and river mouth seawaters. A summarized overview of the patterns and sources of OMP contamination on the investigated coastal sea waters of NW Mediterranean Sea, as well as of their geographical distribution was obtained by Principal Component Analysis of the complete data set after its adequate pretreatment. Alkylphenols, bisphenol A and phthalates were the main contributors to ΣOMPs and produced an estimated significant pollution risk for fish, algae and the sensitive mysid shrimp organisms in seawater samples. The combination of GC-MS/MS, chemometrics and risk analysis is proven to be useful for a better control and management of OMP discharges. PMID:22706016

  12. Sources and distribution of aliphatic and polyaromatic hydrocarbons in sediments from the Neuquen River, Argentine Patagonia.

    PubMed

    Monza, Liliana B; Loewy, Ruth M; Savini, Mónica C; Pechen de d'Angelo, Ana M

    2013-01-01

    Spatial distribution and probable sources of aliphatic and polyaromatic hydrocarbons (AHs, PAHs) were investigated in surface sediments collected along the bank of the Neuquen River, Argentina. Total concentrations of aliphatic hydrocarbons ranged between 0.41 and 125 μg/g dw. Six stations presented low values of resolved aliphatic hydrocarbons and the n-alkane distribution indexes applied suggested a clear biogenic source. These values can be considered the baseline levels of aliphatic hydrocarbons for the river sediments. This constitutes important information for the assessment of future impacts, since a strong expansion of shale gas and shale oil exploitation in these zones is currently underway. For the other 11 stations, a mixture of aliphatic hydrocarbons of petrogenic and biogenic origin was observed. The spatial distribution reflects local inputs of these pollutants with a significant increase in concentrations in the lower course, where two major cities are located. The highest values of total aliphatic hydrocarbons were found in this sector which, in turn, was the only one where individual PAHs were detected.
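    A minimal Python sketch of one widely used n-alkane distribution index for source discrimination, the Carbon Preference Index (CPI), in one of its published forms; values near 1 suggest a petrogenic origin and values well above 1 a biogenic (plant-wax) origin. The concentrations below are hypothetical, not data from this survey:

      def cpi(conc_by_carbon_number):
          """conc_by_carbon_number: dict {n: concentration} for n-alkanes C24..C34."""
          c = conc_by_carbon_number
          odd_25_33  = sum(c.get(n, 0.0) for n in range(25, 34, 2))
          even_24_32 = sum(c.get(n, 0.0) for n in range(24, 33, 2))
          even_26_34 = sum(c.get(n, 0.0) for n in range(26, 35, 2))
          return 0.5 * (odd_25_33 / even_24_32 + odd_25_33 / even_26_34)

      # Invented concentrations (ug/g dw) with a strong odd-over-even predominance.
      sample = {24: 0.2, 25: 1.1, 26: 0.3, 27: 1.6, 28: 0.3, 29: 2.0,
                30: 0.3, 31: 1.8, 32: 0.2, 33: 0.9, 34: 0.1}
      print(round(cpi(sample), 2))   # >> 1, consistent with a biogenic source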

  13. Distribution and geological sources of selenium in environmental materials in Taoyuan County, Hunan Province, China.

    PubMed

    Ni, Runxiang; Luo, Kunli; Tian, Xinglei; Yan, Songgui; Zhong, Jitai; Liu, Maoqiu

    2016-06-01

    The selenium (Se) distribution and geological sources in Taoyuan County, China, were determined by using hydride generation atomic fluorescence spectrometry on rock, soil, and food crop samples collected from various geological regions within the county. The results show Se contents of 0.02-223.85, 0.18-7.05, and 0.006-5.374 mg/kg in the rock, soil, and food crops in Taoyuan County, respectively. The region showing the highest Se content is western Taoyuan County amid the Lower Cambrian and Ediacaran black rock series outcrop, which has banding distributed west to east. A relatively high-Se environment is found in the central and southern areas of Taoyuan County, where Quaternary Limnetic sedimentary facies and Neoproterozoic metamorphic volcanic rocks outcrop, respectively. A relatively low-Se environment includes the central and northern areas of Taoyuan County, where Middle and Upper Cambrian and Ordovician carbonate rocks and Cretaceous sandstones and conglomerates outcrop. These results indicate that Se distribution in Taoyuan County varies markedly and is controlled by the Se content of the bedrock. The Se-enriched Lower Cambrian and Ediacaran black rock series is the primary source of the seleniferous environment observed in Taoyuan County. Potential seleniferous environments are likely to be found near outcrops of the Lower Cambrian and Ediacaran black rock series in southern China.

  14. Distribution and geological sources of selenium in environmental materials in Taoyuan County, Hunan Province, China.

    PubMed

    Ni, Runxiang; Luo, Kunli; Tian, Xinglei; Yan, Songgui; Zhong, Jitai; Liu, Maoqiu

    2016-06-01

    The selenium (Se) distribution and geological sources in Taoyuan County, China, were determined by using hydride generation atomic fluorescence spectrometry on rock, soil, and food crop samples collected from various geological regions within the county. The results show Se contents of 0.02-223.85, 0.18-7.05, and 0.006-5.374 mg/kg in the rock, soil, and food crops in Taoyuan County, respectively. The region showing the highest Se content is western Taoyuan County amid the Lower Cambrian and Ediacaran black rock series outcrop, which has banding distributed west to east. A relatively high-Se environment is found in the central and southern areas of Taoyuan County, where Quaternary Limnetic sedimentary facies and Neoproterozoic metamorphic volcanic rocks outcrop, respectively. A relatively low-Se environment includes the central and northern areas of Taoyuan County, where Middle and Upper Cambrian and Ordovician carbonate rocks and Cretaceous sandstones and conglomerates outcrop. These results indicate that Se distribution in Taoyuan County varies markedly and is controlled by the Se content of the bedrock. The Se-enriched Lower Cambrian and Ediacaran black rock series is the primary source of the seleniferous environment observed in Taoyuan County. Potential seleniferous environments are likely to be found near outcrops of the Lower Cambrian and Ediacaran black rock series in southern China. PMID:26563208

  15. Wall-loss distribution of charge breeding ions in an electron cyclotron resonance ion source

    SciTech Connect

    Jeong, S. C.; Oyaizu, M.; Imai, N.; Hirayama, Y.; Ishiyama, H.; Miyatake, H.; Niki, K.; Okada, M.; Watanabe, Y. X.; Otokawa, Y.; Osa, A.; Ichikawa, S.

    2011-03-15

    The ion loss distribution in an electron cyclotron resonance ion source (ECRIS) was investigated to understand the element dependence of the charge breeding efficiency in an electron cyclotron resonance (ECR) charge breeder. The radioactive ¹¹¹In¹⁺ and ¹⁴⁰Xe¹⁺ ions (typical nonvolatile and volatile elements, respectively) were injected into the ECR charge breeder at the Tokai Radioactive Ion Accelerator Complex to breed their charge states. Their respective residual activities on the sidewall of the cylindrical plasma chamber of the source were measured after charge breeding as functions of the azimuthal angle and longitudinal position and two-dimensional distributions of ions lost during charge breeding in the ECRIS were obtained. These distributions had different azimuthal symmetries. The origins of these different azimuthal symmetries are qualitatively discussed by analyzing the differences and similarities in the observed wall-loss patterns. The implications for improving the charge breeding efficiencies of nonvolatile elements in ECR charge breeders are described. The similarities represent universal ion loss characteristics in an ECR charge breeder, which are different from the loss patterns of electrons on the ECRIS wall.

  16. Long-term trends in California mobile source emissions and ambient concentrations of black carbon and organic aerosol.

    PubMed

    McDonald, Brian C; Goldstein, Allen H; Harley, Robert A

    2015-04-21

    A fuel-based approach is used to assess long-term trends (1970-2010) in mobile source emissions of black carbon (BC) and organic aerosol (OA, including both primary emissions and secondary formation). The main focus of this analysis is the Los Angeles Basin, where a long record of measurements is available to infer trends in ambient concentrations of BC and organic carbon (OC), with OC used here as a proxy for OA. Mobile source emissions and ambient concentrations have decreased similarly, reflecting the importance of on- and off-road engines as sources of BC and OA in urban areas. In 1970, the on-road sector accounted for ∼90% of total mobile source emissions of BC and OA (primary + secondary). Over time, as on-road engine emissions have been controlled, the relative importance of off-road sources has grown. By 2010, off-road engines were estimated to account for 37 ± 20% and 45 ± 16% of total mobile source contributions to BC and OA, respectively, in the Los Angeles area. This study highlights both the success of efforts to control on-road emission sources, and the importance of considering off-road engine and other VOC source contributions when assessing long-term emission and ambient air quality trends. PMID:25793355

  17. Long-term trends in California mobile source emissions and ambient concentrations of black carbon and organic aerosol.

    PubMed

    McDonald, Brian C; Goldstein, Allen H; Harley, Robert A

    2015-04-21

    A fuel-based approach is used to assess long-term trends (1970-2010) in mobile source emissions of black carbon (BC) and organic aerosol (OA, including both primary emissions and secondary formation). The main focus of this analysis is the Los Angeles Basin, where a long record of measurements is available to infer trends in ambient concentrations of BC and organic carbon (OC), with OC used here as a proxy for OA. Mobile source emissions and ambient concentrations have decreased similarly, reflecting the importance of on- and off-road engines as sources of BC and OA in urban areas. In 1970, the on-road sector accounted for ∼90% of total mobile source emissions of BC and OA (primary + secondary). Over time, as on-road engine emissions have been controlled, the relative importance of off-road sources has grown. By 2010, off-road engines were estimated to account for 37 ± 20% and 45 ± 16% of total mobile source contributions to BC and OA, respectively, in the Los Angeles area. This study highlights both the success of efforts to control on-road emission sources, and the importance of considering off-road engine and other VOC source contributions when assessing long-term emission and ambient air quality trends.

  18. Estimation of marine source-term following Fukushima Dai-ichi accident.

    PubMed

    Bailly du Bois, P; Laguionie, P; Boust, D; Korsakissok, I; Didier, D; Fiévet, B

    2012-12-01

    Contamination of the marine environment following the accident in the Fukushima Dai-ichi nuclear power plant represented the most important artificial radioactive release flux into the sea ever known. The radioactive marine pollution came from atmospheric fallout onto the ocean, direct release of contaminated water from the plant and transport of radioactive pollution from leaching through contaminated soil. In the immediate vicinity of the plant (less than 500 m), the seawater concentrations reached 68,000 Bq.L(-1) for (134)Cs and (137)Cs, and exceeded 100,000 Bq.L(-1) for (131)I in early April. Due to the accidental context of the releases, it is difficult to estimate the total amount of radionuclides introduced into seawater from data obtained in the plant. An evaluation is proposed here, based on measurements performed in seawater for monitoring purposes. Quantities of (137)Cs in seawater in a 50-km area around the plant were calculated from interpolation of seawater measurements. The environmental halftime of seawater in this area is deduced from the time-evolution of these quantities. This halftime appeared constant at about 7 days for (137)Cs. These data allowed estimation of the amount of principal marine inputs and their evolution in time: a total of 27 PBq (12 PBq-41 PBq) of (137)Cs was estimated up to July 18. Even though this main release may be followed by residual inputs from the plant, river runoff and leakage from deposited sediments, it represents the principal source-term that must be accounted for in future studies of the consequences of the accident on marine systems. The (137)Cs from Fukushima will remain detectable for several years throughout the North Pacific, and (137)Cs/(134)Cs ratio will be a tracer for future studies.
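    A minimal Python sketch of how an environmental half-time can be deduced from the time evolution of a seawater inventory, by fitting an exponential decline; the inventory values are invented for illustration and are not the monitoring data used in the study:

      import numpy as np

      days      = np.array([0, 7, 14, 21, 28], dtype=float)
      inventory = np.array([20.0, 10.5, 4.9, 2.6, 1.2])   # PBq in the coastal box, invented

      # Fit ln(Q) = ln(Q0) - k*t, then half-time = ln(2)/k.
      slope, intercept = np.polyfit(days, np.log(inventory), 1)
      half_time_days = np.log(2.0) / -slope
      print(f"environmental half-time ~ {half_time_days:.1f} days")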

  19. Source term development for the 300 Area Treated Effluent Disposal Facility

    SciTech Connect

    Bendixsen, R.B.

    1994-04-01

    A novel method for developing a source term for radiation and hazardous material content of sludge processing equipment and barrels in a new waste water treatment facility is presented in this paper. The 300 Area Treated Effluent Disposal Facility (TEDF), located at the Hanford Site near Richland, Washington, will treat process sewer waste water from the 300 Area and discharge a permittable effluent flow into the Columbia River. A process information and hazards analysis document needed a process flowsheet detailing the concentrations of radionuclides, inorganics, and organics throughout the process, including the sludge effluent flow. A hazards analysis for a processing facility usually includes a flowsheet showing the process, materials, heat balances, and instrumentation for that facility. The flow sheet estimates stream flow quantities, activities, compositions, and properties. For the 300 Area TEDF, it was necessary to prepare the flow sheet with all of the information so that radiation doses to workers could be estimated. The novel method used to develop the 300 Area TEDF flowsheet included generating recycle factors. To prepare each component in the flowsheet, precipitation, destruction, and two recycle factors were developed. The factors were entered into a spreadsheet and provided a method of estimating the steady-state concentrations of all of the components in the facility. This report describes how the factors were developed, explains how they were used in developing the flowsheet, and presents the results of using these values to estimate radiation doses for personnel working in the facility. The report concludes with a discussion of the effect of estimates of radioactive and hazardous material concentrations on shielding design and the need for containment features for equipment in the facility.

  20. Preparation of Radium and Other Spent Sealed Sources Containing Long-Lived Radionuclides to Long-Term Storage

    SciTech Connect

    Arustamov, A. E.; Ojovan, M. I.; Semenov, K. N.; Sobolev, I. A.

    2003-02-26

    At present, the management of radioactive waste containing long-lived radionuclides is one of the most serious problems. The complexity of managing this kind of waste is due to the extended half-lives of these radionuclides. Hence it is difficult to predict not only the long-term behavior of packages with waste, but also the conditions of the containing geological medium. The spent sources containing long-lived radionuclides are not suitable for disposal in shallow ground repositories. They must be temporarily stored in special engineered structures. Long-term storage of these sources requires the application of additional measures to reduce the risk of incidents involving them.

  1. DOES SIZE MATTER? THE UNDERLYING INTRINSIC SIZE DISTRIBUTION OF RADIO SOURCES AND IMPLICATIONS FOR UNIFICATION BY ORIENTATION

    SciTech Connect

    DiPompeo, M. A.; Runnoe, J. C.; Myers, A. D.; Boroson, T. A.

    2013-09-01

    Unification by orientation is a ubiquitous concept in the study of active galactic nuclei. A gold standard of the orientation paradigm is the hypothesis that radio galaxies and radio-loud quasars are intrinsically the same, but are observed over different ranges of viewing angles. Historically, strong support for this model was provided by the projected sizes of radio structure in luminous radio galaxies, which were found to be significantly larger than those of quasars, as predicted due to simple geometric projection. Recently, this test of the simplest prediction of orientation-based models has been revisited with larger samples that cover wider ranges of fundamental properties, and no clear difference in projected sizes of radio structure is found. Cast solely in terms of viewing angle effects, these results provide convincing evidence that unification of these objects solely through orientation fails. However, it is possible that conflicting results regarding the role orientation plays in our view of radio sources simply result from insufficient sampling of their intrinsic size distribution. We test this possibility using Monte Carlo simulations constrained by real sample sizes and properties. We develop models for the real intrinsic size distribution of radio sources, simulate observations by randomly sampling intrinsic sizes and viewing angles, and analyze how likely each sample is to support or dispute unification by orientation. We find that, while it is possible to reconcile conflicting results purely within a simple, orientation-based framework, it is very unlikely. We analyze the effects that sample size, relative numbers of radio galaxies and quasars, the critical angle that separates the two subclasses, and the shape of the intrinsic size distribution have on this type of test.
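    A minimal Python sketch of the Monte Carlo test described above: intrinsic sizes and random viewing angles are drawn, projected sizes are computed, and the sample is split into quasars and radio galaxies at a critical angle; the log-normal intrinsic size distribution, critical angle and sample size are illustrative assumptions, not the authors' fitted models:

      import numpy as np

      rng = np.random.default_rng(0)
      n, theta_crit = 300, np.radians(45.0)               # sample size, critical angle (assumed)

      intrinsic = rng.lognormal(mean=np.log(300.0), sigma=0.8, size=n)   # kpc, assumed
      cos_theta = rng.uniform(0.0, 1.0, size=n)           # isotropic orientation on a half-sphere
      theta = np.arccos(cos_theta)
      projected = intrinsic * np.sin(theta)               # projection onto the plane of the sky

      quasar = theta < theta_crit                         # viewed closer to the jet axis
      print("median projected size, quasars       :", round(np.median(projected[quasar]), 1), "kpc")
      print("median projected size, radio galaxies :", round(np.median(projected[~quasar]), 1), "kpc")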

  2. The development of a realistic source term for sodium-cooled fast reactors : assessment of current status and future needs.

    SciTech Connect

    LaChance, Jeffrey L.; Phillips, Jesse; Parma, Edward J., Jr.; Olivier, Tara Jean; Middleton, Bobby D.

    2011-06-01

    Sodium-cooled fast reactors (SFRs) continue to be proposed and designed throughout the United States and the world. Although the number of SFRs actually operating has declined substantially since the 1980s, a significant interest in advancing these types of reactor systems remains. Of the many issues associated with the development and deployment of SFRs, one of high regulatory importance is the source term to be used in the siting of the reactor. A substantial amount of modeling and experimental work has been performed over the past four decades on accident analysis, sodium coolant behavior, and radionuclide release for SFRs. The objective of this report is to aid in determining the gaps and issues related to the development of a realistic, mechanistically derived source term for SFRs. This report will allow the reader to become familiar with the severe accident source term concept and gain a broad understanding of the current status of the models and experimental work. Further, this report will allow insight into future work, in terms of both model development and experimental validation, which is necessary in order to develop a realistic source term for SFRs.

  3. Semi-implicit and fully implicit shock-capturing methods for hyperbolic conservation laws with stiff source terms

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Shinn, J. L.

    1986-01-01

    Some numerical aspects of finite-difference algorithms for nonlinear multidimensional hyperbolic conservation laws with stiff nonhomogeneous (source) terms are discussed. If the stiffness is entirely dominated by the source term, a semi-implicit shock-capturing method is proposed provided that the Jacobian of the source terms possesses certain properties. The proposed semi-implicit method can be viewed as a variant of the Bussing and Murman point-implicit scheme with a more appropriate numerical dissipation for the computation of strong shock waves. However, if the stiffness is not solely dominated by the source terms, a fully implicit method would be a better choice. The situation is complicated by problems in more than one dimension, and the presence of stiff source terms further complicates the solution procedures for alternating direction implicit (ADI) methods. Several alternatives are discussed. The primary motivation for constructing these schemes was to address thermally and chemically nonequilibrium flows in the hypersonic regime. Due to the unique structure of the eigenvalues and eigenvectors for fluid flows of this type, the computation can be simplified, thus providing a more efficient solution procedure than one might have anticipated.
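    A minimal Python sketch of a point-implicit (semi-implicit) treatment of a stiff source term, illustrated on the scalar model equation u_t + a u_x = -k(u - ue) with first-order upwind convection; this toy example conveys the idea of treating only the stiff source implicitly and is not the scheme proposed in the paper:

      import numpy as np

      a, k, ue = 1.0, 1.0e4, 0.2          # advection speed, stiff relaxation rate, equilibrium state
      nx = 200
      dx = 1.0 / nx
      dt = 0.5 * dx / a                    # CFL-limited by convection only, not by 1/k
      u = np.where(np.linspace(0, 1, nx) < 0.3, 1.0, ue)   # initial step profile

      for _ in range(100):
          conv = -a * dt / dx * (u - np.roll(u, 1))        # explicit upwind flux (a > 0, periodic)
          # Point-implicit source: solve u_new = u + conv - dt*k*(u_new - ue) for u_new.
          u = (u + conv + dt * k * ue) / (1.0 + dt * k)

      print(u.min(), u.max())   # driven toward ue without a dt ~ 1/k stability restriction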

  4. Concentrations, distribution, sources, and ecological risk assessment of heavy metals in agricultural topsoil of the Three Gorges Dam region, China.

    PubMed

    Liu, Minxia; Yang, Yuyi; Yun, Xiaoyan; Zhang, Miaomiao; Wang, Jun

    2015-03-01

    Concentrations, distribution, sources, and ecological risk of seven heavy metals including chromium (Cr), nickel (Ni), copper (Cu), zinc (Zn), lead (Pb), cadmium (Cd), and mercury (Hg) in agricultural topsoil samples of the Three Gorges Dam region, China were investigated in this study. Among seven heavy metals, Zn had the highest mean concentration (149 mg kg(-1)) in the agricultural topsoil, followed by Cr (66 mg kg(-1)), Cu (52.2 mg kg(-1)), Pb (13.0 mg kg(-1)), Ni (8.5 mg kg(-1)), Cd (0.29 mg kg(-1)), and Hg (0.08 mg kg(-1)). Enrichment factor (EF) values of Zn, Cu, Cd, and Hg were higher than 1.5, indicating that Zn, Cu, Cd, and Hg were the major pollutants in this study area. The average potential ecological risk index (RI) value was 147, suggesting that heavy metals in the agricultural topsoil in the study area had a low ecological risk. The result of factor analysis (FA) and correlation analysis showed that long-term use of chemical fertilizer and pesticides, natural rock weathering, and atmospheric deposition were the main sources of the seven heavy metals in agricultural topsoil of the Three Gorges Dam region. Factor analysis-multiple linear regression (FA-MLR) results indicated that the most important source in this area was long-term use of chemical fertilizer and pesticides, which contributed 70 % for Cu and Zn, 62 % for Cd, and 72 % for Hg. More attention must be paid to the extensive use of chemical fertilizers and pesticides containing heavy metals which have accumulated in the agricultural soil. PMID:25716527
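    A minimal Python sketch of the two screening indices used above, the enrichment factor (EF) and the Hakanson potential ecological risk index (RI); the background concentrations, reference-element values and toxic-response factors below are generic assumptions, so the printed RI will differ from the study's value of 147:

      def enrichment_factor(c_metal, c_ref, bg_metal, bg_ref):
          """EF = (C_metal/C_ref)_sample / (C_metal/C_ref)_background."""
          return (c_metal / c_ref) / (bg_metal / bg_ref)

      # Commonly quoted Hakanson toxic-response factors (dimensionless, assumed here).
      TR = {"Cr": 2, "Ni": 5, "Cu": 5, "Zn": 1, "Pb": 5, "Cd": 30, "Hg": 40}

      def potential_ecological_risk(sample, background):
          """RI = sum_i TR_i * (C_i / C_background_i) over the measured metals."""
          return sum(TR[m] * sample[m] / background[m] for m in sample)

      # Mean topsoil concentrations quoted above (mg/kg); backgrounds are placeholders.
      sample     = {"Cr": 66, "Ni": 8.5, "Cu": 52.2, "Zn": 149, "Pb": 13.0, "Cd": 0.29, "Hg": 0.08}
      background = {"Cr": 61, "Ni": 27,  "Cu": 23,   "Zn": 70,  "Pb": 26,   "Cd": 0.08, "Hg": 0.04}
      print(round(potential_ecological_risk(sample, background), 1))
      # EF for Zn normalized to Al (sample vs. background Al in %, both assumed).
      print(round(enrichment_factor(c_metal=149, c_ref=7.2, bg_metal=70, bg_ref=8.2), 2))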

  5. Concentrations, distribution, sources, and ecological risk assessment of heavy metals in agricultural topsoil of the Three Gorges Dam region, China.

    PubMed

    Liu, Minxia; Yang, Yuyi; Yun, Xiaoyan; Zhang, Miaomiao; Wang, Jun

    2015-03-01

    Concentrations, distribution, sources, and ecological risk of seven heavy metals including chromium (Cr), nickel (Ni), copper (Cu), zinc (Zn), lead (Pb), cadmium (Cd), and mercury (Hg) in agricultural topsoil samples of the Three Gorges Dam region, China were investigated in this study. Among seven heavy metals, Zn had the highest mean concentration (149 mg kg(-1)) in the agricultural topsoil, followed by Cr (66 mg kg(-1)), Cu (52.2 mg kg(-1)), Pb (13.0 mg kg(-1)), Ni (8.5 mg kg(-1)), Cd (0.29 mg kg(-1)), and Hg (0.08 mg kg(-1)). Enrichment factor (EF) values of Zn, Cu, Cd, and Hg were higher than 1.5, indicating that Zn, Cu, Cd, and Hg were the major pollutants in this study area. The average potential ecological risk index (RI) value was 147, suggesting that heavy metals in the agricultural topsoil in the study area had a low ecological risk. The result of factor analysis (FA) and correlation analysis showed that long-term use of chemical fertilizer and pesticides, natural rock weathering, and atmospheric deposition were the main sources of the seven heavy metals in agricultural topsoil of the Three Gorges Dam region. Factor analysis-multiple linear regression (FA-MLR) results indicated that the most important source in this area was long-term use of chemical fertilizer and pesticides, which contributed 70 % for Cu and Zn, 62 % for Cd, and 72 % for Hg. More attention must be paid to the extensive use of chemical fertilizers and pesticides containing heavy metals which have accumulated in the agricultural soil.

  6. Required distribution of noise sources for Green's function recovery in diffusive fields

    NASA Astrophysics Data System (ADS)

    Shamsalsadati, S.; Weiss, C. J.

    2011-12-01

    In the most general sense, noise is the part of the signal of little or no interest, due to a multitude of reasons such as operator error, imperfect instrumentation, experiment design, or inescapable background interference. Considering the latter, it has been shown that Green's function can be extracted from cross-correlation of the ambient, diffusive wavefields arising from background random noise sources. Pore pressure and low-frequency electromagnetic induction are two such examples of diffusive fields. In theory, applying the Green's function method in geophysical exploration requires an infinite number of volumetrically distributed sources; however, in the real world the number of noise sources in an area is limited and, furthermore, they are unevenly distributed in time, space and spectral content. Hence, quantification of the requisite noise sources that enable us to calculate Green's function acceptably well remains an open research question. The purpose of this study is to find the area of noise sources that contribute most to the Green's function estimation in diffusive systems. We call such a region the Volume of Relevance (VoR). Our analysis builds upon recent work in a 1D homogeneous system where it was shown that sources located between the two receiver positions are the most important ones for the purpose of Green's function recovery. Our results confirm the previous finding but we also examine the effect of heterogeneity, dimensionality and receiver location in both 1D and 2D at a fixed frequency. We demonstrate that for receivers located symmetrically across an interface between regions of contrasting diffusivity, the VoR rapidly shifts from one side of the interface to the other, and back again, as receiver separation increases. We also demonstrate that where the receiver pair is located on the interface itself, the shifting is less rapid, and for moderate to high diffusivity contrasts, the VoR remains entirely on the more diffusive side. In addition, because classical
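    A minimal Python sketch of the noise cross-correlation idea referred to above: long random-noise records at two receivers are cross-correlated and the travel time between them is recovered from the correlation peak; the synthetic "medium" here is just a fixed delay plus attenuation, not a diffusive-field simulation:

      import numpy as np

      rng = np.random.default_rng(1)
      fs, n = 100.0, 5000                      # sample rate (Hz), record length
      delay, atten = 25, 0.6                   # true travel time (samples), amplitude loss

      noise = rng.standard_normal(n)           # random background sources (lumped)
      rec_a = noise
      rec_b = atten * np.roll(noise, delay) + 0.2 * rng.standard_normal(n)

      xcorr = np.correlate(rec_b, rec_a, mode="full")[n - 1:]   # keep causal lags only
      lag = np.argmax(xcorr)
      print(f"recovered travel time ~ {lag / fs:.2f} s (true {delay / fs:.2f} s)")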

  7. Short-term Music Training Enhances Complex, Distributed Neural Communication during Music and Linguistic Tasks.

    PubMed

    Carpentier, Sarah M; Moreno, Sylvain; McIntosh, Anthony R

    2016-10-01

    Musical training is frequently associated with benefits to linguistic abilities, and recent focus has been placed on possible benefits of bilingualism to lifelong executive functions; however, the neural mechanisms for such effects are unclear. The aim of this study was to gain better understanding of the whole-brain functional effects of music and second-language training that could support such previously observed cognitive transfer effects. We conducted a 28-day longitudinal study of monolingual English-speaking 4- to 6-year-old children randomly selected to receive daily music or French language training, excluding weekends. Children completed passive EEG music note and French vowel auditory oddball detection tasks before and after training. Brain signal complexity was measured on source waveforms at multiple temporal scales as an index of neural information processing and network communication load. Comparing pretraining with posttraining, musical training was associated with increased EEG complexity at coarse temporal scales during the music and French vowel tasks in widely distributed cortical regions. Conversely, very minimal decreases in complexity at fine scales and trends toward coarse-scale increases were displayed after French training during the tasks. Spectral analysis failed to distinguish between training types and found overall theta (3.5-7.5 Hz) power increases after all training forms, with spatially fewer decreases in power at higher frequencies (>10 Hz). These findings demonstrate that musical training increased diversity of brain network states to support domain-specific music skill acquisition and music-to-language transfer effects.

  8. Short-term Music Training Enhances Complex, Distributed Neural Communication during Music and Linguistic Tasks

    PubMed Central

    Carpentier, Sarah M.; Moreno, Sylvain; McIntosh, Anthony R.

    2016-01-01

    Musical training is frequently associated with benefits to linguistic abilities, and recent focus has been placed on possible benefits of bilingualism to lifelong executive functions; however, the neural mechanisms for such effects are unclear. The aim of this study was to gain better understanding of the whole-brain functional effects of music and second-language training that could support such previously observed cognitive transfer effects. We conducted a 28-day longitudinal study of monolingual English-speaking 4- to 6-year-old children randomly selected to receive daily music or French language training, excluding weekends. Children completed passive EEG music note and French vowel auditory oddball detection tasks before and after training. Brain signal complexity was measured on source waveforms at multiple temporal scales as an index of neural information processing and network communication load. Comparing pretraining with posttraining, musical training was associated with increased EEG complexity at coarse temporal scales during the music and French vowel tasks in widely distributed cortical regions. Conversely, very minimal decreases in complexity at fine scales and trends toward coarse-scale increases were displayed after French training during the tasks. Spectral analysis failed to distinguish between training types and found overall theta (3.5–7.5 Hz) power increases after all training forms, with spatially fewer decreases in power at higher frequencies (>10 Hz). These findings demonstrate that musical training increased diversity of brain network states to support domain-specific music skill acquisition and music-to-language transfer effects. PMID:27243611

  9. Short-term Music Training Enhances Complex, Distributed Neural Communication during Music and Linguistic Tasks.

    PubMed

    Carpentier, Sarah M; Moreno, Sylvain; McIntosh, Anthony R

    2016-10-01

    Musical training is frequently associated with benefits to linguistic abilities, and recent focus has been placed on possible benefits of bilingualism to lifelong executive functions; however, the neural mechanisms for such effects are unclear. The aim of this study was to gain better understanding of the whole-brain functional effects of music and second-language training that could support such previously observed cognitive transfer effects. We conducted a 28-day longitudinal study of monolingual English-speaking 4- to 6-year-old children randomly selected to receive daily music or French language training, excluding weekends. Children completed passive EEG music note and French vowel auditory oddball detection tasks before and after training. Brain signal complexity was measured on source waveforms at multiple temporal scales as an index of neural information processing and network communication load. Comparing pretraining with posttraining, musical training was associated with increased EEG complexity at coarse temporal scales during the music and French vowel tasks in widely distributed cortical regions. Conversely, very minimal decreases in complexity at fine scales and trends toward coarse-scale increases were displayed after French training during the tasks. Spectral analysis failed to distinguish between training types and found overall theta (3.5-7.5 Hz) power increases after all training forms, with spatially fewer decreases in power at higher frequencies (>10 Hz). These findings demonstrate that musical training increased diversity of brain network states to support domain-specific music skill acquisition and music-to-language transfer effects. PMID:27243611

  10. Photon-monitoring attack on continuous-variable quantum key distribution with source in middle

    NASA Astrophysics Data System (ADS)

    Wang, Yijun; Huang, Peng; Guo, Ying; Huang, Dazu

    2014-12-01

    Motivated by the fact that non-Gaussian operations may increase the entanglement of an entangled system, we suggest a photon-monitoring attack strategy in the entanglement-based (EB) continuous-variable quantum key distribution (CVQKD) using the photon subtraction operations, where the entangled source originates from the center instead of one of the legal participants. It shows that an eavesdropper, Eve, can steal a large amount of information from the participants after intercepting the partial beams with the photon-monitoring attack strategy. The structure of the proposed CVQKD protocol is useful in simply analyzing how quantum loss in imperfect channels can decrease the performance of the CVQKD protocol. The proposed attack strategy can be implemented under current technology, where a newly developed and versatile non-Gaussian operation can be well employed with the entangled source in the middle in order to access a large amount of information in the EB CVQKD protocol, as well as in the prepare-and-measure (PM) CVQKD protocol.

  11. Distribution, richness, quality, and thermal maturity of source rock units on the North Slope of Alaska

    USGS Publications Warehouse

    Peters, K.E.; Bird, K.J.; Keller, M.A.; Lillis, P.G.; Magoon, L.B.

    2003-01-01

    Four source rock units on the North Slope were identified, characterized, and mapped to better understand the origin of petroleum in the area: Hue-gamma ray zone (Hue-GRZ), pebble shale unit, Kingak Shale, and Shublik Formation. Rock-Eval pyrolysis, total organic carbon analysis, and well logs were used to map the present-day thickness, organic quantity (TOC), quality (hydrogen index, HI), and thermal maturity (Tmax) of each unit. To map these units, we screened all available geochemical data for wells in the study area and assumed that the top and bottom of the oil window occur at Tmax of ~440° and 470°C, respectively. Based on several assumptions related to carbon mass balance and regional distributions of TOC, the present-day source rock quantity and quality maps were used to determine the extent of fractional conversion of the kerogen to petroleum and to map the original organic richness prior to thermal maturation.

  12. Long distance measurement-device-independent quantum key distribution with entangled photon sources

    SciTech Connect

    Xu, Feihu; Qi, Bing; Liao, Zhongfa; Lo, Hoi-Kwong

    2013-08-05

    We present a feasible method that can make quantum key distribution (QKD) both ultra-long-distance and immune to all attacks in the detection system. This method is called measurement-device-independent QKD (MDI-QKD) with entangled photon sources in the middle. By proposing a model and simulating a QKD experiment, we find that MDI-QKD with one entangled photon source can tolerate 77 dB loss (367 km standard fiber) in the asymptotic limit and 60 dB loss (286 km standard fiber) in the finite-key case with state-of-the-art detectors. Our general model can also be applied to other non-QKD experiments involving entanglement and Bell state measurements.

  13. Imaging of local temperature distributions in mesas of high-Tc superconducting terahertz sources

    NASA Astrophysics Data System (ADS)

    Tsujimoto, M.; Kambara, H.; Maeda, Y.; Yoshioka, Y.; Nakagawa, Y.; Kakeya, I.

    2014-12-01

    Stacks of intrinsic Josephson junctions in high-Tc superconductors are a promising source of intense, continuous, and monochromatic terahertz waves. In this paper, we establish a fluorescence-based temperature imaging system to directly image the surface temperature on a Bi2Sr2CaCu2O8+δ mesa sample. Intense terahertz emissions are observed in both high- and low-bias regimes, where the mesa voltage satisfies the cavity resonance condition. In the high-bias regime, the temperature distributions are shown to be inhomogeneous with a considerable temperature rise. In contrast, in the low-bias regime, the distributions are rather uniform and the local temperature is close to the bath temperature over the entire sample.

  14. Recurring flood distribution patterns related to short-term Holocene climatic variability

    NASA Astrophysics Data System (ADS)

    Benito, Gerardo; Macklin, Mark G.; Panin, Andrei; Rossato, Sandro; Fontana, Alessandro; Jones, Anna F.; Machado, Maria J.; Matlakhova, Ekaterina; Mozzi, Paolo; Zielhofer, Christoph

    2015-11-01

    Millennial- and multi-centennial scale climate variability during the Holocene has been well documented, but its impact on the distribution and timing of extreme river floods has yet to be established. Here we present a meta-analysis of more than 2000 radiometrically dated flood units to reconstruct centennial-scale Holocene flood episodes in Europe and North Africa. Our data analysis shows a general increase in flood frequency after 5000 cal. yr BP consistent with a weakening in zonal circulation over the second half of the Holocene, and with an increase in winter insolation. Multi-centennial length phases of flooding in UK and central Europe correspond with periods of minimum solar irradiance, with a clear trend of increasing flood frequency over the last 1000 years. Western Mediterranean regions show synchrony of flood episodes associated with negative phases of the North Atlantic Oscillation that are out-of-phase with those evident within the eastern Mediterranean. This long-term flood record reveals complex but geographically highly interconnected climate-flood relationships, and provides a new framework to understand likely future spatial changes of flood frequency.

  15. The geostatistic-based spatial distribution variations of soil salts under long-term wastewater irrigation.

    PubMed

    Wu, Wenyong; Yin, Shiyang; Liu, Honglu; Niu, Yong; Bao, Zhe

    2014-10-01

    The purpose of this study was to determine and evaluate the spatial changes in soil salinity by using geostatistical methods. The study focused on the suburb area of Beijing, where urban development led to water shortage and accelerated wastewater reuse to farm irrigation for more than 30 years. The data were then processed by GIS using three different interpolation techniques of ordinary kriging (OK), disjunctive kriging (DK), and universal kriging (UK). The normality test and overall trend analysis were applied for each interpolation technique to select the best fitted model for soil parameters. Results showed that OK was suitable for soil sodium adsorption ratio (SAR) and Na(+) interpolation; UK was suitable for soil Cl(-) and pH; DK was suitable for soil Ca(2+). The nugget-to-sill ratio was applied to evaluate the effects of structural and stochastic factors. The maps showed that the areas of non-saline soil and slight salinity soil accounted for 6.39 and 93.61%, respectively. The spatial distribution and accumulation of soil salt were significantly affected by the irrigation probabilities and drainage situation under long-term wastewater irrigation.
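    A minimal Python sketch of the variogram diagnostics behind such interpolation choices: an empirical semivariogram computed by distance binning, and the nugget-to-sill ratio classified with commonly used thresholds (below 0.25 strong, 0.25-0.75 moderate, above 0.75 weak spatial dependence); the data and the fitted model parameters are invented, and the study itself used GIS kriging tools:

      import numpy as np

      rng = np.random.default_rng(2)
      xy = rng.uniform(0, 1000, size=(80, 2))                 # sample locations (m), invented
      z = 2.0 + 0.002 * xy[:, 0] + rng.normal(0, 0.3, 80)     # e.g. soil SAR values, invented

      d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
      g = 0.5 * (z[:, None] - z[None, :]) ** 2                # semivariance of each pair
      bins = np.arange(0, 600, 100)
      gamma = [g[(d > lo) & (d <= lo + 100)].mean() for lo in bins]
      print(np.round(gamma, 3))                               # empirical semivariogram

      nugget, sill = 0.05, 0.40                               # fitted model parameters (assumed)
      ratio = nugget / sill
      dependence = "strong" if ratio < 0.25 else "moderate" if ratio < 0.75 else "weak"
      print(f"nugget-to-sill ratio = {ratio:.2f} -> {dependence} spatial dependence")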

  16. Recurring flood distribution patterns related to short-term Holocene climatic variability.

    PubMed

    Benito, Gerardo; Macklin, Mark G; Panin, Andrei; Rossato, Sandro; Fontana, Alessandro; Jones, Anna F; Machado, Maria J; Matlakhova, Ekaterina; Mozzi, Paolo; Zielhofer, Christoph

    2015-01-01

    Millennial- and multi-centennial scale climate variability during the Holocene has been well documented, but its impact on the distribution and timing of extreme river floods has yet to be established. Here we present a meta-analysis of more than 2000 radiometrically dated flood units to reconstruct centennial-scale Holocene flood episodes in Europe and North Africa. Our data analysis shows a general increase in flood frequency after 5000 cal. yr BP consistent with a weakening in zonal circulation over the second half of the Holocene, and with an increase in winter insolation. Multi-centennial length phases of flooding in UK and central Europe correspond with periods of minimum solar irradiance, with a clear trend of increasing flood frequency over the last 1000 years. Western Mediterranean regions show synchrony of flood episodes associated with negative phases of the North Atlantic Oscillation that are out-of-phase with those evident within the eastern Mediterranean. This long-term flood record reveals complex but geographically highly interconnected climate-flood relationships, and provides a new framework to understand likely future spatial changes of flood frequency. PMID:26549043

  17. The geostatistic-based spatial distribution variations of soil salts under long-term wastewater irrigation.

    PubMed

    Wu, Wenyong; Yin, Shiyang; Liu, Honglu; Niu, Yong; Bao, Zhe

    2014-10-01

    The purpose of this study was to determine and evaluate the spatial changes in soil salinity by using geostatistical methods. The study focused on the suburb area of Beijing, where urban development led to water shortage and accelerated wastewater reuse to farm irrigation for more than 30 years. The data were then processed by GIS using three different interpolation techniques of ordinary kriging (OK), disjunctive kriging (DK), and universal kriging (UK). The normality test and overall trend analysis were applied for each interpolation technique to select the best fitted model for soil parameters. Results showed that OK was suitable for soil sodium adsorption ratio (SAR) and Na(+) interpolation; UK was suitable for soil Cl(-) and pH; DK was suitable for soil Ca(2+). The nugget-to-sill ratio was applied to evaluate the effects of structural and stochastic factors. The maps showed that the areas of non-saline soil and slight salinity soil accounted for 6.39 and 93.61%, respectively. The spatial distribution and accumulation of soil salt were significantly affected by the irrigation probabilities and drainage situation under long-term wastewater irrigation. PMID:25127658

  18. Recurring flood distribution patterns related to short-term Holocene climatic variability

    PubMed Central

    Benito, Gerardo; Macklin, Mark G.; Panin, Andrei; Rossato, Sandro; Fontana, Alessandro; Jones, Anna F.; Machado, Maria J.; Matlakhova, Ekaterina; Mozzi, Paolo; Zielhofer, Christoph

    2015-01-01

    Millennial- and multi-centennial scale climate variability during the Holocene has been well documented, but its impact on the distribution and timing of extreme river floods has yet to be established. Here we present a meta-analysis of more than 2000 radiometrically dated flood units to reconstruct centennial-scale Holocene flood episodes in Europe and North Africa. Our data analysis shows a general increase in flood frequency after 5000 cal. yr BP consistent with a weakening in zonal circulation over the second half of the Holocene, and with an increase in winter insolation. Multi-centennial length phases of flooding in UK and central Europe correspond with periods of minimum solar irradiance, with a clear trend of increasing flood frequency over the last 1000 years. Western Mediterranean regions show synchrony of flood episodes associated with negative phases of the North Atlantic Oscillation that are out-of-phase with those evident within the eastern Mediterranean. This long-term flood record reveals complex but geographically highly interconnected climate-flood relationships, and provides a new framework to understand likely future spatial changes of flood frequency. PMID:26549043

  19. Distribution and long-term trends in various fog types over South Korea

    NASA Astrophysics Data System (ADS)

    Belorid, Miloslav; Lee, Chong Bum; Kim, Jea-Chul; Cheon, Tae-Hun

    2015-11-01

    This study analyzed the spatial and temporal distributions of various fog types over South Korea. Six types of fogs were identified using a classification algorithm based on simple conceptual models of fog formation. The algorithm was applied to a 25-year record of meteorological observations. The most common fog types were radiation fog, prevailing at inland stations, and precipitation fog at coastal and island stations. Declining temporal trends in the frequency of fog events, ranging between 2.1 and 10.9 fog events per decade, were found at eight inland and two coastal stations. Long-term trends for each fog type show that the decrease in the frequency of fog events is mainly due to a decrease in the frequency of radiation fogs, ranging between 1.1 and 8.5 fog events per decade. To identify the potential factors related to the decrease in radiation fog events, the temporal trends in annual mean nocturnal maximal cooling rates and annual mean nocturnal specific humidity during nights with clear sky and calm winds were examined. The results show that the decrease in the frequency of radiation fog events is associated mainly with the pattern of urbanization occurring during the past two decades.

  20. Fast optical source for quantum key distribution based on semiconductor optical amplifiers.

    PubMed

    Jofre, M; Gardelein, A; Anzolin, G; Amaya, W; Capmany, J; Ursin, R; Peñate, L; Lopez, D; San Juan, J L; Carrasco, J A; Garcia, F; Torcal-Milla, F J; Sanchez-Brea, L M; Bernabeu, E; Perdigues, J M; Jennewein, T; Torres, J P; Mitchell, M W; Pruneri, V

    2011-02-28

    A novel integrated optical source capable of emitting faint pulses with different polarization states and with different intensity levels at 100 MHz has been developed. The source relies on a single laser diode followed by four semiconductor optical amplifiers and thin film polarizers, connected through a fiber network. The use of a single laser ensures high level of indistinguishability in time and spectrum of the pulses for the four different polarizations and three different levels of intensity. The applicability of the source is demonstrated in the lab through a free space quantum key distribution experiment which makes use of the decoy state BB84 protocol. We achieved a lower bound secure key rate of the order of 3.64 Mbps and a quantum bit error ratio as low as 1.14×10⁻² while the lower bound secure key rate became 187 bps for an equivalent attenuation of 35 dB. To our knowledge, this is the fastest polarization encoded QKD system which has been reported so far. The performance, reduced size, low power consumption and the fact that the components used can be space qualified make the source particularly suitable for secure satellite communication.
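    A minimal Python sketch of the asymptotic decoy-state BB84 lower bound on the secure key rate, R >= q*(Q1*(1 - H2(e1)) - Qmu*f*H2(Emu)), evaluated with a simple channel model that assumes perfect decoy-state estimation; the detector and channel parameters are generic illustrative values, not those of the source described above:

      import numpy as np

      def h2(x):
          """Binary Shannon entropy."""
          x = np.clip(x, 1e-12, 1 - 1e-12)
          return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

      def key_rate(loss_dB, mu=0.5, Y0=1e-5, e_det=0.015, f_ec=1.16, q=0.5):
          eta = 10 ** (-loss_dB / 10)                      # channel + detector transmittance
          Q_mu = Y0 + 1 - np.exp(-eta * mu)                # overall gain of signal states
          E_mu = (0.5 * Y0 + e_det * (1 - np.exp(-eta * mu))) / Q_mu
          Y1 = Y0 + eta                                    # single-photon yield (ideal decoy estimate)
          e1 = (0.5 * Y0 + e_det * eta) / Y1
          Q1 = mu * np.exp(-mu) * Y1                       # single-photon gain
          return q * (Q1 * (1 - h2(e1)) - Q_mu * f_ec * h2(E_mu))

      for loss in (10, 20, 30, 35):
          print(loss, "dB ->", max(key_rate(loss), 0.0), "secure bits per pulse")
      # Multiply bits per pulse by the pulse rate (e.g. 100 MHz) to obtain bits per second.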

  1. Fast optical source for quantum key distribution based on semiconductor optical amplifiers.

    PubMed

    Jofre, M; Gardelein, A; Anzolin, G; Amaya, W; Capmany, J; Ursin, R; Peñate, L; Lopez, D; San Juan, J L; Carrasco, J A; Garcia, F; Torcal-Milla, F J; Sanchez-Brea, L M; Bernabeu, E; Perdigues, J M; Jennewein, T; Torres, J P; Mitchell, M W; Pruneri, V

    2011-02-28

    A novel integrated optical source capable of emitting faint pulses with different polarization states and with different intensity levels at 100 MHz has been developed. The source relies on a single laser diode followed by four semiconductor optical amplifiers and thin film polarizers, connected through a fiber network. The use of a single laser ensures high level of indistinguishability in time and spectrum of the pulses for the four different polarizations and three different levels of intensity. The applicability of the source is demonstrated in the lab through a free space quantum key distribution experiment which makes use of the decoy state BB84 protocol. We achieved a lower bound secure key rate of the order of 3.64 Mbps and a quantum bit error ratio as low as 1.14×10⁻² while the lower bound secure key rate became 187 bps for an equivalent attenuation of 35 dB. To our knowledge, this is the fastest polarization encoded QKD system which has been reported so far. The performance, reduced size, low power consumption and the fact that the components used can be space qualified make the source particularly suitable for secure satellite communication. PMID:21369207

  2. Finite-key security analysis of quantum key distribution with imperfect light sources

    SciTech Connect

    Mizutani, Akihiro; Curty, Marcos; Lim, Charles Ci Wen; Imoto, Nobuyuki; Tamaki, Kiyoshi

    2015-09-09

    In recent years, the gap between theory and practice in quantum key distribution (QKD) has been significantly narrowed, particularly for QKD systems with arbitrarily flawed optical receivers. The status of QKD systems with imperfect light sources is, however, less satisfactory, in the sense that the resulting secure key rates are often overly dependent on the quality of state preparation. This is especially the case when the channel loss is high. Very recently, to overcome this limitation, Tamaki et al. proposed a QKD protocol based on the so-called 'rejected data analysis', and showed that its security in the limit of infinitely long keys is almost independent of any encoding flaw in the qubit space, and that the protocol is compatible with the decoy state method. Here, as a step towards practical QKD, we show that a similar conclusion is reached in the finite-key regime, even when the intensity of the light source is unstable. More concretely, we derive security bounds for a wide class of realistic light sources and show that the bounds are also efficient in the presence of high channel loss. Our results strongly suggest the feasibility of long-distance provably secure communication with imperfect light sources.

  3. Root distribution of Nitraria sibirica with seasonally varying water sources in a desert habitat.

    PubMed

    Zhou, Hai; Zhao, Wenzhi; Zheng, Xinjun; Li, Shoujuan

    2015-07-01

    In water-limited environments, the water sources used by desert shrubs are critical to understanding hydrological processes. Here we studied the oxygen stable isotope ratios (δ (18)O) of stem water of Nitraria sibirica as well as those of precipitation, groundwater and soil water from different layers to identify the possible water sources for the shrub. The results showed that the shrub used a mixture of soil water, recent precipitation and groundwater, with shallow lateral roots and deeply penetrating tap (sinker) roots, in different seasons. During the wet period (in spring), a large proportion of stem water in N. sibirica was from snow melt and recent precipitation, but use of these sources declined sharply with the decreasing summer rain at the site. At the height of summer, N. sibirica mainly utilized deep soil water from its tap roots, not only supporting the growth of shoots but also keeping the shallow lateral roots well-hydrated. This flexibility allowed the plants to maintain normal metabolic processes during prolonged periods when little precipitation occurs and upper soil layers become extremely dry. With the increase in precipitation that occurs as winter approaches, the percentage of water in the stem base of a plant derived from the tap roots (deep soil water or ground water) decreased again. These results suggested that the shrub's root distribution and morphology were the most important determinants of its ability to utilize different water sources, and that its adjustment to water availability was significant for acclimation to the desert habitat. PMID:26003322

  4. Finite-key security analysis of quantum key distribution with imperfect light sources

    NASA Astrophysics Data System (ADS)

    Mizutani, Akihiro; Curty, Marcos; Lim, Charles Ci Wen; Imoto, Nobuyuki; Tamaki, Kiyoshi

    2015-09-01

    In recent years, the gap between theory and practice in quantum key distribution (QKD) has been significantly narrowed, particularly for QKD systems with arbitrarily flawed optical receivers. The status of QKD systems with imperfect light sources is, however, less satisfactory, in the sense that the resulting secure key rates are often overly dependent on the quality of state preparation. This is especially the case when the channel loss is high. Very recently, to overcome this limitation, Tamaki et al. proposed a QKD protocol based on the so-called ‘rejected data analysis’, and showed that its security in the limit of infinitely long keys is almost independent of any encoding flaw in the qubit space, and that the protocol is compatible with the decoy state method. Here, as a step towards practical QKD, we show that a similar conclusion is reached in the finite-key regime, even when the intensity of the light source is unstable. More concretely, we derive security bounds for a wide class of realistic light sources and show that the bounds are also efficient in the presence of high channel loss. Our results strongly suggest the feasibility of long-distance provably secure communication with imperfect light sources.

  5. Finite-key security analysis of quantum key distribution with imperfect light sources

    DOE PAGES

    Mizutani, Akihiro; Curty, Marcos; Lim, Charles Ci Wen; Imoto, Nobuyuki; Tamaki, Kiyoshi

    2015-09-09

    In recent years, the gap between theory and practice in quantum key distribution (QKD) has been significantly narrowed, particularly for QKD systems with arbitrarily flawed optical receivers. The status of QKD systems with imperfect light sources is, however, less satisfactory, in the sense that the resulting secure key rates are often overly dependent on the quality of state preparation. This is especially the case when the channel loss is high. Very recently, to overcome this limitation, Tamaki et al. proposed a QKD protocol based on the so-called 'rejected data analysis', and showed that its security in the limit of infinitely long keys is almost independent of any encoding flaw in the qubit space, and that the protocol is compatible with the decoy state method. Here, as a step towards practical QKD, we show that a similar conclusion is reached in the finite-key regime, even when the intensity of the light source is unstable. More concretely, we derive security bounds for a wide class of realistic light sources and show that the bounds are also efficient in the presence of high channel loss. Our results strongly suggest the feasibility of long-distance provably secure communication with imperfect light sources.

  6. Affordable non-traditional source data mining for context assessment to improve distributed fusion system robustness

    NASA Astrophysics Data System (ADS)

    Bowman, Christopher; Haith, Gary; Steinberg, Alan; Morefield, Charles; Morefield, Michael

    2013-05-01

    This paper describes methods to affordably improve the robustness of distributed fusion systems by opportunistically leveraging non-traditional data sources. Adaptive methods help find relevant data, create models, and characterize the model quality. These methods can also measure the conformity of this non-traditional data with fusion system products including situation modeling and mission impact prediction. Non-traditional data can improve the quantity, quality, availability, timeliness, and diversity of the baseline fusion system sources and therefore can improve prediction and estimation accuracy and robustness at all levels of fusion. Techniques are described that automatically learn to characterize and search non-traditional contextual data to enable operators to integrate the data with the high-level fusion systems and ontologies. These techniques apply the extension of the Data Fusion & Resource Management Dual Node Network (DNN) technical architecture at Level 4. The DNN architecture supports effective assessment and management of the expanded portfolio of data sources, entities of interest, models, and algorithms including data pattern discovery and context conformity. Affordable model-driven and data-driven data mining methods to discover unknown models from non-traditional and 'big data' sources are used to automatically learn entity behaviors and correlations with fusion products [14, 15]. This paper describes our context assessment software development and demonstrates context assessment of non-traditional data against an intelligence, surveillance and reconnaissance fusion product based upon an IED points-of-interest workflow.

  7. Plans for a Collaboratively Developed Distributed Control System for the Spallation Neutron Source

    SciTech Connect

    DeVan, W.R.; Gurd, D.P.; Hammonds, J.; Lewis, S.A.; Smith, J.D.

    1999-03-29

    The Spallation Neutron Source (SNS) is an accelerator-based pulsed neutron source to be built in Oak Ridge, Tennessee. The facility has five major sections: a "front end" consisting of a 65 keV H⁻ ion source followed by a 2.5 MeV RFQ; a 1 GeV linac; a storage ring; a 1 MW spallation neutron target (upgradeable to 2 MW); the conventional facilities to support these machines and a suite of neutron scattering instruments to exploit them. These components will be designed and implemented by five collaborating institutions: Lawrence Berkeley National Laboratory (Front End); Los Alamos National Laboratory (Linac); Brookhaven National Laboratory (Storage Ring); Argonne National Laboratory (Instruments); and Oak Ridge National Laboratory (Neutron Source and Conventional Facilities). It is proposed to implement a fully integrated control system for all aspects of this complex. The system will be developed collaboratively, with some degree of local autonomy for distributed systems, but centralized accountability. Technical integration will be based upon the widely-used EPICS control system toolkit, and a complete set of hardware and software standards. The scope of the integrated control system includes site-wide timing and synchronization, networking and machine protection. This paper discusses the technical and organizational issues of planning a large control system to be developed collaboratively at five different institutions, the approaches being taken to address those issues, as well as some of the particular technical challenges for the SNS control system.

  8. An open source platform for multi-scale spatially distributed simulations of microbial ecosystems

    SciTech Connect

    Segre, Daniel

    2014-08-14

    The goal of this project was to develop a tool for facilitating simulation, validation and discovery of multiscale dynamical processes in microbial ecosystems. This led to the development of an open-source software platform for Computation Of Microbial Ecosystems in Time and Space (COMETS). COMETS performs spatially distributed time-dependent flux balance based simulations of microbial metabolism. Our plan involved building the software platform itself, calibrating and testing it through comparison with experimental data, and integrating simulations and experiments to address important open questions on the evolution and dynamics of cross-feeding interactions between microbial species.
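
    COMETS couples genome-scale flux balance analysis to diffusion on a spatial lattice; the toy sketch below keeps only the structure of such a simulation, replacing the flux-balance step with Monod growth on a single nutrient. All parameter values and the growth law are illustrative stand-ins, not the COMETS model.

```python
import numpy as np

# Toy spatially distributed microbial simulation: one species, one nutrient,
# Monod growth standing in for the flux-balance step that COMETS performs.
nx, ny, dt, steps = 30, 30, 0.1, 200
D, vmax, Km, yield_coeff = 0.05, 1.0, 0.5, 0.3   # illustrative parameters

nutrient = np.full((nx, ny), 1.0)
biomass = np.zeros((nx, ny))
biomass[nx // 2, ny // 2] = 0.01                 # inoculate the grid centre

def laplacian(f):
    # periodic-boundary 5-point Laplacian (grid spacing of 1)
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f)

for _ in range(steps):
    uptake = vmax * nutrient / (Km + nutrient) * biomass
    uptake = np.minimum(uptake, nutrient / dt)   # cannot consume more than available
    nutrient += dt * (D * laplacian(nutrient) - uptake)
    biomass += dt * yield_coeff * uptake

print("total biomass after simulation:", biomass.sum())
```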

  9. Rapid, high-order accurate calculation of flows due to free source or vortex distributions

    NASA Technical Reports Server (NTRS)

    Halsey, D.

    1981-01-01

    Fast Fourier transform (FFT) techniques are applied to the problem of finding the flow due to source or vortex distributions in the field outside an airfoil or other two-dimensional body. Either the complex potential or the complex velocity may be obtained to a high order of accuracy, with computational effort similar to that required by second-order fast Poisson solvers. These techniques are applicable to general flow problems with compressibility and rotation. An example is given of their use for inviscid compressible flow.
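
    The underlying field being evaluated is the complex velocity induced by free point sources and vortices; a direct O(N·M) summation, as sketched below, is the baseline that the FFT technique accelerates and refines to high order. The example geometry is arbitrary.

```python
import numpy as np

def complex_velocity(z_eval, z_src, strength, circulation):
    """Complex velocity u - i*v at points z_eval induced by point sources of
    strength m_k and point vortices of circulation Gamma_k located at z_src.
    Direct summation; the paper's FFT approach accelerates this evaluation."""
    w = np.zeros_like(z_eval, dtype=complex)
    for zk, mk, gk in zip(z_src, strength, circulation):
        w += (mk - 1j * gk) / (2.0 * np.pi * (z_eval - zk))
    return w

# One source and one vortex, evaluated on a small grid of field points
z_src = np.array([0.0 + 0.0j, 1.0 + 0.0j])
m = np.array([1.0, 0.0])
gamma = np.array([0.0, 2.0])
x, y = np.meshgrid(np.linspace(-1.5, 1.5, 4), np.linspace(-1.5, 1.5, 4))
print(complex_velocity(x + 1j * y, z_src, m, gamma))
```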

  10. Electric Field Distribution Excited by Indoor Radio Source for Exposure Compliance Assessment

    NASA Astrophysics Data System (ADS)

    Higashiyama, Junji; Tarusawa, Yoshiaki

    Correction factors are presented for estimating the RF electromagnetic field strength in the compliance assessment of human exposure from an indoor RF radio source in the frequency range from 800 MHz to 3.5 GHz. The correction factors are derived from the increase in the spatial average electric field strength distribution, which is dependent on the building materials. The spatial average electric field strength is calculated using relative complex dielectric constants of building materials. The relative complex dielectric constant is obtained through measurement of the transmission and reflection losses for eleven kinds of building materials used in business office buildings and single family dwellings.

  11. A multiple step random walk Monte Carlo method for heat conduction involving distributed heat sources

    NASA Astrophysics Data System (ADS)

    Naraghi, M. H. N.; Chung, B. T. F.

    1982-06-01

    A multiple step fixed random walk Monte Carlo method for solving heat conduction in solids with distributed internal heat sources is developed. In this method, the probability that a walker reaches a point a few steps away is calculated analytically and is stored in the computer. Instead of moving to the immediate neighboring point the walker is allowed to jump several steps further. The present multiple step random walk technique can be applied to both conventional Monte Carlo and the Exodus methods. Numerical results indicate that the present method compares well with finite difference solutions while the computation speed is much faster than that of single step Exodus and conventional Monte Carlo methods.
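
    For reference, the conventional single-step fixed random walk that the multiple-step method accelerates can be sketched for the 2-D Poisson equation with a distributed source: at each visited node the walker accumulates h²S/(4k) and, on reaching the boundary, scores the boundary temperature. Grid size, source strength and boundary values below are illustrative.

```python
import random

# Single-step fixed random walk for the 2-D Poisson equation
#   d2T/dx2 + d2T/dy2 = -S/k
# on an n x n grid with Dirichlet boundaries; the paper's multiple-step method
# precomputes probabilities of landing several nodes away to speed this up.
n, h, k = 21, 1.0 / 20, 1.0
S = 100.0                                   # uniform volumetric source (illustrative)
T_boundary = 0.0

def temperature(i0, j0, walks=10000):
    total = 0.0
    for _ in range(walks):
        i, j, score = i0, j0, 0.0
        while 0 < i < n - 1 and 0 < j < n - 1:
            score += h * h * S / (4.0 * k)  # source pickup at every visited node
            di, dj = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            i, j = i + di, j + dj
        total += score + T_boundary
    return total / walks

print(temperature(n // 2, n // 2))          # estimate at the plate centre
```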

  12. An inverse modeling method to assess the source term of the Fukushima nuclear power plant accident using gamma dose rate observations

    NASA Astrophysics Data System (ADS)

    Saunier, O.; Mathieu, A.; Didier, D.; Tombette, M.; Quélo, D.; Winiarek, V.; Bocquet, M.

    2013-06-01

    The Chernobyl nuclear accident and more recently the Fukushima accident highlighted that the largest source of error in consequence assessment is the source term, including the time evolution of the release rate and its distribution between radioisotopes. Inverse modeling methods, which combine environmental measurements and atmospheric dispersion models, have proven efficient in assessing the source term of an accidental release (Gudiksen, 1989; Krysta and Bocquet, 2007; Stohl et al., 2012a; Winiarek et al., 2012). Most existing approaches are designed to use air sampling measurements (Winiarek et al., 2012) and some of them also use deposition measurements (Stohl et al., 2012a; Winiarek et al., 2013) but none of them uses dose rate measurements. However, it is the most widespread measurement system, and in the event of a nuclear accident, these data constitute the main source of measurements of the plume and radioactive fallout during releases. This paper proposes a method to use dose rate measurements as part of an inverse modeling approach to assess source terms. The method is proven efficient and reliable when applied to the accident at the Fukushima Daiichi nuclear power plant (FD-NPP). The emissions for the eight main isotopes 133Xe, 134Cs, 136Cs, 137Cs, 137mBa, 131I, 132I and 132Te have been assessed. Accordingly, 103 PBq of 131I, 35.5 PBq of 132I, 15.5 PBq of 137Cs and 12 100 PBq of noble gases were released. The events at FD-NPP (such as venting, explosions, etc.) known to have caused atmospheric releases are well identified in the retrieved source term. The estimated source term is validated by comparing simulations of atmospheric dispersion and deposition with environmental observations. Across all monitoring locations, model-measurement agreement is good: 80% of the simulated dose rates are within a factor of 2 of the observed values. Changes in dose rates over time have overall been properly reconstructed, especially
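
    The inversion itself combines a dispersion model with the observations; a greatly simplified linear variant of that step, relating observations y to a time-resolved release vector through a source-receptor matrix H and solving a Tikhonov-regularized least-squares problem, is sketched below with synthetic matrices. It illustrates the generic structure only, not the authors' method or data.

```python
import numpy as np

# Simplified linear source-term inversion: y = H @ sigma + noise, where H is a
# synthetic source-receptor (dispersion/dose-conversion) matrix and sigma is the
# time-resolved release rate. Tikhonov regularization stabilizes the estimate.
rng = np.random.default_rng(0)
n_obs, n_src = 120, 40
H = rng.random((n_obs, n_src)) * np.exp(-3.0 * rng.random((n_obs, n_src)))
sigma_true = np.zeros(n_src)
sigma_true[10:15] = 5.0                        # a short release episode
y = H @ sigma_true + 0.05 * rng.standard_normal(n_obs)

lam = 1e-2                                     # regularization weight (tuning choice)
sigma_hat = np.linalg.solve(H.T @ H + lam * np.eye(n_src), H.T @ y)
sigma_hat = np.clip(sigma_hat, 0.0, None)      # release rates cannot be negative

print("recovered episode:", sigma_hat[8:17].round(2))
```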

  13. Spatial distribution of old and emerging flame retardants in Chinese forest soils: sources, trends and processes.

    PubMed

    Zheng, Qian; Nizzetto, Luca; Li, Jun; Mulder, Marie D; Sáňka, Ondřej; Lammel, Gerhard; Bing, Haijian; Liu, Xin; Jiang, Yishan; Luo, Chunling; Zhang, Gan

    2015-03-01

    The levels and distribution of polybrominated diphenylethers (PBDEs), novel brominated flame retardants (NBFRs) and Dechlorane Plus (DP) in soils and their dependence on environmental and anthropological factors were investigated in 159 soil samples from 30 background forested mountain sites across China. Decabromodiphenylethane (DBDPE) was the most abundant flame retardant (25-18,000 pg g(-1) and 5-13,000 pg g(-1) in O-horizon and A-horizon, respectively), followed by BDE 209 (nd-5900 pg g(-1) and nd-2400 pg g(-1) in O-horizon and A-horizon, respectively). FR distributions were primarily controlled by source distribution. The distributions of most phasing-out PBDEs, DP isomers and TBPH were in fact correlated to a population density-based index used as a proxy for areas with elevated usage and waste of FR-containing products. High concentrations of some NBFRs were, however, observed in industrialized regions and FR manufacturing plants. Strongly positive correlations were observed between PBDEs and their replacement products, suggesting similar emission patterns and environmental behavior. Exposure of mineral subsoils depended on precipitation driving leaching of FRs into the soil core. This was especially evident for some emerging BFRs (TBE, TBPH, TBB, etc.), possibly indicating potential for diffuse groundwater contamination.

  14. The occurrence and distribution of a group of organic micropollutants in Mexico City's water sources.

    PubMed

    Félix-Cañedo, Thania E; Durán-Álvarez, Juan C; Jiménez-Cisneros, Blanca

    2013-06-01

    The occurrence and distribution of a group of 17 organic micropollutants in surface and groundwater sources from Mexico City was determined. Water samples were taken from 7 wells, 4 dams and 15 tanks where surface and groundwater are mixed and stored before distribution. Results evidenced the occurrence of seven of the target compounds in groundwater: salicylic acid, diclofenac, di-2-ethylhexylphthalate (DEHP), butylbenzylphthalate (BBP), triclosan, bisphenol A (BPA) and 4-nonylphenol (4-NP). In surface water, 11 target pollutants were detected: the same compounds found in groundwater, as well as naproxen, ibuprofen, ketoprofen and gemfibrozil. In groundwater, concentration ranges of salicylic acid, 4-NP and DEHP, the most frequently found compounds, were 1-464, 1-47 and 19-232 ng/L, respectively; while in surface water, these ranges were 29-309, 89-655 and 75-2,282 ng/L, respectively. Eleven target compounds were detected in mixed water. Concentrations in mixed water were higher than those determined in groundwater but lower than those detected in surface water. Unlike in ground and surface water, the pesticide 2,4-D was found in mixed water, indicating that some pollutants can reach areas where they are not originally present in the local water sources. Concentrations of the organic micropollutants found in this study were similar to or lower than those reported for water sources in developed countries. This study provides information that enriches the state of the art on the occurrence of organic micropollutants in water sources worldwide, notably in megacities of developing countries. PMID:23542484

  15. Distribution and sources of particulate organic matter in the Indian monsoonal estuaries during monsoon

    NASA Astrophysics Data System (ADS)

    Sarma, V. V. S. S.; Krishna, M. S.; Prasad, V. R.; Kumar, B. S. K.; Naidu, S. A.; Rao, G. D.; Viswanadham, R.; Sridevi, T.; Kumar, P. P.; Reddy, N. P. C.

    2014-11-01

    The distribution and sources of particulate organic carbon (POC) and nitrogen (PN) in 27 Indian estuaries were examined during the monsoon using the content and isotopic composition of carbon and nitrogen. Higher phytoplankton biomass was noticed in estuaries with deeper photic zone than other estuaries receiving higher suspended matter. The δ13CPOC and δ15NPN data suggest that relatively higher δ13CPOC (-27.9 to -22.6‰) and lower δ15NPN (0.7 to 5.8‰) were noticed in the estuaries located in the northern India, north of 16°N, and lower δ13CPOC (-31.4 to -28.2‰) and higher δ15NPN (5 to 10.3‰) in the estuaries located in the southern India. This is associated with higher Chl a in the northern than southern estuaries suggesting that in situ production contributed significantly to the POC pool in the former, whereas terrestrial sources are important in the latter estuaries. The spatial distribution pattern of δ15NPN is consistent with fertilizer consumption in the Indian subcontinent, which is twice as much in the northern India as in the south whereas δ13CPOC suggests that in situ production is a dominant source in the southern and terrestrial sources are important in the northern estuaries. Based on the Stable Isotope Analysis in R model, 40-90% (70-90%) of organic matter is contributed by C3 plants (freshwater algae) in the estuaries located in the northern (southern) India.
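
    The source apportionment uses the Bayesian SIAR model; its simplest limiting case is a two-end-member mixing balance on δ13C, shown below with hypothetical end-member values for terrestrial (C3) and freshwater algal organic matter.

```python
def algal_fraction(d13c_sample, d13c_terrestrial=-28.0, d13c_algal=-22.0):
    """Two-end-member mixing estimate of the in situ (algal) fraction of POC.
    End-member delta13C values are illustrative assumptions; the study itself
    uses the Bayesian SIAR model with additional sources."""
    f = (d13c_sample - d13c_terrestrial) / (d13c_algal - d13c_terrestrial)
    return min(max(f, 0.0), 1.0)

print(algal_fraction(-23.5))   # ~0.75, i.e. mostly in situ production
```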

  16. On Road Study of Colorado Front Range Greenhouse Gases Distribution and Sources

    NASA Astrophysics Data System (ADS)

    Petron, G.; Hirsch, A.; Trainer, M. K.; Karion, A.; Kofler, J.; Sweeney, C.; Andrews, A.; Kolodzey, W.; Miller, B. R.; Miller, L.; Montzka, S. A.; Kitzis, D. R.; Patrick, L.; Frost, G. J.; Ryerson, T. B.; Robers, J. M.; Tans, P.

    2008-12-01

    The Global Monitoring Division and Chemical Sciences Division of the NOAA Earth System Research Laboratory teamed up over the summer of 2008 to experiment with a new measurement strategy to characterize greenhouse gas distributions and sources in the Colorado Front Range. Combining expertise in greenhouse gas measurements and in local- to regional-scale air quality intensive campaigns, we have built the 'Hybrid Lab'. A continuous CO2 and CH4 cavity ring down spectroscopic analyzer (Picarro, Inc.), a CO gas-filter correlation instrument (Thermo Environmental, Inc.) and a continuous UV absorption ozone monitor (2B Technologies, Inc., model 202SC) have been installed securely onboard a 2006 Toyota Prius Hybrid vehicle with an inlet bringing in outside air from a few meters above the ground. To better characterize point and distributed sources, air samples were taken with a Portable Flask Package (PFP) for later multiple species analysis in the lab. A GPS unit hooked up to the ozone analyzer and another one installed on the PFP kept track of our location, allowing us to map measured concentrations on the driving route using Google Earth. The Hybrid Lab went out for several drives in the vicinity of the NOAA Boulder Atmospheric Observatory (BAO) tall tower located in Erie, CO, covering areas from Boulder, Denver, Longmont, Fort Collins and Greeley. Enhancements in CO2, CO and destruction of ozone mainly reflect emissions from traffic. Methane enhancements, however, are clearly correlated with nearby point sources (landfill, feedlot, natural gas compressor ...) or with larger scale air masses advected from NE Colorado, where oil and gas drilling operations are widespread. The multiple species analysis (hydrocarbons, CFCs, HFCs) of the air samples collected along the way brings insightful information about the methane sources at play. We will present results of the analysis and interpretation of the Hybrid Lab Front Range Study and conclude with perspectives

  17. Quantitative assessment of seismic source performance: Feasibility of small and affordable seismic sources for long term monitoring at the Ketzin CO2 storage site, Germany

    NASA Astrophysics Data System (ADS)

    Sopher, Daniel; Juhlin, Christopher; Huang, Fei; Ivandic, Monika; Lueth, Stefan

    2014-08-01

    We apply a range of quantitative pre-stack analysis techniques to assess the feasibility of using smaller and cheaper seismic sources than those currently used at the Ketzin CO2 storage site. Results from two smaller land sources are presented alongside those from a larger, more powerful source typically utilized for seismic acquisition at the Ketzin site. The geological target for the study is the Triassic Stuttgart Formation, which contains a saline aquifer currently used for CO2 storage. The reservoir lies at a depth of approximately 630 m, equivalent to a travel time of 500 ms along the study profile. The three sources discussed in the study are the Vibsist 3000, Vibsist 500 (using industrial hydraulic driven concrete breaking hammers) and a drop hammer source. Data were collected for the comparison using the three sources in 2011, 2012 and 2013 along a 984 m long line with 24 m receiver spacing and 12 m shot spacing. Initially, a quantitative analysis of the noise levels across the three surveys is performed. The raw shot gathers are then analyzed quantitatively to investigate the relative energy output, signal to noise ratio, penetration depth, repeatability and frequency content for the different sources. The performance of the sources is also assessed based on stacked seismic sections. Based on the results from this study it appears that both of the smaller sources are capable of producing good images of the target reservoir and can both be considered suitable as lower cost, less invasive sources for use at the Ketzin site or other shallow CO2 storage projects. Finally, the results from the various pre-stack analysis techniques are discussed in terms of how representative they are of the final stacked sections.
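
    Two of the quantitative measures involved in such source comparisons, signal-to-noise ratio and trace-to-trace repeatability, are commonly computed as in the sketch below; the window choices and the NRMS definition follow general practice rather than this particular study.

```python
import numpy as np

def snr_db(trace, signal_window, noise_window):
    """SNR in dB from RMS amplitudes in a signal window and a pre-first-break
    noise window (window indices are user choices)."""
    rms = lambda x: np.sqrt(np.mean(np.asarray(x, float) ** 2))
    return 20.0 * np.log10(rms(trace[slice(*signal_window)]) /
                           rms(trace[slice(*noise_window)]))

def nrms_percent(a, b):
    """Normalized RMS difference between two repeated traces (repeatability)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    return 200.0 * rms(a - b) / (rms(a) + rms(b))

t = np.linspace(0.0, 1.0, 1000)
wavelet = np.exp(-((t - 0.5) / 0.02) ** 2)
shot1 = wavelet + 0.05 * np.random.randn(t.size)
shot2 = wavelet + 0.05 * np.random.randn(t.size)
print(snr_db(shot1, (450, 550), (0, 100)), nrms_percent(shot1, shot2))
```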

  18. Development of Approach for Long-Term Management of Disused Sealed Radioactive Sources - 13630

    SciTech Connect

    Kinker, M.; Reber, E.; Mansoux, H.; Bruno, G.

    2013-07-01

    Radioactive sources are used widely throughout the world in a variety of medical, industrial, research and military applications. When such radioactive sources are no longer used and are not intended to be used for the practice for which an authorization was granted, they are designated as 'disused sources'. Whether appropriate controls are in place during the useful life of a source or not, the end of this useful life is often a turning point after which it is more difficult to ensure the safety and security of the source over time. For various reasons, many disused sources cannot be returned to the manufacturer or the supplier for reuse or recycling. When these attempts fail, disused sources should be declared as radioactive waste and should be managed as such, in compliance with relevant international legal instruments and safety standards. However, disposal remains an unresolved issue in many countries, due in part to limited public acceptance, insufficient funding, and a lack of practical examples of strategies for determining suitable disposal options. As a result, disused sources are often stored indefinitely at the facilities where they were once used. In order to prevent disused sources from becoming orphan sources, each country must develop and implement a comprehensive waste management strategy that includes disposal of disused sources. The International Atomic Energy Agency (IAEA) fosters international cooperation between countries and encourages the development of a harmonized 'cradle to grave' approach to managing sources consistent with international legal instruments, IAEA safety standards, and international good practices. This 'cradle to grave' approach requires the development of a national policy and implementing strategy, an adequate legal and regulatory framework, and adequate resources and infrastructure that cover the entire life cycle, from production and use of radioactive sources to disposal. (authors)

  19. Foundations for the generalization of the Godunov method to hyperbolic systems with stiff relaxation source terms

    NASA Astrophysics Data System (ADS)

    Hittinger, Jeffrey Alan

    2000-10-01

    Hyperbolic systems of partial differential equations with relaxation source terms arise in the modeling of many physical problems where internal processes return non-equilibrium disturbances to equilibrium. A challenge in numerically approximating such systems is that the relaxation may take place on time scales much shorter than the time scales of the flow evolution. In such cases, it is desirable for numerical methods to accurately approximate the solution even if the relaxation scales are underresolved. High-resolution Godunov methods are very successful shock-capturing algorithms for the solution of hyperbolic systems of conservation laws. It is desirable to extend this methodology to properly preserve the asymptotic behavior of hyperbolic-relaxation systems such that underresolved solutions can be accurately approximated. Godunov schemes solve or approximate Riemann problems at cell interfaces to estimate numerical fluxes that respect the physics, but, due to coupling between relaxation and wave propagation in hyperbolic-relaxation systems, the Riemann problem becomes much more complicated and its exact solution is no longer feasible. Evidence presented here suggests that to obtain robust, non-oscillatory, upwind discretizations that accurately compute underresolved solutions, aspects of this physical coupling must be included in the numerical flux calculations. A simple model system is extensively analyzed using Fourier and asymptotic analysis on both the system and its integral solution for both smooth and discontinuous initial conditions. Specifically, the early- and late-time asymptotic behaviors of the Riemann problem are determined, and the results are generalized to m x m constant-coefficient systems. A nonlinear physical example, a set of eleven macroscopic transport equations for a diatomic gas, is constructed from the Boltzmann equation and is investigated to verify the applicability of the linear analysis. Current numerical methods are reviewed, and
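
    A minimal numerical setting for these issues is a scalar advection equation with a stiff relaxation source, u_t + a u_x = -(u - u_eq)/tau. The sketch below uses a first-order upwind flux with Strang splitting and an exact source update; it represents the kind of split baseline whose behavior in the underresolved limit (time step much larger than tau) motivates the thesis, not the coupled flux evaluation it argues for. Parameters are illustrative.

```python
import numpy as np

# Upwind finite-volume scheme with Strang splitting for
#   u_t + a u_x = -(u - u_eq) / tau
# The relaxation half-steps are integrated exactly, so the update remains
# stable even when tau << dt (the stiff, underresolved regime).
a, u_eq, tau = 1.0, 0.0, 1e-4
nx, L, cfl, t_end = 200, 1.0, 0.8, 0.3
dx = L / nx
dt = cfl * dx / a
x = (np.arange(nx) + 0.5) * dx
u = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)          # square pulse initial data

def relax(u, half_dt):
    return u_eq + (u - u_eq) * np.exp(-half_dt / tau)  # exact ODE solution

t = 0.0
while t < t_end:
    u = relax(u, 0.5 * dt)
    u = u - a * dt / dx * (u - np.roll(u, 1))          # upwind advection (a > 0)
    u = relax(u, 0.5 * dt)
    t += dt

print("max |u| after relaxation-dominated evolution:", np.abs(u).max())
```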

  20. Nonradioactive Environmental Emissions Chemical Source Term for the Double Shell Tank (DST) Vapor Space During Waste Retrieval Operations

    SciTech Connect

    MAY, T.H.

    2000-04-21

    A nonradioactive chemical vapor space source term for tanks on the Phase 1 and the extended Phase 1 delivery, storage, and disposal mission was determined. Operations modeled included mixer pump operation and DST waste transfers. Concentrations of ammonia, specific volatile organic compounds, and quantitative volumes of aerosols were estimated.

  1. Source attribution, physicochemical properties and spatial distribution of wet deposited mercury to the Ohio River valley

    NASA Astrophysics Data System (ADS)

    White, Emily Mae

    Mercury (Hg) is a bioaccumulative neurotoxin that is emitted from anthropogenic sources through fossil fuel combustion. The spatial scale of atmospheric transport prior to deposition is dependent on the chemical and physical form of Hg emissions, and has yet to be quantitatively defined. A five-year comprehensive Hg monitoring and source apportionment study was conducted in Steubenville, Ohio to investigate atmospheric Hg deposition to the highly industrialized Ohio River Valley region. Long-term event-precipitation measurements revealed a significant 30% to three-fold enrichment of Hg concentrations and total Hg deposition flux to the Steubenville site over other Great Lakes regional sites. Multivariate receptor models attributed ~70% of Hg wet deposition to local coal combustion sources. While local stagnant atmospheric conditions led to moderately high volume-weighted mean Hg concentrations and the majority of Hg wet deposition flux, regional transport from the Chicago/Gary and Detroit/Windsor urban areas also led to elevated precipitation Hg concentrations, but did not contribute significantly to the overall Hg deposition. The degree of local source influence was established during a summertime field intensive study in which a local scale network of concurrently collected rain samples revealed that 42% of Hg wet deposition measured less than one km from the base of coal fired utilities could be attributed to the adjacent source, corresponding to 170% Hg concentration enhancement over regionally representative precipitation collected concurrently. In addition, 69 ± 37% of the Hg collected in rain was in a soluble form, entering the precipitation as reactive gas phase or fine particle associated Hg. The Hg scavenging coefficient (rate of concentration reduction throughout a single precipitation event) was particularly low when compared to other trace elements. Furthermore, when compared to an upwind but non-locally source impacted site, the scavenging

  2. Long term response stability of a well-type ionization chamber used in calibration of high dose rate brachytherapy sources

    PubMed Central

    Vandana, S.; Sharma, S. D.

    2010-01-01

    Well-type ionization chamber is often used to measure strength of brachytherapy sources. This study aims to check long term response stability of High Dose Rate (HDR)-1000 Plus well-type ionization chamber in terms of reference air kerma rate (RAKR) of a reference 137Cs brachytherapy source and recommend an optimum frequency of recalibration. An HDR-1000 Plus well-type ionization chamber, a reference 137Cs brachytherapy source (CDCSJ5), and a MAX-4000 electrometer were used in this study. The HDR-1000 Plus well-type chamber was calibrated in terms of reference air kerma rate by the Standards Laboratory of the International Atomic Energy Agency (IAEA), Vienna. The response of the chamber was verified at regular intervals over a period of eight years using the reference 137Cs source. All required correction factors were applied in the calculation of the RAKR of the 137Cs source. This study reveals that the response of the HDR-1000 Plus well-type chamber was well within ±0.5% for about three years after calibration/recalibration. However, it shows deviations larger than ±0.5% after three years of calibration/recalibration and the maximum variation in response of the chamber during an eight year period was 1.71%. The optimum frequency of recalibration of a high dose rate well-type chamber should be three years. PMID:20589119

  3. Long term response stability of a well-type ionization chamber used in calibration of high dose rate brachytherapy sources.

    PubMed

    Vandana, S; Sharma, S D

    2010-04-01

    Well-type ionization chamber is often used to measure strength of brachytherapy sources. This study aims to check long term response stability of High Dose Rate (HDR)-1000 Plus well-type ionization chamber in terms of reference air kerma rate (RAKR) of a reference (137)Cs brachytherapy source and recommend an optimum frequency of recalibration. An HDR-1000 Plus well-type ionization chamber, a reference (137)Cs brachytherapy source (CDCSJ5), and a MAX-4000 electrometer were used in this study. The HDR-1000 Plus well-type chamber was calibrated in terms of reference air kerma rate by the Standards Laboratory of the International Atomic Energy Agency (IAEA), Vienna. The response of the chamber was verified at regular intervals over a period of eight years using the reference (137)Cs source. All required correction factors were applied in the calculation of the RAKR of the (137)Cs source. This study reveals that the response of the HDR-1000 Plus well-type chamber was well within +/-0.5% for about three years after calibration/recalibration. However, it shows deviations larger than +/-0.5% after three years of calibration/recalibration and the maximum variation in response of the chamber during an eight year period was 1.71%. The optimum frequency of recalibration of a high dose rate well-type chamber should be three years.

  4. A review of sources, multimedia distribution and health risks of perfluoroalkyl acids (PFAAs) in China.

    PubMed

    Wang, Tieyu; Wang, Pei; Meng, Jing; Liu, Shijie; Lu, Yonglong; Khim, Jong Seong; Giesy, John P

    2015-06-01

    Perfluoroalkyl acids (PFAAs) have been recognized as emerging pollutants because of their ubiquitous occurrence in the environment, biota and humans. In order to investigate their sources, fate and environmental effects, a great number of surveys have been carried out over the past several years. In the present review, we summarized the status of sources and emission, concentration, distribution and risks of PFAAs in China. Concentrations of PFAAs, especially perfluorooctane sulfonic acid (PFOS) and perfluorooctanoic acid (PFOA) in various environmental media including water, sediment, soil, rain, snow and organisms, as well as human tissues are summarized based on the available data. Concentrations of PFAAs in aquatic systems are higher in relatively more industrialized and urbanized areas than those from the less populated and remote regions in China, indicating that their emission and distribution are closely related to regional urbanization and industrialization. PFAAs and related products have been widely used over the past several decades, which have brought about high concentrations detected in environmental matrixes, biota and even local residents. Ecological risk assessment of PFAAs is still less developed in China. Most existing studies compared concentrations of PFAAs to guideline values derived for single species to evaluate the risk. In order to reveal the transport, partitioning and degradation of PFAAs in the environment, further studies on their behavior, fate, bioaccumulation and adverse effects in different trophic levels should be conducted.

  5. A review of sources, multimedia distribution and health risks of perfluoroalkyl acids (PFAAs) in China.

    PubMed

    Wang, Tieyu; Wang, Pei; Meng, Jing; Liu, Shijie; Lu, Yonglong; Khim, Jong Seong; Giesy, John P

    2015-06-01

    Perfluoroalkyl acids (PFAAs) have been recognized as emerging pollutants because of their ubiquitous occurrence in the environment, biota and humans. In order to investigate their sources, fate and environmental effects, a great number of surveys have been carried out over the past several years. In the present review, we summarized the status of sources and emission, concentration, distribution and risks of PFAAs in China. Concentrations of PFAAs, especially perfluorooctane sulfonic acid (PFOS) and perfluorooctanoic acid (PFOA) in various environmental media including water, sediment, soil, rain, snow and organisms, as well as human tissues are summarized based on the available data. Concentrations of PFAAs in aquatic systems are higher in relatively more industrialized and urbanized areas than those from the less populated and remote regions in China, indicating that their emission and distribution are closely related to regional urbanization and industrialization. PFAAs and related products have been widely used over the past several decades, which have brought about high concentrations detected in environmental matrixes, biota and even local residents. Ecological risk assessment of PFAAs is still less developed in China. Most existing studies compared concentrations of PFAAs to guideline values derived for single species to evaluate the risk. In order to reveal the transport, partitioning and degradation of PFAAs in the environment, further studies on their behavior, fate, bioaccumulation and adverse effects in different trophic levels should be conducted. PMID:25262946

  6. [Distribution and sources of oxygen and sulfur heterocyclic aromatic compounds in surface soil of Beijing, China].

    PubMed

    He, Guang-Xiu; Zhang, Zhi-Huan; Peng, Xu-Yang; Zhu, Lei; Lu, Ling

    2011-11-01

    62 surface soil samples were collected from different environmental function zones in Beijing. Sulfur and oxygen heterocyclic aromatic compounds were detected by GC/MS. The objectives of this study were to identify the composition and distribution of these compounds, and discuss their sources. The results showed that the oxygen and sulfur heterocyclic aromatic compounds in the surface soils mainly contained dibenzofuran, methyl- and C2-dibenzofuran series, dibenzothiophene, methyl-, C2- and C3-dibenzothiophene series and benzonaphthothiophene series. The composition and distribution of the oxygen and sulfur heterocyclic aromatic compounds in the surface soil samples varied in the different environmental function zones, of which some factories and the urban area received oxygen and sulfur heterocyclic aromatic compounds most seriously. In Beijing, the degree of contamination by oxygen and sulfur heterocyclic aromatic compounds in the north surface soil was higher than that in the south. There were preferable linear correlations between the concentration of dibenzofuran series and fluorene series, as well as the concentration of dibenzothiophene series and dibenzofuran series. The oxygen and sulfur heterocyclic aromatic compounds in the surface soil were mainly derived from combustion products of oil and coal and direct input of mineral oil, etc. There were some variations in pollution sources of different environmental function zones.

  7. Pu and 137Cs in the Yangtze River estuary sediments: distribution and source identification.

    PubMed

    Liu, Zhiyong; Zheng, Jian; Pan, Shaoming; Dong, Wei; Yamada, Masatoshi; Aono, Tatsuo; Guo, Qiuju

    2011-03-01

    Pu isotopes and (137)Cs were analyzed using sector field ICP-MS and γ spectrometry, respectively, in surface sediment and core sediment samples from the Yangtze River estuary. (239+240)Pu activity and (240)Pu/(239)Pu atom ratios (>0.18) show a generally increasing trend from land to sea and from north to south in the estuary. This spatial distribution pattern indicates that the Pacific Proving Grounds (PPG) source Pu transported by ocean currents was intensively scavenged into the suspended sediment under favorable conditions, and mixed with riverine sediment as the water circulated in the estuary. This process is the main control for the distribution of Pu in the estuary. Moreover, Pu is an important indicator for monitoring the changes of environmental radioactivity in the estuary as the river basin is currently the site of extensive human activities and the sea level is rising because of global climate changes. For core sediment samples the maximum peak of (239+240)Pu activity was observed at a depth of 172 cm. The sedimentation rate was estimated on the basis of the Pu maximum deposition peak in 1963-1964 to be 4.1 cm/a. The contributions of the PPG close-in fallout Pu (44%) and the riverine Pu (45%) in Yangtze River estuary sediments are equally important for the total Pu deposition in the estuary, which challenges the current hypothesis that the riverine Pu input was the major source of the Pu budget in this area.
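
    The quoted sedimentation rate follows from simple arithmetic on the depth of the 1963-1964 global fallout maximum; assuming a coring date around 2005-2006 (not stated in the abstract):

```python
peak_depth_cm = 172.0      # depth of the 1963-1964 Pu fallout maximum
peak_year = 1963.5
core_year = 2005.5         # assumed sampling date (illustrative)
rate = peak_depth_cm / (core_year - peak_year)
print(round(rate, 1), "cm/a")   # ~4.1 cm/a, matching the reported value
```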

  8. SOILD: A computer model for calculating the effective dose equivalent from external exposure to distributed gamma sources in soil

    SciTech Connect

    Chen, S.Y.; LePoire, D.; Yu, C. ); Schafetz, S. ); Mehta, P. )

    1991-01-01

    The SOILD computer model was developed for calculating the effective dose equivalent from external exposure to distributed gamma sources in soil. It is designed to assess external doses under various exposure scenarios that may be encountered in environmental restoration programs. The model's four major functional features address (1) dose versus source depth in soil, (2) shielding of clean cover soil, (3) area of contamination, and (4) nonuniform distribution of sources. The model is also capable of adjusting doses when there are variations in soil densities for both source and cover soils. The model is supported by a database of approximately 500 radionuclides. 4 refs.
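
    The dose-versus-source-depth behavior such a model tabulates can be illustrated with a bare point-kernel integration of the uncollided photon flux over a contaminated soil slab. The sketch omits buildup factors, energy dependence and flux-to-dose conversion, and its attenuation coefficients and geometry are placeholders, so it shows only the structure of the calculation, not the SOILD methodology.

```python
import numpy as np

# Point-kernel integration of the uncollided gamma flux at a receptor 1 m above
# ground from a uniformly contaminated soil slab. Buildup factors and dose
# conversion are deliberately omitted; all constants are placeholders.
S_v = 1.0            # volumetric source strength (photons/cm^3/s), placeholder
mu_soil = 0.12       # linear attenuation coefficient of soil (1/cm), placeholder
mu_air = 1e-4        # linear attenuation coefficient of air (1/cm), placeholder
receptor = np.array([0.0, 0.0, 100.0])        # 1 m above the ground surface (cm)

xs = np.arange(-200.0, 200.0, 10.0) + 5.0     # 4 m x 4 m source area (cell centres)
zs = np.arange(-15.0, 0.0, 1.0) + 0.5         # 15 cm deep contaminated layer
dV = 10.0 * 10.0 * 1.0                        # cell volume (cm^3)

flux = 0.0
for xc in xs:
    for yc in xs:
        for zc in zs:
            r = receptor - np.array([xc, yc, zc])
            d = np.linalg.norm(r)
            d_soil = -zc * d / r[2]           # path length travelled inside soil
            d_air = d - d_soil
            flux += S_v * dV * np.exp(-mu_soil * d_soil - mu_air * d_air) / (4 * np.pi * d**2)

print("uncollided flux at the receptor:", flux)
```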

  9. Observational manifestations and intrinsic properties of the RCR sources in terms of a unified model

    NASA Astrophysics Data System (ADS)

    Zhelenkova, O. P.; Majorova, E. K.

    2016-04-01

    We present summary results of the study of radio sources showing significant variations of integral flux density using the data from the RATAN-600 surveys of 1980-1994 at a frequency of 7.6 cm. The majority of the detected variable sources have flat radio spectra, although all other spectrum types are also found. Point and compact sources predominate, although all known morphological structures are found in the sample. Variability is detected both in quasars and galaxies. Using the catalog data, we found brightness variations in the optical and/or infrared ranges for half of the host objects of the radio sources. We analyzed the properties of nonvariable and variable RCR sources. We compared the ratio of absolute magnitude to radio luminosity for sources with the active nucleus types determined from the optical data. It is found that this parameter is approximately the same for quasars with different radio luminosity. It is minimum for the strongest radio galaxies and grows up to the level characteristic of quasars with the decrease of radio luminosity. Considering that the ratio depends on obscuring properties of a dust torus, such behavior can be explained if we assume that the torus geometry and its optical depth depend on the source luminosity. This parameter is slightly higher among variable sources than among nonvariable ones, which argues in favor of the nucleus being more open to the observer.

  10. Exact calculation of the angular momentum loss, recoil force, and radiation intensity for an arbitrary source in terms of electric, magnetic, and toroid multipoles.

    PubMed

    Radescu, E E; Vaman, G

    2002-04-01

    An exact calculation of the radiation intensity, angular momentum loss, and the recoil force for the most general type of source, characterized by electric, magnetic, and toroid multipole moments and radii of any multipolarity and an arbitrary time dependence, is presented. The results are expressed in terms of time derivatives of the multipole moments and mean radii of the corresponding distributions. Although quite cumbersome, the formulas found by us represent exact results in the correct multipole analysis of configurations of charges and currents that contain toroidal sources. So the longstanding problem in classical electrodynamics of relating the radiation properties of a system to quantities completely describing its internal electromagnetic structure is thereby exactly solved. By particularizations to the first multipole contributions, corrections to the familiar formulas from books are found, mostly on account of the toroid moments and their interference with the usual electric and magnetic ones.

  11. An ESPRIT-Based Approach for 2-D Localization of Incoherently Distributed Sources in Massive MIMO Systems

    NASA Astrophysics Data System (ADS)

    Hu, Anzhong; Lv, Tiejun; Gao, Hui; Zhang, Zhang; Yang, Shaoshi

    2014-10-01

    In this paper, an approach based on the estimation of signal parameters via rotational invariance techniques (ESPRIT) is proposed for two-dimensional (2-D) localization of incoherently distributed (ID) sources in large-scale/massive multiple-input multiple-output (MIMO) systems. The traditional ESPRIT-based methods are valid only for one-dimensional (1-D) localization of the ID sources. By contrast, in the proposed approach the signal subspace is constructed for estimating the nominal azimuth and elevation directions-of-arrival and the angular spreads. The proposed estimator enjoys closed-form expressions and hence bypasses searching over the entire feasible field. Therefore, it imposes significantly lower computational complexity than the conventional 2-D estimation approaches. Our analysis shows that the estimation performance of the proposed approach improves when the large-scale/massive MIMO systems are employed. The approximate Cramér-Rao bound of the proposed estimator for the 2-D localization is also derived. Numerical results demonstrate that although the proposed estimation method is comparable with the traditional 2-D estimators in terms of performance, it benefits from a remarkably lower computational complexity.
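
    The proposal extends ESPRIT to 2-D nominal angles and angular spreads of incoherently distributed sources; the core rotational-invariance step, shown below for the much simpler 1-D point-source case on a half-wavelength uniform linear array, conveys why no search over the feasible field is needed. All scenario parameters are illustrative.

```python
import numpy as np

# Classical 1-D ESPRIT for point sources on a uniform linear array with
# half-wavelength spacing; the paper generalizes the rotational-invariance
# idea to 2-D incoherently distributed sources.
rng = np.random.default_rng(1)
M, N, K = 16, 400, 2                        # sensors, snapshots, sources
true_deg = np.array([-12.0, 25.0])
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(np.deg2rad(true_deg))))
S = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = A @ S + noise

R = X @ X.conj().T / N                      # sample covariance matrix
eigval, eigvec = np.linalg.eigh(R)
Us = eigvec[:, -K:]                         # signal subspace (largest eigenvalues)
Phi = np.linalg.pinv(Us[:-1]) @ Us[1:]      # rotational invariance between subarrays
phases = np.angle(np.linalg.eigvals(Phi))
est_deg = np.rad2deg(np.arcsin(phases / np.pi))
print(np.sort(est_deg))                     # close to [-12, 25]
```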

  12. Benchmarking the New RESRAD-OFFSITE Source Term Model with DUST-MS and GoldSim - 13377

    SciTech Connect

    Cheng, J.J.; Kamboj, S.; Gnanapragasam, E.; Yu, C.

    2013-07-01

    RESRAD-OFFSITE is a computer code developed by Argonne National Laboratory under the sponsorship of U.S. Department of Energy (DOE) and U.S. Nuclear Regulatory Commission (NRC). It is designed on the basis of RESRAD (onsite) code, a computer code designated by DOE and NRC for evaluating soil-contaminated sites for compliance with human health protection requirements pertaining to license termination or environmental remediation. RESRAD-OFFSITE has enhanced capabilities of modeling radionuclide transport to offsite locations and calculating potential radiation exposure to offsite receptors. Recently, a new source term model was incorporated into RESRAD-OFFSITE to enhance its capability further. This new source term model allows simulation of radionuclide releases from different waste forms, in addition to the soil sources originally considered in RESRAD (onsite) and RESRAD-OFFSITE codes. With this new source term model, a variety of applications can be achieved by using RESRAD-OFFSITE, including but not limited to, assessing the performance of radioactive waste disposal facilities. This paper presents the comparison of radionuclide release rates calculated by the new source term model of RESRAD-OFFSITE versus those calculated by DUST-MS and GoldSim, respectively. The focus of comparison is on the release rates of radionuclides from the bottom of the contaminated zone that was assumed to contain radioactive source materials buried in soil. The transport of released contaminants outside of the primary contaminated zone is beyond the scope of this paper. Overall, the agreement between the RESRAD-OFFSITE results and the DUST-MS and GoldSim results is fairly good, with all three codes predicting identical or similar radionuclide release profiles over time. Numerical dispersion in the DUST-MS and GoldSim results was identified as potentially contributing to the disagreement in the release rates. In general, greater discrepancy in the release rates was found for short

  13. Field Trial of Distributed Acoustic Sensing Using Active Sources at Garner Valley, California

    NASA Astrophysics Data System (ADS)

    Wang, H. F.; Lord, N. E.; Chalari, A.; Lancelle, C.; Baldwin, J. A.; Castongia, E.; Fratta, D.; Nigbor, R. L.; Karaulanov, R.

    2014-12-01

    An optical fiber Distributed Acoustic Sensor array was deployed in a shallow trench at the site of the Garner Valley Downhole Array (GVDA) in southern California. The site was operated as a collaborator of the Network for Earthquake Engineering Simulation (NEES) by UCSB. The fiber-optic cable layout approximated a rectangle whose dimensions were roughly 160 meters by 80 meters. The layout included two subdiagonals to provide a variety of orientations of the cable relative to source locations. The study included different seismic sources deployed at a number of surveyed positions: a 45 kN shear shaker operated at the site by NEES@UCLA, a portable 450 N shaker, a small Vibroseis truck, and hammer blows on a steel plate to map cable locations. Several dozen separate tests were recorded in which each test typically included ten repeats. The data were utilized for several studies. First, the characteristics of the recorded signals were analyzed for directivity and sensitivity of the cable response (Lancelle et al., 2014, this meeting). The DAS system recorded dynamic ground events in the direction of the cable and hence comparisons with geophones required signal processing. The one-meter spacing of DAS traces could be well correlated over distances of a few meters. Second, swept-sine sources were used to obtain surface-wave velocity dispersion to determine near-surface shear-wave velocity distribution using Multispectral Analysis of Surface Waves (MASW) (Baldwin et al., 2014, this meeting). The results were in good agreement with previous Vibroseis results at the site (Stokoe et al. 2004). Third, a new method for time-frequency filtering was developed for extracting the surface-wave phase velocities from uncorrelated receiver traces (Lord et al., 2014, this meeting).

  14. Distribution and sources of oxygenated non-hydrocarbons in topsoil of Beijing, China.

    PubMed

    Zhang, Zhihuan; Wan, Tiantian; Peng, Xuyang; He, Guangxiu; Liu, Yu; Zeng, Li

    2016-08-01

    The oxygenated non-hydrocarbon compounds are widely distributed in soil. To investigate the distribution and origin of these compounds in topsoil of Beijing, their contents and compositions were measured in topsoil from 62 sites in Beijing. The research results showed that oxygenated non-hydrocarbons were composed primarily of C6∼C28 n-fatty acids, C12∼C28 n-fatty alcohols, n-fatty acid methyl esters, phthalates, sterols, and dehydroabietic acid in the topsoil of Beijing. The contents and compositions of these compounds varied with the sampling site. The concentrations of n-fatty acids and phthalate esters were the highest at all sites, followed by sterols, n-fatty acid methyl esters, fatty alcohols, and dehydroabietic acid in order. The n-fatty acids had a main peak of C16, followed by C18. An odd or even carbon number predominance was not observed in the low-molecular-weight n-fatty acids, indicating a fossil fuel or organic matter source. However, some high-molecular-weight n-fatty acids with an even carbon predominance may derive from a biomass. The n-fatty alcohols showed a main peak of C22 and were predominated by an even carbon number, suggesting plant, microbial, or other natural origins. Phthalates, including diethyl phthalate (DEP), diisobutyl phthalate (DIBP), dibutyl phthalate (DBP), diethylhexyl phthalate (DEHP), and dimethylphthalate (DMP), were detected. The content of phthalate esters was higher in the samples collected from dense human activity areas. The concentrations of DBP, DEHP, and DIBP were relatively high, indicating an anthropogenic source. The sterols (predominantly β-sitosterol) originated from biological sources, especially plants. The n-fatty acid methyl esters and dehydroabietic acid in topsoil showed apparent even carbon predominance with the former mainly derived from microorganisms or plants and the latter from cork combustion products. PMID:27172982

  15. Applicability of the single equivalent point dipole model to represent a spatially distributed bio-electrical source.

    PubMed

    Armoundas, A A; Feldman, A B; Sherman, D A; Cohen, R J

    2001-09-01

    Although the single equivalent point dipole model has been used to represent well-localised bio-electrical sources, in realistic situations the source is distributed. Consequently, position estimates of point dipoles determined by inverse algorithms suffer from systematic error due to the non-exact applicability of the inverse model. In realistic situations, this systematic error cannot be avoided, a limitation that is independent of the complexity of the torso model used. This study quantitatively investigates the intrinsic limitations in the assignment of a location to the equivalent dipole due to distributed electrical source. To simulate arrhythmic activity in the heart, a model of a wave of depolarisation spreading from a focal source over the surface of a spherical shell is used. The activity is represented by a sequence of concentric belt sources (obtained by slicing the shell with a sequence of parallel plane pairs), with constant dipole moment per unit length (circumferentially) directed parallel to the propagation direction. The distributed source is represented by N dipoles at equal arc lengths along the belt. The sum of the dipole potentials is calculated at predefined electrode locations. The inverse problem involves finding a single equivalent point dipole that best reproduces the electrode potentials due to the distributed source. The inverse problem is implemented by minimising the chi2 per degree of freedom. It is found that the trajectory traced by the equivalent dipole is sensitive to the location of the spherical shell relative to the fixed electrodes. It is shown that this trajectory does not coincide with the sequence of geometrical centres of the consecutive belt sources. For distributed sources within a bounded spherical medium, displaced from the sphere's centre by 40% of the sphere's radius, it is found that the error in the equivalent dipole location varies from 3 to 20% for sources with size between 5 and 50% of the sphere's radius
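
    The inverse step described here can be imitated by fitting the position and moment of a single dipole to potentials generated by a belt of small dipoles. The forward model below uses the infinite-homogeneous-medium dipole potential (the study's bounded spherical conductor adds boundary terms that are omitted), and the electrode layout and source geometry are illustrative only.

```python
import numpy as np
from scipy.optimize import least_squares

def dipole_potential(electrodes, pos, moment, sigma=0.33):
    """Potential of a point current dipole, assuming an infinite homogeneous
    medium; the bounded spherical conductor of the study adds boundary terms."""
    r = electrodes - pos
    d = np.linalg.norm(r, axis=1)
    return (r @ moment) / (4.0 * np.pi * sigma * d**3)

# Electrodes scattered on a sphere of radius 10 (illustrative layout)
rng = np.random.default_rng(2)
theta = np.arccos(rng.uniform(-1.0, 1.0, 64))
phi = rng.uniform(0.0, 2.0 * np.pi, 64)
electrodes = 10.0 * np.c_[np.sin(theta) * np.cos(phi),
                          np.sin(theta) * np.sin(phi), np.cos(theta)]

# Distributed source: a belt of 12 small dipoles on a circle, all pointing in +x
ang = np.linspace(0.0, 2.0 * np.pi, 12, endpoint=False)
belt = np.c_[np.full(12, 3.0), 1.5 * np.cos(ang), 1.5 * np.sin(ang)]
v_meas = sum(dipole_potential(electrodes, p, np.array([1.0, 0.0, 0.0]) / 12) for p in belt)

# Fit a single equivalent dipole (3 position + 3 moment parameters)
def residual(x):
    return dipole_potential(electrodes, x[:3], x[3:]) - v_meas

fit = least_squares(residual, x0=np.full(6, 0.1))
print("equivalent dipole position:", fit.x[:3].round(2))
```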

  16. Applicability of the single equivalent point dipole model to represent a spatially distributed bio-electrical source

    NASA Technical Reports Server (NTRS)

    Armoundas, A. A.; Feldman, A. B.; Sherman, D. A.; Cohen, R. J.

    2001-01-01

    Although the single equivalent point dipole model has been used to represent well-localised bio-electrical sources, in realistic situations the source is distributed. Consequently, position estimates of point dipoles determined by inverse algorithms suffer from systematic error due to the non-exact applicability of the inverse model. In realistic situations, this systematic error cannot be avoided, a limitation that is independent of the complexity of the torso model used. This study quantitatively investigates the intrinsic limitations in the assignment of a location to the equivalent dipole due to a distributed electrical source. To simulate arrhythmic activity in the heart, a model of a wave of depolarisation spreading from a focal source over the surface of a spherical shell is used. The activity is represented by a sequence of concentric belt sources (obtained by slicing the shell with a sequence of parallel plane pairs), with constant dipole moment per unit length (circumferentially) directed parallel to the propagation direction. The distributed source is represented by N dipoles at equal arc lengths along the belt. The sum of the dipole potentials is calculated at predefined electrode locations. The inverse problem involves finding a single equivalent point dipole that best reproduces the electrode potentials due to the distributed source. The inverse problem is implemented by minimising the chi-squared per degree of freedom. It is found that the trajectory traced by the equivalent dipole is sensitive to the location of the spherical shell relative to the fixed electrodes. It is shown that this trajectory does not coincide with the sequence of geometrical centres of the consecutive belt sources. For distributed sources within a bounded spherical medium, displaced from the sphere's centre by 40% of the sphere's radius, it is found that the error in the equivalent dipole location varies from 3 to 20% for sources with size between 5 and 50% of the sphere's radius.

  17. Distribution, source and chemical speciation of phosphorus in surface sediments of the central Pacific Ocean

    NASA Astrophysics Data System (ADS)

    Ni, Jianyu; Lin, Peng; Zhen, Yang; Yao, Xuying; Guo, Laodong

    2015-11-01

    The abundance of five forms of phosphorus (P) in surface sediments from the central Pacific Ocean (4.5-15°N, 154-143°W) was determined using a sequential extraction procedure (SEDEX) to examine the distribution and source of different P species. Total P (TP) concentrations ranged from 13.2 to 119 μmol-P/g with an average of 48.6±27.4 μmol-P/g. Within the TP pool, total inorganic P (TIP) concentrations varied from 11.1 to 121 μmol-P/g, while total organic P (TOP) concentrations ranged from undetectable to 4.8 μmol-P/g. Inorganic P was generally the predominant form in surface sediments, comprising on average up to 93% of sedimentary TP, leaving <16% as TOP. Among the five P species, the authigenic or CaCO3-bound P and detrital P were the two major P species (comprising on average 43.4±13.5% and 45.7±14.8% of TP, respectively), followed by the refractory organic P, representing 6.7±2.4% of TP. Fe-bound P accounted for 3.3±1.3% of TP, and exchangeable or adsorbed P made up less than 1% of TP. The spatial distribution of different sedimentary P species showed that higher concentrations of detrital P and Fe-bound P were both found at around 11°N, suggesting similar sources for these two P species. Much of the detrital P was derived from atmospheric sources in the study area, where heavy rainfall in the intertropical convergence zone between 3°N and 11°N has been widely reported. Compared with other marine environments, the central Pacific Ocean had relatively higher detrital P, but a lower abundance of adsorbed P and Fe-bound P. These unique results suggested that most of the labile P could have been released into the water column during its settling from the surface to the seafloor, or that atmospheric inputs of refractory P were an important source for sedimentary P, accounting for an average of 63% of the TP, in the central Pacific Ocean. High proportions of authigenic P in deep-sea sediments, on the other hand, implied that oceanic sediments are an
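
    The fractionation arithmetic behind these percentages (each SEDEX pool expressed as a share of total P, with the inorganic fraction obtained by difference) can be illustrated with a short Python sketch; the pool concentrations below are placeholder values for a single hypothetical sample, not data from the study.

      # Illustrative partitioning of sedimentary P into the five SEDEX pools.
      # Concentrations are placeholder values (umol-P/g), not data from this study.
      pools = {
          "exchangeable/adsorbed P": 0.4,
          "Fe-bound P": 1.6,
          "authigenic/CaCO3-bound P": 21.0,
          "detrital P": 22.0,
          "refractory organic P": 3.3,
      }

      tp = sum(pools.values())                 # total P recovered by the extraction
      top = pools["refractory organic P"]      # organic fraction (refractory organic pool)
      tip = tp - top                           # inorganic fraction
      print(f"TP = {tp:.1f}, TIP = {tip:.1f}, TOP = {top:.1f} umol-P/g")
      for name, conc in pools.items():
          print(f"{name:26s} {100 * conc / tp:5.1f} % of TP")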

  18. Elemental distribution of metals in urban river sediments near an industrial effluent source.

    PubMed

    Tamim, Umma; Khan, Rahat; Jolly, Yeasmin Nahar; Fatema, Kanij; Das, Sopan; Naher, Kamrun; Islam, Mohammad Amirul; Islam, S M Azharul; Hossain, Syed Mohammod

    2016-07-01

    To study the compositional trends associated with the spatial and layer-wise distribution of heavy metals, as well as the sediment response to untreated chemical wastes, we analyzed river (Buriganga, Bangladesh) sediments by instrumental neutron activation analysis (INAA) and energy-dispersive X-ray fluorescence (EDXRF). In nine sediment samples, 27 elements were determined: Na, Al, K, Ca, Sc, Ti, V, Cr, Mn, Fe, Co, Zn, As, Rb, Cs, La, Ce, Sm, Dy, Hf, Th and U by INAA, and Cu, Sr, Ba, Hg and Pb by EDXRF. The pollution level and the origin of pollutants were evaluated with the aid of the geo-accumulation index (Igeo), enrichment factor (EF), pollution load index (PLI) and inter-element correlation analysis. Major elements appear to be buffered even where the pollution level is severe, while the trace metals seem to be highly responsive. Among the heavy metals, Cr is the dominant pollutant, though the pollution level varies systematically with the sampling depth and the distance from the contamination source. A strong positive linear correlation between Cr and Zn (0.94) indicates a similar anthropogenic source(s) for these two metals, but the sediments of this study respond differently depending on their geochemical behavior. The rare earth elements (here La, Ce, Sm and Dy), Th and U appear to have a crustal origin, and the Th/U ratio varies from 2.58 to 4.96. PMID:27151427
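
    The three indices named above have standard definitions in sediment geochemistry; the Python sketch below implements those commonly used forms. The background values and sample concentrations are placeholders, since the abstract does not report the reference values used in the study.

      # Standard sediment-pollution indices (common textbook definitions).
      # All concentrations below are placeholder values, not data from this study.
      import math

      def igeo(c_sample, c_background):
          """Geo-accumulation index: Igeo = log2(Cn / (1.5 * Bn))."""
          return math.log2(c_sample / (1.5 * c_background))

      def enrichment_factor(c_sample, c_ref_sample, c_background, c_ref_background):
          """EF = (Cn/Cref)_sample / (Cn/Cref)_background, Cref usually Al or Fe."""
          return (c_sample / c_ref_sample) / (c_background / c_ref_background)

      def pollution_load_index(contamination_factors):
          """PLI = n-th root of the product of contamination factors CF = Cn/Bn."""
          return math.prod(contamination_factors) ** (1.0 / len(contamination_factors))

      # Hypothetical example: Cr and Zn in mg/kg, normalised to Al.
      cr, zn, al = 450.0, 380.0, 72000.0          # measured (placeholder)
      cr_b, zn_b, al_b = 90.0, 95.0, 80000.0      # background (placeholder)

      print("Igeo(Cr) =", round(igeo(cr, cr_b), 2))
      print("EF(Cr)   =", round(enrichment_factor(cr, al, cr_b, al_b), 2))
      print("PLI      =", round(pollution_load_index([cr / cr_b, zn / zn_b]), 2))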

  19. Comparison between Two Practical Methods of Light Source Monitoring in Quantum Key Distribution

    NASA Astrophysics Data System (ADS)

    Wang, Gan; Chen, Ziyang; Xu, Bingjie; Li, Zhengyu; Peng, Xiang; Guo, Hong

    2016-05-01

    The practical security of quantum key distribution (QKD) is a critical issue due to the loopholes opened by the imperfections of practical devices. The untrusted source problem is a fundamental issue that exists in almost every protocol, including the loss-tolerant protocol and the measurement-device-independent protocol. Two practical light source monitoring methods have been proposed, i.e., the two-threshold detector scheme and the photon-number-resolving (PNR) detector scheme. In this work, we test the fluctuation level of different gain-switched pulsed lasers, i.e., the ratio between the standard deviation and the mean of the pulse energy (denoted γ), which ranges from 1% to 7%. Moreover, we propose an improved practical PNR detector scheme and discuss under which circumstances each light source monitoring method should be used; generally speaking, when the fluctuation is large the PNR detector method performs better. This provides guidance for selecting the proper monitoring module for different practical systems. This work is supported by the National Science Fund for Distinguished Young Scholars of China (Grant No. 61225003), the State Key Project of National Natural Science Foundation of China (Grant No. 61531003).
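
    The fluctuation level γ characterised here is simply the relative standard deviation of the pulse energy. A minimal Python sketch of the estimate, using synthetic pulse energies rather than measured laser data:

      # Estimate the source fluctuation gamma = std/mean of the pulse energy.
      # Pulse energies are synthetic (arbitrary units), standing in for measured data.
      import numpy as np

      rng = np.random.default_rng(1)
      pulse_energies = rng.normal(loc=1.0, scale=0.04, size=100_000)

      gamma = pulse_energies.std(ddof=1) / pulse_energies.mean()
      print(f"gamma = {100 * gamma:.1f} %")   # ~4%, within the 1%-7% range reported above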

  20. Planck Early Results. XV. Spectral Energy Distributions and Radio Continuum Spectra of Northern Extragalactic Radio Sources

    NASA Technical Reports Server (NTRS)

    Aatrokoski, J.; Ade, P. A. R.; Aghanim, N.; Aller, H. D.; Aller, M. F.; Angelakis, E.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Balbi, A.; Banday, A. J.; Barreiro, R. B.; Bartlett, J. G.; Battaner, E.; Benabed, K.; Benoit, A.; Berdyugin, A.; Bernard, J. P.; Bersanelli, M.; Bhatia, R.; Bonaldi, A.; Bonavera, L.; Gehrels, N.

    2011-01-01

    Spectral energy distributions (SEDs) and radio continuum spectra are presented for a northern sample of 104 extragalactic radio sources, based on the Planck Early Release Compact Source Catalogue (ERCSC) and simultaneous multifrequency data. The nine Planck frequencies, from 30 to 857 GHz, are complemented by a set of simultaneous observations ranging from radio to gamma-rays. This is the first extensive frequency coverage in the radio and millimetre domains for an essentially complete sample of extragalactic radio sources, and it shows how the individual shocks, each in their own phase of development, shape the radio spectra as they move in the relativistic jet. The SEDs presented in this paper were fitted with second- and third-degree polynomials to estimate the frequencies of the synchrotron and inverse Compton (IC) peaks, and the spectral indices of low- and high-frequency radio data, including the Planck ERCSC data, were calculated. SED modelling methods are discussed, with an emphasis on proper physical modelling of the synchrotron bump using multiple components. Planck ERCSC data also suggest that the original accelerated electron energy spectrum could be much harder than commonly thought, with a power-law index around 1.5 instead of the canonical 2.5. The implications of this for the acceleration mechanisms effective in blazar shocks are discussed. Furthermore, in many cases the Planck data indicate that gamma-ray emission must originate in the same shocks that produce the radio emission.
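
    The peak-frequency estimation step described above (low-order polynomial fits to the SED in log-log space) can be sketched generically in Python; the SED points below are synthetic and the degree-3 fit only illustrates the approach, not the actual Planck analysis.

      # Fit a low-order polynomial to log10(nu * F_nu) vs log10(nu) and read off the
      # synchrotron peak frequency from the fitted curve. Synthetic data, not Planck data.
      import numpy as np

      rng = np.random.default_rng(2)
      log_nu = np.linspace(9.5, 15.5, 25)                 # ~3 GHz to ~3e15 Hz
      log_nufnu = -0.15 * (log_nu - 13.0) ** 2 + rng.normal(0, 0.05, log_nu.size)

      # Third-degree polynomial fit in log-log space
      fit = np.poly1d(np.polyfit(log_nu, log_nufnu, deg=3))

      # Locate the maximum of the fitted polynomial within the sampled range
      grid = np.linspace(log_nu.min(), log_nu.max(), 2000)
      log_nu_peak = grid[np.argmax(fit(grid))]
      print(f"estimated synchrotron peak: nu ~ 10^{log_nu_peak:.2f} Hz")

      # Low- and high-frequency spectral indices (S ~ nu**alpha) could similarly be
      # estimated from straight-line fits to the corresponding subsets of the data.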