Science.gov

Sample records for distributed source term

  1. Spatial distribution of HTO activity in unsaturated soil depth in the vicinity of long-term release source

    SciTech Connect

    Golubev, A.; Golubeva, V.; Mavrin, S.

    2015-03-15

    Previous studies reported a correlation between the HTO activity distribution in the unsaturated soil layer and long-term atmospheric releases of HTO in the vicinity of the Savannah River Site. The Tritium Working Group of the BIOMASS Programme performed a model-model intercomparison study of HTO transport from the atmosphere to unsaturated soil and evaluated the HTO activity distribution in the unsaturated soil layer in the vicinity of permanent atmospheric sources. The Tritium Working Group also reported such a correlation, but concluded that experimental data sets are needed to confirm it and to validate the associated computer models. (authors)

  2. Design parameters and source terms: Volume 3, Source terms

    SciTech Connect

    Not Available

    1987-10-01

    The Design Parameters and Source Terms Document was prepared in accordance with a DOE request to provide data for the environmental impact study to be performed in the future for the Deaf Smith County, Texas site for a nuclear waste repository in salt. This document updates a previous unpublished report by Stearns Catalytic Corporation (SCC), entitled ''Design Parameters and Source Terms for a Two-Phase Repository in Salt,'' 1985, to the level of the Site Characterization Plan - Conceptual Design Report. The previous unpublished SCC study identifies the data needs for the Environmental Assessment effort for seven possible salt repository sites. 11 refs., 9 tabs.

  3. Infrared image processing devoted to thermal non-contact characterization-Applications to Non-Destructive Evaluation, Microfluidics and 2D source term distribution for multispectral tomography

    NASA Astrophysics Data System (ADS)

    Batsale, Jean-Christophe; Pradere, Christophe

    2015-11-01

    The cost of IR cameras continues to decrease. Beyond the preliminary calibration step and the overall instrumentation, infrared image processing is then a key step for applications in very broad domains. Generally, the IR images capture the transient temperature field related to the emission of a black surface in response to external or internal heating (active IR thermography). The first applications were devoted to so-called thermal Non-Destructive Evaluation methods, considering a thin sample and 1D transient heat diffusion through the sample (transverse diffusion). With simplified assumptions about the transverse diffusion, in-plane diffusion and transport phenomena can also be considered. A general equation can be applied to balance the heat transfer at the pixel scale, or between groups of pixels, in order to estimate several fields of thermophysical properties (heterogeneous fields of in-plane diffusivity, flow distributions, source terms). Many strategies are possible for processing the large amount of space- and time-distributed data (prior integral transformation of the images, compression, elimination of non-useful areas...), generally based on the need to analyse the derivatives of the temperature field with respect to space and time. Several illustrative examples related to the Non-Destructive Evaluation of heterogeneous solids, the thermal characterization of chemical reactions in microfluidic channels, and the design of systems for multispectral tomography will be presented.

  4. HTGR Mechanistic Source Terms White Paper

    SciTech Connect

    Wayne Moe

    2010-07-01

    The primary purposes of this white paper are: (1) to describe the proposed approach for developing event-specific mechanistic source terms for HTGR design and licensing, (2) to describe the technology development programs required to validate the design methods used to predict these mechanistic source terms, and (3) to obtain agreement from the NRC that, subject to appropriate validation through the technology development program, the approach for developing event-specific mechanistic source terms is acceptable.

  5. Source term calculations for assessing radiation dose to equipment

    SciTech Connect

    Denning, R.S.; Freeman-Kelly, R.; Cybulskis, P.; Curtis, L.A.

    1989-07-01

    This study examines results of analyses performed with the Source Term Code Package to develop updated source terms using NUREG-0956 methods. The updated source terms are to be used to assess the adequacy of current regulatory source terms used as the basis for equipment qualification. Time-dependent locational distributions of radionuclides within a containment following a severe accident have been developed. The Surry reactor has been selected in this study as representative of PWR containment designs. Similarly, the Peach Bottom reactor has been used to examine radionuclide distributions in boiling water reactors. The time-dependent inventory of each key radionuclide is provided in terms of its activity in curies. The data are to be used by Sandia National Laboratories to perform shielding analyses to estimate radiation dose to equipment in each containment design. See NUREG/CR-5175, ''Beta and Gamma Dose Calculations for PWR and BWR Containments.'' 6 refs., 11 tabs.

  6. Calculation of source terms for NUREG-1150

    SciTech Connect

    Breeding, R.J.; Williams, D.C.; Murfin, W.B.; Amos, C.N.; Helton, J.C.

    1987-10-01

    The source terms estimated for NUREG-1150 are generally based on the Source Term Code Package (STCP), but the actual source term calculations used in computing risk are performed by much smaller codes which are specific to each plant. This was done because the method of estimating the uncertainty in risk for NUREG-1150 requires hundreds of source term calculations for each accident sequence. This is clearly impossible with a large, detailed code like the STCP. The small plant-specific codes are based on simple algorithms and utilize adjustable parameters. The values of the parameters appearing in these codes are derived from the available STCP results. To determine the uncertainty in the estimation of the source terms, these parameters were varied as specified by an expert review group. This method was used to account for the uncertainties in the STCP results and the uncertainties in phenomena not considered by the STCP.
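    The parameter-variation scheme described above can be illustrated with a toy Monte Carlo sketch (not the actual NUREG-1150 plant-specific codes): adjustable parameters are sampled from expert-specified ranges and propagated through a simple algebraic source term model. The function name, the algebra, and the sampled ranges below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def release_fraction(frac_core_release, frac_containment_retention):
    """Toy parametric source term: fraction of core inventory reaching
    the environment (illustrative algebra, not the STCP models)."""
    return frac_core_release * (1.0 - frac_containment_retention)

# Expert-specified ranges sampled many times to propagate uncertainty
n = 10_000
core = rng.uniform(0.1, 0.9, n)      # fraction released from fuel
retain = rng.uniform(0.5, 0.99, n)   # fraction retained in containment
env = release_fraction(core, retain)
lo, hi = np.percentile(env, [5, 95]) # uncertainty band on the source term
```

Because each sample is a cheap algebraic evaluation rather than a full STCP run, hundreds of such calculations per accident sequence become feasible, which is the point the abstract makes.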

  7. SOURCE TERMS FOR HLW GLASS CANISTERS

    SciTech Connect

    J.S. Tang

    2000-08-15

    This calculation is prepared by the Monitored Geologic Repository (MGR) Waste Package Design Section. The objective of this calculation is to determine the source terms that include radionuclide inventory, decay heat, and radiation sources due to gamma rays and neutrons for the high-level radioactive waste (HLW) from the West Valley Demonstration Project (WVDP), Savannah River Site (SRS), Hanford Site (HS), and Idaho National Engineering and Environmental Laboratory (INEEL). This calculation also determines the source terms of the canister containing the SRS HLW glass and immobilized plutonium. The scope of this calculation is limited to source terms for a time period out to one million years. The results of this calculation may be used to carry out performance assessment of the potential repository and to evaluate radiation environments surrounding the waste packages (WPs). This calculation was performed in accordance with the Development Plan ''Source Terms for HLW Glass Canisters'' (Ref. 7.24).

  8. Mechanistic facility safety and source term analysis

    SciTech Connect

    PLYS, M.G.

    1999-06-09

    A PC-based computer program was created for facility safety and source term analysis at Hanford. The program has been successfully applied to mechanistic prediction of source terms from chemical reactions in underground storage tanks, hydrogen combustion in double-contained receiver tanks, and process evaluation including the potential for runaway reactions in spent nuclear fuel processing. Model features include user-defined facility rooms, flow path geometry, and heat conductors; user-defined non-ideal vapor and aerosol species; pressure- and density-driven gas flows; aerosol transport and deposition; and structure to accommodate facility-specific source terms. Example applications are presented here.

  9. Supernate source term analysis: Revision 1

    SciTech Connect

    Aponte, C.I.

    1994-10-13

    The HM Process (modified PUREX) has been used in the H-Canyon since 1959 to recover uranium and byproduct neptunium. The PUREX process has been used in the Separations facilities in F- and H-Areas. This report analyzes the inhalation and ingestion radionuclide dose impact of the soluble portion of the HM and PUREX process waste streams. The spent fuel assemblies analyzed are the Mark 16B and Mark 22 for the HM process, and the Mark 31A and Mark 31B for the PUREX process. The results from this analysis are combined with an analysis of the current Safety Analysis Report (SAR) source term to evaluate source terms for HLW supernate. Analysis of fission yield data and SAR source term values demonstrates that a limited number of radionuclides contribute 1% or more to the total dose, and that cesium and plutonium isotopes are the radionuclides with the greatest impact on the supernate source term. This report also analyzes volatile and evaporative impacts, as recommended by DOE guidance. In practice, the only radionuclide volatilized during evaporative conditions is tritium; no selective volatility occurs during forced evaporation in HLW. The results obtained permit reducing the list of radionuclides to be considered in the development of source terms to support the High Level Waste Safety Analysis Report.

  10. Dose distributions in regions containing beta sources: Irregularly shaped source distributions in homogeneous media

    SciTech Connect

    Werner, B.L. )

    1991-11-01

    Methods are introduced by which dose rate distributions due to nonuniform, irregularly shaped distributions of beta emitters can be calculated using dose rate distributions for uniform, spherical source distributions. The dose rate distributions can be written in the MIRD formalism.

  11. A Bayesian Algorithm for Assessing Uncertainty in Radionuclide Source Terms

    NASA Astrophysics Data System (ADS)

    Robins, Peter

    2015-04-01

    Inferring source term parameters for a radionuclide release is difficult, due to the large uncertainties in forward dispersion modelling as a consequence of imperfect knowledge pertaining to wind vector fields and turbulent diffusion in the Earth's atmosphere. Additional sources of error include the radionuclide measurements obtained from sensors. These measurements may either be subject to random fluctuations or are simple indications that the true, unobserved quantity is below a detection limit. Consequent large reconstruction uncertainties can render a "best" estimate meaningless. A Markov Chain Monte Carlo (MCMC) Bayesian Algorithm is presented that attempts to account for uncertainties in atmospheric transport modelling and radionuclide sensor measurements to quantify uncertainties in radionuclide release source term parameters. Prior probability distributions are created for likely release locations at existing nuclear facilities and seismic events. Likelihood models are constructed using CTBTO adjoint modelling output and probability distributions of sensor response. Samples from the resulting multi-isotope source term parameters posterior probability distribution are generated that can be used to make probabilistic statements about the source term. Examples are given of marginal probability distributions obtained from simulated sensor data. The consequences of errors in numerical weather prediction wind fields are demonstrated with a reconstruction of the Fukushima nuclear reactor accident from International Monitoring System radionuclide particulate sensor data.
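    A minimal sketch of the kind of MCMC machinery the abstract describes, assuming a single release-magnitude parameter, Gaussian sensor noise, and a censored (detection-limit) likelihood for non-detections. The random-walk Metropolis sampler and all names and values below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np
from math import erf, sqrt, log

rng = np.random.default_rng(0)

def log_posterior(q, sens, obs, sigma, detection_limit):
    """Log-posterior of release magnitude q (flat prior on q > 0).

    obs entries that are None are non-detections: they contribute the
    probability that the true, unobserved value lies below the limit."""
    if q <= 0:
        return -np.inf
    logp = 0.0
    for s, y in zip(sens, obs):
        pred = s * q                      # predicted sensor reading
        if y is None:                     # censored measurement
            logp += log(0.5 * (1 + erf((detection_limit - pred)
                                       / (sigma * sqrt(2)))) + 1e-300)
        else:                             # Gaussian measurement error
            logp += -0.5 * ((y - pred) / sigma) ** 2
    return logp

def metropolis(logpost, q0, step, n):
    """Random-walk Metropolis sampler; returns the chain of samples."""
    q, lp = q0, logpost(q0)
    chain = np.empty(n)
    for i in range(n):
        q_prop = q + step * rng.normal()
        lp_prop = logpost(q_prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject
            q, lp = q_prop, lp_prop
        chain[i] = q
    return chain
```

Samples from the chain (after discarding burn-in) support exactly the kind of probabilistic statements the abstract mentions, e.g. marginal credible intervals on the release magnitude.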

  12. BWR Source Term Generation and Evaluation

    SciTech Connect

    J.C. Ryman

    2003-07-31

    This calculation is a revision of a previous calculation (Ref. 7.5) that bears the same title and has the document identifier BBAC00000-01717-0210-00006 REV 01. The purpose of this revision is to remove TBV (to-be-verified) -4110 associated with the output files of the previous version (Ref. 7.30). The purpose of this and the previous calculation is to generate source terms for a representative boiling water reactor (BWR) spent nuclear fuel (SNF) assembly for the first one million years after the SNF is discharged from the reactors. This calculation includes an examination of several ways to represent BWR assemblies and operating conditions in SAS2H in order to quantify the effects these representations may have on source terms. These source terms provide information characterizing the neutron and gamma spectra in particles per second, the decay heat in watts, and radionuclide inventories in curies. Source terms are generated for a range of burnups and enrichments (see Table 2) that are representative of the waste stream and stainless steel (SS) clad assemblies. During this revision, it was determined that the burnups used for the computer runs of the previous revision were actually about 1.7% less than the stated, or nominal, burnups. See Section 6.6 for a discussion of how to account for this effect before using any source terms from this calculation. The source term due to the activation of corrosion products deposited on the surfaces of the assembly from the coolant is also calculated. The results of this calculation support many areas of the Monitored Geologic Repository (MGR), which include thermal evaluation, radiation dose determination, radiological safety analyses, surface and subsurface facility designs, and total system performance assessment. This includes MGR items classified as Quality Level 1, for example, the Uncanistered Spent Nuclear Fuel Disposal Container (Ref. 7.27, page 7). Therefore, this calculation is subject to the requirements of the

  13. SUBSURFACE SHIELDING-SPECIFIC SOURCE TERM EVALUATION

    SciTech Connect

    S. Su

    1999-08-24

    The purpose of this work is to provide supporting calculations for determination of the radiation source terms specific to subsurface shielding design and analysis. These calculations are not intended to provide the absolute values of the source terms, which are under the charter of the Waste Package Operations (WPO) Group. Rather, the calculations focus on evaluation of the various combinations of fuel enrichment, burnup and cooling time for a given decay heat output, consistent with the waste package (WP) thermal design basis. The objective is to determine the worst-case combination of the fuel characteristics (enrichment, burnup and cooling time) which would give the maximum radiation fields for subsurface shielding considerations. The calculations are limited to PWR fuel only, since the WP design is currently evolving with thinner walls and a reduced heat load as compared to the viability assessment (VA) reference design. The results for PWR fuel will provide a comparable indication of the trend for BWR fuel, as their characteristics are similar. The source term development for defense high-level waste and other spent nuclear fuel (SNF) is the responsibility of the WPO Group, and therefore, is not included in this work. This work includes the following items responsive to the stated purpose and objective: (1) Determine the possible fuel parameters (initial enrichment, burnup and cooling time) that give the same decay heat value as specified for the waste package thermal design; (2) Obtain the neutron and gamma source terms for the various combinations of the fuel parameters for use in radiation field calculations; and (3) Calculate radiation fields on the surfaces of the waste package and its transporter to quantify the effects of the fuel parameters with the same decay heat value for use in identifying the worst-case combination of the fuel parameters.

  14. Design parameters and source terms: Volume 2, Source terms: Revision 0

    SciTech Connect

    Not Available

    1987-10-01

    The Design Parameters and Source Terms Document was prepared in accordance with a DOE request to provide data for the environmental impact study to be performed in the future for the Deaf Smith County, Texas site for a nuclear waste repository in salt. This document updates a previous unpublished report by Stearns Catalytic Corporation (SCC), entitled ''Design Parameters and Source Terms for a Two-Phase Repository in Salt,'' 1985, to the level of the Site Characterization Plan - Conceptual Design Report. The previous unpublished SCC study identifies the data needs for the Environmental Assessment effort for seven possible salt repository sites. 2 tabs.

  15. Design parameters and source terms: Volume 2, Source terms: Revision 0

    SciTech Connect

    Not Available

    1987-09-01

    The Design Parameters and Source Terms Document was prepared in accordance with a DOE request to provide data for the environmental impact study to be performed in the future for the Deaf Smith County, Texas site for a nuclear waste repository in salt. This document updates a previous unpublished report to the level of the Site Characterization Plan - Conceptual Design Report (SCP-CDR). The previous study identifies the data needs for the Environmental Assessment effort for seven possible salt repository sites. Volume 2 contains tables of source terms.

  16. Over-Distribution in Source Memory

    PubMed Central

    Brainerd, C. J.; Reyna, V. F.; Holliday, R. E.; Nakamura, K.

    2012-01-01

    Semantic false memories are confounded with a second type of error, over-distribution, in which items are attributed to contradictory episodic states. Over-distribution errors have proved to be more common than false memories when the two are disentangled. We investigated whether over-distribution is prevalent in another classic false memory paradigm: source monitoring. It is. Conventional false memory responses (source misattributions) were predominantly over-distribution errors, but unlike semantic false memory, over-distribution also accounted for more than half of true memory responses (correct source attributions). Experimental control of over-distribution was achieved via a series of manipulations that affected either recollection of contextual details or item memory (concreteness, frequency, list-order, number of presentation contexts, and individual differences in verbatim memory). A theoretical model (conjoint process dissociation) was used to analyze the data; it predicts that (a) over-distribution is directly proportional to item memory but inversely proportional to recollection and (b) item memory is not a necessary precondition for recollection of contextual details. The results were consistent with both predictions. PMID:21942494

  17. Atmospheric distribution and sources of nonmethane hydrocarbons

    NASA Technical Reports Server (NTRS)

    Singh, Hanwant B.; Zimmerman, Patrick B.

    1992-01-01

    The paper discusses the atmospheric distribution of natural and man-made nonmethane hydrocarbons (NMHCs), the major species of airborne NMHCs, and their sources and sinks. Particular attention is given to the techniques for measuring atmospheric NMHCs; diurnal and seasonal variations of atmospheric NMHCs and differences between rural, urban, and marine environments; latitudinal and vertical distributions; and available stratospheric NMHC measurements. A formula defining the atmospheric lifetime of a NMHC from its reaction rates with OH and O3 is presented.
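    The lifetime formula the abstract refers to combines pseudo-first-order loss rates to OH and O3: tau = 1 / (k_OH [OH] + k_O3 [O3]). A small sketch follows; the rate constants and concentrations used in the example are illustrative placeholders, not values from the paper.

```python
def nmhc_lifetime(k_oh, conc_oh, k_o3, conc_o3):
    """Atmospheric lifetime (s) of an NMHC removed by reaction with OH and O3.

    Total loss is pseudo-first-order: L = k_OH*[OH] + k_O3*[O3], tau = 1/L.
    Rate constants in cm^3 molecule^-1 s^-1, concentrations in molecule cm^-3."""
    return 1.0 / (k_oh * conc_oh + k_o3 * conc_o3)

# Illustrative numbers of roughly the right magnitude for a reactive NMHC
# with a typical daytime [OH] of ~1e6 molecule cm^-3:
tau = nmhc_lifetime(k_oh=1.0e-10, conc_oh=1.0e6, k_o3=1.3e-17, conc_o3=7.0e11)
```

With numbers of this magnitude the OH term dominates and the lifetime comes out on the order of hours, which is why such species show strong diurnal variation.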

  18. 10 CFR 50.67 - Accident source term.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 1 2011-01-01 2011-01-01 false Accident source term. 50.67 Section 50.67 Energy NUCLEAR... Conditions of Licenses and Construction Permits § 50.67 Accident source term. (a) Applicability. The... to January 10, 1997, who seek to revise the current accident source term used in their design...

  19. 10 CFR 50.67 - Accident source term.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 1 2014-01-01 2014-01-01 false Accident source term. 50.67 Section 50.67 Energy NUCLEAR... Conditions of Licenses and Construction Permits § 50.67 Accident source term. (a) Applicability. The... to January 10, 1997, who seek to revise the current accident source term used in their design...

  20. 10 CFR 50.67 - Accident source term.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 1 2012-01-01 2012-01-01 false Accident source term. 50.67 Section 50.67 Energy NUCLEAR... Conditions of Licenses and Construction Permits § 50.67 Accident source term. (a) Applicability. The... to January 10, 1997, who seek to revise the current accident source term used in their design...

  1. 10 CFR 50.67 - Accident source term.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 1 2010-01-01 2010-01-01 false Accident source term. 50.67 Section 50.67 Energy NUCLEAR... Conditions of Licenses and Construction Permits § 50.67 Accident source term. (a) Applicability. The... to January 10, 1997, who seek to revise the current accident source term used in their design...

  2. 10 CFR 50.67 - Accident source term.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 1 2013-01-01 2013-01-01 false Accident source term. 50.67 Section 50.67 Energy NUCLEAR... Conditions of Licenses and Construction Permits § 50.67 Accident source term. (a) Applicability. The... to January 10, 1997, who seek to revise the current accident source term used in their design...

  3. Bayesian Estimation of Prior Variance in Source Term Determination

    NASA Astrophysics Data System (ADS)

    Smidl, Vaclav; Hofman, Radek

    2015-04-01

    The problem of determination of the source term of an atmospheric release is studied. We assume that the observations y are obtained as a linear combination of the source term x and source-receptor sensitivities, which can be written in matrix notation as y = Mx with source-receptor sensitivity matrix M. Direct estimation of the source term vector x is not possible since the system is often ill-conditioned. The solution is thus found by minimization of a cost function with regularization terms. A typical cost function is

        C(x) = (y - Mx)^T R^{-1} (y - Mx) + α x^T D^T D x,    (1)

    where the first term minimizes the error of the measurements with covariance matrix R, and the second term is the regularization with weight α. Various types of regularization arise for different choices of matrix D. For example, Tikhonov regularization arises for D in the form of the identity matrix, and smoothing regularization for D in the form of a tri-diagonal matrix (Laplacian operator). Typically, the form of matrix D is assumed to be known, and the weight α is optimized manually by a trial-and-error procedure. In this contribution, we use the probabilistic formulation of the problem, where the term (α D^T D)^{-1} is interpreted as the covariance matrix of the prior distribution of x. Following the Bayesian approach, we relax the assumption of known α and D and assume that these are unknown and estimated from the data. The general problem is not analytically tractable and approximate estimation techniques have to be used. We present a Variational Bayes solution for two special cases of the prior covariance matrix. First, the structure of D is assumed to be known and only the weight α is estimated. Application of the Variational Bayes method to this case yields an iterative estimation algorithm. In the first step, the usual optimization problem is solved for a given estimate of α. In the next step, the value of α is re-estimated and the procedure returns to the first step. Positivity of the solution is guaranteed
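    The alternating scheme in the abstract (solve for x given α, then re-estimate α) can be sketched numerically as follows. The α update here is a simple moment-style stand-in for the Variational Bayes update, the positivity constraint is omitted, and the function name is illustrative.

```python
import numpy as np

def estimate_source(M, y, R, D, alpha0=1.0, n_iter=25):
    """Minimize C(x) = (y-Mx)^T R^-1 (y-Mx) + alpha x^T D^T D x,
    alternating between solving for x and re-estimating alpha."""
    Rinv = np.linalg.inv(R)
    alpha, n = alpha0, M.shape[1]
    for _ in range(n_iter):
        # Step 1: for fixed alpha, the minimizer solves the normal equations
        A = M.T @ Rinv @ M + alpha * (D.T @ D)
        x = np.linalg.solve(A, M.T @ Rinv @ y)
        # Step 2: re-estimate alpha from the current solution
        # (interpreting (alpha D^T D)^-1 as the prior covariance of x)
        alpha = n / (x @ (D.T @ D) @ x + 1e-12)
    return x, alpha
```

Setting D to the identity gives Tikhonov regularization; a tri-diagonal Laplacian D gives the smoothing regularization mentioned in the abstract.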

  4. Distributed transform coding via source-splitting

    NASA Astrophysics Data System (ADS)

    Yahampath, Pradeepa

    2012-12-01

    Transform coding (TC) is one of the best known practical methods for quantizing high-dimensional vectors. In this article, a practical approach to distributed TC of jointly Gaussian vectors is presented. This approach, referred to as source-split distributed transform coding (SP-DTC), can be used to easily implement two terminal transform codes for any given rate-pair. The main idea is to apply source-splitting using orthogonal-transforms, so that only Wyner-Ziv (WZ) quantizers are required for compression of transform coefficients. This approach however requires optimizing the bit allocation among dependent sets of WZ quantizers. In order to solve this problem, a low-complexity tree-search algorithm based on analytical models for transform coefficient quantization is developed. A rate-distortion (RD) analysis of SP-DTCs for jointly Gaussian sources is presented, which indicates that these codes can significantly outperform the practical alternative of independent TC of each source, whenever there is a strong correlation between the sources. For practical implementation of SP-DTCs, the idea of using conditional entropy constrained (CEC) quantizers followed by Slepian-Wolf coding is explored. Experimental results obtained with SP-DTC designs based on both CEC scalar quantizers and CEC trellis-coded quantizers demonstrate that actual implementations of SP-DTCs can achieve RD performance close to the analytically predicted limits.
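    The bit-allocation problem discussed above also arises in classical (non-distributed) transform coding. A greedy sketch under the high-rate Gaussian model D_i = var_i * 2^(-2 b_i) is shown below; this is the classical building block that the article's tree-search over dependent WZ quantizers generalizes, not the SP-DTC algorithm itself.

```python
import numpy as np

def allocate_bits(variances, total_bits):
    """Greedy integer bit allocation across transform coefficients.

    Under the high-rate Gaussian model D_i = var_i * 4**(-b_i)
    (note 2**(-2b) == 4**(-b)), one extra bit reduces a coefficient's
    distortion by 3/4 of its current value, so each bit should go to
    the coefficient with the largest current distortion."""
    b = np.zeros(len(variances), dtype=int)
    d = np.asarray(variances, dtype=float)  # current per-coefficient distortion
    for _ in range(total_bits):
        i = int(np.argmax(d))  # best marginal return on the next bit
        b[i] += 1
        d[i] /= 4.0
    return b
```

For example, allocate_bits([16, 4, 1], 3) concentrates bits on the high-variance coefficients, mirroring reverse water-filling.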

  5. Calculation of external dose from distributed source

    SciTech Connect

    Kocher, D.C.

    1986-01-01

    This paper discusses a relatively simple calculational method, called the point kernel method (Fo68), for estimating external dose from distributed sources that emit photon or electron radiations. The principles of the point kernel method are emphasized, rather than the presentation of extensive sets of calculations or tables of numerical results. A few calculations are presented for simple source geometries as illustrations of the method, and references and descriptions are provided for other calculations in the literature. This paper also describes exposure situations for which the point kernel method is not appropriate and other, more complex, methods must be used; these methods are not discussed in any detail.
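    The point kernel idea can be sketched as superposing attenuated point-source kernels over a discretized source region. The sketch below uses the bare exponential photon kernel k * A * exp(-mu r) / (4 pi r^2) with no buildup factor; the parameter names and units are illustrative assumptions.

```python
import numpy as np

def point_kernel_dose(source_points, activities, target, mu, k):
    """Dose rate at `target` from a distributed photon source,
    summed over point kernels: k * A * exp(-mu*r) / (4*pi*r**2).

    mu: linear attenuation coefficient of the medium (1/cm)
    k:  dose-rate conversion factor (illustrative units)"""
    total = 0.0
    for p, a in zip(source_points, activities):
        r = np.linalg.norm(np.asarray(target, float) - np.asarray(p, float))
        total += k * a * np.exp(-mu * r) / (4.0 * np.pi * r**2)
    return total
```

A line, slab, or volume source is handled by discretizing it into point sources and summing, which is exactly the superposition the paper's method formalizes.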

  6. Open Source Live Distributions for Computer Forensics

    NASA Astrophysics Data System (ADS)

    Giustini, Giancarlo; Andreolini, Mauro; Colajanni, Michele

    Current distributions of open source forensic software provide digital investigators with a large set of heterogeneous tools. Their use is not always focused on the target and requires high technical expertise. We present a new GNU/Linux live distribution, named CAINE (Computer Aided INvestigative Environment) that contains a collection of tools wrapped up into a user friendly environment. The CAINE forensic framework introduces novel important features, aimed at filling the interoperability gap across different forensic tools. Moreover, it provides a homogeneous graphical interface that drives digital investigators during the acquisition and analysis of electronic evidence, and it offers a semi-automatic mechanism for the creation of the final report.

  7. Sensitivity analysis of distributed volcanic source inversion

    NASA Astrophysics Data System (ADS)

    Cannavo', Flavio; Camacho, Antonio G.; González, Pablo J.; Puglisi, Giuseppe; Fernández, José

    2016-04-01

    A recently proposed algorithm (Camacho et al., 2011) claims to rapidly estimate magmatic sources from surface geodetic data without any a priori assumption about source geometry. The algorithm takes advantage of the fast calculation of analytical models and adds the capability to model free-shape distributed sources. Assuming homogeneous elastic conditions, the approach can determine general geometrical configurations of pressure and/or density sources and/or sliding structures corresponding to prescribed values of anomalous density, pressure and slip. These source bodies are described as aggregations of elemental point sources for pressure, density and slip, and they fit the whole data set (keeping some 3D regularity conditions). Although some examples and applications have already been presented to demonstrate the ability of the algorithm to reconstruct a magma pressure source (e.g. Camacho et al., 2011; Cannavò et al., 2015), a systematic analysis of the sensitivity and reliability of the algorithm is still lacking. In this explorative work we present results from a large statistical test designed to evaluate the advantages and limitations of the methodology by assessing its sensitivity to the free and constrained parameters involved in the inversions. In particular, besides the source parameters, we focused on the ground deformation network topology and noise in the measurements. The proposed analysis can be used for a better interpretation of the algorithm results in real-case applications. Camacho, A. G., González, P. J., Fernández, J. & Berrino, G. (2011) Simultaneous inversion of surface deformation and gravity changes by means of extended bodies with a free geometry: Application to deforming calderas. J. Geophys. Res. 116. Cannavò F., Camacho A.G., González P.J., Mattia M., Puglisi G., Fernández J. (2015) Real Time Tracking of Magmatic Intrusions by means of Ground Deformation Modeling during Volcanic Crises, Scientific Reports, 5 (10970) doi:10.1038/srep

  8. Design parameters and source terms: Volume 3, Source terms: Revision 0

    SciTech Connect

    Not Available

    1987-09-01

    The Design Parameters and Source Terms Document was prepared in accordance with a DOE request to provide data for the environmental impact study to be performed in the future for the Deaf Smith County, Texas site for a nuclear waste repository in salt. This document updates a previous unpublished report to the level of the Site Characterization Plan - Conceptual Design Report (SCP-CDR). The previous unpublished SCC study identifies the data needs for the Environmental Assessment effort for seven possible salt repository sites.

  9. Particle size distribution of indoor aerosol sources

    SciTech Connect

    Shah, K.B.

    1990-10-24

    As concern about Indoor Air Quality (IAQ) has grown in recent years, it has become necessary to determine the nature of particles produced by different indoor aerosol sources and the typical concentrations that these sources tend to produce. These data are important in predicting the dose of particles to people exposed to these sources; they will also enable effective mitigation procedures and help in designing appropriate air cleaners. A state-of-the-art technique, the DMPS (Differential Mobility Particle Sizer) system, is used to determine the particle size distributions of a number of sources. This system employs the electrical mobility characteristics of the particles and is very effective in the 0.01--1.0 {mu}m size range. A modified system that can measure particle sizes down to 3 nm was also used. Experimental results for various aerosol sources are presented in the ensuing chapters. 37 refs., 20 figs., 2 tabs.

  10. Guide to Sources: Term Paper Strategy.

    ERIC Educational Resources Information Center

    White, Lucinda M.

    This two-page guide suggests an eight-step term paper research strategy for students using the Fogler Library at the University of Maine. The student is first guided to encyclopedias for overview articles with bibliographies, then directed to the card catalog; periodical indexes; and indexes for books, journal articles, and newspaper articles.…

  11. Experimental quantum key distribution with source flaws

    NASA Astrophysics Data System (ADS)

    Xu, Feihu; Wei, Kejin; Sajeed, Shihan; Kaiser, Sarah; Sun, Shihai; Tang, Zhiyuan; Qian, Li; Makarov, Vadim; Lo, Hoi-Kwong

    2015-09-01

    Decoy-state quantum key distribution (QKD) is a standard technique in current quantum cryptographic implementations. Unfortunately, existing experiments have two important drawbacks: the state preparation is assumed to be perfect without errors, and the employed security proofs do not fully consider the finite-key effects for general attacks. These two drawbacks mean that existing experiments cannot be guaranteed to be secure in practice. Here, we perform an experiment that shows secure QKD with imperfect state preparations over long distances and achieves rigorous finite-key security bounds for decoy-state QKD against coherent attacks in the universally composable framework. We quantify the source flaws experimentally and demonstrate a QKD implementation that is tolerant to channel loss despite the source flaws. Our implementation considers more real-world problems than most previous experiments, and our theory can be applied to general discrete-variable QKD systems. These features constitute a step towards secure QKD with imperfect devices.

  12. State of the hydrologic source term

    SciTech Connect

    Kersting, A.

    1996-12-01

    The Underground Test Area (UGTA) Operable Unit was defined by the U.S. Department of Energy, Nevada Operations Office, to characterize and potentially remediate groundwaters impacted by nuclear testing at the Nevada Test Site (NTS). Between 1955 and 1992, 828 nuclear devices were detonated underground at the NTS (DOE, 1994). Approximately one third of the nuclear tests were detonated at or below the standing water table, and the remainder were located above the water table in the vadose zone. As a result, the distribution of radionuclides in the subsurface and, in particular, the availability of radionuclides for transport away from individual test cavities are major concerns at the NTS. The approach taken is to carry out field-based studies of both groundwaters and host rocks within the near-field in order to develop a detailed understanding of the present-day concentration and spatial distribution of constituent radionuclides. Understanding the current distribution of contamination within the near-field, and the conditions under and processes by which the radionuclides were transported, makes it possible to predict future transport behavior. The results of these studies will be integrated with archival research, experiments, and geochemical modeling for complete characterization.

  13. CONSTRAINING SOURCE REDSHIFT DISTRIBUTIONS WITH GRAVITATIONAL LENSING

    SciTech Connect

    Wittman, D.; Dawson, W. A.

    2012-09-10

    We introduce a new method for constraining the redshift distribution of a set of galaxies, using weak gravitational lensing shear. Instead of using observed shears and redshifts to constrain cosmological parameters, we ask how well the shears around clusters can constrain the redshifts, assuming fixed cosmological parameters. This provides a check on photometric redshifts, independent of source spectral energy distribution properties and therefore free of confounding factors such as misidentification of spectral breaks. We find that ~40 massive (σ_v = 1200 km s^-1) cluster lenses are sufficient to determine the fraction of sources in each of six coarse redshift bins to ~11%, given weak (20%) priors on the masses of the highest-redshift lenses, tight (5%) priors on the masses of the lowest-redshift lenses, and only modest (20%-50%) priors on calibration and evolution effects. Additional massive lenses drive down uncertainties as N_lens^(-1/2), but the improvement slows as one is forced to use lenses further down the mass function. Future large surveys contain enough clusters to reach 1% precision in the bin fractions if the tight lens-mass priors can be maintained for large samples of lenses. In practice this will be difficult to achieve, but the method may be valuable as a complement to other more precise methods because it is based on different physics and therefore has different systematic errors.
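
The N_lens^(-1/2) scaling stated above can be illustrated with a one-line calculation. The 11%-at-40-lenses figure is taken from the abstract; the function name and the target sample size are illustrative assumptions:

```python
def bin_fraction_uncertainty(sigma_ref=0.11, n_ref=40, n_lens=400):
    # uncertainty on a redshift-bin fraction scales as N_lens**-0.5
    return sigma_ref * (n_ref / n_lens) ** 0.5

# ten times as many comparable lenses shrinks the ~11% uncertainty
# by a factor of sqrt(10), to roughly 3.5%
print(bin_fraction_uncertainty())
```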

  14. Terrestrial sources and distribution of atmospheric sulphur

    PubMed Central

    Lelieveld, J.; Roelofs, G.-J.; Ganzeveld, L.; Feichter, J.; Rodhe, H.

    1997-01-01

    The general circulation model ECHAM has been coupled to a chemistry and sulphur cycle model to study the impact of terrestrial, i.e. mostly anthropogenic, sulphur dioxide (SO2) sources on global distributions of sulphur species in the atmosphere. We briefly address currently available source inventories. It appears that global estimates of natural emissions are associated with uncertainties up to a factor of 2, while anthropogenic emissions have uncertainty ranges of about ±30 per cent. Further, some recent improvements in the model descriptions of multiphase chemistry and deposition processes are presented. Dry deposition is modelled consistently with meteorological processes and surface properties. The results indicate that surface removal of SO2 is less efficient than previously assumed, and that the SO2 lifetime is thus longer. Coupling of the photochemistry and sulphur chemistry schemes in the model improves the treatment of multiphase processes such as oxidant (hydrogen peroxide) supply in aqueous phase SO2 oxidation. The results suggest that SO2 oxidation by ozone (O3) in the aqueous phase is more important than indicated in earlier work. However, it appears that we still overestimate atmospheric SO2 concentrations near the surface in the relatively polluted Northern Hemisphere. On the other hand, we somewhat underestimate sulphate levels in these regions, which suggests that additional heterogeneous reaction mechanisms, e.g. on aerosols, enhance SO2 oxidation.

  15. Stochastic Models for the Distribution of Index Terms.

    ERIC Educational Resources Information Center

    Nelson, Michael J.

    1989-01-01

    Presents a probability model of the occurrence of index terms used to derive discrete distributions which are mixtures of Poisson and negative binomial distributions. These distributions give better fits than the simpler Zipf distribution, have the advantage of being more explanatory, and can incorporate a time parameter if necessary. (25…

  16. Panchromatic spectral energy distributions of Herschel sources

    NASA Astrophysics Data System (ADS)

    Berta, S.; Lutz, D.; Santini, P.; Wuyts, S.; Rosario, D.; Brisbin, D.; Cooray, A.; Franceschini, A.; Gruppioni, C.; Hatziminaoglou, E.; Hwang, H. S.; Le Floc'h, E.; Magnelli, B.; Nordon, R.; Oliver, S.; Page, M. J.; Popesso, P.; Pozzetti, L.; Pozzi, F.; Riguccini, L.; Rodighiero, G.; Roseboom, I.; Scott, D.; Symeonidis, M.; Valtchanov, I.; Viero, M.; Wang, L.

    2013-03-01

    Combining far-infrared Herschel photometry from the PACS Evolutionary Probe (PEP) and Herschel Multi-tiered Extragalactic Survey (HerMES) guaranteed time programs with ancillary datasets in the GOODS-N, GOODS-S, and COSMOS fields, it is possible to sample the 8-500 μm spectral energy distributions (SEDs) of galaxies with at least 7-10 bands. Extending to the UV, optical, and near-infrared, the number of bands increases up to 43. We reproduce the distribution of galaxies in a carefully selected restframe ten colors space, based on this rich dataset, using a superposition of multivariate Gaussian modes. We use this model to classify galaxies and build median SEDs of each class, which are then fitted with a modified version of the magphys code that combines stellar light, emission from dust heated by stars and a possible warm dust contribution heated by an active galactic nucleus (AGN). The color distribution of galaxies in each of the considered fields can be well described with the combination of 6-9 classes, spanning a large range of far- to near-infrared luminosity ratios, as well as different strength of the AGN contribution to bolometric luminosities. The defined Gaussian grouping is used to identify rare or odd sources. The zoology of outliers includes Herschel-detected ellipticals, very blue z ~ 1 Ly-break galaxies, quiescent spirals, and torus-dominated AGN with star formation. Out of these groups and outliers, a new template library is assembled, consisting of 32 SEDs describing the intrinsic scatter in the restframe UV-to-submm colors of infrared galaxies. This library is tested against L(IR) estimates with and without Herschel data included, and compared to eight other popular methods often adopted in the literature. When implementing Herschel photometry, these approaches produce L(IR) values consistent with each other within a median absolute deviation of 10-20%, the scatter being dominated more by fine tuning of the codes, rather than by the choice of

  17. Scoping Analysis of Source Term and Functional Containment Attenuation Factors

    SciTech Connect

    Pete Lowry

    2012-01-01

    In order to meet future regulatory requirements, the Next Generation Nuclear Plant (NGNP) Project must fully establish and validate the mechanistic modular high temperature gas-cooled reactor (HTGR) source term. This is not possible at this stage in the project, as significant uncertainties in the final design remain unresolved. In the interim, however, there is a need to establish an approximate characterization of the source term. The NGNP team developed a simplified parametric model to establish mechanistic source term estimates for a set of proposed HTGR configurations.

  18. Scoping Analysis of Source Term and Functional Containment Attenuation Factors

    SciTech Connect

    Pete Lowry

    2012-10-01

    In order to meet future regulatory requirements, the Next Generation Nuclear Plant (NGNP) Project must fully establish and validate the mechanistic modular high temperature gas-cooled reactor (HTGR) source term. This is not possible at this stage in the project, as significant uncertainties in the final design remain unresolved. In the interim, however, there is a need to establish an approximate characterization of the source term. The NGNP team developed a simplified parametric model to establish mechanistic source term estimates for a set of proposed HTGR configurations.

  19. Scoping Analysis of Source Term and Functional Containment Attenuation Factors

    SciTech Connect

    Pete Lowry

    2012-02-01

    In order to meet future regulatory requirements, the Next Generation Nuclear Plant (NGNP) Project must fully establish and validate the mechanistic modular high temperature gas-cooled reactor (HTGR) source term. This is not possible at this stage in the project, as significant uncertainties in the final design remain unresolved. In the interim, however, there is a need to establish an approximate characterization of the source term. The NGNP team developed a simplified parametric model to establish mechanistic source term estimates for a set of proposed HTGR configurations.

  20. Estimating source terms for far field dredge plume modelling.

    PubMed

    Becker, Johannes; van Eekelen, Erik; van Wiechen, Joost; de Lange, William; Damsma, Thijs; Smolders, Tijmen; van Koningsveld, Mark

    2015-02-01

    Far field modelling of dredging induced suspended sediment plumes is important while assessing the environmental aspects of dredging. Realistic estimation of source terms, which define the suspended sediment input for far field dredge plume modelling, is key to any assessment. This paper describes a generic method for source term estimation as it is used in practice in the dredging industry. It is based on soil characteristics and dredge production figures, combined with empirically derived, equipment and condition specific 'source term fractions'. A source term fraction relates the suspended fine sediment that is available for dispersion to the amount of fine sediment that is present in the soil and the way it is dredged. The use of source term fractions helps to circumvent modelling of complicated near field processes, at least initially, enabling quick assessments. When further detail is required and extra information is available, the applicability of the source term fractions can/should be evaluated by characterisation monitoring and/or near field modelling. An example of a fictitious yet realistic dredging project demonstrates how two different work methods can trigger two distinctly different types of stress to the environmental system in terms of sediment concentration and duration.
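
At its core, the method described reduces to a product of a dredge production figure, the soil's fines content, and an empirical source term fraction. A minimal sketch; the function name, argument names, units, and numerical values are illustrative assumptions, not taken from the paper:

```python
def overflow_source_term(production_m3_per_h, dry_density_t_per_m3,
                         fines_fraction, source_term_fraction):
    """Suspended fine sediment available for far-field dispersion (t/h).

    source_term_fraction: empirically derived, equipment- and
    condition-specific fraction of the dredged fines that stays
    in suspension after overflow release.
    """
    fines_flux = production_m3_per_h * dry_density_t_per_m3 * fines_fraction
    return fines_flux * source_term_fraction

# e.g. 2000 m3/h production, 1.6 t/m3 dry density, 15% fines,
# 10% source term fraction -> roughly 48 t/h of suspended fines
print(overflow_source_term(2000, 1.6, 0.15, 0.10))
```

The resulting tonnage per hour is what would be injected as a source term into the far-field hydrodynamic model, bypassing explicit near-field simulation.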

  1. Incorporation of Melcor source term predictions into probabilistic risk assessments

    SciTech Connect

    Summers, R.M.; Helton, J.C.; Leigh, C.D.

    1989-01-01

    The MELCOR code has been developed as an advanced computational tool for performing primary source term analyses that will incorporate current phenomenological understanding into probabilistic risk assessments (PRAs). Although MELCOR is reasonably fast running, it is not feasible to perform a MELCOR calculation for each of the thousands of sets of conditions requiring a source term estimate in an integrated PRA. Therefore, the RELTRAC code is being developed to generate secondary source term estimates for use directly in a PRA for the LaSalle nuclear power plant by appropriately manipulating results from calculations by a primary source term code such as MELCOR. This paper describes the MELCOR and RELTRAC models and the manner in which MELCOR calculations are used to provide input to the RELTRAC model. 26 refs., 2 figs., 1 tab.

  2. Source Term Model for an Array of Vortex Generator Vanes

    NASA Technical Reports Server (NTRS)

    Buning, P. G. (Technical Monitor); Waithe, Kenrick A.

    2003-01-01

    A source term model was developed for numerical simulations of an array of vortex generators. The source term models the side force created by the vortex generator being modeled. The model is obtained by introducing a side force to the momentum and energy equations that can adjust its strength automatically based on the local flow. The model was tested and calibrated by comparing data from numerical simulations and experiments of a single low-profile vortex generator vane, which is only a fraction of the boundary layer thickness, over a flat plate. The source term model allowed a grid reduction of about seventy percent when compared with the numerical simulations performed on a fully gridded vortex generator without adversely affecting the development and capture of the vortex created. The source term model was able to predict the shape and size of the streamwise vorticity and velocity contours very well when compared with both numerical simulations and experimental data.
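
The idea of a side force whose strength adjusts with the local flow can be sketched as a per-cell body force. This is a simplified, BAY-model-style illustration, not the paper's model; the constant c_vg, the geometry arguments, and the incidence approximation are assumptions:

```python
import numpy as np

def vg_source_term(rho, u, n_vane, c_vg=10.0, vane_area=1.0, cell_vol=1.0):
    """Side-force source term for one grid cell intersected by a vane.

    n_vane: unit normal of the vane surface; the force acts along it,
    with magnitude set by the local velocity u (self-adjusting strength).
    """
    u = np.asarray(u, dtype=float)
    u_mag = np.linalg.norm(u)
    if u_mag == 0.0:
        return np.zeros(3)
    incidence = np.dot(u / u_mag, n_vane)      # ~ sin(local flow angle)
    f = (c_vg * (vane_area / cell_vol) * rho
         * u_mag**2 * incidence * np.asarray(n_vane, dtype=float))
    return f  # added to the momentum RHS; u.f feeds the energy equation
```

A vane aligned with the flow (zero incidence) then contributes no force, and the force grows with the local dynamic pressure, which is the self-adjusting behaviour the abstract describes.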

  3. Revised accident source terms for light-water reactors

    SciTech Connect

    Soffer, L.

    1995-02-01

    This paper presents revised accident source terms for light-water reactors incorporating the severe accident research insights gained in this area over the last 15 years. Current LWR accident source terms used for licensing date from 1962 and are contained in Regulatory Guides 1.3 and 1.4. These specify that 100% of the core inventory of noble gases and 25% of the iodine fission products are assumed to be instantaneously available for release from the containment. The chemical form of the iodine fission products is also assumed to be predominantly elemental iodine. These assumptions have strongly affected present nuclear air cleaning requirements by emphasizing rapid actuation of spray systems and filtration systems optimized to retain elemental iodine. A proposed revision of reactor accident source terms and some implications for nuclear air cleaning requirements was presented at the 22nd DOE/NRC Nuclear Air Cleaning Conference. A draft report was issued by the NRC for comment in July 1992. Extensive comments were received, with the most significant comments involving (a) release fractions for both volatile and non-volatile species in the early in-vessel release phase, (b) gap release fractions of the noble gases, iodine and cesium, and (c) the timing and duration for the release phases. The final source term report is expected to be issued in late 1994. Although the revised source terms are intended primarily for future plants, current nuclear power plants may request use of revised accident source term insights as well in licensing. This paper emphasizes additional information obtained since the 22nd Conference, including studies on fission product removal mechanisms, results obtained from improved severe accident code calculations and resolution of major comments, and their impact upon the revised accident source terms. Revised accident source terms for both BWRs and PWRs are presented.

  4. Source term identification in atmospheric modelling via sparse optimization

    NASA Astrophysics Data System (ADS)

    Adam, Lukas; Branda, Martin; Hamburger, Thomas

    2015-04-01

    Inverse modelling plays an important role in identifying the amount of harmful substances released into the atmosphere during major incidents such as power plant accidents or volcano eruptions. Another possible application of inverse modelling lies in monitoring CO2 emission limits, where only observations at certain places are available and the task is to estimate the total releases at given locations. This gives rise to minimizing the discrepancy between the observations and the model predictions. There are two standard ways of solving such problems. In the first one, this discrepancy is regularized by adding additional terms. Such terms may include Tikhonov regularization, distance from a priori information or a smoothing term. The resulting, usually quadratic, problem is then solved via standard optimization solvers. The second approach assumes that the error term has a (normal) distribution and makes use of Bayesian modelling to identify the source term. Instead of following the above-mentioned approaches, we utilize techniques from the field of compressive sensing. Such techniques look for the sparsest solution (the solution with the smallest number of nonzeros) of a linear system, where a maximal allowed error term may be added to this system. Even though this is a well-developed field with many possible solution techniques, most of them do not consider even the simplest constraints which are naturally present in atmospheric modelling. One such example is the nonnegativity of release amounts. We believe that the concept of a sparse solution is natural in both the problem of identifying the source location and that of the time process of the source release. In the first case, it is usually assumed that there are only a few release points and the task is to find them. In the second case, the time window is usually much longer than the duration of the actual release. In both cases, the optimal solution should contain a large amount of zeros, giving rise to the
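
A sparse, nonnegative reconstruction of the kind argued for above can be sketched with projected ISTA on a toy linear system. The dimensions, regularization weight, and data below are invented for illustration; in practice the matrix A would come from the atmospheric transport model, mapping candidate releases to observations:

```python
import numpy as np

def sparse_nonneg_release(A, obs, lam=0.001, iters=5000):
    """min 0.5*||A x - obs||^2 + lam*||x||_1  subject to  x >= 0."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1/Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - obs)
        x = np.maximum(x - step * (grad + lam), 0.0)  # shrink, then project
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60))      # 30 observations, 60 candidate releases
x_true = np.zeros(60)
x_true[[5, 17, 42]] = [2.0, 1.0, 3.0]  # only a few actual release events
obs = A @ x_true
x_hat = sparse_nonneg_release(A, obs)
```

The projection step enforces nonnegativity of release amounts for free, which is exactly the constraint the abstract notes is missing from most off-the-shelf compressive sensing solvers.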

  5. Source term and radiological consequences of the Chernobyl accident

    SciTech Connect

    Mourad, R.; Snell, V.

    1987-01-01

    The objective of this work is to assess the source term and to evaluate the maximum hypothetical individual doses in European countries (including the Soviet Union) from the Chernobyl accident through the analyses of measurements of meteorological data, radiation fields, and airborne and deposited activity in these countries. Applying this information to deduce the source term involves a reversal of the techniques of nuclear accident analysis, which estimate the off-site consequences of postulated accidents. In this study the authors predict the quantities of radionuclides that, if released at Chernobyl and following the calculated trajectories, would explain and unify the observed radiation levels and radionuclide concentrations as measured by European countries and the Soviet Union. The simulation uses the PEAR microcomputer program following the methodology described in Canadian Standards Association standard N288.2. The study was performed before the Soviets published their estimate of the source term and the two results are compared.

  6. Problem solving as intelligent retrieval from distributed knowledge sources

    NASA Technical Reports Server (NTRS)

    Chen, Zhengxin

    1987-01-01

    Distributed computing in intelligent systems is investigated from a different perspective. From the viewpoint that problem solving can be viewed as intelligent knowledge retrieval, the use of distributed knowledge sources in intelligent systems is proposed.

  7. A parameter model for dredge plume sediment source terms

    NASA Astrophysics Data System (ADS)

    Decrop, Boudewijn; De Mulder, Tom; Toorman, Erik; Sas, Marc

    2017-01-01

    The presented model allows for fast simulations of the near-field behaviour of overflow dredging plumes. Overflow dredging plumes occur when dredging vessels employ a dropshaft release system to discharge the excess sea water, which is pumped into the trailing suction hopper dredger (TSHD) along with the dredged sediments. The fine sediment fraction in the loaded water-sediment mixture does not fully settle before it reaches the overflow shaft. By consequence, the released water contains a fine sediment fraction of time-varying concentration. The sediment grain size is in the range of clays, silt and fine sand; the sediment concentration varies roughly between 10 and 200 g/l in most cases, peaking at even higher value with short duration. In order to assess the environmental impact of the increased turbidity caused by this release, plume dispersion predictions are often carried out. These predictions are usually executed with a large-scale model covering a complete coastal zone, bay, or estuary. A source term of fine sediments is implemented in the hydrodynamic model to simulate the fine sediment dispersion. The large-scale model mesh resolution and governing equations, however, do not allow to simulate the near-field plume behaviour in the vicinity of the ship hull and propellers. Moreover, in the near-field, these plumes are under influence of buoyancy forces and air bubbles. The initial distribution of sediments is therefore unknown and has to be based on crude assumptions at present. The initial (vertical) distribution of the sediment source is indeed of great influence on the final far-field plume dispersion results. In order to study this near-field behaviour, a highly-detailed computational fluid dynamics (CFD) model was developed. This model contains a realistic geometry of a dredging vessel, buoyancy effects, air bubbles and propeller action, and was validated earlier by comparing with field measurements. A CFD model requires significant simulation times

  8. Flowsheets and source terms for radioactive waste projections

    SciTech Connect

    Forsberg, C.W.

    1985-03-01

    Flowsheets and source terms used to generate radioactive waste projections in the Integrated Data Base (IDB) Program are given. Volumes of each waste type generated per unit product throughput have been determined for the following facilities: uranium mining, UF/sub 6/ conversion, uranium enrichment, fuel fabrication, boiling-water reactors (BWRs), pressurized-water reactors (PWRs), and fuel reprocessing. Source terms for DOE/defense wastes have been developed. Expected wastes from typical decommissioning operations for each facility type have been determined. All wastes are also characterized by isotopic composition at time of generation and by general chemical composition. 70 references, 21 figures, 53 tables.

  9. Spallation Neutron Source Accident Terms for Environmental Impact Statement Input

    SciTech Connect

    Devore, J.R.; Harrington, R.M.

    1998-08-01

    This report is about accidents with the potential to release radioactive materials into the environment surrounding the Spallation Neutron Source (SNS). As shown in Chap. 2, the inventories of radioactivity at the SNS are dominated by the target facility. Source terms for a wide range of target facility accidents, from anticipated events to worst-case beyond-design-basis events, are provided in Chaps. 3 and 4. The most important criterion applied to these accident source terms is that they should not underestimate potential release. Therefore, conservative methodology was employed for the release estimates. Although the source terms are very conservative, excessive conservatism has been avoided by basing the releases on physical principles. Since it is envisioned that the SNS facility may eventually (after about 10 years) be expanded and modified to support a 4-MW proton beam operational capability, the source terms estimated in this report are applicable to a 4-MW operating proton beam power unless otherwise specified. This is bounding with regard to the 1-MW facility that will be built and operated initially. See further discussion below in Sect. 1.2.

  10. Common Calibration Source for Monitoring Long-term Ozone Trends

    NASA Technical Reports Server (NTRS)

    Kowalewski, Matthew

    2004-01-01

    Accurate long-term satellite measurements are crucial for monitoring the recovery of the ozone layer. The slow pace of the recovery and limited lifetimes of satellite monitoring instruments demand that datasets from multiple observation systems be combined to provide the long-term accuracy needed. A fundamental component of accurately monitoring long-term trends is the calibration of these various instruments. NASA's Radiometric Calibration and Development Facility at the Goddard Space Flight Center has provided resources to minimize calibration biases between multiple instruments through the use of a common calibration source and standardized procedures traceable to national standards. The Facility's 50 cm barium sulfate integrating sphere has been used as a common calibration source for both US and international satellite instruments, including the Total Ozone Mapping Spectrometer (TOMS), Solar Backscatter Ultraviolet 2 (SBUV/2) instruments, Shuttle SBUV (SSBUV), Ozone Mapping Instrument (OMI), Global Ozone Monitoring Experiment (GOME) (ESA), Scanning Imaging SpectroMeter for Atmospheric ChartographY (SCIAMACHY) (ESA), and others. We will discuss the advantages of using a common calibration source and its effects on long-term ozone data sets. In addition, sphere calibration results from various instruments will be presented to demonstrate the accuracy of the long-term characterization of the source itself.

  11. BWR ASSEMBLY SOURCE TERMS FOR WASTE PACKAGE DESIGN

    SciTech Connect

    T.L. Lotz

    1997-02-15

    This analysis is prepared by the Mined Geologic Disposal System (MGDS) Waste Package Development Department (WPDD) to provide boiling water reactor (BWR) assembly radiation source term data for use during Waste Package (WP) design. The BWR assembly radiation source terms are to be used for evaluation of radiolysis effects at the WP surface, and for personnel shielding requirements during assembly or WP handling operations. The objectives of this evaluation are to generate BWR assembly radiation source terms that bound selected groupings of BWR assemblies, with regard to assembly average burnup and cooling time, which comprise the anticipated MGDS BWR commercial spent nuclear fuel (SNF) waste stream. The source term data is to be provided in a form which can easily be utilized in subsequent shielding/radiation dose calculations. Since these calculations may also be used for Total System Performance Assessment (TSPA), with appropriate justification provided by TSPA, or radionuclide release rate analysis, the grams of each element and additional cooling times out to 25 years will also be calculated and the data included in the output files.

  12. 14 CFR 23.1310 - Power source capacity and distribution.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Power source capacity and distribution. 23.1310 Section 23.1310 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF... Equipment General § 23.1310 Power source capacity and distribution. (a) Each installation whose...

  13. 14 CFR 23.1310 - Power source capacity and distribution.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Power source capacity and distribution. 23.1310 Section 23.1310 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF... Equipment General § 23.1310 Power source capacity and distribution. (a) Each installation whose...

  14. 14 CFR 23.1310 - Power source capacity and distribution.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Power source capacity and distribution. 23.1310 Section 23.1310 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF... Equipment General § 23.1310 Power source capacity and distribution. (a) Each installation whose...

  15. Distributed source coding using chaos-based cryptosystem

    NASA Astrophysics Data System (ADS)

    Zhou, Junwei; Wong, Kwok-Wo; Chen, Jianyong

    2012-12-01

    A distributed source coding scheme is proposed by incorporating a chaos-based cryptosystem in Slepian-Wolf coding. The punctured codeword generated by the chaos-based cryptosystem results in ambiguity at the decoder side. This ambiguity can be removed by maximum a posteriori decoding with the help of side information. In this way, encryption and source coding are performed simultaneously. This leads to a simple encoder structure with low implementation complexity. Simulation results show that the encoder complexity is lower than that of existing distributed source coding schemes. Moreover, at small block size, the proposed scheme has a performance comparable to existing distributed source coding schemes.
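
The Slepian-Wolf idea underlying such schemes, transmitting only a syndrome and letting side information resolve the ambiguity, can be illustrated without the chaos-based cryptosystem itself. A toy sketch with a (7,4) Hamming code, assuming the side information differs from the source word in at most one bit:

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code; column j is the binary
# representation of j+1, so a nonzero syndrome directly locates one bit.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def encode(x):
    # send only the 3-bit syndrome instead of the 7-bit word (compression)
    return H @ x % 2

def decode(syndrome, y):
    # syndrome of (x XOR y) locates the single bit where y differs from x
    s = (syndrome + H @ y) % 2
    x_hat = y.copy()
    pos = s[0] + 2 * s[1] + 4 * s[2]
    if pos:
        x_hat[pos - 1] ^= 1
    return x_hat

x = np.array([1, 0, 1, 1, 0, 0, 1])   # source word at the encoder
y = x.copy(); y[4] ^= 1               # decoder's side information, one bit off
x_hat = decode(encode(x), y)
print(np.array_equal(x_hat, x))       # source recovered from 3 sent bits
```

Seven source bits cost only three transmitted bits here; the MAP decoding step in the paper plays the role of this syndrome-based correction.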

  16. Open source portal to distributed image repositories

    NASA Astrophysics Data System (ADS)

    Tao, Wenchao; Ratib, Osman M.; Kho, Hwa; Hsu, Yung-Chao; Wang, Cun; Lee, Cason; McCoy, J. M.

    2004-04-01

    In a large institution's PACS, patient data may often reside in multiple separate systems. While most systems tend to be DICOM compliant, none of them offer the flexibility of seamless integration of multiple DICOM sources through a single access point. We developed a generic portal system with a web-based interactive front-end as well as an application programming interface (API) that allows both web users and client applications to query and retrieve image data from multiple DICOM sources. A set of software tools was developed to allow accessing several DICOM archives through a single point of access. An interactive web-based front-end allows users to search image data seamlessly from the different archives and display the results or route the image data to another DICOM compliant destination. An XML-based API allows other software programs to easily benefit from this portal to query and retrieve image data as well. Various techniques are employed to minimize the performance overhead inherent in the DICOM protocol. The system is integrated with a hospital-wide HIPAA-compliant authentication and auditing service that provides centralized management of access to patient medical records. The system is provided under open source free licensing and developed using open-source components (Apache Tomcat for web server, MySQL for database, OJB for object/relational data mapping, etc.). The portal paradigm offers a convenient and effective solution for accessing multiple image data sources in a given healthcare enterprise and can easily be extended to multi-institution use through appropriate security and encryption mechanisms.

  17. IMPACTS OF SOURCE TERM HETEROGENEITIES ON WATER PATHWAY DOSE.

    SciTech Connect

    SULLIVAN, T.; GUSKOV, A.; POSKAS, P.; RUPERTI, N.; HANUSIK, V.; ET AL.

    2004-09-15

    and for which a solution has to be found in terms of long-term disposal. Together with their casing and packaging, they are one form of heterogeneous waste; many other forms of waste with heterogeneous properties exist. They may arise in very small quantities and with very specific characteristics in the case of small producers, or in larger streams with standard characteristics in others. This wide variety of waste gives rise to three main levels of waste heterogeneity: (1) hot spots (e.g. disused sealed sources); (2) large items inside a package (e.g. metal components); and (3) very large items to be disposed of directly in the disposal unit (e.g. irradiated pipes, vessels). Safety assessments generally assume a certain level of waste homogeneity in most of the existing or proposed disposal facilities. There is a need to evaluate the appropriateness of such an assumption and its influence on the results of safety assessment. This need is especially acute in the case of sealed sources. There are many cases where storage conditions are poor, or where improper management has led to a radiological accident, some with significant or detrimental impacts. Disposal in a near surface disposal facility has been used in the past for some disused sealed sources. This option is currently in use for other sealed sources, or is being studied for the rest of them. The regulatory framework differs greatly between countries. In some countries, large quantities of disused sealed sources have been disposed of without any restriction; in others their disposal is forbidden by law. In any case, evaluation of the acceptability of disposal of disused sealed sources in a near surface disposal facility is of utmost importance.

  18. Sourcing and Global Distribution of Medical Supplies

    DTIC Science & Technology

    2014-01-01

    and ships it to OCONUS treatment facilities and operational units. Procuring and distributing medical materiel carries a large annual cost: DoD... Chapter 55, Medical and Dental Care, January 7, 2011. U.S. Code, Title 21—Food and Drugs, Chapter 9—Federal Food, Drug, and Cosmetic Act... the United States Army under Contract No. W74V8H-06-C-0001. Preface: Concerned with rising Department of Defense (DoD) costs, the Office of

  19. Dose distributions in regions containing beta sources: Uniform spherical source regions in homogeneous media

    SciTech Connect

    Werner, B.L.; Rahman, M.; Salk, W.N. ); Kwok, C.S. )

    1991-11-01

    The energy-averaged transport model for the calculation of dose rate distributions is applied to uniform, spherical source distributions in homogeneous media for radii smaller than the electron range. The model agrees well with Monte Carlo based calculations for source distributions with radii greater than half the continuous slowing down approximation range. The dose rate distributions can be written in the medical internal radiation dose (MIRD) formalism.

  20. PST - a new method for estimating PSA source terms

    SciTech Connect

    1996-12-31

    The Parametric Source Term (PST) code has been developed for estimating radioactivity release fractions. The PST code is a framework of equations based on activity transport between volumes in the release pathway from the core, through the vessel, through the containment, and to the environment. The code is fast-running because it obtains exact solutions to differential equations for activity transport in each volume for each time interval. It has successfully been applied to estimate source terms for the six Pressurized Water Reactors (PWRs) that were selected for initial consideration in the Accident Sequence Precursor (ASP) Level 2 model development effort. This paper describes the PST code and the manner in which it has been applied to estimate radioactivity release fractions for the six PWRs initially considered in the ASP Program.
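
    The per-volume exact-solution approach can be sketched for first-order activity transport: over any interval with a constant source rate q and removal constant k, dA/dt = q - k*A has a closed-form solution, so no numerical integration error accumulates within an interval. The two-volume chain and the rate constants below are illustrative assumptions, not PST parameters; the inter-volume transfer is held piecewise-constant over each interval.

```python
import math

def step_exact(a0, q, k, dt):
    """Exact solution of dA/dt = q - k*A over one interval of length dt."""
    if k == 0.0:
        return a0 + q * dt
    a_inf = q / k                        # equilibrium activity for this interval
    return a_inf + (a0 - a_inf) * math.exp(-k * dt)

# Chain two volumes (e.g. vessel -> containment) with illustrative rates.
k_vessel, k_cont = 0.5, 0.1              # removal constants per hour (assumed)
a_vessel, a_cont = 100.0, 0.0            # initial activities (arbitrary units)
for _ in range(24):                      # 24 one-hour intervals
    outflow = k_vessel * a_vessel        # source rate into the next volume,
                                         # frozen at its start-of-interval value
    a_vessel = step_exact(a_vessel, 0.0, k_vessel, 1.0)
    a_cont = step_exact(a_cont, outflow, k_cont, 1.0)
print(a_vessel)                          # decays as 100 * exp(-0.5 * 24)
```

    Stepping with the exact exponential per interval is what makes this style of code fast-running: the time step can match the reporting interval instead of being constrained by stiffness.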

  1. A nuclear source term analysis for spacecraft power systems

    SciTech Connect

    McCulloch, W.H.

    1998-12-01

    All US space missions involving on board nuclear material must be approved by the Office of the President. To be approved the mission and the hardware systems must undergo evaluations of the associated nuclear health and safety risk. One part of these evaluations is the characterization of the source terms, i.e., the estimate of the amount, physical form, and location of nuclear material, which might be released into the environment in the event of credible accidents. This paper presents a brief overview of the source term analysis by the Interagency Nuclear Safety Review Panel for the NASA Cassini Space Mission launched in October 1997. Included is a description of the Energy Interaction Model, an innovative approach to the analysis of potential releases from high velocity impacts resulting from launch aborts and reentries.

  2. Basic repository source term and data sheet report: Lavender Canyon

    SciTech Connect

    Not Available

    1988-01-01

    This report is one of a series describing studies undertaken in support of the US Department of Energy Civilian Radioactive Waste Management (CRWM) Program. This study contains the derivation of values for environmental source terms and resources consumed for a CRWM repository. Estimates include heavy construction equipment; support equipment; shaft-sinking equipment; transportation equipment; and consumption of fuel, water, electricity, and natural gas. Data are presented for construction and operation at an assumed site in Lavender Canyon, Utah. 3 refs; 6 tabs.

  3. The source and distribution of Galactic positrons

    NASA Technical Reports Server (NTRS)

    Purcell, W. R.; Dixon, D. D.; Cheng, L.-X.; Leventhal, M.; Kinzer, R. L.; Kurfess, J. D.; Skibo, J. G.; Smith, D. M.; Tueller, J.

    1997-01-01

    The Oriented Scintillation Spectrometer Experiment (OSSE) observations of the Galactic plane and the Galactic center region were combined with observations acquired with other instruments in order to produce a map of the Galactic 511 keV annihilation radiation. Two mapping techniques were applied to the data: the maximum entropy method and the basis pursuit inversion method. The resulting maps are qualitatively similar and show evidence for a central bulge and a weak Galactic disk component. The weak disk is consistent with that expected from positrons produced by the decay of radioactive Al-26 in the interstellar medium. Both maps suggest an enhanced region of emission near l = -4 deg, b = 7 deg, with a flux of approximately 50 percent of that of the bulge. The existence of this emission appears significant, although the location is not well determined. The source of this enhanced emission is presently unknown.

  4. Tetrodotoxin: Chemistry, Toxicity, Source, Distribution and Detection

    PubMed Central

    Bane, Vaishali; Lehane, Mary; Dikshit, Madhurima; O’Riordan, Alan; Furey, Ambrose

    2014-01-01

    Tetrodotoxin (TTX) is a naturally occurring toxin that has been responsible for human intoxications and fatalities. Its usual route of toxicity is via the ingestion of contaminated puffer fish which are a culinary delicacy, especially in Japan. TTX was believed to be confined to regions of South East Asia, but recent studies have demonstrated that the toxin has spread to regions in the Pacific and the Mediterranean. There is no known antidote to TTX which is a powerful sodium channel inhibitor. This review aims to collect pertinent information available to date on TTX and its analogues with a special emphasis on the structure, aetiology, distribution, effects and the analytical methods employed for its detection. PMID:24566728

  5. Sources and distributions of dark matter

    SciTech Connect

    Sikivie, P.

    1995-12-31

    In the first section, the author tries to convey a sense of the variety of observational inputs that tell about the existence and the spatial distribution of dark matter in the universe. In the second section, he briefly reviews the four main dark matter candidates, taking note of each candidate's status in the world of particle physics, its production in the early universe, its effect upon large scale structure formation and the means by which it may be detected. Section 3 concerns the energy spectrum of (cold) dark matter particles on earth as may be observed some day in a direct detection experiment. It is a brief account of work done in collaboration with J. Ipser and, more recently, with I. Tkachev and Y. Wang.

  6. Apparent LFE Magnitude-Frequency Distributions and the Tremor Source

    NASA Astrophysics Data System (ADS)

    Rubin, A. M.; Bostock, M. G.

    2015-12-01

    Over a decade since its discovery, it is disconcerting that we know so little about the kinematics of the tremor source. One could say we are hampered by low signal-to-noise ratio, but often the LFE signal is large and the "noise" is just other LFEs, often nearly co-located. Here we exploit this feature to better characterize the tremor source. A quick examination of LFE catalogs shows, unsurprisingly, that detected magnitudes are large when the background tremor amplitude is large. A simple interpretation is that small LFEs are missed when tremor is loud. An unanswered question is whether, in addition, there is a paucity of small LFEs when tremor is loud. Because we have both the LFE Green's function (from stacks) and some minimum bound on the overall LFE rate (from our catalogs), tremor waveforms provide a consistency check on any assumed magnitude-frequency (M-f) distribution. Beneath southern Vancouver Island, the magnitudes of >10^5 LFEs range from about 1.2-2.4 (Bostock et al. 2015). Interpreted in terms of a power-law distribution, the b-value is >5. But missed small events make even this large value only a lower bound. Binning by background tremor amplitude, and assuming a time-invariant M-f distribution, the b-value increases to >7, implying (e.g.) more than 10 million M>1.2 events for every M=2.2 event. Such numbers are inconsistent with the observed modest increase in tremor amplitude with LFE magnitude, as well as with geodetically-allowable slips. Similar considerations apply to exponential and log-normal moment-frequency distributions. Our preliminary interpretation is that when LFE magnitudes are large, the same portion of the fault is producing larger LFEs, rather than a greater rate of LFEs pulled from the same distribution. If correct, this distinguishes LFEs from repeating earthquakes, where larger background fault slip rates lead not to larger earthquakes but to more frequent earthquakes of similar magnitude. 
One possible explanation, that LFEs
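
    The b-value inference discussed above can be illustrated with the standard Aki maximum-likelihood estimator for a Gutenberg-Richter magnitude-frequency distribution, b = log10(e) / (mean(M) - Mc). The synthetic catalog below is a generic sketch with an assumed b = 5 above the completeness magnitude, not the Bostock et al. LFE catalog:

```python
import math
import random

def b_value_mle(mags, m_c):
    """Aki maximum-likelihood b-value for magnitudes at or above completeness m_c."""
    mean_excess = sum(m - m_c for m in mags) / len(mags)
    return math.log10(math.e) / mean_excess

# Synthetic Gutenberg-Richter catalog: magnitude excesses above m_c are
# exponential with rate b * ln(10), mimicking the steep LFE distribution.
random.seed(0)
b_true, m_c = 5.0, 1.2
mags = [m_c + random.expovariate(b_true * math.log(10)) for _ in range(50_000)]
print(b_value_mle(mags, m_c))  # close to 5
```

    At such large b-values the catalog is dominated by events barely above the completeness threshold, which is exactly why missed small detections bias any direct count-based estimate and make the catalog value only a lower bound.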

  7. Actinide Source Term Program, position paper. Revision 1

    SciTech Connect

    Novak, C.F.; Papenguth, H.W.; Crafts, C.C.; Dhooge, N.J.

    1994-11-15

    The Actinide Source Term represents the quantity of actinides that could be mobilized within WIPP brines and could migrate with the brines away from the disposal room vicinity. This document presents the various proposed methods for estimating this source term, with a particular focus on defining these methods and evaluating the defensibility of the models for mobile actinide concentrations. The conclusions reached in this document are: the 92 PA "expert panel" model for mobile actinide concentrations is not defensible; and, although it is extremely conservative, the "inventory limits" model is the only existing defensible model for the actinide source term. The model effort in progress, "chemical modeling of mobile actinide concentrations", supported by a laboratory effort that is also in progress, is designed to provide a reasonable description of the system, be scientifically realistic, and supplant the "inventory limits" model.

  8. Contamination on LDEF: Sources, distribution, and history

    NASA Technical Reports Server (NTRS)

    Pippin, Gary; Crutcher, Russ

    1993-01-01

    An introduction to contamination effects observed on the Long Duration Exposure Facility (LDEF) is presented. The activities reported are part of Boeing's obligation to the LDEF Materials Special Investigation Group. The contamination films and particles had minimal influence on the thermal performance of the LDEF. Some specific areas did have large changes in optical properties. Films also interfered with recession rate determination by reacting with the oxygen or physically shielding underlying material. Generally, contaminant films lessen the measured recession rate relative to 'clean' surfaces. On orbit generation of particles may be an issue for sensitive optics. Deposition on lenses may lead to artifacts on photographic images or cause sensors to respond inappropriately. Particles in the line of sight of sensors can cause stray light to be scattered into sensors. Particles also represent a hazard for mechanisms in that they can physically block and/or increase friction or wear on moving surfaces. LDEF carried a rather complex mixture of samples and support hardware into orbit. The experiments were assembled under a variety of conditions and time constraints and stored for up to five years before launch. The structure itself was so large that it could not be baked after the interior was painted with chemglaze Z-306 polyurethane based black paint. Any analysis of the effects of molecular and particulate contamination must account for a complex array of sources, wide variation in processes over time, and extreme variation in environment from ground to launch to flight. Surface conditions at certain locations on LDEF were established by outgassing of molecular species from particular materials onto adjacent surfaces, followed by alteration of those species due to exposure to atomic oxygen and/or solar radiation.

  9. Short and long term representation of an unfamiliar tone distribution

    PubMed Central

    Diercks, Charlette; Troje, Nikolaus F.; Cuddy, Lola L.

    2016-01-01

    We report on a study conducted to extend our knowledge about the process of gaining a mental representation of music. Several studies, inspired by research on the statistical learning of language, have investigated statistical learning of sequential rules underlying tone sequences. Given that the mental representation of music correlates with distributional properties of music, we tested whether participants are able to abstract distributional information contained in tone sequences to form a mental representation. For this purpose, we created an unfamiliar music genre defined by an underlying tone distribution, to which 40 participants were exposed. Our stimuli allowed us to differentiate between sensitivity to the distributional properties contained in test stimuli and long term representation of the distributional properties of the music genre overall. Using a probe tone paradigm and a two-alternative forced choice discrimination task, we show that listeners are able to abstract distributional properties of music through mere exposure into a long term representation of music. This lends support to the idea that statistical learning is involved in the process of gaining musical knowledge. PMID:27635355

  10. Trace Metal Source Terms in Carbon Sequestration Environments

    SciTech Connect

    Karamalidis, Athanasios; Torres, Sharon G.; Hakala, Jacqueline A.; Shao, Hongbo; Cantrell, Kirk J.; Carroll, Susan A.

    2013-01-01

    Carbon dioxide sequestration in deep saline and depleted oil geologic formations is feasible and promising; however, possible CO2 or CO2-saturated brine leakage to overlying aquifers may pose environmental and health impacts. The purpose of this study was to experimentally define a range of concentrations that can be used as the trace element source term for reservoirs and leakage pathways in risk simulations. Storage source terms for trace metals are needed to evaluate the impact of brines leaking into overlying drinking water aquifers. The trace metal release was measured from cements and sandstones, shales, carbonates, evaporites, and basalts from the Frio, In Salah, Illinois Basin, Decatur, Lower Tuscaloosa, Weyburn-Midale, Bass Islands, and Grand Ronde carbon sequestration geologic formations. Trace metal dissolution was tracked by measuring solution concentrations over time under conditions (e.g., pressures, temperatures, and initial brine compositions) specific to the sequestration projects. Existing metrics for maximum contaminant levels (MCLs) for drinking water as defined by the U.S. Environmental Protection Agency (U.S. EPA) were used to categorize the relative significance of metal concentration changes in storage environments because of the presence of CO2. Results indicate that Cr and Pb released from sandstone reservoir and shale cap rocks exceed the MCLs by an order of magnitude, while Cd and Cu were at or below drinking water thresholds. In carbonate reservoirs As exceeds the MCLs by an order of magnitude, while Cd, Cu, and Pb were at or below drinking water standards. Results from this study can be used as a reasonable estimate of the trace element source term for reservoirs and leakage pathways in risk simulations to further evaluate the impact of leakage on groundwater quality.

  11. Trace Metal Source Terms in Carbon Sequestration Environments

    SciTech Connect

    Karamalidis, Athanasios K; Torres, Sharon G; Hakala, J Alexandra; Shao, Hongbo; Cantrell, Kirk J; Carroll, Susan

    2012-02-05

    Carbon dioxide sequestration in deep saline and depleted oil geologic formations is feasible and promising, however, possible CO₂ or CO₂-saturated brine leakage to overlying aquifers may pose environmental and health impacts. The purpose of this study was to experimentally define trace metal source terms from the reaction of supercritical CO₂, storage reservoir brines, reservoir and cap rocks. Storage reservoir source terms for trace metals are needed to evaluate the impact of brines leaking into overlying drinking water aquifers. The trace metal release was measured from sandstones, shales, carbonates, evaporites, basalts and cements from the Frio, In Salah, Illinois Basin – Decatur, Lower Tuscaloosa, Weyburn-Midale, Bass Islands and Grand Ronde carbon sequestration geologic formations. Trace metal dissolution is tracked by measuring solution concentrations over time under conditions (e.g. pressures, temperatures, and initial brine compositions) specific to the sequestration projects. Existing metrics for Maximum Contaminant Levels (MCLs) for drinking water as defined by the U.S. Environmental Protection Agency (U.S. EPA) were used to categorize the relative significance of metal concentration changes in storage environments due to the presence of CO₂. Results indicate that Cr and Pb released from sandstone reservoir and shale cap rock exceed the MCLs by an order of magnitude while Cd and Cu were at or below drinking water thresholds. In carbonate reservoirs As exceeds the MCLs by an order of magnitude, while Cd, Cu, and Pb were at or below drinking water standards. Results from this study can be used as a reasonable estimate of the reservoir and caprock source term to further evaluate the impact of leakage on groundwater quality.

  12. Trace metal source terms in carbon sequestration environments.

    PubMed

    Karamalidis, Athanasios K; Torres, Sharon G; Hakala, J Alexandra; Shao, Hongbo; Cantrell, Kirk J; Carroll, Susan

    2013-01-02

    Carbon dioxide sequestration in deep saline and depleted oil geologic formations is feasible and promising; however, possible CO(2) or CO(2)-saturated brine leakage to overlying aquifers may pose environmental and health impacts. The purpose of this study was to experimentally define a range of concentrations that can be used as the trace element source term for reservoirs and leakage pathways in risk simulations. Storage source terms for trace metals are needed to evaluate the impact of brines leaking into overlying drinking water aquifers. The trace metal release was measured from cements and sandstones, shales, carbonates, evaporites, and basalts from the Frio, In Salah, Illinois Basin, Decatur, Lower Tuscaloosa, Weyburn-Midale, Bass Islands, and Grand Ronde carbon sequestration geologic formations. Trace metal dissolution was tracked by measuring solution concentrations over time under conditions (e.g., pressures, temperatures, and initial brine compositions) specific to the sequestration projects. Existing metrics for maximum contaminant levels (MCLs) for drinking water as defined by the U.S. Environmental Protection Agency (U.S. EPA) were used to categorize the relative significance of metal concentration changes in storage environments because of the presence of CO(2). Results indicate that Cr and Pb released from sandstone reservoir and shale cap rocks exceed the MCLs by an order of magnitude, while Cd and Cu were at or below drinking water thresholds. In carbonate reservoirs As exceeds the MCLs by an order of magnitude, while Cd, Cu, and Pb were at or below drinking water standards. Results from this study can be used as a reasonable estimate of the trace element source term for reservoirs and leakage pathways in risk simulations to further evaluate the impact of leakage on groundwater quality.
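
    The MCL-based categorization used in these studies reduces to comparing each measured concentration against its regulatory limit. The three limits below are the published U.S. EPA drinking-water MCLs for As, Cd, and total Cr; the sample concentrations are hypothetical placeholders, not measured values from the paper:

```python
# U.S. EPA drinking-water maximum contaminant levels, in mg/L.
MCL = {"As": 0.010, "Cd": 0.005, "Cr": 0.100}

def categorize(conc_mg_per_l, mcl):
    """Classify a measured concentration relative to its MCL."""
    ratio = conc_mg_per_l / mcl
    if ratio >= 10:
        return "exceeds MCL by an order of magnitude or more"
    if ratio > 1:
        return "above MCL"
    return "at or below MCL"

# Hypothetical leakage-scenario concentrations, for illustration only.
samples = {"Cr": 1.2, "Cd": 0.004, "As": 0.15}
for metal, conc in samples.items():
    print(metal, categorize(conc, MCL[metal]))
```

    Expressing results as exceedance ratios rather than raw concentrations is what lets the studies rank metals across reservoirs with very different brine chemistries.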

  13. Methodology for a bounding estimate of activation source-term.

    PubMed

    Culp, Todd

    2013-02-01

    Sandia National Laboratories' Z-Machine is the world's most powerful electrical device, and experiments have been conducted that make it the world's most powerful radiation source. Because Z-Machine is used for research, an assortment of materials can be placed into the machine; these materials can be subjected to a range of nuclear reactions, producing an assortment of activation products. A methodology was developed to provide a systematic approach to evaluate different materials to be introduced into the machine as wire arrays. This methodology is based on experiment specific characteristics, physical characteristics of specific radionuclides, and experience with Z-Machine. This provides a starting point for bounding calculations of radionuclide source-term that can be used for work planning, development of work controls, and evaluating materials for introduction into the machine.

  14. Distributed joint source-channel coding in wireless sensor networks.

    PubMed

    Zhu, Xuqi; Liu, Yu; Zhang, Lin

    2009-01-01

    Sensors in wireless sensor networks are energy-limited and the wireless channel conditions are often poor, so there is an urgent need for a low-complexity coding method with a high compression ratio and noise resistance. This paper reviews the progress made in distributed joint source-channel coding, which can address this issue. The main existing deployments, from theory to practice, of distributed joint source-channel coding over independent channels, multiple access channels and broadcast channels are introduced, respectively. To this end, we also present a practical scheme for compressing multiple correlated sources over independent channels. The simulation results demonstrate the desired efficiency.

  15. Characteristics of releases from TREAT source term experiment STEP-3

    SciTech Connect

    Fink, J.K.; Schlenger, B.J.; Baker, L. Jr.; Ritzman, R.L.

    1987-01-01

    Four in-pile experiments designed to characterize the radiological source term associated with postulated severe light water reactor accidents were performed at the Transient Reactor Test Facility. STEP-3 simulated a high-pressure TMLB' pressurized water reactor accident sequence that includes the extended loss of all ac power and leads to the loss of long-term decay heat removal. In STEP-3, four fuel elements from the Belgonucleaire BR3 reactor were subjected to temperatures and pressures approaching those of a TMLB' accident. A description of the experiment and thermal-hydraulic analysis is reported elsewhere. The aerosols released into the flow stream were collected on coupons, settling plates, and wire impactors. Examination of the collected aerosol deposits was performed using scanning electron microscopy, electron microprobe microanalysis, and secondary ion mass spectroscopy (SIMS), to provide information about the chemical composition and morphology of the release. This paper describes the aerosol deposits and elemental composition of the release.

  16. Development of alternate methods of determining integrated SMR source terms

    SciTech Connect

    Barry, Kenneth

    2014-06-10

    The Nuclear Energy Institute (NEI) Small Modular Reactor (SMR) Licensing Task Force (TF) has been evaluating licensing issues unique and important to iPWRs, ranking these issues, and developing NEI position papers for submittal to the U.S. Nuclear Regulatory Commission (NRC) during the past three years. Papers have been developed and submitted to the NRC in a range of areas including: Price-Anderson Act, NRC annual fees, security, modularity, and staffing. In December, 2012, NEI completed a draft position paper on SMR source terms and participated in an NRC public meeting presenting a summary of this paper, which was subsequently submitted to the NRC. One important conclusion of the source term paper was the evaluation and selection of high importance areas where additional research would have a significant impact on source terms. The highest ranked research area was iPWR containment aerosol natural deposition. The NRC accepts the use of existing aerosol deposition correlations in Regulatory Guide 1.183, but these were developed for large light water reactor (LWR) containments. Application of these correlations to an iPWR design has resulted in greater than a ten-fold reduction of containment airborne aerosol inventory as compared to large LWRs. Development and experimental justification of containment aerosol natural deposition correlations specifically for the unique iPWR containments is expected to result in a large reduction of design basis and beyond-design-basis accident source terms with concomitantly smaller dose to workers and the public. Therefore, NRC acceptance of iPWR containment aerosol natural deposition correlations will directly support the industry’s goal of reducing the Emergency Planning Zone (EPZ) for SMRs. Based on the results in this work, it is clear that thermophoresis is relatively unimportant for iPWRs. Gravitational settling is well understood, and may be the dominant process for a dry environment. Diffusiophoresis and enhanced

  17. Fourth order wave equations with nonlinear strain and source terms

    NASA Astrophysics Data System (ADS)

    Liu, Yacheng; Xu, Runzhang

    2007-07-01

    In this paper we study the initial boundary value problem for fourth order wave equations with nonlinear strain and source terms. First we introduce a family of potential wells and prove the invariance of some sets and the vacuum isolating of solutions. Then we obtain a threshold result for global existence and nonexistence. Finally we discuss the global existence of solutions for the problem with the critical initial conditions I(u0) ≥ 0, E(0) = d. Thus Esquivel-Avila's results are generalized and improved.

  18. Alternative estimate of source distribution in microbial source tracking using posterior probabilities.

    PubMed

    Greenberg, Joshua; Price, Bertram; Ware, Adam

    2010-04-01

    Microbial source tracking (MST) is a procedure used to determine the relative contributions of humans and animals to fecal microbial contamination of surface waters in a given watershed. Studies of MST methodology have focused on optimizing sampling, laboratory, and statistical analysis methods in order to improve the reliability of determining which sources contributed most to surface water fecal contamination. The usual approach for estimating a source distribution of microbial contamination is to classify water sample microbial isolates into discrete source categories and calculate the proportion of these isolates in each source category. The set of proportions is an estimate of the contaminant source distribution. In this paper we propose and compare an alternative method for estimating a source distribution: averaging posterior probabilities of source identity across isolates. We conducted a Monte Carlo simulation covering a wide variety of watershed scenarios to compare the two methods. The results show that averaging source posterior probabilities across isolates leads to more accurate source distribution estimates than proportions based on classification.
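
    The two estimators compared in the abstract can be written side by side: hard classification followed by counting, versus averaging the posterior probabilities directly. The posterior matrix below is an illustrative stand-in for real isolate data, not results from the study:

```python
import numpy as np

# Rows: water-sample isolates; columns: P(source | isolate) for (human, animal).
posteriors = np.array([
    [0.90, 0.10],
    [0.60, 0.40],
    [0.55, 0.45],
    [0.40, 0.60],
])

# Classification estimate: assign each isolate to its most probable source,
# then report the proportion of isolates in each source category.
labels = posteriors.argmax(axis=1)
classification_est = np.bincount(labels, minlength=2) / len(labels)

# Alternative estimate: average the posterior probabilities across isolates.
posterior_avg_est = posteriors.mean(axis=0)

print(classification_est)  # hard counts discard per-isolate uncertainty
print(posterior_avg_est)   # retains the near-50/50 ambiguity of the middle isolates
```

    The averaging estimator keeps the information in weakly classified isolates, which is the intuition behind its better accuracy in the paper's Monte Carlo comparison.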

  19. Long-Term Stability of the NIST Standard Ultrasonic Source.

    PubMed

    Fick, Steven E

    2008-01-01

    The National Institute of Standards and Technology (NIST) Standard Ultrasonic Source (SUS) is a system comprising a transducer capable of output power levels up to 1 W at multiple frequencies between 1 MHz and 30 MHz, and an electrical impedance-matching network that allows the system to be driven by a conventional 50 Ω rf (radio-frequency) source. It is designed to allow interlaboratory replication of ultrasonic power levels with high accuracy using inexpensive readily available ancillary equipment. The SUS was offered for sale for 14 years (1985 to 1999). Each system was furnished with data for the set of calibration points (combinations of power level and frequency) specified by the customer. Of the systems that had been ordered with some calibration points in common, three were returned more than once to NIST for recalibration. Another system retained at NIST has been recalibrated periodically since 1984. The collective data for these systems comprise 9 calibration points and 102 measurements spanning a 17 year interval ending in 2001, the last year NIST ultrasonic power measurement services were available to the public. These data have been analyzed to compare variations in output power with frequency, power level, and time elapsed since the first calibration. The results verify the claim, made in the instruction sheet furnished with every SUS, that "long-term drift, if any, in the calibration of NIST Standard Sources is insignificant compared to the uncertainties associated with a single measurement of ultrasonic power by any method available at NIST."

  20. The distribution of Infrared point sources in nearby elliptical galaxies

    NASA Astrophysics Data System (ADS)

    Gogoi, Rupjyoti; Misra, Ranjeev; Puthiyaveettil, Shalima

    Infrared point sources in nearby early-type galaxies are often counterparts of sources in other wavebands such as optical and X-rays. In particular, the IR counterpart of an X-ray source may be due to a globular cluster hosting the X-ray source or could be associated directly with the binary, providing crucial information regarding its environment. In general, the IR sources would be from globular clusters and their IR colors would provide insight into their stellar composition. However, many of the IR sources may be background objects and it is important to identify them or at least quantify the level of background contamination. Archival Spitzer IRAC images provide a unique opportunity to study these sources in nearby ellipticals and in particular to estimate the distributions of their IR luminosity, color and distance from the center. We will present the results of such an analysis for three nearby galaxies. We have also estimated the background contamination using several blank fields. Our preliminary results suggest that IR colors can be effectively used to differentiate between background sources and sources in the galaxy, and that the distributions of sources are markedly different for different elliptical galaxies.

  1. Near term climate projections for invasive species distributions

    USGS Publications Warehouse

    Jarnevich, C.S.; Stohlgren, T.J.

    2009-01-01

    Climate change and invasive species pose important conservation issues separately, and should be examined together. We used existing long-term climate datasets for the US to project potential climate change into the future at a finer spatial and temporal resolution than the climate change scenarios generally available. These fine-scale projections, along with new species distribution modeling techniques to forecast the potential extent of invasive species, can provide useful information to aid conservation and invasive species management efforts. We created habitat suitability maps for Pueraria montana (kudzu) under current climatic conditions and potential average conditions up to 30 years in the future. We examined how the potential distribution of this species will be affected by changing climate, and the management implications associated with these changes. Our models indicated that P. montana may increase its distribution particularly in the Northeast with climate change and may decrease in other areas. © 2008 Springer Science+Business Media B.V.

  2. Bayesian estimation of a source term of radiation release with approximately known nuclide ratios

    NASA Astrophysics Data System (ADS)

    Tichý, Ondřej; Šmídl, Václav; Hofman, Radek

    2016-04-01

    We are concerned with estimation of a source term in the case of an accidental release from a known location, e.g. a power plant. Usually, the source term of an accidental release of radiation comprises a mixture of nuclides. The gamma dose rate measurements do not provide direct information on the source term composition. However, physical properties of the respective nuclides (deposition properties, decay half-life) can be used when uncertain information on nuclide ratios is available, e.g. from the known reactor inventory. The proposed method is based on a linear inverse model where the observation vector y arises as a linear combination y = Mx of a source-receptor-sensitivity (SRS) matrix M and the source term x. The task is to estimate the unknown source term x. The problem is ill-conditioned and regularization is needed to obtain a reasonable solution. In this contribution, we assume that the nuclide ratios of the release are known with some degree of uncertainty. This knowledge is used to form the prior covariance matrix of the source term x. Due to uncertainty in the ratios, the diagonal elements of the covariance matrix are considered to be unknown. Positivity of the source term estimate is guaranteed by using a multivariate truncated Gaussian distribution. Following the Bayesian approach, we estimate all parameters of the model from the data, so that y, M, and the known ratios are the only inputs of the method. Since exact inference of the model is intractable, we follow the Variational Bayes method, yielding an iterative algorithm for estimation of all model parameters. Performance of the method is studied on a simulated 6-hour power plant release in which 3 nuclides are released and 2 nuclide ratios are approximately known. A comparison with a method using unknown nuclide ratios is given to demonstrate the usefulness of the proposed approach.
This research is supported by EEA/Norwegian Financial Mechanism under project MSMT-28477/2014 Source-Term Determination of Radionuclide Releases
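
    The linear inverse model above, y = Mx with a positivity constraint and a prior on x, can be sketched with a simple projected-gradient solver. This is a minimal illustrative stand-in, not the authors' Variational Bayes algorithm: the prior variances, dimensions, and function name are assumptions.

```python
import numpy as np

def estimate_source_term(M, y, prior_var, n_iter=5000):
    """Nonnegative ridge estimate of x in y ~ M x.

    prior_var: diagonal of an assumed prior covariance of x (larger
    variance -> weaker pull toward zero), a crude stand-in for the
    nuclide-ratio prior described in the abstract. The projection step
    enforces x >= 0, mimicking the truncated-Gaussian positivity.
    """
    M = np.asarray(M, float)
    y = np.asarray(y, float)
    reg = 1.0 / np.asarray(prior_var, float)      # per-element precision penalty
    step = 1.0 / (np.linalg.norm(M, 2) ** 2 + reg.max())  # safe step size
    x = np.zeros(M.shape[1])
    for _ in range(n_iter):
        grad = M.T @ (M @ x - y) + reg * x        # gradient of the penalized LS objective
        x = np.maximum(x - step * grad, 0.0)      # project onto x >= 0
    return x
```

    With a well-conditioned SRS matrix this recovers a nonnegative source term close to the regularized least-squares solution; the real method additionally infers the prior variances themselves from the data.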

  3. A comparison of world-wide uses of severe reactor accident source terms

    SciTech Connect

    Ang, M.L.; Frid, W.; Kersting, E.J.; Friederichs, H.G.; Lee, R.Y.; Meyer-Heine, A.; Powers, D.A.; Soda, K.; Sweet, D.

    1994-09-01

    The definitions of source terms to reactor containments and source terms to the environment are discussed. A comparison is made between the TID-14844 example source term and the alternative source term described in NUREG-1465. Comparisons of these source terms to the containments and those used in France, Germany, Japan, Sweden, and the United Kingdom are made. Source terms to the environment calculated in NUREG-1500 and WASH-1400 are discussed. Again, these source terms are compared to those now being used in France, Germany, Japan, Sweden, and the United Kingdom. It is concluded that source terms to the containment suggested in NUREG-1465 are not greatly more conservative than those used in other countries. Technical bases for the source terms are similar. The regulatory use of the current understanding of radionuclide behavior varies among countries.

  4. Accident source terms for light-water nuclear power plants using high-burnup or MOX fuel.

    SciTech Connect

    Salay, Michael; Gauntt, Randall O.; Lee, Richard Y.; Powers, Dana Auburn; Leonard, Mark Thomas

    2011-01-01

    Representative accident source terms patterned after the NUREG-1465 Source Term have been developed for high burnup fuel in BWRs and PWRs and for MOX fuel in a PWR with an ice-condenser containment. These source terms have been derived using nonparametric order statistics to develop distributions for the timing of radionuclide release during four accident phases and for release fractions of nine chemical classes of radionuclides as calculated with the MELCOR 1.8.5 accident analysis computer code. The accident phases are those defined in the NUREG-1465 Source Term - gap release, in-vessel release, ex-vessel release, and late in-vessel release. Important differences between the accident source terms derived here and the NUREG-1465 Source Term are not attributable to either fuel burnup or use of MOX fuel. Rather, differences among the source terms are due predominantly to improved understanding of the physics of core meltdown accidents. Heat losses from the degrading reactor core prolong the process of in-vessel release of radionuclides. Improved understanding of the chemistries of tellurium and cesium under reactor accident conditions changes the predicted behavior of these radioactive elements relative to what was assumed in the derivation of the NUREG-1465 Source Term. An additional radionuclide chemical class has been defined to account for release of cesium as cesium molybdate, which enhances molybdenum release relative to other metallic fission products.

  5. Continuous-variable quantum key distribution with Gaussian source noise

    SciTech Connect

    Shen Yujie; Peng Xiang; Yang Jian; Guo Hong

    2011-05-15

    Source noise affects the security of continuous-variable quantum key distribution (CV QKD) and is difficult to analyze. We propose a model to characterize Gaussian source noise through introducing a neutral party (Fred) who induces the noise with a general unitary transformation. Without knowing Fred's exact state, we derive the security bounds for both reverse and direct reconciliations and show that the bound for reverse reconciliation is tight.

  6. Tank waste source term inventory validation. Volume 1. Letter report

    SciTech Connect

    Brevick, C.H.; Gaddis, L.A.; Johnson, E.D.

    1995-04-28

    The sample data for selection of 11 radionuclides and 24 chemical analytes were extracted from six separate sample data sets, were arranged in a tabular format and were plotted on scatter plots for all of the 149 single-shell tanks, the 24 double-shell tanks and the four aging waste tanks. The solid and liquid sample data were placed in separate tables and plots. The sample data and plots were compiled from the following data sets: characterization raw sample data, recent core samples, D. Braun data base, Wastren (Van Vleet) data base, TRAC and HTCE inventories. This document is Volume I of the Letter Report entitled Tank Waste Source Term Inventory Validation.

  7. Source-term evaluations from recent core-melt experiments

    SciTech Connect

    Parker, G.W.; Creek, G.E.; Sutton, A.L. Jr.

    1985-01-01

    Predicted consequences of hypothetical severe reactor accidents resulting in core meltdown appear to be too conservatively projected because of the simplistic concepts often assumed for the intricate and highly variable phenomena involved. Recent demonstration work on a modest scale (1-kg) has already revealed significant variations in the mode and temperature for clad failure, in the rates of formation of zirconium alloys, in the nature of the UO₂-ZrO₂ eutectic mixtures, and in aerosol generation rates. The current series of core-melt demonstration experiments (at the 10-kg scale) seem to confirm that an increase in size of the meltdown mass will lead to an even further reduction in the amount of vaporized components. Source terms that are based on older release evaluations could be up to an order of magnitude too large. 6 refs., 6 figs., 2 tabs.

  8. 5.0. Depletion, activation, and spent fuel source terms

    SciTech Connect

    Wieselquist, William A.

    2016-04-01

    SCALE’s general depletion, activation, and spent fuel source terms analysis capabilities are enabled through a family of modules related to the main ORIGEN depletion/irradiation/decay solver. The nuclide tracking in ORIGEN is based on the principle of explicitly modeling all available nuclides and transitions in the current fundamental nuclear data for decay and neutron-induced transmutation and relies on fundamental cross section and decay data in ENDF/B-VII. Cross section data for materials and reaction processes not available in ENDF/B-VII are obtained from the JEFF-3.0/A special purpose European activation library containing 774 materials and 23 reaction channels with 12,617 neutron-induced reactions below 20 MeV. Resonance cross section corrections in the resolved and unresolved range are performed using a continuous-energy treatment by data modules in SCALE. All nuclear decay data, fission product yields, and gamma-ray emission data are developed from ENDF/B-VII.1 evaluations. Decay data include all ground and metastable state nuclides with half-lives greater than 1 millisecond. Using these data sources, ORIGEN currently tracks 174 actinides, 1149 fission products, and 974 activation products. The purpose of this chapter is to describe the stand-alone capabilities and underlying methodology of ORIGEN—as opposed to the integrated depletion capability it provides in all coupled neutron transport/depletion sequences in SCALE, as described in other chapters.
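
    ORIGEN's nuclide tracking ultimately amounts to solving coupled decay/transmutation equations. For a simple linear decay chain with distinct decay constants this reduces to the classical Bateman solution; the sketch below is that textbook formula for illustration only, not the SCALE/ORIGEN solver, which handles thousands of nuclides and neutron-induced transitions.

```python
import math

def bateman(n0, lambdas, t):
    """Atoms of each member of a linear decay chain A1 -> A2 -> ... -> An
    at time t, starting from n0 atoms of A1 only. Requires distinct decay
    constants (per unit time); a stable end member may use lambda = 0."""
    n = len(lambdas)
    out = []
    for i in range(n):
        prod_lam = 1.0
        for j in range(i):                 # product of upstream decay constants
            prod_lam *= lambdas[j]
        s = 0.0
        for j in range(i + 1):             # sum of exponentials with partial-fraction weights
            denom = 1.0
            for k in range(i + 1):
                if k != j:
                    denom *= (lambdas[k] - lambdas[j])
            s += math.exp(-lambdas[j] * t) / denom
        out.append(n0 * prod_lam * s)
    return out
```

    A quick sanity check is conservation: with a stable end member, the chain total stays equal to the initial inventory at every time.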

  9. Method for image reconstruction of moving radionuclide source distribution

    DOEpatents

    Stolin, Alexander V.; McKisson, John E.; Lee, Seung Joon; Smith, Mark Frederick

    2012-12-18

    A method for image reconstruction of moving radionuclide distributions. Its particular embodiment is for single photon emission computed tomography (SPECT) imaging of awake animals, though its techniques are general enough to be applied to other moving radionuclide distributions as well. The invention eliminates motion and blurring artifacts for image reconstructions of moving source distributions. This opens new avenues in the area of small animal brain imaging with radiotracers, which can now be performed without the perturbing influences of anesthesia or physical restraint on the biological system.

  10. Depositional controls, distribution, and effectiveness of world's petroleum source rocks

    SciTech Connect

    Klemme, H.D.; Ulmishek, G.F.

    1989-03-01

    Six stratigraphic intervals representing one-third of Phanerozoic time contain source rocks that have provided more than 90% of the world's discovered oil and gas reserves (in barrels of oil equivalent). The six intervals include (1) Silurian (generated 9% of the world's reserves); (2) Upper Devonian-Tournaisian (8% of reserves); (3) Pennsylvanian-Lower Permian (8% of reserves); (4) Upper Jurassic (25% of reserves); (5) middle Cretaceous (29% of reserves); and (6) Oligocene-Miocene (12.5% of reserves). This uneven distribution of source rocks in time has no immediately obvious cyclicity, nor are the intervals exactly repeatable in the commonality of factors that controlled the formation of source rocks. In this study, source rocks of the six intervals have been mapped worldwide together with oil and gas reserves generated by these rocks. Analysis of the maps shows that the main factors affecting deposition of these source rocks and their spatial distribution and effectiveness in generating hydrocarbon reserves are geologic age, global and regional tectonics, paleogeography, climate, and biologic evolution. The effect of each of the factors on the geologic setting and quality of source rocks has been analyzed. Compilation of data on maturation time for these source rocks demonstrated that the majority of discovered oil and gas is very young: more than 80% of the world's oil and gas reserves have been generated since Aptian time, and nearly half of the world's hydrocarbons have been generated and trapped since the Oligocene.

  11. Long-term staff scheduling with regular temporal distribution.

    PubMed

    Carrasco, Rafael C

    2010-11-01

    Although optimal staff scheduling often requires elaborate computational methods, those cases which are not highly constrained can be efficiently solved using simpler approaches. This paper describes how a simple procedure, combining random and greedy strategies with heuristics, has been successfully applied in a Spanish hospital to assign guard shifts to the physicians in a department. In this case, the employees prefer that their guard duties are regularly distributed in time. The workload distribution must also satisfy some constraints: in particular, the distribution of duties among the staff must be uniform when a number of tasks and shift types (including some infrequent and aperiodic types, such as those scheduled during long weekends) are considered. Furthermore, the composition of teams should be varied, in the sense that no particular pairing should dominate the assignments. The procedure proposed is able to find suitable solutions when the number of employees available for every task is not small compared to the number required at every shift. The software is distributed under the terms of the GNU General Public License.
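
    A greedy assignment of the kind described (uniform workload, duties spread regularly in time) can be sketched as follows. The tie-breaking rule and data layout are illustrative assumptions, not the paper's exact procedure, which also mixes in randomization and handles multiple tasks and shift types.

```python
def assign_shifts(shifts, staff):
    """Greedy schedule: give each shift (in chronological order) to the
    person with the fewest duties so far, breaking ties in favor of
    whoever has rested longest. `shifts` is a list of day indices."""
    counts = {p: 0 for p in staff}       # duties assigned so far
    last = {p: -10**9 for p in staff}    # day of each person's last duty
    schedule = {}
    for day in sorted(shifts):
        pick = min(staff, key=lambda p: (counts[p], -(day - last[p])))
        schedule[day] = pick
        counts[pick] += 1
        last[pick] = day
    return schedule
```

    For unconstrained inputs this degenerates to a round-robin, which is exactly the regular temporal distribution the staff prefer; real constraints (eligibility, team composition) would add filters to the candidate set.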

  12. Distributed Joint Source-Channel Coding in Wireless Sensor Networks

    PubMed Central

    Zhu, Xuqi; Liu, Yu; Zhang, Lin

    2009-01-01

    Sensors in wireless sensor networks are energy-limited and operate over unreliable wireless channels, so there is an urgent need for a low-complexity coding method with a high compression ratio and resistance to channel noise. This paper reviews the progress made in distributed joint source-channel coding, which can address this issue. The main existing deployments, from theory to practice, of distributed joint source-channel coding over independent channels, multiple access channels and broadcast channels are introduced, respectively. To this end, we also present a practical scheme for compressing multiple correlated sources over independent channels. The simulation results demonstrate the desired efficiency. PMID:22408560

  13. Coarse Grid Modeling of Turbine Film Cooling Flows Using Volumetric Source Terms

    NASA Technical Reports Server (NTRS)

    Heidmann, James D.; Hunter, Scott D.

    2001-01-01

    The recent trend in numerical modeling of turbine film cooling flows has been toward higher fidelity grids and more complex geometries. This trend has been enabled by the rapid increase in computing power available to researchers. However, the turbine design community requires fast turnaround time in its design computations, rendering these comprehensive simulations ineffective in the design cycle. The present study describes a methodology for implementing a volumetric source term distribution in a coarse grid calculation that can model the small-scale and three-dimensional effects present in turbine film cooling flows. This model could be implemented in turbine design codes or in multistage turbomachinery codes such as APNASA, where the computational grid size may be larger than the film hole size. Detailed computations of a single row of 35 deg round holes on a flat plate have been obtained for blowing ratios of 0.5, 0.8, and 1.0, and density ratios of 1.0 and 2.0 using a multiblock grid system to resolve the flows on both sides of the plate as well as inside the hole itself. These detailed flow fields were spatially averaged to generate a field of volumetric source terms for each conservative flow variable. Solutions were also obtained using three coarse grids having streamwise and spanwise grid spacings of 3d, 1d, and d/3. These coarse grid solutions used the integrated hole exit mass, momentum, energy, and turbulence quantities from the detailed solutions as volumetric source terms. It is shown that a uniform source term addition over a distance from the wall on the order of the hole diameter is able to predict adiabatic film effectiveness better than a near-wall source term model, while strictly enforcing correct values of integrated boundary layer quantities.
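
    The spatial-averaging step described above, where detailed-solution quantities are integrated into coarse-cell volumetric source terms, is in essence a conservative block sum over fine cells. A toy 2D version, with made-up array shapes rather than the study's actual flow variables:

```python
import numpy as np

def coarsen_sources(fine, factor, cell_volume):
    """Collapse a fine-grid volumetric source field into coarse-cell
    source terms by block summation and volume weighting, preserving the
    integrated total. `fine` is a 2D per-cell rate; `factor` is the
    coarsening ratio per axis (must divide both dimensions)."""
    ny, nx = fine.shape
    blocks = fine.reshape(ny // factor, factor, nx // factor, factor)
    return blocks.sum(axis=(1, 3)) * cell_volume   # integrated source per coarse cell
```

    The key property, matching the abstract's emphasis on enforcing correct integrated quantities, is that the coarse field carries exactly the same total source as the fine one.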

  14. New Source Term Model for the RESRAD-OFFSITE Code Version 3

    SciTech Connect

    Yu, Charley; Gnanapragasam, Emmanuel; Cheng, Jing-Jy; Kamboj, Sunita; Chen, Shih-Yew

    2013-06-01

    This report documents the new source term model developed and implemented in Version 3 of the RESRAD-OFFSITE code. This new source term model includes: (1) "first order release with transport" option, in which the release of the radionuclide is proportional to the inventory in the primary contamination and the user-specified leach rate is the proportionality constant, (2) "equilibrium desorption release" option, in which the user specifies the distribution coefficient which quantifies the partitioning of the radionuclide between the solid and aqueous phases, and (3) "uniform release" option, in which the radionuclides are released from a constant fraction of the initially contaminated material during each time interval and the user specifies the duration over which the radionuclides are released.
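
    The "first order release with transport" option has a standard closed-form solution when an inventory is depleted simultaneously by leaching and radioactive decay. The sketch below illustrates that textbook kinetics; parameter names are assumptions, and this is not the RESRAD-OFFSITE implementation itself.

```python
import math

def first_order_release(inventory0, leach_rate, decay_const, t):
    """First-order release sketch: release rate is proportional to the
    remaining inventory, with the leach rate as the proportionality
    constant, while decay depletes the inventory in parallel.
    Returns (remaining inventory, cumulative amount released) at time t."""
    k = leach_rate + decay_const                     # total removal constant
    inventory = inventory0 * math.exp(-k * t)
    released = inventory0 * (leach_rate / k) * (1.0 - math.exp(-k * t))
    return inventory, released
```

    With no decay, inventory plus cumulative release balances exactly; with decay, the released fraction asymptotes to leach_rate / (leach_rate + decay_const).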

  15. Do forests represent a long-term source of contaminated particulate matter in the Fukushima Prefecture?

    PubMed

    Laceby, J Patrick; Huon, Sylvain; Onda, Yuichi; Vaury, Veronique; Evrard, Olivier

    2016-12-01

    The Fukushima Daiichi Nuclear Power Plant (FDNPP) accident resulted in radiocesium fallout contaminating coastal catchments of the Fukushima Prefecture. As the decontamination effort progresses, the potential downstream migration of radiocesium-contaminated particulate matter from forests, which cover over 65% of the most contaminated region, requires investigation. Carbon and nitrogen elemental concentrations and stable isotope ratios are thus used to model the relative contributions of forest, cultivated and subsoil sources to deposited particulate matter in three contaminated coastal catchments. Samples were taken from the main identified sources: cultivated (n = 28), forest (n = 46), and subsoils (n = 25). Deposited particulate matter (n = 82) was sampled during four fieldwork campaigns from November 2012 to November 2014. A distribution modelling approach quantified relative source contributions with multiple combinations of element parameters (carbon only, nitrogen only, and four parameters) for two particle size fractions (<63 μm and <2 mm). Although there was significant particle size enrichment for the particulate matter parameters, these differences only resulted in a 6% (SD 3%) mean difference in relative source contributions. Further, the three different modelling approaches only resulted in a 4% (SD 3%) difference between relative source contributions. For each particulate matter sample, six models (i.e. <63 μm and <2 mm from the three modelling approaches) were used to incorporate a broader definition of potential uncertainty into model results. Forest sources were modelled to contribute 17% (SD 10%) of particulate matter, indicating that they represent a long-term potential source of radiocesium-contaminated material in fallout-impacted catchments. Subsoils contributed 45% (SD 26%) of particulate matter and cultivated sources contributed 38% (SD 19%). The reservoir of radiocesium in forested landscapes in the Fukushima region thus represents a potential long-term source of contaminated particulate matter for downstream environments.
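
    The un-mixing idea behind such source apportionment, where tracer concentrations in deposited sediment are a linear combination of source concentrations and the source fractions sum to one, can be sketched as a small least-squares problem. The tracer values below are invented for illustration; the actual study models parameter distributions, not single point values.

```python
import numpy as np

def source_contributions(source_props, mixture_props):
    """Solve for the fractional contribution of each source to a mixture,
    assuming tracers mix linearly and fractions sum to one.
    source_props: (n_tracers, n_sources) tracer concentrations per source;
    mixture_props: (n_tracers,) tracer concentrations of the mixture."""
    A = np.vstack([source_props, np.ones(source_props.shape[1])])  # append sum-to-one row
    b = np.append(mixture_props, 1.0)
    frac, *_ = np.linalg.lstsq(A, b, rcond=None)
    return frac
```

    With two tracers (e.g. C and N) and three sources the augmented system is square, so distinct source signatures yield a unique set of fractions.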

  16. Production, Distribution, and Applications of Californium-252 Neutron Sources

    SciTech Connect

    Balo, P.A.; Knauer, J.B.; Martin, R.C.

    1999-10-03

    The radioisotope ²⁵²Cf is routinely encapsulated into compact, portable, intense neutron sources with a 2.6-year half-life. A source the size of a person's little finger can emit up to 10¹¹ neutrons/s. Californium-252 is used commercially as a reliable, cost-effective neutron source for prompt gamma neutron activation analysis (PGNAA) of coal, cement, and minerals, as well as for detection and identification of explosives, land mines, and unexploded military ordnance. Other uses are neutron radiography, nuclear waste assays, reactor start-up sources, calibration standards, and cancer therapy. The inherent safety of source encapsulations is demonstrated by 30 years of experience and by U.S. Bureau of Mines tests of source survivability during explosions. The production and distribution center for the U.S. Department of Energy (DOE) Californium Program is the Radiochemical Engineering Development Center (REDC) at Oak Ridge National Laboratory (ORNL). DOE sells ²⁵²Cf to commercial reencapsulators domestically and internationally.

  17. Production, distribution and applications of californium-252 neutron sources.

    PubMed

    Martin, R C; Knauer, J B; Balo, P A

    2000-01-01

    The radioisotope 252Cf is routinely encapsulated into compact, portable, intense neutron sources with a 2.6-yr half-life. A source the size of a person's little finger can emit up to 10¹¹ neutrons s⁻¹. Californium-252 is used commercially as a reliable, cost-effective neutron source for prompt gamma neutron activation analysis (PGNAA) of coal, cement and minerals, as well as for detection and identification of explosives, land mines and unexploded military ordnance. Other uses are neutron radiography, nuclear waste assays, reactor start-up sources, calibration standards and cancer therapy. The inherent safety of source encapsulations is demonstrated by 30 yr of experience and by US Bureau of Mines tests of source survivability during explosions. The production and distribution center for the US Department of Energy (DOE) Californium Program is the Radiochemical Engineering Development Center (REDC) at Oak Ridge National Laboratory (ORNL). DOE sells 252Cf to commercial reencapsulators domestically and internationally. Sealed 252Cf sources are also available for loan to agencies and subcontractors of the US government and to universities for educational, research and medical applications. The REDC has established the Californium User Facility (CUF) for Neutron Science to make its large inventory of 252Cf sources available to researchers for irradiations inside uncontaminated hot cells. Experiments at the CUF include a land mine detection system, neutron damage testing of solid-state detectors, irradiation of human cancer cells for boron neutron capture therapy experiments and irradiation of rice to induce genetic mutations.

  18. Influence of the source distribution on the age distribution of galactic cosmic rays

    NASA Technical Reports Server (NTRS)

    Lerche, I.; Schlickeiser, R.

    1985-01-01

    The age distribution of galactic cosmic rays in the diffusion approximation is calculated. The influence of the scale height of the spatial source distribution on the mean age of particles arriving at the solar system is discussed. The broader the source distribution with respect to the galactic plane, the longer the mean age. This result provides a natural explanation for the shorter mean age of secondary cosmic rays compared to primary cosmic rays necessary for the understanding of the observed secondary/primary ratio.

  19. Security of quantum key distribution with light sources that are not independently and identically distributed

    NASA Astrophysics Data System (ADS)

    Nagamatsu, Yuichi; Mizutani, Akihiro; Ikuta, Rikizo; Yamamoto, Takashi; Imoto, Nobuyuki; Tamaki, Kiyoshi

    2016-04-01

    Although quantum key distribution (QKD) is theoretically secure, there is a gap between the theory and practice. In fact, real-life QKD may not be secure because component devices in QKD systems may deviate from the theoretical models assumed in security proofs. To solve this problem, it is necessary to construct the security proof under realistic assumptions on the source and measurement unit. In this paper, we prove the security of a QKD protocol under practical assumptions on the source that accommodate fluctuation of the phase and intensity modulations. As long as our assumptions hold, it does not matter at all how the phase and intensity are distributed, or whether their distributions over different pulses are independent and identically distributed. Our work shows that practical sources can be safely employed in QKD experiments.

  20. Effect of distributed heat source on low frequency thermoacoustic instabilities

    NASA Astrophysics Data System (ADS)

    Li, Lei; Yang, Lijun; Sun, Xiaofeng

    2013-06-01

    The problem of thermoacoustic instabilities in the combustors of modern air-breathing engines has become a topic of concern; these instabilities occur as a result of unstable coupling between heat release fluctuations and acoustic perturbations. A three-dimensional thermoacoustic model including a distributed non-uniform heat source and non-uniform flow is developed based on the domain decomposition spectral method. The importance of the distributed heat source for combustion instabilities of longitudinal modes is analyzed with the help of a simplified geometrical configuration of the combustor. The results show that the longitudinal distribution of the heat source has a crucial effect on instabilities. In addition, the effect of a circumferentially non-uniform heat source and non-uniform flow on longitudinal instabilities is also investigated. The influence of circumferential non-uniformity can become significant for the lowest-frequency instabilities; in particular, the oscillation frequency and growth rate are both evidently affected by temperature non-uniformity and time delay non-uniformity.

  1. Source terms for plutonium aerosolization from nuclear weapon accidents

    SciTech Connect

    Stephens, D.R.

    1995-07-01

    The source term literature was reviewed to estimate aerosolized and respirable release fractions for accidents involving plutonium in high-explosive (HE) detonation and in fuel fires. For HE detonation, all estimates are based on the total amount of Pu. For fuel fires, all estimates are based on the amount of Pu oxidized. I based my estimates for HE detonation primarily upon the results from the Roller Coaster experiment. For hydrocarbon fuel fire oxidation of plutonium, I based lower bound values on laboratory experiments which represent accident scenarios with very little turbulence and updraft of a fire. Expected values for aerosolization were obtained from the Vixen A field tests, which represent a realistic case for modest turbulence and updraft, and for respirable fractions from some laboratory experiments involving large samples of Pu. Upper bound estimates for credible accidents are based on experiments involving combustion of molten plutonium droplets. In May of 1991 the DOE Pilot Safety Study Program established a group of experts to estimate the fractions of plutonium which would be aerosolized and respirable for certain nuclear weapon accident scenarios.

  2. Verification test calculations for the Source Term Code Package

    SciTech Connect

    Denning, R S; Wooton, R O; Alexander, C A; Curtis, L A; Cybulskis, P; Gieseke, J A; Jordan, H; Lee, K W; Nicolosi, S L

    1986-07-01

    The purpose of this report is to demonstrate the reasonableness of the Source Term Code Package (STCP) results. Hand calculations have been performed spanning a wide variety of phenomena within the context of a single accident sequence, a loss of all ac power with late containment failure, in the Peach Bottom (BWR) plant, and compared with STCP results. The report identifies some of the limitations of the hand calculation effort. The processes involved in a core meltdown accident are complex and coupled. Hand calculations by their nature must deal with gross simplifications of these processes. Their greatest strength is as an indicator that a computer code contains an error, for example that it doesn't satisfy basic conservation laws, rather than in showing the analysis accurately represents reality. Hand calculations are an important element of verification but they do not satisfy the need for code validation. The code validation program for the STCP is a separate effort. In general the hand calculation results show that models used in the STCP codes (e.g., MARCH, TRAP-MELT, VANESA) obey basic conservation laws and produce reasonable results. The degree of agreement and significance of the comparisons differ among the models evaluated. 20 figs., 26 tabs.

  3. Challenges in defining a radiologic and hydrologic source term for underground nuclear test centers, Nevada Test Site, Nye County, Nevada

    SciTech Connect

    Smith, D.K.

    1995-06-01

    The compilation of a radionuclide inventory for long-lived radioactive contaminants residual from nuclear testing provides a partial measure of the radiologic source term at the Nevada Test Site. The radiologic source term also includes potentially mobile short-lived radionuclides excluded from the inventory. The radiologic source term for tritium is known with accuracy and is equivalent to the hydrologic source term within the saturated zone. Definition of the total hydrologic source term for fission and activation products that have high activities for decades following underground testing involves knowledge and assumptions which are presently unavailable. Systematic investigation of the behavior of fission products, activation products and actinides under saturated or partially saturated conditions is imperative to define a representative total hydrologic source term. This is particularly important given the heterogeneous distribution of radionuclides within testing centers. Data quality objectives which emphasize a combination of measurements and credible estimates of the hydrologic source term are a priority for near-field investigations at the Nevada Test Site.

  4. Distributed policy based access to networked heterogeneous ISR data sources

    NASA Astrophysics Data System (ADS)

    Bent, G.; Vyvyan, D.; Wood, David; Zerfos, Petros; Calo, Seraphin

    2010-04-01

    Within a coalition environment, ad hoc Communities of Interest (CoI's) come together, perhaps for only a short time, with different sensors, sensor platforms, data fusion elements, and networks to conduct a task (or set of tasks) with different coalition members taking different roles. In such a coalition, each organization will have its own inherent restrictions on how it will interact with the others. These are usually stated as a set of policies, including security and privacy policies. The capability that we want to enable for a coalition operation is to provide access to information from any coalition partner in conformance with the policies of all. One of the challenges in supporting such ad-hoc coalition operations is that of providing efficient access to distributed sources of data, where the applications requiring the data do not have knowledge of the location of the data within the network. To address this challenge the International Technology Alliance (ITA) program has been developing the concept of a Dynamic Distributed Federated Database (DDFD), also known as a Gaian Database. This type of database provides a means for accessing data across a network of distributed heterogeneous data sources where access to the information is controlled by a mixture of local and global policies. We describe how a network of disparate ISR elements can be expressed as a DDFD and how this approach enables sensor and other information sources to be discovered autonomously or semi-autonomously and/or combined and fused, subject to formally defined local and global policies.

  5. Long-term variations of muon flux angular distribution

    NASA Astrophysics Data System (ADS)

    Shutenko, V. V.; Astapov, I. I.; Barbashina, N. S.; Dmitrieva, A. N.; Kokoulin, R. P.; Kompaniets, K. G.; Petrukhin, A. A.; Yashin, I. I.

    2013-02-01

    Intensity of the atmospheric muon flux depends on a number of factors: energy spectrum of primary cosmic rays (PCR), heliospheric conditions, state of the magnetosphere and atmosphere of the Earth. The wide-aperture muon hodoscope URAGAN (Moscow, Russia, 55.7° N, 37.7° E, 173 m a.s.l.) makes it possible to investigate not only variations of the intensity of muon flux, but also temporal changes of its angular distribution. For the analysis of angular distribution variations, the vector of local anisotropy is used. The vector of local anisotropy is the sum of individual vectors (directions of the reconstructed muon tracks) normalized to the total number of reconstructed tracks. The vector of local anisotropy and its projections show different sensitivities to parameters of the processes of modulation of PCR in the heliosphere and the Earth's magnetosphere, and the passage of secondary cosmic rays through the terrestrial atmosphere. In the work, results of the analysis of long-term variations of hourly average projections of the vector of local anisotropy obtained from the URAGAN data during experimental series of 2007-2011 are presented.
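
    The local-anisotropy construction described above (the sum of unit vectors along reconstructed muon tracks, normalized by the number of tracks) can be sketched as follows. This is an illustrative sketch only; the function name and the example track set are invented, not taken from the URAGAN analysis.

```python
import numpy as np

def local_anisotropy_vector(zenith, azimuth):
    """Vector of local anisotropy: the sum of unit vectors along each
    reconstructed muon track, normalized by the total number of tracks.
    Angles are in radians; zenith is measured from the vertical."""
    ux = np.sin(zenith) * np.cos(azimuth)
    uy = np.sin(zenith) * np.sin(azimuth)
    uz = np.cos(zenith)
    n = len(zenith)
    return np.array([ux.sum(), uy.sum(), uz.sum()]) / n

# Example: tracks uniformly spread in azimuth at a 30-degree zenith angle.
# The horizontal projections cancel; the vertical one approaches cos(30 deg).
az = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
zen = np.full_like(az, np.pi / 6.0)
a = local_anisotropy_vector(zen, az)
```

    For an isotropic flux the vector shrinks toward zero, so its projections serve as sensitive indicators of directional modulation.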

  6. Multiple sparse volumetric priors for distributed EEG source reconstruction.

    PubMed

    Strobbe, Gregor; van Mierlo, Pieter; De Vos, Maarten; Mijović, Bogdan; Hallez, Hans; Van Huffel, Sabine; López, José David; Vandenberghe, Stefaan

    2014-10-15

    We revisit the multiple sparse priors (MSP) algorithm implemented in the statistical parametric mapping software (SPM) for distributed EEG source reconstruction (Friston et al., 2008). In the present implementation, multiple cortical patches are introduced as source priors based on a dipole source space restricted to a cortical surface mesh. In this note, we present a technique to construct volumetric cortical regions to introduce as source priors by restricting the dipole source space to a segmented gray matter layer and using a region growing approach. This extension makes it possible to reconstruct brain structures besides the cortical surface and facilitates the use of more realistic volumetric head models including more layers, such as cerebrospinal fluid (CSF), compared to the standard 3-layered scalp-skull-brain head models. We illustrated the technique with ERP data and anatomical MR images in 12 subjects. Based on the segmented gray matter for each of the subjects, cortical regions were created and introduced as source priors for MSP-inversion assuming two types of head models: the standard 3-layered scalp-skull-brain head models, and extended 4-layered head models including CSF. We compared these models with the current implementation by assessing the free energy corresponding with each of the reconstructions using Bayesian model selection for group studies. Strong evidence was found in favor of the volumetric MSP approach compared to the MSP approach based on cortical patches for both types of head models. Overall, the strongest evidence was found in favor of the volumetric MSP reconstructions based on the extended head models including CSF. These results were verified by comparing the reconstructed activity. The use of volumetric cortical regions as source priors is a useful complement to the present implementation as it allows more complex head models and volumetric source priors to be introduced in future studies.

  7. New passive decoy-state quantum key distribution with thermal distributed parametric down-conversion source

    NASA Astrophysics Data System (ADS)

    Wei, Jie; Zhang, Chun-Hui; Wang, Qin

    2017-02-01

    We present a new scheme on implementing the passive quantum key distribution with thermal distributed parametric down-conversion source. In this scheme, only one-intensity decoy state is employed, but we can achieve very precise estimation on the single-photon-pulse contribution by utilizing those built-in decoy states. Moreover, we compare the new scheme with other practical methods, i.e., the standard three-intensity decoy-state BB84 protocol using either weak coherent states or parametric down-conversion source. Through numerical simulations, we demonstrate that our new scheme can drastically improve both the secure transmission distance and the key generation rate.

  8. Distribution and Sources of Black Carbon in the Arctic

    NASA Astrophysics Data System (ADS)

    Qi, Ling

    scavenging efficiency. In this dissertation, we relate WBF with temperature and ice mass fraction based on long-term observations in mixed-phase clouds. We find that WBF reduces BC scavenging efficiency globally, with a larger decrease at higher latitude and altitude (from 8% in the tropics to 76% in the Arctic). WBF slows down and reduces wet deposition of BC and leaves more BC in the atmosphere. Higher BCair results in larger dry deposition. The resulting total deposition is lower in mid-latitudes (by 12-34%) and higher in the Arctic (2-29%). Globally, including WBF significantly reduces the discrepancy of BCsnow (by 50%), BCair (by 50%), and washout ratios (by a factor of two to four). The remaining discrepancies in these variables suggest that in-cloud removal is likely still excessive over land. In the last part, we identify sources of surface atmospheric BC in the Arctic in springtime, when radiative forcing is the largest due to the high insolation and surface albedo. We find a large contribution from Asian anthropogenic sources (40-43%) and open biomass burning emissions from forest fires in South Siberia (29-41%). Outside the Arctic front, BC is strongly enhanced by episodic, direct transport events from Asia and Siberia after 12 days of transport. In contrast, in the Arctic front, a large fraction of the Asian contribution is in the form of 'chronic' pollution on a 1-2 month timescale. As such, it is likely that previous studies using 5- or 10-day trajectory analyses strongly underestimated the contribution from Asia to surface BC in the Arctic. Our results point toward an urgent need for better characterization of flaring emissions of BC (e.g. the emission factors, temporal and spatial distribution), extensive measurements of both the dry deposition of BC over snow and ice, and the scavenging efficiency of BC in mixed-phase clouds, particularly over the ocean.
More measurements of 14C are needed to better understand sources of BC (fossil fuel combustion versus biomass

  9. Volatile Organic Compounds: Characteristics, distribution and sources in urban schools

    NASA Astrophysics Data System (ADS)

    Mishra, Nitika; Bartsch, Jennifer; Ayoko, Godwin A.; Salthammer, Tunga; Morawska, Lidia

    2015-04-01

    Long-term exposure to organic pollutants, both inside and outside school buildings, may affect children's health and influence their learning performance. Since children spend a significant amount of time in school, air quality, especially in classrooms, plays a key role in determining the health risks associated with exposure at schools. Within this context, the present study investigated the ambient concentrations of Volatile Organic Compounds (VOCs) in 25 primary schools in Brisbane with the aim to quantify the indoor and outdoor VOC concentrations, identify VOC sources and their contributions, and, based on these, propose mitigation measures to reduce VOC exposure in schools. One of the most important findings is the occurrence of indoor sources, indicated by an I/O ratio >1 in 19 schools. Principal Component Analysis with Varimax rotation was used to identify common sources of VOCs, and source contributions were calculated using an Absolute Principal Component Scores technique. The results showed that 47% of outdoor VOCs were contributed by petrol vehicle exhaust, whereas indoors cleaning products had the highest overall contribution at 41%, followed by air fresheners and art and craft activities. These findings point to the need for a range of basic precautions during the selection, use and storage of cleaning products and materials to reduce the risk from these sources.

  10. The Impact of Source Distribution on Scalar Transport over Forested Hills

    NASA Astrophysics Data System (ADS)

    Ross, Andrew N.; Harman, Ian N.

    2015-08-01

    Numerical simulations of neutral flow over a two-dimensional, isolated, forested ridge are conducted to study the effects of scalar source distribution on scalar concentrations and fluxes over forested hills. Three different constant-flux sources are considered that span a range of idealized but ecologically important source distributions: a source at the ground, one uniformly distributed through the canopy, and one decaying with depth in the canopy. A fourth source type, where the in-canopy source depends on both the wind speed and the difference in concentration between the canopy and a reference concentration on the leaf, designed to mimic deposition, is also considered. The simulations show that the topographically-induced perturbations to the scalar concentration and fluxes are quantitatively dependent on the source distribution. The net impact is a balance of different processes affecting both advection and turbulent mixing, and can be significant even for moderate topography. Sources that have significant input in the deep canopy or at the ground exhibit a larger magnitude advection and turbulent flux-divergence terms in the canopy. The flows have identical velocity fields and so the differences are entirely due to the different tracer concentration fields resulting from the different source distributions. These in-canopy differences lead to larger spatial variations in above-canopy scalar fluxes for sources near the ground compared to cases where the source is predominantly located near the canopy top. Sensitivity tests show that the most significant impacts are often seen near to or slightly downstream of the flow separation or reattachment points within the canopy flow. The qualitative similarities to previous studies using periodic hills suggest that important processes occurring over isolated and periodic hills are not fundamentally different. The work has important implications for the interpretation of flux measurements over forests, even in

  11. Extending Marine Species Distribution Maps Using Non-Traditional Sources

    PubMed Central

    Moretzsohn, Fabio; Gibeaut, James

    2015-01-01

    Abstract Background Traditional sources of species occurrence data such as peer-reviewed journal articles and museum-curated collections are included in species databases after rigorous review by species experts and evaluators. The distribution maps created in this process are an important component of species survival evaluations, and are used to adapt, extend and sometimes contract polygons used in the distribution mapping process. New information During an IUCN Red List Gulf of Mexico Fishes Assessment Workshop held at The Harte Research Institute for Gulf of Mexico Studies, a session included an open discussion on the topic of including other sources of species occurrence data. During the last decade, advances in portable electronic devices and applications enable 'citizen scientists' to record images, location and data about species sightings, and submit that data to larger species databases. These applications typically generate point data. Attendees of the workshop expressed an interest in how that data could be incorporated into existing datasets, how best to ascertain the quality and value of that data, and what other alternate data sources are available. This paper addresses those issues, and provides recommendations to ensure quality data use. PMID:25941453

  12. Diversity, distribution and sources of bacteria in residential kitchens

    PubMed Central

    Flores, Gilberto E.; Bates, Scott T.; Caporaso, J. Gregory; Lauber, Christian L.; Leff, Jonathan W.; Knight, Rob; Fierer, Noah

    2016-01-01

    Summary Bacteria readily colonize kitchen surfaces, and the exchange of microbes between humans and the kitchen environment can impact human health. However, we have a limited understanding of the overall diversity of these communities, how they differ across surfaces, and sources of bacteria to kitchen surfaces. Here we used high-throughput sequencing of the 16S rRNA gene to explore biogeographical patterns of bacteria across >80 surfaces within the kitchens of each of four households. In total, 34 bacterial and two archaeal phyla were identified, with most sequences belonging to the Actinobacteria, Bacteroidetes, Firmicutes and Proteobacteria. Genera known to contain common food-borne pathogens were low in abundance but broadly distributed throughout the kitchens, with different taxa exhibiting distinct distribution patterns. The most diverse communities were associated with infrequently cleaned surfaces such as fans above stoves, refrigerator/freezer door seals, and floors. In contrast, the least diverse communities were observed in and around sinks, which were dominated by biofilm-forming gram-negative lineages. Community composition was influenced by conditions on individual surfaces, usage patterns, and dispersal from source environments. Human skin was the primary source of bacteria across all kitchen surfaces, with contributions from food and faucet water dominating in a few specific locations. This study demonstrates that diverse bacterial communities are widely distributed in residential kitchens and that the composition of these communities is often predictable. These results also illustrate the ease with which human- and food-associated bacteria can be transferred in residential settings to kitchen surfaces. PMID:23171378

  13. Streamlined Genome Sequence Compression using Distributed Source Coding

    PubMed Central

    Wang, Shuang; Jiang, Xiaoqian; Chen, Feng; Cui, Lijuan; Cheng, Samuel

    2014-01-01

    We aim at developing a streamlined genome sequence compression algorithm to support alternative miniaturized sequencing devices, which have limited communication, storage, and computation power. Existing techniques that require a heavy client (encoder side) cannot be applied. To tackle this challenge, we carefully examined distributed source coding theory and developed a customized reference-based genome compression protocol to meet the low-complexity need at the client side. Based on the variation between source and reference, our protocol will adaptively pick either syndrome coding or hash coding to compress subsequences of changing code length. Our experimental results showed promising performance of the proposed method when compared with the state-of-the-art algorithm (GRS). PMID:25520552
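
    The adaptive choice between syndrome coding and hash coding, driven by the variation between source and reference, can be illustrated with a toy selector. All names and parameters below (block comparison by Hamming distance, the bit costs, the threshold) are hypothetical placeholders for illustration, not the protocol's actual encoding rules.

```python
def hamming(a, b):
    """Number of mismatched positions between two equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))

def choose_mode(src_block, ref_block, syndrome_bits_per_error=4,
                hash_bits=32, threshold=3):
    """Toy mode selector: blocks with little variation from the reference
    get syndrome coding (cost grows with the number of mismatches), while
    highly varying blocks fall back to hash coding at a fixed cost.
    Returns (mode, estimated_bits)."""
    d = hamming(src_block, ref_block)
    if d <= threshold:
        return "syndrome", d * syndrome_bits_per_error
    return "hash", hash_bits

mode, bits = choose_mode("ACGTACGT", "ACGTACGA")  # one mismatch
# → ("syndrome", 4)
```

    The point of the sketch is the decoder-side asymmetry: syndrome decoding exploits the reference the decoder already holds, so the client only transmits enough bits to resolve the few mismatches.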

  14. Theoretical discussion for electron-density distribution in multicusp ion source

    SciTech Connect

    Zhan Hualin; Hu Chundong; Xie Yahong; Wu Bin; Wang Jinfang; Liang Lizheng; Wei Jianglong

    2011-03-21

    By introducing some ideas of magnetohydrodynamics (MHD) and kinetic theories, some useful solutions for the electron-density distribution in the radial direction in a multicusp ion source are obtained. Therefore, some conclusions are made in this perspective: (1) the electron-density distributions in a specific region of the sheath are the same with or without a magnetic field; (2) the influence of the magnetic field on the electron density obeys an exponential law, which should also take the collision term into account if the magnetic field is strong; (3) the result derived from the Boltzmann equation is qualitatively consistent with the given experimental results.

  15. Long-term Trend of Solar Coronal Hole Distribution from 1975 to 2014

    NASA Astrophysics Data System (ADS)

    Fujiki, K.; Tokumaru, M.; Hayashi, K.; Satonaka, D.; Hakamada, K.

    2016-08-01

    We developed an automated prediction technique for coronal holes using potential magnetic field extrapolation in the solar corona to construct a database of coronal holes appearing from 1975 February to 2015 July (Carrington rotations from 1625 to 2165). Coronal holes are labeled with the location, size, and average magnetic field of each coronal hole on the photosphere and source surface. As a result, we identified 3335 coronal holes and found that the long-term distribution of coronal holes shows a similar pattern known as the magnetic butterfly diagram, and polar/low-latitude coronal holes tend to decrease/increase in the last solar minimum relative to the previous two minima.

  16. A vortex-source combination, a source, and a vortex with distributed heat supply

    NASA Astrophysics Data System (ADS)

    Kucherov, A. N.

    1983-04-01

    An analysis is made of the effect of distributed heat supply on the gasdynamic characteristics of a vortex-source (vortex-sink) combination, a source (sink), and a vortex. It is shown that in all the cases considered, there is a minimum radius for which the radial component of M is equal to unity. It is also shown that there is a critical intensity of heat release (for a fixed similarity parameter) separating two families of integral curves and that for this critical value a solution exists only under certain conditions.

  17. 78 FR 56685 - SourceGas Distribution LLC; Notice of Application

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-13

    ... Energy Regulatory Commission SourceGas Distribution LLC; Notice of Application Take notice that on August 27, 2013, SourceGas Distribution LLC (SourceGas), 600 12th Street, Suite 300, Golden, Colorado 80401..., without further commission authorization, provide natural gas distribution service. SourceGas...

  18. Long-term optical behavior of 114 extragalactic sources

    NASA Astrophysics Data System (ADS)

    Pica, A. J.; Pollock, J. T.; Smith, A. G.; Leacock, R. J.; Edwards, P. L.; Scott, R. L.

    1980-11-01

    Photographic observations of over 200 quasars and related objects have been obtained at the Rosemary Hill Observatory since 1968. Twenty that are optically violent variables were reported on by Pollock et al. (1979). This paper presents data for 114 less active sources, 58 of which exhibit optical variations at a confidence level of 95% or greater. Light curves are given for the 26 most active sources. In addition, the overall monitoring program at the Observatory is reviewed, and information on the status of 206 objects is provided.

  19. Testing contamination source identification methods for water distribution networks

    DOE PAGES

    Seth, Arpan; Klise, Katherine A.; Siirola, John D.; ...

    2016-04-01

    In the event of contamination in a water distribution network (WDN), source identification (SI) methods that analyze sensor data can be used to identify the source location(s). Knowledge of the source location and characteristics are important to inform contamination control and cleanup operations. Various SI strategies that have been developed by researchers differ in their underlying assumptions and solution techniques. The following manuscript presents a systematic procedure for testing and evaluating SI methods. The performance of these SI methods is affected by various factors including the size of WDN model, measurement error, modeling error, time and number of contaminant injections, and time and number of measurements. This paper includes test cases that vary these factors and evaluates three SI methods on the basis of accuracy and specificity. The tests are used to review and compare these different SI methods, highlighting their strengths in handling various identification scenarios. These SI methods and a testing framework that includes the test cases and analysis tools presented in this paper have been integrated into EPA’s Water Security Toolkit (WST), a suite of software tools to help researchers and others in the water industry evaluate and plan various response strategies in case of a contamination incident. Lastly, a set of recommendations are made for users to consider when working with different categories of SI methods.
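
    Scoring a single SI trial on the two criteria named above can be sketched as follows. The metric definitions here (accuracy as containment of the true source in the candidate set, specificity as the fraction of network nodes ruled out) are illustrative assumptions, not the exact formulas used in WST.

```python
def score_si_trial(candidates, true_source, n_nodes):
    """Score one source-identification trial.
    accuracy:    did the candidate set contain the true injection node?
    specificity: what fraction of the network did the method rule out?"""
    accuracy = true_source in candidates
    specificity = 1.0 - len(candidates) / n_nodes
    return accuracy, specificity

# A method that narrows a 100-node network down to candidates {3, 7},
# where node 7 is the true injection location:
result = score_si_trial({3, 7}, 7, 100)
```

    Averaging such scores over test cases that vary network size, measurement error, and injection timing gives the kind of systematic comparison the paper describes.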

  20. Testing contamination source identification methods for water distribution networks

    SciTech Connect

    Seth, Arpan; Klise, Katherine A.; Siirola, John D.; Haxton, Terranna; Laird, Carl D.

    2016-04-01

    In the event of contamination in a water distribution network (WDN), source identification (SI) methods that analyze sensor data can be used to identify the source location(s). Knowledge of the source location and characteristics are important to inform contamination control and cleanup operations. Various SI strategies that have been developed by researchers differ in their underlying assumptions and solution techniques. The following manuscript presents a systematic procedure for testing and evaluating SI methods. The performance of these SI methods is affected by various factors including the size of WDN model, measurement error, modeling error, time and number of contaminant injections, and time and number of measurements. This paper includes test cases that vary these factors and evaluates three SI methods on the basis of accuracy and specificity. The tests are used to review and compare these different SI methods, highlighting their strengths in handling various identification scenarios. These SI methods and a testing framework that includes the test cases and analysis tools presented in this paper have been integrated into EPA’s Water Security Toolkit (WST), a suite of software tools to help researchers and others in the water industry evaluate and plan various response strategies in case of a contamination incident. Lastly, a set of recommendations are made for users to consider when working with different categories of SI methods.

  1. Reservoir, seal, and source rock distribution in Essaouira Rift Basin

    SciTech Connect

    Ait Salem, A. )

    1994-07-01

    The Essaouira onshore basin is an important hydrocarbon generating basin, which is situated in western Morocco. There are seven oil and gas-with-condensate fields; six are from Jurassic reservoirs and one from a Triassic reservoir. As a segment of the Atlantic passive continental margin, the Essaouira basin was subjected to several post-Hercynian basin deformation phases, which resulted in the distribution, in space and time, of reservoir, seal, and source rock. These basin deformations comprise synsedimentary infilling of major half grabens with continental red beds and evaporites associated with the rifting phase, emplacement of a thick postrifting Jurassic and Cretaceous sedimentary wedge during thermal subsidence, salt movements, and structural deformations in relation to the Atlas emergence. The widely extending lower Oxfordian shales are the only Jurassic shale beds penetrated and recognized as potential and mature source rocks. However, facies analysis and mapping suggested the presence of untested source rocks in Dogger marine shales and Triassic to Liassic lacustrine shales. Rocks with adequate reservoir characteristics were encountered in Triassic/Liassic fluvial sands, upper Liassic dolomites, and upper Oxfordian sandy dolomites. The seals are provided by Liassic salt for the lower reservoirs and Middle to Upper Jurassic anhydrite for the upper reservoirs. Recent exploration studies demonstrate that many prospective structures remain untested.

  2. Homogenization of the Brush Problem with a Source Term in L 1

    NASA Astrophysics Data System (ADS)

    Gaudiello, Antonio; Guibé, Olivier; Murat, François

    2017-03-01

    We consider a domain which has the form of a brush in 3D or the form of a comb in 2D, i.e. an open set which is composed of cylindrical vertical teeth distributed over a fixed basis. All the teeth have a similar fixed height; their cross sections can vary from one tooth to another and are not supposed to be smooth; moreover the teeth can be adjacent, i.e. they can share parts of their boundaries. The diameter of every tooth is supposed to be less than or equal to ɛ, and the asymptotic volume fraction of the teeth (as ɛ tends to zero) is supposed to be bounded from below away from zero, but no periodicity is assumed on the distribution of the teeth. In this domain we study the asymptotic behavior (as ɛ tends to zero) of the solution of a second order elliptic equation with a zeroth order term which is bounded from below away from zero, when the homogeneous Neumann boundary condition is satisfied on the whole of the boundary. First, we revisit the problem where the source term belongs to L 2. This is a classical problem, but our homogenization result takes place in a geometry which is more general than the ones which have been considered before. Moreover we prove a corrector result which is new. Then, we study the case where the source term belongs to L 1. Working in the framework of renormalized solutions and introducing a definition of renormalized solutions for degenerate elliptic equations where only the vertical derivative is involved (such a definition is new), we identify the limit problem and prove a corrector result.

  3. CHALLENGES IN SOURCE TERM MODELING OF DECONTAMINATION AND DECOMMISSIONING WASTES.

    SciTech Connect

    SULLIVAN, T.M.

    2006-08-01

    Development of real-time predictive modeling to identify the dispersion and/or source(s) of airborne weapons of mass destruction including chemical, biological, radiological, and nuclear material in urban environments is needed to improve response to potential releases of these materials via either terrorist or accidental means. These models will also prove useful in defining airborne pollution dispersion in urban environments for pollution management/abatement programs. Predicting gas flow in an urban setting on a scale of less than a few kilometers is a complicated and challenging task due to the irregular flow paths that occur along streets and alleys and around buildings of different sizes and shapes, i.e., ''urban canyons''. In addition, air exchange between the outside and buildings and subway areas further complicates the situation. Transport models that are used to predict dispersion of WMD/CBRN materials or to backtrack the source of the release require high-density data and need defensible parameterizations of urban processes. Errors in the data or any of the parameter inputs or assumptions will lead to misidentification of the airborne spread or source release location(s). The need for these models to provide output in a real-time fashion if they are to be useful for emergency response provides another challenge. To improve the ability of New York City's (NYC's) emergency management teams and first response personnel to protect the public during releases of hazardous materials, the New York City Urban Dispersion Program (UDP) has been initiated. This is a four-year research program being conducted from 2004 through 2007. This paper will discuss ground level and subway perfluorocarbon tracer (PFT) release studies conducted in New York City. The studies released multiple tracers to study ground level and vertical transport of contaminants.
This paper will discuss the results from these tests and how these results can be used for improving transport models

  4. [Spatial distribution and pollution source identification of agricultural non-point source pollution in Fujiang watershed].

    PubMed

    Ding, Xiao-Wen; Shen, Zhen-Yao

    2012-11-01

    In order to provide regulatory support for management and control of non-point source (NPS) pollution in the Fujiang watershed, agricultural NPS pollution was simulated, the spatial distribution characteristics of NPS pollution were analyzed, and the primary pollution sources were identified using an export coefficient model (ECM) and a geographic information system (GIS). Agricultural NPS total nitrogen (TN) loading of the research area was 9.11 x 10(4) t in 2010, and the average loading intensity was 3.10 t x km(-2). Agricultural NPS TN loading was mainly distributed over dry lands, Mianyang city and gentle slope areas; high loading intensity areas were dry lands, Deyang city and gentle slope areas. Agricultural land use, with a contribution rate of 62.12%, was the most important pollution source; fertilizer loss in dry lands, with a contribution rate of 50.49%, was the most prominent contributor. Improving methods of agricultural cultivation, implementing the "farm land returning to woodland" policy, and enhancing the treatment efficiency of domestic sewage and livestock wastewater are effective measures.
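
    An export coefficient model of the kind named above computes load as a sum over source categories of a per-category export coefficient times the area it occupies. A minimal sketch with hypothetical coefficients and areas (not values from the Fujiang study):

```python
def export_coefficient_load(sources):
    """Export coefficient model (ECM): total NPS load
    L = sum_i E_i * A_i, where E_i is the export coefficient
    (t km^-2 yr^-1) of source category i and A_i its area (km^2)."""
    return sum(E * A for E, A in sources)

# Hypothetical land-use mix: (export coefficient, area)
load = export_coefficient_load([
    (2.5, 120.0),   # dry land
    (0.75, 300.0),  # paddy field
    (4.0, 15.0),    # residential
])
# → 585.0 t/yr
```

    Contribution rates like the 62.12% reported for agricultural land use then follow by dividing each category's E_i * A_i term by the total load.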

  5. 7 CFR 1822.268 - Rates, terms, and source of funds.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... under Public Law 103-354 will be the lower of the interest rates in effect at the time of loan approval... 7 Agriculture 12 2011-01-01 2011-01-01 false Rates, terms, and source of funds. 1822.268 Section... Site Loan Policies, Procedures, and Authorizations § 1822.268 Rates, terms, and source of funds....

  6. 7 CFR 1822.268 - Rates, terms, and source of funds.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... under Public Law 103-354 will be the lower of the interest rates in effect at the time of loan approval... 7 Agriculture 12 2014-01-01 2013-01-01 true Rates, terms, and source of funds. 1822.268 Section... Site Loan Policies, Procedures, and Authorizations § 1822.268 Rates, terms, and source of funds....

  7. Source-rock distribution model of the periadriatic region

    SciTech Connect

    Zappaterra, E. )

    1994-03-01

    The Periadriatic area is a mosaic of geological provinces comprised of spatially and temporally similar tectonic-sedimentary cycles. Tectonic evolution progressed from a Triassic-Early Jurassic (Liassic) continental rifting stage on the northern edge of the African craton, through an Early Jurassic (Middle Liassic)-Late Cretaceous/Eocene oceanic rifting stage and passive margin formation, to a final continental collision and active margin deformation stage in the Late Cretaceous/Eocene to Holocene. Extensive shallow-water carbonate platform deposits covered large parts of the Periadriatic region in the Late Triassic. Platform breakup and development of a platform-to-basin carbonate shelf morphology began in the Late Triassic and extended through the Cretaceous. On the basis of this paleogeographic evolution, the regional geology of the Periadriatic region can be expressed in terms of three main Upper Triassic-Paleogene sedimentary sequences: (A), the platform sequence; (B), the platform to basin sequence; and (C), the basin sequence. These sequences developed during the initial rifting and subsequent passive-margin formation tectonic stages. The principal Triassic source basins and most of the surface hydrocarbon indications and economically important oil fields of the Periadriatic region are associated with sequence B areas. No major hydrocarbon accumulations can be directly attributed to the Jurassic-Cretaceous epioceanic and intraplatform source rock sequences. The third episode of source bed deposition characterizes the final active margin deformation stage and is represented by Upper Tertiary organic-rich terrigenous units, mostly gas-prone. These are essentially associated with turbiditic and flysch sequences of foredeep basins and have generated the greater part of the commercial biogenic gases of the Periadriatic region. 82 refs., 11 figs., 2 tabs.

  8. Plutonium isotopes and 241Am in the atmosphere of Lithuania: A comparison of different source terms

    NASA Astrophysics Data System (ADS)

    Lujanienė, G.; Valiulis, D.; Byčenkienė, S.; Šakalys, J.; Povinec, P. P.

    2012-12-01

    137Cs, 241Am and Pu isotopes collected in aerosol samples during 1994-2011 were analyzed with special emphasis on better understanding of Pu and Am behavior in the atmosphere. The results from long-term measurements of 240Pu/239Pu atom ratios showed a bimodal frequency distribution with median values of 0.195 and 0.253, indicating two main sources contributing to the Pu activities at the Vilnius sampling station. The low Pu atom ratio of 0.141 could be attributed to the weapon-grade plutonium derived from the nuclear weapon test sites. The frequency of air masses arriving from the North-West and North-East correlated with the Pu atom ratio indicating the input from the sources located in these regions (the Novaya Zemlya test site, Siberian nuclear plants), while no correlation with the Chernobyl region was observed. Measurements carried out during the Fukushima accident showed a negligible impact of this source with Pu activities by four orders of magnitude lower as compared to the Chernobyl accident. The activity concentration of actinides measured in the integrated sample collected in March-April, 2011 showed a small contribution of Pu with unusual activity and atom ratios indicating the presence of the spent fuel of different origin than that of the Chernobyl accident.

  9. Source term estimation during incident response to severe nuclear power plant accidents

    SciTech Connect

    McKenna, T.J.; Glitter, J.G.

    1988-10-01

    This document presents a method of source term estimation that reflects the current understanding of source term behavior and that can be used during an event. The various methods of estimating radionuclide release to the environment (source terms) as a result of an accident at a nuclear power reactor are discussed. The major factors affecting potential radionuclide releases off site as a result of nuclear power plant accidents are described. The quantification of these factors based on plant instrumentation is also discussed. A range of accident conditions, from those within the design basis to the most severe accidents possible, is covered in the text. A method of gross estimation of accident source terms and their off-site consequences is presented. 39 refs., 48 figs., 19 tabs.

  10. Spatial distribution and migration of nonylphenol in groundwater following long-term wastewater irrigation.

    PubMed

    Wang, Shiyu; Wu, Wenyong; Liu, Fei; Yin, Shiyang; Bao, Zhe; Liu, Honglu

    2015-01-01

    Seen as a solution to water shortages, wastewater reuse for crop irrigation nevertheless poses a risk owing to the potential release of organic contaminants into soil and water. The frequency of detection (FOD), concentration, and migration of nonylphenol (NP) isomers in reclaimed water (FODRW), surface water (FODSW), and groundwater (FODGW) were investigated in a long-term wastewater irrigation area in Beijing. The FODRW, FODSW and FODGW of any or all of 12 NP isomers were 66.7% to 100%, 76.9% to 100% and 13.3% to 60%, respectively. The mean (±standard deviation) NP concentrations of the reclaimed water, surface water, and groundwater (NPRW, NPSW, NPGW, respectively) were 469.4±73.4 ng L(-1), 694.6±248.7 ng L(-1) and 244.4±230.8 ng L(-1), respectively. The existence of external pollution sources during water transmission and distribution resulted in NPSW exceeding NPRW. NP distribution in groundwater was related to the duration and quantity of wastewater irrigation and to the sources of aquifer recharge, and decreased with increasing aquifer depth. Higher riverside infiltration rates led to higher FODGW values nearby. The migration rate of NP isomers was classified as high, moderate or low.

  11. Source term model evaluations for the low-level waste facility performance assessment

    SciTech Connect

    Yim, M.S.; Su, S.I.

    1995-12-31

    The estimation of release of radionuclides from various waste forms to the bottom boundary of the waste disposal facility (source term) is one of the most important aspects of LLW facility performance assessment. In this work, several currently used source term models are comparatively evaluated for the release of carbon-14 based on a test case problem. The models compared include PRESTO-EPA-CPG, IMPACTS, DUST and NEFTRAN-II. Major differences in assumptions and approaches between the models are described and key parameters are identified through sensitivity analysis. The source term results from different models are compared and other concerns or suggestions are discussed.

  12. Source-term reevaluation for US commercial nuclear power reactors: a status report

    SciTech Connect

    Herzenberg, C.L.; Ball, J.R.; Ramaswami, D.

    1984-12-01

    Only results that had been discussed publicly, had been published in the open literature, or were available in preliminary reports as of September 30, 1984, are included here. More than 20 organizations are participating in source-term programs, which have been undertaken to examine severe accident phenomena in light-water power reactors (including the chemical and physical behavior of fission products under accident conditions), update and reevaluate source terms, and resolve differences between predictions and observations of radiation releases and related phenomena. Results from these source-term activities have been documented in over 100 publications to date.

  13. Source terms and attenuation lengths for estimating shielding requirements or dose analyses of proton therapy accelerators.

    PubMed

    Sheu, Rong-Jiun; Lai, Bo-Lun; Lin, Uei-Tyng; Jiang, Shiang-Huei

    2013-08-01

    Proton therapy accelerators in the energy range of 100-300 MeV could potentially produce intense secondary radiation, which must be carefully evaluated and shielded for the purpose of radiation safety in a densely populated hospital. Monte Carlo simulations are generally the most accurate method for accelerator shielding design. However, simplified approaches such as the commonly used point-source line-of-sight model are usually preferable on many practical occasions, especially for scoping shielding design or quick sensitivity studies. This work provides a set of reliable shielding data with reasonable coverage of common target and shielding materials for 100-300 MeV proton accelerators. The shielding data, including source terms and attenuation lengths, were derived from a consistent curve fitting process of a number of depth-dose distributions within the shield, which were systematically calculated by using MCNPX for various beam-target shield configurations. The general characteristics and qualities of this data set are presented. Possible applications in cases of single- and double-layer shielding are considered and demonstrated.
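    The fitting step can be illustrated with a generic sketch: for a fixed angular bin, the transmitted dose behind a shield of depth d is modeled as H(d) = H0·exp(−d/λ), so a linear least-squares fit of log dose against depth recovers the source term H0 and attenuation length λ. The function and the synthetic profile below are illustrative, not the paper's MCNPX workflow:

    ```python
    from math import exp, log

    def fit_attenuation(depths, doses):
        """Fit ln H = ln H0 - d/lam by linear least squares and return
        (H0, lam): the source term and the attenuation length."""
        n = len(depths)
        ys = [log(h) for h in doses]
        xbar = sum(depths) / n
        ybar = sum(ys) / n
        sxx = sum((x - xbar) ** 2 for x in depths)
        sxy = sum((x - xbar) * (y - ybar) for x, y in zip(depths, ys))
        slope = sxy / sxx                     # equals -1/lam
        return exp(ybar - slope * xbar), -1.0 / slope

    # synthetic noiseless depth-dose profile with H0 = 2.0e-3, lam = 50
    depths = [10.0 * i for i in range(16)]
    doses = [2.0e-3 * exp(-d / 50.0) for d in depths]
    H0, lam = fit_attenuation(depths, doses)
    ```

    With noisy Monte Carlo tallies the same fit applies, weighted over the deep (equilibrium) part of the profile.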

  14. Spatial Distribution of Soil Fauna In Long Term No Tillage

    NASA Astrophysics Data System (ADS)

    Corbo, J. Z. F.; Vieira, S. R.; Siqueira, G. M.

    2012-04-01

    The soil is a complex system constituted by living beings and organic and mineral particles, whose components define its physical, chemical and biological properties. Soil fauna plays an important role in soil and may reflect and interfere in its functionality. These organisms' populations may be influenced by management practices, fertilization, liming and porosity, among other factors. Such changes may reduce the composition and distribution of the soil fauna community. Thus, this study aimed to determine the spatial variability of soil fauna in a consolidated no-tillage system. The experimental area is located at Instituto Agronômico in Campinas (São Paulo, Brazil). The sampling was conducted in a Rhodic Eutrudox under no-tillage system, and 302 points distributed over a 3.2 hectare area in a regular grid of 10.00 m x 10.00 m were sampled. The soil fauna was sampled with the "pitfall trap" method, and traps remained in the area for seven days. Data were analyzed using descriptive statistics to determine the main statistical moments (mean, variance, coefficient of variation, standard deviation, skewness and kurtosis). Geostatistical tools were used to determine the spatial variability of the attributes using the experimental semivariogram. For the biodiversity analysis, Shannon and Pielou indexes and richness were calculated for each sample. Geostatistics proved to be a useful tool for mapping the spatial variability of groups of the epigeal soil fauna. The family Formicidae proved to be the most abundant and dominant in the study area. Descriptive statistics indicated that all studied attributes of the epigeal soil fauna groups followed a lognormal frequency distribution. The exponential model was the best suited to the obtained data, both for the groups of epigeal soil fauna (Acari, Araneae, Coleoptera, Formicidae and Coleoptera larvae) and for the biodiversity indexes. The sampling scheme (10.00 m x 10.00 m) was not sufficient to detect the spatial
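    The Shannon and Pielou indexes mentioned above are straightforward to compute from per-taxon trap counts; a minimal sketch (the counts are hypothetical):

    ```python
    from math import log

    def shannon_pielou(counts):
        """Shannon diversity H' = -sum(p_i * ln p_i) and Pielou evenness
        J = H'/ln(S) for a list of per-taxon abundances."""
        total = sum(counts)
        ps = [c / total for c in counts if c > 0]
        h = -sum(p * log(p) for p in ps)
        s = len(ps)                       # richness
        j = h / log(s) if s > 1 else 0.0
        return h, j

    # hypothetical pitfall-trap sample dominated by Formicidae
    h, j = shannon_pielou([120, 30, 25, 15, 10])
    ```

    Evenness J reaches 1 only when all taxa are equally abundant, which is why a Formicidae-dominated sample yields J < 1.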

  15. An Investigation of the Influence of Indexing Exhaustivity and Term Distributions on a Document Space.

    ERIC Educational Resources Information Center

    Wolfram, Dietmar; Zhang, Jin

    2002-01-01

    Investigates the influence of index term distributions and indexing exhaustivity on the document space within a visual information retrieval environment called DARE (Distance Angle Retrieval Environment). Discusses results that demonstrate the importance of term distribution and exhaustivity on the density of document spaces and their implications…

  16. Accident source terms for boiling water reactors with high burnup cores.

    SciTech Connect

    Gauntt, Randall O.; Powers, Dana Auburn; Leonard, Mark Thomas

    2007-11-01

    The primary objective of this report is to provide the technical basis for development of recommendations for updates to the NUREG-1465 Source Term for BWRs that will extend its applicability to accidents involving high burnup (HBU) cores. However, a secondary objective is to re-examine the fundamental characteristics of the prescription for fission product release to containment described by NUREG-1465. This secondary objective is motivated by an interest in understanding the extent to which research into the release and behavior of radionuclides under accident conditions has altered best-estimate calculations of the integral response of BWRs to severe core damage sequences and the resulting radiological source terms to containment. This report, therefore, documents specific results of fission product source term analyses that will form the basis for the HBU supplement to NUREG-1465. However, commentary is also provided on observed differences between the composite results of the source term calculations performed here and those reflected in NUREG-1465 itself.

  17. Source terms: an investigation of uncertainties, magnitudes, and recommendations for research. [PWR; BWR

    SciTech Connect

    Levine, S.; Kaiser, G. D.; Arcieri, W. C.; Firstenberg, H.; Fulford, P. J.; Lam, P. S.; Ritzman, R. L.; Schmidt, E. R.

    1982-03-01

    The purpose of this document is to assess the state of knowledge and expert opinions that exist about fission product source terms from potential nuclear power plant accidents. This is so that recommendations can be made for research and analyses which have the potential to reduce the uncertainties in these estimated source terms and to derive improved methods for predicting their magnitudes. The main reasons for writing this report are to indicate the major uncertainties involved in defining realistic source terms that could arise from severe reactor accidents, to determine which factors would have the most significant impact on public risks and emergency planning, and to suggest research and analyses that could result in the reduction of these uncertainties. Source terms used in the conventional consequence calculations in the licensing process are not explicitly addressed.

  18. Future prospects for ECR ion sources with improved charge state distributions

    SciTech Connect

    Alton, G.D.

    1995-12-31

    Despite the steady advance in the technology of the ECR ion source, present art forms have not yet reached their full potential in terms of charge state and intensity within a particular charge state, in part because of the narrow-bandwidth, single-frequency microwave radiation used to heat the plasma electrons. This article identifies fundamentally important methods which may enhance the performance of ECR ion sources through the use of: (1) a tailored magnetic field configuration (spatial domain) in combination with single-frequency microwave radiation to create a large, uniformly distributed ECR "volume"; or (2) broadband frequency-domain techniques (variable-frequency, broadband, or multiple-discrete-frequency microwave radiation), derived from standard TWT technology, to transform the resonant plasma "surfaces" of traditional ECR ion sources into resonant plasma "volumes". The creation of a large ECR plasma "volume" permits coupling of more power into the plasma, resulting in the heating of a much larger electron population to higher energies, thereby producing higher charge state ions and much higher intensities within a particular charge state than possible in present forms of the source. The ECR ion source concepts described in this article offer exciting opportunities to significantly advance the state of the art of ECR technology and, as a consequence, open new opportunities in fundamental and applied research and for a variety of industrial applications.

  19. 78 FR 6318 - SourceGas Distribution LLC; Notice of Petition for Rate Approval

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-30

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF ENERGY Federal Energy Regulatory Commission SourceGas Distribution LLC; Notice of Petition for Rate Approval Take notice that on January 15, 2013, SourceGas Distribution LLC (SourceGas) filed a rate election pursuant...

  20. 78 FR 41398 - SourceGas Distribution LLC; Notice of Filing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-10

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF ENERGY Federal Energy Regulatory Commission SourceGas Distribution LLC; Notice of Filing Take notice that on June 27, 2013, SourceGas Distribution LLC (SourceGas) filed a Rate Election and revised Statement of...

  1. 77 FR 28374 - SourceGas Distribution LLC; Notice of Compliance Filing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-14

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF ENERGY Federal Energy Regulatory Commission SourceGas Distribution LLC; Notice of Compliance Filing Take notice that on April 30, 2012, SourceGas Distribution LLC (SourceGas) filed a revised Statement of Operating...

  2. Source and distribution of metals in urban soil of Bombay, India, using multivariate statistical techniques

    NASA Astrophysics Data System (ADS)

    Ratha, D. S.; Sahu, B. K.

    1993-11-01

    Simplification of a complex system of geochemical variables obtained from the soils of an industrialized area of Bombay is attempted by means of R-mode factor analysis. Prior to factor analysis, discriminant analysis was carried out on rock and soil chemical data to establish the anthropogenic contribution of metals in soil. Trace elements (Cd, Co, Cr, Cu, Fe, Mn, Ni, Pb, and Zn) are expressed in terms of three rotated factors. The factors mostly indicate anthropogenic sources of metals such as atmospheric fallout, emissions from industrial chimneys, crushing operations in quarries, and sewage sludges. Major elements (Na, Mg, Al, Si, P, K, Ca, Ti, Mn, and Fe) are also expressed in terms of three rotated factors, indicating natural processes such as chemical weathering, the presence of clay minerals, and contributions from sewage sludges and municipal refuse. Summary statistics (mean, standard deviation, skewness, and kurtosis) for the particle size distribution indicated a moderate dominance of fine particles. Mineralogical studies revealed the presence of montmorillonite, kaolinite, and illite types of clay minerals. Thus the present study provides information about the metals entering the soil and their levels, sources, and distribution in the area.
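    The extraction step of R-mode factor analysis can be sketched as an eigendecomposition of the correlation matrix of the variables, retaining factors by the Kaiser criterion; the varimax-style rotation used to obtain the rotated factors is omitted here, and the random test data are illustrative:

    ```python
    import numpy as np

    def principal_factors(data, kaiser=1.0):
        """R-mode factor extraction sketch: eigendecomposition of the
        correlation matrix of the variables (columns of `data`);
        factors with eigenvalue > `kaiser` are retained."""
        z = (data - data.mean(axis=0)) / data.std(axis=0)   # standardize
        r = np.corrcoef(z, rowvar=False)                    # correlation matrix
        evals, evecs = np.linalg.eigh(r)
        order = np.argsort(evals)[::-1]                     # descending
        evals, evecs = evals[order], evecs[:, order]
        keep = evals > kaiser                               # Kaiser criterion
        loadings = evecs[:, keep] * np.sqrt(evals[keep])    # factor loadings
        return evals, loadings

    rng = np.random.default_rng(0)
    data = rng.normal(size=(200, 5))        # illustrative: 200 samples, 5 variables
    evals, loadings = principal_factors(data)
    ```

    The eigenvalues of a correlation matrix sum to the number of variables, which gives a quick sanity check on the decomposition.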

  3. Correlating Pluto's Albedo Distribution to Long Term Insolation Patterns

    NASA Astrophysics Data System (ADS)

    Earle, Alissa M.; Binzel, Richard P.; Stern, S. Alan; Young, Leslie A.; Buratti, Bonnie J.; Ennico, Kimberly; Grundy, Will M.; Olkin, Catherine B.; Spencer, John R.; Weaver, Hal A.

    2015-11-01

    NASA's New Horizons' reconnaissance of the Pluto system has revealed striking albedo contrasts from polar to equatorial latitudes on Pluto, as well as sharp boundaries for longitudinal variations. These contrasts suggest Pluto undergoes dynamic evolution that drives the redistribution of volatiles. Using the New Horizons results as a template, in this talk we will explore the volatile migration process driven seasonally on Pluto considering multiple timescales. These timescales include the current orbit (248 years) as well as the timescales for obliquity precession (amplitude of 23 degrees over 3 Myrs) and regression of the orbital longitude of perihelion (3.7 Myrs). We will build upon the long-term insolation history model described by Earle and Binzel (2015, Icarus 250, 405-412) with the goal of identifying the most critical timescales that drive the features observed in Pluto’s current post-perihelion epoch. This work was supported by the NASA New Horizons Project.

  4. Long Term 2 Second Round Source Water Monitoring and Bin Placement Memo

    EPA Pesticide Factsheets

    The Long Term 2 Enhanced Surface Water Treatment Rule (LT2ESWTR) applies to all public water systems served by a surface water source or public water systems served by a ground water source under the direct influence of surface water.

  5. Shielding analysis of proton therapy accelerators: a demonstration using Monte Carlo-generated source terms and attenuation lengths.

    PubMed

    Lai, Bo-Lun; Sheu, Rong-Jiun; Lin, Uei-Tyng

    2015-05-01

    Monte Carlo simulations are generally considered the most accurate method for complex accelerator shielding analysis. Simplified models based on point-source line-of-sight approximation are often preferable in practice because they are intuitive and easy to use. A set of shielding data, including source terms and attenuation lengths for several common targets (iron, graphite, tissue, and copper) and shielding materials (concrete, iron, and lead) were generated by performing Monte Carlo simulations for 100-300 MeV protons. Possible applications and a proper use of the data set were demonstrated through a practical case study, in which shielding analysis on a typical proton treatment room was conducted. A thorough and consistent comparison between the predictions of our point-source line-of-sight model and those obtained by Monte Carlo simulations for a 360° dose distribution around the room perimeter showed that the data set can yield fairly accurate or conservative estimates for the transmitted doses, except for those near the maze exit. In addition, this study demonstrated that appropriate coupling between the generated source term and empirical formulae for radiation streaming can be used to predict a reasonable dose distribution along the maze. This case study proved the effectiveness and advantage of applying the data set to a quick shielding design and dose evaluation for proton therapy accelerators.
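    As a schematic of the point-source line-of-sight model both of these shielding studies rely on, the transmitted dose rate combines an inverse-square factor with exponential attenuation over the slant thickness of the shield; the numbers below are illustrative, not values from the paper:

    ```python
    from math import exp, cos, radians

    def los_dose_rate(h0, lam, shield_thickness, distance, angle_deg=0.0):
        """Point-source line-of-sight estimate:
        H = h0 * exp(-t_slant / lam) / r**2, where the slant thickness
        grows as the ray crosses the shield obliquely."""
        t_slant = shield_thickness / cos(radians(angle_deg))
        return h0 * exp(-t_slant / lam) / distance ** 2

    # illustrative: source term h0 per unit solid angle, lam = 50 cm concrete
    h_near = los_dose_rate(1.0, 50.0, 100.0, 2.0)
    h_far = los_dose_rate(1.0, 50.0, 100.0, 4.0)
    ```

    Doubling the distance quarters the dose, and each added attenuation length of shield costs a factor of e; angle-dependent source terms h0(θ) and λ(θ), as tabulated in such studies, slot directly into the same formula.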

  6. Source term balance in a severe storm in the Southern North Sea

    NASA Astrophysics Data System (ADS)

    van Vledder, Gerbrant Ph.; Hulst, Sander Th. C.; McConochie, Jason D.

    2016-12-01

    This paper presents the results of a wave hindcast of a severe storm in the Southern North Sea to verify recently developed deep and shallow water source terms. The work was carried out in the framework of the ONR-funded NOPP project (Tolman et al. 2013) in which deep and shallow water source terms were developed for use in third-generation wave prediction models. The deep water source terms for whitecapping, wind input and nonlinear interactions were developed, implemented and tested primarily in the WAVEWATCH III model, whereas the shallow water source terms for depth-limited wave breaking and triad interactions were developed, implemented and tested primarily in the SWAN wave model. So far, the new deep-water whitecapping source terms have not been fully tested in shallow environments; similarly, the shallow water source terms have not yet been tested in large intermediate-depth areas like the North Sea. As a first step in assessing the performance of these newly developed source terms, the source term balance and the effect of different physical settings on the prediction of wave heights and wave periods in the relatively shallow North Sea were analysed. The December 2013 storm was hindcast with a SWAN model implementation for the North Sea. Spectral wave boundary conditions were obtained from an Atlantic Ocean WAVEWATCH III implementation and the model was driven by hourly CFSR wind fields. In the southern part of the North Sea, current and water level effects were included. The hindcast was performed with five different settings for whitecapping, viz. three Komen-type whitecapping formulations, the saturation-based whitecapping of Van der Westhuysen et al. (2007) and the recently developed ST6 whitecapping as described by Zieger et al. (2015). Results of the wave hindcast were compared with buoy measurements at location K13 collected by the Dutch Ministry of Transport and Public Works.
An analysis was made of the source term balance at three locations, the deep

  7. Chemotaxis Increases the Residence Time Distribution of Bacteria in Granular Media Containing Distributed Contaminant Sources

    NASA Astrophysics Data System (ADS)

    Adadevoh, J.; Triolo, S.; Ramsburg, C. A.; Ford, R.

    2015-12-01

    The use of chemotactic bacteria in bioremediation has the potential to increase access to, and biotransformation of, contaminant mass within the subsurface environment. This laboratory-scale study aimed to understand and quantify the influence of chemotaxis on residence times of pollutant-degrading bacteria within homogeneous treatment zones. Focus was placed on a continuous flow sand-packed column system in which a uniform distribution of naphthalene crystals created distributed sources of dissolved phase contaminant. A 10 mL pulse of Pseudomonas putida G7, which is chemotactic to naphthalene, and Pseudomonas putida G7 Y1, a non-chemotactic mutant strain, were simultaneously introduced into the sand-packed column at equal concentrations. Breakthrough curves obtained for the bacteria from column experiments conducted with and without naphthalene were used to quantify the effect of chemotaxis on transport parameters. In the presence of the chemoattractant, longitudinal dispersivity of PpG7 increased by a factor of 3 and percent recovery decreased from 21% to 12%. The results imply that pore-scale chemotaxis responses are evident at an interstitial fluid velocity of 1.7 m/d, which is within the range of typical groundwater flow. Within the context of bioremediation, chemotaxis may work to enhance bacterial residence times in zones of contamination thereby improving treatment.
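    Transport parameters of the kind quantified above (mean residence time, dispersion) are commonly extracted from breakthrough curves by temporal moment analysis; a minimal sketch on a synthetic Gaussian curve (the data are illustrative, not from these experiments):

    ```python
    from math import exp

    def btc_moments(times, conc):
        """Zeroth moment (mass), mean arrival time, and temporal variance
        of a breakthrough curve, via trapezoidal integration."""
        def trapz(ys):
            return sum((ys[i] + ys[i + 1]) * (times[i + 1] - times[i]) / 2.0
                       for i in range(len(times) - 1))
        m0 = trapz(conc)
        tbar = trapz([t * c for t, c in zip(times, conc)]) / m0
        var = trapz([(t - tbar) ** 2 * c for t, c in zip(times, conc)]) / m0
        return m0, tbar, var

    # synthetic Gaussian breakthrough centred at t = 10 with unit variance
    times = [i * 0.05 for i in range(401)]          # 0 .. 20
    conc = [exp(-0.5 * (t - 10.0) ** 2) for t in times]
    m0, tbar, var = btc_moments(times, conc)
    ```

    For a column of length L, the dispersivity can then be approximated as α ≈ L·σ²_t/(2·t̄²) under the small-dispersion assumption, and the zeroth moments of the two strains give the relative percent recovery.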

  8. Strategies for satellite-based monitoring of CO2 from distributed area and point sources

    NASA Astrophysics Data System (ADS)

    Schwandner, Florian M.; Miller, Charles E.; Duren, Riley M.; Natraj, Vijay; Eldering, Annmarie; Gunson, Michael R.; Crisp, David

    2014-05-01

    and sensor provides the full range of temporal sampling needed to characterize distributed area and point source emissions. For instance, point source emission patterns will vary with source strength, wind speed and direction. Because wind speed, direction and other environmental factors change rapidly, short-term variabilities should be sampled. For detailed target selection and pointing verification, important lessons have already been learned and strategies devised during JAXA's GOSAT mission (Schwandner et al., 2013). The fact that competing spatial and temporal requirements drive satellite remote sensing sampling strategies dictates a systematic, multi-factor consideration of potential solutions. Factors to consider include vista, revisit frequency, integration times, spatial resolution, and spatial coverage. No single satellite-based remote sensing solution can address this problem for all scales. It is therefore of paramount importance for the international community to develop and maintain a constellation of atmospheric CO2 monitoring satellites that complement each other in their temporal and spatial observation capabilities: polar sun-synchronous orbits (fixed local solar time, no diurnal information) with agile pointing allow global sampling of known distributed area and point sources like megacities, power plants and volcanoes with daily to weekly temporal revisits and moderate to high spatial resolution. Extensive targeting of distributed area and point sources comes at the expense of reduced mapping or spatial coverage, and the important contextual information that comes with large-scale contiguous spatial sampling. Polar sun-synchronous orbits with push-broom swath-mapping but limited pointing agility may allow mapping of individual source plumes and their spatial variability, but will depend on fortuitous environmental conditions during the observing period.
These solutions typically have longer times between revisits, limiting their ability to resolve

  9. WATER QUALITY IN SOURCE WATER, TREATMENT, AND DISTRIBUTION SYSTEMS

    EPA Science Inventory

    Most drinking water utilities practice the multiple-barrier concept as the guiding principle for providing safe water. This chapter discusses multiple barriers as they relate to the basic criteria for selecting and protecting source waters, including known and potential sources ...

  10. Source Term Model for Vortex Generator Vanes in a Navier-Stokes Computer Code

    NASA Technical Reports Server (NTRS)

    Waithe, Kenrick A.

    2004-01-01

    A source term model for an array of vortex generators was implemented into a non-proprietary Navier-Stokes computer code, OVERFLOW. The source term models the side force created by a vortex generator vane. The model is obtained by introducing a side force to the momentum and energy equations that can adjust its strength automatically based on the local flow. The model was tested and calibrated by comparing data from numerical simulations and experiments of a single low profile vortex generator vane on a flat plate. In addition, the model was compared to experimental data of an S-duct with 22 co-rotating, low profile vortex generators. The source term model allowed a grid reduction of about seventy percent when compared with the numerical simulations performed on a fully gridded vortex generator on a flat plate without adversely affecting the development and capture of the vortex created. The source term model was able to predict the shape and size of the stream-wise vorticity and velocity contours very well when compared with both numerical simulations and experimental data. The peak vorticity and its location were also predicted very well when compared to numerical simulations and experimental data. The circulation predicted by the source term model matches the prediction of the numerical simulation. The source term model predicted the engine fan face distortion and total pressure recovery of the S-duct with 22 co-rotating vortex generators very well. The source term model allows a researcher to quickly investigate different locations of individual or a row of vortex generators. The researcher is able to conduct a preliminary investigation with minimal grid generation and computational time.
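    The idea of a flow-adaptive side-force source term can be sketched schematically: the force scales with local dynamic pressure and the incidence angle between the local velocity and the vane chord, acting normal to the local flow. This is a simplified stand-in, not the actual model implemented in OVERFLOW; the coefficient, vane area, and air density below are illustrative:

    ```python
    from math import atan2, cos, sin, hypot, pi

    def vane_side_force(u, v, chord_angle, rho=1.225, area=1e-3, c_vg=2.0):
        """Schematic vortex-generator source term (2-D sketch): side force
        magnitude ~ c_vg * rho * area * |u|^2 * sin(alpha) * cos(alpha),
        where alpha is the incidence of the local flow on the vane chord;
        the force acts normal to the local velocity direction."""
        speed = hypot(u, v)
        alpha = atan2(v, u) - chord_angle          # local incidence angle
        mag = c_vg * rho * area * speed ** 2 * sin(alpha) * cos(alpha)
        nx, ny = -v / speed, u / speed             # unit normal to the flow
        return mag * nx, mag * ny
    ```

    Because the strength follows the local velocity, the term "adjusts automatically" as the flow develops: flow aligned with the chord produces no force, and the force peaks near 45° incidence.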

  11. Distribution, sources and health risk assessment of mercury in kindergarten dust

    NASA Astrophysics Data System (ADS)

    Sun, Guangyi; Li, Zhonggen; Bi, Xiangyang; Chen, Yupeng; Lu, Shuangfang; Yuan, Xin

    2013-07-01

    Mercury (Hg) contamination in urban areas is a pressing issue in environmental research. In this study, the distribution, sources and health risk of Hg in dust from 69 kindergartens in Wuhan, China, were investigated. In comparison with most other cities, the concentrations of total mercury (THg) and methylmercury (MeHg) were significantly elevated, ranging from 0.15 to 10.59 mg kg-1 and from 0.64 to 3.88 μg kg-1, respectively. Among the five different urban areas, the educational area had the highest concentrations of THg and MeHg. GIS mapping was used to identify hot-spot areas and assess the potential pollution sources of Hg. The emissions of coal-power plants and coking plants were the main sources of THg in the dust, whereas the contributions of municipal solid waste (MSW) landfills and iron and steel smelting related industries were not significant. However, the emission of MSW landfills was considered to be an important source of MeHg in the studied area. The health risk assessment indicated a high adverse health effect of the kindergarten dust, in terms of Hg contamination, on children living in the educational area (hazard index (HI) = 6.89).
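    A hazard index such as the HI = 6.89 quoted above is a sum of hazard quotients, each the ratio of an average daily dose to a reference dose. A schematic dust-ingestion version, with illustrative child-exposure defaults rather than the paper's parameter set:

    ```python
    def hazard_quotient(conc_mg_kg, rfd_mg_kg_day, ing_rate_mg_day=50.0,
                        ef_days_yr=350, ed_yr=6, bw_kg=15.0):
        """HQ = ADD / RfD for incidental dust ingestion, with ADD the
        average daily dose in mg/(kg*day). All exposure parameters here
        are illustrative defaults, not values from the study."""
        at_days = ed_yr * 365.0                       # averaging time
        add = (conc_mg_kg * 1e-6 * ing_rate_mg_day * ef_days_yr * ed_yr
               / (bw_kg * at_days))
        return add / rfd_mg_kg_day

    # hazard index = sum of HQs (illustrative concentrations and RfDs)
    hi = hazard_quotient(5.0, 3.0e-4) + hazard_quotient(0.002, 1.0e-4)
    ```

    An HI above 1 flags a potential non-carcinogenic risk, which is why the reported value of 6.89 for the educational area is notable.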

  12. Observation-based source terms in the third-generation wave model WAVEWATCH

    NASA Astrophysics Data System (ADS)

    Zieger, Stefan; Babanin, Alexander V.; Erick Rogers, W.; Young, Ian R.

    2015-12-01

    Measurements collected during the AUSWEX field campaign at Lake George (Australia) resulted in new insights into the processes of wind-wave interaction and whitecapping dissipation, and consequently new parameterizations of the input and dissipation source terms. The new nonlinear wind input term accounts for the dependence of growth on wave steepness, for airflow separation, and for negative growth rates under adverse winds. The new dissipation terms feature an inherent breaking term, a cumulative dissipation term, and a term due to production of turbulence by waves, which is particularly relevant for decaying seas and for swell; the latter is consistent with the observed decay rate of ocean swell. This paper describes these source terms as implemented in WAVEWATCH III® and evaluates their performance against existing source terms in academic duration-limited tests, against buoy measurements for windsea-dominated conditions, under conditions of extreme wind forcing (Hurricane Katrina), and against altimeter data in global hindcasts. Results show agreement in growth curves as well as in integral and spectral parameters in the simulations and hindcasts.

  13. A study of numerical methods for hyperbolic conservation laws with stiff source terms

    NASA Technical Reports Server (NTRS)

    Leveque, R. J.; Yee, H. C.

    1988-01-01

    The proper modeling of nonequilibrium gas dynamics is required in certain regimes of hypersonic flow. For inviscid flow this gives a system of conservation laws coupled with source terms representing the chemistry. Often a wide range of time scales is present in the problem, leading to numerical difficulties as in stiff systems of ordinary differential equations. Stability can be achieved by using implicit methods, but other numerical difficulties are observed. The behavior of typical numerical methods on a simple advection equation with a parameter-dependent source term was studied. Two approaches to incorporate the source term were utilized: MacCormack type predictor-corrector methods with flux limiters, and splitting methods in which the fluid dynamics and chemistry are handled in separate steps. Various comparisons over a wide range of parameter values were made. In the stiff case where the solution contains discontinuities, incorrect numerical propagation speeds are observed with all of the methods considered. This phenomenon is studied and explained.
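    The splitting approach discussed above is easy to sketch on the linear model problem u_t + a·u_x = −μu: alternate an upwind advection step with exact integration of the source term. At CFL = 1 the upwind step is exact, so the split scheme reproduces the analytic decaying translation; the spurious propagation speeds reported in the paper arise in the stiff, discontinuous cases this simple sketch deliberately avoids:

    ```python
    from math import exp, sin, pi

    def advect_decay(u0, a, mu, dx, dt, steps):
        """Operator splitting for u_t + a*u_x = -mu*u (a > 0):
        each time step applies one first-order upwind advection step,
        then exact integration of the source term (periodic boundary)."""
        u = list(u0)
        n = len(u)
        c = a * dt / dx                    # CFL number, must be <= 1
        decay = exp(-mu * dt)
        for _ in range(steps):
            u = [u[i] - c * (u[i] - u[i - 1]) for i in range(n)]  # upwind
            u = [decay * ui for ui in u]                          # source
        return u

    n, a, mu = 100, 1.0, 3.0
    dx = 1.0 / n
    dt = dx / a                            # CFL = 1: advection step is exact
    u0 = [sin(2 * pi * i * dx) for i in range(n)]
    u = advect_decay(u0, a, mu, dx, dt, steps=n)   # advance one full period
    ```

    After one period the exact solution is the initial profile scaled by exp(−μT); with stiff nonlinear sources and discontinuous data, the same splitting can lock the discontinuity to the grid and move it at the wrong speed, which is the phenomenon the paper analyzes.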

  14. 16 CFR Table 4 to Part 1512 - Relative Energy Distribution of Sources

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 16 Commercial Practices 2 2011-01-01 2011-01-01 false Relative Energy Distribution of Sources 4... SUBSTANCES ACT REGULATIONS REQUIREMENTS FOR BICYCLES Pt. 1512, Table 4 Table 4 to Part 1512—Relative Energy Distribution of Sources Wave length (nanometers) Relative energy 380 9.79 390 12.09 400 14.71 410 17.68 420...

  15. 16 CFR Table 4 to Part 1512 - Relative Energy Distribution of Sources

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 16 Commercial Practices 2 2010-01-01 2010-01-01 false Relative Energy Distribution of Sources 4... SUBSTANCES ACT REGULATIONS REQUIREMENTS FOR BICYCLES Pt. 1512, Table 4 Table 4 to Part 1512—Relative Energy Distribution of Sources Wave length (nanometers) Relative energy 380 9.79 390 12.09 400 14.71 410 17.68 420...

  16. 77 FR 10490 - SourceGas Distribution LLC; Notice of Filing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-22

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF ENERGY Federal Energy Regulatory Commission SourceGas Distribution LLC; Notice of Filing Take notice that on February 14, 2012, SourceGas Distribution LLC submitted a revised baseline filing of their Statement of...

  17. 16 CFR Table 4 to Part 1512 - Relative Energy Distribution of Sources

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 16 Commercial Practices 2 2013-01-01 2013-01-01 false Relative Energy Distribution of Sources 4... SUBSTANCES ACT REGULATIONS REQUIREMENTS FOR BICYCLES Pt. 1512, Table 4 Table 4 to Part 1512—Relative Energy Distribution of Sources Wave length (nanometers) Relative energy 380 9.79 390 12.09 400 14.71 410 17.68 420...

  18. 16 CFR Table 4 to Part 1512 - Relative Energy Distribution of Sources

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 16 Commercial Practices 2 2012-01-01 2012-01-01 false Relative Energy Distribution of Sources 4... SUBSTANCES ACT REGULATIONS REQUIREMENTS FOR BICYCLES Pt. 1512, Table 4 Table 4 to Part 1512—Relative Energy Distribution of Sources Wave length (nanometers) Relative energy 380 9.79 390 12.09 400 14.71 410 17.68 420...

  19. 16 CFR Table 4 to Part 1512 - Relative Energy Distribution of Sources

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 16 Commercial Practices 2 2014-01-01 2014-01-01 false Relative Energy Distribution of Sources 4... SUBSTANCES ACT REGULATIONS REQUIREMENTS FOR BICYCLES Pt. 1512, Table 4 Table 4 to Part 1512—Relative Energy Distribution of Sources Wave length (nanometers) Relative energy 380 9.79 390 12.09 400 14.71 410 17.68 420...

  20. 30 CFR 872.12 - Where do moneys distributed from the Fund and other sources go?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 30 Mineral Resources 3 2013-07-01 2013-07-01 false Where do moneys distributed from the Fund and other sources go? 872.12 Section 872.12 Mineral Resources OFFICE OF SURFACE MINING RECLAMATION AND... AND INDIAN TRIBES § 872.12 Where do moneys distributed from the Fund and other sources go? (a)...

  1. 30 CFR 872.12 - Where do moneys distributed from the Fund and other sources go?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 30 Mineral Resources 3 2014-07-01 2014-07-01 false Where do moneys distributed from the Fund and other sources go? 872.12 Section 872.12 Mineral Resources OFFICE OF SURFACE MINING RECLAMATION AND... AND INDIAN TRIBES § 872.12 Where do moneys distributed from the Fund and other sources go? (a)...

  2. 30 CFR 872.12 - Where do moneys distributed from the Fund and other sources go?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 3 2010-07-01 2010-07-01 false Where do moneys distributed from the Fund and other sources go? 872.12 Section 872.12 Mineral Resources OFFICE OF SURFACE MINING RECLAMATION AND... AND INDIAN TRIBES § 872.12 Where do moneys distributed from the Fund and other sources go? (a)...

  3. The long-term problems of contaminated land: Sources, impacts and countermeasures

    SciTech Connect

    Baes, C.F. III

    1986-11-01

    This report examines the various sources of radiological land contamination; its extent; its impacts on man, agriculture, and the environment; countermeasures for mitigating exposures; radiological standards; alternatives for achieving land decontamination and cleanup; and possible alternatives for utilizing the land. The major potential sources of extensive long-term land contamination with radionuclides, in order of decreasing extent, are nuclear war, detonation of a single nuclear weapon (e.g., a terrorist act), serious reactor accidents, and nonfission nuclear weapons accidents that disperse the nuclear fuels (termed ''broken arrows'').

  4. A study of numerical methods for hyperbolic conservation laws with stiff source terms

    NASA Technical Reports Server (NTRS)

    Leveque, R. J.; Yee, H. C.

    1990-01-01

    In the present study of the behavior of typical numerical methods in the case of a model advection equation having a parameter-dependent source term, two approaches to the incorporation of the source terms are used: MacCormack-type predictor-corrector methods with flux limiters, and splitting methods in which the fluid dynamics and chemistry are handled in separate steps. The latter are found to perform slightly better. The model scalar equation is used to show that the incorrectness of the propagation speeds of discontinuities observed in the stiff case is due to the introduction of nonequilibrium values through numerical dissipation in the advection step.

  5. High Order Finite Difference Methods with Subcell Resolution for Advection Equations with Stiff Source Terms

    DTIC Science & Technology

    2011-06-16

introduce this anti-diffusive WENO scheme for Eq. (11). Let x_i, i = 1, ..., N be a uniform (for simplicity) mesh of the computational domain, with mesh...example is the model problem of [23]. Consider Eq. (4) with f(u) = u, the source term given by Eq. (5), and the initial condition: u(x, 0) = { 1, x ≤ 0.3...should be always zero. However, if µ in the source term Eq. (5) is very large, the numerical errors of u in the transition region can result in large

  6. Design parameters and source terms: Volume 1, Design parameters: Revision 0

    SciTech Connect

    Not Available

    1987-10-01

    The Design Parameters and Source Terms Document was prepared in accordance with DOE request and to provide data for the environmental impact study to be performed in the future for the Deaf Smith County, Texas site for a nuclear waste repository in salt. This document updates a previous unpublished report by Stearns Catalytic Corporation (SCC), entitled ''Design Parameters and Source Terms for a Two-Phase Repository in Salt,'' 1985, to the level of the Site Characterization Plan - Conceptual Design Report. The previous unpublished SCC Study identifies the data needs for the Environmental Assessment effort for seven possible Salt Repository sites.

  7. Monitoring Design for Source Identification in Water Distribution Systems

    EPA Science Inventory

    The design of sensor networks for the purpose of monitoring for contaminants in water distribution systems is currently an active area of research. Much of the effort has been directed at the contamination detection problem and the expression of public health protection objective...

  8. Using natural archives to track sources and long-term trends of pollution: an introduction

    USGS Publications Warehouse

    Jules Blais,; Rosen, Michael R.; John Smol,

    2015-01-01

    This book explores the myriad ways that environmental archives can be used to study the distribution and long-term trajectories of contaminants. The volume first focuses on reviews that examine the integrity of the historic record, including factors related to hydrology, post-depositional diffusion, and mixing processes. This is followed by a series of chapters dealing with the diverse archives available for long-term studies of environmental pollution.

  9. Using Reactive Transport Modeling to Evaluate the Source Term at Yucca Mountain

    SciTech Connect

    Y. Chen

    2001-12-19

The conventional approach of source-term evaluation for performance assessment of nuclear waste repositories uses speciation-solubility modeling tools and assumes that pure phases of radioelements control their solubility. This assumption may not reflect reality, as most radioelements (except for U) may not form their own pure phases. As a result, solubility limits predicted using the conventional approach are several orders of magnitude higher than the concentrations of radioelements measured in spent fuel dissolution experiments. This paper presents the author's attempt to use a non-conventional approach to evaluate the source term of radionuclide release for Yucca Mountain. Based on the general reactive-transport code AREST-CT, a model for spent fuel dissolution and secondary phase precipitation has been constructed. The model accounts for both equilibrium and kinetic reactions. Its predictions have been compared against laboratory experiments and natural analogues. It is found that without calibration, the simulated results match laboratory and field observations very well in many aspects. More important is the fact that no contradictions between them have been found. This provides confidence in the predictive power of the model. Based on the concept of Np incorporation into uranyl minerals, the model not only predicts a lower Np source term than that given by conventional Np solubility models, but also produces results which are consistent with laboratory measurements and observations. Moreover, two hypotheses, whether or not Np enters tertiary uranyl minerals, have been tested by comparing model predictions against laboratory observations; the results favor the former. It is concluded that this non-conventional approach to source-term evaluation not only eliminates some of the over-conservatism of the conventional solubility approach, but also gives a realistic representation of the system of interest, which is a prerequisite for truly understanding the long-term

  10. The distribution and source of boulders on asteroid 4179 Toutatis

    NASA Astrophysics Data System (ADS)

    Jiang, Yun; Ji, Jianghui; Huang, Jiangchuan; Marchi, Simone; Li, Yuan; Ip, Wing-Huen

    2016-01-01

Boulders are ubiquitous on the surfaces of asteroids, and their spatial and size distributions provide information on the geological evolution and collisional history of their parent bodies. We identify more than 200 boulders on near-Earth asteroid 4179 Toutatis based on images obtained during the Chang'e-2 flyby. The cumulative boulder size-frequency distribution (SFD) gives a power-law index of -4.4 +/- 0.1, which is clearly steeper than those of boulders on Itokawa and Eros, indicating a much higher degree of fragmentation. Correlation analyses with craters suggest that most boulders cannot be produced solely as products of cratering, but are probably surviving fragments from the parent body of Toutatis, accreted after its breakup. Similar to Itokawa, Toutatis probably has a rubble-pile structure, but it has a different preservation state of boulders.

  11. Open-Source, Distributed Computational Environment for Virtual Materials Exploration

    DTIC Science & Technology

    2015-01-01


  12. High resolution stationary digital breast tomosynthesis using distributed carbon nanotube x-ray source array

    PubMed Central

    Qian, Xin; Tucker, Andrew; Gidcumb, Emily; Shan, Jing; Yang, Guang; Calderon-Colon, Xiomara; Sultana, Shabana; Lu, Jianping; Zhou, Otto; Spronk, Derrek; Sprenger, Frank; Zhang, Yiheng; Kennedy, Don; Farbizio, Tom; Jing, Zhenxue

    2012-01-01

binning, the projection resolution along the scanning direction increased from 4.0 cycles/mm [at 10% modulation-transfer-function (MTF)] in DBT to 5.1 cycles/mm in s-DBT at a magnification factor of 1.08. The improvement is more pronounced for faster scanning speeds, wider angular coverage, and smaller detector pixel sizes. The scanning speed depends on the detector, the number of views, and the imaging dose. With a 240 ms detector readout time, the s-DBT system scanning time is 6.3 s for a 15-view, 100 mAs scan regardless of the angular coverage. The scanning speed can be reduced to less than 4 s when detectors become faster. Initial phantom studies showed good-quality reconstructed images. Conclusions: A prototype s-DBT scanner has been developed and evaluated by retrofitting the Selenia rotating-gantry DBT scanner with a spatially distributed CNT x-ray source array. Preliminary results show that it substantially improves system spatial resolution by eliminating image blur due to x-ray focal spot motion. The scanning speed of the s-DBT system is independent of angular coverage and can be increased with a faster detector without image degradation. The accelerated lifetime measurement demonstrated the long-term stability of the CNT x-ray source array, with a typical clinical operation lifetime of over 3 years. PMID:22482630

  13. Risk comparisons based on representative source terms with the NUREG-1150 results

    SciTech Connect

    Mubayi, V.; Davis, R.E.; Hanson, A.L.

    1993-12-01

    Standardized source terms, based on a specified release of fission products during potential accidents at commercial light water nuclear reactors, have been used for a long time for regulatory purposes. The siting of nuclear power plants, for example, which is governed by Part 100 of the Code of Federal Regulations Title 10, has utilized the source term recommended in TID-14844 supplemented by Regulatory Guides 1.3 and 1.4 and the Standard Review Plan. With the introduction of probabilistic risk assessment (PRA) methods, the source terms became characterized not only by the amount of fission products released, but also by the probability of the release. In the Reactor Safety Study, for example, several categories of source terms, characterized by release severity and probability, were developed for both pressurized and boiling water reactors (PWRs and BWRs). These categories were based on an understanding of the likely paths and associated phenomenology of accident progression following core damage to possible failure of the containment and release to the environment.

  14. 7 CFR 1822.268 - Rates, terms, and source of funds.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 12 2013-01-01 2013-01-01 false Rates, terms, and source of funds. 1822.268 Section 1822.268 Agriculture Regulations of the Department of Agriculture (Continued) RURAL HOUSING SERVICE... AGRICULTURE LOANS AND GRANTS PRIMARILY FOR REAL ESTATE PURPOSES RURAL HOUSING LOANS AND GRANTS Rural...

  15. Short-Term Memory Stages in Sign vs. Speech: The Source of the Serial Span Discrepancy

    ERIC Educational Resources Information Center

    Hall, Matthew L.; Bavelier, Daphne

    2011-01-01

    Speakers generally outperform signers when asked to recall a list of unrelated verbal items. This phenomenon is well established, but its source has remained unclear. In this study, we evaluate the relative contribution of the three main processing stages of short-term memory--perception, encoding, and recall--in this effect. The present study…

  16. Understanding emergency medical dispatch in terms of distributed cognition: a case study.

    PubMed

    Furniss, Dominic; Blandford, Ann

    Emergency medical dispatch (EMD) is typically a team activity, requiring fluid coordination and communication between team members. Such working situations have often been described in terms of distributed cognition (DC), a framework for understanding team working. DC takes account of factors such as shared representations and artefacts to support reasoning about team working. Although the language of DC has been developed over several years, little attention has been paid to developing a methodology or reusable representation which supports reasoning about an interactive system from a DC perspective. We present a case study in which we developed a method for constructing a DC account of team working in the domain of EMD, focusing on the use of the method for describing an existing EMD work system, identifying sources of weakness in that system, and reasoning about the likely consequences of redesign of the system. The resulting DC descriptions have yielded new insights into the design of EMD work and of tools to support that work within a large EMD centre.

  17. Utilities for master source code distribution: MAX and Friends

    NASA Technical Reports Server (NTRS)

    Felippa, Carlos A.

    1988-01-01

MAX is a program for the manipulation of FORTRAN master source code (MSC). This is a technique by which one maintains one and only one master copy of a FORTRAN program under a program development system, which for MAX is assumed to be VAX/VMS. The master copy is not intended to be directly compiled. Instead it must be pre-processed by MAX to produce compilable instances. These instances may correspond to different code versions (for example, double precision versus single precision), different machines (for example, IBM, CDC, Cray) or different operating systems (i.e., VAX/VMS versus VAX/UNIX). The advantage of using a master source is more pronounced in complex application programs that are developed and maintained over many years and are to be transported and executed on several computer environments. The version lag problem that plagues many such programs is avoided by this approach. MAX is complemented by several auxiliary programs that perform nonessential functions. The ensemble is collectively known as MAX and Friends. All of these programs, including MAX, are executed as foreign VAX/VMS commands and can easily be hidden in customized VMS command procedures.

  18. Laboratory experiments designed to provide limits on the radionuclide source term for the NNWSI Project

    SciTech Connect

    Oversby, V.M.; McCright, R.D.

    1984-11-01

    The Nevada Nuclear Waste Storage Investigations Project is investigating the suitability of the tuffaceous rocks at Yucca Mountain Nevada for potential use as a high-level nuclear waste repository. The horizon under investigation lies above the water table, and therefore offers a setting that differs substantially from other potential repository sites. The unsaturated zone environment allows a simple, but effective, waste package design. The source term for radionuclide release from the waste package will be based on laboratory experiments that determine the corrosion rates and mechanisms for the metal container and the dissolution rate of the waste form under expected long term conditions. This paper describes the present status of laboratory results and outlines the approach to be used in combining the data to develop a realistic source term for release of radionuclides from the waste package. 16 refs., 3 figs., 1 tab.

  19. Severe accident source term characteristics for selected Peach Bottom sequences predicted by the MELCOR Code

    SciTech Connect

    Carbajo, J.J.

    1993-09-01

    The purpose of this report is to compare in-containment source terms developed for NUREG-1159, which used the Source Term Code Package (STCP), with those generated by MELCOR to identify significant differences. For this comparison, two short-term depressurized station blackout sequences (with a dry cavity and with a flooded cavity) and a Loss-of-Coolant Accident (LOCA) concurrent with complete loss of the Emergency Core Cooling System (ECCS) were analyzed for the Peach Bottom Atomic Power Station (a BWR-4 with a Mark I containment). The results indicate that for the sequences analyzed, the two codes predict similar total in-containment release fractions for each of the element groups. However, the MELCOR/CORBH Package predicts significantly longer times for vessel failure and reduced energy of the released material for the station blackout sequences (when compared to the STCP results). MELCOR also calculated smaller releases into the environment than STCP for the station blackout sequences.

  20. Regulatory Technology Development Plan - Sodium Fast Reactor: Mechanistic Source Term – Trial Calculation

    SciTech Connect

    Grabaskas, David; Bucknor, Matthew; Jerden, James; Brunett, Acacia J.; Denman, Matthew; Clark, Andrew; Denning, Richard S.

    2016-10-01

The potential release of radioactive material during a plant incident, referred to as the source term, is a vital design metric and will be a major focus of advanced reactor licensing. The U.S. Nuclear Regulatory Commission has stated an expectation for advanced reactor vendors to present a mechanistic assessment of the potential source term in their license applications. The mechanistic source term presents an opportunity for vendors to realistically assess the radiological consequences of an incident and may allow reduced emergency planning zones and smaller plant sites. However, the development of a mechanistic source term for advanced reactors is not without challenges, as there are often numerous phenomena impacting the transportation and retention of radionuclides. This project sought to evaluate U.S. capabilities regarding the mechanistic assessment of radionuclide release from core damage incidents at metal fueled, pool-type sodium fast reactors (SFRs). The purpose of the analysis was to identify, and prioritize, any gaps regarding computational tools or data necessary for the modeling of radionuclide transport and retention phenomena. To accomplish this task, a parallel-path analysis approach was utilized. One path, led by Argonne and Sandia National Laboratories, sought to perform a mechanistic source term assessment using available codes, data, and models, with the goal of identifying gaps in the current knowledge base. The second path, performed by an independent contractor, performed sensitivity analyses to determine the importance of particular radionuclides and transport phenomena with regard to offsite consequences. The results of the two pathways were combined to prioritize gaps in current capabilities.

  1. Fukushima Daiichi reactor source term attribution using cesium isotope ratios from contaminated environmental samples

    DOE PAGES

    Snow, Mathew S.; Snyder, Darin C.; Delmore, James E.

    2016-01-18

Source term attribution of environmental contamination following the Fukushima Daiichi Nuclear Power Plant (FDNPP) disaster is complicated by a large number of possible similar emission source terms (e.g. FDNPP reactor cores 1–3 and spent fuel ponds 1–4). Cesium isotopic analyses can be utilized to discriminate between environmental contamination from different FDNPP source terms and, if samples are sufficiently temporally resolved, potentially provide insights into the extent of reactor core damage at a given time. Rice, soil, mushroom, and soybean samples taken 100–250 km from the FDNPP site were dissolved using microwave digestion. Radiocesium was extracted and purified using two sequential ammonium molybdophosphate-polyacrylonitrile columns, following which 135Cs/137Cs isotope ratios were measured using thermal ionization mass spectrometry (TIMS). Results were compared with data reported previously from locations to the northwest of FDNPP and 30 km to the south of FDNPP. 135Cs/137Cs isotope ratios from samples 100–250 km to the southwest of the FDNPP site show a consistent value of 0.376 ± 0.008. 135Cs/137Cs versus 134Cs/137Cs correlation plots suggest that radiocesium to the southwest is derived from a mixture of FDNPP reactor cores 1, 2, and 3. Conclusions from the cesium isotopic data are in agreement with those derived independently based upon the event chronology combined with meteorological conditions at the time of the disaster. In conclusion, cesium isotopic analyses provide a powerful tool for source term discrimination of environmental radiocesium contamination at the FDNPP site. For higher-precision source term attribution and forensic determination of the FDNPP core conditions based upon cesium, analyses of a larger number of samples from locations to the north and south of the FDNPP site (particularly time-resolved air filter samples) are needed.
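As a rough illustration of the ratio-based mixing analysis this abstract relies on: isotope ratios of a mixture combine linearly when weighted by each source's share of the denominator isotope. The endmember ratios and fractions below are hypothetical, chosen only for the example, and are not the measured FDNPP core signatures:

```python
def mixed_ratio(ratios, cs137_fractions):
    """Isotope ratio (e.g. 135Cs/137Cs) of a mixture of source terms.

    ratios: ratio of each source term
    cs137_fractions: fraction of total 137Cs contributed by each source
    (ratios mix linearly when weighted by the denominator isotope).
    """
    assert abs(sum(cs137_fractions) - 1.0) < 1e-9
    return sum(r * f for r, f in zip(ratios, cs137_fractions))

# Hypothetical endmember ratios and mixing fractions (illustrative only)
cores = [0.33, 0.38, 0.41]
fractions = [0.2, 0.5, 0.3]
print(mixed_ratio(cores, fractions))  # lies between the endmember extremes
```

In the same spirit, a measured ratio together with assumed endmembers constrains the mixing fractions, which is what the 135Cs/137Cs versus 134Cs/137Cs correlation plots exploit.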

  2. Clinical Application of Spatiotemporal Distributed Source Analysis in Presurgical Evaluation of Epilepsy

    PubMed Central

    Tanaka, Naoaki; Stufflebeam, Steven M.

    2014-01-01

Magnetoencephalography (MEG), which acquires neuromagnetic fields in the brain, is a useful diagnostic tool in presurgical evaluation of epilepsy. Previous studies have shown that MEG affects the planning of intracranial electroencephalography placement and correlates with surgical outcomes by using a single dipole model. Spatiotemporal source analysis using distributed source models is an advanced method for analyzing MEG, and has been recently introduced for analyzing epileptic spikes. It has advantages over the conventional single dipole analysis for obtaining accurate sources and understanding the propagation of epileptic spikes. In this article, we review the source analysis methods, describe the techniques of the distributed source analysis, interpretation of source distribution maps, and discuss the benefits and feasibility of this method in evaluation of epilepsy. PMID:24574999

  3. Clinical application of spatiotemporal distributed source analysis in presurgical evaluation of epilepsy.

    PubMed

    Tanaka, Naoaki; Stufflebeam, Steven M

    2014-01-01

Magnetoencephalography (MEG), which acquires neuromagnetic fields in the brain, is a useful diagnostic tool in presurgical evaluation of epilepsy. Previous studies have shown that MEG affects the planning of intracranial electroencephalography placement and correlates with surgical outcomes by using a single dipole model. Spatiotemporal source analysis using distributed source models is an advanced method for analyzing MEG, and has been recently introduced for analyzing epileptic spikes. It has advantages over the conventional single dipole analysis for obtaining accurate sources and understanding the propagation of epileptic spikes. In this article, we review the source analysis methods, describe the techniques of the distributed source analysis, interpretation of source distribution maps, and discuss the benefits and feasibility of this method in evaluation of epilepsy.

  4. Simulation of dose distribution for iridium-192 brachytherapy source type-H01 using MCNPX

    SciTech Connect

    Purwaningsih, Anik

    2014-09-30

Dosimetric data for a brachytherapy source should be known before it is used for clinical treatment. The Iridium-192 source type H01, manufactured by PRR-BATAN for brachytherapy, does not yet have known dosimetric data. The radial dose function and the anisotropic dose distribution are among the primary characteristics of a brachytherapy source. The dose distribution for the Iridium-192 source type H01 was obtained from the dose calculation formalism recommended in the AAPM TG-43U1 report using the MCNPX 2.6.0 Monte Carlo simulation code. To assess the effect of the cavity introduced in the Iridium-192 type H01 by the manufacturing process, calculations were also performed for the same source without the cavity. The calculated radial dose function and anisotropic dose distribution for the Iridium-192 source type H01 were compared with those of another model of Iridium-192 source.

  5. Simulation of dose distribution for iridium-192 brachytherapy source type-H01 using MCNPX

    NASA Astrophysics Data System (ADS)

    Purwaningsih, Anik

    2014-09-01

Dosimetric data for a brachytherapy source should be known before it is used for clinical treatment. The Iridium-192 source type H01, manufactured by PRR-BATAN for brachytherapy, does not yet have known dosimetric data. The radial dose function and the anisotropic dose distribution are among the primary characteristics of a brachytherapy source. The dose distribution for the Iridium-192 source type H01 was obtained from the dose calculation formalism recommended in the AAPM TG-43U1 report using the MCNPX 2.6.0 Monte Carlo simulation code. To assess the effect of the cavity introduced in the Iridium-192 type H01 by the manufacturing process, calculations were also performed for the same source without the cavity. The calculated radial dose function and anisotropic dose distribution for the Iridium-192 source type H01 were compared with those of another model of Iridium-192 source.
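The TG-43U1 formalism referenced in this abstract can be sketched in its point-source approximation, D(r) = S_K * Lambda * (r0/r)^2 * g_P(r) * phi_an(r). The radial dose function table, anisotropy factor, and dose-rate constant used below are illustrative placeholders, not the H01 source data:

```python
import numpy as np

def dose_rate_point(r, S_K, Lambda, r_g, g_tab, phi_an=1.0, r0=1.0):
    """TG-43U1 point-source approximation of the dose rate:
        D(r) = S_K * Lambda * (r0 / r)**2 * g_P(r) * phi_an(r)
    with g_P(r) linearly interpolated from a tabulated radial dose
    function. r and r0 in cm; S_K is the air-kerma strength."""
    g = np.interp(r, r_g, g_tab)
    return S_K * Lambda * (r0 / r) ** 2 * g * phi_an

# Hypothetical radial dose function table (illustrative values only,
# not measured data for the H01 source)
r_g   = np.array([0.5, 1.0, 2.0, 3.0, 5.0])      # cm
g_tab = np.array([0.99, 1.00, 1.00, 0.99, 0.97])

# Dose rate at 2 cm for S_K = 1 U and a typical 192Ir dose-rate constant
print(dose_rate_point(2.0, S_K=1.0, Lambda=1.11, r_g=r_g, g_tab=g_tab))
```

The inverse-square geometry factor dominates the fall-off; g_P(r) captures the residual attenuation and scatter, which is what Monte Carlo codes such as MCNPX are used to tabulate.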

  6. Fire Source Localization Based on Distributed Temperature Sensing by a Dual-Line Optical Fiber System

    PubMed Central

    Sun, Miao; Tang, Yuquan; Yang, Shuang; Li, Jun; Sigrist, Markus W.; Dong, Fengzhong

    2016-01-01

We propose a method for localizing a fire source using an optical fiber distributed temperature sensor system. A section of two parallel optical fibers employed as the sensing element is installed near the ceiling of a closed room in which the fire source is located. By measuring the temperature of hot air flows, the problem of three-dimensional fire source localization is transformed to two dimensions. The source localization method is verified experimentally using burning alcohol as the fire source, and it is demonstrated that the method represents a robust and reliable technique for localizing a fire source, even over long sensing ranges. PMID:27275822

  7. Fire Source Localization Based on Distributed Temperature Sensing by a Dual-Line Optical Fiber System.

    PubMed

    Sun, Miao; Tang, Yuquan; Yang, Shuang; Li, Jun; Sigrist, Markus W; Dong, Fengzhong

    2016-06-06

We propose a method for localizing a fire source using an optical fiber distributed temperature sensor system. A section of two parallel optical fibers employed as the sensing element is installed near the ceiling of a closed room in which the fire source is located. By measuring the temperature of hot air flows, the problem of three-dimensional fire source localization is transformed to two dimensions. The source localization method is verified experimentally using burning alcohol as the fire source, and it is demonstrated that the method represents a robust and reliable technique for localizing a fire source, even over long sensing ranges.
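A simplified version of the two-fiber localization idea can be sketched as follows. This is not the paper's exact algorithm, only a plausible reduction: the peak of each fiber's temperature profile gives the along-fiber coordinate, and the relative peak amplitudes interpolate the cross-fiber coordinate:

```python
import numpy as np

def locate_fire(z, T1, T2, fiber_sep=1.0):
    """Estimate the fire position on the ceiling plane from two parallel
    fibers' temperature-rise profiles (a hedged sketch, not the paper's
    published algorithm).

    z: positions along the fibers; T1, T2: temperature rises above
    ambient on fiber 1 (at y = 0) and fiber 2 (at y = fiber_sep).
    """
    i1, i2 = np.argmax(T1), np.argmax(T2)
    # Along-fiber coordinate: amplitude-weighted average of the two peaks
    x = (T1[i1] * z[i1] + T2[i2] * z[i2]) / (T1[i1] + T2[i2])
    # Cross-fiber coordinate: interpolate between the fibers using the
    # relative peak temperature rises
    y = fiber_sep * T2[i2] / (T1[i1] + T2[i2])
    return x, y

# Synthetic example: a hot plume centered at x = 3.0 m, closer to fiber 1
z = np.linspace(0.0, 10.0, 201)
T1 = 8.0 * np.exp(-((z - 3.0) ** 2) / 0.5)   # stronger rise on fiber 1
T2 = 4.0 * np.exp(-((z - 3.0) ** 2) / 0.5)
x, y = locate_fire(z, T1, T2, fiber_sep=1.0)
```

This captures the abstract's point that measuring ceiling-level hot air flows reduces 3D localization to a 2D problem in the ceiling plane.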

  8. Distribution and source of the UV absorption in Venus' atmosphere

    NASA Technical Reports Server (NTRS)

    Pollack, J. B.; Toon, O. B.; Whitten, R. C.; Boese, R.; Ragent, B.; Tomasko, M.; Esposito, L.; Travis, L.; Wiedman, D.

    1980-01-01

    The model predictions were compared with the Pioneer Venus probes and orbiter to determine the composition of the UV absorbing materials. The simulations were carried out with radiative transfer codes which included spacecraft constraints on the aerosol and gas characteristics in the Venus atmosphere; gaseous SO2 (a source of opacity at the wavelengths below 0.32 microns), and a second absorber (which dominates above 0.32 microns) were required. The UV contrast variations are due to the optical depth changes in the upper haze layer producing brightness variations between equatorial and polar areas, and to differences in the depth over which the second UV absorber is depleted in the highest portion of the main clouds.

  9. Controlling temporal solitary waves in the generalized inhomogeneous coupled nonlinear Schrödinger equations with varying source terms

    NASA Astrophysics Data System (ADS)

    Yang, Yunqing; Yan, Zhenya; Mihalache, Dumitru

    2015-05-01

    In this paper, we study the families of solitary-wave solutions to the inhomogeneous coupled nonlinear Schrödinger equations with space- and time-modulated coefficients and source terms. By means of the similarity reduction method and Möbius transformations, many types of novel temporal solitary-wave solutions of this nonlinear dynamical system are analytically found under some constraint conditions, such as the bright-bright, bright-dark, dark-dark, periodic-periodic, W-shaped, and rational wave solutions. In particular, we find that the localized rational-type solutions can exhibit both bright-bright and bright-dark wave profiles by choosing different families of free parameters. Moreover, we analyze the relationships among the group-velocity dispersion profiles, gain or loss distributions, external potentials, and inhomogeneous source profiles, which provide the necessary constraint conditions to control the emerging wave dynamics. Finally, a series of numerical simulations are performed to show the robustness to propagation of some of the analytically obtained solitary-wave solutions. The vast class of exact solutions of inhomogeneous coupled nonlinear Schrödinger equations with source terms might be used in the study of the soliton structures in twin-core optical fibers and two-component Bose-Einstein condensates.
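To fix notation for the class of systems this abstract describes, a generic inhomogeneous coupled nonlinear Schrödinger system with space- and time-modulated coefficients and source terms can be written as below; the exact arrangement of coefficients in the paper may differ:

```latex
i\,\partial_z \psi_j
  + \beta(z,t)\,\partial_t^2 \psi_j
  + \sum_{k=1}^{2} \chi_{jk}(z,t)\,|\psi_k|^2 \psi_j
  + V(z,t)\,\psi_j
  + i\,\gamma(z,t)\,\psi_j
  = S_j(z,t), \qquad j = 1,2,
```

where $\beta$ is the group-velocity dispersion profile, $\chi_{jk}$ the self- and cross-phase modulation coefficients, $V$ an external potential, $\gamma$ a gain or loss distribution, and $S_j$ the inhomogeneous source terms whose interplay provides the constraint conditions mentioned in the abstract.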

  10. The potential distribution in the Radial Plasma Source

    NASA Astrophysics Data System (ADS)

    Fruchtman, Amnon; Makrinich, Gennady

    2011-10-01

    The Radial Plasma Source (RPS) is based on plasma acceleration by an applied voltage across a magnetic field. Here we report the recent progress in understanding the mechanism of plasma acceleration in the RPS. The RPS has a cylindrical symmetry. The accelerating electric field is radial and the magnetic field is axial. Most of the potential drop between the inner anode and the outer cathode is expected to be located where the magnetic field intensity is large. We employ an emissive probe and a Langmuir probe in order to evaluate the radial dependence of the potential. For inferring the plasma potential from the measured emissive probe potential, we employ our recently developed theory for a cylindrical emissive probe. Using the theory and the probe measurements, we plot the radial profiles in the RPS of the plasma potential as well as of the electron density and temperature. The possible modification of the geometry for propulsion applications will be discussed. Partially supported by the Israel Science Foundation, Grant 864/07.

  11. Reconstruction of far-field tsunami amplitude distributions from earthquake sources

    USGS Publications Warehouse

    Geist, Eric L.; Parsons, Thomas E.

    2016-01-01

    The probability distribution of far-field tsunami amplitudes is explained in relation to the distribution of seismic moment at subduction zones. Tsunami amplitude distributions at tide gauge stations follow a similar functional form, well described by a tapered Pareto distribution that is parameterized by a power-law exponent and a corner amplitude. Distribution parameters are first established for eight tide gauge stations in the Pacific, using maximum likelihood estimation. A procedure is then developed to reconstruct the tsunami amplitude distribution that consists of four steps: (1) define the distribution of seismic moment at subduction zones; (2) establish a source-station scaling relation from regression analysis; (3) transform the seismic moment distribution to a tsunami amplitude distribution for each subduction zone; and (4) mix the transformed distribution for all subduction zones to an aggregate tsunami amplitude distribution specific to the tide gauge station. The tsunami amplitude distribution is adequately reconstructed for four tide gauge stations using globally constant seismic moment distribution parameters established in previous studies. In comparisons to empirical tsunami amplitude distributions from maximum likelihood estimation, the reconstructed distributions consistently exhibit higher corner amplitude values, implying that in most cases, the empirical catalogs are too short to include the largest amplitudes. Because the reconstructed distribution is based on a catalog of earthquakes that is much larger than the tsunami catalog, it is less susceptible to the effects of record-breaking events and more indicative of the actual distribution of tsunami amplitudes.
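    The reconstruction rests on the tapered Pareto form, with survival function S(x) = (x_t/x)^beta * exp((x_t - x)/x_c), fitted by maximum likelihood (step 1). A minimal sketch under assumed, illustrative parameter values on synthetic amplitudes (nothing here is taken from the paper's catalogs); it exploits the fact that a tapered Pareto draw equals the minimum of a Pareto draw and a shifted exponential draw:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)

# Tapered Pareto survival: S(x) = (x_t/x)**beta * exp((x_t - x)/x_c), x >= x_t.
# A draw equals min(Pareto(beta), x_t + Exp(x_c)): the two survival functions
# multiply to give exactly S(x). Parameter values are illustrative.
x_t, beta_true, x_c_true = 0.1, 1.0, 5.0
n = 5000
pareto = x_t * rng.uniform(size=n) ** (-1.0 / beta_true)
samples = np.minimum(pareto, x_t + rng.exponential(scale=x_c_true, size=n))

def nll(params, data, x_t):
    """Negative log-likelihood; pdf is f(x) = (beta/x + 1/x_c) * S(x)."""
    beta, x_c = params
    return -np.sum(np.log(beta / data + 1.0 / x_c)
                   + beta * np.log(x_t / data)
                   + (x_t - data) / x_c)

fit = minimize(nll, x0=[0.5, 1.0], args=(samples, x_t),
               bounds=[(1e-3, 10.0), (1e-3, 1e3)])
beta_hat, x_c_hat = fit.x
print(f"power-law exponent = {beta_hat:.2f}, corner amplitude = {x_c_hat:.2f}")
```

    With a few thousand samples the power-law exponent is recovered closely; the corner amplitude, controlled only by the sparse tail, carries a much larger uncertainty, which is consistent with the paper's observation that short catalogs constrain it poorly.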

  12. Reconstruction of Far-Field Tsunami Amplitude Distributions from Earthquake Sources

    NASA Astrophysics Data System (ADS)

    Geist, Eric L.; Parsons, Tom

    2016-12-01

    The probability distribution of far-field tsunami amplitudes is explained in relation to the distribution of seismic moment at subduction zones. Tsunami amplitude distributions at tide gauge stations follow a similar functional form, well described by a tapered Pareto distribution that is parameterized by a power-law exponent and a corner amplitude. Distribution parameters are first established for eight tide gauge stations in the Pacific, using maximum likelihood estimation. A procedure is then developed to reconstruct the tsunami amplitude distribution that consists of four steps: (1) define the distribution of seismic moment at subduction zones; (2) establish a source-station scaling relation from regression analysis; (3) transform the seismic moment distribution to a tsunami amplitude distribution for each subduction zone; and (4) mix the transformed distribution for all subduction zones to an aggregate tsunami amplitude distribution specific to the tide gauge station. The tsunami amplitude distribution is adequately reconstructed for four tide gauge stations using globally constant seismic moment distribution parameters established in previous studies. In comparisons to empirical tsunami amplitude distributions from maximum likelihood estimation, the reconstructed distributions consistently exhibit higher corner amplitude values, implying that in most cases, the empirical catalogs are too short to include the largest amplitudes. Because the reconstructed distribution is based on a catalog of earthquakes that is much larger than the tsunami catalog, it is less susceptible to the effects of record-breaking events and more indicative of the actual distribution of tsunami amplitudes.

  13. Decoy-state quantum key distribution with a leaky source

    NASA Astrophysics Data System (ADS)

    Tamaki, Kiyoshi; Curty, Marcos; Lucamarini, Marco

    2016-06-01

    In recent years, there has been a great effort to prove the security of quantum key distribution (QKD) with a minimum number of assumptions. Besides its intrinsic theoretical interest, this would allow for larger tolerance against device imperfections in actual implementations. However, even in this device-independent scenario, one assumption seems unavoidable: the presence of a protected space, devoid of any unwanted information leakage, in which the legitimate parties can privately generate, process and store their classical data. In this paper we relax this unrealistic and hardly feasible assumption and introduce a general formalism to tackle the information leakage problem in most existing QKD systems. More specifically, we prove the security of optical QKD systems using phase and intensity modulators in their transmitters, which leak the setting information in an arbitrary manner. We apply our security proof to cases of practical interest and show key rates similar to those obtained in a perfectly shielded environment. Our work constitutes a fundamental step forward in guaranteeing implementation security of quantum communication systems.

  14. Measurements of Infrared and Acoustic Source Distributions in Jet Plumes

    NASA Technical Reports Server (NTRS)

    Agboola, Femi A.; Bridges, James; Saiyed, Naseem

    2004-01-01

    The aim of this investigation was to use linear phased array (LPA) microphones and infrared (IR) imaging to study the effects of advanced nozzle-mixing techniques on jet noise reduction. Several full-scale engine nozzles were tested at varying power cycles with the linear phased array set up parallel to the jet axis. The array consisted of 16 sparsely distributed microphones. The phased array microphone measurements were taken at a distance of 51.0 ft (15.5 m) from the jet axis, and the results were used to obtain relative overall sound pressure levels from one nozzle design to the other. The IR imaging system was used to acquire real-time dynamic thermal patterns of the exhaust jet from the nozzles tested. The IR camera measured the IR radiation from the nozzle exit to a distance of six fan diameters (X/D(sub FAN) = 6) along the jet plume axis. The images confirmed the expected jet plume mixing intensity, and the phased array results showed the differences in sound pressure level with respect to nozzle configuration. The results show the effects of exit-nozzle configuration changes on both the flow mixing patterns and the radiant energy dissipation patterns. By comparing the results of these two measurements, a relationship between noise reduction and core/bypass flow mixing is demonstrated.

  15. Differential dose contributions on total dose distribution of 125I brachytherapy source

    PubMed Central

    Camgöz, B.; Yeğin, G.; Kumru, M.N.

    2010-01-01

    This work improves the Monte Carlo simulation approach for the Amersham Model 6711 125I brachytherapy seed source, which is well known from many theoretical and experimental studies. The source, which has a simple geometry, was investigated with respect to the criteria of the AAPM TG-43 report. The approach offered by this study involves determining the differential dose contributions that virtual partitions of the massive radioactive element of the source make to the total dose at an analytical calculation point. Some brachytherapy seeds contain multiple radioactive elements, so the dose at any point is the sum of the separate doses from each element. For clinical treatment it is important to know the angular and radial dose distributions around a source located in cancerous tissue. The interior geometry of a source affects the dose characteristics of its distribution. Dose information on the inner geometrical structure of a brachytherapy source cannot be acquired experimentally because of the physical and geometrical limits imposed by the healthy tissue, so Monte Carlo simulation is the required approach. The EGSnrc Monte Carlo simulation software was used. In the simulation design, the radioactive source was divided into 10 rings, partitioned but not separated from each other. All differential sources were simulated for dose calculation, and the resulting dose distribution was compared with that of a single complete source. The anisotropy function was also examined mathematically. PMID:24376927

  16. Characterizing short-term stability for Boolean networks over any distribution of transfer functions.

    PubMed

    Seshadhri, C; Smith, Andrew M; Vorobeychik, Yevgeniy; Mayo, Jackson R; Armstrong, Robert C

    2016-07-01

    We present a characterization of short-term stability of Kauffman's NK (random) Boolean networks under arbitrary distributions of transfer functions. Given such a Boolean network where each transfer function is drawn from the same distribution, we present a formula that determines whether short-term chaos (damage spreading) will happen. Our main technical tool which enables the formal proof of this formula is the Fourier analysis of Boolean functions, which describes such functions as multilinear polynomials over the inputs. Numerical simulations on mixtures of threshold functions and nested canalyzing functions demonstrate the formula's correctness.
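    As a purely illustrative companion to the abstract (this is a direct simulation of damage spreading, not the paper's Fourier-analytic formula), the sketch below builds a random NK Boolean network whose transfer functions are drawn from one distribution (here: uniformly random truth tables), flips one node, and tracks the Hamming distance between the two trajectories; all sizes are assumed values:

```python
import numpy as np

rng = np.random.default_rng(0)

N, K = 200, 3  # N nodes, each reading K randomly chosen inputs

# Each node gets K input nodes and a truth table over 2**K input patterns,
# drawn i.i.d. from the uniform distribution over Boolean functions.
inputs = rng.integers(0, N, size=(N, K))
tables = rng.integers(0, 2, size=(N, 2 ** K))

def step(state):
    """Synchronous update: each node applies its truth table to its inputs."""
    idx = np.zeros(N, dtype=int)
    for k in range(K):
        idx = 2 * idx + state[inputs[:, k]]   # encode inputs as a table index
    return tables[np.arange(N), idx]

# Damage spreading: perturb one node and compare the two trajectories.
state = rng.integers(0, 2, size=N)
perturbed = state.copy()
perturbed[0] ^= 1

for _ in range(5):
    state, perturbed = step(state), step(perturbed)

damage = int(np.sum(state != perturbed))
print(f"Hamming distance after 5 steps: {damage}")
```

    For uniformly random truth tables the average sensitivity is K/2, so K = 3 sits in the regime where short-term chaos is expected; swapping in threshold or nested canalyzing functions reproduces the mixtures studied numerically in the paper.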

  17. Characterizing short-term stability for Boolean networks over any distribution of transfer functions

    NASA Astrophysics Data System (ADS)

    Seshadhri, C.; Smith, Andrew M.; Vorobeychik, Yevgeniy; Mayo, Jackson R.; Armstrong, Robert C.

    2016-07-01

    We present a characterization of short-term stability of Kauffman's NK (random) Boolean networks under arbitrary distributions of transfer functions. Given such a Boolean network where each transfer function is drawn from the same distribution, we present a formula that determines whether short-term chaos (damage spreading) will happen. Our main technical tool which enables the formal proof of this formula is the Fourier analysis of Boolean functions, which describes such functions as multilinear polynomials over the inputs. Numerical simulations on mixtures of threshold functions and nested canalyzing functions demonstrate the formula's correctness.

  18. Characterizing short-term stability for Boolean networks over any distribution of transfer functions

    DOE PAGES

    Seshadhri, C.; Smith, Andrew M.; Vorobeychik, Yevgeniy; ...

    2016-07-05

    Here we present a characterization of short-term stability of random Boolean networks under arbitrary distributions of transfer functions. Given any distribution of transfer functions for a random Boolean network, we present a formula that decides whether short-term chaos (damage spreading) will happen. We provide a formal proof for this formula and empirically show that its predictions are accurate. Previous work covers only the special case of balanced families; such characterizations fail for unbalanced families, which are widespread in real biological networks.

  19. Prediction of short-term distributions of load extremes of offshore wind turbines

    NASA Astrophysics Data System (ADS)

    Wang, Ying-guang

    2016-12-01

    This paper proposes a new methodology to select an optimal threshold level to be used in the peak over threshold (POT) method for the prediction of short-term distributions of load extremes of offshore wind turbines. Such an optimal threshold level is found based on the estimation of the variance-to-mean ratio for the occurrence of peak values, which characterizes the Poisson assumption. A generalized Pareto distribution is then fitted to the extracted peaks over the optimal threshold level and the distribution parameters are estimated by the method of the maximum spacing estimation. This methodology is applied to estimate the short-term distributions of load extremes of the blade bending moment and the tower base bending moment at the mudline of a monopile-supported 5 MW offshore wind turbine as an example. The accuracy of the POT method using the optimal threshold level is shown to be better, in terms of the distribution fitting, than that of the POT methods using empirical threshold levels. The comparisons among the short-term extreme response values predicted by using the POT method with the optimal threshold levels and with the empirical threshold levels and by using direct simulation results further substantiate the validity of the proposed new methodology.
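    The threshold-selection idea can be sketched as follows: scan candidate thresholds, compute the variance-to-mean (dispersion) ratio of exceedance counts per block, pick the threshold whose ratio is closest to 1 (the Poisson assumption), and fit a generalized Pareto distribution to the excesses. The synthetic series and SciPy's maximum likelihood fit below are stand-ins for the turbine load simulations and the maximum spacing estimation used in the paper:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic "response" series standing in for a simulated load time history.
x = rng.gumbel(loc=0.0, scale=1.0, size=20_000)

def dispersion_index(series, u, n_blocks=50):
    """Variance-to-mean ratio of exceedance counts per block.

    Under the Poisson assumption for peak occurrences the ratio is ~1.
    """
    counts = np.array([np.sum(b > u) for b in np.array_split(series, n_blocks)],
                      dtype=float)
    return counts.var(ddof=1) / counts.mean()

# Scan candidate thresholds; pick the one whose dispersion index is closest to 1.
candidates = np.quantile(x, np.linspace(0.80, 0.99, 20))
best_u = min(candidates, key=lambda u: abs(dispersion_index(x, u) - 1.0))

# Fit a generalized Pareto distribution to the excesses over the threshold.
excesses = x[x > best_u] - best_u
shape, loc, scale = stats.genpareto.fit(excesses, floc=0.0)
print(f"threshold = {best_u:.2f}, GPD shape = {shape:.3f}, scale = {scale:.3f}")
```

    The fitted GPD then yields short-term extreme quantiles by inverting its survival function at the desired exceedance probability.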

  20. Comparing two micrometeorological techniques for estimating trace gas emissions from distributed sources

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Measuring trace gas emission from distributed sources such as treatment lagoons, treatment wetlands, land spread of manure, and feedlots requires micrometeorological methods. In this study, we tested the accuracy of two relatively new micrometeorological techniques, vertical radial plume mapping (VR...

  1. GEOCHEMISTRY OF PAHS IN AQUATIC ENVIRONMENTS: A SYNTHESIS OF DISTRIBUTION, SOURCE, PERSISTENCE, PARTITIONING AND BIOAVAILABILITY

    EPA Science Inventory

    On the basis of their distributions, sources, persistence, partitioning and bioavailability, polycyclic aromatic hydrocarbons (PAHs) are a unique class of persistent organic pollutants (POPs) contaminating the aquatic environment. They are of particular interest to geochemists an...

  2. Splitting the Source Term for the Einstein Equation to Classical and Quantum Parts

    NASA Astrophysics Data System (ADS)

    Biró, T. S.; Ván, P.

    2015-11-01

    We consider the special and general relativistic extensions of the action principle behind the Schrödinger equation, distinguishing classical and quantum contributions. Postulating a particular quantum correction to the source term in the classical Einstein equation, we identify the conformal content of the above action and obtain classical gravitation for massive particles, but with a cosmological term representing the off-mass-shell contribution to the energy-momentum tensor. In this scenario the cosmological constant, surprisingly small on the Planck scale, stems from quantum bound states ("gravonium") with Bohr radius a, giving Λ = 3/a^2.

  3. Long-term monitoring of airborne nickel (Ni) pollution in association with some potential source processes in the urban environment.

    PubMed

    Kim, Ki-Hyun; Shon, Zang-Ho; Mauulida, Puteri T; Song, Sang-Keun

    2014-09-01

    The environmental behavior and pollution status of nickel (Ni) were investigated in seven major cities in Korea over a 13-year time span (1998-2010). The mean concentrations of Ni measured during the whole study period fell within the range of 3.71 ng m(-3) (Gwangju: GJ) to 12.6 ng m(-3) (Incheon: IC). Although Ni values showed good comparability on a relatively large spatial scale, values in most cities (6 out of 7) were subject to moderate reductions over the study period. To assess the effect of major sources on the long-term distribution of Ni, the relationship between Ni concentrations and potential source processes such as non-road transportation sources (e.g., ship and aircraft emissions) was examined for the cities with port and airport facilities. The potential impact of long-range transport of Asian dust particles in controlling Ni levels was also evaluated. The overall results suggest that Ni levels underwent gradual reductions over the study period irrespective of changes in such localized non-road source activities. Ni pollution at all the study sites remained well below the international threshold value of 20 ng m(-3) (Directive 2004/107/EC).

  4. On the application of subcell resolution to conservation laws with stiff source terms

    NASA Technical Reports Server (NTRS)

    Chang, Shih-Hung

    1989-01-01

    LeVeque and Yee recently investigated a one-dimensional scalar conservation law with stiff source terms modeling reacting flow problems and discovered that, for the very stiff case, most current finite difference methods developed for non-reacting flows produce wrong solutions when there is a propagating discontinuity. A numerical scheme, essentially nonoscillatory/subcell resolution - characteristic direction (ENO/SRCD), is proposed for solving conservation laws with stiff source terms. This scheme is a modification of Harten's ENO scheme with subcell resolution, ENO/SR. The locations of the discontinuities and the characteristic directions are essential in the design. Strang's time-splitting method is used, and time evolutions are done by advancing along the characteristics. Numerical experiments using this scheme show excellent results on the model problem of LeVeque and Yee. Comparisons of the results of ENO, ENO/SR, and ENO/SRCD are also presented.
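    The splitting framework (though not the ENO/SRCD reconstruction itself) can be sketched on the LeVeque-Yee model problem u_t + u_x = -mu*u*(u-1)*(u-1/2), here with plain first-order upwind advection and RK4 substeps for the source; grid size and the stiffness parameter mu are assumed values, chosen in the non-stiff regime where the naive splitting still propagates the front correctly:

```python
import numpy as np

# LeVeque-Yee model problem: u_t + u_x = -mu * u * (u - 1) * (u - 1/2).
nx, cfl, mu = 200, 0.9, 10.0
dx = 1.0 / nx
dt = cfl * dx
x = (np.arange(nx) + 0.5) * dx
u = np.where(x < 0.3, 1.0, 0.0)   # right-moving front starting at x = 0.3

def f(v):
    """Stiff reaction term with stable equilibria at u = 0 and u = 1."""
    return -mu * v * (v - 1.0) * (v - 0.5)

def source_step(u, tau, substeps=8):
    """Integrate du/dt = f(u) over a time tau with RK4 substeps."""
    h = tau / substeps
    for _ in range(substeps):
        k1 = f(u)
        k2 = f(u + 0.5 * h * k1)
        k3 = f(u + 0.5 * h * k2)
        k4 = f(u + h * k3)
        u = u + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return u

def advect_step(u, nu):
    """First-order upwind step for u_t + u_x = 0 on a periodic grid."""
    return u - nu * (u - np.roll(u, 1))

for _ in range(100):                 # Strang splitting: S(dt/2) A(dt) S(dt/2)
    u = source_step(u, 0.5 * dt)
    u = advect_step(u, dt / dx)
    u = source_step(u, 0.5 * dt)

print(f"u range: [{u.min():.3f}, {u.max():.3f}]")
```

    Increasing mu by a few orders of magnitude reproduces the pathology the abstract refers to: the smeared front is snapped to an equilibrium by the stiff source and the discontinuity travels at a spurious, grid-determined speed, which is what subcell resolution along characteristics is designed to cure.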

  5. Lattice Boltzmann method for n-dimensional nonlinear hyperbolic conservation laws with the source term.

    PubMed

    Wang, Zhenghua; Shi, Baochang; Xiang, Xiuqiao; Chai, Zhenhua; Lu, Jianhua

    2011-03-01

    Nonlinear hyperbolic conservation laws (NHCL) call for simulation schemes with high-order accuracy, simple computation, and non-oscillatory character. In this paper, a unified and novel lattice Boltzmann model is presented for solving n-dimensional NHCL with a source term. By introducing a high-order source term into the explicit lattice Boltzmann method (LBM) and an optimum dimensionless relaxation time varied with the specific problem, the effects of space and time resolution on the accuracy and stability of the model are investigated for different problems in one to three dimensions. Both theoretical analysis and numerical simulation validate that the results of the proposed LBM have second-order accuracy in both space and time, in good agreement with the analytical solutions.
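    A drastically simplified sketch of how a source term enters an LBM collision step, for the 1D linear law u_t + (a u)_x = S. This is a first-order D1Q2 toy under assumed parameters, not the second-order n-dimensional model of the paper; it only shows the structural ingredients (equilibria matched to the flux, source split across directions, stream-and-collide loop):

```python
import numpy as np

# D1Q2 lattice Boltzmann for u_t + (a u)_x = S(x) with lattice speed c = dx/dt.
nx, a, c = 400, 0.5, 1.0
dx = 1.0 / nx
dt = dx / c
x = (np.arange(nx) + 0.5) * dx
u = np.exp(-200.0 * (x - 0.3) ** 2)   # initial Gaussian pulse
S = np.zeros(nx)                      # source term (zero in this demo)

def feq(u):
    """Equilibria chosen so f1+f2 = u (density) and c*(f1-f2) = a*u (flux)."""
    return 0.5 * u * (1.0 + a / c), 0.5 * u * (1.0 - a / c)

f1, f2 = feq(u)
tau = 1.0                             # relaxation time

for _ in range(200):
    f1eq, f2eq = feq(f1 + f2)
    # BGK collision, with the source split evenly between the two directions.
    f1 += -(f1 - f1eq) / tau + 0.5 * dt * S
    f2 += -(f2 - f2eq) / tau + 0.5 * dt * S
    # Streaming (periodic): f1 moves one cell right, f2 one cell left.
    f1 = np.roll(f1, 1)
    f2 = np.roll(f2, -1)

u = f1 + f2
print(f"total mass = {u.sum() * dx:.6f}")
```

    With S = 0 the scheme conserves total mass exactly and transports the pulse at speed a with first-order numerical diffusion; the paper's contribution is precisely the higher-order source treatment and relaxation-time tuning that lift such a scheme to second order.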

  6. Binary Source Microlensing Event OGLE-2016-BLG-0733: Interpretation of a Long-term Asymmetric Perturbation

    NASA Astrophysics Data System (ADS)

    Jung, Y. K.; Udalski, A.; Yee, J. C.; Sumi, T.; Gould, A.; Han, C.; Albrow, M. D.; Lee, C.-U.; Kim, S.-L.; Chung, S.-J.; Hwang, K.-H.; Ryu, Y.-H.; Shin, I.-G.; Zhu, W.; Cha, S.-M.; Kim, D.-J.; Lee, Y.; Park, B.-G.; Pogge, R. W.; KMTNet Collaboration; Pietrukowicz, P.; Kozłowski, S.; Poleski, R.; Skowron, J.; Mróz, P.; Szymański, M. K.; Soszyński, I.; Pawlak, M.; Ulaczyk, K.; OGLE Collaboration; Abe, F.; Bennett, D. P.; Barry, R.; Bond, I. A.; Asakura, Y.; Bhattacharya, A.; Donachie, M.; Freeman, M.; Fukui, A.; Hirao, Y.; Itow, Y.; Koshimoto, N.; Li, M. C. A.; Ling, C. H.; Masuda, K.; Matsubara, Y.; Muraki, Y.; Nagakane, M.; Oyokawa, H.; Rattenbury, N. J.; Sharan, A.; Sullivan, D. J.; Suzuki, D.; Tristram, P. J.; Yamada, T.; Yamada, T.; Yonehara, A.; MOA Collaboration

    2017-03-01

    In the process of analyzing an observed light curve, one often confronts various scenarios that can mimic planetary signals, causing difficulties in the accurate interpretation of the lens system. In this paper, we present the analysis of the microlensing event OGLE-2016-BLG-0733. The light curve of the event shows a long-term asymmetric perturbation that would appear to be due to a planet. From detailed modeling of the lensing light curve, however, we find that the perturbation originates from the binarity of the source rather than of the lens. This result demonstrates that binary sources with roughly equal-luminosity components can mimic long-term perturbations induced by planets with projected separations near the Einstein ring. The result also demonstrates the importance of considering various interpretations of planet-like perturbations and of high-cadence observations for ensuring unambiguous planet detection.

  7. Analytical source term optimization for radioactive releases with approximate knowledge of nuclide ratios

    NASA Astrophysics Data System (ADS)

    Hofman, Radek; Seibert, Petra; Kovalets, Ivan; Andronopoulos, Spyros

    2015-04-01

    We are concerned with source term retrieval in the case of an accident in a nuclear power plant with off-site consequences. The goal is to optimize atmospheric dispersion model inputs using inverse modeling of gamma dose rate measurements (instantaneous or time-integrated). These are the most abundant type of measurements provided by the various radiation monitoring networks across Europe and are available continuously in near-real time. Usually, the source term of an accidental release comprises a mixture of nuclides. Unfortunately, gamma dose rate measurements do not provide direct information on the source term composition; however, physical properties of the respective nuclides (deposition properties, decay half-life) can yield some insight. In the method presented, we assume that nuclide ratios are known at least approximately, e.g. from nuclide-specific observations or from the reactor inventory and assumptions on the accident type. The source term can consist of multiple phases, each characterized by constant nuclide ratios. The method is an extension of a well-established source term inversion approach based on the minimization of a cost function. This function has two quadratic terms: the mismatch between model and measurements, weighted by an observation error covariance matrix, and the deviation of the solution from a first guess, weighted by the first-guess error covariance matrix. For simplicity, both error covariance matrices are approximated as diagonal. Analytical minimization of the cost function leads to a linear system of equations. Possible negative parts of the solution are iteratively removed by means of first-guess error variance reduction. Nuclide ratios enter the problem in the form of additional linear equations, where deviations from the prescribed ratios are weighted by factors; the corresponding error variance allows us to control how strongly we want to impose the prescribed ratios. This introduces some freedom into the
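    The quadratic cost function and its analytical minimizer can be sketched with synthetic numbers; the matrix sizes, values, and the final clipping step (a crude stand-in for the iterative first-guess variance reduction described in the abstract) are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

# Cost: J(x) = (y - H x)^T R^-1 (y - H x) + (x - xb)^T B^-1 (x - xb),
# with diagonal observation-error (R) and first-guess-error (B) covariances.
n_obs, n_src = 40, 8                                 # dose rates vs release segments
H = rng.uniform(0.0, 1.0, size=(n_obs, n_src))       # source-receptor sensitivities
x_true = np.array([0.0, 0.0, 3.0, 5.0, 4.0, 1.0, 0.0, 0.0])  # two-phase-like release
y = H @ x_true + rng.normal(scale=0.05, size=n_obs)  # noisy "measurements"

xb = np.ones(n_src)                                  # first guess
R_inv = np.eye(n_obs) / 0.05 ** 2
B_inv = np.eye(n_src) / 10.0 ** 2

# Analytical minimizer: x = xb + (H^T R^-1 H + B^-1)^-1 H^T R^-1 (y - H xb).
A = H.T @ R_inv @ H + B_inv
x_hat = xb + np.linalg.solve(A, H.T @ R_inv @ (y - H @ xb))

# Non-negativity: the reference method shrinks first-guess error variances
# iteratively; a simple clip is used here purely for illustration.
x_hat = np.maximum(x_hat, 0.0)
print(np.round(x_hat, 2))
```

    Prescribed nuclide ratios would enter as extra rows appended to H and y, weighted by their own error variance, which is how the method lets the analyst dial in how strongly the ratios are enforced.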

  8. A numerical method to solve the Stokes problem with a punctual force in source term

    NASA Astrophysics Data System (ADS)

    Lacouture, Loïc

    2015-03-01

    The aim of this note is to present a numerical method to solve the Stokes problem in a bounded domain with a Dirac source term, which preserves optimality for any approximation order by the finite-element method. It is based on the knowledge of a fundamental solution to the associated operator over the whole space. This method is motivated by the modeling of the movement of active thin structures in a viscous fluid.

  9. Implementation of New Turbulence Spectra in the Lighthill Analogy Source Terms

    NASA Technical Reports Server (NTRS)

    Woodruff, S. L.; Seiner, J. M.; Hussaini, M. Y.; Erlebacher, G.

    2000-01-01

    The industry-standard MGB approach to predicting the noise generated by a given aerodynamic flow field requires that the turbulence velocity correlation be specified so that the source terms in the Lighthill acoustic analogy may be computed. The velocity correlation traditionally used in MGB computations is inconsistent with a number of basic qualitative properties of turbulent flows. In the present investigation, the effect on noise prediction of using two alternative velocity correlations is examined.

  10. Elevated Natural Source Water Ammonia and Nitrification in the Distribution Systems of Four Water Utilities

    EPA Science Inventory

    Nitrification in drinking water distribution systems is a concern of many drinking water systems. Although chloramination as a source of nitrification (i.e., addition of excess ammonia or breakdown of chloramines) has drawn the most attention, many source waters contain signific...

  11. 30 CFR 872.12 - Where do moneys distributed from the Fund and other sources go?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 3 2011-07-01 2011-07-01 false Where do moneys distributed from the Fund and... ENFORCEMENT, DEPARTMENT OF THE INTERIOR ABANDONED MINE LAND RECLAMATION MONEYS AVAILABLE TO ELIGIBLE STATES AND INDIAN TRIBES § 872.12 Where do moneys distributed from the Fund and other sources go? (a)...

  12. 30 CFR 872.12 - Where do moneys distributed from the Fund and other sources go?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 3 2012-07-01 2012-07-01 false Where do moneys distributed from the Fund and... ENFORCEMENT, DEPARTMENT OF THE INTERIOR ABANDONED MINE LAND RECLAMATION MONEYS AVAILABLE TO ELIGIBLE STATES AND INDIAN TRIBES § 872.12 Where do moneys distributed from the Fund and other sources go? (a)...

  13. Correlated Sources in Distributed Networks--Data Transmission, Common Information Characterization and Inferencing

    ERIC Educational Resources Information Center

    Liu, Wei

    2011-01-01

    Correlation is often present among observations in a distributed system. This thesis deals with various design issues when correlated data are observed at distributed terminals, including: communicating correlated sources over interference channels, characterizing the common information among dependent random variables, and testing the presence of…

  14. 26 CFR 1.316-2 - Sources of distribution in general.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... distributions made during the taxable year consist only of money and exceed the earnings and profits of such... corporation is made out of earnings and profits to the extent thereof and from the most recently accumulated earnings and profits. In determining the source of a distribution, consideration should be given first,...

  15. 26 CFR 1.316-2 - Sources of distribution in general.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... distributions made during the taxable year consist only of money and exceed the earnings and profits of such... corporation is made out of earnings and profits to the extent thereof and from the most recently accumulated earnings and profits. In determining the source of a distribution, consideration should be given first,...

  16. Size distribution of acidic sulfate ions in fine ambient particulate matter and assessment of source region effect

    NASA Astrophysics Data System (ADS)

    Hazi, Y.; Heikkinen, M. S. A.; Cohen, B. S.

    Human exposure studies strongly suggest that the fine fraction of ambient particulate matter (PM) and its associated acidic sulfates are closely correlated with observed adverse health effects. Acidic sulfates are the products of atmospheric sulfur dioxide oxidation and neutralization processes. Few data are available on the amount and size distribution of acidic sulfates within the fine fraction of ambient PM; knowledge of this distribution will help in understanding their toxic mechanisms in the human respiratory tract. The goals of this research were: (1) to measure the size distribution of hydrogen ion, sulfate, and ammonium within the fine fraction of the ambient aerosol in air masses originating from different source regions; and (2) to examine the effect of source region and season on the sampled PM composition. Six size fractions within the fine ambient PM were collected using a micro-orifice impactor. Results from 30 sampling sessions demonstrated that higher total concentrations of these three ions were observed during the warm months than during the cold months of the year. Size distribution results show that the fraction with a midpoint diameter of 0.38 μm contained the largest amounts of hydrogen, sulfate, and ammonium ions. Although most of the mass containing hydrogen and sulfate ions was measured in the 0.38 μm midpoint-diameter fraction, the ultrafine fraction (<0.1 μm) was found to be more acidic. Ambient ion concentrations varied between sampling sessions and seasons, but the overall size distribution profiles are similar. Air mass back trajectories were used to identify the source region of the sampled aerosols. No apparent source region effect was observed in terms of the distribution profile of the ions. However, samples collected from air masses that originated from, or passed over, areas with high sulfur dioxide emissions showed higher concentrations of the different ions.

  17. Numerical analysis of atomic density distribution in arc driven negative ion sources

    SciTech Connect

    Yamamoto, T.; Shibata, T.; Hatayama, A.; Kashiwagi, M.; Hanada, M.; Sawada, K.

    2014-02-15

    The purpose of this study is to calculate atomic (H{sup 0}) density distribution in JAEA 10 ampere negative ion source. A collisional radiative model is developed for the calculation of the H{sup 0} density distribution. The non-equilibrium feature of the electron energy distribution function (EEDF), which mainly determines the H{sup 0} production rate, is included by substituting the EEDF calculated from 3D electron transport analysis. In this paper, the H{sup 0} production rate, the ionization rate, and the density distribution in the source chamber are calculated. In the region where high energy electrons exist, the H{sup 0} production and the ionization are enhanced. The calculated H{sup 0} density distribution without the effect of the H{sup 0} transport is relatively small in the upper region. In the next step, the effect should be taken into account to obtain more realistic H{sup 0} distribution.

  18. Measurement-device-independent quantum key distribution with source state errors and statistical fluctuation

    NASA Astrophysics Data System (ADS)

    Jiang, Cong; Yu, Zong-Wen; Wang, Xiang-Bin

    2017-03-01

    We show how to calculate the secure final key rate in the four-intensity decoy-state measurement-device-independent quantum key distribution protocol with both source errors and statistical fluctuations with a certain failure probability. Our results rely on the range of only a few parameters in the source state. All imperfections in this protocol have been taken into consideration without assuming any specific error patterns of the source.

  19. Low-level waste disposal performance assessments - Total source-term analysis

    SciTech Connect

    Wilhite, E.L.

    1995-12-31

    Disposal of low-level radioactive waste at Department of Energy (DOE) facilities is regulated by DOE. DOE Order 5820.2A establishes policies, guidelines, and minimum requirements for managing radioactive waste. Requirements for disposal of low-level waste emplaced after September 1988 include providing reasonable assurance of meeting stated performance objectives by completing a radiological performance assessment. Recently, the Defense Nuclear Facilities Safety Board issued Recommendation 94-2, "Conformance with Safety Standards at Department of Energy Low-Level Nuclear Waste and Disposal Sites." One of the elements of the recommendation is that low-level waste performance assessments do not include the entire source term, because low-level waste emplaced prior to September 1988, as well as other DOE sources of radioactivity in the ground, are excluded. DOE has developed and issued guidance for preliminary assessments of the impact of including the total source term in performance assessments. This paper will present issues resulting from the inclusion of all DOE sources of radioactivity in performance assessments of low-level waste disposal facilities.

  20. Detailed dose distribution prediction of Cf-252 brachytherapy source with boron loading dose enhancement.

    PubMed

    Ghassoun, J; Mostacci, D; Molinari, V; Jehouani, A

    2010-02-01

    The purpose of this work is to evaluate the dose rate distribution and to determine the boron effect on dose rate distribution for (252)Cf brachytherapy source. This study was carried out using a Monte Carlo simulation. To validate the Monte Carlo computer code, the dosimetric parameters were determined following the updated TG-43 formalism and compared with current literature data. The validated computer code was then applied to evaluate the neutron and photon dose distribution and to illustrate the boron loading effect.

  1. Extended Tonks-Langmuir-type model with non-Boltzmann-distributed electrons and cold ion sources

    NASA Astrophysics Data System (ADS)

Kamran, M.; Kuhn, S.; Tskhakaya, D. D.; Khan, M.

    2013-04-01

A general formalism for calculating the potential distribution Φ(z) in the quasineutral region of a new class of plane Tonks-Langmuir (TL)-type bounded-plasma-system (BPS) models differing from the well-known `classical' TL model (Tonks, L. and Langmuir, I. 1929 A general theory of the plasma of an arc. Phys. Rev. 34, 876) by allowing for arbitrary (but still cold) ion sources and arbitrary electron distributions is developed. With individual particles usually undergoing microscopic collision/sink/source (CSS) events, extensive use is made here of the basic kinetic-theory concept of `CSS-free trajectories' (i.e., the characteristics of the kinetic equation). Two types of electron populations, occupying the `type-t' and `type-p' domains of electron phase space, are distinguished. By definition, the type-t and type-p domains are made up of phase points lying on type-t (`trapped') CSS-free trajectories (not intersecting the walls and closing on themselves) and type-p (`passing') ones (starting at one of the walls and ending at the other). This work being the first step, it is assumed that ε ≡ λ_D/l → 0+ (where λ_D and l are a typical Debye length and a typical ionization length, respectively) so that the system exhibits a finite quasineutral `plasma' region and two infinitesimally thin `sheath' regions associated with the `sheath-edge singularities' |dΦ/dz|_{z→±z_s} → ∞. The potential in the plasma region is required to satisfy a plasma equation (quasineutrality condition) of the form n_i{Φ} = n_e(Φ), where the electron density n_e(Φ) is given and the ion density n_i{Φ} is expressed in terms of trajectory integrals of the ion kinetic equation, with the ions produced by electron-impact ionization of cold neutrals. While previous TL-type models were characterized by electrons diffusing under the influence of frequent collisions with the neutral background particles and approximated by Maxwellian (Riemann, K.-U. 2006 Plasma-sheath transition in the

  2. Long-Term Probability Distribution of Wind Turbine Planetary Bearing Loads (Poster)

    SciTech Connect

    Jiang, Z.; Xing, Y.; Guo, Y.; Dong, W.; Moan, T.; Gao, Z.

    2013-04-01

    Among the various causes of bearing damage and failure, metal fatigue of the rolling contact surface is the dominant failure mechanism. The fatigue life is associated with the load conditions under which wind turbines operate in the field. Therefore, it is important to understand the long-term distribution of the bearing loads under various environmental conditions. The National Renewable Energy Laboratory's 750-kW Gearbox Reliability Collaborative wind turbine is studied in this work. A decoupled analysis using several computer codes is carried out. The global aero-elastic simulations are performed using HAWC2. The time series of the drivetrain loads and motions from the global dynamic analysis are fed to a drivetrain model in SIMPACK. The time-varying internal pressure distribution along the raceway is obtained analytically. A series of probability distribution functions are then used to fit the long-term statistical distribution at different locations along raceways. The long-term distribution of the bearing raceway loads are estimated under different environmental conditions. Finally, the bearing fatigue lives are calculated.
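
The fitting step described in this record can be illustrated with a simple moment-matching fit of a two-parameter Weibull distribution to a sample of load amplitudes. This is a generic sketch, not the Gearbox Reliability Collaborative's actual fitting procedure; the synthetic data and parameter values are invented for the demonstration.

```python
import math
import random

def fit_weibull_moments(samples):
    """Fit a two-parameter Weibull(shape k, scale lam) by matching
    the sample mean and variance (method of moments)."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    cv2 = var / mean ** 2  # squared coefficient of variation

    def model_cv2(k):
        g1 = math.gamma(1.0 + 1.0 / k)
        return math.gamma(1.0 + 2.0 / k) / (g1 * g1) - 1.0

    # model_cv2 is strictly decreasing in k, so bisect for the observed value
    lo, hi = 0.1, 50.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if model_cv2(mid) > cv2:
            lo = mid
        else:
            hi = mid
    k = 0.5 * (lo + hi)
    lam = mean / math.gamma(1.0 + 1.0 / k)
    return k, lam

# Synthetic "raceway load" sample with known parameters (illustrative only)
random.seed(42)
loads = [random.weibullvariate(1.5, 2.0) for _ in range(5000)]
k_hat, lam_hat = fit_weibull_moments(loads)
```

With 5000 samples the recovered shape and scale land close to the generating values, which is the basic sanity check one would run before fitting measured raceway loads.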

  3. Measurement of anisotropic angular distributions of photon energy spectra for I-125 brachytherapy sources.

    PubMed

    Unno, Yasuhiro; Yunoki, Akira; Kurosawa, Tadahiro; Yamada, Takahiro; Sato, Yasushi; Hino, Yoshio

    2012-09-01

    The angular distribution of photon energy spectra emitted from an I-125 brachytherapy source was measured using a specially designed jig in the range of ±70° in the plane of the long axis of the source. It is important to investigate the angular dependence of photon emissions from these sources for the calibration of the air kerma rate. The results show that the influence of the distributions between 0° and ±8° is small enough to allow a calibration using current primary instruments which have a large entrance window.

  4. Fukushima Daiichi reactor source term attribution using cesium isotope ratios from contaminated environmental samples

    SciTech Connect

    Snow, Mathew S.; Snyder, Darin C.; Delmore, James E.

    2016-01-18

    Source term attribution of environmental contamination following the Fukushima Daiichi Nuclear Power Plant (FDNPP) disaster is complicated by a large number of possible similar emission source terms (e.g. FDNPP reactor cores 1–3 and spent fuel ponds 1–4). Cesium isotopic analyses can be utilized to discriminate between environmental contamination from different FDNPP source terms and, if samples are sufficiently temporally resolved, potentially provide insights into the extent of reactor core damage at a given time. Rice, soil, mushroom, and soybean samples taken 100–250 km from the FDNPP site were dissolved using microwave digestion. Radiocesium was extracted and purified using two sequential ammonium molybdophosphate-polyacrylonitrile columns, following which 135Cs/137Cs isotope ratios were measured using thermal ionization mass spectrometry (TIMS). Results were compared with data reported previously from locations to the northwest of FDNPP and 30 km to the south of FDNPP. 135Cs/137Cs isotope ratios from samples 100–250 km to the southwest of the FDNPP site show a consistent value of 0.376 ± 0.008. 135Cs/137Cs versus 134Cs/137Cs correlation plots suggest that radiocesium to the southwest is derived from a mixture of FDNPP reactor cores 1, 2, and 3. Conclusions from the cesium isotopic data are in agreement with those derived independently based upon the event chronology combined with meteorological conditions at the time of the disaster. In conclusion, cesium isotopic analyses provide a powerful tool for source term discrimination of environmental radiocesium contamination at the FDNPP site. For higher precision source term attribution and forensic determination of the FDNPP core conditions based upon cesium, analyses of a larger number of samples from locations to the north and south of the FDNPP site (particularly time-resolved air filter samples) are needed
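
The mixing logic behind such isotope-ratio correlation plots can be sketched with a two-end-member calculation: because 135Cs/137Cs ratios share the 137Cs denominator, a mixture's ratio is the 137Cs-weighted average of the end-member ratios. The end-member values below are placeholders, not the actual FDNPP core ratios reported in the study.

```python
def mixture_ratio(f_a, r_a, r_b):
    """135Cs/137Cs ratio of a mixture whose 137Cs comes
    in fraction f_a from end-member A and (1 - f_a) from B."""
    return f_a * r_a + (1.0 - f_a) * r_b

def mixing_fraction(r_mix, r_a, r_b):
    """Invert the linear mixing relation to recover the
    fraction of 137Cs contributed by end-member A."""
    return (r_mix - r_b) / (r_a - r_b)

# Hypothetical end-member ratios for two source terms (illustrative)
R_CORE_A = 0.35
R_CORE_B = 0.42

r = mixture_ratio(0.25, R_CORE_A, R_CORE_B)
f = mixing_fraction(r, R_CORE_A, R_CORE_B)
```

The round trip recovers the assumed 25% contribution exactly; with real data the same inversion is applied to measured sample ratios.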

  5. Identifying Synonymy between SNOMED Clinical Terms of Varying Length Using Distributional Analysis of Electronic Health Records

    PubMed Central

    Henriksson, Aron; Conway, Mike; Duneld, Martin; Chapman, Wendy W.

    2013-01-01

    Medical terminologies and ontologies are important tools for natural language processing of health record narratives. To account for the variability of language use, synonyms need to be stored in a semantic resource as textual instantiations of a concept. Developing such resources manually is, however, prohibitively expensive and likely to result in low coverage. To facilitate and expedite the process of lexical resource development, distributional analysis of large corpora provides a powerful data-driven means of (semi-)automatically identifying semantic relations, including synonymy, between terms. In this paper, we demonstrate how distributional analysis of a large corpus of electronic health records – the MIMIC-II database – can be employed to extract synonyms of SNOMED CT preferred terms. A distinctive feature of our method is its ability to identify synonymous relations between terms of varying length. PMID:24551362

  6. Automated source term and wind parameter estimation for atmospheric transport and dispersion applications

    NASA Astrophysics Data System (ADS)

    Bieringer, Paul E.; Rodriguez, Luna M.; Vandenberghe, Francois; Hurst, Jonathan G.; Bieberbach, George; Sykes, Ian; Hannan, John R.; Zaragoza, Jake; Fry, Richard N.

    2015-12-01

Accurate simulations of the atmospheric transport and dispersion (AT&D) of hazardous airborne materials rely heavily on the source term parameters necessary to characterize the initial release and meteorological conditions that drive the downwind dispersion. In many cases the source parameters are not known and consequently are based on rudimentary assumptions. This is particularly true of accidental releases and the intentional releases associated with terrorist incidents. When available, meteorological observations are often not representative of the conditions at the location of the release and the use of these non-representative meteorological conditions can result in significant errors in the hazard assessments downwind of the sensors, even when the other source parameters are accurately characterized. Here, we describe a computationally efficient methodology to characterize both the release source parameters and the low-level winds (e.g., winds near the surface) required to produce a refined downwind hazard. This methodology, known as the Variational Iterative Refinement Source Term Estimation (STE) Algorithm (VIRSA), consists of a combination of modeling systems. These systems include a back-trajectory based source inversion method, a forward Gaussian puff dispersion model, a variational refinement algorithm that uses both a simple forward AT&D model that is a surrogate for the more complex Gaussian puff model and a formal adjoint of this surrogate model. The back-trajectory based method is used to calculate a "first guess" source estimate based on the available observations of the airborne contaminant plume and atmospheric conditions. The variational refinement algorithm is then used to iteratively refine the first guess STE parameters and meteorological variables. The algorithm has been evaluated across a wide range of scenarios of varying complexity. It has been shown to improve the source parameters for location by several hundred percent (normalized by the
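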

  7. Estimating usual food intake distributions by using the multiple source method in the EPIC-Potsdam Calibration Study.

    PubMed

    Haubrock, Jennifer; Nöthlings, Ute; Volatier, Jean-Luc; Dekkers, Arnold; Ocké, Marga; Harttig, Ulrich; Illner, Anne-Kathrin; Knüppel, Sven; Andersen, Lene F; Boeing, Heiner

    2011-05-01

    Estimating usual food intake distributions from short-term quantitative measurements is critical when occasionally or rarely eaten food groups are considered. To overcome this challenge by statistical modeling, the Multiple Source Method (MSM) was developed in 2006. The MSM provides usual food intake distributions from individual short-term estimates by combining the probability and the amount of consumption with incorporation of covariates into the modeling part. Habitual consumption frequency information may be used in 2 ways: first, to distinguish true nonconsumers from occasional nonconsumers in short-term measurements and second, as a covariate in the statistical model. The MSM is therefore able to calculate estimates for occasional nonconsumers. External information on the proportion of nonconsumers of a food can also be handled by the MSM. As a proof-of-concept, we applied the MSM to a data set from the European Prospective Investigation into Cancer and Nutrition (EPIC)-Potsdam Calibration Study (2004) comprising 393 participants who completed two 24-h dietary recalls and one FFQ. Usual intake distributions were estimated for 38 food groups with a proportion of nonconsumers > 70% in the 24-h dietary recalls. The intake estimates derived by the MSM corresponded with the observed values such as the group mean. This study shows that the MSM is a useful and applicable statistical technique to estimate usual food intake distributions, if at least 2 repeated measurements per participant are available, even for food groups with a sizeable percentage of nonconsumers.
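
In essence, the MSM factorizes usual intake into a consumption probability and a consumption-day amount, and uses the habitual-frequency (FFQ) information to separate true never-consumers from occasional nonconsumers. The following is a greatly simplified caricature of that idea; the real MSM adds covariate modeling, shrinkage of individual estimates, and transformation steps, and all names and numbers here are illustrative.

```python
def usual_intake(recall_amounts, ffq_says_never):
    """Crude usual-intake estimate from short-term recalls plus
    habitual-frequency information.

    recall_amounts : amounts from repeated 24-h recalls (g/day), zeros allowed
    ffq_says_never : True if the FFQ identifies a true never-consumer
    """
    if ffq_says_never and all(a == 0 for a in recall_amounts):
        return 0.0  # true nonconsumer, not merely an occasional one
    positive = [a for a in recall_amounts if a > 0]
    if not positive:
        # occasional consumer who happened not to consume on recall days;
        # the real MSM would produce a model-based positive estimate here
        return 0.0
    p_consume = len(positive) / len(recall_amounts)   # probability part
    mean_amount = sum(positive) / len(positive)       # amount part
    return p_consume * mean_amount

# Two 24-h recalls: the food was eaten on one of the two days
est = usual_intake([0.0, 150.0], ffq_says_never=False)
```

With consumption on one of two recall days at 150 g, the estimate is 0.5 x 150 = 75 g/day, illustrating why the probability and amount components must be modeled jointly for rarely eaten foods.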

  8. Imaging a spatially confined photoacoustic source defined by a distribution of plasmonic nanoparticles

    NASA Astrophysics Data System (ADS)

    Norton, Stephen J.; Vo-Dinh, Tuan

    2012-05-01

    This paper describes the use of plasmonic nanoparticles in photoacoustic imaging. When acoustic waves are generated by thermoacoustic expansion in the fluid medium surrounding a distribution of these particles and the acoustic signals are recorded over a planar aperture, a bandlimited image of this distribution can be reconstructed. It is shown that the accessible portion of the three-dimensional spatial Fourier transform of the unknown source distribution is a spherical shell in k-space, with the core representing missing low-frequency Fourier components of the source density. When the source arises from an isolated distribution of nanoparticles, the iterative Gerchberg-Papoulis procedure can be applied to recover the low-frequency Fourier components. It is shown that this version of the photoacoustic source reconstruction problem is well suited for the use of this procedure. In this way, the fidelity of the image of the photoacoustic-generated source defined by the particle concentration can be enhanced. The procedure is illustrated using simulated data derived from a hypothetical source distribution.
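
The Gerchberg-Papoulis procedure mentioned in this record alternates between enforcing the measured (band-limited) Fourier data and the known spatial support of the source. A one-dimensional toy version in pure Python (naive DFTs on a tiny grid) illustrates the recovery of missing low-frequency components; the grid size, support region, and band edge are arbitrary choices for the demonstration, not parameters from the paper.

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

N = 32
support = set(range(10, 17))          # source confined to a small region
truth = [1.0 if n in support else 0.0 for n in range(N)]

X_true = dft(truth)
# High frequencies are "measured"; the low-frequency core is missing,
# mimicking the spherical-shell coverage described in the abstract.
known = {k for k in range(N) if min(k, N - k) >= 3}

def rms_err(x):
    return (sum(abs(x[n] - truth[n]) ** 2 for n in range(N)) / N) ** 0.5

# Start from the measured spectrum with the missing bins zeroed
X = [X_true[k] if k in known else 0.0 for k in range(N)]
init_err = rms_err(idft(X))

for _ in range(100):
    x = idft(X)
    x = [x[n] if n in support else 0.0 for n in range(N)]       # support constraint
    X = dft(x)
    X = [X_true[k] if k in known else X[k] for k in range(N)]   # data constraint

final_err = rms_err([v.real for v in idft(X)])
```

Both constraint sets are convex and contain the true source, so the iteration cannot increase the error; in practice the missing low-frequency components are recovered almost completely for a compactly supported source like this one.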

  9. Impact of routine episodic emissions on the expected frequency distribution of emissions from oil and gas production sources.

    NASA Astrophysics Data System (ADS)

    Smith, N.; Blewitt, D.; Hebert, L. B.

    2015-12-01

    In coordination with oil and gas operators, we developed a high resolution (< 1 min) simulation of temporal variability in well-pad oil and gas emissions over a year. We include routine emissions from condensate tanks, dehydrators, pneumatic devices, fugitive leaks and liquids unloading. We explore the variability in natural gas emissions from these individual well-pad sources, and find that routine short-term episodic emissions such as tank flashing and liquids unloading result in the appearance of a skewed, or 'fat-tail' distribution of emissions, from an individual well-pad over time. Additionally, we explore the expected variability in emissions from multiple wells with different raw gas composition, gas/liquids production volumes and control equipment. Differences in well-level composition, production volume and control equipment translate into differences in well-level emissions leading to a fat-tail distribution of emissions in the absence of operational upsets. Our results have several implications for recent studies focusing on emissions from oil and gas sources. Time scale of emission estimates are important and have important policy implications. Fat tail distributions may not be entirely driven by avoidable mechanical failures, and are expected to occur under routine operational conditions from short-duration emissions (e.g., tank flashing, liquid unloading). An understanding of the expected distribution of emissions for a particular population of wells is necessary to evaluate whether the observed distribution is more skewed than expected. Temporal variability in well-pad emissions make comparisons to annual average emissions inventories difficult and may complicate the interpretation of long-term ambient fenceline monitoring data. Sophisticated change detection algorithms will be necessary to identify when true operational upsets occur versus routine short-term emissions.
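
The "fat tail from routine episodics" argument can be reproduced with a toy Monte Carlo: a constant baseline (pneumatics, fugitives) plus short, infrequent, high-mass events (tank flashing, liquids unloading) yields a right-skewed distribution of daily emissions even with no equipment failures. All rates and frequencies below are invented for illustration and are not from the study.

```python
import random

random.seed(7)

BASE_RATE = 2.0        # kg/day, steady routine emissions (illustrative)
EVENT_MASS = 40.0      # kg released by one episodic event (illustrative)
EVENT_PROB = 0.05      # chance of an episodic event on a given day

def simulate_year():
    """Daily emission totals for one well over a year."""
    days = []
    for _ in range(365):
        has_event = random.random() < EVENT_PROB
        days.append(BASE_RATE + (EVENT_MASS if has_event else 0.0))
    return days

daily = simulate_year()
daily_sorted = sorted(daily)
median = daily_sorted[len(daily) // 2]
mean = sum(daily) / len(daily)
p95 = daily_sorted[int(0.95 * len(daily))]
```

Most days sit at the baseline, so the median equals the routine rate while occasional spikes pull the mean above it: the skewed distribution arises from routine operation, not from upsets.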

  10. Distribution of Short-Term and Lifetime Predicted Risks of Cardiovascular Diseases in Peruvian Adults

    PubMed Central

    Quispe, Renato; Bazo-Alvarez, Juan Carlos; Burroughs Peña, Melissa S; Poterico, Julio A; Gilman, Robert H; Checkley, William; Bernabé-Ortiz, Antonio; Huffman, Mark D; Miranda, J Jaime

    2015-01-01

Background: Short-term risk assessment tools for prediction of cardiovascular disease events are widely recommended in clinical practice and are used largely for single time-point estimations; however, persons with low predicted short-term risk may have higher risks across longer time horizons. Methods and Results: We estimated short-term and lifetime cardiovascular disease risk in a pooled population from 2 studies of Peruvian populations. Short-term risk was estimated using the atherosclerotic cardiovascular disease Pooled Cohort Risk Equations. Lifetime risk was evaluated using the algorithm derived from the Framingham Heart Study cohort. Using previously published thresholds, participants were classified into 3 categories: low short-term and low lifetime risk, low short-term and high lifetime risk, and high short-term predicted risk. We also compared the distribution of these risk profiles across educational level, wealth index, and place of residence. We included 2844 participants (50% men, mean age 55.9 years [SD 10.2 years]) in the analysis. Approximately 1 of every 3 participants (34% [95% CI 33 to 36]) had a high short-term estimated cardiovascular disease risk. Among those with a low short-term predicted risk, more than half (54% [95% CI 52 to 56]) had a high lifetime predicted risk. Short-term and lifetime predicted risks were higher for participants with lower versus higher wealth indexes and educational levels and for those living in urban versus rural areas (P<0.01). These results were consistent by sex. Conclusions: These findings highlight potential shortcomings of using short-term risk tools for primary prevention strategies because a substantial proportion of Peruvian adults were classified as low short-term risk but high lifetime risk. Vulnerable adults, such as those from low socioeconomic status and those living in urban areas, may need greater attention regarding cardiovascular preventive strategies. PMID:26254303
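
The three-way risk classification used in the study is straightforward to express in code. The 7.5% short-term (Pooled Cohort Equations) and 39% lifetime (Framingham) cut-offs below are the commonly published thresholds, but treat them as assumptions of this sketch rather than values confirmed from the paper.

```python
def classify_risk(short_term, lifetime,
                  short_thresh=0.075, lifetime_thresh=0.39):
    """Assign one of the three risk profiles used in the pooled analysis.

    short_term, lifetime : predicted risks expressed as fractions (0-1)
    """
    if short_term >= short_thresh:
        return "high short-term"
    if lifetime >= lifetime_thresh:
        return "low short-term / high lifetime"
    return "low short-term / low lifetime"

profiles = [
    classify_risk(0.12, 0.50),  # high short-term risk dominates
    classify_risk(0.03, 0.45),  # the "hidden risk" group the study highlights
    classify_risk(0.02, 0.20),  # low on both horizons
]
```

The second case is the study's central point: a person can screen as low risk on a 10-year tool while carrying a high lifetime predicted risk.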

  11. Size distribution, mixing state and source apportionment of black carbon aerosol in London during wintertime

    NASA Astrophysics Data System (ADS)

    Liu, D.; Allan, J. D.; Young, D. E.; Coe, H.; Beddows, D.; Fleming, Z. L.; Flynn, M. J.; Gallagher, M. W.; Harrison, R. M.; Lee, J.; Prevot, A. S. H.; Taylor, J. W.; Yin, J.; Williams, P. I.; Zotter, P.

    2014-09-01

Black carbon aerosols (BC) at a London urban site were characterised in both winter- and summertime 2012 during the Clean Air for London (ClearfLo) project. Positive matrix factorisation (PMF) factors of organic aerosol mass spectra measured by a high-resolution aerosol mass spectrometer (HR-AMS) showed traffic-dominant sources in summer but in winter the influence of additional non-traffic sources became more important, mainly from solid fuel sources (SF). Measurements using a single particle soot photometer (SP2, DMT), showed the traffic-dominant BC exhibited an almost uniform BC core size (Dc) distribution with very thin coating thickness throughout the detectable range of Dc. However, the size distribution of Dc (project average mass median Dc = 149 ± 22 nm in winter, and 120 ± 6 nm in summer) and BC coating thickness varied significantly in winter. A novel methodology was developed to attribute the BC number concentrations and mass abundances from traffic (BCtr) and from SF (BCsf), by using a 2-D histogram of the particle optical properties as a function of BC core size, as measured by the SP2. The BCtr and BCsf showed distinctly different Dc distributions and coating thicknesses, with BCsf displaying larger Dc and larger coating thickness compared to BCtr. BC particles from different sources were also apportioned by applying a multiple linear regression between the total BC mass and each AMS-PMF factor (BC-AMS-PMF method), and also attributed by applying the absorption spectral dependence of carbonaceous aerosols to 7-wavelength Aethalometer measurements (Aethalometer method). Air masses that originated from westerly (W), southeasterly (SE), and easterly (E) sectors showed BCsf fractions that ranged from low to high, and whose mass median Dc values were 137 ± 10 nm, 143 ± 11 nm and 169 ± 29 nm, respectively. The corresponding bulk relative coating thickness of BC (coated particle size/BC core - Dp/Dc) for these same sectors was 1.28 ± 0.07, 1.45 ± 0

  12. Size distribution, mixing state and source apportionments of black carbon aerosols in London during winter time

    NASA Astrophysics Data System (ADS)

    Liu, D.; Allan, J. D.; Young, D. E.; Coe, H.; Beddows, D.; Fleming, Z. L.; Flynn, M. J.; Gallagher, M. W.; Harrison, R. M.; Lee, J.; Prevot, A. S. H.; Taylor, J. W.; Yin, J.; Williams, P. I.; Zotter, P.

    2014-06-01

    Black carbon aerosols (BC) at a London urban site were characterized in both winter and summer time 2012 during the Clean Air for London (ClearfLo) project. Positive matrix factorization (PMF) factors of organic aerosol mass spectra measured by a high resolution aerosol mass spectrometer (HR-AMS) showed traffic-dominant sources in summer but in winter the influence of additional non-traffic sources became more important, mainly from solid fuel sources (SF). Measurements using a single particle soot photometer (SP2, DMT), showed the traffic-dominant BC exhibited an almost uniform BC core size (Dc) distribution with very thin coating thickness throughout the detectable range of Dc. However the size distribution of Dc (project average mass median Dc = 149 ± 22 nm in winter, and 120 ± 6 nm in summer) and BC coating thickness varied significantly in winter. A novel methodology was developed to attribute the BC number concentrations and mass abundances from traffic (BCtr) and from SF (BCsf), by using a 2-D histogram of the particle optical properties as a function of BC core size, as measured by the SP2. The BCtr and BCsf showed distinctly different Dc distributions and coating thicknesses, with BCsf displaying larger Dc and larger coating thickness compared to BCtr. BC particles from different sources were also apportioned by applying a multiple linear regression between the total BC mass and each AMS-PMF factor (BC-AMS-PMF method), and also attributed by applying the absorption spectral dependence of carbonaceous aerosols to 7-wavelength Aethalometer measurements (Aethalometer method). Air masses that originated from westerly (W), southeasterly (SE), or easterly (E) sectors showed BCsf fractions that ranged from low to high, and whose mass median Dc values were 137 ± 10 nm, 143 ± 11 nm, and 169 ± 29 nm respectively. The corresponding bulk relative coating thickness of BC (coated particle size / BC core - Dp / Dc) for these same sectors was 1.28 ± 0.07, 1.45 ± 0

  13. Long-Term Safe Storage and Disposal of Spent Sealed Radioactive Sources in Borehole Type Repositories

    SciTech Connect

    Ojovan, M. I.; Dmitriev, S. A.; Sobolev, I. A.

    2003-02-26

The Russian Federation has leading experience in applying the borehole storage/disposal method for SRS. A new immobilization technology for sources being disposed of in underground repositories was mastered by 1986, and it has been used in the country since then. This method retains all the advantages of borehole-type repositories while supplementing them with metal encapsulation of the sources. Sources are uniformly allocated in the volume of the underground vessel and fixed in a metal block, hence ensuring long-term safety. The dissipation of radiogenic heat from SRS is considerably improved, radiation fields are reduced, and direct contact of sources with the environment is completely eliminated. The capacity of a typical borehole storage/disposal facility is increased almost 6 times by applying metal immobilization, which has made the new technology extremely favourable economically. The metal immobilization of SRS is being considered as an option in Belarus and Ukraine, as well as Bulgaria. Immobilization of sources in metal matrices can be a real solution for the retrieval of SRS from inadequate repositories.

  14. Review of uncertainty sources affecting the long-term predictions of space debris evolutionary models

    NASA Astrophysics Data System (ADS)

    Dolado-Perez, J. C.; Pardini, Carmen; Anselmo, Luciano

    2015-08-01

Since the launch of Sputnik-I in 1957, the amount of space debris in Earth's orbit has increased continuously. Historically, besides abandoned intact objects (spacecraft and orbital stages), the primary sources of space debris in Earth's orbit were (i) accidental and intentional break-ups which produced long-lasting debris and (ii) debris released intentionally during the operation of launch vehicle orbital stages and spacecraft. In the future, fragments generated by collisions are expected to become a significant source as well. In this context, and from a purely mathematical point of view, the orbital debris population in Low Earth Orbit (LEO) should be intrinsically unstable, due to the physics of mutual collisions and the relative ineffectiveness of natural sink mechanisms above ~700 km. Therefore, the real question should not be "if", but "when" the exponential growth of the space debris population is expected to start. From a practical point of view, and in order to answer the previous question, since the end of the 1980s several sophisticated long-term debris evolutionary models have been developed. Unfortunately, the predictions performed with such models, in particular beyond a few decades, are affected by considerable uncertainty. Such uncertainty comes from a relatively large number of variables that, being either under partial control or completely outside the control of modellers, introduce a variability on the long-term simulation of the space debris population which cannot be captured with standard Monte Carlo statistics. The objective of this paper is to present and discuss many of the uncertainty sources affecting the long-term predictions done with evolutionary models, in order to serve as a roadmap for the uncertainty and the statistical robustness analysis of the long-term evolution of the space debris population.

  15. Spatial and Temporal Volatile Organic Compound Measurements in New England: Key Insight on Sources and Distributions

    NASA Astrophysics Data System (ADS)

    Sive, B. C.; White, M. L.; Russo, R. S.; Zhou, Y.; Ambrose, J. L.; Haase, K.; Mao, H.; Talbot, R. W.

    2010-12-01

Volatile organic compounds (VOCs) in the atmosphere act as precursors in the formation of tropospheric ozone and their emissions and oxidation products can contribute to secondary organic aerosol formation and growth. In examining their effects on regional chemistry and pollution events, considerable uncertainties exist in our understanding of the relative contributions from different sources and classes of compounds as well as their transport from other regions. To quantitatively improve VOC emission estimates for New England, such as propane from widespread LPG leakage and biogenic emissions of toluene, we have conducted regional surveys for VOCs in northern New England to map their spatial variability. This has included repeated one-day quarterly sampling trips covering four 250 to 300 mile loops throughout Maine, New Hampshire, Massachusetts and small portions of Vermont, Connecticut, and Rhode Island and more intensive diurnal hourly-sampling at selected locations within the measurement area. Additionally, long-term VOC measurements from the AIRMAP atmospheric monitoring station at Thompson Farm in rural Durham, New Hampshire are utilized to characterize the mixing ratios, seasonal to interannual variability, and sources of VOCs in this region. The combination of the spatial and temporal data sets has provided key insight on the various sources (e.g., fossil fuel combustion, gasoline, LPG, fuel or solvent evaporation, industry, biogenic) and distributions of VOCs throughout New England. Nonmethane hydrocarbon emission rate estimates from the regional sampling campaigns and Thompson Farm ranged from ~10^9-10^10 molecules cm^-2 s^-1. Moreover, emission rates based on our spatial and temporal measurements are compared with the (2002 and 2005) EPA National Emissions Inventory for the northeastern U.S. Finally, details regarding our analytical techniques and long-term calibration scales will be presented for measurements of C2-C10 nonmethane hydrocarbons, C1-C2 halocarbons

  16. Trace elements in particulate matter from metropolitan regions of Northern China: Sources, concentrations and size distributions.

    PubMed

    Pan, Yuepeng; Tian, Shili; Li, Xingru; Sun, Ying; Li, Yi; Wentworth, Gregory R; Wang, Yuesi

    2015-12-15

    Public concerns over airborne trace elements (TEs) in metropolitan areas are increasing, but long-term and multi-site observations of size-resolved aerosol TEs in China are still lacking. Here, we identify highly elevated levels of atmospheric TEs in megacities and industrial sites in a Beijing-Tianjin-Hebei urban agglomeration relative to background areas, with the annual mean values of As, Pb, Ni, Cd and Mn exceeding the acceptable limits of the World Health Organization. Despite the spatial variability in concentrations, the size distribution pattern of each trace element was quite similar across the region. Crustal elements of Al and Fe were mainly found in coarse particles (2.1-9 μm), whereas the main fraction of toxic metals, such as Cu, Zn, As, Se, Cd and Pb, was found in submicron particles (<1.1 μm). These toxic metals were enriched by over 100-fold relative to the Earth's crust. The size distributions of Na, Mg, K, Ca, V, Cr, Mn, Ni, Mo and Ba were bimodal, with two peaks at 0.43-0.65 μm and 4.7-5.8 μm. The combination of the size distribution information, principal component analysis and air mass back trajectory model offered a robust technique for distinguishing the main sources for airborne TEs, e.g., soil dust, fossil fuel combustion and industrial emissions, at different sites. In addition, higher elemental concentrations coincided with westerly flow, indicating that polluted soil and fugitive dust were major sources of TEs on the regional scale. However, the contribution of coal burning, iron industry/oil combustion and non-ferrous smelters to atmospheric metal pollution in Northern China should be given more attention. Considering that the concentrations of heavy metals associated with fine particles in the target region were significantly higher than those in other Asian sites, the implementations of strict environmental standards in China are required to reduce the amounts of these hazardous pollutants released into the atmosphere.
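
The ">100-fold enrichment relative to the Earth's crust" statement refers to the standard crustal enrichment factor, computed against a reference crustal element such as Al. The crustal abundances and aerosol concentrations below are illustrative placeholders, not values from the study.

```python
# Illustrative upper-crust mass fractions (placeholders, not study values)
CRUST = {"Al": 8.0e-2, "Pb": 1.7e-5, "Cd": 9.0e-8}

def enrichment_factor(sample, element, ref="Al", crust=CRUST):
    """EF = (X/ref)_aerosol / (X/ref)_crust.

    EF near 1 suggests a crustal (soil dust) origin; EF well above 10
    points to anthropogenic sources such as combustion or smelting."""
    ratio_sample = sample[element] / sample[ref]
    ratio_crust = crust[element] / crust[ref]
    return ratio_sample / ratio_crust

# Hypothetical fine-mode aerosol concentrations (ng per cubic metre)
aerosol = {"Al": 400.0, "Pb": 150.0, "Cd": 2.0}
ef_pb = enrichment_factor(aerosol, "Pb")
ef_cd = enrichment_factor(aerosol, "Cd")
```

Because the ratio-of-ratios cancels dilution effects, the same calculation works whether concentrations are given as mass fractions or as airborne concentrations, provided both elements come from the same sample.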

  17. Evaluation of severe accident risks: Methodology for the containment, source term, consequence, and risk integration analyses; Volume 1, Revision 1

    SciTech Connect

    Gorham, E.D.; Breeding, R.J.; Brown, T.D.; Harper, F.T.; Helton, J.C.; Murfin, W.B.; Hora, S.C.

    1993-12-01

    NUREG-1150 examines the risk to the public from five nuclear power plants. The NUREG-1150 plant studies are Level III probabilistic risk assessments (PRAs) and, as such, they consist of four analysis components: accident frequency analysis, accident progression analysis, source term analysis, and consequence analysis. This volume summarizes the methods utilized in performing the last three components and the assembly of these analyses into an overall risk assessment. The NUREG-1150 analysis approach is based on the following ideas: (1) general and relatively fast-running models for the individual analysis components, (2) well-defined interfaces between the individual analysis components, (3) use of Monte Carlo techniques together with an efficient sampling procedure to propagate uncertainties, (4) use of expert panels to develop distributions for important phenomenological issues, and (5) automation of the overall analysis. Many features of the new analysis procedures were adopted to facilitate a comprehensive treatment of uncertainty in the complete risk analysis. Uncertainties in the accident frequency, accident progression and source term analyses were included in the overall uncertainty assessment. The uncertainties in the consequence analysis were not included in this assessment. A large effort was devoted to the development of procedures for obtaining expert opinion and the execution of these procedures to quantify parameters and phenomena for which there is large uncertainty and divergent opinions in the reactor safety community.

  18. The impact of light source spectral power distribution on sky glow

    NASA Astrophysics Data System (ADS)

    Luginbuhl, Christian B.; Boley, Paul A.; Davis, Donald R.

    2014-05-01

    The effect of light source spectral power distribution on the visual brightness of anthropogenic sky glow is described. Under visual adaptation levels relevant to observing the night sky, namely with dark-adapted (scotopic) vision, blue-rich (“white”) sources produce a dramatically greater sky brightness than yellow-rich sources. High correlated color temperature LEDs and metal halide sources produce a visual brightness up to 8× brighter than low-pressure sodium and 3× brighter than high-pressure sodium when matched lumen-for-lumen and observed nearby. Though the sky brightness arising from blue-rich sources decreases more strongly with distance, the visual sky glow resulting from such sources remains significantly brighter than from yellow sources out to the limits of this study at 300 km.

  19. Evidence for bathymetric control on the distribution of body wave microseism sources from temporary seismic arrays in Africa

    NASA Astrophysics Data System (ADS)

    Euler, Garrett G.; Wiens, Douglas A.; Nyblade, Andrew A.

    2014-06-01

    Microseisms are the background seismic vibrations mostly driven by the interaction of ocean waves with the solid Earth. Locating the sources of microseisms improves our understanding of the range of conditions under which they are generated and has potential applications to seismic tomography and climate research. In this study, we detect persistent source locations of P-wave microseisms at periods of 5-10 s (0.1-0.2 Hz) using broad-band array noise correlation techniques and frequency-slowness analysis. Data include vertical component records from four temporary seismic arrays in equatorial and southern Africa with a total of 163 broad-band stations and deployed over a span of 13 yr (1994-2007). While none of the arrays were deployed contemporaneously, we find that the recorded microseismic P waves originate from common, distant oceanic bathymetric features with amplitudes that vary seasonally in proportion with extratropical cyclone activity. Our results show that the majority of the persistent microseismic P-wave source locations are within the 30°-60° latitude belts of the Northern and Southern hemispheres while a substantially reduced number are found at lower latitudes. Variations in source location with frequency are also observed and indicate tomographic studies including microseismic body wave sources will benefit from analysing multiple frequency bands. We show that the distribution of these source regions in the North Atlantic as well as in the Southern Ocean correlate with variations in bathymetry and ocean wave heights and corroborate current theory on double-frequency microseism generation. The stability of the source locations over the 13-yr time span of our investigation suggests that the long-term body wave microseism source distribution is governed by variations in the bathymetry and ocean wave heights while the interaction of ocean waves has a less apparent influence.
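    The frequency-slowness analysis used above amounts to scanning beam power over a grid of candidate slowness vectors and picking the peak. A minimal delay-and-sum sketch on a synthetic array (the station geometry, wavelet and slowness values are invented for illustration, not data from these deployments):

```python
import numpy as np

rng = np.random.default_rng(1)

# Small synthetic array (x, y) in km and a plane wave with known slowness (s/km).
stations = rng.uniform(-50, 50, size=(8, 2))
true_slowness = np.array([0.04, 0.02])        # eastward/northward components
fs, t = 10.0, np.arange(0, 60, 0.1)           # 10 Hz sampling, 60 s records

def wavelet(t):
    return np.exp(-((t - 30) / 2.0) ** 2) * np.cos(2 * np.pi * 0.15 * (t - 30))

# Each station records the wavelet delayed by s . x, plus noise.
records = np.array([wavelet(t - stations[i] @ true_slowness)
                    + 0.05 * rng.standard_normal(t.size) for i in range(8)])

# Delay-and-sum beam power over a slowness grid.
sx = sy = np.linspace(-0.1, 0.1, 41)
power = np.zeros((sx.size, sy.size))
for i, sxi in enumerate(sx):
    for j, syj in enumerate(sy):
        delays = stations @ np.array([sxi, syj])
        shifts = np.round(delays * fs).astype(int)
        beam = np.mean([np.roll(records[k], -shifts[k]) for k in range(8)], axis=0)
        power[i, j] = np.sum(beam ** 2)       # coherent energy for this slowness

imax, jmax = np.unravel_index(np.argmax(power), power.shape)
print("estimated slowness:", sx[imax], sy[jmax])
```

    The beam power peaks where the trial delays align the plane wave across the array; the peak's back-azimuth and slowness magnitude then point toward the source region.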

  20. Voltage management of distribution networks with high penetration of distributed photovoltaic generation sources

    NASA Astrophysics Data System (ADS)

    Alyami, Saeed

    Installation of photovoltaic (PV) units could lead to great challenges to the existing electrical systems. Issues such as voltage rise, protection coordination, islanding detection, harmonics, increased or changed short-circuit levels, etc., need to be carefully addressed before we can see a wide adoption of this environmentally friendly technology. Voltage rise or overvoltage issues are of particular importance to be addressed for deploying more PV systems to distribution networks. This dissertation proposes a comprehensive solution to deal with the voltage violations in distribution networks, from controlling PV power outputs and electricity consumption of smart appliances in real time to optimal placement of PVs at the planning stage. The dissertation is composed of three parts: the literature review, the work that has already been done and the future research tasks. An overview of renewable energy generation and its challenges is given in Chapter 1. The overall literature survey, motivation and the scope of study are also outlined in the chapter. Detailed literature reviews are given in the remaining chapters. The overvoltage and undervoltage phenomena in typical distribution networks with integration of PVs are further explained in Chapter 2. Possible approaches for voltage quality control are also discussed in this chapter, followed by a discussion of the importance of load management for PHEVs and appliances and its benefits to electric utilities and end users. A new real power capping method is presented in Chapter 3 to prevent overvoltage by adaptively setting the power caps for PV inverters in real time. The proposed method can maintain voltage profiles below a pre-set upper limit while maximizing the PV generation and fairly distributing the real power curtailments among all the PV systems in the network. As a result, each of the PV systems in the network has equal opportunity to generate electricity and shares the responsibility of voltage
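    The fair power-capping idea can be sketched with a linearized voltage model. In the toy example below, the feeder data, the voltage-sensitivity matrix and the simple uniform-cap search are all illustrative assumptions, not the dissertation's actual algorithm; they show only how a common cap shared by all inverters can hold every bus below the voltage limit:

```python
import numpy as np

# Hypothetical 4-bus feeder: linearized voltage model V = V0 + S @ p,
# where p are PV injections (p.u.) and S is a sensitivity matrix (assumed known).
V0 = np.array([1.00, 1.01, 1.02, 1.03])
S = np.array([[0.02, 0.01, 0.01, 0.00],
              [0.01, 0.03, 0.02, 0.01],
              [0.01, 0.02, 0.04, 0.02],
              [0.00, 0.01, 0.02, 0.05]])
p_avail = np.array([1.0, 1.0, 1.0, 1.0])   # available PV power per bus
v_max = 1.05

def fair_caps(p_avail, V0, S, v_max, step=0.01):
    """Uniformly lower a common cap until no bus exceeds v_max."""
    cap = p_avail.max()
    while cap > 0:
        p = np.minimum(p_avail, cap)
        if np.all(V0 + S @ p <= v_max):
            return p
        cap -= step
    return np.zeros_like(p_avail)

p = fair_caps(p_avail, V0, S, v_max)
print("caps:", p, "voltages:", V0 + S @ p)
```

    Because every inverter faces the same cap, the curtailment burden is shared equally, which is the fairness property emphasized in the abstract.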

  1. SARNET: Integrating Severe Accident Research in Europe - Safety Issues in the Source Term Area

    SciTech Connect

    Haste, T.; Giordano, P.; Micaelli, J.-C.; Herranz, L.

    2006-07-01

    SARNET (Severe Accident Research Network) is a Network of Excellence of the EU 6th Framework Programme that integrates in a sustainable manner the research capabilities of about fifty European organisations to resolve important remaining uncertainties and safety issues concerning existing and future nuclear plants, especially water-cooled reactors, under hypothetical severe accident conditions. It emphasises integrating activities, spreading of excellence (including knowledge transfer) and jointly-executed research. This paper summarises the main results obtained at the midpoint of the current 4-year term, highlighting those concerning radioactive release to the environment. Integration is pursued through different methods: the ASTEC integral computer code for severe accident modelling, development of PSA level 2 methods, a means for definition, updating and resolution of safety issues, and development of a web database for storing experimental results. These activities are helped by an evolving Advanced Communication Tool, easing communication amongst partners. Concerning spreading of excellence, educational courses covering severe accident analysis methodology and level 2 PSA have been organised for early 2006. A text book on Severe Accident Phenomenology is being written. A mobility programme for students and young researchers has started. Results are disseminated mainly through open conference proceedings, with journal publications planned. The 1st European Review Meeting on Severe Accidents in November 2005 covered SARNET activities during its first 18 months. Jointly executed research activities concern key issues grouped in the Corium, Containment and Source Term areas. In Source Term, the behaviour of the highly radio-toxic ruthenium under oxidising conditions, including air ingress, is investigated. Models are proposed for fuel and ruthenium oxidation. Experiments on transport of oxide ruthenium species are performed. Reactor scenario studies assist in defining

  2. Spurious Behavior of Shock-Capturing Methods: Problems Containing Stiff Source Terms and Discontinuities

    NASA Technical Reports Server (NTRS)

    Yee, Helen M. C.; Kotov, D. V.; Wang, Wei; Shu, Chi-Wang

    2013-01-01

    The goal of this paper is to relate the numerical dissipation inherent in high order shock-capturing schemes to the onset of wrong propagation speed of discontinuities. For pointwise evaluation of the source term, previous studies indicated that the phenomenon of wrong propagation speed of discontinuities is connected with the smearing of the discontinuity caused by the discretization of the advection term. The smearing introduces a nonequilibrium state into the calculation. Thus as soon as a nonequilibrium value is introduced in this manner, the source term turns on and immediately restores equilibrium, while at the same time shifting the discontinuity to a cell boundary. The present study shows that the degree of wrong propagation speed of discontinuities is highly dependent on the accuracy of the numerical method. The manner in which the smearing of discontinuities is contained by the numerical method and the overall amount of numerical dissipation being employed play major roles. Moreover, employing finite time steps and grid spacings that are below the standard Courant-Friedrichs-Lewy (CFL) limit on shock-capturing methods for compressible Euler and Navier-Stokes equations containing stiff reacting source terms and discontinuities reveals surprising counter-intuitive results. Unlike non-reacting flows, for stiff reactions with discontinuities, employing a time step and grid spacing that are below the CFL limit (based on the homogeneous or non-reacting part of the governing equations) does not guarantee a correct solution of the chosen governing equations. Instead, depending on the numerical method, time step and grid spacing, the numerical simulation may lead to (a) the correct solution (within the truncation error of the scheme), (b) a divergent solution, (c) a solution with the wrong propagation speed of discontinuities or (d) other spurious solutions that are solutions of the discretized counterparts but are not solutions of the governing equations.
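    The wrong-speed phenomenon is easy to reproduce in the infinitely stiff limit of the classic model problem u_t + u_x = -μ u(u-1)(u-1/2), where the source step simply relaxes u to the nearest stable equilibrium (0 or 1). The sketch below uses plain first-order upwind with operator splitting, not the high-order schemes studied in the paper; with a CFL number below 0.5 the smeared front value never crosses the 0.5 threshold, so the discontinuity locks to a cell boundary and does not move at all, while the exact front travels at speed 1:

```python
import numpy as np

# Model problem: u_t + u_x = -mu * u (u-1)(u-0.5) in the infinitely stiff limit,
# where the source step snaps u to the nearest stable equilibrium (0 or 1).
# Exact solution: the initial step simply advects at speed 1.
N, cfl, T = 200, 0.4, 0.25
dx = 1.0 / N
dt = cfl * dx
x = (np.arange(N) + 0.5) * dx
u = np.where(x < 0.2, 1.0, 0.0)              # initial discontinuity at x = 0.2

t = 0.0
while t < T - 1e-12:
    u[1:] = u[1:] - cfl * (u[1:] - u[:-1])   # first-order upwind advection
    u = np.where(u < 0.5, 0.0, 1.0)          # stiff source: snap to equilibrium
    t += dt

front_numerical = x[np.argmin(u > 0.5)]      # first cell where u has dropped to 0
front_exact = 0.2 + T
print("numerical front:", front_numerical, " exact front:", front_exact)
```

    With cfl above 0.5 the same code instead advances the front one full cell per step, i.e. too fast; either way the discretized problem selects a grid-dependent speed rather than the physical one.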

  3. The Multimedia Environmental Pollutant Assessment System (MEPAS){reg_sign}: Source-term release formulations

    SciTech Connect

    Streile, G.P.; Shields, K.D.; Stroh, J.L.; Bagaasen, L.M.; Whelan, G.; McDonald, J.P.; Droppo, J.G.; Buck, J.W.

    1996-11-01

    This report is one of a series of reports that document the mathematical models in the Multimedia Environmental Pollutant Assessment System (MEPAS). Developed by Pacific Northwest National Laboratory for the US Department of Energy, MEPAS is an integrated impact assessment software implementation of physics-based fate and transport models in air, soil, and water media. Outputs are estimates of exposures and health risk assessments for radioactive and hazardous pollutants. Each of the MEPAS formulation documents covers a major MEPAS component such as source-term, atmospheric, vadose zone/groundwater, surface water, and health exposure/health impact assessment. Other MEPAS documentation reports cover the sensitivity/uncertainty formulations and the database parameter constituent property estimation methods. The pollutant source-term release component is documented in this report. MEPAS simulates the release of contaminants from a source, transport through the air, groundwater, surface water, or overland pathways, and transfer through food chains and exposure pathways to the exposed individual or population. For human health impacts, risks are computed for carcinogens and hazard quotients for noncarcinogens. MEPAS is implemented on a desktop computer with a user-friendly interface that allows the user to define the problem, input the required data, and execute the appropriate models for both deterministic and probabilistic analyses.

  4. Experiments on liquid-metal fast breeder reactor aerosol source terms after severe accidents

    SciTech Connect

    Berthoud, G.; Longest, A.W.; Wright, A.L.; Schutz, W.P.

    1988-05-01

    In the extremely unlikely event of a liquid-metal fast breeder reactor core disruptive accident, expanding core material or sodium vapor inside the sodium pool may cause leaks in the vessel head and transport of radioactive material, mostly aerosols, in one large bubble or several smaller bubbles under energetic conditions to the cover gas and through leaks to the inner containment ("instantaneous source term"). Out-of-pile experiments on bubble expansion from a pressurized source inside a liquid (water or sodium) and related phenomena like heat transfer, condensation, entrainment, rise, and aerosol transport were carried out in France and the United States and are continuing in the Federal Republic of Germany. Parameters and results of these experiments are described and discussed, mainly concerning the aerosol problem. It appears that several mechanisms exist for a very efficient removal of particles from the bubble. Retention factors larger than 10,000 were found in most cases. In addition, a short survey is given of French and German experiments on fuel and fission product release from evaporating or burning sodium pools ("delayed source term").

  5. Filtered chemical source term modeling for LES of high Karlovitz number premixed flames

    NASA Astrophysics Data System (ADS)

    Lapointe, Simon; Blanquart, Guillaume

    2015-11-01

    Tabulated chemistry with the transport of a single progress variable is a popular technique for large eddy simulations of premixed turbulent flames. Since the reaction zone thickness is usually smaller than the LES grid size, modeling of the filtered progress variable reaction rate is required. Most models assume that the filtered progress variable reaction rate is a function of the filtered progress variable and its variance where the dependence can be obtained through the probability density function (PDF) of the progress variable. Among the most common approaches, the PDF can be presumed (usually as a β-PDF) or computed using spatially filtered one dimensional laminar flames (FLF). Models for the filtered source term are studied a priori using results from DNS of turbulent n-heptane/air premixed flames at varying Karlovitz numbers. Predictions from the optimal estimator and models based on laminar flames using a β-PDF or a FLF-PDF are compared to the exact filtered source term. For all filter widths and Karlovitz numbers, the optimal estimator yields small errors while β-PDF and FLF-PDF approaches present larger errors. Sources of differences are discussed.
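    The presumed β-PDF closure described above can be sketched as a one-dimensional quadrature: the filtered reaction rate is the laminar rate weighted by a β-PDF whose shape parameters come from the filtered progress variable and its variance. In the sketch below the source-term shape ω(c) is an invented illustrative function, not an n-heptane rate, and the filtered rate is compared against the rate evaluated at the filtered progress variable:

```python
import numpy as np
from math import gamma

def beta_pdf(c, mean, var):
    """Beta PDF on (0,1) parameterized by mean and variance (var < mean*(1-mean))."""
    k = mean * (1 - mean) / var - 1
    a, b = mean * k, (1 - mean) * k
    return c**(a - 1) * (1 - c)**(b - 1) * gamma(a + b) / (gamma(a) * gamma(b))

def omega(c, beta_n=6.0):
    """Illustrative progress-variable source term, peaked toward burnt gases."""
    return c * (1 - c) * np.exp(-beta_n * (1 - c))

c = np.linspace(1e-6, 1 - 1e-6, 2001)
mean, var = 0.5, 0.05
filtered = np.trapz(omega(c) * beta_pdf(c, mean, var), c)   # <omega> = int omega(c) P(c) dc
laminar_at_mean = omega(np.array([mean]))[0]
print("filtered rate:", filtered, " rate at filtered c:", laminar_at_mean)
```

    The gap between the two printed numbers is exactly what the closure must capture: because ω(c) is nonlinear, evaluating it at the filtered progress variable alone misses the contribution of subgrid fluctuations.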

  6. Effect of tissue inhomogeneities on dose distributions from Cf-252 brachytherapy source.

    PubMed

    Ghassoun, J

    2013-01-01

    The Monte Carlo method was used to determine the effect of tissue inhomogeneities on dose distribution from a Cf-252 brachytherapy source. Neutron and gamma-ray fluences, energy spectra and dose rate distributions were determined in both homogeneous and inhomogeneous phantoms. Simulations were performed using the MCNP5 code. The obtained results were compared with experimentally measured values published in the literature. Results showed a significant change in neutron dose rate distributions in the presence of heterogeneities. However, their effect on the gamma-ray dose distribution is minimal.

  7. Technical considerations related to interim source-term assumptions for emergency planning and equipment qualification. [PWR; BWR

    SciTech Connect

    Niemczyk, S.J.; McDowell-Boyer, L.M.

    1982-09-01

    The source terms recommended in the current regulatory guidance for many considerations of light water reactor (LWR) accidents were developed a number of years ago when understandings of many of the phenomena pertinent to source term estimation were relatively primitive. The purpose of the work presented here was to develop more realistic source term assumptions which could be used for interim regulatory purposes for two specific considerations, namely, equipment qualification and emergency planning. The overall approach taken was to adopt assumptions and models previously proposed for various aspects of source term estimation and to modify those assumptions and models to reflect recently gained insights into, and data describing, the release and transport of radionuclides during and after LWR accidents. To obtain illustrative estimates of the magnitudes of the source terms, the results of previous calculations employing the adopted assumptions and models were utilized and were modified to account for the effects of the recent insights and data.

  8. Particulate air pollution in six Asian cities: Spatial and temporal distributions, and associated sources

    NASA Astrophysics Data System (ADS)

    Kim Oanh, N. T.; Upadhyay, N.; Zhuang, Y.-H.; Hao, Z.-P.; Murthy, D. V. S.; Lestari, P.; Villarin, J. T.; Chengchua, K.; Co, H. X.; Dung, N. T.; Lindgren, E. S.

    A monitoring program for particulate matter pollution was designed and implemented in six Asian cities/metropolitan regions including Bandung, Bangkok, Beijing, Chennai, Manila, and Hanoi, within the framework of the Asian regional air pollution research network (AIRPET), coordinated by the Asian Institute of Technology. Methodologies were kept as uniform as possible across the network, with an established QA/QC procedure, in order to produce reliable and comparable data. The monsoon effects and seasonal changes in the sources/activities require long-term monitoring to understand the nature of air pollution in the cities. During phase 1 (2001-2004) of the AIRPET, around 3000 fine and coarse particulate matter samples were collected from characteristic urban sites, which provide insight into temporal and spatial variations of PM in the cities. In all six cities, the levels of PM10 and PM2.5 were found to be high, especially during the dry season, and frequently exceeded the corresponding 24 h US EPA standards at a number of sites. The average concentrations of PM2.5 and PM10 in the cities ranged, respectively, over 44-168 and 54-262 μg m-3 in the dry season, and 18-104 and 33-180 μg m-3 in the wet season. The spatial and temporal distribution of PM in each city, the ratios of PM2.5 to PM10, and the reconstructed mass are presented; they provide useful information on possible PM sources in the cities. The findings help to understand the nature of particulate matter air pollution problems in the selected cities/metropolitan regions.

  9. Characterization and Source Term Assessments of Radioactive Particles from Marshall Islands Using Non-Destructive Analytical Techniques

    SciTech Connect

    Jernstrom, J; Eriksson, M; Simon, R; Tamborini, G; Bildstein, O; Carlos-Marquez, R; Kehl, S R; Betti, M; Hamilton, T

    2005-06-11

    A considerable fraction of the radioactivity entering the environment from different nuclear events is associated with particles. The impact of these events can only be fully assessed where there is some knowledge about the mobility of particle-bound radionuclides entering the environment. The behavior of particulate radionuclides depends on several factors, including the physical, chemical and redox state of the environment, the characteristics of the particles (e.g., the chemical composition, crystallinity and particle size) and the oxidative state of the radionuclides contained in the particles. Six plutonium-containing particles stemming from Runit Island soil (Marshall Islands) were characterized using non-destructive analytical and microanalytical methods. By determining the activities of the {sup 239,240}Pu and {sup 241}Am isotopes from their gamma peaks, structural information related to the Pu matrix was obtained and the source term was revealed. Composition and elemental distribution in the particles were studied with synchrotron radiation based micro X-ray fluorescence (SR-{mu}-XRF) spectrometry. A scanning electron microscope equipped with an energy-dispersive X-ray detector (SEM-EDX) and a secondary ion mass spectrometer (SIMS) were used to examine particle surfaces. Based on the elemental composition, the particles were divided into two groups: particles with a plain Pu matrix, and particles in which the plutonium is embedded, more heterogeneously distributed, in a Si/O-rich matrix. All of the particles were identified as fragments of initial weapons material. Because the particles contain plutonium with a low {sup 240}Pu/{sup 239}Pu atomic ratio, {approx}2-6%, which corresponds to weapons-grade plutonium, the source term was identified as one of the safety tests conducted in the history of Runit Island.

  10. Operational source term estimation and ensemble prediction for the Grimsvoetn 2011 event

    NASA Astrophysics Data System (ADS)

    Maurer, Christian; Arnold, Delia; Klonner, Robert; Wotawa, Gerhard

    2014-05-01

    The ESA-funded international project VAST (Volcanic Ash Strategic Initiative Team) includes a focus on realistic source term estimation in the case of volcanic eruptions, as well as on estimating the forecast uncertainty in the resulting atmospheric dispersion calculations, which partly derives from the forecast uncertainty in the meteorological input data. SEVIRI earth observation data, from which the total atmospheric column ash content can be estimated, serve as a basis for the source term estimation. In an operational environment, the already available EUMETCAST VOLE product may be used. Further, an a priori source term is needed, which can be coarsely estimated from information on previous eruptions and/or constrained with observations of the eruption column. The link between observations and the a priori source is established by runs of the atmospheric transport model FLEXPART for individual emission periods and a predefined number of vertical levels. By minimizing the differences between observations and model results, the so-called a posteriori source term can be derived for a certain time interval as a function of height. Such a result is shown for a first test case, the eruption of the Grimsvoetn volcano on Iceland in May 2011. Once the dispersion calculations are as optimized as possible with regard to the source term, the uncertainty stemming from the forecast uncertainty of the numerical weather prediction model used is still present, adding to the unavoidable model errors. Since it is impossible to perform FLEXPART runs for all 50 members of the Integrated Forecasting System (IFS) of ECMWF due to computational (time-storage) constraints, the number of members is restricted to five (at most seven) representative runs via cluster analysis. The approach follows Klonner (2012), where it was demonstrated that exclusive consideration of the wind components on a pressure level (e.g. 400 hPa) makes it possible to find clusters and
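    The a priori/a posteriori adjustment described above can be sketched as a regularized least-squares inversion. In this toy version the sensitivity matrix stands in for the per-period, per-level FLEXPART runs (one column per source element), and all numbers are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical setup: 6 emission periods/heights, 40 column-load observations.
# M[i, j] = modeled observation i per unit emission from source element j
# (one transport-model run per element in the real workflow).
n_obs, n_src = 40, 6
M = rng.uniform(0, 1, size=(n_obs, n_src))
q_true = np.array([0.0, 2.0, 5.0, 3.0, 0.5, 0.0])      # "unknown" emissions
y = M @ q_true + 0.05 * rng.standard_normal(n_obs)     # noisy observations

# A posteriori source: minimize ||M q - y||^2 + alpha ||q - q_apriori||^2
# by stacking the regularization rows onto the data rows.
q_apriori = np.ones(n_src)
alpha = 0.01
A = np.vstack([M, np.sqrt(alpha) * np.eye(n_src)])
b = np.concatenate([y, np.sqrt(alpha) * q_apriori])
q_post, *_ = np.linalg.lstsq(A, b, rcond=None)
q_post = np.clip(q_post, 0.0, None)                    # emissions are nonnegative
print("a posteriori source:", np.round(q_post, 2))
```

    The regularization weight alpha balances trust in the satellite observations against trust in the a priori profile; with dense, low-noise observations the a posteriori estimate is pulled strongly toward the data.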

  11. The integration of renewable energy sources into electric power distribution systems. Volume 2, Utility case assessments

    SciTech Connect

    Zaininger, H.W.; Ellis, P.R.; Schaefer, J.C.

    1994-06-01

    Electric utility distribution system impacts associated with the integration of renewable energy sources such as photovoltaics (PV) and wind turbines (WT) are considered in this project. The impacts are expected to vary from site to site according to the following characteristics: (1) the local solar insolation and/or wind characteristics; (2) renewable energy source penetration level; (3) whether battery or other energy storage systems are applied; and (4) local utility distribution design standards and planning practices. Small, distributed renewable energy sources are connected to the utility distribution system like other, similar kW- and MW-scale equipment and loads. Residential applications are expected to be connected to single-phase 120/240-V secondaries. Larger kW-scale applications may be connected to three-phase secondaries, and larger hundred-kW and MW-scale applications, such as MW-scale windfarms or PV plants, may be connected to electric utility primary systems via customer-owned primary and secondary collection systems. Small, distributed renewable energy sources installed on utility distribution systems will also produce nonsite-specific utility generation system benefits such as energy and capacity displacement benefits, in addition to the local site-specific distribution system benefits. Although generation system benefits are not site-specific, they are utility-specific, and they vary significantly among utilities in different regions. In addition, transmission system benefits, environmental benefits and other benefits may apply. These benefits also vary significantly among utilities and regions. Seven utility case studies considering PV, WT, and battery storage were conducted to identify a range of potential renewable energy source distribution system applications.

  12. Parameterized source term in the diffusion approximation for enhanced near-field modeling of collimated light

    NASA Astrophysics Data System (ADS)

    Jia, Mengyu; Wang, Shuang; Chen, Xueying; Gao, Feng; Zhao, Huijuan

    2016-03-01

    Most analytical methods for describing light propagation in turbid media exhibit low effectiveness in the near-field of a collimated source. Motivated by the Charge Simulation Method in electromagnetic theory, as well as established discrete-source-based modeling, we have reported an improved explicit model, referred to as the "Virtual Source" (VS) diffusion approximation (DA), which inherits the mathematical simplicity of the DA while considerably extending its validity for modeling near-field photon migration in low-albedo media. In this model, the collimated light of the standard DA is approximated as multiple isotropic point sources (VSs) distributed along the incident direction. For performance enhancement, a fitting procedure between the calculated and realistic reflectances is adopted in the near-field to optimize the VS parameters (intensities and locations). To be practically applicable, an explicit 2VS-DA model is established based on closed-form derivations of the VS parameters for typical ranges of the optical parameters. The proposed VS-DA model is validated by comparison with Monte Carlo simulations, and further introduced into the image reconstruction of a Laminar Optical Tomography system.
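    The VS idea — replacing the collimated beam with isotropic point sources buried along the incident axis — can be sketched with the standard semi-infinite diffusion Green's function and an extrapolated-boundary image source. The depths and weights below are illustrative placeholders, not the paper's fitted 2VS-DA parameters:

```python
import numpy as np

# Diffusion-approximation fluence in a semi-infinite medium, with the collimated
# beam replaced by two virtual isotropic point sources along the incident axis.
mu_a, mu_s_prime = 0.01, 1.0                # absorption / reduced scattering, 1/mm
D = 1.0 / (3.0 * (mu_a + mu_s_prime))       # diffusion coefficient, mm
mu_eff = np.sqrt(mu_a / D)                  # effective attenuation, 1/mm
z_b = 2.0 * D                               # extrapolated-boundary offset (A = 1)

def point_fluence(rho, z, z_src):
    """Isotropic point source at depth z_src plus its negative image source."""
    r1 = np.sqrt(rho**2 + (z - z_src) ** 2)
    r2 = np.sqrt(rho**2 + (z + z_src + 2 * z_b) ** 2)
    return (np.exp(-mu_eff * r1) / r1 - np.exp(-mu_eff * r2) / r2) / (4 * np.pi * D)

def vs_fluence(rho, z, depths=(0.5, 2.0), weights=(0.7, 0.3)):
    """Virtual-source superposition: weighted sum of buried isotropic sources."""
    return sum(w * point_fluence(rho, z, d) for d, w in zip(depths, weights))

rho = np.linspace(0.5, 10, 50)              # radial distance on the surface, mm
phi = vs_fluence(rho, z=0.0)
print("fluence at rho = 0.5 and 10 mm:", phi[0], phi[-1])
```

    In the full method the depths and weights would be fitted so that the computed near-field reflectance matches a reference (e.g. Monte Carlo) solution; the superposition structure itself is what keeps the model explicit and fast.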

  13. Effect of asymmetry of the radio source distribution on the apparent proper motion kinematic analysis

    NASA Astrophysics Data System (ADS)

    Titov, O.; Malkin, Z.

    2009-11-01

    Context: Information on the physical characteristics of astrometric radio sources, such as magnitude and redshift, is of great importance for many astronomical studies. However, the data usually used in radio astrometry are often incomplete and outdated. Aims: Our purpose is to study the optical characteristics of more than 4000 radio sources observed by the astrometric VLBI technique since 1979. We also studied the effect of the asymmetry in the distribution of the reference radio sources on the correlation matrices between vector spherical harmonics of the first and second degrees. Methods: The radio source characteristics were mainly taken from the NASA/IPAC Extragalactic Database (NED). Characteristics of the gravitational lenses were checked with the CfA-Arizona Space Telescope LEns Survey. The SIMBAD and HyperLeda databases were also used to clarify the characteristics of some objects. We also simulated and investigated a list of 4000 radio sources evenly distributed around the celestial sphere. We estimated the correlation matrices between the vector spherical harmonics using both the real and the modelled distributions of the radio sources. Results: A new list OCARS (optical characteristics of astrometric radio sources) of 4261 sources has been compiled. Comparison of our optical characteristics data with the official International Earth Rotation and Reference Systems Service (IERS) list showed significant discrepancies for about half of the 667 common sources. Finally, we found that asymmetry in the radio source distribution between hemispheres could cause significant correlation between the vector spherical harmonics, especially in the case of sparse distribution of the sources with high redshift. We also identified radio sources having a many-year observation history but lacking a redshift. These sources should be urgently observed with large optical telescopes. Conclusions: The list of optical characteristics created in this paper is recommended for use as a
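    The effect of hemispheric asymmetry on harmonic correlations can be illustrated with a scalar toy model: two zonal basis functions standing in for degree-1 and degree-2 vector spherical harmonics, with the correlation read off the least-squares normal matrix. The basis functions and sample sizes are simplifications for illustration, not the paper's actual VSH expansion:

```python
import numpy as np

rng = np.random.default_rng(7)

def correlation(dec):
    """Correlation between degree-1- and degree-2-like zonal basis functions
    estimated from the least-squares normal matrix for a given source list."""
    f1 = np.sin(dec)                 # dipole-like zonal term
    f2 = 3 * np.sin(dec) ** 2 - 1    # quadrupole-like zonal term
    A = np.column_stack([f1, f2])
    N = A.T @ A                      # normal matrix of the fit
    return N[0, 1] / np.sqrt(N[0, 0] * N[1, 1])

n = 4000
# Uniform all-sky distribution: dec = arcsin(u) with u uniform in [-1, 1].
dec_uniform = np.arcsin(rng.uniform(-1, 1, n))
# Asymmetric distribution: same density, northern hemisphere only.
dec_north = np.arcsin(rng.uniform(0, 1, n))

print("all-sky correlation:  ", correlation(dec_uniform))
print("north-only correlation:", correlation(dec_north))
```

    For the symmetric sky the odd and even zonal terms are nearly orthogonal, so their estimated coefficients decorrelate; restricting sources to one hemisphere breaks that orthogonality and couples the harmonics, which is the qualitative effect reported above.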

  14. ORIGAMI Automator Primer. Automated ORIGEN Source Terms and Spent Fuel Storage Pool Analysis

    SciTech Connect

    Wieselquist, William A.; Thompson, Adam B.; Bowman, Stephen M.; Peterson, Joshua L.

    2016-04-01

    Source terms and spent nuclear fuel (SNF) storage pool decay heat load analyses for operating nuclear power plants require a large number of Oak Ridge Isotope Generation and Depletion (ORIGEN) calculations. SNF source term calculations also require a significant amount of bookkeeping to track quantities such as core and assembly operating histories, spent fuel pool (SFP) residence times, heavy metal masses, and enrichments. The ORIGEN Assembly Isotopics (ORIGAMI) module in the SCALE code system provides a simple scheme for entering these data. However, given the large scope of the analysis, extensive scripting is necessary to convert formats and process data to create thousands of ORIGAMI input files (one per assembly) and to process the results into formats readily usable by follow-on analysis tools. This primer describes a project within the SCALE Fulcrum graphical user interface (GUI) called ORIGAMI Automator that was developed to automate the scripting and bookkeeping in large-scale source term analyses. The ORIGAMI Automator enables the analyst to (1) easily create, view, and edit the reactor site and assembly information, (2) automatically create and run ORIGAMI inputs, and (3) analyze the results from ORIGAMI. ORIGAMI Automator uses the standard ORIGEN binary concentrations files produced by ORIGAMI, with concentrations available at all time points in each assembly’s life. The GUI plots results such as mass, concentration, activity, and decay heat using a powerful new ORIGEN Post-Processing Utility for SCALE (OPUS) GUI component. This document includes a description and user guide for the GUI, a step-by-step tutorial for a simplified scenario, and appendices that document the file structures used.

  15. Modification to ORIGEN2 for generating N Reactor source terms. Volume 1

    SciTech Connect

    Schwarz, R.A.

    1997-04-01

    This report discusses work that has been done to upgrade the ORIGEN2 code cross sections to be compatible with the WIMS computer code data, and the resulting changes in the ORIGEN2 calculations. Details on the changes made to the ORIGEN2 computer code and the Radnuc code are discussed, along with additional work that should be done in the future to upgrade both ORIGEN2 and Radnuc. A detailed historical description of how source terms have been generated for N Reactor fuel stored in the K Basins is provided. The neutron source discussed in this description was generated by the WIMS computer code (Gubbins et al. 1982) because of known shortcomings in the ORIGEN2 (Croff 1980) cross sections. Another document includes a discussion of the ORIGEN2 cross sections.

  16. Source term experiment STEP-3 simulating a PWR severe station blackout

    SciTech Connect

    Simms, R.; Baker, L. Jr.; Ritzman, R.L.

    1987-05-21

    For a severe PWR accident that leads to a loss of feedwater to the steam generators, such as might occur in a station blackout, fission product decay heating will cause a water boiloff. Without effective cooling of the core, steam will begin to oxidize the Zircaloy cladding. The noble gases and volatile fission products, such as Cs and I, that are major contributors to the radiological source term, will be released from the damaged fuel shortly after cladding failure. The accident environment when these volatile fission products escape was simulated in STEP-3 using four fuel elements from the Belgonucleaire BR3 reactor. The primary objective was to examine the releases in samples collected as close to the test zone as possible. In this paper, an analysis of the temperatures and hydrogen generation is compared with the measurements. The analysis is needed to estimate releases and characterize conditions at the source for studies of fission product transport.

  17. User's Manual for the SOURCE1 and SOURCE2 Computer Codes: Models for Evaluating Low-Level Radioactive Waste Disposal Facility Source Terms (Version 2.0)

    SciTech Connect

    Icenhour, A.S.; Tharp, M.L.

    1996-08-01

    The SOURCE1 and SOURCE2 computer codes calculate source terms (i.e. radionuclide release rates) for performance assessments of low-level radioactive waste (LLW) disposal facilities. SOURCE1 is used to simulate radionuclide releases from tumulus-type facilities. SOURCE2 is used to simulate releases from silo-, well-, well-in-silo-, and trench-type disposal facilities. The SOURCE codes (a) simulate the degradation of engineered barriers and (b) provide an estimate of the source term for LLW disposal facilities. This manual summarizes the major changes that have been effected since the codes were originally developed.

  18. Analysis of source term modeling for low-level radioactive waste performance assessments

    SciTech Connect

    Icenhour, A.S.

    1995-03-01

    Site-specific radiological performance assessments are required for the disposal of low-level radioactive waste (LLW) at both commercial and US Department of Energy facilities. This work explores source term modeling of LLW disposal facilities by using two state-of-the-art computer codes, SOURCE1 and SOURCE2. An overview of the performance assessment methodology is presented, and the basic processes modeled in the SOURCE1 and SOURCE2 codes are described. Comparisons are made between the two advective models for a variety of radionuclides, transport parameters, and waste-disposal technologies. These comparisons show that, in general, the zero-order model predicts undecayed cumulative fractions leached that are slightly greater than or equal to those of the first-order model. For long-lived radionuclides, results from the two models eventually reach the same value. By contrast, for short-lived radionuclides, the zero-order model predicts a slightly higher undecayed cumulative fraction leached than does the first-order model. A new methodology, based on sensitivity and uncertainty analyses, is developed for predicting intruder scenarios. This method is demonstrated for {sup 137}Cs in a tumulus-type disposal facility. The sensitivity and uncertainty analyses incorporate input-parameter uncertainty into the evaluation of a potential time of intrusion and the remaining radionuclide inventory. Finally, conclusions from this study are presented, and recommendations for continuing work are made.
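
    The zero-order versus first-order comparison described above can be sketched directly (a minimal illustration with a hypothetical rate constant, not the actual SOURCE1/SOURCE2 leaching equations): with matched rate constants, the zero-order undecayed cumulative fraction leached bounds the first-order one from above at every time, and the two converge for long observation times.

    ```python
    import math

    def zero_order_fraction(k, t):
        """Undecayed cumulative fraction leached, zero-order (constant-rate) model."""
        return min(k * t, 1.0)

    def first_order_fraction(lam, t):
        """Undecayed cumulative fraction leached, first-order (exponential) model."""
        return 1.0 - math.exp(-lam * t)

    # With matched rate constants (hypothetical value below), the zero-order
    # prediction is greater than or equal to the first-order one at every
    # time, and both approach 1 for long observation times.
    rate = 0.05  # fractional release per year (illustrative)
    for t in (1.0, 10.0, 100.0):
        assert zero_order_fraction(rate, t) >= first_order_fraction(rate, t)
    ```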

  19. Short-term spatial change in a volcanic tremor source during the 2011 Kirishima eruption

    NASA Astrophysics Data System (ADS)

    Matsumoto, Satoshi; Shimizu, Hiroshi; Matsushima, Takeshi; Uehira, Kenji; Yamashita, Yusuke; Nakamoto, Manami; Miyazaki, Masahiro; Chikura, Hiromi

    2013-04-01

    Volcanic tremors are indicators of magmatic behavior, which is strongly related to volcanic eruptions and activity. Detection of spatial and temporal variations in the source location is important for understanding the mechanism of volcanic eruptions. However, short-term temporal variations within a tremor event have not always been detected by seismic array observations around volcanoes. Here, we show that volcanic tremor sources were activated at both the top (i.e., the crater) and the lower end of the conduit, by analyzing seismograms from a dense seismic array 3 km from the Shinmoedake crater, Kirishima volcano, Japan. We observed changes in the seismic ray direction during a volcanic tremor sequence, and inferred two major sources of the tremor from the slowness vectors of the approaching waves. One was located in a shallow region beneath the Shinmoedake crater. The other was found in a direction N30°W from the array, pointing to a location above a pressure source. The fine spatial and temporal characteristics of volcanic tremors suggest an interaction between deep and shallow conduits.

  20. Reconstructing source terms from atmospheric concentration measurements: Optimality analysis of an inversion technique

    NASA Astrophysics Data System (ADS)

    Turbelin, Grégory; Singh, Sarvesh Kumar; Issartel, Jean-Pierre

    2014-12-01

    In the event of an accidental or intentional contaminant release in the atmosphere, it is imperative, for managing emergency response, to diagnose the release parameters of the source from measured data. Reconstruction of the source information exploiting measured data is called an inverse problem. To solve such a problem, several techniques are currently being developed. The first part of this paper provides a detailed description of one of them, known as the renormalization method. This technique, proposed by Issartel (2005), has been derived using an approach different from that of standard inversion methods and gives a linear solution to the continuous Source Term Estimation (STE) problem. In the second part of this paper, the discrete counterpart of this method is presented. By using matrix notation, common in data assimilation and suitable for numerical computing, it is shown that the discrete renormalized solution belongs to a family of well-known inverse solutions (minimum weighted norm solutions), which can be computed by using the concept of generalized inverse operator. It is shown that, when the weight matrix satisfies the renormalization condition, this operator satisfies the criteria used in geophysics to define good inverses. Notably, by means of the Model Resolution Matrix (MRM) formalism, we demonstrate that the renormalized solution fulfils optimal properties for the localization of single point sources. Throughout the article, the main concepts are illustrated with data from a wind tunnel experiment conducted at the Environmental Flow Research Centre at the University of Surrey, UK.
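
    The minimum weighted norm family of inverse solutions mentioned above can be sketched with a toy linear system (hypothetical matrices for illustration; the renormalization method additionally prescribes how the weight matrix is chosen so that the renormalization condition holds):

    ```python
    import numpy as np

    def min_weighted_norm_solution(H, mu, W):
        """Source estimate minimizing s^T W s subject to H s = mu.

        Computed with the weighted generalized inverse
        W^-1 H^T (H W^-1 H^T)^-1, the family of inverse solutions to
        which the discrete renormalized solution belongs.
        """
        Winv_Ht = np.linalg.solve(W, H.T)       # W^-1 H^T
        gram = H @ Winv_Ht                      # H W^-1 H^T
        return Winv_Ht @ np.linalg.solve(gram, mu)

    # Toy set-up: 3 detectors observing a 6-cell discretized source field.
    rng = np.random.default_rng(0)
    H = rng.random((3, 6))                      # source-receptor matrix (illustrative)
    s_true = np.array([0.0, 0.0, 2.0, 0.0, 0.0, 0.0])   # single point source
    mu = H @ s_true                             # synthetic measurements
    W = np.eye(6)                               # identity weight -> ordinary pseudoinverse
    s_hat = min_weighted_norm_solution(H, mu, W)
    ```

    With the identity weight this reduces to the Moore-Penrose solution; a non-trivial `W` redistributes the retrieved mass, which is the degree of freedom the renormalization condition fixes.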

  1. The application of inverse methods to spatially-distributed acoustic sources

    NASA Astrophysics Data System (ADS)

    Holland, K. R.; Nelson, P. A.

    2013-10-01

    Acoustic inverse methods, based on the output of an array of microphones, can be readily applied to the characterisation of acoustic sources that can be adequately modelled as a number of discrete monopoles. However, there are many situations, particularly in the fields of vibroacoustics and aeroacoustics, where the sources are distributed continuously in space over a finite area (or volume). This paper is concerned with the practical problem of applying inverse methods to such distributed source regions via the process of spatial sampling. The problem is first tackled using computer simulations of the errors associated with the application of spatial sampling to a wide range of source distributions. It is found that the spatial sampling criterion for minimising the errors in the radiated far-field reconstructed from the discretised source distributions is strongly dependent on acoustic wavelength but is only weakly dependent on the details of the source field itself. The results of the computer simulations are verified experimentally through the application of the inverse method to the sound field radiated by a ducted fan. The un-baffled fan source with the associated flow field is modelled as a set of equivalent monopole sources positioned on the baffled duct exit along with a matrix of complimentary non-flow Green functions. Successful application of the spatial sampling criterion involves careful frequency-dependent selection of source spacing, and results in the accurate reconstruction of the radiated sound field. Discussions of the conditioning of the Green function matrix which is inverted are included and it is shown that the spatial sampling criterion may be relaxed if conditioning techniques, such as regularisation, are applied to this matrix prior to inversion.
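
    One conditioning technique named above, regularisation of the Green function matrix prior to inversion, can be sketched as follows (Tikhonov form, with illustrative numbers rather than the paper's ducted-fan data):

    ```python
    import numpy as np

    def tikhonov_inverse(G, p, beta):
        """Regularized source strengths q for the model p = G q.

        Solves (G^H G + beta I) q = G^H p; beta > 0 trades data fit for
        stability when G is ill-conditioned (e.g. closely spaced
        equivalent monopoles at low frequency).
        """
        GhG = G.conj().T @ G
        n = GhG.shape[0]
        return np.linalg.solve(GhG + beta * np.eye(n), G.conj().T @ p)

    # Ill-conditioned example: two nearly collinear columns.
    G = np.array([[1.0, 1.0],
                  [1.0, 1.0001]])
    q_true = np.array([1.0, 1.0])
    p = G @ q_true
    q_reg = tikhonov_inverse(G, p, beta=1e-6)
    # The regularized estimate stays close to q_true and remains bounded
    # under small data errors, unlike the direct inverse.
    ```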

  2. Numerical Dissipation and Wrong Propagation Speed of Discontinuities for Stiff Source Terms

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Kotov, D. V.; Sjoegreen, B.

    2012-01-01

    In compressible turbulent combustion/nonequilibrium flows, the construction of numerical schemes for (a) stable and accurate simulation of turbulence with strong shocks and (b) obtaining the correct propagation speed of discontinuities for stiff reacting terms on coarse grids shares one important ingredient: minimization of numerical dissipation while maintaining numerical stability. Here, "coarse grids" means the standard mesh density required for accurate simulation of typical non-reacting flows. This dual requirement to achieve both numerical stability and accuracy with zero or minimal use of numerical dissipation is most often conflicting for existing schemes that were designed for non-reacting flows. The goal of this paper is to relate the numerical dissipation inherent in a selected set of high-order shock-capturing schemes to the onset of wrong propagation speeds of discontinuities as a function of the stiffness of the source term and the grid spacing.
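
    The wrong-speed phenomenon can be reproduced on a model problem often attributed to LeVeque and Yee: a linearly advected step with a stiff reaction term. The sketch below assumes first-order upwind and treats the infinitely stiff source as a projection onto its stable equilibria; it is not one of the schemes studied in the paper, only a demonstration of the mechanism.

    ```python
    import numpy as np

    def stiff_front(nx, steps, cfl=0.5, stiff=True):
        """First-order upwind for u_t + u_x = -mu*u*(u-1)*(u-1/2) on [0, 1].

        In the limit mu*dt -> infinity the reaction acts as a projection
        of u onto its stable equilibria 0 and 1.  Upwind dissipation
        creates intermediate values at the captured front, and the
        projection snaps them to the wrong state, so the front moves at a
        spurious, grid-dependent speed (here it stalls completely)
        instead of the exact speed 1.
        """
        dx = 1.0 / nx
        x = (np.arange(nx) + 0.5) * dx
        u = np.where(x < 0.3, 1.0, 0.0)             # step initial data
        for _ in range(steps):
            um1 = np.concatenate(([1.0], u[:-1]))   # inflow boundary u = 1
            u = u - cfl * (u - um1)                 # upwind advection step
            if stiff:
                u = np.where(u > 0.5, 1.0, 0.0)     # infinitely stiff reaction
        return x, u

    def front_position(x, u):
        """First grid point where u drops below 1/2."""
        return float(x[int(np.argmax(u < 0.5))])
    ```

    With `nx=50` and 20 steps, the stiff front stays frozen at its initial cell, while the non-stiff (pure advection) front travels at the correct unit speed despite the smearing.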

  3. PHENOstruct: Prediction of human phenotype ontology terms using heterogeneous data sources

    PubMed Central

    Kahanda, Indika; Funk, Christopher; Verspoor, Karin; Ben-Hur, Asa

    2015-01-01

    The human phenotype ontology (HPO) was recently developed as a standardized vocabulary for describing the phenotype abnormalities associated with human diseases. At present, only a small fraction of human protein coding genes have HPO annotations. But, researchers believe that a large portion of currently unannotated genes are related to disease phenotypes. Therefore, it is important to predict gene-HPO term associations using accurate computational methods. In this work we demonstrate the performance advantage of the structured SVM approach which was shown to be highly effective for Gene Ontology term prediction in comparison to several baseline methods. Furthermore, we highlight a collection of informative data sources suitable for the problem of predicting gene-HPO associations, including large scale literature mining data. PMID:26834980

  4. Inverse modelling-based reconstruction of the Chernobyl source term available for long-range transport

    NASA Astrophysics Data System (ADS)

    Davoine, X.; Bocquet, M.

    2007-03-01

    The reconstruction of the Chernobyl accident source term has been previously carried out using core inventories, but also back and forth confrontations between model simulations and activity concentration or deposited activity measurements. The approach presented in this paper is based on inverse modelling techniques. It relies both on the activity concentration measurements and on the adjoint of a chemistry-transport model. The location of the release is assumed to be known, and one is looking for a source term available for long-range transport that depends both on time and altitude. The method relies on the maximum entropy on the mean principle and exploits source positivity. The inversion results are mainly sensitive to two tuning parameters, a mass scale and the scale of the prior errors in the inversion. To overcome this hardship, we resort to the statistical L-curve method to estimate balanced values for these two parameters. Once this is done, many of the retrieved features of the source are robust within a reasonable range of parameter values. Our results favour the acknowledged three-step scenario, with a strong initial release (26 to 27 April), followed by a weak emission period of four days (28 April-1 May) and again a release, longer but less intense than the initial one (2 May-6 May). The retrieved quantities of iodine-131, caesium-134 and caesium-137 that have been released are in good agreement with the latest reported estimations. Yet, a stronger apportionment of the total released activity is ascribed to the first period and less to the third one. Finer chronological details are obtained, such as a sequence of eruptive episodes in the first two days, likely related to the modulation of the boundary layer diurnal cycle. In addition, the first two-day release surges are found to have effectively reached an altitude up to the top of the domain (5000 m).

  5. Inverse modelling-based reconstruction of the Chernobyl source term available for long-range transport

    NASA Astrophysics Data System (ADS)

    Davoine, X.; Bocquet, M.

    2007-01-01

    The reconstruction of the Chernobyl accident source term has been previously carried out using core inventories, but also back and forth confrontations between model simulations and activity concentration or deposited activity measurements. The approach presented in this paper is based on inverse modelling techniques. It relies both on the activity concentration measurements and on the adjoint of a chemistry-transport model. The location of the release is assumed to be known, and one is looking for a source term available for long-range transport that depends both on time and altitude. The method relies on the maximum entropy on the mean principle and exploits source positivity. The inversion results are mainly sensitive to two tuning parameters, a mass scale and the scale of the prior errors in the inversion. To overcome this hardship, we resort to the statistical L-curve method to estimate balanced values for these two parameters. Once this is done, many of the retrieved features of the source are robust within a reasonable range of parameter values. Our results favour the acknowledged three-step scenario, with a strong initial release (26 to 27 April), followed by a weak emission period of four days (28 April-1 May) and again a release, longer but less intense than the initial one (2 May-6 May). The retrieved quantities of iodine-131, caesium-134 and caesium-137 that have been released are in good agreement with the latest reported estimations. Yet, a stronger apportionment of the total released activity is ascribed to the first period and less to the third one. Finer chronological details are obtained, such as a sequence of eruptive episodes in the first two days, likely related to the modulation of the boundary layer diurnal cycle. In addition, the first two-day release surges are found to have effectively reached an altitude up to the top of the domain (5000 m).

  6. Source term analysis for a criticality accident in metal production line glove boxes

    SciTech Connect

    Nguyen, D.H.

    1991-06-01

    A recent development in criticality accident analysis is the deterministic calculations of the transport of fission products and actinides through the barriers of the physical facility. The knowledge of the redistribution of the materials inside the facility will help determine the reentry and clean-up procedures. The amount of radioactive materials released to the environment is the source term for dispersion calculations. We have used an integrated computer model to determine the release of fission products to the environment from a hypothetical criticality event in a glove box of the metal production line (MPL) at the Lawrence Livermore National Laboratory (LLNL).

  7. ORIGEN-ARP, A Fast and Easy-to-Use Source Term Generation Tool

    SciTech Connect

    Bowman, S.M.; Hermann, O.W.; Leal, L.C.; Parks, C.V.

    1999-10-17

    ORIGEN-ARP is a new SCALE analytical sequence for spent fuel characterization and source term generation that serves as a faster alternative to the SAS2H sequence by using the Automatic Rapid Processing (ARP) methodology for generating problem-dependent ORIGEN-S cross-section libraries. ORIGEN-ARP provides an easy-to-use menu-driven input processor. This new sequence is two orders of magnitude faster than SAS2H while conserving the rigor and accuracy of the SAS2H methodology. ORIGEN-ARP has been validated against pressurized water reactor (PWR) and boiling water reactor (BWR) spent fuel chemical assay data.

  8. EXPERIENCES FROM THE SOURCE-TERM ANALYSIS OF A LOW AND INTERMEDIATE LEVEL RADWASTE DISPOSAL FACILITY

    SciTech Connect

    Park,Jin Beak; Park, Joo-Wan; Lee, Eun-Young; Kim, Chang-Lak

    2003-02-27

    Enhancement of a computer code SAGE for evaluation of the Korean concept for a LILW waste disposal facility is discussed. Several features of source term analysis are embedded into SAGE to analyze: (1) effects of degradation mode of an engineered barrier, (2) effects of dispersion phenomena in the unsaturated zone and (3) effects of time dependent sorption coefficient in the unsaturated zone. IAEA's Vault Safety Case (VSC) approach is used to demonstrate the ability of this assessment code. Results of MASCOT are used for comparison purposes. These enhancements of the safety assessment code, SAGE, can contribute to realistic evaluation of the Korean concept of the LILW disposal project in the near future.

  9. Basic repository source term and data sheet report: Deaf Smith County

    SciTech Connect

    Not Available

    1987-01-01

    This report is one of a series describing studies undertaken in support of the US Department of Energy Civilian Radioactive Waste Management (CRWM) Program. This study contains the derivation of values for environmental source terms and resources consumed for a CRWM repository. Estimates include heavy construction equipment; support equipment; shaft-sinking equipment; transportation equipment; and consumption of fuel, water, electricity, and natural gas. Data are presented for construction and operation at an assumed site in Deaf Smith County, Texas. 2 refs., 6 tabs.

  10. Design parameters and source terms: Volume 1, Design parameters: Revision 0

    SciTech Connect

    Not Available

    1987-09-01

    The Design Parameters and Source Terms Document was prepared in accordance with DOE request and to provide data for the environmental impact study to be performed in the future for the Deaf Smith County, Texas site for a nuclear waste repository in salt. This document updates a previous unpublished report to the level of the Site Characterization Plan - Conceptual Design Report, SCP-CDR. The previous unpublished SCC Study identified the data needs for the Environmental Assessment effort for seven possible salt repository sites.

  11. Distribution and source of (129)I, (239)(,240)Pu, (137)Cs in the environment of Lithuania.

    PubMed

    Ežerinskis, Ž; Hou, X L; Druteikienė, R; Puzas, A; Šapolaitė, J; Gvozdaitė, R; Gudelis, A; Buivydas, Š; Remeikis, V

    2016-01-01

    Fifty-five soil samples collected in the Lithuanian territory in 2011 and 2012 were analyzed for (129)I, (137)Cs and Pu isotopes in order to investigate the level and distribution of artificial radioactivity in Lithuania. The activity and atomic ratios of (238)Pu/(239,240)Pu, (129)I/(127)I and (131)I/(137)Cs were used to identify the origin of these radionuclides. The (238)Pu/(239+240)Pu and (240)Pu/(239)Pu ratios in the soil samples analyzed varied in the ranges of 0.02-0.18 and 0.18-0.24, respectively, suggesting global fallout as the major source of Pu in Lithuania. The values of 10(-9) to 10(-6) for the (129)I/(127)I atomic ratio revealed that the source of (129)I in Lithuania is global fallout in most cases, though several sampling sites show a possible impact of reprocessing releases. The estimated (129)I/(131)I ratio in soil samples from the southern part of Lithuania shows negligible input from the Chernobyl fallout. No correlation of (137)Cs and Pu isotopes with (129)I was observed, indicating their different source terms. The results demonstrate an uneven distribution of these radionuclides in the Lithuanian territory and several sources of contamination, i.e. the Chernobyl accident, reprocessing releases and global fallout.
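
    The ratio-based attribution logic can be sketched as a simple classifier. The threshold values below are approximate figures from the broader literature, stated as assumptions for illustration rather than measurements from this study.

    ```python
    def classify_pu_source(r240_239):
        """Rough attribution of plutonium by its 240Pu/239Pu atom ratio.

        Thresholds are approximate literature values (assumptions):
        weapons-grade material lies well below ~0.1, global stratospheric
        fallout clusters near ~0.18, and reactor-grade/Chernobyl-type
        fuel lies distinctly higher.
        """
        if r240_239 < 0.10:
            return "weapons-grade"
        if r240_239 <= 0.24:
            return "global fallout"
        return "reactor-grade"

    # The 0.18-0.24 range reported for the Lithuanian soils maps to
    # global fallout under these illustrative thresholds.
    assert classify_pu_source(0.20) == "global fallout"
    ```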

  12. Interpreting the neutron's electric form factor: Rest frame charge distribution or Foldy term?

    SciTech Connect

    Nathan Isgur

    1998-12-01

    The neutron's electric form factor contains vital information on nucleon structure, but its interpretation within many models has been obscured by relativistic effects. The author demonstrates that, to leading order in the relativistic expansion of a constituent quark model, the Foldy term cancels exactly against a contribution to the Dirac form factor F{sub 1} to leave intact the naive interpretation of G{sup n}{sub E} as arising from the neutron's rest frame charge distribution.

  13. Using sediment particle size distribution to evaluate sediment sources in the Tobacco Creek Watershed

    NASA Astrophysics Data System (ADS)

    Liu, Cenwei; Lobb, David; Li, Sheng; Owens, Philip; Kuzyk, ZouZou

    2014-05-01

    Lake Winnipeg has recently drawn attention because of its deteriorating water quality, due in part to nutrient and sediment input from agricultural land. Improving water quality in Lake Winnipeg requires knowledge of the sediment sources within this ecosystem. A variety of environmental fingerprinting techniques have been successfully used in the assessment of sediment sources. In this study, we used particle size distribution to evaluate spatial and temporal variations in suspended sediment and potential sediment sources collected in the Tobacco Creek Watershed in Manitoba, Canada. The particle size distribution of suspended sediment can reflect the origin of the sediment and the processes acting during sediment transport, deposition and remobilization within the watershed. The objectives of this study were to quantify visually observed spatial and temporal changes in sediment particles, and to assess the sediment sources using a rapid and cost-effective fingerprinting technique based on particle size distribution. Suspended sediment was collected by sediment traps twice a year, during rainfall and snowmelt periods, from 2009 to 2012. The potential sediment sources included the top soil of cultivated fields, riparian areas and entire profiles from stream banks. Suspended sediment and soil samples were pre-wetted with RO water and passed through a 600 μm sieve before analysis. The particle size distribution of all samples was determined using a Malvern Mastersizer 2000S laser diffraction instrument with a measurement range up to 600 μm. Comparison of the results for different fractions of sediment showed a significant difference in the particle size distribution of suspended sediment between snowmelt and rainfall events. An important difference in particle size distribution was also found between the cultivated soil and forest soil. This difference can be explained by the different land uses, which provided a distinct fingerprint of the sediment. An overall improvement in water quality can be achieved by

  14. Regulatory Technology Development Plan Sodium Fast Reactor. Mechanistic Source Term Development

    SciTech Connect

    Grabaskas, David S.; Brunett, Acacia Joann; Bucknor, Matthew D.; Sienicki, James J.; Sofu, Tanju

    2015-02-28

    Construction and operation of a nuclear power installation in the U.S. requires licensing by the U.S. Nuclear Regulatory Commission (NRC). A vital part of this licensing process and integrated safety assessment entails the analysis of a source term (or source terms) that represents the release of radionuclides during normal operation and accident sequences. Historically, nuclear plant source term analyses have utilized deterministic, bounding assessments of the radionuclides released to the environment. Significant advancements in technical capabilities and the knowledge state have enabled the development of more realistic analyses such that a mechanistic source term (MST) assessment is now expected to be a requirement of advanced reactor licensing. This report focuses on the state of development of an MST for a sodium fast reactor (SFR), with the intent of aiding in the process of MST definition by qualitatively identifying and characterizing the major sources and transport processes of radionuclides. Due to common design characteristics among current U.S. SFR vendor designs, a metal-fuel, pool-type SFR has been selected as the reference design for this work, with all phenomenological discussions geared toward this specific reactor configuration. This work also aims to identify the key gaps and uncertainties in the current knowledge state that must be addressed for SFR MST development. It is anticipated that this knowledge state assessment can enable the coordination of technology and analysis tool development discussions such that any knowledge gaps may be addressed. Sources of radionuclides considered in this report include releases originating both in-vessel and ex-vessel, including in-core fuel, primary sodium and cover gas cleanup systems, and spent fuel movement and handling. Transport phenomena affecting various release groups are identified and qualitatively discussed, including fuel pin and primary coolant retention, and behavior in the cover gas and

  15. Multiple concurrent sources localization based on a two-node distributed acoustic sensor network

    NASA Astrophysics Data System (ADS)

    Xu, Jiaxin; Zhao, Zhao; Chen, Chunzeng; Xu, Zhiyong

    2017-01-01

    In this work, we propose a new approach to localize multiple concurrent sources using a distributed acoustic sensor network. Only two node-arrays are required in this sensor network, and each node-array consists of only two widely spaced sensors. Firstly, direction-of-arrivals (DOAs) of multiple sources are estimated at each node-array by utilizing a new pooled angular spectrum proposed in this paper, which implements spatial aliasing suppression effectively. Based on minimum variance distortionless response (MVDR) beamforming and the DOA estimates of the sources, the time-frequency spectra containing the corresponding energy distribution features associated with those sources are reconstructed in each node-array. Then, scale invariant feature transform (SIFT) is employed to solve the DOA association problem. Performance evaluation is conducted with field recordings, and experimental results demonstrate the effectiveness and feasibility of the proposed method.
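
    The aliasing-suppression idea can be sketched as follows. With two widely spaced sensors, each single-frequency MVDR spectrum has grating-lobe ambiguities, but the alias positions shift with frequency while the true bearing does not; pooling normalized spectra across frequency bins therefore isolates the true direction. The product pooling rule below is an assumption standing in for the paper's pooled angular spectrum, and all geometry and wavelengths are illustrative.

    ```python
    import numpy as np

    def steering(sin_theta, pos, lam):
        """Far-field steering vectors (sensors x angles) for 1-D sensor positions."""
        return np.exp(-2j * np.pi * pos[:, None] * sin_theta[None, :] / lam)

    def mvdr_spectrum(R, A):
        """MVDR (Capon) spatial spectrum 1 / (a^H R^-1 a) per steering column."""
        Rinv = np.linalg.inv(R)
        return 1.0 / np.real(np.einsum('ij,ik,kj->j', A.conj(), Rinv, A))

    pos = np.array([0.0, 1.0])              # two widely spaced sensors (m)
    grid = np.linspace(-1.0, 1.0, 2001)     # sin(theta) search grid
    pooled = np.ones_like(grid)
    for lam in (0.4, 0.5):                  # two frequency bins (wavelengths, m)
        a0 = steering(np.array([0.5]), pos, lam)[:, 0]   # source at sin(theta) = 0.5
        R = np.outer(a0, a0.conj()) + 0.01 * np.eye(2)   # covariance + loading
        spec = mvdr_spectrum(R, steering(grid, pos, lam))
        pooled *= spec / spec.max()         # aliases differ per frequency;
                                            # only the true bearing survives
    best = grid[np.argmax(pooled)]          # near 0.5, the true direction
    ```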

  16. Stability metrics for multi-source biomedical data based on simplicial projections from probability distribution distances.

    PubMed

    Sáez, Carlos; Robles, Montserrat; García-Gómez, Juan M

    2017-02-01

    Biomedical data may be composed of individuals generated from distinct, meaningful sources. Due to possible contextual biases in the processes that generate data, there may exist an undesirable and unexpected variability among the probability distribution functions (PDFs) of the source subsamples, which, when uncontrolled, may lead to inaccurate or unreproducible research results. Classical statistical methods may have difficulty uncovering such variability when dealing with multi-modal, multi-type, multivariate data. This work proposes two metrics for the analysis of stability among multiple data sources, robust to the aforementioned conditions, and defined in the context of data quality assessment. Specifically, a global probabilistic deviation metric and a source probabilistic outlyingness metric are proposed. The first provides a bounded degree of the global multi-source variability, designed as an estimator equivalent to the notion of normalized standard deviation of PDFs. The second provides a bounded degree of the dissimilarity of each source to a latent central distribution. The metrics are based on the projection of a simplex geometrical structure constructed from the Jensen-Shannon distances among the sources' PDFs. The metrics were evaluated and demonstrated correct behaviour on a simulated benchmark and on real multi-source biomedical data from the UCI Heart Disease data set. Biomedical data quality assessment based on the proposed stability metrics may improve the efficiency and effectiveness of biomedical data exploitation and research.
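
    The Jensen-Shannon distances underlying the metrics can be sketched for discrete source PMFs. The mean pairwise distance used as an outlyingness score here is a crude stand-in for the paper's simplex-projection construction, and the PMFs are hypothetical.

    ```python
    import numpy as np

    def js_distance(p, q):
        """Jensen-Shannon distance (sqrt of the base-2 JS divergence, in [0, 1])."""
        p = np.asarray(p, dtype=float)
        q = np.asarray(q, dtype=float)
        m = 0.5 * (p + q)

        def kl(a, b):
            mask = a > 0
            return float(np.sum(a[mask] * np.log2(a[mask] / b[mask])))

        return float(np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m)))

    # Three hypothetical source PMFs over the same three categories;
    # the third source is deliberately dissimilar to the other two.
    sources = [
        [0.7, 0.2, 0.1],
        [0.6, 0.3, 0.1],
        [0.1, 0.2, 0.7],
    ]
    n = len(sources)
    D = np.array([[js_distance(sources[i], sources[j]) for j in range(n)]
                  for i in range(n)])
    # Crude per-source outlyingness: mean JS distance to all sources.
    outlyingness = D.mean(axis=1)
    ```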

  17. Temperature distribution of air source heat pump barn with different air flow

    NASA Astrophysics Data System (ADS)

    He, X.; Li, J. C.; Zhao, G. Q.

    2016-08-01

    There are two airflow configurations in a tobacco curing barn: air-rising and air-falling. The two differ in structural layout and working principle, which affects the temperature field and the velocity distribution inside the barn. In order to compare the temperature and airflow distributions of the two configurations, and thereby obtain a barn whose temperature and velocity fields are more uniform, the air source heat pump tobacco barn was taken as the investigated subject, a relevant mathematical model was established, and the thermodynamics of the two types of curing barn was analysed and compared using Fluent. The results provide a reasonable basis for chamber arrangement and outlet selection in air source heat pump tobacco barns.

  18. Constraints on galactic distributions of gamma-ray burst sources from BATSE observations

    NASA Technical Reports Server (NTRS)

    Hakkila, Jon; Meegan, Charles A.; Pendleton, Geoffrey N.; Fishman, Gerald J.; Wilson, Robert B.; Paciesas, William S.; Brock, Martin N.; Horack, John M.

    1994-01-01

    The paradigm that gamma-ray bursts originate from Galactic sources is studied in detail using the angular and intensity distributions observed by the Burst and Transient Source Experiment (BATSE) on NASA's Compton Gamma Ray Observatory (CGRO). Monte Carlo models of gamma-ray burst spatial distributions and luminosity functions are used to simulate bursts, which are then folded through mathematical models of BATSE selection effects. The observed and computed angular intensity distributions are analyzed using modifications of standard statistical homogeneity and isotropy studies. Analysis of the BATSE angular and intensity distributions greatly constrains the origins and luminosities of burst sources. In particular, it appears that no single population of sources confined to a Galactic disk, halo, or localized spiral arm satisfactorily explains BATSE observations and that effects of the burst luminosity function are secondary when considering such models. One family of models that still satisfies BATSE observations comprises sources located in an extended spherical Galactic corona. Coronal models are limited to small ranges of burst luminosity and core radius, and the allowed parameter space for such models shrinks with each new burst BATSE observes. Multiple-population models of bursts are found to work only if (1) the primary population accounts for the general isotropy and inhomogeneity seen in the BATSE observations and (2) secondary populations either have characteristics similar to the primary population or contain numbers that are small relative to the primary population.

  19. Kappa Distribution Model for Hard X-Ray Coronal Sources of Solar Flares

    NASA Astrophysics Data System (ADS)

    Oka, M.; Ishikawa, S.; Saint-Hilaire, P.; Krucker, S.; Lin, R. P.

    2013-02-01

    Solar flares produce hard X-ray emission, the photon spectrum of which is often represented by a combination of thermal and power-law distributions. However, the estimates of the number and total energy of non-thermal electrons are sensitive to the determination of the power-law cutoff energy. Here, we revisit an "above-the-loop" coronal source observed by RHESSI on 2007 December 31 and show that a kappa distribution model can also be used to fit its spectrum. Because the kappa distribution has a Maxwellian-like core in addition to a high-energy power-law tail, the emission measure and temperature of the instantaneous electrons can be derived without assuming the cutoff energy. Moreover, the non-thermal fractions of electron number/energy densities can be uniquely estimated because they are functions of only the power-law index. With the kappa distribution model, we estimated that the total electron density of the coronal source region was ~2.4 × 1010 cm-3. We also estimated without assuming the source volume that a moderate fraction (~20%) of electrons in the source region was non-thermal and carried ~52% of the total electron energy. The temperature was 28 MK, and the power-law index δ of the electron density distribution was -4.3. These results are compared to the conventional power-law models with and without a thermal core component.
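
    The kappa distribution's appeal here is that, written in the dimensionless energy x = E/kT, the fraction of electrons above any fixed multiple of kT depends only on kappa, so no cutoff energy has to be assumed. The sketch below uses the standard normalized form for kappa > 3/2 (a generic textbook parameterization, not necessarily the exact convention of the paper).

    ```python
    import numpy as np
    from math import gamma, sqrt, pi

    def kappa_pdf(x, kappa):
        """Normalized kappa distribution of dimensionless energy x = E/kT.

        Maxwellian-like core with a power-law tail f ~ x^(-(kappa + 1/2));
        valid for kappa > 3/2, tending to a Maxwellian as kappa -> inf.
        """
        w = kappa - 1.5
        norm = 2.0 * gamma(kappa + 1.0) / (sqrt(pi) * gamma(kappa - 0.5) * w**1.5)
        return norm * np.sqrt(x) * (1.0 + x / w) ** (-(kappa + 1.0))

    def trapezoid(y, x):
        """Composite trapezoid rule (kept explicit for NumPy-version safety)."""
        return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

    def tail_fraction(kappa, x0=3.0, xmax=1.0e4, n=200001):
        """Fraction of electrons with E > x0*kT -- a function of kappa alone."""
        x = np.linspace(x0, xmax, n)
        return trapezoid(kappa_pdf(x, kappa), x)
    ```

    Smaller kappa gives a harder tail and hence a larger non-thermal fraction, which is the sense in which the non-thermal number/energy densities are set by the power-law index alone.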

  20. Characterization of a Distributed Plasma Ionization Source (DPIS) for Ion Mobility Spectrometry and Mass Spectrometry

    SciTech Connect

    Waltman, Melanie J.; Dwivedi, Prabha; Hill, Herbert; Blanchard, William C.; Ewing, Robert G.

    2008-10-15

    A recently developed atmospheric pressure ionization source, a distributed plasma ionization source (DPIS), was characterized and compared to commonly used atmospheric pressure ionization sources with both mass spectrometry and ion mobility spectrometry. The source consisted of two electrodes of different sizes separated by a thin dielectric. Application of a high RF voltage across the electrodes generated plasma in air yielding both positive and negative ions depending on the polarity of the applied potential. These reactant ions subsequently ionized the analyte vapors. The reactant ions generated were similar to those created in a conventional point-to-plane corona discharge ion source. The positive reactant ions generated by the source were mass identified as being solvated protons of general formula (H2O)nH+ with (H2O)2H+ as the most abundant reactant ion. The negative reactant ions produced were mass identified primarily as CO3-, NO3-, NO2-, O3- and O2- of various relative intensities. The predominant ion and relative ion ratios varied depending upon source construction and supporting gas flow rates. A few compounds including drugs, explosives and environmental pollutants were selected to evaluate the new ionization source. The source was operated continuously for several months and although deterioration was observed visually, the source continued to produce ions at a rate similar to that of the initial conditions. The results indicated that the DPIS may have a longer operating life than a conventional corona discharge.

  1. Characterization of a distributed plasma ionization source (DPIS) for ion mobility spectrometry and mass spectrometry.

    PubMed

    Waltman, Melanie J; Dwivedi, Prabha; Hill, Herbert H; Blanchard, William C; Ewing, Robert G

    2008-10-19

    A recently developed atmospheric pressure ionization source, a distributed plasma ionization source (DPIS), was characterized and compared to commonly used atmospheric pressure ionization sources with both mass spectrometry (MS) and ion mobility spectrometry (IMS). The source consisted of two electrodes of different sizes separated by a thin dielectric. Application of a high RF voltage across the electrodes generated plasma in air yielding both positive and negative ions. These reactant ions subsequently ionized the analyte vapors. The reactant ions generated were similar to those created in a conventional point-to-plane corona discharge ion source. The positive reactant ions generated by the source were mass identified as being solvated protons of general formula (H2O)nH+ with (H2O)2H+ as the most abundant reactant ion. The negative reactant ions produced were mass identified primarily as CO3-, NO3-, NO2-, O3- and O2- of various relative intensities. The predominant ion and relative ion ratios varied depending upon source construction and supporting gas flow rates. A few compounds including drugs, explosives and amines were selected to evaluate the new ionization source. The source was operated continuously for 3 months and although surface deterioration was observed visually, the source continued to produce ions at a rate similar to that of the initial conditions.

  2. The Analytical Repository Source-Term (AREST) model: Description and documentation

    SciTech Connect

    Liebetrau, A.M.; Apted, M.J.; Engel, D.W.; Altenhofen, M.K.; Strachan, D.M.; Reid, C.R.; Windisch, C.F.; Erikson, R.L.; Johnson, K.I.

    1987-10-01

    The geologic repository system consists of several components, one of which is the engineered barrier system. The engineered barrier system interfaces with natural barriers that constitute the setting of the repository. A model that simulates the releases from the engineered barrier system into the natural barriers of the geosphere, called a source-term model, is an important component of any model for assessing the overall performance of the geologic repository system. The Analytical Repository Source-Term (AREST) model being developed is one such model. This report describes the current state of development of the AREST model and the code in which the model is implemented. The AREST model consists of three component models and five process models that describe the post-emplacement environment of a waste package. All of these components are combined within a probabilistic framework. The component models are a waste package containment (WPC) model that simulates the corrosion and degradation processes which eventually result in waste package containment failure; a waste package release (WPR) model that calculates the rates of radionuclide release from the failed waste package; and an engineered system release (ESR) model that controls the flow of information among all AREST components and process models and combines release output from the WPR model with failure times from the WPC model to produce estimates of total release. 167 refs., 40 figs., 12 tabs.

  3. Toward a Mechanistic Source Term in Advanced Reactors: A Review of Past Incidents, Experiments, and Analyses

    SciTech Connect

    Bucknor, Matthew; Brunett, Acacia J.; Grabaskas, David

    2016-04-17

    In 2015, as part of a Regulatory Technology Development Plan (RTDP) effort for sodium-cooled fast reactors (SFRs), Argonne National Laboratory investigated the current state of knowledge of source term development for a metal-fueled, pool-type SFR. This paper provides a summary of past domestic metal-fueled SFR incidents and experiments and highlights information relevant to source term estimations that were gathered as part of the RTDP effort. The incidents described in this paper include fuel pin failures at the Sodium Reactor Experiment (SRE) facility in July of 1959, the Fermi I meltdown that occurred in October of 1966, and the repeated melting of a fuel element within an experimental capsule at the Experimental Breeder Reactor II (EBR-II) from November 1967 to May 1968. The experiments described in this paper include the Run-Beyond-Cladding-Breach tests that were performed at EBR-II in 1985 and a series of severe transient overpower tests conducted at the Transient Reactor Test Facility (TREAT) in the mid-1980s.

  4. Decontamination Techniques and Fixative Coatings Evaluated in the Building 235-F Legacy Source Term Removal Study

    SciTech Connect

    WAYNE, FARRELL

    2005-04-21

    Savannah River Site Building 235-F was being considered for future plutonium storage and stabilization missions but the Defense Nuclear Facilities Safety Board (DNFSB) noted that large quantities of Plutonium-238 left in cells and gloveboxes from previous operations posed a potential hazard to both the existing and future workforce. This material resulted from the manufacture of Pu-238 heat sources used by the NASA space program to generate electricity for deep space exploration satellites. A multi-disciplinary team was assembled to propose a cost-effective solution to mitigate this legacy source term which would facilitate future DOE plutonium storage activities in 235-F. One aspect of this study involved an evaluation of commercially available radiological decontamination techniques to remove the legacy Pu-238 and fixative coatings that could stabilize any residual Pu-238 following decontamination activities. Four chemical methods were identified as most likely to meet decontamination objectives for this project and are discussed in detail. Short- and long-term fixatives will be reviewed with particular attention to the potential radiation damage caused by Pu-238, which has a high specific activity and would be expected to cause significant radiation damage to any coating applied. Encapsulants that were considered to mitigate the legacy Pu-238 will also be reviewed.

  5. Accident source terms for pressurized water reactors with high-burnup cores calculated using MELCOR 1.8.5.

    SciTech Connect

    Gauntt, Randall O.; Powers, Dana Auburn; Ashbaugh, Scott G.; Leonard, Mark Thomas; Longmire, Pamela

    2010-04-01

    In this study, risk-significant pressurized-water reactor severe accident sequences are examined using MELCOR 1.8.5 to explore the range of fission product releases to the reactor containment building. Advances in the understanding of fission product release and transport behavior and severe accident progression are used to render best estimate analyses of selected accident sequences. Particular emphasis is placed on estimating the effects of high fuel burnup in contrast with low burnup on fission product releases to the containment. Supporting this emphasis, recent data available on fission product release from high-burnup (HBU) fuel from the French VERCOR project are used in this study. The results of these analyses are treated as samples from a population of accident sequences in order to employ approximate order statistics characterization of the results. These trends and tendencies are then compared to the NUREG-1465 alternative source term prescription used today for regulatory applications. In general, greater differences are observed between the state-of-the-art calculations for either HBU or low-burnup (LBU) fuel and the NUREG-1465 containment release fractions than exist between HBU and LBU release fractions. Current analyses suggest that retention of fission products within the vessel and the reactor coolant system (RCS) is greater than contemplated in the NUREG-1465 prescription, and that, overall, release fractions to the containment are therefore lower across the board in the present analyses than suggested in NUREG-1465. The decreased volatility of Cs2MoO4 compared to CsI or CsOH increases the predicted RCS retention of cesium, and as a result, cesium and iodine do not follow identical behaviors with respect to distribution among vessel, RCS, and containment. With respect to the regulatory alternative source term, greater differences are observed between the NUREG-1465 prescription and both HBU and LBU predictions than exist between HBU and LBU release fractions.

  6. The influence of cross-order terms in interface mobilities for structure-borne sound source characterization: Frame-like structures

    NASA Astrophysics Data System (ADS)

    Bonhoff, H. A.; Petersson, B. A. T.

    2009-01-01

    The applicability of interface mobilities for structure-borne sound source characterization critically depends on the admissibility of neglecting the cross-order terms. Following on from a previous study [H.A. Bonhoff, B.A.T. Petersson, Journal of Sound and Vibration 311 (2008) 473-484], the influence of the cross-order terms is investigated for frame-like structures under the assumption of a uniform force-order distribution. Considering the complex power, the cross-order terms are significant from intermediate frequencies on upwards. At lower frequencies, the cross-order terms can come into play for cases where the in-phase motion of the structure along the interface is constrained. The frequency characteristics of the influence of cross-order terms for the zero-order source descriptor and coupling function are similar to those of the complex power. For non-zero source descriptor and coupling function orders, the quality of the equal-order approximation mainly depends on the presence of low-order cross-order interface mobilities. By analyzing the symmetry of an interface system, it is possible to predict which cross-order terms are equal to zero. The equal-order approximation manages to capture the main trends and overall characteristics and offers an acceptable estimate for engineering practice.

  7. Regulatory Technology Development Plan - Sodium Fast Reactor. Mechanistic Source Term - Metal Fuel Radionuclide Release

    SciTech Connect

    Grabaskas, David; Bucknor, Matthew; Jerden, James

    2016-02-01

    The development of an accurate and defensible mechanistic source term will be vital for the future licensing efforts of metal fuel, pool-type sodium fast reactors. To assist in the creation of a comprehensive mechanistic source term, the current effort sought to estimate the release fraction of radionuclides from metal fuel pins to the primary sodium coolant during fuel pin failures at a variety of temperature conditions. These release estimates were based on the findings of an extensive literature search, which reviewed past experimentation and reactor fuel damage accidents. Data sources for each radionuclide of interest were reviewed to establish release fractions, along with possible release dependencies, and the corresponding uncertainty levels. Although the current knowledge base is substantial, and radionuclide release fractions were established for the elements deemed important for the determination of offsite consequences following a reactor accident, gaps were found pertaining to several radionuclides. First, there is uncertainty regarding the transport behavior of several radionuclides (iodine, barium, strontium, tellurium, and europium) during metal fuel irradiation to high burnup levels. The migration of these radionuclides within the fuel matrix and bond sodium region can greatly affect their release during pin failure incidents. Post-irradiation examination of existing high burnup metal fuel can likely resolve this knowledge gap. Second, data regarding the radionuclide release from molten high burnup metal fuel in sodium is sparse, which makes the assessment of radionuclide release from fuel melting accidents at high fuel burnup levels difficult. This gap could be addressed through fuel melting experimentation with samples from the existing high burnup metal fuel inventory.

  8. Source-term development for a contaminant plume for use by multimedia risk assessment models

    SciTech Connect

    Whelan, Gene; McDonald, John P.; Taira, Randal Y.; Gnanapragasam, Emmanuel K.; Yu, Charley; Lew, Christine S.; Mills, William B.

    1999-12-01

    Multimedia modelers from the U.S. Environmental Protection Agency (EPA) and the U.S. Department of Energy (DOE) are collaborating to conduct a comprehensive and quantitative benchmarking analysis of four intermedia models: DOE's Multimedia Environmental Pollutant Assessment System (MEPAS), EPA's MMSOILS, EPA's PRESTO, and DOE's RESidual RADioactivity (RESRAD). These models represent typical analytically, semi-analytically, and empirically based tools that are utilized in human risk and endangerment assessments for use at installations containing radioactive and/or hazardous contaminants. Although the benchmarking exercise traditionally emphasizes the application and comparison of these models, the establishment of a Conceptual Site Model (CSM) should be viewed with equal importance. This paper reviews an approach for developing a CSM of an existing, real-world, Sr-90 plume at DOE's Hanford installation in Richland, Washington, for use in a multimedia-based benchmarking exercise between MEPAS, MMSOILS, PRESTO, and RESRAD. In an unconventional move for analytically based modeling, the benchmarking exercise will begin with the plume as the source of contamination. The source and release mechanism are developed and described within the context of performing a preliminary risk assessment utilizing these analytical models. By beginning with the plume as the source term, this paper reviews a typical process and procedure an analyst would follow in developing a CSM for use in a preliminary assessment using this class of analytical tool.

  9. Long term change in relative contribution of various source regions on the surface ozone over Japan

    NASA Astrophysics Data System (ADS)

    Nagashima, T.; Sudo, K.; Akimoto, H.; Kurokawa, J.; Ohara, T.

    2011-12-01

    Although the concentrations of O3 precursors over Japan have been decreasing in recent decades, long-term monitoring data show that the surface concentration of O3 in Japan has increased since the mid-1980s until the present time. The trans-boundary transport of O3 from outside Japan has been pointed out and discussed as a cause of this recent increase in surface O3 over Japan. In particular, transport from East Asian countries, whose emissions of O3 precursors have been increasing greatly in recent years with their economic growth, is likely a major cause of the observed increase in O3 over Japan. However, the long-term change in other factors that also influence O3 in Japan, such as domestic emissions or the background O3, should also be evaluated. Here, we performed a long-term (1980-2005) simulation of the Source-Receptor (S-R) relationship for surface O3 in East Asia by utilizing the tagged tracer method with a global chemical transport model. During this period, emissions of O3 precursors in the model from East Asia, especially from China, more than doubled, while those from North America changed little and those from Europe decreased. The model reproduced a long-term increasing trend in surface O3 over Japan similar to the observed one. The long-term changes in the contributions from each source region showed that the largest contributor to the increasing trend of surface O3 in Japan is the increase of O3 created in the planetary boundary layer (PBL) of China, which accounts for 35% of the trend; O3 created in the PBLs of the Korean Peninsula and Japan accounts for 13% and 12%, respectively. O3 created in the free troposphere of China also increased, accounting for 4% of the trend. Therefore, almost 40% of the recent O3 increase in Japan can be attributed to the increase in O3 created over China.
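The tagged-tracer bookkeeping behind such source-receptor attributions can be illustrated with a toy steady-state model. All rates below are hypothetical, chosen only so the shares land on the percentages quoted in the abstract; the study itself used a full global chemical transport model:

```python
# Toy tagged-tracer attribution: O3 produced in each tagged region is carried
# as its own tracer; with a common loss timescale the tags sum exactly to the
# untagged burden, so each tag's share gives the regional contribution.
tau_days = 20.0  # assumed common lifetime against chemistry + deposition
production = {   # hypothetical production rates reaching the receptor, ppb/day
    "China_PBL": 1.75, "Korea_PBL": 0.65, "Japan_PBL": 0.60,
    "China_FT": 0.20, "background": 1.80,
}

burden = {region: p * tau_days for region, p in production.items()}  # steady state
total = sum(burden.values())
share = {region: 100.0 * b / total for region, b in burden.items()}

for region, s in sorted(share.items(), key=lambda kv: -kv[1]):
    print(f"{region:10s} {s:5.1f}%")
```

Summing the China PBL and free-troposphere tags reproduces the "almost 40%" attribution quoted above.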

  10. Distribution functions in plasmas generated by a volume source of fission fragments. [in nuclear pumped lasers

    NASA Technical Reports Server (NTRS)

    Deese, J. E.; Hassan, H. A.

    1979-01-01

    The role played by fission fragments and electron distribution functions in nuclear pumped lasers is considered and procedures for their calculations are outlined. The calculations are illustrated for a He-3/Xe mixture where fission is provided by the He-3(n,p)H-3 reaction. Because the dominant ion in the system depends on the Xe fraction, the distribution functions cannot be determined without the simultaneous consideration of a detailed kinetic model. As is the case for wall sources of fission fragments, the resulting plasmas are essentially thermal but the electron distribution functions are non-Maxwellian.

  11. Marine litter on Mediterranean shores: Analysis of composition, spatial distribution and sources in north-western Adriatic beaches.

    PubMed

    Munari, Cristina; Corbau, Corinne; Simeoni, Umberto; Mistri, Michele

    2016-03-01

    Marine litter is one descriptor in the EU Marine Strategy Framework Directive (MSFD). This study provides the first account of an MSFD indicator (Trends in the amount of litter deposited on coastlines) for the north-western Adriatic. Five beaches were sampled in 2015. Plastic dominated in terms of abundance, followed by paper and other groups. The average density was 0.2 litter items m(-2), but at one beach it rose to 0.57 items m(-2). The major categories were cigarette butts, unrecognizable plastic pieces, bottle caps, and others. The majority of marine litter came from land-based sources: shoreline and recreational activities, smoke-related activities and dumping. Sea-based sources contributed less. The abundance and distribution of litter seemed to be particularly influenced by beach users, reflecting inadequate disposal practices. The solution to these problems involves implementation and enforcement of local educational and management policies.

  12. Impact of the differential fluence distribution of brachytherapy sources on the spectroscopic dose-rate constant

    SciTech Connect

    Malin, Martha J.; Bartol, Laura J.; DeWerd, Larry A. E-mail: ladewerd@wisc.edu

    2015-05-15

    Purpose: To investigate why dose-rate constants for {sup 125}I and {sup 103}Pd seeds computed using the spectroscopic technique, Λ{sub spec}, differ from those computed with standard Monte Carlo (MC) techniques. A potential cause of these discrepancies is the spectroscopic technique’s use of approximations of the true fluence distribution leaving the source, φ{sub full}. In particular, the fluence distribution used in the spectroscopic technique, φ{sub spec}, approximates the spatial, angular, and energy distributions of φ{sub full}. This work quantified the extent to which each of these approximations affects the accuracy of Λ{sub spec}. Additionally, this study investigated how the simplified water-only model used in the spectroscopic technique impacts the accuracy of Λ{sub spec}. Methods: Dose-rate constants as described in the AAPM TG-43U1 report, Λ{sub full}, were computed with MC simulations using the full source geometry for each of 14 different {sup 125}I and 6 different {sup 103}Pd source models. In addition, the spectrum emitted along the perpendicular bisector of each source was simulated in vacuum using the full source model and used to compute Λ{sub spec}. Λ{sub spec} was compared to Λ{sub full} to verify the discrepancy reported by Rodriguez and Rogers. Using MC simulations, a phase space of the fluence leaving the encapsulation of each full source model was created. The spatial and angular distributions of φ{sub full} were extracted from the phase spaces and were qualitatively compared to those used by φ{sub spec}. Additionally, each phase space was modified to reflect one of the approximated distributions (spatial, angular, or energy) used by φ{sub spec}. The dose-rate constant resulting from using approximated distribution i, Λ{sub approx,i}, was computed using the modified phase space and compared to Λ{sub full}. For each source, this process was repeated for each approximation in order to determine which approximations used in φ{sub spec} affect the accuracy of Λ{sub spec}.
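For reference, the dose-rate constant Λ discussed in this record follows the AAPM TG-43 definition: the dose rate to water at the reference position, per unit air-kerma strength,

```latex
\Lambda = \frac{\dot{D}(r_0, \theta_0)}{S_K}, \qquad r_0 = 1\ \text{cm}, \quad \theta_0 = \pi/2 ,
```

where the spectroscopic variant Λ{sub spec} computes the numerator from the simplified fluence φ{sub spec} in a water-only geometry rather than from the full source model.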

  13. The Analytical Repository Source-Term (AREST) model: Analysis of spent fuel as a nuclear waste form

    SciTech Connect

    Apted, M.J.; Liebetrau, A.M.; Engel, D.W.

    1989-02-01

    The purpose of this report is to assess the performance of spent fuel as a final waste form. The release of radionuclides from spent nuclear fuel has been simulated for the three repository sites that were nominated for site characterization in accordance with the Nuclear Waste Policy Act of 1982. The simulation is based on waste package designs that were presented in the environmental assessments prepared for each site. Five distinct distributions for containment failure have been considered, and the release for nuclides from the UO/sub 2/ matrix, gap (including grain boundary), crud/surface layer, and cladding has been calculated with the Analytic Repository Source-Term (AREST) code. Separate scenarios involving incongruent and congruent release from the UO/sub 2/ matrix have also been examined using the AREST code. Congruent release is defined here as the condition in which the relative mass release rates of a given nuclide and uranium from the UO/sub 2/ matrix are equal to their mass ratios in the matrix. Incongruent release refers to release of a given nuclide from the UO/sub 2/ matrix controlled by its own solubility-limiting solid phase. Release of nuclides from other sources within the spent fuel (e.g., cladding, fuel/cladding gap) is evaluated separately from either incongruent or congruent matrix release. 51 refs., 200 figs., 9 tabs.

  14. Quantifying the Combined Effect of Radiation Therapy and Hyperthermia in Terms of Equivalent Dose Distributions

    SciTech Connect

    Kok, H. Petra; Crezee, Johannes; Franken, Nicolaas A.P.; Barendsen, Gerrit W.

    2014-03-01

    Purpose: To develop a method to quantify the therapeutic effect of radiosensitization by hyperthermia; to this end, a numerical method was proposed to convert radiation therapy dose distributions with hyperthermia to equivalent dose distributions without hyperthermia. Methods and Materials: Clinical intensity modulated radiation therapy plans were created for 15 prostate cancer cases. To simulate a clinically relevant heterogeneous temperature distribution, hyperthermia treatment planning was performed for heating with the AMC-8 system. The temperature-dependent parameters α (Gy{sup −1}) and β (Gy{sup −2}) of the linear–quadratic model for prostate cancer were estimated from the literature. No thermal enhancement was assumed for normal tissue. The intensity modulated radiation therapy plans and temperature distributions were exported to our in-house-developed radiation therapy treatment planning system, APlan, and equivalent dose distributions without hyperthermia were calculated voxel by voxel using the linear–quadratic model. Results: The planned average tumor temperatures T90, T50, and T10 in the planning target volume were 40.5°C, 41.6°C, and 42.4°C, respectively. The planned minimum, mean, and maximum radiation therapy doses were 62.9 Gy, 76.0 Gy, and 81.0 Gy, respectively. Adding hyperthermia yielded an equivalent dose distribution with an extended 95% isodose level. The equivalent minimum, mean, and maximum doses reflecting the radiosensitization by hyperthermia were 70.3 Gy, 86.3 Gy, and 93.6 Gy, respectively, for a linear increase of α with temperature. This can be considered similar to a dose escalation with a substantial increase in tumor control probability for high-risk prostate carcinoma. Conclusion: A model to quantify the effect of combined radiation therapy and hyperthermia in terms of equivalent dose distributions was presented. This model is particularly instructive to estimate the potential effects of interaction from different
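The voxel-by-voxel conversion described above reduces, under the linear-quadratic model, to solving a quadratic for the normothermic dose that produces the same biological effect. A sketch under the abstract's linear-α assumption (β unchanged by heat; the parameter values here are illustrative, not the paper's fitted values):

```python
import numpy as np

def equivalent_dose(d, alpha37, beta, alpha_t):
    """Dose without hyperthermia giving the same LQ effect as dose d delivered
    with hyperthermia-enhanced alpha_t (beta kept unchanged):
    solve beta*Deq^2 + alpha37*Deq = alpha_t*d + beta*d^2 for Deq >= 0."""
    effect = alpha_t * d + beta * d * d
    return (-alpha37 + np.sqrt(alpha37**2 + 4.0 * beta * effect)) / (2.0 * beta)

# Illustrative prostate-like parameters at normothermia.
alpha37, beta = 0.15, 0.05                # Gy^-1, Gy^-2
dose = np.array([2.0, 2.0, 2.0])          # physical dose per voxel, Gy
alpha_t = np.array([0.15, 0.18, 0.21])    # alpha raised where the voxel is hotter

print(equivalent_dose(dose, alpha37, beta, alpha_t))
```

An unheated voxel (alpha_t equal to alpha37) maps back to its physical dose, while heated voxels map to a higher equivalent dose, which is the dose-escalation effect the abstract quantifies.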

  15. Bacterial Composition in a Metropolitan Drinking Water Distribution System Utilizing Different Source Waters

    EPA Science Inventory

    The microbial community structure was investigated from bulk phase water samples of multiple collection sites from two service areas within the Cincinnati drinking water distribution system (DWDS). Each area is associated with a different primary source of water (i.e., groundwat...

  16. Potential breeding distributions of U.S. birds predicted with both short-term variability and long-term average climate data.

    PubMed

    Bateman, Brooke L; Pidgeon, Anna M; Radeloff, Volker C; Flather, Curtis H; VanDerWal, Jeremy; Akçakaya, H Resit; Thogmartin, Wayne E; Albright, Thomas P; Vavrus, Stephen J; Heglund, Patricia J

    2016-12-01

    Climate conditions, such as temperature or precipitation, averaged over several decades strongly affect species distributions, as evidenced by experimental results and a plethora of models demonstrating statistical relations between species occurrences and long-term climate averages. However, long-term averages can conceal climate changes that have occurred in recent decades and may not capture actual species occurrence well because the distributions of species, especially at the edges of their range, are typically dynamic and may respond strongly to short-term climate variability. Our goal here was to test whether bird occurrence models can be predicted by either covariates based on short-term climate variability or on long-term climate averages. We parameterized species distribution models (SDMs) based on either short-term variability or long-term average climate covariates for 320 bird species in the conterminous USA and tested whether any life-history trait-based guilds were particularly sensitive to short-term conditions. Models including short-term climate variability performed well based on their cross-validated area-under-the-curve AUC score (0.85), as did models based on long-term climate averages (0.84). Similarly, both models performed well compared to independent presence/absence data from the North American Breeding Bird Survey (independent AUC of 0.89 and 0.90, respectively). However, models based on short-term variability covariates more accurately classified true absences for most species (73% of true absences classified within the lowest quarter of environmental suitability vs. 68%). In addition, they have the advantage that they can reveal the dynamic relationship between species and their environment because they capture the spatial fluctuations of species potential breeding distributions. 
With this information, we can identify which species and guilds are sensitive to climate variability, identify sites of high conservation value where climate
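The two validation statistics quoted above, ROC AUC and the share of true absences falling in the lowest quarter of predicted suitability, can be computed directly. A sketch using the Mann-Whitney form of AUC; "lowest quarter" is read here as the lowest quartile of predicted scores, which is one plausible interpretation rather than the paper's stated definition:

```python
import numpy as np

def roc_auc(scores, labels):
    """ROC AUC via the Mann-Whitney statistic: probability that a random
    presence outscores a random absence, counting ties as one half."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (pos.size * neg.size)

def absences_in_lowest_quarter(scores, labels):
    """Fraction of true absences whose predicted suitability lies at or
    below the 25th percentile of all predicted scores."""
    q25 = np.quantile(scores, 0.25)
    return float((scores[labels == 0] <= q25).mean())

scores = np.array([0.9, 0.8, 0.2, 0.1])   # toy suitability predictions
labels = np.array([1, 1, 0, 0])           # 1 = presence, 0 = absence
print(roc_auc(scores, labels), absences_in_lowest_quarter(scores, labels))
```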

  17. A novel integrated approach for the hazardous radioactive dust source terms estimation in future nuclear fusion power plants.

    PubMed

    Poggi, L A; Malizia, A; Ciparisse, J F; Gaudio, P

    2016-10-01

    An open issue still under investigation by several international entities working in the safety and security field for the foreseen nuclear fusion reactors is the estimation of source terms that are a hazard for operators and the public, and for the machine itself in terms of efficiency and integrity in case of severe accident scenarios. Source term estimation is a crucial safety issue to be addressed in future reactor safety assessments, and the estimates available at this time are not sufficiently satisfactory. The lack of neutronic data, along with the insufficiently accurate methodologies used until now, calls for an integrated methodology for source term estimation that can provide predictions with adequate accuracy. This work proposes a complete methodology to estimate dust source terms starting from a broad information gathering. The wide number of parameters that can influence dust source term production is reduced with statistical tools using a combination of screening, sensitivity analysis, and uncertainty analysis. Finally, a preliminary and simplified methodology for predicting dust source term production in future devices is presented.

  18. Detection prospects for high energy neutrino sources from the anisotropic matter distribution in the local Universe

    NASA Astrophysics Data System (ADS)

    Mertsch, Philipp; Rameez, Mohamed; Tamborra, Irene

    2017-03-01

    Constraints on the number and luminosity of the sources of the cosmic neutrinos detected by IceCube have been set by targeted searches for point sources. We set complementary constraints by using the 2MASS Redshift Survey (2MRS) catalogue, which maps the matter distribution of the local Universe. Assuming that the distribution of the neutrino sources follows that of matter, we look for correlations between "warm" spots on the IceCube skymap and the 2MRS matter distribution. Through Monte Carlo simulations of the expected number of neutrino multiplets and careful modelling of the detector performance (including that of IceCube-Gen2), we demonstrate that sources with local density exceeding 10^-6 Mpc^-3 and neutrino luminosity Lν ≲ 10^42 erg s^-1 (10^41 erg s^-1) will be efficiently revealed by our method using IceCube (IceCube-Gen2). At low luminosities such as will be probed by IceCube-Gen2, the sensitivity of this analysis is superior to requiring statistically significant direct observation of a point source.
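The multiplet logic behind such constraints is Poisson counting: rarer but brighter sources are far more likely to yield two or more events from the same direction than a dense population of faint ones. A toy version (detector acceptance, backgrounds, and cosmology all omitted):

```python
import math

def p_multiplet(mu):
    """Poisson probability that one source yields >= 2 detected events,
    given a mean expectation of mu events from that source."""
    return 1.0 - math.exp(-mu) * (1.0 + mu)

def expected_multiplet_sources(n_sources, mu_per_source):
    """Expected number of 'warm spots' from n_sources independent sources."""
    return n_sources * p_multiplet(mu_per_source)

# A sparse, bright population versus a dense, faint one with the same total
# expected event count: the bright population produces far more multiplets.
print(expected_multiplet_sources(100, 1.0), expected_multiplet_sources(10_000, 0.01))
```

The absence of significant warm spots therefore disfavors sparse, high-luminosity populations, which is the direction of the density-luminosity bound quoted in the abstract.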

  19. Transient Flows and Stratification of an Enclosure Containing Both a Localised and Distributed Source of Buoyancy

    NASA Astrophysics Data System (ADS)

    Partridge, Jamie; Linden, Paul

    2014-11-01

    We examine the transient flow and stratification in a naturally ventilated enclosure containing both a localised and a distributed source of buoyancy. Both sources are located at the base of the enclosure to represent a building where a distributed heat flux from the floor, for example from a sun patch, competes with a localised heat source within the space. The steady conditions of the space are controlled purely by the geometry of the enclosure and the ratio Ψ of the distributed and localised buoyancy fluxes, and are independent of the order in which the buoyancy fluxes are introduced into the space. However, the order in which the sources are introduced, for example delaying the introduction of the localised source, alters the transients significantly. To investigate this problem, small-scale experiments were conducted and compared to a `perfect-mixing' model of the transients. How the stratification evolves in time, in particular how long it takes to reach steady conditions, is key to understanding what can be expected in real buildings. The transient evolution of the interior stratification is reported here and compared to the theoretical model.

  20. Short-Term Synaptic Depression Is Topographically Distributed in the Cochlear Nucleus of the Chicken

    PubMed Central

    Oline, Stefan N.

    2014-01-01

    In the auditory system, sounds are processed in parallel frequency-tuned circuits, beginning in the cochlea. Activity of auditory nerve fibers reflects this frequency-specific topographic pattern, known as tonotopy, and imparts frequency tuning onto their postsynaptic target neurons in the cochlear nucleus. In birds, cochlear nucleus magnocellularis (NM) neurons encode the temporal properties of acoustic stimuli by “locking” discharges to a particular phase of the input signal. Physiological specializations exist in gradients corresponding to the tonotopic axis in NM that reflect the characteristic frequency (CF) of their auditory nerve fiber inputs. One feature of NM neurons that has not been investigated across the tonotopic axis is short-term synaptic plasticity. NM offers a rather homogeneous population of neurons with a distinct topographical distribution of synaptic properties that is ideal for the investigation of specialized synaptic plasticity. Here we demonstrate for the first time that short-term synaptic depression (STD) is expressed topographically, where unitary high CF synapses are more robust with repeated stimulation. Correspondingly, high CF synapses drive spiking more reliably than their low CF counterparts. We show that postsynaptic AMPA receptor desensitization does not contribute to the observed difference in STD. Further, rate of recovery from depression, a presynaptic property, does not differ tonotopically. Rather, we show that another presynaptic feature, readily releasable pool (RRP) size, is tonotopically distributed and inversely correlated with vesicle release probability. Mathematical model results demonstrate that these properties of vesicle dynamics are sufficient to explain the observed tonotopic distribution of STD. PMID:24453322

  1. Short-term synaptic depression is topographically distributed in the cochlear nucleus of the chicken.

    PubMed

    Oline, Stefan N; Burger, R Michael

    2014-01-22

    In the auditory system, sounds are processed in parallel frequency-tuned circuits, beginning in the cochlea. Activity of auditory nerve fibers reflects this frequency-specific topographic pattern, known as tonotopy, and imparts frequency tuning onto their postsynaptic target neurons in the cochlear nucleus. In birds, cochlear nucleus magnocellularis (NM) neurons encode the temporal properties of acoustic stimuli by "locking" discharges to a particular phase of the input signal. Physiological specializations exist in gradients corresponding to the tonotopic axis in NM that reflect the characteristic frequency (CF) of their auditory nerve fiber inputs. One feature of NM neurons that has not been investigated across the tonotopic axis is short-term synaptic plasticity. NM offers a rather homogeneous population of neurons with a distinct topographical distribution of synaptic properties that is ideal for the investigation of specialized synaptic plasticity. Here we demonstrate for the first time that short-term synaptic depression (STD) is expressed topographically, where unitary high CF synapses are more robust with repeated stimulation. Correspondingly, high CF synapses drive spiking more reliably than their low CF counterparts. We show that postsynaptic AMPA receptor desensitization does not contribute to the observed difference in STD. Further, rate of recovery from depression, a presynaptic property, does not differ tonotopically. Rather, we show that another presynaptic feature, readily releasable pool (RRP) size, is tonotopically distributed and inversely correlated with vesicle release probability. Mathematical model results demonstrate that these properties of vesicle dynamics are sufficient to explain the observed tonotopic distribution of STD.

  2. Calculation of the neutron source distribution in the VENUS PWR Mockup Experiment

    SciTech Connect

    Williams, M.L.; Morakinyo, P.; Kam, F.B.K.; Leenders, L.; Minsart, G.; Fabry, A.

    1984-01-01

    The VENUS PWR Mockup Experiment is an important component of the Nuclear Regulatory Commission's program goal of benchmarking reactor pressure vessel (RPV) fluence calculations in order to determine the accuracy to which RPV fluence can be computed. Of particular concern in this experiment is the accuracy of the source calculation near the core-baffle interface, which is the important region for contributing to RPV fluence. Results indicate that the calculated neutron source distribution within the VENUS core agrees with the experimentally measured values with an average error of less than 3%, except at the baffle corner, where the error is about 6%. Better agreement with the measured fission distribution was obtained with a detailed space-dependent cross-section weighting procedure for thermal cross sections near the core-baffle interface region. The maximum error introduced into the predicted RPV fluence due to source errors should be on the order of 5%.

  3. Distributed source model for the full-wave electromagnetic simulation of nonlinear terahertz generation.

    PubMed

    Fumeaux, Christophe; Lin, Hungyen; Serita, Kazunori; Withayachumnankul, Withawat; Kaufmann, Thomas; Tonouchi, Masayoshi; Abbott, Derek

    2012-07-30

    The process of terahertz generation through optical rectification in a nonlinear crystal is modeled using discretized equivalent current sources. The equivalent terahertz sources are distributed in the active volume and computed based on a separately modeled near-infrared pump beam. This approach can be used to define an appropriate excitation for full-wave electromagnetic numerical simulations of the generated terahertz radiation. This enables predictive modeling of the near-field interactions of the terahertz beam with micro-structured samples, e.g. in a near-field time-resolved microscopy system. The distributed source model is described in detail, and an implementation in a particular full-wave simulation tool is presented. The numerical results are then validated through a series of measurements on square apertures. The general principle can be applied to other nonlinear processes with possible implementation in any full-wave numerical electromagnetic solver.

  4. ACT: a program for calculation of the changes in radiological source terms with time

    SciTech Connect

    Woolfolk, S.W.

    1985-08-12

    The program ACT calculates the source term activity from a set of initial activities as a function of discrete time steps. This calculation considers ingrowth of daughter products. ACT also calculates ''Probable Release'', which is the activity at a given time multiplied by both the fraction released and the probability of the release. The ''Probable Release'' calculation assumes that the fraction released is a single step function of time, and that the probability of release is zero for a limited period and can thereafter be described by the ''Wisconsin Regression'' function with time as the independent variable. Finally, the program calculates the time-integrated sum of the ''Probable Release'' for each isotope. This program is intended to support analysis of releases from radioactive waste disposal sites such as those required by 40 CFR 191.
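
    The decay-with-ingrowth bookkeeping that ACT performs can be illustrated with the two-member Bateman solution. This is a minimal sketch, not ACT's actual implementation, and it assumes the two half-lives are distinct.

```python
import math

def decay_chain_activity(a1_0, a2_0, t, half_life1, half_life2):
    """Activities of a two-member decay chain after time t, including
    ingrowth of the daughter from the parent (two-member Bateman
    solution).  Assumes distinct half-lives; time units must match."""
    l1 = math.log(2.0) / half_life1
    l2 = math.log(2.0) / half_life2
    a1 = a1_0 * math.exp(-l1 * t)
    # Daughter activity: its own decay plus ingrowth from the parent.
    ingrowth = a1_0 * (l2 / (l2 - l1)) * (math.exp(-l1 * t) - math.exp(-l2 * t))
    a2 = a2_0 * math.exp(-l2 * t) + ingrowth
    return a1, a2
```

    A ''Probable Release'' in ACT's sense would then multiply these activities at each time step by a release fraction and a release probability.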

  5. Update to the NARAC NNPP Non-Reactor Source Term Products

    SciTech Connect

    Vogt, P

    2009-06-29

    Recent updates to NARAC plots for NNPP require a modification to your iClient database. The steps you need to take are described below. Implementation of the non-reactor source terms in February 2009 included four plots, the traditional three instantaneous plots and a new Gamma Dose Rate plot: (1) Particulate Air Concentration, (2) Total Ground Deposition, (3) Whole Body Inhalation Dose Rate (CEDE Rate), and (4) Gamma Dose Rate. These plots were all initially implemented as instantaneous output generated 30 minutes after the release time. Recently, Bettis and NAVSEA have requested that the Whole Body CEDE Rate plot be changed to an integrated dose valid at two hours. This is consistent with the earlier change that converted the Thyroid Dose Rate plot to a 2-hour Integrated Thyroid Dose for the reactor and criticality accidents.

  6. The Annular Core Research Reactor (ACRR) postulated limiting event initial and building source terms

    SciTech Connect

    Restrepo, L F

    1992-08-01

    As part of the update of the Safety analysis Report (SAR) for the Annular Core Research Reactor (ACRR), operational limiting events under the category of inadvertent withdrawal of an experiment while at power or during a power pulse were determined to be the most limiting event(s) for this reactor. This report provides a summary of the assumptions, modeling, and results in evaluation of: Reactivity and thermal hydraulics analysis to determine the amount of fuel melt or fuel damage ratios; The reactor inventories following the limiting event; A literature review of post NUREG-0772 release fraction experiment results on severe fuel damages; Decontamination factors due to in-pool transport; and In-building transport modeling and building source term analysis.

  7. Resolution of USQ regarding source term in the 232-Z waste incinerator building

    SciTech Connect

    Westsik, G.

    1995-12-31

    The 232-Z waste incinerator at the Hanford plutonium finishing facility was used to incinerate plutonium-bearing combustible materials generated during normal plant operations. Nondestructive analysis performed after the incinerator ceased operations indicated high plutonium loading in exhaust ductwork near the incinerator glove box, while the incinerator was found to have only low quantities. Measurements following a campaign to remove some of the ductwork resulted in a markedly higher assay value for the incinerator glove box itself. Subsequent assays confirmed the most recent results and pointed to a potential further underestimation of the holdup, in part due to attenuation by fire brick which could not be seen and which had been thought to be present. Resolution of the raised concerns entailed forming a task team to perform further assay based on gamma and neutron NDA methods. This paper is a discussion of the unreviewed safety question regarding the source term in this area.

  8. Microbial characterization for the Source-Term Waste Test Program (STTP) at Los Alamos

    SciTech Connect

    Leonard, P.A.; Strietelmeier, B.A.; Pansoy-Hjelvik, M.E.; Villarreal, R.

    1999-04-01

    The effects of microbial activity on the performance of the proposed underground nuclear waste repository, the Waste Isolation Pilot Plant (WIPP) at Carlsbad, New Mexico are being studied at Los Alamos National Laboratory (LANL) as part of an ex situ large-scale experiment. Actual actinide-containing waste is being used to predict the effect of potential brine inundation in the repository in the distant future. The study conditions are meant to simulate what might exist should the underground repository be flooded hundreds of years after closure as a result of inadvertent drilling into brine pockets below the repository. The Department of Energy (DOE) selected LANL to conduct the Actinide Source-Term Waste Test Program (STTP) to confirm the predictive capability of computer models being developed at Sandia National Laboratory.

  9. Identification of an unknown source term in a vibrating cantilevered beam from final overdetermination

    NASA Astrophysics Data System (ADS)

    Hasanov, Alemdar

    2009-11-01

    Inverse problems of determining the unknown source term F(x, t) in the cantilevered beam equation u_tt = (EI(x)u_xx)_xx + F(x, t) from the measured data μ(x) := u(x, T) or ν(x) := u_t(x, T) at the final time t = T are considered. In view of the weak solution approach, explicit formulae for the Fréchet gradients of the cost functionals J_1(F) = ||u(x, T; F) - μ(x)||_0^2 and J_2(F) = ||u_t(x, T; F) - ν(x)||_0^2 are derived via the solutions of the corresponding adjoint (backward beam) problems. The Lipschitz continuity of the gradients is proved. Based on these results, a gradient-type monotone iteration process is constructed. Uniqueness and ill-conditioning of the considered inverse problems are analyzed.

  10. Distribution, sources, and potential toxicological significance of PAHs in drinking water sources within the Pearl River Delta.

    PubMed

    An, Taicheng; Qiao, Meng; Li, Guiying; Sun, Hongwei; Zeng, Xiangying; Fu, Jiamo

    2011-05-01

    The Pearl River Delta (PRD) region is one of the most densely populated areas in China. The safety of its drinking source water is essential to human health. Polycyclic aromatic hydrocarbons (PAHs) have attracted attention from the scientific community and the general public due to their toxicity and wide distribution in the global environment. In this work, PAH pollution levels in the drinking source water of nine main cities within the PRD were investigated. ∑15 PAHs concentrations during the wet season varied from 32.0 to 754.8 ng L^-1 in the dissolved phase, and from 13.4 to 3017.8 ng L^-1 in the particulate phase. During the dry season, dissolved PAHs ranged from 48.1 to 113.6 ng L^-1, and particulate PAHs from 8.6 to 69.6 ng L^-1. Overall, ∑15 PAHs concentrations were extremely high at the XC and ZHQ stations during the wet season in 2008 and 2009. In most sites, PAHs originated from mixed sources. Hazard ratios based on non-cancer and cancer risks were markedly higher at XC than at the other sites during the wet season, though all were much less than 1. Nevertheless, risks caused by the combined toxicity of ∑15 PAHs and other organics should be seriously considered. PAH toxic equivalent quantities ranged from 0.508 to 177.077 ng L^-1.
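
    Toxic equivalent quantities like those reported above are conventionally computed as a benzo[a]pyrene-referenced, TEF-weighted sum over the individual PAHs. A minimal sketch, using illustrative Nisbet & LaGoy-style TEF values and hypothetical concentrations:

```python
# Benzo[a]pyrene toxic-equivalency factors (TEFs); the values follow the
# widely used Nisbet & LaGoy scheme, and the sample concentrations below
# are hypothetical.
TEF = {
    "benzo[a]pyrene": 1.0,
    "benz[a]anthracene": 0.1,
    "chrysene": 0.01,
    "phenanthrene": 0.001,
}

def toxic_equivalents(conc_ng_per_l):
    """TEF-weighted sum over detected PAHs, in ng BaP-equivalents per litre."""
    return sum(c * TEF[name] for name, c in conc_ng_per_l.items())

sample = {"benzo[a]pyrene": 5.0, "benz[a]anthracene": 20.0, "chrysene": 100.0}
print(toxic_equivalents(sample))  # 5.0 + 2.0 + 1.0 = 8.0
```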

  11. The integration of renewable energy sources into electric power distribution systems. Volume 1: National assessment

    SciTech Connect

    Barnes, P.R.; Van Dyke, J.W.; Tesche, F.M.; Zaininger, H.W.

    1994-06-01

    Renewable energy technologies such as photovoltaic, solar thermal electricity, and wind turbine power are environmentally beneficial sources of electric power generation. The integration of renewable energy sources into electric power distribution systems can provide additional economic benefits because of a reduction in the losses associated with transmission and distribution lines. Benefits associated with the deferment of transmission and distribution investment may also be possible for cases where there is a high correlation between peak circuit load and renewable energy electric generation, such as photovoltaic systems in the Southwest. Case studies were conducted with actual power distribution system data for seven electric utilities with the participation of those utilities. Integrating renewable energy systems into electric power distribution systems increased the value of the benefits by about 20 to 55% above central station benefits in the national regional assessment. In the case studies presented in Vol. II, the range was larger: from a few percent to near 80% for a case where costly investments were deferred. In general, additional savings of at least 10 to 20% can be expected by integrating at the distribution level. Wind energy systems were found to be economical in good wind resource regions, whereas photovoltaic systems costs are presently a factor of 2.5 too expensive under the most favorable conditions.

  12. A simple method for estimating potential source term bypass fractions from confinement structures

    SciTech Connect

    Kalinich, D.A.; Paddleford, D.F.

    1997-07-01

    Confinement structures house many of the operating processes at the Savannah River Site (SRS). Under normal operating conditions, a confinement structure in conjunction with its associated ventilation systems prevents the release of radiological material to the environment. However, under potential accident conditions, the performance of the ventilation systems and integrity of the structure may be challenged. In order to calculate the radiological consequences associated with a potential accident (e.g. fires, explosions, spills, etc.), it is necessary to determine the fraction of the source term initially generated by the accident that escapes from the confinement structure to the environment. While it would be desirable to estimate the potential bypass fraction using sophisticated control-volume/flow path computer codes (e.g. CONTAIN, MELCOR, etc.) in order to take as much credit as possible for the mitigative effects of the confinement structure, there are many instances where using such codes is not tractable due to limits on the level-of-effort allotted to perform the analysis. Moreover, the current review environment, with its emphasis on deterministic/bounding rather than probabilistic/best-estimate analysis, discourages using analytical techniques that require the consideration of a large number of parameters. Discussed herein is a simplified control-volume/flow path approach for calculating the source term bypass fraction that is amenable to solution in a spreadsheet or with a commercial mathematical solver (e.g. MathCad or Mathematica). It considers the effects of wind and fire pressure gradients on the structure, ventilation system operation, and Halon discharges. Simple models are used to characterize the engineered and non-engineered flow paths. By making judicious choices for the limited set of problem parameters, the results from this approach can be defended as bounding and conservative.
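
    The control-volume/flow-path idea can be sketched with a single volume and an orifice-type leakage path: the bypass fraction is the unfiltered leakage flow divided by the total outflow. All parameters below are illustrative, and the model is far simpler than the one the report describes (no wind or fire pressure terms, no Halon discharge).

```python
import math

def path_flow(area_m2, dp_pa, rho=1.2, cd=0.6):
    """Orifice-type volumetric flow (m^3/s) through a leakage path
    driven by an overpressure dp_pa (Pa); rho is air density."""
    return cd * area_m2 * math.sqrt(2.0 * max(dp_pa, 0.0) / rho)

def bypass_fraction(leak_area_m2, vent_flow_m3s, dp_pa):
    """Fraction of the total confinement outflow that leaves through
    unfiltered leakage paths rather than the filtered ventilation
    exhaust, for a given internal overpressure."""
    q_leak = path_flow(leak_area_m2, dp_pa)
    return q_leak / (q_leak + vent_flow_m3s)
```

    The released source term would then be the airborne source term inside the structure multiplied by this bypass fraction.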

  13. Wearable Sensor Localization Considering Mixed Distributed Sources in Health Monitoring Systems

    PubMed Central

    Wan, Liangtian; Han, Guangjie; Wang, Hao; Shu, Lei; Feng, Nanxing; Peng, Bao

    2016-01-01

    In health monitoring systems, the base station (BS) and the wearable sensors communicate with each other to construct a virtual multiple input and multiple output (VMIMO) system. In real applications, the signal that the BS receives is a distributed source because of the scattering, reflection, diffraction and refraction in the propagation path. In this paper, a 2D direction-of-arrival (DOA) estimation algorithm for incoherently-distributed (ID) and coherently-distributed (CD) sources is proposed based on multiple VMIMO systems. ID and CD sources are separated through the second-order blind identification (SOBI) algorithm. The traditional estimating signal parameters via the rotational invariance technique (ESPRIT)-based algorithm is valid only for one-dimensional (1D) DOA estimation for the ID source. By constructing the signal subspace, two rotational invariance relationships are constructed. Then, we extend the ESPRIT to estimate 2D DOAs for ID sources. For DOA estimation of CD sources, two rotational invariance relationships are constructed based on the application of generalized steering vectors (GSVs). Then, the ESPRIT-based algorithm is used for estimating the eigenvalues of the two rotational invariance matrices, which contain the angular parameters. The expressions of azimuth and elevation for ID and CD sources have closed forms, which means that spectrum peak searching is avoided. Therefore, compared to the traditional 2D DOA estimation algorithms, the proposed algorithm imposes significantly low computational complexity. The intersecting point of two rays, which come from two different directions measured by two uniform rectangular arrays (URAs), can be regarded as the location of the biosensor (wearable sensor). Three BSs adopting the smart antenna (SA) technique cooperate with each other to locate the wearable sensors using the angulation positioning method. Simulation results demonstrate the effectiveness of the proposed algorithm. PMID:26985896

  14. Wearable Sensor Localization Considering Mixed Distributed Sources in Health Monitoring Systems.

    PubMed

    Wan, Liangtian; Han, Guangjie; Wang, Hao; Shu, Lei; Feng, Nanxing; Peng, Bao

    2016-03-12

    In health monitoring systems, the base station (BS) and the wearable sensors communicate with each other to construct a virtual multiple input and multiple output (VMIMO) system. In real applications, the signal that the BS receives is a distributed source because of the scattering, reflection, diffraction and refraction in the propagation path. In this paper, a 2D direction-of-arrival (DOA) estimation algorithm for incoherently-distributed (ID) and coherently-distributed (CD) sources is proposed based on multiple VMIMO systems. ID and CD sources are separated through the second-order blind identification (SOBI) algorithm. The traditional estimating signal parameters via the rotational invariance technique (ESPRIT)-based algorithm is valid only for one-dimensional (1D) DOA estimation for the ID source. By constructing the signal subspace, two rotational invariance relationships are constructed. Then, we extend the ESPRIT to estimate 2D DOAs for ID sources. For DOA estimation of CD sources, two rotational invariance relationships are constructed based on the application of generalized steering vectors (GSVs). Then, the ESPRIT-based algorithm is used for estimating the eigenvalues of the two rotational invariance matrices, which contain the angular parameters. The expressions of azimuth and elevation for ID and CD sources have closed forms, which means that spectrum peak searching is avoided. Therefore, compared to the traditional 2D DOA estimation algorithms, the proposed algorithm imposes significantly low computational complexity. The intersecting point of two rays, which come from two different directions measured by two uniform rectangular arrays (URAs), can be regarded as the location of the biosensor (wearable sensor). Three BSs adopting the smart antenna (SA) technique cooperate with each other to locate the wearable sensors using the angulation positioning method. Simulation results demonstrate the effectiveness of the proposed algorithm.
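
    The angulation positioning step, intersecting bearing measurements taken at known base stations, reduces to a 2x2 linear solve. Below is a minimal 2D two-ray sketch of that core idea (the paper uses 2D DOAs from three cooperating base stations; this simplification is ours):

```python
import math

def locate_by_angulation(p1, theta1, p2, theta2):
    """2D position fix from bearings theta1, theta2 (radians, measured
    from the x-axis) taken at known base stations p1 and p2.
    Solves p1 + t1*d1 = p2 + t2*d2 for the ray intersection."""
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    det = -d1[0] * d2[1] + d2[0] * d1[1]
    if abs(det) < 1e-12:
        raise ValueError("bearings are parallel: no unique fix")
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (-rx * d2[1] + d2[0] * ry) / det  # Cramer's rule for t1
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])
```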

  15. A simplified radionuclide source term for total-system performance assessment; Yucca Mountain Site Characterization Project

    SciTech Connect

    Wilson, M.L.

    1991-11-01

    A parametric model for releases of radionuclides from spent-nuclear-fuel containers in a waste repository is presented. The model is appropriate for use in preliminary total-system performance assessments of the potential repository site at Yucca Mountain, Nevada; for this reason it is simpler than the models used for detailed studies of waste-package performance. Terms are included for releases from the spent fuel pellets, from the pellet/cladding gap and the grain boundaries within the fuel pellets, from the cladding of the fuel rods, and from the radioactive fuel-assembly parts. Multiple barriers are considered, including the waste container, the fuel-rod cladding, the thermal "dry-out", and the waste form itself. The basic formulas for release from a single fuel rod or container are extended to formulas for expected releases for the whole repository by using analytic expressions for probability distributions of some important parameters. 39 refs., 4 figs., 4 tabs.

  16. Optimal long-term design, rehabilitation and upgrading of water distribution networks

    NASA Astrophysics Data System (ADS)

    Tanyimboh, Tiku; Kalungi, Paul

    2008-07-01

    Given a limited budget, the choice of the best water distribution network upgrading strategy is a complex optimization problem. A model for the optimal long-term design and upgrading of new and existing water distribution networks is presented. A key strength of the methodology is the use of maximum entropy flows, which reduces the size of the problem and enables the application of linear programming for pipe size optimization. It also ensures the reliability level is high. The capital and maintenance costs and hydraulic performance are considered simultaneously for a predefined design horizon. The timing of upgrading over the entire planning horizon is obtained by dynamic programming. The deterioration over time of the structural integrity and hydraulic capacity of every pipe are explicitly considered. The upgrading options considered include pipe paralleling and replacement. The effectiveness of the model is demonstrated using the water supply network of Wobulenzi town in Uganda.
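
    The upgrade-timing trade-off that the paper's dynamic program resolves can be sketched by simple enumeration: deferring an upgrade means paying a growing performance penalty as pipes deteriorate, while the capital cost shrinks in present-value terms the later it is incurred. The cost model below is hypothetical and far simpler than the paper's network-level formulation:

```python
def best_upgrade_year(horizon, capital, base_penalty, growth=1.15, rate=0.05):
    """Enumerate candidate upgrade years and return (year, pv_cost).
    An annual performance penalty, growing at `growth` per year as the
    network deteriorates, is paid until the upgrade; the capital cost
    is discounted at `rate` to the chosen year."""
    best_year, best_cost = None, float("inf")
    for y in range(horizon + 1):
        penalty = sum(base_penalty * growth ** t / (1.0 + rate) ** t
                      for t in range(y))
        cost = penalty + capital / (1.0 + rate) ** y
        if cost < best_cost:
            best_year, best_cost = y, cost
    return best_year, best_cost
```

    With a per-year state (upgraded or not) this enumeration generalises to the dynamic program over the whole planning horizon.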

  17. Improvement of capabilities of the Distributed Electrochemistry Modeling Tool for investigating SOFC long term performance

    SciTech Connect

    Gonzalez Galdamez, Rinaldo A.; Recknagle, Kurtis P.

    2012-04-30

    This report provides an overview of the work performed for Solid Oxide Fuel Cell (SOFC) modeling during the 2012 Winter/Spring Science Undergraduate Laboratory Internship at Pacific Northwest National Laboratory (PNNL). A brief introduction on the concept, operation basics and applications of fuel cells is given for the general audience. Further details are given regarding the modifications and improvements of the Distributed Electrochemistry (DEC) Modeling tool developed by PNNL engineers to model SOFC long term performance. Within this analysis, a literature review on anode degradation mechanisms is explained and future plans of implementing these into the DEC modeling tool are also proposed.

  18. Simulating the Heliosphere with Kinetic Hydrogen and Dynamic MHD Source Terms

    SciTech Connect

    Heerikhuisen, Jacob; Pogorelov, Nikolai; Zank, Gary

    2013-04-01

    The interaction between the ionized plasma of the solar wind (SW) emanating from the sun and the partially ionized plasma of the local interstellar medium (LISM) creates the heliosphere. The heliospheric interface is characterized by the tangential discontinuity known as the heliopause that separates the SW and LISM plasmas, and a termination shock on the SW side along with a possible bow shock on the LISM side. Neutral Hydrogen of interstellar origin plays a critical role in shaping the heliospheric interface, since it freely traverses the heliopause. Charge-exchange between H-atoms and plasma protons couples the ions and neutrals, but the mean free paths are large, resulting in non-equilibrated energetic ion and neutral components. In our model, source terms for the MHD equations are generated using a kinetic approach for hydrogen, and the key computational challenge is to resolve these sources with sufficient statistics. For steady-state simulations, statistics can accumulate over arbitrarily long time intervals. In this paper we discuss an approach for improving the statistics in time-dependent calculations, and present results from simulations of the heliosphere where the SW conditions at the inner boundary of the computation vary according to an idealized solar cycle.

  19. Simulating the Heliosphere with Kinetic Hydrogen and Dynamic MHD Source Terms

    DOE PAGES

    Heerikhuisen, Jacob; Pogorelov, Nikolai; Zank, Gary

    2013-04-01

    The interaction between the ionized plasma of the solar wind (SW) emanating from the sun and the partially ionized plasma of the local interstellar medium (LISM) creates the heliosphere. The heliospheric interface is characterized by the tangential discontinuity known as the heliopause that separates the SW and LISM plasmas, and a termination shock on the SW side along with a possible bow shock on the LISM side. Neutral Hydrogen of interstellar origin plays a critical role in shaping the heliospheric interface, since it freely traverses the heliopause. Charge-exchange between H-atoms and plasma protons couples the ions and neutrals, but the mean free paths are large, resulting in non-equilibrated energetic ion and neutral components. In our model, source terms for the MHD equations are generated using a kinetic approach for hydrogen, and the key computational challenge is to resolve these sources with sufficient statistics. For steady-state simulations, statistics can accumulate over arbitrarily long time intervals. In this paper we discuss an approach for improving the statistics in time-dependent calculations, and present results from simulations of the heliosphere where the SW conditions at the inner boundary of the computation vary according to an idealized solar cycle.
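
    One common way to improve statistics in a time-dependent calculation, offered here as an illustration rather than the authors' scheme, is to accumulate the noisy Monte Carlo source-term samples with an exponentially weighted running average, trading some time resolution for reduced noise:

```python
def smoothed_source(samples, window):
    """Exponentially weighted running average of noisy Monte Carlo
    source-term samples: retains roughly the last `window` samples'
    worth of statistics while still tracking slow, solar-cycle-like
    drifts in the boundary conditions."""
    alpha = 2.0 / (window + 1.0)
    out, acc = [], None
    for s in samples:
        acc = s if acc is None else alpha * s + (1.0 - alpha) * acc
        out.append(acc)
    return out
```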

  20. Source-term experiment STEP-3 simulating a PWR severe station blackout

    SciTech Connect

    Simms, R.; Baker, L. Jr.; Ritzman, R.L.

    1987-01-01

    For a severe pressurized water reactor accident that leads to a loss of feedwater to the steam generators, such as might occur in a station blackout, fission product decay heating causes a water boil-off. Without effective decay heat removal, the fuel elements will be uncovered. Eventually, steam will oxidize the overheated cladding. The noble gases and volatile fission products, such as cesium and iodine, that are major contributors to the radiological source term will be released from the damaged fuel shortly after cladding failure. The accident environment when these volatile fission products escape was simulated in STEP-3 using four fuel elements from the Belgonucleaire BR3 reactor. The primary objective was to examine the releases in samples collected as close to the test zone as possible. In this paper, an analysis of the temperatures and hydrogen generation is compared with the measurements. The analysis is needed to estimate releases and characterize conditions at the source for studies of fission product transport.

  1. Independent assessment of MELCOR as a severe accident thermal-hydraulic/source term analysis tool

    SciTech Connect

    Madni, I.K.

    1995-11-01

    MELCOR is a fully integrated computer code that models all phases of the progression of severe accidents in light water reactor (LWR) nuclear power plants and is being developed for the US Nuclear Regulatory Commission (NRC) by Sandia National Laboratories. Brookhaven National Laboratory (BNL) has a program with the NRC called MELCOR Verification, Benchmarking, and Applications, the aim of which is to provide independent assessment of MELCOR as a severe accident thermal-hydraulic/source term analysis tool. The scope of this program is to perform quality control verification on all released versions of MELCOR, to benchmark MELCOR against more mechanistic codes and experimental data from severe fuel damage tests, and to evaluate the ability of MELCOR to simulate long-term severe accident transients in commercial LWRs, by applying the code to model both boiling water reactors and pressurized water reactors. Under this program, BNL provided input to the NRC-sponsored MELCOR Peer Review and is currently contributing to the MELCOR Cooperative Assessment Program (MCAP). A summary of MELCOR assessment efforts at BNL and their contribution to NRC goals with respect to MELCOR is presented.

  2. Independent assessment of MELCOR as a severe accident thermal-hydraulic/source term analysis tool

    SciTech Connect

    Madni, I.K.; Eltawila, F.

    1994-01-01

    MELCOR is a fully integrated computer code that models all phases of the progression of severe accidents in light water reactor nuclear power plants, and is being developed for the US Nuclear Regulatory Commission (NRC) by Sandia National Laboratories (SNL). Brookhaven National Laboratory (BNL) has a program with the NRC called "MELCOR Verification, Benchmarking, and Applications," whose aim is to provide independent assessment of MELCOR as a severe accident thermal-hydraulic/source term analysis tool. The scope of this program is to perform quality control verification on all released versions of MELCOR, to benchmark MELCOR against more mechanistic codes and experimental data from severe fuel damage tests, and to evaluate the ability of MELCOR to simulate long-term severe accident transients in commercial LWRs, by applying the code to model both BWRs and PWRs. Under this program, BNL provided input to the NRC-sponsored MELCOR Peer Review, and is currently contributing to the MELCOR Cooperative Assessment Program (MCAP). This paper presents a summary of MELCOR assessment efforts at BNL and their contribution to NRC goals with respect to MELCOR.

  3. Shared and Distributed Memory Parallel Security Analysis of Large-Scale Source Code and Binary Applications

    SciTech Connect

    Quinlan, D; Barany, G; Panas, T

    2007-08-30

    Many forms of security analysis on large scale applications can be substantially automated but the size and complexity can exceed the time and memory available on conventional desktop computers. Most commercial tools are understandably focused on such conventional desktop resources. This paper presents research work on the parallelization of security analysis of both source code and binaries within our Compass tool, which is implemented using the ROSE source-to-source open compiler infrastructure. We have focused on both shared and distributed memory parallelization of the evaluation of rules implemented as checkers for a wide range of secure programming rules, applicable to desktop machines, networks of workstations and dedicated clusters. While Compass as a tool focuses on source code analysis and reports violations of an extensible set of rules, the binary analysis work uses the exact same infrastructure but is less well developed into an equivalent final tool.

  4. Evaluation of the beta energy spectrum from a distributed uranium mill tailings source

    SciTech Connect

    Reif, R.H.; Martz, D.E.; Carlson, D.S.; Turner, J.B. )

    1993-10-01

    The beta energy spectra from uranium mill tailings, 90Sr with different absorber thicknesses, and a uranium metal slab were measured and compared to select an appropriate beta source for calibrating a personal dosimeter to measure shallow dose equivalent when exposed to uranium mill tailings. The measured beta energy spectrum from the 90Sr source, with a 111 mg cm-2 cover thickness, was selected as a possible calibration source for a personnel dosimeter. The dose equivalent rate to the skin at 1 cm from a distributed tailings source of infinite thickness, with a 226Ra activity of 56 Bq g-1 (1.5 x 10^3 pCi g-1), was measured to be 0.024 mSv h-1 (2.4 mrem h-1).

  6. Multi-Sensor Integration to Map Odor Distribution for the Detection of Chemical Sources

    PubMed Central

    Gao, Xiang; Acar, Levent

    2016-01-01

    This paper addresses the problem of mapping odor distribution derived from a chemical source using multi-sensor integration and reasoning system design. Odor localization is the problem of finding the source of an odor or other volatile chemical. Most localization methods require a mobile vehicle to follow an odor plume along its entire path, which is time consuming and may be especially difficult in a cluttered environment. To address both of these challenges, this paper proposes a novel algorithm that combines data from odor and anemometer sensors, and fuses sensor data collected at different positions. Initially, a multi-sensor integration method, together with the path of airflow, was used to map the pattern of odor particle movement. Then, more sensors are introduced at specific regions to determine the probable location of the odor source. Finally, the results of an odor source location simulation and a real experiment are presented. PMID:27384568

  7. Statistical Measurement of the Gamma-Ray Source-count Distribution as a Function of Energy

    NASA Astrophysics Data System (ADS)

    Zechlin, Hannes-S.; Cuoco, Alessandro; Donato, Fiorenza; Fornengo, Nicolao; Regis, Marco

    2016-08-01

    Statistical properties of photon count maps have recently been proven as a new tool to study the composition of the gamma-ray sky with high precision. We employ the 1-point probability distribution function of six years of Fermi-LAT data to measure the source-count distribution dN/dS and the diffuse components of the high-latitude gamma-ray sky as a function of energy. To that aim, we analyze the gamma-ray emission in five adjacent energy bands between 1 and 171 GeV. It is demonstrated that the source-count distribution as a function of flux is compatible with a broken power law up to energies of ~50 GeV. The index below the break is between 1.95 and 2.0. For higher energies, a simple power law fits the data, with an index of 2.2 (+0.7/-0.3) in the energy band between 50 and 171 GeV. Upper limits on further possible breaks as well as the angular power of unresolved sources are derived. We find that point-source populations probed by this method can explain 83 (+7/-13)% of the extragalactic gamma-ray background between 1.04 and 1.99 GeV, and 81 (+52/-19)% between 50 and 171 GeV. The method has excellent capabilities for constraining the gamma-ray luminosity function and the spectra of unresolved blazars.
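The broken power law for dN/dS described above can be written down directly. A minimal sketch follows; the normalization and break flux are illustrative placeholders, and the indices only loosely echo the values quoted in the abstract:

```python
import numpy as np

def dnds_broken(S, A, Sb, n1, n2):
    """Broken power-law differential source-count distribution dN/dS.

    A  : normalization at the break flux Sb
    n1 : (positive) index below the break, dN/dS ~ S**(-n1) for S < Sb
    n2 : index at and above the break
    """
    S = np.asarray(S, dtype=float)
    return np.where(S < Sb,
                    A * (S / Sb) ** (-n1),
                    A * (S / Sb) ** (-n2))

# Hypothetical parameter values, not the paper's fitted ones:
A, Sb, n1, n2 = 1.0, 1e-8, 1.97, 2.2
```

By construction the two branches join continuously at the break flux Sb, which is what makes the fitted indices directly comparable across the break.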

  8. Distribution and sources of carbon, nitrogen, phosphorus and biogenic silica in the sediments of Chilika lagoon

    NASA Astrophysics Data System (ADS)

    Nazneen, Sadaf; Raju, N. Janardhana

    2017-02-01

    The present study investigated the spatial and vertical distribution of organic carbon (OC), total nitrogen (TN), total phosphorus (TP) and biogenic silica (BSi) in the sedimentary environments of Asia's largest brackish water lagoon. Surface and core sediments were collected from various locations of the Chilika lagoon and were analysed for grain-size distribution and major elements in order to understand their distribution and sources. Sand is the dominant fraction followed by silt + clay. Primary production within the lagoon, terrestrial input from river discharge and anthropogenic activities in the vicinity of the lagoon control the distribution of OC, TN, TP and BSi in the surface as well as in the core sediments. Low C/N ratios in the surface sediments (3.49-3.41) and cores (4-11.86) suggest that phytoplankton and macroalgae may be major contributors of organic matter (OM) in the lagoon. BSi is mainly associated with the mud fraction. Core C5 from Balugaon region shows the highest concentration of OC ranging from 0.58-2.34%, especially in the upper 30 cm, due to direct discharge of large amounts of untreated sewage into the lagoon. The study highlights that Chilika is a dynamic ecosystem with a large contribution of OM by autochthonous sources with some input from anthropogenic sources as well.

  9. Dipole versus distributed EEG source localization for single versus averaged spikes in focal epilepsy.

    PubMed

    Plummer, C; Wagner, M; Fuchs, M; Harvey, A S; Cook, M J

    2010-06-01

    The aim of this study is to characterize and compare dipole and distributed EEG source localization (ESL) of interictal epileptiform discharges (IEDs) in focal epilepsy. Single and averaged scalp IEDs from eight patients (four with benign focal epilepsy of childhood with centrotemporal spikes, BFEC, and four with mesial temporal lobe epilepsy, MTLE) underwent independent component analysis (ICA) from IED onset to peak. The boundary element method forward model was applied with one of four inverse models: two dipolar (moving regularized and rotating nonregularized dipoles) and two distributed (standardized low-resolution electromagnetic tomography with rotating cortical sources or with fixed extended sources). Solutions were studied at IED onset, midupswing, and peak; at ESL strength maxima; and at ESL residual deviation minima (best fit). From 11,040 ESL parameter points and 960 ESL maps, best-fit dipole and distributed solutions fell at the IED midupswing in BFEC and MTLE, when the dominant ICA component typically peaked, localizing to the lower Rolandic sulcus in BFEC and to basolateral or anterior temporal cortex in MTLE. Single-to-averaged ESL variability was high in MTLE. Dipole and distributed ESL are complementary; best-fit solutions for both occupy the IED midupswing, not the IED peak. ICA, a "blind" statistical operation, aids clinical interpretation of ESL fit quality. Single-to-averaged IED localization discordance can be high, a problem warranting further scrutiny if ESL is to earn a place in routine epilepsy care.

  10. Long-term accounting for raindrop size distribution variations improves quantitative precipitation estimation by weather radar

    NASA Astrophysics Data System (ADS)

    Hazenberg, Pieter; Leijnse, Hidde; Uijlenhoet, Remko

    2016-04-01

    Weather radars provide information on the characteristics of precipitation at high spatial and temporal resolution. Unfortunately, rainfall measurements by radar are affected by multiple error sources. The current study is focused on the impact of variations of the raindrop size distribution (DSD) on radar rainfall estimates. Such variations lead to errors in the estimated rainfall intensity (R) and specific attenuation (k) when using fixed relations for the conversion of the observed reflectivity (Z) into R and k. For non-polarimetric radar, this error source has received relatively little attention compared to other error sources. We propose to link the parameters of the Z-R and Z-k relations directly to those of the normalized gamma DSD. The benefit of this procedure is that it reduces the number of unknown parameters. In this work, the DSD parameters are obtained using 1) surface observations from a Parsivel and Thies LPM disdrometer, and 2) a Monte Carlo optimization procedure using surface rain gauge observations. The impact of both approaches for a given precipitation type is assessed for 45 days of summertime precipitation observed in The Netherlands. Accounting for DSD variations using disdrometer observations leads to an improved radar QPE product as compared to applying climatological Z-R and Z-k relations. This especially holds for situations where widespread stratiform precipitation is observed. The best results are obtained when the DSD parameters are optimized. However, the optimized Z-R and Z-k relations show an unrealistic variability that arises from uncorrected error sources. As such, the optimization approach does not result in a realistic DSD shape but instead also accounts for uncorrected error sources, resulting in the best radar rainfall adjustment. Therefore, to further improve the quality of precipitation estimates by weather radar, use should be made either of polarimetric radar or of an extended network of disdrometers.
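The fixed Z-R conversion that the study improves upon can be sketched as a simple power-law inversion. The default coefficients below are the classic Marshall-Palmer climatological values (a = 200, b = 1.6), not the DSD-derived coefficients from the paper:

```python
import numpy as np

def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
    """Estimate rain rate R (mm/h) from radar reflectivity in dBZ by
    inverting a fixed power-law relation Z = a * R**b.

    The climatological defaults a=200, b=1.6 are the Marshall-Palmer
    values; linking a and b to the normalized gamma DSD parameters,
    as the study proposes, would replace these constants."""
    z_linear = 10.0 ** (np.asarray(dbz, dtype=float) / 10.0)  # mm^6 m^-3
    return (z_linear / a) ** (1.0 / b)
```

For example, a reflectivity of 10*log10(200) dBZ (about 23 dBZ) corresponds to 1 mm/h under these coefficients; a DSD-aware scheme would vary a and b per precipitation type instead.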

  11. Analysis of the relationship between landslides size distribution and earthquake source area

    NASA Astrophysics Data System (ADS)

    Valagussa, Andrea; Crosta, Giovanni B.; Frattini, Paolo; Xu, Chong

    2014-05-01

    The spatial distribution of earthquake-induced landslides around the seismogenic source has been analysed to better understand the triggering of landslides in seismic areas and to forecast the maximum distance at which an earthquake, with a certain magnitude, can induce landslides (e.g., Keefer, 1984). However, when applying such approaches to old earthquakes (e.g., the 1929 Buller and 1968 Inangahua earthquakes, New Zealand; Parker, 2013; the 1976 Friuli earthquake, Italy) one should be concerned about the undersampling of smaller landslides, which can be erased by erosion and landscape evolution. For this reason, it is important to characterize carefully the relationship between landslide area and number with distance from the source, but also the size distribution of landslides as a function of distance from the source. In this paper, we analyse the 2008 Wenchuan earthquake landslide inventory (Xu et al., 2013). The earthquake triggered more than 197,000 landslides of different types, including rock avalanches, rockfalls, translational and rotational slides, lateral spreads and debris flows. First, we calculated the landslide intensity (number of landslides per unit area) and spatial density (landslide area per unit area) as a function of distance from the source area of the earthquake. Then, we developed magnitude frequency curves (MFC) for different distances from the source area. Comparing these curves, we can describe the relation between distance and the frequency density of landslides in seismic areas. Keefer D K (1984) Landslides caused by earthquakes. Geological Society of America Bulletin, 95(4), 406-421. Parker R N, (2013) Hillslope memory and spatial and temporal distributions of earthquake-induced landslides, Durham theses, Durham University. Xu, C., Xu, X., Yao, X., & Dai, F. (2013). Three (nearly) complete inventories of landslides triggered by the May 12, 2008 Wenchuan Mw 7.9 earthquake of China and their spatial distribution statistical analysis

  12. CCN frequency distributions and aerosol chemical composition from long-term observations at European ACTRIS supersites

    NASA Astrophysics Data System (ADS)

    Decesari, Stefano; Rinaldi, Matteo; Schmale, Julia Yvonne; Gysel, Martin; Fröhlich, Roman; Poulain, Laurent; Henning, Silvia; Stratmann, Frank; Facchini, Maria Cristina

    2016-04-01

    Cloud droplet number concentration is regulated by the availability of aerosol acting as cloud condensation nuclei (CCN). Predicting the air concentrations of CCN involves knowledge of all physical and chemical processes that contribute to shape the particle size distribution and determine aerosol hygroscopicity. The relevance of specific atmospheric processes (e.g., nucleation, coagulation, condensation of secondary organic and inorganic aerosol, etc.) is time- and site-dependent; therefore, the availability of long-term, time-resolved aerosol observations at locations representative of diverse environments is strategic for the validation of state-of-the-art chemical transport models suited to predict CCN concentrations. We focused on long-term (year-long) datasets of CCN and of aerosol composition data, including black carbon and inorganic as well as organic compounds from the Aerosol Chemical Speciation Monitor (ACSM), at selected ACTRIS supersites (http://www.actris.eu/). We discuss here the joint frequency distribution of CCN levels and of aerosol chemical component concentrations for two stations: an alpine site (Jungfraujoch, CH) and a central European rural site (Melpitz, DE). The CCN frequency distributions at Jungfraujoch are broad and generally correlated with the distributions of the concentrations of aerosol chemical components (e.g., high CCN concentrations are most frequently found for high organic matter or black carbon concentrations, and vice versa), which can be explained as an effect of the strong seasonality in the aerosol characteristics at the mountain site. The CCN frequency distributions in Melpitz show a much weaker overlap with the distributions of BC concentrations or other chemical compounds. However, especially at high CCN concentration levels, a statistical correlation with organic matter (OM) concentration can be observed. For instance, the number of CCN (with particle diameter between 20 and 250 nm) at a supersaturation of 0.7% is

  13. Free-space quantum key distribution with a high generation rate potassium titanyl phosphate waveguide photon-pair source

    NASA Astrophysics Data System (ADS)

    Wilson, Jeffrey D.; Chaffee, Dalton W.; Wilson, Nathaniel C.; Lekki, John D.; Tokars, Roger P.; Pouch, John J.; Roberts, Tony D.; Battle, Philip R.; Floyd, Bertram; Lind, Alexander J.; Cavin, John D.; Helmick, Spencer R.

    2016-09-01

    A high generation rate photon-pair source using a dual element periodically-poled potassium titanyl phosphate (PP KTP) waveguide is described. The fully integrated photon-pair source consists of a 1064-nm pump diode laser, fiber-coupled to a dual element waveguide within which a pair of 1064-nm photons are up-converted to a single 532-nm photon in the first stage. In the second stage, the 532-nm photon is down-converted to an entangled photon-pair at 800 nm and 1600 nm which are fiber-coupled at the waveguide output. The photon-pair source features a high pair generation rate, a compact power-efficient package, and continuous wave (CW) or pulsed operation. This is a significant step towards the long term goal of developing sources for high-rate Quantum Key Distribution (QKD) to enable Earth-space secure communications. Characterization and test results are presented. Details and preliminary results of a laboratory free space QKD experiment with the B92 protocol are also presented.

  14. Free-Space Quantum Key Distribution with a High Generation Rate Potassium Titanyl Phosphate Waveguide Photon-Pair Source

    NASA Technical Reports Server (NTRS)

    Wilson, Jeffrey D.; Chaffee, Dalton W.; Wilson, Nathaniel C.; Lekki, John D.; Tokars, Roger P.; Pouch, John J.; Roberts, Tony D.; Battle, Philip; Floyd, Bertram M.; Lind, Alexander J.; Cavin, John D.; Helmick, Spencer R.

    2016-01-01

    A high generation rate photon-pair source using a dual element periodically-poled potassium titanyl phosphate (PP KTP) waveguide is described. The fully integrated photon-pair source consists of a 1064-nanometer pump diode laser, fiber-coupled to a dual element waveguide within which a pair of 1064-nanometer photons are up-converted to a single 532-nanometer photon in the first stage. In the second stage, the 532-nanometer photon is down-converted to an entangled photon-pair at 800 nanometer and 1600 nanometer which are fiber-coupled at the waveguide output. The photon-pair source features a high pair generation rate, a compact power-efficient package, and continuous wave (CW) or pulsed operation. This is a significant step towards the long term goal of developing sources for high-rate Quantum Key Distribution (QKD) to enable Earth-space secure communications. Characterization and test results are presented. Details and preliminary results of a laboratory free-space QKD experiment with the B92 protocol are also presented.

  15. Occurrence, distribution and risk assessment of polychlorinated biphenyls and polybrominated diphenyl ethers in nine water sources.

    PubMed

    Yang, Yuyi; Xie, Qilai; Liu, Xinyu; Wang, Jun

    2015-05-01

    Water quality of water sources is a critical issue for human health in South China, which is experiencing rapid economic development and is the most densely populated region in China. In this study, pollution by organohalogen compounds in nine important water sources in South China was investigated. Twenty-six organohalogen compounds, including seventeen polychlorinated biphenyls (PCBs) and nine polybrominated diphenyl ethers (PBDEs), were detected using gas chromatographic analysis. The concentrations of total PCBs ranged from 0.93 to 13.07 ng L-1, with an average value of 7.06 ng L-1. The total concentrations of the nine PBDE congeners ranged from not detected (nd) to 7.87 ng L-1, with an average value of 2.59 ng L-1. The compositions of PCBs and PBDEs indicated that historical use of Aroclors 1248, 1254 and 1260, and of commercial PBDEs, may be the main sources of organohalogen compounds in these water sources. The nine water sources could be classified into three clusters by a self-organizing map neural network. Low-halogenated PCBs and PBDEs showed similar distributions across the nine water sources. Cancer risks of PCBs and PBDEs via water consumption were all below 10^-6, indicating that the water quality in the nine water sources of South China was safe for human drinking.

  16. Tsunami source parameters estimated from slip distribution and their relation to tsunami intensity

    NASA Astrophysics Data System (ADS)

    Bolshakova, Anna; Nosov, Mikhail; Kolesov, Sergey

    2015-04-01

    Estimation of the level of tsunami hazard on the basis of earthquake moment magnitude often fails. The most important reason for this is that tsunamis are related to earthquakes in a complex and ambiguous way. In order to reveal a measure of the tsunamigenic potential of an earthquake that would be better than its moment magnitude, we introduce a set of tsunami source parameters that can be calculated from co-seismic ocean-bottom deformation and bathymetry. We consider more than two hundred ocean-bottom earthquakes (1923-2014) for which detailed slip distribution data (Finite Fault Model) are available from the USGS, UCSB, Caltech, and eQuake-RC sites. Making use of the Okada formulae, the vector fields of co-seismic deformation of the ocean bottom are estimated from the slip distribution data. Taking into account bathymetry (GEBCO_08), we determine tsunami source parameters such as the double amplitude of bottom deformation, the displaced water volume, the potential energy of the initial elevation, etc. The tsunami source parameters are examined as functions of earthquake moment magnitude. The contribution of the horizontal component of ocean-bottom deformation to tsunami generation is investigated. We analyse the Soloviev-Imamura tsunami intensity as a function of the tsunami source parameters. The possibility of using tsunami source parameters instead of moment magnitude in tsunami warning is discussed. This work was supported by the Russian Foundation for Basic Research, project 14-05-31295.
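The integral source parameters mentioned above (displaced water volume, potential energy of the initial elevation) follow from summing the bottom-deformation field over the grid. A minimal sketch, assuming a flat regular grid and ignoring the horizontal-deformation contribution and bathymetry weighting that the abstract discusses:

```python
import numpy as np

RHO = 1030.0   # sea-water density, kg/m^3 (assumed value)
G = 9.81       # gravitational acceleration, m/s^2

def tsunami_source_params(eta, dx, dy):
    """Integral parameters of an initial sea-surface elevation field.

    eta    : 2D array of vertical displacement (m) on a regular grid
    dx, dy : grid spacing (m)
    Returns (displaced water volume in m^3, potential energy in J)."""
    dA = dx * dy
    volume = np.sum(eta) * dA                        # net displaced volume
    energy = 0.5 * RHO * G * np.sum(eta ** 2) * dA   # potential energy
    return volume, energy
```

For instance, a uniform 0.5 m uplift over a 10 km x 10 km patch displaces 5e7 m^3 of water; the energy integral weights larger displacements quadratically, which is why it discriminates sources better than magnitude alone.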

  17. Extension of the distributed point source method for ultrasonic field modeling

    PubMed Central

    Cheng, Jiqi; Lin, Wei; Qin, Yi-Xian

    2011-01-01

    The distributed point source method (DPSM) was recently proposed for ultrasonic field modeling and other applications. This method uses distributed point sources, placed slightly behind transducer surface, to model the ultrasound field. The acoustic strength of each point source is obtained through matrix inversion that requires the number of target points on the transducer surface to be equal to the number of point sources. In this work, DPSM was extended and further developed to overcome the limitations of the original method and provide a solid mathematical explanation of the physical principle behind the method. With the extension, the acoustic strength of the point sources was calculated as the solution to the least squares minimization problem instead of using direct matrix inversion. As numerical examples, the ultrasound fields of circular and rectangular transducers were calculated using the extended and original DPSMs which were then systematically compared with the results calculated using the theoretical solution and the exact spatial impulse response method. The numerical results showed the extended method can model ultrasonic fields accurately without the scaling step required by the original method. The extended method has potential applications in ultrasonic field modeling, tissue characterization, nondestructive testing, and ultrasound system optimization. PMID:21269654
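The key change described above, replacing the square-system matrix inversion with a least-squares fit over more target points than sources, can be illustrated with a generic linear system. The random matrix below merely stands in for the Green's-function propagation matrix of the actual DPSM:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical propagation matrix H mapping M point-source strengths to
# the field at N >= M target points on the transducer surface (in real
# DPSM, entries of H come from spherical-wave Green's functions).
N, M = 40, 25
H = rng.standard_normal((N, M))
p_target = rng.standard_normal(N)    # prescribed surface field values

# Original DPSM: direct inversion, which requires N == M.
# Extended DPSM: overdetermined least-squares solution, N > M allowed.
q, residuals, rank, sv = np.linalg.lstsq(H, p_target, rcond=None)
# q now minimizes ||H q - p_target||_2 over all strength vectors
```

Because q is a least-squares minimizer rather than an exact inverse, the number of target points can exceed the number of sources, which removes the original method's one-to-one constraint and its separate scaling step.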

  18. Long-term Satellite Observations of Asian Dust Storm: Source, Pathway, and Interannual Variability

    NASA Technical Reports Server (NTRS)

    Hsu, N. Christina

    2008-01-01

    between Deep Blue retrievals of aerosol optical thickness and those directly from AERONET sunphotometers over desert and semi-desert regions. New Deep Blue products will allow scientists to determine quantitatively the aerosol properties near sources using high spatial resolution measurements from SeaWiFS and MODIS-like instruments. Long-term satellite measurements (1998 - 2007) from SeaWiFS will be utilized to investigate the interannual variability of source, pathway, and dust loading associated with the Asian dust storm outbreaks. In addition, monthly averaged aerosol optical thickness during the springtime from SeaWiFS will also be compared with the MODIS Deep Blue products.

  19. Arsenic solubility and distribution in poultry waste and long-term amended soil.

    PubMed

    Han, F X; Kingery, W L; Selim, H M; Gerard, P D; Cox, M S; Oldham, J L

    2004-03-05

    The purpose of this study was to quantify the solubility and distribution of As among solid-phase components in poultry wastes and in soils receiving long-term poultry waste applications. Arsenic in the water-soluble, NaOCl-extractable (organically bound), NH2OH-HCl-extractable (oxide bound) and residual fractions was quantified in an Upper Coastal Plain soil (Neshoba County, MS) that received annual waste applications. After 25 years, As in the amended soil had a mean of 8.4 mg kg-1, compared to 2.68 mg kg-1 for a non-amended soil. Arsenic in the amended soil was mainly in the residual fraction (72% of total), which is generally considered the least bioavailable fraction. Arsenic in poultry waste samples was primarily water-soluble (5.3-25.1 mg kg-1), representing 36-75% of the total As. To assess the extent of spatial heterogeneity, total As in a 0.5-ha area within the long-term waste-amended field was quantified. Soil surface samples were taken on 10-m grid points, and results for total As appeared negatively skewed and approximated a bimodal distribution. Total As in the amended soil was strongly correlated with Fe oxide, clay and hydroxy-interlayered vermiculite concentrations, and negatively correlated with Mehlich III-P, mica and quartz contents.

  20. Reactive hydro- and chlorocarbons in the troposphere and lower stratosphere: sources, distributions, and chemical impact

    NASA Astrophysics Data System (ADS)

    Scheeren, H. A.

    2003-09-01

    The work presented in this thesis focuses on measurements of chemically reactive C2-C7 non-methane hydrocarbons (NMHC) and C1-C2 chlorocarbons with atmospheric lifetimes of a few hours up to about a year. The group of reactive chlorocarbons includes the most abundant atmospheric species with large natural sources, namely chloromethane (CH3Cl), dichloromethane (CH2Cl2) and trichloromethane (CHCl3), as well as tetrachloroethylene (C2Cl4), which has mainly anthropogenic sources. The NMHC and chlorocarbons are present at relatively low levels in our atmosphere (10^-12 to 10^-9 mol mol-1 of air). Nevertheless, they play a key role in atmospheric photochemistry. For example, the oxidation of NMHC plays a dominant role in the formation of ozone in the troposphere, while the photolysis of chlorocarbons contributes to enhanced ozone depletion in the stratosphere. In spite of this important role, their global source and sink budgets are still poorly understood. Hence, this study aims at improving our understanding of the sources, distribution, and chemical role of reactive NMHC and chlorocarbons in the troposphere and lower stratosphere. To meet this aim, a comprehensive data set of selected C2-C7 NMHC and chlorocarbons has been analyzed, derived from six aircraft measurement campaigns with two different jet aircraft (the Dutch TUD/NLR Cessna Citation PH-LAB and the German DLR Falcon) conducted between 1995 and 2001 (STREAM 1995, 1997 and 1998; LBA-CLAIRE 1998; INDOEX 1999; MINOS 2001). The NMHC and chlorocarbons were detected by gas chromatography (GC-FID/ECD) in pre-concentrated whole-air samples collected in stainless steel canisters on board the measurement aircraft. The measurement locations include tropical (Maldives/Indian Ocean and Surinam), midlatitude (Western Europe and Canada) and polar regions (Lapland/northern Sweden) between the equator and about 70ºN, covering different seasons and pollution levels in the troposphere and lower stratosphere. Of

  1. Simulating of the measurement-device independent quantum key distribution with phase randomized general sources

    PubMed Central

    Wang, Qin; Wang, Xiang-Bin

    2014-01-01

    We present a model for the simulation of measurement-device-independent quantum key distribution (MDI-QKD) with phase-randomized general sources. It can be used to predict experimental observations of MDI-QKD with linear channel loss, simulating corresponding values for the gains, the error rates in different bases, and the final key rates. Our model is applicable to MDI-QKD with an arbitrary probabilistic mixture of different photon states or using any coding scheme. Therefore, it is useful in characterizing and evaluating the performance of the MDI-QKD protocol, making it a valuable tool in studying quantum key distribution. PMID:24728000

  2. A bio-inspired cooperative algorithm for distributed source localization with mobile nodes.

    PubMed

    Khalili, Azam; Rastegarnia, Amir; Islam, Md Kafiul; Yang, Zhi

    2013-01-01

    In this paper we propose an algorithm for distributed optimization in mobile nodes. Compared with many published works, an important consideration here is that the nodes do not know the cost function beforehand. Instead of decision-making based on linear combination of the neighbor estimates, the proposed algorithm relies on information-rich nodes that are iteratively identified. To quickly find these nodes, the algorithm adopts a larger step size during the initial iterations. The proposed algorithm can be used in many different applications, such as distributed odor source localization and mobile robots. Comparative simulation results are presented to support the proposed algorithm.
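The idea described above, drifting toward iteratively identified information-rich nodes with a step size that starts large and decays, can be illustrated with a toy gradient-free cooperative search. This is a hypothetical sketch under simplifying assumptions (global knowledge of the current best node, a synthetic intensity field), not the authors' algorithm:

```python
import numpy as np

def locate_source(positions, measure, iters=60, step0=1.5, decay=0.9, seed=0):
    """Cooperative search without knowing the cost function in advance.

    Every node moves toward the best position found so far (the current
    information-rich node) plus a random exploration kick whose size
    starts large and decays -- mirroring the larger initial step size
    used in the paper. Returns the best position visited."""
    rng = np.random.default_rng(seed)
    pos = np.asarray(positions, dtype=float).copy()
    best_p, best_v = None, -np.inf
    step = step0
    for _ in range(iters):
        vals = np.array([measure(p) for p in pos])
        i = int(np.argmax(vals))
        if vals[i] > best_v:                 # identify information-rich node
            best_v, best_p = vals[i], pos[i].copy()
        pos += 0.5 * (best_p - pos) + step * rng.standard_normal(pos.shape)
        step *= decay                        # shrink the exploration step
    return best_p

# Toy field: intensity falls off with squared distance to a hidden source.
source = np.array([3.0, -2.0])
field = lambda p: 1.0 / (1.0 + np.sum((p - source) ** 2))
```

Because the best-ever position is retained, the returned estimate is never worse than the best initial node; the decaying kick trades early exploration for late refinement.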

  3. Source terms released into the environment for a station blackout severe accident at the Peach Bottom Atomic Power Station

    SciTech Connect

    Carbajo, J.J.

    1995-07-01

    This study calculates source terms released into the environment at the Peach Bottom Atomic Power Station after containment failure during a postulated low-pressure, short-term station blackout severe accident. The severe accident analysis code MELCOR, version 1.8.1, was used in these calculations. Source terms were calculated for three different containment failure modes. The largest environmental releases occur for early containment failure at the drywell liner in contact with the cavity by liner melt-through. This containment failure mode is very likely to occur when the cavity is dry during this postulated severe accident sequence.

  4. Temporal-spatial distribution of non-point source pollution in a drinking water source reservoir watershed based on SWAT

    NASA Astrophysics Data System (ADS)

    Wang, M.; Cheng, W.; Yu, B.-S.; Fang, Y.

    2015-05-01

    The conservation of drinking water source reservoirs has a close relationship between regional economic development and people's livelihood. Research on the non-point pollution characteristics in its watershed is crucial for reservoir security. Tang Pu Reservoir watershed was selected as the study area. The non-point pollution model of Tang Pu Reservoir was established based on the SWAT (Soil and Water Assessment Tool) model. The model was adjusted to analyse the temporal-spatial distribution patterns of total nitrogen (TN) and total phosphorus (TP). The results showed that the loss of TN and TP in the reservoir watershed were related to precipitation in flood season. And the annual changes showed an "M" shape. It was found that the contribution of loss of TN and TP accounted for 84.5% and 85.3% in high flow years, and for 70.3% and 69.7% in low flow years, respectively. The contributions in normal flow years were 62.9% and 63.3%, respectively. The TN and TP mainly arise from Wangtan town, Gulai town, and Wangyuan town, etc. In addition, it was found that the source of TN and TP showed consistency in space.

  5. Theoretical and measured electric field distributions within an annular phased array: consideration of source antennas.

    PubMed

    Zhang, Y; Joines, W T; Jirtle, R L; Samulski, T V

    1993-08-01

    The magnitude of E-field patterns generated by an annular array prototype device has been calculated and measured. Two models were used to describe the radiating sources: a simple linear dipole and a stripline antenna model. The stripline model includes detailed geometry of the actual antennas used in the prototype and an estimate of the antenna current based on microstrip transmission line theory. This more detailed model yields better agreement with the measured field patterns, reducing the rms discrepancy by a factor of about 6 (from approximately 23 to 4%) in the central region of interest where the SEM is within 25% of the maximum. We conclude that accurate modeling of source current distributions is important for determining SEM distributions associated with such heating devices.
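One plausible reading of the quoted rms discrepancy figures (roughly 23% reduced to 4%) is a root-mean-square difference between measured and modeled field patterns, normalized by the measured maximum. The definition below is an assumption for illustration, not necessarily the paper's exact metric:

```python
import numpy as np

def rms_discrepancy(measured, modeled):
    """RMS discrepancy between measured and modeled field magnitudes,
    expressed as a percentage of the measured maximum (assumed
    normalization; the study may define the metric differently)."""
    measured = np.asarray(measured, dtype=float)
    modeled = np.asarray(modeled, dtype=float)
    rms = np.sqrt(np.mean((measured - modeled) ** 2))
    return 100.0 * rms / measured.max()
```

Under this definition, a factor-of-6 reduction (23% to 4%) corresponds to the stripline model's predictions tracking the measured pattern six times more closely, in an rms sense, than the simple dipole model.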

  6. Dynamical changes of ion current distribution for a Penning discharge source using a Langmuir probe array

    NASA Astrophysics Data System (ADS)

    Li, M.; Xiang, W.; Xiao, K. X.; Chen, L.

    2012-02-01

    A parallel-plate electrode and a 9-tip Langmuir probe array, located 1 mm behind the extraction exit of a cold-cathode Penning ion source, are employed to measure the total current and the dynamical changes of the ion current in the 2D profile, respectively. The ion source is operated from a 500 V DC power supply, while the parallel-plate electrode and the Langmuir probe array are driven by a bias voltage ranging from -200 V to 200 V. The dependence of the total current and the dynamical changes of the ion current in the 2D profile are presented at different bias voltages. The experimental results show that the ion current distribution is axially symmetric and approximately unimodal.

  7. Dataset for Testing Contamination Source Identification Methods for Water Distribution Networks

    EPA Pesticide Factsheets

    This dataset includes the results of a simulation study using the source inversion techniques available in the Water Security Toolkit. The data was created to test the different techniques for accuracy, specificity, false positive rate, and false negative rate. The tests examined different parameters including measurement error, modeling error, injection characteristics, time horizon, network size, and sensor placement. The water distribution system network models that were used in the study are also included in the dataset. This dataset is associated with the following publication: Seth, A., K. Klise, J. Siirola, T. Haxton, and C. Laird. Testing Contamination Source Identification Methods for Water Distribution Networks. Journal of Environmental Division, Proceedings of the American Society of Civil Engineers. American Society of Civil Engineers (ASCE), Reston, VA, USA, 2016.

  8. In situ image segmentation using the convexity of illumination distribution of the light sources.

    PubMed

    Zhang, Li

    2008-10-01

    When separating objects from a background in an image, we often meet difficulties in obtaining precise output due to the unclear edges of the objects, as well as poor or nonuniform illumination. In order to solve this problem, this paper presents an in situ segmentation method which takes advantage of the distribution features of the illumination of the light sources, rather than analyzing the image pixels themselves. After analyzing the convexity of illumination distribution (CID) of point and linear light sources, the paper makes use of the CID features to find pixels belonging to the background. Some background pixels are then selected as control points to reconstruct the image background by means of B-splines; finally, by subtracting the reconstructed background from the original image, global thresholding can be employed to make the final segmentation. Quantitative evaluation experiments were made to test the performance of the method.
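The pipeline described (find background pixels via the CID features, reconstruct the illumination background from control points, subtract, then threshold globally) can be sketched in one dimension. The sketch below substitutes a least-squares line for the paper's B-spline reconstruction, and the intensity values and threshold are made-up illustrative numbers:

```python
# 1-D sketch of the background-subtraction pipeline: pick control points
# known to be background, fit a smooth illumination model to them,
# subtract it, then apply a single global threshold.

def fit_line(xs, ys):
    """Least-squares line y = a*x + b (a stand-in for the B-spline fit)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Synthetic scan line: illumination gradient 100 + 2*x with two bright
# objects of +60 intensity sitting on the nonuniform background.
objects = set(range(10, 15)) | set(range(30, 38))
profile = [100 + 2 * x + (60 if x in objects else 0) for x in range(50)]

# Control points: pixels identified as background (here, known directly).
control = [x for x in range(50) if x not in objects]
a, b = fit_line(control, [profile[x] for x in control])

# Subtract the reconstructed background, then threshold globally.
residual = [profile[x] - (a * x + b) for x in range(50)]
mask = [1 if r > 30 else 0 for r in residual]
print(sum(mask))  # 13 object pixels recovered
```

A plain global threshold on `profile` itself would fail here, because the gradient pushes background pixels at the right edge above object pixels at the left edge; subtracting the reconstructed background removes that gradient first.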

  9. A Monte Carlo study on dose distribution evaluation of Flexisource 192Ir brachytherapy source

    PubMed Central

    Alizadeh, Majid; Ghorbani, Mahdi; Haghparast, Abbas; Zare, Naser; Ahmadi Moghaddas, Toktam

    2015-01-01

    Aim: The aim of this study is to evaluate the dose distribution of the Flexisource 192Ir source. Background: Dosimetric evaluation of brachytherapy sources is recommended by Task Group No. 43 (TG-43) of the American Association of Physicists in Medicine (AAPM). Materials and methods: The MCNPX code was used to simulate the Flexisource 192Ir source. The dose rate constant and radial dose function were obtained for water and soft-tissue phantoms and compared with previous data on this source. Furthermore, the dose rate along the transverse axis was obtained by simulating the Flexisource and a point source, and the resulting data were compared with those from the Flexiplan treatment planning system (TPS). Results: The dose rate constants obtained for the water and soft-tissue phantoms were 1.108 and 1.106, respectively. The values of the radial dose function and of the dose rate (cGy/s) are presented as tabulated data and figures. The maximum difference between TPS and Monte Carlo (MC) dose rate values was 11%, in a water phantom at 6.0 cm from the source. Conclusion: Based on comparisons of dosimetric parameters with previously published values, the accuracy of our simulation of the Flexisource 192Ir source was verified. The dose rate constant and radial dose function in the water and soft-tissue phantoms were the same for the Flexisource and the point source. For the Flexisource 192Ir source, the TPS calculations in a water phantom agreed with the simulations within the calculation uncertainties. Furthermore, the TPS results for the Flexisource and the MC results for a point source were practically equal within the calculation uncertainties. PMID:25949224
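For reference, the dose rate constant and radial dose function evaluated above are parameters of the standard AAPM TG-43 dose rate equation, which for the line-source approximation reads:

```latex
\dot{D}(r,\theta) = S_K \,\Lambda\,
  \frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\, g_L(r)\, F(r,\theta)
```

Here $S_K$ is the air-kerma strength, $\Lambda$ the dose rate constant (the 1.108 and 1.106 values quoted), $G_L$ the line-source geometry function, $g_L(r)$ the radial dose function, $F(r,\theta)$ the 2D anisotropy function, and the reference point is $r_0 = 1$ cm, $\theta_0 = \pi/2$.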

  10. [Case study of red water phenomenon in drinking water distribution systems caused by water source switch].

    PubMed

    Wang, Yang; Zhang, Xiao-jian; Chen, Chao; Pan, An-jun; Xu, Yang; Liao, Ping-an; Zhang, Su-xia; Gu, Jun-nong

    2009-12-01

    A red water phenomenon occurred in some communities of a city in China in the days following a water source switch. The origin of this red water problem and the mechanism of iron release were investigated in this study. The water quality of the local and new water sources was tested, and tap water quality in the affected area was monitored for 3 months after the red water occurred. Interior corrosion scales on pipe obtained from the affected area were analyzed by XRD, SEM, and EDS. Corrosion rates of cast iron under the conditions of the two source waters were obtained with an annular reactor. The influence of the different source waters on iron release was studied with a pipe section reactor simulating the distribution system. The results indicated that the large increase in sulfate concentration caused by the water source shift was the cause of the red water problem. The Larson ratio increased from about 0.4 to 1.7-1.9, and the red water problem appeared in the taps of some urban communities just several days after the new water source was introduced. The mechanism of iron release was that the stable shell of the scales in the pipes was corrupted by the high sulfate concentration of the new source water and could hardly recover spontaneously. The effect of sulfate on iron release from the old cast iron was more significant than its effect on enhancing iron corrosion. The rate of iron release increased with increasing Larson ratio, and the correlation between them was nonlinear for the old cast iron. The problem persisted for quite a long time even after the water source was re-shifted to a blend containing only a small ratio of the new source and the Larson ratio was reduced to about 0.6.
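The Larson ratio used above is commonly computed (as the Larson-Skold index) as the ratio of aggressive anions (chloride plus sulfate) to buffering anions (bicarbonate plus carbonate), all in equivalents. The concentrations below are hypothetical, chosen only to reproduce the reported jump from about 0.4 to the 1.7-1.9 range:

```python
# Larson (Larson-Skold) ratio sketch: aggressive anions over buffering
# anions, all in milliequivalents per litre. Higher values indicate water
# that is more corrosive to cast iron.

EQUIV_WEIGHT = {"Cl": 35.45, "SO4": 48.03, "HCO3": 61.02, "CO3": 30.0}  # mg/meq

def larson_ratio(mg_per_l):
    meq = {ion: mg_per_l.get(ion, 0.0) / w for ion, w in EQUIV_WEIGHT.items()}
    return (meq["Cl"] + meq["SO4"]) / (meq["HCO3"] + meq["CO3"])

old_source = {"Cl": 30.0, "SO4": 40.0, "HCO3": 250.0}   # original supply
new_source = {"Cl": 30.0, "SO4": 280.0, "HCO3": 230.0}  # sulfate-rich supply

print(round(larson_ratio(old_source), 2))  # 0.41
print(round(larson_ratio(new_source), 2))  # 1.77
```

The sevenfold increase in sulfate alone is enough to push the index past 1, which is consistent with the abstract's conclusion that sulfate, rather than chloride, drove the iron release.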

  11. Long-term Science Data Curation Using a Digital Object Model and Open-Source Frameworks

    NASA Astrophysics Data System (ADS)

    Pan, J.; Lenhardt, W.; Wilson, B. E.; Palanisamy, G.; Cook, R. B.

    2010-12-01

    Scientific digital content, including Earth Science observations and model output, has become more heterogeneous in format and more distributed across the Internet. In addition, data and metadata are becoming necessarily linked internally and externally on the Web. As a result, such content has become more difficult for providers to manage and preserve and for users to locate, understand, and consume. Specifically, it is increasingly harder to deliver relevant metadata and data processing lineage information along with the actual content consistently. Readme files, data quality information, production provenance, and other descriptive metadata are often separated at the storage level as well as in the data search and retrieval interfaces available to a user. Critical archival metadata, such as auditing trails and integrity checks, are often even more difficult for users to access, if they exist at all. We investigate the use of several open-source software frameworks to address these challenges. We use the Fedora Commons Framework and its digital object abstraction as the repository, the Drupal CMS as the user interface, and the Islandora module as the connector from Drupal to the Fedora Repository. With the digital object model, metadata describing data and data provenance can be associated with the data content in a formal manner, as can external references and other arbitrary auxiliary information. Changes to an object are formally audited, and digital contents are versioned and have checksums automatically computed. Further, relationships among objects are formally expressed with RDF triples. Data replication, recovery, and metadata export are supported with standard protocols, such as OAI-PMH. We provide a tentative comparative analysis of the chosen software stack with the Open Archival Information System (OAIS) reference model, along with our initial results with the existing terrestrial ecology data collections at NASA’s ORNL Distributed Active Archive Center for

  12. Empirical tests of Zipf's law mechanism in open source Linux distribution.

    PubMed

    Maillart, T; Sornette, D; Spaeth, S; von Krogh, G

    2008-11-21

    Zipf's power law is a ubiquitous empirical regularity found in many systems, thought to result from proportional growth. Here, we establish empirically the usually assumed ingredients of stochastic growth models that have been previously conjectured to be at the origin of Zipf's law. We use exceptionally detailed data on the evolution of open source software projects in Linux distributions, which offer a remarkable example of a growing complex self-organizing adaptive system, exhibiting Zipf's law over four full decades.
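Zipf's law as invoked here means that the r-th largest item has size proportional to r^(-μ) with μ ≈ 1; the exponent is commonly estimated by a rank-size regression in log-log space. A minimal sketch on synthetic data (not the Linux-package data of the paper):

```python
import math

# Rank-size sketch: under Zipf's law the r-th largest item has size
# proportional to 1/r, so log(size) vs log(rank) has slope -1.

def rank_size_exponent(sizes):
    """Least-squares slope of log(size) against log(rank)."""
    ordered = sorted(sizes, reverse=True)
    xs = [math.log(rank) for rank in range(1, len(ordered) + 1)]
    ys = [math.log(s) for s in ordered]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Synthetic package sizes following an exact Zipf law.
zipf_sizes = [1000.0 / rank for rank in range(1, 201)]
print(round(rank_size_exponent(zipf_sizes), 3))  # -1.0 by construction
```

On real data the same regression applied to, e.g., package counts per Linux distribution would yield a slope near -1 if the proportional-growth mechanism the paper tests is at work; a simple least-squares fit like this one is known to be biased on noisy tails, which is why empirical studies often prefer maximum-likelihood estimators.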

  13. Single-Event Correlation Analysis of Quantum Key Distribution with Single-Photon Sources

    NASA Astrophysics Data System (ADS)

    Dong, Shangli; Wang, Xiaobo; Zhang, Guofeng; Xiao, Liantuan; Jia, Suotang

    2010-04-01

    Multiphoton emissions allow efficient eavesdropping strategies that threaten the security of quantum key distribution. In this paper, we theoretically discuss the photon correlations between authorized partners in the case of practical single-photon sources that include a multiphoton background. To investigate the feasibility of intercept-resend attacks, the cross correlations and the maximum intercept-resend ratio caused by the background signal are determined using single-event correlation analysis based on single-event detection.

  14. Influence of the electron source distribution on field-aligned currents

    NASA Astrophysics Data System (ADS)

    Bruening, K.; Goertz, C. K.

    1985-01-01

    The field-aligned current density above a discrete auroral arc has been deduced from the downward electron flux and magnetic field measurements onboard the rocket Porcupine flight 4. Both measurements show that the field-aligned current density is, in spite of decreasing peak energies towards the edge of the arc, about 4 times higher there than in the center of the arc. This can be explained by using the single particle description for an anisotropic electron source distribution.

  15. Revision of earthquake hypocentre locations in global bulletin data sets using source-specific station terms

    NASA Astrophysics Data System (ADS)

    Nooshiri, Nima; Saul, Joachim; Heimann, Sebastian; Tilmann, Frederik; Dahm, Torsten

    2017-02-01

    Global earthquake locations are often associated with very large systematic travel-time residuals even for clear arrivals, especially for regional and near-regional stations in subduction zones because of their strongly heterogeneous velocity structure. Travel-time corrections can drastically reduce travel-time residuals at regional stations and, in consequence, improve the relative location accuracy. We have extended the shrinking-box source-specific station terms technique to regional and teleseismic distances and adopted the algorithm for probabilistic, nonlinear, global-search location. We evaluated the potential of the method to compute precise relative hypocentre locations on a global scale. The method has been applied to two specific test regions using existing P- and pP-phase picks. The first data set consists of 3103 events along the Chilean margin and the second one comprises 1680 earthquakes in the Tonga-Fiji subduction zone. Pick data were obtained from the GEOFON earthquake bulletin, produced using data from all available, global station networks. A set of timing corrections varying as a function of source position was calculated for each seismic station. In this way, we could correct the systematic errors introduced into the locations by the inaccuracies in the assumed velocity structure without explicitly solving for a velocity model. Residual statistics show that the median absolute deviation of the travel-time residuals is reduced by 40-60 per cent at regional distances, where the velocity anomalies are strong. Moreover, the spread of the travel-time residuals decreased by ˜20 per cent at teleseismic distances (>28°). Furthermore, strong variations in initial residuals as a function of recording distance are smoothed out in the final residuals. The relocated catalogues exhibit less scattered locations in depth and sharper images of the seismicity associated with the subducting slabs. Comparison with a high-resolution local catalogue reveals that
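The core of the station-terms technique can be sketched with toy numbers: estimate each station's correction as a robust average (here the median) of its travel-time residuals and subtract it. The full method makes the terms source-specific and alternates this correction step with probabilistic relocation inside a shrinking search box; the event and station names below are hypothetical:

```python
import statistics

# Toy sketch of the station-term idea: each station carries a systematic
# travel-time bias from unmodelled velocity structure. Estimating the
# term as the median residual per station and subtracting it shrinks the
# residual spread without solving for a velocity model.

# (event, station) -> observed-minus-predicted travel time, in seconds.
residuals = {
    ("ev1", "STA"): 2.1, ("ev2", "STA"): 1.9, ("ev3", "STA"): 2.3,
    ("ev1", "STB"): -0.9, ("ev2", "STB"): -1.1, ("ev3", "STB"): -1.0,
    ("ev1", "STC"): 0.2, ("ev2", "STC"): -0.1, ("ev3", "STC"): 0.0,
}

stations = {sta for (_, sta) in residuals}
terms = {sta: statistics.median(v for (_, s), v in residuals.items() if s == sta)
         for sta in stations}

corrected = {key: v - terms[key[1]] for key, v in residuals.items()}
spread_before = max(residuals.values()) - min(residuals.values())
spread_after = max(corrected.values()) - min(corrected.values())
print(round(spread_before, 2), round(spread_after, 2))  # 3.4 0.4
```

In the real algorithm the residuals are recomputed after every relocation pass, so the terms and the hypocentres are refined jointly until the spread statistics (like the 40-60 per cent reduction in median absolute deviation reported above) stabilize.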

  16. Source contributions to the regional distribution of secondary particulate matter in California

    NASA Astrophysics Data System (ADS)

    Ying, Qi; Kleeman, Michael J.

    Source contributions to PM2.5 nitrate, sulfate and ammonium ion concentrations in California's San Joaquin Valley (SJV) (4-6 January 1996) and South Coast Air Basin (SoCAB) surrounding Los Angeles (23-25 September 1996) were predicted using a three-dimensional source-oriented Eulerian air quality model. The air quality model tracks the formation of PM2.5 nitrate, sulfate and ammonium ion from primary particles and precursor gases emitted from different sources through a mathematical simulation of emission, chemical reaction, gas-to-particle conversion, transport and deposition. The observed PM2.5 nitrate, sulfate and ammonium ion concentrations, and the mass distribution of nitrate, sulfate and ammonium ion as a function of particle size, have been successfully reproduced by the model simulation. Approximately 45-57% of the PM2.5 nitrate and 34-40% of the PM2.5 ammonium ion in the SJV is formed from precursor gaseous species released from sources upwind of the valley. In the SoCAB, approximately 83% of the PM2.5 nitrate and 82% of the PM2.5 ammonium ion is formed from precursor gaseous species released from sources within the air basin. In the SJV, transportation related sources contribute approximately 24-30% of the PM2.5 nitrate (diesel engines ˜13.5-17.0%, catalyst equipped gasoline engines ˜10.2-12.8% and non-catalyst equipped gasoline engines ˜0.3-0.4%). In the SoCAB, transportation related sources directly contribute approximately 67% of the PM2.5 nitrate (diesel engines 34.6%, non-catalyst equipped gasoline engines 4.7% and catalyst equipped gasoline engines 28.1%). PM2.5 ammonium ion concentrations in the SJV were dominated by area (including animal) NH3 sources (16.7-25.3%), soil (7.2-10.9%), fertilizer NH3 sources (11.4-17.3%) and point NH3 sources (14.3-21.7%). In the SoCAB, ammonium ion is mainly associated with animal sources (28.2%) and catalyst equipped gasoline engines (16.2%). In both regions, the majority of the relatively low PM2.5 sulfate

  17. Long-term dust aerosol production from natural sources in Iceland.

    PubMed

    Dagsson-Waldhauserova, Pavla; Arnalds, Olafur; Olafsson, Haraldur

    2017-02-01

    Iceland is a volcanic island in the North Atlantic Ocean with a maritime climate. In spite of the moist climate, large areas have limited vegetation cover: >40% of Iceland is classified as having considerable to very severe erosion, and 21% of Iceland is volcanic sandy desert. Natural emissions from these sources, driven by strong winds, not only affect regional air quality in Iceland ("Reykjavik haze"); dust particles are at times transported >1000 km over the Atlantic and Arctic Oceans. The aim of this paper is to place the Icelandic dust production area into an international perspective, present the long-term frequency of dust storm events in northeast Iceland, and estimate dust aerosol concentrations during reported dust events. Meteorological observations with dust presence codes and related visibility were used to identify the frequency of, and long-term changes in, dust production in northeast Iceland. There were on average 16.4 days annually with dust observations reported at weather stations within the northeastern erosion area, indicating extreme dust plume activity and erosion within the northeastern deserts, even though the area is snow-covered for much of the winter. During the 2000s the highest occurrence of dust events in six decades was reported. We have measured saltation and aeolian transport during dust/volcanic ash storms in Iceland, which include some of the most intense wind erosion events ever measured. Icelandic dust affects ecosystems over much of Iceland and causes regional haze. It is also likely to affect the ecosystems of the oceans around Iceland, and it lowers the albedo of the Icelandic glaciers, increasing melt-off due to global warming. The study indicates that Icelandic dust may contribute to Arctic air pollution.

  18. Long-term fluctuations of hailstorms in South Moravia, Czech Republic: synthesis of different data sources

    NASA Astrophysics Data System (ADS)

    Chromá, Kateřina; Brázdil, Rudolf; Dolák, Lukáš; Řezníčková, Ladislava; Valášek, Hubert; Zahradníček, Pavel

    2016-04-01

    Hailstorms are natural phenomena causing great material damage at present, just as they did in the past. In Moravia (the eastern part of the Czech Republic), systematic meteorological observations generally started in the latter half of the 19th century. Therefore, in order to create long-term series of hailstorms, it is necessary to search for other sources of information. Different types of documentary evidence are used in historical climatology, such as annals, chronicles, diaries, private letters and newspapers. Besides these, institutional documentary evidence of an economic and administrative character (e.g. taxation records) is of particular importance. This study aims to create a long-term series of hailstorms in South Moravia using various types of documentary evidence (taxation records, family archives, chronicles and newspapers being the most important) together with systematic meteorological observations from the station network. Although the available hailstorm data cover the period 1541-2014, incomplete documentary evidence allows reasonable analysis of fluctuations in hailstorm frequency only since the 1770s. The series compiled from documentary data and systematic meteorological observations is used to identify periods of lower and higher hailstorm frequency. The existing data may also be used for the study of spatial hailstorm variability. Basic uncertainties of the compiled hailstorm series are discussed. Despite some bias in the hailstorm data, the South-Moravian hailstorm series significantly extends our knowledge about this phenomenon in the south-eastern part of the Czech Republic. The study is a part of the research project "Hydrometeorological extremes in Southern Moravia derived from documentary evidence" supported by the Grant Agency of the Czech Republic, reg. no. 13-19831S.

  19. Polycyclic Aromatic Hydrocarbons in the Dagang Oilfield (China): Distribution, Sources, and Risk Assessment

    PubMed Central

    Jiao, Haihua; Rui, Xiaoping; Wu, Shanghua; Bai, Zhihui; Zhuang, Xuliang; Huang, Zhanbin

    2015-01-01

    The levels of 16 polycyclic aromatic hydrocarbons (PAHs) were investigated in 27 upper-layer (0–25 cm) soil samples collected from the Dagang Oilfield (China) in April 2013 to estimate their distribution, possible sources, and the potential risks posed. The total concentrations of PAHs (∑PAHs) varied between 103.6 µg·kg−1 and 5872 µg·kg−1, with a mean concentration of 919.8 µg·kg−1; concentrations increased along a gradient from arable desert soil (mean of 343.5 µg·kg−1), to oil well areas (mean of 627.3 µg·kg−1), to urban and residential zones (mean of 1856 µg·kg−1). Diagnostic ratios showed diverse sources of PAHs, including petroleum, liquid fossil fuels, and biomass combustion. Combustion sources were most significant for PAHs in arable desert soils and residential zones, while petroleum sources were a significant source of PAHs in oilfield areas. Based on their carcinogenicity, PAHs were classified as carcinogenic (B) or not classified/non-carcinogenic (NB). The total concentrations of carcinogenic PAHs (∑BPAHs) varied from 13.3 µg·kg−1 to 4397 µg·kg−1 across all samples, with a mean concentration of 594.4 µg·kg−1. The results suggest that the oilfield soil is subject to a certain level of ecological risk. PMID:26016436

  20. Inverse Analysis of Heat Conduction in Hollow Cylinders with Asymmetric Source Distributions

    NASA Astrophysics Data System (ADS)

    Lambrakos, Samuel G.; Michopoulos, John G.; Jones, Harry N.; Boyer, Craig N.

    2008-10-01

    This paper presents an application of inverse analysis for determining both the temperature field histories and corresponding heat source distributions in hollow cylinders. The primary goal, however, is the development of an inversion infrastructure in a manner that allows taking advantage of all aspects related to its utility, including sensitivity analysis. The conditions generating heat sources are those resulting from intense pulsed-current electrical contact experiments. Under these conditions intense heat currents are generated due to the Joule conversion of the electric conduction currents. Asymmetry of the heat source is induced from the localized melting due to arc-enhanced electric conduction. Experimentally acquired temperature histories and melting domain boundary data are utilized to setup an inverse model of the heat conduction problem. This permits the construction of an estimate not only of the temperature field histories throughout the computational domain but also of an evaluation of the effective thermal diffusivity of the material involved.

  1. Balancing continuous-variable quantum key distribution with source-tunable linear optics cloning machine

    NASA Astrophysics Data System (ADS)

    Guo, Ying; Lv, Geli; Zeng, Guihua

    2015-11-01

    We show that the tolerable excess noise can be dynamically balanced in source preparation while inserting a tunable linear optics cloning machine (LOCM) for balancing the secret key rate and the maximal transmission distance of continuous-variable quantum key distribution (CVQKD). The intensities of source noise are sensitive to the tunable LOCM and can be stabilized to the suitable values to eliminate the impact of channel noise and defeat the potential attacks even in the case of the degenerated linear optics amplifier (LOA). The LOCM-additional noise can be elegantly employed by the reference partner of reconciliation to regulate the secret key rate and the transmission distance. Simulation results show that there is a considerable improvement in the secret key rate of the LOCM-based CVQKD while providing a tunable LOCM for source preparation with the specified parameters in suitable ranges.

  2. Source apportionment of ambient fine particle size distribution using positive matrix factorization in Erfurt, Germany

    PubMed Central

    Yue, Wei; Stölzel, Matthias; Cyrys, Josef; Pitz, Mike; Heinrich, Joachim; Kreyling, Wolfgang G.; Wichmann, H.-Erich; Peters, Annette; Wang, Sheng; Hopke, Philip K.

    2008-01-01

    Particle size distribution data collected between September 1997 and August 2001 in Erfurt, Germany were used to investigate the sources of ambient particulate matter by positive matrix factorization (PMF). A total of 29,313 hourly averaged particle size distribution measurements covering the size range of 0.01 to 3.0 μm were included in the analysis. The particle number concentrations (cm−3) for the 9 channels in the ultrafine range, and the mass concentrations (ng m−3) for the 41 size bins in the accumulation mode and for particles up to 3 μm in aerodynamic diameter, were used in the PMF. The analysis was performed separately for each season. Additional analyses were performed, including calculating the correlations of factor contributions with gaseous pollutants (O3, NO, NO2, CO and SO2) and particle composition data (sulfate, organic carbon and elemental carbon), estimating the contributions of each factor to the total number and mass concentration, identifying the directional locations of the sources using the conditional probability function, and examining the diurnal patterns of factor scores. These results were used to assist in the interpretation of the factors. Five factors representing particles from airborne soil, ultrafine particles from local traffic, secondary aerosols from local fuel combustion, particles from remote traffic sources, and secondary aerosols from multiple sources were identified in all seasons. PMID:18433834
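PMF approximates the data matrix X (samples × size bins) by a product of non-negative factor contributions G and factor profiles F, X ≈ GF, with each matrix element weighted by its measurement uncertainty. The sketch below drops the uncertainty weighting and uses plain Lee-Seung multiplicative-update NMF on a tiny synthetic matrix, so it illustrates only the factorization step, not PMF proper:

```python
# Sketch of the factor-analytic model behind PMF: X (samples x size bins)
# is approximated by G (samples x factors) @ F (factors x bins) with all
# entries non-negative.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def nmf(X, k, iters=2000, eps=1e-9):
    n, m = len(X), len(X[0])
    G = [[0.5 + 0.01 * (i + a) for a in range(k)] for i in range(n)]
    F = [[0.5 + 0.01 * (a + j) for j in range(m)] for a in range(k)]
    for _ in range(iters):
        # Multiplicative updates keep every entry non-negative.
        GtX = matmul(transpose(G), X)
        GtGF = matmul(matmul(transpose(G), G), F)
        F = [[F[a][j] * GtX[a][j] / (GtGF[a][j] + eps) for j in range(m)]
             for a in range(k)]
        XFt = matmul(X, transpose(F))
        GFFt = matmul(G, matmul(F, transpose(F)))
        G = [[G[i][a] * XFt[i][a] / (GFFt[i][a] + eps) for a in range(k)]
             for i in range(n)]
    return G, F

# Four samples mixing two hidden "source" profiles in varying proportions.
X = [[2.0, 1.0, 0.0],
     [4.0, 2.0, 0.0],
     [0.0, 1.0, 2.0],
     [2.0, 1.5, 1.0]]
G, F = nmf(X, k=2)
R = matmul(G, F)
err = sum((X[i][j] - R[i][j]) ** 2 for i in range(4) for j in range(3))
print("squared reconstruction error:", round(err, 4))
```

In the study above, the rows of F would be interpreted as the size-distribution signatures of the five sources (soil, local traffic, and so on), while the columns of G give each source's hourly contribution, which is what gets correlated with the gas-phase and composition data.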

  3. Performance metrics and variance partitioning reveal sources of uncertainty in species distribution models

    USGS Publications Warehouse

    Watling, James I.; Brandt, Laura A.; Bucklin, David N.; Fujisaki, Ikuko; Mazzotti, Frank J.; Romanach, Stephanie; Speroterra, Carolina

    2015-01-01

    Species distribution models (SDMs) are widely used in basic and applied ecology, making it important to understand sources and magnitudes of uncertainty in SDM performance and predictions. We analyzed SDM performance and partitioned variance among prediction maps for 15 rare vertebrate species in the southeastern USA using all possible combinations of seven potential sources of uncertainty in SDMs: algorithms, climate datasets, model domain, species presences, variable collinearity, CO2 emissions scenarios, and general circulation models. The choice of modeling algorithm was the greatest source of uncertainty in SDM performance and prediction maps, with some additional variation in performance associated with the comprehensiveness of the species presences used for modeling. Other sources of uncertainty that have received attention in the SDM literature such as variable collinearity and model domain contributed little to differences in SDM performance or predictions in this study. Predictions from different algorithms tended to be more variable at northern range margins for species with more northern distributions, which may complicate conservation planning at the leading edge of species' geographic ranges. The clear message emerging from this work is that researchers should use multiple algorithms for modeling rather than relying on predictions from a single algorithm, invest resources in compiling a comprehensive set of species presences, and explicitly evaluate uncertainty in SDM predictions at leading range margins.

  4. Regional Sources of Nitrous Oxide over the United States: Seasonal Variation and Spatial Distribution

    SciTech Connect

    Miller, S. M.; Kort, E. A.; Hirsch, A. I.; Dlugokencky, E. J.; Andrews, A. E.; Xu, X.; Tian, H.; Nehrkorn, T.; Eluszkiewicz, J.; Michalak, A. M.; Wofsy, S. C.

    2012-01-01

    This paper presents top-down constraints on the magnitude, spatial distribution, and seasonality of nitrous oxide (N2O) emissions over the central United States. We analyze data from tall towers in 2004 and 2008 using a high resolution Lagrangian particle dispersion model paired with both geostatistical and Bayesian inversions. Our results indicate peak N2O emissions in June with a strong seasonal cycle. The spatial distribution of sources closely mirrors data on fertilizer application with particularly large N2O sources over the US Cornbelt. Existing inventories for N2O predict emissions that differ substantially from the inverse model results in both seasonal cycle and magnitude. We estimate a total annual N2O budget over the central US of 0.9-1.2 TgN/yr and an extrapolated budget for the entire US and Canada of 2.1-2.6 TgN/yr. By this estimate, the US and Canada account for 12-15% of the total global N2O source or 32-39% of the global anthropogenic source as reported by the Intergovernmental Panel on Climate Change in 2007.

  5. Radiation Therapy Photon Beams Dose Conformation According to Dose Distribution Around Intracavitary-Applied Brachytherapy Sources

    SciTech Connect

    Jurkovic, Slaven; Zauhar, Gordana; Faj, Dario; Radojcic, Deni Smilovic; Svabic, Manda

    2010-04-01

    Intracavitary application of brachytherapy sources followed by external beam radiation is essential for the local treatment of carcinoma of the cervix. Because of the very high doses delivered to the central portion of the target volume by the brachytherapy sources, this part of the target volume must be shielded while being irradiated by photon beams. Several shielding techniques are available, from a rectangular block and a standard cervix wedge to more precise, customized step wedge filters. Because the calculation of a step wedge filter's shape has usually been based on an effective attenuation coefficient, an approach that accounts for the scattered radiation in a more precise way is suggested. The method was verified under simulated clinical conditions using film dosimetry. Measured data for various compensators were compared to the numerically determined sum of the dose distribution around the brachytherapy sources and that of the compensated beam. Improvements in the total dose distribution using our method are demonstrated. Agreement between calculations and measurements was within 3%. The sensitivity of the method to source displacement during treatment has also been investigated.

  6. Spatial distribution of the source-receptor relationship of sulfur in Northeast Asia

    NASA Astrophysics Data System (ADS)

    Kajino, M.; Ueda, H.; Sato, K.; Sakurai, T.

    2011-07-01

    The spatial distribution of the source-receptor relationship (SRR) of sulfur over Northeast Asia was examined using a chemical transport model (RAQM) off-line coupled with a meteorological model (MM5). The simulation was conducted for the entire year of 2002. The results were evaluated using monitoring data from six remote stations of the Acid Deposition Monitoring Network in East Asia (EANET). The modeled SO2 and O3 concentrations agreed well with the observations quantitatively. The modeled aerosol and wet deposition fluxes of SO4^2- were underestimated by 30% and 50%, respectively. The domain was divided into 5 source-receptor regions: (I) North China; (II) Central China; (III) South China; (IV) South Korea; and (V) Japan. The sulfur deposition in each receptor region amounted to about 50-75% of the emissions from the same region. The largest contribution to the deposition in each region originated from the same region, accounting for 53-84%. The second largest contribution was due to Region II, supplying 14-43%. The spatial distributions of the SRRs revealed that subregional values varied by about two times more than regional averages due to nonuniformity across the deposition fields. Examining the spatial distributions of the deposition fields was important for identifying subregional areas where the deposition was highest within a receptor region. The horizontal distribution changed substantially according to season.

  7. Spatial distribution of the source-receptor relationship of sulfur in Northeast Asia

    NASA Astrophysics Data System (ADS)

    Kajino, M.; Ueda, H.; Sato, K.; Sakurai, T.

    2010-12-01

    The spatial distribution of the source-receptor relationship (SRR) of sulfur over Northeast Asia was examined using an off-line coupled meteorological/chemical transport model (MM5/RAQM). The simulation was conducted for the entire year of 2002. The results were evaluated using monitoring data for six remote stations of the Acid Deposition Monitoring Network in East Asia (EANET). The modeled SO2 and O3 concentrations agreed well with the observations quantitatively. The modeled aerosol and wet deposition fluxes of SO42- were underestimated by 30% and 50%, respectively, whereas the modeled precipitation was overestimated by 1.6 to 1.9 times. The domain was divided into 5 source-receptor regions: I, North China; II, Central China; III, South China; IV, South Korea; and V, Japan. The sulfur deposition in each receptor region amounted to about 50-75% of the emissions from the same region. The largest contribution to the deposition in each region was of domestic origin, accounting for 53-84%. Outside Region II itself, the second largest contribution was from Region II, supplying 14-43%. The spatial distributions of the SRRs revealed that subregional values varied by about two times more than regional averages due to nonuniformity across the deposition fields. Examining the spatial distributions of the deposition fields was important for identifying subregional areas where the deposition was highest within a receptor region. The horizontal distribution changed substantially according to season.

  8. Geochemistry of dissolved trace elements and heavy metals in the Dan River Drainage (China): distribution, sources, and water quality assessment.

    PubMed

    Meng, Qingpeng; Zhang, Jing; Zhang, Zhaoyu; Wu, Tairan

    2016-04-01

    Dissolved trace elements and heavy metals in the Dan River drainage basin, which is the drinking water source area of the South-to-North Water Transfer Project (China), affect large numbers of people and should therefore be carefully monitored. To investigate the distribution, sources, and quality of river water, this study, integrating catchment geology and multivariate statistical techniques, was carried out in the Dan River drainage from 99 river water samples collected in 2013. The distribution of trace metal concentrations in the Dan River drainage was similar to that in the Danjiangkou Reservoir, indicating that the reservoir was significantly affected by the Dan River drainage. Moreover, our results suggested that As, Sb, Cd, Mn, and Ni were the major pollutants. We revealed extremely high concentrations of As and Sb in the Laoguan River, Cd in the Qingyou River, Mn, Ni, and Cd in the Yinhua River, As and Sb in the Laojun River, and Sb in the Dan River. According to the water quality index, water in the Dan River drainage was suitable for drinking; however, an exposure risk assessment model suggests that As and Sb in the Laojun and Laoguan rivers could pose a high risk to humans in terms of adverse health and potential non-carcinogenic effects.

  9. Analysis of electron energy distribution function in the Linac4 H{sup −} source

    SciTech Connect

    Mochizuki, S. Nishida, K.; Hatayama, A.; Mattei, S.; Lettry, J.

    2016-02-15

    To understand the Electron Energy Distribution Function (EEDF) in the Radio Frequency Inductively Coupled Plasmas (RF-ICPs) of hydrogen negative ion sources, a detailed analysis of the EEDFs using numerical simulation and a theoretical approach based on the Boltzmann equation has been performed. It is shown that the EEDF of RF-ICPs consists of two parts: a low-energy part that obeys a Maxwellian distribution and a high-energy part that deviates from it. These simulation results have been confirmed to be reasonable by the analytical approach. The results suggest that it is possible to enhance the dissociation of molecules and the resultant H{sup −} negative ion production by reducing the gas pressure.

  10. Analysis of electron energy distribution function in the Linac4 H- source

    NASA Astrophysics Data System (ADS)

    Mochizuki, S.; Mattei, S.; Nishida, K.; Hatayama, A.; Lettry, J.

    2016-02-01

    To understand the Electron Energy Distribution Function (EEDF) in the Radio Frequency Inductively Coupled Plasmas (RF-ICPs) of hydrogen negative ion sources, a detailed analysis of the EEDFs using numerical simulation and a theoretical approach based on the Boltzmann equation has been performed. It is shown that the EEDF of RF-ICPs consists of two parts: a low-energy part that obeys a Maxwellian distribution and a high-energy part that deviates from it. These simulation results have been confirmed to be reasonable by the analytical approach. The results suggest that it is possible to enhance the dissociation of molecules and the resultant H- negative ion production by reducing the gas pressure.

  11. Analysis of electron energy distribution function in the Linac4 H⁻ source.

    PubMed

    Mochizuki, S; Mattei, S; Nishida, K; Hatayama, A; Lettry, J

    2016-02-01

    To understand the Electron Energy Distribution Function (EEDF) in the Radio Frequency Inductively Coupled Plasmas (RF-ICPs) of hydrogen negative ion sources, a detailed analysis of the EEDFs using numerical simulation and a theoretical approach based on the Boltzmann equation has been performed. It is shown that the EEDF of RF-ICPs consists of two parts: a low-energy part that obeys a Maxwellian distribution and a high-energy part that deviates from it. These simulation results have been confirmed to be reasonable by the analytical approach. The results suggest that it is possible to enhance the dissociation of molecules and the resultant H(-) negative ion production by reducing the gas pressure.
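The two-part EEDF described in the records above (a Maxwellian bulk plus a non-Maxwellian high-energy component) can be illustrated with a small sketch in which the tail is modeled as a second, hotter Maxwellian. The temperatures and hot-electron fraction below are illustrative assumptions, not values from the paper.

```python
import math

def maxwellian_eedf(energy_ev, te_ev):
    # normalized Maxwellian EEDF: its integral over all energies equals 1 (eV units)
    return (2.0 / math.sqrt(math.pi)) * te_ev ** -1.5 * math.sqrt(energy_ev) * math.exp(-energy_ev / te_ev)

def bi_maxwellian_eedf(energy_ev, frac_hot, te_cold, te_hot):
    # two-temperature EEDF: cold Maxwellian bulk plus a hotter Maxwellian tail
    return ((1.0 - frac_hot) * maxwellian_eedf(energy_ev, te_cold)
            + frac_hot * maxwellian_eedf(energy_ev, te_hot))

def integrate(f, a, b, n=20000):
    # midpoint-rule quadrature, enough for a smoke test of normalization
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# hypothetical parameters: 5% hot electrons, Te_cold = 2 eV, Te_hot = 15 eV
norm = integrate(lambda e: bi_maxwellian_eedf(e, 0.05, 2.0, 15.0), 0.0, 300.0)
```

Above a few tens of eV the hot component dominates the distribution even at a 5% fraction, which is the qualitative signature of the high-energy deviation the abstracts describe.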

  12. The Temporal and Spatial Distribution Characteristics of Heating Season and Source Tracing in Beijing

    NASA Astrophysics Data System (ADS)

    Gong, Huili; Zhao, Wenhui; Li, Xiaojuan; Zhao, Wenji

    2013-01-01

    Inhalable particulate matter (IPM) is one of the principal pollutants in Beijing. Sand weather in spring and winter is partly due to regional airflow, but in most cases IPM results from autochthonous pollution, especially during the winter heating season. In this paper, the temporal and spatial distribution of IPM and the relationship between IPM and its influencing factors were studied by combining remote sensing (RS) techniques with ground-based monitoring. Changes in the underlying surface, obtained from high-resolution remote sensing images of different periods, were analyzed; the concentrations of particles of different diameters were collected by ground observation instruments and their chemical composition was analyzed; and the relationship between the distribution of IPM and the underlying surface was studied using GIS spatial analysis. The results indicate that the distribution of IPM pollution is closely related to the underlying surface, man-made pollution sources, population density, and meteorological factors.

  13. Integrating multiple data sources in species distribution modeling: a framework for data fusion.

    PubMed

    Pacifici, Krishna; Reich, Brian J; Miller, David A W; Gardner, Beth; Stauffer, Glenn; Singh, Susheela; McKerrow, Alexa; Collazo, Jaime A

    2017-03-01

    The last decade has seen a dramatic increase in the use of species distribution models (SDMs) to characterize patterns of species' occurrence and abundance. Efforts to parameterize SDMs often create a tension between the quality and quantity of data available to fit models. Estimation methods that integrate both standardized and non-standardized data types offer a potential solution to the tradeoff between data quality and quantity. Recently several authors have developed approaches for jointly modeling two sources of data (one of high quality and one of lesser quality). We extend their work by allowing for explicit spatial autocorrelation in occurrence and detection error using a Multivariate Conditional Autoregressive (MVCAR) model and develop three models that share information in a less direct manner resulting in more robust performance when the auxiliary data is of lesser quality. We describe these three new approaches ("Shared," "Correlation," "Covariates") for combining data sources and show their use in a case study of the Brown-headed Nuthatch in the Southeastern U.S. and through simulations. All three of the approaches which used the second data source improved out-of-sample predictions relative to a single data source ("Single"). When information in the second data source is of high quality, the Shared model performs the best, but the Correlation and Covariates model also perform well. When the information quality in the second data source is of lesser quality, the Correlation and Covariates model performed better suggesting they are robust alternatives when little is known about auxiliary data collected opportunistically or through citizen scientists. Methods that allow for both data types to be used will maximize the useful information available for estimating species distributions.

  14. From Source to City: Particulate Matter Concentration and Size Distribution Data from an Icelandic Dust Storm

    NASA Astrophysics Data System (ADS)

    Thorsteinsson, T.; Mockford, T.; Bullard, J. E.

    2015-12-01

    Dust storms are the source of particulate matter in 20%-25% of the cases in which the PM10 health limit is exceeded in Reykjavik, which occurred approximately 20 times a year in 2005-2010. Some of the most active source areas for dust storms in Iceland, contributing to the particulate matter load in Reykjavik, are on the south coast of Iceland, with more than 20 dust storm days per year (in 2002-2011). Measurements of particulate matter concentration and size distribution were recorded at Markarfljot in May and June 2015. Markarfljot is a glacial river fed by Eyjafjallajokull and Myrdalsjokull, and its downstream sandur areas have been shown to be significant dust sources. Particulate matter concentration during dust storms was recorded on the sandur area using a TSI DustTrak DRX Aerosol Monitor 8533, and particle size data were recorded using a TSI Optical Particle Sizer 3330 (OPS). Wind speed was measured using cup anemometers at five heights. Particle size measured at the source area shows extremely fine dust production, with the PM1 concentration reaching over 5000 μg/m3 and accounting for most of the mass. This is potentially due to sand particles chipping during saltation instead of breaking uniformly. Dust events occurring during easterly winds were captured by two permanent PM10 aerosol monitoring stations in Reykjavik (140 km west of Markarfljot), suggesting the regional nature of these events. OPS measurements from Reykjavik also provide an interesting comparison of particle size distribution from source to city. Dust storms contribute to the particulate matter pollution in Reykjavik, and their small particle size, at least from this source area, might be a serious health concern.

  15. Mercury in soil near a long-term air emission source in southeastern Idaho

    USGS Publications Warehouse

    Abbott, M.L.; Susong, D.D.; Olson, M.; Krabbenhoft, D.P.

    2003-01-01

    At the Idaho National Engineering and Environmental Laboratory in southeastern Idaho, a 500 °C fluidized bed calciner was intermittently operated for 37 years, with measured Hg emission rates of 9-11 g/h. Surface soil was sampled at 57 locations around the facility to determine the spatial distribution of Hg fallout and surface Hg variability, and to predict the total residual Hg mass in the soil from historical emissions. Measured soil concentrations were slightly higher (p<0.05) within 5 km of the source but were overall very low (15-20 ng/g) compared to background Hg levels published for similar soils in the USA (50-70 ng/g). Concentrations decreased 4%/cm with depth and were found to be twice as high under shrubs and in depressions. Mass balance calculations accounted for only 2.5-20% of the estimated total Hg emitted over the 37-year calciner operating history. These results suggest that much of the Hg deposited from calciner operations may have been reduced in the soil and re-emitted as Hg(0) to the global atmospheric pool.

  16. TREAT source-term experiment STEP-1 simulating a PWR LOCA

    SciTech Connect

    Simms, R.; Baker, L. Jr.; Blomquist, C.A.; Ritzman, R.L.

    1986-01-01

    In a hypothetical pressurized water reactor (PWR) large-break loss-of-coolant accident (LOCA) in which the emergency core cooling system fails, fission product decay heating causes water boil-off and reduced heat removal. Zircaloy cladding is oxidized by the steam. The noble gases and volatile fission products such as cesium and iodine that constitute a principal part of the source term will be released from the damaged fuel at or shortly after the time of cladding failure. TREAT test STEP-1 simulated the LOCA environment when the volatile fission products would be released using four fuel elements from the Belgonucleaire BR3 reactor. The principal objective was to collect a portion of the releases carried by the flow stream in a region as close as possible to the test zone. In this paper, the test is described and the results of an analysis of the thermal and steam/hydrogen environment are compared with the test measurements in order to provide a characterization for analysis of fission product releases and aerosol formation. The results of extensive sample examinations are reported separately.

  17. High order finite difference methods with subcell resolution for advection equations with stiff source terms

    SciTech Connect

    Wang, Wei; Shu, Chi-Wang; Yee, H.C.; Sjögreen, Björn

    2012-01-01

    A new high order finite-difference method utilizing the idea of Harten ENO subcell resolution method is proposed for chemical reactive flows and combustion. In reaction problems, when the reaction time scale is very small, e.g., orders of magnitude smaller than the fluid dynamics time scales, the governing equations will become very stiff. Wrong propagation speed of discontinuity may occur due to the underresolved numerical solution in both space and time. The present proposed method is a modified fractional step method which solves the convection step and reaction step separately. In the convection step, any high order shock-capturing method can be used. In the reaction step, an ODE solver is applied but with the computed flow variables in the shock region modified by the Harten subcell resolution idea. For numerical experiments, a fifth-order finite-difference WENO scheme and its anti-diffusion WENO variant are considered. A wide range of 1D and 2D scalar and Euler system test cases are investigated. Studies indicate that for the considered test cases, the new method maintains high order accuracy in space for smooth flows, and for stiff source terms with discontinuities, it can capture the correct propagation speed of discontinuities in very coarse meshes with reasonable CFL numbers.
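The fractional-step structure described above (a convection step followed by a separate stiff reaction step) can be sketched on the LeVeque-Yee model problem u_t + u_x = -mu*u*(u-1)*(u-1/2). For brevity this sketch uses first-order upwind convection and a sub-cycled explicit Euler ODE solve instead of the fifth-order WENO scheme and subcell-resolution modification of the paper; the grid and stiffness parameters are illustrative.

```python
# LeVeque-Yee model problem on a deliberately coarse grid
N, mu = 50, 1000.0          # cells; stiffness parameter (illustrative)
dx = 1.0 / N
dt = 0.5 * dx               # CFL 0.5 for the convection step

def source(v):
    # stiff reaction term s(u) = -mu * u * (u - 1) * (u - 1/2)
    return -mu * v * (v - 1.0) * (v - 0.5)

def step(u):
    # 1) convection step: first-order upwind, inflow u = 1 at the left
    u = [u[i] - dt / dx * (u[i] - (u[i - 1] if i > 0 else 1.0)) for i in range(N)]
    # 2) reaction step: sub-cycled explicit Euler as a stand-in ODE solver
    nsub = 50
    dts = dt / nsub
    out = []
    for v in u:
        for _ in range(nsub):
            v += dts * source(v)
        out.append(v)
    return out

x = [(i + 0.5) * dx for i in range(N)]
u = [1.0 if xi < 0.3 else 0.0 for xi in x]
for _ in range(20):         # advance to t = 0.2
    u = step(u)
```

Because the reaction drives smeared intermediate values toward the stable equilibria 0 and 1, the front stays sharp but may travel at the wrong speed on an underresolved grid; correcting that front location is exactly what the paper's subcell-resolution modification of the reaction-step input addresses.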

  18. Implementation of a source term control program in a mature boiling water reactor.

    PubMed

    Vargo, G J; Jarvis, A J; Remark, J F

    1991-06-01

    The implementation and results of a source term control program implemented at the James A. FitzPatrick Nuclear Power Plant (JAF), a mature boiling water reactor (BWR) facility that has been in commercial operation since 1975, are discussed. Following a chemical decontamination of the reactor water recirculation piping in the Reload 8/Cycle 9 refueling outage in 1988, hydrogen water chemistry (HWC) and feedwater Zn addition were implemented. This is the first application of both HWC and feedwater Zn addition in a BWR facility. The radiological benefits and impacts of combined operation of HWC and feedwater Zn addition at JAF during Cycle 9 are detailed and summarized. The implementation of hydrogen water chemistry resulted in a significant transport of corrosion products within the reactor coolant system that was greater than anticipated. Feedwater Zn addition appears to be effective in controlling buildup of other activated corrosion products such as 60Co on reactor water recirculation piping; however, adverse impacts were encountered. The major adverse impact of feedwater Zn addition is the production of 65Zn that is released during plant outages and operational transients.

  19. Regulatory Technology Development Plan - Sodium Fast Reactor. Mechanistic Source Term - Trial Calculation. Work Plan

    SciTech Connect

    Grabaskas, David; Bucknor, Matthew; Jerden, James; Brunett, Acacia J.

    2016-02-01

    The overall objective of the SFR Regulatory Technology Development Plan (RTDP) effort is to identify and address potential impediments to the SFR regulatory licensing process. In FY14, an analysis by Argonne identified the development of an SFR-specific MST methodology as an existing licensing gap with high regulatory importance and a potentially long lead-time to closure. This work was followed by an initial examination of the current state-of-knowledge regarding SFR source term development (ANLART-3), which reported several potential gaps. Among these were the potential inadequacies of current computational tools to properly model and assess the transport and retention of radionuclides during a metal fuel pool-type SFR core damage incident. The objective of the current work is to determine the adequacy of existing computational tools, and the associated knowledge database, for the calculation of an SFR MST. To accomplish this task, a trial MST calculation will be performed using available computational tools to establish their limitations with regard to relevant radionuclide release/retention/transport phenomena. The application of existing modeling tools will provide a definitive test to assess their suitability for an SFR MST calculation, while also identifying potential gaps in the current knowledge base and providing insight into open issues regarding regulatory criteria/requirements. The findings of this analysis will assist in determining future research and development needs.

  20. Heat Loss in a Laser-Driven, Magnetized, X-Ray Source with Thermoelectric Terms

    NASA Astrophysics Data System (ADS)

    Giuliani, J. L.; Velikovich, A. L.; Kemp, G. E.; Colvin, J. D.; Koning, J.; Fournier, K. B.

    2016-10-01

    The efficiency of laser-driven K-shell radiation sources, i.e., pipes containing a gas or a metal foam, may be improved by using an axial magnetic field to thermally insulate the pipe wall from the hot interior. A planar, self-similar solution for the magnetic and thermal diffusion is developed to model the near wall physics that includes the thermoelectric Nernst and Ettingshausen effects. This solution extends previous work for the MagLIF concept to include the full dependence of the transport coefficients on the electron Hall parameter. The analytic solution assumes a constant pressure. This case is matched with a 1D MHD code, which is then applied to the case allowing for pressure gradients. These numerical solutions are found to evolve toward the self-similar ones. The variation of the time integrated heat loss with and without the thermoelectric terms will be examined. The present work provides a verification test for general MHD codes that use Braginskii's or Epperlein-Haines' transport model to account for thermoelectric effects. NRL supported by the DOE/NNSA. LLNL work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344.

  1. On the application of ENO scheme with subcell resolution to conservation laws with stiff source terms

    NASA Technical Reports Server (NTRS)

    Chang, Shih-Hung

    1991-01-01

    Two approaches are used to extend the essentially non-oscillatory (ENO) schemes to treat conservation laws with stiff source terms. One approach is the application of the Strang time-splitting method. Here the basic ENO scheme and the Harten modification using subcell resolution (SR), ENO/SR scheme, are extended this way. The other approach is a direct method and a modification of the ENO/SR. Here the technique of ENO reconstruction with subcell resolution is used to locate the discontinuity within a cell and the time evolution is then accomplished by solving the differential equation along characteristics locally and advancing in the characteristic direction. This scheme is denoted ENO/SRCD (subcell resolution - characteristic direction). All the schemes are tested on the equation of LeVeque and Yee (NASA-TM-100075, 1988) modeling reacting flow problems. Numerical results show that these schemes handle this intriguing model problem very well, especially with ENO/SRCD which produces perfect resolution at the discontinuity.
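The characteristic-direction idea behind ENO/SRCD (advance the solution by integrating the source ODE along characteristics) can be illustrated for a linear model u_t + a*u_x = -lam*u, where the ODE along each characteristic has a closed form. The speed, decay rate, and initial profile below are hypothetical, and the real scheme couples this idea with ENO subcell reconstruction rather than an exact initial condition.

```python
import math

a_speed, lam = 1.0, 50.0   # hypothetical advection speed and stiff decay rate

def u0(x):
    # initial front: 1 to the left of the origin, 0 to the right
    return 1.0 if x < 0.0 else 0.0

def solve_along_characteristic(x, t):
    # trace the characteristic through (x, t) back to its foot at t = 0,
    # then integrate du/dt = -lam*u exactly along it
    return u0(x - a_speed * t) * math.exp(-lam * t)

u = solve_along_characteristic(-0.4, 0.1)   # foot at x = -0.5, so u = exp(-5)
```

Because the time evolution follows the characteristic, the discontinuity moves at exactly the advection speed regardless of how stiff lam is, which is the property ENO/SRCD exploits to get the correct propagation speed.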

  2. Comparison of radiation spectra from selected source-term computer codes

    SciTech Connect

    Brady, M.C.; Hermann, O.W.; Wilson, W.B.

    1989-04-01

    This report compares the radiation spectra and intensities predicted by three radionuclide inventory/depletion codes, ORIGEN2, ORIGEN-S, and CINDER-2. The comparisons were made for a series of light-water reactor models (including three pressurized-water reactors (PWR) and two boiling-water reactors (BWR)) at cooling times ranging from 30 d to 100 years. The work presented here complements the results described in an earlier report that discusses in detail the three depletion codes, the various reactor models, and the comparison by nuclide of the inventories, activities, and decay heat predictions by nuclide for the three codes. In this report, the photon production rates from fission product nuclides and actinides were compared as well as the total photon production rates and energy spectra. Very good agreement was observed in the photon source terms predicted by ORIGEN2 and ORIGEN-S. The absence of bremsstrahlung radiation in the CINDER-2 calculations resulted in large differences in both the production rates and spectra in comparison with the ORIGEN2 and ORIGEN-S results. A comparison of the CINDER-2 photon production rates with an ORIGEN-S calculation neglecting bremsstrahlung radiation showed good agreement. An additional discrepancy was observed in the photon spectra predicted from the CINDER-2 calculations and has been attributed to the absence of spectral data for {sup 144}Pr in those calculations. 12 refs., 26 figs., 36 tabs.

  3. A Source-Term Based Boundary Layer Bleed/Effusion Model for Passive Shock Control

    NASA Technical Reports Server (NTRS)

    Baurle, Robert A.; Norris, Andrew T.

    2011-01-01

    A modeling framework for boundary layer effusion has been developed based on the use of source (or sink) terms instead of the usual practice of specifying bleed directly as a boundary condition. This framework allows the surface boundary condition (i.e. isothermal wall, adiabatic wall, slip wall, etc.) to remain unaltered in the presence of bleed. This approach also lends itself to easily permit the addition of empirical models for second order effects that are not easily accounted for by simply defining effective transpiration values. Two effusion models formulated for supersonic flows have been implemented into this framework: the Doerffer/Bohning law and the Slater formulation. These models were applied to unit problems that contain key aspects of the flow physics applicable to bleed systems designed for hypersonic air-breathing propulsion systems. The ability of each model to predict bulk bleed properties was assessed, as well as the response of the boundary layer as it passes through and downstream of a porous bleed system. The model assessment was performed with and without the presence of shock waves. Three-dimensional CFD simulations that included the geometric details of the porous plate bleed systems were also carried out to supplement the experimental data, and provide additional insights into the bleed flow physics. Overall, both bleed formulations fared well for the tests performed in this study. However, the sample of test problems considered in this effort was not large enough to permit a comprehensive validation of the models.

  4. Sensitivity analysis and benchmarking of the BLT low-level waste source term code

    SciTech Connect

    Suen, C.J.; Sullivan, T.M.

    1993-07-01

    To evaluate the source term for low-level waste disposal, a comprehensive model had been developed and incorporated into a computer code, called BLT (Breach-Leach-Transport). Since the release of the original version, many new features and improvements have also been added to the Leach model of the code. This report consists of two different studies based on the new version of the BLT code: (1) a series of verification/sensitivity tests; and (2) benchmarking of the BLT code using field data. Based on the results of the verification/sensitivity tests, the authors concluded that the new version represents a significant improvement and is capable of providing more realistic simulations of the leaching process. Benchmarking work was carried out to provide a reasonable level of confidence in the model predictions. In this study, the experimentally measured release curves for nitrate, technetium-99, and tritium from the saltstone lysimeters operated by Savannah River Laboratory were used. The model results are observed to be in general agreement with the experimental data, within the acceptable limits of uncertainty.

  5. 76 FR 77223 - SourceGas Distribution LLC; Notice of Petition for Rate Approval and Revised Statement of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-12

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF ENERGY Federal Energy Regulatory Commission SourceGas Distribution LLC; Notice of Petition for Rate Approval and Revised Statement of Operating Conditions Take notice that on December 1, 2011, SourceGas Distribution LLC...

  6. Semi-implicit and fully implicit shock-capturing methods for hyperbolic conservation laws with stiff source terms

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Shinn, Judy L.

    1987-01-01

    Some numerical aspects of finite-difference algorithms for nonlinear multidimensional hyperbolic conservation laws with stiff nonhomogeneous (source) terms are discussed. If the stiffness is entirely dominated by the source term, a semi-implicit shock-capturing method is proposed provided that the Jacobian of the source terms possesses certain properties. The proposed semi-implicit method can be viewed as a variant of the Bussing and Murman point-implicit scheme with a more appropriate numerical dissipation for the computation of strong shock waves. However, if the stiffness is not solely dominated by the source terms, a fully implicit method would be a better choice. The situation becomes more complicated in more than one dimension, and the presence of stiff source terms further complicates the solution procedures for alternating direction implicit (ADI) methods. Several alternatives are discussed. The primary motivation for constructing these schemes was to address thermally and chemically nonequilibrium flows in the hypersonic regime. Due to the unique structure of the eigenvalues and eigenvectors for fluid flows of this type, the computation can be simplified, thus providing a more efficient solution procedure than one might have anticipated.
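The point-implicit (semi-implicit) treatment described above can be sketched on a scalar stiff source du/dt = f(u): the source is linearized about the current state so that only the factor (1 - dt*f') appears in the update, making the scheme stable at time steps set by the non-stiff scales. The rate constant and step size below are illustrative, chosen so that k*dt = 10 makes the plain explicit update unstable.

```python
k = 1.0e4          # hypothetical stiff rate constant
dt = 1.0e-3        # step set by the non-stiff (convection) scale, so k*dt = 10

def f(u):
    return -k * u  # stiff source term

def fp(u):
    return -k      # its Jacobian (here a scalar derivative)

def explicit_step(u):
    return u + dt * f(u)

def point_implicit_step(u):
    # linearized implicit source: (1 - dt*f'(u)) * du = dt * f(u)
    return u + dt * f(u) / (1.0 - dt * fp(u))

ue = ui = 1.0
for _ in range(100):
    ue = explicit_step(ue)        # grows without bound: |1 - k*dt| = 9
    ui = point_implicit_step(ui)  # decays stably: factor 1/(1 + k*dt)
```

For a linear source this point-implicit update is exactly backward Euler; for nonlinear sources it is only a linearization, which is why the abstract's condition on the source-term Jacobian matters.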

  7. Spatial distribution and source identification of wet deposition at remote EANET sites in Japan

    NASA Astrophysics Data System (ADS)

    Seto, Sinya; Sato, Manabu; Tatano, Tsutomu; Kusakari, Takashi; Hara, Hiroshi

    Wet deposition of major ions was discussed from the viewpoint of its potential sources for six remote EANET sites in Japan (Rishiri, Happo, Oki, Ogasawara, Yusuhara, and Hedo) having sufficiently high data completeness during 2000-2004. The annual deposition for each site ranged from 12.1 to 46.6 meq m-2 yr-1 for nss-SO42- and from 5.0 to 21.9 meq m-2 yr-1 for NO3-. The ranges of annual deposition of the two ions for the sites were lower than those for urban and rural sites in the Japanese Acid Deposition Survey by the Ministry of the Environment, Japan, and higher than those for global remote marine sites. Factor analysis was performed on log-transformed daily wet deposition of major ions for each site. The obtained two factors were interpreted as (1) acid and soil source (or acid source for some sites), and (2) sea-salt source for all the sites. This indicates that wet deposition of ions over the remote areas in Japan has a similar structure in terms of types of sources. Factor scores of acid and soil source were relatively high during Kosa (Asian dust) events in spring in western Japan. Back-trajectories for high-deposition episodes of acid and soil source (or acid source) for the remote sites showed that episodic air masses frequently came from the northeastern area of the Asian Continent in spring and winter, and from central China in summer and autumn. This indicates a large contribution of continental emissions to wet deposition of ions over the remote areas in Japan.

  8. High-order scheme for the source-sink term in a one-dimensional water temperature model.

    PubMed

    Jing, Zheng; Kang, Ling

    2017-01-01

    The source-sink term in water temperature models represents the net heat absorbed or released by a water system. This term is very important because it accounts for solar radiation that can significantly affect water temperature, especially in lakes. However, existing numerical methods for discretizing the source-sink term are very simplistic, causing significant deviations between simulation results and measured data. To address this problem, we present a numerical method specific to the source-sink term. A vertical one-dimensional heat conduction equation was chosen to describe water temperature changes. A two-step operator-splitting method was adopted as the numerical solution. In the first step, using the undetermined coefficient method, a high-order scheme was adopted for discretizing the source-sink term. In the second step, the diffusion term was discretized using the Crank-Nicolson scheme. The effectiveness and capability of the numerical method was assessed by performing numerical tests. Then, the proposed numerical method was applied to a simulation of Guozheng Lake (located in central China). The modeling results were in excellent agreement with measured data.

  9. High-order scheme for the source-sink term in a one-dimensional water temperature model

    PubMed Central

    Jing, Zheng; Kang, Ling

    2017-01-01

    The source-sink term in water temperature models represents the net heat absorbed or released by a water system. This term is very important because it accounts for solar radiation that can significantly affect water temperature, especially in lakes. However, existing numerical methods for discretizing the source-sink term are very simplistic, causing significant deviations between simulation results and measured data. To address this problem, we present a numerical method specific to the source-sink term. A vertical one-dimensional heat conduction equation was chosen to describe water temperature changes. A two-step operator-splitting method was adopted as the numerical solution. In the first step, using the undetermined coefficient method, a high-order scheme was adopted for discretizing the source-sink term. In the second step, the diffusion term was discretized using the Crank-Nicolson scheme. The effectiveness and capability of the numerical method was assessed by performing numerical tests. Then, the proposed numerical method was applied to a simulation of Guozheng Lake (located in central China). The modeling results were in excellent agreement with measured data. PMID:28264005
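The two-step operator splitting described in the abstract (a source-sink step followed by Crank-Nicolson diffusion) can be sketched for the vertical 1-D heat conduction equation T_t = alpha*T_zz + S. This is a simplified stand-in: the source step here is plain explicit Euler rather than the paper's high-order undetermined-coefficient scheme, and all parameters are hypothetical.

```python
# vertical 1-D water column: T_t = alpha * T_zz + S, split into two steps
N = 21                      # grid points
alpha, dz, dt = 1.0e-3, 0.5, 10.0
r = alpha * dt / (2.0 * dz * dz)

def thomas(a, b, c, d):
    # Thomas algorithm: solve a tridiagonal system in O(N)
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def step(T, source):
    # step 1: advance the source-sink term (plain explicit Euler here)
    T = [T[i] + dt * source(i) for i in range(N)]
    # step 2: Crank-Nicolson diffusion with Dirichlet ends held fixed
    a = [0.0] + [-r] * (N - 2) + [0.0]
    b = [1.0] + [1.0 + 2.0 * r] * (N - 2) + [1.0]
    c = [0.0] + [-r] * (N - 2) + [0.0]
    d = ([T[0]]
         + [r * T[i - 1] + (1.0 - 2.0 * r) * T[i] + r * T[i + 1] for i in range(1, N - 1)]
         + [T[-1]])
    return thomas(a, b, c, d)

# sanity check: with zero source, a linear profile between fixed ends is steady
T = [i / (N - 1) for i in range(N)]
for _ in range(50):
    T = step(T, lambda i: 0.0)
```

The sanity check exploits a basic property of the Crank-Nicolson stencil: a linear profile has zero second difference, so it is reproduced exactly, which is a cheap way to catch indexing mistakes in the tridiagonal setup.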

  10. a Model Analysis of the Spatial Distribution and Temporal Trends of Nitrous Oxide Sources and Sinks

    NASA Astrophysics Data System (ADS)

    Nevison, Cynthia Dale

    1994-01-01

    Nitrous oxide (N2O), an atmospheric trace gas that contributes to both greenhouse warming and stratospheric ozone depletion, is increasing at an annual rate of about 0.25%/yr. By use of a global model of the changing terrestrial nitrogen cycle, the timing and magnitude of this increase are shown to be consistent with enhanced microbial N2O production due to fertilizer, land clearing, livestock manure, and human sewage. Fertilizer appears to be a particularly important source. Increasing emissions from additional anthropogenic N2O sources, including fossil fuel combustion and nylon production, are also shown to coincide with and contribute to N2O's annual atmospheric increase. Collectively, these industrial, combustion-related, and enhanced microbial N2O emissions add up to a total anthropogenic source of about 5 Tg N/yr. Natural N2O emissions from microbial activity in soils and oceans and from natural fires are estimated to produce an annual source of about 11 Tg N/yr, of which the oceans contribute a substantially larger fraction than reported in most current budgets. In contrast to anthropogenic emissions, which are increasing rapidly, natural emissions are predicted to remain relatively constant from 1860 to 2050, although this prediction ignores possible enhancements in microbial N2O production due to global warming. Also in contrast to anthropogenic emissions, which are heavily dominated by the northern hemisphere, the natural source is fairly evenly distributed over the Earth. The predicted magnitude of the natural source is checked against an estimate of the N2O stratospheric sink, while the predicted present-day distribution of natural and anthropogenic sources is tested in a 3-dimensional transport model run. This run reproduces the observed 1 ppb interhemispheric gradient (higher in the north), and suggests that larger gradients may exist over strong continental source regions. Substantial increases in most anthropogenic N2O sources are

  11. Heavy metals in soils from a typical county in Shanxi Province, China: Levels, sources and spatial distribution.

    PubMed

    Pan, Li-bo; Ma, Jin; Wang, Xian-liang; Hou, Hong

    2016-04-01

    The concentrations of As, Cd, Cr, Cu, Pb, Ni, Zn, and Hg in 128 surface soil samples from Xiangfen County, Shanxi Province, China were measured. The concentrations of these eight heavy metals were lower than the critical values in the national soil quality standard. However, these concentrations were found to be slightly higher than their background values in soils in Shanxi Province, indicating enrichment of these metals in soils in Xiangfen County, especially for Hg and Cd. Principal component analysis coupled with cluster analysis was used to analyze the data and identify possible sources of these heavy metals; the results showed that the eight heavy metals in soils from Xiangfen County came from three different sources. Lead, Cd, Cu and Zn mainly arose from agricultural practices and vehicle emissions. Arsenic and Ni arose mainly from parent materials. Industrial practices were the main sources of Cr and Hg. The spatial distribution of the heavy metals varied greatly, and was closely correlated to local anthropogenic activities. This study will be helpful not only for improving local soil environmental quality but will also provide a basis for effectively targeting policies to protect soils from long-term heavy metal accumulation.
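
Principal component analysis of the kind used above for source apportionment can be sketched with synthetic data; the study's concentrations are not reproduced here, so the array shapes, the three latent "sources", and the 85% variance cutoff are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for 128 soil samples x 8 metals: three latent
# "sources" mix into the measured concentrations, plus measurement noise.
latent = rng.normal(size=(128, 3))
loadings = rng.normal(size=(3, 8))
X = latent @ loadings + 0.1 * rng.normal(size=(128, 8))

# Standardize each metal, then perform PCA via SVD.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
explained = s**2 / np.sum(s**2)
# Number of components needed to explain 85% of the variance,
# interpreted as the number of distinct sources.
n_src = int(np.searchsorted(np.cumsum(explained), 0.85) + 1)
```

With three strong latent factors and small noise, the first three components capture nearly all of the variance, mirroring the three-source interpretation in the abstract.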

  12. Sources/sinks analysis with satellite sensing for exploring global atmospheric CO2 distributions

    NASA Astrophysics Data System (ADS)

    Shim, C.; Nassar, R.; Kim, J.

    2010-12-01

    There is growing interest in CO2 budget analysis since space-borne measurements of the global CO2 distribution have become available (e.g., the GOSAT project). Here we simulated the global CO2 distribution to estimate individual source/sink contributions. The chemical transport model GEOS-Chem was used to simulate the global CO2 distribution with updated global sources/sinks at 2°x2.5° horizontal resolution. In addition, 3-D emissions from aviation and chemical oxidation of CO are implemented. The model-simulated CO2 amounts were compared with the GOSAT column-averaged CO2 (SWIR L2 data) from April 2009 to May 2010. The seasonal cycles of CO2 concentration were compared, and the regional patterns of CO2 distribution are explained by the model with a systematic difference of 1-2% in the CO2 concentration. In other work, the GEOS-Chem CO2 concentrations show reasonable agreement with GLOBALVIEW-CO2. We further estimated the source/sink contributions to the global CO2 budget through 9 tagged CO2 tracers (fossil fuels, ocean exchange, biomass burning, biofuel burning, balanced biosphere, net terrestrial exchange, ship emissions, aviation emissions, and oxidation of carbon precursors) over the years 2005-2009. The global CO2 concentration shows an increase of 2.1 ppbv/year, for which fossil fuel and cement emissions are the main driving force (5.0 ppbv/year). Net terrestrial and oceanic exchange of CO2 are the main sinks (-2.1 ppbv/year and -0.7 ppbv/year, respectively). Our model results will help suggest the level of reduction in global anthropogenic CO2 emissions that could control global CO2 trends in the 21st century.

  13. Management of Ultimate Risk of Nuclear Power Plants by Source Terms - Lessons Learned from the Chernobyl Accident

    SciTech Connect

    Genn Saji

    2006-07-01

    The term 'ultimate risk' is used here to describe the probabilities and radiological consequences that should be incorporated in the siting, containment design, and accident management of nuclear power plants for hypothetical accidents. It is closely related to the source terms specified in siting criteria, which assure an adequate separation of the radioactive inventories of the plants from the public in the event of a hypothetical, severe accident. The author would like to point out that current source terms, which are based on information from the Windscale accident (1957) through TID-14844, are very outdated and do not incorporate lessons learned from either the Three Mile Island (TMI, 1979) or the Chernobyl (1986) accident, two of the most severe accidents ever experienced. As a result of the observations of benign radionuclide releases at TMI, the technical community in the US felt that a more realistic evaluation of severe reactor accident source terms was necessary. Against this background, the 'source term research project' was organized in 1984 to respond to these challenges. Unfortunately, soon after the final report from this project was released, the Chernobyl accident occurred. Due to the enormous consequences of that accident, the once-optimistic prospects of establishing a more realistic source term were completely shattered. The Chernobyl accident, with its human death toll and the dispersion of a large part of the fission-fragment inventory into the environment, significantly degraded public acceptance of nuclear energy throughout the world. In spite of this, the nuclear community has been prudent in responding to the public's anxiety about the ultimate safety of nuclear plants, since many unknowns still remained about the mechanism of the Chernobyl accident. In order to resolve some of these mysteries, the author has performed a scoping study of the dispersion and deposition

  14. Spatial distribution of the plasma parameters in the RF negative ion source prototype for fusion

    SciTech Connect

    Lishev, S.; Schiesko, L.; Wünderlich, D.; Fantz, U.

    2015-04-08

    A numerical model, based on the fluid plasma theory, has been used for description of the spatial distribution of the plasma parameters (electron density and temperature, plasma potential as well as densities of the three types of positive hydrogen ions) in the IPP prototype RF negative hydrogen ion source. The model covers the driver and the expansion plasma region of the source with their actual size and accounts for the presence of the magnetic filter field with its actual value and location as well as for the bias potential applied to the plasma grid. The obtained results show that without a magnetic filter the two 2D geometries considered, respectively, with an axial symmetry and a planar one, represent accurately the complex 3D structure of the source. The 2D model with a planar symmetry (where the E×B and diamagnetic drifts could be involved in the description) has been used for analysis of the influence, via the charged-particle and electron-energy fluxes, of the magnetic filter and of the bias potential on the spatial structure of the plasma parameters in the source. Benchmarking of results from the code to experimental data shows that the model reproduces the general trend in the axial behavior of the plasma parameters in the source.

  15. Nitrate distributions and source identification in the Abbotsford-Sumas Aquifer, northwestern Washington State

    USGS Publications Warehouse

    Mitchell, R.J.; Babcock, R.S.; Gelinas, S.; Nanus, L.; Stasney, D.E.

    2003-01-01

    The Abbotsford-Sumas Aquifer is a shallow, predominantly unconfined aquifer that spans regions in southwestern British Columbia, Canada and northwestern Washington, USA. The aquifer is prone to nitrate contamination because of extensive regional agricultural practices. A 22-month ground water nitrate assessment was performed in a 10-km² study area adjacent to the international boundary in northwestern Washington to examine nitrate concentrations and nitrogen isotope ratios to characterize local source contributions from up-gradient sources in Canada. Nitrate concentrations in excess of 10 mg nitrate as nitrogen per liter (mg N L⁻¹) were observed in ground water from most of the 26 domestic wells sampled in the study area, and in a creek that dissects the study area. The nitrate distribution was characteristic of nonpoint agricultural sources and consistent with the historical documentation of agriculturally related nitrate contamination in many parts of the aquifer. Hydrogeologic information, nitrogen isotope values, and statistical analyses indicated a nitrate concentration stratification in the study area. The highest concentrations (>20 mg N L⁻¹) occurred in shallow regions of the aquifer and were linked to local agricultural practices in northwestern Washington. Nitrate concentrations in excess of 10 mg N L⁻¹ deeper in the aquifer (>10 m) were related to agricultural sources in Canada. The identification of two possible sources of ground water nitrate in northwestern Washington adds to the difficulty in assessing and implementing local nutrient management plans for protecting drinking water in the region.

  16. Long-Term Bacterial Dynamics in a Full-Scale Drinking Water Distribution System

    PubMed Central

    Prest, E. I.; Weissbrodt, D. G.; Hammes, F.; van Loosdrecht, M. C. M.; Vrouwenvelder, J. S.

    2016-01-01

    Large seasonal variations in microbial drinking water quality can occur in distribution networks, but are often not taken into account when evaluating results from short-term water sampling campaigns. Temporal dynamics in bacterial community characteristics were investigated during a two-year drinking water monitoring campaign in a full-scale distribution system operating without detectable disinfectant residual. A total of 368 water samples were collected on a biweekly basis at the water treatment plant (WTP) effluent and at one fixed location in the drinking water distribution network (NET). The samples were analysed for heterotrophic plate counts (HPC), Aeromonas plate counts, adenosine-tri-phosphate (ATP) concentrations, and flow cytometric (FCM) total and intact cell counts (TCC, ICC), water temperature, pH, conductivity, total organic carbon (TOC) and assimilable organic carbon (AOC). Multivariate analysis of the large dataset was performed to explore correlative trends between microbial and environmental parameters. The WTP effluent displayed considerable seasonal variations in TCC (from 90 × 10³ cells mL⁻¹ in winter time up to 455 × 10³ cells mL⁻¹ in summer time) and in bacterial ATP concentrations (<1–3.6 ng L⁻¹), which were congruent with water temperature variations. These fluctuations were not detected with HPC and Aeromonas counts. The water in the network was predominantly influenced by the characteristics of the WTP effluent. The increase in ICC between the WTP effluent and the network sampling location was small (34 × 10³ cells mL⁻¹ on average) compared to seasonal fluctuations in ICC in the WTP effluent. Interestingly, the extent of bacterial growth in the NET was inversely correlated to AOC concentrations in the WTP effluent (Pearson’s correlation factor r = -0.35), and positively correlated with water temperature (r = 0.49). Collecting a large dataset at high frequency over a two-year period enabled the characterization of previously
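
The reported correlations are plain Pearson coefficients; a self-contained sketch with made-up seasonal data (the temperatures and growth values below are illustrative, not the campaign's measurements):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson's correlation coefficient, the statistic behind the
    reported r = -0.35 (growth vs. AOC) and r = 0.49 (growth vs.
    water temperature)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return float(np.sum(xm * ym) / np.sqrt(np.sum(xm**2) * np.sum(ym**2)))

# Hypothetical seasonal series: network growth roughly tracking temperature.
temp = [5.0, 8.0, 12.0, 17.0, 21.0, 18.0, 11.0, 6.0]
growth = [14.0, 19.0, 26.0, 34.0, 41.0, 36.0, 24.0, 15.0]
```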

  17. Long-Term Bacterial Dynamics in a Full-Scale Drinking Water Distribution System.

    PubMed

    Prest, E I; Weissbrodt, D G; Hammes, F; van Loosdrecht, M C M; Vrouwenvelder, J S

    2016-01-01

    Large seasonal variations in microbial drinking water quality can occur in distribution networks, but are often not taken into account when evaluating results from short-term water sampling campaigns. Temporal dynamics in bacterial community characteristics were investigated during a two-year drinking water monitoring campaign in a full-scale distribution system operating without detectable disinfectant residual. A total of 368 water samples were collected on a biweekly basis at the water treatment plant (WTP) effluent and at one fixed location in the drinking water distribution network (NET). The samples were analysed for heterotrophic plate counts (HPC), Aeromonas plate counts, adenosine-tri-phosphate (ATP) concentrations, and flow cytometric (FCM) total and intact cell counts (TCC, ICC), water temperature, pH, conductivity, total organic carbon (TOC) and assimilable organic carbon (AOC). Multivariate analysis of the large dataset was performed to explore correlative trends between microbial and environmental parameters. The WTP effluent displayed considerable seasonal variations in TCC (from 90 × 10³ cells mL⁻¹ in winter time up to 455 × 10³ cells mL⁻¹ in summer time) and in bacterial ATP concentrations (<1-3.6 ng L⁻¹), which were congruent with water temperature variations. These fluctuations were not detected with HPC and Aeromonas counts. The water in the network was predominantly influenced by the characteristics of the WTP effluent. The increase in ICC between the WTP effluent and the network sampling location was small (34 × 10³ cells mL⁻¹ on average) compared to seasonal fluctuations in ICC in the WTP effluent. Interestingly, the extent of bacterial growth in the NET was inversely correlated to AOC concentrations in the WTP effluent (Pearson's correlation factor r = -0.35), and positively correlated with water temperature (r = 0.49). Collecting a large dataset at high frequency over a two-year period enabled the characterization of previously

  18. Codon information value and codon transition-probability distributions in short-term evolution

    NASA Astrophysics Data System (ADS)

    Jiménez-Montaño, M. A.; Coronel-Brizio, H. F.; Hernández-Montoya, A. R.; Ramos-Fernández, A.

    2016-07-01

    To understand the way the Genetic Code and the physical-chemical properties of coded amino acids affect accepted amino acid substitutions in short-term protein evolution, taking into account only overall amino acid conservation, we consider an underlying codon-level model. This model employs codon pair-substitution frequencies from an empirical matrix in the literature, modified for single-base mutations only. Ordering the degenerated codons according to their codon information value (Volkenstein, 1979), we found that three-fold and most of four-fold degenerated codons, which have low codon values, were best fitted to rank-frequency distributions with constant failure rate (exponentials). In contrast, almost all two-fold degenerated codons, which have high codon values, were best fitted to rank-frequency distributions with variable failure rate (inverse power-laws). Six-fold degenerated codons are considered to be doubly assigned. The exceptional behavior of some codons, including non-degenerate codons, is discussed.

  19. A census of molecular hydrogen outflows and their sources along the Orion A molecular ridge. Characteristics and overall distribution

    NASA Astrophysics Data System (ADS)

    Davis, C. J.; Froebrich, D.; Stanke, T.; Megeath, S. T.; Kumar, M. S. N.; Adamson, A.; Eislöffel, J.; Gredel, R.; Khanzadyan, T.; Lucas, P.; Smith, M. D.; Varricatt, W. P.

    2009-03-01

    Aims: A census of molecular hydrogen flows across the entire Orion A giant molecular cloud is sought. With this paper we aim to associate each flow with its progenitor and associated molecular core, so that the characteristics of the outflows and outflow sources can be established. Methods: We present wide-field near-infrared images of Orion A, obtained with the Wide Field Camera, WFCAM, on the United Kingdom Infrared Telescope. Broad-band K and narrow-band H2 1-0S(1) images of a contiguous ~8-square-degree region are compared to mid-IR photometry from the Spitzer Space Telescope and (sub)millimetre dust-continuum maps obtained with the MAMBO and SCUBA bolometer arrays. Using previously-published H2 images, we also measured proper motions for H2 features in 33 outflows, and use these data to help associate flows with existing sources and/or dust cores. Results: Together these data give a detailed picture of dynamical star formation across this extensive region. We increase the number of known H2 outflows to 116. A total of 111 H2 flows were observed with Spitzer; outflow sources are identified for 72 of them (12 more H2 flows have tentative progenitors). The MAMBO 1200 μm maps cover 97 H2 flows; 57 of them (59%) are associated with Spitzer sources and either dust cores or extended 1200 μm emission. The H2 jets are widely distributed and randomly orientated. The jets do not appear to be orthogonal to large-scale filaments or even to the small-scale cores associated with the outflow sources (at least when traced with the 11″ resolution of the 1200 μm MAMBO observations). Moreover, H2 jet lengths (L) and opening angles (θ) are not obviously correlated with indicators of outflow source age - source spectral index, α (measured from mid-IR photometry), or (sub)millimetre core flux. It seems clear that excitation requirements limit the usefulness of H2 as a tracer of L and θ (though jet position angles are well defined). Conclusions: We demonstrate that H2 jet

  20. A Long Term Data Record of the Ozone Vertical Distribution: 1970-2010

    NASA Astrophysics Data System (ADS)

    McPeters, R. D.; Haffner, D. P.; Taylor, S.; Bhartia, P. K.

    2011-12-01

    Under a NASA program to produce long-term data records from instruments on multiple satellites (MEaSUREs), data from a series of eight SBUV and SBUV/2 instruments have been reprocessed to create a coherent ozone time series. Data from the Nimbus 4 BUV, Nimbus 7 SBUV, and SBUV/2 instruments on NOAA 9, 11, 14, 16, 17, and 18 were used, covering the periods 1970-1972 and 1979-2011. The ultimate goal is an ozone Earth Science Data Record (ESDR) - a consistent, calibrated ozone time series that can be used for trend analyses and other studies. Instead of making simple adjustments to ozone to create a long-term record, for this data set radiance adjustments were made for each instrument to maintain a consistent calibration. Intra-instrument comparisons as well as SAGE II and UARS MLS data were used to evaluate the consistency of the record and make calibration adjustments as needed. In this version 8.6 processing, data for all eight instruments were reprocessed using adjusted radiances. Also for version 8.6, new ozone cross sections (those of Brion, Daumont, and Malicet) were used, along with a cloud height climatology derived from Aura OMI measurements. Total column ozone appears to be consistent to about 1% for the new time series, while the ozone vertical distribution is consistent to about 5%.

  1. Assessing the complexity of short-term heartbeat interval series by distribution entropy.

    PubMed

    Li, Peng; Liu, Chengyu; Li, Ke; Zheng, Dingchang; Liu, Changchun; Hou, Yinglong

    2015-01-01

    Complexity of heartbeat interval series is typically measured by entropy. Recent studies have found that sample entropy (SampEn) or fuzzy entropy (FuzzyEn) quantifies essentially the randomness, which may not be uniformly identical to complexity. Additionally, these entropy measures are heavily dependent on the predetermined parameters and constrained by data length. Aiming at improving the robustness of complexity assessment for short-term RR interval series, this study developed a novel measure--distribution entropy (DistEn). The DistEn took full advantage of the inherent information underlying the vector-to-vector distances in the state space by probability density estimation. The performance of DistEn was examined on theoretical data and experimental short-term RR interval series. Results showed that DistEn correctly ranked the complexity of simulated chaotic series and Gaussian noise series. The DistEn had relatively lower sensitivity to the predetermined parameters and showed stability even for quantifying the complexity of extremely short series. Analysis further showed that the DistEn indicated the loss of complexity in both healthy aging and heart failure patients (both p < 0.01), whereas neither the SampEn nor the FuzzyEn achieved comparable results (all p ≥ 0.05). This study suggested that the DistEn would be a promising measure for prompt clinical examination of cardiovascular function.
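
A minimal sketch of the DistEn idea as described above; the embedding dimension m and histogram bin count M are illustrative defaults, not necessarily the authors' settings:

```python
import numpy as np

def dist_en(x, m=2, M=64):
    """Distribution entropy (DistEn) sketch: embed the series in m-dim
    vectors, take all pairwise Chebyshev distances, estimate their
    probability distribution with an M-bin histogram, and return the
    normalized Shannon entropy of that distribution (range 0 to 1)."""
    x = np.asarray(x, dtype=float)
    N = len(x) - m + 1
    X = np.array([x[i:i + m] for i in range(N)])
    # Chebyshev (max-norm) distances between all distinct vector pairs
    d = np.abs(X[:, None, :] - X[None, :, :]).max(axis=-1)
    d = d[np.triu_indices(N, k=1)]
    p, _ = np.histogram(d, bins=M)
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum() / np.log2(M))
```

Unlike SampEn, no tolerance threshold r is needed: the full empirical distribution of distances is used, which is why the measure is less sensitive to parameter choices.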

  2. Efficient construction of high-resolution TVD conservative schemes for equations with source terms: application to shallow water flows

    NASA Astrophysics Data System (ADS)

    Burguete, J.; García-Navarro, P.

    2001-09-01

    High-resolution total variation diminishing (TVD) schemes are widely used for the numerical approximation of hyperbolic conservation laws. Their extension to equations with source terms involving spatial derivatives is not obvious. In this work, efficient ways of constructing conservative schemes from the conservative, non-conservative or characteristic form of the equations are described in detail. An upwind, as opposed to a pointwise, treatment of the source terms is adopted here, and a new technique is proposed in which source terms are included in the flux limiter functions to get a complete second-order compact scheme. A new correction to fix the entropy problem is also presented and a robust treatment of the boundary conditions according to the discretization used is stated.
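
The flux-limited TVD machinery the abstract builds on can be illustrated for linear advection with a minmod limiter; the paper's upwind source-term coupling is scheme-specific and not reproduced here, so this is only the homogeneous TVD core, with illustrative names:

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter: the smaller-magnitude slope when both
    arguments agree in sign, zero otherwise (this enforces TVD)."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def tvd_step(u, c):
    """One step of a second-order, minmod-limited TVD scheme for the
    linear advection equation u_t + a u_x = 0, with CFL number
    c = a*dt/dx in (0, 1] and periodic boundaries."""
    slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
    u_face = u + 0.5 * (1.0 - c) * slope    # limited upwind interface value
    return u - c * (u_face - np.roll(u_face, 1))

def total_variation(u):
    return float(np.abs(u - np.roll(u, 1)).sum())
```

With the minmod limiter, the total variation of a step profile never grows; that defining TVD property is what the paper extends to equations with source terms by moving the source into the limiter functions.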

  3. Size distributions and source function of sea spray aerosol over the South China Sea

    NASA Astrophysics Data System (ADS)

    Chu, Yingjia; Sheng, Lifang; Liu, Qian; Zhao, Dongliang; Jia, Nan; Kong, Yawen

    2016-08-01

    The number concentrations of aerosol particles in the radius range 0.06-5 μm and meteorological parameters were measured on board during a cruise in the South China Sea from August 25 to October 12, 2012. Effective fluxes at the reference height of 10 m were estimated by the steady-state dry deposition method based on the observed data, and the influences of different air masses on the flux are discussed in this paper. The number size distribution was characterized by a bimodal mode, with an average total number concentration of (1.50 ± 0.76)×10³ cm⁻³. The two mode radii were 0.099 µm and 0.886 µm, both within the accumulation mode. A typical daily average size distribution was compared with that measured in the Bay of Bengal. Over the whole radius range, the number concentrations were in agreement with each other; the modes were more distinct in this study than those obtained in the Bay of Bengal. The size distribution of the fluxes was fitted with the sum of a log-normal and a power-law distribution. The impact of different air masses was mainly on the flux magnitude, rather than on the shape of the spectral distribution. A semiempirical source function applicable in the radius range 0.06 µm < r80 < 0.3 µm, with wind speeds varying from 1.00 m s⁻¹ to 10.00 m s⁻¹, was derived.
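
The fitted composite shape, a log-normal plus a power-law term, can be written as a small helper; every coefficient value below is an illustrative placeholder, not a fitted parameter from the cruise data:

```python
import numpy as np

def flux_spectrum(r, n0=1.0, r0=0.1, sigma=0.7, A=0.01, alpha=2.0):
    """Composite shape of the kind fitted to the flux size distribution:
    a log-normal term (amplitude n0, mode radius r0, width sigma) plus a
    power-law term (amplitude A, exponent alpha). Radius r in micrometres;
    all coefficients are illustrative placeholders."""
    lognormal = n0 * np.exp(-0.5 * (np.log(r / r0) / sigma) ** 2)
    power_law = A * r ** (-alpha)
    return lognormal + power_law
```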

  4. Electromagnetic field distributions in waveguide-based axial-type microwave plasma source

    NASA Astrophysics Data System (ADS)

    Nowakowska, H.; Jasiński, M.; Mizeraczyk, J.

    2009-08-01

    We present results from simulations of 2D distributions of the electromagnetic field inside a waveguide-based axial-type microwave plasma source (MPS) used for hydrogen production via methane reforming. The studies are aimed at optimization of discharge processes and hydrogen production. We derive equations for determining electromagnetic field distributions and then determine the electromagnetic field distributions for two cases: without and with plasma inside the MPS. For the first case, we examine the influence of the length of the inner conductor of the coaxial line on electromagnetic field distributions. We have obtained standing wave patterns along the coaxial line and found resonances for certain positions of the coaxial line inner conductor. For the case with plasma inside the MPS, we perform calculations assuming that distributions of plasma parameters are known. Simulations are done for several values of maximum electron density. We have found that for values of electron density greater than 3 × 10¹⁸ m⁻³ a strong skin effect in the plasma is observed. Consequently, plasma may be treated as an extension of the inner conductor of the coaxial line. We have used FlexPDE software for the calculations.
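
The quoted skin-effect threshold can be sanity-checked against the collisionless plasma skin depth, delta = c / omega_pe; this is a back-of-envelope check that ignores collisions and geometry, not the paper's fluid model:

```python
import math

def plasma_skin_depth(n_e):
    """Collisionless skin depth delta = c / omega_pe for electron density
    n_e in m^-3, with omega_pe = sqrt(n_e e^2 / (eps0 m_e))."""
    e = 1.602176634e-19       # elementary charge, C
    m_e = 9.1093837015e-31    # electron mass, kg
    eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
    c = 2.99792458e8          # speed of light, m/s
    omega_pe = math.sqrt(n_e * e**2 / (eps0 * m_e))
    return c / omega_pe

# At the quoted threshold of 3e18 m^-3 the skin depth is ~3 mm, small
# compared with typical source dimensions, consistent with the strong
# skin effect reported above.
```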

  5. The distribution of polarized radio sources >15 μJy IN GOODS-N

    SciTech Connect

    Rudnick, L.; Owen, F. N.

    2014-04-10

    We present deep Very Large Array observations of the polarization of radio sources in the GOODS-N field at 1.4 GHz at resolutions of 1.6″ and 10″. At 1.6″, we find that the peak flux cumulative number count distribution is N(>p) ∼ 45 (p/30 μJy)^-0.6 per square degree above a detection threshold of 14.5 μJy. This represents a break from the steeper slopes at higher flux densities, resulting in fewer sources predicted for future surveys with the Square Kilometer Array and its precursors. It provides a significant challenge for using background rotation measures (RMs) to study clusters of galaxies or individual galaxies. Most of the polarized sources are well above our detection limit, and they are also radio galaxies that are well-resolved even at 10″, with redshifts from ∼0.2-1.9. We determined a total polarized flux for each source by integrating the 10″ polarized intensity maps, as will be done by upcoming surveys such as POSSUM. These total polarized fluxes are a factor of two higher, on average, than the peak polarized flux at 1.6″; this would increase the number counts by ∼50% at a fixed flux level. The detected sources have RMs with a characteristic rms scatter of ∼11 rad m⁻² around the local Galactic value, after eliminating likely outliers. The median fractional polarization from all total intensity sources does not continue the trend of increasing at lower flux densities, as seen for stronger sources. The changes in the polarization characteristics seen at these low fluxes likely represent the increasing dominance of star-forming galaxies.
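
The reported cumulative number-count law can be evaluated directly (flux p in μJy, counts per square degree; the abstract quotes validity above the 14.5 μJy detection threshold):

```python
def n_polarized_per_sq_deg(p_microjy):
    """Cumulative polarized source counts N(>p) ~ 45 (p/30 uJy)^-0.6 per
    square degree, the power law reported for GOODS-N at 1.4 GHz."""
    return 45.0 * (p_microjy / 30.0) ** -0.6
```

At the 14.5 μJy threshold this predicts roughly 70 polarized sources per square degree, illustrating why the shallow slope yields fewer sources than an extrapolation of the steeper high-flux counts.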

  6. Models for Deploying Open Source and Commercial Software to Support Earth Science Data Processing and Distribution

    NASA Astrophysics Data System (ADS)

    Yetman, G.; Downs, R. R.

    2011-12-01

    Software deployment is needed to process and distribute scientific data throughout the data lifecycle. Developing software in-house can take software development teams away from other software development projects and can require efforts to maintain the software over time. Adopting and reusing software and system modules that have been previously developed by others can reduce in-house software development and maintenance costs and can contribute to the quality of the system being developed. A variety of models are available for reusing and deploying software and systems that have been developed by others. These deployment models include open source software, vendor-supported open source software, commercial software, and combinations of these approaches. Deployment in Earth science data processing and distribution has demonstrated the advantages and drawbacks of each model. Deploying open source software offers advantages for developing and maintaining scientific data processing systems and applications. By joining an open source community that is developing a particular system module or application, a scientific data processing team can contribute to aspects of the software development without having to commit to developing the software alone. Communities of interested developers can share the work while focusing on activities that utilize in-house expertise and addresses internal requirements. Maintenance is also shared by members of the community. Deploying vendor-supported open source software offers similar advantages to open source software. However, by procuring the services of a vendor, the in-house team can rely on the vendor to provide, install, and maintain the software over time. Vendor-supported open source software may be ideal for teams that recognize the value of an open source software component or application and would like to contribute to the effort, but do not have the time or expertise to contribute extensively. 
Vendor-supported software may

  7. North Slope, Alaska: Source rock distribution, richness, thermal maturity, and petroleum charge

    USGS Publications Warehouse

    Peters, K.E.; Magoon, L.B.; Bird, K.J.; Valin, Z.C.; Keller, M.A.

    2006-01-01

    Four key marine petroleum source rock units were identified, characterized, and mapped in the subsurface to better understand the origin and distribution of petroleum on the North Slope of Alaska. These marine source rocks, from oldest to youngest, include four intervals: (1) Middle-Upper Triassic Shublik Formation, (2) basal condensed section in the Jurassic-Lower Cretaceous Kingak Shale, (3) Cretaceous pebble shale unit, and (4) Cretaceous Hue Shale. Well logs for more than 60 wells and total organic carbon (TOC) and Rock-Eval pyrolysis analyses for 1183 samples in 125 well penetrations of the source rocks were used to map the present-day thickness of each source rock and the quantity (TOC), quality (hydrogen index), and thermal maturity (Tmax) of the organic matter. Based on assumptions related to carbon mass balance and regional distributions of TOC, the present-day source rock quantity and quality maps were used to determine the extent of fractional conversion of the kerogen to petroleum and to map the original TOC (TOCo) and the original hydrogen index (HIo) prior to thermal maturation. The quantity and quality of oil-prone organic matter in Shublik Formation source rock generally exceeded that of the other units prior to thermal maturation (commonly TOCo > 4 wt.% and HIo > 600 mg hydrocarbon/g TOC), although all are likely sources for at least some petroleum on the North Slope. We used Rock-Eval and hydrous pyrolysis methods to calculate expulsion factors and petroleum charge for each of the four source rocks in the study area. Without attempting to identify the correct methods, we conclude that calculations based on Rock-Eval pyrolysis overestimate expulsion factors and petroleum charge because low pressure and rapid removal of thermally cracked products by the carrier gas retards cross-linking and pyrobitumen formation that is otherwise favored by natural burial maturation. 
Expulsion factors and petroleum charge based on hydrous pyrolysis may also be high

  8. Distribution of Practice and Metacognition in Learning and Long-Term Retention of a Discrete Motor Task

    ERIC Educational Resources Information Center

    Dail, Teresa K.; Christina, Robert W.

    2004-01-01

    This study examined judgments of learning and the long-term retention of a discrete motor task (golf putting) as a function of practice distribution. The results indicated that participants in the distributed practice group performed more proficiently than those in the massed practice group during both acquisition and retention phases. No…

  9. Effect of seasonal and long-term changes in stress on sources of water to wells

    USGS Publications Warehouse

    Reilly, Thomas E.; Pollock, David W.

    1995-01-01

    The source of water to wells is ultimately the location where the water flowing to a well enters the boundary surface of the ground-water system. In ground-water systems that receive most of their water from areal recharge, the location of the water entering the system is at the water table. The area contributing recharge to a discharging well is the surface area that defines the location of the water entering the ground-water system. Water entering the system at the water table flows to the well and is eventually discharged from the well. Many State agencies are currently (1994) developing wellhead-protection programs. The thrust of some of these programs is to protect water supplies by determining the areas contributing recharge to water-supply wells and by specifying regulations to minimize the opportunity for contamination of the recharge water by activities at the land surface. In the analyses of ground-water flow systems, steady-state average conditions are frequently used to simplify the problem and make a solution tractable. Recharge is usually cyclic in nature, however, having seasonal cycles and longer term climatic cycles. A hypothetical system is quantitatively analyzed to show that, in many cases, these cyclic changes in the recharge rates apparently do not significantly affect the location and size of the areas contributing recharge to wells. The ratio of the mean travel time to the length of the cyclic stress period appears to indicate whether the transient effects of the cyclic stress must be explicitly represented in the analysis of contributing areas to wells. For the cases examined, if the ratio of the mean travel time to the period of the cyclic stress was much greater than one, then the transient area contributing recharge to wells was similar to the area calculated using an average steady-state condition. 
Noncyclic long-term transient changes in water use, however, and cyclic stresses on systems with ratios less than 1 can and do affect the

  10. Long-term mechanical life testing of polymeric post insulators for distribution and a comparison to porcelain

    SciTech Connect

    Cherney, E.A. )

    1988-07-01

    The paper presents the results and analyses of long-term cantilever strength tests on polymeric line post insulators. The time-to-failure data for static cantilever loads are represented by the Weibull distribution. The life distribution, obtained from the maximum likelihood estimates of the accelerated failure times, fits an exponential model. An extrapolation of the life distribution to normal loads provides an estimate of the strength rating and mechanical equivalence to porcelain line post insulators.
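
The record above combines Weibull fitting of time-to-failure data with extrapolation of the life distribution to normal loads. A minimal sketch of the maximum-likelihood step is given below; the synthetic failure times and parameter values are hypothetical illustrations, not data from the paper:

```python
import math
import random

def fit_weibull_mle(times):
    """Fit a two-parameter Weibull distribution to time-to-failure data by
    maximum likelihood: bisection on the shape parameter k, then a closed
    form for the scale parameter."""
    n = len(times)
    logs = [math.log(t) for t in times]
    mean_log = sum(logs) / n

    def g(k):
        # Profile likelihood equation for the shape; g is increasing in k.
        tk = [t ** k for t in times]
        return sum(x * l for x, l in zip(tk, logs)) / sum(tk) - 1.0 / k - mean_log

    lo, hi = 0.01, 50.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if g(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    shape = 0.5 * (lo + hi)
    scale = (sum(t ** shape for t in times) / n) ** (1.0 / shape)
    return shape, scale

# Illustrative synthetic data drawn from a known Weibull distribution
random.seed(42)
true_shape, true_scale = 2.0, 1000.0   # hypothetical accelerated-test values
data = [random.weibullvariate(true_scale, true_shape) for _ in range(2000)]
shape, scale = fit_weibull_mle(data)
print(round(shape, 2), round(scale, 1))
```

The fitted parameters land close to the generating values; in an accelerated-life setting the fit would be repeated per load level before extrapolating to service loads.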

  11. Organic micropollutants in coastal waters from NW Mediterranean Sea: sources, distribution and potential risk.

    PubMed

    Sánchez-Avila, Juan; Tauler, Romà; Lacorte, Silvia

    2012-10-01

    This study provides a first estimate of the sources, distribution and risk of organic micropollutants (OMPs) in coastal waters from the NW Mediterranean Sea. Polycyclic aromatic hydrocarbons, polychlorinated biphenyls, organochlorinated pesticides, polybrominated diphenyl ethers, phthalates and alkylphenols were analyzed by solid phase extraction and gas chromatography coupled to tandem mass spectrometry (SPE-GC-EI-MS/MS). River waters and wastewater treatment plant effluents discharging to the sea were identified as the main sources of OMPs to coastal waters, with an estimated input of around 25,800 g d(-1). The concentration of ΣOMPs in coastal areas ranged from 17.4 to 8442 ng L(-1), and was highest in port waters, followed by coastal and river mouth seawaters. A summarized overview of the patterns and sources of OMP contamination in the investigated coastal waters of the NW Mediterranean Sea, as well as of their geographical distribution, was obtained by Principal Component Analysis of the complete data set after adequate pretreatment. Alkylphenols, bisphenol A and phthalates were the main contributors to ΣOMPs and posed an estimated significant pollution risk for fish, algae and the sensitive mysid shrimp in seawater samples. The combination of GC-MS/MS, chemometrics and risk analysis is proven useful for better control and management of OMP discharges.

  12. Scaling Relations Between Mainshock Source Parameters and Aftershock Distributions for Use in Aftershock Forecasting

    NASA Astrophysics Data System (ADS)

    Donovan, J.; Jordan, T. H.

    2010-12-01

    Aftershocks are often used to delineate the mainshock rupture zone retrospectively. In aftershock forecasting, on the other hand, the problem is to use mainshock rupture area to determine the aftershock zone prospectively. The procedures for this type of prediction are not as well developed and have been restricted to simple parameterizations such as the Utsu-Seki (1955) scaling relation between mainshock energy and aftershock area (Ogata and Zhuang, 2006). With a focus on improving current forecasting methods, we investigate the relationship between spatial source parameters that can be rapidly computed (spatial centroid and characteristic dimensions) and corresponding spatial measures of the aftershock distribution. For a set of about 30 large events, we either extracted source parameters from the McGuire et al (2002) finite moment tensor (FMT) catalog, or computed them from the online SRCMOD database (Mai, 2004). We identified aftershocks with windowing and scale-free methods, and computed both L1 and L2 measures of their distributions. Our comparisons produce scaling relations among the characteristic dimensions that can be used to initiate aftershock forecasts. By using rapidly-determined source parameters, we can decrease the forecasting latency and thus improve the probability gain of the forecasting methods.

  13. Sources and distribution of aliphatic and polyaromatic hydrocarbons in sediments from the Neuquen River, Argentine Patagonia.

    PubMed

    Monza, Liliana B; Loewy, Ruth M; Savini, Mónica C; Pechen de d'Angelo, Ana M

    2013-01-01

    Spatial distribution and probable sources of aliphatic and polyaromatic hydrocarbons (AHs, PAHs) were investigated in surface sediments collected along the bank of the Neuquen River, Argentina. Total concentrations of aliphatic hydrocarbons ranged between 0.41 and 125 μg/g dw. Six stations presented low values of resolved aliphatic hydrocarbons and the n-alkane distribution indexes applied suggested a clear biogenic source. These values can be considered the baseline levels of aliphatic hydrocarbons for the river sediments. This constitutes important information for the assessment of future impacts, since exploitation of shale gas and shale oil in these zones is currently undergoing a strong expansion. For the other 11 stations, a mixture of aliphatic hydrocarbons of petrogenic and biogenic origin was observed. The spatial distribution reflects local inputs of these pollutants with a significant increase in concentrations in the lower course, where two major cities are located. The highest values of total aliphatic hydrocarbons were found in this sector which, in turn, was the only one where individual PAHs were detected.

  14. Distribution and geological sources of selenium in environmental materials in Taoyuan County, Hunan Province, China.

    PubMed

    Ni, Runxiang; Luo, Kunli; Tian, Xinglei; Yan, Songgui; Zhong, Jitai; Liu, Maoqiu

    2016-06-01

    The selenium (Se) distribution and geological sources in Taoyuan County, China, were determined by using hydride generation atomic fluorescence spectrometry on rock, soil, and food crop samples collected from various geological regions within the county. The results show Se contents of 0.02-223.85, 0.18-7.05, and 0.006-5.374 mg/kg in the rock, soil, and food crops in Taoyuan County, respectively. The region showing the highest Se content is western Taoyuan County, amid the Lower Cambrian and Ediacaran black rock series outcrop, which is distributed in a west-to-east band. A relatively high-Se environment is found in the central and southern areas of Taoyuan County, where Quaternary limnetic sedimentary facies and Neoproterozoic metamorphic volcanic rocks outcrop, respectively. A relatively low-Se environment includes the central and northern areas of Taoyuan County, where Middle and Upper Cambrian and Ordovician carbonate rocks and Cretaceous sandstones and conglomerates outcrop. These results indicate that Se distribution in Taoyuan County varies markedly and is controlled by the Se content of the bedrock. The Se-enriched Lower Cambrian and Ediacaran black rock series is the primary source of the seleniferous environment observed in Taoyuan County. Potential seleniferous environments are likely to be found near outcrops of the Lower Cambrian and Ediacaran black rock series in southern China.

  15. Wall-loss distribution of charge breeding ions in an electron cyclotron resonance ion source

    SciTech Connect

    Jeong, S. C.; Oyaizu, M.; Imai, N.; Hirayama, Y.; Ishiyama, H.; Miyatake, H.; Niki, K.; Okada, M.; Watanabe, Y. X.; Otokawa, Y.; Osa, A.; Ichikawa, S.

    2011-03-15

    The ion loss distribution in an electron cyclotron resonance ion source (ECRIS) was investigated to understand the element dependence of the charge breeding efficiency in an electron cyclotron resonance (ECR) charge breeder. The radioactive {sup 111}In{sup 1+} and {sup 140}Xe{sup 1+} ions (typical nonvolatile and volatile elements, respectively) were injected into the ECR charge breeder at the Tokai Radioactive Ion Accelerator Complex to breed their charge states. Their respective residual activities on the sidewall of the cylindrical plasma chamber of the source were measured after charge breeding as functions of the azimuthal angle and longitudinal position and two-dimensional distributions of ions lost during charge breeding in the ECRIS were obtained. These distributions had different azimuthal symmetries. The origins of these different azimuthal symmetries are qualitatively discussed by analyzing the differences and similarities in the observed wall-loss patterns. The implications for improving the charge breeding efficiencies of nonvolatile elements in ECR charge breeders are described. The similarities represent universal ion loss characteristics in an ECR charge breeder, which are different from the loss patterns of electrons on the ECRIS wall.

  16. Preparation of Radium and Other Spent Sealed Sources Containing Long-Lived Radionuclides for Long-Term Storage

    SciTech Connect

    Arustamov, A. E.; Ojovan, M. I.; Semenov, K. N.; Sobolev, I. A.

    2003-02-26

    At present, the management of radioactive waste containing long-lived alpha radionuclides is one of the most serious problems. The complexity of managing this kind of waste is due to the extended half-lives of these radionuclides: it is difficult to predict not only the long-term behavior of the waste packages, but also the conditions of the containing geological medium. Spent sources containing long-lived radionuclides are not suitable for disposal in shallow ground repositories. They must be temporarily stored in special engineered structures. Long-term storage of these sources requires additional measures to diminish the risk of incidents involving them.

  17. Estimation of marine source-term following Fukushima Dai-ichi accident.

    PubMed

    Bailly du Bois, P; Laguionie, P; Boust, D; Korsakissok, I; Didier, D; Fiévet, B

    2012-12-01

    Contamination of the marine environment following the accident at the Fukushima Dai-ichi nuclear power plant represented the most important artificial radioactive release flux into the sea ever known. The radioactive marine pollution came from atmospheric fallout onto the ocean, direct release of contaminated water from the plant and transport of radioactive pollution from leaching through contaminated soil. In the immediate vicinity of the plant (less than 500 m), the seawater concentrations reached 68,000 Bq.L(-1) for (134)Cs and (137)Cs, and exceeded 100,000 Bq.L(-1) for (131)I in early April. Due to the accidental context of the releases, it is difficult to estimate the total amount of radionuclides introduced into seawater from data obtained in the plant. An evaluation is proposed here, based on measurements performed in seawater for monitoring purposes. Quantities of (137)Cs in seawater in a 50-km area around the plant were calculated from interpolation of seawater measurements. The environmental halftime of seawater in this area is deduced from the time-evolution of these quantities. This halftime appeared constant at about 7 days for (137)Cs. These data allowed estimation of the amount of principal marine inputs and their evolution in time: a total of 27 PBq (12 PBq-41 PBq) of (137)Cs was estimated up to July 18. Even though this main release may be followed by residual inputs from the plant, river runoff and leakage from deposited sediments, it represents the principal source-term that must be accounted for in future studies of the consequences of the accident on marine systems. The (137)Cs from Fukushima will remain detectable for several years throughout the North Pacific, and the (137)Cs/(134)Cs ratio will be a tracer for future studies.
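
The reported ~7-day environmental halftime implies a simple first-order removal model for the coastal (137)Cs inventory; a minimal sketch (illustrative, not the authors' code):

```python
def remaining_fraction(days, halftime_days=7.0):
    """Fraction of the initial 137Cs inventory remaining in the 50-km
    coastal box after `days`, assuming first-order removal with the
    reported ~7-day environmental halftime (transport and dilution,
    not radioactive decay)."""
    return 0.5 ** (days / halftime_days)

# After four halftimes (~28 days) only ~6% of the peak inventory remains
for d in (0, 7, 14, 28):
    print(d, round(remaining_fraction(d), 3))
```

This rapid flushing is consistent with the main marine input being confined to a short period, while long-range detectability relies on the dispersed (137)Cs rather than the local inventory.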

  18. The development of a realistic source term for sodium-cooled fast reactors : assessment of current status and future needs.

    SciTech Connect

    LaChance, Jeffrey L.; Phillips, Jesse; Parma, Edward J., Jr.; Olivier, Tara Jean; Middleton, Bobby D.

    2011-06-01

    Sodium-cooled fast reactors (SFRs) continue to be proposed and designed throughout the United States and the world. Although the number of SFRs actually operating has declined substantially since the 1980s, a significant interest in advancing these types of reactor systems remains. Of the many issues associated with the development and deployment of SFRs, one of high regulatory importance is the source term to be used in the siting of the reactor. A substantial amount of modeling and experimental work has been performed over the past four decades on accident analysis, sodium coolant behavior, and radionuclide release for SFRs. The objective of this report is to aid in determining the gaps and issues related to the development of a realistic, mechanistically derived source term for SFRs. This report will allow the reader to become familiar with the severe accident source term concept and gain a broad understanding of the current status of the models and experimental work. Further, this report will allow insight into future work, in terms of both model development and experimental validation, which is necessary in order to develop a realistic source term for SFRs.

  19. Semi-implicit and fully implicit shock-capturing methods for hyperbolic conservation laws with stiff source terms

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Shinn, J. L.

    1986-01-01

    Some numerical aspects of finite-difference algorithms for nonlinear multidimensional hyperbolic conservation laws with stiff nonhomogeneous (source) terms are discussed. If the stiffness is entirely dominated by the source term, a semi-implicit shock-capturing method is proposed, provided that the Jacobian of the source terms possesses certain properties. The proposed semi-implicit method can be viewed as a variant of the Bussing and Murman point-implicit scheme with a more appropriate numerical dissipation for the computation of strong shock waves. However, if the stiffness is not solely dominated by the source terms, a fully implicit method would be a better choice. The situation is more complicated for problems in more than one dimension, and the presence of stiff source terms further complicates the solution procedures for alternating direction implicit (ADI) methods. Several alternatives are discussed. The primary motivation for constructing these schemes was to address thermally and chemically nonequilibrium flows in the hypersonic regime. Due to the unique structure of the eigenvalues and eigenvectors for fluid flows of this type, the computation can be simplified, thus providing a more efficient solution procedure than one might have anticipated.
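
The point-implicit idea (treating the source-term Jacobian implicitly while keeping the rest explicit) can be illustrated on a scalar stiff relaxation equation du/dt = -lam*(u - u_eq); this toy model and its parameter values are assumptions for illustration, not the paper's equations:

```python
def point_implicit_step(u, dt, lam, u_eq=1.0):
    """One point-implicit (linearized backward-Euler) update for the stiff
    relaxation source term S(u) = -lam*(u - u_eq). The Jacobian dS/du = -lam
    is treated implicitly: (1 - dt*dS/du) * du = dt * S(u^n)."""
    s = -lam * (u - u_eq)
    return u + dt * s / (1.0 + lam * dt)

def explicit_step(u, dt, lam, u_eq=1.0):
    """Plain forward-Euler update, unstable when lam*dt >> 1."""
    return u + dt * (-lam * (u - u_eq))

lam, dt = 1.0e4, 0.01     # stiff regime: lam*dt = 100 >> 1
u_imp = u_exp = 0.0
for _ in range(50):
    u_imp = point_implicit_step(u_imp, dt, lam)
    u_exp = explicit_step(u_exp, dt, lam)
print(round(u_imp, 6))    # relaxes stably toward u_eq = 1.0
print(abs(u_exp) > 1e10)  # forward Euler diverges at this time step
```

The implicit treatment of the source Jacobian removes the stiff stability limit on dt without requiring a fully implicit treatment of the convective terms, which is the trade-off the abstract describes.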

  20. S-values for bone surfaces with a source distributed homogeneously in bone volume or with a surface deposited source

    SciTech Connect

    Johansson, L.

    1981-06-01

    The epithelial cells close to the bone surfaces are a radiosensitive part of the skeleton. For the purpose of calculating the effective dose equivalent, the International Commission on Radiological Protection (ICRP) has therefore given the bone surfaces its own weighting factor. ICRP has also recommended absorbed fractions for the bone surfaces and red marrow from alpha and beta radiation to be used for dosimetry of radionuclides in bone. The fractions are given for a source in cortical or trabecular bone, either homogeneously distributed in the bone volume or surface deposited. For gamma radiation ICRP recommends that the MIRD specific absorbed fractions for total bone should also be used for the bone surfaces. The MIRD 11 publication does not include bone surfaces as an organ; therefore, we have calculated the S-values on the basis of the ICRP recommendation and with the use of the radionuclide decay schemes and the MIRD 11 S-values. The calculated values are also compared with S-values for total mineral bone as the target organ.

  1. DOES SIZE MATTER? THE UNDERLYING INTRINSIC SIZE DISTRIBUTION OF RADIO SOURCES AND IMPLICATIONS FOR UNIFICATION BY ORIENTATION

    SciTech Connect

    DiPompeo, M. A.; Runnoe, J. C.; Myers, A. D.; Boroson, T. A.

    2013-09-01

    Unification by orientation is a ubiquitous concept in the study of active galactic nuclei. A gold standard of the orientation paradigm is the hypothesis that radio galaxies and radio-loud quasars are intrinsically the same, but are observed over different ranges of viewing angles. Historically, strong support for this model was provided by the projected sizes of radio structure in luminous radio galaxies, which were found to be significantly larger than those of quasars, as predicted due to simple geometric projection. Recently, this test of the simplest prediction of orientation-based models has been revisited with larger samples that cover wider ranges of fundamental properties, and no clear difference in projected sizes of radio structure is found. Cast solely in terms of viewing angle effects, these results provide convincing evidence that unification of these objects solely through orientation fails. However, it is possible that conflicting results regarding the role orientation plays in our view of radio sources simply result from insufficient sampling of their intrinsic size distribution. We test this possibility using Monte Carlo simulations constrained by real sample sizes and properties. We develop models for the real intrinsic size distribution of radio sources, simulate observations by randomly sampling intrinsic sizes and viewing angles, and analyze how likely each sample is to support or dispute unification by orientation. We find that, while it is possible to reconcile conflicting results purely within a simple, orientation-based framework, it is very unlikely. We analyze the effects that sample size, relative numbers of radio galaxies and quasars, the critical angle that separates the two subclasses, and the shape of the intrinsic size distribution have on this type of test.
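
A Monte Carlo test of this kind can be sketched as follows; the lognormal intrinsic-size distribution, its parameters, and the 45° critical angle are illustrative assumptions, not the authors' fitted model:

```python
import math
import random

random.seed(0)

def sample_projected(n, theta_c_deg=45.0, mu=math.log(300.0), sigma=1.0):
    """Draw n radio sources with lognormal intrinsic sizes (assumed
    parameters, nominally in kpc) and isotropic viewing angles; classify
    a source as a quasar when its axis lies within theta_c of the line
    of sight, else as a radio galaxy. Returns the two lists of projected
    sizes L*sin(theta)."""
    quasars, galaxies = [], []
    for _ in range(n):
        L = random.lognormvariate(mu, sigma)
        theta = math.acos(random.random())   # isotropy: cos(theta) ~ U(0,1)
        size = L * math.sin(theta)
        (quasars if math.degrees(theta) < theta_c_deg else galaxies).append(size)
    return quasars, galaxies

def median(xs):
    s = sorted(xs)
    return s[len(s) // 2]

q, g = sample_projected(20000)
# Pure geometric projection predicts galaxies appear larger than quasars
print(median(g) > median(q))
```

With a large sample the projection effect is obvious; the paper's point is that for realistic (small) sample sizes and broad intrinsic-size distributions, the two medians can fail to separate even when orientation unification holds.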

  2. An inverse modeling method to assess the source term of the Fukushima nuclear power plant accident using gamma dose rate observations

    NASA Astrophysics Data System (ADS)

    Saunier, O.; Mathieu, A.; Didier, D.; Tombette, M.; Quélo, D.; Winiarek, V.; Bocquet, M.

    2013-06-01

    The Chernobyl nuclear accident and more recently the Fukushima accident highlighted that the largest source of error on consequences assessment is the source term, including the time evolution of the release rate and its distribution between radioisotopes. Inverse modeling methods, which combine environmental measurements and atmospheric dispersion models, have proven efficient in assessing the source term in accidental situations (Gudiksen, 1989; Krysta and Bocquet, 2007; Stohl et al., 2012a; Winiarek et al., 2012). Most existing approaches are designed to use air sampling measurements (Winiarek et al., 2012) and some of them also use deposition measurements (Stohl et al., 2012a; Winiarek et al., 2013), but none of them uses dose rate measurements. However, it is the most widespread measurement system, and in the event of a nuclear accident, these data constitute the main source of measurements of the plume and radioactive fallout during releases. This paper proposes a method to use dose rate measurements as part of an inverse modeling approach to assess source terms. The method is proven efficient and reliable when applied to the accident at the Fukushima Daiichi nuclear power plant (FD-NPP). The emissions for the eight main isotopes 133Xe, 134Cs, 136Cs, 137Cs, 137mBa, 131I, 132I and 132Te have been assessed. Accordingly, 103 PBq of 131I, 35.5 PBq of 132I, 15.5 PBq of 137Cs and 12 100 PBq of noble gases were released. The events at FD-NPP (such as venting, explosions, etc.) known to have caused atmospheric releases are well identified in the retrieved source term. The estimated source term is validated by comparing simulations of atmospheric dispersion and deposition with environmental observations. Overall, the model-measurement agreement is good: across all monitoring locations, 80% of the simulated dose rates are within a factor of 2 of the observed values. Changes in dose rates over time have been properly reconstructed overall, especially

  3. Long distance measurement-device-independent quantum key distribution with entangled photon sources

    SciTech Connect

    Xu, Feihu; Qi, Bing; Liao, Zhongfa; Lo, Hoi-Kwong

    2013-08-05

    We present a feasible method that can make quantum key distribution (QKD) both ultra-long-distance and immune to all attacks in the detection system. This method is called measurement-device-independent QKD (MDI-QKD) with entangled photon sources in the middle. By proposing a model and simulating a QKD experiment, we find that MDI-QKD with one entangled photon source can tolerate 77 dB loss (367 km standard fiber) in the asymptotic limit and 60 dB loss (286 km standard fiber) in the finite-key case with state-of-the-art detectors. Our general model can also be applied to other non-QKD experiments involving entanglement and Bell state measurements.
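
The quoted loss-to-distance equivalences follow from a constant fiber attenuation coefficient; a one-line sketch, where the ~0.21 dB/km figure is inferred from the numbers quoted above rather than stated in the record:

```python
def fiber_km(loss_db, alpha_db_per_km=0.21):
    """Convert total channel loss (dB) to an equivalent length of standard
    telecom fiber. The 0.21 dB/km attenuation is the value implied by the
    quoted figures (77 dB ~ 367 km, 60 dB ~ 286 km)."""
    return loss_db / alpha_db_per_km

print(round(fiber_km(77)))  # ~367 km (asymptotic limit)
print(round(fiber_km(60)))  # ~286 km (finite-key case)
```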

  4. Anthropogenic {sup 129}I in western New York: Distribution, sources and pathways

    SciTech Connect

    Fehn, U.; Rao, U.; Teng, R.T.D.

    1995-12-01

    The present {sup 129}I concentration at the surface of the earth is dominated by releases from anthropogenic sources such as atmospheric weapons tests and nuclear facilities. We report here {sup 129}I concentrations in waters, plants and soils from Western New York and the surrounding areas. Values found in waters and plants outside Western New York are approximately three orders of magnitude above natural levels. Concentrations in Western New York are another order of magnitude higher with a distinct pattern of concentration pointing to the source at West Valley, a former reprocessing facility, which continues to add significantly to the {sup 129}I budget in this region. Transport of anthropogenic {sup 129}I is clearly detectable in waters draining the West Valley area but the overall distribution of this isotope indicates also a significant component of aerial transport. The continued presence of bomb-related {sup 129}I demonstrates a long residence time of this isotope in biosphere and hydrosphere.

  5. Distribution, richness, quality, and thermal maturity of source rock units on the North Slope of Alaska

    USGS Publications Warehouse

    Peters, K.E.; Bird, K.J.; Keller, M.A.; Lillis, P.G.; Magoon, L.B.

    2003-01-01

    Four source rock units on the North Slope were identified, characterized, and mapped to better understand the origin of petroleum in the area: Hue-gamma ray zone (Hue-GRZ), pebble shale unit, Kingak Shale, and Shublik Formation. Rock-Eval pyrolysis, total organic carbon analysis, and well logs were used to map the present-day thickness, organic quantity (TOC), quality (hydrogen index, HI), and thermal maturity (Tmax) of each unit. To map these units, we screened all available geochemical data for wells in the study area and assumed that the top and bottom of the oil window occur at Tmax of ~440°C and ~470°C, respectively. Based on several assumptions related to carbon mass balance and regional distributions of TOC, the present-day source rock quantity and quality maps were used to determine the extent of fractional conversion of the kerogen to petroleum and to map the original organic richness prior to thermal maturation.

  6. Key-rate enhancement using qutrit states for quantum key distribution with askew aligned sources

    NASA Astrophysics Data System (ADS)

    Jo, Yonggi; Son, Wonmin

    2016-11-01

    It is known that measurement-device-independent quantum key distribution (MDI-QKD) provides ultimate security from all types of side-channel attack on detectors, at the expense of a low key rate. In the present study, we propose MDI-QKD using three-dimensional quantum states and show that the protocol improves the secret key rate under the analysis of mismatched-basis statistics. Specifically, we analyze the security of the 3d-MDI-QKD protocol with askew-aligned sources, meaning that the original sources contain unwanted states instead of the expected one. We evaluate the secret key rate of the protocol and identify the regime in which the key rate is higher than that of the qubit-based MDI-QKD protocol.

  7. Photon-monitoring attack on continuous-variable quantum key distribution with source in middle

    NASA Astrophysics Data System (ADS)

    Wang, Yijun; Huang, Peng; Guo, Ying; Huang, Dazu

    2014-12-01

    Motivated by the fact that a non-Gaussian operation may increase the entanglement of an entangled system, we suggest a photon-monitoring attack strategy on entanglement-based (EB) continuous-variable quantum key distribution (CVQKD) using photon subtraction operations, where the entangled source originates from the center instead of from one of the legal participants. We show that an eavesdropper, Eve, can steal substantial information from the participants after intercepting the partial beams with the photon-monitoring attack strategy. The structure of the proposed CVQKD protocol is useful for simply analyzing how quantum loss in imperfect channels can decrease the performance of the CVQKD protocol. The proposed attack strategy can be implemented under current technology, where a newly developed and versatile non-Gaussian operation can be employed with the entangled source in the middle in order to access a large amount of information in the EB CVQKD protocol, as well as in the prepare-and-measure (PM) CVQKD protocol.

  8. Fast optical source for quantum key distribution based on semiconductor optical amplifiers.

    PubMed

    Jofre, M; Gardelein, A; Anzolin, G; Amaya, W; Capmany, J; Ursin, R; Peñate, L; Lopez, D; San Juan, J L; Carrasco, J A; Garcia, F; Torcal-Milla, F J; Sanchez-Brea, L M; Bernabeu, E; Perdigues, J M; Jennewein, T; Torres, J P; Mitchell, M W; Pruneri, V

    2011-02-28

    A novel integrated optical source capable of emitting faint pulses with different polarization states and with different intensity levels at 100 MHz has been developed. The source relies on a single laser diode followed by four semiconductor optical amplifiers and thin film polarizers, connected through a fiber network. The use of a single laser ensures high level of indistinguishability in time and spectrum of the pulses for the four different polarizations and three different levels of intensity. The applicability of the source is demonstrated in the lab through a free space quantum key distribution experiment which makes use of the decoy state BB84 protocol. We achieved a lower bound secure key rate of the order of 3.64 Mbps and a quantum bit error ratio as low as 1.14×10⁻² while the lower bound secure key rate became 187 bps for an equivalent attenuation of 35 dB. To our knowledge, this is the fastest polarization encoded QKD system which has been reported so far. The performance, reduced size, low power consumption and the fact that the components used can be space qualified make the source particularly suitable for secure satellite communication.

  9. Affordable non-traditional source data mining for context assessment to improve distributed fusion system robustness

    NASA Astrophysics Data System (ADS)

    Bowman, Christopher; Haith, Gary; Steinberg, Alan; Morefield, Charles; Morefield, Michael

    2013-05-01

    This paper describes methods to affordably improve the robustness of distributed fusion systems by opportunistically leveraging non-traditional data sources. Adaptive methods help find relevant data, create models, and characterize the model quality. These methods can also measure the conformity of this non-traditional data with fusion system products, including situation modeling and mission impact prediction. Non-traditional data can improve the quantity, quality, availability, timeliness, and diversity of the baseline fusion system sources and therefore can improve prediction and estimation accuracy and robustness at all levels of fusion. Techniques are described that automatically learn to characterize and search non-traditional contextual data to enable operators to integrate the data with high-level fusion systems and ontologies. These techniques apply the extension of the Data Fusion & Resource Management Dual Node Network (DNN) technical architecture at Level 4. The DNN architecture supports effective assessment and management of the expanded portfolio of data sources, entities of interest, models, and algorithms, including data pattern discovery and context conformity. Affordable model-driven and data-driven data mining methods to discover unknown models from non-traditional and `big data' sources are used to automatically learn entity behaviors and correlations with fusion products [14, 15]. This paper describes our context assessment software development and a demonstration that compares context assessment of non-traditional data against an intelligence, surveillance and reconnaissance fusion product based upon an IED POIs workflow.

  10. Plans for a Collaboratively Developed Distributed Control System for the Spallation Neutron Source

    SciTech Connect

    DeVan, W.R.; Gurd, D.P.; Hammonds, J.; Lewis, S.A.; Smith, J.D.

    1999-03-29

    The Spallation Neutron Source (SNS) is an accelerator-based pulsed neutron source to be built in Oak Ridge, Tennessee. The facility has five major sections: a "front end" consisting of a 65 keV H{sup -} ion source followed by a 2.5 MeV RFQ; a 1 GeV linac; a storage ring; a 1 MW spallation neutron target (upgradeable to 2 MW); the conventional facilities to support these machines; and a suite of neutron scattering instruments to exploit them. These components will be designed and implemented by five collaborating institutions: Lawrence Berkeley National Laboratory (Front End), Los Alamos National Laboratory (Linac); Brookhaven National Laboratory (Storage Ring); Argonne National Laboratory (Instruments); and Oak Ridge National Laboratory (Neutron Source and Conventional Facilities). It is proposed to implement a fully integrated control system for all aspects of this complex. The system will be developed collaboratively, with some degree of local autonomy for distributed systems, but centralized accountability. Technical integration will be based upon the widely-used EPICS control system toolkit, and a complete set of hardware and software standards. The scope of the integrated control system includes site-wide timing and synchronization, networking and machine protection. This paper discusses the technical and organizational issues of planning a large control system to be developed collaboratively at five different institutions, the approaches being taken to address those issues, as well as some of the particular technical challenges for the SNS control system.

  11. Fast optical source for quantum key distribution based on semiconductor optical amplifiers

    NASA Astrophysics Data System (ADS)

    Jofre, M.; Gardelein, A.; Anzolin, G.; Amaya, W.; Campmany, J.; Ursin, R.; Penate, L.; Lopez, D.; San Juan, J. L.; Carrasco, J. A.; Garcia, F.; Torcal-Milla, F. J.; Sanchez-Brea, L. M.; Bernabeu, E.; Perdigues, J. M.; Jennewein, T.; Torres, J. P.; Mitchell, M. W.; Pruneri, V.

    2011-02-01

    A novel integrated optical source capable of emitting faint pulses with different polarization states and with different intensity levels at 100 MHz has been developed. The source relies on a single laser diode followed by four semiconductor optical amplifiers and thin film polarizers, connected through a fiber network. The use of a single laser ensures a high level of indistinguishability in time and spectrum of the pulses for the four different polarizations and three different levels of intensity. The applicability of the source is demonstrated in the lab through a free space quantum key distribution experiment which makes use of the decoy state BB84 protocol. We achieved a lower bound secure key rate of the order of 3.64 Mbps and a quantum bit error ratio as low as $1.14\times 10^{-2}$, while the lower bound secure key rate became 187 bps for an equivalent attenuation of 35 dB. To our knowledge, this is the fastest polarization encoded QKD system which has been reported so far. The performance, reduced size, low power consumption and the fact that the components used can be space qualified make the source particularly suitable for secure satellite communication.
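
The secure key rates quoted above come from a bound of this general shape. Below is a minimal sketch of the standard GLLP/decoy-state lower bound on the BB84 key fraction, not the paper's exact analysis; all parameter values are illustrative, and `f` is the error-correction inefficiency (1.16 is a common benchmark value):

```python
import math

def h2(x):
    """Binary Shannon entropy H2(x) in bits."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def bb84_rate(q, Q1, e1, Qmu, Emu, f=1.16):
    """GLLP-style lower bound on the decoy-state BB84 key fraction:
    the single-photon gain Q1 earns key after privacy amplification at
    H2(e1), while error correction spends f*H2(Emu) on every detected
    signal pulse Qmu. q is the sifting/protocol efficiency factor."""
    return q * (Q1 * (1.0 - h2(e1)) - Qmu * f * h2(Emu))

# illustrative numbers only, not the paper's measured gains and error rates
rate = bb84_rate(q=0.5, Q1=0.1, e1=0.0, Qmu=0.1, Emu=0.0)
```

Multiplying such a key fraction by the 100 MHz pulse rate gives key rates in the Mbps range at low loss, which is the regime the record describes.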

  12. Finite-key security analysis of quantum key distribution with imperfect light sources

    DOE PAGES

    Mizutani, Akihiro; Curty, Marcos; Lim, Charles Ci Wen; ...

    2015-09-09

    In recent years, the gap between theory and practice in quantum key distribution (QKD) has been significantly narrowed, particularly for QKD systems with arbitrarily flawed optical receivers. The status for QKD systems with imperfect light sources is however less satisfactory, in the sense that the resulting secure key rates are often overly dependent on the quality of state preparation. This is especially the case when the channel loss is high. Very recently, to overcome this limitation, Tamaki et al proposed a QKD protocol based on the so-called 'rejected data analysis', and showed that its security in the limit of infinitely long keys is almost independent of any encoding flaw in the qubit space, and that the protocol is compatible with the decoy state method. Here, as a step towards practical QKD, we show that a similar conclusion is reached in the finite-key regime, even when the intensity of the light source is unstable. More concretely, we derive security bounds for a wide class of realistic light sources and show that the bounds are also efficient in the presence of high channel loss. Our results strongly suggest the feasibility of long distance provably secure communication with imperfect light sources.

  13. Space-bound optical source for satellite-ground decoy-state quantum key distribution.

    PubMed

    Li, Yang; Liao, Sheng-Kai; Chen, Xie-Le; Chen, Wei; Cheng, Kun; Cao, Yuan; Yong, Hai-Lin; Wang, Tao; Yang, Hua-Qiang; Liu, Wei-Yue; Yin, Juan; Liang, Hao; Peng, Cheng-Zhi; Pan, Jian-Wei

    2014-11-03

    Satellite-ground quantum key distribution has embarked on the stage of engineering implementation, and a global quantum-secured network is imminent in the foreseeable future. As one payload of the quantum-science satellite, which will be ready before the end of 2015, we report our recent work on the space-bound decoy-state optical source. Specialized 850 nm laser diodes have been manufactured, and the integrated optical source has been completed based on these LDs. The weak coherent pulses produced by our optical source feature a high clock rate of 100 MHz, intensity stability of 99.5%, high polarization fidelity of 99.7% and phase randomization. A series of space environment tests have been conducted to verify the optical source's performance, and the results are satisfactory. The emulated final secure keys are about 120 kbits during one usable pass of the low Earth orbit satellite. This work takes a significant step forward towards satellite-ground QKD and the global quantum-secured network.

  15. Photoelectron kinetic and angular distributions for the ionization of aligned molecules using a HHG source

    NASA Astrophysics Data System (ADS)

    Rouzée, Arnaud; Kelkensberg, Freek; Kiu Siu, Wing; Gademann, Georg; Lucchese, Robert R.; Vrakking, Marc J. J.

    2012-04-01

    We present an experimental and theoretical investigation of the angular distributions of electrons ejected in aligned molecules by extreme ultra-violet ionization using a high harmonic generation (HHG) source. Impulsive alignment in O2, N2 and CO molecules was achieved using a near-IR laser pulse and the photoelectron angular distribution after ionization by a fs harmonic comb composed of harmonic H11 to H29 (17.5-46 eV) was recorded at the maximum of both alignment and anti-alignment. The experiment reveals signatures that are specific for the electronic orbitals that are ionized as well as the onset of the influence of the molecular structure and is well reproduced by theoretical calculations based on the multichannel Schwinger configuration interaction method.

  16. Electric Field Distribution Excited by Indoor Radio Source for Exposure Compliance Assessment

    NASA Astrophysics Data System (ADS)

    Higashiyama, Junji; Tarusawa, Yoshiaki

    Correction factors are presented for estimating the RF electromagnetic field strength in the compliance assessment of human exposure from an indoor RF radio source in the frequency range from 800 MHz to 3.5 GHz. The correction factors are derived from the increase in the spatial average electric field strength distribution, which is dependent on the building materials. The spatial average electric field strength is calculated using relative complex dielectric constants of building materials. The relative complex dielectric constant is obtained through measurement of the transmission and reflection losses for eleven kinds of building materials used in business office buildings and single family dwellings.

  17. An open source platform for multi-scale spatially distributed simulations of microbial ecosystems

    SciTech Connect

    Segre, Daniel

    2014-08-14

    The goal of this project was to develop a tool for facilitating simulation, validation and discovery of multiscale dynamical processes in microbial ecosystems. This led to the development of an open-source software platform for Computation Of Microbial Ecosystems in Time and Space (COMETS). COMETS performs spatially distributed time-dependent flux balance based simulations of microbial metabolism. Our plan involved building the software platform itself, calibrating and testing it through comparison with experimental data, and integrating simulations and experiments to address important open questions on the evolution and dynamics of cross-feeding interactions between microbial species.

  18. A multiple step random walk Monte Carlo method for heat conduction involving distributed heat sources

    NASA Astrophysics Data System (ADS)

    Naraghi, M. H. N.; Chung, B. T. F.

    1982-06-01

    A multiple step fixed random walk Monte Carlo method for solving heat conduction in solids with distributed internal heat sources is developed. In this method, the probability that a walker reaches a point a few steps away is calculated analytically and is stored in the computer. Instead of moving to the immediate neighboring point the walker is allowed to jump several steps further. The present multiple step random walk technique can be applied to both conventional Monte Carlo and the Exodus methods. Numerical results indicate that the present method compares well with finite difference solutions while the computation speed is much faster than that of single step Exodus and conventional Monte Carlo methods.
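
The fixed random walk idea summarized above can be sketched in a few lines. This is a minimal single-step variant on a unit square with zero boundary values and a uniform source; all names and parameters are illustrative, and the paper's multiple-step jumps and Exodus variant are omitted:

```python
import random

def poisson_walk(grid_n, source, walks=8000, seed=1):
    """Estimate u at the domain centre for -laplacian(u) = source on the
    unit square with u = 0 on the boundary, using fixed random walks on
    a grid_n x grid_n lattice. Each visited node contributes its share
    of the distributed source term; the walk ends at the boundary."""
    rng = random.Random(seed)
    h = 1.0 / grid_n
    total = 0.0
    for _ in range(walks):
        i = j = grid_n // 2                       # start at the centre node
        acc = 0.0
        while 0 < i < grid_n and 0 < j < grid_n:  # until the boundary is hit
            acc += h * h * source / 4.0           # absorb the source term
            di, dj = rng.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
            i, j = i + di, j + dj
        total += acc                              # boundary value is zero
    return total / walks

u_centre = poisson_walk(grid_n=20, source=1.0)    # analytic value ~ 0.0737
```

In the multiple-step variant the inner loop would advance several lattice nodes per draw using precomputed transition probabilities, which is where the reported speedup over single-step walks comes from.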

  19. Development of Approach for Long-Term Management of Disused Sealed Radioactive Sources - 13630

    SciTech Connect

    Kinker, M.; Reber, E.; Mansoux, H.; Bruno, G.

    2013-07-01

    Radioactive sources are used widely throughout the world in a variety of medical, industrial, research and military applications. When such radioactive sources are no longer used and are not intended to be used for the practice for which an authorization was granted, they are designated as 'disused sources'. Whether or not appropriate controls are in place during the useful life of a source, the end of this useful life is often a turning point after which it is more difficult to ensure the safety and security of the source over time. For various reasons, many disused sources cannot be returned to the manufacturer or the supplier for reuse or recycling. When these attempts fail, disused sources should be declared as radioactive waste and should be managed as such, in compliance with relevant international legal instruments and safety standards. However, disposal remains an unresolved issue in many countries, due in part to limited public acceptance, insufficient funding, and a lack of practical examples of strategies for determining suitable disposal options. As a result, disused sources are often stored indefinitely at the facilities where they were once used. In order to prevent disused sources from becoming orphan sources, each country must develop and implement a comprehensive waste management strategy that includes disposal of disused sources. The International Atomic Energy Agency (IAEA) fosters international cooperation between countries and encourages the development of a harmonized 'cradle to grave' approach to managing sources consistent with international legal instruments, IAEA safety standards, and international good practices. This 'cradle to grave' approach requires the development of a national policy and implementing strategy, an adequate legal and regulatory framework, and adequate resources and infrastructure that cover the entire life cycle, from production and use of radioactive sources to disposal. (authors)

  20. The occurrence and distribution of a group of organic micropollutants in Mexico City's water sources.

    PubMed

    Félix-Cañedo, Thania E; Durán-Álvarez, Juan C; Jiménez-Cisneros, Blanca

    2013-06-01

    The occurrence and distribution of a group of 17 organic micropollutants in surface and groundwater sources from Mexico City was determined. Water samples were taken from 7 wells, 4 dams and 15 tanks where surface and groundwater are mixed and stored before distribution. Results evidenced the occurrence of seven of the target compounds in groundwater: salicylic acid, diclofenac, di-2-ethylhexylphthalate (DEHP), butylbenzylphthalate (BBP), triclosan, bisphenol A (BPA) and 4-nonylphenol (4-NP). In surface water, 11 target pollutants were detected: the same found in groundwater as well as naproxen, ibuprofen, ketoprofen and gemfibrozil. In groundwater, concentration ranges of salicylic acid, 4-NP and DEHP, the most frequently found compounds, were 1-464, 1-47 and 19-232 ng/L, respectively; while in surface water, these ranges were 29-309, 89-655 and 75-2,282 ng/L, respectively. Eleven target compounds were detected in mixed water. Concentrations in mixed water were higher than those determined in groundwater but lower than those detected in surface water. Unlike in ground and surface water, the pesticide 2,4-D was found in mixed water, indicating that some pollutants can reach areas where they are not originally present in the local water sources. Concentrations of the organic micropollutants found in this study were similar to or lower than those reported for water sources in developed countries. This study provides information that enriches the state of the art on the occurrence of organic micropollutants in water sources worldwide, notably in megacities of developing countries.

  1. On Road Study of Colorado Front Range Greenhouse Gases Distribution and Sources

    NASA Astrophysics Data System (ADS)

    Petron, G.; Hirsch, A.; Trainer, M. K.; Karion, A.; Kofler, J.; Sweeney, C.; Andrews, A.; Kolodzey, W.; Miller, B. R.; Miller, L.; Montzka, S. A.; Kitzis, D. R.; Patrick, L.; Frost, G. J.; Ryerson, T. B.; Robers, J. M.; Tans, P.

    2008-12-01

    The Global Monitoring Division and Chemical Sciences Division of the NOAA Earth System Research Laboratory have teamed up over the summer 2008 to experiment with a new measurement strategy to characterize greenhouse gases distribution and sources in the Colorado Front Range. Combining expertise in greenhouse gases measurements and in local to regional scales air quality study intensive campaigns, we have built the 'Hybrid Lab'. A continuous CO2 and CH4 cavity ring down spectroscopic analyzer (Picarro, Inc.), a CO gas-filter correlation instrument (Thermo Environmental, Inc.) and a continuous UV absorption ozone monitor (2B Technologies, Inc., model 202SC) have been installed securely onboard a 2006 Toyota Prius Hybrid vehicle with an inlet bringing in outside air from a few meters above the ground. To better characterize point and distributed sources, air samples were taken with a Portable Flask Package (PFP) for later multiple species analysis in the lab. A GPS unit hooked up to the ozone analyzer and another one installed on the PFP kept track of our location allowing us to map measured concentrations on the driving route using Google Earth. The Hybrid Lab went out for several drives in the vicinity of the NOAA Boulder Atmospheric Observatory (BAO) tall tower located in Erie, CO and covering areas from Boulder, Denver, Longmont, Fort Collins and Greeley. Enhancements in CO2, CO and destruction of ozone mainly reflect emissions from traffic. Methane enhancements however are clearly correlated with nearby point sources (landfill, feedlot, natural gas compressor ...) or with larger scale air masses advected from the NE Colorado, where oil and gas drilling operations are widespread. The multiple species analysis (hydrocarbons, CFCs, HFCs) of the air samples collected along the way bring insightful information about the methane sources at play. We will present results of the analysis and interpretation of the Hybrid Lab Front Range Study and conclude with perspectives.

  2. Ultrasonic field modeling in anisotropic materials by distributed point source method.

    PubMed

    Fooladi, Samaneh; Kundu, Tribikram

    2017-03-16

    DPSM (distributed point source method) is a modeling technique which is based on the concept of Green's function. First, a collection of source and target points are distributed over the solution domain based on the problem description and solution requirements. Then, the effects from all source points are superimposed at the location of every individual target point. Therefore, a successful implementation of DPSM entails an effective evaluation of Green's function between many pairs of source and target points. For homogeneous and isotropic media, the Green's function is available as a closed-form analytical expression. But for anisotropic solids, the evaluation of Green's function is more complicated and needs to be done numerically. Nevertheless, important applications such as defect detection in composite materials require anisotropic analysis. In this paper, the DPSM is used for ultrasonic field modeling in anisotropic materials. Considering the prohibitive computational cost of evaluating Green's function numerically for a large number of points, a technique called "windowing" is suggested which employs the repetitive pattern of points in DPSM in order to considerably reduce the number of evaluations of Green's function. In addition, different resolutions of numerical integration are used for computing Green's function corresponding to different distances in order to achieve a good balance between time and accuracy. The developed anisotropic DPSM model equipped with windowing technique and multi-resolution numerical integration is then applied to the problem of ultrasonic wave modeling in a plate immersed in a fluid. The transducers are placed in the fluid on both sides of the plate. First an isotropic plate is considered for the sake of verification and rough calibration of numerical integration. Then a composite plate is considered to demonstrate applicability and effectiveness of the developed model for simulating ultrasonic wave propagation in anisotropic
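
For the homogeneous, isotropic case mentioned in the abstract, the Green's-function superposition at the heart of DPSM reduces to a few lines. The sketch below uses the closed-form free-space Helmholtz kernel exp(ikr)/(4πr); the source layout and wavenumber are illustrative, and the anisotropic case would replace `greens` with a numerically evaluated kernel:

```python
import cmath
import math

def greens(src, tgt, k):
    """Free-space Green's function exp(ikr)/(4*pi*r) between two 3-D
    points, the closed form available only for isotropic media."""
    r = math.dist(src, tgt)
    return cmath.exp(1j * k * r) / (4.0 * math.pi * r)

def dpsm_field(sources, strengths, targets, k):
    """Superimpose the contribution of every point source at every
    target point, the core DPSM operation."""
    return [sum(a * greens(s, t, k) for s, a in zip(sources, strengths))
            for t in targets]

# toy transducer: a 3x3 patch of unit point sources radiating at k = 2*pi/lambda
k = 2 * math.pi / 1.5e-3          # wavelength ~1.5 mm (water near 1 MHz)
sources = [(x * 1e-3, y * 1e-3, 0.0) for x in range(3) for y in range(3)]
field = dpsm_field(sources, [1.0] * 9, [(1e-3, 1e-3, 10e-3)], k)
```

The "windowing" idea described in the abstract would cache `greens` over the repeating pattern of relative source-target offsets instead of recomputing it for every pair, which is what makes the numerical anisotropic kernel affordable.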

  3. Long term structural health monitoring by distributed fiber-optic sensing

    NASA Astrophysics Data System (ADS)

    Persichetti, G.; Minardo, A.; Testa, G.; Bernini, R.

    2012-04-01

    Structural health monitoring (SHM) systems make it possible to detect unusual structural behaviors that indicate a malfunction in the structure, i.e., an unhealthy structural condition. Depending on the complexity level of the SHM system, it can even perform the diagnosis and prognosis steps, supplying the information required to carry out the most suitable action. While standard SHM systems are based on the use of point sensors (e.g., strain gauges, crackmeters, tiltmeters, etc.), there is increasing interest in the use of distributed optical fiber sensors, in which the whole structure is monitored by a single optical fiber. In particular, distributed optical fiber sensors based on stimulated Brillouin scattering (SBS) permit detection of strain in a fully distributed manner, with a spatial resolution in the meter or submeter range and a sensing length that can reach tens of km. These features, which have no performance equivalent among traditional electronic sensors, are extremely valuable. When the sensors are suitably installed on the most significant structural members, such a system can lead to comprehension of the real static behaviour of the structure rather than merely measuring the pointwise strain level on one of its members. In addition, the sensor required by Brillouin technology is an inexpensive, telecom-grade optical fiber that shares most of the typical advantages of other fiber-optic sensors, such as high resistance to moisture and corrosion, immunity to electromagnetic fields and potential for long-term monitoring. In this work, we report the results of a test campaign performed on a concrete bridge. In particular, the tests were performed with a portable prototype based on Brillouin Optical Time-Domain Analysis (BOTDA) [1,2]. This type of analysis makes use of a pulsed laser light and a frequency-shifted continuous-wave (CW) laser light, launched simultaneously at the two opposite ends of an optical fiber

  4. Online data sources for regulation and remediation of chemical production, distribution, use and disposal

    SciTech Connect

    Snow, B.; Arnold, S.

    1995-12-01

    Environmental awareness is essential for today's corporations. Corporations have been held liable for the short-term and long-term effects of such chemicals as pharmaceuticals, agrochemicals and petrochemicals, to name a few. Furthermore, corporations have been held accountable for the disposal of wastes or by-products of chemical production. Environmental responsibility, whether mandated by government agencies or assumed voluntarily, is an economic factor in business operations. Remediation of environmental hazards on a voluntary basis has often created goodwill and a payoff for being socially responsible. Remediation can also result in new business opportunities or savings in production costs. To be environmentally aware and socially responsible, the chemist should know where to find regulatory information for countries worldwide. Using online data sources is an efficient method of seeking this information.

  5. Modeling mass transport in aquifers: The distributed-source problem. Research report, July 1988-June 1990

    SciTech Connect

    Serrano, S.E.

    1990-08-01

    A new methodology is presented to model the time and space evolution of groundwater variables in a system of aquifers when certain components of the model, such as the geohydrologic information, the boundary conditions, or the magnitude and variability of the sources or physical parameters, are uncertain and defined in stochastic terms. This facilitates a more realistic statistical representation of groundwater flow and groundwater pollution forecasting for either the saturated or the unsaturated zone. The method is based on applications of modern mathematics to the solution of the resulting stochastic transport equations. The procedure exhibits considerable advantages over existing stochastic modeling techniques.

  6. Long-term records of growth and distribution of conifers: Integration of paleoecology and physiological ecology

    SciTech Connect

    Graumlich, L.J.; Brubaker, L.B.

    1995-07-01

    The specter of human-induced alteration of atmospheric composition, and the associated changes in climate, have focused attention on how species, communities, and ecosystems respond to climate change. One source of information concerning this is the paleoecological record. Paleoecology offers insights in the nature of climate-vegetation interactions that derive from the well-documented response of plant communities to environmental changes of the past. The spatial and temporal resolution of paleoecological data sets has increased in recent decades, so that relatively detailed histories of conifer forests are available for much of North America and Europe. In addition, comparisons of records of past vegetation dynamics to paleoclimatic simulations by general circulation models have improved the understanding of the role of climate in governing past vegetation change. Several major findings of paleoresearch have importance to investigations of the effects of future climate change on the Earth's biota. These include the findings (1) that changing seasonality may result in unexpected vegetation patterns, (2) that climatic and vegetation changes can be rapid, with ecosystem-wide implications, and (3) that short-term, extreme events can have long-term effects on tree population structures. In this chapter, we discuss patterns of coniferous forest response to climatic variation at two temporal scales: the Late Quaternary and the last millennium. Our examples illustrate the wide range of potential responses of coniferous forests to climatic variation, and emphasize opportunities for applying paleoecological findings to questions of ecophysiological research. Although we rely largely on examples from North America, our conclusions are well-supported by parallel research results in Europe and Asia.

  7. 2dFLenS and KiDS: determining source redshift distributions with cross-correlations

    NASA Astrophysics Data System (ADS)

    Johnson, Andrew; Blake, Chris; Amon, Alexandra; Erben, Thomas; Glazebrook, Karl; Harnois-Deraps, Joachim; Heymans, Catherine; Hildebrandt, Hendrik; Joudaki, Shahab; Klaes, Dominik; Kuijken, Konrad; Lidman, Chris; Marin, Felipe A.; McFarland, John; Morrison, Christopher B.; Parkinson, David; Poole, Gregory B.; Radovich, Mario; Wolf, Christian

    2017-03-01

    We develop a statistical estimator to infer the redshift probability distribution of a photometric sample of galaxies from its angular cross-correlation in redshift bins with an overlapping spectroscopic sample. This estimator is a minimum-variance weighted quadratic function of the data: a quadratic estimator. This extends and modifies the methodology presented by McQuinn & White. The derived source redshift distribution is degenerate with the source galaxy bias, which must be constrained via additional assumptions. We apply this estimator to constrain source galaxy redshift distributions in the Kilo-Degree imaging survey through cross-correlation with the spectroscopic 2-degree Field Lensing Survey, presenting results first as a binned step-wise distribution in the range z < 0.8, and then building a continuous distribution using a Gaussian process model. We demonstrate the robustness of our methodology using mock catalogues constructed from N-body simulations, and comparisons with other techniques for inferring the redshift distribution.
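
The McQuinn & White-style quadratic estimator is more than a few lines, but the core proportionality it exploits can be shown as a toy. The sketch below assumes a constant, known galaxy bias (the very degeneracy the abstract notes) and ignores the weighting and covariance the real estimator handles; all names are illustrative:

```python
def nz_from_cross_correlation(w_cross, w_auto_spec):
    """Toy clustering-redshift estimate: with constant bias, the
    cross-correlation amplitude of the photometric sample with each
    spectroscopic redshift bin scales as n(z) * sqrt(w_auto) of that
    bin, so dividing out the spectroscopic clustering and normalising
    recovers the *shape* of n(z) (the bias degeneracy remains)."""
    raw = [wc / max(wa, 1e-12) ** 0.5 for wc, wa in zip(w_cross, w_auto_spec)]
    norm = sum(raw)
    return [r / norm for r in raw]

# illustrative amplitudes in three spectroscopic bins
nz = nz_from_cross_correlation([0.1, 0.4, 0.2], [1.0, 1.0, 1.0])
```

The quadratic estimator improves on this by weighting the data with its inverse covariance, which is what makes it minimum-variance.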

  8. Coupling between shallow water and solute flow equations: analysis and management of source terms in 2D

    NASA Astrophysics Data System (ADS)

    Murillo, J.; Burguete, J.; Brufau, P.; García-Navarro, P.

    2005-09-01

    A two-dimensional model for the simulation of solute transport by convection and diffusion in shallow water flow over a variable bottom is presented. It is based on a finite volume method over triangular unstructured grids. A first-order upwind technique is applied to the flux terms in both the flow and solute equations and to the bed slope source terms, while a centred discretization is applied to the diffusion and friction terms. The convenience of considering the fully coupled system of equations is indicated and the methodology is explained in detail. Three options are suggested and compared for dealing with the diffusion terms. Comparisons are carried out to show the performance, in terms of accuracy and computational effort, of the different options.
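
The first-order upwind flux treatment can be sketched in one dimension. This is a periodic scalar toy with constant velocity, not the paper's coupled 2-D scheme on unstructured grids; the donor-cell flux choice is the same idea:

```python
def upwind_advect(c, u, dx, dt, steps):
    """First-order upwind (donor-cell) update for dc/dt + u*dc/dx = 0
    on a periodic 1-D grid of cell averages c."""
    n = len(c)
    for _ in range(steps):
        # face flux between cell i and i+1 takes the upstream cell value
        flux = [u * c[i] if u > 0 else u * c[(i + 1) % n] for i in range(n)]
        c = [c[i] - dt / dx * (flux[i] - flux[i - 1]) for i in range(n)]
    return c

c0 = [1.0 if 4 <= i < 8 else 0.0 for i in range(32)]   # square solute pulse
c1 = upwind_advect(c0, u=1.0, dx=1.0, dt=0.5, steps=8)  # CFL = 0.5
```

With CFL = u*dt/dx <= 1 the update is a convex combination of neighbouring cell values, so the scheme is monotone and conservative, at the price of the numerical diffusion that motivates the paper's careful treatment of the physical diffusion terms.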

  9. Hanford tank residual waste – contaminant source terms and release models

    SciTech Connect

    Deutsch, William J.; Cantrell, Kirk J.; Krupka, Kenneth M.; Lindberg, Michael J.; Serne, R. Jeffrey

    2011-08-23

    Residual waste is expected to be left in 177 underground storage tanks after closure at the U.S. Department of Energy’s Hanford Site in Washington State (USA). In the long term, the residual wastes represent a potential source of contamination to the subsurface environment. Residual materials that cannot be completely removed during the tank closure process are being studied to identify and characterize the solid phases and estimate the release of contaminants from these solids to water that might enter the closed tanks in the future. As of the end of 2009, residual waste from five tanks has been evaluated. Residual wastes from adjacent tanks C-202 and C-203 have high U concentrations of 24 and 59 wt%, respectively, while residual wastes from nearby tanks C-103 and C-106 have low U concentrations of 0.4 and 0.03 wt%, respectively. Aluminum concentrations are high (8.2 to 29.1 wt%) in some tanks (C-103, C-106, and S-112) and relatively low (<1.5 wt%) in other tanks (C-202 and C-203). Gibbsite is a common mineral in tanks with high Al concentrations, while non-crystalline U-Na-C-O-P±H phases are common in the U-rich residual wastes from tanks C-202 and C-203. Iron oxides/hydroxides have been identified in all residual waste samples studied to date. Contaminant release from the residual wastes was studied by conducting batch leach tests using distilled deionized water, a Ca(OH)2-saturated solution, or a CaCO3-saturated water. Uranium release concentrations are highly dependent on waste and leachant compositions with dissolved U concentrations one or two orders of magnitude higher in the tests with high U residual wastes, and also higher when leached with the CaCO3-saturated solution than with the Ca(OH)2-saturated solution. Technetium leachability is not as strongly dependent on the concentration of Tc in the waste, and it appears to be slightly more leachable by the Ca(OH)2-saturated solution than by the CaCO3-saturated solution. In general, Tc is much less

  10. Spatiotemporal distribution and short-term trends of particulate matter concentration over China, 2006-2010.

    PubMed

    Yao, Ling; Lu, Ning

    2014-01-01

    Air quality problems caused by atmospheric particulates have drawn broad public concern worldwide. In this paper, the spatiotemporal distributions of fine particle (PM2.5) and inhalable particle (PM10) concentrations over China during 2006 to 2010, estimated with an artificial neural network (ANN), are discussed. High PM10 concentrations appear mainly in Xinjiang, Qinghai, Gansu, Ningxia, Hubei, and parts of Inner Mongolia. The distribution of PM2.5 concentration is consistent with China's three gradient terrains. The seasonal variations of PM2.5 and PM10 concentrations both indicate that they are higher in north China in spring and winter and lowest in summer. In autumn, most provinces in south China show high values. In particular, high PM2.5 concentrations appear in the southeast coastal cities, while high PM10 concentrations concentrate in the central regions of south China. On this basis, the seasonal Mann-Kendall test is used to analyze short-term trends. The results show significant changes of PM2.5 and PM10 concentrations in China over the past 5 years: most provinces show a decreasing tendency (3-5 μg/m³ per year for PM2.5 and 10-20 μg/m³ per year for PM10), while a fraction of provinces show increasing trends of 8-16 μg/m³ (PM2.5) and 16-30 μg/m³ (PM10). Simultaneously, PM2.5 population exposure is discussed in combination with population density-gridded data. Municipalities face much higher exposure levels than other provinces. Shanghai suffers the highest population exposure to PM2.5, followed by Beijing and then Tianjin and Jiangsu province. Most provincial capitals, such as Guangzhou, Nanjing, Chengdu, and Wuhan, face much higher exposure levels than other regions of their provinces. Moreover, the PM2.5 exposure situation is more serious in the southeast than in the northwest of the Beijing-Tianjin-Hebei region. Also, per capita PM2.5 concentration and population-weighted PM2.5 concentration are calculated. The former shows that
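
The seasonal Mann-Kendall test used in the study combines the pairwise-sign trend statistic within each season so that the annual cycle does not masquerade as a trend. A minimal sketch (the significance test via the variance of S is omitted; the period and series are illustrative):

```python
def mann_kendall_s(series):
    """Mann-Kendall S statistic: sum of signs of all later-minus-earlier
    pairs. Positive for an increasing trend, negative for decreasing;
    ties contribute zero."""
    s = 0
    n = len(series)
    for i in range(n - 1):
        for j in range(i + 1, n):
            diff = series[j] - series[i]
            s += (diff > 0) - (diff < 0)
    return s

def seasonal_mann_kendall_s(series, period=12):
    """Seasonal variant: compute S on each season's subsequence (e.g.
    all Januaries) and sum, removing the seasonal cycle's influence."""
    return sum(mann_kendall_s(series[k::period]) for k in range(period))

# a monotonically declining monthly series gives a strongly negative S
s_total = seasonal_mann_kendall_s([100 - t for t in range(24)], period=12)
```

A normalized version of S (divided by its variance under the no-trend null) yields the significance level; the slope magnitudes quoted in the abstract would come from a companion estimator such as Sen's slope.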

  11. Source attribution, physicochemical properties and spatial distribution of wet deposited mercury to the Ohio River valley

    NASA Astrophysics Data System (ADS)

    White, Emily Mae

    Mercury (Hg) is a bioaccumulative neurotoxin that is emitted from anthropogenic sources through fossil fuel combustion. The spatial scale of atmospheric transport prior to deposition is dependent on the chemical and physical form of Hg emissions, and has yet to be quantitatively defined. A five-year comprehensive Hg monitoring and source apportionment study was conducted in Steubenville, Ohio to investigate atmospheric Hg deposition to the highly industrialized Ohio River Valley region. Long-term event-precipitation measurements revealed a significant 30% to three-fold enrichment of Hg concentrations and total Hg deposition flux at the Steubenville site over other Great Lakes regional sites. Multivariate receptor models attributed ~70% of Hg wet deposition to local coal combustion sources. While local stagnant atmospheric conditions led to moderately high volume-weighted mean Hg concentrations and the majority of Hg wet deposition flux, regional transport from the Chicago/Gary and Detroit/Windsor urban areas also led to elevated precipitation Hg concentrations, but did not contribute significantly to the overall Hg deposition. The degree of local source influence was established during a summertime field intensive study in which a local-scale network of concurrently collected rain samples revealed that 42% of Hg wet deposition measured less than one km from the base of coal-fired utilities could be attributed to the adjacent source, corresponding to 170% Hg concentration enhancement over regionally representative precipitation collected concurrently. In addition, 69 ± 37% of the Hg collected in rain was in a soluble form, entering the precipitation as reactive gas phase or fine-particle-associated Hg. The Hg scavenging coefficient (rate of concentration reduction throughout a single precipitation event) was particularly low when compared to other trace elements. Furthermore, when compared to an upwind but non-locally source impacted site, the scavenging

  12. Short-term Music Training Enhances Complex, Distributed Neural Communication during Music and Linguistic Tasks

    PubMed Central

    Carpentier, Sarah M.; Moreno, Sylvain; McIntosh, Anthony R.

    2016-01-01

    Musical training is frequently associated with benefits to linguistic abilities, and recent focus has been placed on possible benefits of bilingualism to lifelong executive functions; however, the neural mechanisms for such effects are unclear. The aim of this study was to gain better understanding of the whole-brain functional effects of music and second-language training that could support such previously observed cognitive transfer effects. We conducted a 28-day longitudinal study of monolingual English-speaking 4- to 6-year-old children randomly selected to receive daily music or French language training, excluding weekends. Children completed passive EEG music note and French vowel auditory oddball detection tasks before and after training. Brain signal complexity was measured on source waveforms at multiple temporal scales as an index of neural information processing and network communication load. Comparing pretraining with posttraining, musical training was associated with increased EEG complexity at coarse temporal scales during the music and French vowel tasks in widely distributed cortical regions. Conversely, very minimal decreases in complexity at fine scales and trends toward coarse-scale increases were displayed after French training during the tasks. Spectral analysis failed to distinguish between training types and found overall theta (3.5–7.5 Hz) power increases after all training forms, with spatially fewer decreases in power at higher frequencies (>10 Hz). These findings demonstrate that musical training increased diversity of brain network states to support domain-specific music skill acquisition and music-to-language transfer effects. PMID:27243611

  13. Short-term Music Training Enhances Complex, Distributed Neural Communication during Music and Linguistic Tasks.

    PubMed

    Carpentier, Sarah M; Moreno, Sylvain; McIntosh, Anthony R

    2016-10-01

    Musical training is frequently associated with benefits to linguistic abilities, and recent focus has been placed on possible benefits of bilingualism to lifelong executive functions; however, the neural mechanisms for such effects are unclear. The aim of this study was to gain better understanding of the whole-brain functional effects of music and second-language training that could support such previously observed cognitive transfer effects. We conducted a 28-day longitudinal study of monolingual English-speaking 4- to 6-year-old children randomly selected to receive daily music or French language training, excluding weekends. Children completed passive EEG music note and French vowel auditory oddball detection tasks before and after training. Brain signal complexity was measured on source waveforms at multiple temporal scales as an index of neural information processing and network communication load. Comparing pretraining with posttraining, musical training was associated with increased EEG complexity at coarse temporal scales during the music and French vowel tasks in widely distributed cortical regions. Conversely, very minimal decreases in complexity at fine scales and trends toward coarse-scale increases were displayed after French training during the tasks. Spectral analysis failed to distinguish between training types and found overall theta (3.5-7.5 Hz) power increases after all training forms, with spatially fewer decreases in power at higher frequencies (>10 Hz). These findings demonstrate that musical training increased diversity of brain network states to support domain-specific music skill acquisition and music-to-language transfer effects.
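The abstract does not name the exact complexity measure; "brain signal complexity at multiple temporal scales" is commonly computed as multiscale sample entropy, so the sketch below assumes that measure. The parameters (m, r) and the white-noise test signal are illustrative, not the study's data.

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """SampEn: negative log of the ratio of (m+1)- to m-length template matches."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()
    def matches(mm):
        emb = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        total = 0
        for i in range(len(emb) - 1):
            # Chebyshev distance from template i to all later templates
            d = np.max(np.abs(emb[i + 1:] - emb[i]), axis=1)
            total += int(np.sum(d <= r))
        return total
    return -np.log(matches(m + 1) / matches(m))

def coarse_grain(x, scale):
    """Average consecutive non-overlapping windows of length `scale`."""
    n = (len(x) // scale) * scale
    return np.asarray(x[:n], dtype=float).reshape(-1, scale).mean(axis=1)

rng = np.random.default_rng(1)
waveform = rng.standard_normal(1500)        # stand-in for one source waveform
mse = [sample_entropy(coarse_grain(waveform, s)) for s in (1, 2, 4)]
```

Higher values of the resulting entropy curve at coarse scales would correspond to the "increased EEG complexity at coarse temporal scales" reported after musical training.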

  14. Potential for a near-term very-low-energy antiproton source at Brookhaven National Laboratory. Special report

    SciTech Connect

    Nordley, G.D.

    1989-04-01

The resolution of key issues in the use of antimatter for applications ranging from aerospace-materials analysis in the near term and propulsion energy storage in the far term requires experiments with low-energy, relatively slow-moving, or thermal, antiprotons. There is no United States source of antiprotons at that energy; therefore, a task was initiated with Brookhaven National Laboratory to determine what would be required in time, equipment, and money to create a source producing antiprotons at a rate (approximately 10^14/yr) sufficient to support applications experiments. The estimate eventually derived from this first-order analysis was approximately $8.6M for an initial source of 20 keV antiprotons plus another roughly estimated $5M for cooling to increase the production rate to 10^14-10^15 antiprotons per year.

  15. Sources and distribution of NO(x) in the upper troposphere at northern midlatitudes

    NASA Technical Reports Server (NTRS)

    Rohrer, Franz; Ehhalt, Dieter H.; Wahner, Andreas

    1994-01-01

A simple quasi 2-D model is used to study the zonal distribution of NO(x). The model includes vertical transport in the form of eddy diffusion and deep convection, zonal transport by a vertically uniform wind, and a simplified chemistry of NO, NO2 and HNO3. The NO(x) sources considered are surface emissions (mostly from the combustion of fossil fuel), lightning, aircraft emissions, and downward transport from the stratosphere. The model is applied to the latitude band of 40 deg N to 50 deg N during the month of June; the contributions to the zonal NO(x) distribution from the individual sources and transport processes are investigated. The model-predicted NO(x) concentration in the upper troposphere is dominated by air lofted from the polluted planetary boundary layer over the large industrial areas of Eastern North America and Europe. Aircraft emissions are also important and contribute on average 30 percent. Stratospheric input is minor, about 10 percent, even less than that from lightning. The model provides a clear indication of intercontinental transport of NO(x) and HNO3 in the upper troposphere. Comparison of the modelled NO profiles over the Western Atlantic with those measured during STRATOZ 3 in 1984 shows good agreement at all altitudes.
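The transport ingredients named above (vertical eddy diffusion, a vertically uniform zonal wind, surface emissions, and a simplified first-order sink) can be sketched as one explicit finite-difference update. Grid spacing, diffusivity, wind speed, and the NOx lifetime below are illustrative placeholders, not the paper's values.

```python
import numpy as np

nz, nx = 20, 36                      # vertical levels x zonal cells (hypothetical grid)
dz, dx, dt = 500.0, 1.0e5, 1800.0    # m, m, s
Kz, u, lam = 50.0, 10.0, 1.0 / (5 * 86400)  # eddy diffusivity, zonal wind, NOx loss rate
C = np.zeros((nz, nx))               # NOx mixing ratio (arbitrary units)
S = np.zeros((nz, nx))
S[0, 5:8] = 1e-6 / dt                # surface emissions over an "industrial" sector

for _ in range(200):
    lap = np.zeros_like(C)
    lap[1:-1] = (C[2:] - 2 * C[1:-1] + C[:-2]) / dz**2
    lap[0]    = (C[1] - C[0]) / dz**2         # no-flux lower boundary
    lap[-1]   = (C[-2] - C[-1]) / dz**2       # no-flux upper boundary
    adv = -u * (C - np.roll(C, 1, axis=1)) / dx   # upwind advection, periodic in longitude
    C = C + dt * (Kz * lap + adv + S - lam * C)
```

With the chosen time step the scheme is stable (Kz*dt/dz² = 0.36 < 0.5 and u*dt/dx = 0.18 < 1), so emitted NOx diffuses upward while being carried downwind, mimicking the lofting of boundary-layer pollution into the upper troposphere.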

  16. Pu and 137Cs in the Yangtze River estuary sediments: distribution and source identification.

    PubMed

    Liu, Zhiyong; Zheng, Jian; Pan, Shaoming; Dong, Wei; Yamada, Masatoshi; Aono, Tatsuo; Guo, Qiuju

    2011-03-01

Pu isotopes and (137)Cs were analyzed using sector field ICP-MS and γ spectrometry, respectively, in surface sediment and core sediment samples from the Yangtze River estuary. (239+240)Pu activity and (240)Pu/(239)Pu atom ratios (>0.18) show a generally increasing trend from land to sea and from north to south in the estuary. This spatial distribution pattern indicates that the Pacific Proving Grounds (PPG) source Pu transported by ocean currents was intensively scavenged into the suspended sediment under favorable conditions, and mixed with riverine sediment as the water circulated in the estuary. This process is the main control for the distribution of Pu in the estuary. Moreover, Pu is also an important indicator for monitoring the changes of environmental radioactivity in the estuary as the river basin is currently the site of extensive human activities and the sea level is rising because of global climate change. For core sediment samples the maximum peak of (239+240)Pu activity was observed at a depth of 172 cm. The sedimentation rate was estimated on the basis of the Pu maximum deposition peak in 1963-1964 to be 4.1 cm/a. The contributions of the PPG close-in fallout Pu (44%) and the riverine Pu (45%) in Yangtze River estuary sediments are equally important for the total Pu deposition in the estuary, which challenges the current hypothesis that the riverine Pu input was the major source of Pu budget in this area.
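The reported sedimentation rate follows directly from the depth of the 1963-1964 fallout maximum. The core collection year is not stated in the abstract; ~2005 is assumed below purely to reproduce the arithmetic behind the reported 4.1 cm/a.

```python
# Depth of the 1963-1964 global fallout maximum, as reported in the abstract
peak_depth_cm = 172.0

# Assumed collection year (not given in the abstract); chosen so that the
# computed rate matches the reported value
collection_year, peak_year = 2005, 1963

rate_cm_per_a = peak_depth_cm / (collection_year - peak_year)
```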

  17. Recurring flood distribution patterns related to short-term Holocene climatic variability.

    PubMed

    Benito, Gerardo; Macklin, Mark G; Panin, Andrei; Rossato, Sandro; Fontana, Alessandro; Jones, Anna F; Machado, Maria J; Matlakhova, Ekaterina; Mozzi, Paolo; Zielhofer, Christoph

    2015-11-09

Millennial- and multi-centennial scale climate variability during the Holocene has been well documented, but its impact on the distribution and timing of extreme river floods has yet to be established. Here we present a meta-analysis of more than 2000 radiometrically dated flood units to reconstruct centennial-scale Holocene flood episodes in Europe and North Africa. Our data analysis shows a general increase in flood frequency after 5000 cal. yr BP consistent with a weakening in zonal circulation over the second half of the Holocene, and with an increase in winter insolation. Multi-centennial length phases of flooding in the UK and central Europe correspond with periods of minimum solar irradiance, with a clear trend of increasing flood frequency over the last 1000 years. Western Mediterranean regions show synchrony of flood episodes associated with negative phases of the North Atlantic Oscillation that are out-of-phase with those evident within the eastern Mediterranean. This long-term flood record reveals complex but geographically highly interconnected climate-flood relationships, and provides a new framework to understand likely future spatial changes of flood frequency.

  18. Hydroxymethylation is uniquely distributed within term placenta, and is associated with gene expression.

    PubMed

    Green, Benjamin B; Houseman, E Andres; Johnson, Kevin C; Guerin, Dylan J; Armstrong, David A; Christensen, Brock C; Marsit, Carmen J

    2016-08-01

The conversion of cytosine to 5-methylcytosine (5mC) is an important regulator of gene expression. 5mC may be enzymatically converted to 5-hydroxymethylcytosine (5hmC), with a potentially distinct regulatory function. We sought to investigate these cytosine modifications and their effect on gene expression by parallel processing of genomic DNA using bisulfite and oxidative bisulfite conversion in conjunction with RNA sequencing. Although values of 5hmC across the placental genome were generally low, we identified ∼21,000 loci with consistently elevated levels of 5-hydroxymethylcytosine. Absence of 5hmC was observed in CpG islands and, to a greater extent, in non-CpG island-associated regions. 5hmC was enriched within poised enhancers, and depleted within active enhancers, as defined by H3K27ac and H3K4me1 measurements. 5hmC and 5mC were significantly elevated in transcriptionally silent genes when compared with actively transcribed genes. 5hmC was positively associated with transcription in actively transcribed genes only. Our data suggest that dynamic cytosine regulation, associated with transcription, provides the most complete epigenomic landscape of the human placenta, and will be useful for future studies of the placental epigenome.-Green, B. B., Houseman, E. A., Johnson, K. C., Guerin, D. J., Armstrong, D. A., Christensen, B. C., Marsit, C. J. Hydroxymethylation is uniquely distributed within term placenta, and is associated with gene expression.
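The parallel bisulfite/oxidative-bisulfite design implies a simple per-locus arithmetic for 5hmC: bisulfite (BS) conversion reads 5mC plus 5hmC, while oxidative bisulfite (oxBS) reads 5mC only, so their difference estimates 5hmC. The beta values below are invented for illustration.

```python
def hydroxymethylation(beta_bs, beta_oxbs):
    """5hmC estimate as the BS minus oxBS beta difference, floored at zero
    (negative differences arise from assay noise, not real biology)."""
    return max(beta_bs - beta_oxbs, 0.0)

five_hmc = hydroxymethylation(0.62, 0.55)    # modest 5hmC at this locus
noise_case = hydroxymethylation(0.40, 0.43)  # noisy negative difference, clipped
```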

  19. Distribution and long-term trends in various fog types over South Korea

    NASA Astrophysics Data System (ADS)

    Belorid, Miloslav; Lee, Chong Bum; Kim, Jea-Chul; Cheon, Tae-Hun

    2015-11-01

This study analyzed the spatial and temporal distributions of various fog types over South Korea. Six types of fogs were identified using a classification algorithm based on simple conceptual models of fog formation. The algorithm was applied to a 25-year record of meteorological observations. The most common fog types were radiation fog, prevailing at inland stations, and precipitation fog at coastal and island stations. Declining temporal trends in the frequency of fog events ranging between 2.1 and 10.9 fog events per decade were found at eight inland and two coastal stations. Long-term trends for each fog type show that the decrease in the frequency of fog events is mainly due to a decrease in the frequency of radiation fogs ranging between 1.1 and 8.5 fog events per decade. To identify the potential factors related to the decrease in radiation fog events, the temporal trends in annual mean nocturnal maximal cooling rates and annual mean nocturnal specific humidity during nights with clear sky and calm winds were examined. The results show that the decrease in the frequency of radiation fog events is associated mainly with the pattern of urbanization occurring during the past two decades.
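Decadal trends like those reported (fog events per decade) are typically obtained as a least-squares slope of annual event counts scaled to a 10-year interval. A sketch on synthetic data with a built-in decline of 5 events per decade; the counts and record length are hypothetical, not the study's observations.

```python
import numpy as np

# Hypothetical annual fog-event counts over a 25-year record with a
# true decline of 0.5 events per year plus observational scatter
years = np.arange(1986, 2011)
rng = np.random.default_rng(2)
events = 60.0 - 0.5 * (years - years[0]) + rng.normal(0.0, 2.0, years.size)

slope_per_year = np.polyfit(years, events, 1)[0]
trend_per_decade = 10.0 * slope_per_year   # negative value = declining fog frequency
```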

  20. Annual Rates on Seismogenic Italian Sources with Models of Long-Term Predictability for the Time-Dependent Seismic Hazard Assessment In Italy

    NASA Astrophysics Data System (ADS)

    Murru, Maura; Falcone, Giuseppe; Console, Rodolfo

    2016-04-01

The present study is carried out in the framework of the Center for Seismic Hazard (CPS) of INGV, under an agreement signed in 2015 with the Department of Civil Protection to develop a new seismic hazard model for the country, updating the current reference (MPS04-S1; zonesismiche.mi.ingv.it and esse1.mi.ingv.it) released between 2004 and 2006. In this initiative, we participate with the Long-Term Stress Transfer (LTST) Model, providing the annual occurrence rate of a seismic event over the entire Italian territory, from a minimum magnitude of Mw 4.5, in bins of 0.1 magnitude units on geographical cells of 0.1° x 0.1°. Our methodology fuses a statistical time-dependent renewal model (Brownian Passage Time, BPT; Matthews et al., 2002) with a physical model that accounts for the permanent stress change that a seismogenic source undergoes as a result of earthquakes occurring on surrounding sources. For each catalog considered (historical, instrumental and individual seismogenic sources) we determined a distinct rate value for each 0.1° x 0.1° cell for the next 50 yrs. If a cell falls within one of the sources in question, we adopted the respective rate value, which refers only to the magnitude of the characteristic event; this value is divided by the number of grid cells that fall on the horizontal projection of the source. If instead a cell falls outside any seismogenic source, we used the average rate obtained from the historical and instrumental catalogs, following the method of Frankel (1995). The annual occurrence rate was computed for each of the three distributions considered (Poisson, BPT, and BPT with the inclusion of stress transfer).
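The BPT renewal model named above has a closed-form CDF (it is an inverse Gaussian distribution parameterized by mean recurrence μ and aperiodicity α, following Matthews et al., 2002), from which a conditional probability of an event in a forecast window can be computed. All numeric values below are illustrative, not the study's calibrated rates.

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def bpt_cdf(t, mu, alpha):
    """CDF of the Brownian Passage Time distribution with mean recurrence
    mu and aperiodicity alpha (inverse Gaussian with lambda = mu/alpha^2)."""
    a = math.sqrt(mu / (alpha**2 * t))
    return (norm_cdf(a * (t / mu - 1.0))
            + math.exp(2.0 / alpha**2) * norm_cdf(-a * (t / mu + 1.0)))

def conditional_prob(elapsed, window, mu, alpha):
    """P(event in [elapsed, elapsed + window] | no event up to `elapsed`)."""
    f0 = bpt_cdf(elapsed, mu, alpha)
    return (bpt_cdf(elapsed + window, mu, alpha) - f0) / (1.0 - f0)

# Illustrative numbers only: mean recurrence 1000 yr, aperiodicity 0.5,
# 800 yr elapsed since the last characteristic event, 50-yr forecast window
p50 = conditional_prob(800.0, 50.0, 1000.0, 0.5)
```

Unlike the time-independent Poisson alternative, the BPT conditional probability grows as elapsed time approaches the mean recurrence, which is what makes the hazard assessment time-dependent.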

  1. Surface-level fine particle mass concentrations: from hemispheric distributions to megacity sources.

    PubMed

    Hidy, George M

    2009-07-01

Since 1990, basic knowledge of the "chemical climate" of fine particles has greatly improved from Junge's compilation from the 1960s. A worldwide baseline distribution of fine particle concentrations on a synoptic scale of approximately 1000 km can be estimated at least qualitatively from measurements. A geographical distribution of fine particle characteristics is deduced from a synthesis of a variety of disparate data collected at ground level on all continents, especially in the northern hemisphere. On the average, the regional mass concentrations range from 1 to 80 microg/m3, with the highest concentrations in regions of high population density and industrialization. Fine particles by mass on a continental and hemispheric spatial scale are generally dominated by non-sea salt sulfate (0.2 to approximately 20 microg/m3, or approximately 25%) and organic carbon (0.2 to >10 microg/m3, or approximately 25%), with lesser contributions of ammonium, nitrate, elemental carbon, and elements found in sea salt or soil dust. The crustal and trace metal elements contribute a varied amount to fine particle mass depending on location, with a larger contribution in marine conditions or during certain events such as dust storms or volcanic disturbances. The average distribution of mass concentration and major components depends on the proximity to areal aggregations of sources, most of which are continental in origin, with contributions from sea salt emissions in the marine environment. The highest concentrations generally are within or near very large population and industrial centers, especially in Asia, including parts of China and India, as well as North America and Europe. Natural sources of blowing dust, sea salt, and wildfires contribute to large, intermittent spatial-scale particle loadings beyond these ranges. A sampling of 10 megacities illustrates a range of characteristic particle composition, dependent on local and regional sources. Long-range transport of pollution

  2. An ESPRIT-Based Approach for 2-D Localization of Incoherently Distributed Sources in Massive MIMO Systems

    NASA Astrophysics Data System (ADS)

    Hu, Anzhong; Lv, Tiejun; Gao, Hui; Zhang, Zhang; Yang, Shaoshi

    2014-10-01

In this paper, an approach based on the estimation of signal parameters via rotational invariance techniques (ESPRIT) is proposed for two-dimensional (2-D) localization of incoherently distributed (ID) sources in large-scale/massive multiple-input multiple-output (MIMO) systems. The traditional ESPRIT-based methods are valid only for one-dimensional (1-D) localization of the ID sources. By contrast, in the proposed approach the signal subspace is constructed for estimating the nominal azimuth and elevation directions-of-arrival and the angular spreads. The proposed estimator enjoys closed-form expressions and hence bypasses searching over the entire feasible field. Therefore, it imposes significantly lower computational complexity than the conventional 2-D estimation approaches. Our analysis shows that the estimation performance of the proposed approach improves when large-scale/massive MIMO systems are employed. The approximate Cramér-Rao bound of the proposed estimator for the 2-D localization is also derived. Numerical results demonstrate that although the proposed estimation method is comparable to the traditional 2-D estimators in terms of performance, it benefits from a remarkably lower computational complexity.
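The paper's estimator targets 2-D localization of incoherently distributed sources; as a minimal illustration of the rotational-invariance idea it builds on, here is a standard 1-D ESPRIT direction-of-arrival sketch for a single point source on a uniform linear array. The array size, snapshot count, and noise level are hypothetical.

```python
import numpy as np

def esprit_doa(X, n_sources, d=0.5):
    """1-D ESPRIT DOA estimate (radians) for a ULA with spacing d wavelengths.
    X: (n_antennas, n_snapshots) complex baseband samples."""
    R = X @ X.conj().T / X.shape[1]            # sample covariance
    _, vecs = np.linalg.eigh(R)                # eigenvectors, ascending eigenvalues
    Us = vecs[:, -n_sources:]                  # signal subspace
    # rotational invariance between the two overlapping subarrays
    Phi = np.linalg.lstsq(Us[:-1], Us[1:], rcond=None)[0]
    phases = np.angle(np.linalg.eigvals(Phi))
    return np.arcsin(phases / (2 * np.pi * d))

rng = np.random.default_rng(0)
true_angle = np.deg2rad(20.0)
n, snapshots = 16, 500
a = np.exp(2j * np.pi * 0.5 * np.arange(n) * np.sin(true_angle))  # steering vector
s = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
noise = 0.05 * (rng.standard_normal((n, snapshots))
                + 1j * rng.standard_normal((n, snapshots)))
X = np.outer(a, s) + noise
est = np.rad2deg(esprit_doa(X, 1))
```

The closed-form eigenvalue step is what lets ESPRIT avoid a grid search over angles, the same property the paper exploits to keep 2-D localization of ID sources cheap in massive MIMO arrays.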

  3. Inverting seismic noise cross-correlations for noise source distribution: A step towards reducing source-induced bias in seismic noise interferometry

    NASA Astrophysics Data System (ADS)

    Ermert, Laura; Afanasiev, Michael; Sager, Korbinian; Gokhberg, Alexey; Fichtner, Andreas

    2016-04-01

    We report on the ongoing development of a new inversion method for the space- and time-dependent power spectral density distribution of ambient seismic noise sources. The method, once complete, will mainly serve two purposes: First, it will allow us to construct more realistic forward models for noise cross-correlation waveforms, thereby opening new possibilities for waveform imaging by ambient noise tomography. Second, it may provide new insights about the properties of ambient noise sources, complementing studies based on beamforming or numerical modeling of noise based on oceanographic observations. To invert for noise sources, we consider surface wave signal energy measurements on the 'causal' (station A to B) and on the 'acausal' (station B to A) correlation branch, and the ratio between them. These and similar measurements have proven useful for locating noise sources using cross-correlations in several past studies. The inversion procedure is the following: We construct correlation forward models based on Green's functions from a spectral element wave propagation code. To construct these models efficiently, we use source-receiver reciprocity and assume spatial uncorrelation of noise sources. In such a setting, correlations can be calculated from a pre-computed set of Green's functions between the seismic receivers and sources located at the Earth's surface. We then calculate spatial sensitivity kernels for the noise source distribution with respect to the correlation signal energy measurements. These in turn allow us to construct a misfit gradient and optimize the source distribution model to fit our observed cross-correlation signal energies or energy ratios. We will present the workflow for calculation of the forward model and sensitivity kernels, as well as results for both forward modeling and kernels for an example data set of long-period noise or 'hum' at a global scale. We will also provide an outlook on the noise source inversion considering the
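The causal/acausal signal energy measurement described above can be sketched on a synthetic correlation: energies are integrated in a surface-wave window on each branch and compared as a (log) ratio. The window limits and waveforms are hypothetical.

```python
import numpy as np

def branch_energy_ratio(cc, dt, t_min, t_max):
    """Signal energy in a window [t_min, t_max] on the causal (A->B) and
    acausal (B->A) branches of a noise correlation, and their log ratio."""
    n = len(cc)
    mid = n // 2                        # zero-lag sample (odd-length correlation)
    lags = (np.arange(n) - mid) * dt
    causal = (lags >= t_min) & (lags <= t_max)
    acausal = (lags <= -t_min) & (lags >= -t_max)
    e_plus = np.sum(cc[causal] ** 2) * dt
    e_minus = np.sum(cc[acausal] ** 2) * dt
    return e_plus, e_minus, np.log(e_plus / e_minus)

# Synthetic correlation: a stronger arrival on the causal branch, i.e. more
# noise energy travelling from station A towards station B
dt = 1.0
lags = np.arange(-300, 301) * dt
cc = (2.0 * np.exp(-0.5 * ((lags - 100.0) / 10.0) ** 2)
      + 1.0 * np.exp(-0.5 * ((lags + 100.0) / 10.0) ** 2))
e_p, e_m, ratio = branch_energy_ratio(cc, dt, 60.0, 140.0)
```

A nonzero log ratio like this is the kind of asymmetry observation the inversion fits, via sensitivity kernels, to constrain where the noise sources are.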

  4. A distributed continuous simulation model to identify critical source areas of phosphorus at the catchment scale: model description

    NASA Astrophysics Data System (ADS)

    Koo, B. K.; Dunn, S. M.; Ferrier, R. C.

    2005-08-01

This paper presents CAMEL (Chemicals from Agricultural Management and Erosion Losses), a distributed continuous simulation model to simulate daily phosphorus (P) transformations and transport at the catchment scale. CAMEL is a process-oriented mass-balance model that is based on both analytical and numerical approaches. In the model, a catchment is represented using a network of square grid cells, each of which is comprised of various storages of water, sediment and P. Most of the hydrological processes, soil erosion, sediment transport, and P transformations and transport are described using process-based equations. The P transformations between five P storages (active organic, stable organic, labile, active inorganic, and stable inorganic) are described using first-order kinetic equations. A comprehensive cascade routing scheme is used to simulate P retention and transport along the channel system. Dissolved P is also transported by groundwater flows, described using a two-dimensional Boussinesq equation. CAMEL simulates both surface and subsurface processes explicitly and therefore is suitable for catchment-scale applications. The distributed, process-oriented structure of CAMEL enables the model to be used for identifying critical source areas of P at the catchment scale. CAMEL is also computationally efficient, allowing long-term applications.
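The first-order kinetic exchanges between the five P storages can be sketched as an explicit daily mass balance in one grid cell. The rate constants and initial pool sizes below are illustrative placeholders, not CAMEL's calibrated parameters.

```python
# First-order transfer coefficients between phosphorus pools (1/day);
# values are invented for illustration
k = {("labile", "active_inorg"): 0.05, ("active_inorg", "labile"): 0.01,
     ("active_inorg", "stable_inorg"): 0.002, ("stable_inorg", "active_inorg"): 0.0005,
     ("active_org", "labile"): 0.02, ("stable_org", "active_org"): 0.001}

pools = {"labile": 10.0, "active_inorg": 50.0, "stable_inorg": 200.0,
         "active_org": 30.0, "stable_org": 100.0}   # kg P in one grid cell

def step(pools, k, dt=1.0):
    """One explicit daily step of the first-order pool exchanges."""
    flux = {p: 0.0 for p in pools}
    for (src, dst), rate in k.items():
        f = rate * pools[src] * dt       # first-order kinetics: flux ~ pool size
        flux[src] -= f
        flux[dst] += f
    return {p: pools[p] + flux[p] for p in pools}

for _ in range(365):                     # one year of daily steps
    pools = step(pools, k)
```

Because every transfer is debited from one pool and credited to another, total P in the cell is conserved exactly, which is the mass-balance property the model relies on; losses to runoff or sediment would enter as additional sink terms.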

  5. Applicability of the single equivalent point dipole model to represent a spatially distributed bio-electrical source

    NASA Technical Reports Server (NTRS)

    Armoundas, A. A.; Feldman, A. B.; Sherman, D. A.; Cohen, R. J.

    2001-01-01

Although the single equivalent point dipole model has been used to represent well-localised bio-electrical sources, in realistic situations the source is distributed. Consequently, position estimates of point dipoles determined by inverse algorithms suffer from systematic error due to the non-exact applicability of the inverse model. In realistic situations, this systematic error cannot be avoided, a limitation that is independent of the complexity of the torso model used. This study quantitatively investigates the intrinsic limitations in the assignment of a location to the equivalent dipole due to a distributed electrical source. To simulate arrhythmic activity in the heart, a model of a wave of depolarisation spreading from a focal source over the surface of a spherical shell is used. The activity is represented by a sequence of concentric belt sources (obtained by slicing the shell with a sequence of parallel plane pairs), with constant dipole moment per unit length (circumferentially) directed parallel to the propagation direction. The distributed source is represented by N dipoles at equal arc lengths along the belt. The sum of the dipole potentials is calculated at predefined electrode locations. The inverse problem involves finding a single equivalent point dipole that best reproduces the electrode potentials due to the distributed source. The inverse problem is implemented by minimising the chi2 per degree of freedom. It is found that the trajectory traced by the equivalent dipole is sensitive to the location of the spherical shell relative to the fixed electrodes. It is shown that this trajectory does not coincide with the sequence of geometrical centres of the consecutive belt sources. For distributed sources within a bounded spherical medium, displaced from the sphere's centre by 40% of the sphere's radius, it is found that the error in the equivalent dipole location varies from 3 to 20% for sources with size between 5 and 50% of the sphere's radius

  6. Applicability of the single equivalent point dipole model to represent a spatially distributed bio-electrical source.

    PubMed

    Armoundas, A A; Feldman, A B; Sherman, D A; Cohen, R J

    2001-09-01

Although the single equivalent point dipole model has been used to represent well-localised bio-electrical sources, in realistic situations the source is distributed. Consequently, position estimates of point dipoles determined by inverse algorithms suffer from systematic error due to the non-exact applicability of the inverse model. In realistic situations, this systematic error cannot be avoided, a limitation that is independent of the complexity of the torso model used. This study quantitatively investigates the intrinsic limitations in the assignment of a location to the equivalent dipole due to a distributed electrical source. To simulate arrhythmic activity in the heart, a model of a wave of depolarisation spreading from a focal source over the surface of a spherical shell is used. The activity is represented by a sequence of concentric belt sources (obtained by slicing the shell with a sequence of parallel plane pairs), with constant dipole moment per unit length (circumferentially) directed parallel to the propagation direction. The distributed source is represented by N dipoles at equal arc lengths along the belt. The sum of the dipole potentials is calculated at predefined electrode locations. The inverse problem involves finding a single equivalent point dipole that best reproduces the electrode potentials due to the distributed source. The inverse problem is implemented by minimising the chi2 per degree of freedom. It is found that the trajectory traced by the equivalent dipole is sensitive to the location of the spherical shell relative to the fixed electrodes. It is shown that this trajectory does not coincide with the sequence of geometrical centres of the consecutive belt sources. For distributed sources within a bounded spherical medium, displaced from the sphere's centre by 40% of the sphere's radius, it is found that the error in the equivalent dipole location varies from 3 to 20% for sources with size between 5 and 50% of the sphere's radius
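A minimal numerical sketch of the inverse procedure described here: for each trial location the dipole moment is solved by linear least squares, and the location with the smallest chi-square residual is kept. An infinite homogeneous conductor is assumed instead of a bounded torso model, and the electrode geometry, conductivity, and the two-dipole "distributed" source are all illustrative.

```python
import numpy as np

SIGMA = 0.33  # conductivity of the homogeneous medium (S/m, illustrative)

def dipole_potential(r_elec, r0, p):
    """Potential of a point dipole at r0 in an infinite homogeneous conductor."""
    d = r_elec - r0
    dist = np.linalg.norm(d, axis=1)
    return (d @ p) / (4 * np.pi * SIGMA * dist**3)

def fit_dipole(r_elec, V, trial_positions):
    """At each trial location solve the moment by linear least squares and
    keep the location minimising the chi-square residual."""
    best = (np.inf, None, None)
    for r0 in trial_positions:
        d = r_elec - r0
        dist = np.linalg.norm(d, axis=1)
        L = d / (4 * np.pi * SIGMA * dist[:, None]**3)   # lead-field matrix
        p, *_ = np.linalg.lstsq(L, V, rcond=None)
        chi2 = np.sum((V - L @ p) ** 2)
        if chi2 < best[0]:
            best = (chi2, r0, p)
    return best

# Electrodes spread over a unit sphere (Fibonacci lattice)
i = np.arange(64)
phi = np.pi * (3.0 - np.sqrt(5.0)) * i
z = 1.0 - 2.0 * (i + 0.5) / 64
rho = np.sqrt(1.0 - z**2)
electrodes = np.column_stack([rho * np.cos(phi), rho * np.sin(phi), z])

# A "distributed" source: two identical x-directed dipoles on the z-axis
p_true = np.array([1.0, 0.0, 0.0])
V = (dipole_potential(electrodes, np.array([0.0, 0.0, 0.4]), p_true)
     + dipole_potential(electrodes, np.array([0.0, 0.0, 0.6]), p_true))

trials = [np.array([0.0, 0.0, zc]) for zc in np.linspace(0.0, 0.9, 19)]
chi2, r_best, p_best = fit_dipole(electrodes, V, trials)
```

The fitted location lands between the two true dipoles but generally not at their geometric midpoint, which is the kind of systematic equivalent-dipole displacement the study quantifies.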

  7. Distribution and sources of oxygenated non-hydrocarbons in topsoil of Beijing, China.

    PubMed

    Zhang, Zhihuan; Wan, Tiantian; Peng, Xuyang; He, Guangxiu; Liu, Yu; Zeng, Li

    2016-08-01

The oxygenated non-hydrocarbon compounds are widely distributed in soil. To investigate the distribution and origin of these compounds in topsoil of Beijing, their contents and compositions were measured in topsoil from 62 sites in Beijing. The research results showed that oxygenated non-hydrocarbons were composed primarily of C6∼C28 n-fatty acids, C12∼C28 n-fatty alcohols, n-fatty acid methyl esters, phthalates, sterols, and dehydroabietic acid in the topsoil of Beijing. The contents and compositions of these compounds varied with the sampling site. The concentrations of n-fatty acids and phthalate esters were the highest at all sites, followed by sterols, n-fatty acid methyl esters, fatty alcohols, and dehydroabietic acid in order. The n-fatty acids had a main peak of C16, followed by C18. An odd or even carbon number predominance was not observed in the low-molecular-weight n-fatty acids, indicating a fossil fuel or organic matter source. However, some high-molecular-weight n-fatty acids with an even carbon predominance may derive from biomass. The n-fatty alcohols showed a main peak of C22 and were predominated by an even carbon number, suggesting plant, microbial, or other natural origins. Phthalates, including diethyl phthalate (DEP), diisobutyl phthalate (DIBP), dibutyl phthalate (DBP), diethylhexyl phthalate (DEHP), and dimethylphthalate (DMP), were detected. The content of phthalate esters was higher in the samples collected from dense human activity areas. The concentrations of DBP, DEHP, and DIBP were relatively high, indicating an anthropogenic source. The sterols (predominantly β-sitosterol) originated from biological sources, especially plants. The n-fatty acid methyl esters and dehydroabietic acid in topsoil showed apparent even carbon predominance with the former mainly derived from microorganisms or plants and the latter from cork combustion products.

  8. Spatial and temporal distribution of the dust deposition in Central Asia - results from a long term monitoring program

    NASA Astrophysics Data System (ADS)

    Groll, M.; Opp, Chr.; Aslanov, I.

    2013-06-01

The aeolian transport of dust is an important process in Central Asia. Anthropogenic desertification and the desiccation of the Aral Sea have increased the overall dust emission and transport from this region and the local dust storm frequency during the last decades. Reliable ground data, however, are collected only sporadically, so the knowledge about the spatial and temporal distribution and dynamics of the dust deposition in the Aral Sea basin is fragmented and inconsistent at best. A long-term monitoring program was installed and sustained by three research projects. The results included in this article cover the dust deposition between 2003 and 2010 from 21 stations in Uzbekistan, Kazakhstan, and Turkmenistan. They confirm that the aeolian dust transport occurs mainly in a southerly direction. The highest average monthly deposition rate was registered in Uzbekistan (56.2 g m-2), while the percentage of months with a very intense (and potentially harmful) dust deposition flux was highest in Turkmenistan (36.4%). A majority of samples were collected during months with a dust deposition of less than 10.0 g m-2, while only 6% of all samples showed high monthly deposition intensities of more than 100 g m-2. The Kyzyl Kum, Kara Kum, and Aral Kum were identified as the main sources for aeolian dust in the Aral Sea basin. The impact of the Aral Kum as the dominant source of aeolian dust is limited to a region of approximately 500,000 km2 surrounding the former Aral Sea. The Kara Kum is characterized by a very high frequency of dust storms of a local and regional magnitude, and close to the Kyzyl Kum, monthly dust deposition rates of up to 9,600 g m-2 were registered. An analysis of the temporal distribution of the dust deposition showed a slight increase in the dust deposition activity and intensity between 2003 and 2010, with a strong inter-annual and seasonal dynamic. The highest average dust deposition was registered in June, and a second phase of intense dust

  9. Spatial distribution and source identification of trace elements in topsoil from heavily industrialized region, Aliaga, Turkey.

    PubMed

    Kara, Melik; Dumanoğlu, Yetkin; Altıok, Hasan; Elbir, Tolga; Odabası, Mustafa; Bayram, Abdurrahman

    2014-10-01

    Topsoil samples (n = 40) were collected from a heavily industrialized region in Turkey. The region includes several scrap processing iron-steel plants with electric arc furnaces (EAFs), a petroleum refinery, a petrochemical complex, steel rolling mills, a natural gas-fired power plant, ship-breaking yards and very dense transportation activities. The region has undergone a rapid transition from an agricultural region to a heavily industrialized region in the last three decades. Collected soil samples were analyzed for 48 trace elements using inductively coupled plasma-mass spectrometry (ICP-MS). The elemental distribution pattern in the region indicated that Nemrut area with dense iron-steel production activities was a hotspot for elemental pollution. In addition to crustal elements, concentrations of anthropogenic trace elements (i.e., Fe, Zn, Pb, Mn, Cu, Cd, Cr and Mo) were very high in the area influencing many parts of the region. Elemental compositions of fugitive sources polluting the soil (i.e., paved and unpaved roads, slag piles, EAFs filter dust piles and coal piles) were also determined. The methods (enrichment factors [EFs] and the index of geoaccumulation [Igeo]) used for determination of pollution status of soil showed that Cr, Ag, Zn, As and Pb were the strongly contaminating elements for the region. Principal component analysis (PCA) clearly indicated that anthropogenic sources (steel production, refinery and petrochemical processes and traffic) were important sources in this region.
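The abstract reports principal component analysis (PCA) for separating anthropogenic from crustal element sources. A generic PCA sketch via SVD of a standardized concentration matrix follows; the element names, loadings, and the two synthetic latent sources (an "iron-steel" factor and a "crustal" factor) are hypothetical.

```python
import numpy as np

# Hypothetical log-concentrations of 6 elements at 40 sites, driven by two
# latent source strengths plus measurement noise
rng = np.random.default_rng(3)
steel = rng.lognormal(0.0, 1.0, 40)      # "iron-steel" source strength per site
crustal = rng.lognormal(0.0, 1.0, 40)    # "crustal" source strength per site
profiles = np.array([[5.0, 4.0, 3.0, 0.2, 0.1, 0.1],   # Fe, Zn, Pb, Al, Ti, K
                     [0.1, 0.2, 0.1, 4.0, 3.0, 5.0]])
X = np.outer(steel, profiles[0]) + np.outer(crustal, profiles[1])
X += rng.normal(0.0, 0.05, X.shape)

# PCA via SVD of the standardized (z-scored) data matrix
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
explained = s**2 / np.sum(s**2)          # explained variance ratio per component
```

With two latent sources, the first two components absorb nearly all the variance, and the loadings in `Vt[:2]` group the anthropogenic elements apart from the crustal ones, which is the interpretation step used for source identification.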

  10. Source attack of decoy-state quantum key distribution using phase information

    NASA Astrophysics Data System (ADS)

    Tang, Yan-Lin; Yin, Hua-Lei; Ma, Xiongfeng; Fung, Chi-Hang Fred; Liu, Yang; Yong, Hai-Lin; Chen, Teng-Yun; Peng, Cheng-Zhi; Chen, Zeng-Bing; Pan, Jian-Wei

    2013-08-01

    Quantum key distribution (QKD) utilizes the laws of quantum mechanics to achieve information-theoretically secure key generation. This field is now approaching the stage of commercialization, but many practical QKD systems still suffer from security loopholes due to imperfect devices. In fact, practical attacks have successfully been demonstrated. Fortunately, most of them only exploit detection-side loopholes, which are now closed by the recent idea of measurement-device-independent QKD. On the other hand, little attention is paid to the source, which may still leave QKD systems insecure. In this work, we propose and demonstrate an attack that exploits a source-side loophole existing in qubit-based QKD systems using a weak coherent state source and decoy states. Specifically, by implementing a linear-optics unambiguous state discrimination measurement, we show that the security of a system without phase randomization—which is a step assumed in conventional security analyses but sometimes neglected in practice—can be compromised. We conclude that implementing phase randomization is essential to the security of decoy-state QKD systems under current security analyses.

  11. Passive decoy-state quantum key distribution with practical light sources

    SciTech Connect

    Curty, Marcos; Ma, Xiongfeng; Qi, Bing; Moroder, Tobias

    2010-02-15

    Decoy states have been proven to be a very useful method for significantly enhancing the performance of quantum key distribution systems with practical light sources. Although active modulation of the intensity of the laser pulses is an effective way of preparing decoy states in principle, in practice passive preparation might be desirable in some scenarios. Typical passive schemes involve parametric down-conversion. More recently, it has been shown that phase-randomized weak coherent pulses (WCP) can also be used for the same purpose [M. Curty et al., Opt. Lett. 34, 3238 (2009).] This proposal requires only linear optics together with a simple threshold photon detector, which shows the practical feasibility of the method. Most importantly, the resulting secret key rate is comparable to the one delivered by an active decoy-state setup with an infinite number of decoy settings. In this article we extend these results, now showing specifically the analysis for other practical scenarios with different light sources and photodetectors. In particular, we consider sources emitting thermal states, phase-randomized WCP, and strong coherent light in combination with several types of photodetectors, like, for instance, threshold photon detectors, photon number resolving detectors, and classical photodetectors. Our analysis includes as well the effect that detection inefficiencies and noise in the form of dark counts shown by current threshold detectors might have on the final secret key rate. Moreover, we provide estimations on the effects that statistical fluctuations due to a finite data size can have in practical implementations.

  12. Comparison between Two Practical Methods of Light Source Monitoring in Quantum Key Distribution

    NASA Astrophysics Data System (ADS)

    Wang, Gan; Chen, Ziyang; Xu, Bingjie; Li, Zhengyu; Peng, Xiang; Guo, Hong

    2016-05-01

    The practical security of quantum key distribution (QKD) is a critical issue due to the loopholes opened by the imperfections of practical devices. The untrusted source problem is a fundamental issue that exists in almost every protocol, including the loss-tolerant protocol and the measurement-device-independent protocol. Two practical light source monitoring methods were proposed, i.e., the two-threshold detector scheme and the photon-number-resolving (PNR) detector scheme. In this work, we test the fluctuation level of different gain-switched pulsed lasers and find that the ratio between the standard deviation and the mean of the pulse energy (denoted γ) ranges from 1% to 7%. Moreover, we propose an improved practical PNR detector scheme and discuss in which circumstances one should use which light source monitoring method; generally speaking, the PNR detector method performs better when the fluctuation is large. This provides guidance on selecting the proper monitoring module for different practical systems. This work is supported by the National Science Fund for Distinguished Young Scholars of China (Grant No. 61225003), the State Key Project of National Natural Science Foundation of China (Grant No. 61531003).
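The figure of merit γ described above is simply the ratio of the standard deviation to the mean of the pulse energy. A minimal sketch; the simulated gain-switched laser with a ~3% fluctuation level is an assumption for illustration only:

```python
import numpy as np

def fluctuation_ratio(pulse_energies):
    """gamma = standard deviation / mean of the pulse energy, the
    source-fluctuation figure of merit discussed in the abstract."""
    e = np.asarray(pulse_energies, dtype=float)
    return e.std() / e.mean()

# Illustrative: simulate pulse energies fluctuating by ~3% around a
# unit mean (distribution and parameters are assumptions).
rng = np.random.default_rng(0)
energies = rng.normal(loc=1.0, scale=0.03, size=100_000)
gamma = fluctuation_ratio(energies)
print(f"gamma = {gamma:.3%}")
```

In practice the pulse energies would come from a monitoring photodiode rather than a simulation.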

  13. Distribution and Sources of Petroleum Hydrocarbons in Recent Sediments of the Imo River, SE Nigeria.

    PubMed

    Oyo-Ita, Inyang O; Oyo-Ita, Orok E; Dosunmu, Miranda I; Domínguez, Carmen; Bayona, Josep M; Albaigés, Joan

    2016-02-01

    The distribution of aliphatic and aromatic hydrocarbons in surface sediments of the lower course of the Imo River (Nigeria) was investigated to determine the sources and fate of these compounds. The aliphatic fraction is characterized by a widespread contribution of highly weathered/biodegraded hydrocarbon residues (reflected in the absence of prominent n-alkane peaks coupled with the presence of 17α(H),21β(H)-25-norhopane, an indicator of heavy hydrocarbon biodegradation) of Nigerian crude oils (confirmed by the occurrence of 18α(H)-oleanane, a compound characteristic of oils of deltaic origin). The concentrations of polycyclic aromatic hydrocarbons (PAHs), ranging from 48 to 117 ng/g dry weight (dw; ∑13PAHs), indicate moderate pollution, possibly lowered by the sandy lithology and low organic carbon (OC) content of the sediments. Concentrations slightly decrease towards the estuary of the river, probably because these stations are affected by tidal flushing of pollutants adsorbed on sediment particles and carried away to the Atlantic Ocean by occasional storms. A number of PAH ratios, including parent/alkylated and isomeric compounds, indicate a predominance of petrogenic sources, with a low contribution of pyrolytic inputs, particularly of fossil fuel combustion. On the basis of OC/ON (>10) and Per/ΣPAHpenta- (>10) values, a diagenetic terrigenous OC was proposed as a source of perylene to the river.

  14. Planck Early Results. XV. Spectral Energy Distributions and Radio Continuum Spectra of Northern Extragalactic Radio Sources

    NASA Technical Reports Server (NTRS)

    Aatrokoski, J.; Ade, P. A. R.; Aghanim, N.; Aller, H. D.; Aller, M. F.; Angelakis, E.; Amaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Balbi, A.; Banday, A. J.; Barreiro, R. B.; Bartlett, J. G.; Battaner, E.; Benabed, K.; Benoit, A.; Berdyugin, A.; Bernard, J. P.; Bersanelli, M.; Bhatia, R.; Bonaldi, A.; Bonavera, L.; Gehrels, N.

    2011-01-01

    Spectral energy distributions (SEDs) and radio continuum spectra are presented for a northern sample of 104 extragalactic radio sources, based on the Planck Early Release Compact Source Catalogue (ERCSC) and simultaneous multifrequency data. The nine Planck frequencies, from 30 to 857 GHz, are complemented by a set of simultaneous observations ranging from radio to gamma-rays. This is the first extensive frequency coverage in the radio and millimetre domains for an essentially complete sample of extragalactic radio sources, and it shows how the individual shocks, each in their own phase of development, shape the radio spectra as they move in the relativistic jet. The SEDs presented in this paper were fitted with second- and third-degree polynomials to estimate the frequencies of the synchrotron and inverse Compton (IC) peaks, and the spectral indices of low- and high-frequency radio data, including the Planck ERCSC data, were calculated. SED modelling methods are discussed, with an emphasis on proper physical modelling of the synchrotron bump using multiple components. Planck ERCSC data also suggest that the original accelerated electron energy spectrum could be much harder than commonly thought, with a power-law index around 1.5 instead of the canonical 2.5. The implications of this are discussed for the acceleration mechanisms effective in blazar shocks. Furthermore, in many cases the Planck data indicate that gamma-ray emission must originate in the same shocks that produce the radio emission.

  15. On the Vertical Distribution of Local and Remote Sources of Water for Precipitation

    NASA Technical Reports Server (NTRS)

    Bosilovich, Michael G.

    2001-01-01

    The vertical distribution of local and remote sources of water for precipitation and total column water over the United States are evaluated in a general circulation model simulation. The Goddard Earth Observing System (GEOS) general circulation model (GCM) includes passive constituent tracers to determine the geographical sources of the water in the column. Results show that the local percentage of precipitable water and local percentage of precipitation can be very different. The transport of water vapor from remote oceanic sources at mid and upper levels is important to the total water in the column over the central United States, while the access of locally evaporated water in convective precipitation processes is important to the local precipitation ratio. This result resembles the conceptual formulation of the convective parameterization. However, the formulations of simple models of precipitation recycling include the assumption that the ratio of the local water in the column is equal to the ratio of the local precipitation. The present results demonstrate the uncertainty in that assumption, as locally evaporated water is more concentrated near the surface.

  16. On the spatial distribution of the reflection sources of different latency components of otoacoustic emissions

    PubMed Central

    Sisto, Renata; Moleti, Arturo; Shera, Christopher A.

    2015-01-01

    The experimental observation of long- and short-latency components in both stimulus-frequency and transient-evoked otoacoustic emissions admits a comprehensive explanation within the coherent reflection mechanism, in a linear active transmission-line cochlear model. A local complex reflectivity function associated with roughness was defined and analyzed by varying the tuning factor of the model, systematically showing, for each frequency, a multiple-peak spatial structure, compatible with the observed multiple-latency structure of otoacoustic emissions. Although this spatial pattern and the relative peak intensities change with the chosen random roughness function, the multiple-peak structure is a reproducible feature of different “digital ears,” in good agreement with experimental data. If one computes the predicted transmission delays as a function of frequency and position for each source, one gets a good match to the latency-frequency patterns that are directly computed from synthesized otoacoustic spectra using time-frequency analysis. This result clarifies the role of the spatial distribution of the otoacoustic emission sources, further supporting the interpretation of different-latency otoacoustic components as due to reflection sources localized at different places along the basilar membrane. PMID:25698011

  17. Elemental distribution of metals in urban river sediments near an industrial effluent source.

    PubMed

    Tamim, Umma; Khan, Rahat; Jolly, Yeasmin Nahar; Fatema, Kanij; Das, Sopan; Naher, Kamrun; Islam, Mohammad Amirul; Islam, S M Azharul; Hossain, Syed Mohammod

    2016-07-01

    To study the compositional trends associated with the spatial and layer-wise distribution of heavy metals, as well as the sediment response to untreated chemical wastes, we have analyzed river (Buriganga, Bangladesh) sediments by instrumental neutron activation analysis (INAA) and energy dispersive X-ray fluorescence (EDXRF). Twenty-seven elements were determined in nine sediment samples: Na, Al, K, Ca, Sc, Ti, V, Cr, Mn, Fe, Co, Zn, As, Rb, Cs, La, Ce, Sm, Dy, Hf, Th and U by INAA, and Cu, Sr, Ba, Hg and Pb by EDXRF. The pollution level and the origin of pollutants were evaluated with the aid of the geo-accumulation index (Igeo), enrichment factor (EF), pollution load index (PLI) and inter-element correlation analysis. Major elements appear to be buffered even though the pollution level is severe, while the trace metals seem to be highly responsive. Among the heavy metals, Cr is the dominant pollutant, though the pollution level varies systematically with the sampling depth and the distance from the contamination source. The positive linear correlation between Cr and Zn (0.94) indicates a similar anthropogenic source(s) for these two metals, but the sediments of this study respond differently depending upon their geochemical behavior. Rare earth elements (here La, Ce, Sm and Dy), Th and U seem to have a crustal origin, and the Th/U ratio varies from 2.58 to 4.96.
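The pollution load index cited above is conventionally the geometric mean of the per-element contamination factors (CF = measured concentration / background concentration). A sketch with illustrative values, not the paper's data:

```python
import math

def pollution_load_index(concentrations, backgrounds):
    """PLI = (CF_1 * CF_2 * ... * CF_n)^(1/n), where each contamination
    factor CF_i = C_i / background_i; PLI > 1 suggests pollution."""
    cfs = [c / b for c, b in zip(concentrations, backgrounds)]
    return math.prod(cfs) ** (1.0 / len(cfs))

# Illustrative values (mg/kg), assumed for the example: Cr, Zn, Pb in
# sediment versus average shale backgrounds.
measured = [350.0, 180.0, 60.0]
background = [90.0, 95.0, 20.0]
pli = pollution_load_index(measured, background)
print(round(pli, 2))
```

A PLI above 1 from such a calculation would be read as an overall polluted site, consistent with how the index is used in the abstract.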

  18. Kr II and Xe II axial velocity distribution functions in a cross-field ion source

    NASA Astrophysics Data System (ADS)

    Lejeune, A.; Bourgeois, G.; Mazouffre, S.

    2012-07-01

    Laser-induced fluorescence measurements were carried out in a cross-field ion source to examine the behaviour of the axial ion velocity distribution functions (VDFs) in the expanding plasma. In the present paper, we focus on the axial VDFs of Kr II and Xe II ions. We examine the contour plots in a 1D-phase space (x,vx) representation in front of the exhaust channel and along the centerline of the ion source. The main ion beam, whose momentum corresponds to the ions that are accelerated through the whole potential drop, is observed. A secondary structure reveals the ions coming from the opposite side of the channel. We show that the formation of the neutralized ion flow is governed by the annular geometry. The assumption of a collisionless shock or a double layer due to supersonic beam interaction is not necessary. A non-negligible fraction of slow ions originates in local ionization or charge-exchange collision events between ions of the expanding plasma and atoms of the background residual gas. Slow ions that are produced near the centerline in the vicinity of the exit plane are accelerated toward the source body with a negative velocity, leading to high sputtering of the front face. By contrast, the ions that are produced in the vicinity of the channel exit plane are partially accelerated by the extended electric field.

  19. Round-robin differential-phase-shift quantum key distribution with heralded pair-coherent sources

    NASA Astrophysics Data System (ADS)

    Wang, Le; Zhao, Shengmei

    2017-04-01

    The round-robin differential-phase-shift (RRDPS) quantum key distribution (QKD) scheme provides an effective way to overcome signal disturbance from the transmission process. However, most RRDPS-QKD schemes use weak coherent pulses (WCPs) as a replacement for the perfect single-photon source. Considering that the heralded pair-coherent source (HPCS) can efficiently remove the shortcomings of WCPs, we propose an RRDPS-QKD scheme with an HPCS in this paper. Both the infinite-intensity decoy-state method and the practical three-intensity decoy-state method are adopted to discuss the tight bound on the key rate of the proposed scheme. The results show that the HPCS is a better candidate for replacing the perfect single-photon source: both the key rate and the transmission distance are greatly increased in comparison with the results for WCPs when the length of the pulse trains is small. Moreover, the performance of the proposed scheme using three-intensity decoy states is close to that using infinite-intensity decoy states when the length of the pulse trains is small.

  20. The implementation of non-condensable mass equations including a dissolved gas source term in WCOBRA/TRAC

    NASA Astrophysics Data System (ADS)

    Aumiller, David Lee, Jr.

    1997-12-01

    In the era of passively cooled reactors, condensation is playing a larger role in the emergency cooling of the reactor. With this added emphasis it is more important than ever to be able to accurately predict the condensation rate. It is well known that the presence of non-condensable gases will diminish this rate. This thesis describes the changes which were necessary to enable WCOBRA/TRAC to calculate the non-condensable distribution in the reactor system. WCOBRA/TRAC is Westinghouse's Large Break LOCA code. The ability to track both gaseous nitrogen and hydrogen was implemented in both portions of WCOBRA/TRAC. Furthermore, a mass equation was also added for dissolved hydrogen. This dissolved mass equation, when combined with the calculation of the dissolved hydrogen solubility, forms the basis of a gaseous hydrogen source term. This source had not previously been included in reactor safety codes. Several different models for this flashing mechanism are presented. The effect on the interfacial heat transfer for both portions of WCOBRA/TRAC is also discussed. This discussion primarily focuses on the changes required to maintain the proper driving forces for the interfacial heat transfer, as well as the changes required to assure conservation of energy in the new system. Several types of test problems are used to show the importance of the implemented models for the modelling of reactor transients. The most important effect is that by properly accounting for the effects of the non-condensables on condensation, the ability to model the impact of the ECCS injection during a Large Break LOCA is drastically improved. The ability of the code to properly track the dissolved hydrogen and its emergence from solution was demonstrated. Several different scenarios for the emergence of the hydrogen have been studied.
The results of these studies show that the hydrogen which is liberated has the ability to play a significant role in the progression of reactor
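The dissolved-gas source term described above can be caricatured as relaxation of the dissolved hydrogen inventory toward a Henry's-law solubility limit. The function below is a hypothetical sketch; the relaxation form, coefficient values, and time scale are assumptions for illustration, not WCOBRA/TRAC's actual closure models:

```python
def hydrogen_release_rate(c_dissolved, p_h2, henry_k, tau):
    """Volumetric mass source (kg/(m^3*s)) of gaseous H2 flashing out
    of solution when the liquid is supersaturated.

    c_dissolved : dissolved H2 concentration in the liquid [kg/m^3]
    p_h2        : H2 partial pressure over the liquid [Pa]
    henry_k     : assumed Henry's-law coefficient [kg/(m^3*Pa)]
    tau         : assumed relaxation time scale [s]
    """
    c_sat = henry_k * p_h2  # equilibrium solubility at this pressure
    # Only degassing is modeled: no source when undersaturated.
    return max(0.0, (c_dissolved - c_sat) / tau)

# Depressurization example (all values assumed): the saturation limit
# drops below the dissolved inventory, so hydrogen leaves solution.
rate = hydrogen_release_rate(c_dissolved=1.2e-3, p_h2=1.0e5,
                             henry_k=7.0e-9, tau=5.0)
```

During a blowdown, p_h2 falls, c_sat falls with it, and the supersaturation (c_dissolved - c_sat) drives the gaseous source term, which is the qualitative behavior the thesis models.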

  1. Distribution and Sources of Lead in Urban Soil in El Paso, Texas

    NASA Astrophysics Data System (ADS)

    Pingitore, N. E.; Amaya, M. A.; Clague, J.

    2005-05-01

    The geographic distribution of lead in El Paso soils is presented in maps based on 500 composite soil samples collected in the region. Each composite sample comprises equal volumes of samples taken from the public space in front of individual houses or structures around a single municipal block. The use of such composites highlights the distribution of lead at the neighborhood level, and de-emphasizes any anomalous elevated level associated with an individual house or structure. Lead levels are highest in the downtown commercial district, in the adjacent area to the east, which comprises an old central business, transport, and light industry complex, and to the west in the area of a century-old smelter, placed on standby six years ago. The continuity of this zone, and the age of its structures, make it difficult to differentiate lead sources. Lead values decrease systematically away from this urban core zone, with the lowest levels generally encountered in the peripheral, lightly populated developments and communities. This geographic distribution of Pb in soil is consistent with Pb measurements reported on particulate matter taken from four air monitoring stations during the 1990s. Soil data thus can complement air studies by providing an essentially infinite geographic network of sampling sites that, with varying accuracy, record and integrate air conditions over years and decades. Research supported by NIEHS Grant 1RO1-ES11367.

  2. Optimal operation management of fuel cell/wind/photovoltaic power sources connected to distribution networks

    NASA Astrophysics Data System (ADS)

    Niknam, Taher; Kavousifard, Abdollah; Tabatabaei, Sajad; Aghaei, Jamshid

    2011-10-01

    In this paper, a new multiobjective modified honey bee mating optimization (MHBMO) algorithm is presented to investigate the distribution feeder reconfiguration (DFR) problem considering renewable energy sources (RESs) (photovoltaics, fuel cells and wind energy) connected to the distribution network. The objective functions of the problem to be minimized are the electrical active power losses, the voltage deviations, the total electrical energy costs and the total emissions of RESs and substations. During the optimization process, the proposed algorithm finds a set of non-dominated (Pareto) optimal solutions which are stored in an external memory called the repository. Since the objective functions investigated are not the same, a fuzzy clustering algorithm is utilized to keep the size of the repository within the specified limits. Moreover, a fuzzy-based decision maker is adopted to select the 'best' compromise solution among the non-dominated optimal solutions of the multiobjective optimization problem. To verify the feasibility and effectiveness of the proposed algorithm, two standard distribution test systems are used as case studies.
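The repository of non-dominated solutions described above rests on the standard Pareto-dominance test (all objectives minimized). A minimal sketch of how such an external memory is maintained; the objective vectors (losses, voltage deviation, cost, emissions) are assumed example values, not results from the paper:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def update_repository(repository, candidate):
    """Return the repository after offering it a candidate: keep only
    non-dominated solutions, as the external memory of the search does."""
    if any(dominates(r, candidate) for r in repository):
        return repository  # candidate is dominated; repository unchanged
    # Drop members the candidate dominates, then add the candidate.
    return [r for r in repository if not dominates(candidate, r)] + [candidate]

# Objectives: (losses, voltage deviation, cost, emissions) -- assumed values.
rep = []
for sol in [(3.0, 0.02, 100.0, 5.0),
            (2.5, 0.03, 110.0, 4.0),
            (3.5, 0.04, 120.0, 6.0)]:
    rep = update_repository(rep, sol)
print(rep)
```

The third vector is dominated by the first and is discarded; the first two are mutually non-dominated and both survive, which is exactly the set a fuzzy decision maker would then choose among.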

  3. Toward a Mechanistic Source Term in Advanced Reactors: Characterization of Radionuclide Transport and Retention in a Sodium Cooled Fast Reactor

    SciTech Connect

    Brunett, Acacia J.; Bucknor, Matthew; Grabaskas, David

    2016-04-17

    A vital component of the U.S. reactor licensing process is an integrated safety analysis in which a source term representing the release of radionuclides during normal operation and accident sequences is analyzed. Historically, source term analyses have utilized bounding, deterministic assumptions regarding radionuclide release. However, advancements in technical capabilities and the knowledge state have enabled the development of more realistic and best-estimate retention and release models such that a mechanistic source term assessment can be expected to be a required component of future licensing of advanced reactors. Recently, as part of a Regulatory Technology Development Plan effort for sodium cooled fast reactors (SFRs), Argonne National Laboratory has investigated the current state of knowledge of potential source terms in an SFR via an extensive review of previous domestic experiments, accidents, and operation. As part of this work, the significant sources and transport processes of radionuclides in an SFR have been identified and characterized. This effort examines all stages of release and source term evolution, beginning with release from the fuel pin and ending with retention in containment. Radionuclide sources considered in this effort include releases originating both in-vessel (e.g. in-core fuel, primary sodium, cover gas cleanup system, etc.) and ex-vessel (e.g. spent fuel storage, handling, and movement). Releases resulting from a primary sodium fire are also considered as a potential source. For each release group, dominant transport phenomena are identified and qualitatively discussed. The key product of this effort was the development of concise, inclusive diagrams that illustrate the release and retention mechanisms at a high level, where unique schematics have been developed for in-vessel, ex-vessel and sodium fire releases. 
This review effort has also found that despite the substantial range of phenomena affecting radionuclide release, the

  4. Investigation of spatial distributions and sources of persistent organic pollutants (POPs) in a heavily polluted industrial region using tree components.

    PubMed

    Odabasi, Mustafa; Dumanoglu, Yetkin; Ozgunerge Falay, Ezgi; Tuna, Gizem; Altiok, Hasan; Kara, Melik; Bayram, Abdurrahman; Tolunay, Doganay; Elbir, Tolga

    2016-10-01

    Spatial distributions of persistent organic pollutants (POPs) were investigated in Iskenderun industrial region in Turkey. POP concentrations were measured in different tree components (i.e., pine needle, branch, bark, and stem) collected at several industrial and background sites (n = 27). Also, air, litter, and soil samples were analyzed to determine the relationship of atmospheric pollutants with tree components, litter, and soil. Spatial variation of measured concentrations and factor analysis showed that the iron-steel plants are the most important POP sources in Iskenderun area. Correlations of ambient air levels to those measured in soil, litter, and tree components were significant showing that POPs are exchanged between atmosphere and these compartments. Results have suggested that tree components, litter and soil could be used to determine the spatial distributions and sources of atmospheric POPs in a region. POP quantities (mg/ha) accumulated in different tree components, litter, and soil were also predicted. Generally, the highest quantities were accumulated by stem and needles. In terms of overall inventory, including trees, litter and soil, the highest accumulated quantities were found in soil followed by trees and litter, indicating that in addition to soil, vegetation is a notable reservoir accumulating POPs. Ambient air POP concentrations were also estimated using a bark/air partitioning model. The estimated/measured ratios were close to 1.0 for several compounds and the results showed that the atmospheric POP concentrations could be estimated from the bark measurements within factors of 0.23-3.07, 1.02-6.67, 0.63-7.44, 1.07-3.37 for polycyclic aromatic hydrocarbons, polychlorinated biphenyls, polychlorinated naphthalenes, and polybrominated diphenyl ethers, respectively.

  5. Spectral distribution and wave function of electrons emitted from a single-particle source in the quantum Hall regime

    NASA Astrophysics Data System (ADS)

    Battista, F.; Samuelsson, P.

    2012-02-01

    We investigate theoretically a scheme for spectroscopy of electrons emitted by an on-demand single-particle source. The total system, with an electron turnstile source and a single-level quantum dot spectrometer, is implemented with edge states in a conductor in the quantum Hall regime. Employing a Floquet scattering approach, the source and the spectrometer are analyzed within a single theoretical framework. The nonequilibrium distribution of the emitted electrons is analyzed via the direct current at the dot spectrometer. In the adiabatic and intermediate source frequency regimes, the distribution is found to be strongly peaked around the active resonant level of the turnstile. At high frequencies the distribution is split up into a set of fringes, resulting from the interplay of resonant transport through the turnstile and absorption or emission of individual Floquet quanta. For ideal source operation, with exactly one electron emitted per cycle, an expression for the single-electron wave function is derived.

  6. Long Term Temporal and Spectral Evolution of Point Sources in Nearby Elliptical Galaxies

    NASA Astrophysics Data System (ADS)

    Durmus, D.; Guver, T.; Hudaverdi, M.; Sert, H.; Balman, Solen

    2016-06-01

    We present the results of an archival study of all the point sources detected in the lines of sight of the elliptical galaxies NGC 4472, NGC 4552, NGC 4649, M32, Maffei 1, NGC 3379, IC 1101, M87, NGC 4477, NGC 4621, and NGC 5128, with both the Chandra and XMM-Newton observatories. Specifically, we studied the temporal and spectral evolution of these point sources over the course of the observations of the galaxies, mostly covering the 2000 - 2015 period. In this poster we present the first results of this study, which allows us to further constrain the X-ray source population in nearby elliptical galaxies and also better understand the nature of individual point sources.

  7. Assessing the altitude effect on distributions of volatile organic compounds from different sources by principal component analysis.

    PubMed

    Yang, Jhih-Jhe; Liu, Chih-Chung; Chen, Wei-Hsiang; Yuan, Chung-Shin; Lin, Chitsan

    2013-05-01

    Emissions of volatile organic compounds (VOCs), particularly those from industrial sources, have been of substantial concern because they have had adverse effects on the nearby environment and human health. In this study, the effect of altitude on the distributions of VOCs from petrochemical industrial sources was studied by analyzing the VOC concentrations at ground level and three different altitudes (100, 300, and 500 m above the ground) during three monitoring seasons from 2009 to 2010 and assessing the results by principal component analysis (PCA) and cluster analysis. Kaohsiung city in southern Taiwan, known for its high levels of air contaminants due to the many pollution-intensive industries in the city, was selected as the area to be examined. Of the various types of aliphatic and aromatic hydrocarbons detected, acetone and toluene were the dominant VOC species with relatively high concentrations. Through PCA and cluster analysis, aromatic and aliphatic compounds were found to be the main VOCs accounting for the maximum variance of the data observed at ground level and high altitude, respectively. The presence of mono-aromatic hydrocarbons at ground level suggested an important contribution from traffic, while the presence of both saturated and unsaturated hydrocarbons at high altitudes was likely to be due to the local petrochemical industries, given the heights of flare stacks in the examined areas and the short lifetimes of unsaturated hydrocarbons such as alkenes. 3-D loading plots exhibited clear grouping of the VOCs in terms of their chemical structures and/or physicochemical characteristics for the data at ground level and 500 m, and less clear differentiation for the data at 100 and 300 m, possibly resulting from atmospheric dispersion and mixing. The influence of altitude on the VOC distributions appeared not to be negligible and was greatly impacted by the location (e.g., height) of emission sources and the physicochemical properties of the VOCs.
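PCA as used above reduces a samples-by-species concentration matrix to a few components whose loadings group the VOC species. A minimal NumPy sketch via SVD of the autoscaled data; the matrix size, random values, and species count are assumptions, not the study's measurements:

```python
import numpy as np

def pca(X, n_components=2):
    """PCA via SVD of the autoscaled data matrix
    (rows = air samples, columns = VOC species)."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)  # autoscale each species
    U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
    scores = U[:, :n_components] * S[:n_components]   # sample scores
    loadings = Vt[:n_components].T                    # species loadings
    explained = S**2 / (S**2).sum()                   # variance fractions
    return scores, loadings, explained[:n_components]

# Illustrative matrix: 6 samples x 4 VOC species (values assumed).
rng = np.random.default_rng(1)
X = rng.lognormal(mean=1.0, sigma=0.5, size=(6, 4))
scores, loadings, expl = pca(X, n_components=2)
print(scores.shape, loadings.shape, float(expl.sum()))
```

Plotting the two loading columns against each other gives the kind of loading plot the abstract describes, where species that co-vary (and so plausibly share a source) cluster together.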

  8. [Distribution Characteristics and Source Analysis of Dustfall Trace Elements During Winter in Beijing].

    PubMed

    Xiong, Qiu-lin; Zhao, Wen-ji; Guo, Xiao-yu; Chen, Fan-tao; Shu, Tong-tong; Zheng, Xiao-xia; Zhao, Wen-hui

    2015-08-01

    The dustfall content is one of the evaluation indexes of atmospheric pollution. Trace elements especially heavy metals in dustfall can lead to risks to ecological environment and human health. In order to study the distribution characteristics of trace elements, heavy metals pollution and their sources in winter atmospheric dust, 49 dustfall samples were collected in Beijing City and nearby during November 2013 to March 2014. Then the contents (ma