Science.gov

Sample records for distributed source term

  1. Energy decay for viscoelastic plates with distributed delay and source term

    NASA Astrophysics Data System (ADS)

    Mustafa, Muhammad I.; Kafini, Mohammad

    2016-06-01

    In this paper we consider a viscoelastic plate equation with distributed delay and source term. Under suitable conditions on the delay and source term, we establish an explicit and general decay rate result without imposing restrictive assumptions on the behavior of the relaxation function at infinity. Our result allows a wider class of relaxation functions and improves earlier results in the literature.

  2. DUSTMS-D: DISPOSAL UNIT SOURCE TERM - MULTIPLE SPECIES - DISTRIBUTED FAILURE DATA INPUT GUIDE.

    SciTech Connect

    SULLIVAN, T.M.

    2006-01-01

Performance assessment of a low-level waste (LLW) disposal facility begins with an estimation of the rate at which radionuclides migrate out of the facility (i.e., the source term). The focus of this work is to develop a methodology for calculating the source term. In general, the source term is influenced by the radionuclide inventory, the wasteforms and containers used to dispose of the inventory, and the physical processes that lead to release from the facility (fluid flow, container degradation, wasteform leaching, and radionuclide transport). Many of these physical processes are influenced by the design of the disposal facility (e.g., how the engineered barriers control infiltration of water). The complexity of the problem and the absence of appropriate data prevent development of an entirely mechanistic representation of radionuclide release from a disposal facility. Typically, a number of assumptions, based on knowledge of the disposal system, are used to simplify the problem. This has been done and the resulting models have been incorporated into the computer code DUST-MS (Disposal Unit Source Term-Multiple Species). The DUST-MS computer code is designed to model water flow, container degradation, release of contaminants from the wasteform to the contacting solution and transport through the subsurface media. Water flow through the facility over time is modeled using tabular input. Container degradation models include three types of failure rates: (a) instantaneous (all containers in a control volume fail at once), (b) uniformly distributed failures (containers fail at a linear rate between a specified starting and ending time), and (c) Gaussian failure rates (containers fail at a rate determined by a mean failure time, standard deviation, and Gaussian distribution). Wasteform release models include four release mechanisms: (a) rinse with partitioning (inventory is released instantly upon container failure subject to equilibrium partitioning (sorption) with
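
    The three container-failure models listed above reduce to simple cumulative failure-fraction curves. The sketch below is a minimal, hypothetical illustration of those three models (instantaneous, uniformly distributed, and Gaussian), not the DUST-MS implementation itself; the function name and parameter values are invented for the example.

```python
import numpy as np
from scipy.stats import norm

def failed_fraction(t, model, t_fail=None, t_start=None, t_end=None,
                    t_mean=None, t_sigma=None):
    """Cumulative fraction of failed containers at time(s) t.

    Hypothetical illustration of three failure models:
    'instantaneous', 'uniform', and 'gaussian'.
    """
    t = np.asarray(t, dtype=float)
    if model == "instantaneous":   # all containers fail at t_fail
        return (t >= t_fail).astype(float)
    if model == "uniform":         # linear failure rate between t_start and t_end
        return np.clip((t - t_start) / (t_end - t_start), 0.0, 1.0)
    if model == "gaussian":        # failure times normally distributed in time
        return norm.cdf(t, loc=t_mean, scale=t_sigma)
    raise ValueError(f"unknown model: {model}")

# Example: fraction of containers failed at 100, 300, and 500 years for each model.
times = [100.0, 300.0, 500.0]
print(failed_fraction(times, "instantaneous", t_fail=300.0))
print(failed_fraction(times, "uniform", t_start=200.0, t_end=400.0))
print(failed_fraction(times, "gaussian", t_mean=300.0, t_sigma=50.0))
```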

  3. Spatial distribution of HTO activity in unsaturated soil depth in the vicinity of long-term release source

    SciTech Connect

    Golubev, A.; Golubeva, V.; Mavrin, S.

    2015-03-15

Previous studies reported a correlation between the HTO activity distribution in the unsaturated soil layer and atmospheric long-term releases of HTO in the vicinity of the Savannah River Site. The Tritium Working Group of the BIOMASS Programme has performed a model-model intercomparison study of HTO transport from the atmosphere to unsaturated soil and has evaluated the HTO activity distribution in the unsaturated soil layer in the vicinity of permanent atmospheric sources. The Tritium Working Group also reported such a correlation; however, it concluded that experimental data sets are needed to confirm this correlation and to validate the appropriate computer models. (authors)

  4. Chernobyl source term estimation

    SciTech Connect

    Gudiksen, P.H.; Harvey, T.F.; Lange, R.

    1990-09-01

The Chernobyl source term available for long-range transport was estimated by integration of radiological measurements with atmospheric dispersion modeling and by reactor core radionuclide inventory estimation in conjunction with WASH-1400 release fractions associated with specific chemical groups. The model simulations revealed that the radioactive cloud became segmented during the first day, with the lower section heading toward Scandinavia and the upper part heading in a southeasterly direction with subsequent transport across Asia to Japan, the North Pacific, and the west coast of North America. By optimizing the agreement between the observed cloud arrival times and durations of peak concentrations measured over Europe, Japan, Kuwait, and the US and the model-predicted concentrations, it was possible to derive source term estimates for those radionuclides measured in airborne radioactivity. This was extended to radionuclides that were largely unmeasured in the environment by performing a reactor core radionuclide inventory analysis to obtain release fractions for the various chemical transport groups. These analyses indicated that essentially all of the noble gases, 60% of the radioiodines, 40% of the radiocesium, 10% of the tellurium, and about 1% or less of the more refractory elements were released. These estimates are in excellent agreement with those obtained on the basis of worldwide deposition measurements. The Chernobyl source term was several orders of magnitude greater than those associated with the Windscale and TMI reactor accidents. However, the ¹³⁷Cs from the Chernobyl event is about 6% of that released by the US and USSR atmospheric nuclear weapon tests, while the ¹³¹I and ⁹⁰Sr released by the Chernobyl accident were only about 0.1% of that released by the weapon tests. 13 refs., 2 figs., 7 tabs.
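
    The inventory-times-release-fraction step described above is easy to illustrate. The sketch below uses the release fractions quoted in the abstract; the core inventory numbers are placeholders for illustration, not Chernobyl data.

```python
# Release fractions by chemical group, as quoted in the abstract.
release_fractions = {
    "noble gases": 1.00,   # essentially all released
    "radioiodines": 0.60,
    "radiocesium": 0.40,
    "tellurium": 0.10,
    "refractory": 0.01,    # about 1% or less
}

# Core inventories (Bq) are placeholder values for illustration only.
core_inventory = {
    "noble gases": 6.5e18,
    "radioiodines": 3.2e18,
    "radiocesium": 2.9e17,
    "tellurium": 1.1e18,
    "refractory": 4.0e18,
}

# Source term available for transport = inventory x release fraction per group.
source_term = {g: core_inventory[g] * release_fractions[g] for g in core_inventory}
for group, activity in source_term.items():
    print(f"{group:12s}: {activity:.2e} Bq released")
```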

  5. Approximate factorization with source terms

    NASA Technical Reports Server (NTRS)

    Shih, T. I.-P.; Chyu, W. J.

    1991-01-01

A comparative evaluation is made of three methodologies to determine which offers the smallest approximate factorization error. While two of these methods are found to lead to more efficient algorithms in cases where the factors that do not contain source terms can be diagonalized, the third method generates the lowest approximate factorization error. This method may be preferred when the norms of the source terms are large and transient solutions are of interest.

  6. Long-term measurements of particle number size distributions and the relationships with air mass history and source apportionment in the summer of Beijing

    NASA Astrophysics Data System (ADS)

    Wang, Z. B.; Hu, M.; Wu, Z. J.; Yue, D. L.; He, L. Y.; Huang, X. F.; Liu, X. G.; Wiedensohler, A.

    2013-10-01

A series of long-term and temporary measurements were conducted to study the improvement of air quality in Beijing during the Olympic Games period (8-24 August 2008). To evaluate the actions taken to improve air quality, comparisons of particle number and volume size distributions of August 2008 and 2004-2007 were performed. The total particle number and volume concentrations were 14 000 cm⁻³ and 37 μm³ cm⁻³ in August 2008, respectively. These were reductions of 41% and 35% compared with the mean values of August 2004-2007. A cluster analysis of air mass history and a source apportionment were performed to explore the reasons for the reduction of particle concentrations. Back trajectories were classified into five major clusters. Air masses from the south are consistently associated with pollution events during the summertime in Beijing. In August 2008, the frequency of air masses arriving from the south was 1.3 times higher than the average of the previous years, which however did not result in elevated particle volume concentrations in Beijing. Therefore, the reduced particle number and volume concentrations during the 2008 Beijing Olympic Games cannot be explained by meteorological conditions alone. Four factors influencing particle concentrations were identified using a positive matrix factorization (PMF) model: local and remote traffic emissions, combustion sources, and secondary transformation. The reductions of these four sources were calculated to be 47%, 44%, 43% and 30%, respectively. The significant reductions of particle number and volume concentrations may be attributed to the actions taken, which focused on primary emissions, especially traffic and combustion sources.
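
    As a rough illustration of the factor-analytic step described above, the sketch below uses scikit-learn's non-negative matrix factorization as a stand-in for the PMF model (true PMF also weights the fit by measurement uncertainties, which is omitted here); the synthetic data matrix and the number of factors are assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Synthetic data matrix X: rows = hourly size-distribution samples,
# columns = particle-size bins (a stand-in for the measured data set).
n_samples, n_bins, n_factors = 500, 30, 4
true_profiles = np.abs(rng.normal(size=(n_factors, n_bins)))
true_contrib = np.abs(rng.normal(size=(n_samples, n_factors)))
X = true_contrib @ true_profiles + 0.01 * np.abs(rng.normal(size=(n_samples, n_bins)))

# NMF decomposes X ~ G @ F with non-negative factor contributions G
# and factor profiles F, analogous to the PMF factorization.
model = NMF(n_components=n_factors, init="nndsvda", max_iter=1000, random_state=0)
G = model.fit_transform(X)   # time series of factor contributions
F = model.components_        # factor profiles (to be identified with sources)

# Relative contribution of each factor to the total particle number.
shares = G.sum(axis=0) * F.sum(axis=1)
print(shares / shares.sum())
```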

  7. Long-term measurements of particle number size distributions and the relationships with air mass history and source apportionment in the summer of Beijing

    NASA Astrophysics Data System (ADS)

    Wang, Z. B.; Hu, M.; Wu, Z. J.; Yue, D. L.; He, L. Y.; Huang, X. F.; Liu, X. G.; Wiedensohler, A.

    2013-02-01

A series of long-term and temporary measurements were conducted to study the improvement of air quality in Beijing during the Olympic Games period (8-24 August 2008). To evaluate the actions taken to improve air quality, comparisons of particle number and volume size distributions of August 2008 and 2004-2007 were performed. The total particle number and volume concentrations were 14 000 cm⁻³ and 37 μm³ cm⁻³ in August 2008, respectively. These were reductions of 41% and 35% compared with the mean values of August 2004-2007. A cluster analysis of air mass history and a source apportionment were performed to explore the reasons for the reduction of particle concentrations. Back trajectories were classified into five major clusters. Air masses from the south are consistently associated with pollution events during the summertime in Beijing. In August 2008, the frequency of air masses arriving from the south was twice as high as the average of the previous years; however, these southerly air masses did not result in elevated particle volume concentrations in Beijing. This result implied that air mass history was not the key factor explaining the reduced particle number and volume concentrations during the Beijing 2008 Olympic Games. Four factors influencing particle concentrations were identified using a positive matrix factorization (PMF) model: local and remote traffic emissions, combustion sources, and secondary transformation. The reductions of these four sources were calculated to be 47%, 44%, 43% and 30%, respectively. The significant reductions of particle number and volume concentrations may be attributed to the actions taken, which focused on primary emissions, especially traffic and combustion sources.

  8. Infrared image processing devoted to thermal non-contact characterization-Applications to Non-Destructive Evaluation, Microfluidics and 2D source term distribution for multispectral tomography

    NASA Astrophysics Data System (ADS)

    Batsale, Jean-Christophe; Pradere, Christophe

    2015-11-01

The cost of IR cameras keeps decreasing. Beyond the preliminary calibration step and the overall instrumentation, infrared image processing is then one of the key steps for applications in very broad domains. Generally, the IR images come from the transient temperature field related to the emission of a black surface in response to external or internal heating (active IR thermography). The first applications were devoted to the so-called thermal Non-Destructive Evaluation methods, which consider a thin sample and 1D transient heat diffusion through the sample (transverse diffusion). With simplified assumptions on the transverse diffusion, in-plane diffusion and transport phenomena can also be considered. A general equation can then be applied to balance the heat transfer at the pixel scale, or between groups of pixels, in order to estimate several fields of thermophysical properties (heterogeneous in-plane diffusivity fields, flow distributions, source terms). There are many possible strategies for processing the large amount of space- and time-distributed data (prior integral transformation of the images, compression, elimination of non-useful areas...), generally based on the need to analyse the space and time derivatives of the temperature field. Several illustrative examples related to the Non-Destructive Evaluation of heterogeneous solids, the thermal characterization of chemical reactions in microfluidic channels and the design of systems for multispectral tomography will be presented.
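
    The pixel-scale heat-balance idea can be illustrated with a simple estimate of in-plane diffusivity from an image sequence: for a thin sample with purely in-plane diffusion, ∂T/∂t ≈ a ∇²T, so regressing the time derivative on the discrete Laplacian at each pixel yields a local diffusivity estimate. The sketch below is a minimal illustration on synthetic data under that assumption, not the authors' processing chain.

```python
import numpy as np

def estimate_diffusivity(T, dt, dx):
    """Per-pixel least-squares estimate of in-plane diffusivity from an image
    sequence T[t, y, x], assuming dT/dt = a * laplacian(T)."""
    dTdt = (T[2:] - T[:-2]) / (2 * dt)                    # centered time derivative
    lap = (np.roll(T, 1, 1) + np.roll(T, -1, 1) +
           np.roll(T, 1, 2) + np.roll(T, -1, 2) - 4 * T) / dx**2
    lap = lap[1:-1]                                       # align with dTdt
    return (dTdt * lap).sum(axis=0) / ((lap * lap).sum(axis=0) + 1e-12)

# Synthetic check: diffuse a hot spot with a known diffusivity a_true.
a_true, dt, dx, n = 1e-5, 0.02, 1e-3, 200
T = np.zeros((n, 64, 64))
T[0, 32, 32] = 1.0
for k in range(n - 1):
    lap = (np.roll(T[k], 1, 0) + np.roll(T[k], -1, 0) +
           np.roll(T[k], 1, 1) + np.roll(T[k], -1, 1) - 4 * T[k]) / dx**2
    T[k + 1] = T[k] + dt * a_true * lap                   # explicit diffusion step

a_est = estimate_diffusivity(T, dt, dx)
print(a_est[30:35, 30:35].mean())   # close to a_true where the signal is strong
```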

  9. Long-Term Distribution and Transport of Nitrate and Ammonium Within a Groundwater Sewage Plume, Cape Cod, MA, After Removal of the Contaminant Source.

    NASA Astrophysics Data System (ADS)

    Repert, D. A.; Smith, R. L.

    2002-12-01

    Disposal of treated sewage for 60 yrs. onto infiltration beds at a site on Cape Cod, MA produced a groundwater contaminant plume >6 km long. The plume was characterized by an anoxic ammonium-containing core, surrounded by an oxic-suboxic outer zone within the sand and gravel aquifer. In Dec. 1995 the sewage treatment facility ceased operation. A long-term study to characterize the distribution of sewage plume constituents was conducted along a 500 m-long transect (source to 3 yrs. groundwater travel distance). Prior to sewage-disposal cessation, total inorganic N within 30 m vertical profiles decreased from 6.6 moles N/m2 (92% NO3-, 8% NH4+) at the point of discharge to 3.3 moles N/m2 (77% NO3-, 23% NH4+) at the furthest point along the transect. Post-cessation nitrate concentrations increased within the first 6 mo. and then gradually decreased. The nitrate decrease was accompanied by an initial nitrite increase, an indication that denitrification was reducing nitrate after the oxygenated sewage discharge was discontinued. There was also an apparent increase in ammonium concentration in the first 6 mo. after cessation. Previous laboratory experiments on pre-cessation cores showed that nitrification was important in converting sorbed ammonium to nitrate under the sewage beds. However, with the removal of the oxygenated sewage source, nitrification ceased, allowing ammonium to initially increase. This increase was correlated with dissolved organic carbon concentrations within the groundwater. Ammonium concentrations decreased dramatically after a year, but subsequently increased in the core of the plume to pre-cessation levels through mineralization of organic N. Recent laboratory core experiments and extractions show that there is a large pool of sorbed organic carbon, although dissolved organic carbon concentrations have been consistently less than 3 mg/L for 6 yrs. Seven yrs. after cessation of the sewage disposal, there is still a significant amount (0.6 moles N

  10. Phase 1 immobilized low-activity waste operational source term

    SciTech Connect

    Burbank, D.A.

    1998-03-06

    This report presents an engineering analysis of the Phase 1 privatization feeds to establish an operational source term for storage and disposal of immobilized low-activity waste packages at the Hanford Site. The source term information is needed to establish a preliminary estimate of the numbers of remote-handled and contact-handled waste packages. A discussion of the uncertainties and their impact on the source term and waste package distribution is also presented. It should be noted that this study is concerned with operational impacts only. Source terms used for accident scenarios would differ due to alpha and beta radiation which were not significant in this study.

  11. HTGR Mechanistic Source Terms White Paper

    SciTech Connect

    Wayne Moe

    2010-07-01

The primary purposes of this white paper are: (1) to describe the proposed approach for developing event-specific mechanistic source terms for HTGR design and licensing, (2) to describe the technology development programs required to validate the design methods used to predict these mechanistic source terms, and (3) to obtain agreement from the NRC that, subject to appropriate validation through the technology development program, the approach for developing event-specific mechanistic source terms is acceptable.

  12. Source term calculations for assessing radiation dose to equipment

    SciTech Connect

    Denning, R.S.; Freeman-Kelly, R.; Cybulskis, P.; Curtis, L.A.

    1989-07-01

This study examines results of analyses performed with the Source Term Code Package to develop updated source terms using NUREG-0956 methods. The updated source terms are to be used to assess the adequacy of current regulatory source terms used as the basis for equipment qualification. Time-dependent locational distributions of radionuclides within a containment following a severe accident have been developed. The Surry reactor has been selected in this study as representative of PWR containment designs. Similarly, the Peach Bottom reactor has been used to examine radionuclide distributions in boiling water reactors. The time-dependent inventory of each key radionuclide is provided in terms of its activity in curies. The data are to be used by Sandia National Laboratories to perform shielding analyses to estimate radiation dose to equipment in each containment design. See NUREG/CR-5175, ''Beta and Gamma Dose Calculations for PWR and BWR Containments.'' 6 refs., 11 tabs.

  13. Calculation of source terms for NUREG-1150

    SciTech Connect

    Breeding, R.J.; Williams, D.C.; Murfin, W.B.; Amos, C.N.; Helton, J.C.

    1987-10-01

    The source terms estimated for NUREG-1150 are generally based on the Source Term Code Package (STCP), but the actual source term calculations used in computing risk are performed by much smaller codes which are specific to each plant. This was done because the method of estimating the uncertainty in risk for NUREG-1150 requires hundreds of source term calculations for each accident sequence. This is clearly impossible with a large, detailed code like the STCP. The small plant-specific codes are based on simple algorithms and utilize adjustable parameters. The values of the parameters appearing in these codes are derived from the available STCP results. To determine the uncertainty in the estimation of the source terms, these parameters were varied as specified by an expert review group. This method was used to account for the uncertainties in the STCP results and the uncertainties in phenomena not considered by the STCP.
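
    The approach described above, small parametric codes whose adjustable parameters are varied to propagate uncertainty, can be caricatured as Monte Carlo sampling over expert-specified parameter ranges. The sketch below is a generic illustration with invented parameter names and a placeholder algebraic release model, not one of the NUREG-1150 plant-specific codes.

```python
import numpy as np

rng = np.random.default_rng(42)

# Expert-specified ranges for adjustable parameters (purely illustrative names).
param_ranges = {
    "core_release_fraction": (0.1, 0.9),
    "containment_retention": (0.5, 0.99),
    "revaporization_factor": (1.0, 2.0),
}

def release_fraction(p):
    """Placeholder algebraic source-term model: environmental release fraction."""
    return (p["core_release_fraction"]
            * (1.0 - p["containment_retention"])
            * p["revaporization_factor"])

# Sample the parameters and propagate them through the simple model.
n = 10_000
samples = {k: rng.uniform(lo, hi, n) for k, (lo, hi) in param_ranges.items()}
releases = release_fraction(samples)

print("mean release fraction :", releases.mean())
print("5th-95th percentile   :", np.percentile(releases, [5, 95]))
```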

  14. SOURCE TERMS FOR HLW GLASS CANISTERS

    SciTech Connect

    J.S. Tang

    2000-08-15

This calculation is prepared by the Monitored Geologic Repository (MGR) Waste Package Design Section. The objective of this calculation is to determine the source terms that include radionuclide inventory, decay heat, and radiation sources due to gamma rays and neutrons for the high-level radioactive waste (HLW) from the West Valley Demonstration Project (WVDP), Savannah River Site (SRS), Hanford Site (HS), and Idaho National Engineering and Environmental Laboratory (INEEL). This calculation also determines the source terms of the canister containing the SRS HLW glass and immobilized plutonium. The scope of this calculation is limited to source terms for a time period out to one million years. The results of this calculation may be used to carry out performance assessment of the potential repository and to evaluate radiation environments surrounding the waste packages (WPs). This calculation was performed in accordance with the Development Plan ''Source Terms for HLW Glass Canisters'' (Ref. 7.24).

  15. Mechanistic facility safety and source term analysis

    SciTech Connect

    PLYS, M.G.

    1999-06-09

A PC-based computer program was created for facility safety and source term analysis at Hanford. The program has been successfully applied to mechanistic prediction of source terms from chemical reactions in underground storage tanks, hydrogen combustion in double contained receiver tanks, and process evaluation including the potential for runaway reactions in spent nuclear fuel processing. Model features include user-defined facility rooms, flow path geometry, and heat conductors; user-defined non-ideal vapor and aerosol species; pressure- and density-driven gas flows; aerosol transport and deposition; and structure to accommodate facility-specific source terms. Example applications are presented here.

  16. Assessing sensitivity of source term estimation

    NASA Astrophysics Data System (ADS)

    Long, Kerrie J.; Haupt, Sue Ellen; Young, George S.

    2010-04-01

    Source term estimation algorithms compute unknown atmospheric transport and dispersion modeling variables from concentration observations made by sensors in the field. Insufficient spatial and temporal resolution in the meteorological data as well as inherent uncertainty in the wind field data make source term estimation and the prediction of subsequent transport and dispersion extremely difficult. This work addresses the question: how many sensors are necessary in order to successfully estimate the source term and meteorological variables required for atmospheric transport and dispersion modeling? The source term estimation system presented here uses a robust optimization technique - a genetic algorithm (GA) - to find the combination of source location, source height, source strength, surface wind direction, surface wind speed, and time of release that produces a concentration field that best matches the sensor observations. The approach is validated using the Gaussian puff as the dispersion model in identical twin numerical experiments. The limits of the system are tested by incorporating additive and multiplicative noise into the synthetic data. The minimum requirements for data quantity and quality are determined by an extensive grid sensitivity analysis. Finally, a metric is developed for quantifying the minimum number of sensors necessary to accurately estimate the source term and to obtain the relevant wind information.
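
    The optimization loop described above (a forward dispersion model nested inside a heuristic search) can be sketched compactly. Below, a simplified two-dimensional plume surrogate stands in for the Gaussian puff model, and scipy's differential_evolution stands in for the genetic algorithm; the sensor layout, lateral spread rate, and parameter bounds are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import differential_evolution

def plume(params, xs, ys):
    """Simplified 2D plume surrogate (not the full Gaussian puff equations).
    params = (x0, y0, q, wind_dir_deg); the lateral spread rate is assumed."""
    x0, y0, q, wd = params
    theta = np.deg2rad(wd)
    dx, dy = xs - x0, ys - y0
    down = dx * np.cos(theta) + dy * np.sin(theta)      # downwind distance
    cross = -dx * np.sin(theta) + dy * np.cos(theta)    # crosswind distance
    down = np.where(down > 1.0, down, np.nan)           # no concentration upwind
    sig_y = 0.08 * down                                 # assumed linear spread
    c = q / (2 * np.pi * sig_y**2) * np.exp(-0.5 * (cross / sig_y) ** 2)
    return np.nan_to_num(c)

# Identical-twin setup: synthetic truth plus noisy sensor observations.
rng = np.random.default_rng(1)
xs, ys = np.meshgrid(np.linspace(100, 900, 5), np.linspace(-400, 400, 5))
xs, ys = xs.ravel(), ys.ravel()
truth = (0.0, 50.0, 2.0, 10.0)                          # x0, y0, strength, wind dir
obs = plume(truth, xs, ys) * (1 + 0.05 * rng.normal(size=xs.size))

def cost(params):
    return np.sum((plume(params, xs, ys) - obs) ** 2)

bounds = [(-200, 200), (-200, 200), (0.1, 10.0), (-45, 45)]
result = differential_evolution(cost, bounds, seed=1, tol=1e-8)
print("estimated (x0, y0, q, wind_dir):", np.round(result.x, 2))
```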

  17. Dose distributions in regions containing beta sources: Irregularly shaped source distributions in homogeneous media

    SciTech Connect

Werner, B.L.

    1991-11-01

    Methods are introduced by which dose rate distributions due to nonuniform, irregularly shaped distributions of beta emitters can be calculated using dose rate distributions for uniform, spherical source distributions. The dose rate distributions can be written in the MIRD formalism.

  18. Subsurface Shielding Source Term Specification Calculation

    SciTech Connect

    S.Su

    2001-04-12

    The purpose of this calculation is to establish appropriate and defensible waste-package radiation source terms for use in repository subsurface shielding design. This calculation supports the shielding design for the waste emplacement and retrieval system, and subsurface facility system. The objective is to identify the limiting waste package and specify its associated source terms including source strengths and energy spectra. Consistent with the Technical Work Plan for Subsurface Design Section FY 01 Work Activities (CRWMS M&O 2001, p. 15), the scope of work includes the following: (1) Review source terms generated by the Waste Package Department (WPD) for various waste forms and waste package types, and compile them for shielding-specific applications. (2) Determine acceptable waste package specific source terms for use in subsurface shielding design, using a reasonable and defensible methodology that is not unduly conservative. This calculation is associated with the engineering and design activity for the waste emplacement and retrieval system, and subsurface facility system. The technical work plan for this calculation is provided in CRWMS M&O 2001. Development and performance of this calculation conforms to the procedure, AP-3.12Q, Calculations.

  19. BWR Source Term Generation and Evaluation

    SciTech Connect

    J.C. Ryman

    2003-07-31

    This calculation is a revision of a previous calculation (Ref. 7.5) that bears the same title and has the document identifier BBAC00000-01717-0210-00006 REV 01. The purpose of this revision is to remove TBV (to-be-verified) -41 10 associated with the output files of the previous version (Ref. 7.30). The purpose of this and the previous calculation is to generate source terms for a representative boiling water reactor (BWR) spent nuclear fuel (SNF) assembly for the first one million years after the SNF is discharged from the reactors. This calculation includes an examination of several ways to represent BWR assemblies and operating conditions in SAS2H in order to quantify the effects these representations may have on source terms. These source terms provide information characterizing the neutron and gamma spectra in particles per second, the decay heat in watts, and radionuclide inventories in curies. Source terms are generated for a range of burnups and enrichments (see Table 2) that are representative of the waste stream and stainless steel (SS) clad assemblies. During this revision, it was determined that the burnups used for the computer runs of the previous revision were actually about 1.7% less than the stated, or nominal, burnups. See Section 6.6 for a discussion of how to account for this effect before using any source terms from this calculation. The source term due to the activation of corrosion products deposited on the surfaces of the assembly from the coolant is also calculated. The results of this calculation support many areas of the Monitored Geologic Repository (MGR), which include thermal evaluation, radiation dose determination, radiological safety analyses, surface and subsurface facility designs, and total system performance assessment. This includes MGR items classified as Quality Level 1, for example, the Uncanistered Spent Nuclear Fuel Disposal Container (Ref. 7.27, page 7). Therefore, this calculation is subject to the requirements of the

  20. Hazardous constituent source term. Revision 2

    SciTech Connect

    Not Available

    1994-11-17

    The Department of Energy (DOE) has several facilities that either generate and/or store transuranic (TRU)-waste from weapons program research and production. Much of this waste also contains hazardous waste constituents as regulated under Subtitle C of the Resource Conservation and Recovery Act (RCRA). Toxicity characteristic metals in the waste principally include lead, occurring in leaded rubber gloves and shielding. Other RCRA metals may occur as contaminants in pyrochemical salt, soil, debris, and sludge and solidified liquids, as well as in equipment resulting from decontamination and decommissioning activities. Volatile organic compounds (VOCS) contaminate many waste forms as a residue adsorbed on surfaces or occur in sludge and solidified liquids. Due to the presence of these hazardous constituents, applicable disposal regulations include land disposal restrictions established by Hazardous and Solid Waste Amendments (HSWA). The DOE plans to dispose of TRU-mixed waste from the weapons program in the Waste Isolation Pilot Plant (WIPP) by demonstrating no-migration of hazardous constituents. This paper documents the current technical basis for methodologies proposed to develop a post-closure RCRA hazardous constituent source term. For the purposes of demonstrating no-migration, the hazardous constituent source term is defined as the quantities of hazardous constituents that are available for transport after repository closure. Development of the source term is only one of several activities that will be involved in the no-migration demonstration. The demonstration will also include uncertainty and sensitivity analyses of contaminant transport.

  1. Distributed Power Sources for Mars Colonization

    NASA Astrophysics Data System (ADS)

    Miley, George H.; Shaban, Yasser

    2003-01-01

    One of the fundamental needs for Mars colonization is an abundant source of energy. The total energy system will probably use a mixture of sources based on solar energy, fuel cells, and nuclear energy. Here we concentrate on the possibility of developing a distributed system employing several unique new types of nuclear energy sources, specifically small fusion devices using inertial electrostatic confinement and portable ``battery type'' proton reaction cells.

  2. Spiral arms as cosmic ray source distributions

    NASA Astrophysics Data System (ADS)

    Werner, M.; Kissmann, R.; Strong, A. W.; Reimer, O.

    2015-04-01

The Milky Way is a spiral galaxy with (or without) a bar-like central structure. There is evidence that the distribution of suspected cosmic ray sources, such as supernova remnants, is associated with the spiral arm structure of galaxies. It is not yet clearly understood what effect such a cosmic ray source distribution has on the particle transport in our Galaxy. We investigate and measure how the propagation of Galactic cosmic rays is affected by a cosmic ray source distribution associated with spiral arm structures. We use the PICARD code to perform high-resolution 3D simulations of electrons and protons in galactic propagation scenarios that include four-arm and two-arm logarithmic spiral cosmic ray source distributions with and without a central bar structure, as well as the spiral arm configuration of the NE2001 model for the distribution of free electrons in the Milky Way. Results of these simulations are compared to an axisymmetric radial source distribution. Also, effects on the cosmic ray flux and spectra due to different positions of the Earth relative to the spiral structure are studied. We find that high energy electrons are strongly confined to their sources and the obtained spectra largely depend on the Earth's position relative to the spiral arms. Similar findings have been obtained for low energy protons and electrons, albeit at smaller magnitude. We find that even fractional contributions of a spiral arm component to the total cosmic ray source distribution influence the spectra at the Earth. This is apparent when compared to an axisymmetric radial source distribution as well as with respect to the Earth's position relative to the spiral arm structure. We demonstrate that the presence of a Galactic bar manifests itself as an overall excess of low energy electrons at the Earth. Using a spiral arm geometry as a cosmic ray source distribution offers a genuinely new quality of modeling and is used to explain features in cosmic ray spectra at the Earth.
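
    A logarithmic-spiral source distribution of the kind described above can be written down in a few lines. The sketch below builds a four-arm log-spiral source density on a galactocentric grid; the pitch angle, arm width, and radial envelope are illustrative assumptions, not the parameters used with PICARD.

```python
import numpy as np

def spiral_source_density(x, y, n_arms=4, pitch_deg=12.0, arm_width=0.6, r0=3.0):
    """Relative cosmic-ray source density at galactocentric (x, y) in kpc.
    Arms follow r = r0 * exp(tan(pitch) * (theta - phi_k)) for arm phases phi_k."""
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)
    k = np.tan(np.deg2rad(pitch_deg))
    density = np.zeros_like(r)
    for arm in range(n_arms):
        phi_arm = 2 * np.pi * arm / n_arms
        # Azimuth of the arm passing through this radius.
        theta_arm = phi_arm + np.log(np.maximum(r, 1e-6) / r0) / k
        dtheta = np.angle(np.exp(1j * (theta - theta_arm)))   # wrap to [-pi, pi]
        dist = np.abs(r * dtheta)                             # approx. distance to arm
        density += np.exp(-0.5 * (dist / arm_width) ** 2)
    return density * np.exp(-r / 4.0)          # assumed radial fall-off

# Evaluate on a grid covering the Galactic disc.
xs = np.linspace(-15, 15, 301)
X, Y = np.meshgrid(xs, xs)
rho = spiral_source_density(X, Y)
print("density at an assumed solar position (8.5, 0):",
      spiral_source_density(np.array(8.5), np.array(0.0)))
```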

  3. STACE: Source Term Analyses for Containment Evaluations of transport casks

    SciTech Connect

    Seager, K. D.; Gianoulakis, S. E.; Barrett, P. R.; Rashid, Y. R.; Reardon, P. C.

    1992-01-01

    Following the guidance of ANSI N14.5, the STACE methodology provides a technically defensible means for estimating maximum permissible leakage rates. These containment criteria attempt to reflect the true radiological hazard by performing a detailed examination of the spent fuel, CRUD, and residual contamination contributions to the releasable source term. The evaluation of the spent fuel contribution to the source term has been modeled fairly accurately using the STACE methodology. The structural model predicts the cask drop load history, the mechanical response of the fuel assembly, and the probability of cladding breach. These data are then used to predict the amount of fission gas, volatile species, and fuel fines that are releasable from the cask. There are some areas where data are sparse or lacking (e.g., the quantity and size distribution of fuel rod breaches) in which experimental validation is planned. The CRUD spallation fraction is the major area where no quantitative data has been found; therefore, this also requires experimental validation. In the interim, STACE conservatively assumes a 100% spallation fraction for computing the releasable activity. The source term methodology also conservatively assumes that there is 1 Ci of residual contamination available for release in the transport cask. However, residual contamination is still by far the smallest contributor to the source term activity.

  4. Source term evaluation for combustion modeling

    NASA Technical Reports Server (NTRS)

    Sussman, Myles A.

    1993-01-01

    A modification is developed for application to the source terms used in combustion modeling. The modification accounts for the error of the finite difference scheme in regions where chain-branching chemical reactions produce exponential growth of species densities. The modification is first applied to a one-dimensional scalar model problem. It is then generalized to multiple chemical species, and used in quasi-one-dimensional computations of shock-induced combustion in a channel. Grid refinement studies demonstrate the improved accuracy of the method using this modification. The algorithm is applied in two spatial dimensions and used in simulations of steady and unsteady shock-induced combustion. Comparisons with ballistic range experiments give confidence in the numerical technique and the 9-species hydrogen-air chemistry model.

  5. Improved source term estimation using blind outlier detection

    NASA Astrophysics Data System (ADS)

    Martinez-Camara, Marta; Bejar Haro, Benjamin; Vetterli, Martin; Stohl, Andreas

    2014-05-01

    Emissions of substances into the atmosphere are produced in situations such as volcano eruptions, nuclear accidents or pollutant releases. It is necessary to know the source term - how the magnitude of these emissions changes with time - in order to predict the consequences of the emissions, such as high radioactivity levels in a populated area or high concentration of volcanic ash in an aircraft flight corridor. However, in general, we know neither how much material was released in total, nor the relative variation of emission strength with time. Hence, estimating the source term is a crucial task. Estimating the source term generally involves solving an ill-posed linear inverse problem using datasets of sensor measurements. Several so-called inversion methods have been developed for this task. Unfortunately, objective quantitative evaluation of the performance of inversion methods is difficult due to the fact that the ground truth is unknown for practically all the available measurement datasets. In this work we use the European Tracer Experiment (ETEX) - a rare example of an experiment where the ground truth is available - to develop and to test new source estimation algorithms. Knowledge of the ground truth grants us access to the additive error term. We show that the distribution of this error is heavy-tailed, which means that some measurements are outliers. We also show that precisely these outliers severely degrade the performance of traditional inversion methods. Therefore, we develop blind outlier detection algorithms specifically suited to the source estimation problem. Then, we propose new inversion methods that combine traditional regularization techniques with blind outlier detection. Such hybrid methods reduce the error of reconstruction of the source term up to 45% with respect to previously proposed methods.
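
    A minimal sketch of the kind of hybrid scheme described above, combining Tikhonov-regularized least squares with a simple residual-based outlier rejection step; this is a generic illustration, not the authors' blind detection algorithm.

```python
import numpy as np

def regularized_inverse(M, y, alpha, weights=None):
    """Solve min_x ||W^(1/2)(y - Mx)||^2 + alpha * ||x||^2 (Tikhonov)."""
    if weights is None:
        weights = np.ones_like(y)
    Mw = M * weights[:, None]
    A = Mw.T @ M + alpha * np.eye(M.shape[1])
    return np.linalg.solve(A, Mw.T @ y)

def robust_source_estimate(M, y, alpha=1e-2, n_iter=5, k=3.0):
    """Iteratively down-weight measurements with large residuals (outliers)."""
    w = np.ones_like(y)
    for _ in range(n_iter):
        x = regularized_inverse(M, y, alpha, w)
        r = y - M @ x
        scale = 1.4826 * np.median(np.abs(r - np.median(r)))  # robust residual scale
        w = np.where(np.abs(r) <= k * scale, 1.0, 0.0)        # flag gross outliers
    return x, w

# Identical-twin test with a heavy-tailed error term.
rng = np.random.default_rng(0)
n_obs, n_src = 200, 40
M = np.abs(rng.normal(size=(n_obs, n_src)))          # source-receptor sensitivities
x_true = np.maximum(rng.normal(1.0, 0.5, n_src), 0)  # emission time series
err = rng.standard_t(df=1.5, size=n_obs)             # heavy-tailed measurement error
y = M @ x_true + err

x_est, w = robust_source_estimate(M, y)
print("flagged outliers:", int((w == 0).sum()))
print("relative error  :", np.linalg.norm(x_est - x_true) / np.linalg.norm(x_true))
```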

  6. TRIGA MARK-II source term

    NASA Astrophysics Data System (ADS)

Usang, M. D.; Hamzah, N. S.; Abi, M. J. B.; Rawi, M. Z. M.; Abu, M. P.

    2014-02-01

ORIGEN 2.2 is employed to obtain data on the γ source term and the radioactivity of irradiated TRIGA fuel. The fuel composition is specified in grams for use as input data. Three types of fuel are irradiated in the reactor, each differing from the others in the amount of uranium relative to the total weight. Each fuel is irradiated for 365 days with a 50-day time step. We obtain results on the total radioactivity of the fuel, the composition of activated materials, the composition of fission products, and the photon spectrum of the burned fuel. We investigate the differences between results obtained with the BWR and PWR libraries for ORIGEN. Finally, we compare the composition of major nuclides after 1 year of irradiation for both ORIGEN libraries with results from WIMS. We found only minor disagreements between the yields of the PWR and BWR libraries. In comparison with WIMS, the errors are somewhat more pronounced. To overcome these errors, the irradiation power used in ORIGEN could be increased slightly, so that the differences between the ORIGEN and WIMS yields are reduced. A more permanent solution is to use a different code altogether to simulate burnup, such as DRAGON or ORIGEN-S. The results of this study are essential for the design of radiation shielding for the fuel.

  7. TRIGA MARK-II source term

    SciTech Connect

Usang, M. D.; Hamzah, N. S.; Abi, M. J. B.; Rawi, M. Z. M.; Abu, M. P.

    2014-02-12

ORIGEN 2.2 is employed to obtain data on the γ source term and the radioactivity of irradiated TRIGA fuel. The fuel composition is specified in grams for use as input data. Three types of fuel are irradiated in the reactor, each differing from the others in the amount of uranium relative to the total weight. Each fuel is irradiated for 365 days with a 50-day time step. We obtain results on the total radioactivity of the fuel, the composition of activated materials, the composition of fission products, and the photon spectrum of the burned fuel. We investigate the differences between results obtained with the BWR and PWR libraries for ORIGEN. Finally, we compare the composition of major nuclides after 1 year of irradiation for both ORIGEN libraries with results from WIMS. We found only minor disagreements between the yields of the PWR and BWR libraries. In comparison with WIMS, the errors are somewhat more pronounced. To overcome these errors, the irradiation power used in ORIGEN could be increased slightly, so that the differences between the ORIGEN and WIMS yields are reduced. A more permanent solution is to use a different code altogether to simulate burnup, such as DRAGON or ORIGEN-S. The results of this study are essential for the design of radiation shielding for the fuel.

  8. Over-Distribution in Source Memory

    PubMed Central

    Brainerd, C. J.; Reyna, V. F.; Holliday, R. E.; Nakamura, K.

    2012-01-01

Semantic false memories are confounded with a second type of error, over-distribution, in which items are attributed to contradictory episodic states. Over-distribution errors have proved to be more common than false memories when the two are disentangled. We investigated whether over-distribution is prevalent in another classic false memory paradigm: source monitoring. It is. Conventional false memory responses (source misattributions) were predominantly over-distribution errors, but unlike semantic false memory, over-distribution also accounted for more than half of true memory responses (correct source attributions). Experimental control of over-distribution was achieved via a series of manipulations that affected either recollection of contextual details or item memory (concreteness, frequency, list-order, number of presentation contexts, and individual differences in verbatim memory). A theoretical model (conjoint process dissociation) was used to analyze the data; it predicts that (a) over-distribution is directly proportional to item memory but inversely proportional to recollection and (b) item memory is not a necessary precondition for recollection of contextual details. The results were consistent with both predictions. PMID:21942494

  9. 10 CFR 50.67 - Accident source term.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 1 2011-01-01 2011-01-01 false Accident source term. 50.67 Section 50.67 Energy NUCLEAR... Conditions of Licenses and Construction Permits § 50.67 Accident source term. (a) Applicability. The... to January 10, 1997, who seek to revise the current accident source term used in their design...

  10. 10 CFR 50.67 - Accident source term.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 1 2010-01-01 2010-01-01 false Accident source term. 50.67 Section 50.67 Energy NUCLEAR... Conditions of Licenses and Construction Permits § 50.67 Accident source term. (a) Applicability. The... to January 10, 1997, who seek to revise the current accident source term used in their design...

  11. Bayesian Estimation of Prior Variance in Source Term Determination

    NASA Astrophysics Data System (ADS)

    Smidl, Vaclav; Hofman, Radek

    2015-04-01

The problem of determining the source term of an atmospheric release is studied. We assume that the observations y are obtained as a linear combination of the source term, x, and the source-receptor sensitivities, which can be written in matrix notation as y = Mx with source-receptor sensitivity matrix M. Direct estimation of the source term vector x is not possible since the system is often ill-conditioned. The solution is thus found by minimization of a cost function with regularization terms. A typical cost function is: C(x) = (y - Mx)^T R^{-1} (y - Mx) + α x^T D^T D x, (1) where the first term minimizes the error of the measurements with covariance matrix R, and the second term is the regularization with weight α. Various types of regularization arise for different choices of the matrix D. For example, Tikhonov regularization arises for D in the form of the identity matrix, and smoothing regularization for D in the form of a tri-diagonal matrix (Laplacian operator). Typically, the form of matrix D is assumed to be known, and the weight α is optimized manually by a trial-and-error procedure. In this contribution, we use the probabilistic formulation of the problem, where the term (α D^T D)^{-1} is interpreted as the covariance matrix of the prior distribution of x. Following the Bayesian approach, we relax the assumption of known α and D and assume that these are unknown and estimated from the data. The general problem is not analytically tractable and approximate estimation techniques have to be used. We present the Variational Bayes solution of two special cases of the prior covariance matrix. First, the structure of D is assumed to be known and only the weight α is estimated. Application of the Variational Bayes method to this case yields an iterative estimation algorithm. In the first step, the usual optimization problem is solved for an estimate of α. In the next step, the value of α is re-estimated and the procedure returns to the first step. Positivity of the solution is guaranteed
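
    A minimal sketch of the two-step iteration described above: given α, x is the regularized least-squares solution; α is then re-estimated from the current x. The α update below is a simple heuristic stand-in (treating α as the inverse prior variance of Dx), not the exact Variational Bayes update from the paper, and all problem sizes are invented.

```python
import numpy as np

def estimate_source_term(M, y, D, R, n_iter=50, alpha0=1.0):
    """Iteratively estimate x and the regularization weight alpha for
    C(x) = (y - Mx)^T R^-1 (y - Mx) + alpha * x^T D^T D x."""
    Rinv = np.linalg.inv(R)
    alpha = alpha0
    for _ in range(n_iter):
        # Step 1: solve the regularized problem for the current alpha.
        A = M.T @ Rinv @ M + alpha * (D.T @ D)
        x = np.linalg.solve(A, M.T @ Rinv @ y)
        # Step 2: re-estimate alpha (heuristic: inverse mean prior energy of Dx).
        alpha = D.shape[0] / (x @ (D.T @ D) @ x + 1e-12)
    return x, alpha

# Synthetic ill-conditioned example with a smoothing (Laplacian-like) D.
rng = np.random.default_rng(3)
n_obs, n_src = 60, 80
M = rng.random((n_obs, n_src)) ** 4          # ill-conditioned sensitivity matrix
x_true = np.exp(-0.5 * ((np.arange(n_src) - 40) / 6.0) ** 2)
R = 0.01 * np.eye(n_obs)
y = M @ x_true + rng.multivariate_normal(np.zeros(n_obs), R)
D = -2 * np.eye(n_src) + np.eye(n_src, k=1) + np.eye(n_src, k=-1)

x_est, alpha = estimate_source_term(M, y, D, R)
print("estimated alpha:", alpha)
print("relative error :", np.linalg.norm(x_est - x_true) / np.linalg.norm(x_true))
```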

  12. Long-term source monitoring with BATSE

    NASA Technical Reports Server (NTRS)

    Wilson, R. B.; Harmon, B. A.; Finger, M. H.; Fishman, G. J.; Meegan, C. A.; Paciesas, W. S.

    1992-01-01

The uncollimated Burst and Transient Source Experiment (BATSE) large area detectors (LADs) are well suited to nearly continuous monitoring of the stronger hard x-ray sources, and to time series analysis for pulsars. An overview of the analysis techniques presently being applied to the data is discussed, including representative observations of the Crab Nebula and Crab pulsar, and summaries of the sources detected to date. Results of a search for variability in the Crab Pulsar pulse profile are presented.

  13. Distributed transform coding via source-splitting

    NASA Astrophysics Data System (ADS)

    Yahampath, Pradeepa

    2012-12-01

    Transform coding (TC) is one of the best known practical methods for quantizing high-dimensional vectors. In this article, a practical approach to distributed TC of jointly Gaussian vectors is presented. This approach, referred to as source-split distributed transform coding (SP-DTC), can be used to easily implement two terminal transform codes for any given rate-pair. The main idea is to apply source-splitting using orthogonal-transforms, so that only Wyner-Ziv (WZ) quantizers are required for compression of transform coefficients. This approach however requires optimizing the bit allocation among dependent sets of WZ quantizers. In order to solve this problem, a low-complexity tree-search algorithm based on analytical models for transform coefficient quantization is developed. A rate-distortion (RD) analysis of SP-DTCs for jointly Gaussian sources is presented, which indicates that these codes can significantly outperform the practical alternative of independent TC of each source, whenever there is a strong correlation between the sources. For practical implementation of SP-DTCs, the idea of using conditional entropy constrained (CEC) quantizers followed by Slepian-Wolf coding is explored. Experimental results obtained with SP-DTC designs based on both CEC scalar quantizers and CEC trellis-coded quantizers demonstrate that actual implementations of SP-DTCs can achieve RD performance close to the analytically predicted limits.
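
    The bit-allocation problem mentioned above can be illustrated with a standard greedy allocation under the usual high-rate model D_i(b_i) ≈ σ_i² · 2^(-2·b_i); this is a generic sketch, not the paper's tree-search over dependent Wyner-Ziv quantizers.

```python
import numpy as np

def greedy_bit_allocation(variances, total_bits):
    """Assign integer bits to transform coefficients one at a time, always to the
    coefficient with the largest marginal distortion reduction, assuming
    D_i(b_i) = var_i * 2**(-2 * b_i)."""
    variances = np.asarray(variances, dtype=float)
    bits = np.zeros_like(variances, dtype=int)
    for _ in range(total_bits):
        current = variances * 2.0 ** (-2 * bits)
        gain = current - variances * 2.0 ** (-2 * (bits + 1))  # reduction per extra bit
        bits[int(np.argmax(gain))] += 1
    distortion = float(np.sum(variances * 2.0 ** (-2 * bits)))
    return bits, distortion

# Example: coefficient variances from a decaying spectrum, 32 bits to spend.
variances = 100.0 * 0.7 ** np.arange(12)
bits, dist = greedy_bit_allocation(variances, total_bits=32)
print("bits per coefficient:", bits)
print("total distortion    :", round(dist, 3))
```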

  14. Calculation of external dose from distributed source

    SciTech Connect

    Kocher, D.C.

    1986-01-01

This paper discusses a relatively simple calculational method, called the point kernel method (Fo68), for estimating external dose from distributed sources that emit photon or electron radiations. The principles of the point kernel method are emphasized, rather than the presentation of extensive sets of calculations or tables of numerical results. A few calculations are presented for simple source geometries as illustrations of the method, and references and descriptions are provided for other calculations in the literature. This paper also describes exposure situations for which the point kernel method is not appropriate and other, more complex, methods must be used, but these methods are not discussed in any detail.
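
    The point kernel idea is simply to superpose the dose contributions of many small source elements. The sketch below integrates a generic photon point kernel (inverse-square law with exponential attenuation; the dose-conversion and buildup factors are folded into an assumed unit constant) over a uniformly contaminated slab; all dimensions and coefficients are illustrative.

```python
import numpy as np

def point_kernel_dose(receptor, source_points, element_strength, mu=0.01):
    """Relative dose rate at `receptor` summed over discretized source elements.
    Generic kernel: S * exp(-mu * r) / (4 * pi * r**2); dose-conversion and
    buildup factors are folded into an assumed unit constant."""
    r = np.linalg.norm(source_points - receptor, axis=1)
    return float(np.sum(element_strength * np.exp(-mu * r) / (4.0 * np.pi * r**2)))

# Uniformly contaminated slab (10 m x 10 m x 0.3 m) split into voxels.
nx, ny, nz = 50, 50, 3
dx, dy, dz = 10.0 / nx, 10.0 / ny, 0.3 / nz
xs = -5.0 + dx * (np.arange(nx) + 0.5)
ys = -5.0 + dy * (np.arange(ny) + 0.5)
zs = -0.3 + dz * (np.arange(nz) + 0.5)
X, Y, Z = np.meshgrid(xs, ys, zs, indexing="ij")
points = np.column_stack([X.ravel(), Y.ravel(), Z.ravel()])

concentration = 1.0                      # activity per m^3, arbitrary units
strength = concentration * dx * dy * dz  # activity per voxel

receptor = np.array([0.0, 0.0, 1.0])     # 1 m above the centre of the slab
print("relative dose rate:", point_kernel_dose(receptor, points, strength))
```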

  15. Open Source Live Distributions for Computer Forensics

    NASA Astrophysics Data System (ADS)

    Giustini, Giancarlo; Andreolini, Mauro; Colajanni, Michele

    Current distributions of open source forensic software provide digital investigators with a large set of heterogeneous tools. Their use is not always focused on the target and requires high technical expertise. We present a new GNU/Linux live distribution, named CAINE (Computer Aided INvestigative Environment) that contains a collection of tools wrapped up into a user friendly environment. The CAINE forensic framework introduces novel important features, aimed at filling the interoperability gap across different forensic tools. Moreover, it provides a homogeneous graphical interface that drives digital investigators during the acquisition and analysis of electronic evidence, and it offers a semi-automatic mechanism for the creation of the final report.

  16. Photocounting distributions for exponentially decaying sources.

    PubMed

    Teich, M C; Card, H C

    1979-05-01

Exact photocounting distributions are obtained for a pulse of light whose intensity is exponentially decaying in time, when the underlying photon statistics are Poisson. It is assumed that the starting time for the sampling interval (which is of arbitrary duration) is uniformly distributed. The probability of registering n counts in the fixed time T is given in terms of the incomplete gamma function for n ≥ 1 and in terms of the exponential integral for n = 0. Simple closed-form expressions are obtained for the count mean and variance. The results are expected to be of interest in certain studies involving spontaneous emission, radiation damage in solids, and nuclear counting. They will also be useful in neurobiology and psychophysics, since habituation and sensitization processes may sometimes be characterized by the same stochastic model. PMID:19687829
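
    The counting model described above is easy to check numerically: draw a uniformly distributed starting time, integrate the exponentially decaying intensity over the fixed window T, and draw a Poisson count with that mean. The Monte Carlo sketch below estimates the count mean and variance under assumed parameter values; it does not reproduce the paper's closed-form expressions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed parameters: initial intensity I0 (counts/s), decay time tau (s),
# counting window T (s), and a uniform start-time window [0, T_start_max].
I0, tau, T, T_start_max = 50.0, 1.0, 0.5, 5.0
n_trials = 200_000

# Uniformly distributed sampling start times.
t0 = rng.uniform(0.0, T_start_max, n_trials)

# Mean number of photons in [t0, t0 + T] for intensity I0 * exp(-t / tau).
mean_counts = I0 * tau * (np.exp(-t0 / tau) - np.exp(-(t0 + T) / tau))

# Doubly stochastic Poisson counts (Poisson given the integrated intensity).
counts = rng.poisson(mean_counts)

print("count mean    :", counts.mean())
print("count variance:", counts.var())
print("variance/mean :", counts.var() / counts.mean())   # > 1: super-Poissonian
```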

  17. Sensitivity analysis of distributed volcanic source inversion

    NASA Astrophysics Data System (ADS)

    Cannavo', Flavio; Camacho, Antonio G.; González, Pablo J.; Puglisi, Giuseppe; Fernández, José

    2016-04-01

A recently proposed algorithm (Camacho et al., 2011) claims to rapidly estimate magmatic sources from surface geodetic data without any a priori assumption about source geometry. The algorithm takes advantage of the fast calculation of analytical models and adds the capability to model free-shape distributed sources. Assuming homogeneous elastic conditions, the approach can determine general geometrical configurations of pressure and/or density sources and/or sliding structures corresponding to prescribed values of anomalous density, pressure and slip. These source bodies are described as aggregations of elemental point sources for pressure, density and slip, and they fit the whole data set (keeping some 3D regularity conditions). Although some examples and applications have already been presented to demonstrate the ability of the algorithm to reconstruct a magma pressure source (e.g., Camacho et al., 2011; Cannavò et al., 2015), a systematic analysis of the sensitivity and reliability of the algorithm is still lacking. In this explorative work we present results from a large statistical test designed to evaluate the advantages and limitations of the methodology by assessing its sensitivity to the free and constrained parameters involved in the inversions. In particular, besides the source parameters, we focused on the ground deformation network topology and on noise in the measurements. The proposed analysis can be used for a better interpretation of the algorithm results in real-case applications. Camacho, A. G., González, P. J., Fernández, J. & Berrino, G. (2011) Simultaneous inversion of surface deformation and gravity changes by means of extended bodies with a free geometry: Application to deforming calderas. J. Geophys. Res. 116. Cannavò, F., Camacho, A. G., González, P. J., Mattia, M., Puglisi, G., Fernández, J. (2015) Real Time Tracking of Magmatic Intrusions by means of Ground Deformation Modeling during Volcanic Crises, Scientific Reports, 5 (10970) doi:10.1038/srep

  18. Particle size distribution of indoor aerosol sources

    SciTech Connect

    Shah, K.B.

    1990-10-24

As concern about Indoor Air Quality (IAQ) has grown in recent years, it has become necessary to determine the nature of particles produced by different indoor aerosol sources and the typical concentrations that these sources tend to produce. These data are important in predicting the dose of particles to people exposed to these sources, and they will also enable us to take effective mitigation procedures. Further, they will also help in designing appropriate air cleaners. A new state-of-the-art technique, the DMPS (Differential Mobility Particle Sizer) system, is used to determine the particle size distributions of a number of sources. This system employs the electrical mobility characteristics of these particles and is very effective in the 0.01-1.0 μm size range. A modified system that can measure particle sizes in the lower size range, down to 3 nm, was also used. Experimental results for various aerosol sources are presented in the ensuing chapters. 37 refs., 20 figs., 2 tabs.

  19. Quantum key distribution with entangled photon sources

    NASA Astrophysics Data System (ADS)

    Ma, Xiongfeng; Fung, Chi-Hang Fred; Lo, Hoi-Kwong

    2007-07-01

A parametric down-conversion (PDC) source can be used as either a triggered single-photon source or an entangled-photon source in quantum key distribution (QKD). The triggering PDC QKD has already been studied in the literature. On the other hand, a model and a post-processing protocol for the entanglement PDC QKD are still missing. We fill in this important gap by proposing such a model and a post-processing protocol for the entanglement PDC QKD. Although the PDC model is proposed to study the entanglement-based QKD, we emphasize that our generic model may also be useful for other non-QKD experiments involving a PDC source. Since an entangled PDC source is a basis-independent source, we apply Koashi and Preskill's security analysis to the entanglement PDC QKD. We also investigate the entanglement PDC QKD with two-way classical communications. We find that the recurrence scheme increases the key rate and the Gottesman-Lo protocol helps tolerate higher channel losses. By simulating a recent 144-km open-air PDC experiment, we compare three implementations: entanglement PDC QKD, triggering PDC QKD, and coherent-state QKD. The simulation result suggests that the entanglement PDC QKD can tolerate higher channel losses than the coherent-state QKD. The coherent-state QKD with decoy states is able to achieve the highest key rate in the low- and medium-loss regions. By applying the Gottesman-Lo two-way post-processing protocol, the entanglement PDC QKD can tolerate up to 70 dB combined channel losses (35 dB for each channel) provided that the PDC source is placed in between Alice and Bob. After considering statistical fluctuations, the PDC setup can tolerate up to 53 dB channel losses.

  20. State of the hydrologic source term

    SciTech Connect

    Kersting, A.

    1996-12-01

The Underground Test Area (UGTA) Operable Unit was defined by the U.S. Department of Energy, Nevada Operations Office to characterize and potentially remediate groundwaters impacted by nuclear testing at the Nevada Test Site (NTS). Between 1955 and 1992, 828 nuclear devices were detonated underground at the NTS (DOE, 1994). Approximately one third of the nuclear tests were detonated at or below the standing water table and the remainder were located above the water table in the vadose zone. As a result, the distribution of radionuclides in the subsurface and, in particular, the availability of radionuclides for transport away from individual test cavities are major concerns at the NTS. The approach taken is to carry out field-based studies of both groundwaters and host rocks within the near-field in order to develop a detailed understanding of the present-day concentration and spatial distribution of constituent radionuclides. Understanding the current distribution of contamination within the near-field, and the conditions under which and processes by which the radionuclides were transported, makes it possible to predict future transport behavior. The results of these studies will be integrated with archival research, experiments and geochemical modeling for complete characterization.

  1. Experimental quantum key distribution with source flaws

    NASA Astrophysics Data System (ADS)

    Xu, Feihu; Wei, Kejin; Sajeed, Shihan; Kaiser, Sarah; Sun, Shihai; Tang, Zhiyuan; Qian, Li; Makarov, Vadim; Lo, Hoi-Kwong

    2015-09-01

Decoy-state quantum key distribution (QKD) is a standard technique in current quantum cryptographic implementations. Unfortunately, existing experiments have two important drawbacks: the state preparation is assumed to be perfect without errors, and the employed security proofs do not fully consider the finite-key effects for general attacks. These two drawbacks mean that existing experiments are not guaranteed to be secure in practice. Here, we perform an experiment that shows secure QKD with imperfect state preparations over long distances and achieves rigorous finite-key security bounds for decoy-state QKD against coherent attacks in the universally composable framework. We quantify the source flaws experimentally and demonstrate a QKD implementation that is tolerant to channel loss despite the source flaws. Our implementation considers more real-world problems than most previous experiments, and our theory can be applied to general discrete-variable QKD systems. These features constitute a step towards secure QKD with imperfect devices.

  2. High power distributed x-ray source

    NASA Astrophysics Data System (ADS)

    Frutschy, Kris; Neculaes, Bogdan; Inzinna, Lou; Caiafa, Antonio; Reynolds, Joe; Zou, Yun; Zhang, Xi; Gunturi, Satish; Cao, Yang; Waters, Bill; Wagner, Dave; De Man, Bruno; McDevitt, Dan; Roffers, Rick; Lounsberry, Brian; Pelc, Norbert J.

    2010-04-01

    This paper summarizes the development of a distributed x-ray source with up to 60kW demonstrated instantaneous power. Component integration and test results are shown for the dispenser cathode electron gun, fast switching controls, high voltage stand-off insulator, brazed anode, and vacuum system. The current multisource prototype has been operated for over 100 hours without failure, and additional testing is needed to discover the limiting component. Example focal spot measurements and x-ray radiographs are included. Lastly, future development opportunities are highlighted.

  3. Stochastic Models for the Distribution of Index Terms.

    ERIC Educational Resources Information Center

    Nelson, Michael J.

    1989-01-01

    Presents a probability model of the occurrence of index terms used to derive discrete distributions which are mixtures of Poisson and negative binomial distributions. These distributions give better fits than the simpler Zipf distribution, have the advantage of being more explanatory, and can incorporate a time parameter if necessary. (25…
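
    The comparison implied above, a Poisson model against a negative binomial model for the number of occurrences of an index term, can be sketched with scipy; the synthetic counts and the method-of-moments fit are assumptions for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Synthetic per-document occurrence counts of one index term (overdispersed).
counts = rng.negative_binomial(n=2, p=0.4, size=2000)

# Poisson fit: the MLE of the rate is the sample mean.
lam = counts.mean()
ll_poisson = stats.poisson.logpmf(counts, lam).sum()

# Negative binomial fit by the method of moments (mean m, variance v > m).
m, v = counts.mean(), counts.var()
p = m / v                      # success probability
r = m * p / (1.0 - p)          # number-of-successes parameter
ll_negbin = stats.nbinom.logpmf(counts, r, p).sum()

print("Poisson log-likelihood          :", round(ll_poisson, 1))
print("Negative binomial log-likelihood:", round(ll_negbin, 1))
print("Negative binomial fits better   :", ll_negbin > ll_poisson)
```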

  4. Optimum target source term estimation for high energy electron accelerators

    NASA Astrophysics Data System (ADS)

    Nayak, M. K.; Sahu, T. K.; Nair, Haridas G.; Nandedkar, R. V.; Bandyopadhyay, Tapas; Tripathi, R. M.; Hannurkar, P. R.

    2016-05-01

    The optimum target for bremsstrahlung emission is defined as the thickness of the target material which produces the maximum bremsstrahlung yield on interaction of the electron beam with the target. The bremsstrahlung dose rate per unit electron beam power at a distance of 1 m from the target material gives the optimum target source term. In the present work, simulations were performed for three different electron energies, 450, 1000 and 2500 MeV, using the EGSnrc Monte-Carlo code to determine the optimum thickness. An empirical relation for the optimum target thickness as a function of electron energy and atomic number of the target material was derived from these results. Using the simulated optimum target thickness, experiments were conducted to determine the optimum target source term. For the experimental determination, the two electron energies available from the booster synchrotron of the Indus facility, 450 MeV and 550 MeV, were used. The optimum target source terms for these two energies were also simulated. The experimental and simulated source terms are found to be in very good agreement, within ±3%. Based on this agreement at 450 MeV and 550 MeV, the same simulation methodology is used to obtain the optimum target source term up to 2500 MeV. The paper describes the simulations and experiments carried out on the optimum target bremsstrahlung source term and the results obtained.
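
    The workflow of locating the optimum thickness from a set of simulated yields can be sketched as follows. The thicknesses and dose rates below are made-up placeholders, not values from the paper; the sketch only illustrates picking the maximum of a smooth interpolant through simulated points.

      # A minimal sketch: estimate the optimum target thickness as the maximum of a
      # spline through simulated dose-rate points (all numbers are hypothetical).
      import numpy as np
      from scipy.interpolate import CubicSpline
      from scipy.optimize import minimize_scalar

      thickness_mm = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])      # hypothetical
      dose_rate = np.array([0.42, 0.71, 0.88, 0.93, 0.90, 0.82])   # Gy/h per kW at 1 m (hypothetical)

      spline = CubicSpline(thickness_mm, dose_rate)
      res = minimize_scalar(lambda t: -spline(t),
                            bounds=(thickness_mm[0], thickness_mm[-1]),
                            method="bounded")

      print(f"optimum thickness ~ {res.x:.2f} mm")
      print(f"optimum-target source term ~ {spline(res.x):.3f} Gy/h per kW at 1 m")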

  5. CONSTRAINING SOURCE REDSHIFT DISTRIBUTIONS WITH GRAVITATIONAL LENSING

    SciTech Connect

    Wittman, D.; Dawson, W. A.

    2012-09-10

    We introduce a new method for constraining the redshift distribution of a set of galaxies, using weak gravitational lensing shear. Instead of using observed shears and redshifts to constrain cosmological parameters, we ask how well the shears around clusters can constrain the redshifts, assuming fixed cosmological parameters. This provides a check on photometric redshifts, independent of source spectral energy distribution properties and therefore free of confounding factors such as misidentification of spectral breaks. We find that ~40 massive (σ_v = 1200 km s⁻¹) cluster lenses are sufficient to determine the fraction of sources in each of six coarse redshift bins to ~11%, given weak (20%) priors on the masses of the highest-redshift lenses, tight (5%) priors on the masses of the lowest-redshift lenses, and only modest (20%-50%) priors on calibration and evolution effects. Additional massive lenses drive down uncertainties as N_lens^(-1/2), but the improvement slows as one is forced to use lenses further down the mass function. Future large surveys contain enough clusters to reach 1% precision in the bin fractions if the tight lens-mass priors can be maintained for large samples of lenses. In practice this will be difficult to achieve, but the method may be valuable as a complement to other more precise methods because it is based on different physics and therefore has different systematic errors.

  6. Scoping Analysis of Source Term and Functional Containment Attenuation Factors

    SciTech Connect

    Pete Lowry

    2012-01-01

    In order to meet future regulatory requirements, the Next Generation Nuclear Plant (NGNP) Project must fully establish and validate the mechanistic modular high temperature gas-cooled reactor (HTGR) source term. This is not possible at this stage in the project, as significant uncertainties in the final design remain unresolved. In the interim, however, there is a need to establish an approximate characterization of the source term. The NGNP team developed a simplified parametric model to establish mechanistic source term estimates for a set of proposed HTGR configurations.

  7. Scoping Analysis of Source Term and Functional Containment Attenuation Factors

    SciTech Connect

    Pete Lowry

    2012-10-01

    In order to meet future regulatory requirements, the Next Generation Nuclear Plant (NGNP) Project must fully establish and validate the mechanistic modular high temperature gas-cooled reactor (HTGR) source term. This is not possible at this stage in the project, as significant uncertainties in the final design remain unresolved. In the interim, however, there is a need to establish an approximate characterization of the source term. The NGNP team developed a simplified parametric model to establish mechanistic source term estimates for a set of proposed HTGR configurations.

  8. Scoping Analysis of Source Term and Functional Containment Attenuation Factors

    SciTech Connect

    Pete Lowry

    2012-02-01

    In order to meet future regulatory requirements, the Next Generation Nuclear Plant (NGNP) Project must fully establish and validate the mechanistic modular high temperature gas-cooled reactor (HTGR) source term. This is not possible at this stage in the project, as significant uncertainties in the final design remain unresolved. In the interim, however, there is a need to establish an approximate characterization of the source term. The NGNP team developed a simplified parametric model to establish mechanistic source term estimates for a set of proposed HTGR configurations.

  9. Panchromatic spectral energy distributions of Herschel sources

    NASA Astrophysics Data System (ADS)

    Berta, S.; Lutz, D.; Santini, P.; Wuyts, S.; Rosario, D.; Brisbin, D.; Cooray, A.; Franceschini, A.; Gruppioni, C.; Hatziminaoglou, E.; Hwang, H. S.; Le Floc'h, E.; Magnelli, B.; Nordon, R.; Oliver, S.; Page, M. J.; Popesso, P.; Pozzetti, L.; Pozzi, F.; Riguccini, L.; Rodighiero, G.; Roseboom, I.; Scott, D.; Symeonidis, M.; Valtchanov, I.; Viero, M.; Wang, L.

    2013-03-01

    Combining far-infrared Herschel photometry from the PACS Evolutionary Probe (PEP) and Herschel Multi-tiered Extragalactic Survey (HerMES) guaranteed time programs with ancillary datasets in the GOODS-N, GOODS-S, and COSMOS fields, it is possible to sample the 8-500 μm spectral energy distributions (SEDs) of galaxies with at least 7-10 bands. Extending to the UV, optical, and near-infrared, the number of bands increases up to 43. We reproduce the distribution of galaxies in a carefully selected rest-frame ten-color space, based on this rich dataset, using a superposition of multivariate Gaussian modes. We use this model to classify galaxies and build median SEDs of each class, which are then fitted with a modified version of the magphys code that combines stellar light, emission from dust heated by stars and a possible warm dust contribution heated by an active galactic nucleus (AGN). The color distribution of galaxies in each of the considered fields can be well described with the combination of 6-9 classes, spanning a large range of far- to near-infrared luminosity ratios, as well as different strengths of the AGN contribution to bolometric luminosities. The defined Gaussian grouping is used to identify rare or odd sources. The zoology of outliers includes Herschel-detected ellipticals, very blue z ~ 1 Ly-break galaxies, quiescent spirals, and torus-dominated AGN with star formation. Out of these groups and outliers, a new template library is assembled, consisting of 32 SEDs describing the intrinsic scatter in the rest-frame UV-to-submm colors of infrared galaxies. This library is tested against L(IR) estimates with and without Herschel data included, and compared to eight other popular methods often adopted in the literature. When implementing Herschel photometry, these approaches produce L(IR) values consistent with each other within a median absolute deviation of 10-20%, the scatter being dominated more by fine tuning of the codes, rather than by the choice of
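
    A minimal sketch of the classification step, assuming synthetic stand-in data rather than the PEP/HerMES catalogues and using scikit-learn's Gaussian mixture implementation rather than the authors' own code:

      # Model galaxy colors with a superposition of multivariate Gaussian modes,
      # classify objects by their most probable mode, and flag low-likelihood outliers.
      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(1)
      colors = rng.normal(size=(5000, 10))          # placeholder 10-color measurements

      # Choose the number of modes (6-9 in the paper) by Bayesian information criterion.
      models = {k: GaussianMixture(n_components=k, covariance_type="full",
                                   random_state=0).fit(colors)
                for k in range(6, 10)}
      best_k = min(models, key=lambda k: models[k].bic(colors))
      gmm = models[best_k]

      labels = gmm.predict(colors)                   # class of each galaxy
      log_like = gmm.score_samples(colors)           # per-object log-likelihood
      outliers = np.where(log_like < np.percentile(log_like, 1))[0]

      print(f"selected {best_k} modes; {outliers.size} outlier candidates flagged")

    Objects with very low likelihood under the fitted mixture play the role of the "rare or odd sources" mentioned above.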

  10. Evaluation of source-term data for plutonium aerosolization

    SciTech Connect

    Haschke, J.M.

    1992-07-01

    Relevant data are reviewed and evaluated in an effort to define the time dependence and maximum value of the source term for plutonium aerosolization during a fuel fire. The rate of plutonium oxidation at high temperatures is a major determinant of the time dependence. Analysis of temperature-time data for oxidation of plutonium shows that the rate is constant (0.2 g PuO₂/cm² of metal surface per min) and independent of temperature above 500 °C. Total mass and particle distributions are derived for oxide products formed by reactions of plutonium metal and hydride. The mass distributions for products of all metal-gas reactions are remarkably similar, with approximately 0.07 mass% of the oxide particles having geometric diameters ≤ 10 μm. In comparison, 25 mass% of the oxide formed by the PuH₂ + O₂ reaction is in this range. Experimental values of mass fractions released during oxidation are evaluated and factors that alter the release fraction are discussed.
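
    A worked example of how the quoted constants combine into a release estimate (my own arithmetic; the surface area and fire duration are hypothetical):

      # Oxide production from the constant oxidation rate quoted in the abstract:
      # 0.2 g PuO2 per cm^2 of exposed metal surface per minute, above 500 C.
      RATE_G_PER_CM2_MIN = 0.2        # from the abstract
      area_cm2 = 100.0                # hypothetical exposed metal surface
      duration_min = 30.0             # hypothetical fire duration

      oxide_mass_g = RATE_G_PER_CM2_MIN * area_cm2 * duration_min
      respirable_fraction = 0.0007    # ~0.07 mass% with geometric diameter <= 10 um (metal-gas reactions)
      respirable_mass_g = oxide_mass_g * respirable_fraction

      print(f"oxide formed: {oxide_mass_g:.0f} g")
      print(f"of which ~{respirable_mass_g:.2f} g is in particles <= 10 um")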

  11. Source Term Model for an Array of Vortex Generator Vanes

    NASA Technical Reports Server (NTRS)

    Buning, P. G. (Technical Monitor); Waithe, Kenrick A.

    2003-01-01

    A source term model was developed for numerical simulations of an array of vortex generators. The source term models the side force created by the vortex generator being modeled. The model is obtained by introducing a side force to the momentum and energy equations that can adjust its strength automatically based on the local flow. The model was tested and calibrated by comparing data from numerical simulations and experiments of a single low-profile vortex generator vane, which is only a fraction of the boundary layer thickness, over a flat plate. The source term model allowed a grid reduction of about seventy percent when compared with the numerical simulations performed on a fully gridded vortex generator, without adversely affecting the development and capture of the vortex created. The source term model was able to predict the shape and size of the streamwise vorticity and velocity contours very well when compared with both numerical simulations and experimental data.

  12. Revised accident source terms for light-water reactors

    SciTech Connect

    Soffer, L.

    1995-02-01

    This paper presents revised accident source terms for light-water reactors incorporating the severe accident research insights gained in this area over the last 15 years. Current LWR accident source terms used for licensing date from 1962 and are contained in Regulatory Guides 1.3 and 1.4. These specify that 100% of the core inventory of noble gases and 25% of the iodine fission products are assumed to be instantaneously available for release from the containment. The chemical form of the iodine fission products is also assumed to be predominantly elemental iodine. These assumptions have strongly affected present nuclear air cleaning requirements by emphasizing rapid actuation of spray systems and filtration systems optimized to retain elemental iodine. A proposed revision of reactor accident source terms and some implications for nuclear air cleaning requirements were presented at the 22nd DOE/NRC Nuclear Air Cleaning Conference. A draft report was issued by the NRC for comment in July 1992. Extensive comments were received, with the most significant comments involving (a) release fractions for both volatile and non-volatile species in the early in-vessel release phase, (b) gap release fractions of the noble gases, iodine and cesium, and (c) the timing and duration for the release phases. The final source term report is expected to be issued in late 1994. Although the revised source terms are intended primarily for future plants, current nuclear power plants may request use of revised accident source term insights as well in licensing. This paper emphasizes additional information obtained since the 22nd Conference, including studies on fission product removal mechanisms, results obtained from improved severe accident code calculations, and resolution of major comments, and their impact upon the revised accident source terms. Revised accident source terms for both BWRs and PWRs are presented.

  13. Source term identification in atmospheric modelling via sparse optimization

    NASA Astrophysics Data System (ADS)

    Adam, Lukas; Branda, Martin; Hamburger, Thomas

    2015-04-01

    Inverse modelling plays an important role in identifying the amount of harmful substances released into the atmosphere during major incidents such as power plant accidents or volcano eruptions. Another possible application of inverse modelling lies in monitoring CO2 emission limits, where only observations at certain places are available and the task is to estimate the total releases at given locations. This gives rise to minimizing the discrepancy between the observations and the model predictions. There are two standard ways of solving such problems. In the first one, this discrepancy is regularized by adding additional terms. Such terms may include Tikhonov regularization, distance from a priori information or a smoothing term. The resulting, usually quadratic, problem is then solved via standard optimization solvers. The second approach assumes that the error term has a (normal) distribution and makes use of Bayesian modelling to identify the source term. Instead of following the above-mentioned approaches, we utilize techniques from the field of compressive sensing. Such techniques look for the sparsest solution (the solution with the smallest number of nonzeros) of a linear system, where a maximal allowed error term may be added to this system. Even though this field is a developed one with many possible solution techniques, most of them do not consider even the simplest constraints which are naturally present in atmospheric modelling. One such example is the nonnegativity of release amounts. We believe that the concept of a sparse solution is natural in both problems of identification of the source location and of the time process of the source release. In the first case, it is usually assumed that there are only a few release points and the task is to find them. In the second case, the time window is usually much longer than the duration of the actual release. In both cases, the optimal solution should contain a large amount of zeros, giving rise to the
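
    A minimal sketch of the sparse, nonnegative formulation described above (my own illustration, not the authors' code): minimize the data misfit plus an l1 penalty subject to x ≥ 0, where A is a placeholder source-receptor sensitivity matrix.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(2)
      n_obs, n_unknowns = 40, 120                 # hypothetical sizes
      A = rng.random((n_obs, n_unknowns))         # source-receptor sensitivity matrix (placeholder)
      x_true = np.zeros(n_unknowns)
      x_true[[10, 55, 90]] = [3.0, 1.5, 2.0]      # a few true release episodes
      b = A @ x_true + 0.01 * rng.normal(size=n_obs)

      lam = 0.1                                    # sparsity weight (tuning parameter)

      def objective(x):
          r = A @ x - b
          # For x >= 0 the l1 norm reduces to sum(x), keeping the objective smooth.
          return 0.5 * r @ r + lam * x.sum()

      def gradient(x):
          return A.T @ (A @ x - b) + lam

      res = minimize(objective, x0=np.zeros(n_unknowns), jac=gradient,
                     method="L-BFGS-B", bounds=[(0.0, None)] * n_unknowns)

      print("recovered nonzero indices:", np.where(res.x > 1e-3)[0])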

  14. An Evaluation of Short-Term Distributed Online Learning Events

    ERIC Educational Resources Information Center

    Barker, Bradley; Brooks, David

    2005-01-01

    The purpose of this study was to evaluate the effectiveness of short-term distributed online training events using an adapted version of the compressed evaluation form developed by Wisher and Curnow (1998). Evaluating online distributed training events provides insight into course effectiveness, the contribution of prior knowledge to learning, and…

  15. Source Term Model for Steady Micro Jets in a Navier-Stokes Computer Code

    NASA Technical Reports Server (NTRS)

    Waithe, Kenrick A.

    2005-01-01

    A source term model for steady micro jets was implemented into a non-proprietary Navier-Stokes computer code, OVERFLOW. The source term models the mass flow and momentum created by a steady blowing micro jet. The model is obtained by adding the momentum and mass flow created by the jet to the Navier-Stokes equations. The model was tested by comparing with data from numerical simulations of a single, steady micro jet on a flat plate in two and three dimensions. The source term model predicted the velocity distribution well compared to the two-dimensional plate using a steady mass flow boundary condition, which was used to simulate a steady micro jet. The model was also compared to two three-dimensional flat plate cases using a steady mass flow boundary condition to simulate a steady micro jet. The three-dimensional comparison included a case with a grid generated to capture the circular shape of the jet and a case without a grid generated for the micro jet. The case without the jet grid mimics the application of the source term. The source term model compared well with both of the three-dimensional cases. Comparisons of velocity distribution were made before and after the jet and Mach and vorticity contours were examined. The source term model allows a researcher to quickly investigate different locations of individual or several steady micro jets. The researcher is able to conduct a preliminary investigation with minimal grid generation and computational time.
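
    For orientation, a hedged sketch of the general form such a model takes (the exact terms implemented in OVERFLOW are not reproduced here): mass and momentum sources are added to the right-hand sides of the conservation equations over the cells covering the jet orifice,

      \[
      \frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\mathbf{u}) = \frac{\dot{m}_{\mathrm{jet}}}{V_{\mathrm{cell}}},
      \qquad
      \frac{\partial (\rho\mathbf{u})}{\partial t} + \nabla\cdot(\rho\mathbf{u}\otimes\mathbf{u} + p\mathbf{I} - \boldsymbol{\tau}) = \frac{\dot{m}_{\mathrm{jet}}\,\mathbf{u}_{\mathrm{jet}}}{V_{\mathrm{cell}}},
      \]

    where the jet mass flow, its exit velocity vector, and the volume of the receiving grid cell appear as the model parameters; the corresponding kinetic-energy flux enters the energy equation analogously.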

  16. Selection of models to calculate the LLW source term

    SciTech Connect

    Sullivan, T.M. )

    1991-10-01

    Performance assessment of a LLW disposal facility begins with an estimation of the rate at which radionuclides migrate out of the facility (i.e., the source term). The focus of this work is to develop a methodology for calculating the source term. In general, the source term is influenced by the radionuclide inventory, the wasteforms and containers used to dispose of the inventory, and the physical processes that lead to release from the facility (fluid flow, container degradation, wasteform leaching, and radionuclide transport). In turn, many of these physical processes are influenced by the design of the disposal facility (e.g., infiltration of water). The complexity of the problem and the absence of appropriate data prevent development of an entirely mechanistic representation of radionuclide release from a disposal facility. Typically, a number of assumptions, based on knowledge of the disposal system, are used to simplify the problem. This document provides a brief overview of disposal practices and reviews existing source term models as background for selecting appropriate models for estimating the source term. The selection rationale and the mathematical details of the models are presented. Finally, guidance is presented for combining the inventory data with appropriate mechanisms describing release from the disposal facility. 44 refs., 6 figs., 1 tab.

  17. Effect of convective term on temperature distribution in biological tissue

    NASA Astrophysics Data System (ADS)

    Kengne, Emmanuel; Saydé, Michel; Lakhssassi, Ahmed

    2013-08-01

    We introduce a phase imprint into the order parameter describing the influence of blood flow on the temperature distribution in the tissue described by the one-dimensional Pennes equation and then engineer the imprinted phase suitably to generate a modified Pennes equation with a gradient term (known in the theory of biological systems as convective term) which is associated with the heat convected by the flowing blood. Using the derived model, we analytically investigate temperature distribution in biological tissues subject to two different spatial heating methods. The applicability of our results is illustrated by one of typical bio-heat transfer problems which is often encountered in therapeutic treatment, cancer hyperthermia, laser surgery, thermal injury evaluation, etc. Analyzing the effect of the convective term on temperature distribution, we found that an optimum heating of a biological system can be obtained through regulating the convective term.
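
    For orientation (a standard form summarized here, not the authors' exact notation), the one-dimensional Pennes equation with the added convective (gradient) term reads

      \[
      \rho c\,\frac{\partial T}{\partial t}
      = k\,\frac{\partial^{2} T}{\partial x^{2}}
      - \rho_{b} c_{b}\, u\,\frac{\partial T}{\partial x}
      + w_{b} c_{b}\,(T_{a} - T) + Q_{m} + Q_{r}(x,t),
      \]

    where \(\rho c\) and \(k\) are the tissue heat capacity and conductivity, \(w_{b}\) the blood perfusion rate, \(T_{a}\) the arterial temperature, \(Q_{m}\) the metabolic heat source, \(Q_{r}\) the external spatial heating, and \(-\rho_{b} c_{b} u\,\partial T/\partial x\) is the convective contribution of the flowing blood.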

  18. Source term and radiological consequences of the Chernobyl accident

    SciTech Connect

    Mourad, R.; Snell, V.

    1987-01-01

    The objective of this work is to assess the source term and to evaluate the maximum hypothetical individual doses in European countries (including the Soviet Union) from the Chernobyl accident through the analyses of measurements of meteorological data, radiation fields, and airborne and deposited activity in these countries. Applying this information to deduce the source term involves a reversal of the techniques of nuclear accident analysis, which estimate the off-site consequences of postulated accidents. In this study the authors predict the quantities of radionuclides that, if released at Chernobyl and following the calculated trajectories, would explain and unify the observed radiation levels and radionuclide concentrations as measured by European countries and the Soviet Union. The simulation uses the PEAR microcomputer program following the methodology described in Canadian Standards Association standard N288.2. The study was performed before the Soviets published their estimate of the source term and the two results are compared.

  19. Problem solving as intelligent retrieval from distributed knowledge sources

    NASA Technical Reports Server (NTRS)

    Chen, Zhengxin

    1987-01-01

    Distributed computing in intelligent systems is investigated from a different perspective. From the viewpoint that problem solving can be viewed as intelligent knowledge retrieval, the use of distributed knowledge sources in intelligent systems is proposed.

  20. Flowsheets and source terms for radioactive waste projections

    SciTech Connect

    Forsberg, C.W.

    1985-03-01

    Flowsheets and source terms used to generate radioactive waste projections in the Integrated Data Base (IDB) Program are given. Volumes of each waste type generated per unit product throughput have been determined for the following facilities: uranium mining, UF/sub 6/ conversion, uranium enrichment, fuel fabrication, boiling-water reactors (BWRs), pressurized-water reactors (PWRs), and fuel reprocessing. Source terms for DOE/defense wastes have been developed. Expected wastes from typical decommissioning operations for each facility type have been determined. All wastes are also characterized by isotopic composition at time of generation and by general chemical composition. 70 references, 21 figures, 53 tables.

  1. Spallation Neutron Source Accident Terms for Environmental Impact Statement Input

    SciTech Connect

    Devore, J.R.; Harrington, R.M.

    1998-08-01

    This report is about accidents with the potential to release radioactive materials into the environment surrounding the Spallation Neutron Source (SNS). As shown in Chap. 2, the inventories of radioactivity at the SNS are dominated by the target facility. Source terms for a wide range of target facility accidents, from anticipated events to worst-case beyond-design-basis events, are provided in Chaps. 3 and 4. The most important criterion applied to these accident source terms is that they should not underestimate potential release. Therefore, conservative methodology was employed for the release estimates. Although the source terms are very conservative, excessive conservatism has been avoided by basing the releases on physical principles. Since it is envisioned that the SNS facility may eventually (after about 10 years) be expanded and modified to support a 4-MW proton beam operational capability, the source terms estimated in this report are applicable to a 4-MW operating proton beam power unless otherwise specified. This is bounding with regard to the 1-MW facility that will be built and operated initially. See further discussion below in Sect. 1.2.

  2. Common Calibration Source for Monitoring Long-term Ozone Trends

    NASA Technical Reports Server (NTRS)

    Kowalewski, Matthew

    2004-01-01

    Accurate long-term satellite measurements are crucial for monitoring the recovery of the ozone layer. The slow pace of the recovery and limited lifetimes of satellite monitoring instruments demands that datasets from multiple observation systems be combined to provide the long-term accuracy needed. A fundamental component of accurately monitoring long-term trends is the calibration of these various instruments. NASA's Radiometric Calibration and Development Facility at the Goddard Space Flight Center has provided resources to minimize calibration biases between multiple instruments through the use of a common calibration source and standardized procedures traceable to national standards. The Facility's 50 cm barium sulfate integrating sphere has been used as a common calibration source for both US and international satellite instruments, including the Total Ozone Mapping Spectrometer (TOMS), Solar Backscatter Ultraviolet 2 (SBUV/2) instruments, Shuttle SBUV (SSBUV), Ozone Mapping Instrument (OMI), Global Ozone Monitoring Experiment (GOME) (ESA), Scanning Imaging SpectroMeter for Atmospheric ChartographY (SCIAMACHY) (ESA), and others. We will discuss the advantages of using a common calibration source and its effects on long-term ozone data sets. In addition, sphere calibration results from various instruments will be presented to demonstrate the accuracy of the long-term characterization of the source itself.

  3. Long-Term Stability of Radio Sources in VLBI Analysis

    NASA Technical Reports Server (NTRS)

    Engelhardt, Gerald; Thorandt, Volkmar

    2010-01-01

    Positional stability of radio sources is an important requirement for modeling of only one source position for the complete length of VLBI data of presently more than 20 years. The stability of radio sources can be verified by analyzing time series of radio source coordinates. One approach is a statistical test for normal distribution of residuals to the weighted mean for each radio source component of the time series. Systematic phenomena in the time series can thus be detected. Nevertheless, an inspection of rate estimation and weighted root-mean-square (WRMS) variations about the mean is also necessary. On the basis of the time series computed by the BKG group in the frame of the ICRF2 working group, 226 stable radio sources with an axis stability of 10 μas could be identified. They include 100 ICRF2 axes-defining sources which are determined independently of the method applied in the ICRF2 working group. 29 stable radio sources with a source structure index of less than 3.0 can also be used to increase the number of 295 ICRF2 defining sources.
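
    A minimal sketch of the per-source time-series test described above (my own illustration, not the BKG analysis software): compute the weighted mean of one coordinate component, the WRMS about that mean, and a normality test of the residuals.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      # Hypothetical time series: coordinate offsets (microarcseconds) and their
      # formal uncertainties for one radio source.
      offsets = rng.normal(0.0, 40.0, size=120)
      sigmas = np.full(120, 40.0)

      weights = 1.0 / sigmas**2
      wmean = np.sum(weights * offsets) / np.sum(weights)
      residuals = offsets - wmean
      wrms = np.sqrt(np.sum(weights * residuals**2) / np.sum(weights))

      # Shapiro-Wilk test for normal distribution of the residuals; a small p-value
      # flags systematic behavior, i.e. a potentially unstable source.
      stat, p_value = stats.shapiro(residuals)
      print(f"weighted mean = {wmean:.1f} uas, WRMS = {wrms:.1f} uas, Shapiro p = {p_value:.3f}")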

  4. Source Term Code Package: a user's guide (Mod 1)

    SciTech Connect

    Gieseke, J.A.; Cybulskis, P.; Jordan, H.; Lee, K.W.; Schumacher, P.M.; Curtis, L.A.; Wooton, R.O.; Quayle, S.F.; Kogan, V.

    1986-07-01

    As part of a major reassessment of the release of radioactive materials to the environment (source terms) in severe reactor accidents, a group of state-of-the-art computer codes was utilized to perform extensive analyses. A major product of this source term reassessment effort was a demonstrated methodology for analyzing specific accident situations to provide source term predictions. The computer codes forming this methodology have been upgraded and modified for release and further use. This system of codes has been named the Source Term Code Package (STCP) and is the subject of this user's guide. The guide is intended to provide an understanding of the STCP structure and to facilitate STCP use. The STCP was prepared for operation on a CDC system but is written in FORTRAN-77 to permit transportability. In the current version (Mod 1) of the STCP, the various calculational elements fall into four major categories represented by the codes MARCH3, TRAP-MELT3, VANESA, and NAUA/SPARC/ICEDF. The MARCH3 code is a combination of the MARCH2, CORSOR-M, and CORCON-Mod 2 codes. The TRAP-MELT3 code is a combination of the TRAP-MELT2.0 and MERGE codes.

  5. Disposal Unit Source Term (DUST) data input guide

    SciTech Connect

    Sullivan, T.M.

    1993-05-01

    Performance assessment of a low-level waste (LLW) disposal facility begins with an estimation of the rate at which radionuclides migrate out of the facility (i.e., the source term). The focus of this work is to develop a methodology for calculating the source term. In general, the source term is influenced by the radionuclide inventory, the wasteforms and containers used to dispose of the inventory, and the physical processes that lead to release from the facility (fluid flow, container degradation, wasteform leaching, and radionuclide transport). The computer code DUST (Disposal Unit Source Term) has been developed to model these processes. This document presents the models used to calculate release from a disposal facility, verification of the model, and instructions on the use of the DUST code. In addition to DUST, a preprocessor, DUSTIN, which helps the code user create input decks for DUST and a post-processor, GRAFXT, which takes selected output files and plots them on the computer terminal have been written. Use of these codes is also described.

  6. BWR ASSEMBLY SOURCE TERMS FOR WASTE PACKAGE DESIGN

    SciTech Connect

    T.L. Lotz

    1997-02-15

    This analysis is prepared by the Mined Geologic Disposal System (MGDS) Waste Package Development Department (WPDD) to provide boiling water reactor (BWR) assembly radiation source term data for use during Waste Package (WP) design. The BWR assembly radiation source terms are to be used for evaluation of radiolysis effects at the WP surface, and for personnel shielding requirements during assembly or WP handling operations. The objectives of this evaluation are to generate BWR assembly radiation source terms that bound selected groupings of BWR assemblies, with regard to assembly average burnup and cooling time, which comprise the anticipated MGDS BWR commercial spent nuclear fuel (SNF) waste stream. The source term data is to be provided in a form which can easily be utilized in subsequent shielding/radiation dose calculations. Since these calculations may also be used for Total System Performance Assessment (TSPA), with appropriate justification provided by TSPA, or radionuclide release rate analysis, the grams of each element and additional cooling times out to 25 years will also be calculated and the data included in the output files.

  7. 14 CFR 23.1310 - Power source capacity and distribution.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Power source capacity and distribution. 23.1310 Section 23.1310 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF... Equipment General § 23.1310 Power source capacity and distribution. (a) Each installation whose...

  8. 14 CFR 23.1310 - Power source capacity and distribution.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Power source capacity and distribution. 23.1310 Section 23.1310 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF... Equipment General § 23.1310 Power source capacity and distribution. (a) Each installation whose...

  9. Energy efficient wireless sensor networks using asymmetric distributed source coding

    NASA Astrophysics Data System (ADS)

    Rao, Abhishek; Kulkarni, Murlidhar

    2013-01-01

    Wireless Sensor Networks (WSNs) are networks of sensor nodes deployed over a geographical area to perform a specific task. WSNs pose many design challenges. Energy conservation is one such design issue. In the literature a wide range of solutions addressing this issue have been proposed. Generally WSNs are densely deployed, so nodes in close proximity are likely to observe the same data. Transmission of such non-aggregated data may lead to inefficient energy management. Hence data fusion has to be performed at the nodes so as to combine the redundant information into a single data unit. Distributed source coding is an efficient approach for achieving this task. In this paper an attempt has been made at modeling such a system. Various energy efficient codes were considered for the analysis. System performance has been analyzed in terms of energy efficiency.
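
    As a concrete, hypothetical illustration of asymmetric distributed source coding, the sketch below uses syndrome coding with a Hamming(7,4) parity-check matrix: node B transmits only the 3-bit syndrome of its 7-bit reading, and the sink reconstructs it from node A's correlated reading, assuming the two readings differ in at most one bit.

      import numpy as np

      # Parity-check matrix of the (7,4) Hamming code (column j is j in binary, LSB first).
      H = np.array([[1, 0, 1, 0, 1, 0, 1],
                    [0, 1, 1, 0, 0, 1, 1],
                    [0, 0, 0, 1, 1, 1, 1]], dtype=int)

      def syndrome(bits):
          return (H @ bits) % 2

      def decode_with_side_info(synd_b, bits_a):
          """Recover node B's bits from its syndrome and node A's correlated bits,
          assuming they differ in at most one position."""
          diff_synd = (syndrome(bits_a) + synd_b) % 2
          estimate = bits_a.copy()
          if diff_synd.any():
              # The syndrome, read as a binary number, is the 1-based index of the
              # single differing bit in this column ordering.
              pos = int(diff_synd[0] + 2 * diff_synd[1] + 4 * diff_synd[2]) - 1
              estimate[pos] ^= 1
          return estimate

      x_a = np.array([1, 0, 1, 1, 0, 0, 1])   # node A's reading (sent in full)
      x_b = x_a.copy(); x_b[4] ^= 1           # node B differs in one bit
      s_b = syndrome(x_b)                     # node B sends only these 3 bits

      assert np.array_equal(decode_with_side_info(s_b, x_a), x_b)
      print("node B reconstructed from 3-bit syndrome plus side information")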

  10. Pressure distribution in unsteady sink and source flows.

    PubMed

    Voropayev, S I

    2015-05-01

    Basic flow generated in a viscous incompressible fluid by a "point" sink (source) of mass is revised. In practice, such flow can be modeled by sucking (pushing) fluid from a thin tube with a small porous sphere at one end. Intuitively, by sucking (pushing) fluid, one creates low (high) pressure near the origin and a positive (negative) radial pressure gradient drives the fluid to (from) the origin. A simple analysis, however, shows that the pressure distribution for both steady flows is the same. Then a question arises: How does the fluid "know" in what direction to flow? To explain this "paradox" an unsteady flow is considered and the pressure terms responsible for the flow direction are derived. PMID:26066255
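
    The resolution can be stated with the unsteady Bernoulli equation for purely radial potential flow (my own summary, consistent with the abstract). With velocity potential \(\varphi = \mp Q/(4\pi r)\) for a source/sink of volume flux \(Q(t)\) (upper signs for the source),

      \[
      u_r = \pm\frac{Q}{4\pi r^{2}}, \qquad
      p(r) = p_\infty - \tfrac{1}{2}\rho u_r^{2} - \rho\,\frac{\partial \varphi}{\partial t}
           = p_\infty - \frac{\rho Q^{2}}{32\pi^{2} r^{4}} \pm \frac{\rho \dot{Q}}{4\pi r},
      \]

    so the steady quadratic term is identical for source and sink, and only the unsteady term, which is odd in the flow direction, distinguishes the two.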

  11. Open source portal to distributed image repositories

    NASA Astrophysics Data System (ADS)

    Tao, Wenchao; Ratib, Osman M.; Kho, Hwa; Hsu, Yung-Chao; Wang, Cun; Lee, Cason; McCoy, J. M.

    2004-04-01

    In large institution PACS, patient data may often reside in multiple separate systems. While most systems tend to be DICOM compliant, none of them offer the flexibility of seamless integration of multiple DICOM sources through a single access point. We developed a generic portal system with a web-based interactive front-end as well as an application programming interface (API) that allows both web users and client applications to query and retrieve image data from multiple DICOM sources. A set of software tools was developed to allow accessing several DICOM archives through a single point of access. An interactive web-based front-end allows user to search image data seamlessly from the different archives and display the results or route the image data to another DICOM compliant destination. An XML-based API allows other software programs to easily benefit from this portal to query and retrieve image data as well. Various techniques are employed to minimize the performance overhead inherent in the DICOM. The system is integrated with a hospital-wide HIPAA-compliant authentication and auditing service that provides centralized management of access to patient medical records. The system is provided under open source free licensing and developed using open-source components (Apache Tomcat for web server, MySQL for database, OJB for object/relational data mapping etc.). The portal paradigm offers a convenient and effective solution for accessing multiple image data sources in a given healthcare enterprise and can easily be extended to multi-institution through appropriate security and encryption mechanisms.

  12. IMPACTS OF SOURCE TERM HETEROGENEITIES ON WATER PATHWAY DOSE.

    SciTech Connect

    SULLIVAN, T.; GUSKOV, A.; POSKAS, P.; RUPERTI, N.; HANUSIK, V.; ET AL.

    2004-09-15

    and for which a solution has to be found in terms of long-term disposal. Together with their casing and packaging, they are one form of heterogeneous waste; many other forms of waste with heterogeneous properties exist. They may arise in very small quantities and with very specific characteristics in the case of small producers, or in larger streams with standard characteristics in others. This wide variety of waste gives rise to three main levels of waste heterogeneity: (1) hot spots (e.g. disused sealed sources); (2) large items inside a package (e.g. metal components); and (3) very large items to be disposed of directly in the disposal unit (e.g. irradiated pipes, vessels). Safety assessments generally assume a certain level of waste homogeneity in most of the existing or proposed disposal facilities. There is a need to evaluate the appropriateness of such an assumption and its influence on the results of safety assessment. This need is especially acute in the case of sealed sources. There are many cases where storage conditions are poor, or where improper management has led to a radiological accident, some with significant or detrimental impacts. Disposal in a near surface disposal facility has been used in the past for some disused sealed sources. This option is currently in use for other sealed sources, or is being studied for the rest of them. The regulatory framework differs greatly between countries. In some countries, large quantities of disused sealed sources have been disposed of without any restriction; in others their disposal is forbidden by law. In any case, evaluation of the acceptability of disposal of disused sealed sources in a near surface disposal facility is of utmost importance.

  13. Short and long term representation of an unfamiliar tone distribution

    PubMed Central

    Diercks, Charlette; Troje, Nikolaus F.; Cuddy, Lola L.

    2016-01-01

    We report on a study conducted to extend our knowledge about the process of gaining a mental representation of music. Several studies, inspired by research on the statistical learning of language, have investigated statistical learning of sequential rules underlying tone sequences. Given that the mental representation of music correlates with distributional properties of music, we tested whether participants are able to abstract distributional information contained in tone sequences to form a mental representation. For this purpose, we created an unfamiliar music genre defined by an underlying tone distribution, to which 40 participants were exposed. Our stimuli allowed us to differentiate between sensitivity to the distributional properties contained in test stimuli and long term representation of the distributional properties of the music genre overall. Using a probe tone paradigm and a two-alternative forced choice discrimination task, we show that listeners are able to abstract distributional properties of music through mere exposure into a long term representation of music. This lends support to the idea that statistical learning is involved in the process of gaining musical knowledge.

  14. Phonatory sound sources in terms of Lagrangian Coherent Structures

    NASA Astrophysics Data System (ADS)

    McPhail, Michael; Krane, Michael

    2015-11-01

    Lagrangian Coherent Structures (LCS) are used to identify sound sources in phonation. Currently, it is difficult to causally relate changes in airflow topology caused by voice disorders to changes in voiced sound production. LCS reveals a flow's topology by decomposing the flow into regions of distinct dynamics. The aeroacoustic sources can then be written in terms of the motion of the boundaries of these distinct regions. Breaking down the flow into constituent parts shows how each distinct region contributes to sound production. This approach provides a framework to connect changes in anatomy from a voice disorder to measurable changes in the resulting sound. The approach is presented for simulations of some canonical cases of vortex sound generation and a two-dimensional simulation of phonation. Acknowledgment: NIH grant 2R01DC005642.

  15. A nuclear source term analysis for spacecraft power systems

    SciTech Connect

    McCulloch, W.H.

    1998-12-01

    All US space missions involving on board nuclear material must be approved by the Office of the President. To be approved the mission and the hardware systems must undergo evaluations of the associated nuclear health and safety risk. One part of these evaluations is the characterization of the source terms, i.e., the estimate of the amount, physical form, and location of nuclear material, which might be released into the environment in the event of credible accidents. This paper presents a brief overview of the source term analysis by the Interagency Nuclear Safety Review Panel for the NASA Cassini Space Mission launched in October 1997. Included is a description of the Energy Interaction Model, an innovative approach to the analysis of potential releases from high velocity impacts resulting from launch aborts and reentries.

  16. Actinide Source Term Program, position paper. Revision 1

    SciTech Connect

    Novak, C.F.; Papenguth, H.W.; Crafts, C.C.; Dhooge, N.J.

    1994-11-15

    The Actinide Source Term represents the quantity of actinides that could be mobilized within WIPP brines and could migrate with the brines away from the disposal room vicinity. This document presents the various proposed methods for estimating this source term, with a particular focus on defining these methods and evaluating the defensibility of the models for mobile actinide concentrations. The conclusions reached in this document are: the 92 PA "expert panel" model for mobile actinide concentrations is not defensible; and, although it is extremely conservative, the "inventory limits" model is the only existing defensible model for the actinide source term. The model effort in progress, "chemical modeling of mobile actinide concentrations", supported by a laboratory effort that is also in progress, is designed to provide a reasonable description of the system and be scientifically realistic and supplant the "inventory limits" model.

  17. An approach to distribution short-term load forecasting

    SciTech Connect

    Stratton, R.C.; Gaustad, K.L.

    1995-03-01

    This paper reports on the developments and findings of the Distribution Short-Term Load Forecaster (DSTLF) research activity. The objective of this research is to develop a distribution short-term load forecasting technology consisting of a forecasting method, a development methodology, the theories necessary to support the required technical components, and the hardware and software tools required to perform the forecast. The DSTLF consists of four major components: monitored endpoint load forecaster (MELF), nonmonitored endpoint load forecaster (NELF), topological integration forecaster (TIF), and a dynamic tuner. These components interact to provide short-term forecasts at various points in the distribution system, e.g., feeder, line section, and endpoint. This paper discusses the DSTLF methodology and the MELF component. MELF, based on artificial neural network technology, predicts distribution endpoint loads for an hour, a day, and a week in advance. Predictions are developed using time, calendar, historical load, and weather data. The overall DSTLF architecture and a prototype MELF module for retail endpoints have been developed. Future work will be focused on refining and extending MELF and developing NELF and TIF capabilities.
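
    A minimal sketch of an endpoint load forecaster in the spirit of MELF (my own illustration with synthetic data; the DSTLF feature set and network are not reproduced here): a small neural network driven by time-of-day, day-of-week, temperature, and lagged-load features.

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(4)
      hours = np.arange(24 * 365)

      # Synthetic endpoint load: daily and weekly cycles plus a temperature effect.
      temperature = 15 + 10 * np.sin(2 * np.pi * hours / (24 * 365)) + rng.normal(0, 2, hours.size)
      load = (5 + 2 * np.sin(2 * np.pi * hours / 24) + 1.5 * np.sin(2 * np.pi * hours / (24 * 7))
              + 0.1 * np.maximum(temperature - 20, 0) + rng.normal(0, 0.3, hours.size))

      # Features: hour of day, day of week, temperature, and the load 24 h earlier.
      X = np.column_stack([hours % 24, (hours // 24) % 7, temperature, np.roll(load, 24)])[24:]
      y = load[24:]

      model = make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0))
      model.fit(X[:-168], y[:-168])              # hold out the final week
      mape = np.mean(np.abs(model.predict(X[-168:]) - y[-168:]) / y[-168:]) * 100
      print(f"hold-out MAPE over the final week: {mape:.1f}%")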

  18. Dose distributions in regions containing beta sources: Uniform spherical source regions in homogeneous media

    SciTech Connect

    Werner, B.L.; Rahman, M.; Salk, W.N. ); Kwok, C.S. )

    1991-11-01

    The energy-averaged transport model for the calculation of dose rate distributions is applied to uniform, spherical source distributions in homogeneous media for radii smaller than the electron range. The model agrees well with Monte Carlo based calculations for source distributions with radii greater than half the continuous slowing down approximation range. The dose rate distributions can be written in the medical internal radiation dose (MIRD) formalism.
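
    In the MIRD formalism referred to above (standard form, summarized here rather than quoted from the paper), the absorbed dose rate in a target region \(r_T\) from activity in source regions \(r_S\) is

      \[
      \dot{D}(r_T, t) = \sum_{r_S} A(r_S, t)\, S(r_T \leftarrow r_S), \qquad
      S(r_T \leftarrow r_S) = \frac{1}{m(r_T)} \sum_i \Delta_i\, \phi_i(r_T \leftarrow r_S),
      \]

    where \(A\) is the source-region activity, \(\Delta_i\) the mean energy emitted per decay for emission \(i\), \(\phi_i\) the absorbed fraction, and \(m(r_T)\) the target mass.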

  19. The source and distribution of Galactic positrons

    NASA Technical Reports Server (NTRS)

    Purcell, W. R.; Dixon, D. D.; Cheng, L.-X.; Leventhal, M.; Kinzer, R. L.; Kurfess, J. D.; Skibo, J. G.; Smith, D. M.; Tueller, J.

    1997-01-01

    The Oriented Scintillation Spectrometer Experiment (OSSE) observations of the Galactic plane and the Galactic center region were combined with observations acquired with other instruments in order to produce a map of the Galactic 511 keV annihilation radiation. Two mapping techniques were applied to the data: the maximum entropy method, and the basis pursuit inversion method. The resulting maps are qualitatively similar and show evidence for a central bulge and a weak galactic disk component. The weak disk is consistent with that expected from positrons produced by the decay of radioactive Al-26 in the interstellar medium. Both maps suggest an enhanced region of emission near l = -4 deg, b = 7 deg, with a flux of approximately 50 percent of that of the bulge. The existence of this emission appears significant, although the location is not well determined. The source of this enhanced emission is presently unknown.

  20. Tetrodotoxin: chemistry, toxicity, source, distribution and detection.

    PubMed

    Bane, Vaishali; Lehane, Mary; Dikshit, Madhurima; O'Riordan, Alan; Furey, Ambrose

    2014-02-01

    Tetrodotoxin (TTX) is a naturally occurring toxin that has been responsible for human intoxications and fatalities. Its usual route of toxicity is via the ingestion of contaminated puffer fish which are a culinary delicacy, especially in Japan. TTX was believed to be confined to regions of South East Asia, but recent studies have demonstrated that the toxin has spread to regions in the Pacific and the Mediterranean. There is no known antidote to TTX which is a powerful sodium channel inhibitor. This review aims to collect pertinent information available to date on TTX and its analogues with a special emphasis on the structure, aetiology, distribution, effects and the analytical methods employed for its detection. PMID:24566728

  1. Tetrodotoxin: Chemistry, Toxicity, Source, Distribution and Detection

    PubMed Central

    Bane, Vaishali; Lehane, Mary; Dikshit, Madhurima; O’Riordan, Alan; Furey, Ambrose

    2014-01-01

    Tetrodotoxin (TTX) is a naturally occurring toxin that has been responsible for human intoxications and fatalities. Its usual route of toxicity is via the ingestion of contaminated puffer fish which are a culinary delicacy, especially in Japan. TTX was believed to be confined to regions of South East Asia, but recent studies have demonstrated that the toxin has spread to regions in the Pacific and the Mediterranean. There is no known antidote to TTX which is a powerful sodium channel inhibitor. This review aims to collect pertinent information available to date on TTX and its analogues with a special emphasis on the structure, aetiology, distribution, effects and the analytical methods employed for its detection. PMID:24566728

  2. Sources and distributions of dark matter

    SciTech Connect

    Sikivie, P. |

    1995-12-31

    In the first section, the author tries to convey a sense of the variety of observational inputs that tell about the existence and the spatial distribution of dark matter in the universe. In the second section, he briefly reviews the four main dark matter candidates, taking note of each candidate's status in the world of particle physics, its production in the early universe, its effect upon large scale structure formation and the means by which it may be detected. Section 3 concerns the energy spectrum of (cold) dark matter particles on earth as may be observed some day in a direct detection experiment. It is a brief account of work done in collaboration with J. Ipser and, more recently, with I. Tkachev and Y. Wang.

  3. Contamination on LDEF: Sources, distribution, and history

    NASA Technical Reports Server (NTRS)

    Pippin, Gary; Crutcher, Russ

    1993-01-01

    An introduction to contamination effects observed on the Long Duration Exposure Facility (LDEF) is presented. The activities reported are part of Boeing's obligation to the LDEF Materials Special Investigation Group. The contamination films and particles had minimal influence on the thermal performance of the LDEF. Some specific areas did have large changes in optical properties. Films also interfered with recession rate determination by reacting with the oxygen or physically shielding underlying material. Generally, contaminant films lessen the measured recession rate relative to 'clean' surfaces. On orbit generation of particles may be an issue for sensitive optics. Deposition on lenses may lead to artifacts on photographic images or cause sensors to respond inappropriately. Particles in the line of sight of sensors can cause stray light to be scattered into sensors. Particles also represent a hazard for mechanisms in that they can physically block and/or increase friction or wear on moving surfaces. LDEF carried a rather complex mixture of samples and support hardware into orbit. The experiments were assembled under a variety of conditions and time constraints and stored for up to five years before launch. The structure itself was so large that it could not be baked after the interior was painted with chemglaze Z-306 polyurethane based black paint. Any analysis of the effects of molecular and particulate contamination must account for a complex array of sources, wide variation in processes over time, and extreme variation in environment from ground to launch to flight. Surface conditions at certain locations on LDEF were established by outgassing of molecular species from particular materials onto adjacent surfaces, followed by alteration of those species due to exposure to atomic oxygen and/or solar radiation.

  4. Separating More Sources Than Sensors Using Time-Frequency Distributions

    NASA Astrophysics Data System (ADS)

    Linh-Trung, Nguyen; Belouchrani, Adel; Abed-Meraim, Karim; Boashash, Boualem

    2005-12-01

    We examine the problem of blind separation of nonstationary sources in the underdetermined case, where there are more sources than sensors. Since time-frequency (TF) signal processing provides effective tools for dealing with nonstationary signals, we propose a new separation method that is based on time-frequency distributions (TFDs). The underlying assumption is that the original sources are disjoint in the time-frequency (TF) domain. The successful method recovers the sources by performing the following four main procedures. First, the spatial time-frequency distribution (STFD) matrices are computed from the observed mixtures. Next, the auto-source TF points are separated from cross-source TF points thanks to the special structure of these mixture STFD matrices. Then, the vectors that correspond to the selected auto-source points are clustered into different classes according to the spatial directions which differ among different sources; each class, now containing the auto-source points of only one source, gives an estimation of the TFD of this source. Finally, the source waveforms are recovered from their TFD estimates using TF synthesis. Simulated experiments indicate the success of the proposed algorithm in different scenarios. We also contribute with two other modified versions of the algorithm to better deal with auto-source point selection.
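
    The core of the method can be summarized in standard time-frequency BSS notation (for orientation; this is a summary, not text from the paper). For mixtures x(t) = A s(t), the spatial time-frequency distribution satisfies

      \[
      \mathbf{D}_{xx}(t,f) = \mathbf{A}\,\mathbf{D}_{ss}(t,f)\,\mathbf{A}^{H},
      \]

    and at an auto-source point where only source \(i\) is active, \(\mathbf{D}_{ss}(t,f)\) is effectively rank one, so

      \[
      \mathbf{D}_{xx}(t,f) \approx D_{s_i s_i}(t,f)\,\mathbf{a}_i \mathbf{a}_i^{H},
      \]

    which is why the principal eigenvectors of the selected STFD matrices cluster by the spatial direction \(\mathbf{a}_i\) of each source, even when there are more sources than sensors.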

  5. Adaptive Source Coding Schemes for Geometrically Distributed Integer Alphabets

    NASA Technical Reports Server (NTRS)

    Cheung, K-M.; Smyth, P.

    1993-01-01

    Revisit the Gallager and van Voorhis optimal source coding scheme for geometrically distributed non-negative integer alphabets and show that the various subcodes in the popular Rice algorithm can be derived from the Gallager and van Voorhis code.
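
    A minimal sketch of the Rice subcodes referred to above (my own illustration): a non-negative integer n is split into a quotient, sent in unary, and a k-bit remainder; for geometrically distributed integers a suitable k makes this a Golomb-optimal code.

      def rice_encode(n: int, k: int) -> str:
          # Unary-coded quotient, terminating zero, then k remainder bits.
          q, r = n >> k, n & ((1 << k) - 1)
          return "1" * q + "0" + (format(r, f"0{k}b") if k else "")

      def rice_decode(bits: str, k: int) -> int:
          q = bits.index("0")                          # count leading ones
          r = int(bits[q + 1:q + 1 + k], 2) if k else 0
          return (q << k) | r

      for n in range(8):
          code = rice_encode(n, k=2)
          assert rice_decode(code, k=2) == n
          print(f"n={n}: {code}")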

  6. The costs of long-term care: distribution and responsibility.

    PubMed

    Wallack, S S; Cohen, M A

    1988-01-01

    Long-term care costs will result in financial hardship for millions of elderly Americans and their families. The growing number of elderly people has focused public attention on the catastrophic problem of coverage for long-term care. Social insurance is unlikely to emerge as a solution in the USA. One reason is that the expected total cost is viewed as an unmanageable burden by both Federal and State governments. To others, it is the uncertainty surrounding the projected costs. This paper reports on the results of a double-decrement life-table analysis, based on a national survey of the elderly taken in early 1977 and one year later, that estimated the distribution and total lifetime nursing-home costs of the elderly. Combining the probability of nursing-home entry and length of stay, a 65-year-old faces a 43% chance of entering a nursing home and spending about $11,000 (1980 dollars). The distribution of lifetime costs is, however, very skewed, with 13% of the elderly consuming 90% of the resources. Thus, while the costs of nursing-home care can be catastrophic for an individual, spread across a group they are not unmanageable. Given the distribution of income and assets among the elderly, a sizeable proportion could readily afford the necessary premiums of different emerging insurance and delivery programmes. Alternative private and public models of long-term care must be evaluated in terms of the goals of a finance and delivery system for long-term care. PMID:3129256

  7. Near term climate projections for invasive species distributions

    USGS Publications Warehouse

    Jarnevich, C.S.; Stohlgren, T.J.

    2009-01-01

    Climate change and invasive species pose important conservation issues separately, and should be examined together. We used existing long term climate datasets for the US to project potential climate change into the future at a finer spatial and temporal resolution than the climate change scenarios generally available. These fine scale projections, along with new species distribution modeling techniques to forecast the potential extent of invasive species, can provide useful information to aid conservation and invasive species management efforts. We created habitat suitability maps for Pueraria montana (kudzu) under current climatic conditions and potential average conditions up to 30 years in the future. We examined how the potential distribution of this species will be affected by changing climate, and the management implications associated with these changes. Our models indicated that P. montana may increase its distribution particularly in the Northeast with climate change and may decrease in other areas. © 2008 Springer Science+Business Media B.V.

  8. 14 CFR 25.1310 - Power source capacity and distribution.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... functioning normally. (2) Essential loads, after failure of any one prime mover, power converter, or energy... source of power is required, after any failure or malfunction in any one power supply system... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Power source capacity and distribution....

  9. 14 CFR 23.1310 - Power source capacity and distribution.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Power source capacity and distribution. 23.1310 Section 23.1310 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION AIRCRAFT AIRWORTHINESS STANDARDS: NORMAL, UTILITY, ACROBATIC, AND COMMUTER CATEGORY AIRPLANES Equipment General § 23.1310 Power source...

  10. Trace metal source terms in carbon sequestration environments.

    PubMed

    Karamalidis, Athanasios K; Torres, Sharon G; Hakala, J Alexandra; Shao, Hongbo; Cantrell, Kirk J; Carroll, Susan

    2013-01-01

    Carbon dioxide sequestration in deep saline and depleted oil geologic formations is feasible and promising; however, possible CO(2) or CO(2)-saturated brine leakage to overlying aquifers may pose environmental and health impacts. The purpose of this study was to experimentally define a range of concentrations that can be used as the trace element source term for reservoirs and leakage pathways in risk simulations. Storage source terms for trace metals are needed to evaluate the impact of brines leaking into overlying drinking water aquifers. The trace metal release was measured from cements and sandstones, shales, carbonates, evaporites, and basalts from the Frio, In Salah, Illinois Basin, Decatur, Lower Tuscaloosa, Weyburn-Midale, Bass Islands, and Grand Ronde carbon sequestration geologic formations. Trace metal dissolution was tracked by measuring solution concentrations over time under conditions (e.g., pressures, temperatures, and initial brine compositions) specific to the sequestration projects. Existing metrics for maximum contaminant levels (MCLs) for drinking water as defined by the U.S. Environmental Protection Agency (U.S. EPA) were used to categorize the relative significance of metal concentration changes in storage environments because of the presence of CO(2). Results indicate that Cr and Pb released from sandstone reservoir and shale cap rocks exceed the MCLs by an order of magnitude, while Cd and Cu were at or below drinking water thresholds. In carbonate reservoirs As exceeds the MCLs by an order of magnitude, while Cd, Cu, and Pb were at or below drinking water standards. Results from this study can be used as a reasonable estimate of the trace element source term for reservoirs and leakage pathways in risk simulations to further evaluate the impact of leakage on groundwater quality. PMID:23215015

  11. Trace Metal Source Terms in Carbon Sequestration Environments

    SciTech Connect

    Karamalidis, Athanasios; Torres, Sharon G.; Hakala, Jacqueline A.; Shao, Hongbo; Cantrell, Kirk J.; Carroll, Susan A.

    2013-01-01

    Carbon dioxide sequestration in deep saline and depleted oil geologic formations is feasible and promising; however, possible CO2 or CO2-saturated brine leakage to overlying aquifers may pose environmental and health impacts. The purpose of this study was to experimentally define a range of concentrations that can be used as the trace element source term for reservoirs and leakage pathways in risk simulations. Storage source terms for trace metals are needed to evaluate the impact of brines leaking into overlying drinking water aquifers. The trace metal release was measured from cements and sandstones, shales, carbonates, evaporites, and basalts from the Frio, In Salah, Illinois Basin, Decatur, Lower Tuscaloosa, Weyburn-Midale, Bass Islands, and Grand Ronde carbon sequestration geologic formations. Trace metal dissolution was tracked by measuring solution concentrations over time under conditions (e.g., pressures, temperatures, and initial brine compositions) specific to the sequestration projects. Existing metrics for maximum contaminant levels (MCLs) for drinking water as defined by the U.S. Environmental Protection Agency (U.S. EPA) were used to categorize the relative significance of metal concentration changes in storage environments because of the presence of CO2. Results indicate that Cr and Pb released from sandstone reservoir and shale cap rocks exceed the MCLs by an order of magnitude, while Cd and Cu were at or below drinking water thresholds. In carbonate reservoirs As exceeds the MCLs by an order of magnitude, while Cd, Cu, and Pb were at or below drinking water standards. Results from this study can be used as a reasonable estimate of the trace element source term for reservoirs and leakage pathways in risk simulations to further evaluate the impact of leakage on groundwater quality.

  12. Trace Metal Source Terms in Carbon Sequestration Environments

    SciTech Connect

    Karamalidis, Athanasios K; Torres, Sharon G; Hakala, J Alexandra; Shao, Hongbo; Cantrell, Kirk J; Carroll, Susan

    2012-02-05

    Carbon dioxide sequestration in deep saline and depleted oil geologic formations is feasible and promising, however, possible CO₂ or CO₂-saturated brine leakage to overlying aquifers may pose environmental and health impacts. The purpose of this study was to experimentally define trace metal source terms from the reaction of supercritical CO₂, storage reservoir brines, reservoir and cap rocks. Storage reservoir source terms for trace metals are needed to evaluate the impact of brines leaking into overlying drinking water aquifers. The trace metal release was measured from sandstones, shales, carbonates, evaporites, basalts and cements from the Frio, In Salah, Illinois Basin – Decatur, Lower Tuscaloosa, Weyburn-Midale, Bass Islands and Grand Ronde carbon sequestration geologic formations. Trace metal dissolution is tracked by measuring solution concentrations over time under conditions (e.g. pressures, temperatures, and initial brine compositions) specific to the sequestration projects. Existing metrics for Maximum Contaminant Levels (MCLs) for drinking water as defined by the U.S. Environmental Protection Agency (U.S. EPA) were used to categorize the relative significance of metal concentration changes in storage environments due to the presence of CO₂. Results indicate that Cr and Pb released from sandstone reservoir and shale cap rock exceed the MCLs by an order of magnitude while Cd and Cu were at or below drinking water thresholds. In carbonate reservoirs As exceeds the MCLs by an order of magnitude, while Cd, Cu, and Pb were at or below drinking water standards. Results from this study can be used as a reasonable estimate of the reservoir and caprock source term to further evaluate the impact of leakage on groundwater quality.

  13. Long-term cycles in cosmic X-ray sources

    NASA Technical Reports Server (NTRS)

    Priedhorsky, W. C.; Holt, S. S.

    1987-01-01

    Data on long-term cycles in galactic X-ray sources are reviewed, and classes of variations are identified including precessional activity, recurrent outbursts in Population II sources, and Be/neutron star flare cycles. Cycles of 30-300 days have been found in LMC X-4, Her X-1, SS433, and Cyg X-1 which represent cyclic variations in both the inner and outer parts of the accretion disk. Quasi-periodic cycles with periods ranging from 1/2 to 2 years have been noted in several low-mass X-ray binaries. It is suggested that periodic outbursts in the Be/neutron star systems may result from variable mass transfer in a wide eccentric orbit.

  14. Development of alternate methods of determining integrated SMR source terms

    SciTech Connect

    Barry, Kenneth

    2014-06-10

    The Nuclear Energy Institute (NEI) Small Modular Reactor (SMR) Licensing Task Force (TF) has been evaluating licensing issues unique and important to iPWRs, ranking these issues, and developing NEI position papers for submittal to the U.S. Nuclear Regulatory Commission (NRC) during the past three years. Papers have been developed and submitted to the NRC in a range of areas including: Price-Anderson Act, NRC annual fees, security, modularity, and staffing. In December, 2012, NEI completed a draft position paper on SMR source terms and participated in an NRC public meeting presenting a summary of this paper, which was subsequently submitted to the NRC. One important conclusion of the source term paper was the evaluation and selection of high importance areas where additional research would have a significant impact on source terms. The highest ranked research area was iPWR containment aerosol natural deposition. The NRC accepts the use of existing aerosol deposition correlations in Regulatory Guide 1.183, but these were developed for large light water reactor (LWR) containments. Application of these correlations to an iPWR design has resulted in greater than a ten-fold reduction of containment airborne aerosol inventory as compared to large LWRs. Development and experimental justification of containment aerosol natural deposition correlations specifically for the unique iPWR containments is expected to result in a large reduction of design basis and beyond-design-basis accident source terms with concomitantly smaller dose to workers and the public. Therefore, NRC acceptance of iPWR containment aerosol natural deposition correlations will directly support the industry’s goal of reducing the Emergency Planning Zone (EPZ) for SMRs. Based on the results in this work, it is clear that thermophoresis is relatively unimportant for iPWRs. Gravitational settling is well understood, and may be the dominant process for a dry environment. Diffusiophoresis and enhanced
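
    The abstract notes that gravitational settling is well understood and may dominate aerosol removal in a dry iPWR containment. Purely as a hedged illustration (illustrative particle and containment parameters, not the Regulatory Guide 1.183 correlations or any NRC-endorsed model), the sketch below evaluates a Stokes settling velocity with slip correction and the corresponding well-mixed depletion rate.

```python
# Sketch of gravitational settling for containment aerosol (illustrative values).
import numpy as np

def settling_velocity(d_p, rho_p=2000.0, mu=1.8e-5, mfp=6.8e-8, g=9.81):
    """Stokes settling velocity (m/s) with Cunningham slip correction.
    d_p: particle diameter (m); rho_p: particle density (kg/m^3);
    mu: gas viscosity (Pa s); mfp: gas mean free path (m)."""
    kn = 2.0 * mfp / d_p
    cc = 1.0 + kn * (1.257 + 0.4 * np.exp(-1.1 / kn))
    return rho_p * d_p**2 * g * cc / (18.0 * mu)

# Well-mixed first-order depletion rate for a hypothetical small containment.
floor_area = 80.0      # m^2 (assumed)
free_volume = 3000.0   # m^3 (assumed)
for d_um in (0.5, 1.0, 2.0, 5.0):
    v_s = settling_velocity(d_um * 1e-6)
    lam = v_s * floor_area / free_volume          # removal rate, 1/s
    print(f"d = {d_um:4.1f} um  v_s = {v_s:.2e} m/s  "
          f"airborne half-life = {np.log(2) / lam / 3600:.1f} h")
```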

  15. Long-Term Stability of the NIST Standard Ultrasonic Source.

    PubMed

    Fick, Steven E

    2008-01-01

    The National Institute of Standards and Technology (NIST) Standard Ultrasonic Source (SUS) is a system comprising a transducer capable of output power levels up to 1 W at multiple frequencies between 1 MHz and 30 MHz, and an electrical impedance-matching network that allows the system to be driven by a conventional 50 Ω rf (radio-frequency) source. It is designed to allow interlaboratory replication of ultrasonic power levels with high accuracy using inexpensive readily available ancillary equipment. The SUS was offered for sale for 14 years (1985 to 1999). Each system was furnished with data for the set of calibration points (combinations of power level and frequency) specified by the customer. Of the systems that had been ordered with some calibration points in common, three were returned more than once to NIST for recalibration. Another system retained at NIST has been recalibrated periodically since 1984. The collective data for these systems comprise 9 calibration points and 102 measurements spanning a 17 year interval ending in 2001, the last year NIST ultrasonic power measurement services were available to the public. These data have been analyzed to compare variations in output power with frequency, power level, and time elapsed since the first calibration. The results verify the claim, made in the instruction sheet furnished with every SUS, that "long-term drift, if any, in the calibration of NIST Standard Sources is insignificant compared to the uncertainties associated with a single measurement of ultrasonic power by any method available at NIST." PMID:27096127

  16. A comparison of world-wide uses of severe reactor accident source terms

    SciTech Connect

    Ang, M.L.; Frid, W.; Kersting, E.J.; Friederichs, H.G.; Lee, R.Y.; Meyer-Heine, A.; Powers, D.A.; Soda, K.; Sweet, D.

    1994-09-01

    The definitions of source terms to reactor containments and source terms to the environment are discussed. A comparison is made between the TID-14844 example source term and the alternative source term described in NUREG-1465. Comparisons of these source terms to the containments and those used in France, Germany, Japan, Sweden, and the United Kingdom are made. Source terms to the environment calculated in NUREG-1500 and WASH-1400 are discussed. Again, these source terms are compared to those now being used in France, Germany, Japan, Sweden, and the United Kingdom. It is concluded that source terms to the containment suggested in NUREG-1465 are not greatly more conservative than those used in other countries. Technical bases for the source terms are similar. The regulatory use of the current understanding of radionuclide behavior varies among countries.

  17. Accident source terms for light-water nuclear power plants using high-burnup or MOX fuel.

    SciTech Connect

    Salay, Michael; Gauntt, Randall O.; Lee, Richard Y.; Powers, Dana Auburn; Leonard, Mark Thomas

    2011-01-01

    Representative accident source terms patterned after the NUREG-1465 Source Term have been developed for high burnup fuel in BWRs and PWRs and for MOX fuel in a PWR with an ice-condenser containment. These source terms have been derived using nonparametric order statistics to develop distributions for the timing of radionuclide release during four accident phases and for release fractions of nine chemical classes of radionuclides as calculated with the MELCOR 1.8.5 accident analysis computer code. The accident phases are those defined in the NUREG-1465 Source Term - gap release, in-vessel release, ex-vessel release, and late in-vessel release. Important differences among the accident source terms derived here and the NUREG-1465 Source Term are not attributable to either fuel burnup or use of MOX fuel. Rather, differences among the source terms are due predominantly to improved understanding of the physics of core meltdown accidents. Heat losses from the degrading reactor core prolong the process of in-vessel release of radionuclides. Improved understanding of the chemistries of tellurium and cesium under reactor accidents changes the predicted behavior characteristics of these radioactive elements relative to what was assumed in the derivation of the NUREG-1465 Source Term. An additional radionuclide chemical class has been defined to account for release of cesium as cesium molybdate which enhances molybdenum release relative to other metallic fission products.
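
    The source terms above were built with nonparametric order statistics over accident-code calculations. A minimal sketch of that idea, using synthetic release fractions in place of MELCOR output, is the classical 95/95 order-statistic bound from 59 runs alongside empirical percentiles:

```python
# Sketch: nonparametric (order-statistic) treatment of sampled release fractions.
# The release-fraction values are synthetic stand-ins, not MELCOR results.
import numpy as np

rng = np.random.default_rng(1)
release_fraction = rng.beta(2, 8, size=59)   # 59 hypothetical code runs

# Wilks' formula: probability that the sample maximum bounds the 95th percentile.
n = release_fraction.size
confidence = 1.0 - 0.95**n                   # >= 0.95 for n >= 59
bound_95_95 = release_fraction.max()

# Empirical distribution used to characterise the spread of release fractions.
p50, p95 = np.percentile(release_fraction, [50, 95])
print(f"n={n}, confidence that max bounds P95: {confidence:.3f}")
print(f"median={p50:.3f}, empirical P95={p95:.3f}, 95/95 bound={bound_95_95:.3f}")
```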

  18. Tank waste source term inventory validation. Volume 1. Letter report

    SciTech Connect

    Brevick, C.H.; Gaddis, L.A.; Johnson, E.D.

    1995-04-28

    The sample data for a selection of 11 radionuclides and 24 chemical analytes were extracted from six separate sample data sets, arranged in a tabular format, and plotted on scatter plots for all of the 149 single-shell tanks, the 24 double-shell tanks, and the four aging waste tanks. The solid and liquid sample data were placed in separate tables and plots. The sample data and plots were compiled from the following data sets: characterization raw sample data, recent core samples, D. Braun data base, Wastren (Van Vleet) data base, TRAC and HTCE inventories. This document is Volume I of the Letter Report entitled Tank Waste Source Term Inventory Validation.

  19. Estimating Source Terms for Diverse Spent Nuclear Fuel Types

    SciTech Connect

    Brett Carlsen; Layne Pincock

    2004-11-01

    The U.S. Department of Energy (DOE) National Spent Nuclear Fuel Program is responsible for developing a defensible methodology for determining the radionuclide inventory for the DOE spent nuclear fuel (SNF) to be dispositioned at the proposed Monitored Geologic Repository at the Yucca Mountain Site. SNF owned by DOE includes diverse fuels from various experimental, research, and production reactors. These fuels currently reside at several DOE sites, universities, and foreign research reactor sites. Safe storage, transportation, and ultimate disposal of these fuels will require radiological source terms as inputs to safety analyses that support design and licensing of the necessary equipment and facilities. This paper summarizes the methodology developed for estimating radionuclide inventories associated with DOE-owned SNF. The results will support development of design and administrative controls to manage radiological risks and may later be used to demonstrate conformance with repository acceptance criteria.

  20. Computational determination of absorbed dose distributions from gamma ray sources

    NASA Astrophysics Data System (ADS)

    Zhou, Chuanyu; Inanc, Feyzi

    2001-04-01

    A biomedical procedure known as brachytherapy involves the insertion of many radioactive seeds into a diseased gland to eliminate the diseased tissue. For such implementations, the spatial distribution of absorbed dose is very important. A simulation tool has been developed to determine the spatial distribution of absorbed dose in heterogeneous environments where the gamma ray source consists of many small internal radiation emitters. The computation is based on the integral transport method and is performed in parallel. Preliminary results involving 137Cs and 125I sources surrounded by water and comparison of the results to the experimental and computational data available in the literature are presented.
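
    The paper's method is an integral transport calculation; as a much simpler stand-in that still shows how the dose from many seeds superposes, the sketch below sums an inverse-square kernel with exponential attenuation over a seed array. The attenuation coefficient, seed strength, and layout are illustrative assumptions, not 137Cs or 125I data.

```python
# Simplified point-kernel sketch of dose-rate superposition from many seeds.
import numpy as np

def dose_rate(points, seeds, strength=1.0, mu=0.1):
    """Relative dose rate at `points` from `seeds` in a water-like medium.
    points: (N,3) positions (cm); seeds: (M,3) seed positions (cm);
    mu: linear attenuation coefficient (1/cm), illustrative only."""
    d = np.linalg.norm(points[:, None, :] - seeds[None, :, :], axis=2)  # (N, M)
    d = np.maximum(d, 0.05)                 # avoid the singularity at a seed
    return (strength * np.exp(-mu * d) / d**2).sum(axis=1)

# Regular 3x3 grid of seeds and a line of evaluation points along +x.
seeds = np.array([[x, y, 0.0] for x in (-1.0, 0.0, 1.0) for y in (-1.0, 0.0, 1.0)])
points = np.column_stack([np.linspace(0.2, 5.0, 10), np.zeros(10), np.zeros(10)])
for x, dr in zip(points[:, 0], dose_rate(points, seeds)):
    print(f"x = {x:4.2f} cm  relative dose rate = {dr:8.3f}")
```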

  1. The brightness and spatial distributions of terrestrial radio sources

    NASA Astrophysics Data System (ADS)

    Offringa, A. R.; de Bruyn, A. G.; Zaroubi, S.; Koopmans, L. V. E.; Wijnholds, S. J.; Abdalla, F. B.; Brouw, W. N.; Ciardi, B.; Iliev, I. T.; Harker, G. J. A.; Mellema, G.; Bernardi, G.; Zarka, P.; Ghosh, A.; Alexov, A.; Anderson, J.; Asgekar, A.; Avruch, I. M.; Beck, R.; Bell, M. E.; Bell, M. R.; Bentum, M. J.; Best, P.; Bîrzan, L.; Breitling, F.; Broderick, J.; Brüggen, M.; Butcher, H. R.; de Gasperin, F.; de Geus, E.; de Vos, M.; Duscha, S.; Eislöffel, J.; Fallows, R. A.; Ferrari, C.; Frieswijk, W.; Garrett, M. A.; Grießmeier, J.; Hassall, T. E.; Horneffer, A.; Iacobelli, M.; Juette, E.; Karastergiou, A.; Klijn, W.; Kondratiev, V. I.; Kuniyoshi, M.; Kuper, G.; van Leeuwen, J.; Loose, M.; Maat, P.; Macario, G.; Mann, G.; McKean, J. P.; Meulman, H.; Norden, M. J.; Orru, E.; Paas, H.; Pandey-Pommier, M.; Pizzo, R.; Polatidis, A. G.; Rafferty, D.; Reich, W.; van Nieuwpoort, R.; Röttgering, H.; Scaife, A. M. M.; Sluman, J.; Smirnov, O.; Sobey, C.; Tagger, M.; Tang, Y.; Tasse, C.; Veen, S. ter; Toribio, C.; Vermeulen, R.; Vocks, C.; van Weeren, R. J.; Wise, M. W.; Wucknitz, O.

    2013-10-01

    Faint undetected sources of radio-frequency interference (RFI) might become visible in long radio observations when they are consistently present over time. Thereby, they might obstruct the detection of the weak astronomical signals of interest. This issue is especially important for Epoch of Reionization (EoR) projects that try to detect the faint redshifted H I signals from the time of the earliest structures in the Universe. We explore the RFI situation at 30-163 MHz by studying brightness histograms of visibility data observed with Low-Frequency Array (LOFAR), similar to radio-source-count analyses that are used in cosmology. An empirical RFI distribution model is derived that allows the simulation of RFI in radio observations. The brightness histograms show an RFI distribution that follows a power-law distribution with an estimated exponent around -1.5. With several assumptions, this can be explained with a uniform distribution of terrestrial radio sources whose radiation follows existing propagation models. Extrapolation of the power law implies that the current LOFAR EoR observations should be severely RFI limited if the strength of RFI sources remains strong after time integration. This is in contrast with actual observations, which almost reach the thermal noise and are thought not to be limited by RFI. Therefore, we conclude that it is unlikely that there are undetected RFI sources that will become visible in long observations. Consequently, there is no indication that RFI will prevent an EoR detection with LOFAR.
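
    A quick way to see how a source-count style analysis recovers the reported exponent of roughly -1.5 is to draw synthetic RFI brightnesses from a bounded power law and fit the slope of the differential counts; the bounds and sample size below are arbitrary assumptions, not LOFAR data.

```python
# Sketch: simulate RFI brightness following a power law (exponent ~ -1.5) and
# recover the exponent from a source-count style histogram.
import numpy as np

rng = np.random.default_rng(2)
alpha, s_min, s_max = -1.5, 1.0, 1e4
a1 = alpha + 1.0

# Inverse-CDF sampling of p(s) ~ s**alpha on [s_min, s_max].
u = rng.random(200_000)
s = (u * (s_max**a1 - s_min**a1) + s_min**a1) ** (1.0 / a1)

# Differential counts dN/dS in logarithmic bins, then a log-log slope estimate.
bins = np.logspace(np.log10(s_min), np.log10(s_max), 30)
counts, edges = np.histogram(s, bins=bins)
centers = np.sqrt(edges[:-1] * edges[1:])
dnds = counts / np.diff(edges)
good = counts > 0
slope, _ = np.polyfit(np.log10(centers[good]), np.log10(dnds[good]), 1)
print(f"recovered exponent ~ {slope:.2f} (input {alpha})")
```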

  2. Continuous-variable quantum key distribution with Gaussian source noise

    SciTech Connect

    Shen Yujie; Peng Xiang; Yang Jian; Guo Hong

    2011-05-15

    Source noise affects the security of continuous-variable quantum key distribution (CV QKD) and is difficult to analyze. We propose a model to characterize Gaussian source noise through introducing a neutral party (Fred) who induces the noise with a general unitary transformation. Without knowing Fred's exact state, we derive the security bounds for both reverse and direct reconciliations and show that the bound for reverse reconciliation is tight.

  3. Electric Transport Traction Power Supply System With Distributed Energy Sources

    NASA Astrophysics Data System (ADS)

    Abramov, E. Y.; Schurov, N. I.; Rozhkova, M. V.

    2016-04-01

    The paper addresses the problem of leveling the daily load curve of traction substations (TSS) for urban electric transport. A circuit for a traction power supply system (TPSS) with a distributed autonomous energy source (AES) based on photovoltaic (PV) and energy storage (ES) units is presented. A power-flow distribution algorithm for leveling the daily traction load curve is also introduced. In addition, an experimental model of the power supply system is described.
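
    The paper's own power-flow distribution algorithm is not reproduced in this record; purely as an illustration of daily-load-curve leveling, the sketch below applies greedy peak shaving with a hypothetical PV profile and a power-limited storage unit to a synthetic traction load.

```python
# Minimal peak-shaving sketch for a hypothetical daily traction-load curve.
import numpy as np

hours = np.arange(24)
# Synthetic load with morning and evening peaks (kW).
load = 400 + 250 * np.exp(-((hours - 8) ** 2) / 8) + 300 * np.exp(-((hours - 18) ** 2) / 8)
# Simple daylight PV profile (kW), assumed.
pv = 200 * np.clip(np.sin(np.pi * (hours - 6) / 12), 0, None)

net = load - pv
target = net.mean()                                 # flatten demand toward its mean
storage_power = np.clip(target - net, -150, 150)    # +charge / -discharge, 150 kW limit
energy = np.cumsum(storage_power)                   # relative state of charge, kWh (1 h steps)

substation = net + storage_power
print(f"peak without AES: {load.max():.0f} kW, with PV+ES: {substation.max():.0f} kW")
print(f"storage energy swing required: {energy.max() - energy.min():.0f} kWh")
```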

  4. Distributed source x-ray tube technology for tomosynthesis imaging

    PubMed Central

    Sprenger, F.; Calderon-Colon, X.; Cheng, Y.; Englestad, K.; Lu, J.; Maltz, J.; Paidi, A.; Qian, X.; Spronk, D.; Sultana, S.; Yang, G.; Zhou, O.

    2011-01-01

    Tomosynthesis imaging requires projection images from different viewing angles. Conventional systems use a moving x-ray source to acquire the individual projections. Using a stationary distributed x-ray source with a number of sources that equals the number of required projections, this can be achieved without any mechanical motion. Advantages are a potentially faster image acquisition speed, higher spatial and temporal resolution, and simple system design. We present distributed x-ray sources based on carbon nanotube (CNT) field emission cathodes. The field emission cathodes deliver the electrons required for x-ray production. CNT emitters feature stable emission at high current density, cold emission, excellent temporal control of the emitted electrons, and good configurability. We discuss the use of stationary sources for two applications: (i) a linear tube for stationary digital breast tomosynthesis (sDBT), and (ii) a square tube for on-board tomosynthesis image-guided radiation therapy (IGRT). Results from high energy distributed sources up to 160 kVp are also presented. PMID:21785671

  5. Method for image reconstruction of moving radionuclide source distribution

    DOEpatents

    Stolin, Alexander V.; McKisson, John E.; Lee, Seung Joon; Smith, Mark Frederick

    2012-12-18

    A method for image reconstruction of moving radionuclide distributions. Its particular embodiment is for single photon emission computed tomography (SPECT) imaging of awake animals, though its techniques are general enough to be applied to other moving radionuclide distributions as well. The invention eliminates motion and blurring artifacts for image reconstructions of moving source distributions. This opens new avenues in the area of small animal brain imaging with radiotracers, which can now be performed without the perturbing influences of anesthesia or physical restraint on the biological system.

  6. Coarse Grid Modeling of Turbine Film Cooling Flows Using Volumetric Source Terms

    NASA Technical Reports Server (NTRS)

    Heidmann, James D.; Hunter, Scott D.

    2001-01-01

    The recent trend in numerical modeling of turbine film cooling flows has been toward higher fidelity grids and more complex geometries. This trend has been enabled by the rapid increase in computing power available to researchers. However, the turbine design community requires fast turnaround time in its design computations, rendering these comprehensive simulations ineffective in the design cycle. The present study describes a methodology for implementing a volumetric source term distribution in a coarse grid calculation that can model the small-scale and three-dimensional effects present in turbine film cooling flows. This model could be implemented in turbine design codes or in multistage turbomachinery codes such as APNASA, where the computational grid size may be larger than the film hole size. Detailed computations of a single row of 35 deg round holes on a flat plate have been obtained for blowing ratios of 0.5, 0.8, and 1.0, and density ratios of 1.0 and 2.0 using a multiblock grid system to resolve the flows on both sides of the plate as well as inside the hole itself. These detailed flow fields were spatially averaged to generate a field of volumetric source terms for each conservative flow variable. Solutions were also obtained using three coarse grids having streamwise and spanwise grid spacings of 3d, 1d, and d/3. These coarse grid solutions used the integrated hole exit mass, momentum, energy, and turbulence quantities from the detailed solutions as volumetric source terms. It is shown that a uniform source term addition over a distance from the wall on the order of the hole diameter is able to predict adiabatic film effectiveness better than a near-wall source term model, while strictly enforcing correct values of integrated boundary layer quantities.
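
    A schematic of the volumetric source-term idea follows, with hypothetical hole and grid dimensions (this is not the APNASA or study implementation): integrate the hole-exit mass flow from the detailed solution and deposit it uniformly over the coarse cells lying within roughly one hole diameter of the wall.

```python
# Schematic of depositing an integrated hole-exit mass flow as volumetric
# source terms on a coarse near-wall grid (hypothetical numbers).
import numpy as np

d = 1.0e-3                        # film-cooling hole diameter, m (assumed)
rho, v_jet = 1.2, 50.0            # coolant density (kg/m^3) and exit velocity (m/s), assumed
m_dot = rho * v_jet * np.pi * (d / 2) ** 2      # integrated hole mass flow, kg/s

# Coarse near-wall cells: dx = dz = d, wall-normal layer of height ~1 hole diameter.
nx, nz = 3, 3
cell_volume = d * d * d
cells = np.zeros((nx, nz))        # mass source density, kg/(m^3 s)

# Spread the hole flow uniformly over the patch of cells covering the hole exit.
cells[:, :] = m_dot / (nx * nz * cell_volume)

print(f"hole mass flow = {m_dot:.3e} kg/s")
print(f"volumetric mass source per cell = {cells[0, 0]:.3e} kg/(m^3 s)")
# Momentum, energy, and turbulence sources would be distributed the same way,
# using the detailed solution's integrated hole-exit fluxes.
```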

  7. Wave spectra partitioning and long term statistical distribution

    NASA Astrophysics Data System (ADS)

    Portilla-Yandún, Jesús; Cavaleri, Luigi; Van Vledder, Gerbrant Ph.

    2015-12-01

    A new method is presented for a physically based statistical description of wind wave climatology. The method applies spectral partitioning to identify individual wave systems (partitions) in time series of 2D-wave spectra, followed by computing the probability of occurrence of their (peak) position in frequency-direction space. This distribution can be considered as a spectral density function to which another round of partitioning is applied to obtain spectral domains, each representing a typical wave system or population in a statistical sense. This two-step partitioning procedure allows identifying aggregate wave systems without the need to discuss specific characteristics as wind sea and swell systems. We suggest that each of these aggregate wave systems (populations) is linked to a specific generation pattern opening the way to dedicated analyses. Each population (of partitions) can be subjected to further analyses to add dimension carrying information based on integrated wave parameters of each partition, such as significant wave height, wave age, mean wave period and direction, among others. The new method is illustrated by analysing model spectra from a numerical wave prediction model and measured spectra from a directional wave buoy located in the Southern North Sea. It is shown that these two sources of information yield consistent results. Examples are given of computing the statistical distribution of significant wave height, spectral energy distribution and the spatial variation of wind wave characteristics along a north-south transect in the North Sea. Wind or wave age information can be included as an extra attribute of the members of a population to label them as wind sea or swell systems. Finally, suggestions are given for further applications of this new method.
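
    A toy version of the first two steps, assuming synthetic two-system spectra and using simple local-maximum detection in place of a full watershed partitioning, accumulates peak positions into the frequency-direction occurrence map described above:

```python
# Toy two-step sketch: detect peaks of each 2-D spectrum, then accumulate their
# positions in frequency-direction space as a probability-of-occurrence map.
import numpy as np
from scipy.ndimage import maximum_filter

rng = np.random.default_rng(3)
freq = np.linspace(0.05, 0.3, 40)                  # Hz
theta = np.linspace(0, 360, 72, endpoint=False)    # deg
occurrence = np.zeros((freq.size, theta.size))

def synthetic_spectrum():
    """Two Gaussian wave systems (a 'wind sea' and a 'swell') plus noise."""
    f, t = np.meshgrid(freq, theta, indexing="ij")
    spec = np.exp(-((f - 0.20) ** 2) / 2e-4 - ((t - 240) ** 2) / 800)        # wind sea
    spec += 0.6 * np.exp(-((f - 0.08) ** 2) / 5e-5 - ((t - 90) ** 2) / 400)  # swell
    return spec + 0.02 * rng.random(f.shape)

for _ in range(500):                               # 500 spectra in the "time series"
    spec = synthetic_spectrum()
    peaks = (spec == maximum_filter(spec, size=5)) & (spec > 0.3 * spec.max())
    occurrence += peaks

occurrence /= occurrence.sum()                     # density of peak occurrence
# A second partitioning of `occurrence` would delineate the aggregate populations.
print("most frequent peak cell (freq, direction index):",
      np.unravel_index(occurrence.argmax(), occurrence.shape))
```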

  8. Production, Distribution, and Applications of Californium-252 Neutron Sources

    SciTech Connect

    Balo, P.A.; Knauer, J.B.; Martin, R.C.

    1999-10-03

    The radioisotope {sup 252}Cf is routinely encapsulated into compact, portable, intense neutron sources with a 2.6-year half-life. A source the size of a person's little finger can emit up to 10{sup 11} neutrons/s. Californium-252 is used commercially as a reliable, cost-effective neutron source for prompt gamma neutron activation analysis (PGNAA) of coal, cement, and minerals, as well as for detection and identification of explosives, land mines, and unexploded military ordnance. Other uses are neutron radiography, nuclear waste assays, reactor start-up sources, calibration standards, and cancer therapy. The inherent safety of source encapsulations is demonstrated by 30 years of experience and by U.S. Bureau of Mines tests of source survivability during explosions. The production and distribution center for the U.S. Department of Energy (DOE) Californium Program is the Radiochemical Engineering Development Center (REDC) at Oak Ridge National Laboratory (ORNL). DOE sells {sup 252}Cf to commercial reencapsulators domestically and internationally.

  9. Security of quantum key distribution with light sources that are not independently and identically distributed

    NASA Astrophysics Data System (ADS)

    Nagamatsu, Yuichi; Mizutani, Akihiro; Ikuta, Rikizo; Yamamoto, Takashi; Imoto, Nobuyuki; Tamaki, Kiyoshi

    2016-04-01

    Although quantum key distribution (QKD) is theoretically secure, there is a gap between the theory and practice. In fact, real-life QKD may not be secure because component devices in QKD systems may deviate from the theoretical models assumed in security proofs. To solve this problem, it is necessary to construct the security proof under realistic assumptions on the source and measurement unit. In this paper, we prove the security of a QKD protocol under practical assumptions on the source that accommodate fluctuation of the phase and intensity modulations. As long as our assumptions hold, it does not matter at all how the phase and intensity distribute or whether or not their distributions over different pulses are independently and identically distributed. Our work shows that practical sources can be safely employed in QKD experiments.

  10. Production, distribution and applications of californium-252 neutron sources.

    PubMed

    Martin, R C; Knauer, J B; Balo, P A

    2000-01-01

    The radioisotope 252Cf is routinely encapsulated into compact, portable, intense neutron sources with a 2.6-yr half-life. A source the size of a person's little finger can emit up to 10(11) neutrons s(-1). Californium-252 is used commercially as a reliable, cost-effective neutron source for prompt gamma neutron activation analysis (PGNAA) of coal, cement and minerals, as well as for detection and identification of explosives, land mines and unexploded military ordnance. Other uses are neutron radiography, nuclear waste assays, reactor start-up sources, calibration standards and cancer therapy. The inherent safety of source encapsulations is demonstrated by 30 yr of experience and by US Bureau of Mines tests of source survivability during explosions. The production and distribution center for the US Department of Energy (DOE) Californium Program is the Radiochemical Engineering Development Center (REDC) at Oak Ridge National Laboratory (ORNL). DOE sells 252Cf to commercial reencapsulators domestically and internationally. Sealed 252Cf sources are also available for loan to agencies and subcontractors of the US government and to universities for educational, research and medical applications. The REDC has established the Californium User Facility (CUF) for Neutron Science to make its large inventory of 252Cf sources available to researchers for irradiations inside uncontaminated hot cells. Experiments at the CUF include a land mine detection system, neutron damage testing of solid-state detectors, irradiation of human cancer cells for boron neutron capture therapy experiments and irradiation of rice to induce genetic mutations. PMID:11003521

  11. Understanding the electrical behavior of the action potential in terms of elementary electrical sources.

    PubMed

    Rodriguez-Falces, Javier

    2015-03-01

    A concept of major importance in human electrophysiology studies is the process by which activation of an excitable cell results in a rapid rise and fall of the electrical membrane potential, the so-called action potential. Hodgkin and Huxley proposed a model to explain the ionic mechanisms underlying the formation of action potentials. However, this model is unsuitably complex for teaching purposes. In addition, the Hodgkin and Huxley approach describes the shape of the action potential only in terms of ionic currents, i.e., it is unable to explain the electrical significance of the action potential or describe the electrical field arising from this source using basic concepts of electromagnetic theory. The goal of the present report was to propose a new model to describe the electrical behaviour of the action potential in terms of elementary electrical sources (in particular, dipoles). The efficacy of this model was tested through a closed-book written exam. The proposed model increased the ability of students to appreciate the distributed character of the action potential and also to recognize that this source spreads out along the fiber as function of space. In addition, the new approach allowed students to realize that the amplitude and sign of the extracellular electrical potential arising from the action potential are determined by the spatial derivative of this intracellular source. The proposed model, which incorporates intuitive graphical representations, has improved students' understanding of the electrical potentials generated by bioelectrical sources and has heightened their interest in bioelectricity. PMID:25727465
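
    A didactic sketch of the dipole picture described above, with arbitrary units and constants (this is not the article's own classroom material): represent the fibre as a chain of axial current dipoles whose moments follow the spatial derivative of a stylised intracellular action potential, and sum their contributions at an extracellular point.

```python
# Extracellular potential of an action potential modelled as a chain of axial
# current dipoles (didactic sketch, arbitrary units).
import numpy as np

z = np.linspace(-20.0, 20.0, 2001)        # position along the fibre, mm
dz = z[1] - z[0]

def vm(z, z0=0.0):
    """Stylised intracellular action potential centred at z0 (arbitrary units)."""
    return 100.0 * np.exp(-((z - z0) ** 2) / 2.0) - 80.0

# Dipole moment density ~ -dVm/dz, oriented along the fibre axis.
dvm_dz = np.gradient(vm(z), dz)
moments = -dvm_dz * dz

def phi_extracellular(x_obs, z_obs, k=1.0):
    """Sum the axial-dipole contributions at an observation point (x_obs, z_obs)."""
    dz_vec = z_obs - z
    r = np.sqrt(x_obs**2 + dz_vec**2)
    return k * np.sum(moments * dz_vec / r**3)

# Potential "recorded" 1 mm away from the fibre at several axial positions:
for z_obs in (-5.0, -1.0, 0.0, 1.0, 5.0):
    print(f"z = {z_obs:5.1f} mm  phi ~ {phi_extracellular(1.0, z_obs):8.2f} (a.u.)")
```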

  12. Verification test calculations for the Source Term Code Package

    SciTech Connect

    Denning, R S; Wooton, R O; Alexander, C A; Curtis, L A; Cybulskis, P; Gieseke, J A; Jordan, H; Lee, K W; Nicolosi, S L

    1986-07-01

    The purpose of this report is to demonstrate the reasonableness of the Source Term Code Package (STCP) results. Hand calculations have been performed spanning a wide variety of phenomena within the context of a single accident sequence, a loss of all ac power with late containment failure, in the Peach Bottom (BWR) plant, and compared with STCP results. The report identifies some of the limitations of the hand calculation effort. The processes involved in a core meltdown accident are complex and coupled. Hand calculations by their nature must deal with gross simplifications of these processes. Their greatest strength is as an indicator that a computer code contains an error, for example that it doesn't satisfy basic conservation laws, rather than in showing the analysis accurately represents reality. Hand calculations are an important element of verification but they do not satisfy the need for code validation. The code validation program for the STCP is a separate effort. In general the hand calculation results show that models used in the STCP codes (e.g., MARCH, TRAP-MELT, VANESA) obey basic conservation laws and produce reasonable results. The degree of agreement and significance of the comparisons differ among the models evaluated. 20 figs., 26 tabs.

  13. SOURCE TERM TARGETED THRUST FY 2005 NEW START PROJECTS

    SciTech Connect

    NA

    2005-10-05

    While a significant amount of work has been devoted to developing thermodynamic data describing the sorption of radionuclides to iron oxides and other geomedia, little data exist to describe the interaction of key radionuclides found in high-level radioactive waste with the uranium surfaces expected in corroded spent nuclear fuel (SNF) waste packages. Recent work indicates that actinide adsorption to the U(VI) solids expected in the engineered barrier system may play a key role in the reduction of dissolved concentrations of radionuclides such as Np(V). However, little is known about the mechanism(s) of adsorption, nor are the thermodynamic data available to represent the phenomenon in predictive modeling codes. Unfortunately, this situation makes it difficult to consider actinide adsorption to the U(VI) silicates in either geochemical or performance assessment (PA) predictions. The primary goal in the Source Term Targeted Thrust area is to "study processes that control radionuclide release from the waste form". Knowledge of the adsorption of actinides to U(VI) silicate solids and its parameterization in geochemical models will be an important step towards this goal.

  14. Challenges in defining a radiologic and hydrologic source term for underground nuclear test centers, Nevada Test Site, Nye County, Nevada

    SciTech Connect

    Smith, D.K.

    1995-06-01

    The compilation of a radionuclide inventory for long-lived radioactive contaminants residual from nuclear testing provides a partial measure of the radiologic source term at the Nevada Test Site. The radiologic source term also includes potentially mobile short-lived radionuclides excluded from the inventory. The radiologic source term for tritium is known with accuracy and is equivalent to the hydrologic source term within the saturated zone. Definition of the total hydrologic source term for fission and activation products that have high activities for decades following underground testing involves knowledge and assumptions which are presently unavailable. Systematic investigation of the behavior of fission products, activation products, and actinides under saturated or partially saturated conditions is imperative to define a representative total hydrologic source term. This is particularly important given the heterogeneous distribution of radionuclides within testing centers. Data quality objectives which emphasize a combination of measurements and credible estimates of the hydrologic source term are a priority for near-field investigations at the Nevada Test Site.

  15. Oil source bed distribution in upper Tertiary of Gulf Coast

    SciTech Connect

    Dow, W.G.

    1985-02-01

    Effective oil source beds have not been reported in Miocene and younger Gulf Coast sediments and the organic matter present is invariably immature and oxidized. Crude oil composition, however, indicates origin from mature source beds containing reduced kerogen. Oil distribution suggests extensive vertical migration through fracture systems from localized sources in deeply buried, geopressured shales. A model is proposed in which oil source beds were deposited in intraslope basins that formed behind salt ridges. The combination of silled basin topography, rapid sedimentation, and enhanced oxygen-minimum zones during global warmups resulted in periodic anoxic environments and preservation of oil-generating organic matter. Anoxia was most widespread during the middle Miocene and Pliocene transgressions and rare during regressive cycles when anoxia occurred primarily in hypersaline conditions such as exist today in the Orca basin.

  16. Secure quantum key distribution with an uncharacterized source.

    PubMed

    Koashi, Masato; Preskill, John

    2003-02-01

    We prove the security of the Bennett-Brassard (BB84) quantum key distribution protocol for an arbitrary source whose averaged states are basis independent, a condition that is automatically satisfied if the source is suitably designed. The proof is based on the observation that, to an adversary, the key extraction process is equivalent to a measurement in the sigma(x) basis performed on a pure sigma(z)-basis eigenstate. The dependence of the achievable key length on the bit error rate is the same as that established by Shor and Preskill [Phys. Rev. Lett. 85, 441 (2000)

  17. Space distribution of extragalactic sources - Cosmology versus evolution

    NASA Technical Reports Server (NTRS)

    Cavaliere, A.; Maccacaro, T.

    1990-01-01

    Alternative cosmologies have been recurrently invoked to explain in terms of global spacetime structure the apparent large increase, with increasing redshift, in the average luminosity of active galactic nuclei. These models interestingly seek to avoid the complexities of the canonical interpretation in terms of intrinsic population evolutions in a Friedmann universe. However, a problem of consistency for these cosmologies is pointed out, since they have to include also other classes of extragalactic sources, such as clusters of galaxies and BL Lac objects, for which there is preliminary evidence of a different behavior.

  18. Review: Particle number size distributions from seven major sources and implications for source apportionment studies

    NASA Astrophysics Data System (ADS)

    Vu, Tuan V.; Delgado-Saborit, Juana Maria; Harrison, Roy M.

    2015-12-01

    The particle number size distribution (PNSD) of airborne particles not only provides us with information about sources and atmospheric processing of particles, but also plays an important role in determining regional lung dose. As a result, urban particles and their size distributions have received much attention with a rapid increase of publications in recent years. The object of this review is to synthesise and analyse existing knowledge on particles in urban environments with a focus on their number concentration and size distribution. This study briefly reviews the characterization of PNSD from seven major sources of urban particles including traffic emissions, industrial emissions, biomass burning, cooking, transported aerosol, marine aerosol and nucleation. It then discusses atmospheric physical processes such as coagulation or condensation which have a strong influence on PNSD. Finally, the implications of PNSD datasets for source modelling are briefly discussed. Based on this review, it is concluded that the concentrations, modal structures and temporal patterns of urban particles are strongly influenced by traffic emissions, which are identified as the main source of particle number in urban environments. Information derived from particle number size distributions is beginning to play an important role in source apportionment studies.
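
    Particle number size distributions of the kind reviewed here are commonly summarised as sums of lognormal modes; the sketch below composes a three-mode distribution with assumed nucleation, Aitken and accumulation parameters (not values fitted to any dataset in the review).

```python
# Sketch: PNSD composed of lognormal modes (illustrative parameters).
import numpy as np

def lognormal_mode(d, n_total, d_g, sigma_g):
    """dN/dlogDp for one lognormal mode: total number n_total (cm^-3),
    geometric mean diameter d_g (nm), geometric standard deviation sigma_g."""
    return (n_total / (np.sqrt(2 * np.pi) * np.log10(sigma_g))
            * np.exp(-(np.log10(d / d_g) ** 2) / (2 * np.log10(sigma_g) ** 2)))

d = np.logspace(0, 3, 200)                     # 1 nm - 1 um
modes = [(8000, 12, 1.7), (6000, 45, 1.8), (2500, 150, 1.6)]  # assumed (N, Dg, sigma)
dndlogdp = sum(lognormal_mode(d, *m) for m in modes)

# Total number concentration by trapezoidal integration over logDp.
logd = np.log10(d)
n_tot = np.sum(0.5 * (dndlogdp[1:] + dndlogdp[:-1]) * np.diff(logd))
print(f"total number concentration ~ {n_tot:.0f} cm^-3")
print(f"peak of the distribution at ~ {d[dndlogdp.argmax()]:.0f} nm")
```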

  19. Distributed policy based access to networked heterogeneous ISR data sources

    NASA Astrophysics Data System (ADS)

    Bent, G.; Vyvyan, D.; Wood, David; Zerfos, Petros; Calo, Seraphin

    2010-04-01

    Within a coalition environment, ad hoc Communities of Interest (CoI's) come together, perhaps for only a short time, with different sensors, sensor platforms, data fusion elements, and networks to conduct a task (or set of tasks) with different coalition members taking different roles. In such a coalition, each organization will have its own inherent restrictions on how it will interact with the others. These are usually stated as a set of policies, including security and privacy policies. The capability that we want to enable for a coalition operation is to provide access to information from any coalition partner in conformance with the policies of all. One of the challenges in supporting such ad-hoc coalition operations is that of providing efficient access to distributed sources of data, where the applications requiring the data do not have knowledge of the location of the data within the network. To address this challenge the International Technology Alliance (ITA) program has been developing the concept of a Dynamic Distributed Federated Database (DDFD), also known as a Gaian Database. This type of database provides a means for accessing data across a network of distributed heterogeneous data sources where access to the information is controlled by a mixture of local and global policies. We describe how a network of disparate ISR elements can be expressed as a DDFD and how this approach enables sensor and other information sources to be discovered autonomously or semi-autonomously and/or combined and fused, in accordance with formally defined local and global policies.

  20. Spatiotemporal distributions of tsunami sources and discovered periodicities

    NASA Astrophysics Data System (ADS)

    Levin, B. W.; Sasorova, E. V.

    2014-09-01

    Both spatial and spatiotemporal distributions of the sources of tsunamigenic earthquakes of tectonic origin over the last 112 years have been analyzed. This analysis has been made using tsunami databases published by the Institute of Computational Mathematics and Mathematical Geophysics (Siberian Branch, Russian Academy of Sciences) and the National Aeronautics and Space Administration (United States), as well as earthquake catalogs published by the National Earthquake Information Center (United States). It has been found that the pronounced activation of seismic processes and an increase in the total energy of tsunamigenic earthquakes were observed at the beginning of both the 20th (1905-1920) and 21st (2004-2011) centuries. Studying the spatiotemporal periodicity of such events on the basis of an analysis of the two-dimensional distributions of the sources of tectonic tsunamis has made it possible to determine localized latitudinal zones with a total lack of such events (90°-75° N, 45°-90° S, and 35°-25° N) and regions with a periodic occurrence of tsunamis mainly within the middle (65°-35° N and 25°-40° S) and subequatorial (15° N-20° S) latitudes of the Northern and Southern hemispheres. The objective of this work is to analyze the spatiotemporal distributions of sources of tsunamigenic earthquakes and the effect of the periodic occurrence of such events on the basis of data taken from global tsunami catalogs.

  1. Robust video transmission with distributed source coded auxiliary channel.

    PubMed

    Wang, Jiajun; Majumdar, Abhik; Ramchandran, Kannan

    2009-12-01

    We propose a novel solution to the problem of robust, low-latency video transmission over lossy channels. Predictive video codecs, such as MPEG and H.26x, are very susceptible to prediction mismatch between encoder and decoder or "drift" when there are packet losses. These mismatches lead to a significant degradation in the decoded quality. To address this problem, we propose an auxiliary codec system that sends additional information alongside an MPEG or H.26x compressed video stream to correct for errors in decoded frames and mitigate drift. The proposed system is based on the principles of distributed source coding and uses the (possibly erroneous) MPEG/H.26x decoder reconstruction as side information at the auxiliary decoder. The distributed source coding framework depends upon knowing the statistical dependency (or correlation) between the source and the side information. We propose a recursive algorithm to analytically track the correlation between the original source frame and the erroneous MPEG/H.26x decoded frame. Finally, we propose a rate-distortion optimization scheme to allocate the rate used by the auxiliary encoder among the encoding blocks within a video frame. We implement the proposed system and present extensive simulation results that demonstrate significant gains in performance both visually and objectively (on the order of 2 dB in PSNR over forward error correction based solutions and 1.5 dB in PSNR over intrarefresh based solutions for typical scenarios) under tight latency constraints. PMID:19703801

  2. Long-term Trend of Solar Coronal Hole Distribution from 1975 to 2014

    NASA Astrophysics Data System (ADS)

    Fujiki, K.; Tokumaru, M.; Hayashi, K.; Satonaka, D.; Hakamada, K.

    2016-08-01

    We developed an automated prediction technique for coronal holes using potential magnetic field extrapolation in the solar corona to construct a database of coronal holes appearing from 1975 February to 2015 July (Carrington rotations 1625 to 2165). Coronal holes are labeled with the location, size, and average magnetic field of each coronal hole on the photosphere and source surface. As a result, we identified 3335 coronal holes and found that the long-term distribution of coronal holes shows a pattern similar to the well-known magnetic butterfly diagram, and that polar/low-latitude coronal holes tend to decrease/increase in the last solar minimum relative to the previous two minima.

  3. Atmospheric PAHs in North China: Spatial distribution and sources.

    PubMed

    Zhang, Yanjun; Lin, Yan; Cai, Jing; Liu, Yue; Hong, Linan; Qin, Momei; Zhao, Yifan; Ma, Jin; Wang, Xuesong; Zhu, Tong; Qiu, Xinghua; Zheng, Mei

    2016-09-15

    Polycyclic aromatic hydrocarbons (PAHs), formed through incomplete combustion processes, have adverse health effects. To investigate the spatial distribution and sources of PAHs in North China, PAHs collected by passive sampling at 90 gridded sites from June to September 2011 were analyzed. The average concentration of the sum of fifteen PAHs in North China is 220±14 ng/m(3), with the highest in Shanxi, followed by Shandong and Hebei, and then the Beijing-Tianjin area. Major sources of PAHs were identified for each region of North China: the coking process for Shanxi, biomass burning for Hebei and Shandong, and coal combustion for the Beijing-Tianjin area. An emission inventory is combined with back trajectory analysis to study the influence of emissions from surrounding areas at receptor sites. The Shanxi and Beijing-Tianjin areas are more influenced by nearby sources, while regional sources have more impact on the Hebei and Shandong areas. Results from this study identify the areas where local emissions should be the major target for control and the areas where both local and regional sources should be considered for PAH abatement in North China. PMID:27241206

  4. Volatile Organic Compounds: Characteristics, distribution and sources in urban schools

    NASA Astrophysics Data System (ADS)

    Mishra, Nitika; Bartsch, Jennifer; Ayoko, Godwin A.; Salthammer, Tunga; Morawska, Lidia

    2015-04-01

    Long-term exposure to organic pollutants, both inside and outside school buildings, may affect children's health and influence their learning performance. Since children spend a significant amount of time in school, air quality, especially in classrooms, plays a key role in determining the health risks associated with exposure at schools. Within this context, the present study investigated the ambient concentrations of Volatile Organic Compounds (VOCs) in 25 primary schools in Brisbane with the aim of quantifying indoor and outdoor VOC concentrations, identifying VOC sources and their contributions, and, based on these, proposing mitigation measures to reduce VOC exposure in schools. One of the most important findings is the occurrence of indoor sources, indicated by I/O ratios >1 in 19 schools. Principal Component Analysis with Varimax rotation was used to identify common sources of VOCs, and source contributions were calculated using an Absolute Principal Component Scores technique. The results showed that outdoors, 47% of VOCs were contributed by petrol vehicle exhaust, whereas indoors, cleaning products had the highest contribution (41%), followed by air fresheners and art and craft activities. These findings point to the need for a range of basic precautions during the selection, use and storage of cleaning products and materials to reduce the risk from these sources.
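
    A condensed sketch of the PCA / absolute-principal-component-scores workflow named above, run on synthetic two-source VOC data (hypothetical species and profiles; the varimax rotation is omitted for brevity), illustrates the receptor-modelling steps rather than the study's actual solution:

```python
# PCA / APCS sketch on synthetic VOC data with two hypothetical sources.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
n_samples, species = 200, ["toluene", "limonene", "benzene", "ethanol"]
# Two synthetic source profiles (e.g. "traffic-like" and "cleaning-like").
profiles = np.array([[0.5, 0.0, 0.4, 0.1],
                     [0.1, 0.6, 0.0, 0.3]])
activity = rng.gamma(2.0, 1.0, size=(n_samples, 2))
conc = activity @ profiles + 0.05 * rng.random((n_samples, len(species)))

# 1) PCA on standardised concentrations.
z = (conc - conc.mean(0)) / conc.std(0)
pca = PCA(n_components=2).fit(z)
scores = pca.transform(z)

# 2) Absolute principal component scores: subtract the score of a zero-concentration sample.
z0 = (np.zeros(len(species)) - conc.mean(0)) / conc.std(0)
apcs = scores - pca.transform(z0.reshape(1, -1))

# 3) Regress total concentration on APCS to estimate per-source contributions.
reg = LinearRegression().fit(apcs, conc.sum(1))
contrib = reg.coef_ * apcs.mean(0)
print("approximate mean source contributions (%):",
      np.round(100 * contrib / (contrib.sum() + reg.intercept_), 1))
```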

  5. CMP reflection imaging via interferometry of distributed subsurface sources

    NASA Astrophysics Data System (ADS)

    Kim, D.; Brown, L. D.; Quiros, D. A.

    2015-12-01

    The theoretical foundations of recovering body wave energy via seismic interferometry are well established. However, in practice, such recovery remains problematic. Here, synthetic seismograms computed for subsurface sources are used to evaluate the geometrical combinations of realistic ambient source and receiver distributions that result in useful recovery of virtual body waves. This study illustrates how surface receiver arrays that span a limited distribution suite of sources can be processed to reproduce virtual shot gathers that result in CMP gathers which can be effectively stacked with traditional normal moveout corrections. To verify the feasibility of the approach in practice, seismic recordings of 50 aftershocks of the magnitude 5.8 Virginia earthquake that occurred in August 2011 have been processed using seismic interferometry to produce seismic reflection images of the crustal structure above and beneath the aftershock cluster. Although monotonic noise proved to be problematic by significantly reducing the number of usable recordings, the edited dataset resulted in stacked seismic sections characterized by coherent reflections that resemble those seen on a nearby conventional reflection survey. In particular, "virtual" reflections at travel times of 3 to 4 seconds suggest reflectors at approximately 7 to 12 km depth that would seem to correspond to imbricate thrust structures formed during the Appalachian orogeny. The approach described here represents a promising new means of body wave imaging of 3D structure that can be applied to a wide array of geologic and energy problems. Unlike other imaging techniques using natural sources, this technique does not require precise source locations or times. It can thus exploit aftershocks too small for conventional analyses. This method can be applied to any type of microseismic cloud, whether tectonic, volcanic or man-made.

  6. Mapping the source distribution of microseisms using noise covariogram envelopes

    NASA Astrophysics Data System (ADS)

    Sadeghisorkhani, Hamzeh; Gudmundsson, Ólafur; Roberts, Roland; Tryggvason, Ari

    2016-06-01

    We introduce a method for mapping the noise-source distribution of microseisms which uses information from the full length of covariograms (cross-correlations). We derive a forward calculation based on the plane-wave assumption in 2-D, to formulate an iterative, linearized inversion of covariogram envelopes in the time domain. The forward calculation involves bandpass filtering of the covariograms. The inversion exploits the well-known feature of noise cross-correlation, that is, an anomaly in the noise field that is oblique to the interstation direction appears as cross-correlation amplitude at a smaller time lag than the in-line, surface wave arrival. Therefore, the inversion extracts more information from the covariograms than that contained at the expected surface wave arrival, and this allows us to work with few stations to find the propagation directions of incoming energy. The inversion is naturally applied to data that retain physical units that are not amplitude normalized in any way. By dividing a network into groups of stations, we can constrain the source location by triangulation. We demonstrate results of the method with synthetic data and one year (2012) of data from the Swedish National Seismic Network and also look at the seasonal variation of source distribution around Scandinavia. After preprocessing and cross-correlation, the stations are divided into five groups of 9-12 stations. We invert the envelopes of each group in eight period ranges between 2 and 25 s. Results show that the noise sources at short periods (less than 12 s) lie predominantly in the North Atlantic Ocean and the Barents Sea, and at longer periods the energy appears to have a broader distribution. The strongly anisotropic source distribution in this area is estimated to cause significant biases of velocity measurements compared to the level of heterogeneity in the region. The amplitude of the primary microseisms varies little over the year, but secondary microseisms are much
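
    The inversion rests on the plane-wave relation that energy arriving at azimuth theta from the inter-station line appears at lag t = (d/c)·cos(theta), so oblique sources map inside the surface-wave window. A small sketch with an assumed station separation, wave speed, and a hypothetical 60-degree source lobe (not the paper's code or data) follows:

```python
# Plane-wave forward sketch: map an azimuthal source distribution to
# cross-correlation lags t = (d / c) * cos(theta).
import numpy as np

d = 100e3          # station separation (m), assumed
c = 3000.0         # surface-wave speed (m/s), assumed
t_max = d / c      # lag of the in-line surface-wave arrival, s

theta = np.radians(np.linspace(0.0, 360.0, 721))
# Hypothetical anisotropic noise field: a broad lobe of sources toward 60 degrees.
ang = (np.degrees(theta) - 60.0 + 180.0) % 360.0 - 180.0
strength = 1.0 + 3.0 * np.exp(-(ang ** 2) / (2 * 30.0 ** 2))

# Each azimuth contributes at its predicted lag; accumulate a synthetic envelope.
lags = np.linspace(-t_max, t_max, 401)
envelope = np.zeros_like(lags)
for t, s in zip(t_max * np.cos(theta), strength):
    envelope[np.argmin(np.abs(lags - t))] += s

lobe_lag = t_max * np.cos(np.radians(60.0))
inside = np.abs(lags) < 0.9 * t_max
print(f"in-line arrival lag: +/- {t_max:.1f} s; the 60-degree lobe maps to {lobe_lag:.1f} s")
print(f"fraction of envelope energy at |lag| < {0.9 * t_max:.1f} s: "
      f"{envelope[inside].sum() / envelope.sum():.2f}")
```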

  7. Mapping the source distribution of microseisms using noise covariogram envelopes

    NASA Astrophysics Data System (ADS)

    Sadeghisorkhani, Hamzeh; Gudmundsson, Ólafur; Roberts, Roland; Tryggvason, Ari

    2016-03-01

    We introduce a method for mapping the noise-source distribution of microseisms which uses information from the full length of covariograms (cross-correlations). We derive a forward calculation based on the plane-wave assumption in 2D, to formulate an iterative, linearized inversion of covariogram envelopes in the time domain. The forward calculation involves bandpass filtering of the covariograms. The inversion exploits the well-known feature of noise cross-correlation, i.e., that an anomaly in the noise field that is oblique to the inter-station direction appears as cross-correlation amplitude at a smaller time lag than the in-line, surface-wave arrival. Therefore, the inversion extracts more information from the covariograms than that contained at the expected surface-wave arrival, and this allows us to work with few stations to find the propagation directions of incoming energy. The inversion is naturally applied to data that retain physical units, i.e., that are not amplitude normalized in any way. By dividing a network into groups of stations, we can constrain the source location by triangulation. We demonstrate results of the method with synthetic data and one year (2012) of data from the Swedish National Seismic Network (SNSN) and also look at the seasonal variation of source distribution around Scandinavia. After preprocessing and cross-correlation, the stations are divided into 5 groups of 9 to 12 stations. We invert the envelopes of each group in 8 period ranges between 2 to 25 sec. Results show that the noise sources at short periods (less than 12 sec) lie predominantly in the North Atlantic Ocean and the Barents Sea, and at longer periods the energy appears to have a broader distribution. The strongly anisotropic source distribution in this area is estimated to cause significant biases of velocity measurements compared to the level of heterogeneity in the region. The amplitude of the primary microseisms varies little over the year, but secondary

  8. The Impact of Source Distribution on Scalar Transport over Forested Hills

    NASA Astrophysics Data System (ADS)

    Ross, Andrew N.; Harman, Ian N.

    2015-08-01

    Numerical simulations of neutral flow over a two-dimensional, isolated, forested ridge are conducted to study the effects of scalar source distribution on scalar concentrations and fluxes over forested hills. Three different constant-flux sources are considered that span a range of idealized but ecologically important source distributions: a source at the ground, one uniformly distributed through the canopy, and one decaying with depth in the canopy. A fourth source type, where the in-canopy source depends on both the wind speed and the difference in concentration between the canopy and a reference concentration on the leaf, designed to mimic deposition, is also considered. The simulations show that the topographically-induced perturbations to the scalar concentration and fluxes are quantitatively dependent on the source distribution. The net impact is a balance of different processes affecting both advection and turbulent mixing, and can be significant even for moderate topography. Sources that have significant input in the deep canopy or at the ground exhibit a larger magnitude advection and turbulent flux-divergence terms in the canopy. The flows have identical velocity fields and so the differences are entirely due to the different tracer concentration fields resulting from the different source distributions. These in-canopy differences lead to larger spatial variations in above-canopy scalar fluxes for sources near the ground compared to cases where the source is predominantly located near the canopy top. Sensitivity tests show that the most significant impacts are often seen near to or slightly downstream of the flow separation or reattachment points within the canopy flow. The qualitative similarities to previous studies using periodic hills suggest that important processes occurring over isolated and periodic hills are not fundamentally different. The work has important implications for the interpretation of flux measurements over forests, even in

  9. Extending Marine Species Distribution Maps Using Non-Traditional Sources

    PubMed Central

    Moretzsohn, Fabio; Gibeaut, James

    2015-01-01

    Abstract Background Traditional sources of species occurrence data such as peer-reviewed journal articles and museum-curated collections are included in species databases after rigorous review by species experts and evaluators. The distribution maps created in this process are an important component of species survival evaluations, and are used to adapt, extend and sometimes contract polygons used in the distribution mapping process. New information During an IUCN Red List Gulf of Mexico Fishes Assessment Workshop held at The Harte Research Institute for Gulf of Mexico Studies, a session included an open discussion on the topic of including other sources of species occurrence data. During the last decade, advances in portable electronic devices and applications enable 'citizen scientists' to record images, location and data about species sightings, and submit that data to larger species databases. These applications typically generate point data. Attendees of the workshop expressed an interest in how that data could be incorporated into existing datasets, how best to ascertain the quality and value of that data, and what other alternate data sources are available. This paper addresses those issues, and provides recommendations to ensure quality data use. PMID:25941453

  10. 7 CFR 1822.268 - Rates, terms, and source of funds.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 12 2010-01-01 2010-01-01 false Rates, terms, and source of funds. 1822.268 Section... Site Loan Policies, Procedures, and Authorizations § 1822.268 Rates, terms, and source of funds. (a... security interest of the Government. (c) Source of funds. Loans under this subpart will be made as...

  11. 7 CFR 1822.268 - Rates, terms, and source of funds.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 12 2011-01-01 2011-01-01 false Rates, terms, and source of funds. 1822.268 Section... Site Loan Policies, Procedures, and Authorizations § 1822.268 Rates, terms, and source of funds. (a... security interest of the Government. (c) Source of funds. Loans under this subpart will be made as...

  12. 7 CFR 1822.268 - Rates, terms, and source of funds.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 12 2012-01-01 2012-01-01 false Rates, terms, and source of funds. 1822.268 Section... Site Loan Policies, Procedures, and Authorizations § 1822.268 Rates, terms, and source of funds. (a... security interest of the Government. (c) Source of funds. Loans under this subpart will be made as...

  13. 7 CFR 1822.268 - Rates, terms, and source of funds.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 12 2014-01-01 2013-01-01 true Rates, terms, and source of funds. 1822.268 Section... Site Loan Policies, Procedures, and Authorizations § 1822.268 Rates, terms, and source of funds. (a... security interest of the Government. (c) Source of funds. Loans under this subpart will be made as...

  14. 7 CFR 1822.268 - Rates, terms, and source of funds.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 12 2013-01-01 2013-01-01 false Rates, terms, and source of funds. 1822.268 Section... Site Loan Policies, Procedures, and Authorizations § 1822.268 Rates, terms, and source of funds. (a... security interest of the Government. (c) Source of funds. Loans under this subpart will be made as...

  15. Diversity, distribution and sources of bacteria in residential kitchens.

    PubMed

    Flores, Gilberto E; Bates, Scott T; Caporaso, J Gregory; Lauber, Christian L; Leff, Jonathan W; Knight, Rob; Fierer, Noah

    2013-02-01

    Bacteria readily colonize kitchen surfaces, and the exchange of microbes between humans and the kitchen environment can impact human health. However, we have a limited understanding of the overall diversity of these communities, how they differ across surfaces and sources of bacteria to kitchen surfaces. Here we used high-throughput sequencing of the 16S rRNA gene to explore biogeographical patterns of bacteria across > 80 surfaces within the kitchens of each of four households. In total, 34 bacterial and two archaeal phyla were identified, with most sequences belonging to the Actinobacteria, Bacteroidetes, Firmicutes and Proteobacteria. Genera known to contain common food-borne pathogens were low in abundance but broadly distributed throughout the kitchens, with different taxa exhibiting distinct distribution patterns. The most diverse communities were associated with infrequently cleaned surfaces such as fans above stoves, refrigerator/freezer door seals and floors. In contrast, the least diverse communities were observed in and around sinks, which were dominated by biofilm-forming Gram-negative lineages. Community composition was influenced by conditions on individual surfaces, usage patterns and dispersal from source environments. Human skin was the primary source of bacteria across all kitchen surfaces, with contributions from food and faucet water dominating in a few specific locations. This study demonstrates that diverse bacterial communities are widely distributed in residential kitchens and that the composition of these communities is often predictable. These results also illustrate the ease with which human- and food-associated bacteria can be transferred in residential settings to kitchen surfaces. PMID:23171378

  16. Streamlined Genome Sequence Compression using Distributed Source Coding

    PubMed Central

    Wang, Shuang; Jiang, Xiaoqian; Chen, Feng; Cui, Lijuan; Cheng, Samuel

    2014-01-01

    We aim to develop a streamlined genome sequence compression algorithm to support alternative miniaturized sequencing devices, which have limited communication, storage, and computation power. Existing techniques that require a heavy client (encoder side) cannot be applied. To tackle this challenge, we carefully examined distributed source coding theory and developed a customized reference-based genome compression protocol to meet the low-complexity need at the client side. Based on the variation between source and reference, our protocol adaptively picks either syndrome coding or hash coding to compress subsequences of varying code length. Our experimental results showed promising performance of the proposed method when compared with the state-of-the-art algorithm (GRS). PMID:25520552
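
    The following toy sketch illustrates only the hash-coding half of the idea, namely why a reference-free, hash-only encoder can stay lightweight while a reference-holding decoder does the heavy lifting. It is not the paper's protocol (which adaptively combines syndrome coding and hash coding), and the block size, hash length, and helper names are arbitrary choices for illustration.

```python
# Toy illustration: the encoder sends short hashes of fixed-size blocks; the decoder,
# which holds the reference, reuses the co-located reference block when its hash
# matches and requests the raw block otherwise.
import hashlib

BLOCK = 32  # bases per block (hypothetical)

def encode(read: str) -> list[bytes]:
    """Encoder side: emit an 8-byte hash per block (no reference needed)."""
    return [hashlib.sha256(read[i:i + BLOCK].encode()).digest()[:8]
            for i in range(0, len(read), BLOCK)]

def decode(hashes: list[bytes], reference: str, fetch_raw) -> str:
    """Decoder side: reuse the reference block when the hash matches,
    otherwise call back to the client for the raw block."""
    out = []
    for k, h in enumerate(hashes):
        ref_block = reference[k * BLOCK:(k + 1) * BLOCK]
        if hashlib.sha256(ref_block.encode()).digest()[:8] == h:
            out.append(ref_block)          # block unchanged w.r.t. reference
        else:
            out.append(fetch_raw(k))       # variation present: transmit raw block
    return "".join(out)

# Round trip on synthetic data
reference = "ACGT" * 64
read = reference[:100] + "T" + reference[101:]          # one substitution
recovered = decode(encode(read), reference,
                   fetch_raw=lambda k: read[k * BLOCK:(k + 1) * BLOCK])
assert recovered == read
```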

  17. Sources and distribution of hexabromocyclododecanes (HBCDs) in Japanese river sediment.

    PubMed

    Managaki, Satoshi; Enomoto, Iku; Masunaga, Shigeki

    2012-03-01

    The distribution of hexabromocyclododecane (HBCD) in the sediment of three Japanese rivers with different characteristics (i.e., population and potential source in the catchment) was investigated and compared with the results estimated using a multimedia fate model (ChemCAN). High concentrations of HBCD in sediments in the range of 134-2060 ng g(-1) were found in a river receiving textile wastewater. This contrasted with the much lower concentrations (0.8-4.8 ng g(-1)) observed for an urban river (with a surrounding population of 1.8 million). The medians of observed HBCD concentrations in each river were close to those estimated based on the assumed input source (e.g., 1810 ng g(-1) for the observed median concentration, and 1436 ng g(-1) for the estimation, in the Kuzuryu River). These results demonstrated the importance of considering source contributions of HBCD, including both industrial and consumer sources, to aquatic environments, for reliable risk management. PMID:22286550

  18. Considering sources and detectors distributions for quantitative photoacoustic tomography

    PubMed Central

    Song, Ningning; Deumié, Carole; Da Silva, Anabela

    2014-01-01

    Photoacoustic tomography (PAT) is a hybrid imaging modality that combines the high optical contrast of optical imaging with the high spatial resolution of ultrasound imaging. However, quantification in photoacoustic imaging is challenging. A multiple-optical-illumination approach has been shown to uncouple diffusion and absorption effects. In this paper, this protocol is adopted and synthetic photoacoustic data, corrupted with noise, were generated. The influence of the distribution of optical sources and transducers on the reconstruction of the absorption and diffusion coefficient maps is studied. Specific situations with limited view angles were examined. The results show that multiple wide-field illuminations improve the reconstructions. PMID:25426322

  19. Size distributions, sources and source areas of water-soluble organic carbon in urban background air

    NASA Astrophysics Data System (ADS)

    Timonen, H.; Saarikoski, S.; Tolonen-Kivimäki, O.; Aurela, M.; Saarnio, K.; Petäjä, T.; Aalto, P. P.; Kulmala, M.; Pakkanen, T.; Hillamo, R.

    2008-04-01

    This paper presents the results of a one-year measurement period of the size distributions of water-soluble organic carbon (WSOC), inorganic ions and gravimetric mass of particulate matter. Measurements were made at an urban background station (SMEAR III) using a micro-orifice uniform deposit impactor (MOUDI). The site is located in the northern European boreal region in Helsinki, Finland. The WSOC size distribution measurements were complemented by the chemical analysis of inorganic ions, organic carbon (OC) and monosaccharide anhydrides from the filter samples. During the measurements, the gravimetric mass in the MOUDI collections varied between 3.4 and 55.0 μg m-3 and the WSOC concentration was between 0.3 and 7.4 μg m-3. On average, water-soluble particulate organic matter (WSPOM, WSOC multiplied by 1.6) comprised 25±7.7% and 7.5±3.4% of the aerosol PM1 mass and the PM1-10 mass, respectively. Inorganic ions contributed 33±12% and 28±19% of the analyzed PM1 and PM1-10 aerosol mass. Five different aerosol categories corresponding to different sources or source areas were identified (long-range transport aerosols, biomass burning aerosols from wild land fires and from small-scale wood combustion, aerosols originating from marine areas and from the clean arctic areas). Clear differences in WSOC concentrations and size distributions originating from different sources or source areas were observed, although there are also many other factors that might affect the results. For example, the local conditions and sources of volatile organic compounds (VOCs) and aerosols, as well as various transformation processes, are likely to have an impact on the measured aerosol composition. Using these source categories, it was found that, especially in summer, the oxidation products of biogenic VOCs had a clear effect on WSOC concentrations.

  20. Long-term optical behavior of 114 extragalactic sources

    NASA Astrophysics Data System (ADS)

    Pica, A. J.; Pollock, J. T.; Smith, A. G.; Leacock, R. J.; Edwards, P. L.; Scott, R. L.

    1980-11-01

    Photographic observations of over 200 quasars and related objects have been obtained at the Rosemary Hill Observatory since 1968. Twenty that are optically violent variables were reported on by Pollock et al. (1979). This paper presents data for 114 less active sources, 58 of which exhibit optical variations at a confidence level of 95% or greater. Light curves are given for the 26 most active sources. In addition, the overall monitoring program at the Observatory is reviewed, and information on the status of 206 objects is provided.

  1. Local tsunamis and distributed slip at the source

    USGS Publications Warehouse

    Geist, E.L.; Dmowska, R.

    1999-01-01

    Variations in the local tsunami wave field are examined in relation to heterogeneous slip distributions that are characteristic of many shallow subduction zone earthquakes. Assumptions inherent in calculating the coseismic vertical displacement field that defines the initial condition for tsunami propagation are examined. By comparing the seafloor displacement from uniform slip to that from an ideal static crack, we demonstrate that dip-directed slip variations significantly affect the initial cross-sectional wave profile. Because of the hydrodynamic stability of tsunami wave forms, these effects directly impact estimates of maximum runup from the local tsunami. In most cases, an assumption of uniform slip in the dip direction significantly underestimates the maximum amplitude and leading wave steepness of the local tsunami. Whereas dip-directed slip variations affect the initial wave profile, strike-directed slip variations result in wavefront-parallel changes in amplitude that are largely preserved during propagation from the source region toward shore, owing to the effects of refraction. Tests of discretizing slip distributions indicate that small fault surface elements of dimensions similar to the source depth can acceptably approximate the vertical displacement field in comparison to continuous slip distributions. Crack models for tsunamis generated by shallow subduction zone earthquakes indicate that a rupture intersecting the free surface results in approximately twice the average slip. Therefore, the observation of higher slip associated with tsunami earthquakes relative to typical subduction zone earthquakes of the same magnitude suggests that tsunami earthquakes involve rupture of the seafloor, whereas rupture of deeper subduction zone earthquakes may be imbedded and not reach the seafloor.

  2. Reservoir, seal, and source rock distribution in Essaouira Rift Basin

    SciTech Connect

    Ait Salem, A. )

    1994-07-01

    The Essaouira onshore basin is an important hydrocarbon generating basin, which is situated in western Morocco. There are seven oil and gas-with-condensate fields; six are from Jurassic reservoirs and one from a Triassic reservoir. As a segment of the Atlantic passive continental margin, the Essaouira basin was subjected to several post-Hercynian basin deformation phases, which resulted in distribution, in space and time, of reservoir, seal, and source rock. These basin deformations are synsedimentary infilling of major half grabens with continental red buds and evaporite associated with the rifting phase, emplacement of a thick postrifting Jurassic and Cretaceous sedimentary wedge during thermal subsidence, salt movements, and structural deformations in relation to the Atlas mergence. The widely extending lower Oxfordian shales are the only Jurassic shale beds penetrated and recognized as potential and mature source rocks. However, facies analysis and mapping suggested the presence of untested source rocks in Dogger marine shales and Triassic to Liassic lacustrine shales. Rocks with adequate reservoir characteristics were encountered in Triassic/Liassic fluvial sands, upper Liassic dolomites, and upper Oxfordian sandy dolomites. The seals are provided by Liassic salt for the lower reservoirs and Middle to Upper Jurassic anhydrite for the upper reservoirs. Recent exploration studies demonstrate that many prospective structure reserves remain untested.

  3. Strategies for satellite-based monitoring of CO2 from distributed area and point sources

    NASA Astrophysics Data System (ADS)

    Schwandner, Florian M.; Miller, Charles E.; Duren, Riley M.; Natraj, Vijay; Eldering, Annmarie; Gunson, Michael R.; Crisp, David

    2014-05-01

    and sensor provides the full range of temporal sampling needed to characterize distributed area and point source emissions. For instance, point source emission patterns will vary with source strength, wind speed and direction. Because wind speed, direction and other environmental factors change rapidly, short term variabilities should be sampled. For detailed target selection and pointing verification, important lessons have already been learned and strategies devised during JAXA's GOSAT mission (Schwandner et al, 2013). The fact that competing spatial and temporal requirements drive satellite remote sensing sampling strategies dictates a systematic, multi-factor consideration of potential solutions. Factors to consider include vista, revisit frequency, integration times, spatial resolution, and spatial coverage. No single satellite-based remote sensing solution can address this problem for all scales. It is therefore of paramount importance for the international community to develop and maintain a constellation of atmospheric CO2 monitoring satellites that complement each other in their temporal and spatial observation capabilities: Polar sun-synchronous orbits (fixed local solar time, no diurnal information) with agile pointing allow global sampling of known distributed area and point sources like megacities, power plants and volcanoes with daily to weekly temporal revisits and moderate to high spatial resolution. Extensive targeting of distributed area and point sources comes at the expense of reduced mapping or spatial coverage, and the important contextual information that comes with large-scale contiguous spatial sampling. Polar sun-synchronous orbits with push-broom swath-mapping but limited pointing agility may allow mapping of individual source plumes and their spatial variability, but will depend on fortuitous environmental conditions during the observing period. These solutions typically have longer times between revisits, limiting their ability to resolve

  4. CHALLENGES IN SOURCE TERM MODELING OF DECONTAMINATION AND DECOMMISSIONING WASTES.

    SciTech Connect

    SULLIVAN, T.M.

    2006-08-01

    Development of real-time predictive modeling to identify the dispersion and/or source(s) of airborne weapons of mass destruction including chemical, biological, radiological, and nuclear material in urban environments is needed to improve response to potential releases of these materials via either terrorist or accidental means. These models will also prove useful in defining airborne pollution dispersion in urban environments for pollution management/abatement programs. Predicting gas flow in an urban setting on a scale of less than a few kilometers is a complicated and challenging task due to the irregular flow paths that occur along streets and alleys and around buildings of different sizes and shapes, i.e., ''urban canyons''. In addition, air exchange between the outside and buildings and subway areas further complicate the situation. Transport models that are used to predict dispersion of WMD/CBRN materials or to back track the source of the release require high-density data and need defensible parameterizations of urban processes. Errors in the data or any of the parameter inputs or assumptions will lead to misidentification of the airborne spread or source release location(s). The need for these models to provide output in a real-time fashion if they are to be useful for emergency response provides another challenge. To improve the ability of New York City's (NYC's) emergency management teams and first response personnel to protect the public during releases of hazardous materials, the New York City Urban Dispersion Program (UDP) has been initiated. This is a four year research program being conducted from 2004 through 2007. This paper will discuss ground level and subway Perfluorocarbon tracer (PFT) release studies conducted in New York City. The studies released multiple tracers to study ground level and vertical transport of contaminants. This paper will discuss the results from these tests and how these results can be used for improving transport models

  5. Modeling the voice source in terms of spectral slopes.

    PubMed

    Garellek, Marc; Samlan, Robin; Gerratt, Bruce R; Kreiman, Jody

    2016-03-01

    A psychoacoustic model of the voice source spectrum is proposed. The model is characterized by four spectral slope parameters: the difference in amplitude between the first two harmonics (H1-H2), the second and fourth harmonics (H2-H4), the fourth harmonic and the harmonic nearest 2 kHz in frequency (H4-2 kHz), and the harmonic nearest 2 kHz and that nearest 5 kHz (2 kHz-5 kHz). As a step toward model validation, experiments were conducted to establish the acoustic and perceptual independence of these parameters. In experiment 1, the model was fit to a large number of voice sources. Results showed that parameters are predictable from one another, but that these relationships are due to overall spectral roll-off. Two additional experiments addressed the perceptual independence of the source parameters. Listener sensitivity to H1-H2, H2-H4, and H4-2 kHz did not change as a function of the slope of an adjacent component, suggesting that sensitivity to these components is robust. Listener sensitivity to changes in spectral slope from 2 kHz to 5 kHz depended on complex interactions between spectral slope, spectral noise levels, and H4-2 kHz. It is concluded that the four parameters represent non-redundant acoustic and perceptual aspects of voice quality. PMID:27036277
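
    As a worked illustration of the four parameters (not the authors' implementation), the sketch below computes H1-H2, H2-H4, H4-2 kHz and 2 kHz-5 kHz from a vector of harmonic amplitudes in dB, given the fundamental frequency; the synthetic -6 dB/octave spectrum at the end is purely hypothetical.

```python
# Minimal sketch: the four spectral slope parameters from harmonic amplitudes in dB.
import numpy as np

def source_slopes(f0_hz, harmonic_amps_db):
    """harmonic_amps_db[k] is the amplitude (dB) of harmonic k+1 at frequency (k+1)*f0."""
    amps = np.asarray(harmonic_amps_db, dtype=float)
    freqs = f0_hz * np.arange(1, len(amps) + 1)

    def nearest(target_hz):
        # amplitude of the harmonic nearest the target frequency
        return amps[np.argmin(np.abs(freqs - target_hz))]

    h1, h2, h4 = amps[0], amps[1], amps[3]
    h2k, h5k = nearest(2000.0), nearest(5000.0)
    return {"H1-H2": h1 - h2,
            "H2-H4": h2 - h4,
            "H4-2kHz": h4 - h2k,
            "2kHz-5kHz": h2k - h5k}

# Example with a synthetic source spectrum rolling off at ~ -6 dB per octave
f0 = 200.0
n_harm = 30                                        # covers harmonics up to 6 kHz
amps = -20 * np.log10(np.arange(1, n_harm + 1))
print(source_slopes(f0, amps))
```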

  6. Environmental radiation safety: source term modification by soil aerosols. Interim report

    SciTech Connect

    Moss, O.R.; Allen, M.D.; Rossignol, E.J.; Cannon, W.C.

    1980-08-01

    The goal of this project is to provide information useful in estimating hazards related to the use of a pure refractory oxide of 238Pu as a power source in some of the space vehicles to be launched during the next few years. Although the sources are designed and built to withstand re-entry into the earth's atmosphere, and to impact with the earth's surface without releasing any plutonium, the possibility that such an event might produce aerosols composed of soil and 238PuO2 cannot be absolutely excluded. This report presents the results of our most recent efforts to measure the degree to which the plutonium aerosol source term might be modified in a terrestrial environment. The five experiments described represent our best effort to use the original experimental design to study the change in the size distribution and concentration of a 238PuO2 aerosol due to coagulation with an aerosol of clay or sandy loam soil.

  7. Plutonium isotopes and 241Am in the atmosphere of Lithuania: A comparison of different source terms

    NASA Astrophysics Data System (ADS)

    Lujanienė, G.; Valiulis, D.; Byčenkienė, S.; Šakalys, J.; Povinec, P. P.

    2012-12-01

    137Cs, 241Am and Pu isotopes collected in aerosol samples during 1994-2011 were analyzed with special emphasis on a better understanding of Pu and Am behavior in the atmosphere. The results from long-term measurements of 240Pu/239Pu atom ratios showed a bimodal frequency distribution with median values of 0.195 and 0.253, indicating two main sources contributing to the Pu activities at the Vilnius sampling station. The low Pu atom ratio of 0.141 could be attributed to weapon-grade plutonium derived from the nuclear weapon test sites. The frequency of air masses arriving from the North-West and North-East correlated with the Pu atom ratio, indicating input from sources located in these regions (the Novaya Zemlya test site, Siberian nuclear plants), while no correlation with the Chernobyl region was observed. Measurements carried out during the Fukushima accident showed a negligible impact from this source, with Pu activities four orders of magnitude lower than those from the Chernobyl accident. The activity concentration of actinides measured in the integrated sample collected in March-April 2011 showed a small contribution of Pu with unusual activity and atom ratios, indicating the presence of spent fuel of a different origin than that of the Chernobyl accident.

  8. Processes driving short-term temporal dynamics of small mammal distribution in human-disturbed environments.

    PubMed

    Martineau, Julie; Pothier, David; Fortin, Daniel

    2016-07-01

    As the impact of anthropogenic activities intensifies worldwide, an increasing proportion of landscape is converted to early successional stages every year. To understand and anticipate the global effects of the human footprint on wildlife, assessing short-term changes in animal populations in response to disturbance events is becoming increasingly important. We used isodar habitat selection theory to reveal the consequences of timber harvesting on the ecological processes that control the distribution dynamics of a small mammal, the red-backed vole (Myodes gapperi). The abundance of voles was estimated in pairs of cut and uncut forest stands, prior to logging and up to 2 years afterwards. A week after logging, voles did not display any preference between cut and uncut stands, and a non-significant isodar indicated that their distribution was not driven by density-dependent habitat selection. One month after harvesting, however, juvenile abundance increased in cut stands, whereas the highest proportions of reproductive females were observed in uncut stands. This distribution pattern appears to result from interference competition, with juveniles moving into cuts where there was weaker competition with adults. In fact, the emergence of source-sink dynamics between uncut and cut stands, driven by interference competition, could explain why the abundance of red-backed voles became lower in cut (the sink) than uncut (the source) stands 1-2 years after logging. Our study demonstrates that the influences of density-dependent habitat selection and interference competition in shaping animal distribution can vary frequently, and for several months, following anthropogenic disturbance. PMID:27003700

  9. Reference-frame-independent quantum key distribution with source flaws

    NASA Astrophysics Data System (ADS)

    Wang, Can; Sun, Shi-Hai; Ma, Xiang-Chun; Tang, Guang-Zhao; Liang, Lin-Mei

    2015-10-01

    Compared with the traditional protocols of quantum key distribution (QKD), the reference-frame-independent (RFI)-QKD protocol has been generally proved to be very useful and practical, since its experimental implementation can be simplified without the alignment of a reference frame. In most RFI-QKD systems, the encoding states are always taken to be perfect, which, however, is not practical in realizations. In this paper, we consider the security of RFI QKD with source flaws based on the loss-tolerant method proposed by Tamaki et al. [Phys. Rev. A 90, 052314 (2014), 10.1103/PhysRevA.90.052314]. As the six-state protocol can be realized with four states, we show that the RFI-QKD protocol can also be performed with only four encoding states instead of six encoding states in its standard version. Furthermore, the numerical simulation results show that the source flaws in the key-generation basis (Z basis) will reduce the key rate but are loss tolerant, while the ones in X and Y bases almost have no effect and the key rate remains almost the same even when they are very large. Hence, our method and results will have important significance in practical experiments, especially in earth-to-satellite or chip-to-chip quantum communications.

  10. Size distributions, sources and source areas of water-soluble organic carbon in urban background air

    NASA Astrophysics Data System (ADS)

    Timonen, H.; Saarikoski, S.; Tolonen-Kivimäki, O.; Aurela, M.; Saarnio, K.; Petäjä, T.; Aalto, P. P.; Kulmala, M.; Pakkanen, T.; Hillamo, R.

    2008-09-01

    This paper presents the results of a one-year measurement period of the size distributions of water-soluble organic carbon (WSOC), inorganic ions and gravimetric mass of particulate matter. Measurements were made at an urban background station (SMEAR III) using a micro-orifice uniform deposit impactor (MOUDI). The site is located in the northern European boreal region in Helsinki, Finland. The WSOC size distribution measurements were complemented by the chemical analysis of inorganic ions, organic carbon (OC) and monosaccharide anhydrides from the filter samples (particle aerodynamic diameter smaller than 1 μm, PM1). The gravimetric mass concentration varied during the MOUDI samplings between 3.4 and 55.0 μg m-3 and the WSOC concentrations were between 0.3 and 7.4 μg m-3. On average, water-soluble particulate organic matter (WSPOM, WSOC multiplied by 1.6 to convert the analyzed carbon mass to organic matter mass) comprised 25±7.7% and 7.5±3.4% of the aerosol PM1 mass and the PM1-10 mass, respectively. Inorganic ions contributed 33±12% and 28±19% of the analyzed PM1 and PM1-10 aerosol mass. Five different aerosol categories corresponding to different sources or source areas were identified (long-range transport aerosols, biomass burning aerosols from wild land fires and from small-scale wood combustion, aerosols originating from marine areas and from the clean arctic areas). Categories were identified mainly using the levoglucosan concentration level for wood combustion and air mass backward trajectories for the other groups. Clear differences in WSOC concentrations and size distributions originating from different sources or source areas were observed, although there are also many other factors that might affect the results. For example, the local conditions and sources of volatile organic compounds (VOCs) and aerosols, as well as various transformation processes, are likely to have an impact on the measured aerosol composition. Using the source categories, it was identified that
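
    A small worked example of the arithmetic used in both versions of this record (all numbers hypothetical): WSOC is converted to water-soluble particulate organic matter by the factor 1.6, and contributions are then expressed as fractions of the gravimetric PM1 mass.

```python
# Worked example with illustrative numbers only.
wsoc_ugm3 = 2.0          # hypothetical WSOC concentration
ions_ugm3 = 3.5          # hypothetical sum of inorganic ions
pm1_mass_ugm3 = 12.0     # hypothetical gravimetric PM1 mass

wspom_ugm3 = 1.6 * wsoc_ugm3                 # carbon mass -> organic matter mass
print(f"WSPOM fraction of PM1: {100 * wspom_ugm3 / pm1_mass_ugm3:.0f}%")
print(f"Ion fraction of PM1:   {100 * ions_ugm3 / pm1_mass_ugm3:.0f}%")
```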

  11. Source-rock distribution model of the periadriatic region

    SciTech Connect

    Zappaterra, E. )

    1994-03-01

    The Periadriatic area is a mosaic of geological provinces comprised of spatially and temporally similar tectonic-sedimentary cycles. Tectonic evolution progressed from a Triassic-Early Jurassic (Liassic) continental rifting stage on the northern edge of the African craton, through an Early Jurassic (Middle Liassic)-Late Cretaceous/Eocene oceanic rifting stage and passive margin formation, to a final continental collision and active margin deformation stage in the Late Cretaceous/Eocene to Holocene. Extensive shallow-water carbonate platform deposits covered large parts of the Periadriatic region in the Late Triassic. Platform breakup and development of a platform-to-basin carbonate shelf morphology began in the Late Triassic and extended through the Cretaceous. On the basis of this paleogeographic evolution, the regional geology of the Periadriatic region can be expressed in terms of three main Upper Triassic-Paleogene sedimentary sequences: (A), the platform sequence; (B), the platform to basin sequence; and (C), the basin sequence. These sequences developed during the initial rifting and subsequent passive-margin formation tectonic stages. The principal Triassic source basins and most of the surface hydrocarbon indications and economically important oil fields of the Periadriatic region are associated with sequence B areas. No major hydrocarbon accumulations can be directly attributed to the Jurassic-Cretaceous epioceanic and intraplatform source rock sequences. The third episode of source bed deposition characterizes the final active margin deformation stage and is represented by Upper Tertiary organic-rich terrigenous units, mostly gas-prone. These are essentially associated with turbiditic and flysch sequences of foredeep basins and have generated the greater part of the commercial biogenic gases of the Periadriatic region. 82 refs., 11 figs., 2 tabs.

  12. Source-Manipulating Wavelength-Dependent Continuous-Variable Quantum Key Distribution with Heterodyne Detectors

    NASA Astrophysics Data System (ADS)

    Lv, Geli; Huang, Dazu; Guo, Ying

    2016-05-01

    The intensities of signal and local oscillator (LO) can be elegantly manipulated for the noise-based quantum system while manipulating the wavelength-dependent modulation in source to increase the performance of the continuous-variable key distribution in terms of the secret key rate and maximal transmission distance. The source-based additional noises can be tuned and stabilized to the suitable values to eliminate the effect of the LO fluctuations and defeat the potential attacks in imperfect quantum channels. It is firmly proved that the secret key rate can be manipulated in source over imperfect channels by the intensities of signal and LO with different wavelengths, which have an effect on the optimal signal-to-noise ratio of the heterodyne detectors resulting from the detection efficiency and the additional electronic noise as well. Simulation results show that there is a nice balance between the secret key rate and the maximum transmission distance.

  13. Spatial distribution and migration of nonylphenol in groundwater following long-term wastewater irrigation.

    PubMed

    Wang, Shiyu; Wu, Wenyong; Liu, Fei; Yin, Shiyang; Bao, Zhe; Liu, Honglu

    2015-01-01

    Seen as a solution to water shortages, wastewater reuse for crop irrigation does, however, pose a risk owing to the potential release of organic contaminants into soil and water. The frequency of detection (FOD), concentration, and migration of nonylphenol (NP) isomers in reclaimed water (FODRW), surface water (FODSW), and groundwater (FODGW) were investigated in a long-term wastewater irrigation area in Beijing. The FODRW, FODSW and FODGW of any or all of 12 NP isomers were 66.7% to 100%, 76.9% to 100% and 13.3% to 60%, respectively. The mean (±standard deviation) NP concentrations of the reclaimed water, surface water, and groundwater (NPRW, NPSW, NPGW, respectively) were 469.4±73.4 ng L(-1), 694.6±248.7 ng L(-1) and 244.4±230.8 ng L(-1), respectively. The existence of external pollution sources during water transmission and distribution resulted in NPSW exceeding NPRW. NP distribution in groundwater was related to the duration and quantity of wastewater irrigation and the sources of aquifer recharge, and was seen to decrease with increasing aquifer depth. Higher riverside infiltration rates nearby led to higher FODGW values. The migration rate of NP isomers was classified as high, moderate or low. PMID:25886245

  14. Spatial distribution and migration of nonylphenol in groundwater following long-term wastewater irrigation

    NASA Astrophysics Data System (ADS)

    Wang, Shiyu; Wu, Wenyong; Liu, Fei; Yin, Shiyang; Bao, Zhe; Liu, Honglu

    2015-06-01

    Seen as a solution to water shortages, wastewater reuse for crop irrigation does, however, pose a risk owing to the potential release of organic contaminants into soil and water. The frequency of detection (FOD), concentration, and migration of nonylphenol (NP) isomers in reclaimed water (FODRW), surface water (FODSW), and groundwater (FODGW) were investigated in a long-term wastewater irrigation area in Beijing. The FODRW, FODSW and FODGW of any or all of 12 NP isomers were 66.7% to 100%, 76.9% to 100% and 13.3% to 60%, respectively. The mean (± standard deviation) NP concentrations of the reclaimed water, surface water, and groundwater (NPRW, NPSW, NPGW, respectively) were 469.4 ± 73.4 ng L-1, 694.6 ± 248.7 ng L-1 and 244.4 ± 230.8 ng L-1, respectively. The existence of external pollution sources during water transmission and distribution resulted in NPSW exceeding NPRW. NP distribution in groundwater was related to the duration and quantity of wastewater irrigation and the sources of aquifer recharge, and was seen to decrease with increasing aquifer depth. Higher riverside infiltration rates nearby led to higher FODGW values. The migration rate of NP isomers was classified as high, moderate or low.

  15. On the numerical solution of hyperbolic equations with singular source terms

    NASA Astrophysics Data System (ADS)

    Turk, Irfan; Ashyraliyev, Maksat

    2014-08-01

    A numerical study of hyperbolic equations with singular source terms is presented. The sources are singular in the sense that, within the spatial domain, the source is defined by a Dirac delta function. Solutions of such problems have discontinuities, which form an obstacle for standard numerical methods. In this paper, a fifth-order flux-implicit WENO method with non-uniform meshes is studied for the approximate solution of hyperbolic equations with singular source terms. Numerical examples are provided.
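
    To make the difficulty concrete, the sketch below solves a linear advection equation with a Dirac delta source regularized as a discrete hat function on the grid. It uses a plain first-order upwind scheme for brevity, not the fifth-order flux-implicit WENO method of the paper, and all parameter values are illustrative.

```python
# Simplified illustration: u_t + a*u_x = s*delta(x - x0) with a discrete delta source.
import numpy as np

def advect_with_point_source(a=1.0, s=1.0, x0=0.3, L=1.0, n=400, t_end=0.5):
    dx = L / n
    x = (np.arange(n) + 0.5) * dx
    u = np.zeros(n)

    # Discrete delta: a hat of width 2*dx that integrates to 1 on the grid
    delta = np.maximum(0.0, 1.0 - np.abs(x - x0) / dx) / dx

    dt = 0.9 * dx / abs(a)                      # CFL condition
    nsteps = int(np.ceil(t_end / dt))
    dt = t_end / nsteps
    for _ in range(nsteps):
        # First-order upwind flux for a > 0, periodic boundary
        u = u - a * dt / dx * (u - np.roll(u, 1)) + dt * s * delta
    return x, u

x, u = advect_with_point_source()
```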

  16. Spatial Distribution of Soil Fauna In Long Term No Tillage

    NASA Astrophysics Data System (ADS)

    Corbo, J. Z. F.; Vieira, S. R.; Siqueira, G. M.

    2012-04-01

    The soil is a complex system constituted by living beings and organic and mineral particles, whose components define its physical, chemical and biological properties. Soil fauna plays an important role in soil and may both reflect and interfere with its functionality. These organisms' populations may be influenced by management practices, fertilization, liming and porosity, among other factors. Such changes may alter the composition and distribution of the soil fauna community. Thus, this study aimed to determine the spatial variability of soil fauna in a consolidated no-tillage system. The experimental area is located at the Instituto Agronômico in Campinas (São Paulo, Brazil). The sampling was conducted in a Rhodic Eutrudox under a no-tillage system; 302 points distributed over a 3.2-hectare area in a regular grid of 10.00 m x 10.00 m were sampled. The soil fauna was sampled with the pitfall-trap method, and the traps remained in the area for seven days. Data were analyzed using descriptive statistics to determine the main statistical moments (mean, variance, coefficient of variation, standard deviation, skewness and kurtosis). Geostatistical tools were used to determine the spatial variability of the attributes using the experimental semivariogram. For the biodiversity analysis, the Shannon and Pielou indexes and richness were calculated for each sample. Geostatistics proved to be a useful tool for mapping the spatial variability of groups of the soil epigeal fauna. The family Formicidae proved to be the most abundant and dominant in the study area. The descriptive statistics showed that all attributes studied had a lognormal frequency distribution for the groups of epigeal soil fauna. The exponential model was the best suited to the obtained data, both for the groups of epigeal soil fauna (Acari, Araneae, Coleoptera, Formicidae and Coleoptera larvae) and for the other biodiversity indexes. The sampling scheme (10.00 m x 10.00 m) was not sufficient to detect the spatial
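
    For reference, the biodiversity indices mentioned above can be computed per sample as follows; this is a generic sketch with hypothetical counts, not the study's analysis code.

```python
# Shannon diversity H', Pielou evenness J' = H'/ln(S), and richness S for one sample.
import numpy as np

def diversity_indices(counts):
    counts = np.asarray([c for c in counts if c > 0], dtype=float)
    p = counts / counts.sum()
    richness = len(counts)
    shannon = -np.sum(p * np.log(p))
    pielou = shannon / np.log(richness) if richness > 1 else 0.0
    return richness, shannon, pielou

# Hypothetical pitfall-trap sample: Acari, Araneae, Coleoptera, Formicidae, Coleoptera larvae
print(diversity_indices([12, 3, 7, 45, 5]))
```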

  17. 78 FR 41398 - SourceGas Distribution LLC; Notice of Filing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-10

    ... Energy Regulatory Commission SourceGas Distribution LLC; Notice of Filing Take notice that on June 27, 2013, SourceGas Distribution LLC (SourceGas) filed a Rate Election and revised Statement of Operating... and 284.224). SourceGas proposes to revise its fuel reimbursement quantity percentage to reflect...

  18. Correlating Pluto's Albedo Distribution to Long Term Insolation Patterns

    NASA Astrophysics Data System (ADS)

    Earle, Alissa M.; Binzel, Richard P.; Stern, S. Alan; Young, Leslie A.; Buratti, Bonnie J.; Ennico, Kimberly; Grundy, Will M.; Olkin, Catherine B.; Spencer, John R.; Weaver, Hal A.

    2015-11-01

    NASA's New Horizons' reconnaissance of the Pluto system has revealed striking albedo contrasts from polar to equatorial latitudes on Pluto, as well as sharp boundaries for longitudinal variations. These contrasts suggest Pluto undergoes dynamic evolution that drives the redistribution of volatiles. Using the New Horizons results as a template, in this talk we will explore the volatile migration process driven seasonally on Pluto considering multiple timescales. These timescales include the current orbit (248 years) as well as the timescales for obliquity precession (amplitude of 23 degrees over 3 Myrs) and regression of the orbital longitude of perihelion (3.7 Myrs). We will build upon the long-term insolation history model described by Earle and Binzel (2015, Icarus 250, 405-412) with the goal of identifying the most critical timescales that drive the features observed in Pluto’s current post-perihelion epoch. This work was supported by the NASA New Horizons Project.

  19. Source-term reevaluation for US commercial nuclear power reactors: a status report

    SciTech Connect

    Herzenberg, C.L.; Ball, J.R.; Ramaswami, D.

    1984-12-01

    Only results that had been discussed publicly, had been published in the open literature, or were available in preliminary reports as of September 30, 1984, are included here. More than 20 organizations are participating in source-term programs, which have been undertaken to examine severe accident phenomena in light-water power reactors (including the chemical and physical behavior of fission products under accident conditions), update and reevaluate source terms, and resolve differences between predictions and observations of radiation releases and related phenomena. Results from these source-term activities have been documented in over 100 publications to date.

  20. Source term model evaluations for the low-level waste facility performance assessment

    SciTech Connect

    Yim, M.S.; Su, S.I.

    1995-12-31

    The estimation of release of radionuclides from various waste forms to the bottom boundary of the waste disposal facility (source term) is one of the most important aspects of LLW facility performance assessment. In this work, several currently used source term models are comparatively evaluated for the release of carbon-14 based on a test case problem. The models compared include PRESTO-EPA-CPG, IMPACTS, DUST and NEFTRAN-II. Major differences in assumptions and approaches between the models are described and key parameters are identified through sensitivity analysis. The source term results from different models are compared and other concerns or suggestions are discussed.

  1. Distribution of astronomical sources in the second Equatorial Infrared Catalogue

    NASA Technical Reports Server (NTRS)

    Nagy, T. A.; Sweeney, L. H.; Lesh, J. R.; Mead, J. M.; Maran, S. P.; Heinsheimer, T. F.; Yates, F. F.

    1979-01-01

    Measurements of infrared (2.7-micron) source positions and flux densities have been derived based on an additional 60.6 hours of satellite observations beyond those considered in the preparation of the Equatorial Infrared Catalogue No. 1 (EIC-1). These data have been processed together with the EIC-1 data to produce EIC-2. The new catalog differs from EIC-1 as follows: there are 1278 sources; there is a larger percentage of unidentified sources; there are increased numbers of sources identified with Two-Micron Sky Survey sources, AFGL sources, AGK3 stars and SAO stars.

  2. Source terms: an investigation of uncertainties, magnitudes, and recommendations for research. [PWR; BWR

    SciTech Connect

    Levine, S.; Kaiser, G. D.; Arcieri, W. C.; Firstenberg, H.; Fulford, P. J.; Lam, P. S.; Ritzman, R. L.; Schmidt, E. R.

    1982-03-01

    The purpose of this document is to assess the state of knowledge and expert opinions that exist about fission product source terms from potential nuclear power plant accidents. This is so that recommendations can be made for research and analyses which have the potential to reduce the uncertainties in these estimated source terms and to derive improved methods for predicting their magnitudes. The main reasons for writing this report are to indicate the major uncertainties involved in defining realistic source terms that could arise from severe reactor accidents, to determine which factors would have the most significant impact on public risks and emergency planning, and to suggest research and analyses that could result in the reduction of these uncertainties. Source terms used in the conventional consequence calculations in the licensing process are not explicitly addressed.

  3. Accident source terms for boiling water reactors with high burnup cores.

    SciTech Connect

    Gauntt, Randall O.; Powers, Dana Auburn; Leonard, Mark Thomas

    2007-11-01

    The primary objective of this report is to provide the technical basis for development of recommendations for updates to the NUREG-1465 Source Term for BWRs that will extend its applicability to accidents involving high burnup (HBU) cores. However, a secondary objective is to re-examine the fundamental characteristics of the prescription for fission product release to containment described by NUREG-1465. This secondary objective is motivated by an interest in understanding the extent to which research into the release and behaviors of radionuclides under accident conditions has altered best-estimate calculations of the integral response of BWRs to severe core damage sequences and the resulting radiological source terms to containment. This report, therefore, documents specific results of fission product source term analyses that will form the basis for the HBU supplement to NUREG-1465. However, commentary is also provided on observed differences between the composite results of the source term calculations performed here and those reflected in NUREG-1465 itself.

  4. Complex cell geometry and sources distribution model for Monte Carlo single cell dosimetry with iodine 125 radioimmunotherapy

    NASA Astrophysics Data System (ADS)

    Arnaud, F. X.; Paillas, S.; Pouget, J. P.; Incerti, S.; Bardiès, M.; Bordage, M. C.

    2016-01-01

    In cellular dosimetry, common assumptions consider concentric spheres for the nucleus and cell and a uniform radionuclide distribution. These approximations do not reflect reality, especially in the situation of radioimmunotherapy with Auger emitters, where very short-ranged electrons induce hyper-localised energy deposition. A realistic cellular dosimetric model was generated to account for the real geometry and activity distribution, for non-internalizing and internalizing antibodies (mAbs) labelled with the Auger emitter I-125. The impact of geometry was studied by comparing the real geometry obtained from confocal microscopy for both cell and nucleus with volume-equivalent concentric spheres. Non-uniform and uniform source distributions were considered for each mAb distribution. Comparisons in terms of mean deposited energy per decay, energy deposition spectra and energy-volume histograms were made using Geant4. We conclude that realistic models are needed, especially when energy deposition is highly non-homogeneous due to the source distribution.

  5. Transverse distribution of beam current oscillations of a 14 GHz electron cyclotron resonance ion source.

    PubMed

    Tarvainen, O; Toivanen, V; Komppula, J; Kalvas, T; Koivisto, H

    2014-02-01

    The temporal stability of oxygen ion beams has been studied with the 14 GHz A-ECR at JYFL (University of Jyvaskyla, Department of Physics). A sector Faraday cup was employed to measure the distribution of the beam current oscillations across the beam profile. The spatial and temporal characteristics of two different oscillation "modes" often observed with the JYFL 14 GHz ECRIS are discussed. It was observed that the low-frequency oscillations below 200 Hz are distributed almost uniformly across the beam. In the high-frequency oscillation "mode," with frequencies >300 Hz, the core of the beam, which carries most of the current, oscillates with a smaller amplitude than the peripheral parts of the beam. The results help to explain differences observed between the two oscillation modes in terms of the transport efficiency through the JYFL K-130 cyclotron. The dependence of the oscillation pattern on ion source parameters is a strong indication that the mechanisms driving the fluctuations are plasma effects. PMID:24593488

  6. Spatial distribution and source apportionment of PCBs in sediments around İzmit industrial complexes, Turkey.

    PubMed

    Gedik, Kadir; Demircioğlu, Filiz; Imamoğlu, Ipek

    2010-11-01

    The spatial distribution, degree of pollution and major sources of PCBs were evaluated in surficial sediments within the heavily urbanized and industrialized İzmit Bay and its main freshwater inputs. ΣPCB concentrations range from 2.90 to 85.4ngg(-1) in marine sediments and from ND to 47.7ngg(-1) in freshwater sediments. Results suggest that high concentrations of ΣPCBs were localized around a chlor-alkali plant and an industry that handles bulk liquid, dry and drummed chemicals, and petroleum products in the Bay. Using a chemical mass balance receptor model (CMB), major sources of PCBs in the region were investigated. The CMB model identified Aroclor 1254 and 1260 to be the major PCB sources in marine sediments and the less chlorinated Aroclor 1248 and 1242 as the major PCB sources in freshwater sediments. The potential sources for the PCBs were briefly discussed in terms of their use in various industrial applications. PMID:20889182
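
    The CMB idea can be illustrated with a few lines of non-negative least squares: measured congener concentrations are modeled as a mixture of Aroclor source profiles. The profile numbers below are placeholders rather than real Aroclor compositions, and the solver choice is an assumption, not the software used in the study.

```python
# Hedged sketch of chemical mass balance source apportionment via non-negative least squares.
import numpy as np
from scipy.optimize import nnls

# Columns: hypothetical congener mass fractions for Aroclor 1242, 1248, 1254, 1260
profiles = np.array([
    [0.30, 0.25, 0.05, 0.01],
    [0.25, 0.30, 0.10, 0.02],
    [0.20, 0.20, 0.30, 0.10],
    [0.15, 0.15, 0.35, 0.37],
    [0.10, 0.10, 0.20, 0.50],
])                                # shape: (n_congeners, n_sources)

measured = np.array([1.2, 1.5, 2.8, 3.9, 3.1])   # hypothetical sediment congener conc. (ng/g)

contributions, residual = nnls(profiles, measured)
for name, c in zip(["A1242", "A1248", "A1254", "A1260"], contributions):
    print(f"{name}: {c:.2f} ng/g attributed")
```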

  7. 78 FR 56685 - SourceGas Distribution LLC; Notice of Application

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-13

    ... Energy Regulatory Commission SourceGas Distribution LLC; Notice of Application Take notice that on August 27, 2013, SourceGas Distribution LLC (SourceGas), 600 12th Street, Suite 300, Golden, Colorado 80401, filed in Docket No. CP13-540-000 an application pursuant to section 7(f) of the Natural Gas Act...

  8. 78 FR 6318 - SourceGas Distribution LLC; Notice of Petition for Rate Approval

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-30

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF ENERGY Federal Energy Regulatory Commission SourceGas Distribution LLC; Notice of Petition for Rate Approval Take notice that on January 15, 2013, SourceGas Distribution LLC (SourceGas) filed a rate election pursuant...

  9. 77 FR 28374 - SourceGas Distribution LLC; Notice of Compliance Filing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-14

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF ENERGY Federal Energy Regulatory Commission SourceGas Distribution LLC; Notice of Compliance Filing Take notice that on April 30, 2012, SourceGas Distribution LLC (SourceGas) filed a revised Statement of Operating...

  10. A second order operator splitting method for Allen-Cahn type equations with nonlinear source terms

    NASA Astrophysics Data System (ADS)

    Lee, Hyun Geun; Lee, June-Yub

    2015-08-01

    Allen-Cahn (AC) type equations with nonlinear source terms have been applied to a wide range of problems, for example, the vector-valued AC equation for phase separation and the phase-field equation for dendritic crystal growth. In contrast to the well-developed first- and second-order methods for the AC equation, few second-order methods have been suggested for AC-type equations with nonlinear source terms because of the difficulties in dealing with the nonlinear source term numerically. In this paper, we propose a simple and stable second-order operator splitting method. A core idea of the method is to decompose the original equation into three subequations with the free-energy evolution term, the heat evolution term, and a nonlinear source term, respectively. It is important to combine these three subequations in the proper order to achieve second-order accuracy and stability. We propose a method with a half-time free-energy evolution solver, a half-time heat evolution solver, a full-time midpoint solver for the nonlinear source term, and a half-time heat evolution solver followed by a final half-time free-energy evolution solver. We numerically demonstrate the second-order accuracy of the new numerical method through simulations of phase separation and dendritic crystal growth.
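
    A one-dimensional sketch of the splitting sequence described above is given below, under several assumptions not stated in the abstract (periodic domain, exact FFT solution of the heat subequation, the closed-form solution of the free-energy ODE, and a made-up source term f). It is meant to show the ordering of the sub-solvers, not to reproduce the authors' scheme.

```python
# Illustrative 1D splitting step for u_t = u_xx + (u - u**3)/eps**2 + f(u), periodic domain.
import numpy as np

def ac_split_step(u, dt, dx, eps, f):
    n = len(u)
    k2 = (2.0 * np.pi * np.fft.fftfreq(n, d=dx)) ** 2

    def free_energy(v, tau):
        # closed-form solution of v_t = (v - v**3) / eps**2 over time tau
        e = np.exp(-2.0 * tau / eps**2)
        return v / np.sqrt(v**2 + (1.0 - v**2) * e)

    def heat(v, tau):
        # exact solution of v_t = v_xx on a periodic grid via FFT
        return np.real(np.fft.ifft(np.fft.fft(v) * np.exp(-k2 * tau)))

    u = free_energy(u, 0.5 * dt)            # half-step free-energy evolution
    u = heat(u, 0.5 * dt)                   # half-step heat evolution
    u = u + dt * f(u + 0.5 * dt * f(u))     # full-step explicit midpoint for the source
    u = heat(u, 0.5 * dt)                   # half-step heat evolution
    u = free_energy(u, 0.5 * dt)            # final half-step free-energy evolution
    return u

# Hypothetical example: smooth initial data and a made-up source f(u) = 0.1*(1 - u**2)
n, L, eps, dt = 256, 2.0 * np.pi, 0.1, 1.0e-3
x = np.linspace(0.0, L, n, endpoint=False)
u = 0.5 * np.sin(x)
for _ in range(1000):
    u = ac_split_step(u, dt, L / n, eps, f=lambda v: 0.1 * (1.0 - v**2))
```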

  11. Long Term Leaching of Chlorinated Solvents from Source Zones in Low Permeability Settings with Fractures

    NASA Astrophysics Data System (ADS)

    Bjerg, P. L.; Chambon, J.; Troldborg, M.; Binning, P. J.; Broholm, M. M.; Lemming, G.; Damgaard, I.

    2008-12-01

    Groundwater contamination by chlorinated solvents, such as perchloroethylene (PCE), often occurs via leaching from complex sources located in low permeability sediments such as clayey tills overlying aquifers. Clayey tills are mostly fractured, and contamination migrating through the fractures spreads to the low permeability matrix by diffusion. This results in a long-term source of contamination due to back-diffusion. Leaching from such sources is further complicated by microbial degradation under anaerobic conditions to sequentially form the daughter products trichloroethylene, cis-dichloroethylene (cis-DCE), vinyl chloride (VC) and ethene. This process can be enhanced by addition of electron donors and/or bioaugmentation and is termed Enhanced Reductive Dechlorination (ERD). This work aims to improve our understanding of the physical, chemical and microbial processes governing source behaviour under natural and enhanced conditions. That understanding is applied to risk assessment, and to determine the relationship and time frames of source clean-up and plume response. To meet that aim, field and laboratory observations are coupled to state-of-the-art models incorporating new insights into contaminant behaviour. The long-term leaching of chlorinated ethenes from clay aquitards is currently being monitored at a number of Danish sites. The observed data are simulated using a coupled fracture flow and clay matrix diffusion model. Sequential degradation is represented by modified Monod kinetics accounting for competitive inhibition between the chlorinated ethenes. The model is constructed using Comsol Multiphysics, a generic finite-element partial differential equation solver. The model is applied at two well characterised field sites with respect to hydrogeology, fracture network, contaminant distribution and microbial processes (lab and field experiments). At the study sites (Sortebrovej and Vadsbyvej), the source areas are situated in a clayey till with fractures
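
    The sequential degradation chain itself is easy to sketch. The code below integrates PCE -> TCE -> cis-DCE -> VC -> ethene with Monod kinetics and competitive inhibition between the chlorinated ethenes; the rate constants and half-saturation values are hypothetical placeholders, and the formulation is a generic one rather than the site models described above.

```python
# Generic sketch of sequential reductive dechlorination with competitive inhibition.
import numpy as np
from scipy.integrate import solve_ivp

names = ["PCE", "TCE", "cDCE", "VC", "ethene"]
vmax = np.array([5.0, 4.0, 3.0, 2.0])      # umol/L/d, hypothetical
Ks   = np.array([3.0, 3.0, 5.0, 7.0])      # umol/L, hypothetical

def rates(t, c):
    dc = np.zeros_like(c)
    for i in range(4):                      # step i: species i -> species i+1
        # competitive inhibition: the other chlorinated ethenes raise the effective Ks
        inhib = sum(c[j] / Ks[j] for j in range(4) if j != i)
        r = vmax[i] * c[i] / (Ks[i] * (1.0 + inhib) + c[i])
        dc[i]     -= r
        dc[i + 1] += r
    return dc

c0 = [50.0, 0.0, 0.0, 0.0, 0.0]             # umol/L of PCE initially
sol = solve_ivp(rates, (0.0, 200.0), c0, rtol=1e-8)
print(dict(zip(names, sol.y[:, -1].round(2))))
```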

  12. Autonomous distributed temperature sensing for long-term heated applications in remote areas

    NASA Astrophysics Data System (ADS)

    Kurth, A.-M.; Dawes, N.; Selker, J.; Schirmer, M.

    2012-10-01

    Distributed Temperature Sensing (DTS) is a fiber-optical method enabling simultaneous temperature measurements over long distances. Electrical resistance heating of the metallic components of the fiber-optic cable provides information on the thermal characteristics of the cable's environment, providing valuable insight into processes occurring in the surrounding medium, such as groundwater-surface water interactions, dam stability or soil moisture. Until now, heated applications required direct handling of the DTS instrument by a researcher, rendering long-term investigations in remote areas impractical due to the often difficult and time-consuming access to the field site. Remote-control and automation of the DTS instrument and heating processes, however, resolve the issue with difficult access. The data can also be remotely accessed and stored on a central database. The power supply can be grid-independent, although significant infrastructure investment is required here due to high power consumption during heated applications. Solar energy must be sufficient even in worst case scenarios, e.g. during long periods of intense cloud cover, to prevent system failure due to energy shortage. In combination with storage batteries and a low heating frequency, e.g. once per day or once per week (depending on the season and the solar radiation on site), issues of high power consumption may be resolved. Safety regulations dictate adequate shielding and ground-fault protection, to safeguard animals and humans from electricity and laser sources. In this paper the autonomous DTS system is presented to allow research with heated applications of DTS in remote areas for long-term investigations of temperature distributions in the environment.

  13. Autonomous distributed temperature sensing for long-term heated applications in remote areas

    NASA Astrophysics Data System (ADS)

    Kurth, A.-M.; Dawes, N.; Selker, J.; Schirmer, M.

    2013-02-01

    Distributed temperature sensing (DTS) is a fiber-optical method enabling simultaneous temperature measurements over long distances. Electrical resistance heating of the metallic components of the fiber-optic cable provides information on the thermal characteristics of the cable's environment, providing valuable insight into processes occurring in the surrounding medium, such as groundwater-surface water interactions, dam stability or soil moisture. Until now, heated applications required direct handling of the DTS instrument by a researcher, rendering long-term investigations in remote areas impractical due to the often difficult and time-consuming access to the field site. Remote control and automation of the DTS instrument and heating processes, however, resolve the issue with difficult access. The data can also be remotely accessed and stored on a central database. The power supply can be grid independent, although significant infrastructure investment is required here due to high power consumption during heated applications. Solar energy must be sufficient even in worst case scenarios, e.g. during long periods of intense cloud cover, to prevent system failure due to energy shortage. In combination with storage batteries and a low heating frequency, e.g. once per day or once per week (depending on the season and the solar radiation on site), issues of high power consumption may be resolved. Safety regulations dictate adequate shielding and ground-fault protection, to safeguard animals and humans from electricity and laser sources. In this paper the autonomous DTS system is presented to allow research with heated applications of DTS in remote areas for long-term investigations of temperature distributions in the environment.

  14. Shielding analysis of proton therapy accelerators: a demonstration using Monte Carlo-generated source terms and attenuation lengths.

    PubMed

    Lai, Bo-Lun; Sheu, Rong-Jiun; Lin, Uei-Tyng

    2015-05-01

    Monte Carlo simulations are generally considered the most accurate method for complex accelerator shielding analysis. Simplified models based on point-source line-of-sight approximation are often preferable in practice because they are intuitive and easy to use. A set of shielding data, including source terms and attenuation lengths for several common targets (iron, graphite, tissue, and copper) and shielding materials (concrete, iron, and lead), was generated by performing Monte Carlo simulations for 100-300 MeV protons. Possible applications and a proper use of the data set were demonstrated through a practical case study, in which shielding analysis on a typical proton treatment room was conducted. A thorough and consistent comparison between the predictions of our point-source line-of-sight model and those obtained by Monte Carlo simulations for a 360° dose distribution around the room perimeter showed that the data set can yield fairly accurate or conservative estimates for the transmitted doses, except for those near the maze exit. In addition, this study demonstrated that appropriate coupling between the generated source term and empirical formulae for radiation streaming can be used to predict a reasonable dose distribution along the maze. This case study proved the effectiveness and advantage of applying the data set to a quick shielding design and dose evaluation for proton therapy accelerators. PMID:25811254
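
    The point-source line-of-sight approximation referred to above reduces each dose estimate to an inverse-square term multiplied by exponential attenuation along the slant path through the shield. The sketch below illustrates that formula; the source-term and attenuation-length values are placeholders, not numbers from the published data set.

```python
import math

def transmitted_dose_rate(source_term, r, slant_thickness, attenuation_length):
    """Point-source line-of-sight estimate: inverse-square spreading times
    exponential attenuation along the slant path through the shield.

    source_term        -- H0, dose at 1 m per unit beam loss (e.g. Sv*m^2/proton)
    r                  -- source-to-dose-point distance (m)
    slant_thickness    -- shield thickness along the line of sight (g/cm^2)
    attenuation_length -- lambda for the shield material (g/cm^2)
    """
    return source_term * math.exp(-slant_thickness / attenuation_length) / r**2

# Illustrative numbers only (not from the cited data set):
H0 = 2.0e-15        # Sv*m^2 per proton, hypothetical value for protons on copper
lam = 120.0         # g/cm^2, hypothetical attenuation length in concrete
rho = 2.35          # g/cm^3, ordinary concrete
d = 200 * rho       # 200 cm of concrete expressed in g/cm^2
print(transmitted_dose_rate(H0, r=5.0, slant_thickness=d, attenuation_length=lam))
```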

  15. Chemotaxis Increases the Residence Time Distribution of Bacteria in Granular Media Containing Distributed Contaminant Sources

    NASA Astrophysics Data System (ADS)

    Adadevoh, J.; Triolo, S.; Ramsburg, C. A.; Ford, R.

    2015-12-01

    The use of chemotactic bacteria in bioremediation has the potential to increase access to, and biotransformation of, contaminant mass within the subsurface environment. This laboratory-scale study aimed to understand and quantify the influence of chemotaxis on residence times of pollutant-degrading bacteria within homogeneous treatment zones. Focus was placed on a continuous flow sand-packed column system in which a uniform distribution of naphthalene crystals created distributed sources of dissolved phase contaminant. A 10 mL pulse of Pseudomonas putida G7, which is chemotactic to naphthalene, and Pseudomonas putida G7 Y1, a non-chemotactic mutant strain, were simultaneously introduced into the sand-packed column at equal concentrations. Breakthrough curves obtained for the bacteria from column experiments conducted with and without naphthalene were used to quantify the effect of chemotaxis on transport parameters. In the presence of the chemoattractant, longitudinal dispersivity of PpG7 increased by a factor of 3 and percent recovery decreased from 21% to 12%. The results imply that pore-scale chemotaxis responses are evident at an interstitial fluid velocity of 1.7 m/d, which is within the range of typical groundwater flow. Within the context of bioremediation, chemotaxis may work to enhance bacterial residence times in zones of contamination thereby improving treatment.
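
    Transport parameters such as longitudinal dispersivity are typically obtained by fitting breakthrough curves to solutions of the one-dimensional advection-dispersion equation. The sketch below evaluates the step-input (Ogata-Banks, leading-term) response for two dispersivities differing by a factor of three; the column length, velocity and dispersivity values are illustrative assumptions, and the pulse injection used in the study would be represented by superposing two such step responses.

```python
import numpy as np
from scipy.special import erfc

def ade_step(x, t, v, alpha_L):
    """Approximate 1D advection-dispersion breakthrough for a step input,
    keeping only the leading erfc term (valid when x >> alpha_L).
    D = alpha_L * v; molecular diffusion neglected."""
    D = alpha_L * v
    return 0.5 * erfc((x - v * t) / (2.0 * np.sqrt(D * t)))

# Illustrative column (hypothetical numbers): 0.3 m long, v = 1.7 m/d.
t = np.linspace(0.005, 0.6, 200)                 # days
for alpha_L in (0.001, 0.003):                   # 3x increase in dispersivity
    c = ade_step(x=0.3, t=t, v=1.7, alpha_L=alpha_L)
    t16 = t[np.argmin(abs(c - 0.16))]
    t84 = t[np.argmin(abs(c - 0.84))]
    print(f"alpha_L = {alpha_L} m: breakthrough spread t84 - t16 = {t84 - t16:.3f} d")
```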

  16. Source Term Model for Vortex Generator Vanes in a Navier-Stokes Computer Code

    NASA Technical Reports Server (NTRS)

    Waithe, Kenrick A.

    2004-01-01

    A source term model for an array of vortex generators was implemented into a non-proprietary Navier-Stokes computer code, OVERFLOW. The source term models the side force created by a vortex generator vane. The model is obtained by introducing a side force to the momentum and energy equations that can adjust its strength automatically based on the local flow. The model was tested and calibrated by comparing data from numerical simulations and experiments of a single low profile vortex generator vane on a flat plate. In addition, the model was compared to experimental data of an S-duct with 22 co-rotating, low profile vortex generators. The source term model allowed a grid reduction of about seventy percent when compared with the numerical simulations performed on a fully gridded vortex generator on a flat plate without adversely affecting the development and capture of the vortex created. The source term model was able to predict the shape and size of the stream-wise vorticity and velocity contours very well when compared with both numerical simulations and experimental data. The peak vorticity and its location were also predicted very well when compared to numerical simulations and experimental data. The circulation predicted by the source term model matches the prediction of the numerical simulation. The source term model predicted the engine fan face distortion and total pressure recovery of the S-duct with 22 co-rotating vortex generators very well. The source term model allows a researcher to quickly investigate different locations of individual or a row of vortex generators. The researcher is able to conduct a preliminary investigation with minimal grid generation and computational time.

  17. WATER QUALITY IN SOURCE WATER, TREATMENT, AND DISTRIBUTION SYSTEMS

    EPA Science Inventory

    Most drinking water utilities practice the multiple-barrier concept as the guiding principle for providing safe water. This chapter discusses multiple barriers as they relate to the basic criteria for selecting and protecting source waters, including known and potential sources ...

  18. Distribution, sources and health risk assessment of mercury in kindergarten dust

    NASA Astrophysics Data System (ADS)

    Sun, Guangyi; Li, Zhonggen; Bi, Xiangyang; Chen, Yupeng; Lu, Shuangfang; Yuan, Xin

    2013-07-01

    Mercury (Hg) contamination in urban areas is a hot issue in environmental research. In this study, the distribution, sources and health risk of Hg in dust from 69 kindergartens in Wuhan, China, were investigated. In comparison with most other cities, the concentrations of total mercury (THg) and methylmercury (MeHg) were significantly elevated, ranging from 0.15 to 10.59 mg kg-1 and from 0.64 to 3.88 μg kg-1, respectively. Among the five different urban areas, the educational area had the highest concentrations of THg and MeHg. GIS mapping was used to identify the hot-spot areas and assess the potential pollution sources of Hg. The emissions of coal-power plants and coking plants were the main sources of THg in the dust, whereas the contributions of municipal solid waste (MSW) landfills and iron and steel smelting related industries were not significant. However, the emissions from MSW landfills were considered to be an important source of MeHg in the studied area. The result of the health risk assessment indicated a high adverse health effect of Hg contamination in the kindergarten dust on children living in the educational area (hazard index (HI) = 6.89).
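
    A hazard index of the kind quoted (HI = 6.89) is conventionally the sum of hazard quotients, each an estimated average daily dose divided by a reference dose, summed over exposure pathways and contaminants. The sketch below shows the dust-ingestion pathway only; the exposure parameters and the oral reference dose for inorganic mercury are generic placeholder assumptions rather than the study's values, so the number it prints is not expected to reproduce the reported HI.

```python
def ingestion_dose(c_mg_kg, ing_rate_mg_day=100.0, ef_days_yr=180.0, ed_yr=6.0,
                   bw_kg=15.0, at_days=6 * 365.0):
    """Average daily dose (mg/kg/day) from incidental dust ingestion.
    Exposure parameters are generic placeholders, not the study's values."""
    return (c_mg_kg * ing_rate_mg_day * ef_days_yr * ed_yr * 1e-6
            / (bw_kg * at_days))

rfd_hg_oral = 3e-4   # mg/kg/day, an often-cited oral RfD for inorganic Hg (assumption)
hq_ingestion = ingestion_dose(c_mg_kg=10.59) / rfd_hg_oral
print(f"HQ (ingestion pathway only) = {hq_ingestion:.2f}")
# The hazard index adds the other pathways:
# HI = HQ_ingestion + HQ_inhalation + HQ_dermal
```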

  19. Observation-based source terms in the third-generation wave model WAVEWATCH

    NASA Astrophysics Data System (ADS)

    Zieger, Stefan; Babanin, Alexander V.; Erick Rogers, W.; Young, Ian R.

    2015-12-01

    Measurements collected during the AUSWEX field campaign, at Lake George (Australia), resulted in new insights into the processes of wind wave interaction and whitecapping dissipation, and consequently new parameterizations of the input and dissipation source terms. The new nonlinear wind input term developed accounts for dependence of the growth on wave steepness, airflow separation, and for negative growth rate under adverse winds. The new dissipation terms feature the inherent breaking term, a cumulative dissipation term and a term due to production of turbulence by waves, which is particularly relevant for decaying seas and for swell. The latter is consistent with the observed decay rate of ocean swell. This paper describes these source terms implemented in WAVEWATCH III® and evaluates the performance against existing source terms in academic duration-limited tests, against buoy measurements for windsea-dominated conditions, under conditions of extreme wind forcing (Hurricane Katrina), and against altimeter data in global hindcasts. Results show agreement by means of growth curves as well as integral and spectral parameters in the simulations and hindcast.

  20. Use of open source distribution for a machine tool controller

    NASA Astrophysics Data System (ADS)

    Shackleford, William P.; Proctor, Frederick M.

    2001-02-01

    In recent years a growing number of government and university labs, non-profit organizations and even a few for-profit corporations have found that making their source code public is good for both developers and users. In machine tool control, a growing number of users are demanding that the controllers they buy be `open architecture,' which would allow third parties and end-users at least limited ability to modify, extend or replace the components of that controller. This paper examines the advantages and dangers of going one step further, and providing `open source' controllers by relating the experiences of users and developers of the Enhanced Machine Controller. We also examine some implications for the development of standards for open-architecture but closed-source controllers. Some of the questions we hope to answer include: How can the quality be maintained after the source code has been modified? Can the code be trusted to run on expensive machines and parts, or when the safety of the operator is an issue? Can `open-architecture' but closed-source controllers ever achieve the level of flexibility or extensibility that open-source controllers can?

  1. A study of numerical methods for hyperbolic conservation laws with stiff source terms

    NASA Technical Reports Server (NTRS)

    Leveque, R. J.; Yee, H. C.

    1988-01-01

    The proper modeling of nonequilibrium gas dynamics is required in certain regimes of hypersonic flow. For inviscid flow this gives a system of conservation laws coupled with source terms representing the chemistry. Often a wide range of time scales is present in the problem, leading to numerical difficulties as in stiff systems of ordinary differential equations. Stability can be achieved by using implicit methods, but other numerical difficulties are observed. The behavior of typical numerical methods on a simple advection equation with a parameter-dependent source term was studied. Two approaches to incorporate the source term were utilized: MacCormack type predictor-corrector methods with flux limiters, and splitting methods in which the fluid dynamics and chemistry are handled in separate steps. Various comparisons over a wide range of parameter values were made. In the stiff case where the solution contains discontinuities, incorrect numerical propagation speeds are observed with all of the methods considered. This phenomenon is studied and explained.
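
    The spurious propagation speed described above is easy to reproduce with a minimal splitting scheme: first-order upwind advection of a step profile alternated with a relaxation step for a stiff source of the form psi(u) = -u(u-1)(u-1/2). The sketch below treats the stiff limit of the source step exactly (values snap to the nearest stable state, 0 or 1); it is an illustration of the phenomenon rather than the paper's exact schemes. The discontinuity then travels one grid cell per time step instead of at the correct unit speed.

```python
import numpy as np

# u_t + u_x = psi(u)/tau with psi(u) = -u*(u - 1)*(u - 0.5), stiff limit tau << dt.
nx, cfl = 100, 0.8
dx = 1.0 / nx
dt = cfl * dx
x = (np.arange(nx) + 0.5) * dx
u = np.where(x < 0.3, 1.0, 0.0)            # step initial data; exact speed is 1

def relax_stiff(u):
    # In the stiff limit the source step drives u to the nearest stable
    # equilibrium of psi (0 or 1) within a single time step.
    return np.where(u > 0.5, 1.0, 0.0)

for _ in range(50):
    u_new = u - cfl * (u - np.roll(u, 1))  # first-order upwind advection step
    u_new[0] = 1.0                         # inflow boundary condition
    u = relax_stiff(u_new)                 # stiff source step

front = x[np.argmax(u < 0.5)]
print(f"numerical front at x = {front:.2f}; exact front would be at x = {0.3 + 50 * dt:.2f}")
```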

  2. Uncertainties associated with the definition of a hydrologic source term for the Nevada Test Site

    SciTech Connect

    Smith, D.K.; Esser, B.K.; Thompson, J.L.

    1995-05-01

    The U.S. Department of Energy, Nevada Operations Office (DOE/NV) Environmental Restoration Division is seeking to evaluate groundwater contamination resulting from 30 years of underground nuclear testing at the Nevada Test Site (NTS). This evaluation requires knowledge about what radioactive materials are in the groundwater and how they are transported through the underground environment. This information coupled with models of groundwater flow (flow paths and flow rates) will enable predictions of the arrival of each radionuclide at a selected receptor site. Risk assessment models will then be used to calculate the expected environmental and human doses. The accuracy of our predictions depends on the validity of our hydrologic and risk assessment models and on the quality of the data for radionuclide concentrations in ground water at each underground nuclear test site. This paper summarizes what we currently know about radioactive material in NTS groundwater and suggests how we can best use our limited knowledge to proceed with initial modeling efforts. The amount of a radionuclide available for transport in groundwater at the site of an underground nuclear test is called the hydrologic source term. The radiologic source term is the total amount of residual radionuclides remaining after an underground nuclear test. The hydrologic source term is smaller than the radiologic source term because some or most of the radionuclide residual cannot be transported by groundwater. The radiologic source term has been determined for each of the underground nuclear tests fired at the NTS; however, the hydrologic source term has been estimated from measurements at only a few sites.

  3. Acetone in the atmosphere: Distribution, sources, and sinks

    NASA Technical Reports Server (NTRS)

    Singh, H. B.; O'Hara, D.; Herlth, D.; Sachse, W.; Blake, D. R.; Bradshaw, J. D.; Kanakidou, M.; Crutzen, P. J.

    1994-01-01

    Acetone (CH3COCH3) was found to be the dominant nonmethane organic species present in the atmosphere sampled primarily over eastern Canada (0-6 km, 35 deg-65 deg N) during ABLE3B (July to August 1990). A concentration range of 357 to 2310 ppt (= 10(exp -12) v/v) with a mean value of 1140 +/- 413 ppt was measured. Under extremely clean conditions, generally involving Arctic flows, lowest (background) mixing ratios of 550 +/- 100 ppt were present in much of the troposphere studied. Correlations between atmospheric mixing ratios of acetone and select species such as C2H2, CO, C3H8, C2Cl4 and isoprene provided important clues to its possible sources and to the causes of its atmospheric variability. Biomass burning as a source of acetone has been identified for the first time. By using atmospheric data and three-dimensional photochemical models, a global acetone source of 40-60 Tg (= 10(exp 12) g)/yr is estimated to be present. Secondary formation from the atmospheric oxidation of precursor hydrocarbons (principally propane, isobutane, and isobutene) provides the single largest source (51%). The remainder is attributable to biomass burning (26%), direct biogenic emissions (21%), and primary anthropogenic emissions (3%). Atmospheric removal of acetone is estimated to be due to photolysis (64%), reaction with OH radicals (24%), and deposition (12%). Model calculations also suggest that acetone photolysis contributed significantly to PAN formation (100-200 ppt) in the middle and upper troposphere of the sampled region and may be important globally. While the source-sink equation appears to be roughly balanced, much more atmospheric and source data, especially from the southern hemisphere, are needed to reliably quantify the atmospheric budget of acetone.

  4. A study of numerical methods for hyperbolic conservation laws with stiff source terms

    NASA Technical Reports Server (NTRS)

    Leveque, R. J.; Yee, H. C.

    1990-01-01

    In the present study of the behavior of typical numerical methods in the case of a model advection equation having a parameter-dependent source term, two approaches to the incorporation of the source terms are used: MacCormack-type predictor-corrector methods with flux limiters, and splitting methods in which the fluid dynamics and chemistry are handled in separate steps. The latter are found to perform slightly better. The model scalar equation is used to show that the incorrectness of the propagation speeds of discontinuities observed in the stiff case is due to the introduction of nonequilibrium values through numerical dissipation in the advection step.

  5. Source-term characterisation and solid speciation of plutonium at the Semipalatinsk NTS, Kazakhstan.

    PubMed

    Nápoles, H Jiménez; León Vintró, L; Mitchell, P I; Omarova, A; Burkitbayev, M; Priest, N D; Artemyev, O; Lukashenko, S

    2004-01-01

    New data on the concentrations of key fission/activation products and transuranium nuclides in samples of soil and water from the Semipalatinsk Nuclear Test Site are presented and interpreted. Sampling was carried out at Ground Zero, Lake Balapan, the Tel'kem craters and reference locations within the test site boundary well removed from localised sources. Radionuclide ratios have been used to characterise the source term(s) at each of these sites. The geochemical partitioning of plutonium has also been examined and it is shown that the bulk of the plutonium contamination at most of the sites examined is in a highly refractory, non-labile form. PMID:15177366

  6. The long-term problems of contaminated land: Sources, impacts and countermeasures

    SciTech Connect

    Baes, C.F. III

    1986-11-01

    This report examines the various sources of radiological land contamination; its extent; its impacts on man, agriculture, and the environment; countermeasures for mitigating exposures; radiological standards; alternatives for achieving land decontamination and cleanup; and possible alternatives for utilizing the land. The major potential sources of extensive long-term land contamination with radionuclides, in order of decreasing extent, are nuclear war, detonation of a single nuclear weapon (e.g., a terrorist act), serious reactor accidents, and nonfission nuclear weapons accidents that disperse the nuclear fuels (termed ''broken arrows'').

  7. 77 FR 10490 - SourceGas Distribution LLC; Notice of Filing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-22

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF ENERGY Federal Energy Regulatory Commission SourceGas Distribution LLC; Notice of Filing Take notice that on February 14, 2012, SourceGas Distribution LLC submitted a revised baseline filing of their Statement of...

  8. 16 CFR Table 4 to Part 1512 - Relative Energy Distribution of Sources

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 16 Commercial Practices 2 2014-01-01 2014-01-01 false Relative Energy Distribution of Sources 4... SUBSTANCES ACT REGULATIONS REQUIREMENTS FOR BICYCLES Pt. 1512, Table 4 Table 4 to Part 1512—Relative Energy Distribution of Sources Wave length (nanometers) Relative energy 380 9.79 390 12.09 400 14.71 410 17.68 420...

  9. 16 CFR Table 4 to Part 1512 - Relative Energy Distribution of Sources

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 16 Commercial Practices 2 2012-01-01 2012-01-01 false Relative Energy Distribution of Sources 4... SUBSTANCES ACT REGULATIONS REQUIREMENTS FOR BICYCLES Pt. 1512, Table 4 Table 4 to Part 1512—Relative Energy Distribution of Sources Wave length (nanometers) Relative energy 380 9.79 390 12.09 400 14.71 410 17.68 420...

  10. 16 CFR Table 4 to Part 1512 - Relative Energy Distribution of Sources

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 16 Commercial Practices 2 2010-01-01 2010-01-01 false Relative Energy Distribution of Sources 4... SUBSTANCES ACT REGULATIONS REQUIREMENTS FOR BICYCLES Pt. 1512, Table 4 Table 4 to Part 1512—Relative Energy Distribution of Sources Wave length (nanometers) Relative energy 380 9.79 390 12.09 400 14.71 410 17.68 420...

  11. 16 CFR Table 4 to Part 1512 - Relative Energy Distribution of Sources

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 16 Commercial Practices 2 2013-01-01 2013-01-01 false Relative Energy Distribution of Sources 4... SUBSTANCES ACT REGULATIONS REQUIREMENTS FOR BICYCLES Pt. 1512, Table 4 Table 4 to Part 1512—Relative Energy Distribution of Sources Wave length (nanometers) Relative energy 380 9.79 390 12.09 400 14.71 410 17.68 420...

  12. 16 CFR Table 4 to Part 1512 - Relative Energy Distribution of Sources

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 16 Commercial Practices 2 2011-01-01 2011-01-01 false Relative Energy Distribution of Sources 4... SUBSTANCES ACT REGULATIONS REQUIREMENTS FOR BICYCLES Pt. 1512, Table 4 Table 4 to Part 1512—Relative Energy Distribution of Sources Wave length (nanometers) Relative energy 380 9.79 390 12.09 400 14.71 410 17.68 420...

  13. The Fukushima releases: an inverse modelling approach to assess the source term by using gamma dose rate observations

    NASA Astrophysics Data System (ADS)

    Saunier, Olivier; Mathieu, Anne; Didier, Damien; Tombette, Marilyne; Quélo, Denis; Winiarek, Victor; Bocquet, Marc

    2013-04-01

    The Chernobyl nuclear accident and more recently the Fukushima accident highlighted that the largest source of error in consequence assessment is the source term estimation, including the time evolution of the release rate and its distribution between radioisotopes. Inverse modelling methods have proved to be efficient in assessing the source term in accidental situations (Gudiksen, 1989, Krysta and Bocquet, 2007, Stohl et al 2011, Winiarek et al 2012). These methods combine environmental measurements and atmospheric dispersion models. They have recently been applied to the Fukushima accident. Most existing approaches are designed to use air sampling measurements (Winiarek et al, 2012) and some of them also use deposition measurements (Stohl et al, 2012, Winiarek et al, 2013). During the Fukushima accident, such measurements were far less numerous and not as well distributed within Japan as the dose rate measurements. To efficiently document the evolution of the contamination, gamma dose rate measurements were numerous, well distributed within Japan and offered a high temporal frequency. However, dose rate data are not as easy to use as air sampling measurements and until now they were not used in inverse modelling approaches. Indeed, dose rate data result from all the gamma emitters present in the ground and in the atmosphere in the vicinity of the receptor. They do not allow one to determine the isotopic composition or to distinguish the plume contribution from wet deposition. The presented approach proposes a way to use dose rate measurements in an inverse modelling approach without the need for a priori information on emissions. The method proved to be efficient and reliable when applied to the Fukushima accident. The emissions for the 8 main isotopes Xe-133, Cs-134, Cs-136, Cs-137, Ba-137m, I-131, I-132 and Te-132 have been assessed. The Daiichi power plant events (such as ventings, explosions…) known to have caused atmospheric releases are well identified in
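
    Inverse estimation of a release of this kind is often cast as a linear system y ≈ H q, where y gathers the observations (here, gamma dose rates), H contains sensitivities computed with an atmospheric dispersion model, and q is the time-resolved release rate to be recovered, usually with regularization and a non-negativity constraint. The sketch below is a generic, fully synthetic illustration of that idea (random H, made-up release); it is not the authors' algorithm.

```python
import numpy as np
from scipy.optimize import nnls

# Generic inverse-modelling sketch: observations y, source-receptor matrix H
# (one column per release interval, normally from a dispersion model), unknown
# non-negative release rates q.  Tikhonov regularization is applied by stacking
# lam * I under H.  Entirely synthetic; not the authors' method.

rng = np.random.default_rng(0)
n_obs, n_release = 120, 24
H = rng.random((n_obs, n_release))          # stand-in for dispersion sensitivities
q_true = np.zeros(n_release)
q_true[5:9] = [2.0, 8.0, 5.0, 1.0]          # a short venting episode
y = H @ q_true + 0.05 * rng.standard_normal(n_obs)

lam = 0.5
H_aug = np.vstack([H, lam * np.eye(n_release)])
y_aug = np.concatenate([y, np.zeros(n_release)])
q_est, _ = nnls(H_aug, y_aug)               # non-negative least squares
print(np.round(q_est[3:11], 2))             # should recover the episode shape
```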

  14. Using natural archives to track sources and long-term trends of pollution: an introduction

    USGS Publications Warehouse

    Jules Blais; Rosen, Michael R.; John Smol

    2015-01-01

    This book explores the myriad ways that environmental archives can be used to study the distribution and long-term trajectories of contaminants. The volume first focuses on reviews that examine the integrity of the historic record, including factors related to hydrology, post-depositional diffusion, and mixing processes. This is followed by a series of chapters dealing with the diverse archives available for long-term studies of environmental pollution.

  15. An Alternative Treatment of Trace Chemical Constituents in Calculated Chemical Source Terms for Hanford Tank Farms Safety Analyses

    SciTech Connect

    Huckaby, James L.

    2006-09-26

    Hanford Site high-level radioactive waste tank accident analyses require chemical waste toxicity source terms to assess potential accident consequences. Recent reviews of the current methodology used to generate source terms, and the need to periodically update the source terms, have brought scrutiny to the manner in which trace waste constituents are included in the source terms. This report examines the importance of trace constituents to the chemical waste source terms, which are calculated as sums of fractions (SOFs), and recommends three changes to the manner in which trace constituents are included in the calculation of SOFs.
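
    A sum-of-fractions source term of the kind discussed has the general form SOF = sum_i C_i / L_i: the receptor concentration of each constituent divided by its exposure guideline, summed over constituents. The sketch below uses hypothetical constituents and limits purely to show how a trace constituent enters, or is screened out of, the sum; it does not reflect the Hanford methodology or data.

```python
def sum_of_fractions(concentrations, limits):
    """SOF = sum_i C_i / L_i ; a value >= 1 indicates the guideline is exceeded.
    Keys are chemical names; concentrations and limits share the same units.
    All numbers below are hypothetical."""
    return sum(concentrations[k] / limits[k] for k in concentrations)

conc = {"NaOH": 5.0e-2, "NaNO2": 1.0e-2, "trace_metal": 2.0e-6}   # mg/m^3 at receptor
lim = {"NaOH": 0.5, "NaNO2": 4.5, "trace_metal": 2.0e-4}          # exposure guidelines
sof_all = sum_of_fractions(conc, lim)
sof_major = sum_of_fractions({k: v for k, v in conc.items() if k != "trace_metal"},
                             lim)
print(f"SOF with trace constituent: {sof_all:.3f}, without: {sof_major:.3f}")
```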

  16. The planetary distribution of heat sources and sinks during FGGE

    NASA Technical Reports Server (NTRS)

    Johnson, D. R.; Wei, M. Y.

    1985-01-01

    Heating distributions from analysis of the National Meteorological Center and European Center for Medium Range Weather Forecasts data sets; methods used and problems involved in the inference of diabatic heating; the relationship between differential heating and energy transport; and recommendations on the inference of heat sources and heat sinks for the planetary flow are discussed.

  17. Monitoring Design for Source Identification in Water Distribution Systems

    EPA Science Inventory

    The design of sensor networks for the purpose of monitoring for contaminants in water distribution systems is currently an active area of research. Much of the effort has been directed at the contamination detection problem and the expression of public health protection objective...

  18. Acoustic Source Localization via Distributed Sensor Networks using Tera-scale Optical-Core Devices

    SciTech Connect

    Imam, Neena; Barhen, Jacob; Wardlaw, Michael

    2008-01-01

    For real-time acoustic source localization applications, one of the primary challenges is the considerable growth in computational complexity associated with the emergence of ever larger, active or passive, distributed sensor networks. The complexity of the calculations needed to achieve accurate source localization increases dramatically with the size of sensor arrays, resulting in substantial growth of computational requirements that cannot be met with standard hardware. One option to meet this challenge builds upon the emergence of digital optical-core devices. The objective of this work was to explore the implementation of key building block algorithms used in underwater source localization on an optical-core digital processing platform recently introduced by Lenslet Inc. They investigate key concepts of threat-detection algorithms such as Time Difference Of Arrival (TDOA) estimation via sensor data correlation in the time domain with the purpose of implementation on the optical-core processor. They illustrate their results with the aid of numerical simulation and actual optical hardware runs. The major accomplishments of this research, in terms of computational speedup and numerical accuracy achieved via the deployment of optical processing technology, should be of substantial interest to the acoustic signal processing community.
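
    The TDOA building block mentioned above is, at its core, a peak search on the cross-correlation of two sensor records. The NumPy sketch below estimates the delay between two synthetic, noisy copies of the same ping; it is a conventional software illustration of the kernel, not a model of the optical-core hardware implementation.

```python
import numpy as np

def tdoa_crosscorr(x1, x2, fs):
    """Estimate the time difference of arrival between two sensor records
    by locating the peak of their full cross-correlation."""
    xc = np.correlate(x1 - x1.mean(), x2 - x2.mean(), mode="full")
    lag = np.argmax(xc) - (len(x2) - 1)     # lag in samples, positive if x1 lags x2
    return lag / fs

# Synthetic test: the same pinger waveform arrives ~3.2 ms later at sensor 2.
fs = 48_000
t = np.arange(0, 0.2, 1 / fs)
ping = np.sin(2 * np.pi * 2000 * t) * np.exp(-t / 0.01)
delay = int(0.0032 * fs)
rng = np.random.default_rng(1)
s1 = np.concatenate([ping, np.zeros(delay)]) + 0.05 * rng.standard_normal(len(ping) + delay)
s2 = np.concatenate([np.zeros(delay), ping]) + 0.05 * rng.standard_normal(len(ping) + delay)
print(f"estimated TDOA = {tdoa_crosscorr(s2, s1, fs) * 1e3:.2f} ms")   # expect ~ +3.2 ms
```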

  19. Using Reactive Transport Modeling to Evaluate the Source Term at Yucca Mountain

    SciTech Connect

    Y. Chen

    2001-12-19

    The conventional approach of source-term evaluation for performance assessment of nuclear waste repositories uses speciation-solubility modeling tools and assumes pure phases of radioelements control their solubility. This assumption may not reflect reality, as most radioelements (except for U) may not form their own pure phases. As a result, solubility limits predicted using the conventional approach are several orders of magnitude higher than the concentrations of radioelements measured in spent fuel dissolution experiments. This paper presents the author's attempt to use a non-conventional approach to evaluate the source term of radionuclide release for Yucca Mountain. Based on the general reactive-transport code AREST-CT, a model for spent fuel dissolution and secondary phase precipitation has been constructed. The model accounts for both equilibrium and kinetic reactions. Its predictions have been compared against laboratory experiments and natural analogues. It is found that without calibrations, the simulated results match laboratory and field observations very well in many aspects. More important is the fact that no contradictions between them have been found. This provides confidence in the predictive power of the model. Based on the concept of Np incorporated into uranyl minerals, the model not only predicts a lower Np source-term than that given by conventional Np solubility models, but also produces results which are consistent with laboratory measurements and observations. Moreover, two hypotheses, whether Np enters tertiary uranyl minerals or not, have been tested by comparing model predictions against laboratory observations; the results favor the former. It is concluded that this non-conventional approach of source term evaluation not only eliminates over-conservatism in the conventional solubility approach to some extent, but also gives a realistic representation of the system of interest, which is a prerequisite for truly understanding the long-term

  20. Long-term variability in bright hard X-ray sources: 5+ years of BATSE data

    NASA Technical Reports Server (NTRS)

    Robinson, C. R.; Harmon, B. A.; McCollough, M. L.; Paciesas, W. S.; Sahi, M.; Scott, D. M.; Wilson, C. A.; Zhang, S. N.; Deal, K. J.

    1997-01-01

    The operation of the Compton Gamma Ray Observatory (CGRO)/burst and transient source experiment (BATSE) continues to provide data for inclusion into a database for the analysis of long-term variability in bright, hard X-ray sources. The all-sky capability of BATSE provides up to 30 flux measurements/day for each source. The long baseline and the various rising and setting occultation flux measurements allow searches for periodic and quasi-periodic signals with periods of between several hours and hundreds of days to be conducted. The preliminary results from an analysis of the hard X-ray variability in 24 of the brightest BATSE sources are presented. Power density spectra are computed for each source and profiles are presented of the hard X-ray orbital modulations in some X-ray binaries, together with amplitude modulations and variations in outburst durations and intensities in recurrent X-ray transients.

  1. The distribution and source of boulders on asteroid 4179 Toutatis

    NASA Astrophysics Data System (ADS)

    Jiang, Yun; Ji, Jianghui; Huang, Jiangchuan; Marchi, Simone; Li, Yuan; Ip, Wing-Huen

    2016-01-01

    Boulders are ubiquitous on the surfaces of asteroids and their spatial and size distributions provide information for the geological evolution and collisional history of parent bodies. We identify more than 200 boulders on near-Earth asteroid 4179 Toutatis based on images obtained by the Chang'e-2 flyby. The cumulative boulder size frequency distribution (SFD) gives a power-index of -4.4 +/- 0.1, which is clearly steeper than those of boulders on Itokawa and Eros, indicating a much higher degree of fragmentation. Correlation analyses with craters suggest that most boulders cannot solely be produced as products of cratering, but are probably surviving fragments from the parent body of Toutatis, accreted after its breakup. Similar to Itokawa, Toutatis probably has a rubble-pile structure, but with a different preservation state of boulders.

  2. Optimal source codes for geometrically distributed integer alphabets

    NASA Technical Reports Server (NTRS)

    Gallager, R. G.; Van Voorhis, D. C.

    1975-01-01

    An approach is shown for using the Huffman algorithm indirectly to prove the optimality of a code for an infinite alphabet if an estimate concerning the nature of the code can be made. Attention is given to nonnegative integers with a geometric probability assignment. The particular distribution considered arises in run-length coding and in encoding protocol information in data networks. Questions of redundancy of the optimal code are also investigated.
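
    For a geometric distribution over the nonnegative integers the optimal prefix code is a Golomb code, with the parameter m fixed by the rule Gallager and Van Voorhis derive (the smallest integer m with p^m + p^(m+1) <= 1). The sketch below encodes a few integers with that code; it is a standard construction, shown here only as an illustration of the result.

```python
import math

def golomb_encode(n, m):
    """Golomb code for a nonnegative integer n with parameter m:
    unary-coded quotient followed by a truncated-binary remainder."""
    q, r = divmod(n, m)
    unary = "1" * q + "0"
    if m == 1:
        return unary
    b = math.ceil(math.log2(m))
    cutoff = (1 << b) - m                    # first `cutoff` remainders use b-1 bits
    if r < cutoff:
        return unary + format(r, f"0{b - 1}b")
    return unary + format(r + cutoff, f"0{b}b")

# Optimal parameter for P(n) = (1 - p) * p**n, from Gallager & Van Voorhis:
# the smallest m with p**m + p**(m + 1) <= 1, i.e. m = ceil(log(1 + p) / log(1 / p)).
p = 0.8
m = math.ceil(math.log(1 + p) / math.log(1 / p))
print(m, [golomb_encode(n, m) for n in range(8)])   # m = 3 for p = 0.8
```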

  3. Experimental Investigation and 3D Finite Element Prediction of Temperature Distribution during Travelling Heat Sourced from Oxyacetylene Flame

    NASA Astrophysics Data System (ADS)

    Umar Alkali, Adam; Lenggo Ginta, Turnad; Majdi Abdul-Rani, Ahmad

    2015-04-01

    This paper presents a 3D transient finite element modelling of the workpiece temperature field produced during the travelling heat source from an oxyacetylene flame. The proposed model was given in terms of a preheat-only test applicable during thermally enhanced machining using the oxyacetylene flame as a heat source. The FEA model as well as the experimental test investigated the surface temperature distribution on 316L stainless steel at scanning speeds of 100 mm/min, 125 mm/min, 160 mm/min, 200 mm/min and 250 mm/min. The parametric properties of the heat source maintained constant are: lead distance Ld = 10 mm, focus height Fh = 7.5 mm, oxygen gas pressure Poxy = 15 psi and acetylene gas pressure Pacty = 25 psi. An experimental validation of the temperature field induced on type 316L stainless steel reveals that the temperature increases as the travelling speed decreases.

  4. Characterizing short-term stability for Boolean networks over any distribution of transfer functions

    NASA Astrophysics Data System (ADS)

    Seshadhri, C.; Smith, Andrew M.; Vorobeychik, Yevgeniy; Mayo, Jackson R.; Armstrong, Robert C.

    2016-07-01

    We present a characterization of short-term stability of Kauffman's N K (random) Boolean networks under arbitrary distributions of transfer functions. Given such a Boolean network where each transfer function is drawn from the same distribution, we present a formula that determines whether short-term chaos (damage spreading) will happen. Our main technical tool which enables the formal proof of this formula is the Fourier analysis of Boolean functions, which describes such functions as multilinear polynomials over the inputs. Numerical simulations on mixtures of threshold functions and nested canalyzing functions demonstrate the formula's correctness.
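
    The damage-spreading notion of short-term stability can also be probed empirically: build a random N-K network with transfer functions drawn from some distribution, flip one node, and track the Hamming distance between the perturbed and unperturbed trajectories. The sketch below does this for unbiased random truth tables (where K = 3 is expected to be chaotic, since the mean sensitivity K/2 exceeds 1); it is a simulation illustration, not the paper's Fourier-analytic criterion.

```python
import numpy as np

rng = np.random.default_rng(2)

def random_nk_network(n, k):
    """Each node gets k random inputs and a random truth table of size 2**k."""
    inputs = np.array([rng.choice(n, size=k, replace=False) for _ in range(n)])
    tables = rng.integers(0, 2, size=(n, 2 ** k))
    return inputs, tables

def step(state, inputs, tables):
    idx = np.zeros(len(state), dtype=int)
    for j in range(inputs.shape[1]):        # pack the k input bits into an index
        idx = (idx << 1) | state[inputs[:, j]]
    return tables[np.arange(len(state)), idx]

def damage_after(n=500, k=3, steps=5):
    inputs, tables = random_nk_network(n, k)
    a = rng.integers(0, 2, size=n)
    b = a.copy()
    b[0] ^= 1                               # flip a single node
    for _ in range(steps):
        a, b = step(a, inputs, tables), step(b, inputs, tables)
    return int((a != b).sum())

print([damage_after() for _ in range(5)])   # >1 on average signals damage spreading
```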

  5. Characterizing short-term stability for Boolean networks over any distribution of transfer functions.

    PubMed

    Seshadhri, C; Smith, Andrew M; Vorobeychik, Yevgeniy; Mayo, Jackson R; Armstrong, Robert C

    2016-07-01

    We present a characterization of short-term stability of Kauffman's NK (random) Boolean networks under arbitrary distributions of transfer functions. Given such a Boolean network where each transfer function is drawn from the same distribution, we present a formula that determines whether short-term chaos (damage spreading) will happen. Our main technical tool which enables the formal proof of this formula is the Fourier analysis of Boolean functions, which describes such functions as multilinear polynomials over the inputs. Numerical simulations on mixtures of threshold functions and nested canalyzing functions demonstrate the formula's correctness. PMID:27575142

  6. Fission product source term research at Oak Ridge National Laboratory. [PWR; BWR

    SciTech Connect

    Malinauskas, A.P.

    1985-01-01

    The purpose of this work is to describe some of the research being performed at ORNL in support of the effort to describe, as realistically as possible, fission product source terms for nuclear reactor accidents. In order to make this presentation manageable, only those studies directly concerned with fission product behavior, as opposed to thermal hydraulics, accident sequence progression, etc., will be discussed.

  7. Enhancement of the source term algorithm for emergency response at the Savannah River Site

    SciTech Connect

    Simpkins, A.A.; O`Kula, K.R.; Taylor, R.P.; Kearnaghan, G.P.

    1992-12-31

    The purpose of this work is to use the results of the Savannah River Site K-Reactor Probabilistic Safety Assessment to determine the accident sequences and source terms for beyond design basis accidents. Additionally, the methodology necessary to allow the Reactor Accident Program to incorporate this information is to be discussed.

  8. Enhancement of the source term algorithm for emergency response at the Savannah River Site

    SciTech Connect

    Simpkins, A.A.; O'Kula, K.R.; Taylor, R.P.; Kearnaghan, G.P.

    1992-01-01

    The purpose of this work is to use the results of the Savannah River Site K-Reactor Probabilistic Safety Assessment to determine the accident sequences and source terms for beyond design basis accidents. Additionally, the methodology necessary to allow the Reactor Accident Program to incorporate this information is to be discussed.

  9. Short-Term Memory Stages in Sign vs. Speech: The Source of the Serial Span Discrepancy

    ERIC Educational Resources Information Center

    Hall, Matthew L.; Bavelier, Daphne

    2011-01-01

    Speakers generally outperform signers when asked to recall a list of unrelated verbal items. This phenomenon is well established, but its source has remained unclear. In this study, we evaluate the relative contribution of the three main processing stages of short-term memory--perception, encoding, and recall--in this effect. The present study…

  10. Sensitivities to source-term parameters of emergency planning zone boundaries for waste management facilities

    SciTech Connect

    Mueller, C.J.

    1995-07-01

    This paper reviews the key parameters comprising airborne radiological and chemical release source terms, discusses the ranges over which values of these parameters occur for plausible but severe waste management facility accidents, and relates the concomitant sensitivities of emergency planning zone boundaries predicated on calculated distances to early severe health effects.

  11. ACT-ARA: Code System for the Calculation of Changes in Radiological Source Terms with Time

    Energy Science and Technology Software Center (ESTSC)

    1988-02-01

    The program calculates the source term activity as a function of time for parent isotopes as well as daughters. Also, at each time, the "probable release" is produced. Finally, the program determines the time integrated probable release for each isotope over the time period of interest.
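
    The parent-daughter bookkeeping such a code performs follows the Bateman equations. The sketch below evolves a single two-member chain and time-integrates a "probable release" obtained by applying a constant release fraction to the total activity; the half-lives, inventory and release fraction are hypothetical, and this is only a schematic of the calculation, not the ACT-ARA implementation.

```python
import numpy as np

def two_member_activities(a1_0, lam1, lam2, t):
    """Bateman solution for a parent (1) -> daughter (2) chain, in activity units.
    a1_0 is the parent activity at t = 0; the daughter starts at zero."""
    a1 = a1_0 * np.exp(-lam1 * t)
    a2 = a1_0 * lam2 / (lam2 - lam1) * (np.exp(-lam1 * t) - np.exp(-lam2 * t))
    return a1, a2

ln2 = np.log(2.0)
lam_parent = ln2 / 30.0              # 1/yr, hypothetical 30-year parent
lam_daughter = ln2 / 0.5             # 1/yr, hypothetical 6-month daughter
t = np.linspace(0.0, 100.0, 2001)    # years
a_p, a_d = two_member_activities(1.0e3, lam_parent, lam_daughter, t)   # Ci

release_fraction_per_yr = 1.0e-4     # hypothetical "probable release" rate
y = (a_p + a_d) * release_fraction_per_yr
integrated = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))        # trapezoid rule
print(f"time-integrated probable release over 100 y: {integrated:.2f} Ci")
```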

  12. High resolution stationary digital breast tomosynthesis using distributed carbon nanotube x-ray source array

    SciTech Connect

    Qian Xin; Tucker, Andrew; Gidcumb, Emily; Shan Jing; Yang Guang; Calderon-Colon, Xiomara; Sultana, Shabana; Lu Jianping; Zhou, Otto; Spronk, Derrek; Sprenger, Frank; Zhang Yiheng; Kennedy, Don; Farbizio, Tom; Jing Zhenxue

    2012-04-15

    , the projection resolution along the scanning direction increased from 4.0 cycles/mm [at 10% modulation-transfer-function (MTF)] in DBT to 5.1 cycles/mm in s-DBT at a magnification factor of 1.08. The improvement is more pronounced for faster scanning speeds, wider angular coverage, and smaller detector pixel sizes. The scanning speed depends on the detector, the number of views, and the imaging dose. With 240 ms detector readout time, the s-DBT system scanning time is 6.3 s for a 15-view, 100 mAs scan regardless of the angular coverage. The scanning speed can be reduced to less than 4 s when detectors become faster. Initial phantom studies showed good quality reconstructed images. Conclusions: A prototype s-DBT scanner has been developed and evaluated by retrofitting the Selenia rotating gantry DBT scanner with a spatially distributed CNT x-ray source array. Preliminary results show that it improves system spatial resolution substantially by eliminating image blur due to x-ray focal spot motion. The scanner speed of the s-DBT system is independent of angular coverage and can be increased with faster detectors without image degradation. The accelerated lifetime measurement demonstrated the long-term stability of the CNT x-ray source array, with a typical clinical operation lifetime of over 3 years.

  13. High resolution stationary digital breast tomosynthesis using distributed carbon nanotube x-ray source array

    PubMed Central

    Qian, Xin; Tucker, Andrew; Gidcumb, Emily; Shan, Jing; Yang, Guang; Calderon-Colon, Xiomara; Sultana, Shabana; Lu, Jianping; Zhou, Otto; Spronk, Derrek; Sprenger, Frank; Zhang, Yiheng; Kennedy, Don; Farbizio, Tom; Jing, Zhenxue

    2012-01-01

    binning, the projection resolution along the scanning direction increased from 4.0 cycles/mm [at 10% modulation-transfer-function (MTF)] in DBT to 5.1 cycles/mm in s-DBT at a magnification factor of 1.08. The improvement is more pronounced for faster scanning speeds, wider angular coverage, and smaller detector pixel sizes. The scanning speed depends on the detector, the number of views, and the imaging dose. With 240 ms detector readout time, the s-DBT system scanning time is 6.3 s for a 15-view, 100 mAs scan regardless of the angular coverage. The scanning speed can be reduced to less than 4 s when detectors become faster. Initial phantom studies showed good quality reconstructed images. Conclusions: A prototype s-DBT scanner has been developed and evaluated by retrofitting the Selenia rotating gantry DBT scanner with a spatially distributed CNT x-ray source array. Preliminary results show that it improves system spatial resolution substantially by eliminating image blur due to x-ray focal spot motion. The scanner speed of the s-DBT system is independent of angular coverage and can be increased with faster detectors without image degradation. The accelerated lifetime measurement demonstrated the long-term stability of the CNT x-ray source array, with a typical clinical operation lifetime of over 3 years. PMID:22482630

  14. Occurrence of arsenic contamination in Canada: sources, behavior and distribution.

    PubMed

    Wang, Suiling; Mulligan, Catherine N

    2006-08-01

    Recently there have been increasing concerns about arsenic-related problems. Occurrence of arsenic contamination has been reported worldwide. In Canada, the main natural arsenic sources are weathering and erosion of arsenic-containing rocks and soil, while tailings from historic and recent gold mine operations and wood preservative facilities are the principal anthropogenic sources. Across Canada, the 24-h average concentration of arsenic in the atmosphere is generally less than 0.3 microg/m3. Arsenic concentrations in natural uncontaminated soil and sediments range from 4 to 150 mg/kg. In uncontaminated surface and ground waters, the arsenic concentration ranges from 0.001 to 0.005 mg/L. As a result of anthropogenic inputs, elevated arsenic levels, ten to a thousand times above the Interim Maximum Acceptable Concentration (IMAC), have been reported in air, soil and sediment, surface water and groundwater, and biota in several regions. Most of the arsenic is in toxic inorganic forms. It is critical to recognize that such contamination imposes serious harmful effects on various aquatic and terrestrial organisms and ultimately on human health. Serious incidences of acute and chronic arsenic poisoning have been revealed. Through examination of the available literature, screening and selecting existing data, this paper provides an analysis of the currently available information on recognized problem areas, and an overview of current knowledge of the principal hydrogeochemical processes of arsenic transportation and transformation. However, a more detailed understanding of local sources of arsenic and mechanisms of arsenic release is required. More extensive studies will be required for building practical guidance on avoiding and reducing arsenic contamination. Bioremediation and hyperaccumulation are emerging innovative technologies for the remediation of arsenic contaminated sites. Natural attenuation may be utilized as a potential in situ remedial option. Further

  15. Laboratory experiments designed to provide limits on the radionuclide source term for the NNWSI Project

    SciTech Connect

    Oversby, V.M.; McCright, R.D.

    1984-11-01

    The Nevada Nuclear Waste Storage Investigations Project is investigating the suitability of the tuffaceous rocks at Yucca Mountain Nevada for potential use as a high-level nuclear waste repository. The horizon under investigation lies above the water table, and therefore offers a setting that differs substantially from other potential repository sites. The unsaturated zone environment allows a simple, but effective, waste package design. The source term for radionuclide release from the waste package will be based on laboratory experiments that determine the corrosion rates and mechanisms for the metal container and the dissolution rate of the waste form under expected long term conditions. This paper describes the present status of laboratory results and outlines the approach to be used in combining the data to develop a realistic source term for release of radionuclides from the waste package. 16 refs., 3 figs., 1 tab.

  16. Severe accident source term characteristics for selected Peach Bottom sequences predicted by the MELCOR Code

    SciTech Connect

    Carbajo, J.J.

    1993-09-01

    The purpose of this report is to compare in-containment source terms developed for NUREG-1159, which used the Source Term Code Package (STCP), with those generated by MELCOR to identify significant differences. For this comparison, two short-term depressurized station blackout sequences (with a dry cavity and with a flooded cavity) and a Loss-of-Coolant Accident (LOCA) concurrent with complete loss of the Emergency Core Cooling System (ECCS) were analyzed for the Peach Bottom Atomic Power Station (a BWR-4 with a Mark I containment). The results indicate that for the sequences analyzed, the two codes predict similar total in-containment release fractions for each of the element groups. However, the MELCOR/CORBH Package predicts significantly longer times for vessel failure and reduced energy of the released material for the station blackout sequences (when compared to the STCP results). MELCOR also calculated smaller releases into the environment than STCP for the station blackout sequences.

  17. Solid angle subtended by a cylindrical detector at a point source in terms of elliptic integrals

    NASA Astrophysics Data System (ADS)

    Prata, M. J.

    2003-07-01

    The solid angle subtended by a right circular cylinder at a point source located at an arbitrary position generally consists of a sum of two terms: one defined by the cylindrical surface (Ω_cyl) and the other by either of the end circles (Ω_circ). We derive an expression for Ω_cyl in terms of elliptic integrals of the first and third kinds and give similar expressions for Ω_circ using integrals of the first and second kinds. The latter can be used as an alternative to an expression, also in terms of elliptic integrals, due to Philip A. Macklin and included as a footnote in Masket (Rev. Sci. Instrum. 28 (3) (1957) 191). The solid angle subtended by the whole cylinder when the source is located at an arbitrary position can then be calculated using elliptic integrals.
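
    Closed-form results of this kind are convenient to check by direct numerical integration of the defining surface integral Ω = ∫ (cos θ / r²) dA. The sketch below does this for the end-circle contribution (a disk seen from a point off its axis) and compares the on-axis case with the elementary formula Ω = 2π(1 - h/√(h² + R²)); it is a verification aid only, not the elliptic-integral expressions of the paper.

```python
import numpy as np

def disk_solid_angle_numeric(R, h, rho0=0.0, n=800):
    """Solid angle subtended by a disk of radius R (in the plane z = h) at a
    point source located at (rho0, 0, 0), by midpoint-rule integration of
    d(Omega) = (cos(theta) / r^2) dA over the disk."""
    r = np.linspace(0, R, n, endpoint=False) + R / (2 * n)      # radial midpoints
    phi = np.linspace(0, 2 * np.pi, n, endpoint=False) + np.pi / n
    rr, pp = np.meshgrid(r, phi, indexing="ij")
    dx = rr * np.cos(pp) - rho0
    dy = rr * np.sin(pp)
    dist2 = dx**2 + dy**2 + h**2
    integrand = h / dist2**1.5                                  # cos(theta)/r^2
    dA = rr * (R / n) * (2 * np.pi / n)
    return float(np.sum(integrand * dA))

R, h = 2.0, 3.0
on_axis_exact = 2 * np.pi * (1 - h / np.hypot(R, h))
print(disk_solid_angle_numeric(R, h, rho0=0.0), on_axis_exact)   # should agree
print(disk_solid_angle_numeric(R, h, rho0=1.5))                  # off-axis case
```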

  18. Low-level radioactive waste source terms for the 1992 integrated data base

    SciTech Connect

    Loghry, S L; Kibbey, A H; Godbee, H W; Icenhour, A S; DePaoli, S M

    1995-01-01

    This technical manual presents updated generic source terms (i.e., unitized amounts and radionuclide compositions) which have been developed for use in the Integrated Data Base (IDB) Program of the U.S. Department of Energy (DOE). These source terms were used in the IDB annual report, Integrated Data Base for 1992: Spent Fuel and Radioactive Waste Inventories, Projections, and Characteristics, DOE/RW-0006, Rev. 8, October 1992. They are useful as a basis for projecting future amounts (volume and radioactivity) of low-level radioactive waste (LLW) shipped for disposal at commercial burial grounds or sent for storage at DOE solid-waste sites. Commercial fuel cycle LLW categories include boiling-water reactor, pressurized-water reactor, fuel fabrication, and uranium hexafluoride (UF6) conversion. Commercial nonfuel cycle LLW includes institutional/industrial (I/I) waste. The LLW from DOE operations is categorized as uranium/thorium fission product, induced activity, tritium, alpha, and "other". Fuel cycle commercial LLW source terms are normalized on the basis of net electrical output [MW(e)-year], except for UF6 conversion, which is normalized on the basis of heavy metal requirement [metric tons of initial heavy metal]. The nonfuel cycle commercial LLW source term is normalized on the basis of volume (cubic meters) and radioactivity (curies) for each subclass within the I/I category. The DOE LLW is normalized in a manner similar to that for commercial I/I waste. The revised source terms are based on the best available historical data through 1992.

  19. Fukushima Daiichi reactor source term attribution using cesium isotope ratios from contaminated environmental samples

    DOE PAGES

    Snow, Mathew S.; Snyder, Darin C.; Delmore, James E.

    2016-01-18

    Source term attribution of environmental contamination following the Fukushima Daiichi Nuclear Power Plant (FDNPP) disaster is complicated by a large number of possible similar emission source terms (e.g. FDNPP reactor cores 1–3 and spent fuel ponds 1–4). Cesium isotopic analyses can be utilized to discriminate between environmental contamination from different FDNPP source terms and, if samples are sufficiently temporally resolved, potentially provide insights into the extent of reactor core damage at a given time. Rice, soil, mushroom, and soybean samples taken 100–250 km from the FDNPP site were dissolved using microwave digestion. Radiocesium was extracted and purified using two sequential ammonium molybdophosphate-polyacrylonitrile columns, following which 135Cs/137Cs isotope ratios were measured using thermal ionization mass spectrometry (TIMS). Results were compared with data reported previously from locations to the northwest of FDNPP and 30 km to the south of FDNPP. 135Cs/137Cs isotope ratios from samples 100–250 km to the southwest of the FDNPP site show a consistent value of 0.376 ± 0.008. 135Cs/137Cs versus 134Cs/137Cs correlation plots suggest that radiocesium to the southwest is derived from a mixture of FDNPP reactor cores 1, 2, and 3. Conclusions from the cesium isotopic data are in agreement with those derived independently based upon the event chronology combined with meteorological conditions at the time of the disaster. In conclusion, cesium isotopic analyses provide a powerful tool for source term discrimination of environmental radiocesium contamination at the FDNPP site. For higher precision source term attribution and forensic determination of the FDNPP core conditions based upon cesium, analyses of a larger number of samples from locations to the north and south of the FDNPP site (particularly time-resolved air filter samples) are needed. Published in 2016. This article is a U.S. Government work and is in the public domain
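
    When candidate source terms have distinct endmember ratios, the fraction of 137Cs contributed by each can be recovered from a simple two-endmember mixing relation, since the measured 135Cs/137Cs of a mixture is the 137Cs-weighted mean of the endmember ratios. The sketch below illustrates that algebra with invented endmember values; the real core-specific ratios are not reproduced here.

```python
def mixing_fraction(r_sample, r_a, r_b):
    """Fraction of the total 137Cs contributed by endmember A, given that the
    sample 135Cs/137Cs ratio is the 137Cs-weighted mean of the two endmembers:
        r_sample = f * r_a + (1 - f) * r_b
    All endmember ratio values used below are invented placeholders."""
    return (r_sample - r_b) / (r_a - r_b)

r_core_a, r_core_b = 0.33, 0.40      # hypothetical endmember 135Cs/137Cs ratios
r_obs = 0.376                        # observed ratio quoted in the abstract
f_a = mixing_fraction(r_obs, r_core_a, r_core_b)
print(f"fraction of 137Cs from endmember A: {f_a:.2f}")
```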

  20. Utilities for master source code distribution: MAX and Friends

    NASA Technical Reports Server (NTRS)

    Felippa, Carlos A.

    1988-01-01

    MAX is a program for the manipulation of FORTRAN master source code (MSC). This is a technique by which one maintains one and only one master copy of a FORTRAN program under a program developing system, which for MAX is assumed to be VAX/VMS. The master copy is not intended to be directly compiled. Instead it must be pre-processed by MAX to produce compilable instances. These instances may correspond to different code versions (for example, double precision versus single precision), different machines (for example, IBM, CDC, Cray) or different operating systems (i.e., VAX/VMS versus VAX/UNIX). The advantage of using a master source is more pronounced in complex application programs that are developed and maintained over many years and are to be transported and executed on several computer environments. The version lag problem that plagues many such programs is avoided by this approach. MAX is complemented by several auxiliary programs that perform nonessential functions. The ensemble is collectively known as MAX and Friends. All of these programs, including MAX, are executed as foreign VAX/VMS commands and can easily be hidden in customized VMS command procedures.

  1. A Systematic Search for Short-term Variability of EGRET Sources

    NASA Technical Reports Server (NTRS)

    Wallace, P. M.; Griffis, N. J.; Bertsch, D. L.; Hartman, R. C.; Thompson, D. J.; Kniffen, D. A.; Bloom, S. D.

    2000-01-01

    The 3rd EGRET Catalog of High-energy Gamma-ray Sources contains 170 unidentified sources, and there is great interest in the nature of these sources. One means of determining source class is the study of flux variability on time scales of days; pulsars are believed to be stable on these time scales while blazars are known to be highly variable. In addition, previous work has demonstrated that 3EG J0241-6103 and 3EG J1837-0606 are candidates for a new gamma-ray source class. These sources near the Galactic plane display transient behavior but cannot be associated with any known blazars. Although many instances of flaring AGN have been reported, the EGRET database has not been systematically searched for occurrences of short-timescale (approximately 1 day) variability. These considerations have led us to conduct a systematic search for short-term variability in EGRET data, covering all viewing periods through proposal cycle 4. Six 3EG catalog sources are reported here to display variability on short time scales; four of them are unidentified. In addition, three non-catalog variable sources are discussed.

  2. Simulation of dose distribution for iridium-192 brachytherapy source type-H01 using MCNPX

    SciTech Connect

    Purwaningsih, Anik

    2014-09-30

    Dosimetric data for a brachytherapy source should be known before it is used for clinical treatment. The Iridium-192 source type H01 manufactured by PRR-BATAN for brachytherapy does not yet have established dosimetric data. The radial dose function and the anisotropic dose distribution are among the primary dosimetric quantities for a brachytherapy source. The dose distribution for the Iridium-192 source type H01 was obtained from the dose calculation formalism recommended in the AAPM TG-43U1 report, using the MCNPX 2.6.0 Monte Carlo simulation code. To assess the effect of the cavity introduced in the Iridium-192 type H01 by the manufacturing process, calculations were also performed for the Iridium-192 type H01 without the cavity. The calculated radial dose function and anisotropic dose distribution for the Iridium-192 source type H01 were compared with those of another Iridium-192 source model.

  3. Simulation of dose distribution for iridium-192 brachytherapy source type-H01 using MCNPX

    NASA Astrophysics Data System (ADS)

    Purwaningsih, Anik

    2014-09-01

    Dosimetric data for a brachytherapy source should be known before it is used for clinical treatment. The Iridium-192 source type H01 manufactured by PRR-BATAN for brachytherapy does not yet have established dosimetric data. The radial dose function and the anisotropic dose distribution are among the primary dosimetric quantities for a brachytherapy source. The dose distribution for the Iridium-192 source type H01 was obtained from the dose calculation formalism recommended in the AAPM TG-43U1 report, using the MCNPX 2.6.0 Monte Carlo simulation code. To assess the effect of the cavity introduced in the Iridium-192 type H01 by the manufacturing process, calculations were also performed for the Iridium-192 type H01 without the cavity. The calculated radial dose function and anisotropic dose distribution for the Iridium-192 source type H01 were compared with those of another Iridium-192 source model.
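
    In the AAPM TG-43U1 formalism referenced above, the dose rate around a source is D(r, theta) = S_K * Lambda * [G(r, theta) / G(r0, theta0)] * g(r) * F(r, theta). The sketch below evaluates the simpler point-source (1D) approximation, D(r) = S_K * Lambda * (r0 / r)^2 * g_P(r) * phi_an(r); the air-kerma strength, dose-rate constant and tabulated functions are placeholder values for illustration, not data for the H01 source.

```python
import numpy as np

def dose_rate_1d(r_cm, S_K, Lambda, g_table, phi_an_table, r0=1.0):
    """TG-43 point-source approximation:
        D(r) = S_K * Lambda * (r0 / r)^2 * g_P(r) * phi_an(r)
    g_table / phi_an_table are (radius_cm, value) arrays; values are interpolated.
    All numerical values below are placeholders, not data for the H01 source."""
    g = np.interp(r_cm, g_table[0], g_table[1])
    phi = np.interp(r_cm, phi_an_table[0], phi_an_table[1])
    return S_K * Lambda * (r0 / r_cm) ** 2 * g * phi

# Hypothetical tabulated functions (illustration only):
g_tab = (np.array([0.5, 1.0, 2.0, 5.0, 10.0]),
         np.array([0.99, 1.00, 1.00, 0.97, 0.91]))
phi_tab = (np.array([0.5, 1.0, 2.0, 5.0, 10.0]),
           np.array([0.97, 0.97, 0.98, 0.98, 0.98]))
S_K = 40800.0   # air-kerma strength, U (1 U = 1 uGy m^2/h), ~10 Ci Ir-192 (assumption)
Lambda = 1.11   # dose-rate constant, cGy h^-1 U^-1 (typical Ir-192 value, assumption)
for r in (1.0, 2.0, 5.0):
    print(f"r = {r} cm: {dose_rate_1d(r, S_K, Lambda, g_tab, phi_tab):.1f} cGy/h")
```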

  4. Algorithms and analytical solutions for rapidly approximating long-term dispersion from line and area sources

    NASA Astrophysics Data System (ADS)

    Barrett, Steven R. H.; Britter, Rex E.

    Predicting long-term mean pollutant concentrations in the vicinity of airports, roads and other industrial sources is frequently of concern in regulatory and public health contexts. Many emissions are represented geometrically as ground-level line or area sources. Well developed modelling tools such as AERMOD and ADMS are able to model dispersion from finite (i.e. non-point) sources with considerable accuracy, drawing upon an up-to-date understanding of boundary layer behaviour. Due to mathematical difficulties associated with line and area sources, computationally expensive numerical integration schemes have been developed. For example, some models decompose area sources into a large number of line sources orthogonal to the mean wind direction, for which an analytical (Gaussian) solution exists. Models also employ a time-series approach, which involves computing mean pollutant concentrations for every hour over one or more years of meteorological data. This can give rise to computer runtimes of several days for assessment of a site. While this may be acceptable for assessment of a single industrial complex, airport, etc., this level of computational cost precludes national or international policy assessments at the level of detail available with dispersion modelling. In this paper, we extend previous work [S.R.H. Barrett, R.E. Britter, 2008. Development of algorithms and approximations for rapid operational air quality modelling. Atmospheric Environment 42 (2008) 8105-8111] to line and area sources. We introduce approximations which allow for the development of new analytical solutions for long-term mean dispersion from line and area sources, based on hypergeometric functions. We describe how these solutions can be parameterized from a single point source run from an existing advanced dispersion model, thereby accounting for all processes modelled in the more costly algorithms. The parameterization method combined with the analytical solutions for long-term mean
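
    The decomposition described above, an area source treated as many crosswind line sources, each with a closed-form ground-level Gaussian solution, can be sketched in a few lines. The dispersion parameterization sigma_z(x) = a * x^b and the emission and wind values below are illustrative assumptions, and the sum over strips stands in for the hypergeometric-function solutions the paper actually derives.

```python
import numpy as np

def sigma_z(x):
    """Hypothetical vertical dispersion parameter, sigma_z(x) = a * x**b (m)."""
    return 0.14 * x ** 0.90          # placeholder coefficients

def line_source_conc(q_l, x, u):
    """Ground-level concentration from an infinite crosswind line source at
    upwind distance x (ground-level release, full ground reflection)."""
    return 2.0 * q_l / (np.sqrt(2.0 * np.pi) * u * sigma_z(x))

def area_source_conc(q_a, x_edge, depth, u, n=200):
    """Approximate an area source of along-wind depth `depth` by n crosswind
    line strips, each of strength q_a * (depth / n)."""
    s = x_edge + (np.arange(n) + 0.5) * depth / n     # distances to strip centres
    return np.sum(line_source_conc(q_a * depth / n, s, u))

q_a, u = 1.0e-5, 4.0   # emission flux (g m^-2 s^-1) and wind speed (m/s), illustrative
for x in (50.0, 100.0, 500.0):
    c = area_source_conc(q_a, x, depth=200.0, u=u)
    print(f"x = {x:4.0f} m downwind of edge: {c * 1e6:.1f} ug/m^3")
```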

  5. Making Learning Memorable: Distributed Practice and Long-Term Retention by Special Needs Students.

    ERIC Educational Resources Information Center

    Crawford, S. A. S.; Baine, D.

    1992-01-01

    This paper considers reasons why distributed practice is relatively little used as a method for increasing long-term retention with special needs students and proposes an instructional strategy in which intervals between practice are scheduled according to a student's mastery of the material. (DB)

  6. A General Model for Preferential and Triadic Choice in Terms of Central F Distribution Functions.

    ERIC Educational Resources Information Center

    Ennis, Daniel M; Johnson, Norman L.

    1994-01-01

    A model for preferential and triadic choice is derived in terms of weighted sums of central F distribution functions. It is a probabilistic generalization of Coombs' (1964) unfolding model from which special cases can be derived easily. This model for binary choice can be easily related to preference ratio judgments. (SLD)

  7. Search for correlated radio and optical events in long-term studies of extragalactic sources

    NASA Technical Reports Server (NTRS)

    Pomphrey, R. B.; Smith, A. G.; Leacock, R. J.; Olsson, C. N.; Scott, R. L.; Pollock, J. T.; Edwards, P.; Dent, W. A.

    1976-01-01

    For the first time, long-term records of radio and optical fluxes of a large sample of variable extragalactic sources have been assembled and compared, with linear cross-correlation analysis being used to reinforce the visual comparisons. Only in the case of the BL Lac object OJ 287 is the correlation between radio and optical records strong. In the majority of cases there is no evidence of significant correlation, although nine sources show limited or weak evidence of correlation. The results do not support naive extrapolation of the expanding source model. The general absence of strong correlation between the radio and optical regions has important implications for the energetics of events occurring in such sources.

  8. Fire Source Localization Based on Distributed Temperature Sensing by a Dual-Line Optical Fiber System

    PubMed Central

    Sun, Miao; Tang, Yuquan; Yang, Shuang; Li, Jun; Sigrist, Markus W.; Dong, Fengzhong

    2016-01-01

    We propose a method for localizing a fire source using an optical fiber distributed temperature sensor system. A section of two parallel optical fibers employed as the sensing element is installed near the ceiling of a closed room in which the fire source is located. By measuring the temperature of hot air flows, the problem of three-dimensional fire source localization is transformed to two dimensions. The source localization method is verified with experiments using burning alcohol as the fire source, and it is demonstrated that the method represents a robust and reliable technique for localizing a fire source even for long sensing ranges. PMID:27275822

  9. Fire Source Localization Based on Distributed Temperature Sensing by a Dual-Line Optical Fiber System.

    PubMed

    Sun, Miao; Tang, Yuquan; Yang, Shuang; Li, Jun; Sigrist, Markus W; Dong, Fengzhong

    2016-01-01

    We propose a method for localizing a fire source using an optical fiber distributed temperature sensor system. A section of two parallel optical fibers employed as the sensing element is installed near the ceiling of a closed room in which the fire source is located. By measuring the temperature of hot air flows, the problem of three-dimensional fire source localization is transformed to two dimensions. The source localization method is verified with experiments using burning alcohol as the fire source, and it is demonstrated that the method represents a robust and reliable technique for localizing a fire source even for long sensing ranges. PMID:27275822

  10. Long-Term Probability Distribution of Wind Turbine Planetary Bearing Loads (Poster)

    SciTech Connect

    Jiang, Z.; Xing, Y.; Guo, Y.; Dong, W.; Moan, T.; Gao, Z.

    2013-04-01

    Among the various causes of bearing damage and failure, metal fatigue of the rolling contact surface is the dominant failure mechanism. The fatigue life is associated with the load conditions under which wind turbines operate in the field. Therefore, it is important to understand the long-term distribution of the bearing loads under various environmental conditions. The National Renewable Energy Laboratory's 750-kW Gearbox Reliability Collaborative wind turbine is studied in this work. A decoupled analysis using several computer codes is carried out. The global aero-elastic simulations are performed using HAWC2. The time series of the drivetrain loads and motions from the global dynamic analysis are fed to a drivetrain model in SIMPACK. The time-varying internal pressure distribution along the raceway is obtained analytically. A series of probability distribution functions are then used to fit the long-term statistical distribution at different locations along raceways. The long-term distribution of the bearing raceway loads is estimated under different environmental conditions. Finally, the bearing fatigue lives are calculated.
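
    A minimal sketch of the distribution-fitting step described above, assuming synthetic load data and a Weibull/lognormal/gamma candidate set (the poster does not state which families were used); maximum-likelihood fits are compared by log-likelihood.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        # Synthetic stand-in for simulated bearing raceway load maxima (assumed data)
        loads = rng.weibull(2.2, size=5000) * 40.0 + 5.0

        candidates = {"weibull_min": stats.weibull_min, "lognorm": stats.lognorm, "gamma": stats.gamma}
        for name, dist in candidates.items():
            params = dist.fit(loads)                      # maximum-likelihood fit
            loglik = np.sum(dist.logpdf(loads, *params))  # compare goodness of fit
            print(f"{name:12s} log-likelihood = {loglik:.1f}")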

  11. Accident source terms for Light-Water Nuclear Power Plants. Final report

    SciTech Connect

    Soffer, L.; Burson, S.B.; Ferrell, C.M.; Lee, R.Y.; Ridgely, J.N.

    1995-02-01

    In 1962 the US Atomic Energy Commission published TID-14844, "Calculation of Distance Factors for Power and Test Reactors", which specified a release of fission products from the core to the reactor containment for a postulated accident involving "substantial meltdown of the core". This "source term", the basis for the NRC's Regulatory Guides 1.3 and 1.4, has been used to determine compliance with the NRC's reactor site criteria, 10 CFR Part 100, and to evaluate other important plant performance requirements. During the past 30 years substantial additional information on fission product releases has been developed based on significant severe accident research. This document utilizes this research by providing more realistic estimates of the "source term" release into containment, in terms of timing, nuclide types, quantities and chemical form, given a severe core-melt accident. This revised "source term" is to be applied to the design of future light water reactors (LWRs). Current LWR licensees may voluntarily propose applications based upon it.

  12. Identifying Synonymy between SNOMED Clinical Terms of Varying Length Using Distributional Analysis of Electronic Health Records

    PubMed Central

    Henriksson, Aron; Conway, Mike; Duneld, Martin; Chapman, Wendy W.

    2013-01-01

    Medical terminologies and ontologies are important tools for natural language processing of health record narratives. To account for the variability of language use, synonyms need to be stored in a semantic resource as textual instantiations of a concept. Developing such resources manually is, however, prohibitively expensive and likely to result in low coverage. To facilitate and expedite the process of lexical resource development, distributional analysis of large corpora provides a powerful data-driven means of (semi-)automatically identifying semantic relations, including synonymy, between terms. In this paper, we demonstrate how distributional analysis of a large corpus of electronic health records – the MIMIC-II database – can be employed to extract synonyms of SNOMED CT preferred terms. A distinctive feature of our method is its ability to identify synonymous relations between terms of varying length. PMID:24551362

  13. Reconstruction of far-field tsunami amplitude distributions from earthquake sources

    USGS Publications Warehouse

    Geist, Eric L.; Parsons, Thomas E.

    2016-01-01

    The probability distribution of far-field tsunami amplitudes is explained in relation to the distribution of seismic moment at subduction zones. Tsunami amplitude distributions at tide gauge stations follow a similar functional form, well described by a tapered Pareto distribution that is parameterized by a power-law exponent and a corner amplitude. Distribution parameters are first established for eight tide gauge stations in the Pacific, using maximum likelihood estimation. A procedure is then developed to reconstruct the tsunami amplitude distribution that consists of four steps: (1) define the distribution of seismic moment at subduction zones; (2) establish a source-station scaling relation from regression analysis; (3) transform the seismic moment distribution to a tsunami amplitude distribution for each subduction zone; and (4) mix the transformed distribution for all subduction zones to an aggregate tsunami amplitude distribution specific to the tide gauge station. The tsunami amplitude distribution is adequately reconstructed for four tide gauge stations using globally constant seismic moment distribution parameters established in previous studies. In comparisons to empirical tsunami amplitude distributions from maximum likelihood estimation, the reconstructed distributions consistently exhibit higher corner amplitude values, implying that in most cases, the empirical catalogs are too short to include the largest amplitudes. Because the reconstructed distribution is based on a catalog of earthquakes that is much larger than the tsunami catalog, it is less susceptible to the effects of record-breaking events and more indicative of the actual distribution of tsunami amplitudes.
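
    A minimal sketch of the maximum likelihood estimation step for the tapered Pareto distribution named above, using its standard survival function S(x) = (t/x)^beta * exp((t - x)/x_c) for x >= t; the threshold and the synthetic sample are assumptions, not the tide gauge data.

        import numpy as np
        from scipy.optimize import minimize

        def neg_log_lik(params, x, t):
            # Tapered Pareto pdf: f(x) = (beta/x + 1/xc) * (t/x)**beta * exp((t - x)/xc)
            beta, xc = params
            if beta <= 0 or xc <= 0:
                return np.inf
            logpdf = np.log(beta / x + 1.0 / xc) + beta * np.log(t / x) + (t - x) / xc
            return -np.sum(logpdf)

        rng = np.random.default_rng(1)
        t = 0.05                                    # threshold amplitude in metres (assumed)
        x = t * (1.0 + rng.pareto(1.2, size=400))   # heavy-tailed synthetic sample

        res = minimize(neg_log_lik, x0=[1.0, 1.0], args=(x, t), method="Nelder-Mead")
        beta_hat, xc_hat = res.x
        print(f"power-law exponent ~ {beta_hat:.2f}, corner amplitude ~ {xc_hat:.2f} m")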

  14. Reconstruction of Far-Field Tsunami Amplitude Distributions from Earthquake Sources

    NASA Astrophysics Data System (ADS)

    Geist, Eric L.; Parsons, Tom

    2016-04-01

    The probability distribution of far-field tsunami amplitudes is explained in relation to the distribution of seismic moment at subduction zones. Tsunami amplitude distributions at tide gauge stations follow a similar functional form, well described by a tapered Pareto distribution that is parameterized by a power-law exponent and a corner amplitude. Distribution parameters are first established for eight tide gauge stations in the Pacific, using maximum likelihood estimation. A procedure is then developed to reconstruct the tsunami amplitude distribution that consists of four steps: (1) define the distribution of seismic moment at subduction zones; (2) establish a source-station scaling relation from regression analysis; (3) transform the seismic moment distribution to a tsunami amplitude distribution for each subduction zone; and (4) mix the transformed distribution for all subduction zones to an aggregate tsunami amplitude distribution specific to the tide gauge station. The tsunami amplitude distribution is adequately reconstructed for four tide gauge stations using globally constant seismic moment distribution parameters established in previous studies. In comparisons to empirical tsunami amplitude distributions from maximum likelihood estimation, the reconstructed distributions consistently exhibit higher corner amplitude values, implying that in most cases, the empirical catalogs are too short to include the largest amplitudes. Because the reconstructed distribution is based on a catalog of earthquakes that is much larger than the tsunami catalog, it is less susceptible to the effects of record-breaking events and more indicative of the actual distribution of tsunami amplitudes.

  15. An altitude and distance correction to the source fluence distribution of TGFs

    PubMed Central

    Nisi, R S; Østgaard, N; Gjesteland, T; Collier, A B

    2014-01-01

    The source fluence distribution of terrestrial gamma ray flashes (TGFs) has been extensively discussed in recent years, but few have considered how the TGF fluence distribution at the source, as estimated from satellite measurements, depends on the distance from satellite foot point and assumed production altitude. As the absorption of the TGF photons increases significantly with lower source altitude and larger distance between the source and the observing satellite, these might be important factors. We have addressed the issue by using the tropopause pressure distribution as an approximation of the TGF production altitude distribution and World Wide Lightning Location Network spheric measurements to determine the distance. The study is made possible by the increased number of Ramaty High Energy Solar Spectroscopic Imager (RHESSI) TGFs found in the second catalog of the RHESSI data. One finding is that the TGF/lightning ratio for the tropics probably has an annual variability due to an annual variability in the Dobson-Brewer circulation. The main result is an indication that the altitude distribution and distance should be considered when investigating the source fluence distribution of TGFs, as this leads to a softening of the inferred distribution of source brightness. PMID:26167434

  16. An altitude and distance correction to the source fluence distribution of TGFs

    NASA Astrophysics Data System (ADS)

    Nisi, R. S.; Østgaard, N.; Gjesteland, T.; Collier, A. B.

    2014-10-01

    The source fluence distribution of terrestrial gamma ray flashes (TGFs) has been extensively discussed in recent years, but few have considered how the TGF fluence distribution at the source, as estimated from satellite measurements, depends on the distance from satellite foot point and assumed production altitude. As the absorption of the TGF photons increases significantly with lower source altitude and larger distance between the source and the observing satellite, these might be important factors. We have addressed the issue by using the tropopause pressure distribution as an approximation of the TGF production altitude distribution and World Wide Lightning Location Network spheric measurements to determine the distance. The study is made possible by the increased number of Ramaty High Energy Solar Spectroscopic Imager (RHESSI) TGFs found in the second catalog of the RHESSI data. One finding is that the TGF/lightning ratio for the tropics probably has an annual variability due to an annual variability in the Dobson-Brewer circulation. The main result is an indication that the altitude distribution and distance should be considered when investigating the source fluence distribution of TGFs, as this leads to a softening of the inferred distribution of source brightness.

  17. Distribution of Short-Term and Lifetime Predicted Risks of Cardiovascular Diseases in Peruvian Adults

    PubMed Central

    Quispe, Renato; Bazo-Alvarez, Juan Carlos; Burroughs Peña, Melissa S; Poterico, Julio A; Gilman, Robert H; Checkley, William; Bernabé-Ortiz, Antonio; Huffman, Mark D; Miranda, J Jaime

    2015-01-01

    Background: Short-term risk assessment tools for prediction of cardiovascular disease events are widely recommended in clinical practice and are used largely for single time-point estimations; however, persons with low predicted short-term risk may have higher risks across longer time horizons. Methods and Results: We estimated short-term and lifetime cardiovascular disease risk in a pooled population from 2 studies of Peruvian populations. Short-term risk was estimated using the atherosclerotic cardiovascular disease Pooled Cohort Risk Equations. Lifetime risk was evaluated using the algorithm derived from the Framingham Heart Study cohort. Using previously published thresholds, participants were classified into 3 categories: low short-term and low lifetime risk, low short-term and high lifetime risk, and high short-term predicted risk. We also compared the distribution of these risk profiles across educational level, wealth index, and place of residence. We included 2844 participants (50% men, mean age 55.9 years [SD 10.2 years]) in the analysis. Approximately 1 of every 3 participants (34% [95% CI 33 to 36]) had a high short-term estimated cardiovascular disease risk. Among those with a low short-term predicted risk, more than half (54% [95% CI 52 to 56]) had a high lifetime predicted risk. Short-term and lifetime predicted risks were higher for participants with lower versus higher wealth indexes and educational levels and for those living in urban versus rural areas (P<0.01). These results were consistent by sex. Conclusions: These findings highlight potential shortcomings of using short-term risk tools for primary prevention strategies because a substantial proportion of Peruvian adults were classified as low short-term risk but high lifetime risk. Vulnerable adults, such as those from low socioeconomic status and those living in urban areas, may need greater attention regarding cardiovascular preventive strategies. PMID:26254303
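
    A sketch of the three-way risk classification described above; the cut-off values are placeholders standing in for the previously published thresholds the authors cite, not the study's actual parameters.

        def risk_profile(short_term_pct, lifetime_pct,
                         short_threshold=7.5, lifetime_threshold=39.0):
            # Thresholds are illustrative placeholders, not necessarily the published cut-offs.
            if short_term_pct >= short_threshold:
                return "high short-term risk"
            if lifetime_pct >= lifetime_threshold:
                return "low short-term / high lifetime risk"
            return "low short-term / low lifetime risk"

        print(risk_profile(4.0, 46.0))   # -> low short-term / high lifetime risk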

  18. Decoy-state quantum key distribution with a leaky source

    NASA Astrophysics Data System (ADS)

    Tamaki, Kiyoshi; Curty, Marcos; Lucamarini, Marco

    2016-06-01

    In recent years, there has been a great effort to prove the security of quantum key distribution (QKD) with a minimum number of assumptions. Besides its intrinsic theoretical interest, this would allow for larger tolerance against device imperfections in the actual implementations. However, even in this device-independent scenario, one assumption seems unavoidable, that is, the presence of a protected space devoid of any unwanted information leakage in which the legitimate parties can privately generate, process and store their classical data. In this paper we relax this unrealistic and hardly feasible assumption and introduce a general formalism to tackle the information leakage problem in most of existing QKD systems. More specifically, we prove the security of optical QKD systems using phase and intensity modulators in their transmitters, which leak the setting information in an arbitrary manner. We apply our security proof to cases of practical interest and show key rates similar to those obtained in a perfectly shielded environment. Our work constitutes a fundamental step forward in guaranteeing implementation security of quantum communication systems.

  19. Measurements of Infrared and Acoustic Source Distributions in Jet Plumes

    NASA Technical Reports Server (NTRS)

    Agboola, Femi A.; Bridges, James; Saiyed, Naseem

    2004-01-01

    The aim of this investigation was to use the linear phased array (LPA) microphones and infrared (IR) imaging to study the effects of advanced nozzle-mixing techniques on jet noise reduction. Several full-scale engine nozzles were tested at varying power cycles with the linear phased array setup parallel to the jet axis. The array consisted of 16 sparsely distributed microphones. The phased array microphone measurements were taken at a distance of 51.0 ft (15.5 m) from the jet axis, and the results were used to obtain relative overall sound pressure levels from one nozzle design to the other. The IR imaging system was used to acquire real-time dynamic thermal patterns of the exhaust jet from the nozzles tested. The IR camera measured the IR radiation from the nozzle exit to a distance of six fan diameters (X/D_FAN = 6), along the jet plume axis. The images confirmed the expected jet plume mixing intensity, and the phased array results showed the differences in sound pressure level with respect to nozzle configurations. The results show the effects of changes in configurations to the exit nozzles on both the flow mixing patterns and radiant energy dissipation patterns. By comparing the results from these two measurements, a relationship between noise reduction and core/bypass flow mixing is demonstrated.

  20. Optimal source distribution for binaural synthesis over loudspeakers

    NASA Astrophysics Data System (ADS)

    Takeuchi, Takashi; Nelson, Philip A.

    2002-12-01

    When binaural sound signals are presented with loudspeakers, the system inversion involved gives rise to a number of problems such as a loss of dynamic range and a lack of robustness to small errors and room reflections. The amplification required by the system inversion results in loss of dynamic range. The control performance of such a system deteriorates severely due to small errors resulting from, e.g., misalignment of the system and individual differences in the head related transfer functions at certain frequencies. The required large sound radiation results in severe reflection which also reduces the control performance. A method of overcoming these fundamental problems is proposed in this paper. A conceptual monopole transducer is introduced whose position varies continuously as frequency varies. This gives a minimum processing requirement of the binaural signals for the control to be achieved and all the above problems either disappear or are minimized. The inverse filters have flat amplitude response and the reproduced sound is not colored even outside the relatively large "sweet area." A number of practical solutions are suggested for the realization of such optimally distributed transducers. One of them is a discretization that enables the use of conventional transducer units.
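
    To illustrate the system inversion referred to above (not the paper's optimal source distribution itself), the sketch below performs a regularized per-frequency inversion of a 2 x 2 loudspeaker-to-ear plant matrix; the plant data and regularization constant are stand-in assumptions.

        import numpy as np

        def inverse_filters(H, beta=1e-2):
            # Regularized inversion at each frequency bin: C(f) = [H^H H + beta I]^-1 H^H,
            # so that binaural signals d are pre-filtered as x = C d before playback.
            eye = np.eye(2)
            filters = np.empty_like(H)
            for k in range(H.shape[0]):
                Hk = H[k]
                filters[k] = np.linalg.solve(Hk.conj().T @ Hk + beta * eye, Hk.conj().T)
            return filters

        # Stand-in plant matrices for 512 frequency bins (random placeholders for measured HRTFs)
        rng = np.random.default_rng(2)
        H = rng.standard_normal((512, 2, 2)) + 1j * rng.standard_normal((512, 2, 2))
        print(inverse_filters(H).shape)   # (512, 2, 2)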

  1. A Systematic Search for Short-term Variability of EGRET Sources

    NASA Technical Reports Server (NTRS)

    Wallace, P. M.; Bertsch, D. L.; Bloom, S. D.; Griffis, N. J.; Hunter, S. D.; Kniffen, D. A.; Thompson, D. J.

    1999-01-01

    The 3rd EGRET Catalog contains 170 unidentified high-energy (E>100 MeV) gamma-ray sources, and there is great interest in the nature of these sources. One means of determining source class is the study of flux variability on time scales of days; pulsars are believed to be stable on these scales while blazars are known to be highly variable. In addition, previous work has led to the discovery of 2CG 135+01 and GRO J1838-04, candidates for a new high-energy gamma-ray source class. These sources display transient behavior but cannot be associated with any known blazars. These considerations have led us to conduct a systematic search for short-term variability in EGRET data, covering all viewing periods through cycle 4. Three unidentified sources show some evidence of variability on short time scales; the source displaying the most convincing variability, 3EG J2006-2321, is not easily identified as a blazar.
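
    A minimal sketch of one common day-scale variability test of the kind such searches rely on: a chi-square test of the constant-flux hypothesis using flux estimates and their uncertainties. The flux values below are synthetic placeholders, and the statistic is generic rather than the study's exact criterion.

        import numpy as np
        from scipy import stats

        def variability_test(flux, flux_err):
            # Reduced chi-square against the weighted-mean (constant-flux) model,
            # and the chance probability of a value at least that large.
            flux = np.asarray(flux, dtype=float)
            err = np.asarray(flux_err, dtype=float)
            mean = np.sum(flux / err**2) / np.sum(1.0 / err**2)
            chi2 = np.sum(((flux - mean) / err)**2)
            dof = flux.size - 1
            return chi2 / dof, stats.chi2.sf(chi2, dof)

        # Synthetic daily fluxes (arbitrary units) for illustration only
        print(variability_test([12.0, 15.0, 9.0, 30.0, 11.0, 14.0],
                               [4.0, 4.5, 3.8, 5.0, 4.1, 4.2]))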

  2. Analytic solutions of the time-dependent quasilinear diffusion equation with source and loss terms

    SciTech Connect

    Hassan, M.H.A.; Hamza, E.A.

    1993-08-01

    A simplified one-dimensional quasilinear diffusion equation describing the time evolution of collisionless ions in the presence of ion-cyclotron-resonance heating, sources, and losses is solved analytically for all harmonics of the ion cyclotron frequency. Simple time-dependent distribution functions which are initially Maxwellian and vanish at high energies are obtained and calculated numerically for the first four harmonics of resonance heating. It is found that the strongest ion tail of the resulting anisotropic distribution function is driven by heating at the second harmonic followed by heating at the fundamental frequency.

  3. Differential dose contributions on total dose distribution of 125I brachytherapy source

    PubMed Central

    Camgöz, B.; Yeğin, G.; Kumru, M.N.

    2010-01-01

    This work refines the Monte Carlo simulation approach for the Amersham Model 6711 125I brachytherapy seed source, which is well characterized by many theoretical and experimental studies. The source, which has a simple geometry, was analyzed with respect to the criteria of the AAPM TG-43 report. The approach offered by this study involves determining the differential dose contributions that virtual partitions of the massive radioactive element of the source make to the total dose at an analytical calculation point. Some brachytherapy seeds contain multiple radioactive elements, so the dose at any point is the sum of separate doses from each element. For clinical treatments it is important to know the angular and radial dose distributions around a source located in cancerous tissue. The interior geometry of a source affects the characteristics of its dose distribution. Dose information on the inner geometrical structure of a brachytherapy source cannot be acquired experimentally because of the physical limits of material and geometry in healthy tissue, so Monte Carlo simulation is required for this study. The EGSnrc Monte Carlo simulation software was used. In the simulation design, the radioactive source was divided into 10 rings, partitioned but not separated from each other. All differential sources were simulated for the dose calculation, and the shape of the resulting dose distribution was compared with that of a single complete source. The anisotropy function was also examined mathematically. PMID:24376927
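
    The superposition idea described above can be sketched as follows: the total dose at a point is the activity-weighted sum of contributions from the N virtual sub-sources (here, rings). The point kernel, ring geometry and numbers are placeholders, not the EGSnrc result.

        import numpy as np

        def total_dose(point, ring_positions, ring_activities, kernel):
            # Sum of differential contributions from each virtual sub-source
            return sum(a * kernel(np.linalg.norm(np.asarray(point) - np.asarray(p)))
                       for p, a in zip(ring_positions, ring_activities))

        # Placeholder point kernel: inverse-square fall-off with a crude exponential attenuation
        kernel = lambda r: np.exp(-0.1 * r) / max(r, 1e-6) ** 2

        # Ten rings spaced 0.35 mm apart along the source axis, equal activity (assumed geometry)
        rings = [(0.0, 0.0, 0.35 * i) for i in range(10)]
        activities = [0.1] * 10
        print(total_dose((10.0, 0.0, 1.5), rings, activities, kernel))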

  4. GEOCHEMISTRY OF PAHS IN AQUATIC ENVIRONMENTS: A SYNTHESIS OF DISTRIBUTION, SOURCE, PERSISTENCE, PARTITIONING AND BIOAVAILABILITY

    EPA Science Inventory

    On the basis of their distributions, sources, persistence, partitioning and bioavailability, polycyclic aromatic hydrocarbons (PAHs) are a unique class of persistent organic pollutants (POPs) contaminating the aquatic environment. They are of particular interest to geochemists an...

  5. A study of the distribution of the noise source strengths in coaxial double jet

    NASA Astrophysics Data System (ADS)

    Nakazono, Y.

    1983-05-01

    The apparent noise source distributions in a coaxial double jet at subsonic speed were determined using a reflector-type directional microphone system which has an elliptical concave mirror with a diameter of 40 cm. It was found that the 1/3 octave band sound pressure spectrum in the far-field which was calculated with the sound source distribution agreed with the results obtained by an omnidirectional microphone at each velocity ratio of the bypass jet to the core jet. These results indicate that the directional microphone system will be a highly effective tool for the measurement of the sound source strength distribution of a coaxial double jet as well as that of a single jet. In addition, the overall pressure fluctuations in the jet flow were measured at each velocity ratio, and the influence of the correlation upon the far-field noise was examined based on the pressure fluctuations and the noise source distribution.

  6. Comparing two micrometeorological techniques for estimating trace gas emissions from distributed sources

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Measuring trace gas emission from distributed sources such as treatment lagoons, treatment wetlands, land spread of manure, and feedlots requires micrometeorological methods. In this study, we tested the accuracy of two relatively new micrometeorological techniques, vertical radial plume mapping (VR...

  7. 26 CFR 1.316-2 - Sources of distribution in general.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... earnings and profits. In determining the source of a distribution, consideration should be given first, to.... (b) If the earnings and profits of the taxable year (computed as of the close of the year...

  8. 26 CFR 1.316-2 - Sources of distribution in general.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... earnings and profits. In determining the source of a distribution, consideration should be given first, to.... (b) If the earnings and profits of the taxable year (computed as of the close of the year...

  9. 26 CFR 1.316-2 - Sources of distribution in general.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... earnings and profits. In determining the source of a distribution, consideration should be given first, to.... (b) If the earnings and profits of the taxable year (computed as of the close of the year...

  10. 26 CFR 1.316-2 - Sources of distribution in general.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... and profits. In determining the source of a distribution, consideration should be given first, to the... earnings and profits of the taxable year (computed as of the close of the year without diminution by...

  11. 26 CFR 1.316-2 - Sources of distribution in general.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... and profits. In determining the source of a distribution, consideration should be given first, to the... earnings and profits of the taxable year (computed as of the close of the year without diminution by...

  12. Sources and distribution of trace elements in Estonian peat

    NASA Astrophysics Data System (ADS)

    Orru, Hans; Orru, Mall

    2006-10-01

    This paper presents results on the distribution of trace elements in Estonian mires. Sixty-four mires, representative of the different landscape units, were analyzed for the content of 16 trace elements (Cr, Mn, Ni, Cu, Zn, and Pb using AAS; Cd by GF-AAS; Hg by the cold vapour method; and V, Co, As, Sr, Mo, Th, and U by XRF) as well as other peat characteristics (peat type, degree of humification, pH and ash content). The results of the research show that concentrations of trace elements in peat are generally low: V 3.8 ± 0.6, Cr 3.1 ± 0.2, Mn 35.1 ± 2.7, Co 0.50 ± 0.05, Ni 3.7 ± 0.2, Cu 4.4 ± 0.3, Zn 10.0 ± 0.7, As 2.4 ± 0.3, Sr 21.9 ± 0.9, Mo 1.2 ± 0.2, Cd 0.12 ± 0.01, Hg 0.05 ± 0.01, Pb 3.3 ± 0.2, Th 0.47 ± 0.05, U 1.3 ± 0.2 μg g^-1 and S 0.25 ± 0.02%. Statistical analyses of this large database showed that Co has the highest positive correlations with many elements and ash content. As, Ni, Mo, ash content and pH are also significantly correlated. The lowest abundance of most trace elements was recorded in mires fed only by precipitation (ombrotrophic), and the highest in mires fed by groundwater and springs (minerotrophic), which are situated in the flood plains of river valleys. Concentrations usually differ between the superficial, middle and bottom peat layers, but the significance decreases depending on the type of mire in the following order: transitional mires - raised bogs - fens. Differences among mire types are highest for the superficial but not significant for the basal peat layers. The use of peat with high concentrations of trace elements in agriculture, horticulture, as fuel, for water purification, etc., may pose a risk to humans via the food chain, inhalation, drinking water, etc.

  13. Source terms for analysis of accidents at a high level waste repository

    SciTech Connect

    Mubayi, V.; Davis, R.E.; Youngblood, R.

    1989-01-01

    This paper describes an approach to identifying source terms from possible accidents during the preclosure phase of a high-level nuclear waste repository. A review of the literature on repository safety analyses indicated that source term estimation is in a preliminary stage, largely based on judgement-based scoping analyses. The approach developed here was to partition the accident space into domains defined by certain threshold values of temperature and impact energy density which may arise in potential accidents and specify release fractions of various radionuclides, present in the waste form, in each domain. Along with a more quantitative understanding of accident phenomenology, this approach should help in achieving a clearer perspective on scenarios important to preclosure safety assessments of geologic repositories. 18 refs., 3 tabs.
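
    A minimal sketch of the domain-partitioning idea described above: the release of each radionuclide is its inventory times a release fraction selected by the accident domain (thresholds on temperature and impact energy density). All thresholds and fractions below are illustrative placeholders, not the scoping values of the repository analyses.

        def release(inventory_bq, temperature_c, impact_energy,
                    temp_threshold=500.0, energy_threshold=1.0e5):
            # Pick a release fraction according to which thresholds are exceeded
            if temperature_c >= temp_threshold and impact_energy >= energy_threshold:
                fraction = 1e-2      # both thermal and mechanical challenge
            elif temperature_c >= temp_threshold or impact_energy >= energy_threshold:
                fraction = 1e-4      # one threshold exceeded
            else:
                fraction = 1e-6      # benign handling event
            return fraction * inventory_bq

        print(release(3.7e15, temperature_c=650.0, impact_energy=2.0e4))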

  14. On the application of subcell resolution to conservation laws with stiff source terms

    NASA Technical Reports Server (NTRS)

    Chang, Shih-Hung

    1989-01-01

    LeVeque and Yee recently investigated a one-dimensional scalar conservation law with stiff source terms modeling the reacting flow problems and discovered that for the very stiff case most of the current finite difference methods developed for non-reacting flows would produce wrong solutions when there is a propagating discontinuity. A numerical scheme, essentially nonoscillatory/subcell resolution - characteristic direction (ENO/SRCD), is proposed for solving conservation laws with stiff source terms. This scheme is a modification of Harten's ENO scheme with subcell resolution, ENO/SR. The locations of the discontinuities and the characteristic directions are essential in the design. Strang's time-splitting method is used and time evolutions are done by advancing along the characteristics. Numerical experiment using this scheme shows excellent results on the model problem of LeVeque and Yee. Comparisons of the results of ENO, ENO/SR, and ENO/SRCD are also presented.
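
    For orientation, the sketch below applies plain Strang splitting to the LeVeque-Yee model problem u_t + u_x = -mu u (u - 1)(u - 1/2), with first-order upwind advection and sub-stepped integration of the source; it illustrates the splitting structure that ENO/SRCD builds on, not the subcell-resolution correction itself, and the grid parameters are assumptions.

        import numpy as np

        def source(u, mu):
            # Stiff reaction term S(u) = -mu * u * (u - 1) * (u - 0.5)
            return -mu * u * (u - 1.0) * (u - 0.5)

        def source_step(u, dt, mu, nsub=50):
            # Integrate du/dt = S(u) cell-by-cell with small explicit sub-steps
            h = dt / nsub
            for _ in range(nsub):
                u = u + h * source(u, mu)
            return u

        def advect_upwind(u, dt, dx):
            # First-order upwind step for u_t + u_x = 0 (periodic boundary)
            return u - dt / dx * (u - np.roll(u, 1))

        def strang_step(u, dt, dx, mu):
            # Strang splitting: half source, full advection, half source
            u = source_step(u, 0.5 * dt, mu)
            u = advect_upwind(u, dt, dx)
            return source_step(u, 0.5 * dt, mu)

        nx, mu = 200, 100.0
        x = np.linspace(0.0, 1.0, nx, endpoint=False)
        dx = x[1] - x[0]
        u = np.where(x < 0.3, 1.0, 0.0)
        for _ in range(200):
            u = strang_step(u, 0.5 * dx, dx, mu)
        print(u.min(), u.max())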

  15. Long-term monitoring of airborne nickel (Ni) pollution in association with some potential source processes in the urban environment.

    PubMed

    Kim, Ki-Hyun; Shon, Zang-Ho; Mauulida, Puteri T; Song, Sang-Keun

    2014-09-01

    The environmental behavior and pollution status of nickel (Ni) were investigated in seven major cities in Korea over a 13-year time span (1998-2010). The mean concentrations of Ni measured during the whole study period fell within the range of 3.71 (Gwangju: GJ) to 12.6 ng m^-3 (Incheon: IC). Although Ni values showed good comparability on a relatively large spatial scale, the values in most cities (6 out of 7) were subject to moderate reductions over the study period. To assess the effect of major sources on the long-term distribution of Ni, the relationship between Ni concentrations and potential source processes such as non-road transportation sources (e.g., ship and aircraft emissions) was examined for cities with port and airport facilities. The potential impact of long-range transport of Asian dust particles in controlling Ni levels was also evaluated. The overall results suggest that the Ni levels were subject to gradual reductions over the study period irrespective of changes in such localized non-road source activities. The pollution of Ni at all the study sites was maintained well below the international threshold (Directive 2004/107/EC) value of 20 ng m^-3. PMID:24997934

  16. Relative contribution of DNAPL dissolution and matrix diffusion to the long-term persistence of chlorinated solvent source zones

    NASA Astrophysics Data System (ADS)

    Seyedabbasi, Mir Ahmad; Newell, Charles J.; Adamson, David T.; Sale, Thomas C.

    2012-06-01

    The relative contribution of dense non-aqueous phase liquid (DNAPL) dissolution versus matrix diffusion processes to the longevity of chlorinated source zones was investigated. Matrix diffusion is being increasingly recognized as an important non-DNAPL component of source behavior over time, and understanding the persistence of contaminants that have diffused into lower permeability units can impact remedial decision-making. In this study, a hypothetical DNAPL source zone architecture consisting of several different sized pools and fingers originally developed by Anderson et al. (1992) was adapted to include defined low permeability layers. A coupled dissolution-diffusion model was developed to allow diffusion into these layers while in contact with DNAPL, followed by diffusion out of these same layers after complete DNAPL dissolution. This exercise was performed for releases of equivalent masses (675 kg) of three different compounds, including chlorinated solvents with solubilities ranging from low (tetrachloroethene (PCE)), moderate (trichloroethene (TCE)) to high (dichloromethane (DCM)). The results of this simple modeling exercise demonstrate that matrix diffusion can be a critical component of source zone longevity and may represent a longer-term contributor to source longevity (i.e., longer time maintaining concentrations above MCLs) than DNAPL dissolution alone at many sites. For the hypothetical TCE release, the simulation indicated that dissolution of DNAPL would take approximately 38 years, while the back diffusion from low permeability zones could maintain the source for an additional 83 years. This effect was even more dramatic for the higher solubility DCM (97% of longevity due to matrix diffusion), while the lower solubility PCE showed a more equal contribution from DNAPL dissolution vs. matrix diffusion. Several methods were used to describe the resulting source attenuation curves, including a first-order decay model which showed that half-life of

  17. Analytical source term optimization for radioactive releases with approximate knowledge of nuclide ratios

    NASA Astrophysics Data System (ADS)

    Hofman, Radek; Seibert, Petra; Kovalets, Ivan; Andronopoulos, Spyros

    2015-04-01

    We are concerned with source term retrieval in the case of an accident in a nuclear power plant with off-site consequences. The goal is to optimize atmospheric dispersion model inputs using inverse modeling of gamma dose rate measurements (instantaneous or time-integrated). These are the most abundant type of measurements provided by various radiation monitoring networks across Europe and available continuously in near-real time. Usually, a source term of an accidental release comprises a mixture of nuclides. Unfortunately, gamma dose rate measurements do not provide direct information on the source term composition; however, physical properties of the respective nuclides (deposition properties, decay half-life) can yield some insight. In the method presented, we assume that nuclide ratios are known at least approximately, e.g., from nuclide-specific observations or reactor inventory and assumptions on the accident type. The source term can be in multiple phases, each being characterized by constant nuclide ratios. The method is an extension of a well-established source term inversion approach based on the optimization of an objective function (minimization of a cost function). This function has two quadratic terms: mismatch between model and measurements weighted by an observation error covariance matrix and the deviation of the solution from a first guess weighted by the first-guess error covariance matrix. For simplicity, both error covariance matrices are approximated as diagonal. Analytical minimization of the cost function leads to a linear system of equations. Possible negative parts of the solution are iteratively removed by means of first-guess error variance reduction. Nuclide ratios enter the problem in the form of additional linear equations, where the deviations from prescribed ratios are weighted by factors; the corresponding error variance allows us to control how strongly we want to impose the prescribed ratios. This introduces some freedom into the
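
    The cost function described above has the generic quadratic form common to such inversions (written here in generic notation, not the authors' exact symbols):

        J(\mathbf{x}) = \tfrac{1}{2}(\mathbf{y}-\mathbf{H}\mathbf{x})^{\mathsf T}\mathbf{R}^{-1}(\mathbf{y}-\mathbf{H}\mathbf{x}) + \tfrac{1}{2}(\mathbf{x}-\mathbf{x}_b)^{\mathsf T}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b)

    with y the gamma dose rate observations, H the source-receptor sensitivity matrix from the dispersion model, x the release rates, x_b the first guess, and R, B the (diagonal) observation and first-guess error covariance matrices. Setting the gradient to zero gives the linear system

        (\mathbf{H}^{\mathsf T}\mathbf{R}^{-1}\mathbf{H} + \mathbf{B}^{-1})\,\mathbf{x} = \mathbf{H}^{\mathsf T}\mathbf{R}^{-1}\mathbf{y} + \mathbf{B}^{-1}\mathbf{x}_b,

    which is the analytical minimization referred to in the abstract; the prescribed nuclide-ratio constraints add further weighted linear equations to this system.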

  18. Implementation of New Turbulence Spectra in the Lighthill Analogy Source Terms

    NASA Technical Reports Server (NTRS)

    Woodruff, S. L.; Seiner, J. M.; Hussaini, M. Y.; Erlebacher, G.

    2000-01-01

    The industry-standard MGB approach to predicting the noise generated by a given aerodynamic flow field requires that the turbulence velocity correlation be specified so that the source terms in the Lighthill acoustic analogy may be computed. The velocity correlation traditionally used in MGB computations is inconsistent with a number of basic qualitative properties of turbulent flows. In the present investigation the effect on noise prediction of using two alternative velocity correlations is examined.

  19. 2D and 3D potential flows with rotational source terms in turbomachines

    NASA Astrophysics Data System (ADS)

    Alkalai, K.; Leboeuf, F.

    A computational method capable of treating two- and three-dimensional potential flows is developed which includes blade effects and viscosity in source terms determined over the entire flowfield considered. Details of the mathematical and numerical formulations are given, and grid generation and density calculation are discussed. Preliminary results obtained with the codes developed here are then presented, and the possibility of applying the method to more complex flows is examined.

  20. Balancing the source terms in a SPH model for solving the shallow water equations

    NASA Astrophysics Data System (ADS)

    Xia, Xilin; Liang, Qiuhua; Pastor, Manuel; Zou, Weilie; Zhuang, Yan-Feng

    2013-09-01

    A shallow flow generally features complex hydrodynamics induced by complicated domain topography and geometry. A numerical scheme with well-balanced flux and source term gradients is therefore essential before a shallow flow model can be applied to simulate real-world problems. The issue of source term balancing has been exhaustively investigated in grid-based numerical approaches, e.g. discontinuous Galerkin finite element methods and finite volume Godunov-type methods. In recent years, a relatively new computational method, smooth particle hydrodynamics (SPH), has started to gain popularity in solving the shallow water equations (SWEs). However, the well-balanced problem has not been fully investigated and resolved in the context of SPH. This work aims to discuss the well-balanced problem caused by a standard SPH discretization to the SWEs with slope source terms and derive a corrected SPH algorithm that is able to preserve the solution of lake at rest. In order to enhance the shock capturing capability of the resulting SPH model, the Monotone Upwind-centered Scheme for Conservation Laws (MUSCL) is also explored and applied to enable Riemann solver based artificial viscosity. The new SPH model is validated against several idealized benchmark tests and a real-world dam-break case and promising results are obtained.
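
    For reference, the equations in question are the one-dimensional shallow water equations with the bed-slope source term (standard form, quoted for context):

        \frac{\partial h}{\partial t} + \frac{\partial (hu)}{\partial x} = 0, \qquad
        \frac{\partial (hu)}{\partial t} + \frac{\partial}{\partial x}\left(hu^2 + \tfrac{1}{2} g h^2\right) = -\,g\,h\,\frac{\partial z_b}{\partial x},

    and the lake-at-rest state a well-balanced scheme must preserve is u = 0 with h + z_b = \text{const}, for which the discrete flux gradient must cancel the discrete bed-slope source term exactly.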

  1. Schematic way to find solution of the outcoupled matter wave with a source term

    SciTech Connect

    Prayitno, T. B.

    2013-09-09

    We propose a schematic way to obtain the solution of the outcoupled atom laser beam wave function in the presence of a source term where the beam is influenced by gravity. In this case, we only focus on the external potentials inside the region of the Bose-Einstein condensate that are generated by the electromagnetic source and gravity. Since the evolution of the atom laser beam can be described by the ordinary Schrödinger equation with a source, we may express the general solution as the superposition of a homogeneous solution and a particular solution. With the given external potentials and ansatz solutions, we find that the obtained energy depends on a parameter given by the ratio of the longitudinal frequency to the transverse frequency.
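
    The evolution equation referred to above can be written generically (this is the standard inhomogeneous Schrödinger form, not the authors' specific ansatz):

        i\hbar\,\frac{\partial \psi}{\partial t} = \left[-\frac{\hbar^2}{2m}\nabla^2 + V_{\mathrm{ext}}(\mathbf{r},t)\right]\psi + S(\mathbf{r},t), \qquad \psi = \psi_h + \psi_p,

    where V_ext collects the electromagnetic and gravitational potentials inside the condensate region, S is the source term describing outcoupling, \psi_h solves the homogeneous equation and \psi_p is a particular solution.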

  2. Low-level waste disposal performance assessments - Total source-term analysis

    SciTech Connect

    Wilhite, E.L.

    1995-12-31

    Disposal of low-level radioactive waste at Department of Energy (DOE) facilities is regulated by DOE. DOE Order 5820.2A establishes policies, guidelines, and minimum requirements for managing radioactive waste. Requirements for disposal of low-level waste emplaced after September 1988 include providing reasonable assurance of meeting stated performance objectives by completing a radiological performance assessment. Recently, the Defense Nuclear Facilities Safety Board issued Recommendation 94-2, "Conformance with Safety Standards at Department of Energy Low-Level Nuclear Waste and Disposal Sites." One of the elements of the recommendation is that low-level waste performance assessments do not include the entire source term because low-level waste emplaced prior to September 1988, as well as other DOE sources of radioactivity in the ground, are excluded. DOE has developed and issued guidance for preliminary assessments of the impact of including the total source term in performance assessments. This paper will present issues resulting from the inclusion of all DOE sources of radioactivity in performance assessments of low-level waste disposal facilities.

  3. Elevated Natural Source Water Ammonia and Nitrification in the Distribution Systems of Four Water Utilities

    EPA Science Inventory

    Nitrification in drinking water distribution systems is a concern of many drinking water systems. Although chloramination as a source of nitrification (i.e., addition of excess ammonia or breakdown of chloramines) has drawn the most attention, many source waters contain signific...

  4. Response of a viscoelastic halfspace to subsurface distributed acoustic sources with application to medical diagnosis

    NASA Astrophysics Data System (ADS)

    Royston, Thomas J.; Yazicioglu, Yigit; Loth, Francis

    2003-04-01

    The response within and at the surface of an isotropic viscoelastic medium to subsurface distributed low audible frequency acoustic sources is considered. Spherically and cylindrically distributed sources are approximated as arrays of infinitesimal point sources. Analytical approximations for the acoustic field radiating from these sources are then obtained as a summation of tractable point source expressions. These theoretical approximations are compared to computational finite element predictions and experimental studies in selected cases. The objective is to better understand low audible frequency sound propagation in soft biological tissue caused by subsurface sources. Distributed acoustic sources could represent vibratory motion of the vascular wall caused by turbulent blood flow past a constriction (stenosis). Additionally, focused vibratory stimulation using a dynamic radiation force caused by interfering ultrasound beams effectively creates a distributed subsurface acoustic source. A dynamic radiation force has been investigated as a means of probing subsurface tissue anomalies, including calcified vascular plaque and tumorous growths. In these cases, there is an interest in relating acoustic measurements at the skin surface and within the medium to the underlying flow/constriction environment or tissue anomaly. [Research supported by NIH NCRR 14250 and Whitaker Foundation BME RG 01-0198.]

  5. Numerical analysis of atomic density distribution in arc driven negative ion sources

    SciTech Connect

    Yamamoto, T.; Shibata, T.; Hatayama, A.; Kashiwagi, M.; Hanada, M.; Sawada, K.

    2014-02-15

    The purpose of this study is to calculate the atomic (H^0) density distribution in the JAEA 10 ampere negative ion source. A collisional radiative model is developed for the calculation of the H^0 density distribution. The non-equilibrium feature of the electron energy distribution function (EEDF), which mainly determines the H^0 production rate, is included by substituting the EEDF calculated from a 3D electron transport analysis. In this paper, the H^0 production rate, the ionization rate, and the density distribution in the source chamber are calculated. In the region where high energy electrons exist, the H^0 production and the ionization are enhanced. The calculated H^0 density distribution without the effect of H^0 transport is relatively small in the upper region. In the next step, this effect should be taken into account to obtain a more realistic H^0 distribution.

  6. Correlated Sources in Distributed Networks--Data Transmission, Common Information Characterization and Inferencing

    ERIC Educational Resources Information Center

    Liu, Wei

    2011-01-01

    Correlation is often present among observations in a distributed system. This thesis deals with various design issues when correlated data are observed at distributed terminals, including: communicating correlated sources over interference channels, characterizing the common information among dependent random variables, and testing the presence of…

  7. Extended Tonks-Langmuir-type model with non-Boltzmann-distributed electrons and cold ion sources

    NASA Astrophysics Data System (ADS)

    Kamran, M.; Kuhn, S.; Tskhakaya, D. D.; Khan, M.; Khan

    2013-04-01

    A general formalism for calculating the potential distribution Φ(z) in the quasineutral region of a new class of plane Tonks-Langmuir (TL)-type bounded-plasma-system (BPS) models differing from the well-known 'classical' TL model (Tonks, L. and Langmuir, I. 1929 A general theory of the plasma of an arc. Phys. Rev. 34, 876) by allowing for arbitrary (but still cold) ion sources and arbitrary electron distributions is developed. With individual particles usually undergoing microscopic collision/sink/source (CSS) events, extensive use is made here of the basic kinetic-theory concept of 'CSS-free trajectories' (i.e., the characteristics of the kinetic equation). Two types of electron populations, occupying the 'type-t' and 'type-p' domains of electron phase space, are distinguished. By definition, the type-t and type-p domains are made up of phase points lying on type-t ('trapped') CSS-free trajectories (not intersecting the walls and closing on themselves) and type-p ('passing') ones (starting at one of the walls and ending at the other). This work being the first step, it is assumed that ε ≡ λ_D/l → 0+ (where λ_D and l are a typical Debye length and a typical ionization length, respectively) so that the system exhibits a finite quasineutral 'plasma' region and two infinitesimally thin 'sheath' regions associated with the 'sheath-edge singularities' |dΦ/dz|_{z→±z_s} → ∞. The potential in the plasma region is required to satisfy a plasma equation (quasineutrality condition) of the form n_i{Φ} = n_e(Φ), where the electron density n_e(Φ) is given and the ion density n_i{Φ} is expressed in terms of trajectory integrals of the ion kinetic equation, with the ions produced by electron-impact ionization of cold neutrals. While previous TL-type models were characterized by electrons diffusing under the influence of frequent collisions with the neutral background particles and approximated by Maxwellian (Riemann, K.-U. 2006 Plasma-sheath transition in the

  8. Automated source term and wind parameter estimation for atmospheric transport and dispersion applications

    NASA Astrophysics Data System (ADS)

    Bieringer, Paul E.; Rodriguez, Luna M.; Vandenberghe, Francois; Hurst, Jonathan G.; Bieberbach, George; Sykes, Ian; Hannan, John R.; Zaragoza, Jake; Fry, Richard N.

    2015-12-01

    Accurate simulations of the atmospheric transport and dispersion (AT&D) of hazardous airborne materials rely heavily on the source term parameters necessary to characterize the initial release and meteorological conditions that drive the downwind dispersion. In many cases the source parameters are not known and are consequently based on rudimentary assumptions. This is particularly true of accidental releases and the intentional releases associated with terrorist incidents. When available, meteorological observations are often not representative of the conditions at the location of the release, and the use of these non-representative meteorological conditions can result in significant errors in the hazard assessments downwind of the sensors, even when the other source parameters are accurately characterized. Here, we describe a computationally efficient methodology to characterize both the release source parameters and the low-level winds (e.g., winds near the surface) required to produce a refined downwind hazard. This methodology, known as the Variational Iterative Refinement Source Term Estimation (STE) Algorithm (VIRSA), consists of a combination of modeling systems. These systems include a back-trajectory based source inversion method, a forward Gaussian puff dispersion model, and a variational refinement algorithm that uses both a simple forward AT&D model that is a surrogate for the more complex Gaussian puff model and a formal adjoint of this surrogate model. The back-trajectory based method is used to calculate a "first guess" source estimate based on the available observations of the airborne contaminant plume and atmospheric conditions. The variational refinement algorithm is then used to iteratively refine the first guess STE parameters and meteorological variables. The algorithm has been evaluated across a wide range of scenarios of varying complexity. It has been shown to improve the source parameters for location by several hundred percent (normalized by the

  9. Impact of routine episodic emissions on the expected frequency distribution of emissions from oil and gas production sources.

    NASA Astrophysics Data System (ADS)

    Smith, N.; Blewitt, D.; Hebert, L. B.

    2015-12-01

    In coordination with oil and gas operators, we developed a high resolution (< 1 min) simulation of temporal variability in well-pad oil and gas emissions over a year. We include routine emissions from condensate tanks, dehydrators, pneumatic devices, fugitive leaks and liquids unloading. We explore the variability in natural gas emissions from these individual well-pad sources, and find that routine short-term episodic emissions such as tank flashing and liquids unloading result in the appearance of a skewed, or 'fat-tail', distribution of emissions from an individual well-pad over time. Additionally, we explore the expected variability in emissions from multiple wells with different raw gas composition, gas/liquids production volumes and control equipment. Differences in well-level composition, production volume and control equipment translate into differences in well-level emissions, leading to a fat-tail distribution of emissions in the absence of operational upsets. Our results have several implications for recent studies focusing on emissions from oil and gas sources. The time scale of emission estimates is important and has policy implications. Fat-tail distributions may not be entirely driven by avoidable mechanical failures, and are expected to occur under routine operational conditions from short-duration emissions (e.g., tank flashing, liquid unloading). An understanding of the expected distribution of emissions for a particular population of wells is necessary to evaluate whether the observed distribution is more skewed than expected. Temporal variability in well-pad emissions makes comparisons to annual average emissions inventories difficult and may complicate the interpretation of long-term ambient fenceline monitoring data. Sophisticated change detection algorithms will be necessary to identify when true operational upsets occur versus routine short-term emissions.

  10. Measurement Technique of Dose Rate Distribution of Ionization Sources with Unstable in Time Beam Parameters

    NASA Astrophysics Data System (ADS)

    Stuchebrov, S. G.; Miloichikova, I. A.; Danilova, I. B.

    2016-01-01

    The article describes a new technique for measuring average radiation dose values for unstable gamma-ray sources used in non-destructive testing. The method is based on the use of different types of compact accumulative dosimeters. A spatially distributed, position-sensitive dosimetry system based on compact sensitive elements was created. The size and spatial resolution of the dosimetry system are chosen taking into account the source characteristics. The proposed method has been tested by measuring the dose distributions of several X-ray and gamma-radiation sources based on X-ray tubes, betatron electron accelerators and linear electron accelerators.

  11. Interpreting the neutron's electric form factor: Rest frame charge distribution or foldy term?

    SciTech Connect

    Nathan Isgur

    1998-12-01

    The neutron's electric form factor contains vital information on nucleon structure, but its interpretation within many models has been obscured by relativistic effects. The author demonstrates that, to leading order in the relativistic expansion of a constituent quark model, the Foldy term cancels exactly against a contribution to the Dirac form factor F_1 to leave intact the naive interpretation of G_E^n as arising from the neutron's rest frame charge distribution.
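
    For context, the relation underlying this discussion is the standard decomposition of the Sachs electric form factor into the Dirac and Pauli form factors:

        G_E^{\,n}(Q^2) = F_1^{\,n}(Q^2) - \tau\,F_2^{\,n}(Q^2), \qquad \tau = \frac{Q^2}{4M_n^2},

    so that at low Q^2 the measured G_E^n receives both an intrinsic (F_1) contribution and a Foldy-type contribution carried by F_2; the paper argues that, to leading relativistic order in a constituent quark model, the latter is cancelled by a matching piece of F_1.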

  12. Review of uncertainty sources affecting the long-term predictions of space debris evolutionary models

    NASA Astrophysics Data System (ADS)

    Dolado-Perez, J. C.; Pardini, Carmen; Anselmo, Luciano

    2015-08-01

    Since the launch of Sputnik-I in 1957, the amount of space debris in Earth's orbit has increased continuously. Historically, besides abandoned intact objects (spacecraft and orbital stages), the primary sources of space debris in Earth's orbit were (i) accidental and intentional break-ups which produced long-lasting debris and (ii) debris released intentionally during the operation of launch vehicle orbital stages and spacecraft. In the future, fragments generated by collisions are expected to become a significant source as well. In this context, and from a purely mathematical point of view, the orbital debris population in Low Earth Orbit (LEO) should be intrinsically unstable, due to the physics of mutual collisions and the relative ineffectiveness of natural sink mechanisms above ~700 km. Therefore, the real question should not be "if", but "when" the exponential growth of the space debris population is supposed to start. From a practical point of view, and in order to answer the previous question, since the end of the 1980's several sophisticated long-term debris evolutionary models have been developed. Unfortunately, the predictions performed with such models, in particular beyond a few decades, are affected by considerable uncertainty. Such uncertainty comes from a relatively large number of variables that, being either under the partial control or completely out of the control of modellers, introduce a variability in the long-term simulation of the space debris population which cannot be captured with standard Monte Carlo statistics. The objective of this paper is to present and discuss many of the uncertainty sources affecting the long-term predictions made with evolutionary models, in order to serve as a roadmap for the uncertainty and the statistical robustness analysis of the long-term evolution of the space debris population.

  13. Modeling Heat Flow for a Distributed Moving Heat Source in Micro-Laser Welding of Plastics

    NASA Astrophysics Data System (ADS)

    Grewell, David; Benatar, Avraham

    2004-06-01

    Polymer use in micro-devices, especially in the medical industry, has been rapidly increasing. During assembly of micro-devices it is desirable to produce weld joints that are about 100 μm in width. This paper reviews the modeling of heat flow during through-transmission infrared micro-welding of plastics using fiber-coupled laser diodes. Two models were used to predict the temperature distributions within welded samples. Both models were based on a moving heat source and a moving coordinate system. The simpler model used a moving point heat source, while the more complex model used a Gaussian distributed heat source. It was found that the distributed model can accurately predict temperature fields in plastic laser welds over the full range of parameters evaluated, whereas the point heat source model was only able to accurately predict temperature fields for a relatively small laser focal spot (25 μm). In addition, it was found that for micro-welding of plastics, when the dimensionless distribution parameter is less than two, a point heat source model predicts weld widths similar to those predicted by a distributed heat source model.
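
    The moving point-source model referred to above is classically represented by the quasi-steady Rosenthal solution for a point heat source travelling over a semi-infinite body. The sketch below is illustrative only: it is the textbook Rosenthal expression, not necessarily the exact model used in the paper, and the material properties and laser power in the example are assumed values.

```python
import numpy as np

def rosenthal_point_source(xi, y, z, Q, v, k, alpha, T0=25.0):
    """Quasi-steady temperature field of a point heat source moving at speed v
    along x over a semi-infinite body, in the moving coordinate xi = x - v*t.
    Q: absorbed power [W], k: thermal conductivity [W/(m K)],
    alpha: thermal diffusivity [m^2/s], T0: ambient temperature [deg C]."""
    R = np.sqrt(xi**2 + y**2 + z**2)   # distance from the moving source
    return T0 + Q / (2.0 * np.pi * k * R) * np.exp(-v * (R + xi) / (2.0 * alpha))

# Illustrative evaluation 200 um behind a 0.1 W spot moving at 10 mm/s
# through a generic polymer (assumed k ~ 0.25 W/(m K), alpha ~ 1.1e-7 m^2/s).
print(rosenthal_point_source(-200e-6, 0.0, 50e-6, Q=0.1, v=0.01, k=0.25, alpha=1.1e-7))
```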

  14. Development of a tool dedicated to the evaluation of hydrogen term source for technological Wastes: assumptions, physical models, and validation

    SciTech Connect

    Lamouroux, C.

    2013-07-01

    In radioactive waste packages, hydrogen is generated, on the one hand, from the radiolysis of the wastes (mainly organic materials) and, on the other hand, from the radiolysis of the water contained in the cement matrix. In order to assess hydrogen generation, two tools based on operational models have been developed. One is dedicated to the determination of the hydrogen source term arising from the radiolysis of the wastes: the STORAGE tool (Simulation Tool Of Emission Radiolysis Gas); the other deals with the hydrogen source term produced by radiolysis of the cement matrices (the Damar tool). The approach used by the STORAGE tool for assessing the production rate of radiolysis gases is divided into five steps: 1) specification of the package data, in particular the inventories and radiological materials defined for a package medium; 2) determination of the radiochemical yields for the different constituents and the associated behaviour laws; this determination is made from the PRELOG database, in which radiochemical yields under different irradiation conditions have been compiled; 3) definition of hypotheses concerning the composition and the distribution of contamination inside the package, to allow assessment of the power absorbed by the constituents; 4) summation of all the contributions; and finally, 5) validation calculations by comparison with a reduced sampling of packages. Comparisons with measured values confirm the conservative character of the methodology and give confidence in the safety margins used for the safety analysis report.
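
    At its core, this kind of assessment multiplies the power absorbed by a constituent by its radiochemical yield. The sketch below shows only that bookkeeping; it is not the STORAGE or Damar implementation, and the function name and example G-value are assumptions.

```python
AVOGADRO = 6.02214076e23            # molecules per mole
JOULE_TO_EV = 1.0 / 1.602176634e-19

def h2_generation_rate(absorbed_power_w, g_h2_per_100ev):
    """Radiolytic H2 production rate [mol/s] from the power absorbed by a
    constituent [W] and its radiochemical yield G(H2) [molecules / 100 eV]."""
    ev_per_second = absorbed_power_w * JOULE_TO_EV
    molecules_per_second = (g_h2_per_100ev / 100.0) * ev_per_second
    return molecules_per_second / AVOGADRO

# Example: 0.2 W absorbed by an organic waste with an assumed G(H2) of 1.3.
print(f"{h2_generation_rate(0.2, 1.3):.2e} mol H2 per second")
```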

  15. Evaluation of severe accident risks: Methodology for the containment, source term, consequence, and risk integration analyses; Volume 1, Revision 1

    SciTech Connect

    Gorham, E.D.; Breeding, R.J.; Brown, T.D.; Harper, F.T.; Helton, J.C.; Murfin, W.B.; Hora, S.C.

    1993-12-01

    NUREG-1150 examines the risk to the public from five nuclear power plants. The NUREG-1150 plant studies are Level III probabilistic risk assessments (PRAs) and, as such, they consist of four analysis components: accident frequency analysis, accident progression analysis, source term analysis, and consequence analysis. This volume summarizes the methods utilized in performing the last three components and the assembly of these analyses into an overall risk assessment. The NUREG-1150 analysis approach is based on the following ideas: (1) general and relatively fast-running models for the individual analysis components, (2) well-defined interfaces between the individual analysis components, (3) use of Monte Carlo techniques together with an efficient sampling procedure to propagate uncertainties, (4) use of expert panels to develop distributions for important phenomenological issues, and (5) automation of the overall analysis. Many features of the new analysis procedures were adopted to facilitate a comprehensive treatment of uncertainty in the complete risk analysis. Uncertainties in the accident frequency, accident progression and source term analyses were included in the overall uncertainty assessment. The uncertainties in the consequence analysis were not included in this assessment. A large effort was devoted to the development of procedures for obtaining expert opinion and the execution of these procedures to quantify parameters and phenomena for which there is large uncertainty and divergent opinions in the reactor safety community.
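
    The sampling-based propagation of uncertainty described in item (3) can be illustrated with a minimal Latin hypercube sketch. This is a generic illustration, not the NUREG-1150 machinery; the toy "consequence" chain and the two uncertain inputs are assumptions.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, seed=0):
    """Stratified samples on [0, 1)^d: one sample per equal-probability bin
    in each dimension, with independently permuted columns."""
    rng = np.random.default_rng(seed)
    u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(n_dims):
        u[:, j] = rng.permutation(u[:, j])
    return u

# Toy chain: an uncertain release fraction times an uncertain dispersion factor.
u = latin_hypercube(200, 2)
release_fraction = 10.0 ** (-3.0 + 2.0 * u[:, 0])   # spans 1e-3 .. 1e-1
dispersion_factor = 0.5 + 1.5 * u[:, 1]             # spans 0.5 .. 2.0
consequence = release_fraction * dispersion_factor
print(np.percentile(consequence, [5, 50, 95]))      # uncertainty band
```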

  16. Long-Term Safe Storage and Disposal of Spent Sealed Radioactive Sources in Borehole Type Repositories

    SciTech Connect

    Ojovan, M. I.; Dmitriev, S. A.; Sobolev, I. A.

    2003-02-26

    Russian Federation has the leading experience in applying borehole storage/disposal method for SRS. A new immobilization technology for sources being disposed of in underground repositories was mastered by 1986 and since then it is used in the country. This method uses all advantages of borehole type repositories supplementing them with metal encapsulation of sources. Sources being uniformly allocated in the volume of underground vessel are fixed in the metal block hence ensuring long-term safety. The dissipation of radiogenic heat from SRS is considerably improved, radiation fields are reduced, and direct contact of sources to an environment is completely eliminated. The capacity of a typical borehole storage/disposal facility is increased almost 6 times applying metal immobilization. That has made new technology extremely favourable economically. The metal immobilization of SRS is considered as an option in Belarus and Ukraine as well as Bulgaria. Immobilization of sources in metal matrices can be a real solution for retrieval of SRS from inadequate repositories.

  17. Trace elements in particulate matter from metropolitan regions of Northern China: Sources, concentrations and size distributions.

    PubMed

    Pan, Yuepeng; Tian, Shili; Li, Xingru; Sun, Ying; Li, Yi; Wentworth, Gregory R; Wang, Yuesi

    2015-12-15

    Public concerns over airborne trace elements (TEs) in metropolitan areas are increasing, but long-term and multi-site observations of size-resolved aerosol TEs in China are still lacking. Here, we identify highly elevated levels of atmospheric TEs in megacities and industrial sites in a Beijing-Tianjin-Hebei urban agglomeration relative to background areas, with the annual mean values of As, Pb, Ni, Cd and Mn exceeding the acceptable limits of the World Health Organization. Despite the spatial variability in concentrations, the size distribution pattern of each trace element was quite similar across the region. The crustal elements Al and Fe were mainly found in coarse particles (2.1-9 μm), whereas the main fraction of toxic metals, such as Cu, Zn, As, Se, Cd and Pb, was found in submicron particles (<1.1 μm). These toxic metals were enriched by over 100-fold relative to the Earth's crust. The size distributions of Na, Mg, K, Ca, V, Cr, Mn, Ni, Mo and Ba were bimodal, with two peaks at 0.43-0.65 μm and 4.7-5.8 μm. The combination of the size distribution information, principal component analysis and air mass back trajectory model offered a robust technique for distinguishing the main sources for airborne TEs, e.g., soil dust, fossil fuel combustion and industrial emissions, at different sites. In addition, higher elemental concentrations coincided with westerly flow, indicating that polluted soil and fugitive dust were major sources of TEs on the regional scale. However, the contribution of coal burning, iron industry/oil combustion and non-ferrous smelters to atmospheric metal pollution in Northern China should be given more attention. Considering that the concentrations of heavy metals associated with fine particles in the target region were significantly higher than those in other Asian sites, the implementation of strict environmental standards in China is required to reduce the amounts of these hazardous pollutants released into the atmosphere. PMID
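
    The 100-fold enrichment quoted above is conventionally expressed as a crustal enrichment factor computed against a reference crustal element such as Al or Fe. A minimal sketch, with made-up concentrations for illustration:

```python
def enrichment_factor(c_elem_aerosol, c_ref_aerosol, c_elem_crust, c_ref_crust):
    """Crustal enrichment factor EF = (C_elem/C_ref)_aerosol / (C_elem/C_ref)_crust.
    EF near 1 suggests a crustal (soil dust) origin; EF >> 10 points to
    anthropogenic sources."""
    return (c_elem_aerosol / c_ref_aerosol) / (c_elem_crust / c_ref_crust)

# Hypothetical numbers (ng/m3 for aerosol, mg/kg for crust); units cancel per ratio.
print(enrichment_factor(c_elem_aerosol=100.0, c_ref_aerosol=2000.0,
                        c_elem_crust=17.0, c_ref_crust=80000.0))
```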

  18. Size distribution, mixing state and source apportionments of black carbon aerosols in London during winter time

    NASA Astrophysics Data System (ADS)

    Liu, D.; Allan, J. D.; Young, D. E.; Coe, H.; Beddows, D.; Fleming, Z. L.; Flynn, M. J.; Gallagher, M. W.; Harrison, R. M.; Lee, J.; Prevot, A. S. H.; Taylor, J. W.; Yin, J.; Williams, P. I.; Zotter, P.

    2014-06-01

    Black carbon aerosols (BC) at a London urban site were characterized in both winter and summer time 2012 during the Clean Air for London (ClearfLo) project. Positive matrix factorization (PMF) factors of organic aerosol mass spectra measured by a high resolution aerosol mass spectrometer (HR-AMS) showed traffic-dominant sources in summer but in winter the influence of additional non-traffic sources became more important, mainly from solid fuel sources (SF). Measurements using a single particle soot photometer (SP2, DMT), showed the traffic-dominant BC exhibited an almost uniform BC core size (Dc) distribution with very thin coating thickness throughout the detectable range of Dc. However the size distribution of Dc (project average mass median Dc = 149 ± 22 nm in winter, and 120 ± 6 nm in summer) and BC coating thickness varied significantly in winter. A novel methodology was developed to attribute the BC number concentrations and mass abundances from traffic (BCtr) and from SF (BCsf), by using a 2-D histogram of the particle optical properties as a function of BC core size, as measured by the SP2. The BCtr and BCsf showed distinctly different Dc distributions and coating thicknesses, with BCsf displaying larger Dc and larger coating thickness compared to BCtr. BC particles from different sources were also apportioned by applying a multiple linear regression between the total BC mass and each AMS-PMF factor (BC-AMS-PMF method), and also attributed by applying the absorption spectral dependence of carbonaceous aerosols to 7-wavelength Aethalometer measurements (Aethalometer method). Air masses that originated from westerly (W), southeasterly (SE), or easterly (E) sectors showed BCsf fractions that ranged from low to high, and whose mass median Dc values were 137 ± 10 nm, 143 ± 11 nm, and 169 ± 29 nm respectively. The corresponding bulk relative coating thickness of BC (coated particle size / BC core - Dp / Dc) for these same sectors was 1.28 ± 0.07, 1.45 ± 0

  19. Size distribution, mixing state and source apportionment of black carbon aerosol in London during wintertime

    NASA Astrophysics Data System (ADS)

    Liu, D.; Allan, J. D.; Young, D. E.; Coe, H.; Beddows, D.; Fleming, Z. L.; Flynn, M. J.; Gallagher, M. W.; Harrison, R. M.; Lee, J.; Prevot, A. S. H.; Taylor, J. W.; Yin, J.; Williams, P. I.; Zotter, P.

    2014-09-01

    Black carbon aerosols (BC) at a London urban site were characterised in both winter- and summertime 2012 during the Clean Air for London (ClearfLo) project. Positive matrix factorisation (PMF) factors of organic aerosol mass spectra measured by a high-resolution aerosol mass spectrometer (HR-AMS) showed traffic-dominant sources in summer but in winter the influence of additional non-traffic sources became more important, mainly from solid fuel sources (SF). Measurements using a single particle soot photometer (SP2, DMT) showed the traffic-dominant BC exhibited an almost uniform BC core size (Dc) distribution with very thin coating thickness throughout the detectable range of Dc. However, the size distribution of Dc (project average mass median Dc = 149 ± 22 nm in winter, and 120 ± 6 nm in summer) and BC coating thickness varied significantly in winter. A novel methodology was developed to attribute the BC number concentrations and mass abundances from traffic (BCtr) and from SF (BCsf), by using a 2-D histogram of the particle optical properties as a function of BC core size, as measured by the SP2. The BCtr and BCsf showed distinctly different Dc distributions and coating thicknesses, with BCsf displaying larger Dc and larger coating thickness compared to BCtr. BC particles from different sources were also apportioned by applying a multiple linear regression between the total BC mass and each AMS-PMF factor (BC-AMS-PMF method), and also attributed by applying the absorption spectral dependence of carbonaceous aerosols to 7-wavelength Aethalometer measurements (Aethalometer method). Air masses that originated from westerly (W), southeasterly (SE), and easterly (E) sectors showed BCsf fractions that ranged from low to high, and whose mass median Dc values were 137 ± 10 nm, 143 ± 11 nm and 169 ± 29 nm, respectively. The corresponding bulk relative coating thickness of BC (coated particle size/BC core - Dp/Dc) for these same sectors was 1.28 ± 0.07, 1.45 ± 0
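
    The BC-AMS-PMF attribution described above is, at heart, a multiple linear regression of the total BC mass time series onto the AMS-PMF factor time series. The sketch below assumes non-negative scaling coefficients, which is a constraint added here for illustration rather than something stated in the abstract.

```python
import numpy as np
from scipy.optimize import nnls

def apportion_bc(bc_total, pmf_factor_series):
    """Regress the total BC mass time series onto AMS-PMF organic-aerosol
    factor time series (e.g. traffic and solid fuel) and return the scaling
    coefficients plus the per-factor BC contribution time series."""
    A = np.column_stack(pmf_factor_series)   # shape (n_times, n_factors)
    coeffs, _residual = nnls(A, bc_total)    # non-negative least squares
    contributions = A * coeffs               # column i scaled by coeffs[i]
    return coeffs, contributions

# Synthetic example with a traffic-like and a solid-fuel-like factor.
rng = np.random.default_rng(1)
traffic, solid_fuel = rng.gamma(2.0, 1.0, 500), rng.gamma(2.0, 0.5, 500)
bc = 0.4 * traffic + 0.9 * solid_fuel + rng.normal(0.0, 0.05, 500)
print(apportion_bc(bc, [traffic, solid_fuel])[0])   # approximately [0.4, 0.9]
```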

  20. Spurious Behavior of Shock-Capturing Methods: Problems Containing Stiff Source Terms and Discontinuities

    NASA Technical Reports Server (NTRS)

    Yee, Helen M. C.; Kotov, D. V.; Wang, Wei; Shu, Chi-Wang

    2013-01-01

    The goal of this paper is to relate the numerical dissipation that is inherited in high order shock-capturing schemes with the onset of wrong propagation speed of discontinuities. For pointwise evaluation of the source term, previous studies indicated that the phenomenon of wrong propagation speed of discontinuities is connected with the smearing of the discontinuity caused by the discretization of the advection term. The smearing introduces a nonequilibrium state into the calculation. Thus as soon as a nonequilibrium value is introduced in this manner, the source term turns on and immediately restores equilibrium, while at the same time shifting the discontinuity to a cell boundary. The present study shows that the degree of wrong propagation speed of discontinuities is highly dependent on the accuracy of the numerical method. The manner in which the smearing of discontinuities is contained by the numerical method and the overall amount of numerical dissipation being employed play major roles. Moreover, employing finite time steps and grid spacings that are below the standard Courant-Friedrichs-Lewy (CFL) limit on shock-capturing methods for compressible Euler and Navier-Stokes equations containing stiff reacting source terms and discontinuities reveals surprising counter-intuitive results. Unlike non-reacting flows, for stiff reactions with discontinuities, employing a time step and grid spacing that are below the CFL limit (based on the homogeneous part or non-reacting part of the governing equations) does not guarantee a correct solution of the chosen governing equations. Instead, depending on the numerical method, time step and grid spacing, the numerical simulation may lead to (a) the correct solution (within the truncation error of the scheme), (b) a divergent solution, (c) a solution with the wrong propagation speed of discontinuities or (d) other spurious solutions that are solutions of the discretized counterparts but are not solutions of the governing equations.
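
    A standard model problem for this phenomenon is the LeVeque-Yee scalar advection-reaction equation, where a pointwise-evaluated stiff source snaps smeared front values to the nearest equilibrium and can shift the discontinuity by roughly one cell per step. The operator-split sketch below is a generic illustration of that setup, not one of the schemes studied in the paper; the parameter values are placeholders.

```python
import numpy as np

def step_advection_reaction(u, dx, dt, a=1.0, mu=1000.0):
    """One operator-split step for the LeVeque-Yee model problem
        u_t + a u_x = -mu * u * (u - 1) * (u - 0.5),
    using first-order upwind advection (periodic domain) followed by pointwise
    integration of the stiff source with backward-Euler Newton iterations."""
    # upwind advection step (assumes a > 0)
    u = u - a * dt / dx * (u - np.roll(u, 1))
    # pointwise stiff source: Newton iterations for v - u + dt*mu*g(v) = 0
    v = u.copy()
    for _ in range(20):
        f = v + dt * mu * v * (v - 1.0) * (v - 0.5) - u
        df = 1.0 + dt * mu * (3.0 * v**2 - 3.0 * v + 0.5)
        v = v - f / df
    return v
```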

  1. SARNET: Integrating Severe Accident Research in Europe - Safety Issues in the Source Term Area

    SciTech Connect

    Haste, T.; Giordano, P.; Micaelli, J.-C.; Herranz, L.

    2006-07-01

    SARNET (Severe Accident Research Network) is a Network of Excellence of the EU 6th Framework Programme that integrates in a sustainable manner the research capabilities of about fifty European organisations to resolve important remaining uncertainties and safety issues concerning existing and future nuclear plants, especially water-cooled reactors, under hypothetical severe accident conditions. It emphasises integrating activities, spreading of excellence (including knowledge transfer) and jointly-executed research. This paper summarises the main results obtained at the mid-point of the current 4-year term, highlighting those concerning radioactive release to the environment. Integration is pursued through different methods: the ASTEC integral computer code for severe accident modelling, development of PSA level 2 methods, a means for definition, updating and resolution of safety issues, and development of a web database for storing experimental results. These activities are helped by an evolving Advanced Communication Tool, easing communication amongst partners. Concerning spreading of excellence, educational courses covering severe accident analysis methodology and level 2 PSA have been organised for early 2006. A textbook on Severe Accident Phenomenology is being written. A mobility programme for students and young researchers has started. Results are disseminated mainly through open conference proceedings, with journal publications planned. The 1st European Review Meeting on Severe Accidents in November 2005 covered SARNET activities during its first 18 months. Jointly executed research activities concern key issues grouped in the Corium, Containment and Source Term areas. In Source Term, the behaviour of the highly radio-toxic ruthenium under oxidising conditions, including air ingress, is investigated. Models are proposed for fuel and ruthenium oxidation. Experiments on transport of oxide ruthenium species are performed. Reactor scenario studies assist in defining

  2. The Integration of Renewable Energy Sources into Electric Power Distribution Systems, Vol. II Utility Case Assessments

    SciTech Connect

    Zaininger, H.W.

    1994-01-01

    Electric utility distribution system impacts associated with the integration of renewable energy sources such as photovoltaics (PV) and wind turbines (WT) are considered in this project. The impacts are expected to vary from site to site according to the following characteristics: the local solar insolation and/or wind characteristics, renewable energy source penetration level, whether battery or other energy storage systems are applied, and local utility distribution design standards and planning practices. Small, distributed renewable energy sources are connected to the utility distribution system like other, similar kW- and MW-scale equipment and loads. Residential applications are expected to be connected to single-phase 120/240-V secondaries. Larger kW-scale applications may be connected to three-phase secondaries, and larger hundred-kW and MW-scale applications, such as MW-scale windfarms or PV plants, may be connected to electric utility primary systems via customer-owned primary and secondary collection systems. In any case, the installation of small, distributed renewable energy sources is expected to have a significant impact on local utility distribution primary and secondary system economics. Small, distributed renewable energy sources installed on utility distribution systems will also produce nonsite-specific utility generation system benefits such as energy and capacity displacement benefits, in addition to the local site-specific distribution system benefits. Although generation system benefits are not site-specific, they are utility-specific, and they vary significantly among utilities in different regions. In addition, transmission system benefits, environmental benefits and other benefits may apply. These benefits also vary significantly among utilities and regions. Seven utility case studies considering PV, WT, and battery storage were conducted to identify a range of potential renewable energy source distribution system applications. The

  3. Long-term particle measurements in Finnish Arctic: Part II - Trend analysis and source location identification

    NASA Astrophysics Data System (ADS)

    Laing, James R.; Hopke, Philip K.; Hopke, Eleanor F.; Husain, Liaquat; Dutkiewicz, Vincent A.; Paatero, Jussi; Viisanen, Yrjö.

    2014-05-01

    Forty-seven years (1964-2010) of weekly trace metal and major ion concentrations in total suspended particle samples from Kevo, Finland were analyzed for long-term trends and by source identification methods. Significant long-term decreasing trends were detected for most species. The largest decreases over the 47 years were Sb (-3.90% yr-1), Pb (-3.87% yr-1), Mn (-3.45% yr-1), Cd (-3.42% yr-1), and Ca (-3.13% yr-1). As, Pb, and Cd concentrations at Kevo were consistent with the reported time-trends of European emissions inventories. Pb concentrations at Kevo have dramatically decreased (92%) in the past 47 years due to the reduced use of leaded gasoline in automobiles. Back-trajectory analysis suggests that the main source areas of anthropogenic species (V, Cd, Mn, Mo, Sb, Tl, W) were predominantly in Eastern Europe, European Russia, and the Baltics. Markers of stationary fuel combustion (V, Mn, Mo, Sb, Se, and Tl) pointed towards source regions in the Pechora Basin and Ural industrial areas in Russia, and near gas and oil fields in western Kazakhstan.

  4. The Multimedia Environmental Pollutant Assessment System (MEPAS){reg_sign}: Source-term release formulations

    SciTech Connect

    Streile, G.P.; Shields, K.D.; Stroh, J.L.; Bagaasen, L.M.; Whelan, G.; McDonald, J.P.; Droppo, J.G.; Buck, J.W.

    1996-11-01

    This report is one of a series of reports that document the mathematical models in the Multimedia Environmental Pollutant Assessment System (MEPAS). Developed by Pacific Northwest National Laboratory for the US Department of Energy, MEPAS is an integrated impact assessment software implementation of physics-based fate and transport models in air, soil, and water media. Outputs are estimates of exposures and health risk assessments for radioactive and hazardous pollutants. Each of the MEPAS formulation documents covers a major MEPAS component such as source-term, atmospheric, vadose zone/groundwater, surface water, and health exposure/health impact assessment. Other MEPAS documentation reports cover the sensitivity/uncertainty formulations and the database parameter constituent property estimation methods. The pollutant source-term release component is documented in this report. MEPAS simulates the release of contaminants from a source, transport through the air, groundwater, surface water, or overland pathways, and transfer through food chains and exposure pathways to the exposed individual or population. For human health impacts, risks are computed for carcinogens and hazard quotients for noncarcinogens. MEPAS is implemented on a desktop computer with a user-friendly interface that allows the user to define the problem, input the required data, and execute the appropriate models for both deterministic and probabilistic analyses.

  5. Long-term X-ray variability of ultraluminous X-ray sources

    NASA Astrophysics Data System (ADS)

    Lin, Lupin Chun-Che; Hu, Chin-Ping; Kong, Albert K. H.; Yen, David Chien-Chang; Takata, Jumpei; Chou, Yi

    2015-12-01

    Long-term X-ray modulations on time-scales from tens to hundreds of days have been widely studied for X-ray binaries located in the Milky Way and the Magellanic Clouds. For other nearby galaxies, only the most luminous X-ray sources can be monitored with dedicated observations. We here present the first systematic study of long-term X-ray variability of four ultraluminous X-ray sources (ESO 243-49 HLX-1, Holmberg IX X-1, M81 X-6, and NGC 5408 X-1) monitored with Swift. By using various dynamic techniques to analyse their light curves, we find several interesting low-frequency quasi-periodicities. Although the periodic signals may not represent any stable orbital modulations, these detections reveal that such long-term regular patterns may be related to superorbital periods and structure of the accretion discs. In particular, we show that the outburst recurrence time of ESO 243-49 HLX-1 varies over time and suggest that it may not be the orbital period. Instead, it may be due to some kinds of precession, and the true binary period is expected to be much shorter.
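
    Swift monitoring light curves are unevenly sampled, so low-frequency quasi-periodicities of the kind reported here are typically searched with a Lomb-Scargle periodogram (the paper itself applies several dynamic analysis techniques). A minimal sketch, assuming astropy is available; the period range and argument names are placeholders.

```python
import numpy as np
from astropy.timeseries import LombScargle

def best_superorbital_period(t_mjd, rate, rate_err, p_min=10.0, p_max=500.0):
    """Return the highest-power period (days) between p_min and p_max for an
    unevenly sampled X-ray light curve (times in MJD, count rates, errors)."""
    freq, power = LombScargle(t_mjd, rate, rate_err).autopower(
        minimum_frequency=1.0 / p_max, maximum_frequency=1.0 / p_min)
    i = int(np.argmax(power))
    return 1.0 / freq[i], power[i]
```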

  6. Distribution of terminal electron-accepting processes in an aquifer having multiple contaminant sources

    USGS Publications Warehouse

    McMahon, P.B.; Bruce, B.W.

    1997-01-01

    Concentrations of electron acceptors, electron donors, and H2 in groundwater were measured to determine the distribution of terminal electron-accepting processes (TEAPs) in an alluvial aquifer having multiple contaminant sources. Upgradient contaminant sources included two separate hydrocarbon point sources, one of which contained the fuel oxygenate methyl tert-butyl ether (MTBE). Infiltrating river water was a source of dissolved NO3, SO4, and organic carbon (DOC) to the downgradient part of the aquifer. Groundwater downgradient from the MTBE source had larger concentrations of electron acceptors (dissolved O2 and SO4) and smaller concentrations of TEAP end products (dissolved inorganic C, Fe2+ and CH4) than groundwater downgradient from the other hydrocarbon source, suggesting that MTBE was not as suitable for supporting TEAPs as the other hydrocarbons. Measurements of dissolved H2 indicated that SO4 reduction predominated in the aquifer during a period of high water levels in the aquifer and river. The predominant TEAP shifted to Fe3+ reduction in upgradient areas after water levels receded but remained SO4 reducing downgradient near the river. This distribution of TEAPs is the opposite of what is commonly observed in aquifers having a single contaminant point source and probably reflects the input of DOC and SO4 to the aquifer from the river. Results of this study indicate that the distribution of TEAPs in aquifers having multiple contaminant sources depends on the composition and location of the contaminants and on the availability of electron acceptors.

  7. Integrating distributed data sources with OGSA-DAI DQP and VIEWS.

    PubMed

    Dobrzelecki, Bartosz; Krause, Amrey; Hume, Alastair C; Grant, Alistair; Antonioletti, Mario; Alemu, Tilaye Y; Atkinson, Malcolm; Jackson, Mike; Theocharopoulos, Elias

    2010-09-13

    OGSA-DAI (Open Grid Services Architecture Data Access and Integration) is a framework for building distributed data access and integration systems. Until recently, it lacked the built-in functionality that would allow easy creation of federations of distributed data sources. The latest release of the OGSA-DAI framework introduced the OGSA-DAI DQP (Distributed Query Processing) resource. The new resource encapsulates a distributed query processor that is able to orchestrate distributed data sources when answering declarative user queries. The query processor has many extensibility points, making it easy to customize. We have also introduced a new OGSA-DAI Views resource that provides a flexible method for defining views over relational data. The interoperability of the two new resources, together with the flexibility of the OGSA-DAI framework, allows the building of highly customized data integration solutions. PMID:20679127

  8. The impact of light source spectral power distribution on sky glow

    NASA Astrophysics Data System (ADS)

    Luginbuhl, Christian B.; Boley, Paul A.; Davis, Donald R.

    2014-05-01

    The effect of light source spectral power distribution on the visual brightness of anthropogenic sky glow is described. Under visual adaptation levels relevant to observing the night sky, namely with dark-adapted (scotopic) vision, blue-rich (“white”) sources produce a dramatically greater sky brightness than yellow-rich sources. High correlated color temperature LEDs and metal halide sources produce a visual brightness up to 8× brighter than low-pressure sodium and 3× brighter than high-pressure sodium when matched lumen-for-lumen and observed nearby. Though the sky brightness arising from blue-rich sources decreases more strongly with distance, the visual sky glow resulting from such sources remains significantly brighter than from yellow sources out to the limits of this study at 300 km.

  9. Technical considerations related to interim source-term assumptions for emergency planning and equipment qualification. [PWR; BWR

    SciTech Connect

    Niemczyk, S.J.; McDowell-Boyer, L.M.

    1982-09-01

    The source terms recommended in the current regulatory guidance for many considerations of light water reactor (LWR) accidents were developed a number of years ago when understandings of many of the phenomena pertinent to source term estimation were relatively primitive. The purpose of the work presented here was to develop more realistic source term assumptions which could be used for interim regulatory purposes for two specific considerations, namely, equipment qualification and emergency planning. The overall approach taken was to adopt assumptions and models previously proposed for various aspects of source term estimation and to modify those assumptions and models to reflect recently gained insights into, and data describing, the release and transport of radionuclides during and after LWR accidents. To obtain illustrative estimates of the magnitudes of the source terms, the results of previous calculations employing the adopted assumptions and models were utilized and were modified to account for the effects of the recent insights and data.

  10. Characterization and Source Term Assessments of Radioactive Particles from Marshall Islands Using Non-Destructive Analytical Techniques

    SciTech Connect

    Jernstrom, J; Eriksson, M; Simon, R; Tamborini, G; Bildstein, O; Carlos-Marquez, R; Kehl, S R; Betti, M; Hamilton, T

    2005-06-11

    A considerable fraction of the radioactivity entering the environment from different nuclear events is associated with particles. The impact of these events can only be fully assessed where there is some knowledge of the mobility of particle-bound radionuclides entering the environment. The behavior of particulate radionuclides depends on several factors, including the physical, chemical and redox state of the environment, the characteristics of the particles (e.g., the chemical composition, crystallinity and particle size) and the oxidative state of the radionuclides contained in the particles. Six plutonium-containing particles stemming from Runit Island soil (Marshall Islands) were characterized using non-destructive analytical and microanalytical methods. By determining the activity of the {sup 239,240}Pu and {sup 241}Am isotopes from their gamma peaks, structural information related to the Pu matrix was obtained and the source term was revealed. The composition and elemental distribution in the particles were studied with synchrotron radiation based micro X-ray fluorescence (SR-{mu}-XRF) spectrometry. A scanning electron microscope equipped with an energy dispersive X-ray detector (SEM-EDX) and a secondary ion mass spectrometer (SIMS) were used to examine the particle surfaces. Based on the elemental composition, the particles were divided into two groups: particles with a plain Pu matrix, and particles in which the plutonium is included in a Si/O-rich matrix and more heterogeneously distributed. All of the particles were identified as fragments of initial weapons material. Since they contain plutonium with a low {sup 240}Pu/{sup 239}Pu atomic ratio, {approx}2-6%, which corresponds to weapons-grade plutonium, the source term was identified to be among the safety tests conducted in the history of Runit Island.

  11. Effect of tissue inhomogeneities on dose distributions from Cf-252 brachytherapy source.

    PubMed

    Ghassoun, J

    2013-01-01

    The Monte Carlo method was used to determine the effect of tissue inhomogeneities on the dose distribution from a Cf-252 brachytherapy source. Neutron and gamma-ray fluences, energy spectra and dose rate distributions were determined in both homogeneous and inhomogeneous phantoms. Simulations were performed using the MCNP5 code. The obtained results were compared with experimentally measured values published in the literature. Results showed a significant change in neutron dose rate distributions in the presence of heterogeneities. However, their effect on the gamma-ray dose distribution is minimal. PMID:23069196

  12. Operational source term estimation and ensemble prediction for the Grimsvoetn 2011 event

    NASA Astrophysics Data System (ADS)

    Maurer, Christian; Arnold, Delia; Klonner, Robert; Wotawa, Gerhard

    2014-05-01

    The ESA-funded international project VAST (Volcanic Ash Strategic Initiative Team) includes a focus on realistic source term estimation in the case of volcanic eruptions, as well as on an estimate of the forecast uncertainty in the resulting atmospheric dispersion calculations, which derives partly from the forecast uncertainty in the meteorological input data. SEVIRI earth observation data, from which the total atmospheric column ash content can be estimated, serve as a basis for the source term estimation. In an operational environment, the already available EUMETCAST VOLE product may be used. Furthermore, an a priori source term is needed, which can be coarsely estimated from information on previous eruptions and/or constrained with observations of the eruption column. The link between observations and the a priori source is established by runs of the atmospheric transport model FLEXPART for individual emission periods and a predefined number of vertical levels. By minimizing the differences between observations and model results, the so-called a posteriori source term can be derived for a certain time interval as a function of height. Such a result is shown for a first test case, the eruption of the Grimsvoetn volcano on Iceland in May 2011. Once the dispersion calculations are optimized as far as possible with regard to the source term, the uncertainty stemming from the forecast uncertainty of the numerical weather prediction model used is still present, adding to the unavoidable model errors. Since it is impossible to perform FLEXPART runs for all 50 members of the Integrated Forecasting System (IFS) of ECMWF due to computational (time-storage) constraints, the number of members is restricted to five (maximum seven) representative runs via cluster analysis. The approach follows Klonner (2012), where it was demonstrated that exclusive consideration of the wind components on a pressure level (e.g. 400 hPa) makes it possible to find clusters and
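
    The a posteriori source term is obtained by fitting the a priori emission profile to the satellite column loads through the FLEXPART source-receptor relationship. Below is a minimal regularised, non-negative least-squares sketch of that inversion step; the matrix and vector names, the regularisation weight and the use of scipy are assumptions, not the operational VAST implementation.

```python
import numpy as np
from scipy.optimize import lsq_linear

def a_posteriori_source(M, y_obs, x_apriori, reg=1.0):
    """Estimate emissions per (height, time) bin from observed column loads.
    M[i, j]: modelled column load at observation i per unit emission in bin j
    (one FLEXPART unit-emission run per bin); y_obs: satellite column loads;
    x_apriori: first-guess emissions; reg: weight pulling towards the prior."""
    A = np.vstack([M, reg * np.eye(M.shape[1])])
    b = np.concatenate([y_obs, reg * x_apriori])
    return lsq_linear(A, b, bounds=(0.0, np.inf)).x
```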

  13. Explicit estimation of higher order modes in fission source distribution of Monte-Carlo calculation

    SciTech Connect

    Yamamoto, A.; Sakata, K.; Endo, T.

    2013-07-01

    The magnitude of higher order modes in the fission source distribution of a multi-group Monte-Carlo calculation is estimated using the orthogonality property of the forward and adjoint fission source distributions. The capability to calculate the forward and adjoint fission source distributions for the fundamental and higher order modes is implemented in the AEGIS code, which is a two-dimensional transport code based on the method of characteristics. With the calculation results of the AEGIS code, the magnitudes of the first to fifth higher order modes in the fission source distribution obtained by the multi-group Monte-Carlo code GMVP are estimated. There are two contributions in the present study - (1) establishment of a surrogate model, which represents the convergence of the fission source distribution taking into account the inherent statistical 'noise' of higher order modes in Monte-Carlo calculations, and (2) independent confirmation of the estimated dominance ratio in a Monte-Carlo calculation. The surrogate model would contribute to studies of the inter-cycle correlation and estimation of a sufficient number of inactive/active cycles. (authors)
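
    The orthogonality argument can be written compactly: with forward and adjoint mode pairs from the deterministic calculation, the amplitude of each higher-order mode contained in a Monte Carlo fission source follows from adjoint-weighted inner products. The sketch below is an idealised, discretised version of that projection; the array names and the simple dot-product inner product are assumptions, not the AEGIS/GMVP implementation.

```python
import numpy as np

def mode_amplitudes(s_mc, forward_modes, adjoint_modes):
    """Project a Monte Carlo fission source (per-cell values, s_mc) onto
    forward/adjoint mode pairs using bi-orthogonality:
        a_n = <s_adj_n, s_mc> / <s_adj_n, s_fwd_n>,
    where <.,.> denotes a sum over mesh cells."""
    return np.array([np.dot(s_adj, s_mc) / np.dot(s_adj, s_fwd)
                     for s_fwd, s_adj in zip(forward_modes, adjoint_modes)])
```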

  14. Sources and distribution of late Pleistocene sand, northern Gulf of Mexico Shelf

    SciTech Connect

    Mazzullo, J.M.; Bates, C.; Reutter, D.; Withers, K.

    1985-02-01

    A completed 3-yr study of the sources and consequent distribution of late Pleistocene sand on the northern Gulf shelf clarifies paleogeography and alluvial identification. Techniques used to determine the sources of sand are: the Fourier technique (which differentiated sands from different source terranes on the basis of the shapes of quartz sand grains), mineralogic analysis (which identified the composition of the source terranes that contributed each quartz-shape type), and an evaluation of the source terranes drained by each of the southern US rivers (thereby linking each shape type to a particular river). These data and the mapped distribution of sand deposited on the shelf by each of these rivers during the late Pleistocene lowstand indicate distribution patterns have not been modified by modern shelf currents to any great extent, and thus record the late Pleistocene paleogeography of the shelf. These distributions show, among other things, the locations of the late Pleistocene alluvial valleys of each of the southern US rivers, and identify the sources of shelf-edge deltas off the coasts of Texas and Louisiana that were detected by shallow seismic analysis.

  15. A Comprehensive Probabilistic Tsunami Hazard Assessment: Multiple Sources and Short-Term Interactions

    NASA Astrophysics Data System (ADS)

    Anita, G.; Selva, J.; Laura, S.

    2011-12-01

    We develop a comprehensive and total probabilistic tsunami hazard assessment (TotPTHA), in which many different possible source types concur to the definition of the total tsunami hazard at given target sites. In a multi-hazard and multi-risk perspective, such an innovative approach makes it possible, in principle, to consider all possible tsunamigenic sources, from seismic events to slides, asteroids, volcanic eruptions, etc. In this respect, we also formally introduce and discuss the treatment of interaction/cascade effects in the TotPTHA analysis. We demonstrate how external triggering events may induce significant temporary variations in the tsunami hazard. Because of this, such effects should always be considered, at least in short-term applications, to obtain unbiased analyses. Finally, we prove the feasibility of the TotPTHA and of the treatment of interaction/cascade effects by applying this methodology to an ideal region with realistic characteristics (Neverland).

  16. The long-term oscillations in sunspots and related inter-sunspot sources in microwave emission

    NASA Astrophysics Data System (ADS)

    Bakunina, I. A.; Abramov-Maximov, V. E.; Smirnova, V. V.

    2016-02-01

    This work presents microwave long-term oscillations with periods of a few tens of minutes obtained from the Nobeyama radioheliograph (NoRH) at a frequency of 17 GHz. In two active regions, the fluctuations of the radio emission of different types of intersunspot sources (ISS) (compact and extended) were compared with the fluctuations in the magnetic fields of sunspots. Common periods in the variations of the microwave emission of the different types of sources and the magnetic field of the sunspots were discovered. A delay of 17 minutes was revealed for the oscillations of the extended ISS with respect to the variations of the magnetic field of its tail sunspot. A model of the sunspot magnetic structure based on the concept of three magnetic fluxes is discussed to explain this fact.

  17. PHENOstruct: Prediction of human phenotype ontology terms using heterogeneous data sources

    PubMed Central

    Kahanda, Indika; Funk, Christopher; Verspoor, Karin; Ben-Hur, Asa

    2015-01-01

    The human phenotype ontology (HPO) was recently developed as a standardized vocabulary for describing the phenotype abnormalities associated with human diseases. At present, only a small fraction of human protein coding genes have HPO annotations. But, researchers believe that a large portion of currently unannotated genes are related to disease phenotypes. Therefore, it is important to predict gene-HPO term associations using accurate computational methods. In this work we demonstrate the performance advantage of the structured SVM approach which was shown to be highly effective for Gene Ontology term prediction in comparison to several baseline methods. Furthermore, we highlight a collection of informative data sources suitable for the problem of predicting gene-HPO associations, including large scale literature mining data. PMID:26834980

  18. Numerical Dissipation and Wrong Propagation Speed of Discontinuities for Stiff Source Terms

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Kotov, D. V.; Sjoegreen, B.

    2012-01-01

    In compressible turbulent combustion/nonequilibrium flows, the constructions of numerical schemes for (a) stable and accurate simulation of turbulence with strong shocks, and (b) obtaining correct propagation speed of discontinuities for stiff reacting terms on coarse grids share one important ingredient - minimization of numerical dissipation while maintaining numerical stability. Here coarse grids means standard mesh density requirement for accurate simulation of typical non-reacting flows. This dual requirement to achieve both numerical stability and accuracy with zero or minimal use of numerical dissipation is most often conflicting for existing schemes that were designed for non-reacting flows. The goal of this paper is to relate numerical dissipations that are inherited in a selected set of high order shock-capturing schemes with the onset of wrong propagation speed of discontinuities as a function of stiffness of the source term and the grid spacing.

  19. Long-term storage life of light source modules by temperature cycling accelerated life test

    NASA Astrophysics Data System (ADS)

    Ningning, Sun; Manqing, Tan; Ping, Li; Jian, Jiao; Xiaofeng, Guo; Wentao, Guo

    2014-05-01

    Light source modules are the most crucial and fragile devices affecting the life and reliability of the interferometric fiber optic gyroscope (IFOG). While the light emitting chips were stable in most cases, the module packaging proved to be less satisfactory. In long-term storage or in the working environment, the ambient temperature changes constantly and thus the packaging and coupling performance of light source modules is likely to degrade slowly because the materials at the bonding interface have different coefficients of thermal expansion. A constant-temperature accelerated life test cannot evaluate the impact of temperature variation on the performance of a module package, so a temperature cycling accelerated life test was studied. The main failure mechanism affecting light source modules is package failure due to solder fatigue, including fiber coupling shift, loss of cooling efficiency and thermal resistor degradation, so the Norris-Landzberg model was used to model solder fatigue life and determine the activation energy related to the solder fatigue failure mechanism. By analyzing the test data, the activation energy was determined and the mean life of light source modules in different storage environments with a continuously changing temperature was then simulated, providing direct reference data for the storage life prediction of the IFOG.
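
    For reference, the Norris-Landzberg model scales solder fatigue life with cycling frequency, temperature range and peak temperature. The sketch below computes the field-to-test acceleration factor; the default exponents and activation energy are commonly quoted SnPb values and are assumptions here, not the values fitted in the paper.

```python
import math

K_BOLTZMANN_EV = 8.617333262e-5   # eV/K

def norris_landzberg_af(f_field, f_test, dT_field, dT_test,
                        Tmax_field_K, Tmax_test_K,
                        m=1.0 / 3.0, n=1.9, Ea_eV=0.122):
    """Norris-Landzberg acceleration factor AF = N_field / N_test for
    thermal-cycling solder fatigue, assuming N ~ f^m * dT^-n * exp(Ea/(k*Tmax)).
    Default m, n, Ea are commonly quoted SnPb values (illustrative only)."""
    freq_term = (f_field / f_test) ** m
    range_term = (dT_test / dT_field) ** n
    arrhenius = math.exp((Ea_eV / K_BOLTZMANN_EV)
                         * (1.0 / Tmax_field_K - 1.0 / Tmax_test_K))
    return freq_term * range_term * arrhenius

# Example: 1 storage cycle/day, dT 30 K, Tmax 308 K, versus a chamber test
# at 24 cycles/day, dT 100 K, Tmax 398 K.
print(norris_landzberg_af(1.0, 24.0, 30.0, 100.0, 308.0, 398.0))
```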

  20. Analysis of source term modeling for low-level radioactive waste performance assessments

    SciTech Connect

    Icenhour, A.S.

    1995-03-01

    Site-specific radiological performance assessments are required for the disposal of low-level radioactive waste (LLW) at both commercial and US Department of Energy facilities. This work explores source term modeling of LLW disposal facilities by using two state-of-the-art computer codes, SOURCE1 and SOURCE2. An overview of the performance assessment methodology is presented, and the basic processes modeled in the SOURCE1 and SOURCE2 codes are described. Comparisons are made between the two advective models for a variety of radionuclides, transport parameters, and waste-disposal technologies. These comparisons show that, in general, the zero-order model predicts undecayed cumulative fractions leached that are slightly greater than or equal to those of the first-order model. For long-lived radionuclides, results from the two models eventually reach the same value. By contrast, for short-lived radionuclides, the zero-order model predicts a slightly higher undecayed cumulative fraction leached than does the first-order model. A new methodology, based on sensitivity and uncertainty analyses, is developed for predicting intruder scenarios. This method is demonstrated for {sup 137}Cs in a tumulus-type disposal facility. The sensitivity and uncertainty analyses incorporate input-parameter uncertainty into the evaluation of a potential time of intrusion and the remaining radionuclide inventory. Finally, conclusions from this study are presented, and recommendations for continuing work are made.
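
    The zero-order and first-order leach models compared above can be expressed in terms of the undecayed cumulative fraction leached. The minimal forms below are generic textbook expressions, not the exact SOURCE1/SOURCE2 formulations, and the rate constants are placeholders; the example also reproduces the qualitative ordering noted in the abstract (zero-order greater than or equal to first-order).

```python
import numpy as np

def cumulative_fraction_zero_order(t, k0):
    """Undecayed cumulative fraction leached for a zero-order (constant-rate)
    release: F = k0 * t, capped at 1 once the inventory is exhausted."""
    return np.minimum(k0 * np.asarray(t, dtype=float), 1.0)

def cumulative_fraction_first_order(t, k1):
    """Undecayed cumulative fraction leached for a first-order release,
    proportional to the remaining inventory: F = 1 - exp(-k1 * t)."""
    return 1.0 - np.exp(-k1 * np.asarray(t, dtype=float))

t = np.array([10.0, 100.0, 1000.0])             # years (illustrative)
print(cumulative_fraction_zero_order(t, 1e-3))  # [0.01  0.1   1.  ]
print(cumulative_fraction_first_order(t, 1e-3)) # [0.00995 0.0952 0.632]
```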

  1. Short-term spatial change in a volcanic tremor source during the 2011 Kirishima eruption

    NASA Astrophysics Data System (ADS)

    Matsumoto, Satoshi; Shimizu, Hiroshi; Matsushima, Takeshi; Uehira, Kenji; Yamashita, Yusuke; Nakamoto, Manami; Miyazaki, Masahiro; Chikura, Hiromi

    2013-04-01

    Volcanic tremors are indicators of magmatic behavior, which is strongly related to volcanic eruptions and activity. Detection of spatial and temporal variations in the source location is important for understanding the mechanism of volcanic eruptions. However, short-term temporal variations within a tremor event have not always been detected by seismic array observations around volcanoes. Here, we show that volcanic tremor sources were activated at both the top (i.e., the crater) and the lower end of the conduit, by analyzing seismograms from a dense seismic array 3 km from the Shinmoedake crater, Kirishima volcano, Japan. We observed changes in the seismic ray direction during a volcanic tremor sequence, and inferred two major sources of the tremor from the slowness vectors of the approaching waves. One was located in a shallow region beneath the Shinmoedake crater. The other was found in a direction N30°W from the array, pointing to a location above a pressure source. The fine spatial and temporal characteristics of volcanic tremors suggest an interaction between deep and shallow conduits.
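
    Array processing of the kind used here reduces, for each time window, to estimating a horizontal slowness vector and reading off the back-azimuth towards the source. The helper below shows only that final geometric step; the east/north component convention is an assumption.

```python
import math

def back_azimuth_deg(s_east, s_north):
    """Back-azimuth (degrees clockwise from north) towards the source, given
    the horizontal slowness vector (s_east, s_north) of the approaching wave;
    the wave propagates along +s, so the source lies in the -s direction."""
    return math.degrees(math.atan2(-s_east, -s_north)) % 360.0

# A wave arriving from the north-west propagates towards the south-east
# (positive east, negative north slowness): back-azimuth of about 315 degrees.
print(back_azimuth_deg(0.2, -0.2))
```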

  2. The integration of renewable energy sources into electric power distribution systems. Volume 2, Utility case assessments

    SciTech Connect

    Zaininger, H.W.; Ellis, P.R.; Schaefer, J.C.

    1994-06-01

    Electric utility distribution system impacts associated with the integration of renewable energy sources such as photovoltaics (PV) and wind turbines (WT) are considered in this project. The impacts are expected to vary from site to site according to the following characteristics: (1) The local solar insolation and/or wind characteristics; (2) renewable energy source penetration level; (3) whether battery or other energy storage systems are applied; and (4) local utility distribution design standards and planning practices. Small, distributed renewable energy sources are connected to the utility distribution system like other, similar kW- and MW-scale equipment and loads. Residential applications are expected to be connected to single-phase 120/240-V secondaries. Larger kW-scale applications may be connected to three-phase secondaries, and larger hundred-kW and MW-scale applications, such as MW-scale windfarms or PV plants, may be connected to electric utility primary systems via customer-owned primary and secondary collection systems. Small, distributed renewable energy sources installed on utility distribution systems will also produce nonsite-specific utility generation system benefits such as energy and capacity displacement benefits, in addition to the local site-specific distribution system benefits. Although generation system benefits are not site-specific, they are utility-specific, and they vary significantly among utilities in different regions. In addition, transmission system benefits, environmental benefits and other benefits may apply. These benefits also vary significantly among utilities and regions. Seven utility case studies considering PV, WT, and battery storage were conducted to identify a range of potential renewable energy source distribution system applications.

  3. EXPERIENCES FROM THE SOURCE-TERM ANALYSIS OF A LOW AND INTERMEDIATE LEVEL RADWASTE DISPOSAL FACILITY

    SciTech Connect

    Park,Jin Beak; Park, Joo-Wan; Lee, Eun-Young; Kim, Chang-Lak

    2003-02-27

    Enhancement of a computer code SAGE for evaluation of the Korean concept for a LILW waste disposal facility is discussed. Several features of source term analysis are embedded into SAGE to analyze: (1) effects of degradation mode of an engineered barrier, (2) effects of dispersion phenomena in the unsaturated zone and (3) effects of time dependent sorption coefficient in the unsaturated zone. IAEA's Vault Safety Case (VSC) approach is used to demonstrate the ability of this assessment code. Results of MASCOT are used for comparison purposes. These enhancements of the safety assessment code, SAGE, can contribute to realistic evaluation of the Korean concept of the LILW disposal project in the near future.

  4. The source term and waste optimization of molten salt reactors with processing

    SciTech Connect

    Gat, U.; Dodds, H.L.

    1993-07-01

    The source term of a molten salt reactor (MSR) with fuel processing is reduced by the ratio of processing time to refueling time as compared to solid fuel reactors. The reduction, which can be one to two orders of magnitude, is due to removal of the long-lived fission products. The waste from MSRs can be optimized with respect to its chemical composition, concentration, mixture, shape, and size. The actinides and long-lived isotopes can be separated out and returned to the reactor for transmutation. These features make MSRs more acceptable and simpler in operation and handling.

  5. DWPF Algorithm for Calculation of Source Terms and Consequences for EXCEL

    Energy Science and Technology Software Center (ESTSC)

    1997-02-11

    The DWPFAST software application algorithm is an Excel spreadsheet, with optional macros, designed to calculate the radiological source terms and consequences due to postulated accident progressions in non-reactor nuclear facilities (currently it is being used for DWPF). Upon input of a multi-character accident progression identification code and basic facility data, the algorithm calculates individual accident segment releases, overall facility releases, and radiological consequences for various receptors, for up to 13 individual radionuclides. The algorithm was designed to support probabilistic safety assessments (PSAs).

  6. SOURCE TERM REMEDIATION & DEMOLITION STRATEGY FOR THE HANFORD K-AREA SPENT FUEL BASINS

    SciTech Connect

    CHRONISTER, G.B.

    2006-03-23

    This paper discusses the technologies applied at Hanford's K-Basins to mitigate risk and reduce the source term in preparing the basins for deactivation and demolition. These project technologies/strategies (in various stages of implementation) are sequential in nature and are the basis for preparing to dispose of the K Basins--two highly contaminated concrete basins at the Hanford Site in southeastern Washington State. A large collection of spent nuclear fuel stored for many years underwater at the K Basins has been removed to stable, dry, safe storage. Remediation activities are underway to prepare the basin structures for de-inventory, decontamination, and disposal.

  7. An Exact Form of Lilley's Equation with a Velocity Quadrupole/Temperature Dipole Source Term

    NASA Technical Reports Server (NTRS)

    Goldstein, Marvin E.

    2001-01-01

    There have been several attempts to introduce approximations into the exact form of Lilley's equation in order to express the source term as the sum of a quadrupole whose strength is quadratic in the fluctuating velocities and a dipole whose strength is proportional to the temperature fluctuations. The purpose of this note is to show that it is possible to choose the dependent (i.e., the pressure) variable so that this type of result can be derived directly from the Euler equations without introducing any additional approximations.

  8. On the numerical treatment of nonlinear source terms in reaction-convection equations

    NASA Technical Reports Server (NTRS)

    Lafon, A.; Yee, H. C.

    1992-01-01

    The objectives of this paper are to investigate how various numerical treatments of the nonlinear source term in a model reaction-convection equation can affect the stability of steady-state numerical solutions and to show under what conditions the conventional linearized analysis breaks down. The underlying goal is to provide part of the basic building blocks toward the ultimate goal of constructing suitable numerical schemes for hypersonic reacting flows, combustion and certain turbulence models in compressible Navier-Stokes computations. It can be shown that nonlinear analysis uncovers many of the nonlinear phenomena which linearized analysis is not capable of predicting in a model reaction-convection equation.

  9. Quantifying the Combined Effect of Radiation Therapy and Hyperthermia in Terms of Equivalent Dose Distributions

    SciTech Connect

    Kok, H. Petra; Crezee, Johannes; Franken, Nicolaas A.P.; Barendsen, Gerrit W.

    2014-03-01

    Purpose: To develop a method to quantify the therapeutic effect of radiosensitization by hyperthermia; to this end, a numerical method was proposed to convert radiation therapy dose distributions with hyperthermia to equivalent dose distributions without hyperthermia. Methods and Materials: Clinical intensity modulated radiation therapy plans were created for 15 prostate cancer cases. To simulate a clinically relevant heterogeneous temperature distribution, hyperthermia treatment planning was performed for heating with the AMC-8 system. The temperature-dependent parameters α (Gy{sup −1}) and β (Gy{sup −2}) of the linear–quadratic model for prostate cancer were estimated from the literature. No thermal enhancement was assumed for normal tissue. The intensity modulated radiation therapy plans and temperature distributions were exported to our in-house-developed radiation therapy treatment planning system, APlan, and equivalent dose distributions without hyperthermia were calculated voxel by voxel using the linear–quadratic model. Results: The planned average tumor temperatures T90, T50, and T10 in the planning target volume were 40.5°C, 41.6°C, and 42.4°C, respectively. The planned minimum, mean, and maximum radiation therapy doses were 62.9 Gy, 76.0 Gy, and 81.0 Gy, respectively. Adding hyperthermia yielded an equivalent dose distribution with an extended 95% isodose level. The equivalent minimum, mean, and maximum doses reflecting the radiosensitization by hyperthermia were 70.3 Gy, 86.3 Gy, and 93.6 Gy, respectively, for a linear increase of α with temperature. This can be considered similar to a dose escalation with a substantial increase in tumor control probability for high-risk prostate carcinoma. Conclusion: A model to quantify the effect of combined radiation therapy and hyperthermia in terms of equivalent dose distributions was presented. This model is particularly instructive to estimate the potential effects of interaction from different
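
    The voxel-by-voxel conversion amounts to solving the linear-quadratic equality for the dose per fraction that, with the reference (37 °C) radiosensitivity, matches the cell kill obtained with the temperature-elevated α and β. A minimal sketch of that step; the parameter names, example values and the per-fraction formulation are assumptions rather than the APlan implementation.

```python
import math

def equivalent_dose_per_fraction(d, alpha_T, beta_T, alpha_37, beta_37):
    """Dose per fraction without hyperthermia giving the same linear-quadratic
    effect as dose d per fraction delivered with hyperthermia:
        alpha_37*d_eq + beta_37*d_eq**2 = alpha_T*d + beta_T*d**2,
    solved for the positive root d_eq."""
    effect = alpha_T * d + beta_T * d * d
    return (-alpha_37 + math.sqrt(alpha_37**2 + 4.0 * beta_37 * effect)) / (2.0 * beta_37)

# Illustrative numbers only: 2 Gy per fraction, alpha raised ~25% by heating.
print(equivalent_dose_per_fraction(2.0, alpha_T=0.25, beta_T=0.033,
                                   alpha_37=0.20, beta_37=0.033))
```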

  10. Regulatory Technology Development Plan Sodium Fast Reactor. Mechanistic Source Term Development

    SciTech Connect

    Grabaskas, David S.; Brunett, Acacia Joann; Bucknor, Matthew D.; Sienicki, James J.; Sofu, Tanju

    2015-02-28

    Construction and operation of a nuclear power installation in the U.S. requires licensing by the U.S. Nuclear Regulatory Commission (NRC). A vital part of this licensing process and integrated safety assessment entails the analysis of a source term (or source terms) that represents the release of radionuclides during normal operation and accident sequences. Historically, nuclear plant source term analyses have utilized deterministic, bounding assessments of the radionuclides released to the environment. Significant advancements in technical capabilities and the knowledge state have enabled the development of more realistic analyses such that a mechanistic source term (MST) assessment is now expected to be a requirement of advanced reactor licensing. This report focuses on the state of development of an MST for a sodium fast reactor (SFR), with the intent of aiding in the process of MST definition by qualitatively identifying and characterizing the major sources and transport processes of radionuclides. Due to common design characteristics among current U.S. SFR vendor designs, a metal-fuel, pool-type SFR has been selected as the reference design for this work, with all phenomenological discussions geared toward this specific reactor configuration. This work also aims to identify the key gaps and uncertainties in the current knowledge state that must be addressed for SFR MST development. It is anticipated that this knowledge state assessment can enable the coordination of technology and analysis tool development discussions such that any knowledge gaps may be addressed. Sources of radionuclides considered in this report include releases originating both in-vessel and ex-vessel, including in-core fuel, primary sodium and cover gas cleanup systems, and spent fuel movement and handling. Transport phenomena affecting various release groups are identified and qualitatively discussed, including fuel pin and primary coolant retention, and behavior in the cover gas and

  11. User's Manual for the SOURCE1 and SOURCE2 Computer Codes: Models for Evaluating Low-Level Radioactive Waste Disposal Facility Source Terms (Version 2.0)

    SciTech Connect

    Icenhour, A.S.; Tharp, M.L.

    1996-08-01

    The SOURCE1 and SOURCE2 computer codes calculate source terms (i.e. radionuclide release rates) for performance assessments of low-level radioactive waste (LLW) disposal facilities. SOURCE1 is used to simulate radionuclide releases from tumulus-type facilities. SOURCE2 is used to simulate releases from silo-, well-, well-in-silo-, and trench-type disposal facilities. The SOURCE codes (a) simulate the degradation of engineered barriers and (b) provide an estimate of the source term for LLW disposal facilities. This manual summarizes the major changes that have been effected since the codes were originally developed.

  12. Distribution and source of (129)I, (239,240)Pu, (137)Cs in the environment of Lithuania.

    PubMed

    Ežerinskis, Ž; Hou, X L; Druteikienė, R; Puzas, A; Šapolaitė, J; Gvozdaitė, R; Gudelis, A; Buivydas, Š; Remeikis, V

    2016-01-01

    Fifty-five soil samples collected in the Lithuanian territory in 2011 and 2012 were analyzed for (129)I, (137)Cs and Pu isotopes in order to investigate the level and distribution of artificial radioactivity in Lithuania. The activity and atomic ratios of (238)Pu/(239,240)Pu, (129)I/(127)I and (131)I/(137)Cs were used to identify the origin of these radionuclides. The (238)Pu/(239+240)Pu and (240)Pu/(239)Pu ratios in the soil samples analyzed varied in the ranges of 0.02-0.18 and 0.18-0.24, respectively, suggesting global fallout as the major source of Pu in Lithuania. The values of 10(-9) to 10(-6) for the (129)I/(127)I atomic ratio revealed that the source of (129)I in Lithuania is global fallout in most cases, though several sampling sites show a possible impact of reprocessing releases. The estimated (129)I/(131)I ratio in soil samples from the southern part of Lithuania shows a negligible input of Chernobyl fallout. No correlation of the (137)Cs and Pu isotopes with (129)I was observed, indicating their different source terms. The results demonstrate an uneven distribution of these radionuclides in the Lithuanian territory and several sources of contamination, i.e., the Chernobyl accident, reprocessing releases and global fallout. PMID:26476410

  13. Using sediment particle size distribution to evaluate sediment sources in the Tobacco Creek Watershed

    NASA Astrophysics Data System (ADS)

    Liu, Cenwei; Lobb, David; Li, Sheng; Owens, Philip; Kuzyk, ZouZou

    2014-05-01

    Lake Winnipeg has recently drawn attention because of its deteriorated water quality, due in part to nutrient and sediment inputs from agricultural land. Improving water quality in Lake Winnipeg requires knowledge of the sediment sources within this ecosystem. A variety of environmental fingerprinting techniques have been successfully used in the assessment of sediment sources. In this study, we used particle size distribution to evaluate spatial and temporal variations of suspended sediment and potential sediment sources collected in the Tobacco Creek Watershed in Manitoba, Canada. The particle size distribution of suspended sediment can reflect the origin of sediment and the processes acting during sediment transport, deposition and remobilization within the watershed. The objectives of this study were to quantify visually observed spatial and temporal changes in sediment particles, and to assess the sediment sources using a rapid and cost-effective fingerprinting technique based on particle size distribution. The suspended sediment was collected by sediment traps twice a year during rainfall and snowmelt periods from 2009 to 2012. The potential sediment sources included the topsoil of cultivated fields, the riparian area and entire profiles from stream banks. Suspended sediment and soil samples were pre-wetted with RO water and sieved through a 600 μm sieve before analysis. The particle size distribution of all samples was determined using a Malvern Mastersizer 2000S laser diffraction analyzer with a measurement range up to 600 μm. Comparison of the results for different fractions of sediment showed a significant difference in the particle size distribution of suspended sediment between snowmelt and rainfall events. An important difference in particle size distribution was also found between the cultivated soil and forest soil. This difference can be explained by the different land uses, which provided a distinct sediment fingerprint. An overall improvement in water quality can be achieved by

  14. Source coding with escort distributions and Rényi entropy bounds

    NASA Astrophysics Data System (ADS)

    Bercher, J.-F.

    2009-08-01

    We discuss the interest of escort distributions and Rényi entropy in the context of source coding. We first recall a source coding theorem by Campbell relating a generalized measure of length to the Rényi-Tsallis entropy. We show that the associated optimal codes can be obtained using considerations on escort distributions. We propose a new family of length measures involving escort distributions and we show that these generalized lengths are also bounded below by the Rényi entropy. Furthermore, we obtain that the standard Shannon code lengths are optimum for the new generalized length measures, whatever the entropic index. Finally, we show that there exists in this setting an interplay between standard and escort distributions.
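
    The two objects the paper builds on can be sketched as follows; this is a generic illustration of escort distributions and Rényi entropy, not the paper's generalized length measures or its coding theorem.

```python
import numpy as np

def escort(p, q):
    """Escort distribution P_i = p_i^q / sum_j p_j^q of a probability vector p."""
    pq = np.asarray(p, dtype=float) ** q
    return pq / pq.sum()

def renyi_entropy(p, q, base=2.0):
    """Renyi entropy H_q(p) = log(sum_i p_i^q) / (1 - q); tends to the Shannon
    entropy as q -> 1."""
    p = np.asarray(p, dtype=float)
    if np.isclose(q, 1.0):
        return float(-(p * np.log(p)).sum() / np.log(base))
    return float(np.log((p ** q).sum()) / ((1.0 - q) * np.log(base)))

p = [0.5, 0.25, 0.125, 0.125]
q = 0.7
print(escort(p, q))         # the escort re-weights low-probability symbols upward for q < 1
print(renyi_entropy(p, q))  # in bits; coincides with the Shannon entropy (1.75 bits) only at q = 1
```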

  15. Accident source terms for pressurized water reactors with high-burnup cores calculated using MELCOR 1.8.5.

    SciTech Connect

    Gauntt, Randall O.; Powers, Dana Auburn; Ashbaugh, Scott G.; Leonard, Mark Thomas; Longmire, Pamela

    2010-04-01

    In this study, risk-significant pressurized-water reactor severe accident sequences are examined using MELCOR 1.8.5 to explore the range of fission product releases to the reactor containment building. Advances in the understanding of fission product release and transport behavior and severe accident progression are used to render best estimate analyses of selected accident sequences. Particular emphasis is placed on estimating the effects of high fuel burnup in contrast with low burnup on fission product releases to the containment. Supporting this emphasis, recent data available on fission product release from high-burnup (HBU) fuel from the French VERCOR project are used in this study. The results of these analyses are treated as samples from a population of accident sequences in order to employ approximate order statistics characterization of the results. These trends and tendencies are then compared to the NUREG-1465 alternative source term prescription used today for regulatory applications. In general, greater differences are observed between the state-of-the-art calculations for either HBU or low-burnup (LBU) fuel and the NUREG-1465 containment release fractions than exist between HBU and LBU release fractions. Current analyses suggest that retention of fission products within the vessel and the reactor coolant system (RCS) are greater than contemplated in the NUREG-1465 prescription, and that, overall, release fractions to the containment are therefore lower across the board in the present analyses than suggested in NUREG-1465. The decreased volatility of Cs2MoO4 compared to CsI or CsOH increases the predicted RCS retention of cesium, and as a result, cesium and iodine do not follow identical behaviors with respect to distribution among vessel, RCS, and containment. With respect to the regulatory alternative source term, greater differences are observed between the NUREG-1465 prescription and both HBU and LBU predictions than exist between HBU and LBU

  16. The Analytical Repository Source-Term (AREST) model: Description and documentation

    SciTech Connect

    Liebetrau, A.M.; Apted, M.J.; Engel, D.W.; Altenhofen, M.K.; Strachan, D.M.; Reid, C.R.; Windisch, C.F.; Erikson, R.L.; Johnson, K.I.

    1987-10-01

    The geologic repository system consists of several components, one of which is the engineered barrier system. The engineered barrier system interfaces with natural barriers that constitute the setting of the repository. A model that simulates the releases from the engineered barrier system into the natural barriers of the geosphere, called a source-term model, is an important component of any model for assessing the overall performance of the geologic repository system. The Analytical Repository Source-Term (AREST) model being developed is one such model. This report describes the current state of development of the AREST model and the code in which the model is implemented. The AREST model consists of three component models and five process models that describe the post-emplacement environment of a waste package. All of these components are combined within a probabilistic framework. The component models are a waste package containment (WPC) model that simulates the corrosion and degradation processes which eventually result in waste package containment failure; a waste package release (WPR) model that calculates the rates of radionuclide release from the failed waste package; and an engineered system release (ESR) model that controls the flow of information among all AREST components and process models and combines release output from the WPR model with failure times from the WPC model to produce estimates of total release. 167 refs., 40 figs., 12 tabs.

  17. Projected Source Terms for Potential Sabotage Events Related to Spent Fuel Shipments

    SciTech Connect

    Luna, R.E.; Neuhauser, K.S.; Vigil, M.G.

    1999-06-01

    Two major studies, one sponsored by the U.S. Department of Energy and the other by the U.S. Nuclear Regulatory Commission, were conducted in the late 1970s and early 1980s to provide information and source terms for an optimally successful act of sabotage on spent fuel casks typical of those available for use. This report applies the results of those studies and additional analysis to derive potential source terms for certain classes of sabotage events on spent fuel casks and spent fuel typical of those which could be shipped in the early decades of the 21st century. In addition to updating the cask and spent fuel characteristics used in the analysis, two release mechanisms not included in the earlier works were identified and evaluated. As would be expected, inclusion of these additional release mechanisms resulted in a somewhat higher total release from the postulated sabotage events. Although health effects from estimated releases were addressed in the earlier study conducted for the U.S. Department of Energy, they have not been addressed in this report. The results from this report may be used to estimate health effects.

  18. Computer simulation of PPF distribution under blue and red LED light source for plant growth.

    PubMed

    Takita, S; Okamoto, K; Yanagi, T

    1996-12-01

    The superimposed luminescence spectra of a blue light emitting diode (LED) and a red LED correspond well to the light absorption spectrum of chlorophyll. If these two kinds of LED are used as a light source, various plant cultivation experiments are possible. Cultivation experiments using such light sources are becoming increasingly common, and in such experiments it is very important to know the distribution of the photosynthetic photon flux (PPF), which exerts an important influence on photosynthesis. Therefore, we have developed a computer simulation system which can visualize the PPF distribution under a light source equipped with blue and red LEDs. In this system, an LED is assumed to be a point light source, and only the photons emitted directly from the LED are considered. The simulation system can display a perspective view of the PPF distribution, transverse and longitudinal sections of the distribution, and a contour map of the distribution. Moreover, a contour map of the ratio of the PPF emitted by the blue LEDs to that emitted by the blue and red LEDs together can be displayed. Because the representation uses colored lines according to the magnitude of the PPF, a user can readily understand and evaluate the state of the PPF. PMID:11541576
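
    A minimal sketch of the stated modeling assumption (each LED treated as a point source, with only directly emitted photons counted) is given below; the isotropic emission pattern and all numerical values are illustrative assumptions, since real LEDs are directional and the abstract does not specify the emission model.

```python
import numpy as np

def ppf_on_plane(led_xyz, led_flux, grid_x, grid_y, z_plane=0.0):
    """Direct PPF on a horizontal plane below an array of point-source LEDs.

    led_xyz  : (N, 3) LED positions in m
    led_flux : (N,) photon flux of each LED in umol s^-1
    Assumes isotropic emission (illustrative) and ignores reflections/scattering.
    """
    X, Y = np.meshgrid(grid_x, grid_y)
    ppf = np.zeros_like(X)
    for (x0, y0, z0), flux in zip(led_xyz, led_flux):
        dx, dy, dz = X - x0, Y - y0, z_plane - z0
        r2 = dx ** 2 + dy ** 2 + dz ** 2
        cos_incidence = np.abs(dz) / np.sqrt(r2)     # tilt factor for a horizontal plane
        ppf += flux * cos_incidence / (4.0 * np.pi * r2)
    return ppf

leds = np.array([[0.0, 0.0, 0.3], [0.1, 0.0, 0.3]])  # two LEDs 0.3 m above the plane
flux = np.array([1.0, 1.0])                          # umol s^-1 each (illustrative)
x = y = np.linspace(-0.2, 0.2, 5)
print(np.round(ppf_on_plane(leds, flux, x, y), 2))   # umol m^-2 s^-1 on the 5 x 5 grid
```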

  19. KAPPA DISTRIBUTION MODEL FOR HARD X-RAY CORONAL SOURCES OF SOLAR FLARES

    SciTech Connect

    Oka, M.; Ishikawa, S.; Saint-Hilaire, P.; Krucker, S.; Lin, R. P.

    2013-02-10

    Solar flares produce hard X-ray emission, the photon spectrum of which is often represented by a combination of thermal and power-law distributions. However, the estimates of the number and total energy of non-thermal electrons are sensitive to the determination of the power-law cutoff energy. Here, we revisit an 'above-the-loop' coronal source observed by RHESSI on 2007 December 31 and show that a kappa distribution model can also be used to fit its spectrum. Because the kappa distribution has a Maxwellian-like core in addition to a high-energy power-law tail, the emission measure and temperature of the instantaneous electrons can be derived without assuming the cutoff energy. Moreover, the non-thermal fractions of electron number/energy densities can be uniquely estimated because they are functions of only the power-law index. With the kappa distribution model, we estimated that the total electron density of the coronal source region was ≈2.4 × 10¹⁰ cm⁻³. We also estimated without assuming the source volume that a moderate fraction (≈20%) of electrons in the source region was non-thermal and carried ≈52% of the total electron energy. The temperature was 28 MK, and the power-law index δ of the electron density distribution was -4.3. These results are compared to the conventional power-law models with and without a thermal core component.
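
    For reference, the kappa distribution referred to above can be written (in electron energy) as a Maxwellian-like core multiplied by a power-law roll-off; the sketch below uses the standard functional form with an illustrative κ and a temperature of roughly 28 MK (kT ≈ 2.4 keV), normalizing numerically rather than quoting the closed-form constant.

```python
import numpy as np

def kappa_energy_shape(E, kT, kappa):
    """Unnormalized kappa distribution of electron energy E (same units as kT):
    a Maxwellian-like core with a power-law tail; tends to a Maxwellian as
    kappa -> infinity. Requires kappa > 1.5."""
    return np.sqrt(E) * (1.0 + E / ((kappa - 1.5) * kT)) ** (-(kappa + 1.0))

E = np.linspace(1e-3, 200.0, 200000)           # energy grid in keV
dE = E[1] - E[0]
f = kappa_energy_shape(E, kT=2.4, kappa=5.0)   # kT ~ 2.4 keV is roughly 28 MK
f /= f.sum() * dE                              # normalize numerically to unit area

# fraction of electrons above an arbitrary reference energy, with no cutoff assumed
print(f[E > 10.0].sum() * dE)
```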

  20. Numerical evaluation of the jet noise source distribution from far-field cross correlations

    NASA Technical Reports Server (NTRS)

    Maestrello, L.; Liu, C.-H.

    1976-01-01

    This paper develops techniques to determine the relationship between the unknown source correlation function and the correlation of scattered amplitudes in a jet. This study has application to the determination of forward motion effects. The technique has been developed and tested on a model jet at high subsonic flow. A numerical solution was obtained by solving a Fredholm integral equation of the first kind. Interpretation of the apparent source distribution and its application to flight testing are provided.
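
    Recovering a source distribution from measured correlations is, after discretization, a first-kind Fredholm problem, which is ill-posed; below is a hedged sketch of one standard way to stabilize such a problem (Tikhonov regularization on a toy Gaussian kernel), not the authors' actual kernel or solver.

```python
import numpy as np

def solve_fredholm_first_kind(K, g, reg=1e-6):
    """Solve g = K f for f, where K discretizes a smoothing (first-kind) kernel.

    Plain least squares amplifies noise, so a Tikhonov penalty reg*||f||^2 is
    added: f = (K^T K + reg I)^-1 K^T g.
    """
    KtK = K.T @ K
    return np.linalg.solve(KtK + reg * np.eye(KtK.shape[0]), K.T @ g)

# toy example: recover a two-peak source distribution blurred by a Gaussian kernel
x = np.linspace(0.0, 1.0, 100)
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.05 ** 2)) * (x[1] - x[0])
f_true = np.exp(-((x - 0.3) ** 2) / 0.002) + 0.5 * np.exp(-((x - 0.7) ** 2) / 0.002)
g = K @ f_true + 1e-4 * np.random.default_rng(0).standard_normal(x.size)
f_est = solve_fredholm_first_kind(K, g)
print(float(np.abs(f_est - f_true).max()))   # reconstruction error of the regularized solution
```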

  1. Mismatched-basis statistics enable quantum key distribution with uncharacterized qubit sources

    NASA Astrophysics Data System (ADS)

    Yin, Zhen-Qiang; Fung, Chi-Hang Fred; Ma, Xiongfeng; Zhang, Chun-Mei; Li, Hong-Wei; Chen, Wei; Wang, Shuang; Guo, Guang-Can; Han, Zheng-Fu

    2014-11-01

    In the postprocessing of quantum key distribution, the raw key bits from the mismatched-basis measurements, where two parties use different bases, are normally discarded. Here, we propose a postprocessing method that exploits measurement statistics from mismatched-basis cases and prove that incorporating these statistics enables uncharacterized qubit sources to be used in the measurement-device-independent quantum key distribution protocol and the Bennett-Brassard 1984 protocol, which is otherwise impossible.

  2. Improvement of capabilities of the Distributed Electrochemistry Modeling Tool for investigating SOFC long term performance

    SciTech Connect

    Gonzalez Galdamez, Rinaldo A.; Recknagle, Kurtis P.

    2012-04-30

    This report provides an overview of the work performed for Solid Oxide Fuel Cell (SOFC) modeling during the 2012 Winter/Spring Science Undergraduate Laboratory Internship at Pacific Northwest National Laboratory (PNNL). A brief introduction on the concept, operation basics and applications of fuel cells is given for the general audience. Further details are given regarding the modifications and improvements of the Distributed Electrochemistry (DEC) Modeling tool developed by PNNL engineers to model SOFC long term performance. Within this analysis, a literature review on anode degradation mechanisms is explained and future plans of implementing these into the DEC modeling tool are also proposed.

  3. Long-term monitoring of a marine geologic hydrocarbon source by a coastal air pollution station in Southern California

    NASA Astrophysics Data System (ADS)

    Bradley, Eliza; Leifer, Ira; Roberts, Dar

    2010-12-01

    Hourly total hydrocarbon (THC) data, spanning 1990-2008 from a California air pollution station located near the Coal Oil Point (COP) seep field, were analyzed and clearly showed geologic CH₄ emissions as the dominant local source. Annual COP emissions are conservatively estimated as 0.015 Tg CH₄ year⁻¹ and represent a natural and concentrated geologic methane source (24 m³ m⁻² day⁻¹ gas flux at some active seeps, Clark et al., 2010). For a sense of the scale and potential importance to the regional Southern California methane budget, COP emits an amount equivalent to 8% of the estimated Los Angeles County anthropogenic emissions. Station THC measurements near COP showed a strong wind dependency with elevated levels closely correlated with a sonar-derived spatial distribution of seep field emissions. THC varied seasonally, with a maximum in January and minimum in July and a peak-to-peak amplitude of 0.24 ppm. The seasonal signal was more readily apparent midday (R² = 0.69 harmonic fit), compared to nighttime and morning (R² < 0.45). The bimodal diel THC pattern consisted of seasonally-modulated peaks in the morning and evening. THC temporal and spatial trends were consistent with both transport and source emission variations. Long-term, annual seep field emissions consistently decreased on a field-wide basis until the late 1990s, before increasing consistently, most likely as a function of underlying geologic processes. This study demonstrates the value of municipal air quality monitoring stations for insight into local greenhouse gas sources and highlights the non-negligible and variable contribution from marine geologic seepage.

  4. Source-term development for a contaminant plume for use by multimedia risk assessment models

    SciTech Connect

    Whelan, Gene ); McDonald, John P. ); Taira, Randal Y. ); Gnanapragasam, Emmanuel K.; Yu, Charley; Lew, Christine S.; Mills, William B.

    1999-12-01

    Multimedia modelers from the U.S. Environmental Protection Agency (EPA) and the U.S. Department of Energy (DOE) are collaborating to conduct a comprehensive and quantitative benchmarking analysis of four intermedia models: DOE's Multimedia Environmental Pollutant Assessment System (MEPAS), EPA's MMSOILS, EPA's PRESTO, and DOE's RESidual RADioactivity (RESRAD). These models represent typical analytically, semi-analytically, and empirically based tools that are utilized in human risk and endangerment assessments for use at installations containing radioactive and/or hazardous contaminants. Although the benchmarking exercise traditionally emphasizes the application and comparison of these models, the establishment of a Conceptual Site Model (CSM) should be viewed with equal importance. This paper reviews an approach for developing a CSM of an existing, real-world, Sr-90 plume at DOE's Hanford installation in Richland, Washington, for use in a multimedia-based benchmarking exercise between MEPAS, MMSOILS, PRESTO, and RESRAD. In an unconventional move for analytically based modeling, the benchmarking exercise will begin with the plume as the source of contamination. The source and release mechanism are developed and described within the context of performing a preliminary risk assessment utilizing these analytical models. By beginning with the plume as the source term, this paper reviews a typical process and procedure an analyst would follow in developing a CSM for use in a preliminary assessment using this class of analytical tool.

  5. CCN frequency distributions and aerosol chemical composition from long-term observations at European ACTRIS supersites

    NASA Astrophysics Data System (ADS)

    Decesari, Stefano; Rinaldi, Matteo; Schmale, Julia Yvonne; Gysel, Martin; Fröhlich, Roman; Poulain, Laurent; Henning, Silvia; Stratmann, Frank; Facchini, Maria Cristina

    2016-04-01

    Cloud droplet number concentration is regulated by the availability of aerosol acting as cloud condensation nuclei (CCN). Predicting the air concentrations of CCN involves knowledge of all physical and chemical processes that contribute to shape the particle size distribution and determine aerosol hygroscopicity. The relevance of specific atmospheric processes (e.g., nucleation, coagulation, condensation of secondary organic and inorganic aerosol, etc.) is time- and site-dependent, therefore the availability of long-term, time-resolved aerosol observations at locations representative of diverse environments is strategic for the validation of state-of-the-art chemical transport models suited to predict CCN concentrations. We focused on long-term (year-long) datasets of CCN and of aerosol composition data including black carbon, and inorganic as well as organic compounds from the Aerosol Chemical Speciation Monitor (ACSM) at selected ACTRIS supersites (http://www.actris.eu/). We discuss here the joint frequency distribution of CCN levels and of aerosol chemical components concentrations for two stations: an alpine site (Jungfraujoch, CH) and a central European rural site (Melpitz, DE). The CCN frequency distributions at Jungfraujoch are broad and generally correlated with the distributions of the concentrations of aerosol chemical components (e.g., high CCN concentrations are most frequently found for high organic matter or black carbon concentrations, and vice versa), which can be explained as an effect of the strong seasonality in the aerosol characteristics at the mountain site. The CCN frequency distributions in Melpitz show a much weaker overlap with the distributions of BC concentrations or other chemical compounds. However, especially at high CCN concentration levels, a statistical correlation with organic matter (OM) concentration can be observed. For instance, the number of CCN (with particle diameter between 20 and 250 nm) at a supersaturation of 0.7% is

  6. Long-term observations of aerosol size distributions in semi-clean and polluted savannah in South Africa

    NASA Astrophysics Data System (ADS)

    Vakkari, V.; Beukes, J. P.; Laakso, H.; Mabaso, D.; Pienaar, J. J.; Kulmala, M.; Laakso, L.

    2012-09-01

    originates from regional wild fires, while at Marikana domestic heating in the informal settlements is the main source. Air mass history analysis for Botsalano identified four regional scale source areas in Southern Africa and enabled the differentiation between fresh and aged rural background aerosol originating from the clean sector, i.e., western sector with very few large anthropogenic sources. Comparison to size distributions published for other comparable environments in Northern Hemisphere shows Southern African savannah to have a unique combination of sources and meteorological parameters. The observed strong link between combustion and seasonal variation is comparable only to the Amazon basin; however the lack of long-term observations in the Amazonas does not allow a quantitative comparison. All the data presented in the figures, as well as the time series of monthly mean and median size distributions are included in numeric form as a Supplement to provide a reference point for the aerosol modelling community.

  7. Long-term observations of aerosol size distributions in semi-clean and polluted savannah in South Africa

    NASA Astrophysics Data System (ADS)

    Vakkari, V.; Beukes, J. P.; Laakso, H.; Mabaso, D.; Pienaar, J. J.; Kulmala, M.; Laakso, L.

    2013-02-01

    concentration originates from regional wild fires, while at Marikana domestic heating in the informal settlements is the main source. Air mass history analysis for Botsalano identified four regional scale source areas in southern Africa and enabled the differentiation between fresh and aged rural background aerosol originating from the clean sector, i.e., western sector with very few large anthropogenic sources. Comparison to size distributions published for other comparable environments in Northern Hemisphere shows southern African savannah to have a unique combination of sources and meteorological parameters. The observed strong link between combustion and seasonal variation is comparable only to the Amazon basin; however, the lack of long-term observations in the Amazonas does not allow a quantitative comparison. All the data presented in the figures, as well as the time series of monthly mean and median size distributions are included in numeric form as a Supplement to provide a reference point for the aerosol modelling community.

  8. X-ray point source distribution in the Galactic center region

    NASA Astrophysics Data System (ADS)

    Hong, J.; Grindlay, J.; Laycock, S.; Schlegel, E. M.; Zhao, P.

    2003-12-01

    Recent deep (520 ksec) Chandra observations on the Sgr A* by Muno et al. (2003, ApJ, 589, 225) discovered a significant population of hard X-ray (2-8 keV) sources strongly peaked around Sgr A*, falling off as 1/theta from the center. Preliminary analysis by Hong et al (2003, HEAD, 35, 1402) compared their spatial distribution with that of relatively shallow (2 x 12 ksec observation per field) and wide field (2 deg x 0.5 deg) observations around the Galactic center by Wang et al (2002, Nature, 415, 148). As a part of our on-going Chandra Multiwavelength Plane (ChaMPlane) Survey project to determine the accretion source content of the Galaxy, we (re)-analyzed all the available Chandra observations on the Galactic center region using Chandra analysis tools developed for ChaMPLane (Hong et al 2003). This analysis includes the Wang fields as well as archival data of a deep (100 ksec) Chandra observation of Sgr B2 (Takagi et al. 2002, ApJ, 573, 275) and other moderately deep (50 ksec) fields within 2 deg of the Galactic center. We derive logN-logS source distributions in the separate fields for a preliminary analysis of the overall source distribution in the central bulge. This is compared with the source distributions in our deep survey of Baade's Window (cf. Grindlay et al and Laycock et al; this meeting) for an initial estimate of the large scale point source distribution in the Bulge. This work is supported by NASA grants AR2-3002A and GO3-4033A.

  9. Source Distributions of Substorm Ions Observed in the Near-Earth Magnetotail

    NASA Technical Reports Server (NTRS)

    Ashour-Abdalla, M.; El-Alaoui, M.; Peroomian, V.; Walker, R. J.; Raeder, J.; Frank, L. A.; Paterson, W. R.

    1999-01-01

    This study employs Geotail plasma observations and numerical modeling to determine sources of the ions observed in the near-Earth magnetotail near midnight during a substorm. The growth phase has the low-latitude boundary layer as its most important source of ions at Geotail, but during the expansion phase the plasma mantle is dominant. The mantle distribution shows evidence of two distinct entry mechanisms: entry through a high latitude reconnection region resulting in an accelerated component, and entry through open field lines traditionally identified with the mantle source. The two entry mechanisms are separated in time, with the high-latitude reconnection region disappearing prior to substorm onset.

  10. Detailed dose distribution prediction of Cf-252 brachytherapy source with boron loading dose enhancement.

    PubMed

    Ghassoun, J; Mostacci, D; Molinari, V; Jehouani, A

    2010-02-01

    The purpose of this work is to evaluate the dose rate distribution and to determine the boron effect on the dose rate distribution for a (252)Cf brachytherapy source. This study was carried out using a Monte Carlo simulation. To validate the Monte Carlo computer code, the dosimetric parameters were determined following the updated TG-43 formalism and compared with current literature data. The validated computer code was then applied to evaluate the neutron and photon dose distributions and to illustrate the boron loading effect. PMID:19889549
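
    For context, the TG-43 formalism mentioned above expresses the dose rate around a sealed source, in its 2D (line-source) form, as the product below; symbols follow the AAPM TG-43 report (S_K air-kerma strength, Λ dose-rate constant, G_L geometry function, g_L radial dose function, F 2D anisotropy function, reference point r₀ = 1 cm, θ₀ = π/2).

```latex
% TG-43 two-dimensional (line-source) dose-rate formalism
\dot{D}(r,\theta) \;=\; S_K \,\Lambda\,
  \frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\; g_L(r)\; F(r,\theta)
```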

  11. Distribution functions in plasmas generated by a volume source of fission fragments. [in nuclear pumped lasers

    NASA Technical Reports Server (NTRS)

    Deese, J. E.; Hassan, H. A.

    1979-01-01

    The role played by fission fragments and electron distribution functions in nuclear pumped lasers is considered and procedures for their calculations are outlined. The calculations are illustrated for a He-3/Xe mixture where fission is provided by the He-3(n,p)H-3 reaction. Because the dominant ion in the system depends on the Xe fraction, the distribution functions cannot be determined without the simultaneous consideration of a detailed kinetic model. As is the case for wall sources of fission fragments, the resulting plasmas are essentially thermal but the electron distribution functions are non-Maxwellian.

  12. Marine litter on Mediterranean shores: Analysis of composition, spatial distribution and sources in north-western Adriatic beaches.

    PubMed

    Munari, Cristina; Corbau, Corinne; Simeoni, Umberto; Mistri, Michele

    2016-03-01

    Marine litter is one descriptor in the EU Marine Strategy Framework Directive (MSFD). This study provides the first account of an MSFD indicator (Trends in the amount of litter deposited on coastlines) for the north-western Adriatic. Five beaches were sampled in 2015. Plastic dominated in terms of abundance, followed by paper and other groups. The average density was 0.2 litter items m⁻², but at one beach it rose to 0.57 items m⁻². The major categories were cigarette butts, unrecognizable plastic pieces, bottle caps, and others. The majority of marine litter came from land-based sources: shoreline and recreational activities, smoke-related activities and dumping. Sea-based sources contributed less. The abundance and distribution of litter seemed to be particularly influenced by beach users, reflecting inadequate disposal practices. The solution to these problems involves implementation and enforcement of local educational and management policies. PMID:26725754

  13. Long-term accounting for raindrop size distribution variations improves quantitative precipitation estimation by weather radar

    NASA Astrophysics Data System (ADS)

    Hazenberg, Pieter; Leijnse, Hidde; Uijlenhoet, Remko

    2016-04-01

    Weather radars provide information on the characteristics of precipitation at high spatial and temporal resolution. Unfortunately, rainfall measurements by radar are affected by multiple error sources. The current study is focused on the impact of variations of the raindrop size distribution on radar rainfall estimates. Such variations lead to errors in the estimated rainfall intensity (R) and specific attenuation (k) when using fixed relations for the conversion of the observed reflectivity (Z) into R and k. For non-polarimetric radar, this error source has received relatively little attention compared to other error sources. We propose to link the parameters of the Z-R and Z-k relations directly to those of the normalized gamma DSD. The benefit of this procedure is that it reduces the number of unknown parameters. In this work, the DSD parameters are obtained using 1) surface observations from a Parsivel and Thies LPM disdrometer, and 2) a Monte Carlo optimization procedure using surface rain gauge observations. The impact of both approaches for a given precipitation type is assessed for 45 days of summertime precipitation observed in The Netherlands. Accounting for DSD variations using disdrometer observations leads to an improved radar QPE product as compared to applying climatological Z-R and Z-k relations. This especially holds for situations where widespread stratiform precipitation is observed. The best results are obtained when the DSD parameters are optimized. However, the optimized Z-R and Z-k relations show an unrealistic variability that arises from uncorrected error sources. As such, the optimization approach does not result in a realistic DSD shape but instead also compensates for uncorrected error sources, yielding the best radar rainfall adjustment. Therefore, to further improve the quality of precipitation estimates by weather radar, use should be made either of polarimetric radar or of an extended disdrometer network.
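
    The crux of the DSD sensitivity is that the coefficients of the Z-R power law are not universal; the sketch below illustrates the conversion using the classical Marshall-Palmer coefficients as placeholders, where a DSD-aware retrieval would instead derive a and b from the normalized gamma DSD parameters (e.g. from disdrometer data).

```python
import numpy as np

def rain_rate_from_reflectivity(dBZ, a=200.0, b=1.6):
    """Convert radar reflectivity (dBZ) to rain rate R (mm h^-1) via Z = a * R^b.

    a, b default to the classical Marshall-Palmer values; in a DSD-aware
    retrieval they would be tied to the (time-varying) DSD parameters instead
    of being held fixed climatologically.
    """
    Z_linear = 10.0 ** (dBZ / 10.0)          # mm^6 m^-3
    return (Z_linear / a) ** (1.0 / b)

dBZ = np.array([20.0, 30.0, 40.0])
# the same reflectivities map to noticeably different rain rates under different (a, b):
print(np.round(rain_rate_from_reflectivity(dBZ), 2))                   # Marshall-Palmer
print(np.round(rain_rate_from_reflectivity(dBZ, a=300.0, b=1.4), 2))   # a convective-type relation
```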

  14. Detecting Long-term Changes in Point Source Fossil CO2 Emissions with Tree Ring Archives

    NASA Astrophysics Data System (ADS)

    Keller, E. D.; Turnbull, J. C.; Norris, M. W.

    2015-12-01

    We examine the utility of tree ring 14C archives for detecting long term changes in fossil CO2 emissions from a point source. Trees assimilate carbon from the atmosphere during photosynthesis, in the process faithfully recording the average atmospheric 14C content over the growing season in each annual tree ring. Using 14C as a proxy for fossil CO2, we examine interannual variability over six years of fossil CO2 observations between 2004 and 2012 from two trees growing near the Kapuni Natural Gas Plant in rural Taranaki, New Zealand. We quantify the amount of variability that can be attributed to transport and meteorology by simulating constant point source fossil CO2 emissions over the observation period with the atmospheric transport model WindTrax. We then calculate the amount of change in emissions that we can detect with new observations over annual or multi-year time periods given both measurement uncertainty of 1ppm and the modelled variation in transport. In particular, we ask, what is the minimum amount of change in emissions that we can detect using this method, given a reference period of six years? We find that changes of 42% or more could be detected in a new sample from one year at the pine tree, or 22% in the case of four years of new samples. This threshold lowers and the method becomes more practical with a larger signal; for point sources 10 times the magnitude of the Kapuni plant (a typical size for large electricity generation point sources worldwide), it would be possible to detect sustained emissions changes on the order of 10% given suitable meteorology and observations.

  15. Impact of the differential fluence distribution of brachytherapy sources on the spectroscopic dose-rate constant

    SciTech Connect

    Malin, Martha J.; Bartol, Laura J.; DeWerd, Larry A. E-mail: ladewerd@wisc.edu

    2015-05-15

    Purpose: To investigate why dose-rate constants for ¹²⁵I and ¹⁰³Pd seeds computed using the spectroscopic technique, Λ_spec, differ from those computed with standard Monte Carlo (MC) techniques. A potential cause of these discrepancies is the spectroscopic technique's use of approximations of the true fluence distribution leaving the source, φ_full. In particular, the fluence distribution used in the spectroscopic technique, φ_spec, approximates the spatial, angular, and energy distributions of φ_full. This work quantified the extent to which each of these approximations affects the accuracy of Λ_spec. Additionally, this study investigated how the simplified water-only model used in the spectroscopic technique impacts the accuracy of Λ_spec. Methods: Dose-rate constants as described in the AAPM TG-43U1 report, Λ_full, were computed with MC simulations using the full source geometry for each of 14 different ¹²⁵I and 6 different ¹⁰³Pd source models. In addition, the spectrum emitted along the perpendicular bisector of each source was simulated in vacuum using the full source model and used to compute Λ_spec. Λ_spec was compared to Λ_full to verify the discrepancy reported by Rodriguez and Rogers. Using MC simulations, a phase space of the fluence leaving the encapsulation of each full source model was created. The spatial and angular distributions of φ_full were extracted from the phase spaces and were qualitatively compared to those used by φ_spec. Additionally, each phase space was modified to reflect one of the approximated distributions (spatial, angular, or energy) used by φ_spec. The dose-rate constant resulting from using approximated distribution i, Λ_approx,i, was computed using the modified phase space and compared to Λ_full. For each source, this process was repeated for each approximation in order to determine which approximations used in

  16. Depth to the bottom of magnetic sources (DBMS) from aeromagnetic data of Central India using modified centroid method for fractal distribution of sources

    NASA Astrophysics Data System (ADS)

    Bansal, A. R.; Anand, S.; Rajaram, M.; Rao, V.; Dimri, V. P.

    2012-12-01

    The depth to the bottom of the magnetic sources (DBMS) may be used as an estimate of the Curie-point depth. The DBMS can also be interpreted in terms of the thermal structure of the crust. The thermal structure of the crust is a sensitive parameter and depends on many properties of the crust, e.g., modes of deformation, depths of brittle and ductile deformation zones, regional heat flow variations, seismicity, subsidence/uplift patterns and maturity of organic matter in sedimentary basins. The conventional centroid method of DBMS estimation assumes a random, uniform, uncorrelated distribution of sources; to overcome this limitation, a modified centroid method based on a fractal distribution of sources has been proposed. We applied this modified centroid method to the aeromagnetic data of the central Indian region and selected 29 half-overlapping blocks of dimension 200 km x 200 km covering different parts of central India. Shallower values of the DBMS are found for the western and southern portions of the Indian shield. The DBMS values are found to be as shallow as the middle crust in the southwestern Deccan trap and probably deeper than the Moho in the Chhatisgarh basin. In a few places the DBMS is close to the Moho depth found from seismic studies, and at other places it is shallower than the Moho. The DBMS indicates the complex nature of the Indian crust.
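
    A compact sketch of the centroid method, with the fractal correction applied as a spectral pre-whitening step, is given below; the wavenumber bands, the scaling exponent and the synthetic spectrum are illustrative assumptions, not the parameters used in the study.

```python
import numpy as np

def centroid_dbms(k, power, beta=3.0, top_band=(0.1, 0.3), centroid_band=(0.005, 0.02)):
    """Depth to the bottom of magnetic sources from a radially averaged power
    spectrum P(k) of aeromagnetic data (centroid method).

    beta is the assumed fractal scaling exponent of the sources; multiplying
    P(k) by k**beta removes their correlation before the standard fits.
    With k in rad/km the returned depth is in km. Band limits are illustrative.
    """
    p = power * k ** beta                                    # fractal correction
    def slope(y, lo, hi):
        m = (k >= lo) & (k <= hi)
        return np.polyfit(k[m], y[m], 1)[0]
    z_top = -slope(np.log(np.sqrt(p)), *top_band)            # depth to top (high-k fit)
    z_cen = -slope(np.log(np.sqrt(p) / k), *centroid_band)   # centroid depth (low-k fit)
    return 2.0 * z_cen - z_top                               # depth to bottom

# synthetic spectrum for a magnetic layer between 2 km (top) and 30 km (bottom)
k = np.linspace(0.005, 0.3, 500)
power = (np.exp(-2.0 * k) - np.exp(-30.0 * k)) ** 2 / k ** 3.0
print(round(centroid_dbms(k, power), 1))   # close to, though a few km short of, 30 km
```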

  17. DISTRIBUTION, TYPE, ACCUMULATION AND SOURCE OF MARINE DEBRIS IN THE UNITED STATES, 1989-93

    EPA Science Inventory

    Distribution, type, accumulation, & source of marine debris on coastal beaches and in harbors of the United States were examined from 1989 to 1993. Information was compiled from annual beach cleanups coordinated by the Center for Marine Conservation, quarterly beach surveys at eig...

  18. Bacterial Composition in a Metropolitan Drinking Water Distribution System Utilizing Different Source Waters

    EPA Science Inventory

    The microbial community structure was investigated from bulk phase water samples of multiple collection sites from two service areas within the Cincinnati drinking water distribution system (DWDS). Each area is associated with a different primary source of water (i.e., groundwat...

  19. 30 CFR 872.12 - Where do moneys distributed from the Fund and other sources go?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 3 2011-07-01 2011-07-01 false Where do moneys distributed from the Fund and other sources go? 872.12 Section 872.12 Mineral Resources OFFICE OF SURFACE MINING RECLAMATION AND ENFORCEMENT, DEPARTMENT OF THE INTERIOR ABANDONED MINE LAND RECLAMATION MONEYS AVAILABLE TO ELIGIBLE...

  20. 30 CFR 872.12 - Where do moneys distributed from the Fund and other sources go?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 3 2010-07-01 2010-07-01 false Where do moneys distributed from the Fund and other sources go? 872.12 Section 872.12 Mineral Resources OFFICE OF SURFACE MINING RECLAMATION AND ENFORCEMENT, DEPARTMENT OF THE INTERIOR ABANDONED MINE LAND RECLAMATION MONEYS AVAILABLE TO ELIGIBLE...

  1. MICROBIOLOGICAL CHANGES IN SOURCE WATER TREATMENT: REFLECTIONS IN DISTRIBUTION WATER QUALITY

    EPA Science Inventory

    Microbial quality in the distribution system is a reflection of raw source water characteristics, treatment process configurations and their modifications. Based on case history experiences there may at times be a microbial breakthrough that is caused by fluctuations in raw surfac...

  2. GIS Based Distributed Runoff Predictions in Variable Source Area Watersheds Employing the SCS-Curve Number

    NASA Astrophysics Data System (ADS)

    Steenhuis, T. S.; Mendoza, G.; Lyon, S. W.; Gerard Marchant, P.; Walter, M. T.; Schneiderman, E.

    2003-04-01

    Because the traditional Soil Conservation Service Curve Number (SCS-CN) approach continues to be ubiquitously used in GIS-based water quality models, new application methods are needed that are consistent with variable source area (VSA) hydrological processes in the landscape. We developed within an integrated GIS modeling environment a distributed approach for applying the traditional SCS-CN equation to watersheds where VSA hydrology is a dominant process. Spatial representation of hydrologic processes is important for watershed planning because restricting potentially polluting activities from runoff source areas is fundamental to controlling non-point source pollution. The methodology presented here uses the traditional SCS-CN method to predict runoff volume and spatial extent of saturated areas and uses a topographic index to distribute runoff source areas through watersheds. The resulting distributed CN-VSA method was incorporated in an existing GWLF water quality model and applied to sub-watersheds of the Delaware basin in the Catskill Mountains region of New York State. We found that the distributed CN-VSA approach provided a physically based method that gives realistic results for watersheds with VSA hydrology.
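
    The curve number relation being redistributed here is the standard SCS-CN runoff equation; a minimal sketch is shown below (metric units, the usual 0.2·S initial abstraction), leaving out the topographic-index step that spreads the runoff-generating area across the landscape.

```python
def scs_cn_runoff(P, CN, ia_ratio=0.2):
    """SCS Curve Number runoff depth Q (mm) for storm rainfall P (mm).

    S  = potential maximum retention = 25400/CN - 254 (mm),
    Ia = initial abstraction = ia_ratio * S,
    Q  = (P - Ia)^2 / (P - Ia + S) for P > Ia, otherwise 0.
    """
    S = 25400.0 / CN - 254.0
    Ia = ia_ratio * S
    if P <= Ia:
        return 0.0
    return (P - Ia) ** 2 / (P - Ia + S)

# a 50 mm storm on progressively wetter / less permeable parts of a watershed
for cn in (60, 75, 90):
    print(cn, round(scs_cn_runoff(50.0, cn), 1))
```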

  3. [Soil Heavy Metal Spatial Distribution and Source Analysis Around an Aluminum Plant in Baotou].

    PubMed

    Zhang, Lian-ke; Li, Hai-peng; Huang, Xue-min; Li, Yu-mei; Jiao, Kun-ling; Sun, Peng; Wang, Wei-da

    2016-03-15

    Soil within 500 m of an aluminum plant in Baotou was studied. A total of 64 soil samples were taken from the 0-5 cm, 5-20 cm, 20-40 cm and 40-60 cm layers, and the contents of Cu, Pb, Zn, Cr, Cd, Ni and Mn were tested, respectively. Correlation analysis and principal component analysis were used to identify the sources of these heavy metals in the soils. The results showed that the contents of Cu, Pb, Zn, Cr, Cd, Ni and Mn in the study area were 32.9, 50.35, 69.92, 43.78, 0.54, 554.42 and 36.65 mg · kg⁻¹, respectively. All seven heavy metals tested exceeded the background values of soil in Inner Mongolia. The spatial distribution of heavy metals showed that, horizontally, they were clearly enriched toward the southwest, while, vertically, the heavy metal content was highest in the surface soil (0-5 cm), decreased with increasing depth, and tended to stabilize below 20 cm. Source analysis showed that Cu, Zn, Cr and Mn might be influenced by the aluminum plant and the surrounding industrial activity. Pb and Cd might be mainly related to road transportation. Ni may be jointly affected by agricultural activities and the soil parent material. PMID:27337911

  4. Evaluation of severe accident risks: Quantification of major input parameters. Experts' determination of source term issues: Volume 2, Revision 1, Part 4

    SciTech Connect

    Harper, F.T.; Breeding, R.J.; Brown, T.D.; Gregory, J.J.; Jow, H.N.; Payne, A.C.; Gorham, E.D.; Amos, C.N.; Helton, J.; Boyd, G.

    1992-06-01

    In support of the Nuclear Regulatory Commission's (NRC's) assessment of the risk from severe accidents at commercial nuclear power plants in the US reported in NUREG-1150, the Severe Accident Risk Reduction Program (SAARP) has completed a revised calculation of the risk to the general public from severe accidents at five nuclear power plants: Surry, Sequoyah, Zion, Peach Bottom and Grand Gulf. The emphasis in this risk analysis was not on determining a point estimate of risk, but on determining the distribution of risk and assessing the uncertainties that account for the breadth of this distribution. Off-site risk arises from accidents initiated by events both internal and external to the power station. Much of the important input to the logic models was generated by expert panels. This document presents the distributions and the rationale supporting the distributions for the questions posed to the Source Term Panel.

  5. Estimating distributions of long-term particulate matter and manganese exposures for residents of Toronto, Canada

    NASA Astrophysics Data System (ADS)

    Clayton, C. A.; Pellizzari, E. D.; Rodes, C. E.; Mason, R. E.; Piper, L. L.

    Methylcyclopentadienyl manganese tricarbonyl (MMT), a manganese-based gasoline additive, has been used in Canadian gasoline for about 20 yr. Because MMT potentially increases manganese levels in particulate matter resulting from automotive exhausts, a population-based study conducted in Toronto, Canada assessed the levels of personal manganese exposures. Integrated 3-day particulate matter (PM₂.₅) exposure measurements, obtained for 922 participant periods over the course of a year (September 1995-August 1996), were analyzed for several constituent elements, including Mn. The 922 measurements included 542 participants who provided a single 3-day observation plus 190 participants who provided two observations (in two different months). In addition to characterizing the distributions of 3-day average exposures, which can be estimated directly from the data, including the second observation for some participants enabled us to use a model-based approach to estimate the long-term (i.e. annual) exposure distributions for PM₂.₅ mass and Mn. The model assumes that individuals' 3-day average exposure measurements within a given month are lognormally distributed and that the correlation between 3-day log-scale measurements k months apart (after seasonal adjustment) depends only on the lag time, k, and not on the time of year. The approach produces a set of simulated annual exposures from which an annual distribution can be inferred using estimated correlations and monthly means and variances (log scale) as model inputs. The model appeared to perform reasonably well for the overall population distribution of PM₂.₅ exposures (mean=28 μg m⁻³). For example, the model predicted the 95th percentile of the annual distribution to be 62.9 μg m⁻³ while the corresponding percentile estimated for the 3-day data was 86.6 μg m⁻³. The assumptions of the model did not appear to hold for the overall population of Mn exposures (mean=13.1 ng m⁻³). Since the population included
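
    A hedged sketch of this kind of model-based simulation is given below: log-scale monthly means and variances plus a lag-only correlation define a multivariate normal for an individual's twelve log exposures, and the annual exposure is the mean of the back-transformed values. All parameter values are illustrative placeholders, not the study's estimates.

```python
import numpy as np

def simulate_annual_exposures(monthly_mu, monthly_sigma, rho, n_people=10000, seed=1):
    """Simulate annual-average exposures when log-scale exposures are normal
    within each month and correlations depend only on the lag k via rho(k)."""
    rng = np.random.default_rng(seed)
    lags = np.abs(np.arange(12)[:, None] - np.arange(12)[None, :])
    cov = rho(lags) * np.outer(monthly_sigma, monthly_sigma)
    logs = rng.multivariate_normal(monthly_mu, cov, size=n_people)
    return np.exp(logs).mean(axis=1)          # annual mean of the monthly exposures

mu = np.log(25.0) + 0.2 * np.cos(2 * np.pi * np.arange(12) / 12)   # seasonal log-mean (toy)
sigma = np.full(12, 0.6)                                           # log-scale SD (toy)
annual = simulate_annual_exposures(mu, sigma, rho=lambda k: 0.6 ** k)
print(round(float(np.percentile(annual, 95)), 1))   # e.g. the 95th percentile of annual exposure
```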

  6. Long-term spatial distributions and trends of ambient CO concentrations in the central Taiwan Basin

    NASA Astrophysics Data System (ADS)

    Lin, Yu Chi; Lan, Yung Yao; Tsuang, Ben-Jei; Engling, Guenter

    2008-06-01

    Long-term spatial distributions and trends of atmospheric carbon monoxide (CO) concentrations in the central Taiwan Basin were investigated by analysis of CO data obtained from the Taiwan Air Quality Monitoring Network (TAQMN). The influence of meteorological conditions on the CO patterns was also analyzed in this paper. The results showed the highest CO concentrations were found in the vicinity of urban areas with a 13-yr mean value of 0.79±0.16 ppm. This was associated with the most intensive anthropogenic CO emissions at the urban sites. For all sites, lower CO levels were consistently observed during the summer season. This was explained by favorable conditions for dispersion and loss of CO via photochemical reactions. Analysis of wind fields and backward trajectories revealed that two types of synoptic sea breezes directly influenced the CO spatial distributions in the basin. During autumn to spring, northerly flow accompanied by pollutants traveled to inland areas, resulting in higher CO concentrations in the remote areas. During summer, breezes coming from the sea or areas to the south with lower CO emissions, resulted in more uniform spatial distributions of CO in the study region. While CO concentrations exhibited decreasing trends, the average CO mixing ratio from 1994 through 2006 decreased at a rate of approximately 0.02 ppm yr-1 in the central Taiwan Basin.

  7. Multi-term approximation to the Boltzmann transport equation for electron energy distribution functions in nitrogen

    NASA Astrophysics Data System (ADS)

    Feng, Yue

    Plasma is currently a hot topic and it has many significant applications due to its composition of both positively and negatively charged particles. The energy distribution function is important in plasma science since it characterizes the ability of the plasma to affect chemical reactions, affect physical outcomes, and drive various applications. The Boltzmann Transport Equation is an important kinetic equation that provides an accurate basis for characterizing the distribution function, both in energy and space. This dissertation research proposes a multi-term approximation to solve the Boltzmann Transport Equation by treating the relaxation process using an expansion of the electron distribution function in Legendre polynomials. The elastic and 29 inelastic cross sections for electron collisions with nitrogen molecules (N₂) and singly ionized nitrogen molecules (N₂⁺) have been used in this application of the Boltzmann Transport Equation. Different numerical methods have been considered to compare the results. The numerical methods discussed in this thesis are the implicit time-independent method, the time-dependent Euler method, the time-dependent Runge-Kutta method, and finally the implicit time-dependent relaxation method by generating the 4-way grid with a matrix solver. The results show that the implicit time-dependent relaxation method is the most accurate and stable method for obtaining reliable results. The results were observed to match with the published experimental data rather well.
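
    The "multi-term" approximation referred to here is, schematically, an expansion of the electron velocity distribution in Legendre polynomials of the angle θ between the velocity and the field direction; keeping only l = 0, 1 gives the classical two-term approximation. The notation below is illustrative:

```latex
% Truncated Legendre expansion of the electron distribution function;
% f_0 is the isotropic (energy) part, higher-order terms carry the anisotropy.
f(\mathbf{v}, t) \;=\; \sum_{l=0}^{L} f_l(v, t)\, P_l(\cos\theta)
```

    Substituting the truncated series into the Boltzmann transport equation yields a coupled set of equations for the coefficients f_l, which the relaxation methods described above integrate in time until a steady distribution is reached.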

  8. Calculation of the neutron source distribution in the VENUS PWR Mockup Experiment

    SciTech Connect

    Williams, M.L.; Morakinyo, P.; Kam, F.B.K.; Leenders, L.; Minsart, G.; Fabry, A.

    1984-01-01

    The VENUS PWR Mockup Experiment is an important component of the Nuclear Regulatory Commission's program goal of benchmarking reactor pressure vessel (RPV) fluence calculations in order to determine the accuracy to which RPV fluence can be computed. Of particular concern in this experiment is the accuracy of the source calculation near the core-baffle interface, which is the important region for contributing to RPV fluence. Results indicate that the calculated neutron source distribution within the VENUS core agrees with the experimental measured values with an average error of less than 3%, except at the baffle corner, where the error is about 6%. Better agreement with the measured fission distribution was obtained with a detailed space-dependent cross-section weighting procedure for thermal cross sections near the core-baffle interface region. The maximum error introduced into the predicted RPV fluence due to source errors should be on the order of 5%.

  9. FDTD verification of deep-set brain tumor hyperthermia using a spherical microwave source distribution

    SciTech Connect

    Dunn, D.; Rappaport, C.M.; Terzuoli, A.J. Jr.

    1996-10-01

    Although use of noninvasive microwave hyperthermia to treat cancer is problematic in many human body structures, careful selection of the source electric field distribution around the entire surface of the head can generate a tightly focused global power density maximum at the deepest point within the brain. An analytic prediction of the optimum volume field distribution in a layered concentric head model based on summing spherical harmonic modes is derived and presented. This ideal distribution is then verified using a three-dimensional finite difference time domain (FDTD) simulation with a discretized, MRI-based head model excited by the spherical source. The numerical computation gives a dissipated power pattern very similar to the analytic prediction. This study demonstrates that microwave hyperthermia can theoretically be a feasible cancer treatment modality for tumors in the head, providing a well-resolved hot spot at depth without overheating any other healthy tissue.

  10. Methodology and tools for source term assessment in case of emergency.

    PubMed

    Herviou, Karine; Calmtorp, Christer

    2004-01-01

    By looking at the state of the power plant's fission product barriers and critical safety systems, the magnitude of a potential radioactive release can be predicted in a timely manner, allowing emergency response to be executed even before a release occurs. This is the perspective in which the development of the ASTRID methodology and tool is performed. The methodology maps out, for several reactor types as well as reactor containments, the relevant process parameters and indicators, what to calculate and how, and a structured way to summarise and conclude on the potential source term and likely time projections. A computer tool is proposed to support the methodology and to suit different user situations, both on-site and off-site, as well as different staff sizes, priorities and work orders. The output from such an assessment is intended, first, to give a basis for decisions on necessary urgent protective actions before a release and, second, to provide input for the more sophisticated dispersion calculation codes. PMID:15238656

  11. ACT: a program for calculation of the changes in radiological source terms with time

    SciTech Connect

    Woolfolk, S.W.

    1985-08-12

    The program ACT calculates the source term activity from a set of initial activities as a function of discrete time steps. This calculation accounts for the ingrowth of daughter products. ACT also calculates the "Probable Release", which is the activity at a given time multiplied by both the fraction released and the probability of the release. The "Probable Release" assumes not only that the fraction released is a single step function of time, but also that the probability of release is zero for a limited period and can thereafter be described by the "Wisconsin Regression" function using time as the independent variable. Finally, the program calculates the time-integrated sum of the "Probable Release" for each isotope. This program is intended to support analysis of releases from radioactive waste disposal sites such as those required by 40 CFR 191.
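
    The decay-and-ingrowth bookkeeping that such a calculation performs can be illustrated, for a simple parent-daughter chain, with the Bateman solution; the sketch below is a generic two-member example under standard first-order decay assumptions, not ACT's implementation.

```python
import numpy as np

def parent_daughter_activity(A1_0, lam1, lam2, t):
    """Activities of a parent (1) and its daughter (2) at time t, starting from
    parent activity A1_0 and zero daughter activity (two-member Bateman chain)."""
    A1 = A1_0 * np.exp(-lam1 * t)
    A2 = A1_0 * lam2 / (lam2 - lam1) * (np.exp(-lam1 * t) - np.exp(-lam2 * t))
    return A1, A2

# e.g. Sr-90 (half-life ~28.8 y) feeding Y-90 (half-life ~64 h); activities in arbitrary units
lam_sr = np.log(2) / 28.8                # 1/years
lam_y = np.log(2) / (64.0 / 8760.0)      # 1/years
t = np.array([0.01, 0.1, 1.0, 10.0])     # years
print(parent_daughter_activity(1.0, lam_sr, lam_y, t))
```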

  12. DWPFASTXL: Defense Waste Processing Facility Algorithm for Source Terms for Excel

    SciTech Connect

    Toole, B.; Gough, S.T.

    1994-11-01

    The tool used to analyze the progression of accidents in the DWPF is called an Accident Progression Event Tree (APET). The APET methodology groups analyzed progressions into a series of bins, based on similarities in their characteristics. DWPFASTXL is an Excel spreadsheet that can be used to calculate radiological source terms and consequences for these accident progression bins. This document presents the calculations used in version 2.0 of the DWPFASTXL spreadsheet. This revision of DWPFASTXL has been written to complete the debugging of version 1.0 and to reconfigure the spreadsheet to model the new bin attribute table developed for the latest revision of the DWPF safety analyses.

  13. The Annular Core Research Reactor (ACRR) postulated limiting event initial and building source terms

    SciTech Connect

    Restrepo, L F

    1992-08-01

    As part of the update of the Safety Analysis Report (SAR) for the Annular Core Research Reactor (ACRR), operational limiting events under the category of inadvertent withdrawal of an experiment while at power or during a power pulse were determined to be the most limiting event(s) for this reactor. This report provides a summary of the assumptions, modeling, and results of the evaluation of: the reactivity and thermal-hydraulics analysis to determine the amount of fuel melt or fuel damage ratios; the reactor inventories following the limiting event; a literature review of post-NUREG-0772 release-fraction experimental results on severe fuel damage; decontamination factors due to in-pool transport; and in-building transport modeling and building source term analysis.

  14. Long-term Periodicity Analysis of Polarization Variation for Radio Sources

    NASA Astrophysics Data System (ADS)

    Yuan, Yuhai

    2011-06-01

    We use the database of the University of Michigan Radio Astronomy Observatory (UMRAO) at three radio bands (4.8, 8 and 14.5 GHz) to analyse long-term polarization variations in search of possible periodicities. Using the power spectral analysis method (PSA), the Jurkevich method and the discrete correlation function (DCF) method, we find 16 sources that exhibit periodic behaviour. The results show astrophysically meaningful periods ranging from 2.1 to 16.2 years at 4.8 GHz, 2.8 to 16.3 years at 8 GHz, and 1.8 to 16.6 years at 14.5 GHz.
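
    The Jurkevich method named above folds an unevenly sampled light curve at trial periods and searches for minima in the summed variance of the phase bins. A minimal, generic sketch of that statistic follows (not the authors' code); the synthetic light curve and trial-period grid are placeholders.

      import numpy as np

      def jurkevich_statistic(t, y, period, n_bins=10):
          """Summed variance of phase bins; deep minima flag candidate periods."""
          phase = (t / period) % 1.0
          bins = np.floor(phase * n_bins).astype(int)
          v = 0.0
          for b in range(n_bins):
              yb = y[bins == b]
              if yb.size > 1:
                  v += yb.size * yb.var()
          return v

      # Placeholder light curve: irregular sampling of a 5-year sinusoid plus noise.
      rng = np.random.default_rng(0)
      t = np.sort(rng.uniform(0.0, 30.0, 400))                  # years
      y = np.sin(2 * np.pi * t / 5.0) + 0.3 * rng.standard_normal(t.size)

      trial_periods = np.linspace(1.0, 20.0, 500)
      stats = np.array([jurkevich_statistic(t, y, p) for p in trial_periods])
      print("best trial period (yr):", trial_periods[np.argmin(stats)])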

  15. Microbial characterization for the Source-Term Waste Test Program (STTP) at Los Alamos

    SciTech Connect

    Leonard, P.A.; Strietelmeier, B.A.; Pansoy-Hjelvik, M.E.; Villarreal, R.

    1999-04-01

    The effects of microbial activity on the performance of the proposed underground nuclear waste repository, the Waste Isolation Pilot Plant (WIPP) at Carlsbad, New Mexico, are being studied at Los Alamos National Laboratory (LANL) as part of an ex situ large-scale experiment. Actual actinide-containing waste is being used to predict the effect of potential brine inundation in the repository in the distant future. The study conditions are meant to simulate what might exist should the underground repository be flooded hundreds of years after closure as a result of inadvertent drilling into brine pockets below the repository. The Department of Energy (DOE) selected LANL to conduct the Actinide Source-Term Waste Test Program (STTP) to confirm the predictive capability of computer models being developed at Sandia National Laboratories.

  16. Update to the NARAC NNPP Non-Reactor Source Term Products

    SciTech Connect

    Vogt, P

    2009-06-29

    Recent updates to NARAC plots for NNPP require a modification to your iClient database. The steps you need to take are described below. Implementation of the non-reactor source terms in February 2009 included four plots: the traditional three instantaneous plots (1-3) and a new Gamma Dose Rate plot: 1. Particulate Air Concentration 2. Total Ground Deposition 3. Whole Body Inhalation Dose Rate (CEDE Rate) 4. Gamma Dose Rate. These plots were all initially implemented as instantaneous output generated 30 minutes after the release time. Recently, Bettis and NAVSEA requested that the Whole Body CEDE Rate plot be changed to an integrated dose valid at two hours. This is consistent with the change converting the Thyroid Dose Rate plot to a 2-hour integrated thyroid dose for the reactor and criticality accidents.

  17. A simplified radionuclide source term for total-system performance assessment; Yucca Mountain Site Characterization Project

    SciTech Connect

    Wilson, M.L.

    1991-11-01

    A parametric model for releases of radionuclides from spent-nuclear-fuel containers in a waste repository is presented. The model is appropriate for use in preliminary total-system performance assessments of the potential repository site at Yucca Mountain, Nevada; for this reason it is simpler than the models used for detailed studies of waste-package performance. Terms are included for releases from the spent fuel pellets, from the pellet/cladding gap and the grain boundaries within the fuel pellets, from the cladding of the fuel rods, and from the radioactive fuel-assembly parts. Multiple barriers are considered, including the waste container, the fuel-rod cladding, the thermal "dry-out", and the waste form itself. The basic formulas for release from a single fuel rod or container are extended to formulas for expected releases for the whole repository by using analytic expressions for probability distributions of some important parameters. 39 refs., 4 figs., 4 tabs.

  18. Distributed Source Modeling of Language with Magnetoencephalography: Application to Patients with Intractable Epilepsy

    PubMed Central

    McDonald, Carrie R.; Thesen, Thomas; Hagler, Donald J.; Carlson, Chad; Devinsky, Orrin; Kuzniecky, Rubin; Barr, William; Gharapetian, Lusineh; Trongnetrpunya, Amy; Dale, Anders M.; Halgren, Eric

    2009-01-01

    Purpose To examine distributed patterns of language processing in healthy controls and patients with epilepsy using magnetoencephalography (MEG), and to evaluate the concordance between the laterality of distributed MEG sources and language laterality as determined by the intracarotid amobarbital procedure (IAP). Methods MEG was performed in ten healthy controls using an anatomically constrained, noise-normalized distributed source solution (dSPM). Distributed source modeling of language was then applied to eight patients with intractable epilepsy. Average source strengths within temporoparietal and frontal lobe regions of interest (ROIs) were calculated, and the laterality of activity within ROIs during discrete time windows was compared to results from the IAP. Results In healthy controls, dSPM revealed activity in visual cortex bilaterally from ~80-120 ms in response to novel words and sensory control stimuli (i.e., false fonts). Activity then spread to fusiform cortex at ~160-200 ms, and was dominated by left hemisphere activity in response to novel words. From ~240-450 ms, novel words produced activity that was left-lateralized in frontal and temporal lobe regions, including anterior and inferior temporal, temporal pole, and pars opercularis, as well as bilaterally in posterior superior temporal cortex. Analysis of patient data with dSPM demonstrated that from 350-450 ms, the laterality of temporoparietal sources agreed with the IAP 75% of the time, whereas the laterality of frontal MEG sources agreed with the IAP in all eight patients. Discussion Our results reveal that dSPM can unveil the timing and spatial extent of language processes in patients with epilepsy and may enhance knowledge of language lateralization and localization for use in preoperative planning. PMID:19552656

  19. Effective petroleum source rocks of the world: Stratigraphic distribution and controlling depositional factors

    SciTech Connect

    Klemme, H.D. ); Ulmishek, G.F. )

    1991-12-01

    Six stratigraphic intervals, representing one-third of Phanerozoic time, contain petroleum source rocks that have provided more than 90% of the world's discovered original reserves of oil and gas (in barrels of oil equivalent). The six intervals are (1) Silurian (generated 9% of the world's reserves), (2) Upper Devonian-Tournaisian (8% of reserves), (3) Pennsylvanian-Lower Permian (8% of reserves), (4) Upper Jurassic (25% of reserves), (5) middle Cretaceous (29% of reserves), and (6) Oligocene-Miocene (12.5% of reserves). This distribution of source rocks is uneven and varies from interval to interval. Maps that show facies, structural forms, and petroleum source rocks were prepared for this study. Analysis of the maps indicates that several primary factors controlled the areal distribution of source rocks, their geochemical type, and their effectiveness (i.e., the amounts of discovered original conventionally recoverable reserves of oil and gas generated by these rocks). These factors are geologic age, paleolatitude of the depositional areas, structural forms in which the deposition of source rocks occurred, and the evolution of biota. The maturation times of these source rocks demonstrate that the majority of discovered oil and gas is very young; almost 70% of the world's original reserves of oil and gas has been generated since the Coniacian, and nearly 50% of the world's petroleum has been generated and trapped since the Oligocene.

  20. Microlensing towards the LMC revisited by adopting a non-Gaussian velocity distribution for the sources

    NASA Astrophysics Data System (ADS)

    Mancini, L.

    2009-03-01

    Aims: We discuss whether the Gaussian is a reasonable approximation of the velocity distribution of stellar systems that are not spherically distributed. Methods: By using a non-Gaussian velocity distribution to describe the sources in the Large Magellanic Cloud (LMC), we reinvestigate the expected microlensing parameters of a lens population isotropically distributed either in the Milky Way halo or in the LMC (self lensing). We compare our estimates with the experimental results of the MACHO collaboration. Results: An interesting result that emerges from our analysis is that, moving from the Gaussian to the non-Gaussian case, we do not observe any change in the form of the distribution curves describing the rate of microlensing events for lenses in the Galactic halo. The corresponding expected timescales and number of expected events also do not vary. Conversely, with respect to the self-lensing case, we observe a moderate increase in the rate and number of expected events. We conclude that the error in the estimate of the most likely value for the MACHO mass and the Galactic halo fraction in the form of MACHOs, calculated with a Gaussian velocity distribution for the LMC sources, is not higher than 2%.

  1. Short-Term Memory Stages in Sign vs. Speech: The Source of the Serial Span Discrepancy

    PubMed Central

    Hall, Matthew L.

    2011-01-01

    Speakers generally outperform signers when asked to recall a list of unrelated verbal items. This phenomenon is well established, but its source has remained unclear. In this study, we evaluate the relative contribution of the three main processing stages of short-term memory – perception, encoding, and recall – in this effect. The present study factorially manipulates whether American Sign Language (ASL) or English was used for perception, memory encoding, and recall in hearing ASL-English bilinguals. Results indicate that using ASL during both perception and encoding contributes to the serial span discrepancy. Interestingly, performing recall in ASL slightly increased span, ruling out the view that signing is in general a poor choice for short-term memory. These results suggest that despite the general equivalence of sign and speech in other memory domains, speech-based representations are better suited for the specific task of perception and memory encoding of a series of unrelated verbal items in serial order through the phonological loop. This work suggests that interpretation of performance on serial recall tasks in English may not translate straightforwardly to serial tasks in sign language. PMID:21450284

  2. Independent assessment of MELCOR as a severe accident thermal-hydraulic/source term analysis tool

    SciTech Connect

    Madni, I.K.; Eltawila, F.

    1994-01-01

    MELCOR is a fully integrated computer code that models all phases of the progression of severe accidents in light water reactor nuclear power plants, and is being developed for the US Nuclear Regulatory Commission (NRC) by Sandia National Laboratories (SNL). Brookhaven National Laboratory (BNL) has a program with the NRC called "MELCOR Verification, Benchmarking, and Applications," whose aim is to provide independent assessment of MELCOR as a severe accident thermal-hydraulic/source term analysis tool. The scope of this program is to perform quality control verification on all released versions of MELCOR, to benchmark MELCOR against more mechanistic codes and experimental data from severe fuel damage tests, and to evaluate the ability of MELCOR to simulate long-term severe accident transients in commercial LWRs, by applying the code to model both BWRs and PWRs. Under this program, BNL provided input to the NRC-sponsored MELCOR Peer Review, and is currently contributing to the MELCOR Cooperative Assessment Program (MCAP). This paper presents a summary of MELCOR assessment efforts at BNL and their contribution to NRC goals with respect to MELCOR.

  3. Long term fault slip rates, distributed deformation rates and forecast of seismicity in the Iranian Plateau

    NASA Astrophysics Data System (ADS)

    Khodaverdian, A.; Zafarani, H.; Rahimian, M.

    2015-10-01

    In this paper, the long-term crustal flow of the Iranian Plateau is computed using a kinematic finite-element model (the NeoKinema software). Based on an iterated weighted least-squares method, the models are fitted to the newest data set for Iran, including updated fault traces, geologic fault offset rates, geodetic benchmark velocities, principal stress directions, and velocity boundary conditions. We succeed in finding the best kinematic model, in which geological slip rates, geodetic velocities, and interpolated stress directions are fitted at levels of 0.35, 1.0, and 1.0 datum standard deviations, respectively. The best-fitted model provides, for the first time, long-term fault slip rates and the velocity and anelastic strain rate fields in the Iranian Plateau from all available kinematic data. In order to verify the model, the estimated fault slip rates are compared with slip rates obtained solely from geodetic benchmark velocities, from paleoseismological studies, or from published geological rates that were not used in the model. Our estimated rates are all in the range of the geodetic rates and are even more consistent with the geological rates than previous GPS-based estimates. Using the selected model, long-term average seismicity maps and long-term moment rates are produced on the basis of the SHIFT hypothesis and previous global calibrations. Our kinematic model also provides a new constraint on the ratio of seismic deformation to total deformation for different seismic zones of Iran. The resulting slip rates and the proposed seismic fraction of deformation provide the necessary input data for future time-dependent hazard studies in Iran. Moreover, the spatial distribution and total number of strong (M > 6) and major (M > 7) earthquakes, which dominate the seismic hazard, are compatible with the regional seismic catalog.

  4. Wearable Sensor Localization Considering Mixed Distributed Sources in Health Monitoring Systems

    PubMed Central

    Wan, Liangtian; Han, Guangjie; Wang, Hao; Shu, Lei; Feng, Nanxing; Peng, Bao

    2016-01-01

    In health monitoring systems, the base station (BS) and the wearable sensors communicate with each other to construct a virtual multiple-input multiple-output (VMIMO) system. In real applications, the signal the BS receives is a distributed source because of scattering, reflection, diffraction and refraction in the propagation path. In this paper, a 2D direction-of-arrival (DOA) estimation algorithm for incoherently-distributed (ID) and coherently-distributed (CD) sources is proposed based on multiple VMIMO systems. ID and CD sources are separated through the second-order blind identification (SOBI) algorithm. The traditional estimation of signal parameters via rotational invariance techniques (ESPRIT)-based algorithm is valid only for one-dimensional (1D) DOA estimation for the ID source. By constructing the signal subspace, two rotational invariance relationships are constructed. Then, we extend ESPRIT to estimate 2D DOAs for ID sources. For DOA estimation of CD sources, two rotational invariance relationships are constructed based on the application of generalized steering vectors (GSVs). Then, the ESPRIT-based algorithm is used to estimate the eigenvalues of the two rotational invariance matrices, which contain the angular parameters. The expressions of azimuth and elevation for ID and CD sources have closed forms, which means that spectrum peak searching is avoided. Therefore, compared to traditional 2D DOA estimation algorithms, the proposed algorithm has significantly lower computational complexity. The intersection point of two rays, which come from two different directions measured by two uniform rectangular arrays (URA), can be regarded as the location of the biosensor (wearable sensor). Three BSs adopting the smart antenna (SA) technique cooperate with each other to locate the wearable sensors using the angulation positioning method. Simulation results demonstrate the effectiveness of the proposed algorithm. PMID:26985896

  5. Wearable Sensor Localization Considering Mixed Distributed Sources in Health Monitoring Systems.

    PubMed

    Wan, Liangtian; Han, Guangjie; Wang, Hao; Shu, Lei; Feng, Nanxing; Peng, Bao

    2016-01-01

    In health monitoring systems, the base station (BS) and the wearable sensors communicate with each other to construct a virtual multiple-input multiple-output (VMIMO) system. In real applications, the signal the BS receives is a distributed source because of scattering, reflection, diffraction and refraction in the propagation path. In this paper, a 2D direction-of-arrival (DOA) estimation algorithm for incoherently-distributed (ID) and coherently-distributed (CD) sources is proposed based on multiple VMIMO systems. ID and CD sources are separated through the second-order blind identification (SOBI) algorithm. The traditional estimation of signal parameters via rotational invariance techniques (ESPRIT)-based algorithm is valid only for one-dimensional (1D) DOA estimation for the ID source. By constructing the signal subspace, two rotational invariance relationships are constructed. Then, we extend ESPRIT to estimate 2D DOAs for ID sources. For DOA estimation of CD sources, two rotational invariance relationships are constructed based on the application of generalized steering vectors (GSVs). Then, the ESPRIT-based algorithm is used to estimate the eigenvalues of the two rotational invariance matrices, which contain the angular parameters. The expressions of azimuth and elevation for ID and CD sources have closed forms, which means that spectrum peak searching is avoided. Therefore, compared to traditional 2D DOA estimation algorithms, the proposed algorithm has significantly lower computational complexity. The intersection point of two rays, which come from two different directions measured by two uniform rectangular arrays (URA), can be regarded as the location of the biosensor (wearable sensor). Three BSs adopting the smart antenna (SA) technique cooperate with each other to locate the wearable sensors using the angulation positioning method. Simulation results demonstrate the effectiveness of the proposed algorithm. PMID:26985896
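
    For context, the "traditional" ESPRIT baseline that this work extends is straightforward to sketch for a uniform linear array and point sources; the array size, source angles, noise level and snapshot count below are illustrative assumptions, and the distributed-source and 2D extensions of the paper are not reproduced here.

      import numpy as np

      rng = np.random.default_rng(1)
      n_sensors, n_snapshots = 8, 400
      d_over_lambda = 0.5                      # half-wavelength spacing (assumed)
      true_deg = np.array([-20.0, 15.0])       # illustrative point-source DOAs

      # Steering matrix for a uniform linear array.
      n = np.arange(n_sensors)[:, None]
      A = np.exp(2j * np.pi * d_over_lambda * n * np.sin(np.deg2rad(true_deg)))

      S = rng.standard_normal((2, n_snapshots)) + 1j * rng.standard_normal((2, n_snapshots))
      X = A @ S + 0.1 * (rng.standard_normal((n_sensors, n_snapshots))
                         + 1j * rng.standard_normal((n_sensors, n_snapshots)))

      # Signal subspace from the sample covariance.
      R = X @ X.conj().T / n_snapshots
      eigval, eigvec = np.linalg.eigh(R)
      Es = eigvec[:, -2:]                      # two largest eigenvalues -> two sources

      # Rotational invariance between the two overlapping subarrays.
      Psi = np.linalg.pinv(Es[:-1]) @ Es[1:]
      phases = np.angle(np.linalg.eigvals(Psi))
      est_deg = np.rad2deg(np.arcsin(phases / (2 * np.pi * d_over_lambda)))
      print("estimated DOAs (deg):", np.sort(est_deg))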

  6. RADIAL DISTRIBUTION OF X-RAY POINT SOURCES NEAR THE GALACTIC CENTER

    SciTech Connect

    Hong, Jae Sub; Van den Berg, Maureen; Grindlay, Jonathan E.; Laycock, Silas

    2009-11-20

    We present the log N-log S and spatial distributions of X-ray point sources in seven Galactic bulge (GB) fields within 4° of the Galactic center (GC). We compare the properties of 1159 X-ray point sources discovered in our deep (100 ks) Chandra observations of three low-extinction Window fields near the GC with the X-ray sources in the other GB fields centered around Sgr B2, Sgr C, the Arches Cluster, and Sgr A* using Chandra archival data. To reduce the systematic errors induced by the uncertain X-ray spectra of the sources, coupled with field- and distance-dependent extinction, we classify the X-ray sources using quantile analysis and estimate their fluxes accordingly. The results indicate that the GB X-ray population is highly concentrated at the center, more heavily than the stellar distribution models predict. It extends out to more than 1.4° from the GC, and the projected density follows an empirical radial relation inversely proportional to the offset from the GC. We also compare the total X-ray and infrared surface brightness using the Chandra and Spitzer observations of the regions. The radial distribution of the total infrared surface brightness from the 3.6 μm band images appears to resemble the radial distribution of the X-ray point sources better than that predicted by the stellar distribution models. Assuming a simple power-law model for the X-ray spectra, the spectra appear intrinsically harder closer to the GC, but adding an iron emission line at 6.7 keV to the model allows the spectra of the GB X-ray sources to be largely consistent across the region. This implies that the majority of these GB X-ray sources can be of the same or similar type. Their X-ray luminosity and spectral properties support the idea that the most likely candidate is magnetic cataclysmic variables (CVs), primarily intermediate polars (IPs). Their observed number density is also consistent with the majority being IPs, provided the relative CV to star density in

  7. Neuroimaging Evidence for Agenda-Dependent Monitoring of Different Features during Short-Term Source Memory Tests

    ERIC Educational Resources Information Center

    Mitchell, Karen J.; Raye, Carol L.; McGuire, Joseph T.; Frankel, Hillary; Greene, Erich J.; Johnson, Marcia K.

    2008-01-01

    A short-term source monitoring procedure with functional magnetic resonance imaging assessed neural activity when participants made judgments about the format of 1 of 4 studied items (picture, word), the encoding task performed (cost, place), or whether an item was old or new. The results support findings from long-term memory studies showing that…

  8. Multi-Sensor Integration to Map Odor Distribution for the Detection of Chemical Sources

    PubMed Central

    Gao, Xiang; Acar, Levent

    2016-01-01

    This paper addresses the problem of mapping the odor distribution derived from a chemical source using multi-sensor integration and reasoning system design. Odor localization is the problem of finding the source of an odor or other volatile chemical. Most localization methods require a mobile vehicle to follow an odor plume along its entire path, which is time consuming and may be especially difficult in a cluttered environment. To address both of these challenges, this paper proposes a novel algorithm that combines data from odor and anemometer sensors and fuses sensor data collected at different positions. Initially, a multi-sensor integration method, together with the path of airflow, was used to map the pattern of odor particle movement. Then, more sensors are introduced at specific regions to determine the probable location of the odor source. Finally, the results of an odor source location simulation and a real experiment are presented. PMID:27384568

  9. Shared and Distributed Memory Parallel Security Analysis of Large-Scale Source Code and Binary Applications

    SciTech Connect

    Quinlan, D; Barany, G; Panas, T

    2007-08-30

    Many forms of security analysis on large scale applications can be substantially automated but the size and complexity can exceed the time and memory available on conventional desktop computers. Most commercial tools are understandably focused on such conventional desktop resources. This paper presents research work on the parallelization of security analysis of both source code and binaries within our Compass tool, which is implemented using the ROSE source-to-source open compiler infrastructure. We have focused on both shared and distributed memory parallelization of the evaluation of rules implemented as checkers for a wide range of secure programming rules, applicable to desktop machines, networks of workstations and dedicated clusters. While Compass as a tool focuses on source code analysis and reports violations of an extensible set of rules, the binary analysis work uses the exact same infrastructure but is less well developed into an equivalent final tool.

  10. Multi-Sensor Integration to Map Odor Distribution for the Detection of Chemical Sources.

    PubMed

    Gao, Xiang; Acar, Levent

    2016-01-01

    This paper addresses the problem of mapping the odor distribution derived from a chemical source using multi-sensor integration and reasoning system design. Odor localization is the problem of finding the source of an odor or other volatile chemical. Most localization methods require a mobile vehicle to follow an odor plume along its entire path, which is time consuming and may be especially difficult in a cluttered environment. To address both of these challenges, this paper proposes a novel algorithm that combines data from odor and anemometer sensors and fuses sensor data collected at different positions. Initially, a multi-sensor integration method, together with the path of airflow, was used to map the pattern of odor particle movement. Then, more sensors are introduced at specific regions to determine the probable location of the odor source. Finally, the results of an odor source location simulation and a real experiment are presented. PMID:27384568

  11. Evaluating the effect of network density and geometric distribution on kinematic source inversion models

    NASA Astrophysics Data System (ADS)

    Zhang, Youbing; Song, Seok Goo; Dalguer, Luis; Clinton, John

    2013-04-01

    An essential element of understanding earthquake source processes is obtaining a reliable source model via geophysical data inversion. The most common procedure to determine the kinematic source parameters (final slip, peak slip velocity, rise time and rupture time) is to invert observed ground motions recorded at a number of different stations (typically strong-motion accelerometers). Few studies have been dedicated to evaluating the effect of the number of stations and their geometrical distribution on earthquake source parameters. In this paper we investigate these effects by inverting ground motions from synthetic dynamic earthquake rupture models with heterogeneous stress distribution governed by the slip-weakening friction law. Our first target model is a buried strike-slip event (Mw 6.5) in a layered half-space. The Compsyn code (Spudich and Xu, 2002) was used in the inversion procedure to generate forward synthetic waveforms, and an Evolutionary Algorithm was used to search for the source parameters: peak slip velocity (PSV), rupture time, and rise time at low frequency (up to 1 Hz). The regularized Yoffe function was applied as a single-window slip velocity function; it is a flexible slip velocity function defined by three independent parameters: the final slip, the slip duration and the duration of the positive slip acceleration, Tacc (Tinti et al., 2005). The same velocity structure was used for both the forward and inverse modeling, and no noise was added to the synthetic ground motions before inversion. We applied Tikhonov regularization to smooth the final slip on the fault, which is controlled by PSV and rise time. Our preliminary results show that, first, we can capture large slip patches of the dynamic models with good ground velocity waveform fitting, using the regularized Yoffe function, which is consistent with the overall properties of dynamic rupture models. Second, the geometry of the station distribution is important for finite kinematic source

  12. Estimating the Distribution of Noise Sources in Ambient Noise Derived Green’s Functions

    NASA Astrophysics Data System (ADS)

    Harmon, N.; Gerstoft, P.; Rychert, C. A.

    2009-12-01

    Estimation of surface wave Green's functions from ambient seismic noise cross correlation requires an accurate model of the source distribution. We compare and contrast the results from three different methods for estimating the distribution of sources using vertical-component data from the 190 stations of the Southern California Seismic Network for one year of data (2008). The methods include: 1) stacked beamformer output of short-time-period noise cross correlations, 2) a least-squares inversion of year-long stacked noise correlation functions (NCF) assuming a 2-dimensional plane wave source density model, and 3) a least-squares inversion of NCF assuming a 3-dimensional plane wave source density model. The beamforming method and the 3D plane wave source density model both indicate a strong surface wave component in the NCF, with weaker signals from near-vertical-incidence body waves. All three methods recover similar azimuthal surface wave source density functions, with maxima in the density functions between the methods within ±4° azimuth in the 7-25 s period range, although there is a difference in power between the beamforming method and the 2D and 3D plane wave inversions. This is expected, as the peak from beamforming on several stations tends to be narrower than that from the two stations used in cross correlations. Using the 2D and 3D plane wave source density models, we demonstrate how phase corrections for non-homogeneous sources can be calculated to improve the accuracy of ambient noise tomographic studies. We also estimate that without correcting for the source effect, the isotropic phase velocity estimate is <1% different from the observed teleseismic velocity estimates, but that there is up to a 1-3% peak-to-peak azimuthal variation in phase velocity predicted by these source distribution models, which could be mapped into azimuthal anisotropy at 7-25 s period. Furthermore, sampling bias in station-to-station distance and azimuth could lead to spurious
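
    The stacked-beamformer idea in method 1) can be illustrated with a conventional (delay-and-sum) beamformer scanning trial azimuths for a single plane wave crossing a small synthetic array; the phase velocity, frequency, array geometry and noise level below are assumptions for illustration, not values from the study.

      import numpy as np

      rng = np.random.default_rng(3)
      c = 3000.0                                  # assumed phase velocity (m/s)
      freq = 0.1                                  # Hz (10 s period surface waves)
      k = 2 * np.pi * freq / c

      # Small synthetic array (metres) and a plane wave arriving from 60 deg azimuth.
      xy = rng.uniform(-2.0e4, 2.0e4, size=(12, 2))
      true_az = np.deg2rad(60.0)
      slowness_dir = np.array([np.sin(true_az), np.cos(true_az)])
      phase = k * xy @ slowness_dir
      data = np.exp(-1j * phase) + 0.1 * (rng.standard_normal(12) + 1j * rng.standard_normal(12))

      # Conventional (delay-and-sum) beamformer power over trial azimuths.
      azimuths = np.deg2rad(np.arange(0.0, 360.0, 1.0))
      power = []
      for az in azimuths:
          d = np.array([np.sin(az), np.cos(az)])
          steer = np.exp(-1j * k * xy @ d)        # steering vector for this azimuth
          power.append(np.abs(steer.conj() @ data) ** 2)

      print("beam maximum at azimuth (deg):", np.degrees(azimuths[np.argmax(power)]))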

  13. Statistical Measurement of the Gamma-Ray Source-count Distribution as a Function of Energy

    NASA Astrophysics Data System (ADS)

    Zechlin, Hannes-S.; Cuoco, Alessandro; Donato, Fiorenza; Fornengo, Nicolao; Regis, Marco

    2016-08-01

    Statistical properties of photon count maps have recently been proven as a new tool to study the composition of the gamma-ray sky with high precision. We employ the 1-point probability distribution function of six years of Fermi-LAT data to measure the source-count distribution dN/dS and the diffuse components of the high-latitude gamma-ray sky as a function of energy. To that aim, we analyze the gamma-ray emission in five adjacent energy bands between 1 and 171 GeV. It is demonstrated that the source-count distribution as a function of flux is compatible with a broken power law up to energies of ~50 GeV. The index below the break is between 1.95 and 2.0. For higher energies, a simple power law fits the data, with an index of 2.2 (+0.7/-0.3) in the energy band between 50 and 171 GeV. Upper limits on further possible breaks, as well as the angular power of unresolved sources, are derived. We find that the point-source populations probed by this method can explain 83 (+7/-13) per cent (81 (+52/-19) per cent) of the extragalactic gamma-ray background between 1.04 and 1.99 GeV (50 and 171 GeV). The method has excellent capabilities for constraining the gamma-ray luminosity function and the spectra of unresolved blazars.
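
    A broken power-law dN/dS of the kind fitted here is easy to evaluate numerically; the sketch below integrates an assumed dN/dS to get the number of sources above a flux threshold and their summed flux. The break flux, normalization and indices are placeholders rather than the paper's best-fit values, apart from the sub-break index being chosen inside the quoted 1.95-2.0 range.

      import numpy as np

      def dn_ds(s, s_break=2.0e-9, norm=1.0e5, idx_lo=1.97, idx_hi=2.6):
          """Broken power-law differential source counts (all numbers illustrative)."""
          return np.where(s < s_break,
                          norm * (s / s_break) ** (-idx_lo),
                          norm * (s / s_break) ** (-idx_hi))

      # Number of sources above a flux threshold and their integrated flux,
      # both by straightforward numerical integration on a fine flux grid.
      s = np.logspace(-11, -6, 2000)            # photon flux grid (arbitrary units)
      counts = dn_ds(s)
      s_min = 1.0e-10
      mask = s >= s_min
      n_above = np.trapz(counts[mask], s[mask])
      flux_above = np.trapz(s[mask] * counts[mask], s[mask])
      print(f"N(>S_min) ~ {n_above:.3e}, integrated flux ~ {flux_above:.3e}")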

  14. The Red MSX Source survey: distribution and properties of a sample of massive young stars

    NASA Astrophysics Data System (ADS)

    Urquhart, J. S.; Moore, T. J. T.; Hoare, M. G.; Lumsden, S. L.; Oudmaijer, R. D.; Rathborne, J. M.; Mottram, J. C.; Davies, B.; Stead, J. J.

    2011-01-01

    The Red MSX Source (RMS) survey has identified a large sample of massive young stellar objects and ultra-compact H II regions from a sample of ~2000 MSX and Two Micron All Sky Survey (2MASS) colour-selected sources. Using a recent catalogue of molecular clouds derived from the Boston University-Five College Radio Astronomy Observatory (BU-FCRAO) Galactic Ring Survey (GRS), and by applying a Galactic scaleheight cut-off of 120 pc, we solve the distance ambiguity for RMS sources located within 18° < l < 54°. These two steps yield kinematic distances to 291 sources out of a possible 326 located within the GRS longitude range. Combining distances and integrated fluxes derived from spectral energy distributions, we estimate luminosities for these sources and find that >90 per cent are indicative of the presence of a massive star. We find the completeness limit of our sample is ~10^4 L⊙, which corresponds to a zero-age main-sequence star with a mass of ~12 M⊙. Selecting only these sources, we construct a complete sample of 196 sources. Comparing the properties of the sample of young massive stars with the general population, we find the RMS clouds are generally larger, more massive, and more turbulent. We examine the distribution of this subsample with respect to the location of the spiral arms and the Galactic bar and find them to be spatially correlated. We identify three significant peaks in the source surface density at Galactocentric radii of approximately 4, 6 and 8 kpc, which correspond to the proposed positions of the Scutum, Sagittarius and Perseus spiral arms, respectively. Fitting a scaleheight to the data, we obtain an average value of ~29 ± 0.5 pc, which agrees well with other reported values in the literature; however, we note a dependence of the scaleheight on Galactocentric radius, with it increasing from 30 to 45 pc between 2.5 and 8.5 kpc.

  15. Analysis of the relationship between landslides size distribution and earthquake source area

    NASA Astrophysics Data System (ADS)

    Valagussa, Andrea; Crosta, Giovanni B.; Frattini, Paolo; Xu, Chong

    2014-05-01

    The spatial distribution of earthquake-induced landslides around the seismogenic source has been analysed to better understand the triggering of landslides in seismic areas and to forecast the maximum distance at which an earthquake of a given magnitude can induce landslides (e.g. Keefer, 1984). However, when applying such approaches to old earthquakes (e.g. the 1929 Buller and 1968 Inangahua earthquakes, New Zealand, Parker, 2013; the 1976 Friuli earthquake, Italy) one should be concerned about the undersampling of smaller landslides, which can be erased by erosion and landscape evolution. For this reason, it is important to characterize carefully not only the relationship between landslide area and number with distance from the source, but also the size distribution of landslides as a function of distance from the source. In this paper, we analyse the 2008 Wenchuan earthquake landslide inventory (Xu et al., 2013). The earthquake triggered more than 197,000 landslides of different types, including rock avalanches, rockfalls, translational and rotational slides, lateral spreads and debris flows. First, we calculated the landslide intensity (number of landslides per unit area) and spatial density (landslide area per unit area) as a function of distance from the source area of the earthquake. Then, we developed magnitude-frequency curves (MFC) for different distances from the source area. Comparing these curves, we can describe the relation between distance and the frequency density of landslides in seismic areas. Keefer, D.K. (1984). Landslides caused by earthquakes. Geological Society of America Bulletin, 95(4), 406-421. Parker, R.N. (2013). Hillslope memory and spatial and temporal distributions of earthquake-induced landslides. Durham theses, Durham University. Xu, C., Xu, X., Yao, X., & Dai, F. (2013). Three (nearly) complete inventories of landslides triggered by the May 12, 2008 Wenchuan Mw 7.9 earthquake of China and their spatial distribution statistical analysis

  16. Long-term Satellite Observations of Asian Dust Storm: Source, Pathway, and Interannual Variability

    NASA Technical Reports Server (NTRS)

    Hsu, N. Christina

    2008-01-01

    between Deep Blue retrievals of aerosol optical thickness and those directly from AERONET sunphotometers over desert and semi-desert regions. New Deep Blue products will allow scientists to determine quantitatively the aerosol properties near sources using high spatial resolution measurements from SeaWiFS and MODIS-like instruments. Long-term satellite measurements (1998 - 2007) from SeaWiFS will be utilized to investigate the interannual variability of source, pathway, and dust loading associated with the Asian dust storm outbreaks. In addition, monthly averaged aerosol optical thickness during the springtime from SeaWiFS will also be compared with the MODIS Deep Blue products.

  17. Coherence properties of the electric field generated by an incoherent source of currents distributed on the surface of a sphere.

    PubMed

    Zurita-Sánchez, Jorge R

    2016-01-01

    We derive analytical expressions of the cross-spectral density of the electric field arising from an incoherent source whose current density is located on the surface of a sphere. Our approach is based on the series expansion in terms of vector spherical harmonics of the electric field generated by the aforementioned current distribution. We analyze in detail the spectrum, the degree of coherence, and the degree of polarization of the electric field for all regions in space (from the near field to the far field). The relationship of the high-order harmonics to the coherence properties is discussed. The spectrum turns out to be isotropic and it is different from that of the source. We found that the degree of coherence and degree of polarization are strongly influenced by the size of the source. We show the appearance of special features: a zone with a high degree of coherence in the near field for a subwavelength source, the radial degree of coherence is nearly constant in an extended region where two radial points belong to the far field, and a particular radial distance for which the degree of polarization vanishes (3D unpolarized light). PMID:26831593

  18. A Monte Carlo study on dose distribution validation of GZP6 60Co stepping source

    PubMed Central

    Bahreyni Toossi, Mohammad Taghi; Abdollahi, Maliheh; Ghorbani, Mahdi

    2012-01-01

    Aim A stepping source in brachytherapy systems is used to treat a target lesion longer than the effective treatment length of the source. Cancerous lesions in the cervix, esophagus and rectum are examples of such target lesions. Background In this study, the stepping source of a GZP6 afterloading intracavitary brachytherapy unit was simulated using Monte Carlo (MC) methods and the results were used for validation of the GZP6 treatment planning system (TPS). Materials and methods The stepping source was simulated using the MCNPX Monte Carlo code. Dose distributions in the longitudinal plane were obtained using a matrix shift method for esophageal tumor lengths of 8 and 10 cm. A mesh tally was employed for the absorbed dose calculation in a cylindrical water phantom. A total of 5 × 10^8 photon histories were scored, and the MC statistical error was in the range of 0.008–3.5%, with an average of 0.2%. Results The acquired MC and TPS isodose curves were compared, and the dose distributions in the longitudinal plane were shown to be largely coincident. In the transverse direction, maximum dose differences of 7% and 5% were observed for tumor lengths of 8 and 10 cm, respectively. Conclusion Considering that the certified source activity is given with ±10% uncertainty, the obtained difference is reasonable. It can be concluded that the accuracy of the dose distributions produced by the GZP6 TPS for the stepping source is acceptable for clinical applications. PMID:24416537
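
    The "matrix shift method" mentioned in the abstract — superposing the single-dwell dose matrix at successive source positions to build up the dose from a stepping source — can be sketched generically as follows; the dose kernel, grid and step size are synthetic stand-ins, not GZP6 or MCNPX output.

      import numpy as np

      # Synthetic single-dwell dose kernel on a (z, r) grid: an inverse-square-like
      # falloff around the dwell position, purely for illustration.
      z = np.linspace(-10.0, 10.0, 401)          # cm along the applicator axis
      r = np.linspace(0.1, 5.0, 50)              # cm radially
      Z, R = np.meshgrid(z, r, indexing="ij")
      kernel = 1.0 / (Z**2 + R**2)               # dose per unit dwell time (arbitrary)

      def stepping_dose(kernel, z, dwell_positions_cm, dwell_weights):
          """Shift the single-dwell matrix to each dwell position and sum.

          np.roll wraps at the grid edges; a real implementation would pad instead.
          """
          dz = z[1] - z[0]
          total = np.zeros_like(kernel)
          for pos, w in zip(dwell_positions_cm, dwell_weights):
              shift = int(round(pos / dz))
              total += w * np.roll(kernel, shift, axis=0)
          return total

      # Example: a 10 cm treatment length covered in 0.5 cm steps, equal dwell times.
      positions = np.arange(-5.0, 5.0 + 0.5, 0.5)
      dose = stepping_dose(kernel, z, positions, np.ones_like(positions))
      print("dose matrix shape:", dose.shape)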

  19. Short-term X-ray variability of the globular cluster source 4U 1820 - 30 (NGC 6624)

    NASA Technical Reports Server (NTRS)

    Stella, L.; Kahn, S. M.; Grindlay, J. E.

    1984-01-01

    Analytical techniques for improved identification of the temporal and spectral variability properties of globular cluster and galactic bulge X-ray sources are described in terms of their application to a large set of observations of the source 4U 1820 - 30 in the globular cluster NGC 6624. The autocorrelation function, cross-correlations, time skewness function, erratic periodicities, and pulse trains are examined. The results are discussed in terms of current models with particular emphasis on recent accretion disk models. It is concluded that the analyzed observations provide the first evidence for shot-noise variability in a globular cluster X-ray source.

  20. An efficient central DOA tracking algorithm for multiple incoherently distributed sources

    NASA Astrophysics Data System (ADS)

    Hassen, Sonia Ben; Samet, Abdelaziz

    2015-12-01

    In this paper, we develop a new tracking method for the direction-of-arrival (DOA) parameters of multiple incoherently distributed (ID) sources. The new approach is based on a simple covariance fitting optimization technique that exploits the central and non-central moments of the source angular power densities to estimate the central DOAs. The current estimates are treated as measurements provided to a Kalman filter that models the dynamics of the directional changes of the moving sources. The covariance-fitting-based algorithm and Kalman filtering theory are then combined to formulate an adaptive tracking algorithm. Our algorithm is compared to the fast approximated power iteration-total least squares-estimation of signal parameters via rotational invariance techniques (FAPI-TLS-ESPRIT) algorithm, which uses the TLS-ESPRIT method and subspace updating via the FAPI algorithm. It will be shown that the proposed algorithm offers excellent DOA tracking performance and outperforms the FAPI-TLS-ESPRIT method, especially at low signal-to-noise ratio (SNR) values. Moreover, the performance of both methods improves as the SNR increases, and this improvement is more prominent for the FAPI-TLS-ESPRIT method. However, their performance degrades when the number of sources increases. It will also be shown that our method depends on the form of the angular distribution function when tracking the central DOAs. Finally, it will be shown that the more widely the sources are spaced, the more accurately the proposed method tracks the DOAs.
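
    The Kalman-filtering half of the scheme can be illustrated with a generic constant-velocity filter tracking one slowly drifting central DOA; the noisy angle "measurements" below stand in for the covariance-fitting estimates of the paper, and the noise covariances are assumed values.

      import numpy as np

      # Constant-velocity Kalman filter for one central DOA.
      dt = 1.0
      F = np.array([[1.0, dt], [0.0, 1.0]])      # state: [angle, angular rate]
      H = np.array([[1.0, 0.0]])
      Q = 1e-4 * np.eye(2)                       # assumed process noise
      R = np.array([[0.5]])                      # assumed measurement noise (deg^2)

      x = np.array([[0.0], [0.0]])
      P = np.eye(2)

      rng = np.random.default_rng(2)
      true_doa = 10.0 + 0.2 * np.arange(100)     # source drifting 0.2 deg per step
      measurements = true_doa + rng.normal(0.0, 0.7, size=100)

      tracked = []
      for z in measurements:
          # Predict.
          x = F @ x
          P = F @ P @ F.T + Q
          # Update with the latest DOA "measurement".
          y = np.array([[z]]) - H @ x
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          x = x + K @ y
          P = (np.eye(2) - K @ H) @ P
          tracked.append(float(x[0, 0]))

      print("final tracked DOA (deg):", round(tracked[-1], 2))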

  1. Tsunami source parameters estimated from slip distribution and their relation to tsunami intensity

    NASA Astrophysics Data System (ADS)

    Bolshakova, Anna; Nosov, Mikhail; Kolesov, Sergey

    2015-04-01

    Estimation of the level of tsunami hazard on the basis of earthquake moment magnitude often fails. The most important reason is that tsunamis are related to earthquakes in a complex and ambiguous way. In order to find a measure of the tsunamigenic potential of an earthquake better than its moment magnitude, we introduce a set of tsunami source parameters that can be calculated from the co-seismic ocean-bottom deformation and the bathymetry. We consider more than two hundred ocean-bottom earthquakes (1923-2014) for which detailed slip distribution data (Finite Fault Model) are available on the USGS, UCSB, Caltech, and eQuake-RC sites. Making use of the Okada formulae, the vector fields of co-seismic ocean-bottom deformation are estimated from the slip distribution data. Taking into account bathymetry (GEBCO_08), we determine tsunami source parameters such as the double amplitude of the bottom deformation, the displaced water volume, the potential energy of the initial elevation, etc. The tsunami source parameters are examined as a function of earthquake moment magnitude. The contribution of the horizontal component of ocean-bottom deformation to tsunami generation is investigated. We analyse the Soloviev-Imamura tsunami intensity as a function of the tsunami source parameters. The possibility of using tsunami source parameters instead of moment magnitude in tsunami warning is discussed. This work was supported by the Russian Foundation for Basic Research, project 14-05-31295.
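
    Once a gridded field of vertical co-seismic bottom displacement is available (in the study it comes from the Okada formulae and published slip models), the quoted source parameters reduce to simple surface integrals. The sketch below uses a synthetic uplift field purely to show the bookkeeping; the grid, amplitudes and pure-uplift shape are assumptions.

      import numpy as np

      RHO, G = 1030.0, 9.81                      # sea water density (kg/m^3), gravity

      # Synthetic vertical bottom displacement field on a regular grid (metres);
      # in the study this would come from the Okada formulae and a slip model.
      x = np.linspace(-100e3, 100e3, 401)
      y = np.linspace(-50e3, 50e3, 201)
      X, Y = np.meshgrid(x, y, indexing="ij")
      eta = 1.5 * np.exp(-((X / 40e3) ** 2 + (Y / 20e3) ** 2))      # assumed uplift only

      dx, dy = x[1] - x[0], y[1] - y[0]
      cell_area = dx * dy

      double_amplitude = eta.max() - eta.min()                      # max uplift minus max subsidence
      displaced_volume = np.sum(eta) * cell_area                    # m^3
      potential_energy = 0.5 * RHO * G * np.sum(eta**2) * cell_area # J

      print(f"double amplitude: {double_amplitude:.2f} m")
      print(f"displaced volume: {displaced_volume:.3e} m^3")
      print(f"potential energy: {potential_energy:.3e} J")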

  2. Source terms released into the environment for a station blackout severe accident at the Peach Bottom Atomic Power Station

    SciTech Connect

    Carbajo, J.J.

    1995-07-01

    This study calculates source terms released into the environment at the Peach Bottom Atomic Power Station after containment failure during a postulated low-pressure, short-term station blackout severe accident. The severe accident analysis code MELCOR, version 1.8.1, was used in these calculations. Source terms were calculated for three different containment failure modes. The largest environmental releases occur for early containment failure at the drywell liner in contact with the cavity by liner melt-through. This containment failure mode is very likely to occur when the cavity is dry during this postulated severe accident sequence.

  3. Occurrence, distribution and risk assessment of polychlorinated biphenyls and polybrominated diphenyl ethers in nine water sources.

    PubMed

    Yang, Yuyi; Xie, Qilai; Liu, Xinyu; Wang, Jun

    2015-05-01

    Water quality of water sources is a critical issue for human health in South China, which is experiencing rapid economic development and is the most densely populated region in China. In this study, the pollution of organohalogen compounds in nine important water sources in South China was investigated. Twenty-six organohalogen compounds, including seventeen polychlorinated biphenyls (PCBs) and nine polybrominated diphenyl ethers (PBDEs), were detected using gas chromatography. The concentrations of total PCBs ranged from 0.93 to 13.07 ng L^-1, with an average value of 7.06 ng L^-1. The total concentrations of the nine PBDE congeners ranged from not detected (nd) to 7.87 ng L^-1, with an average value of 2.59 ng L^-1. The compositions of PCBs and PBDEs indicated that historical use of Aroclors 1248, 1254 and 1260 and commercial PBDEs may be the main sources of organohalogen compounds in water sources in South China. The nine water sources could be classified into three clusters by a self-organizing map neural network. Low-halogenated PCBs and PBDEs showed similar distributions across the nine water sources. Cancer risks of PCBs and PBDEs via water consumption were all below 10^-6, indicating that the water in the nine South China water sources is safe for drinking. PMID:25681605

  4. Codon information value and codon transition-probability distributions in short-term evolution

    NASA Astrophysics Data System (ADS)

    Jiménez-Montaño, M. A.; Coronel-Brizio, H. F.; Hernández-Montoya, A. R.; Ramos-Fernández, A.

    2016-07-01

    To understand the way the Genetic Code and the physical-chemical properties of coded amino acids affect accepted amino acid substitutions in short-term protein evolution, taking into account only overall amino acid conservation, we consider an underlying codon-level model. This model employs codon pair-substitution frequencies from an empirical matrix in the literature, modified for single-base mutations only. Ordering the degenerate codons according to their codon information value (Volkenstein, 1979), we found that three-fold and most of the four-fold degenerate codons, which have low codon values, were best fitted by rank-frequency distributions with constant failure rate (exponentials). In contrast, almost all two-fold degenerate codons, which have high codon values, were best fitted by rank-frequency distributions with variable failure rate (inverse power laws). Six-fold degenerate codons are considered to be doubly assigned. The exceptional behavior of some codons, including non-degenerate codons, is discussed.
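
    The distinction drawn above — a constant failure rate giving an exponential rank-frequency curve versus a variable failure rate giving an inverse power law — can be checked on any ranked frequency list by comparing a log-linear with a log-log least-squares fit. The frequencies in the sketch below are placeholders, not the empirical codon data.

      import numpy as np

      # Placeholder rank-frequency data for one codon's accepted substitutions.
      freqs = np.array([120.0, 55.0, 30.0, 18.0, 11.0, 7.0, 4.0, 3.0])
      ranks = np.arange(1, freqs.size + 1)

      # Exponential (constant failure rate): log f is linear in rank.
      slope_e, icept_e = np.polyfit(ranks, np.log(freqs), 1)
      resid_e = np.log(freqs) - (slope_e * ranks + icept_e)

      # Inverse power law (variable failure rate): log f is linear in log rank.
      slope_p, icept_p = np.polyfit(np.log(ranks), np.log(freqs), 1)
      resid_p = np.log(freqs) - (slope_p * np.log(ranks) + icept_p)

      print("exponential fit RSS:", float(np.sum(resid_e**2)))
      print("power-law fit   RSS:", float(np.sum(resid_p**2)))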

  5. Security of two-way continuous-variable quantum key distribution with source noise

    NASA Astrophysics Data System (ADS)

    Wang, Tianyi; Yu, Song; Zhang, Yi-Chen; Gu, Wanyi; Guo, Hong

    2014-11-01

    We investigate the security of reverse-reconciliation two-way continuous-variable quantum key distribution with source noise at both legitimate sides. Because the source noise originates from imperfect devices, we ascribe it to the legitimate sides rather than to the eavesdropper. The trusted model consists of thermal noise injected into a beam splitter. The expressions for the secret key rate are derived against collective entangling cloner attacks for homodyne and heterodyne detection. Simulation results show that by applying the trusted model, the security bound of the reverse-reconciliation two-way protocols can be tightened, while the advantage over one-way protocols is still maintained.

  6. Bright integrated photon-pair source for practical passive decoy-state quantum key distribution

    NASA Astrophysics Data System (ADS)

    Krapick, S.; Stefszky, M. S.; Jachura, M.; Brecht, B.; Avenhaus, M.; Silberhorn, C.

    2014-01-01

    We report on a bright, nondegenerate type-I parametric down-conversion source, which is well suited for passive decoy-state quantum key distribution. We show the photon-number-resolved analysis over a broad range of pump powers and demonstrate heralded higher-order n-photon states up to n = 4. The inferred photon click statistics exhibit excellent agreement with the theoretical predictions. From our measurement results we conclude that our source meets the requirements to avert photon-number-splitting attacks.

  7. Post-quantum attacks on key distribution schemes in the presence of weakly stochastic sources

    NASA Astrophysics Data System (ADS)

    Al–Safi, S. W.; Wilmott, C. M.

    2015-09-01

    It has been established that the security of quantum key distribution protocols can be severely compromised were one to permit an eavesdropper to possess even a very limited knowledge of the random sources used between the communicating parties. While such knowledge should always be expected in realistic experimental conditions, the result itself opened a new line of research to fully account for real-world weak randomness threats to quantum cryptography. Here we expand on this idea by describing a key distribution scheme that is provably secure against general attacks by a post-quantum adversary. We then discuss possible security consequences for such schemes under the assumption of weak randomness.

  8. Reactive hydro- and chlorocarbons in the troposphere and lower stratosphere: sources, distributions, and chemical impact

    NASA Astrophysics Data System (ADS)

    Scheeren, H. A.

    2003-09-01

    The work presented in this thesis focuses on measurements of chemically reactive C2-C7 non-methane hydrocarbons (NMHC) and C1-C2 chlorocarbons with atmospheric lifetimes of a few hours up to about a year. The group of reactive chlorocarbons includes the most abundant atmospheric species with large natural sources, which are chloromethane (CH3Cl), dichloromethane (CH2Cl2), and trichloromethane (CHCl3), as well as tetrachloroethylene (C2Cl4), which has mainly anthropogenic sources. The NMHC and chlorocarbons are present at relatively low quantities in our atmosphere (10^-12 to 10^-9 mol mol^-1 of air). Nevertheless, they play a key role in atmospheric photochemistry. For example, the oxidation of NMHC plays a dominant role in the formation of ozone in the troposphere, while the photolysis of chlorocarbons contributes to enhanced ozone depletion in the stratosphere. In spite of their important role, however, their global source and sink budgets are still poorly understood. Hence, this study aims at improving our understanding of the sources, distribution, and chemical role of reactive NMHC and chlorocarbons in the troposphere and lower stratosphere. To meet this aim, a comprehensive data set of selected C2-C7 NMHC and chlorocarbons has been analyzed, derived from six aircraft measurement campaigns with two different jet aircraft (the Dutch TUD/NLR Cessna Citation PH-LAB and the German DLR Falcon) conducted between 1995 and 2001 (STREAM 1995, 1997 and 1998; LBA-CLAIRE 1998; INDOEX 1999; MINOS 2001). The NMHC and chlorocarbons were detected by gas chromatography (GC-FID/ECD) in pre-concentrated whole air samples collected in stainless-steel canisters on board the measurement aircraft. The measurement locations include tropical (Maldives/Indian Ocean and Surinam), midlatitude (Western Europe and Canada) and polar regions (Lapland/northern Sweden) between the equator and about 70°N, covering different seasons and pollution levels in the troposphere and lower stratosphere. Of

  9. Distribution of Practice and Metacognition in Learning and Long-Term Retention of a Discrete Motor Task

    ERIC Educational Resources Information Center

    Dail, Teresa K.; Christina, Robert W.

    2004-01-01

    This study examined judgments of learning and the long-term retention of a discrete motor task (golf putting) as a function of practice distribution. The results indicated that participants in the distributed practice group performed more proficiently than those in the massed practice group during both acquisition and retention phases. No…

  10. Long-term Science Data Curation Using a Digital Object Model and Open-Source Frameworks

    NASA Astrophysics Data System (ADS)

    Pan, J.; Lenhardt, W.; Wilson, B. E.; Palanisamy, G.; Cook, R. B.

    2010-12-01

    Scientific digital content, including Earth Science observations and model output, has become more heterogeneous in format and more distributed across the Internet. In addition, data and metadata are becoming necessarily linked internally and externally on the Web. As a result, such content has become more difficult for providers to manage and preserve and for users to locate, understand, and consume. Specifically, it is increasingly harder to deliver relevant metadata and data processing lineage information along with the actual content consistently. Readme files, data quality information, production provenance, and other descriptive metadata are often separated from the data at the storage level as well as in the data search and retrieval interfaces available to a user. Critical archival metadata, such as auditing trails and integrity checks, are often even more difficult for users to access, if they exist at all. We investigate the use of several open-source software frameworks to address these challenges. We use the Fedora Commons framework and its digital object abstraction as the repository, the Drupal CMS as the user interface, and the Islandora module as the connector from Drupal to the Fedora repository. With the digital object model, metadata describing the data and its provenance can be associated with data content in a formal manner, as can external references and other arbitrary auxiliary information. Changes to an object are formally audited, and digital contents are versioned and have checksums automatically computed. Further, relationships among objects are formally expressed with RDF triples. Data replication, recovery, and metadata export are supported with standard protocols, such as OAI-PMH. We provide a tentative comparative analysis of the chosen software stack against the Open Archival Information System (OAIS) reference model, along with our initial results with the existing terrestrial ecology data collections at NASA's ORNL Distributed Active Archive Center for
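
    Of the pieces named above, only OAI-PMH is sketched here: a minimal harvest of Dublin Core records over the standard ListRecords verb. The endpoint URL is hypothetical; only the verb, the oai_dc metadata prefix and the namespaces are part of the OAI-PMH standard itself.

      import urllib.parse
      import urllib.request
      import xml.etree.ElementTree as ET

      # Hypothetical OAI-PMH endpoint; only the verb/metadataPrefix parameters
      # are defined by the OAI-PMH standard mentioned in the abstract.
      BASE_URL = "https://repository.example.org/oai"

      params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
      url = BASE_URL + "?" + urllib.parse.urlencode(params)

      with urllib.request.urlopen(url, timeout=30) as resp:
          tree = ET.parse(resp)

      ns = {"oai": "http://www.openarchives.org/OAI/2.0/",
            "dc": "http://purl.org/dc/elements/1.1/"}
      for record in tree.getroot().iter("{http://www.openarchives.org/OAI/2.0/}record"):
          ident = record.find(".//oai:identifier", ns)
          title = record.find(".//dc:title", ns)
          print(ident.text if ident is not None else "?",
                "-", title.text if title is not None else "(no title)")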

  11. Source contributions to the size and composition distribution of urban particulate air pollution

    NASA Astrophysics Data System (ADS)

    Kleeman, Michael J.; Cass, Glen R.

    A mechanistic air quality model has been constructed which is capable of predicting the contribution of individual emissions source types to the size- and chemical-composition distribution of airborne particles. This model incorporates all of the major aerosol processes relevant to regional air pollution studies, including emissions, transport, deposition, gas-to-particle conversion and fog chemistry. In addition, the aerosol is represented as a source-oriented external mixture which is allowed to age in a more realistic fashion than can be accomplished when fresh particle-phase emissions are averaged into the pre-existing atmospheric aerosol size and composition distribution. A source-oriented external mixture is created by differentiating the primary particles emitted from the following source types: catalyst-equipped gasoline engines, non-catalyst-equipped gasoline engines, diesel engines, meat cooking, paved road dust, crustal material from sources other than paved road dust, and sulfur-bearing particles from fuel burning and industrial processes. Discrete primary seed particles from each of these source types are emitted into a simulation of atmospheric transport and chemical reaction. The individual particles evolve over time in the presence of gas-to-particle conversion processes while retaining information on the initial source from which they were emitted. The source- and age-resolved particle mechanics model is applied to the August 1987 SCAQS episode and comparisons are made between model predictions and observations at Claremont, CA. The model explains the origin of the bimodal character of the sub-micron aerosol size distribution. The mode located between 0.2 and 0.3 μm particle diameter is shaped by transformed emissions from diesel engines and meat cooking operations, with lesser contributions from gasoline-powered vehicles and other fuel burning. The larger mode located at 0.7-0.8 μm particle diameter is due to fine particle background aerosol that

  12. Long-term fluctuations of hailstorms in South Moravia, Czech Republic: synthesis of different data sources

    NASA Astrophysics Data System (ADS)

    Chromá, Kateřina; Brázdil, Rudolf; Dolák, Lukáš; Řezníčková, Ladislava; Valášek, Hubert; Zahradníček, Pavel

    2016-04-01

    Hailstorms belong to natural phenomena causing great material damage at the present time, just as they did in the past. In Moravia (eastern part of the Czech Republic), systematic meteorological observations started generally in the latter half of the 19th century. Therefore, in order to create long-term series of hailstorms, it is necessary to search for other sources of information. Different types of documentary evidence are used in historical climatology, such as annals, chronicles, diaries, private letters, newspapers, etc. Besides them, institutional documentary evidence of economic and administrative character (e.g. taxation records) has particular importance. This study aims to create a long-term series of hailstorms in South Moravia using various types of documentary evidence (taxation records, family archives, chronicles and newspapers being the most important) and systematic meteorological observations in the station network. Although available hailstorm data cover the 1541-2014 period, incomplete documentary evidence allows reasonable analysis of fluctuations in hailstorm frequency only since the 1770s. The series compiled from documentary data and systematic meteorological observations is used to identify periods of lower and higher hailstorm frequency. Existing data may also be used for the study of spatial hailstorm variability. Basic uncertainties of the compiled hailstorm series are discussed. Despite some bias in the hailstorm data, the South Moravian hailstorm series significantly extends our knowledge about this phenomenon in the south-eastern part of the Czech Republic. The study is a part of the research project "Hydrometeorological extremes in Southern Moravia derived from documentary evidence" supported by the Grant Agency of the Czech Republic, reg. no. 13-19831S.

  13. Passive-scheme analysis for solving the untrusted source problem in quantum key distribution

    NASA Astrophysics Data System (ADS)

    Peng, Xiang; Xu, Bingjie; Guo, Hong

    2010-04-01

    As a practical method, the passive scheme is useful to monitor the photon statistics of an untrusted source in a “Plug & Play” quantum key distribution (QKD) system. In a passive scheme, three kinds of monitor mode can be adopted: average photon number (APN) monitor, photon number analyzer (PNA), and photon number distribution (PND) monitor. In this paper, the security analysis is rigorously given for the APN monitor, while for the PNA, the analysis, including statistical fluctuation and random noise, is addressed with a confidence level. The results show that the PNA can achieve better performance than the APN monitor and can asymptotically approach the theoretical limit of the PND monitor. Also, the passive scheme with the PNA works efficiently when the signal-to-noise ratio (RSN) is not too low and so is highly applicable to solve the untrusted source problem in the QKD system.

  14. Electron energy distribution function by using probe method in electron cyclotron resonance multicharged ion source

    SciTech Connect

    Kumakura, Sho Kurisu, Yosuke; Kimura, Daiju; Yano, Keisuke; Imai, Youta; Sato, Fuminobu; Kato, Yushi; Iida, Toshiyuki

    2014-02-15

    We are constructing a tandem type electron cyclotron resonance (ECR) ion source (ECRIS). High-energy electrons in ECRIS plasma affect the electron energy distribution and generate multicharged ions. In this study, we measure the electron energy distribution function (EEDF) in the low-energy region (≦100 eV) in ECRIS plasma at extremely low pressures (10⁻³–10⁻⁵ Pa) using a cylindrical Langmuir probe. From the results, it is found that the EEDF correlates with the electron density and the temperature obtained from the conventional probe analysis. In addition, we confirm that the tail of the EEDF spreads to the high-energy region as the pressure rises and that there are high-energy electrons in the ECR multicharged ion source plasma. The effective temperature estimated from the experimentally obtained EEDF is larger than the electron temperature obtained from the conventional method.

  15. Theoretical and measured electric field distributions within an annular phased array: consideration of source antennas.

    PubMed

    Zhang, Y; Joines, W T; Jirtle, R L; Samulski, T V

    1993-08-01

    The magnitude of E-field patterns generated by an annular array prototype device has been calculated and measured. Two models were used to describe the radiating sources: a simple linear dipole and a stripline antenna model. The stripline model includes detailed geometry of the actual antennas used in the prototype and an estimate of the antenna current based on microstrip transmission line theory. This more detailed model yields better agreement with the measured field patterns, reducing the rms discrepancy by a factor of about 6 (from approximately 23 to 4%) in the central region of interest where the SEM is within 25% of the maximum. We conclude that accurate modeling of source current distributions is important for determining SEM distributions associated with such heating devices. PMID:8258444

  16. Auditory evoked off-response: its source distribution is different from that of on-response.

    PubMed

    Noda, K; Tonoike, M; Doi, K; Koizuka, I; Yamaguchi, M; Seo, R; Matsumoto, N; Noiri, T; Takeda, N; Kubo, T

    1998-08-01

    Offset auditory responses were investigated by electroencephalography mainly in the 1970s, but since then no particular attention has been paid to them. Among the studies using magnetoencephalography (MEG) devices there are, to our knowledge, only three studies of the auditory off-response, and no significant variance has ever been observed between the source locations of on- and off-responses elicited from pure tones. We measured auditory evoked magnetic fields (AEFs) to various frequency pure tone stimulation in 5 healthy subjects with a 122-channel helmet-shaped magnetometer, and compared the distributions of the source locations of auditory N100m-Off (magnetic off-response around 100 ms) with those of N100m-On. Their spatial distributions were quite close to each other, and yet they were significantly different. PMID:9721944

  17. Passive-scheme analysis for solving the untrusted source problem in quantum key distribution

    SciTech Connect

    Peng Xiang; Xu Bingjie; Guo Hong

    2010-04-15

    As a practical method, the passive scheme is useful to monitor the photon statistics of an untrusted source in a 'Plug and Play' quantum key distribution (QKD) system. In a passive scheme, three kinds of monitor mode can be adopted: average photon number (APN) monitor, photon number analyzer (PNA), and photon number distribution (PND) monitor. In this paper, the security analysis is rigorously given for the APN monitor, while for the PNA, the analysis, including statistical fluctuation and random noise, is addressed with a confidence level. The results show that the PNA can achieve better performance than the APN monitor and can asymptotically approach the theoretical limit of the PND monitor. Also, the passive scheme with the PNA works efficiently when the signal-to-noise ratio (R^SN) is not too low and so is highly applicable to solve the untrusted source problem in the QKD system.

  18. Long-term mechanical life testing of polymeric post insulators for distribution and a comparison to porcelain

    SciTech Connect

    Cherney, E.A. )

    1988-07-01

    The paper presents the results and analyses of long-term cantilever strength tests on polymeric line post insulators. The time-to-failure data for static cantilever loads are represented by the Weibull distribution. The life distribution, obtained from the maximum likelihood estimates of the accelerated failure times, fits an exponential model. An extrapolation of the life distribution to normal loads provides an estimate of the strength rating and mechanical equivalence to porcelain line post insulators.
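
    As a purely illustrative sketch of this kind of life-data analysis, the snippet below fits a two-parameter Weibull distribution to hypothetical time-to-failure data by maximum likelihood and evaluates a survival probability; the failure times, load level, and service time are invented and are not the paper's data.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical times to failure (hours) under a constant cantilever load;
    # the values are invented for illustration only.
    failure_times = np.array([312.0, 455.0, 518.0, 640.0, 702.0, 815.0, 990.0, 1105.0])

    # Maximum-likelihood fit of a two-parameter Weibull (location fixed at zero).
    shape, loc, scale = stats.weibull_min.fit(failure_times, floc=0.0)
    print(f"Weibull shape (beta) = {shape:.2f}, scale (eta) = {scale:.0f} h")

    # A shape parameter near 1 corresponds to the exponential life model that the
    # study reports for the accelerated failure times.
    t_service = 500.0  # hypothetical service duration, hours
    survival = stats.weibull_min.sf(t_service, shape, loc=0.0, scale=scale)
    print(f"P(survive {t_service:.0f} h under this load) = {survival:.3f}")
    ```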

  19. Evaluating the effect of network density and geometric distribution on kinematic source inversion models

    NASA Astrophysics Data System (ADS)

    Zhang, Youbing; Dalguer, Luis A.; Song, Seok Goo; Clinton, John; Giardini, Domenico

    2015-01-01

    The effect of network density and geometric distribution on kinematic non-linear source inversion is investigated by inverting synthetic ground motions from a buried strike-slip fault (Mw 6.5), that have been generated by dynamic spontaneous rupture modelling. For the inversion, we use a physics-based regularized Yoffe function as slip velocity function. We test three different cases of station network geometry: (i) single station, varying azimuth and epicentral distance; (ii) multistation circular configurations, that is stations at similar distances from the fault, and regularly spaced around the fault; (iii) irregular multistation configurations using different numbers of stations. Our results show: (1) single station tests suggest that it may be possible to obtain a relatively good source model even using a single station. The best source model using a single station is obtained with stations at which amplitude ratios between three components are not large. We infer that both azimuthal angle and source-to-station distance play an important role in the design of optimal seismic network for source inversion. (2) Multistation tests show that the quality of the inverted source systematically correlates neither with the number of stations, nor with waveform misfit. (3) Waveform misfit has a direct correlation with the number of stations, resulting in overfitting the observed data without any systematic improvement of the source. It suggests that the best source model is not necessarily derived from the model with minimum waveform misfit. (4) A seismic network with a small number of well-spaced stations around the fault may be sufficient to obtain acceptable source inversion.

  20. A Monte Carlo study on dose distribution evaluation of Flexisource 192Ir brachytherapy source

    PubMed Central

    Alizadeh, Majid; Ghorbani, Mahdi; Haghparast, Abbas; Zare, Naser; Ahmadi Moghaddas, Toktam

    2015-01-01

    Aim The aim of this study is to evaluate the dose distribution of the Flexisource 192Ir source. Background Dosimetric evaluation of brachytherapy sources is recommended by Task Group No. 43 (TG-43) of the American Association of Physicists in Medicine (AAPM). Materials and methods The MCNPX code was used to simulate the Flexisource 192Ir source. The dose rate constant and radial dose function were obtained for water and soft tissue phantoms and compared with previous data on this source. Furthermore, the dose rate along the transverse axis was obtained by simulation of the Flexisource and a point source, and the obtained data were compared with those from the Flexiplan treatment planning system (TPS). Results The values of the dose rate constant obtained for water and soft tissue phantoms were equal to 1.108 and 1.106, respectively. The values of the radial dose function are listed in the form of tabulated data. The values of dose rate (cGy/s) obtained are shown in the form of tabulated data and figures. The maximum difference between TPS and Monte Carlo (MC) dose rate values was 11% in a water phantom at 6.0 cm from the source. Conclusion Based on dosimetric parameter comparisons with values previously published, the accuracy of our simulation of the Flexisource 192Ir was verified. The results of the dose rate constant and radial dose function in water and soft tissue phantoms were the same for the Flexisource and point sources. For the Flexisource 192Ir source, the results of TPS calculations in a water phantom were in agreement with the simulations within the calculation uncertainties. Furthermore, the results from the TPS calculation for the Flexisource and the MC calculation for a point source were practically equal within the calculation uncertainties. PMID:25949224
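
    For reference, the dose rate constant and radial dose function reported above are parameters of the standard AAPM TG-43 dose-calculation formalism; in its 2D line-source form it reads, schematically (no values from this study are implied):

    ```latex
    % TG-43 dose-rate formalism (2D, line-source geometry function G_L)
    \dot{D}(r,\theta) = S_K \,\Lambda\,
      \frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\, g_L(r)\, F(r,\theta)
    % S_K: air-kerma strength, \Lambda: dose-rate constant,
    % g_L(r): radial dose function, F(r,\theta): anisotropy function,
    % (r_0,\theta_0) = (1\,\mathrm{cm}, 90^\circ): reference point
    ```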

  1. Influence of the electron source distribution on field-aligned currents

    NASA Astrophysics Data System (ADS)

    Bruening, K.; Goertz, C. K.

    1985-01-01

    The field-aligned current density above a discrete auroral arc has been deduced from the downward electron flux and magnetic field measurements onboard the rocket Porcupine flight 4. Both measurements show that the field-aligned current density is, in spite of decreasing peak energies towards the edge of the arc, about 4 times higher there than in the center of the arc. This can be explained by using the single particle description for an anisotropic electron source distribution.

  2. Distribution and probable source of nitrate in ground water of Paradise Valley, Arizona

    SciTech Connect

    Silver, B.A.; Fielden, J.R.

    1980-01-01

    Two theories have been proposed regarding the source of nitrate in Paradise Valley ground water: one suggests contamination by fertilizers and by treated wastewater effluent, and the other suggests that ammonium chloride, leached from tuffs in the adjacent Superstition Mountains, is oxidized to nitrate and deposited in a braided stream complex. The geology, hydrogeology, and distribution of nitrate in Paradise Valley ground water are described.

  3. Temporal-spatial distribution of non-point source pollution in a drinking water source reservoir watershed based on SWAT

    NASA Astrophysics Data System (ADS)

    Wang, M.; Cheng, W.; Yu, B.-S.; Fang, Y.

    2015-05-01

    The conservation of drinking water source reservoirs is closely tied to regional economic development and people's livelihoods. Research on the non-point pollution characteristics in their watersheds is crucial for reservoir security. The Tang Pu Reservoir watershed was selected as the study area. The non-point pollution model of Tang Pu Reservoir was established based on the SWAT (Soil and Water Assessment Tool) model. The model was adjusted to analyse the temporal-spatial distribution patterns of total nitrogen (TN) and total phosphorus (TP). The results showed that the loss of TN and TP in the reservoir watershed was related to precipitation in the flood season, and the annual changes showed an "M" shape. It was found that the losses of TN and TP accounted for 84.5% and 85.3% in high-flow years, and for 70.3% and 69.7% in low-flow years, respectively. The contributions in normal-flow years were 62.9% and 63.3%, respectively. TN and TP arise mainly from Wangtan, Gulai, and Wangyuan towns. In addition, the sources of TN and TP were found to be spatially consistent.

  4. Unveiling the Gamma-Ray Source Count Distribution Below the Fermi Detection Limit with Photon Statistics

    NASA Astrophysics Data System (ADS)

    Zechlin, Hannes-S.; Cuoco, Alessandro; Donato, Fiorenza; Fornengo, Nicolao; Vittino, Andrea

    2016-08-01

    The source-count distribution as a function of flux, dN/dS, is one of the main quantities characterizing gamma-ray source populations. We employ statistical properties of the Fermi Large Area Telescope (LAT) photon counts map to measure the composition of the extragalactic gamma-ray sky at high latitudes (|b| ≥ 30°) between 1 and 10 GeV. We present a new method, generalizing the use of standard pixel-count statistics, to decompose the total observed gamma-ray emission into (a) point-source contributions, (b) the Galactic foreground contribution, and (c) a truly diffuse isotropic background contribution. Using the 6 yr Fermi-LAT data set (P7REP), we show that the dN/dS distribution in the regime of so far undetected point sources can be consistently described with a power law with an index between 1.9 and 2.0. We measure dN/dS down to an integral flux of ∼2×10⁻¹¹ cm⁻² s⁻¹, improving beyond the 3FGL catalog detection limit by about one order of magnitude. The overall dN/dS distribution is consistent with a broken power law, with a break at 2.1(+1.0/−1.3)×10⁻⁸ cm⁻² s⁻¹. The power-law index n₁ = 3.1(+0.7/−0.5) for bright sources above the break hardens to n₂ = 1.97 ± 0.03 for fainter sources below the break. A possible second break of the dN/dS distribution is constrained to be at fluxes below 6.4×10⁻¹¹ cm⁻² s⁻¹ at the 95% confidence level. The high-latitude gamma-ray sky between 1 and 10 GeV is shown to be composed of ∼25% point sources, ∼69.3% diffuse Galactic foreground emission, and ∼6% isotropic diffuse background.
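
    To make the quoted parametrization concrete, the sketch below evaluates a broken power-law dN/dS using the central values of the break flux and indices given in the abstract; the normalization is arbitrary, and the code is an illustration rather than the authors' analysis pipeline.

    ```python
    import numpy as np

    def dnds_broken_powerlaw(S, A, S_b, n1, n2):
        """Broken power-law source-count distribution dN/dS.

        Sources brighter than the break S_b follow S**(-n1); fainter sources
        follow S**(-n2), with the two branches matched at S_b.
        """
        S = np.asarray(S, dtype=float)
        return np.where(S >= S_b, A * (S / S_b) ** (-n1), A * (S / S_b) ** (-n2))

    # Central values quoted in the abstract; A is an arbitrary normalization.
    S = np.logspace(-11, -7, 200)                       # flux, cm^-2 s^-1
    dnds = dnds_broken_powerlaw(S, A=1.0, S_b=2.1e-8, n1=3.1, n2=1.97)
    print(dnds[:5])
    ```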

  5. Experimental measurement-device-independent quantum key distribution with imperfect sources

    NASA Astrophysics Data System (ADS)

    Tang, Zhiyuan; Wei, Kejin; Bedroya, Olinka; Qian, Li; Lo, Hoi-Kwong

    2016-04-01

    Measurement-device-independent quantum key distribution (MDI-QKD), which is immune to all detector side-channel attacks, is the most promising solution to the security issues in practical quantum key distribution systems. Although several experimental demonstrations of MDI-QKD have been reported, they all make one crucial but not yet verified assumption, that is, there are no flaws in state preparation. Such an assumption is unrealistic and security loopholes remain in the source. Here we present a MDI-QKD experiment with the modulation error taken into consideration. By applying the loss-tolerant security proof by Tamaki et al. [Phys. Rev. A 90, 052314 (2014)], 10.1103/PhysRevA.90.052314, we distribute secure keys over fiber links up to 40 km with imperfect sources, which would not have been possible under previous security proofs. By simultaneously closing loopholes at the detectors and a critical loophole—modulation error in the source, our work shows the feasibility of secure QKD with practical imperfect devices.

  6. Energy Distribution of a Prototype KSTAR Neutral Beam Ion Source for 300 s Arc Discharge

    NASA Astrophysics Data System (ADS)

    Chang, Doo-Hee; Jeong, Seung Ho; Oh, Byung-Hoon

    2008-02-01

    A neutral beam test-stand (NBTS) system has been developed for the extraction of a 300 s deuterium beam of 120 kV/65 A as an auxiliary heating system of Korea Superconducting Tokamak Advanced Research (KSTAR). The prototype long pulse ion source (LPIS) consists of a plasma generator and a set of tetrode accelerators. Beam extraction for 300 s was achieved at a maximum hydrogen beam power of 1.6 MW (70 kV/23 A) with an arc discharge power of 63 kW. The energy distribution of the ion source was analyzed by water-flow calorimetry (WFC) by monitoring the cooling-water temperature during the arc discharge. The power dissipation rate on the accelerator column was 0.97% of the total extracted ion beam power, with a power loss of 0.2% caused by the collision of back-streaming electrons with the electron dump plate of the plasma generator. 74.2% of the total energy was estimated to be distributed in the plasma generator and the accelerator for an arc discharge of 300 s. Also, 75.6% of the total energy was distributed in the ion source for an arc discharge of 2 s. The remaining energy was lost through the structures around the water-cooling path.

  7. Source contributions to the regional distribution of secondary particulate matter in California

    NASA Astrophysics Data System (ADS)

    Ying, Qi; Kleeman, Michael J.

    Source contributions to PM2.5 nitrate, sulfate and ammonium ion concentrations in California's San Joaquin Valley (SJV) (4-6 January 1996) and South Coast Air Basin (SoCAB) surrounding Los Angeles (23-25 September 1996) were predicted using a three-dimensional source-oriented Eulerian air quality model. The air quality model tracks the formation of PM2.5 nitrate, sulfate and ammonium ion from primary particles and precursor gases emitted from different sources through a mathematical simulation of emission, chemical reaction, gas-to-particle conversion, transport and deposition. The observed PM2.5 nitrate, sulfate and ammonium ion concentrations, and the mass distribution of nitrate, sulfate and ammonium ion as a function of particle size have been successfully reproduced by the model simulation. Approximately 45-57% of the PM2.5 nitrate and 34-40% of the PM2.5 ammonium ion in the SJV is formed from precursor gaseous species released from sources upwind of the valley. In the SoCAB, approximately 83% of the PM2.5 nitrate and 82% of the PM2.5 ammonium ion is formed from precursor gaseous species released from sources within the air basin. In the SJV, transportation related sources contribute approximately 24-30% of the PM2.5 nitrate (diesel engines ˜13.5-17.0%, catalyst equipped gasoline engines ˜10.2-12.8% and non-catalyst equipped gasoline engines ˜0.3-0.4%). In the SoCAB, transportation related sources directly contribute to approximately 67% of the PM2.5 nitrate (diesel engines 34.6%, non-catalyst equipped gasoline engine 4.7% and catalyst equipped gasoline engine 28.1%). PM2.5 ammonium ion concentrations in the SJV were dominated by area (including animal) NH3 sources (16.7-25.3%), soil (7.2-10.9%), fertilizer NH3 sources (11.4-17.3%) and point NH3 sources (14.3-21.7%). In the SoCAB, ammonium ion is mainly associated with animal sources (28.2%) and catalyst equipped gasoline engines (16.2%). In both regions, the majority of the relatively low PM2.5 sulfate

  8. Polycyclic aromatic hydrocarbons in the dagang oilfield (china): distribution, sources, and risk assessment.

    PubMed

    Jiao, Haihua; Rui, Xiaoping; Wu, Shanghua; Bai, Zhihui; Zhuang, Xuliang; Huang, Zhanbin

    2015-06-01

    The levels of 16 polycyclic aromatic hydrocarbons (PAHs) were investigated in 27 upper layer (0-25 cm) soil samples collected from the Dagang Oilfield (China) in April 2013 to estimate their distribution, possible sources, and potential risks posed. The total concentrations of PAHs (∑PAHs) varied between 103.6 µg·kg(-1) and 5872 µg·kg(-1), with a mean concentration of 919.8 µg·kg(-1); increased concentrations were noted along a gradient from arable desert soil (mean 343.5 µg·kg(-1)), to oil well areas (mean of 627.3 µg·kg(-1)), to urban and residential zones (mean of 1856 µg·kg(-1)). Diagnostic ratios showed diverse sources of PAHs, including petroleum, liquid fossil fuels, and biomass combustion sources. Combustion sources were most significant for PAHs in arable desert soils and residential zones, while petroleum sources were a significant source of PAHs in oilfield areas. Based on their carcinogenicity, PAHs were classified as carcinogenic (B) or not classified/non-carcinogenic (NB). The total concentrations of carcinogenic PAHs (∑BPAHs) varied from 13.3 µg·kg(-1) to 4397 µg·kg(-1) across all samples, with a mean concentration of 594.4 µg·kg(-1). The results suggest that oilfield soil is subject to a certain level of ecological environment risk. PMID:26016436

  9. Polycyclic Aromatic Hydrocarbons in the Dagang Oilfield (China): Distribution, Sources, and Risk Assessment

    PubMed Central

    Jiao, Haihua; Rui, Xiaoping; Wu, Shanghua; Bai, Zhihui; Zhuang, Xuliang; Huang, Zhanbin

    2015-01-01

    The levels of 16 polycyclic aromatic hydrocarbons (PAHs) were investigated in 27 upper layer (0–25 cm) soil samples collected from the Dagang Oilfield (China) in April 2013 to estimate their distribution, possible sources, and potential risks posed. The total concentrations of PAHs (∑PAHs) varied between 103.6 µg·kg−1 and 5872 µg·kg−1, with a mean concentration of 919.8 µg·kg−1; increased concentrations were noted along a gradient from arable desert soil (mean 343.5 µg·kg−1), to oil well areas (mean of 627.3 µg·kg−1), to urban and residential zones (mean of 1856 µg·kg−1). Diagnostic ratios showed diverse sources of PAHs, including petroleum, liquid fossil fuels, and biomass combustion sources. Combustion sources were most significant for PAHs in arable desert soils and residential zones, while petroleum sources were a significant source of PAHs in oilfield areas. Based on their carcinogenicity, PAHs were classified as carcinogenic (B) or not classified/non-carcinogenic (NB). The total concentrations of carcinogenic PAHs (∑BPAHs) varied from 13.3 µg·kg−1 to 4397 µg·kg−1 across all samples, with a mean concentration of 594.4 µg·kg−1. The results suggest that oilfield soil is subject to a certain level of ecological environment risk. PMID:26016436

  10. Two-dimensional extended fluid model for a dc glow discharge with nonlocal ionization source term

    NASA Astrophysics Data System (ADS)

    Rafatov, Ismail; Bogdanov, Eugeny; Kudryavtsev, Anatoliy

    2013-09-01

    Numerical techniques applied to gas discharge plasma modelling are generally grouped into fluid and kinetic (particle) methods, and their combinations, which lead to hybrid models. Hybrid models usually employ a Monte Carlo method to simulate fast electron dynamics, while slow plasma species are described as fluids. However, since the contribution of fast electrons to these models is limited to deriving the ionization rate distribution, their effect can be expressed by an analytical approximation of the ionization source function, which is then integrated into the fluid model. In the context of this approach, we incorporated the effect of fast electrons into the "extended fluid model" of a glow discharge, using two spatial dimensions. Slow electrons, ions and excited neutral species are described by the fluid plasma equations. Slow electron transport (diffusion and mobility) coefficients as well as electron-induced reaction rates are determined from the solutions of the electron Boltzmann equation. The self-consistent electric field is calculated using the Poisson equation. We carried out test calculations for the discharge in argon gas. Comparison with the experimental data as well as with the hybrid model results exhibits good applicability of the proposed model. The work was supported by the joint research grant from the Scientific and Technical Research Council of Turkey (TUBITAK) 212T164 and the Russian Foundation for Basic Research (RFBR).

  11. Mercury in soil near a long-term air emission source in southeastern Idaho

    USGS Publications Warehouse

    Abbott, M.L.; Susong, D.D.; Olson, M.; Krabbenhoft, D.P.

    2003-01-01

    At the Idaho National Engineering and Environmental Laboratory in southeastern Idaho, a 500 °C fluidized bed calciner was intermittently operated for 37 years, with measured Hg emission rates of 9-11 g/h. Surface soil was sampled at 57 locations around the facility to determine the spatial distribution of Hg fallout and surface Hg variability, and to predict the total residual Hg mass in the soil from historical emissions. Measured soil concentrations were slightly higher (p<0.05) within 5 km of the source but were overall very low (15-20 ng/g) compared to background Hg levels published for similar soils in the USA (50-70 ng/g). Concentrations decreased 4%/cm with depth and were found to be twice as high under shrubs and in depressions. Mass balance calculations accounted for only 2.5-20% of the estimated total Hg emitted over the 37-year calciner operating history. These results suggest that much of the Hg deposited from calciner operations may have been reduced in the soil and re-emitted as Hg(0) to the global atmospheric pool.

  12. LMFBR source term experiments in the Fuel Aerosol Simulant Test (FAST) facility

    SciTech Connect

    Petrykowski, J.C.; Longest, A.W.

    1985-01-01

    The transport of uranium dioxide (UO2) aerosol through liquid sodium was studied in a series of ten experiments in the Fuel Aerosol Simulant Test (FAST) facility at Oak Ridge National Laboratory (ORNL). The experiments were designed to provide a mechanistic basis for evaluating the radiological source term associated with a postulated, energetic core disruptive accident (CDA) in a liquid metal fast breeder reactor (LMFBR). Aerosol was generated by capacitor discharge vaporization of UO2 pellets which were submerged in a sodium pool under an argon cover gas. Measurements of the pool and cover gas pressures were used to study the transport of aerosol contained by vapor bubbles within the pool. Samples of cover gas were filtered to determine the quantity of aerosol released from the pool. The depth at which the aerosol was generated was found to be the most critical parameter affecting release. The largest release was observed in the baseline experiment where the sample was vaporized above the sodium pool. In the nine "undersodium" experiments aerosol was generated beneath the surface of the pool at depths varying from 30 to 1060 mm. The mass of aerosol released from the pool was found to be a very small fraction of the original specimen. It appears that the bulk of aerosol was contained by bubbles which collapsed within the pool. 18 refs., 11 figs., 4 tabs.

  13. Implementation of a source term control program in a mature boiling water reactor.

    PubMed

    Vargo, G J; Jarvis, A J; Remark, J F

    1991-06-01

    The implementation and results of a source term control program implemented at the James A. FitzPatrick Nuclear Power Plant (JAF), a mature boiling water reactor (BWR) facility that has been in commercial operation since 1975, are discussed. Following a chemical decontamination of the reactor water recirculation piping in the Reload 8/Cycle 9 refueling outage in 1988, hydrogen water chemistry (HWC) and feedwater Zn addition were implemented. This is the first application of both HWC and feedwater Zn addition in a BWR facility. The radiological benefits and impacts of combined operation of HWC and feedwater Zn addition at JAF during Cycle 9 are detailed and summarized. The implementation of hydrogen water chemistry resulted in a significant transport of corrosion products within the reactor coolant system that was greater than anticipated. Feedwater Zn addition appears to be effective in controlling buildup of other activated corrosion products such as 60Co on reactor water recirculation piping; however, adverse impacts were encountered. The major adverse impact of feedwater Zn addition is the production of 65Zn that is released during plant outages and operational transients. PMID:2032839

  14. A Source-Term Based Boundary Layer Bleed/Effusion Model for Passive Shock Control

    NASA Technical Reports Server (NTRS)

    Baurle, Robert A.; Norris, Andrew T.

    2011-01-01

    A modeling framework for boundary layer effusion has been developed based on the use of source (or sink) terms instead of the usual practice of specifying bleed directly as a boundary condition. This framework allows the surface boundary condition (i.e. isothermal wall, adiabatic wall, slip wall, etc.) to remain unaltered in the presence of bleed. This approach also lends itself to easily permit the addition of empirical models for second order effects that are not easily accounted for by simply defining effective transpiration values. Two effusion models formulated for supersonic flows have been implemented into this framework; the Doerffer/Bohning law and the Slater formulation. These models were applied to unit problems that contain key aspects of the flow physics applicable to bleed systems designed for hypersonic air-breathing propulsion systems. The ability of each model to predict bulk bleed properties was assessed, as well as the response of the boundary layer as it passes through and downstream of a porous bleed system. The model assessment was performed with and without the presence of shock waves. Three-dimensional CFD simulations that included the geometric details of the porous plate bleed systems were also carried out to supplement the experimental data, and provide additional insights into the bleed flow physics. Overall, both bleed formulations fared well for the tests performed in this study. However, the sample of test problems considered in this effort was not large enough to permit a comprehensive validation of the models.

  15. On the application of ENO scheme with subcell resolution to conservation laws with stiff source terms

    NASA Technical Reports Server (NTRS)

    Chang, Shih-Hung

    1991-01-01

    Two approaches are used to extend the essentially non-oscillatory (ENO) schemes to treat conservation laws with stiff source terms. One approach is the application of the Strang time-splitting method. Here the basic ENO scheme and the Harten modification using subcell resolution (SR), ENO/SR scheme, are extended this way. The other approach is a direct method and a modification of the ENO/SR. Here the technique of ENO reconstruction with subcell resolution is used to locate the discontinuity within a cell and the time evolution is then accomplished by solving the differential equation along characteristics locally and advancing in the characteristic direction. This scheme is denoted ENO/SRCD (subcell resolution - characteristic direction). All the schemes are tested on the equation of LeVeque and Yee (NASA-TM-100075, 1988) modeling reacting flow problems. Numerical results show that these schemes handle this intriguing model problem very well, especially with ENO/SRCD which produces perfect resolution at the discontinuity.
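
    The fractional-step idea is easy to sketch. The snippet below applies Strang splitting to a scalar model problem of the LeVeque-Yee type, u_t + u_x = -mu·u·(u-1)·(u-1/2), with first-order upwind convection and sub-cycled source integration. It is a minimal illustration of the splitting framework only (grid, CFL number, and stiffness are invented), and on a coarse grid it can reproduce the wrong-propagation-speed pathology that the subcell-resolution schemes described above are designed to cure.

    ```python
    import numpy as np

    # Strang-split solution of u_t + u_x = -mu * u * (u - 1) * (u - 0.5).
    nx, cfl, mu = 200, 0.9, 1000.0
    dx = 1.0 / nx
    dt = cfl * dx                                  # advection speed is 1
    x = (np.arange(nx) + 0.5) * dx
    u = np.where(x < 0.3, 1.0, 0.0)                # step initial data

    def source(u):
        return -mu * u * (u - 1.0) * (u - 0.5)

    def source_step(u, dt, n_sub=50):
        # Sub-cycled explicit integration of the stiff reaction ODE (a sketch;
        # an implicit or exact ODE solver would normally be preferred).
        h = dt / n_sub
        for _ in range(n_sub):
            u = u + h * source(u)
        return u

    for _ in range(100):
        u = source_step(u, 0.5 * dt)               # half source step
        u_left = np.concatenate(([1.0], u[:-1]))   # inflow value 1 at the left end
        u = u - dt / dx * (u - u_left)             # first-order upwind convection
        u = source_step(u, 0.5 * dt)               # half source step
    ```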

  16. High order finite difference methods with subcell resolution for advection equations with stiff source terms

    SciTech Connect

    Wang, Wei; Shu, Chi-Wang; Yee, H.C.; Sjögreen, Björn

    2012-01-01

    A new high order finite-difference method utilizing the idea of Harten ENO subcell resolution method is proposed for chemical reactive flows and combustion. In reaction problems, when the reaction time scale is very small, e.g., orders of magnitude smaller than the fluid dynamics time scales, the governing equations will become very stiff. Wrong propagation speed of discontinuity may occur due to the underresolved numerical solution in both space and time. The present proposed method is a modified fractional step method which solves the convection step and reaction step separately. In the convection step, any high order shock-capturing method can be used. In the reaction step, an ODE solver is applied but with the computed flow variables in the shock region modified by the Harten subcell resolution idea. For numerical experiments, a fifth-order finite-difference WENO scheme and its anti-diffusion WENO variant are considered. A wide range of 1D and 2D scalar and Euler system test cases are investigated. Studies indicate that for the considered test cases, the new method maintains high order accuracy in space for smooth flows, and for stiff source terms with discontinuities, it can capture the correct propagation speed of discontinuities in very coarse meshes with reasonable CFL numbers.

  17. Aerosol climatology at Delhi in the western Indo-Gangetic Plain: Microphysics, long-term trends, and source strengths

    NASA Astrophysics Data System (ADS)

    Lodhi, Neelesh K.; Beegum, S. Naseema; Singh, Sachchidanand; Kumar, Krishan

    2013-02-01

    We present the climatology of aerosol microphysics, its trends, and the impact of potential sources based on long-term measurements (for a period of 11.5 years from December 2001 to May 2012) of aerosol optical depths (AOD) in the spectral range 340-1020 nm from an urban center, Delhi (28.6°N, 77.3°E, 238 m mean sea level), in the western Indo-Gangetic Plain (IGP). The study is the first ever long-term characterization of aerosols over the western IGP from ground-based measurements. AODs are known to affect the air quality, visibility, radiative balance, and cloud microphysics of the region, and the IGP is one of the most populated and polluted regions of the world. Our measurements show consistently high AOD during the entire period of observation. The seasonal variations of spectral AODs and Angstrom parameters are generally consistent every year. The AODs show a weak but statistically significant (at the 95% confidence level) decreasing trend of approximately -0.02/year at 500 nm, possibly modulated by the pre-monsoon heavy dust loading during the first half of the observation period. The climatological monthly mean AOD at shorter wavelengths peaks twice, during June and November, while at longer wavelengths it shows only one peak in June. The annual variations of the Angstrom exponent, α, and its derivative, α', suggest the prevalence of multi-modal aerosol size distributions at Delhi. The coarse-mode aerosols dominate during the summer (March-June) and monsoon (July-September) seasons, whereas the fine/accumulation mode is enhanced during the post-monsoon (October-November) and winter (December-February) seasons. Potential advection pathways have been identified using concentration weighted trajectory (CWT) analysis of the 5-day isentropic air mass back trajectories at the observation site, and their seasonal variations are discussed.

  18. Geochemistry of dissolved trace elements and heavy metals in the Dan River Drainage (China): distribution, sources, and water quality assessment.

    PubMed

    Meng, Qingpeng; Zhang, Jing; Zhang, Zhaoyu; Wu, Tairan

    2016-04-01

    Dissolved trace elements and heavy metals in the Dan River drainage basin, which is the drinking water source area of South-to-North Water Transfer Project (China), affect large numbers of people and should therefore be carefully monitored. To investigate the distribution, sources, and quality of river water, this study integrating catchment geology and multivariate statistical techniques was carried out in the Dan River drainage from 99 river water samples collected in 2013. The distribution of trace metal concentrations in the Dan River drainage was similar to that in the Danjiangkou Reservoir, indicating that the reservoir was significantly affected by the Dan River drainage. Moreover, our results suggested that As, Sb, Cd, Mn, and Ni were the major pollutants. We revealed extremely high concentrations of As and Sb in the Laoguan River, Cd in the Qingyou River, Mn, Ni, and Cd in the Yinhua River, As and Sb in the Laojun River, and Sb in the Dan River. According to the water quality index, water in the Dan River drainage was suitable for drinking; however, an exposure risk assessment model suggests that As and Sb in the Laojun and Laoguan rivers could pose a high risk to humans in terms of adverse health and potential non-carcinogenic effects. PMID:26782327

  19. A review of the environmental distribution, fate, and control of tetrabromobisphenol A released from sources.

    PubMed

    Malkoske, Tyler; Tang, Yulin; Xu, Wenying; Yu, Shuili; Wang, Hongtao

    2016-11-01

    Tetrabromobisphenol A (TBBPA) is a high-use brominated flame retardant (BFR) that raises concerns of widespread pollution and harm to human and ecological health. BFR manufacturing, TBBPA-based product manufacturing, e-waste recycling, and wastewater treatment plants have been identified as the main emission point sources. This paper discusses the occurrence, distribution, and fate of TBBPA from source to the environment. After release to the environment, TBBPA may undergo adsorption, photolysis, and biological degradation. Exposure of humans and biota is also discussed, along with the role of treatment and regulations in reducing release of TBBPA to the environment and exposure risks. In general, this review found that stronger enforcement of existing legislation and investment in the treatment of e-waste plastics and wastewater from emission point sources could be effective methods for reducing the release of, and exposure to, TBBPA in the environment. PMID:27325014

  20. Balancing continuous-variable quantum key distribution with source-tunable linear optics cloning machine

    NASA Astrophysics Data System (ADS)

    Guo, Ying; Lv, Geli; Zeng, Guihua

    2015-11-01

    We show that the tolerable excess noise can be dynamically balanced in source preparation while inserting a tunable linear optics cloning machine (LOCM) for balancing the secret key rate and the maximal transmission distance of continuous-variable quantum key distribution (CVQKD). The intensities of source noise are sensitive to the tunable LOCM and can be stabilized to the suitable values to eliminate the impact of channel noise and defeat the potential attacks even in the case of the degenerated linear optics amplifier (LOA). The LOCM-additional noise can be elegantly employed by the reference partner of reconciliation to regulate the secret key rate and the transmission distance. Simulation results show that there is a considerable improvement in the secret key rate of the LOCM-based CVQKD while providing a tunable LOCM for source preparation with the specified parameters in suitable ranges.

  1. Heralded single-photon sources for quantum-key-distribution applications

    NASA Astrophysics Data System (ADS)

    Schiavon, Matteo; Vallone, Giuseppe; Ticozzi, Francesco; Villoresi, Paolo

    2016-01-01

    Single-photon sources (SPSs) are a fundamental building block for optical implementations of quantum information protocols. Among SPSs, multiple crystal heralded single-photon sources seem to give the best compromise between high pair production rate and low multiple photon events. In this work, we study their performance in a practical quantum-key-distribution experiment, by evaluating the achievable key rates. The analysis focuses on the two different schemes, symmetric and asymmetric, proposed for the practical implementation of heralded single-photon sources, with attention on the performance of their composing elements. The analysis is based on the protocol proposed by Bennett and Brassard in 1984 and on its improvement exploiting decoy state technique. Finally, a simple way of exploiting the postselection mechanism for a passive, one decoy state scheme is evaluated.

  2. Qualitative analysis of precipitation distribution in Poland with use of different data sources

    NASA Astrophysics Data System (ADS)

    Walawender, J.; Dyras, I.; Łapeta, B.; Serafin-Rek, D.; Twardowski, A.

    2008-04-01

    Geographical Information Systems (GIS) can be used to integrate data from different sources and in different formats to perform innovative spatial and temporal analysis. GIS can also be applied in climatic research to manage, investigate and display all kinds of weather data. The main objective of this study is to demonstrate that GIS is a useful tool to examine and visualise precipitation distribution obtained from different data sources: ground measurements, satellite and radar data. Three selected days (30 cases) with convective rainfall situations were analysed. First, a scalable GRID-based approach was applied to store data from the three different sources in a comparable layout. Then, a geoprocessing algorithm was created within the ArcGIS 9.2 environment. The algorithm included: GRID definition, reclassification and raster algebra. All of the calculations and procedures were performed automatically. Finally, contingency tables and pie charts were created to show the relationship between ground measurements and both satellite- and radar-derived data. The results were visualised on maps.
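
    As a rough illustration of the reclassification and raster-algebra step described above (not the actual ArcGIS workflow), two synthetic precipitation grids are binned into intensity classes and crossed into a contingency table; the grids and class edges below are invented placeholders.

    ```python
    import numpy as np

    # Synthetic stand-ins for a gauge-interpolated grid and a satellite- or
    # radar-derived grid of daily precipitation (mm).
    rng = np.random.default_rng(0)
    gauge = rng.gamma(2.0, 3.0, size=(100, 100))
    satellite = np.clip(gauge + rng.normal(0.0, 2.0, gauge.shape), 0.0, None)

    edges = [0.0, 1.0, 5.0, 10.0, np.inf]          # class boundaries (mm), assumed
    gauge_cls = np.digitize(gauge, edges) - 1      # reclassification: classes 0..3
    sat_cls = np.digitize(satellite, edges) - 1

    n_cls = len(edges) - 1
    table = np.zeros((n_cls, n_cls), dtype=int)    # rows: gauge, cols: satellite
    np.add.at(table, (gauge_cls.ravel(), sat_cls.ravel()), 1)
    print(table)
    ```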

  3. Source apportionment of ambient fine particle size distribution using positive matrix factorization in Erfurt, Germany

    PubMed Central

    Yue, Wei; Stölzel, Matthias; Cyrys, Josef; Pitz, Mike; Heinrich, Joachim; Kreyling, Wolfgang G.; Wichmann, H.-Erich; Peters, Annette; Wang, Sheng; Hopke, Philip K.

    2008-01-01

    Particle size distribution data collected between September 1997 and August 2001 in Erfurt, Germany, were used to investigate the sources of ambient particulate matter by positive matrix factorization (PMF). A total of 29,313 hourly averaged particle size distribution measurements covering the size range of 0.01 to 3.0 μm were included in the analysis. The particle number concentrations (cm−3) for the 9 channels in the ultrafine range, and mass concentrations (ng m−3) for the 41 size bins in the accumulation mode and particles up to 3 μm in aerodynamic diameter, were used in the PMF. The analysis was performed separately for each season. Additional analyses were performed including calculations of the correlations of factor contributions with gaseous pollutants (O3, NO, NO2, CO and SO2) and particle composition data (sulfate, organic carbon and elemental carbon), estimating the contributions of each factor to the total number and mass concentration, identifying the directional locations of the sources using the conditional probability function, and examining the diurnal patterns of factor scores. These results were used to assist in the interpretation of the factors. Five factors representing particles from airborne soil, ultrafine particles from local traffic, secondary aerosols from local fuel combustion, particles from remote traffic sources, and secondary aerosols from multiple sources were identified in all seasons. PMID:18433834
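
    The factor-analytic idea behind PMF can be illustrated with a plain non-negative matrix factorization, as sketched below on synthetic data using scikit-learn's NMF; note that true PMF additionally weights every matrix element by its measurement uncertainty, which this stand-in omits.

    ```python
    import numpy as np
    from sklearn.decomposition import NMF

    # A (samples x size bins) data matrix X is approximated by non-negative
    # factor contributions G (samples x factors) and profiles F (factors x bins).
    rng = np.random.default_rng(0)
    X = rng.gamma(shape=2.0, scale=50.0, size=(500, 50))   # synthetic stand-in data

    model = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
    G = model.fit_transform(X)     # time series of factor contributions
    F = model.components_          # size-distribution profile of each factor

    # Fractional contribution of each factor to the total reconstructed signal,
    # analogous to apportioning number/mass concentration among source factors.
    reconstructed = G @ F
    share = (G * F.sum(axis=1)).sum(axis=0) / reconstructed.sum()
    print(np.round(share, 3))
    ```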

  4. Regional Sources of Nitrous Oxide over the United States: Seasonal Variation and Spatial Distribution

    SciTech Connect

    Miller, S. M.; Kort, E. A.; Hirsch, A. I.; Dlugokencky, E. J.; Andrews, A. E.; Xu, X.; Tian, H.; Nehrkorn, T.; Eluszkiewicz, J.; Michalak, A. M.; Wofsy, S. C.

    2012-01-01

    This paper presents top-down constraints on the magnitude, spatial distribution, and seasonality of nitrous oxide (N2O) emissions over the central United States. We analyze data from tall towers in 2004 and 2008 using a high resolution Lagrangian particle dispersion model paired with both geostatistical and Bayesian inversions. Our results indicate peak N2O emissions in June with a strong seasonal cycle. The spatial distribution of sources closely mirrors data on fertilizer application with particularly large N2O sources over the US Cornbelt. Existing inventories for N2O predict emissions that differ substantially from the inverse model results in both seasonal cycle and magnitude. We estimate a total annual N2O budget over the central US of 0.9-1.2 TgN/yr and an extrapolated budget for the entire US and Canada of 2.1-2.6 TgN/yr. By this estimate, the US and Canada account for 12-15% of the total global N2O source or 32-39% of the global anthropogenic source as reported by the Intergovernmental Panel on Climate Change in 2007.
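
    Schematically, the Bayesian synthesis underlying such top-down estimates combines a transport operator, prior fluxes, and observations; the minimal sketch below uses random placeholder matrices and invented covariances, and is not the authors' inversion code.

    ```python
    import numpy as np

    # Minimal Bayesian linear flux inversion: observations z = H s + noise,
    # with a Gaussian prior on the fluxes s.  H would come from the Lagrangian
    # transport model (footprints); here everything is a synthetic placeholder.
    rng = np.random.default_rng(1)
    n_obs, n_flux = 300, 100
    H = rng.random((n_obs, n_flux))            # transport (footprint) operator
    s_prior = np.full(n_flux, 0.5)             # prior flux estimate
    Q = 0.1 * np.eye(n_flux)                   # prior flux error covariance
    R = 0.05 * np.eye(n_obs)                   # model-data mismatch covariance
    z = H @ (s_prior + 0.2) + rng.normal(0.0, 0.05, n_obs)   # synthetic obs

    # Posterior mean via the standard Bayesian (Kalman-type) update.
    K = Q @ H.T @ np.linalg.inv(H @ Q @ H.T + R)
    s_post = s_prior + K @ (z - H @ s_prior)
    print(s_post[:5])
    ```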

  5. Performance metrics and variance partitioning reveal sources of uncertainty in species distribution models

    USGS Publications Warehouse

    Watling, James I.; Brandt, Laura A.; Bucklin, David N.; Fujisaki, Ikuko; Mazzotti, Frank J.; Romanach, Stephanie; Speroterra, Carolina

    2015-01-01

    Species distribution models (SDMs) are widely used in basic and applied ecology, making it important to understand sources and magnitudes of uncertainty in SDM performance and predictions. We analyzed SDM performance and partitioned variance among prediction maps for 15 rare vertebrate species in the southeastern USA using all possible combinations of seven potential sources of uncertainty in SDMs: algorithms, climate datasets, model domain, species presences, variable collinearity, CO2 emissions scenarios, and general circulation models. The choice of modeling algorithm was the greatest source of uncertainty in SDM performance and prediction maps, with some additional variation in performance associated with the comprehensiveness of the species presences used for modeling. Other sources of uncertainty that have received attention in the SDM literature such as variable collinearity and model domain contributed little to differences in SDM performance or predictions in this study. Predictions from different algorithms tended to be more variable at northern range margins for species with more northern distributions, which may complicate conservation planning at the leading edge of species' geographic ranges. The clear message emerging from this work is that researchers should use multiple algorithms for modeling rather than relying on predictions from a single algorithm, invest resources in compiling a comprehensive set of species presences, and explicitly evaluate uncertainty in SDM predictions at leading range margins.

  6. Effect of tissue inhomogeneity on dose distribution of point sources of low-energy electrons.

    PubMed

    Kwok, C S; Bialobzyski, P J; Yu, S K; Prestwich, W V

    1990-01-01

    Perturbation in dose distributions of point sources of low-energy electrons at planar interfaces of cortical bone (CB) and red marrow (RM) was investigated experimentally and by Monte Carlo codes EGS and the TIGER series. Ultrathin LiF thermoluminescent dosimeters were used to measure the dose distributions of point sources of 204Tl and 147Pm in RM. When the point sources were at 12 mg/cm2 from a planar interface of CB and RM equivalent plastics, dose enhancement ratios in RM averaged over the region 0-12 mg/cm2 from the interface were measured to be 1.08 +/- 0.03 (SE) and 1.03 +/- 0.03 (SE) for 204Tl and 147Pm, respectively. The Monte Carlo codes predicted 1.05 +/- 0.02 and 1.01 +/- 0.02 for the two nuclides, respectively. However, EGS gave consistently 3% higher dose in the dose scoring region than the TIGER series when point sources of monoenergetic electrons up to 0.75 MeV energy were considered in the homogeneous RM situation or in the CB and RM heterogeneous situation. By means of the TIGER series, it was demonstrated that aluminum, which is normally assumed to be equivalent to CB in radiation dosimetry, leads to an overestimation of backscattering of low-energy electrons in soft tissue at a CB-soft-tissue interface by as much as a factor of 2. PMID:2233564

  7. Analysis of electron energy distribution function in the Linac4 H⁻ source.

    PubMed

    Mochizuki, S; Mattei, S; Nishida, K; Hatayama, A; Lettry, J

    2016-02-01

    To understand the Electron Energy Distribution Function (EEDF) in the Radio Frequency Inductively Coupled Plasmas (RF-ICPs) in hydrogen negative ion sources, a detailed analysis of the EEDFs has been performed using numerical simulation and a theoretical approach based on the Boltzmann equation. It is shown that the EEDF of RF-ICPs consists of two parts: a low-energy part that obeys a Maxwellian distribution and a high-energy part that deviates from a Maxwellian distribution. These simulation results have been confirmed to be reasonable by the analytical approach. The results suggest that it is possible to enhance the dissociation of molecules and the resultant H(-) negative ion production by reducing the gas pressure. PMID:26931990
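
    A common way to represent an EEDF of this kind (a Maxwellian bulk plus a hotter, non-Maxwellian tail) is a two-temperature bi-Maxwellian, sketched below with invented temperatures and tail fraction; the effective temperature is then recovered from the mean energy via <E> = (3/2) T_eff.

    ```python
    import numpy as np

    def maxwellian_eedf(E, T_e):
        """Normalized Maxwellian EEDF f(E) [eV^-1]; E and T_e in eV."""
        return 2.0 / (np.sqrt(np.pi) * T_e**1.5) * np.sqrt(E) * np.exp(-E / T_e)

    def bi_maxwellian_eedf(E, T_bulk, T_tail, tail_fraction):
        """Two-temperature EEDF: cold Maxwellian bulk plus a hot Maxwellian tail."""
        return ((1.0 - tail_fraction) * maxwellian_eedf(E, T_bulk)
                + tail_fraction * maxwellian_eedf(E, T_tail))

    # Invented example values (bulk 5 eV, tail 25 eV, 5% tail population).
    E = np.linspace(0.01, 100.0, 2000)              # eV, uniform grid
    f = bi_maxwellian_eedf(E, T_bulk=5.0, T_tail=25.0, tail_fraction=0.05)

    # Effective temperature from the mean energy (simple Riemann sum).
    T_eff = 2.0 / 3.0 * np.sum(E * f) * (E[1] - E[0])
    print(f"T_eff = {T_eff:.1f} eV")
    ```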

  8. Spatial distribution of the source-receptor relationship of sulfur in Northeast Asia

    NASA Astrophysics Data System (ADS)

    Kajino, M.; Ueda, H.; Sato, K.; Sakurai, T.

    2011-07-01

    The spatial distribution of the source-receptor relationship (SRR) of sulfur over Northeast Asia was examined using a chemical transport model (RAQM) off-line coupled with a meteorological model (MM5). The simulation was conducted for the entire year of 2002. The results were evaluated using monitoring data for six remote stations of the Acid Deposition Monitoring Network in East Asia (EANET). The modeled SO2 and O3 concentrations agreed well with the observations quantitatively. The modeled aerosol and wet deposition fluxes of SO42- were underestimated by 30 % and 50 %, respectively. The domain was divided into 5 source-receptor regions: (I) North China; (II) Central China; (III) South China; (IV) South Korea; and (V) Japan. The sulfur deposition in each receptor region amounted to about 50-75 % of the emissions from the same region. The largest contribution to the deposition in each region originated from the same region, accounting for 53-84 %. The second largest contribution was due to Region II, supplying 14-43 %. The spatial distributions of the SRRs revealed that subregional values varied by about two times more than regional averages due to nonuniformity across the deposition fields. Examining the spatial distributions of the deposition fields was important for identifying subregional areas where the deposition was highest within a receptor region. The horizontal distribution changed substantially according to season.

  9. Spatial distribution of the source-receptor relationship of sulfur in Northeast Asia

    NASA Astrophysics Data System (ADS)

    Kajino, M.; Ueda, H.; Sato, K.; Sakurai, T.

    2010-12-01

    The spatial distribution of the source-receptor relationship (SRR) of sulfur over Northeast Asia was examined using an off-line coupled meteorological/chemical transport model (MM5/RAQM). The simulation was conducted for the entire year of 2002. The results were evaluated using monitoring data for six remote stations of the Acid Deposition Monitoring Network in East Asia (EANET). The modeled SO2 and O3 concentrations agreed well with the observations quantitatively. The modeled aerosol and wet deposition fluxes of SO42- were underestimated by 30% and 50%, respectively, whereas the modeled precipitation was overestimated by 1.6 to 1.9 times. The domain was divided into 5 source-receptor regions: I, North China; II, Central China; III, South China; IV, South Korea; and V, Japan. The sulfur deposition in each receptor region amounted to about 50-75% of the emissions from the same region. The largest contribution to the deposition in each region was the domestic origin, accounting for 53-84%. Outside Region II itself, the second largest contribution after the domestic origin was due to Region II, supplying 14-43%. The spatial distributions of the SRRs revealed that subregional values varied by about two times more than regional averages due to nonuniformity across the deposition fields. Examining the spatial distributions of the deposition fields was important for identifying subregional areas where the deposition was highest within a receptor region. The horizontal distribution changed substantially according to season.

  10. Semi-implicit and fully implicit shock-capturing methods for hyperbolic conservation laws with stiff source terms

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Shinn, Judy L.

    1987-01-01

    Some numerical aspects of finite-difference algorithms for nonlinear multidimensional hyperbolic conservation laws with stiff nonhomogeneous (source) terms are discussed. If the stiffness is entirely dominated by the source term, a semi-implicit shock-capturing method is proposed provided that the Jacobian of the source terms possesses certain properties. The proposed semi-implicit method can be viewed as a variant of the Bussing and Murman point-implicit scheme with a more appropriate numerical dissipation for the computation of strong shock waves. However, if the stiffness is not solely dominated by the source terms, a fully implicit method would be a better choice. The situation is complicated by problems that are higher than one dimension, and the presence of stiff source terms further complicates the solution procedures for alternating direction implicit (ADI) methods. Several alternatives are discussed. The primary motivation for constructing these schemes was to address thermally and chemically nonequilibrium flows in the hypersonic regime. Due to the unique structure of the eigenvalues and eigenvectors for fluid flows of this type, the computation can be simplified, thus providing a more efficient solution procedure than one might have anticipated.
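
    Schematically, the point-implicit (semi-implicit) treatment referred to above advances the convection terms explicitly while linearizing the stiff source about the current state and treating it implicitly; a generic form of the update is:

    ```latex
    % Semi-implicit (point-implicit) update for u_t + f(u)_x = s(u):
    % explicit convection, implicit linearized source.
    \left( I - \Delta t \,\left.\frac{\partial s}{\partial u}\right|_{u^{n}} \right)\Delta u^{n}
      = \Delta t \left[ -\,\frac{\partial f(u^{n})}{\partial x} + s(u^{n}) \right],
    \qquad u^{n+1} = u^{n} + \Delta u^{n}.
    ```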

  11. Analysis of the Variability of Classsified and Unclassified Radiological Source term Inventories in the Frenchman Flat Area, Nevada test Site

    SciTech Connect

    Zhao, P; Zavarin, M

    2008-06-04

    It has been proposed that unclassified source terms used in the reactive transport modeling investigations at NTS CAUs should be based on yield-weighted source terms calculated using the average source term from Bowen et al. (2001) and the unclassified announced yields reported in DOE/NV-209. This unclassified inventory is likely to be used in unclassified contaminant boundary calculations and is, thus, relevant to compare to the classified inventory. They have examined the classified radionuclide inventory produced by 10 underground nuclear tests conducted in the Frenchman Flat (FF) area of the Nevada Test Site. The goals were to (1) evaluate the variability in classified radiological source terms among the 10 tests and (2) compare that variability and inventory uncertainties to an average unclassified inventory (e.g. Bowen 2001). To evaluate source term variability among the 10 tests, radiological inventories were compared on two relative scales: geometric mean and yield-weighted geometric mean. Furthermore, radiological inventories were either decay corrected to a common date (9/23/1992) or the time zero (t0) of each test. Thus, a total of four data sets were produced. The date of 9/23/1992 was chosen based on the date of the last underground nuclear test at the Nevada Test Site.
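
    As a toy illustration of the two relative scales mentioned above, the sketch below computes a geometric mean and one plausible form of yield-weighted geometric mean for a single radionuclide across several tests; the inventories and yields are invented, and the exact weighting convention used in the study may differ.

    ```python
    import numpy as np

    # Invented inventories (arbitrary units) of one radionuclide for five tests,
    # and invented announced yields (kt) for the same tests.
    inventory = np.array([3.0e2, 1.2e3, 8.0e2, 4.5e2, 9.0e2])
    yields_kt = np.array([10.0, 100.0, 45.0, 20.0, 75.0])

    geo_mean = np.exp(np.mean(np.log(inventory)))
    weights = yields_kt / yields_kt.sum()
    yield_weighted_geo_mean = np.exp(np.sum(weights * np.log(inventory)))

    print(f"geometric mean            : {geo_mean:.1f}")
    print(f"yield-weighted geom. mean : {yield_weighted_geo_mean:.1f}")
    ```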

  12. Long term structural health monitoring by distributed fiber-optic sensing

    NASA Astrophysics Data System (ADS)

    Persichetti, G.; Minardo, A.; Testa, G.; Bernini, R.

    2012-04-01

    Structural health monitoring (SHM) systems make it possible to detect unusual structural behaviors that indicate a malfunction in the structure, i.e., an unhealthy structural condition. Depending on the complexity level of the SHM system, it can even perform the diagnosis and prognosis steps, supplying the information required to carry out the most suitable action. While standard SHM systems are based on the use of point sensors (e.g., strain gauges, crackmeters, tiltmeters, etc.), there is an increasing interest in the use of distributed optical fiber sensors, in which the whole structure is monitored by a single optical fiber. In particular, distributed optical fiber sensors based on stimulated Brillouin scattering (SBS) permit measurement of strain in a fully distributed manner, with a spatial resolution in the meter or submeter range and a sensing length that can reach tens of km. These features, which have no performance equivalent among traditional electronic sensors, are extremely valuable. When the sensors are appropriately installed on the most significant structural members, such a system can capture the real static behaviour of the structure rather than merely measuring the strain at single points on one of its members. In addition, the sensor required by Brillouin technology is an inexpensive, telecom-grade optical fiber that shares most of the typical advantages of other fiber-optic sensors, such as high resistance to moisture and corrosion, immunity to electromagnetic fields and potential for long-term monitoring. In this work, we report the results of a test campaign performed on a concrete bridge. In particular, the tests were performed with a portable prototype based on Brillouin Optical Time-Domain Analysis (BOTDA) [1,2]. This type of analysis makes use of a pulsed laser light and a frequency-shifted continuous-wave (CW) laser light, launched simultaneously at the two opposite ends of an optical fiber
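
    In BOTDA, distributed strain is commonly obtained from the local Brillouin frequency shift relative to a reference (unstrained) profile. A minimal sketch, assuming a nominal strain coefficient of about 0.05 MHz per microstrain near 1550 nm and ignoring temperature cross-sensitivity; the coefficient and profiles are illustrative, not the calibration used in the paper.

        import numpy as np

        # assumed linear calibration: Brillouin frequency shift vs strain, ~0.05 MHz per microstrain
        C_STRAIN_MHZ_PER_UE = 0.05

        def strain_profile_ue(nu_b_mhz, nu_b0_mhz, c=C_STRAIN_MHZ_PER_UE):
            """Distributed strain (microstrain) along the fiber from the measured Brillouin
            frequency profile nu_b and a reference (unstrained) profile nu_b0."""
            return (np.asarray(nu_b_mhz) - np.asarray(nu_b0_mhz)) / c

        # illustrative profiles: 1 km of fiber sampled every 0.5 m
        z = np.arange(2000) * 0.5
        nu_b0 = np.full(z.shape, 10_850.0)                        # ~10.85 GHz unstrained
        nu_b = nu_b0 + 2.5 * np.exp(-((z - 300.0) / 20.0) ** 2)   # localized 2.5 MHz shift
        strain_ue = strain_profile_ue(nu_b, nu_b0)                # peaks near 50 microstrain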

  13. 10 CFR 32.74 - Manufacture and distribution of sources or devices containing byproduct material for medical use.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 1 2011-01-01 2011-01-01 false Manufacture and distribution of sources or devices... SPECIFIC DOMESTIC LICENSES TO MANUFACTURE OR TRANSFER CERTAIN ITEMS CONTAINING BYPRODUCT MATERIAL Generally Licensed Items § 32.74 Manufacture and distribution of sources or devices containing byproduct material...

  14. 10 CFR 32.74 - Manufacture and distribution of sources or devices containing byproduct material for medical use.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 1 2010-01-01 2010-01-01 false Manufacture and distribution of sources or devices... SPECIFIC DOMESTIC LICENSES TO MANUFACTURE OR TRANSFER CERTAIN ITEMS CONTAINING BYPRODUCT MATERIAL Generally Licensed Items § 32.74 Manufacture and distribution of sources or devices containing byproduct material...

  15. 10 CFR 32.74 - Manufacture and distribution of sources or devices containing byproduct material for medical use.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 1 2013-01-01 2013-01-01 false Manufacture and distribution of sources or devices... SPECIFIC DOMESTIC LICENSES TO MANUFACTURE OR TRANSFER CERTAIN ITEMS CONTAINING BYPRODUCT MATERIAL Specifically Licensed Items § 32.74 Manufacture and distribution of sources or devices containing...

  16. 10 CFR 32.74 - Manufacture and distribution of sources or devices containing byproduct material for medical use.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 1 2012-01-01 2012-01-01 false Manufacture and distribution of sources or devices... SPECIFIC DOMESTIC LICENSES TO MANUFACTURE OR TRANSFER CERTAIN ITEMS CONTAINING BYPRODUCT MATERIAL Generally Licensed Items § 32.74 Manufacture and distribution of sources or devices containing byproduct material...

  17. 10 CFR 32.74 - Manufacture and distribution of sources or devices containing byproduct material for medical use.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 1 2014-01-01 2014-01-01 false Manufacture and distribution of sources or devices... SPECIFIC DOMESTIC LICENSES TO MANUFACTURE OR TRANSFER CERTAIN ITEMS CONTAINING BYPRODUCT MATERIAL Specifically Licensed Items § 32.74 Manufacture and distribution of sources or devices containing...

  18. Management of Ultimate Risk of Nuclear Power Plants by Source Terms - Lessons Learned from the Chernobyl Accident

    SciTech Connect

    Genn Saji

    2006-07-01

    The term 'ultimate risk' is used here to describe the probabilities and radiological consequences that should be incorporated in siting, containment design and accident management of nuclear power plants for hypothetical accidents. It is closely related to the source terms specified in siting criteria, which assure an adequate separation of the radioactive inventories of the plants from the public in the event of a hypothetical and severe accident situation. The author would like to point out that current source terms, which are based on information from the Windscale accident (1957) through TID-14844, are very outdated and do not incorporate lessons learned from either the Three Mile Island (TMI, 1979) or the Chernobyl (1986) accident, two of the most severe accidents ever experienced. As a result of the observations of benign radionuclides released at TMI, the technical community in the US felt that a more realistic evaluation of severe reactor accident source terms was necessary. Against this background, the 'source term research project' was organized in 1984 to respond to these challenges. Unfortunately, soon after the final report from this project was released, the Chernobyl accident occurred. Due to the enormous consequences of that accident, the once-optimistic prospects for establishing a more realistic source term were completely shattered. The Chernobyl accident, with its human death toll and the dispersion of a large part of the fission fragment inventory into the environment, created a significant degradation in the public's acceptance of nuclear energy throughout the world. In spite of this, nuclear communities have been prudent in responding to the public's anxiety towards the ultimate safety of nuclear plants, since there still remained many unknown points revolving around the mechanism of the Chernobyl accident. In order to resolve some of these mysteries, the author has performed a scoping study of the dispersion and deposition

  19. Heavy metals in soils from a typical county in Shanxi Province, China: Levels, sources and spatial distribution.

    PubMed

    Pan, Li-bo; Ma, Jin; Wang, Xian-liang; Hou, Hong

    2016-04-01

    The concentrations of As, Cd, Cr, Cu, Pb, Ni, Zn, and Hg in 128 surface soil samples from Xiangfen County, Shanxi Province, China were measured. The concentrations of these eight heavy metals were lower than the critical values in the national soil quality standard. However, these concentrations were found to be slightly higher than their background values in soils in Shanxi Province, indicating enrichment of these metals in soils in Xiangfen County, especially for Hg and Cd. Principal component analysis coupled with cluster analysis was used to analyze the data and identify possible sources of these heavy metals; the results showed that the eight heavy metals in soils from Xiangfen County came from three different sources. Lead, Cd, Cu and Zn mainly arose from agricultural practices and vehicle emissions. Arsenic and Ni arose mainly from parent materials. Industrial practices were the main sources of Cr and Hg. The spatial distribution of the heavy metals varied greatly, and was closely correlated to local anthropogenic activities. This study will be helpful not only for improving local soil environmental quality but will also provide a basis for effectively targeting policies to protect soils from long-term heavy metal accumulation. PMID:26807946
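
    The source-identification step described above, principal component analysis followed by cluster analysis of the metal concentrations, can be sketched as below. The data are random placeholders, and the exact preprocessing and number of components used by the authors are not given in the abstract.

        import numpy as np
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA
        from sklearn.cluster import AgglomerativeClustering

        metals = ["As", "Cd", "Cr", "Cu", "Pb", "Ni", "Zn", "Hg"]
        rng = np.random.default_rng(0)
        X = rng.lognormal(mean=0.0, sigma=0.5, size=(128, len(metals)))   # placeholder concentrations

        # standardize, then extract the leading principal components
        Z = StandardScaler().fit_transform(X)
        pca = PCA(n_components=3).fit(Z)
        loadings = pca.components_.T            # one row of loadings per metal

        # group metals with similar loading patterns: candidate common sources
        labels = AgglomerativeClustering(n_clusters=3).fit_predict(loadings)
        for metal, lab in zip(metals, labels):
            print(metal, "-> source group", lab)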

  20. Efficient construction of high-resolution TVD conservative schemes for equations with source terms: application to shallow water flows

    NASA Astrophysics Data System (ADS)

    Burguete, J.; García-Navarro, P.

    2001-09-01

    High-resolution total variation diminishing (TVD) schemes are widely used for the numerical approximation of hyperbolic conservation laws. Their extension to equations with source terms involving spatial derivatives is not obvious. In this work, efficient ways of constructing conservative schemes from the conservative, non-conservative or characteristic form of the equations are described in detail. An upwind, as opposed to a pointwise, treatment of the source terms is adopted here, and a new technique is proposed in which source terms are included in the flux limiter functions to get a complete second-order compact scheme. A new correction to fix the entropy problem is also presented and a robust treatment of the boundary conditions according to the discretization used is stated.
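
    A scalar illustration of why source terms are discretized with the same upwind structure as the fluxes (a much-simplified stand-in for the second-order shallow-water schemes of the paper): for u_t + a u_x = s(x) with a known steady state, evaluating the source with the same upwind difference as the flux preserves the discrete steady state exactly, whereas a pointwise source evaluation does not. The problem, grid and parameters below are illustrative only.

        import numpy as np

        # steady problem u_t + a*u_x = s(x) with exact steady state u_e(x) = sin(2*pi*x), a > 0
        a, nx = 1.0, 100
        dx = 1.0 / nx
        x = (np.arange(nx) + 0.5) * dx
        u_exact = np.sin(2 * np.pi * x)

        def step(u, source, dt):
            """First-order upwind step on a periodic grid, with an added source term."""
            return u - a * dt / dx * (u - np.roll(u, 1)) + dt * source

        s_pointwise = a * 2 * np.pi * np.cos(2 * np.pi * x)     # s evaluated at cell centres
        s_upwinded = a * (u_exact - np.roll(u_exact, 1)) / dx   # s discretized like the flux

        dt = 0.5 * dx / a
        u1, u2 = u_exact.copy(), u_exact.copy()
        for _ in range(2000):
            u1 = step(u1, s_pointwise, dt)
            u2 = step(u2, s_upwinded, dt)

        print("drift with pointwise source:", np.max(np.abs(u1 - u_exact)))  # settles at an O(dx) offset
        print("drift with upwinded source :", np.max(np.abs(u2 - u_exact)))  # stays at machine precision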

  1. Sources/sinks analysis with satellite sensing for exploring global atmospheric CO2 distributions

    NASA Astrophysics Data System (ADS)

    Shim, C.; Nassar, R.; Kim, J.

    2010-12-01

    There is growing interest in CO2 budget analysis since space-borne measurements of the global CO2 distribution have been conducted (e.g., the GOSAT project). Here we simulated the global CO2 distribution to estimate individual source/sink contributions. The chemical transport model GEOS-Chem was used to simulate the global CO2 distribution with updated global sources/sinks at 2° x 2.5° horizontal resolution. In addition, 3-D emissions from aviation and chemical oxidation of CO are implemented. The model-simulated CO2 amounts were compared with the GOSAT column-averaged CO2 (SWIR L2 data) from April 2009 to May 2010. The seasonal cycles of CO2 concentration were compared, and the regional patterns of CO2 distribution are explained by the model with a systematic difference of 1-2% in the CO2 concentration. In other work, the GEOS-Chem CO2 concentrations show reasonable agreement with GLOBALVIEW-CO2. We further estimated the source/sink contributions to the global CO2 budget through 9 tagged CO2 tracers (fossil fuels, ocean exchanges, biomass burning, biofuel burning, balanced biosphere, net terrestrial exchange, ship emissions, aviation emissions, and oxidation from carbon precursors) over the years 2005-2009. The global CO2 concentration shows an increase of 2.1 ppbv/year, of which human fossil fuel and cement emissions are the main driver (5.0 ppbv/year). Net terrestrial and oceanic exchange of CO2 are the main sinks (-2.1 ppbv/year and -0.7 ppbv/year, respectively). Our model results will help suggest the level of reduction in global anthropogenic CO2 emissions that would be needed to control global CO2 trends in the 21st century.

  2. Effect of seasonal and long-term changes in stress on sources of water to wells

    USGS Publications Warehouse

    Reilly, Thomas E.; Pollock, David W.

    1995-01-01

    The source of water to wells is ultimately the location where the water flowing to a well enters the boundary surface of the ground-water system. In ground-water systems that receive most of their water from areal recharge, the location of the water entering the system is at the water table. The area contributing recharge to a discharging well is the surface area that defines the location of the water entering the ground-water system. Water entering the system at the water table flows to the well and is eventually discharged from the well. Many State agencies are currently (1994) developing wellhead-protection programs. The thrust of some of these programs is to protect water supplies by determining the areas contributing recharge to water-supply wells and by specifying regulations to minimize the opportunity for contamination of the recharge water by activities at the land surface. In the analyses of ground-water flow systems, steady-state average conditions are frequently used to simplify the problem and make a solution tractable. Recharge is usually cyclic in nature, however, having seasonal cycles and longer term climatic cycles. A hypothetical system is quantitatively analyzed to show that, in many cases, these cyclic changes in the recharge rates apparently do not significantly affect the location and size of the areas contributing recharge to wells. The ratio of the mean travel time to the length of the cyclic stress period appears to indicate whether the transient effects of the cyclic stress must be explicitly represented in the analysis of contributing areas to wells. For the cases examined, if the ratio of the mean travel time to the period of the cyclic stress was much greater than one, then the transient area contributing recharge to wells was similar to the area calculated using an average steady-state condition. Noncyclic long-term transient changes in water use, however, and cyclic stresses on systems with ratios less than 1 can and do affect the

  3. Short-term Music Training Enhances Complex, Distributed Neural Communication during Music and Linguistic Tasks.

    PubMed

    Carpentier, Sarah M; Moreno, Sylvain; McIntosh, Anthony R

    2016-10-01

    Musical training is frequently associated with benefits to linguistic abilities, and recent focus has been placed on possible benefits of bilingualism to lifelong executive functions; however, the neural mechanisms for such effects are unclear. The aim of this study was to gain better understanding of the whole-brain functional effects of music and second-language training that could support such previously observed cognitive transfer effects. We conducted a 28-day longitudinal study of monolingual English-speaking 4- to 6-year-old children randomly selected to receive daily music or French language training, excluding weekends. Children completed passive EEG music note and French vowel auditory oddball detection tasks before and after training. Brain signal complexity was measured on source waveforms at multiple temporal scales as an index of neural information processing and network communication load. Comparing pretraining with posttraining, musical training was associated with increased EEG complexity at coarse temporal scales during the music and French vowel tasks in widely distributed cortical regions. Conversely, very minimal decreases in complexity at fine scales and trends toward coarse-scale increases were displayed after French training during the tasks. Spectral analysis failed to distinguish between training types and found overall theta (3.5-7.5 Hz) power increases after all training forms, with spatially fewer decreases in power at higher frequencies (>10 Hz). These findings demonstrate that musical training increased diversity of brain network states to support domain-specific music skill acquisition and music-to-language transfer effects. PMID:27243611

  4. Acoustic Emission Source Location Using a Distributed Feedback Fiber Laser Rosette

    PubMed Central

    Huang, Wenzhu; Zhang, Wentao; Li, Fang

    2013-01-01

    This paper proposes an approach for acoustic emission (AE) source localization in a large marble stone using distributed feedback (DFB) fiber lasers. The aim of this study is to detect damage in structures such as those found in civil applications. The directional sensitivity of the DFB fiber laser is investigated by calculating a location coefficient using a method of digital signal analysis. In this analysis, autocorrelation is used to extract the location coefficient from a periodic AE signal, and wavelet packet energy is calculated to obtain the location coefficient of a burst AE source. Normalization is applied to eliminate the influence of the distance and intensity of the AE source. A new location algorithm based on the location coefficient is then presented and tested to determine the location of an AE source using a Delta (Δ) DFB fiber laser rosette configuration. The advantages of the proposed algorithm over traditional methods based on fiber Bragg gratings (FBGs) include higher strain resolution for AE detection and the ability to take two different types of AE source into account for location. PMID:24141266

  5. The environment and distribution of emitting electrons as a function of source activity in Markarian 421

    SciTech Connect

    Mankuzhiyil, Nijil; Ansoldi, Stefano; Tavecchio, Fabrizio

    2011-05-20

    For the high-frequency-peaked BL Lac object Mrk 421, we study the variation of the spectral energy distribution (SED) as a function of source activity, from quiescent to active. We use a fully automated χ²-minimization procedure, instead of the 'eyeball' procedure more commonly used in the literature, to model nine SED data sets with a one-zone synchrotron self-Compton (SSC) model and examine how the model parameters vary with source activity. The latter issue can finally be addressed now, because simultaneous broadband SEDs (spanning from the optical to very high energy photons) have finally become available. Our results suggest that in Mrk 421 the magnetic field (B) decreases with source activity, whereas the electron spectrum's break energy (γ_br) and the Doppler factor (δ) increase; the other SSC parameters turn out to be uncorrelated with source activity. In the SSC framework, these results are interpreted in a picture where the synchrotron power and peak frequency remain constant with varying source activity, through a combination of decreasing magnetic field and increasing number density of γ ≤ γ_br electrons: since this leads to an increased electron-photon scattering efficiency, the resulting Compton power increases, and so does the total (synchrotron plus Compton) emission.
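
    The automated fitting strategy, minimizing χ² of a parametric emission model against the SED points rather than adjusting parameters by eye, can be sketched generically. The log-parabola model and the data below are toy placeholders, not the one-zone SSC code or the Mrk 421 data sets used by the authors.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(1)

        # toy "SED": log10(nu) vs log10(nu*F_nu) with uniform uncertainties (placeholders)
        log_nu = np.linspace(14.0, 27.0, 20)
        log_nufnu = -10.5 - 0.02 * (log_nu - 17.0) ** 2 + rng.normal(0.0, 0.05, log_nu.size)
        sigma = np.full(log_nu.size, 0.05)

        def model(log_nu, params):
            """Toy log-parabola used as a stand-in for a real one-zone SSC spectrum."""
            norm, peak, curvature = params
            return norm - curvature * (log_nu - peak) ** 2

        def chi2(params):
            return np.sum(((log_nufnu - model(log_nu, params)) / sigma) ** 2)

        best = minimize(chi2, x0=[-10.0, 17.5, 0.01], method="Nelder-Mead")
        print(best.x, best.fun)    # best-fit parameters and the corresponding chi^2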

  6. Spatial distribution of the plasma parameters in the RF negative ion source prototype for fusion

    SciTech Connect

    Lishev, S.; Schiesko, L.; Wünderlich, D.; Fantz, U.

    2015-04-08

    A numerical model, based on the fluid plasma theory, has been used for description of the spatial distribution of the plasma parameters (electron density and temperature, plasma potential as well as densities of the three types of positive hydrogen ions) in the IPP prototype RF negative hydrogen ion source. The model covers the driver and the expansion plasma region of the source with their actual size and accounts for the presence of the magnetic filter field with its actual value and location as well as for the bias potential applied to the plasma grid. The obtained results show that without a magnetic filter the two 2D geometries considered, respectively, with an axial symmetry and a planar one, represent accurately the complex 3D structure of the source. The 2D model with a planar symmetry (where the E×B and diamagnetic drifts could be involved in the description) has been used for analysis of the influence, via the charged-particle and electron-energy fluxes, of the magnetic filter and of the bias potential on the spatial structure of the plasma parameters in the source. Benchmarking of results from the code to experimental data shows that the model reproduces the general trend in the axial behavior of the plasma parameters in the source.

  7. Spatial distribution of the plasma parameters in the RF negative ion source prototype for fusion

    NASA Astrophysics Data System (ADS)

    Lishev, S.; Schiesko, L.; Wünderlich, D.; Fantz, U.

    2015-04-01

    A numerical model, based on the fluid plasma theory, has been used for description of the spatial distribution of the plasma parameters (electron density and temperature, plasma potential as well as densities of the three types of positive hydrogen ions) in the IPP prototype RF negative hydrogen ion source. The model covers the driver and the expansion plasma region of the source with their actual size and accounts for the presence of the magnetic filter field with its actual value and location as well as for the bias potential applied to the plasma grid. The obtained results show that without a magnetic filter the two 2D geometries considered, respectively, with an axial symmetry and a planar one, represent accurately the complex 3D structure of the source. The 2D model with a planar symmetry (where the E×B and diamagnetic drifts could be involved in the description) has been used for analysis of the influence, via the charged-particle and electron-energy fluxes, of the magnetic filter and of the bias potential on the spatial structure of the plasma parameters in the source. Benchmarking of results from the code to experimental data shows that the model reproduces the general trend in the axial behavior of the plasma parameters in the source.

  8. Passive sources for the Bennett-Brassard 1984 quantum-key-distribution protocol with practical signals

    SciTech Connect

    Curty, Marcos; Ma Xiongfeng; Luetkenhaus, Norbert; Lo, Hoi-Kwong

    2010-11-15

    Most experimental realizations of quantum key distribution are based on the Bennett-Brassard 1984 (the so-called BB84) protocol. In a typical optical implementation of this scheme, the sender uses an active source to produce the required BB84 signal states. While active state preparation of BB84 signals is a simple and elegant solution in principle, in practice passive state preparation might be desirable in some scenarios, for instance, in those experimental setups operating at high transmission rates. Passive schemes might also be more robust against side-channel attacks than active sources. Typical passive devices involve parametric down-conversion. In this paper, we show that both coherent light and practical single-photon sources are also suitable for passive generation of BB84 signal states. Our method does not require any externally driven element, but only linear optical components and photodetectors. In the case of coherent light, the resulting key rate is similar to the one delivered by an active source. When the sender uses practical single-photon sources, however, the distance covered by a passive transmitter might be longer than that of an active configuration.

  9. Size distributions and source function of sea spray aerosol over the South China Sea

    NASA Astrophysics Data System (ADS)

    Chu, Yingjia; Sheng, Lifang; Liu, Qian; Zhao, Dongliang; Jia, Nan; Kong, Yawen

    2016-08-01

    The number concentrations of aerosol particles in the radius range of 0.06-5 μm and meteorological parameters were measured on board during a cruise in the South China Sea from August 25 to October 12, 2012. Effective fluxes at the reference height of 10 m were estimated by a steady-state dry deposition method based on the observed data, and the influences of different air masses on the flux were discussed in this paper. The number size distribution was characterized by a bimodal shape, with an average total number concentration of (1.50 ± 0.76)×10³ cm⁻³. The two mode radii were 0.099 µm and 0.886 µm, both of which were within the accumulation mode range. A typical daily average size distribution was compared with that measured in the Bay of Bengal. Over the whole radius range, the number concentrations were in agreement with each other; the modes were more distinct in this study than those obtained in the Bay of Bengal. The size distribution of the fluxes was fitted with the sum of a log-normal and a power-law distribution. The impact of different air masses was mainly on the flux magnitude, rather than on the shape of the spectral distribution. A semiempirical source function applicable in the radius range of 0.06 µm < r80 < 0.3 µm for wind speeds varying from 1.00 m s⁻¹ to 10.00 m s⁻¹ was derived.

  10. Recurring flood distribution patterns related to short-term Holocene climatic variability.

    PubMed

    Benito, Gerardo; Macklin, Mark G; Panin, Andrei; Rossato, Sandro; Fontana, Alessandro; Jones, Anna F; Machado, Maria J; Matlakhova, Ekaterina; Mozzi, Paolo; Zielhofer, Christoph

    2015-01-01

    Millennial- and multi-centennial scale climate variability during the Holocene has been well documented, but its impact on the distribution and timing of extreme river floods has yet to be established. Here we present a meta-analysis of more than 2000 radiometrically dated flood units to reconstruct centennial-scale Holocene flood episodes in Europe and North Africa. Our data analysis shows a general increase in flood frequency after 5000 cal. yr BP consistent with a weakening in zonal circulation over the second half of the Holocene, and with an increase in winter insolation. Multi-centennial length phases of flooding in UK and central Europe correspond with periods of minimum solar irradiance, with a clear trend of increasing flood frequency over the last 1000 years. Western Mediterranean regions show synchrony of flood episodes associated with negative phases of the North Atlantic Oscillation that are out-of-phase with those evident within the eastern Mediterranean. This long-term flood record reveals complex but geographically highly interconnected climate-flood relationships, and provides a new framework to understand likely future spatial changes of flood frequency. PMID:26549043

  11. Recurring flood distribution patterns related to short-term Holocene climatic variability

    PubMed Central

    Benito, Gerardo; Macklin, Mark G.; Panin, Andrei; Rossato, Sandro; Fontana, Alessandro; Jones, Anna F.; Machado, Maria J.; Matlakhova, Ekaterina; Mozzi, Paolo; Zielhofer, Christoph

    2015-01-01

    Millennial- and multi-centennial scale climate variability during the Holocene has been well documented, but its impact on the distribution and timing of extreme river floods has yet to be established. Here we present a meta-analysis of more than 2000 radiometrically dated flood units to reconstruct centennial-scale Holocene flood episodes in Europe and North Africa. Our data analysis shows a general increase in flood frequency after 5000 cal. yr BP consistent with a weakening in zonal circulation over the second half of the Holocene, and with an increase in winter insolation. Multi-centennial length phases of flooding in UK and central Europe correspond with periods of minimum solar irradiance, with a clear trend of increasing flood frequency over the last 1000 years. Western Mediterranean regions show synchrony of flood episodes associated with negative phases of the North Atlantic Oscillation that are out-of-phase with those evident within the eastern Mediterranean. This long-term flood record reveals complex but geographically highly interconnected climate-flood relationships, and provides a new framework to understand likely future spatial changes of flood frequency. PMID:26549043

  12. Recurring flood distribution patterns related to short-term Holocene climatic variability

    NASA Astrophysics Data System (ADS)

    Benito, Gerardo; Macklin, Mark G.; Panin, Andrei; Rossato, Sandro; Fontana, Alessandro; Jones, Anna F.; Machado, Maria J.; Matlakhova, Ekaterina; Mozzi, Paolo; Zielhofer, Christoph

    2015-11-01

    Millennial- and multi-centennial scale climate variability during the Holocene has been well documented, but its impact on the distribution and timing of extreme river floods has yet to be established. Here we present a meta-analysis of more than 2000 radiometrically dated flood units to reconstruct centennial-scale Holocene flood episodes in Europe and North Africa. Our data analysis shows a general increase in flood frequency after 5000 cal. yr BP consistent with a weakening in zonal circulation over the second half of the Holocene, and with an increase in winter insolation. Multi-centennial length phases of flooding in UK and central Europe correspond with periods of minimum solar irradiance, with a clear trend of increasing flood frequency over the last 1000 years. Western Mediterranean regions show synchrony of flood episodes associated with negative phases of the North Atlantic Oscillation that are out-of-phase with those evident within the eastern Mediterranean. This long-term flood record reveals complex but geographically highly interconnected climate-flood relationships, and provides a new framework to understand likely future spatial changes of flood frequency.

  13. The geostatistic-based spatial distribution variations of soil salts under long-term wastewater irrigation.

    PubMed

    Wu, Wenyong; Yin, Shiyang; Liu, Honglu; Niu, Yong; Bao, Zhe

    2014-10-01

    The purpose of this study was to determine and evaluate the spatial changes in soil salinity by using geostatistical methods. The study focused on the suburban area of Beijing, where urban development has led to water shortages and accelerated wastewater reuse for farm irrigation for more than 30 years. The data were processed in a GIS using three interpolation techniques: ordinary kriging (OK), disjunctive kriging (DK), and universal kriging (UK). The normality test and overall trend analysis were applied to each interpolation technique to select the best-fitted model for the soil parameters. Results showed that OK was suitable for soil sodium adsorption ratio (SAR) and Na⁺ interpolation; UK was suitable for soil Cl⁻ and pH; DK was suitable for soil Ca²⁺. The nugget-to-sill ratio was applied to evaluate the effects of structural and stochastic factors. The maps showed that the areas of non-saline soil and slightly saline soil accounted for 6.39 and 93.61%, respectively. The spatial distribution and accumulation of soil salt were significantly affected by the irrigation probabilities and drainage situation under long-term wastewater irrigation. PMID:25127658
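
    The nugget-to-sill ratio quoted above comes from fitting a variogram model to the empirical semivariance. A minimal sketch with a spherical model and made-up lag/semivariance values follows; the interpretation thresholds in the comment follow the common <25% / >75% rule of thumb, which may differ from the authors' criteria.

        import numpy as np
        from scipy.optimize import curve_fit

        def spherical(h, nugget, psill, a):
            """Spherical variogram model: nugget plus partial sill psill with range a."""
            h = np.asarray(h, dtype=float)
            inside = nugget + psill * (1.5 * h / a - 0.5 * (h / a) ** 3)
            return np.where(h < a, inside, nugget + psill)

        # made-up empirical semivariogram of, say, SAR (lag distances in m)
        lags = np.array([100, 200, 300, 400, 600, 800, 1000, 1500, 2000], dtype=float)
        gamma = np.array([0.8, 1.3, 1.7, 2.0, 2.3, 2.4, 2.5, 2.5, 2.6])

        (nugget, psill, rng_m), _ = curve_fit(spherical, lags, gamma, p0=[0.5, 2.0, 800.0])
        sill = nugget + psill
        ratio = nugget / sill   # <0.25 strong, 0.25-0.75 moderate, >0.75 weak spatial dependence
        print(f"nugget={nugget:.2f}  sill={sill:.2f}  nugget/sill={ratio:.2f}  range={rng_m:.0f} m")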

  14. Angiogenin distribution in human term placenta, and expression by cultured trophoblastic cells

    PubMed Central

    Pavlov, Nadine; Hatzi, Elissavet; Bassaglia, Yann; Frendo, Jean-Louis; Evain-Brion, Danièle; Badet, Josette

    2003-01-01

    Human angiogenin is a 14-kDa secreted protein with angiogenic and ribonucleolytic activities. Angiogenin is associated with tumour development but is also present in normal biological fluids and tissues. To further address the physiological role of angiogenin, we studied its expression in situ and in vitro, using the human term placenta as a model of physiological angiogenesis. Angiogenin was immunodetected by light and transmission electron microscopy, and its cellular distribution was established by double immunolabelling with cell markers including von Willebrand factor, platelet/endothelial cell adhesion molecule-1 (PECAM-1), CD34, Tie-2, vascular endothelial cadherin (VE-cadherin), vascular endothelial growth factor receptor-2 (VEGF-R2), erythropoietin receptor (Epo-R), alpha-smooth muscle actin, CD45, cytokeratin 7, and Ki-67. Angiogenin immunoreactivity was detected in villous and extravillous trophoblasts, the trophoblast basement membrane, the endothelial basal lamina, foetal blood vessels, foetal and maternal red blood cells, and amnionic cells. Its expression was confirmed by in situ hybridisation with a digoxigenin-labelled cDNA probe and reverse transcriptase-polymerase chain reaction amplification. Villous cytotrophoblasts, isolated and differentiated in vitro into a functional syncytiotrophoblast, expressed and secreted angiogenin. Given its known biological activities in vitro and its observed pattern of expression, these data suggest that, in human placenta, angiogenin has a role not only in angiogenesis but also in vascular and tissue homeostasis, maternal immune tolerance of the foetus, and host defences. PMID:15166501

  15. Long-term tidal level distribution using a wave-by-wave approach

    NASA Astrophysics Data System (ADS)

    Castanedo, Sonia; Mendez, Fernando J.; Medina, Raul; Abascal, Ana J.

    2007-11-01

    Tidal analysis is usually performed in the time domain by means of the decomposition of the free-surface time series into a number of harmonics, characterizing every single component along a shelf or inside an estuary. Although this kind of analysis has proven to be very useful in numerous studies, when it comes to characterizing the tide statistically (i.e., the long-term sea level distribution) this approach is inadequate. This paper presents a different approach. Instead of working with the complete time series, some statistical properties of the signal, such as the probability density function (pdf) of the tidal wave heights (TWH), are used. The tidal elevation (TE) pdf is obtained by means of a statistical procedure that consists of the definition of the compound pdf as a function of the TWH pdf and the U-shaped pdf for the elevations of a single wave. In order to have an analytical representation of the probability density functions, the use of kernel density functions is explored. An extension to account for asymmetries in the tidal elevations is also proposed. Both the symmetric and the asymmetric models are applied to tide gauge data from around the world's coastline, covering roughly symmetric tides as well as positively and negatively skewed asymmetric tides. The results show that the symmetric approach is capable of representing the TE pdfs for roughly symmetric tides. However, in shallow areas where the distortion of the tide is more pronounced, the asymmetric model provides a better description of the TE pdfs.
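
    The compound-pdf construction can be written out explicitly; a sketch in my own notation, under the usual assumption that within a single (symmetric) tidal wave of height H the elevation follows the U-shaped arcsine density:

        % U-shaped (arcsine) pdf of the elevation within one tidal wave of height H
        f_{\eta\mid H}(\eta \mid H) = \frac{1}{\pi\sqrt{(H/2)^{2} - \eta^{2}}}, \qquad |\eta| < H/2,
        % long-term tidal elevation pdf as a mixture over the tidal-wave-height pdf f_H
        f_{\eta}(\eta) = \int_{2|\eta|}^{\infty} f_{\eta\mid H}(\eta \mid H)\, f_{H}(H)\, \mathrm{d}H .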

  16. Practical monitoring of the short-term distribution of dispersed oils

    SciTech Connect

    Railsback, S.F.; Robilliard, G.A.; Mortenson, J.R.

    1987-01-01

    An experimental program for monitoring the short-term distribution and concentration of chemically dispersed oil slicks has been developed for Clean Bay, the San Francisco area oil spill cleanup cooperative. The methods used in the program are experimental and still under development. The objectives of the program are to (1) document the surface area and volume of water affected by dispersed oil, (2) estimate the effectiveness of the dispersant, (3) determine the peak and (4) the range of oil-dispersant concentrations in the affected water. Additional objectives that may be attained if field conditions are acceptable are to: (5) estimate the rate at which oil disperses downward, and (6) estimate what fraction of the light-molecular-weight hydrocarbons are evaporated after application of the dispersant. The program includes oil concentration measurements made with a field fluorimeter and by laboratory analysis. The program is flexibly designed so that it can be adapted to a variety of field conditions. 7 refs., 1 fig.

  17. Distribution and long-term trends in various fog types over South Korea

    NASA Astrophysics Data System (ADS)

    Belorid, Miloslav; Lee, Chong Bum; Kim, Jea-Chul; Cheon, Tae-Hun

    2015-11-01

    This study analyzed the spatial and temporal distributions of various fog types over South Korea. Six fog types were identified using a classification algorithm based on simple conceptual models of fog formation. The algorithm was applied to a 25-year record of meteorological observations. The most common fog types were radiation fog, prevailing at inland stations, and precipitation fog at coastal and island stations. Declining temporal trends in the frequency of fog events, ranging between 2.1 and 10.9 fog events per decade, were found at eight inland and two coastal stations. Long-term trends for each fog type show that the decrease in the frequency of fog events is mainly due to a decrease in the frequency of radiation fogs, ranging between 1.1 and 8.5 fog events per decade. To identify the potential factors related to the decrease in radiation fog events, the temporal trends in annual mean nocturnal maximal cooling rates and annual mean nocturnal specific humidity during nights with clear sky and calm winds were examined. The results show that the decrease in the frequency of radiation fog events is associated mainly with the pattern of urbanization occurring during the past two decades.

  18. The distribution of polarized radio sources >15 μJy in GOODS-N

    SciTech Connect

    Rudnick, L.; Owen, F. N.

    2014-04-10

    We present deep Very Large Array observations of the polarization of radio sources in the GOODS-N field at 1.4 GHz at resolutions of 1.6'' and 10''. At 1.6'', we find that the peak flux cumulative number count distribution is N(>p) ∼ 45 (p/30 μJy)^(-0.6) per square degree above a detection threshold of 14.5 μJy. This represents a break from the steeper slopes at higher flux densities, resulting in fewer sources predicted for future surveys with the Square Kilometer Array and its precursors. It provides a significant challenge for using background rotation measures (RMs) to study clusters of galaxies or individual galaxies. Most of the polarized sources are well above our detection limit, and they are also radio galaxies that are well-resolved even at 10'', with redshifts from ∼0.2-1.9. We determined a total polarized flux for each source by integrating the 10'' polarized intensity maps, as will be done by upcoming surveys such as POSSUM. These total polarized fluxes are a factor of two higher, on average, than the peak polarized flux at 1.6''; this would increase the number counts by ∼50% at a fixed flux level. The detected sources have RMs with a characteristic rms scatter of ∼11 rad m⁻² around the local Galactic value, after eliminating likely outliers. The median fractional polarization from all total intensity sources does not continue the trend of increasing at lower flux densities, as seen for stronger sources. The changes in the polarization characteristics seen at these low fluxes likely represent the increasing dominance of star-forming galaxies.

  19. Models for Deploying Open Source and Commercial Software to Support Earth Science Data Processing and Distribution

    NASA Astrophysics Data System (ADS)

    Yetman, G.; Downs, R. R.

    2011-12-01

    Software deployment is needed to process and distribute scientific data throughout the data lifecycle. Developing software in-house can take software development teams away from other software development projects and can require efforts to maintain the software over time. Adopting and reusing software and system modules that have been previously developed by others can reduce in-house software development and maintenance costs and can contribute to the quality of the system being developed. A variety of models are available for reusing and deploying software and systems that have been developed by others. These deployment models include open source software, vendor-supported open source software, commercial software, and combinations of these approaches. Deployment in Earth science data processing and distribution has demonstrated the advantages and drawbacks of each model. Deploying open source software offers advantages for developing and maintaining scientific data processing systems and applications. By joining an open source community that is developing a particular system module or application, a scientific data processing team can contribute to aspects of the software development without having to commit to developing the software alone. Communities of interested developers can share the work while focusing on activities that utilize in-house expertise and addresses internal requirements. Maintenance is also shared by members of the community. Deploying vendor-supported open source software offers similar advantages to open source software. However, by procuring the services of a vendor, the in-house team can rely on the vendor to provide, install, and maintain the software over time. Vendor-supported open source software may be ideal for teams that recognize the value of an open source software component or application and would like to contribute to the effort, but do not have the time or expertise to contribute extensively. Vendor-supported software may

  20. Semi-implicit and fully implicit shock-capturing methods for hyperbolic conservation laws with stiff source terms

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Shinn, J. L.

    1986-01-01

    Some numerical aspects of finite-difference algorithms for nonlinear multidimensional hyperbolic conservation laws with stiff nonhomogeneous (source) terms are discussed. If the stiffness is entirely dominated by the source term, a semi-implicit shock-capturing method is proposed provided that the Jacobian of the source terms possesses certain properties. The proposed semi-implicit method can be viewed as a variant of the Bussing and Murman point-implicit scheme with a more appropriate numerical dissipation for the computation of strong shock waves. However, if the stiffness is not solely dominated by the source terms, a fully implicit method would be a better choice. The situation is complicated by problems that are higher than one dimension, and the presence of stiff source terms further complicates the solution procedures for alternating direction implicit (ADI) methods. Several alternatives are discussed. The primary motivation for constructing these schemes was to address thermally and chemically nonequilibrium flows in the hypersonic regime. Due to the unique structure of the eigenvalues and eigenvectors for fluid flows of this type, the computation can be simplified, thus providing a more efficient solution procedure than one might have anticipated.

  1. The development of a realistic source term for sodium-cooled fast reactors : assessment of current status and future needs.

    SciTech Connect

    LaChance, Jeffrey L.; Phillips, Jesse; Parma, Edward J., Jr.; Olivier, Tara Jean; Middleton, Bobby D.

    2011-06-01

    Sodium-cooled fast reactors (SFRs) continue to be proposed and designed throughout the United States and the world. Although the number of SFRs actually operating has declined substantially since the 1980s, a significant interest in advancing these types of reactor systems remains. Of the many issues associated with the development and deployment of SFRs, one of high regulatory importance is the source term to be used in the siting of the reactor. A substantial amount of modeling and experimental work has been performed over the past four decades on accident analysis, sodium coolant behavior, and radionuclide release for SFRs. The objective of this report is to aid in determining the gaps and issues related to the development of a realistic, mechanistically derived source term for SFRs. This report will allow the reader to become familiar with the severe accident source term concept and gain a broad understanding of the current status of the models and experimental work. Further, this report will allow insight into future work, in terms of both model development and experimental validation, which is necessary in order to develop a realistic source term for SFRs.

  2. Residues, Distributions, Sources, and Ecological Risks of OCPs in the Water from Lake Chaohu, China

    PubMed Central

    Liu, Wen-Xiu; He, Wei; Qin, Ning; Kong, Xiang-Zhen; He, Qi-Shuang; Ouyang, Hui-Ling; Yang, Bin; Wang, Qing-Mei; Yang, Chen; Jiang, Yu-Jiao; Wu, Wen-Jing; Xu, Fu-Liu

    2012-01-01

    The levels of 18 organochlorine pesticides (OCPs) in the water from Lake Chaohu were measured by solid-phase extraction followed by gas chromatography with mass spectrometric detection. The spatial and temporal distribution, possible sources, and potential ecological risks of the OCPs were analyzed. The annual mean concentration of the OCPs in Lake Chaohu was 6.99 ng/L. Aldrin, HCHs, and DDTs accounted for large proportions of the OCPs. The spatial pollution followed the order of Central Lakes > Western Lakes > Eastern Lakes and water area. The sources of the HCHs were mainly from the historical usage of lindane. DDTs were degraded under aerobic conditions, and the main sources were from the use of technical DDTs. The ecological risks of 5 OCPs were assessed by the species sensitivity distribution (SSD) method in the order of heptachlor > γ-HCH > p,p′-DDT > aldrin > endrin. The combined risks of all sampling sites were MS > JC > ZM > TX, and those of different species were crustaceans > fish > insects and spiders. Overall, the ecological risks of OCP contaminants to aquatic animals were very low. PMID:23251107

  3. Organic micropollutants in coastal waters from NW Mediterranean Sea: sources distribution and potential risk.

    PubMed

    Sánchez-Avila, Juan; Tauler, Romà; Lacorte, Silvia

    2012-10-01

    This study provides a first estimate of the sources, distribution and risk of organic micropollutants (OMPs) in coastal waters of the NW Mediterranean Sea. Polycyclic aromatic hydrocarbons, polychlorinated biphenyls, organochlorinated pesticides, polybrominated diphenyl ethers, phthalates and alkylphenols were analyzed by solid phase extraction and gas chromatography coupled to tandem mass spectrometry (SPE-GC-EI-MS/MS). River waters and wastewater treatment plant effluents discharging to the sea were identified as the main sources of OMPs to coastal waters, with an estimated input of around 25,800 g d⁻¹. The concentration of ΣOMPs in coastal areas ranged from 17.4 to 8442 ng L⁻¹, and was highest in port waters, followed by coastal and river mouth seawaters. A summary overview of the patterns, sources and geographical distribution of OMP contamination in the investigated coastal waters of the NW Mediterranean Sea was obtained by principal component analysis of the complete, suitably pretreated data set. Alkylphenols, bisphenol A and phthalates were the main contributors to ΣOMPs and produced an estimated significant pollution risk for fish, algae and the sensitive mysid shrimp organisms in seawater samples. The combination of GC-MS/MS, chemometrics and risk analysis proved useful for a better control and management of OMP discharges. PMID:22706016

  4. Distribution and geological sources of selenium in environmental materials in Taoyuan County, Hunan Province, China.

    PubMed

    Ni, Runxiang; Luo, Kunli; Tian, Xinglei; Yan, Songgui; Zhong, Jitai; Liu, Maoqiu

    2016-06-01

    The selenium (Se) distribution and geological sources in Taoyuan County, China, were determined by using hydride generation atomic fluorescence spectrometry on rock, soil, and food crop samples collected from various geological regions within the county. The results show Se contents of 0.02-223.85, 0.18-7.05, and 0.006-5.374 mg/kg in the rock, soil, and food crops in Taoyuan County, respectively. The region showing the highest Se content is western Taoyuan County, amid the outcrop of the Lower Cambrian and Ediacaran black rock series, which is distributed in a west-east band. A relatively high-Se environment is found in the central and southern areas of Taoyuan County, where Quaternary limnetic sedimentary facies and Neoproterozoic metamorphic volcanic rocks outcrop, respectively. A relatively low-Se environment includes the central and northern areas of Taoyuan County, where Middle and Upper Cambrian and Ordovician carbonate rocks and Cretaceous sandstones and conglomerates outcrop. These results indicate that the Se distribution in Taoyuan County varies markedly and is controlled by the Se content of the bedrock. The Se-enriched Lower Cambrian and Ediacaran black rock series is the primary source of the seleniferous environment observed in Taoyuan County. Potential seleniferous environments are likely to be found near outcrops of the Lower Cambrian and Ediacaran black rock series in southern China. PMID:26563208

  5. Aerosol structure and vertical distribution in a multi-source dust region.

    PubMed

    Zhang, Jie; Zhang, Qiang; Tang, Congguo; Han, Yongxiang

    2012-01-01

    The vertical distribution of aerosols was directly observed under various atmospheric conditions in the free troposphere using surface micro-pulse lidar (MPL4) at the Zhangye Station (39.08 degrees N, 100.27 degrees E) in western China in the spring of 2008. The study shows that the aerosol distribution over Zhangye can be vertically classified into upper, middle and lower layers with altitudes of 4.5 to 9 km, 2.5 to 4.5 km, and less than 2.5 km, respectively. The aerosol in the upper layer originated from the external sources at higher altitude regions, from far desert regions upwind of Zhangye or transported from higher atmospheric layers by free convection, and the altitude of this aerosol layer decreased with time; the aerosols in the middle and lower layers originated from both external and local sources. The aerosol extinction coefficients in the upper and lower layers decreased with altitude, whereas the coefficient in the middle layer changed only slightly, which suggests that aerosol mixing occurs in the middle layer. The distribution of aerosols with altitude has three features: a single peak that forms under stable atmospheric conditions, an exponential decrease with altitude that occurs under unstable atmospheric conditions, and slight change in the mixed layer. Due to the impact of the top of the atmospheric boundary layer, the diurnal variation in the aerosol extinction coefficient has a single peak, which is higher in the afternoon and lower in the morning. PMID:23513689

  6. Wall-loss distribution of charge breeding ions in an electron cyclotron resonance ion source

    SciTech Connect

    Jeong, S. C.; Oyaizu, M.; Imai, N.; Hirayama, Y.; Ishiyama, H.; Miyatake, H.; Niki, K.; Okada, M.; Watanabe, Y. X.; Otokawa, Y.; Osa, A.; Ichikawa, S.

    2011-03-15

    The ion loss distribution in an electron cyclotron resonance ion source (ECRIS) was investigated to understand the element dependence of the charge breeding efficiency in an electron cyclotron resonance (ECR) charge breeder. The radioactive ¹¹¹In¹⁺ and ¹⁴⁰Xe¹⁺ ions (typical nonvolatile and volatile elements, respectively) were injected into the ECR charge breeder at the Tokai Radioactive Ion Accelerator Complex to breed their charge states. Their respective residual activities on the sidewall of the cylindrical plasma chamber of the source were measured after charge breeding as functions of the azimuthal angle and longitudinal position and two-dimensional distributions of ions lost during charge breeding in the ECRIS were obtained. These distributions had different azimuthal symmetries. The origins of these different azimuthal symmetries are qualitatively discussed by analyzing the differences and similarities in the observed wall-loss patterns. The implications for improving the charge breeding efficiencies of nonvolatile elements in ECR charge breeders are described. The similarities represent universal ion loss characteristics in an ECR charge breeder, which are different from the loss patterns of electrons on the ECRIS wall.

  7. Inferring the Spatial and Energy Distribution of Gamma-Ray Burst Sources. II. Isotropic Models

    NASA Astrophysics Data System (ADS)

    Loredo, Thomas J.; Wasserman, Ira M.

    1998-07-01

    We use Bayesian methods to analyze the distribution of gamma-ray burst intensities reported in the Third BATSE Catalog (3B catalog) of gamma-ray bursts, presuming the distribution of burst sources ("bursters") is isotropic. We study both phenomenological and cosmological source distribution models, using Bayes's theorem both to infer unknown parameters in the models and to compare rival models. We analyze the distribution of the time-averaged peak photon number flux, Φ, measured on both 64 ms and 1024 ms timescales, performing the analysis of data based on each timescale independently. Several of our findings differ from those of previous analyses that modeled burst detection less completely. In particular, we find that the width of the intrinsic luminosity function for bursters is unconstrained, and the luminosity function of the actually observed bursts can be extremely broad, in contrast to the findings of all previous studies. Useful constraints probably require observation of bursts significantly fainter than those visible to BATSE. We also find that the 3B peak flux data do not usefully constrain the redshifts of burst sources; useful constraints require the analysis of data beyond that in the 3B catalog (such as burst time histories) or data from brighter bursts than have been seen by BATSE (such as those observed by the Pioneer Venus Orbiter). In addition, we find that an accurate understanding of the peak flux distributions reported in the 3B almost certainly requires consideration of data on the temporal and spectral properties of bursts beyond that reported in the 3B catalog and more sophisticated modeling than has so far been attempted. We first analyze purely phenomenological power-law and broken power-law models for the distribution of observed peak fluxes. We find that the 64 ms data are adequately fitted by a single power law, but that the 1024 ms data significantly favor models with a sharp, steep break near the highest observed fluxes. At

  8. DOES SIZE MATTER? THE UNDERLYING INTRINSIC SIZE DISTRIBUTION OF RADIO SOURCES AND IMPLICATIONS FOR UNIFICATION BY ORIENTATION

    SciTech Connect

    DiPompeo, M. A.; Runnoe, J. C.; Myers, A. D.; Boroson, T. A.

    2013-09-01

    Unification by orientation is a ubiquitous concept in the study of active galactic nuclei. A gold standard of the orientation paradigm is the hypothesis that radio galaxies and radio-loud quasars are intrinsically the same, but are observed over different ranges of viewing angles. Historically, strong support for this model was provided by the projected sizes of radio structure in luminous radio galaxies, which were found to be significantly larger than those of quasars, as predicted due to simple geometric projection. Recently, this test of the simplest prediction of orientation-based models has been revisited with larger samples that cover wider ranges of fundamental properties, and no clear difference in projected sizes of radio structure is found. Cast solely in terms of viewing angle effects, these results provide convincing evidence that unification of these objects solely through orientation fails. However, it is possible that conflicting results regarding the role orientation plays in our view of radio sources simply result from insufficient sampling of their intrinsic size distribution. We test this possibility using Monte Carlo simulations constrained by real sample sizes and properties. We develop models for the real intrinsic size distribution of radio sources, simulate observations by randomly sampling intrinsic sizes and viewing angles, and analyze how likely each sample is to support or dispute unification by orientation. We find that, while it is possible to reconcile conflicting results purely within a simple, orientation-based framework, it is very unlikely. We analyze the effects that sample size, relative numbers of radio galaxies and quasars, the critical angle that separates the two subclasses, and the shape of the intrinsic size distribution have on this type of test.
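
    The Monte Carlo test described above can be reproduced in outline: draw intrinsic sizes from an assumed distribution, draw isotropic viewing angles, project, split the sample at an assumed critical angle, and compare the projected-size distributions of the resulting "quasars" and "radio galaxies". The log-normal size distribution and 45° critical angle below are illustrative assumptions, not the models fitted in the paper.

        import numpy as np

        rng = np.random.default_rng(42)
        n = 5000
        theta_c = np.deg2rad(45.0)                      # assumed critical angle

        # assumed intrinsic (deprojected) source sizes, log-normal in kpc
        intrinsic_kpc = rng.lognormal(mean=np.log(150.0), sigma=0.8, size=n)

        # isotropic viewing angles: cos(theta) uniform on [0, 1]
        theta = np.arccos(rng.uniform(0.0, 1.0, size=n))
        projected_kpc = intrinsic_kpc * np.sin(theta)

        quasars = projected_kpc[theta < theta_c]        # viewed close to the axis
        galaxies = projected_kpc[theta >= theta_c]      # viewed closer to the sky plane
        print("median projected size, quasars :", np.median(quasars))
        print("median projected size, galaxies:", np.median(galaxies))
        # repeating this for many mock samples of realistic size shows how often the
        # expected quasar/galaxy size difference would be missed purely by chance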

  9. An inverse modeling method to assess the source term of the Fukushima nuclear power plant accident using gamma dose rate observations

    NASA Astrophysics Data System (ADS)

    Saunier, O.; Mathieu, A.; Didier, D.; Tombette, M.; Quélo, D.; Winiarek, V.; Bocquet, M.

    2013-06-01

    The Chernobyl nuclear accident and, more recently, the Fukushima accident highlighted that the largest source of error in consequence assessment is the source term, including the time evolution of the release rate and its distribution between radioisotopes. Inverse modeling methods, which combine environmental measurements and atmospheric dispersion models, have proven efficient in assessing the source term of an accidental release (Gudiksen, 1989; Krysta and Bocquet, 2007; Stohl et al., 2012a; Winiarek et al., 2012). Most existing approaches are designed to use air sampling measurements (Winiarek et al., 2012) and some of them also use deposition measurements (Stohl et al., 2012a; Winiarek et al., 2013), but none of them uses dose rate measurements. However, dose rate monitoring is the most widespread measurement system, and in the event of a nuclear accident these data constitute the main source of measurements of the plume and radioactive fallout during releases. This paper proposes a method to use dose rate measurements as part of an inverse modeling approach to assess source terms. The method is proven efficient and reliable when applied to the accident at the Fukushima Daiichi nuclear power plant (FD-NPP). The emissions for the eight main isotopes 133Xe, 134Cs, 136Cs, 137Cs, 137mBa, 131I, 132I and 132Te have been assessed. Accordingly, 103 PBq of 131I, 35.5 PBq of 132I, 15.5 PBq of 137Cs and 12 100 PBq of noble gases were released. The events at FD-NPP (such as venting, explosions, etc.) known to have caused atmospheric releases are well identified in the retrieved source term. The estimated source term is validated by comparing simulations of atmospheric dispersion and deposition with environmental observations. Across all monitoring locations, the model-measurement agreement is good: 80% of the simulated dose rates are within a factor of 2 of the observed values. Changes in dose rates over time have been overall properly reconstructed, especially
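
    At its core, such an inverse method relies on a linear relation y ≈ H q between discretized release rates q and observations y through a source-receptor (dispersion) matrix H, solved under a non-negativity constraint. A minimal sketch with synthetic data follows; it omits the regularization and observation-error modeling used in the operational reconstruction, and all matrix entries and "events" are made up.

        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(3)

        # H[i, j]: modeled dose-rate contribution at observation i per unit release in time slot j
        n_obs, n_slots = 120, 24
        H = rng.exponential(scale=1.0, size=(n_obs, n_slots))
        H *= rng.uniform(size=(n_obs, n_slots)) < 0.3        # most slots do not affect a given station

        q_true = np.zeros(n_slots)
        q_true[[5, 6, 12]] = [8.0, 4.0, 10.0]                # a few synthetic release "events"
        y = H @ q_true * (1.0 + 0.05 * rng.normal(size=n_obs))   # noisy synthetic dose rates

        q_est, _ = nnls(H, y)        # release rates q >= 0 minimizing ||H q - y||
        print(np.round(q_est, 2))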

  10. Required distribution of noise sources for Green's function recovery in diffusive fields

    NASA Astrophysics Data System (ADS)

    Shamsalsadati, S.; Weiss, C. J.

    2011-12-01

    In the most general sense, noise is the part of the signal of little or no interest, arising from a multitude of causes such as operator error, imperfect instrumentation, experiment design, or inescapable background interference. Considering the latter, it has been shown that the Green's function can be extracted from cross-correlation of the ambient, diffusive wavefields arising from background random noise sources. Pore pressure and low-frequency electromagnetic induction are two such examples of diffusive fields. In theory, applying the Green's function method in geophysical exploration requires an infinite number of volumetrically distributed sources; however, in the real world the number of noise sources in an area is limited and, furthermore, they are unevenly distributed in time, space and spectral content. Hence, quantifying the noise sources required to calculate the Green's function acceptably well remains an open research question. The purpose of this study is to find the region of noise sources that contributes most to the Green's function estimation in diffusive systems. We call such a region the Volume of Relevance (VoR). Our analysis builds upon recent work on a 1D homogeneous system, which showed that sources located between the two receiver positions are the most important ones for the purpose of Green's function recovery. Our results confirm the previous finding, but we also examine the effect of heterogeneity, dimensionality and receiver location in both 1D and 2D at a fixed frequency. We demonstrate that for receivers located symmetrically across an interface between regions of contrasting diffusivity, the VoR rapidly shifts from one side of the interface to the other, and back again, as receiver separation increases. We also demonstrate that where the receiver pair is located on the interface itself, the shifting is less rapid, and for moderate to high diffusivity contrasts, the VoR remains entirely on the more diffusive side. In addition, because classical
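
    The basic operation behind this kind of Green's function recovery is the stacked cross-correlation of the noise records at two receivers. The sketch below is a generic illustration with a single delayed broadband source rather than a true volumetric distribution of diffusive noise sources; its only purpose is to show the correlation step itself (in the diffusive case the stacked correlation is related to the Green's function through an additional, known filter).

        import numpy as np

        def noise_crosscorrelation(u_a, u_b, nlag):
            """Cross-correlation of two (demeaned) noise records over lags -nlag..nlag."""
            u_a = u_a - u_a.mean()
            u_b = u_b - u_b.mean()
            lags = np.arange(-nlag, nlag + 1)
            cc = np.array([np.sum(u_a[max(0, -l): len(u_a) - max(0, l)] *
                                  u_b[max(0, l): len(u_b) - max(0, -l)]) for l in lags])
            return lags, cc / len(u_a)

        rng = np.random.default_rng(2)
        src = rng.normal(size=5000)     # broadband "noise" source
        delay = 12                      # propagation delay between receivers (samples)
        u_a = src + 0.1 * rng.normal(size=src.size)
        u_b = np.roll(src, delay) + 0.1 * rng.normal(size=src.size)
        lags, cc = noise_crosscorrelation(u_a, u_b, nlag=40)
        print("peak of the stacked correlation at lag:", lags[np.argmax(cc)])   # -> 12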

  11. A methodology for quantitatively estimating a distributed source of volatiles in a cometary coma

    NASA Astrophysics Data System (ADS)

    De Keyser, Johan; Dhooghe, Frederik; Gunell, Herbert; Maggiolo, Romain; Mann, Ingrid

    2014-05-01

    Rosetta will rendez-vous with comet 67P/Churyumov-Gerasimenko in May 2014. One of its objectives is to study the gas in the cometary coma. The Rosetta Orbiter Spectrometer for Ion and Neutral Analysis (ROSINA) package aims to determine the gas composition in the coma, in particular during the planned close flyby episodes. An outstanding question, dating back to earlier findings by Giotto at 1P/Halley, is the role of outgassing from dust particles. The gas in the coma mainly originates from sublimation of volatiles on the nucleus surface, but additional sublimation from dust particles may contribute as well. The dust therefore provides a "distributed source" of such volatiles. In particular, outgassing from dust particles will necessarily lead to different radial profiles of the sublimated gas and/or its photo-dissociation products, and through chemical reactions it may affect many other species. In the present contribution we demonstrate how information about the dust source can be inferred from the measured abundance of species in the coma, preferably over a range of distances from the nucleus. This is achieved by solving an inverse problem that is based on knowledge of the reaction pathways, the solar UV flux, and the coma measurements. The detailed characteristics of the distributed source depend on the dust grain size distribution, the outgassing rate, the possibly different composition of the volatile material carried by the grains, the dust grain outflow velocities, and dust grain fragmentation. The proposed methodology is limited in determining these characteristics, but it can be refined or constrained by incorporating additional information, for instance from the dust instruments on board Rosetta.
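
    A simple forward-model sketch of why a distributed (dust) source changes the radial profile, under spherical symmetry and a constant outflow speed; all rates, speeds and the dust-source profile below are made-up numbers, and the inverse problem described above would amount to fitting this kind of forward model to measured abundances. Steady-state continuity gives n(r) = [Q_nuc + integral of 4*pi*r'^2*S(r') from the nucleus out to r] / (4*pi*r^2*v).

        import numpy as np

        def coma_density(r, q_nucleus, v_gas, dust_source=None):
            """Number density n(r) for a nucleus source q_nucleus [molec/s] plus an optional
            distributed volume source S(r) [molec m^-3 s^-1], constant outflow v_gas [m/s]."""
            cumulative = np.full_like(r, q_nucleus, dtype=float)
            if dust_source is not None:
                shell = 4.0 * np.pi * r**2 * dust_source(r)
                cumulative = q_nucleus + np.concatenate(
                    [[0.0], np.cumsum(0.5 * (shell[1:] + shell[:-1]) * np.diff(r))])
            return cumulative / (4.0 * np.pi * r**2 * v_gas)

        r = np.logspace(3.3, 6.0, 400)       # ~2 km to 1000 km from the nucleus, in metres
        n_nucleus = coma_density(r, q_nucleus=1e26, v_gas=700.0)
        n_dusty = coma_density(r, q_nucleus=1e26, v_gas=700.0,
                               dust_source=lambda rr: 1e12 * np.exp(-rr / 2.0e4))
        ratio = n_dusty / n_nucleus
        print("density enhancement at ~10 km and ~500 km:",
              round(ratio[np.searchsorted(r, 1e4)], 2),
              round(ratio[np.searchsorted(r, 5e5)], 2))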

  12. Propagation of Source Grain-size Distribution Uncertainty by Using a Lagrangian Volcanic Particle Dispersal Model

    NASA Astrophysics Data System (ADS)

    Neri, A.; De'Michieli Vitturi, M.; Pardini, F.; Salvetti, M. V.; Spanu, A.

    2014-12-01

    Lagrangian particle dispersal models are commonly used for tracking ash particles emitted from volcanic plumes and transported under the action of atmospheric wind fields. In this work, we adopted a Lagrangian particle model to carry out an uncertainty quantification analysis of volcanic ash dispersal in the atmosphere, focused on the uncertainties affecting particle source conditions. To this aim, the Eulerian fully compressible mesoscale non-hydrostatic model WRF was used to generate the driving wind field. The Lagrangian particle model LPAC (de'Michieli Vitturi et al., JGR 2010) was then used to simulate the transport of mass particles under the action of atmospheric conditions. The particle motion equations were derived by expressing the Lagrangian particle acceleration as the sum of the forces acting along its trajectory, with drag forces calculated as a function of particle diameter, density, shape and Reynolds number. The simulations were representative of weak plume events at Mt. Etna and aimed to quantify the effect on the dispersal process of uncertainty in the mean and variance of the Gaussian density function describing the grain-size distribution of the mixture, and in the particle sphericity. In order to analyze the sensitivity of particle dispersal to these uncertain parameters with a reasonable number of simulations, and therefore with affordable computational costs, response surfaces in the parameter space were built by using the generalized polynomial chaos technique. The uncertainty analysis allowed us to quantify the most probable values, as well as the probability density functions, of the number of particles and of the mean and variance of the grain-size distribution at various distances from the source, both in the air and on the ground. In particular, the results highlighted the strong reduction of the uncertainty ranges of the mean and variance of the grain-size distribution with increasing distance from the source and the significant control of particle sphericity on the
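
    For intuition on the propagation step, the toy Monte Carlo below pushes an assumed uncertainty range on the mean and variance of the grain-size distribution (in phi units, d = 2**(-phi) mm) through a Stokes-regime settling-velocity law. Both the parameter ranges and the drag law are simplifications: the actual study uses a full Reynolds-number-dependent drag and generalized polynomial chaos response surfaces rather than brute-force sampling.

        import numpy as np

        rng = np.random.default_rng(3)

        def stokes_fall_velocity(d, rho_p=2500.0, rho_a=1.0, mu=1.8e-5, g=9.81):
            """Terminal fall velocity (m/s) in the low-Reynolds-number (Stokes) regime."""
            return (rho_p - rho_a) * g * d**2 / (18.0 * mu)

        n_samples = 2000
        mu_phi = rng.uniform(3.0, 5.0, n_samples)      # uncertain mean of the phi distribution
        sigma_phi = rng.uniform(0.5, 1.5, n_samples)   # uncertain standard deviation

        median_v = np.empty(n_samples)
        for i in range(n_samples):
            phi = rng.normal(mu_phi[i], sigma_phi[i], 500)
            d_m = 2.0 ** (-phi) * 1e-3                 # grain diameter in metres
            median_v[i] = np.median(stokes_fall_velocity(d_m))

        print("median settling velocity, 5th-95th percentile [m/s]:",
              np.round(np.percentile(median_v, [5, 95]), 4))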

  13. Development of Approach for Long-Term Management of Disused Sealed Radioactive Sources - 13630

    SciTech Connect

    Kinker, M.; Reber, E.; Mansoux, H.; Bruno, G.

    2013-07-01

    Radioactive sources are used widely throughout the world in a variety of medical, industrial, research and military applications. When such radioactive sources are no longer used and are not intended to be used for the practice for which an authorization was granted, they are designated as 'disused sources'. Whether or not appropriate controls are in place during the useful life of a source, the end of this useful life is often a turning point after which it is more difficult to ensure the safety and security of the source over time. For various reasons, many disused sources cannot be returned to the manufacturer or the supplier for reuse or recycling. When these attempts fail, disused sources should be declared as radioactive waste and should be managed as such, in compliance with relevant international legal instruments and safety standards. However, disposal remains an unresolved issue in many countries, due in part to limited public acceptance, insufficient funding, and a lack of practical examples of strategies for determining suitable disposal options. As a result, disused sources are often stored indefinitely at the facilities where they were once used. In order to prevent disused sources from becoming orphan sources, each country must develop and implement a comprehensive waste management strategy that includes disposal of disused sources. The International Atomic Energy Agency (IAEA) fosters international cooperation between countries and encourages the development of a harmonized 'cradle to grave' approach to managing sources consistent with international legal instruments, IAEA safety standards, and international good practices. This 'cradle to grave' approach requires the development of a national policy and implementing strategy, an adequate legal and regulatory framework, and adequate resources and infrastructure that cover the entire life cycle, from production and use of radioactive sources to disposal. (authors)

  14. Anthropogenic {sup 129}I in western New York: Distribution, sources and pathways

    SciTech Connect

    Fehn, U.; Rao, U.; Teng, R.T.D.

    1995-12-01

    The present {sup 129}I concentration at the surface of the earth is dominated by releases from anthropogenic sources such as atmospheric weapons tests and nuclear facilities. We report here {sup 129}I concentrations in waters, plants and soils from Western New York and the surrounding areas. Values found in waters and plants outside Western New York are approximately three orders of magnitude above natural levels. Concentrations in Western New York are another order of magnitude higher, with a distinct concentration pattern pointing to the source at West Valley, a former reprocessing facility, which continues to add significantly to the {sup 129}I budget in this region. Transport of anthropogenic {sup 129}I is clearly detectable in waters draining the West Valley area, but the overall distribution of this isotope also indicates a significant component of aerial transport. The continued presence of bomb-related {sup 129}I demonstrates a long residence time of this isotope in the biosphere and hydrosphere.

  15. Long distance measurement-device-independent quantum key distribution with entangled photon sources

    SciTech Connect

    Xu, Feihu; Qi, Bing; Liao, Zhongfa; Lo, Hoi-Kwong

    2013-08-05

    We present a feasible method that can make quantum key distribution (QKD) both ultra-long-distance and immune to all attacks on the detection system. This method is called measurement-device-independent QKD (MDI-QKD) with entangled photon sources in the middle. By proposing a model and simulating a QKD experiment, we find that MDI-QKD with one entangled photon source can tolerate 77 dB loss (367 km of standard fiber) in the asymptotic limit and 60 dB loss (286 km of standard fiber) in the finite-key case with state-of-the-art detectors. Our general model can also be applied to other non-QKD experiments involving entanglement and Bell state measurements.
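
    The quoted fibre lengths follow from the loss figures via length = loss / attenuation; both values quoted above are consistent with an assumed attenuation of roughly 0.21 dB/km for standard fibre at telecom wavelengths:

        # Loss budget to fibre length, using the ~0.21 dB/km attenuation implied by the abstract.
        alpha_db_per_km = 0.21
        for loss_db in (77, 60):
            print(f"{loss_db} dB -> {loss_db / alpha_db_per_km:.0f} km")
        # 77 dB -> 367 km and 60 dB -> 286 km, matching the figures quoted above.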

  16. Photon-monitoring attack on continuous-variable quantum key distribution with source in middle

    NASA Astrophysics Data System (ADS)

    Wang, Yijun; Huang, Peng; Guo, Ying; Huang, Dazu

    2014-12-01

    Motivated by the fact that a non-Gaussian operation may increase the entanglement of an entangled system, we suggest a photon-monitoring attack strategy on entanglement-based (EB) continuous-variable quantum key distribution (CVQKD) using photon subtraction operations, where the entangled source originates from the center instead of from one of the legitimate participants. We show that an eavesdropper, Eve, can steal a large amount of information from the participants after intercepting part of the beams with the photon-monitoring attack strategy. The structure of the proposed CVQKD protocol is useful for simply analyzing how quantum loss in imperfect channels can decrease the performance of the protocol. The proposed attack strategy can be implemented under current technology, where a newly developed and versatile non-Gaussian operation can be employed with the entangled source in the middle in order to access a large amount of information in the EB CVQKD protocol, as well as in the prepare-and-measure (PM) CVQKD protocol.

  17. Distribution, richness, quality, and thermal maturity of source rock units on the North Slope of Alaska

    USGS Publications Warehouse

    Peters, K.E.; Bird, K.J.; Keller, M.A.; Lillis, P.G.; Magoon, L.B.

    2003-01-01

    Four source rock units on the North Slope were identified, characterized, and mapped to better understand the origin of petroleum in the area: the Hue-gamma ray zone (Hue-GRZ), the pebble shale unit, the Kingak Shale, and the Shublik Formation. Rock-Eval pyrolysis, total organic carbon analysis, and well logs were used to map the present-day thickness, organic quantity (TOC), quality (hydrogen index, HI), and thermal maturity (Tmax) of each unit. To map these units, we screened all available geochemical data for wells in the study area and assumed that the top and bottom of the oil window occur at Tmax of ~440°C and ~470°C, respectively. Based on several assumptions related to carbon mass balance and regional distributions of TOC, the present-day source rock quantity and quality maps were used to determine the extent of fractional conversion of the kerogen to petroleum and to map the original organic richness prior to thermal maturation.
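
    The Tmax screening rule stated above maps directly onto a simple maturity classifier; the 440 °C and 470 °C bounds are the ones assumed in the abstract, while the class labels below are generic.

        def maturity_from_tmax(tmax_c):
            """Thermal maturity class from Rock-Eval Tmax, using ~440/470 degC oil-window bounds."""
            if tmax_c < 440.0:
                return "immature"
            elif tmax_c <= 470.0:
                return "oil window"
            return "post-mature (gas-prone)"

        for t in (425.0, 445.0, 480.0):
            print(t, "->", maturity_from_tmax(t))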

  18. A directional array approach for the measurement of rotor noise source distributions with controlled spatial resolution

    NASA Technical Reports Server (NTRS)

    Brooks, T. F.; Marcolini, M. A.; Pope, D. S.

    1987-01-01

    A special array system has been designed to examine noise source distributions over a helicopter rotor model. The particular measurement environment is a rotor operating in the open jet of an anechoic wind tunnel. An out-of-flow directional microphone array is used, with a directivity pattern whose major directional lobe projects onto the rotor disk. If significant contributions from extraneous tunnel noise sources in the direction of the side lobes are excluded, the dominant output from the array is the noise emitted from the projected area on the rotor disk. The design incorporates an array-element signal-blending feature which serves to control the spatial resolution set by the size of the directional lobes. (Without blending, the resolution and side-lobe size are very strong functions of frequency, which severely limits the array's usefulness.)

  19. Quantitative assessment of seismic source performance: Feasibility of small and affordable seismic sources for long term monitoring at the Ketzin CO2 storage site, Germany

    NASA Astrophysics Data System (ADS)

    Sopher, Daniel; Juhlin, Christopher; Huang, Fei; Ivandic, Monika; Lueth, Stefan

    2014-08-01

    We apply a range of quantitative pre-stack analysis techniques to assess the feasibility of using smaller and cheaper seismic sources than those currently used at the Ketzin CO2 storage site. Results from two smaller land sources are presented alongside those from a larger, more powerful source typically utilized for seismic acquisition at the Ketzin site. The geological target for the study is the Triassic Stuttgart Formation, which contains a saline aquifer currently used for CO2 storage. The reservoir lies at a depth of approximately 630 m, equivalent to a travel time of 500 ms along the study profile. The three sources discussed in the study are the Vibsist 3000 and Vibsist 500 (which use industrial hydraulically driven concrete-breaking hammers) and a drop hammer source. Data were collected for the comparison using the three sources in 2011, 2012 and 2013 along a 984 m long line with 24 m receiver spacing and 12 m shot spacing. Initially, a quantitative analysis of the noise levels in the three surveys is performed. The raw shot gathers are then analyzed quantitatively to investigate the relative energy output, signal-to-noise ratio, penetration depth, repeatability and frequency content of the different sources. The performance of the sources is also assessed based on stacked seismic sections. Based on the results of this study, it appears that both of the smaller sources are capable of producing good images of the target reservoir and can be considered suitable as lower-cost, less invasive sources for use at the Ketzin site or other shallow CO2 storage projects. Finally, the results from the various pre-stack analysis techniques are discussed in terms of how representative they are of the final stacked sections.
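
    One of the simplest of the pre-stack measures mentioned above is an RMS-based signal-to-noise ratio per shot gather; the sketch below (the window positions and the synthetic gather are arbitrary assumptions) shows the kind of quantity that can be compared across sources.

        import numpy as np

        def shot_gather_snr_db(gather, noise_window, signal_window):
            """RMS signal-to-noise ratio in dB for a gather of shape (n_traces, n_samples);
            windows are (start, stop) sample indices, the noise window taken before first breaks."""
            signal_rms = np.sqrt(np.mean(gather[:, slice(*signal_window)] ** 2))
            noise_rms = np.sqrt(np.mean(gather[:, slice(*noise_window)] ** 2))
            return 20.0 * np.log10(signal_rms / noise_rms)

        rng = np.random.default_rng(4)
        gather = 0.1 * rng.normal(size=(24, 1000))      # 24 traces of background noise
        gather[:, 500:520] += 1.0                       # a crude "reflection" event
        print(f"S/N = {shot_gather_snr_db(gather, (0, 200), (490, 530)):.1f} dB")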

  20. Finite-key security analysis of quantum key distribution with imperfect light sources

    NASA Astrophysics Data System (ADS)

    Mizutani, Akihiro; Curty, Marcos; Lim, Charles Ci Wen; Imoto, Nobuyuki; Tamaki, Kiyoshi

    2015-09-01

    In recent years, the gap between theory and practice in quantum key distribution (QKD) has been significantly narrowed, particularly for QKD systems with arbitrarily flawed optical receivers. The status for QKD systems with imperfect light sources is however less satisfactory, in the sense that the resulting secure key rates are often overly dependent on the quality of state preparation. This is especially the case when the channel loss is high. Very recently, to overcome this limitation, Tamaki et al proposed a QKD protocol based on the so-called 'rejected data analysis', and showed that its security, in the limit of infinitely long keys, is almost independent of any encoding flaw in the qubit space, with the protocol remaining compatible with the decoy state method. Here, as a step towards practical QKD, we show that a similar conclusion is reached in the finite-key regime, even when the intensity of the light source is unstable. More concretely, we derive security bounds for a wide class of realistic light sources and show that the bounds are also efficient in the presence of high channel loss. Our results strongly suggest the feasibility of long distance provably secure communication with imperfect light sources.

  1. Finite-key security analysis of quantum key distribution with imperfect light sources

    SciTech Connect

    Mizutani, Akihiro; Curty, Marcos; Lim, Charles Ci Wen; Imoto, Nobuyuki; Tamaki, Kiyoshi

    2015-09-09

    In recent years, the gap between theory and practice in quantum key distribution (QKD) has been significantly narrowed, particularly for QKD systems with arbitrarily flawed optical receivers. The status for QKD systems with imperfect light sources is however less satisfactory, in the sense that the resulting secure key rates are often overly dependent on the quality of state preparation. This is especially the case when the channel loss is high. Very recently, to overcome this limitation, Tamaki et al proposed a QKD protocol based on the so-called 'rejected data analysis', and showed that its security, in the limit of infinitely long keys, is almost independent of any encoding flaw in the qubit space, with the protocol remaining compatible with the decoy state method. Here, as a step towards practical QKD, we show that a similar conclusion is reached in the finite-key regime, even when the intensity of the light source is unstable. More concretely, we derive security bounds for a wide class of realistic light sources and show that the bounds are also efficient in the presence of high channel loss. Our results strongly suggest the feasibility of long distance provably secure communication with imperfect light sources.

  2. Space-bound optical source for satellite-ground decoy-state quantum key distribution.

    PubMed

    Li, Yang; Liao, Sheng-Kai; Chen, Xie-Le; Chen, Wei; Cheng, Kun; Cao, Yuan; Yong, Hai-Lin; Wang, Tao; Yang, Hua-Qiang; Liu, Wei-Yue; Yin, Juan; Liang, Hao; Peng, Cheng-Zhi; Pan, Jian-Wei

    2014-11-01

    Satellite-ground quantum key distribution has embarked on the stage of engineering implementation, and a global quantum-secured network is imminent in the foreseeable future. As one payload of the quantum-science satellite, which will be ready before the end of 2015, we report our recent work on the space-bound decoy-state optical source. Specialized 850 nm laser diodes (LDs) have been manufactured, and the integrated optical source has been completed based on these LDs. The weak coherent pulses produced by our optical source feature a high clock rate of 100 MHz, intensity stability of 99.5%, high polarization fidelity of 99.7% and phase randomization. A series of space environment tests have been conducted to verify the optical source's performance, and the results are satisfactory. The emulated final secure keys are about 120 kbits during one usable pass of the low Earth orbit satellite. This work takes a significant step forward towards satellite-ground QKD and the global quantum-secured network. PMID:25401878

  3. Finite-key security analysis of quantum key distribution with imperfect light sources

    DOE PAGESBeta

    Mizutani, Akihiro; Curty, Marcos; Lim, Charles Ci Wen; Imoto, Nobuyuki; Tamaki, Kiyoshi

    2015-09-09

    In recent years, the gap between theory and practice in quantum key distribution (QKD) has been significantly narrowed, particularly for QKD systems with arbitrarily flawed optical receivers. The status for QKD systems with imperfect light sources is however less satisfactory, in the sense that the resulting secure key rates are often overly dependent on the quality of state preparation. This is especially the case when the channel loss is high. Very recently, to overcome this limitation, Tamaki et al proposed a QKD protocol based on the so-called 'rejected data analysis', and showed that its security, in the limit of infinitely long keys, is almost independent of any encoding flaw in the qubit space, with the protocol remaining compatible with the decoy state method. Here, as a step towards practical QKD, we show that a similar conclusion is reached in the finite-key regime, even when the intensity of the light source is unstable. More concretely, we derive security bounds for a wide class of realistic light sources and show that the bounds are also efficient in the presence of high channel loss. Our results strongly suggest the feasibility of long distance provably secure communication with imperfect light sources.

  4. Co Spectral Line Energy Distributions in Orion Sources: Templates for Extragalactic Observations

    NASA Astrophysics Data System (ADS)

    Indriolo, Nick; Bergin, Edwin

    2015-06-01

    The Herschel Space Observatory has enabled the observation of CO emission lines originating in the J=5 through J=48 rotational levels. Surveys of active galaxies (e.g., starbursts, Seyferts, ULIRGs) detect emission from levels as high as J=30, but the precise excitation mechanisms responsible for producing the observed CO SLEDs (Spectral Line Energy Distribution) remain ambiguous. To better constrain the possible excitation mechanisms in extragalactic sources, we investigate the CO SLEDs arising from sources with known characteristics in the nearby Orion region. Targets include Orion-KL (high-mass star forming region containing a hot core, embedded protostars, outflows, and shocks), Orion South (high-mass star forming region containing embedded protostars, outflows, and a photodissociation region), Orion H_2 Peak 1 (molecular shock), and the Orion Bar (a photodissociation region). Emission lines from complex sources are decomposed using velocity information from high spectral resolution observations made with Herschel-HIFI (Heterodyne Instrument for the Far-Infrared). Each source and/or component is taken as a template for a particular excitation mechanism, and then applied to interpret excitation in more distant regions within the Galaxy, as well as external galaxies.

  5. Plans for a Collaboratively Developed Distributed Control System for the Spallation Neutron Source

    SciTech Connect

    DeVan, W.R.; Gurd, D.P.; Hammonds, J.; Lewis, S.A.; Smith, J.D.

    1999-03-29

    The Spallation Neutron Source (SNS) is an accelerator-based pulsed neutron source to be built in Oak Ridge, Tennessee. The facility has five major sections: a "front end" consisting of a 65 keV H{sup -} ion source followed by a 2.5 MeV RFQ; a 1 GeV linac; a storage ring; a 1 MW spallation neutron target (upgradeable to 2 MW); and the conventional facilities to support these machines and a suite of neutron scattering instruments to exploit them. These components will be designed and implemented by five collaborating institutions: Lawrence Berkeley National Laboratory (Front End); Los Alamos National Laboratory (Linac); Brookhaven National Laboratory (Storage Ring); Argonne National Laboratory (Instruments); and Oak Ridge National Laboratory (Neutron Source and Conventional Facilities). It is proposed to implement a fully integrated control system for all aspects of this complex. The system will be developed collaboratively, with some degree of local autonomy for distributed systems, but centralized accountability. Technical integration will be based upon the widely used EPICS control system toolkit and a complete set of hardware and software standards. The scope of the integrated control system includes site-wide timing and synchronization, networking and machine protection. This paper discusses the technical and organizational issues of planning a large control system to be developed collaboratively at five different institutions, the approaches being taken to address those issues, as well as some of the particular technical challenges for the SNS control system.

  6. Fast optical source for quantum key distribution based on semiconductor optical amplifiers.

    PubMed

    Jofre, M; Gardelein, A; Anzolin, G; Amaya, W; Capmany, J; Ursin, R; Peñate, L; Lopez, D; San Juan, J L; Carrasco, J A; Garcia, F; Torcal-Milla, F J; Sanchez-Brea, L M; Bernabeu, E; Perdigues, J M; Jennewein, T; Torres, J P; Mitchell, M W; Pruneri, V

    2011-02-28

    A novel integrated optical source capable of emitting faint pulses with different polarization states and with different intensity levels at 100 MHz has been developed. The source relies on a single laser diode followed by four semiconductor optical amplifiers and thin-film polarizers, connected through a fiber network. The use of a single laser ensures a high level of indistinguishability in time and spectrum of the pulses for the four different polarizations and three different levels of intensity. The applicability of the source is demonstrated in the lab through a free-space quantum key distribution experiment which makes use of the decoy-state BB84 protocol. We achieved a lower-bound secure key rate of the order of 3.64 Mbps and a quantum bit error ratio as low as 1.14×10⁻², while the lower-bound secure key rate became 187 bps for an equivalent attenuation of 35 dB. To our knowledge, this is the fastest polarization-encoded QKD system reported so far. The performance, reduced size, low power consumption and the fact that the components used can be space qualified make the source particularly suitable for secure satellite communication. PMID:21369207

  7. Electric Field Distribution Excited by Indoor Radio Source for Exposure Compliance Assessment

    NASA Astrophysics Data System (ADS)

    Higashiyama, Junji; Tarusawa, Yoshiaki

    Correction factors are presented for estimating the RF electromagnetic field strength in the compliance assessment of human exposure from an indoor RF radio source in the frequency range from 800 MHz to 3.5 GHz. The correction factors are derived from the increase in the spatially averaged electric field strength distribution, which depends on the building materials. The spatially averaged electric field strength is calculated using the relative complex dielectric constants of building materials. The relative complex dielectric constant is obtained through measurement of the transmission and reflection losses for eleven kinds of building materials used in business office buildings and single-family dwellings.

  8. An open source platform for multi-scale spatially distributed simulations of microbial ecosystems

    SciTech Connect

    Segre, Daniel

    2014-08-14

    The goal of this project was to develop a tool for facilitating simulation, validation and discovery of multiscale dynamical processes in microbial ecosystems. This led to the development of an open-source software platform for Computation Of Microbial Ecosystems in Time and Space (COMETS). COMETS performs spatially distributed time-dependent flux balance based simulations of microbial metabolism. Our plan involved building the software platform itself, calibrating and testing it through comparison with experimental data, and integrating simulations and experiments to address important open questions on the evolution and dynamics of cross-feeding interactions between microbial species.

  9. Rapid, high-order accurate calculation of flows due to free source or vortex distributions

    NASA Technical Reports Server (NTRS)

    Halsey, D.

    1981-01-01

    Fast Fourier transform (FFT) techniques are applied to the problem of finding the flow due to source or vortex distributions in the field outside an airfoil or other two-dimensional body. Either the complex potential or the complex velocity may be obtained to a high order of accuracy, with computational effort similar to that required by second-order fast Poisson solvers. These techniques are applicable to general flow problems with compressibility and rotation. An example is given of their use for inviscid compressible flow.
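
    A generic illustration of the cost class referred to above: an FFT-based solve of del^2(phi) = sigma on a periodic 2-D grid, from which the induced velocity follows by differentiation. This is only a spectral Poisson sketch with an assumed Gaussian source blob, not the body-fitted expansion used in the paper.

        import numpy as np

        def poisson_fft_periodic(source, dx):
            """Solve del^2 phi = source on a periodic 2-D grid with FFTs (O(N log N))."""
            ny, nx = source.shape
            kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
            ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dx)
            k2 = kx[None, :] ** 2 + ky[:, None] ** 2
            k2[0, 0] = 1.0                      # avoid division by zero for the mean mode
            phi_hat = -np.fft.fft2(source) / k2
            phi_hat[0, 0] = 0.0                 # fix the arbitrary constant
            return np.real(np.fft.ifft2(phi_hat))

        n, dx = 128, 1.0 / 128
        x = (np.arange(n) + 0.5) * dx
        sigma = np.exp(-((x[None, :] - 0.5) ** 2 + (x[:, None] - 0.5) ** 2) / 0.01)
        phi = poisson_fft_periodic(sigma - sigma.mean(), dx)   # zero-mean source for periodicity
        dphi_dy, dphi_dx = np.gradient(phi, dx)                # components of the induced flow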

  10. Polycyclic aromatic hydrocarbons (PAHs) in the surficial sediments from Lake Iznik (Turkey): spatial distributions and sources.

    PubMed

    Unlü, Selma; Alpar, Bedri; Oztürk, Kurultay; Vardar, Denizhan

    2010-12-01

    The concentrations of 12 polycyclic aromatic hydrocarbons (PAHs) were determined in 28 sediment samples taken from Lake Iznik, located in north-western Turkey. The total concentration of PAHs was in the range of 17-835 ng g⁻¹ dry weight, with the highest values recorded offshore of the cities of Iznik and Orhangazi and off the Sölöz creek. According to the molecular indices, the PAH contamination in the lake was a mixture of atmospheric input from high-temperature pyrolytic processes and petrogenic sources transported by the creeks. Further, the high proportion of high-molecular-weight PAHs (>85%) suggests the dominance of combustion-related sources. Compared to the consensus-based sediment quality guidelines for PAHs, no short-term harmful biological effects on aquatic life are expected. PMID:21069284
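
    For reference, one of the isomer ratios commonly used in this kind of molecular-index screening is Fla/(Fla+Pyr); the thresholds below follow the widely cited Yunker et al. (2002) scheme and represent only one of several indices that such studies combine.

        def pah_source_hint(fluoranthene, pyrene):
            """Rough source class from the Fla/(Fla+Pyr) ratio (concentrations in the same units)."""
            ratio = fluoranthene / (fluoranthene + pyrene)
            if ratio < 0.40:
                return ratio, "petrogenic (unburned petroleum)"
            elif ratio <= 0.50:
                return ratio, "petroleum combustion"
            return ratio, "biomass/coal combustion"

        print(pah_source_hint(fluoranthene=120.0, pyrene=80.0))   # illustrative ng/g values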

  11. Spatial distribution of old and emerging flame retardants in Chinese forest soils: sources, trends and processes.

    PubMed

    Zheng, Qian; Nizzetto, Luca; Li, Jun; Mulder, Marie D; Sáňka, Ondřej; Lammel, Gerhard; Bing, Haijian; Liu, Xin; Jiang, Yishan; Luo, Chunling; Zhang, Gan

    2015-03-01

    The levels and distribution of polybrominated diphenyl ethers (PBDEs), novel brominated flame retardants (NBFRs) and Dechlorane Plus (DP) in soils, and their dependence on environmental and anthropogenic factors, were investigated in 159 soil samples from 30 background forested mountain sites across China. Decabromodiphenylethane (DBDPE) was the most abundant flame retardant (25-18,000 pg g(-1) and 5-13,000 pg g(-1) in the O-horizon and A-horizon, respectively), followed by BDE 209 (nd-5900 pg g(-1) and nd-2400 pg g(-1) in the O-horizon and A-horizon, respectively). FR distributions were primarily controlled by source distribution. The distributions of most phased-out PBDEs, DP isomers and TBPH were in fact correlated with a population-density-based index used as a proxy for areas with elevated usage and waste of FR-containing products. High concentrations of some NBFRs were, however, observed in industrialized regions and at FR manufacturing plants. Strong positive correlations were observed between PBDEs and their replacement products, suggesting similar emission patterns and environmental behavior. Exposure of mineral subsoils depended on precipitation driving the leaching of FRs into the soil core. This was especially evident for some emerging BFRs (TBE, TBPH, TBB, etc.), possibly indicating a potential for diffuse groundwater contamination. PMID:25661400

  12. Online data sources for regulation and remediation of chemical production, distribution, use and disposal

    SciTech Connect

    Snow, B.; Arnold, S.

    1995-12-01

    Environmental awareness is essential for today's corporations. Corporations have been held liable for the short-term and long-term effects of chemicals such as pharmaceuticals, agrochemicals and petrochemicals, to name a few. Furthermore, corporations have been held accountable for the disposal of wastes and by-products of chemical production. Responsibility for the environment, whether mandated by government agencies or assumed voluntarily, is an economic factor in business operations. Remediation of environmental hazards on a voluntary basis has often created goodwill and a payoff for being socially responsible. Remediation can also result in new business opportunities or savings in production costs. To be environmentally aware and socially responsible, the chemist should know where to find regulatory information for countries worldwide. Using online data sources is an efficient method of seeking this information.

  13. On Road Study of Colorado Front Range Greenhouse Gases Distribution and Sources

    NASA Astrophysics Data System (ADS)

    Petron, G.; Hirsch, A.; Trainer, M. K.; Karion, A.; Kofler, J.; Sweeney, C.; Andrews, A.; Kolodzey, W.; Miller, B. R.; Miller, L.; Montzka, S. A.; Kitzis, D. R.; Patrick, L.; Frost, G. J.; Ryerson, T. B.; Robers, J. M.; Tans, P.

    2008-12-01

    The Global Monitoring Division and Chemical Sciences Division of the NOAA Earth System Research Laboratory teamed up over the summer of 2008 to experiment with a new measurement strategy for characterizing greenhouse gas distributions and sources in the Colorado Front Range. Combining expertise in greenhouse gas measurements and in local- to regional-scale air quality intensive campaigns, we built the 'Hybrid Lab'. A continuous CO2 and CH4 cavity ring-down spectroscopic analyzer (Picarro, Inc.), a CO gas-filter correlation instrument (Thermo Environmental, Inc.) and a continuous UV absorption ozone monitor (2B Technologies, Inc., model 202SC) were installed securely onboard a 2006 Toyota Prius Hybrid vehicle, with an inlet bringing in outside air from a few meters above the ground. To better characterize point and distributed sources, air samples were taken with a Portable Flask Package (PFP) for later multiple-species analysis in the lab. A GPS unit hooked up to the ozone analyzer and another installed on the PFP kept track of our location, allowing us to map measured concentrations along the driving route using Google Earth. The Hybrid Lab went out for several drives in the vicinity of the NOAA Boulder Atmospheric Observatory (BAO) tall tower located in Erie, CO, covering areas around Boulder, Denver, Longmont, Fort Collins and Greeley. Enhancements in CO2 and CO and destruction of ozone mainly reflect emissions from traffic. Methane enhancements, however, are clearly correlated with nearby point sources (landfills, feedlots, natural gas compressors, ...) or with larger-scale air masses advected from northeastern Colorado, where oil and gas drilling operations are widespread. The multiple-species analysis (hydrocarbons, CFCs, HFCs) of the air samples collected along the way brings insightful information about the methane sources at play. We will present results of the analysis and interpretation of the Hybrid Lab Front Range Study and conclude with perspectives

  14. The occurrence and distribution of a group of organic micropollutants in Mexico City's water sources.

    PubMed

    Félix-Cañedo, Thania E; Durán-Álvarez, Juan C; Jiménez-Cisneros, Blanca

    2013-06-01

    The occurrence and distribution of a group of 17 organic micropollutants in surface and groundwater sources of Mexico City was determined. Water samples were taken from 7 wells, 4 dams and 15 tanks where surface water and groundwater are mixed and stored before distribution. The results showed the occurrence of seven of the target compounds in groundwater: salicylic acid, diclofenac, di-2-ethylhexylphthalate (DEHP), butylbenzylphthalate (BBP), triclosan, bisphenol A (BPA) and 4-nonylphenol (4-NP). In surface water, 11 target pollutants were detected: the same as those found in groundwater, as well as naproxen, ibuprofen, ketoprofen and gemfibrozil. In groundwater, the concentration ranges of salicylic acid, 4-NP and DEHP, the most frequently found compounds, were 1-464, 1-47 and 19-232 ng/L, respectively, while in surface water these ranges were 29-309, 89-655 and 75-2,282 ng/L, respectively. Eleven target compounds were detected in mixed water. Concentrations in mixed water were higher than those determined in groundwater but lower than those detected in surface water. Unlike in groundwater and surface water, the pesticide 2,4-D was found in mixed water, indicating that some pollutants can reach areas where they are not originally present in the local water sources. The concentrations of the organic micropollutants found in this study were similar to or lower than those reported in water sources from developed countries. This study provides information that enriches the state of the art on the occurrence of organic micropollutants in water sources worldwide, notably in megacities of developing countries. PMID:23542484

  15. [Recent Distribution and Sources of Polycyclic Aromatic Hydrocarbons in Surface Soils from Yangtze River Delta].

    PubMed

    Li, Jing-ya; Wu, Di; Xu, Yun-song; Li, Xiang-dong; Wang, Xi-long; Zeng, Chao-hua; Fu, Xiao-fang; Liu, Wen-xin

    2016-01-15

    A total of 243 surface soil samples collected from 11 cities in the Yangtze River Delta region were analyzed for the concentrations, spatial distribution, component profiles and emission sources of 29 PAH species. The analytical results indicated that the total concentrations of PAHs in the Yangtze River Delta fell in the range from 21.0 ng x g(-1) to 3578.5 ng x g(-1), with an arithmetic mean and standard deviation of 310.6 ng x g(-1) and 459.1 ng x g(-1), respectively. Our data showed that the spatial distribution of PAH concentrations varied greatly across the region. In addition, the contents of PAHs were positively correlated with the total organic carbon fractions in topsoil. The sites with the highest levels of PAHs among the 11 cities studied were located in Suzhou, with 759.0 ng x g(-1) +/- 132.9 ng x g(-1), followed by the areas of Wuxi and Shanghai, with total PAH concentrations of 565.3 ng x g(-1) +/- 705.5 ng x g(-1) and 349.4 ng x g(-1) +/- 220.1 ng x g(-1), respectively. The profiles of the different components pointed to a predominant role of species with 2-4 rings, especially the low-molecular-weight components with 2-3 rings. A preliminary identification of the emission sources of local PAHs was performed using specific ratios of isomeric species and principal component analysis (PCA). The results designated industrial coal combustion and biomass combustion as the main mixed emission sources of PAHs in surface soils from the Yangtze River Delta, and exhaust from transport as another major source in some areas. PMID:27078965

  16. Nonradioactive Environmental Emissions Chemical Source Term for the Double Shell Tank (DST) Vapor Space During Waste Retrieval Operations

    SciTech Connect

    MAY, T.H.

    2000-04-21

    A nonradioactive chemical vapor space source term for tanks on the Phase 1 and the extended Phase 1 delivery, storage, and disposal mission was determined. Operations modeled included mixer pump operation and DST waste transfers. Concentrations of ammonia, specific volatile organic compounds, and quantitative volumes of aerosols were estimated.

  17. Use of WIMS-E lattice code for prediction of the transuranic source term for spent fuel dose estimation

    SciTech Connect

    Schwinkendorf, K.N.

    1996-04-15

    A recent source term analysis has shown a discrepancy between ORIGEN2 transuranic isotopic production estimates and those produced with the WIMS-E lattice physics code. Excellent agreement between relevant experimental measurements and WIMS-E was shown, thus exposing an error in the cross section library used by ORIGEN2.

  18. Hanford tank residual waste – contaminant source terms and release models

    SciTech Connect

    Deutsch, William J.; Cantrell, Kirk J.; Krupka, Kenneth M.; Lindberg, Michael J.; Serne, R. Jeffrey

    2011-08-23

    Residual waste is expected to be left in 177 underground storage tanks after closure at the U.S. Department of Energy's Hanford Site in Washington State (USA). In the long term, the residual wastes represent a potential source of contamination to the subsurface environment. Residual materials that cannot be completely removed during the tank closure process are being studied to identify and characterize the solid phases and to estimate the release of contaminants from these solids to water that might enter the closed tanks in the future. As of the end of 2009, residual waste from five tanks has been evaluated. Residual wastes from adjacent tanks C-202 and C-203 have high U concentrations of 24 and 59 wt%, respectively, while residual wastes from nearby tanks C-103 and C-106 have low U concentrations of 0.4 and 0.03 wt%, respectively. Aluminum concentrations are high (8.2 to 29.1 wt%) in some tanks (C-103, C-106, and S-112) and relatively low (<1.5 wt%) in other tanks (C-202 and C-203). Gibbsite is a common mineral in tanks with high Al concentrations, while non-crystalline U-Na-C-O-P±H phases are common in the U-rich residual wastes from tanks C-202 and C-203. Iron oxides/hydroxides have been identified in all residual waste samples studied to date. Contaminant release from the residual wastes was studied by conducting batch leach tests using distilled deionized water, a Ca(OH)2-saturated solution, or a CaCO3-saturated solution. Uranium release concentrations are highly dependent on waste and leachant compositions, with dissolved U concentrations one or two orders of magnitude higher in the tests with high-U residual wastes, and also higher when leached with the CaCO3-saturated solution than with the Ca(OH)2-saturated solution. Technetium leachability is not as strongly dependent on the concentration of Tc in the waste, and Tc appears to be slightly more leachable by the Ca(OH)2-saturated solution than by the CaCO3-saturated solution. In general, Tc is much less

  19. Concordance between distributed EEG source localization and simultaneous EEG-fMRI studies of epileptic spikes.

    PubMed

    Grova, C; Daunizeau, J; Kobayashi, E; Bagshaw, A P; Lina, J-M; Dubeau, F; Gotman, J

    2008-01-15

    In order to analyze where epileptic spikes are generated, we assessed the level of concordance between EEG source localization using distributed source models and simultaneous EEG-fMRI, which measures the hemodynamic correlates of EEG activity. Data to be compared were first estimated on the same cortical surface, and two comparison strategies were used: (1) MEM-concordance: a comparison between EEG sources localized with the Maximum Entropy on the Mean (MEM) method and fMRI clusters showing a significant hemodynamic response. Minimal geodesic distances between local extrema and overlap measurements between the spatial extents of EEG sources and fMRI clusters were used to quantify MEM-concordance. (2) fMRI-relevance: estimation of the fMRI-relevance index alpha, quantifying whether sources located in an fMRI cluster could explain some of the scalp EEG data when this fMRI cluster was used to constrain the EEG inverse problem. Combining the MEM-concordance and fMRI-relevance (alpha) indexes, each fMRI cluster showing a significant hemodynamic response (p<0.05 corrected) was classified according to its concordance with the EEG data. Nine patients with focal epilepsy who underwent EEG-fMRI examination followed by EEG recording outside the scanner were selected for this study. Among the 62 fMRI clusters analyzed (7 patients), 15 (24%) found in 6 patients were highly concordant with EEG according to both MEM-concordance and fMRI-relevance. EEG concordance was found for 5 clusters (8%) according to alpha only, suggesting sources missed by the MEM. No concordance with EEG was found for 30 clusters (48%), and for 10 clusters (16%) alpha was significantly negative, suggesting EEG-fMRI discordance. We proposed two complementary strategies to assess and classify EEG-fMRI concordance. We showed that for most patients, part of the hemodynamic response to spikes was highly concordant with the EEG sources, whereas other fMRI clusters responding to the same spikes were found to be distant or discordant with the EEG
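
    One simple way to express the overlap part of such a concordance measure is a Dice coefficient between a thresholded EEG source map and a binary fMRI cluster defined on the same cortical mesh; the sketch below is a generic illustration with synthetic vertex data, not the exact metric of the paper, which also relies on geodesic distances between local extrema.

        import numpy as np

        def dice_overlap(eeg_map, fmri_cluster, eeg_threshold):
            """Dice coefficient between |EEG source map| >= threshold and a binary fMRI cluster."""
            eeg_bin = np.abs(eeg_map) >= eeg_threshold
            fmri_bin = fmri_cluster.astype(bool)
            inter = np.logical_and(eeg_bin, fmri_bin).sum()
            return 2.0 * inter / (eeg_bin.sum() + fmri_bin.sum())

        rng = np.random.default_rng(5)
        n_vertices = 5000
        eeg = rng.normal(size=n_vertices)
        eeg[1000:1200] += 5.0                            # a synthetic "source patch"
        fmri = np.zeros(n_vertices, dtype=int)
        fmri[1050:1300] = 1                              # an overlapping "cluster"
        print(f"Dice overlap: {dice_overlap(eeg, fmri, eeg_threshold=3.0):.2f}")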

  20. Observational manifestations and intrinsic properties of the RCR sources in terms of a unified model

    NASA Astrophysics Data System (ADS)

    Zhelenkova, O. P.; Majorova, E. K.

    2016-04-01

    We present a summary of the results of a study of radio sources showing significant variations in integrated flux density, based on data from the RATAN-600 surveys of 1980-1994 at a wavelength of 7.6 cm. The majority of the detected variable sources have flat radio spectra, although all other spectrum types are also present. Point-like and compact sources predominate, although all known morphological structures are found in the sample. Variability is detected both in quasars and in galaxies. Using catalog data, we found brightness variations in the optical and/or infrared ranges for half of the host objects of the radio sources. We analyzed the properties of nonvariable and variable RCR sources. We compared the ratio of absolute magnitude to radio luminosity for sources with active nucleus types determined from the optical data. This parameter is found to be approximately the same for quasars of different radio luminosity. It is lowest for the strongest radio galaxies and grows, as radio luminosity decreases, up to the level characteristic of quasars. Considering that the ratio depends on the obscuring properties of a dust torus, such behavior can be explained if we assume that the torus geometry and its optical depth depend on the source luminosity. This parameter is slightly higher among variable sources than among nonvariable ones, which argues for the nucleus being more open to the observer in the variable sources.