Sample records for source term calculations

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    J.C. Ryman

    This calculation is a revision of a previous calculation (Ref. 7.5) that bears the same title and has the document identifier BBAC00000-01717-0210-00006 REV 01. The purpose of this revision is to remove TBV (to-be-verified)-4110 associated with the output files of the previous version (Ref. 7.30). The purpose of this and the previous calculation is to generate source terms for a representative boiling water reactor (BWR) spent nuclear fuel (SNF) assembly for the first one million years after the SNF is discharged from the reactors. This calculation includes an examination of several ways to represent BWR assemblies and operating conditions in SAS2H in order to quantify the effects these representations may have on source terms. These source terms provide information characterizing the neutron and gamma spectra in particles per second, the decay heat in watts, and radionuclide inventories in curies. Source terms are generated for a range of burnups and enrichments (see Table 2) that are representative of the waste stream and stainless steel (SS) clad assemblies. During this revision, it was determined that the burnups used for the computer runs of the previous revision were actually about 1.7% less than the stated, or nominal, burnups. See Section 6.6 for a discussion of how to account for this effect before using any source terms from this calculation. The source term due to the activation of corrosion products deposited on the surfaces of the assembly from the coolant is also calculated. The results of this calculation support many areas of the Monitored Geologic Repository (MGR), including thermal evaluation, radiation dose determination, radiological safety analyses, surface and subsurface facility designs, and total system performance assessment. This includes MGR items classified as Quality Level 1, for example, the Uncanistered Spent Nuclear Fuel Disposal Container (Ref. 7.27, page 7). Therefore, this calculation is subject to the requirements of the Quality Assurance Requirements and Description (Ref. 7.28). The performance of the calculation and the development of this document are carried out in accordance with AP-3.12Q, "Design Calculations and Analyses" (Ref. 7.29).
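
    The Section 6.6 point above amounts to a simple correction: a tabulated source term should be looked up at the burnup actually modeled (about 1.7% below nominal) rather than at the stated value. A minimal sketch of that bookkeeping, with illustrative numbers rather than values from Ref. 7.30:

        import bisect

        def corrected_burnup(nominal_gwd_mtu, shortfall=0.017):
            """Burnup actually modeled when the input deck stated the nominal value."""
            return nominal_gwd_mtu * (1.0 - shortfall)

        def interp_source_term(burnup, table):
            """Linear interpolation of a tabulated source-term quantity vs. burnup."""
            bus = sorted(table)
            i = bisect.bisect_left(bus, burnup)
            if i == 0:
                return table[bus[0]]
            if i == len(bus):
                return table[bus[-1]]
            b0, b1 = bus[i - 1], bus[i]
            f = (burnup - b0) / (b1 - b0)
            return table[b0] + f * (table[b1] - table[b0])

        # Illustrative decay-heat table (W/assembly) vs. nominal burnup (GWd/MTU):
        heat = {25.0: 180.0, 35.0: 265.0, 45.0: 350.0}
        print(interp_source_term(corrected_burnup(35.0), heat))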

  2. BWR ASSEMBLY SOURCE TERMS FOR WASTE PACKAGE DESIGN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    T.L. Lotz

    1997-02-15

    This analysis is prepared by the Mined Geologic Disposal System (MGDS) Waste Package Development Department (WPDD) to provide boiling water reactor (BWR) assembly radiation source term data for use during Waste Package (WP) design. The BWR assembly radiation source terms are to be used for evaluation of radiolysis effects at the WP surface, and for personnel shielding requirements during assembly or WP handling operations. The objectives of this evaluation are to generate BWR assembly radiation source terms that bound selected groupings of BWR assemblies, with regard to assembly average burnup and cooling time, which comprise the anticipated MGDS BWR commercial spent nuclear fuel (SNF) waste stream. The source term data is to be provided in a form which can easily be utilized in subsequent shielding/radiation dose calculations. Since these calculations may also be used for Total System Performance Assessment (TSPA), with appropriate justification provided by TSPA, or radionuclide release rate analysis, the grams of each element and additional cooling times out to 25 years will also be calculated and the data included in the output files.

  3. Shielding calculation and criticality safety analysis of spent fuel transportation cask in research reactors.

    PubMed

    Mohammadi, A; Hassanzadeh, M; Gharib, M

    2016-02-01

    In this study, shielding calculations and a criticality safety analysis were carried out for the interim storage and associated transportation cask of general material testing reactor (MTR) research reactors. During these processes, three major terms were considered: source term, shielding, and criticality calculations. The Monte Carlo transport code MCNP5 was used for the shielding calculation and criticality safety analysis, and the ORIGEN2.1 code for the source term calculation. According to the results obtained, a cylindrical cask with body, top, and bottom thicknesses of 18, 13, and 13 cm, respectively, was accepted as the dual-purpose cask. Furthermore, it is shown that the total dose rates are below the normal transport criteria and meet the specified standards.

  4. Source terms, shielding calculations and soil activation for a medical cyclotron.

    PubMed

    Konheiser, J; Naumann, B; Ferrari, A; Brachem, C; Müller, S E

    2016-12-01

    Calculations of the shielding and estimates of soil activation for a medical cyclotron are presented in this work. Based on the neutron source term from the ¹⁸O(p,n)¹⁸F reaction produced by a 28 MeV proton beam, neutron and gamma dose rates outside the building were estimated with the Monte Carlo code MCNP6 (Goorley et al 2012 Nucl. Technol. 180 298-315). The neutron source term was calculated with the MCNP6 and FLUKA (Ferrari et al 2005 INFN/TC_05/11, SLAC-R-773) codes as well as with data supplied by the manufacturer. The MCNP and FLUKA calculations yielded comparable results, while the neutron yield obtained using the manufacturer-supplied information is about a factor of 5 smaller. The difference is attributed to missing channels in the manufacturer-supplied neutron source term, which considers only the ¹⁸O(p,n)¹⁸F reaction, whereas the MCNP and FLUKA calculations include additional neutron reaction channels. Soil activation was calculated using the FLUKA code. The estimated dose rate based on the MCNP6 calculations in the public area is about 0.035 µSv h⁻¹ and thus significantly below the reference value of 0.5 µSv h⁻¹ (2011 Strahlenschutzverordnung, 9. Auflage vom 01.11.2011, Bundesanzeiger Verlag). After 5 years of continuous beam operation and a subsequent decay time of 30 d, the activity concentration of the soil is about 0.34 Bq g⁻¹.
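
    The 5-year-operation, 30-day-decay soil activity quoted above follows the standard buildup-and-decay bookkeeping for a nuclide produced at a constant rate. A minimal sketch, with a hypothetical production rate and nuclide, not the paper's FLUKA output:

        import math

        def activity_bq(prod_rate_atoms_s, half_life_s, t_irr_s, t_cool_s):
            """Saturation buildup during irradiation, then free decay."""
            lam = math.log(2.0) / half_life_s
            return (prod_rate_atoms_s * (1.0 - math.exp(-lam * t_irr_s))
                    * math.exp(-lam * t_cool_s))

        YEAR, DAY = 3.156e7, 8.64e4
        # e.g. 22Na in soil (half-life ~2.6 y), hypothetical production rate:
        print(activity_bq(1.0e4, 2.6 * YEAR, 5 * YEAR, 30 * DAY))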

  5. The exact calculation of quadrupole sources for some incompressible flows

    NASA Technical Reports Server (NTRS)

    Brentner, Kenneth S.

    1988-01-01

    This paper is concerned with the application of the acoustic analogy of Lighthill to the acoustic and aerodynamic problems associated with moving bodies. The Ffowcs Williams-Hawkings equation, which is an interpretation of the acoustic analogy for sound generation by moving bodies, rearranges the source terms into surface and volume sources. Quite often in practice the volume sources, or quadrupoles, are neglected for various reasons. Recently, Farassat, Long, and others have attempted to use the FW-H equation, with the quadrupole source neglected, to solve for the surface pressure on the body. The purpose of this paper is to examine the contribution of the quadrupole source to the acoustic pressure and body surface pressure for some problems for which the exact solution is known. The inviscid, incompressible, 2-D flow, calculated using the velocity potential, is used to calculate the individual contributions of the various surface and volume source terms in the FW-H equation. The relative importance of each of the sources is then assessed.
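
    For reference, the surface/volume split discussed above has the standard textbook form (stated here from the general literature, not from this paper):

        \left(\frac{1}{c^2}\frac{\partial^2}{\partial t^2}-\nabla^2\right)p'(\mathbf{x},t)
          = \frac{\partial}{\partial t}\left[\rho_0 v_n\,\delta(f)\right]
          - \frac{\partial}{\partial x_i}\left[\ell_i\,\delta(f)\right]
          + \frac{\partial^2}{\partial x_i\,\partial x_j}\left[T_{ij}\,H(f)\right],

    where f = 0 defines the body surface, the first two (surface) terms are the thickness and loading sources, and the last (volume) term contains the Lighthill stress tensor T_ij, the quadrupole source examined in the paper.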

  6. Radiological analysis of plutonium glass batches with natural/enriched boron

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rainisch, R.

    2000-06-22

    The disposition of surplus plutonium inventories by the US Department of Energy (DOE) includes the immobilization of certain plutonium materials in a borosilicate glass matrix, also referred to as vitrification. This paper addresses source terms of plutonium masses immobilized in a borosilicate glass matrix where the glass components include both natural boron and enriched boron. The calculated source terms pertain to neutron and gamma source strength (particles per second), and source spectrum changes. The calculated source terms corresponding to natural boron and enriched boron are compared to determine the benefits (decrease in radiation source terms) from the use of enriched boron. The analysis of plutonium glass source terms shows that a large component of the neutron source terms is due to (α,n) reactions. The americium-241 and plutonium present in the glass emit alpha particles (α). These alpha particles interact with low-Z nuclides like B-11, B-10, and O-17 in the glass to produce neutrons. The low-Z nuclides are referred to as target particles. The reference glass contains 9.4 wt percent B₂O₃. Boron-11 was found to strongly support the (α,n) reactions in the glass matrix. B-11 has a natural abundance of over 80 percent. The (α,n) reaction rates for B-10 are lower than for B-11, and the analysis shows that the plutonium glass neutron source terms can be reduced by artificially enriching natural boron with B-10. The natural abundance of B-10 is 19.9 percent. Boron enriched to 96 wt percent B-10 or above can be obtained commercially. Since lower source terms imply lower dose rates to radiation workers handling the plutonium glass materials, it is important to know the achievable decrease in source terms as a result of boron enrichment. Plutonium materials are normally handled in glove boxes with shielded glass windows and the work entails both extremity and whole-body exposures. Lowering the source terms of the plutonium batches will make the handling of these materials less difficult and will reduce radiation exposure to operating workers.
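
    To first order, the enrichment benefit described above is an abundance-weighted sum of per-nuclide (α,n) yields over the boron isotopes. A minimal sketch; the relative yields below are placeholders chosen only to respect the ordering stated in the abstract (B-11 stronger than B-10), not the report's values:

        def relative_an_yield(frac_b10, y_b10=0.3, y_b11=1.0):
            """Relative boron (alpha,n) yield for a given B-10 atom fraction."""
            return frac_b10 * y_b10 + (1.0 - frac_b10) * y_b11

        natural = relative_an_yield(0.199)   # natural boron, 19.9 at% B-10
        enriched = relative_an_yield(0.96)   # 96 at% B-10 product
        print(f"enriched/natural boron (alpha,n) yield ratio: {enriched / natural:.2f}")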

  7. Recent skyshine calculations at Jefferson Lab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Degtyarenko, P.

    1997-12-01

    New calculations of the skyshine dose distribution of neutrons and secondary photons have been performed at Jefferson Lab using the Monte Carlo method. The dose dependence on neutron energy, distance to the neutron source, polar angle of a source neutron, and azimuthal angle between the observation point and the momentum direction of a source neutron have been studied. The azimuthally asymmetric term in the skyshine dose distribution is shown to be important in the dose calculations around high-energy accelerator facilities. A parameterization formula and corresponding computer code have been developed which can be used for detailed calculations of the skyshine dose maps.

  8. Bremsstrahlung Dose Yield for High-Intensity Short-Pulse Laser–Solid Experiments

    DOE PAGES

    Liang, Taiee; Bauer, Johannes M.; Liu, James C.; ...

    2016-12-01

    A bremsstrahlung source term has been developed by the Radiation Protection (RP) group at SLAC National Accelerator Laboratory for high-intensity short-pulse laser–solid experiments between 10¹⁷ and 10²² W cm⁻². This source term couples the particle-in-cell plasma code EPOCH and the radiation transport code FLUKA to estimate the bremsstrahlung dose yield from laser–solid interactions. EPOCH characterizes the energy distribution, angular distribution, and laser-to-electron conversion efficiency of the hot electrons from laser–solid interactions, and FLUKA utilizes this hot electron source term to calculate a bremsstrahlung dose yield (mSv per J of laser energy on target). The goal of this paper is to provide RP guidelines and hazard analysis for high-intensity laser facilities. Finally, a comparison of the calculated bremsstrahlung dose yields to radiation measurement data is also made.
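
    The hot-electron characterization step above can be approximated, for orientation, by the ponderomotive (Wilks) temperature scaling; this stand-in is an assumption of this sketch, not the EPOCH fit used in the paper:

        import math

        def a0(intensity_w_cm2, wavelength_um=1.0):
            """Normalized laser vector potential (linear polarization)."""
            return math.sqrt(intensity_w_cm2 * wavelength_um**2 / 1.37e18)

        def t_hot_mev(intensity_w_cm2, wavelength_um=1.0):
            """Wilks ponderomotive hot-electron temperature, MeV."""
            a = a0(intensity_w_cm2, wavelength_um)
            return 0.511 * (math.sqrt(1.0 + a**2 / 2.0) - 1.0)

        for intensity in (1e17, 1e19, 1e22):
            print(f"I = {intensity:.0e} W/cm2 -> T_hot ~ {t_hot_mev(intensity):.2f} MeV")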

  9. Computation of high Reynolds number internal/external flows

    NASA Technical Reports Server (NTRS)

    Cline, M. C.; Wilmoth, R. G.

    1981-01-01

    A general, user oriented computer program, called VNAP2, has been developed to calculate high Reynolds number, internal/external flows. VNAP2 solves the two-dimensional, time-dependent Navier-Stokes equations. The turbulence is modeled with either a mixing-length, a one transport equation, or a two transport equation model. Interior grid points are computed using the explicit MacCormack scheme with special procedures to speed up the calculation in the fine grid. All boundary conditions are calculated using a reference plane characteristic scheme with the viscous terms treated as source terms. Several internal, external, and internal/external flow calculations are presented.

  10. Computation of high Reynolds number internal/external flows

    NASA Technical Reports Server (NTRS)

    Cline, M. C.; Wilmoth, R. G.

    1981-01-01

    A general, user oriented computer program, called VNAP2, was developed to calculate high Reynolds number, internal/external flows. The VNAP2 program solves the two-dimensional, time-dependent Navier-Stokes equations. The turbulence is modeled with either a mixing-length, a one transport equation, or a two transport equation model. Interior grid points are computed using the explicit MacCormack scheme with special procedures to speed up the calculation in the fine grid. All boundary conditions are calculated using a reference plane characteristic scheme with the viscous terms treated as source terms. Several internal, external, and internal/external flow calculations are presented.

  11. Computation of high Reynolds number internal/external flows

    NASA Technical Reports Server (NTRS)

    Cline, M. C.; Wilmoth, R. G.

    1981-01-01

    A general, user oriented computer program, called VNAP2, developed to calculate high Reynolds number internal/external flows is described. The program solves the two-dimensional, time-dependent Navier-Stokes equations. Turbulence is modeled with either a mixing-length, a one transport equation, or a two transport equation model. Interior grid points are computed using the explicit MacCormack scheme with special procedures to speed up the calculation in the fine grid. All boundary conditions are calculated using a reference plane characteristic scheme with the viscous terms treated as source terms. Several internal, external, and internal/external flow calculations are presented.

  12. Hydraulic transients: a seismic source in volcanoes and glaciers.

    PubMed

    Lawrence, W S; Qamar, A

    1979-02-16

    A source for certain low-frequency seismic waves is postulated in terms of the water hammer effect. The time-dependent displacement of a water-filled sub-glacial conduit is analyzed to demonstrate the nature of the source. Preliminary energy calculations and the observation of hydraulically generated seismic radiation from a dam indicate the plausibility of the proposed source.
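
    The magnitude of the water-hammer transient invoked above can be sized with the Joukowsky relation, delta_p = rho * c * delta_v, the standard result for sudden arrest of flow in a conduit; the numbers below are illustrative, not taken from the paper:

        rho = 1000.0     # water density, kg/m^3
        c = 1400.0       # pressure-wave speed in a water-filled conduit, m/s
        delta_v = 2.0    # arrested flow velocity, m/s
        delta_p = rho * c * delta_v
        print(f"Joukowsky pressure jump: {delta_p / 1e6:.1f} MPa")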

  13. Fission Product Appearance Rate Coefficients in Design Basis Source Term Determinations - Past and Present

    NASA Astrophysics Data System (ADS)

    Perez, Pedro B.; Hamawi, John N.

    2017-09-01

    Nuclear power plant radiation protection design features are based on radionuclide source terms derived from conservative assumptions that envelope expected operating experience. Two parameters that significantly affect the radionuclide concentrations in the source term are the failed fuel fraction and the effective fission product appearance rate coefficients. The failed fuel fraction may be a regulatory based assumption, such as in the U.S. Appearance rate coefficients are not specified in regulatory requirements, but have been referenced to experimental data that is over 50 years old. No doubt the source terms are conservative, as demonstrated by operating experience that has included failed fuel, but they may be too conservative, leading to over-designed shielding for normal operations as an example. Design basis source term methodologies for normal operations had not advanced until 2015, when EPRI published an updated ANSI/ANS-18.1 source term basis document. Our paper revisits the fission product appearance rate coefficients as applied in the derivation of source terms following the original U.S. NRC NUREG-0017 methodology. New coefficients have been calculated based on recent EPRI results, which demonstrate the conservatism in nuclear power plant shielding design.

  14. Far-Field Accumulation of Fissile Material From Waste Packages Containing Plutonium Disposition Waste Form

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J.P. Nicot

    The objective of this calculation is to estimate the quantity of fissile material that could accumulate in fractures in the rock beneath plutonium-ceramic (Pu-ceramic) and mixed-oxide (MOX) waste packages (WPs) as they degrade in the potential monitored geologic repository at Yucca Mountain. This calculation feeds another calculation (Ref. 31), which computes the probability of criticality in the systems described in Section 6, and ultimately a more general report on the impact of plutonium on the performance of the proposed repository (Ref. 32), both developed concurrently with this work. This calculation is done in accordance with the development plan TDP-DDC-MD-000001 (Ref. 9), item 5. The original document described in item 5 has been split into two documents: this calculation and Ref. 4. The scope of the calculation is limited to only very low flow rates because they lead to the most conservative cases for Pu accumulation and, more generally, are consistent with the way the effluent from the WP (called the source term in this calculation) was calculated (Ref. 4). Ref. 4 ("In-Drift Accumulation of Fissile Material from WPs Containing Plutonium Disposition Waste Forms") details the evolution through time (breach time is the initial time) of the chemical composition of the solution inside the WP as degradation of the fuel and other materials proceeds. It is this chemical solution that is used as the source term in this calculation. Ref. 4 takes that same source term and reacts it with the invert; this calculation reacts it with the rock. In addition to reactions with the rock minerals (which release Si and Ca), the basic mechanisms for actinide precipitation are dilution and mixing with resident water, as explained in Section 2.1.4. No other potential mechanism, such as flow through a reducing zone, is investigated in this calculation. No attempt was made to use the effluent water from the bottom of the invert instead of using directly the effluent water from the WP. This calculation supports disposal criticality analysis and has been prepared in accordance with AP-3.12Q, Calculations (Ref. 49). This calculation uses results from Ref. 4 on actinide accumulation in the invert and, more generally, references the cited calculation heavily. In addition to the information provided in this calculation, the reader is referred to the cited calculation for a more thorough treatment of items applying to both the invert and the fracture system, such as the choice of the thermodynamic database, the composition of J-13 well water, tuff composition, dissolution rate laws, Pu(OH)₄ solubility, and details of the source term composition. The flow conditions (seepage rate, water velocity in fractures) in the drift and the fracture system beneath initially referred to the TSPA-VA because this work was prepared before the release of the work feeding the TSPA-SR. Some new information feeding the TSPA-SR has since been included. Similarly, the soon-to-be-qualified thermodynamic database data0.ymp has not been released yet.

  15. Analysis of neutron and gamma-ray streaming along the maze of NRCAM thallium production target room.

    PubMed

    Raisali, G; Hajiloo, N; Hamidi, S; Aslani, G

    2006-08-01

    The shielding performance of a thallium-203 production target room has been investigated in this work. Neutron and gamma-ray equivalent dose rates at various points of the maze are calculated by simulating the transport of streaming neutrons and photons using the Monte Carlo method. For determination of the neutron and gamma-ray source intensities and their energy spectra, we have applied the SRIM 2003 and ALICE91 computer codes to the Tl target and its Cu substrate for a 145 µA beam of 28.5 MeV protons. The MCNP/4C code has been applied with the neutron source term in mode n p to consider both prompt neutrons and secondary gamma-rays. The code was then applied with the prompt gamma-rays as the source term. The neutron-flux energy spectrum and the equivalent dose rates for neutrons and gamma-rays at various positions in the maze have been calculated. It has been found that the deviation between calculated and measured dose values along the maze is less than 20%.

  16. Advanced Reactor PSA Methodologies for System Reliability Analysis and Source Term Assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grabaskas, D.; Brunett, A.; Passerini, S.

    Beginning in 2015, a project was initiated to update and modernize the probabilistic safety assessment (PSA) of the GE-Hitachi PRISM sodium fast reactor. This project is a collaboration between GE-Hitachi and Argonne National Laboratory (Argonne), and funded in part by the U.S. Department of Energy. Specifically, the role of Argonne is to assess the reliability of passive safety systems, complete a mechanistic source term calculation, and provide component reliability estimates. The assessment of passive system reliability focused on the performance of the Reactor Vessel Auxiliary Cooling System (RVACS) and the inherent reactivity feedback mechanisms of the metal fuel core. The mechanistic source term assessment attempted to provide a sequence specific source term evaluation to quantify offsite consequences. Lastly, the reliability assessment focused on components specific to the sodium fast reactor, including electromagnetic pumps, intermediate heat exchangers, the steam generator, and sodium valves and piping.

  17. Fiber-based polarization-sensitive Mueller matrix optical coherence tomography with continuous source polarization modulation.

    PubMed

    Jiao, Shuliang; Todorović, Milos; Stoica, George; Wang, Lihong V

    2005-09-10

    We report on a new configuration of fiber-based polarization-sensitive Mueller matrix optical coherence tomography that permits the acquisition of the round-trip Jones matrix of a biological sample using only one light source and a single depth scan. In this new configuration, a polarization modulator is used in the source arm to continuously modulate the incident polarization state for both the reference and the sample arms. The Jones matrix of the sample can be calculated from the two frequency terms in the two detection channels. The first term is modulated by the carrier frequency, which is determined by the longitudinal scanning mechanism, whereas the other term is modulated by the beat frequency between the carrier frequency and the second harmonic of the modulation frequency of the polarization modulator. One important feature of this system is that, for the first time to our knowledge, the Jones matrix of the sample can be calculated with a single detection channel and a single measurement when diattenuation is negligible. The system was successfully tested by imaging both standard polarization elements and biological samples.

  18. Watershed nitrogen and phosphorus balance: The upper Potomac River basin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jaworski, N.A.; Groffman, P.M.; Keller, A.A.

    1992-01-01

    Nitrogen and phosphorus mass balances were estimated for the portion of the Potomac River basin watershed located above Washington, D.C. The total nitrogen (N) balance included seven input source terms, six sinks, and one 'change-in-storage' term, but was simplified to five input terms and three output terms. The phosphorus (P) balance had four input and three output terms. The estimated balances are based on watershed data from seven information sources. Major sources of nitrogen are animal waste and atmospheric deposition. The major sources of phosphorus are animal waste and fertilizer. The major sink for nitrogen is combined denitrification, volatilization, and change-in-storage. The major sink for phosphorus is change-in-storage. River exports of N and P were 17% and 8%, respectively, of the total N and P inputs. Over 60% of the N and P were volatilized or stored. The major input and output terms in the budget are estimated from direct measurements, but the change-in-storage term is calculated by difference. The factors regulating retention and storage processes are discussed and research needs are identified.
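
    The "calculated by difference" step above is plain budget arithmetic: sum the measured inputs, subtract the measured outputs, and attribute the remainder to storage and unmeasured sinks. A minimal sketch with placeholder numbers, not the Potomac estimates:

        inputs = {"animal_waste": 40.0, "atmospheric_deposition": 25.0,
                  "fertilizer": 20.0, "point_sources": 10.0, "other": 5.0}   # kt N/yr
        outputs = {"river_export": 17.0, "harvest": 20.0}                    # kt N/yr

        residual = sum(inputs.values()) - sum(outputs.values())   # storage + unmeasured sinks
        export_fraction = outputs["river_export"] / sum(inputs.values())
        print(f"change-in-storage residual: {residual:.0f} kt N/yr")
        print(f"river export fraction: {export_fraction:.0%}")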

  19. Incorporation of an Energy Equation into a Pulsed Inductive Thruster Performance Model

    NASA Technical Reports Server (NTRS)

    Polzin, Kurt A.; Reneau, Jarred P.; Sankaran, Kameshwaran

    2011-01-01

    A model for pulsed inductive plasma acceleration containing an energy equation to account for the various sources and sinks in such devices is presented. The model consists of a set of circuit equations coupled to an equation of motion and energy equation for the plasma. The latter two equations are obtained for the plasma current sheet by treating it as a one-element finite volume, integrating the equations over that volume, and then matching known terms or quantities already calculated in the model to the resulting current sheet-averaged terms in the equations. Calculations showing the time-evolution of the various sources and sinks in the system are presented to demonstrate the efficacy of the model, with two separate resistivity models employed to show an example of how the plasma transport properties can affect the calculation. While neither resistivity model is fully accurate, the demonstration shows that it is possible within this modeling framework to time-accurately update various plasma parameters.

  1. Proceedings of the Annual DARPA/AFGL Seismic Research Symposium (7th) Held in Colorado Springs, Colorado on 6-8 May 1985

    DTIC Science & Technology

    1990-11-08

    seismograms were calculated for the three fundamental sources needed to construct an arbitrarily oriented dislocation or deviatoric moment tensor...or the first motion approximation method (FMA). Vertical and radial displacements for the three fundamental source terms are shown since each source...significantly interfere with the SV body wave to produce varying levels of distortion of the waveform among the three fundamental sources. Note, for example

  2. Studying the Puzzle of the Pion Nucleon Sigma Term

    NASA Astrophysics Data System (ADS)

    Kane, Christopher; Lin, Huey-Wen

    2017-09-01

    The pion nucleon sigma term (σπN) is a fundamental parameter of QCD and is integral in the experimental search for dark matter particles, as it is used to calculate the cross section of interactions between potential dark matter candidates and nucleons. Recent calculations of this term from lattice-QCD data disagree with calculations done using phenomenological data. This disparity is large enough to cause concern in the dark matter community, as it would change the constraints on their experiments. We investigate one potential source of this disparity by studying the flavor dependence of the LQCD data used to calculate σπN. To calculate σπN, we study the nucleon mass dependence on the pion mass and implement the Hellmann-Feynman theorem. Previous calculations only consider LQCD data that accounted for 2 and 3 of the lightest quarks in the quark sea. We extend this study by using new high-statistics data that considers 2, 3, and 4 quarks in the quark sea to see if the exclusion of the heavier quarks can account for this disparity. Supported by the National Science Foundation.
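
    The Hellmann-Feynman step mentioned above reduces, via the GMOR relation m_pi^2 proportional to m_q, to sigma_piN ~ m_pi^2 dM_N/d(m_pi^2) evaluated at the physical point, i.e. to differentiating a fit of lattice nucleon masses. A hedged sketch with hypothetical lattice points, not the data of this study:

        import numpy as np

        mpi2 = np.array([0.02, 0.05, 0.09, 0.15])   # m_pi^2, GeV^2 (hypothetical)
        mN = np.array([0.96, 1.02, 1.09, 1.18])     # nucleon mass, GeV (hypothetical)

        # Fit M_N = c0 + c1*m_pi^2 + c2*(m_pi^2)^2 and differentiate analytically.
        c2, c1, c0 = np.polyfit(mpi2, mN, 2)
        mpi2_phys = 0.135**2                        # physical m_pi^2, GeV^2
        sigma = mpi2_phys * (c1 + 2.0 * c2 * mpi2_phys)
        print(f"sigma_piN ~ {sigma * 1000:.0f} MeV")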

  3. The sound of moving bodies. Ph.D. Thesis - Cambridge Univ.

    NASA Technical Reports Server (NTRS)

    Brentner, Kenneth Steven

    1990-01-01

    The importance of the quadrupole source term in the Ffowcs Williams-Hawkings (FW-H) equation was addressed. The quadrupole source contains fundamental components of the complete fluid mechanics problem, which are ignored only at the risk of error. The results made it clear that any application of the acoustic analogy should begin with all of the source terms in the FW-H theory. The direct calculation of the acoustic field as part of the complete unsteady fluid mechanics problem using CFD is considered. It was shown that aeroacoustic calculations can indeed be made with CFD codes. The results indicate that the acoustic field is the most susceptible component of the computation to numerical error. Therefore, the ability to measure the damping of acoustic waves is absolutely essential to the development of acoustic computations. Essential groundwork for a new approach to the problem of sound generation by moving bodies is presented. This new computational acoustic approach holds the promise of solving many problems hitherto pushed aside.

  4. Seismic hazard assessment of the Province of Murcia (SE Spain): analysis of source contribution to hazard

    NASA Astrophysics Data System (ADS)

    García-Mayordomo, J.; Gaspar-Escribano, J. M.; Benito, B.

    2007-10-01

    A probabilistic seismic hazard assessment of the Province of Murcia in terms of peak ground acceleration (PGA) and spectral accelerations [SA(T)] is presented in this paper. In contrast to most of the previous studies in the region, which were performed for PGA making use of intensity-to-PGA relationships, hazard is here calculated in terms of magnitude and using European spectral ground-motion models. Moreover, we have considered the most important faults in the region as specific seismic sources, and also comprehensively reviewed the earthquake catalogue. Hazard calculations are performed following the Probabilistic Seismic Hazard Assessment (PSHA) methodology using a logic tree, which accounts for three different seismic source zonings and three different ground-motion models. Hazard maps in terms of PGA and SA(0.1, 0.2, 0.5, 1.0 and 2.0 s) and coefficient of variation (COV) for the 475-year return period are shown. Subsequent analysis is focused on three sites of the province, namely, the cities of Murcia, Lorca and Cartagena, which are important industrial and tourism centres. Results at these sites have been analysed to evaluate the influence of the different input options. The most important factor affecting the results is the choice of the attenuation relationship, whereas the influence of the selected seismic source zonings appears strongly site-dependent. Finally, we have performed an analysis of source contribution to hazard at each of these cities to provide preliminary guidance in devising specific risk scenarios. We have found that local source zones control the hazard for PGA and SA(T ≤ 1.0 s), although the contribution from specific fault sources and long-distance north Algerian sources becomes significant from SA(0.5 s) onwards.
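
    Each branch of the logic tree above evaluates the same hazard integral: the annual rate of exceeding a ground-motion level is the source activity rate times the probability, integrated over magnitude (and, in general, distance), that the ground-motion model exceeds that level. A single-source, fixed-distance sketch with a toy ground-motion model; all parameters are illustrative, not the study's:

        import math

        def gmpe_ln_pga(mag, r_km):
            """Toy ground-motion model: mean ln(PGA in g) and sigma."""
            return -3.5 + 0.9 * mag - 1.2 * math.log(r_km + 10.0), 0.6

        def annual_exceedance(a_g, nu_mmin=0.05, b=1.0, m_min=4.0, m_max=7.0,
                              r_km=20.0, nm=200):
            """Annual rate of PGA > a_g for one source at a fixed distance."""
            beta = b * math.log(10.0)
            norm = 1.0 - math.exp(-beta * (m_max - m_min))
            dm = (m_max - m_min) / nm
            total = 0.0
            for i in range(nm):
                m = m_min + (i + 0.5) * dm
                pdf = beta * math.exp(-beta * (m - m_min)) / norm   # truncated GR magnitudes
                mu, sig = gmpe_ln_pga(m, r_km)
                p_exc = 0.5 * math.erfc((math.log(a_g) - mu) / (sig * math.sqrt(2.0)))
                total += nu_mmin * pdf * p_exc * dm
            return total

        for a in (0.1, 0.2, 0.4):
            print(f"PGA > {a:.1f} g: {annual_exceedance(a):.2e} per year")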

  5. The influence of cross-order terms in interface mobilities for structure-borne sound source characterization

    NASA Astrophysics Data System (ADS)

    Bonhoff, H. A.; Petersson, B. A. T.

    2010-08-01

    For the characterization of structure-borne sound sources with multi-point or continuous interfaces, substantial simplifications and physical insight can be obtained by incorporating the concept of interface mobilities. The applicability of interface mobilities, however, relies upon the admissibility of neglecting the so-called cross-order terms. Hence, the objective of the present paper is to clarify the importance and significance of cross-order terms for the characterization of vibrational sources. From previous studies, four conditions have been identified for which the cross-order terms can become more influential. Such are non-circular interface geometries, structures with distinctively differing transfer paths as well as a suppression of the zero-order motion and cases where the contact forces are either in phase or out of phase. In a theoretical study, the former four conditions are investigated regarding the frequency range and magnitude of a possible strengthening of the cross-order terms. For an experimental analysis, two source-receiver installations are selected, suitably designed to obtain strong cross-order terms. The transmitted power and the source descriptors are predicted by the approximations of the interface mobility approach and compared with the complete calculations. Neglecting the cross-order terms can result in large misinterpretations at certain frequencies. On average, however, the cross-order terms are found to be insignificant and can be neglected with good approximation. The general applicability of interface mobilities for structure-borne sound source characterization and the description of the transmission process thereby is confirmed.

  6. GPS Block 2R Time Standard Assembly (TSA) architecture

    NASA Technical Reports Server (NTRS)

    Baker, Anthony P.

    1990-01-01

    The underlying philosophy of the Global Positioning System (GPS) 2R Time Standard Assembly (TSA) architecture is to utilize two frequency sources, one fixed frequency reference source and one system frequency source, and to couple the system frequency source to the reference frequency source via a sampled-data loop. The system source is used to provide the basic clock frequency and timing for the space vehicle (SV), and it uses a voltage controlled crystal oscillator (VCXO) with high short term stability. The reference source is an atomic frequency standard (AFS) with high long term stability. The architecture can support any type of frequency standard. In the system design, rubidium and cesium standards and hydrogen masers outputting a canonical frequency were accommodated. The architecture is software intensive. All VCXO adjustments are digital and are calculated by a processor. They are applied to the VCXO via a digital to analog converter.

  7. Numerical study of supersonic combustion using a finite rate chemistry model

    NASA Technical Reports Server (NTRS)

    Chitsomboon, T.; Tiwari, S. N.; Kumar, A.; Drummond, J. P.

    1986-01-01

    The governing equations of two-dimensional chemically reacting flows are presented together with a global two-step chemistry model for H2-air combustion. The explicit unsplit MacCormack finite difference algorithm is used to advance the discrete system of the governing equations in time until convergence is attained. The source terms in the species equations are evaluated implicitly to alleviate stiffness associated with fast reactions. With implicit source terms, the species equations give rise to a block-diagonal system which can be solved very efficiently on vector-processing computers. A supersonic reacting flow in an inlet-combustor configuration is calculated for the case where H2 is injected into the flow from the side walls and the strut. Results of the calculation are compared against the results obtained by using a complete reaction model.
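
    The implicit evaluation of the chemistry source terms mentioned above can be illustrated on a scalar production/loss model y' = P - L*y: taking the loss term at the new time level keeps the update stable even when L*dt is large. This toy model is a stand-in for the actual two-step H2-air mechanism:

        def explicit_step(y, P, L, dt):
            return y + dt * (P - L * y)              # diverges once L * dt > 2

        def point_implicit_step(y, P, L, dt):
            return (y + dt * P) / (1.0 + dt * L)     # loss term taken at level n+1

        y_exp = y_imp = 1.0
        P, L, dt = 0.5, 1.0e4, 1.0e-3                # stiff case: L * dt = 10
        for _ in range(10):
            y_exp = explicit_step(y_exp, P, L, dt)
            y_imp = point_implicit_step(y_imp, P, L, dt)
        print(f"explicit: {y_exp:.3e}   point-implicit: {y_imp:.3e}   "
              f"exact steady state: {P / L:.3e}")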

  8. A general circulation model study of atmospheric carbon monoxide

    NASA Technical Reports Server (NTRS)

    Pinto, J. P.; Rind, D.; Russell, G. L.; Lerner, J. A.; Hansen, J. E.; Yung, Y. L.; Hameed, S.

    1983-01-01

    The carbon monoxide cycle is studied by incorporating the known and hypothetical sources and sinks in a tracer model that uses the winds generated by a general circulation model. Photochemical production and loss terms, which depend on OH radical concentrations, are calculated in an interactive fashion. The computed global distribution and seasonal variations of CO are compared with observations to obtain constraints on the distribution and magnitude of the sources and sinks of CO, and on the tropospheric abundance of OH. The simplest model that accounts for available observations requires a low latitude plant source of about 1.3 × 10¹⁵ g/yr, in addition to sources from incomplete combustion of fossil fuels and oxidation of methane. The globally averaged OH concentration calculated in the model is 7.5 × 10⁵ cm⁻³. Models that calculate globally averaged OH concentrations much lower than this nominal value are not consistent with the observed variability of CO. Such models are also inconsistent with measurements of CO isotopic abundances, which imply the existence of plant sources.
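
    A zero-dimensional caricature of the CO budget above shows why the inferred OH level controls everything: with loss rate k[OH][CO], the CO lifetime is 1/(k[OH]). The rate constant below is the familiar ~2e-13 cm3/s for CO + OH; the source strength is illustrative only:

        k_co_oh = 2.0e-13   # cm^3 molecule^-1 s^-1, approximate CO + OH rate constant
        oh = 7.5e5          # cm^-3, the model's globally averaged OH
        tau_s = 1.0 / (k_co_oh * oh)
        print(f"CO lifetime ~ {tau_s / 86400.0:.0f} days")

        source = 3.8e5      # cm^-3 s^-1, illustrative box-model CO source
        co_steady = source * tau_s
        print(f"steady-state [CO] ~ {co_steady:.1e} cm^-3")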

  9. Radiological Source Terms for Tank Farms Safety Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    COWLEY, W.L.

    2000-06-27

    This document provides Unit Liter Dose factors, atmospheric dispersion coefficients, breathing rates and instructions for using and customizing these factors for use in calculating radiological doses for accident analyses in the Hanford Tank Farms.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freeze, R.A.

    Many emerging remediation technologies are designed to remove contaminant mass from source zones at DNAPL sites in response to regulatory requirements. There is often concern in the regulated community as to whether mass removal actually reduces risk, or whether the small risk reductions achieved warrant the large costs incurred. This paper sets out a framework for quantifying the degree to which risk is reduced as mass is removed from shallow, saturated, low-permeability, dual-porosity, DNAPL source zones. Risk is defined in terms of meeting an alternate concentration level (ACL) at a compliance well in an aquifer underlying the source zone. The ACL is back-calculated from a carcinogenic health-risk characterization at a downstream water-supply well. Source-zone mass-removal efficiencies are heavily dependent on the distribution of mass between media (fractures, matrix) and phases (dissolved, sorbed, free product). Due to the uncertainties in currently-available technology performance data, the scope of the paper is limited to developing a framework for generic technologies rather than making risk-reduction calculations for specific technologies. Despite the qualitative nature of the exercise, results imply that very high mass-removal efficiencies are required to achieve significant long-term risk reduction with technology applications of finite duration. 17 refs., 7 figs., 6 tabs.

  11. SU-E-T-554: Monte Carlo Calculation of Source Terms and Attenuation Lengths for Neutrons Produced by 50–200 MeV Protons On Brass

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramos-Mendez, J; Faddegon, B; Paganetti, H

    2015-06-15

    Purpose: We used TOPAS (TOPAS wraps and extends Geant4 for medical physicists) to compare Geant4 physics models with published data for neutron shielding calculations. Subsequently, we calculated the source terms and attenuation lengths (shielding data) of the total ambient dose equivalent (TADE) in concrete for neutrons produced by protons in brass. Methods: Stage 1: The Bertini and Binary nuclear models available in Geant4 were compared with published attenuation at depth of the TADE in concrete and iron. Stage 2: Shielding data of the TADE in concrete were calculated for 50-200 MeV proton beams on brass. Stage 3: Shielding data from Stage 2 were extrapolated to 235 MeV proton beams. These data were used in a point-line-source analytical model to calculate the ambient dose per unit therapeutic dose at two locations inside one treatment room at the Francis H Burr Proton Therapy Center. Finally, we compared these results with experimental data and full TOPAS simulations. Results: At larger angles (∼130°) the TADE in concrete calculated with the Bertini model was about 9 times larger than that calculated with the Binary model. The attenuation length in concrete calculated with the Binary model agreed with published data within 7%±0.4% (statistical uncertainty) for the deepest regions and 5%±0.1% for shallower regions. For iron the agreement was within 3%±0.1%. The ambient dose per therapeutic dose calculated with the Binary model, relative to the experimental data, was a ratio of 0.93±0.16 and 1.23±0.24 at the two locations. The analytical model overestimated the dose by four orders of magnitude. These differences are attributed to the complexity of the geometry. Conclusion: The Binary and Bertini models gave comparable results, with the Binary model giving the best agreement with published data at large angle. The shielding data we calculated using the Binary model are useful for fast shielding calculations with other analytical models. This work was supported by National Cancer Institute Grant R01CA140735.
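
    Source-term/attenuation-length shielding data of this kind are typically used in the standard point-source form H(theta)/r^2 * exp(-d/lambda(theta)), with the source term H and attenuation length lambda tabulated against emission angle. A minimal sketch; the angular tables below are placeholders, not the TOPAS results:

        import math

        H = {0: 2.0e-15, 90: 6.0e-16, 130: 2.0e-16}   # source term, Sv m^2/proton (placeholder)
        lam = {0: 120.0, 90: 45.0, 130: 35.0}          # attenuation length, g/cm^2 (placeholder)

        def ambient_dose_per_proton(theta_deg, r_m, slant_depth_g_cm2):
            """Point-source shielding estimate at angle theta behind slant depth d."""
            return H[theta_deg] / r_m**2 * math.exp(-slant_depth_g_cm2 / lam[theta_deg])

        rho_concrete = 2.35   # g/cm^3
        print(ambient_dose_per_proton(90, r_m=4.0, slant_depth_g_cm2=200.0 * rho_concrete))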

  12. Volume Averaging Study of the Capacitive Deionization Process in Homogeneous Porous Media

    DOE PAGES

    Gabitto, Jorge; Tsouris, Costas

    2015-05-05

    Ion storage in porous electrodes is important in applications such as energy storage by supercapacitors, water purification by capacitive deionization, extraction of energy from a salinity difference and heavy ion purification. In this paper, a model is presented to simulate the charge process in homogeneous porous media comprising big pores. It is based on a theory for capacitive charging by ideally polarizable porous electrodes without faradaic reactions or specific adsorption of ions. A volume averaging technique is used to derive the averaged transport equations in the limit of thin electrical double layers. Transport between the electrolyte solution and the charged wall is described using the Gouy–Chapman–Stern model. The effective transport parameters for isotropic porous media are calculated solving the corresponding closure problems. Finally, the source terms that appear in the average equations are calculated using numerical computations. An alternative way to deal with the source terms is proposed.

  13. An Improved Elastic and Nonelastic Neutron Transport Algorithm for Space Radiation

    NASA Technical Reports Server (NTRS)

    Clowdsley, Martha S.; Wilson, John W.; Heinbockel, John H.; Tripathi, R. K.; Singleterry, Robert C., Jr.; Shinn, Judy L.

    2000-01-01

    A neutron transport algorithm including both elastic and nonelastic particle interaction processes for use in space radiation protection for arbitrary shield material is developed. The algorithm is based upon a multiple energy grouping and analysis of the straight-ahead Boltzmann equation by using a mean value theorem for integrals. The algorithm is then coupled to the Langley HZETRN code through a bidirectional neutron evaporation source term. Evaluation of the neutron fluence generated by the solar particle event of February 23, 1956, for an aluminum water shield-target configuration is then compared with MCNPX and LAHET Monte Carlo calculations for the same shield-target configuration. With the Monte Carlo calculation as a benchmark, the algorithm developed in this paper showed a great improvement in results over the unmodified HZETRN solution. In addition, a high-energy bidirectional neutron source based on a formula by Ranft showed even further improvement of the fluence results over previous results near the front of the water target where diffusion out the front surface is important. Effects of improved interaction cross sections are modest compared with the addition of the high-energy bidirectional source terms.

  14. Interlaboratory study of the ion source memory effect in 36Cl accelerator mass spectrometry

    NASA Astrophysics Data System (ADS)

    Pavetich, Stefan; Akhmadaliev, Shavkat; Arnold, Maurice; Aumaître, Georges; Bourlès, Didier; Buchriegler, Josef; Golser, Robin; Keddadouche, Karim; Martschini, Martin; Merchel, Silke; Rugel, Georg; Steier, Peter

    2014-06-01

    Understanding and minimization of contaminations in the ion source due to cross-contamination and long-term memory effect is one of the key issues for accurate accelerator mass spectrometry (AMS) measurements of volatile elements. The focus of this work is on the investigation of the long-term memory effect for the volatile element chlorine, and the minimization of this effect in the ion source of the Dresden accelerator mass spectrometry facility (DREAMS). For this purpose, one of the two original HVE ion sources at the DREAMS facility was modified, allowing the use of larger sample holders having individual target apertures. Additionally, a more open geometry was used to improve the vacuum level. To evaluate this improvement in comparison to other up-to-date ion sources, an interlaboratory comparison had been initiated. The long-term memory effect of the four Cs sputter ion sources at DREAMS (two sources: original and modified), ASTER (Accélérateur pour les Sciences de la Terre, Environnement, Risques) and VERA (Vienna Environmental Research Accelerator) had been investigated by measuring samples of natural 35Cl/37Cl-ratio and samples highly-enriched in 35Cl (35Cl/37Cl ∼ 999). Besides investigating and comparing the individual levels of long-term memory, recovery time constants could be calculated. The tests show that all four sources suffer from long-term memory, but the modified DREAMS ion source showed the lowest level of contamination. The recovery times of the four ion sources were widely spread between 61 and 1390 s, where the modified DREAMS ion source with values between 156 and 262 s showed the fastest recovery in 80% of the measurements.
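
    Recovery time constants like those quoted above (61-1390 s) are typically obtained by fitting the measured isotope ratio, after switching from the enriched sample back to a natural one, with a single exponential decaying toward the natural baseline. A hedged sketch on synthetic data:

        import math, random

        TAU_TRUE, BASELINE, AMP = 200.0, 3.127, 5.0   # natural 35Cl/37Cl ~ 3.127
        t = [20.0 * i for i in range(1, 30)]          # seconds after sample change
        ratio = [BASELINE + AMP * math.exp(-ti / TAU_TRUE)
                 * (1.0 + 0.01 * random.gauss(0, 1)) for ti in t]

        # Log-linear least-squares fit of (ratio - baseline) vs. time.
        pts = [(ti, math.log(ri - BASELINE)) for ti, ri in zip(t, ratio) if ri > BASELINE]
        n = len(pts)
        sx = sum(p[0] for p in pts); sy = sum(p[1] for p in pts)
        sxx = sum(p[0] ** 2 for p in pts); sxy = sum(p[0] * p[1] for p in pts)
        slope = (n * sxy - sx * sy) / (n * sxx - sx ** 2)
        print(f"fitted recovery time constant ~ {-1.0 / slope:.0f} s")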

  15. Calculation note for an underground leak which remains underground

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldberg, H.J.

    1997-05-20

    This calculation note supports the subsurface leak accident scenario which remains subsurface. It is assumed that a single walled pipe carrying waste from tank 106-C ruptures, releasing the liquid waste into the soil. In this scenario, the waste does not form a surface pool, but remains subsurface. However, above the pipe is a berm, 0.762 m (2.5 ft) high and 2.44 m (8 ft) wide, and the liquid released from the leak rises into the berm. The slurry line, which transports a source term of higher activity than the sluice line, leaks into the soil at a rate of 5% of the maximum flow rate of 28.4 L/s (450 gpm) for twelve hours. The dose recipient was placed a perpendicular distance of 100 m from the pipe. Two source terms were considered, mitigated and unmitigated release, as described in Section 3.4.1 of UANF-SD-WM-BIO-001, Addendum 1. The unmitigated release consisted of two parts AWF liquid and one part AWF solid. The mitigated release consisted of two parts SST liquid, eighteen parts AWF liquid, nine parts SST solid, and one part AWF solid. The isotopic breakdown of the release in these cases is presented. Two geometries were considered in preliminary investigations, a disk source and a rectangular source. Since the rectangular source results from the assumption that the contamination is wicked up into the berm, only six inches of shielding from uncontaminated earth is present, while the disk source, which remains six inches below the level of the surface of the land, is often shielded by a thick shield due to the slant path to the dose point. For this reason, only the rectangular source was considered in the final analysis. The source model was a rectangle 2.134 m (7 ft) thick, 0.6096 m (2 ft) high, and 130.899 m (429 ft) long. The top and sides of this rectangular source were covered with earth of density 1.6 g/cm³ to a thickness of 15.24 cm (6 in). This soil is modeled as 40% void space. The source consisted of earth of the same density with the void spaces filled with the liquid waste, which added 0.56 g/cm³ to the density. The dose point was 100 m (328 ft) away from the berm in a perpendicular direction off the center. The computer code MICROSKYSHINE was used to calculate the skyshine from the source. This code calculates the exposure rate at the receptor point. The photon spectrum from 2 MeV to 0.15 MeV, obtained from ISOSHLD, was used as input, although this did not differ substantially from the results obtained from using Co, 137mBa, and 154Eu. However, this methodology allowed the bremsstrahlung contribution to be included in the skyshine calculation as well as in the direct radiation calculation.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gydesen, S.P.

    The purpose of this letter report is to reconstruct from available information the data that can be used to develop a daily reactor operating history for 1960-1964. The information needed for source term calculations (as determined by the Source Terms Task Leader) was extracted and included in this report. The data on the amount of uranium dissolved by the separations plants (expressed both as tons and as MW) are also included in this compilation.

  17. Electron Energy Deposition in Atomic Nitrogen

    DTIC Science & Technology

    1987-10-06

    known theoretical results, and their relative accuracy in comparison to existing measurements and calculations is given elsewhere. 2.1 The Source Term...with the proper choice of parameters, reduces to well-known theoretical results. Table 2 gives the parameters for collisional excitation of the...calculations of McGuire and experimental measurements of Brook et al. Additional theoretical and experimental results are discussed in detail elsewhere

  18. Modeling of Radiotherapy Linac Source Terms Using ARCHER Monte Carlo Code: Performance Comparison for GPU and MIC Parallel Computing Devices

    NASA Astrophysics Data System (ADS)

    Lin, Hui; Liu, Tianyu; Su, Lin; Bednarz, Bryan; Caracappa, Peter; Xu, X. George

    2017-09-01

    Monte Carlo (MC) simulation is well recognized as the most accurate method for radiation dose calculations. For radiotherapy applications, accurate modelling of the source term, i.e. the clinical linear accelerator, is critical to the simulation. The purpose of this paper is to perform source modelling and examine the accuracy and performance of the models on Intel Many Integrated Core coprocessors (aka Xeon Phi) and Nvidia GPUs using ARCHER, and to explore potential optimization methods. Phase space-based source modelling has been implemented. Good agreement was found in a tomotherapy prostate patient case and a TrueBeam breast case. From the aspect of performance, the whole simulation for the prostate plan and the breast plan took about 173 s and 73 s, respectively, with 1% statistical error.

  19. ORIGAMI Automator Primer. Automated ORIGEN Source Terms and Spent Fuel Storage Pool Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wieselquist, William A.; Thompson, Adam B.; Bowman, Stephen M.

    2016-04-01

    Source terms and spent nuclear fuel (SNF) storage pool decay heat load analyses for operating nuclear power plants require a large number of Oak Ridge Isotope Generation and Depletion (ORIGEN) calculations. SNF source term calculations also require a significant amount of bookkeeping to track quantities such as core and assembly operating histories, spent fuel pool (SFP) residence times, heavy metal masses, and enrichments. The ORIGEN Assembly Isotopics (ORIGAMI) module in the SCALE code system provides a simple scheme for entering these data. However, given the large scope of the analysis, extensive scripting is necessary to convert formats and process data to create thousands of ORIGAMI input files (one per assembly) and to process the results into formats readily usable by follow-on analysis tools. This primer describes a project within the SCALE Fulcrum graphical user interface (GUI) called ORIGAMI Automator that was developed to automate the scripting and bookkeeping in large-scale source term analyses. The ORIGAMI Automator enables the analyst to (1) easily create, view, and edit the reactor site and assembly information, (2) automatically create and run ORIGAMI inputs, and (3) analyze the results from ORIGAMI. ORIGAMI Automator uses the standard ORIGEN binary concentrations files produced by ORIGAMI, with concentrations available at all time points in each assembly’s life. The GUI plots results such as mass, concentration, activity, and decay heat using a powerful new ORIGEN Post-Processing Utility for SCALE (OPUS) GUI component. This document includes a description and user guide for the GUI, a step-by-step tutorial for a simplified scenario, and appendices that document the file structures used.

  20. Replacing effective spectral radiance by temperature in occupational exposure limits to protect against retinal thermal injury from light and near IR radiation.

    PubMed

    Madjidi, Faramarz; Behroozy, Ali

    2014-01-01

    Exposure to visible light and near infrared (NIR) radiation in the wavelength region of 380 to 1400 nm may cause thermal retinal injury. In this analysis, the effective spectral radiance of a hot source is replaced by its temperature in the exposure limit values in the region of 380-1400 nm. This article describes the development and implementation of a computer code to predict the temperatures corresponding to the exposure limits proposed by the American Conference of Governmental Industrial Hygienists (ACGIH). Viewing duration and apparent diameter of the source were inputs to the computer code. At the first stage, an infinite series was created for calculation of spectral radiance by integration of Planck's law. At the second stage, for calculation of effective spectral radiance, the initial terms of this infinite series were selected and integration was performed by multiplying these terms by a weighting factor R(λ) in the wavelength region 380-1400 nm. At the third stage, using the computer code, the source temperature that can emit the same effective spectral radiance was found. As a result, based only on measuring the source temperature and accounting for the exposure time and the apparent diameter of the source, it is possible to decide whether the exposure to visible and NIR in any 8-hr workday is permissible. The substitution of source temperature for effective spectral radiance provides a convenient way to evaluate exposure to visible light and NIR.
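
    The temperature lookup described above can be reproduced numerically: integrate the Planck spectral radiance weighted by R(λ) over 380-1400 nm, then root-find the blackbody temperature that matches a given effective radiance. A minimal sketch; the R(λ) below is a crude placeholder, not the ACGIH hazard function:

        import math

        H_PL, C_LIGHT, K_B = 6.626e-34, 2.998e8, 1.381e-23

        def planck_radiance(lam_m, temp_k):
            """Blackbody spectral radiance, W m^-2 sr^-1 m^-1."""
            return (2.0 * H_PL * C_LIGHT**2 / lam_m**5
                    / (math.exp(H_PL * C_LIGHT / (lam_m * K_B * temp_k)) - 1.0))

        def r_weight(lam_nm):
            return 1.0 if lam_nm < 700.0 else 700.0 / lam_nm   # placeholder for R(lambda)

        def effective_radiance(temp_k, n=512):
            dlam_nm = (1400.0 - 380.0) / n
            total = 0.0
            for i in range(n):
                lam_nm = 380.0 + (i + 0.5) * dlam_nm
                total += planck_radiance(lam_nm * 1e-9, temp_k) * r_weight(lam_nm)
            return total * dlam_nm * 1e-9

        def temperature_for(l_target, lo=500.0, hi=6000.0):
            for _ in range(60):                                # bisection; L(T) is monotone
                mid = 0.5 * (lo + hi)
                lo, hi = (mid, hi) if effective_radiance(mid) < l_target else (lo, mid)
            return 0.5 * (lo + hi)

        print(f"{temperature_for(1.0e4):.0f} K")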

  1. Detailed source term estimation of the atmospheric release for the Fukushima Daiichi Nuclear Power Station accident by coupling simulations of atmospheric dispersion model with improved deposition scheme and oceanic dispersion model

    NASA Astrophysics Data System (ADS)

    Katata, G.; Chino, M.; Kobayashi, T.; Terada, H.; Ota, M.; Nagai, H.; Kajino, M.; Draxler, R.; Hort, M. C.; Malo, A.; Torii, T.; Sanada, Y.

    2014-06-01

    Temporal variations in the amount of radionuclides released into the atmosphere during the Fukushima Dai-ichi Nuclear Power Station (FNPS1) accident and their atmospheric and marine dispersion are essential to evaluate the environmental impacts and resultant radiological doses to the public. In this paper, we estimate a detailed time trend of atmospheric releases during the accident by combining environmental monitoring data with atmospheric model simulations from WSPEEDI-II (Worldwide version of System for Prediction of Environmental Emergency Dose Information) and simulations from the oceanic dispersion model SEA-GEARN-FDM, both developed by the authors. A sophisticated deposition scheme, which deals with dry and fogwater depositions, cloud condensation nuclei (CCN) activation, and subsequent wet scavenging due to mixed-phase cloud microphysics (in-cloud scavenging) for radioactive iodine gas (I2 and CH3I) and other particles (CsI, Cs, and Te), was incorporated into WSPEEDI-II to improve the surface deposition calculations. The fallout to the ocean surface calculated by WSPEEDI-II was used as input data for the SEA-GEARN-FDM calculations. Reverse and inverse source-term estimation methods based on coupling the simulations from both models were adopted using air dose rates and concentrations, and sea surface concentrations. The results revealed that the major releases of radionuclides due to the FNPS1 accident occurred in the following periods during March 2011: the afternoon of 12 March due to the wet venting and hydrogen explosion at Unit 1, the morning of 13 March after the venting event at Unit 3, midnight of 14 March when the SRV (Safety Relief Valve) at Unit 2 was opened three times, the morning and night of 15 March, and the morning of 16 March. According to the simulation results, the highest radioactive contamination areas around FNPS1 were created from 15 to 16 March by complicated interactions among rainfall, plume movements, and the temporal variation of release rates associated with reactor pressure changes in Units 2 and 3. The modified WSPEEDI-II simulation using the new source term reproduced local and regional patterns of cumulative surface deposition of total 131I and 137Cs and air dose rate obtained by airborne surveys. The new source term was also tested using three atmospheric dispersion models (MLDP0, HYSPLIT, and NAME) for regional and global calculations and showed good agreement between calculated and observed air concentrations and surface deposition of 137Cs in East Japan. Moreover, the HYSPLIT model using the new source term also reproduced the plume arrivals at several countries abroad, showing a good correlation with measured air concentration data. A large part of the deposition pattern of total 131I and 137Cs in East Japan was explained by in-cloud particulate scavenging. However, for the regional scale contaminated areas, there were large uncertainties due to the overestimation of rainfall amounts and the underestimation of fogwater and drizzle depositions. The computations showed that approximately 27% of the 137Cs discharged from FNPS1 deposited on the land in East Japan, mostly in forest areas.
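
    The "reverse estimation" step described above rests on the linearity of atmospheric transport: run the dispersion model with a unit release rate, then scale by the ratio of a measured concentration (or dose rate) to the simulated unit-release value for the matching time segment. A schematic sketch with illustrative numbers, not values from the paper:

        measured_bq_m3 = 1.2e3     # observed 137Cs air concentration at a monitoring point
        unit_run_bq_m3 = 4.0e-11   # simulated concentration there for a 1 Bq/h release
        release_bq_h = measured_bq_m3 / unit_run_bq_m3
        print(f"estimated release rate for this segment ~ {release_bq_h:.1e} Bq/h")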

  2. Regulatory Technology Development Plan - Sodium Fast Reactor: Mechanistic Source Term – Trial Calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grabaskas, David; Bucknor, Matthew; Jerden, James

    2016-10-01

    The potential release of radioactive material during a plant incident, referred to as the source term, is a vital design metric and will be a major focus of advanced reactor licensing. The U.S. Nuclear Regulatory Commission has stated an expectation for advanced reactor vendors to present a mechanistic assessment of the potential source term in their license applications. The mechanistic source term presents an opportunity for vendors to realistically assess the radiological consequences of an incident, and may allow reduced emergency planning zones and smaller plant sites. However, the development of a mechanistic source term for advanced reactors is not without challenges, as there are often numerous phenomena impacting the transportation and retention of radionuclides. This project sought to evaluate U.S. capabilities regarding the mechanistic assessment of radionuclide release from core damage incidents at metal-fueled, pool-type sodium fast reactors (SFRs). The purpose of the analysis was to identify, and prioritize, any gaps regarding computational tools or data necessary for the modeling of radionuclide transport and retention phenomena. To accomplish this task, a parallel-path analysis approach was utilized. One path, led by Argonne and Sandia National Laboratories, sought to perform a mechanistic source term assessment using available codes, data, and models, with the goal of identifying gaps in the current knowledge base. The second path, performed by an independent contractor, used sensitivity analyses to determine the importance of particular radionuclides and transport phenomena with regard to offsite consequences. The results of the two pathways were combined to prioritize gaps in current capabilities.

  3. Uncertainty quantification in (α,n) neutron source calculations for an oxide matrix

    DOE PAGES

    Pigni, M. T.; Croft, S.; Gauld, I. C.

    2016-04-25

    Here we present a methodology to propagate nuclear data covariance information in neutron source calculations from (α,n) reactions. The approach is applied to estimate the uncertainty in the neutron generation rates for uranium oxide fuel types due to uncertainties in (1) 17,18O(α,n) reaction cross sections and (2) uranium and oxygen stopping power cross sections. The procedure to generate reaction cross section covariance information is based on the Bayesian fitting method implemented in the R-matrix SAMMY code. The evaluation methodology uses the Reich-Moore approximation to fit the 17,18O(α,n) reaction cross sections in order to derive a set of resonance parameters and a related covariance matrix, which is then used to calculate the energy-dependent cross section covariance matrix. The stopping power cross sections and related covariance information for uranium and oxygen were obtained by fitting stopping power data in the energy range from 1 keV up to 12 MeV. Cross section perturbation factors based on the covariance information for the evaluated 17,18O(α,n) reaction cross sections, as well as the uranium and oxygen stopping power cross sections, were used to generate a varied set of nuclear data libraries used in SOURCES4C and ORIGEN for inventory and source term calculations. The set of randomly perturbed (α,n) source responses provides the mean values and standard deviations of the calculated responses, reflecting the uncertainties in the nuclear data used in the calculations. Lastly, the results and related uncertainties are compared with experimental thick-target (α,n) yields for uranium oxide.
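
    A minimal sketch of the perturbed-library idea, assuming a made-up nominal cross-section set and covariance matrix in place of the SAMMY evaluation, and a toy yield response in place of a SOURCES4C/ORIGEN run:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical nominal (alpha,n) cross sections on a coarse energy
    # grid (barns) plus an assumed correlated relative covariance matrix.
    sigma0 = np.array([0.05, 0.12, 0.30, 0.45, 0.60])
    rel_unc = np.array([0.08, 0.06, 0.05, 0.05, 0.07])   # 1-sigma relative
    corr = 0.5 + 0.5 * np.eye(5)                         # crude correlation
    cov = np.outer(rel_unc, rel_unc) * corr              # relative covariance

    # Draw perturbation factors ~ N(1, cov) and evaluate a toy thick-target
    # yield for each perturbed "library".
    L = np.linalg.cholesky(cov)
    n_samples = 2000
    weights = np.array([0.10, 0.20, 0.30, 0.25, 0.15])   # toy spectral weights
    yields = np.empty(n_samples)
    for k in range(n_samples):
        factors = 1.0 + L @ rng.standard_normal(5)
        yields[k] = np.sum(sigma0 * factors * weights)

    print(f"mean yield  : {yields.mean():.4f} (arbitrary units)")
    print(f"rel. std dev: {yields.std() / yields.mean():.2%}")
    ```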

  4. Decomposition of the Seismic Source Using Numerical Simulations and Observations of Nuclear Explosions

    DTIC Science & Technology

    2017-05-31

    SUBJECT TERMS: nonlinear finite element calculations, nuclear explosion monitoring, topography. Figure 6: The CRAM 3D finite element outer grid (left) is rectangular; the inner grid (center) is shaped to match the shape of the explosion shock wave.

  5. Probabilistic Seismic Hazard Analysis for Georgia

    NASA Astrophysics Data System (ADS)

    Tsereteli, N. S.; Varazanashvili, O.; Sharia, T.; Arabidze, V.; Tibaldi, A.; Bonali, F. L. L.; Russo, E.; Pasquaré Mariotto, F.

    2017-12-01

    Nowadays, seismic hazard studies are developed in terms of the calculation of Peak Ground Acceleration (PGA), Spectral Acceleration (SA), Peak Ground Velocity (PGV), and other recorded parameters. In the frame of the EMME project, PSH was calculated for Georgia using GMPEs chosen on the basis of selection criteria. In the frame of Project N 216758 (supported by the Shota Rustaveli National Science Foundation (SRNF)), PSH maps were estimated using a hybrid-empirical ground motion prediction equation developed for Georgia. Due to the paucity of seismically recorded information, in this work we focused our research on a more robust dataset of macroseismic data and attempted to calculate the probabilistic seismic hazard directly in terms of macroseismic intensity. For this reason, we started by calculating new intensity prediction equations (IPEs) for Georgia, taking into account different sets belonging to the same new database, as well as distances from the seismic source. With respect to the seismic source, in order to improve the quality of the results, we have also hypothesized the size of faults from empirical relations and calculated new IPEs considering Joyner-Boore and rupture distances in addition to epicentral and hypocentral distances. Finally, site conditions have been included as variables in the IPE calculation. Regarding the database, we used a brand new revised set of macroseismic data and instrumental records for the significant earthquakes that struck Georgia between 1900 and 2002. In particular, a large amount of research and documents related to macroseismic effects of individual earthquakes, stored in the archives of the Institute of Geophysics, were used as sources for the new macroseismic data. The latter are reported in the Medvedev-Sponheuer-Karnik macroseismic scale (MSK-64). For each earthquake the magnitude, the focal depth, and the epicenter location are also reported. An online version of the database, with the related metadata, has been produced for the 69 revised earthquakes and is available online (http://www.enguriproject.unimib.it/).
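
    As an illustration of the IPE fitting step, the sketch below fits coefficients of a common functional form, I = a + b*M + c*log10(R) + d*R, to synthetic intensity data by least squares. The functional form and all values are assumptions for illustration, not those derived in the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic macroseismic data: magnitude M, hypocentral distance R (km),
    # and observed MSK-64 intensity generated from assumed "true" coefficients.
    n = 300
    M = rng.uniform(4.0, 7.0, n)
    R = rng.uniform(5.0, 150.0, n)
    a, b, c, d = 1.5, 1.2, -3.0, -0.002
    I_obs = a + b * M + c * np.log10(R) + d * R + rng.normal(0, 0.3, n)

    # Least-squares fit of the IPE coefficients.
    X = np.column_stack([np.ones(n), M, np.log10(R), R])
    coef, *_ = np.linalg.lstsq(X, I_obs, rcond=None)
    print("fitted a, b, c, d:", np.round(coef, 4))
    ```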

  6. An Assessment of Fission Product Scrubbing in Sodium Pools Following a Core Damage Event in a Sodium Cooled Fast Reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bucknor, M.; Farmer, M.; Grabaskas, D.

    The U.S. Nuclear Regulatory Commission has stated that mechanistic source term (MST) calculations are expected to be required as part of the advanced reactor licensing process. A recent study by Argonne National Laboratory has concluded that fission product scrubbing in sodium pools is an important aspect of an MST calculation for a sodium-cooled fast reactor (SFR). To model the phenomena associated with sodium pool scrubbing, a computational tool, developed as part of the Integral Fast Reactor (IFR) program, was utilized in an MST trial calculation. This tool was developed by applying classical theories of aerosol scrubbing to the decontamination of gases produced as a result of postulated fuel pin failures during an SFR accident scenario. The model currently considers aerosol capture by Brownian diffusion, inertial deposition, and gravitational sedimentation. The effects of sodium vapour condensation on aerosol scrubbing are also treated. This paper provides details of the individual scrubbing mechanisms utilized in the IFR code as well as results from a trial mechanistic source term assessment led by Argonne National Laboratory in 2016.

  7. The importance of geospatial data to calculate the optimal distribution of renewable energies

    NASA Astrophysics Data System (ADS)

    Díaz, Paula; Masó, Joan

    2013-04-01

    Especially during the last three years, renewable energies have been revolutionizing international trade while geographically diversifying markets. Renewables are experiencing rapid growth in power generation. According to REN21 (2012), during the last six years the total installed renewables capacity grew at record rates. In 2011, the EU raised its share of global new renewables capacity to 44%. The BRICS nations (Brazil, Russia, India and China) accounted for about 26% of the global total. Moreover, almost twenty countries in the Middle East, North Africa, and sub-Saharan Africa currently have active markets in renewables. Energy return ratios are commonly used to calculate the efficiency of traditional energy sources. The Energy Return On Investment (EROI) compares the energy returned by a certain source with the energy used to obtain it (explore, find, develop, produce, extract, transform, harvest, grow, process, etc.). These energy return ratios have demonstrated a general decrease in the efficiency of fossil fuels and gas. When considering the limits on the quantity of energy produced by some sources, the energy invested to obtain them, and the difficulties of finding optimal locations for renewables farms (e.g. due to an ever-increasing scarcity of appropriate land), the EROI becomes relevant for renewables. A spatialized EROI, which uses variables with spatial distribution, enables finding the optimal position in terms of both energy production and associated costs. It is important to note that the spatialized EROI can be mathematically formalized and calculated the same way for different locations in a reproducible way. This means that, having established a concrete EROI methodology, it is possible to generate a continuous map that highlights the most productive zones for renewable energies in terms of maximum energy return at minimum cost. Relevant variables for calculating the real energy invested are the grid connections between production and consumption, transportation losses, and the efficiency of the grid. If appropriate, the spatialized EROI analysis could include any indirect costs that the source of energy might produce, such as visual impacts, food market impacts, and land price. Such a spatialized study requires GIS tools to compute operations using both spatial relations, like distances and frictions, and topological relations, like connectivity, which are not easy to consider in the way EROI is currently calculated. In a broader perspective, by applying the EROI to various energy sources, a comparative analysis of the efficiency of obtaining different sources can be done in a quantitative way. The increase in energy investment is also accompanied by an increase in manufacturing and policies. Further efforts will be necessary in the coming years to provide energy access through smart grids and to determine the efficient areas in terms of cost of production and energy returned on investment. The authors present the EROI as a reliable way to address the input-output energy relationship and increase the efficiency of energy investment, considering the appropriate geospatial variables. The spatialized EROI can be a useful tool for decision makers when designing energy policies and programming energy funds, because it is an objective demonstration of which energy sources are more convenient in terms of costs and efficiency.
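
    A minimal sketch of what a spatialized EROI computation might look like on a raster grid, with invented energy-output, grid-distance, and loss layers standing in for real geospatial data:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical per-cell rasters (same grid) for a siting study:
    # annual energy produced, and energy invested including a grid-
    # connection term that grows with distance to the grid.
    shape = (50, 50)
    energy_out = rng.uniform(2.0, 12.0, shape)       # GWh/yr per cell
    dist_to_grid = rng.uniform(0.0, 80.0, shape)     # km
    invest = 1.0 + 0.02 * dist_to_grid               # GWh/yr-equivalent
    losses = 1.0 - 0.001 * dist_to_grid              # transmission efficiency

    eroi = energy_out * losses / invest              # spatialized EROI map

    # Highlight the most productive zones, e.g. the top decile of EROI.
    threshold = np.quantile(eroi, 0.9)
    best = eroi >= threshold
    print(f"max EROI {eroi.max():.1f}; {best.sum()} cells in the top decile")
    ```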

  8. Regulatory Technology Development Plan - Sodium Fast Reactor. Mechanistic Source Term - Trial Calculation. Work Plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grabaskas, David; Bucknor, Matthew; Jerden, James

    2016-02-01

    The overall objective of the SFR Regulatory Technology Development Plan (RTDP) effort is to identify and address potential impediments to the SFR regulatory licensing process. In FY14, an analysis by Argonne identified the development of an SFR-specific MST methodology as an existing licensing gap with high regulatory importance and a potentially long lead time to closure. This work was followed by an initial examination of the current state of knowledge regarding SFR source term development (ANL-ART-3), which reported several potential gaps. Among these were the potential inadequacies of current computational tools to properly model and assess the transport and retention of radionuclides during a metal-fueled, pool-type SFR core damage incident. The objective of the current work is to determine the adequacy of existing computational tools, and the associated knowledge database, for the calculation of an SFR MST. To accomplish this task, a trial MST calculation will be performed using available computational tools to establish their limitations with regard to the relevant radionuclide release/retention/transport phenomena. The application of existing modeling tools will provide a definitive test of their suitability for an SFR MST calculation, while also identifying potential gaps in the current knowledge base and providing insight into open issues regarding regulatory criteria/requirements. The findings of this analysis will assist in determining future research and development needs.

  9. REVIEW OF VOLATILE ORGANIC COMPOUND SOURCE APPORTIONMENT BY CHEMICAL MASS BALANCE. (R826237)

    EPA Science Inventory

    The chemical mass balance (CMB) receptor model has apportioned volatile organic compounds (VOCs) in more than 20 urban areas, mostly in the United States. These applications differ in terms of the total fraction apportioned, the calculation method, the chemical compounds used ...
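
    At its core, the CMB receptor model solves an overdetermined linear system: measured ambient concentrations are modeled as source profiles times source contributions. A minimal sketch with invented profiles follows; real applications weight the fit by measurement uncertainty (e.g. effective-variance least squares).

    ```python
    import numpy as np

    # Source profiles F (mass fraction of each species in each source)
    # and ambient concentrations c modeled as c = F @ s. Values invented.
    F = np.array([
        [0.30, 0.05, 0.02],   # species 1 fraction in sources A, B, C
        [0.10, 0.40, 0.05],
        [0.05, 0.10, 0.50],
        [0.20, 0.15, 0.10],
    ])
    s_true = np.array([8.0, 3.0, 5.0])   # contributions, ug/m^3
    c = F @ s_true

    s_est, *_ = np.linalg.lstsq(F, c, rcond=None)
    print("estimated contributions (ug/m^3):", np.round(s_est, 3))
    print("fraction of total mass apportioned:",
          round(float((F @ s_est).sum() / c.sum()), 3))
    ```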

  10. Parametrized energy spectrum of cosmic-ray protons with kinetic energies down to 1 GeV

    NASA Technical Reports Server (NTRS)

    Tan, L. C.

    1985-01-01

    A new estimation of the interstellar proton spectrum is made in which the source term of primary protons is taken from shock acceleration theory and the cosmic ray propagation calculation is based on a proposed nonuniform galactic disk model.

  11. Scoping estimates of the LDEF satellite induced radioactivity

    NASA Technical Reports Server (NTRS)

    Armstrong, Tony W.; Colborn, B. L.

    1990-01-01

    The Long Duration Exposure Facility (LDEF) satellite was recovered after almost six years in space. It was well-instrumented with ionizing radiation dosimeters, including thermoluminescent dosimeters, plastic nuclear track detectors, and a variety of metal foil samples for measuring nuclear activation products. The extensive LDEF radiation measurements provide the type of radiation environments and effects data needed to evaluate and help resolve uncertainties in present radiation models and calculational methods. A calculational program was established to aid in LDEF data interpretation and to utilize LDEF data for assessing the accuracy of current models. A summary of the calculational approach is presented. The purpose of the reported calculations is to obtain a general indication of: (1) the importance of different space radiation sources (trapped, galactic, and albedo protons, and albedo neutrons); (2) the importance of secondary particles; and (3) the spatial dependence of the radiation environments and effects expected within the spacecraft. The calculational method uses the High Energy Transport Code (HETC) to estimate the importance of different sources and secondary particles in terms of fluence, absorbed dose in tissue and silicon, and induced radioactivity as a function of depth in aluminum.

  12. Coarse Grid Modeling of Turbine Film Cooling Flows Using Volumetric Source Terms

    NASA Technical Reports Server (NTRS)

    Heidmann, James D.; Hunter, Scott D.

    2001-01-01

    The recent trend in numerical modeling of turbine film cooling flows has been toward higher fidelity grids and more complex geometries. This trend has been enabled by the rapid increase in computing power available to researchers. However, the turbine design community requires fast turnaround time in its design computations, rendering these comprehensive simulations ineffective in the design cycle. The present study describes a methodology for implementing a volumetric source term distribution in a coarse grid calculation that can model the small-scale and three-dimensional effects present in turbine film cooling flows. This model could be implemented in turbine design codes or in multistage turbomachinery codes such as APNASA, where the computational grid size may be larger than the film hole size. Detailed computations of a single row of 35 deg round holes on a flat plate have been obtained for blowing ratios of 0.5, 0.8, and 1.0, and density ratios of 1.0 and 2.0 using a multiblock grid system to resolve the flows on both sides of the plate as well as inside the hole itself. These detailed flow fields were spatially averaged to generate a field of volumetric source terms for each conservative flow variable. Solutions were also obtained using three coarse grids having streamwise and spanwise grid spacings of 3d, 1d, and d/3. These coarse grid solutions used the integrated hole exit mass, momentum, energy, and turbulence quantities from the detailed solutions as volumetric source terms. It is shown that a uniform source term addition over a distance from the wall on the order of the hole diameter is able to predict adiabatic film effectiveness better than a near-wall source term model, while strictly enforcing correct values of integrated boundary layer quantities.
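
    A minimal sketch of the spatial-averaging idea, assuming (illustratively) that the integrated hole-exit mass flow is deposited uniformly over coarse cells within one hole diameter of the wall; the momentum, energy, and turbulence sources would follow the same recipe. The numbers are invented, not taken from the paper.

    ```python
    # Coarse near-wall grid: deposit the integrated coolant-hole exit mass
    # flow as a uniform volumetric source over the cells within one hole
    # diameter d of the wall.
    d = 0.005                   # hole diameter, m
    mdot = 2.0e-4               # hole-exit mass flow, kg/s
    dx, dy, dz = d, d / 4, d    # coarse-cell sizes, m (dy wall-normal)

    ny = round(d / dy)          # cells spanning one diameter off the wall
    source_region_vol = ny * dx * dy * dz

    # Uniform volumetric mass source (kg/s/m^3) added to each tagged cell.
    s_mass = mdot / source_region_vol
    print(f"{ny} near-wall cells, mass source {s_mass:.3e} kg/s/m^3")
    ```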

  13. Ancient Glass: A Literature Search and its Role in Waste Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strachan, Denis M.; Pierce, Eric M.

    2010-07-01

    When developing a performance assessment (PA) model for the long-term disposal of immobilized low-activity waste (ILAW) glass, it is desirable to determine the durability of glass forms over very long periods of time. However, testing is limited to short time spans, so experiments are performed under conditions that accelerate the key geochemical processes that control weathering. Verification that the models currently being used can reliably calculate the long-term behavior of ILAW glass is a key component of the overall PA strategy. Therefore, Pacific Northwest National Laboratory was contracted by Washington River Protection Solutions, LLC to evaluate alternative strategies that can be used for PA source term model validation. One viable alternative strategy is the use of independent experimental data from archaeological studies of ancient or natural glass contained in the literature. These results represent a potential independent experiment dating back approximately 3600 years, to 1600 before the current era (BCE), in the case of ancient glass, and 10^6 years or more in the case of natural glass. The results of this literature review suggest that additional experimental data may be needed before the results from archaeological studies can be used as a tool for model validation of glass weathering and, more specifically, disposal facility performance. This is largely because none of the existing data sets contains all of the information required to conduct PA source term calculations. For example, in many cases the sediments surrounding the glass were not collected and analyzed; without them, comparing computer simulations of concentration flux against field data is not possible. This type of information is important to understanding the element release profile from the glass to the surrounding environment and provides a metric that can be used to calibrate source term models. Although useful, the available literature sources do not contain the information required to simulate the long-term performance of nuclear waste glasses in near-surface or deep geologic repositories. The information that will be required includes (1) experimental measurements to quantify the model parameters, (2) detailed analyses of altered glass samples, and (3) detailed analyses of the sediment surrounding the ancient glass samples.

  14. Effect of second-order exchange in electron-hydrogen scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Madison, D.H.; Bray, I.; McCarthy, I.

    1990-05-07

    The electron-hydrogen scattering problem has been a nemesis for theoretical atomic physicists, because even the most sophisticated theoretical calculations, both perturbative and nonperturbative, do not agree with experiment. The current opinion is that the perturbative approach cannot be used for this problem, since recent second-order calculations are not in agreement with the experimental data and higher-order calculations are deemed impractical. However, these second-order calculations neglected second-order exchange. We have now added exchange to the second-order calculation and have found that the primary source of disagreement between experiment and theory at intermediate energies is attributable not to higher-order terms but to second-order exchange.

  15. Integrating diverse forage sources reduces feed gaps on mixed crop-livestock farms.

    PubMed

    Bell, L W; Moore, A D; Thomas, D T

    2017-12-04

    Highly variable climates induce large variability in the supply of forage for livestock, so farmers must manage their livestock systems to reduce the risk of feed gaps (i.e. periods when livestock feed demand exceeds forage supply). However, mixed crop-livestock farmers can utilise a range of feed sources on their farms to help mitigate these risks. This paper reports on the development and application of a simple whole-farm feed-energy balance calculator, which is used to evaluate the frequency and magnitude of feed gaps. The calculator matches long-term simulations of variation in forage and metabolisable energy supply from diverse sources against energy demand for different livestock enterprises. Scenarios of increasing the diversity of forage sources in livestock systems are investigated for six locations selected to span Australia's crop-livestock zone. We found that systems relying on only one feed source were prone to a higher risk of feed gaps and hence would often have to reduce stocking rates to mitigate these risks or use supplementary feed. At all sites, adding more feed sources to the farm feedbase improved the continuity of supply of both fresh and carry-over forage, reducing the frequency and magnitude of feed deficits. However, there were diminishing returns from making the feedbase more complex, with combinations of two to three feed sources typically achieving the maximum benefits in terms of reducing the risk of feed gaps. Higher stocking rates could be maintained while limiting risk when combinations of other feed sources were introduced into the feedbase. For the same level of risk, a feedbase relying on a diversity of forage sources could support stocking rates 1.4 to 3 times higher than a single pasture source. This suggests that there is significant capacity to mitigate the risk of feed gaps while increasing 'safe' stocking rates through better integration of feed sources on mixed crop-livestock farms across diverse regions and climates.
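
    A minimal sketch of such a whole-farm feed-energy balance, with invented monthly metabolisable-energy supply and demand figures, showing how adding feed sources reduces the frequency and magnitude of deficits:

    ```python
    import numpy as np

    # Invented monthly ME supply (MJ ME/ha) per feed source, and demand.
    pasture = np.array([300, 250, 180, 120, 90, 60, 50, 80, 150, 260, 320, 330])
    crop_graze = np.array([0, 0, 0, 0, 120, 180, 200, 160, 60, 0, 0, 0])
    stubble = np.array([150, 120, 60, 0, 0, 0, 0, 0, 0, 0, 90, 160])
    demand = np.full(12, 260.0)

    def feed_gaps(supply, demand):
        """Return number of deficit months and total deficit magnitude."""
        deficit = np.clip(demand - supply, 0.0, None)
        return int((deficit > 0).sum()), float(deficit.sum())

    for name, feedbase in [
        ("pasture only", pasture),
        ("pasture + crop grazing", pasture + crop_graze),
        ("pasture + crop grazing + stubble", pasture + crop_graze + stubble),
    ]:
        n_months, total = feed_gaps(feedbase, demand)
        print(f"{name:33s} {n_months} deficit months, {total:.0f} MJ ME/ha short")
    ```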

  16. Determination of near and far field acoustics for advanced propeller configurations

    NASA Technical Reports Server (NTRS)

    Korkan, K. D.; Jaeger, S. M.; Kim, J. H.

    1989-01-01

    A method has been studied for predicting the acoustic field of the SR-3 transonic propfan using flow data generated by two versions of the NASPROP-E computer code. Since the flow fields calculated by the solvers include the shock-wave system of the propeller, the nonlinear quadrupole noise source term is included along with the monopole and dipole noise sources in the calculation of the acoustic near field. Acoustic time histories in the near field are determined by transforming the azimuthal coordinate in the rotating, blade-fixed coordinate system to the time coordinate in a nonrotating coordinate system. Fourier analysis of the pressure time histories is used to obtain the frequency spectra of the near-field noise.

  17. Numerical models analysis of energy conversion process in air-breathing laser propulsion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong Yanji; Song Junling; Cui Cunyan

    The energy source term is treated in this paper as the key element in describing the energy conversion process in air-breathing laser propulsion. With some secondary factors ignored, three independent modules (ray transmission, energy source term, and fluid dynamics) were established by coupling the laser radiation transport equation with the fluid mechanics equations. The incident laser beam was simulated using a ray tracing method. The calculated results were in good agreement with those of theoretical analysis and experiments.

  18. Research in atmospheric chemistry and transport

    NASA Technical Reports Server (NTRS)

    Yung, Y. L.

    1982-01-01

    The carbon monoxide cycle was studied by incorporating the known CO sources and sinks in a tracer model which used the winds generated by a general circulation model. The photochemical production and loss terms, which depended on OH radical concentrations, were calculated in an interactive fashion. Comparison of the computed global distribution and seasonal variations of CO with observations was used to yield constraints on the distribution and magnitude of the sources and sinks of CO, and the abundance of OH radicals in the troposphere.

  19. A Multigroup Method for the Calculation of Neutron Fluence with a Source Term

    NASA Technical Reports Server (NTRS)

    Heinbockel, J. H.; Clowdsley, M. S.

    1998-01-01

    Current research under the grant involves the development of a multigroup method for the calculation of low-energy evaporation neutron fluences associated with the Boltzmann equation. This research will enable one to predict radiation exposure under a variety of circumstances. Knowledge of radiation exposure in a free-space environment is a necessity for space travel, high-altitude space planes, and satellite design, because certain radiation environments can cause damage to biological and electronic systems, with both short-term and long-term effects. With a priori knowledge of the environment, one can use prediction techniques to estimate radiation damage to such systems. Appropriate shielding can then be designed to protect both humans and electronic systems exposed to a known radiation environment. This is the goal of the current research efforts involving the multigroup method and the Green's function approach.
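
    As a toy illustration of the multigroup idea (not the Green's function machinery of the grant work), the sketch below solves an infinite-medium, downscatter-only multigroup balance with a fixed source by sweeping from high to low energy. All cross sections are invented.

    ```python
    import numpy as np

    # Balance per group g: sigma_t[g]*phi[g] = sum_{g'} scat[g', g]*phi[g'] + S[g].
    # With downscatter only, a single sweep from fast to slow groups solves it.
    G = 4
    sigma_t = np.array([0.9, 1.0, 1.2, 1.5])     # total cross section, 1/cm
    scat = np.array([                             # scat[g_from, g_to], 1/cm
        [0.30, 0.25, 0.10, 0.05],
        [0.00, 0.35, 0.30, 0.10],
        [0.00, 0.00, 0.45, 0.35],
        [0.00, 0.00, 0.00, 0.60],
    ])
    S = np.array([1.0, 0.5, 0.2, 0.0])            # fixed source, n/cm^3/s

    phi = np.zeros(G)
    for g in range(G):
        inscatter = sum(scat[gp, g] * phi[gp] for gp in range(g))
        phi[g] = (S[g] + inscatter) / (sigma_t[g] - scat[g, g])
    print("group fluxes:", np.round(phi, 4))
    ```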

  20. Atomic processes and equation of state of high Z plasmas for EUV sources and their effects on the spatial and temporal evolution of the plasmas

    NASA Astrophysics Data System (ADS)

    Sasaki, Akira; Sunahara, Atushi; Furukawa, Hiroyuki; Nishihara, Katsunobu; Nishikawa, Takeshi; Koike, Fumihiro

    2016-03-01

    Laser-produced plasma (LPP) extreme ultraviolet (EUV) light sources have been intensively investigated due to their potential application to next-generation semiconductor technology. Current studies focus on the atomic processes and hydrodynamics of the plasmas, with the aims of developing shorter wavelength sources at λ = 6.x nm and improving the conversion efficiency (CE) of λ = 13.5 nm sources. This paper examines the atomic processes of mid-Z elements, which are potential candidates for a λ = 6.x nm source using n = 3-3 transitions. Furthermore, a method to calculate the hydrodynamics of the plasmas, in terms of the initial interaction of a relatively weak prepulse laser with the target, is presented.

  1. Improvement and performance evaluation of the perturbation source method for an exact Monte Carlo perturbation calculation in fixed source problems

    NASA Astrophysics Data System (ADS)

    Sakamoto, Hiroki; Yamamoto, Toshihiro

    2017-09-01

    This paper presents an improvement and performance evaluation of the "perturbation source method", one of the Monte Carlo perturbation techniques. The formerly proposed perturbation source method was first-order accurate, although the method can easily be extended to an exact perturbation method. A transport equation for calculating the exact flux difference caused by a perturbation is solved. A perturbation particle representing a flux difference is explicitly transported in the perturbed system, instead of in the unperturbed system. The source term of the transport equation is defined by the unperturbed flux and the cross section (or optical parameter) changes. The unperturbed flux is provided by an "on-the-fly" technique during the course of the ordinary fixed source calculation for the unperturbed system. A set of perturbation particles is started at the collision points in the perturbed region and tracked until death. For a perturbation in a smaller portion of the whole domain, the efficiency of the perturbation source method can be improved by using a virtual scattering coefficient or cross section in the perturbed region, forcing collisions. Performance is evaluated by comparing the proposed method to other Monte Carlo perturbation methods. Numerical tests performed for particle transport in a two-dimensional geometry reveal that the perturbation source method is less effective than the correlated sampling method for a perturbation in a larger portion of the whole domain. However, for a perturbation in a smaller portion, the perturbation source method outperforms the correlated sampling method. The efficiency depends strongly on the adjustment of the new virtual scattering coefficient or cross section.
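
    For context, the correlated sampling baseline mentioned above can be illustrated in a few lines: the same random histories drive the unperturbed and perturbed systems, so the statistical noise largely cancels in the estimated difference. This toy slab-transmission example is not the authors' perturbation source method, just the comparison idea.

    ```python
    import numpy as np

    # Correlated-sampling estimate of the change in uncollided transmission
    # through a purely absorbing slab when the cross section is perturbed.
    rng = np.random.default_rng(3)
    sigma, dsigma, T = 1.0, 0.05, 2.0     # 1/cm, 1/cm, cm
    n = 200_000

    tau = -np.log(rng.random(n))           # optical depths, sampled once
    t_unpert = (tau / sigma) > T           # transmitted, unperturbed system
    t_pert = (tau / (sigma + dsigma)) > T  # same histories, perturbed system

    diff = t_pert.astype(float) - t_unpert.astype(float)
    print(f"correlated estimate of dT: {diff.mean():+.5f} "
          f"+/- {diff.std(ddof=1) / np.sqrt(n):.5f}")
    print(f"analytic dT: {np.exp(-(sigma + dsigma) * T) - np.exp(-sigma * T):+.5f}")
    ```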

  2. Fukushima Daiichi Radionuclide Inventories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cardoni, Jeffrey N.; Jankovsky, Zachary Kyle

    Radionuclide inventories are generated to permit detailed analyses of the Fukushima Daiichi meltdowns. This is necessary information for severe accident calculations, dose calculations, and source term and consequence analyses. Inventories are calculated using SCALE6 and compared to values predicted by international researchers supporting the OECD/NEA Benchmark Study on the Accident at Fukushima Daiichi Nuclear Power Station (BSAF). Both sets of inventory information are acceptable for best-estimate analyses of the Fukushima reactors. Consistent nuclear information for severe accident codes, including radionuclide class masses and core decay powers, is also derived from the SCALE6 analyses. Key nuclide activity ratios are calculated as functions of burnup and nuclear data in order to explore their utility for nuclear forensics and to support future decommissioning efforts.
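
    One such activity-ratio clock can be sketched directly: the 134Cs/137Cs activity ratio at shutdown scales with burnup, then decays at the difference of the two decay constants. The half-lives below are standard values; the shutdown ratio is a placeholder, not a SCALE6 result.

    ```python
    import numpy as np

    # 134Cs/137Cs activity ratio versus cooling time.
    t_half_134 = 2.0652   # years
    t_half_137 = 30.08    # years
    lam134 = np.log(2) / t_half_134
    lam137 = np.log(2) / t_half_137

    ratio_shutdown = 1.0  # hypothetical activity ratio at shutdown
    for t in (0.0, 1.0, 2.0, 5.0, 10.0):
        ratio = ratio_shutdown * np.exp(-(lam134 - lam137) * t)
        print(f"t = {t:4.1f} y: 134Cs/137Cs = {ratio:.3f}")
    ```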

  3. Beyond the double banana: improved recognition of temporal lobe seizures in long-term EEG.

    PubMed

    Rosenzweig, Ivana; Fogarasi, András; Johnsen, Birger; Alving, Jørgen; Fabricius, Martin Ejler; Scherg, Michael; Neufeld, Miri Y; Pressler, Ronit; Kjaer, Troels W; van Emde Boas, Walter; Beniczky, Sándor

    2014-02-01

    To investigate whether extending the 10-20 array with 6 electrodes in the inferior temporal chain and constructing computed montages increases the diagnostic value of ictal EEG activity originating in the temporal lobe, and to assess the accuracy of computer-assisted spectral source analysis. Forty EEG samples were reviewed by 7 EEG experts in various montages (longitudinal and transversal bipolar, common average, source derivation, source montage, current source density, and reference-free montages) using 2 electrode arrays (the 10-20 array and the extended one). Spectral source analysis used the source montage to calculate a density spectral array, defining the earliest oscillatory onset; from this, phase maps were calculated for localization. The reference standard was the decision of the multidisciplinary epilepsy surgery team on the seizure onset zone. Clinical performance was compared with the double banana (longitudinal bipolar montage, 10-20 array). Adding the inferior temporal electrode chain, computed montages (reference-free, common average, and source derivation), and voltage maps significantly increased the sensitivity. Phase maps had the highest sensitivity and identified ictal activity at an earlier time point than visual inspection. There was no significant difference concerning specificity. The findings advocate the use of these digital EEG technology-derived analysis methods in clinical practice.

  4. Evaluation of the Pool Critical Assembly Benchmark with Explicitly-Modeled Geometry using MCNP6

    DOE PAGES

    Kulesza, Joel A.; Martz, Roger Lee

    2017-03-01

    Despite being one of the most widely used benchmarks for qualifying light water reactor (LWR) radiation transport methods and data, no benchmark calculation of the Oak Ridge National Laboratory (ORNL) Pool Critical Assembly (PCA) pressure vessel wall benchmark facility (PVWBF) using MCNP6 with explicitly modeled core geometry exists. As such, this paper provides results for such an analysis. First, a criticality calculation is used to construct the fixed source term. Next, ADVANTG-generated variance reduction parameters are used within the final MCNP6 fixed source calculations. These calculations provide unadjusted dosimetry results using three sets of dosimetry reaction cross sections of varying ages (those packaged with MCNP6, those from the IRDF-2002 multigroup library, and those from the ACE-formatted IRDFF v1.05 library). These results are then compared to two different sets of measured reaction rates. The comparison agrees in an overall sense within 2%, and within 5% on a specific reaction and dosimetry-location basis. Except for the neptunium dosimetry, the individual foil calculation-to-experiment ratios usually agree within 10% but are typically greater than unity. Finally, in the course of developing these calculations, geometry that has previously not been completely specified is provided herein for the convenience of future analysts.

  5. 75 FR 48743 - Mandatory Reporting of Greenhouse Gases

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-11

    ...EPA is proposing to amend specific provisions in the GHG reporting rule to clarify certain provisions, to correct technical and editorial errors, and to address certain questions and issues that have arisen since promulgation. These proposed changes include providing additional information and clarity on existing requirements, allowing greater flexibility or simplified calculation methods for certain sources in a facility, amending data reporting requirements to provide additional clarity on when different types of GHG emissions need to be calculated and reported, clarifying terms and definitions in certain equations, and technical corrections.

  6. 75 FR 79091 - Mandatory Reporting of Greenhouse Gases

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-17

    ...EPA is amending specific provisions in the greenhouse gas reporting rule to clarify certain provisions, to correct technical and editorial errors, and to address certain questions and issues that have arisen since promulgation. These final changes include generally providing additional information and clarity on existing requirements, allowing greater flexibility or simplified calculation methods for certain sources, amending data reporting requirements to provide additional clarity on when different types of greenhouse gas emissions need to be calculated and reported, clarifying terms and definitions in certain equations and other technical corrections and amendments.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freeze, R.A.; McWhorter, D.B.

    Many emerging remediation technologies are designed to remove contaminant mass from source zones at DNAPL sites in response to regulatory requirements. There is often concern in the regulated community as to whether mass removal actually reduces risk, or whether the small risk reductions achieved warrant the large costs incurred. This paper sets out a proposed framework for quantifying the degree to which risk is reduced as mass is removed from DNAPL source areas in shallow, saturated, low-permeability media. Risk is defined in terms of meeting an alternate concentration limit (ACL) at a compliance well in an aquifer underlying the source zone. The ACL is back-calculated from a carcinogenic health-risk characterization at a downgradient water-supply well. Source-zone mass-removal efficiencies are heavily dependent on the distribution of mass between media (fractures, matrix) and phases (aqueous, sorbed, NAPL). Due to the uncertainties in currently available technology performance data, the scope of the paper is limited to developing a framework for generic technologies rather than making specific risk-reduction calculations for individual technologies. Despite the qualitative nature of the exercise, the results imply that very high total mass-removal efficiencies are required to achieve significant long-term risk reduction with technology applications of finite duration. This paper is not an argument for no action at contaminated sites. Rather, it provides support for the conclusions of Cherry et al. (1992) that the primary goal of current remediation should be short-term risk reduction through containment, with the aim of passing on to future generations site conditions that are well-suited to future applications of emerging technologies with improved mass-removal capabilities.
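
    A minimal sketch of the ACL back-calculation chain under textbook-style assumptions (hypothetical slope factor, intake parameters, and dilution-attenuation factor; not the paper's site-specific values):

    ```python
    # Back-calculate an alternate concentration limit (ACL) at a compliance
    # well from a target carcinogenic risk at a downgradient supply well.
    target_risk = 1e-6        # acceptable incremental lifetime cancer risk
    slope_factor = 1.0e-2     # (mg/kg-day)^-1, hypothetical value
    intake_rate = 2.0         # L/day drinking water
    body_weight = 70.0        # kg
    exposure_factor = (30 * 350) / (70 * 365)   # 30 y exposure, 70 y life

    # Risk = C * IR * EF * SF / BW  =>  solve for C at the supply well (mg/L).
    c_supply_well = target_risk * body_weight / (
        slope_factor * intake_rate * exposure_factor)

    daf = 20.0                # assumed dilution-attenuation factor
    acl = c_supply_well * daf # allowed concentration at the compliance well
    print(f"supply-well limit {c_supply_well:.3e} mg/L, ACL {acl:.3e} mg/L")
    ```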

  8. Analysis of accident sequences and source terms at waste treatment and storage facilities for waste generated by U.S. Department of Energy Waste Management Operations, Volume 3: Appendixes C-H

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mueller, C.; Nabelssi, B.; Roglans-Ribas, J.

    1995-04-01

    This report contains the appendices for the Analysis of Accident Sequences and Source Terms at Waste Treatment and Storage Facilities for Waste Generated by U.S. Department of Energy Waste Management Operations. The main report documents the methodology, computational framework, and results of facility accident analyses performed as part of the U.S. Department of Energy (DOE) Waste Management Programmatic Environmental Impact Statement (WM PEIS). The accident sequences potentially important to human health risk are specified, their frequencies are assessed, and the resultant radiological and chemical source terms are evaluated. A personal computer-based computational framework and database have been developed that provide these results as input to the WM PEIS for calculation of human health risk impacts. This report summarizes the accident analyses and aggregates the key results for each of the waste streams. Source terms are estimated and results are presented for each of the major DOE sites and facilities, by WM PEIS alternative, for each waste stream. The appendices identify the potential atmospheric release of each toxic chemical or radionuclide for each accident scenario studied. They also provide discussion of specific accident analysis data and guidance used or consulted in this report.

  9. Quantification of airport community noise impact in terms of noise levels, population density, and human subjective response

    NASA Technical Reports Server (NTRS)

    Deloach, R.

    1981-01-01

    The Fraction Impact Method (FIM), developed by the National Research Council (NRC) for assessing the amount and physiological effect of noise, is described. Here, the number of people exposed to a given level of noise is multiplied by a weighting factor that depends on noise level. It is pointed out that the Aircraft-noise Levels and Annoyance MOdel (ALAMO), recently developed at NASA Langley Research Center, can perform the NRC fractional impact calculations for given modes of operation at any U.S. airport. The sensitivity of these calculations to errors in estimates of population, noise level, and human subjective response is discussed. It is found that a change in source noise causes a substantially smaller change in contour area than would be predicted simply on the basis of inverse square law considerations. Another finding is that the impact calculations are generally less sensitive to source noise errors than to systematic errors in population or subjective response.
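
    The fractional impact arithmetic itself is compact, as the sketch below shows with an invented population distribution and a cubic fit to the Schultz annoyance data as the weighting function (the specific dose-response curve used in ALAMO may differ):

    ```python
    import numpy as np

    def pct_highly_annoyed(dnl):
        """Cubic fit to the Schultz data: % highly annoyed vs DNL (dB)."""
        return 0.8553 * dnl - 0.0401 * dnl**2 + 0.00047 * dnl**3

    contours = np.array([75.0, 70.0, 65.0])     # DNL contour levels, dB
    population = np.array([1200, 8500, 42000])  # invented people per band

    weights = pct_highly_annoyed(contours) / 100.0
    impact = population * weights
    print("fractional impact by band:", np.round(impact, 0))
    print("total noise impact (person-equivalents):", round(impact.sum(), 0))
    ```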

  10. Coupling Aggressive Mass Removal with Microbial Reductive Dechlorination for Remediation of DNAPL Source Zones: A Review and Assessment

    PubMed Central

    Christ, John A.; Ramsburg, C. Andrew; Abriola, Linda M.; Pennell, Kurt D.; Löffler, Frank E.

    2005-01-01

    The infiltration of dense non-aqueous-phase liquids (DNAPLs) into the saturated subsurface typically produces a highly contaminated zone that serves as a long-term source of dissolved-phase groundwater contamination. Applications of aggressive physical–chemical technologies to such source zones may remove > 90% of the contaminant mass under favorable conditions. The remaining contaminant mass, however, can create a rebounding of aqueous-phase concentrations within the treated zone. Stimulation of microbial reductive dechlorination within the source zone after aggressive mass removal has recently been proposed as a promising staged-treatment remediation technology for transforming the remaining contaminant mass. This article reviews available laboratory and field evidence that supports the development of a treatment strategy that combines aggressive source-zone removal technologies with subsequent promotion of sustained microbial reductive dechlorination. Physical–chemical source-zone treatment technologies compatible with posttreatment stimulation of microbial activity are identified, and studies examining the requirements and controls (i.e., limits) of reductive dechlorination of chlorinated ethenes are investigated. Illustrative calculations are presented to explore the potential effects of source-zone management alternatives. Results suggest that, for the favorable conditions assumed in these calculations (i.e., statistical homogeneity of aquifer properties, known source-zone DNAPL distribution, and successful bioenhancement in the source zone), source longevity may be reduced by as much as an order of magnitude when physical–chemical source-zone treatment is coupled with reductive dechlorination. PMID:15811838

  11. Operational source receptor calculations for large agglomerations

    NASA Astrophysics Data System (ADS)

    Gauss, Michael; Shamsudheen, Semeena V.; Valdebenito, Alvaro; Pommier, Matthieu; Schulz, Michael

    2016-04-01

    For air quality policy, an important question is how much of the air pollution within an urbanized region can be attributed to local sources and how much is imported through long-range transport. This is critical information for a correct assessment of the effectiveness of potential emission measures. The ratio between indigenous and long-range transported air pollution for a given region depends on its geographic location, the size of its area, the strength and spatial distribution of emission sources, and the time of the year, but also, very strongly, on the current meteorological conditions, which change from day to day and thus make it important to provide such calculations in near-real-time to support short-term legislation. Similarly, long-term analysis over longer periods (e.g. one year), or of specific air quality episodes in the past, can help to scientifically underpin multi-regional agreements and long-term legislation. Within the European MACC projects (Monitoring Atmospheric Composition and Climate) and the transition to the operational CAMS service (Copernicus Atmosphere Monitoring Service), the computationally efficient EMEP MSC-W air quality model has been applied with detailed emission data and comprehensive calculations of chemistry and microphysics, driven by high-quality meteorological forecast data (up to 96-hour forecasts), to provide source-receptor calculations on a regular basis in forecast mode. In its current state, the product allows the user to choose among different regions and regulatory pollutants (e.g. ozone and PM) to assess the effectiveness of hypothetical reductions in air pollutant emissions that are implemented immediately, either within the agglomeration or outside it. The effects are visualized as bar charts showing the resulting changes in air pollution levels within the agglomeration as a function of time (hourly resolution, 0 to 4 days into the future). The bar charts not only allow assessing the effects of emission reduction measures but also indicate the relative importance of indigenous versus imported air pollution. The calculations are currently performed weekly by MET Norway for the Paris, London, Berlin, Oslo, Po Valley, and Rhine-Ruhr regions, and the results are provided free of charge at the MACC website (http://www.gmes-atmosphere.eu/services/aqac/policy_interface/regional_sr/). A proposal to extend this service to all EU capitals on a daily basis within the Copernicus Atmosphere Monitoring Service is currently under review. The tool is an important example of the increased application of scientific tools to operational services that support air quality policy. This paper will describe the tool in more detail, focusing on the experimental setup, underlying assumptions, uncertainties, computational demand, and its usefulness for air quality policy. Options for applying the tool to agglomerations outside the EU will also be discussed (with reference to, e.g., PANDA, a European-Chinese collaboration project).

  12. Radiological assessment. A textbook on environmental dose analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Till, J.E.; Meyer, H.R.

    1983-09-01

    Radiological assessment is the quantitative process of estimating the consequences to humans resulting from the release of radionuclides to the biosphere. It is a multidisciplinary subject requiring the expertise of a number of individuals in order to predict source terms, describe environmental transport, calculate internal and external dose, and extrapolate dose to health effects. Up to this time there has been no comprehensive book describing, on a uniform and comprehensive level, the techniques and models used in radiological assessment. Radiological Assessment is based on material presented at the 1980 Health Physics Society Summer School held in Seattle, Washington. The material has been expanded and edited to make it comprehensive in scope and useful as a text. Topics covered include (1) source terms for nuclear facilities and medical and industrial sites; (2) transport of radionuclides in the atmosphere; (3) transport of radionuclides in surface waters; (4) transport of radionuclides in groundwater; (5) terrestrial and aquatic food chain pathways; (6) reference man: a system for internal dose calculations; (7) internal dosimetry; (8) external dosimetry; (9) models for special-case radionuclides; (10) calculation of health effects in irradiated populations; (11) evaluation of uncertainties in environmental radiological assessment models; (12) regulatory standards for environmental releases of radionuclides; (13) development of computer codes for radiological assessment; and (14) assessment of accidental releases of radionuclides.

  13. On the numerical calculation of hydrodynamic shock waves in atmospheres by an FCT method

    NASA Astrophysics Data System (ADS)

    Schmitz, F.; Fleck, B.

    1993-11-01

    The numerical calculation of vertically propagating hydrodynamic shock waves in a plane atmosphere using the ETBFCT version of the Flux-Corrected Transport (FCT) method of Boris and Book is discussed. The results are compared with results obtained by a characteristic method with shock fitting. We show that the use of the internal energy density as a dependent variable, instead of the total energy density, can give very inaccurate results. Consistent discretization rules for the gravitational source terms are derived. The improvement of the results by an additional iteration step is discussed. The FCT method appears to be an excellent method for the accurate calculation of shock waves in an atmosphere.

  14. Deflection of light to second order in conformal Weyl gravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sultana, Joseph, E-mail: joseph.sultana@um.edu.mt

    2013-04-01

    We reexamine the deflection of light in conformal Weyl gravity obtained in Sultana and Kazanas (2010) by extending the calculation, based on the procedure of Rindler and Ishak for the bending angle by a centrally concentrated, spherically symmetric matter distribution, to second order in M/R, where M is the mass of the source and R is the impact parameter. It has recently been reported in Bhattacharya et al. (JCAP 09 (2010) 004; JCAP 02 (2011) 028) that when this calculation is done to second order, the term γr in the Mannheim-Kazanas metric again yields the paradoxical contribution γR (where the bending angle is proportional to the impact parameter) obtained by standard formalisms appropriate to asymptotically flat spacetimes. We show that no such contribution is obtained in a second-order calculation and that the effects of the term γr in the metric are again insignificant, as reported in our earlier work.

  15. Inverse modeling of April 2013 radioxenon detections

    NASA Astrophysics Data System (ADS)

    Hofman, Radek; Seibert, Petra; Philipp, Anne

    2014-05-01

    Significant concentrations of radioactive xenon isotopes (radioxenon) were detected in April 2013 in Japan by the International Monitoring System (IMS) for verification of the Comprehensive Nuclear-Test-Ban Treaty (CTBT). In particular, three detections of Xe-133 made between 2013-04-07 18:00 UTC and 2013-04-09 06:00 UTC at the station JPX38 are quite notable with respect to the measurement history of the station. Our goal is to analyze the data and perform inverse modeling under different assumptions. This work is useful with respect to nuclear test monitoring as well as for the analysis of, and response to, nuclear emergencies. Two main scenarios are pursued: (i) the source location is assumed to be known (the DPRK test site); (ii) the source location is considered unknown. We attempt to estimate the source strength in scenario (i), and the source strength along with its plausible location in scenario (ii), compatible with the data. We also consider the possibility of a vertically distributed source. Calculations of source-receptor sensitivity (SRS) fields and the subsequent inversion are aimed at going beyond the routine calculations performed by the CTBTO. For the SRS calculations, we employ the Lagrangian particle dispersion model FLEXPART with high-resolution ECMWF meteorological data (grid cell sizes of 0.5, 0.25, and ca. 0.125 deg). This is important in situations where receptors or sources are located in complex terrain, which is the case for the likely source of the detections, the DPRK test site. The SRS fields are calculated with convection enabled in FLEXPART, which also increases model accuracy. In the variational inversion procedure, attention is paid not only to all significant detections and their uncertainties but also to non-detections, which can have a large impact on inversion quality. We try to develop and implement an objective algorithm for the inclusion of relevant data, in which samples from the temporal and spatial vicinity of significant detections are added in an iterative manner and the inversion is recalculated in each iteration. This procedure should gradually narrow down the set of hypotheses on the source term, where the source term is understood here as an emission in both the spatial and temporal domains. Especially in scenario (ii) we expect a strong impact of non-detections on the reduction of possible solutions. For these and other purposes, such as statistical quantification of typical background values, measurements from all IMS noble gas stations north of 30 deg S for the period from January to June 2013 were extracted from the vDEC platform. We would like to acknowledge the Preparatory Commission for the CTBTO for kindly providing limited access to the IMS data. This work contains only the opinions of the authors, which cannot in any case establish legal engagement of the Provisional Technical Secretariat of the CTBTO. This work is partially financed through the project "PREPARE: Innovative integrated tools and platforms for radiological emergency preparedness and post-accident response in Europe" (FP7, Grant 323287).

  16. Numerical modeling of materials processing applications of a pulsed cold cathode electron gun

    NASA Astrophysics Data System (ADS)

    Etcheverry, J. I.; Martínez, O. E.; Mingolo, N.

    1998-04-01

    A numerical study of the application of a pulsed cold cathode electron gun to materials processing is performed. A simple semiempirical model of the discharge is used, together with backscattering and energy deposition profiles obtained by a Monte Carlo technique, in order to evaluate the energy source term inside the material. The numerical computation of the heat equation with the calculated source term is performed in order to obtain useful information on melting and vaporization thresholds, melted radius and depth, and on the dependence of these variables on processing parameters such as operating pressure, initial voltage of the discharge and cathode-sample distance. Numerical results for stainless steel are presented, which demonstrate the need for several modifications of the experimental design in order to achieve a better efficiency.

  17. QCD sum rules study of meson-baryon sigma terms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erkol, Gueray; Oka, Makoto; Turan, Guersevil

    2008-11-01

    The pion-baryon sigma terms and the strange-quark condensates of the octet and the decuplet baryons are calculated by employing the method of QCD sum rules. We evaluate the vacuum-to-vacuum transition matrix elements of two baryon interpolating fields in an external isoscalar-scalar field and use a Monte Carlo-based approach to systematically analyze the sum rules and the uncertainties in the results. We extract the ratios of the sigma terms, which have rather high accuracy and minimal dependence on QCD parameters. We discuss the sources of uncertainties and comment on possible strangeness content of the nucleon and the Delta.

  18. Numerical investigations of low-density nozzle flow by solving the Boltzmann equation

    NASA Technical Reports Server (NTRS)

    Deng, Zheng-Tao; Liaw, Goang-Shin; Chou, Lynn Chen

    1995-01-01

    A two-dimensional finite-difference code to solve the BGK-Boltzmann equation has been developed. The solution procedure consists of three steps: (1) transforming the BGK-Boltzmann equation into two simultaneous partial differential equations by taking moments of the distribution function with respect to the molecular velocity u_z, with weighting factors 1 and u_z^2; (2) solving the transformed equations in physical space based on a time-marching technique with four-stage Runge-Kutta time integration, for each discrete ordinate, where Roe's second-order upwind difference scheme is used to discretize the convective terms and the collision terms are treated as source terms; and (3) using the newly calculated distribution functions at each point in physical space to calculate the macroscopic flow parameters by the modified Gaussian quadrature formula. Repeating steps 2 and 3, the time-marching procedure stops when the convergence criterion is reached. A low-density nozzle flow field has been calculated with this newly developed code. The BGK-Boltzmann solution and experimental data show excellent agreement, demonstrating that numerical solutions of the BGK-Boltzmann equation are ready to be experimentally validated.

  19. The effect of nonlinear propagation on heating of tissue: A numerical model of diagnostic ultrasound beams

    NASA Astrophysics Data System (ADS)

    Cahill, Mark D.; Humphrey, Victor F.; Doody, Claire

    2000-07-01

    Thermal safety indices for diagnostic ultrasound beams are calculated under the assumption that the sound propagates under linear conditions. A non-axisymmetric finite difference model is used to solve the KZK equation, and so to model the beam of a diagnostic scanner in pulsed Doppler mode. Beams from both a uniform focused rectangular source and a linear array are considered. Calculations are performed in water, and in attenuating media with tissue-like characteristics. Attenuating media are found to exhibit significant nonlinear effects for finite-amplitude beams. The resulting loss of intensity by the beam is then used as the source term in a model of tissue heating to estimate the maximum temperature rises. These are compared with the thermal indices, derived from the properties of the water-propagated beams.
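
    The heating step rests on a standard result: for a plane wave of intensity I in a medium with absorption coefficient \alpha, the volumetric heat deposition is q = 2\alpha I, which then drives a conduction equation of the form

        \rho c \,\frac{\partial T}{\partial t} = \kappa \nabla^{2} T + q.

    This is the generic form of such a heating model (perfusion omitted); the paper's implementation details may differ.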

  20. Management of Ultimate Risk of Nuclear Power Plants by Source Terms - Lessons Learned from the Chernobyl Accident

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Genn Saji

    2006-07-01

    The term 'ultimate risk' is used here to describe the probabilities and radiological consequences that should be incorporated in the siting, containment design, and accident management of nuclear power plants for hypothetical accidents. It is closely related to the source terms specified in siting criteria, which assure an adequate separation of the radioactive inventories of the plants from the public in the event of a hypothetical, severe accident. The author would like to point out that current source terms, which are based on information from the Windscale accident (1957) through TID-14844, are very outdated and do not incorporate lessons learned from either the Three Mile Island (TMI, 1979) or the Chernobyl (1986) accident, two of the most severe accidents ever experienced. As a result of the observations of benign radionuclide releases at TMI, the technical community in the US felt that a more realistic evaluation of severe reactor accident source terms was necessary. Against this background, the 'source term research project' was organized in 1984 to respond to these challenges. Unfortunately, soon after the final report from this project was released, the Chernobyl accident occurred. Due to the enormous consequences of that accident, the once-optimistic prospects for establishing a more realistic source term were completely shattered. The Chernobyl accident, with its human death toll and the dispersion of a large part of the fission fragment inventory into the environment, significantly degraded the public's acceptance of nuclear energy throughout the world. In spite of this, nuclear communities have been prudent in responding to the public's anxiety about the ultimate safety of nuclear plants, since many unknowns still surrounded the mechanism of the Chernobyl accident. In order to resolve some of these mysteries, the author has performed a scoping study of the dispersion and deposition mechanisms of fuel particles and fission fragments during the initial phase of the Chernobyl accident. Through this study, it is now possible to broadly reconstruct the radiological consequences by using a dispersion calculation technique combined with the meteorological data at the time of the accident and the 137Cs land contamination densities measured and reported around the Chernobyl area. Although it is challenging to incorporate lessons learned from the Chernobyl accident into the source term issues, the author has already developed an example of safety goals that incorporates the radiological consequences of the accident. The example provides safety goals by specifying source term releases in a graded approach in combination with probabilities, i.e. risks. The author believes that future source term specifications should be directly linked with safety goals. (author)

  1. The Scaling of Broadband Shock-Associated Noise with Increasing Temperature

    NASA Technical Reports Server (NTRS)

    Miller, Steven A.

    2012-01-01

    A physical explanation for the saturation of broadband shock-associated noise (BBSAN) intensity with increasing jet stagnation temperature has eluded investigators. An explanation is proposed for this phenomenon with the use of an acoustic analogy. For this purpose the acoustic analogy of Morris and Miller is examined. To isolate the relevant physics, the scaling of BBSAN at the peak intensity level at the sideline (psi = 90 degrees) observer location is examined. Scaling terms are isolated from the acoustic analogy and the result is compared using a convergent nozzle with the experiments of Bridges and Brown, and using a convergent-divergent nozzle with the experiments of Kuo, McLaughlin, and Morris, at four nozzle pressure ratios in increments of total temperature ratios from one to four. The equivalent source within the framework of the acoustic analogy for BBSAN is based on local field quantities at shock wave shear layer interactions. The equivalent source, combined with accurate calculations of the propagation of sound through the jet shear layer using an adjoint vector Green's function solver of the linearized Euler equations, allows for predictions that retain the scaling with respect to stagnation pressure and capture the saturation of BBSAN with increasing stagnation temperature. This is a minor change to the source model relative to the previously developed models. The full development of the scaling term is shown. The sources and vector Green's function solver are informed by steady Reynolds-Averaged Navier-Stokes solutions. These solutions are examined as a function of stagnation temperature at the first shock wave shear layer interaction. It is discovered that saturation of BBSAN with increasing jet stagnation temperature occurs due to a balance between the amplification of the sound propagation through the shear layer and the source term scaling.

  2. Electric dipole moments of light nuclei from {chi}EFT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Higa, Renato

    I present recent calculations of EDMs of light nuclei using chiral effective field theory techniques. At leading-order, we argue that they can be expressed in terms of six CP-violating low-energy constants. With our expressions, eventual non-zero measurements of EDMs of deuteron, helion, and triton can be combined to disentangle the different sources of CP-violation.

  3. Electric dipole moments of light nuclei from χEFT

    NASA Astrophysics Data System (ADS)

    Higa, Renato

    2013-03-01

    I present recent calculations of EDMs of light nuclei using chiral effective field theory techniques. At leading-order, we argue that they can be expressed in terms of six CP-violating low-energy constants. With our expressions, eventual non-zero measurements of EDMs of deuteron, helion, and triton can be combined to disentangle the different sources of CP-violation.

  4. Finite element solution to passive scalar transport behind line sources under neutral and unstable stratification

    NASA Astrophysics Data System (ADS)

    Liu, Chun-Ho; Leung, Dennis Y. C.

    2006-02-01

    This study employed a direct numerical simulation (DNS) technique to contrast the plume behaviours and mixing of a passive scalar emitted from line sources (aligned with the spanwise direction) in neutrally and unstably stratified open-channel flows. The DNS model was developed using the Galerkin finite element method (FEM), employing trilinear brick elements with equal-order interpolating polynomials, to solve the momentum and continuity equations together with the conservation equations for energy and mass in incompressible flow. The second-order accurate fractional-step method was used to handle the implicit velocity-pressure coupling. It also segregated the solution of the advection and diffusion terms, which were then integrated in time by the explicit third-order accurate Runge-Kutta method and the implicit second-order accurate Crank-Nicolson method, respectively. The buoyancy term under unstable stratification was integrated in time explicitly by the first-order accurate Euler method. The DNS FEM model calculated the scalar-plume development and the mean plume path. In particular, it calculated the plume meandering in the wall-normal direction under unstable stratification, which agreed well with laboratory and field measurements, as well as with previous modelling results available in the literature.
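
    The time-integration split described above can be illustrated on a one-dimensional scalar: an explicit third-order Runge-Kutta step for advection, a Crank-Nicolson step for diffusion, and a slot for an explicit Euler buoyancy forcing. Everything below (grid, constants, field) is an invented toy, not the paper's FEM discretization.

        # Schematic RK3 (advection) + Crank-Nicolson (diffusion) splitting, periodic 1D.
        import numpy as np

        n, dx, dt = 128, 1.0 / 128, 1e-4
        kappa, u = 1e-3, 1.0
        c = np.exp(-((np.linspace(0, 1, n) - 0.3) / 0.05)**2)  # scalar field

        def adv(f):  # upwind advection term -u df/dx, periodic
            return -u * (f - np.roll(f, 1)) / dx

        # Crank-Nicolson system: (I - dt/2 kappa L) c_new = (I + dt/2 kappa L) c_old
        L = (np.roll(np.eye(n), 1, 1) - 2 * np.eye(n) + np.roll(np.eye(n), -1, 1)) / dx**2
        A = np.eye(n) - 0.5 * dt * kappa * L
        B = np.eye(n) + 0.5 * dt * kappa * L

        for _ in range(1000):
            k1 = adv(c)                            # SSP third-order Runge-Kutta
            k2 = adv(c + dt * k1)
            k3 = adv(c + 0.25 * dt * (k1 + k2))
            c = c + dt * (k1 + k2 + 4.0 * k3) / 6.0
            c = np.linalg.solve(A, B @ c)          # implicit diffusion step
            # an explicit Euler buoyancy forcing would be added here
        print("scalar mass:", c.sum() * dx)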

  5. The importance of quadrupole sources in prediction of transonic tip speed propeller noise

    NASA Technical Reports Server (NTRS)

    Hanson, D. B.; Fink, M. R.

    1978-01-01

    A theoretical analysis is presented for the harmonic noise of high-speed, open rotors. Far-field acoustic radiation equations based on the Ffowcs Williams/Hawkings theory are derived for a static rotor with thin blades and zero lift. Near the plane of rotation, the dominant sources are the volume displacement and the rho u(sup 2) quadrupole, where u is the disturbance velocity component in the direction of blade motion. These sources are compared in both the time domain and the frequency domain using two-dimensional airfoil theories valid in the subsonic, transonic, and supersonic speed ranges. For nonlifting parabolic-arc blades, the two sources are equally important at speeds between the section critical Mach number and a Mach number of one. However, for moderately subsonic or fully supersonic flow over thin blade sections, the quadrupole term is negligible. It is concluded for thin blades that significant quadrupole noise radiation is strictly a transonic phenomenon and that it can be suppressed with blade sweep. Noise calculations are presented for two rotors, one simulating a helicopter main rotor and the other a model propeller. For the latter, agreement with test data was substantially improved by including the quadrupole source term.

  6. Detailed source term estimation of the atmospheric release for the Fukushima Daiichi Nuclear Power Station accident by coupling simulations of an atmospheric dispersion model with an improved deposition scheme and oceanic dispersion model

    NASA Astrophysics Data System (ADS)

    Katata, G.; Chino, M.; Kobayashi, T.; Terada, H.; Ota, M.; Nagai, H.; Kajino, M.; Draxler, R.; Hort, M. C.; Malo, A.; Torii, T.; Sanada, Y.

    2015-01-01

    Temporal variations in the amount of radionuclides released into the atmosphere during the Fukushima Daiichi Nuclear Power Station (FNPS1) accident and their atmospheric and marine dispersion are essential to evaluate the environmental impacts and resultant radiological doses to the public. In this paper, we estimate the detailed atmospheric releases during the accident using a reverse estimation method which calculates the release rates of radionuclides by comparing measurements of air concentration of a radionuclide or its dose rate in the environment with the ones calculated by atmospheric and oceanic transport, dispersion and deposition models. The atmospheric and oceanic models used are WSPEEDI-II (Worldwide version of System for Prediction of Environmental Emergency Dose Information) and SEA-GEARN-FDM (Finite difference oceanic dispersion model), both developed by the authors. A sophisticated deposition scheme, which deals with dry and fog-water depositions, cloud condensation nuclei (CCN) activation, and subsequent wet scavenging due to mixed-phase cloud microphysics (in-cloud scavenging) for radioactive iodine gas (I2 and CH3I) and other particles (CsI, Cs, and Te), was incorporated into WSPEEDI-II to improve the surface deposition calculations. The results revealed that the major releases of radionuclides due to the FNPS1 accident occurred in the following periods during March 2011: the afternoon of 12 March due to the wet venting and hydrogen explosion at Unit 1, midnight of 14 March when the SRV (safety relief valve) was opened three times at Unit 2, the morning and night of 15 March, and the morning of 16 March. According to the simulation results, the highest radioactive contamination areas around FNPS1 were created from 15 to 16 March by complicated interactions among rainfall, plume movements, and the temporal variation of release rates. The simulation by WSPEEDI-II using the new source term reproduced the local and regional patterns of cumulative surface deposition of total 131I and 137Cs and air dose rate obtained by airborne surveys. The new source term was also tested using three atmospheric dispersion models (Modèle Lagrangien de Dispersion de Particules d'ordre zéro: MLDP0, Hybrid Single Particle Lagrangian Integrated Trajectory Model: HYSPLIT, and Met Office's Numerical Atmospheric-dispersion Modelling Environment: NAME) for regional and global calculations, and the calculated results showed good agreement with observed air concentration and surface deposition of 137Cs in eastern Japan.

  7. Three-dimensional calculations of rotor-airframe interaction in forward flight

    NASA Technical Reports Server (NTRS)

    Zori, Laith A. J.; Mathur, Sanjay R.; Rajagopalan, R. G.

    1992-01-01

    A method for analyzing the mutual aerodynamic interaction between a rotor and an airframe model has been developed. This technique models the rotor implicitly through the source terms of the momentum equations. A three-dimensional, incompressible, laminar, Navier-Stokes solver in cylindrical coordinates was developed for analyzing the rotor/airframe problem. The calculations are performed on a simplified model at an advance ratio of 0.1. The airframe surface pressure predictions are found to be in good agreement with wind tunnel test data. Results are presented for velocity and pressure field distributions in the wake of the rotor.

  8. Laser magnetic resonance in supersonic plasmas - The rotational spectrum of SH(+)

    NASA Technical Reports Server (NTRS)

    Hovde, David C.; Saykally, Richard J.

    1987-01-01

    The rotational spectrum of SH(+) in the v = 0 and v = 1 levels of its X(3)Sigma(-) ground state was measured by laser magnetic resonance. Rotationally cold (Tr = 30 K), vibrationally excited (Tv = 3000 K) ions were generated in a corona-excited supersonic expansion. The use of this source to identify ion signals is described. Improved molecular parameters were obtained; term values are presented from which astrophysically important transitions may be calculated. Accurate hyperfine parameters for both vibrational levels were determined, and the vibrational dependence of the Fermi contact interaction was resolved. The hyperfine parameters agree well with recent many-body perturbation theory calculations.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benites, J. (graduate student, CBAP postgraduate program, Universidad Autonoma de Nayarit, Carretera Tepic-Compostela km 9, C.P. 63780, Xalisco, Nayarit, Mexico); Vega-Carrillo, H. R.

    Neutron spectra and the ambient dose equivalent were calculated inside the bunker of a 15 MV Varian linac, model CLINAC iX. Calculations were carried out using Monte Carlo methods. Neutron spectra in the vicinity of the isocentre show the presence of evaporation and knock-on neutrons produced by the source term, while epithermal and thermal neutrons remain constant regardless of the distance to the isocentre, due to room return. The neutron spectrum becomes softer as the detector moves along the maze. The ambient dose equivalent decreases along the maze but does not follow the 1/r{sup 2} rule, due to changes in the neutron spectra.

  10. Light-assisted templated self assembly using photonic crystal slabs.

    PubMed

    Mejia, Camilo A; Dutt, Avik; Povinelli, Michelle L

    2011-06-06

    We explore a technique which we term light-assisted templated self-assembly. We calculate the optical forces on colloidal particles over a photonic crystal slab. We show that exciting a guided resonance mode of the slab yields a resonantly-enhanced, attractive optical force. We calculate the lateral optical forces above the slab and predict that stably trapped periodic patterns of particles are dependent on wavelength and polarization. Tuning the wavelength or polarization of the light source may thus allow the formation and reconfiguration of patterns. We expect that this technique may be used to design all-optically reconfigurable photonic devices.

  11. Long-Term Variations of the EOP and ICRF2

    NASA Technical Reports Server (NTRS)

    Zharov, Vladimir; Sazhin, Mikhail; Sementsov, Valerian; Sazhina, Olga

    2010-01-01

    We analyzed the time series of the coordinates of the ICRF radio sources. We show that some of the radio sources, including the defining sources, exhibit significant apparent motion. The stability of the celestial reference frame is provided by a no-net-rotation condition applied to the defining sources. In our case this condition leads to a rotation of the frame axes with time. We calculated the effect of this rotation on the Earth orientation parameters (EOP). In order to improve the stability of the celestial reference frame, we suggest a new method for the selection of the defining sources. The method consists of two criteria: the first we call cosmological and the second kinematical. It is shown that a subset of the ICRF sources selected according to the cosmological criteria provides the most stable reference frame for the next decade.

  12. Computational Fluid Dynamics Simulation of Flows in an Oxidation Ditch Driven by a New Surface Aerator.

    PubMed

    Huang, Weidong; Li, Kun; Wang, Gan; Wang, Yingzhe

    2013-11-01

    In this article, we present a newly designed inverse umbrella surface aerator and test its performance in driving the flow of an oxidation ditch. Results show that it performs better in driving the oxidation ditch than the original design, with a higher average velocity and a more uniform flow field. We also present a computational fluid dynamics model for predicting the flow field in an oxidation ditch driven by a surface aerator. An improved momentum source term approach for simulating the flow field of the oxidation ditch driven by an inverse umbrella surface aerator was developed and validated through experiments. Four turbulence models were investigated with this approach, including the standard k - ɛ model, the RNG k - ɛ model, the realizable k - ɛ model, and the Reynolds stress model, and the predicted data were compared with those calculated with the multiple rotating reference frame (MRF) and sliding mesh (SM) approaches. Results of the momentum source term approach are in good agreement with the experimental data, and its prediction accuracy is better than MRF and close to SM. The momentum source term approach also has lower computational expense, is simpler to preprocess, and is easier to use.

  13. The time variability of Jupiter's synchrotron radiation

    NASA Astrophysics Data System (ADS)

    Bolton, Scott Jay

    1991-02-01

    The time variability of the Jovian synchrotron emission is investigated by analyzing radio observations of Jupiter at decimetric wavelengths. The observations comprise two distinct sets of measurements addressing both short-term (days to weeks) and long-term (months to years) variability. The study of long-term variations utilizes a set of measurements made several times each month with the NASA Deep Space Network (DSN) antennas operating at 2295 MHz (13.1 cm). The DSN data set, covering 1971 through 1985, is compared with a set of measurements of the solar wind from a number of Earth-orbiting spacecraft. The analysis indicates a maximum correlation between the synchrotron emission and the solar wind ram pressure with a two-year time lag. Physical mechanisms affecting the synchrotron emission are discussed with an emphasis on radial diffusion. Calculations are performed that suggest the correlation is consistent with inward adiabatic diffusion of solar wind particles driven by Brice's model of ionospheric neutral wind convection (Brice 1972). The implication is that the solar wind could be a source of the particles of Jupiter's radiation belts. The investigation of short-term variability focuses on a three-year Jupiter observing program using the University of California's Hat Creek radio telescope operating at 1400 MHz (21 cm). Measurements are made every two days during the months surrounding opposition. Results from the three-year program suggest short-term variability near the 10-20 percent level but should be considered inconclusive due to scheduling and observational limitations. A discussion of magnetospheric processes on short timescales identifies wave-particle interactions as a candidate source. Further analysis finds that the short-term variations could be related to whistler-mode wave-particle interactions in the radiation belts associated with atmospheric lightning on Jupiter. However, theoretical calculations of wave-particle interactions leave open the question of whether whistler-mode waves can interact with the synchrotron-emitting electrons.
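
    The long-term correlation analysis amounts to scanning a lagged cross-correlation for its maximum. The toy below builds synthetic monthly series with a built-in 24-month delay and recovers it; the real inputs would be the DSN flux densities and the spacecraft solar-wind measurements.

        # Lagged-correlation sketch with synthetic monthly data.
        import numpy as np

        rng = np.random.default_rng(1)
        months = 15 * 12
        pressure = rng.normal(size=months)                             # solar wind ram pressure
        flux = np.roll(pressure, 24) + 0.5 * rng.normal(size=months)   # 24-month lag built in

        def corr_at_lag(x, y, lag):
            # Pearson correlation of y against x delayed by `lag` samples
            return np.corrcoef(x[:len(x) - lag], y[lag:])[0, 1]

        lags = np.arange(1, 48)
        r = [corr_at_lag(pressure, flux, k) for k in lags]
        print("best-fit lag (months):", lags[int(np.argmax(r))])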

  14. #nowplaying Madonna: a large-scale evaluation on estimating similarities between music artists and between movies from microblogs.

    PubMed

    Schedl, Markus

    2012-01-01

    Different term weighting techniques, such as tf-idf or BM25, have been used intensively for manifold text-based information retrieval tasks. Their use for modeling term profiles for named entities and the subsequent calculation of similarities between these named entities have been studied to a much smaller extent. The recent trend of microblogging has made available massive amounts of information about almost every topic around the world. Therefore, microblogs represent a valuable source for text-based named entity modeling. In this paper, we present a systematic and comprehensive evaluation of different term weighting measures, normalization techniques, query schemes, index term sets, and similarity functions for the task of inferring similarities between named entities, based on data extracted from microblog posts. We analyze several thousand combinations of choices for the above-mentioned dimensions, which influence the similarity calculation process, and we investigate in which way they impact the quality of the similarity estimates. Evaluation is performed using three real-world data sets: two collections of microblogs related to music artists and one related to movies. For the music collections, we present results of genre classification experiments using genre information from allmusic.com as benchmark. For the movie collection, we present results of multi-class classification experiments using categories from IMDb as benchmark. We show that microblogs can indeed be exploited to model named entity similarity with remarkable accuracy, provided the correct settings for the analyzed aspects are used. We further compare the results to those obtained when using Web pages as data source.
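
    A minimal version of the term-profile idea: aggregate posts per entity, build tf-idf vectors, and compare entities by cosine similarity. The data and entity names below are toy placeholders, and scikit-learn stands in for whatever weighting implementation the paper used.

        # Entity similarity from tf-idf term profiles (toy example).
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        posts = {
            "artist_a": "pop icon dance single tour queen of pop",
            "artist_b": "metal riff thrash tour guitar heavy",
            "artist_c": "pop dance single art performance tour",
        }
        names = list(posts)
        profiles = TfidfVectorizer().fit_transform([posts[n] for n in names])
        sim = cosine_similarity(profiles)
        for i in range(len(names)):
            for j in range(i + 1, len(names)):
                print(f"{names[i]} ~ {names[j]}: {sim[i, j]:.2f}")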

  15. Determination of absorbed dose to water from a miniature kilovoltage x-ray source using a parallel-plate ionization chamber

    NASA Astrophysics Data System (ADS)

    Watson, Peter G. F.; Popovic, Marija; Seuntjens, Jan

    2018-01-01

    Electronic brachytherapy sources are widely accepted as alternatives to radionuclide-based systems. Yet formal dosimetry standards for these devices, to independently complement the dose protocol provided by the manufacturer, are lacking. This article presents a formalism for calculating and independently verifying the absorbed dose to water from a kV x-ray source (The INTRABEAM System) measured in a water phantom with an ionization chamber calibrated in terms of air kerma. This formalism uses a Monte Carlo (MC) calculated chamber conversion factor, CQ, to convert air kerma in a reference beam to absorbed dose to water in the measurement beam. In this work CQ was determined for a PTW 34013 parallel-plate ionization chamber. Our results show that CQ was sensitive to the chamber plate separation tolerance, with differences of up to 15%. CQ was also found to have a depth dependence that varied with chamber plate separation (0 to 10% variation for the smallest and largest cavity heights, over 3 to 30 mm depth). However, for all chamber dimensions investigated, CQ was found to be significantly larger than the manufacturer-reported value, suggesting that the manufacturer's recommended method of dose calculation could be underestimating the dose to water.
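
    Schematically, such an air-kerma-based formalism converts a corrected chamber reading into dose to water as

        D_{w,Q} = M_{Q} \, N_{K} \, C_{Q},

    where M_Q is the chamber reading in the measurement beam, N_K the air-kerma calibration coefficient obtained in the reference beam, and C_Q the Monte Carlo calculated conversion factor; the paper's notation and additional correction factors may differ in detail.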

  16. Coupling the Mixed Potential and Radiolysis Models for Used Fuel Degradation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buck, Edgar C.; Jerden, James L.; Ebert, William L.

    The primary purpose of this report is to describe the strategy for coupling three process-level models to produce an integrated Used Fuel Degradation Model (FDM). The FDM, which is based on fundamental chemical and physical principles, provides direct calculation of radionuclide source terms for use in repository performance assessments. The G-value for H2O2 production (Gcond) to be used in the Mixed Potential Model (MPM) (H2O2 is the only radiolytic product presently included, but others will be added as appropriate) needs to account for intermediate spur reactions. The effects of these intermediate reactions on [H2O2] are accounted for in the Radiolysis Model (RM). This report details methods for applying RM calculations that encompass the effects of these fast interactions on [H2O2] as the solution composition evolves during successive MPM iterations, and then represent the steady-state [H2O2] in terms of an "effective instantaneous or conditional" generation value (Gcond). It is anticipated that the value of Gcond will change slowly as the reaction progresses through several iterations of the MPM and the nature of the fuel surface changes. The Gcond values will be calculated with the RM either after several iterations or when concentrations of key reactants reach threshold values determined from previous sensitivity runs. Sensitivity runs with the RM indicate that significant changes in the G-value can occur over narrow composition ranges. The objective of the mixed potential model (MPM) is to calculate used fuel degradation rates for a wide range of disposal environments, to provide the source term radionuclide release rates for generic repository concepts. The fuel degradation rate is calculated for chemical and oxidative dissolution mechanisms using mixed potential theory to account for all relevant redox reactions at the fuel surface, including those involving oxidants produced by solution radiolysis and provided by the radiolysis model (RM). The RM calculates the concentration of species generated at any specific time and location from the surface of the fuel. Several options being considered for coupling the RM and MPM are described in the report. Different options have advantages and disadvantages based on the extent of coding that would be required and the ease of use of the final product.

  17. Massive parallel 3D PIC simulation of negative ion extraction

    NASA Astrophysics Data System (ADS)

    Revel, Adrien; Mochalskyy, Serhiy; Montellano, Ivar Mauricio; Wünderlich, Dirk; Fantz, Ursel; Minea, Tiberiu

    2017-09-01

    The 3D PIC-MCC code ONIX is dedicated to modeling Negative hydrogen/deuterium Ion (NI) extraction and the co-extraction of electrons from radio-frequency driven, low-pressure plasma sources. It provides valuable insight into the complex phenomena involved in the extraction process. In previous calculations, a mesh size larger than the Debye length was used, implying numerical electron heating. Important steps have been achieved in terms of computational performance and parallelization efficiency, allowing successful massively parallel calculations (4096 cores), imperative to resolve the Debye length. In addition, the numerical algorithms have been improved in terms of grid treatment, i.e., the electric field near the complex geometry boundaries (plasma grid) is calculated more accurately. The revised model preserves the full 3D treatment but can take advantage of a highly refined mesh. ONIX was used to investigate the role of the mesh size, the re-injection scheme for lost particles (extracted or wall-absorbed), and the electron thermalization process on the calculated extracted current and plasma characteristics. It is demonstrated that all numerical schemes give the same NI current distribution for extracted ions. Concerning the electrons, the pair-injection technique is found to be well adapted to simulate the sheath in front of the plasma grid.
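
    The resolution constraint driving the massive parallelization is easy to quantify: an explicit PIC mesh must resolve the electron Debye length. A quick check, with illustrative source parameters rather than the actual ONIX operating point:

        # Electron Debye length for an assumed ion-source plasma.
        import numpy as np

        eps0 = 8.854e-12          # vacuum permittivity, F/m
        e = 1.602e-19             # elementary charge, C
        kT_e = 2.0 * e            # electron temperature of 2 eV, in joules (assumed)
        n_e = 1e17                # electron density, m^-3 (assumed)

        lambda_D = np.sqrt(eps0 * kT_e / (n_e * e**2))
        print(f"Debye length: {lambda_D * 1e6:.0f} um; cells must be smaller than this")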

  18. General formulation of characteristic time for persistent chemicals in a multimedia environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, D.H.; McKone, T.E.; Kastenberg, W.E.

    1999-02-01

    A simple yet representative method for determining the characteristic time a persistent organic pollutant remains in a multimedia environment is presented. The characteristic time is an important attribute for assessing the long-term health and ecological impacts of a chemical. Calculating the characteristic time requires information on decay rates in multiple environmental media as well as the proportion of mass in each environmental medium. The authors explore the premise that using a steady-state distribution of the mass in the environment provides a means to calculate a representative estimate of the characteristic time while maintaining a simple formulation. Calculating the steady-state mass distribution incorporates the effect of advective transport and the nonequilibrium effects resulting from the source terms. Using several chemicals, they calculate and compare the characteristic time in a representative multimedia environment for dynamic, steady-state, and equilibrium multimedia models, and also for a single-medium model. They demonstrate that formulating the characteristic time based on the steady-state mass distribution in the environment closely approximates the dynamic characteristic time for a range of chemicals and thus can be used in decisions regarding chemical use in the environment.
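
    In essence, the steady-state formulation weights each medium's loss rate by the mass residing there. With hypothetical notation (not necessarily the authors'), if m_i is the steady-state mass in medium i and k_i its first-order loss rate constant (reaction plus advection), the characteristic time is

        \tau = \frac{\sum_i m_i}{\sum_i k_i\, m_i},

    i.e., the total steady-state inventory divided by the total loss flux, which reduces to 1/k for a single medium.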

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nutaro, James; Kuruganti, Teja

    Numerical simulations of the wave equation that are intended to provide accurate time domain solutions require a computational mesh with grid points separated by a distance less than the wavelength of the source term and initial data. However, calculations of radio signal pathloss generally do not require accurate time domain solutions. This paper describes an approach for calculating pathloss by using the finite difference time domain and transmission line matrix models of wave propagation on a grid with points separated by distances much greater than the signal wavelength. The calculated pathloss can be kept close to the true value for freespace propagation with an appropriate selection of initial conditions. This method can also simulate diffraction with an error governed by the ratio of the signal wavelength to the grid spacing.

  20. Status of a standard for neutron skyshine calculation and measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Westfall, R.M.; Wright, R.Q.; Greenborg, J.

    1990-01-01

    An effort has been under way for several years to prepare a draft standard, ANS-6.6.2, Calculation and Measurement of Direct and Scattered Neutron Radiation from Contained Sources Due to Nuclear Power Operations. At the outset, the work group adopted a three-phase study involving one-dimensional analyses, a measurements program, and multi-dimensional analyses. Of particular interest are the neutron radiation levels associated with dry-fuel storage at reactor sites. The need for dry storage has been investigated with the waste stream analysis model for various scenarios of repository and monitored retrievable storage (MRS) facility availability. The concern is with long-term integrated, low-level doses at long distances from a multiplicity of sources. To evaluate the conservatism associated with one-dimensional analyses, the work group has specified a series of simple problems. Sources as a function of fuel exposure were determined for a Westinghouse 17 x 17 pressurized water reactor assembly with the ORIGEN-S module of the SCALE system. The energy degradation of the 35 GWd/ton U sources was determined for two generic designs of dry-fuel storage casks.

  1. Pathloss Calculation Using the Transmission Line Matrix and Finite Difference Time Domain Methods With Coarse Grids

    DOE PAGES

    Nutaro, James; Kuruganti, Teja

    2017-02-24

    Numerical simulations of the wave equation that are intended to provide accurate time domain solutions require a computational mesh with grid points separated by a distance less than the wavelength of the source term and initial data. However, calculations of radio signal pathloss generally do not require accurate time domain solutions. This paper describes an approach for calculating pathloss by using the finite difference time domain and transmission line matrix models of wave propagation on a grid with points separated by distances much greater than the signal wavelength. The calculated pathloss can be kept close to the true value for freespace propagation with an appropriate selection of initial conditions. This method can also simulate diffraction with an error governed by the ratio of the signal wavelength to the grid spacing.

  2. A nonequilibrium model for a moderate pressure hydrogen microwave discharge plasma

    NASA Technical Reports Server (NTRS)

    Scott, Carl D.

    1993-01-01

    This document describes a simple nonequilibrium energy exchange and chemical reaction model to be used in a computational fluid dynamics calculation for a hydrogen plasma excited by microwaves. The model takes into account the exchange between the electrons and excited states of molecular and atomic hydrogen. Specifically, electron-translation, electron-vibration, translation-vibration, ionization, and dissociation are included. The model assumes three temperatures, translational/rotational, vibrational, and electron, each describing a Boltzmann distribution for its respective energy mode. The energy from the microwave source is coupled to the energy equation via a source term that depends on an effective electric field which must be calculated outside the present model. This electric field must be found by coupling the results of the fluid dynamics and kinetics solution with a solution to Maxwell's equations that includes the effects of the plasma permittivity. The solution to Maxwell's equations is not within the scope of this present paper.

  3. WE-E-18A-05: Bremsstrahlung of Laser-Plasma Interaction at KeV Temperature: Forward Dose and Attenuation Factors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saez-Beltran, M; Fernandez Gonzalez, F

    2014-06-15

    Purpose: To obtain an analytical empirical formula for the photon dose source term in the forward direction from bremsstrahlung generated by laser-plasma accelerated electron beams in aluminum solid targets, with electron-plasma temperatures in the 10–100 keV energy range, and to calculate transmission factors for iron, aluminum, methacrylate, lead, concrete, and air, the materials most commonly found in vacuum chamber labs. Methods: Bremsstrahlung fluence is calculated from the convolution of the thin-target bremsstrahlung spectrum for monoenergetic electrons and the relativistic Maxwell-Juettner energy distribution of the electron plasma. Unattenuated dose in tissue is calculated by integrating the photon spectrum with the mass-energy absorption coefficient. For the attenuated dose, energy-dependent absorption coefficients, build-up factors, and finite shielding correction factors were also taken into account. For the source term we use a modified formula from Hayashi et al., and we fitted the proportionality constant from experiments with the aid of the previously calculated transmission factors. Results: The forward dose has a quadratic dependence on electron-plasma temperature: 1 joule of effective laser energy transferred to the electrons yields, at 1 m in vacuum, 0.72 Sv per MeV squared of electron-plasma temperature. Air strongly filters the softer part of the photon spectrum and reduces the dose to one tenth within the first centimeter. The exponential high-energy tail of the Maxwellian spectrum contributes most of the transmitted dose. Conclusion: A simple formula for the forward photon dose from keV-range temperature plasmas is obtained, similar to those found for kilovoltage x-rays but with a higher dose per unit of dissipated electron energy, due to the thin target and absence of filtration.
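
    The fitted result quoted above can be written compactly (units as stated in the abstract):

        D(1\,\mathrm{m}) \approx 0.72\;\mathrm{Sv}\times\frac{E_{\mathrm{laser}}}{1\,\mathrm{J}}\times\left(\frac{kT_e}{1\,\mathrm{MeV}}\right)^{2},

    where E_laser is the effective laser energy transferred to the electrons and kT_e the electron-plasma temperature; this is the abstract's scaling restated, not an independently derived formula.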

  4. WE-E-18A-06: To Remove Or Not to Remove: Comfort Pads From Beneath Neonates for Radiography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, X; Baad, M; Reiser, I

    2014-06-15


  5. Reducing mortality risk by targeting specific air pollution sources: Suva, Fiji.

    PubMed

    Isley, C F; Nelson, P F; Taylor, M P; Stelcer, E; Atanacio, A J; Cohen, D D; Mani, F S; Maata, M

    2018-01-15

    Health implications of air pollution vary depending upon pollutant sources. This work determines the value, in terms of reduced mortality, of reducing the ambient particulate matter (PM2.5: effective aerodynamic diameter 2.5 μm or less) concentration due to different emission sources. Suva, a Pacific Island city with substantial input from combustion sources, is used as a case study. Elemental concentrations were determined, by ion beam analysis, for PM2.5 samples from Suva spanning one year. Sources of PM2.5 have been quantified by positive matrix factorisation. A review of recent literature has been carried out to delineate the mortality risk associated with these sources. Risk factors have then been applied for Suva to calculate the possible mortality reduction that may be achieved through reduction in pollutant levels. Higher risk ratios for black carbon and sulphur resulted in mortality predictions for PM2.5 from fossil fuel combustion, road vehicle emissions, and waste burning that surpass predictions for these sources based on the health risk of PM2.5 mass alone. Predicted mortality for Suva from fossil fuel smoke exceeds the national toll from road accidents in Fiji. The greatest benefit for Suva, in terms of reduced mortality, is likely to be accomplished by reducing emissions from fossil fuel combustion (diesel), vehicles, and waste burning.

  6. QUANTUM ESPRESSO: a modular and open-source software project for quantum simulations of materials.

    PubMed

    Giannozzi, Paolo; Baroni, Stefano; Bonini, Nicola; Calandra, Matteo; Car, Roberto; Cavazzoni, Carlo; Ceresoli, Davide; Chiarotti, Guido L; Cococcioni, Matteo; Dabo, Ismaila; Dal Corso, Andrea; de Gironcoli, Stefano; Fabris, Stefano; Fratesi, Guido; Gebauer, Ralph; Gerstmann, Uwe; Gougoussis, Christos; Kokalj, Anton; Lazzeri, Michele; Martin-Samos, Layla; Marzari, Nicola; Mauri, Francesco; Mazzarello, Riccardo; Paolini, Stefano; Pasquarello, Alfredo; Paulatto, Lorenzo; Sbraccia, Carlo; Scandolo, Sandro; Sclauzero, Gabriele; Seitsonen, Ari P; Smogunov, Alexander; Umari, Paolo; Wentzcovitch, Renata M

    2009-09-30

    QUANTUM ESPRESSO is an integrated suite of computer codes for electronic-structure calculations and materials modeling, based on density-functional theory, plane waves, and pseudopotentials (norm-conserving, ultrasoft, and projector-augmented wave). The acronym ESPRESSO stands for opEn Source Package for Research in Electronic Structure, Simulation, and Optimization. It is freely available to researchers around the world under the terms of the GNU General Public License. QUANTUM ESPRESSO builds upon newly-restructured electronic-structure codes that have been developed and tested by some of the original authors of novel electronic-structure algorithms and applied in the last twenty years by some of the leading materials modeling groups worldwide. Innovation and efficiency are still its main focus, with special attention paid to massively parallel architectures, and a great effort being devoted to user friendliness. QUANTUM ESPRESSO is evolving towards a distribution of independent and interoperable codes in the spirit of an open-source project, where researchers active in the field of electronic-structure calculations are encouraged to participate in the project by contributing their own codes or by implementing their own ideas into existing codes.

  7. MIXOPTIM: A tool for the evaluation and the optimization of the electricity mix in a territory

    NASA Astrophysics Data System (ADS)

    Bonin, Bernard; Safa, Henri; Laureau, Axel; Merle-Lucotte, Elsa; Miss, Joachim; Richet, Yann

    2014-09-01

    This article presents a method for calculating the generation cost of a mix of electricity sources by means of a Monte Carlo simulation of the production output, taking into account the fluctuations of the demand and the stochastic nature of the availability of the various power sources that compose the mix. This evaluation shows that, for a given electricity mix, the cost has a non-linear dependence on the demand level. In the second part of the paper, we address the management of intermittency: we develop a method based on spectral decomposition of the imposed power fluctuations to calculate the minimal amount of controlled power sources needed to follow these fluctuations. This can be converted into a viability criterion for the mix, included in the MIXOPTIM software. In the third part of the paper, the MIXOPTIM cost evaluation method is applied to the multi-criteria optimization of the mix according to three main criteria: the cost of the mix; its impact on climate in terms of CO2 production; and the security of supply.
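
    A toy version of such a Monte Carlo cost evaluation: sample the availability of each source and a fluctuating demand, dispatch in merit order, and average the resulting cost per unit of energy served. All capacities, availabilities, and prices below are invented, and the dispatch rule is far cruder than MIXOPTIM's.

        # Monte Carlo estimate of the mean generation cost of an electricity mix.
        import numpy as np

        rng = np.random.default_rng(42)
        # (name, capacity in GW, availability probability, cost in EUR/MWh)
        sources = [("nuclear", 40.0, 0.85, 50.0),
                   ("wind",    20.0, 0.30, 40.0),
                   ("gas",     30.0, 0.95, 90.0)]

        def mean_cost(demand_mean, n_draws=20_000):
            costs = np.zeros(n_draws)
            for k in range(n_draws):
                demand = max(rng.normal(demand_mean, 8.0), 0.0)
                supplied = cost = 0.0
                for name, cap, p_avail, price in sources:
                    avail = cap if rng.random() < p_avail else 0.0
                    used = min(max(demand - supplied, 0.0), avail)
                    supplied += used
                    cost += used * price
                costs[k] = cost / max(supplied, 1e-9)   # EUR/MWh actually served
            return costs.mean()

        # The non-linearity with demand level shows up directly:
        for d in (40.0, 60.0, 80.0):
            print(f"demand {d:.0f} GW -> mean cost {mean_cost(d):.1f} EUR/MWh")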

  8. Permeable Surface Corrections for Ffowcs Williams and Hawkings Integrals

    NASA Technical Reports Server (NTRS)

    Lockard, David P.; Casper, Jay H.

    2005-01-01

    The acoustic prediction methodology discussed herein applies an acoustic analogy to calculate the sound generated by sources in an aerodynamic simulation. Sound is propagated from the computed flow field by integrating the Ffowcs Williams and Hawkings equation on a suitable control surface. Previous research suggests that, for some applications, the integration surface must be placed away from the solid surface to incorporate source contributions from within the flow volume. As such, the fluid mechanisms in the input flow field that contribute to the far-field noise are accounted for by their mathematical projection as a distribution of source terms on a permeable surface. The passage of nonacoustic disturbances through such an integration surface can result in significant error in an acoustic calculation. A correction for the error is derived in the frequency domain using a frozen-gust assumption. The correction is found to work reasonably well in several test cases where the error is a small fraction of the actual radiated noise. However, satisfactory agreement has not yet been obtained for noise predictions based on the solution from a three-dimensional, detached-eddy simulation of flow over a cylinder.

  9. Nonlinear anomalous photocurrents in Weyl semimetals

    NASA Astrophysics Data System (ADS)

    Rostami, Habib; Polini, Marco

    2018-05-01

    We study the second-order nonlinear optical response of a Weyl semimetal (WSM), i.e., a three-dimensional metal with linear band touchings acting as pointlike sources of Berry curvature in momentum space, termed "Weyl-Berry monopoles." We first show that the anomalous second-order photocurrent of WSMs can be elegantly parametrized in terms of Weyl-Berry dipole and quadrupole moments. We then calculate the corresponding charge and node conductivities of WSMs with either broken time-reversal invariance or inversion symmetry. In particular, we predict a dissipationless second-order anomalous node conductivity for WSMs belonging to the TaAs family.

  10. Calculation and Visualization of Atomistic Mechanical Stresses in Nanomaterials and Biomolecules

    PubMed Central

    Gilson, Michael K.

    2014-01-01

    Many biomolecules have machine-like functions, and accordingly are discussed in terms of mechanical properties like force and motion. However, the concept of stress, a mechanical property that is of fundamental importance in the study of macroscopic mechanics, is not commonly applied in the biomolecular context. We anticipate that microscopical stress analyses of biomolecules and nanomaterials will provide useful mechanistic insights and help guide molecular design. To enable such applications, we have developed Calculator of Atomistic Mechanical Stress (CAMS), an open-source software package for computing atomic resolution stresses from molecular dynamics (MD) simulations. The software also enables decomposition of stress into contributions from bonded, nonbonded and Generalized Born potential terms. CAMS reads GROMACS topology and trajectory files, which are easily generated from AMBER files as well; and time-varying stresses may be animated and visualized in the VMD viewer. Here, we review relevant theory and present illustrative applications. PMID:25503996

  11. New solution decomposition and minimization schemes for Poisson-Boltzmann equation in calculation of biomolecular electrostatics

    NASA Astrophysics Data System (ADS)

    Xie, Dexuan

    2014-10-01

    The Poisson-Boltzmann equation (PBE) is a widely used implicit-solvent continuum model for calculating the electrostatic potential energy of biomolecules in ionic solvent, but its numerical solution remains a challenge due to the strong singularity and nonlinearity caused by its singular distribution source terms and exponential nonlinear terms. To deal effectively with this challenge, new solution decomposition and minimization schemes are proposed in this paper, together with a new PBE analysis of solution existence and uniqueness. Moreover, a PBE finite element program package is developed in Python based on the FEniCS program library and GAMer, a molecular surface and volumetric mesh generation program package. Numerical tests on proteins and a nonlinear Born ball model with an analytical solution validate the new solution decomposition and minimization schemes, and demonstrate the effectiveness and efficiency of the new PBE finite element program package.

  12. Calculation and visualization of atomistic mechanical stresses in nanomaterials and biomolecules.

    PubMed

    Fenley, Andrew T; Muddana, Hari S; Gilson, Michael K

    2014-01-01

    Many biomolecules have machine-like functions, and accordingly are discussed in terms of mechanical properties like force and motion. However, the concept of stress, a mechanical property that is of fundamental importance in the study of macroscopic mechanics, is not commonly applied in the biomolecular context. We anticipate that microscopical stress analyses of biomolecules and nanomaterials will provide useful mechanistic insights and help guide molecular design. To enable such applications, we have developed Calculator of Atomistic Mechanical Stress (CAMS), an open-source software package for computing atomic resolution stresses from molecular dynamics (MD) simulations. The software also enables decomposition of stress into contributions from bonded, nonbonded and Generalized Born potential terms. CAMS reads GROMACS topology and trajectory files, which are easily generated from AMBER files as well; and time-varying stresses may be animated and visualized in the VMD viewer. Here, we review relevant theory and present illustrative applications.

  13. An analytic linear accelerator source model for GPU-based Monte Carlo dose calculations.

    PubMed

    Tian, Zhen; Li, Yongbao; Folkerts, Michael; Shi, Feng; Jiang, Steve B; Jia, Xun

    2015-10-21

    Recently, there has been a lot of research interest in developing fast Monte Carlo (MC) dose calculation methods on graphics processing unit (GPU) platforms. A good linear accelerator (linac) source model is critical for both accuracy and efficiency. In principle, an analytical source model should be preferable to a phase-space file-based model for GPU-based MC dose engines, in that data loading and CPU-GPU data transfer can be avoided. In this paper, we present an analytical, field-independent source model specifically developed for GPU-based MC dose calculations, associated with a GPU-friendly sampling scheme. A key concept called the phase-space ring (PSR) was proposed. Each PSR contains a group of particles that are of the same type, are close in energy, and reside in a narrow ring on the phase-space plane located just above the upper jaws. The model parameterizes the probability densities of particle location, direction, and energy for each primary photon PSR, scattered photon PSR, and electron PSR. Models of one 2D Gaussian distribution or multiple Gaussian components were employed to represent the particle direction distributions of these PSRs. A method was developed to analyze a reference phase-space file and derive the corresponding model parameters. To use our model efficiently in MC dose calculations on GPU, we proposed a GPU-friendly sampling strategy, which ensures that the particles sampled and transported simultaneously are of the same type and close in energy, to alleviate GPU thread divergence. To test the accuracy of our model, dose distributions of a set of open fields in a water phantom were calculated using our source model and compared to those calculated using the reference phase-space files. For the high dose gradient regions, the average distance-to-agreement (DTA) was within 1 mm and the maximum DTA within 2 mm. For relatively low dose gradient regions, the root-mean-square (RMS) dose difference was within 1.1% and the maximum dose difference within 1.7%. The maximum relative difference of output factors was within 0.5%. A passing rate of over 98.5% was achieved in 3D gamma-index tests with 2%/2 mm criteria in both an IMRT prostate patient case and a head-and-neck case. These results demonstrate the efficacy of our model in accurately representing a reference phase-space file. We have also tested the efficiency gain of our source model over our previously developed phase-space-let file source model. The overall efficiency of dose calculation was found to be improved by ~1.3-2.2 times in water and patient cases using our analytical model.

  14. Fast GPU-based Monte Carlo simulations for LDR prostate brachytherapy.

    PubMed

    Bonenfant, Éric; Magnoux, Vincent; Hissoiny, Sami; Ozell, Benoît; Beaulieu, Luc; Després, Philippe

    2015-07-07

    The aim of this study was to evaluate the potential of bGPUMCD, a Monte Carlo algorithm executed on Graphics Processing Units (GPUs), for fast dose calculations in permanent prostate implant dosimetry. It also aimed to validate a low dose rate brachytherapy source in terms of TG-43 metrics and to use this source to compute dose distributions for permanent prostate implants in very short times. The physics of bGPUMCD was reviewed and extended to include Rayleigh scattering and fluorescence from photoelectric interactions for all materials involved. The radial and anisotropy functions were obtained for the Nucletron SelectSeed in TG-43 conditions. These functions were compared to those found in the MD Anderson Imaging and Radiation Oncology Core brachytherapy source registry, which are considered the TG-43 reference values. After appropriate calibration of the source, permanent prostate implant dose distributions were calculated for four patients and compared to an already validated Geant4 algorithm. The radial function calculated from bGPUMCD showed excellent agreement (differences within 1.3%) with TG-43 accepted values. The anisotropy functions at r = 1 cm and r = 4 cm were within 2% of TG-43 values for angles over 17.5°. For permanent prostate implants, Monte Carlo-based dose distributions with a statistical uncertainty of 1% or less for the target volume were obtained in 30 s or less for 1 × 1 × 1 mm(3) calculation grids. Dosimetric indices were very similar (within 2.7%) to those obtained with a validated, independent Monte Carlo code (Geant4) performing the calculations for the same cases in a much longer time (tens of minutes to more than an hour). bGPUMCD is a promising code that makes it possible to envision the use of Monte Carlo techniques in a clinical environment, with sub-minute execution times on a standard workstation. Future work will explore the use of this code with an inverse planning method to provide a complete Monte Carlo-based planning solution.
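
    For reference, the TG-43 quantities validated here (the radial dose function g_L(r) and the anisotropy function F(r, θ)) enter the standard AAPM TG-43 dose-rate formalism as

        \dot{D}(r,\theta) = S_K \,\Lambda\, \frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\, g_L(r)\, F(r,\theta),

    with S_K the air-kerma strength, Λ the dose-rate constant, G_L the line-source geometry function, and reference point (r_0, θ_0) = (1 cm, 90°). This is the published formalism, not code from bGPUMCD.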

  15. Fast GPU-based Monte Carlo simulations for LDR prostate brachytherapy

    NASA Astrophysics Data System (ADS)

    Bonenfant, Éric; Magnoux, Vincent; Hissoiny, Sami; Ozell, Benoît; Beaulieu, Luc; Després, Philippe

    2015-07-01

    The aim of this study was to evaluate the potential of bGPUMCD, a Monte Carlo algorithm executed on Graphics Processing Units (GPUs), for fast dose calculations in permanent prostate implant dosimetry. It also aimed to validate a low dose rate brachytherapy source in terms of TG-43 metrics and to use this source to compute dose distributions for permanent prostate implants in very short times. The physics of bGPUMCD was reviewed and extended to include Rayleigh scattering and fluorescence from photoelectric interactions for all materials involved. The radial and anisotropy functions were obtained for the Nucletron SelectSeed in TG-43 conditions. These functions were compared to those found in the MD Anderson Imaging and Radiation Oncology Core brachytherapy source registry, which are considered the TG-43 reference values. After appropriate calibration of the source, permanent prostate implant dose distributions were calculated for four patients and compared to an already validated Geant4 algorithm. The radial function calculated with bGPUMCD showed excellent agreement (differences within 1.3%) with TG-43 accepted values. The anisotropy functions at r = 1 cm and r = 4 cm were within 2% of TG-43 values for angles over 17.5°. For permanent prostate implants, Monte Carlo-based dose distributions with a statistical uncertainty of 1% or less for the target volume were obtained in 30 s or less for 1 × 1 × 1 mm³ calculation grids. Dosimetric indices were very similar (within 2.7%) to those obtained with a validated, independent Monte Carlo code (Geant4) performing the calculations for the same cases in a much longer time (tens of minutes to more than an hour). bGPUMCD is a promising code that makes it possible to envision the use of Monte Carlo techniques in a clinical environment, with sub-minute execution times on a standard workstation. Future work will explore the use of this code with an inverse planning method to provide a complete Monte Carlo-based planning solution.

  16. ANEMOS: A computer code to estimate air concentrations and ground deposition rates for atmospheric nuclides emitted from multiple operating sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, C.W.; Sjoreen, A.L.; Begovich, C.L.

    This code estimates concentrations in air and ground deposition rates for Atmospheric Nuclides Emitted from Multiple Operating Sources. ANEMOS is one component of an integrated Computerized Radiological Risk Investigation System (CRRIS) developed for the US Environmental Protection Agency (EPA) for use in performing radiological assessments and in developing radiation standards. The concentrations and deposition rates calculated by ANEMOS are used in subsequent portions of the CRRIS for estimating doses and risks to man. The calculations made in ANEMOS are based on the use of a straight-line Gaussian plume atmospheric dispersion model with both dry and wet deposition parameter options. The code will accommodate ground-level or elevated point, area, or windblown sources. Adjustments may be made during the calculations for surface roughness, building wake effects, terrain height, wind speed at the height of release, the variation in plume rise as a function of downwind distance, and the in-growth and decay of daughter products in the plume as it travels downwind. ANEMOS can also accommodate multiple particle sizes and clearance classes, and it may be used to calculate the dose from a finite plume of gamma-ray-emitting radionuclides passing overhead. The output of this code is presented for 16 sectors of a circular grid. ANEMOS can calculate both the sector-average concentrations and deposition rates at a given set of downwind distances in each sector and the average of these quantities over an area within each sector bounded by two successive downwind distances. ANEMOS is designed to be used primarily for continuous, long-term radionuclide releases. This report describes the models used in the code, their computer implementation, the uncertainty associated with their use, and the use of ANEMOS in conjunction with other codes in the CRRIS. A listing of the code is included in Appendix C.
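
    For reference, the straight-line Gaussian plume at the core of ANEMOS has, in its simplest reflecting-ground form, the familiar expression (a generic statement of the model class, not a transcription of the ANEMOS coding):

    ```latex
    \chi(x,y,z) = \frac{Q}{2\pi u \,\sigma_y(x)\,\sigma_z(x)}
    \exp\!\left(-\frac{y^2}{2\sigma_y^2}\right)
    \left[\exp\!\left(-\frac{(z-H)^2}{2\sigma_z^2}\right)
    + \exp\!\left(-\frac{(z+H)^2}{2\sigma_z^2}\right)\right]
    ```

    where χ is the air concentration, Q the release rate, u the wind speed at release height, H the effective release height, and σy, σz the lateral and vertical dispersion parameters.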

  17. Dispersion modeling of polycyclic aromatic hydrocarbons from combustion of biomass and fossil fuels and production of coke in Tianjin, China.

    PubMed

    Tao, Shu; Li, Xinrong; Yang, Yu; Coveney, Raymond M; Lu, Xiaoxia; Chen, Haitao; Shen, Weiran

    2006-08-01

    A USEPA procedure, ISCLT3 (Industrial Source Complex Long-Term), was applied to model the spatial distribution of polycyclic aromatic hydrocarbons (PAHs) emitted from various sources, including coal, petroleum, natural gas, and biomass, into the atmosphere of Tianjin, China. Benzo[a]pyrene equivalent concentrations (BaPeq) were calculated for risk assessment. Model results were provisionally validated for concentrations and profiles based on the observed data at two monitoring stations. The dominant emission sources in the area were domestic coal combustion, coke production, and biomass burning. Mainly because of the difference in emission heights, the contributions of various sources to the average concentrations at receptors differ from the proportions emitted. The share of domestic coal increased from approximately 43% at the sources to 56% at the receptors, while the contribution of the coking industry decreased from approximately 23% at the sources to 7% at the receptors. The spatial distributions of gaseous and particulate PAHs were similar, with higher concentrations occurring within urban districts because of domestic coal combustion. With relatively smaller contributions, the other minor sources had limited influence on the overall spatial distribution. The calculated average BaPeq value in air was 2.54 ± 2.87 ng/m³ on an annual basis. Although only 2.3% of the area in Tianjin exceeded the national standard of 10 ng/m³, 41% of the entire population lives within this area.
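
    The BaPeq risk metric used above is a toxic-equivalency-weighted sum over the measured PAH congeners. A small sketch follows; the TEF values are illustrative (a Nisbet & LaGoy-style scheme), since the abstract does not state which factors were used.

    ```python
    # Hypothetical toxic equivalency factors (TEFs) for illustration only.
    TEF = {"BaP": 1.0, "DahA": 1.0, "BaA": 0.1, "BbF": 0.1, "BkF": 0.1,
           "IcdP": 0.1, "Chr": 0.01, "Flt": 0.001, "Phe": 0.001}

    def bap_eq(conc_ng_m3):
        """BaPeq = sum_i c_i * TEF_i over measured congeners (ng/m^3)."""
        return sum(c * TEF.get(name, 0.0) for name, c in conc_ng_m3.items())

    print(bap_eq({"BaP": 1.2, "BaA": 3.5, "Phe": 40.0}))  # 1.59 ng/m^3 BaPeq
    ```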

  18. Sci-Sat AM: Radiation Dosimetry and Practical Therapy Solutions - 06: Investigation of an absorbed dose to water formalism for a miniature low-energy x-ray source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watson, Peter; Seuntjens, Jan

    Purpose: We present a formalism for calculating the absorbed dose to water from a miniature x-ray source (the INTRABEAM system, Carl Zeiss), using a parallel-plate ionization chamber calibrated in terms of air-kerma. Monte Carlo calculations were performed to derive a chamber conversion factor (C_Q) from reference air-kerma to dose to water for the INTRABEAM. C_Q was investigated as a function of depth in water and compared with the manufacturer's reported value. The effect of chamber air cavity dimension tolerance was also investigated. Methods: Air-kerma (A_k) from a reference beam was calculated using the EGSnrc user code cavity. Using egs_chamber, a model of a PTW 34013 parallel-plate ionization chamber was created according to manufacturer specifications. The dose to the chamber air cavity (D_gas) was simulated both in air (with the reference beam) and in water (with the INTRABEAM source). Dose to a small water voxel (D_w) was also calculated. C_Q was derived from these quantities. Results: C_Q was found to vary by up to 15% (1.30 vs 1.11) between chamber dimension extremes. The agreement between chamber C_Q values was found to improve with increasing depth in water. However, in all cases investigated, C_Q was larger than the manufacturer-reported value of 1.054. Conclusions: Our results show that cavity dimension tolerance has a significant effect on C_Q, with differences as large as 15%. In all cases considered, C_Q was found to be larger than the reported value of 1.054. This suggests that the recommended calculation underestimates the dose to water.
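
    From the quantities listed in the abstract, one plausible assembly of the conversion factor (our reading of the formalism, not the authors' stated equation) is the ratio of the in-water dose-to-water per unit cavity dose to the in-air air-kerma per unit cavity dose:

    ```latex
    C_Q(d) \;=\; \frac{D_w(d)\,/\,D_{\mathrm{gas}}^{\mathrm{water}}(d)}
    {A_k\,/\,D_{\mathrm{gas}}^{\mathrm{air}}}
    ```

    so that a clinical measurement would give D_w = M · N_K · C_Q, with M the chamber reading and N_K the air-kerma calibration coefficient.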

  19. Implementation issues of the nearfield equivalent source imaging microphone array

    NASA Astrophysics Data System (ADS)

    Bai, Mingsian R.; Lin, Jia-Hong; Tseng, Chih-Wen

    2011-01-01

    This paper revisits a previously proposed nearfield microphone array technique termed nearfield equivalent source imaging (NESI). In particular, various issues concerning the implementation of the NESI algorithm are examined. NESI can be implemented in both the time domain and the frequency domain. Acoustical variables including sound pressure, particle velocity, active intensity and sound power are calculated by using multichannel inverse filters. Issues concerning sensor deployment are also investigated for the nearfield array. The uniform array outperformed a random array previously optimized for far-field imaging, which contradicts the conventional wisdom in far-field arrays. For applications in which only a patch array with scarce sensors is available, a virtual microphone approach is employed to ameliorate edge effects using extrapolation and to improve imaging resolution using interpolation. To enhance the processing efficiency of the time-domain NESI, an eigensystem realization algorithm (ERA) is developed. Several filtering methods are compared in terms of computational complexity. Significant savings in computation can be achieved using ERA and the frequency-domain NESI, as compared to the traditional method. The NESI technique was also experimentally validated using practical sources, including a 125 cc scooter and a wooden box model with a loudspeaker fitted inside. The NESI technique proved effective in identifying the broadband and non-stationary noise produced by these sources.
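
    The multichannel inverse filtering step can be sketched, for one frequency bin, as a Tikhonov-regularized inversion of the propagation matrix between candidate equivalent sources and microphones; the Green's function model and the regularization constant below are illustrative assumptions.

    ```python
    import numpy as np

    def equivalent_source_strengths(p_mics, G, beta=1e-2):
        """Frequency-domain NESI-style inversion (sketch).
        p_mics : (M,) complex microphone pressures for one frequency bin.
        G      : (M, N) propagation matrix from N equivalent sources to M mics,
                 e.g. free-field Green's functions exp(-1j*k*r) / (4*pi*r).
        beta   : Tikhonov regularization parameter (illustrative value).
        Returns the (N,) source strengths q minimizing ||G q - p||^2 + beta ||q||^2."""
        GH = G.conj().T
        return np.linalg.solve(GH @ G + beta * np.eye(G.shape[1]), GH @ p_mics)
    ```

    The reconstructed source strengths then yield pressure, particle velocity, intensity and sound power on the source plane through the same propagation model.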

  20. Technical Note: On the calculation of stopping-power ratio for stoichiometric calibration in proton therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ödén, Jakob; Zimmerman, Jens; Nowik, Patrik

    2015-09-15

    Purpose: The quantitative effects of assumptions made in the calculation of stopping-power ratios (SPRs) are investigated for stoichiometric CT calibration in proton therapy. The assumptions investigated include the use of the Bethe formula without correction terms, Bragg additivity, the choice of I-value for water, and the data source for elemental I-values. Methods: The predictions of the Bethe formula for SPR (no correction terms) were validated against more sophisticated calculations using the SRIM software package for 72 human tissues. A stoichiometric calibration was then performed at our hospital. SPR was calculated for the human tissues using either the assumption of simple Bragg additivity or the Seltzer-Berger rule (as used in ICRU Reports 37 and 49). In each case, the calculation was performed twice: first, by assuming the I-value of water was an experimentally based value of 78 eV (the value proposed in the Errata and Addenda for ICRU Report 73) and second, by recalculating the I-value theoretically. The discrepancy between predictions using ICRU elemental I-values and the commonly used tables of Janni was also investigated. Results: Errors due to neglecting the correction terms to the Bethe formula were calculated at less than 0.1% for biological tissues. Discrepancies greater than 1%, however, were estimated due to departures from simple Bragg additivity when a fixed I-value for water was imposed. When the I-value for water was calculated in a manner consistent with that for tissue, this disagreement was substantially reduced. The difference between SPR predictions when using Janni's or ICRU tables for I-values was up to 1.6%. Experimental data for materials of relevance to proton therapy suggest that the ICRU-derived values provide somewhat more accurate results (root-mean-square error: 0.8% versus 1.6%). Conclusions: The conclusions from this study are that (1) the Bethe formula can be safely used for SPR calculations without correction terms; (2) simple Bragg additivity can be reasonably assumed for compound materials; (3) if simple Bragg additivity is assumed, then the I-value for water should be calculated in a manner consistent with that of the tissue of interest (rather than using an experimentally derived value); and (4) the ICRU Report 37 I-values may provide better agreement with experiment than Janni's tables.
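
    For orientation, the uncorrected Bethe formula and the simple Bragg-additivity rule for the mean excitation energy I discussed throughout the abstract are:

    ```latex
    -\frac{dE}{dx} = \frac{4\pi e^4 z^2 N_e}{m_e v^2}
    \left[\ln\frac{2 m_e v^2}{I\,(1-\beta^2)} - \beta^2\right],
    \qquad
    \ln I = \frac{\sum_i w_i \,(Z_i/A_i)\,\ln I_i}{\sum_i w_i \,(Z_i/A_i)}
    ```

    where N_e is the electron density of the medium, z and v the projectile charge and speed, and w_i, Z_i, A_i, I_i the mass fraction, atomic number, atomic mass and I-value of constituent element i.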

  1. Estimates of long-term mean-annual nutrient loads considered for use in SPARROW models of the Midcontinental region of Canada and the United States, 2002 base year

    USGS Publications Warehouse

    Saad, David A.; Benoy, Glenn A.; Robertson, Dale M.

    2018-05-11

    Streamflow and nutrient concentration data needed to compute nitrogen and phosphorus loads were compiled from Federal, State, Provincial, and local agency databases and also from selected university databases. The nitrogen and phosphorus loads are necessary inputs to Spatially Referenced Regressions on Watershed Attributes (SPARROW) models. SPARROW models are a way to estimate the distribution, sources, and transport of nutrients in streams throughout the Midcontinental region of Canada and the United States. After screening the data, approximately 1,500 sites sampled by 34 agencies were identified as having suitable data for calculating the long-term mean-annual nutrient loads required for SPARROW model calibration. These final sites represent a wide range in watershed sizes, types of nutrient sources, and land-use and watershed characteristics in the Midcontinental region of Canada and the United States.

  2. Application of the DG-1199 methodology to the ESBWR and ABWR.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalinich, Donald A.; Gauntt, Randall O.; Walton, Fotini

    2010-09-01

    Appendix A-5 of Draft Regulatory Guide DG-1199, 'Alternative Radiological Source Term for Evaluating Design Basis Accidents at Nuclear Power Reactors', provides guidance - applicable to RADTRAD MSIV leakage models - for scaling containment aerosol concentration to the expected steam dome concentration in order to preserve the simplified use of the Accident Source Term (AST) in assessing containment performance under assumed design basis accident (DBA) conditions. In this study, Economic and Safe Boiling Water Reactor (ESBWR) and Advanced Boiling Water Reactor (ABWR) RADTRAD models are developed using the DG-1199, Appendix A-5 guidance. The models were run using RADTRAD v3.03. Low Population Zone (LPZ), control room (CR), and worst-case 2-hr Exclusion Area Boundary (EAB) doses were calculated and compared to the relevant accident dose criteria in 10 CFR 50.67. For the ESBWR, the dose results were all lower than the MSIV leakage doses calculated by General Electric/Hitachi (GEH) in their licensing technical report. There are no comparable ABWR MSIV leakage doses; however, it should be noted that the ABWR doses are lower than the ESBWR doses. In addition, sensitivity cases were evaluated to ascertain the influence and importance of key input parameters and features of the models.

  3. Estimating the Temporal Domain when the Discount of the Net Evaporation Term Affects the Resulting Net Precipitation Pattern in the Moisture Budget Using a 3-D Lagrangian Approach

    PubMed Central

    Castillo, Rodrigo; Nieto, Raquel; Drumond, Anita; Gimeno, Luis

    2014-01-01

    The Lagrangian FLEXPART model has been used during the last decade to detect moisture sources that affect the climate in different regions of the world. While most of these studies provided a climatological perspective on the atmospheric branch of the hydrological cycle in terms of precipitation, none assessed the minimum temporal domain for which the climatological approach is valid. The methodology identifies the contribution of humidity to the moisture budget in a region by computing the changes in specific humidity along backward (or forward) trajectories of air masses over a period of ten days beforehand (afterwards), thereby allowing the calculation of monthly, seasonal and annual averages. The current study calculates, as an example, the climatological seasonal mean and variance of the net precipitation for regions in which precipitation exceeds evaporation (E-P<0) for the North Atlantic moisture source region, using different time periods for winter and summer from 1980 to 2000. The results show that the net evaporation term (E-P>0) can be discounted when integrating E-P without affecting the general net precipitation patterns, provided the discounting is done on a monthly or longer time scale. PMID:24893002
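
    In this Lagrangian diagnostic (the standard FLEXPART moisture-budget method), each particle's net freshwater flux is the rate of change of its specific humidity, and the regional budget is obtained by summing over the K particles residing in the atmospheric column above a region of area A:

    ```latex
    e - p = m\,\frac{dq}{dt}, \qquad
    E - P \;\approx\; \frac{1}{A}\sum_{k=1}^{K} m_k\,\frac{dq_k}{dt}
    ```

    where m is the particle mass and q its specific humidity; integrating along the 10-day trajectories gives the (E−P) fields whose temporal averaging is analyzed above.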

  4. A dynamic aerodynamic resistance approach to calculate high resolution sensible heat fluxes in urban areas

    NASA Astrophysics Data System (ADS)

    Crawford, Ben; Grimmond, Sue; Kent, Christoph; Gabey, Andrew; Ward, Helen; Sun, Ting; Morrison, William

    2017-04-01

    Remotely sensed data from satellites have the potential to enable high-resolution, automated calculation of urban surface energy balance terms and to inform decisions about urban adaptations to environmental change. However, aerodynamic resistance methods to estimate sensible heat flux (QH) in cities using satellite-derived observations of surface temperature are difficult to apply, in part due to the spatial and temporal variability of the thermal aerodynamic resistance term (rah). In this work, we extend an empirical function to estimate rah using observational data from several cities with a broad range of surface vegetation land cover properties. We then use this function to calculate spatially and temporally variable rah in London based on high-resolution (100 m) land cover datasets and in situ meteorological observations. In order to calculate high-resolution QH based on satellite-observed land surface temperatures, we also develop and employ novel methods to i) apply source area-weighted averaging of surface and meteorological variables across the study spatial domain, ii) calculate spatially variable, high-resolution meteorological variables (wind speed, friction velocity, and Obukhov length), iii) incorporate spatially interpolated urban air temperatures from a distributed sensor network, and iv) apply a modified Monte Carlo approach to assess uncertainties in our results, methods, and input variables. Modeled QH using the aerodynamic resistance method is then compared to in situ observations in central London from a unique network of scintillometers and eddy-covariance measurements.
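
    The aerodynamic resistance method referred to here takes the standard bulk-transfer form

    ```latex
    Q_H = \frac{\rho\, c_p \,(T_s - T_a)}{r_{ah}}
    ```

    with ρ the air density, c_p the specific heat of air at constant pressure, T_s the (satellite-derived) surface temperature, T_a the air temperature, and r_ah the thermal aerodynamic resistance whose empirical parameterization is the subject of this work.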

  5. Determination of the spatial resolution required for the HEDR dose code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Napier, B.A.; Simpson, J.C.

    1992-12-01

    A series of scoping calculations has been undertaken to evaluate the doses that may have been received by individuals living in the vicinity of the Hanford site. This scoping calculation (Calculation 007) examined the spatial distribution of potential doses resulting from releases in the year 1945. This study builds on the work initiated in the first scoping calculation, of iodine in cow's milk; the third scoping calculation, which added additional pathways; the fifth calculation, which addressed the uncertainty of the dose estimates at a point; and the sixth calculation, which extrapolated the doses throughout the atmospheric transport domain. A projection of dose to representative individuals throughout the proposed HEDR atmospheric transport domain was prepared on the basis of the HEDR source term. Addressed in this calculation were the contributions to the iodine-131 thyroid dose of infants from (1) air submersion and groundshine external dose, (2) inhalation, (3) ingestion of soil by humans, (4) ingestion of leafy vegetables, (5) ingestion of other vegetables and fruits, (6) ingestion of meat, (7) ingestion of eggs, and (8) ingestion of cows' milk from Feeding Regime 1 as described in scoping calculation 001.

  6. Computational Fluid Dynamics Simulation of Flows in an Oxidation Ditch Driven by a New Surface Aerator

    PubMed Central

    Huang, Weidong; Li, Kun; Wang, Gan; Wang, Yingzhe

    2013-01-01

    In this article, we present a newly designed inverse umbrella surface aerator and test its performance in driving the flow of an oxidation ditch. Results show that it performs better in driving the oxidation ditch than the original design, with a higher average velocity and a more uniform flow field. We also present a computational fluid dynamics model for predicting the flow field in an oxidation ditch driven by a surface aerator. An improved momentum source term approach to simulating the flow field of an oxidation ditch driven by an inverse umbrella surface aerator was developed and validated through experiments. Four turbulence models were investigated with this approach, including the standard k−ɛ model, the RNG k−ɛ model, the realizable k−ɛ model, and the Reynolds stress model, and the predicted data were compared with those calculated with the multiple rotating reference frame approach (MRF) and the sliding mesh approach (SM). Results of the momentum source term approach are in good agreement with the experimental data, and its prediction accuracy is better than MRF and close to SM. The momentum source term approach also has lower computational expense, is simpler to preprocess, and is easier to use. PMID:24302850

  7. Scattering in infrared radiative transfer: A comparison between the spectrally averaging model JURASSIC and the line-by-line model KOPRA

    NASA Astrophysics Data System (ADS)

    Griessbach, Sabine; Hoffmann, Lars; Höpfner, Michael; Riese, Martin; Spang, Reinhold

    2013-09-01

    The viability of a spectrally averaging model for performing radiative transfer calculations in the infrared, including scattering by atmospheric particles, is examined for application to infrared limb remote sensing measurements. Here we focus on the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) aboard the European Space Agency's Envisat. Various spectra for clear air and cloudy conditions were simulated with a spectrally averaging radiative transfer model and a line-by-line radiative transfer model for three atmospheric window regions (825-830, 946-951, 1224-1228 cm⁻¹) and compared to each other. The results are rated in terms of the MIPAS noise equivalent spectral radiance (NESR). The clear air simulations generally agree within one NESR. The cloud simulations neglecting the scattering source term agree within two NESR. The differences between the cloud simulations including the scattering source term are generally below three and always below four NESR. We conclude that the spectrally averaging approach is well suited for fast and accurate infrared radiative transfer simulations including scattering by clouds. We found that the main source of the differences between the cloud simulations of the two models is the cloud edge sampling. Furthermore, we reasoned that this model comparison for clouds is also valid for atmospheric aerosol in general.
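
    The scattering source term whose treatment is compared above enters the radiative transfer equation as the angular integral in

    ```latex
    \frac{dI(\nu,\Omega)}{ds} = -\beta_e\, I(\nu,\Omega) + \beta_a\, B(\nu,T)
    + \frac{\beta_s}{4\pi}\int_{4\pi} P(\Omega,\Omega')\, I(\nu,\Omega')\,\mathrm{d}\Omega'
    ```

    where β_e = β_a + β_s are the extinction, absorption and scattering coefficients of the cloud or aerosol, B(ν,T) is the Planck function, and P is the scattering phase function; dropping the integral gives the "cloud simulations neglecting the scattering source term" quoted above.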

  8. Interface extinction and subsurface peaking of the radiation pattern of a line source

    NASA Technical Reports Server (NTRS)

    Engheta, N.; Papas, C. H.; Elachi, C.

    1981-01-01

    The radiation pattern of a line source lying along the plane interface of two dielectric half-spaces is calculated. It is found that the pattern at the interface has a null (interface extinction); that the pattern in the upper half-space, whose index of refraction is taken to be less than that of the lower half-space, has a single lobe with a maximum normal to the interface; and that the pattern in the lower half-space (subsurface region) has two maxima (peaks) straddling a central minimum symmetrically. Interpretations of these results in terms of ray optics, Oseen's extinction theorem, and the Cerenkov effect are given.

  9. Gravitational waves from rotating and precessing rigid bodies. 2: General solutions and computationally useful formulae

    NASA Technical Reports Server (NTRS)

    Zimmerman, M.

    1979-01-01

    The classical mechanics results for free precession which are needed in order to calculate the weak field, slow-motion, quadrupole-moment gravitational waves are reviewed. Within that formalism, algorithms are given for computing the exact gravitational power radiated and waveforms produced by arbitrary rigid-body freely-precessing sources. The dominant terms are presented in series expansions of the waveforms for the case of an almost spherical object precessing with a small wobble angle. These series expansions, which retain the precise frequency dependence of the waves, may be useful for gravitational astronomers when freely-precessing sources begin to be observed.
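
    The weak-field, slow-motion quadrupole formalism invoked here gives the waveform and radiated power in terms of the source's reduced (trace-free) mass quadrupole moment I_jk, in the standard textbook forms

    ```latex
    h_{jk}^{TT} = \frac{2G}{c^4 r}\,\ddot{I}_{jk}^{\,TT}(t - r/c), \qquad
    P = \frac{G}{5c^5}\,\bigl\langle \dddot{I}_{jk}\,\dddot{I}_{jk} \bigr\rangle
    ```

    which the paper evaluates exactly for arbitrary rigid-body free precession.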

  10. Combining molecular fingerprints with multidimensional scaling analyses to identify the source of spilled oil from highly similar suspected oils.

    PubMed

    Zhou, Peiyu; Chen, Changshu; Ye, Jianjun; Shen, Wenjie; Xiong, Xiaofei; Hu, Ping; Fang, Hongda; Huang, Chuguang; Sun, Yongge

    2015-04-15

    Oil fingerprints have been a powerful tool widely used for determining the source of spilled oil. In most cases, this tool works well. However, it is usually difficult to identify the source if the oil spill accident occurs during offshore petroleum exploration, owing to the highly similar physiochemical characteristics of suspected oils from the same drilling platform. In this report, a case study from the waters of the South China Sea is presented, and multidimensional scaling analysis (MDS) is introduced to demonstrate how oil fingerprints can be combined with mathematical methods to identify the source of spilled oil from highly similar suspected sources. The results suggest that the MDS calculation based on oil fingerprints, subsequently integrated with specific biomarkers in spilled oils, is the most effective method, with great potential for determining the source among highly similar suspected oils.
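
    A minimal sketch of the MDS step, embedding the pairwise distances between diagnostic-ratio fingerprints in two dimensions so that near-identical oils can be separated visually, might look like the following; the feature values are invented for illustration.

    ```python
    import numpy as np
    from scipy.spatial.distance import pdist, squareform
    from sklearn.manifold import MDS

    # Rows: oil samples (spill plus suspected sources); columns: diagnostic
    # biomarker ratios (hypothetical values).
    fingerprints = np.array([
        [0.52, 1.10, 0.33, 0.95],   # spilled oil
        [0.51, 1.12, 0.34, 0.96],   # suspected source A
        [0.48, 1.01, 0.40, 0.90],   # suspected source B
        [0.60, 1.30, 0.28, 1.05],   # suspected source C
    ])

    D = squareform(pdist(fingerprints))   # Euclidean distance matrix
    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    coords = mds.fit_transform(D)
    # The suspected source nearest the spill sample in this embedding is the
    # most likely match, subject to confirmation by specific biomarkers.
    ```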

  11. Fusion neutron source blanket: requirements for calculation accuracy and benchmark experiment precision

    NASA Astrophysics Data System (ADS)

    Zhirkin, A. V.; Alekseev, P. N.; Batyaev, V. F.; Gurevich, M. I.; Dudnikov, A. A.; Kuteev, B. V.; Pavlov, K. V.; Titarenko, Yu. E.; Titarenko, A. Yu.

    2017-06-01

    In this report, the calculation accuracy requirements for the main parameters of the fusion neutron source, and of thermonuclear blankets with a DT fusion power of more than 10 MW, are formulated. To conduct the benchmark experiments, technical documentation and calculation models were developed for two blanket micro-models: the molten salt and the heavy water solid-state blankets. Calculations of the neutron spectra, and of 37 dosimetric reaction rates that are widely used for the registration of thermal, resonance and threshold (0.25-13.45 MeV) neutrons, were performed for each blanket micro-model. The MCNP code and the neutron data library ENDF/B-VII were used for the calculations. All the calculations were performed for two kinds of neutron source: source I is the fusion source, and source II is the source of neutrons generated by a 7Li target irradiated by protons with an energy of 24.6 MeV. Spectral index ratios were calculated to describe the spectrum variations between the different neutron sources. The obtained results demonstrate the advantage of using the fusion neutron source in future experiments.

  12. Traveltime delay relative to the maximum energy of the wave train for dispersive tsunamis propagating across the Pacific Ocean: the case of 2010 and 2015 Chilean Tsunamis

    NASA Astrophysics Data System (ADS)

    Poupardin, A.; Heinrich, P.; Hébert, H.; Schindelé, F.; Jamelot, A.; Reymond, D.; Sugioka, H.

    2018-05-01

    This paper evaluates the importance of frequency dispersion in the propagation of recent trans-Pacific tsunamis. Frequency dispersion induces a time delay for the most energetic waves, which increases for long propagation distances and short source dimensions. To calculate this time delay, propagation of tsunamis is simulated and analyzed from spectrograms of time-series at specific gauges in the Pacific Ocean. One- and two-dimensional simulations are performed by solving either shallow water or Boussinesq equations and by considering realistic seismic sources. One-dimensional sensitivity tests are first performed in a constant-depth channel to study the influence of the source width. Two-dimensional tests are then performed in a simulated Pacific Ocean with a 4000-m constant depth and by considering tectonic sources of 2010 and 2015 Chilean earthquakes. For these sources, both the azimuth and the distance play a major role in the frequency dispersion of tsunamis. Finally, simulations are performed considering the real bathymetry of the Pacific Ocean. Multiple reflections, refractions as well as shoaling of waves result in much more complex time series for which the effects of the frequency dispersion are hardly discernible. The main point of this study is to evaluate frequency dispersion in terms of traveltime delays by calculating spectrograms for a time window of 6 hours after the arrival of the first wave. Results of the spectral analysis show that the wave packets recorded by pressure and tide sensors in the Pacific Ocean seem to be better reproduced by the Boussinesq model than the shallow water model and approximately follow the theoretical dispersion relationship linking wave arrival times and frequencies. Additionally, a traveltime delay is determined above which effects of frequency dispersion are considered to be significant in terms of maximum surface elevations.
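
    The theoretical dispersion relationship mentioned above, which links arrival time to frequency, follows from the linear surface-gravity-wave relation and its group velocity:

    ```latex
    \omega^2 = g\,k \tanh(kh), \qquad
    c_g = \frac{\partial \omega}{\partial k}, \qquad
    t_{\mathrm{arr}}(f) \approx \frac{L}{c_g(f,h)}
    ```

    so that, over a propagation distance L in water of depth h (4000 m in the idealized tests), the high-frequency components of the wave train arrive progressively later than the long-wave front.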

  13. Evidence for using Monte Carlo calculated wall attenuation and scatter correction factors for three styles of graphite-walled ion chamber.

    PubMed

    McCaffrey, J P; Mainegra-Hing, E; Kawrakow, I; Shortt, K R; Rogers, D W O

    2004-06-21

    The basic equation for establishing a 60Co air-kerma standard based on a cavity ionization chamber includes a wall correction term that corrects for the attenuation and scatter of photons in the chamber wall. For over a decade, the validity of the wall correction terms determined by extrapolation methods (K_w·K_cep) has been strongly challenged by Monte Carlo (MC) calculation methods (K_wall). Using the linear extrapolation method with experimental data, K_w·K_cep was determined in this study for three different styles of primary-standard-grade graphite ionization chamber: cylindrical, spherical and plane-parallel. For measurements taken with the same 60Co source, the air-kerma rates for these three chambers, determined using extrapolated K_w·K_cep values, differed by up to 2%. The MC code 'EGSnrc' was used to calculate the values of K_wall for these three chambers. Use of the calculated K_wall values gave air-kerma rates that agreed within 0.3%. The accuracy of this code was affirmed by its reliability in modelling the complex structure of the response curve obtained by rotation of the non-rotationally symmetric plane-parallel chamber. These results demonstrate that the linear extrapolation technique leads to errors in the determination of air-kerma.
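
    The challenged linear extrapolation method can be sketched as a straight-line fit of chamber ionization versus wall thickness, extrapolated to zero wall; the readings below are invented for illustration.

    ```python
    import numpy as np

    # Ionization readings (arbitrary units) at increasing graphite wall
    # thicknesses (g/cm^2); invented data for illustration only.
    wall = np.array([0.25, 0.50, 0.75, 1.00])
    ion = np.array([0.990, 0.973, 0.956, 0.940])

    slope, intercept = np.polyfit(wall, ion, 1)
    nominal = 0.50                                  # actual chamber wall (illustrative)
    K_w = intercept / np.interp(nominal, wall, ion)  # zero-wall reading / nominal reading
    # The paper's point: this extrapolated K_w*K_cep treatment leaves up to 2%
    # disagreement between chamber geometries, whereas Monte Carlo K_wall
    # values bring the same standards into 0.3% agreement.
    ```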

  14. Comparisons of sets of electron-neutral scattering cross sections and calculated swarm parameters in Kr and Xe

    NASA Astrophysics Data System (ADS)

    Bordage, M. C.; Hagelaar, G. J. M.; Pitchford, L. C.; Biagi, S. F.; Puech, V.

    2011-10-01

    Xenon is used in a number of application areas ranging from light sources to x-ray detectors for imaging in medicine, border security and high-energy particle physics. There is a correspondingly large body of data available for electron scattering cross sections and swarm parameters in Xe, whereas data for Kr are more limited. In this communication we show intercomparisons of the cross section sets in Xe and Kr presently available on the LXCat site. Swarm parameters calculated using these cross section sets are compared with experimental data, also available on the LXCat site. As was found for Ar, diffusion coefficients calculated using these cross section data in a 2-term Boltzmann solver are higher than Monte Carlo results by about 30% over a range of E/N from 1 to 100 Td. We otherwise find good agreement in Xe between 2-term and Monte Carlo results, and between measured and calculated values of electron mobility, ionization rates and light emission (dimer) at atmospheric pressure. The available cross section data in Kr yield swarm parameters in agreement with the limited experimental data. The cross section compilations and measured swarm parameters used in this work are available online at www.lxcat.laplace.univ-tlse.fr.

  15. Comparison of chlorine and ammonia concentration field trial data with calculated results from a Gaussian atmospheric transport and dispersion model.

    PubMed

    Bauer, Timothy J

    2013-06-15

    The Jack Rabbit Test Program was sponsored in April and May 2010 by the Department of Homeland Security Transportation Security Administration to generate source data for large releases of chlorine and ammonia from transport tanks. In addition to a variety of data types measured at the release location, concentration versus time data were measured using sensors at distances up to 500 m from the tank. Release data were used to create accurate representations of the vapor flux versus time for the ten releases. This study was conducted to determine the importance of source terms and meteorological conditions in predicting downwind concentrations, and the accuracy that can be obtained in those predictions. Each source representation was entered into an atmospheric transport and dispersion model using simplifying assumptions regarding the source characterization and meteorological conditions, and statistics for cloud duration and concentration at the sensor locations were calculated. A detailed characterization of one of the chlorine releases predicted 37% of concentration values within a factor of two, but cannot be considered representative of all the trials. Predictions of toxic effects at 200 m are relevant to incidents involving 1-ton chlorine tanks commonly used in parts of the United States and internationally.
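
    The "within a factor of two" statistic quoted above (often called FAC2) is a standard transport-and-dispersion performance metric; a sketch of its computation:

    ```python
    import numpy as np

    def fac2(predicted, observed):
        """Fraction of predictions within a factor of two of the paired
        observations; a standard dispersion-model performance metric."""
        predicted = np.asarray(predicted, dtype=float)
        observed = np.asarray(observed, dtype=float)
        mask = observed > 0
        ratio = predicted[mask] / observed[mask]
        return float(np.mean((ratio >= 0.5) & (ratio <= 2.0)))
    ```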

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    None, None

    Operations of Sandia National Laboratories, Nevada (SNL/NV) at the Tonopah Test Range (TTR) resulted in no planned point radiological releases during 1996. Other releases from SNL/NV included diffuse transuranic sources consisting of the three Clean Slate sites. Air emissions from these sources result from wind resuspension of near-surface transuranic contaminated soil particulates. The total area of contamination has been estimated to exceed 20 million square meters. Soil contamination was documented in an aerial survey program in 1977 (EG&G 1979). Surface contamination levels were generally found to be below 400 pCi/g of combined plutonium-238, plutonium-239, plutonium-240, and americium-241 (i.e., transuranic) activity. Hot spot areas contain up to 43,000 pCi/g of transuranic activity. Recent measurements confirm the presence of significant levels of transuranic activity in the surface soil. An annual diffuse source term of 0.39 Ci of transuranic material was calculated for the cumulative release from all three Clean Slate sites. A maximally exposed individual dose of 1.1 mrem/yr at the TTR airport area was estimated based on the 1996 diffuse source release amounts and site-specific meteorological data. A population dose of 0.86 person-rem/yr was calculated for the local residents. Both dose values were attributable to inhalation of transuranic contaminated dust.

  17. The groundwater budget: A tool for preliminary estimation of the hydraulic connection between neighboring aquifers

    NASA Astrophysics Data System (ADS)

    Viaroli, Stefano; Mastrorillo, Lucia; Lotti, Francesca; Paolucci, Vittorio; Mazza, Roberto

    2018-01-01

    Groundwater management authorities usually use groundwater budget calculations to evaluate the sustainability of withdrawals for different purposes. The groundwater budget calculation does not always provide reliable information, and it must often be supported by further aquifer monitoring where hydraulic connections between neighboring aquifers exist. The Riardo Plain aquifer is a strategic drinking resource for more than 100,000 people, water storage for 60 km² of irrigated land, and the source of a mineral water bottling plant. Over a long period, the comparison between the direct recharge and the estimated natural outflow and withdrawals highlights a severe water deficit of approximately 40% of the total groundwater outflow. A groundwater budget deficit should be a clue to aquifer depletion, but the results of long-term water level monitoring show that this aquifer remains in good condition. In fact, in the Riardo Plain, the calculated deficit is not consistent with the aquifer monitoring data acquired over the same period (1992-2014). The small oscillations of the groundwater level and the almost stable streambed spring discharge point to an additional aquifer recharge source. The confined carbonate aquifer locally mixes with the overlying volcanic aquifer, providing an external, stable recharge that reduces the effects of local rainfall variability. The combined approach of the groundwater budget results and long-term aquifer monitoring (spring discharge and/or hydraulic head oscillation) provides information about significant external groundwater exchanges, even when they are unidentified by field measurements, and supports stakeholders in groundwater resource management.
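
    A minimal budget check of the kind described, with invented numbers chosen to reproduce a deficit of roughly 40% of total outflow, illustrates the reasoning:

    ```python
    # Illustrative groundwater budget (all values in Mm^3/yr; invented).
    direct_recharge = 30.0
    natural_outflow = 35.0
    withdrawals = 15.0

    residual = direct_recharge - (natural_outflow + withdrawals)
    print(f"budget residual: {residual:+.1f} Mm^3/yr")  # -20.0, i.e. ~40% of outflow
    # A persistently negative residual alongside stable heads and spring
    # discharge points to an unquantified inflow - here, leakage from the
    # deep carbonate aquifer into the overlying volcanic aquifer.
    ```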

  18. Re-framing 'binge drinking' as calculated hedonism: empirical evidence from the UK.

    PubMed

    Szmigin, Isabelle; Griffin, Christine; Mistral, Willm; Bengry-Howell, Andrew; Weale, Louise; Hackley, Chris

    2008-10-01

    Recent debates on 'binge drinking' in the UK have represented the activities of young drinkers in urban areas as a particular source of concern, as constituting a threat to law and order, a drain on public health and welfare services and as a source of risk to their own future health and well being. The discourse of moral panic around young people's 'binge drinking' has pervaded popular media, public policy and academic research, often differentiating the excesses of 'binge drinking' from 'normal' patterns of alcohol consumption, although in practice definitions of 'binge drinking' vary considerably. However, recent research in this area has drawn on the notion of 'calculated hedonism' to refer to a way of 'managing' alcohol consumption that might be viewed as excessive. The paper presents a critical analysis of contemporary discourses around 'binge drinking' in the British context, highlighting contradictory messages about responsibility and self control in relation to the recent liberalisation of licensing laws and the extensive marketing of alcohol to young people. The paper analyses marketing communications which present drinking as a crucial element in 'having fun', and as an important aspect of young people's social lives. The empirical study involves analysis of focus group discussions and individual interviews with young people aged 18-25 in three areas of Britain: a major city in the West Midlands, a seaside town in the South-West of England and a small market town also in the South-West. The initial findings present the varied forms and meanings that socialising and drinking took in these young people's social lives. In particular the results illustrate the ways in which drinking is constituted and managed as a potential source of pleasure. The paper concludes that the term 'calculated hedonism' better describes the behaviour of the young people in this study and in particular the way they manage their pleasure around alcohol, than the emotive term 'binge drinking'.

  19. Global carbon - nitrogen - phosphorus cycle interactions: A key to solving the atmospheric CO2 balance problem?

    NASA Technical Reports Server (NTRS)

    Peterson, B. J.; Mellillo, J. M.

    1984-01-01

    If all reported biotic sinks of atmospheric CO2 were added, a value of about 0.4 Gt C/yr would be found. For each category, a very high (non-conservative) estimate was used. This still does not provide a sufficient basis for achieving a balance between the sources and sinks of atmospheric CO2. The bulk of the discrepancy lies in a combination of errors in the major terms, the greatest being in the net biotic release and ocean uptake segments, but smaller errors or biases may exist in calculations of the rate of atmospheric CO2 increase and total fossil fuel use as well. The reason why biotic sinks are not capable of balancing the CO2 increase via nutrient-matching in the short term is apparent from a comparison of the stoichiometry of the sources and sinks. The burning of fossil fuels and forest biomass releases much more CO2-carbon than is sequestered as organic carbon.

  20. Analysis of streamflow distribution of non-point source nitrogen export from long-term urban-rural catchments to guide watershed management in the Chesapeake Bay watershed

    NASA Astrophysics Data System (ADS)

    Duncan, J. M.; Band, L. E.; Groffman, P.

    2017-12-01

    Discharge, land use, and watershed management practices (stream restoration and stormwater control measures) have been found to be important determinants of nitrogen (N) export to receiving waters. We used long-term water quality stations from the Baltimore Ecosystem Study Long-Term Ecological Research (BES LTER) Site to quantify nitrogen export across streamflow conditions at the small watershed scale. We calculated nitrate and total nitrogen fluxes using a methodology that allows for changes over time: weighted regressions on time, discharge, and seasonality. Here we tested the hypotheses that (a) while the largest N stream fluxes occur during storm events, there is not a clear relationship between N flux and discharge, and (b) N export patterns are aseasonal in developed watersheds, where sources are larger and retention capacity is lower. The goal is to scale understanding from small watersheds to larger ones. Developing a better understanding of hydrologic controls on nitrogen export is essential for successful adaptive watershed management at societally meaningful spatial scales.
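
    The weighted regressions on time, discharge, and seasonality (WRTDS) method fits, at each estimation point, a locally weighted model of the standard form

    ```latex
    \ln(c) = \beta_0 + \beta_1 t + \beta_2 \ln(Q)
    + \beta_3 \sin(2\pi t) + \beta_4 \cos(2\pi t) + \varepsilon
    ```

    where c is concentration, Q is daily discharge, and t is decimal time; flux is then obtained by multiplying the estimated concentration by discharge.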

  1. Spurious Behavior of Shock-Capturing Methods: Problems Containing Stiff Source Terms and Discontinuities

    NASA Technical Reports Server (NTRS)

    Yee, Helen M. C.; Kotov, D. V.; Wang, Wei; Shu, Chi-Wang

    2013-01-01

    The goal of this paper is to relate the numerical dissipation inherent in high order shock-capturing schemes to the onset of wrong propagation speeds of discontinuities. For pointwise evaluation of the source term, previous studies indicated that the phenomenon of wrong propagation speed of discontinuities is connected with the smearing of the discontinuity caused by the discretization of the advection term. The smearing introduces a nonequilibrium state into the calculation. Thus, as soon as a nonequilibrium value is introduced in this manner, the source term turns on and immediately restores equilibrium, while at the same time shifting the discontinuity to a cell boundary. The present study shows that the degree of wrong propagation speed of discontinuities is highly dependent on the accuracy of the numerical method. The manner in which the smearing of discontinuities is contained by the numerical method and the overall amount of numerical dissipation being employed play major roles. Moreover, employing finite time steps and grid spacings that are below the standard Courant-Friedrichs-Lewy (CFL) limit for shock-capturing methods on compressible Euler and Navier-Stokes equations containing stiff reacting source terms and discontinuities reveals surprising counter-intuitive results. Unlike non-reacting flows, for stiff reactions with discontinuities, employing a time step and grid spacing that are below the CFL limit (based on the homogeneous or non-reacting part of the governing equations) does not guarantee a correct solution of the chosen governing equations. Instead, depending on the numerical method, time step and grid spacing, the numerical simulation may lead to (a) the correct solution (within the truncation error of the scheme), (b) a divergent solution, (c) a solution with the wrong propagation speed of discontinuities or (d) other spurious solutions that are solutions of the discretized counterparts but are not solutions of the governing equations. The present investigation of three very different stiff system cases confirms some of the findings of Lafon & Yee (1996) and LeVeque & Yee (1990) for a model scalar PDE. The findings might shed some light on the reported difficulties in numerical combustion and in problems with stiff nonlinear (homogeneous) source terms and discontinuities in general.
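
    The model scalar PDE of LeVeque & Yee (1990) cited at the end is, in one common form,

    ```latex
    u_t + u_x = -\mu\, u\,(u-1)\left(u - \tfrac{1}{2}\right), \qquad \mu \gg 1
    ```

    for which, when the reaction is stiff and the discontinuity is under-resolved, shock-capturing schemes famously propagate the front at one cell per time step rather than at the correct speed.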

  2. Characterization of the relationship between ceramic pot filter water production and turbidity in source water.

    PubMed

    Salvinelli, Carlo; Elmore, A Curt; Reidmeyer, Mary R; Drake, K David; Ahmad, Khaldoun I

    2016-11-01

    Ceramic pot filters represent a common and effective household water treatment technology in developing countries, but the factors affecting water production rate are not well known. The turbidity of the source water may be a principal indicator in characterizing a filter's lifetime in terms of water production capacity. A flow rate study was conducted by creating four controlled scenarios with different turbidities, and influent and effluent water samples were tested for total suspended solids and particle size distribution. A relationship between average flow rate and turbidity was identified, with a negative linear trend of 50 mL h⁻¹ per NTU. A positive linear relationship was also found between the initial flow rate of the filters and the average flow rate calculated over the 23-day life of the experiment. It was therefore possible to establish a method to estimate the average flow rate given the initial flow rate and the turbidity of the influent water source, and to back-calculate the maximum average turbidity that would need to be maintained in order to achieve a specific average flow rate. However, long-term investigations should be conducted to assess how these relationships change over the expected CPF lifetime. CPFs rejected fine suspended particles (below 75 μm), especially particles with diameters between 0.375 μm and 10 μm. The results confirmed that ceramic pot filters are able to effectively reduce turbidity, but pretreatment of influent water should be performed to avoid premature failure.
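
    Using the two linear relationships reported above, a back-of-the-envelope estimator can be written as follows; the −50 mL h⁻¹ per NTU slope comes from the abstract, while the initial-flow coefficients are illustrative placeholders for the fitted positive linear relationship.

    ```python
    def average_flow_ml_per_h(initial_flow, turbidity_ntu,
                              a=0.8, b=0.0, slope=-50.0):
        """Estimate the life-average CPF flow rate (mL/h) from the initial
        flow rate (mL/h) and source-water turbidity (NTU). Coefficients a, b
        are hypothetical; slope is the reported -50 mL/h per NTU trend."""
        return a * initial_flow + b + slope * turbidity_ntu

    def max_turbidity_ntu(initial_flow, target_avg_flow,
                          a=0.8, b=0.0, slope=-50.0):
        """Back-calculate the maximum average turbidity that still achieves
        the target average flow rate, by inverting the same linear model."""
        return (a * initial_flow + b - target_avg_flow) / -slope
    ```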

  3. Demonstration/Validation of Long-Term Monitoring Using Wells Installed by Direct-Push Technologies

    DTIC Science & Technology

    2008-04-01

    Procedures for technology startup and maintenance are presented in detail in Section 5.2 of this document, and Section 1.6 covers the calculation of data quality. Statistical testing involved university statisticians, with results described as follows: for the Dover and Hanscom sites, the data or log data were tested for normality.

  4. A photon source model based on particle transport in a parameterized accelerator structure for Monte Carlo dose calculations.

    PubMed

    Ishizawa, Yoshiki; Dobashi, Suguru; Kadoya, Noriyuki; Ito, Kengo; Chiba, Takahito; Takayama, Yoshiki; Sato, Kiyokazu; Takeda, Ken

    2018-05-17

    An accurate source model of a medical linear accelerator is essential for Monte Carlo (MC) dose calculations. This study aims to propose an analytical photon source model based on particle transport in parameterized accelerator structures, focusing on a more realistic determination of linac photon spectra compared to existing approaches. We designed the primary and secondary photon sources based on the photons attenuated and scattered by a parameterized flattening filter. The primary photons were derived by attenuating bremsstrahlung photons based on the path length in the filter. Conversely, the secondary photons were derived from the decrement of the primary photons in the attenuation process. This design allows these sources to share the free parameters of the filter shape and relates them to each other through the photon interactions in the filter. We introduced two other parameters of the primary photon source to describe the particle fluence in penumbral regions. All the parameters are optimized based on calculated dose curves in water using the pencil-beam-based algorithm. To verify the modeling accuracy, we compared the proposed model with the phase space data (PSD) of the Varian TrueBeam 6 and 15 MV accelerators in terms of the beam characteristics and the dose distributions. The EGS5 Monte Carlo code was used to calculate the dose distributions associated with the optimized model and reference PSD in a homogeneous water phantom and a heterogeneous lung phantom. We calculated the percentage of points passing 1D and 2D gamma analysis with 1%/1 mm criteria for the dose curves and lateral dose distributions, respectively. The optimized model accurately reproduced the spectral curves of the reference PSD both on- and off-axis. The depth dose and lateral dose profiles of the optimized model also showed good agreement with those of the reference PSD. The passing rates of the 1D gamma analysis with 1%/1 mm criteria between the model and PSD were 100% for 4 × 4, 10 × 10, and 20 × 20 cm² fields at multiple depths. For the 2D dose distributions calculated in the heterogeneous lung phantom, the 2D gamma pass rate was 100% for the 6 and 15 MV beams. The model optimization time was less than 4 min. The proposed source model optimization process accurately produces photon fluence spectra from a linac using valid physical properties, without detailed knowledge of the geometry of the linac head, and with minimal optimization time.
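
    The primary/secondary split can be sketched as follows: primary photons are bremsstrahlung photons attenuated along their chord through the flattening filter, and the secondary (scattered) source strength is the decrement removed from the primary beam. The conical filter profile and the attenuation coefficient below are illustrative stand-ins for the model's free parameters.

    ```python
    import numpy as np

    def filter_path_length(r, cone_height=2.0, cone_base=4.0):
        """Chord length (cm) through a simple conical flattening filter at
        off-axis radius r (cm); an illustrative shape, not the fitted one."""
        return np.maximum(cone_height * (1.0 - np.asarray(r) / cone_base), 0.0)

    def split_fluence(psi_brems, r, mu=0.25):
        """Primary fluence = bremsstrahlung fluence attenuated over the filter
        path length; the secondary source is the attenuated decrement.
        mu (1/cm) is an illustrative effective attenuation coefficient."""
        t = filter_path_length(r)
        primary = psi_brems * np.exp(-mu * t)
        secondary_origin = psi_brems - primary   # photons available to scatter
        return primary, secondary_origin
    ```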

  5. Isotopic composition and neutronics of the Okelobondo natural reactor

    NASA Astrophysics Data System (ADS)

    Palenik, Christopher Samuel

    The Oklo-Okelobondo and Bangombe uranium deposits in Gabon, Africa, host Earth's only known natural nuclear fission reactors. These 2-billion-year-old reactors represent a unique opportunity to study used nuclear fuel over geologic periods of time. The reactors in these deposits have been studied as a means by which to constrain the source term of fission product concentrations produced during reactor operation. The source term depends on the neutronic parameters, which include reactor operation duration, neutron flux and the neutron energy spectrum. Reactor operation has been modeled using a point-source computer simulation (the Oak Ridge Isotope Generation and Depletion, ORIGEN, code) for a light water reactor. Model results have been constrained using secondary ion mass spectrometry (SIMS) isotopic measurements of the fission products Nd and Te, as well as U, in uraninite from samples collected in the Okelobondo reactor zone. Based upon the constraints on the operating conditions, the pre-reactor concentrations of Nd (150 ppm ± 75 ppm) and Te (<1 ppm) in uraninite were estimated. For the burnups measured in Okelobondo samples (0.7 to 13.8 GWd/MTU), the final fission product inventories of Nd (90 to 1200 ppm) and Te (10 to 110 ppm) were calculated. By the same means, the ranges of all other fission products and actinides produced during reactor operation were calculated as a function of burnup. These results provide a source term against which the present elemental and decay abundances at the fission reactor can be compared. Furthermore, they provide new insights into the extent to which a "fossil" nuclear reactor can be characterized on the basis of its isotopic signatures. In addition, results from the study of two other natural systems related to radionuclide and fission product transport are included. A detailed characterization of the uranyl mineralogy at the Bangombe uranium deposit in Gabon, Africa, was completed to improve geochemical models of the solubility-limiting phase. A study of the competing effects of radiation damage and annealing in a U-bearing crystal of zircon shows that low-temperature annealing in actinide-bearing phases is significant in the annealing of radiation damage.

  6. Recommended improvements to the DS02 dosimetry system's calculation of organ doses and their potential advantages for the Radiation Effects Research Foundation.

    PubMed

    Cullings, Harry M

    2012-03-01

    The Radiation Effects Research Foundation (RERF) uses a dosimetry system to calculate radiation doses received by the Japanese atomic bomb survivors based on their reported location and shielding at the time of exposure. The current system, DS02, completed in 2003, calculates detailed doses to 15 particular organs of the body from neutrons and gamma rays, using new source terms and transport calculations as well as some other improvements in the calculation of terrain and structural shielding, but continues to use methods from an older system, DS86, to account for body self-shielding. Although recent developments in models of the human body from medical imaging, along with contemporary computer speed and software, allow for improvement of the calculated organ doses, before undertaking changes to the organ dose calculations, it is important to evaluate the improvements that can be made and their potential contribution to RERF's research. The analysis provided here suggests that the most important improvements can be made by providing calculations for more organs or tissues and by providing a larger series of age- and sex-specific models of the human body from birth to adulthood, as well as fetal models.

  7. The evolution of methods for noise prediction of high speed rotors and propellers in the time domain

    NASA Technical Reports Server (NTRS)

    Farassat, F.

    1986-01-01

    Linear wave equation models that have been used over the years at NASA Langley for describing noise emissions from high speed rotating blades are summarized. The noise sources are assumed to lie on a moving surface, and analysis of the situation has been based on the Ffowcs Williams-Hawkings (FW-H) equation. Although the equation accounts for two surface sources and one volume source, the NASA analyses have considered only the surface terms. Several variations on the FW-H model are delineated for various types of applications, noting the computational benefits of removing the frequency dependence of the calculations. Formulations are also provided for compact and noncompact sources, and features of Long's subsonic integral equation and Farassat's high speed integral equation are discussed. The selection of subsonic or high speed models depends on the Mach number of the blade surface where the source is located.
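
    In one standard form (for a moving surface defined by f = 0), the FW-H equation underlying these formulations reads

    ```latex
    \left(\frac{\partial^2}{\partial t^2} - c^2 \nabla^2\right)\!\bigl[H(f)\,\rho'\bigr]
    = \frac{\partial^2}{\partial x_i \partial x_j}\bigl[T_{ij} H(f)\bigr]
    - \frac{\partial}{\partial x_i}\bigl[L_i\,\delta(f)\bigr]
    + \frac{\partial}{\partial t}\bigl[\rho_0 v_n\,\delta(f)\bigr]
    ```

    where the Lighthill stress tensor T_ij is the volume (quadrupole) source and the two surface terms are the loading (dipole) and thickness (monopole) sources; the NASA analyses described above retain only the two surface terms.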

  8. A previously unreported type of seismic source in the firn layer of the East Antarctic Ice Sheet

    NASA Astrophysics Data System (ADS)

    Lough, Amanda C.; Barcheck, C. Grace; Wiens, Douglas A.; Nyblade, Andrew; Anandakrishnan, Sridhar

    2015-11-01

    We identify a unique type of seismic source in the uppermost part of the East Antarctic Ice Sheet recorded by temporary broadband seismic arrays in East Antarctica. These sources, termed "firnquakes," are characterized by dispersed surface wave trains with frequencies of 1-10 Hz detectable at distances up to 1000 km. Events show strong dispersed Rayleigh wave trains and an absence of observable body wave arrivals; most events also show weaker Love waves. Initial events were discovered by standard detection schemes; additional events were then detected with a correlation scanner using the initial arrivals as templates. We locate sources by determining the L2 misfit for a grid of potential source locations using Rayleigh wave arrival times and polarization directions. We then perform a multiple-filter analysis to calculate the Rayleigh wave group velocity dispersion and invert the group velocity for shear velocity structure. The resulting velocity structure is used as an input model to calculate synthetic seismograms. Inverting the dispersion curves yields ice velocity structures consistent with a low-velocity firn layer ~100 m thick and show that velocity structure is laterally variable. The absence of observable body wave phases and the relative amplitudes of Rayleigh waves and noise constrain the source depth to be less than 20 m. The presence of Love waves for most events suggests the source is not isotropic. We propose the events are linked to the formation of small crevasses in the firn, and several events correlate with shallow crevasse fields mapped in satellite imagery.
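
    The grid-search location step can be sketched as follows; the constant group velocity and the use of demeaned arrival times (so the unknown origin time cancels) are illustrative simplifications of the study's misfit, which also uses polarization directions.

    ```python
    import numpy as np

    def locate_source(station_xy, t_obs, vg_km_s, grid_x, grid_y):
        """Grid search for the epicenter minimizing the L2 misfit between
        observed and predicted Rayleigh-wave arrival times (sketch)."""
        best, best_cost = None, np.inf
        for x in grid_x:
            for y in grid_y:
                dist = np.hypot(station_xy[:, 0] - x, station_xy[:, 1] - y)
                t_pred = dist / vg_km_s
                # Demean both so the unknown origin time cancels.
                resid = (t_obs - t_obs.mean()) - (t_pred - t_pred.mean())
                cost = float(np.sum(resid ** 2))
                if cost < best_cost:
                    best, best_cost = (x, y), cost
        return best, best_cost
    ```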

  9. Area estimation of environmental phenomena from NOAA-n satellite data. [TIROS N satellite

    NASA Technical Reports Server (NTRS)

    Tappan, G. (Principal Investigator); Miller, G. E.

    1982-01-01

    A technique for documenting changes in the size of NOAA-n pixels in order to calibrate the data for use in area calculations is described. Based on Earth-satellite geometry, a function for calculating the effective pixel size, measured in terms of ground area, for any given pixel was derived. The equation is an application of the law of sines plus an arc-length formula. Effective pixel dimensions for the NOAA 6 and 7 satellites for all pixels between nadir and the extreme view angles are presented. The NOAA 6 data were used to estimate the areas of several lakes, with an accuracy within 5%. Sources of error are discussed.
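
    As a rough illustration of the geometry described above, the sketch below reconstructs an effective-pixel-size calculation from the law of sines and an arc-length formula. The orbit altitude and instantaneous field of view are assumed order-of-magnitude values, not the paper's calibration constants.

        import numpy as np

        R_E = 6371.0    # mean Earth radius, km
        H_SAT = 833.0   # nominal NOAA-6/7 orbit altitude, km (assumed)
        IFOV = 1.3e-3   # instrument instantaneous field of view, rad (assumed)

        def pixel_size(theta):
            """Effective along-scan and cross-scan pixel dimensions (km) at a
            scan angle theta (rad) measured from nadir at the satellite."""
            k = (R_E + H_SAT) / R_E
            # Law of sines: Earth central angle between nadir and the viewed point.
            gamma = np.arcsin(k * np.sin(theta)) - theta
            # Slant range from satellite to ground (law of sines again; nadir limit).
            slant = R_E * np.sin(gamma) / np.sin(theta) if theta > 0 else H_SAT
            # Arc-length formula: ground distance from nadir is R_E * gamma, so
            # the along-scan pixel size is R_E * (dgamma/dtheta) * IFOV.
            dgamma = k * np.cos(theta) / np.sqrt(1.0 - (k * np.sin(theta)) ** 2) - 1.0
            return R_E * dgamma * IFOV, slant * IFOV

        for deg in (0.0, 25.0, 55.0):   # nadir out toward the scan edge
            along, cross = pixel_size(np.radians(deg))
            print(f"{deg:4.0f} deg: {along:.2f} km x {cross:.2f} km")

    At nadir this reduces to altitude times the field of view (about 1.1 km here); toward the scan edge the along-scan dimension grows fastest, which is exactly why area estimates need the view-angle-dependent calibration the abstract describes.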

  10. Unsteady Flow Dynamics and Acoustics of Two-Outlet Centrifugal Fan Design

    NASA Astrophysics Data System (ADS)

    Wong, I. Y. W.; Leung, R. C. K.; Law, A. K. Y.

    2011-09-01

    In this study, a centrifugal fan design with two flow outlets is investigated. This design aims to provide high mass flow rate but low noise. A two-dimensional unsteady flow simulation with the CFD code FLUENT 6.3 is carried out to analyze the fan flow dynamics and acoustics. The calculations were done using the unsteady Reynolds-averaged Navier-Stokes (URANS) approach, in which the effects of turbulence were accounted for with the κ-ɛ model. This work aims to provide insight into how the dominant noise source mechanisms vary with a key fan geometric parameter, namely the ratio between the cutoff distance and the radius of curvature of the fan housing. Four new fan designs were calculated. Simulation results show that the unsteady flow-induced forces on the fan blades are the main noise sources. The blade force coefficients are then used to build the dipole source terms in the Ffowcs Williams and Hawkings (FW-H) equation for estimating their noise effects. One design is found to deliver 34% more mass flow, with a sound pressure level (SPL) 10 dB lower, than the existing design.

  11. Shielding calculations for the National Synchrotron Light Source-II experimental beamlines

    NASA Astrophysics Data System (ADS)

    Job, Panakkal K.; Casey, William R.

    2013-01-01

    Brookhaven National Laboratory is in the process of building a new electron storage ring for scientific research using synchrotron radiation. This facility, called the "National Synchrotron Light Source II" (NSLS-II), will provide x-ray radiation of ultra-high brightness and exceptional spatial and energy resolution. It will also provide advanced insertion devices, optics, detectors, and robotics, designed to maximize the scientific output of the facility. The project scope includes the design of an electron storage ring, which stores an electron beam current of up to 500 mA at an energy of 3.0 GeV, and of the experimental beamlines. When fully built there will be at least 58 beamlines using synchrotron radiation for experimental programs. It is planned to operate the facility primarily in a top-off mode, thereby limiting the variation in the synchrotron radiation flux to <1%. Because of the very demanding requirements on synchrotron radiation brilliance for the experiments, each of the 58 beamlines will be unique in terms of source properties and experimental configuration. This makes the shielding configuration of each beamline unique. The shielding calculation methodology and the results for five representative beamlines of NSLS-II are presented in this paper.

  12. The effects of shared information on semantic calculations in the gene ontology.

    PubMed

    Bible, Paul W; Sun, Hong-Wei; Morasso, Maria I; Loganantharaj, Rasiah; Wei, Lai

    2017-01-01

    The structured vocabulary that describes gene function, the gene ontology (GO), serves as a powerful tool in biological research. One application of GO in computational biology calculates the semantic similarity between two concepts to make inferences about the functional similarity of genes. A class of term similarity algorithms explicitly calculates the shared information (SI) between concepts and then substitutes this calculation into traditional term similarity measures such as Resnik, Lin, and Jiang-Conrath. Alternative SI approaches, when combined with ontology choice and term similarity type, lead to many gene-to-gene similarity measures. No thorough investigation has been made into the behavior, complexity, and performance of semantic methods derived from distinct SI approaches. We apply bootstrapping to compare the generalized performance of 57 gene-to-gene semantic measures across six benchmarks. Considering the number of measures, we additionally evaluate whether these methods can be leveraged through ensemble machine learning to improve prediction performance. Results showed that the choice of ontology type most strongly influenced performance across all evaluations. Combining measures into an ensemble classifier reduces cross-validation error beyond any individual measure for protein interaction prediction. This improvement resulted from information gained through the combination of ontology types, as ensemble methods within each GO type offered no improvement. These results demonstrate that multiple SI measures can be leveraged for machine learning tasks such as automated gene function prediction by incorporating methods from across the ontologies. To facilitate future research in this area, we developed the GO Graph Tool Kit (GGTK), an open source C++ library with a Python interface (github.com/paulbible/ggtk).
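
    For readers unfamiliar with the three term similarity measures named above, the following minimal sketch shows how a single shared-information (SI) calculation, here taken as the information content of the most informative common ancestor, is substituted into the Resnik, Lin, and Jiang-Conrath formulas. The toy DAG and probabilities are hypothetical, and this does not reproduce GGTK's actual API.

        import math

        # Toy DAG: child -> set of parents (hypothetical GO-like terms).
        PARENTS = {
            "GO:B": {"GO:ROOT"}, "GO:C": {"GO:ROOT"},
            "GO:D": {"GO:B"}, "GO:E": {"GO:B", "GO:C"},
        }

        # Hypothetical annotation probabilities; IC(t) = -log p(t).
        P = {"GO:ROOT": 1.0, "GO:B": 0.4, "GO:C": 0.5, "GO:D": 0.1, "GO:E": 0.05}
        IC = {t: -math.log(p) for t, p in P.items()}

        def ancestors(term):
            """The term plus all of its ancestors in the DAG."""
            result, stack = {term}, [term]
            while stack:
                for parent in PARENTS.get(stack.pop(), ()):
                    if parent not in result:
                        result.add(parent)
                        stack.append(parent)
            return result

        def shared_information(a, b):
            """SI as the IC of the most informative common ancestor (Resnik's
            choice); alternative SI definitions would substitute here."""
            return max(IC[t] for t in ancestors(a) & ancestors(b))

        def resnik(a, b):
            return shared_information(a, b)

        def lin(a, b):
            return 2.0 * shared_information(a, b) / (IC[a] + IC[b])

        def jiang_conrath(a, b):
            # Distance form; smaller means more similar.
            return IC[a] + IC[b] - 2.0 * shared_information(a, b)

        print(resnik("GO:D", "GO:E"), lin("GO:D", "GO:E"), jiang_conrath("GO:D", "GO:E"))

    The point of the paper's comparison is that `shared_information` is the interchangeable part: swapping in a different SI definition while keeping the three outer formulas fixed generates the large family of gene-to-gene measures evaluated above.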

  13. Metal lost and found: dissipative uses and releases of copper in the United States 1975-2000.

    PubMed

    Lifset, Reid J; Eckelman, Matthew J; Harper, E M; Hausfather, Zeke; Urbina, Gonzalo

    2012-02-15

    Metals are used in a variety of ways, many of which lead to dissipative releases to the environment. Such releases are relevant from both a resource use and an environmental impact perspective. We present a historical analysis of copper dissipative releases in the United States from 1975 to 2000. We situate all dissipative releases in copper's life cycle and introduce a conceptual framework by which copper dissipative releases may be categorized in terms of the intentionality of use and release. We interpret our results in the context of larger trends in production and consumption and of government policies that have served as drivers of intentional copper releases from the relevant sources. Intentional copper releases are found to be both significant in quantity and highly variable. In 1975, for example, the largest source of intentional releases was the application of copper-based pesticides, which decreased more than 50% over the next 25 years; all other sources of intentional releases increased during that period. Overall, intentional copper releases decreased by approximately 15% from 1975 to 2000. Intentional uses that are unintentionally released, such as copper from roofing, increased by the same percentage. Trace contaminant sources such as fossil fuel combustion, i.e., sources where both the use and the release are unintended, increased by nearly 50%. Intentional dissipative uses are equivalent to 60% of unintentional copper dissipative releases and more than five times those from trace sources. Dissipative copper releases are revealed to be modest when compared to bulk copper flows in the economy, and we introduce a metric, the dissipation index, which may be considered an economy-wide measure of resource efficiency for a particular substance. We assess the importance of dissipative releases in the calculation of recycling rates, concluding that the inclusion of dissipation in recycling rate calculations has a small, but discernible, influence and should be included in such calculations.

  14. Determination of the spatial resolution required for the HEDR dose code. Hanford Environmental Dose Reconstruction Project: Dose code recovery activities, Calculation 007

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Napier, B.A.; Simpson, J.C.

    1992-12-01

    A series of scoping calculations has been undertaken to evaluate the doses that may have been received by individuals living in the vicinity of the Hanford site. This scoping calculation (Calculation 007) examined the spatial distribution of potential doses resulting from releases in the year 1945. This study builds on the work initiated in the first scoping calculation, of iodine in cow's milk; the third scoping calculation, which added additional pathways; the fifth calculation, which addressed the uncertainty of the dose estimates at a point; and the sixth calculation, which extrapolated the doses throughout the atmospheric transport domain. A projection of dose to representative individuals throughout the proposed HEDR atmospheric transport domain was prepared on the basis of the HEDR source term. Addressed in this calculation were the contributions to iodine-131 thyroid dose of infants from (1) air submersion and groundshine external dose, (2) inhalation, (3) ingestion of soil by humans, (4) ingestion of leafy vegetables, (5) ingestion of other vegetables and fruits, (6) ingestion of meat, (7) ingestion of eggs, and (8) ingestion of cows' milk from Feeding Regime 1 as described in scoping calculation 001.

  15. Turbulent transport and production/destruction of ozone in a boundary layer over complex terrain

    NASA Technical Reports Server (NTRS)

    Greenhut, Gary K.; Jochum, Anne M.; Neininger, Bruno

    1994-01-01

    The first Intensive Observation Period (IOP) of the Swiss air pollution experiment POLLUMET took place in 1990 in the Aare River Valley between Bern and Zurich. During the IOP, fast response measurements of meteorological variables and ozone concentration were made within the boundary layer aboard a motorglider. In addition, mean values of meteorological variables and the concentrations of ozone and other trace species were measured using other aircraft, pilot balloons, tethersondes, and ground stations. Turbulent flux profiles of latent and sensible heat and ozone are calculated from the fast response data. Terms in the ozone mean concentration budget (time rate of change of mean concentration, horizontal advection, and flux divergence) are calculated for stationary time periods both before and after the passage of a cold front. The source/sink term is calculated as a residual in the budget, and its sign and magnitude are related to the measured concentrations of reactive trace species within the boundary layer. Relationships between concentration ratios of trace species and ozone concentration are determined in order to understand the influence of complex terrain on the processes that produce and destroy ozone.
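
    The residual calculation described above amounts to summing the measured budget terms; the sketch below shows that arithmetic for a single layer-averaged budget. All numbers are hypothetical and the units are assumptions noted in the comments, not values from the POLLUMET data.

        def ozone_source_residual(dC_dt, U, dC_dx, flux_top, flux_bot, dz):
            """Residual source/sink term of the layer-mean ozone budget,
                S = dC/dt + U * dC/dx + d(w'c')/dz,
            returned in ppb/h. Assumed units: dC_dt in ppb/h, U in m/s,
            dC_dx in ppb/km, turbulent fluxes w'c' in ppb m/s, dz in m."""
            advection = U * 3.6 * dC_dx                      # (m/s)(ppb/km) -> ppb/h
            flux_divergence = (flux_top - flux_bot) / dz * 3600.0   # ppb/s -> ppb/h
            return dC_dt + advection + flux_divergence

        # Hypothetical post-frontal flight leg: weak storage, modest advection,
        # downward ozone flux strengthening toward the surface.
        S = ozone_source_residual(dC_dt=0.5, U=5.0, dC_dx=0.05,
                                  flux_top=-0.05, flux_bot=-0.25, dz=800.0)
        print(f"residual source/sink: {S:+.2f} ppb/h")  # >0 net production

    Because S absorbs every measurement error in the other terms, its sign and magnitude are only meaningful when checked against the measured reactive trace species, which is exactly the cross-check the study performs.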

  16. Second order kinetic theory of parallel momentum transport in collisionless drift wave turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yang, E-mail: lyang13@mails.tsinghua.edu.cn; Southwestern Institute of Physics, Chengdu 610041; Gao, Zhe

    A second order kinetic model for turbulent ion parallel momentum transport is presented. A new nonresonant second order parallel momentum flux term is calculated. The resonant component of the ion parallel electrostatic force is the momentum source, while the nonresonant component of the ion parallel electrostatic force compensates for that of the nonresonant second order parallel momentum flux. The resonant component of the kinetic momentum flux can be divided into three parts: the pinch term, the diffusive term, and the residual stress. By reassembling the pinch term and the residual stress, the residual stress can be considered as a pinch term of the parallel wave-particle resonant velocity and may therefore be called a "resonant velocity pinch" term. Considering that the resonant component of the ion parallel electrostatic force is the transfer rate between resonant ions and waves (or, equivalently, nonresonant ions), a conservation equation of the parallel momentum of resonant ions and waves is obtained.

  17. Improved Nitrogen Removal Effect In Continuous Flow A2/O Process Using Typical Extra Carbon Source

    NASA Astrophysics Data System (ADS)

    Wu, Haiyan; Gao, Junyan; Yang, Dianhai; Zhou, Qi; Cai, Bijing

    2010-11-01

    In order to provide a basis for optimal selection of a carbon source, three typical external carbon sources (methanol, sodium acetate, and leachate) were applied to examine the nitrogen removal efficiency of a continuous flow A2/O system fed with effluent from the grit chamber of the second Kunming wastewater treatment plant. The best dosage was determined, and the specific nitrogen removal rate and carbon consumption rate were calculated for each external carbon source in the A2/O system. An economic and technological analysis was also conducted to select a suitable carbon source with a low operating cost. Experimental results showed that the typical external carbon sources caused a remarkable enhancement of the system's nitrate degradation ability. In comparison with the blank test, the average TN and NH3-N removal efficiencies of the system with different dosing quantities of external carbon source improved by 15.2% and 34.2%, respectively. The optimal dosages of methanol, sodium acetate, and leachate were 30 mg/L, 40 mg/L, and 100 mg COD/L, respectively, in terms of a high nitrogen degradation effect. The highest removal efficiencies of COD, TN, and NH3-N reached 92.3%, 73.9%, and 100%, respectively, with methanol at a dosage of 30 mg/L. The kinetic analysis and calculation revealed that the greatest denitrification rate was 0.0107 mg TN/(mg MLVSS·d) with sodium acetate at 60 mg/L. As to the carbon consumption rate, however, the highest value occurred in the blank test, at 0.1955 mg COD/(mg MLVSS·d). Further economic analysis also proved leachate to be a pragmatic external carbon source, with a cost far lower than that of methanol.

  18. Long-term particulate matter modeling for health effect studies in California - Part 2: Concentrations and sources of ultrafine organic aerosols

    NASA Astrophysics Data System (ADS)

    Hu, Jianlin; Jathar, Shantanu; Zhang, Hongliang; Ying, Qi; Chen, Shu-Hua; Cappa, Christopher D.; Kleeman, Michael J.

    2017-04-01

    Organic aerosol (OA) is a major constituent of ultrafine particulate matter (PM0.1). Recent epidemiological studies have identified associations between PM0.1 OA and premature mortality and low birth weight. In this study, the source-oriented UCD/CIT model was used to simulate the concentrations and sources of primary organic aerosols (POA) and secondary organic aerosols (SOA) in PM0.1 in California for a 9-year (2000-2008) modeling period with 4 km horizontal resolution to provide more insights about PM0.1 OA for health effect studies. As a related quality control, predicted monthly average concentrations of fine particulate matter (PM2.5) total organic carbon at six major urban sites had mean fractional bias of -0.31 to 0.19 and mean fractional errors of 0.4 to 0.59. The predicted ratio of PM2.5 SOA/OA was lower than estimates derived from chemical mass balance (CMB) calculations by a factor of 2-3, which suggests the potential effects of processes such as POA volatility, additional SOA formation mechanisms, and missing sources. OA in PM0.1, the focus size fraction of this study, is dominated by POA. Wood smoke is found to be the single biggest source of PM0.1 OA in winter in California, while meat cooking, mobile emissions (gasoline and diesel engines), and other anthropogenic sources (mainly solvent usage and waste disposal) are the most important sources in summer. Biogenic emissions are predicted to be the largest PM0.1 SOA source, followed by mobile sources and other anthropogenic sources, but these rankings are sensitive to the SOA model used in the calculation. Air pollution control programs aiming to reduce the PM0.1 OA concentrations should consider controlling solvent usage, waste disposal, and mobile emissions in California, but these findings should be revisited after the latest science is incorporated into the SOA exposure calculations. The spatial distributions of SOA associated with different sources are not sensitive to the choice of SOA model, although the absolute amount of SOA can change significantly. Therefore, the spatial distributions of PM0.1 POA and SOA over the 9-year study period provide useful information for epidemiological studies to further investigate the associations with health outcomes.

  19. A Numerical Experiment on the Role of Surface Shear Stress in the Generation of Sound

    NASA Technical Reports Server (NTRS)

    Shariff, Karim; Wang, Meng; Merriam, Marshal (Technical Monitor)

    1996-01-01

    The sound generated due to a localized flow over an infinite flat surface is considered. It is known that the unsteady surface pressure, while appearing in a formal solution to the Lighthill equation, does not constitute a source of sound but rather represents the effect of image quadrupoles. The question of whether a similar surface shear stress term constitutes a true source of dipole sound is less settled. Some have boldly assumed it is a true source, while others have argued that, like the surface pressure, it depends on the sound field (via an acoustic boundary layer) and is therefore not a true source. A numerical experiment based on the viscous, compressible Navier-Stokes equations was undertaken to investigate the issue. A small region of a wall was oscillated tangentially. The directly computed sound field was found to agree with an acoustic analogy based calculation that regards the surface shear as an acoustically compact dipole source of sound.

  20. A new aerodynamic integral equation based on an acoustic formula in the time domain

    NASA Technical Reports Server (NTRS)

    Farassat, F.

    1984-01-01

    An aerodynamic integral equation for bodies moving at transonic and supersonic speeds is presented. Based on a time-dependent acoustic formula for calculating the noise emanating from the outer portion of a propeller blade travelling at high speed (the Ffowcs Williams-Hawkings formulation), the loading term and a conventional thickness source term are retained. Two surface and three line integrals are employed to solve an equation for the loading noise. The near-field term is regularized using the collapsing sphere approach to obtain semiconvergence on the blade surface. A singular integral equation is thereby derived for the unknown surface pressure, which is amenable to numerical solution using Galerkin or collocation methods. The technique is useful for studying nonuniform inflow to the propeller.

  1. Modeling of Turbulence Generated Noise in Jets

    NASA Technical Reports Server (NTRS)

    Khavaran, Abbas; Bridges, James

    2004-01-01

    A numerically calculated Green's function is used to predict the jet noise spectrum and its far-field directivity. A linearized form of Lilley's equation governs the non-causal Green's function of interest, with the non-linear terms on the right-hand side identified as the source. In this paper, contributions from the so-called self- and shear-noise source terms are discussed. A Reynolds-averaged Navier-Stokes solution yields the required mean flow as well as the time and length scales of a noise-generating turbulent eddy. A non-compact source, with exponential temporal and spatial functions, is used to describe the turbulence velocity correlation tensors. It is shown that while an exact non-causal Green's function accurately predicts the observed shift in the location of the spectrum peak with angle, as well as the angularity of sound at moderate Mach numbers, at high subsonic and supersonic acoustic Mach numbers the polar directivity of radiated sound is not entirely captured by this Green's function. Results presented for Mach 0.5 and 0.9 isothermal jets, as well as a Mach 0.8 hot jet, indicate that near the peak radiation angle a different source/Green's function convolution integral may be required in order to capture the peak observed directivity of jet noise.

  2. The fast neutron fluence and the activation detector activity calculations using the effective source method and the adjoint function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hep, J.; Konecna, A.; Krysl, V.

    2011-07-01

    This paper describes the application of the effective source in forward calculations and of the adjoint method to the solution of fast neutron fluence and activation detector activities in the reactor pressure vessel (RPV) and RPV cavity of a VVER-440 reactor. Its objective is the demonstration of both methods on a practical task. The effective source method applies the Boltzmann transport operator to time-integrated source data in order to obtain neutron fluence and detector activities. By weighting the source data by the time-dependent decay of the detector activity, the result of the calculation is the detector activity. Alternatively, if the weighting is uniform with respect to time, the result is the fluence. The approach works because of the inherent linearity of radiation transport in non-multiplying, time-invariant media. Integrated in this way, the source data are referred to as the effective source. The effective source in forward calculations thereby enables the analyst to replace numerous intensive transport calculations with a single transport calculation in which the time dependence and magnitude of the source are correctly represented. In this work, the effective source method has been expanded slightly in the following way: the neutron source data were prepared with a few-group calculation using the active core calculation code MOBY-DICK, and the follow-up neutron transport calculation was performed in multigroup form using the neutron transport code TORT. For comparison, an alternative method of calculation has been used, based upon adjoint functions of the Boltzmann transport equation. The three-dimensional (3-D) adjoint function for each required computational outcome has been calculated using the deterministic code TORT and the cross section library BGL440. Adjoint functions appropriate to the required fast neutron flux density and neutron reaction rates have been calculated for several significant points within the RPV and RPV cavity of the VVER-440 reactor, located axially at the position of maximum power and at the position of the weld. Both of these methods (the effective source and the adjoint function) are briefly described in the present paper. The paper also describes their application to the solution of fast neutron fluence and detector activities for the VVER-440 reactor. (authors)
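
    A minimal sketch of the effective-source weighting described above: uniform weights collapse the operating history into a fluence-type source, while weighting by the surviving decay fraction yields an activity-type source for a given detector nuclide. The array shapes, the flat power history, and the Co-60 example are assumptions for illustration, not the MOBY-DICK/TORT workflow.

        import numpy as np

        def effective_source(times, power_history, decay_const=None):
            """Collapse a time-dependent source into a single effective source.

            With uniform weighting the follow-up transport calculation yields
            fluence; weighting each time slice by its surviving decay fraction
            at shutdown yields the activation-detector activity. `times` are
            slice boundaries in seconds, `power_history` the relative source
            strength per slice (assumed shapes)."""
            dt = np.diff(times)
            t_mid = 0.5 * (times[:-1] + times[1:])
            if decay_const is None:
                weights = np.ones_like(dt)                       # -> fluence
            else:
                weights = np.exp(-decay_const * (times[-1] - t_mid))  # -> activity
            # One transport run with this scalar source replaces one run per step.
            return np.sum(power_history * weights * dt)

        times = np.linspace(0.0, 3.15e7, 366)      # one year of operation
        power = np.ones(365)                        # flat relative power (assumed)
        lam = np.log(2) / (5.27 * 3.15e7)           # e.g. a Co-60-like decay constant
        print(effective_source(times, power))       # fluence-type effective source
        print(effective_source(times, power, lam))  # activity-type effective source

    The linearity argument in the abstract is what justifies this: because transport in a time-invariant, non-multiplying medium is linear in the source, the time integration can be done before the transport calculation rather than after.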

  3. Exact Fourier expansion in cylindrical coordinates for the three-dimensional Helmholtz Green function

    NASA Astrophysics Data System (ADS)

    Conway, John T.; Cohl, Howard S.

    2010-06-01

    A new method is presented for Fourier decomposition of the Helmholtz Green function in cylindrical coordinates, which is equivalent to obtaining the solution of the Helmholtz equation for a general ring source. The Fourier coefficients of the Green function are split into their half advanced + half retarded and half advanced-half retarded components, and closed form solutions for these components are then obtained in terms of a Horn function and a Kampé de Fériet function respectively. Series solutions for the Fourier coefficients are given in terms of associated Legendre functions, Bessel and Hankel functions and a hypergeometric function. These series are derived either from the closed form 2-dimensional hypergeometric solutions or from an integral representation, or from both. A simple closed form far-field solution for the general Fourier coefficient is derived from the Hankel series. Numerical calculations comparing different methods of calculating the Fourier coefficients are presented. Fourth order ordinary differential equations for the Fourier coefficients are also given and discussed briefly.
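
    The quadrature below is a brute-force numerical reference for the azimuthal Fourier coefficients discussed above, using the free-space Green function e^{ikR}/(4πR); closed-form expressions such as the paper's Horn and Kampé de Fériet solutions could be checked against it. The expansion convention and the test values are assumptions.

        import numpy as np
        from scipy.integrate import quad

        def green(R, k):
            """Free-space Helmholtz Green function exp(ikR) / (4 pi R)."""
            return np.exp(1j * k * R) / (4.0 * np.pi * R)

        def fourier_coefficient(m, rho, rho_p, dz, k):
            """m-th azimuthal Fourier coefficient g_m of G, with the convention
                G = sum_m g_m(rho, rho', dz) * exp(i m (phi - phi')).
            Evaluated by direct quadrature over the azimuthal angle."""
            def R(phi):
                return np.sqrt(rho**2 + rho_p**2
                               - 2.0 * rho * rho_p * np.cos(phi) + dz**2)
            # G is even in phi, so the exp(-i m phi) kernel reduces to cos(m phi)
            # and the integral over (-pi, pi) to twice the integral over (0, pi).
            re = quad(lambda p: np.real(green(R(p), k)) * np.cos(m * p), 0.0, np.pi)[0]
            im = quad(lambda p: np.imag(green(R(p), k)) * np.cos(m * p), 0.0, np.pi)[0]
            return (re + 1j * im) / np.pi

        print(fourier_coefficient(m=2, rho=1.0, rho_p=0.7, dz=0.3, k=5.0))

    A direct quadrature like this is exactly the kind of independent check the paper's numerical comparisons rely on, although it becomes slow and delicate as dz approaches zero, where the closed-form and series solutions earn their keep.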

  4. Hydroperoxides as Hydrogen Bond Donors

    NASA Astrophysics Data System (ADS)

    Møller, Kristian H.; Tram, Camilla M.; Hansen, Anne S.; Kjaergaard, Henrik G.

    2016-06-01

    Hydroperoxides are formed in the atmosphere following autooxidation of a wide variety of volatile organics emitted from both natural and anthropogenic sources. This raises the question of whether they can form hydrogen bonds that facilitate aerosol formation and growth. Using a combination of Fourier transform infrared spectroscopy, FT-IR, and ab initio calculations, we have compared the gas phase hydrogen bonding ability of tert-butylhydroperoxide (tBuOOH) to that of tert-butanol (tBuOH) for a series of bimolecular complexes with different acceptors. The hydrogen bond acceptor atoms studied are nitrogen, oxygen, phosphorus and sulphur. Both in terms of calculated redshifts and binding energies (BE), our results suggest that hydroperoxides are better hydrogen bond donors than the corresponding alcohols. In terms of hydrogen bond acceptor ability, we find that nitrogen is a significantly better acceptor than the other three atoms, which are of similar strength. We observe a similar trend in hydrogen bond acceptor ability with other hydrogen bond donors including methanol and dimethylamine.

  5. Automated source term and wind parameter estimation for atmospheric transport and dispersion applications

    NASA Astrophysics Data System (ADS)

    Bieringer, Paul E.; Rodriguez, Luna M.; Vandenberghe, Francois; Hurst, Jonathan G.; Bieberbach, George; Sykes, Ian; Hannan, John R.; Zaragoza, Jake; Fry, Richard N.

    2015-12-01

    Accurate simulations of the atmospheric transport and dispersion (AT&D) of hazardous airborne materials rely heavily on the source term parameters necessary to characterize the initial release and on the meteorological conditions that drive the downwind dispersion. In many cases the source parameters are not known and are consequently based on rudimentary assumptions. This is particularly true of accidental releases and the intentional releases associated with terrorist incidents. When available, meteorological observations are often not representative of the conditions at the location of the release, and the use of these non-representative meteorological conditions can result in significant errors in the hazard assessments downwind of the sensors, even when the other source parameters are accurately characterized. Here, we describe a computationally efficient methodology to characterize both the release source parameters and the low-level winds (e.g., winds near the surface) required to produce a refined downwind hazard. This methodology, known as the Variational Iterative Refinement Source Term Estimation (STE) Algorithm (VIRSA), consists of a combination of modeling systems. These systems include a back-trajectory based source inversion method, a forward Gaussian puff dispersion model, and a variational refinement algorithm that uses both a simple forward AT&D model that is a surrogate for the more complex Gaussian puff model and a formal adjoint of this surrogate model. The back-trajectory based method is used to calculate a "first guess" source estimate based on the available observations of the airborne contaminant plume and atmospheric conditions. The variational refinement algorithm is then used to iteratively refine the first-guess STE parameters and meteorological variables. The algorithm has been evaluated across a wide range of scenarios of varying complexity. It has been shown to improve the source location estimate by several hundred percent (normalized by the distance from the source to the closest sampler) and to improve mass estimates by several orders of magnitude. Furthermore, it is able to operate in scenarios with inconsistencies between the wind and airborne contaminant sensor observations, adjusting the wind to provide a better match between the hazard prediction and the observations.
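
    The refinement step can be caricatured as least-squares minimization of the mismatch between a cheap surrogate forward model and the sensor observations, starting from a back-trajectory style first guess. The sketch below is a toy under stated assumptions: it uses a generic derivative-free optimizer in place of VIRSA's adjoint-based variational update, and the geometry, wind, and release values are invented.

        import numpy as np
        from scipy.optimize import minimize

        SENSORS = np.array([[1800.0, -150.0], [2000.0, 0.0], [2200.0, 120.0]])

        def surrogate(params, wind=3.0, travel=600.0, sigma=200.0):
            """Toy surrogate for the puff model: mass q released at (x0, y0),
            advected downwind for `travel` seconds (all values assumed)."""
            x0, y0, q = params
            dx = SENSORS[:, 0] - (x0 + wind * travel)
            dy = SENSORS[:, 1] - y0
            return q * np.exp(-(dx**2 + dy**2) / (2.0 * sigma**2))

        # Synthetic observations from a hidden truth, then a poor first guess.
        truth = np.array([150.0, 30.0, 5.0])
        obs = surrogate(truth)

        def cost(params):
            """Quadratic model-observation misfit the variational step shrinks."""
            return np.sum((surrogate(params) - obs) ** 2)

        first_guess = np.array([600.0, -200.0, 1.0])   # back-trajectory style estimate
        result = minimize(cost, first_guess, method="Nelder-Mead")
        print("refined (x0, y0, q):", result.x)        # close to `truth`

    The practical value of the adjoint in the real system is that it supplies the gradient of this cost cheaply, so the iteration can also adjust the wind variables without the cost of finite differencing through the full dispersion model.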

  6. Finite-element solutions for geothermal systems

    NASA Technical Reports Server (NTRS)

    Chen, J. C.; Conel, J. E.

    1977-01-01

    Vector potential and scalar potential are used to formulate the governing equations for a single-component, single-phase geothermal system. From an assumed initial temperature field, the fluid velocity can be determined, which, in turn, is used to calculate the convective heat transfer. The energy equation is then solved by considering the convected heat as a distributed source. The resulting temperature is used to compute new source terms, and the final results are obtained by iterating this procedure. Finite-element methods are proposed for modeling of realistic geothermal systems; the advantages of such methods are discussed. The developed methodology is then applied to a sample problem. Favorable agreement is obtained by comparison with a previous study.

  7. Electroweak baryogenesis in the exceptional supersymmetric standard model

    DOE PAGES

    Chao, Wei

    2015-08-28

    Here, we study electroweak baryogenesis in the E6-inspired exceptional supersymmetric standard model (E6SSM). The relaxation coefficients driven by singlinos and the new gaugino, as well as the transport equation of the Higgs supermultiplet number density in the E6SSM, are calculated. Our numerical simulation shows that the CP-violating source terms from the singlinos and the new gaugino can each, on their own, give rise to the correct baryon asymmetry of the Universe via the electroweak baryogenesis mechanism.

  8. Electric Dipole Moments of Light Nuclei From Chiral Effective Field Theory

    NASA Astrophysics Data System (ADS)

    Higa, R.

    2013-08-01

    Recent calculations of EDMs of light nuclei in the framework of chiral effective field theory are presented. We argue that they can be written in terms of the leading six low-energy constants encoding CP-violating physics. EDMs of the deuteron, triton, and helion are explicitly given in order to corroborate our claim. A future non-zero measurement of these EDMs could be used to disentangle the different sources and strengths of CP violation.

  9. Species and temperature predictions in a semi-industrial MILD furnace using a non-adiabatic conditional source-term estimation formulation

    NASA Astrophysics Data System (ADS)

    Labahn, Jeffrey William; Devaud, Cecile

    2017-05-01

    A Reynolds-Averaged Navier-Stokes (RANS) simulation of the semi-industrial International Flame Research Foundation (IFRF) furnace is performed using a non-adiabatic Conditional Source-term Estimation (CSE) formulation. This represents the first time that a CSE formulation, which accounts for the effect of radiation on the conditional reaction rates, has been applied to a large scale semi-industrial furnace. The objective of the current study is to assess the capabilities of CSE to accurately reproduce the velocity field, temperature, species concentration and nitrogen oxides (NOx) emission for the IFRF furnace. The flow field is solved using the standard k-ε turbulence model and detailed chemistry is included. NOx emissions are calculated using two different methods. Predicted velocity profiles are in good agreement with the experimental data. The predicted peak temperature occurs closer to the centreline, as compared to the experimental observations, suggesting that the mixing between the fuel jet and vitiated air jet may be overestimated. Good agreement between the species concentrations, including NOx, and the experimental data is observed near the burner exit. Farther downstream, the centreline oxygen concentration is found to be underpredicted. Predicted NOx concentrations are in good agreement with experimental data when calculated using the method of Peters and Weber. The current study indicates that RANS-CSE can accurately predict the main characteristics seen in a semi-industrial IFRF furnace.

  10. Correlation of recent fission product release data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kress, T.S.; Lorenz, R.A.; Nakamura, T.

    For the calculation of source terms associated with severe accidents, it is necessary to model the release of fission products from fuel as it heats and melts. Perhaps the most definitive model for fission product release is that of the FASTGRASS computer code developed at Argonne National Laboratory. There is persuasive evidence that these processes, as well as additional chemical and gas phase mass transport processes, are important in the release of fission products from fuel. Nevertheless, it has been found convenient to have simplified fission product release correlations that may not be as definitive as models like FASTGRASS but which attempt in some simple way to capture the essence of the mechanisms. One of the most widely used such correlations is CORSOR-M, which is the present fission product/aerosol release model used in the NRC Source Term Code Package. CORSOR has been criticized as having too much uncertainty in the calculated releases and as not accurately reproducing some experimental data. It is currently believed that these discrepancies between CORSOR and the more recent data have arisen because of the better time resolution of the more recent data compared to the data base that went into the CORSOR correlation. This document discusses a simple correlational model for use in connection with NUREG risk uncertainty exercises. 8 refs., 4 figs., 1 tab.
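
    CORSOR-M correlates the fractional release rate with fuel temperature in Arrhenius form, k = k0 exp(-Q/RT), integrated over the fuel temperature history. A minimal sketch follows; the coefficients shown are illustrative values of the order used for volatile fission products, not the tabulated CORSOR-M constants, and the temperature history is hypothetical.

        import numpy as np

        R_GAS = 1.987e-3   # kcal / (mol K)

        def release_rate(T, k0, Q):
            """Arrhenius fractional release rate, k = k0 exp(-Q / (R T)),
            in fraction of the remaining inventory per minute."""
            return k0 * np.exp(-Q / (R_GAS * T))

        def fraction_released(temps, dt_min, k0=2.0e5, Q=63.8):
            """Integrate the release over a fuel temperature history (K, with
            time steps of dt_min minutes). k0 and Q are illustrative
            volatile-species values, not the CORSOR-M tables."""
            remaining = 1.0
            for T in temps:
                remaining *= np.exp(-release_rate(T, k0, Q) * dt_min)
            return 1.0 - remaining

        # Hypothetical heat-up from 1500 K to 2400 K over 90 one-minute steps.
        history = np.linspace(1500.0, 2400.0, 90)
        print(f"cumulative release fraction: {fraction_released(history, 1.0):.3f}")

    The criticism summarized above is essentially about this structure: a single Arrhenius rate per element class cannot resolve the burst and diffusion phases that better time-resolved experiments reveal, which is what the more mechanistic FASTGRASS-type models capture.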

  11. Precipitation isoscapes for New Zealand: enhanced temporal detail using precipitation-weighted daily climatology.

    PubMed

    Baisden, W Troy; Keller, Elizabeth D; Van Hale, Robert; Frew, Russell D; Wassenaar, Leonard I

    2016-01-01

    Predictive understanding of precipitation δ(2)H and δ(18)O in New Zealand faces unique challenges, including high spatial variability in precipitation amounts, alternation between subtropical and sub-Antarctic precipitation sources, and a compressed latitudinal range of 34 to 47 °S. To map the precipitation isotope ratios across New Zealand, three years of integrated monthly precipitation samples were acquired from >50 stations. Conventional mean-annual precipitation δ(2)H and δ(18)O maps were produced by regressions using geographic and annual climate variables. Incomplete data and short-term variation in climate and precipitation sources limited the utility of this approach. We overcome these difficulties by calculating precipitation-weighted monthly climate parameters from national 5-km-gridded daily climate data. These data, plus geographic variables, were regressed to predict δ(2)H, δ(18)O, and d-excess at all sites. The procedure yields statistically valid predictions of the isotope composition of precipitation (long-term average root mean square error (RMSE) of 0.6 ‰ for δ(18)O and 5.5 ‰ for δ(2)H; monthly RMSE of 1.9 ‰ for δ(18)O and 16 ‰ for δ(2)H). This approach has substantial benefits for studies that require the isotope composition of precipitation during specific time intervals, and it may be further improved by comparison to daily and event-based precipitation samples as well as by the use of back-trajectory calculations.
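
    A minimal sketch of the two steps described above: weighting a daily climate parameter by daily precipitation so that each month reflects conditions on the days it actually rained, then regressing monthly isotope values on the weighted parameters plus geographic variables. The predictor choice and all data here are synthetic assumptions, not the study's regression model.

        import numpy as np

        rng = np.random.default_rng(0)

        def precip_weighted(param_daily, precip_daily):
            """Precipitation-weighted monthly mean of a daily climate parameter."""
            w = np.asarray(precip_daily, dtype=float)
            return np.sum(w * np.asarray(param_daily)) / np.sum(w)

        def fit_isoscape(X, d18O):
            """Ordinary least squares of monthly d18O on predictor columns
            (weighted climate plus geographic variables), with an intercept."""
            A = np.column_stack([np.ones(len(X)), X])
            coef, *_ = np.linalg.lstsq(A, d18O, rcond=None)
            rmse = np.sqrt(np.mean((A @ coef - d18O) ** 2))
            return coef, rmse

        # One hypothetical month: temperature weighted by skewed daily rainfall.
        t_daily = rng.normal(12.0, 4.0, size=30)
        p_daily = rng.gamma(0.6, 4.0, size=30)
        print(precip_weighted(t_daily, p_daily))

        # Hypothetical station-months: [weighted temperature, latitude, elevation].
        X = rng.normal(size=(60, 3))
        d18O = -6.0 + 0.5 * X[:, 0] - 0.8 * X[:, 1] + rng.normal(scale=0.5, size=60)
        print(fit_isoscape(X, d18O))

    The weighting is the substantive improvement: an unweighted monthly mean mixes dry-day conditions into the predictor even though they contribute nothing to the sampled precipitation.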

  12. QUENCH: A software package for the determination of quenching curves in Liquid Scintillation counting.

    PubMed

    Cassette, Philippe

    2016-03-01

    In Liquid Scintillation Counting (LSC), the scintillating source is part of the measurement system, and its detection efficiency varies with the scintillator used, the vial, and the volume and chemistry of the sample. The detection efficiency is generally determined using a quenching curve, which describes, for a specific radionuclide, the relationship between a quenching index given by the counter and the detection efficiency. A set of quenched LS standard sources is prepared by adding a quenching agent, and the quenching index and detection efficiency are determined for each source. A simple formula is then fitted to the experimental points to define the quenching curve function. This paper describes a software package specifically devoted to the determination of quenching curves with uncertainties. The experimental measurements are described by their quenching index and detection efficiency, with uncertainties on both quantities. Random Gaussian fluctuations of these experimental measurements are sampled, and a polynomial or logarithmic function is fitted to each fluctuation by χ² minimization. This Monte Carlo procedure is repeated many times, and eventually the arithmetic mean and the experimental standard deviation of each parameter are calculated, together with the covariances between the parameters. Using these parameters, the detection efficiency corresponding to an arbitrary quenching index within the measured range can be calculated. The associated uncertainty is calculated with the law of propagation of variances, including the covariance terms.
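
    The Monte Carlo procedure described above is straightforward to sketch: perturb the measured points, refit the curve, and accumulate parameter statistics including covariances. The polynomial degree, uncertainties, and standard-source values below are assumptions for illustration, not QUENCH's actual interface.

        import numpy as np

        def quench_curve_mc(q_index, efficiency, u_q, u_eff, degree=2, n_mc=5000):
            """Monte Carlo propagation for a quenching curve: perturb each
            (quenching index, efficiency) point with Gaussian noise, refit a
            polynomial each time, then report the mean, standard deviation,
            and covariance matrix of the fitted parameters."""
            rng = np.random.default_rng(1)
            params = np.empty((n_mc, degree + 1))
            for i in range(n_mc):
                q = rng.normal(q_index, u_q)          # sample both measured axes
                e = rng.normal(efficiency, u_eff)
                params[i] = np.polyfit(q, e, degree)  # least-squares (chi-square) fit
            return params.mean(axis=0), params.std(axis=0, ddof=1), np.cov(params.T)

        def efficiency_at(q, mean_params, cov):
            """Detection efficiency at quenching index q, with its uncertainty
            from the law of propagation of variances, covariances included."""
            powers = np.array([q**k for k in range(len(mean_params) - 1, -1, -1)])
            eff = powers @ mean_params
            return eff, np.sqrt(powers @ cov @ powers)

        # Hypothetical quenched standard set.
        q_idx = np.array([400., 450., 500., 550., 600.])
        eff = np.array([0.72, 0.78, 0.83, 0.87, 0.90])
        m, s, C = quench_curve_mc(q_idx, eff, u_q=5.0, u_eff=0.01)
        print(efficiency_at(520.0, m, C))

    Keeping the full covariance matrix is the point of the exercise: the polynomial coefficients are strongly correlated, so propagating their variances alone would badly misstate the uncertainty of the interpolated efficiency.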

  13. Large-scale fluctuations in the cosmic ionizing background: the impact of beamed source emission

    NASA Astrophysics Data System (ADS)

    Suarez, Teresita; Pontzen, Andrew

    2017-12-01

    When modelling the ionization of gas in the intergalactic medium after reionization, it is standard practice to assume a uniform radiation background. This assumption is not always appropriate; models with radiative transfer show that large-scale ionization rate fluctuations can have an observable impact on statistics of the Lyman α forest. We extend such calculations to include beaming of sources, which has previously been neglected but which is expected to be important if quasars dominate the ionizing photon budget. Beaming has two effects: first, the physical number density of ionizing sources is enhanced relative to that directly observed; and secondly, the radiative transfer itself is altered. We calculate both effects in a hard-edged beaming model where each source has a random orientation, using an equilibrium Boltzmann hierarchy in terms of spherical harmonics. By studying the statistical properties of the resulting ionization rate and H I density fields at redshift z ∼ 2.3, we find that the two effects partially cancel each other; combined, they constitute a maximum 5 per cent correction to the power spectrum P_{H I}(k) at k = 0.04 h Mpc-1. On very large scales (k < 0.01 h Mpc-1) the source density renormalization dominates; it can reduce, by an order of magnitude, the contribution of ionizing shot noise to the intergalactic H I power spectrum. The effects of beaming should be considered when interpreting future observational data sets.

  14. Evaluating Decoupling Process in OECD Countries: Case Study of Turkey

    NASA Astrophysics Data System (ADS)

    An, Nazan; Şengün Ucal, Meltem; Kurnaz, M. Levent

    2017-04-01

    Climate change is among the foremost present and future problems facing humanity. It is now largely attributed to human activities, and the economic activities that create pressure on the environment are its principal driver. Ensuring the sustainability of resources for the future appears possible only by reducing the pressure of these economic activities on the environment. Given increasing population pressure and growth-focused economies, achieving decoupling is not easy on a global basis. Developing countries, in particular, are known to face problems in accessing reliable data during the transition to and implementation of decoupling. The decoupling practices and established calculation methods of developed countries can serve as a guide for developing countries. In this study, we calculated a comparative decoupling index for OECD countries and Turkey, as far as data availability allowed, and showed the differences between them. We indicate the level of decoupling (weak, stable, strong) for each country and suggest that the comparison with Turkey can serve as an example for other developing countries. Acknowledgement: This research has been supported by Bogazici University Research Fund Grant Number 12220.

  15. A large eddy simulation scheme for turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Gao, Feng

    1993-01-01

    The recent development of the dynamic subgrid-scale (SGS) model has provided a consistent method for generating localized turbulent mixing models and has opened up great possibilities for applying the large eddy simulation (LES) technique to real world problems. Given that direct numerical simulation (DNS) cannot resolve engineering flow problems in the foreseeable future (Reynolds 1989), LES is certainly an attractive alternative. It seems only natural to bring this new development in SGS modeling to bear on reacting flows. The major stumbling block for introducing LES to reacting flow problems has been the proper modeling of the reaction source terms. Various models have been proposed, but none of them has a wide range of applicability. For example, some of the models in combustion have been based on the flamelet assumption, which is valid only for relatively fast reactions. Other models have neglected the effects of chemical reactions on the turbulent mixing time scale, which is certainly not valid for fast, non-isothermal reactions. The probability density function (PDF) method can be usefully employed to deal with the modeling of the reaction source terms. In order to fit into the framework of LES, a new PDF, the large eddy PDF (LEPDF), is introduced. This PDF provides an accurate representation of the filtered chemical source terms and can be readily calculated in the simulations. The details of this scheme are described.

  16. A highly sensitive search strategy for clinical trials in Literatura Latino Americana e do Caribe em Ciências da Saúde (LILACS) was developed.

    PubMed

    Manríquez, Juan J

    2008-04-01

    Systematic reviews should include as many relevant articles as possible. However, many systematic reviews use only databases with high English-language content as sources of trials. Literatura Latino Americana e do Caribe em Ciências da Saúde (LILACS) is an underused source of trials, and there is no validated strategy for searching clinical trials in this database. The objective of this study was to develop a sensitive search strategy for clinical trials in LILACS. An analytical survey was performed. Several single- and multiple-term search strategies were tested for their ability to retrieve clinical trials in LILACS. The sensitivity, specificity, and accuracy of each single- and multiple-term strategy were calculated using the results of a hand-search of 44 Chilean journals as the gold standard. After combining the most sensitive, specific, and accurate single- and multiple-term search strategies, a strategy with a sensitivity of 97.75% (95% confidence interval [CI] = 95.98-99.53) and a specificity of 61.85% (95% CI = 61.19-62.51) was obtained. LILACS is a source of trials that could improve systematic reviews. A new highly sensitive search strategy for clinical trials in LILACS has been developed. It is hoped that this search strategy will improve and increase the utilization of LILACS in future systematic reviews.
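
    The validation arithmetic used above reduces to the standard diagnostic-test formulas, applied to records rather than patients. A minimal sketch with hypothetical counts (chosen only to land near the reported sensitivity and specificity) follows.

        def diagnostic_metrics(tp, fp, fn, tn):
            """Sensitivity, specificity, and accuracy of a search strategy
            scored against a hand-search gold standard."""
            sensitivity = tp / (tp + fn)   # true trials retrieved / all true trials
            specificity = tn / (tn + fp)   # non-trials excluded / all non-trials
            accuracy = (tp + tn) / (tp + fp + fn + tn)
            return sensitivity, specificity, accuracy

        # Made-up counts from a hypothetical validation set.
        print(diagnostic_metrics(tp=435, fp=3810, fn=10, tn=6180))

    For a review-support search, the asymmetry in the reported figures is deliberate: high sensitivity is essential because missed trials cannot be recovered downstream, whereas modest specificity only costs screening time.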

  17. Efficient calculation of cosmological neutrino clustering in the non-linear regime

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archidiacono, Maria; Hannestad, Steen, E-mail: archi@phys.au.dk, E-mail: sth@phys.au.dk

    2016-06-01

    We study in detail how neutrino perturbations can be followed in linear theory using only terms up to l = 2 in the Boltzmann hierarchy. We provide a new approximation to the third moment and demonstrate that the neutrino power spectrum can be calculated to a precision of better than ~5% for masses up to ~1 eV and k ≲ 10 h/Mpc. The matter power spectrum can be calculated far more precisely, typically at least a factor of a few better than with existing approximations. We then proceed to study how the neutrino power spectrum can be reliably calculated even in the non-linear regime by inserting the non-linear gravitational potential, sourced by dark matter overdensities as derived from semi-analytic methods based on N-body simulations, into the Boltzmann evolution hierarchy. Our results agree extremely well with results derived from N-body simulations that include cold dark matter and neutrinos as independent particles with different properties.
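
    To illustrate what truncating the hierarchy at l = 2 involves, the sketch below evolves a bare collisionless moment hierarchy closed with a standard free-streaming estimate of the third moment; the paper's improved approximation plays the same structural role. Metric source terms and the momentum dependence of massive neutrinos are omitted, and the wavenumber and time span are arbitrary.

        import numpy as np
        from scipy.integrate import solve_ivp

        K = 0.1   # wavenumber, arbitrary units for the sketch

        def closure(psi, tau):
            """Estimate of the third moment used to close the hierarchy at
            l = 2 (a standard free-streaming truncation; the paper derives an
            improved approximation serving the same purpose)."""
            return (5.0 / (K * tau)) * psi[2] - psi[1]

        def hierarchy(tau, psi):
            """Collisionless Boltzmann hierarchy kept only for l = 0, 1, 2:
            dPsi_l/dtau = k/(2l+1) * [l Psi_{l-1} - (l+1) Psi_{l+1}]."""
            psi3 = closure(psi, tau)
            return [
                -K * psi[1],
                (K / 3.0) * (psi[0] - 2.0 * psi[2]),
                (K / 5.0) * (2.0 * psi[1] - 3.0 * psi3),
            ]

        sol = solve_ivp(hierarchy, (1.0, 400.0), [1.0, 0.0, 0.0])
        print(sol.y[:, -1])   # density, velocity, and shear moments at the end

    The computational payoff claimed above comes from exactly this structure: instead of integrating dozens of multipoles per momentum bin, only three coupled equations remain, with all the physics of the higher moments packed into the closure.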

  18. Emission of Sound from Turbulence Convected by a Parallel Mean Flow in the Presence of a Confining Duct

    NASA Technical Reports Server (NTRS)

    Goldstein, Marvin E.; Leib, Stewart J.

    1999-01-01

    An approximate method for calculating the noise generated by a turbulent flow within a semi-infinite duct of arbitrary cross section is developed. It is based on a previously derived high-frequency solution to Lilley's equation, which describes the sound propagation in a transversely-sheared mean flow. The source term is simplified by assuming the turbulence to be axisymmetric about the mean flow direction. Numerical results are presented for the special case of a ring source in a circular duct with an axisymmetric mean flow. They show that the internally generated noise is suppressed at sufficiently large upstream angles in a hard walled duct, and that acoustic liners can significantly reduce the sound radiated in both the upstream and downstream regions, depending upon the source location and Mach number of the flow.

  19. Groundwater vulnerability and risk mapping using GIS, modeling and a fuzzy logic tool.

    PubMed

    Nobre, R C M; Rotunno Filho, O C; Mansur, W J; Nobre, M M M; Cosenza, C A N

    2007-12-07

    A groundwater vulnerability and risk mapping assessment, based on a source-pathway-receptor approach, is presented for an urban coastal aquifer in northeastern Brazil. A modified version of the DRASTIC methodology was used to map the intrinsic and specific groundwater vulnerability of a 292 km(2) study area. A fuzzy hierarchy methodology was adopted to evaluate the potential contaminant source index, including diffuse and point sources. Numerical modeling was performed for delineation of well capture zones, using MODFLOW and MODPATH. The integration of these elements provided the mechanism to assess groundwater pollution risks and identify areas that must be prioritized in terms of groundwater monitoring and restriction on use. A groundwater quality index based on nitrate and chloride concentrations was calculated, which had a positive correlation with the specific vulnerability index.
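
    The intrinsic-vulnerability part of such an assessment is a weighted sum of seven rated parameters. The sketch below uses the textbook DRASTIC weights; the study's modified version adjusts ratings and weights, and the grid-cell ratings shown are hypothetical.

        # Standard DRASTIC weights; a modified methodology such as the one in
        # this study would substitute its own weights and rating schemes.
        DRASTIC_WEIGHTS = {
            "depth_to_water": 5, "net_recharge": 4, "aquifer_media": 3,
            "soil_media": 2, "topography": 1, "vadose_zone": 5, "conductivity": 3,
        }

        def drastic_index(ratings):
            """Intrinsic vulnerability index: weighted sum of the seven
            parameter ratings (each rated 1-10)."""
            return sum(DRASTIC_WEIGHTS[p] * r for p, r in ratings.items())

        # Hypothetical grid cell of an urban coastal aquifer.
        cell = {"depth_to_water": 9, "net_recharge": 6, "aquifer_media": 8,
                "soil_media": 7, "topography": 10, "vadose_zone": 8,
                "conductivity": 6}
        print(drastic_index(cell))   # higher index -> more vulnerable

    In the source-pathway-receptor framing above, this index characterizes only the pathway; the fuzzy-hierarchy contaminant source index and the modeled capture zones supply the source and receptor components that turn vulnerability into risk.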

  1. Aeolian controls of soil geochemistry and weathering fluxes in high-elevation ecosystems of the Rocky Mountains, Colorado

    USGS Publications Warehouse

    Lawrence, Corey R.; Reynolds, Richard L.; Ketterer, Michael E.; Neff, Jason C.

    2013-01-01

    When dust inputs are large or have persisted for long periods of time, the signature of dust additions is often apparent in soils. The influence of dust will be greatest where the geochemical composition of dust is distinct from local sources of soil parent material. In this study the influence of dust accretion on soil geochemistry is quantified for two different soils from the San Juan Mountains of southwestern Colorado, USA. At both study sites, dust is enriched in several trace elements relative to local rock, especially Cd, Cu, Pb, and Zn. Mass-balance calculations that do not explicitly account for dust inputs indicate the accumulation of some elements in soil beyond what can be explained by weathering of local rock. Most observed elemental enrichments are explained by accounting for the long-term accretion of dust, based on modern isotopic and geochemical estimates. One notable exception is Pb, which, based on mass-balance calculations and isotopic measurements, may have an additional source at one of the study sites. These results suggest that dust is a major factor influencing the development of soil in these settings and is also an important control on soil weathering fluxes. After accounting for dust inputs in mass-balance calculations, Si weathering fluxes from San Juan Mountain soils are within the range observed for other temperate systems. Comparing dust inputs with mass-balance-based flux estimates suggests dust could account for as much as 50-80% of total long-term chemical weathering fluxes. These results support the notion that dust inputs may sustain chemical weathering fluxes even in relatively young continental settings. Given the widespread input of far-traveled dust, the weathering of dust is likely an important and underappreciated aspect of the global weathering engine.

  2. Inner Magnetospheric Superthermal Electron Transport: Photoelectron and Plasma Sheet Electron Sources

    NASA Technical Reports Server (NTRS)

    Khazanov, G. V.; Liemohn, M. W.; Kozyra, J. U.; Moore, T. E.

    1998-01-01

    Two time-dependent kinetic models of superthermal electron transport are combined to conduct global calculations of the nonthermal electron distribution function throughout the inner magnetosphere. It is shown that the energy range of validity for this combined model extends down to the superthermal-thermal intersection at a few eV, allowing for the calculation of the entire distribution function and thus an accurate heating rate to the thermal plasma. Because of the linearity of the formulas, the source terms are separated to calculate the distributions from the various populations, namely photoelectrons (PEs) and plasma sheet electrons (PSEs). These distributions are discussed in detail, examining the processes responsible for their formation in the various regions of the inner magnetosphere. It is shown that convection, corotation, and Coulomb collisions are the dominant processes in the formation of the PE distribution function and that PSEs are dominated by the interplay between the drift terms. Of note is that the PEs propagate around the nightside in a narrow channel at the edge of the plasmasphere as Coulomb collisions reduce the fluxes inside of this and convection compresses the flux tubes inward. These distributions are then recombined to show the development of the total superthermal electron distribution function in the inner magnetosphere and their influence on the thermal plasma. PEs usually dominate the dayside heating, with integral energy fluxes to the ionosphere reaching 10(exp 10) eV/sq cm/s in the plasmasphere, while heating from the PSEs typically does not exceed 10(exp 8) eV/sq cm/s. On the nightside, the inner plasmasphere is usually unheated by superthermal electrons. A feature of these combined spectra is that the distribution often has upward slopes with energy, particularly at the crossover from PE to PSE dominance, indicating that instabilities are possible.

  3. Comparison of TG-43 and TG-186 in breast irradiation using a low energy electronic brachytherapy source.

    PubMed

    White, Shane A; Landry, Guillaume; Fonseca, Gabriel Paiva; Holt, Randy; Rusch, Thomas; Beaulieu, Luc; Verhaegen, Frank; Reniers, Brigitte

    2014-06-01

    The recently updated guidelines for dosimetry in brachytherapy in TG-186 recommend the use of model-based dosimetry calculations as a replacement for TG-43. TG-186 highlights shortcomings of the water-based approach of TG-43, particularly for low energy brachytherapy sources. The Xoft Axxent is a low energy (<50 kV) brachytherapy system used in accelerated partial breast irradiation (APBI). Breast is a heterogeneous tissue in terms of density and composition. Dosimetric calculations for seven APBI patients treated with the Axxent were made using a model-based Monte Carlo platform for a number of tissue models and dose reporting methods and compared to TG-43 based plans. A model of the Axxent source, the S700, was created and validated against experimental data. CT scans of the patients were used to create realistic multi-tissue/heterogeneous models, with breast tissue segmented using a published technique. Alternative water models were used to isolate the influence of tissue heterogeneity and backscatter on the dose distribution. Dose calculations were performed using Geant4 according to the original treatment parameters. The effect of the Axxent balloon applicator used in APBI, which could not be modeled in the CT-based model, was included using a novel technique that utilizes CAD-based geometries. These techniques were validated experimentally. Results were calculated using two dose reporting methods, dose to water (Dw,m) and dose to medium (Dm,m), for the heterogeneous simulations. All results were compared against TG-43-based dose distributions and evaluated using dose ratio maps and DVH metrics, with changes in skin and PTV dose highlighted. All simulated heterogeneous models showed a reduction in the DVH metrics that depends on the method of dose reporting and the patient geometry. Based on a prescription dose of 34 Gy, the average D90 to the PTV was reduced by between ~4% and ~40%, depending on the scoring method, compared to the TG-43 result. Peak skin dose is also reduced by 10%-15% owing to the absence of backscatter, which is not accounted for in TG-43; the balloon applicator also contributed to the reduced dose. Other ROIs showed differences depending on the method of dose reporting. TG-186-based calculations thus produce results that differ from TG-43 for the Axxent source, and the differences depend strongly on the method of dose reporting. This study highlights the importance of backscatter to peak skin dose. Tissue heterogeneities, the applicator, and patient geometries demonstrate the need for a more robust dose calculation method for low energy brachytherapy sources.
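
    For context, the TG-43 side of the comparison is an analytic water-geometry formalism; a minimal point-source sketch is given below. The dose-rate constant and the radial dose function table are hypothetical stand-ins, not Axxent S700 data.

        import numpy as np

        def tg43_point_dose_rate(S_K, Lam, r, g_of_r, phi_an=1.0, r0=1.0):
            """TG-43 point-source dose rate (cGy/h):
                D(r) = S_K * Lam * (r0/r)^2 * g(r) * phi_an(r),
            with S_K the air-kerma strength (U), Lam the dose-rate constant
            (cGy/(h*U)), g the radial dose function, and phi_an the anisotropy
            factor. Everything is evaluated in a full-scatter water geometry,
            which is precisely what TG-186 model-based calculations replace."""
            return S_K * Lam * (r0 / r) ** 2 * g_of_r(r) * phi_an

        # Hypothetical tabulated radial dose function: low-energy photons fall
        # off quickly in water beyond the reference distance of 1 cm.
        r_tab = np.array([0.5, 1.0, 2.0, 3.0, 5.0])
        g_tab = np.array([1.20, 1.00, 0.55, 0.30, 0.10])
        g = lambda r: np.interp(r, r_tab, g_tab)

        print(tg43_point_dose_rate(S_K=100.0, Lam=0.7, r=2.0, g_of_r=g))

    Because every factor here is measured or computed in unbounded water, the formalism cannot see breast tissue composition, the balloon applicator, or the missing backscatter near the skin surface, which is where the dose differences reported above originate.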

  5. Underwater Threat Source Localization: Processing Sensor Network TDOAs with a Terascale Optical Core Device

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barhen, Jacob; Imam, Neena

    2007-01-01

    Revolutionary computing technologies are defined in terms of technological breakthroughs, which leapfrog over near-term projected advances in conventional hardware and software to produce paradigm shifts in computational science. For underwater threat source localization using information provided by a dynamical sensor network, one of the most promising computational advances builds upon the emergence of digital optical-core devices. In this article, we present initial results of sensor network calculations that focus on the concept of signal wavefront time-difference-of-arrival (TDOA). The corresponding algorithms are implemented on the EnLight processing platform recently introduced by Lenslet Laboratories. This tera-scale digital optical core processor is optimized for array operations, which it performs in a fixed-point-arithmetic architecture. Our results (i) illustrate the ability to reach the required accuracy in the TDOA computation, and (ii) demonstrate that a considerable speed-up can be achieved when using the EnLight 64a prototype processor as compared to a dual Intel Xeon processor.
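
    Independent of the optical-core hardware, the TDOA quantity at the heart of this record is easy to illustrate. The sketch below (generic NumPy with synthetic signals, not the EnLight implementation) estimates the arrival-time difference between two sensors from the peak of their cross-correlation.

      import numpy as np

      def tdoa(x, y, fs):
          """Estimate the time difference of arrival between two sensor
          signals from the peak of their full cross-correlation."""
          corr = np.correlate(x, y, mode="full")
          lag = np.argmax(corr) - (len(y) - 1)  # samples by which x lags y
          return lag / fs

      # Two sensors observing the same wavefront, one 25 samples later
      rng = np.random.default_rng(0)
      fs = 10_000.0                              # sample rate, Hz
      s = rng.standard_normal(1000)              # reference sensor signal
      x = np.concatenate([np.zeros(25), s])[:1000]
      print(tdoa(x, s, fs))                      # ~0.0025 s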

  6. Design and implementation of wireless dose logger network for radiological emergency decision support system.

    PubMed

    Gopalakrishnan, V; Baskaran, R; Venkatraman, B

    2016-08-01

    A decision support system (DSS) is implemented in the Radiological Safety Division, Indira Gandhi Centre for Atomic Research, to provide guidance for emergency decision making in case of an inadvertent nuclear accident. Real-time gamma dose rate measurements around the stack are used to estimate the radioactive release rate (source term) by inverse calculation. A wireless gamma dose logging network was designed, implemented, and installed around the Madras Atomic Power Station reactor stack to continuously acquire the environmental gamma dose rate, and the details are presented in this paper. The network uses XBee-Pro wireless modules and a PSoC controller for wireless interfacing, and the data are logged at the base station. A LabVIEW-based program was developed to receive the data, display it on a Google Map, plot it over the time scale, and register it in a file shared with the DSS software. The DSS at the base station evaluates the real-time source term to assess radiation impact.
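
    The inverse calculation is not spelled out in the abstract; under the common simplifying assumption that each detector reading responds linearly to the release rate through a precomputed dispersion kernel, a minimal least-squares estimate of the source term looks like the following sketch (all numbers are placeholders).

      import numpy as np

      # Dose-rate readings from the stack-monitoring network (uGy/h)
      d = np.array([0.42, 0.18, 0.07, 0.11])

      # Kernels a_i: dose rate at detector i per unit release rate
      # (uGy/h per Bq/s); in practice these come from a dispersion
      # model for the current wind field. Placeholder values here.
      a = np.array([1.1e-6, 4.5e-7, 1.9e-7, 2.8e-7])

      # Least-squares release-rate estimate: minimizes ||a*Q - d||^2
      Q = float(a @ d / (a @ a))
      print(f"estimated source term: {Q:.3e} Bq/s")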

  7. Possible consequences of severe accidents at the Lubiatowo site, Poland

    NASA Astrophysics Data System (ADS)

    Seibert, Petra; Philipp, Anne; Hofman, Radek; Gufler, Klaus; Sholly, Steven

    2014-05-01

    The construction of a nuclear power plant is under consideration in Poland. One of the sites under discussion is near Lubiatowo, located on the coast of the Baltic Sea northwest of Gdansk. An assessment of possible environmental consequences is carried out for 88 real meteorological cases with the Lagrangian particle dispersion model FLEXPART. Based on a literature review, three reactor designs (ABWR, EPR, AP1000) were identified as being under discussion in Poland. For each of the designs, a set of accident scenarios was evaluated and two source terms per reactor design were selected for analysis. One of the selected source terms was a relatively large release, while the second was a severe accident with an intact containment. The endpoints considered in the calculations are ground contamination with Cs-137 and time-integrated concentrations of I-131 in air, as well as committed doses. They are evaluated on a grid of ca. 3 km mesh size covering eastern Central Europe.

  8. Design and implementation of wireless dose logger network for radiological emergency decision support system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gopalakrishnan, V.; Baskaran, R.; Venkatraman, B.

    A decision support system (DSS) is implemented in the Radiological Safety Division, Indira Gandhi Centre for Atomic Research, to provide guidance for emergency decision making in case of an inadvertent nuclear accident. Real-time gamma dose rate measurements around the stack are used to estimate the radioactive release rate (source term) by inverse calculation. A wireless gamma dose logging network was designed, implemented, and installed around the Madras Atomic Power Station reactor stack to continuously acquire the environmental gamma dose rate, and the details are presented in this paper. The network uses XBee-Pro wireless modules and a PSoC controller for wireless interfacing, and the data are logged at the base station. A LabVIEW-based program was developed to receive the data, display it on a Google Map, plot it over the time scale, and register it in a file shared with the DSS software. The DSS at the base station evaluates the real-time source term to assess radiation impact.

  9. The generation of gravitational waves. I - Weak-field sources

    NASA Technical Reports Server (NTRS)

    Thorne, K. S.; Kovacs, S. J.

    1975-01-01

    This paper derives and summarizes a 'plug-in-and-grind' formalism for calculating the gravitational waves emitted by any system with weak internal gravitational fields. If the internal fields have negligible influence on the system's motions, the formalism reduces to standard 'linearized theory'. Independent of the effects of gravity on the motions, the formalism reduces to the standard 'quadrupole-moment formalism' if the motions are slow and internal stresses are weak. In the general case, the formalism expresses the radiation in terms of a retarded Green's function for slightly curved spacetime and breaks the Green's function integral into five easily understood pieces: direct radiation, produced directly by the motions of the source; whump radiation, produced by the 'gravitational stresses' of the source; transition radiation, produced by a time-changing time delay ('Shapiro effect') in the propagation of the nonradiative 1/r field of the source; focusing radiation, produced when one portion of the source focuses, in a time-dependent way, the nonradiative field of another portion of the source; and tail radiation, produced by 'back-scatter' of the nonradiative field in regions of focusing.
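
    For orientation, the quadrupole-moment formalism that this general treatment reduces to in the slow-motion, weak-stress limit is the standard result (stated here schematically, not quoted from the paper):

      h_{jk}(t, r) \;\simeq\; \frac{2G}{c^{4} r}\,\ddot{I}_{jk}(t - r/c),
      \qquad
      I_{jk} \;=\; \int \rho\,\big(x_{j}x_{k} - \tfrac{1}{3}\,\delta_{jk}\,x^{2}\big)\,\mathrm{d}^{3}x,

    with r the distance to the source and I_{jk} the reduced quadrupole moment of the mass density \rho. The five pieces enumerated in the abstract are the corrections that appear when the slow-motion and weak-stress assumptions are relaxed.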

  10. The generation of gravitational waves. 1. Weak-field sources: A plug-in-and-grind formalism

    NASA Technical Reports Server (NTRS)

    Thorne, K. S.; Kovacs, S. J.

    1974-01-01

    A plug-in-and-grind formalism is derived for calculating the gravitational waves emitted by any system with weak internal gravitational fields. If the internal fields have negligible influence on the system's motions, then the formalism reduces to standard linearized theory. Whether or not gravity affects the motions, if the motions are slow and internal stresses are weak, then the new formalism reduces to the standard quadrupole-moment formalism. In the general case the new formalism expresses the radiation in terms of a retarded Green's function for slightly curved spacetime, and then breaks the Green's-function integral into five easily understood pieces: direct radiation, produced directly by the motions of the sources; whump radiation, produced by the gravitational stresses of the source; transition radiation, produced by a time-changing time delay (Shapiro effect) in the propagation of the nonradiative, 1/r field of the source; focussing radiation, produced when one portion of the source focusses, in a time-dependent way, the nonradiative field of another portion of the source; and tail radiation, produced by backscatter of the nonradiative field in regions of focussing.

  11. Monte Carlo dose calculations of beta-emitting sources for intravascular brachytherapy: a comparison between EGS4, EGSnrc, and MCNP.

    PubMed

    Wang, R; Li, X A

    2001-02-01

    The dose parameters for the beta-particle emitting 90Sr/90Y source for intravascular brachytherapy (IVBT) have been calculated by different investigators. At larger distances from the source, noticeable differences are seen in the parameters calculated using different Monte Carlo codes. The purpose of this work is to quantify and to understand these differences. We have compared a series of calculations using the EGS4, EGSnrc, and MCNP Monte Carlo codes. Data calculated and compared include the depth dose curve for a broad parallel beam of electrons, and radial dose distributions for point electron sources (monoenergetic or polyenergetic) and for a real 90Sr/90Y source. For the 90Sr/90Y source, the doses at the reference position (2 mm radial distance) calculated by the three codes agree within 2%. However, the differences between the doses calculated by the three codes can be over 20% in the radial distance range of interest in IVBT. The difference increases with radial distance from the source, reaching 30% at the tail of the dose curve. These differences may be partially attributed to the different multiple scattering theories and Monte Carlo models for electron transport adopted in the three codes. Doses calculated by the EGSnrc code are more accurate than those by EGS4. The two calculations agree within 5% for radial distances <6 mm.

  12. The multiplier effect of the health education-risk reduction program in 28 states and 1 territory.

    PubMed Central

    Kreuter, M W; Christensen, G M; Divincenzo, A

    1982-01-01

    The multiplier effect of the Health Education-Risk Reduction (HE-RR) Grants Program funded by the Public Health Service is examined to identify outcomes for the period 1978-81. Responses to a questionnaire from the directors of health education of 28 States and 1 Territory supplied the information concerning new health promotion activities generated by the program. The directors were asked to identify and give cost estimates of new activities that resulted from State-level and local intervention projects. A method for calculating the extent to which the HE-RR program influenced new health promotion activities that were funded by alternate sources was devised. The calculation, termed the new activity rate, was applied to the survey data. Rates calculated for the HE-RR program revealed that it generated nearly $4 million in new health promotion activities, most of them funded by the private and voluntary segments of society. PMID:7146300

  13. Calculational note for the radiological and toxicological effects of a UO{sub 3} release from the T-hopper storage pad

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldberg, H.J.

    1998-06-18

    UO{sub 3} powder is stored at the T-hopper storage area associated with the 2714-U building in the 200 West Area. The T-hopper containers and 13 drums containing this material are used to store the powder on pads immediately north of the building. An interim safety basis document (WHC, 1996) was issued in 1996 for the UO{sub 3} powder storage area. In this document, the isotope {sup 99}Tc was not included in the source term used to calculate the radiological consequences of a postulated release of the powder. A calculation note (HNF, 1998) was issued to remedy that deficiency. The present document is a revision of that note to reflect updated data concerning the solubility of UO{sub 3} in simulated lung fluid and to utilize more realistic powder release fractions.

  14. Shock-drift particle acceleration in superluminal shocks - A model for hot spots in extragalactic radio sources

    NASA Technical Reports Server (NTRS)

    Begelman, Mitchell C.; Kirk, John G.

    1990-01-01

    Shock-drift acceleration at relativistic shock fronts is investigated using a fully relativistic treatment of both the microphysics of the shock-drift acceleration and the macrophysics of the shock front. By explicitly tracing particle trajectories across shocks, it is shown how the adiabatic invariance of a particle's magnetic moment breaks down as the upstream shock speed becomes relativistic, and is recovered at subrelativistic velocities. These calculations enable the mean increase in energy of a particle which encounters the shock with a given pitch angle to be calculated. The results are used to construct the downstream electron distribution function in terms of the incident distribution function and the bulk properties of the shock. The synchrotron emissivity of the transmitted distribution is calculated, and it is demonstrated that amplification factors are easily obtained which are more than adequate to explain the observed contrasts in surface brightness between jets and hot spots.

  15. Simulating the Heliosphere with Kinetic Hydrogen and Dynamic MHD Source Terms

    DOE PAGES

    Heerikhuisen, Jacob; Pogorelov, Nikolai; Zank, Gary

    2013-04-01

    The interaction between the ionized plasma of the solar wind (SW) emanating from the Sun and the partially ionized plasma of the local interstellar medium (LISM) creates the heliosphere. The heliospheric interface is characterized by the tangential discontinuity known as the heliopause that separates the SW and LISM plasmas, and a termination shock on the SW side along with a possible bow shock on the LISM side. Neutral hydrogen of interstellar origin plays a critical role in shaping the heliospheric interface, since it freely traverses the heliopause. Charge exchange between H atoms and plasma protons couples the ions and neutrals, but the mean free paths are large, resulting in non-equilibrated energetic ion and neutral components. In our model, source terms for the MHD equations are generated using a kinetic approach for hydrogen, and the key computational challenge is to resolve these sources with sufficient statistics. For steady-state simulations, statistics can accumulate over arbitrarily long time intervals. In this paper we discuss an approach for improving the statistics in time-dependent calculations, and present results from simulations of the heliosphere in which the SW conditions at the inner boundary of the computation vary according to an idealized solar cycle.

  16. Validation of a virtual source model of medical linac for Monte Carlo dose calculation using multi-threaded Geant4

    NASA Astrophysics Data System (ADS)

    Aboulbanine, Zakaria; El Khayati, Naïma

    2018-04-01

    The use of phase space in medical linear accelerator Monte Carlo (MC) simulations significantly improves the execution time and leads to results comparable to those obtained from full calculations. The classical representation of phase space stores the information of millions of particles directly, producing bulky files. This paper presents a virtual source model (VSM) based on a reconstruction algorithm, taking as input a compressed file of roughly 800 kB derived from phase space data freely available in the International Atomic Energy Agency (IAEA) database. The VSM includes two main components, primary and scattered particle sources, with a specific reconstruction method developed for each. Energy spectra and other relevant variables were extracted from the IAEA phase space and stored in the input description data file for both sources. The VSM was validated for three photon beams: Elekta Precise 6 MV/10 MV and a Varian TrueBeam 6 MV. Extensive calculations in water and comparisons between dose distributions of the VSM and the IAEA phase space were performed to estimate the VSM precision. The Geant4 MC toolkit in multi-threaded mode (Geant4-[mt]) was used for fast dose calculations and optimized memory use. Four field configurations were chosen for dose calculation validation to test field size and symmetry effects: three square fields and one asymmetric rectangular field. Good agreement in terms of the gamma-index formalism, for 3%/3 mm and 2%/3 mm criteria, was obtained for each evaluated radiation field and photon beam, within a computation time of 60 h on a single workstation for a 3 mm voxel matrix. Analysis of the VSM's precision in high dose gradient regions, using the distance-to-agreement (DTA) concept, also showed satisfactory results. In all investigated cases, the mean DTA was less than 1 mm in the build-up and penumbra regions. In regard to calculation efficiency, the event processing speed is six times faster using Geant4-[mt] compared to sequential Geant4 when running the same simulation code on both. The developed VSM for the widely used 6 MV/10 MV beams is a general concept that is easy to adapt in order to reconstruct comparable beam qualities for various linac configurations, facilitating its integration for MC treatment planning purposes.
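
    The abstract does not detail the reconstruction algorithm, but the core step of any such virtual source model is drawing particles from stored spectra instead of replaying a phase-space file. A generic sketch of that step (the histogram bins and weights are invented, not IAEA data):

      import numpy as np

      rng = np.random.default_rng(42)

      # Hypothetical binned photon energy spectrum extracted from
      # phase-space data: bin edges in MeV and relative weights.
      edges = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 6.0])
      weights = np.array([0.10, 0.30, 0.35, 0.20, 0.05])

      def sample_energies(n):
          """Inverse-CDF sampling from the histogrammed spectrum:
          choose a bin by weight, then sample uniformly within it."""
          cdf = np.cumsum(weights) / weights.sum()
          bins = np.searchsorted(cdf, rng.random(n))
          return rng.uniform(edges[bins], edges[bins + 1])

      print(sample_energies(5))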

  17. Solute source depletion control of forward and back diffusion through low-permeability zones

    NASA Astrophysics Data System (ADS)

    Yang, Minjune; Annable, Michael D.; Jawitz, James W.

    2016-10-01

    Solute diffusive exchange between low-permeability aquitards and high-permeability aquifers acts as a significant mediator of long-term contaminant fate. Aquifer contaminants diffuse into aquitards, but as contaminant sources are depleted, aquifer concentrations decline, triggering back diffusion from aquitards. The dynamics of the contaminant source depletion, or the source strength function, controls the timing of the transition of aquitards from sinks to sources. Here, we experimentally evaluate three archetypical transient source depletion models (step-change, linear, and exponential), and we use novel analytical solutions to accurately account for dynamic aquitard-aquifer diffusive transfer. Laboratory diffusion experiments were conducted using a well-controlled flow chamber to assess solute exchange between sand aquifer and kaolinite aquitard layers. Solute concentration profiles in the aquitard were measured in situ using electrical conductivity. Back diffusion was shown to begin earlier and produce larger mass flux for rapidly depleting sources. The analytical models showed very good correspondence with measured aquifer breakthrough curves and aquitard concentration profiles. The modeling approach links source dissolution and back diffusion, enabling assessment of human exposure risk and calculation of the back diffusion initiation time, as well as the resulting plume persistence.

  18. Solute source depletion control of forward and back diffusion through low-permeability zones.

    PubMed

    Yang, Minjune; Annable, Michael D; Jawitz, James W

    2016-10-01

    Solute diffusive exchange between low-permeability aquitards and high-permeability aquifers acts as a significant mediator of long-term contaminant fate. Aquifer contaminants diffuse into aquitards, but as contaminant sources are depleted, aquifer concentrations decline, triggering back diffusion from aquitards. The dynamics of the contaminant source depletion, or the source strength function, controls the timing of the transition of aquitards from sinks to sources. Here, we experimentally evaluate three archetypical transient source depletion models (step-change, linear, and exponential), and we use novel analytical solutions to accurately account for dynamic aquitard-aquifer diffusive transfer. Laboratory diffusion experiments were conducted using a well-controlled flow chamber to assess solute exchange between sand aquifer and kaolinite aquitard layers. Solute concentration profiles in the aquitard were measured in situ using electrical conductivity. Back diffusion was shown to begin earlier and produce larger mass flux for rapidly depleting sources. The analytical models showed very good correspondence with measured aquifer breakthrough curves and aquitard concentration profiles. The modeling approach links source dissolution and back diffusion, enabling assessment of human exposure risk and calculation of the back diffusion initiation time, as well as the resulting plume persistence. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. Repeat immigration: A previously unobserved source of heterogeneity?

    PubMed

    Aradhya, Siddartha; Scott, Kirk; Smith, Christopher D

    2017-07-01

    Register data allow for nuanced analyses of heterogeneities between sub-groups which are not observable in other data sources. One heterogeneity for which register data is particularly useful is in identifying unique migration histories of immigrant populations, a group of interest across disciplines. Years since migration is a commonly used measure of integration in studies seeking to understand the outcomes of immigrants. This study constructs detailed migration histories to test whether misclassified migrations may mask important heterogeneities. In doing so, we identify a previously understudied group of migrants called repeat immigrants, and show that they differ systematically from permanent immigrants. In addition, we quantify the degree to which migration information is misreported in the registers. The analysis is carried out in two steps. First, we estimate income trajectories for repeat immigrants and permanent immigrants to understand the degree to which they differ. Second, we test data validity by cross-referencing migration information with changes in income to determine whether there are inconsistencies indicating misreporting. From the first part of the analysis, the results indicate that repeat immigrants systematically differ from permanent immigrants in terms of income trajectories. Furthermore, income trajectories differ based on the way in which years since migration is calculated. The second part of the analysis suggests that misreported migration events, while present, are negligible. Repeat immigrants differ in terms of income trajectories, and may differ in terms of other outcomes as well. Furthermore, this study underlines that Swedish registers provide a reliable data source to analyze groups which are unidentifiable in other data sources.

  20. An alternative approach to probabilistic seismic hazard analysis in the Aegean region using Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Weatherill, Graeme; Burton, Paul W.

    2010-09-01

    The Aegean is the most seismically active and tectonically complex region in Europe. Damaging earthquakes have occurred here throughout recorded history, often resulting in considerable loss of life. The Monte Carlo method of probabilistic seismic hazard analysis (PSHA) is used to determine the level of ground motion likely to be exceeded in a given time period. Multiple random simulations of seismicity are generated to calculate, directly, the ground motion for a given site. Within the seismic hazard analysis we explore the impact of different seismic source models, incorporating both uniform zones and distributed seismicity. A new, simplified seismic source model, derived from seismotectonic interpretation, is presented for the Aegean region. This is combined in the epistemic uncertainty analysis with existing source models for the region and with models derived by a K-means cluster analysis approach. Seismic source models derived using the K-means approach offer a degree of objectivity and reproducibility to the otherwise subjective approach of delineating seismic sources using expert judgment. A similar review and analysis is undertaken for the selection of peak ground acceleration (PGA) attenuation models, incorporating into the epistemic analysis Greek-specific models, European models, and a Next Generation Attenuation (NGA) model. Hazard maps for PGA on a "rock" site with a 10% probability of being exceeded in 50 years are produced, and different source and attenuation models are compared. These indicate that Greek-specific attenuation models, with their smaller aleatory variability terms, produce lower PGA hazard, whilst recent European models and the NGA model produce similar results. The Monte Carlo method is extended further to assimilate epistemic uncertainty into the hazard calculation, thus integrating across several appropriate source and PGA attenuation models. Site condition and fault type are also integrated into the hazard mapping calculations. These hazard maps are in general agreement with previous maps for the Aegean, recognising the highest hazard in the Ionian Islands, Gulf of Corinth and Hellenic Arc. Peak ground accelerations for some sites in these regions reach as high as 500-600 cm s⁻² using European/NGA attenuation models, and 400-500 cm s⁻² using Greek attenuation models.
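
    The Monte Carlo PSHA logic described, simulating many synthetic catalogs and reading the hazard level off the resulting distribution of ground motions, can be sketched in a few lines; the activity rate, magnitude bounds, and attenuation relation below are toy stand-ins, not the models used in the paper.

      import numpy as np

      rng = np.random.default_rng(1)
      N_CAT, YEARS = 5000, 50                       # synthetic 50-year catalogs
      RATE, B, M_MIN, M_MAX = 0.2, 1.0, 4.0, 7.5    # toy Gutenberg-Richter terms

      def magnitudes(n):
          # Inverse-CDF sampling of a truncated Gutenberg-Richter law
          beta = B * np.log(10.0)
          u = rng.random(n)
          return M_MIN - np.log(1.0 - u * (1.0 - np.exp(-beta * (M_MAX - M_MIN)))) / beta

      def pga(m, r_km):
          # Toy attenuation relation with lognormal aleatory scatter
          ln_pga = -3.5 + 1.0 * m - np.log(r_km) - 0.004 * r_km \
                   + rng.normal(0.0, 0.6, m.size)
          return np.exp(ln_pga)                     # arbitrary units

      maxima = np.zeros(N_CAT)
      for i in range(N_CAT):
          n = rng.poisson(RATE * YEARS)             # events in one catalog
          if n:
              m = magnitudes(n)
              r = rng.uniform(10.0, 150.0, n)       # source-site distances, km
              maxima[i] = pga(m, r).max()

      # Ground motion with a 10% chance of being exceeded in 50 years
      print(np.percentile(maxima, 90.0))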

  1. Electroweak baryogenesis in the exceptional supersymmetric standard model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chao, Wei, E-mail: chao@physics.umass.edu

    2015-08-01

    We study electroweak baryogenesis in the E{sub 6} inspired exceptional supersymmetric standard model (E{sub 6}SSM). The relaxation coefficients driven by singlinos and the new gaugino, as well as the transport equation of the Higgs supermultiplet number density in the E{sub 6}SSM, are calculated. Our numerical simulation shows that the CP-violating source terms from the singlinos and from the new gaugino can each, on their own, give rise to the correct baryon asymmetry of the Universe via the electroweak baryogenesis mechanism.

  2. Electroweak baryogenesis in the exceptional supersymmetric standard model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chao, Wei

    2015-08-28

    We study electroweak baryogenesis in the E{sub 6} inspired exceptional supersymmetric standard model (E{sub 6}SSM). The relaxation coefficients driven by singlinos and the new gaugino, as well as the transport equation of the Higgs supermultiplet number density in the E{sub 6}SSM, are calculated. Our numerical simulation shows that the CP-violating source terms from the singlinos and from the new gaugino can each, on their own, give rise to the correct baryon asymmetry of the Universe via the electroweak baryogenesis mechanism.

  3. BETR Global - A geographically explicit global-scale multimedia contaminant fate model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Macleod, M.; Waldow, H. von; Tay, P.

    2011-04-01

    We present two new software implementations of the BETR Global multimedia contaminant fate model. The model uses steady-state or non-steady-state mass-balance calculations to describe the fate and transport of persistent organic pollutants on a desktop computer. The global environment is described using a database of long-term average monthly conditions on a 15{sup o} × 15{sup o} grid. We demonstrate BETR Global by modeling the global sources, transport, and removal of decamethylcyclopentasiloxane (D5).
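
    A steady-state multimedia mass balance of the kind BETR performs reduces to a linear system: with K collecting first-order loss and inter-compartment transfer rates and s the emission vector, the steady state solves K m + s = 0. A toy three-compartment sketch (the rates and structure are invented, not BETR's):

      import numpy as np

      # Compartments: air, water, soil. Off-diagonal K[i, j] is transfer
      # from compartment j into i; diagonals are total first-order losses.
      # All rates in 1/h, purely illustrative.
      K = np.array([
          [-0.10,  0.01,  0.005],
          [ 0.02, -0.05,  0.0  ],
          [ 0.03,  0.0,  -0.02 ],
      ])
      s = np.array([1.0, 0.0, 0.0])   # emission into air, kg/h

      m = np.linalg.solve(K, -s)      # steady state of dm/dt = K m + s
      print(m)                        # steady-state mass in each compartment, kg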

  4. International Conference on Numerical Ship Hydrodynamics (5th) Held in Hiroshima, Japan on 24-28 September 1989

    DTIC Science & Technology

    1989-09-28

  5. Calibration of Photon Sources for Brachytherapy

    NASA Astrophysics Data System (ADS)

    Rijnders, Alex

    Source calibration has to be considered an essential part of the quality assurance program in a brachytherapy department. Not only will it ensure that the source strength value used for dose calculation agrees within predetermined limits with the value stated on the source certificate, but it will also ensure traceability to international standards. At present, calibration is most often still given in terms of reference air kerma rate, although calibration in terms of absorbed dose to water would be closer to the user's interest. It can be expected that in the near future several standards laboratories will be able to offer this latter service, and dosimetry protocols will have to be adapted accordingly. In-air measurement using ionization chambers (e.g., a Baldwin-Farmer ionization chamber for 192Ir high dose rate (HDR) or pulsed dose rate (PDR) sources) is still considered the method of choice for high energy source calibration, but because of their ease of use and reliability, well-type chambers are becoming more popular and are nowadays often recommended as the standard equipment. For low energy sources, well-type chambers are in practice the only equipment available for calibration. Care should be taken that the chamber is calibrated at the standards laboratory for the same source type and model as used in the clinic, and using the same measurement conditions and setup. Several standards laboratories have difficulty providing these calibration facilities, especially for the low energy seed sources (125I and 103Pd). Should a user not be able to obtain properly calibrated equipment to verify the brachytherapy sources used in his department, then at least for sources that are replaced on a regular basis, a consistency check program should be set up to ensure a minimal level of quality control before these sources are used for patient treatment.

  6. A new method to calculate external mechanical work using force-platform data in ecological situations in humans: Application to Parkinson's disease.

    PubMed

    Gigot, Vincent; Van Wymelbeke, Virginie; Laroche, Davy; Mouillot, Thomas; Jacquin-Piques, Agnès; Rossé, Matthieu; Tavan, Michel; Brondel, Laurent

    2016-07-01

    To accurately quantify the cost of physical activity and to evaluate the different components of energy expenditure in humans, it is necessary to evaluate external mechanical work (WEXT). Large platform systems surpass other currently used techniques for this purpose. Here, we describe a calculation method for force platforms to obtain long-term WEXT. Each force platform (2.46 × 1.60 m and 3.80 × 2.48 m) rests on four piezoelectric sensors. During long recordings, a drift in the speed of displacement of the center of mass (necessary to calculate WEXT) is generated. To suppress this drift, wavelet decomposition is used to low-pass filter the source signal; the source signal can then be recovered from the wavelet decomposition coefficients. To check the validity of the WEXT calculations after signal processing, an oscillating pendulum system was first used; then, 10 healthy subjects performed a standardized squatting exercise. A medical application is also reported in eight Parkinsonian patients during the timed "get-up and go" test, compared with the same test in ten healthy subjects. Values of WEXT with the oscillating pendulum showed that the system was accurate and reliable. During the squatting exercise, the average measured WEXT was 0.4% lower than the theoretical work. WEXT and mechanical work efficiency during the "get-up and go" test in Parkinson's disease patients, in comparison with those of healthy subjects, were very coherent. This method has numerous applications for studying physical activity and mechanical work efficiency in physiological and pathological conditions. Copyright © 2016 Elsevier B.V. All rights reserved.
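
    The drift-suppression step can be illustrated with a generic discrete wavelet decomposition (PyWavelets). The paper's exact wavelet family and decomposition level are not stated here, so db4 at level 8 is an assumption: the coarsest approximation is taken as the drift estimate and subtracted from the force-platform signal.

      import numpy as np
      import pywt

      def remove_drift(signal, wavelet="db4", level=8):
          """Estimate slow drift as the coarsest wavelet approximation
          and subtract it from the signal."""
          coeffs = pywt.wavedec(signal, wavelet, level=level)
          # Zero every detail band so only the approximation survives
          drift_coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
          drift = pywt.waverec(drift_coeffs, wavelet)[: len(signal)]
          return signal - drift, drift

      # usage: a 1.5 Hz movement signal contaminated by slow linear drift
      fs = 500.0
      t = np.arange(0.0, 60.0, 1.0 / fs)
      force = np.sin(2 * np.pi * 1.5 * t) + 0.02 * t
      clean, drift = remove_drift(force)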

  7. Integrated Disposal Facility FY2010 Glass Testing Summary Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pierce, Eric M.; Bacon, Diana H.; Kerisit, Sebastien N.

    2010-09-30

    Pacific Northwest National Laboratory was contracted by Washington River Protection Solutions, LLC to provide the technical basis for estimating radionuclide release from the engineered portion of the disposal facility (e.g., source term). Vitrifying the low-activity waste at Hanford is expected to generate over 1.6 × 10{sup 5} m{sup 3} of glass (Puigh 1999). The volume of immobilized low-activity waste (ILAW) at Hanford is the largest in the DOE complex and is one of the largest inventories (approximately 0.89 × 10{sup 18} Bq total activity) of long-lived radionuclides, principally {sup 99}Tc (t{sub 1/2} = 2.1 × 10{sup 5} years), planned for disposal in a low-level waste (LLW) facility. Before the ILAW can be disposed, DOE must conduct a performance assessment (PA) for the Integrated Disposal Facility (IDF) that describes the long-term impacts of the disposal facility on public health and environmental resources. As part of the ILAW glass testing program, PNNL is implementing a strategy, consisting of experimentation and modeling, in order to provide the technical basis for estimating radionuclide release from the glass waste form in support of future IDF PAs. The purpose of this report is to summarize the progress made in fiscal year (FY) 2010 toward implementing the strategy, with the goal of developing an understanding of the long-term corrosion behavior of low-activity waste glasses. The emphasis in FY2010 was on completing an evaluation of the most sensitive kinetic rate law parameters used to predict glass weathering, documented in Bacon and Pierce (2010), and on transitioning from the Subsurface Transport Over Reactive Multi-phases code to the Subsurface Transport Over Multiple Phases computer code for near-field calculations. The FY2010 activities also consisted of developing a Monte Carlo and geochemical modeling framework that links glass composition to alteration phase formation by 1) determining the structure of unreacted and reacted glasses for use as input information into Monte Carlo calculations, 2) compiling the solution data and alteration phases identified from accelerated weathering tests conducted with ILAW glass by PNNL and the Vitreous State Laboratory/Catholic University of America, as well as other literature sources, for use in geochemical modeling calculations, and 3) conducting several initial calculations on glasses that contain the four major components of ILAW: Al{sub 2}O{sub 3}, B{sub 2}O{sub 3}, Na{sub 2}O, and SiO{sub 2}.

  8. Dosimetric characterization of two radium sources for retrospective dosimetry studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Candela-Juan, C., E-mail: ccanjuan@gmail.com; Karlsson, M.; Lundell, M.

    2015-05-15

    Purpose: During the first part of the 20th century, {sup 226}Ra was the most used radionuclide for brachytherapy. Accurate retrospective dosimetry, coupled with patient follow-up, is important for advancing knowledge on long-term radiation effects. The purpose of this work was to dosimetrically characterize two {sup 226}Ra sources, commonly used in Sweden during the first half of the 20th century, for retrospective dose–effect studies. Methods: An 8 mg {sup 226}Ra tube and a 10 mg {sup 226}Ra needle, used at Radiumhemmet (Karolinska University Hospital, Stockholm, Sweden) from 1925 to the 1960s, were modeled in two independent Monte Carlo (MC) radiation transport codes: GEANT4 and MCNP5. Absorbed dose and collision kerma around the two sources were obtained, from which the TG-43 parameters were derived for the secular equilibrium state. Furthermore, results from this dosimetric formalism were compared with results from a MC simulation of a superficial mould constituted by five needles inside a glass casing, placed over a water phantom, to mimic a typical clinical setup. Calculated absorbed doses using the TG-43 formalism were also compared with previously reported measurements and calculations based on the Sievert integral. Finally, the dose rate at large distances from a {sup 226}Ra point-like source placed in the center of a 1 m radius water sphere was calculated with GEANT4. Results: TG-43 parameters [including g{sub L}(r), F(r, θ), Λ, and S{sub K}] have been uploaded in spreadsheets as additional material, and the fitting parameters of a mathematical curve that provides the dose rate between 10 and 60 cm from the source are given. Results from the TG-43 formalism are consistent within the treatment volume with those of a MC simulation of a typical clinical scenario. Comparisons with reported measurements made with thermoluminescent dosimeters show differences up to 13% along the transverse axis of the radium needle. It has been estimated that the uncertainty associated with the absorbed dose within the treatment volume is 10%-15%, whereas the uncertainty of the absorbed dose to distant organs is roughly 20%-25%. Conclusions: The results provided here facilitate retrospective dosimetry studies of {sup 226}Ra using modern treatment planning systems, which may be used to improve knowledge of long-term radiation effects. Epidemiologic studies should take the uncertainties estimated here into account before drawing conclusions.
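
    The TG-43 parameters listed above enter the standard AAPM dose-rate equation (quoted from the general TG-43 formalism, not from this paper):

      \dot{D}(r,\theta) \;=\; S_{K}\,\Lambda\,\frac{G_{L}(r,\theta)}{G_{L}(r_{0},\theta_{0})}\,g_{L}(r)\,F(r,\theta),

    with air-kerma strength S_K, dose-rate constant \Lambda, line-source geometry function G_L, radial dose function g_L, anisotropy function F, and the reference point (r_0 = 1 cm, \theta_0 = \pi/2).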

  9. Development of atmospheric N2O isotopomers model based on a chemistry-coupled atmospheric general circulation model

    NASA Astrophysics Data System (ADS)

    Ishijima, K.; Toyoda, S.; Sudo, K.; Yoshikawa, C.; Nanbu, S.; Aoki, S.; Nakazawa, T.; Yoshida, N.

    2009-12-01

    It is well known that isotopic information is useful for qualitatively understanding cycles and constraining sources of some atmospheric species, but so far there has been no study modeling N2O isotopomers throughout the atmosphere from the troposphere to the stratosphere, including realistic surface N2O isotopomer emissions. We have started to develop a model to simulate spatiotemporal variations of the atmospheric N2O isotopomers in both the troposphere and the stratosphere, based on a chemistry-coupled atmospheric general circulation model, in order to obtain a more accurate quantitative understanding of the global N2O cycle. For surface emissions of the isotopomers, a combination of EDGAR-based anthropogenic and soil fluxes and monthly varying GEIA oceanic fluxes is used, with isotopic values of the global total source estimated from the long-term trend of atmospheric N2O isotopomers in firn air. Isotopic fractionation in chemical reactions is considered for photolysis and photo-oxidation of N2O in the stratosphere. The isotopic fractionation coefficients have been taken from laboratory experiments, but we will also test coefficients determined by theoretical calculations. In terms of the global N2O isotopomer budgets, precise quantification of the sources is quite challenging, because even the spatiotemporal variability of N2O sources has never been adequately estimated. Therefore, we have first started validating the simulated isotopomer results in the stratosphere, using isotopomer profiles obtained by balloon observations. N2O concentration profiles are mostly well reproduced, partly because dynamical processes are realistically reproduced by nudging with reanalysis meteorological data. However, the concentration in the polar vortex tends to be overestimated, probably due to the relatively coarse wavelength resolution in the photolysis calculation. Such model features also appear in the isotopomer results, which are mostly underestimated relative to the balloon observations even though the concentration is well simulated. This tendency has been somewhat improved by incorporating another photolysis scheme with slightly higher wavelength resolution into the model. From another point of view, these facts indicate that N2O isotopomers can be used for validation of stratospheric photochemical calculations in models, because the isotopomer ratios are highly sensitive to settings such as the wavelength resolution of the photochemical scheme. N2O isotopomer modeling therefore seems not only useful for validating the fractionation coefficients and the isotopic characterization of sources, but may also serve as an index of precision for stratospheric photolysis in models.

  10. SU-F-T-06: Development of a Formalism for Practical Dose Measurements in Brachytherapy in the German Standard DIN 6803

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hensley, F; Chofor, N; Schoenfeld, A

    2016-06-15

    Purpose: In the steep dose gradients in the vicinity of a radiation source, and due to the properties of the changing photon spectra, dose measurements in brachytherapy usually have large uncertainties. Working group DIN 6803-3 is presently discussing recommendations for practical brachytherapy dosimetry, incorporating recent theoretical developments in the description of brachytherapy radiation fields as well as new detectors and phantom materials. The goal is to prepare methods and instruments to verify dose calculation algorithms and for clinical dose verification with reduced uncertainties. Methods: After analysis of the distance-dependent spectral changes of the radiation field surrounding brachytherapy sources, the energy-dependent response of typical brachytherapy detectors was examined with Monte Carlo simulations. A dosimetric formalism was developed allowing the correction of their energy dependence as a function of source distance for a Co-60 calibrated detector. Water-equivalent phantom materials were examined with Monte Carlo calculations for their influence on brachytherapy photon spectra and for their water equivalence in terms of generating equivalent distributions of photon spectra and absorbed dose to water. Results: The energy dependence of a detector in the vicinity of a brachytherapy source can be described by defining an energy correction factor kQ for brachytherapy, in the same manner as in existing dosimetry protocols, which incorporates volume averaging and radiation field distortion by the detector. Solid phantom materials were identified which allow precise positioning of a detector together with small, correctable deviations from absorbed dose to water. Recommendations for the selection of detectors and phantom materials are being developed for different measurements in brachytherapy. Conclusion: The introduction of kQ for brachytherapy sources may allow more systematic and comparable dose measurements. In principle, the corrections can be verified or even determined by measurement in a water phantom and comparison with dose distributions calculated using the TG-43 dosimetry formalism. This project is supported by DIN Deutsches Institut fuer Normung.

  11. Dosimetric comparison between model 9011 and 6711 sources in prostate implants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Hualin, E-mail: zhang248@iupui.edu; Arizona Oncology Services, Phoenix, AZ; Beyer, David

    2013-07-01

    The purpose of this work is to evaluate the model 9011 iodine-125 ({sup 125}I) source in prostate implants by comparing the dosimetric coverage provided by 6711 vs 9011 source implants. Postimplant dosimetry was performed in 18 consecutively implanted patients with prostate cancer. Two were implanted with the 9011 source and 16 with the 6711 source. For purposes of comparison, each implant was then recalculated assuming use of the other source. The same commercially available planning system was used, and the specific source data for both the 6711 and 9011 products were entered. The results of these calculations are compared side by side in terms of the isodose values covering 100% (D100) and 90% (D90) of the prostate volume, and the percentages of the volumes of prostate, bladder, rectum, and urethra covered by 200% (V200), 150% (V150), 100% (V100), 50% (V50), and 20% (V20) of the prescribed dose. The 6711 source data overestimate coverage by 6.4% (ranging from 4.9% to 6.9%; median 6.6%) at D100 and by 6.6% (ranging from 6.2% to 6.8%; median 6.6%) at D90 compared with actual 9011 data. Greater discrepancies of up to 67% are seen at higher dose levels: the average reduction for V100 is 2.7% (ranging from 0.6% to 7.7%; median 2.3%), for V150 is 14.6% (ranging from 6.1% to 20.5%; median 15.3%), and for V200 is 14.9% (ranging from 4.8% to 19.1%; median 16%); similar effects are seen in bladder, rectal, and urethral coverage. This work demonstrates a clear difference in dosimetric behavior between the 9011 and 6711 sources. Using the 6711 source data for 9011 source implants would create a pronounced error in dose calculation. This study provides evidence that the 9011 source can provide the same dosimetric quality as the 6711 source, if properly used; however, the 6711 source data should not be considered as a surrogate for the 9011 source implants.

  12. Generalized design of a zero-geometric-loss, astigmatism-free, modified four-objective multipass matrix system.

    PubMed

    Guo, Yin; Sun, LiQun; Yang, Zheng; Liu, Zilong

    2016-02-20

    During this study we constructed a generalized parametric modified four-objective multipass matrix system (MMS). We used an optical system comprising four asymmetrical spherical mirrors to improve the alignment process. The use of a paraxial equation for the design of the front transfer optics yielded the initial condition for modeling our MMS. We performed a ray tracing simulation to calculate the significant aberration of the system (astigmatism). Based on the calculated meridional and sagittal focus positions, the complementary focusing mirror was easily designed to provide an output beam free of astigmatism. We present an example of a 108-transit multipass system (5×7 matrix arrangement) with a relatively large numerical aperture source (a xenon light source). The whole system exhibits zero theoretical geometric loss when simulated with Zemax software. The MMS construction strategy described in this study provides an anastigmatic output beam and a generalized approach to designing a controllable matrix spot pattern on the field mirrors. Asymmetrical reflective mirrors aid in aligning the whole system with high efficiency. With the generalized design strategy presented here, in terms of optics configuration and asymmetrical fabrication, other kinds of multipass matrix systems coupled with different sources and detector systems can also be realized.

  13. Optimization of a mirror-based neutron source using differential evolution algorithm

    NASA Astrophysics Data System (ADS)

    Yurov, D. V.; Prikhodko, V. V.

    2016-12-01

    This study is dedicated to assessing the capabilities of the gas-dynamic trap (GDT) and the gas-dynamic multiple-mirror trap (GDMT) as potential neutron sources for subcritical hybrids. In mathematical terms, the problem has been formulated as determining the global maximum of the fusion gain (Qpl), the latter represented as a function of trap parameters. A differential evolution method has been applied to perform the search. All calculations considered a neutron source configuration with a 20 m distance between the mirrors and 100 MW of heating power. The numerical study also took into account a number of constraints on plasma characteristics so as to ensure the physical credibility of the resulting trap configurations. According to the results obtained, the traps considered demonstrate a fusion gain of up to 0.2, depending on the constraints applied. This enables them to be used either as neutron sources within subcritical reactors for minor actinide incineration or as material-testing facilities.
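
    As a schematic of the optimization set-up, not of the plasma physics, the sketch below maximizes a stand-in fusion-gain function under a penalized constraint using SciPy's differential evolution; the parameter names, bounds, and constraint are all invented.

      import numpy as np
      from scipy.optimize import differential_evolution

      def q_pl(x):
          # Hypothetical stand-in for the plasma model: fusion gain as a
          # smooth function of mirror ratio, density, and injection energy.
          mirror_ratio, density, energy = x
          return 0.2 * np.exp(-((mirror_ratio - 12.0) / 6.0) ** 2
                              - ((density - 2.0) / 1.5) ** 2
                              - ((energy - 65.0) / 30.0) ** 2)

      def objective(x):
          # Penalize configurations violating a toy plasma-limit proxy so
          # the search stays in a physically plausible region.
          beta = 0.04 * x[1] * x[2]
          penalty = 10.0 * max(0.0, beta - 5.0)
          return -q_pl(x) + penalty     # DE minimizes, so negate the gain

      bounds = [(2.0, 30.0), (0.1, 5.0), (10.0, 100.0)]   # toy parameter ranges
      result = differential_evolution(objective, bounds, seed=7)
      print(result.x, -result.fun)      # best parameters and the gain reached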

  14. Excitation of Love waves in a thin film layer by a line source.

    NASA Technical Reports Server (NTRS)

    Tuan, H.-S.; Ponamgi, S. R.

    1972-01-01

    The excitation of a Love surface wave guided by a thin film layer deposited on a semi-infinite substrate is studied in this paper. Both the thin film and the substrate are considered to be elastically isotropic. Amplitudes of the surface wave in the thin film region and the substrate are found in terms of the strength of a line source vibrating in a direction transverse to the propagating wave. In addition to the surface wave, the bulk shear wave excited by the source is also studied. Analytical expressions for the bulk wave amplitude as a function of the direction of propagation, the acoustic powers transported by the surface and bulk waves, and the efficiency of surface wave excitation are obtained. A numerical example is given to show how the bulk wave radiation pattern depends upon the source frequency, the film thickness, and other important parameters of the problem. The efficiency of surface wave excitation is also calculated for various parameter values.

  15. Potential sources of precipitation in Lake Baikal basin

    NASA Astrophysics Data System (ADS)

    Shukurov, K. A.; Mokhov, I. I.

    2017-11-01

    Based on data from long-term measurements at 23 meteorological stations in the Russian part of the Lake Baikal basin, the probabilities of daily precipitation of different intensities and their contributions to the total precipitation are estimated. Using the trajectory model HYSPLIT_4, 10-day backward trajectories of air parcels, the heights of these trajectories, and the distribution of specific humidity along the trajectories are calculated for each meteorological station for the period 1948-2016. The average field of the power of potential sources of daily precipitation (less than 10 mm) for all meteorological stations in the Russian part of the Lake Baikal basin was obtained using the concentration weighted trajectory (CWT) method. Areas from which water vapor can be transported to the Lake Baikal basin within 10 days have been identified, as well as the regions of the most and least powerful potential sources. The fields of the mean height of air parcel trajectories and the mean specific humidity along the trajectories are compared with the field of mean power of potential sources.
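
    The CWT field referred to here is, in essence, a residence-time-weighted average of the value attached to each trajectory over every grid cell its points visit. A minimal NumPy sketch of that averaging (the grid and inputs are generic, not the study's configuration):

      import numpy as np

      def cwt_field(traj_lats, traj_lons, values, lat_edges, lon_edges):
          """CWT field: in each grid cell, the average of the value tied to
          each trajectory, weighted by the number of trajectory points
          (a residence-time proxy) falling in that cell."""
          num = np.zeros((len(lat_edges) - 1, len(lon_edges) - 1))
          den = np.zeros_like(num)
          for lats, lons, c in zip(traj_lats, traj_lons, values):
              tau, _, _ = np.histogram2d(lats, lons, bins=[lat_edges, lon_edges])
              num += c * tau
              den += tau
          with np.errstate(invalid="ignore"):
              return np.where(den > 0, num / den, np.nan)

      # usage: two made-up 10-day back trajectories with attached rainfall
      lat_edges = np.arange(40.0, 61.0, 1.0)
      lon_edges = np.arange(90.0, 121.0, 1.0)
      field = cwt_field(
          [np.linspace(52, 45, 240), np.linspace(52, 58, 240)],
          [np.linspace(105, 95, 240), np.linspace(105, 115, 240)],
          [8.0, 2.0], lat_edges, lon_edges)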

  16. Benchmarking kinetic calculations of resistive wall mode stability

    NASA Astrophysics Data System (ADS)

    Berkery, J. W.; Liu, Y. Q.; Wang, Z. R.; Sabbagh, S. A.; Logan, N. C.; Park, J.-K.; Manickam, J.; Betti, R.

    2014-05-01

    Validating the calculations of kinetic resistive wall mode (RWM) stability is important for confidently predicting RWM stable operating regions in ITER and other high performance tokamaks for disruption avoidance. Benchmarking the calculations of the Magnetohydrodynamic Resistive Spectrum—Kinetic (MARS-K) [Y. Liu et al., Phys. Plasmas 15, 112503 (2008)], Modification to Ideal Stability by Kinetic effects (MISK) [B. Hu et al., Phys. Plasmas 12, 057301 (2005)], and Perturbed Equilibrium Nonambipolar Transport (PENT) [N. Logan et al., Phys. Plasmas 20, 122507 (2013)] codes for two Solov'ev analytical equilibria and a projected ITER equilibrium has demonstrated good agreement between the codes. The important particle frequencies, the frequency resonance energy integral in which they are used, the marginally stable eigenfunctions, perturbed Lagrangians, and fluid growth rates are all generally consistent between the codes. The most important kinetic effect at low rotation is the resonance between the mode rotation and the trapped thermal particles' precession drift, and MARS-K, MISK, and PENT show good agreement in this term. The different ways the rational surface contribution has historically been treated in the codes are identified as a source of disagreement in the bounce and transit resonance terms at higher plasma rotation. Calculations from all of the codes support the present understanding that RWM stability can be increased by kinetic effects at low rotation through precession drift resonance and at high rotation by bounce and transit resonances, while intermediate rotation can remain susceptible to instability. The applicability of benchmarked kinetic stability calculations to experimental results is demonstrated by the prediction of near marginal growth rates by MISK calculations for experimental marginal stability points from the National Spherical Torus Experiment (NSTX) [M. Ono et al., Nucl. Fusion 40, 557 (2000)].

  17. Sulfur activation at the Little Boy-Comet Critical Assembly: a replica of the Hiroshima bomb

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerr, G.D.; Emery, J.F.; Pace, J.V. III

    1985-04-01

    Studies have been completed on the activation of sulfur by fast neutrons from the Little Boy-Comet Critical Assembly, which replicates the general features of the Hiroshima bomb. The complex effects of the bomb's design and construction on the leakage of sulfur-activating neutrons were investigated both experimentally and theoretically. Our sulfur activation studies were performed as part of a larger program to provide benchmark data for testing the methods used in recent source-term calculations for the Hiroshima bomb. Source neutrons capable of activating sulfur play an important role in determining neutron doses in Hiroshima at a kilometer or more from the point of explosion. 37 refs., 5 figs., 6 tabs.

  18. Space-Time Dependent Transport, Activation, and Dose Rates for Radioactivated Fluids.

    NASA Astrophysics Data System (ADS)

    Gavazza, Sergio

    Two methods are developed to calculate the space- and time-dependent mass transport of radionuclides, their production and decay, and the associated dose rates generated by radioactivated fluids flowing through pipes. The work couples space- and time-dependent phenomena that are treated as only space- or time-dependent in the open literature. The transport and activation methodology (TAM) is used to numerically calculate the space- and time-dependent transport and activation of radionuclides in fluids flowing through pipes exposed to radiation fields, and the volumetric radioactive sources created by radionuclide motion. The computer program Radionuclide Activation and Transport in Pipe (RNATPA1) performs the numerical calculations required in TAM. The gamma ray dose methodology (GAM) is used to numerically calculate space- and time-dependent gamma ray dose equivalent rates from the volumetric radioactive sources determined by TAM. The computer program Gamma Ray Dose Equivalent Rate (GRDOSER) performs the numerical calculations required in GAM. The scope of conditions considered by TAM and GAM herein includes (a) laminar flow in straight pipe, (b) recirculating flow schemes, (c) time-independent fluid velocity distributions, (d) space-dependent monoenergetic neutron flux distributions, (e) the space- and time-dependent activation of a single parent nuclide and the transport and decay of a single daughter radionuclide, and (f) assessment of the space- and time-dependent gamma ray dose rates, outside the pipe, generated by the space- and time-dependent source term distributions inside of it. The methodologies, however, can easily be extended to include all situations of interest for the phenomena addressed in this dissertation. A comparison is made between results obtained by the described calculational procedures and analytical expressions. The physics of the problems addressed by the new technique, and its increased accuracy relative to non-space- and time-dependent methods, are presented. The value of the methods is also discussed. It has been demonstrated that TAM and GAM can be used to enhance the understanding of the space- and time-dependent mass transport of radionuclides, their production and decay, and the associated dose rates for radioactivated fluids flowing through pipes.
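
    The coupled space-time character of the problem shows up already in a minimal 1-D analogue (a sketch, not the dissertation's TAM): fluid advected through an irradiated span of pipe gains activity by production and loses it by decay, stepped here with an explicit upwind scheme.

      import numpy as np

      # dN/dt + v dN/dx = R(x) - lam * N   on a pipe of length L
      L, nx = 10.0, 200
      dx = L / nx
      x = (np.arange(nx) + 0.5) * dx
      v = 1.0                           # flow speed, m/s
      lam = np.log(2.0) / 7.13          # decay constant of an N-16-like nuclide, 1/s
      R = np.where((x > 2.0) & (x < 4.0), 5.0, 0.0)   # production in the flux region

      dt = 0.5 * dx / v                 # CFL-stable time step
      N = np.zeros(nx)
      for _ in range(10_000):           # march toward steady state
          N[1:] += -v * dt / dx * (N[1:] - N[:-1]) + dt * (R[1:] - lam * N[1:])
          N[0] = 0.0                    # clean fluid enters the pipe

      print(N.max())                    # peak activity concentration along the pipe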

  19. Experiments with Lasers and Frequency Doublers

    NASA Technical Reports Server (NTRS)

    Bachor, H.-A.; Taubman, M.; White, A. G.; Ralph, T.; McClelland, D. E.

    1996-01-01

    Solid state laser sources, such as diode-pumped Nd:YAG lasers, have given us CW laser light of high power with unprecedented stability and low noise performance. In these lasers most of the technical sources of noise can be eliminated, allowing them to be operated close to the theoretical noise limit set by the quantum properties of light. The next step of reducing the noise below the standard limit is known as squeezing. We present experimental progress in generating reliably squeezed light using the process of frequency doubling. We emphasize the long-term stability that makes this a truly practical source of squeezed light. Our experimental results match noise spectra calculated with our recently developed models of coupled systems, which include the noise generated inside the laser and its interaction with the frequency doubler. We conclude with some observations on evaluating quadrature-squeezed states of light.

  20. Reciprocity relationships in vector acoustics and their application to vector field calculations.

    PubMed

    Deal, Thomas J; Smith, Kevin B

    2017-08-01

    The reciprocity equation commonly stated in underwater acoustics relates pressure fields and monopole sources. It is often used to predict the pressure measured by a hydrophone for multiple source locations by placing a source at the hydrophone location and calculating the field everywhere for that source. A similar equation that governs the orthogonal components of the particle velocity field is needed to enable this computational method to be used for acoustic vector sensors. This paper derives a general reciprocity equation that accounts for both monopole and dipole sources. This vector-scalar reciprocity equation can be used to calculate individual components of the received vector field by altering the source type used in the propagation calculation. This enables a propagation model to calculate the received vector field components for an arbitrary number of source locations with a single model run for each vector field component instead of requiring one model run for each source location. Application of the vector-scalar reciprocity principle is demonstrated with analytic solutions for a range-independent environment and with numerical solutions for a range-dependent environment using a parabolic equation model.
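
    Schematically, the familiar scalar reciprocity statement and the Euler relation that links pressure to particle velocity are (stated generically; the paper's full vector-scalar relation extends these to dipole sources):

      p\big(\mathbf{x}_{A} \,\big|\, \text{monopole at } \mathbf{x}_{B}\big)
      \;=\;
      p\big(\mathbf{x}_{B} \,\big|\, \text{monopole at } \mathbf{x}_{A}\big),
      \qquad
      \mathbf{v} \;=\; -\frac{1}{\mathrm{i}\omega\rho}\,\nabla p,

    so a velocity component at the receiver can be obtained by placing a suitably oriented dipole there and evaluating the resulting field at the source positions, one model run per vector component.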

  1. Multiyear greenhouse gas balances at a rewetted temperate peatland.

    PubMed

    Wilson, David; Farrell, Catherine A; Fallon, David; Moser, Gerald; Müller, Christoph; Renou-Wilson, Florence

    2016-12-01

    Drained peat soils are a significant source of greenhouse gas (GHG) emissions to the atmosphere. Rewetting these soils is considered an important climate change mitigation tool to reduce emissions and create suitable conditions for carbon sequestration. Long-term monitoring is essential to capture interannual variations in GHG emissions and associated environmental variables and to reduce the uncertainty linked with GHG emission factor calculations. In this study, we present GHG balances of carbon dioxide (CO2), methane (CH4) and nitrous oxide (N2O) calculated for a 5-year period at a rewetted industrial cutaway peatland in Ireland (rewetted 7 years prior to the start of the study), and compare the results with an adjacent drained area (2-year data set) and with ten long-term data sets from intact (i.e. undrained) peatlands in temperate and boreal regions. In the rewetted site, CO2 exchange (or net ecosystem exchange (NEE)) was strongly influenced by ecosystem respiration (Reco) rather than gross primary production (GPP). CH4 emissions were related to soil temperature and either water table level or plant biomass. N2O emissions were not detected in either drained or rewetted sites. Rewetting reduced CO2 emissions in unvegetated areas by approximately 50%. When upscaled to the ecosystem level, the emission factors (calculated as the 5-year mean of annual balances) for the rewetted site were (±SD) -104 ± 80 g CO2-C m^-2 yr^-1 (i.e. a CO2 sink) and 9 ± 2 g CH4-C m^-2 yr^-1 (i.e. a CH4 source). Nearly a decade after rewetting, the GHG balance (100-year global warming potential) had reduced noticeably (i.e. less warming) in comparison with the drained site but was still higher than at comparable intact sites. Our results indicate that rewetted sites may be more sensitive to interannual changes in weather conditions than their more resilient intact counterparts and may switch from an annual CO2 sink to a source if triggered by slightly drier conditions.

  2. Use of Online Sources of Information by Dental Practitioners: Findings from The Dental Practice-Based Research Network

    PubMed Central

    Funkhouser, Ellen; Agee, Bonita S.; Gordan, Valeria V.; Rindal, D. Brad; Fellows, Jeffrey L.; Qvist, Vibeke; McClelland, Jocelyn; Gilbert, Gregg H.

    2013-01-01

    Objectives Estimate the proportion of dental practitioners who use online sources of information for practice guidance. Methods From a survey of 657 dental practitioners in The Dental Practice Based Research Network, four indicators of online use for practice guidance were calculated: read journals online, obtained continuing education (CDE) through online sources, rated an online source as most influential, and reported frequently using an online source for guidance. Demographics, journals read, and use of various sources of information for practice guidance in terms of frequency and influence were ascertained for each. Results Overall, 21% (n=138) were classified into one of the four indicators of online use: 14% (n=89) rated an online source as most influential and 13% (n=87) reported frequently using an online source for guidance; few practitioners (5%, n=34) read journals online, fewer (3%, n=17) obtained CDE through online sources. Use of online information sources varied considerably by region and practice characteristics. In general, the 4 indicators represented practitioners with as many differences as similarities to each other and to offline users. Conclusion A relatively small proportion of dental practitioners use information from online sources for practice guidance. Variation exists regarding practitioners’ use of online source resources and how they rate the value of offline information sources for practice guidance. PMID:22994848

  3. Thermoelastic stress in oceanic lithosphere due to hotspot reheating

    NASA Technical Reports Server (NTRS)

    Zhu, Anning; Wiens, Douglas A.

    1991-01-01

    The effect of hotspot reheating on the intraplate stress field is investigated by modeling the three-dimensional thermal stress field produced by nonuniform temperature changes in an elastic plate. Temperature perturbations are calculated assuming that the lithosphere is heated by a source in the lower part of the thermal lithosphere. A thermal stress model for the elastic lithosphere is calculated by superposing the stress fields resulting from temperature changes in small individual elements. The stress in an elastic plate resulting from a temperature change in each small element is expressed as an infinite series, wherein each term is a source or an image modified from a closed-form half-space solution. The thermal stress solution is applied to midplate swells in oceanic lithosphere with various thermal structures and plate velocities. The results predict a stress field with a maximum deviatoric stress on the order of 100 MPa covering a broad area around the hotspot plume. The predicted principal stress orientations show a complicated geographical pattern, with horizontal extension perpendicular to the hotspot track at shallow depths and compression along the track near the bottom of the elastic lithosphere.

  4. Modeling the transport of PCDD/F compounds in a contaminated river and the possible influence of restoration dredging on calculated fluxes.

    PubMed

    Malve, Olli; Salo, Simo; Verta, Matti; Forsius, John

    2003-08-01

    River Kymijoki, the fourth largest river in Finland, has been heavily polluted by pulp mill effluents as well as by chemical industry. Loading has been reduced considerably, although remains of past emissions still exist in river sediments. The sediments are highly contaminated with polychlorinated dibenzo-p-dioxins (PCDDs), polychlorinated dibenzofurans (PCDFs), polychlorinated diphenyl ethers (PCDEs), and mercury originating from production of the chlorophenolic wood preservative (Ky-5) and other sources. The objective of this study was to simulate the transport of these PCDD/F compounds with a one-dimensional flow and transport model and to assess the impact of restoration dredging. Using the estimated trend in PCDD/F loading, downstream concentrations were calculated until 2020. If contaminated sediments are removed by dredging, the temporary increase of PCDD/F concentrations in downstream water and surface sediments will be within acceptable limits. Long-term predictions indicated only a minor decrease in surface sediment concentrations but a major decrease if the most contaminated sediments close to the emission source were removed. A more detailed assessment of the effects is suggested.
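
    The record does not reproduce the model equations, but a one-dimensional river flow and transport model of this kind typically balances advection, longitudinal dispersion, and source/sink exchange with the bed, schematically

        \[ \frac{\partial C}{\partial t} + u\,\frac{\partial C}{\partial x} = \frac{\partial}{\partial x}\left( D\,\frac{\partial C}{\partial x} \right) + S(x,t), \]

    where C is the dissolved or particle-bound PCDD/F concentration, u the flow velocity, D the dispersion coefficient, and S the net exchange with the contaminated bed sediments, the term that dredging scenarios would modify.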

  5. Calculations of cosmogenic nuclide production rates in the Earth's atmosphere and their inventories

    NASA Technical Reports Server (NTRS)

    Obrien, K.

    1986-01-01

    The production rates of cosmogenic isotopes in the Earth's atmosphere and their resulting terrestrial abundances have been calculated, taking into account both geomagnetic and solar-modulation effects. The local interstellar flux was assumed to be that of Garcia-Munoz et al. Solar modulation was accounted for using the heliocentric potential model and expressed in terms of the Deep River neutron monitor count rates. The geomagnetic field was represented by vertical cutoffs calculated by Shea and Smart and non-vertical cutoffs calculated using ANGRI. The local interstellar particle flux was first modulated using the heliocentric potential field. The modulated cosmic-ray fluxes reaching the Earth's orbit then interacted with the geomagnetic field as though it were a high-pass filter. The interaction of the cosmic radiation with the Earth's atmosphere was calculated using the Boltzmann transport equation. Spallation cross sections for isotope production were calculated using the formalism of Silberberg and Tsao, and other cross sections were taken from standard sources. Inventories were calculated by accounting for the variation in solar modulation and geomagnetic field strength with time. Results for many isotopes, including C-14, Be-7 and Be-10, are in generally good agreement with existing data. The C-14 inventory, for instance, amounts to 1.75/sq cm(e)/s, in excellent agreement with direct estimates.

  6. Direct design of aspherical lenses for extended non-Lambertian sources in three-dimensional rotational geometry

    PubMed Central

    Wu, Rengmao; Hua, Hong

    2016-01-01

    Illumination design used to redistribute the spatial energy distribution of light source is a key technique in lighting applications. However, there is still no effective illumination design method for extended sources, especially for extended non-Lambertian sources. What we present here is to our knowledge the first direct method for extended non-Lambertian sources in three-dimensional (3D) rotational geometry. In this method, both meridional rays and skew rays of the extended source are taken into account to tailor the lens profile in the meridional plane. A set of edge rays and interior rays emitted from the extended source which will take a given direction after the refraction of the aspherical lens are found by the Snell’s law, and the output intensity at this direction is then calculated to be the integral of the luminance function of the outgoing rays at this direction. This direct method is effective for both extended non-Lambertian sources and extended Lambertian sources in 3D rotational symmetry, and can directly find a solution to the prescribed design problem without cumbersome iterative illuminance compensation. Two examples are presented to demonstrate the effectiveness of the proposed method in terms of performance and capacity for tackling complex designs. PMID:26832484

  7. Analysis and optimization of minor actinides transmutation blankets with regards to neutron and gamma sources

    NASA Astrophysics Data System (ADS)

    Kooyman, Timothée; Buiron, Laurent; Rimpault, Gérald

    2017-09-01

    Heterogeneous loading of minor actinides in radial blankets is a potential solution for implementing minor actinides transmutation in fast reactors. However, to compensate for the lower flux level experienced by the blankets, the fraction of minor actinides loaded in the blankets must be increased to maintain acceptable performance. This severely increases the decay heat and neutron source of the blanket assemblies, both before and after irradiation, by more than an order of magnitude in the case of the neutron source, for instance. We propose here an optimization methodology for blanket design with regard to various parameters, such as the local spectrum or the mass to be loaded, with the objective of minimizing the final neutron source of the spent assembly while maximizing the transmutation performance of the blankets. In a first stage, an analysis of the various contributors to the long- and short-term neutron and gamma source is carried out; in a second stage, relevant estimators are designed for use in the effective optimization process, which is carried out in the last step. A comparison with core calculations is finally made for completeness and validation purposes. It is found that the use of a moderated spectrum in the blankets can be beneficial in terms of the final neutron and gamma source, without impacting minor actinides transmutation performance, compared to the more energetic spectrum that could be achieved using metallic fuel, for instance. It is also confirmed that, if possible, the use of hydrides as moderating material in the blankets is a promising option to limit the total minor actinides inventory in the fuel cycle. If not, it appears that focus should be put upon an increased residence time for the blankets rather than an increase in the acceptable neutron source for handling and reprocessing.

  8. A variable ULX and possible IMBH candidate in M51a

    NASA Astrophysics Data System (ADS)

    Earnshaw, Hannah M.; Roberts, Timothy P.; Heil, Lucy M.; Mezcua, Mar; Walton, Dominic J.; Done, Chris; Harrison, Fiona A.; Lansbury, George B.; Middleton, Matthew J.; Sutton, Andrew D.

    2016-03-01

    Ultraluminous X-ray source (ULX)-7, in the northern spiral arm of M51, demonstrates unusual behaviour for a ULX, with a hard X-ray spectrum but very high short-term variability. This suggests that it is not in a typical ultraluminous state. We analyse the source using archival data from XMM-Newton, Chandra and NuSTAR, and by examining optical and radio data from HST and the Very Large Array. Our X-ray spectral analysis shows that the source has a hard power-law spectral shape with a photon index Γ ~ 1.5, which persists despite the source's X-ray luminosity varying by over an order of magnitude. The power spectrum of the source features a break at 6.5^{+0.5}_{-1.1} × 10^-3 Hz, from a low-frequency spectral index of α_1 = -0.1^{+0.5}_{-0.2} to a high-frequency spectral index of α_2 = 6.5^{+0.05}_{-0.14}, making it analogous to the low-frequency break found in the power spectra of low/hard state black holes (BHs). We can take a lower frequency limit for a corresponding high-frequency break to calculate a BH mass upper limit of 1.6 × 10^3 M⊙. Using the X-ray/radio Fundamental Plane, we calculate another upper limit to the BH mass of 3.5 × 10^4 M⊙ for a BH in the low/hard state. The hard spectrum, high rms variability and mass limits are consistent with ULX-7 being an intermediate-mass BH; however, we cannot exclude other interpretations of this source's interesting behaviour, most notably a neutron star with an extreme accretion rate.

  9. pySeismicFMM: Python based Travel Time Calculation in Regular 2D and 3D Grids in Cartesian and Geographic Coordinates using Fast Marching Method

    NASA Astrophysics Data System (ADS)

    Wilde-Piorko, M.; Polkowski, M.

    2016-12-01

    Seismic wave travel time calculation is the most common numerical operation in seismology. The most efficient is travel time calculation in a 1D velocity model: for given source and receiver depths and angular distance, the time is calculated within a fraction of a second. Unfortunately, in most cases 1D is not enough to capture differentiating local and regional structures. Whenever possible, travel time through a 3D velocity model has to be calculated. This can be achieved using ray calculation or time propagation in space. While a single ray path calculation is quick, it is complicated to find the ray path that connects the source with the receiver. Time propagation in space using the Fast Marching Method seems more efficient in most cases, especially when there are multiple receivers. In this presentation the final release of the Python module pySeismicFMM is presented: a simple and very efficient tool for calculating travel time from sources to receivers. Calculation requires a regular 2D or 3D velocity grid, either in Cartesian or geographic coordinates. On a desktop-class computer, the calculation speed is 200k grid cells per second. The calculation has to be performed once for every source location and provides travel times to all receivers. pySeismicFMM is free and open source. Development of this tool is a part of the author's PhD thesis. The source code of pySeismicFMM will be published before the Fall Meeting. The National Science Centre Poland provided financial support for this work via NCN grant DEC-2011/02/A/ST10/00284.
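
    The pySeismicFMM interface itself is not documented in this record; purely as an illustration of the fast-marching travel-time computation it performs, the following sketch uses the open-source scikit-fmm package on a regular 2D Cartesian grid (the grid dimensions, velocities and source location are arbitrary assumptions):

        import numpy as np
        import skfmm  # open-source fast marching module (pip install scikit-fmm)

        # Regular 2D grid with 1 km spacing; a faster layer below 50 km depth
        nx, nz = 200, 100
        speed = np.full((nz, nx), 5.0)   # P velocity in km/s (assumed values)
        speed[50:, :] = 8.0

        # The zero contour of phi marks the source: negative only at the source cell
        phi = np.ones((nz, nx))
        phi[0, 100] = -1.0               # surface source, mid-grid

        # One fast-marching run yields travel times from the source to every
        # grid cell, i.e. to all receivers at once, as described above
        tt = skfmm.travel_time(phi, speed, dx=1.0)
        print(tt[0, 0])                  # travel time to a receiver at one corner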

  10. Monte Carlo calculation of "skyshine" neutron dose from ALS (Advanced Light Source)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moin-Vasiri, M.

    1990-06-01

    This report discusses the following topics on "skyshine" neutron dose from ALS: sources of radiation; ALS modeling for skyshine calculations; MORSE Monte Carlo; implementation of MORSE; results of skyshine calculations from the storage ring; and comparison of MORSE shielding calculations.

  11. Dosimetric parameters of three new solid core I‐125 brachytherapy sources

    PubMed Central

    Solberg, Timothy D.; DeMarco, John J.; Hugo, Geoffrey; Wallace, Robert E.

    2002-01-01

    Monte Carlo calculations and TLD measurements have been performed for the purpose of characterizing dosimetric properties of new commercially available brachytherapy sources. All sources tested consisted of a solid core, upon which a thin layer of I-125 has been adsorbed, encased within a titanium housing. The PharmaSeed BT-125 source manufactured by Syncor is available in silver or palladium core configurations, while the ADVANTAGE source from IsoAid has silver only. Dosimetric properties, including the dose rate constant, radial dose function, and anisotropy characteristics, were determined according to the TG-43 protocol. Additionally, the geometry function was calculated exactly using Monte Carlo and compared with both the point and line source approximations. The 1999 NIST standard was followed in determining air kerma strength. Dose rate constants were calculated to be 0.955 ± 0.005, 0.967 ± 0.005, and 0.962 ± 0.005 cGy h^-1 U^-1 for the PharmaSeed BT-125-1, BT-125-2, and ADVANTAGE sources, respectively. TLD measurements were in excellent agreement with Monte Carlo calculations. The radial dose function, g(r), calculated to a distance of 10 cm, and the anisotropy function, F(r, θ), calculated for radii from 0.5 to 7.0 cm, were similar among all source configurations. Anisotropy constants, φ̄_an, were calculated to be 0.941, 0.944, and 0.960 for the three sources, respectively. All dosimetric parameters were found to be in close agreement with previously published data for similar source configurations. The MCNP Monte Carlo code appears to be ideally suited to low energy dosimetry applications. PACS number(s): 87.53.-j PMID:11958652
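
    For reference, the TG-43 quantities named above enter the standard two-dimensional dose-rate equation, stated here from general knowledge of the protocol rather than from the paper itself:

        \[ \dot{D}(r,\theta) = S_K\,\Lambda\,\frac{G(r,\theta)}{G(r_0,\theta_0)}\,g(r)\,F(r,\theta), \]

    where S_K is the air kerma strength, Λ the dose rate constant, G the geometry function evaluated relative to the reference point (r_0 = 1 cm, θ_0 = 90°), g(r) the radial dose function and F(r, θ) the anisotropy function.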

  12. Fourth-order self-energy contribution to the Lamb shift

    NASA Astrophysics Data System (ADS)

    Mallampalli, S.; Sapirstein, J.

    1998-03-01

    Two-loop self-energy contributions to the fourth-order Lamb shift of ground-state hydrogenic ions are treated to all orders in Zα by using exact Dirac-Coulomb propagators. A rearrangement of the calculation into four ultraviolet finite parts, the M, P, F, and perturbed orbital (PO) terms, is made. Reference-state singularities present in the M and P terms are shown to cancel. The most computationally intensive part of the calculation, the M term, is evaluated for hydrogenlike uranium and bismuth, the F term is evaluated for a range of Z values, but the P term is left for a future calculation. For hydrogenlike uranium, previous calculations of the PO term give -0.971 eV: the contributions from the M and F terms calculated here sum to -0.325 eV.

  13. CUMPOIS- CUMULATIVE POISSON DISTRIBUTION PROGRAM

    NASA Technical Reports Server (NTRS)

    Bowerman, P. N.

    1994-01-01

    The Cumulative Poisson distribution program, CUMPOIS, is one of two programs which make calculations involving cumulative Poisson distributions. Both programs, CUMPOIS (NPO-17714) and NEWTPOIS (NPO-17715), can be used independently of one another. CUMPOIS determines the approximate cumulative binomial distribution, evaluates the cumulative distribution function (cdf) for gamma distributions with integer shape parameters, and evaluates the cdf for chi-square distributions with even degrees of freedom. It can be used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. CUMPOIS calculates the probability that n or fewer events (i.e., cumulative) will occur within any unit when the expected number of events is given as lambda. Normally, this probability is calculated by a direct summation, from i=0 to n, of terms involving the exponential function, lambda, and inverse factorials. This approach, however, eventually fails due to underflow for sufficiently large values of n. Additionally, when the exponential term is moved outside of the summation for simplification purposes, there is a risk that the terms remaining within the summation, and the summation itself, will overflow for certain values of i and lambda. CUMPOIS eliminates these possibilities by multiplying an additional exponential factor into the summation terms and the partial sum whenever overflow/underflow situations threaten. The reciprocal of this term is then multiplied into the completed sum, giving the cumulative probability. The CUMPOIS program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly on most C compilers. The program format is interactive, accepting lambda and n as inputs. It has been implemented under DOS 3.2 and has a memory requirement of 26K. CUMPOIS was developed in 1988.
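
    The original C source is not reproduced in this record, but the rescaling idea it describes is straightforward to sketch; this Python version keeps exp(-lambda) outside the sum and rescales the partial sum whenever overflow threatens (the threshold value is an arbitrary assumption, not CUMPOIS's):

        import math

        def cumpois(lam: float, n: int) -> float:
            """P(X <= n) for X ~ Poisson(lam), via overflow-guarded summation."""
            term = 1.0      # the i = 0 term, lam**0 / 0!
            total = 1.0
            shift = 0.0     # accumulated log of the rescale factors applied
            for i in range(1, n + 1):
                term *= lam / i          # builds lam**i / i! incrementally
                total += term
                if total > 1e280:        # rescale before the sum can overflow
                    total *= 1e-280
                    term *= 1e-280
                    shift += 280 * math.log(10.0)
            # reapply exp(-lam) together with the accumulated rescale factors
            return math.exp(math.log(total) - lam + shift)

        print(cumpois(3.0, 5))  # ~0.9161: probability of 5 or fewer events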

  14. Model 'zero-age' lunar thermal profiles resulting from electrical induction

    NASA Technical Reports Server (NTRS)

    Herbert, F.; Sonett, C. P.; Wiskerchen, M. J.

    1977-01-01

    Thermal profiles for the moon are calculated under the assumption that a pre-main-sequence T-Tauri-like solar wind excites both transverse magnetic and transverse electric induction while the moon is accreting. A substantial initial temperature rise occurs, possibly of sufficient magnitude to cause subsequent early extensive melting throughout the moon in conjunction with nominal long-lived radioactives. In these models, accretion is an unimportant direct source of thermal energy but is important because even small temperature rises from accretion cause significant changes in bulk electrical conductivity. Induction depends upon the radius of the moon, which we take to be accumulating while it is being heated electrically. The 'zero-age' profiles calculated in this paper are proposed as initial conditions for long-term thermal evolution of the moon.

  15. Detailed source term estimation of atmospheric release during the Fukushima Dai-ichi nuclear power plant accident by coupling atmospheric and oceanic dispersion models

    NASA Astrophysics Data System (ADS)

    Katata, Genki; Chino, Masamichi; Terada, Hiroaki; Kobayashi, Takuya; Ota, Masakazu; Nagai, Haruyasu; Kajino, Mizuo

    2014-05-01

    Temporal variations in the release amounts of radionuclides during the Fukushima Dai-ichi Nuclear Power Plant (FNPP1) accident and their dispersion process are essential to evaluate the environmental impacts and resultant radiological doses to the public. Here, we estimated a detailed time trend of atmospheric releases during the accident by combining environmental monitoring data with coupled atmospheric and oceanic dispersion simulations by WSPEEDI-II (Worldwide version of System for Prediction of Environmental Emergency Dose Information) and SEA-GEARN, developed by the authors. New schemes for wet, dry, and fog depositions of radioactive iodine gas (I2 and CH3I) and other particles (I-131, Te-132, Cs-137, and Cs-134) were incorporated into WSPEEDI-II. The deposition calculated by WSPEEDI-II was used as input data for the ocean dispersion calculations by SEA-GEARN. A reverse estimation method based on simulations by both models assuming a unit release rate (1 Bq h^-1) was adopted to estimate the source term at FNPP1 using air dose rates and air and sea surface concentrations. The results suggested that the major releases of radionuclides from FNPP1 occurred in the following periods during March 2011: the afternoon of the 12th, when the venting and hydrogen explosion occurred at Unit 1; the morning of the 13th, after the venting event at Unit 3; midnight on the 14th, when several openings of the SRV (steam relief valve) were conducted at Unit 2; the morning and night of the 15th; and the morning of the 16th. The modified WSPEEDI-II using the newly estimated source term reproduced well the local and regional patterns of air dose rate and surface deposition of I-131 and Cs-137 obtained by airborne observations. Our dispersion simulations also revealed that the areas of highest radioactive contamination around FNPP1 were created from 15 to 16 March by complicated interactions among rainfall (wet deposition), plume movements, phase properties (gas or particle) of I-131, and release rates associated with reactor pressure variations in Units 2 and 3.

  16. Analytical modeling of operating characteristics of premixing-prevaporizing fuel-air mixing passages. Volume 1: Analysis and results

    NASA Technical Reports Server (NTRS)

    Anderson, O. L.; Chiappetta, L. M.; Edwards, D. E.; Mcvey, J. B.

    1982-01-01

    A model for predicting the distribution of liquid fuel droplets and fuel vapor in premixing-prevaporizing fuel-air mixing passages of the direct injection type is reported. This model consists of three computer programs: a calculation of the two-dimensional or axisymmetric air flow field, neglecting the effects of fuel; a calculation of the three-dimensional fuel droplet trajectories and evaporation rates in a known, moving air flow; and a calculation of fuel vapor diffusing into a moving three-dimensional air flow, with source terms dependent on the droplet evaporation rates. The fuel droplets are treated as individual particle classes, each satisfying Newton's law, a heat transfer equation, and a mass transfer equation. This fuel droplet model treats multicomponent fuels and incorporates the physics required for the treatment of elastic droplet collisions, droplet shattering, droplet coalescence and droplet-wall interactions. The vapor diffusion calculation treats three-dimensional, gas-phase, turbulent diffusion processes. The analysis includes a model for the autoignition of the fuel-air mixture based upon the rate of formation of an important intermediate chemical species during the preignition period.

  17. Photon migration in non-scattering tissue and the effects on image reconstruction

    NASA Astrophysics Data System (ADS)

    Dehghani, H.; Delpy, D. T.; Arridge, S. R.

    1999-12-01

    Photon propagation in tissue can be calculated using the relationship described by the transport equation. For scattering tissue this relationship is often simplified and expressed in terms of the diffusion approximation. This approximation, however, is not valid for non-scattering regions, for example cerebrospinal fluid (CSF) below the skull. This study looks at the effects of a thin clear layer in a simple model representing the head and examines its effect on image reconstruction. Specifically, boundary photon intensities (total number of photons exiting at a point on the boundary due to a source input at another point on the boundary) are calculated using the transport equation and compared with data calculated using the diffusion approximation for both non-scattering and scattering regions. The effect of non-scattering regions on the calculated boundary photon intensities is presented together with the advantages and restrictions of the transport code used. Reconstructed images are then presented where the forward problem is solved using the transport equation for a simple two-dimensional system containing a non-scattering ring and the inverse problem is solved using the diffusion approximation to the transport equation.
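
    In standard notation (not taken from the paper itself), the diffusion approximation referred to above replaces the transport equation by

        \[ -\nabla \cdot \left[ D(\mathbf{r})\,\nabla \Phi(\mathbf{r}) \right] + \mu_a(\mathbf{r})\,\Phi(\mathbf{r}) = S(\mathbf{r}), \qquad D = \frac{1}{3\,(\mu_a + \mu_s')}, \]

    where Φ is the photon fluence, μ_a the absorption coefficient, μ_s' the reduced scattering coefficient and S the source. The breakdown for a clear CSF layer is apparent from the second relation: D diverges as μ_s' → 0 in a weakly absorbing, non-scattering region.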

  18. 9 CFR 124.20 - Patent term extension calculation.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 9 Animals and Animal Products 1 2012-01-01 2012-01-01 false Patent term extension calculation. 124... OF AGRICULTURE VIRUSES, SERUMS, TOXINS, AND ANALOGOUS PRODUCTS; ORGANISMS AND VECTORS PATENT TERM RESTORATION Regulatory Review Period § 124.20 Patent term extension calculation. (a) As provided in 37 CFR 1...

  19. 9 CFR 124.20 - Patent term extension calculation.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 9 Animals and Animal Products 1 2011-01-01 2011-01-01 false Patent term extension calculation. 124... OF AGRICULTURE VIRUSES, SERUMS, TOXINS, AND ANALOGOUS PRODUCTS; ORGANISMS AND VECTORS PATENT TERM RESTORATION Regulatory Review Period § 124.20 Patent term extension calculation. (a) As provided in 37 CFR 1...

  20. 9 CFR 124.20 - Patent term extension calculation.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 9 Animals and Animal Products 1 2013-01-01 2013-01-01 false Patent term extension calculation. 124... OF AGRICULTURE VIRUSES, SERUMS, TOXINS, AND ANALOGOUS PRODUCTS; ORGANISMS AND VECTORS PATENT TERM RESTORATION Regulatory Review Period § 124.20 Patent term extension calculation. (a) As provided in 37 CFR 1...

  1. 9 CFR 124.20 - Patent term extension calculation.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 9 Animals and Animal Products 1 2014-01-01 2014-01-01 false Patent term extension calculation. 124... OF AGRICULTURE VIRUSES, SERUMS, TOXINS, AND ANALOGOUS PRODUCTS; ORGANISMS AND VECTORS PATENT TERM RESTORATION Regulatory Review Period § 124.20 Patent term extension calculation. (a) As provided in 37 CFR 1...

  2. 9 CFR 124.20 - Patent term extension calculation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 9 Animals and Animal Products 1 2010-01-01 2010-01-01 false Patent term extension calculation. 124... OF AGRICULTURE VIRUSES, SERUMS, TOXINS, AND ANALOGOUS PRODUCTS; ORGANISMS AND VECTORS PATENT TERM RESTORATION Regulatory Review Period § 124.20 Patent term extension calculation. (a) As provided in 37 CFR 1...

  3. Modeling and measurements of XRD spectra of extended solids under high pressure

    NASA Astrophysics Data System (ADS)

    Batyrev, I. G.; Coleman, S. P.; Stavrou, E.; Zaug, J. M.; Ciezak-Jenkins, J. A.

    2017-06-01

    We present results of evolutionary simulations based on density functional calculations of various extended solids, N-Si and N-H, using the variable and fixed concentration methods of USPEX. Structures predicted by the evolutionary simulations were analyzed in terms of thermodynamic stability and agreement with experimental X-ray diffraction spectra. The stability of the predicted systems was estimated from convex-hull plots. X-ray diffraction spectra were calculated using a virtual diffraction algorithm that computes kinematic diffraction intensity in three-dimensional reciprocal space before reducing it to a two-theta line profile. Calculations of thousands of XRD spectra were used to search for structures of extended solids at given pressures that best fit the experimental data in terms of XRD peak positions, peak intensities, and theoretically calculated enthalpy. Comparison of the Raman and IR spectra calculated for the best-fitting structures with available experimental data shows reasonable agreement for certain vibration modes. Part of this work was performed by LLNL, Contract DE-AC52-07NA27344. We thank the Joint DoD / DOE Munitions Technology Development Program, the HE C-II research program at LLNL and the Advanced Light Source, supported by BES DOE, Contract No. DE-AC02-05CH112.

  4. Precalculus teachers' perspectives on using graphing calculators: an example from one curriculum

    NASA Astrophysics Data System (ADS)

    Karadeniz, Ilyas; Thompson, Denisse R.

    2018-01-01

    Graphing calculators are hand-held technological tools currently used in mathematics classrooms. Teachers' perspectives on using graphing calculators are important in terms of exploring what teachers think about using such technology in advanced mathematics courses, particularly precalculus courses. A descriptive intrinsic case study was conducted to analyse the perspectives of 11 teachers using graphing calculators with potential Computer Algebra System (CAS) capability while teaching Functions, Statistics, and Trigonometry, a precalculus course for 11th-grade students developed by the University of Chicago School Mathematics Project. Data were collected from multiple sources as part of a curriculum evaluation study conducted during the 2007-2008 school year. Although all teachers were using the same curriculum that integrated CAS into the instructional materials, teachers had mixed views about the technology. Graphing calculator features were used much more than CAS features, with many teachers concerned about the use of CAS because of pressures from external assessments. In addition, several teachers found it overwhelming to learn a new technology at the same time they were learning a new curriculum. The results have implications for curriculum developers and others working with teachers to update curriculum and the use of advanced technologies simultaneously.

  5. Organ Dose-Rate Calculations for Small Mammals at Maralinga, the Nevada Test Site, Hanford and Fukushima: A Comparison of Ellipsoidal and Voxelized Dosimetric Methodologies.

    PubMed

    Caffrey, Emily A; Johansen, Mathew P; Higley, Kathryn A

    2015-10-01

    Radiological dosimetry for nonhuman biota typically relies on calculations that utilize the Monte Carlo simulations of simple, ellipsoidal geometries with internal radioactivity distributed homogeneously throughout. In this manner it is quick and easy to estimate whole-body dose rates to biota. Voxel models are detailed anatomical phantoms that were first used for calculating radiation dose to humans, which are now being extended to nonhuman biota dose calculations. However, if simple ellipsoidal models provide conservative dose-rate estimates, then the additional labor involved in creating voxel models may be unnecessary for most scenarios. Here we show that the ellipsoidal method provides conservative estimates of organ dose rates to small mammals. Organ dose rates were calculated for environmental source terms from Maralinga, the Nevada Test Site, Hanford and Fukushima using both the ellipsoidal and voxel techniques, and in all cases the ellipsoidal method yielded more conservative dose rates by factors of 1.2-1.4 for photons and 5.3 for beta particles. Dose rates for alpha-emitting radionuclides are identical for each method as full energy absorption in source tissue is assumed. The voxel procedure includes contributions to dose from organ-to-organ irradiation (shown here to comprise 2-50% of total dose from photons and 0-93% of total dose from beta particles) that is not specifically quantified in the ellipsoidal approach. Overall, the voxel models provide robust dosimetry for the nonhuman mammals considered in this study, and though the level of detail is likely extraneous to demonstrating regulatory compliance today, voxel models may nevertheless be advantageous in resolving ongoing questions regarding the effects of ionizing radiation on wildlife.

  6. Burden Calculator: a simple and open analytical tool for estimating the population burden of injuries.

    PubMed

    Bhalla, Kavi; Harrison, James E

    2016-04-01

    Burden of disease and injury methods can be used to summarise and compare the effects of conditions in terms of disability-adjusted life years (DALYs). Burden estimation methods are not inherently complex; however, as commonly implemented, they include complex modelling and estimation. The aim was to provide a simple and open-source software tool that allows estimation of incidence DALYs due to injury, given data on the incidence of deaths and non-fatal injuries. The tool includes a default set of estimation parameters, which can be replaced by users. The tool was written in Microsoft Excel. All calculations and values can be seen and altered by users. The parameter sets currently used in the tool are based on published sources. The tool is available without charge online at http://calculator.globalburdenofinjuries.org. To use the tool with the supplied parameter sets, users need only paste a table of population and injury case data organised by age, sex and external cause of injury into a specified location in the tool. Estimated DALYs can be read or copied from tables and figures in another part of the tool. In some contexts, a simple and user-modifiable burden calculator may be preferable to undertaking a more complex study to estimate the burden of disease. The tool and the parameter sets required for its use can be improved by user innovation, by studies comparing DALY estimates calculated in this way and in other ways, and by shared experience of its use.
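
    The spreadsheet arithmetic itself is not reproduced in this record, but the incidence-DALY bookkeeping such a tool automates follows the standard decomposition DALYs = YLL + YLD; a minimal sketch, with hypothetical parameter values rather than the tool's supplied parameter sets, is:

        # DALYs = years of life lost (YLL) + years lived with disability (YLD).
        # All parameter values below are hypothetical placeholders.
        def dalys(deaths, life_expectancy_at_death,
                  cases, disability_weight, avg_duration_years):
            yll = deaths * life_expectancy_at_death
            yld = cases * disability_weight * avg_duration_years
            return yll + yld

        # One age/sex/external-cause cell of the input table
        print(dalys(deaths=12, life_expectancy_at_death=40.0,
                    cases=300, disability_weight=0.2, avg_duration_years=0.5))
        # -> 510.0 DALYs for this cell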

  7. PWR Facility Dose Modeling Using MCNP5 and the CADIS/ADVANTG Variance-Reduction Methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blakeman, Edward D; Peplow, Douglas E.; Wagner, John C

    2007-09-01

    The feasibility of modeling a pressurized-water-reactor (PWR) facility and calculating dose rates at all locations within the containment and adjoining structures using MCNP5 with mesh tallies is presented. Calculations of dose rates resulting from neutron and photon sources from the reactor (operating and shut down for various periods) and the spent fuel pool, as well as for the photon source from the primary coolant loop, were all of interest. Identification of the PWR facility, development of the MCNP-based model and automation of the run process, calculation of the various sources, and development of methods for visually examining mesh tally files and extracting dose rates were all a significant part of the project. Advanced variance reduction, which was required because of the size of the model and the large amount of shielding, was performed via the CADIS/ADVANTG approach. This methodology uses an automatically generated three-dimensional discrete ordinates model to calculate adjoint fluxes from which MCNP weight windows and source bias parameters are generated. Investigative calculations were performed using a simple block model and a simplified full-scale model of the PWR containment, in which the adjoint source was placed in various regions. In general, it was shown that placement of the adjoint source on the periphery of the model provided adequate results for regions reasonably close to the source (e.g., within the containment structure for the reactor source). A modification to the CADIS/ADVANTG methodology was also studied in which a global adjoint source is weighted by the reciprocal of the dose response calculated by an earlier forward discrete ordinates calculation. This method showed improved results over those using the standard CADIS/ADVANTG approach, and its further investigation is recommended for future efforts.

  8. Study of dose calculation on breast brachytherapy using prism TPS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fendriani, Yoza; Haryanto, Freddy

    2015-09-30

    PRISM is a non-commercial Treatment Planning System (TPS) developed at the University of Washington. In Indonesia, many cancer hospitals use expensive commercial TPSs. This study aims to investigate the Prism TPS as applied to the dose distribution of brachytherapy, taking into account the effects of source position and inhomogeneities. The results will be applicable to a clinical Treatment Planning System. Dose calculation was implemented for a water phantom and for CT scan images of breast cancer using point and line sources, divided into two cases. In the first case, the Ir-192 seed source is located at the center of the treatment volume. In the second case, the source position is gradually changed. The dose calculation for every case was performed on a homogeneous and an inhomogeneous phantom with dimensions of 20 × 20 × 20 cm^3. The inhomogeneous phantom has an inhomogeneity volume of 2 × 2 × 2 cm^3. The dose calculations using the PRISM TPS were compared to literature data. The calculated dose rates show good agreement with the Plato TPS and with another study published by Ramdhani, with no deviations greater than ±4% for any case. Dose calculations in the inhomogeneous and homogeneous cases show similar results, indicating that the Prism TPS performs well for brachytherapy dose calculation but is not sensitive to inhomogeneities. The dose calculation parameters developed in this study were thus found to be applicable to clinical treatment planning of brachytherapy.

  9. Long-term chloride concentrations in North American and European freshwater lakes

    PubMed Central

    Dugan, Hilary A.; Summers, Jamie C.; Skaff, Nicholas K.; Krivak-Tetley, Flora E.; Doubek, Jonathan P.; Burke, Samantha M.; Bartlett, Sarah L.; Arvola, Lauri; Jarjanazi, Hamdi; Korponai, János; Kleeberg, Andreas; Monet, Ghislaine; Monteith, Don; Moore, Karen; Rogora, Michela; Hanson, Paul C.; Weathers, Kathleen C.

    2017-01-01

    Anthropogenic sources of chloride in a lake catchment, including road salt, fertilizer, and wastewater, can elevate the chloride concentration in freshwater lakes above background levels. Rising chloride concentrations can impact lake ecology and ecosystem services such as fisheries and the use of lakes as drinking water sources. To analyze the spatial extent and magnitude of increasing chloride concentrations in freshwater lakes, we amassed a database of 529 lakes in Europe and North America that had greater than or equal to ten years of chloride data. For each lake, we calculated climate statistics of mean annual total precipitation and mean monthly air temperatures from gridded global datasets. We also quantified land cover metrics, including road density and impervious surface, in buffer zones of 100 to 1,500 m surrounding the perimeter of each lake. This database represents the largest global collection of lake chloride data. We hope that long-term water quality measurements in areas outside Europe and North America can be added to the database as they become available in the future. PMID:28786983

  10. Long-term chloride concentrations in North American and European freshwater lakes.

    PubMed

    Dugan, Hilary A; Summers, Jamie C; Skaff, Nicholas K; Krivak-Tetley, Flora E; Doubek, Jonathan P; Burke, Samantha M; Bartlett, Sarah L; Arvola, Lauri; Jarjanazi, Hamdi; Korponai, János; Kleeberg, Andreas; Monet, Ghislaine; Monteith, Don; Moore, Karen; Rogora, Michela; Hanson, Paul C; Weathers, Kathleen C

    2017-08-08

    Anthropogenic sources of chloride in a lake catchment, including road salt, fertilizer, and wastewater, can elevate the chloride concentration in freshwater lakes above background levels. Rising chloride concentrations can impact lake ecology and ecosystem services such as fisheries and the use of lakes as drinking water sources. To analyze the spatial extent and magnitude of increasing chloride concentrations in freshwater lakes, we amassed a database of 529 lakes in Europe and North America that had greater than or equal to ten years of chloride data. For each lake, we calculated climate statistics of mean annual total precipitation and mean monthly air temperatures from gridded global datasets. We also quantified land cover metrics, including road density and impervious surface, in buffer zones of 100 to 1,500 m surrounding the perimeter of each lake. This database represents the largest global collection of lake chloride data. We hope that long-term water quality measurements in areas outside Europe and North America can be added to the database as they become available in the future.

  11. NSRD-15: Computational Capability to Substantiate DOE-HDBK-3010 Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Louie, David; Bignell, John; Dingreville, Remi Philippe Michel

    Safety basis analysts throughout the U.S. Department of Energy (DOE) complex rely heavily on the information provided in the DOE Handbook, DOE-HDBK-3010, Airborne Release Fractions/Rates and Respirable Fractions for Nonreactor Nuclear Facilities, to determine radionuclide source terms from postulated accident scenarios. In calculating source terms, analysts tend to use the DOE Handbook's bounding values on airborne release fractions (ARFs) and respirable fractions (RFs) for various categories of insults (representing potential accident release categories). This is typically due to both time constraints and the avoidance of regulatory critique. Unfortunately, these bounding ARFs/RFs represent extremely conservative values. Moreover, they were derived from very limited small-scale bench/laboratory experiments and/or from engineering judgment. Thus, the basis for the data may not be representative of the actual unique accident conditions and configurations being evaluated. The goal of this research is to develop a more accurate and defensible method to determine bounding values for the DOE Handbook using state-of-the-art multi-physics-based computer codes.

  12. System alignment using the Talbot effect

    NASA Astrophysics Data System (ADS)

    Chevallier, Raymond; Le Falher, Eric; Heggarty, Kevin

    1990-08-01

    The Talbot effect is utilized to correct an alignment problem related to a neural network used for image recognition, which requires the alignment of a spatial light modulator (SLM) with the input module. A mathematical model which employs Fresnel diffraction theory is presented to describe the method. The calculation of the diffracted amplitude describes the wavefront sphericity and the original object transmittance function in order to quantify the lateral shift of the Talbot image. Another explanation is set forth in terms of plane-wave illumination in the neural network. Using a Fourier series and by describing planes where all the harmonics are in phase, the reconstruction of Talbot images is explained. The alignment is effective when the lenslet array is aligned on the even Talbot images of the SLM pixels and the incident wave is a plane wave. The alignment is evaluated in terms of source and periodicity errors, tilt of the incident plane waves, and finite object dimensions. The effects of the error sources are concluded to be negligible, the lenslet array is shown to be successfully aligned with the SLM, and other alignment applications are shown to be possible.

  13. Validation of a virtual source model of medical linac for Monte Carlo dose calculation using multi-threaded Geant4.

    PubMed

    Aboulbanine, Zakaria; El Khayati, Naïma

    2018-04-13

    The use of phase space in medical linear accelerator Monte Carlo (MC) simulations significantly improves the execution time and leads to results comparable to those obtained from full calculations. The classical representation of phase space directly stores the information of millions of particles, producing bulky files. This paper presents a virtual source model (VSM) based on a reconstruction algorithm, taking as input a compressed file of roughly 800 kb derived from phase space data freely available in the International Atomic Energy Agency (IAEA) database. This VSM includes two main components, primary and scattered particle sources, with a specific reconstruction method developed for each. Energy spectra and other relevant variables were extracted from the IAEA phase space and stored in the input description data file for both sources. The VSM was validated for three photon beams: Elekta Precise 6 MV/10 MV and a Varian TrueBeam 6 MV. Extensive calculations in water and comparisons between dose distributions of the VSM and the IAEA phase space were performed to estimate the VSM precision. The Geant4 MC toolkit in multi-threaded mode (Geant4-[mt]) was used for fast dose calculations and optimized memory use. Four field configurations were chosen for dose calculation validation to test field size and symmetry effects: [Formula: see text] [Formula: see text], [Formula: see text] [Formula: see text], and [Formula: see text] [Formula: see text] for square fields, and [Formula: see text] [Formula: see text] for an asymmetric rectangular field. Good agreement in terms of the γ formalism, for 3%/3 mm and 2%/3 mm criteria, was obtained for each evaluated radiation field and photon beam, within a computation time of 60 h on a single workstation for a 3 mm voxel matrix. Analyzing the VSM's precision in high dose gradient regions using the distance-to-agreement (DTA) concept also showed satisfactory results: in all investigated cases, the mean DTA was less than 1 mm in the build-up and penumbra regions. In regard to calculation efficiency, the event processing speed is six times faster using Geant4-[mt] compared to sequential Geant4 when running the same simulation code for both. The developed VSM for the widely used 6 MV/10 MV beams is a general concept that is easy to adapt in order to reconstruct comparable beam qualities for various linac configurations, facilitating its integration for MC treatment planning purposes.
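
    The γ formalism invoked above for the 3%/3 mm and 2%/3 mm comparisons is the standard Low et al. criterion, stated here from general knowledge rather than from the paper: for each reference point r_r with dose D_r, the evaluated distribution D_e is searched for the best combined distance/dose match,

        \[ \gamma(\mathbf{r}_r) = \min_{\mathbf{r}_e} \sqrt{ \frac{\lVert \mathbf{r}_e - \mathbf{r}_r \rVert^2}{\delta r^2} + \frac{\left[ D_e(\mathbf{r}_e) - D_r(\mathbf{r}_r) \right]^2}{\delta D^2} }, \]

    with a point passing when γ ≤ 1 for the chosen tolerances (δD = 3% or 2%, δr = 3 mm).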

  14. Transition From Ideal To Viscous Mach Cones In A Partonic Transport Model

    NASA Astrophysics Data System (ADS)

    Bouras, I.; El, A.; Fochler, O.; Niemi, H.; Xu, Z.; Greiner, C.

    2013-09-01

    Using a partonic transport model, we investigate the evolution of conical structures in ultrarelativistic matter. Using two different source terms and varying the transport properties of the matter, we study the formation of Mach cones. In an additional study, we extract the two-particle correlations from the numerical calculations and compare them to an analytical approximation. The influence of viscosity on the shape of Mach cones and the corresponding two-particle correlations is studied by adjusting the cross section of the medium.

  15. SEMI-ANALYTIC CALCULATION OF THE TEMPERATURE DISTRIBUTION IN A PERFORATED CIRCLE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kennedy, J.M.; Fowler, J.K.

    The flow of heat in a tube-in-shell fuel element is closely related to the two-dimensional heat flow in a circular region perforated by a number of circular holes. Mathematical expressions for the two-dimensional temperature distribution were obtained in terms of sources and sinks of increasing complexity located within the holes and beyond the outer circle. A computer program, TINS, which solves the temperature problem for an array of one or two rings of holes, with or without a center hole, is also described. (auth)

  16. Swell Across the Continental Shelf

    DTIC Science & Technology

    2001-09-01

    Arlington, VA. The views expressed in this thesis are those of the author and do not reflect the official policy or position of the Department of Defense. ...the source term, while the effects of refraction and shoaling caused by sub-grid-scale depth variations are accurately taken into account through precomputed rays. This model can therefore be applied to vast coastal areas with grids

  17. Monte Carlo dose calculation in dental amalgam phantom

    PubMed Central

    Aziz, Mohd. Zahri Abdul; Yusoff, A. L.; Osman, N. D.; Abdullah, R.; Rabaie, N. A.; Salikin, M. S.

    2015-01-01

    It has become a great challenge in modern radiation treatment to ensure the accuracy of treatment delivery in electron beam therapy. Tissue inhomogeneity is one of the factors affecting accurate dose calculation, and accounting for it requires complex calculation algorithms such as Monte Carlo (MC). On the other hand, the computed tomography (CT) images used in the treatment planning system need to be trustworthy, as they are the input to radiotherapy treatment. However, with metal amalgam present in the treatment volume, the input CT images show prominent streak artefacts, which contribute sources of error to the dose calculation. A streak artifact reduction technique was therefore applied to correct the images, and as a result better images were observed in terms of structure delineation and density assignment. Furthermore, the amalgam density data were corrected to provide amalgam voxels with accurate density values. Using these strategies, the dose uncertainties due to metal amalgam were reduced from 46% to as low as 2% at d80 (the depth of the 80% dose beyond Zmax). Considering the number of vital and radiosensitive organs in the head and neck regions, this correction strategy is suggested for reducing calculation uncertainties in MC calculation. PMID:26500401

  18. An Algorithm for the Calculation of Exact Term Discrimination Values.

    ERIC Educational Resources Information Center

    Willett, Peter

    1985-01-01

    Reports an algorithm for the calculation of term discrimination values that is sufficiently fast in operation to permit the use of exact values. Evidence is presented to show that the relationship between term discrimination and term frequency is crucially dependent upon the type of inter-document similarity measure used for the calculation of the discrimination values. (13…
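
    The fast algorithm itself is not described in this abstract; the quantity it computes can, however, be defined by the brute-force calculation sketched below, where a term's discrimination value is the change in average pairwise document similarity when the term is removed (cosine similarity is one choice of the inter-document measure the abstract refers to, and the toy matrix is an arbitrary assumption):

        import numpy as np

        def discrimination_values(doc_term):
            """Exact term discrimination values by brute force over a
            document-term frequency matrix (docs x terms)."""
            def density(m):
                norms = np.linalg.norm(m, axis=1)
                norms[norms == 0] = 1.0          # guard all-zero documents
                sims = (m @ m.T) / np.outer(norms, norms)
                n = m.shape[0]
                # mean off-diagonal (inter-document) cosine similarity
                return (sims.sum() - np.trace(sims)) / (n * (n - 1))

            base = density(doc_term)
            # positive value: removing the term packs the collection tighter,
            # i.e. the term was a good discriminator
            return [density(np.delete(doc_term, t, axis=1)) - base
                    for t in range(doc_term.shape[1])]

        m = np.array([[2, 0, 1, 0], [0, 3, 1, 0], [1, 1, 1, 5]], float)
        print(discrimination_values(m))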

  19. Long-term monitoring of persistent organic pollutants (POPs) at the Norwegian Troll station in Dronning Maud Land, Antarctica

    NASA Astrophysics Data System (ADS)

    Kallenborn, R.; Breivik, K.; Eckhardt, S.; Lunder, C. R.; Manø, S.; Schlabach, M.; Stohl, A.

    2013-07-01

    A first long-term monitoring of selected persistent organic pollutants (POPs) in Antarctic air has been conducted at the Norwegian research station Troll (Dronning Maud Land). As target contaminants, 32 PCB congeners, α- and γ-hexachlorocyclohexane (HCH), trans- and cis-chlordane, trans- and cis-nonachlor, p,p'- and o,p-DDT, DDD, DDE as well as hexachlorobenzene (HCB) were selected. The monitoring program, with weekly samples taken during the period 2007-2010, was coordinated with the parallel program at the Norwegian Arctic monitoring site (Zeppelin mountain, Ny-Ålesund, Svalbard) in terms of priority compounds, sampling schedule and analytical methods. The POP concentration levels found in Antarctica were considerably lower than Arctic atmospheric background concentrations. Similar to observations for Arctic samples, HCB is the predominant POP compound, with levels of around 22 pg m^-3 throughout the entire monitoring period. In general, the following concentration distribution was found for the Troll samples analyzed: HCB > Sum HCH > Sum PCB > Sum DDT > Sum chlordanes. Atmospheric long-range transport was identified as a major contamination source for POPs in Antarctic environments. Several long-range transport events with elevated levels of pesticides and/or compounds with industrial sources were identified based on retroplume calculations with a Lagrangian particle dispersion model (FLEXPART).

  20. Useful integral function and its application in thermal radiation calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, S.L.; Rhee, K.T.

    1983-07-01

    In applying the Planck formula for computing the energy radiated from an isothermal source, the emissivity of the source must be found. This emissivity is expressed in terms of its spectral emissivity. The spectral emissivity of an isothermal volume with a given optical length containing radiating gases and/or soot is computed through a relation (Sparrow and Cess, 1978) that contains the optical length and the spectral volume absorption coefficient. An exact solution is then offered to the equation that results from introducing the equation for the spectral emissivity into the equation for the emissivity. The function obtained is shown to be useful in computing the spectral emissivity of an isothermal volume containing either soot or gaseous species, or both. Examples are presented.
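
    The relation referenced above from Sparrow and Cess, and the total emissivity built from it, take the standard forms

        \[ \varepsilon_\lambda = 1 - e^{-\kappa_\lambda L}, \qquad \varepsilon = \frac{1}{\sigma T^4} \int_0^\infty \left( 1 - e^{-\kappa_\lambda L} \right) E_{b\lambda}(T)\,d\lambda, \]

    where κ_λ is the spectral volume absorption coefficient, L the optical path length, and E_bλ the Planck blackbody spectral emissive power; the integral function discussed in the record addresses the evaluation of the second expression.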

  1. A new method of optimal capacitor switching based on minimum spanning tree theory in distribution systems

    NASA Astrophysics Data System (ADS)

    Li, H. W.; Pan, Z. Y.; Ren, Y. B.; Wang, J.; Gan, Y. L.; Zheng, Z. Z.; Wang, W.

    2018-03-01

    In accordance with the radial operating characteristics of distribution systems, this paper proposes a new method based on minimum spanning tree theory for optimal capacitor switching. First, taking minimal active power loss as the objective function and disregarding the capacity constraints of the capacitors and the source, the Prim algorithm (one of the minimum spanning tree algorithms) is used to obtain the power supply ranges of the capacitors and the source. Then, with the capacity constraints of the capacitors considered, the capacitors are ranked by breadth-first search. In order of this ranking, from high to low, the compensation capacity of each capacitor is calculated based on its power supply range. Finally, the IEEE 69-bus system is adopted to test the accuracy and practicality of the proposed algorithm.
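
    As an illustrative sketch of the first step, power supply ranges can be recovered by growing a Prim-style spanning forest from the source and capacitor nodes simultaneously (the node names, the edge weights standing in for line losses, and the multi-root handling are assumptions for illustration, not the paper's formulation or data):

        import heapq

        def prim_supply_ranges(adj, roots):
            """Grow a spanning forest from the given root nodes; each bus is
            assigned to the root whose tree reaches it first (cheapest edge),
            approximating that root's power supply range."""
            owner = {r: r for r in roots}
            heap = [(w, r, v) for r in roots for v, w in adj[r]]
            heapq.heapify(heap)
            while heap:
                w, u, v = heapq.heappop(heap)
                if v in owner:
                    continue
                owner[v] = owner[u]          # bus v joins the range of u's root
                for nxt, w2 in adj[v]:
                    if nxt not in owner:
                        heapq.heappush(heap, (w2, v, nxt))
            return owner

        # Toy radial feeder: "S" is the source bus, "C1" a capacitor bus
        adj = {
            "S":  [("n1", 1.0)],
            "n1": [("S", 1.0), ("n2", 2.0), ("C1", 1.5)],
            "n2": [("n1", 2.0)],
            "C1": [("n1", 1.5)],
        }
        print(prim_supply_ranges(adj, roots=["S", "C1"]))
        # -> {'S': 'S', 'C1': 'C1', 'n1': 'S', 'n2': 'S'}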

  2. Theoretical and experimental aspects of laser cutting with a direct diode laser

    NASA Astrophysics Data System (ADS)

    Costa Rodrigues, G.; Pencinovsky, J.; Cuypers, M.; Duflou, J. R.

    2014-10-01

    Recent developments in beam coupling techniques have made it possible to scale up the power of diode lasers with a beam quality suitable for laser cutting of metal sheets. In this paper a prototype Direct Diode Laser (DDL) source (BPP of 22 mm-mrad) is analyzed in terms of efficiency and cutting performance, and compared with two established technologies, CO2 and fiber lasers. An analytical model based on absorption calculations is used to predict the performance of the studied laser source, with good agreement with experimental results. Furthermore, results of fusion cutting of stainless steel and aluminium alloys as well as oxygen cutting of structural steel are presented, demonstrating that industrially relevant cutting speeds with high cutting quality can now be achieved with DDL.

  3. Probabilistic seismic hazard analysis for a nuclear power plant site in southeast Brazil

    NASA Astrophysics Data System (ADS)

    de Almeida, Andréia Abreu Diniz; Assumpção, Marcelo; Bommer, Julian J.; Drouet, Stéphane; Riccomini, Claudio; Prates, Carlos L. M.

    2018-05-01

    A site-specific probabilistic seismic hazard analysis (PSHA) has been performed for the only nuclear power plant site in Brazil, located 130 km southwest of Rio de Janeiro at Angra dos Reis. Logic trees were developed for both the seismic source characterisation and ground-motion characterisation models, in both cases seeking to capture the appreciable ranges of epistemic uncertainty with relatively few branches. This logic-tree structure allowed the hazard calculations to be performed efficiently while obtaining results that reflect the inevitable uncertainty in long-term seismic hazard assessment in this tectonically stable region. An innovative feature of the study is an additional seismic source zone added to capture the potential contributions of characteristic earthquakes associated with geological faults in the region surrounding the coastal site.
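    For readers unfamiliar with the hazard integral underlying any PSHA, a minimal single-zone sketch is given below (generic textbook PSHA, not the Angra dos Reis model): the annual exceedance rate is the activity rate times the magnitude-weighted exceedance probability from a ground-motion model. The Gutenberg-Richter parameters and the lognormal GMPE are hypothetical.

```python
import numpy as np
from scipy import stats

# Toy single-zone hazard integral: lambda(PGA > x) = nu * E_m[P(exceed | m)]
nu, b = 0.05, 1.0              # rate of M >= m_min events per year, G-R b-value
m_min, m_max = 5.0, 7.0
beta = b * np.log(10.0)

m = np.linspace(m_min, m_max, 400)
# Truncated-exponential (Gutenberg-Richter) magnitude density
f_m = beta * np.exp(-beta * (m - m_min)) / (1.0 - np.exp(-beta * (m_max - m_min)))

def p_exceed(x_g, m, r_km=30.0, sigma_ln=0.6):
    """Hypothetical lognormal GMPE: ln PGA[g] = -4.0 + 1.0*m - 1.5*ln(r+10)."""
    mu_ln = -4.0 + 1.0 * m - 1.5 * np.log(r_km + 10.0)
    return 1.0 - stats.norm.cdf((np.log(x_g) - mu_ln) / sigma_ln)

for x in (0.05, 0.10, 0.20):   # PGA thresholds in g
    g = p_exceed(x, m) * f_m
    lam = nu * np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(m))  # trapezoid rule
    print(f"lambda(PGA > {x:.2f} g) = {lam:.2e} per year")
```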

  4. Regional-Scale Differential Time Tomography Methods: Development and Application to the Sichuan, China, Dataset

    NASA Astrophysics Data System (ADS)

    Zhang, H.; Thurber, C.; Wang, W.; Roecker, S. W.

    2008-12-01

    We extended our recent development of double-difference seismic tomography [Zhang and Thurber, BSSA, 2003] to the use of station-pair residual differences in addition to event-pair residual differences. Tomography using station-pair residual differences is somewhat akin to teleseismic tomography, but with the sources contained within the model region. Synthetic tests show that inversion using both event- and station-pair residual differences has advantages in terms of more accurately recovering higher-resolution structure in both the source and receiver regions. We used the Spherical-Earth Finite-Difference (SEFD) travel time calculation method in the tomographic system. The basic concept is the extension of a standard Cartesian FD travel time algorithm [Vidale, 1990] to the spherical case by developing a mesh in radius, co-latitude, and longitude, expressing the FD derivatives in a form appropriate to the spherical mesh, and constructing a "stencil" to calculate extrapolated travel times. The SEFD travel time calculation method is more advantageous in dealing with heterogeneity and the sphericity of the Earth than the simple Earth-flattening transformation and the "sphere-in-a-box" approach [Flanagan et al., 2007]. We applied this method to the Sichuan, China, data set for the period 2001 to 2004. The Vp, Vs and Vp/Vs models show a clear contrast across the Longmenshan Fault, where the 2008 M8 Wenchuan earthquake initiated.
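    The SEFD scheme itself is spherical, but its Cartesian ancestor is easy to sketch: a first-order fast-marching solver that expands first-arrival times outward from a source with upwind finite-difference updates. The sketch below is a generic Cartesian stand-in (uniform grid, unit spacing), not the authors' spherical stencil.

```python
import heapq
import numpy as np

def first_arrivals(slowness, src, h=1.0):
    """First-arrival times on a 2D grid via a first-order fast-marching
    scheme (generic Cartesian sketch, not the spherical SEFD stencil)."""
    ny, nx = slowness.shape
    T = np.full((ny, nx), np.inf)
    done = np.zeros((ny, nx), dtype=bool)
    T[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if done[i, j]:
            continue
        done[i, j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if not (0 <= a < ny and 0 <= b < nx) or done[a, b]:
                continue
            tx = min(T[a, b-1] if b > 0 else np.inf,
                     T[a, b+1] if b < nx - 1 else np.inf)
            ty = min(T[a-1, b] if a > 0 else np.inf,
                     T[a+1, b] if a < ny - 1 else np.inf)
            s = slowness[a, b] * h
            if abs(tx - ty) < s:   # both upwind neighbors usable: 2D update
                t_new = 0.5 * (tx + ty + np.sqrt(2.0 * s * s - (tx - ty) ** 2))
            else:                  # fall back to the 1D update
                t_new = min(tx, ty) + s
            if t_new < T[a, b]:
                T[a, b] = t_new
                heapq.heappush(heap, (t_new, (a, b)))
    return T

T = first_arrivals(np.ones((50, 50)), src=(25, 25))
print(T[25, 45])  # ~20 for unit slowness, 20 cells away
```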

  5. On the development of a comprehensive MC simulation model for the Gamma Knife Perfexion radiosurgery unit

    NASA Astrophysics Data System (ADS)

    Pappas, E. P.; Moutsatsos, A.; Pantelis, E.; Zoros, E.; Georgiou, E.; Torrens, M.; Karaiskos, P.

    2016-02-01

    This work presents a comprehensive Monte Carlo (MC) simulation model for the Gamma Knife Perfexion (PFX) radiosurgery unit. Model-based dosimetry calculations were benchmarked in terms of relative dose profiles (RDPs) and output factors (OFs), against corresponding EBT2 measurements. To reduce the rather prolonged computational time associated with the comprehensive PFX model MC simulations, two approximations were explored and evaluated on the grounds of dosimetric accuracy. The first consists in directional biasing of the 60Co photon emission while the second refers to the implementation of simplified source geometric models. The effect of the dose scoring volume dimensions in OF calculations accuracy was also explored. RDP calculations for the comprehensive PFX model were found to be in agreement with corresponding EBT2 measurements. Output factors of 0.819  ±  0.004 and 0.8941  ±  0.0013 were calculated for the 4 mm and 8 mm collimator, respectively, which agree, within uncertainties, with corresponding EBT2 measurements and published experimental data. Volume averaging was found to affect OF results by more than 0.3% for scoring volume radii greater than 0.5 mm and 1.4 mm for the 4 mm and 8 mm collimators, respectively. Directional biasing of photon emission resulted in a time efficiency gain factor of up to 210 with respect to the isotropic photon emission. Although no considerable effect on relative dose profiles was detected, directional biasing led to OF overestimations which were more pronounced for the 4 mm collimator and increased with decreasing emission cone half-angle, reaching up to 6% for a 5° angle. Implementation of simplified source models revealed that omitting the sources’ stainless steel capsule significantly affects both OF results and relative dose profiles, while the aluminum-based bushing did not exhibit considerable dosimetric effect. In conclusion, the results of this work suggest that any PFX simulation model should be benchmarked in terms of both RDP and OF results.
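    The directional-biasing approximation examined above is easy to state in code: instead of emitting 60Co photons isotropically, sample directions only within a cone about the beam channel and carry the corresponding solid-angle fraction as a statistical weight. A minimal sketch (the cone half-angle and function name are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_cone(n, half_angle_deg):
    """Sample n unit vectors uniformly within a cone about +z. Each carries
    weight = cone solid angle / (4*pi) so weighted tallies stay unbiased."""
    alpha = np.radians(half_angle_deg)
    cos_t = rng.uniform(np.cos(alpha), 1.0, n)   # uniform in cos(theta)
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    sin_t = np.sqrt(1.0 - cos_t**2)
    dirs = np.column_stack((sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t))
    weight = 0.5 * (1.0 - np.cos(alpha))         # fraction of the full sphere
    return dirs, weight

dirs, w = sample_cone(100_000, half_angle_deg=5.0)
print(w)  # ~0.0019: only ~0.2% of isotropic emissions enter a 5-degree cone
```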

  6. Modeling the Capacitive Deionization Process in Dual-Porosity Electrodes

    DOE PAGES

    Gabitto, Jorge; Tsouris, Costas

    2016-04-28

    In many areas of the world, there is a need to increase water availability. Capacitive deionization (CDI) is an electrochemical water treatment process that can be a viable alternative for treating water and saving energy. A model is presented to simulate the CDI process in heterogeneous porous media comprising two different pore sizes. It is based on a theory for capacitive charging by ideally polarizable porous electrodes without Faradaic reactions or specific adsorption of ions. A two-step volume averaging technique is used to derive the averaged transport equations in the limit of thin electrical double layers. A one-equation model based on the principle of local equilibrium is derived. The constraints determining the range of application of the one-equation model are presented. The effective transport parameters for isotropic porous media are calculated by solving the corresponding closure problems. The source terms that appear in the averaged equations are calculated using theoretical derivations. The global diffusivity is calculated by solving the closure problem.

  7. Long-term variability in sugarcane bagasse feedstock compositional methods: Sources and magnitude of analytical variability

    DOE PAGES

    Templeton, David W.; Sluiter, Justin B.; Sluiter, Amie; ...

    2016-10-18

    In an effort to find economical, carbon-neutral transportation fuels, biomass feedstock compositional analysis methods are used to monitor, compare, and improve biofuel conversion processes. These methods are empirical, and the analytical variability seen in the feedstock compositional data propagates into variability in the conversion yields, component balances, mass balances, and ultimately the minimum ethanol selling price (MESP). We report the average composition and standard deviations of 119 individually extracted National Institute of Standards and Technology (NIST) bagasse [Reference Material (RM) 8491] run by seven analysts over 7 years. Two additional datasets, using bulk-extracted bagasse (containing 58 and 291 replicates each), were examined to separate out the effects of batch, analyst, sugar recovery standard calculation method, and extractions from the total analytical variability seen in the individually extracted dataset. We believe this is the world's largest NIST bagasse compositional analysis dataset and it provides unique insight into the long-term analytical variability. Understanding the long-term variability of the feedstock analysis will help determine the minimum difference that can be detected in yield, mass balance, and efficiency calculations. The long-term data show consistent bagasse component values through time and by different analysts. This suggests that the standard compositional analysis methods were performed consistently and that the bagasse RM itself remained unchanged during this time period. The long-term variability seen here is generally higher than short-term variabilities. It is worth noting that the effect of short-term or long-term feedstock compositional variability on MESP is small, about $0.03 per gallon. The long-term analysis variabilities reported here are plausible minimum values for these methods, though not necessarily average or expected variabilities. We must emphasize the importance of training and good analytical procedures needed to generate this data. As a result, when combined with a robust QA/QC oversight protocol, these empirical methods can be relied upon to generate high-quality data over a long period of time.

  8. Long-term variability in sugarcane bagasse feedstock compositional methods: Sources and magnitude of analytical variability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Templeton, David W.; Sluiter, Justin B.; Sluiter, Amie

    In an effort to find economical, carbon-neutral transportation fuels, biomass feedstock compositional analysis methods are used to monitor, compare, and improve biofuel conversion processes. These methods are empirical, and the analytical variability seen in the feedstock compositional data propagates into variability in the conversion yields, component balances, mass balances, and ultimately the minimum ethanol selling price (MESP). We report the average composition and standard deviations of 119 individually extracted National Institute of Standards and Technology (NIST) bagasse [Reference Material (RM) 8491] run by seven analysts over 7 years. Two additional datasets, using bulk-extracted bagasse (containing 58 and 291 replicates each), were examined to separate out the effects of batch, analyst, sugar recovery standard calculation method, and extractions from the total analytical variability seen in the individually extracted dataset. We believe this is the world's largest NIST bagasse compositional analysis dataset and it provides unique insight into the long-term analytical variability. Understanding the long-term variability of the feedstock analysis will help determine the minimum difference that can be detected in yield, mass balance, and efficiency calculations. The long-term data show consistent bagasse component values through time and by different analysts. This suggests that the standard compositional analysis methods were performed consistently and that the bagasse RM itself remained unchanged during this time period. The long-term variability seen here is generally higher than short-term variabilities. It is worth noting that the effect of short-term or long-term feedstock compositional variability on MESP is small, about $0.03 per gallon. The long-term analysis variabilities reported here are plausible minimum values for these methods, though not necessarily average or expected variabilities. We must emphasize the importance of training and good analytical procedures needed to generate this data. As a result, when combined with a robust QA/QC oversight protocol, these empirical methods can be relied upon to generate high-quality data over a long period of time.

  9. CASSIA--a dynamic model for predicting intra-annual sink demand and interannual growth variation in Scots pine.

    PubMed

    Schiestl-Aalto, Pauliina; Kulmala, Liisa; Mäkinen, Harri; Nikinmaa, Eero; Mäkelä, Annikki

    2015-04-01

    Whether tree growth responses to the environment are controlled by carbon sources or carbon sinks remains unresolved, although it is widely studied. This study investigates growth of tree components and carbon sink-source dynamics at different temporal scales. We constructed a dynamic growth model, 'carbon allocation sink source interaction' (CASSIA), that calculates the tree-level carbon balance from photosynthesis, respiration, phenology and temperature-driven potential structural growth of tree organs, together with the dynamics of stored nonstructural carbon (NSC) and their modifying influence on growth. With the model, we tested the hypotheses that sink demand explains the intra-annual growth dynamics of the meristems, and that source supply is further needed to explain year-to-year growth variation. The predicted intra-annual dimensional growth of shoots and needles and the number of cells in xylogenesis phases corresponded with measurements, whereas NSC hardly limited growth, supporting the first hypothesis. A delayed GPP influence on potential growth was necessary for simulating the yearly growth variation, indicating at least an indirect source limitation. CASSIA combines seasonal growth and carbon balance dynamics with long-term source dynamics affecting growth and thus provides a first step towards understanding the complex processes regulating intra- and interannual growth and sink-source dynamics. © 2015 The Authors. New Phytologist © 2015 New Phytologist Trust.

  10. Neutron dose measurements of Varian and Elekta linacs by TLD600 and TLD700 dosimeters and comparison with MCNP calculations

    PubMed Central

    Nedaie, Hassan Ali; Darestani, Hoda; Banaee, Nooshin; Shagholi, Negin; Mohammadi, Kheirollah; Shahvar, Arjang; Bayat, Esmaeel

    2014-01-01

    High-energy linacs produce secondary particles such as neutrons (photoneutron production). These neutrons play an important role during treatment with high-energy photons, in terms of both radiation protection and dose escalation. In this work, neutron dose equivalents of 18 MV Varian and Elekta accelerators were measured with thermoluminescent dosimeter (TLD) 600 and TLD700 detectors and compared with Monte Carlo calculations. For neutron and photon dose discrimination, the TLDs were first calibrated separately with gamma and neutron doses. Gamma calibration was carried out by two procedures: with a standard 60Co source and with an 18 MV linac photon beam. For neutron calibration with a 241Am-Be source, irradiations were performed over several different time intervals. The Varian and Elekta linac heads and the phantom were simulated with the MCNPX code (v. 2.5). Neutron dose equivalent was calculated on the central axis, at the phantom surface and at depths of 1, 2, 3.3, 4, 5, and 6 cm. The maximum photoneutron dose equivalents calculated by the MCNPX code were 7.06 and 2.37 mSv.Gy-1 for the Varian and Elekta accelerators, respectively, compared with 50 and 44 mSv.Gy-1 obtained with the TLDs. All the results showed more photoneutron production in the Varian accelerator than in the Elekta. The results suggest that TLD600 and TLD700 pairs are not suitable dosimeters for neutron dosimetry inside the linac field, owing to the high photon flux, while the MCNPX code is an appropriate alternative for studying photoneutron production. PMID:24600167

  11. Neutron dose measurements of Varian and Elekta linacs by TLD600 and TLD700 dosimeters and comparison with MCNP calculations.

    PubMed

    Nedaie, Hassan Ali; Darestani, Hoda; Banaee, Nooshin; Shagholi, Negin; Mohammadi, Kheirollah; Shahvar, Arjang; Bayat, Esmaeel

    2014-01-01

    High-energy linacs produce secondary particles such as neutrons (photoneutron production). These neutrons play an important role during treatment with high-energy photons, in terms of both radiation protection and dose escalation. In this work, neutron dose equivalents of 18 MV Varian and Elekta accelerators were measured with thermoluminescent dosimeter (TLD) 600 and TLD700 detectors and compared with Monte Carlo calculations. For neutron and photon dose discrimination, the TLDs were first calibrated separately with gamma and neutron doses. Gamma calibration was carried out by two procedures: with a standard 60Co source and with an 18 MV linac photon beam. For neutron calibration with a (241)Am-Be source, irradiations were performed over several different time intervals. The Varian and Elekta linac heads and the phantom were simulated with the MCNPX code (v. 2.5). Neutron dose equivalent was calculated on the central axis, at the phantom surface and at depths of 1, 2, 3.3, 4, 5, and 6 cm. The maximum photoneutron dose equivalents calculated by the MCNPX code were 7.06 and 2.37 mSv.Gy(-1) for the Varian and Elekta accelerators, respectively, compared with 50 and 44 mSv.Gy(-1) obtained with the TLDs. All the results showed more photoneutron production in the Varian accelerator than in the Elekta. The results suggest that TLD600 and TLD700 pairs are not suitable dosimeters for neutron dosimetry inside the linac field, owing to the high photon flux, while the MCNPX code is an appropriate alternative for studying photoneutron production.

  12. Source-receptor matrix calculation with a Lagrangian particle dispersion model in backward mode

    NASA Astrophysics Data System (ADS)

    Seibert, P.; Frank, A.

    2003-08-01

    The possibility to calculate linear source-receptor relationships for the transport of atmospheric trace substances with a Lagrangian particle dispersion model (LPDM) running in backward mode is demonstrated with many tests and examples. The derivation includes the action of sources and of any first-order processes (transformation with prescribed rates, dry and wet deposition, radioactive decay, ...). The backward mode is computationally advantageous if the number of receptors is smaller than the number of sources considered. The combination of an LPDM with the backward (adjoint) methodology is especially attractive for application to point measurements, which can be handled without artificial numerical diffusion. Practical hints are provided for source-receptor calculations with different settings, both in forward and backward mode. The equivalence of forward and backward calculations is shown in simple tests for release and sampling of particles, pure wet deposition, pure convective redistribution and realistic transport over a short distance. Furthermore, an application example explaining measurements of Cs-137 in Stockholm as transport from areas contaminated heavily in the Chernobyl disaster is included.
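    The bookkeeping behind the title is worth spelling out. A minimal sketch with a synthetic sensitivity matrix (all numbers hypothetical): forward runs fill columns of the source-receptor matrix, backward runs fill rows, and either way receptor values follow by linearity.

```python
import numpy as np

# Hypothetical source-receptor matrix M [receptors x sources]:
# M[i, j] = concentration at receptor i per unit emission from source j.
# Forward mode: one model run per source fills a COLUMN of M.
# Backward (adjoint) mode: one run per receptor fills a ROW of M,
# which is cheaper whenever receptors are fewer than sources.
M = np.array([[2.1e-9, 0.0,    4.0e-10],
              [7.5e-10, 1.3e-9, 0.0]])       # 2 receptors, 3 sources

s = np.array([5.0e3, 2.0e3, 8.0e3])          # source emissions
y = M @ s                                     # receptor concentrations
print(y)
```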

  13. Comparison of TLD calibration methods for 192Ir dosimetry

    PubMed Central

    Butler, Duncan J.; Wilfert, Lisa; Ebert, Martin A.; Todd, Stephen P.; Hayton, Anna J.M.; Kron, Tomas

    2013-01-01

    For the purpose of dose measurement using a high-dose rate 192Ir source, four methods of thermoluminescent dosimeter (TLD) calibration were investigated. Three of the four calibration methods used the 192Ir source. Dwell times were calculated to deliver 1 Gy to the TLDs irradiated either in air or water. Dwell time calculations were confirmed by direct measurement using an ionization chamber. The fourth method of calibration used 6 MV photons from a medical linear accelerator, and an energy correction factor was applied to account for the difference in sensitivity of the TLDs in 192Ir and 6 MV. The results of the four TLD calibration methods are presented in terms of the results of a brachytherapy audit where seven Australian centers irradiated three sets of TLDs in a water phantom. The results were in agreement within estimated uncertainties when the TLDs were calibrated with the 192Ir source. Calibrating TLDs in a phantom similar to that used for the audit proved to be the most practical method and provided the greatest confidence in measured dose. When calibrated using 6 MV photons, the TLD results were consistently higher than the 192Ir-calibrated TLDs, suggesting this method does not fully correct for the response of the TLDs when irradiated in the audit phantom. PACS number: 87 PMID:23318392

  14. Reconstructing Seasonal Range Expansion of the Tropical Butterfly, Heliconius charithonia, into Texas Using Historical Records

    PubMed Central

    Cardoso, Márcio Zikán

    2010-01-01

    While butterfly responses to climate change are well studied, detailed analyses of the seasonal dynamics of range expansion are few. Therefore, the seasonal range expansion of the butterfly Heliconius charithonia L. (Lepidoptera: Nymphalidae) was analyzed using a database of sightings and collection records dating from 1884 to 1992 from Texas. First and last sightings for each year were noted, and residency time calculated, for each collection locality. To test whether sighting dates were a consequence of distance from source (defined as the southernmost location of permanent residence), the distance between source and other locations was calculated. Additionally, consistent directional change over time of arrival dates was tested in a well-sampled area (San Antonio). Also, correlations between temperature, rainfall, and butterfly distribution were tested to determine whether butterfly sightings were influenced by climate. Both arrival date and residency interval were influenced by distance from source: butterflies arrived later and residency time was shorter at more distant locations. Butterfly occurrence was correlated with temperature but not rainfall. Residency time was also correlated with temperature but not rainfall. Since temperature follows a north-south gradient this may explain the inverse relationship between residency and distance from entry point. No long-term directional change in arrival dates was found in San Antonio. The biological meaning of these findings is discussed suggesting that naturalist notes can be a useful tool in reconstructing spatial dynamics. PMID:20672989

  15. Very Large Array OH Zeeman Observations of the Star-forming Region S88B

    NASA Astrophysics Data System (ADS)

    Sarma, A. P.; Brogan, C. L.; Bourke, T. L.; Eftimova, M.; Troland, T. H.

    2013-04-01

    We present observations of the Zeeman effect in OH thermal absorption main lines at 1665 and 1667 MHz taken with the Very Large Array toward the star-forming region S88B. The OH absorption profiles toward this source are complicated, and contain several blended components toward a number of positions. Almost all of the OH absorbing gas is located in the eastern parts of S88B, toward the compact continuum source S88B-2 and the eastern parts of the extended continuum source S88B-1. The ratio of 1665/1667 MHz OH line intensities indicates the gas is likely highly clumped, in agreement with other molecular emission line observations in the literature. S88B appears to present a similar geometry to the well-known star-forming region M17, in that there is an edge-on eastward progression from ionized to molecular gas. The detected magnetic fields appear to mirror this eastward transition; we detected line-of-sight magnetic fields ranging from 90 to 400 μG, with the lowest values of the field to the southwest of the S88B-1 continuum peak, and the highest values to its northeast. We used the detected fields to assess the importance of the magnetic field in S88B by a number of methods; we calculated the ratio of thermal to magnetic pressures, we calculated the critical field necessary to completely support the cloud against self-gravity and compared it to the observed field, and we calculated the ratio of mass to magnetic flux in terms of the critical value of this parameter. All these methods indicated that the magnetic field in S88B is dynamically significant, and should provide an important source of support against gravity. Moreover, the magnetic energy density is in approximate equipartition with the turbulent energy density, again pointing to the importance of the magnetic field in this region.
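    As a worked illustration of two of the diagnostics mentioned (not the S88B numbers themselves), the sketch below evaluates the thermal-to-magnetic pressure ratio (plasma beta) and the normalized mass-to-flux ratio, the latter using the commonly quoted observational form lambda = 7.6e-21 N(H2)/B of Crutcher (2004); the input values are illustrative assumptions.

```python
import numpy as np

k_B = 1.380649e-16   # Boltzmann constant [erg/K], cgs

def plasma_beta(n_cm3, T_K, B_uG):
    """Thermal-to-magnetic pressure ratio, 8*pi*n*k*T / B^2 (cgs)."""
    B_gauss = B_uG * 1e-6
    return 8.0 * np.pi * n_cm3 * k_B * T_K / B_gauss**2

def mass_to_flux_lambda(N_H2_cm2, B_uG):
    """Normalized mass-to-flux ratio lambda = (M/Phi)/(M/Phi)_crit,
    observational form of Crutcher (2004)."""
    return 7.6e-21 * N_H2_cm2 / B_uG

# Illustrative molecular-cloud numbers (not the S88B measurements)
print(plasma_beta(n_cm3=1e5, T_K=30.0, B_uG=300.0))     # ~0.1: B dominates
print(mass_to_flux_lambda(N_H2_cm2=5e22, B_uG=300.0))   # ~1.3: near-critical
```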

  16. Tracing oxidative weathering from the Andes to the lowland Amazon Basin using dissolved rhenium

    NASA Astrophysics Data System (ADS)

    Dellinger, M.; Hilton, R. G.; West, A. J.; Torres, M.; Burton, K. W.; Clark, K. E.; Baronas, J. J.

    2016-12-01

    Over long timescales (>10^5 yr), the abundance of carbon dioxide (CO2) in the atmosphere is determined by the balance of the major carbon sources and sinks. Among the major carbon sources, the oxidation of organic carbon contained within sedimentary rocks ("petrogenic" carbon, or OCpetro) is thought to result in CO2 emission of similar magnitude to that released by volcanism. Rhenium (Re) has been proposed as a proxy for tracing OCpetro oxidation. Here we investigate the source, behavior and flux of dissolved and particulate rhenium (Re) in the Madre de Dios watershed (a major Andean tributary of the Amazon River) and the lowlands, aiming to characterize the behavior of Re in river water and quantify the flux of CO2 released by OCpetro oxidation. Measured Re concentrations in Andean rivers range from 0.07 to 1.55 ppt. In the Andes, Re concentrations do not change significantly with water discharge, whereas in the lowlands, Re concentrations decrease at high water discharge. Mass balance calculations show that more than 70% of the dissolved Re is sourced from the oxidation of OCpetro in the Andes-floodplain system. We calculate the dissolved Re flux over a hydrological year to estimate the rates of oxidative weathering and the associated CO2 release from OCpetro. Rates are high in the Andean headwaters, consistent with estimates from other mountain rivers with similar rates of physical erosion. We find evidence that a significant amount of additional oxidation (Re flux) occurs during floodplain transport. These results have important implications for improving our understanding of the sources and processes controlling Re in rivers, allowing us to quantify long-term OCpetro cycling in large river basins.

  17. A multiphase interfacial model for the dissolution of spent nuclear fuel

    NASA Astrophysics Data System (ADS)

    Jerden, James L.; Frey, Kurt; Ebert, William

    2015-07-01

    The Fuel Matrix Dissolution Model (FMDM) is an electrochemical reaction/diffusion model for the dissolution of spent uranium oxide fuel. The model was developed to provide radionuclide source terms for use in performance assessment calculations for various types of geologic repositories. It is based on mixed potential theory and consists of a two-phase fuel surface made up of UO2 and a noble metal bearing fission product phase (NMP) in contact with groundwater. The corrosion potential at the surface of the dissolving fuel is calculated by balancing cathodic and anodic reactions occurring at the solution interfaces with the UO2 and NMP surfaces. Dissolved oxygen and hydrogen peroxide generated by radiolysis of the groundwater are the major oxidizing agents that promote fuel dissolution. Several reactions occurring on noble metal alloy surfaces are electrically coupled to the UO2 and can catalyze or inhibit oxidative dissolution of the fuel. The most important of these is the oxidation of hydrogen, which counteracts the effects of oxidants (primarily H2O2 and O2). Inclusion of this reaction greatly decreases the oxidation of U(IV) and slows fuel dissolution significantly. In addition to radiolytic hydrogen, large quantities of hydrogen can be produced by the anoxic corrosion of steel structures within and near the fuel waste package. The model accurately predicts key experimental trends seen in literature data, the most important being the dramatic depression of the fuel dissolution rate by the presence of dissolved hydrogen at even relatively low concentrations (e.g., less than 1 mM). This hydrogen effect counteracts oxidation reactions and can limit fuel degradation to chemical dissolution, which results in radionuclide source term values that are four or five orders of magnitude lower than when oxidative dissolution processes are operative. This paper presents the scientific basis of the model, the approach for modeling used fuel in a disposal system, and preliminary calculations to demonstrate the application and value of the model.

  18. The Scaling of Broadband Shock-Associated Noise with Increasing Temperature

    NASA Technical Reports Server (NTRS)

    Miller, Steven A. E.

    2013-01-01

    A physical explanation for the saturation of broadband shock-associated noise (BBSAN) intensity with increasing jet stagnation temperature has eluded investigators. An explanation is proposed for this phenomenon with the use of an acoustic analogy. To isolate the relevant physics, the scaling of BBSAN peak intensity level at the sideline observer location is examined. The equivalent source within the framework of an acoustic analogy for BBSAN is based on local field quantities at shock wave-shear layer interactions. The equivalent source combined with accurate calculations of the propagation of sound through the jet shear layer, using an adjoint vector Green's function solver of the linearized Euler equations, allows for predictions that retain the scaling with respect to stagnation pressure and allows for saturation of BBSAN with increasing stagnation temperature. The sources and vector Green's function have arguments involving the steady Reynolds-Averaged Navier-Stokes solution of the jet. It is proposed that saturation of BBSAN with increasing jet temperature occurs due to a balance between the amplification of the sound propagation through the shear layer and the source term scaling.

  19. Occurrence and risk assessment of potentially toxic elements and typical organic pollutants in contaminated rural soils.

    PubMed

    Xu, Yongfeng; Dai, Shixiang; Meng, Ke; Wang, Yuting; Ren, Wenjie; Zhao, Ling; Christie, Peter; Teng, Ying

    2018-07-15

    The residual levels and risk assessment of several potentially toxic elements (PTEs), phthalate esters (PAEs) and polycyclic aromatic hydrocarbons (PAHs) in rural soils near different types of pollution sources in Tianjin, China, were studied. The soils were found to be polluted to different extents with PTEs, PAEs and PAHs from different pollution sources. The soil concentrations of chromium (Cr), nickel (Ni), di-n-butyl phthalate (DnBP), acenaphthylene (Any) and acenaphthene (Ane) were higher than their corresponding regulatory reference limits. The health risk assessment model used to calculate human exposure indicates that both non-carcinogenic and carcinogenic risks from selected pollutants were generally acceptable or close to acceptable. Different types of pollution sources and soil physicochemical properties substantially affected the soil residual concentrations of and risks from these pollutants. PTEs in soils collected from agricultural lands around industrial and residential areas and organic pollutants (PAEs and PAHs) in soils collected from agricultural areas around livestock breeding were higher than those from other types of pollution sources and merit long-term monitoring. Copyright © 2018 Elsevier B.V. All rights reserved.
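    The screening arithmetic behind such assessments follows standard exposure-handbook equations; a minimal sketch for incidental soil ingestion is below. The intake, body-weight, and reference-dose values are generic illustrative defaults, not the parameters used in the Tianjin study.

```python
def add_mg_kg_day(c_soil_mg_kg, ir_mg_day=100.0, ef_day_yr=350.0,
                  ed_yr=24.0, bw_kg=60.0, at_days=24.0 * 365.0):
    """Average daily dose from incidental soil ingestion [mg/kg-day]."""
    return (c_soil_mg_kg * ir_mg_day * 1e-6 * ef_day_yr * ed_yr) / (bw_kg * at_days)

c_cr = 95.0      # hypothetical soil chromium concentration [mg/kg]
rfd_cr = 3e-3    # oral reference dose [mg/kg-day], illustrative
hq = add_mg_kg_day(c_cr) / rfd_cr
print(f"hazard quotient = {hq:.3f}")  # HQ < 1: non-carcinogenic risk acceptable
```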

  20. NASA Glenn Coefficients for Calculating Thermodynamic Properties of Individual Species

    NASA Technical Reports Server (NTRS)

    McBride, Bonnie J.; Zehe, Michael J.; Gordon, Sanford

    2002-01-01

    This report documents the library of thermodynamic data used with the NASA Glenn computer program CEA (Chemical Equilibrium with Applications). This library, containing data for over 2000 solid, liquid, and gaseous chemical species for temperatures ranging from 200 to 20,000 K, is available for use with other computer codes as well. The data are expressed as least-squares coefficients to a seven-term functional form for Cp°(T)/R with integration constants for H°(T)/RT and S°(T)/R. The NASA Glenn computer program PAC (Properties and Coefficients) was used to calculate thermodynamic functions and to generate the least-squares coefficients. PAC input was taken from a variety of sources. A complete listing of the database is given along with a summary of thermodynamic properties at 0 and 298.15 K.
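    The seven-term form and its integrals can be evaluated directly once the coefficients are read from the library. A minimal sketch of the three functions (the coefficient vector below is a placeholder chosen so that Cp/R = 3.5, not a real library entry):

```python
import numpy as np

def cp_R(T, a):
    """Cp0/R from the NASA Glenn seven-coefficient functional form."""
    return (a[0] / T**2 + a[1] / T + a[2] + a[3] * T
            + a[4] * T**2 + a[5] * T**3 + a[6] * T**4)

def h_RT(T, a, b1):
    """H0/RT: term-by-term integral of Cp0/R plus integration constant b1/T."""
    return (-a[0] / T**2 + a[1] * np.log(T) / T + a[2] + a[3] * T / 2
            + a[4] * T**2 / 3 + a[5] * T**3 / 4 + a[6] * T**4 / 5 + b1 / T)

def s_R(T, a, b2):
    """S0/R: integral of (Cp0/R)/T plus integration constant b2."""
    return (-a[0] / (2 * T**2) - a[1] / T + a[2] * np.log(T) + a[3] * T
            + a[4] * T**2 / 2 + a[5] * T**3 / 3 + a[6] * T**4 / 4 + b2)

# Placeholder coefficients (rigid diatomic ideal gas, Cp/R = 3.5); real
# values must be read from the CEA library for the species and T range.
a = [0.0, 0.0, 3.5, 0.0, 0.0, 0.0, 0.0]
print(cp_R(1000.0, a))   # -> 3.5
```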

  1. Infrared spectra of rotating protostars

    NASA Technical Reports Server (NTRS)

    Adams, F. C.; Shu, F. H.

    1986-01-01

    Earlier calculations of the infrared emission expected from stars in the process of formation are corrected and generalized to include the most important observable effects of rotation. An improved version of the spherical model of a previous paper is developed, and the corresponding emergent spectral energy distributions are calculated for the theoretically expected mass infall rate in the cores of cool and quiescent molecular clouds. The dust grain opacity model and the temperature profile parameterization are improved. It is shown that the infrared spectrum of the IRAS source 04264+2426, which is associated with a Herbig-Haro object, can be adequately represented in terms of a rotating and accreting protostar. This strengthens the suggestion that collimated outflows in young stellar objects originate when a stellar wind tries to emerge and reverse the swirling pattern of infall which gave birth to the central star.

  2. The effect of clouds on photolysis rates and ozone formation in the unpolluted troposphere

    NASA Technical Reports Server (NTRS)

    Thompson, A. M.

    1984-01-01

    The photochemistry of the lower atmosphere is sensitive to short- and long-term meteorological effects; accurate modeling therefore requires photolysis rates for trace gases which reflect this variability. As an example, the influence of clouds on the production of tropospheric ozone has been investigated, using a modification of Luther's two-stream radiation scheme to calculate cloud-perturbed photolysis rates in a one-dimensional photochemical transport model. In the unpolluted troposphere, where stratospheric inputs of odd nitrogen appear to represent the photochemical source of O3, strong cloud reflectance increases the concentration of NO in the upper troposphere, leading to greatly enhanced rates of ozone formation. Although the rate of these processes is too slow to verify by observation, the calculation is useful in distinguishing some features of the chemistry of regions of differing mean cloudiness.

  3. Indices of soil contamination by heavy metals - methodology of calculation for pollution assessment (minireview).

    PubMed

    Weissmannová, Helena Doležalová; Pavlovský, Jiří

    2017-11-07

    This article reviews the assessment of heavy metal soil pollution using the calculation of various pollution indices, and also summarizes the sources of heavy metal soil pollution. The twenty indices described fall into two groups: single indices and total (complex) indices of pollution or contamination, with corresponding pollution classes. The minireview also classifies the pollution indices in terms of the complex assessment of soil quality. In addition, a comparison of metal concentrations in soils at selected sites worldwide, together with the indices applied, shows that heavy metal concentrations in contaminated soils vary widely, and the pollution indices confirm a significant contribution to soil pollution from anthropogenic activities, mainly in urban and industrial areas.
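    Two of the most common single and complex indices are easy to state in code: the Mueller geoaccumulation index and the pollution load index (geometric mean of contamination factors). The concentrations and background values below are hypothetical.

```python
import numpy as np

def igeo(c, background):
    """Mueller geoaccumulation index: Igeo = log2(Cn / (1.5 * Bn))."""
    return np.log2(c / (1.5 * background))

def pli(c, background):
    """Pollution load index: geometric mean of contamination factors Cn/Bn."""
    cf = np.asarray(c) / np.asarray(background)
    return cf.prod() ** (1.0 / len(cf))

# Hypothetical topsoil concentrations vs. geochemical background [mg/kg]
metals     = ["Cr", "Ni", "Pb", "Zn"]
measured   = [95.0, 60.0, 48.0, 210.0]
background = [90.0, 68.0, 20.0, 95.0]

for metal, c, b in zip(metals, measured, background):
    print(metal, round(igeo(c, b), 2))
print("PLI:", round(pli(measured, background), 2))
```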

  4. Potential health risks from postulated accidents involving the Pu-238 RTG on the Ulysses solar exploration mission

    NASA Technical Reports Server (NTRS)

    Goldman, Marvin; Hoover, Mark D.; Nelson, Robert C.; Templeton, William; Bollinger, Lance; Anspaugh, Lynn

    1991-01-01

    Potential radiation impacts from the launch of the Ulysses solar exploration mission were evaluated using eight postulated accident scenarios. Lifetime individual dose estimates rarely exceeded 1 mrem. Most of the potential health effects would come from inhalation exposures immediately after an accident, rather than from ingestion of contaminated food or water, or from inhalation of resuspended plutonium from contaminated ground. For local Florida accidents (that is, during the first minute after launch), an average source term accident was estimated to cause a total added cancer risk of up to 0.2 deaths. For accidents at later times after launch, a worldwide cancer risk of up to three cases was calculated (with a four-in-a-million probability). Upper-bound estimates were calculated to be about 10 times higher.

  5. NSRD-10: Leak Path Factor Guidance Using MELCOR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Louie, David; Humphries, Larry L.

    Estimating the source term from a U.S. Department of Energy (DOE) nuclear facility requires that analysts know how to apply the simulation tools used, such as the MELCOR code, particularly for a complicated facility that may include an air ventilation system and other active systems that can influence the environmental pathway of the released materials. DOE has designated MELCOR 1.8.5, an unsupported version, as a DOE ToolBox code in its Central Registry, which includes a leak-path-factor guidance report written in 2004 that did not include experimental validation data. Continuing to use this MELCOR version would require additional verification and validation, which may not be feasible from a project cost standpoint. Instead, the recent MELCOR should be used. Without developer support and experimental validation, it is difficult to convince regulators that the calculated source term from a DOE facility is accurate and defensible. This research replaces the obsolete version in the 2004 DOE leak path factor guidance report by using MELCOR 2.1 (the latest version of MELCOR, with continuing model development and user support) and by including applicable experimental data from the reactor safety arena and from the experimental data used in DOE-HDBK-3010. This research provides best-practice values used in MELCOR 2.1 specifically for leak path factor determination. With these enhancements, the revised leak-path-guidance report should give confidence to DOE safety analysts using MELCOR as a source-term determination tool for mitigated accident evaluations.

  6. Calculated and measured brachytherapy dosimetry parameters in water for the Xoft Axxent X-Ray Source: an electronic brachytherapy source.

    PubMed

    Rivard, Mark J; Davis, Stephen D; DeWerd, Larry A; Rusch, Thomas W; Axelrod, Steve

    2006-11-01

    A new x-ray source, the model S700 Axxent X-Ray Source (Source), has been developed by Xoft Inc. for electronic brachytherapy. Unlike brachytherapy sources containing radionuclides, this Source may be turned on and off at will and may be operated at variable currents and voltages to change the dose rate and penetration properties. The in-water dosimetry parameters for this electronic brachytherapy source have been determined from measurements and calculations at 40, 45, and 50 kV settings. Monte Carlo simulations of radiation transport utilized the MCNP5 code and the EPDL97-based mcplib04 cross-section library. Inter-tube consistency was assessed for 20 different Sources, measured with a PTW 34013 ionization chamber. As the Source is intended to be used for a maximum of ten treatment fractions, tube stability was also assessed. Photon spectra were measured using a high-purity germanium (HPGe) detector, and calculated using MCNP. Parameters used in the two-dimensional (2D) brachytherapy dosimetry formalism were determined. While the Source was characterized as a point due to the small anode size, < 1 mm, use of the one-dimensional (1D) brachytherapy dosimetry formalism is not recommended due to polar anisotropy. Consequently, 1D brachytherapy dosimetry parameters were not sought. Calculated point-source model radial dose functions at gP(5) were 0.20, 0.24, and 0.29 for the 40, 45, and 50 kV voltage settings, respectively. For 1

  7. Methane Emissions from Kuwait: long-term measurement, mobile plume mapping and isotopic characterisation

    NASA Astrophysics Data System (ADS)

    al-Shalaan, Aalia; Lowry, David; Fisher, Rebecca; Zazzeri, Giulia; Alsarawi, Mohammad; Nisbet, Euan

    2017-04-01

    National and EDGAR inventories suggest that the dominant sources of methane in Kuwait are leaks from gas flaring and distribution (92%) and landfills (5%), with additional smaller emissions from sewage (wastewater) treatment and ruminant animals. New measurements during 2015 and 2016 suggest that the inventories differ greatly from observations. Regular weekly bag samples have been collected from 3 sites in Kuwait: one NW of the city, one to the SE, and one in the city on the rooftop of the Kuwait College of Science. These take turns recording the highest mole fractions, depending on wind direction. Associated with higher mole fractions is a consistent depletion in 13C of methane, pointing to a national source mix with δ13C of -54.8‰. This is significantly different from the inventory-based calculation, which suggests a mix of -51.3‰. Mobile plume identification using a Picarro G2301 analyser, coupled with Tedlar bag sampling for isotopic analysis (Zazzeri et al., 2015), reveals that by far the largest observed source of methane in Kuwait is landfill sites (δ13C of -57‰), with smaller contributions from the fossil fuel industry (-51‰), wastewater treatment (-50‰) and ruminant animals (cows, -62‰; camels, -60‰; sheep, -64‰). Many of these isotopic signatures are close to those observed for the same source categories in other countries; for example, landfill emission signatures have the same range as those calculated for the UK and Hong Kong (-60 to -55‰), even to the level that older, closed and capped landfills emit smaller amounts of methane at more enriched values (-55 to -50‰), due to a small percentage of topsoil oxidation. Our findings suggest that many more top-down measurements must be made to verify emissions inventories, particularly in Middle Eastern countries where a significant proportion of emissions are unverified calculations of fossil fuel emissions. Zazzeri, G. et al. (2015) Plume mapping and isotopic characterization of anthropogenic methane sources, Atmospheric Environment, 110, 151-162, doi.org/10.1016/j.atmosenv.2015.03.029
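    The inventory-mix figure quoted above is an emission-weighted mean of source signatures; a minimal sketch of that arithmetic is below (the fractions and signatures are illustrative stand-ins based on the rounded values in the abstract, not the actual inventory inputs):

```python
# Emission-weighted mean delta13C of a national source mix: the kind of
# bookkeeping behind the -51.3 permil inventory figure. Fractions and
# signatures below are illustrative, not the Kuwait inventory values.
fractions  = {"gas flaring/distribution": 0.92, "landfill": 0.05,
              "wastewater": 0.02, "ruminants": 0.01}
signatures = {"gas flaring/distribution": -51.0, "landfill": -57.0,
              "wastewater": -50.0, "ruminants": -61.0}

d13C_mix = sum(fractions[s] * signatures[s] for s in fractions)
print(f"source-mix delta13C = {d13C_mix:.1f} permil")
```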

  8. Evaluation of gamma dose effect on PIN photodiode using analytical model

    NASA Astrophysics Data System (ADS)

    Jafari, H.; Feghhi, S. A. H.; Boorboor, S.

    2018-03-01

    PIN silicon photodiodes are widely used in applications found in radiation environments, such as space missions, medical imaging and non-destructive testing. Radiation-induced damage in these devices degrades the photodiode parameters. In this work, we have used a new approach to evaluate gamma dose effects on a commercial PIN photodiode (BPX65), based on an analytical model. In this approach, the NIEL parameter was calculated for gamma rays from a 60Co source using GEANT4. The radiation damage mechanisms were considered by numerically solving the Poisson and continuity equations with the appropriate boundary conditions, parameters and physical models. Defects caused by radiation in silicon were formulated in terms of a damage coefficient for the minority-carrier lifetime. The gamma-induced degradation of the silicon PIN photodiode parameters was analyzed in detail, and the results were compared with experimental measurements, as well as with results from the ATLAS semiconductor simulator, to verify and parameterize the analytical model calculations. The results showed reasonable agreement for the BPX65 silicon photodiode irradiated by a 60Co gamma source at total doses up to 5 kGy under different reverse voltages.
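    The lifetime-damage formulation mentioned above is commonly written as 1/tau = 1/tau0 + K*D, with K a damage coefficient and D the absorbed dose; a minimal sketch with placeholder values (not the fitted BPX65 parameters) is below.

```python
# First-order displacement-damage model for minority-carrier lifetime:
# 1/tau = 1/tau0 + K * D. Numbers are placeholders, not the BPX65 fit.
def degraded_lifetime(tau0_s, K_per_Gy_s, dose_Gy):
    return 1.0 / (1.0 / tau0_s + K_per_Gy_s * dose_Gy)

tau0 = 10e-6   # pre-irradiation lifetime [s], hypothetical
K = 20.0       # damage coefficient [1/(Gy*s)], hypothetical
for D in (0.0, 1e3, 5e3):   # doses up to 5 kGy, as in the study
    print(D, degraded_lifetime(tau0, K, D))
```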

  9. Parameterized source term in the diffusion approximation for enhanced near-field modeling of collimated light

    NASA Astrophysics Data System (ADS)

    Jia, Mengyu; Wang, Shuang; Chen, Xueying; Gao, Feng; Zhao, Huijuan

    2016-03-01

    Most analytical methods for describing light propagation in turbid media exhibit low effectiveness in the near field of a collimated source. Motivated by the charge simulation method in electromagnetic theory, as well as established discrete-source-based modeling, we report an improved explicit model, referred to as the "Virtual Source" (VS) diffusion approximation (DA), which inherits the mathematical simplicity of the DA while considerably extending its validity for modeling near-field photon migration in low-albedo media. In this model, the collimated light of the standard DA is approximated by multiple isotropic point sources (the VSs) distributed along the incident direction. For performance enhancement, a fitting procedure between calculated and realistic reflectances is adopted in the near field to optimize the VS parameters (intensities and locations). To be practically applicable, an explicit 2VS-DA model is established based on closed-form derivations of the VS parameters for typical ranges of the optical parameters. The proposed VS-DA model is validated against Monte Carlo simulations, and further applied in the image reconstruction of a Laminar Optical Tomography system.
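    In an infinite homogeneous medium, the virtual-source idea reduces to summing isotropic point-source diffusion solutions; a minimal sketch with hypothetical source depths and weights (the paper's fitted 2VS parameters are not reproduced here):

```python
import numpy as np

# Diffusion-theory constants for a hypothetical turbid medium
mu_a, mu_s_prime = 0.01, 1.0            # absorption, reduced scattering [1/mm]
D = 1.0 / (3.0 * (mu_a + mu_s_prime))   # diffusion coefficient [mm]
mu_eff = np.sqrt(mu_a / D)              # effective attenuation [1/mm]

def fluence(r_obs, vs_positions, vs_weights):
    """Sum of infinite-medium point-source solutions:
    phi = w * exp(-mu_eff * d) / (4 * pi * D * d)."""
    phi = 0.0
    for p, w in zip(vs_positions, vs_weights):
        d = np.linalg.norm(np.asarray(r_obs) - np.asarray(p))
        phi += w * np.exp(-mu_eff * d) / (4.0 * np.pi * D * d)
    return phi

# Two virtual sources buried along the incidence axis (depths in mm,
# weights illustrative, not fitted)
vs_pos = [(0.0, 0.0, 0.5), (0.0, 0.0, 2.0)]
vs_wgt = [0.7, 0.3]
print(fluence((1.0, 0.0, 0.0), vs_pos, vs_wgt))
```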

  10. Inverse modelling of radionuclide release rates using gamma dose rate observations

    NASA Astrophysics Data System (ADS)

    Hamburger, Thomas; Stohl, Andreas; von Haustein, Christoph; Thummerer, Severin; Wallner, Christian

    2014-05-01

    Severe accidents in nuclear power plants, such as the historical accident in Chernobyl in 1986 or the more recent disaster at the Fukushima Dai-ichi nuclear power plant in 2011, have drastic impacts on the population and environment, with hazardous consequences reaching national and continental scales. Environmental measurements and methods to model the transport and dispersion of the released radionuclides serve as a platform to assess the regional impact of nuclear accidents, both for research purposes and, more importantly, to determine the immediate threat to the population. However, the assessments of regional radionuclide activity concentrations and of individual radiation exposure are subject to several uncertainties, for example in the accurate model representation of wet and dry deposition. One of the most significant uncertainties, however, results from the estimation of the source term, that is, the time-dependent quantification of the spectrum of radionuclides released during the course of the nuclear accident. The quantification of the source terms of severe nuclear accidents may either remain uncertain (e.g. Chernobyl; Devell et al., 1995) or rely on rather rough estimates of released key radionuclides given by the operators. Precise measurements are mostly missing due to practical limitations during the accident. Inverse modelling can be used to realise a feasible estimation of the source term (Davoine and Bocquet, 2007). Existing point measurements of radionuclide activity concentrations are combined with atmospheric transport models, and the release rates of radionuclides at the accident site are obtained by improving the agreement between the modelled and observed concentrations (Stohl et al., 2012). The accuracy of the method, and hence of the resulting source term, depends among other things on the availability, reliability, and temporal and spatial resolution of the observations. Radionuclide activity concentrations are observed on a relatively sparse grid, and the temporal resolution of available data may be low, on the order of hours to a day. Gamma dose rates, on the other hand, are observed routinely on a much denser grid and at higher temporal resolution. Gamma dose rate measurements contain no explicit information on the observed spectrum of radionuclides and have to be interpreted carefully. Nevertheless, they provide valuable information for the inverse evaluation of the source term due to their availability (Saunier et al., 2013). We present a new inversion approach combining an atmospheric dispersion model and observations of radionuclide activity concentrations and gamma dose rates to obtain the source term of radionuclides. We use the Lagrangian particle dispersion model FLEXPART (Stohl et al., 1998; Stohl et al., 2005) to model the atmospheric transport of the released radionuclides. The gamma dose rates are calculated from the modelled activity concentrations. The inversion method uses a Bayesian formulation considering uncertainties for the a priori source term and the observations (Eckhardt et al., 2008). The a priori information on the source term is a first guess; the gamma dose rate observations are used with inverse modelling to improve this first guess and to retrieve a reliable source term. The details of this method will be presented at the conference. This work is funded by the Bundesamt für Strahlenschutz (BfS), Forschungsvorhaben 3612S60026.
    References: Davoine, X. and Bocquet, M., Atmos. Chem. Phys., 7, 1549-1564, 2007; Devell, L., et al., OCDE/GD(96)12, 1995; Eckhardt, S., et al., Atmos. Chem. Phys., 8, 3881-3897, 2008; Saunier, O., et al., Atmos. Chem. Phys., 13, 11403-11421, 2013; Stohl, A., et al., Atmos. Environ., 32, 4245-4264, 1998; Stohl, A., et al., Atmos. Chem. Phys., 5, 2461-2474, 2005; Stohl, A., et al., Atmos. Chem. Phys., 12, 2313-2343, 2012.
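    The Bayesian update sketched in the abstract has a closed form for Gaussian errors; the toy example below recovers a synthetic release-rate vector from synthetic observations. All matrices are random placeholders standing in for FLEXPART sensitivities and real gamma dose rates.

```python
import numpy as np

# Toy Bayesian source-term inversion: posterior mean of release rates x
# given observations y = H x + noise, prior N(xa, B), obs errors N(0, R).
rng = np.random.default_rng(1)

n_src, n_obs = 8, 40
H = rng.uniform(0, 1e-9, (n_obs, n_src))    # sensitivities (one row per obs)
x_true = rng.uniform(0, 1e12, n_src)        # "true" release rates [Bq/h]
y = H @ x_true + rng.normal(0, 1e2, n_obs)  # synthetic dose-rate-like obs

xa = np.full(n_src, 3e11)                   # first-guess source term
B = np.diag(np.full(n_src, (5e11) ** 2))    # prior covariance
R = np.diag(np.full(n_obs, (1e2) ** 2))     # observation-error covariance

# Posterior mean: xa + B H^T (H B H^T + R)^-1 (y - H xa)
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
x_post = xa + K @ (y - H @ xa)
print(np.round(x_post / x_true, 2))         # ratios near 1: recovered
```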

  11. Investigation of mode partition noise in Fabry-Perot laser diode

    NASA Astrophysics Data System (ADS)

    Guo, Qingyi; Deng, Lanxin; Mu, Jianwei; Li, Xun; Huang, Wei-Ping

    2014-09-01

    Passive optical network (PON) is considered the most appealing access network architecture in terms of cost-effectiveness, bandwidth management flexibility, scalability and durability. To further reduce the cost per subscriber, a Fabry-Perot (FP) laser diode is preferred as the transmitter at the optical network units (ONUs) because of its lower cost compared to a distributed feedback (DFB) laser diode. However, the mode partition noise (MPN) associated with the multi-longitudinal-mode FP laser diode becomes the limiting factor in the network. This paper studies the MPN characteristics of the FP laser diode using time-domain simulation of noise-driven multi-mode laser rate equations. The probability density functions are calculated for each longitudinal mode. The paper focuses on the k-factor, which is a simple yet important measure of the noise power but is usually taken as a fitted or assumed value in penalty calculations. Here, the sources of the k-factor are studied by simulation, including the intrinsic source of the laser Langevin noise and the extrinsic source of the bit pattern. Photon waveforms are shown under four simulation conditions: regular or random bit pattern, with or without Langevin noise. The k-factors contributed by these sources are studied over a range of bias and modulation currents. Simulation results, illustrated in figures, show that the contribution of Langevin noise to the k-factor is larger than that of the random bit pattern, and is more dominant at lower bias current or higher modulation current.

  12. A Composite Source Model With Fractal Subevent Size Distribution

    NASA Astrophysics Data System (ADS)

    Burjanek, J.; Zahradnik, J.

    A composite source model, incorporating different sized subevents, provides a possible description of complex rupture processes during earthquakes. The number of subevents with characteristic dimension greater than R is proportional to R^-2. The subevents do not overlap with each other, and the sum of their areas equals the area of the target event (e.g. mainshock). The subevents are distributed randomly over the fault. Each subevent is modeled as a finite source, using a kinematic approach (radial rupture propagation, constant rupture velocity, boxcar slip-velocity function with constant rise time on the subevent). The final slip of each subevent is related to its characteristic dimension, using constant stress-drop scaling. Variation of rise time with subevent size is a free parameter of the modeling. The nucleation point of each subevent is taken as the point closest to the mainshock hypocentre. The synthetic Green's functions are calculated by the discrete-wavenumber method in a 1D horizontally layered crustal model on a relatively coarse grid of points covering the fault plane. The Green's functions needed for the kinematic model on a fine grid are obtained by cubic spline interpolation. As different frequencies may be efficiently calculated with different sampling, the interpolation simplifies and speeds up the procedure significantly. The composite source model described above allows interpretation in terms of a kinematic model with non-uniform final slip and rupture velocity spatial distributions. The 1994 Northridge earthquake (Mw = 6.7) is used as a validation event. Strong-ground-motion modeling of the 1999 Athens earthquake (Mw = 5.9) is also performed.
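    The fractal size distribution N(>R) ~ R^-2 is a Pareto law, so subevent radii can be drawn by inverse-CDF sampling until the summed subevent area fills the target fault area. A minimal sketch (without the non-overlapping placement step) follows; the area and minimum radius are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)

def draw_subevents(target_area, r_min):
    """Draw circular subevent radii from N(>R) ~ R^-2 (Pareto, shape 2)
    until their summed area reaches the target rupture area."""
    radii = []
    while sum(np.pi * r**2 for r in radii) < target_area:
        u = rng.random()
        radii.append(r_min * (1.0 - u) ** -0.5)   # inverse-CDF sampling
    return radii

radii = draw_subevents(target_area=300.0, r_min=0.5)  # km^2, km
print(len(radii), max(radii))
```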

  13. Comparison of drought events detected by SPI calculated from different historical precipitation data sets - case study from Southern Alps

    NASA Astrophysics Data System (ADS)

    Brencic, M.; Hictaler, J.

    2012-04-01

    In recent years, substantial efforts have been directed toward the reconstruction of past meteorological data sets of precipitation, air temperature, air pressure and sunshine. In the Alpine space of Europe there is a long tradition of meteorological monitoring, with the first modern measurements starting in the late 18th century. However, older data were obtained under very different conditions, standards and quality, so direct comparison between data sets of different observation points is not possible. Several methods, referred to as data homogenisation procedures, have been developed to enable comparison of data from different observation points and sources. Although homogenisation procedures are scientifically agreed upon, the final result, represented as a homogenised data series, depends on the ability and approach of the interpreters. A well-known data set from the Greater Alpine region based on a common homogenisation procedure is the HISTALP data series. However, HISTALP is not the only homogenised data set available in the region. Local agencies responsible for meteorological observations (e.g., in Slovenia, the Environmental Agency of Slovenia - ARSO) perform their own homogenisation procedures, and because more detailed information about measuring procedures and locations for the particular stations is available to them, differences between homogenised data sets can be expected. Longer meteorological data sets can be used to detect past drought events of various magnitudes and to discern their characteristics. A very frequently used meteorological drought index is the standardized precipitation index (SPI). The SPI is designed to detect events of low frequency; with its help, periods of extremely low precipitation can be identified. It is usually based on monthly precipitation, where the cumulative precipitation amount for a particular time period is calculated. Given a time series of monthly precipitation data for a location, the SPI can be calculated for any month in the record over the previous i months, where i = 1, 2, 3, ..., 12, ..., 24, ..., 48, ..., depending on the timescale of interest. A 3-month SPI is usually used as a short-term or seasonal drought index, a 12-month SPI as an intermediate-term drought index, and a 48-month SPI as a long-term drought index. In this paper, results of SPI calculations are presented for precipitation stations in the region of the Southern Alps for the last 200 years. Results from differently homogenised data sets for the same observation points are compared: homogenised data sets from the HISTALP and ARSO databases. For the period after World War II, when reliable precipitation measurements are available, the comparison was also performed between raw and homogenised data series. Differences between short-term SPI values (1 to 6 months) calculated from the different data sets are small and do not influence the interpretation of short-term drought occurrence. As the SPI accumulation period lengthens, differences between calculated values grow and influence the detection of longer-term droughts. It can also be shown that differences among the parameters of the fitted model distribution (gamma distribution) are larger for longer SPI periods than for shorter ones. It can be concluded empirically that the homogenisation procedure applied to precipitation data sets can substantially influence SPI values and affects conclusions about long-term drought occurrence.
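    The SPI recipe described above (accumulate monthly precipitation over a window, fit a gamma distribution, transform through the standard normal quantile) can be sketched compactly; the snippet below ignores the zero-precipitation mixture correction used operationally and runs on synthetic data.

```python
import numpy as np
from scipy import stats

def spi(monthly_precip, window):
    """Standardized Precipitation Index: accumulate precipitation over
    `window` months, fit a gamma distribution, map to standard normal.
    (Sketch: omits the zero-precipitation mixture term used when dry
    months occur.)"""
    p = np.convolve(monthly_precip, np.ones(window), mode="valid")
    shape, loc, scale = stats.gamma.fit(p, floc=0)
    cdf = stats.gamma.cdf(p, shape, loc=loc, scale=scale)
    return stats.norm.ppf(cdf)

rng = np.random.default_rng(0)
precip = rng.gamma(2.0, 50.0, 12 * 30)     # 30 years of synthetic monthly data
print(spi(precip, window=3)[:5])            # 3-month (seasonal) SPI
print(spi(precip, window=12)[:5])           # 12-month (intermediate-term) SPI
```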

  14. 8.76 W mid-infrared supercontinuum generation in a thulium doped fiber amplifier

    NASA Astrophysics Data System (ADS)

    Michalska, Maria; Grzes, Pawel; Swiderski, Jacek

    2018-07-01

    A stable mid-infrared supercontinuum (SC) generation with a maximum average power of 8.76 W in a spectral band of 1.9-2.65 μm is reported. To broaden the SC bandwidth, a 1.55 μm pulsed laser system delivering 1 ns pulses at a pulse repetition frequency of 500 kHz was used as a seed source for a one-stage thulium-doped fiber amplifier. The power conversion efficiency for wavelengths longer than 2.4 μm and 2.5 μm was determined to be 28% and 18%, respectively, which is believed to be the most efficient power distribution towards the mid-infrared among SC sources based on Tm-doped fibers. The power spectral density of the continuum was calculated to be >13 mW/nm, with the potential for further power scaling. A long-term power stability test, showing power fluctuations <3%, proved the robustness and reliability of the developed SC source.

  15. Dose rate calculations around 192Ir brachytherapy sources using a Sievert integration model

    NASA Astrophysics Data System (ADS)

    Karaiskos, P.; Angelopoulos, A.; Baras, P.; Rozaki-Mavrouli, H.; Sandilos, P.; Vlachos, L.; Sakelliou, L.

    2000-02-01

    The classical Sievert integral method is a valuable tool for dose rate calculations around brachytherapy sources, combining simplicity with reasonable computational times. However, its accuracy in predicting dose rate anisotropy around 192Ir brachytherapy sources has been repeatedly put into question. In this work, we used a primary and scatter separation technique to improve an existing modification of the Sievert integral (Williamson's isotropic scatter model) that determines dose rate anisotropy around commercially available 192Ir brachytherapy sources. The proposed Sievert formalism provides increased accuracy while maintaining the simplicity and computational time efficiency of the Sievert integral method. To describe transmission within the materials encountered, the formalism makes use of narrow-beam attenuation coefficients which can be directly and easily calculated from the initially emitted 192Ir spectrum. The other numerical parameters required for its implementation, once calculated with the aid of our home-made Monte Carlo simulation code, can be used for any 192Ir source design. Calculations of dose rate and anisotropy functions with the proposed Sievert expression, around commonly used 192Ir high dose rate sources and other 192Ir elongated source designs, are in good agreement with corresponding accurate Monte Carlo results which have been reported by our group and other authors.
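
    The classical Sievert construction itself is compact enough to sketch numerically. The fragment below is a generic illustration of the filtered line-source integral, not the formalism proposed in the paper; the geometry, attenuation coefficient and quadrature are placeholder choices.

        # Sievert integral for a straight line source behind a capsule wall,
        # evaluated on the perpendicular bisector of the source.
        import numpy as np

        def sievert_dose_rate(h, L, t_wall, mu, n=2000):
            """Relative dose rate at perpendicular distance h (cm) from a
            line source of active length L (cm) behind a wall of thickness
            t_wall (cm), narrow-beam attenuation coefficient mu (1/cm)."""
            theta_max = np.arctan(0.5 * L / h)
            theta = np.linspace(-theta_max, theta_max, n)
            # oblique path through the capsule grows as 1/cos(theta)
            integrand = np.exp(-mu * t_wall / np.cos(theta))
            # average-value quadrature of the Sievert integral
            return integrand.mean() * 2.0 * theta_max / (L * h)

        # illustrative numbers: 0.35 cm active length, 0.05 cm capsule wall
        print(sievert_dose_rate(h=1.0, L=0.35, t_wall=0.05, mu=0.6))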

  16. Accurate and efficient modeling of the detector response in small animal multi-head PET systems.

    PubMed

    Cecchetti, Matteo; Moehrs, Sascha; Belcari, Nicola; Del Guerra, Alberto

    2013-10-07

    In fully three-dimensional PET imaging, iterative image reconstruction techniques usually outperform analytical algorithms in terms of image quality provided that an appropriate system model is used. In this study we concentrate on the calculation of an accurate system model for the YAP-(S)PET II small animal scanner, with the aim to obtain fully resolution- and contrast-recovered images at low levels of image roughness. For this purpose we calculate the system model by decomposing it into a product of five matrices: (1) a detector response component obtained via Monte Carlo simulations, (2) a geometric component which describes the scanner geometry and which is calculated via a multi-ray method, (3) a detector normalization component derived from the acquisition of a planar source, (4) a photon attenuation component calculated from x-ray computed tomography data, and finally, (5) a positron range component is formally included. This system model factorization allows the optimization of each component in terms of computation time, storage requirements and accuracy. The main contribution of this work is a new, efficient way to calculate the detector response component for rotating, planar detectors, that consists of a GEANT4 based simulation of a subset of lines of flight (LOFs) for a single detector head whereas the missing LOFs are obtained by using intrinsic detector symmetries. Additionally, we introduce and analyze a probability threshold for matrix elements of the detector component to optimize the trade-off between the matrix size in terms of non-zero elements and the resulting quality of the reconstructed images. In order to evaluate our proposed system model we reconstructed various images of objects, acquired according to the NEMA NU 4-2008 standard, and we compared them to the images reconstructed with two other system models: a model that does not include any detector response component and a model that approximates analytically the depth of interaction as detector response component. The comparisons confirm previous research results, showing that the usage of an accurate system model with a realistic detector response leads to reconstructed images with better resolution and contrast recovery at low levels of image roughness.

  17. Accurate and efficient modeling of the detector response in small animal multi-head PET systems

    NASA Astrophysics Data System (ADS)

    Cecchetti, Matteo; Moehrs, Sascha; Belcari, Nicola; Del Guerra, Alberto

    2013-10-01

    In fully three-dimensional PET imaging, iterative image reconstruction techniques usually outperform analytical algorithms in terms of image quality provided that an appropriate system model is used. In this study we concentrate on the calculation of an accurate system model for the YAP-(S)PET II small animal scanner, with the aim to obtain fully resolution- and contrast-recovered images at low levels of image roughness. For this purpose we calculate the system model by decomposing it into a product of five matrices: (1) a detector response component obtained via Monte Carlo simulations, (2) a geometric component which describes the scanner geometry and which is calculated via a multi-ray method, (3) a detector normalization component derived from the acquisition of a planar source, (4) a photon attenuation component calculated from x-ray computed tomography data, and finally, (5) a positron range component is formally included. This system model factorization allows the optimization of each component in terms of computation time, storage requirements and accuracy. The main contribution of this work is a new, efficient way to calculate the detector response component for rotating, planar detectors, that consists of a GEANT4 based simulation of a subset of lines of flight (LOFs) for a single detector head whereas the missing LOFs are obtained by using intrinsic detector symmetries. Additionally, we introduce and analyze a probability threshold for matrix elements of the detector component to optimize the trade-off between the matrix size in terms of non-zero elements and the resulting quality of the reconstructed images. In order to evaluate our proposed system model we reconstructed various images of objects, acquired according to the NEMA NU 4-2008 standard, and we compared them to the images reconstructed with two other system models: a model that does not include any detector response component and a model that approximates analytically the depth of interaction as detector response component. The comparisons confirm previous research results, showing that the usage of an accurate system model with a realistic detector response leads to reconstructed images with better resolution and contrast recovery at low levels of image roughness.
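
    The probability threshold described above trades stored non-zeros against fidelity, which is easy to demonstrate on a toy block. The sketch below is an assumption-laden illustration, not the authors' implementation; in particular, the column renormalization is a choice made here so that pruning preserves each incidence bin's total detection probability.

        # Prune sub-threshold detection probabilities and store sparsely.
        import numpy as np
        from scipy import sparse

        def threshold_detector_block(block, cutoff):
            """block: rows = detection bins, columns = incidence bins."""
            pruned = np.where(block >= cutoff, block, 0.0)
            # rescale each column so retained probabilities keep the
            # original column sum (total detection efficiency)
            col_sums = pruned.sum(axis=0)
            scale = np.divide(block.sum(axis=0), col_sums,
                              out=np.zeros_like(col_sums),
                              where=col_sums > 0)
            return sparse.csr_matrix(pruned * scale)

        rng = np.random.default_rng(0)
        block = rng.random((64, 64)) ** 8      # mostly near-zero entries
        sp = threshold_detector_block(block, 0.01)
        print("kept %.1f%% of elements" % (100.0 * sp.nnz / block.size))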

  18. FreeSASA: An open source C library for solvent accessible surface area calculations.

    PubMed

    Mitternacht, Simon

    2016-01-01

    Calculating solvent accessible surface areas (SASA) is a run-of-the-mill calculation in structural biology. Although there are many programs available for this calculation, there are no free-standing, open-source tools designed for easy tool-chain integration. FreeSASA is an open source C library for SASA calculations that provides both command-line and Python interfaces in addition to its C API. The library implements both Lee and Richards' and Shrake and Rupley's approximations, and is highly configurable to allow the user to control molecular parameters, accuracy and output granularity. It only depends on standard C libraries and should therefore be easy to compile and install on any platform. The library is well-documented, stable and efficient. The command-line interface can easily replace closed source legacy programs, with comparable or better accuracy and speed, and with some added functionality.
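
    Since the abstract emphasizes tool-chain integration, a short sketch of the Python interface may help. The input file name is hypothetical, and the calls reflect the library's documented high-level API as recalled here, so the FreeSASA documentation should be checked before relying on them.

        # Hypothetical FreeSASA usage: total SASA and a per-class breakdown.
        import freesasa

        structure = freesasa.Structure("protein.pdb")  # hypothetical file
        result = freesasa.calc(structure)              # Lee & Richards default
        print("Total SASA: %.2f A^2" % result.totalArea())

        # polar/apolar decomposition as exposed by the library
        for cls, area in freesasa.classifyResults(result, structure).items():
            print(cls, area)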

  19. 40 CFR 98.143 - Calculating GHG emissions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Fuel Combustion Sources). (2) Calculate and report the process and combustion CO2 emissions separately... Fuel Combustion Sources) the combustion CO2 emissions in the glass furnace according to the applicable... calculate and report the annual process CO2 emissions from each continuous glass melting furnace using the...

  20. 40 CFR 98.193 - Calculating GHG emissions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Stationary Fuel Combustion Sources). (2) Calculate and report process and combustion CO2 emissions separately... Stationary Fuel Combustion Sources) the combustion CO2 emissions from each lime kiln according to the... must calculate and report the annual process CO2 emissions from all lime kilns combined using the...

  1. 40 CFR 98.193 - Calculating GHG emissions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Stationary Fuel Combustion Sources). (2) Calculate and report process and combustion CO2 emissions separately... Stationary Fuel Combustion Sources) the combustion CO2 emissions from each lime kiln according to the... must calculate and report the annual process CO2 emissions from all lime kilns combined using the...

  2. 40 CFR 98.143 - Calculating GHG emissions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Fuel Combustion Sources). (2) Calculate and report the process and combustion CO2 emissions separately... Fuel Combustion Sources) the combustion CO2 emissions in the glass furnace according to the applicable... calculate and report the annual process CO2 emissions from each continuous glass melting furnace using the...

  3. Reduced Variance using ADVANTG in Monte Carlo Calculations of Dose Coefficients to Stylized Phantoms

    NASA Astrophysics Data System (ADS)

    Hiller, Mauritius; Bellamy, Michael; Eckerman, Keith; Hertel, Nolan

    2017-09-01

    The estimation of dose coefficients of external radiation sources to the organs in phantoms becomes increasingly difficult at lower photon source energies. This study focuses on photon emitters distributed around the phantom. The computer time needed to reach a given precision can be lowered by several orders of magnitude using ADVANTG compared to a standard run. ADVANTG, which employs the DENOVO adjoint calculation package, enables the user to create a fully populated set of weight windows and source-biasing instructions for an MCNP calculation.

  4. Gravity Compensation Using EGM2008 for High-Precision Long-Term Inertial Navigation Systems

    PubMed Central

    Wu, Ruonan; Wu, Qiuping; Han, Fengtian; Liu, Tianyi; Hu, Peida; Li, Haixia

    2016-01-01

    The gravity disturbance vector is one of the major error sources in high-precision and long-term inertial navigation applications. Specific to inertial navigation systems (INSs) with high-order horizontal damping networks, analyses of the error propagation show that the gravity-induced errors exist almost exclusively in the horizontal channels and are mostly caused by deflections of the vertical (DOV). Low-frequency components of the DOV propagate into the latitude and longitude errors at a ratio of 1:1, and time-varying fluctuations in the DOV excite the Schuler oscillation. This paper presents two gravity compensation methods using the Earth Gravitational Model 2008 (EGM2008): interpolation from an off-line database, and computing gravity vectors directly from the spherical harmonic model. Particular attention is given to the error contribution of the gravity update interval and the computing time delay. For marine navigation, it is recommended that a gravity vector be calculated within 1 s and updated at least every 100 s. To meet this demand, the time needed to calculate the current gravity vector using EGM2008 was reduced to less than 1 s by optimizing the calculation procedure. Off-line experiments were conducted using data from a shipborne INS collected during an actual sea test. With the aid of EGM2008, most of the low-frequency components of the position errors caused by the gravity disturbance vector were removed and the Schuler oscillation was attenuated effectively. In rugged terrain, the horizontal position error could be reduced by as much as 48.85% of its regional maximum. The experimental results match the theoretical analysis and indicate that EGM2008 is suitable for gravity compensation of high-precision, long-term INSs. PMID:27999351

  5. Gravity Compensation Using EGM2008 for High-Precision Long-Term Inertial Navigation Systems.

    PubMed

    Wu, Ruonan; Wu, Qiuping; Han, Fengtian; Liu, Tianyi; Hu, Peida; Li, Haixia

    2016-12-18

    The gravity disturbance vector is one of the major error sources in high-precision and long-term inertial navigation applications. Specific to inertial navigation systems (INSs) with high-order horizontal damping networks, analyses of the error propagation show that the gravity-induced errors exist almost exclusively in the horizontal channels and are mostly caused by deflections of the vertical (DOV). Low-frequency components of the DOV propagate into the latitude and longitude errors at a ratio of 1:1, and time-varying fluctuations in the DOV excite the Schuler oscillation. This paper presents two gravity compensation methods using the Earth Gravitational Model 2008 (EGM2008): interpolation from an off-line database, and computing gravity vectors directly from the spherical harmonic model. Particular attention is given to the error contribution of the gravity update interval and the computing time delay. For marine navigation, it is recommended that a gravity vector be calculated within 1 s and updated at least every 100 s. To meet this demand, the time needed to calculate the current gravity vector using EGM2008 was reduced to less than 1 s by optimizing the calculation procedure. Off-line experiments were conducted using data from a shipborne INS collected during an actual sea test. With the aid of EGM2008, most of the low-frequency components of the position errors caused by the gravity disturbance vector were removed and the Schuler oscillation was attenuated effectively. In rugged terrain, the horizontal position error could be reduced by as much as 48.85% of its regional maximum. The experimental results match the theoretical analysis and indicate that EGM2008 is suitable for gravity compensation of high-precision, long-term INSs.
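
    Of the two strategies above, interpolation from the off-line database is the simpler to sketch. The fragment below is a schematic stand-in, not the authors' implementation: it bilinearly interpolates one deflection-of-the-vertical component from a regular latitude-longitude grid, and the grid content and spacing are invented for the example.

        # Bilinear interpolation of a pre-computed DOV grid at a query point.
        import numpy as np

        def interp_dov(grid, lat0, lon0, dlat, dlon, lat, lon):
            """grid: 2-D array of one DOV component on a regular grid that
            starts at (lat0, lon0) with spacing (dlat, dlon), in degrees."""
            i = (lat - lat0) / dlat
            j = (lon - lon0) / dlon
            i0, j0 = int(np.floor(i)), int(np.floor(j))
            fi, fj = i - i0, j - j0
            # bilinear weights over the four surrounding grid nodes
            return ((1 - fi) * (1 - fj) * grid[i0, j0]
                    + fi * (1 - fj) * grid[i0 + 1, j0]
                    + (1 - fi) * fj * grid[i0, j0 + 1]
                    + fi * fj * grid[i0 + 1, j0 + 1])

        # toy grid (0.5 deg spacing) standing in for an EGM2008 product
        lats = np.arange(-90.0, 90.5, 0.5)
        lons = np.arange(0.0, 360.0, 0.5)
        grid = np.sin(np.deg2rad(lats))[:, None] * np.cos(np.deg2rad(lons))
        print(interp_dov(grid, -90.0, 0.0, 0.5, 0.5, lat=45.27, lon=123.71))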

  6. Gravitational perturbations and metric reconstruction: Method of extended homogeneous solutions applied to eccentric orbits on a Schwarzschild black hole

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hopper, Seth; Evans, Charles R.

    2010-10-15

    We calculate the gravitational perturbations produced by a small mass in eccentric orbit about a much more massive Schwarzschild black hole and use the numerically computed perturbations to solve for the metric. The calculations are initially made in the frequency domain and provide Fourier-harmonic modes for the gauge-invariant master functions that satisfy inhomogeneous versions of the Regge-Wheeler and Zerilli equations. These gravitational master equations have specific singular sources containing both delta function and derivative-of-delta function terms. We demonstrate in this paper successful application of the method of extended homogeneous solutions, developed recently by Barack, Ori, and Sago, to handle source terms of this type. The method allows transformation back to the time domain, with exponential convergence of the partial mode sums that represent the field. This rapid convergence holds even in the region of r traversed by the point mass and includes the time-dependent location of the point mass itself. We present numerical results of mode calculations for certain orbital parameters, including highly accurate energy and angular momentum fluxes at infinity and at the black hole event horizon. We then address the issue of reconstructing the metric perturbation amplitudes from the master functions, the latter being weak solutions of a particular form to the wave equations. The spherical harmonic amplitudes that represent the metric in Regge-Wheeler gauge can themselves be viewed as weak solutions. They are in general a combination of (1) two differentiable solutions that adjoin at the instantaneous location of the point mass (a result that has order of continuity C^{-1} typically) and (2) (in some cases) a delta function distribution term with a computable time-dependent amplitude.

  7. Accuracy-preserving source term quadrature for third-order edge-based discretization

    NASA Astrophysics Data System (ADS)

    Nishikawa, Hiroaki; Liu, Yi

    2017-09-01

    In this paper, we derive a family of source term quadrature formulas for preserving third-order accuracy of the node-centered edge-based discretization for conservation laws with source terms on arbitrary simplex grids. A three-parameter family of source term quadrature formulas is derived, and as a subset, a one-parameter family of economical formulas is identified that does not require second derivatives of the source term. Among the economical formulas, a unique formula is then derived that does not require gradients of the source term at neighbor nodes, thus leading to a significantly smaller discretization stencil for source terms. All the formulas derived in this paper do not require a boundary closure, and therefore can be directly applied at boundary nodes. Numerical results are presented to demonstrate third-order accuracy at interior and boundary nodes for one-dimensional grids and linear triangular/tetrahedral grids over straight and curved geometries.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frayce, D.; Khayat, R.E.; Derdouri, A.

    The dual reciprocity boundary element method (DRBEM) is implemented to solve three-dimensional transient heat conduction problems in the presence of arbitrary sources, typically as these problems arise in materials processing. The DRBEM has a major advantage over conventional BEM, since it avoids the computation of volume integrals. These integrals stem from transient, nonlinear, and/or source terms. Thus there is no need to discretize the inner domain, since only a number of internal points are needed for the computation. The validity of the method is assessed upon comparison with results from benchmark problems where analytical solutions exist. There is generally good agreement. Comparison against finite element results is also favorable. Calculations are carried out in order to assess the influence of the number and location of internal nodes. The influence of the ratio of the numbers of internal to boundary nodes is also examined.

  9. Volatile Organic Compounds: Characteristics, distribution and sources in urban schools

    NASA Astrophysics Data System (ADS)

    Mishra, Nitika; Bartsch, Jennifer; Ayoko, Godwin A.; Salthammer, Tunga; Morawska, Lidia

    2015-04-01

    Long-term exposure to organic pollutants, both inside and outside school buildings, may affect children's health and influence their learning performance. Since children spend a significant amount of time in school, air quality, especially in classrooms, plays a key role in determining the health risks associated with exposure at schools. Within this context, the present study investigated the ambient concentrations of Volatile Organic Compounds (VOCs) in 25 primary schools in Brisbane, with the aims of quantifying the indoor and outdoor VOC concentrations, identifying VOC sources and their contributions, and, based on these, proposing mitigation measures to reduce VOC exposure in schools. One of the most important findings is the occurrence of indoor sources, indicated by an I/O ratio >1 in 19 schools. Principal Component Analysis with varimax rotation was used to identify common sources of VOCs, and source contributions were calculated using an Absolute Principal Component Scores technique. The results showed that, outdoors, petrol vehicle exhaust contributed 47% of VOCs, whereas indoors cleaning products had the highest contribution at 41%, followed by air fresheners and art and craft activities. These findings point to the need for a range of basic precautions during the selection, use and storage of cleaning products and materials to reduce the risk from these sources.
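
    The PCA/varimax/APCS chain named above can be outlined compactly. The sketch below runs on synthetic data and makes simplifying choices (an orthogonal varimax, a hand-picked number of retained components); it illustrates the technique rather than reproducing the study's analysis.

        # PCA on standardized concentrations, varimax rotation, then source
        # contributions by regressing measurements on absolute PC scores.
        import numpy as np

        def varimax(L, n_iter=100, tol=1e-6):
            """Orthogonal varimax rotation of a p x k loading matrix."""
            p, k = L.shape
            R = np.eye(k)
            crit = 0.0
            for _ in range(n_iter):
                lam = L @ R
                u, s, vt = np.linalg.svd(
                    L.T @ (lam**3 - lam * (lam**2).sum(0) / p))
                R = u @ vt
                if s.sum() < crit * (1 + tol):
                    break
                crit = s.sum()
            return L @ R

        rng = np.random.default_rng(0)
        X = rng.lognormal(mean=1.0, size=(200, 8))  # samples x VOC species
        Z = (X - X.mean(0)) / X.std(0)              # standardize
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        k = 3                                       # retained components
        loadings = varimax(Vt[:k].T * s[:k] / np.sqrt(len(Z) - 1))
        scores = Z @ loadings
        z0 = (0.0 - X.mean(0)) / X.std(0)           # artificial zero sample
        apcs = scores - z0 @ loadings               # absolute PC scores
        # per-species regression: coefficients give source contributions
        design = np.column_stack([np.ones(len(apcs)), apcs])
        coef, *_ = np.linalg.lstsq(design, X, rcond=None)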

  10. INEEL Subregional Conceptual Model Report Volume 3: Summary of Existing Knowledge of Natural and Anthropogenic Influences on the Release of Contaminants to the Subsurface Environment from Waste Source Terms at the INEEL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paul L. Wichlacz

    2003-09-01

    This source-term summary document is intended to describe the current understanding of contaminant source terms and the conceptual model for potential source-term release to the environment at the Idaho National Engineering and Environmental Laboratory (INEEL), as presented in published INEEL reports. The document presents a generalized conceptual model of the sources of contamination and describes the general categories of source terms, primary waste forms, and factors that affect the release of contaminants from the waste form into the vadose zone and Snake River Plain Aquifer. Where the information has previously been published and is readily available, summaries of the inventory of contaminants are also included. Uncertainties that affect the estimation of the source term release are also discussed where they have been identified by the Source Term Technical Advisory Group. Areas in which additional information is needed (i.e., research needs) are also identified.

  11. Simulation of angular and energy distributions of the PTB beta secondary standard.

    PubMed

    Faw, R E; Simons, G G; Gianakon, T A; Bayouth, J E

    1990-09-01

    Calculations and measurements have been performed to assess radiation doses delivered by the PTB Secondary Standard that employs 147Pm, 204Tl, and 90Sr:90Y sources in prescribed geometries, and features "beam-flattening" filters to assure uniformity of delivered doses within a 5-cm radius of the axis from source to detector plane. Three-dimensional, coupled, electron-photon Monte Carlo calculations, accounting for transmission through the source encapsulation and backscattering from the source mounting, led to energy spectra and angular distributions of electrons penetrating the source encapsulation that were used in the representation of pseudo sources of electrons for subsequent transport through the atmosphere, filters, and detectors. Calculations were supplemented by measurements made using bare LiF TLD chips on a thick polymethyl methacrylate phantom. Measurements using the 204Tl and 90Sr:90Y sources revealed that, even in the absence of the beam-flattening filters, delivered dose rates were very uniform radially. Dosimeter response functions (TLD:skin dose ratios) were calculated and confirmed experimentally for all three beta-particle sources and for bare LiF TLDs ranging in mass thickness from 10 to 235 mg cm-2.

  12. Excitation rate coefficients and line ratios for the optical and ultraviolet transitions in S II

    NASA Technical Reports Server (NTRS)

    Cai, Wei; Pradhan, Anil K.

    1993-01-01

    New calculations are reported for electron excitation collision strengths, rate coefficients, transition probabilities, and line ratios for the astrophysically important optical and UV lines in S II. The collision strengths are calculated in the close coupling approximation using the R-matrix method. The present calculations are more extensive than previous ones, including all transitions among the 12 lowest LS terms and the corresponding 28 fine-structure levels in the collisional-radiative model for S II. While the present rate coefficients for electron impact excitation are within 10-30 percent of the previous values for the low-lying optical transitions employed as density diagnostics of H II regions and nebulae, the excitation rates for the UV transitions 4S^o_3/2 - 4P_1/2,3/2,5/2 differ significantly from earlier calculations, by up to a factor of 2. We describe temperature- and density-sensitive flux ratios for a number of UV lines. The present UV results are likely to be of interest in a more accurate interpretation of S II emission from the Io plasma torus in the magnetosphere of Jupiter, as well as other UV sources observed from the IUE, ASTRO-1, and the HST.

  13. Charge redistribution in QM:QM ONIOM model systems: a constrained density functional theory approach

    NASA Astrophysics Data System (ADS)

    Beckett, Daniel; Krukau, Aliaksandr; Raghavachari, Krishnan

    2017-11-01

    The ONIOM hybrid method has found considerable success in QM:QM studies designed to approximate a high level of theory at a significantly reduced cost. This cost reduction is achieved by treating only a small model system with the target level of theory and the rest of the system with a low, inexpensive level of theory. However, the choice of an appropriate model system is a limiting factor in ONIOM calculations, and effects such as charge redistribution across the model system boundary must be considered as a source of error. In an effort to increase the general applicability of the ONIOM model, a method to treat the charge redistribution effect is developed using constrained density functional theory (CDFT) to constrain the charge experienced by the model system in the full calculation to the link atoms in the truncated model system calculations. Two separate CDFT-ONIOM schemes are developed and tested on a set of 20 reactions with eight combinations of levels of theory. It is shown that a scheme using a scaled Lagrange multiplier term obtained from the low-level CDFT model calculation outperforms ONIOM by 32% to 70% across the combinations of levels of theory.

  14. Initialization and assimilation of cloud and rainwater in a regional model

    NASA Technical Reports Server (NTRS)

    Raymond, William H.; Olson, William S.

    1990-01-01

    The initialization and assimilation of cloud and rainwater quantities in a mesoscale regional model were examined. Forecasts of explicit cloud and rainwater are made using conservation equations. The physical processes include condensation, evaporation, autoconversion, accretion, and the removal of rainwater by fallout. These physical processes, some of which are parameterized, represent source and sink terms in the conservation equations. How to initialize the explicit liquid water calculations in numerical models and how to retain information about precipitation processes during the 4-D assimilation cycle are important issues that are addressed.

  15. Melting behavior and phase relations of lunar samples. [Apollo 12 rock samples

    NASA Technical Reports Server (NTRS)

    Hays, J. F.

    1975-01-01

    Cooling rate studies of sample 12002 were conducted and the results interpreted in terms of the crystallization history of this rock and certain other picritic Apollo 12 samples. Calculations of liquid densities and viscosities during crystallization, crystal settling velocities, and heat loss by the parent rock body are discussed, as are petrographic studies of other Apollo 12 samples. The process of magmatic differentiation that must have accompanied the early melting and chemical fractionation of the moon's outer layers was investigated. The source regions of both high- and low-titanium mare basalts were also studied.

  16. Improved Signal Control: An Analysis of the Effects of Automatic Gain Control for Optical Signal Detection.

    DTIC Science & Technology

    1982-12-01

    ... period f; T - switching period; a - AGC control parameter; q - quantum efficiency of photon-to-electron conversion; "1" - binary "one" given in terms of the ... of the photons striking the surface of the detector. This rate is defined as λ(t) = (η p(t) A) / (h f0)  (21), where η - quantum efficiency of the photon ... mW to 10 mW [Ref 5, Table 1] for infrared wavelengths. Assuming all of the source's output power is detected, the rate is calculated to be an order

  17. Noniterative three-dimensional grid generation using parabolic partial differential equations

    NASA Technical Reports Server (NTRS)

    Edwards, T. A.

    1985-01-01

    A new algorithm for generating three-dimensional grids has been developed and implemented which numerically solves a parabolic partial differential equation (PDE). The solution procedure marches outward in two coordinate directions, and requires inversion of a scalar tridiagonal system in the third. Source terms have been introduced to control the spacing and angle of grid lines near the grid boundaries, and to control the outer boundary point distribution. The method has been found to generate grids about 100 times faster than comparable grids generated via solution of elliptic PDEs, and produces smooth grids for finite-difference flow calculations.

  18. Coil extensions improve line shapes by removing field distortions

    NASA Astrophysics Data System (ADS)

    Conradi, Mark S.; Altobelli, Stephen A.; McDowell, Andrew F.

    2018-06-01

    The static magnetic susceptibility of the rf coil can substantially distort the field B0 and be a dominant source of line broadening. A scaling argument shows that this may be a particular problem in microcoil NMR. We propose coil extensions to reduce the distortion. The actual rf coil is extended to a much longer overall length by abutted coil segments that do not carry rf current. The result is a long and nearly uniform sheath of copper wire, in terms of the static susceptibility. The line shape improvement is demonstrated at 43.9 MHz and in simulation calculations.

  19. Determining the perceived value of information when combining supporting and conflicting data

    NASA Astrophysics Data System (ADS)

    Hanratty, Timothy; Heilman, Eric; Richardson, John; Mittrick, Mark; Caylor, Justine

    2017-05-01

    Modern military intelligence operations involve a deluge of information from a large number of sources. A data-ranking algorithm that enables the most valuable information to be reviewed first may improve the timeliness and effectiveness of analysis. This ranking is termed the value of information (VoI), and its calculation is a current area of research within the US Army Research Laboratory (ARL). ARL has conducted an experiment to correlate the perceptions of subject matter experts with the ARL VoI model, and additionally to construct a cognitive model of the ranking process and the amalgamation of supporting and conflicting information.

  20. Study of solid rocket motors for a space shuttle booster. Appendix E: Environmental impact statement, solid rocket motor, space shuttle booster

    NASA Technical Reports Server (NTRS)

    1972-01-01

    An analysis of the combustion products resulting from the solid propellant rocket engines of the space shuttle booster is presented. Calculation of the degree of pollution indicates that the only potentially harmful pollutants, carbon monoxide and hydrochloric acid, will be too diluted to constitute a hazard. The mass of products ejected during a launch within the troposphere is insignificant in terms of similar materials that enter the atmosphere from other sources. Noise pollution will not exceed that obtained from the Saturn 5 launch vehicle.

  1. Modeling Scramjet Flows with Variable Turbulent Prandtl and Schmidt Numbers

    NASA Technical Reports Server (NTRS)

    Xiao, X.; Hassan, H. A.; Baurle, R. A.

    2006-01-01

    A complete turbulence model, where the turbulent Prandtl and Schmidt numbers are calculated as part of the solution and where averages involving chemical source terms are modeled, is presented. The ability to avoid the use of assumed or evolution Probability Distribution Functions (PDFs) results in a highly efficient algorithm for reacting flows. The predictions of the model are compared with two sets of experiments involving supersonic mixing and one involving supersonic combustion. The results demonstrate the need to consider turbulence/chemistry interactions in supersonic combustion. In general, good agreement with experiment is indicated.

  2. [Features of control of electromagnetic radiation emitted by personal computers].

    PubMed

    Pal'tsev, Iu P; Buzov, A L; Kol'chugin, Iu I

    1996-01-01

    Measurements of the electromagnetic radiation emitted by personal computers show that the main sources are PC units emitting waves at certain frequencies. Assessing PC electromagnetic radiation with wide-range detectors that measure total field intensity gives unreliable results; more precise measurements with frequency-selective devices are required. It is therefore expedient to introduce the term "spectral density of field intensity" and a corresponding maximum allowable level. In this approach, the frequency spectrum of PC electromagnetic radiation is divided into four ranges, one of which requires calculation of the field intensity for each harmonic frequency, while the others are assessed by the spectral density of field intensity.

  3. Algae Biofuels Co-Location Assessment Tool for Canada

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2011-11-29

    The Algae Biofuels Co-Location Assessment Tool for Canada uses chemical stoichiometry to estimate nitrogen, phosphorus, and carbon atom availability from wastewater and carbon dioxide emission streams, and the requirements for those same elements to produce a unit of algae. This information is then combined to find the limiting nutrient and estimate the potential productivity associated with wastewater and carbon dioxide sources. Output is visualized in terms of distributions or spatial locations. Distances between points of interest in the model are calculated using the great-circle distance equation, and the smallest distances are found by an exhaustive search-and-sort algorithm, as sketched below.
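
    A minimal version of the two calculations named in the abstract, the great-circle distance and the exhaustive nearest-source search, might look as follows; the coordinates and variable names are illustrative.

        # Haversine great-circle distance plus brute-force nearest matching.
        import numpy as np

        def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
            """Great-circle distance in km; all inputs in degrees."""
            phi1, phi2 = np.radians(lat1), np.radians(lat2)
            dphi = phi2 - phi1
            dlmb = np.radians(lon2 - lon1)
            a = (np.sin(dphi / 2) ** 2
                 + np.cos(phi1) * np.cos(phi2) * np.sin(dlmb / 2) ** 2)
            return 2.0 * radius_km * np.arcsin(np.sqrt(a))

        # nearest CO2 source for each (hypothetical) wastewater plant
        ww_lat, ww_lon = np.array([43.7, 49.9]), np.array([-79.4, -97.1])
        co2_lat, co2_lon = np.array([43.9, 53.5]), np.array([-78.9, -113.5])
        d = great_circle_km(ww_lat[:, None], ww_lon[:, None], co2_lat, co2_lon)
        print(d.argmin(axis=1), d.min(axis=1))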

  4. Modeling TAE Response To Nonlinear Drives

    NASA Astrophysics Data System (ADS)

    Zhang, Bo; Berk, Herbert; Breizman, Boris; Zheng, Linjin

    2012-10-01

    Experiments have detected toroidal Alfven eigenmodes (TAE) with signals at twice the eigenfrequency. These harmonic modes arise from the second-order perturbation in amplitude of the MHD equations for the linear modes that are driven by the energetic-particle free energy. The structure of TAE in realistic geometry can be calculated by generalizing the linear numerical solver (the AEGIS package). We have inserted all the nonlinear MHD source terms, which are quadratic in the linear amplitudes, into the AEGIS code. We then invert the linear MHD equation at the second-harmonic frequency. The ratio of the amplitudes of the first and second harmonics is used to determine the internal field amplitude. The spatial structure of the energy and density distributions is investigated. The results can be employed directly for comparison with experiment and to determine the Alfven wave amplitude in the plasma region.

  5. Working memory and arithmetic calculation in children: the contributory roles of processing speed, short-term memory, and reading.

    PubMed

    Berg, Derek H

    2008-04-01

    The cognitive underpinnings of arithmetic calculation in children are noted to involve working memory; however, cognitive processes related to arithmetic calculation and working memory suggest that this relationship is more complex than stated previously. The purpose of this investigation was to examine the relative contributions of processing speed, short-term memory, working memory, and reading to arithmetic calculation in children. Results suggested four important findings. First, processing speed emerged as a significant contributor of arithmetic calculation only in relation to age-related differences in the general sample. Second, processing speed and short-term memory did not eliminate the contribution of working memory to arithmetic calculation. Third, individual working memory components--verbal working memory and visual-spatial working memory--each contributed unique variance to arithmetic calculation in the presence of all other variables. Fourth, a full model indicated that chronological age remained a significant contributor to arithmetic calculation in the presence of significant contributions from all other variables. Results are discussed in terms of directions for future research on working memory in arithmetic calculation.

  6. Design of Alpha Voltaic Power Source Using Americium 241 (241Am) and Diamond with a Power Density of 10 mW/cm3

    DTIC Science & Technology

    2017-10-19

    ... GaN) was calculated and compared. Alpha-voltaic energy converters were designed in diamond and GaN based on the energy deposition calculations ... Example Power Source: Two example device designs are calculated and compared. A diamond device containing 2 charge collection regions (Schottky and p ... [ARL-TR-8189, Oct 2017, US Army Research Laboratory: Design of Alpha-Voltaic Power Source Using Americium-241 (241Am) and Diamond]

  7. A Program for Calculating and Plotting Synthetic Common-Source Seismic-Reflection Traces for Multilayered Earth Models.

    ERIC Educational Resources Information Center

    Ramananantoandro, Ramanantsoa

    1988-01-01

    Presented is a description of a BASIC program to be used on an IBM microcomputer for calculating and plotting synthetic seismic-reflection traces for multilayered earth models. Discusses finding raypaths for given source-receiver offsets using the "shooting method" and calculating the corresponding travel times. (Author/CW)
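
    The "shooting method" the abstract names is straightforward to transcribe outside BASIC. The following Python fragment is a hypothetical re-creation, not the program described: it bisects on the ray parameter until the surface offset of a ray reflected from the deepest interface matches the requested source-receiver offset, then reports the two-way travel time. The layer values are illustrative.

        # Two-point ray "shooting" for a stack of flat homogeneous layers.
        import numpy as np

        def offset_and_time(p, v, h):
            """Offset (km) and travel time (s) for ray parameter p through
            layers with velocities v (km/s) and thicknesses h (km)."""
            sin_t = p * v                      # Snell: sin(theta_i) = p*v_i
            cos_t = np.sqrt(1.0 - sin_t**2)
            x = 2.0 * np.sum(h * sin_t / cos_t)
            t = 2.0 * np.sum(h / (v * cos_t))
            return x, t

        def travel_time(offset, v, h, n_iter=60):
            """Bisect on p (offset grows monotonically with p)."""
            lo, hi = 0.0, 0.999 / v.max()      # stay below the critical p
            for _ in range(n_iter):
                mid = 0.5 * (lo + hi)
                x, _ = offset_and_time(mid, v, h)
                lo, hi = (mid, hi) if x < offset else (lo, mid)
            return offset_and_time(0.5 * (lo + hi), v, h)[1]

        v = np.array([2.0, 3.0, 4.5])          # layer velocities, km/s
        h = np.array([0.5, 1.0, 1.5])          # layer thicknesses, km
        print(travel_time(2.0, v, h))          # reflection time, 2 km offset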

  8. The electromagnetic radiation from simple sources in the presence of a homogeneous dielectric sphere

    NASA Technical Reports Server (NTRS)

    Mason, V. B.

    1973-01-01

    In this research, the effect of a homogeneous dielectric sphere on the electromagnetic radiation from simple sources is treated as a boundary value problem, and the solution is obtained by the technique of dyadic Green's functions. Exact representations of the electric fields in the various regions due to a source located inside, outside, or on the surface of a dielectric sphere are formulated. Particular attention is given to the effect of sphere size, source location, dielectric constant, and dielectric loss on the radiation patterns and directivity of small spheres (less than 5 wavelengths in diameter) using the Huygens' source excitation. The computed results are found to closely agree with those measured for waveguide-excited plexiglas spheres. Radiation patterns for an extended Huygens' source and for curved electric dipoles located on the sphere's surface are also presented. The resonance phenomenon associated with the dielectric sphere is studied in terms of the modal representation of the radiated fields. It is found that when the sphere is excited at certain frequencies, much of the energy is radiated into the sidelobes. The addition of a moderate amount of dielectric loss, however, quickly attenuates this resonance effect. A computer program which may be used to calculate the directivity and radiation pattern of a Huygens' source located inside or on the surface of a lossy dielectric sphere is listed.

  9. An Improved Neutron Transport Algorithm for Space Radiation

    NASA Technical Reports Server (NTRS)

    Heinbockel, John H.; Clowdsley, Martha S.; Wilson, John W.

    2000-01-01

    A low-energy neutron transport algorithm for use in space radiation protection is developed. The algorithm is based upon a multigroup analysis of the straight-ahead Boltzmann equation by using a mean value theorem for integrals. This analysis is accomplished by solving a realistic but simplified neutron transport test problem. The test problem is analyzed by using numerical and analytical procedures to obtain an accurate solution within specified error bounds. Results from the test problem are then used for determining mean values associated with rescattering terms that are associated with a multigroup solution of the straight-ahead Boltzmann equation. The algorithm is then coupled to the Langley HZETRN code through the evaporation source term. Evaluation of the neutron fluence generated by the solar particle event of February 23, 1956, for a water and an aluminum-water shield-target configuration is then compared with LAHET and MCNPX Monte Carlo code calculations for the same shield-target configuration. The algorithm developed showed a great improvement in results over the unmodified HZETRN solution. In addition, a two-directional solution of the evaporation source showed even further improvement of the fluence near the front of the water target where diffusion from the front surface is important.

  10. Progress in the development of PDF turbulence models for combustion

    NASA Technical Reports Server (NTRS)

    Hsu, Andrew T.

    1991-01-01

    A combined Monte Carlo-computational fluid dynamic (CFD) algorithm was developed recently at Lewis Research Center (LeRC) for turbulent reacting flows. In this algorithm, conventional CFD schemes are employed to obtain the velocity field and other velocity-related turbulent quantities, and a Monte Carlo scheme is used to solve the evolution equation for the probability density function (pdf) of species mass fraction and temperature. In combustion computations, the predictions of chemical reaction rates (the source terms in the species conservation equations) are poor if conventional turbulence models are used. The main difficulty lies in the fact that the reaction rate is highly nonlinear, and the use of averaged temperature produces excessively large errors. Moment closure models for the source terms have attained only limited success. The pdf method seems to be the only alternative at the present time that uses local instantaneous values of the temperature, density, etc., in predicting chemical reaction rates, and thus may be the only viable approach for more accurate turbulent combustion calculations. Assumed pdf's are useful in simple problems; however, for more general combustion problems, the solution of an evolution equation for the pdf is necessary.
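
    The nonlinearity argument above is easy to verify numerically. The fragment below is a toy demonstration with assumed values, not part of the LeRC algorithm: it compares the average of an Arrhenius-type rate over fluctuating temperatures with the rate evaluated at the mean temperature.

        # Average of k(T) = exp(-Ta/T) over fluctuating T vs k(mean T).
        import numpy as np

        rng = np.random.default_rng(1)
        Ta = 15000.0                            # activation temperature, K
        T = rng.normal(1500.0, 150.0, 100000)   # fluctuating samples, K

        rate = np.exp(-Ta / T)
        print("mean of k(T): %.3e" % rate.mean())
        print("k(mean T):    %.3e" % np.exp(-Ta / T.mean()))
        # the first is markedly larger, which is why pdf methods evaluate
        # rates at instantaneous rather than averaged temperatures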

  11. Global threat to agriculture from invasive species.

    PubMed

    Paini, Dean R; Sheppard, Andy W; Cook, David C; De Barro, Paul J; Worner, Susan P; Thomas, Matthew B

    2016-07-05

    Invasive species present significant threats to global agriculture, although how the magnitude and distribution of the threats vary between countries and regions remains unclear. Here, we present an analysis of almost 1,300 known invasive insect pests and pathogens, calculating the total potential cost of these species invading each of 124 countries of the world, as well as determining which countries present the greatest threat to the rest of the world given their trading partners and incumbent pool of invasive species. We find that countries vary in terms of potential threat from invasive species and also their role as potential sources, with apparently similar countries sometimes varying markedly depending on specifics of agricultural commodities and trade patterns. Overall, the biggest agricultural producers (China and the United States) could experience the greatest absolute cost from further species invasions. However, developing countries, in particular, Sub-Saharan African countries, appear most vulnerable in relative terms. Furthermore, China and the United States represent the greatest potential sources of invasive species for the rest of the world. The analysis reveals considerable scope for ongoing redistribution of known invasive pests and highlights the need for international cooperation to slow their spread.

  12. A goal-based angular adaptivity method for thermal radiation modelling in non grey media

    NASA Astrophysics Data System (ADS)

    Soucasse, Laurent; Dargaville, Steven; Buchan, Andrew G.; Pain, Christopher C.

    2017-10-01

    This paper investigates for the first time a goal-based angular adaptivity method for thermal radiation transport, suitable for non grey media when the radiation field is coupled with an unsteady flow field through an energy balance. Anisotropic angular adaptivity is achieved by using a Haar wavelet finite element expansion that forms a hierarchical angular basis with compact support and does not require any angular interpolation in space. The novelty of this work lies in (1) the definition of a target functional to compute the goal-based error measure equal to the radiative source term of the energy balance, which is the quantity of interest in the context of coupled flow-radiation calculations; (2) the use of different optimal angular resolutions for each absorption coefficient class, built from a global model of the radiative properties of the medium. The accuracy and efficiency of the goal-based angular adaptivity method is assessed in a coupled flow-radiation problem relevant for air pollution modelling in street canyons. Compared to a uniform Haar wavelet expansion, the adapted resolution uses 5 times fewer angular basis functions and is 6.5 times quicker, given the same accuracy in the radiative source term.

  13. Theoretical simulation of the multipole seismoelectric logging while drilling

    NASA Astrophysics Data System (ADS)

    Guan, Wei; Hu, Hengshan; Zheng, Xiaobo

    2013-11-01

    Acoustic logging-while-drilling (LWD) technology has been commercially used in the petroleum industry. However, it remains a rather difficult task to invert formation compressional and shear velocities from acoustic LWD signals due to the unwanted strong collar wave, which covers or interferes with signals from the formation. In this paper, seismoelectric LWD is investigated as a way to solve that problem. The seismoelectric field is calculated by solving a modified Poisson's equation, whose source term is the electric disturbance induced electrokinetically by the travelling seismic wave. The seismic wavefield itself is obtained by solving Biot's equations for poroelastic waves. From the simulated waveforms and the semblance plots for monopole, dipole and quadrupole sources, it is found that the electric field accompanies the collar wave as well as the other wave groups of the acoustic pressure, despite the fact that seismoelectric conversion occurs only in porous formations. The collar wave in the electric field, however, is significantly weakened compared with that in the acoustic pressure, in terms of its amplitude relative to the other wave groups in the full waveforms. Thus fewer and shallower grooves are required to damp the collar wave if the seismoelectric LWD signals are recorded for extracting formation compressional and shear velocities.

  14. Global threat to agriculture from invasive species

    PubMed Central

    Paini, Dean R.; Sheppard, Andy W.; Cook, David C.; De Barro, Paul J.; Worner, Susan P.; Thomas, Matthew B.

    2016-01-01

    Invasive species present significant threats to global agriculture, although how the magnitude and distribution of the threats vary between countries and regions remains unclear. Here, we present an analysis of almost 1,300 known invasive insect pests and pathogens, calculating the total potential cost of these species invading each of 124 countries of the world, as well as determining which countries present the greatest threat to the rest of the world given their trading partners and incumbent pool of invasive species. We find that countries vary in terms of potential threat from invasive species and also their role as potential sources, with apparently similar countries sometimes varying markedly depending on specifics of agricultural commodities and trade patterns. Overall, the biggest agricultural producers (China and the United States) could experience the greatest absolute cost from further species invasions. However, developing countries, in particular, Sub-Saharan African countries, appear most vulnerable in relative terms. Furthermore, China and the United States represent the greatest potential sources of invasive species for the rest of the world. The analysis reveals considerable scope for ongoing redistribution of known invasive pests and highlights the need for international cooperation to slow their spread. PMID:27325781

  15. Surveillance system for air pollutants by combination of the decision support system COMPAS and optical remote sensing systems

    NASA Astrophysics Data System (ADS)

    Flassak, Thomas; de Witt, Helmut; Hahnfeld, Peter; Knaup, Andreas; Kramer, Lothar

    1995-09-01

    COMPAS is a decision support system designed to assist in assessing the consequences of accidental releases of toxic and flammable substances. One of the key elements of COMPAS is a feedback algorithm which allows the source term to be calculated with the aid of concentration measurements. Until now, the feedback technique has been applied to concentration measurements made with test tubes or conventional point sensors. In this paper, an extension of the method is presented: the combination of COMPAS with an optical remote sensing system such as the KAYSER-THREDE K300 FTIR system. Active remote sensing methods based on FTIR are, among other applications, ideal for so-called fence-line monitoring of diffuse emissions and accidental releases from industrial facilities, since averaged concentration levels along the measurement path can be obtained from the FTIR spectra. These line-averaged concentrations are ideally suited as on-line input for COMPAS' feedback technique. Uncertainties in the assessment of the source term, related both to shortcomings of the dispersion model itself and to problems of a feedback strategy based on point measurements, are thereby reduced.

  16. Unit with Fluidized Bed for Gas-Vapor Activation of Different Carbonaceous Materials for Various Purposes: Design, Computation, Implementation.

    PubMed

    Strativnov, Eugene

    2017-12-01

    We propose a technology for obtaining a promising material with a wide spectrum of applications: activated nanostructured carbon. In terms of technical indicators, it stands alongside materials produced by complex process schemes that rely on costly chemical operations. It can be used for the following needs: as a sorbent for hemosorption and enterosorption, for the creation of new electric current sources (lithium and zinc-air batteries, supercapacitors), and for short-cycle adsorption gas-separation processes. In this study, the author gives recommendations concerning the design of fluidized-bed apparatus and examples of the calculation of specific devices. This information can be used as guidelines for the design of energy-efficient units. Calculation and design of the reactor were carried out using modern software packages (ANSYS and SolidWorks).

  17. Examining the impact of harmonic correlation on vibrational frequencies calculated in localized coordinates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanson-Heine, Magnus W. D., E-mail: magnus.hansonheine@nottingham.ac.uk

    Carefully choosing a set of optimized coordinates for performing vibrational frequency calculations can significantly reduce the anharmonic correlation energy from the self-consistent field treatment of molecular vibrations. However, moving away from normal coordinates also introduces an additional source of correlation energy arising from mode-coupling at the harmonic level. The impact of this new component of the vibrational energy is examined for a range of molecules, and a method is proposed for correcting the resulting self-consistent field frequencies by adding the full coupling energy from connected pairs of harmonic and pseudoharmonic modes, termed vibrational self-consistent field (harmonic correlation). This approach is found to lift the vibrational degeneracies arising from coordinate optimization and provides better agreement with experimental and benchmark frequencies than uncorrected vibrational self-consistent field theory without relying on traditional correlated methods.

  18. Air pollution forecasting in Ankara, Turkey using air pollution index and its relation to assimilative capacity of the atmosphere.

    PubMed

    Genc, D Deniz; Yesilyurt, Canan; Tuncel, Gurdal

    2010-07-01

    Spatial and temporal variations in concentrations of CO, NO, NO2, SO2, and PM10, measured between 1999 and 2000 at traffic-impacted and residential stations in Ankara, were investigated. Air quality in residential areas was found to be influenced by traffic activities in the city. Pollutant ratios proved to be reliable tracers for differentiating between sources. An air pollution index (API) for the whole city was calculated to evaluate the level of air quality in Ankara, and a multiple linear regression model was developed for forecasting it. The correlation coefficients were found to be 0.79 and 0.63 for different time periods. The assimilative capacity of the Ankara atmosphere was calculated in terms of the ventilation coefficient (VC). The relation between API and VC was investigated, and it was found that air quality in Ankara is determined by meteorology rather than by emissions.
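
    The ventilation coefficient is conventionally the product of mixing height and the mean wind speed through the mixed layer; the paper's exact regression is not reproduced here. The sketch below uses entirely synthetic data and an assumed API-VC relation, purely to illustrate the two calculations the abstract names.

        # Ventilation coefficient and a one-predictor linear API forecast.
        import numpy as np

        def ventilation_coefficient(mixing_height_m, wind_speed_ms):
            """VC (m^2/s) = mixing height (m) x mean mixed-layer wind (m/s)."""
            return mixing_height_m * wind_speed_ms

        rng = np.random.default_rng(7)
        vc = ventilation_coefficient(rng.uniform(200.0, 2000.0, 365),
                                     rng.uniform(0.5, 8.0, 365))
        api = 120.0 - 0.01 * vc + rng.normal(0.0, 10.0, 365)  # assumed link

        # ordinary least squares, standing in for the multi-predictor model
        A = np.column_stack([np.ones_like(vc), vc])
        coef, *_ = np.linalg.lstsq(A, api, rcond=None)
        print("r = %.2f" % np.corrcoef(A @ coef, api)[0, 1])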

  19. MetroBeta: Beta Spectrometry with Metallic Magnetic Calorimeters in the Framework of the European Program of Ionizing Radiation Metrology

    NASA Astrophysics Data System (ADS)

    Loidl, M.; Beyer, J.; Bockhorn, L.; Enss, C.; Györi, D.; Kempf, S.; Kossert, K.; Mariam, R.; Nähle, O.; Paulsen, M.; Rodrigues, M.; Schmidt, M.

    2018-05-01

    MetroBeta is a European project aiming at the improvement of the knowledge of the shapes of beta spectra, both in terms of theoretical calculations and measurements. It is part of a common European program of ionizing radiation metrology. Metallic magnetic calorimeters (MMCs) with the beta emitter embedded in the absorber have in the past proven to be among the best beta spectrometers, in particular for low-energy beta transitions. Within this project, new designs of MMCs optimized for five different beta energy ranges were developed. A new detector module with thermal decoupling of MMC and SQUID chips was designed. An important aspect of the research and development concerns the source/absorber preparation techniques. Four beta spectra with maximum energies ranging from 76 to 709 keV will be measured. Improved theoretical calculation methods and complementary measurement techniques complete the project.

  20. Nuclear reactor transient analysis via a quasi-static kinetics Monte Carlo method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jo, YuGwon; Cho, Bumhee; Cho, Nam Zin, E-mail: nzcho@kaist.ac.kr

    2015-12-31

    The predictor-corrector quasi-static (PCQS) method is applied to the Monte Carlo (MC) calculation for reactor transient analysis. To solve the transient fixed-source problem of the PCQS method, fission source iteration is used, and a linear approximation of fission source distributions during a macro-time step is introduced to provide the delayed neutron source. The conventional particle-tracking procedure is modified to solve the transient fixed-source problem via MC calculation. The PCQS method with MC calculation is compared with the direct time-dependent method of characteristics (MOC) on a TWIGL two-group problem for verification of the computer code. Then, results on a continuous-energy problem are presented.

  1. Basic and Exceptional Calculation Abilities in a Calculating Prodigy: A Case Study.

    ERIC Educational Resources Information Center

    Pesenti, Mauro; Seron, Xavier; Samson, Dana; Duroux, Bruno

    1999-01-01

    Describes the basic and exceptional calculation abilities of a calculating prodigy whose performances were investigated in single- and multi-digit number multiplication, numerical comparison, raising of powers, and short-term memory tasks. Shows how his highly efficient long-term memory storage and retrieval processes, knowledge of calculation…

  2. Theory and Performance of AIMS for Active Interrogation

    NASA Astrophysics Data System (ADS)

    Walters, William J.; Royston, Katherine E. K.; Haghighat, Alireza

    2014-06-01

    A hybrid Monte Carlo and deterministic methodology has been developed for application to active interrogation systems. The methodology consists of four steps: i) determination of the neutron flux distribution due to neutron source transport and subcritical multiplication; ii) generation of the gamma source distribution from (n, γ) interactions; iii) determination of the gamma current at a detector window; iv) detection of gammas by the detector. This paper discusses the theory and results of the first three steps for the case of a cargo container with a sphere of HEU in third-density water. In the first step, a response-function formulation has been developed to calculate the subcritical multiplication and neutron flux distribution. Response coefficients are pre-calculated using the MCNP5 Monte Carlo code. The second step uses the calculated neutron flux distribution and Bugle-96 (n, γ) cross sections to find the resulting gamma source distribution. Finally, in the third step the gamma source distribution is coupled with a pre-calculated adjoint function to determine the gamma flux at a detector window. A code, AIMS (Active Interrogation for Monitoring Special-Nuclear-materials), has been written to output the gamma current for a source-detector assembly scanning across the cargo using the pre-calculated values; it takes significantly less time than a reference MCNP5 calculation.

  3. Long-term aerosol measurements in Gran Canaria, Canary Islands: Particle concentration, sources and elemental composition

    NASA Astrophysics Data System (ADS)

    Gelado-Caballero, María D.; López-García, Patricia; Prieto, Sandra; Patey, Matthew D.; Collado, Cayetano; Hernández-Brito, José J.

    2012-02-01

    There are very few sets of long-term measurements of aerosol concentrations over the North Atlantic Ocean, yet such data are invaluable in quantifying atmospheric dust inputs to this ocean region. We present an 8-year record of total suspended particles (TSP) collected at three stations on Gran Canaria Island, Spain (Taliarte at sea level, Tafira at 269 m above sea level (a.s.l.) and Pico de la Gorra at 1930 m a.s.l.). Using wet and dry deposition measurements, the mean dust flux was calculated at 42.3 mg m^-2 d^-1. Air mass back trajectories (HYSPLIT, NOAA) suggested that the Sahara desert is the major source of African dust (dominant during 32-50% of days), while the Sahel was the major source only 2-10% of the time (maximum in summer). Elemental composition ratios of African samples indicate that, despite the homogeneity of the dust in collected samples, some signatures of the bedrocks can still be detected. Differences were found for the Sahel, Central Sahara and North of Sahara regions in Ti/Al, Mg/Al and Ca/Al ratios, respectively. Elements often associated with pollution (Pb, Cd, Ni, Zn) appeared to share a common origin, while Cu may have a predominantly local source, as suggested by a decrease in the enrichment factor (EF) of Cu during dust events. The inter-annual variability of dust concentrations is also investigated in this work. During winter, African dust concentration measurements at the Pico de la Gorra station were found to correlate with the North Atlantic Oscillation (NAO) index.
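
    A minimal sketch of the crustal enrichment factor used above, with Al as the crustal reference element; the reference X/Al ratios below are illustrative upper-crust values, since the paper's reference composition is not stated in the abstract.

        import numpy as np

        # EF(X) = (X/Al)_aerosol / (X/Al)_crust; EF near 1 suggests a crustal
        # origin, EF >> 1 suggests pollution or another non-crustal source.
        CRUST_RATIO = {"Pb": 2.1e-4, "Cu": 3.1e-4, "Zn": 8.4e-4}  # illustrative X/Al mass ratios

        def enrichment_factor(x_aerosol, al_aerosol, element):
            """Crustal enrichment factor from aerosol concentrations (same units)."""
            return (x_aerosol / al_aerosol) / CRUST_RATIO[element]

        # Example: measured concentrations in ng/m^3 during a dust event
        print(enrichment_factor(x_aerosol=5.0, al_aerosol=30000.0, element="Cu"))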

  4. A novel integrated approach for the hazardous radioactive dust source terms estimation in future nuclear fusion power plants.

    PubMed

    Poggi, L A; Malizia, A; Ciparisse, J F; Gaudio, P

    2016-10-01

    An open issue still under investigation by several international entities working in the safety and security field for the foreseen nuclear fusion reactors is the estimation of source terms that are a hazard for operators and the public, and for the machine itself in terms of efficiency and integrity in case of severe accident scenarios. Source term estimation is a crucial safety issue to be addressed in future reactor safety assessments, and the estimates available at this time are not sufficiently satisfactory. The lack of neutronic data, along with the insufficiently accurate methodologies used until now, calls for an integrated methodology for source term estimation that can provide predictions with adequate accuracy. This work proposes a complete methodology to estimate dust source terms starting from a broad information gathering. The wide number of parameters that can influence dust source term production is reduced with statistical tools using a combination of screening, sensitivity analysis, and uncertainty analysis. Finally, a preliminary and simplified methodology for predicting dust source term production in future devices is presented.

  5. Fuselage boundary-layer refraction of fan tones radiated from an installed turbofan aero-engine.

    PubMed

    Gaffney, James; McAlpine, Alan; Kingan, Michael J

    2017-03-01

    A distributed source model to predict fan tone noise levels of an installed turbofan aero-engine is extended to include the refraction effects caused by the fuselage boundary layer. The model is a simple representation of an installed turbofan, where fan tones are represented in terms of spinning modes radiated from a semi-infinite circular duct, and the aircraft's fuselage is represented by an infinitely long, rigid cylinder. The distributed source is a disk, formed by integrating infinitesimal volume sources located on the intake duct termination. The cylinder is located adjacent to the disk. There is uniform axial flow, aligned with the axis of the cylinder, everywhere except close to the cylinder, where there is a constant-thickness boundary layer. The aim is to predict the near-field acoustic pressure, and in particular the pressure on the cylindrical fuselage, which is relevant for assessing cabin noise. Thus no far-field approximations are included in the modelling. The effect of the boundary layer is quantified by calculating the area-averaged mean square pressure over the cylinder's surface with and without the boundary layer included in the prediction model. The sound propagation through the boundary layer is calculated by solving the Pridmore-Brown equation. Results from the theoretical method show that the boundary layer has a significant effect on the predicted sound pressure levels of fan tones radiated onto the cylindrical fuselage from an installed turbofan aero-engine.

  6. Characterizing the source properties of terrestrial gamma ray flashes

    NASA Astrophysics Data System (ADS)

    Dwyer, Joseph R.; Liu, Ningyu; Eric Grove, J.; Rassoul, Hamid; Smith, David M.

    2017-08-01

    Monte Carlo simulations are used to determine source properties of terrestrial gamma ray flashes (TGFs) as a function of atmospheric column depth and beaming geometry. The total mass per unit area traversed by all the runaway electrons (i.e., the total grammage) during a TGF, Ξ, is introduced, defined to be the total distance traveled by all the runaway electrons along the electric field lines multiplied by the local air mass density along their paths. It is shown that key properties of TGFs may be directly calculated from Ξ and its time derivative, including the gamma ray emission rate, the current moment, and the optical power of the TGF. For the calculations presented in this paper, a standard TGF gamma ray fluence, F_0 = 0.1 cm^-2 above 100 keV for a spacecraft altitude of 500 km, and a standard total grammage, Ξ_0 = 10^18 g/cm^2, are introduced, and results are presented in terms of these values. In particular, the current moments caused by the runaway electrons and their accompanying ionization are found for a standard TGF fluence, as a function of source altitude and beaming geometry, allowing a direct comparison between the gamma rays measured in low-Earth orbit and the VLF-LF radio frequency emissions recorded on the ground. Such comparisons should help test and constrain TGF models and help identify the roles of lightning leaders and streamers in the production of TGFs.
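
    A compact restatement of the grammage defined above, in notation of my own following the abstract's verbal definition: summing over all runaway electrons i and integrating the local air mass density ρ along each electron's path,

        \Xi = \sum_{i} \int_{\mathrm{path}\ i} \rho(s)\,\mathrm{d}s

    so the quantities the paper derives, such as the gamma ray emission rate and the current moment, follow from this single bookkeeping quantity and its time derivative.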

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Purwaningsih, Anik

    Dosimetric data for a brachytherapy source should be known before it is used for clinical treatment. The Iridium-192 source type H01, manufactured by PRR-BATAN for brachytherapy, does not yet have known dosimetric data. The radial dose function and the anisotropic dose distribution are primary characteristics of a brachytherapy source. The dose distribution for the Iridium-192 source type H01 was obtained from the dose calculation formalism recommended in the AAPM TG-43U1 report using the MCNPX 2.6.0 Monte Carlo simulation code. To determine the effect of the cavity in the Iridium-192 type H01 source caused by the manufacturing process, calculations were also performed for an Iridium-192 type H01 source without the cavity. The calculated radial dose function and anisotropic dose distribution for the Iridium-192 source type H01 were compared with another model of Iridium-192 source.
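
    For reference, the dose-rate formalism of the cited AAPM TG-43U1 report has the standard form below (air-kerma strength S_K, dose-rate constant Λ, line-source geometry function G_L, radial dose function g_L, 2D anisotropy function F, reference point r_0 = 1 cm, θ_0 = π/2); the radial dose function and anisotropy results discussed above are the g_L(r) and F(r,θ) factors of this equation:

        \dot{D}(r,\theta) = S_K\,\Lambda\,\frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\,g_L(r)\,F(r,\theta)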

  8. Accident Source Terms for Pressurized Water Reactors with High-Burnup Cores Calculated using MELCOR 1.8.5.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gauntt, Randall O.; Goldmann, Andrew; Kalinich, Donald A.

    2016-12-01

    In this study, risk-significant pressurized-water reactor severe accident sequences are examined using MELCOR 1.8.5 to explore the range of fission product releases to the reactor containment building. Advances in the understanding of fission product release and transport behavior and severe accident progression are used to render best-estimate analyses of selected accident sequences. Particular emphasis is placed on estimating the effects of high fuel burnup, in contrast with low burnup, on fission product releases to the containment. Supporting this emphasis, recent data available on fission product release from high-burnup (HBU) fuel from the French VERCOR project are used in this study. The results of these analyses are treated as samples from a population of accident sequences in order to employ approximate order statistics characterization of the results. These trends and tendencies are then compared to the NUREG-1465 alternative source term prescription used today for regulatory applications. In general, greater differences are observed between the state-of-the-art calculations for either HBU or low-burnup (LBU) fuel and the NUREG-1465 containment release fractions than exist between HBU and LBU release fractions. Current analyses suggest that retention of fission products within the vessel and the reactor coolant system (RCS) is greater than contemplated in the NUREG-1465 prescription, and that, overall, release fractions to the containment are therefore lower across the board in the present analyses than suggested in NUREG-1465. The decreased volatility of Cs2MoO4 compared to CsI or CsOH increases the predicted RCS retention of cesium, and as a result, cesium and iodine do not follow identical behaviors with respect to distribution among vessel, RCS, and containment. Additionally, current analyses suggest that the NUREG-1465 release fractions are conservative by about a factor of 2, and that release durations for the in-vessel and late in-vessel release periods are in fact longer than the NUREG-1465 durations. It is currently planned that a subsequent report will further characterize these results using more refined statistical methods, permitting a more precise reformulation of the NUREG-1465 alternative source term for both LBU and HBU fuels; the most important finding is that the NUREG-1465 formula appears to embody significant conservatism compared to current best-estimate analyses. Acknowledgements: This work was supported by the United States Nuclear Regulatory Commission, Office of Nuclear Regulatory Research. The authors would like to thank Dr. Ian Gauld and Dr. Germina Ilas, of Oak Ridge National Laboratory, for their contributions to this work. In addition to development of core fission product inventory and decay heat information for use in MELCOR models, their insights related to fuel management practices and the resulting effects on the spatial distribution of fission products in the core were instrumental in completion of our work.

  9. On the calculation of atomic term populations

    NASA Technical Reports Server (NTRS)

    Kastner, S. O.; Bhatia, A. K.

    1992-01-01

    The usefulness of calculations on model atomic term systems which can give spectral multiplet intensities is emphasized, in contrast to more detailed level calculations which are not always feasible because of lack of appropriate atomic data. A more general expression for the multiplet radiative transition rate is proposed to facilitate term representations. The differences between term and level representations are discussed quantitatively with respect to a model three-level atom and real examples of the C III and Ne IV ions. It is shown that term representations fail at lower densities when level inverse lifetimes within terms differ by only a few orders of magnitude. In such cases one must resort to other methods; a hybrid calculation is therefore proposed to fill this need and is carried out for the C III ion to demonstrate its feasibility and validity.

  10. Initial conditions in high-energy collisions

    NASA Astrophysics Data System (ADS)

    Petreska, Elena

    This thesis is focused on the initial stages of high-energy collisions in the saturation regime. We start by extending the McLerran-Venugopalan distribution of color sources in the initial wave-function of nuclei in heavy-ion collisions. We derive a fourth-order operator in the action and discuss its relevance for the description of color charge distributions in protons in high-energy experiments. We calculate the dipole scattering amplitude in proton-proton collisions with the quartic action and find an agreement with experimental data. We also obtain a modification to the fluctuation parameter of the negative binomial distribution of particle multiplicities in proton-proton experiments. The result implies an advancement of the fourth-order action towards Gaussian when the energy is increased. Finally, we calculate perturbatively the expectation value of the magnetic Wilson loop operator in the first moments of heavy-ion collisions. For the magnetic flux we obtain a first non-trivial term that is proportional to the square of the area of the loop. The result is close to numerical calculations for small area loops.

  11. Reflection full-waveform inversion using a modified phase misfit function

    NASA Astrophysics Data System (ADS)

    Cui, Chao; Huang, Jian-Ping; Li, Zhen-Chun; Liao, Wen-Yuan; Guan, Zhe

    2017-09-01

    Reflection full-waveform inversion (RFWI) updates the low- and high-wavenumber components and yields more accurate initial models compared with conventional full-waveform inversion (FWI). However, there is strong nonlinearity in conventional RFWI because of the lack of low-frequency data and the complexity of the amplitude. The separation of phase and amplitude information makes RFWI more linear. Traditional phase-calculation methods suffer from severe phase wrapping. To solve this problem, we propose a modified phase-calculation method that uses the phase-envelope data to obtain pseudo-phase information. Then, we establish a pseudo-phase-information-based objective function for RFWI, with the corresponding source and gradient terms. Numerical tests verify that the proposed calculation method using the phase-envelope data guarantees the stability and accuracy of the phase information and the convergence of the objective function. Application to a portion of the Sigsbee2A model and comparison with inversion results of the improved RFWI and conventional FWI methods verify that the pseudo-phase-based RFWI produces a highly accurate and efficient velocity model. Moreover, the proposed method is robust to noise and high frequencies.
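
    A minimal sketch of one way to form envelope and phase data from a trace via the analytic signal; this only illustrates the kind of quantities involved and is not the authors' exact pseudo-phase construction, which the abstract does not spell out.

        import numpy as np
        from scipy.signal import hilbert

        def phase_envelope(trace):
            """Return the envelope and instantaneous phase of a trace
            computed from its analytic signal."""
            analytic = hilbert(trace)
            envelope = np.abs(analytic)
            phase = np.unwrap(np.angle(analytic))  # unwrapping mitigates phase wrapping
            return envelope, phase

        # Example: a Ricker-like wavelet as a stand-in for a seismic trace
        t = np.linspace(-1.0, 1.0, 2001)
        trace = (1 - 2 * (np.pi * 5 * t) ** 2) * np.exp(-(np.pi * 5 * t) ** 2)
        env, ph = phase_envelope(trace)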

  12. SCALE Code System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T.; Jessee, Matthew Anderson

    The SCALE Code System is a widely used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor and lattice physics, radiation shielding, spent fuel and radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules, including three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE's graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results.

  13. Advanced Doubling Adding Method for Radiative Transfer in Planetary Atmospheres

    NASA Astrophysics Data System (ADS)

    Liu, Quanhua; Weng, Fuzhong

    2006-12-01

    The doubling adding method (DA) is one of the most accurate tools for detailed multiple-scattering calculations. The principle of the method goes back to the nineteenth century in a problem dealing with reflection and transmission by glass plates. Since then the doubling adding method has been widely used as a reference tool for other radiative transfer models. The method has never been used in operational applications owing to tremendous demand on computational resources from the model. This study derives an analytical expression replacing the most complicated thermal source terms in the doubling adding method. The new development is called the advanced doubling adding (ADA) method. Thanks also to the efficiency of matrix and vector manipulations in FORTRAN 90/95, the advanced doubling adding method is about 60 times faster than the doubling adding method. The radiance (i.e., forward) computation code of ADA is easily translated into tangent linear and adjoint codes for radiance gradient calculations. The simplicity in forward and Jacobian computation codes is very useful for operational applications and for the consistency between the forward and adjoint calculations in satellite data assimilation.

  14. SCALE Code System 6.2.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T.; Jessee, Matthew Anderson

    The SCALE Code System is a widely used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor and lattice physics, radiation shielding, spent fuel and radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules, including three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE's graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results.

  15. Atmospheric characterization on the Kennedy Space Center Shuttle Landing Facility

    NASA Astrophysics Data System (ADS)

    Ko, Jonathan; Coffaro, Joseph; Wu, Chensheng; Paulson, Daniel; Davis, Christopher

    2017-08-01

    Large temperature gradients are a known source of strong atmospheric turbulence. Often, areas of strong turbulence are also accompanied by conditions that make it difficult to conduct long-term optical atmospheric tests. The Shuttle Landing Facility (SLF) at the Kennedy Space Center (KSC) provides a prime testing environment that is capable of generating strong atmospheric turbulence yet is also easily accessible for well-instrumented testing. The Shuttle Landing Facility features a 5000 m long and 91 m wide concrete runway that provides ample space for measurements of atmospheric turbulence as well as the opportunity for large temperature gradients to form as the sun heats the surface. We present the results of a large-aperture LED scintillometer, a triple-aperture laser scintillometer, and a thermal probe system that were used to obtain path-averaged and point values of Cn^2. In addition, we present the results of the Plenoptic Sensor, which was used to calculate a path-averaged Cn^2 value. These measurements were conducted over a multi-day continuous test with supporting atmospheric and weather data provided by the University of Central Florida.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yifeng; Xu, Huifang

    Correctly identifying the possible alteration products and accurately predicting their occurrence in a repository-relevant environment are key to the source-term calculation in a repository performance assessment. Uraninite in uranium deposits has long been used as a natural analog to spent fuel in a repository because of their chemical and structural similarity. In this paper, a SEM/AEM investigation has been conducted on a partially altered uraninite sample from a uranium ore deposit at Shinkolobwe, Congo. The mineral formation sequences were identified: uraninite → uranyl hydrates → uranyl silicates → Ca-uranyl silicates, or uraninite → uranyl silicates → Ca-uranyl silicates. Reaction-path calculations were conducted for the oxidative dissolution of spent fuel in a representative Yucca Mountain groundwater. The predicted sequence is in general consistent with the SEM observations. The calculations also show that uranium carbonate minerals are unlikely to become major solubility-controlling mineral phases in a Yucca Mountain environment. Some discrepancies between model predictions and field observations are observed; these may result from poorly constrained thermodynamic data for uranyl silicate minerals.

  17. A Methodology for the Integration of a Mechanistic Source Term Analysis in a Probabilistic Framework for Advanced Reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grabaskas, Dave; Brunett, Acacia J.; Bucknor, Matthew

    GE Hitachi Nuclear Energy (GEH) and Argonne National Laboratory are currently engaged in a joint effort to modernize and develop probabilistic risk assessment (PRA) techniques for advanced non-light-water reactors. At a high level, the primary outcome of this project will be the development of next-generation PRA methodologies that will enable risk-informed prioritization of safety- and reliability-focused research and development, while also identifying gaps that may be resolved through additional research. A subset of this effort is the development of PRA methodologies to conduct a mechanistic source term (MST) analysis for event sequences that could result in the release of radionuclides. The MST analysis seeks to realistically model and assess the transport, retention, and release of radionuclides from the reactor to the environment. The MST methods developed during this project seek to satisfy the requirements of the Mechanistic Source Term element of the ASME/ANS Non-LWR PRA standard. The MST methodology consists of separate analysis approaches for risk-significant and non-risk-significant event sequences that may result in the release of radionuclides from the reactor. For risk-significant event sequences, the methodology focuses on a detailed assessment, using mechanistic models, of radionuclide release from the fuel, transport through and release from the primary system, transport in the containment, and finally release to the environment. The analysis approach for non-risk-significant event sequences examines the possibility of large radionuclide releases due to events such as re-criticality or the complete loss of radionuclide barriers. This paper provides details on the MST methodology, including the interface between the MST analysis and other elements of the PRA, and provides a simplified example MST calculation for a sodium fast reactor.

  18. Gaseous Nitrogen Orifice Mass Flow Calculator

    NASA Technical Reports Server (NTRS)

    Ritrivi, Charles

    2013-01-01

    The Gaseous Nitrogen (GN2) Orifice Mass Flow Calculator was used to determine Space Shuttle Orbiter Water Spray Boiler (WSB) GN2 high-pressure tank source depletion rates for various leak scenarios, and the ability of the GN2 consumables to support cooling of Auxiliary Power Unit (APU) lubrication during entry. The data were used to support flight rationale concerning loss of an orbiter APU/hydraulic system and mission work-arounds. The GN2 mass flow-rate calculator standardizes a method for rapid assessment of GN2 mass flow through various orifice sizes for various discharge coefficients, delta pressures, and temperatures. The calculator utilizes a 0.9-lb (0.4 kg) GN2 source regulated to 40 psia (approximately 276 kPa). These parameters correspond to the Space Shuttle WSB GN2 Source and Water Tank Bellows, but can be changed in the spreadsheet to accommodate any system parameters. The calculator can be used to analyze a leak source, leak rate, gas consumables depletion time, and puncture diameter that simulates the measured GN2 system pressure drop.
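
    A minimal sketch of the standard isentropic choked-flow orifice relation that such a calculator typically evaluates; the spreadsheet's exact correlations are not given in the abstract, and all parameter values below are illustrative.

        import math

        def choked_mass_flow(cd, d_orifice_m, p0_pa, t0_k, gamma=1.4, r_gas=296.8):
            """Choked (sonic) mass flow of an ideal gas through an orifice,
            mdot = Cd*A*P0*sqrt(gamma/(R*T0))*(2/(gamma+1))^((gamma+1)/(2(gamma-1))).
            Defaults gamma and r_gas (J/(kg*K)) are for nitrogen; valid only
            while the back pressure is below the critical ratio (~0.53*P0)."""
            area = math.pi * (d_orifice_m / 2.0) ** 2
            term = (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))
            return cd * area * p0_pa * math.sqrt(gamma / (r_gas * t0_k)) * term

        # Example: 0.5 mm diameter leak from a 276 kPa source at 295 K
        mdot = choked_mass_flow(cd=0.9, d_orifice_m=0.5e-3, p0_pa=276e3, t0_k=295.0)
        print(f"leak rate: {mdot * 1e3:.3f} g/s")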

  19. eHCM: Resource reduction and demand increase, covering the gap with a managerial approach powered by IT solutions.

    PubMed

    Buccioli, Matteo; Agnoletti, Vanni; Padovani, Emanuele; Perger, Peter

    2014-01-01

    The economic and financial crisis has also had an important impact on the healthcare sector. Available resources have decreased, while at the same time costs as well as demand for healthcare services are on the rise. This negative impact on the availability of healthcare resources is exacerbated even further by widespread ignorance of management accounting matters: limited knowledge of costs is itself a strong driver of cost growth. Although it is broadly recognized that cost accounting has a positive impact on healthcare organizations, it is not widely adopted. Hospitals are essential components in providing overall healthcare. Operating rooms (ORs) are critical hospital units, not only in terms of patient safety but also in terms of expenditure. Understanding OR procedures in the hospital provides important information about how healthcare resources are used. There have been several scientific studies on management accounting in healthcare environments, and more than ever there is a need for innovation, particularly by connecting business administration research findings to modern IT tools. IT adoption constitutes one of the most important innovation fields within the healthcare sector, with beneficial effects on decision-making processes. The e-HCM (e-Healthcare Cost Management) project consists of a cost calculation model which is applicable to business intelligence. The cost calculation approach comprises elements from both traditional cost accounting and activity-based costing. Direct costs for all surgical procedures can be calculated through a seven-step implementation process.

  20. Does one need the O(ε)- and O(ε^2)-terms of one-loop amplitudes in a next-to-next-to-leading order calculation?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weinzierl, Stefan

    2011-10-01

    This article discusses the occurrence of one-loop amplitudes within a next-to-next-to-leading-order calculation. In such a calculation the one-loop amplitude enters squared, and one would therefore naively expect that the O(ε)- and O(ε^2)-terms of the one-loop amplitude are required. I show that the calculation of these terms can be avoided if a method is known which computes the O(ε^0)-terms of the finite remainder function of the two-loop amplitude.

  1. Multisource Estimation of Long-term Global Terrestrial Surface Radiation

    NASA Astrophysics Data System (ADS)

    Peng, L.; Sheffield, J.

    2017-12-01

    Land surface net radiation is the essential energy source at the earth's surface. It determines the surface energy budget and its partitioning, drives the hydrological cycle by providing available energy, and offers heat, light, and energy for biological processes. Individual components of net radiation have changed historically due to natural and anthropogenic climate change and land use change. Decadal variations in radiation such as global dimming or brightening have important implications for the hydrological and carbon cycles. In order to assess the trends and variability of net radiation and evapotranspiration, there is a need for accurate estimates of long-term terrestrial surface radiation. While large progress has been made in measuring the top-of-atmosphere energy budget, huge discrepancies exist among ground observations, satellite retrievals, and reanalysis fields of surface radiation, due to the lack of observational networks, the difficulty of measuring from space, and the uncertainty in algorithm parameters. To overcome the weaknesses of single-source datasets, we propose a multi-source merging approach to fully utilize and combine multiple datasets of the radiation components separately, as they are complementary in space and time. First, we conduct diagnostic analysis of multiple satellite and reanalysis datasets based on in-situ measurements such as the Global Energy Balance Archive (GEBA), existing validation studies, and other information such as network density and consistency with other meteorological variables. Then, we calculate the optimal weighted average of the multiple datasets by minimizing the variance of the error between in-situ measurements and other observations. Finally, we quantify the uncertainties in the estimates of surface net radiation and employ physical constraints based on the surface energy balance to reduce these uncertainties. The final dataset is evaluated in terms of its long-term variability and its attribution to changes in individual components. The goal of this study is to provide a merged observational benchmark for large-scale diagnostic analyses, remote sensing and land surface modeling.
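
    A minimal sketch of the variance-minimizing weighted average described above; the estimates and error variances are illustrative, and in practice the per-dataset variances would come from comparison against in-situ measurements such as GEBA.

        import numpy as np

        def merge_datasets(estimates, error_vars):
            """Minimum-variance weighted average of independent estimates of
            the same quantity; weights are proportional to 1/variance."""
            x = np.asarray(estimates, dtype=float)
            v = np.asarray(error_vars, dtype=float)
            w = (1.0 / v) / np.sum(1.0 / v)
            merged = np.sum(w * x)
            merged_var = 1.0 / np.sum(1.0 / v)  # variance of the merged estimate
            return merged, merged_var

        # Example: surface net radiation at one grid cell (W/m^2) from three
        # sources (say, two satellite products and one reanalysis)
        rn, var = merge_datasets([142.0, 155.0, 148.0], [25.0, 64.0, 36.0])
        print(f"merged Rn = {rn:.1f} W/m^2 (error variance {var:.1f})")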

  2. Dualities and Topological Field Theories from Twisted Geometries

    NASA Astrophysics Data System (ADS)

    Markov, Ruza

    I will present three studies of string theory on twisted geometries. In the first calculation included in this dissertation, we use gauge/gravity duality to study the Coulomb branch of an unusual type of nonlocal field theory, called Puff Field Theory. On the gravity side, this theory is given in terms of D3-branes in type IIB string theory with a geometric twist, while the field theory description, available in the IR limit, is a deformation of Yang-Mills gauge theory by an order-seven operator, which we here compute. In the rest of this dissertation we explore N = 4 super Yang-Mills (SYM) theory compactified on a circle with S-duality and R-symmetry twists that preserve N = 6 supersymmetry in 2 + 1D. It was shown that the abelian theory on a flat manifold gives Chern-Simons theory in the low-energy limit, and here we are interested in the non-abelian counterpart. To that end, we introduce external static supersymmetric quark and anti-quark sources into the theory and calculate the Witten index of the resulting Hilbert space of ground states on a two-torus. Using these results we compute the action of simple Wilson loops on the Hilbert space of ground states without sources. In some cases we find disagreement between our results for the Wilson loop eigenvalues and previous conjectures about a connection with Chern-Simons theory. The last result discussed in this dissertation demonstrates a connection between gravitational Chern-Simons theory and N = 4 four-dimensional SYM theory compactified on a circle twisted by S-duality, where the remaining three-manifold is not flat, starting with the explicit geometric realization of S-duality in terms of (2, 0) theory.

  3. Coronary CT angiography with single-source and dual-source CT: comparison of image quality and radiation dose between prospective ECG-triggered and retrospective ECG-gated protocols.

    PubMed

    Sabarudin, Akmal; Sun, Zhonghua; Yusof, Ahmad Khairuddin Md

    2013-09-30

    This study was conducted to investigate and compare image quality and radiation dose between prospective ECG-triggered and retrospective ECG-gated coronary CT angiography (CCTA) with the use of single-source CT (SSCT) and dual-source CT (DSCT). A total of 209 patients with suspected coronary artery disease who underwent CCTA with SSCT (n=95) and DSCT (n=114) scanners using prospective ECG-triggered and retrospective ECG-gated protocols were recruited from two institutions. Image quality was assessed by two experienced observers, while quantitative assessment was performed by measuring the image noise, the signal-to-noise ratio (SNR) and the contrast-to-noise ratio (CNR). Effective dose was calculated using the latest published conversion coefficient factor. A total of 2087 out of 2880 coronary artery segments were assessable, with 98.0% classified as of sufficient and 2.0% as of insufficient image quality for clinical diagnosis. There was no significant difference in overall image quality between the prospective ECG-triggered and retrospective ECG-gated protocols, whether performed with DSCT or SSCT scanners. For the prospective ECG-triggered protocol, effective dose did not differ significantly between DSCT (6.5 ± 2.9 mSv) and SSCT (6.2 ± 1.0 mSv) scanners (p=0.99). However, in the retrospective ECG-gated protocol, the effective dose was significantly lower with DSCT (18.2 ± 8.3 mSv) than with SSCT (28.3 ± 7.0 mSv). Prospective ECG-triggered CCTA reduces radiation dose significantly compared to retrospective ECG-gated CCTA, while maintaining good image quality.

  4. Methodology for worker neutron exposure evaluation in the PDCF facility design.

    PubMed

    Scherpelz, R I; Traub, R J; Pryor, K H

    2004-01-01

    A project headed by Washington Group International aims to design the Pit Disassembly and Conversion Facility (PDCF) to convert the plutonium pits from excessed nuclear weapons into plutonium oxide for ultimate disposition. Battelle staff are performing the shielding calculations that will determine appropriate shielding so that the facility workers will not exceed target exposure levels. The target exposure levels for workers in the facility are 5 mSv y^-1 for the whole body and 100 mSv y^-1 for the extremity, which presents a significant challenge to the designers of a facility that will process tons of radioactive material. The design effort depended on shielding calculations to determine appropriate thickness and composition for glove box walls, and concrete wall thicknesses for storage vaults. Pacific Northwest National Laboratory (PNNL) staff used ORIGEN-S and SOURCES to generate gamma and neutron source terms, and the Monte Carlo neutron-photon transport code MCNP-4C to calculate the radiation transport in the facility. The shielding calculations were performed by a team of four scientists, so it was necessary to develop a consistent methodology. There was also a requirement for the study to be cost-effective, so efficient methods of evaluation were required. The calculations were subject to rigorous scrutiny by internal and external reviewers, so acceptability was a major feature of the methodology. Some of the issues addressed in the development of the methodology included selecting appropriate dose factors, developing a method for handling extremity doses, adopting an efficient method for evaluating effective dose equivalent in a non-uniform radiation field, modelling the reinforcing steel in concrete, and modularising the geometry descriptions for efficiency. The relative importance of the neutron dose equivalent compared with the gamma dose equivalent varied substantially depending on the specific shielding conditions, and lessons were learned from this effect. This paper addresses these issues and the resulting methodology.

  5. Comment on "An Efficient and Stable Hydrodynamic Model With Novel Source Term Discretization Schemes for Overland Flow and Flood Simulations" by Xilin Xia et al.

    NASA Astrophysics Data System (ADS)

    Lu, Xinhua; Mao, Bing; Dong, Bingjiang

    2018-01-01

    Xia et al. (2017) proposed a novel, fully implicit method for the discretization of the bed friction terms for solving the shallow-water equations. The friction terms contain h^(-7/3) (h denotes water depth), which may become extremely large and introduce machine error as h approaches zero. To address this problem, Xia et al. (2017) introduce auxiliary variables (their equations (37) and (38)) so that h^(-4/3) rather than h^(-7/3) is calculated, and solve a transformed equation (their equation (39)). The introduced auxiliary variables require extra storage. We analyzed the magnitude of the friction terms and found that, taken as a whole, they do not exceed the machine floating-point range; we therefore propose a simple-to-implement technique that splits h^(-7/3) across the different parts of the friction terms to avoid introducing machine error. This technique does not need extra storage or a transformed equation and is thus more efficient for simulations. We also show that the surface reconstruction method proposed by Xia et al. (2017) may lead to predictions with spurious wiggles because the reconstructed Riemann states may misrepresent the water gravitational effect.
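
    A minimal sketch of the splitting idea, assuming a Manning-type friction term S_f = g n^2 q|q| / h^(7/3); the grouping below is illustrative, not the comment's exact factorization. Distributing the powers of h across the factors keeps every intermediate within floating-point range even for vanishing depths, whereas evaluating h^(-7/3) on its own overflows first.

        import numpy as np

        G = 9.81  # gravitational acceleration (m/s^2)

        def friction_naive(q, h, n):
            # h**(-7/3) alone overflows to inf as h -> 0, poisoning the result
            return G * n**2 * q * np.abs(q) * h**(-7.0 / 3.0)

        def friction_split(q, h, n):
            # Split h^(-7/3) across the factors: the velocity-like factor q/h
            # stays bounded where the unit discharge vanishes with depth, so
            # no extreme intermediates arise
            u = q / h
            return G * n**2 * u * np.abs(u) * h**(-1.0 / 3.0)

        q, h, n = np.float64(1e-140), np.float64(1e-135), 0.03
        print(friction_naive(q, h, n))   # inf: intermediate h^(-7/3) overflowed
        print(friction_split(q, h, n))   # finite, physically negligible value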

  6. Cost of care of haemophilia with inhibitors.

    PubMed

    Di Minno, M N D; Di Minno, G; Di Capua, M; Cerbone, A M; Coppola, A

    2010-01-01

    In Western countries, the treatment of patients with inhibitors is presently the most challenging and serious issue in haemophilia management, with direct costs of clotting factor concentrates accounting for >98% of the economic burden of healthcare for patients in this setting. Being designed to address questions of resource allocation and effectiveness, decision models are the gold standard for reliably assessing the overall economic implications of haemophilia with inhibitors in terms of mortality, bleeding-related morbidity, and severity of arthropathy. However, most data analyses presently stem from retrospective short-term evaluations that only allow for the analysis of direct health costs. In the setting of chronic diseases, cost-utility analysis, which takes into account the beneficial effects of a given treatment or healthcare intervention in terms of health-related quality of life, is likely to be the most appropriate approach. To calculate net benefits, the quality-adjusted life year, which significantly reflects such health gain, has to be compared with specific economic impacts. Differences in data sources, in medical practice and/or in healthcare systems and costs imply that most current pharmacoeconomic analyses are confined to a narrow healthcare-payer perspective. Long-term/lifetime prospective or observational studies, devoted to a careful definition of when to start a treatment, of regimens (dose and type of product) to employ, and of the inhibitor population (children/adults, low-responding/high-responding inhibitors) to study, are thus urgently needed to allow for newer insights, based on reliable data sources, into resource allocation, effectiveness and cost-utility analysis in the treatment of haemophiliacs with inhibitors.

  7. Water Vapor Tracers as Diagnostics of the Regional Hydrologic Cycle

    NASA Technical Reports Server (NTRS)

    Bosilovich, Michael G.; Schubert, Siegfried D.; Einaudi, Franco (Technical Monitor)

    2001-01-01

    Numerous studies suggest that local feedback of surface evaporation on precipitation, or recycling, is a significant source of water for precipitation. Quantitative results on the exact amount of recycling have been difficult to obtain in view of the inherent limitations of diagnostic recycling calculations. The current study describes a calculation of the amount of local and remote geographic sources of surface evaporation for precipitation, based on the implementation of three-dimensional constituent tracers of regional water vapor sources (termed water vapor tracers, WVT) in a general circulation model. The major limitation on the accuracy of the recycling estimates is the veracity of the numerically simulated hydrological cycle, though we note that this approach can also be implemented within the context of a data assimilation system. In the WVT approach, each tracer is associated with an evaporative source region for a prognostic three-dimensional variable that represents a partial amount of the total atmospheric water vapor. The physical processes that act on a WVT are determined in proportion to those that act on the model's prognostic water vapor. In this way, the local and remote sources of water for precipitation can be predicted within the model simulation, and can be validated against the model's prognostic water vapor. As a demonstration of the method, the regional hydrologic cycles for North America and India are evaluated for six summers (June, July and August) of model simulation. More than 50% of the precipitation in the Midwestern United States came from continental regional sources, and the local source was the largest of the regional tracers (14%). The Gulf of Mexico and Atlantic regions contributed 18% of the water for Midwestern precipitation, but further analysis suggests that the greater region of the Tropical Atlantic Ocean may also contribute significantly. In most North American continental regions, the local source of precipitation is correlated with total precipitation. There is a general positive correlation between local evaporation and local precipitation, but it can be weaker because large evaporation can occur when precipitation is inhibited. In India, the local source of precipitation is a small percentage of the precipitation owing to the dominance of the atmospheric transport of oceanic water. The southern Indian Ocean provides a key source of water for both the Indian continent and the Sahelian region.
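
    A schematic statement of the tracer bookkeeping described above, in my notation rather than the paper's: if q is the model's total prognostic water vapor and q_k the tracer tagged to evaporative source region k, each sink acting on the total vapor, such as precipitation P, removes tracer in proportion to its share of the total,

        P_k = P \, \frac{q_k}{q}, \qquad \sum_k q_k = q

    so the tagged contributions sum back to the model's total precipitation, which is the consistency check against the prognostic water vapor mentioned above.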

  8. Finite Moment Tensors of Southern California Earthquakes

    NASA Astrophysics Data System (ADS)

    Jordan, T. H.; Chen, P.; Zhao, L.

    2003-12-01

    We have developed procedures for inverting broadband waveforms for the finite moment tensors (FMTs) of regional earthquakes. The FMT is defined in terms of second-order polynomial moments of the source space-time function and provides the lowest-order representation of a finite fault rupture; it removes the fault-plane ambiguity of the centroid moment tensor (CMT) and yields several additional parameters of seismological interest: the characteristic length L_c, width W_c, and duration T_c of the faulting, as well as the directivity vector v_d of the fault slip. To formulate the inverse problem, we follow and extend the methods of McGuire et al. [2001, 2002], who have successfully recovered the second-order moments of large earthquakes using low-frequency teleseismic data. We express the Fourier spectra of a synthetic point-source waveform in its exponential (Rytov) form and represent the observed waveform relative to the synthetic in terms of two frequency-dependent differential times, a phase delay δτ_p(ω) and an amplitude-reduction time δτ_q(ω), which we measure using Gee and Jordan's [1992] isolation-filter technique. We numerically calculate the FMT partial derivatives in terms of second-order spatiotemporal gradients, which allows us to use 3D finite-difference seismograms as our isolation filters. We have applied our methodology to a set of small to medium-sized earthquakes in Southern California. The errors in anelastic structure introduced perturbations larger than the signal level caused by the finite-source effect. We have therefore employed a joint inversion technique that recovers the CMT parameters of the aftershocks, as well as the CMT and FMT parameters of the mainshock, under the assumption that the source finiteness of the aftershocks can be ignored. The joint system of equations relating the δτ_p and δτ_q data to the source parameters of the mainshock-aftershock cluster is denuisanced for path anomalies in both observables; this projection operation effectively corrects the mainshock data for path-related amplitude anomalies in a way similar to, but more flexible than, empirical Green function (EGF) techniques.
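
    Schematically, and up to Fourier sign conventions, the Rytov-form relation between observed and synthetic spectra that defines the two observables can be written (my notation, consistent with GSDF-style measurements) as

        u_{\mathrm{obs}}(\omega) \approx u_{\mathrm{syn}}(\omega)\, \exp\!\left[\, i\omega\,\delta\tau_p(\omega) - \omega\,\delta\tau_q(\omega) \,\right]

    so a positive δτ_p delays the phase and a positive δτ_q reduces the amplitude relative to the synthetic.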

  9. Fine-Tuning the Accretion Disk Clock in Hercules X-1

    NASA Technical Reports Server (NTRS)

    Still, M.; Boyd, P.

    2004-01-01

    RXTE ASM count rates from the X-ray pulsar Her X-1 began falling consistently during the late months of 2003. The source is undergoing another state transition similar to the anomalous low state of 1999. This new event has triggered observations from both space and ground-based observatories. In order to aid data interpretation and telescope scheduling, and to facilitate the phase-connection of cycles before and after the state transition, we have re-calculated the precession ephemeris using cycles over the last 3.5 years. We report that the source has displayed a different precession period since the last anomalous event. Additional archival data from CGRO suggests that each low state is accompanied by a change in precession period and that the subsequent period is correlated with accretion flux. Consequently our analysis reveals long-term accretion disk behaviour which is predicted by theoretical models of radiation-driven warping.

  10. Correlation between Ti source/drain contact and performance of InGaZnO-based thin film transistors

    NASA Astrophysics Data System (ADS)

    Choi, Kwang-Hyuk; Kim, Han-Ki

    2013-02-01

    Ti contact properties and their electrical contribution to an amorphous InGaZnO (a-IGZO) semiconductor-based thin film transistor (TFT) were investigated in terms of chemical, structural, and electrical considerations. TFT device parameters were quantitatively studied by a transmission line method. By comparing various a-IGZO TFT parameters with those of different Ag and Ti source/drain (S/D) electrodes, Ti S/D contact with an a-IGZO channel was found to lead to a negative shift in V_T (ΔV_T = -0.52 V). This resulted in higher saturation mobility (8.48 cm^2/V·s) of a-IGZO TFTs due to an effective interfacial reaction between Ti and the a-IGZO semiconducting layer. Based on transmission electron microscopy, x-ray photoelectron depth profile analyses, and numerical calculation of TFT parameters, we suggest a possible Ti contact mechanism on semiconducting a-IGZO channel layers for TFTs.

  11. Information theoretic approach for assessing image fidelity in photon-counting arrays.

    PubMed

    Narravula, Srikanth R; Hayat, Majeed M; Javidi, Bahram

    2010-02-01

    The method of photon-counting integral imaging has been introduced recently for three-dimensional object sensing, visualization, recognition and classification of scenes under photon-starved conditions. This paper presents an information-theoretic model for the photon-counting imaging (PCI) method, thereby providing a rigorous foundation for the merits of PCI in terms of image fidelity. This, in turn, can facilitate our understanding of the demonstrated success of photon-counting integral imaging in compressive imaging and classification. The mutual information between the source and photon-counted images is derived in a Markov random field setting and normalized by the source-image's entropy, yielding a fidelity metric that is between zero and unity, which respectively corresponds to complete loss of information and full preservation of information. Calculations suggest that the PCI fidelity metric increases with spatial correlation in source image, from which we infer that the PCI method is particularly effective for source images with high spatial correlation; the metric also increases with the reduction in photon-number uncertainty. As an application to the theory, an image-classification problem is considered showing a congruous relationship between the fidelity metric and classifier's performance.
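
    In symbols (notation mine), with S the source image, Y the photon-counted image, I the mutual information, and H the entropy, the fidelity metric described above is

        F = \frac{I(S;Y)}{H(S)}, \qquad 0 \le F \le 1

    with F = 0 corresponding to complete loss of information and F = 1 to full preservation.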

  12. The photon fluence non-uniformity correction for air kerma near Cs-137 brachytherapy sources.

    PubMed

    Rodríguez, M L; deAlmeida, C E

    2004-05-07

    The use of brachytherapy sources in radiation oncology requires their proper calibration to guarantee the correctness of the dose delivered to the treatment volume of a patient. One of the elements to take into account in the dose calculation formalism is the non-uniformity of the photon fluence due to the beam divergence that causes a steep dose gradient near the source. The correction factors for this phenomenon have been usually evaluated by the two theories available, both of which were conceived only for point sources. This work presents the Monte Carlo assessment of the non-uniformity correction factors for a Cs-137 linear source and a Farmer-type ionization chamber. The results have clearly demonstrated that for linear sources there are some important differences among the values obtained from different calculation models, especially at short distances from the source. The use of experimental values for each specific source geometry is recommended in order to assess the non-uniformity factors for linear sources in clinical situations that require special dose calculations or when the correctness of treatment planning software is verified during the acceptance tests.

  13. Examination of the suitability of an implementation of the Jette localized heterogeneities fluence term L(1)(x,y,z) in an electron beam treatment planning algorithm

    NASA Astrophysics Data System (ADS)

    Rodebaugh, Raymond Francis, Jr.

    2000-11-01

    In this project we applied modifications of the Fermi-Eyges multiple scattering theory to attempt to achieve the goals of a fast, accurate electron dose calculation algorithm. The dose was first calculated for an "average configuration" based on the patient's anatomy using a modification of the Hogstrom algorithm. It was split into a measured central-axis depth dose component based on the material between the source and the dose calculation point, and an off-axis component based on the physics of multiple Coulomb scattering for the average configuration. The former provided the general depth dose characteristics along the beam fan lines, while the latter provided the effects of collimation. The Gaussian localized heterogeneities theory of Jette provided the lateral redistribution of the electron fluence by heterogeneities. Here we terminated Jette's infinite series of fluence redistribution terms after the second term. Experimental comparison data were collected for 1 cm thick x 1 cm diameter air and aluminum pillboxes using the Varian 2100C linear accelerator at Rush-Presbyterian-St. Luke's Medical Center. For the air pillbox, the algorithm results were in reasonable agreement with measured data at both 9 and 20 MeV. For the aluminum pillbox, there were significant discrepancies between the results of this algorithm and experiment. This was particularly apparent for the 9 MeV beam. Of course, a 1 cm thick aluminum heterogeneity is unlikely to be encountered in a clinical situation; the thickness, linear stopping power, and linear scattering power of aluminum are all well above what would normally be encountered. We found that the algorithm is highly sensitive to the choice of the average configuration. This is an indication that the series of fluence redistribution terms does not converge fast enough to terminate after the second term. It also makes it difficult to apply the algorithm to cases where there are no a priori means of choosing the best average configuration or where there is a complex geometry containing both weakly and strongly scattering heterogeneities. There is some hope of decreasing the sensitivity to the average configuration by including portions of the next term of the localized heterogeneities series.

  14. TU-AB-BRC-10: Modeling of Radiotherapy Linac Source Terms Using ARCHER Monte Carlo Code: Performance Comparison of GPU and MIC Computing Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, T; Lin, H; Xu, X

    Purpose: (1) To perform phase space (PS) based source modeling for Tomotherapy and Varian TrueBeam 6 MV Linacs, (2) to examine the accuracy and performance of the ARCHER Monte Carlo code on a heterogeneous computing platform with Many Integrated Core coprocessors (MIC, aka Xeon Phi) and GPUs, and (3) to explore software micro-optimization methods. Methods: The patient-specific source of the Tomotherapy and Varian TrueBeam Linacs was modeled using the PS approach. For the helical Tomotherapy case, the PS data were calculated in our previous study (Su et al. 2014 41(7) Medical Physics). For the single-view Varian TrueBeam case, we analytically derived them from the raw patient-independent PS data in IAEA's database, partial geometry information of the jaw and MLC, as well as the fluence map. The phantom was generated from DICOM images. The Monte Carlo simulation was performed by the ARCHER-MIC and GPU codes, which were benchmarked against a modified parallel DPM code. Software micro-optimization was systematically conducted, focused on SIMD vectorization of tight for-loops and data prefetch, with the ultimate goal of increasing 512-bit register utilization and reducing memory access latency. Results: Dose calculation was performed for two clinical cases, a Tomotherapy-based prostate cancer treatment and a TrueBeam-based left breast treatment. ARCHER was verified against the DPM code. The statistical uncertainty of the dose to the PTV was less than 1%. Using double precision, the total wall time of the multithreaded CPU code on a X5650 CPU was 339 seconds for the Tomotherapy case and 131 seconds for the TrueBeam case, while on three 5110P MICs it was reduced to 79 and 59 seconds, respectively. The single-precision GPU code on a K40 GPU took 45 seconds for the Tomotherapy dose calculation. Conclusion: We have extended ARCHER, the MIC- and GPU-based Monte Carlo dose engine, to Tomotherapy and TrueBeam dose calculations.

  15. Mapping AmeriFlux footprints: Towards knowing the flux source area across a network of towers

    NASA Astrophysics Data System (ADS)

    Menzer, O.; Pastorello, G.; Metzger, S.; Poindexter, C.; Agarwal, D.; Papale, D.

    2014-12-01

    The AmeriFlux network collects long-term carbon, water and energy flux measurements obtained with the eddy covariance method. In order to attribute fluxes to specific areas of the land surface, flux source area calculations are essential. Consequently, footprint models can support flux up-scaling exercises to larger regions, often based on remote sensing data. However, flux footprints are not currently being routinely calculated; different approaches exist but have not been standardized. In part, this is due to varying instrumentation and data processing methods at the site level. The goal of this work is to map tower footprints for a future standardized AmeriFlux product to be generated at the network level. These footprints can be estimated by analytical models, Lagrangian simulations, and large-eddy simulations. However, for many sites, the datasets currently submitted to central databases generally do not include all the required variables. The AmeriFlux network is moving to the collection of raw data and an expansion of the variables requested from sites, making it possible to calculate all parameters and variables needed to run most of the available footprint models. In this pilot study, we are applying state-of-the-art footprint models across a subset of AmeriFlux sites to evaluate the feasibility and merit of developing standardized footprint results. In addition to comparing outcomes from several footprint models, we will attempt to verify and validate the results in two ways: (i) verification of our footprint calculations at sites where footprints have been experimentally estimated; (ii) validation at towers situated in heterogeneous landscapes, where variations in the observed fluxes are expected to correlate with spatiotemporal variations of the source area composition. Once implemented, the footprint results can be used as additional information within the AmeriFlux database to support data interpretation and data assimilation. Lastly, we will explore the expandability of this approach to other flux networks by collaborating with and including sites from the ICOS and NEON networks in our analyses. This can enable utilizing the footprint model output to improve network interoperability, thus further promoting synthesis analyses and understanding of system-level questions in the future.
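
    For a flavor of what even the simplest analytical footprint model provides, the sketch below evaluates the classical neutral-stability, crosswind-integrated footprint of Schuepp et al. (1990). It is offered only as an illustration, since the study compares more sophisticated models, and the tower parameters here are invented.

```python
import numpy as np

def footprint_1d(x, zm, ustar, umean, k=0.4):
    """Crosswind-integrated footprint density f(x): relative contribution
    of upwind distance x to the flux measured at height zm (neutral
    conditions, Schuepp et al. 1990 approximation)."""
    c = umean * zm / (k * ustar)          # length scale of the footprint
    return (c / x**2) * np.exp(-c / x)    # integrates to 1 over x > 0

x = np.linspace(1.0, 2000.0, 4000)                   # upwind distance (m)
f = footprint_1d(x, zm=3.0, ustar=0.4, umean=4.0)    # invented tower values
cdf = np.cumsum(f) * (x[1] - x[0])
print(f"peak contribution at {x[np.argmax(f)]:.0f} m; "
      f"80% of the flux from within {x[np.searchsorted(cdf, 0.8)]:.0f} m")
```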

  16. Source effects on the simulation of the strong ground motion of the 2011 Lorca earthquake

    NASA Astrophysics Data System (ADS)

    Saraò, Angela; Moratto, Luca; Vuan, Alessandro; Mucciarelli, Marco; Jimenez, Maria Jose; Garcia Fernandez, Mariano

    2016-04-01

    On May 11, 2011, a moderate seismic event (Mw=5.2) struck the city of Lorca (south-east Spain), causing nine casualties, a large number of injuries, and damage to buildings. The largest PGA value (360 cm/s2) ever recorded in Spain was observed at the accelerometric station located in Lorca (LOR), and it was attributed to source directivity rather than to local site effects. In recent years, different source models, retrieved from inversions of geodetic or seismological data, or a combination of the two, have been published. To investigate the variability that equivalent source models of an average earthquake can introduce in the computation of strong motion, we calculated seismograms (up to 1 Hz) using an approach based on wavenumber integration and, as input, four different source models taken from the literature. The source models differ mainly in the slip distribution on the fault. Our results show that, as an effect of the different sources, the ground motion variability, in terms of pseudo-spectral velocity at 1 s, can reach one order of magnitude for near-source receivers or for sites influenced by the forward-directivity effect. Finally, we compute the strong motion at frequencies higher than 1 Hz using empirical Green's functions and the source model parameters that best reproduce the recorded shaking up to 1 Hz: the computed seismograms satisfactorily fit the signals recorded at the LOR station as well as at the other stations close to the source.
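
    Pseudo-spectral velocity, the ground-motion measure quoted above, is the peak relative displacement of a damped single-degree-of-freedom oscillator multiplied by its natural frequency. A minimal sketch, using a synthetic record rather than the Lorca accelerogram and a simple semi-implicit Euler integrator:

```python
import numpy as np

def pseudo_spectral_velocity(acc, dt, period=1.0, damping=0.05):
    """PSV = omega * max|u(t)| for a damped SDOF oscillator of natural
    period `period` driven by the ground acceleration history `acc`."""
    omega = 2.0 * np.pi / period
    u = v = umax = 0.0
    for a_g in acc:  # semi-implicit Euler time stepping
        a_rel = -a_g - 2.0 * damping * omega * v - omega**2 * u
        v += a_rel * dt
        u += v * dt
        umax = max(umax, abs(u))
    return omega * umax

dt = 0.01
t = np.arange(0.0, 20.0, dt)
acc = 100.0 * np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.3 * t)  # cm/s^2, synthetic
print(f"PSV(1 s, 5% damping) = {pseudo_spectral_velocity(acc, dt):.1f} cm/s")
```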

  17. A comprehensive classification method for VOC emission sources to tackle air pollution based on VOC species reactivity and emission amounts.

    PubMed

    Li, Guohao; Wei, Wei; Shao, Xia; Nie, Lei; Wang, Hailin; Yan, Xiao; Zhang, Rui

    2018-05-01

    In China, volatile organic compound (VOC) control directives have been continuously released and implemented for important sources and regions to tackle air pollution. The corresponding control requirements were based on VOC emission amounts (EA), but never considered the significant differentiation of VOC species in terms of atmospheric chemical reactivity. This will adversely influence the effect of VOC reduction on air quality improvement. Therefore, this study attempted to develop a comprehensive classification method for typical VOC sources in the Beijing-Tianjin-Hebei (BTH) region, by combining VOC emission amounts with the chemical reactivities of VOC species. Firstly, we obtained the VOC chemical profiles by measuring 5 key sources in the BTH region and referencing another 10 key sources, and estimated the ozone formation potential (OFP) per ton of VOC emission for these sources by using the maximum incremental reactivity (MIR) index as the characteristic of source reactivity (SR). Then, we applied the data normalization method to convert EA and SR to normalized EA (NEA) and normalized SR (NSR), respectively, for the various sources in the BTH region. Finally, the control index (CI) was calculated, and these sources were further classified into four grades based on the normalized CI (NCI). The study results showed that in the BTH region, furniture coating, automobile coating, and road vehicles are characterized by high NCI and need to be given more attention; however, the petrochemical industry, which was designated as an important control source by air quality managers, has a lower NCI. Copyright © 2017. Published by Elsevier B.V.
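
    The core bookkeeping of the method can be expressed compactly: OFP per ton is a MIR-weighted sum over the species profile, EA and SR are normalized, and the control index combines the two. The sketch below assumes min-max normalization and an additive CI, and uses placeholder profiles; the paper's measured BTH profiles and exact CI formula are not given in the abstract.

```python
# Placeholder species MIR values (g O3 per g VOC) and source profiles
# (weight fractions); the combination rule CI = NEA + NSR is an assumption.
mir = {"ethene": 9.0, "toluene": 4.0, "n-butane": 1.2}
profiles = {
    "furniture_coating": {"toluene": 0.7, "n-butane": 0.3},
    "road_vehicles": {"ethene": 0.4, "toluene": 0.3, "n-butane": 0.3},
}
emission_amounts = {"furniture_coating": 50.0, "road_vehicles": 200.0}  # kt/yr

def source_reactivity(profile):
    """OFP per ton of VOC emitted: weight fraction x MIR, summed over species."""
    return sum(frac * mir[sp] for sp, frac in profile.items())

def normalize(d):
    """Min-max normalization of a dict of values onto [0, 1]."""
    lo, hi = min(d.values()), max(d.values())
    return {k: (v - lo) / (hi - lo) if hi > lo else 0.0 for k, v in d.items()}

nea = normalize(emission_amounts)
nsr = normalize({s: source_reactivity(p) for s, p in profiles.items()})
nci = normalize({s: nea[s] + nsr[s] for s in profiles})
print(nci)  # higher NCI -> higher control priority
```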

  18. Measuring the 511 keV emission in the direction of 1E1740.7-2942 with BATSE

    NASA Technical Reports Server (NTRS)

    Wallyn, P.; Ling, J. C.; Mahoney, W. A.; Wheaton, W. A.; Durouchoux, P.; Corbel, S.; Astier-Perret, L.; Poirot, L.

    1997-01-01

    Observations of the 511 keV emission in the direction of 1E 1740.7-2942 (1E) using the Burst and Transient Source Experiment (BATSE) onboard the Compton Gamma Ray Observatory (CGRO) are presented. The CGRO phase 1 average spectrum of 1E is calculated using a method which assumes that a given source spectrum is the sum of the flux coming directly from the object and the contribution from the surrounding diffuse emission. The 1E light curve is calculated in the 40 to 150 keV range. It presents a constant flux excess of 70 mCrab in comparison with observations from the SIGMA gamma ray telescope onboard the GRANAT observatory. By removing this contribution, the 1E spectral transition from the low state to the high standard state observed by SIGMA is confirmed, and it is shown that the 511 keV flux is independent of the 1E long-term evolution from low state to high standard state. It is concluded that the 511 keV emission of (4.2 +/- 1.3) x 10(exp -4) photons/sq cm s observed in the direction of 1E is mainly diffuse and spatially extended.

  19. Multiagency Urban Search Experiment Detector and Algorithm Test Bed

    NASA Astrophysics Data System (ADS)

    Nicholson, Andrew D.; Garishvili, Irakli; Peplow, Douglas E.; Archer, Daniel E.; Ray, William R.; Swinney, Mathew W.; Willis, Michael J.; Davidson, Gregory G.; Cleveland, Steven L.; Patton, Bruce W.; Hornback, Donald E.; Peltz, James J.; McLean, M. S. Lance; Plionis, Alexander A.; Quiter, Brian J.; Bandstra, Mark S.

    2017-07-01

    In order to provide benchmark data sets for radiation detector and algorithm development, a particle transport test bed has been created using experimental data as model input and validation. A detailed radiation measurement campaign at the Combined Arms Collective Training Facility in Fort Indiantown Gap, PA (FTIG), USA, provides sample background radiation levels for a variety of materials present at the site (including cinder block, gravel, asphalt, and soil) using long-dwell high-purity germanium (HPGe) measurements. In addition, detailed light detection and ranging data and ground-truth measurements inform model geometry. This paper describes the collected data and the application of these data to create background and injected-source synthetic data for an arbitrary gamma-ray detection system using particle transport model detector response calculations and statistical sampling. In the methodology presented here, HPGe measurements inform model source terms while detector response calculations are validated via long-dwell measurements using 2"×4"×16" NaI(Tl) detectors at a variety of measurement points. A collection of responses, along with sampling methods and interpolation, can be used to create data sets to gauge radiation detector and algorithm (including detection, identification, and localization) performance under a variety of scenarios. Data collected at the FTIG site are available for query, filtering, visualization, and download at muse.lbl.gov.
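
    The injection step described here amounts to adding a detector-response term for the source to the expected background and drawing Poisson counts. A minimal sketch with invented rates (not the FTIG data):

```python
import numpy as np

rng = np.random.default_rng(7)

def synthesize_spectrum(bkg_rate, src_rate, live_time):
    """One synthetic acquisition: Poisson-sample the expected channel-by-
    channel counts from background plus an injected source response.

    bkg_rate, src_rate -- expected count rates per channel (counts/s)
    live_time          -- acquisition time (s)
    """
    return rng.poisson((bkg_rate + src_rate) * live_time)

n_channels = 1024
bkg = np.full(n_channels, 0.05)   # invented flat background rate
src = np.zeros(n_channels)
src[660:664] = 0.2                # hypothetical full-energy peak region
spectrum = synthesize_spectrum(bkg, src, live_time=2.0)
print(f"{spectrum.sum()} counts in a 2 s synthetic acquisition")
```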

  20. Do Methodological Choices in Environmental Modeling Bias Rebound Effects? A Case Study on Electric Cars.

    PubMed

    Font Vivanco, David; Tukker, Arnold; Kemp, René

    2016-10-18

    Improvements in resource efficiency often underperform because of rebound effects. Calculations of the size of rebound effects are subject to various types of bias, among which methodological choices have received particular attention. Modellers have primarily focused on choices related to changes in demand; however, choices related to modeling the environmental burdens from such changes have received less attention. In this study, we analyze choices of environmental assessment method (life cycle assessment (LCA) and hybrid LCA) and environmental input-output database (E3IOT, Exiobase and WIOD) as sources of bias. The analysis is done for a case study on battery electric and hydrogen cars in Europe. The results describe moderate rebound effects for both technologies in the short term. Additionally, long-run scenarios are calculated by simulating the total cost of ownership; under favorable economic conditions, these describe notable rebound effect sizes of 26 to 59% and 18 to 28%, respectively, depending on the methodological choices. Relevant sources of bias are found to be related to incomplete background systems, technology assumptions and sectorial aggregation. These findings highlight the importance of the method setup and of sensitivity analyses of choices related to environmental modeling in rebound effect assessments.

  1. Mesoscale carbon sequestration site screening and CCS infrastructure analysis.

    PubMed

    Keating, Gordon N; Middleton, Richard S; Stauffer, Philip H; Viswanathan, Hari S; Letellier, Bruce C; Pasqualini, Donatella; Pawar, Rajesh J; Wolfsberg, Andrew V

    2011-01-01

    We explore carbon capture and sequestration (CCS) at the meso-scale, a level of study between regional carbon accounting and highly detailed reservoir models for individual sites. We develop an approach to CO(2) sequestration site screening for industries or energy development policies that involves identification of an appropriate sequestration basin, analysis of geologic formations, definition of surface sites, design of infrastructure, and analysis of CO(2) transport and storage costs. Our case study involves carbon management for potential oil shale development in the Piceance-Uinta Basin, CO and UT. This study uses new capabilities of the CO(2)-PENS model for site screening, including reservoir capacity, injectivity, and cost calculations for simple reservoirs at multiple sites. We couple this with a model of optimized source-sink-network infrastructure (SimCCS) to design pipeline networks and minimize CCS cost for a given industry or region. The CLEAR(uff) dynamical assessment model calculates the CO(2) source term for various oil production levels. Nine sites in a 13,300 km(2) area have the capacity to store 6.5 GtCO(2), corresponding to shale-oil production of 1.3 Mbbl/day for 50 years (about 1/4 of U.S. crude oil production). Our results highlight the complex, nonlinear relationship between the spatial deployment of CCS infrastructure and the oil-shale production rate.

  2. Development and application of a hybrid transport methodology for active interrogation systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Royston, K.; Walters, W.; Haghighat, A.

    A hybrid Monte Carlo and deterministic methodology has been developed for application to active interrogation systems. The methodology consists of four steps: i) neutron flux distribution due to neutron source transport and subcritical multiplication; ii) generation of the gamma source distribution from (n,γ) interactions; iii) determination of the gamma current at a detector window; iv) detection of gammas by the detector. This paper discusses the theory and results of the first three steps for the case of a cargo container with a sphere of HEU in third-density water cargo. To complete the first step, a response-function formulation has been developed to calculate the subcritical multiplication and neutron flux distribution. Response coefficients are pre-calculated using the MCNP5 Monte Carlo code. The second step uses the calculated neutron flux distribution and Bugle-96 (n,γ) cross sections to find the resulting gamma source distribution. In the third step the gamma source distribution is coupled with a pre-calculated adjoint function to determine the gamma current at a detector window. The AIMS (Active Interrogation for Monitoring Special-Nuclear-Materials) software has been written to output the gamma current for a source-detector assembly scanning across a cargo container using the pre-calculated values, taking significantly less time than a reference MCNP5 calculation. (authors)
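
    Step iii) is a folding of the step-ii) gamma source with the pre-calculated adjoint function, cell by cell and group by group. The sketch below shows only that coupling step, with random stand-in arrays in place of the MCNP5-derived quantities:

```python
import numpy as np

rng = np.random.default_rng(0)

n_cells, n_groups = 500, 20            # hypothetical discretization
neutron_flux = rng.random((n_cells, n_groups))          # from step i)
n_gamma_xs = 1e-2 * rng.random((n_cells, n_groups))     # (n,gamma) production
adjoint = rng.random((n_cells, n_groups))               # pre-calculated adjoint
cell_volume = 1.0                                       # cm^3, uniform here

# Step ii): gamma source density from (n,gamma) interactions.
gamma_source = neutron_flux * n_gamma_xs

# Step iii): gamma current at the detector window as the adjoint-weighted
# integral of the source, R = sum_cells sum_groups adjoint * source * volume.
current = float((adjoint * gamma_source).sum() * cell_volume)
print(f"gamma current at detector window: {current:.3e} (arbitrary units)")
```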

  3. Velocity-induced heavy quarkonium dissociation using the gauge-gravity correspondence

    NASA Astrophysics Data System (ADS)

    Patra, Binoy Krishna; Khanchandani, Himanshu; Thakur, Lata

    2015-10-01

    Using the gauge-gravity duality, we have obtained the potential between a heavy quark-antiquark pair moving perpendicular to its direction of orientation in a strongly coupled supersymmetric hot plasma. For this purpose we work with a metric on the gravity side, namely the Ouyang-Klebanov-Strassler black hole geometry, whose gauge-theory dual runs with the energy and hence proves to be a better background for thermal QCD. The potential obtained has a confining term both in the vacuum and in a medium, in addition to the Coulomb term usually reported alone in the literature. As the velocity of the pair is increased, the screening of the potential weakens, which may be understood through the decrease of the effective temperature with increasing velocity. The crucial observation of our work is that, beyond a critical separation of the heavy quark pair, the potential develops an imaginary part, which is nowadays understood to be the main source of dissociation. The imaginary part is found to vanish at small r, thus agreeing with the perturbative result. Finally, we have estimated the thermal width for the ground and first excited states and found that nonzero rapidities lead to an increase of the thermal width. This implies that moving quarkonia dissociate more easily than static ones, in agreement with other calculations. However, the width in our case is larger than in other calculations due to the presence of the confining terms.

  4. 77 FR 19740 - Water Sources for Long-Term Recirculation Cooling Following a Loss-of-Coolant Accident

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-02

    ... NUCLEAR REGULATORY COMMISSION [NRC-2010-0249] Water Sources for Long-Term Recirculation Cooling... Regulatory Guide (RG) 1.82, ``Water Sources for Long-Term Recirculation Cooling Following a Loss-of-Coolant... regarding the sumps and suppression pools that provide water sources for emergency core cooling, containment...

  5. Radionuclides in the Arctic seas from the former Soviet Union: Potential health and ecological risks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Layton, D W; Edson, R; Varela, M

    1999-11-15

    The primary goal of the assessment reported here is to evaluate the health and environmental threat to coastal Alaska posed by radioactive-waste dumping in the Arctic and Northwest Pacific Oceans by the FSU. In particular, the FSU discarded 16 nuclear reactors from submarines and an icebreaker in the Kara Sea near the island of Novaya Zemlya, of which 6 contained spent nuclear fuel (SNF); disposed of liquid and solid wastes in the Sea of Japan; lost a {sup 90}Sr-powered radioisotope thermoelectric generator at sea in the Sea of Okhotsk; and disposed of liquid wastes at several sites in the Pacific Ocean, east of the Kamchatka Peninsula. In addition to these known sources in the oceans, the RAIG evaluated FSU waste-disposal practices at inland weapons-development sites that have contaminated major rivers flowing into the Arctic Ocean. The RAIG evaluated these sources for the potential for release to the environment, transport, and impact on Alaskan ecosystems and peoples through a variety of scenarios, including a worst-case total instantaneous and simultaneous release of the sources under investigation. The risk-assessment process described in this report is applicable to and can be used by other circumpolar countries, with the addition of information about specific ecosystems and human lifestyles. They can use the ANWAP risk-assessment framework and approach used by ONR to establish potential doses for Alaska, but add their own specific data sets about human and ecological factors. The ANWAP risk assessment addresses the following Russian wastes, media, and receptors: dumped nuclear submarines and icebreaker in the Kara Sea--marine pathways; solid reactor parts in the Sea of Japan and Pacific Ocean--marine pathways; thermoelectric generator in the Sea of Okhotsk--marine pathways; currently known aqueous wastes in Mayak reservoirs and Asanov Marshes--riverine to marine pathways; and Alaska as receptor. For the wastes and source terms addressed, other pathways, such as atmospheric transport, could be considered under future funded research efforts for impacts to Alaska. The ANWAP risk assessment does not address the following wastes, media, and receptors: radioactive sources in Alaska (except to add perspective for the Russian source term); radioactive wastes associated with Russian naval military operations and decommissioning; nonaqueous source terms from Russian production reactors and spent-fuel reprocessing facilities; atmospheric, terrestrial and nonaqueous pathways; and dose calculations for any circumpolar locality other than Alaska. These other, potentially serious sources of radioactivity to the Arctic environment, while outside the scope of the current ANWAP mandate, should be considered for future funded research efforts.

  6. SU-C-BRC-04: Efficient Dose Calculation Algorithm for FFF IMRT with a Simplified Bivariate Gaussian Source Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, F; Park, J; Barraclough, B

    2016-06-15

    Purpose: To develop an efficient and accurate independent dose calculation algorithm with a simplified analytical source model for the quality assurance and safe delivery of Flattening Filter Free (FFF) IMRT on an Elekta Versa HD. Methods: The source model consisted of a point source and a 2D bivariate Gaussian source, respectively modeling the primary photons and the combined effect of head scatter, monitor chamber backscatter and the collimator exchange effect. The in-air fluence was first calculated by back-projecting the edges of the beam-defining devices onto the source plane and integrating the visible source distribution. The effects of the rounded MLC leaf end, tongue-and-groove and interleaf transmission were taken into account in the back-projection. The in-air fluence was then modified with a fourth-degree polynomial modeling the cone-shaped dose distribution of FFF beams. The planar dose distribution was obtained by convolving the in-air fluence with a dose deposition kernel (DDK) consisting of the sum of three 2D Gaussian functions. The parameters of the source model and the DDK were commissioned using measured in-air output factors (Sc) and cross-beam profiles, respectively. A novel method was used to eliminate the volume averaging effect of ion chambers in determining the DDK. Planar dose distributions of five head-and-neck FFF-IMRT plans were calculated and compared against measurements performed with a 2D diode array (MapCHECK™) to validate the accuracy of the algorithm. Results: The proposed source model predicted Sc for both 6 MV and 10 MV with an accuracy better than 0.1%. With a stringent gamma criterion (2%/2mm/local difference), the passing rate of the FFF-IMRT dose calculation was 97.2±2.6%. Conclusion: The removal of the flattening filter represents a simplification of the head structure which allows the use of a simpler source model for very accurate dose calculation. The proposed algorithm offers an effective way to ensure the safe delivery of FFF-IMRT.
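
    The final convolution step is simple to sketch: because the DDK is a sum of three 2D Gaussians, the planar dose is a weighted sum of three Gaussian-blurred copies of the in-air fluence. The kernel weights, widths, and the fourth-degree profile coefficients below are invented, not the commissioned Versa HD values:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def planar_dose(fluence, pixel_mm, weights, sigmas_mm):
    """Convolve the in-air fluence with a DDK modeled as a weighted sum
    of three 2D Gaussians."""
    return sum(w * gaussian_filter(fluence, sigma=s / pixel_mm)
               for w, s in zip(weights, sigmas_mm))

n, pixel_mm = 200, 1.0
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2] * pixel_mm
r = np.hypot(x, y)
# 10x10 cm open field with a cone-shaped FFF profile approximated by a
# fourth-degree polynomial in off-axis distance (coefficients invented).
fluence = np.where((np.abs(x) < 50) & (np.abs(y) < 50),
                   1.0 - 2e-5 * r**2 - 1e-9 * r**4, 0.0)
dose = planar_dose(fluence, pixel_mm,
                   weights=[0.7, 0.2, 0.1], sigmas_mm=[1.0, 3.0, 10.0])
print(f"central-axis relative dose: {dose[n // 2, n // 2]:.3f}")
```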

  7. OH{sup +} in astrophysical media: state-to-state formation rates, Einstein coefficients and inelastic collision rates with He

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gómez-Carrasco, Susana; Godard, Benjamin; Lique, François

    The rate constants required to model the OH{sup +} observations in different regions of the interstellar medium have been determined using state-of-the-art quantum methods. First, state-to-state rate constants for the H{sub 2}(v = 0, J = 0, 1) + O{sup +}({sup 4} S) → H + OH{sup +}(X {sup 3}Σ{sup –}, v', N) reaction have been obtained using a quantum wave packet method. The calculations have been compared with time-independent results to assess the accuracy of reaction probabilities at collision energies of about 1 meV. The good agreement between the simulations and the existing experimental cross sections in the 0.01-1 eV energy range shows the quality of the results. The calculated state-to-state rate constants have been fitted to an analytical form. Second, the Einstein coefficients of OH{sup +} have been obtained for all astronomically significant rovibrational bands involving the X {sup 3}Σ{sup –} and/or A {sup 3}Π electronic states. For this purpose, the potential energy curves and electric dipole transition moments for seven electronic states of OH{sup +} were calculated with ab initio methods at the highest level, including spin-orbit terms, and the rovibrational levels were calculated including the empirical spin-rotation and spin-spin terms. Third, the state-to-state rate constants for inelastic collisions between He and OH{sup +}(X {sup 3}Σ{sup –}) have been calculated using a time-independent close coupling method on a new potential energy surface. All these rates have been implemented in detailed chemical and radiative transfer models. Applications of these models to various astronomical sources show that inelastic collisions dominate the excitation of the rotational levels of OH{sup +}. In the models considered, the excitation resulting from the chemical formation of OH{sup +} increases the line fluxes by about 10% or less, depending on the density of the gas.
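
    The abstract does not give the analytical form used for the fit; a common choice in astrochemical databases is the Arrhenius-Kooij expression k(T) = alpha (T/300)^beta exp(-gamma/T). Purely as an assumed illustration, with invented rate-constant values:

```python
import numpy as np
from scipy.optimize import curve_fit

def kooij(T, alpha, beta, gamma):
    """Arrhenius-Kooij fit form k(T) = alpha*(T/300)^beta*exp(-gamma/T)."""
    return alpha * (T / 300.0) ** beta * np.exp(-gamma / T)

T = np.array([10.0, 30.0, 100.0, 300.0, 1000.0])         # K
k = np.array([1.1e-9, 1.3e-9, 1.6e-9, 2.0e-9, 2.6e-9])   # invented, cm^3 s^-1
(alpha, beta, gamma), _ = curve_fit(kooij, T, k, p0=[2e-9, 0.3, 1.0])
print(f"alpha={alpha:.2e} cm^3/s, beta={beta:.2f}, gamma={gamma:.2f} K")
```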

  8. Atmospheric observations and inverse modelling for quantifying emissions of point-source synthetic greenhouse gases in East Asia

    NASA Astrophysics Data System (ADS)

    Arnold, Tim; Manning, Alistair; Li, Shanlan; Kim, Jooil; Park, Sunyoung; Muhle, Jens; Weiss, Ray

    2017-04-01

    The fluorinated species carbon tetrafluoride (CF4; PFC-14), nitrogen trifluoride (NF3) and trifluoromethane (CHF3; HFC-23) are potent greenhouse gases with 100-year global warming potentials of 6,630, 16,100 and 12,400, respectively. Unlike the majority of CFC replacements, which are emitted from fugitive and mobile emission sources, these gases are mostly emitted from large single point sources: semiconductor manufacturing facilities (all three), aluminium smelting plants (CF4) and chlorodifluoromethane (HCFC-22) factories (HFC-23). In this work we show that atmospheric measurements can serve as a basis to calculate emissions of these gases and to highlight emission 'hotspots'. We use measurements from one of the Advanced Global Atmospheric Gases Experiment (AGAGE) long-term monitoring sites, at Gosan on Jeju Island in the Republic of Korea. This site measures CF4, NF3 and HFC-23, alongside a suite of greenhouse and stratospheric ozone depleting gases, every two hours using automated in situ gas-chromatography mass-spectrometry instrumentation. We couple each measurement to an analysis of air history using the regional atmospheric transport model NAME (Numerical Atmospheric dispersion Modelling Environment) driven by 3D meteorology from the Met Office's Unified Model, and use a Bayesian inverse method (InTEM - Inversion Technique for Emission Modelling) to calculate yearly emission changes over seven years between 2008 and 2015. We show that our 'top-down' emission estimates for NF3 and CF4 are significantly larger than the 'bottom-up' estimates in the EDGAR emissions inventory (edgar.jrc.ec.europa.eu). For example, we calculate South Korean emissions of CF4 in 2010 to be 0.29±0.04 Gg/yr, which is significantly larger than the EDGAR prior emissions of 0.07 Gg/yr. Further, inversions for several separate years indicate that emission hotspots can be found without prior spatial information. At present these gases make a small contribution to global radiative forcing; however, given that the impact of these long-lived gases could rise significantly and that point sources of such gases can be mitigated, atmospheric monitoring could be an important tool for aiding emissions reduction policy.
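
    At its core, an inversion of this type updates prior emissions using a transport-model sensitivity matrix and Gaussian error statistics. The toy sketch below shows the analytical posterior-mean update; InTEM's operational formulation differs in detail, and all matrices here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

n_obs, n_regions = 200, 10
H = rng.random((n_obs, n_regions))             # synthetic source-receptor matrix
x_true = rng.gamma(2.0, 0.05, n_regions)       # "true" regional emissions (Gg/yr)
y = H @ x_true + rng.normal(0.0, 0.05, n_obs)  # synthetic observations

x_prior = np.full(n_regions, 0.07)             # e.g. an inventory-based prior
B = 0.05**2 * np.eye(n_regions)                # prior error covariance
R = 0.05**2 * np.eye(n_obs)                    # observation error covariance

# Gaussian posterior mean: x_post = x_prior + K (y - H x_prior),
# with gain K = B H^T (H B H^T + R)^-1.
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
x_post = x_prior + K @ (y - H @ x_prior)
print(np.round(x_post, 3))
```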

  9. Climate Change Impacts of US Reactive Nitrogen Emissions

    NASA Astrophysics Data System (ADS)

    Pinder, R. W.; Davidson, E. A.; Goodale, C. L.; Greaver, T.; Herrick, J.; Liu, L.

    2011-12-01

    By fossil fuel combustion and fertilizer application, the US has substantially altered the nitrogen cycle, with serious consequences for climate change. The climate effects can be short-lived, by impacting the chemistry of the atmosphere, or long-lived, by altering ecosystem greenhouse gas fluxes. Here, we develop a coherent framework for assessing the climate change impacts of US reactive nitrogen emissions. We use the global temperature potential (GTP) as a common metric, and we calculate the GTP at 20 and 100 years in units of CO2 equivalents. At both time scales, nitrogen enhancement of CO2 uptake has the largest impact, because in the eastern US, areas of high nitrogen deposition are co-located with forests. In the short term, the effect due to NOx altering ozone and methane concentrations is also substantial, but it is not important on the 100-year time scale. Finally, the GTP of N2O emissions is substantial at both time scales. We have also attributed these impacts to combustion and agricultural sources, and quantified the uncertainty. Reactive nitrogen from combustion sources contributes more to cooling than warming. The impacts of agricultural sources tend to cancel each other out, and the net effect is uncertain. Recent trends show decreasing reactive nitrogen from US combustion sources, while agricultural sources are increasing. Fortunately, many mitigation strategies are currently available to reduce the climate change impacts of US agricultural sources.

  10. Effect of U on the electronic properties of neodymium gallate (NdGaO3): theoretical and experimental studies.

    PubMed

    Reshak, Ali Hussain; Piasecki, M; Auluck, S; Kityk, I V; Khenata, R; Andriyevsky, B; Cobet, C; Esser, N; Majchrowski, A; Swirkowicz, M; Diduszko, R; Szyrski, W

    2009-11-19

    We have performed a density functional calculation for the centrosymmetric neodymium gallate using a full-potential linear augmented plane wave method with the LDA and LDA+U exchange correlation. In particular, we explored the influence of U on the band dispersion and optical transitions. Our calculations show that U = 0.55 Ry gives the best agreement with our ellipsometry data taken in the VUV spectral range with a synchrotron source. Our LDA+U (U = 0.55) calculation shows that the valence band maximum (VBM) is located at T and the conduction band minimum (CBM) is located at the center of the Brillouin zone, resulting in a wide indirect energy band gap of about 3.8 eV, in excellent agreement with our experiment. The partial density of states shows that the upper valence band originates predominantly from Nd-f and O-p states, with a small admixture of Nd-s/p and Ga-p B-p states, while the lower conduction band originates predominantly from the Nd-f and Nd-d terms with a small contribution of O-p-Ga-s/p states. The Nd-f states in the upper valence band and lower conduction band have a significant influence on the energy band gap dispersion, as illustrated by our calculations. The calculated frequency-dependent optical properties show a small positive uniaxial anisotropy.

  11. Main Sources and Doses of Space Radiation during Mars Missions and Total Radiation Risk for Cosmonauts

    NASA Astrophysics Data System (ADS)

    Mitrikas, Victor; Shafirkin, Aleksandr; Shurshakov, Vyacheslav

    This work contains calculated generalized doses and dose equivalents in critical organs and tissues of cosmonauts produced by galactic cosmic rays (GCR), solar cosmic rays (SCR) and the Earth's radiation belts (ERB) that will impact crewmembers during a flight to Mars, while staying in the landing module and on the Martian surface, and during the return to Earth. Total radiation risk values over the whole life of cosmonauts after the flight are also presented. Radiation risk (RR) calculations are performed on the basis of a radiobiological model of radiation damage to living organisms, taking into account repair processes acting during continuous long-term exposure at various dose rates and under acute recurrent radiation impact. The calculations of RR are performed for crewmembers of various ages implementing a flight to Mars over 2-3 years at the maximum and minimum of the solar cycle. The total carcinogenic and non-carcinogenic RR and possible life-span shortening are estimated on the basis of a model of the radiation death probability for mammals. This model takes into account the decrease in the compensatory reserve of an organism as well as the increase in mortality rate and the shortening of the subsequent lifetime of the cosmonaut. The analyzed dose distributions in the shielding and body areas are applied in model calculations with tissue-equivalent spherical and anthropomorphic phantoms.

  12. Accuracy of assessing the level of impulse sound from distant sources.

    PubMed

    Wszołek, Tadeusz; Kłaczyński, Maciej

    2007-01-01

    Impulse sound events are characterised by very high pressures and low frequencies. Lower-frequency sounds are generally less attenuated over a given distance in the atmosphere than higher frequencies. Thus, impulse sounds can be heard over greater distances and will be more affected by the environment. To calculate a long-term average immission level it is necessary to apply weighting factors such as the probability of the occurrence of each weather condition during the relevant time period. This means that when measuring impulse noise at a long distance it is necessary to monitor environmental parameters at many points along the sound propagation path, and also to have a database of long-term sound transfer functions. The paper analyses the uncertainty of immission measurement results for impulse sound from explosive cladding and the destruction of explosive materials. The influence of environmental conditions on the sound propagation path is the focus of this paper.

  13. Northern Hemisphere Biome- and Process-Specific Changes in Forest Area and Gross Merchantable Volume: 1890-1990 (DB1017)

    DOE Data Explorer

    Auclair, A.N.D. [Science and Policy Associates, Inc., Washington, D.C. (United States)]; Bedford, J.A. [Science and Policy Associates, Inc., Washington, D.C. (United States)]; Revenga, C. [Science and Policy Associates, Inc., Washington, D.C. (United States)]; Brenkert, A.L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)]

    1997-01-01

    This database lists annual changes in areal extent (ha) and gross merchantable wood volume (m3) produced by depletion and accrual processes in boreal and temperate forests in Alaska, Canada, Europe, the Former Soviet Union, non-Soviet temperate Asia, and the contiguous United States for the years 1890 through 1990. Forest depletions (source terms for atmospheric CO2) are identified as forest pests, forest dieback, forest fires, forest harvest, and land-use changes (predominantly the conversion of forest, temperate woodland, and shrubland to cropland). Forest accruals (sink terms for atmospheric CO2) are identified as fire exclusion, fire suppression, and afforestation or crop abandonment. The changes in areal extent and gross merchantable wood volume are calculated separately for each of the following biomes: forest tundra, boreal softwoods, mixed hardwoods, temperate softwoods, temperate hardwoods, and temperate wood- and shrublands.

  14. Fragment emission from the mass-symmetric reactions 58Fe,58Ni +58Fe,58Ni at Ebeam=30 MeV/nucleon

    NASA Astrophysics Data System (ADS)

    Ramakrishnan, E.; Johnston, H.; Gimeno-Nogues, F.; Rowland, D. J.; Laforest, R.; Lui, Y.-W.; Ferro, S.; Vasal, S.; Yennello, S. J.

    1998-04-01

    The mass-symmetric reactions 58Fe,58Ni +58Fe,58Ni were studied at a beam energy of Ebeam=30 MeV/nucleon in order to investigate the isospin dependence of fragment emission. Ratios of inclusive yields of isotopic fragments from hydrogen through nitrogen were extracted as a function of laboratory angle. A moving source analysis of the data indicates that at laboratory angles around 40° the yield of intermediate mass fragments (IMF's) beyond Z=3 is predominantly from a midrapidity source. The angular dependence of the relative yields of isotopes beyond Z=3 indicates that the IMF's at more central angles originate from a source which is more neutron deficient than the source responsible for fragments emitted at forward angles. The charge distributions and kinetic energy spectra of the IMF's at various laboratory angles were well reproduced by calculations employing a quantum molecular-dynamics code followed by a statistical multifragmentation model for generating fragments. The calculations indicate that the measured IMF's originate mainly from a single source. The isotopic composition of the emitted fragments is, however, not reproduced by the same calculation. The measured isotopic and isobaric ratios indicate an emitting source that is more neutron rich in comparison to the source predicted by model calculations.

  15. Limitations of current dosimetry for intracavitary accelerated partial breast irradiation with high dose rate iridium-192 and electronic brachytherapy sources

    NASA Astrophysics Data System (ADS)

    Raffi, Julie A.

    Intracavitary accelerated partial breast irradiation (APBI) is a method of treating early stage breast cancer using a high dose rate (HDR) brachytherapy source positioned within the lumpectomy cavity. An expandable applicator stretches the surrounding tissue into a roughly spherical or elliptical shape and the dose is prescribed to 1 cm beyond the edge of the cavity. Currently, dosimetry for these treatments is most often performed using the American Association of Physicists in Medicine Task Group No. 43 (TG-43) formalism. The TG-43 dose-rate equation determines the dose delivered to a homogeneous water medium by scaling the measured source strength with standardized parameters that describe the radial and angular features of the dose distribution. Since TG-43 parameters for each source model are measured or calculated in a homogeneous water medium, the dosimetric effects of the patient's dimensions and composition are not accounted for. Therefore, the accuracy of TG-43 calculations for intracavitary APBI is limited by the presence of inhomogeneities in and around the target volume. Specifically, the breast is smaller than the phantoms used to determine TG-43 parameters and is surrounded by air, ribs, and lung tissue. Also, the composition of the breast tissue itself can affect the dose distribution. This dissertation is focused on investigating the limitations of TG-43 dosimetry for intracavitary APBI for two HDR brachytherapy sources: the VariSource™ VS2000 192Ir source and the Axxent™ miniature x-ray source. The dose for various conditions was determined using thermoluminescent dosimeters (TLDs) and Monte Carlo (MC) calculations. Accurate measurements and calculations were achieved through the implementation of new measurement and simulation techniques, and a novel breast phantom was developed to enable anthropomorphic phantom measurements. Measured and calculated doses for phantom and patient geometries were compared with TG-43 calculated doses to illustrate the limitations of TG-43 dosimetry for intracavitary APBI. TG-43 dose calculations overestimate the dose for regions approaching the lung and breast surface and underestimate the dose for regions in and beyond less-attenuating media such as lung tissue, and for lower energies, breast tissue as well.

  16. Sensitivity of Earthquake Loss Estimates to Source Modeling Assumptions and Uncertainty

    USGS Publications Warehouse

    Reasenberg, Paul A.; Shostak, Nan; Terwilliger, Sharon

    2006-01-01

    Introduction: This report explores how uncertainty in an earthquake source model may affect estimates of earthquake economic loss. Specifically, it focuses on the earthquake source model for the San Francisco Bay region (SFBR) created by the Working Group on California Earthquake Probabilities. The loss calculations are made using HAZUS-MH, a publicly available computer program developed by the Federal Emergency Management Agency (FEMA) for calculating future losses from earthquakes, floods and hurricanes within the United States. The database built into HAZUS-MH includes a detailed building inventory, population data, data on transportation corridors, bridges, utility lifelines, etc. Earthquake hazard in the loss calculations is based upon expected (median value) ground motion maps called ShakeMaps calculated for the scenario earthquake sources defined in WGCEP. The study considers the effect of relaxing certain assumptions in the WG02 model, and explores the effect of hypothetical reductions in epistemic uncertainty in parts of the model. For example, it addresses questions such as what would happen to the calculated loss distribution if the uncertainty in slip rate in the WG02 model were reduced (say, by obtaining additional geologic data)? What would happen if the geometry or amount of aseismic slip (creep) on the region's faults were better known? And what would be the effect on the calculated loss distribution if the time-dependent earthquake probability were better constrained, either by eliminating certain probability models or by better constraining the inherent randomness in earthquake recurrence? The study does not consider the effect of reducing uncertainty in the hazard introduced through models of attenuation and local site characteristics, although these may have a comparable or greater effect than does source-related uncertainty. Nor does it consider sources of uncertainty in the building inventory, building fragility curves, and other assumptions adopted in the loss calculations. This is a sensitivity study aimed at future regional earthquake source modelers, so that they may be informed of the effects on loss introduced by modeling assumptions and epistemic uncertainty in the WG02 earthquake source model.

  17. A systematic examination of a random sampling strategy for source apportionment calculations.

    PubMed

    Andersson, August

    2011-12-15

    Estimating the relative contributions from multiple potential sources of a specific component in a mixed environmental matrix is a general challenge in diverse fields such as the atmospheric, environmental and earth sciences. Perhaps the most common strategy for tackling such problems is to set up a system of linear equations for the fractional influence of different sources. Even though an algebraic solution of this approach is possible for the common situation with N+1 sources and N source markers, such a methodology introduces a bias, since it is implicitly assumed that the calculated fractions and the corresponding uncertainties are independent of the variability of the source distributions. Here, a random sampling (RS) strategy for accounting for such statistical bias is examined by investigating rationally designed synthetic data sets. This random sampling methodology is found to be robust and accurate with respect to reproducibility and predictability. The method is also compared to a numerical integration solution for a two-source situation in which source variability is also included. A general observation from this examination is that the variability of the source profiles affects not only the calculated precision but also the mean/median source contributions. Copyright © 2011 Elsevier B.V. All rights reserved.
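
    For the simplest case of two sources and one marker, the RS strategy reduces to drawing end-member marker values from their distributions and solving the mixing equation for each draw; the spread of the resulting fractions then reflects source variability. A minimal sketch with invented end-members:

```python
import numpy as np

rng = np.random.default_rng(3)

def rs_fractions(x_mix, mu, sigma, n_draws=100_000):
    """Random-sampling apportionment for two sources and one marker:
    solve f1*x1 + (1 - f1)*x2 = x_mix for each sampled (x1, x2) pair."""
    x1 = rng.normal(mu[0], sigma[0], n_draws)
    x2 = rng.normal(mu[1], sigma[1], n_draws)
    f1 = (x_mix - x2) / (x1 - x2)
    return f1[(f1 >= 0.0) & (f1 <= 1.0)]   # keep physically meaningful draws

# Invented end-member marker values (e.g. a delta-13C-like tracer):
f1 = rs_fractions(x_mix=-26.0, mu=(-29.0, -21.0), sigma=(1.0, 1.5))
print(f"source 1 fraction: median {np.median(f1):.2f}, "
      f"68% interval {np.percentile(f1, 16):.2f}-{np.percentile(f1, 84):.2f}")
```

    Note that the median of the sampled fractions need not coincide with the single algebraic solution evaluated at the mean marker values, which is exactly the bias the paper examines.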

  18. MPACT Subgroup Self-Shielding Efficiency Improvements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stimpson, Shane; Liu, Yuxuan; Collins, Benjamin S.

    Recent developments to improve the efficiency of the MOC solvers in MPACT have yielded effective kernels that loop over several energy groups at once, rather than looping over one group at a time. These kernels have produced roughly a 2x speedup in the MOC sweeping time during the eigenvalue calculation. However, the self-shielding subgroup calculation, which typically requires substantial solve time, had not been reevaluated to take advantage of these new kernels. The improvements covered in this report start by integrating the multigroup kernel concepts into the subgroup calculation, which are then used as the basis for further extensions. The next improvement covered is what is currently being termed “Lumped Parameter MOC”. Because the subgroup calculation is a purely fixed-source problem and multiple sweeps are performed only to update the boundary angular fluxes, the sweep procedure can be condensed to allow the instantaneous propagation of the flux across a spatial domain, without the need to sweep along all segments in a ray. Once the boundary angular fluxes are considered to be converged, an additional sweep that tallies the scalar flux is completed. The last improvement investigated is the possible reduction of the number of azimuthal angles per octant in the shielding sweep. Typically 16 azimuthal angles per octant are used for self-shielding and eigenvalue calculations, but it is possible that the self-shielding sweeps are less sensitive to the number of angles than the full eigenvalue calculation.

  19. The General Formulation and Practical Calculation of the Diffusion Coefficient in a Lattice Containing Cavities; FORMULATION GENERALE ET CALCUL PRATIQUE DU COEFFICIENT DE DIFFUSION DANS UN RESEAU COMPORTANT DES CAVITES (in French)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benoist, P.

    The calculation of diffusion coefficients in a lattice necessitates knowledge of a correct method of weighting the free paths of the different constituents. An unambiguous definition of this weighting method is given here, based on the calculation of leakages from a zone of a reactor. The formulation obtained, which is both simple and general, reduces the calculation of diffusion coefficients to that of collision probabilities in the different media; it reveals in the expression for the radial coefficient the series of angular correlation terms (cross terms) recently shown by several authors. This formulation is then used to calculate the practical case of a classical type of lattice composed of a moderator and a fuel element surrounded by an empty space. Analytical and numerical comparison of the expressions obtained with those inferred from the theory of BEHRENS shows the importance of several new terms, some of which are linked with the transparency of the fuel element. Cross terms up to the second order are evaluated. A practical formulary is given at the end of the paper. (author)

  20. An Air Quality Data Analysis System for Interrelating Effects, Standards and Needed Source Reductions

    ERIC Educational Resources Information Center

    Larsen, Ralph I.

    1973-01-01

    Makes recommendations for a single air quality data system (using averaging time) for interrelating air pollution effects, air quality standards, air quality monitoring, diffusion calculations, source-reduction calculations, and emission standards. (JR)

  1. Crowd-Sourced Amputee Gait Data: A Feasibility Study Using YouTube Videos of Unilateral Trans-Femoral Gait.

    PubMed

    Gardiner, James; Gunarathne, Nuwan; Howard, David; Kenney, Laurence

    2016-01-01

    Collecting large datasets of amputee gait data is notoriously difficult. Additionally, collecting data on less prevalent amputations or on gait activities other than level walking and running on hard surfaces is rarely attempted. However, with the wealth of user-generated content on the Internet, the scope for collecting amputee gait data from alternative sources other than traditional gait labs is intriguing. Here we investigate the potential of YouTube videos to provide gait data on amputee walking. We use an example dataset of trans-femoral amputees level walking at self-selected speeds to collect temporal gait parameters and calculate gait asymmetry. We compare our YouTube data with typical literature values, and show that our methodology produces results that are highly comparable to data collected in a traditional manner. The similarity between the results of our novel methodology and literature values lends confidence to our technique. Nevertheless, clear challenges with the collection and interpretation of crowd-sourced gait data remain, including long-term access to datasets, and a lack of validity and reliability studies in this area.
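
    The temporal parameters and asymmetry measure involved are straightforward to compute once step times have been read from video frames. A minimal sketch, with invented step times and an assumed (Robinson-style) symmetry index, since the paper's exact index is not stated in the abstract:

```python
import numpy as np

def symmetry_index(prosthetic, intact):
    """Robinson-style symmetry index in percent; 0 = perfect symmetry."""
    return 100.0 * (prosthetic - intact) / (0.5 * (prosthetic + intact))

# Invented step times (s) read frame-by-frame from a walking video:
prosthetic_steps = np.array([0.62, 0.64, 0.61, 0.65])
intact_steps = np.array([0.55, 0.54, 0.56, 0.53])
si = symmetry_index(prosthetic_steps.mean(), intact_steps.mean())
print(f"temporal symmetry index: {si:.1f}%")
```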

  2. Crowd-Sourced Amputee Gait Data: A Feasibility Study Using YouTube Videos of Unilateral Trans-Femoral Gait

    PubMed Central

    Gardiner, James; Gunarathne, Nuwan; Howard, David; Kenney, Laurence

    2016-01-01

    Collecting large datasets of amputee gait data is notoriously difficult. Additionally, collecting data on less prevalent amputations or on gait activities other than level walking and running on hard surfaces is rarely attempted. However, with the wealth of user-generated content on the Internet, the scope for collecting amputee gait data from alternative sources other than traditional gait labs is intriguing. Here we investigate the potential of YouTube videos to provide gait data on amputee walking. We use an example dataset of trans-femoral amputees level walking at self-selected speeds to collect temporal gait parameters and calculate gait asymmetry. We compare our YouTube data with typical literature values, and show that our methodology produces results that are highly comparable to data collected in a traditional manner. The similarity between the results of our novel methodology and literature values lends confidence to our technique. Nevertheless, clear challenges with the collection and interpretation of crowd-sourced gait data remain, including long-term access to datasets, and a lack of validity and reliability studies in this area. PMID:27764226

  3. Near-field sound radiation of fan tones from an installed turbofan aero-engine.

    PubMed

    McAlpine, Alan; Gaffney, James; Kingan, Michael J

    2015-09-01

    The development of a distributed source model to predict fan tone noise levels of an installed turbofan aero-engine is reported. The key objective is to examine a canonical problem: how to predict the pressure field due to a distributed source located near an infinite, rigid cylinder. This canonical problem is a simple representation of an installed turbofan, where the distributed source is based on the pressure pattern generated by a spinning duct mode, and the rigid cylinder represents an aircraft fuselage. The radiation of fan tones can be modelled in terms of spinning modes. In this analysis, based on duct modes, theoretical expressions for the near-field acoustic pressures on the cylinder, or at the same locations without the cylinder, have been formulated. Simulations of the near-field acoustic pressures are compared against measurements obtained from a fan rig test. Also, the installation effect is quantified by calculating the difference in the sound pressure levels with and without the adjacent cylindrical fuselage. Results are shown for the blade passing frequency fan tone radiated at a supersonic fan operating condition.

  4. Kicking the rugby ball: perturbations of 6D gauged chiral supergravity

    NASA Astrophysics Data System (ADS)

    Burgess, C. P.; de Rham, C.; Hoover, D.; Mason, D.; Tolley, A. J.

    2007-02-01

    We analyse the axially symmetric scalar perturbations of 6D chiral gauged supergravity compactified on the general warped geometries in the presence of two source branes. We find that all of the conical geometries are marginally stable for normalizable perturbations (in disagreement with some recent calculations) and the non-conical ones for regular perturbations, even though none of them are supersymmetric (apart from the trivial Salam Sezgin solution, for which there are no source branes). The marginal direction is the one whose presence is required by the classical scaling property of the field equations, and all other modes have positive squared mass. In the special case of the conical solutions, including (but not restricted to) the unwarped 'rugby-ball' solutions, we find closed-form expressions for the mode functions in terms of Legendre and hypergeometric functions. In so doing we show how to match the asymptotic near-brane form for the solution to the physics of the source branes, and thereby how to physically interpret perturbations which can be singular at the brane positions.

  5. Gamma ray bursts from extragalactic sources

    NASA Technical Reports Server (NTRS)

    Hoyle, Fred; Burbidge, Geoffrey

    1992-01-01

    The properties of gamma ray bursts of classical type are found to be explicable in terms of high speed collisions between stars. A model is proposed in which the frequency of such collisions can be calculated. The model is then applied to the nuclei of galaxies in general on the basis that galaxies, or at least some fraction of them, originate in the expulsion of stars from creation centers. Evidence that low level activity of this kind is also taking place at the center of our own Galaxy is discussed. The implications for galactic evolution are discussed and a negative view of black holes is taken.

  6. Poisson equation for the Mercedes diagram in string theory at genus one

    NASA Astrophysics Data System (ADS)

    Basu, Anirban

    2016-03-01

    The Mercedes diagram has four trivalent vertices which are connected by six links such that they form the edges of a tetrahedron. This three-loop Feynman diagram contributes to the D^12 R^4 amplitude at genus one in type II string theory, where the vertices are the points of insertion of the graviton vertex operators, and the links are the scalar propagators on the toroidal worldsheet. We obtain a modular invariant Poisson equation satisfied by the Mercedes diagram, where the source terms involve one- and two-loop Feynman diagrams. We calculate its contribution to the D^12 R^4 amplitude.

  7. Derivation of error sources for experimentally derived heliostat shapes

    NASA Astrophysics Data System (ADS)

    Cumpston, Jeff; Coventry, Joe

    2017-06-01

    Data gathered using photogrammetry that represents the surface and structure of a heliostat mirror panel is investigated in detail. A curve-fitting approach that allows the retrieval of four distinct mirror error components, while prioritizing the best fit possible to paraboloidal terms in the curve fitting equation, is presented. The angular errors associated with each of the four surfaces are calculated, and the relative magnitude for each of them is given. It is found that in this case, the mirror had a significant structural twist, and an estimate of the improvement to the mirror surface quality in the case of no twist was made.
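
    The prioritized fitting idea can be illustrated with a two-stage least-squares fit: the paraboloidal terms are fitted first and absorb as much of the shape as possible, and the remaining error components (here just a twist term) are fitted to the residual. The surface data and coefficients below are invented, and the paper separates four error components rather than the one shown.

```python
import numpy as np

rng = np.random.default_rng(5)

# Invented photogrammetry points (x, y, z) on a 1 m square mirror panel
# with curvature, a small twist, and measurement noise.
x = rng.uniform(-0.5, 0.5, 400)
y = rng.uniform(-0.5, 0.5, 400)
z = 0.25 * x**2 + 0.25 * y**2 + 0.01 * x * y + rng.normal(0, 1e-4, 400)

# Stage 1: fit only the paraboloidal terms, prioritizing them in the fit.
A1 = np.column_stack([x**2, y**2, np.ones_like(x)])
c1, *_ = np.linalg.lstsq(A1, z, rcond=None)

# Stage 2: fit the twist term to what the paraboloid could not explain.
residual = z - A1 @ c1
c2, *_ = np.linalg.lstsq(np.column_stack([x * y]), residual, rcond=None)
print(f"paraboloidal coefficients {c1[:2]}, twist coefficient {c2[0]:.4f}")
```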

  8. Capture Versus Capture Zones: Clarifying Terminology Related to Sources of Water to Wells.

    PubMed

    Barlow, Paul M; Leake, Stanley A; Fienen, Michael N

    2018-03-15

    The term capture, related to the source of water derived from wells, has been used in two distinct yet related contexts by the hydrologic community. The first is a water-budget context, in which capture refers to decreases in the rates of groundwater outflow and (or) increases in the rates of recharge along head-dependent boundaries of an aquifer in response to pumping. The second is a transport context, in which capture zone refers to the specific flowpaths that define the three-dimensional, volumetric portion of a groundwater flow field that discharges to a well. A closely related issue that has become associated with the source of water to wells is streamflow depletion, which refers to the reduction in streamflow caused by pumping, and is a type of capture. Rates of capture and streamflow depletion are calculated by use of water-budget analyses, most often with groundwater-flow models. Transport models, particularly particle-tracking methods, are used to determine capture zones to wells. In general, however, transport methods are not useful for quantifying actual or potential streamflow depletion or other types of capture along aquifer boundaries. To clarify the sometimes subtle differences among these terms, we describe the processes and relations among capture, capture zones, and streamflow depletion, and provide proposed terminology to distinguish among them. Published 2018. This article is a U.S. Government work and is in the public domain in the USA. Groundwater published by Wiley Periodicals, Inc. on behalf of National Ground Water Association.

  9. Phase 3 experiments of the JAERI/USDOE collaborative program on fusion blanket neutronics. Volume 1: Experiment

    NASA Astrophysics Data System (ADS)

    Oyama, Yukio; Konno, Chikara; Ikeda, Yujiro; Maekawa, Fujio; Kosako, Kazuaki; Nakamura, Tomoo; Maekawa, Hiroshi; Youssef, Mahmoud Z.; Kumar, Anil; Abdou, Mohamed A.

    1994-02-01

    A pseudo-line source has been realized by using an accelerator-based D-T point neutron source. The pseudo-line source is obtained by time averaging of a continuously moving point source or by superposition of finely distributed point sources. The line source is utilized for fusion blanket neutronics experiments with an annular geometry so as to simulate a part of a tokamak reactor. The source neutron characteristics were measured for two operational modes of the line source, continuous and stepwise, with activation foil and NE213 detectors, respectively. In order to provide a source condition for the subsequent calculational analysis of the annular blanket experiment, the neutron source characteristics were calculated by a Monte Carlo code. The reliability of the Monte Carlo calculation was confirmed by comparison with the measured source characteristics. The annular blanket system was rectangular in shape with an inner cavity. The annular blanket consisted of a 15 mm-thick first wall (SS304) and a 406 mm-thick breeder zone with Li2O on the inside and Li2CO3 on the outside. The line source was produced at the center of the inner cavity by moving the annular blanket system over a span of 2 m. Three annular blanket configurations were examined: the reference blanket, the blanket covered with a 25 mm-thick graphite armor, and the armor-blanket with a large opening. The neutronics parameters of tritium production rate, neutron spectrum and activation reaction rate were measured with specially developed techniques such as a multi-detector data acquisition system, a spectrum weighting function method and a ramp-controlled high voltage system. The present experiment provides unique data for a more advanced benchmark to test the reliability of neutronics design calculations for a realistic tokamak reactor.
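
    The superposition view of the pseudo-line source is easy to make concrete: the uncollided flux at a point is the sum of inverse-square contributions from many point-source positions along the scan span. A minimal sketch with an assumed total source strength:

```python
import numpy as np

def flux_from_line(detector_cm, source_points_cm, strength_per_point):
    """Uncollided flux from a line source approximated by finely
    distributed isotropic point sources: phi = sum_i S_i / (4 pi r_i^2)."""
    r2 = np.sum((np.asarray(detector_cm) - source_points_cm) ** 2, axis=1)
    return np.sum(strength_per_point / (4.0 * np.pi * r2))

# 2 m scan span represented by 200 point-source positions along z:
z = np.linspace(-100.0, 100.0, 200)                     # cm
points = np.column_stack([np.zeros_like(z), np.zeros_like(z), z])
S_total = 1.0e12                                        # assumed strength (n/s)
phi = flux_from_line([50.0, 0.0, 0.0], points, S_total / len(z))
print(f"uncollided flux 50 cm from the line: {phi:.3e} n/cm^2/s")
```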

  10. Evaluation of FSK models for radiative heat transfer under oxyfuel conditions

    NASA Astrophysics Data System (ADS)

    Clements, Alastair G.; Porter, Rachael; Pranzitelli, Alessandro; Pourkashanian, Mohamed

    2015-01-01

    Oxyfuel is a promising technology for carbon capture and storage (CCS) applied to combustion processes. It would be highly advantageous in the deployment of CCS to be able to model and optimise oxyfuel combustion; however, the increased concentrations of CO2 and H2O under oxyfuel conditions modify several fundamental processes of combustion, including radiative heat transfer. This study uses benchmark narrow-band radiation models to evaluate the influence of assumptions in global full-spectrum k-distribution (FSK) models and whether they are suitable for modelling radiation in computational fluid dynamics (CFD) calculations of oxyfuel combustion. The statistical narrow band (SNB) and correlated-k (CK) models are used to calculate benchmark data for the radiative source term and heat flux, which are then compared to the results calculated from FSK models. Both the full-spectrum correlated-k (FSCK) and the full-spectrum scaled-k (FSSK) models are applied using up-to-date spectral data. The results show that the FSCK and FSSK methods achieve good agreement in the test cases. The FSCK method using a five-point Gauss quadrature scheme is recommended for CFD calculations under oxyfuel conditions; however, there remain potential inaccuracies in cases with very wide variations in the ratio between CO2 and H2O concentrations.
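
    A minimal sketch of the quadrature step in an FSK-type model: the spectral integral is replaced by a few-point Gauss quadrature over the cumulative k-distribution variable g on [0, 1]. The k(g) profile below is invented purely to exercise the quadrature; real k-distributions come from spectral databases.

    ```python
    import numpy as np

    nodes, weights = np.polynomial.legendre.leggauss(5)  # 5-point rule on [-1, 1]
    g = 0.5 * (nodes + 1.0)   # map quadrature points to g in [0, 1]
    w = 0.5 * weights         # rescale weights accordingly

    def k_of_g(g):
        # Monotonic toy absorption-coefficient distribution (1/m), illustrative.
        return 0.01 * np.exp(6.0 * g)

    path = 2.0  # m, homogeneous path length (assumed)
    # Spectrally integrated emissivity ~ sum_i w_i * (1 - exp(-k(g_i) * L))
    emissivity = np.sum(w * (1.0 - np.exp(-k_of_g(g) * path)))
    print(f"5-point FSK-style emissivity estimate: {emissivity:.4f}")
    ```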

  11. A mass balance approach to investigate arsenic cycling in a petroleum plume.

    PubMed

    Ziegler, Brady A; Schreiber, Madeline E; Cozzarelli, Isabelle M; Crystal Ng, G-H

    2017-12-01

    Natural attenuation of organic contaminants in groundwater can give rise to a series of complex biogeochemical reactions that release secondary contaminants to groundwater. In a crude oil contaminated aquifer, biodegradation of petroleum hydrocarbons is coupled with the reduction of ferric iron (Fe(III)) hydroxides in aquifer sediments. As a result, naturally occurring arsenic (As) adsorbed to Fe(III) hydroxides in the aquifer sediment is mobilized from sediment into groundwater. However, Fe(III) in sediment of other zones of the aquifer has the capacity to attenuate dissolved As via resorption. In order to better evaluate how long-term biodegradation coupled with Fe reduction and As mobilization can redistribute As mass in a contaminated aquifer, we quantified mass partitioning of Fe and As in the aquifer based on field observation data. Results show that Fe and As are spatially correlated in both groundwater and aquifer sediments. Mass partitioning calculations demonstrate that 99.9% of Fe and 99.5% of As are associated with aquifer sediment. The sediments act as both sources and sinks for As, depending on the redox conditions in the aquifer. Calculations reveal that at least 78% of the original As in sediment near the oil has been mobilized into groundwater over the 35-year lifespan of the plume. However, the calculations also show that only a small percentage of As (∼0.5%) remains in groundwater, due to resorption onto sediment. At the leading edge of the plume, where groundwater is suboxic, sediments sequester Fe and As, causing As to accumulate to concentrations 5.6 times greater than background concentrations. Current As sinks can serve as future sources of As as the plume evolves over time. The mass balance approach used in this study can be applied to As cycling in other aquifers where groundwater As results from biodegradation of an organic carbon point source coupled with Fe reduction. Copyright © 2017 Elsevier Ltd. All rights reserved.
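
    A sketch of the mass-partitioning arithmetic behind such a calculation (all concentrations, porosity, and bulk density below are illustrative placeholders, not the study's data): compare the solute mass in groundwater with the sorbed mass on sediment per unit volume of aquifer.

    ```python
    porosity = 0.3          # - (assumed)
    bulk_density = 1.8e3    # kg sediment per m^3 of aquifer (assumed)
    c_water = 50e-6         # g As per L of groundwater (50 ug/L, hypothetical)
    c_sed = 5e-3            # g As per kg of sediment (5 mg/kg, hypothetical)

    mass_water = c_water * porosity * 1000.0  # g As per m^3 aquifer (1000 L/m^3)
    mass_sed = c_sed * bulk_density           # g As per m^3 aquifer

    total = mass_water + mass_sed
    print(f"As in groundwater: {100 * mass_water / total:.2f}%")
    print(f"As on sediment:    {100 * mass_sed / total:.2f}%")
    # With numbers like these, >99% of the arsenic resides on the sediment,
    # which is the kind of result the study's partitioning calculation reports.
    ```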

  12. A mass balance approach to investigate arsenic cycling in a petroleum plume

    USGS Publications Warehouse

    Ziegler, Brady A.; Schreiber, Madeline E.; Cozzarelli, Isabelle M.; Ng, G.-H. Crystal

    2017-01-01

    Natural attenuation of organic contaminants in groundwater can give rise to a series of complex biogeochemical reactions that release secondary contaminants to groundwater. In a crude oil contaminated aquifer, biodegradation of petroleum hydrocarbons is coupled with the reduction of ferric iron (Fe(III)) hydroxides in aquifer sediments. As a result, naturally occurring arsenic (As) adsorbed to Fe(III) hydroxides in the aquifer sediment is mobilized from sediment into groundwater. However, Fe(III) in sediment of other zones of the aquifer has the capacity to attenuate dissolved As via resorption. In order to better evaluate how long-term biodegradation coupled with Fe reduction and As mobilization can redistribute As mass in a contaminated aquifer, we quantified mass partitioning of Fe and As in the aquifer based on field observation data. Results show that Fe and As are spatially correlated in both groundwater and aquifer sediments. Mass partitioning calculations demonstrate that 99.9% of Fe and 99.5% of As are associated with aquifer sediment. The sediments act as both sources and sinks for As, depending on the redox conditions in the aquifer. Calculations reveal that at least 78% of the original As in sediment near the oil has been mobilized into groundwater over the 35-year lifespan of the plume. However, the calculations also show that only a small percentage of As (∼0.5%) remains in groundwater, due to resorption onto sediment. At the leading edge of the plume, where groundwater is suboxic, sediments sequester Fe and As, causing As to accumulate to concentrations 5.6 times greater than background concentrations. Current As sinks can serve as future sources of As as the plume evolves over time. The mass balance approach used in this study can be applied to As cycling in other aquifers where groundwater As results from biodegradation of an organic carbon point source coupled with Fe reduction.

  13. Using an improved understanding of current climate variability to develop increased drought resilience in UK irrigated agriculture

    NASA Astrophysics Data System (ADS)

    Holman, I.; Rey Vicario, D.

    2016-12-01

    Improving community preparedness for climate change can be supported by developing resilience to past events, focused on those changes of particular relevance (such as floods and droughts). However, communities' perceptions of impacts and risk can be influenced by an incomplete appreciation of historical baseline climate variability. This can arise from a number of factors, including an individual's age, access to long-term data records, and the availability of local knowledge. For example, the most significant recent drought in the UK occurred in 1976/77, but does it represent the worst drought that did occur (or could have occurred) without climate change? We focus on the east of England, where most irrigated agriculture is located and where many of the local farmers interviewed were either not in business then or have an incomplete memory of the impacts of the drought. This paper describes a comparison of an annual agroclimatic indicator closely linked to irrigation demand (maximum Potential Soil Moisture Deficit), calculated from three sources of long-term observational and simulated historical weather data, with recent data. These long-term datasets include gridded measured/calculated datasets of precipitation and reference evapotranspiration; a dynamically downscaled 20th Century Reanalysis dataset; and two Regional Climate Model ensemble datasets (FutureFlows and the MaRIUS event set), which together provide between 110 and 3000 years of baseline weather. The comparison shows that the long-term datasets provide a wider characterisation of current climate variability and affect the perception of current drought frequency and severity. The paper will show that using a more comprehensive understanding of current climate variability and drought risk as a basis for adapting irrigated systems to droughts can provide substantially increased resilience to (uncertain) climate change.
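
    A minimal sketch of a maximum Potential Soil Moisture Deficit (PSMD) indicator of the kind described above: accumulate daily (reference evapotranspiration minus rainfall), never letting the deficit go negative, and take the annual maximum. The synthetic daily series below stands in for the gridded and re-analysis datasets; the accumulation rule is a common simplified form, not necessarily the paper's exact definition.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    rain = rng.gamma(shape=0.4, scale=5.0, size=365)      # mm/day, synthetic
    et0 = 1.5 + 2.0 * np.sin(np.linspace(0, np.pi, 365))  # mm/day, seasonal toy

    deficit = 0.0
    max_psmd = 0.0
    for p, e in zip(rain, et0):
        deficit = max(0.0, deficit + e - p)  # soil moisture deficit >= 0
        max_psmd = max(max_psmd, deficit)

    print(f"annual maximum PSMD: {max_psmd:.1f} mm")
    # Repeating this over 110-3000 simulated years gives the wider
    # characterisation of drought frequency and severity described above.
    ```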

  14. Source-receptor matrix calculation with a Lagrangian particle dispersion model in backward mode

    NASA Astrophysics Data System (ADS)

    Seibert, P.; Frank, A.

    2004-01-01

    The possibility of calculating linear source-receptor relationships for the transport of atmospheric trace substances with a Lagrangian particle dispersion model (LPDM) running in backward mode is demonstrated with many tests and examples. This mode requires only minor modifications of the forward LPDM. The derivation includes the action of sources and of any first-order processes (transformation with prescribed rates, dry and wet deposition, radioactive decay, etc.). The backward mode is computationally advantageous if the number of receptors is smaller than the number of sources considered. The combination of an LPDM with the backward (adjoint) methodology is especially attractive for application to point measurements, which can be handled without artificial numerical diffusion. Practical hints are provided for source-receptor calculations with different settings, both in forward and backward mode. The equivalence of forward and backward calculations is shown in simple tests for release and sampling of particles, pure wet deposition, pure convective redistribution, and realistic transport over a short distance. Furthermore, an application example explaining measurements of Cs-137 in Stockholm as transport from areas heavily contaminated by the Chernobyl disaster is included.
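
    A sketch of the linear source-receptor bookkeeping implied above (toy numbers throughout): receptor values are c = M e, where M[i, j] maps source j to receptor i. A forward model run fills one column of M per source, while a backward (adjoint) run fills one row per receptor, which is why backward mode wins when there are fewer receptors than sources.

    ```python
    import numpy as np

    n_sources, n_receptors = 1000, 3   # e.g. gridded emissions, few samplers
    rng = np.random.default_rng(1)
    M = rng.exponential(scale=1e-9, size=(n_receptors, n_sources))  # s/m^3, toy
    e = rng.uniform(0.0, 1e3, size=n_sources)                       # kg/s, toy

    c = M @ e   # concentrations at the receptors
    print("receptor concentrations:", c)
    print(f"forward runs needed: {n_sources}, backward runs needed: {n_receptors}")
    ```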

  15. A simplified analytical dose calculation algorithm accounting for tissue heterogeneity for low-energy brachytherapy sources.

    PubMed

    Mashouf, Shahram; Lechtman, Eli; Beaulieu, Luc; Verhaegen, Frank; Keller, Brian M; Ravi, Ananth; Pignol, Jean-Philippe

    2013-09-21

    The American Association of Physicists in Medicine Task Group No. 43 (AAPM TG-43) formalism is the standard for seed brachytherapy dose calculations. For breast seed implants, however, Monte Carlo simulations reveal large errors due to tissue heterogeneity. Since TG-43 includes several factors to account for source geometry, anisotropy, and strength, we propose an additional correction factor, called the inhomogeneity correction factor (ICF), that accounts for tissue heterogeneity in Pd-103 brachytherapy. This correction factor is calculated as a function of the medium's linear attenuation coefficient and mass energy absorption coefficient, and it is independent of the source's internal structure. Ultimately, the dose in a heterogeneous medium can be calculated as the product of the dose in water, as calculated by the TG-43 protocol, and the ICF. To validate the ICF methodology, the dose absorbed in spherical phantoms with large tissue heterogeneities was compared using the TG-43 formalism corrected for heterogeneity versus Monte Carlo simulations. The agreement between Monte Carlo simulations and the ICF method remained within 5% in soft tissues up to several centimeters from a Pd-103 source. Compared to Monte Carlo, the ICF method can easily be integrated into a clinical treatment planning system, and it does not require the detailed internal structure of the source or the photon phase space.
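
    A hedged sketch of the ICF idea only, not the authors' exact formula: scale a TG-43 water dose by a correction built from the medium's linear attenuation coefficient and mass energy absorption coefficient near the ~21 keV mean photon energy of Pd-103. All coefficient values below are placeholders.

    ```python
    import numpy as np

    # Illustrative coefficients at ~21 keV (placeholders, not reference data).
    MU_WATER, MU_TISSUE = 0.80, 0.75              # 1/cm
    MUEN_RHO_WATER, MUEN_RHO_TISSUE = 0.54, 0.50  # cm^2/g

    def icf(r_cm):
        """One plausible ICF form: extra attenuation along the path r plus
        the local energy-absorption ratio (illustrative, assumed)."""
        attenuation = np.exp(-(MU_TISSUE - MU_WATER) * r_cm)
        absorption = MUEN_RHO_TISSUE / MUEN_RHO_WATER
        return attenuation * absorption

    def dose_heterogeneous(dose_tg43_water, r_cm):
        # Dose in heterogeneous medium = TG-43 water dose x ICF.
        return dose_tg43_water * icf(r_cm)

    print(f"ICF at r = 2 cm: {icf(2.0):.3f}")
    print(f"corrected dose:  {dose_heterogeneous(1.0, 2.0):.3f} (relative units)")
    ```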

  16. An empirical formula to calculate the full energy peak efficiency of scintillation detectors.

    PubMed

    Badawi, Mohamed S; Abd-Elzaher, Mohamed; Thabet, Abouzeid A; El-khatib, Ahmed M

    2013-04-01

    This work provides an empirical formula to calculate the full energy peak efficiency (FEPE) of different detectors using the effective solid angle ratio derived from experimental measurements. The FEPE curves of a 2″ × 2″ NaI(Tl) detector at seven axial distances from the detector were measured over a wide energy range, from 59.53 to 1408 keV, using standard point sources. The analysis distinguished the effects of the source energy and the source-to-detector distance. Good agreement was observed between the measured and calculated efficiency values for source-to-detector distances of 20, 25, 30, 35, 40, 45 and 50 cm. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. Final Technical Report - SciDAC Cooperative Agreement: Center for Wave Interactions with Magnetohydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schnack, Dalton D.

    Final technical report for research performed by Dr. Thomas G. Jenkins in collaboration with Professor Dalton D. Schnack on the SciDAC Cooperative Agreement: Center for Wave Interactions with Magnetohydrodynamics, DE-FC02-06ER54899, for the period 8/15/06 - 8/14/11. This report centers on the slow MHD physics campaign work performed by Dr. Jenkins while at UW-Madison and then at Tech-X Corporation. To make progress on the problem of how RF-induced currents affect magnetic island evolution in toroidal plasmas, a set of research approaches is outlined. Three approaches can be addressed in parallel: (1) analytically prescribe an additional term in Ohm's law to model the effect of localized ECCD current drive; (2) introduce an additional evolution equation for the Ohm's law source term, establishing an RF source 'box' where information from the RF code couples to the fluid evolution; and (3) carry out a more rigorous analytic calculation treating the additional RF terms in a closure problem. These approaches rely on reinvigorating the computational modeling of resistive and neoclassical tearing modes with present-day versions of the numerical tools. For the RF community, the relevant action item is that RF ray tracing codes need to be modified so that general three-dimensional spatial information can be obtained. Further, interface efforts between the two codes require work, as does an assessment of the numerical stability properties of the procedures to be used.

  18. Engineering description of the ascent/descent bet product

    NASA Technical Reports Server (NTRS)

    Seacord, A. W., II

    1986-01-01

    The Ascent/Descent output product is produced in the OPIP routine from three files which constitute its input. One of these, OPIP.IN, contains mission-specific parameters. Meteorological data, such as atmospheric wind velocities, temperatures, and density, are obtained from the second file, the Corrected Meteorological Data File (METDATA). The third file is the TRJATTDATA file, which contains the time-tagged state vectors that combine trajectory information from the Best Estimate of Trajectory (BET) filter, LBRET5, and the Best Estimate of Attitude (BEA) derived from IMU telemetry. Each term in the two output data files (BETDATA and the Navigation Block, or NAVBLK) is defined. The description of the BETDATA file includes an outline of the algorithm used to calculate each term. To facilitate describing the algorithms, a nomenclature is defined. The description of the nomenclature includes a definition of the coordinate systems used. The NAVBLK file contains navigation input parameters. Each term in NAVBLK is defined and its source is listed. The production of NAVBLK requires only two computational algorithms. These two algorithms, which compute the terms DELTA and RSUBO, are described. Finally, the distribution of data in the NAVBLK records is listed.

  19. Gravitational lens optical scalars in terms of energy-momentum distributions in the cosmological framework

    NASA Astrophysics Data System (ADS)

    Boero, Ezequiel F.; Moreschi, Osvaldo M.

    2018-04-01

    We present new results on gravitational lensing over cosmological Robertson-Walker backgrounds which extend and generalize previous works. Our expressions show the presence of new terms and factors which have been neglected in the literature on the subject. The new equations derived here for the optical scalars make it possible to deal with more general matter content, including sources with non-Newtonian components of the energy-momentum tensor and arbitrary motion. Our treatment is within the framework of weak gravitational lensing, in which first-order effects of the curvature are considered. We have been able to make all calculations without referring to the concept of deviation angle. This, in turn, makes the presentation shorter and also allows for the consideration of global effects on the Robertson-Walker background that have been neglected in the literature. We also discuss two intensity magnifications that we define in this article: one coming from a natural geometrical construction in terms of the affine distance, which we here call μ̃, and the other adapted to cosmological discussions in terms of the redshift, which we call μ′. We show that the natural intensity magnification μ̃ coincides with the standard angular magnification (μ).

  20. Chemical characteristic and toxicity assessment of particle associated PAHs for the short-term anthropogenic activity event: During the Chinese New Year's Festival in 2013.

    PubMed

    Shi, Guo-Liang; Liu, Gui-Rong; Tian, Ying-Ze; Zhou, Xiao-Yu; Peng, Xing; Feng, Yin-Chang

    2014-06-01

    PM10 and PM2.5 samples were simultaneously collected during a period which covered the Chinese New Year's (CNY) Festival. The concentrations of particulate matter (PM) and 16 polycyclic aromatic hydrocarbons (PAHs) were measured. The possible source contributions and toxicity risks were estimated for Festival and non-Festival periods. According to the diagnostic ratios and Multilinear Engine 2 (ME2), three sources were identified and their contributions were calculated: vehicle emission (48.97% for PM10, 53.56% for PM2.5), biomass & coal combustion (36.83% for PM10, 28.76% for PM2.5), and cook emission (22.29% for PM10, 27.23% for PM2.5). An interesting result was found: although the PAHs did not come directly from the fireworks displays, they were still indirectly influenced by the biomass combustion associated with the fireworks displays. Additionally, the toxicity risks of the different sources were estimated by Multilinear Engine 2-BaP equivalent (ME2-BaPE): vehicle emission (54.01% for PM10, 55.42% for PM2.5), cook emission (25.59% for PM10, 29.05% for PM2.5), and biomass & coal combustion (20.90% for PM10, 14.28% for PM2.5). It is worth noting that the toxicity contribution of cook emission was considerable during the Festival period. The findings can provide useful information for protecting urban human health, as well as for developing effective air control strategies during special short-term anthropogenic activity events. Copyright © 2014 Elsevier B.V. All rights reserved.
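
    A sketch of the BaP-equivalent (BaPE) toxicity weighting that underlies ME2-BaPE: each PAH concentration is scaled by a toxic equivalency factor (TEF) relative to benzo[a]pyrene. The TEF and concentration values below are illustrative only; real analyses take TEFs from published sets.

    ```python
    # Illustrative TEFs relative to benzo[a]pyrene (placeholders).
    tef = {"benzo[a]pyrene": 1.0, "dibenz[a,h]anthracene": 1.0,
           "benzo[a]anthracene": 0.1, "chrysene": 0.01, "pyrene": 0.001}

    # Hypothetical measured concentrations, ng/m^3.
    conc_ng_m3 = {"benzo[a]pyrene": 1.2, "dibenz[a,h]anthracene": 0.3,
                  "benzo[a]anthracene": 2.5, "chrysene": 3.1, "pyrene": 6.0}

    bape = sum(conc_ng_m3[p] * tef[p] for p in tef)
    print(f"BaP-equivalent concentration: {bape:.2f} ng/m^3")
    # Apportioning this BaPE (rather than raw PAH mass) across factors is
    # what shifts the ranking toward the more toxic sources.
    ```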

  1. Skyshine at neutron energies less than or equal to 400 MeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alsmiller, A.G. Jr.; Barish, J.; Childs, R.L.

    1980-10-01

    The dose equivalent at an air-ground interface as a function of distance from an assumed azimuthally symmetric point source of neutrons can be calculated as a double integral. The integration is over the source strength as a function of energy and polar angle, weighted by an importance function that depends on the source variables and on the distance from the source to the field point. The neutron importance function for a source 15 m above the ground emitting only into the upper hemisphere has been calculated using the two-dimensional discrete ordinates code DOT and the first-collision source code GRTUNCL in the adjoint mode. This importance function is presented for neutron energies less than or equal to 400 MeV, for source cosine intervals of 1 to 0.8, 0.8 to 0.6, 0.6 to 0.4, 0.4 to 0.2, and 0.2 to 0, and for various distances from the source to the field point. As part of the adjoint calculations, a photon importance function is also obtained. This importance function for photon energies less than or equal to 14 MeV and for various source cosine intervals and source-to-field-point distances is also presented. These importance functions may be used to obtain skyshine dose equivalent estimates for any known source energy-angle distribution.
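
    A sketch of how the tabulated importance function would be folded with a known source distribution: the double integral becomes a sum over energy groups and the source-cosine intervals listed above. The S and I arrays below are random placeholders standing in for the real tables.

    ```python
    import numpy as np

    cos_bins = [(1.0, 0.8), (0.8, 0.6), (0.6, 0.4), (0.4, 0.2), (0.2, 0.0)]
    n_groups = 10   # energy groups up to 400 MeV (schematic, assumed)

    rng = np.random.default_rng(2)
    # Source strength per (group, cosine bin), n/s -- placeholder values.
    S = rng.uniform(0, 1e6, size=(n_groups, len(cos_bins)))
    # Importance function per (group, cosine bin) at one source-to-field
    # distance, (rem/h) per (n/s) -- placeholder values.
    I = rng.uniform(0, 1e-15, size=(n_groups, len(cos_bins)))

    dose_equivalent = float(np.sum(S * I))  # discretized double integral
    print(f"skyshine dose equivalent at the field point: {dose_equivalent:.3e} rem/h")
    ```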

  2. SU-F-T-54: Determination of the AAPM TG-43 Brachytherapy Dosimetry Parameters for A New Titanium-Encapsulated Yb-169 Source by Monte Carlo Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reynoso, F; Washington University School of Medicine, St. Louis, MO; Munro, J

    2016-06-15

    Purpose: To determine the AAPM TG-43 brachytherapy dosimetry parameters of a new titanium-encapsulated Yb-169 source designed to maximize the dose enhancement during gold nanoparticle-aided radiation therapy (GNRT). Methods: An existing Monte Carlo (MC) model of the titanium-encapsulated Yb-169 source, which was described in the current investigators' published MC optimization study, was modified based on the source manufacturer's detailed specifications, resulting in an accurate model of the titanium-encapsulated Yb-169 source that was actually manufactured. MC calculations were then performed using the MCNP5 code system and the modified source model, in order to obtain a complete set of the AAPM TG-43 parameters for the new Yb-169 source. Results: The MC-calculated dose rate constant for the new titanium-encapsulated Yb-169 source was 1.05 ± 0.03 cGy per h·U, about 10% lower than the values reported for conventional stainless steel-encapsulated Yb-169 sources. The source anisotropy and radial dose function for the new source were found to be similar to those reported for conventional Yb-169 sources. Conclusion: In this study, the AAPM TG-43 brachytherapy dosimetry parameters of a new titanium-encapsulated Yb-169 source were determined by MC calculations. The current results suggest that the use of titanium, instead of stainless steel, to encapsulate the Yb-169 core would not lead to any major change in the dosimetric characteristics of the Yb-169 source, while it would allow more low-energy photons to be transmitted through the source filter, thereby leading to an increased dose enhancement during GNRT. This investigation was supported by DOD/PCRP grant W81XWH-12-1-0198.

  3. Calculated effects of backscattering on skin dosimetry for nuclear fuel fragments.

    PubMed

    Aydarous, A Sh

    2008-01-01

    The size of hot particles in nuclear fallout ranges from 10 nm to 20 μm for worldwide weapons fallout. Hot particles from nuclear power reactors can be significantly bigger (100 μm to several millimetres). Electron backscattering from such particles is a prominent secondary effect in beta dosimetry for radiological protection purposes, such as skin dosimetry. In this study, the effect of electron backscattering on skin dose due to hot-particle contamination is investigated. The parameters examined include detector area, source radius, source energy, scattering material, and source density. The Monte Carlo N-Particle code (MCNP4C) was used to calculate the depth-dose distribution for 10 different beta sources and various materials. The backscattering dose factors (BSDF) were then calculated. The magnitude of the BSDF depends significantly on detector area, source radius, and scatterer. It is clearly shown that the BSDF increases with increasing detector area. For high-Z scatterers, the BSDF can reach as high as 40 and 100% for sources with radii of 0.1 and 0.0001 cm, respectively. The variation of the BSDF with source radius, source energy, and source density is discussed.

  4. 26 CFR 1.737-3 - Basis adjustments; Recovery rules.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... Properties A1, A2, and A3 is long-term, U.S.-source capital gain or loss. The character of gain on Property A4 is long-term, foreign-source capital gain. B contributes Property B, nondepreciable real property...-term, foreign-source capital gain ($3,000 total gain under section 737 × $2,000 net long-term, foreign...

  5. 26 CFR 1.737-3 - Basis adjustments; Recovery rules.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... Properties A1, A2, and A3 is long-term, U.S.-source capital gain or loss. The character of gain on Property A4 is long-term, foreign-source capital gain. B contributes Property B, nondepreciable real property...-term, foreign-source capital gain ($3,000 total gain under section 737 × $2,000 net long-term, foreign...

  6. 26 CFR 1.737-3 - Basis adjustments; Recovery rules.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Properties A1, A2, and A3 is long-term, U.S.-source capital gain or loss. The character of gain on Property A4 is long-term, foreign-source capital gain. B contributes Property B, nondepreciable real property...-term, foreign-source capital gain ($3,000 total gain under section 737 × $2,000 net long-term, foreign...

  7. 26 CFR 1.737-3 - Basis adjustments; Recovery rules.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... Properties A1, A2, and A3 is long-term, U.S.-source capital gain or loss. The character of gain on Property A4 is long-term, foreign-source capital gain. B contributes Property B, nondepreciable real property...-term, foreign-source capital gain ($3,000 total gain under section 737 × $2,000 net long-term, foreign...

  8. 26 CFR 1.737-3 - Basis adjustments; Recovery rules.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Properties A1, A2, and A3 is long-term, U.S.-source capital gain or loss. The character of gain on Property A4 is long-term, foreign-source capital gain. B contributes Property B, nondepreciable real property...-term, foreign-source capital gain ($3,000 total gain under section 737 × $2,000 net long-term, foreign...

  9. APT: Aperture Photometry Tool

    NASA Astrophysics Data System (ADS)

    Laher, Russ

    2012-08-01

    Aperture Photometry Tool (APT) is software for astronomers and students interested in manually exploring the photometric qualities of astronomical images. It has a graphical user interface (GUI) which allows the image data associated with aperture photometry calculations for point and extended sources to be visualized and, therefore, more effectively analyzed. Mouse-clicking on a source in the displayed image draws a circular or elliptical aperture and sky annulus around the source and computes the source intensity and its uncertainty, along with several commonly used measures of the local sky background and its variability. The results are displayed and can be optionally saved to an aperture-photometry-table file and plotted on graphs in various ways using functions available in the software. APT is geared toward processing sources in a small number of images and is not suitable for bulk processing a large number of images, unlike other aperture photometry packages (e.g., SExtractor). However, APT does have a convenient source-list tool that enables calculations for a large number of detections in a given image. The source-list tool can be run either in automatic mode to generate an aperture photometry table quickly or in manual mode to permit inspection and adjustment of the calculation for each individual detection. APT displays a variety of useful graphs, including image histogram, aperture slices, source scatter plot, sky scatter plot, sky histogram, radial profile, curve of growth, and aperture-photometry-table scatter plots and histograms. APT has functions for customizing calculations, including outlier rejection, pixel "picking" and "zapping," and a selection of source and sky models. The radial-profile-interpolation source model, accessed via the radial-profile-plot panel, allows recovery of source intensity from pixels with missing data and can be especially beneficial in crowded fields.
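
    A minimal sketch of the aperture photometry APT performs interactively (not APT's actual code): sum the pixels inside a circular aperture and subtract the local sky, estimated as the median of a surrounding annulus. The image, source position, and radii below are synthetic.

    ```python
    import numpy as np

    def aperture_photometry(img, x0, y0, r_ap, r_in, r_out):
        yy, xx = np.indices(img.shape)
        r = np.hypot(xx - x0, yy - y0)
        ap = r <= r_ap                    # source aperture pixels
        sky = (r >= r_in) & (r <= r_out)  # sky annulus pixels
        sky_level = np.median(img[sky])   # robust local background estimate
        flux = img[ap].sum() - sky_level * ap.sum()
        return flux, sky_level

    rng = np.random.default_rng(3)
    img = rng.normal(100.0, 5.0, size=(64, 64))  # flat sky + noise (synthetic)
    img[30:34, 30:34] += 500.0                   # fake point source

    flux, sky = aperture_photometry(img, 31.5, 31.5, r_ap=6, r_in=10, r_out=16)
    print(f"background-subtracted flux: {flux:.0f} counts (sky = {sky:.1f})")
    ```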

  10. On the high Mach number shock structure singularity caused by overreach of Maxwellian molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Myong, R. S., E-mail: myong@gnu.ac.kr

    2014-05-15

    The high Mach number shock structure singularity arising in moment equations of the Boltzmann equation was investigated. The source of the singularity is shown to be the unbalanced treatment between two high-order kinematic and dissipation terms caused by the overreach of the Maxwellian molecule assumption. In compressive gaseous flow, the high-order stress-strain coupling term of quadratic nature will grow far faster than the strain term, resulting in an imbalance with the linear dissipation term and eventually a blow-up singularity in high thermal nonequilibrium. On the other hand, the singularity arising from unbalanced treatment does not occur in the case of velocity shear and expansion flows, since the high-order effects are cancelled under the constraint of the free-molecular asymptotic behavior. As an alternative method to achieve the balanced treatment, Eu's generalized hydrodynamics, consistent with the second law of thermodynamics, was revisited. After introducing the canonical distribution function in exponential form and applying the cumulant expansion to the explicit calculation of the dissipation term, a natural platform suitable for the balanced treatment was derived. The resulting constitutive equation with the nonlinear factor was then shown to be well-posed for all regimes, effectively removing the high Mach number shock structure singularity.

  11. When probabilistic seismic hazard climbs volcanoes: the Mt. Etna case, Italy - Part 2: Computational implementation and first results

    NASA Astrophysics Data System (ADS)

    Peruzza, Laura; Azzaro, Raffaele; Gee, Robin; D'Amico, Salvatore; Langer, Horst; Lombardo, Giuseppe; Pace, Bruno; Pagani, Marco; Panzera, Francesco; Ordaz, Mario; Suarez, Miguel Leonardo; Tusa, Giuseppina

    2017-11-01

    This paper describes the model implementation and presents results of a probabilistic seismic hazard assessment (PSHA) for the Mt. Etna volcanic region in Sicily, Italy, considering local volcano-tectonic earthquakes. Working in a volcanic region presents new challenges not typically faced in standard PSHA, which are broadly due to the nature of the local volcano-tectonic earthquakes, the cone shape of the volcano and the attenuation properties of seismic waves in the volcanic region. These have been accounted for through the development of a seismic source model that integrates data from different disciplines (historical and instrumental earthquake datasets, tectonic data, etc.; presented in Part 1, by Azzaro et al., 2017) and through the development and software implementation of original tools for the computation, such as a new ground-motion prediction equation and magnitude-scaling relationship specifically derived for this volcanic area, and the capability to account for the surficial topography in the hazard calculation, which influences source-to-site distances. Hazard calculations have been carried out after updating the most recent releases of two widely used PSHA software packages (CRISIS, as in Ordaz et al., 2013; the OpenQuake engine, as in Pagani et al., 2014). Results are computed for short- to mid-term exposure times (10% probability of exceedance in 5 and 30 years, Poisson and time dependent) and spectral amplitudes of engineering interest. A preliminary exploration of the impact of site-specific response is also presented for the densely inhabited eastern flank of Etna, and the change in expected ground motion is finally commented on. These results do not account for M > 6 regional seismogenic sources, which control the hazard at long return periods. However, by focusing on the impact of M < 6 local volcano-tectonic earthquakes, which dominate the hazard at the short- to mid-term exposure times considered in this study, we present a different viewpoint that, in our opinion, is relevant for retrofitting existing buildings and for guiding impending risk-reduction interventions.

  12. Development of a clinical prediction model to calculate patient life expectancy: the measure of actuarial life expectancy (MALE).

    PubMed

    Clarke, M G; Kennedy, K P; MacDonagh, R P

    2009-01-01

    To develop a clinical prediction model enabling the calculation of an individual patient's life expectancy (LE) and survival probability based on age, sex, and comorbidity for use in the joint decision-making process regarding medical treatment. A computer software program was developed with a team of 3 clinicians, 2 professional actuaries, and 2 professional computer programmers. This incorporated statistical spreadsheet and database access design methods. Data sources included life insurance industry actuarial rating factor tables (public and private domain), Government Actuary Department UK life tables, professional actuarial sources, and evidence-based medical literature. The main outcome measures were numerical and graphical display of comorbidity-adjusted LE; 5-, 10-, and 15-year survival probability; in addition to generic UK population LE. Nineteen medical conditions, which impacted significantly on LE in actuarial terms and were commonly encountered in clinical practice, were incorporated in the final model. Numerical and graphical representations of statistical predictions of LE and survival probability were successfully generated for patients with either no comorbidity or a combination of the 19 medical conditions included. Validation and testing, including actuarial peer review, confirmed consistency with the data sources utilized. The evidence-based actuarial data utilized in this computer program design represent a valuable resource for use in the clinical decision-making process, where an accurate objective assessment of patient LE can so often make the difference between patients being offered or denied medical and surgical treatment. Ongoing development to incorporate additional comorbidities and enable Web-based access will enhance its use further.
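
    A sketch of the actuarial mechanics behind such a model (illustrative only, not the program's tables or rating factors): annual death probabilities come from a life table, comorbidity is applied as a multiplicative rating factor as in insurance underwriting, and life expectancy is accumulated from the survival curve. A toy Gompertz curve stands in for the Government Actuary's Department life tables.

    ```python
    import math

    def q_x(age, rating=1.0):
        """Annual probability of death at a given age (toy Gompertz model)."""
        return min(1.0, rating * 0.0001 * math.exp(0.09 * age))

    def life_expectancy(age, rating=1.0, horizon=120):
        surv, le = 1.0, 0.0
        for x in range(age, horizon):
            p_year = 1.0 - q_x(x, rating)
            le += surv * (0.5 + 0.5 * p_year)  # credit half a year in death year
            surv *= p_year
        return le

    for rating in (1.0, 2.0):   # 2.0 ~ a comorbidity doubling mortality
        print(f"LE at 65, rating {rating:.1f}: "
              f"{life_expectancy(65, rating):.1f} years")
    ```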

  13. Rapid Acute Dose Assessment Using MCNP6

    NASA Astrophysics Data System (ADS)

    Owens, Andrew Steven

    Acute radiation doses due to physical contact with a high-activity radioactive source have proven to be an occupational hazard. Multiple radiation injuries have been reported due to manipulating a radioactive source with bare hands or placing a radioactive source inside a shirt or pants pocket. A reconstruction of the radiation dose must be performed to properly assess and medically manage the potential biological effects of such doses. Using the reference computational phantoms defined by the International Commission on Radiological Protection (ICRP) and the Monte Carlo N-Particle transport code (MCNP6), dose rate coefficients are calculated for common acute exposures to beta and photon radiation sources. The research investigates doses due to having a radioactive source in either a breast pocket or a back pants pocket. The dose rate coefficients are calculated for discrete energies and can be interpolated for any given photon or beta emission energy. The dose rate coefficients allow quick calculation of whole-body dose, organ dose, and/or skin dose if the source, activity, and time of exposure are known. Doses calculated with the dose rate coefficients are compared to results from International Atomic Energy Agency (IAEA) reports on accidents that occurred in Gilan, Iran and Yanango, Peru. Skin and organ doses calculated with the dose rate coefficients agree, but there is a large discrepancy between whole-body doses assessed using biodosimetry and those assessed using the dose rate coefficients.
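
    A sketch of how tabulated dose rate coefficients (DRCs) would be applied: interpolate the coefficient at the emission energy, then dose = DRC(E) × activity × exposure time. The table below is invented; the thesis tabulates MCNP6-computed values for the ICRP phantoms.

    ```python
    import numpy as np

    table_E = np.array([0.05, 0.1, 0.5, 1.0, 2.0])               # MeV
    table_drc = np.array([2e-16, 5e-16, 3e-15, 7e-15, 1.5e-14])  # Gy/s per Bq, toy

    def drc(energy_mev):
        # Log-log interpolation between the tabulated discrete energies.
        return float(np.exp(np.interp(np.log(energy_mev),
                                      np.log(table_E), np.log(table_drc))))

    activity_bq = 3.7e10   # 1 Ci source (hypothetical scenario)
    t_seconds = 600.0      # 10 minutes in a pocket
    dose_gy = drc(0.662) * activity_bq * t_seconds  # e.g. Cs-137 photon energy
    print(f"organ dose estimate: {dose_gy:.2f} Gy")
    ```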

  14. Search for neutrino transitions to sterile states using an intense beta source

    NASA Astrophysics Data System (ADS)

    Oralbaev, A. Yu.; Skorokhvatov, M. D.; Titov, O. A.

    2017-11-01

    The results of beta spectrum calculations for two 144Pr decay branches are presented, which are of interest for reconstructing the spectrum of antineutrinos from the 144Ce-144Pr source to be used in the SOX experiment on the search for sterile neutrinos. The main factors affecting the beta spectrum are analyzed, their calculation methods are given, and calculations are compared with experiment.

  15. Scoping Calculations of Power Sources for Nuclear Electric Propulsion

    NASA Technical Reports Server (NTRS)

    Difilippo, F. C.

    1994-01-01

    This technical memorandum describes models and calculational procedures to fully characterize the nuclear island of power sources for nuclear electric propulsion. Two computer codes were written: one for the gas-cooled NERVA derivative reactor and the other for liquid metal-cooled fuel pin reactors. These codes are going to be interfaced by NASA with the balance of plant in order to make scoping calculations for mission analysis.

  16. Solar quiet day ionospheric source current in the West African region.

    PubMed

    Obiekezie, Theresa N; Okeke, Francisca N

    2013-05-01

    The solar quiet (Sq) day source currents were calculated using magnetic data obtained from a chain of 10 magnetotelluric stations installed in the African sector during the French participation in the International Equatorial Electrojet Year (IEEY) experiment in Africa. The components of the geomagnetic field recorded at the stations from January to December 1993 during the experiment were separated into the source and induced components of Sq using the spherical harmonic analysis (SHA) method. The range of the source current was calculated, enabling a full year's change in the Sq source current system to be viewed.

  17. Hybrid Skyshine Calculations for Complex Neutron and Gamma-Ray Sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shultis, J. Kenneth

    2000-10-15

    A two-step hybrid method is described for computationally efficient estimation of neutron and gamma-ray skyshine doses far from a shielded source. First, the energy and angular dependence of radiation escaping into the atmosphere from a source containment is determined by a detailed transport model such as MCNP. Then, an effective point source with this energy and angular dependence is used in the integral line-beam method to transport the radiation through the atmosphere up to 2500 m from the source. An example spent-fuel storage cask is analyzed with this hybrid method and compared to detailed MCNP skyshine calculations.

  18. Impact of partial nitritation degree and C/N ratio on simultaneous Sludge Fermentation, Denitrification and Anammox process.

    PubMed

    Wang, Bo; Peng, Yongzhen; Guo, Yuanyuan; Yuan, Yue; Zhao, Mengyue; Wang, Shuying

    2016-11-01

    This study presents a novel process (i.e., PN/SFDA) to remove nitrogen from low-C/N domestic wastewater. The process mainly involves two reactors: a pre-sequencing batch reactor for partial nitritation (termed the PN-SBR) and an anoxic reactor for integrated denitrification and anammox with carbon sources produced from sludge fermentation (termed the SFDA). During long-term runs, the NO2(-)/NH4(+) ratio (i.e., NO2(-)-N/NH4(+)-N calculated by mole) in the PN-SBR effluent was gradually increased from 0.2 to 37 by extending the aerobic duration, meaning that a transition from partial to full nitritation could be achieved. The impact of the partial nitritation degree on the SFDA process was investigated, and the results showed that NO2(-)/NH4(+) ratios between 2 and 10 were appropriate for the co-existence of denitrification and anammox in the SFDA reactor, with denitrification rather than anammox contributing more to nitrogen removal. Further batch tests indicated that anammox collaborated well with denitrification at low C/N (1.0 in this study). Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Excitation of Earth Rotation Variations "Observed" by Time-Variable Gravity

    NASA Technical Reports Server (NTRS)

    Chao, Ben F.; Cox, C. M.

    2005-01-01

    Time-variable gravity measurements have been made over the past two decades using the space geodetic technique of satellite laser ranging, and more recently by the GRACE satellite mission with improved spatial resolution. The degree-2 harmonic components of the time-variable gravity field contain important information about the Earth's length-of-day and polar motion excitation functions, in a way independent of the traditional "direct" Earth rotation measurements made by, for example, very-long-baseline interferometry and GPS. In particular, the (degree 2, order 1) components give the mass term of the polar motion excitation; the (2,0) component, under certain mass conservation conditions, gives the mass term of the length-of-day excitation. Combining these with yet another independent source of angular momentum estimates calculated from global geophysical fluid models (for example, the atmospheric angular momentum, in both mass and motion terms) can in principle lead to new insights into the dynamics of the excitation processes of the Earth rotation variations, particularly the role, or the lack thereof, of the cores.

  20. A method to calculate the gamma ray detection efficiency of a cylindrical NaI (Tl) crystal

    NASA Astrophysics Data System (ADS)

    Ahmadi, S.; Ashrafi, S.; Yazdansetad, F.

    2018-05-01

    Given the wide range of applications of the NaI(Tl) detector in the industrial and medical sectors, computing its detection efficiency at different distances from a radioactive source, especially for calibration purposes, is a recurring subject of radiation detection studies. In this work, a cylindrical NaI(Tl) scintillator, 2 in. in both radius and height, was used, and by changing the radial, axial, and diagonal positions of an isotropic 137Cs point source relative to the detector, the solid angles and the interaction probabilities of gamma photons with the detector's sensitive area were calculated. The calculations express the geometric and intrinsic efficiency as functions of the detector's dimensions and the position of the source. The calculation model is in good agreement with experiment and with MCNPX simulation.
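
    As a worked instance of the geometry factor the paper computes for arbitrary source positions, the standard on-axis case has a closed form: the solid angle subtended by a circular detector face of radius R at a point source a distance d away on its axis is Ω = 2π(1 − d/√(d² + R²)). The crystal radius below matches the 2 in. detector described above; the distances are illustrative.

    ```python
    import math

    def solid_angle_on_axis(R_cm, d_cm):
        # Exact solid angle of a disk of radius R seen from an on-axis point.
        return 2.0 * math.pi * (1.0 - d_cm / math.hypot(d_cm, R_cm))

    R = 2.54 * 2   # 2-in. radius crystal, converted to cm
    for d in (5.0, 10.0, 20.0):
        omega = solid_angle_on_axis(R, d)
        geom_eff = omega / (4.0 * math.pi)  # fraction of emissions hitting the face
        print(f"d = {d:4.1f} cm: solid angle = {omega:.4f} sr, "
              f"geometric efficiency = {geom_eff:.4f}")
    # Total efficiency = geometric efficiency x intrinsic efficiency, where
    # the intrinsic part folds in the interaction probability in the crystal.
    ```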

  1. A study on the uncertainty based on Meteorological fields on Source-receptor Relationships for Total Nitrate in the Northeast Asia

    NASA Astrophysics Data System (ADS)

    Sunwoo, Y.; Park, J.; Kim, S.; Ma, Y.; Chang, I.

    2010-12-01

    Northeast Asia hosts more than one third of the world's population, and its pollutant emissions tend to increase rapidly because of economic growth and rising high-intensity energy consumption. The characteristics of air pollutant emissions and transport have become a national issue, in terms of not only environmental aspects but also long-range transboundary transport. Meteorologically, the region lies in the westerlies, which means that air pollutants emitted from China can be delivered to South Korea. Considering meteorological factors is therefore important for understanding air pollution phenomena. In this study, we used MM5 (Fifth-Generation Mesoscale Model) and WRF (Weather Research and Forecasting Model) to produce the meteorological fields. We analyzed the physics options in each model and the differences due to the characteristics of WRF and MM5. We analyze the uncertainty of source-receptor relationships for total nitrate due to the meteorological fields in Northeast Asia. We produced each set of meteorological fields using the same domain, the same initial and boundary conditions, and the most similar physics options. S-R relationships in terms of amount and fraction for total nitrate (the sum of N from HNO3, nitrate, and PAN) were calculated by EMEP method 3.

  2. RADTRAD: A simplified model for RADionuclide Transport and Removal And Dose estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humphreys, S.L.; Miller, L.A.; Monroe, D.K.

    1998-04-01

    This report documents the RADTRAD computer code developed for the U.S. Nuclear Regulatory Commission (NRC) Office of Nuclear Reactor Regulation (NRR) to estimate transport and removal of radionuclides and dose at selected receptors. The document includes a users' guide to the code, a description of the technical basis for the code, the quality assurance and code acceptance testing documentation, and a programmers' guide. The RADTRAD code can be used to estimate the containment release using either the NRC TID-14844 or NUREG-1465 source terms and assumptions, or a user-specified table. In addition, the code can account for a reduction in the quantity of radioactive material due to containment sprays, natural deposition, filters, and other natural and engineered safety features. The RADTRAD code uses a combination of tables and/or numerical models of source term reduction phenomena to determine the time-dependent dose at user-specified locations for a given accident scenario. The code system also provides the inventory, decay chain, and dose conversion factor tables needed for the dose calculation. The RADTRAD code can be used to assess occupational radiation exposures, typically in the control room; to estimate site boundary doses; and to estimate dose attenuation due to modification of a facility or accident sequence.

  3. IAQ MODEL FOR WINDOWS - RISK VERSION 1.0 USER MANUAL

    EPA Science Inventory

    The manual describes the use of the computer model, RISK, to calculate individual exposure to indoor air pollutants from sources. The model calculates exposure due to individual, as opposed to population, activity patterns and source use. The model also provides the capability to...

  4. HP-25 PROGRAMMABLE POCKET CALCULATOR APPLIED TO AIR POLLUTION MEASUREMENT STUDIES: STATIONARY SOURCES

    EPA Science Inventory

    The report should be useful to persons concerned with Air Pollution Measurement Studies of Stationary Industrial Sources. It gives detailed descriptions of 22 separate programs, written specifically for the Hewlett Packard Model HP-25 manually programmable pocket calculator. Each...

  5. HP-65 PROGRAMMABLE POCKET CALCULATOR APPLIED TO AIR POLLUTION MEASUREMENT STUDIES: STATIONARY SOURCES

    EPA Science Inventory

    The handbook is intended for persons concerned with air pollution measurement studies of stationary industrial sources. It gives detailed descriptions of 22 different programs written specifically for the Hewlett Packard Model HP-65 card-programmable pocket calculator. For each p...

  6. Anthropogenic Sources of Arsenic and Copper to Sediments of a Suburban Lake, 1964-1998

    NASA Astrophysics Data System (ADS)

    Rice, K. C.; Conko, K. M.; Hornberger, G. M.

    2002-05-01

    Nonpoint-source pollution from urbanization is becoming a widespread problem. Long-term monitoring data are necessary to document geochemical processes in urban settings and changes in sources of chemical contaminants over time. In the absence of long-term data, lake-sediment cores can be used to reconstruct past processes, because they serve as integrators of sources of pollutants from the contributing airshed and catchment. Lake Anne is a 10.9-ha man-made lake in a 235-ha suburban catchment in Reston, Virginia, with a population density of 1,116 people/km2. Three sediment cores, collected in 1996 and 1997, indicate increasing concentrations of arsenic and copper since 1964, when the lake was formed. The cores were compared to a core collected from a forested catchment in the same airshed that showed no increases in concentrations of these elements. Neither an increase in atmospheric deposition nor diagenesis and remobilization were responsible for the trends in the Lake Anne cores. Mass balances of sediment, arsenic, and copper were calculated using 1998 data on precipitation, streamwater, road runoff, and a laboratory leaching experiment on pressure-treated lumber. Sources of arsenic to the lake in 1998 were in-lake leaching of pressure-treated lumber (52%) and streamwater (47%). Road runoff was a greater (93%) source of copper than leaching of pressure-treated lumber (4%). Atmospheric deposition was an insignificant source (<3%) of both elements. Urbanization of the catchment was confirmed as a major cause of the increasing arsenic and copper in the lake cores through an annual historical reconstruction of the deposition of sediment, arsenic, and copper to the lake for 1964-1997. Aerial photography indicated that the area of roads and parking lots in the catchment increased to 26% by 1997 and that the number of docks on the lake also increased over time. The increased mass of arsenic and copper in the lake sediments corresponded to the increased amount of pressure-treated lumber in the lake, and the mass of copper also corresponded to the increase in paved surfaces in the catchment.

  7. Obtaining source current density related to irregularly structured electromagnetic target field inside human body using hybrid inverse/FDTD method.

    PubMed

    Han, Jijun; Yang, Deqiang; Sun, Houjun; Xin, Sherman Xuegang

    2017-01-01

    The inverse method is inherently suitable for calculating the distribution of source current density related to an irregularly structured electromagnetic target field. However, the present form of the inverse method cannot calculate complex field-tissue interactions. A novel hybrid inverse/finite-difference time domain (FDTD) method that can calculate the complex field-tissue interactions for the inverse design of a source current density related to an irregularly structured electromagnetic target field is proposed. A Huygens' equivalent surface is established as a bridge between the inverse and FDTD methods. The distribution of the radiofrequency (RF) magnetic field on the Huygens' equivalent surface is obtained using the FDTD method, taking into account the complex field-tissue interactions within the human body model. The magnetic field obtained on the Huygens' equivalent surface is regarded as the new target field. The current density on the designated source surface is then derived using the inverse method. The homogeneity of the target magnetic field and the specific energy absorption rate are calculated to verify the proposed method.

  8. A Cost to Benefit Analysis of a Next Generation Electric Power Distribution System

    NASA Astrophysics Data System (ADS)

    Raman, Apurva

    This thesis provides a cost-to-benefit analysis of the proposed next generation of distribution systems, the Future Renewable Electric Energy Distribution Management (FREEDM) system. With the increasing penetration of renewable energy sources onto the grid, it becomes necessary to have an infrastructure that allows for easy integration of these resources, coupled with features like enhanced reliability and fast protection from faults. The Solid State Transformer (SST) and the Fault Isolation Device (FID) make up the core of the FREEDM system and have large investment costs. Some key features of the FREEDM system include improved power flow control, compact design, and unity power factor operation. Customers may see a reduction in their electricity bills for using renewable sources of generation, and large subsidies may be offered to encourage the use of renewable energy. This thesis is an attempt to quantify the benefits offered by the FREEDM system in monetary terms and to calculate the time in years required to recover the investments made. The elevated cost of FIDs needs to be justified by the advantages they offer. The effect of different interest rates on the payback period is also studied, and the calculated payback periods are assessed for viability. A comparison is made between the active power losses on a distribution feeder that uses distribution-level magnetic transformers and one that uses SSTs. The reduction in annual active power losses for the feeder using SSTs is translated into annual cost savings relative to the conventional case with magnetic transformers. Since the FREEDM system encourages operation at unity power factor, the need to install capacitor banks for power factor correction is eliminated, and this is also reflected in cost savings. The FREEDM system offers enhanced reliability when compared to a conventional system, and the payback periods observed support the concept of introducing the FREEDM system.
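
    A sketch of the payback calculation described above: count the years until the cumulative discounted annual savings recover the up-front investment, at several interest rates. All monetary values are hypothetical placeholders, not the thesis's figures.

    ```python
    def discounted_payback(investment, annual_saving, rate, max_years=100):
        cumulative = 0.0
        for year in range(1, max_years + 1):
            cumulative += annual_saving / (1.0 + rate) ** year
            if cumulative >= investment:
                return year
        return None   # never pays back within the horizon

    investment = 2.0e6      # SST/FID infrastructure cost, hypothetical
    annual_saving = 1.5e5   # loss reduction + deferred capacitor banks, hypothetical

    for rate in (0.03, 0.05, 0.08):
        yrs = discounted_payback(investment, annual_saving, rate)
        print(f"interest rate {rate:.0%}: payback in {yrs} years")
    # Higher interest rates discount future savings more heavily, stretching
    # the payback period (or preventing payback entirely) -- the sensitivity
    # studied in the thesis.
    ```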

  9. Bayesian estimation of a source term of radiation release with approximately known nuclide ratios

    NASA Astrophysics Data System (ADS)

    Tichý, Ondřej; Šmídl, Václav; Hofman, Radek

    2016-04-01

    We are concerned with estimation of a source term in the case of an accidental release from a known location, e.g., a power plant. Usually, the source term of an accidental release of radiation comprises a mixture of nuclides. The gamma dose rate measurements do not provide direct information on the source term composition. However, the physical properties of the respective nuclides (deposition properties, decay half-life) can be used when uncertain information on nuclide ratios is available, e.g., from a known reactor inventory. The proposed method is based on a linear inverse model in which the observation vector y arises as a linear combination y = Mx of a source-receptor-sensitivity (SRS) matrix M and the source term x. The task is to estimate the unknown source term x. The problem is ill-conditioned, and further regularization is needed to obtain a reasonable solution. In this contribution, we assume that the nuclide ratios of the release are known with some degree of uncertainty. This knowledge is used to form the prior covariance matrix of the source term x. Due to uncertainty in the ratios, the diagonal elements of the covariance matrix are considered to be unknown. Positivity of the source term estimate is guaranteed by using a multivariate truncated Gaussian distribution. Following the Bayesian approach, we estimate all parameters of the model from the data, so that y, M, and the known ratios are the only inputs of the method. Since the inference of the model is intractable, we follow the Variational Bayes method, yielding an iterative algorithm for estimation of all model parameters. The performance of the method is studied on a simulated 6-hour power plant release in which 3 nuclides are released and 2 nuclide ratios are approximately known. A comparison with a method with unknown nuclide ratios is given to demonstrate the usefulness of the proposed approach. This research is supported by the EEA/Norwegian Financial Mechanism under project MSMT-28477/2014, Source-Term Determination of Radionuclide Releases by Inverse Atmospheric Dispersion Modelling (STRADI).
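
    A sketch of the linear inverse setup only (not the paper's Variational Bayes algorithm): with y = Mx, a Gaussian prior whose spread stands in for the approximately known nuclide ratios, and a nonnegativity constraint, the MAP estimate reduces to a regularized nonnegative least-squares solve. All matrices and magnitudes below are toy values.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(4)
    n_obs, n_t = 24, 6   # dose-rate samples, release time steps (toy)
    M = rng.uniform(0, 1e-12, (n_obs, n_t))                 # SRS matrix, toy
    x_true = np.array([0.0, 5e14, 8e14, 3e14, 1e14, 0.0])   # Bq per step, toy
    y = M @ x_true * (1 + 0.05 * rng.standard_normal(n_obs))  # noisy "data"

    prior_sd = 1e15 * np.ones(n_t)   # per-step prior spread (placeholder)
    # Augmented least squares: min ||y - Mx||^2 + ||diag(1/prior_sd) x||^2, x >= 0.
    # y.std() stands in for the observation noise scale.
    A = np.vstack([M / y.std(), np.diag(1.0 / prior_sd)])
    b = np.concatenate([y / y.std(), np.zeros(n_t)])
    x_map, _ = nnls(A, b)

    print("estimated release per step (Bq):", [f"{v:.2e}" for v in x_map])
    ```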

  10. WE-A-17A-07: Evaluation of a Grid-Based Boltzmann Solver for Nuclear Medicine Voxel-Based Dose Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mikell, J; Kappadath, S; Wareing, T

    Purpose: Grid-based Boltzmann solvers (GBBS) have been successfully implemented in radiation oncology clinics for dose calculations for external photon beams and 192Ir sealed-source brachytherapy. We report on the evaluation of a GBBS for nuclear medicine voxel-based absorbed doses. Methods: Voxel S-values were calculated for monoenergetic betas and photons (1, 0.1, 0.01 MeV), 90Y, and 131I for 3 mm voxel sizes using Monte Carlo (DOSXYZnrc) and GBBS (Attila 8.1-beta5, Transpire). The source distribution was uniform throughout a single voxel. The material was an infinite 1.04 g/cc soft tissue slab. To explore the convergence properties of the GBBS, 3 tetrahedral meshes, 3 energy group structures, 3 different square Chebyshev-Legendre quadrature set orders (Sn), and 4-7 spherical harmonic expansion terms (Pn) were investigated, for a total of 168 discretizations per source. The finest mesh, energy group, and quadrature sets are 8x, 3x, and 16x, respectively, finer than the corresponding coarse discretization. GBBS cross sections were generated with full electron-photon coupling using the vendor's extended CEPXS code. For accuracy, percent differences (%Δ) in source voxel absorbed doses between MC and GBBS are reported for the coarsest and finest discretizations. For convergence, ratios of the two finest discretization solutions are reported along each variable. Results: For the 1 MeV, 0.1 MeV, 0.01 MeV, 90Y, and 131I beta sources, the %Δ in the source voxel for the (coarsest, finest) discretization were (+2.0, −6.4), (−8.0, −7.5), (−13.8, −13.4), (+0.9, −5.5), and (−10.1, −9.0), respectively. The corresponding %Δ for photons were (+33.7, −7.1), (−9.4, −9.8), (−17.4, −15.2), and (−1.7, −7.7), respectively. For betas, the convergence ratio of mesh, energy, Sn, and Pn ranged from 0.991–1.000. For gammas, the convergence ratio of mesh, Sn, and Pn ranged from 0.998–1.003, while the ratio for energy ranged from 0.964–1.001. Conclusions: GBBS is promising for nuclear medicine voxel-based dose calculations. Ongoing work includes evaluating GBBS in bone, lung, and realistic clinical PET/SPECT-based activity distributions. Research reported in this publication was supported by the National Cancer Institute of the National Institutes of Health under Award Number R01CA138986. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

  11. Organic Field Effect Transistors for Large Format Electronics

    DTIC Science & Technology

    2003-06-19

    [Fragmented DTIC excerpt; recoverable figure captions: calculated output characteristics of a p-channel pentacene OFET with 2 µm source-to-drain spacing; Figure 3, calculated output characteristics of a pentacene OFET with an image-charge-induced contact barrier; cross-section view of part of an OFET in the vicinity of a source or drain contact.]

  12. A comprehensive Probabilistic Tsunami Hazard Assessment for the city of Naples (Italy)

    NASA Astrophysics Data System (ADS)

    Anita, G.; Tonini, R.; Selva, J.; Sandri, L.; Pierdominici, S.; Faenza, L.; Zaccarelli, L.

    2012-12-01

    A comprehensive Probabilistic Tsunami Hazard Assessment (PTHA) should consider different tsunamigenic sources (seismic events, slide failures, volcanic eruptions) to calculate the hazard at given target sites. This implies a multi-disciplinary analysis of all natural tsunamigenic sources, in a multi-hazard/risk framework, which also considers the effects of interaction/cascade events. Our approach shows the ongoing effort to analyze a comprehensive PTHA for the city of Naples (Italy), including all types of sources located in the Tyrrhenian Sea, as developed within the Italian project ByMuR (Bayesian Multi-Risk Assessment). The project combines a multi-hazard/risk approach to treat the interactions among different hazards with a Bayesian approach to handle the uncertainties. The natural potential tsunamigenic sources analyzed are: 1) submarine seismic sources located on active faults in the Tyrrhenian Sea and close to the Southern Italian shoreline (we also consider the effects of the inshore seismic sources and the associated active faults, for which we provide rupture properties), 2) mass failures and collapses around the target area (spatially identified on the basis of their propensity to failure), and 3) volcanic sources, mainly pyroclastic flows and collapses from the volcanoes in the Neapolitan area (Vesuvius, Campi Flegrei, and Ischia). All these natural sources are preliminarily analyzed and combined here in order to provide a complete picture of a PTHA for the city of Naples. In addition, the treatment of interaction/cascade effects is formally discussed for the case of significant temporary variations in the short-term PTHA due to an earthquake.

  13. Distribution functions of air-scattered gamma rays above isotropic plane sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michael, J A; Lamonds, H A

    1967-06-01

    Using the moments method of Spencer and Fano and a reconstruction technique suggested by Berger, the authors have calculated energy and angular distribution functions for air-scattered gamma rays emitted from infinite-plane isotropic monoenergetic sources as functions of source energy, radiation incidence angle at the detector, and detector altitude. Incremental and total buildup factors have been calculated for both number and exposure. The results are presented in tabular form for a detector located at altitudes of 3, 50, 100, 200, 300, 400, 500, and 1000 feet above source planes of 15 discrete energies spanning the range of 0.1 to 3.0 MeV. Calculational techniques, including results of sensitivity studies, are discussed, and plots of typical results are presented. (auth)

  14. 10 CFR 50.67 - Accident source term.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... occupancy of the control room under accident conditions without personnel receiving radiation exposures in... 10 Energy 1 2014-01-01 2014-01-01 false Accident source term. 50.67 Section 50.67 Energy NUCLEAR... Conditions of Licenses and Construction Permits § 50.67 Accident source term. (a) Applicability. The...

  15. 10 CFR 50.67 - Accident source term.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... occupancy of the control room under accident conditions without personnel receiving radiation exposures in... 10 Energy 1 2012-01-01 2012-01-01 false Accident source term. 50.67 Section 50.67 Energy NUCLEAR... Conditions of Licenses and Construction Permits § 50.67 Accident source term. (a) Applicability. The...

  16. 10 CFR 50.67 - Accident source term.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... occupancy of the control room under accident conditions without personnel receiving radiation exposures in... 10 Energy 1 2010-01-01 2010-01-01 false Accident source term. 50.67 Section 50.67 Energy NUCLEAR... Conditions of Licenses and Construction Permits § 50.67 Accident source term. (a) Applicability. The...

  17. 10 CFR 50.67 - Accident source term.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... occupancy of the control room under accident conditions without personnel receiving radiation exposures in... 10 Energy 1 2013-01-01 2013-01-01 false Accident source term. 50.67 Section 50.67 Energy NUCLEAR... Conditions of Licenses and Construction Permits § 50.67 Accident source term. (a) Applicability. The...

  18. 10 CFR 50.67 - Accident source term.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... occupancy of the control room under accident conditions without personnel receiving radiation exposures in... 10 Energy 1 2011-01-01 2011-01-01 false Accident source term. 50.67 Section 50.67 Energy NUCLEAR... Conditions of Licenses and Construction Permits § 50.67 Accident source term. (a) Applicability. The...

  19. SU-G-201-17: Verification of Dose Distributions From High-Dose-Rate Brachytherapy Ir-192 Source Using a Multiple-Array-Diode-Detector (MapCheck2)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harpool, K; De La Fuente Herman, T; Ahmad, S

    Purpose: To investigate quantitatively the accuracy of dose distributions for the Ir-192 high-dose-rate (HDR) brachytherapy source calculated by the brachytherapy planning system (BPS) and measured using a multiple-array diode detector in a heterogeneous medium. Methods: A two-dimensional diode-array detector system (MapCheck2) was scanned with a catheter, and the CT images were loaded into the Varian brachytherapy planning system, which uses the TG-43 formalism for dose calculation. Treatment plans were calculated for different combinations of one dwell position with varying irradiation times, and of different dwell positions with a fixed irradiation time, with the source placed 12 mm from the diode-array plane. The calculated dose distributions were compared to the doses measured with MapCheck2, delivered by an Ir-192 source from a Nucletron Microselectron-V2 remote afterloader. The linearity of MapCheck2 was tested for a range of dwell times (2–600 seconds). The angular effect was tested with a 30-second irradiation delivered to the central diode, then moving the source away in increments of 10 mm. Results: Large differences were found between calculated and measured dose distributions. These differences are mainly due to the absence of heterogeneity corrections in the dose calculation and to diode artifacts in the measurements. The dose differences between measured and calculated values due to heterogeneity ranged from 5%–12%, depending on the position of the source relative to the diodes in MapCheck2 and the different heterogeneities in the beam path. The linearity test of the diode detector showed 3.98%, 2.61%, and 2.27% over-response at short irradiation times of 2, 5, and 10 seconds, respectively, and was within 2% for 20 to 600 seconds (p-value = 0.05), depending strongly on MapCheck2 noise. The angular dependency was more pronounced at acute angles, ranging up to 34% at 5.7 degrees. Conclusion: Large deviations between measured and calculated dose distributions for HDR brachytherapy with Ir-192 may be reduced by accounting for medium heterogeneity and the dose artifacts of the diodes. This study demonstrates that multiple-array diode detectors provide a practical and accurate dosimeter to verify doses delivered from the brachytherapy Ir-192 source.

  20. Effects of sound source directivity on auralizations

    NASA Astrophysics Data System (ADS)

    Sheets, Nathan W.; Wang, Lily M.

    2002-05-01

    Auralization, the process of rendering audible the sound field in a simulated space, is a useful tool in the design of acoustically sensitive spaces. The auralization depends on the calculation of an impulse response between a source and a receiver, each of which has certain directional behavior. Many auralizations created to date have used omnidirectional sources; the effects of source directivity on auralizations are a relatively unexplored area. To examine if and how the directivity of a sound source affects the acoustical results obtained from a room, we used directivity data for three sources in a room acoustic modeling program called Odeon. The three sources are: violin, piano, and human voice. The results from using directional data are compared to those obtained using omnidirectional source behavior, both through objective measure calculations and subjective listening tests.

  1. VNAP2: A Computer Program for Computation of Two-dimensional, Time-dependent, Compressible, Turbulent Flow

    NASA Technical Reports Server (NTRS)

    Cline, M. C.

    1981-01-01

    A computer program, VNAP2, for calculating turbulent (as well as laminar and inviscid), steady, and unsteady flow is presented. It solves the two-dimensional, time-dependent, compressible Navier-Stokes equations. The turbulence is modeled with either an algebraic mixing-length model, a one-equation model, or the Jones-Launder two-equation model. The geometry may be a single or a dual flowing stream. The interior grid points are computed using the unsplit MacCormack scheme. Two options to speed up the calculations for high Reynolds number flows are included. The boundary grid points are computed using a reference plane characteristic scheme with the viscous terms treated as source functions. An explicit artificial viscosity is included for shock computations. The fluid is assumed to be a perfect gas. The flow boundaries may be arbitrary curved solid walls, inflow/outflow boundaries, or free jet envelopes. Typical problems that can be solved concern nozzles, inlets, jet powered afterbodies, airfoils, and free jet expansions. The accuracy and efficiency of the program are shown by calculations of several inviscid and turbulent flows. The program and its use are described completely, and six sample cases and a code listing are included.
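
    As a hedged illustration of the MacCormack predictor-corrector idea used for the interior grid points, the sketch below applies the scheme to 1-D linear advection with periodic boundaries rather than the full two-dimensional compressible Navier-Stokes system solved by VNAP2; all grid parameters are invented for the example.

        import numpy as np

        nx, c, cfl = 200, 1.0, 0.8
        dx = 1.0 / nx
        dt = cfl * dx / c
        x = np.linspace(0.0, 1.0, nx, endpoint=False)
        u = np.exp(-200.0 * (x - 0.3) ** 2)   # initial Gaussian pulse

        for _ in range(100):
            # Predictor: forward difference in space
            u_star = u - c * dt / dx * (np.roll(u, -1) - u)
            # Corrector: backward difference on the predicted field, then average
            u = 0.5 * (u + u_star - c * dt / dx * (u_star - np.roll(u_star, 1)))

        print(f"pulse peak after 100 steps: {u.max():.3f}")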

  2. Valuation of social and health effects of transport-related air pollution in Madrid (Spain).

    PubMed

    Monzón, Andrés; Guerrero, María-José

    2004-12-01

    Social impacts of pollutants from mobile sources are a key element in urban design and traffic planning. One of the most relevant impacts is the health effects associated with high-pollution periods. Madrid is a city that suffers chronic congestion and periods of very stable atmospheric conditions; as a result, pollution levels exceed air quality standards for certain pollutants. This paper focuses on the social evaluation of transport-related emissions. A new methodology to evaluate those impacts in monetary terms has been designed and applied to Madrid. The method takes into account costs associated with losses in working time, mortality, and human suffering, calculated using an impact pathway approach linked to CORINAIR emissions. This also allows the calculation of social costs associated with greenhouse gas impacts. As costs have been calculated individually by effect and mode of transport, they can be used to design pricing policies based on real social costs. This paper concludes that the health and social costs of transport-related air pollution in Madrid total 357 Meuro. In these circumstances, the recent public health tax applied in Madrid is clearly consistent with a fair pricing policy on car use.

  3. Modification and validation of an analytical source model for external beam radiotherapy Monte Carlo dose calculations.

    PubMed

    Davidson, Scott E; Cui, Jing; Kry, Stephen; Deasy, Joseph O; Ibbott, Geoffrey S; Vicic, Milos; White, R Allen; Followill, David S

    2016-08-01

    A previously reported dose calculation tool, which combines the accuracy of the dose planning method (DPM) Monte Carlo code with the versatility of a practical analytical multisource model, has been improved and validated for the Varian 6 and 10 MV linear accelerators (linacs). The calculation tool can be used to calculate doses in advanced clinical application studies. One shortcoming of current clinical trials that report dose from patient plans is the lack of a standardized dose calculation methodology. Because commercial treatment planning systems (TPSs) have their own dose calculation algorithms, and the clinical trial participant who uses these systems is responsible for commissioning the beam model, variation exists in the reported calculated dose distributions. Today's modern linac is manufactured to tight specifications, so that variability within a linac model is quite low. The expectation is that a single dose calculation tool for a specific linac model can be used to accurately recalculate dose from patient plans that have been submitted to the clinical trial community from any institution. The calculation tool would provide for a more meaningful outcome analysis. The analytical source model was described by a primary point source, a secondary extra-focal source, and a contaminant electron source. Off-axis energy softening and fluence effects were also included. Hyperbolic functions have been incorporated into the model to correct for the changes in output and in electron contamination with field size. A multileaf collimator (MLC) model is included to facilitate phantom and patient dose calculations. An offset to the MLC leaf positions was used to correct for the rudimentary assumed primary point source. Dose calculations of the depth dose and profiles for field sizes 4 × 4 to 40 × 40 cm agree with measurement within 2% of the maximum dose or 2 mm distance to agreement (DTA) for 95% of the data points tested. The model was capable of predicting the depth of the maximum dose within 1 mm. Anthropomorphic phantom benchmark testing of modulated and patterned MLC treatment plans showed agreement with measurement within 3% in target regions using thermoluminescent dosimeters (TLD). Using radiochromic film normalized to TLD, a gamma criterion of 3% of maximum dose and 2 mm DTA was applied, with a pass rate of at least 85% in the high dose, high gradient, and low dose regions. Finally, recalculations of patient plans using DPM showed good agreement relative to a commercial TPS when comparing dose volume histograms and 2D dose distributions. A unique analytical source model coupled to the dose planning method Monte Carlo dose calculation code has been modified and validated using basic beam data and anthropomorphic phantom measurement. While this tool can be applied in general use for a particular linac model, it was specifically developed to provide a singular methodology to independently assess treatment plan dose distributions from those clinical institutions participating in National Cancer Institute trials.
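
    The gamma test quoted above (3% of maximum dose, 2 mm DTA) can be written down compactly. The sketch below is a minimal 1-D, globally normalized implementation with synthetic profiles, not the film-analysis software used in the study.

        import numpy as np

        def gamma_1d(dose_ref, dose_eval, dx_mm, dd=0.03, dta_mm=2.0):
            """Return the gamma index at each reference point (global dose normalization)."""
            x = np.arange(dose_ref.size) * dx_mm
            d_norm = dd * dose_ref.max()
            gam = np.empty_like(dose_ref)
            for i, (xi, di) in enumerate(zip(x, dose_ref)):
                # Capital-Gamma surface over all evaluated points; gamma is its minimum
                big_gamma_sq = ((x - xi) / dta_mm) ** 2 + ((dose_eval - di) / d_norm) ** 2
                gam[i] = np.sqrt(big_gamma_sq.min())
            return gam

        ref = np.exp(-np.linspace(-3, 3, 121) ** 2)   # synthetic reference profile
        ev = ref * 1.02                               # evaluated profile with a 2% dose error
        pass_rate = (gamma_1d(ref, ev, dx_mm=1.0) <= 1.0).mean()
        print(f"gamma pass rate: {pass_rate:.1%}")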

  4. Modeling diffuse phosphorus emissions to assist in best management practice designing

    NASA Astrophysics Data System (ADS)

    Kovacs, Adam; Zessner, Matthias; Honti, Mark; Clement, Adrienne

    2010-05-01

    A diffuse emission modeling tool has been developed that is appropriate to support decision-making in watershed management. The PhosFate (Phosphorus Fate) tool allows planning of best management practices (BMPs) in catchments and simulation of their possible impacts on phosphorus (P) loads. PhosFate is a simple fate model that calculates diffuse P emissions and their transport within a catchment. The model is a semi-empirical, catchment-scale, distributed-parameter, long-term (annual) average model. It has two main parts: (a) the emission model and (b) the transport model. The main input data are digital maps (elevation, soil types, and land-use categories), statistical data (crop yields, animal numbers, fertilizer amounts, and precipitation distribution), and point information (precipitation, meteorology, soil humus content, point source emissions, and reservoir data). The emission model calculates the diffuse P emissions at their source. It computes the basic elements of the hydrology as well as the soil loss. The model determines the accumulated P surplus of the topsoil and distinguishes dissolved and particulate P forms. Emissions are calculated according to the different pathways (surface runoff, erosion, and leaching). The main outputs are the spatial distribution (cell values) of the runoff components, the soil loss, and the P emissions within the catchment. The transport model joins the independent cells based on the flow tree and follows the further fate of emitted P from each cell to the catchment outlets. Surface runoff and P fluxes are accumulated along the tree, and the field and in-stream retention of the particulate forms are computed. For base flow and subsurface P loads, only channel transport is taken into account because the hydrogeological conditions are less well known. During channel transport, point sources and reservoirs are also considered. The main results of the transport algorithm are the discharge and the dissolved and sediment-bound P loads at any arbitrary point within the catchment. Finally, a simple design procedure has been built up to plan BMPs in the catchments, simulate their possible impacts on diffuse P fluxes, and calculate their approximate costs. Both source- and transport-controlling measures have been included in the planning procedure. The model also allows examination of the impacts of changes in fertilizer application, point source emissions, and climate change on the river loads. Besides this, a simple optimization algorithm has been developed to select the most effective source areas (real hot spots) to be targeted by the interventions. The fate model performed well in Hungarian pilot catchments. Using the calibrated and validated model, different management scenarios were worked out and their effects and costs evaluated and compared to each other. The results show that the approach is suitable for effectively designing BMP measures at the local scale. Combined application of source- and transport-controlling BMPs can result in high P reduction efficiency. Optimization of the interventions can markedly reduce the area demand of the necessary BMPs; consequently, the establishment costs can be decreased. The model can be coupled with a larger-scale catchment model to form a "screening and planning" modeling system.
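
    As a rough sketch of the transport step, the snippet below routes emitted P down a flow tree with a single constant retention factor per cell-to-cell step; the tree, emissions, and retention value are invented for the example and do not reproduce PhosFate's actual retention functions.

        downstream = {0: 2, 1: 2, 2: 4, 3: 4, 4: None}       # cell -> receiving cell (None = outlet)
        emission = {0: 1.0, 1: 0.5, 2: 0.2, 3: 0.8, 4: 0.1}  # local P emission per cell [kg/yr]
        retention = 0.9                                      # fraction passed on per routing step

        load = dict(emission)                                # start with local emissions
        # Process cells so every cell is handled before its receiver (topological order).
        for cell in [0, 1, 3, 2, 4]:
            rcv = downstream[cell]
            if rcv is not None:
                load[rcv] += retention * load[cell]

        print(f"P load at outlet: {load[4]:.2f} kg/yr")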

  5. TU-H-CAMPUS-IeP1-05: A Framework for the Analytic Calculation of Patient-Specific Dose Distribution Due to CBCT Scan for IGRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Youn, H; Jeon, H; Nam, J

    Purpose: To investigate the feasibility of an analytic framework to estimate a patient's absorbed dose distribution from daily cone-beam CT scans for image-guided radiation treatment. Methods: To compute the total absorbed dose distribution, we separated the framework into primary and scattered dose calculations. Using source parameters such as voltage, current, and bowtie filtration, for the primary dose calculation we simulated the forward projection from the source to each voxel of an imaging object including some inhomogeneous inserts. We then calculated the primary absorbed dose at each voxel based on the absorption probability deduced from the HU values and Beer's law. Subsequently, all voxels constituting the phantom were regarded as secondary sources radiating scattered photons for the scattered dose calculation. Details of the forward projection were identical to those of the previous step. The secondary source intensities were given by scatter-to-primary ratios provided by NIST. In addition, we compared the analytically calculated dose distribution with Monte Carlo simulation results. Results: The suggested framework for absorbed dose estimation successfully provided the primary and secondary dose distributions of the phantom. Moreover, our analytic dose calculations agreed well with the Monte Carlo calculations, even near the inhomogeneous inserts. Conclusion: This work indicated that our framework can be an effective tool to estimate a patient's exposure from cone-beam CT scans for image-guided radiation treatment. We therefore expect that patient over-exposure during IGRT might be prevented by our framework.
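
    The primary-dose step described above (attenuation by Beer's law with energy deposited in proportion to local absorption) can be sketched for a single ray of voxels; the attenuation coefficients and voxel size below are illustrative assumptions.

        import numpy as np

        mu = np.array([0.02, 0.02, 0.05, 0.02, 0.01])   # linear attenuation per voxel [1/mm]
        dx = 2.0                                        # voxel size along the ray [mm]
        fluence_in = 1.0                                # relative photon fluence entering the row

        dose = np.empty_like(mu)
        f = fluence_in
        for i, m in enumerate(mu):
            absorbed = f * (1.0 - np.exp(-m * dx))      # fraction interacting in this voxel
            dose[i] = absorbed                          # proportional to locally absorbed energy
            f -= absorbed                               # Beer's law attenuation of the ray
        print(dose.round(4))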

  6. Monitoring Anthropogenic Ocean Sound from Shipping Using an Acoustic Sensor Network and a Compressive Sensing Approach.

    PubMed

    Harris, Peter; Philip, Rachel; Robinson, Stephen; Wang, Lian

    2016-03-22

    Monitoring ocean acoustic noise has been the subject of considerable recent study, motivated by the desire to assess the impact of anthropogenic noise on marine life. A combination of measuring ocean sound using an acoustic sensor network and modelling sources of sound and sound propagation has been proposed as an approach to estimating the acoustic noise map within a region of interest. However, strategies for developing a monitoring network are not well established. In this paper, considerations for designing a network are investigated using a simulated scenario based on the measurement of sound from ships in a shipping lane. Using models for the sources of the sound and for sound propagation, a noise map is calculated and measurements of the noise map by a sensor network within the region of interest are simulated. A compressive sensing algorithm, which exploits the sparsity of the representation of the noise map in terms of the sources, is used to estimate the locations and levels of the sources and thence the entire noise map within the region of interest. It is shown that although the spatial resolution to which the sound sources can be identified is generally limited, estimates of aggregated measures of the noise map can be obtained that are more reliable compared with those provided by other approaches.

  7. Monitoring Anthropogenic Ocean Sound from Shipping Using an Acoustic Sensor Network and a Compressive Sensing Approach †

    PubMed Central

    Harris, Peter; Philip, Rachel; Robinson, Stephen; Wang, Lian

    2016-01-01

    Monitoring ocean acoustic noise has been the subject of considerable recent study, motivated by the desire to assess the impact of anthropogenic noise on marine life. A combination of measuring ocean sound using an acoustic sensor network and modelling sources of sound and sound propagation has been proposed as an approach to estimating the acoustic noise map within a region of interest. However, strategies for developing a monitoring network are not well established. In this paper, considerations for designing a network are investigated using a simulated scenario based on the measurement of sound from ships in a shipping lane. Using models for the sources of the sound and for sound propagation, a noise map is calculated and measurements of the noise map by a sensor network within the region of interest are simulated. A compressive sensing algorithm, which exploits the sparsity of the representation of the noise map in terms of the sources, is used to estimate the locations and levels of the sources and thence the entire noise map within the region of interest. It is shown that although the spatial resolution to which the sound sources can be identified is generally limited, estimates of aggregated measures of the noise map can be obtained that are more reliable compared with those provided by other approaches. PMID:27011187
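
    The sparse-recovery step at the heart of this approach can be illustrated with a basic iterative soft-thresholding (ISTA) solver for the l1-regularized problem; the sensing matrix below is random rather than a real acoustic propagation model, and all sizes, levels, and the regularization weight are illustrative.

        import numpy as np

        rng = np.random.default_rng(1)
        n_sensors, n_cells = 20, 100
        A = rng.normal(size=(n_sensors, n_cells)) / np.sqrt(n_sensors)
        s_true = np.zeros(n_cells)
        s_true[[10, 47, 80]] = [3.0, 2.0, 4.0]          # three ships in the lane
        y = A @ s_true + 0.01 * rng.normal(size=n_sensors)

        lam, step = 0.05, 1.0 / np.linalg.norm(A, 2) ** 2
        s = np.zeros(n_cells)
        for _ in range(500):
            g = s + step * A.T @ (y - A @ s)             # gradient step on the data term
            s = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft threshold

        print(np.nonzero(s > 0.5)[0])                    # recovered source cells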

  8. Silver nanoparticles (AgNPs) as a contrast agent for imaging of animal tissue using swept-source optical coherence tomography (SSOCT)

    NASA Astrophysics Data System (ADS)

    Mondal, Indranil; Raj, Shipra; Roy, Poulomi; Poddar, Raju

    2018-01-01

    We present noninvasive three-dimensional depth-resolved imaging of animal tissue with a swept-source optical coherence tomography system at 1064 nm center wavelength and silver nanoparticles (AgNPs) as a potential contrast agent. A swept-source laser is used to enable an imaging rate of 100 kHz (100,000 A-scans s⁻¹). Swept-source optical coherence tomography is a new variant of the optical coherence tomography (OCT) technique, offering unique advantages in terms of sensitivity, reduction of motion artifacts, etc. To enhance the contrast of an OCT image, AgNPs are utilized as an exogenous contrast agent. AgNPs are synthesized using a modified Tollens method, and characterization is done by UV-vis spectroscopy, dynamic light scattering, scanning electron microscopy, and energy dispersive x-ray spectroscopy. In vitro imaging of chicken breast tissue, with and without the application of AgNPs, is performed. The effect of AgNPs is studied with different exposure times. A mathematical model is also built to calculate changes in the local scattering coefficient of tissue from OCT images. A quantitative estimation of scattering coefficient and contrast is performed for tissues with and without application of AgNPs. Significant improvement in contrast and an increase in scattering coefficient with time are observed.

  9. Magnetohydrodynamic simulations of hypersonic flow over a cylinder using axial- and transverse-oriented magnetic dipoles.

    PubMed

    Guarendi, Andrew N; Chandy, Abhilash J

    2013-01-01

    Numerical simulations of magnetohydrodynamic (MHD) hypersonic flow over a cylinder are presented for axial- and transverse-oriented dipoles with different strengths. ANSYS CFX is used to carry out calculations for steady, laminar flows at a Mach number of 6.1, with a model for electrical conductivity as a function of temperature and pressure. The low magnetic Reynolds number (<1) calculated based on the velocity and length scales in this problem justifies the quasistatic approximation, which assumes negligible effect of velocity on magnetic fields. Therefore, the governing equations employed in the simulations are the compressible Navier-Stokes and the energy equations with MHD-related source terms such as Lorentz force and Joule dissipation. The results demonstrate the ability of the magnetic field to affect the flowfield around the cylinder, which results in an increase in shock stand-off distance and reduction in overall temperature. Also, it is observed that there is a noticeable decrease in drag with the addition of the magnetic field.

  10. Magnetohydrodynamic Simulations of Hypersonic Flow over a Cylinder Using Axial- and Transverse-Oriented Magnetic Dipoles

    PubMed Central

    Guarendi, Andrew N.; Chandy, Abhilash J.

    2013-01-01

    Numerical simulations of magnetohydrodynamic (MHD) hypersonic flow over a cylinder are presented for axial- and transverse-oriented dipoles with different strengths. ANSYS CFX is used to carry out calculations for steady, laminar flows at a Mach number of 6.1, with a model for electrical conductivity as a function of temperature and pressure. The low magnetic Reynolds number (≪1) calculated based on the velocity and length scales in this problem justifies the quasistatic approximation, which assumes negligible effect of velocity on magnetic fields. Therefore, the governing equations employed in the simulations are the compressible Navier-Stokes and the energy equations with MHD-related source terms such as Lorentz force and Joule dissipation. The results demonstrate the ability of the magnetic field to affect the flowfield around the cylinder, which results in an increase in shock stand-off distance and reduction in overall temperature. Also, it is observed that there is a noticeable decrease in drag with the addition of the magnetic field. PMID:24307870
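
    The quasistatic criterion quoted in both records is a one-line calculation; the sketch below evaluates the magnetic Reynolds number Re_m = mu0*sigma*U*L for illustrative hypersonic-flow values (not the paper's actual conditions).

        # Magnetic Reynolds number from illustrative values
        mu0 = 4e-7 * 3.141592653589793   # vacuum permeability [H/m]
        sigma = 100.0                    # electrical conductivity of the heated air [S/m]
        U = 2000.0                       # flow speed [m/s]
        L = 0.05                         # cylinder radius as the length scale [m]

        Re_m = mu0 * sigma * U * L
        print(f"Re_m = {Re_m:.3f}  (<< 1 justifies the quasistatic MHD approximation)")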

  11. Jet-induced medium excitation in γ-hadron correlation at RHIC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Wei; Cao, Shanshan; Luo, Tan

    Both jet transport and jet-induced medium excitation are investigated simultaneously within the coupled Linear Boltzmann Transport and hydrodynamics (CoLBT-hydro) model. In this coupled approach, the energy-momentum deposition from propagating jet shower partons in elastic and radiative processes is taken as a source term in the hydrodynamic equations, and the hydrodynamic background for the LBT simulation is updated at each time step. We use the CoLBT-hydro model to simulate γ-jet events in Au+Au collisions at RHIC. Hadron spectra from both the hadronization of jet shower partons and jet-induced medium excitation are calculated and compared to experimental data. Parton energy loss of jet shower partons leads to the suppression of hadron yields at large z_T = p_T^h/p_T^γ, while medium excitation leads to an enhancement of hadron yields at small z_T. Meanwhile, a significant broadening of low-p_T hadron yields and a depletion of soft hadrons in the γ direction are observed in the calculation of the γ-hadron angular correlation.

  12. Jet-induced medium excitation in γ-hadron correlation at RHIC

    DOE PAGES

    Chen, Wei; Cao, Shanshan; Luo, Tan; ...

    2017-09-25

    Both jet transport and jet-induced medium excitation are investigated simultaneously within the coupled Linear Boltzmann Transport and hydrodynamics (CoLBT-hydro) model. In this coupled approach, the energy-momentum deposition from propagating jet shower partons in elastic and radiative processes is taken as a source term in the hydrodynamic equations, and the hydrodynamic background for the LBT simulation is updated at each time step. We use the CoLBT-hydro model to simulate γ-jet events in Au+Au collisions at RHIC. Hadron spectra from both the hadronization of jet shower partons and jet-induced medium excitation are calculated and compared to experimental data. Parton energy loss of jet shower partons leads to the suppression of hadron yields at large z_T = p_T^h/p_T^γ, while medium excitation leads to an enhancement of hadron yields at small z_T. Meanwhile, a significant broadening of low-p_T hadron yields and a depletion of soft hadrons in the γ direction are observed in the calculation of the γ-hadron angular correlation.

  13. A new exact method for line radiative transfer

    NASA Astrophysics Data System (ADS)

    Elitzur, Moshe; Asensio Ramos, Andrés

    2006-01-01

    We present a new method, the coupled escape probability (CEP), for exact calculation of line emission from multi-level systems, solving only algebraic equations for the level populations. The CEP formulation of the classical two-level problem is a set of linear equations, and we uncover an exact analytic expression for the emission from two-level optically thick sources that holds as long as they are in the 'effectively thin' regime. In a comparative study of a number of standard problems, the CEP method outperformed the leading line transfer methods by substantial margins. The algebraic equations employed by our new method are already incorporated in numerous codes based on the escape probability approximation. All that is required for an exact solution with these existing codes is to augment the expression for the escape probability with simple zone-coupling terms. As an application, we find that standard escape probability calculations generally produce the correct cooling emission by the C II 158-μm line but not by the ³P lines of O I.
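
    For orientation, the sketch below solves the classical (uncoupled) two-level escape-probability problem by fixed-point iteration, with beta(tau) = (1 - exp(-tau))/tau; all rates and the opacity scaling are illustrative, and the zone-coupling terms that distinguish CEP from this standard treatment are omitted.

        import numpy as np

        A21, C21 = 1.0e-4, 1.0e-6        # Einstein A and collisional de-excitation rate [1/s]
        g1, g2 = 1.0, 3.0
        boltz = np.exp(-2.0)             # exp(-dE/kT) at the assumed kinetic temperature
        C12 = C21 * (g2 / g1) * boltz    # detailed-balance collisional excitation
        N_col = 1.0e16                   # column density scale that sets the line opacity

        frac2 = 0.1                      # initial guess for the upper-level fraction
        for _ in range(50):              # fixed-point iteration: tau depends on n1
            tau = 1.0e-15 * N_col * (1.0 - frac2)            # illustrative opacity scaling
            beta = (1.0 - np.exp(-tau)) / tau if tau > 1e-8 else 1.0
            # Statistical equilibrium: n1 * C12 = n2 * (A21 * beta + C21)
            frac2 = C12 / (C12 + A21 * beta + C21)
        print(f"upper-level fraction: {frac2:.4f}, escape probability: {beta:.3f}")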

  14. Radiochemical data collected on events from which radioactivity escaped beyond the borders of the Nevada test range complex

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hicks, H.G.

    1981-02-12

    This report identifies all nuclear events in Nevada that are known to have sent radioactivity beyond the borders of the test range complex. There have been 177 such tests, representing seven different types: nuclear detonations in the atmosphere, nuclear excavation events, nuclear safety events, underground nuclear events that inadvertently seeped or vented to the atmosphere, dispersion of plutonium and/or uranium by chemical high explosives, nuclear rocket engine tests, and nuclear ramjet engine tests. The source term for each of these events is given, together with the data base from which it was derived (except where the data are classified). The computer programs used for organizing and processing the data base and calculating radionuclide production are described and included, together with the input and output data and details of the calculations. This is the basic information needed to make computer modeling studies of the fallout from any of these 177 events.

  15. Simplified analysis about horizontal displacement of deep soil under tunnel excavation

    NASA Astrophysics Data System (ADS)

    Tian, Xiaoyan; Gu, Shuancheng; Huang, Rongbin

    2017-11-01

    Most domestic scholars have focused on the law of soil settlement caused by subway tunnel excavation; studies on the law of horizontal displacement are comparatively lacking, and it is difficult to obtain horizontal displacement data at arbitrary depth in practice. At present there are many formulas for calculating the settlement of soil layers; compared with the integral solutions of classical Mindlin elastic theory, stochastic medium theory, and source-sink theory, the Peck empirical formula is relatively simple and has strong applicability in China. Considering the incompressibility of the rock and soil mass, and based on the principle of plane strain, a formula for the horizontal displacement of the soil along the transverse cross section of the tunnel was derived from the Peck settlement formula. The applicability of the formula is verified by comparison with existing engineering cases, and a simple and rapid analytical method for predicting the horizontal displacement is presented.
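
    A minimal sketch of the kind of calculation described above: a Peck settlement trough S(x) = V/(sqrt(2*pi)*i) * exp(-x^2/(2*i^2)) and a horizontal displacement of the form u(x) = (x/z0)*S(x), which follows from assuming soil moves toward the tunnel axis under incompressibility. The trough width, volume loss, and depth are illustrative values, and this form may differ in detail from the paper's derived formula.

        import numpy as np

        z0 = 15.0                      # depth to tunnel axis [m]
        i_width = 0.5 * z0             # trough width parameter [m]
        V_loss = 0.8                   # ground-loss volume per metre of tunnel [m^3/m]

        x = np.linspace(-40.0, 40.0, 9)
        S = V_loss / (np.sqrt(2.0 * np.pi) * i_width) * np.exp(-x**2 / (2.0 * i_width**2))
        u = x / z0 * S                 # horizontal displacement toward the centerline

        for xi, si, ui in zip(x, S, u):
            print(f"x = {xi:6.1f} m   settlement = {si*1000:6.2f} mm   horiz = {ui*1000:6.2f} mm")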

  16. General analytical solutions for DC/AC circuit-network analysis

    NASA Astrophysics Data System (ADS)

    Rubido, Nicolás; Grebogi, Celso; Baptista, Murilo S.

    2017-06-01

    In this work, we present novel general analytical solutions for the currents that develop in the edges of network-like circuits when some nodes of the network act as sources/sinks of DC or AC current. We assume that Ohm's law is valid at every edge and that charge at every node is conserved (with the exception of the source/sink nodes). The resistive, capacitive, and/or inductive properties of the lines in the circuit define a complex network structure with given impedances for each edge. Our solution for the currents at each edge is derived in terms of the eigenvalues and eigenvectors of the Laplacian matrix of the network defined from the impedances. This derivation also allows us to compute the equivalent impedance between any two nodes of the circuit and relate it to the currents in a closed circuit which has a single voltage generator instead of many input/output source/sink nodes. This simplifies the treatment that could be done via Thévenin's theorem. Contrary to solving Kirchhoff's equations, our derivation makes it easy to calculate the redistribution of currents that occurs when the location of sources and sinks changes within the network. Finally, we show that our solutions are identical to the ones found from Circuit Theory nodal analysis.
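
    For the purely resistive DC case, the construction reduces to a few lines: build the weighted Laplacian, obtain node potentials from its pseudoinverse (equivalent to the eigen-decomposition route described above), and read off edge currents via Ohm's law. The 4-node network and conductances below are illustrative.

        import numpy as np

        edges = [(0, 1, 1.0), (1, 2, 2.0), (2, 3, 1.0), (0, 3, 1.0), (1, 3, 0.5)]  # (i, j, conductance)
        n = 4
        L = np.zeros((n, n))
        for i, j, g in edges:
            L[i, i] += g; L[j, j] += g
            L[i, j] -= g; L[j, i] -= g

        inj = np.array([1.0, 0.0, 0.0, -1.0])   # 1 A in at node 0, out at node 3
        v = np.linalg.pinv(L) @ inj             # potentials (defined up to a constant)

        for i, j, g in edges:
            print(f"I({i}->{j}) = {g * (v[i] - v[j]):+.3f} A")

        # Equivalent resistance between the source/sink pair follows directly:
        print(f"R_eq(0,3) = {v[0] - v[3]:.3f} ohm")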

  17. Calculations to support JET neutron yield calibration: Modelling of neutron emission from a compact DT neutron generator

    NASA Astrophysics Data System (ADS)

    Čufar, Aljaž; Batistoni, Paola; Conroy, Sean; Ghani, Zamir; Lengar, Igor; Milocco, Alberto; Packer, Lee; Pillon, Mario; Popovichev, Sergey; Snoj, Luka; JET Contributors

    2017-03-01

    At the Joint European Torus (JET), the ex-vessel fission chambers and in-vessel activation detectors are used as the neutron production rate and neutron yield monitors, respectively. In order to ensure that these detectors produce accurate measurements, they need to be experimentally calibrated. A new calibration of the neutron detectors to 14 MeV neutrons, resulting from deuterium-tritium (DT) plasmas, is planned at JET using a compact accelerator-based neutron generator (NG), in which a D/T beam impinges on a solid target containing T/D, producing neutrons by DT fusion reactions. This paper presents the analysis performed to model the neutron source characteristics in terms of energy spectrum, angle-energy distribution, and the effect of the neutron generator geometry. Different codes capable of simulating accelerator-based DT neutron sources are compared, and sensitivities to uncertainties in the generator's internal structure are analysed. The analysis was performed to support preparations for the experimental measurements characterizing the NG as a calibration source. Further extensive neutronics analyses, performed with this model of the NG, will be needed to support the neutron calibration experiments and to take into account various differences between the calibration experiment and experiments using the plasma as a source of neutrons.

  18. [Estimation of urban non-point source pollution loading and its factor analysis in the Pearl River Delta].

    PubMed

    Liao, Yi-Shan; Zhuo, Mu-Ning; Li, Ding-Qiang; Guo, Tai-Long

    2013-08-01

    In the Pearl River Delta region, urban rivers have been seriously polluted, and the input into rivers of non-point source pollution materials, such as chemical oxygen demand (COD), cannot be neglected. During 2009-2010, the water quality at eight different catchments in the Fenjiang River of Foshan city was monitored, and the COD loads for eight rivulet sewages were calculated under different rainfall conditions. Rainfall and land-use type both played important roles in the COD loading, with rainfall having the greater influence. Consequently, a COD loading formula was constructed, defined as a function of runoff and land-use type derived from the SCS model and a land-use map. COD loading could be evaluated and predicted with the constructed formula. The mean simulation accuracy for a single rainfall event was 75.51%; long-term simulation accuracy was better than that for single rainfall events. In 2009, the estimated COD loading and its loading intensity were 8 053 t and 339 kg·(hm²·a)⁻¹, respectively, and industrial land was identified as the main source area of COD pollution. Severe non-point source pollution such as COD in the Fenjiang River must receive more attention in the future.

  19. Analog performance of vertical nanowire TFETs as a function of temperature and transport mechanism

    NASA Astrophysics Data System (ADS)

    Martino, Marcio Dalla Valle; Neves, Felipe; Ghedini Der Agopian, Paula; Martino, João Antonio; Vandooren, Anne; Rooyackers, Rita; Simoen, Eddy; Thean, Aaron; Claeys, Cor

    2015-10-01

    The goal of this work is to study the analog performance of tunnel field-effect transistors (TFETs) and its susceptibility to temperature variation and to the different dominant transport mechanisms. The experimental input characteristics of nanowire TFETs with different source compositions (100% Si and Si1-xGex) are presented, leading to the extraction of the activation energy for each bias condition. These first results are connected to the prevailing transport mechanism for each configuration, namely band-to-band tunneling (BTBT) or trap-assisted tunneling (TAT). Afterward, this work analyzes the analog behavior, with the intrinsic voltage gain calculated in terms of Early voltage, transistor efficiency, transconductance, and output conductance. Comparing the results for devices with different source compositions, it is interesting to note how the analog trends vary depending on the source characteristics and the prevailing transport mechanisms. This behavior results in a different suitability assessment depending on the working temperature: devices with a full-silicon source and a non-abrupt junction profile present the worst intrinsic voltage gain at room temperature but the best results at high temperatures. This is possible since, among the 4 studied devices, this configuration was the only one with a positive dependence of the intrinsic voltage gain on temperature.

  20. Method and apparatus for controlling the solenoid current of a solenoid valve which controls the amount of suction of air in an internal combustion engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kiuchi, T.; Sakurai, H.

    1988-09-20

    This patent describes an apparatus for controlling the solenoid current of a solenoid valve which controls suction air in an internal combustion engine. The apparatus consists of: (a) engine rotational speed detector means for detecting engine rotational speed; (b) aimed idle speed setting means for generating a signal corresponding to a predetermined idling speed; (c) first calculating means coupled to the engine rotational speed detector means and the aimed idle speed setting means for calculating a feedback control term (Ifb(n)) as a function of an integration term (Iai), a proportion term (Ip), and a differentiation term (Id); (d) first determining and storing means coupled to the first calculating means, for determining an integration term (Iai(n)) of the feedback control term (Ifb(n)) and for determining a determined value (Ixref) in accordance therewith; (e) changeover means coupled to the first calculating means and the first determining and storing means for selecting the output of one of the first calculating means or the first determining and storing means; (f) first signal generating means coupled to the changeover means for generating a solenoid current control value (Icmd) as a function of the output of the changeover means.

  1. Method and apparatus for controlling the solenoid current of a solenoid valve which controls the amount of suction of air in an internal combustion engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kiuchi, T.; Yasuoka, A.

    1988-09-13

    This patent describes an apparatus for controlling the solenoid current of a solenoid valve which controls the amount of suction air in an internal combustion engine, the apparatus comprising: (a) engine rotational speed detector means for detecting engine rotational speed; (b) aimed idle speed setting means for generating a signal corresponding to a predetermined idling speed; (c) first calculating means coupled to the engine rotational speed detector means and the aimed idle speed setting means for calculating a feedback control term Ifb(n) as a function of an integration term (Iai), a proportion term (Ip), and a differentiation term (Id); (d) first determining and storing means coupled to the first calculating means, for determining an integration term (Iai(n)) of the feedback control term (Ifb(n)) and for determining a determined value (Ixref) in accordance therewith; (e) changeover means coupled to the first calculating means and the first determining and storing means for selecting the output of one of the first calculating means or the first determining and storing means; (f) first signal generating means coupled to the changeover means for generating a solenoid current control value (Icmd) as a function of the output of the changeover means.
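
    The control law recited in claims (c) through (f) of both patents amounts to a PID-style feedback term with a changeover between feedback mode and a stored value. The sketch below is a loose, hypothetical rendering of that structure; the gains, speed samples, and update rules are invented and not taken from the patents.

        def feedback_term(n_e, n_ref, state, ki=0.001, kp=0.01, kd=0.005):
            err = n_ref - n_e                       # idle-speed error [rpm]
            state["Iai"] += ki * err                # integration term Iai(n)
            Ip = kp * err                           # proportion term
            Id = kd * (err - state["prev_err"])     # differentiation term
            state["prev_err"] = err
            return state["Iai"] + Ip + Id           # feedback control term Ifb(n)

        state = {"Iai": 0.0, "prev_err": 0.0}
        Ixref = 0.3                                  # stored determined value
        feedback_mode = True                         # changeover means selects the source
        for n_e in (820.0, 808.0, 801.0):            # measured engine speed samples [rpm]
            Icmd = feedback_term(n_e, 800.0, state) if feedback_mode else Ixref
            print(f"solenoid current control value Icmd = {Icmd:+.4f}")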

  2. A single-source photon source model of a linear accelerator for Monte Carlo dose calculation

    PubMed Central

    Glatting, Gerhard; Wenz, Frederik; Fleckenstein, Jens

    2017-01-01

    Purpose To introduce a new method of deriving a virtual source model (VSM) of a linear accelerator photon beam from a phase space file (PSF) for Monte Carlo (MC) dose calculation. Materials and methods A PSF of a 6 MV photon beam was generated by simulating the interactions of primary electrons with the relevant geometries of a Synergy linear accelerator (Elekta AB, Stockholm, Sweden) and recording the particles that reach a plane 16 cm downstream of the electron source. Probability distribution functions (PDFs) for particle positions and energies were derived from the analysis of the PSF. These PDFs were implemented in the VSM using inverse transform sampling. To model particle directions, the phase space plane was divided into a regular square grid. Each element of the grid corresponds to an area of 1 mm2 in the phase space plane. The average direction cosines, the Pearson correlation coefficient (PCC) between photon energies and their direction cosines, as well as the PCC between the direction cosines were calculated for each grid element. Weighted polynomial surfaces were then fitted to these 2D data. The weights are used to correct for heteroscedasticity across the phase space bins. The directions of the particles created by the VSM were calculated from these fitted functions. The VSM was validated against the PSF by comparing the doses calculated by the two methods for different square field sizes. The comparisons were performed with profile and gamma analyses. Results The doses calculated with the PSF and VSM agree to within 3%/1 mm (>95% pixel pass rate) for the evaluated fields. Conclusion A new method of deriving a virtual photon source model of a linear accelerator from a PSF file for MC dose calculation was developed. Validation results show that the doses calculated with the VSM and the PSF agree to within 3%/1 mm. PMID:28886048

  3. A single-source photon source model of a linear accelerator for Monte Carlo dose calculation.

    PubMed

    Nwankwo, Obioma; Glatting, Gerhard; Wenz, Frederik; Fleckenstein, Jens

    2017-01-01

    To introduce a new method of deriving a virtual source model (VSM) of a linear accelerator photon beam from a phase space file (PSF) for Monte Carlo (MC) dose calculation. A PSF of a 6 MV photon beam was generated by simulating the interactions of primary electrons with the relevant geometries of a Synergy linear accelerator (Elekta AB, Stockholm, Sweden) and recording the particles that reach a plane 16 cm downstream of the electron source. Probability distribution functions (PDFs) for particle positions and energies were derived from the analysis of the PSF. These PDFs were implemented in the VSM using inverse transform sampling. To model particle directions, the phase space plane was divided into a regular square grid. Each element of the grid corresponds to an area of 1 mm2 in the phase space plane. The average direction cosines, the Pearson correlation coefficient (PCC) between photon energies and their direction cosines, as well as the PCC between the direction cosines were calculated for each grid element. Weighted polynomial surfaces were then fitted to these 2D data. The weights are used to correct for heteroscedasticity across the phase space bins. The directions of the particles created by the VSM were calculated from these fitted functions. The VSM was validated against the PSF by comparing the doses calculated by the two methods for different square field sizes. The comparisons were performed with profile and gamma analyses. The doses calculated with the PSF and VSM agree to within 3%/1 mm (>95% pixel pass rate) for the evaluated fields. A new method of deriving a virtual photon source model of a linear accelerator from a PSF file for MC dose calculation was developed. Validation results show that the doses calculated with the VSM and the PSF agree to within 3%/1 mm.
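
    The inverse transform sampling step used to draw particle properties from the tabulated PDFs can be sketched directly; the falling spectrum, binning, and normalization below are illustrative, not the actual 6 MV PDFs derived from the PSF.

        import numpy as np

        rng = np.random.default_rng(2)
        bin_edges = np.linspace(0.0, 6.0, 61)               # photon energy bins [MeV]
        mids = 0.5 * (bin_edges[:-1] + bin_edges[1:])
        pdf = np.exp(-0.8 * mids)                           # illustrative falling spectrum
        pdf /= pdf.sum()

        cdf = np.cumsum(pdf)
        u = rng.uniform(size=100_000)
        idx = np.minimum(np.searchsorted(cdf, u), pdf.size - 1)   # invert the CDF
        # Sample uniformly within the selected bin:
        samples = bin_edges[idx] + rng.uniform(size=u.size) * np.diff(bin_edges)[idx]
        print(f"mean sampled energy: {samples.mean():.3f} MeV")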

  4. Post-reionization Kinetic Sunyaev-Zel'dovich Signal in the Illustris simulation

    NASA Astrophysics Data System (ADS)

    Park, Hyunbae; Alvarez, Marcelo A.; Bond, John Richard

    2017-06-01

    Using Illustris, a state-of-the-art cosmological simulation of gravity, hydrodynamics, and star formation, we revisit the calculation of the angular power spectrum of the kinetic Sunyaev-Zel'dovich effect from the post-reionization (z < 6) epoch performed by Shaw et al. (2012). We not only report the updated value given by the analytical model used in previous studies, but also examine the simplifying assumptions made in the model. These assumptions include using the gas density in place of the free-electron density and neglecting the connected term arising from the fourth-order nature of the momentum power spectrum that sources the signal. With these assumptions, Illustris gives a slightly (~10%) larger signal than in their work. The signal is then reduced by ~20% when the actual free-electron density is used in the calculation instead of the gas density, because the larger neutral fraction in dense regions reduces the total number of free electrons and suppresses fluctuations in the free-electron density. We find that the connected term can contribute up to half of the momentum power spectrum at z < 2. Owing to the strong suppression of the low-z signal by baryonic physics, the extra contribution from the connected term is limited to the ~10% level, although it may have been underestimated due to the finite box size of Illustris. With these corrections, our result is very close to the original result of Shaw et al. (2012), which is well described by a simple power law, D_ℓ = 1.38 (ℓ/3000)^0.21 μK², at 3000 < ℓ < 10000.

  5. Developing Oxidized Nitrogen Atmospheric Deposition Source Attribution from CMAQ for Air-Water Trading for Chesapeake Bay

    NASA Astrophysics Data System (ADS)

    Dennis, R. L.; Napelenok, S. L.; Linker, L. C.; Dudek, M.

    2012-12-01

    Estuaries are adversely impacted by excess reactive nitrogen, Nr, from many point and nonpoint sources, including atmospheric deposition to the watershed and to the estuary itself. For effective mitigation, trading among sources of Nr is being considered. The Chesapeake Bay Program is working to bring air into its trading scheme, which requires some special air computations. Airsheds are much larger than watersheds; thus, wide-spread or national emissions controls are put in place to achieve major reductions in atmospheric Nr deposition. The tributary nitrogen load reductions allocated to the states to meet the TMDL target for Chesapeake Bay are large and not easy to attain via controls on water point and nonpoint sources. It would help the TMDL process to take advantage of air emissions reductions that would occur with State Implementation Plans that go beyond the national air rules put in place to help meet national ambient air quality standards. There are still incremental benefits from these local or state-level controls on atmospheric emissions. The additional air deposition reductions could then be used to offset water quality controls (air-water trading). What is needed is a source-to-receptor transfer function that connects air emissions from a state to deposition to a tributary. There is a special source attribution version of the Community Multiscale Air Quality model, CMAQ (termed DDM-3D), that can estimate the fraction of deposition contributed by labeled emissions (labeled by source or region) to the total deposition across space. We use CMAQ DDM-3D to estimate simplified state-level delta-emissions to delta-atmospheric-deposition transfer coefficients for each major emission source sector within a state, since local air regulations are promulgated at the state level. The CMAQ 4.7.1 calculations are performed at a 12 km grid size over the airshed domain covering Chesapeake Bay for 2020 CAIR emissions. We first present the fractional contributions of Bay state NOx emissions to the oxidized nitrogen deposition to the Chesapeake Bay watershed and the Bay. We then present example tables of the fractional contributions of Bay state NOx emissions from mobile, off-road, power plant, and industrial sources to key tributaries: the Potomac, Susquehanna, and James Rivers. Finally, we work through an example of mobile-source NOx reductions in Pennsylvania to show how the tributary load offset would be calculated using the factors generated by CMAQ DDM-3D.

  6. Surface motion of a fluid planet induced by impacts

    NASA Astrophysics Data System (ADS)

    Ni, Sidao; Ahrens, Thomas J.

    2006-10-01

    In order to approximate the free-surface motion of an Earth-sized planet subjected to a giant impact, we have described the excitation of body and surface waves in a spherical, compressible fluid planet without gravity or intrinsic material attenuation for a buried explosion source. Using the mode summation method, we obtained an analytical solution for the surface motion of the fluid planet in terms of an infinite series involving the products of spherical Bessel functions and Legendre polynomials. We established a closed-form expression for the mode summation excitation coefficient for a spherical buried explosion source, and then calculated the surface motion for different spherical explosion source radii a (for cases of a/R = 0.001 to 0.035, where R is the radius of the Earth). We also studied the effect of placing the explosion source at different radii r0 (for cases of r0/R = 0.90 to 0.96) from the centre of the planet. The amplitude of the quasi-surface waves depends substantially on a/R, and slightly on r0/R. For example, in our baseline case, a/R = 0.03 and r0/R = 0.96, the free-surface velocity above the source is 0.26c, whereas antipodal to the source the peak free-surface velocity is 0.19c. Here c is the acoustic velocity of the fluid planet. These results can then be applied to studies of atmosphere erosion via blow-off caused by asteroid impacts.

  7. Solar quiet day ionospheric source current in the West African region

    PubMed Central

    Obiekezie, Theresa N.; Okeke, Francisca N.

    2012-01-01

    The Solar Quiet (Sq) day source currents were calculated using magnetic data obtained from a chain of 10 magnetotelluric stations installed in the African sector during the French participation in the International Equatorial Electrojet Year (IEEY) experiment in Africa. The components of the geomagnetic field recorded at the stations from January to December 1993 during the experiment were separated into the source and induced components of Sq using the Spherical Harmonic Analysis (SHA) method. The range of the source current was calculated, which enabled a full year's change in the Sq source current system to be viewed. PMID:25685434

  8. Piecewise synonyms for enhanced UMLS source terminology integration.

    PubMed

    Huang, Kuo-Chuan; Geller, James; Halper, Michael; Cimino, James J

    2007-10-11

    The UMLS contains more than 100 source vocabularies and is growing via the integration of others. When integrating a new source, the source terms already in the UMLS must first be found. The easiest approach to this is simple string matching. However, string matching usually does not find all concepts that should be found. A new methodology, based on the notion of piecewise synonyms, for enhancing the process of concept discovery in the UMLS is presented. This methodology is supported by first creating a general synonym dictionary based on the UMLS. Each multi-word source term is decomposed into its component words, allowing for the generation of separate synonyms for each word from the general synonym dictionary. The recombination of these synonyms into new terms creates an expanded pool of matching candidates for terms from the source. The methodology is demonstrated with respect to an existing UMLS source. It shows a 34% improvement over simple string matching.
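
    A minimal sketch of the piecewise-synonym idea: decompose a multi-word source term into words, expand each word through a synonym dictionary, and recombine the pieces into an enlarged pool of matching candidates. The tiny dictionary and example term are invented for the illustration.

        from itertools import product

        synonyms = {
            "renal": {"renal", "kidney"},
            "failure": {"failure", "insufficiency"},
            "acute": {"acute"},
        }

        def candidate_terms(term: str) -> set[str]:
            words = term.lower().split()
            # One synonym list per word; words without entries map to themselves
            piecewise = [sorted(synonyms.get(w, {w})) for w in words]
            return {" ".join(combo) for combo in product(*piecewise)}

        print(candidate_terms("acute renal failure"))
        # {'acute kidney failure', 'acute kidney insufficiency',
        #  'acute renal failure', 'acute renal insufficiency'}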

  9. A new time-independent formulation of fractional release

    NASA Astrophysics Data System (ADS)

    Ostermöller, Jennifer; Bönisch, Harald; Jöckel, Patrick; Engel, Andreas

    2017-03-01

    The fractional release factor (FRF) gives information on the fraction of a halocarbon that, at some point in the stratosphere, has been converted from its source form into the inorganic form, which can harm the ozone layer through catalytic reactions. The quantity is of major importance because it directly affects the calculation of the ozone depletion potential (ODP). In this context, time-independent values are needed which, in particular, should be independent of the trends in the tropospheric mixing ratios (tropospheric trends) of the respective halogenated trace gases. For a given atmospheric situation, such FRF values would represent a molecular property. We analysed the temporal evolution of the FRF from ECHAM/MESSy Atmospheric Chemistry (EMAC) model simulations for several halocarbons and nitrous oxide between 1965 and 2011 on different mean-age levels and found that the widely used formulation of the FRF yields highly time-dependent values. We show that this is caused by the way the tropospheric trend is handled in the widely used calculation method of the FRF. Taking chemical loss into account in the calculation of stratospheric mixing ratios reduces the time dependence of FRFs. We therefore implemented a loss term in the formulation of the FRF and applied the parameterization of a mean arrival time to our data set. We find that the time dependence in the FRF can almost be compensated for by applying a new trend correction in the calculation of the FRF. We suggest that this new method should be used to calculate time-independent FRFs, which can then be used, e.g., for the calculation of ODPs.
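
    The trend issue can be made concrete with a toy calculation: if the stratospheric mixing ratio is referenced to the current tropospheric value, a growing tropospheric trend contaminates the FRF, whereas referencing the value at the parcel's mean arrival time recovers a time-independent number. All time series, the 3-year mean age, and the fixed loss fraction below are illustrative, and the sketch ignores the age-spectrum averaging of the full method.

        import numpy as np

        years = np.arange(1990.0, 2011.0, 1.0)
        chi_tropo = 0.5 + 0.02 * (years - 1990.0)       # growing tropospheric mixing ratio
        mean_age = 3.0                                   # mean age of stratospheric air [yr]
        loss_fraction = 0.4                              # chemical loss after that transit time

        # Stratospheric mixing ratio observed in year t reflects entry at t - mean_age:
        chi_entry = np.interp(years - mean_age, years, chi_tropo)
        chi_strat = chi_entry * (1.0 - loss_fraction)

        frf_naive = 1.0 - chi_strat / chi_tropo          # references the *current* troposphere
        frf_corrected = 1.0 - chi_strat / chi_entry      # references the entry mixing ratio

        print("naive FRF (trend-contaminated):", np.round(frf_naive[[0, 10, 20]], 3))
        print("trend-corrected FRF:           ", np.round(frf_corrected[[0, 10, 20]], 3))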

  10. Modeling of radiative properties of Sn plasmas for extreme-ultraviolet source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sasaki, Akira; Sunahara, Atsushi; Furukawa, Hiroyuki

    Atomic processes in Sn plasmas are investigated for application to extreme-ultraviolet (EUV) light sources used in microlithography. We develop a full collisional radiative (CR) model of Sn plasmas based on atomic data calculated using the Hebrew University Lawrence Livermore Atomic Code (HULLAC). Resonance and satellite lines from singly and multiply excited states of Sn ions, which contribute significantly to the EUV emission, are identified and included in the model through a systematic investigation of their effect on the emission spectra. The wavelengths of the 4d-4f+4p-4d transitions of Sn⁵⁺ to Sn¹³⁺ are investigated, because of their importance for determining the conversion efficiency of the EUV source, in conjunction with the effect of configuration interaction in the calculation of atomic structure. Calculated emission spectra are compared with those of charge exchange spectroscopy and of laser-produced plasma EUV sources. The comparison is also carried out for the opacity of a radiatively heated Sn sample. A reasonable agreement is obtained between calculated and experimental EUV emission spectra observed under the typical condition of EUV sources, with the ion density and ionization temperature of the plasma around 10¹⁸ cm⁻³ and 20 eV, respectively, by applying a wavelength correction to the resonance and satellite lines. Finally, the spectral emissivity and opacity of Sn plasmas are calculated as a function of electron temperature and ion density. The results are useful for radiation hydrodynamics simulations for the optimization of EUV sources.
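
    As background, a CR model of this kind solves coupled population rate equations; a generic sketch (not the paper's specific implementation):

        \[
        \frac{dN_i}{dt} = \sum_{j\ne i} R_{ji} N_j - N_i \sum_{j\ne i} R_{ij},
        \]

    where N_i is the population of atomic state i and the rates R_ij collect the collisional and radiative processes coupling the states; spectral emissivity and opacity then follow from the (quasi-)steady-state populations and the line data.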

  11. A multicentre audit of HDR/PDR brachytherapy absolute dosimetry in association with the INTERLACE trial (NCT01566240).

    PubMed

    Díez, P; Aird, E G A; Sander, T; Gouldstone, C A; Sharpe, P H G; Lee, C D; Lowe, G; Thomas, R A S; Simnor, T; Bownes, P; Bidmead, M; Gandon, L; Eaton, D; Palmer, A L

    2017-11-09

    A UK multicentre audit to evaluate HDR and PDR brachytherapy has been performed using alanine absolute dosimetry. This is the first national UK audit performing an absolute dose measurement at a clinically relevant distance (20 mm) from the source. It was performed in both INTERLACE (a phase III multicentre trial in cervical cancer) and non-INTERLACE brachytherapy centres treating gynaecological tumours. Forty-seven UK centres (including the National Physical Laboratory) were visited. A simulated line source was generated within each centre's treatment planning system and dwell times calculated to deliver 10 Gy at 20 mm from the midpoint of the central dwell (representative of Point A of the Manchester system). The line source was delivered in a water-equivalent plastic phantom (Barts Solid Water) encased in blocks of PMMA (polymethyl methacrylate), and charge was measured with an ion chamber at three positions (120° apart, 20 mm from the source). Absorbed dose was then measured with alanine at the same positions and averaged to reduce source positional uncertainties. Charge was also measured at 50 mm from the source (representative of Point B of the Manchester system). Source types included 46 HDR and PDR ¹⁹²Ir sources (7 Flexisource, 24 mHDR-v2, 12 GammaMed HDR Plus, 2 GammaMed PDR Plus, 1 VS2000) and 1 HDR ⁶⁰Co source (Co0.A86). Alanine measurements compared to the centres' calculated dose showed a mean difference (±SD) of +1.1% (±1.4%) at 20 mm. Differences were also observed between source types and dose calculation algorithms. Ion chamber measurements demonstrated significant discrepancies between the three holes, mainly due to positional variation of the source within the catheter (0.4%-4.9% maximum difference between two holes). This comprehensive audit of absolute dose to water from a simulated line source showed all centres could deliver the prescribed dose to within a 5% maximum difference between measurement and calculation.

  12. A generic high-dose rate ¹⁹²Ir brachytherapy source for evaluation of model-based dose calculations beyond the TG-43 formalism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballester, Facundo, E-mail: Facundo.Ballester@uv.es; Carlsson Tedgren, Åsa; Granero, Domingo

    Purpose: In order to facilitate a smooth transition for brachytherapy dose calculations from the American Association of Physicists in Medicine (AAPM) Task Group No. 43 (TG-43) formalism to model-based dose calculation algorithms (MBDCAs), treatment planning systems (TPSs) using a MBDCA require a set of well-defined test case plans characterized by Monte Carlo (MC) methods. This also permits direct dose comparison to TG-43 reference data. Such test case plans should be made available for use in the software commissioning process performed by clinical end users. To this end, a hypothetical, generic high-dose rate (HDR) ¹⁹²Ir source and a virtual water phantom were designed, which can be imported into a TPS. Methods: A hypothetical, generic HDR ¹⁹²Ir source was designed based on commercially available sources, as well as a virtual, cubic water phantom that can be imported into any TPS in DICOM format. The dose distribution of the generic ¹⁹²Ir source when placed at the center of the cubic phantom, and away from the center under altered scatter conditions, was evaluated using two commercial MBDCAs [Oncentra® Brachy with advanced collapsed-cone engine (ACE) and BrachyVision ACUROS™]. Dose comparisons were performed using state-of-the-art MC codes for radiation transport, including ALGEBRA, BrachyDose, GEANT4, MCNP5, MCNP6, and PENELOPE2008. The methodologies adhered to recommendations in the AAPM TG-229 report on high-energy brachytherapy source dosimetry. TG-43 dosimetry parameters, an along-away dose-rate table, and primary and scatter separated (PSS) data were obtained. The virtual water phantom of 201³ voxels (1 mm sides) was used to evaluate the calculated dose distributions. Two test case plans involving a single position of the generic HDR ¹⁹²Ir source in this phantom were prepared: (i) source centered in the phantom and (ii) source displaced 7 cm laterally from the center. Datasets were independently produced by different investigators. MC results were then compared against dose calculated using TG-43 and MBDCA methods. Results: TG-43 and PSS datasets were generated for the generic source, the PSS data for use with the ACE algorithm. The dose-rate constant values obtained from seven MC simulations, performed independently using different codes, were in excellent agreement, yielding an average of 1.1109 ± 0.0004 cGy/(h U) (k = 1, Type A uncertainty). MC calculated dose-rate distributions for the two plans were also found to be in excellent agreement, with differences within type A uncertainties. Differences between commercial MBDCA and MC results were test, position, and calculation parameter dependent. On average, however, these differences were within 1% for ACUROS and 2% for ACE at clinically relevant distances. Conclusions: A hypothetical, generic HDR ¹⁹²Ir source was designed and implemented in two commercially available TPSs employing different MBDCAs. Reference dose distributions for this source were benchmarked and used for the evaluation of MBDCA calculations employing a virtual, cubic water phantom in the form of a CT DICOM image series. The implementation of a generic source of identical design in all TPSs using MBDCAs is an important step toward supporting univocal commissioning procedures and direct comparisons between TPSs.
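
    For reference, the TG-43 formalism that these MBDCAs are benchmarked against writes the dose rate around a line source as (standard AAPM notation):

        \[
        \dot D(r,\theta) = S_K \,\Lambda\, \frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\, g_L(r)\, F(r,\theta),
        \]

    with S_K the air-kerma strength (U), \Lambda the dose-rate constant (the 1.1109 cGy/(h U) value above), G_L the line-source geometry function, g_L the radial dose function, and F the 2D anisotropy function, normalized at r_0 = 1 cm, \theta_0 = 90°.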

  13. Two-dimensional extended fluid model for a dc glow discharge with nonlocal ionization source term

    NASA Astrophysics Data System (ADS)

    Rafatov, Ismail; Bogdanov, Eugeny; Kudryavtsev, Anatoliy

    2013-09-01

    Numerical techniques applied to gas discharge plasma modelling are generally grouped into fluid and kinetic (particle) methods, and their combinations, which lead to hybrid models. Hybrid models usually employ the Monte Carlo method to simulate fast electron dynamics, while slow plasma species are described as fluids. However, since the contribution of fast electrons to these models is limited to deriving the ionization rate distribution, their effect can be expressed by an analytical approximation of the ionization source function, which is then integrated into the fluid model. In the context of this approach, we incorporated the effect of fast electrons into the "extended fluid model" of a glow discharge, using two spatial dimensions. Slow electrons, ions and excited neutral species are described by the fluid plasma equations. Slow electron transport (diffusion and mobility) coefficients as well as electron induced reaction rates are determined from the solutions of the electron Boltzmann equation. The self-consistent electric field is calculated using the Poisson equation. We carried out test calculations for a discharge in argon gas. Comparison with the experimental data as well as with the hybrid model results exhibits good applicability of the proposed model. The work was supported by the joint research grant from the Scientific and Technical Research Council of Turkey (TUBITAK) 212T164 and the Russian Foundation for Basic Research (RFBR).
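
    The fluid part of such a model is typically a drift-diffusion system coupled to Poisson's equation; a generic sketch in standard notation (not the authors' exact formulation):

        \[
        \frac{\partial n_e}{\partial t} + \nabla\cdot\Gamma_e = S_{\mathrm{ion}},\qquad
        \Gamma_e = -\mu_e n_e \mathbf{E} - D_e \nabla n_e,\qquad
        \nabla^2\varphi = -\frac{e}{\varepsilon_0}\,(n_i - n_e),
        \]

    where the nonlocal treatment enters through S_ion: instead of a local-field expression, the ionization source is supplied by the analytical approximation of the fast-electron contribution.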

  14. Ultraviolet, X-ray, and infrared observations of HDE 226868 = Cygnus X-1

    NASA Technical Reports Server (NTRS)

    Treves, A.; Chiappetti, L.; Tanzi, E. G.; Tarenghi, M.; Gursky, H.; Dupree, A. K.; Hartmann, L. W.; Raymond, J.; Davis, R. J.; Black, J.

    1980-01-01

    During April, May, and July of 1978, HDE 226868, the optical counterpart of Cygnus X-1, was repeatedly observed in the ultraviolet with the IUE satellite. Some X-ray and infrared observations were made during the same period. The general shape of the spectrum is that expected from a late O supergiant. Strong absorption features are apparent in the ultraviolet, some of which have been identified. The equivalent widths of the most prominent lines appear to be modulated with the orbital phase. This modulation is discussed in terms of the ionization contours calculated by Hatchett and McCray for a binary X-ray source in the stellar wind of the companion.

  15. Neutrino tomography - Tevatron mapping versus the neutrino sky. [for X-rays of earth interior]

    NASA Technical Reports Server (NTRS)

    Wilson, T. L.

    1984-01-01

    The feasibility of neutrino tomography of the earth's interior is discussed, taking the 80-GeV W-boson mass determined by Arnison (1983) and Banner (1983) into account. The opacity of earth zones is calculated on the basis of the preliminary reference earth model of Dziewonski and Anderson (1981), and the results are presented in tables and graphs. Proposed tomography schemes are evaluated in terms of the well-posedness of the inverse-Radon-transform problems involved, the neutrino generators and detectors required, and practical and economic factors. The ill-posed schemes are shown to be infeasible; the well-posed schemes (using Tevatrons or the neutrino sky as sources) are considered feasible but impractical.
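
    The opacity entering these estimates is a line integral of density along each chord through the earth; a hedged sketch of the standard form:

        \[
        \tau(E) = \sigma_\nu(E) \int_{\mathrm{chord}} n(\ell)\, d\ell,
        \]

    where \sigma_\nu(E) is the energy-dependent neutrino-nucleon cross section (rising with energy, hence the relevance of the W-boson mass scale) and n(\ell) the nucleon number density from the reference earth model. Tomography then inverts the measured attenuation e^{-\tau} over many chords, which is where the inverse-Radon-transform well-posedness question arises.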

  16. Io's Magnetospheric Interaction: An MHD Model with Day-Night Asymmetry

    NASA Technical Reports Server (NTRS)

    Kabin, K.; Combi, M. R.; Gombosi, T. I.; DeZeeuw, D. L.; Hansen, K. C.; Powell, K. G.

    2001-01-01

    In this paper we present the results of an improved three-dimensional MHD model for Io's interaction with Jupiter's magnetosphere. We have included the day-night asymmetry in the spatial distribution of our mass-loading, which allowed us to reproduce several smaller features of the Galileo December 1995 data set. The calculation is performed using our newly modified description of the pick-up processes that accounts for the effects of the corotational electric field existing in the Jovian magnetosphere. This change in the formulation of the source terms for the MHD equations resulted in significant improvements in the comparison with the Galileo measurements. We briefly discuss the limitations of our model and possible future improvements.
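
    Mass loading of this kind enters the MHD equations as source terms; a generic sketch of the continuity and momentum forms (standard notation, not the authors' exact source terms):

        \[
        \frac{\partial\rho}{\partial t} + \nabla\cdot(\rho\mathbf{u}) = \dot\rho,\qquad
        \frac{\partial(\rho\mathbf{u})}{\partial t} + \nabla\cdot\big(\rho\mathbf{u}\mathbf{u} + p\,\mathbb{I}\big) = \mathbf{J}\times\mathbf{B} + \dot\rho\,\mathbf{u}_{\mathrm{pickup}},
        \]

    where \dot\rho is the pick-up mass-loading rate; the day-night asymmetry described above corresponds to making \dot\rho depend on position relative to the sub-solar point.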

  17. The structure of supersonic jet flow and its radiated sound

    NASA Technical Reports Server (NTRS)

    Mankbadi, Reda R.; Hayder, M. E.; Povinelli, Louis A.

    1993-01-01

    A large-eddy simulation of a supersonic jet is presented with emphasis on capturing the unsteady features of the flow pertinent to sound emission. A high-accuracy numerical scheme is used to solve the filtered, unsteady, compressible Navier-Stokes equations while modelling the subgrid-scale turbulence. For random inflow disturbances, the wave-like feature of the large-scale structure is demonstrated. The large-scale structure was then enhanced by imposing harmonic disturbances at the inflow. The limitations of using the full Navier-Stokes equations to calculate the far-field sound are discussed. An application of Lighthill's acoustic analogy is given with the objective of highlighting the difficulties that arise from the non-compactness of the source term.
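
    Lighthill's analogy recasts the exact flow equations as a wave equation driven by a quadrupole source; in standard form:

        \[
        \frac{\partial^2\rho'}{\partial t^2} - c_0^2\nabla^2\rho' = \frac{\partial^2 T_{ij}}{\partial x_i \partial x_j},\qquad
        T_{ij} = \rho u_i u_j + \big(p' - c_0^2\rho'\big)\delta_{ij} - \tau_{ij}.
        \]

    The non-compactness difficulty noted above arises because in a supersonic jet the source region spanned by T_ij extends over many acoustic wavelengths, so it cannot be treated as a compact (point-like) source.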

  18. Computational optical palpation: a finite-element approach to micro-scale tactile imaging using a compliant sensor

    PubMed Central

    Sampson, David D.; Kennedy, Brendan F.

    2017-01-01

    High-resolution tactile imaging, superior to the sense of touch, has potential for future biomedical applications such as robotic surgery. In this paper, we propose a tactile imaging method, termed computational optical palpation, based on measuring the change in thickness of a thin, compliant layer with optical coherence tomography and calculating tactile stress using finite-element analysis. We demonstrate our method on test targets and on freshly excised human breast fibroadenoma, demonstrating a resolution of up to 15–25 µm and a field of view of up to 7 mm. Our method is open source and readily adaptable to other imaging modalities, such as ultrasonography and confocal microscopy. PMID:28250098
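
    A minimal sketch of the underlying idea, assuming a uniform linear-elastic layer; this pointwise estimate stands in for the paper's finite-element analysis, and the modulus value is hypothetical:

        import numpy as np

        # OCT-measured thickness maps of the compliant layer (mm), toy data.
        t0 = np.full((64, 64), 0.50)              # unloaded thickness
        t = t0 - 0.05 * np.random.rand(64, 64)    # thickness under load

        E_LAYER = 20.0  # kPa, hypothetical Young's modulus of the layer

        strain = (t0 - t) / t0       # axial compressive strain
        stress = E_LAYER * strain    # uniaxial linear-elastic estimate (kPa)

        # A finite-element solve replaces this pointwise estimate in the paper,
        # capturing lateral mechanical coupling within the layer.
        print(f"peak tactile stress ~ {stress.max():.2f} kPa")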

  19. The IEA/ORAU Long-Term Global Energy-CO2 Model: Personal Computer Version A84PC

    DOE Data Explorer

    Edmonds, Jae A.; Reilly, John M.; Boden, Thomas A. [CDIAC; Reynolds, S. E. [CDIAC; Barns, D. W.

    1995-01-01

    The IBM A84PC version of the Edmonds-Reilly model has the capability to calculate both CO2 and CH4 emission estimates by source and region. Population, labor productivity, end-use energy efficiency, income effects, price effects, resource base, technological change in energy production, environmental costs of energy production, market-penetration rate of energy-supply technology, solar and biomass energy costs, synfuel costs, and the number of forecast periods may be interactively inspected and altered, producing a variety of global and regional CO2 and CH4 emission scenarios for 1975 through 2100. Users are strongly encouraged to see our instructions for downloading, installing, and running the model.

  20. Low birth weight and air pollution in California: Which sources and components drive the risk?

    PubMed

    Laurent, Olivier; Hu, Jianlin; Li, Lianfa; Kleeman, Michael J; Bartell, Scott M; Cockburn, Myles; Escobedo, Loraine; Wu, Jun

    2016-01-01

    Intrauterine growth restriction has been associated with exposure to air pollution, but there is a need to clarify which sources and components are most likely responsible. This study investigated the associations between low birth weight (LBW, <2500 g) in term-born infants (≥37 gestational weeks) and air pollution by source and composition in California, over the period 2001–2008. Complementary exposure models were used: an empirical Bayesian kriging model for the interpolation of ambient pollutant measurements; a source-oriented chemical transport model (using California emission inventories) that estimated fine and ultrafine particulate matter (PM2.5 and PM0.1, respectively) mass concentrations (4 km × 4 km) by source and composition; a line-source roadway dispersion model at fine resolution; and traffic index estimates. Birth weight was obtained from California birth certificate records. A case-cohort design was used. Five controls per term LBW case were randomly selected (without covariate matching or stratification) from among term births. The resulting datasets were analyzed by logistic regression with a random effect by hospital, using generalized additive mixed models adjusted for race/ethnicity, education, maternal age and household income. In total, 72,632 singleton term LBW cases were included. Term LBW was positively and significantly associated with interpolated measurements of ozone but not total fine PM or nitrogen dioxide. No significant association was observed between term LBW and primary PM from all sources grouped together. A positive significant association was observed for secondary organic aerosols. Exposures to elemental carbon (EC), nitrates and ammonium were also positively and significantly associated with term LBW, but only for exposure during the third trimester of pregnancy. Significant positive associations were observed between term LBW risk and primary PM emitted by on-road gasoline and diesel or by commercial meat cooking sources. Primary PM from wood burning was inversely associated with term LBW. Significant positive associations were also observed between term LBW and ultrafine particle numbers modeled with the line-source roadway dispersion model, traffic density and proximity to roadways. This large study based on complementary exposure metrics suggests that not only primary pollution sources (traffic and commercial meat cooking) but also EC and secondary pollutants are risk factors for term LBW. Copyright © 2016 Elsevier Ltd. All rights reserved.
