Monte Carlo Simulations of Radiative and Neutrino Transport under Astrophysical Conditions
NASA Astrophysics Data System (ADS)
Krivosheyev, Yu. M.; Bisnovatyi-Kogan, G. S.
2018-05-01
Monte Carlo simulations are utilized to model radiative and neutrino transfer in astrophysics. An algorithm that can be used to study radiative transport in astrophysical plasma based on simulations of photon trajectories in a medium is described. Formation of the hard X-ray spectrum of the Galactic microquasar SS 433 is considered in detail as an example. Specific requirements for applying such simulations to neutrino transport in a dense medium and algorithmic differences compared to its application to photon transport are discussed.
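As a schematic illustration of the trajectory-simulation approach described in this abstract, the sketch below (not the authors' code; slab optical depth, single-scattering albedo, and photon count are invented) transports photons through a homogeneous, isotropically scattering slab and tallies the transmitted fraction:

```python
# Minimal Monte Carlo photon-transport sketch: sample exponential free paths,
# scatter isotropically, and tally escapes. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def slab_transmission(tau_slab=2.0, albedo=0.9, n_photons=20_000):
    transmitted = 0
    for _ in range(n_photons):
        tau, mu = 0.0, 1.0                 # depth travelled; direction cosine
        while True:
            tau += mu * -np.log(1.0 - rng.random())  # sample a free path
            if tau >= tau_slab:
                transmitted += 1           # escaped through the far face
                break
            if tau <= 0.0:
                break                      # escaped back through the near face
            if rng.random() > albedo:
                break                      # absorbed in the medium
            mu = 2.0 * rng.random() - 1.0  # isotropic scattering
    return transmitted / n_photons

print(f"transmitted fraction ≈ {slab_transmission():.3f}")
```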
A Test Suite for 3D Radiative Hydrodynamics Simulations of Protoplanetary Disks
NASA Astrophysics Data System (ADS)
Boley, Aaron C.; Durisen, R. H.; Nordlund, A.; Lord, J.
2006-12-01
Radiative hydrodynamics simulations of protoplanetary disks with different treatments for radiative cooling demonstrate disparate evolutions (see Durisen et al. 2006, PPV chapter). Some of these differences include the effects of convection and metallicity on disk cooling and the susceptibility of the disk to fragmentation. Because a principal reason for these differences may be the treatment of radiative cooling, the accuracy of cooling algorithms must be evaluated. In this paper we describe a radiative transport test suite, and we challenge all researchers who use radiative hydrodynamics to study protoplanetary disk evolution to evaluate their algorithms with these tests. The test suite can be used to demonstrate an algorithm's accuracy in transporting the correct flux through an atmosphere and in reaching the correct temperature structure, to test the algorithm's dependence on resolution, and to determine whether the algorithm permits or inhibits convection when expected. In addition, we use this test suite to demonstrate the accuracy of a newly developed radiative cooling algorithm that combines vertical rays with flux-limited diffusion. This research was supported in part by a Graduate Student Researchers Program fellowship.
Using a derivative-free optimization method for multiple solutions of inverse transport problems
Armstrong, Jerawan C.; Favorite, Jeffrey A.
2016-01-14
Identifying unknown components of an object that emits radiation is an important problem for national and global security. Radiation signatures measured from an object of interest can be used to infer object parameter values that are not known. This problem is called an inverse transport problem. An inverse transport problem may have multiple solutions and the most widely used approach for its solution is an iterative optimization method. This paper proposes a stochastic derivative-free global optimization algorithm to find multiple solutions of inverse transport problems. The algorithm is an extension of a multilevel single linkage (MLSL) method where a mesh adaptive direct search (MADS) algorithm is incorporated into the local phase. Furthermore, numerical test cases using uncollided fluxes of discrete gamma-ray lines are presented to show the performance of this new algorithm.
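A toy sketch of the MLSL multistart structure follows. SciPy has no MADS implementation, so a Nelder-Mead local search stands in for the MADS local phase, and a two-minimum test function stands in for an inverse transport objective; everything here is an illustrative assumption, not the authors' algorithm:

```python
# Sketch of multilevel single linkage (MLSL): sample the parameter space,
# start a local search only from points with no better sample nearby, and
# collect the distinct local minima. Nelder-Mead replaces MADS here.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def objective(x):
    # stand-in objective with two solutions, at (+1, 0) and (-1, 0)
    return (x[0] ** 2 - 1.0) ** 2 + x[1] ** 2

def mlsl(n_samples=200, r_crit=0.4, lo=-2.0, hi=2.0):
    pts = rng.uniform(lo, hi, size=(n_samples, 2))
    vals = np.array([objective(p) for p in pts])
    minima = []
    for i in range(n_samples):
        near = np.linalg.norm(pts - pts[i], axis=1) < r_crit
        if np.any(vals[near] < vals[i]):
            continue                          # a better neighbour exists: skip
        res = minimize(objective, pts[i], method="Nelder-Mead")
        if all(np.linalg.norm(res.x - m) > 1e-2 for m in minima):
            minima.append(res.x)              # record a new distinct solution
    return minima

print([np.round(m, 3) for m in mlsl()])       # expect roughly (+/-1, 0)
```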
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swesty, F. Douglas; Myra, Eric S.
It is now generally agreed that multidimensional, multigroup, neutrino-radiation hydrodynamics (RHD) is an indispensable element of any realistic model of stellar-core collapse, core-collapse supernovae, and proto-neutron star instabilities. We have developed a new, two-dimensional, multigroup algorithm that can model neutrino-RHD flows in core-collapse supernovae. Our algorithm uses an approach similar to the ZEUS family of algorithms, originally developed by Stone and Norman. However, this completely new implementation extends that previous work in three significant ways: first, we incorporate multispecies, multigroup RHD in a flux-limited-diffusion approximation. Our approach is capable of modeling pair-coupled neutrino-RHD, and includes effects of Pauli blocking in the collision integrals. Blocking gives rise to nonlinearities in the discretized radiation-transport equations, which we evolve implicitly in time. We employ parallelized Newton-Krylov methods to obtain a solution of these nonlinear, implicit equations. Our second major extension to the ZEUS algorithm is the inclusion of an electron conservation equation that describes the evolution of electron-number density in the hydrodynamic flow. This permits calculating deleptonization of a stellar core. Our third extension modifies the hydrodynamics algorithm to accommodate realistic, complex equations of state, including those having nonconvex behavior. In this paper, we present a description of our complete algorithm, giving sufficient details to allow others to implement, reproduce, and extend our work. Finite-differencing details are presented in appendices. We also discuss implementation of this algorithm on state-of-the-art, parallel-computing architectures. Finally, we present results of verification tests that demonstrate the numerical accuracy of this algorithm on diverse hydrodynamic, gravitational, radiation-transport, and RHD sample problems. We believe our methods to be of general use in a variety of model settings where radiation transport or RHD is important. Extension of this work to three spatial dimensions is straightforward.
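The implicit, Newton-Krylov-solved diffusion update at the heart of such schemes can be illustrated in one dimension. The sketch below takes a single backward-Euler step of a toy nonlinear (flux-limited-diffusion-like) equation with SciPy's Jacobian-free Newton-Krylov solver; the grid, diffusion coefficient, and initial pulse are invented for the example and are not the paper's neutrino physics:

```python
# One implicit (backward-Euler) step of a toy nonlinear diffusion equation,
# solved with a Jacobian-free Newton-Krylov method. Zero-flux boundaries.
import numpy as np
from scipy.optimize import newton_krylov

nx, dx, dt = 100, 1.0 / 100, 1e-3
E_old = np.ones(nx)
E_old[nx // 2] = 50.0                 # a radiation-energy pulse in the middle

def diffusion_coeff(E):
    return 1.0 / (3.0 * (1.0 + E))    # toy nonlinear, flux-limited-like D(E)

def residual(E_new):
    D = diffusion_coeff(E_new)
    Df = 0.5 * (D[1:] + D[:-1])                   # face-centred coefficients
    flux = -Df * (E_new[1:] - E_new[:-1]) / dx    # interior face fluxes
    div = np.zeros(nx)
    div[:-1] += flux / dx                         # flux out the east face
    div[1:] -= flux / dx                          # flux in the west face
    return (E_new - E_old) / dt + div

E_new = newton_krylov(residual, E_old, f_tol=1e-8)
print("energy conserved:", np.isclose(E_new.sum(), E_old.sum()))
```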
Faster Heavy Ion Transport for HZETRN
NASA Technical Reports Server (NTRS)
Slaba, Tony C.
2013-01-01
The deterministic particle transport code HZETRN was developed to enable fast and accurate space radiation transport through materials. As more complex transport solutions are implemented for neutrons, light ions (Z ≤ 2), mesons, and leptons, it is important to maintain overall computational efficiency. In this work, the heavy ion (Z > 2) transport algorithm in HZETRN is reviewed, and a simple modification is shown to provide an approximate 5x decrease in execution time for galactic cosmic ray transport. Convergence tests and other comparisons are carried out to verify that numerical accuracy is maintained in the new algorithm.
NASA Technical Reports Server (NTRS)
Norman, Ryan B.; Badavi, Francis F.; Blattnig, Steve R.; Atwell, William
2011-01-01
A deterministic suite of radiation transport codes, developed at NASA Langley Research Center (LaRC), which describes the transport of electrons, photons, protons, and heavy ions in condensed media, is used to simulate exposures from spectral distributions typical of electrons, protons and carbon-oxygen-sulfur (C-O-S) trapped heavy ions in the Jovian radiation environment. The particle transport suite consists of a coupled electron and photon deterministic transport algorithm (CEPTRN) and a coupled light particle and heavy ion deterministic transport algorithm (HZETRN). The primary purpose for the development of the transport suite is to provide a means for the spacecraft design community to rapidly perform numerous repetitive calculations essential for electron, proton and heavy ion radiation exposure assessments in complex space structures. In this paper, the radiation environment of the Galilean satellite Europa is used as a representative boundary condition to show the capabilities of the transport suite. While the transport suite can directly access the output electron spectra of the Jovian environment as generated by the Jet Propulsion Laboratory (JPL) Galileo Interim Radiation Electron (GIRE) model of 2003, for the sake of relevance to the upcoming Europa Jupiter System Mission (EJSM) the 105-days-at-Europa mission fluence energy spectra provided by JPL are used to produce the corresponding dose-depth curve in silicon behind an aluminum shield of 100 mils (~0.7 g/sq cm). The transport suite can also accept ray-traced thickness files from a computer-aided design (CAD) package and calculate the total ionizing dose (TID) at a specific target point. In that regard, using a low-fidelity CAD model of the Galileo probe, the transport suite was verified by comparison with Monte Carlo (MC) simulations for orbits JOI-J35 of the Galileo extended mission (1996-2001). For the upcoming EJSM mission with a potential launch date of 2020, the transport suite is used to compute the traditional aluminum-silicon dose-depth calculation as a standard shield-target combination output, as well as the shielding response of high charge (Z) shields such as tantalum (Ta). Finally, a shield optimization algorithm is used to guide the instrument designer in the choice of graded-Z shield analysis.
Unstructured Polyhedral Mesh Thermal Radiation Diffusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmer, T.S.; Zika, M.R.; Madsen, N.K.
2000-07-27
Unstructured mesh particle transport and diffusion methods are gaining wider acceptance as mesh generation, scientific visualization and linear solvers improve. This paper describes an algorithm that is currently being used in the KULL code at Lawrence Livermore National Laboratory to solve the radiative transfer equations. The algorithm employs a point-centered diffusion discretization on arbitrary polyhedral meshes in 3D. We present the results of a few test problems to illustrate the capabilities of the radiation diffusion module.
MOD3D: a model for incorporating MODTRAN radiative transfer into 3D simulations
NASA Astrophysics Data System (ADS)
Berk, Alexander; Anderson, Gail P.; Gossage, Brett N.
2001-08-01
MOD3D, a rapid and accurate radiative transport algorithm, is being developed for application to 3D simulations. MOD3D couples to optical property databases generated by the MODTRAN4 Correlated-k (CK) band model algorithm. The Beer's Law dependence of the CK algorithm provides for proper coupling of illumination and line-of-sight paths. Full 3D spatial effects are modeled by scaling and interpolating optical data to local conditions. A C++ version of MOD3D has been integrated into JMASS for calculation of path transmittances, thermal emission and single scatter solar radiation. Results from initial validation efforts are presented.
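The Beer's-law property of the correlated-k (CK) treatment mentioned in this abstract can be shown in a few lines. In the sketch below (the k values and quadrature weights are invented), the band transmittance of two concatenated paths equals the g-bin-by-g-bin product, which is what permits consistent coupling of illumination and line-of-sight paths; a plain product of band-averaged transmittances does not have this property:

```python
# Beer's-law structure of the correlated-k (CK) band model: band-averaged
# transmittance is a weighted sum of monochromatic-like exponentials, so
# concatenated paths combine correctly g-bin by g-bin. Values are invented.
import numpy as np

k = np.array([0.01, 0.1, 1.0, 10.0])   # absorption coefficient per g-bin
w = np.array([0.4, 0.3, 0.2, 0.1])     # g-bin quadrature weights (sum to 1)

def band_T(u):
    """Band-averaged transmittance for absorber amount u."""
    return np.sum(w * np.exp(-k * u))

u_sun, u_view = 0.5, 1.5               # illumination and line-of-sight paths
coupled = np.sum(w * np.exp(-k * u_sun) * np.exp(-k * u_view))
naive = band_T(u_sun) * band_T(u_view) # biased: ignores spectral correlation
print(coupled, band_T(u_sun + u_view), naive)   # the first two agree
```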
An Improved Neutron Transport Algorithm for Space Radiation
NASA Technical Reports Server (NTRS)
Heinbockel, John H.; Clowdsley, Martha S.; Wilson, John W.
2000-01-01
A low-energy neutron transport algorithm for use in space radiation protection is developed. The algorithm is based upon a multigroup analysis of the straight-ahead Boltzmann equation by using a mean value theorem for integrals. This analysis is accomplished by solving a realistic but simplified neutron transport test problem. The test problem is analyzed by using numerical and analytical procedures to obtain an accurate solution within specified error bounds. Results from the test problem are then used for determining mean values associated with rescattering terms that are associated with a multigroup solution of the straight-ahead Boltzmann equation. The algorithm is then coupled to the Langley HZETRN code through the evaporation source term. Evaluation of the neutron fluence generated by the solar particle event of February 23, 1956, for a water and an aluminum-water shield-target configuration is then compared with LAHET and MCNPX Monte Carlo code calculations for the same shield-target configuration. The algorithm developed showed a great improvement in results over the unmodified HZETRN solution. In addition, a two-directional solution of the evaporation source showed even further improvement of the fluence near the front of the water target where diffusion from the front surface is important.
NASA Astrophysics Data System (ADS)
Foucart, Francois
2018-04-01
General relativistic radiation hydrodynamic simulations are necessary to accurately model a number of astrophysical systems involving black holes and neutron stars. Photon transport plays a crucial role in radiatively dominated accretion discs, while neutrino transport is critical to core-collapse supernovae and to the modelling of electromagnetic transients and nucleosynthesis in neutron star mergers. However, evolving the full Boltzmann equations of radiative transport is extremely expensive. Here, we describe the implementation in the general relativistic SPEC code of a cheaper radiation hydrodynamic method that theoretically converges to a solution of Boltzmann's equation in the limit of infinite numerical resources. The algorithm is based on a grey two-moment scheme, in which we evolve the energy density and momentum density of the radiation. Two-moment schemes require a closure that fills in missing information about the energy spectrum and higher order moments of the radiation. Instead of the approximate analytical closure currently used in core-collapse and merger simulations, we complement the two-moment scheme with a low-accuracy Monte Carlo evolution. The Monte Carlo results can provide any or all of the missing information in the evolution of the moments, as desired by the user. As a first test of our methods, we study a set of idealized problems demonstrating that our algorithm performs significantly better than existing analytical closures. We also discuss the current limitations of our method, in particular open questions regarding the stability of the fully coupled scheme.
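A minimal sketch of the closure step follows: the second angular moment (the Eddington tensor) is estimated from a cloud of Monte Carlo packets, which is the kind of missing information a low-accuracy Monte Carlo evolution can supply to a two-moment scheme. The packet sample below is randomly generated rather than taken from a transport solve, so this is an illustration of the moment arithmetic only:

```python
# Estimate radiation moments from Monte Carlo packets: energy (zeroth),
# flux (first), and the Eddington tensor P/E (second, the closure quantity).
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
e = rng.exponential(1.0, n)                  # packet energies (stand-ins)
v = rng.normal(size=(n, 3))                  # packet directions, isotropic
v /= np.linalg.norm(v, axis=1, keepdims=True)

E = e.sum()                                  # zeroth moment: energy
F = (e[:, None] * v).sum(axis=0)             # first moment: flux
P = (e[:, None, None] * v[:, :, None] * v[:, None, :]).sum(axis=0)

eddington = P / E     # trace is 1 by construction; ~I/3 for isotropic packets
print(np.round(eddington, 3))
```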
F--Ray: A new algorithm for efficient transport of ionizing radiation
NASA Astrophysics Data System (ADS)
Mao, Yi; Zhang, J.; Wandelt, B. D.; Shapiro, P. R.; Iliev, I. T.
2014-04-01
We present a new algorithm for the 3D transport of ionizing radiation, called F-Ray.
High Performance Radiation Transport Simulations on TITAN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Christopher G; Davidson, Gregory G; Evans, Thomas M
2012-01-01
In this paper we describe the Denovo code system. Denovo solves the six-dimensional, steady-state, linear Boltzmann transport equation, of central importance to nuclear technology applications such as reactor core analysis (neutronics), radiation shielding, nuclear forensics and radiation detection. The code features multiple spatial differencing schemes, state-of-the-art linear solvers, the Koch-Baker-Alcouffe (KBA) parallel-wavefront sweep algorithm for inverting the transport operator, a new multilevel energy decomposition method scaling to hundreds of thousands of processing cores, and a modern, novel code architecture that supports straightforward integration of new features. In this paper we discuss the performance of Denovo on the 10--20 petaflop ORNL GPU-based system, Titan. We describe algorithms and techniques used to exploit the capabilities of Titan's heterogeneous compute node architecture and the challenges of obtaining good parallel performance for this sparse hyperbolic PDE solver containing inherently sequential computations. Numerical results demonstrating Denovo performance on early Titan hardware are presented.
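The KBA sweep referenced above exploits the dependency structure of an upwind discretization: for a given ordinate, each cell needs only its upwind neighbours, so cells on a common anti-diagonal are mutually independent and can be processed concurrently. The sketch below performs a serial diamond-difference sweep for one ordinate on a toy uniform 2D grid (cross section and source invented), with comments marking where KBA parallelism would apply; it is not Denovo's implementation:

```python
# Serial diamond-difference transport sweep for one ordinate (mu, eta > 0)
# on a uniform 2D grid with vacuum inflow boundaries. Cells with i + j =
# const depend only on already-swept cells, which is what KBA parallelizes.
import numpy as np

nx = ny = 64
dx = dy = 0.1
mu, eta = 0.57735, 0.57735          # direction cosines of the ordinate
sigma_t = 1.0                       # total cross section (uniform toy medium)
q = 0.5                             # flat source

psi_w = np.zeros(ny)                # incoming west-face fluxes (vacuum)
psi_s = np.zeros(nx)                # incoming south-face fluxes (vacuum)
phi = np.zeros((nx, ny))            # cell-average angular flux

a, b = 2.0 * mu / dx, 2.0 * eta / dy
for i in range(nx):                 # KBA loops over wavefronts i + j = const
    for j in range(ny):
        psi_c = (q + a * psi_w[j] + b * psi_s[i]) / (sigma_t + a + b)
        psi_w[j] = 2.0 * psi_c - psi_w[j]     # outgoing east face
        psi_s[i] = 2.0 * psi_c - psi_s[i]     # outgoing north face
        phi[i, j] = psi_c

print(f"max cell flux: {phi.max():.4f}")
```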
An Improved Elastic and Nonelastic Neutron Transport Algorithm for Space Radiation
NASA Technical Reports Server (NTRS)
Clowdsley, Martha S.; Wilson, John W.; Heinbockel, John H.; Tripathi, R. K.; Singleterry, Robert C., Jr.; Shinn, Judy L.
2000-01-01
A neutron transport algorithm including both elastic and nonelastic particle interaction processes for use in space radiation protection for arbitrary shield material is developed. The algorithm is based upon a multiple energy grouping and analysis of the straight-ahead Boltzmann equation by using a mean value theorem for integrals. The algorithm is then coupled to the Langley HZETRN code through a bidirectional neutron evaporation source term. Evaluation of the neutron fluence generated by the solar particle event of February 23, 1956, for an aluminum-water shield-target configuration is then compared with MCNPX and LAHET Monte Carlo calculations for the same shield-target configuration. With the Monte Carlo calculation as a benchmark, the algorithm developed in this paper showed a great improvement in results over the unmodified HZETRN solution. In addition, a high-energy bidirectional neutron source based on a formula by Ranft showed even further improvement of the fluence results over previous results near the front of the water target, where diffusion out of the front surface is important. Effects of improved interaction cross sections are modest compared with the addition of the high-energy bidirectional source terms.
The Application of Neutron Transport Green's Functions to Threat Scenario Simulation
NASA Astrophysics Data System (ADS)
Thoreson, Gregory G.; Schneider, Erich A.; Armstrong, Hirotatsu; van der Hoeven, Christopher A.
2015-02-01
Radiation detectors provide deterrence and defense against nuclear smuggling attempts by scanning vehicles, ships, and pedestrians for radioactive material. Understanding detector performance is crucial to developing novel technologies, architectures, and alarm algorithms. Detection can be modeled through radiation transport simulations; however, modeling a spanning set of threat scenarios over the full transport phase-space is computationally challenging. Previous research has demonstrated Green's functions can simulate photon detector signals by decomposing the scenario space into independently simulated submodels. This paper presents decomposition methods for neutron and time-dependent transport. As a result, neutron detector signals produced from full forward transport simulations can be efficiently reconstructed by sequential application of submodel response functions.
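A schematic of the response-function reconstruction described above: the scenario is decomposed into submodels whose Green's functions are applied sequentially to the source spectrum, replacing a monolithic forward simulation. The matrices below are random placeholders for transport-generated responses, and the submodel names (cargo, air gap, detector efficiency) are hypothetical:

```python
# Sequential application of submodel response functions: one matrix chain
# replaces a full forward transport simulation of the whole scene.
import numpy as np

rng = np.random.default_rng(3)
n_e = 32                                  # energy-group resolution

source = rng.random(n_e)                  # source energy spectrum
G_cargo = rng.random((n_e, n_e)) * 0.1    # source -> cargo-exit response
G_air = np.diag(rng.random(n_e))          # cargo exit -> detector-face response
eff = rng.random(n_e)                     # detector efficiency per group

signal = eff @ (G_air @ (G_cargo @ source))   # reconstructed detector signal
print(f"expected count rate ≈ {signal:.3f}")
```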
Multiagency Urban Search Experiment Detector and Algorithm Test Bed
NASA Astrophysics Data System (ADS)
Nicholson, Andrew D.; Garishvili, Irakli; Peplow, Douglas E.; Archer, Daniel E.; Ray, William R.; Swinney, Mathew W.; Willis, Michael J.; Davidson, Gregory G.; Cleveland, Steven L.; Patton, Bruce W.; Hornback, Donald E.; Peltz, James J.; McLean, M. S. Lance; Plionis, Alexander A.; Quiter, Brian J.; Bandstra, Mark S.
2017-07-01
In order to provide benchmark data sets for radiation detector and algorithm development, a particle transport test bed has been created using experimental data as model input and validation. A detailed radiation measurement campaign at the Combined Arms Collective Training Facility in Fort Indiantown Gap, PA (FTIG), USA, provides sample background radiation levels for a variety of materials present at the site (including cinder block, gravel, asphalt, and soil) using long dwell high-purity germanium (HPGe) measurements. In addition, detailed light detection and ranging data and ground-truth measurements inform model geometry. This paper describes the collected data and the application of these data to create background and injected source synthetic data for an arbitrary gamma-ray detection system using particle transport model detector response calculations and statistical sampling. In the methodology presented here, HPGe measurements inform model source terms while detector response calculations are validated via long dwell measurements using 2"×4"×16" NaI(Tl) detectors at a variety of measurement points. A collection of responses, along with sampling methods and interpolation, can be used to create data sets to gauge radiation detector and algorithm (including detection, identification, and localization) performance under a variety of scenarios. Data collected at the FTIG site are available for query, filtering, visualization, and download at muse.lbl.gov.
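The synthetic-data step can be sketched as folding background and injected-source terms through a detector-response matrix and Poisson-sampling the result to emulate counting statistics. The response matrix and spectra below are random placeholders, not the FTIG-derived model inputs:

```python
# Generate a synthetic measured spectrum: response-folded mean counts plus
# Poisson noise. All inputs are random stand-ins for transport-derived terms.
import numpy as np

rng = np.random.default_rng(4)
n_in, n_out = 128, 64

R = rng.random((n_out, n_in)) * 0.05          # detector response matrix
background = rng.random(n_in) * 40.0          # background emission spectrum
source = np.zeros(n_in)
source[80] = 500.0                            # injected monoenergetic source

expected = R @ (background + source)          # mean counts per channel
measured = rng.poisson(expected)              # one statistical realization
print(measured[:10])
```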
NASA Astrophysics Data System (ADS)
Haworth, Daniel
2013-11-01
The importance of explicitly accounting for the effects of unresolved turbulent fluctuations in Reynolds-averaged and large-eddy simulations of chemically reacting turbulent flows is increasingly recognized. Transported probability density function (PDF) methods have emerged as one of the most promising modeling approaches for this purpose. In particular, PDF methods provide an elegant and effective resolution to the closure problems that arise from averaging or filtering terms that correspond to nonlinear point processes, including chemical reaction source terms and radiative emission. PDF methods traditionally have been associated with studies of turbulence-chemistry interactions in laboratory-scale, atmospheric-pressure, nonluminous, statistically stationary nonpremixed turbulent flames; and Lagrangian particle-based Monte Carlo numerical algorithms have been the predominant method for solving modeled PDF transport equations. Recent advances and trends in PDF methods are reviewed and discussed. These include advances in particle-based algorithms, alternatives to particle-based algorithms (e.g., Eulerian field methods), treatment of combustion regimes beyond low-to-moderate-Damköhler-number nonpremixed systems (e.g., premixed flamelets), extensions to include radiation heat transfer and multiphase systems (e.g., soot and fuel sprays), and the use of PDF methods as the basis for subfilter-scale modeling in large-eddy simulation. Examples are provided that illustrate the utility and effectiveness of PDF methods for physics discovery and for applications to practical combustion systems. These include comparisons of results obtained using the PDF method with those from models that neglect unresolved turbulent fluctuations in composition and temperature in the averaged or filtered chemical source terms and/or the radiation heat transfer source terms. In this way, the effects of turbulence-chemistry-radiation interactions can be isolated and quantified.
Computing Interactions Of Free-Space Radiation With Matter
NASA Technical Reports Server (NTRS)
Wilson, J. W.; Cucinotta, F. A.; Shinn, J. L.; Townsend, L. W.; Badavi, F. F.; Tripathi, R. K.; Silberberg, R.; Tsao, C. H.; Badwar, G. D.
1995-01-01
High Charge and Energy Transport (HZETRN) computer program computationally efficient, user-friendly package of software addressing problem of transport of, and shielding against, radiation in free space. Designed as "black box" for design engineers not concerned with physics of underlying atomic and nuclear radiation processes in free-space environment, but rather primarily interested in obtaining fast and accurate dosimetric information for design and construction of modules and devices for use in free space. Computational efficiency achieved by unique algorithm based on deterministic approach to solution of Boltzmann equation rather than computationally intensive statistical Monte Carlo method. Written in FORTRAN.
An 'adding' algorithm for the Markov chain formalism for radiation transfer
NASA Technical Reports Server (NTRS)
Esposito, L. W.
1979-01-01
An adding algorithm is presented that extends the Markov chain method and considers a preceding calculation as a single state of a new Markov chain. This method takes advantage of the description of the radiation transport as a stochastic process. Successive application of this procedure makes calculation possible for any optical depth without increasing the size of the linear system used. It is determined that the time required for the algorithm is comparable to that for a doubling calculation for homogeneous atmospheres. For an inhomogeneous atmosphere the new method is considerably faster than the standard adding routine. It is concluded that the algorithm is efficient, accurate, and suitable for smaller computers in calculating the diffuse intensity scattered by an inhomogeneous planetary atmosphere.
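For reference, the adding step itself can be written compactly in matrix form: the multiple reflections between two layers sum to a geometric series handled by one matrix inverse, and the combined result can then be treated as a single layer in the next step, which is the kind of successive application the abstract describes. The 1x1 "operators" below are scalars standing in for matrices over a discrete angular grid, and the layers are assumed symmetric (reflection and transmission from below equal those from above):

```python
# Standard adding step for two plane-parallel layers, illumination from above.
import numpy as np

def add_layers(R1, T1, R2, T2):
    n = R1.shape[0]
    M = np.linalg.inv(np.eye(n) - R1 @ R2)   # sum of inter-layer reflections
    R = R1 + T1 @ R2 @ M @ T1                # combined reflection
    T = T2 @ M @ T1                          # combined transmission
    return R, T

R1, T1 = np.array([[0.2]]), np.array([[0.7]])
R2, T2 = np.array([[0.3]]), np.array([[0.6]])
R, T = add_layers(R1, T1, R2, T2)
print(f"combined R = {R[0, 0]:.4f}, T = {T[0, 0]:.4f}")
```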
NASA Astrophysics Data System (ADS)
Yeh, Peter C. Y.; Lee, C. C.; Chao, T. C.; Tung, C. J.
2017-11-01
Intensity-modulated radiation therapy is an effective treatment modality for nasopharyngeal carcinoma. One important aspect of this cancer treatment is the need for an accurate dose algorithm dealing with the complex air/bone/tissue interfaces in the head-neck region to achieve a cure without radiation-induced toxicities. The Acuros XB algorithm explicitly solves the linear Boltzmann transport equation in voxelized volumes to account for tissue heterogeneities such as lungs, bone, air, and soft tissues in the treatment field receiving radiotherapy. With a single-beam setup in phantoms, this algorithm has already been demonstrated to achieve accuracy comparable to Monte Carlo simulations. In the present study, five nasopharyngeal carcinoma patients treated with intensity-modulated radiation therapy were examined for their dose distributions calculated using the Acuros XB in the planning target volume and the organs-at-risk. Corresponding results of Monte Carlo simulations were computed from the electronic portal image data and the BEAMnrc/DOSXYZnrc code. Analysis of dose distributions in terms of the clinical indices indicated that the Acuros XB was comparable in accuracy to Monte Carlo simulations and better than the anisotropic analytical algorithm for dose calculations in real patients.
2014-01-01
[Report fragment] The recoverable content describes an urban radiation transport model whose dose calculations agree with the first-principles code (MCNP) to within 30% for complicated cities and 10% for simpler cities, for yields up to 50 kT. Subject terms: radiation transport. Recoverable section headings: Use of MCNP for Dose Calculations; MCNP Open-Field Absorbed Dose Calculations; The MCNP Urban Model.
NASA Astrophysics Data System (ADS)
Boley, Aaron C.; Durisen, Richard H.; Nordlund, Åke; Lord, Jesse
2007-08-01
Recent three-dimensional radiative hydrodynamics simulations of protoplanetary disks report disparate disk behaviors, and these differences involve the importance of convection to disk cooling, the dependence of disk cooling on metallicity, and the stability of disks against fragmentation and clump formation. To guarantee trustworthy results, a radiative physics algorithm must demonstrate the capability to handle both the high and low optical depth regimes. We develop a test suite that can be used to demonstrate an algorithm's ability to relax to known analytic flux and temperature distributions, to follow a contracting slab, and to inhibit or permit convection appropriately. We then show that the radiative algorithm employed by Mejía and Boley et al. and the algorithm employed by Cai et al. pass these tests with reasonable accuracy. In addition, we discuss a new algorithm that couples flux-limited diffusion with vertical rays, we apply the test suite, and we discuss the results of evolving the Boley et al. disk with this new routine. Although the outcome is significantly different in detail with the new algorithm, we obtain the same qualitative answers. Our disk does not cool fast due to convection, and it is stable to fragmentation. We find an effective α ~ 10^-2. In addition, transport is dominated by low-order modes.
Numerically robust and efficient nonlocal electron transport in 2D DRACO simulations
NASA Astrophysics Data System (ADS)
Cao, Duc; Chenhall, Jeff; Moses, Greg; Delettrez, Jacques; Collins, Tim
2013-10-01
An improved implicit algorithm based on the Schurtz, Nicolai, and Busquet (SNB) algorithm for nonlocal electron transport is presented. Validation with direct-drive shock timing experiments and verification with the Goncharov nonlocal model in 1D LILAC simulations demonstrate the viability of this efficient algorithm for producing 2D Lagrangian radiation-hydrodynamics direct-drive simulations. Additionally, simulations provide strong incentive to further modify key parameters within the SNB theory, namely the "mean free path." An example 2D polar-drive simulation to study 2D effects of the nonlocal flux as well as mean-free-path modifications will also be presented. This research was supported by the University of Rochester Laboratory for Laser Energetics.
NASA Astrophysics Data System (ADS)
Badavi, Francis F.; Blattnig, Steve R.; Atwell, William; Nealy, John E.; Norman, Ryan B.
2011-02-01
A Langley Research Center (LaRC) developed deterministic suite of radiation transport codes describing the propagation of electrons, photons, protons and heavy ions in condensed media is used to simulate the exposure from the spectral distributions of these particles in the Jovian radiation environment. Based on measurements by the Galileo probe (1995-2003) heavy ion counter (HIC), the choice of trapped heavy ions is limited to carbon, oxygen and sulfur (C-O-S). The deterministic particle transport suite consists of a coupled electron-photon algorithm (CEPTRN) and a coupled light and heavy ion algorithm (HZETRN). The primary purpose for the development of the transport suite is to provide the spacecraft design community with a means to rapidly perform the numerous repetitive calculations essential for electron, photon, proton and heavy ion exposure assessment in a complex space structure. In this paper, the reference radiation environment of the Galilean satellite Europa is used as a representative boundary condition to show the capabilities of the transport suite. While the transport suite can directly access the output electron and proton spectra of the Jovian environment as generated by the Jet Propulsion Laboratory (JPL) Galileo Interim Radiation Electron (GIRE) model of 2003, for the sake of relevance to the upcoming Europa Jupiter System Mission (EJSM) the JPL-provided Europa mission fluence spectrum is used to produce the corresponding depth-dose curve in silicon behind a default aluminum shield of 100 mils (~0.7 g/cm²). The transport suite can also accept a ray-traced thickness file describing the geometry from a computer-aided design (CAD) package and calculate the total ionizing dose (TID) at a specific target point within the interior of the vehicle. In that regard, using a low-fidelity CAD model of the Galileo probe generated by the authors, the transport suite was verified against Monte Carlo (MC) simulation for orbits JOI-J35 of the Galileo probe extended mission. For the upcoming EJSM mission with an expected launch date of 2020, the transport suite is used to compute the depth-dose profile for the traditional aluminum-silicon standard shield-target combination, as well as to simulate the shielding response of a high charge number (Z) material such as tantalum (Ta). Finally, a shield optimization algorithm is discussed which can guide instrument designers and fabrication personnel in the choice of graded-Z shield selection and analysis.
PBMC: Pre-conditioned Backward Monte Carlo code for radiative transport in planetary atmospheres
NASA Astrophysics Data System (ADS)
García Muñoz, A.; Mills, F. P.
2017-08-01
PBMC (Pre-Conditioned Backward Monte Carlo) solves the vector Radiative Transport Equation (vRTE) and can be applied to planetary atmospheres irradiated from above. The code builds the solution by simulating the photon trajectories from the detector towards the radiation source, i.e. in the reverse order of the actual photon displacements. By accounting for polarization in the sampling of photon propagation directions and by pre-conditioning the scattering matrix with information from the scattering matrices of prior (in the BMC integration order) photon collisions, PBMC avoids the unstable and biased solutions of classical BMC algorithms for conservative, optically thick, strongly polarizing media such as Rayleigh atmospheres.
NASA Astrophysics Data System (ADS)
Whalen, Daniel; Norman, Michael L.
2006-02-01
Radiation hydrodynamical transport of ionization fronts (I-fronts) in the next generation of cosmological reionization simulations holds the promise of predicting UV escape fractions from first principles as well as investigating the role of photoionization in feedback processes and structure formation. We present a multistep integration scheme for radiative transfer and hydrodynamics for accurate propagation of I-fronts and ionized flows from a point source in cosmological simulations. The algorithm is a photon-conserving method that correctly tracks the position of I-fronts at much lower resolutions than nonconservative techniques. The method applies direct hierarchical updates to the ionic species, bypassing the need for the costly matrix solutions required by implicit methods while retaining sufficient accuracy to capture the true evolution of the fronts. We review the physics of ionization fronts in power-law density gradients, whose analytical solutions provide excellent validation tests for radiation coupling schemes. The advantages and potential drawbacks of direct and implicit schemes are also considered, with particular focus on problem time-stepping, which if not properly implemented can lead to morphologically plausible I-front behavior that nonetheless departs from theory. We also examine the effect of radiation pressure from very luminous central sources on the evolution of I-fronts and flows.
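The photon-conserving update that such schemes build on can be caricatured along a single ray: each cell consumes photons according to its neutral content, so the integrated front position tracks the photon budget even when the front itself is unresolved. The sketch below uses invented densities, source rate, and step sizes, and neglects recombinations for brevity:

```python
# Toy photon-conserving ionization-front update along one ray of cells.
import numpy as np

n_cells, dr = 200, 0.05           # cells along the ray, cell width
n_H = np.full(n_cells, 1.0)       # hydrogen number density
x = np.zeros(n_cells)             # ionized fraction per cell
dt, N_dot = 0.01, 1.0             # time step, photon emission rate

for step in range(500):
    photons = N_dot * dt          # photon budget for this step
    for i in range(n_cells):
        neutrals = n_H[i] * (1.0 - x[i]) * dr   # absorbers in the cell
        absorbed = min(photons, neutrals)
        x[i] += absorbed / (n_H[i] * dr)        # conservative update
        photons -= absorbed
        if photons <= 0.0:
            break                 # budget exhausted; front stalls here

print("I-front near cell", np.argmax(x < 0.5))
```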
A Dynamic/Anisotropic Low Earth Orbit (LEO) Ionizing Radiation Model
NASA Technical Reports Server (NTRS)
Badavi, Francis F.; West, Katie J.; Nealy, John E.; Wilson, John W.; Abrahms, Briana L.; Luetke, Nathan J.
2006-01-01
The International Space Station (ISS) provides the proving ground for future long duration human activities in space. Ionizing radiation measurements in ISS form the ideal tool for the experimental validation of ionizing radiation environmental models, nuclear transport code algorithms, and nuclear reaction cross sections. Indeed, prior measurements on the Space Transportation System (STS; Shuttle) have provided vital information impacting both the environmental models and the nuclear transport code development by requiring dynamic models of the Low Earth Orbit (LEO) environment. Previous studies using Computer Aided Design (CAD) models of the evolving ISS configurations with Thermoluminescent Detector (TLD) area monitors demonstrated that computational dosimetry requires environmental models with accurate non-isotropic as well as dynamic behavior, detailed information on rack loading, and an accurate 6 degree of freedom (DOF) description of ISS trajectory and orientation.
NASA Technical Reports Server (NTRS)
Kato, S.; Smith, G. L.; Barker, H. W.
2001-01-01
An algorithm is developed for the gamma-weighted discrete ordinate two-stream approximation that computes profiles of domain-averaged shortwave irradiances for horizontally inhomogeneous cloudy atmospheres. The algorithm assumes that frequency distributions of cloud optical depth at unresolved scales can be represented by a gamma distribution though it neglects net horizontal transport of radiation. This algorithm is an alternative to the one used in earlier studies that adopted the adding method. At present, only overcast cloudy layers are permitted.
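The gamma weighting itself amounts to averaging the two-stream solution over a gamma distribution of optical depth rather than evaluating it at the mean. The sketch below does this by quadrature with a simplified conservative-scattering two-stream reflectance; the functional form and all parameter values are illustrative assumptions, not the paper's scheme:

```python
# Gamma-weighted average of a toy two-stream reflectance R(tau), compared
# with evaluating R at the mean optical depth (Jensen-inequality effect).
import numpy as np
from scipy.stats import gamma
from scipy.integrate import quad

g = 0.85                          # asymmetry parameter
def reflectance(tau):
    t = (1.0 - g) * tau           # similarity-scaled optical depth
    return t / (1.0 + t)          # simplified conservative two-stream form

tau_mean, nu = 10.0, 2.0          # mean optical depth, gamma shape parameter
pdf = gamma(a=nu, scale=tau_mean / nu).pdf

avg, _ = quad(lambda t: pdf(t) * reflectance(t), 0.0, np.inf)
print(f"R(tau_mean) = {reflectance(tau_mean):.3f}  vs  <R> = {avg:.3f}")
```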
Inverse transport calculations in optical imaging with subspace optimization algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, Tian, E-mail: tding@math.utexas.edu; Ren, Kui, E-mail: ren@math.utexas.edu
2014-09-15
Inverse boundary value problems for the radiative transport equation play an important role in optics-based medical imaging techniques such as diffuse optical tomography (DOT) and fluorescence optical tomography (FOT). Despite the rapid progress in the mathematical theory and numerical computation of these inverse problems in recent years, developing robust and efficient reconstruction algorithms remains a challenging task and an active research topic. We propose here a robust reconstruction method that is based on subspace minimization techniques. The method splits the unknown transport solution (or a functional of it) into low-frequency and high-frequency components, and uses singular value decomposition to analytically recover part of the low-frequency information. Minimization is then applied to recover part of the high-frequency components of the unknowns. We present some numerical simulations with synthetic data to demonstrate the performance of the proposed algorithm.
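The subspace split can be illustrated on a generic linearized problem: an SVD of the forward operator yields an analytic recovery of the dominant low-frequency coefficients, leaving only the complementary subspace for iterative minimization. The operator and data below are random stand-ins, and a plain least-squares misfit replaces the paper's full reconstruction functional:

```python
# SVD subspace split: analytic low-frequency recovery plus iterative
# minimization over the remaining high-frequency subspace.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
m, n, k = 40, 100, 10                    # measurements, unknowns, cutoff rank

A = rng.normal(size=(m, n))              # stand-in linearized forward operator
x_true = rng.normal(size=n)
y = A @ x_true                           # synthetic noiseless data

U, s, Vt = np.linalg.svd(A, full_matrices=False)
c_low = (U[:, :k].T @ y) / s[:k]         # analytic top-k coefficients
x_low = Vt[:k].T @ c_low                 # low-frequency part of the unknown

def misfit(c_high):
    x = x_low + Vt[k:].T @ c_high        # complete with high-frequency part
    return np.sum((A @ x - y) ** 2)

res = minimize(misfit, np.zeros(m - k), method="L-BFGS-B")
x_rec = x_low + Vt[k:].T @ res.x
print("data misfit:", np.linalg.norm(A @ x_rec - y))
```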
Bolding, Simon R.; Cleveland, Mathew Allen; Morel, Jim E.
2016-10-21
In this paper, we have implemented a new high-order low-order (HOLO) algorithm for solving thermal radiative transfer problems. The low-order (LO) system is based on the spatial and angular moments of the transport equation and a linear-discontinuous finite-element spatial representation, producing equations similar to the standard S_2 equations. The LO solver is fully implicit in time and efficiently resolves the nonlinear temperature dependence at each time step. The high-order (HO) solver utilizes exponentially convergent Monte Carlo (ECMC) to give a globally accurate solution for the angular intensity to a fixed-source pure-absorber transport problem. This global solution is used to compute consistency terms, which require the HO and LO solutions to converge toward the same solution. The use of ECMC allows for the efficient reduction of statistical noise in the Monte Carlo solution, reducing inaccuracies introduced through the LO consistency terms. Finally, we compare results with an implicit Monte Carlo code for one-dimensional gray test problems and demonstrate the efficiency of ECMC over standard Monte Carlo in this HOLO algorithm.
Fluorescence molecular imaging based on the adjoint radiative transport equation
NASA Astrophysics Data System (ADS)
Asllanaj, Fatmir; Addoum, Ahmad; Rodolphe Roche, Jean
2018-07-01
A new reconstruction algorithm for fluorescence diffuse optical tomography of biological tissues is proposed. The radiative transport equation in the frequency domain is used to model light propagation. The adjoint method studied in this work provides an efficient way for solving the inverse problem. The methodology is applied to a 2D tissue-like phantom subjected to a collimated laser beam. Indocyanine Green is used as the fluorophore. Reconstructed images of the spatial fluorophore absorption distribution are assessed taking into account the residual fluorescence in the medium. We show that illuminating the tissue surface from a collimated centered direction near the inclusion gives better reconstruction quality. Two closely positioned inclusions can be accurately localized. Additionally, their fluorophore absorption coefficients can be quantified. However, the algorithm fails to reconstruct smaller or deeper inclusions. This is due to light attenuation in the medium. Reconstructions with noisy data are also achieved with reasonable accuracy.
Radiation anomaly detection algorithms for field-acquired gamma energy spectra
NASA Astrophysics Data System (ADS)
Mukhopadhyay, Sanjoy; Maurer, Richard; Wolff, Ron; Guss, Paul; Mitchell, Stephen
2015-08-01
The Remote Sensing Laboratory (RSL) is developing a tactical, networked radiation detection system that will be agile, reconfigurable, and capable of rapid threat assessment with a high degree of fidelity and certainty. Our design is driven by the needs of users such as law enforcement personnel who must make decisions by evaluating threat signatures in urban settings. The most efficient tool available to identify the nature of the threat object is real-time gamma spectroscopic analysis, as it is fast and has a very low probability of producing false positive alarm conditions. Urban radiological searches are inherently challenged by the rapid and large spatial variation of background gamma radiation, the presence of benign radioactive materials in the form of naturally occurring radioactive materials (NORM), and shielded and/or masked threat sources. Multiple spectral anomaly detection algorithms have been developed by national laboratories and commercial vendors. For example, the Gamma Detector Response and Analysis Software (GADRAS), a one-dimensional deterministic radiation transport code capable of calculating gamma-ray spectra using physics-based detector response functions, was developed at Sandia National Laboratories. The nuisance-rejection spectral comparison ratio anomaly detection algorithm (NSCRAD), developed at Pacific Northwest National Laboratory, uses spectral comparison ratios to detect deviations from benign medical and NORM radiation sources and can work in spite of a strong presence of NORM and/or medical sources. RSL has developed its own wavelet-based gamma energy spectral anomaly detection algorithm called WAVRAD. Test results and relative merits of these different algorithms will be discussed and demonstrated.
NASA Astrophysics Data System (ADS)
Cao, Duc; Moses, Gregory; Delettrez, Jacques
2015-08-01
An implicit, non-local thermal conduction algorithm based on the algorithm developed by Schurtz, Nicolai, and Busquet (SNB) [Schurtz et al., Phys. Plasmas 7, 4238 (2000)] for non-local electron transport is presented and has been implemented in the radiation-hydrodynamics code DRACO. To study the model's effect on DRACO's predictive capability, simulations of shot 60 303 from OMEGA are completed using the iSNB model, and the computed shock speed vs. time is compared to experiment. Temperature outputs from the iSNB model are compared with the non-local transport model of Goncharov et al. [Phys. Plasmas 13, 012702 (2006)]. Effects on adiabat are also examined in a polar drive surrogate simulation. Results show that the iSNB model is not only capable of flux-limitation but also preheat prediction while remaining numerically robust and sacrificing little computational speed. Additionally, the results provide strong incentive to further modify key parameters within the SNB theory, namely, the newly introduced non-local mean free path. This research was supported by the Laboratory for Laser Energetics of the University of Rochester.
Modeling Urban Scenarios & Experiments: Fort Indiantown Gap Data Collections Summary and Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Archer, Daniel E.; Bandstra, Mark S.; Davidson, Gregory G.
This report summarizes experimental radiation detector, contextual sensor, weather, and global positioning system (GPS) data collected to inform and validate a comprehensive, operational radiation transport modeling framework to evaluate radiation detector system and algorithm performance. This framework will be used to study the influence of systematic effects (such as geometry, background activity, background variability, environmental shielding, etc.) on detector responses and algorithm performance using synthetic time series data. This work consists of performing data collection campaigns at a canonical, controlled environment for complete radiological characterization to help construct and benchmark a high-fidelity model with quantified system geometries, detector response functions, and source terms for background and threat objects. This data also provides an archival, benchmark dataset that can be used by the radiation detection community. The data reported here spans four data collection campaigns conducted between May 2015 and September 2016.
Rare Event Simulation in Radiation Transport
NASA Astrophysics Data System (ADS)
Kollman, Craig
This dissertation studies methods for estimating extremely small probabilities by Monte Carlo simulation. Problems in radiation transport typically involve estimating very rare events or the expected value of a random variable which is with overwhelming probability equal to zero. These problems often have high dimensional state spaces and irregular geometries so that analytic solutions are not possible. Monte Carlo simulation must be used to estimate the radiation dosage being transported to a particular location. If the area is well shielded the probability of any one particular particle getting through is very small. Because of the large number of particles involved, even a tiny fraction penetrating the shield may represent an unacceptable level of radiation. It therefore becomes critical to be able to accurately estimate this extremely small probability. Importance sampling is a well known technique for improving the efficiency of rare event calculations. Here, a new set of probabilities is used in the simulation runs. The results are multiplied by the likelihood ratio between the true and simulated probabilities so as to keep our estimator unbiased. The variance of the resulting estimator is very sensitive to which new set of transition probabilities are chosen. It is shown that a zero variance estimator does exist, but that its computation requires exact knowledge of the solution. A simple random walk with an associated killing model for the scatter of neutrons is introduced. Large deviation results for optimal importance sampling in random walks are extended to the case where killing is present. An adaptive "learning" algorithm for implementing importance sampling is given for more general Markov chain models of neutron scatter. For finite state spaces this algorithm is shown to give, with probability one, a sequence of estimates converging exponentially fast to the true solution. In the final chapter, an attempt to generalize this algorithm to a continuous state space is made. This involves partitioning the space into a finite number of cells. There is a tradeoff between additional computation per iteration and variance reduction per iteration that arises in determining the optimal grid size. All versions of this algorithm can be thought of as a compromise between deterministic and Monte Carlo methods, capturing advantages of both techniques.
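The core importance-sampling mechanics discussed in this dissertation (simulate under biased probabilities, score the likelihood ratio to keep the estimator unbiased) can be demonstrated on a gambler's-ruin rare event with a known answer. All parameter values below are illustrative:

```python
# Importance sampling for a rare event: a 1D random walk starting at 1 must
# reach level B before hitting 0, with true up-probability p < 1/2. Paths
# are simulated with biased up-probability q, and the likelihood ratio L is
# accumulated step by step so the estimator stays unbiased.
import numpy as np

rng = np.random.default_rng(6)
p, q, B = 0.3, 0.7, 15          # true and sampling up-probabilities, barrier

def estimate(n_paths=20_000):
    total = 0.0
    for _ in range(n_paths):
        x, L = 1, 1.0
        while 0 < x < B:
            if rng.random() < q:
                x += 1; L *= p / q                  # up-step weight
            else:
                x -= 1; L *= (1 - p) / (1 - q)      # down-step weight
        if x == B:
            total += L          # score the likelihood ratio on success
    return total / n_paths

r = (1 - p) / p                 # exact gambler's-ruin answer for comparison
exact = (r - 1) / (r ** B - 1)
print(f"IS estimate {estimate():.3e}  vs  exact {exact:.3e}")
```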
Tree-based solvers for adaptive mesh refinement code FLASH - I: gravity and optical depths
NASA Astrophysics Data System (ADS)
Wünsch, R.; Walch, S.; Dinnbier, F.; Whitworth, A.
2018-04-01
We describe an OctTree algorithm for the MPI parallel, adaptive mesh refinement code FLASH, which can be used to calculate the gas self-gravity, and also the angle-averaged local optical depth, for treating ambient diffuse radiation. The algorithm communicates to the different processors only those parts of the tree that are needed to perform the tree-walk locally. The advantage of this approach is a relatively low memory requirement, important in particular for the optical depth calculation, which needs to process information from many different directions. This feature also enables a general tree-based radiation transport algorithm that will be described in a subsequent paper, and delivers excellent scaling up to at least 1500 cores. Boundary conditions for gravity can be either isolated or periodic, and they can be specified in each direction independently, using a newly developed generalization of the Ewald method. The gravity calculation can be accelerated with the adaptive block update technique by partially re-using the solution from the previous time-step. Comparison with the FLASH internal multigrid gravity solver shows that tree-based methods provide a competitive alternative, particularly for problems with isolated or mixed boundary conditions. We evaluate several multipole acceptance criteria (MACs) and identify a relatively simple approximate partial error MAC which provides high accuracy at low computational cost. The optical depth estimates are found to agree very well with those of the RADMC-3D radiation transport code, with the tree-solver being much faster. Our algorithm is available in the standard release of the FLASH code in version 4.0 and later.
NASA Astrophysics Data System (ADS)
Sauer, D. N.; Vázquez-Navarro, M.; Gasteiger, J.; Chouza, F.; Weinzierl, B.
2016-12-01
Mineral dust is the major species of airborne particulate matter by mass in the atmosphere. Each year an estimated 200-3000 Tg of dust are emitted from the North African desert and arid regions alone. A large fraction of the dust is lifted into the free troposphere and transported in extended dust layers westward over the Atlantic Ocean into the Caribbean Sea. Especially over the dark surface of the ocean, those dust layers exert a significant effect on the atmospheric radiative balance through aerosol-radiation interactions. During the Saharan Aerosol Long-range Transport and Aerosol-Cloud-Interaction Experiment (SALTRACE) in summer 2013, airborne in-situ aerosol measurements were performed on both sides of the Atlantic Ocean, near the African coast and in the Caribbean. In this study we use data on aerosol microphysical properties acquired between Cabo Verde and Senegal to derive the aerosol optical properties and the resulting radiative forcing using the radiative transfer package libRadtran. We compare the results to values retrieved from MSG/SEVIRI data using the RRUMS algorithm. The RRUMS algorithm can derive shortwave and longwave top-of-atmosphere outgoing fluxes using only information from the narrow-band MSG/SEVIRI channels. A specific calibration based on collocated Terra/CERES measurements ensures a correct retrieval of the upwelling flux from the dust-covered pixels. The comparison of radiative forcings based on in-situ data to satellite-retrieved values enables us to extend the radiative forcing estimates from small-scale in-situ measurements to large-scale satellite coverage over the Atlantic Ocean.
NASA Technical Reports Server (NTRS)
Pallmann, A. J.; Dannevik, W. P.; Frisella, S. P.
1973-01-01
Radiative-conductive heat transfer has been investigated for the ground-atmosphere system of the planet Mars. The basic goal was the quantitative determination of time dependent vertical distributions of temperature and static stability for Southern-Hemispheric summer season and middle and polar latitudes, for both dust-free and dust-laden atmospheric conditions. The numerical algorithm which models at high spatial and temporal resolution the thermal energy transports in the dual ground-atmosphere system, is based on solution of the applicable heating rate equation, including radiative and molecular-conductive heat transport terms. The two subsystems are coupled by an internal thermal boundary condition applied at the ground-atmosphere interface level. Initial data and input parameters are based on Mariner 4, 6, 7, and 9 measurements and the JPL Mars Scientific Model. Numerical experiments were run for dust-free and dust-laden conditions in the midlatitudes, as well as ice-free and ice-covered polar regions. Representative results and their interpretation are presented. Finally, the theoretical framework of the generalized problem with nonconservative Mie scattering and explicit thermal-convective heat transfer is formulated, and applicable solution algorithms are outlined.
NASA Astrophysics Data System (ADS)
Lord, Jesse W.; Boley, A. C.; Durisen, R. H.
2006-12-01
We present a comparison between two three-dimensional radiative hydrodynamics simulations of a gravitationally unstable 0.07 Msun protoplanetary disk around a 0.5 Msun star. The first simulation is the radiatively cooled disk described in Boley et al. (2006, ApJ, 651). This simulation employed an algorithm that uses 3D flux-limited diffusion wherever the vertical Rosseland optical depth is greater than 2/3, which defines the optically thick region. The optically thin atmosphere of the disk, which cools according to its emissivity, is coupled to the optically thick region through an Eddington-like boundary condition. The second simulation employed an algorithm that uses a combination of solving the radiative transfer equation along rays in the z direction and flux limited diffusion in the r and phi directions on a cylindrical grid. We compare the following characteristics of the disk simulations: the mass transport and torques induced by gravitational instabilities, the effective temperature profiles of the disks, the gravitational and Reynolds stresses measured in the disk and those expected in an alpha-disk, and the amplitudes of the Fourier modes. This work has been supported by the National Science Foundation through grant AST-0452975 (astronomy REU program to Indiana University).
NASA Technical Reports Server (NTRS)
Ganapol, Barry D.; Townsend, Lawrence W.; Wilson, John W.
1989-01-01
Nontrivial benchmark solutions are developed for the galactic ion transport (GIT) equations in the straight-ahead approximation. These equations are used to predict potential radiation hazards in the upper atmosphere and in space. Two levels of difficulty are considered: (1) energy independent, and (2) spatially independent. The analysis emphasizes analytical methods never before applied to the GIT equations. Most of the representations derived have been numerically implemented and compared to more approximate calculations. Accurate ion fluxes are obtained (3 to 5 digits) for nontrivial sources. For monoenergetic beams, both accurate doses and fluxes are found. The benchmarks presented are useful in assessing the accuracy of transport algorithms designed to accommodate more complex radiation protection problems. In addition, these solutions can provide fast and accurate assessments of relatively simple shield configurations.
MODA: a new algorithm to compute optical depths in multidimensional hydrodynamic simulations
NASA Astrophysics Data System (ADS)
Perego, Albino; Gafton, Emanuel; Cabezón, Rubén; Rosswog, Stephan; Liebendörfer, Matthias
2014-08-01
Aims: We introduce the multidimensional optical depth algorithm (MODA) for the calculation of optical depths in approximate multidimensional radiative transport schemes, equally applicable to neutrinos and photons. Motivated by (but not limited to) neutrino transport in three-dimensional simulations of core-collapse supernovae and neutron star mergers, our method makes no assumptions about the geometry of the matter distribution, apart from expecting optically transparent boundaries. Methods: Based on local information about opacities, the algorithm figures out an escape route that tends to minimize the optical depth without assuming any predefined paths for radiation. Its adaptivity makes it suitable for a variety of astrophysical settings with complicated geometry (e.g., core-collapse supernovae, compact binary mergers, tidal disruptions, star formation, etc.). We implement the MODA algorithm into both a Eulerian hydrodynamics code with a fixed, uniform grid and into an SPH code where we use a tree structure that is otherwise used for searching neighbors and calculating gravity. Results: In a series of numerical experiments, we compare the MODA results with analytically known solutions. We also use snapshots from actual 3D simulations and compare the results of MODA with those obtained with other methods, such as the global and local ray-by-ray method. It turns out that MODA achieves excellent accuracy at a moderate computational cost. In an appendix, we also discuss implementation details and parallelization strategies.
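To convey the escape-route idea, the sketch below computes, for one cell in a random 2D opacity field, the path to the transparent boundary that minimizes accumulated optical depth. A Dijkstra shortest-path search over the cell graph is used as a simple, well-defined stand-in for MODA's cheaper local heuristic; like MODA, it assumes no predefined ray directions, and the opacity field is a random placeholder:

```python
# Minimal-optical-depth escape route on a 2D grid via Dijkstra search.
import heapq
import numpy as np

rng = np.random.default_rng(7)
n = 50
kappa = rng.random((n, n)) + 0.01      # local opacity (absorption per length)
dx = 1.0

def escape_tau(start):
    """Minimal optical depth from `start` to any (transparent) boundary cell."""
    dist = np.full((n, n), np.inf)
    dist[start] = kappa[start] * dx
    heap = [(dist[start], start)]
    while heap:
        tau, (i, j) = heapq.heappop(heap)
        if i in (0, n - 1) or j in (0, n - 1):
            return tau                 # reached the transparent boundary
        if tau > dist[i, j]:
            continue                   # stale heap entry
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            cand = tau + kappa[ni, nj] * dx
            if cand < dist[ni, nj]:
                dist[ni, nj] = cand
                heapq.heappush(heap, (cand, (ni, nj)))
    return np.inf

print(f"minimal escape optical depth from the centre: {escape_tau((25, 25)):.2f}")
```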
3D-radiative transfer in terrestrial atmosphere: An efficient parallel numerical procedure
NASA Astrophysics Data System (ADS)
Bass, L. P.; Germogenova, T. A.; Nikolaeva, O. V.; Kokhanovsky, A. A.; Kuznetsov, V. S.
2003-04-01
Light propagation and scattering in the terrestrial atmosphere is usually studied in the framework of 1D radiative transfer theory [1]. However, in reality particles (e.g., ice crystals, solid and liquid aerosols, cloud droplets) are randomly distributed in 3D space. In particular, their concentrations vary both in the vertical and horizontal directions. Therefore, 3D effects influence modern cloud and aerosol retrieval procedures, which are currently based on 1D radiative transfer theory. It should be pointed out that the standard radiative transfer equation allows one to study these more complex situations as well [2]. In recent years the parallel version of the 2D and 3D RADUGA code has been developed. This version is successfully used in gamma and neutron transport problems [3]. Applications of this code to radiative transfer problems in the atmosphere are described in [4]. The capabilities of the RADUGA code are presented in [5]. The RADUGA code system is a universal solver of radiative transfer problems for complicated models, including 2D and 3D aerosol and cloud fields with arbitrary scattering anisotropy, light absorption, inhomogeneous underlying surface and topography. Both delta-type and distributed light sources can be accounted for in the framework of the algorithm developed. The accurate numerical procedure is based on the new discrete ordinate SWDD scheme [6]. The algorithm is specifically designed for parallel supercomputers. The version RADUGA 5.1(P) can run on MBC1000M [7] (768 processors with 10 Gb of hard disc memory per processor); the peak performance is 1 Tflops. The corresponding scalar version RADUGA 5.1 runs on a PC. As a first example of application of the algorithm developed, we have studied the shadowing effects of clouds on the neighboring cloudless atmosphere, depending on the cloud optical thickness, surface albedo, and illumination conditions. This is of importance for the development of modern satellite aerosol retrieval algorithms. [1] Sobolev, V. V., 1972: Light Scattering in Planetary Atmospheres, M.: Nauka. [2] Evans, K. F., 1998: The spherical harmonic discrete ordinate method for three-dimensional atmospheric radiative transfer, J. Atmos. Sci., 55, 429-446. [3] Bass, L. P., Germogenova, T. A., Kuznetsov, V. S., Nikolaeva, O. V.: RADUGA 5.1 and RADUGA 5.1(P) codes for stationary transport equation solution in 2D and 3D geometries on single- and multiprocessor computers. Report at the seminar "Algorithms and Codes for Neutron-Physical Calculations of Nuclear Reactors" (Neutronica 2001), Obninsk, Russia, 30 October - 2 November 2001. [4] Germogenova, T. A., Bass, L. P., Kuznetsov, V. S., Nikolaeva, O. V.: Mathematical modeling on parallel computers of solar and laser radiation transport in a 3D atmosphere. Report at the International Symposium of CIS countries "Atmospheric Radiation", 18-21 June 2002, St. Petersburg, Russia, p. 15-16. [5] Bass, L. P., Germogenova, T. A., Nikolaeva, O. V., Kuznetsov, V. S.: Radiative Transfer Universal 2D-3D Code RADUGA 5.1(P) for Multiprocessor Computers. Abstract; poster report at this meeting. [6] Bass, L. P., Nikolaeva, O. V.: Correct calculation of angular flux distribution in strongly heterogeneous media and voids. Proc. of the Joint International Conference on Mathematical Methods and Supercomputing for Nuclear Applications, Saratoga Springs, New York, October 5-9, 1997, p. 995-1004. [7] http://www/jscc.ru
NASA Astrophysics Data System (ADS)
García Muñoz, A.; Mills, F. P.
2015-01-01
Context. The interpretation of polarised radiation emerging from a planetary atmosphere must rely on solutions to the vector radiative transport equation (VRTE). Monte Carlo integration of the VRTE is a valuable approach for its flexible treatment of complex viewing and/or illumination geometries, and it can intuitively incorporate elaborate physics. Aims: We present a novel pre-conditioned backward Monte Carlo (PBMC) algorithm for solving the VRTE and apply it to planetary atmospheres irradiated from above. As in classical BMC methods, our PBMC algorithm builds the solution by simulating the photon trajectories from the detector towards the radiation source, i.e. in the reverse order of the actual photon displacements. Methods: We show that neglecting polarisation in the sampling of photon propagation directions in classical BMC algorithms leads to unstable and biased solutions for conservative, optically-thick, strongly polarising media such as Rayleigh atmospheres. The numerical difficulty is avoided by pre-conditioning the scattering matrix with information from the scattering matrices of prior (in the BMC integration order) photon collisions. Pre-conditioning introduces a sense of history in the photon polarisation states through the simulated trajectories. Results: The PBMC algorithm is robust, and its accuracy is extensively demonstrated via comparisons with examples drawn from the literature for scattering in diverse media. Since the convergence rate for MC integration is independent of the dimension of the integral, the scheme is a valuable option for estimating the disk-integrated signal of stellar radiation reflected from planets. Such a tool is relevant for the prospective investigation of exoplanetary phase curves. We lay out two frameworks for disk integration and, as an application, explore the impact of atmospheric stratification on planetary phase curves for large star-planet-observer phase angles. By construction, backward integration provides better control than forward integration over the planet region contributing to the solution, which is a clear advantage when estimating the disk-integrated signal at moderate and large phase angles. A one-slab, plane-parallel version of the PBMC algorithm is available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/573/A72
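For readers new to backward Monte Carlo, the sketch below shows the classical scalar (unpolarised) BMC scheme the paper builds on: reverse trajectories start at the detector and, at every collision, the attenuated solar beam is scored by local estimation. The polarisation tracking and scattering-matrix pre-conditioning that are the paper's actual contribution are deliberately omitted, and the normalisation and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

TAU_TOT = 1.0                                # slab optical thickness
OMEGA = 1.0                                  # single-scattering albedo (conservative)
MU0 = 0.8                                    # cosine of the solar zenith angle
SUN = np.array([np.sqrt(1.0 - MU0**2), 0.0, MU0])  # unit vector towards the sun

def rayleigh_phase(cos_t):
    """Rayleigh phase function, normalised to 1 over 4*pi steradians."""
    return 3.0 / (16.0 * np.pi) * (1.0 + cos_t**2)

def sample_rayleigh_cos():
    """Rejection-sample a scattering cosine from the Rayleigh phase function."""
    while True:
        mu = 2.0 * rng.random() - 1.0
        if rng.random() <= 0.5 * (1.0 + mu * mu):
            return mu

def rotate(d, cos_t, phi):
    """Deflect unit vector d by angle acos(cos_t) with azimuth phi."""
    sin_t = np.sqrt(max(0.0, 1.0 - cos_t**2))
    u = np.cross(d, [0.0, 0.0, 1.0]) if abs(d[2]) < 0.999 else np.cross(d, [1.0, 0.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(d, u)
    return cos_t * d + sin_t * (np.cos(phi) * u + np.sin(phi) * v)

def nadir_radiance(n_photons=50_000, max_orders=50):
    """Backward MC estimate (relative units) of the nadir radiance reflected
    by the slab, scoring the attenuated solar beam at every collision."""
    total = 0.0
    for _ in range(n_photons):
        z, d, w = 0.0, np.array([0.0, 0.0, -1.0]), 1.0  # start at the top, downward
        for _ in range(max_orders):
            z += d[2] * np.log(rng.random())            # optical depth to next collision
            if not 0.0 <= z <= TAU_TOT:
                break                                   # escaped the slab
            total += w * OMEGA * rayleigh_phase(d @ SUN) * np.exp(-z / MU0)
            d = rotate(d, sample_rayleigh_cos(), 2.0 * np.pi * rng.random())
            w *= OMEGA
    return total / n_photons

print(nadir_radiance())
```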
Multigroup Radiation-Hydrodynamics with a High-Order, Low-Order Method
Wollaber, Allan Benton; Park, HyeongKae; Lowrie, Robert Byron; ...
2016-12-09
Recent efforts at Los Alamos National Laboratory to develop a moment-based, scale-bridging [or high-order (HO)-low-order (LO)] algorithm for solving large varieties of transport (kinetic) systems have shown promising results. A part of our ongoing effort is incorporating this methodology into the framework of the Eulerian Applications Project to achieve algorithmic acceleration of radiation-hydrodynamics simulations in production software. By starting from the thermal radiative transfer equations with a simple material-motion correction, we derive a discretely consistent energy balance equation (LO equation). We demonstrate that the corresponding LO system for the Monte Carlo HO solver is closely related to the original LO system without material-motion corrections. We test the implementation on a radiative shock problem and show consistency between the energy densities and temperatures in the HO and LO solutions, as well as agreement with the semianalytic solution. We also test the approach on a more challenging two-dimensional problem and demonstrate accuracy enhancements and algorithmic speedups. This paper extends a recent conference paper by including multigroup effects.
Mapping Global Ocean Surface Albedo from Satellite Observations: Models, Algorithms, and Datasets
NASA Astrophysics Data System (ADS)
Li, X.; Fan, X.; Yan, H.; Li, A.; Wang, M.; Qu, Y.
2018-04-01
Ocean surface albedo (OSA) is one of the important parameters in the surface radiation budget (SRB) and is usually considered a controlling factor of the heat exchange between the atmosphere and the ocean. The temporal and spatial dynamics of OSA determine the energy absorption of the upper ocean and influence oceanic currents, atmospheric circulation, and the transport of material and energy in the hydrosphere. Various parameterizations and models have therefore been developed to describe the dynamics of OSA. However, it has been demonstrated that the currently available OSA datasets cannot fulfill the requirements of global climate change studies. In this study, we present a literature review on mapping global OSA from satellite observations. The models (parameterizations, the coupled ocean-atmosphere radiative transfer (COART) model, and the three-component ocean water albedo (TCOWA) model), algorithms (the estimation method based on reanalysis data, and the direct-estimation algorithm), and datasets (the cloud, albedo and radiation (CLARA) surface albedo product, the dataset derived with the TCOWA model, and the global land surface satellite (GLASS) phase-2 surface broadband albedo product) of OSA are discussed separately.
The effect of dopants on laser imprint mitigation
NASA Astrophysics Data System (ADS)
Phillips, Lee; Gardner, John H.; Bodner, Stephen E.; Colombant, Denis; Dahlburg, Jill
1999-11-01
An intact implosion of a pellet for direct-drive ICF requires that the perturbations imprinted by the laser be kept below some threshold. We report on simulations of targets that incorporate very small concentrations of a high-Z dopant in the ablator to increase the electron density in the ablating plasma, causing the laser to be absorbed far enough from the solid ablator to achieve a substantial degree of thermal smoothing. These calculations were performed using NRL's FAST radiation hydrodynamics code (J.H. Gardner, A.J. Schmitt, et al., Phys. Plasmas 5, 1935 (1998)), incorporating the flux-corrected transport algorithm and opacities generated by an STA code, with non-LTE radiation transport based on the Busquet method.
Implementation of tetrahedral-mesh geometry in Monte Carlo radiation transport code PHITS
NASA Astrophysics Data System (ADS)
Furuta, Takuya; Sato, Tatsuhiko; Han, Min Cheol; Yeom, Yeon Soo; Kim, Chan Hyeong; Brown, Justin L.; Bolch, Wesley E.
2017-06-01
A new function to treat tetrahedral-mesh geometry was implemented in the Particle and Heavy Ion Transport code System (PHITS). To accelerate the computational speed of the transport process, an original algorithm was introduced that initially prepares decomposition maps for the container box of the tetrahedral-mesh geometry. The computational performance was tested by conducting radiation transport simulations of 100 MeV protons and 1 MeV photons in a water phantom represented by a tetrahedral mesh. The simulation was repeated with varying numbers of mesh elements, and the required computational times were compared with those of the conventional voxel representation. Our results show that the computational costs for each boundary crossing of a region mesh are essentially equivalent for the two representations. This study suggests that the tetrahedral-mesh representation offers not only a flexible description of the transport geometry but also improved computational efficiency for radiation transport. Because tetrahedrons adapt in both size and shape, dosimetrically equivalent objects can be represented by far fewer tetrahedrons than voxels. Our study additionally included dosimetric calculations using a computational human phantom, for which the adoption of a tetrahedral mesh over the traditional voxel geometry yielded a significant acceleration of the computational speed, by about a factor of 4.
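The decomposition-map idea can be pictured as a coarse uniform grid over the mesh bounding box, with each grid cell listing the tetrahedra that overlap it, so that locating a particle requires testing only a handful of candidates. A simplified sketch follows (names and the cell resolution are illustrative; the actual PHITS algorithm is more elaborate):

```python
import numpy as np
from collections import defaultdict

def build_decomposition_map(nodes, tets, n_cells=16):
    """Map each cell of a coarse uniform grid over the mesh bounding box to
    the indices of the tetrahedra whose bounding boxes overlap that cell."""
    lo, hi = nodes.min(axis=0), nodes.max(axis=0)
    size = (hi - lo) / n_cells
    cell_map = defaultdict(list)
    for t, tet in enumerate(tets):
        verts = nodes[tet]
        c0 = np.clip(((verts.min(axis=0) - lo) / size).astype(int), 0, n_cells - 1)
        c1 = np.clip(((verts.max(axis=0) - lo) / size).astype(int), 0, n_cells - 1)
        for ix in range(c0[0], c1[0] + 1):
            for iy in range(c0[1], c1[1] + 1):
                for iz in range(c0[2], c1[2] + 1):
                    cell_map[(ix, iy, iz)].append(t)
    return lo, size, cell_map

def candidate_tets(point, lo, size, cell_map, n_cells=16):
    """Return the few tetrahedra that could contain `point`: one dictionary
    lookup instead of a scan over the whole mesh."""
    c = np.clip(((point - lo) / size).astype(int), 0, n_cells - 1)
    return cell_map.get(tuple(c), [])

# Toy usage with random points and stand-in connectivity
rng = np.random.default_rng(0)
nodes = rng.random((200, 3))
tets = rng.integers(0, 200, size=(500, 4))
lo, size, cmap = build_decomposition_map(nodes, tets)
print(len(candidate_tets(np.array([0.5, 0.5, 0.5]), lo, size, cmap)))
```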
A non-stochastic iterative computational method to model light propagation in turbid media
NASA Astrophysics Data System (ADS)
McIntyre, Thomas J.; Zemp, Roger J.
2015-03-01
Monte Carlo models are widely used to model light transport in turbid media; however, their results implicitly contain stochastic variations. These fluctuations are not ideal, especially for inverse problems, where errors in the Jacobian matrix can lead to large uncertainties upon matrix inversion. Yet Monte Carlo approaches are more computationally favorable than solving the full radiative transport equation. Here, a non-stochastic computational method of estimating fluence distributions in turbid media is proposed, called the Non-Stochastic Propagation by Iterative Radiance Evaluation method (NSPIRE). Rather than using stochastic means to determine a random walk for each photon packet, the propagation of light from any element to all other elements in a grid is modelled simultaneously. For locally homogeneous anisotropic turbid media, the matrices used to represent scattering and projection are shown to be block Toeplitz, which leads to computational simplifications via convolution operators. To evaluate the accuracy of the algorithm, 2D simulations were performed and compared against Monte Carlo models for the cases of an isotropic point source and a pencil beam incident on a semi-infinite turbid medium. The model was shown to have a mean percent error of less than 2%. The algorithm represents a new paradigm in radiative transport modelling and may offer a non-stochastic alternative to modeling light transport in anisotropic scattering media for applications where the diffusion approximation is insufficient.
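The block-Toeplitz observation is the key computational point: for a homogeneous medium the element-to-element propagation operator depends only on the displacement between elements, so applying it is a convolution that FFTs evaluate cheaply. The sketch below demonstrates this with an isotropic-scattering 2D toy version of such an iterative scheme; the anisotropic, angularly resolved operators of NSPIRE itself are more involved, and all symbols are illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve

def propagation_kernel(n, mu_t, h):
    """Attenuated 2-D point kernel exp(-mu_t r) / (2 pi r): the
    translation-invariant (Toeplitz) part of the transport operator."""
    x = (np.arange(2 * n - 1) - (n - 1)) * h
    xx, yy = np.meshgrid(x, x, indexing="ij")
    r = np.hypot(xx, yy)
    k = np.exp(-mu_t * r) / (2.0 * np.pi * np.maximum(r, 0.5 * h))  # regularise r = 0
    return k * h * h                          # include the cell area element

def iterate_fluence(source, mu_t, mu_s, h, n_iter=30):
    """Deterministic fixed point  phi = K (source + mu_s * phi): every element
    illuminates every other element in one FFT convolution per iteration."""
    k = propagation_kernel(source.shape[0], mu_t, h)
    phi = np.zeros_like(source)
    for _ in range(n_iter):
        phi = fftconvolve(source + mu_s * phi, k, mode="same")
    return phi

# Unit point source in a homogeneous medium (all values illustrative)
n, h = 64, 0.05
src = np.zeros((n, n)); src[n // 2, n // 2] = 1.0 / (h * h)
phi = iterate_fluence(src, mu_t=2.0, mu_s=1.0, h=h)
print(phi[n // 2, n // 2 + 10])               # fluence 10 cells from the source
```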
Rare event simulation in radiation transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kollman, Craig
1993-10-01
This dissertation studies methods for estimating extremely small probabilities by Monte Carlo simulation. Problems in radiation transport typically involve estimating very rare events or the expected value of a random variable which is, with overwhelming probability, equal to zero. These problems often have high dimensional state spaces and irregular geometries, so that analytic solutions are not possible. Monte Carlo simulation must be used to estimate the radiation dosage being transported to a particular location. If the area is well shielded, the probability of any one particular particle getting through is very small. Because of the large number of particles involved, even a tiny fraction penetrating the shield may represent an unacceptable level of radiation. It therefore becomes critical to be able to accurately estimate this extremely small probability. Importance sampling is a well known technique for improving the efficiency of rare event calculations. Here, a new set of probabilities is used in the simulation runs. The results are multiplied by the likelihood ratio between the true and simulated probabilities so as to keep the estimator unbiased. The variance of the resulting estimator is very sensitive to which new set of transition probabilities is chosen. It is shown that a zero variance estimator does exist, but that its computation requires exact knowledge of the solution. A simple random walk with an associated killing model for the scatter of neutrons is introduced. Large deviation results for optimal importance sampling in random walks are extended to the case where killing is present. An adaptive "learning" algorithm for implementing importance sampling is given for more general Markov chain models of neutron scatter. For finite state spaces this algorithm is shown to give, with probability one, a sequence of estimates converging exponentially fast to the true solution.
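A minimal numeric illustration of the likelihood-ratio idea, using a toy forward random walk with killing rather than the dissertation's models: steps are drawn from a stretched exponential and absorption is replaced by a weight factor (survival biasing), with the likelihood ratio of each sampled step keeping the estimator unbiased. All parameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

LAM = 1.0       # true collision rate (mean free path = 1)
P_ABS = 0.5     # absorption probability at each collision
DEPTH = 20.0    # shield thickness in mean free paths

def penetration_probability(n_hist, lam_b):
    """Estimate P(penetrate) with path-length stretching (steps drawn from
    Exp(lam_b) instead of Exp(LAM)) plus survival biasing (absorption is
    carried in the weight rather than sampled)."""
    total = 0.0
    for _ in range(n_hist):
        x, w = 0.0, 1.0
        while x < DEPTH:
            step = rng.exponential(1.0 / lam_b)
            # likelihood ratio: true step density over biased step density
            w *= (LAM / lam_b) * np.exp((lam_b - LAM) * step)
            x += step
            if x < DEPTH:
                w *= 1.0 - P_ABS        # collision inside the shield, survived
        total += w
    return total / n_hist

# Exact answer for this toy model: exp(-LAM * DEPTH * P_ABS) ~ 4.5e-5.
# Analog sampling (lam_b = LAM with real absorption) would almost never score.
print(penetration_probability(20_000, lam_b=0.5))
```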
Progress Towards a Rad-Hydro Code for Modern Computing Architectures LA-UR-10-02825
NASA Astrophysics Data System (ADS)
Wohlbier, J. G.; Lowrie, R. B.; Bergen, B.; Calef, M.
2010-11-01
We are entering an era of high performance computing where data movement, rather than the speed of floating-point operations per processor, is the overwhelming bottleneck to scalable performance. All multi-core hardware paradigms, whether heterogeneous or homogeneous, be it the Cell processor, GPGPU, or multi-core x86, share this common trait. In multi-physics applications such as inertial confinement fusion or astrophysics, one may be solving multi-material hydrodynamics with tabular equation of state data lookups, radiation transport, nuclear reactions, and charged particle transport in a single time cycle. The algorithms are intensely data dependent (e.g., EOS, opacity, and nuclear data lookups), and multi-core hardware memory restrictions are forcing code developers to rethink code and algorithm design. For the past two years LANL has been funding a small effort, referred to as Multi-Physics on Multi-Core, to explore ideas for code design as pertaining to inertial confinement fusion and astrophysics applications. The near term goals of this project are to have a multi-material radiation hydrodynamics capability, with tabular equation of state lookups, on Cartesian and curvilinear block structured meshes. In the longer term we plan to add fully implicit multi-group radiation diffusion, material heat conduction, and block structured AMR. We will report on our progress to date.
Multilevel acceleration of scattering-source iterations with application to electron transport
Drumm, Clif; Fan, Wesley
2017-08-18
Acceleration/preconditioning strategies available in the SCEPTRE radiation transport code are described. A flexible transport synthetic acceleration (TSA) algorithm that uses a low-order discrete-ordinates (SN) or spherical-harmonics (PN) solve to accelerate convergence of a high-order SN source-iteration (SI) solve is described. Convergence of the low-order solves can be further accelerated by applying off-the-shelf incomplete-factorization or algebraic-multigrid methods. Also available is an algorithm that uses a generalized minimum residual (GMRES) iterative method rather than SI for convergence, using a parallel sweep-based solver to build up a Krylov subspace. TSA has been applied as a preconditioner to accelerate the convergence of the GMRES iterations. The methods are applied to several problems involving electron transport and problems with artificial cross sections with large scattering ratios. These methods were compared and evaluated by considering material discontinuities and scattering anisotropy. Observed accelerations are highly problem dependent, but speedup factors around 10 have been observed in typical applications.
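The GMRES variant can be sketched in a few lines: the sweep is wrapped as a linear operator and handed to a Krylov solver, which typically needs far fewer operator applications than source iteration when the scattering ratio is close to one. In the sketch below a dense random matrix stands in for the actual transport sweep, so the numbers are purely illustrative.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# A dense random matrix stands in for one transport sweep K = D L^{-1} S;
# source iteration applies K repeatedly, GMRES solves (I - K) phi = b
# using the same operator to build a Krylov subspace.
rng = np.random.default_rng(0)
n = 500
K = rng.random((n, n))
K *= 0.98 / np.abs(np.linalg.eigvals(K)).max()      # "scattering ratio" ~ 0.98
b = rng.random(n)

sweeps = [0]
def matvec(v):
    sweeps[0] += 1
    return v - K @ v                                # action of (I - K)

phi, info = gmres(LinearOperator((n, n), matvec=matvec), b,
                  atol=0.0, restart=50)
assert info == 0

phi_si, si_sweeps = np.zeros(n), 0                  # plain source iteration
while np.linalg.norm(K @ phi_si + b - phi_si) > 1e-5 * np.linalg.norm(b):
    phi_si = K @ phi_si + b
    si_sweeps += 1
print(f"GMRES sweeps: {sweeps[0]}, source-iteration sweeps: {si_sweeps}")
```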
Continuous energy adjoint transport for photons in PHITS
NASA Astrophysics Data System (ADS)
Malins, Alex; Machida, Masahiko; Niita, Koji
2017-09-01
Adjoint Monte Carlo can be an efficient algorithm for solving photon transport problems where the size of the tally is relatively small compared to the source. Such problems are typical in environmental radioactivity calculations, where natural or fallout radionuclides spread over a large area contribute to the air dose rate at a particular location. Moreover, photon transport with a continuous energy representation is vital for accurately calculating radiation protection quantities. Here we describe the incorporation of an adjoint Monte Carlo capability for continuous energy photon transport into the Particle and Heavy Ion Transport code System (PHITS). An adjoint cross section library for photon interactions was developed based on the JENDL-4.0 library, by adding cross sections for adjoint incoherent scattering and pair production. PHITS reads in the library and implements the adjoint transport algorithm of Hoogenboom. Adjoint pseudo-photons are spawned within the forward tally volume and transported through space. Currently, pseudo-photons can undergo coherent and incoherent scattering within the PHITS adjoint function; photoelectric absorption is treated implicitly. The calculation result is recovered from the pseudo-photon flux calculated over the true source volume, and a new adjoint tally function facilitates this conversion. This paper gives an overview of the new function and discusses potential future developments.
Development of deterministic transport methods for low energy neutrons for shielding in space
NASA Technical Reports Server (NTRS)
Ganapol, Barry
1993-01-01
Transport of low energy neutrons associated with the galactic cosmic ray cascade is analyzed in this dissertation. A benchmark quality analytical algorithm is demonstrated for use with BRYNTRN, a computer program written by the High Energy Physics Division of NASA Langley Research Center, which is used to design and analyze shielding against the radiation created by the cascade. BRYNTRN uses numerical methods to solve the integral transport equations for baryons with the straight-ahead approximation, and numerical and empirical methods to generate the interaction probabilities. The straight-ahead approximation is adequate for charged particles, but not for neutrons. As NASA Langley improves BRYNTRN to include low energy neutrons, a benchmark quality solution is needed for comparison. The neutron transport algorithm demonstrated in this dissertation uses the closed-form Green's function solution to the galactic cosmic ray cascade transport equations to generate a source of neutrons. A basis function expansion for finite heterogeneous and semi-infinite homogeneous slabs with multiple energy groups and isotropic scattering is used to generate neutron fluxes resulting from the cascade. This method, called the FN method, is used to solve the neutral particle linear Boltzmann transport equation. As a demonstration of the algorithm coded in the programs MGSLAB and MGSEMI, neutron and ion fluxes are shown for a beam of fluorine ions at 1000 MeV per nucleon incident on semi-infinite and finite aluminum slabs. Also, to demonstrate that the shielding effectiveness against the radiation from the galactic cosmic ray cascade is not directly proportional to shield thickness, a graph of transmitted total neutron scalar flux versus slab thickness is shown. A simple model based on the nuclear liquid drop assumption is used to generate cross sections for the galactic cosmic ray cascade. The ENDF/B V database is used to generate the total and scattering cross sections for neutrons in aluminum. As an external verification, the results from MGSLAB and MGSEMI were compared to ANISN/PC, a routinely used neutron transport code, showing excellent agreement. In an application to an aluminum shield, the FN method seems to generate reasonable results.
Verification of ARES transport code system with TAKEDA benchmarks
NASA Astrophysics Data System (ADS)
Zhang, Liang; Zhang, Bin; Zhang, Penghe; Chen, Mengteng; Zhao, Jingchang; Zhang, Shun; Chen, Yixue
2015-10-01
Neutron transport modeling and simulation are central to many areas of nuclear technology, including reactor core analysis, radiation shielding and radiation detection. In this paper the series of TAKEDA benchmarks is modeled to verify the criticality calculation capability of ARES, a discrete-ordinates neutral particle transport code system. The SALOME platform is coupled with ARES to provide geometry modeling and mesh generation functions. The Koch-Baker-Alcouffe parallel sweep algorithm is applied to accelerate the traditional transport calculation process. The results show that the eigenvalues calculated by ARES are in excellent agreement with the reference values presented in NEACRP-L-330, with differences of less than 30 pcm except for the first case of model 3. Additionally, ARES provides accurate flux distributions compared to the reference values, with deviations of less than 2% for region-averaged fluxes in all cases. All of these results confirm the feasibility of the ARES-SALOME coupling and demonstrate that ARES performs well in criticality calculations.
NASA Technical Reports Server (NTRS)
Menzel, Paul; Prins, Elaine
1995-01-01
This study attempts to assess the extent of burning and the associated aerosol transport regimes in South America and the South Atlantic using geostationary satellite observations, in order to explore the possible roles of biomass burning in climate change and, more directly, in atmospheric chemistry and radiative transfer processes. Modeling and analysis efforts have suggested that the direct and indirect radiative effects of aerosols from biomass burning may play a major role in the radiative balance of the earth and are an important factor in climate change calculations. One of the most active regions of biomass burning is located in South America, associated with deforestation in the selva (forest), grassland management, and other agricultural practices. As part of the NASA Aerosol Interdisciplinary Program, we are utilizing GOES-7 (1988) and GOES-8 (1995) visible and multispectral infrared data (4, 11, and 12 microns) to document daily biomass burning activity in South America and to distinguish smoke/aerosols from other multi-level clouds and low-level moisture. This study catalogues the areal extent and transport of smoke/aerosols throughout the region and over the Atlantic Ocean for the 1988 (July-September) and 1995 (June-October) biomass burning seasons. The smoke/haze cover estimates are compared to the locations of fires to determine the source and to verify that the haze is actually associated with biomass burning activities. The temporal resolution of the GOES data (half-hourly in South America) makes it possible to determine the prevailing circulation and transport of aerosols by considering a series of visible and infrared images and tracking the motion of smoke, haze and adjacent clouds. The study area extends from 40 to 70 deg W and 0 to 40 deg S, with aerosol coverage extending over the Atlantic Ocean when necessary. Fire activity is estimated with the GOES Automated Biomass Burning Algorithm (ABBA). To date, our efforts have focused on GOES-7 and GOES-8 ABBA development, algorithm development for aerosol monitoring, data acquisition and archiving, and participation in the SCAR-C and SCAR-B field programs, which have provided valuable information for algorithm testing and validation. Implementation of the initial version of the GOES-8 ABBA on case studies in North, Central, and South America has demonstrated the improved capability for monitoring diurnal fire activity and smoke/aerosol transport with GOES-8 throughout the Western Hemisphere.
Radiation Protection Effectiveness of Polymeric Based Shielding Materials at Low Earth Orbit
NASA Technical Reports Server (NTRS)
Badavi, Francis F.; Stewart-Sloan, Charlotte R.; Wilson, John W.; Adams, Daniel O.
2008-01-01
Correlations of limited ionizing radiation measurements onboard the Space Transportation System (STS; shuttle) and the International Space Station (ISS) with numerical simulations of charged particle transport through spacecraft structures have indicated that the use of hydrogen-rich polymeric materials improves the radiation shielding performance of space structures compared with the traditionally used aluminum alloys. We discuss herein the radiation shielding correlations between measurements on board STS-81 (Atlantis, 1997), using four polyethylene (PE) spheres of varying radii, and STS-89 (Endeavour, 1998), using aluminum alloy spheres, with numerical simulations of charged particle transport using the Langley Research Center (LaRC)-developed High charge (Z) and Energy TRaNsport (HZETRN) algorithm. In the simulations, the Galactic Cosmic Ray (GCR) component of the ionizing radiation environment at Low Earth Orbit (LEO), covering ions in the 1 ≤ Z ≤ 28 range, is represented by O'Neill's (2004) model. To compute the transmission coefficient for GCR ions at LEO, O'Neill's model is coupled with the angular dependent LaRC cutoff model. The trapped proton/electron component of the LEO environment is represented by a LaRC-developed time dependent procedure which couples the AP8min/AP8max, Deep River Neutron Monitor (DRNM) and F10.7 solar radio frequency measurements. The albedo neutron environment resulting from the interaction of GCR ions with the upper atmosphere is modeled through extrapolation of the Atmospheric Ionizing Radiation (AIR) measurements. With the validity of the numerical simulations established through correlation with the PE and aluminum sphere measurements, we further present results from the expansion of the simulations through the selection of high hydrogen content, commercially available polymeric constituents, such as PE foam core and Spectra fiber® composite face sheets, to assess their radiation shielding properties as compared to generic PE.
A method for optimizing the cosine response of solar UV diffusers
NASA Astrophysics Data System (ADS)
Pulli, Tomi; Kärhä, Petri; Ikonen, Erkki
2013-07-01
Instruments measuring global solar ultraviolet (UV) irradiance at the surface of the Earth need to collect radiation from the entire hemisphere. Entrance optics with an angular response as close as possible to the ideal cosine response are necessary to perform these measurements accurately. Typically, the cosine response is obtained using a transmitting diffuser. We have developed an efficient method based on a Monte Carlo algorithm to simulate radiation transport in the solar UV diffuser assembly. The algorithm takes into account propagation, absorption, and scattering of the radiation inside the diffuser material. The effects of the inner sidewalls of the diffuser housing, the shadow ring, and the protective weather dome are also accounted for. The software implementation of the algorithm is highly optimized: a simulation of 10⁹ photons takes approximately 10 to 15 min to complete on a typical high-end PC. The results of the simulations agree well with the measured angular responses, indicating that the algorithm can be used to guide the diffuser design process. Cost savings can be obtained when simulations are carried out before diffuser fabrication, as compared to purely trial-and-error-based diffuser optimization. The algorithm was used to optimize two types of detectors, one with a planar diffuser and the other with a spherically shaped diffuser. The integrated cosine error, which indicates the relative measurement error caused by the nonideal angular response under isotropic sky radiance, was calculated to be f2 = 1.4% for the planar detector and 0.66% for the spherical one.
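For reference, one common way to compute an integrated cosine error from a measured or simulated angular response is shown below; the exact weighting convention used in the paper may differ, so treat this as an illustrative definition (relative error of the collector under isotropic sky radiance, normalised to the normal-incidence response).

```python
import numpy as np

def trapz(y, x):
    """Simple trapezoidal rule (kept local for NumPy-version independence)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def integrated_cosine_error(theta_deg, response):
    """f2 = int |R - cos| sin dtheta / int cos sin dtheta over the hemisphere,
    with the response normalised so that R(0) = 1."""
    t = np.radians(theta_deg)
    r = np.asarray(response, dtype=float) / response[0]
    return trapz(np.abs(r - np.cos(t)) * np.sin(t), t) / trapz(np.cos(t) * np.sin(t), t)

# Example: a response falling off slightly faster than the ideal cosine
theta = np.linspace(0.0, 90.0, 91)
resp = np.cos(np.radians(theta)) ** 1.08
print(f"f2 = {100.0 * integrated_cosine_error(theta, resp):.2f}%")
```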
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wollaber, Allan Benton; Park, HyeongKae; Lowrie, Robert Byron
Recent efforts at Los Alamos National Laboratory to develop a moment-based, scale-bridging [or high-order (HO)–low-order (LO)] algorithm for solving large varieties of the transport (kinetic) systems have shown promising results. A part of our ongoing effort is incorporating this methodology into the framework of the Eulerian Applications Project to achieve algorithmic acceleration of radiationhydrodynamics simulations in production software. By starting from the thermal radiative transfer equations with a simple material-motion correction, we derive a discretely consistent energy balance equation (LO equation). We demonstrate that the corresponding LO system for the Monte Carlo HO solver is closely related to the originalmore » LO system without material-motion corrections. We test the implementation on a radiative shock problem and show consistency between the energy densities and temperatures in the HO and LO solutions as well as agreement with the semianalytic solution. We also test the approach on a more challenging two-dimensional problem and demonstrate accuracy enhancements and algorithmic speedups. This paper extends a recent conference paper by including multigroup effects.« less
NASA Astrophysics Data System (ADS)
Chen, Xueli; Liang, Jimin; Hu, Hao; Qu, Xiaochao; Yang, Defu; Chen, Duofang; Zhu, Shouping; Tian, Jie
2012-03-01
Gastric cancer is the second leading cause of cancer-related death in the world, and it remains difficult to cure because it is usually at a late stage by the time it is found. Early detection is therefore an effective approach to decreasing gastric cancer mortality. Bioluminescence tomography (BLT) has been applied to detect early liver cancer and prostate cancer metastasis. However, gastric cancer commonly originates from the gastric mucosa and grows outwards, so the bioluminescent light passes through a non-scattering region formed by the gastric pouch as it propagates through tissue. Thus, current BLT reconstruction algorithms based on approximations to the radiative transfer equation are not optimal for this problem. To address this gastric-cancer-specific problem, this paper presents a novel reconstruction algorithm that uses a hybrid light transport model to describe bioluminescent light propagation in tissue. Radiosity theory, integrated with the diffusion equation to form the hybrid light transport model, is used to describe light propagation in the non-scattering region. After finite element discretization, the hybrid light transport model is converted into a minimization problem incorporating an l1-norm regularization term to reveal the sparsity of the bioluminescent source distribution. The performance of the reconstruction algorithm is first demonstrated with a digital-mouse simulation, with a reconstruction error of less than 1 mm. An experiment with a nude mouse bearing an in situ gastric tumour is then conducted. The primary results demonstrate the ability of the novel BLT reconstruction algorithm in early gastric cancer detection.
Development of a three-dimensional, regional, coupled wave, current, and sediment-transport model
Warner, J.C.; Sherwood, C.R.; Signell, R.P.; Harris, C.K.; Arango, H.G.
2008-01-01
We are developing a three-dimensional numerical model that implements algorithms for sediment transport and evolution of bottom morphology in the coastal-circulation model Regional Ocean Modeling System (ROMS v3.0), and provides a two-way link between ROMS and the wave model Simulating Waves in the Nearshore (SWAN) via the Model-Coupling Toolkit. The coupled model is applicable for fluvial, estuarine, shelf, and nearshore (surfzone) environments. Three-dimensional radiation-stress terms have been included in the momentum equations, along with effects of a surface wave roller model. The sediment-transport algorithms are implemented for an unlimited number of user-defined non-cohesive sediment classes. Each class has attributes of grain diameter, density, settling velocity, critical stress threshold for erosion, and erodibility constant. Suspended-sediment transport in the water column is computed with the same advection-diffusion algorithm used for all passive tracers and an additional algorithm for vertical settling that is not limited by the CFL criterion. Erosion and deposition are based on flux formulations. A multi-level bed framework tracks the distribution of every size class in each layer and stores bulk properties including layer thickness, porosity, and mass, allowing computation of bed morphology and stratigraphy. Also tracked are bed-surface properties including active-layer thickness, ripple geometry, and bed roughness. Bedload transport is calculated for mobile sediment classes in the top layer. Bottom-boundary layer submodels parameterize wave-current interactions that enhance bottom stresses and thereby facilitate sediment transport and increase bottom drag, creating a feedback to the circulation. The model is demonstrated in a series of simple test cases and a realistic application in Massachusetts Bay.
Transported PDF Modeling of Nonpremixed Turbulent CO/H2/N2 Jet Flames
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, xinyu; Haworth, D. C.; Huckaby, E. David
2012-01-01
Turbulent CO/H2/N2 ("syngas") flames are simulated using a transported composition probability density function (PDF) method. A consistent hybrid Lagrangian particle/Eulerian mesh algorithm is used to solve the modeled PDF transport equation. The model includes standard k-ε turbulence, gradient transport for scalars, and Euclidean minimum spanning tree (EMST) mixing. Sensitivities of model results to variations in the turbulence model, the treatment of radiation heat transfer, the choice of chemical mechanism, and the PDF mixing model are explored. A baseline model reproduces the measured mean and rms temperature, major species, and minor species profiles reasonably well, and captures the scaling that is observed in the experiments. Both our results and the literature suggest that further improvements can be realized with adjustments in the turbulence model, the radiation heat transfer model, and the chemical mechanism. Although radiation effects are relatively small in these flames, consideration of radiation is important for accurate NO prediction. Chemical mechanisms that have been developed specifically for fuels with high concentrations of CO and H2 perform better than a methane mechanism that was not designed for this purpose. It is important to account explicitly for turbulence-chemistry interactions, although the details of the mixing model do not make a large difference in the results, within reasonable limits.
A point kernel algorithm for microbeam radiation therapy
NASA Astrophysics Data System (ADS)
Debus, Charlotte; Oelfke, Uwe; Bartzsch, Stefan
2017-11-01
Microbeam radiation therapy (MRT) is a treatment approach in radiation therapy in which the treatment field is spatially fractionated into arrays of a few tens of micrometre wide planar beams of unusually high peak doses, separated by low dose regions several hundred micrometres wide. In preclinical studies, this treatment approach has proven to spare normal tissue more effectively than conventional radiation therapy, while being equally efficient in tumour control. So far, dose calculations in MRT, a prerequisite for future clinical applications, are based on Monte Carlo simulations. However, these are computationally expensive, since the scoring volumes have to be small. In this article a kernel-based dose calculation algorithm is presented that splits the calculation into photon- and electron-mediated energy transport and performs the calculation of peak and valley doses in typical MRT treatment fields within a few minutes. Kernels are calculated analytically depending on the energy spectrum and material composition. Peak doses, valley doses, and microbeam profiles are calculated in various homogeneous materials and compared to Monte Carlo simulations. For a microbeam exposure of an anthropomorphic head phantom, calculated dose values are compared to measurements and Monte Carlo calculations. Except for regions close to material interfaces, calculated peak dose values match Monte Carlo results within 4% and valley dose values within 8% deviation. No significant differences are observed between profiles calculated by the kernel algorithm and Monte Carlo simulations. Measurements in the head phantom agree within 4% in the peak and within 10% in the valley region. The presented algorithm is attached to the treatment planning platform VIRTUOS. It has been, and continues to be, used for dose calculations in preclinical and pet-clinical trials at the biomedical beamline ID17 of the European Synchrotron Radiation Facility in Grenoble, France.
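The essence of a point-kernel method is that dose is the convolution of the primary photon fluence with a precomputed energy-deposition kernel. The 1D sketch below applies a stand-in exponential electron kernel to a five-beam microbeam array and reads off a peak-to-valley dose ratio (PVDR); the beam widths, spacing, and kernel range are illustrative, not those of the paper.

```python
import numpy as np
from scipy.signal import fftconvolve

dx = 1e-6                                       # 1 um grid (illustrative)
x = np.arange(-2000, 2001) * dx                 # lateral coordinate, +/- 2 mm

# Primary photon fluence: five 50-um microbeams, 400-um centre-to-centre
fluence = np.zeros_like(x)
for c in np.arange(-2, 3) * 400e-6:
    fluence[np.abs(x - c) <= 25e-6] = 1.0

# Stand-in electron energy-transport kernel: exponential lateral spread
range_e = 30e-6                                 # characteristic electron range
kern_x = np.arange(-300, 301) * dx
kernel = np.exp(-np.abs(kern_x) / range_e)
kernel /= kernel.sum()                          # conserve deposited energy

dose = fftconvolve(fluence, kernel, mode="same")
peak = dose[np.argmin(np.abs(x))]               # centre of the middle beam
valley = dose[np.argmin(np.abs(x - 200e-6))]    # midway between two beams
print(f"PVDR ~ {peak / valley:.0f}")
```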
Importance biasing scheme implemented in the PRIZMA code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kandiev, I.Z.; Malyshkin, G.N.
1997-12-31
The PRIZMA code is intended for Monte Carlo calculations of linear radiation transport problems. The code has extensive capabilities for describing geometry, sources, and material composition, and for obtaining parameters specified by the user. It can follow cascades of particles (including neutrons, photons, electrons, positrons and heavy charged particles), taking into account possible transmutations. An importance biasing scheme was implemented to solve problems that require the calculation of functionals related to small probabilities (for example, problems of protection against radiation, detection problems, etc.). The scheme makes it possible to adapt the trajectory-building algorithm to the peculiarities of the problem.
Zemax simulations describing collective effects in transition and diffraction radiation.
Bisesto, F G; Castellano, M; Chiadroni, E; Cianchi, A
2018-02-19
Transition and diffraction radiation from charged particles is commonly used for diagnostic purposes in accelerator facilities, as well as for THz sources for spectroscopy applications. An accurate analysis of the emission process and the transport optics is therefore crucial to properly characterize the source and precisely retrieve beam parameters. In this regard, we have developed a new algorithm, based on Zemax, to simulate both transition and diffraction radiation as generated by relativistic electron bunches, thereby including collective effects. In particular, unlike previous works, we take into account the electron beam's physical size and transverse momentum, reproducing effects visible in the produced radiation that are not observable in a single-electron analysis. The simulation results have been compared with two experiments, showing excellent agreement.
A conservative MHD scheme on unstructured Lagrangian grids for Z-pinch hydrodynamic simulations
NASA Astrophysics Data System (ADS)
Wu, Fuyuan; Ramis, Rafael; Li, Zhenghong
2018-03-01
A new algorithm to model resistive magnetohydrodynamics (MHD) in Z-pinches has been developed. Two-dimensional axisymmetric geometry with an azimuthal magnetic field Bθ is considered. Discretization is carried out using unstructured meshes made up of arbitrarily connected polygons. The algorithm is fully conservative for mass, momentum, and energy, and matter energy and magnetic energy are managed separately. The diffusion of the magnetic field is solved using a derivative of the Symmetric Semi-Implicit scheme, Livne et al. (1985) [23], in which unconditional stability is obtained without the need to solve large sparse systems of equations. This MHD package has been integrated into the radiation-hydrodynamics code MULTI-2D, Ramis et al. (2009) [20], which includes hydrodynamics, laser energy deposition, heat conduction, and radiation transport. This setup makes it possible to simulate Z-pinch configurations relevant to Inertial Confinement Fusion.
Optical properties reconstruction using the adjoint method based on the radiative transfer equation
NASA Astrophysics Data System (ADS)
Addoum, Ahmad; Farges, Olivier; Asllanaj, Fatmir
2018-01-01
An efficient algorithm is proposed to reconstruct the spatial distribution of optical properties in heterogeneous media such as biological tissue. Light transport through such media is accurately described by the radiative transfer equation in the frequency domain. The adjoint method is used to compute efficiently the gradient of the objective function with respect to the optical parameters. Numerical tests show that the algorithm is accurate and robust in retrieving simultaneously the absorption (μa) and scattering (μs) coefficients for both weakly and strongly absorbing media. Moreover, the simultaneous reconstruction of μs and the anisotropy factor g of the Henyey-Greenstein phase function is achieved with reasonable accuracy. The main novelty of this work is the reconstruction of g, which might open the possibility of imaging this parameter in tissue as an additional contrast agent in optical tomography.
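The adjoint trick the abstract relies on is generic: for a discretized forward model A(p) u = q and a detector functional J = d·u, one extra linear solve with the transposed operator yields the gradient with respect to every parameter at once, instead of one forward solve per parameter. A minimal sketch on a 1D diffusion-absorption model follows (the paper itself works with the frequency-domain radiative transfer equation; all names here are illustrative).

```python
import numpy as np

# Model problem: (K + diag(mu_a)) u = q on a 1-D grid, detector reading
# J = d.u. The adjoint method gives dJ/dmu_a[i] = -lam[i] * u[i] with
# A^T lam = d: all n sensitivities from ONE extra solve.
n = 50
h = 1.0 / n
K = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2        # diffusion, Dirichlet BCs
mu_a = 0.5 + 0.3 * np.linspace(0.0, 1.0, n)       # absorption parameters
q = np.ones(n)                                    # source
d = np.zeros(n); d[n // 2] = 1.0                  # detector at mid-domain

A = K + np.diag(mu_a)
u = np.linalg.solve(A, q)                         # forward solve
lam = np.linalg.solve(A.T, d)                     # single adjoint solve
grad = -lam * u                                   # all n sensitivities

# Spot-check one component against a finite difference
i, eps = 10, 1e-6
mu_p = mu_a.copy(); mu_p[i] += eps
u_p = np.linalg.solve(K + np.diag(mu_p), q)
print(grad[i], (d @ u_p - d @ u) / eps)           # should agree closely
```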
An abstract approach to evaporation models in rarefied gas dynamics
NASA Astrophysics Data System (ADS)
Greenberg, W.; van der Mee, C. V. M.
1984-03-01
Strong evaporation models involving 1D stationary problems with linear self-adjoint collision operators and solutions in abstract Hilbert spaces are investigated analytically. An efficient algorithm for locating the transition from existence to nonexistence of solutions is developed and applied to the 1D and 3D BGK model equations and the 3D BGK model in moment form, demonstrating the nonexistence of stationary evaporation states with supersonic drift velocities. Applications to similar models in electron and phonon transport, radiative transfer, and neutron transport are suggested.
Reactor Dosimetry Applications Using RAPTOR-M3G: A New Parallel 3-D Radiation Transport Code
NASA Astrophysics Data System (ADS)
Longoni, Gianluca; Anderson, Stanwood L.
2009-08-01
The numerical solution of the Linearized Boltzmann Equation (LBE) via the discrete ordinates method (SN) requires extensive computational resources for large 3-D neutron and gamma transport applications, due to the concurrent discretization of the angular, spatial, and energy domains. This paper discusses the development of RAPTOR-M3G (RApid Parallel Transport Of Radiation - Multiple 3D Geometries), a new 3-D parallel radiation transport code, and its application to the calculation of ex-vessel neutron dosimetry responses in the cavity of a commercial 2-loop Pressurized Water Reactor (PWR). RAPTOR-M3G is based on domain decomposition algorithms, where the spatial and angular domains are allocated and processed on multi-processor computer architectures. Compared to traditional single-processor applications, this approach reduces the computational load as well as the memory requirement per processor, yielding an efficient solution methodology for large 3-D problems. Measured neutron dosimetry responses in the reactor cavity air gap are compared to the RAPTOR-M3G predictions. This paper is organized as follows: Section 1 discusses the RAPTOR-M3G methodology; Section 2 describes the 2-loop PWR model and the numerical results obtained; Section 3 addresses the parallel performance of the code; and Section 4 concludes with final remarks and future work.
NASA Astrophysics Data System (ADS)
Almansa, Julio; Salvat-Pujol, Francesc; Díaz-Londoño, Gloria; Carnicer, Artur; Lallena, Antonio M.; Salvat, Francesc
2016-02-01
The Fortran subroutine package PENGEOM provides a complete set of tools to handle quadric geometries in Monte Carlo simulations of radiation transport. The material structure where radiation propagates is assumed to consist of homogeneous bodies limited by quadric surfaces. The PENGEOM subroutines (a subset of the PENELOPE code) track particles through the material structure, independently of the details of the physics models adopted to describe the interactions. Although these subroutines are designed for detailed simulations of photon and electron transport, where all individual interactions are simulated sequentially, they can also be used in mixed (class II) schemes for simulating the transport of high-energy charged particles, where the effect of soft interactions is described by the random-hinge method. The definition of the geometry and the details of the tracking algorithm are tailored to optimize simulation speed. The use of fuzzy quadric surfaces minimizes the impact of round-off errors. The provided software includes a Java graphical user interface for editing and debugging the geometry definition file and for visualizing the material structure. Images of the structure are generated by using the tracking subroutines and, hence, they describe the geometry actually passed to the simulation code.
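The geometric kernel of any quadric tracker is the distance from a point along a direction to a quadric surface f(x) = xᵀAx + 2b·x + c = 0, which reduces to a quadratic in the path length. A minimal sketch of that computation follows (illustrative only, not the PENGEOM implementation, which additionally controls round-off with fuzzy surfaces):

```python
import numpy as np

def ray_quadric_distance(r, d, A, b, c, eps=1e-12):
    """Smallest positive distance s with f(r + s d) = 0 for the quadric
    f(x) = x.A.x + 2 b.x + c (A symmetric 3x3). Returns inf if the ray
    misses the surface."""
    qa = d @ A @ d                      # quadratic coefficient
    qb = d @ A @ r + b @ d              # half the linear coefficient
    qc = r @ A @ r + 2.0 * b @ r + c    # constant term: f at the ray origin
    if abs(qa) < eps:                   # degenerate: effectively linear in s
        if abs(qb) < eps:
            return np.inf
        s = -0.5 * qc / qb
        return s if s > eps else np.inf
    disc = qb * qb - qa * qc
    if disc < 0.0:
        return np.inf                   # no real intersection
    sq = np.sqrt(disc)
    pos = [s for s in sorted([(-qb - sq) / qa, (-qb + sq) / qa]) if s > eps]
    return pos[0] if pos else np.inf

# Example: unit sphere x.x - 1 = 0, ray from the origin along +z
A = np.eye(3); b = np.zeros(3); c = -1.0
print(ray_quadric_distance(np.zeros(3), np.array([0.0, 0.0, 1.0]), A, b, c))  # 1.0
```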
Turbulent Flow past High Temperature Surfaces
NASA Astrophysics Data System (ADS)
Mehmedagic, Igbal; Thangam, Siva; Carlucci, Pasquale; Buckley, Liam; Carlucci, Donald
2014-11-01
Flow over high-temperature surfaces subject to wall heating is analyzed with applications to projectile design. In this study, computations are performed using an anisotropic Reynolds-stress model to study flow past surfaces that are subject to radiative flux. The model utilizes a phenomenological treatment of the energy spectrum and the diffusivities of momentum and heat to include the effects of wall heat transfer and radiative exchange. The radiative transport is modeled using the Eddington approximation, including the weighted effect of the nongrayness of the fluid. The time-averaged equations of motion and energy are solved using the modeled form of the transport equations for the turbulence kinetic energy and the scalar form of turbulence dissipation with an efficient finite-volume algorithm. The model is applied to available test cases to validate its predictive capabilities for capturing the effects of wall heat transfer. Computational results are compared with experimental data available in the literature. Applications involving the design of projectiles are summarized. Funded in part by U.S. Army, ARDEC.
Predicting commuter flows in spatial networks using a radiation model based on temporal ranges
NASA Astrophysics Data System (ADS)
Ren, Yihui; Ercsey-Ravasz, Mária; Wang, Pu; González, Marta C.; Toroczkai, Zoltán
2014-11-01
Understanding network flows such as commuter traffic in large transportation networks is an ongoing challenge due to the complex nature of the transportation infrastructure and human mobility. Here we show a first-principles based method for traffic prediction using a cost-based generalization of the radiation model for human mobility, coupled with a cost-minimizing algorithm for efficient distribution of the mobility fluxes through the network. Using US census and highway traffic data, we show that traffic can efficiently and accurately be computed from a range-limited, network betweenness type calculation. The model based on travel time costs captures the log-normal distribution of the traffic and attains a high Pearson correlation coefficient (0.75) when compared with real traffic. Because of its principled nature, this method can inform many applications related to human mobility driven flows in spatial networks, ranging from transportation, through urban planning to mitigation of the effects of catastrophic events.
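For context, the original parameter-free radiation model that this work generalises predicts the flux from i to j as T_ij = T_i m_i n_j / ((m_i + s_ij)(m_i + n_j + s_ij)), where m_i and n_j are the origin and destination populations, s_ij is the population within a circle of radius r_ij around i (excluding i and j), and T_i is the number of commuters leaving i. A direct sketch (toy data; the paper replaces the distance r_ij with a travel-time cost):

```python
import numpy as np

def radiation_flux(pop, xy, T_out):
    """Commuter fluxes from the (parameter-free) radiation model:
    T_ij = T_i m_i n_j / ((m_i + s_ij) (m_i + n_j + s_ij))."""
    n = len(pop)
    dist = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    T = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            inside = dist[i] < dist[i, j]     # circle of radius r_ij around i
            s = pop[inside].sum() - pop[i]    # exclude i itself; j lies outside
            m, nj = pop[i], pop[j]
            T[i, j] = T_out[i] * m * nj / ((m + s) * (m + nj + s))
    return T

# Toy example: five towns along a line, 10% of each population commuting
pop = np.array([1e4, 5e3, 2e4, 1e3, 8e3])
xy = np.column_stack([np.arange(5) * 10.0, np.zeros(5)])
print(radiation_flux(pop, xy, T_out=0.1 * pop).round(1))
```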
OLTARIS: On-Line Tool for the Assessment of Radiation in Space
NASA Technical Reports Server (NTRS)
Singleterry, Robert C., Jr.; Blattnig, Steve R.; Clowdsley, Martha S.; Qualls, Garry D.; Sandridge, Chris A.; Simonsen, Lisa C.; Norbury, John W.; Slaba, Tony C.; Walker, Steve A.; Badavi, Francis F.;
2009-01-01
The On-Line Tool for the Assessment of Radiation In Space (OLTARIS) is a World Wide Web based tool that assesses the effects of space radiation on humans in items such as spacecraft, habitats, rovers, and spacesuits. This document explains the basis behind the interface and framework used to input the data, perform the assessment, and output the results to the user, as well as the physics, engineering, and computer science used to develop OLTARIS. The physics is based on the HZETRN2005 and NUCFRG2 research codes. The OLTARIS website is the successor to the SIREST website from the early 2000s. Modifications have been made to the code to enable easy maintenance, additions, and configuration management, along with a more modern web interface. Overall, the code has been verified, tested, and modified to enable faster and more accurate assessments. The next major areas of modification are more accurate transport algorithms, better uncertainty estimates, and electronic response functions. Improvements to the existing algorithms and data occur continuously and are logged in the change log section of the website.
An Improved Neutron Transport Algorithm for HZETRN
NASA Technical Reports Server (NTRS)
Slaba, Tony C.; Blattnig, Steve R.; Clowdsley, Martha S.; Walker, Steven A.; Badavi, Francis F.
2010-01-01
Long term human presence in space requires the inclusion of radiation constraints in mission planning and the design of shielding materials, structures, and vehicles. In this paper, the numerical error associated with energy discretization in HZETRN is addressed. An inadequate numerical integration scheme in the transport algorithm is shown to produce large errors in the low energy portion of the neutron and light ion fluence spectra. It is further shown that the errors result from the narrow energy domain of the neutron elastic cross section spectral distributions, and that an extremely fine energy grid is required to resolve the problem under the current formulation. Two numerical methods are developed to provide adequate resolution in the energy domain and more accurately resolve the neutron elastic interactions. Convergence testing is completed by running the code for various environments and shielding materials with various energy grids to ensure stability of the newly implemented method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moriarty, M.P.
1993-01-15
The heat transport subsystem for a liquid metal cooled thermionic space nuclear power system was modelled using algorithms developed in support of previous nuclear power system study programs, which date back to the SNAP-10A flight system. The model was used to define the optimum dimensions of the various components in the heat transport subsystem subjected to the constraints of minimizing mass and achieving a launchable package that did not require radiator deployment. The resulting design provides for the safe and reliable cooling of the nuclear reactor in a proven lightweight design.
An Improved Neutron Transport Algorithm for HZETRN2006
NASA Astrophysics Data System (ADS)
Slaba, Tony
NASA's new space exploration initiative includes plans for long term human presence in space thereby placing new emphasis on space radiation analyses. In particular, a systematic effort of verification, validation and uncertainty quantification of the tools commonly used for radiation analysis for vehicle design and mission planning has begun. In this paper, the numerical error associated with energy discretization in HZETRN2006 is addressed; large errors in the low-energy portion of the neutron fluence spectrum are produced due to a numerical truncation error in the transport algorithm. It is shown that the truncation error results from the narrow energy domain of the neutron elastic spectral distributions, and that an extremely fine energy grid is required in order to adequately resolve the problem under the current formulation. Since adding a sufficient number of energy points will render the code computationally inefficient, we revisit the light-ion transport theory developed for HZETRN2006 and focus on neutron elastic interactions. The new approach that is developed numerically integrates with adequate resolution in the energy domain without affecting the run-time of the code and is easily incorporated into the current code. Efforts were also made to optimize the computational efficiency of the light-ion propagator; a brief discussion of the efforts is given along with run-time comparisons between the original and updated codes. Convergence testing is then completed by running the code for various environments and shielding materials with many different energy grids to ensure stability of the proposed method.
Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, L.M.; Hochstedler, R.D.
1997-02-01
Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code).
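Of the techniques listed, replacing linear searches with binary ones is the easiest to illustrate: energy-grid lookups sit on the hot path of every collision, and a sorted grid admits an O(log n) bisection in place of an O(n) scan. A sketch with invented table data:

```python
import bisect
import numpy as np

energies = np.logspace(-3, 2, 500)          # ascending energy grid, MeV
sigma = 1.0 / np.sqrt(energies)             # made-up cross sections

def lookup_linear(e):
    i = 0
    while i < len(energies) - 2 and energies[i + 1] < e:
        i += 1                              # O(n): step through the grid
    return i

def lookup_binary(e):
    return max(bisect.bisect_right(energies, e) - 1, 0)   # O(log n)

e = 3.7
i = lookup_binary(e)
assert i == lookup_linear(e)
# log-linear interpolation on the bracketing interval (one common choice)
f = np.log(e / energies[i]) / np.log(energies[i + 1] / energies[i])
print((1.0 - f) * sigma[i] + f * sigma[i + 1])
```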
Boost-phase discrimination research activities
NASA Technical Reports Server (NTRS)
Cooper, David M.; Deiwert, George S.
1989-01-01
Theoretical research in two areas was performed. The aerothermodynamics research focused on hard-body and rocket plume flows. Analytical real gas models describing finite rate chemistry were developed and incorporated into the three-dimensional flow codes. New numerical algorithms capable of treating multi-species reacting gas equations and flows with large gradients were also developed. The computational chemistry research focused on the determination of spectral radiative intensity factors, transport properties, and reaction rates. Ab initio solutions to the Schrodinger equation provided potential energy curves, transition moments (radiative probabilities and strengths), and potential energy surfaces. These surfaces were then coupled with classical particle reactive trajectories to compute reaction cross-sections and rates.
Yang, Y M; Bednarz, B
2013-02-21
Following the proposal by several groups to integrate magnetic resonance imaging (MRI) with radiation therapy, much attention has been afforded to examining the impact of strong (on the order of a Tesla) transverse magnetic fields on photon dose distributions. The effect of the magnetic field on dose distributions must be considered in order to take full advantage of the benefits of real-time intra-fraction imaging. In this investigation, we compared the handling of particle transport in magnetic fields between two Monte Carlo codes, EGSnrc and Geant4, to analyze various aspects of their electromagnetic transport algorithms; both codes are well-benchmarked for medical physics applications in the absence of magnetic fields. A water-air-water slab phantom and a water-lung-water slab phantom were used to highlight dose perturbations near high- and low-density interfaces. We have implemented a method of calculating the Lorentz force in EGSnrc based on theoretical models in literature, and show very good consistency between the two Monte Carlo codes. This investigation further demonstrates the importance of accurate dosimetry for MRI-guided radiation therapy (MRIgRT), and facilitates the integration of a ViewRay MRIgRT system in the University of Wisconsin-Madison's Radiation Oncology Department.
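As a point of reference for how a Monte Carlo code can advance a charged particle through a magnetic field between interactions, the sketch below uses the standard (here nonrelativistic) Boris rotation, which conserves the particle speed exactly because the magnetic force does no work. It is illustrative only: EGSnrc and Geant4 each use their own stepping schemes, a production code pushes the relativistic momentum, and the field value is merely of the Tesla scale discussed for MRIgRT.

```python
import numpy as np

C = 2.998e8                                  # speed of light, m/s

def boris_step(x, v, q_over_m, B, dt):
    """One Boris rotation step for B-only fields: |v| is preserved exactly,
    mirroring the fact that the magnetic (Lorentz) force does no work."""
    t = 0.5 * q_over_m * B * dt
    s = 2.0 * t / (1.0 + t @ t)
    v_prime = v + np.cross(v, t)
    v_new = v + np.cross(v_prime, s)
    return x + v_new * dt, v_new

# Electron-like test particle at 0.8c in B = 1.5 T (nonrelativistic sketch)
q_over_m = -1.7588e11                        # e/m_e in C/kg
B = np.array([0.0, 0.0, 1.5])
x, v = np.zeros(3), np.array([0.8 * C, 0.0, 0.0])
for _ in range(1000):
    x, v = boris_step(x, v, q_over_m, B, dt=1e-13)
print(np.linalg.norm(v) / C)                 # still 0.8: speed conserved
```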
NASA Astrophysics Data System (ADS)
Jardin, A.; Mazon, D.; Malard, P.; O'Mullane, M.; Chernyshova, M.; Czarski, T.; Malinowski, K.; Kasprowicz, G.; Wojenski, A.; Pozniak, K.
2017-08-01
The tokamak WEST aims at testing ITER divertor high-heat-flux component technology in long-pulse operation. Unfortunately, heavy impurities like tungsten (W) sputtered from the plasma-facing components can pollute the plasma core, where they cause radiation cooling in the soft x-ray (SXR) range that is detrimental to energy confinement and plasma stability. SXR diagnostics give valuable information for monitoring impurities and studying their transport. The WEST SXR diagnostic is composed of two new cameras based on Gas Electron Multiplier (GEM) technology. The WEST GEM cameras will be used for impurity transport studies by performing 2D tomographic reconstructions with spectral resolution in tunable energy bands. In this paper, we characterize the GEM spectral response and investigate W density reconstruction using a recently developed synthetic diagnostic coupled with a tomography algorithm based on the minimum Fisher information (MFI) inversion method. The synthetic diagnostic includes the SXR source from a given plasma scenario; the photoionization, electron cloud transport, and avalanche in the detection volume, using Magboltz; and tomographic reconstruction of the radiation from the GEM signal. Preliminary studies of the effect of transport on the W ionization equilibrium and on the reconstruction capabilities are also presented.
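The MFI method referenced here is typically implemented as an iterated, re-weighted Tikhonov inversion; a minimal sketch under that assumption (generic matrix names, 1-D pixel grid, not the WEST code) follows.

```python
import numpy as np

def mfi_invert(T, f, alpha=1e-2, n_iter=5, eps=1e-6):
    """Iterated minimum-Fisher-information inversion (sketch): at each pass,
    solve the Tikhonov-like system min ||T g - f||^2 + alpha g^T D^T W D g,
    where D is a finite-difference gradient and W = diag(1/max(g, eps)) is
    refreshed from the previous iterate."""
    n = T.shape[1]
    D = np.diag(np.ones(n - 1), 1)[:-1] - np.eye(n)[:-1]   # (n-1, n) gradient
    g = np.full(n, max(float(np.mean(f)), eps))            # flat initial guess
    for _ in range(n_iter):
        w = 1.0 / np.maximum(g, eps)
        w_mid = 0.5 * (w[:-1] + w[1:])                     # weights at interfaces
        A = T.T @ T + alpha * D.T @ (w_mid[:, None] * D)
        g = np.maximum(np.linalg.solve(A, T.T @ f), 0.0)   # non-negative emissivity
    return g
```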
Faster and more accurate transport procedures for HZETRN
NASA Astrophysics Data System (ADS)
Slaba, T. C.; Blattnig, S. R.; Badavi, F. F.
2010-12-01
The deterministic transport code HZETRN was developed for research scientists and design engineers studying the effects of space radiation on astronauts and instrumentation protected by various shielding materials and structures. In this work, several aspects of code verification are examined. First, a detailed derivation of the light particle (A ⩽ 4) and heavy ion (A > 4) numerical marching algorithms used in HZETRN is given. References are given for components of the derivation that already exist in the literature, and discussions are given for details that may have been absent in the past. The present paper provides a complete description of the numerical methods currently used in the code and is identified as a key component of the verification process. Next, a new numerical method for light particle transport is presented, and improvements to the heavy ion transport algorithm are discussed. A summary of round-off error is also given, and the impact of this error on previously predicted exposure quantities is shown. Finally, a coupled convergence study is conducted by refining the discretization parameters (step-size and energy grid-size). From this study, it is shown that past efforts in quantifying the numerical error in HZETRN were hindered by single precision calculations and computational resources. It is determined that almost all of the discretization error in HZETRN is caused by the use of discretization parameters that violate a numerical convergence criterion related to charged target fragments below 50 AMeV. Total discretization errors are given for the old and new algorithms to 100 g/cm² in aluminum and water, and the improved accuracy of the new numerical methods is demonstrated. Run time comparisons between the old and new algorithms are given for one, two, and three layer slabs of 100 g/cm² of aluminum, polyethylene, and water. The new algorithms are found to be almost 100 times faster for solar particle event simulations and almost 10 times faster for galactic cosmic ray simulations.
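For readers unfamiliar with marching schemes of this type, a heavily simplified single-species sketch of one depth step, combining a continuous-slowing-down energy shift with nuclear attenuation, is given below; the real HZETRN algorithm couples many species through fragmentation source terms and retains the flux Jacobian neglected here.

```python
import numpy as np

def march_step(phi, E, h, stopping_power, sigma):
    """One depth step of a simplified straight-ahead, continuous-slowing-down
    marching scheme: shift the spectrum by the energy lost over the step and
    attenuate by nuclear absorption. The flux Jacobian S(E')/S(E) and all
    fragmentation source terms are deliberately omitted.
    phi : flux on the (increasing) energy grid E at depth x; returns flux at x+h.
    """
    E_up = E + stopping_power(E) * h      # energy needed at x to reach x+h with E
    phi_shifted = np.interp(E_up, E, phi, right=0.0)
    return phi_shifted * np.exp(-sigma * h)
```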
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stavrov, Andrei; Yamamoto, Eugene
Radiation Portal Monitors (RPM) with plastic detectors are the main instruments used for primary border (customs) radiation control. RPM are widely used because they are simple, reliable, relatively inexpensive, and highly sensitive. However, experience using RPM in various countries has revealed some grave shortcomings. There is a dramatic decrease in the probability of detecting radioactive sources under high suppression of the natural gamma background (radiation control of heavy cargoes, containers and, especially, trains). NORM (Naturally Occurring Radioactive Material) present in objects under control triggers so-called 'nuisance alarms', requiring a secondary inspection for source verification. At a number of sites, the rate of such alarms is so high that it significantly complicates the work of customs and border officers. This paper presents a brief description of a new variant of the ASIA-New algorithm (New Advanced Source Identification Algorithm), developed by the authors on the basis of experimental test results. It also demonstrates results of different tests and the capability of the new system to overcome the shortcomings stated above. The new electronics and ASIA-New enable RPM to detect radioactive sources under high background suppression (tested at 15-30%) and to verify detected NORM (KCl) and artificial isotopes (Co-57, Ba-133, and others). The new variant of ASIA is based on physical principles and does not require a large number of special tests to accumulate statistical data for its parameters; it can therefore be easily installed into any RPM with plastic detectors. The algorithm was tested on 1,395 passages of different vehicles (cars, trucks, and trailers) without radioactive sources, and on 4,015 passages of these vehicles with radioactive sources of different activity (Co-57, Ba-133, Cs-137, Co-60, Ra-226, Th-232), including sources masked by NORM (K-40). (authors)
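ASIA-New itself is not published in detail here; the toy sketch below only illustrates the generic compensation problem the abstract describes, alarming against a background expectation lowered by cargo shadowing rather than against the unsuppressed baseline. All names and the suppression input are hypothetical.

```python
import numpy as np

def alarm_decision(counts, baseline, suppression=None, k_sigma=4.0):
    """Toy occupancy alarm with background-suppression compensation.
    counts      : gross counts per interval during the vehicle passage
    baseline    : mean background counts per interval with no vehicle present
    suppression : estimated fraction of background shadowed by the cargo in
                  each interval (a stand-in for what ASIA-like logic infers)
    A plain threshold on `baseline` is blind to a source whose excess merely
    refills the shadowed background; testing against the suppressed
    expectation restores that sensitivity."""
    counts = np.asarray(counts, dtype=float)
    supp = np.zeros_like(counts) if suppression is None else np.asarray(suppression)
    expected = baseline * (1.0 - supp)
    threshold = expected + k_sigma * np.sqrt(expected)   # Poisson-style test
    return bool(np.any(counts > threshold))
```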
NASA Technical Reports Server (NTRS)
Pallman, A. J.
1974-01-01
Time-dependent vertical distributions of atmospheric temperature and static stability were determined by a radiative-convective-conductive heat transfer model attuned to Mariner 9 IRIS radiance data. Of particular interest were conditions of both the dust-laden and dust-free atmosphere in the middle latitudes of Mars during the late Southern Hemisphere summer season. The numerical model simulates at high spatial and temporal resolution (52 atmospheric and 30 subsurface levels, with a time step of 7.5 min) the heat transports in the ground-atmosphere system. The algorithm is based on the solution of the appropriate heating-rate equation, which includes radiative, molecular-conductive, and convective heat transfer terms. Ground and atmosphere are coupled by an internal thermal boundary condition.
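A minimal sketch of one explicit time step of such a heating-rate equation (illustrative constants and a crude convective adjustment, not the paper's actual 52-level scheme):

```python
import numpy as np

def step_temperature(T, dt, dz, kappa, q_rad, gamma_ad=4.5e-3):
    """One explicit step of a generic 1-D heating-rate equation
    dT/dt = kappa * d2T/dz2 + Q_rad, followed by a crude convective
    adjustment; index 0 is the surface and z increases upward. Illustrative
    only; gamma_ad ~ 4.5e-3 K/m is roughly the Martian dry-adiabatic rate.
    """
    lap = np.zeros_like(T)
    lap[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dz**2   # conduction, interior
    T_new = T + dt * (kappa * lap + q_rad)                 # q_rad: radiative K/s
    # relax any super-adiabatic layer pair toward the adiabatic lapse rate
    for k in range(1, len(T_new)):
        if T_new[k - 1] - T_new[k] > gamma_ad * dz:
            mean = 0.5 * (T_new[k - 1] + T_new[k])
            T_new[k - 1] = mean + 0.5 * gamma_ad * dz
            T_new[k] = mean - 0.5 * gamma_ad * dz
    return T_new
```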
Three-temperature plasma shock solutions with gray radiation diffusion
Johnson, Bryan M.; Klein, Richard I.
2016-04-19
The effects of radiation on the structure of shocks in a fully ionized plasma are investigated by solving the steady-state fluid equations for ions, electrons, and radiation. The electrons and ions are assumed to have the same bulk velocity but separate temperatures, and the radiation is modeled with the gray diffusion approximation. Both electron and ion conduction are included, as well as ion viscosity. When the material is optically thin, three-temperature behavior occurs. When the diffusive flux of radiation is important but radiation pressure is not, two-temperature behavior occurs, with the electrons strongly coupled to the radiation. Since the radiation heats the electrons on length scales that are much longer than the electron–ion Coulomb coupling length scale, these solutions resemble radiative shock solutions rather than plasma shock solutions that neglect radiation. When radiation pressure is important, all three components are strongly coupled. Results with constant values for the transport and coupling coefficients are compared to a full numerical simulation with a good match between the two, demonstrating that steady shock solutions constitute a straightforward and comprehensive verification test methodology for multi-physics numerical algorithms.
2007-08-01
Photon trajectories through a multi-layer biological tissue model are computed using a solution of the Eikonal equation (ray-tracing methods) rather than linear trajectories, coupling the radiative transport solution into heat transfer and damage models. [Report abstract fragment; subject terms: B-Splines, Ray-Tracing, Eikonal Equation.]
NASA Astrophysics Data System (ADS)
Wang, J.; Christopher, S. A.; Nair, U. S.; Reid, J. S.; Prins, E. M.
2005-12-01
Considerable efforts, including various field experiments, have been carried out in the last decade to study the regional climatic impact of smoke aerosols produced by biomass burning activities in Africa and South America. In contrast, only a few investigations have been conducted for the Central American Biomass Burning (CABB) region. Using a coupled aerosol-radiation-meteorology model called RAMS-AROMA together with various ground-based observations, we present a comprehensive analysis of the smoke direct radiative impacts on the surface energy budget, boundary layer evolution, and the precipitation process during the CABB events in spring 2003. Quantitative estimates are also made of the transboundary carbon mass transported to the U.S. in the form of smoke particles. Built upon the Regional Atmospheric Modeling System (RAMS) mesoscale model, RAMS-AROMA includes several features such as the Assimilation and Radiation Online Modeling of Aerosols (AROMA) algorithms. The model simulates smoke transport by using an hourly smoke emission inventory from the Fire Locating and Modeling of Burning Emissions (FLAMBE) geostationary satellite database. It explicitly considers the smoke effects on radiative transfer at each model time step and model grid, thereby coupling the dynamical processes and aerosol transport. Comparisons with ground-based observations show that the simulation realistically captured the smoke transport timeline and distribution on daily to hourly scales. The effects of smoke radiative extinction on the decrease of 2-m air temperature (2mT), diurnal temperature range (DTR), and boundary layer height over the land surface are also quantified. Warming due to smoke absorption of solar radiation can be found in the lower troposphere over the ocean, but not near the underlying land surface. The increase of boundary layer stability produces a positive feedback in which more smoke particles are trapped in the lower boundary layer. These changes in temperature, surface energy budget, and atmospheric lapse rate have important ramifications for the simulation of precipitation.
NASA Astrophysics Data System (ADS)
Venema, V. K. C.; Lindau, R.; Varnai, T.; Simmer, C.
2009-04-01
Two main groups of statistical methods used in the Earth sciences are geostatistics and stochastic modelling. Geostatistical methods, such as various kriging algorithms, aim at estimating the mean value for every point as well as possible. In case of sparse measurements, such fields have less variability at small scales and a narrower distribution than the true field. This can lead to biases if a nonlinear process is simulated on such a kriged field. Stochastic modelling aims at reproducing the structure of the data. One of the stochastic modelling methods, the so-called surrogate data approach, replicates the value distribution and power spectrum of a certain data set. However, while stochastic methods reproduce the statistical properties of the data, the location of the measurement is not considered. Because radiative transfer through clouds is a highly nonlinear process, it is essential to model the distribution (e.g. of optical depth, extinction, liquid water content or liquid water path) accurately, as well as the correlations in the cloud field, because of horizontal photon transport. This explains the success of surrogate cloud fields for use in 3D radiative transfer studies. However, up to now we could only achieve good results for the radiative properties averaged over the field, not for a radiation measurement located at a certain position. Therefore we have developed a new algorithm that combines the accuracy of stochastic (surrogate) modelling with the positioning capabilities of kriging. In this way, we can automatically profit from the large geostatistical literature and software. The algorithm is tested on cloud fields from large eddy simulations (LES). On these clouds a measurement is simulated. From the pseudo-measurement we estimate the distribution and power spectrum. Furthermore, the pseudo-measurement is kriged to a field the size of the final surrogate cloud. The distribution, spectrum and the kriged field are the inputs to the algorithm. This algorithm is similar to the standard iterative amplitude adjusted Fourier transform (IAAFT) algorithm, but has an additional iterative step in which the surrogate field is nudged towards the kriged field. The nudging strength is gradually reduced to zero. We work with four types of pseudo-measurements: one zenith-pointing measurement (which together with the wind produces a line measurement), five zenith-pointing measurements, and a slow and a fast azimuth scan (which together with the wind produce spirals). Because we work with LES clouds and the truth is known, we can validate the algorithm by performing 3D radiative transfer calculations on the original LES clouds and on the new surrogate clouds. For comparison, the radiative properties of the kriged fields and standard surrogate fields are also computed. Preliminary results already show that these new surrogate clouds reproduce the structure of the original clouds very well, and the minima and maxima are located where the pseudo-measurements see them. The main limitation seems to be the amount of data, which is especially limited in the case of just one zenith-pointing measurement.
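A compact sketch of the nudged IAAFT iteration described above, assuming a 1-D field and a linearly decaying nudging strength (the abstract does not state the actual schedule):

```python
import numpy as np

def iaaft_nudged(spectrum_amp, sorted_values, kriged, n_iter=100, lam0=0.5, seed=0):
    """IAAFT surrogate nudged toward a kriged field (illustrative schedule).
    spectrum_amp  : target Fourier amplitudes, e.g. np.abs(np.fft.fft(data))
    sorted_values : target value distribution, sorted ascending
    kriged        : kriged field on the surrogate grid (same length)."""
    rng = np.random.default_rng(seed)
    g = rng.permutation(sorted_values)           # random start, correct distribution
    for it in range(n_iter):
        # 1) impose the target power spectrum, keeping the current phases
        G = np.fft.fft(g)
        g = np.real(np.fft.ifft(spectrum_amp * np.exp(1j * np.angle(G))))
        # 2) nudge toward the kriged field; strength decays linearly to zero
        lam = lam0 * (1.0 - it / max(n_iter - 1, 1))
        g = g + lam * (kriged - g)
        # 3) impose the target distribution by rank-order remapping
        g = sorted_values[np.argsort(np.argsort(g))]
    return g
```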
Directional radiation pattern in structural-acoustic coupled system
NASA Astrophysics Data System (ADS)
Seo, Hee-Seon; Kim, Yang-Hann
2005-07-01
In this paper we demonstrate the possibility of designing a radiator using structural-acoustic interaction by predicting the pressure distribution and radiation pattern of a structural-acoustic coupled system composed of a wall and two spaces. When a wall separates two spaces, its role in transporting the acoustic characteristics between the spaces is important. The spaces can be categorized as a bounded finite space and an unbounded infinite space. The wall considered in this study comprises two plates and an opening, and it separates one space that is highly reverberant from another that is unbounded and free of reflections. This rather hypothetical circumstance is selected to study the general coupling problem between finite and infinite acoustic domains. We develop an equation that predicts the energy distribution and energy flow in the two spaces separated by the wall, and computational examples are presented. Three typical radiation patterns, steered, focused, and omnidirectional, are presented. A designed radiation pattern obtained with an optimal design algorithm is also presented.
Multiscale high-order/low-order (HOLO) algorithms and applications
NASA Astrophysics Data System (ADS)
Chacón, L.; Chen, G.; Knoll, D. A.; Newman, C.; Park, H.; Taitano, W.; Willert, J. A.; Womeldorff, G.
2017-02-01
We review the state of the art in the formulation, implementation, and performance of so-called high-order/low-order (HOLO) algorithms for challenging multiscale problems. HOLO algorithms attempt to couple one or several high-complexity physical models (the high-order model, HO) with low-complexity ones (the low-order model, LO). The primary goal of HOLO algorithms is to achieve nonlinear convergence between HO and LO components while minimizing memory footprint and managing the computational complexity in a practical manner. Key to the HOLO approach is the use of the LO representations to address temporal stiffness, effectively accelerating the convergence of the HO/LO coupled system. The HOLO approach is broadly underpinned by the concept of nonlinear elimination, which enables segregation of the HO and LO components in ways that can effectively use heterogeneous architectures. The accuracy and efficiency benefits of HOLO algorithms are demonstrated with specific applications to radiation transport, gas dynamics, plasmas (both Eulerian and Lagrangian formulations), and ocean modeling. Across this broad application spectrum, HOLO algorithms achieve significant accuracy improvements at a fraction of the cost compared to conventional approaches. It follows that HOLO algorithms hold significant potential for high-fidelity system scale multiscale simulations leveraging exascale computing.
MODTRAN4 radiative transfer modeling for atmospheric correction
NASA Astrophysics Data System (ADS)
Berk, Alexander; Anderson, Gail P.; Bernstein, Lawrence S.; Acharya, Prabhat K.; Dothe, H.; Matthew, Michael W.; Adler-Golden, Steven M.; Chetwynd, James H.; Richtsmeier, Steven C.; Pukall, Brian; Allred, Clark L.; Jeong, Laila S.; Hoke, Michael L.
1999-10-01
MODTRAN4, the latest publicly released version of MODTRAN, provides many new and important options for modeling atmospheric radiation transport. A correlated-k algorithm improves multiple scattering, eliminates Curtis-Godson averaging, and introduces Beer's Law dependencies into the band model. An optimized 15 cm⁻¹ band model provides over a 10-fold increase in speed over the standard MODTRAN 1 cm⁻¹ band model with comparable accuracy when higher spectral resolution results are unnecessary. The MODTRAN ground surface has been upgraded to include the effects of Bidirectional Reflectance Distribution Functions (BRDFs) and adjacency. The BRDFs are entered using standard parameterizations and are coupled into line-of-sight surface radiance calculations.
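For illustration, the core of any correlated-k treatment is replacing a line-by-line average of Beer's law with a quadrature over the cumulative k-distribution; a minimal sketch (not MODTRAN4's band model itself):

```python
import numpy as np

def band_transmittance_ck(k_lines, u, n_g=8):
    """Band-mean transmittance from a k-distribution (illustrative).
    k_lines : monochromatic absorption coefficients sampled across the band
    u       : absorber path amount, so the monochromatic T is exp(-k u)
    Sorting k into its cumulative distribution k(g) and quadrating over g is
    what restores Beer's-law-like behaviour to a band model."""
    k_sorted = np.sort(np.asarray(k_lines, dtype=float))
    g = (np.arange(k_sorted.size) + 0.5) / k_sorted.size   # cumulative probability
    g_nodes = (np.arange(n_g) + 0.5) / n_g                 # equal-weight g points
    k_g = np.interp(g_nodes, g, k_sorted)
    return float(np.mean(np.exp(-k_g * u)))
```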
Design Considerations of a Virtual Laboratory for Advanced X-ray Sources
NASA Astrophysics Data System (ADS)
Luginsland, J. W.; Frese, M. H.; Frese, S. D.; Watrous, J. J.; Heileman, G. L.
2004-11-01
The field of scientific computation has greatly advanced in the last few years, resulting in the ability to perform complex computer simulations that can predict the performance of real-world experiments in a number of fields of study. Among the forces driving this new computational capability is the advent of parallel algorithms, allowing calculations in three-dimensional space with realistic time scales. Electromagnetic radiation sources driven by high-voltage, high-current electron beams offer an area in which to further push the state of the art in high-fidelity, first-principles simulation tools. The physics of these x-ray sources combines kinetic plasma physics (electron beams) with dense, fluid-like plasma physics (anode plasmas) and x-ray generation (bremsstrahlung). There are a number of mature techniques and software packages for dealing with the individual aspects of these sources, such as Particle-In-Cell (PIC), Magneto-Hydrodynamics (MHD), and radiation transport codes. The current effort is focused on developing an object-oriented software environment using the Rational Unified Process and the Unified Modeling Language (UML) to provide a framework in which multiple 3D parallel physics packages, such as a PIC code (ICEPIC), an MHD code (MACH), and an x-ray transport code (ITS), can co-exist in a system-of-systems approach to modeling advanced x-ray sources. Initial software design and assessments of the fidelity of the various physics algorithms will be presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacFarlane, Joseph J.; Golovkin, I. E.; Woodruff, P. R.
2009-08-07
This Final Report summarizes work performed under DOE STTR Phase II Grant No. DE-FG02-05ER86258 during the project period from August 2006 to August 2009. The project, “Development of Spectral and Atomic Models for Diagnosing Energetic Particle Characteristics in Fast Ignition Experiments,” was led by Prism Computational Sciences (Madison, WI) and involved collaboration with subcontractors University of Nevada-Reno and Voss Scientific (Albuquerque, NM). In this project, we have:
- developed and implemented a multi-dimensional, multi-frequency radiation transport model in the LSP hybrid fluid-PIC (particle-in-cell) code [1,2];
- updated the LSP code to support accurate equation-of-state (EOS) tables generated by Prism's PROPACEOS [3] code, to compute more accurate temperatures in high energy density physics (HEDP) plasmas;
- updated LSP to support Prism's multi-frequency opacity tables;
- generated equation-of-state and opacity data for LSP simulations of several materials used in plasma jet experimental studies;
- developed and implemented parallel processing techniques for the radiation physics algorithms in LSP;
- benchmarked the new radiation transport and radiation physics algorithms in LSP against analytic solutions and results from numerical radiation-hydrodynamics calculations;
- performed simulations using Prism radiation physics codes to address issues related to radiative cooling and ionization dynamics in plasma jet experiments;
- performed simulations to study the effects of radiation transport and radiation losses due to electrode contaminants in plasma jet experiments;
- updated the LSP code to generate output using NetCDF, providing a better, more flexible interface to SPECT3D [4] for post-processing LSP output;
- updated the SPECT3D code to better support post-processing of the large-scale 2-D and 3-D datasets generated by simulation codes such as LSP;
- updated the atomic physics modeling to provide more comprehensive and accurate atomic databases that feed into the radiation physics modeling (spectral simulations and opacity tables);
- developed polarization spectroscopy modeling techniques suitable for diagnosing energetic particle characteristics in HEDP experiments.
A description of these items is provided in this report. These efforts lay the groundwork for utilizing the LSP and SPECT3D codes in providing simulation support for DOE-sponsored HEDP experiments, such as plasma jet and fast ignition physics experiments. We believe that, taken together, the LSP and SPECT3D codes have unique capabilities for advancing our understanding of the physics of these HEDP plasmas. Based on conversations early in this project with our DOE program manager, Dr. Francis Thio, our efforts emphasized developing radiation physics and atomic modeling capabilities that can be utilized in the LSP PIC code, and performing radiation physics studies for plasma jets. A relatively minor component focused on the development of methods to diagnose energetic particle characteristics in short-pulse laser experiments related to fast ignition physics. The period of performance for the grant was extended by one year, to August 2009, with a one-year no-cost extension at the request of subcontractor University of Nevada-Reno.
Laser Blow-Off Impurity Injection Experiments at the HSX Stellarator
NASA Astrophysics Data System (ADS)
Castillo, J. F.; Bader, A.; Likin, K. M.; Anderson, D. T.; Anderson, F. S. B.; Kumar, S. T. A.; Talmadge, J. N.
2017-10-01
Results from the HSX laser blow-off experiment are presented and compared to a synthetic diagnostic implemented in the STRAHL impurity transport modeling code in order to measure the impurity transport diffusivity and convective velocity. A laser blow-off impurity injection system is used to rapidly deposit a small, controlled quantity of aluminum into the confinement volume. Five AXUV photodiode arrays are used to take time-resolved measurements of the impurity radiation. The spatially one-dimensional impurity transport code STRAHL is used to calculate a time-dependent plasma emissivity profile. Modeled intensity signals calculated from a synthetic diagnostic code provide direct comparison between plasma simulation and experimental results. An optimization algorithm with impurity transport coefficients acting as free parameters is used to fit the model to experimental data. This work is supported by US DOE Grant DE-FG02-93ER54222.
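A minimal sketch of this fitting loop, with a toy 1-D diffusion-convection stand-in for STRAHL and spatially constant D and v as the free parameters (the actual analysis uses profile-valued coefficients and calibrated chord geometries):

```python
import numpy as np
from scipy.optimize import least_squares

def forward(D, v, r, t_out, n0, dt=1e-5, tau=5e-3):
    """Toy 1-D impurity transport: dn/dt = -d/dr(-D dn/dr + v n) - n/tau.
    A crude stand-in for STRAHL with constant D and v."""
    dr = r[1] - r[0]
    n, t, out = n0.copy(), 0.0, []
    for t_target in t_out:
        while t < t_target - 1e-12:
            # diffusive + convective flux at the cell interfaces
            flux = -D * np.diff(n) / dr + v * 0.5 * (n[:-1] + n[1:])
            dndt = np.zeros_like(n)
            dndt[0] = -flux[0] / dr                      # no flux through the axis
            dndt[1:-1] = -(flux[1:] - flux[:-1]) / dr
            n = np.maximum(n + dt * (dndt - n / tau), 0.0)
            n[-1] = 0.0                                  # impurities lost at the edge
            t += dt
        out.append(n.copy())
    return np.array(out)                                 # shape (n_times, n_radii)

def fit_transport(signals, chords, r, t_out, n0):
    """Least-squares fit of (D, v) so chord-integrated profiles match signals.
    chords: (n_chords, n_radii) geometry matrix; signals: (n_times, n_chords)."""
    def residual(p):
        profiles = forward(p[0], p[1], r, t_out, n0)
        return (profiles @ chords.T - signals).ravel()
    return least_squares(residual, x0=[1.0, -0.5],
                         bounds=([0.01, -10.0], [10.0, 10.0])).x
```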
Modeling and Simulation of Radiative Compressible Flows in Aerodynamic Heating Arc-Jet Facility
NASA Technical Reports Server (NTRS)
Bensassi, Khalil; Laguna, Alejandro A.; Lani, Andrea; Mansour, Nagi N.
2016-01-01
Numerical simulations of an arc-heated flow inside NASA's 20 MW Aerodynamic Heating Facility (AHF) are performed in order to investigate the three-dimensional swirling flow and the current distribution inside the wind tunnel. The plasma is considered in Local Thermodynamic Equilibrium (LTE) and is composed of an air-argon gas mixture. The governing equations are the Navier-Stokes equations with source terms corresponding to Joule heating and radiative cooling. The former is obtained by solving an electric potential equation, while the latter is calculated using an innovative, massively parallel ray-tracing algorithm. The fully coupled system is closed by thermodynamic relations and transport properties obtained from the Chapman-Enskog method. A novel strategy was developed to enable the flow solver and the radiation calculation to be performed independently and simultaneously using different numbers of processors; a drastic reduction in computational cost was achieved with this strategy. Details of the numerical methods used for space discretization, time integration, and the ray-tracing algorithm will be presented, and the effect of radiative cooling on the dynamics of the flow will be investigated. The complete set of equations was implemented within the COOLFluiD framework. Fig. 1 shows the geometry of the anode and part of the constrictor of the AHF; Fig. 2 shows the velocity field distribution in the (x-y) plane and the streamlines in the (z-y) plane.
Two new algorithms to combine kriging with stochastic modelling
NASA Astrophysics Data System (ADS)
Venema, Victor; Lindau, Ralf; Varnai, Tamas; Simmer, Clemens
2010-05-01
Two main groups of statistical methods used in the Earth sciences are geostatistics and stochastic modelling. Geostatistical methods, such as various kriging algorithms, aim at estimating the mean value for every point as well as possible. In case of sparse measurements, such fields have less variability at small scales and a narrower distribution than the true field. This can lead to biases if a nonlinear process driven by such a kriged field is simulated. Stochastic modelling aims at reproducing the statistical structure of the data in space and time. One of the stochastic modelling methods, the so-called surrogate data approach, replicates the value distribution and power spectrum of a certain data set. While stochastic methods reproduce the statistical properties of the data, the location of the measurement is not considered. This requires the use of so-called constrained stochastic models. Because radiative transfer through clouds is a highly nonlinear process, it is essential to model the distribution (e.g. of optical depth, extinction, liquid water content or liquid water path) accurately. In addition, the correlations within the cloud field are important, especially because of horizontal photon transport. This explains the success of surrogate cloud fields for use in 3D radiative transfer studies. Up to now, however, we could only achieve good results for the radiative properties averaged over the field, but not for a radiation measurement located at a certain position. Therefore we have developed a new algorithm that combines the accuracy of stochastic (surrogate) modelling with the positioning capabilities of kriging. In this way, we can automatically profit from the large geostatistical literature and software. This algorithm is similar to the standard iterative amplitude adjusted Fourier transform (IAAFT) algorithm, but has an additional iterative step in which the surrogate field is nudged towards the kriged field. The nudging strength is gradually reduced to zero during successive iterations. A second algorithm, which we call step-wise kriging, pursues the same aim (see the sketch below): each time the kriging algorithm estimates a value, noise is added to it, after which this new point is accounted for in the estimation of all later points. In this way, the autocorrelation of the step-kriged field is close to that found in the pseudo-measurements. The amount of noise is determined by the kriging uncertainty. The algorithms are tested on cloud fields from large eddy simulations (LES). On these clouds, a measurement is simulated. From these pseudo-measurements, we estimate the power spectrum for the surrogates, the semi-variogram for the (step-wise) kriging, and the distribution. Furthermore, the pseudo-measurement is kriged. Because we work with LES clouds and the truth is known, we can validate the algorithms by performing 3D radiative transfer calculations on the original LES clouds and on the two new types of stochastic clouds. For comparison, the radiative properties of the kriged fields and standard surrogate fields are also computed. Preliminary results show that both algorithms reproduce the structure of the original clouds well, and the minima and maxima are located where the pseudo-measurements see them. The main problem for the quality of the structure and the root-mean-square error is the amount of data, which is especially limited in the case of just one zenith-pointing measurement.
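A sketch of the step-wise kriging idea under simple assumptions (zero-mean simple kriging, exponential covariance, 1-D grid); the nugget, variogram model, and visiting order are all illustrative choices:

```python
import numpy as np

def stepwise_krige(x_obs, y_obs, x_grid, sill=1.0, corr_len=0.2, seed=0):
    """Step-wise kriging sketch: visit grid points in order, simple-krige each
    value from everything known so far, add noise scaled by the kriging
    standard deviation, then treat the simulated value as data for all later
    points. Assumes a zero-mean field and an exponential covariance."""
    rng = np.random.default_rng(seed)
    cov = lambda h: sill * np.exp(-np.abs(h) / corr_len)
    xs, ys = list(x_obs), list(y_obs)
    out = np.empty(len(x_grid))
    for i, x0 in enumerate(x_grid):
        xa, ya = np.array(xs), np.array(ys)
        K = cov(xa[:, None] - xa[None, :]) + 1e-9 * np.eye(len(xa))  # tiny nugget
        k = cov(xa - x0)
        w = np.linalg.solve(K, k)
        var = max(sill - w @ k, 0.0)              # simple-kriging variance
        out[i] = w @ ya + rng.standard_normal() * np.sqrt(var)
        xs.append(x0); ys.append(out[i])          # condition later points on it
    return out
```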
Fully kinetic particle simulations of high pressure streamer propagation
NASA Astrophysics Data System (ADS)
Rose, David; Welch, Dale; Thoma, Carsten; Clark, Robert
2012-10-01
Streamer and leader formation in high-pressure devices is a dynamic process involving a hierarchy of physical phenomena, including elastic and inelastic particle collisions in the gas, radiation generation, transport and absorption, and electrode interactions. We have performed 2D and 3D fully electromagnetic, implicit particle-in-cell simulations of gas breakdown leading to streamer formation under DC and RF fields. The model uses a Monte Carlo treatment for all particle interactions and includes discrete photon generation, transport, and absorption for ultraviolet and soft x-ray radiation. Central to the realization of this fully kinetic particle treatment is an algorithm [D. R. Welch, et al., J. Comp. Phys. 227, 143 (2007)] that manages the total particle count by species while preserving the local momentum distribution functions and conserving charge. These models are being applied to the analysis of high-pressure gas switches [D. V. Rose, et al., Phys. Plasmas 18, 093501 (2011)] and gas-filled RF accelerator cavities [D. V. Rose, et al., Proc. IPAC12, to appear].
Evaluation of Long-term Aerosol Data Records from SeaWiFS over Land and Ocean
NASA Astrophysics Data System (ADS)
Bettenhausen, C.; Hsu, C.; Jeong, M.; Huang, J.
2010-12-01
Deserts around the globe produce mineral dust aerosols that may then be transported over cities, across continents, or even oceans. These aerosols affect the Earth’s energy balance through direct and indirect interactions with incoming solar radiation. They also have a biogeochemical effect as they deliver scarce nutrients to remote ecosystems. Large dust storms regularly disrupt air traffic and are a general nuisance to those living in transport regions. In the past, measuring dust aerosols has been incomplete at best. Satellite retrieval algorithms were limited to oceans or vegetated surfaces and typically neglected desert regions due to their high surface reflectivity in the mid-visible and near-infrared wavelengths, which have been typically used for aerosol retrievals. The Deep Blue aerosol retrieval algorithm was developed to resolve these shortcomings by utilizing the blue channels from instruments such as the Sea-Viewing Wide-Field-of-View Sensor (SeaWiFS) and the Moderate Resolution Imaging Spectroradiometer (MODIS) to infer aerosol properties over these highly reflective surfaces. The surface reflectivity of desert regions is much lower in the blue channels and thus it is easier to separate the aerosol and surface signals than at the longer wavelengths used in other algorithms. More recently, the Deep Blue algorithm has been expanded to retrieve over vegetated surfaces and oceans as well. A single algorithm can now follow dust from source to sink. In this work, we introduce the SeaWiFS instrument and the Deep Blue aerosol retrieval algorithm. We have produced global aerosol data records over land and ocean from 1997 through 2009 using the Deep Blue algorithm and SeaWiFS data. We describe these data records and validate them with data from the Aerosol Robotic Network (AERONET). We also show the relative performance compared to the current MODIS Deep Blue operational aerosol data in desert regions. The current results are encouraging and this dataset will be useful to future studies in understanding the effects of dust aerosols on global processes, long-term aerosol trends, quantifying dust emissions, transport, and inter-annual variability.
Reduced discretization error in HZETRN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slaba, Tony C., E-mail: Tony.C.Slaba@nasa.gov; Blattnig, Steve R., E-mail: Steve.R.Blattnig@nasa.gov; Tweed, John, E-mail: jtweed@odu.edu
2013-02-01
The deterministic particle transport code HZETRN is an efficient analysis tool for studying the effects of space radiation on humans, electronics, and shielding materials. In a previous work, numerical methods in the code were reviewed, and new methods were developed that further improved efficiency and reduced overall discretization error. It was also shown that the remaining discretization error could be attributed to low energy light ions (A < 4) with residual ranges smaller than the physical step-size taken by the code. Accurately resolving the spectrum of low energy light particles is important in assessing risk associated with astronaut radiation exposure. In this work, modifications to the light particle transport formalism are presented that accurately resolve the spectrum of low energy light ion target fragments. The modified formalism is shown to significantly reduce overall discretization error and allows a physical approximation to be removed. For typical step-sizes and energy grids used in HZETRN, discretization errors for the revised light particle transport algorithms are shown to be less than 4% for aluminum and water shielding thicknesses as large as 100 g/cm² exposed to both solar particle event and galactic cosmic ray environments.
Verbeke, J. M.; Petit, O.
2016-06-01
From nuclear safeguards to homeland security applications, the need for the better modeling of nuclear interactions has grown over the past decades. Current Monte Carlo radiation transport codes compute average quantities with great accuracy and performance; however, performance and averaging come at the price of limited interaction-by-interaction modeling. These codes often lack the capability of modeling interactions exactly: for a given collision, energy is not conserved, energies of emitted particles are uncorrelated, and multiplicities of prompt fission neutrons and photons are uncorrelated. Many modern applications require more exclusive quantities than averages, such as the fluctuations in certain observables (e.g., the neutron multiplicity) and correlations between neutrons and photons. In an effort to meet this need, the radiation transport Monte Carlo code TRIPOLI-4® was modified to provide a specific mode that models nuclear interactions in a full analog way, replicating as much as possible the underlying physical process. Furthermore, the computational model FREYA (Fission Reaction Event Yield Algorithm) was coupled with TRIPOLI-4 to model complete fission events. As a result, FREYA automatically includes fluctuations as well as correlations resulting from conservation of energy and momentum.
Estimating solar radiation for plant simulation models
NASA Technical Reports Server (NTRS)
Hodges, T.; French, V.; Leduc, S.
1985-01-01
Five algorithms producing daily solar radiation surrogates from daily temperatures and rainfall were evaluated using measured solar radiation data for seven U.S. locations. The algorithms were compared both in terms of the accuracy of their daily solar radiation estimates and in terms of their response when used in a plant growth simulation model (CERES-Wheat). Requirements for the accuracy of solar radiation inputs to plant growth simulation models are discussed. One algorithm is recommended as best suited for use in these models when neither measured nor satellite-estimated solar radiation values are available.
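The abstract does not name the five algorithms; a widely used temperature-based surrogate of this family is the Hargreaves-type estimate, sketched here with an ad-hoc rainy-day reduction as a stand-in for the general approach:

```python
import numpy as np

def solar_surrogate(ra, tmax_c, tmin_c, rain_mm, k_rs=0.16, wet_factor=0.75):
    """Hargreaves-type daily solar radiation surrogate:
    Rs = k_rs * sqrt(Tmax - Tmin) * Ra, reduced on rainy days (ad hoc).
    ra : extraterrestrial radiation, MJ m-2 day-1; k_rs ~ 0.16 (interior)
    or ~ 0.19 (coastal) in the usual calibration."""
    dtr = np.maximum(np.asarray(tmax_c) - np.asarray(tmin_c), 0.0)
    rs = k_rs * np.sqrt(dtr) * ra
    return np.where(np.asarray(rain_mm) > 0.0, wet_factor * rs, rs)
```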
Satellite Monitoring of Long-Range Transport of Asian Dust Storms from Sources to Sinks
NASA Astrophysics Data System (ADS)
Hsu, N.; Tsay, S.; Jeong, M.; King, M.; Holben, B.
2007-05-01
Among the many components that contribute to air pollution, airborne mineral dust plays an important role due to its biogeochemical impact on the ecosystem and its radiative-forcing effect on the climate system. In East Asia, dust storms frequently accompany the cold and dry air masses that occur as part of spring-time cold front systems. China's capital, Beijing, and other large cities are on the primary pathway of these dust storm plumes, and their passage over such population centers causes flight delays, pushes grit through windows and doors, and forces people indoors. Furthermore, during the spring these anthropogenic and natural air pollutants, once generated over the source regions, can be transported out of the boundary layer into the free troposphere and can travel thousands of kilometers across the Pacific into the United States and beyond. In this paper, we will demonstrate the capability of a new satellite algorithm to retrieve aerosol optical thickness and single scattering albedo over bright-reflecting surfaces such as urban areas and deserts. Such retrievals have been difficult to perform using previously available algorithms that use wavelengths from the mid-visible to the near IR, because these have trouble separating the aerosol signal from the contribution of the bright surface reflectance. The new algorithm, called Deep Blue, utilizes blue-wavelength measurements from instruments such as SeaWiFS and MODIS to infer the properties of aerosols, since the surface reflectance over land in the blue part of the spectrum is much lower than at longer wavelengths. The Deep Blue algorithm has recently been integrated into the MODIS processing stream and has begun to provide aerosol products over land as part of the operational MYD04 products. In this talk, we will show comparisons of the MODIS Deep Blue products with data from AERONET sunphotometers on a global basis; the results indicate reasonable agreement between the two. These new satellite products will allow scientists to determine quantitatively the aerosol properties near sources and their evolution along transport pathways using high-spatial-resolution measurements from SeaWiFS and MODIS-like instruments. We will also utilize the multiyear satellite measurements from MODIS and SeaWiFS to investigate the interannual variability of source strength, pathway, and radiative forcing associated with these dust outbreaks in East Asia.
Calculation of heat flux through a wall containing a cavity: Comparison of several models
NASA Astrophysics Data System (ADS)
Park, J. E.; Kirkpatrick, J. R.; Tunstall, J. N.; Childs, K. W.
1986-02-01
This paper describes the calculation of heat transfer through the standard stud wall structure of a residential building, with no insulation in the wall cavity. Results from five test cases are presented. The first four represent progressively more complicated approximations to the heat transfer through and within a hollow wall structure; the fifth adds the model components necessary to severely inhibit the radiative energy transport across the empty cavity. Flow within the wall cavity is calculated from the Navier-Stokes equations and the energy conservation equation for an ideal gas, using an improvement to the Implicit-Compressible Eulerian (ICE) algorithm of Harlow and Amsden. An algorithm is described to efficiently couple the fluid flow calculations to the radiation-conduction model for the solid portions of the system. Results indicate that conduction through sill plates contributes less than 2% of the total heat transferred through a composite wall. All of the other elements (conduction through wallboard, sheathing, and siding; convection from siding and wallboard to ambients; and radiation across the wall cavity) are required to accurately predict the heat transfer through a wall. Addition of a foil liner on one inner surface of the wall cavity reduces the total heat transferred by almost 50%.
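A back-of-envelope resistance-network version of this configuration, with convection and radiation in parallel across the cavity, shows why a low-emissivity foil liner suppresses the radiative path (layer R-values and the convective conductance are illustrative handbook-style numbers, not outputs of the paper's CFD model):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m-2 K-4

def wall_flux(t_in=293.0, t_out=273.0, eps1=0.9, eps2=0.9):
    """Steady flux (W/m^2) through the cavity section of a hollow stud wall,
    modeled as series thermal resistances with cavity convection and
    radiation acting in parallel."""
    t_mean = 0.5 * (t_in + t_out)
    # linearized radiative conductance between two parallel gray surfaces
    h_rad = 4.0 * SIGMA * t_mean**3 / (1.0 / eps1 + 1.0 / eps2 - 1.0)
    h_conv = 3.0                     # assumed cavity convective conductance
    r_cavity = 1.0 / (h_rad + h_conv)
    r_layers = 0.41                  # films + wallboard + sheathing + siding, m2 K/W
    return (t_in - t_out) / (r_layers + r_cavity)

# Dropping one cavity emissivity to foil values (~0.05) suppresses the
# radiative path; the paper reports nearly 50% total reduction for its wall.
print(wall_flux(), wall_flux(eps2=0.05))
```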
Preliminary analyses of space radiation protection for lunar base surface systems
NASA Technical Reports Server (NTRS)
Nealy, John E.; Wilson, John W.; Townsend, Lawrence W.
1989-01-01
Radiation shielding analyses are performed for candidate lunar base habitation modules. The study primarily addresses potential hazards due to contributions from the galactic cosmic rays. The NASA Langley Research Center's high energy nucleon and heavy ion transport codes are used to compute the propagation of radiation through conventional and regolith shield materials. Computed values of linear energy transfer are converted to biological dose equivalent using quality factors established by the International Commission on Radiological Protection. Spectral fluxes of heavy charged particles and corresponding dosimetric quantities are computed for a series of thicknesses in various shield media and are used as an input data base for algorithms pertaining to specific shielded geometries. Dosimetric results are presented as isodose contour maps of shielded configuration interiors. The dose predictions indicate that shielding requirements are substantial, and an abbreviated uncertainty analysis shows that better definition of the space radiation environment, as well as improvement in nuclear interaction cross-section data, can greatly increase the accuracy of shield requirement predictions.
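The LET-to-dose-equivalent conversion mentioned here has a simple generic form; the sketch below uses the modern ICRP-60 quality factor for definiteness, whereas the 1989 study used the ICRP factors of its day:

```python
import numpy as np

def q_icrp60(let_kev_um):
    """ICRP-60 quality factor Q(L) as a function of unrestricted LET in water."""
    L = np.maximum(np.asarray(let_kev_um, dtype=float), 1e-12)
    return np.where(L < 10.0, 1.0,
                    np.where(L <= 100.0, 0.32 * L - 2.2, 300.0 / np.sqrt(L)))

def dose_equivalent_sv(fluence_cm2, let_kev_um):
    """Dose equivalent from LET-binned fluences behind a shield, using
    D(Gy) = 1.602e-9 * LET(keV/um) * Phi(cm^-2) in unit-density tissue."""
    let = np.asarray(let_kev_um, dtype=float)
    dose_gy = 1.602e-9 * let * np.asarray(fluence_cm2, dtype=float)
    return float(np.sum(q_icrp60(let) * dose_gy))
```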
NASA Astrophysics Data System (ADS)
Kim, Jeong-Gyu; Kim, Woong-Tae; Ostriker, Eve C.; Skinner, M. Aaron
2017-12-01
We present an implementation of an adaptive ray-tracing (ART) module in the Athena hydrodynamics code that accurately and efficiently handles the radiative transfer involving multiple point sources on a three-dimensional Cartesian grid. We adopt a recently proposed parallel algorithm that uses nonblocking, asynchronous MPI communications to accelerate transport of rays across the computational domain. We validate our implementation through several standard test problems, including the propagation of radiation in vacuum and the expansions of various types of H II regions. Additionally, scaling tests show that the cost of a full ray trace per source remains comparable to that of the hydrodynamics update on up to ~10³ processors. To demonstrate application of our ART implementation, we perform a simulation of star cluster formation in a marginally bound, turbulent cloud, finding that its star formation efficiency is 12% when both radiation pressure forces and photoionization by UV radiation are treated. We directly compare the radiation forces computed from the ART scheme with those from the M1 closure relation. Although the ART and M1 schemes yield similar results on large scales, the latter is unable to resolve the radiation field accurately near individual point sources.
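The core operation that ART accelerates can be illustrated with a deliberately non-adaptive 2-D long-characteristics sketch, marching rays from one point source and depositing attenuated, geometrically diluted flux (real ART splits rays adaptively so each cell is sampled by a bounded number of rays):

```python
import numpy as np

def trace_point_source(density, src, luminosity, n_rays=256, sigma=1.0, ds=0.5):
    """March rays outward from a point source on a 2-D grid, accumulating
    optical depth and depositing attenuated, 1/(2 pi r)-diluted flux in each
    cell the ray crosses. Illustrative, non-adaptive sampling only."""
    ny, nx = density.shape
    src = np.asarray(src, dtype=float)
    flux = np.zeros_like(density, dtype=float)
    for ang in 2.0 * np.pi * (np.arange(n_rays) + 0.5) / n_rays:
        d = np.array([np.cos(ang), np.sin(ang)])
        tau, r = 0.0, ds
        while True:
            x, y = src + r * d
            i, j = int(y), int(x)
            if not (0 <= i < ny and 0 <= j < nx):
                break                              # ray left the domain
            tau += sigma * density[i, j] * ds      # accumulate optical depth
            flux[i, j] += luminosity / n_rays * np.exp(-tau) / (2.0 * np.pi * r)
            r += ds
    return flux
```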
Modeling NIF experimental designs with adaptive mesh refinement and Lagrangian hydrodynamics
NASA Astrophysics Data System (ADS)
Koniges, A. E.; Anderson, R. W.; Wang, P.; Gunney, B. T. N.; Becker, R.; Eder, D. C.; MacGowan, B. J.; Schneider, M. B.
2006-06-01
Incorporation of adaptive mesh refinement (AMR) into Lagrangian hydrodynamics algorithms allows for the creation of a highly powerful simulation tool effective for complex target designs with three-dimensional structure. We are developing an advanced modeling tool that includes AMR and traditional arbitrary Lagrangian-Eulerian (ALE) techniques. Our goal is the accurate prediction of vaporization, disintegration and fragmentation in National Ignition Facility (NIF) experimental target elements. Although our focus is on minimizing the generation of shrapnel in target designs and protecting the optics, the general techniques are applicable to modern advanced targets that include three-dimensional effects such as those associated with capsule fill tubes. Several essential computations in ordinary radiation hydrodynamics need to be redesigned in order to allow for AMR to work well with ALE, including algorithms associated with radiation transport. Additionally, for our goal of predicting fragmentation, we include elastic/plastic flow into our computations. We discuss the integration of these effects into a new ALE-AMR simulation code. Applications of this newly developed modeling tool as well as traditional ALE simulations in two and three dimensions are applied to NIF early-light target designs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stavrov, Andrei; Yamamoto, Eugene
Radiation Portal Monitors (RPM) with plastic detectors are the main instruments used for primary border (customs) radiation control. RPM are widely used because they are simple, reliable, relatively inexpensive, and highly sensitive. However, experience using RPM in various countries has revealed some grave shortcomings. There is a dramatic decrease in the probability of detecting radioactive sources under high suppression of the natural gamma background (radiation control of heavy cargoes, containers and, especially, trains). NORM (Naturally Occurring Radioactive Material) present in objects under control triggers so-called 'nuisance alarms', requiring a secondary inspection for source verification. At a number of sites, the rate of such alarms is so high that it significantly complicates the work of customs and border officers. This paper presents a brief description of a new variant of the ASIA-New algorithm (New Advanced Source Identification Algorithm), developed by the Rapiscan company. It also demonstrates results of different tests and the capability of the new system to overcome the shortcomings stated above. The new electronics and ASIA-New enable RPM to detect radioactive sources under high background suppression (tested at 15-30%) and to verify detected NORM (KCl) and artificial isotopes (Co-57, Ba-133, and others). The new variant of ASIA is based on physical principles, a phenomenological approach, and analysis of changes in several important parameters during the vehicle's passage through the monitor control area. A main advantage of the new system is that it can be easily installed into any RPM with plastic detectors. Given that more than 4,000 RPM have been installed worldwide, upgrading them with ASIA-New may significantly increase the probability of detecting and verifying radioactive sources, even those masked by NORM. The algorithm was tested on 1,395 passages of different vehicles (cars, trucks, and trailers) without radioactive sources, and on 4,015 passages of these vehicles with radioactive sources of different activity (Co-57, Ba-133, Cs-137, Co-60, Ra-226, Th-232), including sources masked by NORM (K-40). (authors)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malkov, Victor N.; Rogers, David W.O.
The coupling of MRI and radiation treatment systems for magnetic resonance guided radiation therapy necessitates a reliable, magnetic-field-capable Monte Carlo (MC) code. In addition to the influence of the magnetic field on dose distributions, the question of proper calibration has arisen due to the several-percent variation of ion chamber and solid state detector responses in magnetic fields compared to the 0 T case (Reynolds et al., Med. Phys., 2013). In the absence of a magnetic field, EGSnrc has been shown to pass the Fano cavity test (a rigorous benchmarking tool of MC codes) at the 0.1% level (Kawrakow, Med. Phys., 2000), and similar results should be required of magnetic-field-capable MC algorithms. To properly test such developing MC codes, the Fano cavity theorem has been adapted to function in a magnetic field (Bouchard et al., PMB, 2015). In this work, the Fano cavity test is applied in slab and ion-chamber-like geometries to test the transport options of a magnetic field algorithm implemented in EGSnrc. Results show that the deviation of the MC dose from the expected Fano cavity theory value is highly sensitive to the choice of geometry, and the ion chamber geometry appears to pass the test more easily than larger slab geometries. As magnetic field MC codes begin to be used for dose simulations and correction factor calculations, care must be taken to apply the most rigorous Fano test geometries to ensure the reliability of such algorithms.
Data Fusion for a Vision-Radiological System for Source Tracking and Discovery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enqvist, Andreas; Koppal, Sanjeev
2015-07-01
A multidisciplinary approach to tracking the movement of radioactive sources by fusing data from multiple radiological and visual sensors is under development. The goal is to improve the ability to detect, locate, track, and identify nuclear/radiological threats. The key concept is that widely available visual and depth sensors can impact radiological detection, since the intensity fall-off in the count rate can be correlated to movement in three dimensions. To enable this, we pose an important question: what is the right combination of sensing modalities and vision algorithms that can best complement a radiological sensor for the purpose of detection and tracking of radioactive material? Similarly, what are the radiation detection methods and unfolding algorithms best suited for data fusion with tracking data? Data fusion of multi-sensor data for radiation detection has seen some interesting developments lately. Significant examples include intelligent radiation sensor systems (IRSS), which are based on larger numbers of distributed, similar or identical radiation sensors coupled with position data, forming networks capable of detecting and locating radiation sources. Other developments are gamma-ray imaging systems based on Compton scatter in segmented detector arrays. Similar developments using coded apertures or scatter cameras for neutrons have recently occurred. The main limitation of such systems is not so much their capability but rather their complexity and cost, which are prohibitive for large-scale deployment. Presented here is a fusion system based on simple, low-cost computer vision and radiological sensors for tracking multiple objects and identifying potential radiological materials being transported or shipped. The main focus of this work is the development of two separate calibration algorithms for characterizing the fused sensor system. The deviation from a simple inverse-square fall-off of radiation intensity is explored and accounted for. In particular, the computer vision system enables a map of the distance dependence of the sources being tracked. Infrared, laser, or stereoscopic vision sensors are all options for the computer-vision implementation, depending on interior versus exterior deployment, desired resolution, and other factors. Similarly, the radiation sensors are focused on gamma-ray or neutron detection due to the long travel length and the ability to penetrate even moderate shielding. There is a significant difference between vision sensors and radiation sensors in the way the 'source' or signal is generated. A vision sensor needs an external light source to illuminate the object and then detects the re-emitted illumination (or lack thereof). For a radiation detector, however, the radioactive material is the source itself. The only exception is the field of active interrogation, where radiation is beamed into a material to induce new/additional radiation emission beyond what the material would emit spontaneously. Because the nuclear material is the source itself, all other objects in the environment are 'illuminated' or irradiated by the source. Most radiation will readily penetrate regular material, scatter in new directions, or be absorbed. Thus, if a radiation source is located near a larger object, that object will in turn scatter some radiation that was initially emitted in a direction other than that of the radiation detector, and this can add to the observed count rate.
The effect of this scattering is a deviation from the traditional distance dependence of the radiation signal, and it is a key challenge that requires a combined system-calibration solution and algorithms. Thus, both an algebraic approach and a statistical approach have been developed and independently evaluated to investigate the sensitivity to this deviation from the simplified radiation fall-off as a function of distance. The resulting calibrated system algorithms are used and demonstrated in various laboratory scenarios, and later in realistic tracking scenarios. The selection and testing of radiological and computer-vision sensors for additional specific scenarios will be the subject of ongoing and future work. (authors)
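One way to absorb the scatter-induced deviation is to fit a generalized fall-off model to fused (distance, count-rate) pairs, letting the exponent and a background term float. The sketch below uses synthetic data and a model form we introduce for illustration; a fielded system would use the vision-tracked distances and measured count rates.

```python
# Hedged sketch of the "algebraic" calibration idea: fit count_rate(r) so
# scatter and background shift the effective fall-off exponent away from 2.
import numpy as np
from scipy.optimize import curve_fit

def model(r, A, n, b):
    # A / r**n + b: n is 2 in free space; scatter/background perturb it
    return A / r**n + b

rng = np.random.default_rng(0)
r = np.linspace(0.5, 8.0, 40)                       # distances (m), assumed
counts = rng.poisson(model(r, 5000.0, 1.85, 12.0)).astype(float)

popt, pcov = curve_fit(model, r, counts, p0=(1000.0, 2.0, 0.0),
                       sigma=np.sqrt(np.maximum(counts, 1.0)))
A, n, b = popt
print(f"fitted fall-off exponent n = {n:.2f} (pure inverse square would be 2)")
```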
Advances in Monte-Carlo code TRIPOLI-4®'s treatment of the electromagnetic cascade
NASA Astrophysics Data System (ADS)
Mancusi, Davide; Bonin, Alice; Hugot, François-Xavier; Malouch, Fadhel
2018-01-01
TRIPOLI-4® is a Monte-Carlo particle-transport code developed at CEA-Saclay (France) that is employed in the domains of nuclear-reactor physics, criticality safety, shielding/radiation protection, and nuclear instrumentation. The goal of this paper is to report on current developments, validation, and verification made in TRIPOLI-4 in the electron/positron/photon sector. The new capabilities and improvements concern refinements to the electron transport algorithm, the introduction of a charge-deposition score, a new thick-target bremsstrahlung option, an upgrade of the bremsstrahlung model, and improved electron angular straggling at low energy. The importance of each of these developments is illustrated by comparisons with calculations performed with other codes and with experimental data.
Science Accomplishments from a Decade of Aura OMI/MLS Tropospheric Ozone Measurements
NASA Technical Reports Server (NTRS)
Ziemke, Jerald R.; Douglass, Anne R.; Joiner, Joanna; Duncan, Bryan N.; Olsen, Mark A.; Oman, Luke D.; Witte, Jacquelyn C.; Liu, X.; Wargan, K.; Schoeberl, Mark R.;
2014-01-01
Measurements of tropospheric ozone from the combined Aura OMI and MLS instruments have yielded a large number of new and important science discoveries over the last decade. These discoveries have generated a much greater understanding of biomass burning, lightning NO, and stratosphere-troposphere exchange sources of tropospheric ozone; ENSO dynamics and photochemistry; intra-seasonal variability, including the Madden-Julian Oscillation and convective transport; radiative forcing; measuring ozone pollution from space; improvements to ozone retrieval algorithms; and the evaluation of chemical-transport and chemistry-climate models. The OMI-MLS measurements have been instrumental in giving us a better understanding of the dynamics and chemistry involving tropospheric ozone and the many drivers affecting the troposphere in general. This discussion will provide an overview focusing on our main science results.
New approach based on tetrahedral-mesh geometry for accurate 4D Monte Carlo patient-dose calculation
NASA Astrophysics Data System (ADS)
Han, Min Cheol; Yeom, Yeon Soo; Kim, Chan Hyeong; Kim, Seonghoon; Sohn, Jason W.
2015-02-01
In the present study, to achieve accurate 4D Monte Carlo dose calculation in radiation therapy, we devised a new approach that combines (1) modeling of the patient body using tetrahedral-mesh geometry based on the patient's 4D CT data, (2) continuous movement/deformation of the tetrahedral patient model by interpolation of deformation vector fields acquired through deformable image registration, and (3) direct transport of radiation particles during the movement and deformation of the tetrahedral patient model. The results of our feasibility study show that it is certainly possible to construct 4D patient models (i.e., phantoms) with sufficient accuracy using tetrahedral-mesh geometry and to directly transport radiation particles during continuous movement and deformation of the tetrahedral patient model. This new approach not only produces a more accurate dose distribution in the patient but also replaces the current practice of using multiple 3D voxel phantoms and combining multiple dose distributions after Monte Carlo simulations. For routine clinical application of our new approach, the use of fast automatic segmentation algorithms is a must. In order to achieve both dose accuracy and computation speed simultaneously, the number of tetrahedrons for the lungs should be optimized. Although the current computation speed of our new 4D Monte Carlo simulation approach is slow (i.e., ~40 times slower than that of the conventional dose-accumulation approach), this problem is resolvable by developing, in Geant4, a dedicated navigation class optimized for particle transport in tetrahedral-mesh geometry.
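Step (2) reduces to interpolating mesh node positions between 4D-CT phases while the tetrahedral connectivity stays fixed. A minimal sketch (our construction, not the authors' code) of that interpolation:

```python
# Continuous deformation of a tetrahedral patient model: linearly interpolate
# node positions between two phases; the tetrahedra (connectivity) are shared.
import numpy as np

def interpolate_nodes(nodes_phase_a, nodes_phase_b, t):
    """Node coordinates at fractional time t in [0, 1] between two phases.

    nodes_phase_*: (N, 3) vertex arrays; only vertex positions move.
    """
    return (1.0 - t) * nodes_phase_a + t * nodes_phase_b

# two phases of a single tetrahedron (toy coordinates, cm)
a = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
b = a + np.array([0.0, 0.0, 0.5])       # phase shifted by breathing motion
print(interpolate_nodes(a, b, t=0.25))  # model a quarter of the way through
```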
Three dimensional modeling of cirrus during the 1991 FIRE IFO 2: Detailed process study
NASA Technical Reports Server (NTRS)
Jensen, Eric J.; Toon, Owen B.; Westphal, Douglas L.
1993-01-01
A three-dimensional model of cirrus cloud formation and evolution, including microphysical, dynamical, and radiative processes, was used to simulate cirrus observed in the FIRE Phase 2 Cirrus field program (13 Nov. - 7 Dec. 1991). Sulfate aerosols, solution drops, ice crystals, and water vapor are all treated as interactive elements in the model. Ice crystal size distributions are fully resolved based on calculations of homogeneous freezing of solution drops, growth by water vapor deposition, evaporation, aggregation, and vertical transport. Visible and infrared radiative fluxes and radiative heating rates are calculated using the two-stream algorithm described by Toon et al. Wind velocities, diffusion coefficients, and temperatures were taken from the MAPS analyses and the MM4 mesoscale model simulations. Within the model, moisture is transported and converted to liquid or vapor by the microphysical processes. The simulated cloud bulk and microphysical properties are shown in detail for the Nov. 26 and Dec. 5 case studies. Comparisons with lidar, radar, and in situ data are used to determine how well the simulations reproduced the observed cirrus. The roles played by various processes in the model are described in detail. The potential modes of nucleation are evaluated, and the importance of small-scale variations in temperature and humidity is discussed. The importance of competing ice crystal growth mechanisms (water vapor deposition and aggregation) is evaluated based on model simulations. Finally, the importance of ice crystal shape for crystal growth and the vertical transport of ice is discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bolding, Simon R.; Cleveland, Mathew Allen; Morel, Jim E.
In this paper, we have implemented a new high-order low-order (HOLO) algorithm for solving thermal radiative transfer problems. The low-order (LO) system is based on the spatial and angular moments of the transport equation and a linear-discontinuous finite-element spatial representation, producing equations similar to the standard S2 equations. The LO solver is fully implicit in time and efficiently resolves the nonlinear temperature dependence at each time step. The high-order (HO) solver utilizes exponentially convergent Monte Carlo (ECMC) to give a globally accurate solution for the angular intensity of a fixed-source, pure-absorber transport problem. This global solution is used to compute consistency terms, which require the HO and LO solutions to converge toward the same solution. The use of ECMC allows for the efficient reduction of statistical noise in the Monte Carlo solution, reducing inaccuracies introduced through the LO consistency terms. Finally, we compare results with an implicit Monte Carlo code for one-dimensional gray test problems and demonstrate the efficiency of ECMC over standard Monte Carlo in this HOLO algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Comstock, Jennifer M.; Protat, Alain; McFarlane, Sally A.
2013-05-22
Ground-based radar and lidar observations obtained at the Department of Energy's Atmospheric Radiation Measurement Program's Tropical Western Pacific site in Darwin, Australia, are used to retrieve ice cloud properties in anvil and cirrus clouds. Cloud microphysical properties derived from four different retrieval algorithms (two radar-lidar and two radar-only algorithms) are compared by examining mean profiles and probability density functions of effective radius (Re), ice water content (IWC), extinction, ice number concentration, ice crystal fall speed, and vertical air velocity. Retrieval algorithm uncertainty is quantified using radiative flux closure exercises. The effects of uncertainty in the retrieved quantities on the cloud radiative effect and radiative heating rates are presented. Our analysis shows that IWC compares well among algorithms, but Re shows significant discrepancies, which is attributed primarily to assumptions of particle shape. Uncertainty in Re and IWC translates into sometimes large differences in cloud radiative effect (CRE), though the majority of cases have a CRE difference of roughly 10 W m-2 on average. These differences, which we believe are primarily driven by the uncertainty in Re, can cause up to a 2 K/day difference in the radiative heating rates between algorithms.
Using Machine Learning to Predict MCNP Bias
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grechanuk, Pavel Aleksandrovi
For many real-world applications in radiation transport where simulations are compared to experimental measurements, as in nuclear criticality safety, the bias (simulated minus experimental keff) in the calculation is an extremely important quantity used for code validation. The objective of this project is to accurately predict the bias of MCNP6 [1] criticality calculations using machine learning (ML) algorithms, with the intention of creating a tool that can complement the current nuclear criticality safety methods. In the latest release of MCNP6, the Whisper tool is available for criticality safety analysts and includes a large catalogue of experimental benchmarks, sensitivity profiles, and nuclear data covariance matrices. This data, coming from 1100+ benchmark cases, is used in this study of ML algorithms for criticality safety bias predictions.
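The regression task has a simple shape: benchmark-level features (e.g., sensitivity profiles) in, a scalar bias out. The sketch below is illustrative only, with synthetic stand-in data rather than Whisper's catalogue, and the feature construction is our assumption.

```python
# Illustrative sketch: regress criticality bias on sensitivity-profile-like
# features with a tree ensemble, scoring by cross-validated absolute error.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_cases, n_features = 1100, 50                 # cases ~ benchmark suite size
X = rng.normal(size=(n_cases, n_features))     # stand-in sensitivity features
bias = 0.003 * X[:, 0] - 0.002 * X[:, 1] + rng.normal(0, 5e-4, n_cases)

model = RandomForestRegressor(n_estimators=300, random_state=0)
scores = cross_val_score(model, X, bias, cv=5,
                         scoring="neg_mean_absolute_error")
print(f"CV mean |error| in predicted bias: {-scores.mean():.2e} (dk/k units)")
```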
Quantum transport and nanoplasmonics with carbon nanorings - using HPC in computational nanoscience
NASA Astrophysics Data System (ADS)
Jack, Mark A.
2011-10-01
The central theme of this talk is the theoretical study of toroidal carbon nanostructures as a new form of metamaterial. The interference of ring-generated electromagnetic radiation in a regular array of nanorings driven by an incoming polarized wave front may lead to fascinating new optoelectronics applications. The tight-binding method is used to model charge transport in a carbon nanotorus: all transport observables can be derived from the Green's function of the device region in a non-equilibrium Green's function algorithm. We have calculated the density of states D(E) and transmissivities T(E) between two metallic leads under a small voltage bias. Electron-phonon coupling is included for low-energy phonon modes of armchair and zigzag nanorings, with atomic displacements determined by a collaborator's finite-element-based code. A numerically fast and stable algorithm has been developed via parallel linear-algebra matrix routines (PETSc) with MPI parallelism to reach significant speed-up. Production runs are planned on the NSF XSEDE network. This project was supported in part by a 2010 NSF TeraGrid Fellowship and the Sunshine State Education and Research Computing Alliance (SSERCA). Two summer students were supported as 2010 and 2011 NCSI/Shodor Petascale Computing undergraduate interns. In collaboration with Leon W. Durivage, Adam Byrd, and Mario Encinosa.
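The T(E) computation mentioned above follows the standard NEGF recipe, T(E) = Tr[Γ_L G Γ_R G†]. The sketch below is our simplification (a 1-D tight-binding chain with wide-band-limit lead self-energies, not the torus Hamiltonian); the torus geometry would only change H.

```python
# Compact NEGF transmission sketch: retarded Green's function of the device
# region with broadening matrices from left/right leads.
import numpy as np

N, t, gamma = 40, -2.7, 0.5           # sites, hopping (eV), lead coupling (eV)
H = np.diag(np.full(N - 1, t), 1) + np.diag(np.full(N - 1, t), -1)

def transmission(E, eta=1e-6):
    sigma_L = np.zeros((N, N), complex); sigma_L[0, 0] = -0.5j * gamma
    sigma_R = np.zeros((N, N), complex); sigma_R[-1, -1] = -0.5j * gamma
    G = np.linalg.inv((E + 1j * eta) * np.eye(N) - H - sigma_L - sigma_R)
    gam_L = 1j * (sigma_L - sigma_L.conj().T)
    gam_R = 1j * (sigma_R - sigma_R.conj().T)
    return np.trace(gam_L @ G @ gam_R @ G.conj().T).real  # T(E)

for E in (-1.0, 0.0, 1.0):
    print(f"T({E:+.1f} eV) = {transmission(E):.3f}")
```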
NASA Astrophysics Data System (ADS)
Rana, Verinder S.
This thesis concerns simulations of inertial confinement fusion (ICF). Inertial confinement experiments are carried out at a large-scale facility, the National Ignition Facility. The experiments have failed to reproduce design calculations, so uncertainty quantification of the calculations is an important asset. Uncertainties can be classified as aleatoric or epistemic; this thesis is concerned with aleatoric uncertainty quantification. Among the many uncertain aspects that affect the simulations, we have narrowed our study to several possible uncertainties. The first source of uncertainty we present is the amount of pre-heating of the fuel by hot electrons. The second source of uncertainty we consider is the effect of algorithmic and physical transport diffusion on the hot-spot thermodynamics. Physical transport mechanisms play an important role for the entire duration of the ICF implosion, so modeling them correctly is vital. In addition, codes that simulate material mixing introduce numerically (algorithmically) generated transport across the material interfaces, which adds another layer of uncertainty to the solution through the artificially added diffusion. The third source of uncertainty we consider is physical model uncertainty. The fourth source of uncertainty is a single localized surface perturbation (a divot), which creates a perturbation to the solution that can potentially enter the hot spot and diminish the thermonuclear environment. Jets of ablator material are hypothesized to enter the hot spot and cool the core, contributing to the observed reaction levels being lower than predicted. A plasma transport package, Transport for Inertial Confinement Fusion (TICF), has been implemented in the radiation hydrodynamics code FLASH from the University of Chicago. TICF has thermal, viscous, and mass diffusion models that span the entire ICF implosion regime. For thermal transport, we introduced a thermal conduction model due to Hu, calibrated against quantum molecular dynamics. Numerical approximation uncertainties are introduced by the choice of a hydrodynamic solver for a particular flow. Solvers tend to be diffusive at material interfaces, and the Front Tracking (FT) algorithm, an already available software code in the form of an API, helps to ameliorate such effects. The FT algorithm has also been implemented in FLASH, and we use it to study the effect that divots can have on the hot-spot properties.
Development and Benchmarking of a Hybrid PIC Code For Dense Plasmas and Fast Ignition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Witherspoon, F. Douglas; Welch, Dale R.; Thompson, John R.
Radiation processes play an important role in the study of both fast ignition and other inertial confinement schemes, such as plasma-jet-driven magneto-inertial fusion, both in their effect on energy balance and in generating diagnostic signals. In the latter case, warm and hot dense matter may be produced by the convergence of a plasma shell formed by the merging of an assembly of high-Mach-number plasma jets. This innovative approach has the potential advantage of creating matter of high energy density in voluminous amounts compared with high-power lasers or particle beams. An important application of this technology is as a plasma liner for the flux compression of magnetized plasma to create ultra-high magnetic fields and burning plasmas. HyperV Technologies Corp. has been developing plasma jet accelerator technology in both coaxial and linear railgun geometries to produce plasma jets of sufficient mass, density, and velocity to create such imploding plasma liners. An enabling tool for the development of this technology is the ability to model the plasma dynamics, not only in the accelerators themselves, but also in the resulting magnetized target plasma and within the merging/interacting plasma jets during transport to the target. Welch pioneered numerical modeling of such plasmas (including for fast ignition) using the LSP simulation code. LSP is an electromagnetic, parallelized plasma simulation code under development since 1995. It has a number of innovative features making it uniquely suitable for modeling high-energy-density plasmas, including a hybrid fluid model for electrons that allows electrons in dense plasmas to be modeled with a kinetic or fluid treatment as appropriate. In addition to in-house use at Voss Scientific, several groups carrying out research in fast ignition (LLNL, SNL, UCSD, AWE (UK), and Imperial College (UK)) also use LSP. A collaborative team consisting of HyperV Technologies Corp., Voss Scientific LLC, FAR-TECH, Inc., Prism Computational Sciences, Inc., and Advanced Energy Systems Inc. joined efforts to develop new physics and numerical models for LSP in several key areas to enhance its ability to model high-energy-density plasmas (HEDP). This final report details those efforts. Areas addressed in this research effort include: adding radiation transport to LSP, first in 2D and then fully in 3D; extending the EMHD model to 3D; implementing more advanced radiation and electrode-plasma boundary conditions; and installing more efficient implicit numerical algorithms to speed complex 2-D and 3-D computations. The new capabilities allow modeling of the dominant processes in high-energy-density plasmas and further assist the development and optimization of plasma jet accelerators, with particular attention to MHD instabilities and plasma/wall interaction (based on physical models for ion drag friction and ablation/erosion of the electrodes). In the first funding cycle we implemented a solver for the radiation diffusion equation. To solve this equation in 2-D, we used finite differencing and applied the parallelized sparse-matrix solvers in the PETSc library (Argonne National Laboratory) to the resulting system of equations. A database of the necessary coefficients for materials of interest was assembled using the PROPACEOS and ATBASE codes from Prism. The model was benchmarked against Prism's 1-D radiation hydrodynamics code HELIOS, and against experimental data obtained from HyperV's separately funded plasma jet accelerator development program.
Work in the second funding cycle focused on extending the radiation diffusion model to full 3-D, continuing development of the EMHD model, optimizing the direct-implicit model to speed up calculations, adding multiply ionized atoms, and improving the way boundary conditions are handled in LSP. These new LSP capabilities were then used, along with analytic calculations and Mach2 runs, to investigate plasma jet merging, plasma detachment and transport, restrike, and advanced jet accelerator design. In addition, a strong linkage to diagnostic measurements was made by modeling plasma jet experiments on PLX to support benchmarking of the code. A large number of upgrades and improvements advancing hybrid PIC algorithms were implemented in LSP during the second funding cycle. These include development of fully 3D radiation transport algorithms, new boundary conditions for plasma-electrode interactions, and a charge-conserving equation of state that permits multiply ionized high-Z ions. The final funding cycle focused on 1) mitigating the effects of a slow-growing grid instability, which is most pronounced in plasma-jet frame-expansion problems using the two-fluid Eulerian remap algorithm, 2) extension of the Eulerian smoothing algorithm to allow EOS/radiation modeling, 3) simulations of collisionless shocks formed by jet merging, 4) simulations of merging jets using high-Z gases, 5) generation of PROPACEOS EOS/opacity databases, 6) simulations of plasma jet transport experiments, 7) simulations of plasma jet penetration through transverse magnetic fields, and 8) GPU PIC code development. The tools developed during this project are applicable not only to the study of plasma jets, but also to a wide variety of HEDP plasmas of interest to DOE, including plasmas created in short-pulse laser experiments performed to study fast ignition concepts for inertial confinement fusion.
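The core of the radiation-diffusion capability described above is an implicit linear solve each time step. The following 1-D sketch (our toy: constant diffusion coefficient, backward Euler, SciPy's sparse direct solver standing in for PETSc) shows the structure of that step.

```python
# Minimal 1-D implicit radiation-diffusion step: (I - dt*D*Lap) E_new = E_old
# with zero-flux boundaries, which conserve the total radiation energy.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

nx, dx, dt, D = 200, 0.01, 1e-3, 0.05
E = np.exp(-((np.arange(nx) * dx - 1.0) ** 2) / 0.01)   # initial energy density

r = D * dt / dx**2
main = (1 + 2 * r) * np.ones(nx)
off = -r * np.ones(nx - 1)
main[0] = main[-1] = 1 + r                               # reflective boundaries
A = sp.diags([off, main, off], [-1, 0, 1], format="csc")

total0 = E.sum() * dx
for _ in range(100):
    E = spla.spsolve(A, E)                               # one implicit step
print("relative energy drift:", abs(E.sum() * dx - total0) / total0)
```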
NASA Astrophysics Data System (ADS)
El-Wakil, S. A.; Sallah, M.; El-Hanbaly, A. M.
2015-10-01
The stochastic radiative transfer problem is studied in a participating planar finite continuously fluctuating medium. The problem is considered for specularly and diffusely reflecting boundaries with linear anisotropic scattering. The random variable transformation (RVT) technique is used to obtain the complete average of the solution functions, which are represented by the probability density function (PDF) of the solution process. In the RVT algorithm, a simple integral transformation is applied to the input stochastic process (the extinction function of the medium). This linear transformation enables us to rewrite the stochastic transport equations in terms of the optical random variable (x) and the optical random thickness (L). The transport equation is then solved deterministically to obtain a closed form for the solution as a function of x and L. This solution is used to obtain the PDF of the solution functions by applying the RVT technique between the input random variable (L) and the output process (the solution functions). The obtained averages of the solution functions are used to compute complete analytical averages for some physical quantities of interest, namely, the reflectivity and transmissivity at the medium boundaries. In terms of the average reflectivity and transmissivity, the average of the partial heat fluxes for the generalized problem with an internal radiation source is obtained and represented graphically.
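The RVT step is a change of variables: once the solution is known in closed form as a function of the random optical thickness L, the PDF of the solution follows from the PDF of L. A numerical illustration with a toy example of our choosing (L exponential, solution map T(L) = exp(-L)):

```python
# RVT illustration: push the PDF of L through T(L) = exp(-L) and compare the
# change-of-variables result against a sampled histogram.
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0                                   # assume L ~ Exponential(lam)
L = rng.exponential(1.0 / lam, 100_000)
T = np.exp(-L)                              # closed-form "solution" per sample

# f_T(t) = f_L(-ln t) * |dL/dT| = f_L(-ln t) / t = lam * t**(lam - 1)
t = np.linspace(0.01, 0.99, 99)
pdf_T = lam * t ** (lam - 1.0)

hist, edges = np.histogram(T, bins=50, range=(0, 1), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
err = np.abs(np.interp(centers, t, pdf_T) - hist).max()
print("max |analytic - sampled| PDF deviation:", err)
```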
NASA Technical Reports Server (NTRS)
Zhou, Yaping; Kratz, David P.; Wilber, Anne C.; Gupta, Shashi K.; Cess, Robert D.
2006-01-01
Retrieving surface longwave radiation from space has been a difficult task, since the surface downwelling longwave radiation (SDLW) is an integration of radiation emitted by the entire atmosphere, and emissions from the upper atmosphere are absorbed before reaching the surface. It is particularly problematic when thick clouds are present, since thick clouds virtually block all longwave radiation from above, while satellites observe atmospheric emissions mostly from above the clouds. Zhou and Cess developed an algorithm for retrieving SDLW based upon detailed studies using radiative transfer model calculations and surface radiometric measurements. Their algorithm linked clear-sky SDLW with the surface upwelling longwave flux and column precipitable water vapor. For cloudy-sky cases, they used cloud liquid water path as an additional parameter to account for the effects of clouds. Despite its simplicity, their algorithm performed very well for most geographical regions, except for those regions where the atmospheric conditions near the surface tend to be extremely cold and dry. Systematic errors were also found for areas covered with ice clouds. An improved version of the algorithm was developed that prevents the large errors in the SDLW at low water vapor amounts. The new algorithm also utilizes cloud fraction and cloud liquid and ice water paths measured from the Cloud and the Earth's Radiant Energy System (CERES) satellites to separately compute the clear and cloudy portions of the fluxes. The new algorithm has been validated against surface measurements at 29 stations around the globe for the Terra and Aqua satellites. The results show significant improvement over the original version. The revised Zhou-Cess algorithm is also slightly better than or comparable to more sophisticated algorithms currently implemented in the CERES processing. It will be incorporated in the CERES project as one of the empirical surface radiation algorithms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
2013-07-01
The Mathematics and Computation Division of the American Nuclear Society (ANS) and the Idaho Section of the ANS hosted the 2013 International Conference on Mathematics and Computational Methods Applied to Nuclear Science and Engineering (M and C 2013). This proceedings contains over 250 full papers, with topics including reactor physics; radiation transport; materials science; nuclear fuels; core performance and optimization; reactor systems and safety; fluid dynamics; medical applications; analytical and numerical methods; algorithms for advanced architectures; and validation, verification, and uncertainty quantification.
A mathematical model of the passage of an asteroid-comet body through the Earth’s atmosphere
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shaydurov, V., E-mail: shaidurov04@mail.ru; Siberian Federal University, 79 Svobodny pr., 660041 Krasnoyarsk; Shchepanovskaya, G.
In this paper, a mathematical model and a numerical algorithm are proposed for modeling the complex of phenomena that accompany the passage of a friable asteroid-comet body through the Earth's atmosphere: material ablation, the dissociation of molecules, and radiation. The proposed model is constructed on the basis of the Navier-Stokes equations for viscous heat-conducting gas, with an additional equation for the motion and propagation of a friable lumpy-dust material in air. The energy equation is modified to relate its two kinds of energy: the usual energy of molecular translation (which defines the temperature and pressure) and the combined energy of rotation, oscillation, electronic excitation, dissociation, and radiation. For the mathematical model of the atmosphere, the distribution of density, pressure, and temperature with height is taken from the standard atmosphere. The asteroid-comet body is taken initially as a round body consisting of a friable lumpy-dust material with corresponding density and significant viscosity, both of which far exceed those of the atmospheric gas. A numerical algorithm is proposed for solving the initial-boundary problem for the extended system of Navier-Stokes equations. The algorithm combines a semi-Lagrangian approximation for the Lagrangian transport derivatives with a conforming finite element method for the other terms. The implementation of these approaches is illustrated by a numerical example.
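The semi-Lagrangian treatment of the transport derivative amounts to back-tracing each node along the velocity field and interpolating. A minimal 1-D sketch of that step (our toy with constant velocity and linear interpolation, not the paper's FEM-coupled scheme):

```python
# Minimal 1-D semi-Lagrangian advection: back-trace characteristics one step,
# then interpolate the old field at the departure points.
import numpy as np

nx, u, dt = 200, 1.0, 0.02
x = np.linspace(0.0, 10.0, nx)
f = np.exp(-((x - 3.0) ** 2))            # initial profile

for _ in range(100):
    x_dep = x - u * dt                   # departure points of the characteristics
    f = np.interp(x_dep, x, f)           # interpolation = the transport step

exact = np.exp(-((x - 3.0 - u * dt * 100) ** 2))
print("max error after 100 steps:", np.abs(f - exact).max())
```

The scheme is unconditionally stable with respect to the advection CFL limit, which is the usual motivation for pairing it with an implicit or FEM treatment of the remaining terms.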
Shaker, S B; Dirksen, A; Laursen, L C; Maltbaek, N; Christensen, L; Sander, U; Seersholm, N; Skovgaard, L T; Nielsen, L; Kok-Jensen, A
2004-07-01
To study the short-term reproducibility of lung density measurements by multi-slice computed tomography (CT) using three different radiation doses and three reconstruction algorithms. Twenty-five patients with smoker's emphysema and 25 patients with alpha1-antitrypsin deficiency underwent three scans at 2-week intervals. A low-dose protocol was applied, and images were reconstructed with bone, detail, and soft algorithms. Total lung volume (TLV), 15th percentile density (PD-15), and relative area at -910 Hounsfield units (RA-910) were obtained from the images using Pulmo-CMS software. The reproducibility of PD-15 and RA-910 and the influence of radiation dose, reconstruction algorithm, and type of emphysema were then analysed. The overall coefficient of variation of volume-adjusted PD-15 for all combinations of radiation dose and reconstruction algorithm was 3.7%. The overall standard deviation of volume-adjusted RA-910 was 1.7% (corresponding to a coefficient of variation of 6.8%). Radiation dose, reconstruction algorithm, and type of emphysema had no significant influence on the reproducibility of PD-15 and RA-910. However, the bone algorithm and very low radiation dose result in overestimation of the extent of emphysema. Lung density measurement by CT is a sensitive marker for quantitating both subtypes of emphysema. A CT protocol with radiation dose down to 16 mAs and a soft or detail reconstruction algorithm is recommended.
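Both density indices are simple functionals of the lung HU histogram. A sketch with a synthetic HU array (assumed data; in practice these are computed over segmented lung voxels):

```python
# PD-15 and RA-910 from a voxel HU distribution.
import numpy as np

rng = np.random.default_rng(3)
hu = rng.normal(-860.0, 40.0, 500_000)        # stand-in lung voxel HU values

pd15 = np.percentile(hu, 15)                  # 15th percentile density (PD-15)
ra910 = 100.0 * np.mean(hu < -910.0)          # relative area below -910 HU (%)
print(f"PD-15 = {pd15:.1f} HU, RA-910 = {ra910:.1f}%")
```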
Renner, Franziska
2016-09-01
Monte Carlo simulations are regarded as the most accurate method of solving complex problems in the field of dosimetry and radiation transport. In (external) radiation therapy they are increasingly used for the calculation of dose distributions during treatment planning. In comparison to other algorithms for the calculation of dose distributions, Monte Carlo methods have the capability of improving the accuracy of dose calculations, especially under complex circumstances (e.g., consideration of inhomogeneities). However, there is a lack of knowledge of how accurate the results of Monte Carlo calculations are on an absolute basis. A practical verification of the calculations can be performed by direct comparison with the results of a benchmark experiment. This work presents such a benchmark experiment and compares its results (with detailed consideration of measurement uncertainty) with the results of Monte Carlo calculations using the well-established Monte Carlo code EGSnrc. The experiment was designed to parallel external-beam radiation therapy with respect to the type and energy of the radiation, the materials used, and the kind of dose measurement. Because the properties of the beam have to be well known in order to compare the results of the experiment and the simulation on an absolute basis, the benchmark experiment was performed using the research electron accelerator of the Physikalisch-Technische Bundesanstalt (PTB), whose beam was accurately characterized in advance. The benchmark experiment and the corresponding Monte Carlo simulations were carried out for two different types of ionization chambers and the results were compared. Considering the uncertainty, which is about 0.7% for the experimental values and about 1.0% for the Monte Carlo simulation, the results of the simulation and the experiment coincide. Copyright © 2015. Published by Elsevier GmbH.
Smith, Wade P; Kim, Minsun; Holdsworth, Clay; Liao, Jay; Phillips, Mark H
2016-03-11
To build a new treatment planning approach that extends beyond radiation transport and IMRT optimization by modeling the radiation therapy process and prognostic indicators for more outcome-focused decision making. An in-house treatment planning system was modified to include multiobjective inverse planning, a probabilistic outcome model, and a multi-attribute decision aid. A genetic algorithm generated a set of plans embodying trade-offs between the separate objectives. An influence diagram network modeled the radiation therapy process of prostate cancer using expert opinion, results of clinical trials, and published research. A Markov model calculated a quality-adjusted life expectancy (QALE), which was the endpoint for ranking plans. The multiobjective evolutionary algorithm (MOEA) was designed to produce an approximation of the Pareto front representing optimal trade-offs for IMRT plans. Prognostic information from the dosimetrics of the plans and from patient-specific clinical variables was combined by the influence diagram. QALEs were calculated for each plan for each set of patient characteristics. Sensitivity analyses were conducted to explore changes in outcomes for variations in patient characteristics and dosimetric variables. The model calculated life expectancies that were in agreement with an independent clinical study. The radiation therapy model proposed here integrates a number of different physical, biological, and clinical models into a more comprehensive model. It illustrates a number of the critical aspects of treatment planning that can be improved and represents a more detailed description of the therapy process. A Markov model was implemented to provide a stronger connection between dosimetric variables and clinical outcomes and could provide a practical, quantitative method for making difficult clinical decisions.
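The QALE endpoint comes from a standard Markov cohort calculation. A hedged sketch of that machinery (the states, utilities, and transition probabilities below are invented for illustration, not the study's calibrated values):

```python
# Three-state Markov cohort model: discounted quality-adjusted life-years.
import numpy as np

states = ["well", "recurrence", "dead"]
P = np.array([[0.92, 0.05, 0.03],      # yearly transition probabilities
              [0.00, 0.80, 0.20],
              [0.00, 0.00, 1.00]])
utility = np.array([0.90, 0.60, 0.00]) # quality weights per life-year
discount = 0.97

dist = np.array([1.0, 0.0, 0.0])       # cohort starts in "well"
qale = 0.0
for year in range(60):
    qale += (discount ** year) * dist @ utility
    dist = dist @ P                    # propagate the cohort one year
print(f"quality-adjusted life expectancy: {qale:.2f} QALYs")
```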
Bahadori, Amir A; Van Baalen, Mary; Shavers, Mark R; Dodge, Charles; Semones, Edward J; Bolch, Wesley E
2011-03-21
The National Aeronautics and Space Administration (NASA) performs organ dosimetry and risk assessment for astronauts using model-normalized measurements of the radiation fields encountered in space. To determine the radiation fields in an organ or tissue of interest, particle transport calculations are performed using self-shielding distributions generated with the computer program CAMERA to represent the human body. CAMERA mathematically traces linear rays (or path lengths) through the computerized anatomical man (CAM) phantom, a computational stylized model developed in the early 1970s with organ and body profiles modeled using solid shapes and scaled to represent the body morphometry of the 1950 50th percentile (PCTL) Air Force male. With the increasing use of voxel phantoms in medical and health physics, a conversion from a mathematically based to a voxel-based ray-tracing algorithm is warranted. In this study, the voxel-based ray tracer (VoBRaT) is introduced to ray trace voxel phantoms using a modified version of the algorithm first proposed by Siddon (1985 Med. Phys. 12 252-5). After validation, VoBRaT is used to evaluate variations in body self-shielding distributions for NASA phantoms and six University of Florida (UF) hybrid phantoms, scaled to represent the 5th, 50th, and 95th PCTL male and female astronaut body morphometries, which have changed considerably since the inception of CAM. These body self-shielding distributions are used to generate organ dose equivalents and effective doses for five commonly evaluated space radiation environments. It is found that dosimetric differences among the phantoms are greatest for soft radiation spectra and light vehicular shielding.
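The cited Siddon algorithm parametrizes the ray and collects its crossings with the voxel grid planes; the chord length in each voxel falls out of consecutive crossing parameters. Below is our condensed rendition (uniform grid with origin at zero, midpoint rule for voxel lookup), not the VoBRaT implementation itself.

```python
# Condensed Siddon-style voxel ray trace: path length per voxel along p0->p1.
import numpy as np

def ray_trace(p0, p1, n_vox, vox_size):
    """Return {(i, j, k): chord length} through an n_vox^3 grid at origin 0."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    alphas = [0.0, 1.0]
    for ax in range(3):                      # crossings with each plane family
        if d[ax] != 0.0:
            planes = vox_size * np.arange(n_vox + 1)
            a = (planes - p0[ax]) / d[ax]
            alphas.extend(a[(a > 0.0) & (a < 1.0)])
    alphas = np.unique(alphas)               # sorted crossing parameters
    lengths, ray_len = {}, np.linalg.norm(d)
    for a1, a2 in zip(alphas[:-1], alphas[1:]):
        mid = p0 + 0.5 * (a1 + a2) * d       # segment midpoint picks the voxel
        idx = tuple((mid // vox_size).astype(int))
        if all(0 <= i < n_vox for i in idx):
            lengths[idx] = lengths.get(idx, 0.0) + (a2 - a1) * ray_len
    return lengths

chords = ray_trace([-0.5, 0.2, 0.2], [3.5, 2.8, 1.1], n_vox=3, vox_size=1.0)
print("chord length inside the grid:", sum(chords.values()))
```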
Using ACIS on the Chandra X-ray Observatory as a Particle Radiation Monitor II
NASA Technical Reports Server (NTRS)
Grant, C. E.; Ford, P. G.; Bautz, M. W.; ODell, S. L.
2012-01-01
The Advanced CCD Imaging Spectrometer (ACIS) is an instrument on the Chandra X-ray Observatory. CCDs are vulnerable to radiation damage, particularly by soft protons in the radiation belts and solar storms. The Chandra team has implemented procedures to protect ACIS during high-radiation events, including autonomous protection triggered by an on-board radiation monitor. Elevated temperatures have reduced the effectiveness of the on-board monitor. The ACIS team has developed an algorithm that uses data from the CCDs themselves to detect periods of high radiation, and a flight software patch applying this algorithm is currently active on board the instrument. In this paper, we explore the ACIS response to particle radiation through comparisons with a number of external measures of the radiation environment. We hope to better understand the efficiency of the algorithm as a function of the flux and spectrum of the particles and the time profile of the radiation event.
A New Numerical Scheme for Cosmic-Ray Transport
NASA Astrophysics Data System (ADS)
Jiang, Yan-Fei; Oh, S. Peng
2018-02-01
Numerical solutions of the cosmic-ray (CR) magnetohydrodynamic equations are dogged by a powerful numerical instability, which arises from the constraint that CRs can only stream down their gradient. The standard cure is to regularize by adding artificial diffusion. Besides introducing ad hoc smoothing, this has a significant negative impact on either computational cost or complexity and parallel scalings. We describe a new numerical algorithm for CR transport, with close parallels to two-moment methods for radiative transfer under the reduced speed of light approximation. It stably and robustly handles CR streaming without any artificial diffusion. It allows for both isotropic and field-aligned CR streaming and diffusion, with arbitrary streaming and diffusion coefficients. CR transport is handled explicitly, while source terms are handled implicitly. The overall time step scales linearly with resolution (even when computing CR diffusion) and has a perfect parallel scaling. It is given by the standard Courant condition with respect to a constant maximum velocity over the entire simulation domain. The computational cost is comparable to that of solving the ideal MHD equation. We demonstrate the accuracy and stability of this new scheme with a wide variety of tests, including anisotropic streaming and diffusion tests, CR-modified shocks, CR-driven blast waves, and CR transport in multiphase media. The new algorithm opens doors to much more ambitious and hitherto intractable calculations of CR physics in galaxies and galaxy clusters. It can also be applied to other physical processes with similar mathematical structure, such as saturated, anisotropic heat conduction.
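The mathematical structure the abstract describes, two moments evolved explicitly with a reduced maximum signal speed, can be sketched in one dimension. The toy below (our construction: free-streaming closure P = E, Rusanov fluxes, periodic boundaries) is far simpler than the paper's scheme but shows why the time step scales with the reduced speed rather than c.

```python
# 1-D two-moment sketch with a reduced speed of light c_r:
#   dE/dt + dF/dx = 0,   dF/dt + c_r**2 dP/dx = 0,  with closure P = E.
import numpy as np

nx, c_r, cfl = 400, 10.0, 0.4
dx = 1.0 / nx
dt = cfl * dx / c_r                      # Courant condition set by c_r, not c
E = np.exp(-((np.linspace(0, 1, nx) - 0.5) ** 2) / 1e-3)
F = c_r * E.copy()                       # initially streaming to the right

def rusanov(uL, uR, fL, fR):
    return 0.5 * (fL + fR) - 0.5 * c_r * (uR - uL)

total0 = E.sum() * dx
for _ in range(200):
    Ep, Fp = np.roll(E, -1), np.roll(F, -1)            # right neighbors
    flux_E = rusanov(E, Ep, F, Fp)
    flux_F = rusanov(F, Fp, c_r**2 * E, c_r**2 * Ep)
    E -= dt / dx * (flux_E - np.roll(flux_E, 1))
    F -= dt / dx * (flux_F - np.roll(flux_F, 1))
print("relative energy drift:", abs(E.sum() * dx - total0) / total0)
```

In the full scheme, streaming and diffusion source terms are added implicitly, which is what removes the need for artificial diffusion.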
Inverse transport problems in quantitative PAT for molecular imaging
NASA Astrophysics Data System (ADS)
Ren, Kui; Zhang, Rongting; Zhong, Yimin
2015-12-01
Fluorescence photoacoustic tomography (fPAT) is a molecular imaging modality that combines photoacoustic tomography with fluorescence imaging to obtain high-resolution imaging of fluorescence distributions inside heterogeneous media. The objective of this work is to study inverse problems in the quantitative step of fPAT, where we intend to reconstruct physical coefficients in a coupled system of radiative transport equations using internal data recovered from ultrasound measurements. We derive uniqueness and stability results on the inverse problems and develop some efficient algorithms for image reconstruction. Numerical simulations based on synthetic data are presented to validate the theoretical analysis. The results we present here complement those in Ren K and Zhao H (2013 SIAM J. Imaging Sci. 6 2024-49) on the same problem but in the diffusive regime.
NASA Astrophysics Data System (ADS)
Santarius, John; Navarro, Marcos; Michalak, Matthew; Fancher, Aaron; Kulcinski, Gerald; Bonomo, Richard
2016-10-01
A newly initiated research project will be described that investigates methods for detecting shielded special nuclear materials by combining multi-dimensional neutron sources, forward/adjoint calculations modeling neutron and gamma transport, and sparse data analysis of detector signals. The key tasks for this project are: (1) developing a radiation transport capability for use in optimizing adaptive-geometry, inertial-electrostatic confinement (IEC) neutron source/detector configurations for neutron pulses distributed in space and/or phased in time; (2) creating distributed-geometry, gas-target, IEC fusion neutron sources; (3) applying sparse data and noise reduction algorithms, such as principal component analysis (PCA) and wavelet transform analysis, to enhance detection fidelity; and (4) educating graduate and undergraduate students. Funded by DHS DNDO Project 2015-DN-077-ARI095.
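Task (3) above is a denoising problem with a low-dimensional signal buried in many detector channels. A hedged sketch of the PCA variant on toy data of our making (real inputs would be the IEC detector traces):

```python
# PCA denoising sketch: keep only the leading principal components of a
# bank of detector traces, discarding channel-independent noise.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
t = np.linspace(0, 1, 256)
clean = np.outer(1.0 + 0.5 * rng.random(64), np.exp(-t / 0.3))  # 64 detectors
noisy = clean + rng.normal(0, 0.1, clean.shape)

pca = PCA(n_components=3)                 # sparse signal: few components
denoised = pca.inverse_transform(pca.fit_transform(noisy))
print("rms error before:", np.sqrt(((noisy - clean) ** 2).mean()),
      "after:", np.sqrt(((denoised - clean) ** 2).mean()))
```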
NASA Astrophysics Data System (ADS)
Cruz Inclán, Carlos M.; González Lazo, Eduardo; Rodríguez Rodríguez, Arturo; Guzmán Martínez, Fernando; Abreu Alfonso, Yamiel; Piñera Hernández, Ibrahin; Leyva Fabelo, Antonio
2017-09-01
The present work deals with the numerical simulation of gamma and electron radiation damage processes under high-brightness, high-fluence irradiation, with regard to two new radiation-induced atom displacement processes. These concern both the Monte Carlo numerical simulation of atom displacement caused by gamma and electron interactions and transport in a solid matrix, and the atom displacement threshold energies calculated by molecular dynamics methodologies. The two new radiation damage processes considered here in the framework of high-brightness, high-fluence irradiation conditions are: 1) radiation-induced atom displacement processes due to a single primary knock-on atom excitation in a defective target crystal matrix, increasing its defect concentrations (vacancies, interstitials, and Frenkel pairs) as a result of severe and progressive material radiation damage, and 2) the occurrence of atom displacements related to multiple primary knock-on atom excitations of the same or different atomic species in a perfect target crystal matrix, due to subsequent electron elastic atomic scattering in the same atomic neighborhood during a crystal lattice relaxation time. In the present work, a review of numerical simulation attempts at these two new radiation damage processes is presented, starting from the previously developed algorithms and codes for Monte Carlo simulation of atom displacements induced by electrons and gammas in
García-Pareja, S; Galán, P; Manzano, F; Brualla, L; Lallena, A M
2010-07-01
In this work, the authors describe an approach developed to drive the application of different variance-reduction techniques to the Monte Carlo simulation of photon and electron transport in clinical accelerators. The new approach considers the following techniques: Russian roulette, splitting, a modified version of directional bremsstrahlung splitting, and azimuthal particle redistribution. Their application is controlled by an ant colony algorithm based on an importance map. The procedure has been applied to radiosurgery beams. Specifically, the authors have calculated depth-dose profiles, off-axis ratios, and output factors, quantities usually considered in the commissioning of these beams. The agreement between the Monte Carlo results and the corresponding measurements is within approximately 3%/0.3 mm for the central-axis percentage depth dose and the dose profiles. The importance map generated in the calculation can be used to examine simulation details in the different parts of the geometry in a simple way. The simulation CPU times are comparable to those needed by other approaches common in this field. The new approach is competitive with those previously used for this kind of problem (PSF generation or source models) and has practical advantages that make it a good tool for simulating radiation transport in problems where the quantities of interest are difficult to obtain because of low statistics.
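At each region crossing, the importance map reduces the roulette/splitting decision to a weight-window rule. A hedged sketch of that kernel (function and variable names are ours, not the paper's code):

```python
# Splitting / Russian roulette driven by an importance ratio; both branches
# preserve the expected total weight, so the tallies remain unbiased.
import numpy as np

rng = np.random.default_rng(0)

def adjust_population(weight, imp_old, imp_new):
    """Particle moves from importance imp_old to imp_new; return weight copies."""
    ratio = imp_new / imp_old
    if ratio >= 1.0:                      # more important region: split
        n = int(ratio)
        n += rng.random() < (ratio - n)   # stochastic rounding keeps the mean exact
        return [weight / ratio] * n
    if rng.random() < ratio:              # less important region: roulette
        return [weight / ratio]           # survivor carries increased weight
    return []                             # particle killed

print(adjust_population(1.0, imp_old=1.0, imp_new=3.4))   # splits
print(adjust_population(1.0, imp_old=4.0, imp_new=1.0))   # roulette
```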
Nagata, Koichi; Pethel, Timothy D
2017-07-01
Although the anisotropic analytical algorithm (AAA) and Acuros XB (AXB) are both radiation dose calculation algorithms that take into account the heterogeneity within the radiation field, Acuros XB is inherently more accurate. The purpose of this retrospective method-comparison study was to compare them and evaluate the dose discrepancy within the planning target volume (PTV). Radiation therapy (RT) plans of 11 dogs with intranasal tumors treated by radiation therapy at the University of Georgia were evaluated. All dogs were planned for intensity-modulated radiation therapy using nine equally spaced coplanar X-ray beams, with doses calculated using the anisotropic analytical algorithm. The same plan with the same monitor units was then recalculated using Acuros XB for comparison. Each dog's planning target volume was separated into air, bone, and tissue and evaluated. The mean dose to the planning target volume estimated by Acuros XB was 1.3% lower overall; it was 1.4% higher for air, 3.7% lower for bone, and 0.9% lower for tissue. The volume of the planning target volume covered by the prescribed dose decreased by 21% when Acuros XB was used, due to increased dose heterogeneity within the planning target volume. The anisotropic analytical algorithm relatively underestimates the dose heterogeneity and relatively overestimates the dose to the bone and tissue within the planning target volume for radiation therapy planning of canine intranasal tumors. This can be clinically significant, especially if tumor cells are present within the bone, because it may result in relative underdosing of the tumor. © 2017 American College of Veterinary Radiology.
Computational Modeling of Radiation Phenomenon in SiC for Nuclear Applications
NASA Astrophysics Data System (ADS)
Ko, Hyunseok
Silicon carbide (SiC) has been investigated as a promising nuclear material owing to its superior thermo-mechanical properties and low neutron cross-section. While interest in SiC has been increasing, the lack of fundamental understanding of many radiation phenomena is an important issue. Specifically, these phenomena in SiC include fission gas transport, radiation-induced defects and their evolution, radiation effects on mechanical stability, matrix brittleness of SiC composites, and the low thermal conductivities of SiC composites. To better design SiC and SiC composite materials for various nuclear applications, understanding each phenomenon and its significance under specific reactor conditions is important. In this thesis, we used various modeling approaches to understand the fundamental radiation phenomena in SiC for nuclear applications in three aspects: (a) fission product diffusion through SiC, (b) optimization of thermodynamically stable self-interstitial atom clusters, and (c) interface effects in SiC composites and their change upon irradiation. In (a), the fission product transport work, we propose that Ag/Cs diffusion in high-energy grain boundaries may be the upper bound in unirradiated SiC at relevant temperatures, and that radiation-enhanced diffusion is responsible for the fast diffusion measured in post-irradiated fuel particles. For (b), the self-interstitial cluster work, thermodynamically stable clusters are identified as a function of cluster size, shape, and composition using a genetic algorithm. We found that there are compositional and configurational transitions for stable clusters as the cluster size increases. For (c), the interface effect in SiC composites, we investigated a recently proposed interface, the CNT-reinforced SiC composite. The analytical model suggests that CNT/SiC composites have attractive mechanical and thermal properties, which fortifies the argument that SiC composites are good candidate materials for the cladding. We used grand canonical Monte Carlo to optimize the interface as a stepping stone for further study.
The Langley Parameterized Shortwave Algorithm (LPSA) for Surface Radiation Budget Studies. 1.0
NASA Technical Reports Server (NTRS)
Gupta, Shashi K.; Kratz, David P.; Stackhouse, Paul W., Jr.; Wilber, Anne C.
2001-01-01
An efficient algorithm was developed during the late 1980s and early 1990s by W. F. Staylor at NASA/LaRC for the purpose of deriving shortwave surface radiation budget parameters on a global scale. While the algorithm produced results in good agreement with observations, the lack of proper documentation resulted in weak acceptance by the science community. The primary purpose of this report is to provide detailed documentation of the algorithm. In the process, the algorithm was modified whenever discrepancies were found between the algorithm and its referenced literature sources. In some instances, assumptions made in the algorithm could not be justified and were replaced with those that were justifiable. The algorithm uses satellite and operational meteorological data as inputs. Most of the original data sources have been replaced by more recent, higher-quality data sources, and fluxes are now computed at a higher spatial resolution. Many more changes to the basic radiation scheme and meteorological inputs have been proposed to improve the algorithm and make the product more useful for new research projects. Because of the many changes already in place and more planned for the future, the algorithm has been renamed the Langley Parameterized Shortwave Algorithm (LPSA).
Philip, Bobby; Berrill, Mark A.; Allu, Srikanth; ...
2015-01-26
We describe an efficient and nonlinearly consistent parallel solution methodology for solving coupled nonlinear thermal transport problems that occur in nuclear reactor applications over hundreds of individual 3D physical subdomains. Efficiency is obtained by leveraging knowledge of the physical domains, the physics on individual domains, and the couplings between them for preconditioning within a Jacobian-Free Newton Krylov method. Details of the computational infrastructure that enabled this work, namely the open-source Advanced Multi-Physics (AMP) package developed by the authors, are described. Details of the verification and validation experiments, and of the parallel performance analysis in weak and strong scaling studies demonstrating the achieved efficiency of the algorithm, are presented. Moreover, numerical experiments demonstrate that the preconditioner developed is independent of the number of fuel subdomains in a fuel rod, which is particularly important when simulating different types of fuel rods. Finally, we demonstrate the power of the coupling methodology by considering problems with couplings between surface and volume physics and the coupling of nonlinear thermal transport in fuel rods to an external radiation transport code.
NASA Technical Reports Server (NTRS)
Xiang, Xuwu; Smith, Eric A.; Tripoli, Gregory J.
1992-01-01
A hybrid statistical-physical retrieval scheme is explored which combines a statistical approach with an approach based on the development of cloud-radiation models designed to simulate precipitating atmospheres. The algorithm employs the detailed microphysical information from a cloud model as input to a radiative transfer model which generates a cloud-radiation model database. Statistical procedures are then invoked to objectively generate an initial guess composite profile data set from the database. The retrieval algorithm has been tested for a tropical typhoon case using Special Sensor Microwave/Imager (SSM/I) data and has shown satisfactory results.
NASA Astrophysics Data System (ADS)
Fauchez, T.; Platnick, S. E.; Meyer, K.; Zhang, Z.; Cornet, C.; Szczap, F.; Dubuisson, P.
2015-12-01
Cirrus clouds are an important part of the Earth's radiation budget, but an accurate assessment of their role remains highly uncertain. Cirrus optical properties such as cloud optical thickness (COT) and ice crystal effective particle size are often retrieved with a combination of visible/near-infrared (VNIR) and shortwave-infrared (SWIR) reflectance channels. Alternatively, thermal infrared (TIR) techniques, such as the split window technique (SWT), have demonstrated better accuracy for effective radius retrievals in thin cirrus with small effective radii. However, current global operational algorithms for both retrieval methods assume that cloudy pixels are horizontally homogeneous (the plane parallel approximation, PPA) and independent (the independent pixel approximation, IPA). The impact of these approximations on ice cloud retrievals needs to be understood and, as far as possible, corrected. Horizontal heterogeneity effects in the TIR spectrum are dominated mainly by the PPA bias, which depends primarily on the COT subpixel heterogeneity; for solar reflectance channels, in addition to the PPA bias, the IPA can lead to significant retrieval errors due to substantial horizontal photon transport between cloudy columns, as well as brightening and shadowing effects that are more difficult to quantify. Given its retrieval accuracy for thin cirrus, the TIR range is thus particularly relevant for characterizing these clouds as accurately as possible. Heterogeneity effects in the TIR are evaluated as a function of spatial resolution in order to estimate the optimal spatial resolution for TIR retrieval applications. These investigations are performed using a cirrus 3D cloud generator (3DCloud), a 3D radiative transfer code (3DMCPOL), and two retrieval algorithms, namely the operational MODIS retrieval algorithm (MOD06) and a research-level SWT algorithm.
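The PPA bias is at heart a Jensen's-inequality effect: the TIR signal is a nonlinear function of COT, so the signal of the mean COT differs from the mean signal over a heterogeneous pixel. A toy illustration of our construction (the brightness model and COT distribution below are invented stand-ins):

```python
# PPA bias toy: brightness of the mean COT vs. mean of the brightness over
# heterogeneous subpixels.
import numpy as np

rng = np.random.default_rng(1)

def brightness(tau):
    # stand-in TIR signal: warm surface attenuated by a cold cloud
    t = np.exp(-tau)
    return 290.0 * t + 230.0 * (1.0 - t)   # Kelvin

subpixel_tau = rng.gamma(shape=2.0, scale=0.5, size=10_000)  # heterogeneous COT
ppa = brightness(subpixel_tau.mean())      # homogeneous-pixel assumption
truth = brightness(subpixel_tau).mean()    # resolved subpixel average
print(f"PPA bias = {ppa - truth:+.2f} K")
```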
Mille, Matthew M; Jung, Jae Won; Lee, Choonik; Kuzmin, Gleb A; Lee, Choonsik
2018-06-01
Radiation dosimetry is an essential input for epidemiological studies of radiotherapy patients aimed at quantifying the dose-response relationship of late-term morbidity and mortality. Individualised organ dose must be estimated for all tissues of interest located in-field, near-field, or out-of-field. Whereas conventional measurement approaches are limited to points in water or anthropomorphic phantoms, computational approaches using patient images or human phantoms offer greater flexibility and can provide more detailed three-dimensional dose information. In the current study, we systematically compared four different dose calculation algorithms so that dosimetrists and epidemiologists can better understand the advantages and limitations of the various approaches at their disposal. The four dose calculation algorithms considered were as follows: (1) the Analytical Anisotropic Algorithm (AAA) and (2) the Acuros XB algorithm (Acuros XB), as implemented in the Eclipse treatment planning system (TPS); (3) a Monte Carlo radiation transport code, EGSnrc; and (4) an accelerated Monte Carlo code, the x-ray Voxel Monte Carlo (XVMC). The four algorithms were compared in terms of their accuracy and appropriateness in the context of dose reconstruction for epidemiological investigations. Accuracy in peripheral dose was evaluated first by benchmarking the calculated dose profiles against measurements in a homogeneous water phantom. Additional simulations in a heterogeneous cylinder phantom evaluated the performance of the algorithms in the presence of tissue heterogeneity. In general, we found that the algorithms contained within the commercial TPS (AAA and Acuros XB) were fast and accurate in-field or near-field, but not acceptable out-of-field. Therefore, the TPS is best suited for epidemiological studies involving large cohorts and where the organs of interest are located in-field or partially in-field. The EGSnrc and XVMC codes showed excellent agreement with measurements both in-field and out-of-field. The EGSnrc code was the most accurate dosimetry approach, but was too slow to be used for large-scale epidemiological cohorts. The XVMC code showed similar accuracy to EGSnrc, but was significantly faster, and thus epidemiological applications seem feasible, especially when the organs of interest reside far away from the field edge.
NASA Technical Reports Server (NTRS)
Zhou, Yaping; Kratz, David P.; Wilber, Anne C.; Gupta, Shashi K.; Cess, Robert D.
2007-01-01
Zhou and Cess [2001] developed an algorithm for retrieving surface downwelling longwave radiation (SDLW) based upon detailed studies using radiative transfer model calculations and surface radiometric measurements. Their algorithm linked clear-sky SDLW with the surface upwelling longwave flux and column precipitable water vapor. For cloudy-sky cases, they used cloud liquid water path as an additional parameter to account for the effects of clouds. Despite the simplicity of their algorithm, it performed very well for most geographical regions, except for those regions where the atmospheric conditions near the surface tend to be extremely cold and dry. Systematic errors were also found for scenes that were covered with ice clouds. An improved version of the algorithm prevents the large errors in the SDLW at low water vapor amounts by taking into account that under such conditions the SDLW and the water vapor amount are nearly linear in their relationship. The new algorithm also utilizes the cloud fraction and the cloud liquid and ice water paths available from the Clouds and the Earth's Radiant Energy System (CERES) single scanner footprint (SSF) product to compute the clear and cloudy portions of the fluxes separately. The new algorithm has been validated against surface measurements at 29 stations around the globe for the Terra and Aqua satellites. The results show significant improvement over the original version. The revised Zhou-Cess algorithm is also slightly better than or comparable to more sophisticated algorithms currently implemented in the CERES processing and will be incorporated as one of the CERES empirical surface radiation algorithms.
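A minimal sketch of the clear/cloudy blending structure described above, in Python. The functional forms and coefficients are placeholders invented for illustration, not the actual Zhou-Cess regression.

```python
import numpy as np

# Minimal sketch of the clear/cloudy flux blending described above. The
# parameterizations and coefficients here are hypothetical stand-ins that
# only illustrate the structure of the algorithm.

def sdlw_clear(f_up, pwv):
    """Hypothetical clear-sky SDLW tied to the surface upwelling LW flux and
    column precipitable water vapor (PWV, cm); log1p keeps the dry limit
    nearly linear in PWV, as the abstract describes."""
    return f_up * (0.60 + 0.20 * np.log1p(pwv))

def sdlw_cloudy(f_up, pwv, lwp, iwp):
    """Hypothetical cloudy-sky SDLW: clear-sky part plus a bounded cloud
    contribution from liquid (LWP) and ice (IWP) water paths (g m^-2)."""
    cloud_term = 1.0 - np.exp(-(lwp + 0.5 * iwp) / 50.0)
    return sdlw_clear(f_up, pwv) + 40.0 * cloud_term

def sdlw(f_up, pwv, lwp, iwp, cloud_fraction):
    """SSF-style blend: clear and cloudy portions computed separately and
    weighted by the cloud fraction."""
    return ((1.0 - cloud_fraction) * sdlw_clear(f_up, pwv)
            + cloud_fraction * sdlw_cloudy(f_up, pwv, lwp, iwp))

print(sdlw(f_up=390.0, pwv=1.2, lwp=80.0, iwp=20.0, cloud_fraction=0.4))
```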
NASA Technical Reports Server (NTRS)
Marshall, Jochen; Milos, Frank; Fredrich, Joanne; Rasky, Daniel J. (Technical Monitor)
1997-01-01
Laser Scanning Confocal Microscopy (LSCM) has been used to obtain digital images of the complicated 3-D (three-dimensional) microstructures of rigid, fibrous thermal protection system (TPS) materials. These orthotropic materials are composed of refractory ceramic fibers with diameters in the range of 1 to 10 microns and have open porosities of 0.8 or more. Algorithms are being constructed to extract quantitative microstructural information from the digital data so that it may be applied to specific heat and mass transport modeling efforts; such information includes, for example, the solid and pore volume fractions, the internal surface area per volume, fiber diameter distributions, and fiber orientation distributions. This type of information is difficult to obtain in general, yet it is directly relevant to many computational efforts that seek to model macroscopic thermophysical phenomena in terms of microscopic mechanisms or interactions. Two such computational efforts for fibrous TPS materials are: (i) the calculation of radiative transport properties; (ii) the modeling of gas permeabilities.
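As a concrete illustration of this kind of quantitative extraction, the sketch below computes a solid volume fraction and a face-counting estimate of internal surface area per volume from a binary voxel image; the segmentation of real LSCM data is assumed to have been done elsewhere.

```python
import numpy as np

# Sketch of extracting two of the quantities named above (solid volume
# fraction and internal surface area per unit volume) from a segmented 3-D
# image. Input is assumed to be a binary voxel array (1 = fiber, 0 = pore);
# the surface-area estimate simply counts solid/pore voxel faces.

def volume_fractions(img):
    solid = img.mean()              # solid volume fraction
    return solid, 1.0 - solid       # (solid fraction, open porosity)

def surface_area_per_volume(img, voxel_size=1.0):
    img = img.astype(np.int8)       # avoid unsigned-integer wraparound in diff
    faces = 0
    for axis in range(3):           # count solid/pore transitions along each axis
        faces += np.abs(np.diff(img, axis=axis)).sum()
    face_area = voxel_size ** 2
    total_volume = img.size * voxel_size ** 3
    return faces * face_area / total_volume

rng = np.random.default_rng(1)
phantom = (rng.random((64, 64, 64)) < 0.15).astype(np.uint8)  # ~0.85 porosity
print(volume_fractions(phantom))
print(surface_area_per_volume(phantom, voxel_size=0.5))  # e.g. microns
```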
LLNL Mercury Project Trinity Open Science Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brantley, Patrick; Dawson, Shawn; McKinley, Scott
2016-04-20
The Mercury Monte Carlo particle transport code developed at Lawrence Livermore National Laboratory (LLNL) is used to simulate the transport of radiation through urban environments. These challenging calculations include complicated geometries and require significant computational resources to complete. As a result, a question arises as to the level of convergence of the calculations with Monte Carlo simulation particle count. In the Trinity Open Science calculations, one main focus was to investigate the convergence of the relevant simulation quantities with Monte Carlo particle count to assess the current simulation methodology. Both for this application space and for more general applicability, we also investigated the impact of code algorithms on parallel scaling on the Trinity machine, as well as the utilization of the Trinity DataWarp burst buffer technology in Mercury via the LLNL Scalable Checkpoint/Restart (SCR) library.
Monte Carlo simulations of medical imaging modalities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Estes, G.P.
Because continuous-energy Monte Carlo radiation transport calculations can be nearly exact simulations of physical reality (within data limitations, geometric approximations, transport algorithms, etc.), it follows that one should be able to closely approximate the results of many experiments from first-principles computations. This line of reasoning has led to various MCNP studies that involve simulations of medical imaging modalities and other visualization methods such as radiography, Anger camera, computerized tomography (CT) scans, and SABRINA particle track visualization. It is the intent of this paper to summarize some of these imaging simulations in the hope of stimulating further work, especially as computer power increases. Improved interpretation and prediction of medical images should ultimately lead to enhanced medical treatments. It is also reasonable to assume that such computations could be used to design new or more effective imaging instruments.
NASA Technical Reports Server (NTRS)
Plante, I; Wu, H
2014-01-01
The code RITRACKS (Relativistic Ion Tracks) has been developed over the last few years at the NASA Johnson Space Center to simulate the effects of ionizing radiation at the microscopic scale, in order to understand the effects of space radiation at the biological level. The fundamental part of this code is the stochastic simulation of the radiation track structure of heavy ions, an important component of space radiation. The code can calculate many relevant quantities such as the radial dose and voxel dose, and may also be used to calculate the dose in spherical and cylindrical targets of various sizes. Recently, we have incorporated DNA structure and damage simulations at the molecular scale in RITRACKS. The direct effect of radiation is simulated by introducing a slight modification of the existing particle transport algorithms, using the Binary-Encounter-Bethe model of ionization cross sections for each molecular orbital of DNA. The simulation of radiation chemistry is done by a step-by-step diffusion-reaction program based on the Green's functions of the diffusion equation. This approach is also used to simulate the indirect effect of ionizing radiation on DNA. The software can be installed independently on PCs and tablets using the Windows operating system and does not require any coding from the user. It includes a Graphical User Interface (GUI) and a 3D OpenGL visualization interface. The calculations are executed simultaneously (in parallel) on multiple CPUs. The main features of the software will be presented.
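The diffusion step underlying Green's-function-based radiation chemistry schemes can be illustrated in a few lines: free diffusion over a time step corresponds to a Gaussian displacement with variance 2DΔt. The sketch below is a generic free-diffusion propagator with an assumed diffusion coefficient, not the RITRACKS implementation (which also handles reactions between species).

```python
import numpy as np

# One step of a Green's-function-based diffusion update for radiolytic
# species: each position is advanced by a Gaussian displacement whose
# variance 2*D*dt is the free-space Green's function of the diffusion
# equation. Reactions between species are omitted in this sketch.

rng = np.random.default_rng(42)

def diffuse(positions, diffusion_coeff, dt):
    """positions: (N, 3) array in nm; diffusion_coeff in nm^2/ns; dt in ns."""
    sigma = np.sqrt(2.0 * diffusion_coeff * dt)
    return positions + rng.normal(0.0, sigma, size=positions.shape)

# e.g. 1000 hydroxyl radicals, D ~ 2.8 nm^2/ns, 1 ps time step
oh_positions = rng.normal(0.0, 2.0, size=(1000, 3))
oh_positions = diffuse(oh_positions, diffusion_coeff=2.8, dt=0.001)
print(oh_positions.std(axis=0))
```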
Ma, Changxi; Hao, Wei; Pan, Fuquan; Xiang, Wang
2018-01-01
Route optimization for hazardous materials transportation is one of the basic steps in ensuring the safety of hazardous materials transportation. The optimization scheme may be a security risk if road screening is not completed before the distribution route is optimized. To address road screening for hazardous materials transportation, a road screening algorithm is built based on a genetic algorithm and a Levenberg-Marquardt neural network (GA-LM-NN), analyzing 15 attributes of each road network section. A multi-objective robust optimization model with adjustable robustness is then constructed for the hazardous materials transportation problem of a single distribution center to minimize transportation risk and time. A multi-objective genetic algorithm is designed to solve the problem according to the characteristics of the model. The algorithm uses an improved strategy to complete the selection operation, applies partial matching crossover and single ortho swap methods to complete the crossover and mutation operations, and employs an exclusive method to construct Pareto optimal solutions. Studies show that the sets of hazardous materials transportation roads can be found quickly through the proposed road screening algorithm based on GA-LM-NN, whereas the distribution route Pareto solutions with different levels of robustness can be found rapidly through the proposed multi-objective robust optimization model and algorithm.
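To make the Pareto bookkeeping concrete, the sketch below extracts the non-dominated set from candidate routes scored on the two minimization objectives (risk, time); the GA operators themselves are omitted, and the data are random placeholders.

```python
import numpy as np

# Sketch of the Pareto bookkeeping used by multi-objective genetic algorithms
# like the one described above: keep only the candidate routes that no other
# candidate dominates in both objectives (transportation risk and time).

def pareto_front(objectives):
    """objectives: (N, 2) array; returns a boolean mask of non-dominated rows."""
    n = objectives.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if not keep[i]:
            continue
        # row j dominates row i if it is <= in all objectives and < in one
        dominates_i = (np.all(objectives <= objectives[i], axis=1)
                       & np.any(objectives < objectives[i], axis=1))
        if dominates_i.any():
            keep[i] = False
    return keep

rng = np.random.default_rng(7)
risk_time = rng.random((200, 2))          # (risk, time) for 200 candidate routes
front = risk_time[pareto_front(risk_time)]
print(f"{len(front)} non-dominated routes")
```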
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, Thomas; Hamilton, Steven; Slattery, Stuart
Profugus is an open-source mini-application (mini-app) for radiation transport and reactor applications. It contains the fundamental computational kernels used in the Exnihilo code suite from Oak Ridge National Laboratory. However, Exnihilo is a production code with a substantial user base, and it is export controlled. This makes collaboration with computer scientists and computer engineers difficult. Profugus is designed to bridge that gap. By encapsulating the core numerical algorithms in an abbreviated, open-source code base, computer scientists can analyze the algorithms and easily make code-architectural changes to test performance without compromising the production code values of Exnihilo. Profugus is not meant to be production software with respect to problem analysis. The computational kernels in Profugus are designed to analyze performance, not correctness. Nonetheless, users of Profugus can set up and run problems with enough real-world features to be useful as proof-of-concept for actual production work.
Guan, Fada; Johns, Jesse M; Vasudevan, Latha; Zhang, Guoqing; Tang, Xiaobin; Poston, John W; Braby, Leslie A
2015-06-01
Coincident counts can be observed in experimental radiation spectroscopy. Accurate quantification of the radiation source requires the detection efficiency of the spectrometer, which is often experimentally determined. However, Monte Carlo analysis can be used to supplement experimental approaches to determine the detection efficiency a priori. The traditional Monte Carlo method overestimates the detection efficiency as a result of omitting coincident counts caused mainly by multiple cascade source particles. In this study, a novel "multi-primary coincident counting" algorithm was developed using the Geant4 Monte Carlo simulation toolkit. A high-purity Germanium detector for ⁶⁰Co gamma-ray spectroscopy problems was accurately modeled to validate the developed algorithm. The simulated pulse height spectrum agreed well qualitatively with the measured spectrum obtained using the high-purity Germanium detector. The developed algorithm can be extended to other applications, with a particular emphasis on challenging radiation fields, such as counting multiple types of coincident radiations released from nuclear fission or used nuclear fuel.
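The effect the multi-primary algorithm addresses can be illustrated with a toy model: if both cascade gammas of a ⁶⁰Co decay are tracked within the same detector pulse, summed deposits (including a 2.50 MeV sum peak) appear that a one-photon-at-a-time simulation misses. The interaction probabilities below are invented placeholders, not the Geant4 model of the paper.

```python
import numpy as np

# Toy model of cascade coincidences for 60Co: each decay emits the 1.17 and
# 1.33 MeV gammas essentially simultaneously, so a detector that resolves
# pulses per *decay* (multi-primary) rather than per *photon* records summed
# deposits. The fixed detection probabilities are gross simplifications.

rng = np.random.default_rng(0)
n_decays = 100_000
p_full = 0.05     # hypothetical full-energy detection probability per gamma

def deposit(energy):
    """Deposited energy for one gamma: full peak, crude Compton continuum,
    or no interaction."""
    u = rng.random(energy.shape)
    dep = np.where(u < p_full, energy, 0.0)
    compton = (u >= p_full) & (u < p_full + 0.15)
    dep = np.where(compton, rng.random(energy.shape) * 0.8 * energy, dep)
    return dep

e1 = deposit(np.full(n_decays, 1.17))
e2 = deposit(np.full(n_decays, 1.33))
pulse = e1 + e2                     # per-decay (coincident) pulse heights
pulse = pulse[pulse > 0]
print("events in 2.50 MeV sum peak:", np.isclose(pulse, 2.50).sum())
```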
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S; Sen, Satyabrata; Berry, M. L.
The Domestic Nuclear Detection Office's (DNDO) Intelligence Radiation Sensors Systems (IRSS) program supported the development of networks of commercial-off-the-shelf (COTS) radiation counters for detecting, localizing, and identifying low-level radiation sources. Under this program, a series of indoor and outdoor tests were conducted with multiple source strengths and types, different background profiles, and various types of source and detector movements. Following the tests, network algorithms were replayed in various reconstructed scenarios using sub-networks. These measurements and algorithm traces together provide a rich collection of highly valuable datasets for testing the current and next generation of radiation network algorithms, including the ones (to be) developed by broader R&D communities such as distributed detection, information fusion, and sensor networks. From this multi-terabyte IRSS database, we distilled out and packaged the first batch of canonical datasets for public release. They include measurements from ten indoor and two outdoor tests, which represent increasingly challenging baseline scenarios for robustly testing radiation network algorithms.
NEQAIR96,Nonequilibrium and Equilibrium Radiative Transport and Spectra Program: User's Manual
NASA Technical Reports Server (NTRS)
Whiting, Ellis E.; Park, Chul; Liu, Yen; Arnold, James O.; Paterson, John A.
1996-01-01
This document is the User's Manual for a new version of the NEQAIR computer program, NEQAIR96. The program is a line-by-line and line-of-sight code. It calculates the emission and absorption spectra for atoms and diatomic molecules and the transport of radiation through a nonuniform gas mixture to a surface. The program has been rewritten to make it easier to use, run faster, and include many run-time options that tailor a calculation to the user's requirements. The accuracy and capability have also been improved by including the rotational Hamiltonian matrix formalism for calculating rotational energy levels and Hoenl-London factors for dipole and spin-allowed singlet, doublet, triplet, and quartet transitions. Three sample cases are included to help the user become familiar with the steps taken to produce a spectrum. A new user interface is included that uses check locations to select run-time options and to enter selected run data, making NEQAIR96 easier to use than older versions of the code. The ease of its use and the speed of its algorithms make NEQAIR96 a valuable educational code as well as a practical spectroscopic prediction and diagnostic code.
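The core operation of any line-by-line code is summing individual line profiles onto a wavelength grid. The sketch below uses purely Doppler (Gaussian) profiles with arbitrary line positions and strengths; NEQAIR additionally treats other broadening mechanisms and the line-of-sight transport.

```python
import numpy as np

# Sketch of a line-by-line emission spectrum: a sum of Doppler-broadened
# Gaussian line profiles. Line centers and strengths below are arbitrary
# placeholders, not NEQAIR data.

def doppler_sigma(lambda0_nm, T_kelvin, mass_amu):
    """Gaussian (Doppler) standard deviation of a line at lambda0."""
    kB, amu, c = 1.380649e-23, 1.66053907e-27, 2.99792458e8
    return lambda0_nm * np.sqrt(kB * T_kelvin / (mass_amu * amu)) / c

def emission_spectrum(wavelengths, lines, T=8000.0, mass_amu=14.0):
    """lines: list of (center_nm, strength); returns summed line profiles."""
    spec = np.zeros_like(wavelengths)
    for lam0, s in lines:
        sig = doppler_sigma(lam0, T, mass_amu)
        spec += s * np.exp(-0.5 * ((wavelengths - lam0) / sig) ** 2) \
                / (sig * np.sqrt(2.0 * np.pi))
    return spec

wl = np.linspace(740.0, 750.0, 5000)
spectrum = emission_spectrum(wl, [(742.4, 1.0), (744.2, 0.7), (746.8, 1.3)])
print(wl[spectrum.argmax()])
```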
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dennis C. Smolarski, S.J.
Project Abstract: This project was a continuation of work begun under a subcontract issued under TSI-DOE Grant 1528746, awarded to the University of Illinois Urbana-Champaign. Dr. Anthony Mezzacappa is the Principal Investigator on the Illinois award. A separate award was issued to Santa Clara University to continue the collaboration during the period May 2003–2004. Smolarski continued to work on preconditioner technology and its interface with various iterative methods. He worked primarily with F. Douglas Swesty (SUNY-Stony Brook) in continuing software development started in the 2002-03 academic year. Special attention was paid to the development and testing of different sparse approximate inverse preconditioners and their use in the solution of linear systems arising from radiation transport equations. The target was a high-performance platform on which efficient implementation is a critical component of the overall effort. Smolarski also focused on the integration of the adaptive iterative algorithm, Chebycode, developed by Tom Manteuffel and Steve Ashby and adapted by Ryan Szypowski for parallel platforms, into the radiation transport code being developed at SUNY-Stony Brook.
NASA Astrophysics Data System (ADS)
Luu, Thomas; Brooks, Eugene D.; Szőke, Abraham
2010-03-01
In the difference formulation for the transport of thermally emitted photons the photon intensity is defined relative to a reference field, the black body at the local material temperature. This choice of reference field combines the separate emission and absorption terms that nearly cancel, thereby removing the dominant cause of noise in the Monte Carlo solution of thick systems, but introduces time and space derivative source terms that cannot be determined until the end of the time step. The space derivative source term can also lead to noise induced crashes under certain conditions where the real physical photon intensity differs strongly from a black body at the local material temperature. In this paper, we consider a difference formulation relative to the material temperature at the beginning of the time step, or in cases where an alternative temperature better describes the radiation field, that temperature. The result is a method where iterative solution of the material energy equation is efficient and noise induced crashes are avoided. We couple our generalized reference field scheme with an ad hoc interpolation of the space derivative source, resulting in an algorithm that produces the correct flux between zones as the physical system approaches the thick limit.
Kanematsu, Nobuyuki; Komori, Masataka; Yonai, Shunsuke; Ishizaki, Azusa
2009-04-07
The pencil-beam algorithm is valid only when the elementary Gaussian beams are small compared to the lateral heterogeneity of the medium, which is not always true in actual radiotherapy with protons and ions. This work addresses a solution to that problem. We found an approximate self-similarity of Gaussian distributions, with which Gaussian beams can split into narrower and deflecting daughter beams when their sizes have overreached the lateral heterogeneity in the beam-transport calculation. The effectiveness was assessed in a carbon-ion beam experiment in the presence of steep range compensation, where the splitting calculation reproduced a detour effect amounting to about 10% in dose, or as large as the lateral particle disequilibrium effect. The efficiency was analyzed in calculations for carbon-ion and proton radiations with a heterogeneous phantom model, where beam splitting increased computing times by factors of 4.7 and 3.2, respectively. The present method generally improves the accuracy of the pencil-beam algorithm without severe inefficiency. It will therefore be useful for treatment planning and potentially for other demanding applications.
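The approximate self-similarity that the splitting scheme relies on can be checked numerically: a wide Gaussian is well represented by a few narrower daughter Gaussians. The daughter widths, offsets, and least-squares weight fit below are illustrative choices, not the published splitting parameters.

```python
import numpy as np

# Numerical check of approximate Gaussian self-similarity: represent a parent
# Gaussian as a weighted sum of narrower daughter Gaussians and measure the
# residual. Offsets, daughter sigma, and the weight fit are illustrative.

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

x = np.linspace(-6.0, 6.0, 2001)
parent = gauss(x, 0.0, 1.0)

offsets = np.array([-1.0, 0.0, 1.0])          # daughter beam centers
daughters = np.stack([gauss(x, mu, 0.5) for mu in offsets], axis=1)

weights, *_ = np.linalg.lstsq(daughters, parent, rcond=None)
approx = daughters @ weights

print("weights:", np.round(weights, 4))
print("max abs error:", np.abs(approx - parent).max())
```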
King, David A.; Bachelet, Dominique M.; Symstad, Amy J.; Ferschweiler, Ken; Hobbins, Michael
2014-01-01
The potential evapotranspiration (PET) that would occur with unlimited plant access to water is a central driver of simulated plant growth in many ecological models. PET is influenced by solar and longwave radiation, temperature, wind speed, and humidity, but it is often modeled as a function of temperature alone. This approach can cause biases in projections of future climate impacts, in part because it confounds the effects of warming due to increased greenhouse gases with those that would be caused by increased radiation from the sun. We developed an algorithm for linking PET to extraterrestrial solar radiation (incoming top-of-atmosphere solar radiation), as well as temperature and atmospheric water vapor pressure, and incorporated this algorithm into the dynamic global vegetation model MC1. We tested the new algorithm for the Northern Great Plains, USA, whose remaining grasslands are threatened by continuing woody encroachment. Both the new and the standard temperature-dependent MC1 algorithms adequately simulated current PET, as compared to the more rigorous PenPan model of Rotstayn et al. (2006). However, compared to the standard algorithm, the new algorithm projected a much more gradual increase in PET over the 21st century for three contrasting future climates. This difference led to lower simulated drought effects and hence greater woody encroachment with the new algorithm, illustrating the importance of more rigorous calculations of PET in ecological models dealing with climate change.
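For readers unfamiliar with radiation-driven PET formulations, the classical Hargreaves-Samani equation shows the structure (extraterrestrial radiation Ra times a temperature term); it is shown here only as an analog, not as the algorithm implemented in MC1.

```python
import numpy as np

# Illustrative stand-in for a PET formulation driven by extraterrestrial
# solar radiation rather than temperature alone: the classical
# Hargreaves-Samani equation. This is NOT the MC1 algorithm of the paper.

def pet_hargreaves(ra_mj_m2_day, t_mean_c, t_max_c, t_min_c):
    """PET (mm/day) from top-of-atmosphere radiation Ra and air temperature.
    The factor 0.408 converts MJ m^-2 day^-1 into mm of evaporated water."""
    return 0.0023 * 0.408 * ra_mj_m2_day * (t_mean_c + 17.8) \
           * np.sqrt(max(t_max_c - t_min_c, 0.0))

# e.g. a mid-latitude summer day with Ra ~ 40 MJ m^-2 day^-1
print(f"{pet_hargreaves(40.0, 24.0, 31.0, 17.0):.2f} mm/day")
```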
Temporally separating Cherenkov radiation in a scintillator probe exposed to a pulsed X-ray beam.
Archer, James; Madden, Levi; Li, Enbang; Carolan, Martin; Petasecca, Marco; Metcalfe, Peter; Rosenfeld, Anatoly
2017-10-01
Cherenkov radiation is generated in optical systems exposed to ionising radiation. In water or plastic devices, if the incident radiation has components with high enough energy (for example, electrons or positrons with energy greater than 175 keV), Cherenkov radiation will be generated. A scintillator dosimeter that collects optical light, guided by optical fibre, will have Cherenkov radiation generated throughout the length of fibre exposed to the radiation field, compromising the signal. We present a novel algorithm to separate the Cherenkov radiation signal that requires only a single probe, provided the radiation source is pulsed, such as a linear accelerator in external beam radiation therapy. We use a slow scintillator (BC-444) that, in a constant beam of radiation, reaches peak light output after 1 microsecond, while the Cherenkov signal is detected nearly instantly. This allows our algorithm to separate the scintillator signal from the Cherenkov signal. The relative beam profile and depth dose of a linear accelerator 6 MV X-ray field were reconstructed using the algorithm. The optimisation method improved the fit to the ionisation chamber data and improved the reliability of the measurements. The algorithm was able to remove 74% of the Cherenkov light, at the expense of only 1.5% of the scintillation light. Further characterisation of the Cherenkov radiation signal has the potential to improve the results and allow this method to be used as a simpler optical fibre dosimeter for quality assurance in external beam therapy.
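The separation principle lends itself to a compact sketch: model each linac pulse as a linear combination of a prompt Cherenkov shape and a slow scintillation shape, integrate over two time gates, and solve the resulting 2x2 system. All shapes, gate times, and amplitudes below are hypothetical.

```python
import numpy as np

# Two-gate unmixing of prompt Cherenkov light from slow scintillation light.
# Within a pulse, Cherenkov is nearly instantaneous while the scintillator
# rises slowly, so two gates have different sensitivities to each component.

t = np.linspace(0.0, 4.0, 4001)                # microseconds
dt = t[1] - t[0]

cherenkov_shape = np.exp(-t / 0.01)            # prompt, hypothetical decay
cherenkov_shape /= cherenkov_shape.sum() * dt
scint_shape = (1 - np.exp(-t / 1.0)) * np.exp(-t / 3.0)   # slow rise and decay
scint_shape /= scint_shape.sum() * dt

true_a, true_b = 0.8, 2.0                      # Cherenkov / scintillation amounts
signal = true_a * cherenkov_shape + true_b * scint_shape

gate1, gate2 = t < 0.05, t >= 0.05             # prompt gate vs late gate
# Gate integrals of the two known shapes form a 2x2 system in the amplitudes.
M = np.array([[cherenkov_shape[g].sum() * dt, scint_shape[g].sum() * dt]
              for g in (gate1, gate2)])
y = np.array([signal[g].sum() * dt for g in (gate1, gate2)])
a_est, b_est = np.linalg.solve(M, y)
print(f"Cherenkov = {a_est:.3f}, scintillation = {b_est:.3f}")
```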
A Novel Segment-Based Approach for Improving Classification Performance of Transport Mode Detection.
Guvensan, M Amac; Dusun, Burak; Can, Baris; Turkmen, H Irem
2017-12-30
Transportation planning and solutions have an enormous impact on city life. To minimize transport duration, urban planners should understand and characterize the mobility of a city. Thus, researchers look toward monitoring people's daily activities, including transportation types and durations, by taking advantage of individuals' smartphones. This paper introduces a novel segment-based transport mode detection architecture in order to improve the results of traditional classification algorithms in the literature. The proposed post-processing algorithm, namely the Healing algorithm, aims to correct the misclassification results of machine learning-based solutions. Our real-life test results show that the Healing algorithm could achieve up to a 40% improvement in the classification results. As a result, the implemented mobile application could predict eight classes, including stationary, walking, car, bus, tram, train, metro, and ferry, with a success rate of 95% thanks to the proposed multi-tier architecture and Healing algorithm.
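A minimal sketch of a segment-style post-processing pass in the spirit of the Healing algorithm (the paper's actual rules may differ): a sliding majority vote over the predicted label sequence suppresses isolated misclassifications.

```python
from collections import Counter

# Post-processing pass over a per-window transport-mode label sequence:
# replace each label by the majority label in a centered window, which
# suppresses isolated misclassification spikes.

def heal(labels, window=5):
    half = window // 2
    healed = []
    for i in range(len(labels)):
        segment = labels[max(0, i - half): i + half + 1]
        healed.append(Counter(segment).most_common(1)[0][0])
    return healed

raw = ["bus", "bus", "bus", "car", "bus", "bus",
       "walk", "walk", "walk", "walk"]
print(heal(raw))   # the isolated 'car' spike is corrected to 'bus'
```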
A unified radiative magnetohydrodynamics code for lightning-like discharge simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Qiang, E-mail: cq0405@126.com; Chen, Bin, E-mail: emcchen@163.com; Xiong, Run
2014-03-15
A two-dimensional Eulerian finite difference code is developed for solving the non-ideal magnetohydrodynamic (MHD) equations including the effects of self-consistent magnetic field, thermal conduction, resistivity, gravity, and radiation transfer, which, when combined with specified pulse current models and plasma equations of state, can be used as a unified lightning return stroke solver. The differential equations are written in covariant form in cylindrical geometry and kept in conservative form, which enables high-accuracy shock-capturing schemes to be applied to the lightning channel configuration naturally. In this code, the fifth-order weighted essentially non-oscillatory (WENO) scheme combined with the Lax-Friedrichs flux splitting method is introduced for computing the convection terms of the MHD equations. The third-order total variation diminishing (TVD) Runge-Kutta integral operator is also employed to keep the time-space accuracy consistent. The numerical algorithms for non-ideal terms, e.g., artificial viscosity, resistivity, and thermal conduction, are introduced in the code via an operator splitting method. This code assumes the radiation is in local thermodynamic equilibrium with the plasma components, and a flux-limited diffusion algorithm with grey opacities is implemented for computing the radiation transfer. The transport coefficients and equation of state in this code are obtained from detailed particle population distribution calculations, which makes the numerical model self-consistent. This code is systematically validated via the Sedov blast solutions and then used for lightning return stroke simulations with peak currents of 20 kA, 30 kA, and 40 kA, respectively. The results show that this numerical model is consistent with observations and previous numerical results. The population distribution evolution and energy conservation problems are also discussed.
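The third-order TVD Runge-Kutta operator mentioned above is compact enough to quote in full; the sketch below applies the standard Shu-Osher form to a toy advection right-hand side, leaving out the WENO5 reconstruction and the MHD physics.

```python
import numpy as np

# Standard third-order TVD (Shu-Osher) Runge-Kutta step for du/dt = rhs(u),
# demonstrated on a toy periodic advection problem with first-order upwind
# differences in place of the paper's WENO5 reconstruction.

def tvd_rk3_step(u, rhs, dt):
    u1 = u + dt * rhs(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2))

n, c = 200, 1.0
dx = 1.0 / n
x = np.arange(n) * dx

def rhs(u):
    return -c * (u - np.roll(u, 1)) / dx   # upwind difference, periodic

u = np.exp(-200 * (x - 0.3) ** 2)          # initial Gaussian pulse
for _ in range(100):
    u = tvd_rk3_step(u, rhs, dt=0.4 * dx / c)
print(u.max())
```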
DOE Office of Scientific and Technical Information (OSTI.GOV)
Setiani, Tia Dwi, E-mail: tiadwisetiani@gmail.com; Suprijadi; Nuclear Physics and Biophysics Reaserch Division, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung Jalan Ganesha 10 Bandung, 40132
Monte Carlo (MC) is one of the most powerful techniques for simulation in x-ray imaging. The MC method can simulate radiation transport within matter with high accuracy and provides a natural way to simulate radiation transport in complex systems. One of the MC-based codes widely used for radiographic image simulation is MC-GPU, a code developed by Andreu Badal. This study was aimed at investigating the computation time of x-ray imaging simulation on a GPU (Graphics Processing Unit) compared to a standard CPU (Central Processing Unit). Furthermore, the effect of physical parameters on the quality of radiographic images and a comparison of the image quality resulting from simulation on the GPU and CPU are evaluated in this paper. The simulations were run on a CPU in serial, and on two GPUs with 384 cores and 2304 cores. In the GPU simulations, each core calculates one photon, so a large number of photons are calculated simultaneously. Results show that the simulations on the GPU were significantly accelerated compared to the CPU. The simulations on the 2304-core GPU were performed about 64-114 times faster than on the CPU, while the simulations on the 384-core GPU were performed about 20-31 times faster than on a single CPU core. Another result shows that optimum image quality was obtained with histories starting from 10^8 and energies from 60 keV to 90 keV. Analyzed by a statistical approach, the quality of the GPU and CPU images is essentially the same.
Comparison of Space Radiation Calculations from Deterministic and Monte Carlo Transport Codes
NASA Technical Reports Server (NTRS)
Adams, J. H.; Lin, Z. W.; Nasser, A. F.; Randeniya, S.; Tripathi, r. K.; Watts, J. W.; Yepes, P.
2010-01-01
The presentation outline includes motivation, the radiation transport codes being considered, the space radiation cases being considered, results for slab geometry, results for spherical geometry, and a summary. Topics include the main physics in the radiation transport codes HZETRN, UPROP, FLUKA, and GEANT4; slab geometry; solar particle events (SPE); and galactic cosmic rays (GCR).
NASA Astrophysics Data System (ADS)
Russkova, Tatiana V.
2017-11-01
One tool for improving the performance of Monte Carlo methods for numerical simulation of light transport in the Earth's atmosphere is parallel technology. A new algorithm oriented to parallel execution on CUDA-enabled NVIDIA graphics processors is discussed. The efficiency of parallelization is analyzed on the basis of calculating the upward and downward fluxes of solar radiation in both vertically homogeneous and inhomogeneous models of the atmosphere. The results of testing the new code under various atmospheric conditions, including continuous single-layered and multilayered clouds and selective molecular absorption, are presented. The results of testing the code using video cards with different compute capabilities are analyzed. It is shown that the changeover of computing from conventional PCs to the architecture of graphics processors gives more than a hundredfold increase in performance and fully reveals the capabilities of the technology used.
On iterative algorithms for quantitative photoacoustic tomography in the radiative transport regime
NASA Astrophysics Data System (ADS)
Wang, Chao; Zhou, Tie
2017-11-01
In this paper, we present a numerical reconstruction method for quantitative photoacoustic tomography (QPAT) based on the radiative transfer equation (RTE), which models light propagation more accurately than the diffusion approximation (DA). We investigate the reconstruction of the absorption and scattering coefficients of biological tissues. An improved fixed-point iterative method to retrieve the absorption coefficient, given the scattering coefficient, is proposed for its low computational cost; the convergence of this method is also proved. The Barzilai-Borwein (BB) method is applied to retrieve the two coefficients simultaneously. Since the reconstruction of optical coefficients involves solutions of the original and adjoint RTEs in an optimization framework, an efficient solver with high accuracy is developed from Gao and Zhao (2009 Transp. Theory Stat. Phys. 38 149-92). Simulation experiments illustrate that the improved fixed-point iterative method and the BB method are competitive methods for QPAT in the relevant cases.
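The fixed-point idea for the absorption retrieval can be illustrated with a toy forward model: since the absorbed-energy data satisfy H = Γ μ_a φ(μ_a), one iterates μ_a ← H / (Γ φ). The 1-D Beer-Lambert fluence below stands in for the RTE solver of the paper.

```python
import numpy as np

# Toy illustration of the fixed-point absorption retrieval: iterate
# mu_a <- H / (Gamma * phi(mu_a)), where the fluence phi comes from a crude
# 1-D Beer-Lambert model rather than the paper's RTE solver.

n = 100
x = np.linspace(0.0, 1.0, n)
gamma = 1.0                                   # Grueneisen coefficient

def fluence(mu_a):
    """Toy forward model: exponential attenuation of a unit source at x=0."""
    tau = np.concatenate(([0.0],
          np.cumsum(0.5 * (mu_a[1:] + mu_a[:-1]) * np.diff(x))))
    return np.exp(-tau)

mu_true = 0.5 + 0.4 * np.sin(2 * np.pi * x) ** 2
H = gamma * mu_true * fluence(mu_true)        # simulated absorbed-energy data

mu = np.full(n, 0.3)                          # initial guess
for _ in range(30):
    mu = H / (gamma * fluence(mu))            # fixed-point update
print("max error:", np.abs(mu - mu_true).max())
```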
In situ measurement of radioactive contamination of bottom sediments.
Zhukouski, A; Anshakou, O; Kutsen, S
2018-04-30
A gamma-spectrometric method is presented for in situ radiation monitoring of bottom sediments whose contaminated-layer thickness is unknown and must be determined. The method, based on processing experimental spectra using the results of their simulation by the Monte Carlo method, is proposed and tested in practice. A model for the transport of gamma radiation from the deposited radionuclides 137Cs and 134Cs to a scintillation detection unit located on the upper surface of the contaminated layer of sediments is considered. The relationship between the effective radius of the contaminated site and the thickness of the layer has been studied. The thickness of the contaminated layer is determined by special analysis of experimental and thickness-dependent simulated spectra. The technique and algorithm developed are verified by full-scale studies performed with a submersible gamma-spectrometer.
Description of Transport Codes for Space Radiation Shielding
NASA Technical Reports Server (NTRS)
Kim, Myung-Hee Y.; Wilson, John W.; Cucinotta, Francis A.
2011-01-01
This slide presentation describes transport codes and their use for studying and designing space radiation shielding. When combined with risk projection models, radiation transport codes serve as the main tool for studying radiation and designing shielding. There are three criteria for assessing the accuracy of transport codes: (1) ground-based studies with defined beams and material layouts, (2) inter-comparison of transport code results for matched boundary conditions, and (3) comparisons to flight measurements. NASA's HZETRN/QMSFRG meets these three criteria to a very high degree.
Radiative Transfer Modeling and Retrievals for Advanced Hyperspectral Sensors
NASA Technical Reports Server (NTRS)
Liu, Xu; Zhou, Daniel K.; Larar, Allen M.; Smith, William L., Sr.; Mango, Stephen A.
2009-01-01
A novel radiative transfer model and a physical inversion algorithm based on principal component analysis will be presented. Instead of dealing with channel radiances, the new approach fits principal component scores of these quantities. Compared to channel-based radiative transfer models, the new approach compresses radiances into a much smaller dimension, making both the forward modeling and the inversion algorithm more efficient.
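The compression step can be sketched in a few lines of linear algebra: principal components from a training set of radiances define the basis, and each new spectrum is reduced to a handful of scores. The training data below are synthetic placeholders.

```python
import numpy as np

# Sketch of principal-component compression of channel radiances: build an
# orthonormal basis from training spectra, then represent any new spectrum
# by a small number of PC scores instead of thousands of channels.

rng = np.random.default_rng(3)
n_train, n_channels, n_pcs = 500, 2000, 20

# Synthetic training radiances: a few smooth modes plus small noise.
basis_true = np.stack([np.sin((k + 1) * np.linspace(0, np.pi, n_channels))
                       for k in range(10)], axis=0)
train = (rng.normal(size=(n_train, 10)) @ basis_true
         + 0.01 * rng.normal(size=(n_train, n_channels)))

mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
pcs = vt[:n_pcs]                              # leading principal components

spectrum = rng.normal(size=10) @ basis_true   # a "new" observation
scores = pcs @ (spectrum - mean)              # compression: 2000 -> 20 numbers
reconstructed = mean + scores @ pcs
print("relative error:",
      np.linalg.norm(reconstructed - spectrum) / np.linalg.norm(spectrum))
```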
Hybrid dose calculation: a dose calculation algorithm for microbeam radiation therapy
NASA Astrophysics Data System (ADS)
Donzelli, Mattia; Bräuer-Krisch, Elke; Oelfke, Uwe; Wilkens, Jan J.; Bartzsch, Stefan
2018-02-01
Microbeam radiation therapy (MRT) is still a preclinical approach in radiation oncology that uses planar micrometre-wide beamlets with extremely high peak doses, separated by a few hundred micrometre wide low-dose regions. Abundant preclinical evidence demonstrates that MRT spares normal tissue more effectively than conventional radiation therapy, at equivalent tumour control. In order to launch the first clinical trials, accurate and efficient dose calculation methods are an indispensable prerequisite. In this work a hybrid dose calculation approach is presented that is based on a combination of Monte Carlo and kernel-based dose calculation. In various examples the performance of the algorithm is compared to purely Monte Carlo and purely kernel-based dose calculations. The accuracy of the developed algorithm is comparable to conventional pure Monte Carlo calculations. In particular for inhomogeneous materials, the hybrid dose calculation algorithm outperforms purely convolution-based dose calculation approaches. It is demonstrated that the hybrid algorithm can efficiently calculate even complicated pencil beam and cross-firing beam geometries. The required calculation times are substantially lower than for pure Monte Carlo calculations.
P1 Nonconforming Finite Element Method for the Solution of Radiation Transport Problems
NASA Technical Reports Server (NTRS)
Kang, Kab S.
2002-01-01
The simulation of radiation transport in the optically thick flux-limited diffusion regime has been identified as one of the most time-consuming tasks within large simulation codes. Due to multimaterial complex geometry, the radiation transport system must often be solved on unstructured grids. In this paper, we investigate the behavior and the benefits of the unstructured P(sub 1) nonconforming finite element method, which has proven to be flexible and effective on related transport problems, in solving unsteady implicit nonlinear radiation diffusion problems using Newton and Picard linearization methods. Key words: nonconforming finite elements, radiation transport, inexact Newton linearization, multigrid preconditioning.
Patient-specific CT dosimetry calculation: a feasibility study.
Fearon, Thomas; Xie, Huchen; Cheng, Jason Y; Ning, Holly; Zhuge, Ying; Miller, Robert W
2011-11-15
Current estimation of radiation dose from computed tomography (CT) scans on patients has relied on the measurement of the Computed Tomography Dose Index (CTDI) in standard cylindrical phantoms, and calculations based on mathematical representations of "standard man". Radiation dose to both adult and pediatric patients from a CT scan has been a concern, as noted in recent reports. The purpose of this study was to investigate the feasibility of adapting a radiation treatment planning system (RTPS) to provide patient-specific CT dosimetry. A radiation treatment planning system was modified to calculate patient-specific CT dose distributions, which can be represented by dose at specific points within an organ of interest, as well as organ dose-volumes (after image segmentation) for a GE LightSpeed Ultra Plus CT scanner. The RTPS calculation algorithm is based on a semi-empirical, measured correction-based algorithm, which has been well established in the radiotherapy community. Digital representations of the physical phantoms (virtual phantoms) were acquired with the GE CT scanner in axial mode. Thermoluminescent dosimeter (TLD) measurements in pediatric anthropomorphic phantoms were utilized to validate the dose at specific points within organs of interest relative to RTPS calculations and Monte Carlo simulations of the same virtual phantoms (digital representations). Congruence of the calculated and measured point doses for the same physical anthropomorphic phantom geometry was used to verify the feasibility of the method. The RTPS algorithm can be extended to calculate the organ dose by calculating a dose distribution point-by-point for a designated volume. Electron Gamma Shower (EGSnrc) codes for radiation transport calculations developed by the National Research Council of Canada (NRCC) were utilized to perform the Monte Carlo (MC) simulation. In general, the RTPS and MC dose calculations are within 10% of the TLD measurements for the infant and child chest scans. With respect to the dose comparisons for the head, the RTPS dose calculations are slightly higher (10%-20%) than the TLD measurements, while the MC results were within 10% of the TLD measurements. The advantage of the algebraic dose calculation engine of the RTPS is a substantially reduced computation time (minutes vs. days) relative to Monte Carlo calculations, as well as providing patient-specific dose estimation. It also provides the basis for more elaborate reporting of dosimetric results, such as patient-specific organ dose volumes after image segmentation.
Radionuclide identification algorithm for organic scintillator-based radiation portal monitor
NASA Astrophysics Data System (ADS)
Paff, Marc Gerrit; Di Fulvio, Angela; Clarke, Shaun D.; Pozzi, Sara A.
2017-03-01
We have developed an algorithm for on-the-fly radionuclide identification for radiation portal monitors using organic scintillation detectors. The algorithm was demonstrated on experimental data acquired with our pedestrian portal monitor on moving special nuclear material and industrial sources at a purpose-built radiation portal monitor testing facility. The experimental data also included common medical isotopes. The algorithm takes the power spectral density of the cumulative distribution function of the measured pulse height distributions and matches these to reference spectra using a spectral angle mapper. F-score analysis showed that the new algorithm exhibited significant performance improvements over previously implemented radionuclide identification algorithms for organic scintillators. Reliable on-the-fly radionuclide identification would help portal monitor operators more effectively screen out the hundreds of thousands of nuisance alarms they encounter annually due to recent nuclear-medicine patients and cargo containing naturally occurring radioactive material. Portal monitor operators could instead focus on the rare but potentially high impact incidents of nuclear and radiological material smuggling detection for which portal monitors are intended.
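The pipeline described above (pulse-height spectrum, then CDF, then power spectral density, then spectral angle against a library) can be sketched directly; the spectra and library templates below are synthetic placeholders, not measured reference spectra.

```python
import numpy as np

# Sketch of the identification pipeline: cumulative distribution of the
# pulse-height spectrum -> power spectral density -> spectral angle against
# library templates. Note the PSD magnitude is insensitive to overall peak
# position but does capture peak multiplicity and spacing, so the two-line
# Co-60 template is distinguishable from the single-line Cs-137 template.

def psd_of_cdf(pulse_height_spectrum):
    cdf = np.cumsum(pulse_height_spectrum)
    cdf /= cdf[-1]
    return np.abs(np.fft.rfft(cdf)) ** 2

def spectral_angle(a, b):
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

rng = np.random.default_rng(5)
ch = np.arange(256)
library = {
    "Cs-137": np.exp(-0.5 * ((ch - 100) / 8.0) ** 2),                 # one line
    "Co-60":  (np.exp(-0.5 * ((ch - 170) / 8.0) ** 2)                 # cascade:
               + np.exp(-0.5 * ((ch - 190) / 8.0) ** 2)),             # two lines
}
measured = 50 * library["Co-60"] + rng.random(256)   # noisy Co-60-like spectrum

target = psd_of_cdf(measured)
best = min(library, key=lambda k: spectral_angle(target, psd_of_cdf(library[k])))
print("identified:", best)
```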
An LNG release, transport, and fate model system for marine spills.
Spaulding, Malcolm L; Swanson, J Craig; Jayko, Kathy; Whittier, Nicole
2007-02-20
LNGMAP, a fully integrated, geographic information based modular system, has been developed to predict the fate and transport of marine spills of LNG. The model is organized as a discrete set of linked algorithms that represent the processes (time dependent release rate, spreading, transport on the water surface, evaporation from the water surface, transport and dispersion in the atmosphere, and, if ignited, burning and associated radiated heat fields) affecting LNG once it is released into the environment. A particle-based approach is employed in which discrete masses of LNG released from the source are modeled as individual masses of LNG or spillets. The model is designed to predict the gas mass balance as a function of time and to display the spatial and temporal evolution of the gas (and radiated energy field). LNGMAP has been validated by comparisons to predictions of models developed by ABS Consulting and Sandia for time dependent point releases from a draining tank, with and without burning. Simulations were in excellent agreement with those performed by ABS Consulting and consistent with Sandia's steady state results. To illustrate the model predictive capability for realistic emergency scenarios, simulations were performed for a tanker entering Block Island Sound. Three hypothetical cases were studied: the first assumes the vessel continues on course after the spill starts, the second that the vessel stops as soon as practical after the release begins (3 min), and the third that the vessel grounds at the closest site practical. The model shows that the areas of the surface pool and the incident thermal radiation field (with burning) are minimized and dispersed vapor cloud area (without burning) maximized if the vessel continues on course. For this case the surface pool area, with burning, is substantially smaller than for the without burning case because of the higher mass loss rate from the surface pool due to burning. Since the vessel speed substantially exceeds the spill spreading rate, both the thermal radiation fields and surface pool trail the vessel. The relative directions and speeds of the wind and vessel movement govern the orientation of the dispersed plume. If the vessel stops, the areas of the surface pool and incident radiation field (with burning) are maximized and the dispersed cloud area (without burning) minimized. The longer the delay in stopping the vessel, the smaller the peak values are for the pool area and the size of the thermal radiation field. Once the vessel stops, the spill pool is adjacent to the vessel and moving down current. The thermal radiation field is oriented similarly. These results may be particularly useful in contingency planning for underway vessels.
NASA Astrophysics Data System (ADS)
Emde, Claudia; Barlakas, Vasileios; Cornet, Céline; Evans, Frank; Wang, Zhen; Labonotte, Laurent C.; Macke, Andreas; Mayer, Bernhard; Wendisch, Manfred
2018-04-01
Initially unpolarized solar radiation becomes polarized by scattering in the Earth's atmosphere. In particular, molecular (Rayleigh) scattering polarizes electromagnetic radiation, but scattering of radiation by aerosols, cloud droplets (Mie scattering), and ice crystals also polarizes it. Each atmospheric constituent produces a characteristic polarization signal, thus spectro-polarimetric measurements are frequently employed for remote sensing of aerosol and cloud properties. Retrieval algorithms require efficient radiative transfer models. Usually, these apply the plane-parallel approximation (PPA), assuming that the atmosphere consists of horizontally homogeneous layers. This makes it possible to solve the vector radiative transfer equation (VRTE) efficiently. For remote sensing applications, the radiance is considered constant over the instantaneous field-of-view of the instrument, and each sensor element is treated independently in the plane-parallel approximation, neglecting horizontal radiation transport between adjacent pixels (Independent Pixel Approximation, IPA). In order to estimate the errors due to the IPA, three-dimensional (3D) vector radiative transfer models are required. So far, only a few such models exist. Therefore, the International Polarized Radiative Transfer (IPRT) working group of the International Radiation Commission (IRC) has initiated a model intercomparison project in order to provide benchmark results for polarized radiative transfer. The group has already performed an intercomparison for one-dimensional (1D) multi-layer test cases [phase A, 1]. This paper presents the continuation of the intercomparison project (phase B) for 2D and 3D test cases: a step cloud, a cubic cloud, and a more realistic scenario including a 3D cloud field generated by a Large Eddy Simulation (LES) model and typical background aerosols. The commonly established benchmark results for 3D polarized radiative transfer are available at the IPRT website (http://www.meteo.physik.uni-muenchen.de/~iprt).
Radiation detection and situation management by distributed sensor networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jan, Frigo; Mielke, Angela; Cai, D Michael
Detection of radioactive materials in an urban environment usually requires large, portal-monitor-style radiation detectors. However, this may not be a practical solution in many transport scenarios. Alternatively, a distributed sensor network (DSN) could complement portal-style detection of radiological materials through the implementation of arrays of low-cost, small heterogeneous sensors with the ability to detect the presence of radioactive materials in a moving vehicle over a specific region. In this paper, we report on the use of a heterogeneous, wireless, distributed sensor network for traffic monitoring in a field demonstration. Through wireless communications, the energy spectra from different radiation detectors are combined to improve the detection confidence. In addition, the DSN exploits other sensor technologies and algorithms to provide additional information about the vehicle, such as its speed, location, class (e.g. car, truck), and license plate number. The sensors are in situ, and data are processed in real time at each node. Relevant information from each node is sent to a base station computer which is used to assess the movement of radioactive materials.
Method for Estimating Bilirubin Isomerization Efficiency in Phototherapy to Treat Neonatal Jaundice
NASA Astrophysics Data System (ADS)
Lisenko, S. A.; Kugeiko, M. M.
2014-11-01
We propose a method for quantitative assessment of the efficacy of phototherapy to treat neonatal jaundice using the diffuse reflectance spectrum for the newborn's skin, based on the analytical dependence of the measured spectrum on the structural and morphological parameters of the skin, affecting the optical conditions in the medium, and an algorithm for rapid calculation of the bilirubin photoisomerization rate in the skin tissues as a function of the structural and morphological parameters of the skin and the wavelength of the exciting radiation. From the results of a numerical simulation of the process of radiation transport in the skin, we assess the stability of our method to variations in the scattering properties of the skin and the concentrations of its optically active chromophores (melanin, oxyhemoglobin, deoxyhemoglobin). We show that in order to achieve the maximum efficacy of phototherapy, we should use light from the range 484-496 nm. In this case, the intensity of the exciting radiation should be selected individually for each newborn according to the bilirubin photoisomerization rate characteristic for it.
NASA Astrophysics Data System (ADS)
Takenaka, H.; Teruyuki, N.; Nakajima, T. Y.; Higurashi, A.; Hashimoto, M.; Suzuki, K.; Uchida, J.; Nagao, T. M.; Shi, C.; Inoue, T.
2017-12-01
Accurate estimation of the Earth's radiation budget is important for understanding climate. Clouds can cool the Earth by reflecting solar radiation but also maintain warmth by absorbing and emitting terrestrial radiation. Similarly, aerosols affect the radiation budget through absorption and scattering of solar radiation. In this study, we developed a fast and accurate algorithm for the shortwave (SW) radiation budget and applied it to geostationary satellite data for rapid analysis. It enabled highly accurate monitoring of solar radiation and photovoltaic (PV) power generation. As a next step, we are updating the algorithm for the retrieval of aerosols and clouds, which provides the accurate atmospheric parameters needed for estimation of solar radiation. (This research was supported in part by CREST/EMS.)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gomez-Cardona, Daniel; Nagle, Scott K.; Department of Radiology, University of Wisconsin-Madison School of Medicine and Public Health, 600 Highland Avenue, Madison, Wisconsin 53792
Purpose: Wall thickness (WT) is an airway feature of great interest for the assessment of morphological changes in the lung parenchyma. Multidetector computed tomography (MDCT) has recently been used to evaluate airway WT, but the potential risk of radiation-induced carcinogenesis, particularly in younger patients, might limit a wider use of this imaging method in clinical practice. The recent commercial implementation of the statistical model-based iterative reconstruction (MBIR) algorithm, instead of the conventional filtered back projection (FBP) algorithm, has enabled considerable radiation dose reduction in many other clinical applications of MDCT. The purpose of this work was to study the impact of radiation dose and MBIR in the MDCT assessment of airway WT. Methods: An airway phantom was scanned using a clinical MDCT system (Discovery CT750 HD, GE Healthcare) at 4 kV levels and 5 mAs levels. Both FBP and a commercial implementation of MBIR (Veo™, GE Healthcare) were used to reconstruct CT images of the airways. For each kV–mAs combination and each reconstruction algorithm, the contrast-to-noise ratio (CNR) of the airways was measured, and the WT of each airway was measured and compared with the nominal value; the relative bias and the angular standard deviation in the measured WT were calculated. For each airway and reconstruction algorithm, the overall performance of WT quantification across all of the 20 kV–mAs combinations was quantified by the sum of squares (SSQ) of the difference between the measured and nominal WT values. Finally, the particular kV–mAs combination and reconstruction algorithm that minimized radiation dose while still achieving a reference WT quantification accuracy level was chosen as the optimal acquisition and reconstruction settings. Results: The wall thicknesses of seven airways of different sizes were analyzed in the study. Compared with FBP, MBIR improved the CNR of the airways, particularly at low radiation dose levels. For FBP, the relative bias and the angular standard deviation of the measured WT increased steeply with decreasing radiation dose. Except for the smallest airway, MBIR enabled significant reduction in both the relative bias and angular standard deviation of the WT, particularly at low radiation dose levels; the SSQ was reduced by 50%–96% by using MBIR. The optimal reconstruction algorithm was found to be MBIR for the seven airways being assessed, and the combined use of MBIR and optimal kV–mAs selection resulted in a radiation dose reduction of 37%–83% compared with a reference scan protocol with a dose level of 1 mGy. Conclusions: The quantification accuracy of airway WT is strongly influenced by radiation dose and reconstruction algorithm. The MBIR algorithm potentially allows the desired WT quantification accuracy to be achieved with reduced radiation dose, which may enable a wider clinical use of MDCT for the assessment of airway WT, particularly for younger patients who may be more sensitive to exposure to ionizing radiation.
NASA Astrophysics Data System (ADS)
Miller, Steven
1998-03-01
A generic stochastic method is presented that rapidly evaluates numerical bulk flux solutions to the one-dimensional integrodifferential radiative transport equation for coherent irradiance of optically anisotropic suspensions of nonspheroidal bioparticles, such as blood. As Fermat rays or geodesics enter the suspension, they evolve into a bundle of random paths or trajectories due to scattering by the suspended bioparticles. Overall, this can be interpreted as a bundle of Markov trajectories traced out by a "gas" of Brownian-like point photons being scattered and absorbed by the homogeneous distribution of uncorrelated cells in suspension. By considering the cumulative vectorial intersections of a statistical bundle of random trajectories through sets of interior data planes in the space containing the medium, the effective equivalent information content and behavior of the (generally unknown) analytical flux solutions of the radiative transfer equation rapidly emerge. The fluxes match the analytical diffuse flux solutions in the diffusion limit, which verifies the accuracy of the algorithm. The method is not constrained by the diffusion limit and gives correct solutions for conditions where diffuse solutions are not viable. Unlike conventional Monte Carlo and numerical techniques adapted from neutron transport or nuclear reactor problems that compute scalar quantities, this vectorial technique is fast, easily implemented, adaptable, and viable for a wide class of biophotonic scenarios. By comparison, other analytical or numerical techniques generally become unwieldy, lack viability, or are more difficult to utilize and adapt. Illustrative calculations are presented for blood media at monochromatic wavelengths in the visible spectrum.
NASA Astrophysics Data System (ADS)
Braiek, A.; Adili, A.; Albouchi, F.; Karkri, M.; Ben Nasrallah, S.
2016-06-01
The aim of this work is to simultaneously identify the conductive and radiative parameters of a semitransparent sample using a photothermal method associated with an inverse problem. The identification of the conductive and radiative properties is performed by minimizing an objective function that represents the errors between the calculated temperature and the measured signal. The calculated temperature is obtained from a theoretical model built with the thermal quadrupole formalism. The measurement is obtained on the rear face of the sample, whose front face is excited by a crenel (square pulse) of heat flux. For the identification procedure, a genetic algorithm is developed and used. The genetic algorithm is a useful tool for the simultaneous estimation of correlated or nearly correlated parameters, which can be a limiting factor for gradient-based methods. The results of the identification procedure show the efficiency and stability of the genetic algorithm in simultaneously estimating the conductive and radiative properties of clear glass.
NASA Technical Reports Server (NTRS)
Xu, Xiaoguang; Wang, Jun; Zeng, Jing; Spurr, Robert; Liu, Xiong; Dubovik, Oleg; Li, Li; Li, Zhengqiang; Mishchenko, Michael I.; Siniuk, Aliaksandr;
2015-01-01
A new research algorithm is presented here as the second part of a two-part study to retrieve aerosol microphysical properties from the multispectral and multiangular photopolarimetric measurements taken by the Aerosol Robotic Network's (AERONET's) new-generation Sun photometer. The algorithm uses an advanced UNified and Linearized Vector Radiative Transfer Model and incorporates a statistical optimization approach. While the new algorithm has heritage from the AERONET operational inversion algorithm in constraining a priori and retrieval smoothness, it has two new features. First, the new algorithm retrieves the effective radius, effective variance, and total volume of aerosols associated with a continuous bimodal particle size distribution (PSD) function, while the AERONET operational algorithm retrieves aerosol volume over 22 size bins. Second, our algorithm retrieves complex refractive indices for both fine and coarse modes, while the AERONET operational algorithm assumes a size-independent aerosol refractive index. Mode-resolved refractive indices can improve the estimate of the single-scattering albedo (SSA) for each aerosol mode and thus facilitate the validation of satellite products and chemistry transport models. We applied the algorithm to a suite of real cases over the Beijing_RADI site and found that our retrievals are overall consistent with AERONET operational inversions but can offer mode-resolved refractive index and SSA with acceptable accuracy for aerosols composed of spherical particles. Along with the retrieval using both radiance and polarization, we also performed radiance-only retrieval to demonstrate the improvements gained by adding polarization in the inversion. Contrast analysis indicates that with polarization, retrieval error can be reduced by over 50% in PSD parameters, 10-30% in the refractive index, and 10-40% in SSA, which is consistent with the theoretical analysis presented in the companion paper of this two-part study.
Akberov, R F; Gorshkov, A N
1997-01-01
The X-ray endoscopic semiotics of precancerous gastric mucosal changes (epithelial dysplasia, intestinal epithelial rearrangement) was examined on the basis of 1574 gastric examinations. A diagnostic algorithm was developed for radiation studies in the diagnosis of the above pathology.
Three numerical algorithms were compared to provide a solution of a radiative transfer equation (RTE) for plane albedo (hemispherical reflectance) in semi-infinite one-dimensional plane-parallel layer. Algorithms were based on the invariant imbedding method and two different var...
Algorithm for Surface of Translation Attached Radiators (A-STAR). Volume 2. Users manual
NASA Astrophysics Data System (ADS)
Medgyesimitschang, L. N.; Putnam, J. M.
1982-05-01
A hierarchy of computer programs implementing the method of moments for bodies of translation (MM/BOT) is described. The algorithm treats the far-field radiation from off-surface and aperture antennas on finite-length open or closed bodies of arbitrary cross section. The near fields and antenna coupling on such bodies are computed. The theoretical development underlying the algorithm is described in Volume 1 of this report.
Comparison of imaging test algorithms following the first febrile urinary tract infection in children.
Tombesi, María M; Alconcher, Laura F; Lucarelli, Lucas; Ciccioli, Agustina
2017-08-01
To compare the diagnostic sensitivity, costs, and radiation doses of the imaging test algorithms developed by the Argentine Society of Pediatrics in 2003 and 2015 against British and American guidelines after the first febrile urinary tract infection (UTI). Inclusion criteria: children ≤ 2 years old with their first febrile UTI and normal ultrasound, voiding cystourethrography, and dimercaptosuccinic acid scintigraphy, according to the algorithm established by the Argentine Society of Pediatrics in 2003, treated between 2003 and 2010. The comparisons between algorithms were carried out through retrospective simulation. Eighty (80) patients met the inclusion criteria; 51 (63%) had vesicoureteral reflux (VUR); 6% of the cases were severe. Renal scarring was observed in 6 patients (7.5%). Cost: ARS 404,000. Radiation: 160 millisieverts. With the Argentine Society of Pediatrics' 2015 algorithm, the diagnosis of 4 VURs and 2 cases of renal scarring would have been missed, at a cost of ARS 301,800 and 124 millisieverts of radiation. British and American guidelines would have missed the diagnosis of all VURs and all cases of renal scarring, with related costs of ARS 23,000 and ARS 40,000, respectively, and no radiation. Intensive protocols are highly sensitive for VUR and renal scarring, but they imply high costs and radiation doses, and result in questionable benefits.
Non-local electron transport validation using 2D DRACO simulations
NASA Astrophysics Data System (ADS)
Cao, Duc; Chenhall, Jeff; Moll, Eli; Prochaska, Alex; Moses, Gregory; Delettrez, Jacques; Collins, Tim
2012-10-01
Comparison of 2D DRACO simulations, using a modified version (private communications with M. Marinak and G. Zimmerman, LLNL) of the Schurtz, Nicolai and Busquet (SNB) algorithm [Schurtz, Nicolai and Busquet, "A nonlocal electron conduction model for multidimensional radiation hydrodynamics codes," Phys. Plasmas 7, 4238 (2000)] for non-local electron transport, with direct-drive shock timing experiments [T. Boehly et al., "Multiple spherically converging shock waves in liquid deuterium," Phys. Plasmas 18, 092706 (2011)] and with the Goncharov non-local model [V. Goncharov et al., "Early stage of implosion in inertial confinement fusion: Shock timing and perturbation evolution," Phys. Plasmas 13, 012702 (2006)] in 1D LILAC will be presented. Addition of an improved SNB non-local electron transport algorithm in DRACO allows direct-drive simulations with no need for an electron conduction flux limiter. Validation with shock timing experiments that mimic the laser pulse profile of direct-drive ignition targets gives a higher confidence level in the predictive capability of the DRACO code. This research was supported by the University of Rochester Laboratory for Laser Energetics.
NASA Technical Reports Server (NTRS)
Want, P. H.; Deepak, A.
1985-01-01
The utilization of stratospheric aerosol and ozone measurements obtained from the NASA-developed SAM II and SAGE satellite instruments was investigated for the study of global-scale transports. The stratospheric aerosol data showed that, during the stratospheric warming of the winter of 1978-1979, the distribution of the zonal mean aerosol extinction ratio at northern high latitudes exhibited distinct changes. Dynamic processes might have played an important role in the maintenance of this zonal mean distribution. As for stratospheric ozone, large poleward ozone transports are shown to occur in the altitude region from 24 km to 38 km near 55N during this warming. This altitude region is shown to be a transition region for the phase relationship between ozone and temperature waves, which is an in-phase one above 38 km. It is shown that ozone solar heating in the upper stratosphere might enhance the damping rate of the planetary waves beyond that due to infrared radiation alone, in agreement with theoretical analyses and an earlier observational study.
NASA Technical Reports Server (NTRS)
Aksenov, A. F.; Burnazyan, A. I.
1985-01-01
The purpose and application of provisional standards for the radiation safety of crews and passengers in civil aviation are given. The effect of cosmic radiation exposure during flight on civil aviation air transport is described. Standard radiation levels and the conditions for radiation safety are discussed.
Zhou, Lu; Zhou, Linghong; Zhang, Shuxu; Zhen, Xin; Yu, Hui; Zhang, Guoqian; Wang, Ruihao
2014-01-01
Deformable image registration (DIR) is widely used in radiation therapy, for example in automatic contour generation, dose accumulation, and tumor growth or regression analysis. To achieve higher registration accuracy and faster convergence, an improved 'diffeomorphic demons' registration algorithm was proposed and validated. Based on Brox et al.'s gradient constancy assumption and Malis's efficient second-order minimization (ESM) algorithm, a grey-value gradient similarity term and a transformation error term were added to the demons energy function, and a formula was derived to calculate the update of the transformation field. The limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm was used to optimize the energy function so that the iteration number could be determined automatically. The proposed algorithm was validated using mathematically deformed images and physically deformed phantom images. Compared with the original 'diffeomorphic demons' algorithm, the proposed registration method achieves higher precision and a faster convergence speed. Because of differing scanning conditions in fractionated radiotherapy, the density ranges of the treatment image and the planning image may differ; in such cases, the improved demons algorithm can still achieve fast and accurate registration for radiotherapy.
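For orientation, the classical demons displacement update that algorithms of this family build on can be written in a few lines of NumPy. This is a sketch of Thirion's original update only, not the authors' improved energy with gradient constancy and ESM terms; the function name and smoothing parameter are illustrative assumptions.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def demons_update(fixed, moving, sigma=2.0, eps=1e-6):
        """One classical demons step: returns an incremental displacement
        field (uy, ux) that pushes `moving` toward `fixed` (2D images)."""
        diff = moving - fixed                   # intensity mismatch
        gy, gx = np.gradient(fixed)             # fixed-image gradient
        denom = gx**2 + gy**2 + diff**2 + eps   # Thirion's normalization
        uy = -diff * gy / denom
        ux = -diff * gx / denom
        # Gaussian smoothing regularizes the displacement field
        return gaussian_filter(uy, sigma), gaussian_filter(ux, sigma)

    fixed = np.random.rand(64, 64)
    moving = np.roll(fixed, 3, axis=0)
    uy, ux = demons_update(fixed, moving)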
NASA Astrophysics Data System (ADS)
Chen, Xiang; Li, Jingchao; Han, Hui; Ying, Yulong
2018-05-01
Because of the limitations of the traditional fractal box-counting dimension algorithm in extracting subtle features of radiation source signals, a dual improved generalized fractal box-counting dimension eigenvector algorithm is proposed. First, the radiation source signal was preprocessed, and a Hilbert transform was performed to obtain the instantaneous amplitude of the signal. Then, the improved fractal box-counting dimension of the signal's instantaneous amplitude was extracted as the first eigenvector. At the same time, the improved fractal box-counting dimension of the signal without the Hilbert transform was extracted as the second eigenvector. Finally, the dual improved fractal box-counting dimension eigenvectors formed the multi-dimensional eigenvectors used as subtle signal features, which were used for radiation source signal recognition by the grey relation algorithm. The experimental results show that, compared with the traditional fractal box-counting dimension algorithm and the single improved fractal box-counting dimension algorithm, the proposed dual improved fractal box-counting dimension algorithm can better extract the subtle distribution characteristics of the signal under different reconstructed phase spaces, and has a better recognition effect with good real-time performance.
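A bare-bones rendering of the two standard ingredients the method combines, instantaneous amplitude via the Hilbert transform and a box-counting dimension estimate, might look as follows. This sketches the traditional algorithms, not the paper's dual improved version, and the random signal is a stand-in.

    import numpy as np
    from scipy.signal import hilbert

    def box_counting_dimension(x, scales=(2, 4, 8, 16, 32, 64)):
        """Estimate the box-counting dimension of a 1D signal graph."""
        x = (x - x.min()) / (x.max() - x.min() + 1e-12)  # normalize to [0, 1]
        counts = []
        for k in scales:
            # partition the time axis into k segments; count vertical boxes
            # of side 1/k needed to cover the curve in each segment
            segs = np.array_split(x, k)
            counts.append(sum(int(np.ceil((s.max() - s.min()) * k)) + 1 for s in segs))
        # N(1/k) ~ k^D, so the slope of log(counts) vs log(scales) estimates D
        return np.polyfit(np.log(scales), np.log(counts), 1)[0]

    signal = np.random.randn(4096)          # stand-in for a radiation source signal
    amplitude = np.abs(hilbert(signal))     # instantaneous amplitude
    d1 = box_counting_dimension(amplitude)  # first eigen-feature
    d2 = box_counting_dimension(signal)     # second eigen-feature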
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niemkiewicz, J; Palmiotti, A; Miner, M
2014-06-01
Purpose: Metal in patients creates streak artifacts in CT images. When such images are used for radiation treatment planning, these artifacts make it difficult to identify internal structures and affect radiation dose calculations, which depend on HU numbers for inhomogeneity correction. This work quantitatively evaluates a new metal artifact reduction (MAR) CT image reconstruction algorithm (GE Healthcare CT-0521-04.13-EN-US DOC1381483) when metal is present. Methods: A Gammex Model 467 Tissue Characterization phantom was used. CT images were taken of this phantom on a GE Optima580RT CT scanner with and without steel and titanium plugs using both the standard and MAR reconstruction algorithms. HU values were compared pixel by pixel to determine whether the MAR algorithm altered the HUs of normal tissues when no metal is present, and to evaluate the effect of using the MAR algorithm when metal is present. Also, CT images of patients with internal metal objects using standard and MAR reconstruction algorithms were compared. Results: Comparing the standard and MAR reconstructed images of the phantom without metal, 95.0% of pixels were within ±35 HU and 98.0% of pixels were within ±85 HU. The MAR reconstruction algorithm also showed significant improvement in maintaining the HUs of non-metallic regions in the images taken of the phantom with metal. HU gamma analysis (2%, 2 mm) of metal vs. non-metal phantom imaging using standard reconstruction resulted in an 84.8% pass rate, compared to 96.6% for the MAR reconstructed images. CT images of patients with metal show significant artifact reduction when reconstructed with the MAR algorithm. Conclusion: CT imaging using the MAR reconstruction algorithm provides improved visualization of internal anatomy and more accurate HUs when metal is present compared to the standard reconstruction algorithm. MAR reconstructed CT images provide qualitative and quantitative improvements over current reconstruction algorithms, thus improving radiation treatment planning accuracy.
Requirements Definition for ORNL Trusted Corridors Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walker, Randy M; Hill, David E; Smith, Cyrus M
2008-02-01
The ORNL Trusted Corridors Project has several other names: SensorNet Transportation Pilot; Identification and Monitoring of Radiation (in commerce) Shipments (IMR(ic)S); and Southeastern Transportation Corridor Pilot (SETCP). The project involves acquisition and analysis of transportation data at two mobile and three fixed inspection stations in five jurisdictions (Kentucky, Mississippi, South Carolina, Tennessee, and Washington DC). Collaborators include the State Police organizations that are responsible for highway safety, law enforcement, and incident response. The three states with fixed weigh-station deployments (KY, SC, TN) are interested in coordination of this effort for highway safety, law enforcement, and sorting/targeting/interdiction of potentially non-compliant vehicles/persons/cargo. The Domestic Nuclear Detection Office (DNDO) in the U.S. Department of Homeland Security (DHS) is interested in these deployments as a pilot test (SETCP) to identify Improvised Nuclear Devices (INDs) in highway transport. However, the level of DNDO integration among these state deployments is presently uncertain. Moreover, DHS issues are considered secondary by the states, which perceive this work as an opportunity to leverage these (new) dual-use technologies for state needs. In addition, present experience shows that radiation detectors alone cannot detect DHS-identified IND threats. Continued SETCP success depends on the level of integration of current state/local police operations with the new DHS task of detecting IND threats, in addition to emergency preparedness and homeland security. This document describes the enabling components for continued SETCP development and success, including: sensors and their use at existing deployments (Section 1); personnel training (Section 2); concept of operations (Section 3); knowledge discovery from the copious data (Section 4); smart data collection, integration and database development, advanced algorithms for multiple sensors, and network communications (Section 5); and harmonization of local, state, and Federal procedures and protocols (Section 6).
Radiation Transport Tools for Space Applications: A Review
NASA Technical Reports Server (NTRS)
Jun, Insoo; Evans, Robin; Cherng, Michael; Kang, Shawn
2008-01-01
This slide presentation contains a brief discussion of nuclear transport codes widely used in the space radiation community for shielding and scientific analyses. Seven radiation transport codes are addressed, and the two general methods (the Monte Carlo method and the deterministic method) are briefly reviewed.
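To make the contrast concrete: a Monte Carlo code estimates transport quantities by sampling individual particle histories rather than solving a transport equation. A toy example for uncollided photon transmission through a slab shield follows; the attenuation coefficient and thickness are illustrative assumptions, and scattering is ignored.

    import random

    def slab_transmission(mu=0.2, thickness=10.0, histories=100_000):
        """Monte Carlo estimate of uncollided photon transmission through a
        slab: sample exponential path lengths with attenuation coefficient
        mu (1/cm) and count photons whose first interaction lies beyond the
        slab of the given thickness (cm)."""
        transmitted = 0
        for _ in range(histories):
            path = random.expovariate(mu)   # distance to first interaction
            if path > thickness:
                transmitted += 1
        return transmitted / histories

    # Analytic check: exp(-mu * thickness) = exp(-2) ≈ 0.135
    print(slab_transmission())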
Path Toward a Unified Geometry for Radiation Transport
NASA Astrophysics Data System (ADS)
Lee, Kerry
The Direct Accelerated Geometry for Radiation Analysis and Design (DAGRAD) element of the RadWorks Project under Advanced Exploration Systems (AES) within the Space Technology Mission Directorate (STMD) of NASA will enable new designs and concepts of operation for radiation risk assessment, mitigation and protection. This element is designed to produce a solution that will allow NASA to calculate the transport of space radiation through complex CAD models using state-of-the-art analytic and Monte Carlo radiation transport codes. Due to the inherent hazard of astronaut and spacecraft exposure to ionizing radiation in low-Earth orbit (LEO) or in deep space, risk analyses must be performed for all crew vehicles and habitats. Incorporating these analyses into the design process can minimize the mass needed solely for radiation protection. Transport of the radiation fields as they pass through shielding and body materials can be simulated using Monte Carlo techniques or described by the Boltzmann equation, which is obtained by balancing changes in particle fluxes as they traverse a small volume of material with the gains and losses caused by atomic and nuclear collisions. Deterministic codes that solve the Boltzmann transport equation, such as HZETRN (the high charge and energy transport code developed by NASA LaRC), are generally computationally faster than Monte Carlo codes such as FLUKA, GEANT4, MCNP(X) or PHITS; however, they are currently limited to transport in one dimension, which poorly represents the secondary light ion and neutron radiation fields. NASA currently uses the HZETRN space radiation transport software, both because it is computationally efficient and because proven methods have been developed for using this software to analyze complex geometries. Although Monte Carlo codes describe the relevant physics in a fully three-dimensional manner, their computational costs have thus far prevented their widespread use for analysis of complex CAD models, leading to the creation and maintenance of toolkit-specific simplistic geometry models. The work presented here builds on the Direct Accelerated Geometry Monte Carlo (DAGMC) toolkit developed for use with the Monte Carlo N-Particle (MCNP) transport code. The workflow for doing radiation transport on CAD models using MCNP and FLUKA has been demonstrated, and the results of analyses on realistic spacecraft/habitats will be presented. Future work is planned that will further automate this process and enable the use of multiple radiation transport codes on identical geometry models imported from CAD. This effort will enhance the modeling tools used by NASA to accurately evaluate the astronaut space radiation risk and accurately determine the protection provided by as-designed exploration mission vehicles and habitats.
Los Alamos radiation transport code system on desktop computing platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Briesmeister, J.F.; Brinkley, F.W.; Clark, B.A.
The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. These codes were originally developed many years ago and have undergone continual improvement. With a large initial effort and continued vigilance, the codes are easily portable from one type of hardware to another. The performance of scientific workstations (SWS) has evolved to the point that such platforms can be used routinely to perform sophisticated radiation transport calculations. As personal computer (PC) performance approaches that of the SWS, the hardware options for desktop radiation transport calculations expand considerably. The current status of the radiation transport codes within the LARTCS is described: MCNP, SABRINA, LAHET, ONEDANT, TWODANT, TWOHEX, and ONELD. Specifically, the authors discuss the hardware systems on which the codes run and present code performance comparisons for various machines.
Lunar Habitat Optimization Using Genetic Algorithms
NASA Technical Reports Server (NTRS)
SanScoucie, M. P.; Hull, P. V.; Tinker, M. L.; Dozier, G. V.
2007-01-01
Long-duration surface missions to the Moon and Mars will require bases to accommodate habitats for the astronauts. Transporting the materials and equipment required to build the necessary habitats is costly and difficult. The materials chosen for the habitat walls play a direct role in protection against hazards such as heat loss, meteoroid impact, and radiation. Choosing the best materials, their configuration, and the amount required is extremely difficult due to the immense size of the design region. Clearly, an optimization method is warranted for habitat wall design. Standard optimization techniques are not suitable for problems with such large search spaces; therefore, a habitat wall design tool utilizing genetic algorithms (GAs) has been developed. GAs use a "survival of the fittest" philosophy in which the most fit individuals are more likely to survive and reproduce. This habitat design optimization tool is a multiobjective formulation of up-mass, heat loss, structural analysis, meteoroid impact protection, and radiation protection. This Technical Publication presents the research and development of this tool as well as a technique for finding the optimal GA search parameters.
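A minimal sketch of the generational GA loop such a tool builds on follows; the fitness function is a stand-in, not the publication's multiobjective formulation of up-mass, heat loss, structure, meteoroid, and radiation protection.

    import random

    def fitness(x):
        # Stand-in objective: pretend smaller "mass" with adequate "shielding"
        # is better; a real tool would combine up-mass, heat loss, structural,
        # meteoroid, and radiation criteria into one score.
        mass, shielding = sum(x), min(x)
        return shielding - 0.1 * mass

    def evolve(pop_size=50, genes=5, generations=100):
        pop = [[random.uniform(0, 10) for _ in range(genes)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)   # "survival of the fittest"
            parents = pop[: pop_size // 2]
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, genes)  # one-point crossover
                child = a[:cut] + b[cut:]
                if random.random() < 0.1:         # mutation
                    child[random.randrange(genes)] = random.uniform(0, 10)
                children.append(child)
            pop = parents + children
        return max(pop, key=fitness)

    best_wall = evolve()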
NASA Technical Reports Server (NTRS)
Yost, Christopher R.; Minnis, Patrick; Trepte, Qing Z.; Palikonda, Rabindra; Ayers, Jeffrey K.; Spangenberg, Douglas A.
2012-01-01
With geostationary satellite data it is possible to have a continuous record of the diurnal cycles of cloud properties for a large portion of the globe. Daytime cloud property retrieval algorithms are typically superior to nighttime algorithms because daytime methods utilize measurements of reflected solar radiation. However, reflected solar radiation is difficult to model accurately for high solar zenith angles, where the amount of incident radiation is small. Clear and cloudy scenes can exhibit very small differences in reflected radiation, and threshold-based cloud detection methods have more difficulty setting the proper thresholds for accurate cloud detection. Because top-of-atmosphere radiances are typically modeled more accurately outside the terminator region, information from previous scans can help guide cloud detection near the terminator. This paper presents an algorithm that uses cloud fraction and clear and cloudy infrared brightness temperatures from previous satellite scan times to improve the performance of a threshold-based cloud mask near the terminator. Comparisons of daytime, nighttime, and terminator cloud fraction derived from Geostationary Operational Environmental Satellite (GOES) radiance measurements show that the algorithm greatly reduces the number of false cloud detections and smooths the transition from the daytime to the nighttime cloud detection algorithm. Comparisons with Geoscience Laser Altimeter System (GLAS) data show that using this algorithm decreases the number of false detections by approximately 20 percentage points.
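As a schematic of how prior-scan information can steer a threshold test near the terminator, consider the toy mask below; the blending weight and variable names are assumptions for illustration, not the paper's actual formulation.

    import numpy as np

    def terminator_mask(bt11, clear_bt_prev, cloudy_bt_prev, weight=0.5):
        """Flag pixels as cloudy when their 11-micron brightness temperature
        falls below a threshold interpolated between clear-sky and cloudy
        estimates carried over from previous satellite scans."""
        threshold = weight * clear_bt_prev + (1.0 - weight) * cloudy_bt_prev
        return bt11 < threshold          # boolean cloud mask

    bt11 = np.array([285.0, 262.0, 240.0])   # observed BTs (K)
    mask = terminator_mask(bt11, clear_bt_prev=288.0, cloudy_bt_prev=250.0)
    # -> [False, True, True]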
Application of Multivariate Modeling for Radiation Injury Assessment: A Proof of Concept
Bolduc, David L.; Villa, Vilmar; Sandgren, David J.; Ledney, G. David; Blakely, William F.; Bünger, Rolf
2014-01-01
Multivariate radiation injury estimation algorithms were formulated for estimating severe hematopoietic acute radiation syndrome (H-ARS) injury (i.e., response category three, or RC3) in a rhesus monkey total-body irradiation (TBI) model. Classical CBC and serum chemistry blood parameters were examined prior to irradiation (d 0) and on d 7, 10, 14, 21, and 25 after irradiation involving 24 nonhuman primates (NHP) (Macaca mulatta) given 6.5-Gy 60Co γ-ray (0.4 Gy min-1) TBI. A correlation matrix was formulated with the RC3 severity level designated as the "dependent variable" and independent variables down-selected based on their radioresponsiveness and relatively low multicollinearity using stepwise linear regression analyses. Final candidate independent variables included CBC counts (absolute numbers of neutrophils, lymphocytes, and platelets) in formulating the "CBC" RC3 estimation algorithm. Additionally, the formulation of a diagnostic CBC and serum chemistry "CBC-SCHEM" RC3 algorithm expanded upon the CBC algorithm model with the addition of hematocrit and the serum enzyme levels of aspartate aminotransferase, creatine kinase, and lactate dehydrogenase. Both algorithms estimated RC3 with over 90% predictive power. Only the CBC-SCHEM RC3 algorithm, however, met the critical three assumptions of linear least squares, demonstrating slightly greater precision for radiation injury estimation and significantly decreased prediction error, indicating increased statistical robustness. PMID:25165485
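A skeletal version of this kind of multivariate estimator, ordinary least squares on down-selected blood parameters, can be written directly with NumPy; the synthetic data below stand in for the NHP measurements and carry no physiological meaning.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 24                                      # subjects (illustrative)
    neutrophils = rng.lognormal(0.5, 0.3, n)
    lymphocytes = rng.lognormal(0.2, 0.3, n)
    platelets = rng.lognormal(5.0, 0.2, n)
    severity = rng.uniform(0, 3, n)             # stand-in for RC severity

    # Design matrix with intercept; fit by linear least squares
    X = np.column_stack([np.ones(n), neutrophils, lymphocytes, platelets])
    beta, *_ = np.linalg.lstsq(X, severity, rcond=None)
    predicted = X @ beta                        # RC estimate per subject
    r2 = 1 - np.sum((severity - predicted) ** 2) / np.sum((severity - severity.mean()) ** 2)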
AccuRT: A versatile tool for radiative transfer simulations in the coupled atmosphere-ocean system
NASA Astrophysics Data System (ADS)
Hamre, Børge; Stamnes, Snorre; Stamnes, Knut; Stamnes, Jakob
2017-02-01
Reliable, accurate, and efficient modeling of the transport of electromagnetic radiation in turbid media has important applications in the study of the Earth's climate by remote sensing. For example, such modeling is needed to develop forward-inverse methods used to quantify types and concentrations of aerosol and cloud particles in the atmosphere, the dissolved organic and particulate biogeochemical matter in lakes, rivers, coastal, and open-ocean waters. It is also needed to simulate the performance of remote sensing detectors deployed on aircraft, balloons, and satellites as well as radiometric detectors deployed on buoys, gliders and other aquatic observing systems. Accurate radiative transfer modeling is also required to compute irradiances and scalar irradiances that are used to compute warming/cooling and photolysis rates in the atmosphere and primary production and warming/cooling rates in the water column. AccuRT is a radiative transfer model for the coupled atmosphere-water system that is designed to be a versatile tool for researchers in the ocean optics and remote sensing communities. It addresses the needs of researchers interested in analyzing irradiance and radiance measurements in the field and laboratory as well as those interested in making simulations of the top-of-the-atmosphere radiance in support of remote sensing algorithm development.
NASA Astrophysics Data System (ADS)
Zhao, Lei; Lee, Xuhui; Liu, Shoudong
2013-09-01
Solar radiation at the Earth's surface is an important driver of meteorological and ecological processes. The objective of this study is to evaluate the accuracy of the reanalysis solar radiation produced by NARR (North American Regional Reanalysis) and MERRA (Modern-Era Retrospective Analysis for Research and Applications) against FLUXNET measurements in North America. We found that both assimilation systems systematically overestimated the surface solar radiation flux on the monthly and annual scale, with an average bias error of +37.2 W m-2 for NARR and +20.2 W m-2 for MERRA. The bias errors were larger under cloudy skies than under clear skies. A postreanalysis algorithm consisting of empirical relationships between model bias, a clearness index, and site elevation was proposed to correct the model errors. Results show that the algorithm can remove the systematic bias errors for both the FLUXNET calibration sites (sites used to establish the algorithm) and independent validation sites. After correction, the average annual mean bias errors were reduced to +1.3 W m-2 for NARR and +2.7 W m-2 for MERRA. Applying the correction algorithm to the global domain of MERRA brought the global mean surface incoming shortwave radiation down by 17.3 W m-2 to 175.5 W m-2. Under the constraint of the energy balance, other radiation and energy balance terms at the Earth's surface, estimated from independent global data products, also support the need for a downward adjustment of the MERRA surface solar radiation.
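A schematic of such an empirical correction, regressing model bias on a clearness index and site elevation and then subtracting the predicted bias, might look like the following; the linear form and variable names are assumptions for illustration.

    import numpy as np

    def fit_bias_model(bias, clearness, elevation_km):
        """Least-squares fit: bias ≈ b0 + b1*clearness + b2*elevation."""
        X = np.column_stack([np.ones_like(bias), clearness, elevation_km])
        coeffs, *_ = np.linalg.lstsq(X, bias, rcond=None)
        return coeffs

    def correct(flux_model, clearness, elevation_km, coeffs):
        """Subtract the predicted bias from the reanalysis flux (W m-2)."""
        b0, b1, b2 = coeffs
        return flux_model - (b0 + b1 * clearness + b2 * elevation_km)

    # Usage: coeffs = fit_bias_model(bias, ci, elev) at calibration sites,
    # then corrected = correct(flux, ci, elev, coeffs) everywhere else.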
Svatos, M.; Zankowski, C.; Bednarz, B.
2016-01-01
Purpose: The future of radiation therapy will require advanced inverse planning solutions to support single-arc, multiple-arc, and "4π" delivery modes, which present unique challenges in finding an optimal treatment plan over a vast search space, while still preserving dosimetric accuracy. The successful clinical implementation of such methods would benefit from Monte Carlo (MC) based dose calculation methods, which can offer improvements in dosimetric accuracy when compared to deterministic methods. The standard method for MC-based treatment planning optimization leverages the accuracy of the MC dose calculation and the efficiency of well-developed optimization methods by precalculating the fluence-to-dose relationship within a patient with MC methods and subsequently optimizing the fluence weights. However, the sequential nature of this implementation is computationally time consuming and memory intensive. Methods to reduce the overhead of the MC precalculation have been explored in the past, demonstrating promising reductions in computational time overhead, but with limited impact on the memory overhead due to the sequential nature of the dose calculation and fluence optimization. The authors propose an entirely new form of "concurrent" Monte Carlo treatment plan optimization: a platform which optimizes the fluence during the dose calculation, reduces wasted computation time spent on beamlets that weakly contribute to the final dose distribution, and requires only a low memory footprint to function. In this initial investigation, the authors explore the key theoretical and practical considerations of optimizing fluence in such a manner. Methods: The authors present a novel derivation and implementation of a gradient descent algorithm that allows for optimization during MC particle transport, based on highly stochastic information generated through particle transport of very few histories. A gradient rescaling and renormalization algorithm, and the concept of momentum from stochastic gradient descent, were used to address obstacles unique to performing gradient descent fluence optimization during MC particle transport. The authors have applied their method to two simple geometrical phantoms and one clinical patient geometry to examine the capability of this platform to generate conformal plans and to assess its computational scaling and efficiency, respectively. Results: The authors obtain a reduction of at least 50% in total histories transported in their investigation compared to a theoretical unweighted beamlet calculation and subsequent fluence optimization method, and observe a roughly fixed optimization time overhead consisting of ∼10% of the total computation time in all cases. Finally, the authors demonstrate a negligible increase in memory overhead of ∼7-8 MB to allow for optimization of a clinical patient geometry surrounded by 36 beams using their platform. Conclusions: This study demonstrates a fluence optimization approach which could significantly improve the development of next-generation radiation therapy solutions while incurring minimal additional computational overhead. PMID:27277051
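The core numerical idea, stochastic gradient descent with momentum on fluence weights driven by noisy gradient estimates from very few histories, can be sketched as follows. The gradient function is a placeholder, and the rescaling and renormalization steps here are simplified stand-ins for the paper's algorithm.

    import numpy as np

    def sgd_momentum_fluence(grad_fn, n_beamlets, steps=200, lr=0.05, beta=0.9):
        """Minimal SGD-with-momentum loop over nonnegative fluence weights.
        grad_fn(w) returns a noisy gradient estimate, e.g. from transporting
        a small batch of Monte Carlo histories."""
        w = np.ones(n_beamlets)                  # initial fluence weights
        v = np.zeros(n_beamlets)                 # momentum buffer
        for _ in range(steps):
            g = grad_fn(w)                       # highly stochastic gradient
            g = g / (np.linalg.norm(g) + 1e-12)  # crude rescaling step
            v = beta * v + (1 - beta) * g
            w = np.maximum(w - lr * v, 0.0)      # keep fluence nonnegative
        return w / w.sum()                       # renormalize weights

    # Toy usage: a quadratic objective standing in for a dose mismatch term.
    target = np.linspace(0.1, 1.0, 8)
    noisy_grad = lambda w: (w - target) + 0.3 * np.random.randn(8)
    weights = sgd_momentum_fluence(noisy_grad, 8)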
Fuzzy multi objective transportation problem – evolutionary algorithm approach
NASA Astrophysics Data System (ADS)
Karthy, T.; Ganesan, K.
2018-04-01
This paper deals with the fuzzy multi-objective transportation problem. A fuzzy optimal compromise solution is obtained by using a Fuzzy Genetic Algorithm. A numerical example is provided to illustrate the methodology.
Data Filtering of Western Hemisphere GOES Wildfire ABBA Products
NASA Astrophysics Data System (ADS)
Theisen, M.; Prins, E.; Schmidt, C.; Reid, J. S.; Hunter, J.; Westphal, D.
2002-05-01
The Fire Locating and Modeling of Burning Emissions (FLAMBE) project was developed to model biomass burning emissions, transport, and radiative effects in real time. The model relies on fire data from the Geostationary Operational Environmental Satellites (GOES-8, GOES-10) generated by the Wildfire Automated Biomass Burning Algorithm (WF ABBA). To develop the most accurate modeling system, the data set needs to be filtered to distinguish true fire pixels from false alarms. False alarms occur due to reflection of solar radiation off standing water, surface structure variances, and heat anomalies. The Reoccurring Fire Filtering algorithm (ReFF) was developed to address such false alarms by filtering data based on reoccurrence, location in relation to region and satellite, and heat intensity. WF ABBA data for the year 2000 during the peak of the burning season were analyzed using ReFF. The analysis resulted in a 45% decrease in total fire pixel occurrence in North America but only a 15% decrease in South America. The lower percentage decrease in South America results from fires burning for longer periods of time, less surface variance, and higher heat intensity of fires in that region. Fires are also so prevalent in the region that multiple fires may coexist in the same 4-kilometer pixel.
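In outline, a reoccurrence-based filter retains a detection only if the same pixel location registers fire in enough scans; one simple rendering follows, with a threshold and data layout that are assumptions rather than ReFF's actual parameters.

    from collections import Counter

    def recurrence_filter(detections, min_repeats=2):
        """detections: iterable of (scan_time, lat_idx, lon_idx) fire pixels.
        Keep pixels whose (lat, lon) cell appears in at least `min_repeats`
        scans, treating one-off detections as likely false alarms."""
        counts = Counter((la, lo) for _, la, lo in detections)
        return [d for d in detections if counts[(d[1], d[2])] >= min_repeats]

    # Example: pixel (10, 20) recurs and survives; (3, 4) is filtered out.
    scans = [(0, 10, 20), (1, 10, 20), (1, 3, 4)]
    print(recurrence_filter(scans))   # -> [(0, 10, 20), (1, 10, 20)]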
The Fire Locating and Modeling of Burning Emissions (FLAMBE) Project
NASA Astrophysics Data System (ADS)
Reid, J. S.; Prins, E. M.; Westphal, D.; Richardson, K.; Christopher, S.; Schmidt, C.; Theisen, M.; Eck, T.; Reid, E. A.
2001-12-01
The Fire Locating and Modeling of Burning Emissions (FLAMBE) project was initiated by NASA, the US Navy, and NOAA to monitor biomass burning and burning emissions on a global scale. The idea behind the mission is to integrate remote sensing data with global and regional transport models in real time, providing the scientific community with smoke and fire products for planning and research. FLAMBE currently utilizes real-time satellite data from the GOES satellites; fire products based on the Wildfire Automated Biomass Burning Algorithm (WF_ABBA) are generated for the Western Hemisphere every 30 minutes with only a 90-minute processing delay. We are currently collaborating with other investigators to gain global coverage. Once generated, the fire products are used to input smoke fluxes into the NRL Aerosol Analysis and Prediction System, where advection forecasts are performed for up to 6 days. Subsequent radiative transfer calculations are used to estimate top-of-atmosphere and surface radiative forcing as well as surface-layer visibility. Near-real-time validation is performed using field data collected by Aerosol Robotic Network (AERONET) Sun photometers. In this paper we fully describe the FLAMBE project and data availability. Preliminary results from the previous year will also be presented, with an emphasis on the development of algorithms to determine smoke emission fluxes from individual fire products. Comparisons to AERONET Sun photometer data will be made.
Ionizing radiation, ion transports, and radioresistance of cancer cells
Huber, Stephan M.; Butz, Lena; Stegen, Benjamin; Klumpp, Dominik; Braun, Norbert; Ruth, Peter; Eckert, Franziska
2013-01-01
The standard treatment of many tumor entities comprises fractionated radiation therapy which applies ionizing radiation to the tumor-bearing target volume. Ionizing radiation causes double-strand breaks in the DNA backbone that result in cell death if the number of DNA double-strand breaks exceeds the DNA repair capacity of the tumor cell. Ionizing radiation reportedly does not only act on the DNA in the nucleus but also on the plasma membrane. In particular, ionizing radiation-induced modifications of ion channels and transporters have been reported. Importantly, these altered transports seem to contribute to the survival of the irradiated tumor cells. The present review article summarizes our current knowledge on the underlying mechanisms and introduces strategies to radiosensitize tumor cells by targeting plasma membrane ion transports. PMID:23966948
Genetic algorithm driven spectral shaping of supercontinuum radiation in a photonic crystal fiber
NASA Astrophysics Data System (ADS)
Michaeli, Linor; Bahabad, Alon
2018-05-01
We employ a genetic algorithm to control a pulse-shaping system pumping a nonlinear photonic crystal fiber with ultrashort pulses. With this system, we are able to modify the spectrum of the generated supercontinuum (SC) radiation to yield narrow Gaussian-like features around pre-selected wavelengths over the whole SC spectrum.
Simulation of Automatic Incidents Detection Algorithm on the Transport Network
ERIC Educational Resources Information Center
Nikolaev, Andrey B.; Sapego, Yuliya S.; Jakubovich, Anatolij N.; Berner, Leonid I.; Ivakhnenko, Andrey M.
2016-01-01
Management of traffic incidents is a functional part of the whole approach to solving traffic problems in the framework of intelligent transport systems. Development of an effective process of traffic incident management is an important part of the transport system. In this research, an algorithm based on fuzzy logic is suggested to detect traffic…
Sensitivity of CO2 Simulation in a GCM to the Convective Transport Algorithms
NASA Technical Reports Server (NTRS)
Zhu, Z.; Pawson, S.; Collatz, G. J.; Gregg, W. W.; Kawa, S. R.; Baker, D.; Ott, L.
2014-01-01
Convection plays an important role in the transport of heat, moisture, and trace gases. In this study, we simulated CO2 concentrations with an atmospheric general circulation model (GCM). Three different convective transport algorithms were used: one is a modified Arakawa-Schubert scheme that was native to the GCM; the other two, used in two off-line chemical transport models (CTMs), were added to the GCM here for comparison purposes. Advanced CO2 surface fluxes were used for the simulations. The results were compared to a large quantity of CO2 observation data. We find that the simulation results are sensitive to the convective transport algorithms. Overall, the three simulations are quite realistic and similar to each other in remote marine regions, but are significantly different in some land regions with strong fluxes, such as the Amazon and Siberia, during the convection seasons. Large biases against CO2 measurements are found in these regions in the control run, which uses the original GCM. The simulation with the simple diffusive algorithm is better. The difference between the two simulations is related to their very different convective transport speeds.
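For intuition, the simplest diffusive treatment of convective tracer transport relaxes a column toward uniformity at a rate set by a diffusivity; a one-dimensional explicit update, with grid spacing, coefficient, and time step chosen arbitrarily for illustration, is:

    import numpy as np

    def diffuse_column(q, K=30.0, dz=500.0, dt=60.0, steps=100):
        """Explicit vertical diffusion of a tracer mixing ratio q (1D column).
        K: diffusivity (m^2/s), dz: layer thickness (m), dt: time step (s).
        Stability requires K*dt/dz**2 <= 0.5."""
        q = q.astype(float).copy()
        r = K * dt / dz**2
        for _ in range(steps):
            q[1:-1] += r * (q[2:] - 2 * q[1:-1] + q[:-2])
            q[0] += r * (q[1] - q[0])      # no-flux bottom boundary
            q[-1] += r * (q[-2] - q[-1])   # no-flux top boundary
        return q

    column = np.zeros(40)
    column[0] = 400.0                      # CO2-rich surface layer (ppm)
    mixed = diffuse_column(column)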
NASA Astrophysics Data System (ADS)
Bridge, J. W.; Dormand, J.; Cooper, J.; Judson, D.; Boston, A. J.; Bankhead, M.; Onda, Y.
2014-12-01
The legacy to date of the nuclear disaster at Fukushima Dai-ichi, Japan, has emphasised the fundamental importance of high quality radiation measurements in soils and plant systems. Current-generation radiometers based on coded-aperture collimation are limited in their ability to locate sources of radiation in three dimensions, and require a relatively long measurement time due to the poor efficiency of the collimation system. The quality of data they can provide to support biogeochemical process models in such systems is therefore often compromised. In this work we report proof-of-concept experiments demonstrating the potential of an alternative approach to the measurement of environmentally important radionuclides (in particular 137Cs) in quartz sand and soils from the Fukushima exclusion zone. Compton-geometry imaging radiometers harness the scattering of incident radiation between two detectors to yield significant improvements in detection efficiency, energy resolution and spatial location of radioactive sources in a 180° field of view. To our knowledge, we are reporting their first application to environmentally relevant systems with low-activity, dispersed sources, significant background radiation and, crucially, movement over time. We are using a simple laboratory column setup to conduct one-dimensional transport experiments for 139Ce and 137Cs in quartz sand and in homogenized repacked Fukushima soils. Polypropylene columns of 15 cm length and 1.6 cm internal diameter were filled with sand or soil and saturated slowly with tracer-free aqueous solutions. Radionuclides were introduced as 2 mL pulses (step-up step-down) at the column inlet. Data were collected continuously throughout the transport experiment and then binned into sequential time intervals to resolve the total activity in the column and its progressive movement through the sand/soil. The objective of this proof-of-concept work is to establish detection limits, optimise image reconstruction algorithms, and develop a novel approach to time-lapse quantification of radionuclide dynamics in the soil-plant system. The aim is to underpin the development of a new generation of Compton radiometers equipped to provide high resolution, dynamic measurements of radionuclides in terrestrial biogeochemical environments.
NASA Astrophysics Data System (ADS)
Fogliata, Antonella; Nicolini, Giorgia; Clivio, Alessandro; Vanetti, Eugenio; Mancosu, Pietro; Cozzi, Luca
2011-05-01
This corrigendum intends to clarify some important points that were not clearly or properly addressed in the original paper, and for which the authors apologize. The original description of the first Acuros algorithm is from the developers, published in Physics in Medicine and Biology by Vassiliev et al (2010) in the paper entitled 'Validation of a new grid-based Boltzmann equation solver for dose calculation in radiotherapy with photon beams'. The main equations describing the algorithm reported in our paper, implemented as the 'Acuros XB Advanced Dose Calculation Algorithm' in the Varian Eclipse treatment planning system, were originally described (for the original Acuros algorithm) in the above mentioned paper by Vassiliev et al. The intention of our description in our paper was to give readers an overview of the algorithm, not pretending to have authorship of the algorithm itself (used as implemented in the planning system). Unfortunately our paper was not clear, particularly in not allocating full credit to the work published by Vassiliev et al on the original Acuros algorithm. Moreover, it is important to clarify that we have not adapted any existing algorithm, but have used the Acuros XB implementation in the Eclipse planning system from Varian. In particular, the original text of our paper should have been as follows: On page 1880 the sentence 'A prototype LBTE solver, called Attila (Wareing et al 2001), was also applied to external photon beam dose calculations (Gifford et al 2006, Vassiliev et al 2008, 2010). Acuros XB builds upon many of the methods in Attila, but represents a ground-up rewrite of the solver where the methods were adapted especially for external photon beam dose calculations' should be corrected to 'A prototype LBTE solver, called Attila (Wareing et al 2001), was also applied to external photon beam dose calculations (Gifford et al 2006, Vassiliev et al 2008). A new algorithm called Acuros, developed by the Transpire Inc. group, was built upon many of the methods in Attila, but represents a ground-up rewrite of the solver where the methods were especially adapted for external photon beam dose calculations, and described in Vassiliev et al (2010). Acuros XB is the Varian implementation of the original Acuros algorithm in the Eclipse planning system'. On page 1881, the sentence 'Monte Carlo and explicit LBTE solution, with sufficient refinement, will converge on the same solution. However, both methods produce errors (inaccuracies). In explicit LBTE solution methods, errors are primarily systematic, and result from discretization of the solution variables in space, angle, and energy. In both Monte Carlo and explicit LBTE solvers, a trade-off exists between speed and accuracy: reduced computational time may be achieved when less stringent accuracy criteria are specified, and vice versa' should cite the reference Vassiliev et al (2010). On page 1882, the beginning of the sub-paragraph The radiation transport model should start with 'The following description of the Acuros XB algorithm is as outlined by Vassiliev et al (2010) and reports the main steps of the radiation transport model as implemented in Eclipse'. The authors apologize for this lack of clarity in our published paper, and trust that this corrigendum gives full credit to Vassiliev et al in their earlier paper, with respect to previous work on the Acuros algorithm. 
However, we wish to note that the entire contents of the data and results published in our paper are original and the work of the listed authors.
References
Gifford K A, Horton J L Jr, Wareing T A, Failla G and Mourtada F 2006 Comparison of a finite-element multigroup discrete-ordinates code with Monte Carlo for radiotherapy calculations Phys. Med. Biol. 51 2253-65
Vassiliev O N, Wareing T A, Davis I M, McGhee J, Barnett D, Horton J L, Gifford K, Failla G, Titt U and Mourtada F 2008 Feasibility of a multigroup deterministic solution method for three-dimensional radiotherapy dose calculations Int. J. Radiat. Oncol. Biol. Phys. 72 220-7
Vassiliev O N, Wareing T A, McGhee J, Failla G, Salehpour M R and Mourtada F 2010 Validation of a new grid-based Boltzmann equation solver for dose calculation in radiotherapy with photon beams Phys. Med. Biol. 55 581-98
Wareing T A, McGhee J M, Morel J E and Pautz S D 2001 Discontinuous finite element Sn methods on three-dimensional unstructured grids Nucl. Sci. Eng. 138 256-68
Kim, Hyungjin; Park, Chang Min; Song, Yong Sub; Lee, Sang Min; Goo, Jin Mo
2014-05-01
To evaluate the influence of radiation dose settings and reconstruction algorithms on the measurement accuracy and reproducibility of semi-automated pulmonary nodule volumetry. CT scans were performed on a chest phantom containing various nodules (10 and 12 mm; +100, -630 and -800 HU) at 120 kVp with tube current-time settings of 10, 20, 50, and 100 mAs. Each CT was reconstructed using filtered back projection (FBP), iDose(4) and iterative model reconstruction (IMR). Semi-automated volumetry was performed by two radiologists using commercial volumetry software for nodules in each CT dataset. Noise, contrast-to-noise ratio and signal-to-noise ratio of the CT images were also obtained. The absolute percentage measurement errors and differences were then calculated for volume and mass. The influence of radiation dose and reconstruction algorithm on measurement accuracy, reproducibility and objective image quality metrics was analyzed using generalized estimating equations. Measurement accuracy and reproducibility of nodule volume and mass were not significantly associated with CT radiation dose settings or reconstruction algorithms (p > 0.05). Objective image quality metrics of the CT images were superior with IMR compared with FBP or iDose(4) at all radiation dose settings (p < 0.05). Semi-automated nodule volumetry can be applied to low- or ultralow-dose chest CT with use of a novel iterative reconstruction algorithm without losing measurement accuracy and reproducibility.
Scalable Domain Decomposed Monte Carlo Particle Transport
NASA Astrophysics Data System (ADS)
O'Brien, Matthew Joseph
In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation. The main algorithms we consider are:
• Domain decomposition of constructive solid geometry: enables extremely large calculations in which the background geometry is too large to fit in the memory of a single computational node.
• Load balancing: keeps the workload per processor as even as possible so the calculation runs efficiently.
• Global particle find: if particles are on the wrong processor, globally resolve their locations to the correct processor based on particle coordinate and background domain.
• Supporting algorithms: visualizing constructive solid geometry, sourcing particles, deciding that particle streaming communication is completed, and spatial redecomposition.
These algorithms are some of the most important parallel algorithms required for domain decomposed Monte Carlo particle transport. We demonstrate that our previous algorithms were not scalable, prove that our new algorithms are scalable, and run some of the algorithms up to 2 million MPI processes on the Sequoia supercomputer.
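The flavor of a global particle find can be conveyed with a toy decomposition: each rank owns a slab of the domain, and a misplaced particle's coordinate determines its owner. A uniform 1D decomposition is assumed purely for illustration; the real algorithm maps coordinates through the decomposed constructive solid geometry.

    def owning_rank(x, x_min, x_max, n_ranks):
        """Map a particle coordinate to the MPI rank owning that slab of a
        uniform 1D domain decomposition."""
        frac = (x - x_min) / (x_max - x_min)
        return min(int(frac * n_ranks), n_ranks - 1)

    def global_particle_find(particles, my_rank, x_min=0.0, x_max=100.0, n_ranks=1024):
        """Sort local particles into those to keep and those to send, keyed
        by destination rank; the send lists would feed MPI point-to-point
        or all-to-all communication."""
        keep, send = [], {}
        for p in particles:               # p = (x, y, z, energy, ...)
            dest = owning_rank(p[0], x_min, x_max, n_ranks)
            if dest == my_rank:
                keep.append(p)
            else:
                send.setdefault(dest, []).append(p)
        return keep, send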
NASA Astrophysics Data System (ADS)
Mehdinejadiani, Behrouz
2017-08-01
This study represents the first attempt to estimate the solute transport parameters of the spatial fractional advection-dispersion equation (sFADE) using the Bees Algorithm. Numerical studies as well as experimental studies were performed to certify the integrity of the Bees Algorithm. The experimental ones were conducted in a sandbox for homogeneous and heterogeneous soils. A detailed comparative study was carried out between the results obtained from the Bees Algorithm and those from the Genetic Algorithm and LSQNONLIN routines in the FracFit toolbox. The results indicated that, in general, the Bees Algorithm appraised the sFADE parameters much more accurately than the Genetic Algorithm and LSQNONLIN, especially in the heterogeneous soil and for α values near 1 in the numerical study. Also, the results obtained from the Bees Algorithm were more reliable than those from the Genetic Algorithm. The Bees Algorithm showed relatively similar performance for all cases, while the Genetic Algorithm and LSQNONLIN yielded different performances for the various cases. The performance of LSQNONLIN strongly depends on the initial guess values, so that, compared to the Genetic Algorithm, it can estimate the sFADE parameters more accurately when suitable initial guess values are chosen. To sum up, the Bees Algorithm was found to be a very simple, robust, and accurate approach for estimating the transport parameters of the spatial fractional advection-dispersion equation.
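In outline, the Bees Algorithm concentrates recruited bees in neighborhoods of the best sites while keeping random scouts elsewhere; a compact sketch for fitting generic model parameters follows, with a placeholder objective and bounds rather than the sFADE setup.

    import random

    def bees_algorithm(objective, bounds, n_scouts=30, n_best=5, n_recruits=10,
                       patch=0.1, iterations=100):
        """Minimize `objective` over box `bounds` = [(lo, hi), ...] using a
        basic Bees Algorithm: neighborhood search around the best sites plus
        random scouting for the rest of the population."""
        def rand_point():
            return [random.uniform(lo, hi) for lo, hi in bounds]
        def neighbor(x):
            return [min(max(xi + patch * (hi - lo) * random.uniform(-1, 1), lo), hi)
                    for xi, (lo, hi) in zip(x, bounds)]
        sites = sorted((rand_point() for _ in range(n_scouts)), key=objective)
        for _ in range(iterations):
            new_sites = []
            for site in sites[:n_best]:                   # elite sites
                recruits = [neighbor(site) for _ in range(n_recruits)]
                new_sites.append(min(recruits + [site], key=objective))
            new_sites += [rand_point() for _ in range(n_scouts - n_best)]
            sites = sorted(new_sites, key=objective)
        return sites[0]

    # Toy usage: recover two "transport parameters" from a quadratic misfit.
    best = bees_algorithm(lambda p: (p[0] - 1.5) ** 2 + (p[1] - 0.8) ** 2,
                          bounds=[(0, 3), (0, 2)])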
Observed and Simulated Radiative and Microphysical Properties of Tropical Convective Storms
NASA Technical Reports Server (NTRS)
DelGenio, Anthony D.; Hansen, James E. (Technical Monitor)
2001-01-01
Increases in the ice content, albedo and cloud cover of tropical convective storms in a warmer climate produce a large negative contribution to cloud feedback in the GISS GCM. Unfortunately, the physics of convective upward water transport, detrainment, and ice sedimentation, and the relationship of microphysical to radiative properties, are all quite uncertain. We apply a clustering algorithm to TRMM satellite microwave rainfall retrievals to identify contiguous deep precipitating storms throughout the tropics. Each storm is characterized according to its size, albedo, OLR, rain rate, microphysical structure, and presence/absence of lightning. A similar analysis is applied to ISCCP data during the TOGA/COARE experiment to identify optically thick deep cloud systems and relate them to large-scale environmental conditions just before storm onset. We examine the statistics of these storms to understand the relative climatic roles of small and large storms and the factors that regulate convective storm size and albedo. The results are compared to GISS GCM simulated statistics of tropical convective storms to identify areas of agreement and disagreement.
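A common way to implement this kind of storm identification is connected-component labeling of a thresholded rain-rate field; the sketch below uses illustrative data and thresholds, not the study's TRMM settings.

    import numpy as np
    from scipy import ndimage

    rain = np.random.gamma(0.5, 2.0, size=(180, 360))  # stand-in rain-rate map (mm/h)
    precipitating = rain > 5.0                         # threshold for "deep" rain

    labels, n_storms = ndimage.label(precipitating)    # contiguous storms
    idx = range(1, n_storms + 1)
    sizes = ndimage.sum(precipitating, labels, index=idx)   # pixels per storm
    mean_rate = ndimage.mean(rain, labels, index=idx)       # per-storm rain rate
    # `sizes` and `mean_rate` feed per-storm statistics such as size
    # distributions and rain-rate composites.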
Solomon, Justin; Mileto, Achille; Nelson, Rendon C; Roy Choudhury, Kingshuk; Samei, Ehsan
2016-04-01
To determine if radiation dose and reconstruction algorithm affect the computer-based extraction and analysis of quantitative imaging features in lung nodules, liver lesions, and renal stones at multi-detector row computed tomography (CT). Retrospective analysis of data from a prospective, multicenter, HIPAA-compliant, institutional review board-approved clinical trial was performed by extracting 23 quantitative imaging features (size, shape, attenuation, edge sharpness, pixel value distribution, and texture) of lesions on multi-detector row CT images of 20 adult patients (14 men, six women; mean age, 63 years; range, 38-72 years) referred for known or suspected focal liver lesions, lung nodules, or kidney stones. Data were acquired between September 2011 and April 2012. All multi-detector row CT scans were performed at two different radiation dose levels; images were reconstructed with filtered back projection, adaptive statistical iterative reconstruction, and model-based iterative reconstruction (MBIR) algorithms. A linear mixed-effects model was used to assess the effect of radiation dose and reconstruction algorithm on extracted features. Among the 23 imaging features assessed, radiation dose had a significant effect on five, three, and four of the features for liver lesions, lung nodules, and renal stones, respectively (P < .002 for all comparisons). Adaptive statistical iterative reconstruction had a significant effect on three, one, and one of the features for liver lesions, lung nodules, and renal stones, respectively (P < .002 for all comparisons). MBIR reconstruction had a significant effect on nine, 11, and 15 of the features for liver lesions, lung nodules, and renal stones, respectively (P < .002 for all comparisons). Of note, the measured sizes of lung nodules and renal stones with MBIR were significantly different from those for the other two algorithms (P < .002 for all comparisons). Although lesion texture was significantly affected by the reconstruction algorithm used (average of 3.33 features affected by MBIR across lesion types; P < .002 for all comparisons), no significant effect of the radiation dose setting was observed for all but one of the texture features (P = .002-.998). Radiation dose settings and reconstruction algorithms affect the extraction and analysis of quantitative imaging features in lesions at multi-detector row CT.
NASA Astrophysics Data System (ADS)
Iswari, T.; Asih, A. M. S.
2018-04-01
In a logistics system, transportation plays an important role in connecting every element in the supply chain, but it can produce the greatest cost. Therefore, it is important to make the transportation cost as low as possible. Reducing the transportation cost can be done in several ways; one of them is optimizing the routing of the vehicles, which refers to the Vehicle Routing Problem (VRP). The most common type of VRP is the Capacitated Vehicle Routing Problem (CVRP). In CVRP, the vehicles have their own capacity, and the total demand of the customers served by a vehicle should not exceed its capacity. CVRP belongs to the class of NP-hard problems, which makes it complex to solve: exact algorithms become highly time-consuming as problem sizes increase. Thus, for large-scale problem instances, as typically found in industrial applications, finding an optimal solution is not practicable. Therefore, this paper uses two metaheuristic approaches to solve CVRP: Genetic Algorithm and Particle Swarm Optimization. This paper compares the results of both algorithms and examines the performance of each. The results show that both algorithms perform well in solving CVRP but still need improvement. From algorithm testing and a numerical example, the Genetic Algorithm yields a better solution than Particle Swarm Optimization in total distance travelled.
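Whichever metaheuristic generates the candidate solutions, both need a common evaluation step; a minimal CVRP decoder that splits a customer permutation into capacity-feasible routes and totals the distance (coordinates and demands below are illustrative) could be:

    import math

    def route_cost(depot, stops, coords):
        """Distance of depot -> stops... -> depot."""
        path = [depot] + stops + [depot]
        return sum(math.dist(coords[a], coords[b]) for a, b in zip(path, path[1:]))

    def evaluate(perm, demands, coords, capacity, depot=0):
        """Greedy split of a customer permutation into routes that respect
        vehicle capacity (assumes each single demand fits one vehicle);
        returns total travelled distance, i.e. the fitness to minimize."""
        routes, current, load = [], [], 0
        for c in perm:
            if load + demands[c] > capacity:   # start a new vehicle
                routes.append(current)
                current, load = [], 0
            current.append(c)
            load += demands[c]
        routes.append(current)
        return sum(route_cost(depot, r, coords) for r in routes)

    coords = {0: (0, 0), 1: (2, 1), 2: (5, 4), 3: (1, 6)}   # depot = 0
    demands = {1: 4, 2: 3, 3: 5}
    print(evaluate([1, 2, 3], demands, coords, capacity=7))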
Study of high speed complex number algorithms. [for determining antenna for field radiation patterns
NASA Technical Reports Server (NTRS)
Heisler, R.
1981-01-01
A method of evaluating the radiation integral on the curved surface of a reflecting antenna is presented. A three-dimensional Fourier transform approach is used to generate a two-dimensional radiation cross-section along a planar cut at any angle phi through the far-field pattern. Salient to the method is an algorithm for evaluating a subset of the total three-dimensional discrete Fourier transform results. The subset elements are selectively evaluated to yield data along a geometric plane of constant phi. The algorithm is extremely efficient, so that computation of the induced surface currents via the physical optics approximation dominates the computer time required to compute a radiation pattern. Application to paraboloid reflectors with off-focus feeds is presented, but the method is easily extended to offset antenna systems and reflectors of arbitrary shape. Numerical results were computed for both gain and phase and are compared with other published work.
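The underlying idea, evaluating the Fourier kernel only at the spatial frequencies lying on one phi-cut instead of computing the full transform, can be illustrated for a planar aperture as follows; this is a schematic of selective DFT evaluation, not the A-STAR algorithm itself.

    import numpy as np

    def phi_cut(aperture, dx, phi, thetas, wavelength=1.0):
        """Evaluate the far-field pattern of a planar aperture along one
        phi-cut by summing the Fourier kernel only at the required spatial
        frequencies, instead of computing the full 2D DFT."""
        ny, nx = aperture.shape
        x = (np.arange(nx) - nx / 2) * dx
        y = (np.arange(ny) - ny / 2) * dx
        X, Y = np.meshgrid(x, y)
        k = 2 * np.pi / wavelength
        pattern = []
        for theta in thetas:
            kx = k * np.sin(theta) * np.cos(phi)
            ky = k * np.sin(theta) * np.sin(phi)
            pattern.append(np.sum(aperture * np.exp(-1j * (kx * X + ky * Y))))
        return np.array(pattern)

    # Uniformly illuminated square aperture; cut at phi = 0
    ap = np.ones((64, 64))
    cut = phi_cut(ap, dx=0.5, phi=0.0, thetas=np.linspace(-0.3, 0.3, 121))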
Diffusive, supersonic x-ray transport in radiatively heated foam cylinders
NASA Astrophysics Data System (ADS)
Back, C. A.; Bauer, J. D.; Hammer, J. H.; Lasinski, B. F.; Turner, R. E.; Rambo, P. W.; Landen, O. L.; Suter, L. J.; Rosen, M. D.; Hsing, W. W.
2000-05-01
Diffusive supersonic radiation transport, where the ratio of the diffusive radiation front velocity to the material sound speed exceeds 2, has been studied in experiments on low density (40 mg/cc to 50 mg/cc) foams. Laser-heated Au hohlraums provided a radiation drive that heated SiO2 and Ta2O5 aerogel foams of varying lengths. Face-on emission measurements at 550 eV provided clean signatures of the radiation breakout. The high quality data provide new detailed information on the importance of both the fill and wall material opacities and heat capacities in determining the radiation front speed and curvature. The Marshak radiation wave transport is studied in a geometry that allows direct comparisons with analytic models and two-dimensional code simulations. The experiments show important effects that will affect even nondiffusive and transonic radiation transport experiments studied by others in the field. This work is of basic science interest, with applications to inertial confinement fusion and astrophysics.
Path Toward a Unified Geometry for Radiation Transport
NASA Technical Reports Server (NTRS)
Lee, Kerry; Barzilla, Janet; Davis, Andrew; Zachmann
2014-01-01
The Direct Accelerated Geometry for Radiation Analysis and Design (DAGRAD) element of the RadWorks Project under Advanced Exploration Systems (AES) within the Space Technology Mission Directorate (STMD) of NASA will enable new designs and concepts of operation for radiation risk assessment, mitigation and protection. This element is designed to produce a solution that will allow NASA to calculate the transport of space radiation through complex computer-aided design (CAD) models using state-of-the-art analytic and Monte Carlo radiation transport codes. Due to the inherent hazard of astronaut and spacecraft exposure to ionizing radiation in low-Earth orbit (LEO) or in deep space, risk analyses must be performed for all crew vehicles and habitats. Incorporating these analyses into the design process can minimize the mass needed solely for radiation protection. Transport of the radiation fields as they pass through shielding and body materials can be simulated using Monte Carlo techniques or described by the Boltzmann equation, which is obtained by balancing changes in particle fluxes as they traverse a small volume of material with the gains and losses caused by atomic and nuclear collisions. Deterministic codes that solve the Boltzmann transport equation, such as HZETRN [the high charge and energy transport code developed by NASA Langley Research Center (LaRC)], are generally computationally faster than Monte Carlo codes such as FLUKA, GEANT4, MCNP(X) or PHITS; however, they are currently limited to transport in one dimension, which poorly represents the secondary light ion and neutron radiation fields. NASA currently uses the HZETRN space radiation transport software, both because it is computationally efficient and because proven methods have been developed for using this software to analyze complex geometries. Although Monte Carlo codes describe the relevant physics in a fully three-dimensional manner, their computational costs have thus far prevented their widespread use for analysis of complex CAD models, leading to the creation and maintenance of toolkit-specific simplistic geometry models. The work presented here builds on the Direct Accelerated Geometry Monte Carlo (DAGMC) toolkit developed for use with the Monte Carlo N-Particle (MCNP) transport code. The workflow for achieving radiation transport on CAD models using MCNP and FLUKA has been demonstrated, and the results of analyses on realistic spacecraft/habitats will be presented. Future work is planned that will further automate this process and enable the use of multiple radiation transport codes on identical geometry models imported from CAD. This effort will enhance the modeling tools used by NASA to accurately evaluate the astronaut space radiation risk and accurately determine the protection provided by as-designed exploration mission vehicles and habitats.
A New Fast Algorithm to Completely Account for Non-Lambertian Surface Reflection of the Earth
NASA Technical Reports Server (NTRS)
Qin, Wen-Han; Herman, Jay R.; Ahmad, Ziauddin; Einaudi, Franco (Technical Monitor)
2000-01-01
The surface bidirectional reflectance distribution function (BRDF) influences not only the radiance just above the surface, but also that emerging from the top of the atmosphere (TOA). In this study we propose a new, fast, and accurate algorithm, CASBIR (correction for anisotropic surface bidirectional reflection), to account for such influences on radiance measured at the TOA. The new algorithm is based on a 4-stream theory that separates the radiation field into direct and diffuse components in both the upwelling and downwelling directions. This is important because the direct component accounts for a substantial portion of incident radiation under a clear sky, and the BRDF effect is strongest in the reflection of the direct radiation reaching the surface. The model is validated by comparison with a full-scale vector radiative transfer model for the atmosphere-surface system. The results demonstrate that CASBIR performs very well (with an overall relative difference of less than one percent) for all solar and viewing zenith and azimuth angles considered, at wavelengths from the ultraviolet to the near-infrared, over three typical but very different surface types. Applications of this algorithm include both accounting for non-Lambertian surface scattering in the emergent radiation at the TOA and a potential approach for surface BRDF retrieval from satellite-measured radiance.
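To illustrate the direct/diffuse split that motivates a 4-stream treatment, the toy calculation below reflects an assumed direct irradiance through an invented non-Lambertian BRDF and an assumed diffuse irradiance through a hemispheric albedo; the BRDF shape, irradiance split, and albedo are all assumptions, not the CASBIR formulation.

```python
import numpy as np

# Illustration only: the surface-reflected signal combines a direct-beam
# part, weighted by the BRDF at the exact sun/view geometry, and a diffuse
# part, weighted by a hemispheric albedo. All values here are invented.

def toy_brdf(mu_sun, mu_view):
    # simple non-Lambertian shape: brightens toward grazing sun/view angles
    return 0.1 * (1.0 + 0.5 * (1.0 - mu_view) * (1.0 - mu_sun))

mu_sun = np.cos(np.radians(30.0))
mu_view = np.cos(np.radians(20.0))
E_direct, E_diffuse = 800.0, 200.0   # W/m^2 at the surface (assumed split)
albedo = 0.12                        # assumed hemispheric average reflectance

L_direct = E_direct * toy_brdf(mu_sun, mu_view) / np.pi   # reflected direct beam
L_diffuse = E_diffuse * albedo / np.pi                    # reflected skylight
print(f"surface-leaving radiance: {L_direct:.1f} (direct) + {L_diffuse:.1f} (diffuse) W/m^2/sr")
```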
NASA Strategy to Safely Live and Work in the Space Radiation Environment
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.; Wu, Honglu; Corbin, Barbara J.; Sulzman, Frank M.; Krenek, Sam
2007-01-01
In space, astronauts are constantly bombarded with energetic particles. The goal of the National Aeronautics and Space Administration and the NASA Space Radiation Project is to ensure that astronauts can safely live and work in the space radiation environment. The space radiation environment poses both acute and chronic risks to crew health and safety, but unlike some other aspects of space travel, space radiation exposure has clinically relevant implications for the lifetime of the crew. Among the identified radiation risks are cancer, acute and late CNS damage, chronic and degenerative tissue disease, and acute radiation syndrome. The term "safely" means that risks are sufficiently understood such that acceptable limits on mission, post-mission, and multi-mission consequences can be defined. The NASA Space Radiation Project strategy has several elements. The first element is to use a peer-reviewed research program to increase our mechanistic knowledge and genetic capabilities to develop tools for individual risk projection, thereby reducing our dependency on epidemiological data and population-based risk assessment. The second element is to use the NASA Space Radiation Laboratory as a ground-based facility to study the health effects and mechanisms of damage from space radiation exposure and to develop and validate biological models of risk, as well as methods for extrapolation to human risk. The third element is a risk modeling effort that integrates the results from research efforts into models of human risk to reduce uncertainties in predicting the identified radiation risks. To understand the biological basis for risk, we must also understand the physical aspects of the crew environment. Thus, the fourth element develops computer algorithms to predict radiation transport properties, evaluate integrated shielding technologies, and provide design optimization recommendations for the design of human space systems. Understanding the risks and determining methods to mitigate them are keys to a successful radiation protection strategy.
Radiation detector device for rejecting and excluding incomplete charge collection events
Bolotnikov, Aleksey E.; De Geronimo, Gianluigi; Vernon, Emerson; Yang, Ge; Camarda, Giuseppe; Cui, Yonggang; Hossain, Anwar; Kim, Ki Hyun; James, Ralph B.
2016-05-10
A radiation detector device is provided that is capable of distinguishing between full charge collection (FCC) events and incomplete charge collection (ICC) events based upon a correlation value comparison algorithm that compares correlation values calculated for individually sensed radiation detection events with a calibrated FCC event correlation function. The calibrated FCC event correlation function serves as a reference curve used by the correlation value comparison algorithm to determine whether a sensed radiation detection event fits the profile of the FCC event correlation function within the noise tolerances of the radiation detector device. If the radiation detection event is determined to be an ICC event, its spectrum is rejected and excluded from the radiation detector device's spectral analyses. The radiation detector device can also calculate a performance factor to determine the efficacy of distinguishing between FCC and ICC events.
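A schematic of the accept/reject logic described above might look like the following sketch, in which an event is kept only if its correlation value falls within an assumed noise band around the calibrated FCC reference curve; the function name, calibration table, and tolerance model are hypothetical, not taken from the patent.

```python
import numpy as np

# Sketch: keep an event only if its correlation value lies within k sigma
# of the calibrated FCC correlation function at the event energy.
# The calibration table and noise model are invented for illustration.

def is_full_charge_collection(energy, corr_value, fcc_curve, noise_sigma, k=3.0):
    """Compare an event's correlation value against the calibrated
    FCC reference curve at the event energy; reject ICC outliers."""
    expected = np.interp(energy, fcc_curve["energy"], fcc_curve["corr"])
    return abs(corr_value - expected) <= k * noise_sigma

fcc_curve = {"energy": np.array([100.0, 300.0, 662.0, 1332.0]),   # keV
             "corr":   np.array([0.95, 0.93, 0.90, 0.88])}        # assumed calibration

print(is_full_charge_collection(662.0, 0.905, fcc_curve, noise_sigma=0.01))  # True: FCC
print(is_full_charge_collection(662.0, 0.80,  fcc_curve, noise_sigma=0.01))  # False: ICC
```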
Review of TRMM/GPM Rainfall Algorithm Validation
NASA Technical Reports Server (NTRS)
Smith, Eric A.
2004-01-01
A review is presented concerning current progress on the evaluation and validation of the standard Tropical Rainfall Measuring Mission (TRMM) precipitation retrieval algorithms and the prospects for implementing an improved validation research program for the next-generation Global Precipitation Measurement (GPM) mission. All standard TRMM algorithms are physical in design, and are thus based on fundamental principles of microwave radiative transfer and its interaction with semi-detailed cloud microphysical constituents. They are evaluated for consistency and degree of equivalence with one another, as well as intercompared with radar-retrieved rainfall at TRMM's four main ground validation sites. Similarities and differences are interpreted in the context of the radiative and microphysical assumptions underpinning the algorithms. Results indicate that the current accuracies of the TRMM Version 6 algorithms are approximately 15% at zonally averaged / monthly scales, with precisions of approximately 25% for full-resolution / instantaneous rain-rate estimates (i.e., level 2 retrievals). Strengths and weaknesses of the TRMM validation approach are summarized. Because the degree of convergence of level 2 TRMM algorithms is being used as a guide for setting validation requirements for the GPM mission, it is important that the GPM algorithm validation program be improved to ensure concomitant improvement in the standard GPM retrieval algorithms. An overview of the GPM mission's validation plan is provided, including a description of a new type of physical validation model using an analytic 3-dimensional radiative transfer model.
UV Reconstruction Algorithm And Diurnal Cycle Variability
NASA Astrophysics Data System (ADS)
Curylo, Aleksander; Litynska, Zenobia; Krzyscin, Janusz; Bogdanska, Barbara
2009-03-01
UV reconstruction is a method of estimating surface UV using available actinometrical and aerological measurements. UV reconstruction is necessary for the study of long-term UV change: a typical series of UV measurements is not longer than 15 years, which is too short for trend estimation. The essential problem in the reconstruction algorithm is a good parameterization of clouds. In our previous algorithm we used an empirical relation between the Cloud Modification Factor (CMF) for global radiation and the CMF for UV. The CMF is defined as the ratio between measured and modelled irradiances; the clear-sky irradiance was calculated with a solar radiative transfer model. In the proposed algorithm, the time variability of global radiation over the diurnal cycle is used as an additional source of information. To develop the improved reconstruction algorithm, relevant data from Legionowo [52.4 N, 21.0 E, 96 m a.s.l.], Poland were collected with the following instruments: a NILU-UV multichannel radiometer, a Kipp&Zonen pyranometer, and radiosonde profiles of ozone, humidity and temperature. The proposed algorithm has been used to reconstruct UV at four Polish sites (Mikolajki, Kolobrzeg, Warszawa-Bielany and Zakopane) since the early 1960s. Krzyscin's reconstruction of total ozone has been used in the calculations.
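The CMF bookkeeping can be sketched in a few lines; the power-law mapping from the global-radiation CMF to the UV CMF below is a placeholder assumption standing in for the authors' empirical relation, and all numbers are invented.

```python
# Minimal sketch of CMF-based UV reconstruction. The exponent linking the
# global-radiation CMF to the UV CMF is an assumed placeholder, not the
# fitted relation used by the authors.

def cmf(measured, modeled_clear_sky):
    """Cloud Modification Factor: measured over clear-sky modeled irradiance."""
    return measured / modeled_clear_sky

def reconstruct_uv(global_measured, global_clear, uv_clear, exponent=1.2):
    cmf_global = cmf(global_measured, global_clear)
    cmf_uv = cmf_global ** exponent        # assumed empirical relation
    return cmf_uv * uv_clear

# Example: an overcast hour with 60% of clear-sky global radiation,
# clear-sky UV irradiance of 0.18 (assumed units)
print(reconstruct_uv(global_measured=300.0, global_clear=500.0, uv_clear=0.18))
```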
Comparison of optimization algorithms in intensity-modulated radiation therapy planning
NASA Astrophysics Data System (ADS)
Kendrick, Rachel
Intensity-modulated radiation therapy is used to better conform the radiation dose to the target while avoiding healthy tissue. Planning programs employ optimization methods to search for the best fluence of each photon beam, and therefore to create the best treatment plan. The Computational Environment for Radiotherapy Research (CERR), a program written in MATLAB, was used to examine some commonly used algorithms for one 5-beam plan. The algorithms include the genetic algorithm, quadratic programming, pattern search, constrained nonlinear optimization, simulated annealing, the optimization method used in Varian Eclipse™, and some hybrids of these. Quadratic programming, simulated annealing, and a quadratic/simulated-annealing hybrid were also separately compared using different prescription doses. The results of each dose-volume histogram, as well as the visual dose color wash, were used to compare the plans. CERR's built-in quadratic programming provided the best overall plan, but its avoidance of the organ-at-risk was rivaled by other programs. Hybrids of quadratic programming with some of these algorithms suggest the possibility of better planning programs, as shown by the improved quadratic/simulated-annealing plan when compared to the simulated annealing algorithm alone. Further experimentation will be done to improve cost functions and computational time.
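As a toy version of the quadratic-programming step, the sketch below finds nonnegative beamlet weights minimizing the squared deviation from prescribed voxel doses; the 3-voxel, 2-beamlet dose matrix and prescriptions are invented, and nonnegative least squares stands in for a full constrained QP solver.

```python
import numpy as np
from scipy.optimize import nnls

# Toy fluence optimization: find nonnegative beamlet weights w minimizing
# ||D w - d||^2, where D maps beamlet fluence to voxel dose and d holds
# the prescribed (target) and limiting (organ-at-risk) doses. All numbers
# are invented for illustration.

D = np.array([[1.0, 0.2],    # target voxel
              [0.8, 0.3],    # target voxel
              [0.1, 0.9]])   # organ-at-risk voxel
d = np.array([60.0, 60.0, 10.0])   # prescribed / limit doses (Gy), assumed

w, residual = nnls(D, d)           # nonnegativity enforces physical fluence
print("beamlet weights:", w, " residual:", residual)
print("delivered dose:", D @ w)
```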
Capabilities of VOS-based fluxes for estimating ocean heat budget and its variability
NASA Astrophysics Data System (ADS)
Gulev, S.; Belyaev, K.
2016-12-01
We consider here the prospects of using VOS observations from merchant ships, available in the ICOADS data, for estimating the ocean surface heat budget at different time scales. For this purpose we compute surface turbulent heat fluxes, as well as short- and long-wave radiative fluxes, from the ICOADS reports for the last several decades in the North Atlantic mid latitudes. Turbulent fluxes were derived using the COARE-3 algorithm, and for the computation of radiative fluxes new algorithms accounting for cloud types were used. Sampling uncertainties in the VOS-based fluxes were estimated by sub-sampling the recomputed reanalysis (ERA-Interim) fluxes according to the VOS sampling scheme. For the turbulent heat fluxes we suggest an approach to minimize sampling uncertainties. The approach is based on the integration of the turbulent heat fluxes in the coordinates of steering parameters (vertical surface temperature and humidity gradients on one hand and wind speed on the other) for which theoretical probability distributions are known. For short-wave radiative fluxes, sampling uncertainties were minimized by "rotating local observation time around the clock" and using probability density functions for the cloud cover occurrence distributions. The analysis was performed for the North Atlantic latitudinal band from 25 N to 60 N, for which estimates of the meridional heat transport are also available from ocean cross-sections. Over the last 35 years, turbulent fluxes within the region analysed increased by about 6 W/m2, with the major growth during the 1990s and early 2000s. Decreasing incoming short-wave radiation over the same period (about 1 W/m2) implies an upward change of the ocean surface heat loss by about 7-8 W/m2. We discuss different sources of uncertainty in the computations, as well as the potential of applying the analysis concept to longer time series going back to the 1920s.
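For readers unfamiliar with bulk flux algorithms of the COARE type, the sketch below evaluates a standard bulk formula for the sensible heat flux from a single ship report. The fixed transfer coefficient is an assumption; COARE-3 computes it iteratively from stability and sea-state information.

```python
# Bulk sensible-heat-flux sketch: Q_H = rho * c_p * C_H * U * (T_sea - T_air).
# The constant transfer coefficient C_H is an assumed stand-in for the
# stability-dependent value a COARE-type algorithm would compute.

rho, cp = 1.22, 1004.0        # air density (kg/m^3), specific heat (J/kg/K)
C_H = 1.2e-3                  # assumed bulk transfer coefficient
U, T_sea, T_air = 8.0, 292.0, 290.5   # wind (m/s) and temperatures (K), invented report

Q_H = rho * cp * C_H * U * (T_sea - T_air)
print(f"sensible heat flux: {Q_H:.1f} W/m^2")
```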
Patient‐specific CT dosimetry calculation: a feasibility study
Xie, Huchen; Cheng, Jason Y.; Ning, Holly; Zhuge, Ying; Miller, Robert W.
2011-01-01
Current estimation of radiation dose from computed tomography (CT) scans on patients has relied on the measurement of the Computed Tomography Dose Index (CTDI) in standard cylindrical phantoms and on calculations based on mathematical representations of a “standard man”. Radiation dose to both adult and pediatric patients from a CT scan has been a concern, as noted in recent reports. The purpose of this study was to investigate the feasibility of adapting a radiation treatment planning system (RTPS) to provide patient-specific CT dosimetry. A radiation treatment planning system was modified to calculate patient-specific CT dose distributions, which can be represented by dose at specific points within an organ of interest, as well as organ dose-volumes (after image segmentation), for a GE Light Speed Ultra Plus CT scanner. The RTPS calculation algorithm is a semi-empirical, measured correction-based algorithm, which has been well established in the radiotherapy community. Digital representations of the physical phantoms (virtual phantoms) were acquired with the GE CT scanner in axial mode. Thermoluminescent dosimeter (TLD) measurements in pediatric anthropomorphic phantoms were utilized to validate the dose at specific points within organs of interest relative to RTPS calculations and Monte Carlo simulations of the same virtual phantoms (digital representations). Congruence of the calculated and measured point doses for the same physical anthropomorphic phantom geometry was used to verify the feasibility of the method. The RTPS algorithm can be extended to calculate the organ dose by calculating a dose distribution point-by-point for a designated volume. The Electron Gamma Shower (EGSnrc) codes for radiation transport calculations, developed by the National Research Council of Canada (NRCC), were utilized to perform the Monte Carlo (MC) simulations. In general, the RTPS and MC dose calculations are within 10% of the TLD measurements for the infant and child chest scans. With respect to the dose comparisons for the head, the RTPS dose calculations are slightly higher (10%-20%) than the TLD measurements, while the MC results were within 10% of the TLD measurements. The advantage of the algebraic dose calculation engine of the RTPS is a substantially reduced computation time (minutes vs. days) relative to Monte Carlo calculations, as well as providing patient-specific dose estimation. It also provides the basis for more elaborate reporting of dosimetric results, such as patient-specific organ dose-volumes after image segmentation. PACS numbers: 87.55.D-, 87.57.Q-, 87.53.Bn, 87.55.K- PMID: 22089016
Comparison of space radiation calculations for deterministic and Monte Carlo transport codes
NASA Astrophysics Data System (ADS)
Lin, Zi-Wei; Adams, James; Barghouty, Abdulnasser; Randeniya, Sharmalee; Tripathi, Ram; Watts, John; Yepes, Pablo
For space radiation protection of astronauts or electronic equipment, it is necessary to develop and use accurate radiation transport codes. Radiation transport codes include deterministic codes, such as HZETRN from NASA and UPROP from the Naval Research Laboratory, and Monte Carlo codes such as FLUKA, the Geant4 toolkit, and HETC-HEDS. The deterministic and Monte Carlo codes complement each other in that deterministic codes are very fast while Monte Carlo codes are more elaborate. Therefore it is important to investigate how well the results of deterministic codes compare with those of Monte Carlo transport codes and where they differ. In this study we evaluate these different codes in their space radiation applications by comparing their output results in the same given space radiation environments, shielding geometry, and material. Typical space radiation environments, such as the 1977 solar minimum galactic cosmic ray environment, are used as the well-defined input, and simple geometries made of aluminum, water and/or polyethylene are used to represent the shielding material. We then compare various outputs of these codes, such as the dose-depth curves and the flux spectra of different fragments and other secondary particles. These comparisons enable us to learn more about the main differences between these space radiation transport codes. At the same time, they help us to learn the qualitative and quantitative features that these transport codes have in common.
On the Development of a Deterministic Three-Dimensional Radiation Transport Code
NASA Technical Reports Server (NTRS)
Rockell, Candice; Tweed, John
2011-01-01
Since astronauts on future deep space missions will be exposed to dangerous radiation, there is a need to accurately model the transport of radiation through shielding materials and to estimate the received radiation dose. In response to this need, a three-dimensional deterministic code for space radiation transport is now under development. The new code, GRNTRN, is based on a Green's function solution of the Boltzmann transport equation that is constructed in the form of a Neumann series. Analytical approximations will be obtained for the first three terms of the Neumann series, and the remainder will be estimated by a non-perturbative technique. This work discusses progress made to date and exhibits some computations based on the first two Neumann series terms.
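Schematically, the Neumann-series construction referred to above can be written as follows; the operator notation is assumed for illustration and is not taken from the GRNTRN papers.

```latex
% Operator notation assumed for illustration. Writing the transport
% problem as \phi = \phi_0 + K\phi, with \phi_0 the uncollided flux and
% K the integral scattering operator, repeated substitution yields
\[
  \phi \;=\; \sum_{n=0}^{\infty} K^{n}\phi_0
       \;=\; \phi_0 \;+\; K\phi_0 \;+\; K^{2}\phi_0 \;+\; \cdots ,
\]
% where the n-th term collects particles that have scattered exactly
% n times; the "first three terms" above are the n = 0, 1, 2 pieces.
```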
NASA Technical Reports Server (NTRS)
Liu, Hongyu; Crawford, James H.; Pierce, Robert B.; Norris, Peter; Platnick, Steven E.; Chen, Gao; Logan, Jennifer A.; Yantosca, Robert M.; Evans, Mat J.; Kittaka, Chieko;
2006-01-01
Clouds exert an important influence on tropospheric photochemistry through modification of the solar radiation that determines photolysis frequencies (J-values). We assess the radiative effect of clouds on photolysis frequencies and key oxidants in the troposphere with a global three-dimensional (3-D) chemical transport model (GEOS-CHEM) driven by assimilated meteorological observations from the Goddard Earth Observing System data assimilation system (GEOS DAS) at the NASA Global Modeling and Assimilation Office (GMAO). We focus on the year 2001, using the GEOS-3 meteorological observations. Photolysis frequencies are calculated using the Fast-J radiative transfer algorithm. The GEOS-3 global cloud optical depth and cloud fraction are evaluated and found to be generally consistent with the satellite retrieval products from the Moderate Resolution Imaging Spectroradiometer (MODIS) and the International Satellite Cloud Climatology Project (ISCCP). Results using the linear assumption, which scales cloud optical depth linearly with cloud fraction in a grid box, show that global mean OH concentrations generally increase by less than 6% because of the radiative effect of clouds. The OH distribution shows much larger changes (with a maximum decrease of approx. 20% near the surface), reflecting the opposite effects of enhanced (weakened) photochemistry above (below) clouds. The global mean photolysis frequencies for J[O1D] and J[NO2] in the troposphere change by less than 5% because of clouds; global mean O3 concentrations in the troposphere increase by less than 5%. This study shows tropical upper-tropospheric O3 to be less sensitive to the radiative effect of clouds than previously reported (approx. 5% versus approx. 20-30%). These results emphasize that the dominant effect of clouds is to influence the vertical redistribution of the intensity of photochemical activity, while global average effects remain modest, again contrasting with previous studies. Differing vertical distributions of clouds may explain part, but not the majority, of these discrepancies between models. Using an approximate random overlap or a maximum-random overlap scheme to account for the effect of cloud overlap in the vertical reduces the impact of clouds on photochemistry but does not significantly change our results with respect to the modest global average effect.
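The two in-cloud scaling assumptions compared above can be summarized in a one-line sketch; the 3/2 exponent for the approximate random-overlap scaling is a commonly used Briegleb-type choice and is assumed here rather than taken from the paper.

```python
# Sketch of the two scaling assumptions: the linear assumption scales
# grid-box cloud optical depth by the cloud fraction f, while an
# approximate random-overlap scheme uses f**1.5 (assumed exponent).

def tau_effective(tau_cloud, f, scheme="linear"):
    return tau_cloud * (f if scheme == "linear" else f ** 1.5)

print(tau_effective(20.0, 0.4, "linear"))    # 8.0
print(tau_effective(20.0, 0.4, "overlap"))   # ~5.06
```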
Experimental validation of a coupled neutron-photon inverse radiation transport solver
NASA Astrophysics Data System (ADS)
Mattingly, John; Mitchell, Dean J.; Harding, Lee T.
2011-10-01
Sandia National Laboratories has developed an inverse radiation transport solver that applies nonlinear regression to coupled neutron-photon deterministic transport models. The inverse solver uses nonlinear regression to fit a radiation transport model to gamma spectrometry and neutron multiplicity counting measurements. The subject of this paper is the experimental validation of that solver. This paper describes a series of experiments conducted with a 4.5 kg sphere of α-phase, weapons-grade plutonium. The source was measured bare and reflected by high-density polyethylene (HDPE) spherical shells with total thicknesses between 1.27 and 15.24 cm. Neutron and photon emissions from the source were measured using three instruments: a gross neutron counter, a portable neutron multiplicity counter, and a high-resolution gamma spectrometer. These measurements were used as input to the inverse radiation transport solver to evaluate the solver's ability to correctly infer the configuration of the source from its measured radiation signatures.
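The regression step can be sketched generically: adjust model parameters until a forward transport model reproduces the measured signatures. The toy forward model, the reflector-thickness parameter, and the measurement values below are invented stand-ins for Sandia's coupled neutron-photon solver.

```python
import numpy as np
from scipy.optimize import least_squares

# Skeleton of the inverse-solver idea: fit a forward model to measured
# radiation signatures by nonlinear least squares. The exponential forward
# model and all numbers are invented for illustration.

measured = np.array([120.0, 45.0])        # e.g., neutron rate, photon line rate (assumed)

def forward_model(params):
    t = params[0]                          # hypothetical reflector thickness (cm)
    return np.array([150.0 * np.exp(-0.1 * t) + 20.0 * t,   # toy neutron response
                     80.0 * np.exp(-0.25 * t)])             # toy photon response

def residuals(params):
    return forward_model(params) - measured

fit = least_squares(residuals, x0=[5.0], bounds=(0.0, 20.0))
print("inferred thickness (cm):", fit.x[0])
```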
Fuzzy multi-objective chance-constrained programming model for hazardous materials transportation
NASA Astrophysics Data System (ADS)
Du, Jiaoman; Yu, Lean; Li, Xiang
2016-04-01
Hazardous materials transportation is an important public-safety issue. Based on the shortest path model, this paper presents a fuzzy multi-objective programming model that minimizes the transportation risk to life, travel time, and fuel consumption. First, we present the risk model, travel time model, and fuel consumption model. Furthermore, we formulate a chance-constrained programming model within the framework of credibility theory, in which the lengths of arcs in the transportation network are assumed to be fuzzy variables. A hybrid intelligent algorithm integrating fuzzy simulation and a genetic algorithm is designed for finding a satisfactory solution. Finally, some numerical examples are given to demonstrate the efficiency of the proposed model and algorithm.
NASA Astrophysics Data System (ADS)
Wibking, Benjamin D.; Thompson, Todd A.; Krumholz, Mark R.
2018-04-01
The radiation force on dust grains may be dynamically important in driving turbulence and outflows in rapidly star-forming galaxies. Recent studies focus on the highly optically-thick limit relevant to the densest ultra-luminous galaxies and super star clusters, where reprocessed infrared photons provide the dominant source of electromagnetic momentum. However, even among starburst galaxies, the great majority instead lie in the so-called "single-scattering" limit, where the system is optically-thick to the incident starlight, but optically-thin to the re-radiated infrared. In this paper we present a stability analysis and multidimensional radiation-hydrodynamic simulations exploring the stability and dynamics of isothermal dusty gas columns in this regime. We describe our algorithm for full angle-dependent radiation transport based on the discontinuous Galerkin finite element method. For a range of near-Eddington fluxes, we show that the medium is unstable, producing convective-like motions in a turbulent atmosphere with a scale height significantly inflated compared to the gas pressure scale height and mass-weighted turbulent energy densities of ˜0.01 - 0.1 of the midplane radiation energy density, corresponding to mass-weighted velocity dispersions of Mach number ˜0.5 - 2. Extrapolation of our results to optical depths of 10³ implies maximum turbulent Mach numbers of ˜20. Comparing our results to galaxy-averaged observations, and subject to the approximations of our calculations, we find that radiation pressure does not contribute significantly to the effective supersonic pressure support in star-forming disks, which in general are substantially sub-Eddington. We further examine the time-averaged vertical density profiles in dynamical equilibrium and comment on implications for radiation-pressure-driven galactic winds.
49 CFR 173.441 - Radiation level limitations and exclusive use provisions.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 49 Transportation 2 2014-10-01 2014-10-01 false Radiation level limitations and exclusive use... Radiation level limitations and exclusive use provisions. (a) Except as provided in paragraph (b) of this... prepared for shipment, so that under conditions normally incident to transportation, the radiation level...
49 CFR 173.441 - Radiation level limitations and exclusive use provisions.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 49 Transportation 2 2012-10-01 2012-10-01 false Radiation level limitations and exclusive use... Radiation level limitations and exclusive use provisions. (a) Except as provided in paragraph (b) of this... prepared for shipment, so that under conditions normally incident to transportation, the radiation level...
49 CFR 173.441 - Radiation level limitations and exclusive use provisions.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 49 Transportation 2 2013-10-01 2013-10-01 false Radiation level limitations and exclusive use... Radiation level limitations and exclusive use provisions. (a) Except as provided in paragraph (b) of this... prepared for shipment, so that under conditions normally incident to transportation, the radiation level...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khan, Rao; Zavan, Rodolfo; McGeachy, Philip
2016-08-15
Purpose: The transport-based dose calculation algorithm Acuros XB (AXB) has been shown to accurately account for heterogeneities, mostly through comparisons with Monte Carlo simulations. This study aims at providing additional experimental verification of AXB for flattened and unflattened clinical energies in low-density phantoms of the same material. Materials and Methods: Polystyrene slabs were created using a bench-top 3D printer. Six slabs were printed at varying densities from 0.23 g/cm³ to 0.68 g/cm³, corresponding to different-density humanoid tissues. The slabs were used to form different single- and multilayer geometries. Dose was calculated with AXB 11.0.31 for 6MV, 15MV flattened and 6FFF (flattening filter free) energies for field sizes of 2×2 cm² and 5×5 cm². The phantoms, containing radiochromic EBT3 films, were irradiated. Absolute dose profiles and 2D gamma analyses were performed for 96 dose planes. Results: For all single-slab and multislab configurations and energies, absolute dose differences between the AXB calculation and film measurements remained <3% for both fields, with slightly poorer agreement in the penumbra. The gamma index at 2%/2 mm averaged 98% over all combinations of fields, phantoms, and photon energies. Conclusions: The transport-based dose algorithm AXB is in good agreement with the experimental measurements for small field sizes using 6MV, 6FFF and 15MV beams adjacent to low-density heterogeneous media. This work provides sufficient experimental grounds to support the use of AXB for heterogeneous dose calculation purposes.
NASA Astrophysics Data System (ADS)
Rajon, D. A.; Shah, A. P.; Watchman, C. J.; Brindle, J. M.; Bolch, W. E.
2003-06-01
Recent advances in physical models of skeletal dosimetry utilize high-resolution NMR microscopy images of trabecular bone. These images are coupled to radiation transport codes to assess energy deposition within active bone marrow irradiated by bone- or marrow-incorporated radionuclides. Recent studies have demonstrated that the rectangular shape of image voxels is responsible for cross-region (bone-to-marrow) absorbed-fraction errors of up to 50% for very low-energy electrons (<50 keV). In this study, a new hyperboloid adaptation of the marching cubes (MC) image-visualization algorithm is implemented within 3D digital images of trabecular bone to better define the bone-marrow interface, and thus reduce voxel effects in the assessment of cross-region absorbed fractions. To test the method, a mathematical sample of trabecular bone was constructed, composed of a random distribution of spherical marrow cavities, and subsequently coupled to the EGSnrc radiation code to generate reference values for the energy deposition in marrow or bone. Next, digital images of the bone model were constructed over a range of simulated image resolutions and coupled to EGSnrc using the hyperboloid MC (HMC) algorithm. For the radionuclides 33P, 117mSn, 131I and 153Sm, values of S(marrow←bone) estimated using voxel models of trabecular bone were shown to have relative errors of 10%, 9%, <1% and <1% at a voxel size of 150 µm. At a voxel size of 60 µm, these errors were 6%, 5%, <1% and <1%, respectively. When the HMC model was applied during particle transport, the relative errors in S(marrow←bone) for these same radionuclides were reduced to 7%, 6%, <1% and <1% at a voxel size of 150 µm, and to 2%, 2%, <1% and <1% at a voxel size of 60 µm. The technique was also applied to a real NMR image of human trabecular bone, with a similar demonstration of reductions in dosimetry errors.
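The interface-extraction step can be illustrated with the standard marching cubes routine from scikit-image applied to a synthetic spherical marrow cavity. This uses the ordinary (non-hyperboloid) algorithm and an invented geometry, purely to show the voxel-to-surface conversion the paper refines.

```python
import numpy as np
from skimage import measure

# Sketch: marching cubes recovers a smooth bone-marrow boundary from a
# binary voxel image, replacing stair-stepped voxel faces. The spherical
# cavity phantom is invented; this is the standard algorithm, not the
# hyperboloid variant developed in the paper.

# Synthetic "marrow cavity": a sphere of marrow (0) inside bone (1)
z, y, x = np.mgrid[-20:21, -20:21, -20:21]
volume = (np.sqrt(x**2 + y**2 + z**2) > 12).astype(float)

verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
print(f"{len(verts)} interface vertices, {len(faces)} triangles")
```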
Engineering Near-Field Transport of Energy using Nanostructured Materials
2015-12-12
The transport of heat at the nanometer scale is becoming increasingly important for a wide range of nanotechnology applications. Recent computational studies on near-field radiative heat transfer (NFRHT) suggest that radiative energy transport between suitably chosen nanostructured materials...
Hybrid reduced order modeling for assembly calculations
Bang, Youngsuk; Abdel-Khalik, Hany S.; Jessee, Matthew A.; ...
2015-08-14
While the accuracy of assembly calculations has greatly improved due to the increase in computer power, enabling a more refined description of the phase space and the use of more sophisticated numerical algorithms, the computational cost continues to increase, which limits the full utilization of their effectiveness for routine engineering analysis. Reduced order modeling is a mathematical vehicle that scales down the dimensionality of large-scale numerical problems to enable their repeated execution on small computing environments, often available to end users. This is done by capturing the most dominant underlying relationships between the model's inputs and outputs. Previous works demonstrated the use of reduced order modeling for a single-physics code, such as a radiation transport calculation. This paper extends those works to coupled code systems as currently employed in assembly calculations. Finally, numerical tests are conducted using realistic SCALE assembly models with resonance self-shielding, neutron transport, and nuclide transmutation/depletion models representing the components of the coupled code system.
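The core dimensionality-reduction idea can be sketched with a proper-orthogonal-decomposition step: a truncated SVD of a snapshot matrix yields the reduced-order basis. The snapshot data below are random stand-ins, not SCALE outputs, and the rank threshold is an arbitrary choice.

```python
import numpy as np

# Minimal POD sketch: capture the dominant input-output directions of a
# model from snapshots via truncated SVD. Snapshots here are synthetic
# stand-ins for assembly-calculation responses.

rng = np.random.default_rng(0)
# 200 outputs, 30 snapshots that secretly live in a 5-dimensional subspace
basis_true = rng.normal(size=(200, 5))
snapshots = basis_true @ rng.normal(size=(5, 30))

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = int(np.sum(s > 1e-8 * s[0]))          # effective rank -> reduced dimension
U_r = U[:, :r]                            # reduced-order basis

print("reduced dimension:", r)
# Any new response y can now be approximated cheaply as U_r @ (U_r.T @ y)
```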
Bahadori, Amir A; Sato, Tatsuhiko; Slaba, Tony C; Shavers, Mark R; Semones, Edward J; Van Baalen, Mary; Bolch, Wesley E
2013-10-21
NASA currently uses one-dimensional deterministic transport to generate values of the organ dose equivalent needed to calculate stochastic radiation risk following crew space exposures. In this study, organ absorbed doses and dose equivalents are calculated for 50th percentile male and female astronaut phantoms using both the NASA High Charge and Energy Transport Code to perform one-dimensional deterministic transport and the Particle and Heavy Ion Transport Code System to perform three-dimensional Monte Carlo transport. Two measures of radiation risk, effective dose and risk of exposure-induced death (REID) are calculated using the organ dose equivalents resulting from the two methods of radiation transport. For the space radiation environments and simplified shielding configurations considered, small differences (<8%) in the effective dose and REID are found. However, for the galactic cosmic ray (GCR) boundary condition, compensating errors are observed, indicating that comparisons between the integral measurements of complex radiation environments and code calculations can be misleading. Code-to-code benchmarks allow for the comparison of differential quantities, such as secondary particle differential fluence, to provide insight into differences observed in integral quantities for particular components of the GCR spectrum.
Dose reduction potential of iterative reconstruction algorithms in neck CTA - a simulation study.
Ellmann, Stephan; Kammerer, Ferdinand; Allmendinger, Thomas; Brand, Michael; Janka, Rolf; Hammon, Matthias; Lell, Michael M; Uder, Michael; Kramer, Manuel
2016-10-01
This study aimed to determine the degree of radiation dose reduction in neck CT angiography (CTA) achievable with sinogram-affirmed iterative reconstruction (SAFIRE) algorithms. Ten consecutive patients scheduled for neck CTA were included in this study. CTA images of the external carotid arteries either were reconstructed with filtered back projection (FBP) at the full radiation dose level or underwent simulated dose reduction by proprietary reconstruction software. The dose-reduced images were reconstructed using either SAFIRE 3 or SAFIRE 5 and compared with full-dose FBP images in terms of vessel definition. Five observers performed a total of 3000 pairwise comparisons. SAFIRE allowed substantial radiation dose reductions in neck CTA while maintaining vessel definition. The possible levels of radiation dose reduction ranged from approximately 34% to approximately 90% and depended on the SAFIRE algorithm strength and the size of the vessel of interest. In general, larger vessels permitted higher degrees of radiation dose reduction, especially with higher SAFIRE strength levels. With small vessels, the superiority of SAFIRE 5 over SAFIRE 3 was lost. Neck CTA can be performed with substantially less radiation dose when SAFIRE is applied. The exact degree of radiation dose reduction should be adapted to the clinical question, in particular to the smallest vessel requiring excellent definition.
Infrared Algorithm Development for Ocean Observations with EOS/MODIS
NASA Technical Reports Server (NTRS)
Brown, Otis B.
1997-01-01
Efforts continue under this contract to develop algorithms for the computation of sea surface temperature (SST) from MODIS infrared measurements. This effort includes radiative transfer modeling, comparison of in situ and satellite observations, development and evaluation of processing and networking methodologies for algorithm computation and data accession, evaluation of surface validation approaches for IR radiances, development of experimental instrumentation, and participation in MODIS project-related activities. Activities in this contract period have focused on radiative transfer modeling, evaluation of atmospheric correction methodologies, field campaigns, analysis of field data, and participation in MODIS meetings.
NASA Astrophysics Data System (ADS)
Machida, Manabu
2017-01-01
We consider the radiative transport equation in which the time derivative is replaced by the Caputo derivative. Such fractional-order derivatives are related to anomalous transport and anomalous diffusion. In this paper we describe how the time-fractional radiative transport equation is obtained from a continuous-time random walk and show how the equation is related to the time-fractional diffusion equation in the asymptotic limit. We then solve the equation with a Legendre-polynomial expansion.
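In standard notation (assumed here, since the abstract does not fix one), the equation studied reads:

```latex
% Standard notation assumed. The time-fractional radiative transport
% equation replaces \partial_t by the Caputo derivative of order
% 0 < \alpha \le 1:
\[
  \partial_t^{\alpha} I(t,\mathbf{x},\boldsymbol{\omega})
  \;+\; \boldsymbol{\omega}\cdot\nabla I \;+\; \mu_t I
  \;=\; \mu_s \int_{\mathbb{S}^2}
        p(\boldsymbol{\omega},\boldsymbol{\omega}')\,
        I(t,\mathbf{x},\boldsymbol{\omega}')\,d\boldsymbol{\omega}' ,
  \qquad
  \partial_t^{\alpha} I(t)
  \;=\; \frac{1}{\Gamma(1-\alpha)}
        \int_0^t \frac{\partial_s I(s)}{(t-s)^{\alpha}}\,ds ,
\]
% which recovers the usual equation as \alpha \to 1 and reduces to a
% time-fractional diffusion equation in the asymptotic limit.
```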
In-Space Radiator Shape Optimization using Genetic Algorithms
NASA Technical Reports Server (NTRS)
Hull, Patrick V.; Kittredge, Ken; Tinker, Michael; SanSoucie, Michael
2006-01-01
Future space exploration missions will require the development of more advanced in-space radiators. These radiators should be highly efficient, lightweight, deployable heat rejection systems. Typical radiators for in-space heat mitigation commonly comprise a substantial portion of the total vehicle mass, so a mass savings of even 5-10% can greatly improve vehicle performance. The objective of this paper is to present the development of detailed tools for the analysis and design of in-space radiators using evolutionary computation techniques. The optimality criterion is defined as a two-dimensional radiator shape demonstrating the smallest mass for the greatest overall heat transfer; the end result is thus a set of highly functional radiator designs. This cross-disciplinary work combines topology optimization and thermal analysis design by means of a genetic algorithm. The proposed design tool consists of the following steps: design parameterization based on the exterior boundary of the radiator, objective function definition (mass minimization and heat loss maximization), objective function evaluation via finite element analysis (thermal radiation analysis), and optimization based on evolutionary algorithms. The radiator design problem is defined as follows: the input force is a driving temperature, and the output reaction is heat loss. Appropriate modeling of the space environment is added to capture its effect on the radiator. The design parameters chosen for this radiator shape optimization problem fall into two classes: variable height along the width of the radiator, and a spline curve defining the material boundary of the radiator. The implementation of multiple design parameter schemes allows the user to have more confidence in the radiator optimization tool upon demonstration of convergence between the two schemes. The tool easily allows the user to manipulate the driving temperature regions, thus permitting detailed design of in-space radiators for unique situations. Preliminary results indicate an optimized shape following that of the temperature distribution regions in the "cooler" portions of the radiator. The results closely follow the expected radiator shape.
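A compact genetic-algorithm skeleton in the spirit of this design loop is sketched below; the fitness model scalarizes the two objectives (maximize heat loss, minimize mass) with an invented weighting, standing in for the paper's finite-element thermal evaluation, and all GA settings are assumptions.

```python
import numpy as np

# Toy GA: each individual is a vector of fin heights across the radiator
# width; fitness trades heat loss (maximize) against mass (minimize).
# The fitness model and parameters are invented for illustration.

rng = np.random.default_rng(1)
n_genes, pop_size, generations = 10, 40, 60

def fitness(heights):
    heat_loss = np.sum(np.sqrt(heights))    # toy: diminishing returns with height
    mass = np.sum(heights)                  # toy: mass grows linearly with height
    return heat_loss - 0.4 * mass           # assumed two-objective weighting

pop = rng.uniform(0.1, 2.0, size=(pop_size, n_genes))
for _ in range(generations):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-(pop_size // 2):]]           # selection
    picks = rng.integers(0, len(parents), pop_size - len(parents))
    children = parents[picks] + rng.normal(0.0, 0.05, (len(picks), n_genes))
    pop = np.vstack([parents, np.clip(children, 0.05, 2.0)])       # mutation + elitism

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("best fin-height profile:", np.round(best, 2))
```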
NASA Technical Reports Server (NTRS)
Olson, William S.; Kummerow, Christian D.; Yang, Song; Petty, Grant W.; Tao, Wei-Kuo; Bell, Thomas L.; Braun, Scott A.; Wang, Yansen; Lang, Stephen E.; Johnson, Daniel E.
2004-01-01
A revised Bayesian algorithm for estimating surface rain rate, convective rain proportion, and latent heating/drying profiles from satellite-borne passive microwave radiometer observations over ocean backgrounds is described. The algorithm searches a large database of cloud-radiative model simulations to find cloud profiles that are radiatively consistent with a given set of microwave radiance measurements. The properties of these radiatively consistent profiles are then composited to obtain best estimates of the observed properties. The revised algorithm is supported by an expanded and more physically consistent database of cloud-radiative model simulations. The algorithm also features a better quantification of the convective and non-convective contributions to total rainfall, a new geographic database, and an improved representation of background radiances in rain-free regions. Bias and random error estimates are derived from applications of the algorithm to synthetic radiance data, based upon a subset of cloud resolving model simulations, and from the Bayesian formulation itself. Synthetic rain rate and latent heating estimates exhibit a trend of high (low) bias for low (high) retrieved values. The Bayesian estimates of random error are propagated to represent errors at coarser time and space resolutions, based upon applications of the algorithm to TRMM Microwave Imager (TMI) data. Errors in instantaneous rain rate estimates at 0.5 deg resolution range from approximately 50% at 1 mm/h to 20% at 14 mm/h. These errors represent about 70-90% of the mean random deviation between collocated passive microwave and spaceborne radar rain rate estimates. The cumulative algorithm error in TMI estimates at monthly, 2.5 deg resolution is relatively small (less than 6% at 5 mm/day) compared to the random error due to infrequent satellite temporal sampling (8-35% at the same rain rate).
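The Bayesian compositing step described above can be sketched in a few lines: each database profile is weighted by the misfit between its synthetic radiances and the observation, and the retrieval is the weighted mean. The database contents and the channel-error value below are invented.

```python
import numpy as np

# Sketch of a Bayesian database retrieval: weight each simulated profile
# by how well its synthetic radiances match the observation, then
# composite. All values are synthetic stand-ins.

rng = np.random.default_rng(2)
n_db = 1000
db_radiances = rng.normal(250.0, 15.0, size=(n_db, 3))   # 3 channels (K)
db_rainrates = rng.gamma(2.0, 2.0, size=n_db)            # mm/h

obs = np.array([255.0, 240.0, 260.0])                    # observed radiances (K)
sigma = 5.0                                              # assumed channel error (K)

weights = np.exp(-0.5 * np.sum((db_radiances - obs) ** 2, axis=1) / sigma**2)
rain_estimate = np.sum(weights * db_rainrates) / np.sum(weights)
print(f"composited rain-rate estimate: {rain_estimate:.2f} mm/h")
```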
NASA Astrophysics Data System (ADS)
Coffer, Amy Beth
Radiation imagers are important tools in the modern world for a wide range of applications. They span the use cases of fundamental science, astrophysics, and medical imaging, all the way to national security, nuclear safeguards, and non-proliferation verification. The radiation imagers studied in this thesis were gamma-ray imagers that detect emissions from radioactive materials. A gamma-ray imager's goal is to localize and map the distribution of radiation within its field of view despite complicating background radiation that can be terrestrial, astronomical, and temporal. Compton imaging systems are one type of gamma-ray imager that can map the radiation around the system without the use of collimation. The lack of collimation enables the imaging system to detect radiation from all directions and, at the same time, increases detection efficiency because incident radiation is not absorbed in non-sensing materials. Each Compton-scatter event within an imaging system generates a possible cone surface in space from which the radiation could have originated. Compton imaging is limited in its reconstructed image signal-to-background because these source Compton cones overlap with background-radiation Compton cones. These overlapping cones limit Compton imaging's detection sensitivity in image space. Electron-tracking Compton imaging (ETCI) can improve the detection sensitivity by measuring the Compton-scattered electron's initial trajectory. With an estimate of the scattered electron's trajectory, one can reduce the Compton back-projected cone to a cone arc, thus enabling faster radiation source detection and localization. However, the ability to measure the Compton-scattered electron trajectories adds another layer of complexity to an already complex methodology. For real-world imaging applications, improvements are needed in electron-track detection efficiency and in electron-track reconstruction. One way of measuring Compton-scattered electron trajectories is with high-resolution charge-coupled devices (CCDs). The proof-of-principle CCD-based ETCI experiment demonstrated the CCDs' ability to measure the Compton-scattered electron tracks as a 2-dimensional image. Electron-track-imaging algorithms using the electron-track image are able to determine the 3-dimensional electron-track trajectory to within +/- 20 degrees. The work presented here comprises the physics simulations developed alongside the experimental proof-of-principle effort. The development of accurate physics modeling for multiple-layer CCD-based ETCI systems allows for the accurate prediction of future ETCI system performance. The simulations also enable quick development insights for system design, and they guide the development of electron-track reconstruction methods. The physics simulation efforts for this project looked closely at the accuracy of the Geant4 Monte Carlo methods for medium-energy electron transport. In older versions of Geant4 there were some discrepancies between the electron-tracking experimental measurements and the simulation results. It was determined that, when comparing the electron dynamics at very high resolutions, Geant4 simulations must be fine-tuned with careful choices of physics production cuts and electron physics step sizes. One result of this work is a CCD Monte Carlo model that has been benchmarked against experimental findings and fully characterized for both photon and electron transport.
The CCD physics model now matches experimental results to within 1 percent error for scattered-electron energies below 500 keV. Following the improvements to the CCD simulations, the performance of a realistic two-layer CCD-stack system was characterized. The realistic CCD-stack system examined the effect of thin passive layers on the CCDs' front face and back contact. The photon interaction efficiency was calculated for the two-layer CCD stack, and we found that there is a 90 percent probability that scattered electrons from a 662 keV source stay within a single active layer. This demonstrates the improved detection efficiency, which is one of the strengths of the CCD implementation of an ETCI system. The CCD-stack simulations also established that electron tracks scattering from one CCD layer to another could be reconstructed. The passive regions of the CCD stack mean that these inter-layer scattered electron tracks will always lose both angular information and energy information. Examining the angular changes of these electrons scattering between the CCD layers showed that there is not a strong energy dependence of the angular changes caused by the passive regions of the CCDs. The angular changes of the electron track are, for the most part, a function of the thickness of the thin back layer of the CCDs. Lastly, an approach using CCD-stack simulations was developed to reconstruct the energy transport across dead layers, and its feasibility was demonstrated. Adding back this lost energy will limit the loss of energy resolution of the scatter interactions. Energy resolution losses would negatively impact the achievable image resolution from image reconstruction algorithms. Returning some of the energy to the reconstructed electron track will help retain the expected performance of the electron-track trajectory determination algorithm.
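The cone geometry underlying Compton imaging follows from the scatter kinematics; the sketch below evaluates the standard Compton relation for the cone opening angle from the measured energy split (the 662 keV example is generic, not a result from the thesis).

```python
import numpy as np

# Compton-cone geometry: the scatter angle follows from the energy split via
# cos(theta) = 1 - m_e c^2 * (1/E_scattered - 1/E_incident).

ME_C2 = 511.0  # electron rest energy, keV

def compton_cone_angle(e_deposited, e_incident):
    """Opening angle (deg) of the back-projected cone for a photon of
    e_incident keV that deposits e_deposited keV in the first scatter."""
    e_scattered = e_incident - e_deposited
    cos_theta = 1.0 - ME_C2 * (1.0 / e_scattered - 1.0 / e_incident)
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# 662 keV (Cs-137) photon depositing 200 keV in the first interaction:
print(f"cone half-angle: {compton_cone_angle(200.0, 662.0):.1f} deg")
```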
Olive Crown Porosity Measurement Based on Radiation Transmittance: An Assessment of Pruning Effect.
Castillo-Ruiz, Francisco J; Castro-Garcia, Sergio; Blanco-Roldan, Gregorio L; Sola-Guirado, Rafael R; Gil-Ribes, Jesus A
2016-05-19
Crown porosity influences radiation interception, air movement through the fruit orchard, spray penetration, and harvesting operation in fruit crops. The aim of the present study was to develop an accurate and reliable methodology based on transmitted radiation measurements to assess the porosity of traditional olive trees under different pruning treatments. Transmitted radiation was employed as an indirect method to measure crown porosity in two olive orchards of the Picual and Hojiblanca cultivars. Additionally, three different pruning treatments were considered to determine if the pruning system influences crown porosity. This study evaluated the accuracy and repeatability of four algorithms in measuring crown porosity under different solar zenith angles. From a 14° to 30° solar zenith angle, the selected algorithm produced an absolute error of less than 5% and a repeatability higher than 0.9. The described method and selected algorithm proved satisfactory in field results, making it possible to measure crown porosity at different solar zenith angles. However, pruning fresh weight did not show any relationship with crown porosity due to the great differences between removed branches. A robust and accurate algorithm was selected for crown porosity measurements in traditional olive trees, making it possible to discern between different pruning treatments.
GLASS daytime all-wave net radiation product: Algorithm development and preliminary validation
Jiang, Bo; Liang, Shunlin; Ma, Han; ...
2016-03-09
Mapping surface all-wave net radiation (Rn) is critically needed for various applications. Several existing Rn products from numerical models and satellite observations have coarse spatial resolutions, and their accuracies may not meet the requirements of land applications. In this study, we develop the Global LAnd Surface Satellite (GLASS) daytime Rn product at a 5 km spatial resolution. Its algorithm for converting shortwave radiation to all-wave net radiation using the Multivariate Adaptive Regression Splines (MARS) model is determined after comparison with three other algorithms. The validation of the GLASS Rn product based on high-quality in situ measurements in the United States shows a coefficient of determination value of 0.879, an average root mean square error of 31.61 W m⁻², and an average bias of 17.59 W m⁻². Furthermore, we also compare our product/algorithm with another satellite product (CERES-SYN) and two reanalysis products (MERRA and JRA55), and find that the accuracy of the much higher spatial resolution GLASS Rn product is satisfactory. The GLASS Rn product from 2000 to the present is operational and freely available to the public.
Algorithms for optimization of the transport system in living and artificial cells.
Melkikh, A V; Sutormina, M I
2011-06-01
An optimization of the transport system in a cell has been considered from the viewpoint of operations research. Algorithms are proposed for optimizing the transport system of a cell in terms of both efficiency and weak sensitivity to environmental changes. The switching between various transport systems is considered as the mechanism of a cell's weak sensitivity to changes in its environment. As an example, the use of the algorithms to optimize a cardiac cell is considered. We find theoretically that, for a cardiac muscle cell, an increase of the extracellular potassium concentration triggers a switching of the transport systems for this ion. This conclusion qualitatively coincides with experiments. The problem of synthesizing an optimal transport system in an artificial cell is also stated.
Computer aided radiation analysis for manned spacecraft
NASA Technical Reports Server (NTRS)
Appleby, Matthew H.; Griffin, Brand N.; Tanner, Ernest R., II; Pogue, William R.; Golightly, Michael J.
1991-01-01
In order to assist in the design of radiation shielding, an analytical tool is presented that can be employed in combination with CAD facilities and NASA transport codes. The nature of radiation in space is described, and the operational requirements for protection are listed as background information for the use of the technique. The method is based on the Boeing radiation exposure model (BREM), which combines NASA radiation transport codes and CAD facilities; the output is given as contour maps of the radiation-shield distribution so that dangerous areas can be identified. Computational models are used to solve the 1D Boltzmann transport equation and determine the shielding needs for the worst-case scenario. BREM can be employed directly with the radiation computations to assess radiation protection during all phases of design, which saves time and, ultimately, spacecraft weight.
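A miniature version of such a ray-by-ray shielding assessment is sketched below: rays cast from a dose point through the vehicle model each pick up an areal density, and a 1D dose-versus-depth curve is interpolated along each ray. The depth-dose table and ray thicknesses are invented, not BREM outputs.

```python
import numpy as np

# Sketch of a sectoring/ray-trace dose estimate: divide the solid angle
# around a dose point into rays, assign each ray the areal density it
# traverses in the CAD model, and interpolate a precomputed 1D
# dose-versus-depth curve along each ray. All numbers are invented.

depth = np.array([0.0, 1.0, 2.0, 5.0, 10.0, 20.0])   # areal density (g/cm^2)
dose = np.array([50.0, 30.0, 22.0, 12.0, 6.0, 2.5])  # dose rate behind that depth

ray_thicknesses = np.array([1.2, 3.5, 0.8, 7.0])     # from a hypothetical CAD ray trace
ray_weights = np.full(4, 1.0 / 4)                    # equal solid-angle sectors

point_dose = np.sum(ray_weights * np.interp(ray_thicknesses, depth, dose))
print(f"estimated dose at the point: {point_dose:.1f}")
```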
Satellite Imagery Analysis for Nighttime Temperature Inversion Clouds
NASA Technical Reports Server (NTRS)
Kawamoto, K.; Minnis, P.; Arduini, R.; Smith, W., Jr.
2001-01-01
Clouds play important roles in the climate system. Their optical and microphysical properties, which largely determine their radiative properties, need to be investigated. Among several measurement means, satellite remote sensing seems the most promising. Since most of the cloud algorithms proposed so far are for daytime use, relying on solar radiation, Minnis et al. (1998) developed one for nighttime use based on the 3.7-, 11-, and 12-micron channels. Their algorithm, however, has the drawback that it cannot treat temperature inversion cases. We update their algorithm by incorporating a new parameterization by Arduini et al. (1999) that is valid for temperature inversion cases. The updated algorithm has been applied to GOES satellite data, and reasonable retrieval results were obtained.
Scalable Domain Decomposed Monte Carlo Particle Transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Brien, Matthew Joseph
2013-12-05
In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.
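The particle-handoff bookkeeping at the heart of domain decomposition can be illustrated without MPI: each domain advances its particles one step, keeps those still inside, and banks those that cross into a neighbor's domain for delivery. The 1D domains and random-walk step below are invented for illustration.

```python
import random
from collections import defaultdict

# Toy domain-decomposed Monte Carlo step. Real codes exchange the banks
# over MPI; here everything runs serially on invented 1D domains.

random.seed(3)
domains = {0: (0.0, 5.0), 1: (5.0, 10.0)}    # two 1D spatial domains

def owner(x):
    """Return the rank whose domain contains x, or None if x escaped."""
    for rank, (lo, hi) in domains.items():
        if lo <= x < hi:
            return rank
    return None

particles = {0: [random.uniform(0.0, 5.0) for _ in range(4)],
             1: [random.uniform(5.0, 10.0) for _ in range(4)]}
banks = defaultdict(list)                    # per-destination particle banks

for rank in domains:
    for x in particles[rank]:
        x_new = x + random.uniform(-2.0, 2.0)   # one transport step
        dest = owner(x_new)
        if dest is not None:                    # else: leaked out of geometry
            banks[dest].append(x_new)

# In an MPI implementation, each rank would now send banks[neighbor] away.
particles = {rank: banks[rank] for rank in domains}
print({rank: [round(x, 2) for x in ps] for rank, ps in particles.items()})
```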
Disk-integrated reflection light curves of planets
NASA Astrophysics Data System (ADS)
Garcia Munoz, A.
2014-03-01
The light scattered by a planet's atmosphere contains valuable information on the planet's composition and aerosol content. Typically, the interpretation of that information requires elaborate radiative transport models accounting for the absorption and scattering processes undergone by the stellar photons on their passage through the atmosphere. I have been working on a particular family of algorithms based on Backward Monte Carlo (BMC) integration for solving the multiple-scattering problem in atmospheric media. BMC algorithms simulate statistically the photon trajectories in the reverse order from that in which they actually occur, i.e., they trace the photons from the detector through the atmospheric medium and onwards to the illumination source, following probability laws dictated by the medium's optical properties. BMC algorithms are versatile: they can handle diverse viewing and illumination geometries and can readily accommodate various physical phenomena. As will be shown, BMC algorithms are very well suited for the prediction of magnitudes integrated over a planet's disk (whether uniform or not). Disk-integrated magnitudes are relevant in the current context of exoplanet exploration because spatially resolving these objects will not be technologically feasible in the near future. I have been working on various predictions of the disk-integrated properties of planets that demonstrate the capacities of the BMC algorithm. These cases include the variability of the Earth's integrated signal caused by diurnal and seasonal changes in the surface reflectance and cloudiness, or by sporadic injection of large amounts of volcanic particles into the atmosphere. Since the implemented BMC algorithm includes a polarization mode, these examples also serve to illustrate the potential of polarimetry in the characterization of both Solar System and extrasolar planets. The work is complemented with the analysis of disk-integrated photometric observations of Earth and Venus drawn from various sources.
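A minimal backward-Monte-Carlo estimator conveys the idea for the single-scattering case in a plane-parallel slab: paths are traced from the detector, a scattering depth is importance-sampled along the line of sight, and each sample is weighted by the attenuation toward the sun times the phase function. The optical depth, geometry, and isotropic phase function are assumptions, not values from this work.

```python
import numpy as np

# Toy backward-MC estimate of single-scattered reflectance in a
# plane-parallel slab. All inputs are invented for illustration.

rng = np.random.default_rng(4)
tau_total, mu_view, mu_sun = 0.5, 0.8, 0.6   # slab optical depth, direction cosines
n_paths = 100_000

# Probability that a view-path photon scatters somewhere inside the slab
path_norm = 1.0 - np.exp(-tau_total / mu_view)

# Importance-sample the scattering depth from the truncated exponential
u = rng.uniform(0.0, 1.0, n_paths)
tau_s = -mu_view * np.log(1.0 - u * path_norm)

phase = 1.0 / (4.0 * np.pi)                  # isotropic phase function (1/sr)
weights = path_norm * phase * np.exp(-tau_s / mu_sun)   # attenuation toward the sun

print(f"single-scattering estimate (per unit solar flux): {weights.mean():.4e}")
```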
Nonrelativistic grey Sn-transport radiative-shock solutions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferguson, J. M.; Morel, J. E.; Lowrie, R. B.
2017-06-01
We present semi-analytic radiative-shock solutions in which grey Sn-transport is used to model the radiation, and we include both constant cross sections and cross sections that depend on temperature and density. These new solutions solve for a variable Eddington factor (VEF) across the shock domain, which allows for interesting physics not seen before in radiative-shock solutions. Comparisons are made with the grey nonequilibrium-diffusion radiative-shock solutions of Lowrie and Edwards [1], which assumed that the Eddington factor is constant across the shock domain. It is our experience that the local Mach number is monotonic when producing nonequilibrium-diffusion solutions, but that this monotonicity may disappear while integrating the precursor region to produce Sn-transport solutions. For temperature- and density-dependent cross sections we show evidence of a spike in the VEF in the far upstream portion of the radiative-shock precursor. We show evidence of an adaptation zone in the precursor region, adjacent to the embedded hydrodynamic shock, as conjectured by Drake [2, 3], and also confirm his expectation that the precursor temperatures adjacent to the Zel’dovich spike take values that are greater than the downstream post-shock equilibrium temperature. We also show evidence that the radiation energy density can be nonmonotonic under the Zel’dovich spike, which is indicative of anti-diffusive radiation flow as predicted by McClarren and Drake [4]. We compare the angle dependence of the radiation flow for the Sn-transport and nonequilibrium-diffusion radiation solutions, and show that there are considerable differences in the radiation flow between these models across the shock structure. Lastly, we analyze the radiation flow to understand the cause of the adaptation zone, as well as the structure of the Sn-transport radiation-intensity solutions across the shock structure.
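To make the variable Eddington factor concrete: it is the ratio of the second to the zeroth angular moment of the intensity, which an Sn solution can evaluate directly from its discrete ordinates. A minimal sketch, assuming a Gauss-Legendre quadrature (the specific quadrature and normalization are illustrative):

```python
import numpy as np

def eddington_factor(intensity, mu, wgt):
    """Variable Eddington factor f = P/E from a discrete-ordinates
    intensity set: E and P are the zeroth and second angular moments."""
    E = np.sum(wgt * intensity)           # ~ radiation energy density
    P = np.sum(wgt * mu**2 * intensity)   # ~ radiation pressure
    return P / E

mu, wgt = np.polynomial.legendre.leggauss(8)   # S8-like angular quadrature
iso = np.ones_like(mu)                   # isotropic field  -> f = 1/3
beam = np.where(mu > 0.95, 1.0, 0.0)     # forward-peaked   -> f -> 1
print(eddington_factor(iso, mu, wgt), eddington_factor(beam, mu, wgt))
```

A constant-Eddington-factor model pins f at 1/3 everywhere; the transport solutions above let f drift between these limits across the precursor, which is what produces the VEF spike and adaptation-zone structure.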
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tetsu, Hiroyuki; Nakamoto, Taishi, E-mail: h.tetsu@geo.titech.ac.jp
Radiation is an important process of energy transport, a force, and a basis for synthetic observations, so radiation hydrodynamics (RHD) calculations have occupied an important place in astrophysics. However, although the progress in computational technology is remarkable, their high numerical cost is still a persistent problem. In this work, we compare the following schemes used to solve the nonlinear simultaneous equations of an RHD algorithm with the flux-limited diffusion approximation: the Newton–Raphson (NR) method, operator splitting, and linearization (LIN), from the perspective of the computational cost involved. For operator splitting, in addition to the traditional simple operator splitting (SOS) scheme, we examined the scheme developed by Douglas and Rachford (DROS). We solve three test problems (the thermal relaxation mode, the relaxation and the propagation of linear waves, and radiating shock) using these schemes and then compare their dependence on the time step size. As a result, we find the conditions of the time step size necessary for adopting each scheme. The LIN scheme is superior to other schemes if the ratio of radiation pressure to gas pressure is sufficiently low. On the other hand, DROS can be the most efficient scheme if the ratio is high. Although the NR scheme can be adopted independently of the regime, especially in a problem that involves optically thin regions, the convergence tends to be worse. In all cases, SOS is not practical.
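To illustrate the difference between a fully coupled NR solve and simple operator splitting, consider a toy matter-radiation relaxation problem (this is a schematic analogue of the thermal relaxation test, with made-up units and coefficients, not the paper's actual discretization):

```python
import numpy as np

def backward_euler_nr(Er0, T0, dt, sigma=1.0, cv=1.0, tol=1e-12):
    """One implicit step of  dEr/dt = sigma*(T**4 - Er),
    cv*dT/dt = -sigma*(T**4 - Er), solved with Newton-Raphson."""
    Er, T = Er0, T0
    for _ in range(50):
        F = np.array([Er - Er0 - dt*sigma*(T**4 - Er),
                      cv*(T - T0) + dt*sigma*(T**4 - Er)])
        J = np.array([[1.0 + dt*sigma, -4.0*dt*sigma*T**3],
                      [-dt*sigma,       cv + 4.0*dt*sigma*T**3]])
        dEr, dT = np.linalg.solve(J, -F)   # Newton update for both unknowns
        Er, T = Er + dEr, T + dT
        if max(abs(dEr), abs(dT)) < tol:
            break
    return Er, T

def split_step(Er0, T0, dt, sigma=1.0, cv=1.0):
    """Simple operator splitting: implicit in Er with T frozen, then an
    explicit matter update -- cheap per step, but accuracy degrades as
    dt grows, which is why the time step conditions differ by scheme."""
    Er = (Er0 + dt*sigma*T0**4) / (1.0 + dt*sigma)
    T = T0 - dt*sigma*(T0**4 - Er)/cv
    return Er, T
```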
Turbulent Radiation Effects in HSCT Combustor Rich Zone
NASA Technical Reports Server (NTRS)
Hall, Robert J.; Vranos, Alexander; Yu, Weiduo
1998-01-01
A joint UTRC-University of Connecticut theoretical program was based on describing coupled soot formation and radiation in turbulent flows using stretched flamelet theory. This effort involved using the model jet fuel kinetics mechanism to predict soot growth in flamelets at elevated pressure, incorporating an efficient model for turbulent thermal radiation into a discrete transfer radiation code, and coupling the soot growth, flowfield, and radiation algorithms. The soot calculations used a recently developed opposed jet code which couples the dynamical equations of size-class dependent particle growth with complex chemistry. Several of the tasks represent technical firsts; among these are the prediction of soot from a detailed jet fuel kinetics mechanism, the inclusion of pressure effects in the soot particle growth equations, and the inclusion of the efficient turbulent radiation algorithm in a combustor code.
Sajo, Erno
2016-01-01
We review radiation transport and clinical beam modelling for gold nanoparticle dose-enhanced radiotherapy using X-rays. We focus on the nanoscale radiation transport and its relation to macroscopic dosimetry for monoenergetic and clinical beams. Among other aspects, we discuss Monte Carlo and deterministic methods and their applications to predicting dose enhancement using various metrics. PMID:26642305
Heavy quark radiation in NLO+PS POWHEG generators
NASA Astrophysics Data System (ADS)
Buonocore, Luca; Nason, Paolo; Tramontano, Francesco
2018-02-01
In this paper we deal with radiation from heavy quarks in the context of next-to-leading order calculations matched to parton shower generators. A new algorithm for radiation from massive quarks is presented that has considerable advantages over the one previously employed. We implement the algorithm in the framework of the POWHEG-BOX, and compare it with the previous one in the case of the hvq generator for bottom production in hadronic collisions, and in the case of the bb4l generator for top production and decay.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lehtikangas, O., E-mail: Ossi.Lehtikangas@uef.fi; Tarvainen, T.; Department of Computer Science, University College London, Gower Street, London WC1E 6BT
2015-02-01
The radiative transport equation can be used as a light transport model in a medium with scattering particles, such as biological tissues. In the radiative transport equation, the refractive index is assumed to be constant within the medium. However, in biomedical media, changes in the refractive index can occur between different tissue types. In this work, light propagation in a medium with piecewise-constant refractive index is considered. Light propagation in each sub-domain with a constant refractive index is modeled using the radiative transport equation, and the equations are coupled using boundary conditions describing Fresnel reflection and refraction phenomena on the interfaces between the sub-domains. The resulting coupled system of radiative transport equations is numerically solved using a finite element method. The approach is tested with simulations. The results show that this coupled system describes light propagation accurately through comparison with the Monte Carlo method. It is also shown that neglecting the internal changes of the refractive index can lead to erroneous boundary measurements of scattered light.
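The interface coupling rests on the classical Snell/Fresnel relations. A minimal sketch of the unpolarized power reflectance at a planar index step (standard optics, not the paper's finite element coupling itself):

```python
import numpy as np

def fresnel_unpolarized(n1, n2, theta_i):
    """Unpolarized Fresnel power reflectance at a planar refractive-index
    step; returns (R, theta_t). R = 1 beyond the critical angle (total
    internal reflection), in which case theta_t is None."""
    s = n1 / n2 * np.sin(theta_i)
    if s >= 1.0:
        return 1.0, None                      # total internal reflection
    theta_t = np.arcsin(s)                    # Snell's law
    rs = (n1*np.cos(theta_i) - n2*np.cos(theta_t)) / \
         (n1*np.cos(theta_i) + n2*np.cos(theta_t))
    rp = (n1*np.cos(theta_t) - n2*np.cos(theta_i)) / \
         (n1*np.cos(theta_t) + n2*np.cos(theta_i))
    return 0.5 * (rs**2 + rp**2), theta_t

print(fresnel_unpolarized(1.4, 1.33, np.deg2rad(30)))  # tissue-like indices
```

In the coupled RTE system, R sets the fraction of angular flux reflected back into a sub-domain and 1-R the fraction refracted into its neighbor along theta_t.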
Gopi, Varun P; Palanisamy, P; Wahid, Khan A; Babyn, Paul; Cooper, David
2013-01-01
Micro-computed tomography (micro-CT) plays an important role in pre-clinical imaging. The radiation from micro-CT can result in excess radiation exposure to the specimen under test; hence, the reduction of radiation from micro-CT is essential. The proposed research focused on analyzing and testing an alternating direction augmented Lagrangian (ADAL) algorithm to recover images from random projections using total variation (TV) regularization. The use of TV regularization in compressed sensing problems makes the recovered image sharper by preserving the edges or boundaries more accurately. In this work, the TV regularization problem is addressed by ADAL, which is a variant of the classic augmented Lagrangian method for structured optimization. The per-iteration computational complexity of the algorithm is two fast Fourier transforms, two matrix-vector multiplications, and a linear-time shrinkage operation. Comparison of experimental results indicates that the proposed algorithm is stable, efficient, and competitive with the existing algorithms for solving TV regularization problems. Copyright © 2013 Elsevier Ltd. All rights reserved.
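The "linear time shrinkage operation" mentioned above is the soft-thresholding step applied to the image gradient field in each iteration. A minimal sketch of that piece (the surrounding FFT-based subproblem and multiplier updates are omitted; the periodic gradient operator shown is an assumption that keeps the system FFT-diagonalizable):

```python
import numpy as np

def grad2d(x):
    """Forward differences with periodic wrap, so the associated linear
    system can be diagonalized by FFTs (the two FFTs per iteration)."""
    return np.stack([np.roll(x, -1, axis=0) - x,
                     np.roll(x, -1, axis=1) - x])

def shrink(v, kappa):
    """Isotropic soft-thresholding of the stacked gradient field v:
    shrink the gradient magnitude by kappa, keep the direction."""
    mag = np.maximum(np.sqrt((v**2).sum(axis=0, keepdims=True)), 1e-12)
    return np.maximum(mag - kappa, 0.0) * v / mag

img = np.random.rand(64, 64)
print(shrink(grad2d(img), 0.1).shape)   # (2, 64, 64)
```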
Assessment of the Broadleaf Crops Leaf Area Index Product from the Terra MODIS Instrument
NASA Technical Reports Server (NTRS)
Tan, Bin; Hu, Jiannan; Huang, Dong; Yang, Wenze; Zhang, Ping; Shabanov, Nikolay V.; Knyazikhin, Yuri; Nemani, Ramakrishna R.; Myneni, Ranga B.
2005-01-01
The first significant processing of Terra MODIS data, called Collection 3, covered the period from November 2000 to December 2002. The Collection 3 leaf area index (LAI) and fraction of absorbed photosynthetically active radiation (FPAR) products for broadleaf crops exhibited three anomalies: (a) high LAI values during the peak growing season, (b) differences in LAI seasonality between the radiative transfer-based main algorithm and the vegetation index-based back-up algorithm, and (c) too few retrievals from the main algorithm during the summer period when the crops are at full flush. The cause of these anomalies is a mismatch between reflectances modeled by the algorithm and MODIS measurements. Therefore, the Look-Up Tables accompanying the algorithm were revised and implemented in Collection 4 processing. The main algorithm with the revised Look-Up Tables generated retrievals for over 80% of the pixels with valid data. Retrievals from the back-up algorithm, although few, should be used with caution, as they are generated from surface reflectances with high uncertainties.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Yu; Sengupta, Manajit; Dooraghi, Mike
2018-03-20
Development of accurate transposition models to simulate plane-of-array (POA) irradiance from horizontal measurements or simulations is a complex process, mainly because of the anisotropic distribution of diffuse solar radiation in the atmosphere. The limited availability of reliable POA measurements at large temporal and spatial scales leads to difficulties in the comprehensive evaluation of transposition models. This paper proposes new algorithms to assess the uncertainty of transposition models using both surface-based observations and modeling tools. We reviewed the analytical derivation of POA irradiance and the approximation of isotropic diffuse radiation that simplifies the computation. Two transposition models are evaluated against the computation by the rigorous analytical solution. We proposed a new algorithm to evaluate transposition models using the clear-sky measurements at the National Renewable Energy Laboratory's (NREL's) Solar Radiation Research Laboratory (SRRL) and a radiative transfer model that integrates diffuse radiances over various sky-viewing angles. We found that the radiative transfer model and a transposition model based on empirical regressions are superior to the isotropic models when compared to measurements. We further compared the radiative transfer model to the transposition models under an extensive range of idealized conditions. Our results suggest that the empirical transposition model yields slightly higher cloudy-sky POA irradiance than the radiative transfer model, but performs better than the isotropic models under clear-sky conditions. Significantly smaller POA irradiances computed by the transposition models are observed when the photovoltaic (PV) panel deviates from the azimuthal direction of the sun. The new algorithms developed in the current study have opened the door to a more comprehensive evaluation of transposition models for various atmospheric conditions and solar and PV orientations.
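For reference, the isotropic-sky transposition model that serves as the simplest baseline in such comparisons combines three terms: direct beam projected onto the tilted plane, isotropically weighted sky diffuse, and ground-reflected irradiance. A minimal sketch (the standard textbook formula; the albedo default is an assumption):

```python
import numpy as np

def poa_isotropic(dni, dhi, ghi, sun_zen, sun_az, tilt, az, albedo=0.2):
    """Plane-of-array irradiance from horizontal components under the
    isotropic-sky assumption; all angles in radians."""
    cos_aoi = (np.cos(sun_zen)*np.cos(tilt)
               + np.sin(sun_zen)*np.sin(tilt)*np.cos(sun_az - az))
    beam = dni * max(cos_aoi, 0.0)                       # direct on the plane
    sky = dhi * (1.0 + np.cos(tilt)) / 2.0               # isotropic diffuse
    ground = ghi * albedo * (1.0 - np.cos(tilt)) / 2.0   # ground reflection
    return beam + sky + ground

# south-facing panel tilted 30 degrees, sun 40 degrees from zenith
print(poa_isotropic(800, 120, 700, np.deg2rad(40), np.pi, np.deg2rad(30), np.pi))
```

The anisotropic diffuse distribution discussed in the paper is exactly what the (1+cos tilt)/2 sky-view factor ignores, which is why the isotropic model degrades off the solar azimuth.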
The TROPOMI surface UV algorithm
NASA Astrophysics Data System (ADS)
Lindfors, Anders V.; Kujanpää, Jukka; Kalakoski, Niilo; Heikkilä, Anu; Lakkala, Kaisa; Mielonen, Tero; Sneep, Maarten; Krotkov, Nickolay A.; Arola, Antti; Tamminen, Johanna
2018-02-01
The TROPOspheric Monitoring Instrument (TROPOMI) is the only payload of the Sentinel-5 Precursor (S5P), which is a polar-orbiting satellite mission of the European Space Agency (ESA). TROPOMI is a nadir-viewing spectrometer measuring in the ultraviolet, visible, near-infrared, and the shortwave infrared that provides near-global daily coverage. Among other things, TROPOMI measurements will be used for calculating the UV radiation reaching the Earth's surface. Thus, the TROPOMI surface UV product will contribute to the monitoring of UV radiation by providing daily information on the prevailing UV conditions over the globe. The TROPOMI UV algorithm builds on the heritage of the Ozone Monitoring Instrument (OMI) and the Satellite Application Facility for Atmospheric Composition and UV Radiation (AC SAF) algorithms. This paper provides a description of the algorithm that will be used for estimating surface UV radiation from TROPOMI observations. The TROPOMI surface UV product includes the following UV quantities: the UV irradiance at 305, 310, 324, and 380 nm; the erythemally weighted UV; and the vitamin-D weighted UV. Each of these are available as (i) daily dose or daily accumulated irradiance, (ii) overpass dose rate or irradiance, and (iii) local noon dose rate or irradiance. In addition, all quantities are available corresponding to actual cloud conditions and as clear-sky values, which otherwise correspond to the same conditions but assume a cloud-free atmosphere. This yields 36 UV parameters altogether. The TROPOMI UV algorithm has been tested using input based on OMI and the Global Ozone Monitoring Experiment-2 (GOME-2) satellite measurements. These preliminary results indicate that the algorithm is functioning according to expectations.
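The count of 36 UV parameters follows directly from the combinations listed: six quantities, three temporal forms, and two sky conditions. A small enumeration makes the product structure explicit (identifier names are mine, not the product's official parameter names):

```python
from itertools import product

quantities = ["irr305", "irr310", "irr324", "irr380", "erythemal", "vitaminD"]
forms = ["daily_dose", "overpass", "local_noon"]
skies = ["actual", "clear_sky"]

# 6 quantities x 3 temporal forms x 2 sky conditions = 36 parameters
params = [f"{q}_{f}_{s}" for q, f, s in product(quantities, forms, skies)]
assert len(params) == 36
print(params[:3])
```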
Multidimensional, fully implicit, exactly conserving electromagnetic particle-in-cell simulations
NASA Astrophysics Data System (ADS)
Chacon, Luis
2015-09-01
We discuss a new, conservative, fully implicit 2D-3V particle-in-cell algorithm for non-radiative, electromagnetic kinetic plasma simulations, based on the Vlasov-Darwin model. Unlike earlier linearly implicit PIC schemes and standard explicit PIC schemes, fully implicit PIC algorithms are unconditionally stable and allow exact discrete energy and charge conservation. This has been demonstrated in 1D electrostatic and electromagnetic contexts. In this study, we build on these recent algorithms to develop an implicit, orbit-averaged, time-space-centered finite difference scheme for the Darwin field and particle orbit equations for multiple species in multiple dimensions. The Vlasov-Darwin model is very attractive for PIC simulations because it avoids radiative noise issues in non-radiative electromagnetic regimes. The algorithm conserves global energy, local charge, and particle canonical momentum exactly, even with grid packing. The nonlinear iteration is effectively accelerated with a fluid preconditioner, which allows efficient use of large timesteps, O(√(m_i/m_e) c/v_Te) larger than the explicit CFL limit. In this presentation, we will introduce the main algorithmic components of the approach, and demonstrate the accuracy and efficiency properties of the algorithm with various numerical experiments in 1D and 2D. This work was supported by the LANL LDRD program and the DOE-SC ASCR office.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Womble, David E.
A unified collision operator was demonstrated for both radiation transport and PIC-DSMC. A side-by-side comparison between the DSMC method and the radiation transport method was conducted for photon attenuation in the atmosphere over 2 kilometers in physical distance, with a reduction of photon density of six orders of magnitude. Both DSMC and traditional radiation transport agreed with theory to two digits. This indicates that PIC-DSMC operators can be unified with the radiation transport collision operators into a single code base and that physics kernels can remain unique to the actual collision pairs. This simulation example provides an initial validation of the unified collision theory approach that will later be implemented into EMPIRE.
An exact peak capturing and essentially oscillation-free (EPCOF) algorithm, consisting of advection-dispersion decoupling, backward method of characteristics, forward node tracking, and adaptive local grid refinement, is developed to solve transport equations. This algorithm repr...
NASA Astrophysics Data System (ADS)
Lee, Sangkyu
Illicit trafficking and smuggling of radioactive materials and special nuclear materials (SNM) are considered one of the most important recent global nuclear threats. Monitoring the transport and safety of radioisotopes and SNM is challenging due to their weak signals and easy shielding. Great efforts worldwide are focused on developing and improving detection technologies and algorithms for accurate and reliable detection of radioisotopes of interest, thus better securing the borders against nuclear threats. In general, radiation portal monitors enable detection of gamma- and neutron-emitting radioisotopes. Passive and active interrogation techniques, existing and under development, are all aimed at increasing accuracy and reliability while shortening the interrogation time and reducing the cost of the equipment. Equally important efforts are aimed at advancing algorithms to process the imaging data in an efficient manner, providing reliable "readings" of the interiors of the examined volumes of various sizes, ranging from cargos to suitcases. The main objective of this thesis is to develop two synergistic algorithms with the goal of providing highly reliable, low-noise identification of radioisotope signatures. These algorithms combine analysis of a passive radioactive detection technique with active interrogation imaging techniques such as gamma radiography or muon tomography. One algorithm consists of gamma spectroscopy and cosmic muon tomography, and the other is based on gamma spectroscopy and gamma radiography. The purpose of fusing two detection methodologies per algorithm is to find both heavy-Z radioisotopes and shielding materials, since radionuclides can be identified with gamma spectroscopy, and shielding materials can be detected using muon tomography or gamma radiography. These combined algorithms are created and analyzed based on numerically generated images of various cargo sizes and materials. In summary, the three detection methodologies are fused into two algorithms with mathematical functions providing: reliable identification of radioisotopes in gamma spectroscopy; noise reduction and precision enhancement in muon tomography; and atomic number and density estimation in gamma radiography. It is expected that these new algorithms may be implemented in portal scanning systems with the goal of enhancing the accuracy and reliability of detecting nuclear materials inside cargo containers.
Recent Developments in Three Dimensional Radiation Transport Using the Green's Function Technique
NASA Technical Reports Server (NTRS)
Rockell, Candice; Tweed, John; Blattnig, Steve R.; Mertens, Christopher J.
2010-01-01
In the future, astronauts will be sent into space for longer durations than on previous missions. The increased risk of exposure to dangerous radiation, such as Galactic Cosmic Rays and Solar Particle Events, is of great concern. Consequently, steps must be taken to ensure astronaut safety by providing adequate shielding. In order to better determine and verify shielding requirements, an accurate and efficient radiation transport code based on a fully three-dimensional radiation transport model using the Green's function technique is being developed.
Application of JAERI quantum molecular dynamics model for collisions of heavy nuclei
NASA Astrophysics Data System (ADS)
Ogawa, Tatsuhiko; Hashimoto, Shintaro; Sato, Tatsuhiko; Niita, Koji
2016-06-01
The quantum molecular dynamics (QMD) model incorporated into the general-purpose radiation transport code PHITS was revised for accurate prediction of fragment yields in peripheral collisions. For more accurate simulation of peripheral collisions, the stability of nuclei in their ground state was improved and the algorithm to reject invalid events was modified. In-medium corrections to nucleon-nucleon cross sections were also considered. To clarify the effect of these improvements on the fragmentation of heavy nuclei, the new QMD model coupled with a statistical decay model was used to calculate fragment production cross sections for Ag and Au targets, which were compared with data from earlier measurements. It is shown that the revised version can predict cross sections more accurately.
GPU implementation of prior image constrained compressed sensing (PICCS)
NASA Astrophysics Data System (ADS)
Nett, Brian E.; Tang, Jie; Chen, Guang-Hong
2010-04-01
The Prior Image Constrained Compressed Sensing (PICCS) algorithm (Med. Phys. 35, pg. 660, 2008) has been applied to several computed tomography applications with both standard CT systems and flat-panel based systems designed for guiding interventional procedures and radiation therapy treatment delivery. The PICCS algorithm typically utilizes a prior image which is reconstructed via the standard Filtered Backprojection (FBP) reconstruction algorithm. The algorithm then iteratively solves for the image volume that matches the measured data, while simultaneously assuring the image is similar to the prior image. The PICCS algorithm has demonstrated utility in several applications, including improved temporal resolution reconstruction, 4D respiratory-phase-specific reconstructions for radiation therapy, and cardiac reconstruction from data acquired on an interventional C-arm. One disadvantage of the PICCS algorithm, as with other iterative algorithms, is the long computation time typically associated with reconstruction. For an algorithm to gain clinical acceptance, reconstruction must be achievable in minutes rather than hours. In this work, the PICCS algorithm has been implemented on the GPU in order to significantly reduce its reconstruction time. The Compute Unified Device Architecture (CUDA) was used in this implementation.
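The "similar to the prior image" requirement is usually expressed as a convex combination of two total-variation terms. A schematic sketch of that objective (anisotropic TV and the alpha default are my simplifications; the data-fidelity constraint Ax = y is enforced separately in the iteration):

```python
import numpy as np

def piccs_objective(x, x_prior, alpha=0.5):
    """Schematic PICCS objective: alpha weights sparsity of the difference
    from the prior against sparsity of the image itself."""
    def tv(u):
        gx = np.diff(u, axis=0)
        gy = np.diff(u, axis=1)
        return np.abs(gx).sum() + np.abs(gy).sum()   # anisotropic TV
    return alpha * tv(x - x_prior) + (1.0 - alpha) * tv(x)

prior = np.zeros((32, 32))
image = prior.copy(); image[8:24, 8:24] = 1.0
print(piccs_objective(image, prior))
```

On a GPU, both the TV gradients and the forward/backprojection operators parallelize per-pixel and per-ray, which is where the reported speedups come from.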
DIAPHANE: A portable radiation transport library for astrophysical applications
NASA Astrophysics Data System (ADS)
Reed, Darren S.; Dykes, Tim; Cabezón, Rubén; Gheller, Claudio; Mayer, Lucio
2018-05-01
One of the most computationally demanding aspects of the hydrodynamical modeling of astrophysical phenomena is the transport of energy by radiation or relativistic particles. Physical processes involving energy transport are ubiquitous and of capital importance in many scenarios, ranging from planet formation to cosmic structure evolution, including explosive events like core-collapse supernovae or gamma-ray bursts. Moreover, the ability to model and hence understand these processes has often been limited by the approximations and incompleteness in the treatment of radiation and relativistic particles. The DIAPHANE project has focused on developing a portable and scalable library that handles the transport of radiation and particles (in particular neutrinos) independently of the underlying hydrodynamic code. In this work, we present the computational framework and the functionalities of the first version of the DIAPHANE library, which has been successfully ported to three different smoothed-particle hydrodynamics codes: GADGET2, GASOLINE, and SPHYNX. We also present validation of different modules solving the equations of radiation and neutrino transport using different numerical schemes.
Is the aerosol emission detectable in the thermal infrared?
NASA Astrophysics Data System (ADS)
Hollweg, H.-D.; Bakan, S.; Taylor, J. P.
2006-08-01
The impact of aerosols on the thermal infrared radiation can be assessed by combining observations and radiative transfer calculations. Both have uncertainties, which are discussed in this paper. Observational uncertainties are obtained for two FTIR instruments operated side by side on the ground during the LACE 1998 field campaign. Radiative transfer uncertainties are assessed using a line-by-line model taking into account the uncertainties of the HITRAN 2004 spectroscopic database, uncertainties in the determination of the atmospheric profiles of water vapor and ozone, and differences in the treatment of the water vapor continuum absorption by the CKD 2.4.1 and MT_CKD 1.0 algorithms. The software package OPAC was used to describe the optical properties of aerosols for climate modeling. The corresponding radiative signature is a guideline to the assessment of the uncertainty ranges of observations and models. We found that the detection of aerosols depends strongly on the measurement accuracy of atmospheric profiles of water vapor and ozone and is easier for drier conditions. Within the atmospheric window, only the forcing of downward radiation at the surface by desert aerosol emerges clearly from the uncertainties of modeling and FTIR measurement. Urban and polluted continental aerosols are only partially detectable depending on the wave number and on the atmospheric water vapor amount. Simulations for the space-borne interferometer IASI show that only upward radiation above transported mineral dust aloft emerges out of the uncertainties. The detection of aerosols with weak radiative impact by FTIR instruments like ARIES and OASIS is made difficult by noise as demonstrated by the signal to noise ratio for clean continental aerosols. Altogether, the uncertainties found suggest that it is difficult to detect the optical depths of nonmineral and unpolluted aerosols.
NASA Astrophysics Data System (ADS)
Wibking, Benjamin D.; Thompson, Todd A.; Krumholz, Mark R.
2018-07-01
The radiation force on dust grains may be dynamically important in driving turbulence and outflows in rapidly star-forming galaxies. Recent studies focus on the highly optically thick limit relevant to the densest ultraluminous galaxies and super star clusters, where reprocessed infrared photons provide the dominant source of electromagnetic momentum. However, even among starburst galaxies, the great majority instead lie in the so-called 'single-scattering' limit, where the system is optically thick to the incident starlight, but optically thin to the reradiated infrared. In this paper, we present a stability analysis and multidimensional radiation-hydrodynamic simulations exploring the stability and dynamics of isothermal dusty gas columns in this regime. We describe our algorithm for full angle-dependent radiation transport based on the discontinuous Galerkin finite element method. For a range of near-Eddington fluxes, we show that the medium is unstable, producing convective-like motions in a turbulent atmosphere with a scale height significantly inflated compared to the gas pressure scale height and mass-weighted turbulent energy densities of ~0.01-0.1 of the mid-plane radiation energy density, corresponding to mass-weighted velocity dispersions of Mach number ~0.5-2. Extrapolation of our results to optical depths of 10^3 implies maximum turbulent Mach numbers of ~20. Comparing our results to galaxy-averaged observations, and subject to the approximations of our calculations, we find that radiation pressure does not contribute significantly to the effective supersonic pressure support in star-forming discs, which in general are substantially sub-Eddington. We further examine the time-averaged vertical density profiles in dynamical equilibrium and comment on implications for radiation-pressure-driven galactic winds.
2009-01-01
PARMA: PHITS-based Analytical Radiation Model in the Atmosphere; PCAIRE: Predictive Code for Aircrew Radiation Exposure; PHITS: Particle and ... The radiation transport code utilized is called PARMA (PHITS-based Analytical Radiation Model in the Atmosphere) [36]. The particle fluxes calculated from the ... same dose equivalent coefficient regulations from the ICRP-60 regulations. As a result, the transport codes utilized by EXPACS (PHITS) and CARI-6 ...
Heberton, C.I.; Russell, T.F.; Konikow, Leonard F.; Hornberger, G.Z.
2000-01-01
This report documents the U.S. Geological Survey Eulerian-Lagrangian Localized Adjoint Method (ELLAM) algorithm that solves an integral form of the solute-transport equation, incorporating an implicit-in-time difference approximation for the dispersive and sink terms. Like the algorithm in the original version of the U.S. Geological Survey MOC3D transport model, ELLAM uses a method of characteristics approach to solve the transport equation on the basis of the velocity field. The ELLAM algorithm, however, is based on an integral formulation of conservation of mass and uses appropriate numerical techniques to obtain global conservation of mass. The implicit procedure eliminates several stability criteria required for an explicit formulation. Consequently, ELLAM allows large transport time increments to be used. ELLAM can produce qualitatively good results using a small number of transport time steps. A description of the ELLAM numerical method, the data-input requirements and output options, and the results of simulator testing and evaluation are presented. The ELLAM algorithm was evaluated for the same set of problems used to test and evaluate Version 1 and Version 2 of MOC3D. These test results indicate that ELLAM offers a viable alternative to the explicit and implicit solvers in MOC3D. Its use is desirable when mass balance is imperative or a fast, qualitative model result is needed. Although accurate solutions can be generated using ELLAM, its efficiency relative to the two previously documented solution algorithms is problem dependent.
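The method-of-characteristics kernel shared by MOC3D and ELLAM advects concentrations by tracing characteristics backward through the velocity field. A minimal 1D sketch of that backward tracing with linear interpolation (periodic boundaries and a uniform velocity are my simplifications; ELLAM's integral mass-conserving formulation and implicit dispersion step are not reproduced here):

```python
import numpy as np

def semi_lagrangian_advect(c, u, dt, dx):
    """One backward-characteristic advection step for 1D transport:
    each grid point takes the interpolated concentration at its
    departure point x - u*dt."""
    n = c.size
    x = np.arange(n) * dx
    x_dep = (x - u * dt) % (n * dx)            # departure points (periodic)
    i0 = np.floor(x_dep / dx).astype(int) % n  # left neighbor index
    frac = (x_dep / dx) % 1.0                  # interpolation weight
    return (1.0 - frac) * c[i0] + frac * c[(i0 + 1) % n]

c = np.exp(-0.5 * ((np.arange(100) - 20) / 5.0) ** 2)  # Gaussian plume
c_new = semi_lagrangian_advect(c, u=1.0, dt=5.0, dx=1.0)
```

Because the update follows characteristics rather than a CFL-limited flux stencil, large transport time increments remain stable, which is the property the report emphasizes.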
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zou, Shiyang; Song, Peng; Pei, Wenbing
2013-09-15
Based on the conjugate gradient method, a simple algorithm is presented for deconvolving the temporal response of photoelectric x-ray detectors (XRDs) to reconstruct the resolved time-dependent x-ray fluxes. With this algorithm, we have studied the impact of the temporal response of XRDs on the radiation diagnosis of hohlraums heated by a short intense laser pulse. It is found that the limiting temporal response of the XRD not only postpones the rising edge and peak position of x-ray pulses but also smoothes the possible fluctuations of radiation fluxes. Without a proper consideration of the temporal response of the XRD, the measured radiation flux can be largely misinterpreted for radiation pulses of a hohlraum heated by short or shaped laser pulses.
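One common way to set up such a deconvolution is to discretize the detector response as a convolution matrix K and apply conjugate gradients to the normal equations. A minimal sketch under that assumption (the paper's exact formulation and any regularization are not specified here):

```python
import numpy as np

def cg_deconvolve(K, y, n_iter=50):
    """Conjugate-gradient solution of K^T K x = K^T y, recovering the
    time-resolved flux x from the measured signal y = K x + noise."""
    A = K.T @ K
    b = K.T @ y
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    for _ in range(n_iter):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)   # Fletcher-Reeves style update
        p = r_new + beta * p
        r = r_new
    return x
```

Truncating the iteration count acts as mild regularization; running CG to convergence on noisy data would amplify exactly the high-frequency fluctuations the detector response smoothed away.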
Simulation of a fast diffuse optical tomography system based on radiative transfer equation
NASA Astrophysics Data System (ADS)
Motevalli, S. M.; Payani, A.
2016-12-01
Studies show that near-infrared (NIR) light (light with wavelength between 700 nm and 1300 nm) undergoes two interactions, absorption and scattering, when it penetrates a tissue. Since scattering is the predominant interaction, the calculation of the light distribution in the tissue and the image reconstruction of absorption and scattering coefficients are very complicated. Analytical and numerical methods, such as the radiative transport equation and the Monte Carlo method, have been used for the simulation of light penetration in tissue. Recently, several investigators have tried to develop diffuse optical tomography systems. In these systems, NIR light penetrates and passes through the tissue. Light exiting the tissue is then measured by NIR detectors placed around the tissue. These data are collected from all the detectors and transferred to the computational parts (including hardware and software), which produce a cross-sectional image of the tissue after performing some computational processes. In this paper, the results of the simulation of a diffuse optical tomography system are presented. This simulation involves two stages: (a) simulation of the forward problem (light penetration in the tissue), performed by solving the diffusion approximation equation in the stationary state using FEM, and (b) simulation of the inverse problem (image reconstruction), performed by the Broyden quasi-Newton optimization algorithm. This method of image reconstruction is faster than other Newton-based optimization algorithms, such as Levenberg-Marquardt.
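Broyden's method gains its speed by replacing the Jacobian recomputation of full Newton with a rank-one update. A minimal generic sketch of the "good Broyden" iteration (the actual forward-model Jacobian and stopping rules of the reconstruction are not reproduced):

```python
import numpy as np

def broyden_solve(F, x0, J0, tol=1e-8, max_iter=100):
    """Quasi-Newton root finding for F(x) = 0: the Jacobian estimate J is
    corrected by a rank-one update each step instead of being rebuilt,
    which is what makes it cheaper than Newton or Levenberg-Marquardt."""
    x = np.asarray(x0, float)
    J = np.asarray(J0, float).copy()
    f = F(x)
    for _ in range(max_iter):
        dx = np.linalg.solve(J, -f)
        x_new = x + dx
        f_new = F(x_new)
        J += np.outer(f_new - f - J @ dx, dx) / (dx @ dx)  # rank-1 update
        x, f = x_new, f_new
        if np.linalg.norm(f) < tol:
            break
    return x

# toy system: x0^2 + x1 = 3, x0 - x1 = 1  (J0 = identity guess)
sol = broyden_solve(lambda v: np.array([v[0]**2 + v[1] - 3, v[0] - v[1] - 1]),
                    [1.0, 0.0], np.eye(2))
print(sol)
```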
ipole: Semianalytic scheme for relativistic polarized radiative transport
NASA Astrophysics Data System (ADS)
Moscibrodzka, Monika; Gammie, Charles F.
2018-04-01
ipole is a ray-tracing code for covariant, polarized radiative transport, particularly useful for modeling Event Horizon Telescope sources, though it may also be used for other relativistic transport problems. The code extends the ibothros scheme for covariant, unpolarized transport using two representations of the polarized radiation field: in the coordinate frame, it parallel transports the coherency tensor, and in the frame of the plasma, it evolves the Stokes parameters under emission, absorption, and Faraday conversion. The transport step is as spacetime- and coordinate-independent as possible; the emission, absorption, and Faraday conversion step is implemented using an analytic solution to the polarized transport equation with constant coefficients. As a result, ipole is stable, efficient, and produces a physically reasonable solution even for a step with high optical depth and Faraday depth.
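The stability claim is easiest to see in the scalar (unpolarized) analogue of the constant-coefficient step: the exponential solution relaxes the intensity toward the source function rather than extrapolating it, so no step size can overshoot. A sketch of that scalar analogue (ipole's actual step solves the full coupled Stokes system, which this does not attempt):

```python
import numpy as np

def transfer_step(I0, j_em, alpha, ds):
    """Analytic update over one path step ds for dI/ds = j_em - alpha*I
    with constant emissivity j_em and absorptivity alpha."""
    if alpha == 0.0:
        return I0 + j_em * ds        # pure emission limit
    S = j_em / alpha                 # source function
    return S + (I0 - S) * np.exp(-alpha * ds)

# even a wildly optically thick step (tau = alpha*ds = 50) just lands on S
print(transfer_step(I0=0.0, j_em=2.0, alpha=1.0, ds=50.0))   # -> ~2.0
```

An explicit Euler update of the same equation would go unstable for alpha*ds > 2, which is precisely the high-optical-depth regime the abstract highlights.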
Improvements to the Ionizing Radiation Risk Assessment Program for NASA Astronauts
NASA Technical Reports Server (NTRS)
Semones, E. J.; Bahadori, A. A.; Picco, C. E.; Shavers, M. R.; Flores-McLaughlin, J.
2011-01-01
To perform dosimetry and risk assessment, NASA collects astronaut ionizing radiation exposure data from space flight, medical imaging and therapy, aviation training activities, and prior occupational exposure histories. Career risk of exposure-induced death (REID) from radiation is limited to 3 percent at a 95 percent confidence level. The Radiation Health Office at Johnson Space Center (JSC) is implementing a program to integrate the gathering, storage, analysis, and reporting of astronaut ionizing radiation dose and risk data and records. This work has several motivations, including more efficient analyses and greater flexibility in testing and adopting new methods for evaluating risks. The foundation for these improvements is a set of software tools called the Astronaut Radiation Exposure Analysis System (AREAS). AREAS is a series of MATLAB(Registered TradeMark)-based dose and risk analysis modules that interface with an enterprise-level SQL Server database by means of a secure web service. It communicates with other JSC medical and space weather databases to maintain data integrity and consistency across systems. AREAS is part of a larger NASA Space Medicine effort, the Mission Medical Integration Strategy, with the goal of collecting accurate, high-quality, and detailed astronaut health data, and then presenting it securely, reliably, and in a timely manner to medical support personnel. The modular approach to the AREAS design accommodates past, current, and future sources of data from active and passive detectors, space radiation transport algorithms, computational phantoms, and cancer risk models. Revisions of the cancer risk model, new radiation detection equipment, and improved anthropomorphic computational phantoms can be incorporated. Notable hardware updates include the Radiation Environment Monitor (which uses Medipix technology to report real-time, on-board dosimetry measurements), an updated Tissue-Equivalent Proportional Counter, and the Southwest Research Institute Radiation Assessment Detector. Also, the University of Florida hybrid phantoms, which are flexible in morphometry and positioning, are being explored as alternatives to the current NASA computational phantoms.
Lee, Ki Baek
2018-01-01
Objective To describe the quantitative image quality and histogram-based evaluation of an iterative reconstruction (IR) algorithm in chest computed tomography (CT) scans at low-to-ultralow CT radiation dose levels. Materials and Methods In an adult anthropomorphic phantom, chest CT scans were performed with 128-section dual-source CT at 70, 80, 100, 120, and 140 kVp, at the reference (3.4 mGy in volume CT dose index [CTDIvol]) and 30%-, 60%-, and 90%-reduced radiation dose levels (2.4, 1.4, and 0.3 mGy). The CT images were reconstructed by using a filtered back projection (FBP) algorithm and an IR algorithm with strengths 1, 3, and 5. Image noise, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) were statistically compared between different dose levels, tube voltages, and reconstruction algorithms. Moreover, histograms of subtraction images before and after standardization in the x- and y-axes were visually compared. Results Compared with FBP images, IR images with strengths 1, 3, and 5 demonstrated image noise reduction up to 49.1%, SNR increase up to 100.7%, and CNR increase up to 67.3%. Noteworthy image quality degradations on IR images, including a 184.9% increase in image noise, a 63.0% decrease in SNR, and a 51.3% decrease in CNR, were shown between the 60%- and 90%-reduced radiation dose levels (p < 0.0001). Subtraction histograms between FBP and IR images showed progressively increased dispersion with increased IR strength and increased dose reduction. After standardization, the histograms appeared deviated and ragged between FBP images and IR images with strength 3 or 5, but almost normally distributed between FBP images and IR images with strength 1. Conclusion The IR algorithm may be used to save radiation doses without substantial image quality degradation in chest CT scanning of the adult anthropomorphic phantom, down to approximately 1.4 mGy in CTDIvol (60% reduced dose). PMID:29354008
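For readers unfamiliar with the metrics, SNR and CNR are simple region-of-interest statistics. A sketch of one common convention (ROI choices and the exact noise definition vary between studies, so treat this as illustrative rather than the paper's protocol):

```python
import numpy as np

def snr_cnr(roi_signal, roi_contrast, roi_background):
    """ROI-based image quality metrics: image noise is the standard
    deviation in a homogeneous background region; SNR normalizes a
    region's mean by that noise; CNR normalizes the mean difference
    between two regions."""
    noise = np.std(roi_background)
    snr = np.mean(roi_signal) / noise
    cnr = abs(np.mean(roi_signal) - np.mean(roi_contrast)) / noise
    return snr, cnr

rng = np.random.default_rng(0)
sig = rng.normal(100, 5, 500)   # e.g., soft-tissue ROI
con = rng.normal(60, 5, 500)    # e.g., adjacent low-attenuation ROI
bkg = rng.normal(0, 5, 500)     # homogeneous noise ROI
print(snr_cnr(sig, con, bkg))
```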
Design of dual-road transportable portal monitoring system for visible light and gamma-ray imaging
NASA Astrophysics Data System (ADS)
Karnowski, Thomas P.; Cunningham, Mark F.; Goddard, James S.; Cheriyadat, Anil M.; Hornback, Donald E.; Fabris, Lorenzo; Kerekes, Ryan A.; Ziock, Klaus-Peter; Bradley, E. Craig; Chesser, J.; Marchant, W.
2010-04-01
The use of radiation sensors as portal monitors is increasing due to heightened concerns over the smuggling of fissile material. Transportable systems that can detect significant quantities of fissile material that might be present in vehicular traffic are of particular interest, especially if they can be rapidly deployed to different locations. To serve this application, we have constructed a rapid-deployment portal monitor that uses visible-light and gamma-ray imaging to allow simultaneous monitoring of multiple lanes of traffic from the side of a roadway. The system operation uses machine vision methods on the visible-light images to detect vehicles as they enter and exit the field of view and to measure their position in each frame. The visible-light and gamma-ray cameras are synchronized, which allows the gamma-ray imager to harvest gamma-ray data specific to each vehicle, integrating its radiation signature for the entire time that it is in the field of view. Thus our system creates vehicle-specific radiation signatures and avoids source confusion problems that plague non-imaging approaches to the same problem. Our current prototype instrument was designed for measurement of up to five lanes of freeway traffic with a pair of instruments, one on either side of the roadway. Stereoscopic cameras are used with a third "alignment" camera for motion compensation and are mounted on a 50' deployable mast. In this paper we discuss the design considerations for the machine-vision system, the algorithms used for vehicle detection and position estimates, and the overall architecture of the system. We also discuss system calibration for rapid deployment. We conclude with notes on preliminary performance and deployment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beltran, C; Kamal, H
Purpose: To provide a multicriteria optimization algorithm for intensity modulated radiation therapy using pencil proton beam scanning. Methods: Intensity modulated radiation therapy using pencil proton beam scanning requires efficient optimization algorithms to overcome the uncertainties in the Bragg peak locations. This work is focused on optimization algorithms that are based on Monte Carlo simulation of the treatment planning and use the weights and the dose volume histogram (DVH) control points to steer toward desired plans. The proton beam treatment planning process based on single-objective optimization (representing a weighted sum of multiple objectives) usually leads to time-consuming iterations involving treatment planning team members. We provide a time-efficient multicriteria optimization algorithm developed to run on an NVIDIA GPU (Graphical Processing Unit) cluster. The running time of the multicriteria optimization algorithm benefits from up-sampling of the CT voxel size in the calculations without loss of fidelity. Results: We will present preliminary results of multicriteria optimization for intensity modulated proton therapy based on DVH control points. The results will show optimization of a phantom case and a brain tumor case. Conclusion: The multicriteria optimization of intensity modulated radiation therapy using pencil proton beam scanning provides a novel tool for treatment planning. Work supported by a grant from Varian Inc.
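The DVH control points that steer the optimization are samples of the cumulative dose-volume histogram. A minimal sketch of computing that curve from a structure's voxel doses (standard definition, not the abstract's implementation):

```python
import numpy as np

def cumulative_dvh(voxel_doses):
    """Cumulative DVH: for each sorted dose value, the fraction of the
    structure's volume receiving at least that dose. A DVH control point
    is a constraint like 'no more than 20% of the volume above 30 Gy'."""
    d = np.sort(np.asarray(voxel_doses, float))
    frac = 1.0 - np.arange(d.size) / d.size
    return d, frac

doses = np.random.default_rng(0).gamma(shape=9.0, scale=3.0, size=10_000)
d, frac = cumulative_dvh(doses)
print(frac[np.searchsorted(d, 30.0)])   # fraction of volume above 30 (a.u.)
```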
An Algorithm to Compress Line-transition Data for Radiative-transfer Calculations
NASA Astrophysics Data System (ADS)
Cubillos, Patricio E.
2017-11-01
Molecular line-transition lists are an essential ingredient for radiative-transfer calculations. With recent databases now surpassing the billion-line mark, handling them has become computationally prohibitive, due to both the required processing power and memory. Here I present a temperature-dependent algorithm to separate strong from weak line transitions, reformatting the large majority of the weaker lines into a cross-section data file, and retaining the detailed line-by-line information of the fewer strong lines. For any given molecule over the 0.3-30 μm range, this algorithm reduces the number of lines to a few million, enabling faster radiative-transfer computations without a significant loss of information. The final compression rate depends on how densely populated the spectrum is. I validate this algorithm by comparing Exomol’s HCN extinction-coefficient spectra between the complete (65 million line transitions) and compressed (7.7 million) line lists. Over the 0.6-33 μm range, the average difference between extinction-coefficient values is less than 1%. A Python/C implementation of this algorithm is open-source and available at https://github.com/pcubillos/repack. So far, this code handles the Exomol and HITRAN line-transition format.
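The core of such a compression is a temperature-dependent strength criterion that splits the list into a small strong set (kept line-by-line) and a large weak set (binned into cross sections). A sketch of that partition using a Boltzmann-weighted gf proxy; the proxy and the 6-dex cutoff are illustrative assumptions, not the repack criterion itself:

```python
import numpy as np

def partition_lines(wn, gf, elow, T, kB=0.6950348):  # Boltzmann const, cm^-1/K
    """Split a line list by a temperature-dependent strength proxy
    s ~ gf * exp(-Elow / (kB*T)); wn and elow in cm^-1."""
    s = gf * np.exp(-elow / (kB * T))
    cutoff = s.max() * 1e-6            # keep lines within 6 dex of the peak
    strong = s >= cutoff
    # weak lines would be co-added onto a fixed cross-section grid
    return wn[strong], wn[~strong]

rng = np.random.default_rng(0)
wn = rng.uniform(300, 33000, 1_000_000)        # wavenumbers
gf = 10 ** rng.uniform(-12, 0, wn.size)        # oscillator strengths
elow = rng.uniform(0, 20000, wn.size)          # lower-state energies
strong, weak = partition_lines(wn, gf, elow, T=1500.0)
print(strong.size, weak.size)
```

Because the Boltzmann factor shifts with temperature, the partition must be evaluated over the temperature range of interest, which is why the algorithm is described as temperature-dependent.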
NASA Astrophysics Data System (ADS)
Wang, Jun; Christopher, Sundar A.; Nair, U. S.; Reid, Jeffrey S.; Prins, Elaine M.; Szykman, James; Hand, Jenny L.
2006-03-01
As is typical in the Northern Hemisphere spring, during 20 April to 21 May 2003, significant biomass burning smoke from Central America was transported to the southeastern United States (SEUS). A coupled aerosol, radiation, and meteorology model that is built upon the heritage of the Regional Atmospheric Modeling System (RAMS) and incorporates the newly developed Assimilation and Radiation Online Modeling of Aerosols (AROMA) algorithm was used to simulate the smoke transport and quantify the smoke radiative impacts on surface energetics, the boundary layer, and other atmospheric processes. This paper, the first of a two-part series, describes the model and examines the ability of RAMS-AROMA to simulate the smoke transport. Because biomass-burning fire activities have distinct diurnal variations, the FLAMBE hourly smoke emission inventory, which is derived from the geostationary satellite (GOES) fire products, was assimilated into the model. In the "top-down" analysis, ground-based observations were used to evaluate the model performance, and the comparisons with model-simulated results were used to estimate emission uncertainties. Qualitatively, a 30-day simulation of the smoke spatial distribution as well as the timing and location of the smoke fronts are consistent with those identified from the PM2.5 observation network, local air quality reports, and the measurements of aerosol optical thickness (AOT) and aerosol vertical profiles from the Southern Great Plains (SGP) Atmospheric Radiation Measurement (ARM) site in Oklahoma. Quantitatively, the model-simulated daily mean near-surface dry smoke mass correlates well with PM2.5 mass at 34 locations in Texas and with the total carbon mass and nonsoil potassium mass (KNON) at three IMPROVE sites along the smoke pathway (with linear correlation coefficients R = 0.77, 0.74, and 0.69 at a significance level greater than 0.99, respectively). The top-down sensitivity analysis indicates that the total smoke particle emission during the study period is about 1.3 ± 0.2 Tg. The results further indicate that the simulation with a daily smoke emission inventory provides a slightly better correlation with measurements in the downwind region on daily scales but gives an unrealistic diurnal variation of AOT in the smoke source region. This study suggests that the assimilation of emission inventories from geostationary satellites is superior to that from polar-orbiting satellites and has important implications for the modeling of air quality in areas influenced by fire-related pollutants from distant sources.
Validation of energy-weighted algorithm for radiation portal monitor using plastic scintillator.
Lee, Hyun Cheol; Shin, Wook-Geun; Park, Hyo Jun; Yoo, Do Hyun; Choi, Chang-Il; Park, Chang-Su; Kim, Hong-Suk; Min, Chul Hee
2016-01-01
To prevent illicit trafficking of radionuclides, radiation portal monitor (RPM) systems employing plastic scintillators have been used in ports and airports. However, their poor energy resolution makes the discrimination of radioactive materials inaccurate. In this study, an energy-weighted algorithm was validated for identifying (133)Ba, (22)Na, (137)Cs, and (60)Co using a plastic scintillator. The Compton edges of the energy spectra were converted to peaks based on the algorithm. The peaks show a maximum error of 6% relative to the theoretical Compton edge. Copyright © 2015 Elsevier Ltd. All rights reserved.
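One way to picture edge-to-peak conversion is to weight each spectral bin by a power of its energy, which turns the broad Compton continuum into a localized feature near the edge. The sketch below is only a plausible reading of the energy-weighting idea; the validated algorithm's actual weighting function may differ:

```python
import numpy as np

def energy_weighted_peak(counts, energies, power=1.0):
    """Illustrative energy weighting: multiply each bin by E**power and
    locate the maximum of the weighted spectrum as the edge estimate."""
    weighted = counts * energies**power
    return energies[np.argmax(weighted)], weighted

# toy Compton continuum falling off above an edge near 477 keV (Cs-137)
E = np.linspace(50, 700, 651)
counts = np.where(E < 477, 1000 * np.exp(-E / 600), 5.0)
peak, _ = energy_weighted_peak(counts, E)
print(peak)   # lands near the 477 keV edge
```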
NASA Astrophysics Data System (ADS)
Li, Dong-xia; Ye, Qian-wen
An out-of-band radiation suppression algorithm must be applied efficiently in a broadband aeronautical communication system so as not to interfere with the operation of existing systems in the aviation L-band. After a brief introduction of the broadband aeronautical multi-carrier communication (B-AMC) system model, several sidelobe suppression techniques for orthogonal frequency division multiplexing (OFDM) systems are presented and analyzed in this paper, in order to find a suitable algorithm for the B-AMC system. Simulation results show that raised-cosine windowing can effectively suppress the out-of-band radiation of the B-AMC system.
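Raised-cosine windowing suppresses sidelobes by tapering the edges of each (cyclically extended) OFDM symbol instead of switching it on and off abruptly. A minimal sketch of the window itself (a Tukey-style taper; roll-off length and symbol length are illustrative):

```python
import numpy as np

def raised_cosine_window(n_total, n_roll):
    """Window with raised-cosine edge tapers of n_roll samples on each
    side and a flat top; multiplying each extended OFDM symbol by this
    smooths symbol transitions and lowers spectral sidelobes."""
    k = np.arange(n_roll)
    ramp = 0.5 * (1.0 - np.cos(np.pi * k / n_roll))   # rises 0 -> ~1
    return np.concatenate([ramp, np.ones(n_total - 2 * n_roll), ramp[::-1]])

w = raised_cosine_window(n_total=80, n_roll=8)   # e.g., 64-FFT symbol + CP
print(w[:4], w.shape)
```

The trade-off is that longer roll-offs give faster sidelobe decay but consume guard time, so the roll-off must fit within the symbol's cyclic extension.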
NASA Technical Reports Server (NTRS)
Halyo, Nesim; Pandey, Dhirendra K.; Taylor, Deborah B.
1989-01-01
The Earth Radiation Budget Experiment (ERBE) is making high-absolute-accuracy measurements of the reflected solar and Earth-emitted radiation as well as the incoming solar radiation from three satellites: ERBS, NOAA-9, and NOAA-10. Each satellite has four Earth-looking nonscanning radiometers and three scanning radiometers. A fifth nonscanner, the solar monitor, measures the incoming solar radiation. The development of the ERBE sensor characterization procedures is described using the calibration data for each of the Earth-looking nonscanners and scanners. Sensor models for the ERBE radiometers are developed, including the radiative exchange, conductive heat flow, and electronics processing for transient and steady-state conditions. The steady-state models are used to interpret the sensor outputs, resulting in the data reduction algorithms for the ERBE instruments. Both ground calibration and flight calibration procedures are treated and analyzed. The ground and flight calibration coefficients for the data reduction algorithms are presented.
NASA Astrophysics Data System (ADS)
Didari, Azadeh; Pinar Mengüç, M.
2017-08-01
Advances in nanotechnology and nanophotonics are inextricably linked with the need for reliable computational algorithms to be adapted as design tools for the development of new concepts in energy harvesting, radiative cooling, nanolithography and nano-scale manufacturing, among others. In this paper, we provide an outline for such a computational tool, named NF-RT-FDTD, to determine the near-field radiative transfer between structured surfaces using Finite Difference Time Domain method. NF-RT-FDTD is a direct and non-stochastic algorithm, which accounts for the statistical nature of the thermal radiation and is easily applicable to any arbitrary geometry at thermal equilibrium. We present a review of the fundamental relations for far- and near-field radiative transfer between different geometries with nano-scale surface and volumetric features and gaps, and then we discuss the details of the NF-RT-FDTD formulation, its application to sample geometries and outline its future expansion to more complex geometries. In addition, we briefly discuss some of the recent numerical works for direct and indirect calculations of near-field thermal radiation transfer, including Scattering Matrix method, Finite Difference Time Domain method (FDTD), Wiener Chaos Expansion, Fluctuating Surface Current (FSC), Fluctuating Volume Current (FVC) and Thermal Discrete Dipole Approximations (TDDA).
NASA Astrophysics Data System (ADS)
Hartatik; Purbayu, A.; Triyono, L.
2018-03-01
A major problem that often occurs in waste transportation in each region is the routing of garbage trucks. Determining these routes deserves major attention because it affects fuel consumption and the employees' working time. Therefore, in this research we develop an application to optimize the routes with the pigeonhole and Dijkstra algorithms. The pigeonhole algorithm is used to determine which garbage trucks should serve a particular TPS (waste collection site). Time optimization is done by determining the shortest path that can be traversed by each garbage truck. Data generated from the pigeonhole algorithm are then used to determine the shortest path using the Dijkstra algorithm.
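The Dijkstra stage is the standard single-source shortest-path computation over the road network. A minimal sketch (the graph below is a made-up toy network, not the paper's data):

```python
import heapq

def dijkstra(graph, source):
    """Shortest path lengths from `source`; `graph` maps each node to a
    list of (neighbor, distance) pairs, e.g. road segments between the
    depot and TPS sites."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry, skip it
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

roads = {"depot": [("tps1", 4), ("tps2", 1)],
         "tps2": [("tps1", 2)],
         "tps1": []}
print(dijkstra(roads, "depot"))   # {'depot': 0.0, 'tps2': 1.0, 'tps1': 3.0}
```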
Particle swarm optimization - Genetic algorithm (PSOGA) on linear transportation problem
NASA Astrophysics Data System (ADS)
Rahmalia, Dinita
2017-08-01
The Linear Transportation Problem (LTP) is a constrained optimization problem in which we want to minimize cost subject to a balance between total supply and total demand. Exact methods such as the northwest corner, Vogel, Russell, and minimal cost methods have been applied to approach the optimal solution. In this paper, we use a heuristic, Particle Swarm Optimization (PSO), to solve linear transportation problems with any number of decision variables. In addition, we combine the mutation operator of the Genetic Algorithm (GA) with PSO to improve the optimal solution. This method is called Particle Swarm Optimization - Genetic Algorithm (PSOGA). The simulations show that PSOGA can improve the optimal solutions obtained by PSO.
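A minimal sketch of the PSOGA idea, under assumptions of my own (a small balanced 2x3 instance, quadratic penalties for the supply/demand balances, and Gaussian mutation standing in for the paper's GA operator):

```python
import numpy as np

rng = np.random.default_rng(1)
cost = np.array([[4., 6., 8.], [5., 3., 7.]])            # c_ij, hypothetical
supply, demand = np.array([30., 40.]), np.array([20., 25., 25.])

def fitness(x):
    """Transportation cost plus a quadratic penalty for violating the balances."""
    x = x.reshape(2, 3)
    pen = np.sum((x.sum(1) - supply) ** 2) + np.sum((x.sum(0) - demand) ** 2)
    return np.sum(cost * x) + 100.0 * pen

n, dim, w, c1, c2, pm = 40, 6, 0.7, 1.5, 1.5, 0.1        # swarm and GA settings
pos = rng.uniform(0.0, 30.0, (n, dim))
vel = np.zeros((n, dim))
pbest, pval = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pval.argmin()]

for _ in range(500):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, None)                  # shipments stay non-negative
    mask = rng.random((n, dim)) < pm                     # GA-style mutation kick
    pos[mask] += rng.normal(0.0, 2.0, mask.sum())
    pos = np.clip(pos, 0.0, None)
    val = np.array([fitness(p) for p in pos])
    better = val < pval
    pbest[better], pval[better] = pos[better], val[better]
    gbest = pbest[pval.argmin()]

print(gbest.reshape(2, 3).round(1), fitness(gbest))
```

The mutation term is what distinguishes PSOGA from plain PSO here: the random kick lets particles escape local optima the swarm would otherwise settle into.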
Mobile transporter path planning
NASA Technical Reports Server (NTRS)
Baffes, Paul; Wang, Lui
1990-01-01
The use of a genetic algorithm (GA) for solving the mobile transporter path planning problem is investigated. The mobile transporter is a traveling robotic vehicle proposed for the space station which must be able to reach any point of the structure autonomously. Elements of the genetic algorithm are explored in both a theoretical and experimental sense. Specifically, double crossover, greedy crossover, and tournament selection techniques are examined. Additionally, the use of local optimization techniques working in concert with the GA is also explored. Recent developments in genetic algorithm theory are shown to be particularly effective in a path planning problem domain, though problem areas can be cited which require more research.
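A toy version of two of the named operators, tournament selection and double (two-point) crossover, applied to a waypoint-visiting path; the waypoints and the duplicate-repair step are illustrative additions, not details from the paper:

```python
import random

random.seed(2)
pts = [(0, 0), (2, 1), (4, 0), (5, 3), (7, 2), (8, 5)]   # hypothetical waypoints

def path_len(order):
    """Euclidean length of the path visiting the waypoints in this order."""
    return sum(((pts[a][0] - pts[b][0]) ** 2 + (pts[a][1] - pts[b][1]) ** 2) ** 0.5
               for a, b in zip(order, order[1:]))

def tournament(pop, k=3):
    """Tournament selection: fittest (shortest path) of k random individuals."""
    return min(random.sample(pop, k), key=path_len)

def double_crossover(p1, p2):
    """Two-point ('double') crossover on the parent orderings."""
    i, j = sorted(random.sample(range(1, len(p1)), 2))
    return p1[:i] + p2[i:j] + p1[j:]

def repair(child):
    """Replace duplicated waypoints with the missing ones, keeping a permutation."""
    missing = [g for g in range(len(pts)) if g not in child]
    seen, out = set(), []
    for g in child:
        out.append(missing.pop() if g in seen else g)
        seen.add(g)
    return out

pop = [random.sample(range(len(pts)), len(pts)) for _ in range(30)]
for _ in range(200):
    pop = [repair(double_crossover(tournament(pop), tournament(pop)))
           for _ in range(30)]
best = min(pop, key=path_len)
print(best, round(path_len(best), 2))
```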
Diffusive, Supersonic X-ray Transport in Foam Cylinders
NASA Astrophysics Data System (ADS)
Back, Christina A.
1999-11-01
Diffusive supersonic radiation transport, in which the ratio of the diffusive radiation front velocity to the material sound speed exceeds 2, has been studied in a series of laboratory experiments on low density foams. This work is of interest for radiation transport in basic science and astrophysics. The Marshak radiation wave transport is studied for both low and high Z foam materials and for different length foams in a novel hohlraum geometry that allows direct comparisons with 2-dimensional analytic models and code simulations. The radiation wave is created by a ~ 80 eV near-blackbody 12-ns long drive or a ~ 200 eV 1.2-2.4 ns long drive generated by laser-heated Au hohlraums. The targets are SiO2 and Ta2O5 aerogel foams of varying lengths which span 10 to 50 mg/cc densities. Clean signatures of radiation breakout were observed by radially resolved face-on transmission measurements of the radiation flux at a photon energy of 250 eV or 550 eV. The high quality data provide new detailed information on the importance of both the fill and wall material opacities and heat capacities in determining the radiation front speed and curvature.
2013-01-01
Background Resistance to radiation treatment remains a major clinical problem for patients with brain cancer. Medulloblastoma is the most common malignant brain tumor of childhood, and occurs in the cerebellum. Though radiation treatment has been critical in increasing survival rates in recent decades, the presence of resistant cells in a substantial number of medulloblastoma patients leads to relapse and death. Methods Using the established medulloblastoma cell lines UW228 and Daoy, we developed a novel model system to enrich for and study radiation tolerant cells early after radiation exposure. Using fluorescence-activated cell sorting, dead cells and cells that had initiated apoptosis were removed, allowing surviving cells to be investigated before extensive proliferation took place. Results Isolated surviving cells were tumorigenic in vivo and displayed elevated levels of ABCG2, an ABC transporter linked to stem cell behavior and drug resistance. Further investigation showed another family member, ABCA1, was also elevated in surviving cells in these lines, as well as in early passage cultures from pediatric medulloblastoma patients. We discovered that the multi-ABC transporter inhibitors verapamil and reserpine sensitized cells from particular patients to radiation, suggesting that ABC transporters have a functional role in cellular radiation protection. Additionally, verapamil had an intrinsic anti-proliferative effect, with transient exposure in vitro slowing subsequent in vivo tumor formation. When expression of key ABC transporter genes was assessed in medulloblastoma tissue from 34 patients, levels were frequently elevated compared with normal cerebellum. Analysis of microarray data from independent cohorts (n = 428 patients) showed expression of a number of ABC transporters to be strongly correlated with certain medulloblastoma subtypes, which in turn are associated with clinical outcome. Conclusions ABC transporter inhibitors are already being trialed clinically, with the aim of decreasing chemotherapy resistance. Our findings suggest that the inhibition of ABC transporters could also increase the efficacy of radiation treatment for medulloblastoma patients. Additionally, the finding that certain family members are associated with particular molecular subtypes (most notably high ABCA8 and ABCB4 expression in Sonic Hedgehog pathway driven tumors), along with cell membrane location, suggests ABC transporters are worthy of consideration for the diagnostic classification of medulloblastoma. PMID:24219920
An Algorithm for the Transport of Anisotropic Neutrons
NASA Technical Reports Server (NTRS)
Tweed, J.
2005-01-01
One major obstacle to human space exploration is the possible limitations imposed by the adverse effects of long-term exposure to the space environment. Even before human spaceflight began, the potentially brief exposure of astronauts to the very intense random solar particle events (SPE) was of great concern. A new challenge appears in deep space exploration from exposure to the low-intensity heavy-ion flux of the galactic cosmic rays (GCR), since the missions are of long duration and the accumulated GCR exposures can be high. Because cancer induction rates increase behind low to rather large thicknesses of aluminum shielding, according to available biological data on mammalian exposures to GCR-like ions, the shield requirements for a Mars mission are prohibitively expensive in terms of mission launch costs. Therefore, a critical issue in the Human Exploration and Development of Space enterprise is cost-effective mitigation of risk associated with ionizing radiation exposure. In order to estimate astronaut risk from GCR exposure and associated cancer risks and health hazards, it is necessary to do shield material studies. To determine an optimum radiation shield material it is necessary to understand nuclear interaction processes such as fragmentation and secondary particle production, which are functions of energy-dependent cross sections. This requires knowledge of material transmission characteristics, either through laboratory testing or improved theoretical modeling. Here ion beam transport theory is of importance in that testing of materials in the laboratory environment generated by particle accelerators is a necessary step in materials development and evaluation for space use. The approximations used in solving the Boltzmann transport equation for the space setting are often not sufficient for laboratory work, and those issues are a major emphasis of the present work.
SXR measurement and W transport survey using GEM tomographic system on WEST
NASA Astrophysics Data System (ADS)
Mazon, D.; Jardin, A.; Malard, P.; Chernyshova, M.; Coston, C.; O'Mullane, M.; Czarski, T.; Malinowski, K.; Faisse, F.; Ferlay, F.; Verger, J. M.; Bec, A.; Larroque, S.; Kasprowicz, G.; Wojenski, A.; Pozniak, K.
2017-11-01
Measuring Soft X-Ray (SXR) radiation (0.1-20 keV) of fusion plasmas is a standard way of accessing valuable information on particle transport. Since heavy impurities like tungsten (W) could degrade plasma core performance and cause radiative collapses, it is necessary to develop new diagnostics able to monitor the impurity distribution in harsh fusion environments like ITER. A gaseous detector with energy discrimination would be a very good candidate for this purpose. The design and implementation of a new SXR diagnostic developed for the WEST project, based on a triple Gas Electron Multiplier (GEM) detector, is presented. This detector works in photon counting mode and provides energy discrimination capabilities. The SXR system is composed of two 1D cameras (vertical and horizontal views, respectively), located in the same poloidal cross-section to allow for tomographic reconstruction. An array (20 cm × 2 cm) consists of up to 128 detectors in front of a beryllium pinhole (equipped with a 1 mm diameter diaphragm) inserted at about 50 cm depth inside a cooled thimble in order to retrieve a wide plasma view. Acquisition of the low energy spectrum is ensured by a helium buffer installed between the pinhole and the detector. Complementary water cooling systems are used to maintain a constant temperature (25 °C) inside the thimble. Finally, a real-time automatic extraction system has been developed to protect the diagnostic during baking phases or any unwanted overheating events. Preliminary simulations of plasma emissivity and W distribution have been performed for WEST using a recently developed synthetic diagnostic coupled to a tomographic algorithm based on the minimum Fisher information (MFI) inversion method. First GEM acquisitions are presented, as well as an estimation of transport effects in the presence of ICRH on the W density reconstruction capabilities of the GEM.
Silva, Leonardo W T; Barros, Vitor F; Silva, Sandro G
2014-08-18
In launching operations, Rocket Tracking Systems (RTS) process the trajectory data obtained by radar sensors. In order to improve functionality and maintenance, radars can be upgraded by replacing antennas with parabolic reflectors (PRs) with phased arrays (PAs). These arrays enable the electronic control of the radiation pattern by adjusting the signal supplied to each radiating element. However, in projects of phased array radars (PARs), the modeling of the problem is subject to various combinations of excitation signals producing a complex optimization problem. In this case, it is possible to calculate the problem solutions with optimization methods such as genetic algorithms (GAs). For this, the Genetic Algorithm with Maximum-Minimum Crossover (GA-MMC) method was developed to control the radiation pattern of PAs. The GA-MMC uses a reconfigurable algorithm with multiple objectives, differentiated coding and a new crossover genetic operator. This operator has a different approach from the conventional one, because it performs the crossover of the fittest individuals with the least fit individuals in order to enhance the genetic diversity. Thus, GA-MMC was successful in more than 90% of the tests for each application, increased the fitness of the final population by more than 20% and reduced the premature convergence.
NASA Astrophysics Data System (ADS)
Niu, Chun-Yang; Qi, Hong; Huang, Xing; Ruan, Li-Ming; Wang, Wei; Tan, He-Ping
2015-11-01
A hybrid least-square QR decomposition (LSQR)-particle swarm optimization (LSQR-PSO) algorithm was developed to estimate the three-dimensional (3D) temperature distributions and absorption coefficients simultaneously. The outgoing radiative intensities at the boundary surface of the absorbing media were simulated by the line-of-sight (LOS) method, which served as the input for the inverse analysis. The retrieval results showed that the 3D temperature distributions of the participating media with known radiative properties could be retrieved accurately using the LSQR algorithm, even with noisy data. For the participating media with unknown radiative properties, the 3D temperature distributions and absorption coefficients could be retrieved accurately using the LSQR-PSO algorithm even with measurement errors. It was also found that the temperature field could be estimated more accurately than the absorption coefficients. In order to gain insight into the effects on the accuracy of temperature distribution reconstruction, the selection of the detection direction and the angle between two detection directions was also analyzed. Project supported by the Major National Scientific Instruments and Equipment Development Special Foundation of China (Grant No. 51327803), the National Natural Science Foundation of China (Grant No. 51476043), and the Fund of Tianjin Key Laboratory of Civil Aircraft Airworthiness and Maintenance in Civil Aviation University of China.
Simulations of recoiling black holes: adaptive mesh refinement and radiative transfer
NASA Astrophysics Data System (ADS)
Meliani, Zakaria; Mizuno, Yosuke; Olivares, Hector; Porth, Oliver; Rezzolla, Luciano; Younsi, Ziri
2017-02-01
Context. In many astrophysical phenomena, and especially in those that involve the high-energy regimes that always accompany the astronomical phenomenology of black holes and neutron stars, physical conditions that are achieved are extreme in terms of speeds, temperatures, and gravitational fields. In such relativistic regimes, numerical calculations are the only tool to accurately model the dynamics of the flows and the transport of radiation in the accreting matter. Aims: We here continue our effort of modelling the behaviour of matter when it orbits or is accreted onto a generic black hole by developing a new numerical code that employs advanced techniques geared towards solving the equations of general-relativistic hydrodynamics. Methods: More specifically, the new code employs a number of high-resolution shock-capturing Riemann solvers and reconstruction algorithms, exploiting the enhanced accuracy and the reduced computational cost of adaptive mesh-refinement (AMR) techniques. In addition, the code makes use of sophisticated ray-tracing libraries that, coupled with general-relativistic radiation-transfer calculations, allow us to accurately compute the electromagnetic emissions from such accretion flows. Results: We validate the new code by presenting an extensive series of stationary accretion flows either in spherical or axial symmetry that are performed either in two or three spatial dimensions. In addition, we consider the highly nonlinear scenario of a recoiling black hole produced in the merger of a supermassive black-hole binary interacting with the surrounding circumbinary disc. In this way, we can present for the first time ray-traced images of the shocked fluid and the light curve resulting from consistent general-relativistic radiation-transport calculations from this process. Conclusions: The work presented here lays the ground for the development of a generic computational infrastructure employing AMR techniques to accurately and self-consistently calculate general-relativistic accretion flows onto compact objects. In addition to the accurate handling of the matter, we provide a self-consistent electromagnetic emission from these scenarios by solving the associated radiative-transfer problem. While magnetic fields are currently excluded from our analysis, the tools presented here can have a number of applications to study accretion flows onto black holes or neutron stars.
NASA Astrophysics Data System (ADS)
Leal, Allan M. M.; Kulik, Dmitrii A.; Kosakowski, Georg
2016-02-01
We present a numerical method for multiphase chemical equilibrium calculations based on a Gibbs energy minimization approach. The method can accurately and efficiently determine the stable phase assemblage at equilibrium independently of the type of phases and species that constitute the chemical system. We have successfully applied our chemical equilibrium algorithm in reactive transport simulations to demonstrate its effective use in computationally intensive applications. We used FEniCS to solve the governing partial differential equations of mass transport in porous media using finite element methods in unstructured meshes. Our equilibrium calculations were benchmarked with GEMS3K, the numerical kernel of the geochemical package GEMS. This allowed us to compare our results with a well-established Gibbs energy minimization algorithm, as well as their performance on every mesh node, at every time step of the transport simulation. The benchmark shows that our novel chemical equilibrium algorithm is accurate, robust, and efficient for reactive transport applications, and it is an improvement over the Gibbs energy minimization algorithm used in GEMS3K. The proposed chemical equilibrium method has been implemented in Reaktoro, a unified framework for modeling chemically reactive systems, which is now used as an alternative numerical kernel of GEMS.
Algorithms for constructing optimal paths and statistical analysis of passenger traffic
NASA Astrophysics Data System (ADS)
Trofimov, S. P.; Druzhinina, N. G.; Trofimova, O. G.
2018-01-01
Several existing information systems of urban passenger transport (UPT) are considered. The authors' UPT network model is presented. A new service is offered to passengers: the best path from one stop to another at a specified time. The algorithm and software implementation for finding the optimal path are presented. The algorithm uses the current UPT schedule. The article also describes an algorithm for statistical analysis of trip payments made with electronic E-cards. The algorithm yields the density of passenger traffic during the day. This density is independent of the network topology and UPT schedules. The resulting traffic flow density can be used to solve a number of practical problems, in particular forecasting the overcrowding of passenger transport during rush hours, quantitatively comparing different transport network topologies, and constructing the best UPT timetable. The efficiency of the proposed integrated approach is demonstrated with the example of a model town of arbitrary dimensions.
Spatiotemporal observation of transport in fractured rocks
NASA Astrophysics Data System (ADS)
Kulenkampff, Johannes; Enzmann, Frieder; Gründig, Marion; Mittmann, Hellmuth; Wolf, Martin
2010-05-01
A number of injection experiments in different rock types have been conducted with positron-emission process tomography using a high-resolution "small-animal" PET scanner (ClearPET by Raytest, Straubenhardt) for the monitoring of transport processes. The fluids are labelled with positron-emitting isotopes, e.g. 18F- or 124I-, or dissolvable complexes like K3[58Co(CN)6], without affecting their physico-chemical properties. The annihilation radiation from individual decaying tracer atoms is detected with high sensitivity, and the tomographic reconstruction of the recorded events yields quantitative 3D images of the tracer distribution. Sequential tomograms during and after tracer injection are used for the spatiotemporal observation of the fluid transport. Raw data are corrected with respect to background radiation (randoms) and Compton scattering, which turns out to be much more significant in rocks than in common biomedical applications. Although in principle these effects are exactly known, we developed and apply simplified and fast correction methods. Deficiencies of these correction algorithms generate some artefacts, which set the lower limit of detectable tracer concentration at about 1 kBq/µl, or about 10^7 atoms/µl, still exceeding the sensitivity of other methods (e.g. NMR or resistivity tomography) by many orders of magnitude. New 3D visualizations of the process tomograms in fractured rocks show strongly localized and complex flow paths and, in parts, unexpected deviations from the fracture structures as deduced from µCT images. Such results demonstrate the potential for large discrepancies between µCT-derived parameters, like pore volume and specific surface area, and the hydraulically effective parameters derived by means of PET process tomography. We conclude that such discrepancies, and the complexity of transport processes in natural heterogeneous porous media, illustrate the limits of parameter determination methods based on model simulations over structural pore-space models, in particular as long as the simulations are not verified by experimental data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
St Aubin, J., E-mail: joel.st.aubin@albertahealthservices.ca; Keyvanloo, A.; Fallone, B. G.
Purpose: The advent of magnetic resonance imaging (MRI) guided radiotherapy systems demands the incorporation of the magnetic field into dose calculation algorithms of treatment planning systems. This is due to the fact that the Lorentz force of the magnetic field perturbs the path of the relativistic electrons, hence altering the dose deposited by them. Building on the previous work, the authors have developed a discontinuous finite element space-angle treatment of the linear Boltzmann transport equation to accurately account for the effects of magnetic fields on radiotherapy doses. Methods: The authors present a detailed description of their new formalism and compare its accuracy to GEANT4 Monte Carlo calculations for magnetic fields parallel and perpendicular to the radiation beam at field strengths of 0.5 and 3 T for an inhomogeneous 3D slab geometry phantom comprising water, bone, and air or lung. The accuracy of the authors' new formalism was determined using a gamma analysis with a 2%/2 mm criterion. Results: Greater than 98.9% of all points analyzed passed the 2%/2 mm gamma criterion for the field strengths and orientations tested. The authors have benchmarked their new formalism against Monte Carlo in a challenging radiation transport problem with a high density material (bone) directly adjacent to a very low density material (dry air at STP) where the effects of the magnetic field dominate collisions. Conclusions: A discontinuous finite element space-angle approach has been proven to be an accurate method for solving the linear Boltzmann transport equation with magnetic fields for cases relevant to MRI guided radiotherapy. The authors have validated the accuracy of this novel technique against GEANT4, even in cases of strong magnetic field strengths and low density air.
Transport calculations and accelerator experiments needed for radiation risk assessment in space.
Sihver, Lembit
2008-01-01
The major uncertainties in space radiation risk estimates for humans are associated with the poor knowledge of the biological effects of low and high LET radiation, with a smaller contribution coming from the characterization of the space radiation field and its primary interactions with the shielding and the human body. However, to decrease the uncertainties on the biological effects and increase the accuracy of the risk coefficients for charged particle radiation, the initial charged-particle spectra from the Galactic Cosmic Rays (GCRs) and the Solar Particle Events (SPEs), and the radiation transport through the shielding material of the space vehicle and the human body, must be better estimated. Since it is practically impossible to measure all primary and secondary particles from all possible position-projectile-target-energy combinations needed for a correct risk assessment in space, accurate particle and heavy ion transport codes must be used. These codes are also needed when estimating the risk for radiation induced failures in advanced microelectronics, such as single-event effects, etc., and the efficiency of different shielding materials. It is therefore important that the models and transport codes be carefully benchmarked and validated to make sure they fulfill preset accuracy criteria, e.g. to be able to predict particle fluence, dose and energy distributions within a certain accuracy. When validating the accuracy of the transport codes, both space and ground based accelerator experiments are needed. The efficiency of passive shielding and protection of electronic devices should also be tested in accelerator experiments and compared to simulations using different transport codes. In this paper different multipurpose particle and heavy ion transport codes are presented, different concepts of shielding and protection discussed, as well as future accelerator experiments needed for testing and validating codes and shielding materials.
NASA Astrophysics Data System (ADS)
Chen, Guangye; Chacón, Luis; CoCoMans Team
2014-10-01
For decades, the Vlasov-Darwin model has been recognized to be attractive for PIC simulations (to avoid radiative noise issues) in non-radiative electromagnetic regimes. However, the Darwin model results in elliptic field equations that render explicit time integration unconditionally unstable. Improving on linearly implicit schemes, fully implicit PIC algorithms for both electrostatic and electromagnetic regimes, with exact discrete energy and charge conservation properties, have recently been developed in 1D. This study builds on these recent algorithms to develop an implicit, orbit-averaged, time-space-centered finite difference scheme for the particle-field equations in multiple dimensions. The algorithm conserves energy, charge, and canonical momentum exactly, even with grid packing. A simple fluid preconditioner allows efficient use of large timesteps, O(√(m_i/m_e) c/v_Te) larger than the explicit CFL limit. We demonstrate the accuracy and efficiency properties of the algorithm with various numerical experiments in 2D3V.
Testing block subdivision algorithms on block designs
NASA Astrophysics Data System (ADS)
Wiseman, Natalie; Patterson, Zachary
2016-01-01
Integrated land use-transportation models predict future transportation demand taking into account how households and firms arrange themselves partly as a function of the transportation system. Recent integrated models require parcels as inputs and produce household and employment predictions at the parcel scale. Block subdivision algorithms automatically generate parcel patterns within blocks. Evaluating block subdivision algorithms is done by way of generating parcels and comparing them to those in a parcel database. Three block subdivision algorithms are evaluated on how closely they reproduce parcels of different block types found in a parcel database from Montreal, Canada. While the authors who developed each of the algorithms have evaluated them, they have used their own metrics and block types to evaluate their own algorithms. This makes it difficult to compare their strengths and weaknesses. The contribution of this paper is in resolving this difficulty with the aim of finding a better algorithm suited to subdividing each block type. The proposed hypothesis is that given the different approaches that block subdivision algorithms take, it's likely that different algorithms are better adapted to subdividing different block types. To test this, a standardized block type classification is used that consists of mutually exclusive and comprehensive categories. A statistical method is used for finding a better algorithm and the probability it will perform well for a given block type. Results suggest the oriented bounding box algorithm performs better for warped non-uniform sites, as well as gridiron and fragmented uniform sites. It also produces more similar parcel areas and widths. The Generalized Parcel Divider 1 algorithm performs better for gridiron non-uniform sites. The Straight Skeleton algorithm performs better for loop and lollipop networks as well as fragmented non-uniform and warped uniform sites. It also produces more similar parcel shapes and patterns.
Advanced Clinical Decision Support for Transport of the Critically Ill Patient
2013-12-01
Two algorithms, status asthmaticus and status epilepticus, are to "go live" for use on pediatric critical care transport by the end of October (Appendices 5 and 6). Plans include validation testing with other transport teams. (Status Asthmaticus Clinical Practice Guideline and Status Epilepticus Clinical Practice Guideline: flowchart content not recoverable from the source.)
Anthology of the Development of Radiation Transport Tools as Applied to Single Event Effects
NASA Astrophysics Data System (ADS)
Reed, R. A.; Weller, R. A.; Akkerman, A.; Barak, J.; Culpepper, W.; Duzellier, S.; Foster, C.; Gaillardin, M.; Hubert, G.; Jordan, T.; Jun, I.; Koontz, S.; Lei, F.; McNulty, P.; Mendenhall, M. H.; Murat, M.; Nieminen, P.; O'Neill, P.; Raine, M.; Reddell, B.; Saigné, F.; Santin, G.; Sihver, L.; Tang, H. H. K.; Truscott, P. R.; Wrobel, F.
2013-06-01
This anthology contains contributions from eleven different groups, each developing and/or applying Monte Carlo-based radiation transport tools to simulate a variety of effects that result from energy transferred to a semiconductor material by a single particle event. The topics span from basic mechanisms for single-particle induced failures to applied tasks like developing websites to predict on-orbit single event failure rates using Monte Carlo radiation transport tools.
Studies of the net surface radiative flux from satellite radiances during FIFE
NASA Technical Reports Server (NTRS)
Frouin, Robert
1993-01-01
Studies of the net surface radiative flux from satellite radiances during First ISLSCP Field Experiment (FIFE) are presented. Topics covered include: radiative transfer model validation; calibration of VISSR and AVHRR solar channels; development and refinement of algorithms to estimate downward solar and terrestrial irradiances at the surface, including photosynthetically available radiation (PAR) and surface albedo; verification of these algorithms using in situ measurements; production of maps of shortwave irradiance, surface albedo, and related products; analysis of the temporal variability of shortwave irradiance over the FIFE site; development of a spectroscopy technique to estimate atmospheric total water vapor amount; and study of optimum linear combinations of visible and near-infrared reflectances for estimating the fraction of PAR absorbed by plants.
Radiation transport calculations for cosmic radiation.
Endo, A; Sato, T
2012-01-01
The radiation environment inside and near spacecraft consists of various components of primary radiation in space and secondary radiation produced by the interaction of the primary radiation with the walls and equipment of the spacecraft. Radiation fields inside astronauts are different from those outside them, because of the body's self-shielding as well as the nuclear fragmentation reactions occurring in the human body. Several computer codes have been developed to simulate the physical processes of the coupled transport of protons, high-charge and high-energy nuclei, and the secondary radiation produced in atomic and nuclear collision processes in matter. These computer codes have been used in various space radiation protection applications: shielding design for spacecraft and planetary habitats, simulation of instrument and detector responses, analysis of absorbed doses and quality factors in organs and tissues, and study of biological effects. This paper focuses on the methods and computer codes used for radiation transport calculations on cosmic radiation, and their application to the analysis of radiation fields inside spacecraft, evaluation of organ doses in the human body, and calculation of dose conversion coefficients using the reference phantoms defined in ICRP Publication 110.
Far-field radiation patterns of aperture antennas by the Winograd Fourier transform algorithm
NASA Technical Reports Server (NTRS)
Heisler, R.
1978-01-01
A more time-efficient algorithm for computing the discrete Fourier transform, the Winograd Fourier transform (WFT), is described. The WFT algorithm is compared with other transform algorithms. Results indicate that the WFT algorithm is applied very successfully in antenna analysis. Significant savings in CPU time improve computer turnaround time and circumvent the need to resort to weekend runs.
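The WFT itself rests on nested short-length DFT modules, but the computation it accelerates is the standard one: in the Fraunhofer approximation, the far-field pattern of an aperture is the 2D Fourier transform of its illumination. A hedged FFT-based sketch (grid size and aperture shape invented for illustration):

```python
import numpy as np

N = 256                                    # samples per side of the aperture plane
aperture = np.zeros((N, N))
aperture[96:160, 112:144] = 1.0            # uniformly illuminated rectangular aperture

# Far field ~ 2D DFT of the aperture distribution; zero-padding refines the
# angular sampling of the computed pattern.
far_field = np.fft.fftshift(np.fft.fft2(aperture, s=(4 * N, 4 * N)))
pattern_db = 20 * np.log10(np.abs(far_field) / np.abs(far_field).max() + 1e-12)
```

The paper's point is that substituting the WFT for the FFT in this step trades multiplications for additions, the multiplications being the expensive operation on the hardware of the time.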
SU-E-T-577: Commissioning of a Deterministic Algorithm for External Photon Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, T; Finlay, J; Mesina, C
Purpose: We report commissioning results for a deterministic algorithm for external photon beam treatment planning. A deterministic algorithm solves the radiation transport equations directly using a finite difference method, thus improving the accuracy of dose calculation, particularly under heterogeneous conditions, with results similar to those of Monte Carlo (MC) simulation. Methods: Commissioning data for photon energies 6-15 MV include the percentage depth dose (PDD) measured at SSD = 90 cm and the output ratio in water (Spc), both normalized to 10 cm depth, for field sizes between 2 and 40 cm and depths between 0 and 40 cm. The off-axis ratio (OAR) for the same set of field sizes was used at 5 depths (dmax, 5, 10, 20, 30 cm). The final model was compared with the commissioning data as well as additional benchmark data. The benchmark data include dose per MU determined for 17 points with SSD between 80 and 110 cm, depth between 5 and 20 cm, and lateral offset of up to 16.5 cm. Relative comparisons were made in a heterogeneous phantom made of cork and solid water. Results: Compared to the commissioning beam data, the agreement is generally better than 2%, with larger errors (up to 13%) observed in the buildup regions of the PDD and penumbra regions of the OAR profiles. The overall mean standard deviation is 0.04% when all data are taken into account. Compared to the benchmark data, the agreement is generally better than 2%. Relative comparison in the heterogeneous phantom is in general better than 4%. Conclusion: A commercial deterministic algorithm was commissioned for megavoltage photon beams. In a homogeneous medium, the agreement between the algorithm and measurement at the benchmark points is generally better than 2%. The dose accuracy of a deterministic algorithm is better than that of a convolution algorithm in heterogeneous media.
Makeev, Andrey; Clajus, Martin; Snyder, Scott; Wang, Xiaolang; Glick, Stephen J.
2015-01-01
Semiconductor photon-counting detectors based on high atomic number, high density materials [cadmium zinc telluride (CZT)/cadmium telluride (CdTe)] for x-ray computed tomography (CT) provide advantages over conventional energy-integrating detectors, including reduced electronic and Swank noise, wider dynamic range, capability of spectral CT, and improved signal-to-noise ratio. Certain CT applications require high spatial resolution. In breast CT, for example, visualization of microcalcifications and assessment of tumor microvasculature after contrast enhancement require resolution on the order of 100 μm. A straightforward approach to increasing spatial resolution of pixellated CZT-based radiation detectors by merely decreasing the pixel size leads to two problems: (1) fabricating circuitry with small pixels becomes costly and (2) inter-pixel charge spreading can obviate any improvement in spatial resolution. We have used computer simulations to investigate position estimation algorithms that utilize charge sharing to achieve subpixel position resolution. To study these algorithms, we model a simple detector geometry with a 5×5 array of 200 μm pixels, and use a conditional probability function to model charge transport in CZT. We used COMSOL finite element method software to map the distribution of charge pulses and the Monte Carlo package PENELOPE for simulating fluorescent radiation. Performance of two x-ray interaction position estimation algorithms was evaluated: the method of maximum-likelihood estimation and a fast, practical algorithm that can be implemented in a readout application-specific integrated circuit and allows for identification of a quadrant of the pixel in which the interaction occurred. Both methods demonstrate good subpixel resolution; however, their actual efficiency is limited by the presence of fluorescent K-escape photons. Current experimental breast CT systems typically use detectors with a pixel size of 194 μm, with 2×2 binning during the acquisition giving an effective pixel size of 388 μm. Thus, it would be expected that the position estimate accuracy reported in this study would improve detection and visualization of microcalcifications as compared to that with conventional detectors. PMID:26158095
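A toy version of the fast quadrant rule (the thresholds, noise handling, and calibration of a real readout ASIC are omitted; this only shows the comparison logic implied by charge sharing):

```python
def hit_quadrant(q_left, q_right, q_top, q_bottom):
    """Guess the interaction quadrant inside a pixel from neighbour charges.

    More charge spilling into the right neighbour than the left suggests the
    interaction occurred in the right half of the pixel, and likewise for the
    top/bottom pair: a sketch of the idea, not the actual decision rule.
    """
    horiz = "right" if q_right > q_left else "left"
    vert = "top" if q_top > q_bottom else "bottom"
    return vert + "-" + horiz

print(hit_quadrant(q_left=0.02, q_right=0.11, q_top=0.01, q_bottom=0.07))
# -> 'bottom-right'
```

A maximum-likelihood estimator refines this by comparing the full measured charge pattern against an expected charge-spread model for each candidate sub-pixel position, at correspondingly higher computational cost.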
Magnetic field effects on the energy deposition spectra of MV photon radiation.
Kirkby, C; Stanescu, T; Fallone, B G
2009-01-21
Several groups worldwide have proposed various concepts for improving megavoltage (MV) radiotherapy that involve irradiating patients in the presence of a magnetic field, either for image guidance in the case of hybrid radiotherapy-MRI machines or for purposes of introducing tighter control over dose distributions. The presence of a magnetic field alters the trajectory of charged particles between interactions with the medium and thus has the potential to alter energy deposition patterns within a sub-cellular target volume. In this work, we use the MC radiation transport code PENELOPE, with appropriate algorithms invoked to incorporate magnetic field deflections, to investigate electron energy fluence in the presence of a uniform magnetic field and the energy deposition spectra within a 10 µm water sphere as a function of magnetic field strength. The simulations suggest only very minor changes to the electron fluence, even for extremely strong magnetic fields. Further, calculations of the dose-averaged lineal energy indicate that a magnetic field strength of at least 70 T is required before beam quality changes by more than 2%.
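The scale of the effect can be seen from the relativistic gyroradius r = pc/(ceB); a back-of-the-envelope check (illustrative energies, standard constants):

```python
import math

def electron_gyroradius_m(kinetic_mev, b_tesla):
    """Relativistic gyroradius r = pc / (c e B), momentum transverse to B."""
    mec2 = 0.511                                        # electron rest energy, MeV
    pc_mev = math.sqrt((kinetic_mev + mec2) ** 2 - mec2 ** 2)
    p_si = pc_mev * 1.602e-13 / 2.998e8                 # momentum in kg m/s
    return p_si / (1.602e-19 * b_tesla)

# A 1 MeV secondary electron in a 1.5 T field gyrates with r of a few mm,
# i.e. hundreds of times the 10 micrometre target diameter, so its track
# across the target is essentially straight.
print(electron_gyroradius_m(1.0, 1.5))    # ~3.2e-3 m
```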
Radiative transfer calculated from a Markov chain formalism
NASA Technical Reports Server (NTRS)
Esposito, L. W.; House, L. L.
1978-01-01
The theory of Markov chains is used to formulate the radiative transport problem in a general way by modeling the successive interactions of a photon as a stochastic process. Under the minimal requirement that the stochastic process is a Markov chain, the determination of the diffuse reflection or transmission from a scattering atmosphere is equivalent to the solution of a system of linear equations. This treatment is mathematically equivalent to, and thus has many of the advantages of, Monte Carlo methods, but can be considerably more rapid than Monte Carlo algorithms for numerical calculations in particular applications. We have verified the speed and accuracy of this formalism for the standard problem of finding the intensity of scattered light from a homogeneous plane-parallel atmosphere with an arbitrary phase function for scattering. Accurate results over a wide range of parameters were obtained with computation times comparable to those of a standard 'doubling' routine. The generality of this formalism thus allows fast, direct solutions to problems that were previously soluble only by Monte Carlo methods. Some comparisons are made with respect to integral equation methods.
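A minimal numerical rendering of the idea, under toy assumptions of my own (a slab-discretized plane-parallel atmosphere, single-scattering albedo omega, and an isotropic up/down "phase function"): the scattering states form the transient part of an absorbing Markov chain, so the diffuse reflection probability comes from a linear solve rather than Monte Carlo sampling:

```python
import numpy as np

n, omega = 20, 0.9                 # number of slabs, single-scattering albedo
Q = np.zeros((n, n))               # transient (slab -> slab) transition matrix
for i in range(n):
    if i > 0:
        Q[i, i - 1] = omega / 2    # scatter toward the layer above
    if i < n - 1:
        Q[i, i + 1] = omega / 2    # scatter toward the layer below
R = np.zeros(n)
R[0] = omega / 2                   # from the top slab, scattering upward escapes

# x[i] = probability a photon interacting in slab i is eventually reflected:
# x = R + Q x, i.e. solve the linear system (I - Q) x = R.
reflect = np.linalg.solve(np.eye(n) - Q, R)
print(reflect[0])                  # reflection probability from the top slab
```

The same (I - Q) structure carries over to realistic phase functions and angular grids; only the transition probabilities change, which is why the method can compete with doubling routines in speed.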
A new scanning device in CT with dose reduction potential
NASA Astrophysics Data System (ADS)
Tischenko, Oleg; Xu, Yuan; Hoeschen, Christoph
2006-03-01
The amount of x-ray radiation currently applied in CT practice is not utilized optimally. A portion of radiation traversing the patient is either not detected at all or is used ineffectively. The reason lies partly in the reconstruction algorithms and partly in the geometry of the CT scanners designed specifically for these algorithms. In fact, the reconstruction methods widely used in CT are intended to invert the data that correspond to ideal straight lines. However, the collection of such data is often not accurate due to likely movement of the source/detector system of the scanner in the time interval during which all the detectors are read. In this paper, a new design of the scanner geometry is proposed that is immune to the movement of the CT system and will collect all radiation traversing the patient. The proposed scanning design has a potential to reduce the patient dose by a factor of two. Furthermore, it can be used with the existing reconstruction algorithm and it is particularly suitable for OPED, a new robust reconstruction algorithm.
A preliminary study to metaheuristic approach in multilayer radiation shielding optimization
NASA Astrophysics Data System (ADS)
Arif Sazali, Muhammad; Rashid, Nahrul Khair Alang Md; Hamzah, Khaidzir
2018-01-01
Metaheuristics are high-level algorithmic concepts that can be used to develop heuristic optimization algorithms. One of their applications is to find optimal or near-optimal solutions to combinatorial optimization problems (COPs) such as scheduling, vehicle routing, and timetabling. Combinatorial optimization deals with finding optimal combinations or permutations among a given set of problem components when exhaustive search is not feasible. A radiation shield made of several layers of different materials can be regarded as a COP. The time taken to optimize the shield may be too high when several parameters are involved, such as the number of materials, the thickness of layers, and the arrangement of materials. Metaheuristics can be applied to reduce the optimization time, trading guaranteed optimal solutions for near-optimal solutions found in a comparably short amount of time. Applications of metaheuristics to radiation shield optimization are so far lacking. In this paper, we present a review of the suitability of metaheuristics for multilayer shielding design, specifically the genetic algorithm and the ant colony optimization (ACO) algorithm. We also propose an optimization model based on the ACO method.
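A compact sketch of how ACO might drive the layer-arrangement search; the material list, the per-material scores, and especially the order-dependent dose function below are placeholders (in practice that evaluation would come from a transport code), so this shows only the pheromone mechanics:

```python
import random, math

materials = ["Pb", "Fe", "B4C", "HDPE"]          # hypothetical candidate layers
attn = [0.5, 0.7, 0.9, 1.1]                      # made-up per-material scores
n_layers, n_ants, rho = len(materials), 20, 0.1
tau = [[1.0] * len(materials) for _ in range(n_layers)]  # pheromone[layer][material]

def dose(order):
    """Stand-in, order-dependent figure of merit; a real design would call a
    transport calculation (e.g. MCNP or PHITS) for the layered slab."""
    return sum(attn[m] * (k + 1) * 0.1 for k, m in enumerate(order))

def construct():
    """One ant assembles a permutation of the materials, biased by pheromone."""
    remaining, order = list(range(len(materials))), []
    for layer in range(n_layers):
        weights = [tau[layer][j] for j in remaining]
        j = random.choices(remaining, weights=weights)[0]
        remaining.remove(j)
        order.append(j)
    return order

best, best_dose = None, math.inf
for _ in range(200):
    for order in (construct() for _ in range(n_ants)):
        d = dose(order)
        if d < best_dose:
            best, best_dose = order, d
    for layer in range(n_layers):                # evaporate, then reinforce the best
        tau[layer] = [t * (1 - rho) for t in tau[layer]]
        tau[layer][best[layer]] += 1.0 / best_dose
print([materials[m] for m in best], round(best_dose, 2))
```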
MESTRN: A Deterministic Meson-Muon Transport Code for Space Radiation
NASA Technical Reports Server (NTRS)
Blattnig, Steve R.; Norbury, John W.; Norman, Ryan B.; Wilson, John W.; Singleterry, Robert C., Jr.; Tripathi, Ram K.
2004-01-01
A safe and efficient exploration of space requires an understanding of space radiations, so that human life and sensitive equipment can be protected. On the way to these sensitive sites, the radiation fields are modified in both quality and quantity. Many of these modifications are thought to be due to the production of pions and muons in the interactions between the radiation and intervening matter. A method used to predict the effects of the presence of these particles on the transport of radiation through materials is developed. This method was then used to develop software, which was used to calculate the fluxes of pions and muons after the transport of a cosmic ray spectrum through aluminum and water. Software descriptions are given in the appendices.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nicolae, A; Department of Physics, Ryerson University, Toronto, ON; Lu, L
Purpose: A novel, automated, algorithm for permanent prostate brachytherapy (PPB) treatment planning has been developed. The novel approach uses machine-learning (ML), a form of artificial intelligence, to substantially decrease planning time while simultaneously retaining the clinical intuition of plans created by radiation oncologists. This study seeks to compare the ML algorithm against expert-planned PPB plans to evaluate the equivalency of dosimetric and clinical plan quality. Methods: Plan features were computed from historical high-quality PPB treatments (N = 100) and stored in a relational database (RDB). The ML algorithm matched new PPB features to a highly similar case in the RDB;more » this initial plan configuration was then further optimized using a stochastic search algorithm. PPB pre-plans (N = 30) generated using the ML algorithm were compared to plan variants created by an expert dosimetrist (RT), and radiation oncologist (MD). Planning time and pre-plan dosimetry were evaluated using a one-way Student’s t-test and ANOVA, respectively (significance level = 0.05). Clinical implant quality was evaluated by expert PPB radiation oncologists as part of a qualitative study. Results: Average planning time was 0.44 ± 0.42 min compared to 17.88 ± 8.76 min for the ML algorithm and RT, respectively, a significant advantage [t(9), p = 0.01]. A post-hoc ANOVA [F(2,87) = 6.59, p = 0.002] using Tukey-Kramer criteria showed a significantly lower mean prostate V150% for the ML plans (52.9%) compared to the RT (57.3%), and MD (56.2%) plans. Preliminary qualitative study results indicate comparable clinical implant quality between RT and ML plans with a trend towards preference for ML plans. Conclusion: PPB pre-treatment plans highly comparable to those of an expert radiation oncologist can be created using a novel ML planning model. The use of an ML-based planning approach is expected to translate into improved PPB accessibility and plan uniformity.« less
Herman, Gabor T; Chen, Wei
2008-03-01
The goal of Intensity-Modulated Radiation Therapy (IMRT) is to deliver sufficient doses to tumors to kill them, but without causing irreparable damage to critical organs. This requirement can be formulated as a linear feasibility problem. The sequential (i.e., iteratively treating the constraints one after another in a cyclic fashion) algorithm ART3 is known to find a solution to such problems in a finite number of steps, provided that the feasible region is full dimensional. We present a faster algorithm called ART3+. The idea of ART3+ is to avoid unnecessary checks on constraints that are likely to be satisfied. The superior performance of the new algorithm is demonstrated by mathematical experiments inspired by the IMRT application.
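The ART3 family is built on cyclic projections onto the violated constraints. The sketch below shows that underlying step for a system Ax <= b (the toy bounds are invented, and ART3+'s actual refinement, skipping constraints that are likely already satisfied, is deliberately left out):

```python
import numpy as np

def cyclic_projection(A, b, x0, max_sweeps=1000, tol=1e-9):
    """Seek x with A @ x <= b by sequentially projecting onto each violated
    half-space (the scheme underlying the ART3 family; ART3+'s
    constraint-skipping control logic is omitted in this sketch)."""
    x = np.asarray(x0, float).copy()
    norms2 = np.sum(A * A, axis=1)
    for _ in range(max_sweeps):
        worst = 0.0
        for a, bi, n2 in zip(A, b, norms2):
            viol = a @ x - bi
            if viol > tol:
                x -= (viol / n2) * a        # orthogonal projection onto a.x = bi
                worst = max(worst, viol)
        if worst <= tol:
            return x                        # every constraint satisfied
    return x

# Toy IMRT-style dose bounds: 1 <= x1 + x2 <= 2 with x1, x2 >= 0, as A x <= b
A = np.array([[-1., -1.], [1., 1.], [-1., 0.], [0., -1.]])
b = np.array([-1., 2., 0., 0.])
print(cyclic_projection(A, b, np.array([5., -3.])))
```

ART3+'s speedup comes precisely from not re-checking, on every sweep, constraints that have repeatedly tested as satisfied.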
A Car Transportation System in Cooperation by Multiple Mobile Robots for Each Wheel: iCART II
NASA Astrophysics Data System (ADS)
Kashiwazaki, Koshi; Yonezawa, Naoaki; Kosuge, Kazuhiro; Sugahara, Yusuke; Hirata, Yasuhisa; Endo, Mitsuru; Kanbayashi, Takashi; Shinozuka, Hiroyuki; Suzuki, Koki; Ono, Yuki
The authors proposed a car transportation system, iCART (intelligent Cooperative Autonomous Robot Transporters), for automation of mechanical parking systems by two mobile robots. However, it was difficult to downsize the mobile robot because its length had to be at least the wheelbase of a car. This paper proposes a new car transportation system, iCART II (iCART - type II), based on the “a-robot-for-a-wheel” concept. A prototype system, MRWheel (a Mobile Robot for a Wheel), is designed and downsized to less than half the size of the conventional robot. First, a method for lifting up a wheel by MRWheel is described. In general, it is very difficult for mobile robots such as MRWheel to move to desired positions without motion errors caused by slipping, etc. Therefore, we propose a motion error estimation algorithm for the followers based on the internal force applied to each follower, by extending a conventional leader-follower type decentralized control algorithm for cooperative object transportation. The proposed algorithm enables the followers to estimate their motion errors and enables the robots to transport a car to a desired position. In addition, we analyze and prove the stability and convergence of the resulting system with the proposed algorithm. In order to extract only the internal force from the force applied to each robot, we also propose a model-based external force compensation method. Finally, the proposed methods are applied to the car transportation system, and the experimental results confirm their validity.
Fuego/Scefire MPMD Coupling L2 Milestone Executive Summary
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pierce, Flint; Tencer, John; Pautz, Shawn D.
2017-09-01
This milestone campaign was focused on coupling the Sandia physics codes SIERRA low Mach module Fuego and the RAMSES Boltzmann transport code Sceptre (Scefire). Fuego enables simulation of low Mach, turbulent, reacting, particle-laden flows on unstructured meshes using CVFEM for abnormal thermal environments throughout SNL and the larger national security community. Sceptre provides simulation of photon, neutron, and charged particle transport on unstructured meshes using Discontinuous Galerkin methods for radiation effects calculations at SNL and elsewhere. Coupling these "best of breed" codes enables efficient modeling of thermal/fluid environments with radiation transport, including fires (pool, propellant, composite) as well as those with directed radiant fluxes. We seek to improve the experience of Fuego users who require radiation transport capabilities in two ways. The first is performance. We achieve this through leveraging additional computational resources for Scefire, reducing calculation times while leaving unaffected resources for fluid physics. This approach is new to Fuego, which previously utilized the same resources for both fluid and radiation solutions. The second improvement enables new radiation capabilities, including spectral (banded) radiation, beam boundary sources, and alternate radiation solvers (i.e., Pn). This summary provides an overview of these achievements.
Prompt Radiation Protection Factors
2018-02-01
Simulation of the prompt radiation was performed using the three-dimensional Monte Carlo radiation transport code MCNP (Monte Carlo N-Particle), together with the evaluation of the protection factors (ratio of dose in the open to ...). Concerns raised ... by detonation of a nuclear device have placed renewed emphasis on evaluation of the consequences in case of such an event. The Defense Threat ...
2009-07-05
Acronyms: PARMA - PHITS-based Analytical Radiation Model in the Atmosphere; PCAIRE - Predictive Code for Aircrew Radiation Exposure; PHITS - Particle and Heavy ... The transport code utilized is called PARMA (PHITS-based Analytical Radiation Model in the Atmosphere) [36]. The particle fluxes calculated from the input ... dose equivalent coefficients from the ICRP-60 regulations. As a result, the transport codes utilized by EXPACS (PHITS) and CARI-6 (PARMA ...
NASA Technical Reports Server (NTRS)
Knies, R. J.; Byrn, N. R.; Smith, H. T.
1972-01-01
A study program of radiation shielding against the deleterious effects of nuclear radiation on man and equipment is reported. The methods used to analyze the radiation environment from bremsstrahlung photons are discussed along with the methods employed by transport code users. The theory and numerical methods used to solve transport of neutrons and gammas are described, and the neutron and cosmic fluxes that would be present on the gamma-ray telescope were analyzed.
NASA Astrophysics Data System (ADS)
Laiti, Lavinia; Giovannini, Lorenzo; Zardi, Dino
2015-04-01
The accurate assessment of the solar radiation available at the Earth's surface is essential for a wide range of energy-related applications, such as the design of solar power plants, water heating systems and energy-efficient buildings, as well as in the fields of climatology, hydrology, ecology and agriculture. The characterization of solar radiation is particularly challenging in complex-orography areas, where topographic shadowing and altitude effects, together with local weather phenomena, greatly increase the spatial and temporal variability of such variable. At present, approaches ranging from surface measurements interpolation to orographic down-scaling of satellite data, to numerical model simulations are adopted for mapping solar radiation. In this contribution a high-resolution (200 m) solar atlas for the Trentino region (Italy) is presented, which was recently developed on the basis of hourly observations of global radiation collected from the local radiometric stations during the period 2004-2012. Monthly and annual climatological irradiation maps were obtained by the combined use of a GIS-based clear-sky model (r.sun module of GRASS GIS) and geostatistical interpolation techniques (kriging). Moreover, satellite radiation data derived by the MeteoSwiss HelioMont algorithm (2 km resolution) were used for missing-data reconstruction and for the final mapping, thus integrating ground-based and remote-sensing information. The results are compared with existing solar resource datasets, such as the PVGIS dataset, produced by the Joint Research Center Institute for Energy and Transport, and the HelioMont dataset, in order to evaluate the accuracy of the different datasets available for the region of interest.
NASA Astrophysics Data System (ADS)
Niu, X.; Yang, K.; Tang, W.; Qin, J.
2014-12-01
Surface Solar Radiation (SSR) plays an important role in hydrological and land process modeling; in particular, it contributes more than 90% of the total melt energy for ice melting on the Tibetan Plateau (TP). Neither surface measurements nor existing remote sensing products can meet this requirement over the TP. The well-known satellite products (i.e., ISCCP-FD and GEWEX-SRB) have relatively low spatial resolution (0.5°-2.5°) and temporal resolution (3-hourly, daily, or monthly). The objective of this study is to develop capabilities for improved estimates of SSR over the TP based on geostationary satellite observations from the Multi-functional Transport Satellite (MTSAT) with high spatial (0.05°) and temporal (hourly) resolution. An existing physical model, the UMD-SRB (University of Maryland Surface Radiation Budget) model, which is the basis of the GEWEX-SRB model, is revisited to improve SSR estimates over the TP. The UMD-SRB algorithm transforms TOA radiances into broadband albedos in order to infer atmospheric transmissivity, which finally determines the SSR. Specifically, the main updates introduced in this study are: implementation at 0.05° spatial resolution at hourly intervals, integrated to daily and monthly time scales; and improvement of the surface albedo model by introducing the recently developed Global Land Surface Broadband Albedo Product (GLASS) based on MODIS data. This updated inference scheme will be evaluated against ground observations from China Meteorological Administration (CMA) radiation stations and three TP radiation stations operated by the Institute of Tibetan Plateau Research.
A radiation and energy budget algorithm for forest canopies
NASA Astrophysics Data System (ADS)
Tunick, A.
2006-01-01
Previously, it was shown that a one-dimensional, physics-based (conservation-law) computer model can provide a useful mathematical representation of the wind flow, temperatures, and turbulence inside and above a uniform forest stand. A key element of this calculation was a radiation and energy budget algorithm (implemented to predict the heat source). However, to keep the earlier publication brief, a full description of the radiation and energy budget algorithm was not given. Hence, this paper presents our equation set for calculating the incoming total radiation at the canopy top as well as the transmission, reflection, absorption, and emission of the solar flux through a forest stand. In addition, example model output is presented from three interesting numerical experiments, which were conducted to simulate the canopy microclimate for a forest stand that borders the Blossom Point Field Test Facility (located near La Plata, Maryland along the Potomac River). It is anticipated that the current numerical study will be useful to researchers and experimental planners who will be collecting acoustic and meteorological data at the Blossom Point Facility in the near future.
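The abstract does not reproduce Tunick's equation set, but the core ingredient of any such canopy radiation budget is exponential (Beer-Lambert) attenuation of the solar flux with cumulative leaf area. The following minimal Python sketch illustrates only that ingredient; the extinction coefficient, reflectance, and layer structure are illustrative assumptions, not the paper's values.

```python
import numpy as np

def canopy_solar_profile(s_top, lai_total, k_ext=0.5, reflectance=0.1, n_layers=10):
    """Attenuate downwelling solar flux through a uniform canopy.

    Beer-Lambert attenuation with extinction coefficient k_ext per unit
    leaf area index (LAI); a fixed fraction of the intercepted flux is
    reflected and the rest absorbed. All parameter values are illustrative.
    """
    lai = np.linspace(0.0, lai_total, n_layers + 1)   # cumulative LAI from canopy top
    s_down = s_top * np.exp(-k_ext * lai)             # flux reaching each level
    intercepted = -np.diff(s_down)                    # flux removed in each layer
    absorbed = (1.0 - reflectance) * intercepted
    reflected = reflectance * intercepted
    return s_down, absorbed, reflected

# Example: 800 W/m^2 above a stand with total LAI of 4
levels, absorbed, reflected = canopy_solar_profile(800.0, 4.0)
print(f"flux at forest floor: {levels[-1]:.1f} W/m^2")
```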
Vectorization of transport and diffusion computations on the CDC Cyber 205
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abu-Shumays, I.K.
1986-01-01
The development and testing of alternative numerical methods and computational algorithms specifically designed for the vectorization of transport and diffusion computations on a Control Data Corporation (CDC) Cyber 205 vector computer are described. Two solution methods for the discrete ordinates approximation to the transport equation are summarized and compared. Factors of 4 to 7 reduction in run times for certain large transport problems were achieved on a Cyber 205 as compared with run times on a CDC-7600. The solution of tridiagonal systems of linear equations, central to several efficient numerical methods for multidimensional diffusion computations and essential for fluid flow and other physics and engineering problems, is also dealt with. Among the methods tested, a combined odd-even cyclic reduction and modified Cholesky factorization algorithm for solving linear symmetric positive definite tridiagonal systems is found to be the most effective for these systems on a Cyber 205. For large tridiagonal systems, computation with this algorithm is an order of magnitude faster on a Cyber 205 than computation with the best algorithm for tridiagonal systems on a CDC-7600.
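As a sketch of the odd-even cyclic reduction idea described above (the Cyber 205 implementation also combined it with a modified Cholesky factorization, which is not shown here), the following Python routine solves a tridiagonal system by repeatedly eliminating every other unknown; it assumes a system size of 2^k - 1 so each level halves cleanly.

```python
import numpy as np

def cyclic_reduction(a, b, c, d):
    """Solve tridiagonal T x = d by odd-even cyclic reduction.
    a: sub-diagonal (a[0] must be 0), b: diagonal, c: super-diagonal
    (c[-1] must be 0); length n = 2**k - 1. Each level eliminates the
    even-indexed unknowns, leaving a half-size tridiagonal system."""
    n = len(b)
    if n == 1:
        return d / b
    i = np.arange(1, n, 2)                   # unknowns kept at this level
    alpha, gamma = a[i] / b[i - 1], c[i] / b[i + 1]
    x = np.zeros(n)
    x[i] = cyclic_reduction(-alpha * a[i - 1],
                            b[i] - alpha * c[i - 1] - gamma * a[i + 1],
                            -gamma * c[i + 1],
                            d[i] - alpha * d[i - 1] - gamma * d[i + 1])
    j = np.arange(0, n, 2)                   # back-substitute eliminated unknowns
    x[j] = (d[j] - a[j] * x[np.maximum(j - 1, 0)]
                 - c[j] * x[np.minimum(j + 1, n - 1)]) / b[j]
    return x

n = 2**7 - 1                                 # a simple SPD test system
a = np.r_[0.0, -np.ones(n - 1)]
b = 2.1 * np.ones(n)
c = np.r_[-np.ones(n - 1), 0.0]
d = np.ones(n)
T = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print(np.allclose(cyclic_reduction(a, b, c, d), np.linalg.solve(T, d)))  # True
```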
Shoemaker, W C; Patil, R; Appel, P L; Kram, H B
1992-11-01
A generalized decision tree or clinical algorithm for treatment of high-risk elective surgical patients was developed from a physiologic model based on empirical data. First, a large data bank was used to do the following: (1) describe temporal hemodynamic and oxygen transport patterns that interrelate cardiac, pulmonary, and tissue perfusion functions in survivors and nonsurvivors; (2) define optimal therapeutic goals based on the supranormal oxygen transport values of high-risk postoperative survivors; (3) compare the relative effectiveness of alternative therapies in a wide variety of clinical and physiologic conditions; and (4) develop criteria for titration of therapy to the endpoints of the supranormal optimal goals using cardiac index (CI), oxygen delivery (DO2), and oxygen consumption (VO2) as proxy outcome measures. Second, a general purpose algorithm was generated from these data and tested in preoperatively randomized clinical trials of high-risk surgical patients. Improved outcome was demonstrated with this generalized algorithm. The concept that the supranormal values represent compensations that have survival value has been corroborated by several other groups. We now propose a unique approach to refine the generalized algorithm to develop customized algorithms and individualized decision analysis for each patient's unique problems. The present article describes a preliminary evaluation of the feasibility of artificial intelligence techniques to accomplish individualized algorithms that may further improve patient care and outcome.
Meson Production and Space Radiation
NASA Astrophysics Data System (ADS)
Norbury, John; Blattnig, Steve; Norman, Ryan; Aghara, Sukesh
Protecting astronauts from the harmful effects of space radiation is an important priority for long duration space flight. The National Council on Radiation Protection (NCRP) has recently recommended that pions and other mesons should be included in space radiation transport codes, especially in connection with the Martian atmosphere. In an interesting accident of nature, the galactic cosmic ray spectrum has its peak intensity near the pion production threshold. The Boltzmann transport equation is structured in such a way that particle production cross sections are multiplied by particle flux. Therefore, the peak of the incident flux of the galactic cosmic ray spectrum is more important than other regions of the spectrum and cross sections near the peak are enhanced. This happens with pion cross sections. The MCNPX Monte-Carlo transport code now has the capability of transporting heavy ions, and by using a galactic cosmic ray spectrum as input, recent work has shown that pions contribute about twenty percent of the dose from galactic cosmic rays behind a shield of 20 g/cm2 aluminum and 30 g/cm2 water. It is therefore important to include pion and other hadron production in transport codes designed for space radiation studies, such as HZETRN. The status of experimental hadron production data for energies relevant to space radiation will be reviewed, as well as the predictive capabilities of current theoretical hadron production cross section and space radiation transport models. Charged pions decay into muons and neutrinos, and neutral pions decay into photons. An electromagnetic cascade is produced as these particles build up in a material. The cascade and transport of pions, muons, electrons and photons will be discussed as they relate to space radiation. The importance of other hadrons, such as kaons, eta mesons and antiprotons, will be considered as well. Efficient methods for calculating cross sections for meson production in nucleon-nucleon and nucleus-nucleus reactions will be presented. The NCRP has also recommended that more attention should be paid to neutron and light ion transport. The coupling of neutrons, light ions, mesons and other hadrons will be discussed.
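The quoted pion production threshold of about 280 MeV follows directly from relativistic kinematics for p + p → p + p + π⁰ on a fixed target; a quick numeric check (rest energies from standard particle data):

```python
# Threshold kinetic energy for p + p -> p + p + pi0 on a fixed target:
#   T_th = [ (sum of final rest energies)^2 - (m_proj + m_targ)^2 ] / (2 m_targ)
m_p, m_pi0 = 938.272, 134.977           # rest energies in MeV
s_min = (2 * m_p + m_pi0) ** 2           # minimum invariant mass squared (MeV^2)
t_th = (s_min - (2 * m_p) ** 2) / (2 * m_p)
print(f"pi0 production threshold: {t_th:.0f} MeV")   # ~280 MeV
```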
NASA Technical Reports Server (NTRS)
Suttles, John T.; Wielicki, Bruce A.; Vemury, Sastri
1992-01-01
The ERBE algorithm is applied to the Nimbus-7 earth radiation budget (ERB) scanner data for June 1979 to analyze the performance of an inversion method in deriving top-of-atmosphere albedos and longwave radiative fluxes. The performance is assessed by comparing ERBE algorithm results with appropriate results derived using the sorting-by-angular-bins (SAB) method, the ERB MATRIX algorithm, and the 'new-cloud ERB' (NCLE) algorithm. Comparisons are made for top-of-atmosphere albedos, longwave fluxes, viewing zenith-angle dependence of derived albedos and longwave fluxes, and cloud fractional coverage. Using the SAB method as a reference, the rms accuracy of monthly average ERBE-derived results is estimated to be 0.0165 (5.6 W/sq m) for albedos (shortwave fluxes) and 3.0 W/sq m for longwave fluxes. The ERBE-derived results were found to depend systematically on the viewing zenith angle, varying from near nadir to near the limb by about 10 percent for albedos and by 6-7 percent for longwave fluxes. Analyses indicated that the ERBE angular models are the most likely source of the systematic angular dependences. Comparison of the ERBE-derived cloud fractions, based on a maximum-likelihood estimation method, with results from the NCLE showed agreement within about 10 percent.
Design of laboratory experiments to study radiation-driven implosions
Keiter, P. A.; Trantham, M.; Malamud, G.; ...
2017-02-03
The interstellar medium is heterogeneous, with dense clouds amid an ambient medium. Radiation from young OB stars asymmetrically irradiates the dense clouds. Bertoldi (1989) developed analytic formulae to describe possible outcomes of these clouds when irradiated by hot, young stars. One of the critical parameters that determines the cloud's fate is the number of photon mean free paths in the cloud. For the extreme cases where the cloud size is either much greater than or much less than one mean free path, the radiation transport should be well understood. However, as one transitions between these limits, the radiation transport is much more complex and is a challenge to solve with many of the current radiation transport models implemented in codes. In this paper, we present the design of laboratory experiments that use a thermal source of x-rays to asymmetrically irradiate a low-density plastic foam sphere. The experiment will vary the density, and hence the number of mean free paths, of the sphere to study the radiation transport in different regimes. Finally, we have developed dimensionless parameters to relate the laboratory experiment to the astrophysical system, and we show that we can perform the experiment in the same transport regime.
CUDA-based high-performance computing of the S-BPF algorithm with no-waiting pipelining
NASA Astrophysics Data System (ADS)
Deng, Lin; Yan, Bin; Chang, Qingmei; Han, Yu; Zhang, Xiang; Xi, Xiaoqi; Li, Lei
2015-10-01
The backprojection-filtration (BPF) algorithm has become a good solution for local reconstruction in cone-beam computed tomography (CBCT). However, the reconstruction speed of BPF is a severe limitation for clinical applications. The selective-backprojection filtration (S-BPF) algorithm is developed to improve the parallel performance of BPF by selective backprojection. Furthermore, the general-purpose graphics processing unit (GP-GPU) is a popular tool for accelerating the reconstruction, and much work has aimed at optimizing the cone-beam back-projection. As the cone-beam back-projection becomes faster, data transport accounts for a much larger share of the reconstruction time than before. This paper focuses on minimizing the total reconstruction time of the S-BPF algorithm by hiding the data transport among hard disk, CPU and GPU. Based on an analysis of the S-BPF algorithm, several strategies are implemented: (1) asynchronous calls are used to overlap CPU and GPU execution, (2) an innovative strategy is applied to obtain the DBP image while effectively hiding the transport time, and (3) two streams for data transport and computation are synchronized with cudaEvent calls in the inverse finite Hilbert transform on the GPU. Our main contribution is a reconstruction scheme for the S-BPF algorithm in which the GPU computes continuously with no data-transport time cost: a 512³ volume is reconstructed in less than 0.7 s on a single Tesla K20 GPU from 182 projection views of 512² pixels each. The time cost of our implementation is about half of that without the overlap behavior.
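A language-agnostic way to picture the overlap strategy is a double-buffered pipeline in which the next projection is prefetched while the current one is back-projected. The sketch below uses a Python thread pool as a stand-in for CUDA streams and asynchronous copies; `load_projection` and `backproject` are hypothetical stubs, not the paper's routines.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def load_projection(view):              # stands in for disk -> host -> GPU transfer
    return np.random.rand(512, 512)

def backproject(data):                  # stands in for the GPU back-projection kernel
    return float(data.sum())

def pipelined_reconstruction(n_views=182):
    """Double-buffered pipeline: while view i is being back-projected,
    view i+1 is already in flight, so transfer latency is hidden."""
    total = 0.0
    with ThreadPoolExecutor(max_workers=1) as io:
        pending = io.submit(load_projection, 0)
        for view in range(n_views):
            data = pending.result()      # wait only if the prefetch lags
            if view + 1 < n_views:
                pending = io.submit(load_projection, view + 1)
            total += backproject(data)   # compute overlaps the next transfer
    return total

print(f"accumulated backprojection value: {pipelined_reconstruction():.1f}")
```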
Shielding from space radiations
NASA Technical Reports Server (NTRS)
Chang, C. Ken; Badavi, Forooz F.; Tripathi, Ram K.
1993-01-01
This Progress Report, covering the period of December 1, 1992 to June 1, 1993, presents the development of an analytical solution to the heavy ion transport equation in terms of a Green's function formalism. The results of the mathematical development are recast into a highly efficient computer code for space applications. The efficiency of this algorithm is accomplished by a nonperturbative technique of extending the Green's function over the solution domain. The code may also be applied to accelerator boundary conditions to allow code validation in laboratory experiments. Results from the isotopic version of the code, with 59 isotopes present, are presented for a single-layer target material for the case of an iron beam projectile at 600 MeV/nucleon in water. A listing of the single-layer isotopic version of the code is included.
Electromagnetic Chirps from Neutron Star–Black Hole Mergers
NASA Astrophysics Data System (ADS)
Schnittman, Jeremy D.; Dal Canton, Tito; Camp, Jordan; Tsang, David; Kelly, Bernard J.
2018-02-01
We calculate the electromagnetic signal of a gamma-ray flare coming from the surface of a neutron star shortly before merger with a black hole companion. Using a new version of the Monte Carlo radiation transport code Pandurata that incorporates dynamic spacetimes, we integrate photon geodesics from the neutron star surface until they reach a distant observer or are captured by the black hole. The gamma-ray light curve is modulated by a number of relativistic effects, including Doppler beaming and gravitational lensing. Because the photons originate from the inspiraling neutron star, the light curve closely resembles the corresponding gravitational waveform: a chirp signal characterized by a steadily increasing frequency and amplitude. We propose to search for these electromagnetic chirps using matched filtering algorithms similar to those used in LIGO data analysis.
Software for Simulation of Hyperspectral Images
NASA Technical Reports Server (NTRS)
Richtsmeier, Steven C.; Singer-Berk, Alexander; Bernstein, Lawrence S.
2002-01-01
A package of software generates simulated hyperspectral images for use in validating algorithms that generate estimates of Earth-surface spectral reflectance from hyperspectral images acquired by airborne and spaceborne instruments. This software is based on a direct simulation Monte Carlo approach for modeling three-dimensional atmospheric radiative transport as well as surfaces characterized by spatially inhomogeneous bidirectional reflectance distribution functions. In this approach, 'ground truth' is accurately known through input specification of surface and atmospheric properties, and it is practical to consider wide variations of these properties. The software can treat both land and ocean surfaces and the effects of finite clouds with surface shadowing. The spectral/spatial data cubes computed by use of this software can serve both as a substitute for and a supplement to field validation data.
Simulation of Hyperspectral Images
NASA Technical Reports Server (NTRS)
Richtsmeier, Steven C.; Singer-Berk, Alexander; Bernstein, Lawrence S.
2004-01-01
A software package generates simulated hyperspectral imagery for use in validating algorithms that generate estimates of Earth-surface spectral reflectance from hyperspectral images acquired by airborne and spaceborne instruments. This software is based on a direct simulation Monte Carlo approach for modeling three-dimensional atmospheric radiative transport, as well as reflections from surfaces characterized by spatially inhomogeneous bidirectional reflectance distribution functions. In this approach, "ground truth" is accurately known through input specification of surface and atmospheric properties, and it is practical to consider wide variations of these properties. The software can treat both land and ocean surfaces, as well as the effects of finite clouds with surface shadowing. The spectral/spatial data cubes computed by use of this software can serve both as a substitute for, and a supplement to, field validation data.
Electromagnetic Chirps from Neutron Star-Black Hole Mergers
NASA Technical Reports Server (NTRS)
Schnittman, Jeremy D.; Dal Canton, Tito; Camp, Jordan B.; Tsang, David; Kelly, Bernard J.
2018-01-01
We calculate the electromagnetic signal of a gamma-ray flare coming from the surface of a neutron star shortly before merger with a black hole companion. Using a new version of the Monte Carlo radiation transport code Pandurata that incorporates dynamic spacetimes, we integrate photon geodesics from the neutron star surface until they reach a distant observer or are captured by the black hole. The gamma-ray light curve is modulated by a number of relativistic effects, including Doppler beaming and gravitational lensing. Because the photons originate from the inspiraling neutron star, the light curve closely resembles the corresponding gravitational waveform: a chirp signal characterized by a steadily increasing frequency and amplitude. We propose to search for these electromagnetic chirps using matched filtering algorithms similar to those used in LIGO data analysis.
Non-line-of-sight single-scatter propagation model for noncoplanar geometries.
Elshimy, Mohamed A; Hranilovic, Steve
2011-03-01
In this paper, a geometrical propagation model is developed that generalizes the classical single-scatter model under the assumption of first-order scattering and non-line-of-sight (NLOS) communication. The generalized model considers the case of a noncoplanar geometry, where it overcomes the restriction that the transmitter and the receiver cone axes lie in the same plane. To verify the model, a Monte Carlo (MC) radiative transfer model based on a photon transport algorithm is constructed. Numerical examples for a wavelength of 266 nm are illustrated, which corresponds to a solar-blind NLOS UV communication system. A comparison of the temporal responses of the generalized model and the MC simulation results shows close agreement. Path loss and delay spread are also shown for different pointing directions.
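To make the single-scatter picture concrete, the sketch below Monte Carlo samples scattering points along a vertical transmitter beam and tallies attenuated contributions at a displaced receiver, building a temporal response. The isotropic phase function, km-scale geometry, and coefficient values are simplifying assumptions, not the generalized noncoplanar model of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_single_scatter_response(n=200_000, k_s=0.4, k_a=0.1, baseline=1.0):
    """Monte Carlo estimate of the single-scatter temporal response for a
    NLOS link: vertical transmitter beam at the origin, receiver looking up
    from `baseline` km away. Isotropic scattering and km units are
    simplifying assumptions."""
    c = 3e5                                   # speed of light, km/s
    k_e = k_s + k_a                           # extinction coefficient
    z = rng.exponential(1.0 / k_e, n)         # scatter height along the beam
    r2 = np.hypot(baseline, z)                # scatter point -> receiver distance
    # weight: probability the interaction is a scatter, times the
    # attenuated 1/r^2 flux reaching a unit receiver aperture
    w = (k_s / k_e) * np.exp(-k_e * r2) / (4 * np.pi * r2**2)
    t = (z + r2) / c                          # arrival time of each contribution
    hist, edges = np.histogram(t, bins=100, weights=w)
    return edges[:-1], hist / n

t, h = mc_single_scatter_response()
print(f"peak of temporal response at t = {t[np.argmax(h)] * 1e6:.2f} microseconds")
```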
Satellite measurement of aerosol mass over land
NASA Technical Reports Server (NTRS)
Fraser, R. S.; Kaufman, Y. J.; Mahoney, R. L.
1984-01-01
The estimation of aerosol optical thickness and mass from satellite observations over land is demonstrated using data from the GOES Visible/IR Spin-Scan Radiometer for the eastern U.S. The post-launch calibration technique is described; the algorithm used to derive optical thickness from the radiance of scattered sunlight (by means of a radiative-transfer model in which the optical characteristics of the aerosol are assumed) is presented; and data on aerosol S for July 31, 1980 are analyzed. The results are presented in a series of graphs and maps and compared with ground-based data. The errors in the optical thickness and columnar mass are estimated as 15 and 40 percent, respectively, and the need for independent-data-set validation of satellite-based mass, transport, and divergence values is indicated.
On the Transport and Radiative Properties of Plasmas with Small-Scale Electromagnetic Fluctuations
NASA Astrophysics Data System (ADS)
Keenan, Brett D.
Plasmas with sub-Larmor-scale ("small-scale") electromagnetic fluctuations are a feature of a wide variety of high-energy-density environments, and are essential to the description of many astrophysical/laboratory plasma phenomena. Radiation from particles, whether they be relativistic or non-relativistic, moving through small-scale electromagnetic turbulence has spectral characteristics distinct from both synchrotron and cyclotron radiation. The radiation, carrying information on the statistical properties of the turbulence, is also intimately related to the particle diffusive transport. We investigate, both theoretically and numerically, the transport of non-relativistic and transrelativistic particles in plasmas with high-amplitude isotropic sub-Larmor-scale magnetic turbulence (both with and without a mean field component) and its relation to the spectra of radiation simultaneously produced by these particles. Furthermore, the transport of particles through small-scale electromagnetic turbulence, under certain conditions, resembles the random transport of particles via Coulomb collisions in collisional plasmas. The pitch-angle diffusion coefficient, which acts as an effective "collision" frequency, may be substantial in these otherwise collisionless environments. We show that this effect, colloquially referred to as the plasma "quasi-collisionality", may radically alter the expected radiative transport properties of candidate plasmas. We argue that the modified magneto-optic effects in these plasmas provide an attractive, novel, diagnostic tool for the exploration and characterization of small-scale electromagnetic turbulence. Lastly, we speculate upon the manner in which quasi-collisions may affect inertial confinement fusion (ICF) and other laser-plasma experiments. Finally, we show that mildly relativistic jitter radiation, from laser-produced plasmas, may offer insight into the underlying electromagnetic turbulence. Here we investigate the prospects for, and demonstrate the feasibility of, such direct radiative diagnostics for mildly relativistic, solid-density laser plasmas produced in lab experiments. In effect, we demonstrate how the diffusive and radiative properties of plasmas with small-scale, turbulent, electromagnetic fluctuations may serve as a powerful tool for the diagnosis of laboratory, astrophysical, and space plasmas.
Algorithms for radiative transfer simulations for aerosol retrieval
NASA Astrophysics Data System (ADS)
Mukai, Sonoyo; Sano, Itaru; Nakata, Makiko
2012-11-01
Aerosol retrieval work from satellite data, i.e. aerosol remote sensing, is divided into three parts: satellite data analysis, aerosol modeling, and multiple light-scattering calculation in the atmosphere model, which is called radiative transfer simulation. The aerosol model is compiled from measurements accumulated over more than ten years by the worldwide aerosol monitoring network (AERONET). The radiative transfer simulations take into account Rayleigh scattering by molecules, Mie scattering by aerosols in the atmosphere, and reflection by the Earth's surface. Thus the aerosol properties are estimated by comparing satellite measurements with the numerical values of radiation simulations in the Earth-atmosphere-surface model. Precise simulation of multiple light-scattering processes is necessary, and it requires a long computational time, especially in an optically thick atmosphere model. Therefore, efficient algorithms for radiative transfer problems are indispensable for retrieving aerosols from space.
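As a toy illustration of why multiple scattering dominates the cost, the following plane-parallel Monte Carlo estimates the top-of-atmosphere reflectance of a scattering layer; a Henyey-Greenstein phase function stands in for the Mie calculation, and all parameter values are illustrative. Increasing the optical thickness multiplies the number of scattering events per photon, and hence the run time.

```python
import numpy as np

rng = np.random.default_rng(0)

def hg_cosine(g):
    """Sample a scattering-angle cosine from the Henyey-Greenstein phase function."""
    if abs(g) < 1e-6:
        return 2.0 * rng.random() - 1.0
    frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * rng.random())
    return (1.0 + g * g - frac * frac) / (2.0 * g)

def mc_reflectance(tau_star=0.5, omega0=0.95, g=0.7, n=50_000):
    """Fraction of incident photons reflected back to space by a
    plane-parallel scattering layer over a black surface."""
    reflected = 0
    for _ in range(n):
        tau, mu = 0.0, 1.0                     # depth coordinate; mu > 0 points down
        while True:
            tau += mu * -np.log(rng.random())  # exponential free path
            if tau < 0.0:
                reflected += 1                 # escaped through the top
                break
            if tau > tau_star or rng.random() > omega0:
                break                          # absorbed at the surface or in-layer
            cos_t = hg_cosine(g)               # rotate direction by scattering angle
            phi = 2.0 * np.pi * rng.random()
            mu = mu * cos_t + np.sqrt(max(0.0, (1 - mu * mu) * (1 - cos_t * cos_t))) * np.cos(phi)
    return reflected / n

print(f"reflectance: {mc_reflectance():.3f}")
```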
NASA Astrophysics Data System (ADS)
Rose, D. V.; Welch, D. R.; Clark, R. E.; Thoma, C.; Zimmerman, W. R.; Bruner, N.; Rambo, P. K.; Atherton, B. W.
2011-09-01
Streamer and leader formation in high pressure devices is a dynamic process involving a broad range of physical phenomena. These include elastic and inelastic particle collisions in the gas, radiation generation, transport and absorption, and electrode interactions. Accurate modeling of these physical processes is essential for a number of applications, including high-current, laser-triggered gas switches. Towards this end, we present a new 3D implicit particle-in-cell simulation model of gas breakdown leading to streamer formation in electronegative gases. The model uses a Monte Carlo treatment for all particle interactions and includes discrete photon generation, transport, and absorption for ultra-violet and soft x-ray radiation. Central to the realization of this fully kinetic particle treatment is an algorithm that manages the total particle count by species while preserving the local momentum distribution functions and conserving charge [D. R. Welch, T. C. Genoni, R. E. Clark, and D. V. Rose, J. Comput. Phys. 227, 143 (2007)]. The simulation model is fully electromagnetic, making it capable of following, for example, the evolution of a gas switch from the point of laser-induced localized breakdown of the gas between electrodes through the successive stages of streamer propagation, initial electrode current connection, and high-current conduction channel evolution, where self-magnetic field effects are likely to be important. We describe the model details and underlying assumptions used and present sample results from 3D simulations of streamer formation and propagation in SF6.
NASA Astrophysics Data System (ADS)
Valdes, Gilmer; Solberg, Timothy D.; Heskel, Marina; Ungar, Lyle; Simone, Charles B., II
2016-08-01
To develop a patient-specific ‘big data’ clinical decision tool to predict pneumonitis in stage I non-small cell lung cancer (NSCLC) patients after stereotactic body radiation therapy (SBRT). 61 features were recorded for 201 consecutive patients with stage I NSCLC treated with SBRT, in whom 8 (4.0%) developed radiation pneumonitis. Pneumonitis thresholds were found for each feature individually using decision stumps. The performance of three different algorithms (Decision Trees, Random Forests, RUSBoost) was evaluated. Learning curves were developed and the training error analyzed and compared to the testing error in order to evaluate the factors needed to obtain a cross-validated error smaller than 0.1. These included the addition of new features, increasing the complexity of the algorithm and enlarging the sample size and number of events. In the univariate analysis, the most important feature selected was the diffusion capacity of the lung for carbon monoxide (DLCO adj%). On multivariate analysis, the three most important features selected were the dose to 15 cc of the heart, dose to 4 cc of the trachea or bronchus, and race. Higher accuracy could be achieved if the RUSBoost algorithm was used with regularization. To predict radiation pneumonitis within an error smaller than 10%, we estimate that a sample size of 800 patients is required. Clinically relevant thresholds that put patients at risk of developing radiation pneumonitis were determined in a cohort of 201 stage I NSCLC patients treated with SBRT. The consistency of these thresholds can provide radiation oncologists with an estimate of their reliability and may inform treatment planning and patient counseling. The accuracy of the classification is limited by the number of patients in the study and not by the features gathered or the complexity of the algorithm.
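The univariate thresholding step can be reproduced with depth-one decision trees ("decision stumps"): one stump per feature yields the single split that best separates pneumonitis cases. The sketch below uses synthetic stand-in data with the cohort's approximate size and event rate; the feature values are invented, not the study's.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)

# Hypothetical stand-ins for two recorded features: dose to 15 cc of the
# heart (Gy) and DLCO adj%; labels mark radiation pneumonitis events.
X = np.column_stack([rng.normal(20, 8, 201), rng.normal(70, 15, 201)])
y = (rng.random(201) < 0.04).astype(int)        # ~4% event rate, as in the cohort

# One decision stump (depth-1 tree) per feature recovers a single
# pneumonitis threshold for that feature, as in the univariate analysis.
for j, name in enumerate(["heart D15cc", "DLCO adj%"]):
    stump = DecisionTreeClassifier(max_depth=1, class_weight="balanced")
    stump.fit(X[:, [j]], y)
    print(f"{name}: threshold = {stump.tree_.threshold[0]:.1f}")
```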
Planning and delivery of four-dimensional radiation therapy with multileaf collimators
NASA Astrophysics Data System (ADS)
McMahon, Ryan L.
This study is an investigation of the application of multileaf collimators (MLCs) to the treatment of moving anatomy with external beam radiation therapy. First, a method for delivering intensity modulated radiation therapy (IMRT) to moving tumors is presented. This method uses an MLC control algorithm that calculates appropriate MLC leaf speeds in response to feedback from real-time imaging. The algorithm does not require a priori knowledge of a tumor's motion, and is based on the concept of self-correcting DMLC leaf trajectories. This gives the algorithm the distinct advantage of allowing for correction of DMLC delivery errors without interrupting delivery. The algorithm is first tested for the case of one-dimensional (1D) rigid tumor motion in the beam's eye view (BEV). For this type of motion, it is shown that the real-time tracking algorithm results in more accurate deliveries, with respect to delivered intensity, than those which ignore motion altogether. This is followed by an appropriate extension of the algorithm to two-dimensional (2D) rigid motion in the BEV. For this type of motion, it is shown that the 2D real-time tracking algorithm results in improved accuracy (in the delivered intensity) in comparison to deliveries which ignore tumor motion or only account for tumor motion which is aligned with MLC leaf travel. Finally, a method is presented for designing DMLC leaf trajectories which deliver a specified intensity over a moving tumor without overexposing critical structures which exhibit motion patterns that differ from that of the tumor. In addition to avoiding overexposure of critical organs, the method can, in the case shown, produce deliveries that are superior to anything achievable using stationary anatomy. In this regard, the method represents a systematic way to include anatomical motion as a degree of freedom in the optimization of IMRT while producing treatment plans that are deliverable with currently available technology. These results, combined with those related to the real-time MLC tracking algorithm, show that an MLC is a promising tool to investigate for the delivery of four-dimensional radiation therapy.
The point explosion with radiation transport
NASA Astrophysics Data System (ADS)
Lin, Zhiwei; Zhang, Lu; Kuang, Longyu; Jiang, Shaoen
2017-10-01
Some amount of energy is released instantaneously at the origin, generating simultaneously a spherical radiative heat wave and a spherical shock wave in the point explosion with radiation transport, a complicated problem due to the competition between these two waves. The point explosion problem possesses self-similar solutions when only hydrodynamic motion or only heat conduction is considered, namely the Sedov and Barenblatt solutions, respectively. The point explosion problem in which both hydrodynamic motion and heat conduction are included has been studied by P. Reinicke and A.I. Shestakov. In this talk we numerically investigate the point explosion problem in which both hydrodynamic motion and radiation transport are taken into account. The radiation transport equation in one-dimensional spherical geometry has to be solved for this problem, since the ambient medium is optically thin with respect to the initially extremely high temperature at the origin. The numerical results reveal a high compression of the medium and a bi-peak structure of the density, which are further analyzed theoretically at the end.
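For the pure-hydrodynamic limit mentioned above, the Sedov solution gives the shock radius in closed form, R(t) = ξ (E t²/ρ₀)^{1/5}. A quick numerical illustration follows; the energy, density, and ξ values are illustrative (ξ ≈ 1.15 for an ideal gas with γ = 1.4), not taken from the talk.

```python
# Sedov-Taylor blast-wave scaling: R(t) = xi * (E * t**2 / rho0) ** (1/5)
E, rho0, xi = 1e14, 1.2, 1.15          # J, kg/m^3, dimensionless (illustrative)
for t in (1e-4, 1e-3, 1e-2):           # seconds
    R = xi * (E * t**2 / rho0) ** 0.2
    print(f"t = {t:8.1e} s  ->  shock radius R = {R:7.2f} m")
```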
NASA Astrophysics Data System (ADS)
Xu, Miaomiao; Bu, Xiongzhu; Yu, Jing; He, Zilu
2018-01-01
Based on the study of Earth infrared radiation and the further requirement of resistance to cloud interference in a spinning projectile's infrared attitude measurement, a compensation method for cloud infrared radiation interference is proposed. First, a theoretical model of the infrared radiation interference is established by analyzing the generation mechanism and interference characteristics of cloud infrared radiation. Then, the influence of cloud infrared radiation on the attitude angle is calculated for two situations. In the first, the projectile is inside cloud, and the roll angle error can reach ±20 deg. In the second, the projectile is outside of cloud, and the projectile's attitude angle cannot be measured at all. Finally, a multisensor weighted fusion algorithm based on a trust-function method is proposed to reduce the influence of cloud infrared radiation. The results of semiphysical experiments show that the roll angle error with the weighted fusion algorithm can be kept within ±0.5 deg in the presence of cloud infrared radiation interference. The proposed method improves roll angle accuracy in attitude measurement by nearly a factor of four and addresses the low accuracy of infrared attitude measurement in the navigation and guidance field.
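A minimal version of trust-function weighting is shown below: each sensor's weight grows with its mutual support from the other sensors, so a cloud-contaminated channel is automatically de-weighted. The Gaussian support kernel and all numbers are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def trust_weighted_fusion(readings, sigma=1.0):
    """Fuse redundant sensor readings with weights from a simple trust
    (mutual-support) function: sensors that agree with the others get
    high weight; outliers (e.g. a cloud-contaminated channel) get low
    weight. The Gaussian support kernel is an assumption."""
    r = np.asarray(readings, dtype=float)
    support = np.exp(-((r[:, None] - r[None, :]) ** 2) / (2 * sigma**2))
    trust = support.sum(axis=1) - 1.0          # exclude self-support
    w = trust / trust.sum()
    return float(w @ r), w

# three IR channels report roll angle; one is biased by cloud radiance
angle, weights = trust_weighted_fusion([10.2, 9.8, 18.5])
print(f"fused roll angle = {angle:.2f} deg, weights = {np.round(weights, 2)}")
```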
NASA Technical Reports Server (NTRS)
Kyle, H. Lee; Hucek, Richard R.; Groveman, Brian; Frey, Richard
1990-01-01
The archived Earth radiation budget (ERB) products produced from the Nimbus-7 ERB narrow field-of-view scanner are described. The principal products are broadband outgoing longwave radiation (4.5 to 50 microns), reflected solar radiation (0.2 to 4.8 microns), and the net radiation. Daily and monthly averages are presented on a fixed global equal-area (500 sq km) grid for the period May 1979 to May 1980. Two independent algorithms are used to estimate the outgoing fluxes from the observed radiances. The algorithms are described and the results compared. The products are divided into three subsets: the Scene Radiance Tapes (SRT) contain the calibrated radiances; the Sorting into Angular Bins (SAB) tape contains the SAB-produced shortwave, longwave, and net radiation products; and the Maximum Likelihood Cloud Estimation (MLCE) tapes contain the MLCE products. The tape formats are described in detail.
Simulating and Detecting Radiation-Induced Errors for Onboard Machine Learning
NASA Technical Reports Server (NTRS)
Wagstaff, Kiri L.; Bornstein, Benjamin; Granat, Robert; Tang, Benyang; Turmon, Michael
2009-01-01
Spacecraft processors and memory are subjected to high radiation doses and therefore employ radiation-hardened components. However, these components are orders of magnitude more expensive than typical desktop components, and they lag years behind in terms of speed and size. We have integrated algorithm-based fault tolerance (ABFT) methods into onboard data analysis algorithms to detect radiation-induced errors, which ultimately may permit the use of spacecraft memory that need not be fully hardened, reducing cost and increasing capability at the same time. We have also developed a lightweight software radiation simulator, BITFLIPS, that permits evaluation of error detection strategies in a controlled fashion, including the specification of the radiation rate and selective exposure of individual data structures. Using BITFLIPS, we evaluated our error detection methods when using a support vector machine to analyze data collected by the Mars Odyssey spacecraft. We found ABFT error detection for matrix multiplication is very successful, while error detection for Gaussian kernel computation still has room for improvement.
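The matrix-multiplication ABFT the authors found so effective is typically the Huang-Abraham checksum scheme: append a row of column sums to A and a column of row sums to B, multiply, and any single corrupted product entry breaks both its row and column checksum. A minimal sketch follows (the BITFLIPS and onboard specifics are not shown, and the tolerance value is an assumption):

```python
import numpy as np

def checksummed_matmul(A, B):
    """C = A @ B with Huang-Abraham checksums appended: an extra row of
    column sums and an extra column of row sums travel with the product."""
    Ac = np.vstack([A, A.sum(axis=0)])
    Br = np.hstack([B, B.sum(axis=1, keepdims=True)])
    return Ac @ Br

def verify(Cf, rtol=1e-8):
    """Detect a corrupted entry by re-deriving the checksums."""
    C = Cf[:-1, :-1]
    return (np.allclose(Cf[:-1, -1], C.sum(axis=1), rtol=rtol) and
            np.allclose(Cf[-1, :-1], C.sum(axis=0), rtol=rtol))

A, B = np.random.rand(64, 64), np.random.rand(64, 64)
Cf = checksummed_matmul(A, B)
print("before upset:", verify(Cf))    # True
Cf[10, 20] += 2.0 ** -3               # simulate a radiation-induced bit flip
print("after upset: ", verify(Cf))    # False -- error detected
```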
Spinning projectile's attitude measurement with LW infrared radiation under sea-sky background
NASA Astrophysics Data System (ADS)
Xu, Miaomiao; Bu, Xiongzhu; Yu, Jing; He, Zilu
2018-05-01
With the further development of infrared radiation research for the sea-sky background and the requirement of spinning projectile's attitude measurement, the sea-sky infrared radiation field is used to determine the spinning projectile's attitude angle in place of inertial sensors. First, the generation mechanism of sea-sky infrared radiation is analysed. The mathematical model of sea-sky infrared radiation is deduced in the LW (long wave) infrared 8-14 μm band by calculating the sea surface and sky infrared radiation. Second, according to the movement characteristics of the spinning projectile, the attitude measurement model of the infrared sensors on the projectile's three axes is established, and the feasibility of the model is analysed by simulation. Finally, the projectile's attitude calculation algorithm is designed to improve the attitude angle estimation accuracy. The results of semi-physical experiments show that the estimation error of the segmented interactive algorithm for pitch and roll angles is within ±1.5°. The attitude measurement method is effective and feasible, and provides an accurate measurement basis for the guidance of spinning projectiles.
Normalization Of Thermal-Radiation Form-Factor Matrix
NASA Technical Reports Server (NTRS)
Tsuyuki, Glenn T.
1994-01-01
Report describes an algorithm that adjusts the form-factor matrix in the TRASYS computer program, which calculates intraspacecraft radiative interchange among various surfaces and environmental heat loading from sources such as the Sun.
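The report does not give the adjustment formula, but form-factor normalization generally enforces the closure condition Σ_j F_ij = 1 for each surface and the reciprocity relation A_i F_ij = A_j F_ji. A minimal sketch of one such adjustment pass follows (the values are illustrative, and real codes typically iterate the two steps; this is not the TRASYS algorithm itself):

```python
import numpy as np

def normalize_form_factors(F, areas):
    """One pass of form-factor adjustment: enforce closure (rows sum to 1),
    then symmetrize the area-weighted matrix to restore reciprocity
    A_i * F_ij = A_j * F_ji. Illustrative only."""
    F = F / F.sum(axis=1, keepdims=True)       # enforce closure row by row
    AF = areas[:, None] * F
    AF = (AF + AF.T) / 2.0                     # restore reciprocity
    return AF / areas[:, None]                 # back to form factors

F_raw = np.array([[0.00, 0.62, 0.41],
                  [0.30, 0.00, 0.72],
                  [0.21, 0.77, 0.00]])
areas = np.array([2.0, 1.0, 1.0])
F = normalize_form_factors(F_raw, areas)
print(np.round(F.sum(axis=1), 3))              # row sums now close to 1
```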
Monte Carlo Analysis of Pion Contribution to Absorbed Dose from Galactic Cosmic Rays
NASA Technical Reports Server (NTRS)
Aghara, S.K.; Battnig, S.R.; Norbury, J.W.; Singleterry, R.C.
2009-01-01
Accurate knowledge of the physics of interaction, particle production and transport is necessary to estimate the radiation damage to equipment used on spacecraft and the biological effects of space radiation. For long duration astronaut missions, both on the International Space Station and the planned manned missions to Moon and Mars, the shielding strategy must include a comprehensive knowledge of the secondary radiation environment. The distribution of absorbed dose and dose equivalent is a function of the type, energy and population of these secondary products. Galactic cosmic rays (GCR) comprised of protons and heavier nuclei have energies from a few MeV per nucleon to the ZeV region, with the spectra reaching flux maxima in the hundreds of MeV range. Therefore, the MeV - GeV region is most important for space radiation. Coincidentally, the pion production energy threshold is about 280 MeV. The question naturally arises as to how important these particles are with respect to space radiation problems. The space radiation transport code, HZETRN (High charge (Z) and Energy TRaNsport), currently used by NASA, performs neutron, proton and heavy ion transport explicitly, but it does not take into account the production and transport of mesons, photons and leptons. In this paper, we present results from the Monte Carlo code MCNPX (Monte Carlo N-Particle eXtended), showing the effect of leptons and mesons when they are produced and transported in a GCR environment.
NASA Astrophysics Data System (ADS)
Shang, J. S.; Andrienko, D. A.; Huang, P. G.; Surzhikov, S. T.
2014-06-01
An efficient computational capability for nonequilibrium radiation simulation via the ray tracing technique has been developed. The radiative rate equation is iteratively coupled with the aerodynamic conservation laws, including nonequilibrium chemical and chemical-physical kinetic models. The spectral properties along tracing rays are determined by a space-partition algorithm based on a nearest-neighbor search, and the numerical accuracy is further enhanced by local resolution refinement using the Gauss-Lobatto polynomial. The interdisciplinary governing equations are solved by an implicit delta formulation through the diminishing-residual approach. The axisymmetric radiating flow fields over the reentry RAM-CII probe have been simulated and verified against flight data and previous solutions obtained by traditional methods. A computational efficiency gain of nearly forty times over existing simulation procedures is realized.
GPU-accelerated Monte Carlo convolution/superposition implementation for dose calculation.
Zhou, Bo; Yu, Cedric X; Chen, Danny Z; Hu, X Sharon
2010-11-01
Dose calculation is a key component in radiation treatment planning systems. Its performance and accuracy are crucial to the quality of treatment plans as emerging advanced radiation therapy technologies are exerting ever tighter constraints on dose calculation. A common practice is to choose either a deterministic method such as the convolution/superposition (CS) method for speed or a Monte Carlo (MC) method for accuracy. The goal of this work is to boost the performance of a hybrid Monte Carlo convolution/superposition (MCCS) method by devising a graphics processing unit (GPU) implementation so as to make the method practical for day-to-day usage. Although the MCCS algorithm combines the merits of MC fluence generation and CS fluence transport, it is still not fast enough to be used as a day-to-day planning tool. To alleviate the speed issue of MC algorithms, the authors adopted MCCS as their target method and implemented a GPU-based version. In order to fully utilize the GPU computing power, the MCCS algorithm is modified to match the GPU hardware architecture. The performance of the authors' GPU-based implementation on an Nvidia GTX260 card is compared to a multithreaded software implementation on a quad-core system. A speedup in the range of 6.7-11.4x is observed for the clinical cases used. The less than 2% statistical fluctuation also indicates that the accuracy of the authors' GPU-based implementation is in good agreement with the results from the quad-core CPU implementation. This work shows that GPU is a feasible and cost-efficient solution compared to other alternatives such as using cluster machines or field-programmable gate arrays for satisfying the increasing demands on computation speed and accuracy of dose calculation. But there are also inherent limitations of using GPU for accelerating MC-type applications, which are also analyzed in detail in this article.
An improved random walk algorithm for the implicit Monte Carlo method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keady, Kendra P., E-mail: keadyk@lanl.gov; Cleveland, Mathew A.
In this work, we introduce a modified Implicit Monte Carlo (IMC) Random Walk (RW) algorithm, which increases simulation efficiency for multigroup radiative transfer problems with strongly frequency-dependent opacities. To date, the RW method has only been implemented in “fully-gray” form; that is, the multigroup IMC opacities are group-collapsed over the full frequency domain of the problem to obtain a gray diffusion problem for RW. This formulation works well for problems with large spatial cells and/or opacities that are weakly dependent on frequency; however, the efficiency of the RW method degrades when the spatial cells are thin or the opacities are a strong function of frequency. To address this inefficiency, we introduce a RW frequency group cutoff in each spatial cell, which divides the frequency domain into optically thick and optically thin components. In the modified algorithm, opacities for the RW diffusion problem are obtained by group-collapsing IMC opacities below the frequency group cutoff. Particles with frequencies above the cutoff are transported via standard IMC, while particles below the cutoff are eligible for RW. This greatly increases the total number of RW steps taken per IMC time-step, which in turn improves the efficiency of the simulation. We refer to this new method as Partially-Gray Random Walk (PGRW). We present numerical results for several multigroup radiative transfer problems, which show that the PGRW method is significantly more efficient than standard RW for several problems of interest. In general, PGRW decreases runtimes by a factor of ∼2–4 compared to standard RW, and a factor of ∼3–6 compared to standard IMC. While PGRW is slower than frequency-dependent Discrete Diffusion Monte Carlo (DDMC), it is also easier to adapt to unstructured meshes and can be used in spatial cells where DDMC is not applicable. This suggests that it may be optimal to employ both DDMC and PGRW in a single simulation.
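A sketch of the group-collapse step: below the per-cell frequency-group cutoff, the multigroup opacities are collapsed to a single gray value used by Random Walk, while the groups above the cutoff remain with standard IMC transport. The Planck-weighted mean shown is an assumed collapse rule; the paper's precise weighting is not given in the abstract.

```python
import numpy as np

def partially_gray_opacity(sigma_g, b_g, cutoff):
    """Collapse multigroup opacities below a frequency-group cutoff into a
    single gray value for Random Walk, leaving the groups above the cutoff
    to standard IMC. Planck-weighted collapse is an assumption."""
    sig_low, b_low = sigma_g[:cutoff], b_g[:cutoff]
    gray = (sig_low * b_low).sum() / b_low.sum()   # weighted gray opacity
    return gray, sigma_g[cutoff:]                  # (RW opacity, IMC groups)

sigma_g = np.array([400.0, 150.0, 60.0, 5.0, 0.8])  # opacity per group (1/cm)
b_g = np.array([0.1, 0.3, 0.3, 0.2, 0.1])           # normalized Planck weights
gray, thin = partially_gray_opacity(sigma_g, b_g, cutoff=3)
print(f"gray RW opacity below cutoff: {gray:.1f} 1/cm; IMC groups: {thin}")
```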
Ray Tracing Through Non-Imaging Concentrators
NASA Astrophysics Data System (ADS)
Greynolds, Alan W.
1984-01-01
A generalized algorithm for tracing rays through both imaging and non-imaging radiation collectors is presented. A computer program based on the algorithm is then applied to analyzing various two-stage Winston concentrators.
The Building of the USAF Panasonic UD-809AS Algorithm
Arnold, Katherine M.; Radiation Dosimetry Branch, Brooks Air Force Base
1993-01-01
Only fragmentary front matter of this report survives, including reference entries for Panasonic TLD hardware (Panasonic Industrial Company, Secaucus, New Jersey) and for neutron dosimetry using a Panasonic thermoluminescent dosimeter (Thurlow, Ronald M.).
Bogle, Brittany M; Asimos, Andrew W; Rosamond, Wayne D
2017-10-01
The Severity-Based Stroke Triage Algorithm for Emergency Medical Services endorses routing patients with suspected large vessel occlusion acute ischemic strokes directly to endovascular stroke centers (ESCs). We sought to evaluate different specifications of this algorithm within a region. We developed a discrete event simulation environment to model patients with suspected stroke transported according to algorithm specifications, which varied by stroke severity screen and permissible additional transport time for routing patients to ESCs. We simulated King County, Washington, and Mecklenburg County, North Carolina, distributing patients geographically into census tracts. Transport time to the nearest hospital and ESC was estimated using traffic-based travel times. We assessed undertriage, overtriage, transport time, and the number-needed-to-route, defined as the number of patients enduring additional transport to route one large vessel occlusion patient to an ESC. Undertriage was higher and overtriage was lower in King County compared with Mecklenburg County for each specification. Overtriage variation was primarily driven by screen (eg, 13%-55% in Mecklenburg County and 10%-40% in King County). Transportation time specifications beyond 20 minutes increased overtriage and decreased undertriage in King County but not Mecklenburg County. A low- versus high-specificity screen routed 3.7× more patients to ESCs. Emergency medical services spent nearly twice the time routing patients to ESCs in King County compared with Mecklenburg County. Our results demonstrate how discrete event simulation can facilitate informed decision making to optimize emergency medical services stroke severity-based triage algorithms. This is the first step toward developing a mature simulation to predict patient outcomes. © 2017 American Heart Association, Inc.
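A stripped-down version of the routing experiment illustrates the trade-off the simulation explores: screen sensitivity/specificity and the permissible extra transport time jointly determine undertriage, overtriage, and the number-needed-to-route. All inputs below (prevalence, screen accuracy, travel times) are invented for illustration, not the King County or Mecklenburg County values.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_triage(n=10_000, sens=0.8, spec=0.7, max_extra_min=20.0):
    """Toy routing experiment: suspected-stroke patients go to an
    endovascular stroke center (ESC) when the severity screen is positive
    and the extra transport time is within the allowed limit."""
    lvo = rng.random(n) < 0.15                        # true large vessel occlusions
    screen = np.where(lvo, rng.random(n) < sens,      # screen sensitivity ...
                           rng.random(n) < 1 - spec)  # ... and false-positive rate
    extra = rng.uniform(0, 45, n)                     # added minutes to nearest ESC
    routed = screen & (extra <= max_extra_min)
    undertriage = np.mean(lvo & ~routed)              # LVO not sent to an ESC
    overtriage = np.mean(~lvo & routed)               # non-LVO sent to an ESC
    nnr = routed.sum() / max(1, (routed & lvo).sum()) # number-needed-to-route
    return undertriage, overtriage, nnr

u, o, nnr = simulate_triage()
print(f"undertriage {u:.1%}, overtriage {o:.1%}, number-needed-to-route {nnr:.1f}")
```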
Development of a new version of the Vehicle Protection Factor Code (VPF3)
NASA Astrophysics Data System (ADS)
Jamieson, Terrance J.
1990-10-01
The Vehicle Protection Factor (VPF) Code is an engineering tool for estimating radiation protection afforded by armoured vehicles and other structures exposed to neutron and gamma ray radiation from fission, thermonuclear, and fusion sources. A number of suggestions for modifications have been offered by users of early versions of the code. These include: implementing some of the more advanced features of the air transport rating code, ATR5, used to perform the air over ground radiation transport analyses; allowing the ability to study specific vehicle orientations within the free field; implementing an adjoint transport scheme to reduce the number of transport runs required; investigating the possibility of accelerating the transport scheme; and upgrading the computer automated design (CAD) package used by VPF. The generation of radiation free field fluences for infinite air geometries as required for aircraft analysis can be accomplished by using ATR with the air over ground correction factors disabled. Analysis of the effects of fallout bearing debris clouds on aircraft will require additional modelling of VPF.
Zhou, C.; Liu, L.; Lane, J.W.
2001-01-01
A nonlinear tomographic inversion method that uses first-arrival travel-time and amplitude-spectra information from cross-hole radar measurements was developed to simultaneously reconstruct electromagnetic velocity and attenuation distributions in earth materials. Inversion methods were developed to analyze single cross-hole tomography surveys and differential tomography surveys. Assuming the earth behaves as a linear system, the inversion methods do not require estimation of the source radiation pattern, receiver coupling, or geometrical spreading. The data analysis and tomographic inversion algorithm were applied to synthetic test data and to cross-hole radar field data provided by the US Geological Survey (USGS). The cross-hole radar field data were acquired at the USGS fractured-rock field research site at Mirror Lake near Thornton, New Hampshire, before and after injection of a saline tracer, to monitor the transport of electrically conductive fluids in the image plane. Results from the synthetic data test demonstrate the algorithm's computational efficiency and indicate that the method can robustly reconstruct electromagnetic (EM) wave velocity and attenuation distributions in earth materials. The field test results outline zones of velocity and attenuation anomalies consistent with the findings of previous investigators; however, the tomograms appear to be quite smooth. Further work is needed to effectively find the optimal smoothness criterion in applying the Tikhonov regularization in the nonlinear inversion algorithms for cross-hole radar tomography. © 2001 Elsevier Science B.V. All rights reserved.
Global Carbon Cycle Modeling in GISS ModelE2 GCM
NASA Astrophysics Data System (ADS)
Aleinov, I. D.; Kiang, N. Y.; Romanou, A.; Romanski, J.
2014-12-01
Consistent and accurate modeling of the global carbon cycle remains one of the main challenges for Earth System Models. The NASA Goddard Institute for Space Studies (GISS) ModelE2 General Circulation Model (GCM) was recently equipped with a complete global carbon cycle algorithm, consisting of three integrated components: the Ent Terrestrial Biosphere Model (Ent TBM), an ocean biogeochemistry module, and an atmospheric CO2 tracer. Ent TBM provides CO2 fluxes from the land surface to the atmosphere. Its biophysics utilizes the well-known photosynthesis functions of Farquhar, von Caemmerer, and Berry and Farquhar and von Caemmerer, and the stomatal conductance of Ball and Berry. Its phenology is based on temperature, drought, and radiation fluxes, and growth is controlled via allocation of carbon from labile carbohydrate reserve storage to different plant components. Soil biogeochemistry is based on the Carnegie-Ames-Stanford (CASA) model of Potter et al. The ocean biogeochemistry module (the NASA Ocean Biogeochemistry Model, NOBM) computes prognostic distributions for biotic and abiotic fields that influence the air-sea flux of CO2 and the deep ocean carbon transport and storage. Atmospheric CO2 is advected with a quadratic upstream algorithm implemented in the atmospheric part of ModelE2. Here we present the results for pre-industrial equilibrium and modern transient simulations and provide comparison to available observations. We also discuss the process of validation and tuning of particular algorithms used in the model.
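Of the biophysics relations cited, the Ball-Berry stomatal conductance law is compact enough to state outright, g_s = m A_n h_s / c_s + g_0. A quick numeric check follows; the slope and intercept are typical C3-plant values, not ModelE2's calibration.

```python
def ball_berry_conductance(a_net, rh, cs, m=9.0, g0=0.01):
    """Ball-Berry stomatal conductance: gs = m * A_net * rh / cs + g0,
    with gs in mol m^-2 s^-1, a_net the net assimilation (mol m^-2 s^-1),
    rh the relative humidity at the leaf surface, and cs the CO2 mole
    fraction at the leaf surface. m and g0 are illustrative C3 values."""
    return m * a_net * rh / cs + g0

# example: modest assimilation (10 umol m^-2 s^-1) in humid air at 400 ppm CO2
print(f"gs = {ball_berry_conductance(10e-6, 0.7, 400e-6):.3f} mol m^-2 s^-1")
```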
PheKB: a catalog and workflow for creating electronic phenotype algorithms for transportability.
Kirby, Jacqueline C; Speltz, Peter; Rasmussen, Luke V; Basford, Melissa; Gottesman, Omri; Peissig, Peggy L; Pacheco, Jennifer A; Tromp, Gerard; Pathak, Jyotishman; Carrell, David S; Ellis, Stephen B; Lingren, Todd; Thompson, Will K; Savova, Guergana; Haines, Jonathan; Roden, Dan M; Harris, Paul A; Denny, Joshua C
2016-11-01
Health care generated data have become an important source for clinical and genomic research. Often, investigators create and iteratively refine phenotype algorithms to achieve high positive predictive values (PPVs) or sensitivity, thereby identifying valid cases and controls. These algorithms achieve the greatest utility when validated and shared by multiple health care systems. We report the current status and impact of the Phenotype KnowledgeBase (PheKB, http://phekb.org), an online environment supporting the workflow of building, sharing, and validating electronic phenotype algorithms. We analyze the most frequent components used in algorithms and their performance at authoring institutions and secondary implementation sites. As of June 2015, PheKB contained 30 finalized phenotype algorithms and 62 algorithms in development spanning a range of traits and diseases. Phenotypes have had over 3500 unique views in a 6-month period and have been reused by other institutions. International Classification of Disease codes were the most frequently used component, followed by medications and natural language processing. Among algorithms with published performance data, the median PPV was nearly identical when evaluated at the authoring institutions (n = 44; case 96.0%, control 100%) compared to implementation sites (n = 40; case 97.5%, control 100%). These results demonstrate that a broad range of algorithms to mine electronic health record data from different health systems can be developed with high PPV, and algorithms developed at one site are generally transportable to others. By providing a central repository, PheKB enables improved development, transportability, and validity of algorithms for research-grade phenotypes using health care generated data. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
PheKB: a catalog and workflow for creating electronic phenotype algorithms for transportability
Kirby, Jacqueline C; Speltz, Peter; Rasmussen, Luke V; Basford, Melissa; Gottesman, Omri; Peissig, Peggy L; Pacheco, Jennifer A; Tromp, Gerard; Pathak, Jyotishman; Carrell, David S; Ellis, Stephen B; Lingren, Todd; Thompson, Will K; Savova, Guergana; Haines, Jonathan; Roden, Dan M; Harris, Paul A
2016-01-01
Objective Health care generated data have become an important source for clinical and genomic research. Often, investigators create and iteratively refine phenotype algorithms to achieve high positive predictive values (PPVs) or sensitivity, thereby identifying valid cases and controls. These algorithms achieve the greatest utility when validated and shared by multiple health care systems. Materials and Methods We report the current status and impact of the Phenotype KnowledgeBase (PheKB, http://phekb.org), an online environment supporting the workflow of building, sharing, and validating electronic phenotype algorithms. We analyze the most frequent components used in algorithms and their performance at authoring institutions and secondary implementation sites. Results As of June 2015, PheKB contained 30 finalized phenotype algorithms and 62 algorithms in development spanning a range of traits and diseases. Phenotypes have had over 3500 unique views in a 6-month period and have been reused by other institutions. International Classification of Disease codes were the most frequently used component, followed by medications and natural language processing. Among algorithms with published performance data, the median PPV was nearly identical when evaluated at the authoring institutions (n = 44; case 96.0%, control 100%) compared to implementation sites (n = 40; case 97.5%, control 100%). Discussion These results demonstrate that a broad range of algorithms to mine electronic health record data from different health systems can be developed with high PPV, and algorithms developed at one site are generally transportable to others. Conclusion By providing a central repository, PheKB enables improved development, transportability, and validity of algorithms for research-grade phenotypes using health care generated data. PMID:27026615
DOE Office of Scientific and Technical Information (OSTI.GOV)
Priimak, Dmitri
2014-12-01
We present a finite difference numerical algorithm for solving the two-dimensional spatially homogeneous Boltzmann transport equation, which describes electron transport in a semiconductor superlattice subject to crossed time-dependent electric and constant magnetic fields. The algorithm is implemented both in the C language, targeting CPUs, and in CUDA C, targeting commodity NVidia GPUs. We compare the performance and merits of the two implementations and discuss various software optimisation techniques.
Algorithm and data support of traffic congestion forecasting in the controlled transport
NASA Astrophysics Data System (ADS)
Dmitriev, S. V.
2015-06-01
The relevance of the problem of forecasting traffic congestion in highway logistics systems for product movement is considered. The concepts of the controlled territory, highway occupancy by vehicles, and parking are introduced. The technical feasibility of organizing the necessary flow of information on the state of the transport system for its regulation is noted. The sequence of practical implementation of the solution is given. An algorithm for predicting traffic congestion in the controlled transport system is proposed.
NASA Astrophysics Data System (ADS)
Hogan, J.; Demichelis, C.; Monier-Garbet, P.; Guirlet, R.; Hess, W.; Schunke, B.
2000-10-01
A model combining the MIST (core symmetric) and BBQ (SOL asymmetric) codes is used to study the relation between impurity density and radiated power for representative cases from Tore Supra experiments on strong radiation regimes using the ergodic divertor. Transport predictions of external radiation are compared with observation to estimate the absolute impurity density. BBQ provides the incoming distribution of recycling impurity charge states for the radial transport calculation. The shots studied use the ergodic divertor and high ICRH power. Power is first applied and then the extrinsic impurity (Ne, N or Ar) is injected. Separate time dependent intrinsic (C and O) impurity transport calculations match radiation levels before and during the high power and impurity injection phases. Empirical diffusivities are sought to reproduce the UV (CV R, I lines), CVI Lya, OVIII Lya, Zeff, and horizontal bolometer data. The model has been used to calculate the relative radiative efficiency (radiated power / extrinsically contributed electron) for the sample database.
Evaluation of two Vaisala RS92 radiosonde solar radiative dry bias correction algorithms
Dzambo, Andrew M.; Turner, David D.; Mlawer, Eli J.
2016-04-12
Solar heating of the relative humidity (RH) probe on Vaisala RS92 radiosondes results in a large dry bias in the upper troposphere. Two different algorithms (Miloshevich et al., 2009, MILO hereafter; and Wang et al., 2013, WANG hereafter) have been designed to account for this solar radiative dry bias (SRDB). These corrections are markedly different, with MILO adding up to 40 % more moisture to the original radiosonde profile than WANG; however, the impact of the two algorithms varies with height. The accuracy of these two algorithms is evaluated using three different approaches: a comparison of precipitable water vapor (PWV), downwelling radiative closure with a surface-based microwave radiometer at a high-altitude site (5.3 km m.s.l.), and upwelling radiative closure with the space-based Atmospheric Infrared Sounder (AIRS). The PWV computed from the uncorrected and corrected RH data is compared against PWV retrieved from ground-based microwave radiometers at tropical, midlatitude, and arctic sites. Although MILO generally adds more moisture to the original radiosonde profile in the upper troposphere compared to WANG, both corrections yield similar changes to the PWV, and the corrected data agree well with the ground-based retrievals. The two closure activities – done for clear-sky scenes – use the radiative transfer models MonoRTM and LBLRTM to compute radiance from the radiosonde profiles to compare against spectral observations. Both WANG- and MILO-corrected RHs are statistically better than the original RH in all cases except for the driest 30 % of cases in the downwelling experiment, where both algorithms add too much water vapor to the original profile. In the upwelling experiment, the RH correction applied by the WANG vs. MILO algorithm is statistically different above 10 km for the driest 30 % of cases and above 8 km for the moistest 30 % of cases, suggesting that the MILO correction performs better than the WANG in clear-sky scenes. Lastly, the cause of this statistical significance is likely explained by the fact that the WANG correction also accounts for cloud cover – a condition not accounted for in the radiance closure experiments.
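The PWV comparison at the heart of this evaluation reduces to a column integral of specific humidity. A minimal sketch follows; the sounding values are hypothetical and the Magnus saturation formula is a common approximation, not necessarily the one used in the paper:

```python
import numpy as np

# PWV = (1 / (rho_w * g)) * integral of q dp, with specific humidity q
# derived from (possibly bias-corrected) RH at each radiosonde level.
g, rho_w = 9.81, 1000.0                      # m s^-2, kg m^-3

def saturation_vapor_pressure(T_c):
    """Magnus approximation over water; T in deg C, e_s in Pa."""
    return 611.2 * np.exp(17.62 * T_c / (243.12 + T_c))

def pwv_mm(p_hpa, T_c, rh_percent):
    p = p_hpa * 100.0                        # Pa, decreasing with height
    e = rh_percent / 100.0 * saturation_vapor_pressure(T_c)
    q = 0.622 * e / (p - 0.378 * e)          # specific humidity, kg/kg
    w = -np.trapz(q, p) / (rho_w * g)        # trapezoid integral, m of water
    return 1000.0 * w                        # mm

# Hypothetical radiosonde levels (surface to upper troposphere):
p_hpa = np.array([1000, 850, 700, 500, 400, 300, 250, 200.0])
T_c = np.array([25, 15, 8, -8, -20, -35, -45, -55.0])
rh = np.array([70, 60, 50, 40, 30, 25, 20, 15.0])

rh_corrected = rh * 1.05       # stand-in for a dry-bias correction factor
print(pwv_mm(p_hpa, T_c, rh), pwv_mm(p_hpa, T_c, rh_corrected))
```

Because most of the column water sits at low levels, even a sizeable upper-tropospheric RH correction changes PWV only modestly, which is consistent with the paper's finding that both corrections yield similar PWV changes.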
Numerical convergence and validation of the DIMP inverse particle transport model
Nelson, Noel; Azmy, Yousry
2017-09-01
The data integration with modeled predictions (DIMP) model is a promising inverse radiation transport method for solving the special nuclear material (SNM) holdup problem. Unlike previous methods, DIMP is a completely passive nondestructive assay technique that requires no initial assumptions regarding the source distribution or active measurement time. DIMP predicts the most probable source location and distribution through Bayesian inference and quasi-Newtonian optimization of predicted detector responses (computed using the adjoint transport solution) against measured responses. DIMP performs well with forward hemispherical collimation and unshielded measurements, but several considerations are required when using narrow-view collimated detectors. DIMP converged well to the correct source distribution as the number of synthetic responses increased. DIMP also performed well for the first experimental validation exercise after applying a collimation factor and sufficiently reducing the extent of the source search volume to prevent the optimizer from getting stuck in local minima. DIMP's simple point detector response function (DRF) is being improved to address coplanar false positive/negative responses, and an angular DRF is being considered for integration with the next version of DIMP to account for highly collimated responses. Overall, DIMP shows promise for solving the SNM holdup inverse problem, especially once an improved optimization algorithm is implemented.
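The optimization step at DIMP's core, matching predicted detector responses to measurements with a quasi-Newton method, can be sketched as follows. The response matrix, measurements, and mild regularization are hypothetical placeholders for the adjoint-derived quantities in the paper:

```python
import numpy as np
from scipy.optimize import minimize

# Fit a discretized source distribution s so that predicted responses R s
# match measurements, via a quasi-Newton (L-BFGS-B) minimizer.
rng = np.random.default_rng(0)
n_detectors, n_voxels = 12, 30
R = rng.uniform(0.0, 1.0, (n_detectors, n_voxels))   # toy response matrix
s_true = np.zeros(n_voxels); s_true[8:11] = 5.0      # hidden source
measured = R @ s_true * (1 + 0.02 * rng.standard_normal(n_detectors))

def objective(s):
    r = R @ s - measured
    return 0.5 * r @ r + 1e-3 * s @ s                # misfit + mild Tikhonov

def gradient(s):
    return R.T @ (R @ s - measured) + 2e-3 * s

res = minimize(objective, np.ones(n_voxels), jac=gradient,
               method="L-BFGS-B", bounds=[(0, None)] * n_voxels)
print("recovered peak near voxels 8-10:", np.argmax(res.x))
```

The nonnegativity bounds play the same practical role as DIMP's restriction of the source search volume: they shrink the feasible region and help keep the optimizer out of spurious minima.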
NASA Astrophysics Data System (ADS)
Melli, S. Ali; Wahid, Khan A.; Babyn, Paul; Cooper, David M. L.; Gopi, Varun P.
2016-12-01
Synchrotron X-ray Micro Computed Tomography (Micro-CT) is an imaging technique that is increasingly used for non-invasive in vivo preclinical imaging. However, it often requires a large number of projections from many different angles to reconstruct high-quality images, leading to high radiation doses and long scan times. To utilize this imaging technique further for in vivo imaging, reconstruction algorithms are needed that reduce the radiation dose and scan time without degrading reconstructed image quality. This research focuses on combining gradient-based Douglas-Rachford splitting with discrete wavelet packet shrinkage image denoising to design an algorithm for reconstruction of large-scale reduced-view synchrotron Micro-CT images with acceptable quality metrics. These quality metrics are computed by comparing the reconstructed images with a high-dose reference image reconstructed from 1800 equally spaced projections spanning 180°. Visual and quantitative performance assessment of a synthetic head phantom and a femoral cortical bone sample, imaged in the biomedical imaging and therapy bending magnet beamline at the Canadian Light Source, demonstrates that the proposed algorithm is superior to existing reconstruction algorithms. Using the proposed reconstruction algorithm to reduce the number of projections in synchrotron Micro-CT is an effective way to reduce the overall radiation dose and scan time, which improves in vivo imaging protocols.
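The splitting structure underlying such a reconstruction can be shown on a toy problem. This sketch runs the classic Douglas-Rachford fixed-point iteration for min f(x) + g(x), with a quadratic data-fidelity term standing in for the (gradient-handled) projection operator and plain soft-thresholding standing in for wavelet-packet shrinkage; it is not the paper's algorithm, only the iteration pattern:

```python
import numpy as np

# Douglas-Rachford splitting for
#   f(x) = 0.5 * ||x - y||^2    (data-fidelity stand-in, closed-form prox)
#   g(x) = lam * ||x||_1        (shrinkage stand-in for wavelet denoising)
rng = np.random.default_rng(1)
x_true = np.zeros(200); x_true[[20, 60, 140]] = [3.0, -2.0, 4.0]
y = x_true + 0.3 * rng.standard_normal(200)          # noisy observation

gamma, lam = 1.0, 0.4

def prox_f(z):                       # prox of 0.5*||x - y||^2 with step gamma
    return (z + gamma * y) / (1.0 + gamma)

def prox_g(z):                       # soft-thresholding with step gamma
    return np.sign(z) * np.maximum(np.abs(z) - gamma * lam, 0.0)

z = np.zeros_like(y)
for _ in range(100):                 # Douglas-Rachford fixed-point iteration
    x = prox_f(z)
    z = z + prox_g(2.0 * x - z) - x

print("nonzeros recovered at:", np.flatnonzero(np.abs(prox_f(z)) > 0.5))
```

In the CT setting, prox_f would involve the measured sinogram and the forward projector, which is why a gradient-based variant is attractive for large-scale data.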
The Continuous Intercomparison of Radiation Codes (CIRC): Phase I Cases
NASA Technical Reports Server (NTRS)
Oreopoulos, Lazaros; Mlawer, Eli; Delamere, Jennifer; Shippert, Timothy; Turner, David D.; Miller, Mark A.; Minnis, Patrick; Clough, Shepard; Barker, Howard; Ellingson, Robert
2007-01-01
CIRC aspires to be the successor to ICRCCM (Intercomparison of Radiation Codes in Climate Models). It is envisioned as an evolving and regularly updated reference source for GCM-type radiative transfer (RT) code evaluation, with the principal goal of contributing to the improvement of RT parameterizations. CIRC is jointly endorsed by DOE's Atmospheric Radiation Measurement (ARM) program and the GEWEX Radiation Panel (GRP). CIRC's goal is to provide test cases for which GCM RT algorithms should be performing at their best, i.e., well-characterized clear-sky and homogeneous, overcast cloudy cases. What distinguishes CIRC from previous intercomparisons is that its pool of cases is based on observed datasets. The bulk of the atmospheric and surface input as well as radiative fluxes come from ARM observations as documented in the Broadband Heating Rate Profile (BBHRP) product. BBHRP also provides reference calculations from AER's RRTM RT algorithms that can be used to select an optimal set of cases and to provide a first-order estimate of our ability to achieve radiative flux closure given the limitations in our knowledge of the atmospheric state.
Frontiers in Numerical Relativity
NASA Astrophysics Data System (ADS)
Evans, Charles R.; Finn, Lee S.; Hobill, David W.
2011-06-01
Preface; Participants; Introduction; 1. Supercomputing and numerical relativity: a look at the past, present and future David W. Hobill and Larry L. Smarr; 2. Computational relativity in two and three dimensions Stuart L. Shapiro and Saul A. Teukolsky; 3. Slowly moving maximally charged black holes Robert C. Ferrell and Douglas M. Eardley; 4. Kepler's third law in general relativity Steven Detweiler; 5. Black hole spacetimes: testing numerical relativity David H. Bernstein, David W. Hobill and Larry L. Smarr; 6. Three dimensional initial data of numerical relativity Ken-ichi Oohara and Takashi Nakamura; 7. Initial data for collisions of black holes and other gravitational miscellany James W. York, Jr.; 8. Analytic-numerical matching for gravitational waveform extraction Andrew M. Abrahams; 9. Supernovae, gravitational radiation and the quadrupole formula L. S. Finn; 10. Gravitational radiation from perturbations of stellar core collapse models Edward Seidel and Thomas Moore; 11. General relativistic implicit radiation hydrodynamics in polar sliced space-time Paul J. Schinder; 12. General relativistic radiation hydrodynamics in spherically symmetric spacetimes A. Mezzacappa and R. A. Matzner; 13. Constraint preserving transport for magnetohydrodynamics John F. Hawley and Charles R. Evans; 14. Enforcing the momentum constraints during axisymmetric spacelike simulations Charles R. Evans; 15. Experiences with an adaptive mesh refinement algorithm in numerical relativity Matthew W. Choptuik; 16. The multigrid technique Gregory B. Cook; 17. Finite element methods in numerical relativity P. J. Mann; 18. Pseudo-spectral methods applied to gravitational collapse Silvano Bonazzola and Jean-Alain Marck; 19. Methods in 3D numerical relativity Takashi Nakamura and Ken-ichi Oohara; 20. Nonaxisymmetric rotating gravitational collapse and gravitational radiation Richard F. Stark; 21. Nonaxisymmetric neutron star collisions: initial results using smooth particle hydrodynamics Christopher S. Kochanek and Charles R. Evans; 22. Relativistic hydrodynamics James R. Wilson and Grant J. Mathews; 23. Computational dynamics of U(1) gauge strings: probability of reconnection of cosmic strings Richard A. Matzner; 24. Dynamically inhomogenous cosmic nucleosynthesis Hannu Kurki-Suonio; 25. Initial value solutions in planar cosmologies Peter Anninos, Joan Centrella and Richard Matzner; 26. An algorithmic overview of an Einstein solver Roger Ove; 27. A PDE compiler for full-metric numerical relativity Jonathan Thornburg; 28. Numerical evolution on null cones R. Gomez and J. Winicour; 29. Normal modes coupled to gravitational waves in a relativistic star Yasufumi Kojima; 30. Cosmic censorship and numerical relativity Dalia S. Goldwirth, Amos Ori and Tsvi Piran.
Profiling Transboundary Aerosols over Taiwan and Assessing Their Radiative Effects
NASA Technical Reports Server (NTRS)
Wang, Sheng-Hsiang; Lin, Neng-Huei; Chou, Ming-Dah; Tsay, Si-Chee; Welton, Ellsworth J.; Hsu, N. Christina; Giles, David M.; Liu, Gin-Rong; Holben, Brent N.
2010-01-01
A synergistic process was developed to study the vertical distributions of aerosol optical properties and their effects on solar heating using data retrieved from ground-based radiation measurements and radiative transfer simulations. Continuous MPLNET and AERONET observations were made at a rural site in northern Taiwan from 2005 to 2007. The aerosol vertical extinction profiles retrieved from ground-based lidar measurements were categorized into near-surface, mixed, and two-layer transport types, representing 76% of all cases. Fine-mode (Angstrom exponent, alpha, approx. 1.4) and moderately absorbing aerosols (columnar single-scattering albedo approx. 0.93, asymmetry factor approx. 0.73 at 440 nm wavelength) dominated in this region. The column-integrated aerosol optical thickness at 500 nm (tau(sub 500nm)) ranges from 0.1 to 0.6 for the near-surface transport type, but can double in the presence of upper-layer aerosol transport. We utilize aerosol radiative efficiency (ARE; the impact on solar radiation per unit change of tau(sub 500nm)) to quantify the radiative effects due to different vertical distributions of aerosols. Our results show that the ARE at the top of the atmosphere (-23 W/sq m) is weakly sensitive to aerosol vertical distributions confined to the lower troposphere. On the other hand, values of the ARE at the surface are -44.3, -40.6 and -39.7 W/sq m for the near-surface, mixed, and two-layer transport types, respectively. Further analyses show that the impact of aerosols on the vertical profile of solar heating is larger for the near-surface transport type than for the two-layer transport type. The impacts of aerosols on the surface radiation and the solar heating profiles have implications for the stability and convection in the lower troposphere.
Algorithm and program for information processing with the filin apparatus
NASA Technical Reports Server (NTRS)
Gurin, L. S.; Morkrov, V. S.; Moskalenko, Y. I.; Tsoy, K. A.
1979-01-01
The reduction of spectral radiation data from space sources is described. The algorithm and program for identifying segments of information obtained from the Filin telescope-spectrometer on Salyut-4 are presented. The information segments represent suspected X-ray sources. The proposed algorithm is a lowest-level algorithm: following evaluation, information free of uninformative segments is subjected to further processing with algorithms of a higher level. The language used is FORTRAN 4.
NASA Astrophysics Data System (ADS)
Liu, Zhiquan; Liu, Quanhua; Lin, Hui-Chuan; Schwartz, Craig S.; Lee, Yen-Huei; Wang, Tijian
2011-12-01
Assimilation of the Moderate Resolution Imaging Spectroradiometer (MODIS) total aerosol optical depth (AOD) retrieval products (at 550 nm wavelength) from both Terra and Aqua satellites has been developed within the National Centers for Environmental Prediction (NCEP) Gridpoint Statistical Interpolation (GSI) three-dimensional variational (3DVAR) data assimilation system. This newly developed algorithm allows, in a one-step procedure, the analysis of 3-D mass concentration of 14 aerosol variables from the Goddard Chemistry Aerosol Radiation and Transport (GOCART) module. The Community Radiative Transfer Model (CRTM) was extended to calculate AOD using GOCART aerosol variables as input. Both the AOD forward model and the corresponding Jacobian model were developed within the CRTM and used in the 3DVAR minimization algorithm to compute the AOD cost function and its gradient with respect to 3-D aerosol mass concentration. The impact of MODIS AOD data assimilation was demonstrated by application to a dust storm from 17 to 24 March 2010 over East Asia. The aerosol analyses initialized Weather Research and Forecasting/Chemistry (WRF/Chem) model forecasts. Results indicate that assimilating MODIS AOD substantially improves aerosol analyses and subsequent forecasts when compared to MODIS AOD, independent AOD observations from the Aerosol Robotic Network (AERONET) and the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) instrument, and surface PM10 (particulate matter with diameters less than 10 μm) observations.
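The 3DVAR cost function and gradient described here have a standard form that is easy to sketch. In this toy version a random linear operator stands in for the CRTM AOD Jacobian, and the quadratic problem is solved in closed form rather than by iterative minimization:

```python
import numpy as np

# Incremental-3DVAR sketch:
#   J(x) = 0.5 (x - xb)^T B^-1 (x - xb) + 0.5 (H x - y)^T R^-1 (H x - y)
# with x the aerosol state, y observed AOD, H a linearized AOD operator.
rng = np.random.default_rng(2)
n_state, n_obs = 14, 5                       # e.g., 14 GOCART variables
xb = rng.uniform(1.0, 3.0, n_state)          # background aerosol state
H = rng.uniform(0.0, 0.2, (n_obs, n_state))  # hypothetical linearized operator
y = H @ (xb * 1.5)                           # synthetic AOD observations

B_inv = np.eye(n_state) / 0.5**2             # background-error precision
R_inv = np.eye(n_obs) / 0.05**2              # observation-error precision

def grad(x):                                 # gradient used by the minimizer
    return B_inv @ (x - xb) + H.T @ R_inv @ (H @ x - y)

# The quadratic J has a closed-form minimizer via the normal equations.
A = B_inv + H.T @ R_inv @ H
x_a = np.linalg.solve(A, B_inv @ xb + H.T @ R_inv @ y)
print("gradient norm at analysis:", np.linalg.norm(grad(x_a)))
```

In the operational system the state is far too large for a direct solve, which is why the GSI minimizes J iteratively using exactly this gradient.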
Burn, K W; Daffara, C; Gualdrini, G; Pierantoni, M; Ferrari, P
2007-01-01
The question of Monte Carlo simulation of radiation transport in voxel geometries is addressed. Patched versions of the MCNP and MCNPX codes are developed, aimed at transporting radiation both in the standard geometry mode and in a voxel-based geometry treatment. The patched code reads an unformatted FORTRAN file derived from DICOM format data and uses special subroutines to handle voxel-to-voxel radiation transport. The various phases of the development of the methodology are discussed together with the new input options. Examples are given of employment of the code in internal and external dosimetry, and comparisons with results from other groups are reported.
NASA Technical Reports Server (NTRS)
1979-01-01
Earth and solar radiation budget measurements were examined. Sensor calibration and measurement accuracy were emphasized. Past work on models of the earth's radiation field, which must be used in reducing observations of that field, was reviewed. Using a finite difference radiative transfer algorithm, models of the angular and spectral dependence of the earth's radiation field were developed.
Mohamed, Abdallah S. R.; Ruangskul, Manee-Naad; Awan, Musaddiq J.; Baron, Charles A.; Kalpathy-Cramer, Jayashree; Castillo, Richard; Castillo, Edward; Guerrero, Thomas M.; Kocak-Uzel, Esengul; Yang, Jinzhong; Court, Laurence E.; Kantor, Michael E.; Gunn, G. Brandon; Colen, Rivka R.; Frank, Steven J.; Garden, Adam S.; Rosenthal, David I.
2015-01-01
Purpose To develop a quality assurance (QA) workflow by using a robust, curated, manually segmented anatomic region-of-interest (ROI) library as a benchmark for quantitative assessment of different image registration techniques used for head and neck radiation therapy–simulation computed tomography (CT) with diagnostic CT coregistration. Materials and Methods Radiation therapy–simulation CT images and diagnostic CT images in 20 patients with head and neck squamous cell carcinoma treated with curative-intent intensity-modulated radiation therapy between August 2011 and May 2012 were retrospectively retrieved with institutional review board approval. Sixty-eight reference anatomic ROIs with gross tumor and nodal targets were then manually contoured on images from each examination. Diagnostic CT images were registered with simulation CT images rigidly and by using four deformable image registration (DIR) algorithms: atlas based, B-spline, demons, and optical flow. The resultant deformed ROIs were compared with manually contoured reference ROIs by using similarity coefficient metrics (i.e., Dice similarity coefficient) and surface distance metrics (i.e., 95% maximum Hausdorff distance). The nonparametric Steel test with control was used to compare different DIR algorithms with rigid image registration (RIR) by using the post hoc Wilcoxon signed-rank test for stratified metric comparison. Results A total of 2720 anatomic and 50 tumor and nodal ROIs were delineated. All DIR algorithms showed improved performance over RIR for anatomic and target ROI conformance, as shown for most comparison metrics (Steel test, P < .008 after Bonferroni correction). The performance of different algorithms varied substantially with stratification by specific anatomic structures or category and simulation CT section thickness. Conclusion Development of a formal ROI-based QA workflow for registration assessment demonstrated improved performance with DIR techniques over RIR. After QA, DIR implementation should be the standard for head and neck diagnostic CT and simulation CT alignment, especially for target delineation. © RSNA, 2014 Online supplemental material is available for this article. PMID:25380454
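The two comparison metrics named above can be computed from binary ROI masks as follows. The masks are synthetic, and for brevity all mask voxels are used as surface-point stand-ins; a production QA tool would extract true surfaces and work in 3D with physical spacing:

```python
import numpy as np

# Dice similarity coefficient and a 95th-percentile Hausdorff distance
# between a reference ROI and a deformably mapped ROI.
def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff95(a, b, spacing=1.0):
    pa = np.argwhere(a) * spacing            # point clouds (all mask voxels
    pb = np.argwhere(b) * spacing            # as a surface stand-in)
    d = np.sqrt(((pa[:, None, :] - pb[None, :, :]) ** 2).sum(-1))
    return max(np.percentile(d.min(axis=1), 95),
               np.percentile(d.min(axis=0), 95))

ref = np.zeros((40, 40), bool); ref[10:30, 10:30] = True   # reference ROI
warp = np.zeros((40, 40), bool); warp[12:31, 11:30] = True # deformed ROI
print(f"Dice = {dice(ref, warp):.3f}, HD95 = {hausdorff95(ref, warp):.1f}")
```

Dice rewards volumetric overlap while HD95 penalizes boundary excursions, which is why the study reports both families of metrics.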
Qi, Hong; Qiao, Yao-Bin; Ren, Ya-Tao; Shi, Jing-Wen; Zhang, Ze-Yu; Ruan, Li-Ming
2016-10-17
Sequential quadratic programming (SQP) is used as the optimization algorithm to reconstruct optical parameters based on the time-domain radiative transfer equation (TD-RTE). Numerous time-resolved measurement signals are obtained using the TD-RTE as the forward model. For high computational efficiency, the gradient of the objective function is calculated using an adjoint equation technique. The SQP algorithm is employed to solve the inverse problem, and a regularization term based on the generalized Gaussian Markov random field (GGMRF) model is used to overcome the ill-posedness of the problem. Simulated results show that the proposed reconstruction scheme performs efficiently and accurately.
NASA Technical Reports Server (NTRS)
Alfano, Robert R. (Inventor); Cai, Wei (Inventor)
2007-01-01
A reconstruction technique for reducing the computational burden in 3D image processing, wherein the reconstruction procedure comprises an inverse and a forward model. The inverse model uses a hybrid dual Fourier algorithm that combines a 2D Fourier inversion with a 1D matrix inversion to provide high-speed inverse computations. The inverse algorithm uses a hybrid transform to provide fast Fourier inversion for data from multiple sources and multiple detectors. The forward model is based on an analytical cumulant solution of a radiative transfer equation. The accurate analytical form of the solution to the radiative transfer equation provides an efficient formalism for fast computation of the forward model.
Characterization of Asian Dust Properties Near Source Region During ACE-Asia
NASA Technical Reports Server (NTRS)
Tsay, Si-Chee; Hsu, N. Christina; King, Michael D.; Kaufman, Yoram J.; Herman, Jay R.
2004-01-01
Asian dust typically originates in desert areas far from polluted urban regions. During transport, dust layers can interact with anthropogenic sulfate and soot aerosols from heavily polluted urban areas. Added to the complex effects of clouds and natural marine aerosols, dust particles reaching the marine environment can have drastically different properties than those from the source. Thus, understanding the unique temporal and spatial variations of Asian aerosols is of special importance in regional-to-global climate issues such as radiative forcing, the hydrological cycle, and primary biological productivity in the mid-Pacific Ocean. During the ACE-Asia campaign, we acquired ground-based (temporal) and satellite (spatial) measurements to infer aerosol physical/optical/radiative properties, column precipitable water amount, and surface reflectivity over this region. The inclusion of flux measurements permits the determination of aerosol radiative flux in addition to measurements of loading and optical depth. At the time of the Terra/MODIS, SeaWiFS, TOMS and other satellite overpasses, these ground-based observations can provide valuable data to compare with satellite retrievals over land. In this paper, we demonstrate the new capability of the Deep Blue algorithm to track the evolution of Asian dust storms from sources to sinks. Although large areas are often covered by clouds in the dust season in East Asia, this algorithm is able to distinguish heavy dust from clouds over the entire region. Examination of the retrieved daily maps of dust plumes over East Asia clearly identifies the sources contributing to the dust loading in the atmosphere. We have compared the satellite-retrieved aerosol optical thickness to the ground-based measurements and obtained reasonable agreement between the two. Our results also indicate that there is a large difference in the retrieved value of spectral single scattering albedo of windblown dust between different sources in East Asia.
Wang, Lilie; Ding, George X
2018-06-12
Therapeutic radiation delivered to cancer patients is accompanied by unintended radiation to organs outside the treatment field. It is known that model-based dose algorithms have limitations in calculating out-of-field doses. This study evaluated the out-of-field dose calculated by the Varian Eclipse treatment planning system (v.11 with the AAA algorithm) in realistic treatment plans, with the goal of estimating the uncertainties of calculated organ doses. Photon beam phase-space files for the TrueBeam linear accelerator were provided by Varian. These were used as incident sources in EGSnrc Monte Carlo simulations of radiation transport through the downstream jaws and MLC. Dynamic movements of the MLC leaves were fully modeled based on treatment plans using IMRT or VMAT techniques. The Monte Carlo calculated out-of-field doses were then compared with those calculated by Eclipse. The dose comparisons were performed for different beam energies and treatment sites, including head-and-neck, lung, and pelvis. For 6 MV (FF/FFF), 10 MV (FF/FFF), and 15 MV (FF) beams, Eclipse underestimated out-of-field local doses by 30%-50% compared with Monte Carlo calculations when the local dose was <1% of the prescribed dose. The accuracy of out-of-field dose calculations using Eclipse is improved when the collimator jaws are set at the smallest possible aperture for the MLC openings. The Eclipse system consistently underestimates out-of-field dose by a factor of 2 for all beam energies studied at local dose levels of less than 1% of the prescribed dose. These findings are useful in providing information on the uncertainties of out-of-field organ doses calculated by the Eclipse treatment planning system. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
NASA Astrophysics Data System (ADS)
Xiao, Ying; Michalski, Darek; Censor, Yair; Galvin, James M.
2004-07-01
The efficient delivery of intensity modulated radiation therapy (IMRT) depends on finding optimized beam intensity patterns that produce dose distributions which meet given constraints for the tumour as well as any critical organs to be spared. Many optimization algorithms that are used for beamlet-based inverse planning are susceptible to large variations of neighbouring intensities. Accurately delivering an intensity pattern with a large number of extrema can prove impossible given the mechanical limitations of standard multileaf collimator (MLC) delivery systems. In this study, we apply Cimmino's simultaneous projection algorithm to the beamlet-based inverse planning problem, modelled mathematically as a system of linear inequalities. We show that using this method allows us to arrive at a smoother intensity pattern. Including nonlinear terms in the simultaneous projection algorithm to deal with dose-volume histogram (DVH) constraints does not, in our experimental observation, compromise this property. The smoothness properties are compared with those from other optimization algorithms, including simulated annealing and the gradient descent method. The simultaneous nature of these algorithms is ideally suited to parallel computing technologies.
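Cimmino's method averages simultaneous projections onto the half-spaces defined by the dose inequalities, which is what makes it parallelizable and tends to produce smooth solutions. A minimal sketch on toy dose-deposition matrices (all values hypothetical) follows:

```python
import numpy as np

# Cimmino's simultaneous projections for G x <= h over beamlet weights x:
# target minimum dose is encoded as -A_tgt x <= -d_min.
rng = np.random.default_rng(3)
n_beamlets = 20
A_tgt = rng.uniform(0.5, 1.0, (30, n_beamlets))   # dose rows, target voxels
A_oar = rng.uniform(0.0, 0.4, (40, n_beamlets))   # dose rows, organ at risk

G = np.vstack([A_oar, -A_tgt])
h = np.concatenate([np.full(40, 30.0), np.full(30, -60.0)])

w = np.full(len(h), 1.0 / len(h))                 # equal projection weights
lam = 1.0                                         # relaxation parameter
row_norm2 = (G * G).sum(axis=1)

x = np.zeros(n_beamlets)
for _ in range(2000):
    viol = np.maximum(G @ x - h, 0.0)             # residuals of violated rows
    # Simultaneous step: weighted average of half-space projections.
    x = x - lam * (G.T @ (w * viol / row_norm2))
    x = np.maximum(x, 0.0)                        # nonnegative intensities

print("max constraint violation:", np.max(G @ x - h))
```

Because every violated constraint pulls the iterate only a weighted fraction of the way, the update averages out sharp beamlet-to-beamlet differences instead of chasing individual constraints, which is the intuition behind the smoother patterns reported above.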
Transport implementation of the Bernstein-Vazirani algorithm with ion qubits
NASA Astrophysics Data System (ADS)
Fallek, S. D.; Herold, C. D.; McMahon, B. J.; Maller, K. M.; Brown, K. R.; Amini, J. M.
2016-08-01
Using trapped ion quantum bits in a scalable microfabricated surface trap, we perform the Bernstein-Vazirani algorithm. Our architecture takes advantage of the ion transport capabilities of such a trap. The algorithm is demonstrated using two- and three-ion chains. For three ions, an improvement is achieved compared to a classical system using the same number of oracle queries. For two ions and one query, we correctly determine an unknown bit string with probability 97.6(8)%. For three ions, we succeed with probability 80.9(3)%.
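The circuit being run on the ions is easy to simulate classically for small registers, which makes the single-query behavior concrete. This sketch models only the idealized algorithm (Hadamards, a phase oracle for f(x) = s·x mod 2, Hadamards), not the ion-transport architecture or its error sources:

```python
import numpy as np

# State-vector simulation of Bernstein-Vazirani: one oracle query
# deterministically reveals the hidden bit string s.
def hadamard_all(state, n):
    """Apply H on every qubit via the fast Walsh-Hadamard transform."""
    state = state.copy()
    for q in range(n):
        state = state.reshape(-1, 2, 2 ** q)
        a, b = state[:, 0, :].copy(), state[:, 1, :].copy()
        state[:, 0, :], state[:, 1, :] = (a + b), (a - b)
        state = state.reshape(-1)
    return state / np.sqrt(2.0 ** n)

def bernstein_vazirani(s_bits):
    n = len(s_bits)
    s = int("".join(map(str, s_bits)), 2)
    state = np.zeros(2 ** n); state[0] = 1.0        # |00..0>
    state = hadamard_all(state, n)                  # uniform superposition
    # Phase oracle: |x> picks up (-1)^{popcount(s & x)}.
    phases = (-1.0) ** np.array([bin(s & x).count("1")
                                 for x in range(2 ** n)])
    state = hadamard_all(state * phases, n)
    return int(np.argmax(np.abs(state)))            # measurement yields s

print(format(bernstein_vazirani([1, 0, 1]), "03b"))  # -> 101
```

A classical probe needs n oracle queries (one per bit of s), so the three-ion, single-query success probabilities quoted above are directly comparable to this n-query classical baseline.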
Sensitivity of NTCP parameter values against a change of dose calculation algorithm.
Brink, Carsten; Berg, Martin; Nielsen, Morten
2007-09-01
Optimization of radiation treatment planning requires estimations of the normal tissue complication probability (NTCP). A number of models exist that estimate NTCP from a calculated dose distribution. Since different dose calculation algorithms use different approximations the dose distributions predicted for a given treatment will in general depend on the algorithm. The purpose of this work is to test whether the optimal NTCP parameter values change significantly when the dose calculation algorithm is changed. The treatment plans for 17 breast cancer patients have retrospectively been recalculated with a collapsed cone algorithm (CC) to compare the NTCP estimates for radiation pneumonitis with those obtained from the clinically used pencil beam algorithm (PB). For the PB calculations the NTCP parameters were taken from previously published values for three different models. For the CC calculations the parameters were fitted to give the same NTCP as for the PB calculations. This paper demonstrates that significant shifts of the NTCP parameter values are observed for three models, comparable in magnitude to the uncertainties of the published parameter values. Thus, it is important to quote the applied dose calculation algorithm when reporting estimates of NTCP parameters in order to ensure correct use of the models.
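One common form of the NTCP models whose parameters are refit here is the Lyman-Kutcher-Burman (LKB) model. The sketch below evaluates it from a differential DVH; the DVH bins are hypothetical and the parameter values are merely of the order of published radiation pneumonitis fits, not the paper's fitted values:

```python
import numpy as np
from math import erf, sqrt

# LKB model: NTCP = Phi((gEUD - TD50) / (m * TD50)), with the generalized
# equivalent uniform dose gEUD = (sum_i v_i * D_i^(1/n))^n.
def lkb_ntcp(doses_gy, volumes, n, m, td50):
    v = np.asarray(volumes, float)
    v = v / v.sum()                                      # fractional volumes
    eud = (v @ np.asarray(doses_gy, float) ** (1.0 / n)) ** n
    t = (eud - td50) / (m * td50)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))              # standard normal CDF

# Hypothetical lung DVH; PB and CC algorithms would yield slightly
# different dose bins and hence different best-fit (n, m, TD50).
doses = np.array([5.0, 10.0, 15.0, 20.0, 25.0])
vols = np.array([0.40, 0.25, 0.15, 0.12, 0.08])
print(f"NTCP = {lkb_ntcp(doses, vols, n=1.0, m=0.37, td50=30.8):.3f}")
```

Because NTCP depends on the dose distribution only through gEUD, any systematic dose shift between algorithms translates directly into a shift of the fitted TD50 and m, which is the effect the paper quantifies.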
bhlight: General Relativistic Radiation Magnetohydrodynamics with Monte Carlo Transport
Ryan, Benjamin R; Dolence, Joshua C.; Gammie, Charles F.
2015-06-25
We present bhlight, a numerical scheme for solving the equations of general relativistic radiation magnetohydrodynamics using a direct Monte Carlo solution of the frequency-dependent radiative transport equation. bhlight is designed to evolve black hole accretion flows at intermediate accretion rate, in the regime between the classical radiatively efficient disk and the radiatively inefficient accretion flow (RIAF), in which global radiative effects play a sub-dominant but non-negligible role in disk dynamics. We describe the governing equations, numerical method, idiosyncrasies of our implementation, and a suite of test and convergence results. We also describe example applications to radiative Bondi accretion and to a slowly accreting Kerr black hole in axisymmetry.
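The elementary move in any such Monte Carlo transport solver is sampling an optical depth to the next interaction for each photon packet (bhlight's "superphotons"). This sketch does it in a flat-space, grey, 1D slab with toy coefficients, a drastic simplification of the general relativistic, frequency-dependent treatment in the code:

```python
import numpy as np

# Monte Carlo slab transport: sample tau = -ln(xi), convert to path length
# through the local opacity, then branch on absorption vs. scattering.
rng = np.random.default_rng(4)
kappa_abs, kappa_sca, rho = 0.3, 0.7, 1.0     # cm^2/g, g/cm^3 (toy values)
kappa_tot = kappa_abs + kappa_sca
L = 5.0                                       # slab thickness, cm

n_packets, escaped = 10000, 0
for _ in range(n_packets):
    x, mu = 0.0, 1.0                          # start at slab face, forward
    while True:
        tau = -np.log(rng.random())           # sampled optical depth
        x += mu * tau / (kappa_tot * rho)     # corresponding path length
        if x >= L or x <= 0.0:                # escaped front or back face
            escaped += x >= L
            break
        if rng.random() < kappa_abs / kappa_tot:
            break                             # absorbed in the slab
        mu = 1.0 if rng.random() < 0.5 else -1.0   # isotropic 1D scatter

print("transmitted fraction:", escaped / n_packets)
```

The same sample-interact-branch loop, with geodesic propagation replacing straight lines and emission/absorption coefficients evaluated in the fluid frame, is what couples the radiation to the MHD evolution in schemes of this type.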
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cunliffe, Alexandra R.; Armato, Samuel G.; White, Bradley
2015-01-15
Purpose: To characterize the effects of deformable image registration of serial computed tomography (CT) scans on the radiation dose calculated from a treatment planning scan. Methods: Eighteen patients who received curative doses (≥60 Gy, 2 Gy/fraction) of photon radiation therapy for lung cancer treatment were retrospectively identified. For each patient, a diagnostic-quality pretherapy (4–75 days) CT scan and a treatment planning scan with an associated dose map were collected. To establish correspondence between scan pairs, a researcher manually identified anatomically corresponding landmark point pairs between the two scans. Pretherapy scans then were coregistered with planning scans (and associated dose maps) using the demons deformable registration algorithm and two variants of the Fraunhofer MEVIS algorithm ("Fast" and "EMPIRE10"). Landmark points in each pretherapy scan were automatically mapped to the planning scan using the displacement vector field output from each of the three algorithms. The Euclidean distance between manually and automatically mapped landmark points (d_E) and the absolute difference in planned dose (|ΔD|) were calculated. Using regression modeling, |ΔD| was modeled as a function of d_E, dose (D), dose standard deviation (SD_dose) in an eight-pixel neighborhood, and the registration algorithm used. Results: Over 1400 landmark point pairs were identified, with 58–93 (median: 84) points identified per patient. Average |ΔD| across patients was 3.5 Gy (range: 0.9–10.6 Gy). Registration accuracy was highest using the Fraunhofer MEVIS EMPIRE10 algorithm, with an average d_E across patients of 5.2 mm (compared with >7 mm for the other two algorithms). Consequently, average |ΔD| was also lowest using the Fraunhofer MEVIS EMPIRE10 algorithm. |ΔD| increased significantly as a function of d_E (0.42 Gy/mm), D (0.05 Gy/Gy), SD_dose (1.4 Gy/Gy), and the algorithm used (≤1 Gy). Conclusions: An average error of <4 Gy in radiation dose was introduced when points were mapped between CT scan pairs using deformable registration, with the majority of points yielding dose-mapping error <2 Gy (approximately 3% of the total prescribed dose). Registration accuracy was highest using the Fraunhofer MEVIS EMPIRE10 algorithm, resulting in the smallest errors in mapped dose. Dose differences following registration increased significantly with increasing spatial registration errors, dose, and dose gradient (i.e., SD_dose). This model provides a measurement of the uncertainty in the radiation dose when points are mapped between serial CT scans through deformable registration.
Cirrus and Water Vapor Transport in the Tropical Tropopause Layer
NASA Astrophysics Data System (ADS)
Dinh, Tra Phuong
Simulations of tropical-tropopause-layer (TTL) cirrus under the influence of a large-scale equatorial Kelvin wave have been performed in two dimensions. These simulations show that, even under the influence of the large-scale wave, radiatively induced dynamics in TTL cirrus plays an important role in the transport of water vapor in the vertical direction. In a typical TTL cirrus, the heating that results from absorption of radiation by ice crystals induces a mesoscale circulation. Advection of ice and water vapor by the radiatively induced circulation leads to the persistence of the cloud and upward advection of the cloudy air. Upward advection of the cloudy air is equivalent to upward transport of water vapor when the air above the cloud is drier than the cloudy air, and downward transport otherwise. In TTL cirrus, microphysical processes also contribute to transport of water vapor in the vertical direction. Ice nucleation and growth, followed by sedimentation and sublimation, always lead to downward transport of water vapor. The magnitude of the downward transport by microphysical processes increases with the relative humidity of the air surrounding the cloud. Moisture in the surrounding environment is important because there are continuous interactions between the cloudy and environmental air throughout the cloud boundary. In our simulations, when the air surrounding the cloud is subsaturated, hence drier than the cloudy air, the magnitude of the downward transport due to microphysical processes is smaller than that of the upward transport due to the radiatively induced advection of water vapor. The net result is upward transport of water vapor, and equivalently hydration of the lower stratosphere. On the other hand, when the surrounding air is supersaturated, hence moister than the cloudy air, microphysical and radiatively induced dynamical processes work in concert to induce downward transport of water vapor, that is, dehydration of the lower stratosphere. TTL cirrus processes also depend sensitively on the deposition coefficient of water vapor on ice crystals. The deposition coefficient determines the depositional growth rate of ice crystals, and hence the microphysical and radiative properties of the cloud. In our simulations, larger values of the deposition coefficient correspond to fewer ice crystals nucleated during homogeneous freezing, larger ice crystal sizes, faster ice sedimentation, a smaller radiative heating rate and weaker dynamics. These results indicate that detailed observations of the relative humidity in the vicinity of TTL cirrus and accurate laboratory measurements of the deposition coefficient are necessary to quantify the impact of TTL cirrus on the dehydration of the stratosphere. This research highlights the complex role of microphysical, radiative and dynamical processes in the transport of water vapor within TTL cirrus. It shows that under certain realistic conditions, TTL cirrus may lead to upward transport of water vapor, which results in moistening of the lower stratosphere. Thus it is not accurate to always associate TTL cirrus with stratospheric dehydration.
Note: thermal imaging enhancement algorithm for gas turbine aerothermal characterization.
Beer, S K; Lawson, S A
2013-08-01
An algorithm was developed to convert radiation intensity images acquired using a black and white CCD camera to thermal images without requiring knowledge of incident background radiation. This unique infrared (IR) thermography method was developed to determine aerothermal characteristics of advanced cooling concepts for gas turbine cooling application. Compared to IR imaging systems traditionally used for gas turbine temperature monitoring, the system developed for the current study is relatively inexpensive and does not require calibration with surface mounted thermocouples.
Computerized tomography platform using beta rays
NASA Astrophysics Data System (ADS)
Paetkau, Owen; Parsons, Zachary; Paetkau, Mark
2017-12-01
A computerized tomography (CT) system using a 0.1 μCi Sr-90 beta source, Geiger counter, and low density foam samples was developed. A simple algorithm was used to construct images from the data collected with the beta CT scanner. The beta CT system is analogous to X-ray CT as both types of radiation are sensitive to density variations. This system offers a platform for learning opportunities in an undergraduate laboratory, covering topics such as image reconstruction algorithms, radiation exposure, and the energy dependence of absorption.
Sparse Matrix Motivated Reconstruction of Far-Field Radiation Patterns
2015-03-01
An algorithm is developed based on sparse representations of far-field radiation patterns using the inverse Discrete Fourier Transform (DFT) and the inverse Discrete Cosine Transform (DCT). Related prior work modeled radiation patterns using a Model-Based Parameter Estimation (MBPE) technique that reduces the computational time required to model radiation patterns.
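The idea of reconstructing a full pattern from a few samples via a sparse cosine representation can be sketched directly. The pattern, sample count, and basis size below are all hypothetical:

```python
import numpy as np

# Fit a small number of cosine (DCT-like) coefficients to sparse angular
# samples by least squares, then evaluate the pattern on the full grid.
M, K, S = 181, 12, 25                      # angles, basis terms, samples
theta = np.linspace(0.0, np.pi, M)

basis = np.cos(np.outer(theta, np.arange(K)))        # cosine basis matrix

true_coef = np.zeros(K); true_coef[[0, 2, 5]] = [1.0, 0.6, 0.3]
pattern = basis @ true_coef                # "measured" full pattern

rng = np.random.default_rng(5)
idx = np.sort(rng.choice(M, S, replace=False))        # sparse sample angles
samples = pattern[idx] + 0.01 * rng.standard_normal(S)

coef, *_ = np.linalg.lstsq(basis[idx], samples, rcond=None)
recon = basis @ coef
print("max reconstruction error:", np.max(np.abs(recon - pattern)))
```

As long as the pattern is well represented by a handful of low-order basis terms, far fewer measurements than grid points suffice, which is the computational saving the report targets.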
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Y. M., E-mail: ymingy@gmail.com; Bednarz, B.; Svatos, M.
Purpose: The future of radiation therapy will require advanced inverse planning solutions to support single-arc, multiple-arc, and "4π" delivery modes, which present unique challenges in finding an optimal treatment plan over a vast search space, while still preserving dosimetric accuracy. The successful clinical implementation of such methods would benefit from Monte Carlo (MC) based dose calculation methods, which can offer improvements in dosimetric accuracy when compared to deterministic methods. The standard method for MC based treatment planning optimization leverages the accuracy of the MC dose calculation and efficiency of well-developed optimization methods, by precalculating the fluence to dose relationship within a patient with MC methods and subsequently optimizing the fluence weights. However, the sequential nature of this implementation is computationally time consuming and memory intensive. Methods to reduce the overhead of the MC precalculation have been explored in the past, demonstrating promising reductions of computational time overhead, but with limited impact on the memory overhead due to the sequential nature of the dose calculation and fluence optimization. The authors propose an entirely new form of "concurrent" Monte Carlo treatment plan optimization: a platform which optimizes the fluence during the dose calculation, reduces wasted computation time being spent on beamlets that weakly contribute to the final dose distribution, and requires only a low memory footprint to function. In this initial investigation, the authors explore the key theoretical and practical considerations of optimizing fluence in such a manner. Methods: The authors present a novel derivation and implementation of a gradient descent algorithm that allows for optimization during MC particle transport, based on highly stochastic information generated through particle transport of very few histories. A gradient rescaling and renormalization algorithm, and the concept of momentum from stochastic gradient descent were used to address obstacles unique to performing gradient descent fluence optimization during MC particle transport. The authors have applied their method to two simple geometrical phantoms, and one clinical patient geometry to examine the capability of this platform to generate conformal plans as well as assess its computational scaling and efficiency, respectively. Results: The authors obtain a reduction of at least 50% in total histories transported in their investigation compared to a theoretical unweighted beamlet calculation and subsequent fluence optimization method, and observe a roughly fixed optimization time overhead consisting of ∼10% of the total computation time in all cases. Finally, the authors demonstrate a negligible increase in memory overhead of ∼7–8 MB to allow for optimization of a clinical patient geometry surrounded by 36 beams using their platform. Conclusions: This study demonstrates a fluence optimization approach, which could significantly improve the development of next generation radiation therapy solutions while incurring minimal additional computational overhead.
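The generic ingredients named here, momentum and gradient renormalization on highly stochastic gradients, can be shown on a toy fluence problem. The dose matrix is hypothetical and the few-history Monte Carlo gradient is mimicked by large additive noise, so this is a sketch of the optimization pattern only, not the authors' platform:

```python
import numpy as np

# Momentum gradient descent on noisy gradients of
# 0.5 * ||D w - d_rx||^2 over nonnegative beamlet weights w.
rng = np.random.default_rng(6)
n_vox, n_beam = 40, 15
D = rng.uniform(0.0, 1.0, (n_vox, n_beam))        # toy dose-deposition matrix
d_rx = D @ rng.uniform(0.5, 1.5, n_beam)          # prescribed dose

def noisy_grad(w):
    g = D.T @ (D @ w - d_rx)                      # true gradient
    noise = rng.standard_normal(n_beam) / np.sqrt(n_beam)
    return g + 2.0 * np.linalg.norm(g) * noise    # "few-history" noise

w, v = np.ones(n_beam), np.zeros(n_beam)
beta = 0.9                                        # momentum coefficient
for step in range(3000):
    g = noisy_grad(w)
    g = g / (np.linalg.norm(g) + 1e-12)           # renormalize the gradient
    v = beta * v + (1.0 - beta) * g               # momentum averages noise
    lr = 0.5 / np.sqrt(step + 10.0)               # decaying step size
    w = np.maximum(w - lr * v, 0.0)               # nonnegative fluence

err = np.linalg.norm(D @ w - d_rx) / np.linalg.norm(d_rx)
print("relative dose error:", round(err, 4))
```

Momentum acts as a running average over the noisy gradient estimates, which is what lets useful descent directions emerge even when each individual estimate comes from very few transported histories.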
Thermal Considerations of Space Solar Power Concepts with 3.5 GW RF Output
NASA Technical Reports Server (NTRS)
Choi, Michael K.
2000-01-01
This paper presents the thermal challenge of the Space Solar Power (SSP) design concepts with a 3.5 GW radio-frequency (RF) output. High-efficiency klystrons are thermally more favorable than solid-state (butterstick) devices for converting direct current (DC) electricity to RF energy at the transmitters in these concepts. Using klystrons, the heat dissipation is 0.72 GW; using solid state, the heat dissipation is 2.33 GW. The heat dissipation of the klystrons is 85% at 500 C, 10% at 300 C, and 5% at 125 C. All the heat dissipation of the solid state is at 100 C. Using klystrons, the radiator area is 74,500 square m. Using solid state, the radiator area is 2,362,200 square m. Space-constructable heat pipe radiators are assumed in the thermal analysis. Also, to make the SSP concepts feasible, the mass of the heat transport system must be minimized, and the heat transport distance from the transmitters to the radiators must be minimized. This can be accomplished by dividing the radiator into a cluster of small radiators, so that the heat transport distances between the klystrons and the radiators are minimized. The area of each small radiator is on the order of 1 square m. Two concepts for accommodating a cluster of small radiators are presented. If the distance between the transmitters and radiators is 1.5 m or less, constant conductance heat pipes (CCHPs) are acceptable for heat transport. If the distance exceeds 1.5 m, loop heat pipes (LHPs) are needed.
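The radiator areas quoted above follow from Stefan-Boltzmann sizing per temperature band, A = Q / (η ε σ T⁴). The emissivity, fin efficiency, and one-sided radiation below are assumed values, and real SSP sizing also includes view factors and the environmental sink temperature, so this back-of-envelope estimate only roughly tracks the quoted figures:

```python
# Back-of-envelope radiator sizing for the klystron case.
SIGMA = 5.670e-8                 # Stefan-Boltzmann constant, W m^-2 K^-4
eps, eta = 0.85, 0.9             # assumed emissivity and fin efficiency

Q_total = 0.72e9                 # W, klystron waste heat
bands = [(0.85, 500.0), (0.10, 300.0), (0.05, 125.0)]  # fraction, deg C

area = 0.0
for frac, t_c in bands:
    T = t_c + 273.15             # band rejection temperature, K
    a = frac * Q_total / (eta * eps * SIGMA * T**4)
    area += a
    print(f"{t_c:5.0f} C band: {a:10.0f} m^2")
print(f"total: {area:10.0f} m^2")
```

The T⁴ dependence is also why the solid-state option, rejecting all its (much larger) waste heat at only 100 C, needs a radiator roughly thirty times larger than the klystron option.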
Dose algorithm for EXTRAD 4100S extremity dosimeter for use at Sandia National Laboratories.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Potter, Charles Augustus
An updated algorithm for the EXTRAD 4100S extremity dosimeter has been derived. This algorithm optimizes the binning of dosimeter element ratios and uses a quadratic function to determine the response factors for low response ratios. This results in lower systematic bias across all test categories and eliminates the need for the 'red strap' algorithm that was used for high-energy beta/gamma-emitting radionuclides. The Radiation Protection Dosimetry Program (RPDP) at Sandia National Laboratories uses the Thermo Fisher EXTRAD 4100S extremity dosimeter to determine shallow dose to the extremities of potentially exposed individuals. This dosimeter consists of two LiF TLD elements, or 'chipstrates', one of TLD-700 (7Li) and one of TLD-100 (natural Li), separated by a tin filter. Following readout and background subtraction, the ratio of the responses of the two elements is determined, defining the penetrability of the incident radiation. While this penetrability approximates the incident energy of the radiation, X-rays and beta particles exist in energy distributions that make the determination of dose conversion factors less straightforward.
NASA Astrophysics Data System (ADS)
Mohammadian-Behbahani, Mohammad-Reza; Saramad, Shahyar
2018-07-01
In high count rate radiation spectroscopy and imaging, detector output pulses tend to pile up because of the high interaction rate of particles with the detector. Pile-up effects can severely distort the energy and timing information. Pile-up events are conventionally prevented or rejected by both analog and digital electronics. However, to decrease exposure times in medical imaging applications, it is important to retain the pulses and extract their true information with pile-up correction methods. The single-event reconstruction method is a relatively new model-based approach for recovering the pulses one by one using a fitting procedure, for which a fast fitting algorithm is a prerequisite. This article proposes a fast non-iterative algorithm based on successive integration that fits the bi-exponential model to experimental data. After optimizing the method, the energy spectra, energy resolution and peak-to-peak count ratios are calculated for different counting rates using the proposed algorithm as well as the rejection method for comparison. The obtained results prove the effectiveness of the proposed method as a pile-up processing scheme designed for spectroscopic and medical radiation detection applications.
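The bi-exponential pulse model at the center of this method is easy to state and fit. For brevity the sketch below uses scipy's generic least-squares fitter in place of the paper's fast successive-integration algorithm, and all pulse parameters are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

# Each pulse is A * (exp(-dt/tau_d) - exp(-dt/tau_r)) for dt = t - t0 >= 0,
# with rise constant tau_r and decay constant tau_d.
def pulse(t, a, t0, tau_r, tau_d):
    dt = np.maximum(t - t0, 0.0)
    return a * (np.exp(-dt / tau_d) - np.exp(-dt / tau_r)) * (t >= t0)

rng = np.random.default_rng(7)
t = np.linspace(0.0, 10.0, 500)                    # time axis (arb. units)
true = dict(a=1.0, t0=2.0, tau_r=0.1, tau_d=1.5)
y = pulse(t, **true) + 0.01 * rng.standard_normal(t.size)

popt, _ = curve_fit(pulse, t, y, p0=[0.8, 1.8, 0.2, 1.0])
print("fitted amplitude, arrival, rise, decay:", np.round(popt, 3))
```

Recovering piled-up events amounts to fitting a sum of such pulses with individual amplitudes and arrival times, which is why the speed of the per-pulse fit dominates the throughput of the whole correction scheme.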
NASA Astrophysics Data System (ADS)
Pan, X.; Yang, Y.; Liu, Y.; Fan, X.; Shan, L.; Zhang, X.
2018-04-01
Error source analyses are critical for satellite-retrieved surface net radiation (Rn) products. In this study, we evaluate the Rn error sources in the Clouds and the Earth's Radiant Energy System (CERES) project at 43 sites in China from July to December 2007. The results show that cloud fraction (CF), land surface temperature (LST), atmospheric temperature (AT) and algorithm error dominate the Rn error, with error contributions of -20, 15, 10 and 10 W/m2 to the net shortwave (NSW) and net longwave (NLW) radiation, respectively. For NSW, the dominant error source is algorithm error (more than 10 W/m2), particularly in spring and summer with abundant cloud. For NLW, owing to the high sensitivity of the algorithm and the large LST/CF error, LST and CF are the largest error sources, especially in northern China. The AT strongly influences the NLW error in southern China because of the large AT error there. Total precipitable water has a weak influence on the Rn error despite the algorithm's high sensitivity to it. To improve Rn quality, the CF and LST (AT) errors in northern (southern) China should be reduced.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, Dean J.; Harding, Lee T.
Isotope identification algorithms that are contained in the Gamma Detector Response and Analysis Software (GADRAS) can be used for real-time stationary measurement and search applications on platforms operating under Linux or Android operating systems. Since the background radiation can vary considerably due to variations in naturally-occurring radioactive materials (NORM), spectral algorithms can be substantially more sensitive to threat materials than search algorithms based strictly on count rate. Specific isotopes of interest can be designated for the search algorithm, which permits suppression of alarms for non-threatening sources, such as medical radionuclides. The same isotope identification algorithms that are used for search applications can also be used to process static measurements. The isotope identification algorithms follow the same protocols as those used by the Windows version of GADRAS, so files that are created under the Windows interface can be copied directly to processors on fielded sensors. The analysis algorithms contain provisions for gain adjustment and energy linearization, which enables direct processing of spectra as they are recorded by multichannel analyzers. Gain compensation is performed by utilizing photopeaks in background spectra. Incorporation of this energy calibration task into the analysis algorithm also eliminates one of the more difficult challenges associated with development of radiation detection equipment.
Local reconstruction in computed tomography of diffraction enhanced imaging
NASA Astrophysics Data System (ADS)
Huang, Zhi-Feng; Zhang, Li; Kang, Ke-Jun; Chen, Zhi-Qiang; Zhu, Pei-Ping; Yuan, Qing-Xi; Huang, Wan-Xia
2007-07-01
Computed tomography of diffraction enhanced imaging (DEI-CT) based on a synchrotron radiation source has extremely high sensitivity for weakly absorbing low-Z samples in medical and biological fields. The authors propose a modified backprojection filtration (BPF)-type algorithm based on PI-line segments to reconstruct regions of interest from truncated refraction-angle projection data in DEI-CT. The distribution of the refractive index decrement in the sample can be directly estimated from its reconstruction images, as has been verified by experiments at the Beijing Synchrotron Radiation Facility. The algorithm paves the way for local reconstruction of large-size samples by the use of DEI-CT with a small field of view based on a synchrotron radiation source.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wollaber, Allan Benton; Park, HyeongKae; Lowrie, Robert Byron
Moment-based acceleration via the development of "high-order, low-order" (HO-LO) algorithms has provided substantial accuracy and efficiency enhancements for solutions of the nonlinear, thermal radiative transfer equations by CCS-2 and T-3 staff members. Accuracy enhancements over traditional, linearized methods are obtained by solving a nonlinear, time-implicit HO-LO system via a Jacobian-free Newton Krylov procedure. This also prevents the appearance of non-physical maximum principle violations ("temperature spikes") associated with linearization. Efficiency enhancements are obtained in part by removing "effective scattering" from the linearized system. In this highlight, we summarize recent work in which we formally extended the HO-LO radiation algorithm to include operator-split radiation-hydrodynamics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enghauser, Michael
2016-02-01
The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.
Control of a Wheeled Transport Robot with Two Steerable Wheels
NASA Astrophysics Data System (ADS)
Larin, V. B.
2017-09-01
The control of a system in which one actuator has failed is studied. The problem of controlling a wheeled transport robot with two steerable wheels, of which the rear one is stuck (its drive has failed), is solved. An algorithm for controlling the system in this situation is proposed. The effectiveness of the algorithm is demonstrated by way of an example.
NASA Astrophysics Data System (ADS)
Tian, Wenli; Cao, Chengxuan
2017-03-01
A generalized interval fuzzy mixed integer programming model is proposed for the multimodal freight transportation problem under uncertainty, in which the optimal mode of transport and the optimal amount of each type of freight transported through each path need to be decided. For practical purposes, three mathematical methods, i.e., the interval ranking method, the fuzzy linear programming method and the linear weighted summation method, are applied to obtain equivalents of constraints and parameters, and then a fuzzy expected value model is presented. A heuristic algorithm based on a greedy criterion and the linear relaxation algorithm are designed to solve the model.
1984-12-01
The off-axis dose in silicon for various path lengths out to 2 radiation lengths was calculated using the electron/photon transport code CYLTRAN and measured using thermoluminescent dosimeters (TLDs). Calculations were performed on a CDC-7600 computer at Los Alamos National Laboratory.
NASA Technical Reports Server (NTRS)
Parrish, R. V.; Dieudonne, J. E.; Filippas, T. A.
1971-01-01
An algorithm employing a modified sequential random perturbation, or creeping random search, was applied to the problem of optimizing the parameters of a high-energy beam transport system. The stochastic solution of the mathematical model for first-order magnetic-field expansion allows the inclusion of state-variable constraints, and the inclusion of parameter constraints allowed by the method of algorithm application eliminates the possibility of infeasible solutions. The mathematical model and the algorithm were programmed for a real-time simulation facility; thus, two important features are provided to the beam designer: (1) a strong degree of man-machine communication (even to the extent of bypassing the algorithm and applying analog-matching techniques), and (2) extensive graphics for displaying information concerning both algorithm operation and transport-system behavior. Chromatic aberration was also included in the mathematical model and in the optimization process. Results presented show this method yielding better solutions (in terms of resolution) to the particular problem than those of a standard analog program, as well as demonstrating the flexibility, in terms of elements, constraints, and chromatic aberration, allowed by user interaction with both the algorithm and the stochastic model. Examples of slit usage and a limited comparison of predicted results and actual results obtained with a 600 MeV cyclotron are given.
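The creeping random search itself is a few lines: perturb, keep only improvements, shrink the step when progress stalls, and clip to the parameter constraints. The beam-transport objective below is a hypothetical stand-in for the resolution criterion:

```python
import numpy as np

# Creeping random search (sequential random perturbation) with simple
# parameter clipping standing in for the magnet-setting constraints.
rng = np.random.default_rng(8)

def resolution_loss(k):                      # toy beam-transport objective
    return np.sum((k - np.array([1.2, -0.7, 0.4]))**2) + 0.1 * np.sum(k**2)

lo, hi = -2.0, 2.0                           # parameter (field) limits
k = rng.uniform(lo, hi, 3)                   # initial magnet settings
step, best, stall = 0.5, resolution_loss(k), 0

for _ in range(5000):
    trial = np.clip(k + step * rng.standard_normal(3), lo, hi)
    f = resolution_loss(trial)
    if f < best:                             # creep: accept improvements only
        k, best, stall = trial, f, 0
    else:
        stall += 1
        if stall >= 50:                      # shrink step when stalled
            step, stall = 0.5 * step, 0

print("best settings:", np.round(k, 3), "loss:", round(best, 5))
```

Because infeasible trials are simply clipped back into the box, every iterate remains a valid configuration, which mirrors the paper's point that parameter constraints eliminate infeasible solutions by construction.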
Optoelectronic device with nanoparticle embedded hole injection/transport layer
Wang, Qingwu [Chelmsford, MA; Li, Wenguang [Andover, MA; Jiang, Hua [Methuen, MA
2012-01-03
An optoelectronic device is disclosed that can function as an emitter of optical radiation, such as a light-emitting diode (LED), or as a photovoltaic (PV) device that can be used to convert optical radiation into electrical current, such as a photovoltaic solar cell. The optoelectronic device comprises an anode, a hole injection/transport layer, an active layer, and a cathode, where the hole injection/transport layer includes transparent conductive nanoparticles in a hole transport material.
Radiation Transport and Shielding for Space Exploration and High Speed Flight Transportation
NASA Technical Reports Server (NTRS)
Maung, Khin Maung; Tripathi, R. K.
1997-01-01
Transportation of ions and neutrons in matter is of direct interest in several technologically important and scientific areas, including space radiation, cosmic-ray propagation studies in the galactic medium, nuclear power plants, and radiological effects that impact industrial and public health. For the proper assessment of radiation exposure, both reliable transport codes and accurate data are needed. Nuclear cross-section data are one of the essential inputs to the transport codes. In order to obtain an accurate parametrization of cross-section data, theoretical input is indispensable, especially for processes where there is little or no experimental data available. In this grant period, work has been done on the use of relativistic equations and their one-body limits. The results will be useful in choosing an appropriate effective one-body equation for reaction calculations. Work has also been done to improve the database needed for the transport codes used in studies of radiation transport and shielding for space exploration and high-speed flight transportation. A phenomenological model was developed for the total absorption cross sections, valid for any system of charged and/or uncharged collision pairs over the entire energy range. The success of the model is gratifying: it is being used by other federal agencies, national labs, and universities. A list of publications based on the work during the grant period is given below, and copies are enclosed with this report.
Cloud Properties and Radiative Heating Rates for TWP
Comstock, Jennifer
2013-11-07
A cloud properties and radiative heating rates dataset is presented in which cloud properties retrieved from lidar and radar observations are input into a radiative transfer model to compute radiative fluxes and heating rates at three ARM sites located in the Tropical Western Pacific (TWP) region. The cloud properties retrieval is a conditional retrieval that applies different retrieval techniques depending on the available data, that is, on whether lidar, radar, or both instruments detect cloud. This Combined Remote Sensor Retrieval Algorithm (CombRet) produces vertical profiles of liquid or ice water content (LWC or IWC), droplet effective radius (re), ice crystal generalized effective size (Dge), cloud phase, and cloud boundaries. The algorithm was compared with three other independent algorithms to help estimate the uncertainty in the cloud properties, fluxes, and heating rates (Comstock et al. 2013). The dataset is provided at 2 min temporal and 90 m vertical resolution. The current dataset covers time periods when the MMCR (Millimeter Cloud Radar) version of the ARSCL (Active Remotely-Sensed Cloud Locations) Value Added Product (VAP) is available. The MERGESONDE VAP is utilized where temperature and humidity profiles are required. Future additions to this dataset will utilize the new KAZR instrument and its associated VAPs.
NASA Astrophysics Data System (ADS)
Plante, Ianik; Devroye, Luc
2015-09-01
Several computer codes simulating chemical reactions in particle systems are based on the Green's functions of the diffusion equation (GFDE). Indeed, many types of chemical systems have been simulated using the exact GFDE, which has also become the gold standard for validating other theoretical models. In this work, a simulation algorithm is presented to sample the interparticle distance for the partially diffusion-controlled reversible ABCD reaction. This algorithm is considered exact for two-particle systems, is faster than conventional look-up tables, and uses only a few kilobytes of memory. The simulation results obtained with this method are compared with those obtained with the independent reaction times (IRT) method. This work is part of our effort to develop models for understanding the role of chemical reactions in the radiation effects on cells and tissues, and it may eventually be included in event-based models of space radiation risks. Moreover, because many reactions in biological systems are of this type, the algorithm might play a pivotal role in future simulation programs, not only in radiation chemistry but also in the simulation of biochemical networks in time and space.
Noël, Peter B; Engels, Stephan; Köhler, Thomas; Muenzel, Daniela; Franz, Daniela; Rasper, Michael; Rummeny, Ernst J; Dobritz, Martin; Fingerle, Alexander A
2018-01-01
Background The explosive growth of computed tomography (CT) has led to growing public health concern about patient and population radiation dose. A recently introduced technique for dose reduction, which can be combined with tube-current modulation, over-beam reduction, and organ-specific dose reduction, is iterative reconstruction (IR). Purpose To evaluate the image quality, at different radiation dose levels, of three reconstruction algorithms for the diagnosis of patients with proven liver metastases under tumor follow-up. Material and Methods A total of 40 thorax-abdomen-pelvis CT examinations acquired from 20 patients in a tumor follow-up were included. All patients were imaged using a standard-dose and a specific low-dose CT protocol. Reconstructed slices were generated using three different reconstruction algorithms: classical filtered back projection (FBP); a first-generation iterative noise-reduction algorithm (iDose4); and a next-generation model-based IR algorithm (IMR). Results The overall detection of liver lesions tended to be higher with the IMR algorithm than with FBP or iDose4. The IMR dataset at standard dose yielded the highest overall detectability, while the low-dose FBP dataset showed the lowest. For the low-dose protocols, significantly improved detectability of liver lesions can be reported for IMR compared with FBP or iDose4 (P = 0.01). The radiation dose decreased by an approximate factor of 5 between the standard-dose and the low-dose protocol. Conclusion The latest generation of IR algorithms significantly improved diagnostic image quality and provided virtually noise-free images for ultra-low-dose CT imaging.
ecode - Electron Transport Algorithm Testing v. 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Franke, Brian C.; Olson, Aaron J.; Bruss, Donald Eugene
2016-10-05
ecode is a Monte Carlo code used for testing algorithms related to electron transport. The code can read basic physics parameters, such as energy-dependent stopping powers and screening parameters. The code permits simple planar geometries of slabs or cubes. Parallelization consists of domain replication, with work distributed at the start of the calculation and statistical results gathered at the end of the calculation. Some basic routines (such as input parsing, random number generation, and statistics processing) are shared with the Integrated Tiger Series codes. A variety of algorithms for uncertainty propagation are incorporated based on the stochastic collocation and stochastic Galerkin methods. These permit uncertainty only in the total and angular scattering cross sections. The code contains algorithms for simulating stochastic mixtures of two materials. The physics is approximate, ranging from mono-energetic and isotropic scattering to screened Rutherford angular scattering and Rutherford energy-loss scattering (simple electron transport models). No production of secondary particles is implemented, and no photon physics is implemented.
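As a rough illustration of the simplest physics option named above (mono-energetic particles with isotropic scattering in a planar slab), here is a minimal analog Monte Carlo sketch in Python; the cross section, scattering probability, and slab thickness are invented inputs, and this is not ecode itself.

    import math, random

    def slab_transmission(sigma_t, scatter_prob, thickness, n=100_000):
        """Estimate transmission through a 1-D slab for mono-energetic
        particles with isotropic scattering (analog Monte Carlo)."""
        transmitted = 0
        for _ in range(n):
            x, mu = 0.0, 1.0                      # start at left face, moving right
            while True:
                x += mu * (-math.log(random.random()) / sigma_t)  # free flight
                if x >= thickness:
                    transmitted += 1              # leaked out the far face
                    break
                if x < 0.0:
                    break                         # reflected out the near face
                if random.random() > scatter_prob:
                    break                         # absorbed
                mu = 2.0 * random.random() - 1.0  # isotropic scatter: new direction
        return transmitted / n

    print(slab_transmission(sigma_t=1.0, scatter_prob=0.5, thickness=2.0))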
Collision of Physics and Software in the Monte Carlo Application Toolkit (MCATK)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sweezy, Jeremy Ed
2016-01-21
The topic is presented in a series of slides organized as follows: MCATK overview, development strategy, available algorithms, problem modeling (sources, geometry, data, tallies), parallelism, miscellaneous tools/features, example MCATK application, recent areas of research, and summary and future work. MCATK is a C++ component-based Monte Carlo neutron-gamma transport software library with continuous energy neutron and photon transport. Designed to build specialized applications and to provide new functionality in existing general-purpose Monte Carlo codes like MCNP, it reads ACE formatted nuclear data generated by NJOY. The motivation behind MCATK was to reduce costs. MCATK physics involves continuous energy neutron and gamma transport with multi-temperature treatment, static eigenvalue (k_eff and α) algorithms, a time-dependent algorithm, and fission chain algorithms. MCATK geometry includes mesh geometries and solid body geometries. MCATK provides verified, unit-tested Monte Carlo components, flexibility in Monte Carlo application development, and numerous tools such as geometry and cross-section plotters.
Satellite change detection of forest damage near the Chernobyl accident
DOE Office of Scientific and Technical Information (OSTI.GOV)
McClellan, G.E.; Anno, G.H.
1992-01-01
A substantial amount of forest within a few kilometers of the Chernobyl nuclear reactor station was badly contaminated with radionuclides by the April 26, 1986, explosion and ensuing fire at reactor No. 4. Radiation doses to conifers in some areas were sufficient to cause discoloration of needles within a few weeks. Other areas, receiving smaller doses, showed foliage changes beginning 6 months to a year later. Multispectral imagery available from Landsat sensors is especially suited for monitoring such changes in vegetation. A series of Landsat Thematic Mapper images was developed that span the 2 yr following the accident. Quantitative dose estimation for the exposed conifers requires an objective change detection algorithm and knowledge of the dose-time response of conifers to ionizing radiation. Pacific-Sierra Research Corporation's Hyperscout™ algorithm is based on an advanced, sensitive technique for change detection particularly suited for multispectral images. The Hyperscout algorithm has been used to assess radiation damage to the forested areas around the Chernobyl nuclear power plant.
Harmony search optimization algorithm for a novel transportation problem in a consolidation network
NASA Astrophysics Data System (ADS)
Davod Hosseini, Seyed; Akbarpour Shirazi, Mohsen; Taghi Fatemi Ghomi, Seyed Mohammad
2014-11-01
This article presents a new harmony search optimization algorithm to solve a novel integer programming model developed for a consolidation network. In this network, a set of vehicles is used to transport goods from suppliers to their corresponding customers via two transportation systems: direct shipment and milk run logistics. The objective of this problem is to minimize the total shipping cost in the network, so it tries to reduce the number of required vehicles using an efficient vehicle routing strategy in the solution approach. Solving several numerical examples confirms that the proposed solution approach based on the harmony search algorithm performs much better than CPLEX in reducing both the shipping cost in the network and computational time requirement, especially for realistic size problem instances.
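For readers unfamiliar with harmony search, a bare-bones sketch of the metaheuristic follows, assuming a generic box-constrained objective; the memory size and hmcr/par settings are illustrative defaults, and the authors' integer-programming encoding and vehicle-routing operators are not reproduced.

    import random

    def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, iters=5000):
        """Minimize f over the box `bounds` with a basic harmony search:
        each new solution is composed note-by-note from the harmony memory
        (probability hmcr), optionally pitch-adjusted (probability par), or
        drawn at random; the worst memory member is replaced on improvement."""
        dim = len(bounds)
        memory = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
        scores = [f(h) for h in memory]
        for _ in range(iters):
            new = []
            for j, (lo, hi) in enumerate(bounds):
                if random.random() < hmcr:
                    v = random.choice(memory)[j]                    # recall from memory
                    if random.random() < par:
                        v += random.uniform(-1, 1) * 0.01 * (hi - lo)  # pitch adjust
                else:
                    v = random.uniform(lo, hi)                      # random improvisation
                new.append(min(max(v, lo), hi))
            fn = f(new)
            worst = max(range(hms), key=scores.__getitem__)
            if fn < scores[worst]:
                memory[worst], scores[worst] = new, fn
        best = min(range(hms), key=scores.__getitem__)
        return memory[best], scores[best]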
Ground-Based Correction of Remote-Sensing Spectral Imagery
NASA Technical Reports Server (NTRS)
Adler-Golden, Steven M.; Rochford, Peter; Matthew, Michael; Berk, Alexander
2007-01-01
Software has been developed for an improved method of correcting for atmospheric optical effects (primarily those of aerosols and water vapor) in spectral images of the surface of the Earth acquired by airborne and spaceborne remote-sensing instruments. In this method, the variables needed for the corrections are extracted from the readings of a radiometer located on the ground in the vicinity of the scene of interest. The software includes algorithms that analyze measurement data acquired from a shadow-band radiometer. These algorithms are based on a prior radiation transport software model, called MODTRAN, that has been developed through several versions up to what are now known as MODTRAN4 and MODTRAN5. These components have been integrated with a user-friendly Interactive Data Language (IDL) front end and an advanced version of MODTRAN4. Software tools for handling general data formats, performing a Langley-type calibration, and generating an output file of retrieved atmospheric parameters for use in another atmospheric-correction computer program known as FLAASH have also been incorporated into the present software. Concomitantly with the software described thus far, a version of FLAASH has been developed that utilizes the retrieved atmospheric parameters to process spectral image data.
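The Langley-type calibration mentioned above can be summarized in a few lines: regress the log of the radiometer signal against airmass and extrapolate to zero airmass for the top-of-atmosphere constant. The sketch below assumes a simple Beer-Lambert atmosphere with synthetic readings; it is not the packaged FLAASH/MODTRAN tooling.

    import numpy as np

    def langley_calibration(airmass, signal):
        """Fit ln(V) = ln(V0) - tau * m to radiometer readings taken over a
        range of airmasses m; returns the extrapolated top-of-atmosphere
        signal V0 and the optical depth tau (Beer-Lambert attenuation)."""
        m = np.asarray(airmass, dtype=float)
        lnv = np.log(np.asarray(signal, dtype=float))
        slope, intercept = np.polyfit(m, lnv, 1)
        return np.exp(intercept), -slope   # V0, tau

    # Illustrative morning series: airmass 1.2..4.0, tau = 0.15, V0 = 1.0
    m = np.linspace(1.2, 4.0, 12)
    v0, tau = langley_calibration(m, np.exp(-0.15 * m))
    print(v0, tau)   # recovers roughly 1.0 and 0.15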
Global modeling of thermospheric airglow in the far ultraviolet
NASA Astrophysics Data System (ADS)
Solomon, Stanley C.
2017-07-01
The Global Airglow (GLOW) model has been updated and extended to calculate thermospheric emissions in the far ultraviolet, including sources from daytime photoelectron-driven processes, nighttime recombination radiation, and auroral excitation. It can be run using inputs from empirical models of the neutral atmosphere and ionosphere or from numerical general circulation models of the coupled ionosphere-thermosphere system. It uses a solar flux module, photoelectron generation routine, and the Nagy-Banks two-stream electron transport algorithm to simultaneously handle energetic electron distributions from photon and auroral electron sources. It contains an ion-neutral chemistry module that calculates excited and ionized species densities and the resulting airglow volume emission rates. This paper describes the inputs, algorithms, and code structure of the model and demonstrates example outputs for daytime and auroral cases. Simulations of far ultraviolet emissions by the atomic oxygen doublet at 135.6 nm and the molecular nitrogen Lyman-Birge-Hopfield bands, as viewed from geostationary orbit, are shown, and model calculations are compared to limb-scan observations by the Global Ultraviolet Imager on the TIMED satellite. The GLOW model code is provided to the community through an open-source academic research license.
IPOLE - semi-analytic scheme for relativistic polarized radiative transport
NASA Astrophysics Data System (ADS)
Mościbrodzka, M.; Gammie, C. F.
2018-03-01
We describe IPOLE, a new public ray-tracing code for covariant, polarized radiative transport. The code extends the IBOTHROS scheme for covariant, unpolarized transport using two representations of the polarized radiation field: in the coordinate frame, it parallel-transports the coherency tensor; in the frame of the plasma, it evolves the Stokes parameters under emission, absorption, and Faraday conversion. The transport step is implemented to be as spacetime- and coordinate-independent as possible. The emission, absorption, and Faraday conversion step is implemented using an analytic solution to the polarized transport equation with constant coefficients. As a result, IPOLE is stable, efficient, and produces a physically reasonable solution even for a step with high optical depth and Faraday depth. We show that the code matches analytic results in flat space, and that it produces results that converge to those produced by Dexter's GRTRANS polarized transport code on a complicated model problem. We expect IPOLE will mainly find applications in modelling Event Horizon Telescope sources, but it may also be useful in other relativistic transport problems such as modelling for the IXPE mission.
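The unpolarized analogue of the constant-coefficient analytic step described above is easy to state: over a step of length ds with constant emissivity j and absorptivity a, the transfer equation dI/ds = j - a*I has an exact solution that remains stable at arbitrarily high optical depth. A sketch follows (scalar intensity only; the polarized case replaces these scalars with Stokes vectors and matrix coefficients):

    import math

    def transfer_step(I0, j, a, ds):
        """Exact update of specific intensity across a step of length ds with
        constant emission j and absorption a: I -> S + (I0 - S) * exp(-a*ds),
        with source function S = j / a. Stable even when a*ds >> 1."""
        if a == 0.0:
            return I0 + j * ds          # optically thin limit: pure emission
        S = j / a
        return S + (I0 - S) * math.exp(-a * ds)

    # A very optically thick step relaxes the intensity to the source function.
    print(transfer_step(I0=0.0, j=2.0, a=5.0, ds=100.0))   # -> S = 0.4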
Dynamic phasing of multichannel cw laser radiation by means of a stochastic gradient algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Volkov, V A; Volkov, M V; Garanin, S G
2013-09-30
The phasing of a multichannel laser beam by means of an iterative stochastic parallel gradient (SPG) algorithm has been numerically and experimentally investigated. The operation of the SPG algorithm is simulated, the acceptable range of amplitudes of probe phase shifts is found, and the algorithm parameters at which the desired Strehl number can be obtained with a minimum number of iterations are determined. An experimental bench with phase modulators based on lithium niobate, controlled by a multichannel electronic unit with a real-time microcontroller, has been designed. Phasing of 16 cw laser beams at a system response bandwidth of 3.7 kHz and phase thermal distortions in a frequency band of about 10 Hz is experimentally demonstrated. The experimental data are in complete agreement with the calculation results. (Control of laser radiation parameters)
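A minimal numerical sketch of an SPG-type phasing loop follows, assuming a toy quality metric (coherent on-axis intensity of N unit-amplitude channels); the gain, probe amplitude, and iteration count are invented and are not the parameters determined in the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def on_axis_intensity(phases):
        # Quality metric J: coherent on-axis intensity of unit-amplitude beams;
        # maximal when all channel phases are equal (a phased array).
        return abs(np.exp(1j * phases).sum()) ** 2

    def spgd_phase(n_channels=16, gain=0.02, probe=0.1, iters=4000):
        phases = rng.uniform(-np.pi, np.pi, n_channels)
        for _ in range(iters):
            delta = probe * rng.choice([-1.0, 1.0], n_channels)  # random probe shifts
            dj = on_axis_intensity(phases + delta) - on_axis_intensity(phases - delta)
            phases += gain * dj * delta                          # stochastic gradient step
        return phases, on_axis_intensity(phases) / n_channels**2  # Strehl-like ratio

    phases, strehl = spgd_phase()
    print(round(strehl, 3))   # typically close to 1 after convergence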
A technique for global monitoring of net solar irradiance at the ocean surface. I - Model
NASA Technical Reports Server (NTRS)
Frouin, Robert; Chertock, Beth
1992-01-01
An accurate long-term (84-month) climatology of net surface solar irradiance over the global oceans from Nimbus-7 earth radiation budget (ERB) wide-field-of-view planetary-albedo data is generated via an algorithm based on radiative transfer theory. Net surface solar irradiance is computed as the difference between the top-of-atmosphere incident solar irradiance (known) and the sum of the solar irradiance reflected back to space by the earth-atmosphere system (observed) and the solar irradiance absorbed by atmospheric constituents (modeled). It is shown that the effects of clouds and clear-atmosphere constituents can be decoupled on a monthly time scale, which makes it possible to directly apply the algorithm with monthly averages of ERB planetary-albedo data. Compared theoretically with the algorithm of Gautier et al. (1980), the present algorithm yields higher solar irradiance values in clear and thin cloud conditions and lower values in thick cloud conditions.
Radiation-MHD Simulations of Pillars and Globules in HII Regions
NASA Astrophysics Data System (ADS)
Mackey, J.
2012-07-01
Implicit and explicit raytracing-photoionisation algorithms have been implemented in the author's radiation-magnetohydrodynamics code. The algorithms are described briefly and their efficiency and parallel scaling are investigated. The implicit algorithm is more efficient for calculations where ionisation fronts have very supersonic velocities, and the explicit algorithm is favoured in the opposite limit because of its better parallel scaling. The implicit method is used to investigate the effects of initially uniform magnetic fields on the formation and evolution of dense pillars and cometary globules at the boundaries of HII regions. It is shown that for weak and medium field strengths an initially perpendicular field is swept into alignment with the pillar during its dynamical evolution, matching magnetic field observations of the ‘Pillars of Creation’ in M16. A strong perpendicular magnetic field remains in its initial configuration and also confines the photoevaporation flow into a bar-shaped, dense, ionised ribbon which partially shields the ionisation front.
NASA Astrophysics Data System (ADS)
Medgyesi-Mitschang, L. N.; Putnam, J. M.
1980-04-01
A hierarchy of computer programs implementing the method of moments for bodies of translation (MM/BOT) is described. The algorithm treats the far-field radiation and scattering from finite-length open cylinders of arbitrary cross section, as well as the near fields and aperture-coupled fields for rectangular apertures on such bodies. The theoretical development underlying the algorithm is described in Volume 1. The structure of the computer algorithm is such that no a priori knowledge of the method of moments technique or detailed FORTRAN experience is presupposed for the user. A set of carefully drawn example problems illustrates all the options of the algorithm. For a more detailed understanding of the workings of the codes, special cross-referencing to the equations in Volume 1 is provided. For additional clarity, comment statements are liberally interspersed in the code listings and summarized in the present volume.
Comparison of Stopping Power and Range Databases for Radiation Transport Study
NASA Technical Reports Server (NTRS)
Tai, H.; Bichsel, Hans; Wilson, John W.; Shinn, Judy L.; Cucinotta, Francis A.; Badavi, Francis F.
1997-01-01
The codes used to calculate stopping power and range for the space radiation shielding program at the Langley Research Center are based on the work of Ziegler, but with modifications. As more experience is gained from experiments at heavy-ion accelerators, prudence dictates a reevaluation of the current databases. Numerical values of stopping power and range calculated from four different codes currently in use are presented for selected ions and materials in the energy domain suitable for space radiation transport. This study has found that, for most collision systems at intermediate particle energies, the codes generally agree to within 1 percent. However, greater discrepancies are seen for heavy systems, especially at low particle energies.
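For orientation, the kind of formula underlying such stopping-power codes is the Bethe formula; a sketch of its uncorrected form follows (shell and density-effect corrections omitted, which is one reason independent databases diverge at low energies). The material constants in the usage line are illustrative, not taken from the compared databases.

    import math

    def bethe_stopping_power(z_proj, beta, Z, A, I_eV):
        """Mean electronic stopping power (MeV cm^2/g) from the uncorrected
        Bethe formula:
            K z^2 (Z/A) (1/beta^2) [ln(2 m_e c^2 beta^2 gamma^2 / I) - beta^2]
        Valid at intermediate energies only; corrections omitted."""
        K = 0.307075            # MeV mol^-1 cm^2
        me_c2 = 0.5109989e6     # electron rest energy, eV
        gamma2 = 1.0 / (1.0 - beta * beta)
        arg = 2.0 * me_c2 * beta * beta * gamma2 / I_eV
        return K * z_proj**2 * (Z / A) / beta**2 * (math.log(arg) - beta * beta)

    # Proton (z = 1) at beta ~ 0.5 in aluminum (Z = 13, A = 26.98, I ~ 166 eV)
    print(bethe_stopping_power(1, 0.5, 13, 26.98, 166.0))   # ~ a few MeV cm^2/g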
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chacon, Luis; del-Castillo-Negrete, Diego; Hauck, Cory D.
2014-09-01
We propose a Lagrangian numerical algorithm for a time-dependent, anisotropic temperature transport equation in magnetized plasmas in the large guide field regime. The approach is based on an analytical integral formal solution of the parallel (i.e., along the magnetic field) transport equation with sources, and it is able to accommodate both local and non-local parallel heat flux closures. The numerical implementation is based on an operator-split formulation, with two straightforward steps: a perpendicular transport step (including sources), and a Lagrangian (field-line integral) parallel transport step. Algorithmically, the first step is amenable to the use of modern iterative methods, while the second step has a fixed cost per degree of freedom (and is therefore scalable). Accuracy-wise, the approach is free from the numerical pollution introduced by the discrete parallel transport term when the perpendicular-to-parallel transport coefficient ratio X⊥/X∥ becomes arbitrarily small, and it is shown to capture the correct limiting solution when ε = X⊥L∥²/(X∥L⊥²) → 0 (with L∥ and L⊥ the parallel and perpendicular diffusion length scales, respectively). Therefore, the approach is asymptotic-preserving. We demonstrate the capabilities of the scheme with several numerical experiments with varying magnetic field complexity in two dimensions, including the case of transport across a magnetic island.
NASA Astrophysics Data System (ADS)
Ren, Yihui
As real-world complex networks are heterogeneous structures, not all of their components, such as nodes, edges, and subgraphs, carry the same role or importance in the functions performed by the network: some elements are more critical than others. Understanding the roles of the components of a network is crucial for understanding the behavior of the network as a whole. One of the most basic functions of networks is transport: transport of vehicles/people, information, materials, forces, etc., with these quantities carried along edges between source and destination nodes. For this reason, network path-based importance measures, also called centralities, play a crucial role in understanding the transport functions of the network and the network's structural and dynamical behavior in general. In this thesis we study the notion of betweenness centrality, which measures the fraction of lowest-cost (or shortest) paths running through a network component, in particular through a node or an edge. High betweenness centrality nodes/edges are those that will be frequently used by the entities transported through the network, and thus they play a key role in the overall transport properties of the network. In the first part of the thesis we present a first-principles-based method for traffic prediction using a cost-based generalization of the radiation model (an emission/absorption model) for human mobility, coupled with a cost-minimizing algorithm for efficient distribution of the mobility fluxes through the network. Using US census and highway traffic data, we show that traffic can efficiently and accurately be computed from a range-limited, network-betweenness-type calculation. The model based on travel-time costs captures the log-normal distribution of the traffic and attains a high Pearson correlation coefficient (0.75) when compared with real traffic. We then focus on studying the extent of changes in traffic flows in the wake of localized damage or alteration to the network, and we demonstrate that the changes can propagate globally, affecting traffic several hundreds of miles away. Because of its principled nature, this method can inform many applications related to human-mobility-driven flows in spatial networks, ranging from transportation, through urban planning, to mitigation of the effects of catastrophic events. In the second part of the thesis we focus on the network deconstruction and community detection problems, both intensely studied topics in network science, using a weighted betweenness centrality approach. We present an algorithm that solves both problems efficiently and accurately, and we demonstrate this on both benchmark networks and real data networks.
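The betweenness quantities at the center of this thesis can be reproduced at toy scale with the networkx library (assumed available); edge weights stand in for travel-time costs, and Brandes' algorithm is what networkx implements internally. The graph below is invented for illustration.

    import networkx as nx

    # Toy road network: nodes are towns, weights are travel-time costs.
    G = nx.Graph()
    G.add_weighted_edges_from([
        ("A", "B", 1.0), ("B", "C", 1.0), ("A", "C", 3.0),
        ("C", "D", 1.0), ("B", "D", 2.5),
    ])

    # Fraction of lowest-cost paths through each node/edge (Brandes' algorithm).
    node_bc = nx.betweenness_centrality(G, weight="weight", normalized=True)
    edge_bc = nx.edge_betweenness_centrality(G, weight="weight", normalized=True)

    # High-betweenness elements are those most transported entities traverse.
    print(max(node_bc, key=node_bc.get), max(edge_bc, key=edge_bc.get))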
NASA Technical Reports Server (NTRS)
Olson, William S.; Kummerow, Christian D.; Yang, Song; Petty, Grant W.; Tao, Wei-Kuo; Bell, Thomas L.; Braun, Scott A.; Wang, Yansen; Lang, Stephen E.; Johnson, Daniel E.;
2006-01-01
A revised Bayesian algorithm for estimating surface rain rate, convective rain proportion, and latent heating profiles from satellite-borne passive microwave radiometer observations over ocean backgrounds is described. The algorithm searches a large database of cloud-radiative model simulations to find cloud profiles that are radiatively consistent with a given set of microwave radiance measurements. The properties of these radiatively consistent profiles are then composited to obtain best estimates of the observed properties. The revised algorithm is supported by an expanded and more physically consistent database of cloud-radiative model simulations. The algorithm also features a better quantification of the convective and nonconvective contributions to total rainfall, a new geographic database, and an improved representation of background radiances in rain-free regions. Bias and random error estimates are derived from applications of the algorithm to synthetic radiance data, based upon a subset of cloud-resolving model simulations, and from the Bayesian formulation itself. Synthetic rain-rate and latent heating estimates exhibit a trend of high (low) bias for low (high) retrieved values. The Bayesian estimates of random error are propagated to represent errors at coarser time and space resolutions, based upon applications of the algorithm to TRMM Microwave Imager (TMI) data. Errors in TMI instantaneous rain-rate estimates at 0.5deg resolution range from approximately 50% at 1 mm/h to 20% at 14 mm/h. Errors in collocated spaceborne radar rain-rate estimates are roughly 50%-80% of the TMI errors at this resolution. The estimated algorithm random error in TMI rain rates at monthly, 2.5deg resolution is relatively small (less than 6% at 5 mm day-1) in comparison with the random error resulting from infrequent satellite temporal sampling (8%-35% at the same rain rate). Percentage errors resulting from sampling decrease with increasing rain rate, and sampling errors in latent heating rates follow the same trend. Averaging over 3 months reduces sampling errors in rain rates to 6%-15% at 5 mm day-1, with proportionate reductions in latent heating sampling errors.
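Schematically, the Bayesian compositing step works as follows: each database profile is weighted by how well its simulated radiances match the observation, and the retrieval is the weighted mean. Below is a sketch under a Gaussian-error assumption, with hypothetical array shapes; the operational algorithm's database, channel set, and error model differ.

    import numpy as np

    def bayesian_composite(obs_tb, db_tb, db_rainrate, sigma=1.5):
        """Estimate rain rate as the radiance-weighted mean over a database of
        cloud-model profiles: w_i ~ exp(-0.5 * ||TB_obs - TB_i||^2 / sigma^2)."""
        d2 = ((db_tb - obs_tb) ** 2).sum(axis=1)   # radiance mismatch per profile
        w = np.exp(-0.5 * d2 / sigma**2)
        w /= w.sum()
        return (w * db_rainrate).sum()             # posterior-mean rain rate

    # Hypothetical database: 500 profiles, 9 microwave channels.
    rng = np.random.default_rng(1)
    db_tb = rng.normal(250.0, 20.0, size=(500, 9))
    db_rr = rng.gamma(2.0, 2.0, size=500)
    print(bayesian_composite(db_tb[0] + rng.normal(0, 1, 9), db_tb, db_rr))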
Research on vehicles and cargos matching model based on virtual logistics platform
NASA Astrophysics Data System (ADS)
Zhuang, Yufeng; Lu, Jiang; Su, Zhiyuan
2018-04-01
The highway less-than-truckload (LTL) transportation problem of matching vehicles and cargos is a joint optimization problem of typical vehicle routing and loading, and a hot topic in operational research. Based on the demands of a virtual logistics platform, a matching model between idle vehicles and transportation orders is set up for the highway LTL transportation problem, and a corresponding genetic algorithm is designed. The algorithm is then implemented in Java. The simulation results show that the solution is satisfactory.
Kurnikova, M G; Coalson, R D; Graf, P; Nitzan, A
1999-01-01
A lattice relaxation algorithm is developed to solve the Poisson-Nernst-Planck (PNP) equations for ion transport through arbitrary three-dimensional volumes. Calculations of systems characterized by simple parallel-plate and cylindrical-pore geometries are presented in order to calibrate the accuracy of the method. A study of ion transport through the gramicidin A dimer is carried out within this PNP framework. Good agreement with experimental measurements is obtained. Strengths and weaknesses of the PNP approach are discussed. PMID:9929470
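The lattice-relaxation idea can be illustrated on the Poisson half of the PNP system: a Gauss-Seidel sweep over a uniform grid. The sketch below solves del^2 phi = -rho with fixed boundary values; the full method couples such sweeps with Nernst-Planck updates for each ion species, which are omitted here, and the grid and boundary values are invented.

    import numpy as np

    def relax_poisson(phi, charge, h, sweeps=500):
        """Gauss-Seidel relaxation of del^2 phi = -charge on a uniform grid,
        with Dirichlet boundaries held at phi's initial edge values."""
        for _ in range(sweeps):
            for i in range(1, phi.shape[0] - 1):
                for j in range(1, phi.shape[1] - 1):
                    phi[i, j] = 0.25 * (phi[i + 1, j] + phi[i - 1, j] +
                                        phi[i, j + 1] + phi[i, j - 1] +
                                        h * h * charge[i, j])
        return phi

    phi = np.zeros((32, 32)); phi[:, -1] = 1.0   # 1 V plate facing a grounded plate
    rho = np.zeros_like(phi)                     # charge-free gap for simplicity
    phi = relax_poisson(phi, rho, h=1.0 / 31)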
2013-07-01
... also simulated in the models. Data were derived from calculations using the three-dimensional Monte Carlo radiation transport code MCNP (Monte Carlo N-Particle), a general-purpose code designed to simulate neutron and photon transport; an input deck was prepared for the MCNP radiation transport code ...
Hotplate precipitation gauge calibrations and field measurements
NASA Astrophysics Data System (ADS)
Zelasko, Nicholas; Wettlaufer, Adam; Borkhuu, Bujidmaa; Burkhart, Matthew; Campbell, Leah S.; Steenburgh, W. James; Snider, Jefferson R.
2018-01-01
First introduced in 2003, approximately 70 Yankee Environmental Systems (YES) hotplate precipitation gauges have been purchased by researchers and operational meteorologists. A version of the YES hotplate is described in Rasmussen et al. (2011; R11). Presented here is testing of a newer version of the hotplate; this device is equipped with longwave and shortwave radiation sensors. Hotplate surface temperature, coefficients describing natural and forced convective sensible energy transfer, and radiative properties (longwave emissivity and shortwave reflectance) are reported for two of the new-version YES hotplates. These parameters are applied in a new algorithm and are used to derive liquid-equivalent accumulations (snowfall and rainfall), and these accumulations are compared to values derived by the internal algorithm used in the YES hotplates (hotplate-derived accumulations). In contrast with R11, the new algorithm accounts for radiative terms in a hotplate's energy budget, applies an energy conversion factor which does not differ from a theoretical energy conversion factor, and applies a surface area that is correct for the YES hotplate. Radiative effects are shown to be relatively unimportant for the precipitation events analyzed. In addition, this work documents a 10 % difference between the hotplate-derived and new-algorithm-derived accumulations. This difference seems consistent with R11's application of a hotplate surface area that deviates from the actual surface area of the YES hotplate and with R11's recommendation for an energy conversion factor that differs from that calculated using thermodynamic theory.
LDRD Final Review: Radiation Transport Calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goorley, John Timothy; Morgan, George Lake; Lestone, John Paul
2017-06-22
Both high-fidelity & toy simulations are being used to understand measured signals and improve the Area 11 NDSE diagnostic. We continue to gain confidence in the ability of MCNP to simulate neutron and photon transport from source to radiation detector.
Use of Existing CAD Models for Radiation Shielding Analysis
NASA Technical Reports Server (NTRS)
Lee, K. T.; Barzilla, J. E.; Wilson, P.; Davis, A.; Zachman, J.
2015-01-01
The utility of a radiation exposure analysis depends not only on the accuracy of the underlying particle transport code, but also on the accuracy of the geometric representations of both the vehicle used as radiation shielding mass and the phantom representation of the human form. The current NASA/Space Radiation Analysis Group (SRAG) process to determine crew radiation exposure in a vehicle design incorporates both output from the analytic High Z and Energy Particle Transport (HZETRN) code and the properties (i.e., material thicknesses) of a previously processed drawing. This geometry pre-process can be time-consuming, and the results are less accurate than those determined using a Monte Carlo-based particle transport code. The current work aims to improve this process. Although several Monte Carlo programs (FLUKA, Geant4) are readily available, most use an internal geometry engine. The lack of an interface with the standard CAD formats used by the vehicle designers limits the ability of the user to communicate complex geometries. Translation of native CAD drawings into a format readable by these transport programs is time-consuming and prone to error. The Direct Accelerated Geometry-United (DAGU) project is intended to provide an interface between the native vehicle or phantom CAD geometry and multiple particle transport codes to minimize problem setup, computing time, and analysis error.
Radiation Transport in Type IA Supernovae
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eastman, R
1999-11-16
It has been said more than once that the critical link between explosion models and observations is the ability to accurately simulate cooling and radiation transport in the expanding ejecta of Type Ia supernovae. It is perhaps frustrating to some of the theorists who study explosion mechanisms, and to some of the observers too, that more definitive conclusions have not been reached about the agreement, or lack thereof, between various Type Ia supernova models and the data. Although claims of superlative accuracy in transport simulations are sometimes made, I will argue here that there are outstanding issues of critical importance and in need of addressing before radiation transport calculations are accurate enough to discriminate between subtly different explosion models.
GTV-based prescription in SBRT for lung lesions using advanced dose calculation algorithms.
Lacornerie, Thomas; Lisbona, Albert; Mirabel, Xavier; Lartigau, Eric; Reynaert, Nick
2014-10-16
The aim of the current study was to investigate how dose is prescribed to lung lesions during SBRT when advanced dose calculation algorithms that take into account electron transport (type B algorithms) are used. As type A algorithms do not take into account secondary electron transport, they overestimate the dose to lung lesions. Type B algorithms are more accurate, but no consensus has yet been reached regarding dose prescription. The positive clinical results obtained using type A algorithms should be used as a starting point. In the current work a dose-calculation experiment is performed, presenting different prescription methods. Three cases with three different sizes of peripheral lung lesions were planned using three different treatment platforms. For each individual case, 60 Gy to the PTV was prescribed using a type A algorithm, and the dose distribution was recalculated using a type B algorithm in order to evaluate the impact of secondary electron transport. Secondly, for each case a type B algorithm was used to prescribe 48 Gy to the PTV, and the resulting doses to the GTV were analyzed. Finally, prescriptions based on specific GTV dose volumes were evaluated. When using a type A algorithm to prescribe the same dose to the PTV, the differences in median GTV dose among platforms and cases were always less than 10% of the prescription dose. Prescription to the PTV based on type B algorithms leads to greater variability of the median GTV dose among cases and among platforms (24% and 28%, respectively). However, when 54 Gy was prescribed as the median GTV dose using a type B algorithm, the variability observed was minimal. Normalizing the prescription dose to the median GTV dose for lung lesions avoids variability among different cases and treatment platforms of SBRT when type B algorithms are used to calculate the dose. The combination of using a type A algorithm to optimize a homogeneous dose in the PTV and a type B algorithm to prescribe the median GTV dose provides a very robust method for treating lung lesions.
Improving Cancer Detection and Dose Efficiency in Dedicated Breast Cancer CT
2010-02-01
... source trajectory and data truncation, which can however be solved with the back-projection filtration (BPF) algorithm [6,7]. The BPF algorithm was used ... at high to low radiation dose levels. Noise properties were investigated in images reconstructed by use of the FDK and BPF algorithms at different noise levels ... When analytic algorithms such as the FDK and BPF algorithms are applied to sparse-view data, the reconstructed images will contain artifacts such as streaks ...
Accelerated gradient-based free form deformable registration for online adaptive radiotherapy
NASA Astrophysics Data System (ADS)
Yu, Gang; Liang, Yueqiang; Yang, Guanyu; Shu, Huazhong; Li, Baosheng; Yin, Yong; Li, Dengwang
2015-04-01
The registration of planning fan-beam computed tomography (FBCT) and daily cone-beam CT (CBCT) is a crucial step in adaptive radiation therapy. Current intensity-based registration algorithms, such as Demons, may fail when used to register FBCT and CBCT, because the CT numbers in CBCT do not exactly correspond to the electron densities. In this paper, we investigated the effects of CBCT intensity inaccuracy on registration accuracy and developed an accurate gradient-based free-form deformation algorithm (GFFD). GFFD distinguishes itself from other free-form deformable registration algorithms by (a) measuring similarity using 3D gradient vector fields to avoid the effect of inconsistent intensities between the two modalities; (b) accommodating image sampling anisotropy using the local polynomial approximation-intersection of confidence intervals (LPA-ICI) algorithm to ensure a smooth and continuous displacement field; and (c) introducing a 'bi-directional' force along with adaptive force-strength adjustment to accelerate convergence. Such a strategy is expected to decrease the effect of inconsistent intensities between the two modalities, thus improving registration accuracy and robustness. Moreover, for clinical application, the algorithm was implemented on graphics processing units (GPUs) through the OpenCL framework. The registration time of the GFFD algorithm for each set of CT data ranges from 8 to 13 s. Applications to online adaptive image-guided radiation therapy, including auto-propagation of contours, aperture optimization, and dose volume histograms (DVH) in the course of radiation therapy, were also studied using in-house-developed software.
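Point (a) above, similarity measured on gradient vector fields rather than raw intensities, can be illustrated with a global (non-deformable) numpy sketch; the GFFD algorithm itself evaluates this locally inside a free-form deformation loop, which is not reproduced here, and the volumes below are synthetic.

    import numpy as np

    def gradient_field_similarity(a, b, eps=1e-8):
        """Mean cosine similarity between the 3-D intensity-gradient fields of
        two volumes; insensitive to additive/multiplicative intensity shifts
        such as CBCT-vs-FBCT CT-number discrepancies."""
        ga = np.stack(np.gradient(a.astype(float)))
        gb = np.stack(np.gradient(b.astype(float)))
        dot = (ga * gb).sum(axis=0)
        norm = np.linalg.norm(ga, axis=0) * np.linalg.norm(gb, axis=0)
        return (dot / (norm + eps)).mean()

    vol = np.random.rand(16, 16, 16)
    # Rescaled and offset copy: same structure, different "CT numbers".
    print(gradient_field_similarity(vol, 0.8 * vol + 40.0))   # close to 1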
OUTWARD MOTION OF POROUS DUST AGGREGATES BY STELLAR RADIATION PRESSURE IN PROTOPLANETARY DISKS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tazaki, Ryo; Nomura, Hideko, E-mail: rtazaki@kusastro.kyoto-u.ac.jp
2015-02-01
We study the dust motion at the surface layer of protoplanetary disks. Dust grains in the surface layer migrate outward owing to angular momentum transport via the gas-drag force induced by stellar radiation pressure. In this study we calculate the mass flux of the outward motion of compact grains and porous dust aggregates driven by radiation pressure. The radiation pressure force for porous dust aggregates is calculated using the T-Matrix Method for Clusters of Spheres. First, we confirm that porous dust aggregates experience strong radiation pressure even if they grow to larger aggregates, in contrast to homogeneous and spherical compact grains, for which the radiation pressure efficiency becomes lower as their size increases. In addition, we find that the outward mass flux of porous dust aggregates with a monomer size of 0.1 μm is larger than that of compact grains by an order of magnitude at a disk radius of 1 AU, when their sizes are several microns. This implies that large compact grains like calcium-aluminum-rich inclusions are hardly transported to the outer region by stellar radiation pressure, whereas porous dust aggregates like chondritic porous interplanetary dust particles are efficiently transported to the comet formation region. Crystalline silicates are possibly transported in porous dust aggregates by stellar radiation pressure from the inner hot region to the outer cold cometary region in the protosolar nebula.
Method of Generating Transient Equivalent Sink and Test Target Temperatures for Swift BAT
NASA Technical Reports Server (NTRS)
Choi, Michael K.
2004-01-01
The NASA Swift mission has a 600-km altitude and a maximum inclination of 22 degrees. The sun angle varies from 45 degrees to 180 degrees in normal operation. As a result, the environmental heat fluxes absorbed by the Burst Alert Telescope (BAT) radiator and loop heat pipe (LHP) compensation chambers (CCs) vary transiently, and therefore so do the equivalent sink temperatures for the radiator and CCs. In thermal performance verification testing in vacuum, the radiator and CCs radiated heat to sink targets. This paper presents an analytical technique for generating orbit-transient equivalent sink temperatures and a technique for generating transient sink target temperatures for the radiator and LHP CCs. Using these techniques, transient target temperatures for the radiator and LHP CCs were generated for three thermal environmental cases: worst hot case, worst cold case, and cooldown and warmup between the worst hot case in sunlight and the worst cold case in eclipse, and for three different heat transport values: 128 W, 255 W, and 382 W. The 128 W case assumed that the two LHPs share the 255 W heat load equally (about 128 W each). The 255 W case assumed that one LHP fails, so that the remaining LHP transports all the waste heat from the detector array to the radiator. The 382 W case assumed that one LHP fails, so that the remaining LHP transports all the waste heat from the detector array to the radiator, with a 50% design margin. All these transient target temperatures were successfully implemented in the engineering test unit (ETU) LHP and flight LHP thermal performance verification tests in vacuum.
Modelling the phase curve and occultation of WASP-43b with SPIDERMAN
NASA Astrophysics Data System (ADS)
Louden, Tom
2017-06-01
We present SPIDERMAN, a fast code for calculating exoplanet phase curves and secondary eclipses with arbitrary two-dimensional surface brightness distributions. SPIDERMAN uses an exact geometric algorithm to calculate the area of sub-regions of the planet that are occulted by the star, with no loss in numerical precision. The speed of this calculation makes it possible to run MCMCs that marginalise effectively over the underlying parameters controlling the brightness distribution of exoplanets. The code is fully open source and available on GitHub. We apply the code to the phase curve of WASP-43b using an analytical surface brightness distribution and find an excellent fit to the data. We are able to place direct constraints on the physics of heat transport in the atmosphere, such as the ratio between advective and radiative timescales at different altitudes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlson, Neil; Jibben, Zechariah; Brady, Peter
2017-06-28
Pececillo is a proxy-app for the open source Truchas metal processing code (LA-CC-15-097). It implements many of the physics models used in Truchas: free-surface, incompressible Navier-Stokes fluid dynamics (e.g., water waves); heat transport, material phase change, and view-factor thermal radiation; species advection-diffusion; quasi-static, elastic/plastic solid mechanics with contact; and electromagnetics (Maxwell's equations). The models are simplified versions that retain the fundamental computational complexity of the Truchas models while omitting many non-essential features and modeling capabilities. The purpose is to expose Truchas algorithms in a greatly simplified context where computer science problems related to parallel performance on advanced architectures can be more easily investigated. While Pececillo is capable of performing simulations representative of typical Truchas metal casting, welding, and additive manufacturing simulations, it lacks many of the modeling capabilities needed for real applications.
PCA-based artifact removal algorithm for stroke detection using UWB radar imaging.
Ricci, Elisa; di Domenico, Simone; Cianca, Ernestina; Rossi, Tommaso; Diomedi, Marina
2017-06-01
Stroke patients should be dispatched to the highest level of care available in the shortest time. In this context, a transportable system in specialized ambulances, able to evaluate the presence of an acute brain lesion in a short time interval (i.e., a few minutes), could shorten the delay of treatment. UWB radar imaging is an emerging diagnostic branch that has great potential for the implementation of a transportable and low-cost device. Transportability, low cost, and short response time pose challenges to the signal-processing algorithms for the backscattered signals, as they should guarantee good performance with a reasonably low number of antennas and low computational complexity, which is tightly related to the response time of the device. The paper shows that a PCA-based preprocessing algorithm can: (1) achieve good performance already with a computationally simple beamforming algorithm; (2) outperform state-of-the-art preprocessing algorithms; and (3) enable a further improvement in performance (and/or a decrease in the number of antennas) by using a multistatic approach with just a modest increase in computational complexity. This is an important result toward the implementation of such a diagnostic device, which could play an important role in emergency scenarios.
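The core of a PCA-based preprocessing step of this kind can be sketched with an SVD: the strongest principal components across antenna channels capture the reflection common to all channels (e.g., the skin response) and are removed before beamforming. The data shapes and number of removed components below are hypothetical, not taken from the paper.

    import numpy as np

    def pca_artifact_removal(signals, n_remove=1):
        """Remove the n_remove strongest principal components from a matrix of
        backscattered signals (antennas x time samples). The leading components
        capture the reflection common to all channels; the residual emphasizes
        target-specific scattering."""
        u, s, vt = np.linalg.svd(signals, full_matrices=False)
        s[:n_remove] = 0.0                  # zero out the artifact components
        return (u * s) @ vt                 # reconstruct without them

    rng = np.random.default_rng(0)
    shared = 10.0 * np.sin(np.linspace(0, 6, 256))          # artifact in every channel
    data = rng.normal(size=(8, 256)) + shared
    clean = pca_artifact_removal(data, n_remove=1)          # shared term suppressed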
NASA Astrophysics Data System (ADS)
Flores-McLaughlin, John
2017-08-01
Planetary bodies and spacecraft are predominantly exposed to isotropic radiation environments that are subject to transport and interaction in various material compositions and geometries. Specifically, the Martian surface radiation environment is composed of galactic cosmic radiation, secondary particles produced by their interaction with the Martian atmosphere, albedo particles from the Martian regolith and occasional solar particle events. Despite this complex physical environment with potentially significant locational and geometric dependencies, computational resources often limit radiation environment calculations to a one-dimensional or slab geometry specification. To better account for Martian geometry, spherical volumes with respective Martian material densities are adopted in this model. This physical description is modeled with the PHITS radiation transport code and compared to a portion of measurements from the Radiation Assessment Detector of the Mars Science Laboratory. Particle spectra measured between 15 November 2015 and 15 January 2016 and PHITS model results calculated for this time period are compared. Results indicate good agreement between simulated dose rates, proton, neutron and gamma spectra. This work was originally presented at the 1st Mars Space Radiation Modeling Workshop held in 2016 in Boulder, CO.
ERIC Educational Resources Information Center
Hofmann, Richard J.
1978-01-01
A general factor analysis computer algorithm is briefly discussed. The algorithm is highly transportable with minimum limitations on the number of observations. Both singular and non-singular data can be analyzed. (Author/JKS)
NASA Astrophysics Data System (ADS)
Bodin, Jacques
2015-03-01
In this study, new multi-dimensional time-domain random walk (TDRW) algorithms are derived from approximate one-dimensional (1-D), two-dimensional (2-D), and three-dimensional (3-D) analytical solutions of the advection-dispersion equation and from exact 1-D, 2-D, and 3-D analytical solutions of the pure-diffusion equation. These algorithms enable the calculation of both the time required for a particle to travel a specified distance in a homogeneous medium and the mass recovery at the observation point, which may be incomplete due to 2-D or 3-D transverse dispersion or diffusion. The method is extended to heterogeneous media, represented as a piecewise collection of homogeneous media. The particle motion is then decomposed along a series of intermediate checkpoints located on the medium interface boundaries. The accuracy of the multi-dimensional TDRW method is verified against (i) exact analytical solutions of solute transport in homogeneous media and (ii) finite-difference simulations in a synthetic 2-D heterogeneous medium of simple geometry. The results demonstrate that the method is ideally suited to purely diffusive transport and to advection-dispersion transport problems dominated by advection. Conversely, the method is not recommended for highly dispersive transport problems because the accuracy of the advection-dispersion TDRW algorithms degrades rapidly for a low Péclet number, consistent with the accuracy limit of the approximate analytical solutions. The proposed approach provides a unified methodology for deriving multi-dimensional time-domain particle equations and may be applicable to other mathematical transport models, provided that appropriate analytical solutions are available.
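For the 1-D advection-dispersion case, the TDRW travel time over a distance L at velocity v with dispersion coefficient D follows an inverse Gaussian (Wald) law with mean L/v and shape L^2/(2D), which numpy can sample directly. Below is a sketch of that single step only; the algorithms in the paper additionally handle 2-D/3-D motion and incomplete mass recovery, and the parameter values are invented.

    import numpy as np

    rng = np.random.default_rng(42)

    def tdrw_step_1d(L, v, D, size=1):
        """Sample travel times over distance L for 1-D advection-dispersion:
        first-passage times are inverse-Gaussian with mean L/v and
        shape L^2 / (2 D)."""
        return rng.wald(L / v, L * L / (2.0 * D), size=size)

    times = tdrw_step_1d(L=10.0, v=0.5, D=0.1, size=10_000)
    print(times.mean())   # close to L/v = 20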
NASA Astrophysics Data System (ADS)
Moreno, H. A.; Ogden, F. L.; Alvarez, L. V.
2016-12-01
This research work presents a methodology for estimating terrain slope, aspect (slope orientation), and total incoming solar radiation from Triangular Irregular Network (TIN) terrain models. The algorithm accounts for self-shading and cast shadows, sky view fractions for diffuse radiation, remote albedo, and atmospheric backscattering by using a vectorial approach within a topocentric coordinate system and establishing geometric relations between groups of TIN elements and the sun position. A normal vector to the surface of each TIN element describes slope and aspect, while spherical trigonometry allows computing a unit vector defining the position of the sun at each hour and day of the year. Thus, a dot product determines the radiation flux at each TIN element. Cast shadows are computed by scanning the projection of groups of TIN elements in the direction of the closest perpendicular plane to the sun vector, only within the visible horizon range. Sky view fractions are computed by a simplified scanning algorithm from the highest to the lowest triangles along prescribed directions and visible distances, useful for determining diffuse radiation. Finally, remote albedo is computed from the sky-view-fraction complementary functions for prescribed albedo values of the surrounding terrain, only for significant angles above the horizon. The sensitivity of the different radiative components to seasonal changes in weather and surrounding albedo (snow) is tested in a mountainous watershed in Wyoming. This methodology improves on current algorithms for computing terrain and radiation values on triangular-based models in an accurate and efficient manner. All terrain-related features (e.g., slope, aspect, sky view fraction) can be pre-computed and stored for easy access in a subsequent, progressive-in-time numerical simulation.
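The dot-product step described above reduces, per TIN facet, to a few lines: direct-beam irradiance is the solar constant times the cosine between the facet's unit normal and the unit sun vector, clamped at zero for self-shaded facets. A sketch with an invented facet and sun position follows (cast shadows, sky view fractions, and remote albedo omitted):

    import numpy as np

    def facet_direct_irradiance(vertices, sun_dir, s0=1361.0):
        """Direct-beam irradiance (W/m^2) on a triangular facet: S0 * (n . s),
        clamped at 0 when the facet faces away from the sun (self-shading).
        `vertices` is a 3x3 array of facet corners; `sun_dir` is a unit vector."""
        a, b, c = np.asarray(vertices, dtype=float)
        n = np.cross(b - a, c - a)
        n /= np.linalg.norm(n)          # unit normal encodes slope and aspect
        if n[2] < 0:
            n = -n                      # keep the upward-facing orientation
        return s0 * max(0.0, float(n @ np.asarray(sun_dir)))

    tri = [(0, 0, 0), (1, 0, 0), (0, 1, 0.2)]     # a gently tilted facet
    sun = np.array([0.3, 0.3, 0.9]); sun /= np.linalg.norm(sun)
    print(facet_direct_irradiance(tri, sun))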
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enghauser, Michael
2015-02-01
The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cox, R.G.
Much controversy surrounds government regulation of the routing and scheduling of Hazardous Materials Transportation (HMT). Increases in operating costs must be balanced against expected benefits from local HMT bans and curfews when promulgating or preempting HMT regulations. Algorithmic approaches for evaluating HMT routing and scheduling regulatory policy are described. A review of current US HMT regulatory policy is presented to provide a context for the analysis. Next, a multiobjective shortest path algorithm to find the set of efficient routes under conflicting objectives is presented. This algorithm generates all efficient routes under any partial ordering in a single pass through the network. Also, scheduling algorithms are presented to estimate the travel-time delay due to HMT curfews along a route. Algorithms are presented assuming either deterministic or stochastic travel times between curfew cities, and also possible rerouting to avoid such cities. These algorithms are applied to the case study of US highway transport of spent nuclear fuel from reactors to permanent repositories. Two data sets were used. One data set included the US Interstate Highway System (IHS) network with reactor locations, possible repository sites, and 150 heavily populated areas (HPAs). The other data set contained estimates of the population residing within 0.5 miles of the IHS in the Eastern US. Curfew delay is dramatically reduced by optimally scheduling departure times unless inter-HPA travel times are highly uncertain. Rerouting shipments to avoid HPAs is a less efficient approach to reducing delay.
Introduction of Parallel GPGPU Acceleration Algorithms for the Solution of Radiative Transfer
NASA Technical Reports Server (NTRS)
Godoy, William F.; Liu, Xu
2011-01-01
General-purpose computing on graphics processing units (GPGPU) is a recent technique that allows the parallel graphics processing unit (GPU) to accelerate calculations performed sequentially by the central processing unit (CPU). To introduce GPGPU to radiative transfer, the Gauss-Seidel solution of the well-known expressions for 1-D and 3-D homogeneous, isotropic media is selected as a test case. Different algorithms are introduced to balance memory and GPU-CPU communication, critical aspects of GPGPU. Results show that speed-ups of one to two orders of magnitude are obtained when compared to sequential solutions. The underlying value of GPGPU is its potential extension in radiative solvers (e.g., Monte Carlo, discrete ordinates) at a minimal learning curve.
NASA Astrophysics Data System (ADS)
Zhevnerchuk, D. V.; Surkova, A. S.; Lomakina, L. S.; Golubev, A. S.
2018-05-01
The article describes a component representation approach and semantic models for protecting on-board electronics from ionizing radiation of various origins. Semantic models are constructed whose distinguishing feature is the representation of electronic elements, protection modules, and sources of impact as blocks with interfaces. Rules of logical inference and algorithms are developed for synthesizing the object properties of the semantic network, imitating the interfaces between the components of the protection system and the radiation sources. The results of the algorithm are considered using the example of the radiation-resistant microcircuits 1645RU5U and 1645RT2U and a combined computational-experimental method for estimating the durability of on-board electronics.
Three-dimensional monochromatic x-ray computed tomography using synchrotron radiation
NASA Astrophysics Data System (ADS)
Saito, Tsuneo; Kudo, Hiroyuki; Takeda, Tohoru; Itai, Yuji; Tokumori, Kenji; Toyofuku, Fukai; Hyodo, Kazuyuki; Ando, Masami; Nishimura, Katsuyuki; Uyama, Chikao
1998-08-01
We describe a technique of 3D computed tomography (3D CT) using monochromatic x rays generated by synchrotron radiation, which performs a direct reconstruction of a 3D volume image of an object from its cone-beam projections. For the development, we propose a practical scanning orbit of the x-ray source to obtain complete 3D information on an object, and its corresponding 3D image reconstruction algorithm. The validity and usefulness of the proposed scanning orbit and reconstruction algorithm were confirmed by computer simulation studies. Based on these investigations, we have developed a prototype 3D monochromatic x-ray CT using synchrotron radiation, which provides exact 3D reconstruction and material-selective imaging by using the K-edge energy subtraction technique.
Radiative and convective heating during Venus entry.
NASA Technical Reports Server (NTRS)
Page, W. A.; Woodward, H. T.
1972-01-01
Determination of the stagnation region heating of probes entering the Venusian atmosphere. Both convective and radiative heat-transfer rates are predicted, and account is taken of the important effects of radiative transport in the vehicle shock layer. A nongray radiative transport model is utilized which parallels a four-band treatment previously developed for air (Page et al., 1969), but includes two additional bands to account for the important CO(4+) molecular band system. Some comparisons are made between results for Venus entry and results for earth entry obtained using a viscous earth entry program.
NASA Technical Reports Server (NTRS)
Gleckler, P. J.; Randall, D. A.; Boer, G.; Colman, R.; Dix, M.; Galin, V.; Helfand, M.; Kiehl, J.; Kitoh, A.; Lau, W.
1995-01-01
This paper summarizes the ocean surface net energy flux simulated by fifteen atmospheric general circulation models constrained by realistically-varying sea surface temperatures and sea ice as part of the Atmospheric Model Intercomparison Project. In general, the simulated energy fluxes are within the very large observational uncertainties. However, the annual mean oceanic meridional heat transport that would be required to balance the simulated surface fluxes is shown to be critically sensitive to the radiative effects of clouds, to the extent that even the sign of the Southern Hemisphere ocean heat transport can be affected by the errors in simulated cloud-radiation interactions. It is suggested that improved treatment of cloud radiative effects should help in the development of coupled atmosphere-ocean general circulation models.
NASA Technical Reports Server (NTRS)
Kaufman, Yoram; Tanre, Didier; Remer, Lorraine; Holben, Brent; Lau, William K.-M. (Technical Monitor)
2001-01-01
The MODIS instrument was launched on the NASA Terra satellite in Dec. 1999. Since last October, the sensor and the aerosol algorithm have reached maturity and provide global daily retrievals of aerosol optical thickness and properties. MODIS has 36 spectral channels in the visible to IR with resolution down to 250 m. This allows accurate cloud screening and multi-spectral aerosol retrievals. We derive the aerosol optical thickness over the ocean and most of the land areas, distinguishing between fine (mainly man-made) and coarse aerosol particles. The information is more precise over the ocean, where we also derive the effective radius and scattering asymmetry parameter of the aerosol. New methods to derive the aerosol single scattering albedo are also being developed. These measurements are used to track different aerosol sources, transport, and the radiative forcing at the top and bottom of the atmosphere. The AErosol RObotic NETwork of ground-based radiometers is used for global validation of the satellite-derived optical thickness, size parameters, and single scattering albedo, and to measure additional aerosol parameters that cannot be derived from space.
Multi-scale structural analysis of gas diffusion layers
NASA Astrophysics Data System (ADS)
Göbel, Martin; Godehardt, Michael; Schladitz, Katja
2017-07-01
The macroscopic properties of materials are strongly determined by their microstructure. Here, transport properties of gas diffusion layers (GDL) for fuel cells are considered. In order to simulate flow and thermal properties, detailed microstructural information is essential. 3D images obtained by high-resolution computed tomography using synchrotron radiation and by scanning electron microscopy (SEM) combined with focused ion beam (FIB) serial slicing were used. A recent method for reconstructing porous structures from FIB-SEM images and sophisticated morphological image transformations were applied to segment the solid structural components. The essential algorithmic steps for segmenting the different components in the tomographic data sets are described and discussed. In this paper, two types of GDL, based on a non-woven substrate layer and a paper substrate layer, respectively, were considered. More than three components are separated within the synchrotron radiation computed tomography data: fiber system, polytetrafluoroethylene (PTFE) binder/impregnation, micro porous layer (MPL), inclusions within the latter, and pore space are segmented. The use of the derived 3D structure data in different simulation applications is demonstrated. Simulations of macroscopic properties such as thermal conductivity, depending on the flooding state of the GDL, are possible.
Analysis of Mass Averaged Tissue Doses in CAM, CAF, MAX, and FAX
NASA Technical Reports Server (NTRS)
Slaba, Tony C.; Qualls, Garry D.; Clowdsley, Martha S.; Blattnig, Steve R.; Simonsen, Lisa C.; Walker, Steven A.; Singleterry, Robert C.
2009-01-01
To estimate astronaut health risk due to space radiation, one must have the ability to calculate exposure-related quantities averaged over specific organs and tissue types. In this study, we first examine the anatomical properties of the Computerized Anatomical Man (CAM), Computerized Anatomical Female (CAF), Male Adult voXel (MAX), and Female Adult voXel (FAX) models by comparing the masses of various tissues to the reference values specified by the International Commission on Radiological Protection (ICRP). Major discrepancies are found between the CAM and CAF tissue masses and the ICRP reference data for almost all of the tissues. We next examine the distribution of target points used with the deterministic transport code HZETRN to compute mass averaged exposure quantities. A numerical algorithm is used to generate multiple point distributions for many of the effective dose tissues identified in CAM, CAF, MAX, and FAX. It is concluded that the previously published CAM and CAF point distributions were under-sampled and that the set of point distributions presented here should be adequate for future studies involving CAM, CAF, MAX, or FAX. It is concluded that MAX and FAX are more accurate than CAM and CAF for space radiation analyses.
Benchmark Analysis of Pion Contribution from Galactic Cosmic Rays
NASA Technical Reports Server (NTRS)
Aghara, Sukesh K.; Blattnig, Steve R.; Norbury, John W.; Singleterry, Robert C., Jr.
2008-01-01
Shielding strategies for extended stays in space must include a comprehensive resolution of the secondary radiation environment inside the spacecraft induced by the primary, external radiation. The distribution of absorbed dose and dose equivalent is a function of the type, energy and population of these secondary products. A systematic verification and validation effort is underway for HZETRN, which is a space radiation transport code currently used by NASA. It performs neutron, proton and heavy ion transport explicitly, but it does not take into account the production and transport of mesons, photons and leptons. The question naturally arises as to what the contribution of these particles to space radiation is. The pion has a production kinetic energy threshold of about 280 MeV. The galactic cosmic ray (GCR) spectra, coincidentally, reach flux maxima in the hundreds of MeV range, corresponding to the pion production threshold. We present results from the Monte Carlo code MCNPX, showing the effect of lepton and meson physics when produced and transported explicitly in a GCR environment.
NASA Technical Reports Server (NTRS)
Lin, Zi-Wei; Adams, J. H., Jr.
2005-01-01
Space radiation risk to astronauts is a major obstacle for long term human space explorations. Space radiation transport codes have thus been developed to evaluate radiation effects at the International Space Station (ISS) and in missions to the Moon or Mars. We study how nuclear fragmentation processes in such radiation transport affect predictions on the radiation risk from galactic cosmic rays. Taking into account effects of the geomagnetic field on the cosmic ray spectra, we investigate the effects of fragmentation cross sections at different energies on the radiation risk (represented by dose-equivalent) from galactic cosmic rays behind typical spacecraft materials. These results tell us how the radiation risk at the ISS is related to nuclear cross sections at different energies, and consequently how to most efficiently reduce the physical uncertainty in our predictions on the radiation risk at the ISS.
Implications of Lagrangian Tracer Transport for Coupled Chemistry-Climate Simulations
NASA Astrophysics Data System (ADS)
Stenke, A.
2009-05-01
Today's coupled chemistry-climate models (CCM) consider a large number of trace species and feedback processes. Due to the radiative effect of some species, errors in simulated tracer distributions can feed back on model dynamics. Thus, shortcomings of the applied transport schemes can have severe implications for the overall model performance. Traditional Eulerian approaches show a satisfactory performance in the case of homogeneously distributed trace species, but they can lead to severe problems when applied to highly inhomogeneous tracer distributions. In the case of sharp gradients many schemes show considerable numerical diffusion. Lagrangian approaches, on the other hand, combine a number of favourable numerical properties: they are strictly mass-conserving and do not suffer from numerical diffusion. Therefore they are able to maintain steeper gradients. A further advantage is that they allow the transport of a large number of tracers without being prohibitively expensive. A variety of benefits for stratospheric dynamics and chemistry resulting from a Lagrangian transport algorithm are demonstrated by the example of the CCM E39C. In an updated version of E39C, called E39C-A, the operational semi-Lagrangian advection scheme has been replaced with the purely Lagrangian scheme ATTILA. It will be shown that several model deficiencies can be cured by the choice of an appropriate transport algorithm. The most important advancement concerns the reduction of a pronounced wet bias in the extratropical lowermost stratosphere. In turn, the associated temperature error ("cold bias") is significantly reduced. Stratospheric wind variations are now in better agreement with observations; e.g., E39C-A is able to reproduce the stratospheric wind reversal in the Southern Hemisphere in summer, which was not captured by the previous model version. Resulting changes in wave propagation and dissipation lead to a weakening of the simulated mean meridional circulation and therefore a more realistic representation of tropical upwelling. Simulated distributions of chemical tracers in the stratosphere are clearly improved. For example, the vertical distribution of stratospheric chlorine (Cly) is now in agreement with analyses derived from observations and other CCMs. As a consequence the model realistically captures the altitude of maximum ozone depletion in the stratosphere. Furthermore, the simulated temporal evolution of stratospheric Cly in the past is realistically reproduced, which is an important step towards more reliable projections of future changes, especially of stratospheric ozone.
Aerosol Retrievals Using Channel 1 and 2 AVHRR Data
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Geogdzhayev, Igor V.; Cairns, Brian; Rossow, William B.
1999-01-01
The effect of tropospheric aerosols on global climate via the direct and indirect radiative forcings is one of the largest remaining uncertainties in climate change studies. Current assessments of the direct aerosol radiative effect mainly focus on sulfate aerosols. It has become clear, however, that other aerosol types like soil dust and smoke from biomass burning are also likely to be important climate forcing factors. The magnitude and even the sign of the climate forcing caused by these aerosol types is still unknown. General circulation models (GCMs) can be used to estimate the climatic effect of the direct radiative forcing by tropospheric and stratospheric aerosols. Aerosol optical properties are already parameterized in the Goddard Institute for Space Studies GCM. Once the global distribution of aerosol properties (optical thickness, size distribution, and chemical composition) is available, the calculation of the direct aerosol forcing is rather straightforward. However, estimates of the indirect aerosol effect require additional knowledge of the physics and chemistry of aerosol-cloud interactions, which are still poorly understood. One of the main objectives of the Global Aerosol Climatology Project, established in 1998 as a joint initiative of NASA's Radiation Science Program and GEWEX, is to infer the global distribution of aerosols, their properties, and their seasonal and interannual variations for the full period of available satellite data. This will be accomplished primarily through a systematic application of multichannel aerosol retrieval algorithms to existing satellite data and advanced 3-dimensional aerosol chemistry/transport models. In this paper we outline the methodology of analyzing channel 1 and 2 AVHRR radiance data over the oceans and describe preliminary retrieval results.
The role of advanced reconstruction algorithms in cardiac CT
Halliburton, Sandra S.; Tanabe, Yuki; Partovi, Sasan
2017-01-01
Non-linear iterative reconstruction (IR) algorithms have been increasingly incorporated into clinical cardiac CT protocols at institutions around the world. Multiple IR algorithms are available commercially from various vendors. IR algorithms decrease image noise and are primarily used to enable lower radiation dose protocols. IR can also be used to improve image quality for imaging of obese patients, coronary atherosclerotic plaques, coronary stents, and myocardial perfusion. In this article, we will review the various applications of IR algorithms in cardiac imaging and evaluate how they have changed practice. PMID:29255694
New Parallel Algorithms for Landscape Evolution Model
NASA Astrophysics Data System (ADS)
Jin, Y.; Zhang, H.; Shi, Y.
2017-12-01
Most landscape evolution models (LEM) developed in the last two decades solve the diffusion equation to simulate the transport of surface sediments. This numerical approach is difficult to parallelize due to the computation of drainage area for each node, which requires a large amount of communication if run in parallel. In order to overcome this difficulty, we developed two parallel algorithms for LEM with a stream net. One algorithm handles the partition of the grid with traditional methods and applies an efficient global reduction algorithm to compute drainage areas and transport rates for the stream net; the other algorithm is based on a new partition algorithm, which first partitions the nodes in catchments between processes and then partitions the cells according to the partition of nodes. Both methods focus on decreasing communication between processes and take advantage of large-scale computing techniques, and numerical experiments show that they are both adequate to handle large-scale problems with millions of cells. We implemented the two algorithms in our program based on the widely used finite element library deal.II, so that it can be easily coupled with ASPECT.
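For orientation, a serial reference version of the drainage-area accumulation that makes these models hard to parallelize might look as follows; the single-receiver (D8-style) stream-net representation and all names are our assumptions, not the authors' code.

```python
# Serial reference sketch: drainage area on a single-receiver stream net,
# accumulated from high to low elevation. In the paper, this accumulation
# is the step that the parallel reduction algorithm distributes.
import numpy as np

def drainage_area(elevation, receiver, cell_area=1.0):
    """elevation[i]: node elevation; receiver[i]: downstream node (or i at outlets)."""
    n = len(elevation)
    area = np.full(n, cell_area)
    for i in np.argsort(-elevation):       # process the highest node first
        if receiver[i] != i:               # pass accumulated area downstream
            area[receiver[i]] += area[i]
    return area

elev = np.array([5.0, 4.0, 3.0, 1.0])
recv = np.array([1, 2, 3, 3])              # a chain draining to node 3 (outlet)
print(drainage_area(elev, recv))           # [1. 2. 3. 4.]
```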
49 CFR 173.310 - Exceptions for radiation detectors.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 49 Transportation 2 2012-10-01 2012-10-01 false Exceptions for radiation detectors. 173.310... for radiation detectors. Radiation detectors, radiation sensors, electron tube devices, or ionization chambers, herein referred to as “radiation detectors,” that contain only Division 2.2 gases, are excepted...
49 CFR 173.310 - Exceptions for radiation detectors.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 49 Transportation 2 2013-10-01 2013-10-01 false Exceptions for radiation detectors. 173.310... for radiation detectors. Radiation detectors, radiation sensors, electron tube devices, or ionization chambers, herein referred to as “radiation detectors,” that contain only Division 2.2 gases, are excepted...
49 CFR 173.310 - Exceptions for radiation detectors.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 49 Transportation 2 2014-10-01 2014-10-01 false Exceptions for radiation detectors. 173.310... for radiation detectors. Radiation detectors, radiation sensors, electron tube devices, or ionization chambers, herein referred to as “radiation detectors,” that contain only Division 2.2 gases, are excepted...
Calculation of radiation therapy dose using all particle Monte Carlo transport
Chandler, William P.; Hartmann-Siantar, Christine L.; Rathkopf, James A.
1999-01-01
The actual radiation dose absorbed in the body is calculated using three-dimensional Monte Carlo transport. Neutrons, protons, deuterons, tritons, helium-3, alpha particles, photons, electrons, and positrons are transported in a completely coupled manner, using this Monte Carlo All-Particle Method (MCAPM). The major elements of the invention include: computer hardware, user description of the patient, description of the radiation source, physical databases, Monte Carlo transport, and output of dose distributions. This facilitated the estimation of dose distributions on a Cartesian grid for neutrons, photons, electrons, positrons, and heavy charged-particles incident on any biological target, with resolutions ranging from microns to centimeters. Calculations can be extended to estimate dose distributions on general-geometry (non-Cartesian) grids for biological and/or non-biological media.
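As a deliberately tiny illustration of the Monte Carlo transport idea (and nothing like the coupled all-particle method of the invention), the following sketch tracks mono-energetic photons through a 1D slab and tallies absorbed energy per depth bin; all constants and names are made up for illustration.

```python
# Toy sketch, far simpler than the coupled all-particle method described:
# mono-energetic photons tracked through a 1D slab, with absorbed energy
# tallied per depth bin. Constants are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
mu = 0.2            # total attenuation coefficient (1/cm), assumed
absorb_frac = 0.4   # fraction of interactions that absorb (rest forward-scatter)
depth, nbins, nphotons, e0 = 10.0, 20, 100_000, 1.0
dose = np.zeros(nbins)

for _ in range(nphotons):
    x = 0.0
    while True:
        x += rng.exponential(1.0 / mu)          # distance to next interaction
        if x >= depth:
            break                               # photon escapes the slab
        if rng.random() < absorb_frac:
            dose[int(x / depth * nbins)] += e0  # deposit energy locally
            break
        # else: forward scatter, keep going (no angular change in this toy)

print(dose / nphotons)  # mean absorbed energy per incident photon, per bin
```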
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sen, Satyabrata; Rao, Nageswara S; Wu, Qishi
There have been increasingly large deployments of radiation detection networks that require computationally fast algorithms to produce prompt results over ad-hoc sub-networks of mobile devices, such as smartphones. These algorithms are in sharp contrast to complex network algorithms that necessitate all measurements to be sent to powerful central servers. In this work, at individual sensors, we employ Wald-statistic-based detection algorithms which are computationally very fast and are implemented as one of three Z-tests and four chi-square tests. At the fusion center, we apply K-out-of-N fusion to combine the sensors' hard decisions. We characterize the performance of the detection methods by deriving analytical expressions for the distributions of the underlying test statistics, and by analyzing the fusion performance in terms of K, N, and the false-alarm rates of individual detectors. We experimentally validate our methods using measurements from indoor and outdoor characterization tests of the Intelligence Radiation Sensors Systems (IRSS) program. In particular, utilizing the outdoor measurements, we construct two important real-life scenarios, boundary surveillance and portal monitoring, and present the results of our algorithms.
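A minimal sketch of the two-stage scheme described, with a one-sided Z-test at each sensor and K-out-of-N voting at the fusion center, could read as follows; the background statistics, false-alarm level, and function names are illustrative assumptions.

```python
# Sketch under our own assumptions: per-sensor Z-tests on count rates,
# fused with a K-out-of-N rule as in the abstract. Thresholds illustrative.
import numpy as np
from scipy.stats import norm

def sensor_decision(counts, bg_mean, bg_std, alpha=1e-3):
    """One-sided Z-test: declare 'source present' if counts exceed background."""
    z = (counts - bg_mean) / bg_std
    return z > norm.ppf(1.0 - alpha)

def fuse_k_of_n(decisions, k):
    """Declare detection if at least k of the n sensors fire."""
    return int(np.sum(decisions)) >= k

rng = np.random.default_rng(1)
bg_mean, bg_std, n = 100.0, 10.0, 7
counts = rng.normal(bg_mean + 25.0, bg_std, size=n)  # weak source present
votes = [sensor_decision(c, bg_mean, bg_std) for c in counts]
print(votes, fuse_k_of_n(votes, k=3))
```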
Brunner, Stephen; Nett, Brian E; Tolakanahalli, Ranjini; Chen, Guang-Hong
2011-02-21
X-ray scatter is a significant problem in cone-beam computed tomography when thicker objects and larger cone angles are used, as scattered radiation can lead to reduced contrast and CT number inaccuracy. Advances have been made in x-ray computed tomography (CT) by incorporating a high quality prior image into the image reconstruction process. In this paper, we extend this idea to correct scatter-induced shading artifacts in cone-beam CT image-guided radiation therapy. Specifically, this paper presents a new scatter correction algorithm which uses a prior image with low scatter artifacts to reduce shading artifacts in cone-beam CT images acquired under conditions of high scatter. The proposed correction algorithm begins with an empirical hypothesis that the target image can be written as a weighted summation of a series of basis images that are generated by raising the raw cone-beam projection data to different powers, and then, reconstructing using the standard filtered backprojection algorithm. The weight for each basis image is calculated by minimizing the difference between the target image and the prior image. The performance of the scatter correction algorithm is qualitatively and quantitatively evaluated through phantom studies using a Varian 2100 EX System with an on-board imager. Results show that the proposed scatter correction algorithm using a prior image with low scatter artifacts can substantially mitigate scatter-induced shading artifacts in both full-fan and half-fan modes.
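The weighted-basis-image idea can be sketched compactly: reconstruct basis images from the raw projections raised to several powers, then fit the weights to the low-scatter prior by least squares. In the sketch below the filtered-backprojection step is reduced to a placeholder and the powers are arbitrary; this is a schematic of the stated hypothesis, not the authors' implementation.

```python
# Schematic sketch of the core idea only (reconstruction reduced to a
# placeholder): basis images from powers of the raw projections, with
# weights fit to a low-scatter prior image by least squares.
import numpy as np

def fbp(projections):
    """Placeholder for the standard filtered-backprojection operator."""
    return projections.mean(axis=0)   # stand-in; a real FBP goes here

def scatter_corrected(projections, prior, powers=(0.6, 0.8, 1.0, 1.2)):
    # One basis image per power of the raw cone-beam projection data
    basis = np.stack([fbp(projections ** p) for p in powers])
    # Weights minimize || prior - sum_j w_j * basis_j ||_2
    A = basis.reshape(len(powers), -1).T
    w, *_ = np.linalg.lstsq(A, prior.ravel(), rcond=None)
    return np.tensordot(w, basis, axes=1)

proj = np.random.default_rng(2).uniform(0.5, 1.5, size=(64, 32, 32))
prior = fbp(proj) * 0.9               # stand-in for a low-scatter prior
print(scatter_corrected(proj, prior).shape)
```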
NASA Technical Reports Server (NTRS)
Sohn, Byung-Ju; Smith, Eric A.
1992-01-01
This report investigates the impact of differential net radiative heating on 2D energy transports within the atmosphere-ocean system and the role of clouds in this process. The 2D mean energy transports, in response to zonal and meridional gradients in the net radiation field, show an east-west coupled dipole structure in which the Pacific acts as the major energy source and North Africa as the major energy sink. It is demonstrated that the dipole is embedded in the secondary energy transports arising mainly from the differential heating between land and oceans in the tropics, in which the tropical east-west (zonal) transports are up to 30 percent of the tropical north-south (meridional) transports.
Racial disparities in travel time to radiotherapy facilities in the Atlanta metropolitan area
Peipins, Lucy A.; Graham, Shannon; Young, Randall; Lewis, Brian; Flanagan, Barry
2018-01-01
Low-income women with breast cancer who rely on public transportation may have difficulty in completing recommended radiation therapy due to inadequate access to radiation facilities. Using a geographic information system (GIS) and network analysis we quantified spatial accessibility to radiation treatment facilities in the Atlanta, Georgia metropolitan area. We built a transportation network model that included all bus and rail routes and stops, system transfers and walk and wait times experienced by public transportation system travelers. We also built a private transportation network to model travel times by automobile. We calculated travel times to radiation therapy facilities via public and private transportation from a population-weighted center of each census tract located within the study area. We broadly grouped the tracts by low, medium and high household access to a private vehicle and by race. Facility service areas were created using the network model to map the extent of areal coverage at specified travel times (30, 45 and 60 min) for both public and private modes of transportation. The median public transportation travel time to the nearest radiotherapy facility was 56 min vs. approximately 8 min by private vehicle. We found that majority black census tracts had longer public transportation travel times than white tracts across all categories of vehicle access and that 39% of women in the study area had longer than 1 h of public transportation travel time to the nearest facility. In addition, service area analyses identified locations where the travel time barriers are the greatest. Spatial inaccessibility, especially for women who must use public transportation, is one of the barriers they face in receiving optimal treatment. PMID:23726213
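The kind of comparison reported can be illustrated on a toy graph: shortest-path travel time to the nearest facility over a transit network whose edge weights fold in walk and wait times, versus a direct driving edge. The graph, node names, and minute values below are synthetic stand-ins chosen to echo the reported medians, not the study's GIS network model.

```python
# Toy illustration (synthetic graph, not the study's GIS model): travel
# times to the nearest facility over two networks, one with transit wait
# and transfer times folded into the edge weights.
import networkx as nx

def nearest_facility_minutes(G, origin, facilities):
    lengths = nx.single_source_dijkstra_path_length(G, origin, weight="minutes")
    return min(lengths[f] for f in facilities if f in lengths)

public = nx.Graph()
public.add_weighted_edges_from(
    [("tract", "stop_a", 12), ("stop_a", "stop_b", 30),    # walk+wait, bus leg
     ("stop_b", "clinic", 14)], weight="minutes")          # transfer+walk
private = nx.Graph()
private.add_weighted_edges_from([("tract", "clinic", 8)], weight="minutes")

print(nearest_facility_minutes(public, "tract", ["clinic"]),   # 56 min
      nearest_facility_minutes(private, "tract", ["clinic"]))  # 8 min
```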
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Wenjun, E-mail: sun_wenjun@iapcm.ac.cn; Jiang, Song, E-mail: jiang@iapcm.ac.cn; Xu, Kun, E-mail: makxu@ust.hk
This paper presents an extension of previous work (Sun et al., 2015 [22]) on the unified gas kinetic scheme (UGKS) for the gray radiative transfer equations to the frequency-dependent (multi-group) radiative transfer system. Different from the gray radiative transfer equations, where the optical opacity is only a function of local material temperature, the simulation of frequency-dependent radiative transfer is associated with additional difficulties from the frequency-dependent opacity. For multiple-frequency radiation, the opacity depends on both the spatial location and the frequency. For example, the opacity is typically a decreasing function of frequency. In the same spatial region the transport physics can be optically thick for the low-frequency photons and optically thin for high-frequency ones. Therefore, the optical thickness is not a simple function of space location. In this paper, the UGKS for the frequency-dependent radiative system is developed. The UGKS is a finite volume method, and the transport physics is modeled according to the ratio of the cell size to the photon's frequency-dependent mean free path. When the cell size is much larger than the photon's mean free path, a diffusion solution for that frequency will be obtained. On the other hand, when the cell size is much smaller than the photon's mean free path, a free transport mechanism will be recovered. In the regime between these two limits, with the variation of the ratio between the local cell size and the photon's mean free path, the UGKS provides a smooth transition in physical and frequency space to capture the corresponding transport physics accurately. The seemingly straightforward extension of the UGKS from the gray to the multiple-frequency radiation system is due to its intrinsically consistent multiple-scale transport modeling, but it still involves substantial work to properly discretize the multiple groups in order to design an asymptotic preserving (AP) scheme in all regimes. The current scheme is tested on a few frequency-dependent radiation problems, and the results are compared with solutions from the well-defined implicit Monte Carlo (IMC) method. The UGKS is much more efficient than IMC, and the computational times of both schemes for all test cases are listed. The UGKS seems to be the first discrete ordinate method (DOM) for the accurate capturing of multiple-frequency radiative transport physics from ballistic particle motion to diffusive wave propagation.
Veselov, E I
2011-01-01
The article specifies a systemic approach to the ecological safety of radiation-hazardous facilities. The authors present the stages of work and an algorithm for decisions on preserving the reliability of storage for radiation-hazardous waste. The findings are that ecological safety can be provided through three approaches: complete removal of the radiation-hazardous waste, removal of the more dangerous waste from the existing buildings, and increasing the reliability of prolonged localization of the radiation-hazardous waste at its initial place. The systemic approach presented could be realized at various radiation-hazardous facilities.
Radiation Diffusion:. AN Overview of Physical and Numerical Concepts
NASA Astrophysics Data System (ADS)
Graziani, Frank
2005-12-01
An overview of the physical and mathematical foundations of radiation transport is given. Emphasis is placed on how the diffusion approximation and its transport corrections arise. An overview of the numerical handling of radiation diffusion coupled to matter is also given. Discussions center on partial-temperature and grey methods, with comments concerning fully implicit methods. In addition, finite difference, finite element and Pert representations of the div-grad operator are also discussed.
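To fix ideas on the numerical side, here is a minimal backward-Euler step for 1D radiation diffusion with a face-centered div-grad discretization, solved as a tridiagonal system; the zero-flux boundary treatment and all names are our own illustrative choices, not the overview's specific methods.

```python
# Minimal sketch of finite-difference div-grad handling: one backward-Euler
# step of 1D diffusion, u_t = d/dx( D du/dx ), with face-centered
# diffusivities, solved as a tridiagonal system.
import numpy as np
from scipy.linalg import solve_banded

def diffuse_step(u, D_face, dx, dt):
    """u: cell values (n,); D_face: diffusivity at the n+1 faces; zero-flux ends."""
    n = len(u)
    lower = -dt / dx**2 * D_face[1:n]          # sub-diagonal coefficients
    upper = -dt / dx**2 * D_face[1:n]          # super-diagonal coefficients
    diag = 1.0 + dt / dx**2 * (D_face[:n] + D_face[1:])
    ab = np.zeros((3, n))                      # banded storage for solve_banded
    ab[0, 1:], ab[1, :], ab[2, :-1] = upper, diag, lower
    return solve_banded((1, 1), ab, u)

u = np.zeros(50); u[25] = 1.0                  # a hot cell in a cold domain
D = np.ones(51); D[0] = D[-1] = 0.0            # zero-flux boundary faces
print(diffuse_step(u, D, dx=1.0, dt=0.5).round(3)[23:28])
```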
NASA Astrophysics Data System (ADS)
Bennett, Neil; Coppell, David; Rogers, David; Schrader, John
2004-09-01
Changes in the regulatory framework governing the Radiation Processing Industry have the potential to make a real business impact on day-to-day profitability. Many areas of the Radiation Processing Industry are affected by changes in the regulatory framework within which these areas are managed. When planning for such changes the transportation element in the shipment of sealed cobalt radiation sources is an area that is often neglected by some parts of the distribution chain. A balance must be struck between the cobalt supplier and the facility operator/customer that rests upon how much the customer needs to know about the intricacies of cobalt shipment. The objective of this paper is to highlight areas of possible business impact and reassure the users of sealed radiation sources that the global suppliers of these products are used to negotiating local variations in regulations governing the physical transportation of radiation sources, changes in regulations governing the design, manufacture and use of transportation containers and changes in the availability of commercial shippers and shipping routes. The major suppliers of industrial quantities of cobalt-60 are well placed to lead their customers through this complex process as a matter of routine.
Calculating the Responses of Self-Powered Radiation Detectors.
NASA Astrophysics Data System (ADS)
Thornton, D. A.
Available from UMI in association with The British Library. The aim of this research is to review and develop the theoretical understanding of the responses of Self -Powered Radiation Detectors (SPDs) in Pressurized Water Reactors (PWRs). Two very different models are considered. A simple analytic model of the responses of SPDs to neutrons and gamma radiation is presented. It is a development of the work of several previous authors and has been incorporated into a computer program (called GENSPD), the predictions of which have been compared with experimental and theoretical results reported in the literature. Generally, the comparisons show reasonable consistency; where there is poor agreement explanations have been sought and presented. Two major limitations of analytic models have been identified; neglect of current generation in insulators and over-simplified electron transport treatments. Both of these are developed in the current work. A second model based on the Explicit Representation of Radiation Sources and Transport (ERRST) is presented and evaluated for several SPDs in a PWR at beginning of life. The model incorporates simulation of the production and subsequent transport of neutrons, gamma rays and electrons, both internal and external to the detector. Neutron fluxes and fuel power ratings have been evaluated with core physics calculations. Neutron interaction rates in assembly and detector materials have been evaluated in lattice calculations employing deterministic transport and diffusion methods. The transport of the reactor gamma radiation has been calculated with Monte Carlo, adjusted diffusion and point-kernel methods. The electron flux associated with the reactor gamma field as well as the internal charge deposition effects of the transport of photons and electrons have been calculated with coupled Monte Carlo calculations of photon and electron transport. The predicted response of a SPD is evaluated as the sum of contributions from individual response mechanisms.
An asymptotic preserving unified gas kinetic scheme for gray radiative transfer equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Wenjun, E-mail: sun_wenjun@iapcm.ac.cn; Jiang, Song, E-mail: jiang@iapcm.ac.cn; Xu, Kun, E-mail: makxu@ust.hk
The solutions of radiative transport equations can cover both optically thin and optically thick regimes due to the large variation of the photon's mean free path and its interaction with the material. In the small mean free path limit, the nonlinear time-dependent radiative transfer equations can converge to an equilibrium diffusion equation due to the intensive interaction between radiation and material. In the optically thin limit, the photon free transport mechanism will emerge. In this paper, we develop an accurate and robust asymptotic preserving unified gas kinetic scheme (AP-UGKS) for the gray radiative transfer equations, where the radiation transport equation is coupled with the material thermal energy equation. The current work is based on the UGKS framework for rarefied gas dynamics [14], and is an extension of a recent work [12] from a one-dimensional linear radiation transport equation to a nonlinear two-dimensional gray radiative system. The newly developed scheme has the asymptotic preserving (AP) property in the optically thick regime, capturing the diffusive solution without requiring a cell size smaller than the photon's mean free path or a time step less than the photon collision time. Besides the diffusion limit, the scheme can capture the exact solution in the optically thin regime as well. The current scheme is a finite volume method. Due to the direct modeling of the time evolution of the interface radiative intensity, a smooth transition of the transport physics from optically thin to optically thick can be accurately recovered. Many numerical examples are included to validate the current approach.
Report of the 1988 2-D Intercomparison Workshop, chapter 3
NASA Technical Reports Server (NTRS)
Jackman, Charles H.; Brasseur, Guy; Soloman, Susan; Guthrie, Paul D.; Garcia, Rolando; Yung, Yuk L.; Gray, Lesley J.; Tung, K. K.; Ko, Malcolm K. W.; Isaken, Ivar
1989-01-01
Several factors contribute to the errors encountered. With the exception of the line-by-line model, all of the models employ simplifying assumptions that place fundamental limits on their accuracy and range of validity. For example, all 2-D modeling groups use the diffusivity factor approximation. This approximation produces little error in tropospheric H2O and CO2 cooling rates, but can produce significant errors in CO2 and O3 cooling rates at the stratopause. All models suffer from fundamental uncertainties in the shapes and strengths of spectral lines. Thermal flux algorithms being used in 2-D tracer transport models produce cooling rates that differ by as much as 40 percent for the same input model atmosphere. Disagreements of this magnitude are important since the thermal cooling rates must be subtracted from the almost-equal solar heating rates to derive the net radiative heating rates and the 2-D model diabatic circulation. For much of the annual cycle, the net radiative heating rates are comparable in magnitude to the cooling rate differences described. Many of the models underestimate the cooling rates in the middle and lower stratosphere. The consequences of these errors for the net heating rates and the diabatic circulation will depend on their meridional structure, which was not tested here. Other models underestimate the cooling near 1 mbar. Such errors pose potential problems for future interactive ozone assessment studies, since they could produce artificially high temperatures and increased O3 destruction at these levels. These concerns suggest that a great deal of work is needed to improve the performance of thermal cooling rate algorithms used in the 2-D tracer transport models.
Modeling the blockage of Lg waves from 3-D variations in crustal structure
NASA Astrophysics Data System (ADS)
Sanborn, Christopher J.; Cormier, Vernon F.
2018-05-01
Comprised of S waves trapped in Earth's crust, the high-frequency (2-10 Hz) Lg wave is important for discriminating earthquakes from explosions by comparing its amplitude and waveform to those of Pg and Pn waves. Lateral variations in crustal structure, including variations in crustal thickness, intrinsic attenuation, and scattering, affect the efficiency of Lg propagation and its consistency as a source discriminant at regional (200-1500 km) distances. To investigate the effects of laterally varying Earth structure on the efficiency of propagation of Lg and Pg, we apply a radiative transport algorithm to model complete, high-frequency (2-4 Hz), regional coda envelopes. The algorithm propagates packets of energy with ray theory through large-scale 3-D structure, and includes stochastic effects of multiple scattering by small-scale heterogeneities within the large-scale structure. Source-radiation patterns are described by moment tensors. Seismograms of explosion and earthquake sources are synthesized in canonical models to predict effects on waveforms of paths crossing regions of crustal thinning (pull-apart basins and ocean/continent transitions) and thickening (collisional mountain belts). For paths crossing crustal thinning regions, Lg is amplified at receivers within the thinned region but strongly disrupted and attenuated at receivers beyond the thinned region. For paths crossing regions of crustal thickening, Lg amplitude is attenuated at receivers within the thickened region, but experiences little or no reduction in amplitude at receivers beyond the thickened region. The length of the Lg propagation path within a thickened region and the complexity of over- and under-thrust crustal layers can produce localized zones of Lg amplification or attenuation. Regions of intense scattering within laterally homogeneous models of the crust increase Lg attenuation but do not disrupt its coda shape.
Comparison of different methods to retrieve optical-equivalent snow grain size in central Antarctica
NASA Astrophysics Data System (ADS)
Carlsen, Tim; Birnbaum, Gerit; Ehrlich, André; Freitag, Johannes; Heygster, Georg; Istomina, Larysa; Kipfstuhl, Sepp; Orsi, Anaïs; Schäfer, Michael; Wendisch, Manfred
2017-11-01
The optical-equivalent snow grain size affects the reflectivity of snow surfaces and, thus, the local surface energy budget in particular in polar regions. Therefore, the specific surface area (SSA), from which the optical snow grain size is derived, was observed for a 2-month period in central Antarctica (Kohnen research station) during austral summer 2013/14. The data were retrieved on the basis of ground-based spectral surface albedo measurements collected by the COmpact RAdiation measurement System (CORAS) and airborne observations with the Spectral Modular Airborne Radiation measurement sysTem (SMART). The snow grain size and pollution amount (SGSP) algorithm, originally developed to analyze spaceborne reflectance measurements by the MODerate Resolution Imaging Spectroradiometer (MODIS), was modified in order to reduce the impact of the solar zenith angle on the retrieval results and to cover measurements in overcast conditions. Spectral ratios of surface albedo at 1280 and 1100 nm wavelength were used to reduce the retrieval uncertainty. The retrieval was applied to the ground-based and airborne observations and validated against optical in situ observations of SSA utilizing an IceCube device. The SSA retrieved from CORAS observations varied between 27 and 89 m2 kg-1. Snowfall events caused distinct relative maxima of the SSA which were followed by a gradual decrease in SSA due to snow metamorphism and wind-induced transport of freshly fallen ice crystals. The ability of the modified algorithm to include measurements in overcast conditions improved the data coverage, in particular at times when precipitation events occurred and the SSA changed quickly. SSA retrieved from measurements with CORAS and MODIS agree with the in situ observations within the ranges given by the measurement uncertainties. However, SSA retrieved from the airborne SMART data slightly underestimated the ground-based results.
NASA Astrophysics Data System (ADS)
Penenko, Alexey; Penenko, Vladimir; Nuterman, Roman; Baklanov, Alexander; Mahura, Alexander
2015-11-01
Atmospheric chemistry dynamics is studied with a convection-diffusion-reaction model. The numerical data assimilation algorithm presented is based on additive-averaged splitting schemes. It carries out "fine-grained" variational data assimilation on the separate splitting stages with respect to spatial dimensions and processes, i.e., the same measurement data are assimilated into different parts of the split model. This design has an efficient implementation due to direct data assimilation algorithms for the transport process along coordinate lines. Results of numerical experiments with the chemical data assimilation algorithm for in situ concentration measurements in a real-data scenario are presented. In order to construct the scenario, meteorological data were taken from EnviroHIRLAM model output, initial conditions from MOZART model output, and measurements from the Airbase database.
Machine learning based cloud mask algorithm driven by radiative transfer modeling
NASA Astrophysics Data System (ADS)
Chen, N.; Li, W.; Tanikawa, T.; Hori, M.; Shimada, R.; Stamnes, K. H.
2017-12-01
Cloud detection is a critically important first step required to derive many satellite data products. Traditional threshold-based cloud mask algorithms require a complicated design process and fine tuning for each sensor, and have difficulty over snow/ice covered areas. With the advance of computational power and machine learning techniques, we have developed a new algorithm based on a neural network classifier driven by extensive radiative transfer modeling. Statistical validation results obtained by using collocated CALIOP and MODIS data show that its performance is consistent over different ecosystems and significantly better than the MODIS Cloud Mask (MOD35 C6) during the winter seasons over mid-latitude snow-covered areas. Simulations using a reduced number of satellite channels also show satisfactory results, indicating its flexibility to be configured for different sensors.
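A stripped-down version of such a classifier-based cloud mask might look like the sketch below, where a small multilayer perceptron is trained on synthetic two-channel "reflectances"; a real implementation would replace the crude synthetic generator with radiative transfer simulations and many more channels.

```python
# Hedged sketch of the approach described: a small neural-network
# classifier trained on simulated reflectances. The "simulations" here
# are a crude synthetic stand-in, not a real radiative transfer model.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
n = 20_000
# Fake two-channel reflectances: clouds bright in both channels; snow
# bright in the visible but dark in the shortwave-IR channel.
cloudy = rng.normal([0.7, 0.6], 0.1, size=(n // 2, 2))
snow = rng.normal([0.8, 0.1], 0.1, size=(n // 2, 2))
X = np.vstack([cloudy, snow])
y = np.r_[np.ones(n // 2), np.zeros(n // 2)]   # 1 = cloud, 0 = clear snow

clf = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=500, random_state=0)
clf.fit(X, y)
print(clf.predict([[0.75, 0.55], [0.85, 0.12]]))  # expect [1, 0]
```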
A decade of infrared versus visible AOD analysis within the dust belt
NASA Astrophysics Data System (ADS)
Capelle, Virginie; Chédin, Alain; Pondrom, Marc; Crevoisier, Cyril; Armante, Raymond; Crépeau, Laurent; Scott, Noëlle
2017-04-01
Aerosols represent one of the dominant uncertainties in radiative forcing, partly because of their very high spatiotemporal variability and a still insufficient knowledge of their microphysical and optical properties and vertical distribution. A better understanding and forecasting of their impact on climate therefore requires precise observations of dust emission and transport. Observations from space offer a good opportunity to follow, day by day and at high spatial resolution, dust evolution at global scale and over long time series. In this context, infrared observations, which allow the simultaneous retrieval of dust optical depth (AOD) and mean dust layer altitude, daytime and nighttime, over oceans and over continents (in particular over desert), appear highly complementary to observations in the visible. In this study, a decade of infrared observations (Metop-A/IASI and AIRS/AQUA) has been processed pixel by pixel, using a "Look-Up-Table" (LUT) physical approach. The retrieved infrared 10 µm coarse-mode AOD is compared with the Spectral Deconvolution Algorithm (SDA) 500 nm coarse-mode AOD observed at 50 ground-based Aerosol RObotic NETwork (AERONET) sites located within the dust belt. Analyzing their ratio reveals an important geographical variability. The lowest values are found close to dust sources (about 0.45 for the Sahel or the Arabian Peninsula, 0.6-0.7 for the northern part of Africa or India), whereas the ratio increases for transported dust, with values of 0.9-1 for the Caribbean and for the Mediterranean basin. This variability is interpreted as a marker of clay abundance, and might be linked to the dust particle illite-to-kaolinite ratio, a recognized tracer of dust sources and transport. More generally, it suggests that the difference between the radiative impact of dust aerosols in the visible and in the infrared depends on the type of particles observed. This highlights the importance of taking into account the specificity of the infrared when considering the role of mineral dust in the Earth's energy budget.
Generalized field-splitting algorithms for optimal IMRT delivery efficiency.
Kamath, Srijit; Sahni, Sartaj; Li, Jonathan; Ranka, Sanjay; Palta, Jatinder
2007-09-21
Intensity-modulated radiation therapy (IMRT) uses radiation beams of varying intensities to deliver varying doses of radiation to different areas of the tissue. The use of IMRT has allowed the delivery of higher doses of radiation to the tumor and lower doses to the surrounding healthy tissue. It is not uncommon for head and neck tumors, for example, to have large treatment widths that are not deliverable using a single field. In such cases, the intensity matrix generated by the optimizer needs to be split into two or three matrices, each of which may be delivered using a single field. Existing field-splitting algorithms use a pre-specified, arbitrary split line or region where the intensity matrix is split along a column, i.e., all rows of the matrix are split along the same column (with or without overlapping of the split fields, i.e., feathering). If three fields result, then the two splits are along the same two columns for all rows. In this paper we study the problem of splitting a large field into two or three subfields with the field width as the only constraint, allowing for an arbitrary overlap of the split fields, so that the total MU efficiency of delivering the split fields is maximized. Proof of optimality is provided for the proposed algorithm. An average decrease of 18.8% is found in the total MUs when compared to the split generated by a commercial treatment planning system, and of 10% when compared to the split generated by our previously published algorithm.
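A much-simplified cousin of this problem can be sketched by scoring single-column splits with the standard sliding-window MU measure (maximum over rows of the sum of positive left-to-right increments); the paper's algorithm is more general, allowing row-wise overlapping splits with proven optimality, which this toy does not attempt.

```python
# Simplified sketch, not the paper's optimal algorithm: split an intensity
# matrix at a single column and score each candidate split by the standard
# sliding-window MU measure.
import numpy as np

def mu_cost(field):
    """MU for one subfield: max over rows of the sum of positive increments."""
    padded = np.pad(field, ((0, 0), (1, 0)))           # leading zero column
    inc = np.diff(padded, axis=1)
    return np.max(np.sum(np.maximum(inc, 0), axis=1))

def best_single_line_split(M, max_width):
    w = M.shape[1]
    candidates = range(w - max_width, max_width + 1)   # both halves must fit
    return min(candidates, key=lambda c: mu_cost(M[:, :c]) + mu_cost(M[:, c:]))

M = np.random.default_rng(4).integers(0, 10, size=(8, 24))
c = best_single_line_split(M, max_width=14)
print(c, mu_cost(M[:, :c]) + mu_cost(M[:, c:]), mu_cost(M))
```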
Ardley, Nicholas D; Lau, Ken K; Buchan, Kevin
2013-12-01
Cervical spine injuries occur in 4-8 % of adults with head trauma. A dual acquisition technique has traditionally been used for CT scanning of the brain and cervical spine. The purpose of this study was to determine the efficacy of radiation dose reduction by using a single acquisition technique that incorporated both anatomical regions with a dedicated neck detection algorithm. Thirty trauma patients referred for brain and cervical spine CT were included and were scanned with the single acquisition technique. The radiation doses from the single CT acquisition technique with the neck detection algorithm, which allowed appropriate independent dose administration relevant to the brain and cervical spine regions, were recorded. Comparison was made both to the doses calculated from a simulation of the traditional dual acquisitions with matching parameters, and to the doses of the retrospective dual-acquisition legacy technique with the same sample size. The mean simulated dose for the traditional dual acquisition technique was 3.99 mSv, comparable to the average dose of 4.2 mSv from 30 previous patients who had CT of the brain and cervical spine as dual acquisitions. The mean dose from the single acquisition technique was 3.35 mSv, resulting in a 16 % overall dose reduction. The images from the single acquisition technique were of excellent diagnostic quality. The new single acquisition CT technique incorporating the neck detection algorithm for brain and cervical spine significantly reduces the overall radiation dose by eliminating the unavoidable overlapping range between the two anatomical regions that occurs with the traditional dual acquisition technique.
Faster and More Accurate Transport Procedures for HZETRN
NASA Technical Reports Server (NTRS)
Slaba, Tony C.; Blattnig, Steve R.; Badavi, Francis F.
2010-01-01
Several aspects of code verification are examined for HZETRN. First, a detailed derivation of the numerical marching algorithms is given. Next, a new numerical method for light particle transport is presented, and improvements to the heavy ion transport algorithm are discussed. A summary of various coding errors is also given, and the impact of these errors on exposure quantities is shown. Finally, a coupled convergence study is conducted. From this study, it is shown that past efforts in quantifying the numerical error in HZETRN were hindered by single precision calculations and computational resources. It is also determined that almost all of the discretization error in HZETRN is caused by charged target fragments below 50 AMeV. Total discretization errors are given for the old and new algorithms, and the improved accuracy of the new numerical methods is demonstrated. Run time comparisons are given for three applications in which HZETRN is commonly used. The new algorithms are found to be almost 100 times faster for solar particle event simulations and almost 10 times faster for galactic cosmic ray simulations.
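Purely for intuition about what "marching" means here, the toy below marches a primary flux spectrum through depth with exponential attenuation per step; HZETRN's actual algorithms also march in energy and couple fragment production, both of which this sketch omits entirely, and all names and values are ours.

```python
# Toy depth-marching sketch in the spirit of a transport marching
# algorithm (one species, attenuation only).
import numpy as np

def march_primary_flux(phi0, sigma, dx, nsteps):
    """phi0: boundary flux per energy bin; sigma: macroscopic cross section per bin."""
    phi = np.array(phi0, float)
    history = [phi.copy()]
    for _ in range(nsteps):
        phi *= np.exp(-sigma * dx)     # exact attenuation over one depth step
        history.append(phi.copy())
    return np.array(history)           # flux vs. depth and energy

phi0 = np.array([1.0, 0.8, 0.5])       # three energy bins, arbitrary units
sigma = np.array([0.10, 0.07, 0.05])   # 1/cm, illustrative values
print(march_primary_flux(phi0, sigma, dx=1.0, nsteps=5)[-1])
```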
NASA Astrophysics Data System (ADS)
Kuang, Simeng Max
This thesis contains two topics in data analysis. The first topic consists of the introduction of algorithms for sample-based optimal transport and barycenter problems. In chapter 1, a family of algorithms is introduced to solve both the L2 optimal transport problem and the Wasserstein barycenter problem. Starting from a theoretical perspective, the new algorithms are motivated by a key characterization of the barycenter measure, which suggests an update that reduces the total transportation cost and stops only when the barycenter is reached. A series of general theorems is given to prove the convergence of all the algorithms. We then extend the algorithms to solve sample-based optimal transport and barycenter problems, in which only finite sample sets are available instead of underlying probability distributions. A unique feature of the new approach is that it compares sample sets in terms of the expected values of a set of feature functions, which at the same time induce the function space of optimal maps and can be chosen by users to incorporate their prior knowledge of the data. All the algorithms are implemented and applied to various synthetic examples and practical applications. On synthetic examples it is found that both the SOT algorithm and the SCB algorithm are able to find the true solution and often converge in a handful of iterations. On more challenging applications, including Gaussian mixture models, color transfer and shape transform problems, the algorithms give very good results throughout despite the very different nature of the corresponding datasets. In chapter 2, a preconditioning procedure is developed for the L2 and more general optimal transport problems. The procedure is based on a family of affine map pairs, which transforms the original measures into two new measures that are closer to each other, while preserving the optimality of solutions. It is proved that the preconditioning procedure minimizes the remaining transportation cost among all admissible affine maps. The procedure can be used on both continuous measures and finite sample sets from distributions. In numerical examples, the procedure is applied to multivariate normal distributions, to a two-dimensional shape transform problem and to color transfer problems. For the second topic, we present an extension to anisotropic flows of the recently developed Helmholtz and wave-vortex decomposition method for one-dimensional spectra measured along ship or aircraft tracks in Buhler et al. (J. Fluid Mech., vol. 756, 2014, pp. 1007-1026). While in the original method the flow was assumed to be homogeneous and isotropic in the horizontal plane, we allow the flow to have a simple kind of horizontal anisotropy that is chosen in a self-consistent manner and can be deduced from the one-dimensional power spectra of the horizontal velocity fields and their cross-correlation. The key result is that an exact and robust Helmholtz decomposition of the horizontal kinetic energy spectrum can be achieved in this anisotropic flow setting, which then also allows the subsequent wave-vortex decomposition step. The new method is developed theoretically and tested with encouraging results on challenging synthetic data as well as on ocean data from the Gulf Stream.
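For readers unfamiliar with the sample-based setting, the baseline problem the chapter starts from can be illustrated by solving discrete L2 optimal transport between two equal-size samples exactly with the Hungarian algorithm; this is a reference sketch with our own names, not the thesis's algorithms.

```python
# Sketch of the sample-based L2 optimal transport problem, solved here for
# equal-size samples by exact assignment (the Hungarian algorithm).
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def ot_map(x, y):
    """Optimal pairing between samples x[i] and y[j] under squared distance."""
    cost = cdist(x, y, metric="sqeuclidean")
    rows, cols = linear_sum_assignment(cost)
    return cols, cost[rows, cols].mean()   # map index and mean transport cost

rng = np.random.default_rng(5)
x = rng.normal(0.0, 1.0, size=(200, 2))    # source sample
y = rng.normal(3.0, 1.0, size=(200, 2))    # target sample, shifted mean
perm, cost = ot_map(x, y)
print(cost)  # roughly the squared distance between means (~18), plus noise
```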
NASA Technical Reports Server (NTRS)
Smith, Eric A.; Fiorino, Steven
2002-01-01
Coordinated ground, aircraft, and satellite observations are analyzed from the 1999 TRMM Kwajalein Atoll field experiment (KWAJEX) to better understand the relationships between cloud microphysical processes and microwave radiation intensities in the context of physical evaluation of the Level 2 TRMM radiometer rain profile algorithm and uncertainties with its assumed microphysics-radiation relationships. This talk focuses on the results of a multi-dataset analysis based on measurements from KWAJEX surface, air, and satellite platforms to test the hypothesis that uncertainties in the passive microwave radiometer algorithm (TMI 2a12 in the nomenclature of TRMM) are systematically coupled and correlated with the magnitudes of deviation of the assumed 3-dimensional microphysical properties from observed microphysical properties. Restated, this study focuses on identifying the weaknesses in the operational TRMM 2a12 radiometer algorithm, based on observed microphysics and radiation data, in terms of over-simplifications used in its theoretical microphysical underpinnings. The analysis makes use of a common transform coordinate system derived from the measuring capabilities of the aircraft radiometer used to survey the experimental study area, i.e., the 4-channel AMPR radiometer flown on the NASA DC-8 aircraft. Normalized emission and scattering indices derived from radiometer brightness temperatures at the four measuring frequencies enable a 2-dimensional coordinate system that facilitates compositing of Kwajalein S-band ground radar reflectivities, ARMAR Ku-band aircraft radar reflectivities, TMI spacecraft radiometer brightness temperatures, PR Ku-band spacecraft radar reflectivities, bulk microphysical parameters derived from the aircraft-mounted cloud microphysics laser probes (including liquid/ice water contents, effective liquid/ice hydrometeor radii, and effective liquid/ice hydrometeor variances), and rainrates derived from any of the individual ground, aircraft, or satellite algorithms applied to the radar or radiometer measurements, or their combination. The results support the study's underlying hypothesis, particularly in the context of ice-phase processes, in that the cloud regions where the 2a12 algorithm's microphysical database most misrepresents the microphysical conditions as determined by the laser probes are where retrieved surface rainrates are most erroneous relative to other reference rainrates as determined by ground and aircraft radar. In reaching these conclusions, TMI and PR brightness temperatures and reflectivities have been synthesized from the aircraft AMPR and ARMAR measurements, with the analysis conducted in a composite framework to eliminate measurement noise associated with the case study approach and single element volumes obfuscated by heterogeneous beam filling effects. In diagnosing the performance of the 2a12 algorithm, weaknesses have been found in the cloud-radiation database used to provide microphysical guidance to the algorithm for upper cloud ice microphysics. It is also necessary to adjust a fractional convective rainfall factor within the algorithm somewhat arbitrarily to achieve satisfactory algorithm accuracy.
NASA Technical Reports Server (NTRS)
Petty, Grant W.; Stettner, David R.
1994-01-01
This paper discusses certain aspects of a new inversion-based algorithm for the retrieval of rain rate over the open ocean from Special Sensor Microwave/Imager (SSM/I) multichannel imagery. This algorithm takes a more detailed physical approach to the retrieval problem than previously discussed algorithms: it performs explicit forward radiative transfer calculations based on detailed model hydrometeor profiles and attempts to match the observations to the predicted brightness temperatures.
Aquarius Salinity Retrieval Algorithm: Final Pre-Launch Version
NASA Technical Reports Server (NTRS)
Wentz, Frank J.; Le Vine, David M.
2011-01-01
This document provides the theoretical basis for the Aquarius salinity retrieval algorithm. The inputs to the algorithm are the Aquarius antenna temperature (T(sub A)) measurements along with a number of NCEP operational products and pre-computed tables of space radiation coming from the galaxy and sun. The output is sea-surface salinity and many intermediate variables required for the salinity calculation. This revision of the Algorithm Theoretical Basis Document (ATBD) is intended to be the final pre-launch version.