FLYCHK Collisional-Radiative Code
National Institute of Standards and Technology Data Gateway
SRD 160 FLYCHK Collisional-Radiative Code (Web, free access). FLYCHK generates atomic level populations and charge-state distributions for low-Z to mid-Z elements under NLTE conditions.
BART: Bayesian Atmospheric Radiative Transfer fitting code
NASA Astrophysics Data System (ADS)
Cubillos, Patricio; Blecic, Jasmina; Harrington, Joseph; Rojo, Patricio; Lust, Nate; Bowman, Oliver; Stemm, Madison; Foster, Andrew; Loredo, Thomas J.; Fortney, Jonathan; Madhusudhan, Nikku
2016-08-01
BART implements a Bayesian, Monte Carlo-driven, radiative-transfer scheme for extracting parameters from spectra of planetary atmospheres. BART combines a thermochemical-equilibrium code, a one-dimensional line-by-line radiative-transfer code, and the Multi-core Markov-chain Monte Carlo statistical module to constrain the atmospheric temperature and chemical-abundance profiles of exoplanets.
MACRAD: A mass analysis code for radiators
Gallup, D.R.
1988-01-01
A computer code to estimate and optimize the mass of heat pipe radiators (MACRAD) is currently under development. A parametric approach is used in MACRAD, which allows the user to optimize radiator mass based on heat pipe length, length to diameter ratio, vapor to wick radius, radiator redundancy, etc. Full consideration of the heat pipe operating parameters, material properties, and shielding requirements is included in the code. Preliminary results obtained with MACRAD are discussed.
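The parametric approach described above can be sketched in a few lines. Everything in the sketch below — the mass model, the property values, and the inventory/header penalty terms — is an illustrative assumption for demonstration, not MACRAD's actual formulation.

```python
import math

# Illustrative parametric sweep in the spirit of MACRAD: for a fixed heat
# rejection load, vary the heat pipe length-to-diameter ratio and report
# the lightest configuration.  The mass model and every property value
# below are assumptions, not taken from MACRAD.

SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W/(m^2 K^4)
Q_REJECT = 50e3       # heat to reject, W (assumed)
T_SURF = 600.0        # radiating surface temperature, K (assumed)
EMISSIVITY = 0.85     # surface emissivity (assumed)
WALL_THICK = 1.5e-3   # shell wall thickness, m (assumed)
RHO_WALL = 2700.0     # shell density, kg/m^3 (assumed, aluminum-like)

def radiator_mass(l_over_d):
    """Total mass (kg) of a cylindrical radiator sized to reject Q_REJECT."""
    # Required radiating area from Q = eps * sigma * A * T^4
    area = Q_REJECT / (EMISSIVITY * SIGMA * T_SURF**4)
    diameter = math.sqrt(area / (math.pi * l_over_d))
    length = l_over_d * diameter
    shell = area * WALL_THICK * RHO_WALL     # constant once the area is fixed
    inventory = 0.8 * length                 # wick/fluid grows with length (assumed)
    headers = 50.0 * diameter                # end hardware grows with diameter (assumed)
    return shell + inventory + headers

best_mass, best_ratio = min((radiator_mass(r), r) for r in range(5, 101, 5))
print(f"lightest configuration near L/D = {best_ratio}: {best_mass:.1f} kg")
```

Because the length penalty grows with L/D while the header penalty shrinks with it, the sweep finds an interior optimum rather than running to either end of the range — the qualitative behavior a parametric optimizer exploits.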
TORUS: Radiation transport and hydrodynamics code
NASA Astrophysics Data System (ADS)
Harries, Tim
2014-04-01
TORUS is a flexible radiation transfer and radiation-hydrodynamics code. The code has a basic infrastructure that includes an adaptive mesh refinement (AMR) scheme used by several physics modules, including atomic line transfer in a moving medium, molecular line transfer, photoionization, radiation hydrodynamics, and radiative equilibrium. TORUS is useful for a variety of problems, including magnetospheric accretion onto T Tauri stars, spiral nebulae around Wolf-Rayet stars, discs around Herbig AeBe stars, structured winds of O supergiants and Raman-scattered line formation in symbiotic binaries, and dust emission and molecular line formation in star forming clusters. The code is written in Fortran 2003 and is compiled using a standard GNU makefile. The code is parallelized using both MPI and OpenMP, and can use these parallel sections either separately or in a hybrid mode.
Airborne antenna radiation pattern code user's manual
NASA Technical Reports Server (NTRS)
Burnside, Walter D.; Kim, Jacob J.; Grandchamp, Brett; Rojas, Roberto G.; Law, Philip
1985-01-01
The use of a newly developed computer code to analyze the radiation patterns of antennas mounted on an ellipsoid and in the presence of a set of finite flat plates is described. It is shown how the code allows the user to simulate a wide variety of complex electromagnetic radiation problems using the ellipsoid/plates model. The code can calculate radiation patterns around an arbitrary conical cut specified by the user. The organization of the code, definitions of input and output data, and numerous practical examples are also presented. The analysis is based on the Uniform Geometrical Theory of Diffraction (UTD), and most of the computed patterns are compared with experimental results to show the accuracy of this solution.
2-DUST: Dust radiative transfer code
NASA Astrophysics Data System (ADS)
Ueta, Toshiya; Meixner, Margaret
2016-04-01
2-DUST is a general-purpose dust radiative transfer code for axisymmetric systems. It reveals the global energetics of dust grains in the shell and the 2-D projected morphologies of the shell, which depend strongly on the combined effects of the axisymmetric dust distribution and the inclination angle. It can be used to model a variety of axisymmetric astronomical dust systems.
An integrated radiation physics computer code system.
NASA Technical Reports Server (NTRS)
Steyn, J. J.; Harris, D. W.
1972-01-01
An integrated computer code system for the semi-automatic and rapid analysis of experimental and analytic problems in gamma photon and fast neutron radiation physics is presented. Such problems as the design of optimum radiation shields and radioisotope power source configurations may be studied. The system codes allow for the unfolding of complex neutron and gamma photon experimental spectra. Monte Carlo and analytic techniques are used for the theoretical prediction of radiation transport. The system includes a multichannel pulse-height analyzer scintillation and semiconductor spectrometer coupled to an on-line digital computer with appropriate peripheral equipment. The system is geometry generalized as well as self-contained with respect to material nuclear cross sections and the determination of the spectrometer response functions. Input data may be either analytic or experimental.
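The pairing of Monte Carlo and analytic techniques for radiation transport can be illustrated with a minimal sketch: sampling photon free paths through a homogeneous slab and comparing the escaped fraction against the analytic Beer-Lambert result. The attenuation coefficient and slab thickness below are illustrative values, not drawn from the paper.

```python
import math
import random

# Minimal Monte Carlo vs. analytic comparison: uncollided photon
# transmission through a homogeneous slab.  MU and THICKNESS are
# illustrative assumptions.

MU = 0.5          # total attenuation coefficient, 1/cm (assumed)
THICKNESS = 3.0   # slab thickness, cm (assumed)

def mc_transmission(n_photons, seed=1):
    """Fraction of photons whose first interaction lies beyond the slab."""
    rng = random.Random(seed)
    escaped = 0
    for _ in range(n_photons):
        # Sample a free path from p(x) = mu * exp(-mu * x);
        # 1 - random() avoids log(0) since random() is in [0, 1).
        path = -math.log(1.0 - rng.random()) / MU
        if path > THICKNESS:
            escaped += 1
    return escaped / n_photons

analytic = math.exp(-MU * THICKNESS)   # Beer-Lambert uncollided transmission
estimate = mc_transmission(200_000)
print(f"analytic {analytic:.4f}  Monte Carlo {estimate:.4f}")
```

With 200,000 histories the statistical error is well below one percent, which is why Monte Carlo estimates of simple configurations like this can serve as a cross-check on analytic predictions (and vice versa).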
The Helicopter Antenna Radiation Prediction Code (HARP)
NASA Technical Reports Server (NTRS)
Klevenow, F. T.; Lynch, B. G.; Newman, E. H.; Rojas, R. G.; Scheick, J. T.; Shamansky, H. T.; Sze, K. Y.
1990-01-01
The first nine months' effort in the development of a user-oriented computer code, referred to as the HARP code, for analyzing the radiation from helicopter antennas is described. The HARP code uses modern computer graphics to aid in the description and display of the helicopter geometry. At low frequencies the helicopter is modeled by polygonal plates, and the method of moments is used to compute the desired patterns. At high frequencies the helicopter is modeled by a composite ellipsoid and flat plates, and computations are made using the geometrical theory of diffraction. The HARP code will provide a user-friendly interface, employing modern computer graphics, to aid the user in describing the helicopter geometry, selecting the method of computation, constructing the desired high- or low-frequency model, and displaying the results.
Radiation hydrodynamics integrated in the PLUTO code
NASA Astrophysics Data System (ADS)
Kolb, Stefan M.; Stute, Matthias; Kley, Wilhelm; Mignone, Andrea
2013-11-01
Aims: The transport of energy through radiation is very important in many astrophysical phenomena. In dynamical problems the time-dependent equations of radiation hydrodynamics have to be solved. We present a newly developed radiation-hydrodynamics module specifically designed for the versatile magnetohydrodynamic (MHD) code PLUTO. Methods: The solver is based on the flux-limited diffusion approximation in the two-temperature approach. All equations are solved in the co-moving frame in the frequency-independent (gray) approximation. The hydrodynamics is solved with the Godunov schemes implemented in PLUTO, and for the radiation transport we use a fully implicit scheme. The resulting system of linear equations is solved either with the successive over-relaxation (SOR) method (for testing purposes) or with matrix solvers available in the PETSc library. We describe the methodology in detail and present several test cases to verify the correctness of our implementation. The solver works in standard coordinate systems, such as Cartesian, cylindrical, and spherical, and also on non-equidistant grids. Results: We present a new radiation-hydrodynamics solver coupled to the MHD code PLUTO: a modern, versatile, and efficient module for treating complex radiation-hydrodynamical problems in astrophysics. As test cases, both purely radiative situations and full radiation-hydrodynamical setups (including radiative shocks and convection in accretion disks) were successfully studied. The new module scales very well on parallel computers using MPI. For problems in star or planet formation, we added the possibility of irradiation by a central source.
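As a rough illustration of the kind of SOR linear solve an implicit radiation step requires, the sketch below applies SOR to the tridiagonal system arising from a backward-Euler discretization of a one-dimensional scalar diffusion equation. The grid size, coefficients, and relaxation factor are illustrative; PLUTO's module solves the full coupled flux-limited-diffusion system.

```python
import numpy as np

# Backward-Euler discretization of dE/dt = D * d2E/dx2 gives, per cell i:
#   (1 + 2a) E_i - a E_{i-1} - a E_{i+1} = E_i_old,   with a = D*dt/dx^2,
# which is solved below by successive over-relaxation (SOR).

N = 64
D, dt = 1.0, 1e-4
dx = 1.0 / (N - 1)
a = D * dt / dx**2

x = np.linspace(0.0, 1.0, N)
E_old = np.exp(-((x - 0.5) ** 2) / 0.01)   # initial Gaussian pulse
E = E_old.copy()                            # initial guess; end cells stay fixed

omega = 1.5                                 # relaxation factor, 1 < omega < 2
for sweep in range(500):
    max_delta = 0.0
    for i in range(1, N - 1):
        gs = (E_old[i] + a * (E[i - 1] + E[i + 1])) / (1.0 + 2.0 * a)
        new = (1.0 - omega) * E[i] + omega * gs   # over-relax the Gauss-Seidel update
        max_delta = max(max_delta, abs(new - E[i]))
        E[i] = new
    if max_delta < 1e-12:                   # converged
        break

# Residual of the implicit system at the interior cells
residual = np.max(np.abs((1 + 2 * a) * E[1:-1] - a * (E[:-2] + E[2:]) - E_old[1:-1]))
print(f"{sweep + 1} sweeps, max residual {residual:.2e}")
```

The system is strictly diagonally dominant, so the sweep converges for any relaxation factor between 1 and 2; in production codes this simple iteration is typically replaced by the preconditioned Krylov solvers a library like PETSc provides.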
ASIMUT on-line radiative transfer code
NASA Astrophysics Data System (ADS)
Vandaele, A. C.; Neary, L.; Robert, S.; Letocart, V.; Giuranna, M.; Kasaba, Y.
2015-10-01
The CROSS DRIVE project aims to develop an innovative collaborative workspace infrastructure for space missions that will allow distributed scientific and engineering teams to collectively analyse and interpret scientific data as well as execute operations of planetary spacecraft. ASIMUT will be one of the tools that will be made available to the users. Here we describe this radiative transfer code and how it will be integrated into the virtual environment developed within CROSS DRIVE.
Validation of comprehensive space radiation transport code
Shinn, J.L.; Simonsen, L.C.; Cucinotta, F.A.
1998-12-01
The HZETRN code has been developed over the past decade to evaluate the local radiation fields within sensitive materials on spacecraft in the space environment. Most of the more important nuclear and atomic processes are now modeled, and evaluation within a complex spacecraft geometry with differing material components, including transition effects across boundaries of dissimilar materials, is included. The atomic/nuclear database and transport procedures have received limited validation in laboratory testing with high-energy ion beams. The codes have been applied in the design of the SAGE-III instrument, resulting in material changes to control injurious neutron production; in the study of Space Shuttle single-event upsets; and in validation with space measurements (particle telescopes, tissue-equivalent proportional counters, CR-39) on Shuttle and Mir. The present paper reviews the code development and presents recent results in laboratory and space flight validation.
THE MCNPX MONTE CARLO RADIATION TRANSPORT CODE
WATERS, LAURIE S.; MCKINNEY, GREGG W.; DURKEE, JOE W.; FENSIN, MICHAEL L.; JAMES, MICHAEL R.; JOHNS, RUSSELL C.; PELOWITZ, DENISE B.
2007-01-10
MCNPX (Monte Carlo N-Particle eXtended) is a general-purpose Monte Carlo radiation transport code with three-dimensional geometry and continuous-energy transport of 34 particles and light ions. It contains flexible source and tally options, interactive graphics, and support for both sequential and multi-processing computer platforms. MCNPX is based on MCNP4B, and has been upgraded to most MCNP5 capabilities. MCNP is a highly stable code tracking neutrons, photons and electrons, and using evaluated nuclear data libraries for low-energy interaction probabilities. MCNPX has extended this base to a comprehensive set of particles and light ions, with heavy ion transport in development. Models have been included to calculate interaction probabilities when libraries are not available. Recent additions focus on the time evolution of residual nuclei decay, allowing calculation of transmutation and delayed particle emission. MCNPX is now a code of great dynamic range, and the excellent neutronics capabilities allow new opportunities to simulate devices of interest to experimental particle physics, particularly calorimetry. This paper describes the capabilities of the current MCNPX version 2.6.C, and also discusses ongoing code development.
LPGS. Code System for Calculating Radiation Exposure
White, J.E.; Eckerman, K.F.
1983-01-01
LPGS was developed to calculate the radiological impacts resulting from radioactive releases to the hydrosphere. The name LPGS was derived from the Liquid Pathway Generic Study for which the original code was used primarily as an analytic tool in the assessment process. The hydrosphere is represented by the following types of water bodies: estuary, small river, well, lake, and one-dimensional (1-d) river. LPGS is designed to calculate radiation dose (individual and population) to body organs as a function of time for the various exposure pathways. The radiological consequences to the aquatic biota are estimated. Several simplified radionuclide transport models are employed with built-in formulations to describe the release rate of the radionuclides. A tabulated user-supplied release model can be input, if desired. Printer plots of dose versus time for the various exposure pathways are provided.
Advances in space radiation shielding codes
NASA Technical Reports Server (NTRS)
Wilson, John W.; Tripathi, Ram K.; Qualls, Garry D.; Cucinotta, Francis A.; Prael, Richard E.; Norbury, John W.; Heinbockel, John H.; Tweed, John; De Angelis, Giovanni
2002-01-01
Early space radiation shield code development relied on Monte Carlo methods and made important contributions to the space program. Monte Carlo methods have resorted to restricted one-dimensional problems leading to imperfect representation of appropriate boundary conditions. Even so, intensive computational requirements resulted and shield evaluation was made near the end of the design process. Resolving shielding issues usually had a negative impact on the design. Improved spacecraft shield design requires early entry of radiation constraints into the design process to maximize performance and minimize costs. As a result, we have been investigating high-speed computational procedures to allow shield analysis from the preliminary concept to the final design. For the last few decades, we have pursued deterministic solutions of the Boltzmann equation allowing field mapping within the International Space Station (ISS) in tens of minutes using standard Finite Element Method (FEM) geometry common to engineering design methods. A single ray trace in such geometry requires 14 milliseconds and limits application of Monte Carlo methods to such engineering models. A potential means of improving the Monte Carlo efficiency in coupling to spacecraft geometry is given.
Space Radiation Transport Code Development: 3DHZETRN
NASA Technical Reports Server (NTRS)
Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.
2015-01-01
The space radiation transport code, HZETRN, has been used extensively for research, vehicle design optimization, risk analysis, and related applications. One of the simplifying features of the HZETRN transport formalism is the straight-ahead approximation, wherein all particles are assumed to travel along a common axis. This reduces the governing equation to one spatial dimension, allowing enormous simplification and highly efficient computational procedures to be implemented. Despite the physical simplifications, the HZETRN code is widely used for space applications and has been found to agree well with fully 3D Monte Carlo simulations in many circumstances. Recent work has focused on the development of 3D transport corrections for neutrons and light ions (Z ≤ 2) for which the straight-ahead approximation is known to be less accurate. Within the development of 3D corrections, well-defined convergence criteria have been considered, allowing approximation errors at each stage in model development to be quantified. The present level of development assumes the neutron cross sections have an isotropic component treated within N explicit angular directions and a forward component represented by the straight-ahead approximation. The N = 1 solution refers to the straight-ahead treatment, while N = 2 represents the bi-directional model in current use for engineering design. The figure below shows neutrons, protons, and alphas for various values of N at locations in an aluminum sphere exposed to a solar particle event (SPE) spectrum. The neutron fluence converges quickly in simple geometry with N > 14 directions. The improved code, 3DHZETRN, transports neutrons, light ions, and heavy ions under space-like boundary conditions through general geometry while maintaining a high degree of computational efficiency. A brief overview of the 3D transport formalism for neutrons and light ions is given, and extensive benchmarking results with the Monte Carlo codes Geant4, FLUKA, and
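The N-direction convergence idea can be mimicked with a toy angular quadrature: approximate an isotropic angular average with N discrete directions and watch the estimate settle as N grows. The Fibonacci-sphere direction set and the smooth test kernel below are illustrative stand-ins, not 3DHZETRN's actual angular treatment.

```python
import math

# Approximate the average of a smooth angular kernel over the unit sphere
# with N discrete directions, and observe convergence as N increases.
# Direction set and kernel are illustrative assumptions.

def fibonacci_directions(n):
    """n roughly uniform unit vectors on the sphere (Fibonacci lattice)."""
    golden = (1 + math.sqrt(5)) / 2
    dirs = []
    for k in range(n):
        z = 1 - (2 * k + 1) / n          # nodes uniform in cos(theta)
        phi = 2 * math.pi * k / golden
        r = math.sqrt(max(0.0, 1 - z * z))
        dirs.append((r * math.cos(phi), r * math.sin(phi), z))
    return dirs

def angular_average(n):
    # Toy kernel f(omega) = exp(z); its exact average over the sphere is
    # (1/2) * integral of exp(z) dz on [-1, 1] = sinh(1).
    return sum(math.exp(d[2]) for d in fibonacci_directions(n)) / n

exact = math.sinh(1.0)
for n in (2, 6, 14, 30, 100):
    err = abs(angular_average(n) - exact)
    print(f"N={n:4d}  average={angular_average(n):.6f}  error={err:.2e}")
```

For a smooth kernel like this the error falls off quadratically in N, echoing the observation that a modest number of explicit directions already captures the isotropic component well.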
Los Alamos radiation transport code system on desktop computing platforms
Briesmeister, J.F.; Brinkley, F.W.; Clark, B.A.; West, J.T.
1990-01-01
The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. These codes were originally developed many years ago and have undergone continual improvement. With a large initial effort and continued vigilance, the codes are easily portable from one type of hardware to another. The performance of scientific workstations (SWS) has evolved to the point that such platforms can be used routinely to perform sophisticated radiation transport calculations. As personal computer (PC) performance approaches that of the SWS, the hardware options for desktop radiation transport calculations expand considerably. The current status of the radiation transport codes within the LARTCS is described: MCNP, SABRINA, LAHET, ONEDANT, TWODANT, TWOHEX, and ONELD. Specifically, the authors discuss hardware systems on which the codes run and present code performance comparisons for various machines.
Description of Transport Codes for Space Radiation Shielding
NASA Technical Reports Server (NTRS)
Kim, Myung-Hee Y.; Wilson, John W.; Cucinotta, Francis A.
2011-01-01
This slide presentation describes transport codes and their use for studying and designing space radiation shielding. When combined with risk projection models, radiation transport codes serve as the main tool for studying radiation and designing shielding. There are three criteria for assessing the accuracy of transport codes: (1) ground-based studies with defined beams and material layouts, (2) inter-comparison of transport code results for matched boundary conditions, and (3) comparisons to flight measurements. By these three criteria, NASA's HZETRN/QMSFRG codes show a very high degree of accuracy.
Radiation transport phenomena and modeling - part A: Codes
Lorence, L.J.
1997-06-01
The need to understand how particle radiation (high-energy photons and electrons) from a variety of sources affects materials and electronics has motivated the development of sophisticated computer codes that describe how radiation with energies from 1.0 keV to 100.0 GeV propagates through matter. Predicting radiation transport is the necessary first step in predicting radiation effects. The radiation transport codes that are described here are general-purpose codes capable of analyzing a variety of radiation environments, including those produced by nuclear weapons (x-rays, gamma rays, and neutrons), by sources in space (electrons and ions), and by accelerators (x-rays, gamma rays, and electrons). Applications of these codes include the study of radiation effects on electronics, nuclear medicine (imaging and cancer treatment), and industrial processes (food disinfestation, waste sterilization, manufacturing). The primary focus will be on coupled electron-photon transport codes, with some brief discussion of proton transport. These codes model a radiation cascade in which electrons produce photons and vice versa. This coupling between particles of different types is important for radiation effects. For instance, in an x-ray environment, electrons are produced that drive the response in electronics. In an electron environment, dose due to bremsstrahlung photons can be significant once the source electrons have been stopped.
Radiation flux tables for ICRCCM using the GLA GCM radiation codes
NASA Technical Reports Server (NTRS)
HARSHVARDHAN
1986-01-01
Tabulated values of longwave and shortwave radiation fluxes, as well as cooling and heating rates, in the atmosphere for standard atmospheric profiles are presented. The radiation codes used in the Goddard general circulation model were employed for the computations. These results were obtained for an international intercomparison project called the Intercomparison of Radiation Codes in Climate Models (ICRCCM).
Recent developments in the Los Alamos radiation transport code system
Forster, R.A.; Parsons, K.
1997-06-01
A brief progress report on updates to the Los Alamos Radiation Transport Code System (LARTCS) for solving criticality and fixed-source problems is provided. LARTCS integrates the Diffusion Accelerated Neutral Transport (DANT) discrete ordinates codes with the Monte Carlo N-Particle (MCNP) code. The LARTCS code is being developed with a graphical user interface for problem setup and analysis. Progress in the DANT system for criticality applications includes a two-dimensional module which can be linked to a mesh-generation code and a faster iteration scheme. Updates to MCNP Version 4A allow statistical checks of calculated Monte Carlo results.
Space radiator simulation manual for computer code
NASA Technical Reports Server (NTRS)
Black, W. Z.; Wulff, W.
1972-01-01
A computer program that simulates the performance of a space radiator is presented. The program basically consists of a rigorous analysis, which analyzes a symmetrical fin panel, and an approximate analysis, which predicts system characteristics for cases of non-symmetrical operation. The rigorous analysis accounts for both transient and steady-state performance, including aerodynamic and radiant heating of the radiator system. The approximate analysis considers only steady-state operation with no aerodynamic heating. A description of the radiator system and instructions to the user for program operation are included. The input required for the execution of all program options is described. Several examples of program output are provided; sample output includes the radiator performance during ascent, reentry, and orbit.
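A heavily simplified version of the steady-state fin problem such a program must solve can be sketched as a Newton iteration on a finite-difference grid. The governing balance (conduction along the fin equals surface radiation), the boundary conditions, and all property values below are illustrative assumptions rather than the program's actual model, which also treats transients and aerodynamic heating.

```python
import numpy as np

# One-dimensional radiating fin: conduction along the fin balances thermal
# radiation from its surface, k*t * T'' = sigma*eps*T^4, with a fixed root
# temperature and an adiabatic tip.  Solved by Newton iteration on a
# finite-difference grid.  All property values are illustrative.

SIGMA = 5.670e-8                 # Stefan-Boltzmann constant, W/(m^2 K^4)
K, TH, EPS = 170.0, 2e-3, 0.85   # conductivity W/(m K), thickness m, emissivity
L, N = 0.5, 101                  # fin length m, grid points
T_ROOT = 500.0                   # root temperature, K

dx = L / (N - 1)
beta = SIGMA * EPS / (K * TH)    # radiation-to-conduction group

T = np.full(N, T_ROOT)           # initial guess: isothermal fin
for _ in range(60):
    F = np.zeros(N)
    F[0] = T[0] - T_ROOT                                   # Dirichlet root
    F[1:-1] = (T[:-2] - 2 * T[1:-1] + T[2:]) / dx**2 - beta * T[1:-1]**4
    F[-1] = T[-1] - T[-2]                                  # adiabatic tip
    # Tridiagonal Jacobian of the residual F with respect to T
    J = np.zeros((N, N))
    J[0, 0] = 1.0
    idx = np.arange(1, N - 1)
    J[idx, idx - 1] = 1.0 / dx**2
    J[idx, idx] = -2.0 / dx**2 - 4.0 * beta * T[idx]**3
    J[idx, idx + 1] = 1.0 / dx**2
    J[-1, -1], J[-1, -2] = 1.0, -1.0
    step = np.linalg.solve(J, -F)
    T += np.clip(step, -50.0, 50.0)      # crude step limiter for robustness
    if np.max(np.abs(step)) < 1e-10:
        break

q_root = -K * TH * (T[1] - T[0]) / dx    # heat entering at the root, W per m width
print(f"tip temperature {T[-1]:.1f} K, root heat input {q_root:.1f} W/m")
```

The temperature falls monotonically from root to tip, and the conducted heat at the root equals the total radiated power, which is the basic energy bookkeeping a radiator performance code carries through its transient and non-symmetric cases.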
Overview of HZETRN and BRNTRN Space Radiation Shielding Codes
NASA Technical Reports Server (NTRS)
Wilson, John W.; Cucinotta, F. A.; Shinn, J. L.; Simonsen, L. C.; Badavi, F. F.
1997-01-01
The NASA Radiation Health Program has supported basic research in radiation physics over the last decade to develop ionizing radiation transport codes and corresponding databases for the protection of astronauts from galactic and solar cosmic rays on future deep space missions. The codes describe the interactions of the incident radiation with shield materials, whose content is modified by atomic and nuclear reactions: high-energy heavy ions are fragmented into less massive reaction products, and secondary radiations are produced by direct knockout of shield constituents or as de-excitation products of the reactions. This defines the radiation fields to which specific devices are subjected onboard a spacecraft. Similar reactions occur in the device itself, which is the initiating event for the device response. An overview of the computational procedures and database, with some applications to photonic and data-processing devices, is given.
The Continual Intercomparison of Radiation Codes: Results from Phase I
NASA Technical Reports Server (NTRS)
Oreopoulos, Lazaros; Mlawer, Eli; Delamere, Jennifer; Shippert, Timothy; Cole, Jason; Iacono, Michael; Jin, Zhonghai; Li, Jiangnan; Manners, James; Raisanen, Petri; Rose, Fred; Zhang, Yuanchong; Wilson, Michael J.; Rossow, William
2011-01-01
The computer codes that calculate the energy budget of solar and thermal radiation in Global Climate Models (GCMs), our most advanced tools for predicting climate change, have to be computationally efficient so as not to impose an undue computational burden on climate simulations. By using approximations to gain execution speed, these codes sacrifice accuracy compared to more accurate, but also much slower, alternatives. International efforts to evaluate the approximate schemes have taken place in the past, but they have suffered from the drawback that the accurate standards were not themselves validated for performance. The manuscript summarizes the main results of the first phase of an effort called "Continual Intercomparison of Radiation Codes" (CIRC), where the cases chosen to evaluate the approximate models are based on observations and where we have ensured that the accurate models perform well when compared to solar and thermal radiation measurements. The effort is endorsed by international organizations such as the GEWEX Radiation Panel and the International Radiation Commission and has a dedicated website (http://circ.gsfc.nasa.gov) where interested scientists can freely download data and obtain more information about the effort's modus operandi and objectives. In a paper published in the March 2010 issue of the Bulletin of the American Meteorological Society, only a brief overview of CIRC was provided with some sample results. In this paper the analysis of submissions of 11 solar and 13 thermal infrared codes relative to accurate reference calculations obtained by so-called "line-by-line" radiation codes is much more detailed. We demonstrate that, while the performance of the approximate codes continues to improve, significant issues still remain to be addressed for satisfactory performance within GCMs. We hope that by identifying and quantifying shortcomings, the paper will help establish performance standards to objectively assess radiation code quality.
ACCELERATING HIGH-ENERGY PULSAR RADIATION CODES
Venter, C.; De Jager, O. C.
2010-12-20
Curvature radiation (CR) is believed to be a dominant mechanism for creating gamma-ray emission from pulsars and is emitted by relativistic particles that are constrained to move along curved magnetic field lines. Additionally, synchrotron radiation (SR) is expected to be radiated both by relativistic primaries (involving cyclotron resonant absorption of radio photons and re-emission of SR photons) and by secondary electron-positron pairs (created by magnetic or photon-photon pair production processes involving CR gamma rays in the pulsar magnetosphere). When calculating these high-energy spectra, especially in the context of pulsar population studies where several millions of CR and SR spectra have to be generated, it is profitable to consider approximations that would save computational time without sacrificing too much accuracy. This paper focuses on one such approximation technique, and we show that one may gain significantly in computational speed while preserving the accuracy of the spectral results.
Acceleration of a Monte Carlo radiation transport code
Hochstedler, R.D.; Smith, L.M.
1996-03-01
Execution time for the Integrated TIGER Series (ITS) Monte Carlo radiation transport code has been reduced by careful re-coding of computationally intensive subroutines. Three test cases for the TIGER (1-D slab geometry), CYLTRAN (2-D cylindrical geometry), and ACCEPT (3-D arbitrary geometry) codes were identified and used to benchmark and profile program execution. Based upon these results, the sixteen most time-consuming subroutines were examined, and nine of them were modified to accelerate computations with numerical output equivalent to the original. The results obtained via this study indicate that speedup factors of 1.90 for the TIGER code, 1.67 for the CYLTRAN code, and 1.11 for the ACCEPT code are achievable. © 1996 American Institute of Physics.
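The profile-then-recode workflow reported above can be illustrated in miniature: time a hot routine, replace it with an equivalent but faster formulation, verify identical output, and report the speedup factor. The "subroutine" below is a toy stand-in (a naive accumulation loop versus its closed form), not ITS code.

```python
import time

# Miniature profile-and-recode experiment: measure a hot spot, swap in an
# algebraically equivalent implementation, confirm identical results, and
# compute the speedup factor t_before / t_after.

def tally_naive(n):
    """Hot-spot candidate: accumulate 1 + 2 + ... + n one term at a time."""
    total = 0
    for k in range(1, n + 1):
        total += k
    return total

def tally_fast(n):
    """Re-coded equivalent: Gauss's closed form for the same sum."""
    return n * (n + 1) // 2

n = 2_000_000
t0 = time.perf_counter()
slow = tally_naive(n)
t1 = time.perf_counter()
fast = tally_fast(n)
t2 = time.perf_counter()

assert slow == fast   # "equivalent numerical output" is the acceptance test
speedup = (t1 - t0) / max(t2 - t1, 1e-12)
print(f"speedup factor {speedup:.1f}x with identical results")
```

The key discipline mirrored from the abstract is that every accelerated routine must reproduce the original's output exactly before its timing improvement counts.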
Stratospheric Relaxation in IMPACT's Radiation Code
Edis, T; Grant, K; Cameron-Smith, P
2006-11-13
While IMPACT incorporates diagnostic radiation routines from our work in previous years, it has not previously included the stratospheric relaxation required for forcing calculations. We have now implemented the necessary changes for stratospheric relaxation, tested its stability, and compared the results with stratospheric temperatures obtained from CAM3 met data. The relaxation results in stable temperature profiles in the stratosphere, which is encouraging for use in forcing calculations. It does, however, produce a cooling bias when compared to CAM3, which appears to be due to differences in radiation calculations rather than the interactive treatment of ozone. The cause of this bias is as yet unclear, but it seems to be systematic and hence cancels out when differences are taken relative to a control simulation.
A Radiation Shielding Code for Spacecraft and Its Validation
NASA Technical Reports Server (NTRS)
Shinn, J. L.; Cucinotta, F. A.; Singleterry, R. C.; Wilson, J. W.; Badavi, F. F.; Badhwar, G. D.; Miller, J.; Zeitlin, C.; Heilbronn, L.; Tripathi, R. K.
2000-01-01
The HZETRN code, which uses a deterministic approach pioneered at NASA Langley Research Center, has been developed over the past decade to evaluate the local radiation fields within sensitive materials (electronic devices and human tissue) on spacecraft in the space environment. The code describes the interactions of shield materials with the incident galactic cosmic rays, trapped protons, or energetic protons from solar particle events in free space and low Earth orbit. The content of incident radiations is modified by atomic and nuclear reactions with the spacecraft and radiation shield materials. High-energy heavy ions are fragmented into less massive reaction products, and reaction products are produced by direct knockout of shield constituents or from de-excitation products. An overview of the computational procedures and database which describe these interactions is given. Validation of the code with recent Monte Carlo benchmarks, and laboratory and flight measurement is also included.
VDSR: Virtual Detector for Synchrotron Radiation
Rykovanov, S. G.; Chen, M.; Geddes, C. G. R.; Schroeder, C. B.; Esarey, E.; Leemans, W. P.
2012-12-21
The Virtual Detector for Synchrotron Radiation (VDSR) is a parallel C++ code developed to calculate the incoherent radiation from a single charged particle or a beam moving in given external electromagnetic fields. In these proceedings, the code structure and features are introduced. An example of radiation generation from the betatron motion of a beam in the focusing fields of the wake in a laser-plasma accelerator is presented.
MORSE Monte Carlo radiation transport code system
Emmett, M.B.
1983-02-01
This report is an addendum to the MORSE report, ORNL-4972, originally published in 1975. This addendum contains descriptions of several modifications to the MORSE Monte Carlo Code, replacement pages containing corrections, Part II of the report which was previously unpublished, and a new Table of Contents. The modifications include a Klein Nishina estimator for gamma rays. Use of such an estimator required changing the cross section routines to process pair production and Compton scattering cross sections directly from ENDF tapes and writing a new version of subroutine RELCOL. Another modification is the use of free form input for the SAMBO analysis data. This required changing subroutines SCORIN and adding new subroutine RFRE. References are updated, and errors in the original report have been corrected. (WHK)
Description of transport codes for space radiation shielding.
Kim, Myung-Hee Y; Wilson, John W; Cucinotta, Francis A
2012-11-01
Exposure to ionizing radiation in the space environment is one of the hazards faced by crews in space missions. As space radiations traverse spacecraft, habitat shielding, or tissues, their energies and compositions are altered by interactions with the shielding. Modifications to the radiation fields arise from atomic interactions of charged particles with orbital electrons and nuclear interactions leading to projectile and target fragmentation, including secondary particles such as neutrons, protons, mesons, and nuclear recoils. The transport of space radiation through shielding can be simulated using Monte Carlo techniques or deterministic solutions of the Boltzmann equation. To determine shielding requirements and to resolve radiation constraints for future human missions, the shielding evaluation of a spacecraft concept is required as an early step in the design process. To do this requires (1) accurate knowledge of space environmental models to define the boundary condition for transport calculations, (2) transport codes with detailed shielding and body geometry models to determine particle transmission into areas of internal shielding and at each critical body organ, and (3) the assessment of organ dosimetric quantities and biological risks by applying the corresponding response models for space radiation against the particle spectra that have been accurately determined from the transport code. This paper reviews current transport codes and analyzes their accuracy through comparison to laboratory and spaceflight data. This paper also introduces a probabilistic risk assessment approach for the evaluation of radiation shielding. PMID:23032892
Comparison of Space Radiation Calculations from Deterministic and Monte Carlo Transport Codes
NASA Technical Reports Server (NTRS)
Adams, J. H.; Lin, Z. W.; Nasser, A. F.; Randeniya, S.; Tripathi, R. K.; Watts, J. W.; Yepes, P.
2010-01-01
The presentation outline includes motivation, the radiation transport codes being considered (HZETRN, UPROP, FLUKA, and GEANT4, including the main physics in each), the space radiation cases being considered (solar particle events and galactic cosmic rays), results for slab geometry, results for spherical geometry, and a summary.
Code for Analyzing and Designing Spacecraft Power System Radiators
NASA Technical Reports Server (NTRS)
Juhasz, Albert
2005-01-01
GPHRAD is a computer code for analysis and design of disk or circular-sector heat-rejecting radiators for spacecraft power systems. A specific application is for Stirling-cycle/linear-alternator electric-power systems coupled to radioisotope general-purpose heat sources. GPHRAD affords capabilities and options to account for thermophysical properties (thermal conductivity, density) of either metal-alloy or composite radiator materials.
The Continual Intercomparison of Radiation Codes: Results from Phase I
Oreopoulos, L.; Mlawer, Eli J.; Delamere, Jennifer; Shippert, Timothy R.; Cole, Jason; Fomin, Boris; Iacono, Michael J.; Jin, Zhonghai; Li, Jiangning; Manners, James; Raisanen, Petri; Rose, Fred; Zhang, Yuanchong; Wilson, Michael J.; Rossow, William B.
2012-01-01
We present results from Phase I of the Continual Intercomparison of Radiation Codes (CIRC), intended as an evolving and regularly updated reference source for evaluation of radiative transfer (RT) codes used in global climate models. CIRC differs from previous intercomparisons in that it relies on an observationally validated catalogue of cases. The seven CIRC Phase I baseline cases, five cloud-free and two with overcast liquid clouds, are built around observations by the Atmospheric Radiation Measurement (ARM) program that satisfy the goals of Phase I, namely to examine RT model performance in realistic, yet not overly complex, atmospheric conditions. In addition to the seven baseline cases, idealized "subcases" are also examined to facilitate interpretation of the causes of model errors. Besides summarizing individual model performance with respect to reference line-by-line calculations and inter-model differences, we also highlight RT model behavior for conditions of doubled CO2, aspects of utilizing a spectral specification of surface albedo, and the impact of the inclusion of scattering in the thermal infrared. Our analysis suggests that RT models should work towards improving their calculation of diffuse shortwave flux, shortwave absorption, treatment of spectral surface albedo, and shortwave CO2 forcing. On the other hand, LW calculations appear to be significantly closer to the reference results. By enhancing the range of conditions under which participating codes are tested, future CIRC phases will hopefully allow even more rigorous examination of RT code performance.
Radiative transfer code SHARM for atmospheric and terrestrial applications
NASA Astrophysics Data System (ADS)
Lyapustin, A. I.
2005-12-01
An overview of the publicly available radiative transfer Spherical Harmonics code (SHARM) is presented. SHARM is a rigorous code, as accurate as the Discrete Ordinate Radiative Transfer (DISORT) code, yet faster. It performs simultaneous calculations for different solar zenith angles, view zenith angles, and view azimuths and allows the user to make multiwavelength calculations in one run. The Δ-M method is implemented for calculations with highly anisotropic phase functions. Rayleigh scattering is automatically included as a function of wavelength, surface elevation, and the selected vertical profile of one of the standard atmospheric models. The current version of the SHARM code does not explicitly include atmospheric gaseous absorption, which should be provided by the user. The SHARM code has several built-in models of the bidirectional reflectance of land and wind-ruffled water surfaces that are most widely used in research and satellite data processing. A modification of the SHARM code with the built-in Mie algorithm designed for calculations with spherical aerosols is also described.
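As an illustration of the automatic wavelength dependence of Rayleigh scattering mentioned above, the commonly used Hansen & Travis (1974) fit for a clean Earth atmosphere can be sketched as follows; this is a standard textbook approximation, not SHARM's internal implementation:

```python
def rayleigh_tau(wavelength_um, p_surface_hpa=1013.25):
    """Approximate Rayleigh-scattering optical depth of a full Earth
    atmospheric column (Hansen & Travis 1974 fit).

    wavelength_um: wavelength in micrometers.
    p_surface_hpa: surface pressure; tau scales linearly with it,
                   which is how surface elevation enters."""
    lam = wavelength_um
    tau0 = 0.008569 * lam**-4 * (1.0 + 0.0113 * lam**-2 + 0.00013 * lam**-4)
    return tau0 * (p_surface_hpa / 1013.25)
```

At 0.55 um this gives roughly 0.1, and the near lambda^-4 falloff is why Rayleigh scattering matters mainly at short wavelengths.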
Space shuttle rendezvous, radiation and reentry analysis code
NASA Technical Reports Server (NTRS)
Mcglathery, D. M.
1973-01-01
A preliminary space shuttle mission design and analysis tool is reported, emphasizing versatility, flexibility, and user interaction through the use of a relatively small computer (IBM-7044). The Space Shuttle Rendezvous, Radiation and Reentry Analysis Code is used to perform mission and space radiation environmental analyses for four typical space shuttle missions. Also included is a version of the proposed Apollo/Soyuz rendezvous and docking test mission. Tangential-steering, circle-to-circle low-thrust tug orbit raising and the effects of the trapped radiation environment on trajectory shaping through solar electric power losses are also features of this mission analysis code. The computational results include a parametric study of single-impulse versus double-impulse deorbiting for relatively low space shuttle orbits, as well as some definitive data on the magnetically trapped protons and electrons encountered on a particular mission.
Prototype demonstration of radiation therapy planning code system
Little, R.C.; Adams, K.J.; Estes, G.P.; Hughes, L.S. III; Waters, L.S.
1996-09-01
This is the final report of a one-year, Laboratory-Directed Research and Development project at the Los Alamos National Laboratory (LANL). Radiation therapy planning is the process by which a radiation oncologist plans a treatment protocol for a patient preparing to undergo radiation therapy. The objective is to develop a protocol that delivers sufficient radiation dose to the entire tumor volume, while minimizing dose to healthy tissue. Radiation therapy planning, as currently practiced in the field, suffers from inaccuracies in modeling patient anatomy and radiation transport. This project investigated the ability to automatically model patient-specific, three-dimensional (3-D) geometries in advanced Los Alamos radiation transport codes (such as MCNP), and to efficiently generate accurate radiation dose profiles in these geometries via sophisticated physics modeling. Modern scientific visualization techniques were utilized. The long-term goal is that such a system could be used by a non-expert in a distributed computing environment to help plan the treatment protocol for any candidate radiation source. The improved accuracy offered by such a system promises increased efficacy and reduced costs for this important aspect of health care.
The Continuous Intercomparison of Radiation Codes (CIRC): Phase I Cases
NASA Technical Reports Server (NTRS)
Oreopoulos, Lazaros; Mlawer, Eli; Delamere, Jennifer; Shippert, Timothy; Turner, David D.; Miller, Mark A.; Minnis, Patrick; Clough, Shepard; Barker, Howard; Ellingson, Robert
2007-01-01
CIRC aspires to be the successor to ICRCCM (Intercomparison of Radiation Codes in Climate Models). It is envisioned as an evolving and regularly updated reference source for GCM-type radiative transfer (RT) code evaluation, with the principal goal of contributing to the improvement of RT parameterizations. CIRC is jointly endorsed by DOE's Atmospheric Radiation Measurement (ARM) program and the GEWEX Radiation Panel (GRP). CIRC's goal is to provide test cases for which GCM RT algorithms should be performing at their best, i.e., well-characterized clear-sky and homogeneous, overcast cloudy cases. What distinguishes CIRC from previous intercomparisons is that its pool of cases is based on observed datasets. The bulk of the atmospheric and surface input, as well as the radiative fluxes, comes from ARM observations as documented in the Broadband Heating Rate Profile (BBHRP) product. BBHRP also provides reference calculations from AER's RRTM RT algorithms that can be used to select an optimal set of cases and to provide a first-order estimate of our ability to achieve radiative flux closure given the limitations in our knowledge of the atmospheric state.
NERO- a post-maximum supernova radiation transport code
NASA Astrophysics Data System (ADS)
Maurer, I.; Jerkstrand, A.; Mazzali, P. A.; Taubenberger, S.; Hachinger, S.; Kromer, M.; Sim, S.; Hillebrandt, W.
2011-12-01
The interpretation of supernova (SN) spectra is essential for deriving SN ejecta properties such as density and composition, which in turn can tell us about their progenitors and the explosion mechanism. A very large number of atomic processes are important for spectrum formation. Several tools for calculating SN spectra exist, but they mainly focus on the very early or late epochs. The intermediate phase, which requires a non-local thermodynamic equilibrium (NLTE) treatment of radiation transport, has rarely been studied. In this paper, we present a new SN radiation transport code, NERO, which can model those epochs. All the atomic processes are treated in full NLTE, under a steady-state assumption. This is a valid approach between roughly 50 and 500 days after the explosion, depending on SN type. This covers the post-maximum photospheric phase and the early and intermediate nebular phases. As a test, we compare NERO to the radiation transport code of Jerkstrand, Fransson & Kozma and to the nebular code of Mazzali et al. All three codes have been developed independently, and a comparison provides a valuable opportunity to investigate their reliability. Currently, NERO is one-dimensional and can be used for predicting spectra of synthetic explosion models or for deriving SN properties by spectral modelling. To demonstrate this, we study the spectra of the 'normal' Type Ia supernova (SN Ia) 2005cf between 50 and 350 days after the explosion and identify most of the common SN Ia line features at post-maximum epochs.
A Radiation Solver for the National Combustion Code
NASA Technical Reports Server (NTRS)
Sockol, Peter M.
2015-01-01
A methodology is given that converts an existing finite-volume radiative transfer method, which requires input of local absorption coefficients, to one that can treat a mixture of combustion gases and compute the coefficients on the fly from the local mixture properties. The full-spectrum k-distribution method is used to transform the radiative transfer equation (RTE) to an alternate wave number variable, g. The coefficients in the transformed equation are calculated at discrete temperatures and participating-species mole fractions that span the values of the problem for each value of g. These results are stored in a table, and interpolation is used to find the coefficients at every cell in the field. Finally, the transformed RTE is solved for each g, and Gaussian quadrature is used to find the radiant heat flux throughout the field. The present implementation is in an existing Cartesian/cylindrical-grid radiative transfer code, and the local mixture properties are given by a solution of the National Combustion Code (NCC) on the same grid. Based on this work, the intention is to apply this method to an existing unstructured-grid radiation code, which can then be coupled directly to NCC.
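The k-distribution idea described above can be demonstrated on a synthetic band: reorder the spectral absorption coefficients into a smooth cumulative distribution k(g), then integrate with a handful of Gaussian quadrature points instead of thousands of spectral points. The lognormal "spectrum" below is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
k_nu = rng.lognormal(mean=0.0, sigma=1.5, size=20000)  # synthetic band spectrum
u = 0.5                                                # absorber path amount

# Line-by-line reference: average the monochromatic transmissivities.
T_lbl = float(np.exp(-k_nu * u).mean())

# k-distribution: sort k so that g becomes the cumulative fraction of the band.
k_g = np.sort(k_nu)
g = (np.arange(k_g.size) + 0.5) / k_g.size

# 16-point Gauss-Legendre quadrature mapped from [-1, 1] onto g in [0, 1].
nodes, weights = np.polynomial.legendre.leggauss(16)
g_q = 0.5 * (nodes + 1.0)
w_q = 0.5 * weights
k_q = np.interp(g_q, g, k_g)
T_kd = float(np.sum(w_q * np.exp(-k_q * u)))
```

Because exp(-k(g)u) is smooth and monotone in g, the 16-point quadrature reproduces the 20000-point band average closely; tabulating k_q against temperature and mole fraction, as in the paper, then reduces the RTE solve to one sweep per g point.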
Towards a 3D Space Radiation Transport Code
NASA Technical Reports Server (NTRS)
Wilson, J. W.; Tripathi, R. K.; Cucinotta, F. A.; Heinbockel, J. H.; Tweed, J.
2002-01-01
High-speed computational procedures for space radiation shielding have relied on asymptotic expansions in terms of the off-axis scatter and replacement of the general geometry problem by a collection of flat plates. This type of solution was derived for application to human-rated systems in which the radius of the shielded volume is large compared to the off-axis diffusion, limiting leakage at lateral boundaries. Over the decades these computational codes have become relatively complete, and lateral diffusion effects are now being added. The analysis for developing a practical full-3D space shielding code is presented.
Development of the 3DHZETRN code for space radiation protection
NASA Astrophysics Data System (ADS)
Wilson, John; Badavi, Francis; Slaba, Tony; Reddell, Brandon; Bahadori, Amir; Singleterry, Robert
Space radiation protection requires computationally efficient shield assessment methods that have been verified and validated. The HZETRN code is the engineering design code used for low Earth orbit dosimetric analysis and astronaut record keeping, with end-to-end validation to twenty percent in Space Shuttle and International Space Station operations. HZETRN treated diffusive leakage only at the distal surface, limiting its application to systems with a large radius of curvature. A revision of HZETRN that included forward and backward diffusion allowed neutron leakage to be evaluated at both the near and distal surfaces. That revision provided a deterministic code of high computational efficiency that was in substantial agreement with Monte Carlo (MC) codes in flat plates (at least to the degree that MC codes agree among themselves). In the present paper, the 3DHZETRN formalism, capable of evaluation in general geometry, is described. Benchmarking against MC codes (Geant4, FLUKA, MCNP6, and PHITS) in simple shapes such as spheres within spherical shells and boxes will help quantify uncertainty. Connection of 3DHZETRN to general geometry will be discussed.
Documentation of the detailed radiation property data for the radiation-ablation code RASLE
NASA Technical Reports Server (NTRS)
Henline, William D.
1991-01-01
This report is a documentation of the necessary radiation property input data for the radiating shock layer simulation code RASLE. The tabulated data are required to simulate systems which are composed of oxygen, nitrogen, carbon, hydrogen, and silicon. These data are needed to compute the flowfield effects of many practical ablative, hypersonic vehicle heat shield materials. A brief outline description is provided for the RASLE code. A more detailed discussion is provided for the RASLE code non-grey gas spectral radiation model. This model is related to the required radiation property data which are tabulated at the end of the report. Other correlations needed for the RASLE simulations are not discussed, since these are automatically included in the program and no input data are required.
A nonlocal electron conduction model for multidimensional radiation hydrodynamics codes
NASA Astrophysics Data System (ADS)
Schurtz, G. P.; Nicolaï, Ph. D.; Busquet, M.
2000-10-01
Numerical simulation of laser-driven Inertial Confinement Fusion (ICF) experiments requires the use of large multidimensional hydro codes. Though these codes include detailed physics for numerous phenomena, they deal poorly with electron conduction, which is the leading energy transport mechanism in these systems. Electron heat flow has been known, since the work of Luciani, Mora, and Virmont (LMV) [Phys. Rev. Lett. 51, 1664 (1983)], to be a nonlocal process, which the local Spitzer-Harm theory, even flux limited, is unable to account for. The present work aims at extending the original formula of LMV to two or three dimensions of space. This multidimensional extension leads to an equivalent transport equation suitable for easy implementation in a two-dimensional radiation-hydrodynamics code. Simulations are presented and compared to Fokker-Planck simulations in one and two dimensions of space.
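The structure of the original LMV formula is a convolution: the local Spitzer-Harm flux is smeared by an exponential kernel whose width is a delocalization length proportional to the electron mean free path. A 1-D sketch with a constant delocalization length (the actual LMV kernel uses the local mean free path inside a path integral):

```python
import numpy as np

def lmv_nonlocal_flux(q_sh, x, lam):
    """Smear the local Spitzer-Harm flux q_sh(x) with an LMV-style
    exponential kernel w(x, x') = exp(-|x - x'| / lam) / (2 lam),
    taking the delocalization length lam constant for simplicity."""
    dx = x[1] - x[0]
    q_nl = np.empty_like(q_sh)
    for i, xi in enumerate(x):
        w = np.exp(-np.abs(x - xi) / lam) / (2.0 * lam)
        w /= np.sum(w) * dx          # renormalize on the truncated grid
        q_nl[i] = np.sum(w * q_sh) * dx
    return q_nl
```

A uniform flux passes through unchanged, while short-wavelength structure in q_sh is attenuated; that is the physical content of nonlocality: the heat flux at a point responds to the temperature gradient over a neighborhood of size lam, not just locally.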
Validation of a comprehensive space radiation transport code.
Shinn, J L; Cucinotta, F A; Simonsen, L C; Wilson, J W; Badavi, F F; Badhwar, G D; Miller, J; Zeitlin, C; Heilbronn, L; Tripathi, R K; Clowdsley, M S; Heinbockel, J H; Xapsos, M A
1998-12-01
The HZETRN code has been developed over the past decade to evaluate the local radiation fields within sensitive materials on spacecraft in the space environment. Most of the more important nuclear and atomic processes are now modeled, and evaluation within a complex spacecraft geometry with differing material components, including transition effects across boundaries of dissimilar materials, is included. The atomic/nuclear database and transport procedures have received limited validation in laboratory testing with high-energy ion beams. The codes have been applied in design of the SAGE-III instrument, resulting in material changes to control injurious neutron production; in the study of Space Shuttle single-event upsets; and in validation with space measurements (particle telescopes, tissue-equivalent proportional counters, CR-39) on Shuttle and Mir. The present paper reviews the code development and presents recent results in laboratory and space flight validation. PMID:11542474
New Parallel computing framework for radiation transport codes
Kostin, M.A.; Mokhov, N.V.; Niita, K.
2010-09-01
A new parallel computing framework has been developed for use with general-purpose radiation transport codes. The framework was implemented as a C++ module that uses MPI for message passing. The module is largely independent of the radiation transport codes it is used with, and is connected to the codes by means of a number of interface functions. The framework was integrated with the MARS15 code, and an effort is under way to deploy it in PHITS. Besides the parallel computing functionality, the framework offers a checkpoint facility that allows restarting calculations from a saved checkpoint file. The checkpoint facility can be used in single-process calculations as well as in the parallel regime. Several checkpoint files can be merged into one, thus combining the results of several calculations. The framework also corrects some of the known problems with scheduling and load balancing found in the original implementations of the parallel computing functionality in MARS15 and PHITS. The framework can be used efficiently on homogeneous systems and on networks of workstations, where interference from other users is possible.
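The checkpoint-merging feature rests on the fact that independent Monte Carlo tallies combine exactly if each file stores the history count and the raw sums. A minimal sketch of that statistics; the framework's actual checkpoint layout is not described here, so the (n, sum, sum_sq) tuple format below is an assumption for illustration:

```python
def merge_checkpoints(checkpoints):
    """Combine per-run tallies into one pooled estimate, the way results
    of several independent MC calculations can be merged after the fact.

    checkpoints: iterable of (n_histories, sum_of_scores, sum_of_squares).
    Returns (pooled_mean, variance_of_the_pooled_mean)."""
    n = sum(c[0] for c in checkpoints)
    s = sum(c[1] for c in checkpoints)
    s2 = sum(c[2] for c in checkpoints)
    mean = s / n
    var_mean = (s2 / n - mean**2) / (n - 1)  # sample variance of the mean
    return mean, var_mean
```

Because only sums are stored, merging is associative: files can be combined in any order, which is what makes restarts and post-hoc pooling cheap.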
History of one family of atmospheric radiative transfer codes
NASA Astrophysics Data System (ADS)
Anderson, Gail P.; Wang, Jinxue; Hoke, Michael L.; Kneizys, F. X.; Chetwynd, James H., Jr.; Rothman, Laurence S.; Kimball, L. M.; McClatchey, Robert A.; Shettle, Eric P.; Clough, Shepard A.; Gallery, William O.; Abreu, Leonard W.; Selby, John E. A.
1994-12-01
Beginning in the early 1970's, the then Air Force Cambridge Research Laboratory initiated a program to develop computer-based atmospheric radiative transfer algorithms. The first attempts were translations of graphical procedures described in a 1970 report on The Optical Properties of the Atmosphere, based on empirical transmission functions and effective absorption coefficients derived primarily from controlled laboratory transmittance measurements. The fact that spectrally averaged atmospheric transmittance T does not obey the Beer-Lambert law (T = exp(-σ·η), where σ is a species absorption cross section, independent of η, the species column amount along the path) at any but the finest spectral resolution was already well known. Band models to describe this gross behavior were developed in the 1950's and 60's. Thus began LOWTRAN, the Low Resolution Transmittance Code, first released in 1972. This limited initial effort has now progressed to a set of codes and related algorithms (including line-of-sight spectral geometry, direct and scattered radiance and irradiance, non-local thermodynamic equilibrium, etc.) that contain thousands of coding lines, hundreds of subroutines, and improved accuracy, efficiency, and, ultimately, accessibility. This review will include LOWTRAN, HITRAN (atlas of high-resolution molecular spectroscopic data), FASCODE (Fast Atmospheric Signature Code), and MODTRAN (Moderate Resolution Transmittance Code), their permutations, validations, and applications, particularly as related to passive remote sensing and energy deposition.
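The Beer-Lambert failure noted in the abstract has a two-line demonstration: average the transmittances of one weak and one strong line and the band mean no longer composes multiplicatively, because the strong line saturates first. The coefficients below are purely illustrative:

```python
import numpy as np

k = np.array([0.1, 10.0])   # one weak and one strong illustrative line

def T_bar(u):
    """Band-averaged transmittance for absorber amount u."""
    return float(np.exp(-k * u).mean())

# Beer-Lambert behavior would require T_bar(2*u) == T_bar(u)**2; the band
# average instead decays more slowly once the strong line has saturated.
```

This slower-than-exponential curve-of-growth behavior is exactly what band models (and later correlated-k treatments) were invented to fit, which is why LOWTRAN was built on band models rather than a single effective absorption coefficient.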
A model code for the radiative theta pinch
Lee, S.; Saw, S. H.; Lee, P. C. K.; Akel, M.; Damideh, V.; Khattak, N. A. D.; Mongkolnavin, R.; Paosawatyanyong, B.
2014-07-15
A model for the theta pinch is presented with three modelled phases: a radial inward shock phase, a reflected shock phase, and a final pinch phase. The governing equations for the phases are derived incorporating thermodynamics, radiation, and radiation-coupled dynamics in the pinch phase. A code is written incorporating corrections for the effects of transit delay of small disturbance speeds and the effects of plasma self-absorption on the radiation. Two model parameters are incorporated: the coupling coefficient f between the primary loop current and the induced plasma current, and the mass swept-up factor f_m. These values are taken from experiments carried out in the Chulalongkorn theta pinch.
Recent radiation damage studies and developments of the Marlowe code
NASA Astrophysics Data System (ADS)
Ortiz, C. J.; Souidi, A.; Becquart, C. S.; Domain, C.; Hou, M.
2014-07-01
Radiation damage in materials relevant to applications evolves over time scales spanning from the femtosecond - the characteristic time for an atomic collision - to decades - the aging time expected for nuclear materials. The relevant kinetic energies of atoms span from thermal motion to the MeV range. The question motivating this contribution is to identify the relationship between elementary atomic displacements triggered by irradiation and the subsequent microstructural evolution of metals in the long term. The Marlowe code, based on the binary collision approximation (BCA), is used to simulate the sequences of atomic displacements generated by energetic primary recoils, and the Object Kinetic Monte Carlo code LAKIMOCA, parameterized on a range of ab initio calculations, is used to predict the subsequent long-term evolution of point defects and clusters thereof. In agreement with full Molecular Dynamics, BCA displacement cascades in body-centered cubic (BCC) Fe and a face-centered cubic (FCC) Fe-Ni-Cr alloy display recursive properties that are found useful for predictions in the long term. The case of defect evolution in W due to external irradiation with energetic H and He is also discussed. For this purpose, it was useful to extend the inelastic energy loss model available in Marlowe up to the Bethe regime. The last version of the Marlowe code (version 15) was delivered before message-passing software (such as MPI) was available, but the structure of the code was designed in such a way as to permit parallel execution within a distributed-memory environment. This makes it possible to obtain N different cascades simultaneously using N independent nodes without any communication between processors. The parallelization of the code using MPI was recently achieved by one author of this report (C.J.O.). Typically, the parallelized version of Marlowe allows simulating millions of displacement cascades using a limited number of processors (<64) within only
3D unstructured-mesh radiation transport codes
Morel, J.
1997-12-31
Three unstructured-mesh radiation transport codes are currently being developed at Los Alamos National Laboratory. The first code is ATTILA, which uses an unstructured tetrahedral mesh in conjunction with standard Sn (discrete-ordinates) angular discretization, standard multigroup energy discretization, and linear-discontinuous spatial differencing. ATTILA solves the standard first-order form of the transport equation using source iteration in conjunction with diffusion-synthetic acceleration of the within-group source iterations. ATTILA is designed to run primarily on workstations. The second code is DANTE, which uses a hybrid finite-element mesh consisting of arbitrary combinations of hexahedra, wedges, pyramids, and tetrahedra. DANTE solves several second-order self-adjoint forms of the transport equation, including the even-parity equation, the odd-parity equation, and a new equation called the self-adjoint angular flux equation. DANTE also offers three angular discretization options: Sn (discrete-ordinates), Pn (spherical harmonics), and SPn (simplified spherical harmonics). DANTE is designed to run primarily on massively parallel message-passing machines, such as the ASCI-Blue machines at LANL and LLNL. The third code is PERICLES, which uses the same hybrid finite-element mesh as DANTE, but solves the standard first-order form of the transport equation rather than a second-order self-adjoint form. PERICLES uses a standard Sn discretization in angle in conjunction with trilinear-discontinuous spatial differencing and diffusion-synthetic acceleration of the within-group source iterations. PERICLES was initially designed to run on workstations, but a version for massively parallel message-passing machines will be built. The three codes will be described in detail and computational results will be presented.
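The convergence behavior that makes diffusion-synthetic acceleration worthwhile can be seen in a toy one-group, infinite-medium fixed-source problem, where plain source iteration contracts the error by the scattering ratio c = sigma_s/sigma_t per sweep, so the diffusive regime c -> 1 is arbitrarily slow without acceleration:

```python
# Toy problem: sigma_t * phi = sigma_s * phi + q, whose exact solution
# is phi = q / (sigma_t - sigma_s). Source iteration lags the scattering
# source, so each sweep multiplies the error by c = sigma_s / sigma_t.
sigma_t, sigma_s, q = 1.0, 0.9, 1.0
exact = q / (sigma_t - sigma_s)           # = 10.0

phi, errors = 0.0, []
for _ in range(200):
    phi = (sigma_s * phi + q) / sigma_t   # one source-iteration sweep
    errors.append(abs(phi - exact))
```

With c = 0.9 the error shrinks by exactly a factor 0.9 per sweep; DSA replaces this slow geometric decay by solving a cheap diffusion problem that removes the slowly converging (near-isotropic) error modes.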
Status of the MORSE multigroup Monte Carlo radiation transport code
Emmett, M.B.
1993-06-01
There are two versions of the MORSE multigroup Monte Carlo radiation transport computer code system at Oak Ridge National Laboratory. MORSE-CGA is the most well-known and has undergone extensive use for many years. MORSE-SGC was originally developed in about 1980 in order to restructure the cross-section handling and thereby save storage. However, with the advent of new computer systems having much larger storage capacity, that aspect of SGC has become unnecessary. Both versions use data from multigroup cross-section libraries, although in somewhat different formats. MORSE-SGC is the version of MORSE that is part of the SCALE system, but it can also be run stand-alone. Both CGA and SGC use the Multiple Array System (MARS) geometry package. In the last six months the main focus of the work on these two versions has been on making them operational on workstations, in particular, the IBM RISC 6000 family. A new version of SCALE for workstations is being released to the Radiation Shielding Information Center (RSIC). MORSE-CGA, Version 2.0, is also being released to RSIC. Both SGC and CGA have undergone other revisions recently. This paper reports on the current status of the MORSE code system.
Operation of the helicopter antenna radiation prediction code
NASA Technical Reports Server (NTRS)
Braeden, E. W.; Klevenow, F. T.; Newman, E. H.; Rojas, R. G.; Sampath, K. S.; Scheik, J. T.; Shamansky, H. T.
1993-01-01
HARP is a front end as well as a back end for the AMC and NEWAIR computer codes. These codes use the Method of Moments (MM) and the Uniform Geometrical Theory of Diffraction (UTD), respectively, to calculate the electromagnetic radiation patterns of antennas on aircraft. The major difficulty in using these codes is in the creation of proper input files for particular aircraft and in verifying that these files are, in fact, what is intended. HARP creates these input files in a consistent manner and allows the user to verify them for correctness using sophisticated 2-D and 3-D graphics. After antenna field patterns are calculated using either MM or UTD, HARP can display the results on the user's screen or provide hardcopy output. Because the process of collecting data, building the 3-D models, and obtaining the calculated field patterns was completely automated by HARP, the researcher's productivity can be many times what it would be if these operations had to be done by hand. A complete, step-by-step guide is provided so that the researcher can quickly learn to make use of all the capabilities of HARP.
NASA Astrophysics Data System (ADS)
Artyomov, K. P.; Ryzhov, V. V.; Naumenko, G. A.; Shevelev, M. V.
2012-05-01
Different types of polarization radiation generated by a relativistic electron beam are simulated using the fully electromagnetic particle-in-cell (PIC) code KARAT. The simulation results for diffraction radiation, transition radiation, Smith-Purcell radiation, and Vavilov-Cherenkov radiation are in good agreement with experimental data and analytical models. Modern PIC simulation is a good tool for checking and predicting experimental results.
VISRAD, 3-D Target Design and Radiation Simulation Code
NASA Astrophysics Data System (ADS)
Li, Yingjie; Macfarlane, Joseph; Golovkin, Igor
2015-11-01
The 3-D view factor code VISRAD is widely used in designing HEDP experiments at major laser and pulsed-power facilities, including NIF, OMEGA, OMEGA-EP, ORION, LMJ, Z, and PLX. It simulates target designs by generating a 3-D grid of surface elements, utilizing a variety of 3-D primitives and surface removal algorithms, and can be used to compute the radiation flux throughout the surface element grid by computing element-to-element view factors and solving power balance equations. Target set-up and beam pointing are facilitated by allowing users to specify positions and angular orientations using a variety of coordinate systems (e.g., that of any laser beam, target component, or diagnostic port). Analytic modeling of laser beam spatial profiles for OMEGA DPPs and NIF CPPs is used to compute laser intensity profiles throughout the grid of surface elements. We will discuss recent improvements to the software package and plans for future developments.
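For patches small compared to their separation, the element-to-element view factors mentioned above reduce to the standard differential formula F12 ≈ cos(theta1) cos(theta2) A2 / (pi r^2). A sketch of that textbook formula (not VISRAD's internal algorithm), with the reciprocity relation A1 F12 = A2 F21 as the usual sanity check on any view-factor table:

```python
import numpy as np

def patch_view_factor(c1, n1, c2, n2, a2):
    """Differential-patch view factor from patch 1 to patch 2.

    c1, c2: patch center positions; n1, n2: unit normals; a2: area of
    patch 2. Valid when the separation is large versus the patch sizes."""
    r = c2 - c1
    d = np.linalg.norm(r)
    cos1 = max(np.dot(n1, r) / d, 0.0)    # patch 1 must face patch 2
    cos2 = max(np.dot(n2, -r) / d, 0.0)   # and patch 2 must face patch 1
    return cos1 * cos2 * a2 / (np.pi * d**2)
```

In a power-balance solve, each element's emitted plus reflected power is distributed over all other elements through these factors, and reciprocity guarantees energy bookkeeping is consistent.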
NASA Technical Reports Server (NTRS)
Salazar, Giovanni; Droba, Justin C.; Oliver, Brandon; Amar, Adam J.
2016-01-01
With the recent development of multi-dimensional thermal protection system (TPS) material response codes, the capability to account for radiative heating has become a requirement. This paper presents recent efforts to implement such capabilities in the CHarring Ablator Response (CHAR) code developed at NASA's Johnson Space Center. This work also describes the different numerical methods implemented in the code to compute view factors for radiation problems involving multiple surfaces. Furthermore, verification and validation of the code's radiation capabilities are demonstrated by comparing solutions to analytical results, to other codes, and to radiant test data.
HELIOS: A new open-source radiative transfer code
NASA Astrophysics Data System (ADS)
Malik, Matej; Grosheintz, Luc; Grimm, Simon Lukas; Mendonça, João; Kitzmann, Daniel; Heng, Kevin
2015-12-01
I present the new open-source code HELIOS, developed to accurately describe radiative transfer in a wide variety of irradiated atmospheres. We employ a one-dimensional multi-wavelength two-stream approach with scattering. Written in CUDA C++, HELIOS exploits the GPU's potential for massive parallelization and is able to compute the TP-profile of an atmosphere in radiative equilibrium and the subsequent emission spectrum in a few minutes on a single computer (for 60 layers and 1000 wavelength bins). The required molecular opacities are obtained with the recently published code HELIOS-K [1], which calculates the line shapes from an input line list and resamples the numerous line-by-line data into a manageable k-distribution format. Based on simple equilibrium chemistry theory [2] we combine the k-distribution functions of the molecules H2O, CO2, CO & CH4 to generate a k-table, which we then employ in HELIOS. I present our results of the following: (i) various numerical tests, e.g. isothermal vs. non-isothermal treatment of layers; (ii) comparison of iteratively determined TP-profiles with their analytical parametric prescriptions [3] and of the corresponding spectra; (iii) benchmarks of TP-profiles & spectra for various elemental abundances; (iv) benchmarks of averaged TP-profiles & spectra for the exoplanets GJ1214b, HD189733b & HD209458b; (v) comparison with secondary eclipse data for HD189733b, XO-1b & CoRoT-2b. HELIOS is being developed, together with the dynamical core THOR and the chemistry solver VULCAN, in the group of Kevin Heng at the University of Bern as part of the Exoclimes Simulation Platform (ESP) [4], an open-source project aimed at providing community tools to model exoplanetary atmospheres. [1] Grimm & Heng 2015, arXiv:1503.03806. [2] Heng, Lyons & Tsai, arXiv:1506.05501; Heng & Lyons, arXiv:1507.01944. [3] e.g. Heng, Mendonca & Lee 2014, ApJS, 215, 4. [4] exoclime.net
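The k-distribution resampling that HELIOS-K performs on line-by-line data can be illustrated in miniature: within a spectral band, the opacities are rank-ordered, their rank becomes the cumulative coordinate g in [0, 1], and a handful of g-points replaces thousands of frequency points. This is a generic sketch of the idea, not HELIOS-K's implementation:

```python
import numpy as np

def k_distribution(kappa, g_points):
    """Resample line-by-line opacities within one band into a
    k-distribution: sort the opacities, interpret their rank as the
    cumulative variable g in [0, 1], and interpolate the sorted curve
    at the requested g-points (e.g. Gauss nodes)."""
    k_sorted = np.sort(np.asarray(kappa, float))
    g = (np.arange(k_sorted.size) + 0.5) / k_sorted.size
    return np.interp(g_points, g, k_sorted)

# 10 000 synthetic "line-by-line" opacity samples reduced to 8 k-values:
rng = np.random.default_rng(0)
kappa = rng.lognormal(mean=0.0, sigma=2.0, size=10_000)
k8 = k_distribution(kappa, np.linspace(0.05, 0.95, 8))
```

The resampled k-values are monotone in g by construction, which is what makes band-integrated transmission cheap to evaluate.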
CODE's new solar radiation pressure model for GNSS orbit determination
NASA Astrophysics Data System (ADS)
Arnold, D.; Meindl, M.; Beutler, G.; Dach, R.; Schaer, S.; Lutz, S.; Prange, L.; Sośnica, K.; Mervart, L.; Jäggi, A.
2015-08-01
The Empirical CODE Orbit Model (ECOM) of the Center for Orbit Determination in Europe (CODE), developed in the early 1990s, is widely used in the International GNSS Service (IGS) community. Spurious spectral lines have long been known to exist in geophysical parameters, in particular in the Earth Rotation Parameters (ERPs) and in the estimated geocenter coordinates, and these could recently be attributed to the ECOM. The effects grew gradually with the increasing influence of the GLONASS system in the CODE analysis, which has been based on a rigorous combination of GPS and GLONASS since May 2003. In a first step we show that the problems associated with the ECOM are to the largest extent caused by GLONASS, which reached full deployment by the end of 2011. GPS-only, GLONASS-only, and combined GPS/GLONASS solutions using the observations in the years 2009-2011 from a global network of 92 combined GPS/GLONASS receivers were analyzed for this purpose. In a second step we review direct solar radiation pressure (SRP) models for GNSS satellites. We demonstrate that for GPS and GLONASS satellites only even-order short-period harmonic perturbations occur along the Sun-satellite direction, and only odd-order perturbations along the direction perpendicular to both the Sun-satellite vector and the spacecraft's solar panel axis. Based on this insight we assess in a third step the performance of four candidate orbit models for the future ECOM. The geocenter coordinates, the ERP differences with respect to the IERS 08 C04 series, the misclosures for the midnight epochs of the daily orbital arcs, and the scale parameters of Helmert transformations for station coordinates serve as quality criteria. The old and updated ECOM are additionally validated with satellite laser ranging (SLR) observations and by comparing the orbits to those of the IGS and other analysis centers. Based on all tests, we present a new extended ECOM which
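The harmonic structure described above can be written out for the D (Sun-satellite) direction: a constant term plus even-order harmonics in the satellite's orbital angle relative to the Sun. This is a hedged sketch of an extended-ECOM-style acceleration term; the coefficient names and calling convention are illustrative, not CODE's actual parameterization:

```python
import numpy as np

def ecom_d_acceleration(du, d0, d_coeffs):
    """Empirical acceleration along the Sun-satellite direction D as a
    constant plus even-order harmonics of the argument of latitude
    relative to the Sun, du (radians). d_coeffs holds (cosine, sine)
    amplitude pairs for orders 2, 4, ... (units: m/s^2)."""
    acc = d0
    for i, (dc, ds) in enumerate(d_coeffs):
        n = 2 * (i + 1)                      # even orders only
        acc += dc * np.cos(n * du) + ds * np.sin(n * du)
    return acc

# Constant term plus 2nd- and 4th-order cosine terms, evaluated at du = 45 deg:
a = ecom_d_acceleration(np.pi / 4, d0=1e-7, d_coeffs=[(1e-9, 0.0), (5e-10, 0.0)])
```

The odd-order terms along the panel-normal direction would be built the same way with n = 1, 3, 5, ...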
Modeling Planet-Building Stellar Disks with Radiative Transfer Code
NASA Astrophysics Data System (ADS)
Swearingen, Jeremy R.; Sitko, Michael L.; Whitney, Barbara; Grady, Carol A.; Wagner, Kevin Robert; Champney, Elizabeth H.; Johnson, Alexa N.; Warren, Chelsea C.; Russell, Ray W.; Hammel, Heidi B.; Lisse, Casey M.; Cure, Michel; Kraus, Stefan; Fukagawa, Misato; Calvet, Nuria; Espaillat, Catherine; Monnier, John D.; Millan-Gabet, Rafael; Wilner, David J.
2015-01-01
Understanding the nature of the many planetary systems found outside of our own solar system cannot be complete without knowledge of the beginnings of these systems. By detecting planets in very young systems and modeling the disks of material around stars from which they form, we can gain a better understanding of planetary origin and evolution. The efforts presented here have been in modeling two pre-transitional disk systems using a radiative transfer code. With the first of these systems, V1247 Ori, a model that fits the spectral energy distribution (SED) well and whose parameters are consistent with existing interferometry data (Kraus et al. 2013) has been achieved. The second of these two systems, SAO 206462, has presented a different set of challenges, but encouraging SED agreement between the model and known data gives hope that the model can produce images that can be used in future interferometry work. This work was supported by NASA ADAP grant NNX09AC73G, and the IR&D program at The Aerospace Corporation.
NASA Technical Reports Server (NTRS)
Salazar, Giovanni; Droba, Justin C.; Oliver, Brandon; Amar, Adam J.
2016-01-01
With the recent development of multi-dimensional thermal protection system (TPS) material response codes, the capability to account for surface-to-surface radiation exchange in complex geometries is critical. This paper presents recent efforts to implement such capabilities in the CHarring Ablator Response (CHAR) code developed at NASA's Johnson Space Center. This work also describes the different numerical methods implemented in the code to compute geometric view factors for radiation problems involving multiple surfaces. Verification of the code's radiation capabilities and results of a code-to-code comparison are presented. Finally, a demonstration case of a two-dimensional ablating cavity with enclosure radiation accounting for a changing geometry is shown.
Radiation transport phenomena and modeling. Part A: Codes; Part B: Applications with examples
Lorence, L.J. Jr.; Beutler, D.E.
1997-09-01
This report contains the notes from the second session of the 1997 IEEE Nuclear and Space Radiation Effects Conference Short Course on Applying Computer Simulation Tools to Radiation Effects Problems. Part A discusses the physical phenomena modeled in radiation transport codes and various types of algorithmic implementations. Part B gives examples of how these codes can be used to design experiments whose results can be easily analyzed and describes how to calculate quantities of interest for electronic devices.
Development of a Monte-Carlo Radiative Transfer Code for the Juno/JIRAM Limb Measurements
NASA Astrophysics Data System (ADS)
Sindoni, G.; Adriani, A.; Mayorov, B.; Aoki, S.; Grassi, D.; Moriconi, M.; Oliva, F.
2013-09-01
The Juno/JIRAM instrument will acquire limb spectra of the Jupiter atmosphere in the infrared spectral range. The analysis of these spectra requires a radiative transfer code that takes into account the multiple scattering by particles in a spherical-shell atmosphere. Therefore, we are developing a code based on the Monte-Carlo approach to simulate the JIRAM observations. The validation of the code was performed by comparison with DISORT-based codes.
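One reason a spherical-shell treatment matters for limb observations is the long slant path near the tangent point, which a plane-parallel code cannot represent. The following small geometric helper, not part of the JIRAM code, illustrates the chord length of a limb ray through a single spherical shell:

```python
import math

def limb_path_length(r_tan, r_inner, r_outer):
    """Geometric path length of a limb ray with tangent radius r_tan
    through a spherical shell bounded by r_inner and r_outer (same
    units), assuming straight-line propagation and
    r_tan <= r_inner <= r_outer. The ray crosses the shell twice,
    hence the factor of 2."""
    return 2.0 * (math.sqrt(r_outer**2 - r_tan**2)
                  - math.sqrt(r_inner**2 - r_tan**2))
```

A Monte Carlo limb code effectively samples many such paths, with scattering redirecting photons between shells.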
Code system to compute radiation dose in human phantoms
Ryman, J.C.; Cristy, M.; Eckerman, K.F.; Davis, J.L.; Tang, J.S.; Kerr, G.D.
1986-01-01
A Monte Carlo photon transport code and a code using Monte Carlo integration of a point kernel have been revised to incorporate human phantom models for an adult female, juveniles of various ages, and a pregnant female at the end of the first trimester of pregnancy, in addition to the adult male used earlier. An analysis code has been developed for deriving recommended values of specific absorbed fractions of photon energy. The computer code system and calculational method are described, emphasizing recent improvements in methods.
International "Intercomparison of 3-Dimensional (3D) Radiation Codes" (I3RC)
NASA Technical Reports Server (NTRS)
Cahalan, Robert F.; Einaudi, Franco (Technical Monitor)
2000-01-01
An international "Intercomparison of 3-dimensional (3D) Radiation Codes" (I3RC) has been initiated. It is endorsed by the GEWEX Radiation Panel and funded jointly by the United States Department of Energy ARM program and the National Aeronautics and Space Administration Radiation Sciences program. It is a 3-phase effort whose goals are to: (1) understand the errors and limits of 3D methods; (2) provide 'baseline' cases for future 3D code development; (3) promote sharing of 3D tools; (4) derive guidelines for 3D tool selection; and (5) improve atmospheric science education in 3D radiation.
Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes
Smith, L.M.; Hochstedler, R.D.
1997-02-01
Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code).
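The first acceleration technique mentioned, replacing linear searches with binary versions, is easy to illustrate with a typical energy-grid bin lookup (a generic sketch, not ITS source code):

```python
from bisect import bisect_right

def find_bin_linear(grid, x):
    """Original-style linear scan: return index i such that
    grid[i] <= x < grid[i+1]. O(n) per lookup."""
    i = 0
    while i + 1 < len(grid) and grid[i + 1] <= x:
        i += 1
    return i

def find_bin_binary(grid, x):
    """Binary-search replacement giving the same bin for interior
    points in O(log n), clamped to valid bin indices."""
    return max(0, min(bisect_right(grid, x) - 1, len(grid) - 2))

# A small sorted energy grid (MeV), as a cross-section table might use:
grid = [0.0, 0.1, 0.5, 1.0, 5.0, 10.0]
```

Because such lookups sit inside the innermost tracking loops of a Monte Carlo code, the O(n) to O(log n) change alone can account for much of the reported speed-up.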
A Monte Carlo Code for Relativistic Radiation Transport Around Kerr Black Holes
NASA Technical Reports Server (NTRS)
Schnittman, Jeremy David; Krolik, Julian H.
2013-01-01
We present a new code for radiation transport around Kerr black holes, including arbitrary emission and absorption mechanisms, as well as electron scattering and polarization. The code is particularly useful for analyzing accretion flows made up of optically thick disks and optically thin coronae. We give a detailed description of the methods employed in the code and also present results from a number of numerical tests to assess its accuracy and convergence.
A simple code for use in shielding and radiation dosage analyses
NASA Technical Reports Server (NTRS)
Wan, C. C.
1972-01-01
A simple code for use in analyses of gamma radiation effects in laminated materials is described. Simple, good geometry is assumed, so that all multiple-collision and scattering events are excluded from consideration. The code is capable of handling laminates of up to six layers. For laminates of more than six layers, the same code may be used to incorporate two additional layers at a time, making use of punched-tape output from previous computations on all preceding layers. Spectra of the attenuated radiation are obtained as printed output and, if desired, punched-tape output.
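Under the good-geometry assumption every collision removes a photon from the beam, so the transmitted intensity through a laminate reduces to an exponential in the summed optical thicknesses. A minimal sketch (the attenuation coefficients below are illustrative, not taken from the report):

```python
import math

def transmitted_intensity(i0, layers):
    """Good-geometry (narrow-beam) attenuation through a laminate:
    I = I0 * exp(-sum(mu_i * t_i)), where each layer contributes its
    linear attenuation coefficient mu (1/cm) times thickness t (cm).
    No build-up from scattering is included, matching the assumption
    that all collided photons are removed."""
    tau = sum(mu * t for mu, t in layers)
    return i0 * math.exp(-tau)

# Three-layer laminate with illustrative (mu, thickness) pairs:
i_out = transmitted_intensity(1.0, [(1.2, 0.5), (0.6, 1.0), (0.2, 2.0)])
```

The layer-at-a-time restart described in the abstract corresponds to carrying the attenuated spectrum from one `transmitted_intensity`-style pass as the `i0` of the next.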
TRAP/SEE Code Users Manual for Predicting Trapped Radiation Environments
NASA Technical Reports Server (NTRS)
Armstrong, T. W.; Colborn, B. L.
2000-01-01
TRAP/SEE is a PC-based computer code with a user-friendly interface which predicts the ionizing radiation exposure of spacecraft having orbits in the Earth's trapped radiation belts. The code incorporates the standard AP8 and AE8 trapped proton and electron models but also allows application of an improved database interpolation method. The code treats low-Earth as well as highly-elliptical Earth orbits, taking into account trajectory perturbations due to gravitational forces from the Moon and Sun, atmospheric drag, and solar radiation pressure. Orbit-average spectra, peak spectra per orbit, and instantaneous spectra at points along the orbit trajectory are calculated. Described in this report are the features, models, model limitations and uncertainties, input and output descriptions, and example calculations and applications for the TRAP/SEE code.
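The orbit-average spectrum mentioned above is a time-weighted mean of the instantaneous spectra sampled along the trajectory; a minimal sketch of the statistic, not TRAP/SEE's implementation:

```python
import numpy as np

def orbit_average(spectra, dt):
    """Time-weighted orbit-average of instantaneous flux spectra.
    spectra: (n_points, n_energies) array sampled along the orbit;
    dt: time (s) spent at each trajectory point. Returns the average
    spectrum over the orbit."""
    dt = np.asarray(dt, float)
    spectra = np.asarray(spectra, float)
    return (dt[:, None] * spectra).sum(axis=0) / dt.sum()

# Two trajectory points, two energy bins, unequal dwell times:
avg = orbit_average([[1.0, 2.0], [3.0, 4.0]], [1.0, 3.0])
```

The peak-per-orbit spectrum would instead take the maximum over trajectory points rather than this weighted mean.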
Quasilinear simulation of auroral kilometric radiation by a relativistic Fokker-Planck code
Matsuda, Y.
1991-01-01
An intense terrestrial radiation called the auroral kilometric radiation (AKR) is believed to be generated by the cyclotron maser instability. We study the quasilinear evolution of this instability by means of a two-dimensional relativistic Fokker-Planck code which treats waves and distributions self-consistently, including radiation loss and an electron source and sink. We compare the distributions and wave amplitude with spacecraft observations to elucidate the physical processes involved.
HETC radiation transport code development for cosmic ray shielding applications in space.
Townsend, L W; Miller, T M; Gabriel, Tony A
2005-01-01
In order to facilitate three-dimensional analyses of space radiation shielding scenarios for future space missions, the Monte Carlo radiation transport code HETC is being extended to include transport of energetic heavy ions, such as are found in the galactic cosmic ray spectrum in space. Recently, an event generator capable of providing nuclear interaction data for use in HETC was developed and incorporated into the code. The event generator predicts the interaction product yields and production angles and energies using nuclear models and Monte Carlo techniques. Testing and validation of the extended transport code has begun. In this work, the current status of code modifications, which enable energetic heavy ions and their nuclear reaction products to be transported through thick shielding, are described. Also, initial results of code testing against available laboratory beam data for energetic heavy ions interacting in thick targets are presented. PMID:16604614
The Performance of Current Atmospheric Radiation Codes in Phase I of CIRC
NASA Technical Reports Server (NTRS)
Oreopoulos, L.; Mlawer, E.; Shippert, T.; Cole, J.; Fomin, B.; Iacono, M.; Jin, Z.; Li, J.; Manners, J.; Raisanen, P.; Rose, F.; Zhang, Y.; Wilson, M.; Rossow, W.
2012-01-01
The Continual Intercomparison of Radiation Codes (CIRC) is intended as an evolving and regularly updated reference source for the evaluation of radiative transfer (RT) codes used in global climate models and other atmospheric applications. In our presentation we will discuss our evaluation of the performance of 13 shortwave and 11 longwave RT codes that participated in Phase I of CIRC. CIRC differs from previous intercomparisons in that it relies on an observationally validated catalogue of cases. The seven CIRC Phase I baseline cases, five cloud-free and two with overcast liquid clouds, are built around observations by the Atmospheric Radiation Measurement (ARM) program that satisfy the goals of Phase I, namely to examine RT model performance in realistic, yet not overly complex, atmospheric conditions. Besides the seven baseline cases, additional idealized "subcases" are also examined to facilitate interpretation of model errors. We will quantify individual model performance with respect to reference line-by-line calculations, and will also highlight RT code behavior for conditions of doubled CO2, aspects of utilizing a spectral specification of surface albedo, and the impact of the inclusion of scattering in the thermal infrared. Our analysis suggests that RT codes should work towards improving their calculation of diffuse shortwave flux, shortwave absorption, treatment of spectral surface albedo, and shortwave CO2 forcing. Despite practical difficulties in comparing our results to previous results by the Intercomparison of Radiation Codes in Climate Models (ICRCCM) conducted about 20 years ago, it appears that the current generation of RT codes does indeed perform better than the codes of the ICRCCM era. By enhancing the range of conditions under which participating codes are tested, future CIRC phases will hopefully allow even more rigorous examination of RT code performance.
CRASH: A BLOCK-ADAPTIVE-MESH CODE FOR RADIATIVE SHOCK HYDRODYNAMICS-IMPLEMENTATION AND VERIFICATION
Van der Holst, B.; Toth, G.; Sokolov, I. V.; Myra, E. S.; Fryxell, B.; Drake, R. P.; Powell, K. G.; Holloway, J. P.; Stout, Q.; Adams, M. L.; Morel, J. E.; Karni, S.
2011-06-01
We describe the Center for Radiative Shock Hydrodynamics (CRASH) code, a block-adaptive-mesh code for multi-material radiation hydrodynamics. The implementation solves the radiation diffusion model with a gray or multi-group method and uses a flux-limited diffusion approximation to recover the free-streaming limit. Electrons and ions are allowed to have different temperatures and we include flux-limited electron heat conduction. The radiation hydrodynamic equations are solved in the Eulerian frame by means of a conservative finite-volume discretization in either one-, two-, or three-dimensional slab geometry or in two-dimensional cylindrical symmetry. An operator-split method is used to solve these equations in three substeps: (1) an explicit step of a shock-capturing hydrodynamic solver; (2) a linear advection of the radiation in frequency-logarithm space; and (3) an implicit solution of the stiff radiation diffusion, heat conduction, and energy exchange. We present a suite of verification test problems to demonstrate the accuracy and performance of the algorithms. The applications are for astrophysics and laboratory astrophysics. The CRASH code is an extension of the Block-Adaptive Tree Solarwind Roe Upwind Scheme (BATS-R-US) code with a new radiation transfer and heat conduction library and equation-of-state and multi-group opacity solvers. Both CRASH and BATS-R-US are part of the publicly available Space Weather Modeling Framework.
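The third, implicit substep exists because the radiation diffusion and energy-exchange terms are stiff: an explicit update would force a prohibitively small time step. A toy backward-Euler diffusion step on a 1D grid illustrates why implicit treatment is unconditionally stable; this is a generic sketch, not CRASH's solver:

```python
import numpy as np

def implicit_diffusion_step(u, d, dt, dx):
    """One backward-Euler step of du/dt = d * d2u/dx2 on a uniform 1D
    grid with zero-flux boundaries, solved as a dense linear system
    (I + dt*L) u_new = u. Stable for any dt, unlike an explicit step
    which requires dt <= dx^2 / (2 d)."""
    n = u.size
    r = d * dt / dx**2
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 1.0 + 2.0 * r
        if i > 0:
            A[i, i - 1] = -r
        if i < n - 1:
            A[i, i + 1] = -r
    A[0, 0] = 1.0 + r           # zero-flux (reflecting) boundaries
    A[-1, -1] = 1.0 + r
    return np.linalg.solve(A, u)

# A unit spike of energy diffuses but its total is conserved:
u_new = implicit_diffusion_step(np.array([0.0, 0.0, 1.0, 0.0, 0.0]),
                                d=1.0, dt=0.5, dx=1.0)
```

A production code would use a sparse or matrix-free iterative solver rather than the dense solve shown here, but the split structure is the same.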
CRASH: A Block-Adaptive-Mesh Code for Radiative Shock Hydrodynamics
NASA Astrophysics Data System (ADS)
van der Holst, B.; Toth, G.; Sokolov, I. V.; Powell, K. G.; Holloway, J. P.; Myra, E. S.; Stout, Q.; Adams, M. L.; Morel, J. E.; Drake, R. P.
2011-01-01
We describe the CRASH (Center for Radiative Shock Hydrodynamics) code, a block adaptive mesh code for multi-material radiation hydrodynamics. The implementation solves the radiation diffusion model with the gray or multigroup method and uses a flux limited diffusion approximation to recover the free-streaming limit. The electrons and ions are allowed to have different temperatures and we include a flux limited electron heat conduction. The radiation hydrodynamic equations are solved in the Eulerian frame by means of a conservative finite volume discretization in either one, two, or three-dimensional slab geometry or in two-dimensional cylindrical symmetry. An operator split method is used to solve these equations in three substeps: (1) solve the hydrodynamic equations with shock-capturing schemes, (2) a linear advection of the radiation in frequency-logarithm space, and (3) an implicit solve of the stiff radiation diffusion, heat conduction, and energy exchange. We present a suite of verification test problems to demonstrate the accuracy and performance of the algorithms. The CRASH code is an extension of the Block-Adaptive Tree Solarwind Roe Upwind Scheme (BATS-R-US) code with this new radiation transfer and heat conduction library and equation-of-state and multigroup opacity solvers. Both CRASH and BATS-R-US are part of the publicly available Space Weather Modeling Framework (SWMF).
Radiation and confinement in 0D fusion systems codes
NASA Astrophysics Data System (ADS)
Lux, H.; Kemp, R.; Fable, E.; Wenninger, R.
2016-07-01
In systems modelling for fusion power plants, it is essential to robustly predict the performance of a given machine design (including its respective operating scenario). One measure of machine performance is the energy confinement time τ_E, which is typically predicted from experimentally derived confinement scaling laws (e.g. IPB98(y,2)). However, the conventionally used scaling laws have been derived for ITER, which, unlike a fusion power plant, will not have significant radiation inside the separatrix. In the absence of a new confinement scaling relevant to high core radiation, we propose an ad hoc correction to the loss power P_L used in the ITER confinement scaling and to the calculation of the stored energy W_th, based on the radiation losses from the 'core' of the plasma, P_rad,core. Using detailed ASTRA/TGLF simulations, we find that an appropriate definition of P_rad,core is 60% of all radiative losses inside a normalised minor radius ρ_core = 0.75. We consider this an improvement for current design predictions, but it is far from an ideal solution. We therefore encourage more detailed experimental and theoretical work on this issue.
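The proposed correction follows directly from the definition above: subtract 60% of the radiative losses inside the normalised minor radius 0.75 from the loss power entering the confinement scaling. A minimal sketch (function and argument names are illustrative, not the paper's notation):

```python
def corrected_loss_power(p_heat, p_rad_profile, rho_grid,
                         f_core=0.6, rho_core=0.75):
    """Ad hoc correction to the loss power used in the confinement
    scaling: P_L,corr = P_L - f_core * P_rad(rho <= rho_core).
    p_rad_profile gives radiated power per radial bin (MW) located at
    the normalised minor radii in rho_grid."""
    p_rad_core = f_core * sum(p for p, rho in zip(p_rad_profile, rho_grid)
                              if rho <= rho_core)
    return p_heat - p_rad_core

# 100 MW heating; radiated power binned at four radii:
p_l = corrected_loss_power(100.0, [1.0, 2.0, 3.0, 4.0],
                           [0.25, 0.5, 0.75, 1.0])
```

In a 0D systems code this corrected loss power would then feed the IPB98(y,2)-style scaling in place of the raw heating power.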
NASA Technical Reports Server (NTRS)
Meyer, H. D.
1993-01-01
The Acoustic Radiation Code (ARC) is a finite element program used on the IBM mainframe to predict far-field acoustic radiation from a turbofan engine inlet. In this report, requirements for developers of internal aerodynamic codes regarding the use of their program output as input for the ARC are discussed. More specifically, the particular input needed from the Bolt, Beranek and Newman/Pratt and Whitney (turbofan source noise generation) Code (BBN/PWC) is described. In a separate analysis, a method of coupling the source and radiation models that recognizes waves crossing the interface in both directions has been derived. A preliminary version of the coupled code has been developed and used for initial evaluation of coupling issues. Results thus far have shown that reflection from the inlet is sufficient to indicate that full coupling of the source and radiation fields is needed for accurate noise predictions. Also, for this contract, the ARC has been modified for use on the Sun and Silicon Graphics Iris UNIX workstations. Changes and additions involved in this effort are described in an appendix.
Radiative transfer codes for atmospheric correction and aerosol retrieval: intercomparison study.
Kotchenova, Svetlana Y; Vermote, Eric F; Levy, Robert; Lyapustin, Alexei
2008-05-01
Results are summarized for a scientific project devoted to the comparison of four atmospheric radiative transfer codes incorporated into different satellite data processing algorithms, namely, 6SV1.1 (second simulation of a satellite signal in the solar spectrum, vector, version 1.1), RT3 (radiative transfer), MODTRAN (moderate resolution atmospheric transmittance and radiance code), and SHARM (spherical harmonics). The performance of the codes is tested against well-known benchmarks, such as Coulson's tabulated values and a Monte Carlo code. The influence of revealed differences on aerosol optical thickness and surface reflectance retrieval is estimated theoretically by using a simple mathematical approach. All information about the project can be found at http://rtcodes.ltdri.org. PMID:18449285
Intercomparison of Shortwave Radiative Transfer Codes and Measurements
Halthore, Rangasayi N.; Crisp, David; Schwartz, Stephen E.; Anderson, Gail; Berk, A.; Bonnel, B.; Boucher, Olivier; Chang, Fu-Lung; Chou, Ming-Dah; Clothiaux, Eugene E.; Dubuisson, P.; Fomin, Boris; Fouquart, Y.; Freidenreich, S.; Gautier, Catherine; Kato, Seiji; Laszlo, Istvan; Li, Zhanqing; Mather, Jim H.; Plana-Fattori, Artemio; Ramaswamy, V.; Ricchiazzi, P.; Shiren, Y.; Trishchenko, A.; Wiscombe, Warren J.
2005-06-03
Computation of components of shortwave (SW) or solar irradiance in the surface-atmospheric system forms the basis of an intercomparison between 16 radiative transfer models of varying spectral resolution, ranging from line-by-line models to broadband and general circulation models. In order of increasing complexity the components are: direct solar irradiance at the surface, diffuse irradiance at the surface, diffuse upward flux at the surface, and diffuse upward flux at the top of the atmosphere. These components allow computation of the atmospheric absorptance. Four cases are considered, from pure molecular atmospheres to atmospheres with aerosols and an atmosphere with a simple uniform cloud. The molecular and aerosol cases allow comparison of aerosol forcing calculations among models. A cloud-free case with measured atmospheric and aerosol properties and measured shortwave radiation components provides an absolute basis for evaluating the models. For the aerosol-free and cloud-free dry atmospheres, models agree to within 1% (root mean square deviation as a percentage of mean) in broadband direct solar irradiance at the surface; the agreement is relatively poor at 5% for a humid atmosphere. A comparison of atmospheric absorptance, computed from components of SW radiation, shows that agreement among models is understandably much worse at 3% and 10% for dry and humid atmospheres, respectively. Inclusion of aerosols generally makes the agreement among models worse than when no aerosols are present, with some exceptions. Modeled diffuse surface irradiance is higher than measurements for all models for the same model inputs. Inclusion of an optically thick low cloud in a tropical atmosphere, a stringent test for multiple scattering calculations, produces, in general, better agreement among models for a low solar zenith angle (SZA = 30°) than for a high SZA (75°). All models show about a 30% increase in broadband absorptance for 30° SZA relative to the clear-sky case and almost no
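The spread metric quoted above, root-mean-square deviation as a percentage of the multi-model mean, can be computed as follows (a generic sketch of the statistic, not the intercomparison's code):

```python
import math

def rms_deviation_percent(values):
    """Model spread: root-mean-square deviation of the individual model
    values from their mean, expressed as a percentage of that mean."""
    mean = sum(values) / len(values)
    rms = math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))
    return 100.0 * rms / mean

# Three hypothetical model values for the same broadband flux (W/m^2):
spread = rms_deviation_percent([98.0, 100.0, 102.0])
```

A spread of about 1.6% for these illustrative values would sit between the reported 1% (dry) and 5% (humid) agreement levels for direct surface irradiance.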
Modelling Radiative Stellar Winds with the SIMECA Code
NASA Astrophysics Data System (ADS)
Stee, Ph.
Using the SIMECA code developed by Stee & Araújo ([CITE]), we report theoretical HI visible and near-IR line profiles, i.e. Hα (6562 Å), Hβ (4861 Å) and Brγ (21 656 Å), and intensity maps for a large set of parameters representative of early to late Be spectral types. We have computed the size of the emitting region in the Brγ line and its nearby continuum, which both originate from a very extended region of at least 40 stellar radii, twice the size of the Hα emitting region. We predict the relative fluxes from the central star and the envelope contribution in the given lines and in the continuum for a wide range of parameters characterizing the disk models. Finally, we have also studied the effect of changing the spectral type on our results, and we obtain a clear correlation between the luminosity in Hα and in the infrared.
Method for calculating internal radiation and ventilation with the ADINAT heat-flow code
Butkovich, T.R.; Montan, D.N.
1980-04-01
One objective of the spent fuel test in Climax Stock granite (SFTC) is to correctly model the thermal transport and the changes in the stress field and accompanying displacements from the application of the thermal loads. We have chosen the ADINA and ADINAT finite element codes to do these calculations. ADINAT is a heat transfer code compatible with the ADINA displacement and stress analysis code. The heat flow problem encountered at SFTC requires a code with conduction, radiation, and ventilation capabilities, which the present version of ADINAT does not have. We have devised a method for calculating internal radiation and ventilation with the ADINAT code. This method effectively reproduces the results from the TRUMP multi-dimensional finite difference code, which correctly models radiative heat transport between drift surfaces, conductive and convective thermal transport to and through air in the drifts, and mass flow of air in the drifts. The temperature histories for each node in the finite element mesh calculated with ADINAT using this method can be used directly in the ADINA thermal-mechanical calculation.
Simulations of implosions with a 3D, parallel, unstructured-grid, radiation-hydrodynamics code
Kaiser, T B; Milovich, J L; Prasad, M K; Rathkopf, J; Shestakov, A I
1998-12-28
An unstructured-grid, radiation-hydrodynamics code is used to simulate implosions. Although most of the problems are spherically symmetric, they are run on 3D, unstructured grids in order to test the code's ability to maintain spherical symmetry of the converging waves. Three problems, of increasing complexity, are presented. In the first, a cold, spherical, ideal gas bubble is imploded by an enclosing high pressure source. For the second, we add non-linear heat conduction and drive the implosion with twelve laser beams centered on the vertices of an icosahedron. In the third problem, a NIF capsule is driven with a Planckian radiation source.
On the Development of a Deterministic Three-Dimensional Radiation Transport Code
NASA Technical Reports Server (NTRS)
Rockell, Candice; Tweed, John
2011-01-01
Since astronauts on future deep space missions will be exposed to dangerous radiation, there is a need to accurately model the transport of radiation through shielding materials and to estimate the received radiation dose. In response to this need, a three-dimensional deterministic code for space radiation transport is now under development. The new code GRNTRN is based on a Green's function solution of the Boltzmann transport equation that is constructed in the form of a Neumann series. Analytical approximations will be obtained for the first three terms of the Neumann series and the remainder will be estimated by a non-perturbative technique. This work discusses progress made to date and exhibits some computations based on the first two Neumann series terms.
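The Neumann-series idea itself is easy to illustrate on a discretized problem: a transport-like integral equation phi = S + K phi is solved by accumulating phi = S + K S + K^2 S + .... The toy 3-point kernel below is purely illustrative; GRNTRN's actual Green's-function construction is far more elaborate.

```python
import numpy as np

def neumann_series_solve(K, S, terms=30):
    """Accumulate phi = S + K S + K^2 S + ... (first `terms` terms)."""
    phi = np.zeros_like(S)
    term = S.copy()
    for _ in range(terms):
        phi += term
        term = K @ term          # apply the scattering operator once more
    return phi

# Toy problem with spectral radius well below 1, so the series converges.
K = np.array([[0.10, 0.05, 0.00],
              [0.05, 0.10, 0.05],
              [0.00, 0.05, 0.10]])
S = np.array([1.0, 0.5, 0.25])

phi_series = neumann_series_solve(K, S)
phi_direct = np.linalg.solve(np.eye(3) - K, S)   # exact (I - K)^-1 S
assert np.allclose(phi_series, phi_direct)
```

Truncating after a few terms and estimating the remainder, as the abstract describes, trades accuracy in the multiply-scattered tail for analytic tractability of the leading terms.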
MODTRAN6: a major upgrade of the MODTRAN radiative transfer code
NASA Astrophysics Data System (ADS)
Berk, Alexander; Conforti, Patrick; Kennett, Rosemary; Perkins, Timothy; Hawes, Frederick; van den Bosch, Jeannette
2014-06-01
The MODTRAN6 radiative transfer (RT) code is a major advancement over earlier versions of the MODTRAN atmospheric transmittance and radiance model. This version of the code incorporates modern software architecture including an application programming interface, enhanced physics features including a line-by-line algorithm, a supplementary physics toolkit, and new documentation. The application programming interface has been developed for ease of integration into user applications. The MODTRAN code has been restructured towards a modular, object-oriented architecture to simplify upgrades as well as facilitate integration with other developers' codes. MODTRAN now includes a line-by-line algorithm for high resolution RT calculations as well as coupling to optical scattering codes for easy implementation of custom aerosols and clouds.
A multigroup radiation diffusion test problem: Comparison of code results with analytic solution
Shestakov, A I; Harte, J A; Bolstad, J H; Offner, S R
2006-12-21
We consider a 1D, slab-symmetric test problem for the multigroup radiation diffusion and matter energy balance equations. The test simulates diffusion of energy from a hot central region. Opacities vary with the cube of the frequency and radiation emission is given by a Wien spectrum. We compare results from two LLNL codes, Raptor and Lasnex, with tabular data that define the analytic solution.
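One ingredient of such a test is simple enough to sketch: integrating a Wien emission shape over a set of frequency groups to get group emission rates. The group edges and normalization below are illustrative choices, not the parameters of the Shestakov et al. problem.

```python
import numpy as np

def wien_spectrum(nu, kT):
    """Unnormalized Wien spectral shape ~ nu^3 exp(-nu/kT) (nu, kT in the same units)."""
    return nu**3 * np.exp(-nu / kT)

def group_emission(edges, kT, n_sub=400):
    """Integrate the Wien shape over each frequency group with the trapezoid rule."""
    out = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        nu = np.linspace(lo, hi, n_sub)
        f = wien_spectrum(nu, kT)
        out.append(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(nu)))
    return np.array(out)

edges = np.array([0.0, 1.0, 2.0, 4.0, 12.0])   # group boundaries, in units of kT
groups = group_emission(edges, kT=1.0)
# The total over the groups should approach the analytic integral of
# x^3 exp(-x) over (0, inf), which is Gamma(4) = 6.
assert abs(groups.sum() - 6.0) < 0.05
```

A multigroup diffusion code would pair group-integrated emission terms like these with group-averaged opacities in the coupled matter-radiation energy balance.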
RRTMGP: A fast and accurate radiation code for the next decade
NASA Astrophysics Data System (ADS)
Mlawer, E. J.; Pincus, R.; Wehe, A.; Delamere, J.
2015-12-01
Atmospheric radiative processes are key drivers of the Earth's climate and must be accurately represented in global circulation models (GCMs) to allow faithful simulations of the planet's past, present, and future. The radiation code RRTMG is widely utilized by global modeling centers for both climate and weather predictions, but it has become increasingly out-of-date. The code's structure is not well suited for the current generation of computer architectures and its stored absorption coefficients are not consistent with the most recent spectroscopic information. We are developing a new broadband radiation code for the current generation of computational architectures. This code, called RRTMGP, will be a completely restructured and modern version of RRTMG. The new code preserves the strengths of the existing RRTMG parameterization, especially the high accuracy of the k-distribution treatment of absorption by gases, but the entire code is being rewritten to provide highly efficient computation across a range of architectures. Our redesign includes refactoring the code into discrete kernels corresponding to fundamental computational elements (e.g. gas optics), optimizing the code for operating on multiple columns in parallel, simplifying the subroutine interface, revisiting the existing gas optics interpolation scheme to reduce branching, and adding flexibility with respect to run-time choices of streams, need for consideration of scattering, aerosol and cloud optics, etc. The result of the proposed development will be a single, well-supported and well-validated code amenable to optimization across a wide range of platforms. Our main emphasis is on highly-parallel platforms including Graphical Processing Units (GPUs) and Many-Integrated-Core processors (MICs), which experience shows can accelerate broadband radiation calculations by as much as a factor of fifty. RRTMGP will provide highly efficient and accurate radiative flux calculations for coupled global
3D Polarized Radiative Transfer for Solar System Applications Using the public-domain HYPERION Code
NASA Astrophysics Data System (ADS)
Wolff, M. J.; Robitaille, T.; Whitney, B. A.
2012-12-01
We present a public-domain radiative transfer tool that will allow researchers to examine a wide range of interesting solar system applications. Hyperion is a new three-dimensional continuum Monte-Carlo radiative transfer code that is designed to be as general as possible, allowing radiative transfer to be computed through a variety of three-dimensional grids (Robitaille, 2011, Astronomy & Astrophysics 536 A79). The main part of the code is problem-independent, and only requires the user to define the three-dimensional density structure, the opacity, and the illumination properties (as well as a few parameters that control execution and output of the code). Hyperion is written in Fortran 90 and parallelized using the MPI-2 standard. It is bundled with Python libraries that enable very flexible pre- and post-processing options (arbitrary shapes, multiple aerosol components, etc.). These routines are very amenable to user extensibility. The package is currently distributed at www.hyperion-rt.org. Our presentation will feature 1) a brief overview of the code, including a description of the solar system-specific modifications that we have made beyond the capabilities in the original release; 2) several solar system applications (e.g., the Deep Impact plume, the Martian atmosphere); and 3) discussion of availability and distribution of code components via www.hyperion-rt.org.
Creation and utilization of a World Wide Web based space radiation effects code: SIREST.
Singleterry, R C; Wilson, J W; Shinn, J L; Tripathi, R K; Thibeault, S A; Noor, A K; Cucinotta, F A; Badavi, F F; Chang, C K; Qualls, G D; Clowdsley, M S; Kim, M H; Heinbockel, J H; Norbury, J; Blattning, S R; Miller, J; Zeitlin, C; Heilbronn, L H
2001-01-01
In order for humans and electronics to fully and safely operate in the space environment, codes like HZETRN (High Charge and Energy Transport) must be included in any designer's toolbox for design evaluation with respect to radiation damage. Currently, spacecraft designers do not have easy access to accurate radiation codes like HZETRN to evaluate their design for radiation effects on humans and electronics. Today, the World Wide Web is sophisticated enough to support the entire HZETRN code and all of the associated pre- and post-processing tools. This package is called SIREST (Space Ionizing Radiation Effects and Shielding Tools). There are many advantages to SIREST. The most important advantage is the instant update capability of the web. Another major advantage is the modularity that the web imposes on the code. Right now, the major disadvantage of SIREST will be its modularity inside the designer's system. This mostly comes from the fact that a consistent interface between the designer and the computer system to evaluate the design is incomplete. This, however, is to be solved in the Intelligent Synthesis Environment (ISE) program currently being funded by NASA. PMID:11770545
NASA Astrophysics Data System (ADS)
Townsend, L. W.; Porter, J.; Spence, H. E.; Golightly, M. J.; Smith, S. S.; Schwadron, N.; Kasper, J. C.; Case, A. W.; Blake, J. B.; Mazur, J. E.; Looper, M. D.; Zeitlin, C. J.
2014-12-01
The Cosmic Ray Telescope for the Effects of Radiation (CRaTER) instrument on the Lunar Reconnaissance Orbiter (LRO) spacecraft measures the energy depositions by solar and galactic cosmic radiation in its silicon detectors. These energy depositions are converted to linear energy transfer (LET) spectra, which can contribute to benchmarking space radiation transport codes and can also be used to estimate doses for the lunar environment. In this work the Monte Carlo transport code HETC-HEDS (High Energy Transport Code - Human Exploration and Development in Space) and the deterministic NASA space radiation transport code HZETRN2010 are used to estimate LET and dose contributions from the incident primary ions and their charged secondaries produced in nuclear collisions within the components of the CRaTER instrument. Comparisons of the calculated LET spectra with measurements of LET from the CRaTER instrument are made and clearly show the importance of including corrections to the calculated average energy deposition spectra in the silicon detectors using a Vavilov distribution function.
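The basic reduction step behind a measured LET spectrum can be sketched directly: an energy deposition in a silicon detector of known thickness becomes an LET value. The detector thickness and deposition below are made-up illustrative numbers, not CRaTER values, and a real analysis also handles path-length variation and the Vavilov straggling the abstract highlights.

```python
SI_DENSITY_G_CM3 = 2.329  # density of silicon

def deposition_to_let(delta_e_mev, thickness_um):
    """Return LET in keV/um and in MeV*cm^2/mg for a normal-incidence path."""
    let_kev_per_um = delta_e_mev * 1000.0 / thickness_um
    # Mass LET: divide by areal density; 1 um = 1e-4 cm, 1 g = 1000 mg.
    path_mg_cm2 = SI_DENSITY_G_CM3 * thickness_um * 1e-4 * 1000.0
    let_mev_cm2_mg = delta_e_mev / path_mg_cm2
    return let_kev_per_um, let_mev_cm2_mg

# Hypothetical event: 1 MeV deposited in a 140-um silicon detector.
kev_um, mev_cm2_mg = deposition_to_let(delta_e_mev=1.0, thickness_um=140.0)
assert kev_um > 0.0 and mev_cm2_mg > 0.0
```

Binning many such per-event LET values produces the spectrum that is then compared against transport-code predictions.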
Multigroup Three-Dimensional Direct Integration Method Radiation Transport Analysis Code System.
1987-09-18
Version 00 TRISTAN solves the three-dimensional, fixed-source, Boltzmann transport equation for neutrons or gamma rays in rectangular geometry. The code can solve an adjoint problem as well as a usual transport problem. TRISTAN is a suitable tool to analyze radiation shielding problems such as streaming and deep penetration problems.
Creation and utilization of a World Wide Web based space radiation effects code: SIREST
NASA Technical Reports Server (NTRS)
Singleterry, R. C. Jr; Wilson, J. W.; Shinn, J. L.; Tripathi, R. K.; Thibeault, S. A.; Noor, A. K.; Cucinotta, F. A.; Badavi, F. F.; Chang, C. K.; Qualls, G. D.; Clowdsley, M. S.; Kim, M. H.; Heinbockel, J. H.; Norbury, J.; Blattning, S. R.; Miller, J.; Zeitlin, C.; Heilbronn, L. H.
2001-01-01
In order for humans and electronics to fully and safely operate in the space environment, codes like HZETRN (High Charge and Energy Transport) must be included in any designer's toolbox for design evaluation with respect to radiation damage. Currently, spacecraft designers do not have easy access to accurate radiation codes like HZETRN to evaluate their design for radiation effects on humans and electronics. Today, the World Wide Web is sophisticated enough to support the entire HZETRN code and all of the associated pre- and post-processing tools. This package is called SIREST (Space Ionizing Radiation Effects and Shielding Tools). There are many advantages to SIREST. The most important advantage is the instant update capability of the web. Another major advantage is the modularity that the web imposes on the code. Right now, the major disadvantage of SIREST will be its modularity inside the designer's system. This mostly comes from the fact that a consistent interface between the designer and the computer system to evaluate the design is incomplete. This, however, is to be solved in the Intelligent Synthesis Environment (ISE) program currently being funded by NASA.
Development of a coupling code for PWR reactor cavity radiation streaming calculation
Zheng, Z.; Wu, H.; Cao, L.; Zheng, Y.; Zhang, H.; Wang, M.
2012-07-01
PWR reactor cavity radiation streaming is important for the safety of personnel and equipment; thus, calculations have to be performed to evaluate the neutron flux distribution around the reactor. For this calculation, deterministic codes have difficulty with fine geometrical modeling and need huge computer resources, while Monte Carlo codes require very long sampling times to obtain results with acceptable precision. Therefore, a coupling method has been developed to eliminate these two problems in each code. In this study, we develop a coupling code named DORT2MCNP to link the Sn code DORT and the Monte Carlo code MCNP. DORT2MCNP is used to produce a combined surface source containing top, bottom, and side surfaces simultaneously. Because the SDEF card is unsuitable for the combined surface source, we modify the SOURCE subroutine of MCNP and compile MCNP for this application. Numerical results demonstrate the correctness of the coupling code DORT2MCNP and show reasonable agreement between the coupling method and the other two codes (DORT and MCNP). (authors)
Code System to Calculate Radiation Dose Rates Relative to Spent Fuel Shipping Casks.
1993-05-20
Version 00 QBF calculates and plots, in a short running time, three-dimensional radiation dose rate distributions in the form of contour maps on specified planes resulting from cylindrical sources loaded into vehicles or ships. Shielding effects by steel walls and shielding material layers are taken into account in addition to the shadow effect among casks. This code system identifies the critical points on which to focus when designing the radiation shielding structure and where each of the spent fuel shipping casks should be stored. The code GRAPH reads the output data file of QBF and plots it using the HGX graphics library. QBF unifies the functions of the SMART and MANYCASK codes included in CCC-482.
User's manual for the Heat Pipe Space Radiator design and analysis Code (HEPSPARC)
NASA Technical Reports Server (NTRS)
Hainley, Donald C.
1991-01-01
A heat pipe space radiator code (HEPSPARC) was written for the NASA Lewis Research Center and is used for the design and analysis of a radiator that is constructed from a pumped fluid loop that transfers heat to the evaporative section of heat pipes. This manual is designed to familiarize the user with this new code and to serve as a reference for its use. This manual documents the completed work and is intended to be the first step towards verification of the HEPSPARC code. Details are furnished to provide a description of all the requirements and variables used in the design and analysis of a combined pumped loop/heat pipe radiator system. A description of the subroutines used in the program is furnished for those interested in understanding its detailed workings.
FURN3D: A computer code for radiative heat transfer in pulverized coal furnaces
Ahluwalia, R.K.; Im, K.H.
1992-08-01
A computer code FURN3D has been developed for assessing the impact of burning different coals on the heat absorption pattern in pulverized coal furnaces. The code is unique in its ability to conduct detailed spectral calculations of radiation transport in furnaces, fully accounting for the size distributions of char, soot and ash particles, ash content, and ash composition. The code uses a hybrid technique of solving the three-dimensional radiation transport equation for absorbing, emitting and anisotropically scattering media. The technique achieves an optimal mix of computational speed and accuracy by combining the discrete ordinate method (S4), modified differential approximation (MDA) and P1 approximation in different ranges of optical thickness. The code uses spectroscopic data for estimating the absorption coefficients of participating gases CO2, H2O and CO. It invokes Mie theory for determining the extinction and scattering coefficients of combustion particulates. The optical constants of char, soot and ash are obtained from dispersion relations derived from reflectivity, transmissivity and extinction measurements. A control-volume formulation is adopted for determining the temperature field inside the furnace. A simple char burnout model is employed for estimating heat release and evolution of particle size distribution. The code is written in Fortran 77, has modular form, and is machine-independent. The computer memory required by the code depends upon the number of grid points specified and whether the transport calculations are performed on a spectral or gray basis.
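The hybrid idea of switching solver by optical thickness can be sketched as a simple regime selector. The thresholds below are illustrative stand-ins, not FURN3D's actual switching criteria.

```python
def pick_rt_method(optical_thickness, thin_limit=0.1, thick_limit=10.0):
    """Choose a radiation-solver regime from the optical thickness of a zone.

    Illustrative thresholds only: transport-like methods where the medium is
    optically thin, diffusion-like approximations where it is optically thick.
    """
    if optical_thickness < thin_limit:
        return "discrete ordinates (S4)"
    if optical_thickness < thick_limit:
        return "modified differential approximation (MDA)"
    return "P1 approximation"

assert pick_rt_method(0.01).startswith("discrete")
assert pick_rt_method(1.0).startswith("modified")
assert pick_rt_method(100.0).startswith("P1")
```

The payoff of such a hybrid is that the expensive angular discretization is only paid where the radiation field is genuinely anisotropic.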
ICRCCM Phase 2: Verification and calibration of radiation codes in climate models
Ellingson, R.G.; Wiscombe, W.J.; Murcray, D.; Smith, W.; Strauch, R.
1991-01-01
Following the finding by the InterComparison of Radiation Codes used in Climate Models (ICRCCM) of large differences among fluxes predicted by sophisticated radiation models that could not be sorted out because of the lack of a set of accurate atmospheric spectral radiation data measured simultaneously with the important radiative properties of the atmosphere, our team of scientists proposed to remedy the situation by carrying out a comprehensive program of measurement and analysis called SPECTRE (Spectral Radiance Experiment). SPECTRE will establish an absolute standard against which to compare models, and will aim to remove the "hidden variables" (unknown humidities, aerosols, etc.) which radiation modelers have invoked to excuse disagreements with observation. The data to be collected during SPECTRE will form the test bed for the second phase of ICRCCM, namely verification and calibration of radiation codes used in climate models. This should lead to more accurate radiation models for use in parameterizing climate models, which in turn play a key role in the prediction of trace-gas greenhouse effects.
Kirk, B.L.
1990-01-01
In nuclear applications, the conversion of mainframe software to the personal computer (PC) environment has seen an accelerated pace. Credit has to be extended to the software companies that have made the scientific language FORTRAN available on PCs. Not to be neglected are the scientists who dedicate their time to the conversion of codes and are challenged by the limited PC memory and disk space. The Radiation Shielding Information Center (RSIC) at Oak Ridge National Laboratory has encouraged these developments, and the shielding community has cooperated by making these new tools available via RSIC. The PC codes in the shielding and radiation transport area are divided into five categories (these categories are not mutually exclusive): (1) gamma-ray scattering; (2) neutron and gamma-ray transport (also coupled); (3) environmental dose; (4) medical applications; and (5) reactor physics. Each category is discussed.
PEREGRINE: An all-particle Monte Carlo code for radiation therapy
Hartmann Siantar, C.L.; Chandler, W.P.; Rathkopf, J.A.; Svatos, M.M.; White, R.M.
1994-09-01
The goal of radiation therapy is to deliver a lethal dose to the tumor while minimizing the dose to normal tissues. To carry out this task, it is critical to calculate correctly the distribution of dose delivered. Monte Carlo transport methods have the potential to provide more accurate prediction of dose distributions than currently used methods. PEREGRINE is a new Monte Carlo transport code developed at Lawrence Livermore National Laboratory for the specific purpose of modeling the effects of radiation therapy. PEREGRINE transports neutrons, photons, electrons, positrons, and heavy charged particles, including protons, deuterons, tritons, helium-3, and alpha particles. This paper describes the PEREGRINE transport code and some preliminary results for clinically relevant materials and radiation sources.
Monte Carlo Code System for High-Energy Radiation Transport Calculations.
2000-02-16
Version 00 HERMES-KFA consists of a set of Monte Carlo codes used to simulate particle radiation and interaction with matter. The main codes are HETC, MORSE, and EGS. They are supported by a common geometry package, common random routines, a command interpreter, and auxiliary codes like NDEM, which is used to generate a gamma-ray source from nuclear de-excitation after spallation processes. The codes have been modified so that any particle history falling outside the domain of the physical theory of one program can be submitted to another program in the suite to complete the work. Response data can also be submitted by each program, to be collected and combined by a statistics package included within the command interpreter.
Development of a GPU Compatible Version of the Fast Radiation Code RRTMG
NASA Astrophysics Data System (ADS)
Iacono, M. J.; Mlawer, E. J.; Berthiaume, D.; Cady-Pereira, K. E.; Suarez, M.; Oreopoulos, L.; Lee, D.
2012-12-01
The absorption of solar radiation and emission/absorption of thermal radiation are crucial components of the physics that drive Earth's climate and weather. Therefore, accurate radiative transfer calculations are necessary for realistic climate and weather simulations. Efficient radiation codes have been developed for this purpose, but their accuracy requirements still necessitate that as much as 30% of the computational time of a GCM is spent computing radiative fluxes and heating rates. The overall computational expense constitutes a limitation on a GCM's predictive ability if it becomes an impediment to adding new physics to or increasing the spatial and/or vertical resolution of the model. The emergence of Graphics Processing Unit (GPU) technology, which will allow the parallel computation of multiple independent radiative calculations in a GCM, will lead to a fundamental change in the competition between accuracy and speed. Processing time previously consumed by radiative transfer will now be available for the modeling of other processes, such as physics parameterizations, without any sacrifice in the accuracy of the radiative transfer. Furthermore, fast radiation calculations can be performed much more frequently and will allow the modeling of radiative effects of rapid changes in the atmosphere. The fast radiation code RRTMG, developed at Atmospheric and Environmental Research (AER), is utilized operationally in many dynamical models throughout the world. We will present the results from the first stage of an effort to create a version of the RRTMG radiation code designed to run efficiently in a GPU environment. This effort will focus on the RRTMG implementation in GEOS-5. RRTMG has an internal pseudo-spectral vector of length of order 100 that, when combined with the much greater length of the global horizontal grid vector from which the radiation code is called in GEOS-5, makes RRTMG/GEOS-5 particularly suited to achieving a significant speed improvement.
ICRCCM Phase 2: Verification and calibration of radiation codes in climate models
Ellingson, R.G.; Wiscombe, W.J.; Murcray, D.; Smith, W.; Strauch, R.
1992-01-01
Following the finding by the InterComparison of Radiation Codes used in Climate Models (ICRCCM) of large differences among fluxes predicted by sophisticated radiation models that could not be sorted out because of the lack of a set of accurate atmospheric spectral radiation data measured simultaneously with the important radiative properties of the atmosphere, our team of scientists proposed to remedy the situation by carrying out a comprehensive program of measurement and analysis called SPECTRE (Spectral Radiance Experiment). The data collected during SPECTRE form the test bed for the second phase of ICRCCM, namely verification and calibration of radiation codes used in climate models. This should lead to more accurate radiation models for use in parameterizing climate models, which in turn play a key role in the prediction of trace-gas greenhouse effects. This report summarizes the activities of our group during the project's third year to meet our stated objectives. The report is divided into three sections entitled: SPECTRE Activities, ICRCCM Activities, and Summary Information. The section on SPECTRE activities summarizes the field portion of the project during 1991, and the data reduction/analysis performed by the various participants. The section on ICRCCM activities summarizes our initial attempts to select data for distribution to ICRCCM participants and to compare observations with calculations, as will be done by the ICRCCM participants. The Summary Information section lists data concerning publications, presentations, graduate students supported, and post-doctoral appointments during the project.
A Radiation Chemistry Code Based on the Green's Function of the Diffusion Equation
NASA Technical Reports Server (NTRS)
Plante, Ianik; Wu, Honglu
2014-01-01
Stochastic radiation track structure codes are of great interest for space radiation studies and hadron therapy in medicine. These codes are used for many purposes, notably for microdosimetry and DNA damage studies. In the last two decades, they were also used with the Independent Reaction Times (IRT) method in the simulation of chemical reactions, to calculate the yield of various radiolytic species produced during the radiolysis of water and in chemical dosimeters. Recently, we have developed a Green's function based code to simulate reversible chemical reactions with an intermediate state, which yielded results in excellent agreement with those obtained by using the IRT method. This code was also used to simulate the interaction of particles with membrane receptors. We are in the process of including this program for use with the Monte-Carlo track structure code Relativistic Ion Tracks (RITRACKS). This recent addition should greatly expand the capabilities of RITRACKS, notably to simulate DNA damage by both the direct and indirect effect.
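The basic building block such codes sample from is the free-space Green's function of the diffusion equation; the sketch below evaluates it in 3D and verifies its normalization numerically. The actual chemistry module described above handles reactive boundary conditions and intermediate states, which this sketch does not attempt.

```python
import numpy as np

def diffusion_green_3d(r, t, D):
    """Probability density at distance r after time t for free diffusion with coefficient D."""
    return np.exp(-r**2 / (4.0 * D * t)) / (4.0 * np.pi * D * t) ** 1.5

# Check normalization: the integral of p(r, t) * 4*pi*r^2 over all r is 1.
D, t = 1.0e-9, 1.0e-6                    # arbitrary illustrative units
r = np.linspace(1e-12, 1e-6, 200000)
p = diffusion_green_3d(r, t, D) * 4.0 * np.pi * r**2
integral = np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(r))   # trapezoid rule
assert abs(integral - 1.0) < 1e-3
```

In a particle-based chemistry simulation, displacements over a time step are drawn from exactly this Gaussian kernel, with reaction handling layered on top.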
A unified radiative magnetohydrodynamics code for lightning-like discharge simulations
Chen, Qiang; Chen, Bin; Xiong, Run; Cai, Zhaoyang; Chen, P. F.
2014-03-15
A two-dimensional Eulerian finite difference code is developed for solving the non-ideal magnetohydrodynamic (MHD) equations including the effects of self-consistent magnetic field, thermal conduction, resistivity, gravity, and radiation transfer, which, when combined with specified pulse current models and plasma equations of state, can be used as a unified lightning return stroke solver. The differential equations are written in covariant form in cylindrical geometry and kept in conservative form, which enables some high-accuracy shock-capturing schemes to be applied to the lightning channel configuration naturally. In this code, a fifth-order weighted essentially non-oscillatory scheme combined with the Lax-Friedrichs flux splitting method is introduced for computing the convection terms of the MHD equations. A third-order total variation diminishing Runge-Kutta integrator is used to maintain consistent time-space accuracy. The numerical algorithms for non-ideal terms, e.g., artificial viscosity, resistivity, and thermal conduction, are introduced in the code via an operator splitting method. The code assumes the radiation is in local thermodynamic equilibrium with the plasma components, and a flux-limited diffusion algorithm with grey opacities is implemented for computing the radiation transfer. The transport coefficients and equation of state in this code are obtained from detailed particle population distribution calculations, which makes the numerical model self-consistent. The code is validated against the Sedov blast solutions and then used for lightning return stroke simulations with peak currents of 20 kA, 30 kA, and 40 kA, respectively. The results show that this numerical model is consistent with observations and previous numerical results. The population distribution evolution and energy conservation problems are also discussed.
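The time-stepping machinery named here is standard enough to sketch: below is a Shu-Osher third-order TVD Runge-Kutta step driving a Lax-Friedrichs flux on scalar linear advection. The paper's code pairs this integrator with fifth-order WENO reconstruction and the full MHD flux; the first-order flux here simply keeps the sketch short.

```python
import numpy as np

def lf_rhs(u, a, dx):
    """Semi-discrete RHS from the (global) Lax-Friedrichs numerical flux, periodic grid."""
    alpha = abs(a)                                  # maximum wave speed
    up, um = np.roll(u, -1), np.roll(u, 1)          # periodic neighbors
    flux_right = 0.5 * (a * u + a * up) - 0.5 * alpha * (up - u)
    flux_left = 0.5 * (a * um + a * u) - 0.5 * alpha * (u - um)
    return -(flux_right - flux_left) / dx

def tvd_rk3_step(u, dt, rhs):
    """Shu-Osher third-order TVD Runge-Kutta step."""
    u1 = u + dt * rhs(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2))

n, a = 200, 1.0
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx = x[1] - x[0]
u = np.exp(-100.0 * (x - 0.5) ** 2)                 # smooth initial pulse
mass0 = u.sum() * dx
dt = 0.4 * dx / abs(a)                              # CFL-limited step
for _ in range(100):
    u = tvd_rk3_step(u, dt, lambda v: lf_rhs(v, a, dx))
assert abs(u.sum() * dx - mass0) < 1e-12            # conservative scheme
```

Conservative-form fluxes are what make this check pass to rounding error; that is exactly the property the abstract credits for natural shock capturing in the lightning channel.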
T.J. Urbatsch; T.M. Evans
2006-02-15
We have released Version 2 of Milagro, an object-oriented, C++ code that performs radiative transfer using Fleck and Cummings' Implicit Monte Carlo method. Milagro, a part of the Jayenne program, is a stand-alone driver code used as a methods research vehicle and to verify its underlying classes. These underlying classes are used to construct Implicit Monte Carlo packages for external customers. Milagro-2 represents a design overhaul that allows better parallelism and extensibility. New features in Milagro-2 include verified momentum deposition, restart capability, graphics capability, exact energy conservation, and improved load balancing and parallel efficiency. A users' guide also describes how to configure, make, and run Milagro-2.
DOPEX-1D2C: A one-dimensional, two-constraint radiation shield optimization code
NASA Technical Reports Server (NTRS)
Lahti, G. P.
1973-01-01
A one-dimensional, two-constraint radiation shield weight optimization procedure and its computer program, DOPEX-1D2C, are described. DOPEX-1D2C uses the steepest descent method to alter a set of initial (input) thicknesses of a spherical shield configuration to achieve a minimum weight while simultaneously satisfying two dose-rate constraints. The code assumes an exponential dose-shield thickness relation with parameters specified by the user. Code input instructions, a FORTRAN-4 listing, and a sample problem are given. Typical computer time required to optimize a seven-layer shield is less than 1/2 minute on an IBM 7094.
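The two ingredients named in the abstract, an exponential dose-thickness model and steepest descent, can be sketched together on a two-layer toy shield. The densities, attenuation coefficients, penalty weighting, and step size below are illustrative stand-ins, not DOPEX-1D2C's actual data or algorithmic details.

```python
import numpy as np

rho = np.array([19.3, 1.0])          # layer densities (e.g. tungsten, LiH) - illustrative
mu = np.array([[1.5, 0.2],           # attenuation of dose 1 per unit thickness of each layer
               [0.3, 0.9]])          # attenuation of dose 2
d0 = np.array([1.0e4, 5.0e3])        # unshielded dose rates
limits = np.array([1.0, 1.0])        # the two dose-rate constraints

def dose(t):
    """Exponential dose-thickness model: D_j = D0_j * exp(-sum_i mu_ji t_i)."""
    return d0 * np.exp(-mu @ t)

def objective(t, penalty=1.0e3):
    """Shield-mass proxy plus a quadratic penalty on violated dose constraints."""
    violation = np.maximum(dose(t) - limits, 0.0)
    return rho @ t + penalty * np.sum(violation**2)

def steepest_descent(t, step=1.0e-4, iters=20000):
    for _ in range(iters):
        grad = np.zeros_like(t)
        for i in range(t.size):       # simple central finite-difference gradient
            e = np.zeros_like(t)
            e[i] = 1.0e-6
            grad[i] = (objective(t + e) - objective(t - e)) / 2.0e-6
        t = np.maximum(t - step * grad, 0.0)   # thicknesses stay nonnegative
    return t

t_opt = steepest_descent(np.array([10.0, 10.0]))
assert np.all(dose(t_opt) <= limits * 1.05)             # constraints (nearly) met
assert rho @ t_opt < rho @ np.array([10.0, 10.0])       # weight reduced
```

The penalty formulation here stands in for however DOPEX-1D2C actually enforces its two constraints; the qualitative behavior, thinning layers until a dose constraint pushes back, is the same.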
European Code against Cancer 4th Edition: Ionising and non-ionising radiation and cancer.
McColl, Neil; Auvinen, Anssi; Kesminiene, Ausrele; Espina, Carolina; Erdmann, Friederike; de Vries, Esther; Greinert, Rüdiger; Harrison, John; Schüz, Joachim
2015-12-01
Ionising radiation can transfer sufficient energy to ionise molecules, and this can lead to chemical changes, including DNA damage in cells. Key evidence for the carcinogenicity of ionising radiation comes from: follow-up studies of the survivors of the atomic bombings in Japan; other epidemiological studies of groups that have been exposed to radiation from medical, occupational or environmental sources; experimental animal studies; and studies of cellular responses to radiation. Considering exposure to environmental ionising radiation, inhalation of naturally occurring radon is the major source of radiation in the population - in doses orders of magnitude higher than those from nuclear power production or nuclear fallout. Indoor exposure to radon and its decay products is an important cause of lung cancer; radon may cause approximately one in ten lung cancers in Europe. Exposures to radon in buildings can be reduced via a three-step process of identifying those with potentially elevated radon levels, measuring radon levels, and reducing exposure by installation of remediation systems. In the 4th Edition of the European Code against Cancer it is therefore recommended to: "Find out if you are exposed to radiation from naturally high radon levels in your home. Take action to reduce high radon levels". Non-ionising types of radiation (those with insufficient energy to ionise molecules) - including extremely low-frequency electric and magnetic fields as well as radiofrequency electromagnetic fields - are not an established cause of cancer and are therefore not addressed in the recommendations to reduce cancer risk. PMID:26126928
Simulation of high-energy radiation belt electron fluxes using NARMAX-VERB coupled codes
Pakhotin, I P; Drozdov, A Y; Shprits, Y Y; Boynton, R J; Subbotin, D A; Balikhin, M A
2014-01-01
This study presents a fusion of data-driven and physics-driven methodologies for energetic electron flux forecasting in the outer radiation belt. Data-driven NARMAX (Nonlinear AutoRegressive Moving Average with eXogenous inputs) model predictions for geosynchronous orbit fluxes have been used as an outer boundary condition to drive the physics-based Versatile Electron Radiation Belt (VERB) code and simulate energetic electron fluxes in the outer radiation belt environment. The coupled system has been tested for three extended time periods totalling several weeks of observations. These intervals included quiet, moderate, and strong geomagnetic activity and captured a range of dynamics typical of the radiation belts. The model has successfully simulated energetic electron fluxes for various magnetospheric conditions. Physical mechanisms that may be responsible for the discrepancies between the model results and observations are discussed. PMID:26167432
Evaluation of Error-Correcting Codes for Radiation-Tolerant Memory
NASA Astrophysics Data System (ADS)
Jeon, S.; Vijaya Kumar, B. V. K.; Hwang, E.; Cheng, M. K.
2010-05-01
In space, radiation particles can introduce temporary or permanent errors in memory systems. To protect against potential memory faults, either thick shielding or error-correcting codes (ECC) are used by memory modules. Thick shielding translates into increased mass, and conventional ECCs designed for memories are typically capable of correcting only a single error and detecting a double error. Decoding is usually performed through hard decisions where bits are treated as either correct or flipped in polarity. We demonstrate that low-density parity-check (LDPC) codes that are already prevalent in many communication applications can also be used to protect memories in space. Because the achievable code rate monotonically decreases with time due to the accumulation of permanent errors, the achievable rate serves as a useful metric in designing an appropriate ECC. We describe how to compute soft symbol reliabilities on our channel and compare the performance of soft-decision decoding LDPC codes against conventional hard-decision decoding of Reed-Solomon (RS) codes and Bose-Chaudhuri-Hocquenghem (BCH) codes for a specific memory structure.
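The advantage of soft symbol reliabilities can be illustrated with a toy example (deliberately not an LDPC decoder): three memory cells hold copies of one bit, with per-cell flip probabilities assumed known. Hard-decision majority voting weights every cell equally, whereas summing per-cell log-likelihood ratios lets one trustworthy cell outvote two noisy ones:

```python
import math

def llr(bit, p):
    """Soft symbol reliability: log-likelihood ratio in favour of a
    stored 0, for a cell read as `bit` with flip probability p."""
    ratio = math.log((1 - p) / p)
    return ratio if bit == 0 else -ratio

def decode_hard(bits):
    # Majority vote: every read treated as equally trustworthy.
    return 1 if 2 * sum(bits) > len(bits) else 0

def decode_soft(bits, ps):
    # Sum the per-cell LLRs: reliable cells dominate the decision.
    total = sum(llr(b, p) for b, p in zip(bits, ps))
    return 0 if total > 0 else 1
```

With reads [1, 1, 0] and flip probabilities [0.4, 0.4, 0.01], the hard decoder follows the two unreliable cells, while the soft decoder recovers the stored 0 from the single reliable cell.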
3D and 4D Simulations of the Dynamics of the Radiation Belts using VERB code
NASA Astrophysics Data System (ADS)
Shprits, Yuri; Kellerman, Adam; Drozdov, Alexander; Orlova, Ksenia
2015-04-01
Modeling and understanding of the ring current and the higher-energy radiation belts has been a grand challenge since the beginning of the space age. In this study we show long-term simulations with the 3D VERB code, modeling the radiation belts with boundary conditions derived from observations around geosynchronous orbit. We also present 4D VERB simulations that include convective transport, radial diffusion, pitch angle scattering and local acceleration. We show that lower-energy radial transport is dominated by convection, while higher-energy transport is dominated by diffusive radial transport. We also show that there exists an intermediate range of electron energies for which both processes operate simultaneously.
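The diffusive radial transport referred to above obeys the radial diffusion equation df/dt = L² ∂/∂L (D_LL/L² ∂f/∂L). A minimal explicit finite-difference step is sketched below (the D_LL profile chosen here is an arbitrary illustration, not the VERB coefficients, and the boundaries are held fixed as if driven by observations):

```python
def radial_diffusion_step(f, L, D, dt):
    """One explicit step of df/dt = L^2 d/dL (D_LL / L^2 df/dL) on a
    uniform L grid, flux form on half-grid points; boundary values
    of f are kept fixed."""
    dL = L[1] - L[0]
    new = f[:]
    for i in range(1, len(f) - 1):
        Dp = 0.5 * (D[i] + D[i + 1]) / (0.5 * (L[i] + L[i + 1])) ** 2
        Dm = 0.5 * (D[i] + D[i - 1]) / (0.5 * (L[i] + L[i - 1])) ** 2
        new[i] = f[i] + dt * L[i] ** 2 * (
            Dp * (f[i + 1] - f[i]) - Dm * (f[i] - f[i - 1])) / dL ** 2
    return new
```

A uniform phase-space density is left unchanged by the operator, while a localised peak spreads toward the boundaries, which is the qualitative behaviour the full 3D/4D codes resolve alongside convection and local acceleration.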
Intercomparison of three microwave/infrared high resolution line-by-line radiative transfer codes
NASA Astrophysics Data System (ADS)
Schreier, F.; Garcia, S. Gimeno; Milz, M.; Kottayil, A.; Höpfner, M.; von Clarmann, T.; Stiller, G.
2013-05-01
An intercomparison of three line-by-line (lbl) codes developed independently for atmospheric sounding - ARTS, GARLIC, and KOPRA - has been performed for a thermal infrared nadir sounding application assuming a HIRS-like (High resolution Infrared Radiation Sounder) setup. Radiances for the HIRS infrared channels and a set of 42 atmospheric profiles from the "Garand dataset" have been computed. Results of this intercomparison and a discussion of reasons of the observed differences are presented.
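At its core, a nadir-viewing lbl calculation is a Schwarzschild-type integration of emission and attenuation over atmospheric layers. The sketch below is a deliberately schematic monochromatic version (uniform homogeneous layers, no scattering) and is far simpler than ARTS, GARLIC or KOPRA:

```python
import math

def planck(nu, T):
    """Planck spectral radiance B_nu(T) in SI units, nu in Hz."""
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    return 2 * h * nu ** 3 / c ** 2 / (math.exp(h * nu / (k * T)) - 1)

def nadir_radiance(nu, taus, temps, T_surf):
    """Monochromatic nadir radiance for a plane-parallel atmosphere:
    the surface term attenuated by the total optical depth, plus each
    layer's emission attenuated by the layers above it.
    taus[i], temps[i]: optical depth and temperature of layer i,
    ordered from the surface upward."""
    total = sum(taus)
    rad = planck(nu, T_surf) * math.exp(-total)
    above = total
    for tau, T in zip(taus, temps):
        above -= tau  # optical depth of the layers above this one
        rad += planck(nu, T) * (1 - math.exp(-tau)) * math.exp(-above)
    return rad
```

Two limits serve as sanity checks: a transparent atmosphere returns the surface Planck radiance, and a single opaque layer returns the Planck radiance at that layer's temperature.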
Development of a space radiation Monte Carlo computer simulation based on the FLUKA and ROOT codes.
Pinsky, L S; Wilson, T L; Ferrari, A; Sala, P; Carminati, F; Brun, R
2001-01-01
This NASA-funded project is proceeding to develop a Monte Carlo-based computer simulation of the radiation environment in space. With actual funding only initially in place at the end of May 2000, the study is still in the early stage of development. The general tasks have been identified and personnel have been selected. The code to be assembled will be based upon two major existing software packages. The radiation transport simulation will be accomplished by updating the FLUKA Monte Carlo program, and the user interface will employ the ROOT software being developed at CERN. The end-product will be a Monte Carlo-based code which will complement the existing analytic codes such as BRYNTRN/HZETRN presently used by NASA to evaluate the effects of radiation shielding in space. The planned code will possess the ability to evaluate the radiation environment for spacecraft and habitats in Earth orbit, in interplanetary space, on the lunar surface, or on a planetary surface such as Mars. Furthermore, it will be useful in the design and analysis of experiments such as ACCESS (Advanced Cosmic-ray Composition Experiment for Space Station), which is an Office of Space Science payload currently under evaluation for deployment on the International Space Station (ISS). FLUKA will be significantly improved and tailored for use in simulating space radiation in four ways. First, the additional physics not presently within the code that is necessary to simulate the problems of interest, namely the heavy ion inelastic processes, will be incorporated. Second, the internal geometry package will be replaced with one that will substantially increase the calculation speed as well as simplify the data input task. Third, default incident flux packages that include all of the different space radiation sources of interest will be included. Finally, the user interface and internal data structure will be melded together with ROOT, the object-oriented data analysis infrastructure system.
New Particle-in-Cell Code for Numerical Simulation of Coherent Synchrotron Radiation
Balsa Terzic, Rui Li
2010-05-01
We present a first look at the new code for self-consistent, 2D simulations of beam dynamics affected by the coherent synchrotron radiation. The code is of the particle-in-cell variety: the beam bunch is sampled by point-charge particles, which are deposited on the grid; the corresponding forces on the grid are then computed using retarded potentials according to causality, and interpolated so as to advance the particles in time. The retarded potentials are evaluated by integrating over the 2D path history of the bunch, with the charge and current density at the retarded time obtained from interpolation of the particle distributions recorded at discrete timesteps. The code is benchmarked against analytical results obtained for a rigid-line bunch. We also outline the features and applications which are currently being developed.
Application of the new MultiTrans SP3 radiation transport code in BNCT dose planning.
Kotiluoto, P; Hiisamäki, P; Savolainen, S
2001-09-01
Dose planning in boron neutron capture therapy (BNCT) is a complex problem and requires sophisticated numerical methods. In the framework of the Finnish BNCT project, the new deterministic three-dimensional radiation transport code MultiTrans SP3 has been developed at VTT Chemical Technology, based on a novel application of the tree multigrid technique. To test the applicability of this new code in a realistic BNCT dose planning problem, a cylindrical PMMA (polymethyl methacrylate) phantom was chosen as a benchmark case. It is a convenient benchmark, as it has been modeled by several different codes, including the well-known DORT and MCNP, and extensive measured data also exist. In this paper, a comparison of the new MultiTrans SP3 code with other methods is presented for the PMMA phantom case. Results show that the total neutron dose rate to the ICRU adult brain calculated by the MultiTrans SP3 code differs by less than 4% at 2 cm depth in the phantom (at the thermal maximum) from the DORT calculation. Results also show that the calculated 197Au(n,gamma) and 55Mn(n,gamma) reaction rates at 2 cm depth in the phantom differ by less than 4% and 1%, respectively, from the measured values. However, the photon dose calculated by the MultiTrans SP3 code seems to be incorrect in this PMMA phantom case, which requires further study. As expected, the deterministic MultiTrans SP3 code is over an order of magnitude faster than stochastic Monte Carlo codes (at similar resolution), thus providing a very efficient tool for BNCT dose planning. PMID:11585221
SKIRT: An advanced dust radiative transfer code with a user-friendly architecture
NASA Astrophysics Data System (ADS)
Camps, P.; Baes, M.
2015-03-01
We discuss the architecture and design principles that underpin the latest version of SKIRT, a state-of-the-art open source code for simulating continuum radiation transfer in dusty astrophysical systems, such as spiral galaxies and accretion disks. SKIRT employs the Monte Carlo technique to emulate the relevant physical processes including scattering, absorption and emission by the dust. The code features a wealth of built-in geometries, radiation source spectra, dust characterizations, dust grids, and detectors, in addition to various mechanisms for importing snapshots generated by hydrodynamical simulations. The configuration for a particular simulation is defined at run-time through a user-friendly interface suitable for both occasional and power users. These capabilities are enabled by careful C++ code design. The programming interfaces between components are well defined and narrow. Adding a new feature is usually as simple as adding another class; the user interface automatically adjusts to allow configuring the new options. We argue that many scientific codes, like SKIRT, can benefit from careful object-oriented design and from a friendly user interface, even if it is not a graphical user interface.
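The Monte Carlo technique at the heart of such codes can be illustrated with a minimal photon-packet random walk through a uniform slab (isotropic scattering, no polarization or dust re-emission; the geometry and sampling are drastically simplified relative to SKIRT):

```python
import math
import random

def trace_packet(tau_max, albedo, rng):
    """Follow one photon packet through a uniform slab of total optical
    depth tau_max: sample exponential free paths along the current
    direction cosine, then scatter (isotropically) or absorb."""
    mu, tau = 1.0, 0.0  # direction cosine and current optical depth
    while True:
        tau += mu * (-math.log(rng.random()))  # free path ~ Exp(1)
        if tau >= tau_max:
            return 'escaped'
        if tau < 0:
            return 'reflected'
        if rng.random() > albedo:
            return 'absorbed'
        mu = 2 * rng.random() - 1  # isotropic rescattering

def slab_escape_fraction(tau_max, albedo, n=20000, seed=1):
    rng = random.Random(seed)
    fates = [trace_packet(tau_max, albedo, rng) for _ in range(n)]
    return fates.count('escaped') / n
```

For a purely absorbing slab the escape fraction reproduces exp(-tau_max); raising the albedo lets multiply-scattered packets leak out and increases it, which is the behaviour a dust radiative transfer code resolves in full 3D.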
Collisional radiative average atom code based on a relativistic Screened Hydrogenic Model
NASA Astrophysics Data System (ADS)
Benita, A. J.; Mínguez, E.; Mendoza, M. A.; Rubiano, J. G.; Gil, J. M.; Rodríguez, R.; Martel, P.
2015-03-01
A steady-state and time-dependent collisional-radiative ''average-atom'' (AA) model (ATMED CR) is presented for the calculation of atomic and radiative properties of plasmas over a wide range of laboratory and theoretical conditions: coronal, local thermodynamic equilibrium or non-local thermodynamic equilibrium, optically thin or thick plasmas, and photoionized plasmas. The radiative and collisional rates are computed from a set of analytical approximations that yield fast calculations and compare well with more sophisticated quantum-mechanical treatments of atomic rates. The atomic model is based on a new Relativistic Screened Hydrogenic Model (NRSHM) with a set of universal screening constants including nlj-splitting, obtained by fitting to a large database of ionization potentials and excitation energies compiled from the National Institute of Standards and Technology (NIST) database and the Flexible Atomic Code (FAC). The NRSHM has been validated by comparing its results with ionization energies, transition energies and wave functions computed using sophisticated self-consistent codes, and with experimental data. All the calculations presented in this work were performed with the ATMED CR code.
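To illustrate the screened-hydrogenic idea itself, the sketch below uses Slater's classic non-relativistic screening constants for s/p electrons rather than the fitted relativistic nlj-split constants of the NRSHM:

```python
RYDBERG_EV = 13.6057  # hydrogen ground-state binding energy, eV

def slater_screening(shells, n):
    """Slater screening constant for an ns/np electron: 0.35 per other
    electron in shell n (0.30 in the 1s shell), 0.85 per electron in
    shell n-1, 1.00 per deeper electron; outer shells do not screen.
    `shells` maps principal quantum number -> electron count."""
    s = 0.0
    for m, N in shells.items():
        if m == n:
            s += (N - 1) * (0.30 if n == 1 else 0.35)
        elif m == n - 1:
            s += 0.85 * N
        elif m < n - 1:
            s += 1.00 * N
    return s

def level_energy(Z, shells, n):
    """Screened-hydrogenic level energy E_n = -Ryd * Zeff^2 / n^2 (eV)."""
    Zeff = Z - slater_screening(shells, n)
    return -RYDBERG_EV * Zeff ** 2 / n ** 2
```

The hydrogen ground state recovers -13.6 eV exactly, and helium's 1s level uses Zeff = 2 - 0.30 = 1.7; fitted screening constants, as in the NRSHM, replace these fixed rules to reach spectroscopic accuracy.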
Burn, K W; Daffara, C; Gualdrini, G; Pierantoni, M; Ferrari, P
2007-01-01
The question of Monte Carlo simulation of radiation transport in voxel geometries is addressed. Patched versions of the MCNP and MCNPX codes are developed aimed at transporting radiation both in the standard geometry mode and in the voxel geometry treatment. The patched code reads an unformatted FORTRAN file derived from DICOM format data and uses special subroutines to handle voxel-to-voxel radiation transport. The various phases of the development of the methodology are discussed together with the new input options. Examples are given of employment of the code in internal and external dosimetry and comparisons with results from other groups are reported. PMID:17038404
NASA Technical Reports Server (NTRS)
Chambers, Lin Hartung
1994-01-01
The theory for radiation emission, absorption, and transfer in a thermochemical nonequilibrium flow is presented. The expressions developed reduce correctly to the limit at equilibrium. To implement the theory in a practical computer code, some approximations are used, particularly the smearing of molecular radiation. Details of these approximations are presented and helpful information is included concerning the use of the computer code. This user's manual should benefit both occasional users of the Langley Optimized Radiative Nonequilibrium (LORAN) code and those who wish to use it to experiment with improved models or properties.
Radiation Coupling with the FUN3D Unstructured-Grid CFD Code
NASA Technical Reports Server (NTRS)
Wood, William A.
2012-01-01
The HARA radiation code is fully coupled to the FUN3D unstructured-grid CFD code for the purpose of simulating high-energy hypersonic flows. The radiation energy source terms and surface heat transfer, under the tangent slab approximation, are included within the fluid dynamic flow solver. The Fire II flight test, at the Mach-31, 1643-second trajectory point, is used as a demonstration case. Comparisons are made with an existing structured-grid capability, the LAURA/HARA coupling. The radiative surface heat transfer rates from the present approach match the benchmark values within 6%. Although radiation coupling is the focus of the present work, convective surface heat transfer rates are also reported, and are seen to vary depending upon the choice of mesh connectivity and FUN3D flux reconstruction algorithm. On a tetrahedral-element mesh the convective heating matches the benchmark at the stagnation point, but under-predicts by 15% on the Fire II shoulder. Conversely, on a mixed-element mesh the convective heating over-predicts at the stagnation point by 20%, but matches the benchmark away from the stagnation region.
EMMA: an adaptive mesh refinement cosmological simulation code with radiative transfer
NASA Astrophysics Data System (ADS)
Aubert, Dominique; Deparis, Nicolas; Ocvirk, Pierre
2015-11-01
EMMA is a cosmological simulation code aimed at investigating the reionization epoch. It handles simultaneously collisionless and gas dynamics, as well as radiative transfer physics using a moment-based description with the M1 approximation. Field quantities are stored and computed on an adaptive three-dimensional mesh and the spatial resolution can be dynamically modified based on physically motivated criteria. Physical processes can be coupled at all spatial and temporal scales. We also introduce a new and optional approximation to handle radiation: the light is transported at the resolution of the non-refined grid and only once the dynamics has been fully updated, whereas thermo-chemical processes are still tracked on the refined elements. Such an approximation reduces the overheads induced by the treatment of radiation physics. A suite of standard tests are presented and passed by EMMA, providing a validation for its future use in studies of the reionization epoch. The code is parallel and is able to use graphics processing units (GPUs) to accelerate hydrodynamics and radiative transfer calculations. Depending on the optimizations and the compilers used to generate the CPU reference, global GPU acceleration factors between ×3.9 and ×16.9 can be obtained. Vectorization and transfer operations currently prevent better GPU performance and we expect that future optimizations and hardware evolution will lead to greater accelerations.
Reanalysis and forecasting killer electrons in Earth's radiation belts using the VERB code
NASA Astrophysics Data System (ADS)
Kellerman, Adam; Kondrashov, Dmitri; Shprits, Yuri; Podladchikova, Tatiana; Drozdov, Alexander
2016-07-01
The Van Allen radiation belts are torus-shaped regions of trapped energetic particles that in recent years have become a principal focus for satellite operators and engineers. During geomagnetic storms, electrons can be accelerated up to relativistic energies, at which they may penetrate spacecraft shielding and damage electrical systems, causing permanent damage or loss of spacecraft. Data assimilation provides an optimal way to combine observations of the radiation belts with a physics-based model in order to more accurately specify the global state of the Earth's radiation belts. We present recent advances in the data-assimilative version of the Versatile Electron Radiation Belt (VERB) code, including more sophisticated error analysis and the incorporation of realistic field models to more accurately specify fluxes at a given MLT or along a spacecraft trajectory. The effect of recent stream-interaction-region (SIR) driven enhancements is investigated using the improved model. We also present a real-time forecast model based on the data-assimilative VERB code, and discuss its forecast performance over the past 12 months.
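Data assimilation of this kind rests on blending a model forecast with an observation in proportion to their error variances. A minimal scalar Kalman analysis step conveys the idea (illustrative only; the VERB assimilation operates on full phase-space density distributions):

```python
def kalman_step(x_model, P_model, y_obs, R_obs):
    """One scalar analysis step: blend the model forecast (error
    variance P_model) with an observation (error variance R_obs)
    via the Kalman gain, returning the analysis and its variance."""
    K = P_model / (P_model + R_obs)          # Kalman gain in [0, 1]
    x_analysis = x_model + K * (y_obs - x_model)
    P_analysis = (1 - K) * P_model           # analysis is less uncertain
    return x_analysis, P_analysis
```

When model and observation are equally uncertain the analysis sits halfway between them; as the observation error shrinks, the analysis is pulled entirely onto the data.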
MULTI2D - a computer code for two-dimensional radiation hydrodynamics
NASA Astrophysics Data System (ADS)
Ramis, R.; Meyer-ter-Vehn, J.; Ramírez, J.
2009-06-01
Simulation of radiation hydrodynamics in two spatial dimensions is developed, having in mind, in particular, target design for indirectly driven inertial fusion energy (IFE) and the interpretation of related experiments. Intense radiation pulses by laser or particle beams heat high-Z target configurations of different geometries and lead to a regime which is optically thick in some regions and optically thin in others. A diffusion description is inadequate in this situation. A new numerical code has been developed which describes hydrodynamics in two spatial dimensions (cylindrical R-Z geometry) and radiation transport along rays in three dimensions with the 4 π solid angle discretized in direction. Matter moves on a non-structured mesh composed of trilateral and quadrilateral elements. Radiation flux of a given direction enters on two (one) sides of a triangle and leaves on the opposite side(s) in proportion to the viewing angles depending on the geometry. This scheme allows sharply edged beams to be propagated without ray tracing, though at the price of some lateral diffusion. The algorithm treats correctly both the optically thin and optically thick regimes. A symmetric semi-implicit (SSI) method is used to guarantee numerical stability.
Program summary
Program title: MULTI2D
Catalogue identifier: AECV_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECV_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 151 098
No. of bytes in distributed program, including test data, etc.: 889 622
Distribution format: tar.gz
Programming language: C
Computer: PC (32 bits architecture)
Operating system: Linux/Unix
RAM: 2 Mbytes
Word size: 32 bits
Classification: 19.7
External routines: X-window standard library (libX11.so) and corresponding heading files (X11/*.h)
Application of the MASH v1.0 Code System to radiological warfare radiation threats
Johnson, J.O.; Santoro, R.T.; Smith, M.S.
1994-03-01
Nuclear hardening capabilities of US and foreign ground-force systems are a primary concern of the Department of Defense (DoD) and the US Army. The Monte Carlo Adjoint Shielding Code System, MASH v1.0, was developed at Oak Ridge National Laboratory (ORNL) to analyze these capabilities, i.e. the shielding effectiveness, for prompt radiation from a nuclear weapon detonation. Rapidly changing world events and the proliferation of nuclear-weapons-related technology have broadened the kinds of nuclear threats to include intentionally dispersed radiation sources and fallout from tactical nuclear weapons used in the modern AirLand battlefield scenario. Consequently, a DoD area of increasing interest is the shielding effectiveness of foreign and US armored vehicles against radiological warfare and fallout radiation threats. To demonstrate the applicability of MASH to dispersed radiation source problems, calculations have been completed for two distributed sources: a dispersed radiation environment simulated by a uniformly distributed 60Co source, and a 235U fission weapon fallout source. Fluence and dose assessments were performed for the free field, the inside of a steel-walled two-meter box, a phantom standing in the free field, and a phantom standing in the two-meter box. The results indicate substantial radiation protection factors for the 60Co dispersed radiation source and the fallout source compared to the prompt radiation protection factors. The dose protection factors ranged from 40 to 95 for the two-meter box and from 55 to 123 for the mid-gut position of the phantom standing in the box. The results further indicate that a 60Co source might be a good first-order approximation for a tactical fission weapon fallout protection factor analysis.
Spectral and Structure Modeling of Low and High Mass Young Stars Using a Radiative Transfer Code
NASA Astrophysics Data System (ADS)
Robson Rocha, Will; Pilling, Sergio
The spectroscopic data from space telescopes (ISO, Spitzer, Herschel) show that in addition to dust grains (e.g. silicates), frozen molecular species (astrophysical ices such as H2O, CO, CO2, CH3OH) are also present in circumstellar environments. In this work we present a study of the modeling of low- and high-mass young stellar objects (YSOs), in which we highlight the importance of using astrophysical ices processed by radiation (UV, cosmic rays) from stars in the formation process. This is important for characterizing the physicochemical evolution of the ices distributed through the protostellar disk and its envelope in some situations. To perform this analysis, we gathered (i) observational data from the Infrared Space Observatory (ISO) for the low-mass protostar Elias29 and the high-mass protostar W33A, (ii) experimental absorbance data in the infrared spectral range used to determine the optical constants of the materials observed around these objects, and (iii) a powerful radiative transfer code to simulate the astrophysical environment (RADMC-3D, Dullemond et al. 2012). Briefly, the radiative transfer calculation of the YSOs was done employing the RADMC-3D code. The model outputs were the spectral energy distribution and theoretical images at different wavelengths of the studied objects. The code is based on the Monte Carlo methodology together with Mie theory for the interaction between radiation and matter. The observational data from different space telescopes were used as reference for comparison with the modeled data. The optical constants in the infrared, used as input to the models, were calculated directly from absorbance data obtained in the laboratory for both unprocessed and processed simulated interstellar samples using the NKABS code (Rocha & Pilling 2014). We show from this study that some absorption bands in the infrared, observed in the spectra of Elias29 and W33A, can arise after the ices
Comparison of Radiation Transport Codes, HZETRN, HETC and FLUKA, Using the 1956 Webber SPE Spectrum
NASA Technical Reports Server (NTRS)
Heinbockel, John H.; Slaba, Tony C.; Blattnig, Steve R.; Tripathi, Ram K.; Townsend, Lawrence W.; Handler, Thomas; Gabriel, Tony A.; Pinsky, Lawrence S.; Reddell, Brandon; Clowdsley, Martha S.; Singleterry, Robert C.; Norbury, John W.; Badavi, Francis F.; Aghara, Sukesh K.
2009-01-01
Protection of astronauts and instrumentation from galactic cosmic rays (GCR) and solar particle events (SPE) in the harsh environment of space is of prime importance in the design of personal shielding, spacecraft, and mission planning. Early entry of radiation constraints into the design process enables optimal shielding strategies, but demands efficient and accurate tools that can be used by design engineers in every phase of an evolving space project. The radiation transport code HZETRN is an efficient tool for analyzing the shielding effectiveness of materials exposed to space radiation. In this paper, HZETRN is compared to the Monte Carlo codes HETC-HEDS and FLUKA, for a shield/target configuration comprised of a 20 g/cm^2 aluminum slab in front of a 30 g/cm^2 slab of water exposed to the February 1956 SPE, as modeled by the Webber spectrum. Neutron and proton fluence spectra, as well as dose and dose equivalent values, are compared at various depths in the water target. This study shows that there are many regions where HZETRN agrees with both HETC-HEDS and FLUKA for this shield/target configuration and the SPE environment. However, there are also regions where there are appreciable differences between the three computer codes.
The Development of the Ducted Fan Noise Propagation and Radiation Code CDUCT-LaRC
NASA Technical Reports Server (NTRS)
Nark, Douglas M.; Farassat, F.; Pope, D. Stuart; Vatsa, Veer
2003-01-01
The development of the ducted fan noise propagation and radiation code CDUCT-LaRC at NASA Langley Research Center is described. This code calculates the propagation and radiation of given acoustic modes ahead of the fan face or aft of the exhaust guide vanes in the inlet or exhaust ducts, respectively. This paper gives a description of the modules comprising CDUCT-LaRC. The grid generation module provides automatic creation of numerical grids for complex (non-axisymmetric) geometries that include single or multiple pylons. Files for performing automatic inviscid mean flow calculations are also generated within this module. The duct propagation is based on the parabolic approximation theory of R. P. Dougherty. This theory allows the handling of complex internal geometries and the ability to study the effect of non-uniform (i.e. circumferentially and axially segmented) liners. Finally, the duct radiation module is based on the Ffowcs Williams-Hawkings (FW-H) equation with a penetrable data surface. Refraction of sound through the shear layer between the external flow and bypass duct flow is included. Results for benchmark annular ducts, as well as other geometries with pylons, are presented and compared with available analytical data.
Blakeman, E.D.
2000-05-07
A software system, GRAVE (Geometry Rendering and Visual Editor), has been developed at the Oak Ridge National Laboratory (ORNL) to perform interactive visualization and development of models used as input to the TORT three-dimensional discrete ordinates radiation transport code. Three-dimensional and two-dimensional visualization displays are included. Display capabilities include image rotation, zoom, translation, wire-frame and translucent display, geometry cuts and slices, and display of individual component bodies and material zones. The geometry can be interactively edited and saved in TORT input file format. This system is an advancement over the current, non-interactive, two-dimensional display software. GRAVE is programmed in the Java programming language and can be implemented on a variety of computer platforms. Three-dimensional visualization is enabled through the Visualization Toolkit (VTK), a freeware C++ software library developed for geometric and data visualization. Future plans include an extension of the system to read inputs using binary zone maps and combinatorial geometry models containing curved surfaces, such as those used for Monte Carlo code inputs. GRAVE will also be extended to geometry visualization/editing for the DORT two-dimensional transport code and will be integrated into a single GUI-based system for all of the ORNL discrete ordinates transport codes.
NASA Astrophysics Data System (ADS)
Havemann, Stephan; Thelen, Jean-Claude; Taylor, Jonathan P.; Keil, Andreas
2009-03-01
The Havemann-Taylor Fast Radiative Transfer Code (HT-FRTC) has been developed for the simulation of highly spectrally resolved measurements from satellite-based (e.g. the Infrared Atmospheric Sounding Interferometer (IASI) and the Atmospheric Infrared Sounder (AIRS)) and airborne (e.g. the Atmospheric Research Interferometer Evaluation System (ARIES)) instruments. The use of principal components enables the calculation of a complete spectrum in less than a second. The principal components are derived from a diverse training set of atmospheres and surfaces and contain their spectral characteristics in a highly compressed form. For any given atmosphere/surface, the HT-FRTC calculates the weightings (also called scores) of a few hundred principal components from a selected set of monochromatic radiative transfer calculations, which is far cheaper than thousands of channel radiance calculations. Intercomparison with line-by-line and other fast models has shown the HT-FRTC to be accurate. The HT-FRTC has been successfully applied to simultaneous variational retrievals of atmospheric temperature and humidity profiles, surface temperature and surface emissivity over land; this is the subject of another presentation at this conference. The HT-FRTC has now also been extended to include an exact treatment of scattering by aerosols and clouds. The radiative transfer problem is solved using a discrete ordinate method (DISORT). Modelling results at high spectral resolution for non-clear-sky atmospheres obtained with the HT-FRTC are presented.
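The principal-component machinery reduces to two projections, sketched below with the components assumed orthonormal and precomputed from the training set (the real code obtains scores from a selected set of monochromatic calculations rather than a full-spectrum projection):

```python
def pca_scores(spectrum, mean, components):
    """Project a centred spectrum onto orthonormal principal
    components, yielding a short vector of scores (weightings)."""
    centred = [y - m for y, m in zip(spectrum, mean)]
    return [sum(c * d for c, d in zip(comp, centred)) for comp in components]

def pca_reconstruct(scores, mean, components):
    """Rebuild the full spectrum from the mean plus the weighted
    sum of the principal components."""
    out = list(mean)
    for s, comp in zip(scores, components):
        out = [o + s * c for o, c in zip(out, comp)]
    return out
```

Any spectrum lying in the span of the retained components round-trips exactly through a handful of scores, which is the compression that makes the fast model fast.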
Update on the Radiation Code in IMPACT: Clouds, Heating Rates, and Comparisons
Edis, T; Grant, K; Cameron-Smith, P
2005-07-22
This is a summary of work done over two months in the summer of 2005, which was devoted to improving the radiation code of IMPACT, the LLNL 3D global atmospheric chemistry and aerosol model. Most of the work concerned the addition and testing of new cloud optical property routines designed to work with CAM3 meteorological data, and the comparison of CAM3 with the results of IMPACT runs using meteorological data from CAM3 and MACCM3. Additional related work done in the course of these main tasks will be described as necessary.
NASA Astrophysics Data System (ADS)
Ioan, M.-R.
2016-08-01
In experiments involving ionising radiation, precise knowledge of the parameters involved is a very important task. Some of these experiments use electromagnetic ionising radiation such as gamma rays and X-rays; others use energetic charged or uncharged particles such as protons, electrons and neutrons, or, in other cases, larger accelerated particles such as helium or deuterium nuclei. In all these cases, the beam used to hit an exposed target must first be collimated and precisely characterized. In this paper, a novel method involving Matlab coding is proposed to determine the distribution of the collimated beam. The method was implemented by placing Pyrex glass test samples in the beam whose distribution and dimensions are to be determined, taking high-quality pictures of them, and then digitally processing the resulting images. The method also yields information on the doses absorbed in the volume of the exposed samples.
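The image-analysis step can be sketched as follows (Python rather than Matlab; the image is modelled as a plain 2-D list of grey levels representing beam-induced darkening, and the half-maximum threshold is an assumption):

```python
def beam_profile(image, axis=0):
    """Collapse a 2-D grey-level image of an irradiated glass sample
    into a 1-D intensity profile by summing along one axis."""
    if axis == 1:
        image = list(zip(*image))  # transpose: profile across columns
    return [sum(row) for row in image]

def beam_extent(profile, threshold=0.5):
    """Index range where the profile exceeds `threshold` times its
    peak: a FWHM-style estimate of the collimated beam size."""
    peak = max(profile)
    hits = [i for i, v in enumerate(profile) if v >= threshold * peak]
    return (min(hits), max(hits)) if hits else None
```

On a synthetic image with a bright three-pixel-wide band, the column profile isolates the band and the half-maximum extent recovers its position and width.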
Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.
2015-12-21
This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptop to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results.
Improving the Salammbo code modelling and using it to better predict radiation belts dynamics
NASA Astrophysics Data System (ADS)
Maget, Vincent; Sicard-Piet, Angelica; Grimald, Sandrine Rochel; Boscher, Daniel
2016-07-01
In the framework of the FP7-SPACESTORM project, one objective is to improve the reliability of the model-based predictions of radiation belt dynamics first developed during the FP7-SPACECAST project. To this end we have analyzed and improved the way simulations with the ONERA Salammbô code are performed, in particular by: better controlling the driving parameters of the simulation; improving the initialization of the simulation so as to be more accurate at most energies for L values between 4 and 6; and improving the physics of the model. For the first point, a statistical analysis of the accuracy of the Kp index has been conducted. For the second point, our method is based on a long-duration simulation from which typical radiation belt states are extracted as a function of solar wind stress and geomagnetic activity. For the last point, we first improved the modelling of the different processes acting in the radiation belts separately, and then analyzed the global improvement obtained when simulating them together. We discuss all these points and the balance that has to be struck between modelled processes to improve radiation belt modelling globally.
HELIOS-CR: A 1-D radiation-magnetohydrodynamics code with inline atomic kinetics modeling
NASA Astrophysics Data System (ADS)
Macfarlane, J. J.; Golovkin, I. E.; Woodruff, P. R.
2006-05-01
HELIOS-CR is a user-oriented 1D radiation-magnetohydrodynamics code to simulate the dynamic evolution of laser-produced plasmas and z-pinch plasmas. It includes an in-line collisional-radiative (CR) model for computing non-LTE atomic level populations at each time step of the hydrodynamics simulation. HELIOS-CR has been designed for ease of use, and is well-suited for experimentalists, as well as graduate and undergraduate student researchers. The energy equations employed include models for laser energy deposition, radiation from external sources, and high-current discharges. Radiative transport can be calculated using either a multi-frequency flux-limited diffusion model, or a multi-frequency, multi-angle short characteristics model. HELIOS-CR supports the use of SESAME equation of state (EOS) tables, PROPACEOS EOS/multi-group opacity data tables, and non-LTE plasma properties computed using the inline CR modeling. Time-, space-, and frequency-dependent results from HELIOS-CR calculations are readily displayed with the HydroPLOT graphics tool. In addition, the results of HELIOS simulations can be post-processed using the SPECT3D Imaging and Spectral Analysis Suite to generate images and spectra that can be directly compared with experimental measurements. The HELIOS-CR package runs on Windows, Linux, and Mac OSX platforms, and includes online documentation. We will discuss the major features of HELIOS-CR, and present example results from simulations.
CASTRO: A New AMR Radiation-Hydrodynamics Code for Compressible Astrophysics
NASA Astrophysics Data System (ADS)
Almgren, Ann; Bell, J.; Day, M.; Howell, L.; Joggerst, C.; Myra, E.; Nordhaus, J.; Singer, M.; Zingale, M.
2010-01-01
CASTRO is a new, multi-dimensional, Eulerian AMR radiation-hydrodynamics code designed for astrophysical simulations. The code includes routines for various equations of state and nuclear reaction networks, and can be used with Cartesian, cylindrical or spherical coordinates. Time integration of the hydrodynamics equations is based on a higher-order, unsplit Godunov scheme. Self-gravity can be calculated on the adaptive hierarchy using a simple monopole approximation or a full Poisson solve for the potential. CASTRO includes gray and multigroup radiation diffusion. Multi-species neutrino diffusion for supernovae is nearing completion. The adaptive framework of CASTRO is based on a time-evolving hierarchy of nested rectangular grids with refinement in both space and time; the entire implementation is designed to run on thousands of processors. We describe in more detail how CASTRO is implemented and can be used for a number of different simulations. Our initial applications of CASTRO include Type Ia and Type II supernovae. This work has been supported by the SciDAC Program of the DOE Office of Mathematics, Information, and Computational Sciences under contracts No. DE-AC02-05CH11231 (LBNL), No. DE-FC02-06ER41438 (UCSC), and No. DE-AC52-07NA27344 (LLNL); and LLNL contracts B582735 and B574691 (Stony Brook). Calculations shown were carried out on Franklin at NERSC.
WASP-12b According to the Bayesian Atmospheric Radiative Transfer (BART) Code
NASA Astrophysics Data System (ADS)
Harrington, Joseph; Cubillos, Patricio E.; Blecic, Jasmina; Challener, Ryan C.; Rojo, Patricio M.; Lust, Nate B.; Bowman, M. Oliver; Blumenthal, Sarah D.; Foster, Andrew SD; Foster, A. J.
2015-11-01
We present the Bayesian Atmospheric Radiative Transfer (BART) code for atmospheric property retrievals from transit and eclipse spectra, and apply it to WASP-12b, a hot (~3000 K) exoplanet with a high eclipse signal-to-noise ratio. WASP-12b has been controversial. We (Madhusudhan et al. 2011, Nature) claimed it was the first planet with a high C/O abundance ratio. Line et al. (2014, ApJ) suggested a high CO2 abundance to explain the data. Stevenson et al. (2014, ApJ, atmospheric model by Madhusudhan) add additional data and reaffirm the original result, stating that C2H2 and HCN, not included in the Line et al. models, explain the data. We explore several modeling configurations and include Hubble, Spitzer, and ground-based eclipse data. BART consists of a differential-evolution Markov-Chain Monte Carlo sampler that drives a line-by-line radiative transfer code through the phase space of thermal- and abundance-profile parameters. BART is written in Python and C. Python modules generate atmospheric profiles from sets of MCMC parameters and integrate the resulting spectra over observational bandpasses, allowing high flexibility in modeling the planet without interacting with the fast, C portions that calculate the spectra. BART's shared memory and optimized opacity calculation allow it to run on a laptop, enabling classroom use. Runs can scale constant abundance profiles, profiles of thermochemical equilibrium abundances (TEA) calculated by the included TEA code, or arbitrary curves. Several thermal profile parameterizations are available. BART is an open-source, reproducible-research code. Users must release any code or data modifications if they publish results from it, and we encourage the community to use it and to participate in its development via http://github.com/ExOSPORTS/BART. This work was supported by NASA Planetary Atmospheres grant NNX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G. J. Blecic holds a NASA Earth and Space Science
A Random Walk on WASP-12b with the Bayesian Atmospheric Radiative Transfer (BART) Code
NASA Astrophysics Data System (ADS)
Harrington, Joseph; Cubillos, Patricio; Blecic, Jasmina; Challener, Ryan; Rojo, Patricio; Lust, Nathaniel B.; Bowman, Oliver; Blumenthal, Sarah D.; Foster, Andrew S. D.; Foster, Austin James; Stemm, Madison; Bruce, Dylan
2016-01-01
We present the Bayesian Atmospheric Radiative Transfer (BART) code for atmospheric property retrievals from transit and eclipse spectra, and apply it to WASP-12b, a hot (~3000 K) exoplanet with a high eclipse signal-to-noise ratio. WASP-12b has been controversial. We (Madhusudhan et al. 2011, Nature) claimed it was the first planet with a high C/O abundance ratio. Line et al. (2014, ApJ) suggested a high CO2 abundance to explain the data. Stevenson et al. (2014, ApJ, atmospheric model by Madhusudhan) add additional data and reaffirm the original result, stating that C2H2 and HCN, not included in the Line et al. models, explain the data. We explore several modeling configurations and include Hubble, Spitzer, and ground-based eclipse data. BART consists of a differential-evolution Markov-Chain Monte Carlo sampler that drives a line-by-line radiative transfer code through the phase space of thermal- and abundance-profile parameters. BART is written in Python and C. Python modules generate atmospheric profiles from sets of MCMC parameters and integrate the resulting spectra over observational bandpasses, allowing high flexibility in modeling the planet without interacting with the fast, C portions that calculate the spectra. BART's shared memory and optimized opacity calculation allow it to run on a laptop, enabling classroom use. Runs can scale constant abundance profiles, profiles of thermochemical equilibrium abundances (TEA) calculated by the included TEA code, or arbitrary curves. Several thermal profile parameterizations are available. BART is an open-source, reproducible-research code. Users must release any code or data modifications if they publish results from it, and we encourage the community to use it and to participate in its development via http://github.com/ExOSPORTS/BART. This work was supported by NASA Planetary Atmospheres grant NNX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G. J. Blecic holds a NASA Earth and Space Science
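The differential-evolution Markov-chain Monte Carlo sampler at the heart of BART can be illustrated with a minimal sketch. This is not BART's implementation: a toy Gaussian log-posterior stands in for the radiative-transfer likelihood, and the tuning constants follow the common ter Braak (2006) prescription:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post(theta):
    # Stand-in "posterior": unit Gaussian. In BART this would be the
    # goodness-of-fit of a modeled spectrum against the observed one.
    return -0.5 * np.sum(theta ** 2)

def demc(n_chains=8, n_steps=2000, ndim=2):
    """Minimal differential-evolution MCMC: each chain proposes a jump
    along the difference vector of two other randomly chosen chains."""
    gamma = 2.38 / np.sqrt(2 * ndim)        # standard DE-MC jump scale
    # Initialize chains at the target's own scale (so no burn-in needed
    # for this toy example).
    chains = rng.normal(size=(n_chains, ndim))
    logp = np.array([log_post(c) for c in chains])
    samples = []
    for _ in range(n_steps):
        for i in range(n_chains):
            a, b = rng.choice([j for j in range(n_chains) if j != i],
                              size=2, replace=False)
            prop = (chains[i] + gamma * (chains[a] - chains[b])
                    + rng.normal(scale=1e-4, size=ndim))
            lp = log_post(prop)
            if np.log(rng.uniform()) < lp - logp[i]:   # Metropolis accept
                chains[i], logp[i] = prop, lp
            samples.append(chains[i].copy())
    return np.array(samples)

samples_de = demc()
```

The recovered sample mean and spread should match the known unit-Gaussian target, which is the usual sanity check before pointing such a sampler at a real forward model.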
A Radiation Chemistry Code Based on the Greens Functions of the Diffusion Equation
NASA Technical Reports Server (NTRS)
Plante, Ianik; Wu, Honglu
2014-01-01
Ionizing radiation produces several radiolytic species such as •OH, e-aq, and H• when interacting with biological matter. Following their creation, radiolytic species diffuse and chemically react with biological molecules such as DNA. Despite years of research, many questions on the DNA damage by ionizing radiation remain, notably on the indirect effect, i.e. the damage resulting from the reactions of the radiolytic species with DNA. To simulate DNA damage by ionizing radiation, we are developing a step-by-step radiation chemistry code that is based on the Green's functions of the diffusion equation (GFDE), which is able to follow the trajectories of all particles and their reactions with time. In recent years, simulations based on the GFDE have been used extensively in biochemistry, notably to simulate biochemical networks in time and space, and are often used as the "gold standard" to validate diffusion-reaction theories. The exact GFDE for partially diffusion-controlled reactions is difficult to use because of its complex form. Therefore, the radial Green's function, which is much simpler, is often used. Hence, much effort has been devoted to the sampling of the radial Green's functions, for which we have developed a sampling algorithm. This algorithm only yields the inter-particle distance vector length after a time step; the sampling of the deviation angle of the inter-particle vector is not taken into consideration. In this work, we show that the radial distribution is predicted by the exact radial Green's function. We also use a technique developed by Clifford et al. to generate the inter-particle vector deviation angles, knowing the inter-particle vector length before and after a time step. The results are compared with those predicted by the exact GFDE and by the analytical angular functions for free diffusion. This first step in the creation of the radiation chemistry code should help the understanding of the contribution of the indirect effect in the
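The free-diffusion limit of the propagation step discussed above is easy to sketch: the inter-particle separation vector diffuses with the mutual diffusion coefficient, and sampling the full vector yields both the new distance and the deviation angle. The illustrative fragment below assumes free (non-interacting) diffusion, not the partially diffusion-controlled Green's function used in the code, and all parameter values are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

def propagate_pair(r0, D, dt, n=200000):
    """Free-diffusion step for an inter-particle separation vector.

    For two freely diffusing particles the separation diffuses with the
    mutual coefficient D = D1 + D2; each Cartesian component picks up a
    Gaussian displacement of variance 2*D*dt (the free-diffusion
    Green's function).
    """
    r = np.zeros((n, 3))
    r[:, 2] = r0                               # initial separation along z
    r += rng.normal(scale=np.sqrt(2 * D * dt), size=(n, 3))
    dist = np.linalg.norm(r, axis=1)           # sampled |r| after dt
    cos_dev = r[:, 2] / dist                   # cosine of deviation angle
    return dist, cos_dev

dist, cosd = propagate_pair(r0=1.0, D=0.5, dt=0.1)
```

Sampling the vector, rather than only the radial distance, is exactly what supplies the deviation angle that the abstract notes is missing from a radial-only algorithm; here the mean squared separation can be checked against the analytic value r0² + 6 D dt.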
Review of Fast Monte Carlo Codes for Dose Calculation in Radiation Therapy Treatment Planning
Jabbari, Keyvan
2011-01-01
An important requirement in radiation therapy is a fast and accurate treatment planning system. This system, using computed tomography (CT) data and the direction and characteristics of the beam, calculates the dose at all points of the patient's volume. The two main factors in a treatment planning system are accuracy and speed, and various generations of treatment planning systems have been developed according to these factors. This article is a review of fast Monte Carlo treatment planning algorithms, which are accurate and fast at the same time. Monte Carlo techniques are based on the transport of each individual particle (e.g., photon or electron) in the tissue. The transport of the particle is done using the physics of the interaction of the particles with matter; other techniques transport the particles as a group. For a typical dose calculation in radiation therapy the code has to transport several million particles, which takes a few hours; therefore, Monte Carlo techniques are accurate but slow for clinical use. In recent years, with the development of 'fast' Monte Carlo systems, one is able to perform dose calculation in a reasonable time for clinical use. The acceptable time for dose calculation is in the range of one minute. There is currently a growing interest in fast Monte Carlo treatment planning systems, and there are many commercial treatment planning systems that perform dose calculation in radiation therapy based on the Monte Carlo technique. PMID:22606661
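The core of any such Monte Carlo dose engine, sampling free paths and tallying energy where interactions occur, can be reduced to a toy example. The sketch below is a deliberately simplified 1-D photon model (local energy deposition, no scatter transport), not a clinical algorithm; the attenuation coefficient and geometry are invented:

```python
import random

def depth_dose(n_photons=100000, mu=0.2, depth=30.0, nbins=30, seed=42):
    """Toy 1-D photon Monte Carlo: sample exponential free paths in a
    water-like slab (attenuation coefficient mu, per cm) and tally unit
    energy locally at each first-interaction site."""
    random.seed(seed)
    dose = [0.0] * nbins
    bin_w = depth / nbins
    for _ in range(n_photons):
        z = random.expovariate(mu)          # depth of first interaction
        if z < depth:
            dose[int(z / bin_w)] += 1.0     # deposit unit energy locally
    return dose

d = depth_dose()
```

The tally in the first bin should approach the analytic interaction probability 1 − e^(−mu·bin_w) ≈ 0.181, and the depth-dose curve falls off exponentially; a real engine adds scattering, electron transport, and heterogeneous CT-derived densities on top of this skeleton.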
NASA Technical Reports Server (NTRS)
Mashnik, S. G.; Gudima, K. K.; Sierk, A. J.; Moskalenko, I. V.
2002-01-01
Space radiation shield applications and studies of cosmic ray propagation in the Galaxy require reliable cross sections to calculate spectra of secondary particles and yields of the isotopes produced in nuclear reactions induced both by particles and nuclei at energies from threshold to hundreds of GeV per nucleon. Since the data often exist in a very limited energy range or sometimes not at all, the only way to obtain an estimate of the production cross sections is to use theoretical models and codes. Recently, we have developed improved versions of the Cascade-Exciton Model (CEM) of nuclear reactions: the codes CEM97 and CEM2k for description of particle-nucleus reactions at energies up to about 5 GeV. In addition, we have developed a LANL version of the Quark-Gluon String Model (LAQGSM) to describe reactions induced both by particles and nuclei at energies up to hundreds of GeV/nucleon. We have tested and benchmarked the CEM and LAQGSM codes against a large variety of experimental data and have compared their results with predictions by other currently available models and codes. Our benchmarks show that CEM and LAQGSM codes have predictive powers no worse than other currently used codes and describe many reactions better than other codes; therefore both our codes can be used as reliable event-generators for space radiation shield and cosmic ray propagation applications. The CEM2k code is being incorporated into the transport code MCNPX (and several other transport codes), and we plan to incorporate LAQGSM into MCNPX in the near future. Here, we present the current status of the CEM2k and LAQGSM codes, and show results and applications to studies of cosmic ray propagation in the Galaxy.
1982-11-18
Version 00 LPGS was developed to calculate the radiological impacts resulting from radioactive releases to the hydrosphere. The name LPGS was derived from the Liquid Pathway Generic Study for which the original code was used primarily as an analytic tool in the assessment process. The hydrosphere is represented by the following types of water bodies: estuary, small river, well, lake, and one-dimensional (1-D) river. LPGS is designed to calculate radiation dose (individual and population) to body organs as a function of time for the various exposure pathways. The radiological consequences to the aquatic biota are estimated. Several simplified radionuclide transport models are employed with built-in formulations to describe the release rate of the radionuclides. A tabulated user-supplied release model can be input, if desired. Printer plots of dose versus time for the various exposure pathways are provided.
Development and validation of a GEANT4 radiation transport code for CT dosimetry
Carver, DE; Kost, SD; Fernald, MJ; Lewis, KG; Fraser, ND; Pickens, DR; Price, RR; Stabin, MG
2014-01-01
We have created a radiation transport code using the GEANT4 Monte Carlo toolkit to simulate pediatric patients undergoing CT examinations. The focus of this paper is to validate our simulation with real-world physical dosimetry measurements using two independent techniques. Exposure measurements were made with a standard 100-mm CT pencil ionization chamber, and absorbed doses were also measured using optically stimulated luminescent (OSL) dosimeters. Measurements were made in air, a standard 16-cm acrylic head phantom, and a standard 32-cm acrylic body phantom. Physical dose measurements determined from the ionization chamber in air for 100 and 120 kVp beam energies were used to derive photon-fluence calibration factors. Both ion chamber and OSL measurement results provide useful comparisons in the validation of our Monte Carlo simulations. We found that simulated and measured CTDI values were within an overall average of 6% of each other. PMID:25706135
NASA Astrophysics Data System (ADS)
Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.
2016-03-01
This work discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package authored at Oak Ridge National Laboratory. Shift has been developed to scale well from laptops to small computing clusters to advanced supercomputers and includes features such as support for multiple geometry and physics engines, hybrid capabilities for variance reduction methods such as the Consistent Adjoint-Driven Importance Sampling methodology, advanced parallel decompositions, and tally methods optimized for scalability on supercomputing architectures. The scaling studies presented in this paper demonstrate good weak and strong scaling behavior for the implemented algorithms. Shift has also been validated and verified against various reactor physics benchmarks, including the Consortium for Advanced Simulation of Light Water Reactors' Virtual Environment for Reactor Analysis criticality test suite and several Westinghouse AP1000® problems presented in this paper. These benchmark results compare well to those from other contemporary Monte Carlo codes such as MCNP5 and KENO.
An object-oriented implementation of a parallel Monte Carlo code for radiation transport
NASA Astrophysics Data System (ADS)
Santos, Pedro Duarte; Lani, Andrea
2016-05-01
This paper describes the main features of a state-of-the-art Monte Carlo solver for radiation transport which has been implemented within COOLFluiD, a world-class open source object-oriented platform for scientific simulations. The Monte Carlo code makes use of efficient ray tracing algorithms (for 2D, axisymmetric and 3D arbitrary unstructured meshes) which are described in detail. The solver accuracy is first verified in testcases for which analytical solutions are available, then validated for a space re-entry flight experiment (i.e. FIRE II) for which comparisons against both experiments and reference numerical solutions are provided. Through the flexible design of the physical models, ray tracing and parallelization strategy (fully reusing the mesh decomposition inherited by the fluid simulator), the implementation was made efficient and reusable.
Development and validation of a GEANT4 radiation transport code for CT dosimetry.
Carver, D E; Kost, S D; Fernald, M J; Lewis, K G; Fraser, N D; Pickens, D R; Price, R R; Stabin, M G
2015-04-01
The authors have created a radiation transport code using the GEANT4 Monte Carlo toolkit to simulate pediatric patients undergoing CT examinations. The focus of this paper is to validate their simulation with real-world physical dosimetry measurements using two independent techniques. Exposure measurements were made with a standard 100-mm CT pencil ionization chamber, and absorbed doses were also measured using optically stimulated luminescent (OSL) dosimeters. Measurements were made in air with a standard 16-cm acrylic head phantom and with a standard 32-cm acrylic body phantom. Physical dose measurements determined from the ionization chamber in air for 100 and 120 kVp beam energies were used to derive photon-fluence calibration factors. Both ion chamber and OSL measurement results provide useful comparisons in the validation of the Monte Carlo simulations. It was found that simulated and measured CTDI values were within an overall average of 6% of each other. PMID:25706135
A study of the earth radiation budget using a 3D Monte-Carlo radiative transfer code
NASA Astrophysics Data System (ADS)
Okata, M.; Nakajima, T.; Sato, Y.; Inoue, T.; Donovan, D. P.
2013-12-01
The purpose of this study is to evaluate the earth's radiation budget when data are available from satellite-borne active sensors, i.e. cloud profiling radar (CPR) and lidar, and a multi-spectral imager (MSI) in the project of the Earth Explorer/EarthCARE mission. For this purpose, we first developed forward and backward 3D Monte Carlo radiative transfer codes that can treat a broadband solar flux calculation including thermal infrared emission calculation by k-distribution parameters of Sekiguchi and Nakajima (2008). In order to construct the 3D cloud field, we tried the following three methods: 1) stochastic cloud generated by randomized optical thickness each layer distribution and regularly-distributed tilted clouds, 2) numerical simulations by a non-hydrostatic model with bin cloud microphysics model and 3) Minimum cloud Information Deviation Profiling Method (MIDPM) as explained later. As for the method-2 (numerical modeling method), we employed numerical simulation results of Californian summer stratus clouds simulated by a non-hydrostatic atmospheric model with a bin-type cloud microphysics model based on the JMA NHM model (Iguchi et al., 2008; Sato et al., 2009, 2012) with horizontal (vertical) grid spacing of 100m (20m) and 300m (20m) in a domain of 30km (x), 30km (y), 1.5km (z) and with a horizontally periodic lateral boundary condition. Two different cell systems were simulated depending on the cloud condensation nuclei (CCN) concentration. In the case of horizontal resolution of 100m, regionally averaged cloud optical thickness,
Park, Jinhyoung; Lee, Jungwoo; Lau, Sien Ting; Lee, Changyang; Huang, Ying; Lien, Ching-Ling; Kirk Shung, K
2012-04-01
Acoustic radiation force impulse (ARFI) imaging has been developed as a non-invasive method for quantitative illustration of tissue stiffness or displacement. Conventional ARFI imaging (2-10 MHz) has been implemented in commercial scanners for illustrating elastic properties of several organs. The image resolution, however, is too coarse to study mechanical properties of micro-sized objects such as cells. This article thus presents a high-frequency coded excitation ARFI technique, with the ultimate goal of displaying elastic characteristics of cellular structures. Tissue mimicking phantoms and zebrafish embryos are imaged with a 100-MHz lithium niobate (LiNbO₃) transducer, by cross-correlating tracked RF echoes with the reference. The phantom results show that the contrast of ARFI image (14 dB) with coded excitation is better than that of the conventional ARFI image (9 dB). The depths of penetration are 2.6 and 2.2 mm, respectively. The stiffness data of the zebrafish demonstrate that the envelope is harder than the embryo region. The temporal displacement change at the embryo and the chorion is as large as 36 and 3.6 μm. Consequently, this high-frequency ARFI approach may serve as a remote palpation imaging tool that reveals viscoelastic properties of small biological samples. PMID:22101757
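The displacement-tracking step, cross-correlating tracked RF echoes with a reference line, can be sketched as follows. This is an illustrative fragment, not the authors' processing chain; the sampling rate, pulse shape, and sound speed are assumed values chosen only to mimic a high-frequency system:

```python
import numpy as np

def track_displacement(ref, tracked, fs, c=1540.0):
    """Estimate axial displacement by cross-correlating a tracked RF
    line with a reference line, as in ARFI imaging: the lag of the
    correlation peak gives the echo time shift, which is converted to a
    one-way displacement in micrometers (round trip halved)."""
    xc = np.correlate(tracked, ref, mode="full")
    lag = np.argmax(xc) - (len(ref) - 1)     # delay in samples
    return lag / fs * c / 2 * 1e6            # displacement in um

fs = 500e6                                   # assumed 500 MHz sampling
t = np.arange(0, 2e-6, 1 / fs)
# Gaussian-windowed 100 MHz tone burst as a stand-in RF echo
ref = np.sin(2 * np.pi * 100e6 * t) * np.exp(-((t - 1e-6) / 2e-7) ** 2)
tracked = np.roll(ref, 10)                   # simulate a 10-sample delay
disp = track_displacement(ref, tracked, fs)
```

Sub-sample interpolation of the correlation peak, coded excitation, and motion filtering would be layered on top of this in a practical high-frequency ARFI system.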
NASA Astrophysics Data System (ADS)
Hilmy, N.; Febrida, A.; Basril, A.
2007-11-01
The problems in applying International Standard (ISO) 11137 to validate the radiation sterilization dose (RSD) of tissue allografts are the limited and low numbers of uniform samples per production batch, i.e., products obtained from one donor. An allograft is a graft transplanted between two different individuals of the same species. The minimum number of uniform samples needed for a verification dose (VD) experiment at the selected sterility assurance level (SAL) per production batch according to the IAEA Code is 20, i.e., 10 for bio-burden determination and the remaining 10 for the sterility test. Three methods of the IAEA Code have been used for validation of the RSD, i.e., method A1, which is a modification of method 1 of ISO 11137:1995, method B (ISO 13409:1996), and method C (AAMI TIR 27:2001). This paper describes VD experiments using uniform products obtained from one cadaver donor, i.e., cancellous bones and demineralized bone powders, and amnion grafts from one living donor. Results of the verification dose experiments show that the RSD is 15.4 kGy for cancellous and demineralized bone grafts and 19.2 kGy for amnion grafts according to method A1, and 25 kGy according to methods B and C.
Breast Lesions Evaluated by Color-Coded Acoustic Radiation Force Impulse (ARFI) Imaging.
Zhou, JianQiao; Yang, ZhiFang; Zhan, WeiWei; Zhang, JingWen; Hu, Na; Dong, YiJie; Wang, YingYing
2016-07-01
The goal of our study was to investigate the value of color-coded Virtual Touch tissue imaging (VTI) using acoustic radiation force impulse (ARFI) technology in the characterization of breast lesions and to compare it with conventional ultrasound (US). Conventional US and color-coded VTI were performed in 196 solid breast lesions in 196 consecutive women (age range 17-91 y; mean 48.17 ± 14.46 y). A four-point scale VTI score was assigned for each lesion according to the color pattern both in the lesion and in the surrounding breast tissue. The mean VTI score was significantly higher for malignant lesions (3.80 ± 0.66, range 1-4) than for benign ones (2.02 ± 1.20, range 1-4) (p < 0.001), and the optimal cut-off value was between score 3 and score 4. The area under the receiver operating characteristic (ROC) curve for combined conventional US and VTI (0.945) was significantly higher than that for conventional US (0.902) and for VTI (0.871) (p = 0.0021 and p < 0.001, respectively). It was concluded that color-coded VTI with the proposed four-point scale score system combined with conventional US might have the potential to aid in the characterization of benign and malignant breast lesions. PMID:27131841
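The ROC analysis used above reduces to a simple rank statistic: the AUC equals the probability that a randomly chosen malignant lesion receives a higher score than a randomly chosen benign one (ties counting half). A minimal sketch with invented four-point VTI scores, not the study's data:

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic: the
    fraction of (positive, negative) pairs ranked correctly, with ties
    contributing one half."""
    sp = np.asarray(scores_pos, float)[:, None]
    sn = np.asarray(scores_neg, float)[None, :]
    return (sp > sn).mean() + 0.5 * (sp == sn).mean()

# Illustrative 4-point scores (hypothetical, for demonstration only)
malignant = [4, 4, 4, 3, 4, 2]
benign = [1, 2, 1, 3, 2, 1]
a = auc(malignant, benign)
```

Comparing two AUCs on the same lesions, as the study does for US alone versus US plus VTI, additionally requires a correlated-ROC test such as DeLong's method.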
Ballarini, Francesca; Altieri, Saverio; Bortolussi, Silva; Carante, Mario; Giroletti, Elio; Protti, Nicoletta
2014-08-01
This paper presents a biophysical model of radiation-induced cell death, implemented as a Monte Carlo code called BIophysical ANalysis of Cell death and chromosome Aberrations (BIANCA), based on the assumption that some chromosome aberrations (dicentrics, rings, and large deletions, called "lethal aberrations") lead to clonogenic inactivation. In turn, chromosome aberrations are assumed to derive from clustered, and thus severe, DNA lesions (called "cluster lesions," or CL) interacting at the micrometer scale; the CL yield and the threshold distance governing CL interaction are the only model parameters. After a pilot study on V79 hamster cells exposed to protons and carbon ions, in the present work the model was extended and applied to AG1522 human cells exposed to photons, He ions, and heavier ions including carbon and neon. The agreement with experimental survival data taken from the literature supported the assumptions. In particular, the inactivation of AG1522 cells was explained by lethal aberrations not only for X-rays, as already reported by others, but also for the aforementioned radiation types. Furthermore, the results are consistent with the hypothesis that the critical initial lesions leading to cell death are DNA cluster lesions having yields on the order of ~2 CL per Gy per cell at low LET and ~20 CL per Gy per cell at high LET, and that the processing of these lesions is modulated by proximity effects at the micrometer scale related to interphase chromatin organization. The model was then applied to calculate the fraction of inactivated cells, as well as the yields of lethal aberrations and cluster lesions, as a function of LET; the results showed a maximum around 130 keV/μm, and such maximum was much higher for cluster lesions and lethal aberrations than for cell inactivation. PMID:24659413
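The central assumption, that a cell survives only if it carries zero lethal aberrations, can be sketched with a toy Monte Carlo. The parameter values below are illustrative, not the published BIANCA parameters, and the micrometer-scale proximity effects that govern CL interaction in the real model are omitted:

```python
import numpy as np

rng = np.random.default_rng(7)

def survival_fraction(dose_gy, cl_per_gy, p_lethal, n_cells=100000):
    """Toy version of the lethal-aberration assumption: cluster lesions
    (CL) are Poisson-distributed with a dose-proportional mean; each CL
    independently yields a lethal aberration with probability p_lethal;
    a cell survives only with zero lethal aberrations."""
    cl = rng.poisson(cl_per_gy * dose_gy, size=n_cells)
    lethal = rng.binomial(cl, p_lethal)
    return np.mean(lethal == 0)

s2 = survival_fraction(dose_gy=2.0, cl_per_gy=2.0, p_lethal=0.2)
```

Because a thinned Poisson process is still Poisson, this toy model predicts S = exp(−p·λ·D) analytically (here exp(−0.8) ≈ 0.449), which is the check applied below; the curvature of real survival curves enters through the pairwise CL interactions that this sketch deliberately leaves out.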
Kostin, Mikhail; Mokhov, Nikolai; Niita, Koji
2013-09-25
A parallel computing framework has been developed to use with general-purpose radiation transport codes. The framework was implemented as a C++ module that uses MPI for message passing. It is intended to be used with older radiation transport codes implemented in Fortran77, Fortran 90 or C. The module is significantly independent of radiation transport codes it can be used with, and is connected to the codes by means of a number of interface functions. The framework was developed and tested in conjunction with the MARS15 code. It is possible to use it with other codes such as PHITS, FLUKA and MCNP after certain adjustments. Besides the parallel computing functionality, the framework offers a checkpoint facility that allows restarting calculations with a saved checkpoint file. The checkpoint facility can be used in single process calculations as well as in the parallel regime. The framework corrects some of the known problems with the scheduling and load balancing found in the original implementations of the parallel computing functionality in MARS15 and PHITS. The framework can be used efficiently on homogeneous systems and networks of workstations, where the interference from the other users is possible.
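The checkpoint facility described above can be illustrated with a minimal single-process sketch: after each batch of histories, the tally and the random-number-generator state are saved, so a resumed run reproduces the uninterrupted one exactly. This is a toy Python analogue, not the C++/MPI framework itself, and the "score" is an invented stand-in for a transport tally:

```python
import os
import pickle
import random

def run_histories(total, ckpt="mc.ckpt", batch=1000):
    """Run `total` toy Monte Carlo histories with checkpoint/restart:
    if a checkpoint file exists, resume from its saved RNG state and
    tally; otherwise start fresh. The checkpoint is rewritten after
    every batch, so an interrupted job never repeats histories."""
    if os.path.exists(ckpt):
        with open(ckpt, "rb") as f:
            state = pickle.load(f)
        random.setstate(state["rng"])
        done, tally = state["done"], state["tally"]
    else:
        random.seed(123)
        done, tally = 0, 0.0
    while done < total:
        n = min(batch, total - done)
        tally += sum(random.expovariate(1.0) for _ in range(n))  # toy score
        done += n
        with open(ckpt, "wb") as f:
            pickle.dump({"rng": random.getstate(),
                         "done": done, "tally": tally}, f)
    return tally / done

mean_a = run_histories(5000)     # uninterrupted run
os.remove("mc.ckpt")
```

Because the RNG state travels with the checkpoint, a run stopped after 2000 histories and resumed to 5000 yields bit-for-bit the same answer as an uninterrupted 5000-history run, which is the property the framework's checkpoint facility provides for long transport calculations.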
European Code against Cancer 4th Edition: Ultraviolet radiation and cancer.
Greinert, Rüdiger; de Vries, Esther; Erdmann, Friederike; Espina, Carolina; Auvinen, Anssi; Kesminiene, Ausrele; Schüz, Joachim
2015-12-01
Ultraviolet radiation (UVR) is part of the electromagnetic spectrum emitted naturally from the sun or from artificial sources such as tanning devices. Acute skin reactions induced by UVR exposure are erythema (skin reddening), or sunburn, and the acquisition of a suntan triggered by UVR-induced DNA damage. UVR exposure is the main cause of skin cancer, including cutaneous malignant melanoma, basal-cell carcinoma, and squamous-cell carcinoma. Skin cancer is the most common cancer in fair-skinned populations, and its incidence has increased steeply over recent decades. According to estimates for 2012, about 100,000 new cases of cutaneous melanoma and about 22,000 deaths from it occurred in Europe. The main mechanisms by which UVR causes cancer are well understood. Exposure during childhood appears to be particularly harmful. Exposure to UVR is a risk factor modifiable by individuals' behaviour. Excessive exposure from natural sources can be avoided by seeking shade when the sun is strongest, by wearing appropriate clothing, and by appropriately applying sunscreens if direct sunlight is unavoidable. Exposure from artificial sources can be completely avoided by not using sunbeds. Beneficial effects of sun or UVR exposure, such as for vitamin D production, can be fully achieved while still avoiding too much sun exposure and the use of sunbeds. Taking all the scientific evidence together, the recommendation of the 4th edition of the European Code Against Cancer for ultraviolet radiation is: "Avoid too much sun, especially for children. Use sun protection. Do not use sunbeds." PMID:26096748
Odyssey: A Public GPU-based Code for General Relativistic Radiative Transfer in Kerr Spacetime
NASA Astrophysics Data System (ADS)
Pu, Hung-Yi; Yun, Kiyun; Younsi, Ziri; Yoon, Suk-Jin
2016-04-01
General relativistic radiative transfer calculations coupled with the calculation of geodesics in the Kerr spacetime are an essential tool for determining the images, spectra, and light curves from matter in the vicinity of black holes. Such studies are especially important for ongoing and upcoming millimeter/submillimeter very long baseline interferometry observations of the supermassive black holes at the Galactic Center (Sgr A*) and in M87. To this end we introduce Odyssey, a graphics processing unit (GPU) based code for ray tracing and radiative transfer in the Kerr spacetime. On a single GPU, the performance of Odyssey can reach 1 ns per photon, per Runge-Kutta integration step. Odyssey is publicly available, fast, accurate, and flexible enough to be modified to suit the specific needs of new users. Along with a Graphical User Interface powered by a video-accelerated display architecture, we also present an educational software tool, Odyssey_Edu, for showing in real time how null geodesics around a Kerr black hole vary as a function of black hole spin and angle of incidence onto the black hole.
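The per-photon, per-step cost quoted above refers to Runge-Kutta integration of null geodesics. As a simplified illustration (Schwarzschild rather than Kerr, and not Odyssey's actual scheme), the photon orbit equation u'' = -u + 3Mu^2, with u = 1/r and azimuth phi as the independent variable (G = c = 1), can be integrated with a classical RK4 stepper:

```python
import math

def rk4_step(f, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(y)."""
    n = len(y)
    k1 = f(y)
    k2 = f([y[i] + 0.5 * h * k1[i] for i in range(n)])
    k3 = f([y[i] + 0.5 * h * k2[i] for i in range(n)])
    k4 = f([y[i] + h * k3[i] for i in range(n)])
    return [y[i] + h / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
            for i in range(n)]

def trace_photon(u0, du0, M, dphi, n_steps):
    """Integrate the Schwarzschild photon orbit equation
    u'' = -u + 3 M u^2  (u = 1/r, G = c = 1) over n_steps of azimuth."""
    geodesic = lambda y: [y[1], -y[0] + 3.0 * M * y[0] ** 2]
    y = [u0, du0]
    for _ in range(n_steps):
        y = rk4_step(geodesic, y, dphi)
    return y  # [u, du/dphi]

# For M = 0 the "geodesic" is a straight line, u(phi) = sin(phi) / b,
# which gives a direct accuracy check on the integrator.
```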
MOCRA: a Monte Carlo code for the simulation of radiative transfer in the atmosphere.
Premuda, Margherita; Palazzi, Elisa; Ravegnani, Fabrizio; Bortoli, Daniele; Masieri, Samuele; Giovanelli, Giorgio
2012-03-26
This paper describes the radiative transfer model (RTM) MOCRA (MOnte Carlo Radiance Analysis), developed in the frame of DOAS (Differential Optical Absorption Spectroscopy) to correctly interpret remote sensing measurements of trace gas amounts in the atmosphere through the calculation of the Air Mass Factor. Besides the DOAS-related quantities, the MOCRA code yields: (1) the atmospheric transmittance in the vertical and sun directions, (2) the direct and global irradiance, and (3) the single- and multiple-scattered radiance for a detector with assigned position, line of sight, and field of view. Sample calculations of the main radiometric quantities calculated with MOCRA are presented and compared with the output of another RTM (MODTRAN4). A further comparison is presented between the NO2 slant column densities (SCDs) measured with DOAS at Evora (Portugal) and those simulated with MOCRA. Both comparisons (MOCRA-MODTRAN4 and MOCRA-observations) gave more than satisfactory results, and overall establish MOCRA as a versatile tool for atmospheric radiative transfer simulations and the interpretation of remote sensing measurements. PMID:22453470
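The core of any Monte Carlo radiative transfer code is the sampling of photon free paths from the exponential attenuation law. A minimal sketch (illustrative only, not MOCRA's implementation): for a purely absorbing slab, the Monte Carlo estimate of the direct transmittance must reproduce the Beer-Lambert law exp(-tau).

```python
import math
import random

def direct_transmittance_mc(tau, n_photons=200_000, seed=1):
    """Estimate the fraction of photons that cross a purely absorbing slab
    of optical depth tau, by sampling exponential optical path lengths."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_photons):
        # optical path to the next interaction: -ln(xi), xi uniform in (0, 1]
        path = -math.log(1.0 - rng.random())
        if path > tau:
            transmitted += 1
    return transmitted / n_photons

# Beer-Lambert check: the estimate should converge to exp(-tau).
```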
Michalsky, J.; Harrison, L.
1992-03-17
Two tasks are included in the second year of this project. One task continues the collection of high quality data sets for the testing of radiation codes within climate models. The other task involves the development of accurate spectral instruments for the measurement of shortwave radiation. A third task was completed in the second half of the first year of the project and will be briefly summarized.
NASA Astrophysics Data System (ADS)
Porter, Jamie A.; Townsend, Lawrence W.; Spence, Harlan; Golightly, Michael; Schwadron, Nathan; Kasper, Justin; Case, Anthony W.; Blake, John B.; Zeitlin, Cary
2014-06-01
The Cosmic Ray Telescope for the Effects of Radiation (CRaTER), an instrument carried on the Lunar Reconnaissance Orbiter spacecraft, directly measures the energy depositions by solar and galactic cosmic radiations in its silicon wafer detectors. These energy depositions are converted to linear energy transfer (LET) spectra. High LET particles, which are mainly high-energy heavy ions found in the incident cosmic ray spectrum, or target fragments and recoils produced by protons and heavier ions, are of particular importance because of their potential to cause significant damage to human tissue and electronic components. Aside from providing LET data useful for space radiation risk analyses for lunar missions, the observed LET spectra can also be used to help validate space radiation transport codes, used for shielding design and risk assessment applications, which is a major thrust of this work. In this work the Monte Carlo transport code HETC-HEDS (High-Energy Transport Code-Human Exploration and Development in Space) is used to estimate LET contributions from the incident primary ions and their charged secondaries produced by nuclear collisions as they pass through the three pairs of silicon detectors. Also in this work, the contributions to the LET of the primary ions and their charged secondaries are analyzed and compared with estimates obtained using the deterministic space radiation code HZETRN 2010, developed at NASA Langley Research Center. LET estimates obtained from the two transport codes are compared with measurements of LET from the CRaTER instrument during the mission. Overall, a comparison of the LET predictions of the HETC-HEDS code to the predictions of the HZETRN code displays good agreement. The code predictions are also in good agreement with the CRaTER LET measurements above 15 keV/µm but differ from the measurements for smaller values of LET. A possible reason for this disagreement between measured and calculated spectra below 15 keV/µm is an
NASA Astrophysics Data System (ADS)
Takabe, Hideaki
A brief review is given of the physics of radiation transport, a topic that is important in the study of astrophysics, laser-plasmas, divertor-plasmas, etc. In general, we must solve non-local thermodynamic equilibrium processes using an appropriate atomic model. The resultant data related to the spectral emissivity and opacity of partially ionized plasmas are then used to solve the radiation transfer equation. In this note, I briefly overview a variety of ways to carry out such a calculation. In addition, similarities and differences in the physical process between laser-plasmas and divertor-plasmas are briefly described.
Development of a Space Radiation Monte-Carlo Computer Simulation Based on the FLUKA and ROOT Codes
NASA Technical Reports Server (NTRS)
Pinsky, L. S.; Wilson, T. L.; Ferrari, A.; Sala, Paola; Carminati, F.; Brun, R.
2001-01-01
The radiation environment in space is a complex problem to model. Trying to extrapolate the projections of that environment into all areas of the internal spacecraft geometry is even more daunting. With the support of our CERN colleagues, our research group in Houston is embarking on a project to develop a radiation transport tool that is tailored to the problem of taking the external radiation flux incident on any particular spacecraft and simulating the evolution of that flux through a geometrically accurate model of the spacecraft material. The output will be a prediction of the detailed nature of the resulting internal radiation environment within the spacecraft as well as its secondary albedo. Beyond doing the physics transport of the incident flux, the software tool we are developing will provide a self-contained stand-alone object-oriented analysis and visualization infrastructure. It will also include a graphical user interface and a set of input tools to facilitate the simulation of space missions in terms of nominal radiation models and mission trajectory profiles. The goal of this project is to produce a code that is considerably more accurate and user-friendly than existing Monte-Carlo-based tools for the evaluation of the space radiation environment. Furthermore, the code will be an essential complement to the currently existing analytic codes in the BRYNTRN/HZETRN family for the evaluation of radiation shielding. The code will be directly applicable to the simulation of environments in low earth orbit, on the lunar surface, on planetary surfaces (including the Earth) and in the interplanetary medium such as on a transit to Mars (and even in the interstellar medium). The software will include modules whose underlying physics base can continue to be enhanced and updated for physics content, as future data become available beyond the timeframe of the initial development now foreseen. This future maintenance will be available from the authors of FLUKA as
NASA Technical Reports Server (NTRS)
Armstrong, T. W.
1972-01-01
Several Monte Carlo radiation transport computer codes are used to predict quantities of interest in the fields of radiotherapy and radiobiology. The calculational methods are described, and comparisons of calculated and experimental results are presented for dose distributions produced by protons, neutrons, and negatively charged pions. Comparisons of calculated and experimental cell survival probabilities are also presented.
Modeling the physical structure of star-forming regions with LIME, a 3D radiative transfer code
NASA Astrophysics Data System (ADS)
Quénard, D.; Bottinelli, S.; Caux, E.
2016-05-01
The ability to predict line emission is crucial for comparison with observations. From LTE to full radiative transfer codes, the goal is always to derive the physical properties of the source as accurately as possible. Non-LTE calculations can be very time consuming but are needed in most cases, since many studied regions are far from LTE.
Shapiro, A.B.
1983-08-01
The computer code FACET calculates the radiation geometric view factor (alternatively called shape factor, angle factor, or configuration factor) between surfaces for axisymmetric, two-dimensional planar and three-dimensional geometries with interposed third surface obstructions. FACET was developed to calculate view factors for input to finite-element heat-transfer analysis codes. The first section of this report is a brief review of previous radiation-view-factor computer codes. The second section presents the defining integral equation for the geometric view factor between two surfaces and the assumptions made in its derivation. Also in this section are the numerical algorithms used to integrate this equation for the various geometries. The third section presents the algorithms used to detect self-shadowing and third-surface shadowing between the two surfaces for which a view factor is being calculated. The fourth section provides a user's input guide followed by several example problems.
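The defining integral mentioned above, F(1->2) = (1/A1) integral integral cos(t1) cos(t2) / (pi s^2) dA1 dA2, can be checked with a simple Monte Carlo estimate (illustrative only; FACET itself uses deterministic numerical integration). For coaxial parallel disks a standard closed-form solution is available for comparison:

```python
import math
import random

def mc_view_factor_disks(r1, r2, h, n=200_000, seed=2):
    """Monte Carlo estimate of F(1->2) between coaxial parallel disks of
    radii r1, r2 separated by h, from the defining double-area integral."""
    rng = random.Random(seed)
    a2 = math.pi * r2 * r2
    total = 0.0
    for _ in range(n):
        # uniform points on each disk (sqrt gives uniform area density)
        rho1, phi1 = r1 * math.sqrt(rng.random()), 2 * math.pi * rng.random()
        rho2, phi2 = r2 * math.sqrt(rng.random()), 2 * math.pi * rng.random()
        dx = rho2 * math.cos(phi2) - rho1 * math.cos(phi1)
        dy = rho2 * math.sin(phi2) - rho1 * math.sin(phi1)
        s2 = dx * dx + dy * dy + h * h
        # both surface normals point along the axis, so cos t = h / s
        total += (h * h / s2) / (math.pi * s2)
    return a2 * total / n  # F = A2 * <kernel> for uniform sampling

def analytic_view_factor_disks(r1, r2, h):
    """Closed-form F(1->2) for coaxial parallel disks."""
    R1, R2 = r1 / h, r2 / h
    X = 1.0 + (1.0 + R2 * R2) / (R1 * R1)
    return 0.5 * (X - math.sqrt(X * X - 4.0 * (R2 / R1) ** 2))
```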
The Havemann-Taylor Fast Radiative Transfer Code (HT-FRTC) and its applications
NASA Astrophysics Data System (ADS)
Thelen, Jean-Claude; Havemann, Stephan; Lewis, Warren
2015-09-01
The Havemann-Taylor Fast Radiative Transfer Code (HT-FRTC) is a component of the Met Office NEON Tactical Decision Aid (TDA). Within NEON, the HT-FRTC has for a number of years been used to predict the IR apparent thermal contrasts between different surface types as observed by an airborne sensor. To achieve this, the HT-FRTC is supplied with the inherent temperatures and spectral properties of these surfaces (i.e. ground target(s) and background). A key strength of the HT-FRTC is its ability to take into account the detailed properties of the atmosphere, which in the context of NEON tend to be provided by a Numerical Weather Prediction (NWP) forecast model. While water vapour and ozone are generally the most important gases, additional trace gases are now being incorporated into the HT-FRTC. The HT-FRTC also includes an exact treatment of atmospheric scattering based on spherical harmonics. This allows the treatment of several different aerosol species and of liquid and ice clouds. Recent developments can even account for rain and falling snow. The HT-FRTC works in Principal Component (PC) space and is trained on a wide variety of atmospheric and surface conditions, which significantly reduces the computational requirements regarding memory and time. One clear-sky simulation takes approximately one millisecond. Recent developments allow the training to be completely general and sensor independent. This is significant as the user of the code can add new sensors and new surfaces/targets by simply supplying extra files which contain their (possibly classified) spectral properties. The HT-FRTC has been extended to cover the spectral range of Photopic and NVG sensors. One aim here is to give guidance on the expected, directionally resolved sky brightness, especially at night, again taking the actual or forecast atmospheric conditions into account. Recent developments include light level predictions during the period of twilight.
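The speed of the HT-FRTC comes from working in principal-component space. A minimal sketch of that general idea (not the HT-FRTC's actual training procedure; the data here are synthetic): train a principal-component basis on many spectra, then represent any new spectrum by a handful of scores.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "training spectra": smooth curves spanned by a few modes
wavenumber = np.linspace(0.0, 1.0, 300)
basis = np.stack([np.sin((k + 1) * np.pi * wavenumber) for k in range(3)])
train = rng.normal(size=(500, 3)) @ basis  # 500 spectra, intrinsic rank 3

# Training step: mean-centre and keep the leading principal components
mean = train.mean(axis=0)
_, s, vt = np.linalg.svd(train - mean, full_matrices=False)
pcs = vt[:4]  # keep 4 PCs (more than the true rank, for safety)

# Fast path: a new spectrum is compressed to 4 scores and reconstructed
new = rng.normal(size=3) @ basis
scores = pcs @ (new - mean)
reconstructed = mean + scores @ pcs
```

Because the spectra live in a low-dimensional subspace, a few scores reproduce the full 300-point spectrum, which is the source of the millisecond-scale run times quoted above.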
Pymiedap: a versatile radiative transfer code with polarization for terrestrial (exo)planets.
NASA Astrophysics Data System (ADS)
Rossi, Loïc; Stam, Daphne; Hogenboom, Michael
2016-04-01
Polarimetry promises to be an important method to detect exoplanets: the light of a star is usually unpolarized [1], while scattering by gas and clouds in an atmosphere can generate high levels of polarization. Furthermore, the polarization of scattered light contains information about the properties of the atmosphere and surface of a planet, allowing a possible characterization [2], a method already validated in the solar system with Venus [3,4]. We present here Pymiedap (Python Mie Doubling-Adding Program): a set of Python objects interfaced with Fortran radiative transfer codes that allows the user to define a planetary atmosphere and compute the flux and polarization of the scattered light. Several properties of the planet can be set interactively by the user through the Python interface, such as gravity, distance to the star, surface properties, atmospheric layers, and gaseous and aerosol composition. The radiative transfer calculations are then performed with the doubling-adding method (de Haan et al. 1987). We present some results of the code and show its possible use for different planetary atmospheres, for both resolved and disk-integrated measurements. We investigate the effects of gas, cloud, and aerosol composition and of surface properties for horizontally homogeneous and inhomogeneous Earth-like planets. We also study the effect of gaseous absorption on the flux and polarization as a marker for gaseous abundance and cloud-top altitude. [1] Kemp et al. The optical polarization of the sun measured at a sensitivity of parts in ten million. Nature, 1987, 326, 270-273. [2] Stam, D. M. Spectropolarimetric signatures of Earth-like extrasolar planets. A&A, 2008, 482, 989-1007. [3] Hansen, J. E. & Hovenier, J. W. Interpretation of the polarization of Venus. Journal of Atmospheric Sciences, 1974, 31, 1137-1160. [4] Rossi et al. Preliminary study of Venus cloud layers
Dipp, T.M. |
1993-12-01
The generation of radiation via photoelectrons induced from a conducting surface was explored using Particle-In-Cell (PIC) code computer simulations. Using the MAGIC PIC code, the simulations were performed in one dimension to handle the diverse scale lengths of the particles and fields in the problem. The simulations involved monoenergetic, nonrelativistic photoelectrons emitted normal to the illuminated conducting surface. A sinusoidal, 100% modulated, 6.3263-ns pulse train, as well as unmodulated emission, was used to explore the behavior of the particles, fields, and generated radiation. A special postprocessor was written to convert the PIC-code-simulated electron sheath into far-field radiation parameters by means of rigorous retarded-time calculations. The results of the small-spot PIC simulations were used to generate various graphs showing resonance and nonresonance radiation quantities such as radiated lobe patterns, frequency, and power. A database of PIC simulation results was created and, using a nonlinear curve-fitting program, compared with theoretical scaling laws. Overall, the small-spot behavior predicted by the theoretical scaling laws was generally observed in the PIC simulation data, providing confidence in both the theoretical scaling laws and the PIC simulations.
Takahashi, F; Shigemori, Y; Seki, A
2009-01-01
A system has been developed to assess the radiation dose distribution inside the body of persons exposed in a radiological accident by utilising the radiation transport calculation codes MCNP and MCNPX. The system consists mainly of two parts: a pre-processor and a post-processor of the radiation transport calculation. Programs in the pre-processor are used to set up a 'problem-dependent' input file, which defines the accident condition and the dosimetric quantities to be estimated. The program developed for the post-processor part can effectively present dose information based upon the output file of the code. All of the programs in the dosimetry system can be executed on an ordinary personal computer and accurately give the dose profile of an exposed person in a radiological accident without complicated procedures. An experiment using a physical phantom was carried out to verify the availability of the dosimetry system with the developed programs in a gamma-ray irradiation field. PMID:19181661
Coronal extension of the MURaM radiative MHD code: From quiet sun to flare simulations
NASA Astrophysics Data System (ADS)
Rempel, Matthias D.; Cheung, Mark
2016-05-01
We present a new version of the MURaM radiative MHD code, which includes a treatment of the solar corona in terms of MHD, optically thin radiative losses, and field-aligned heat conduction. In order to relax the severe time-step constraints imposed by large Alfven velocities and by heat conduction, we use a combination of semi-relativistic MHD with a reduced speed of light ("Boris correction") and a hyperbolic formulation of heat conduction. We apply the numerical scheme to four setups: a mixed-polarity quiet Sun, an open-flux region, an arcade solution, and an active region, and find in all cases coronal heating sufficient to maintain a corona with temperatures from 1 MK (quiet Sun) to 2 MK (active region, arcade). In all our setups the Poynting flux is self-consistently created by photospheric and sub-photospheric magneto-convection in the lower part of the simulation domain. Varying the maximum allowed Alfven velocity (the "reduced speed of light") leads to only minor changes in the coronal structure, as long as the limited Alfven velocity remains larger than the speed of sound and about 1.5-3 times larger than the peak advection velocity. We also found that varying details of the numerical diffusivities that govern the resistive and viscous energy dissipation does not strongly affect the overall coronal heating, but the ratio of resistive to viscous energy dissipation does depend strongly on the effective numerical magnetic Prandtl number. We use our active region setup to simulate a flare triggered by the emergence of a twisted flux rope into a pre-existing bipolar active region. Our simulation yields a series of flares, with the strongest one reaching GOES M1 class. The simulation reproduces many observed properties of eruptions such as flare ribbons, post-flare loops, and a sunquake.
C5 Benchmark Problem with Discrete Ordinate Radiation Transport Code DENOVO
Yesilyurt, Gokhan; Clarno, Kevin T; Evans, Thomas M; Davidson, Gregory G; Fox, Patricia B
2011-01-01
The C5 benchmark problem proposed by the Organisation for Economic Co-operation and Development/Nuclear Energy Agency was modeled to examine the capabilities of Denovo, a three-dimensional (3-D) parallel discrete ordinates (S_N) radiation transport code, for problems with no spatial homogenization. Denovo uses state-of-the-art numerical methods to obtain accurate solutions to the Boltzmann transport equation. Problems were run in parallel on Jaguar, a high-performance supercomputer located at Oak Ridge National Laboratory. Both the two-dimensional (2-D) and 3-D configurations were analyzed, and the results were compared with the reference MCNP Monte Carlo calculations. For an additional comparison, SCALE/KENO-V.a Monte Carlo solutions were also included. In addition, a sensitivity analysis was performed for the optimal angular quadrature and mesh resolution for both the 2-D and 3-D infinite lattices of UO2 fuel pin cells. Denovo was verified with the C5 problem. The effective multiplication factors, pin powers, and assembly powers were found to be in good agreement with the reference MCNP and SCALE/KENO-V.a Monte Carlo calculations.
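At the heart of a discrete ordinates code such as Denovo is the transport sweep. A one-dimensional, one-ordinate sketch using the classic diamond-difference scheme (illustrative only, far simpler than Denovo's 3-D parallel sweeps): for a pure absorber the sweep should converge, as the mesh is refined, to the analytic attenuation exp(-sigma_t * width).

```python
import math

def dd_sweep_slab(sigma_t, width, n_cells, mu=1.0, psi_in=1.0):
    """Diamond-difference transport sweep for mu * dpsi/dx + sigma_t * psi = 0
    along a single ordinate mu > 0 through a purely absorbing slab."""
    dx = width / n_cells
    a = sigma_t * dx / (2.0 * mu)
    psi = psi_in
    for _ in range(n_cells):
        # diamond difference: cell-average psi = (psi_in + psi_out) / 2
        psi *= (1.0 - a) / (1.0 + a)
    return psi
```

In a full S_N code this per-cell update is performed for every ordinate of the angular quadrature and swept across the mesh in the direction of particle travel, with scattering handled by source iteration.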
XTAT: A New Multilevel-Multiline Polarized Radiative Transfer Code with PRD
NASA Astrophysics Data System (ADS)
Bommier, V.
2014-10-01
This work is intended for the interpretation of the so-called "Second Solar Spectrum" (Stenflo 1996), which is the spectrum of the linear polarization formed by scattering and observed close to the solar limb. The lines are optically thick, and the problem is to solve, in a coherent manner, the statistical equilibrium of the atomic density matrix and the polarized radiative transfer in the atmosphere. According to Belluzzi & Landi Degl'Innocenti (2009), 30% of the solar visible line linear polarization profiles display the M-type shape typical of the coherent scattering effect in the far wings. A new theory including both coherent (Rayleigh) and resonant scattering was developed by Bommier (1997a,b); Raman scattering was later added (Bommier 1999, SPW2). In this theory, which is derived directly from the Schrödinger equation for the atomic density matrix, the radiative line broadening appears as a non-Markovian process of atom-photon interaction. Collisional broadening is included. Rayleigh (Raman) scattering appears as an additional term in the emissivity at fourth order of the perturbation expansion of the atom-photon interaction. The expansion is pursued and finally summed, leading to a non-perturbative final result. In this formalism, the use of redistribution functions is avoided. The published formalism was limited to the two-level atom without lower-level alignment, but most solar lines are more complex. We will present how the theory has to be complemented for multi-level atom modeling, including lower-level alignment. The role of collisions in balancing coherent and resonant scattering is fully taken into account. A progress report will be given on the development of a new code for the numerical iterative solution of the statistical equilibrium and polarized radiative transfer equations, for multi-level atoms and their multi-line spectrum. Fine and hyperfine structures, and Hanle, Kemp (Kemp et al. 1984), Zeeman
Improvements of the Radiation Code "MstrnX" in AORI/NIES/JAMSTEC Models
NASA Astrophysics Data System (ADS)
Sekiguchi, M.; Suzuki, K.; Takemura, T.; Watanabe, M.; Ogura, T.
2015-12-01
There is a large demand for an accurate yet rapid radiative transfer scheme for general climate models. The broadband radiative transfer code "mstrnX" was developed by the Atmosphere and Ocean Research Institute (AORI) and has been implemented in several global and regional climate models cooperatively developed in the Japanese research community, for example MIROC (the Model for Interdisciplinary Research on Climate) [Watanabe et al., 2010], NICAM (Non-hydrostatic Icosahedral Atmospheric Model) [Satoh et al., 2008], and CReSS (Cloud Resolving Storm Simulator) [Tsuboki and Sakakibara, 2002]. In this study, we improve the gas absorption process and the scattering process of ice particles. For the gas absorption update, the absorption line database is replaced by the latest version compiled by the Harvard-Smithsonian Center for Astrophysics, HITRAN2012. An optimization method is adopted in mstrnX to decrease the number of integration points for the wavenumber integration using the correlated k-distribution method and to increase the computational efficiency in each band. The integration points and weights of the correlated k-distribution are optimized for accurate calculation of the heating rate up to an altitude of 70 km. For this purpose we adopted a new non-linear optimization method for the correlated k-distribution and studied an optimal initial condition and cost function for the non-linear optimization. It is known that mstrnX has a considerable bias in the case of quadrupled carbon dioxide concentrations [Pincus et al., 2015]; however, this bias is decreased by the improvement. For the update of the scattering process of ice particles, we adopt a solid column as the ice crystal habit [Yang et al., 2013]. The single-scattering properties are calculated and tabulated in advance. The size parameter of this table ranged from 0.1 to 1000 in mstrnX; we expand the maximum to 50000 in order to accommodate large particles, like fog and rain drops. Those updates will be introduced to
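The correlated k-distribution method mentioned above replaces a fine spectral integration by a few quadrature points over the cumulative distribution of the absorption coefficient. A minimal sketch with synthetic absorption data (not mstrnX's actual tables or optimization): sorting the line-by-line coefficients into k(g) lets a 16-point midpoint rule reproduce the band-mean transmittance computed from 10,000 spectral points.

```python
import math
import random

random.seed(3)

# Synthetic "line-by-line" absorption coefficients across one spectral band
k_lbl = [math.exp(random.uniform(-6.0, 2.0)) for _ in range(10_000)]

def transmittance_lbl(u):
    """Band-mean transmittance by brute-force spectral integration."""
    return sum(math.exp(-k * u) for k in k_lbl) / len(k_lbl)

def transmittance_ckd(u, n_g=16):
    """Same integral in g-space: sort k into its cumulative distribution k(g)
    and evaluate exp(-k(g) u) at only n_g quadrature points (midpoint rule)."""
    k_sorted = sorted(k_lbl)
    m = len(k_sorted)
    total = 0.0
    for i in range(n_g):
        g_mid = (i + 0.5) / n_g  # midpoint of the i-th g sub-interval
        total += math.exp(-k_sorted[int(g_mid * m)] * u)
    return total / n_g
```

The optimization described in the abstract amounts to choosing the g-points and weights so that even fewer quadrature points preserve heating-rate accuracy over the full altitude range.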
Napier, B.A.; Peloquin, R.A.; Strenge, D.L.; Ramsdell, J.V.
1988-09-01
The Hanford Environmental Dosimetry Upgrade Project was undertaken to incorporate the internal dosimetry models recommended by the International Commission on Radiological Protection (ICRP) in updated versions of the environmental pathway analysis models used at Hanford. The resulting second generation of Hanford environmental dosimetry computer codes is compiled in the Hanford Environmental Dosimetry System (Generation II, or GENII). This coupled system of computer codes is intended for analysis of environmental contamination resulting from acute or chronic releases to, or initial contamination of, air, water, or soil, on through the calculation of radiation doses to individuals or populations. GENII is described in three volumes of documentation. This volume is a Code Maintenance Manual for the serious user, including code logic diagrams, global dictionary, worksheets to assist with hand calculations, and listings of the code and its associated data libraries. The first volume describes the theoretical considerations of the system. The second volume is a Users' Manual, providing code structure, users' instructions, required system configurations, and QA-related topics. 7 figs., 5 tabs.
Ralchenko, Yu.; Abdallah, J. Jr.; Colgan, J.; Fontes, C. J.; Foster, M.; Zhang, H. L.; Bar-Shalom, A.; Oreg, J.; Bauche, J.; Bauche-Arnoult, C.; Bowen, C.; Faussurier, G.; Chung, H.-K.; Hansen, S. B.; Lee, R. W.; Scott, H.; Gaufridy de Dortan, F. de; Poirier, M.; Golovkin, I.; Novikov, V.
2009-09-10
We present calculations of ionization balance and radiative power losses for tungsten in magnetic fusion plasmas. The simulations were performed within the framework of the Non-Local Thermodynamic Equilibrium (NLTE) Code Comparison Workshops utilizing several independent collisional-radiative models. The calculations generally agree with each other; however, a clear disagreement with experimental ionization distributions is found at low temperatures (below 2 keV).
NASA Astrophysics Data System (ADS)
Chubar, Oleg
2014-09-01
Recent updates in the "Synchrotron Radiation Workshop" physical optics computer code, including the transition to the Open Source development format, the results of the on-going collaborative development efforts in the area of X-ray optics, in particular grazing incidence mirrors, gratings and crystal monochromators, and in other areas, as well as some simulation activities for storage ring and X-ray free-electron laser sources are reported. Future development plans are discussed.
Peter Cebull
2004-05-01
The Attila radiation transport code, which solves the Boltzmann neutron transport equation on three-dimensional unstructured tetrahedral meshes, was ported to a Cray SV1. Cray's performance analysis tools pointed to two subroutines that together accounted for 80%-90% of the total CPU time. Source code modifications were performed to enable vectorization of the most significant loops, to correct unfavorable strides through memory, and to replace a conjugate gradient solver subroutine with a call to the Cray Scientific Library. These optimizations resulted in a speedup of 7.79 for the INEEL's largest ATR model. Parallel scalability of the OpenMP version of the code is also discussed, and timing results are given for other non-vector platforms.
HZETRN: A heavy ion/nucleon transport code for space radiations
NASA Astrophysics Data System (ADS)
Wilson, John W.; Chun, Sang Y.; Badavi, Forooz F.; Townsend, Lawrence W.; Lamkin, Stanley L.
1991-12-01
The galactic heavy ion transport code (GCRTRN) and the nucleon transport code (BRYNTRN) are integrated into a code package (HZETRN). The code package is computationally efficient and capable of operating in an engineering design environment for manned deep space mission studies. The nuclear data set used by the code is discussed including current limitations. Although the heavy ion nuclear cross sections are assumed constant, the nucleon-nuclear cross sections of BRYNTRN with full energy dependence are used. The relation of the final code to the Boltzmann equation is discussed in the context of simplifying assumptions. Error generation and propagation is discussed, and comparison is made with simplified analytic solutions to test numerical accuracy of the final results. A brief discussion of biological issues and their impact on fundamental developments in shielding technology is given.
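The straight-ahead Boltzmann transport that HZETRN solves couples the attenuation of primaries to the production of secondaries. A toy two-species sketch with constant cross sections (illustrative only; the coefficients are invented and HZETRN's numerics are far more elaborate), checked against the analytic solution of the same linear system:

```python
import math

def transport_two_species(sig_p, sig_s, sig_ps, depth, n_steps=20_000):
    """March the straight-ahead transport equations through a shield:
       dphi_p/dx = -sig_p * phi_p                      (primary attenuation)
       dphi_s/dx = -sig_s * phi_s + sig_ps * phi_p     (secondary production)
    with a simple explicit Euler step; phi_p(0) = 1, phi_s(0) = 0."""
    dx = depth / n_steps
    phi_p, phi_s = 1.0, 0.0
    for _ in range(n_steps):
        d_p = -sig_p * phi_p
        d_s = -sig_s * phi_s + sig_ps * phi_p
        phi_p += dx * d_p
        phi_s += dx * d_s
    return phi_p, phi_s

def analytic_two_species(sig_p, sig_s, sig_ps, x):
    """Exact solution of the same linear system (requires sig_p != sig_s)."""
    phi_p = math.exp(-sig_p * x)
    phi_s = sig_ps / (sig_s - sig_p) * (math.exp(-sig_p * x)
                                        - math.exp(-sig_s * x))
    return phi_p, phi_s
```

The secondary flux first builds up with depth and then decays, which is the qualitative behaviour behind the buildup factors discussed in shielding analyses.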
HZETRN: A heavy ion/nucleon transport code for space radiations
NASA Technical Reports Server (NTRS)
Wilson, John W.; Chun, Sang Y.; Badavi, Forooz F.; Townsend, Lawrence W.; Lamkin, Stanley L.
1991-01-01
The galactic heavy ion transport code (GCRTRN) and the nucleon transport code (BRYNTRN) are integrated into a code package (HZETRN). The code package is computationally efficient and capable of operating in an engineering design environment for manned deep space mission studies. The nuclear data set used by the code is discussed including current limitations. Although the heavy ion nuclear cross sections are assumed constant, the nucleon-nuclear cross sections of BRYNTRN with full energy dependence are used. The relation of the final code to the Boltzmann equation is discussed in the context of simplifying assumptions. Error generation and propagation is discussed, and comparison is made with simplified analytic solutions to test numerical accuracy of the final results. A brief discussion of biological issues and their impact on fundamental developments in shielding technology is given.
NASA Astrophysics Data System (ADS)
Blecic, Jasmina; Harrington, Joseph; Bowman, Matthew O.; Cubillos, Patricio E.; Stemm, Madison; Foster, Andrew
2014-11-01
We present a new, open-source, Thermochemical Equilibrium Abundances (TEA) code that calculates the abundances of gaseous molecular species. TEA uses the Gibbs-free-energy minimization method with an iterative Lagrangian optimization scheme. It initializes the radiative-transfer calculation in our Bayesian Atmospheric Radiative Transfer (BART) code. Given elemental abundances, TEA calculates molecular abundances for a particular temperature and pressure or a list of temperature-pressure pairs. The code is tested against the original method developed by White et al. (1958), the analytic method developed by Burrows and Sharp (1999), and the Newton-Raphson method implemented in the open-source Chemical Equilibrium with Applications (CEA) code. TEA is written in Python and is available to the community via the open-source development site GitHub.com. We also present BART applied to eclipse depths of the exoplanet WASP-43b, constraining atmospheric thermal and chemical parameters. This work was supported by NASA Planetary Atmospheres grant NNX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G. JB holds a NASA Earth and Space Science Fellowship.
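For a single reaction, the Gibbs-free-energy minimization that TEA performs reduces to the law of mass action with K = exp(-dG/RT). A minimal sketch for the dissociation H2 <-> 2H at fixed total pressure (illustrative only, not TEA's multi-species Lagrangian scheme), solved by bisection on the extent of reaction:

```python
import math

def dissociation_fraction(K, P=1.0, tol=1e-12):
    """Solve H2 <-> 2H at total pressure P (ideal gas): starting from one
    mole of H2, find the extent x in (0, 1) where the law of mass action
    p_H^2 / p_H2 = K holds.  Moles: n_H2 = 1 - x, n_H = 2x, total = 1 + x."""
    def residual(x):
        p_h2 = (1.0 - x) / (1.0 + x) * P
        p_h = 2.0 * x / (1.0 + x) * P
        return p_h * p_h / p_h2 - K  # monotonically increasing in x

    lo, hi = 1e-15, 1.0 - 1e-15
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if residual(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# In a full equilibrium code, K would come from tabulated standard-state
# Gibbs energies: K = exp(-dG / (R * T)).
```

With many species, one instead minimizes the total Gibbs energy subject to elemental mass-balance constraints, which is where the Lagrangian scheme mentioned in the abstract comes in.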
Roussin, R.W.
1993-03-01
From the very early days of its history, the Radiation Shielding Information Center (RSIC) has been involved with high energy radiation transport. The National Aeronautics and Space Administration was an early sponsor of RSIC until the completion of the Apollo Moon Exploration Program. In addition, the intranuclear cascade work of Bertini at Oak Ridge National Laboratory provided valuable resources which were made available through RSIC. Over the years, RSIC has had interactions with many of the developers of high energy radiation transport computing technology and data libraries and has been able to collect and disseminate this technology. The current status of this technology will be reviewed and prospects for new advancements will be examined.
Chatani, K.
1992-08-01
This report summarizes the calculational results from analyses of a Clinch River Breeder Reactor (CRBR) prototypic coolant pipe chaseway neutron streaming experiment. Comparisons of calculated and measured results are presented, with major emphasis placed on results at bends in the chaseway. Calculations were performed with three three-dimensional radiation transport codes: the discrete ordinates code TORT and the Monte Carlo code MORSE, both developed by the Oak Ridge National Laboratory (ORNL), and the discrete ordinates code ENSEMBLE, developed in Japan. The calculated results from the three codes are compared (1) with previously calculated DOT3.5 two-dimensional results, (2) among themselves, and (3) with measured results. Calculations with TORT used both the weighted-difference and nodal methods. Only the weighted-difference method was used in ENSEMBLE. When the calculated results were compared to measured results, it was found that calculation-to-experiment (C/E) ratios were good in the regions of the chaseway where two-dimensional modeling might be difficult and where there were no significant discrete ordinates ray effects. Excellent agreement was observed for responses dominated by thermal neutron contributions. MORSE-calculated results and comparisons are described also, and detailed results are presented in an appendix.
NASA Astrophysics Data System (ADS)
Sijoy, C. D.; Chaturvedi, S.
2016-06-01
Higher-order cell-centered multi-material hydrodynamics (HD) and parallel node-centered radiation transport (RT) schemes are combined self-consistently in the three-temperature (3T) radiation hydrodynamics (RHD) code TRHD (Sijoy and Chaturvedi, 2015), developed for the simulation of intense thermal radiation or high-power laser driven RHD. For RT, a node-centered gray model implemented in the popular RHD code MULTI2D (Ramis et al., 2009) is used. This scheme, in principle, can handle RT in both optically thick and thin materials. The RT module has been parallelized using the message passing interface (MPI) for parallel computation. Presently, for multi-material HD, we have used a simple and robust closure model in which a common strain rate for all materials in a mixed cell is assumed. The closure model has been further generalized to allow different temperatures for the electrons and ions. In addition to this, electron and radiation temperatures are assumed to be in non-equilibrium. Therefore, the thermal relaxation between the electrons and ions and the coupling between the radiation and matter energies are required to be computed self-consistently. This has been achieved by using a node-centered symmetric semi-implicit (SSI) integration scheme. The electron thermal conduction is calculated using a cell-centered, monotonic, non-linear finite volume (NLFV) scheme suitable for unstructured meshes. In this paper, we describe the details of the 2D, 3T, non-equilibrium, multi-material RHD code with special attention to the coupling of the various cell-centered and node-centered formulations, along with a suite of validation test problems that demonstrate the accuracy and performance of the algorithms. We also report the parallel performance of the RT module. Finally, to demonstrate the full capability of the implementation, we present the simulation of laser-driven shock propagation in a layered thin foil. The simulation results are found to be in good agreement.
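The electron-ion thermal relaxation that TRHD integrates can be sketched with a simpler stand-in: a backward-Euler update of the two-temperature exchange equations. The coefficients below are illustrative, and this is not the paper's exact SSI discretization, but it shares the key properties such a semi-implicit treatment is chosen for: unconditional stability for large time steps and exact energy conservation.

```python
def relax_step(Te, Ti, Ce, Ci, kappa, dt):
    """One semi-implicit (backward-Euler) electron-ion relaxation step for
    Ce dTe/dt = -kappa (Te - Ti), Ci dTi/dt = +kappa (Te - Ti).
    Treating the temperature difference implicitly keeps the update stable
    for arbitrarily large dt and conserves Ce*Te + Ci*Ti exactly."""
    dT = (Te - Ti) / (1.0 + dt * kappa * (1.0 / Ce + 1.0 / Ci))
    return Te - dt * kappa * dT / Ce, Ti + dt * kappa * dT / Ci

# illustrative (not physical) values: hot electrons relaxing onto cold ions
Te, Ti, Ce, Ci, kappa = 1000.0, 100.0, 1.0, 3.0, 0.5
E0 = Ce * Te + Ci * Ti                 # total energy, should be preserved
for _ in range(200):
    Te, Ti = relax_step(Te, Ti, Ce, Ci, kappa, dt=0.1)
```

Both temperatures converge to the energy-weighted mean E0/(Ce+Ci) while the total energy stays fixed, which is the self-consistency requirement the abstract emphasizes.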
NASA Astrophysics Data System (ADS)
Schreier, Franz; Gimeno García, Sebastián; Hedelt, Pascal; Hess, Michael; Mendrok, Jana; Vasquez, Mayte; Xu, Jian
2014-04-01
A suite of programs for high resolution infrared-microwave atmospheric radiative transfer modeling has been developed with emphasis on efficient and reliable numerical algorithms and a modular approach appropriate for simulation and/or retrieval in a variety of applications. The Generic Atmospheric Radiation Line-by-line Infrared Code - GARLIC - is suitable for arbitrary observation geometry, instrumental field-of-view, and line shape. The core of GARLIC's subroutines constitutes the basis of forward models used to implement inversion codes to retrieve atmospheric state parameters from limb and nadir sounding instruments. This paper briefly introduces the physical and mathematical basics of GARLIC and its descendants and continues with an in-depth presentation of various implementation aspects: An optimized Voigt function algorithm combined with a two-grid approach is used to accelerate the line-by-line modeling of molecular cross sections; various quadrature methods are implemented to evaluate the Schwarzschild and Beer integrals; and Jacobians, i.e. derivatives with respect to the unknowns of the atmospheric inverse problem, are implemented by means of automatic differentiation. For an assessment of GARLIC's performance, a comparison of the quadrature methods for solution of the path integral is provided. Verification and validation are demonstrated using intercomparisons with other line-by-line codes and comparisons of synthetic spectra with spectra observed on Earth and from Venus.
A fast Monte Carlo code for proton transport in radiation therapy based on MCNPX
Jabbari, Keyvan; Seuntjens, Jan
2014-01-01
An important requirement for proton therapy is software for dose calculation. Monte Carlo is the most accurate method for dose calculation, but it is very slow. In this work, a method is developed to improve the speed of dose calculation. The method is based on pre-generated tracks for particle transport. The MCNPX code has been used for generation of tracks. A set of data including the track of the particle was produced for each particular material (water, air, lung tissue, bone, and soft tissue). This code can transport protons over a wide range of energies (up to 200 MeV). The validity of the fast Monte Carlo (MC) code is evaluated with data from MCNPX as a reference code. While the analytical pencil-beam algorithm shows large errors (up to 10%) near small high-density heterogeneities, our calculated dose and isodose distributions deviated from MCNPX results by less than 2%. In terms of speed, the code runs 200 times faster than MCNPX. With the fast MC code developed in this work, calculating the dose for 10^6 particles takes less than 2 minutes on an Intel Core 2 Duo 2.66 GHz desktop computer. PMID:25190994
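The pre-generated-track idea can be sketched in a few lines: build a bank of tracks once, then transport each new history by sampling a stored track and applying a random rotation about the beam axis. Everything below (the step model, the single material, the 1-D dose grid) is an illustrative toy, not the MCNPX-based implementation:

```python
import math, random

random.seed(1)

def make_track(n_steps=50):
    """Pre-generate one particle track: a list of unit 3-D step vectors
    with small accumulated random scattering about the +z beam axis."""
    steps, theta = [], 0.0
    for _ in range(n_steps):
        theta += random.gauss(0.0, 0.02)          # small-angle scatter
        phi = random.uniform(0.0, 2 * math.pi)
        st = math.sin(theta)
        steps.append((st * math.cos(phi), st * math.sin(phi), math.cos(theta)))
    return steps

track_bank = [make_track() for _ in range(200)]   # done once, like the MCNPX pre-run

def transport(n_particles, nz=60, dz=1.0):
    """Reuse stored tracks: each history samples a bank track and a random
    azimuthal rotation, then deposits unit energy per step into a depth grid."""
    dose = [0.0] * nz
    for _ in range(n_particles):
        track = random.choice(track_bank)
        rot = random.uniform(0.0, 2 * math.pi)    # beam is azimuthally symmetric
        c, s = math.cos(rot), math.sin(rot)
        x = y = z = 0.0
        for sx, sy, sz in track:
            x += c * sx - s * sy
            y += s * sx + c * sy
            z += sz
            k = int(z / dz)
            if 0 <= k < nz:
                dose[k] += 1.0                    # toy energy deposition per step
    return dose

dose = transport(2000)
```

The expensive physics lives in the one-time track generation; each subsequent history is only table lookups and rotations, which is where the reported speed-up comes from.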
Code of Practice for the Use of Ionizing Radiations in Secondary Schools.
ERIC Educational Resources Information Center
National Health and Medical Research Council, Canberra (Australia).
The appreciation of the potential hazard of ionizing radiation led to the setting up of national, and later, international commissions for the defining of standards of protection for the occupationally exposed worker in the use of ionizing radiation. However, in the last twenty years, with the large scale development of nuclear energy, the need…
NASA Astrophysics Data System (ADS)
Class, G.
1987-07-01
A program to simulate gas motion and shine-through of thermal radiation in fusion reactor vacuum flow channels was developed. The inner surface of the flow channel is described by plane areas (triangles, parallelograms) and by surfaces of revolution. By introducing control planes in the flow path, variance reduction and hence shorter computation are achieved through particle splitting and Russian roulette. The code is written in PL/I and verified using published data. Computer-aided input of model data is performed interactively either under IBM-TSO or on a microprocessor (IBM PC-AT). The data files are exchangeable between the IBM mainframe and IBM PC computers. Both computers can produce plots of the elaborated channel model. For testing, the simulation can likewise be run interactively, whereas production computations can be issued in batch mode. The results of code verification are explained, and examples of channel models and of the interactive mode are given.
NASA Astrophysics Data System (ADS)
Tian, Zhen; Jiang Graves, Yan; Jia, Xun; Jiang, Steve B.
2014-10-01
Monte Carlo (MC) simulation is commonly considered the most accurate method for radiation dose calculations. Commissioning of a beam model in the MC code against a clinical linear accelerator beam is of crucial importance for its clinical implementation. In this paper, we propose an automatic commissioning method for our GPU-based MC dose engine, gDPM. gDPM utilizes a beam model based on the concept of a phase-space-let (PSL). A PSL contains a group of particles that are of the same type and close in space and energy. A set of generic PSLs was generated by splitting a reference phase-space file. Each PSL was associated with a weighting factor, and in dose calculations each particle carried a weight corresponding to the PSL it came from. The dose for each PSL in water was pre-computed, and hence the dose in water for a whole beam under a given set of PSL weighting factors was the weighted sum of the PSL doses. At the commissioning stage, an optimization problem was solved to adjust the PSL weights in order to minimize the difference between the calculated dose and the measured one. Symmetry and smoothness regularizations were utilized to uniquely determine the solution. An augmented Lagrangian method was employed to solve the optimization problem. To validate our method, a phase-space file of a Varian TrueBeam 6 MV beam was used to generate the PSLs for 6 MV beams. In a simulation study, we commissioned a Siemens 6 MV beam for which a set of field-dependent phase-space files was available. The dose data of this desired beam for different open fields and a small off-axis open field were obtained by calculating doses using these phase-space files. The 3D γ-index test passing rate within the regions with dose above 10% of the dmax dose for the open fields tested improved on average from 70.56% to 99.36% for 2%/2 mm criteria and from 32.22% to 89.65% for 1%/1 mm criteria. We also tested our commissioning method on a six-field head-and-neck cancer IMRT plan.
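The commissioning step reduces to fitting PSL weights so that the weighted sum of pre-computed per-PSL doses matches a measured profile. A minimal sketch, with synthetic dose profiles and projected gradient descent standing in for the paper's augmented Lagrangian solver and symmetry/smoothness regularization:

```python
# toy "per-PSL dose in water" profiles D[i][x], and a measured profile built
# from hidden weights; commissioning recovers the weights from the measurement
D = [
    [1.0, 0.8, 0.5, 0.2, 0.1],   # PSL 0
    [0.1, 0.4, 0.9, 0.4, 0.1],   # PSL 1
    [0.1, 0.2, 0.5, 0.8, 1.0],   # PSL 2
]
true_w = [0.5, 1.5, 1.0]
measured = [sum(true_w[i] * D[i][x] for i in range(3)) for x in range(5)]

w = [1.0, 1.0, 1.0]              # initial PSL weights
lr = 0.05
for _ in range(5000):
    # residual of the weighted-sum dose model at every measurement point
    r = [sum(w[i] * D[i][x] for i in range(3)) - measured[x] for x in range(5)]
    for i in range(3):
        grad = 2.0 * sum(r[x] * D[i][x] for x in range(5))
        w[i] = max(0.0, w[i] - lr * grad)   # project: weights stay non-negative
```

Because the beam dose is linear in the PSL weights, the fit is a small least-squares problem regardless of how expensive the underlying per-PSL Monte Carlo pre-computation was.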
NASA Astrophysics Data System (ADS)
Stone, James M.; Norman, Michael L.
1992-06-01
A detailed description of ZEUS-2D, a numerical code for the simulation of fluid dynamical flows including a self-consistent treatment of the effects of magnetic fields and radiation transfer is presented. Attention is given to the hydrodynamic (HD) algorithms which form the foundation for the more complex MHD and radiation HD algorithms. The effect of self-gravity on the flow dynamics is accounted for by an iterative solution of the sparse-banded matrix resulting from discretizing the Poisson equation in multidimensions. The results of an extensive series of HD test problems are presented. A detailed description of the MHD algorithms in ZEUS-2D is presented. A new method of computing the electromotive force is developed using the method of characteristics (MOC). It is demonstrated through the results of an extensive series of MHD test problems that the resulting hybrid MOC-constrained transport method provides for the accurate evolution of all modes of MHD wave families.
MESTRN: A Deterministic Meson-Muon Transport Code for Space Radiation
NASA Technical Reports Server (NTRS)
Blattnig, Steve R.; Norbury, John W.; Norman, Ryan B.; Wilson, John W.; Singleterry, Robert C., Jr.; Tripathi, Ram K.
2004-01-01
A safe and efficient exploration of space requires an understanding of space radiation, so that human life and sensitive equipment can be protected. On the way to these sensitive sites, the radiation fields are modified in both quality and quantity. Many of these modifications are thought to be due to the production of pions and muons in the interactions between the radiation and intervening matter. A method to predict the effects of the presence of these particles on the transport of radiation through materials is developed. This method was then used to develop software to calculate the fluxes of pions and muons after the transport of a cosmic ray spectrum through aluminum and water. Software descriptions are given in the appendices.
Parameterized code SHARM-3D for radiative transfer over inhomogeneous surfaces
NASA Astrophysics Data System (ADS)
Lyapustin, Alexei; Wang, Yujie
2005-12-01
The code SHARM-3D, developed for fast and accurate simulations of the monochromatic radiance at the top of the atmosphere over spatially variable surfaces with Lambertian or anisotropic reflectance, is described. The atmosphere is assumed to be laterally uniform across the image and to consist of two layers with aerosols contained in the bottom layer. The SHARM-3D code performs simultaneous calculations for all specified incidence-view geometries and multiple wavelengths in one run. The numerical efficiency of the current version of code is close to its potential limit and is achieved by means of two innovations. The first is the development of a comprehensive precomputed lookup table of the three-dimensional atmospheric optical transfer function for various atmospheric conditions. The second is the use of a linear kernel model of the land surface bidirectional reflectance factor (BRF) in our algorithm that has led to a fully parameterized solution in terms of the surface BRF parameters. The code is also able to model inland lakes and rivers. The water pixels are described with the Nakajima-Tanaka BRF model of wind-roughened water surface with a Lambertian offset, which is designed to model approximately the reflectance of suspended matter and of a shallow lake or river bottom.
Reactor Dosimetry Applications Using RAPTOR-M3G:. a New Parallel 3-D Radiation Transport Code
NASA Astrophysics Data System (ADS)
Longoni, Gianluca; Anderson, Stanwood L.
2009-08-01
The numerical solution of the Linearized Boltzmann Equation (LBE) via the discrete ordinates (SN) method requires extensive computational resources for large 3-D neutron and gamma transport applications due to the concurrent discretization of the angular, spatial, and energy domains. This paper discusses the development of RAPTOR-M3G (RApid Parallel Transport Of Radiation - Multiple 3D Geometries), a new 3-D parallel radiation transport code, and its application to the calculation of ex-vessel neutron dosimetry responses in the cavity of a commercial 2-loop Pressurized Water Reactor (PWR). RAPTOR-M3G is based on domain decomposition algorithms, where the spatial and angular domains are allocated and processed on multi-processor computer architectures. As compared to traditional single-processor applications, this approach reduces the computational load as well as the memory requirement per processor, yielding an efficient solution methodology for large 3-D problems. Measured neutron dosimetry responses in the reactor cavity air gap are compared to the RAPTOR-M3G predictions. This paper is organized as follows: Section 1 discusses the RAPTOR-M3G methodology; Section 2 describes the 2-loop PWR model and the numerical results obtained; Section 3 addresses the parallel performance of the code; and Section 4 concludes the paper with final remarks and future work.
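The angular part of such a domain decomposition can be illustrated on a purely absorbing 1-D slab: each worker sweeps its own subset of discrete ordinates, and the weighted partial fluxes are summed afterwards. The ordinate set and cross section below are illustrative (not a real Gauss quadrature), and Python threads stand in for the multi-processor architecture RAPTOR-M3G targets:

```python
import math
from concurrent.futures import ThreadPoolExecutor

# illustrative (mu, weight) ordinates on the half-range, purely absorbing slab
ordinates = [(0.2, 0.3), (0.6, 0.4), (0.9, 0.3)]
sigma_t, depth, cells = 1.0, 2.0, 40
dx = depth / cells

def sweep(subset):
    """Transport sweep for one angular sub-domain: attenuate a unit boundary
    flux through the slab and accumulate the weighted scalar-flux contribution."""
    phi = [0.0] * cells
    for mu, w in subset:
        psi = 1.0                              # incoming angular flux
        for k in range(cells):
            psi *= math.exp(-sigma_t * dx / mu)
            phi[k] += w * psi
    return phi

# angular domain decomposition: one worker per single-ordinate subset
with ThreadPoolExecutor(max_workers=3) as pool:
    parts = list(pool.map(sweep, [[o] for o in ordinates]))
phi = [sum(p[k] for p in parts) for k in range(cells)]

serial = sweep(ordinates)                      # reference: no decomposition
```

Since the ordinates are independent in this absorption-only problem, the decomposed and serial scalar fluxes agree exactly; each worker also only ever holds its own angular slice, which is the memory argument made in the abstract.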
NASA Astrophysics Data System (ADS)
Ngo, N. H.; Lisak, D.; Tran, H.; Hartmann, J.-M.
2013-11-01
We demonstrate that a previously proposed model opens the route for the inclusion of refined non-Voigt profiles in spectroscopic databases and atmospheric radiative transfer codes. Indeed, this model fulfills many essential requirements: (i) it takes both velocity changes and the speed dependences of the pressure-broadening and -shifting coefficients into account. (ii) It leads to accurate descriptions of the line shapes of very different molecular systems. Tests made for pure H2, CO2 and O2 and for H2O diluted in N2 show that residuals are down to ≃0.2% of the peak absorption (except for the atypical system of H2, where a maximum residual of ±3% is reached), thus fulfilling the precision requirements of the most demanding remote sensing experiments. (iii) It is based on a limited set of parameters for each absorption line that have known dependences on pressure and can thus be stored in databases. (iv) Its calculation requires very reasonable computer costs, only a few times higher than that of a usual Voigt profile. Its inclusion in radiative transfer codes will thus induce bearable CPU time increases. (v) It can be extended to take line-mixing effects into account, at least within the so-called first-order approximation.
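For reference, the Voigt baseline against which such refined profiles are costed can be computed, slowly but transparently, as a direct numerical convolution of a Doppler (Gaussian) core with a pressure-broadening (Lorentzian) component. Production line-by-line codes use Faddeeva-function algorithms instead; the widths and grid below are illustrative:

```python
import math

def gaussian(t, sigma):
    return math.exp(-t * t / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

def lorentzian(t, gamma):
    return gamma / (math.pi * (t * t + gamma * gamma))

def voigt(x, sigma, gamma, half_width=8.0, n=2000):
    """Voigt profile by brute-force trapezoidal convolution of the Gaussian
    and Lorentzian components over t in [-8 sigma, 8 sigma]."""
    a, b = -half_width * sigma, half_width * sigma
    h = (b - a) / n
    s = 0.0
    for i in range(n + 1):
        t = a + i * h
        w = 0.5 if i in (0, n) else 1.0        # trapezoidal end-point weights
        s += w * gaussian(t, sigma) * lorentzian(x - t, gamma)
    return s * h

# peak value and symmetry of the resulting profile
v0, v1, v_1 = voigt(0.0, 1.0, 0.5), voigt(1.0, 1.0, 0.5), voigt(-1.0, 1.0, 0.5)
```

The quadratic cost of this direct convolution is exactly why optimized Voigt algorithms matter, and why a refined profile that costs "only a few times" a fast Voigt evaluation is considered affordable.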
NASA Astrophysics Data System (ADS)
Davis, A. B.; Cahalan, R. F.
2001-05-01
The Intercomparison of 3D Radiation Codes (I3RC) is an on-going initiative involving an international group of over 30 researchers engaged in the numerical modeling of three-dimensional radiative transfer as applied to clouds. Because of their strong variability and extreme opacity, clouds are indeed a major source of uncertainty in the Earth's local radiation budget (at GCM grid scales). Also, 3D effects (at satellite pixel scales) invalidate the standard plane-parallel assumption made in the routine of cloud-property remote sensing at NASA and NOAA. Accordingly, the test-cases used in I3RC are based on inputs and outputs which relate to cloud effects in atmospheric heating rates and in real-world remote sensing geometries. The main objectives of I3RC are to (1) enable participants to improve their models, (2) publish results as a community, (3) archive source code, and (4) educate. We will survey the status of I3RC and its plans for the near future with special emphasis on the mathematical models and computational approaches. We will also describe some of the prime applications of I3RC's efforts in climate models, cloud-resolving models, and remote-sensing observations of clouds, or that of the surface in their presence. In all these application areas, computational efficiency is the main concern and not accuracy. One of I3RC's main goals is to document the performance of as wide a variety as possible of three-dimensional radiative transfer models for a small but representative number of "cases." However, it is dominated by modelers working at the level of linear transport theory (i.e., they solve the radiative transfer equation) and an overwhelming majority of these participants use slow-but-robust Monte Carlo techniques. This means that only a small portion of the efficiency vs. accuracy vs. flexibility domain is currently populated by I3RC participants. To balance this natural clustering the present authors have organized a systematic outreach towards
Evans, T.E.; Leonard, A.W.; West, W.P.; Finkenthal, D.F.; Fenstermacher, M.E.; Porter, G.D.
1998-08-01
Experimentally measured carbon line emissions and total radiated power distributions from the DIII-D divertor and Scrape-Off Layer (SOL) are compared to those calculated with the Monte Carlo Impurity (MCI) model. A UEDGE background plasma is used in MCI with the Roth and Garcia-Rosales (RG-R) chemical sputtering model and/or one of six physical sputtering models. While results from these simulations do not reproduce all of the features seen in the experimentally measured radiation patterns, the total radiated power calculated in MCI is in relatively good agreement with that measured by the DIII-D bolometric system when the Smith78 physical sputtering model is coupled to RG-R chemical sputtering in an unaltered UEDGE plasma. Alternatively, MCI simulations done with UEDGE background ion temperatures along the divertor target plates adjusted to better match those measured in the experiment resulted in three physical sputtering models which, when coupled to the RG-R model, gave a total radiated power within 10% of the measured value.
Bayesian Atmospheric Radiative Transfer (BART) Code and Application to WASP-43b
NASA Astrophysics Data System (ADS)
Blecic, Jasmina; Harrington, Joseph; Cubillos, Patricio; Bowman, Oliver; Rojo, Patricio; Stemm, Madison; Lust, Nathaniel B.; Challener, Ryan; Foster, Austin James; Foster, Andrew S.; Blumenthal, Sarah D.; Bruce, Dylan
2016-01-01
We present a new open-source Bayesian radiative-transfer framework, Bayesian Atmospheric Radiative Transfer (BART, https://github.com/exosports/BART), and its application to WASP-43b. BART initializes a model for the atmospheric retrieval calculation, generates thousands of theoretical model spectra using parametrized pressure and temperature profiles and line-by-line radiative-transfer calculation, and employs a statistical package to compare the models with the observations. It consists of three self-sufficient modules available to the community under the reproducible-research license: the Thermochemical Equilibrium Abundances module (TEA, https://github.com/dzesmin/TEA, Blecic et al. 2015), the radiative-transfer module (Transit, https://github.com/exosports/transit), and the Multi-core Markov-chain Monte Carlo statistical module (MCcubed, https://github.com/pcubillos/MCcubed, Cubillos et al. 2015). We applied BART to all available WASP-43b secondary eclipse data from space- and ground-based observations, constraining the temperature-pressure profile and molecular abundances of the dayside atmosphere of WASP-43b. This work was supported by NASA Planetary Atmospheres grant NNX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G. JB holds a NASA Earth and Space Science Fellowship.
FY05 LDRD Final Report Molecular Radiation Biodosimetry LDRD Project Tracking Code: 04-ERD-076
Jones, I M; A.Coleman, M; Lehmann, J; Manohar, C F; Marchetti, F; Mariella, R; Miles, R; Nelson, D O; Wyrobek, A J
2006-02-03
In the event of a nuclear or radiological accident or terrorist event, it is important to identify individuals who can benefit from prompt medical care and to reassure those who do not need it. Achieving these goals will maximize the ability to manage the medical consequences of radiation exposure, which unfold over a period of hours, days, weeks, or years, depending on dose. Medical interventions that reduce near-term morbidity and mortality from high but non-lethal exposures require advanced medical support and must be focused on those in need as soon as possible. There are two traditional approaches to radiation dosimetry, physical and biological. Each as currently practiced has strengths and limitations. Physical dosimetry for radiation exposure is routine for selected sites and for individual nuclear workers in certain industries, medical centers, and research institutions. No monitoring of individuals in the general population is currently performed. When physical dosimetry is available at the time of an accident/event or soon thereafter, it can provide valuable information in support of accident/event triage. Lack of data for most individuals is a major limitation, as differences in exposure can be significant due to shielding, atmospherics, etc. A smaller issue in terms of the number of people affected is that the same dose may have more or less biological effect on subsets of the population. Biological dosimetry is the estimation of exposure based on physiological or cellular alterations induced in an individual by radiation. The best established and most precise biodosimetric methods are measurement of the decline of blood cells over time and measurement of the frequency of chromosome aberrations. In accidents or events affecting small numbers of people, it is practical to allocate the resources and time (days of clinical follow-up or specialists' laboratory time) to conduct these studies. However, if large numbers of people have been exposed, or fear they may have
NASA Astrophysics Data System (ADS)
Gillespie, K. M.; Speirs, D. C.; Ronald, K.; McConville, S. L.; Phelps, A. D. R.; Bingham, R.; Cross, A. W.; Robertson, C. W.; Whyte, C. G.; He, W.; Vorgul, I.; Cairns, R. A.; Kellett, B. J.
2008-12-01
Auroral Kilometric Radiation (AKR) occurs naturally in the polar regions of the Earth's magnetosphere, where electrons are accelerated by electric fields into the increasing planetary magnetic dipole. Here, conservation of the magnetic moment converts axial to rotational momentum, forming a horseshoe distribution in velocity phase space. This distribution is unstable to cyclotron emission, with radiation emitted in the X-mode. In a scaled laboratory reproduction of this process, a 75-85 keV electron beam of 5-40 A was magnetically compressed by a system of solenoids, and emissions were observed for cyclotron frequencies of 4.42 GHz and 11.7 GHz resonating with near cut-off TE0,1 and TE0,3 modes, respectively. Here we compare these measurements with numerical predictions from the 3D PiC code KARAT. The 3D simulations accurately predicted the radiation modes and frequencies produced by the experiment. The predicted conversion efficiency between electron kinetic and wave field energy of around 1% is close to the experimental measurements and broadly consistent with quasi-linear theoretical analysis and geophysical observations.
NASA Astrophysics Data System (ADS)
Plante, Ianik; Devroye, Luc
2015-09-01
Several computer codes simulating chemical reactions in particle systems are based on the Green's functions of the diffusion equation (GFDE). Indeed, many types of chemical systems have been simulated using the exact GFDE, which has also become the gold standard for validating other theoretical models. In this work, a simulation algorithm is presented to sample the interparticle distance for the partially diffusion-controlled reversible ABCD reaction. This algorithm is considered exact for two-particle systems, is faster than conventional look-up tables, and uses only a few kilobytes of memory. The simulation results obtained with this method are compared with those obtained with the independent reaction times (IRT) method. This work is part of our effort in developing models to understand the role of chemical reactions in the radiation effects on cells and tissues and may eventually be included in event-based models of space radiation risks. However, as many reactions are of this type in biological systems, this algorithm might play a pivotal role in future simulation programs not only in radiation chemistry, but also in the simulation of biochemical networks in time and space as well.
NUSTART: A PC code for NUclear STructure And Radiative Transition analysis and supplementation
Larsen, G.L.; Gardner, D.G.; Gardner, M.A.
1990-10-01
NUSTART is a computer program for the IBM PC/AT. It is designed for use with the nuclear reaction cross-section code STAPLUS, a STAPRE-based CRAY computer code being developed at Lawrence Livermore National Laboratory. The NUSTART code was developed to handle large sets of discrete nuclear levels and the multipole transitions among these levels; it operates in three modes. The Data File Error Analysis mode analyzes an existing STAPLUS input file containing the levels and their multipole transition branches for a number of physics and/or typographical errors. The Interactive Data File Generation mode allows the user to create input files of discrete levels and their branching fractions in the format required by STAPLUS, even though the user enters the information in the (different) format used by many people in the nuclear structure field. In the Branching Fractions Calculations mode, the discrete nuclear level set is read, and the multipole transitions among the levels are computed under one of two possible assumptions: (1) the levels have no collective character, or (2) the levels are all rotational band heads. Only E1, M1, and E2 transitions are considered, and the respective strength functions may be constants or, in the case of E1 transitions, energy dependent. The first option is used for nuclei near closed shells; the bandhead option may be used to vary the E1, M1, and E2 strengths for interband transitions. K-quantum-number selection rules may be invoked if desired. 19 refs.
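A toy illustration of the branching-fraction calculation under the no-collectivity assumption with constant strength functions: the relative rate of each transition scales as S_XL times E_gamma to the power 2L+1 (E^3 for E1 and M1, E^5 for E2). The strength values below are illustrative placeholders, not evaluated nuclear data.

```python
def branching_fractions(transitions, strengths):
    """Branching fractions for decays out of one level, assuming constant
    strength functions: rate ~ S_XL * E_gamma**(2L+1), i.e. E^3 for E1
    and M1, E^5 for E2. `transitions` is a list of (multipole, E_gamma)
    pairs; `strengths` maps multipole type to a constant strength."""
    power = {"E1": 3, "M1": 3, "E2": 5}
    rates = [strengths[m] * e ** power[m] for m, e in transitions]
    total = sum(rates)
    return [r / total for r in rates]
```

With equal strengths, a 2 MeV E1 transition outcompetes a 1 MeV E1 transition by the expected factor of eight.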
1980-02-29
Version 00 LADTAP II calculates the radiation exposure to man from potable water, aquatic foods, shoreline deposits, swimming, boating, and irrigated foods, and also the dose to biota. Doses are calculated for both the maximum individual and for the population and are summarized for each pathway by age group and organ. It also calculates the doses to certain representative biota other than man in the aquatic environment such as fish, invertebrates, algae, muskrats, raccoons, herons, and ducks using models presented in WASH-1258.
PORTA: A Massively Parallel Code for 3D Non-LTE Polarized Radiative Transfer
NASA Astrophysics Data System (ADS)
Štěpán, J.
2014-10-01
The interpretation of the Stokes profiles of the solar (stellar) spectral line radiation requires solving a non-LTE radiative transfer problem that can be very complex, especially when the main interest lies in modeling the linear polarization signals produced by scattering processes and their modification by the Hanle effect. One of the main difficulties stems from the fact that the plasma of a stellar atmosphere can be highly inhomogeneous and dynamic, which implies the need to solve the non-equilibrium problem of the generation and transfer of polarized radiation in realistic three-dimensional stellar atmospheric models. Here we present PORTA, a computer program we have developed for solving, in three-dimensional (3D) models of stellar atmospheres, the problem of the generation and transfer of spectral line polarization, taking into account anisotropic radiation pumping and the Hanle and Zeeman effects in multilevel atoms. The numerical method of solution is based on a highly convergent iterative algorithm, whose convergence rate is insensitive to the grid size, and on an accurate short-characteristics formal solver of the Stokes-vector transfer equation which uses monotonic Bézier interpolation. In addition to the iterative method and the 3D formal solver, another important feature of PORTA is a novel parallelization strategy suitable for taking advantage of massively parallel computers. Linear scaling of the solution with the number of processors makes it possible to reduce the solution time by several orders of magnitude. We present useful benchmarks and a few illustrations of applications using a 3D model of the solar chromosphere resulting from MHD simulations. Finally, we present our conclusions with a view to future research. For more details see Štěpán & Trujillo Bueno (2013).
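The monotonic Bézier interpolation mentioned above can be sketched in one dimension: a quadratic Bézier segment whose control point is clamped to the data range cannot overshoot, which is what keeps a short-characteristics formal solver free of spurious negative intensities. This is a generic illustration of the device, not PORTA's actual implementation.

```python
def bezier_quad(s0, s1, c, u):
    """Quadratic Bezier interpolant between s0 (u = 0) and s1 (u = 1)
    with control point c."""
    return (1.0 - u) ** 2 * s0 + 2.0 * u * (1.0 - u) * c + u ** 2 * s1

def monotonic_control(s_prev, s0, s1, h_prev, h):
    """Control point built from a centered slope estimate and clamped to
    [min(s0, s1), max(s0, s1)], so the interpolant stays within the data
    and cannot introduce overshoots."""
    slope = (s1 - s_prev) / (h_prev + h)
    c = s0 + 0.5 * h * slope
    return min(max(c, min(s0, s1)), max(s0, s1))
```

Because the control point lies in the convex hull of the endpoints, every interpolated value stays between s0 and s1 even when the upstream slope estimate is wildly steep.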
MagRad: A code to optimize the operation of superconducting magnets in a radiation environment
Yeaw, C.T.
1995-12-31
A powerful computational tool, called MagRad, has been developed which optimizes magnet design for operation in radiation fields. Specifically, MagRad has been used for the analysis and design modification of the cable-in-conduit conductors of the TF magnet systems in fusion reactor designs. Since the TF magnets must operate in a radiation environment which damages the material components of the conductor and degrades their performance, the optimization of conductor design must account not only for start-up magnet performance, but also shut-down performance. The degradation in performance consists primarily of three effects: reduced stability margin of the conductor; a transition out of the well-cooled operating regime; and an increased maximum quench temperature attained in the conductor. Full analysis of the magnet performance over the lifetime of the reactor includes: radiation damage to the conductor, stability, protection, steady state heat removal, shielding effectiveness, optimal annealing schedules, and finally costing of the magnet and reactor. Free variables include primary and secondary conductor geometric and compositional parameters, as well as fusion reactor parameters. A means of dealing with the radiation damage to the conductor, namely high temperature superconductor anneals, is proposed, examined, and demonstrated to be both technically feasible and cost effective. Additionally, two relevant reactor designs (ITER CDA and ARIES-II/IV) have been analyzed. Upon addition of pure copper strands to the cable, the ITER CDA TF magnet design was found to be marginally acceptable, although much room for both performance improvement and cost reduction exists. A cost reduction of 10-15% of the capital cost of the reactor can be achieved by adopting a suitable superconductor annealing schedule. In both of these reactor analyses, the performance predictive capability of MagRad and its associated costing techniques have been demonstrated.
NASA Technical Reports Server (NTRS)
Staenz, K.; Williams, D. J.; Fedosejevs, G.; Teillet, P. M.
1995-01-01
Surface reflectance retrieval from imaging spectrometer data as acquired with the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) has become important for quantitative analysis. In order to calculate surface reflectance from remotely measured radiance, radiative transfer codes such as 5S and MODTRAN2 play an increasing role for removal of scattering and absorption effects of the atmosphere. Accurate knowledge of the exo-atmospheric solar irradiance (E(sub 0)) spectrum at the spectral resolution of the sensor is important for this purpose. The present study investigates the impact of differences in the solar irradiance function, as implemented in a modified version of 5S (M5S), 6S, and MODTRAN2, and as proposed by Green and Gao, on the surface reflectance retrieved from AVIRIS data. Reflectance measured in situ is used as a basis of comparison.
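The sensitivity studied above can be seen in the standard apparent-reflectance relation: the retrieved reflectance is inversely proportional to the adopted solar irradiance, so any bias in the E0 spectrum propagates directly into the result. A minimal sketch (ignoring atmospheric path radiance and transmittance, which 5S/6S/MODTRAN handle):

```python
import math

def apparent_reflectance(radiance, e0, sun_zenith_deg, d_au=1.0):
    """Apparent (top-of-atmosphere) reflectance from measured radiance:
    rho = pi * L * d^2 / (E0 * cos(theta_s)). A bias in the adopted
    exo-atmospheric irradiance E0 maps inversely into rho."""
    mu_s = math.cos(math.radians(sun_zenith_deg))
    return math.pi * radiance * d_au ** 2 / (e0 * mu_s)
```

For example, a 2% overestimate of E0 lowers the retrieved reflectance by about 2%, which is the kind of difference the study quantifies between irradiance functions.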
Mao, S.; Liu, J.; Nelson, W.R.
1992-01-01
The EGS computer code, developed for the Monte Carlo simulation of the transport of electrons and photons, has been used since 1970 in the design of accelerators and detectors for high-energy physics. In this paper we present three examples demonstrating how the current version, EGS4, is used to determine energy-loss patterns and source terms along beam pipes (including flanges, collimators, etc.). This information is useful for further shielding and dosimetry studies. The calculated results from the analysis are in close agreement with the measured values. To facilitate this review, a new add-on package called SHOWGRAF is used to display shower trajectories for the three examples.
FESTR: Finite-Element Spectral Transfer of Radiation spectroscopic modeling and analysis code
Hakel, Peter
2016-06-16
Here we report on the development of a new spectral postprocessor of hydrodynamic simulations of hot, dense plasmas. Based on given time histories of one-, two-, and three-dimensional spatial distributions of materials, and their local temperature and density conditions, spectroscopically-resolved signals are computed. The effects of radiation emission and absorption by the plasma on the emergent spectra are simultaneously taken into account. This program can also be used independently of hydrodynamic calculations to analyze available experimental data with the goal of inferring plasma conditions.
Modelling of the Global Space Radiation Field at Aircraft Altitudes by the European Code EPCARD
NASA Astrophysics Data System (ADS)
Heinrich, W.; Schraube, H.; Roesler, S.
Supported by the European Commission, the European Program Package for the Calculation of Aviation Route Doses (EPCARD) was developed. For this purpose we combined state-of-the-art models to (i) describe the cosmic radiation field with respect to solar modulation and geomagnetic shielding, (ii) describe particle interaction and production in the atmosphere, and (iii) determine the appropriate dose quantities. Spectral fluence rates of different particles (n, p, π, γ, e, μ) produced in the atmosphere by interactions of primary cosmic rays have been determined by Monte Carlo calculations for different periods of solar modulation, geomagnetic shielding conditions, and depths in the atmosphere. These data are used as the basis of EPCARD. For any chosen flight route and profile, operational and effective doses can be determined in full agreement with the ICRU/ICRP definitions, and the readings of airborne instruments can also be determined. The results of the model predictions generally agree with experimental data within +/-30% or significantly better. Differences are caused by model uncertainties and also by uncertainties in the fundamental understanding of the response characteristics of the experimental devices employed. Several examples of comparison between model predictions and experimental data are given. Finally, we discuss the capabilities of model predictions for the estimation of radiation doses due to solar particle events. Large uncertainties arise from the extremely complicated situation of the incident solar particles: their non-isotropy, asymptotic arrival directions, time dependence of spectral fluxes, and geomagnetic disturbances, which are known to exist but are not known in detail.
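The route-dose calculation described above reduces, at its core, to integrating a dose rate along the flight profile. A minimal sketch with a hypothetical dose-rate lookup; a real EPCARD-style rate would also depend on geomagnetic cutoff and solar modulation, which this toy function omits:

```python
def route_dose(profile, dose_rate):
    """Trapezoidal integration of dose rate along a flight profile.
    `profile` is a list of (time_h, altitude_km) waypoints and
    `dose_rate(alt_km)` returns an effective dose rate in uSv/h
    (a hypothetical altitude-only lookup for illustration)."""
    total = 0.0
    for (t0, a0), (t1, a1) in zip(profile, profile[1:]):
        total += 0.5 * (dose_rate(a0) + dose_rate(a1)) * (t1 - t0)
    return total
```

With a toy linear rate of 0.5 uSv/h per km of altitude, a climb-cruise-descent profile integrates to the sum of its trapezoids.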
Karlykhanov, N. G.; Glazyrin, I. V.; Lykov, V. A.; Politov, V. Yu.; Sofronov, A. A.; Timakova, M. S.
1997-04-15
The 1D ERA code simulates the kinetics of ionization and nonequilibrium radiation transfer in lines and continua. The code has been extended to account for the ionization and excitation of ions in the field of the laser radiation; to describe these processes, the wave equation is solved. Results of calculations of the X-ray yield from an Al plate irradiated by a picosecond laser pulse at intensities of 10{sup 16}-10{sup 17} W/cm{sup 2} are presented.
NASA Astrophysics Data System (ADS)
Yamada, Takayoshi; Kasai, Yasuko; Yoshida, Naohiro
2016-07-01
The Submillimeter Wave Instrument (SWI) is one of the scientific instruments on the JUpiter ICy moons Explorer (JUICE). We plan to observe atmospheric compositions, including water vapor and its isotopomers, of the Galilean moons (Io, Europa, Ganymede, and Callisto). The frequency windows of SWI are 530 to 625 GHz and 1080 to 1275 GHz, with 100 kHz spectral resolution. We are developing a line-by-line radiative transfer code in Japan for the Ganymede atmosphere in the THz region (0-3 THz). Molecular line parameters (line intensity and partition function) were taken from the JPL (Jet Propulsion Laboratory) catalogue. A pencil beam was assumed to calculate spectra of H2O and CO rotational transitions in the THz region. We performed comparisons between our model and ARTS (Atmospheric Radiative Transfer Simulator). The differences were less than 10% and 5% for H2O and CO, respectively, under the condition of local thermodynamic equilibrium (LTE). Comparisons with several models under the non-LTE assumption will be presented.
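The core of a line-by-line pencil-beam calculation like the one described above is accumulating per-line absorption cross-sections on a frequency grid and applying Beer-Lambert. A sketch with Lorentzian line shapes and arbitrary consistent units (not the JPL-catalogue intensity conventions):

```python
import numpy as np

def lbl_transmittance(freq_ghz, lines, column):
    """Pencil-beam line-by-line transmittance: sum Lorentzian absorption
    cross-sections over catalogue lines, then apply Beer-Lambert.
    `lines` holds (center_ghz, intensity, halfwidth_ghz) triples and
    `column` is the absorber column, all in illustrative units."""
    k = np.zeros_like(freq_ghz, dtype=float)
    for f0, s, gamma in lines:
        k += s * (gamma / np.pi) / ((freq_ghz - f0) ** 2 + gamma ** 2)
    return np.exp(-column * k)
```

The deepest absorption falls at the line center and the transmittance stays within (0, 1], which is easy to verify on a test grid.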
Coupling external radiation transport code results to the GADRAS detector response function.
Mitchell, Dean J; Thoreson, Gregory G.; Horne, Steven M.
2014-01-01
Simulating gamma spectra is useful for analyzing special nuclear materials. Gamma spectra are influenced not only by the source and the detector, but also by the external, and potentially complex, scattering environment, which can make accurate representations of gamma spectra difficult to obtain. By coupling the Monte Carlo N-Particle (MCNP) code with the Gamma Detector Response and Analysis Software (GADRAS) detector response function, gamma spectrum simulations can be computed with a high degree of fidelity even in the presence of a complex scattering environment. Traditionally, GADRAS represents the external scattering environment with empirically derived scattering parameters. By modeling the external scattering environment in MCNP and using the results as input for the GADRAS detector response function, accurate gamma spectra can be obtained. This method was verified with experimental data obtained in an environment with a significant amount of scattering material. The experiment used both gamma-emitting sources and moderated and bare neutron-emitting sources. The sources were modeled using GADRAS and MCNP in the presence of the external scattering environment, producing accurate representations of the experimental data.
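Conceptually, coupling a transport code to a detector response function means folding the externally computed incident flux through a response matrix. A minimal sketch, where the matrix stands in for the GADRAS response function and the flux vector for an MCNP tally (both hypothetical values):

```python
import numpy as np

def fold_response(flux, response):
    """Predicted pulse-height spectrum from an externally computed
    incident flux binned in energy: counts = R @ flux, where R[i, j] is
    the probability that a photon in incident-energy bin j deposits in
    pulse-height channel i."""
    return response @ flux
```

If each response-matrix column sums to one (every incident photon detected somewhere), total counts are conserved, which makes a convenient consistency check.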
NASA Astrophysics Data System (ADS)
Poirier, M.; de Gaufridy de Dortan, F.
2009-12-01
The behavior of non-local thermal equilibrium (NLTE) plasmas plays a central role in many fields of modern-day physics, such as laser-produced plasmas, astrophysics, inertial or magnetic confinement fusion devices, and X-ray sources. In steady-state cases the proper description of these plasmas may require the solution of thousands of linear rate equations. A possible simplification for this numerical task lies in some form of statistical averaging, such as the averaging over configurations or superconfigurations. However, to assess the validity of such an averaging procedure and to handle cases where isolated lines play an important role, it is necessary to treat detailed level systems. This involves matrices with potentially billions of elements, which are rather sparse but still involve thousands of diagonals above and below the main one. We propose here a numerical algorithm based on the LU decomposition for such linear systems. It is shown that this method is orders of magnitude faster than traditional Gauss elimination. Moreover, it exhibits none of the convergence or accuracy issues encountered with methods based on conjugate gradients or minimization. Among cases treated at the last NLTE-kinetics-code meeting, krypton and tungsten plasmas are considered. Furthermore, to assess the validity of configuration averaging, several criteria are discussed. While a criterion based on detailed balance is relevant in cases not too far from LTE, it is found to be insufficient in general. An alternate criterion based on the inspection of the influence of an arbitrary configuration temperature is proposed and tested successfully.
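A toy version of the steady-state rate-equation solve described above: build the rate matrix, close the singular system A n = 0 with the normalization sum(n) = 1, and solve by LU factorization. This dense two-level sketch only illustrates the structure; the paper's contribution concerns banded-LU efficiency for matrices with thousands of diagonals.

```python
import numpy as np

def steady_state_populations(rates):
    """Steady-state level populations for a collisional-radiative system.
    `rates[i, j]` (i != j) is the transition rate from level j to level i;
    diagonals are filled so that each column sums to zero (losses balance
    gains). The last equation is replaced by the normalization sum(n) = 1,
    and the system is solved via LU factorization (np.linalg.solve uses
    LAPACK's LU under the hood)."""
    a = np.array(rates, dtype=float)
    np.fill_diagonal(a, 0.0)
    np.fill_diagonal(a, -a.sum(axis=0))   # column sums -> zero
    a[-1, :] = 1.0                        # normalization row
    b = np.zeros(a.shape[0])
    b[-1] = 1.0
    return np.linalg.solve(a, b)
```

For a two-level system with excitation rate 1 and de-excitation rate 2, detailed balance gives populations of 2/3 and 1/3.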
NASA Astrophysics Data System (ADS)
Kuroda, Takami; Takiwaki, Tomoya; Kotake, Kei
2016-02-01
We present a new multi-dimensional radiation-hydrodynamics code for massive stellar core-collapse in full general relativity (GR). Employing an M1 analytical closure scheme, we solve spectral neutrino transport of the radiation energy and momentum based on a truncated moment formalism. Regarding neutrino opacities, we take into account a baseline set used in state-of-the-art simulations, in which inelastic neutrino-electron scattering, thermal neutrino production via pair annihilation, and nucleon-nucleon bremsstrahlung are included. While the Einstein field equations and the spatial advection terms in the radiation-hydrodynamics equations are evolved explicitly, the source terms due to neutrino-matter interactions and energy shift in the radiation moment equations are integrated implicitly by an iteration method. To verify our code, we first perform a series of standard radiation tests with analytical solutions, including checks of gravitational redshift and Doppler shift. Good agreement in these tests supports the reliability of the GR multi-energy neutrino transport scheme. We then conduct several test simulations of core-collapse, bounce, and shock stall of a 15{M}⊙ star in Cartesian coordinates and make a detailed comparison with published results. Our code reproduces the results of full Boltzmann neutrino transport quite well, especially before bounce. In the postbounce phase our code also performs well; however, there are several differences that most likely stem from insufficient spatial resolution in our current 3D-GR models. For clarifying the resolution dependence and extending the code comparison into the late postbounce phase, we argue that at least next-generation exaflops-class supercomputers are needed.
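The M1 analytical closure referred to above specifies the Eddington factor as a function of the flux factor alone, interpolating between the diffusion and free-streaming limits. A sketch of Levermore's commonly used analytic form (the specific closure variant used in any given code may differ):

```python
import math

def m1_eddington_factor(f):
    """Levermore's analytic M1 closure: Eddington factor chi as a
    function of the flux factor f = |F| / (c E). chi -> 1/3 in the
    diffusion limit (f = 0) and chi -> 1 for free streaming (f = 1)."""
    return (3.0 + 4.0 * f * f) / (5.0 + 2.0 * math.sqrt(4.0 - 3.0 * f * f))
```

The closure is monotone in f, so the radiation pressure smoothly stiffens from isotropic (P = E/3) to fully beamed (P = E).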
Faden, R R; Lederer, S E; Moreno, J D
1996-11-27
The Advisory Committee on Human Radiation Experiments (ACHRE), established to review allegations of abuses of human subjects in federally sponsored radiation research, was charged with identifying appropriate standards to evaluate the ethics of cold war radiation experiments. One central question for ACHRE was to determine what role, if any, the Nuremberg Code played in the norms and practices of US medical researchers. Based on the evidence from ACHRE's Ethics Oral History Project and extensive archival research, we conclude that the Code, at the time it was promulgated, had little effect on mainstream medical researchers engaged in human subjects research. Although some clinical investigators raised questions about the conduct of research involving human beings, the medical profession did not pursue this issue until the 1960s. PMID:8922454
Emery, L.
1995-07-01
The interface program shower to the EGS4 Monte Carlo electromagnetic cascade shower simulation code system was written to facilitate the definition of complicated target and shielding geometries and to simplify the handling of input and output data. The geometry is defined by a series of namelist commands in an input file. The input and output beam data files follow the SDDS (self-describing data set) protocol, which makes the files compatible with other physics codes that follow the same protocol. For instance, one can use the results of the cascade shower simulation as the input data for an accelerator tracking code. The shower code has also been used to calculate the bremsstrahlung component of radiation doses for possible beam-loss scenarios at the Advanced Photon Source (APS) at Argonne National Laboratory.
NASA Astrophysics Data System (ADS)
Stone, James M.; Norman, Michael L.
1992-06-01
In this, the second of a series of three papers, we continue a detailed description of ZEUS-2D, a numerical code for the simulation of fluid dynamical flows in astrophysics including a self-consistent treatment of the effects of magnetic fields and radiation transfer. In this paper, we give a detailed description of the magnetohydrodynamical (MHD) algorithms in ZEUS-2D. The recently developed constrained transport (CT) algorithm is implemented for the numerical evolution of the components of the magnetic field for MHD simulations. This formalism guarantees the numerically evolved field components will satisfy the divergence-free constraint at all times. We find, however, that the method used to compute the electromotive forces must be chosen carefully to propagate accurately all modes of MHD wave families (in particular shear Alfvén waves). A new method of computing the electromotive force is developed using the method of characteristics (MOC). It is demonstrated through the results of an extensive series of MHD test problems that the resulting hybrid MOC-CT method provides for the accurate evolution of all modes of MHD wave families.
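The key property of the constrained transport scheme described above is that, because each corner electromotive force is shared by its adjacent faces, the cell-centered divergence of B is conserved to machine precision by every induction update. A 2D staggered-grid sketch of this bookkeeping (not the ZEUS-2D MOC-CT implementation itself):

```python
import numpy as np

def ct_update(bx, by, ez, dt, dx, dy):
    """One constrained-transport induction update on a 2D staggered grid:
    bx lives on x-faces (nx+1, ny), by on y-faces (nx, ny+1), and the EMF
    ez on cell corners (nx+1, ny+1). Each corner EMF is shared by the
    neighboring faces, so the update is curl-like and divergence-free."""
    bx -= dt * (ez[:, 1:] - ez[:, :-1]) / dy
    by += dt * (ez[1:, :] - ez[:-1, :]) / dx

def div_b(bx, by, dx, dy):
    """Cell-centered divergence of the face-centered field components."""
    return (bx[1:, :] - bx[:-1, :]) / dx + (by[:, 1:] - by[:, :-1]) / dy
```

Starting from any divergence-free field, an update with an arbitrary EMF array leaves div B unchanged to round-off, which is the whole point of the CT formalism.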
Shestakov, Aleksei I.; Offner, Stella S. R.
2008-01-10
We present a scheme to solve the nonlinear multigroup radiation diffusion (MGD) equations. The method is incorporated into a massively parallel, multidimensional, Eulerian radiation-hydrodynamic code with Adaptive Mesh Refinement (AMR). The patch-based AMR algorithm refines in both space and time creating a hierarchy of levels, coarsest to finest. The physics modules are time-advanced using operator splitting. On each level, separate 'level-solve' packages advance the modules. Our multigroup level-solve adapts an implicit procedure which leads to a two-step iterative scheme that alternates between elliptic solves for each group with intra-cell group coupling. For robustness, we introduce pseudo transient continuation ({psi}tc). We analyze the magnitude of the {psi}tc parameter to ensure positivity of the resulting linear system, diagonal dominance and convergence of the two-step scheme. For AMR, a level defines a subdomain for refinement. For diffusive processes such as MGD, the refined level uses Dirichlet boundary data at the coarse-fine interface and the data is derived from the coarse level solution. After advancing on the fine level, an additional procedure, the sync-solve (SS), is required in order to enforce conservation. The MGD SS reduces to an elliptic solve on a combined grid for a system of G equations, where G is the number of groups. We adapt the 'partial temperature' scheme for the SS; hence, we reuse the infrastructure developed for scalar equations. Results are presented. We consider a multigroup test problem with a known analytic solution. We demonstrate utility of {psi}tc by running with increasingly larger timesteps. Lastly, we simulate the sudden release of energy Y inside an Al sphere (r = 15 cm) suspended in air at STP. For Y = 11 kT, we find that gray radiation diffusion and MGD produce similar results. However, if Y = 1 MT, the two packages yield different results. Our large Y simulation contradicts a long-standing theory and demonstrates
NASA Astrophysics Data System (ADS)
Shestakov, Aleksei I.; Offner, Stella S. R.
2008-01-01
We present a scheme to solve the nonlinear multigroup radiation diffusion (MGD) equations. The method is incorporated into a massively parallel, multidimensional, Eulerian radiation-hydrodynamic code with Adaptive Mesh Refinement (AMR). The patch-based AMR algorithm refines in both space and time creating a hierarchy of levels, coarsest to finest. The physics modules are time-advanced using operator splitting. On each level, separate "level-solve" packages advance the modules. Our multigroup level-solve adapts an implicit procedure which leads to a two-step iterative scheme that alternates between elliptic solves for each group with intra-cell group coupling. For robustness, we introduce pseudo transient continuation (Ψtc). We analyze the magnitude of the Ψtc parameter to ensure positivity of the resulting linear system, diagonal dominance and convergence of the two-step scheme. For AMR, a level defines a subdomain for refinement. For diffusive processes such as MGD, the refined level uses Dirichlet boundary data at the coarse-fine interface and the data is derived from the coarse level solution. After advancing on the fine level, an additional procedure, the sync-solve (SS), is required in order to enforce conservation. The MGD SS reduces to an elliptic solve on a combined grid for a system of G equations, where G is the number of groups. We adapt the "partial temperature" scheme for the SS; hence, we reuse the infrastructure developed for scalar equations. Results are presented. We consider a multigroup test problem with a known analytic solution. We demonstrate utility of Ψtc by running with increasingly larger timesteps. Lastly, we simulate the sudden release of energy Y inside an Al sphere (r = 15 cm) suspended in air at STP. For Y = 11 kT, we find that gray radiation diffusion and MGD produce similar results. However, if Y = 1 MT, the two packages yield different results. Our large Y simulation contradicts a long-standing theory and demonstrates the
Shestakov, A I; Offner, S R
2006-09-21
We present a scheme to solve the nonlinear multigroup radiation diffusion (MGD) equations. The method is incorporated into a massively parallel, multidimensional, Eulerian radiation-hydrodynamic code with adaptive mesh refinement (AMR). The patch-based AMR algorithm refines in both space and time creating a hierarchy of levels, coarsest to finest. The physics modules are time-advanced using operator splitting. On each level, separate 'level-solve' packages advance the modules. Our multigroup level-solve adapts an implicit procedure which leads to a two-step iterative scheme that alternates between elliptic solves for each group with intra-cell group coupling. For robustness, we introduce pseudo transient continuation ({Psi}tc). We analyze the magnitude of the {Psi}tc parameter to ensure positivity of the resulting linear system, diagonal dominance and convergence of the two-step scheme. For AMR, a level defines a subdomain for refinement. For diffusive processes such as MGD, the refined level uses Dirichlet boundary data at the coarse-fine interface and the data is derived from the coarse level solution. After advancing on the fine level, an additional procedure, the sync-solve (SS), is required in order to enforce conservation. The MGD SS reduces to an elliptic solve on a combined grid for a system of G equations, where G is the number of groups. We adapt the 'partial temperature' scheme for the SS; hence, we reuse the infrastructure developed for scalar equations. Results are presented. We consider a multigroup test problem with a known analytic solution. We demonstrate utility of {Psi}tc by running with increasingly larger timesteps. Lastly, we simulate the sudden release of energy Y inside an Al sphere (r = 15 cm) suspended in air at STP. For Y = 11 kT, we find that gray radiation diffusion and MGD produce similar results. However, if Y = 1 MT, the two packages yield different results. Our large Y simulation contradicts a long-standing theory and demonstrates
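The pseudo transient continuation device described in the abstracts above can be sketched on a small linear system: adding 1/dtau to the diagonal strengthens diagonal dominance of each solve, and growing the pseudo-timestep drives the iterates to the solution of the original system. This is a toy dense-matrix illustration, not the AMR multigroup implementation.

```python
import numpy as np

def ptc_solve(a, b, dtau0=1.0, growth=2.0, iters=40):
    """Pseudo transient continuation for A x = b: repeatedly solve
    (A + I/dtau) x_new = b + x_old/dtau with a growing pseudo-timestep
    dtau. The 1/dtau diagonal shift improves diagonal dominance of each
    linear solve; as dtau -> infinity, the fixed point satisfies A x = b."""
    n = a.shape[0]
    x = np.zeros(n)
    dtau = dtau0
    for _ in range(iters):
        x = np.linalg.solve(a + np.eye(n) / dtau, b + x / dtau)
        dtau *= growth
    return x
```

On a well-conditioned system the iterates converge to the direct solution, while each intermediate solve is better conditioned than the original.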
Ellingson, R.G.; Wiscombe, W.J.; Murcray, D.; Smith, W.; Strauch, R.
1992-12-31
Following the finding by the InterComparison of Radiation Codes used in Climate Models (ICRCCM) of large differences among fluxes predicted by sophisticated radiation models that could not be sorted out because of the lack of a set of accurate atmospheric spectral radiation data measured simultaneously with the important radiative properties of the atmosphere, our team of scientists proposed to remedy the situation by carrying out a comprehensive program of measurement and analysis called SPECTRE (Spectral Radiance Experiment). The data collected during SPECTRE form the test bed for the second phase of ICRCCM, namely verification and calibration of radiation codes used in climate models. This should lead to more accurate radiation models for use in parameterizing climate models, which in turn play a key role in the prediction of trace-gas greenhouse effects. This report summarizes the activities of our group during the project's third year to meet our stated objectives. The report is divided into three sections entitled: SPECTRE Activities, ICRCCM Activities, and Summary Information. The section on SPECTRE activities summarizes the field portion of the project during 1991 and the data reduction/analysis performed by the various participants. The section on ICRCCM activities summarizes our initial attempts to select data for distribution to ICRCCM participants and to compare observations with calculations, as will be done by the ICRCCM participants. The Summary Information section lists data concerning publications, presentations, graduate students supported, and post-doctoral appointments during the project.
Kotchenova, Svetlana Y; Vermote, Eric F; Matarrese, Raffaella; Klemm, Frank J
2006-09-10
A vector version of the 6S (Second Simulation of a Satellite Signal in the Solar Spectrum) radiative transfer code (6SV1), which enables accounting for radiation polarization, has been developed and validated against a Monte Carlo code, Coulson's tabulated values, and MOBY (Marine Optical Buoy System) water-leaving reflectance measurements. The developed code was also tested against the scalar codes SHARM, DISORT, and MODTRAN to evaluate its performance in scalar mode and the influence of polarization. The obtained results have shown a good agreement of 0.7% in comparison with the Monte Carlo code, 0.2% for Coulson's tabulated values, and 0.001-0.002 for the 400-550 nm region for the MOBY reflectances. Ignoring the effects of polarization led to large errors in calculated top-of-atmosphere reflectances: more than 10% for a molecular atmosphere and up to 5% for an aerosol atmosphere. This new version of 6S is intended to replace the previous scalar version used for calculation of lookup tables in the MODIS (Moderate Resolution Imaging Spectroradiometer) atmospheric correction algorithm. PMID:16926910
NASA Astrophysics Data System (ADS)
Thelen, Jean-Claude; Havemann, Stephan; Wong, Gerald
2015-10-01
The Havemann-Taylor Fast Radiative Transfer Code (HT-FRTC) is a core component of the Met Office NEON Tactical Decision Aid (TDA). Within NEON, the HT-FRTC has for a number of years been used to predict the infrared apparent thermal contrasts between different surface types as observed by an airborne sensor. To achieve this, the HT-FRTC is supplied with the inherent temperatures and spectral properties of these surfaces (i.e. ground target(s) and backgrounds). A key strength of the HT-FRTC is its ability to take into account the detailed properties of the atmosphere, which in the context of NEON tend to be provided by a Numerical Weather Prediction (NWP) forecast model. While water vapour and ozone are generally the most important gases, additional trace gases are now being incorporated into the HT-FRTC. The HT-FRTC also includes an exact treatment of atmospheric scattering based on spherical harmonics. This allows for the treatment of several different aerosol species and of liquid and ice clouds. Recent developments can even account for rain and falling snow. The HT-FRTC works in Principal Component (PC) space and is trained on a wide variety of atmospheric and surface conditions, which significantly reduces the computational requirements regarding memory and processing time. One clear-sky simulation takes approximately one millisecond at the time of writing. Recent developments allow the training of HT-FRTC to be both completely generalised and sensor independent. This is significant as the user of the code can add new sensors and new surfaces/targets by supplying extra files which contain their (possibly classified) spectral properties. The HT-FRTC has been extended to cover the spectral range of Photopic and NVG sensors. One aim here is to give guidance on the expected, directionally resolved sky brightness, especially at night, again taking the actual or forecast atmospheric conditions into account. Recent developments include light level predictions during
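Working in principal component space, as the HT-FRTC does, amounts to projecting spectra onto a small trained basis and doing all subsequent work on the coefficients. A generic SVD-based sketch of that compression (illustrative only; the HT-FRTC's actual training and basis construction are not public in this abstract):

```python
import numpy as np

def train_pcs(spectra, k):
    """Fit k principal components to a training set of spectra
    (rows = training samples, columns = spectral points) via SVD."""
    mean = spectra.mean(axis=0)
    _, _, vt = np.linalg.svd(spectra - mean, full_matrices=False)
    return mean, vt[:k]

def to_pc_space(spectrum, mean, pcs):
    """Project a spectrum onto the PC basis (the compressed state in
    which a PC-space fast model does its work)."""
    return pcs @ (spectrum - mean)

def from_pc_space(coeffs, mean, pcs):
    """Reconstruct a full spectrum from its PC coefficients."""
    return mean + pcs.T @ coeffs
```

When the training spectra genuinely live in a low-dimensional subspace, a handful of coefficients reconstructs each spectrum essentially exactly, which is what makes millisecond-scale simulations possible.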
NASA Technical Reports Server (NTRS)
Reddell, Brandon
2015-01-01
Designing hardware to operate in the space radiation environment is a very difficult and costly activity. Ground-based particle accelerators can be used to test for exposure to the radiation environment, one species at a time; however, the actual space environment cannot be duplicated because of the range of energies and the isotropic nature of space radiation. The FLUKA Monte Carlo code is an integrated physics package based at CERN that has been under development for the last 40+ years and includes the most up-to-date fundamental physics theory and particle physics data. This work presents an overview of FLUKA and how it has been used in conjunction with ground-based radiation testing for NASA to improve our understanding of secondary particle environments resulting from the interaction of space radiation with matter.
NASA Astrophysics Data System (ADS)
Sijoy, C. D.; Chaturvedi, S.
2015-05-01
A three-temperature (3T), unstructured-mesh, non-equilibrium radiation hydrodynamics (RHD) code has been developed for the simulation of intense thermal radiation or high-power laser driven radiative shock hydrodynamics in two-dimensional (2D) axisymmetric geometries. The governing hydrodynamics equations are solved using a compatible unstructured Lagrangian method based on a control volume differencing (CVD) scheme. A second-order predictor-corrector (PC) integration scheme is used for the temporal discretization of the hydrodynamics equations. For the radiation energy transport, a frequency-averaged gray model is used in which the flux-limited diffusion (FLD) approximation recovers the free-streaming limit of radiation propagation in optically thin regions. The RHD model allows the electrons and ions to have different temperatures. In addition, the electron and thermal radiation temperatures are assumed to be in non-equilibrium, so the thermal relaxation between electrons and ions and the coupling between the radiation and matter energies must be computed self-consistently. For this, the coupled flux-limited electron heat conduction and non-equilibrium radiation diffusion equations are solved simultaneously using an implicit, axisymmetric, cell-centered, monotonic, nonlinear finite volume (NLFV) scheme. In this paper, we describe the details of the 2D, 3T, non-equilibrium RHD code along with a suite of validation test problems that demonstrate the accuracy and performance of the algorithms. We have also conducted a performance analysis with different linearity-preserving interpolation schemes used for the evaluation of nodal values in the NLFV scheme. Finally, in order to demonstrate the full capability of the code, we present the simulation of laser-driven thin aluminum (Al) foil acceleration. The simulation results are found to be in good agreement
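The flux-limited diffusion idea mentioned above, recovering free streaming in optically thin regions, can be sketched with the widely used Levermore-Pomraning limiter. The code below is a generic illustration of such a limiter, not an extract from the RHD code described in the abstract.

```python
import numpy as np

def lp_limiter(R):
    """Levermore-Pomraning flux limiter lambda(R) = (coth R - 1/R)/R,
    where R = |grad E| / (sigma * E) measures how optically thin the
    region is.  lambda -> 1/3 gives classical diffusion; R*lambda -> 1
    caps the radiative flux at the free-streaming value c*E."""
    R = np.atleast_1d(np.asarray(R, dtype=float))
    safe = np.where(R < 1e-4, 1.0, R)               # avoid 0/0 at R = 0
    full = (1.0 / np.tanh(safe) - 1.0 / safe) / safe
    series = 1.0 / 3.0 - R**2 / 45.0                # Taylor series near 0
    return np.where(R < 1e-4, series, full)

lam = lp_limiter([1e-6, 1e3])
print(lam[0])        # ~1/3 (optically thick: diffusion limit)
print(1e3 * lam[1])  # ~1   (optically thin: free-streaming limit)
```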
Lubin, D.; Cutchin, D.; Conant, W.; Grassl, H.; Schmid, U.; Biselli, W.
1995-02-01
Longwave emission by the tropical western Pacific atmosphere has been measured at the ocean surface by a Fourier Transform Infrared (FTIR) spectroradiometer deployed aboard the research vessel John Vickers as part of the Central Equatorial Pacific Experiment. The instrument operated throughout a Pacific Ocean crossing, beginning on 7 March 1993 in Honiara, Solomon Islands, and ending on 29 March 1993 in Los Angeles, and recorded longwave emission spectra under atmospheres associated with sea surface temperatures ranging from 291.0 to 302.8 K. Precipitable water vapor abundances ranged from 1.9 to 5.5 column centimeters. Measured emission spectra (downwelling zenith radiance) covered the middle infrared (5-20 {mu}m) with one inverse centimeter spectral resolution. FTIR measurements made under an entirely clear field of view are compared with spectra generated by LOWTRAN 7 and MODTRAN 2, as well as downwelling flux calculated by the NCAR Community Climate Model (CCM-2) radiation code, using radiosonde profiles as input data for these calculations. In the spectral interval 800-1000 cm{sup -1}, these comparisons show a discrepancy between FTIR data and MODTRAN 2 having an overall variability of 6-7 mW m{sup -2} sr{sup -1} cm and a concave shape that may be related to the representation of water vapor continuum emission in MODTRAN 2. Another discrepancy appears in the spectral interval 1200-1300 cm{sup -1}, where MODTRAN 2 appears to overestimate zenith radiance by 5 mW m{sup -2} sr{sup -1} cm. These discrepancies appear consistently; however, they become only slightly larger at the highest water vapor abundances. Because these radiance discrepancies correspond to broadband (500-2000 cm{sup -1}) flux uncertainties of around 3 W m{sup -2}, there appear to be no serious inadequacies in the performance of MODTRAN 2 or LOWTRAN 7 at high atmospheric temperatures and water vapor abundances. 23 refs., 10 figs.
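The quoted link between a spectral radiance discrepancy and a broadband flux uncertainty can be checked on the back of an envelope: if the zenith discrepancy were isotropic over the downward hemisphere, the flux error would be roughly pi times the radiance error times the bandwidth. The numbers below come from the abstract (the midpoint of 6-7 mW m^-2 sr^-1 cm, over the 800-1000 cm^-1 window); the isotropy assumption is ours.

```python
import math

delta_L = 6.5e-3   # radiance discrepancy, W m^-2 sr^-1 per cm^-1 (midpoint of 6-7 mW)
bandwidth = 200.0  # cm^-1, the 800-1000 cm^-1 interval
delta_F = math.pi * delta_L * bandwidth   # assumes the error is isotropic
print(round(delta_F, 1))  # ~4.1 W m^-2, same order as the quoted ~3 W m^-2
```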
NASA Astrophysics Data System (ADS)
Kellerman, Adam; Shprits, Yuri; Podladchikova, Tatiana; Kondrashov, Dmitri
2016-04-01
The Versatile Electron Radiation Belt (VERB) code 2.0 models the dynamics of radiation-belt electron phase space density (PSD) in Earth's magnetosphere. Recently, a data-assimilative version of this code has been developed, which utilizes a split-operator Kalman-filtering approach to solve for electron PSD in terms of adiabatic invariants. A new dataset based on the TS07d magnetic field model is presented, which may be utilized for analysis of past geomagnetic storms and for initial and boundary conditions in simulations. Further, a data-assimilative forecast model is introduced, which can forecast electron PSD several days into the future, given a forecast Kp index. The model assimilates output from an empirical model that forecasts conditions at geosynchronous orbit. The model currently runs in real time, and the forecast is available online at http://rbm.epss.ucla.edu.
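The split-operator Kalman-filtering approach alternates a physics-model forecast with an analysis step that blends the forecast with observations. The analysis step alone can be sketched as below; the two-state "PSD grid", the covariances, and the observation operator are toy assumptions for illustration, not VERB's actual configuration.

```python
import numpy as np

def kalman_update(x_f, P_f, y, H, R):
    """Single Kalman analysis step: blend forecast x_f (covariance P_f)
    with observation y taken through operator H (noise covariance R)."""
    S = H @ P_f @ H.T + R               # innovation covariance
    K = P_f @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_a = x_f + K @ (y - H @ x_f)       # analysis state
    P_a = (np.eye(len(x_f)) - K @ H) @ P_f
    return x_a, P_a

# toy 2-state PSD grid, with only the first state observed
x_f = np.array([1.0, 2.0])
P_f = np.eye(2) * 0.5
H = np.array([[1.0, 0.0]])
R = np.array([[0.5]])
x_a, P_a = kalman_update(x_f, P_f, np.array([2.0]), H, R)
print(x_a)  # first state pulled halfway toward the observation
```

Because forecast and observation variances are equal here, the gain is 0.5 and the observed state moves halfway toward the measurement while the unobserved state is untouched.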
Parks, C.V.; Broadhead, B.L.; Hermann, O.W.; Tang, J.S.; Cramer, S.N.; Gauthey, J.C.; Kirk, B.L.; Roussin, R.W.
1988-07-01
This report provides a preliminary assessment of the computational tools and existing methods used to obtain radiation dose rates from shielded spent nuclear fuel and high-level radioactive waste (HLW). Particular emphasis is placed on analysis tools and techniques applicable to facilities/equipment designed for the transport or storage of spent nuclear fuel or HLW. Applications to cask transport, storage, and facility handling are considered. The report reviews the analytic techniques for generating appropriate radiation sources, evaluating the radiation transport through the shield, and calculating the dose at a desired point or surface exterior to the shield. Discrete ordinates, Monte Carlo, and point kernel methods for evaluating radiation transport are reviewed, along with existing codes and data that utilize these methods. A literature survey was employed to select a cadre of codes and data libraries to be reviewed. The selection process was based on specific criteria presented in the report. Separate summaries were written for several codes (or family of codes) that provided information on the method of solution, limitations and advantages, availability, data access, ease of use, and known accuracy. For each data library, the summary covers the source of the data, applicability of these data, and known verification efforts. Finally, the report discusses the overall status of spent fuel shielding analysis techniques and attempts to illustrate areas where inaccuracy and/or uncertainty exist. The report notes the advantages and limitations of several analysis procedures and illustrates the importance of using adequate cross-section data sets. Additional work is recommended to enable final selection/validation of analysis tools that will best meet the US Department of Energy's requirements for use in developing a viable HLW management system. 188 refs., 16 figs., 27 tabs.
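Of the three transport methods reviewed, the point kernel method is simple enough to sketch directly: the uncollided flux from a point source falls off as the inverse square of the distance and exponentially with the shield's optical thickness, with a buildup factor correcting for scattered radiation. This is a generic textbook sketch, not a reconstruction of any code reviewed in the report.

```python
import math

def point_kernel_flux(S, mu_t, r, buildup=1.0):
    """Point-kernel flux: S photons/s through a shield of optical
    thickness mu_t (mean free paths), at distance r (m).  The buildup
    factor accounts for scattered photons reaching the detector."""
    return S * buildup * math.exp(-mu_t) / (4.0 * math.pi * r**2)

# removing one mean free path of shielding raises the flux by a factor e
f_thick = point_kernel_flux(1e10, mu_t=2.0, r=1.0)
f_thin = point_kernel_flux(1e10, mu_t=1.0, r=1.0)
print(f_thin / f_thick)  # ~2.718
```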
Ellingson, R.G.; Wiscombe, W.J.; Murcray, D.; Smith, W.; Strauch, R.
1993-12-31
The InterComparison of Radiation Codes used in Climate Models (ICRCCM) found large differences among fluxes predicted by sophisticated radiation models, differences that could not be sorted out for lack of accurate atmospheric spectral radiation data measured simultaneously with the important radiative properties of the atmosphere. The team of scientists therefore proposed to remedy the situation by carrying out a comprehensive program of measurement and analysis called SPECTRE (Spectral Radiance Experiment). SPECTRE was to establish an absolute standard against which to compare models, and aimed to remove the hidden variables (unknown humidities, aerosols, etc.) which radiation modelers had invoked to excuse disagreements with observation. The data collected during SPECTRE were to form the test bed for the second phase of ICRCCM, namely verification and calibration of radiation codes used in climate models. This should lead to more accurate radiation models for use in parameterizing climate models, which in turn play a key role in the prediction of trace-gas greenhouse effects. This report summarizes the activities during the project's third year to meet stated objectives. The report is divided into three sections entitled: (1) SPECTRE Activities, (2) ICRCCM Activities, and (3) Summary Information. The section on SPECTRE activities summarizes the field portion of the project during 1991 and the data reduction/analysis performed by the various participants. The section on ICRCCM activities summarizes initial attempts to select data for distribution to ICRCCM participants and to compare observations with calculations, as will be done by the ICRCCM participants. The Summary Information section lists data concerning publications, presentations, graduate students supported, and post-doctoral appointments during the project.
Ellingson, R.G.; Baer, F.
1992-01-01
Research by the US Department of Energy (DOE) has shown that cloud radiative feedback is the single most important effect determining the magnitude of possible climatic responses to human activity. However, these effects are still not known at the levels needed for climate prediction. Consequently, DOE has launched a major initiative, the Atmospheric Radiation Measurement (ARM) Program, directed at improving the parameterization of the physics governing cloud and radiative processes in general circulation models (GCMs). One specific goal of ARM is to improve the treatment of radiative transfer in GCMs under clear-sky, general overcast, and broken cloud conditions. Our approach to developing the radiation model will be to test existing models in an iterative, predictive fashion. We will supply the Cloud and Radiation Testbed (CART) with a set of models to be compared with operationally observed data. The differences we find will lead to the development of new models to be tested with new data. Similarly, our GCM studies will use existing GCMs to study the radiation sensitivity problem. We anticipate that the outcome of this approach will provide both a better longwave radiative forcing algorithm and a better understanding of how longwave radiative forcing influences the equilibrium climate of the atmosphere.
Harrison, L.; Michalsky, J.
1991-03-13
Three separate tasks are included in the first year of the project. Two involve assembling data sets useful for testing radiation models in global climate modeling (GCM) codes, and the third concerns the development of advanced instrumentation for performing accurate spectral radiation measurements. Task 1: Three existing data sets have been merged for two locations, one in the wet northeastern US and a second in the dry western US. The data sets are meteorological data from the WBAN network, upper air data from the NCDC, and high quality solar radiation measurements from Albany, New York and Golden, Colorado. These represent test data sets for modelers developing radiation codes for GCM models. Task 2: Existing data are not quite adequate from a modeler's perspective without downwelling infrared data and surface albedo, or reflectance, data. Before the deployment of the first CART site in ARM, the authors are establishing this more complete set of radiation measurements at the Albany site, to operate until CART is operational. The authors will have the site running by April 1991, which will provide about one year's data from this location. They will coordinate their measurements with satellite overpasses and, to the extent possible, with radiosonde releases, in order that the data set be coincident in time. Task 3: Work has concentrated on the multiple filter instrument. The mechanical, optical, and software engineering for this instrument is complete, and the first field prototype is running at the Rattlesnake Mountain Observatory (RMO) test site. This instrument is performing well and is already delivering reliable and useful information.
Giantsoudi, D; Schuemann, J; Dowdell, S; Paganetti, H; Jia, X; Jiang, S
2014-06-15
Purpose: For proton radiation therapy, Monte Carlo simulation (MCS) methods are recognized as the gold-standard dose calculation approach. Although previously impractical due to limitations in available computing power, GPU-based applications now allow MCS of proton treatment fields to be performed in routine clinical use, on time scales comparable to those of conventional pencil-beam algorithms. This study focuses on validating the results of our GPU-based code (gPMC) against a fully featured proton therapy MCS code (TOPAS) for clinical patient cases. Methods: Two treatment sites were selected to provide clinical cases for this study: head-and-neck cases, whose anatomical and geometrical complexity (air cavities and density heterogeneities) makes dose calculation very challenging, and prostate cases, due to the higher proton energies used and the close proximity of the treatment target to sensitive organs at risk. Both gPMC and TOPAS were used to calculate 3-dimensional dose distributions for all patients in this study. Comparisons were performed based on target coverage indices (mean dose, V90 and D90) and gamma index distributions for 2% of the prescription dose and 2 mm. Results: For seven out of eight studied cases, mean target dose, V90 and D90 differed by less than 2% between TOPAS and gPMC dose distributions. Gamma index analysis for all prostate patients resulted in a passing rate of more than 99% of voxels in the target. Four out of five head-and-neck cases showed a gamma index passing rate for the target of more than 99%, the fifth having a passing rate of 93%. Conclusion: Our current work showed excellent agreement between our GPU-based MCS code and a fully featured proton therapy MC code for a group of dosimetrically challenging patient cases.
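The gamma index used for these comparisons combines a dose-difference tolerance with a distance-to-agreement tolerance, and a point passes when the combined metric is at most 1. A simplified 1D version is sketched below; clinical implementations operate on 3D dose grids, and the tolerances (2%/2 mm in the study) are parameters.

```python
import numpy as np

def gamma_1d(ref, evl, dx, dose_tol, dist_tol):
    """Simplified 1D gamma index: for each reference point, minimize
    sqrt((dose diff / dose_tol)^2 + (distance / dist_tol)^2) over all
    evaluated points.  A point passes when gamma <= 1."""
    x = np.arange(len(ref)) * dx
    out = np.empty(len(ref))
    for i in range(len(ref)):
        dose_term = (evl - ref[i]) / dose_tol
        dist_term = (x - x[i]) / dist_tol
        out[i] = np.sqrt(dose_term**2 + dist_term**2).min()
    return out

ref = np.array([0.0, 0.5, 1.0, 0.5, 0.0])
g = gamma_1d(ref, ref.copy(), dx=1.0, dose_tol=0.02, dist_tol=2.0)
print((g <= 1.0).mean())  # identical profiles pass everywhere -> 1.0
```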
Robert G. Ellingson
2004-09-28
One specific goal of the Atmospheric Radiation Measurement (ARM) program is to improve the treatment of radiative transfer in General Circulation Models (GCMs) under clear-sky, general overcast and broken cloud conditions. Our project was geared to contribute to this goal by attacking major problems associated with one of the dominant radiation components of the problem: longwave radiation. The primary long-term project objectives were to: (1) develop an optimum longwave radiation model for use in GCMs that has been calibrated with state-of-the-art observations for clear and cloudy conditions, and (2) determine the relative contribution of longwave radiative forcing in a GCM, using the improved algorithm, compared to shortwave radiative forcing, sensible heating, thermal advection and convection. The approach has been to build upon existing models in an iterative, predictive fashion. We focused on comparing calculations from a set of models with operationally observed data for clear, overcast and broken cloud conditions. The differences found through the comparisons, together with physical insights, have been used to develop new models, most of which have been tested with new data. Our initial GCM studies used existing GCMs to study the climate model-radiation sensitivity problem. Although this portion of our initial plans was curtailed midway through the project, we anticipate that the eventual outcome of this approach will provide both a better longwave radiative forcing algorithm and, through a better understanding of how longwave radiative forcing influences the model equilibrium climate, a path to improved climate prediction using this algorithm.
NASA Technical Reports Server (NTRS)
Egan, Michael P.; Leung, Chun Ming; Spagna, George F., Jr.
1988-01-01
The program solves the radiation transport problem in a dusty medium with one-dimensional planar, spherical or cylindrical geometry. It determines self-consistently the effects of multiple scattering, absorption, and re-emission of photons on the temperature of dust grains and the characteristics of the internal radiation field. The program can treat radiation field anisotropy, linear anisotropic scattering, and multi-grain components. The program output consists of the dust-temperature distribution, flux spectrum, surface brightness at each frequency and the observed intensities (involving a convolution with a telescope beam pattern).
Carver, D; Kost, S; Pickens, D; Price, R; Stabin, M
2014-06-15
Purpose: To assess the utility of optically stimulated luminescent (OSL) dosimeter technology in calibrating and validating a Monte Carlo radiation transport code for computed tomography (CT). Methods: Exposure data were taken using both a standard CT 100-mm pencil ionization chamber and a series of 150-mm OSL CT dosimeters. Measurements were made at system isocenter in air as well as in standard 16-cm (head) and 32-cm (body) CTDI phantoms at isocenter and at the 12 o'clock positions. Scans were performed on a Philips Brilliance 64 CT scanner at 100 and 120 kVp and 300 mAs with a nominal beam width of 40 mm. A radiation transport code to simulate the CT scanner conditions was developed using the GEANT4 physics toolkit. The imaging geometry and associated parameters were simulated for each ionization chamber and phantom combination. Simulated absorbed doses were compared both to CTDI{sub 100} values determined from the ion chamber and to CTDI{sub 100} values reported from the OSLs. The dose profiles from each simulation were also compared to the physical OSL dose profiles. Results: CTDI{sub 100} values reported by the ion chamber and OSLs are generally in good agreement (average percent difference of 9%) and provide a suitable way to calibrate doses obtained from simulation to real absorbed doses. Simulated and real CTDI{sub 100} values agree to within 10% or less, and the simulated dose profiles also predict the physical profiles reported by the OSLs. Conclusion: Ionization chambers are generally considered the standard for absolute dose measurements. However, OSL dosimeters may also serve as a useful tool, with the significant added benefit of assessing the radiation dose profile. This may offer an advantage to those developing simulations for assessing radiation dosimetry, such as verification of spatial dose distribution and beam width.
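CTDI{sub 100}, the quantity both the pencil chamber and the OSL strips report, is the single-rotation dose profile integrated over the central 100 mm and divided by the nominal beam width. A minimal sketch with an idealized rectangular profile (our assumption, chosen so the answer is checkable by hand):

```python
import numpy as np

def ctdi100(z_mm, dose_mGy, beam_width_mm):
    """CTDI100: integrate the dose profile over |z| <= 50 mm (trapezoid
    rule) and divide by the nominal beam width n*T."""
    m = np.abs(z_mm) <= 50.0
    z, d = z_mm[m], dose_mGy[m]
    integral = np.sum(0.5 * (d[1:] + d[:-1]) * np.diff(z))
    return integral / beam_width_mm

# idealized 40 mm-wide rectangular profile, 10 mGy inside the beam
z = np.linspace(-75.0, 75.0, 3001)
d = np.where(np.abs(z) <= 20.0, 10.0, 0.0)
print(round(ctdi100(z, d, beam_width_mm=40.0), 1))  # ~10.0 mGy
```

For a rectangular profile fully inside the 100 mm window the integral is just width times dose, so CTDI100 equals the in-beam dose; real profiles have scatter tails that make the two differ.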
Li, Yong Gang; Yang, Yang; Short, Michael P.; Ding, Ze Jun; Zeng, Zhi; Li, Ju
2015-01-01
SRIM-like codes have limitations in describing general 3D geometries when modeling radiation displacements and damage in nanostructured materials. A universal, computationally efficient and massively parallel 3D Monte Carlo code, IM3D, has been developed with excellent parallel scaling performance. IM3D is based on fast indexing of scattering integrals and the SRIM stopping power database, and allows the user a choice of Constructive Solid Geometry (CSG) or Finite Element Triangle Mesh (FETM) methods for constructing 3D shapes and microstructures. For 2D films and multilayers, IM3D perfectly reproduces SRIM results, and can be ∼10{sup 2} times faster in serial execution and >10{sup 4} times faster using parallel computation. For 3D problems, it provides a fast approach for analyzing the spatial distributions of primary displacements and defect generation under ion irradiation. Herein we also provide a detailed discussion of our open-source collision cascade physics engine, revealing the true meaning and limitations of the "Quick Kinchin-Pease" and "Full Cascades" options. The issues of femtosecond to picosecond timescales in defining displacement versus damage, and the limitation of the displacements per atom (DPA) unit in quantifying radiation damage (such as its inadequacy in quantifying the degree of chemical mixing), are discussed. PMID:26658477
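The "Quick Kinchin-Pease" option discussed above is, in SRIM-style codes, generally an implementation of the NRT displacement formula, which converts damage energy into a displacement count. A minimal version is shown below; the 40 eV threshold is a common default, not a value taken from the paper.

```python
def nrt_displacements(T_dam_eV, E_d_eV=40.0):
    """NRT (modified Kinchin-Pease) displacement count from damage
    energy T_dam, with threshold displacement energy E_d."""
    if T_dam_eV < E_d_eV:
        return 0.0                              # no stable displacement
    if T_dam_eV < 2.5 * E_d_eV:
        return 1.0                              # single-displacement regime
    return 0.8 * T_dam_eV / (2.0 * E_d_eV)      # cascade regime

print(nrt_displacements(10.0))    # 0.0
print(nrt_displacements(50.0))    # 1.0
print(nrt_displacements(1000.0))  # 10.0
```

Summing this count over all cascades and dividing by the number of atoms in the irradiated volume gives the DPA value whose limitations the paper discusses.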
NASA Astrophysics Data System (ADS)
Dattoli, G.; Migliorati, M.; Schiavi, A.
2007-05-01
Coherent synchrotron radiation (CSR) is one of the main problems limiting the performance of high-intensity electron accelerators. The complexity of the physical mechanisms underlying the onset of instabilities due to CSR demands accurate descriptions, capable of including the large number of features of an actual accelerating device. A code devoted to the analysis of these types of problems should be fast and reliable, conditions that are rarely achieved at the same time. In the past, codes based on Lie algebraic techniques have been very efficient for treating transport problems in accelerators, and the extension of these methods to the non-linear case is ideally suited to treating CSR instability problems. We report on the development of a numerical code, based on the solution of the Vlasov equation, with the inclusion of non-linear contributions due to wake field effects. The proposed solution method exploits an algebraic technique based on exponential operators. We show that the integration procedure is capable of reproducing the onset of instability and the effects associated with bunching mechanisms leading to the growth of the instability itself. In addition, considerations on the threshold of the instability are also developed.
Morgan C. White
2000-07-01
The fundamental motivation for the research presented in this dissertation was the need to develop a more accurate prediction method for characterization of mixed radiation fields around medical electron accelerators (MEAs). Specifically, a model is developed for simulation of neutron and other particle production from photonuclear reactions and incorporated in the Monte Carlo N-Particle (MCNP) radiation transport code. This extension of the capability within the MCNP code provides for more accurate assessment of the mixed radiation fields. The Nuclear Theory and Applications group of the Los Alamos National Laboratory has recently provided first-of-a-kind evaluated photonuclear data for a select group of isotopes. These data provide the reaction probabilities as functions of incident photon energy, with angular and energy distribution information for all reaction products. The availability of these data is the cornerstone of the new methodology for state-of-the-art mutually coupled photon-neutron transport simulations. The dissertation includes details of the model development and implementation necessary to use the new photonuclear data within MCNP simulations. A new data format has been developed to include tabular photonuclear data. Data are processed from the Evaluated Nuclear Data Format (ENDF) to the new class ''u'' A Compact ENDF (ACE) format using a standalone processing code. MCNP modifications have been completed to enable Monte Carlo sampling of photonuclear reactions. Note that both neutron and gamma production are included in the present model. The new capability has been subjected to extensive verification and validation (V&V) testing. Verification testing has established the expected basic functionality. Two validation projects were undertaken. First, comparisons were made to benchmark data from the literature. These calculations demonstrate the accuracy of the new data and transport routines to better than 25 percent. Second, the ability to
Lee, K. T.
2007-02-12
The long-term human exploration goals that NASA has embraced require an understanding of the primary radiation and secondary particle production under a variety of environmental conditions. In order to perform accurate transport simulations for the incident particles found in the space environment, accurate nucleus-nucleus inelastic event generators are needed, and NASA is funding their development. For the first time, NASA is including the radiation problem in the design of the next manned exploration vehicle. The NASA-funded FLUER-S (FLUKA Executing Under ROOT-Space) project has several goals beyond the improvement of the internal nuclear physics simulations. These include making FLUKA more user-friendly. Several tools have been developed to simplify the use of FLUKA without compromising its accuracy or versatility. Among these tools are a general source input, distributed computing capability, simplified geometry input, geometry and event visualization, and standard FLUKA scoring output analysis using a ROOT GUI. In addition to describing these tools, we show how they have been used for space radiation environment data analysis in MARIE, IVCPDS, and EVCPDS. Similar analyses can be performed for future radiation measurement detectors before they are deployed, in order to optimize their design. These tools can also be used in the design of nuclear-based power systems on manned exploration vehicles and planetary surfaces. In addition to these space applications, the simulations are being used to support accelerator-based experiments such as the cross-section measurements being performed at HIMAC and at NSRL at BNL.
Hayes, J C; Norman, M
1999-10-28
This report details an investigation into the efficacy of two approaches to solving the radiation diffusion equation within a radiation hydrodynamic simulation. Because leading-edge scientific computing platforms have evolved from large single-node vector processors to parallel aggregates containing tens to thousands of individual CPUs, the ability of an algorithm to maintain high compute efficiency when distributed over a large array of nodes is critically important. The viability of an algorithm thus hinges upon the tripartite question of numerical accuracy, total time to solution, and parallel efficiency.
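The parallel-efficiency leg of that tripartite question can be made concrete: for a fixed problem size (strong scaling), speedup is serial time over parallel time, and efficiency is speedup per processor. A small helper, with invented timings for illustration:

```python
def strong_scaling(t_serial, t_parallel, n_procs):
    """Strong-scaling speedup and parallel efficiency for a fixed-size problem."""
    speedup = t_serial / t_parallel
    return speedup, speedup / n_procs

# e.g. a diffusion solve taking 100 s on 1 CPU and 2 s on 64 CPUs
s, e = strong_scaling(100.0, 2.0, 64)
print(s, round(e, 3))  # 50.0 0.781
```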
Giantsoudi, Drosoula; Schuemann, Jan; Jia, Xun; Dowdell, Stephen; Jiang, Steve; Paganetti, Harald
2015-03-21
Monte Carlo (MC) methods are recognized as the gold standard for dose calculation; however, they have not yet replaced analytical methods due to their lengthy calculation times. GPU-based applications allow MC dose calculations to be performed on time scales comparable to those of conventional analytical algorithms. This study focuses on validating our GPU-based MC code for proton dose calculation (gPMC) against an experimentally validated multi-purpose MC code (TOPAS) and comparing their performance for clinical patient cases. Clinical cases from five treatment sites were selected, covering the full range from very homogeneous patient geometries (liver) to patients with high geometrical complexity (air cavities and density heterogeneities in head-and-neck and lung patients), and from short beam range (breast) to large beam range (prostate). Both gPMC and TOPAS were used to calculate 3D dose distributions for all patients. Comparisons were performed based on target coverage indices (mean dose, V95, D98, D50, D02) and gamma index distributions. Dosimetric indices differed by less than 2% between TOPAS and gPMC dose distributions for most cases. Gamma index analysis with a 1%/1 mm criterion resulted in a passing rate of more than 94% of all patient voxels receiving more than 10% of the mean target dose, for all patients except the prostate cases. Although clinically insignificant, gPMC systematically underestimated target dose for prostate cases by 1-2% compared to TOPAS. Correspondingly, the gamma index analysis with the 1%/1 mm criterion failed for most beams for this site, while for the 2%/1 mm criterion passing rates of more than 94.6% of all patient voxels were observed. For the same initial number of simulated particles, calculation time for a single beam for a typical head and neck patient plan decreased from 4 CPU hours per million particles (2.8-2.9 GHz Intel X5600) for TOPAS to 2.4 s per million particles (NVIDIA TESLA C2075) for gPMC. Excellent agreement was
Ellingson, R.G.; Baer, F.
1993-12-31
This report summarizes the activities of our group to meet our stated objectives. The report is divided into sections entitled: Radiation Model Testing Activities, General Circulation Model Testing Activities, Science Team Activities, and Publications, Presentations and Meetings. The section on Science Team Activities summarizes our participation with the science team to further advance the observation and modeling programs. Appendix A lists the graduate students and post-doctoral appointees supported during the project. Reports on the activities during each of the first two years are included as Appendix B. Significant progress has been made in: determining the ability of line-by-line radiation models to calculate the downward longwave flux at the surface; determining the uncertainties in calculating the downwelling radiance and flux at the surface associated with the use of different proposed profiling techniques; intercomparing clear-sky radiance and flux observations with calculations from radiation codes from different climate models; determining the uncertainties associated with estimating N* from surface longwave flux observations; and determining the sensitivity of model calculations to different formulations of the effects of finite-sized clouds.
Compressible Astrophysics Simulation Code
2007-07-18
This is an astrophysics simulation code involving a radiation diffusion module developed at LLNL coupled to compressible hydrodynamics and adaptive mesh infrastructure developed at LBNL. One intended application is to neutrino diffusion in core collapse supernovae.
Monte Carlo Simulation of a 6 MV X-Ray Beam for Open and Wedge Radiation Fields, Using GATE Code
Bahreyni-Toosi, Mohammad-Taghi; Nasseri, Shahrokh; Momennezhad, Mahdi; Hasanabadi, Fatemeh; Gholamhosseinian, Hamid
2014-01-01
The aim of this study is to provide a control software system, based on Monte Carlo simulation, for calculating the dosimetric parameters of standard and wedge radiation fields. GATE version 6.1 (OpenGATE Collaboration) was used to simulate a compact 6 MV linear accelerator system. In order to accelerate the calculations, the phase-space technique and cluster computing (Condor version 7.2.4, Condor Team, University of Wisconsin–Madison) were used. Dosimetric parameters used in treatment planning systems for the standard and wedge radiation fields (10 cm × 10 cm to 30 cm × 30 cm and a 60° wedge), including the percentage depth dose and dose profiles, were obtained by both computational and experimental methods. The gamma index with a 3%/3 mm criterion was applied to compare the calculated and measured results; almost all calculated data points satisfied this criterion. Based on the good agreement between calculated and measured results obtained for various radiation fields in this study, GATE may be used as a useful tool for quality control or pretreatment verification procedures in radiotherapy. PMID:25426430
NASA Astrophysics Data System (ADS)
Valdivia, Valeska; Hennebelle, Patrick
2014-11-01
Context. Ultraviolet radiation plays a crucial role in molecular clouds. Radiation and matter are tightly coupled, and their interplay influences the physical and chemical properties of the gas. In particular, modeling the radiation propagation requires calculating column densities, which can be numerically expensive in high-resolution multidimensional simulations. Aims: Developing fast methods for estimating column densities is mandatory if we are interested in the dynamical influence of the radiative transfer. In particular, we focus on the effect of UV screening on the dynamics and on the statistical properties of molecular clouds. Methods: We have developed a tree-based method for a fast estimate of column densities, implemented in the adaptive mesh refinement code RAMSES. We performed numerical simulations using this method in order to analyze the influence of the screening on clump formation. Results: We find that the accuracy of the tree-based method for the extinction is better than 10%, while the relative error for the column density can be much larger. We describe the implementation of a method based on precalculating the geometrical terms that noticeably reduces the calculation time. To study the influence of the screening on the statistical properties of molecular clouds, we present the probability distribution function of the gas, the associated temperature per density bin, and the mass spectra for different density thresholds. Conclusions: The tree-based method is fast and accurate enough to be used during numerical simulations, since no communication is needed between CPUs when using a fully threaded tree; it is therefore well suited to parallel computing. We show that the screening of far-UV radiation mainly affects the dense gas, thereby favoring low temperatures and affecting the fragmentation. We show that when we include the screening, more structures are formed with higher densities in comparison to the case that does not include this effect. We
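Per ray, the quantity being estimated is just density integrated along the line of sight; a naive single-ray sketch of what the tree-based method approximates (the N_H-to-A_V conversion factor is a commonly adopted value, an assumption here, not necessarily the one used in RAMSES):

```python
import numpy as np

def column_density(density, cell_size):
    """Cumulative column density N(x) [cm^-2] along one ray through a
    stack of cells with number density [cm^-3] and size [cm]."""
    return np.cumsum(density * cell_size)

def extinction(N_H, sigma=5.348e-22):
    """Visual extinction A_V = sigma * N_H, using the common conversion
    A_V ~ N_H / 1.87e21 cm^-2 (an assumed value for illustration)."""
    return sigma * N_H

# ten 1-pc cells of uniform density 100 cm^-3
PC = 3.086e18                          # parsec in cm
N = column_density(np.full(10, 100.0), PC)
A_V = extinction(N[-1])                # ~1.65 magnitudes of visual extinction
```

The tree-based method trades the per-ray exactness of such sums for a directionally averaged estimate that needs no inter-CPU communication.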
NASA Astrophysics Data System (ADS)
Espy, P. J.; Daae, M.; Shprits, Y.
2010-12-01
The correlation between the inner edge of the outer radiation belt phase space density (PSD) and the plasmapause location (Lpp) is investigated using reanalysis. A large data set is applied for the statistical analysis, using data from 1990-1991 from the CRRES satellite, GEO 1989, GPS-ns18, and Akebono. These data are incorporated into the reanalysis by means of a Kalman filter with the UCLA 1-D VERB code. The result is a continuous radial and temporal distribution of the PSD from L*=3 to L*=7. The innovation vector of the reconstructed PSD can give us information about regions where local loss or source processes are dominating. We analyze both the PSD and the innovation vector by binning them into slots of Dst and Kp values. This was done by finding the times when the Dst (Kp) fell within each bin of size 20 nT (1), spanning 10 nT to -130 nT (1 to 8). The PSD and innovation vector were then averaged over each of those times. The result shows a good correlation between the location of the inner edge of the outer radiation belt in the PSD and the location of the plasmapause, which is consistent with previous observations. The boundary between the inner edge of the radiation belt and the Lpp becomes sharper, and the radiation belt becomes thinner, during times of high geomagnetic activity. The innovation vector shows that the inner edge of the source region also lines up well with the Lpp, revealing a competition between loss and source processes during active times. This study also illustrates how data assimilation in the radiation belts can be used to understand the underlying processes of acceleration and loss in the inner magnetosphere.
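A scalar Kalman update shows where the innovation vector comes from; this is a generic textbook sketch, not the UCLA 1-D VERB implementation, and the numbers are illustrative:

```python
def kalman_update(x_pred, P_pred, z, R):
    """One scalar Kalman-filter analysis step.

    x_pred, P_pred : model forecast (e.g. of PSD) and its error variance
    z, R           : observation and its error variance
    The innovation z - x_pred is what, mapped over L*, flags regions
    where local source (positive) or loss (negative) terms dominate.
    """
    innovation = z - x_pred
    K = P_pred / (P_pred + R)          # Kalman gain
    x_an = x_pred + K * innovation     # analysis state
    P_an = (1.0 - K) * P_pred          # analysis variance
    return x_an, P_an, innovation

x_an, P_an, innov = kalman_update(1.0, 1.0, 2.0, 1.0)
```

With equal forecast and observation variances the analysis splits the difference, and the positive innovation would indicate a local source at that L*.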
NASA Technical Reports Server (NTRS)
Soden, B.; Tjemkes, S.; Schmetz, J.; Saunders, R.; Bates, J.; Ellingson, B.; Engelen, R.; Garand, L.; Jackson, D.; Jedlovec, G.
1999-01-01
An intercomparison of radiation codes used in retrieving upper tropospheric humidity (UTH) from observations in the v2 (6.3 microns) water vapor absorption band was performed. This intercomparison is one part of a coordinated effort within the GEWEX Water Vapor Project (GVaP) to assess our ability to monitor the distribution and variations of upper tropospheric moisture from space-borne sensors. A total of 23 different codes, ranging from detailed line-by-line (LBL) models, to coarser resolution narrow-band (NB) models, to highly-parameterized single-band (SB) models participated in the study. Forward calculations were performed using a carefully selected set of temperature and moisture profiles chosen to be representative of a wide range of atmospheric conditions. The LBL model calculations exhibited the greatest consistency with each other, typically agreeing to within 0.5 K in terms of the equivalent blackbody brightness temperature (T(sub b)). The majority of NB and SB models agreed to within +/- 1 K of the LBL models, although a few older models exhibited systematic T(sub b) biases in excess of 2 K. A discussion of the discrepancies between various models, their association with differences in model physics (e.g. continuum absorption), and their implications for UTH retrieval and radiance assimilation is presented.
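The equivalent blackbody brightness temperature T(sub b) used as the comparison metric is obtained by inverting the Planck function at the channel frequency; a monochromatic round-trip sketch (it ignores the finite spectral response of a real 6.3-micron channel):

```python
import math

H = 6.62607015e-34    # Planck constant [J s]
C = 2.99792458e8      # speed of light [m/s]
KB = 1.380649e-23     # Boltzmann constant [J/K]

def planck_radiance(nu, T):
    """Spectral radiance B_nu(T) [W m^-2 sr^-1 Hz^-1]."""
    return (2.0 * H * nu**3 / C**2) / math.expm1(H * nu / (KB * T))

def brightness_temperature(nu, L):
    """Invert the Planck function: temperature of a blackbody that
    would emit radiance L at frequency nu."""
    return (H * nu / KB) / math.log1p(2.0 * H * nu**3 / (C**2 * L))

nu = C / 6.3e-6                       # 6.3 micron water-vapor band
L = planck_radiance(nu, 250.0)
Tb = brightness_temperature(nu, L)    # round trip recovers 250 K
```

A 0.5 K spread among LBL models therefore corresponds to a small fractional radiance difference at upper-tropospheric temperatures.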
Sarrut, David; Bardiès, Manuel; Marcatili, Sara; Mauxion, Thibault; Boussion, Nicolas; Freud, Nicolas; Létang, Jean-Michel; Jan, Sébastien; Maigne, Lydia; Perrot, Yann; Pietrzyk, Uwe; Robert, Charlotte; and others
2014-06-15
In this paper, the authors review the applicability of the open-source GATE Monte Carlo simulation platform, based on the GEANT4 toolkit, for radiation therapy and dosimetry applications. The many applications of GATE for state-of-the-art radiotherapy simulations are described, including external beam radiotherapy, brachytherapy, intraoperative radiotherapy, hadrontherapy, molecular radiotherapy, and in vivo dose monitoring. Investigations that have been performed using GEANT4 only are also mentioned to illustrate the potential of GATE. The very practical feature of GATE, making it easy to model both a treatment and an imaging acquisition within the same framework, is emphasized. The computational times associated with several applications are provided to illustrate the practical feasibility of the simulations using current computing facilities.
Sarrut, David; Bardiès, Manuel; Boussion, Nicolas; Freud, Nicolas; Jan, Sébastien; Létang, Jean-Michel; Loudos, George; Maigne, Lydia; Marcatili, Sara; Mauxion, Thibault; Papadimitroulas, Panagiotis; Perrot, Yann; Pietrzyk, Uwe; Robert, Charlotte; Schaart, Dennis R; Visvikis, Dimitris; Buvat, Irène
2014-06-01
In this paper, the authors review the applicability of the open-source GATE Monte Carlo simulation platform, based on the GEANT4 toolkit, for radiation therapy and dosimetry applications. The many applications of GATE for state-of-the-art radiotherapy simulations are described, including external beam radiotherapy, brachytherapy, intraoperative radiotherapy, hadrontherapy, molecular radiotherapy, and in vivo dose monitoring. Investigations that have been performed using GEANT4 only are also mentioned to illustrate the potential of GATE. The very practical feature of GATE, making it easy to model both a treatment and an imaging acquisition within the same framework, is emphasized. The computational times associated with several applications are provided to illustrate the practical feasibility of the simulations using current computing facilities. PMID:24877844
NASA Astrophysics Data System (ADS)
Vandervort, Robert; Elgin, Laura; Farag, Ebraheem; Mussack, Katie; Baumgaertel, Jessica Ann; Keiter, Paul; Klein, Sallee; Orban, Christopher; Drake, R. Paul
2015-11-01
A sound speed discrepancy exists between solar models and data collected using helioseismology. The discrepancy is most pronounced at the base of the convective zone (CZ) for otherwise consistent solar models. One potential solution is that the opacity models for important elements such as carbon, nitrogen, and oxygen are incomplete. At these high-energy-density conditions, few relevant opacity measurements exist for comparison with the models. Only relatively recently have user facilities been able to reach temperatures and densities that resemble the convective zone base. Our long-term goal is to determine the opacities of carbon, nitrogen, and oxygen at the relevant conditions. Preliminary testing has occurred at the Omega Laser Facility in Rochester, New York. Presented are the results of the shots taken on April 22, 2015. A half hohlraum was used to drive a supersonic radiation front through a dominantly carbon (CRF) foam. These results are compared to diffusive xRage simulations. (LA-UR-15-25495)
NASA Astrophysics Data System (ADS)
Catalano, M.; Agosteo, S.; Moretti, R.; Andreoli, S.
2007-06-01
The principle of optimisation of the EURATOM 97/43 directive foresees that for all medical exposure of individuals for radiotherapeutic purposes, exposures of target volumes shall be individually planned, taking into account that doses to non-target volumes and tissues shall be as low as reasonably achievable and consistent with the intended radiotherapeutic purpose of the exposure. Treatment optimisation has to be carried out especially in nonconventional radiotherapeutic procedures, such as Intra Operative Radiation Therapy (IORT) with a mobile dedicated LINear ACcelerator (LINAC), which does not make use of a Treatment Planning System. IORT is carried out with electron beams and refers to the application of radiation during a surgical intervention, after the removal of a neoplastic mass; it can also be used as a one-time/stand-alone treatment in initial cancer of small volume. IORT foresees a single session and a single beam only; therefore it is necessary to use protection systems (disks) temporarily positioned between the target volume and the underlying tissues, along the beam axis. A single high-Z shielding disk is used to stop the electrons of the beam at a certain depth and protect the tissues located below. Electron backscatter produces an enhancement in the dose above the disk, and this can be reduced if a second low-Z disk is placed above the first. Therefore two protection disks are used in clinical application. On the other hand, the dose enhancement at the interface of the high-Z disk and the target, due to backscattered radiation, can be exploited to improve the uniformity in the treatment of thicker target volumes. Furthermore, the dose above disks of different-Z materials has to be evaluated in order to find the optimal combination of shielding disks that both protects the underlying tissues and yields the most uniform dose distribution in target volumes of different thicknesses. The dose enhancement can be evaluated using the electron
Outside the protective cocoon of Earth's atmosphere, the universe is full of harmful radiation. Astronauts who live and work in space are exposed not only to ultraviolet rays but also to space radi...
Choi, Ena; Ostriker, Jeremiah P.; Naab, Thorsten; Johansson, Peter H.
2012-08-01
We study the growth of black holes (BHs) in galaxies using three-dimensional smoothed particle hydrodynamic simulations with new implementations of the momentum mechanical feedback, and restriction of accreted elements to those that are gravitationally bound to the BH. We also include the feedback from the X-ray radiation emitted by the BH, which heats the surrounding gas in the host galaxies, and adds radial momentum to the fluid. We perform simulations of isolated galaxies and merging galaxies and test various feedback models with the new treatment of the Bondi radius criterion. We find that overall the BH growth is similar to what has been obtained by earlier works using the Springel, Di Matteo, and Hernquist algorithms. However, the outflowing wind velocities and mechanical energy emitted by winds are considerably higher (v_w ≈ 1000-3000 km s^-1) compared to the standard thermal feedback model (v_w ≈ 50-100 km s^-1). While the thermal feedback model emits only 0.1% of BH released energy in winds, the momentum feedback model emits more than 30% of the total energy released by the BH in winds. In the momentum feedback model, the degree of fluctuation in both radiant and wind output is considerably larger than in standard treatments. We check that the new model of BH mass accretion agrees with analytic results for the standard Bondi problem.
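The Bondi rate underlying the standard accretion criterion is Mdot = 4*pi*G^2*M^2*rho/cs^3; a one-line sketch (SI units; the boost factor alpha is a common implementation detail in SPH codes, and the paper's restriction to gravitationally bound gas is not captured by the formula alone):

```python
import math

G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]

def bondi_rate(M_bh, rho, cs, alpha=1.0):
    """Bondi accretion rate Mdot = alpha * 4 pi G^2 M^2 rho / cs^3.

    M_bh: BH mass [kg], rho: ambient gas density [kg m^-3],
    cs: sound speed [m s^-1]; alpha is an assumed boost factor."""
    return alpha * 4.0 * math.pi * G**2 * M_bh**2 * rho / cs**3

# the rate scales as M^2: doubling the BH mass quadruples Mdot
ratio = bondi_rate(2.0, 1.0, 1.0) / bondi_rate(1.0, 1.0, 1.0)
```

The M^2 scaling is why small differences in the accretion criterion compound rapidly during the growth phase.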
NASA Technical Reports Server (NTRS)
1988-01-01
American Bar Codes, Inc. developed special bar code labels for inventory control of space shuttle parts and other space system components. ABC labels are made in a company-developed anodized-aluminum process and consecutively marked with bar code symbology and human-readable numbers. They offer extreme abrasion resistance and indefinite resistance to ultraviolet radiation, capable of withstanding 700-degree temperatures without deterioration and up to 1400 degrees with special designs. They offer high resistance to salt spray, cleaning fluids, and mild acids. ABC is now producing these bar code labels commercially for industrial customers who also need labels to resist harsh environments.
Cramer, S.N.
1984-01-01
The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described.
Clinical coding. Code breakers.
Mathieson, Steve
2005-02-24
--The advent of payment by results has seen the role of the clinical coder pushed to the fore in England. --Examinations for a clinical coding qualification began in 1999. In 2004, approximately 200 people took the qualification. --Trusts are attracting people to the role by offering training from scratch or through modern apprenticeships. PMID:15768716
NASA Technical Reports Server (NTRS)
Koontz, Steve; Atwell, William; Reddell, Brandon; Rojdev, Kristina
2010-01-01
Analysis of both satellite and surface neutron monitor data demonstrates that the widely utilized Exponential model of solar particle event (SPE) proton kinetic energy spectra can seriously underestimate SPE proton flux, especially at the highest kinetic energies. The more recently developed Band model produces better agreement with neutron monitor data for ground level events (GLEs) and is believed to be considerably more accurate at high kinetic energies. Here, we report the results of modeling and simulation studies in which the radiation transport code FLUKA (FLUktuierende KAskade) is used to determine the changes in total ionizing dose (TID) and single-event effect (SEE) environments behind aluminum, polyethylene, carbon, and titanium shielding masses when the assumed form (i.e., Band or Exponential) of the solar particle event (SPE) kinetic energy spectra is changed. FLUKA simulations are fully three-dimensional, with an isotropic particle flux incident on a concentric spherical shell shielding mass and detector structure. The effects of both energetic primary protons penetrating the shield mass and secondary particle showers caused by energetic primary protons colliding with shielding mass nuclei are reported. Our results, in agreement with previous studies, show that use of the Exponential form of the event
Spirydovich, S; Huq, M
2014-06-15
Purpose: The improvement of quality in healthcare can be assessed by Failure Mode and Effects Analysis (FMEA). In radiation oncology, FMEA, as applied to the billing CPT code 77336, can improve both charge capture and, most importantly, the quality of the performed services. Methods: We created an FMEA table for the process performed under CPT code 77336. For a given process step, each member of the assembled team (physicist, dosimetrist, and therapist) independently assigned numerical values for probability of occurrence (O, 1–10), severity (S, 1–10), and probability of detection (D, 1–10) for every failure mode cause and effect combination. The risk priority number, RPN, was then calculated as the product of O, S, and D, from which an average RPN was calculated for each combination mentioned above. A fault tree diagram, with each process sorted into 6 categories, was created with linked RPNs. For processes with high RPN, recommended actions were assigned. Two separate record-and-verify (R&V) systems (Lantis and the EMR-based ARIA) were considered. Results: We identified 9 potential failure modes and 19 corresponding potential causes, all resulting in an unjustified 77336 charge and compromised quality of care. In Lantis, the range of RPN values was 24.5–110.8, and of S values, 2–10. The highest-ranking RPN of 110.8 came from the failure mode described as “end-of-treatment check not done before the completion of treatment”, and the highest S value of 10 (RPN=105) from “overrides not checked”. For the same failure modes, within the ARIA electronic environment with its additional controls, RPN values were significantly lower (44.3 for the missing end-of-treatment check and 20.0 for overrides not checked). Conclusion: Our work has shown that missed charge capture was also accompanied by some services not being performed. Absence of such necessary services may result in sub-optimal quality of care rendered to patients.
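The RPN arithmetic is a straight product of the three scores, averaged over raters; a sketch with hypothetical team scores (the numbers below are invented for illustration, not taken from the study):

```python
from statistics import mean

def rpn(o, s, d):
    """Risk priority number: occurrence x severity x detectability,
    each scored 1-10 by a single rater."""
    return o * s * d

def average_rpn(scores):
    """Average the per-rater RPNs for one failure-mode cause/effect pair."""
    return mean(rpn(o, s, d) for o, s, d in scores)

# hypothetical (O, S, D) scores from physicist, dosimetrist, therapist
team = [(5, 9, 3), (4, 10, 2), (5, 8, 3)]
avg = average_rpn(team)
```

Averaging the products (rather than multiplying averaged scores) preserves each rater's joint judgment of how likely, severe, and detectable a failure is.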
Combustion chamber analysis code
NASA Technical Reports Server (NTRS)
Przekwas, A. J.; Lai, Y. G.; Krishnan, A.; Avva, R. K.; Giridharan, M. G.
1993-01-01
A three-dimensional, time dependent, Favre averaged, finite volume Navier-Stokes code has been developed to model compressible and incompressible flows (with and without chemical reactions) in liquid rocket engines. The code has a non-staggered formulation with generalized body-fitted-coordinates (BFC) capability. Higher order differencing methodologies such as MUSCL and Osher-Chakravarthy schemes are available. Turbulent flows can be modeled using any of the five turbulent models present in the code. A two-phase, two-liquid, Lagrangian spray model has been incorporated into the code. Chemical equilibrium and finite rate reaction models are available to model chemically reacting flows. The discrete ordinate method is used to model effects of thermal radiation. The code has been validated extensively against benchmark experimental data and has been applied to model flows in several propulsion system components of the SSME and the STME.
Ellingson, R.G.; Baer, F.
1998-09-01
DOE has launched a major initiative -- the Atmospheric Radiation Measurements (ARM) Program -- directed at improving the parameterization of the physics governing cloud and radiative processes in general circulation models (GCMs). One specific goal of ARM is to improve the treatment of radiative transfer in GCMs under clear-sky, general overcast and broken cloud conditions. In 1990, the authors proposed to contribute to this goal by attacking major problems connected with one of the dominant radiation components of the problem -- longwave radiation. In particular, their long-term research goals are to: develop an optimum longwave radiation model for use in GCMs that has been calibrated with state-of-the-art observations, assess the impact of the longwave radiative forcing in a GCM, determine the sensitivity of a GCM to the radiative model used in it, and determine how the longwave radiative forcing contributes relatively when compared to shortwave radiative forcing, sensible heating, thermal advection and expansion.
NASA Astrophysics Data System (ADS)
Gersho, Allen
1990-05-01
Recent advances in algorithms and techniques for speech coding now permit high-quality voice reproduction at remarkably low bit rates. The advent of powerful single-chip signal processors has made it cost effective to implement these new and sophisticated speech coding algorithms for many important applications in voice communication and storage. Some of the main ideas underlying the algorithms of major interest today are reviewed. The concept of removing redundancy by linear prediction is reviewed, first in the context of predictive quantization or DPCM. Then linear predictive coding, adaptive predictive coding, and vector quantization are discussed. The concepts of excitation coding via analysis-by-synthesis, vector sum excitation codebooks, and adaptive postfiltering are explained. The main ideas of vector excitation coding (VXC), or code-excited linear prediction (CELP), are presented. Finally, low-delay VXC coding and phonetic segmentation for VXC are described.
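The redundancy-removal idea behind DPCM can be made concrete with a first-order predictor and a crude uniform quantizer; this is a toy stand-in for the adaptive predictors and vector codebooks the review discusses, with the predictor coefficient and step size chosen arbitrarily:

```python
import math

STEP = 0.1   # quantizer step size (assumed)
A = 0.9      # first-order predictor coefficient (assumed)

def dpcm_encode(signal):
    """Quantize the prediction residual e[n] = x[n] - A*xhat[n-1].
    The encoder tracks the decoder's reconstruction so both stay in sync."""
    xhat, codes = 0.0, []
    for s in signal:
        e = s - A * xhat                 # prediction residual
        q = round(e / STEP)              # uniform quantizer index
        xhat = A * xhat + q * STEP       # decoder-side reconstruction
        codes.append(q)
    return codes

def dpcm_decode(codes):
    xhat, out = 0.0, []
    for q in codes:
        xhat = A * xhat + q * STEP
        out.append(xhat)
    return out

x = [math.sin(0.1 * n) for n in range(50)]
y = dpcm_decode(dpcm_encode(x))
err = max(abs(a - b) for a, b in zip(x, y))   # bounded by STEP/2
```

Because the encoder predicts from the reconstructed signal rather than the original, quantization error never accumulates: each sample's error stays within half a quantizer step.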
Electrical Circuit Simulation Code
2001-08-09
Massively-Parallel Electrical Circuit Simulation Code. CHILESPICE is a massively-parallel distributed-memory electrical circuit simulation tool that contains many enhanced radiation, time-based, and thermal features and models. Large-scale electronic circuit simulation. Shared memory, parallel processing, enhanced convergence. Sandia-specific device models.
NASA Technical Reports Server (NTRS)
Pollara, Fabrizio; Hamkins, Jon; Dolinar, Sam; Andrews, Ken; Divsalar, Dariush
2006-01-01
This viewgraph presentation reviews uplink coding. The purpose and goals of the briefing are (1) Show a plan for using uplink coding and describe benefits (2) Define possible solutions and their applicability to different types of uplink, including emergency uplink (3) Concur with our conclusions so we can embark on a plan to use proposed uplink system (4) Identify the need for the development of appropriate technology and infusion in the DSN (5) Gain advocacy to implement uplink coding in flight projects Action Item EMB04-1-14 -- Show a plan for using uplink coding, including showing where it is useful or not (include discussion of emergency uplink coding).
NASA Astrophysics Data System (ADS)
Koontz, S. L.; Atwell, W. A.; Reddell, B.; Rojdev, K.
2010-12-01
In this paper, we report the results of modeling and simulation studies in which the radiation transport code FLUKA (FLUktuierende KAskade) is used to determine the changes in total ionizing dose (TID) and single-event effect (SEE) environments behind aluminum, polyethylene, carbon, and titanium shielding masses when the assumed form (i.e., Band or Exponential) of the solar particle event (SPE) kinetic energy spectra is changed. FLUKA simulations are fully three dimensional with an isotropic particle flux incident on a concentric spherical shell shielding mass and detector structure. FLUKA is a fully integrated and extensively verified Monte Carlo simulation package for the interaction and transport of high-energy particles and nuclei in matter. The effects of both energetic primary protons penetrating the shield mass and secondary particle showers caused by energetic primary protons colliding with shielding mass nuclei are reported. SPE heavy ion spectra are not addressed. Our results, in agreement with previous studies, show that use of the Exponential form of the event spectra can seriously underestimate spacecraft SPE TID and SEE environments in some, but not all, shielding mass cases. The SPE spectra investigated are taken from four specific SPEs that produced ground-level events (GLEs) during solar cycle 23 (1997-2008). GLEs are produced by highly energetic solar particle events (ESPs), i.e., those that contain significant fluences of 700 MeV to 10 GeV protons. Highly energetic SPEs are implicated in increased rates of spacecraft anomalies and spacecraft failures. High-energy protons interact with Earth’s atmosphere via nuclear reactions to produce secondary particles, some of which are neutrons that can be detected at the Earth’s surface by the global neutron monitor network. GLEs are one part of the overall SPE resulting from a particular solar flare or coronal mass ejection event on the sun. The ESP part of the particle event, detected by spacecraft
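The two spectral forms can be contrasted directly. The actual Band fit is a double power law in rigidity with a smooth junction; the broken power law in energy below is a simplified stand-in (all parameter values assumed) that shows why the Exponential form underestimates high-energy fluence:

```python
import math

def exponential_spectrum(E, J0, E0):
    """Integral fluence J(>E) = J0 * exp(-E/E0), E in MeV."""
    return J0 * math.exp(-E / E0)

def band_like_spectrum(E, J0, E_b, g1, g2):
    """Broken power law standing in for the Band fit (which is really a
    double power law in rigidity with a smooth junction)."""
    g = g1 if E <= E_b else g2
    return J0 * (E / E_b) ** (-g)

# normalized to the same fluence at 30 MeV, the power-law tail retains
# vastly more fluence at GLE-relevant energies (~700 MeV)
ratio = (band_like_spectrum(700.0, 1.0, 30.0, 1.0, 3.0)
         / exponential_spectrum(700.0, 1.0, 30.0))
```

Since secondary-particle showers in shielding are driven by exactly these high-energy protons, the spectral form chosen propagates directly into the computed TID and SEE environments.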
Eckerman, K.F.; Congel, F.J.; Roecklein, A.K.; Pasciak, W.J.
1980-06-01
The document is a user's guide for the GASPAR code, a computer program written for the evaluation of radiological impacts due to the release of radioactive material to the atmosphere during normal operation of light water reactors. The GASPAR code implements the radiological impact models of NRC Regulatory Guide 1.109, Revision 1, for atmospheric releases. The code is currently used by NRC in reactor licensing evaluations to estimate (1) the collective or population dose to the population within a 50-mile radius of a facility, (2) the total collective dose to the U.S. population, and (3) the maximum individual doses at selected locations in the vicinity of the plant.
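The collective-dose bookkeeping such codes perform reduces to summing per-person dose times population over spatial sectors; a hedged sketch (the sector structure, units, and numbers are illustrative, not GASPAR's actual input format):

```python
def collective_dose(sector_doses, sector_populations):
    """Collective (population) dose [person-rem]: the per-person dose in
    each sector multiplied by that sector's population, summed over all
    sectors within the assessment radius."""
    return sum(d * n for d, n in zip(sector_doses, sector_populations))

# two hypothetical sectors within the 50-mile radius
total = collective_dose([1e-3, 5e-4], [1000, 2000])
```

The same sum, extended over coarser sectors covering the whole country, yields the total collective dose to the U.S. population mentioned above.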
NASA Astrophysics Data System (ADS)
Mihalas, Dimitri
Contents: Basic radiation theory (specific intensity; photon number density and distribution function; mean intensity; radiation energy density, flux, and momentum density; radiation stress tensor (radiation pressure tensor); thermal radiation; thermodynamics of thermal radiation and a perfect gas). The transfer equation (absorption, emission, and scattering; the equation of transfer and its moments; Lorentz transformation of the transfer equation, the photon 4-momentum, the specific intensity, opacity, and emissivity, and the radiation stress-energy tensor; the radiation 4-force density vector; covariant form of the transfer equation). Inertial-frame equations of radiation hydrodynamics (inertial-frame radiation equations; inertial-frame equations of radiation hydrodynamics). Comoving-frame equation of transfer (special relativistic derivation, D. Mihalas; consistency between comoving-frame and inertial-frame equations; noninertial-frame derivation, J. I. Castor; analysis of O(v/c) terms). Lagrangian equations of radiation hydrodynamics (momentum equation; gas energy equation; first law of thermodynamics for the radiation field and for the radiating fluid; mechanical and total energy equations; consistency of different forms of the radiating-fluid energy and momentum equations; consistency of inertial-frame and comoving-frame radiation energy and momentum equations). Radiation diffusion (equilibrium and nonequilibrium diffusion; the problem of flux limiting). Shock propagation and numerical methods (acoustic waves; numerical stability; systems of equations; implications of shock development; implications of diffusive energy transport; illustrative example). Numerical radiation hydrodynamics: radiating-fluid energy and momentum equations; computational strategy; energy conservation; formal solution; multigroup equations; an astrophysical example; adaptive-grid radiation
Ellingson, R.G.; Baer, F.
1992-06-01
Research by the US Department of Energy (DOE) has shown that cloud radiative feedback is the single most important effect determining the magnitude of possible climatic responses to human activity. However, these effects are still not known at the levels needed for climate prediction. Consequently, DOE has launched a major initiative -- the Atmospheric Radiation Measurements (ARM) Program -- directed at improving the parameterization of the physics governing cloud and radiative processes in general circulation models (GCMs). One specific goal of ARM is to improve the treatment of radiative transfer in GCMs under clear-sky, general overcast and broken cloud conditions. Our approach to developing the radiation model will be to test existing models in an iterative, predictive fashion. We will supply the Clouds and Radiative Testbed (CART) with a set of models to be compared with operationally observed data. The differences we find will lead to the development of new models to be tested with new data. Similarly, our GCM studies will use existing GCMs to study the radiation sensitivity problem. We anticipate that the outcome of this approach will provide both a better longwave radiative forcing algorithm and a better understanding of how longwave radiative forcing influences the equilibrium climate of the atmosphere.
NASA Astrophysics Data System (ADS)
Juhasz, Albert J.
Radiator technology is discussed in the context of the Civilian Space Technology Initiative's (CSTI's) high capacity power-thermal management project. The CSTI project is a subset of a project to develop a piloted Mars nuclear electric propulsion (NEP) vehicle. The following topics are presented in vugraph form: advanced radiator concepts; heat pipe codes and testing; composite materials; radiator design and integration; and surface morphology.
NASA Technical Reports Server (NTRS)
Juhasz, Albert J.
1993-01-01
Radiator technology is discussed in the context of the Civilian Space Technology Initiative's (CSTI's) high capacity power-thermal management project. The CSTI project is a subset of a project to develop a piloted Mars nuclear electric propulsion (NEP) vehicle. The following topics are presented in vugraph form: advanced radiator concepts; heat pipe codes and testing; composite materials; radiator design and integration; and surface morphology.
NASA Technical Reports Server (NTRS)
1985-01-01
COSMIC MINIVER, a computer code developed by NASA for analyzing aerodynamic heating and heat transfer on the Space Shuttle, has been used by Marquardt Company to analyze heat transfer on Navy/Air Force missile bodies. The code analyzes heat transfer by four different methods which can be compared for accuracy. MINIVER saved Marquardt three months in computer time and $15,000.
Torney, D. C.
2001-01-01
We have begun to characterize a variety of codes, motivated by potential implementation as (quaternary) DNA n-sequences, with letters denoted A, C, G, and T. The first codes we studied are the most reminiscent of conventional group codes. For these codes, Hamming similarity was generalized so that the score for matched letters takes more than one value, depending upon which letters are matched [2]. These codes consist of n-sequences satisfying an upper bound on the similarities, summed over the letter positions, of distinct codewords. We chose similarity 2 for matches of the letters A and T and 3 for matches of the letters C and G, providing a rough approximation to double-strand bond energies in DNA. An inherent novelty of DNA codes is 'reverse complementation'. The latter may be defined, as follows, not only for alphabets of size four but, more generally, for any even-size alphabet. All that is required is a matching of the letters of the alphabet: a partition into pairs. Then, the reverse complement of a codeword is obtained by reversing the order of its letters and replacing each letter by its match. For DNA, the matching is AT/CG because these are the Watson-Crick bonding pairs. Reversal arises because two DNA sequences form a double strand with opposite relative orientations. Thus, as will be described in detail, because in vitro decoding involves the formation of double-stranded DNA from two codewords, it is reasonable to assume - for universal applicability - that the reverse complement of any codeword is also a codeword. In particular, self-reverse-complementary codewords are expressly forbidden in reverse-complement codes. Thus, an appropriate distance between all pairs of codewords must, when large, effectively prohibit binding between the respective codewords to form a double strand. Only reverse-complement pairs of codewords should be able to bind. For most applications, a DNA code is to be bi-partitioned, such that the reverse-complementary pairs are separated
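The reverse-complement operation and the weighted similarity described above can be sketched directly. This is an illustrative reading of the abstract, with helper names of our own choosing; the scores (2 for A/T matches, 3 for C/G) follow the text.

```python
# Illustrative sketch of the reverse-complement operation and the
# weighted (generalized Hamming) similarity described in the abstract.
# Scores follow the text: 2 for matched A/T, 3 for matched C/G.

MATCH = {"A": "T", "T": "A", "C": "G", "G": "C"}  # Watson-Crick pairs

def reverse_complement(word):
    """Reverse the letter order, then replace each letter by its match."""
    return "".join(MATCH[letter] for letter in reversed(word))

def weighted_similarity(u, v):
    """Sum position-wise scores over matched letters of two n-sequences."""
    score = {"A": 2, "T": 2, "C": 3, "G": 3}
    return sum(score[a] for a, b in zip(u, v) if a == b)
```

A reverse-complement code, in these terms, requires that `reverse_complement(w)` is also a codeword for every codeword `w`, and that no codeword equals its own reverse complement.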
Ballarini, F.; Battistoni, G.; Campanella, M.; Carboni, M.; Cerutti, F.; Empl, A.; Fasso, A.; Ferrari, A.; Gadioli, E.; Garzelli, M.V.; Lantz, M.; Liotta, M.; Mairani, A.; Mostacci, A.; Muraro, S.; Ottolenghi, A.; Pelliccioni, M.; Pinsky, L.; Ranft, J.; Roesler, S.; Sala, P.R.; /Milan U. /INFN, Milan /Pavia U. /INFN, Pavia /CERN /Siegen U. /Houston U. /SLAC /Frascati /NASA, Houston /ENEA, Frascati
2005-11-09
FLUKA is a multipurpose Monte Carlo code which can transport a variety of particles over a wide energy range in complex geometries. The code is a joint project of INFN and CERN: part of its development is also supported by the University of Houston and NASA. FLUKA is successfully applied in several fields including, but not limited to, particle physics, cosmic ray physics, dosimetry, radioprotection, hadron therapy, space radiation, accelerator design and neutronics. The code is the standard tool used at CERN for dosimetry, radioprotection and beam-machine interaction studies. Here we give a glimpse into the code's physics models, with particular emphasis on the hadronic and nuclear sector.
NASA Technical Reports Server (NTRS)
Ballarini, F.; Battistoni, G.; Campanella, M.; Carboni, M.; Cerutti, F.; Empl, A.; Fasso, A.; Ferrari, A.; Gadioli, E.; Garzelli, M. V.; Lantz, M.; Liotta, M.; Mairani, A.; Mostacci, A.; Muraro, S.; Ottolenghi, A.; Pelliccioni, M.; Pinsky, L.; Ranft, J.; Roesler, S.; Sala, P. R.; Scannicchio, D.; Trovati, S.; Villari, R.; Wilson, T.
2006-01-01
FLUKA is a multipurpose Monte Carlo code which can transport a variety of particles over a wide energy range in complex geometries. The code is a joint project of INFN and CERN: part of its development is also supported by the University of Houston and NASA. FLUKA is successfully applied in several fields including, but not limited to, particle physics, cosmic ray physics, dosimetry, radioprotection, hadron therapy, space radiation, accelerator design and neutronics. The code is the standard tool used at CERN for dosimetry, radioprotection and beam-machine interaction studies. Here we give a glimpse into the code's physics models, with particular emphasis on the hadronic and nuclear sector.
1987-01-20
Version 00. The suite of computer codes, SPEEDI, predicts the dose to the public from a plume released from a nuclear power plant. The main codes comprising SPEEDI are WIND04, PRWDA, and CIDE. WIND04 calculates three-dimensional mass-conservative windfields, PRWDA calculates concentration distributions, and CIDE estimates the external and internal doses. These models can take into account the spatial and temporal variation of wind, variable topography, deposition and variable source intensity for use in real-time assessment. We recommend that you also review the emergency response supporting system CCC-661/EXPRESS documentation.
Confocal coded aperture imaging
Tobin, Jr., Kenneth William; Thomas, Jr., Clarence E.
2001-01-01
A method for imaging a target volume comprises the steps of: radiating a small bandwidth of energy toward the target volume; focusing the small bandwidth of energy into a beam; moving the target volume through a plurality of positions within the focused beam; collecting a beam of energy scattered from the target volume with a non-diffractive confocal coded aperture; generating a shadow image of said aperture from every point source of radiation in the target volume; and, reconstructing the shadow image into a 3-dimensional image of every point source by mathematically correlating the shadow image with a digital or analog version of the coded aperture. The method can comprise the step of collecting the beam of energy scattered from the target volume with a Fresnel zone plate.
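The correlation step of the reconstruction can be illustrated in one dimension. The aperture pattern, sizes, and names below are invented for the sketch; a real system correlates 2-D shadow images, but the principle is the same: each point source casts a shifted copy of the aperture, and correlating with the aperture pattern concentrates that copy back into a peak at the source position.

```python
def correlate(signal, pattern):
    """Circular cross-correlation of a recorded shadow with the aperture."""
    n = len(signal)
    return [sum(signal[(i + j) % n] * pattern[j] for j in range(n))
            for i in range(n)]

# 1-D open/closed aperture mask (a length-7 m-sequence) and a single
# point source at position 2.
aperture = [1, 1, 1, 0, 1, 0, 0]
n = len(aperture)
source = [0, 0, 1, 0, 0, 0, 0]

# Each point source casts a shifted copy of the aperture onto the detector.
shadow = [sum(source[k] * aperture[(i - k) % n] for k in range(n))
          for i in range(n)]

# Correlating the shadow with the aperture concentrates the signal into a
# peak at the source position over a flat background.
decoded = correlate(shadow, aperture)
```

The m-sequence mask is chosen because its circular autocorrelation is two-valued (one peak, flat sidelobes), which is what makes the correlation decode clean.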
NASA Astrophysics Data System (ADS)
Massey, Richard; Stoughton, Chris; Leauthaud, Alexie; Rhodes, Jason; Koekemoer, Anton; Ellis, Richard; Shaghoulian, Edgar
2013-07-01
Charge Transfer Inefficiency (CTI) due to radiation damage above the Earth's atmosphere creates spurious trailing in images from Charge-Coupled Device (CCD) imaging detectors. Radiation damage also creates unrelated warm pixels, which can be used to measure CTI. This code provides pixel-based correction for CTI and has proven effective on Hubble Space Telescope Advanced Camera for Surveys raw images, successfully reducing the CTI trails by a factor of ~30 everywhere in the CCD and at all flux levels. The core is written in Java for speed, and a front-end user interface is provided in IDL. The code operates on raw data by returning individual electrons to the pixels from which they were unintentionally dragged during readout. Correction takes about 25 minutes per ACS exposure, but is trivially parallelisable to multiple processors.
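The pixel-based correction idea (returning dragged electrons to their source pixels) can be sketched with a toy readout model. The trailing model and trap fraction below are illustrative inventions for the sketch, not the code's actual CTI model, but the correction strategy is the same in spirit: invert a forward model of the trailing.

```python
def add_cti_trail(column, trap_fraction=0.02):
    """Toy readout model: each pixel, as it is clocked out, loses a fixed
    fraction of its charge to traps, and that charge re-emerges in the
    next pixel, producing a trail."""
    out, trailed = [], 0.0
    for value in column:
        captured = value * trap_fraction
        out.append(value - captured + trailed)
        trailed = captured
    return out

def correct_cti(observed, trap_fraction=0.02, iterations=5):
    """Invert the readout model by fixed-point iteration: re-trail the
    current guess and nudge it by the residual against the observation."""
    guess = list(observed)
    for _ in range(iterations):
        model = add_cti_trail(guess, trap_fraction)
        guess = [g + (obs - m) for g, obs, m in zip(guess, observed, model)]
    return guess

# A bright pixel (100 e-) acquires a trail during readout; the correction
# returns the dragged electrons to their original pixel.
true_column = [0.0, 100.0, 0.0, 0.0]
observed = add_cti_trail(true_column)
corrected = correct_cti(observed)
```

The iteration converges quickly here because the forward model is close to the identity (small trap fraction).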
Not Available
1991-01-01
This bill was introduced into the US House of Representatives on June 24, 1991 to amend title 38, United States Code, with respect to benefits for individuals who may have been exposed to ionizing radiation during military service. Key features addressed in separate sections include the following: expansion of the list of diseases presumed to be service-connected for certain radiation-exposed veterans and elimination of latency-period limitations; and adjudication of claims based on exposure to ionizing radiation.
Michalsky, J.; Harrison, L.
1993-04-30
The ARM goal is to help improve both longwave and shortwave models by providing improved radiometric shortwave data. These data can be used directly to test shortwave model predictions. As will be described below, they can also provide inferred values for aerosol and cloud properties that are useful for longwave modeling efforts as well. The current ARM research program includes three tasks, all related to the study of shortwave radiation transfer through clouds and aerosol. Two of the tasks involve the assembly of archived and new radiation and meteorological data sets; the third and dominant task has been the development and use of new shortwave radiometric sensors. Archived data from Golden, Colorado, and Albany, New York, were combined with National Weather Service ground and upper air data for testing radiation models for the era when the Earth Radiation Budget Experiment (ERBE) was operational. These data do not include optimum surface radiation measurements; consequently we are acquiring downwelling shortwave, including direct and diffuse irradiance, plus downwelling longwave, upwelling shortwave, and aerosol optical depth, at our own institution, as an additional dataset for ARM modelers.
Ravishankar, C., Hughes Network Systems, Germantown, MD
1998-05-08
Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence end-to-end performance of the digital link essentially becomes independent of the length and operating frequency bands of the link. From a transmission point of view, then, digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term speech coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end, where speech is reconstructed or synthesized using the received set of codes. A more generic term that is often used interchangeably with speech coding is voice coding. This term is more generic in the sense that the
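As a concrete instance of waveform coding, μ-law companding (the scheme standardized in ITU-T G.711, shown here in its continuous form rather than the tabulated 8-bit one) compresses samples logarithmically before quantization:

```python
import math

MU = 255.0  # companding constant of the North American mu-law standard

def mu_law_encode(x):
    """Logarithmically compress a sample in [-1, 1] before quantization."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_law_decode(y):
    """Expand a companded value back to the linear domain."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)
```

Small amplitudes receive finer resolution than large ones, which suits the amplitude statistics of speech and is why companding precedes uniform quantization in classic telephony codecs.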
NASA Technical Reports Server (NTRS)
Friedel, R. H. W.; Bourdarie, S.; Fennell, J.; Kanekal, S.; Cayton, T. E.
2004-01-01
The highly energetic electron environment in the inner magnetosphere (GEO inward) has received a lot of research attention in recent years, as the dynamics of relativistic electron acceleration and transport are not yet fully understood. These electrons can cause deep dielectric charging in any space hardware in the MEO to GEO region. We use a new and novel approach to obtain a global representation of the inner magnetospheric energetic electron environment, which can reproduce the absolute environment (flux) for any spacecraft orbit in that region to within a factor of 2 for the energy range of 100 keV to 5 MeV electrons, for any level of magnetospheric activity. We combine the extensive set of inner magnetospheric energetic electron observations available at Los Alamos with the physics-based Salammbo transport code, using the data assimilation technique of "nudging". This in effect inputs in-situ data into the code and allows the diffusion mechanisms in the code to interpolate the data into regions and times of no data availability. We present here details of the methods used, both in the data assimilation process and in the necessary inter-calibration of the input data used. We will present sample runs of the model/data code and compare the results to test spacecraft data not used in the data assimilation process.
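The "nudging" technique can be sketched as Newtonian relaxation: alongside the model's own tendency, an extra term pulls the state toward observations wherever they exist, and the model dynamics carry the information across data gaps. The toy decay model, time constants, and data values below are illustrative, not Salammbo's physics.

```python
def nudge_step(state, observation, dt=0.1, relax_time=0.5, decay_rate=0.1):
    """One step of a toy model (pure exponential decay stands in for the
    real transport physics) plus a Newtonian-relaxation "nudging" term
    that pulls the state toward the observation where one exists."""
    tendency = -decay_rate * state          # the model's own dynamics
    if observation is not None:             # nudge only where data exist
        tendency += (observation - state) / relax_time
    return state + dt * tendency

# Where observations exist the state is drawn toward them; in the gaps
# (None) the model evolves freely, interpolating across missing data.
state = 1.0
for obs in [2.0, 2.0, None, None, 2.0]:
    state = nudge_step(state, obs)
```

The relaxation time sets how strongly the data constrain the model: short relax_time trusts the observations, long relax_time trusts the model.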
ERIC Educational Resources Information Center
Lai, Hsin-Chih; Chang, Chun-Yen; Li, Wen-Shiane; Fan, Yu-Lin; Wu, Ying-Tien
2013-01-01
This study presents an m-learning method that incorporates Integrated Quick Response (QR) codes. This learning method not only achieves the objectives of outdoor education, but it also increases applications of Cognitive Theory of Multimedia Learning (CTML) (Mayer, 2001) in m-learning for practical use in a diverse range of outdoor locations. When…
NASA Astrophysics Data System (ADS)
Maget, V.; Sicard-Piet, A.; Bourdarie, S.; Lazaro, D.; Turner, D. L.; Daglis, I. A.; Sandberg, I.
2015-07-01
Over the last decade, efforts have been made in the radiation belt community to develop data assimilation tools in order to improve the accuracy of radiation belt models. In this paper we present a new method to correctly take into account the outer boundary conditions at L* = 8 in such an enhanced model of the radiation belts. To do that, we based our work on the Time History of Events and Macroscale Interactions during Substorms/Solid State Telescope data set. Statistics are developed to define a consistent electron distribution at L* = 8 (in both equatorial pitch angle and energy), and a variance-covariance matrix is estimated in order to more realistically drive the Monte Carlo sampling required by the Ensemble Kalman Filter (EnKF). Data processing is first described, as well as the caveats avoided, and then the use of this information in machinery such as the EnKF is described. It is shown that the way the Monte Carlo simulations are performed is of great importance for realistically reproducing the outer boundary distribution needed by the physics-based Salammbô model. Finally, EnKF simulations are performed and compared during September 2011 in order to analyze the improvements gained using this new method of defining outer boundary conditions. In particular, we highlight in this study that such a method provides great improvement in the reconstruction of the dynamics observed at geosynchronous orbit, during both quiet and active magnetic conditions.
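The EnKF analysis step itself can be sketched for a scalar state. The perturbed-observation form below is one standard variant of the filter; the ensemble size, seed, and numbers are illustrative rather than drawn from the paper.

```python
import random

def enkf_update(ensemble, observation, obs_var, rng):
    """One scalar EnKF analysis step (perturbed-observation variant):
    the Kalman gain is built from the ensemble's sample variance, and
    each member is updated against a noisy copy of the observation."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)
    gain = var / (var + obs_var)
    return [x + gain * (observation + rng.gauss(0.0, obs_var ** 0.5) - x)
            for x in ensemble]

rng = random.Random(0)                               # fixed seed, repeatable
prior = [rng.gauss(5.0, 2.0) for _ in range(200)]    # forecast ensemble
posterior = enkf_update(prior, observation=8.0, obs_var=1.0, rng=rng)
```

The update moves the ensemble mean toward the observation and shrinks its spread, which is why the quality of the boundary-condition sampling (the paper's point) matters: the gain is computed from the ensemble itself.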
Spaceflight Validation of Hzetrn Code
NASA Technical Reports Server (NTRS)
Wilson, J. W.; Shinn, J. L.; Singleterry, R. C.; Badavi, F. F.; Badhwar, G. D.; Reitz, G.; Beaujean, R.; Cucinotta, F. A.
1999-01-01
HZETRN is being developed as a fast deterministic radiation transport code applicable to neutrons, protons, and multiply charged ions in the space environment. It was recently applied to 50 hours of IMP8 data measured during the August 4, 1972 solar event to map the hourly exposures within the human body under several shield configurations. This calculation required only 18 hours on a VAX 4000 machine; a similar calculation using the Monte Carlo method would have required two years of dedicated computer time. The code has been benchmarked against well-documented and tested Monte Carlo proton transport codes with good success. The code will allow important trade studies to be made with relative ease due to its computational speed and will be useful in assessing design alternatives in an integrated system software environment. Since there are no well-tested Monte Carlo codes for HZE particles, we have been engaged in flight validation of the HZETRN results. To date we have made comparisons with TEPC, CR-39, charged-particle telescopes, and Bonner spheres. This broad range of detectors allows us to test a number of functions related to the differing physical processes that contribute to the complicated radiation fields within a spacecraft or the human body, functions that can be calculated by the HZETRN code system. In the present report we review these results.
Development of the Code RITRACKS
NASA Technical Reports Server (NTRS)
Plante, Ianik; Cucinotta, Francis A.
2013-01-01
A document discusses the code RITRACKS (Relativistic Ion Tracks), which was developed to simulate heavy ion track structure at the microscopic and nanoscopic scales. It is a Monte-Carlo code that simulates the production of radiolytic species in water, event-by-event, and which may be used to simulate tracks and also to calculate dose in targets and voxels of different sizes. The dose deposited by the radiation can be calculated in nanovolumes (voxels). RITRACKS allows simulation of radiation tracks without the need of extensive knowledge of computer programming or Monte-Carlo simulations. It is installed as a regular application on Windows systems. The main input parameters entered by the user are the type and energy of the ion, the length and size of the irradiated volume, the number of ions impacting the volume, and the number of histories. The simulation can be started after the input parameters are entered in the GUI. The number of each kind of interactions for each track is shown in the result details window. The tracks can be visualized in 3D after the simulation is complete. It is also possible to see the time evolution of the tracks and zoom on specific parts of the tracks. The software RITRACKS can be very useful for radiation scientists to investigate various problems in the fields of radiation physics, radiation chemistry, and radiation biology. For example, it can be used to simulate electron ejection experiments (radiation physics).
Monte Carlo Ion Transport Analysis Code.
2009-04-15
Version: 00 TRIPOS is a versatile Monte Carlo ion transport analysis code. It has been applied to the treatment of both surface and bulk radiation effects. The media considered are composed of multilayer polyatomic materials.
NASA Astrophysics Data System (ADS)
Dirscherl, R.
1993-06-01
The electromagnetic radiation originating from the exhaust plume of tactical missile motors is of outstanding importance for military system designers. Both missile and countermeasure engineers rely on knowledge of plume radiation properties, be it for guidance/interference control or for passive detection of adversary missiles. To allow access to plume radiation properties, they are characterized with respect to the radiation-producing mechanisms such as afterburning, chemical constituents and reactions, and particle radiation. A classification of plume spectral emissivity regions is given according to the constraints imposed by available sensor technology and atmospheric propagation windows. Additionally, assessment methods are presented that allow a common and general grouping of rocket motor properties into various categories. These methods describe state-of-the-art experimental evaluation techniques as well as the calculation codes most commonly used by developers in NATO countries. Dominant aspects influencing plume radiation are discussed, and a standardized test technique is proposed for the assessment of plume radiation properties, including prediction procedures. These recommendations on terminology and assessment methods should be common to all who work with plume radiation. Special emphasis is put on the omnipresent need for self-protection by the passive detection of plume radiation in the ultraviolet (UV) and infrared (IR) spectral bands.
The Integrated TIGER Series Codes
Kensek, Ronald P.; Franke, Brian C.; Laub, Thomas W.
2006-01-15
ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, (3) parallel implementations of all ITS codes, (4) a general purpose geometry engine for linking with CAD or other geometry formats, and (5) the Cholla facet geometry library. Moreover, the general user friendliness of the software has been enhanced through increased internal error checking and improved code portability.
Codes with special correlation.
NASA Technical Reports Server (NTRS)
Baumert, L. D.
1964-01-01
Uniform binary codes with special correlation, including transorthogonality and simplex codes, Hadamard matrices, and difference sets.
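For illustration, a Hadamard matrix can be built by the Sylvester doubling construction, and deleting the all-ones coordinate of its rows yields a simplex (equal-correlation) code. This is a minimal sketch of those standard constructions, with names of our own choosing.

```python
def sylvester_hadamard(order):
    """Hadamard matrix of size 2**order via the Sylvester doubling
    H -> [[H, H], [H, -H]], starting from the 1x1 matrix [1]."""
    h = [[1]]
    for _ in range(order):
        h = ([row + row for row in h] +
             [row + [-x for x in row] for row in h])
    return h

H = sylvester_hadamard(3)  # 8 x 8; rows are mutually orthogonal

# Dropping the all-ones first coordinate of every row but the first
# leaves seven length-7 words whose pairwise correlation is constant:
# a simplex code in +1/-1 notation.
simplex = [row[1:] for row in H[1:]]
```

The equal pairwise correlation (here -1 for distinct words against +7 for a word with itself) is exactly the "special correlation" property that makes these codes useful for synchronization and signaling.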
Grahn, D.; Fox, C.; Wright, B.J.; Carnes, B.A.
1994-05-01
Between 1953 and 1970, studies on the long-term effects of external x-ray and gamma irradiation on inbred and hybrid mouse stocks were carried out at the Biological and Medical Research Division, Argonne National Laboratory. The results of these studies, plus the mating, litter, and pre-experimental stock records, were routinely coded on IBM cards for statistical analysis and record maintenance. Also retained were the survival data from studies performed in the period 1943-1953 at the National Cancer Institute, National Institutes of Health, Bethesda, Maryland. The card-image data files have been corrected where necessary and refiled on hard disks for long-term storage and ease of accessibility. In this report, the individual studies and data files are described, and pertinent factors regarding caging, husbandry, radiation procedures, choice of animals, and other logistical details are summarized. Some of the findings are also presented. Descriptions of the different mouse stocks and hybrids are included in an appendix; more than three dozen stocks were involved in these studies. Two other appendices detail the data files in their original card-image format and the numerical codes used to describe the animal's exit from an experiment and, for some studies, any associated pathologic findings. Tabular summaries of sample sizes, dose levels, and other variables are also given to assist investigators in their selection of data for analysis. The archive is open to any investigator with legitimate interests and a willingness to collaborate and acknowledge the source of the data and to recognize appropriate conditions or caveats.
NASA Technical Reports Server (NTRS)
Hinds, Erold W. (Principal Investigator)
1996-01-01
This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.
Radiation transport calculations for cosmic radiation.
Endo, A; Sato, T
2012-01-01
The radiation environment inside and near spacecraft consists of various components of primary radiation in space and secondary radiation produced by the interaction of the primary radiation with the walls and equipment of the spacecraft. Radiation fields inside astronauts are different from those outside them, because of the body's self-shielding as well as the nuclear fragmentation reactions occurring in the human body. Several computer codes have been developed to simulate the physical processes of the coupled transport of protons, high-charge and high-energy nuclei, and the secondary radiation produced in atomic and nuclear collision processes in matter. These computer codes have been used in various space radiation protection applications: shielding design for spacecraft and planetary habitats, simulation of instrument and detector responses, analysis of absorbed doses and quality factors in organs and tissues, and study of biological effects. This paper focuses on the methods and computer codes used for radiation transport calculations on cosmic radiation, and their application to the analysis of radiation fields inside spacecraft, evaluation of organ doses in the human body, and calculation of dose conversion coefficients using the reference phantoms defined in ICRP Publication 110. PMID:23089013
Anderson, Jonas T.
2013-03-15
In this paper we define homological stabilizer codes on qubits which encompass codes such as Kitaev's toric code and the topological color codes. These codes are defined solely by the graphs they reside on. This feature allows us to use properties of topological graph theory to determine the graphs which are suitable as homological stabilizer codes. We then show that all toric codes are equivalent to homological stabilizer codes on 4-valent graphs. We show that the topological color codes and toric codes correspond to two distinct classes of graphs. We define the notion of label set equivalencies and show that under a small set of constraints the only homological stabilizer codes without local logical operators are equivalent to Kitaev's toric code or to the topological color codes. Highlights: We show that Kitaev's toric codes are equivalent to homological stabilizer codes on 4-valent graphs. We show that toric codes and color codes correspond to homological stabilizer codes on distinct graphs. We find and classify all 2D homological stabilizer codes. We find optimal codes among the homological stabilizer codes.
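The stabilizer formalism underlying such codes can be illustrated with the binary symplectic representation: a Pauli operator is a pair of bit vectors (X part, Z part), and two operators commute iff their symplectic inner product is even. The toy 6-qubit operators below are illustrative, not taken from the paper's graphs; they mimic the toric-code fact that a star and a plaquette always overlap on an even number of qubits.

```python
def commutes(op_a, op_b):
    """Paulis as (x_bits, z_bits) over n qubits: two operators commute
    iff their binary symplectic inner product is even."""
    (xa, za), (xb, zb) = op_a, op_b
    overlap = (sum(a & b for a, b in zip(xa, zb)) +
               sum(a & b for a, b in zip(za, xb)))
    return overlap % 2 == 0

# An X-type "star" and a Z-type "plaquette" overlapping on exactly two
# qubits (1 and 2) commute, as star/plaquette pairs do in the toric code.
star      = ([0, 1, 1, 1, 1, 0], [0, 0, 0, 0, 0, 0])  # X on qubits 1-4
plaquette = ([0, 0, 0, 0, 0, 0], [1, 1, 1, 0, 0, 1])  # Z on qubits 0,1,2,5

# X and Z acting on the same single qubit overlap on one qubit: anticommute.
x0 = ([1, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0])
z0 = ([0, 0, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0])
```

Mutual commutation of all stabilizer generators, checked exactly this way, is what lets them be measured simultaneously.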
Electromagnetic particle simulation codes
NASA Technical Reports Server (NTRS)
Pritchett, P. L.
1985-01-01
Electromagnetic particle simulations solve the full set of Maxwell's equations. They thus include the effects of self-consistent electric and magnetic fields, magnetic induction, and electromagnetic radiation. The algorithms for an electromagnetic code which works directly with the electric and magnetic fields are described. The fields and current are separated into transverse and longitudinal components. The transverse E and B fields are integrated in time using a leapfrog scheme applied to the Fourier components. The particle pushing is performed via the relativistic Lorentz force equation for the particle momentum. As an example, simulation results are presented for the electron cyclotron maser instability which illustrate the importance of relativistic effects on the wave-particle resonance condition and on wave dispersion.
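The relativistic momentum push mentioned above can be sketched as follows. This is a simple explicit step in illustrative code units (production particle-in-cell codes typically use a Boris-type rotation for the magnetic part, which this sketch does not implement):

```python
import math

C = 1.0  # speed of light in code units (illustrative)

def relativistic_push(p, e_field, b_field, q=1.0, m=1.0, dt=0.01):
    """Advance a particle's relativistic momentum by the Lorentz force
    dp/dt = q (E + v x B), with v = p / (gamma m) and
    gamma = sqrt(1 + |p|^2 / (m c)^2)."""
    gamma = math.sqrt(1.0 + sum(pi * pi for pi in p) / (m * C) ** 2)
    v = [pi / (gamma * m) for pi in p]
    v_cross_b = [v[1] * b_field[2] - v[2] * b_field[1],
                 v[2] * b_field[0] - v[0] * b_field[2],
                 v[0] * b_field[1] - v[1] * b_field[0]]
    return [pi + q * (ei + vbi) * dt
            for pi, ei, vbi in zip(p, e_field, v_cross_b)]
```

The gamma factor is where the relativistic effects enter: velocity saturates below c however large the momentum grows, which shifts the wave-particle resonance condition exactly as the abstract notes.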
Coding of Neuroinfectious Diseases.
Barkley, Gregory L
2015-12-01
Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue. PMID:26633789
ERIC Educational Resources Information Center
New Mexico Univ., Albuquerque. American Indian Law Center.
The Model Children's Code was developed to provide a legally correct model code that American Indian tribes can use to enact children's codes that fulfill their legal, cultural and economic needs. Code sections cover the court system, jurisdiction, juvenile offender procedures, minor-in-need-of-care, and termination. Almost every Code section is…
The VISC code: A user's manual
NASA Technical Reports Server (NTRS)
Wilson, K.
1973-01-01
The VISC code is a computer-automated scheme for solving the equations describing the fully coupled viscous, radiating flow at the stagnation point of a blunt body which may or may not be ablating. The code provides a basis for obtaining predictions of the stagnation-point heating to a body entering any planetary atmosphere at hyperbolic velocities. The code is written in FORTRAN V and is operational on both the Univac 1108 (EXEC 8) system and the CDC 7600 system. The report gives an overview of the VISC code computational logic flow, a description of the input requirements and output results, and comments on the practical use of the code. As such, the report forms a user's manual for operation of the VISC code.
NASA Technical Reports Server (NTRS)
Chou, Y. S.
1973-01-01
The SL-4 code is a computer-automated scheme for solving the equations describing the fully coupled viscous, radiating flow over the front face of a blunt body which may or may not be ablating. The code provides a basis for obtaining predictions of the surface heating to a body entering any planetary atmosphere at hyperbolic velocities. The code is written in FORTRAN V and is operational on both the Univac 1108 (EXEC 8) system in use at LMSC and the CDC 7600 system in use at the University of California, Berkeley. An overview of the SL-4 code computational logic flow, a description of the input requirements and output results, and comments on the practical use of the code are presented. As such, this report forms a user's manual for operation of the SL-4 code.
International assessment of PCA codes
Neymotin, L.; Lui, C.; Glynn, J.; Archarya, S.
1993-11-01
Over the past three years (1991-1993), an extensive international exercise for intercomparison of a group of six Probabilistic Consequence Assessment (PCA) codes was undertaken. The exercise was jointly sponsored by the Commission of European Communities (CEC) and OECD Nuclear Energy Agency. This exercise was a logical continuation of a similar effort undertaken by OECD/NEA/CSNI in 1979-1981. The PCA codes are currently used by different countries for predicting radiological health and economic consequences of severe accidents at nuclear power plants (and certain types of non-reactor nuclear facilities) resulting in releases of radioactive materials into the atmosphere. The codes participating in the exercise were: ARANO (Finland), CONDOR (UK), COSYMA (CEC), LENA (Sweden), MACCS (USA), and OSCAAR (Japan). In parallel with this inter-code comparison effort, two separate groups performed a similar set of calculations using two of the participating codes, MACCS and COSYMA. Results of the intercode and inter-MACCS comparisons are presented in this paper. The MACCS group included four participants: GREECE: Institute of Nuclear Technology and Radiation Protection, NCSR Demokritos; ITALY: ENEL, ENEA/DISP, and ENEA/NUC-RIN; SPAIN: Universidad Politecnica de Madrid (UPM) and Consejo de Seguridad Nuclear; USA: Brookhaven National Laboratory, US NRC and DOE.
Accumulate repeat accumulate codes
NASA Technical Reports Server (NTRS)
Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung
2004-01-01
In this paper we propose an innovative channel coding scheme called 'Accumulate Repeat Accumulate' (ARA) codes. This class of codes can be viewed as serial turbo-like codes, or as a subclass of Low-Density Parity-Check (LDPC) codes, so belief propagation can be used for iterative decoding of ARA codes on a graph. The encoder for this class can be viewed as a precoded Repeat Accumulate (RA) code or as a precoded Irregular Repeat Accumulate (IRA) code, where an accumulator is simply chosen as the precoder. Thus ARA codes have a simple and very fast encoder structure when represented as LDPC codes. Based on density evolution for LDPC codes, we show through some examples of ARA codes that for a maximum variable node degree of 5, a minimum bit SNR as low as 0.08 dB from channel capacity at rate 1/2 can be achieved as the block size goes to infinity. Thus, for a fixed low maximum variable node degree, their threshold outperforms not only the RA and IRA codes but also the best known LDPC codes with the same maximum node degree. Furthermore, by puncturing the accumulators, any desired high-rate codes close to rate 1 can be obtained, with thresholds that stay uniformly close to the channel capacity thresholds. Iterative decoding simulation results are provided. ARA codes also have a projected graph, or protograph, representation that allows for high-speed decoder implementation.
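The serial structure described in the abstract (accumulator precoder, repetition, interleaver, inner accumulator) can be sketched in a few lines. This is an illustrative toy, not the authors' construction: the repetition factor, the placeholder interleaver, and the absence of puncturing are all assumptions made here for brevity.

```python
def accumulate(bits):
    """Running XOR (mod-2 prefix sum): the 1/(1+D) accumulator."""
    out, acc = [], 0
    for b in bits:
        acc ^= b
        out.append(acc)
    return out

def ara_encode(info, repeat=3, interleave=None):
    """Toy Accumulate-Repeat-Accumulate encoder: precode with an
    accumulator, repeat each bit, permute, then accumulate again.
    A real ARA code punctures the accumulators and uses a designed
    interleaver; this sketch only shows the serial concatenation."""
    pre = accumulate(info)                         # precoder = accumulator
    rep = [b for b in pre for _ in range(repeat)]  # repetition code
    if interleave is None:
        interleave = list(range(len(rep)))[::-1]   # placeholder permutation
    perm = [rep[i] for i in interleave]
    return accumulate(perm)                        # inner accumulator
```

Because every stage is linear over GF(2), the overall map is still a linear code, which is what makes the LDPC/protograph view of ARA codes possible.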
Concatenated Coding Using Trellis-Coded Modulation
NASA Technical Reports Server (NTRS)
Thompson, Michael W.
1997-01-01
In the late seventies and early eighties a technique known as Trellis Coded Modulation (TCM) was developed for providing spectrally efficient error correction coding. Instead of adding redundant information in the form of parity bits, redundancy is added at the modulation stage, thereby increasing bandwidth efficiency. A digital communications system can be designed to use bandwidth-efficient multilevel/phase modulation such as Amplitude Shift Keying (ASK), Phase Shift Keying (PSK), Differential Phase Shift Keying (DPSK) or Quadrature Amplitude Modulation (QAM). Performance gain can be achieved by increasing the number of signals over the corresponding uncoded system to compensate for the redundancy introduced by the code. A considerable amount of research and development has been devoted to developing good TCM codes for severely bandlimited applications. More recently, the use of TCM for satellite and deep space communications applications has received increased attention. This report describes the general approach of using a concatenated coding scheme that features TCM and RS coding. Results have indicated that substantial (6-10 dB) performance gains can be achieved with this approach with comparatively little bandwidth expansion. Since all of the bandwidth expansion is due to the RS code, TCM-based concatenated coding results in roughly 10-50% bandwidth expansion, compared to 70-150% expansion for similar concatenated schemes which use convolutional codes. We stress that combined coding and modulation optimization is important for achieving performance gains while maintaining spectral efficiency.
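The bandwidth-expansion figures quoted above follow directly from code rates. A quick check, using the common RS(255, 223) code as a representative outer code (an assumption; the report does not fix the RS parameters here):

```python
def bandwidth_expansion(n, k):
    """Fractional bandwidth expansion of an (n, k) code: redundancy n/k - 1."""
    return n / k - 1

# In a TCM-based concatenated scheme the outer RS code carries all the
# expansion, since TCM adds redundancy in the signal set, not in extra symbols.
rs = bandwidth_expansion(255, 223)   # ~0.14, i.e. ~14%, inside the 10-50% range
conv = bandwidth_expansion(2, 1)     # rate-1/2 convolutional code: 100% expansion
```

This makes concrete why replacing an inner convolutional code with TCM keeps the concatenated system's expansion near that of the RS code alone.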
Coset Codes Viewed as Terminated Convolutional Codes
NASA Technical Reports Server (NTRS)
Fossorier, Marc P. C.; Lin, Shu
1996-01-01
In this paper, coset codes are considered as terminated convolutional codes. Based on this approach, three new general results are presented. First, it is shown that the iterative squaring construction can equivalently be defined from a convolutional code whose trellis terminates. This convolutional code determines a simple encoder for the coset code considered, and the state and branch labelings of the associated trellis diagram become straightforward. Also, from the generator matrix of the code in its convolutional code form, much information about the trade-off between the state connectivity and complexity at each section, and the parallel structure of the trellis, is directly available. Based on this generator matrix, it is shown that the parallel branches in the trellis diagram of the convolutional code represent the same coset code C(sub 1), of smaller dimension and shorter length. Utilizing this fact, a two-stage optimum trellis decoding method is devised. The first stage decodes C(sub 1), while the second stage decodes the associated convolutional code, using the branch metrics delivered by stage 1. Finally, a bidirectional decoding of each received block starting at both ends is presented. If about the same number of computations is required, this approach remains very attractive from a practical point of view as it roughly doubles the decoding speed. This fact is particularly interesting whenever the second half of the trellis is the mirror image of the first half, since the same decoder can be implemented for both parts.
Discussion on LDPC Codes and Uplink Coding
NASA Technical Reports Server (NTRS)
Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio
2007-01-01
This slide presentation reviews the progress of the workgroup on Low-Density Parity-Check (LDPC) codes for space link coding. The workgroup is tasked with developing and recommending new error-correcting codes for near-Earth, Lunar, and deep space applications. Included in the presentation is a summary of the technical progress of the workgroup. Charts showing the LDPC decoder's sensitivity to symbol scaling errors are reviewed, as well as a chart comparing the performance of several frame synchronizer algorithms to that of some good codes, and LDPC decoder tests at ESTL. Also reviewed are a study on Coding, Modulation, and Link Protocol (CMLP) and the recommended codes. A design for the pseudo-randomizer with LDPC decoder and CRC is also reviewed, and a chart summarizing the three proposed coding systems is presented.
Manually operated coded switch
Barnette, Jon H.
1978-01-01
The disclosure relates to a manually operated recodable coded switch in which a code may be inserted, tried and used to actuate a lever controlling an external device. After attempting a code, the switch's code wheels must be returned to their zero positions before another try is made.
Binary primitive alternant codes
NASA Technical Reports Server (NTRS)
Helgert, H. J.
1975-01-01
In this note we investigate the properties of two classes of binary primitive alternant codes that are generalizations of the primitive BCH codes. For these codes we establish certain equivalence and invariance relations and obtain values of d and d*, the minimum distances of the codes and their duals.
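For intuition about d and d*, the minimum distance of a small binary linear code can be found by exhaustive search over all nonzero messages. This brute-force sketch (not the note's analytical method, and feasible only for tiny codes) is illustrated with the [7,4] Hamming code, a simple BCH relative:

```python
from itertools import product

def min_distance(G):
    """Brute-force minimum distance of a binary linear code, given its
    generator matrix G as a list of rows over GF(2). Enumerates all
    2^k - 1 nonzero codewords, so only practical for very small k."""
    k, n = len(G), len(G[0])
    best = n
    for msg in product([0, 1], repeat=k):
        if any(msg):
            # codeword = msg * G over GF(2), computed column by column
            cw = [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]
            best = min(best, sum(cw))   # weight = distance for linear codes
    return best

# Systematic [7,4] Hamming code: minimum distance 3
G = [[1, 0, 0, 0, 0, 1, 1],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 1, 1, 1]]
```

The dual distance d* can be obtained the same way by running the search on a generator matrix of the dual code.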
Systems Improved Numerical Fluids Analysis Code
NASA Technical Reports Server (NTRS)
Costello, F. A.
1990-01-01
The Systems Improved Numerical Fluids Analysis Code, SINFAC, consists of additional routines added to the April 1983 version of SINDA. The additional routines provide for mathematical modeling of active heat-transfer loops. The program simulates steady-state and pseudo-transient operation of 16 different components of heat-transfer loops, including radiators, evaporators, condensers, mechanical pumps, reservoirs, and many types of valves and fittings. It also contains a property-analysis routine used to compute thermodynamic properties of 20 different refrigerants. The source code is written in FORTRAN 77.
NASA Technical Reports Server (NTRS)
Shahshahani, M.
1991-01-01
The performance characteristics are discussed of certain algebraic geometric codes. Algebraic geometric codes have good minimum distance properties. On many channels they outperform other comparable block codes; therefore, one would expect them eventually to replace some of the block codes used in communications systems. It is suggested that it is unlikely that they will become useful substitutes for the Reed-Solomon codes used by the Deep Space Network in the near future. However, they may be applicable to systems where the signal to noise ratio is sufficiently high so that block codes would be more suitable than convolutional or concatenated codes.
NASA Technical Reports Server (NTRS)
Divsalar, Dariush (Inventor); Abbasfar, Aliazam (Inventor); Jones, Christopher R. (Inventor); Dolinar, Samuel J. (Inventor); Thorpe, Jeremy C. (Inventor); Andrews, Kenneth S. (Inventor); Yao, Kung (Inventor)
2008-01-01
An apparatus and method for encoding low-density parity check codes. Together with a repeater, an interleaver and an accumulator, the apparatus comprises a precoder, thus forming accumulate-repeat-accumulate (ARA) codes. Protographs representing various types of ARA codes, including AR3A, AR4A and ARJA codes, are described. High performance is obtained when compared to the performance of current repeat-accumulate (RA) or irregular repeat-accumulate (IRA) codes.
ERIC Educational Resources Information Center
Crompton, Helen; LaFrance, Jason; van 't Hooft, Mark
2012-01-01
A QR (quick-response) code is a two-dimensional scannable code, similar in function to a traditional bar code that one might find on a product at the supermarket. The main difference between the two is that, while a traditional bar code can hold a maximum of only 20 digits, a QR code can hold up to 7,089 characters, so it can contain much more…
Code Verification Results of an LLNL ASC Code on Some Tri-Lab Verification Test Suite Problems
Anderson, S R; Bihari, B L; Salari, K; Woodward, C S
2006-12-29
As scientific codes become more complex and involve larger numbers of developers and algorithms, chances for algorithmic implementation mistakes increase. In this environment, code verification becomes essential to building confidence in the code implementation. This paper will present first results of a new code verification effort within LLNL's B Division. In particular, we will show results of code verification of the LLNL ASC ARES code on the test problems: Su Olson non-equilibrium radiation diffusion, Sod shock tube, Sedov point blast modeled with shock hydrodynamics, and Noh implosion.
Asymmetric quantum convolutional codes
NASA Astrophysics Data System (ADS)
La Guardia, Giuliano G.
2016-01-01
In this paper, we construct the first families of asymmetric quantum convolutional codes (AQCCs). These new AQCCs are constructed by means of the CSS-type construction applied to suitable families of classical convolutional codes, which are also constructed here. The new codes have non-catastrophic generator matrices, and they have great asymmetry. Since our constructions are performed algebraically, i.e. we develop general algebraic methods and properties to perform the constructions, it is possible to derive several families of such codes and not only codes with specific parameters. Additionally, several different types of such codes are obtained.
Radiation load to the SNAP CCD
N. V. Mokhov, I. L. Rakhno and S. I. Striganov
2003-08-14
Results of an express Monte Carlo analysis with the MARS14 code of the radiation load to the CCD optical detectors in the SuperNova Acceleration Probe (SNAP) mission are presented for a realistic radiation environment over the satellite orbit.
How Should I Care for Myself During Radiation Therapy?
Get plenty of rest. Many patients experience …
Cellulases and coding sequences
Li, Xin-Liang; Ljungdahl, Lars G.; Chen, Huizhong
2001-01-01
The present invention provides three fungal cellulases, their coding sequences, recombinant DNA molecules comprising the cellulase coding sequences, recombinant host cells and methods for producing same. The present cellulases are from Orpinomyces PC-2.
Cellulases and coding sequences
Li, Xin-Liang; Ljungdahl, Lars G.; Chen, Huizhong
2001-02-20
The present invention provides three fungal cellulases, their coding sequences, recombinant DNA molecules comprising the cellulase coding sequences, recombinant host cells and methods for producing same. The present cellulases are from Orpinomyces PC-2.
NASA Technical Reports Server (NTRS)
Divsalar, D.; Pollara, F.
1995-01-01
A description is given of multiple turbo codes and a suitable decoder structure derived from an approximation to the maximum a posteriori probability (MAP) decision rule, which is substantially different from the decoder for two-code-based encoders.
ERIC Educational Resources Information Center
Shumack, Kellie A.; Reilly, Erin; Chamberlain, Nik
2013-01-01
space, has error-correction capacity, and can be read from any direction. These codes are used in manufacturing, shipping, and marketing, as well as in education. QR codes can be created to produce…
NASA Technical Reports Server (NTRS)
Goerke, W. S.
1972-01-01
A manual is presented as an aid in using the STEEP32 code. The code is the EXEC 8 version of the STEEP code (STEEP is an acronym for shock two-dimensional Eulerian elastic plastic). The major steps in a STEEP32 run are illustrated in a sample problem. There is a detailed discussion of the internal organization of the code, including a description of each subroutine.
Color code identification in coded structured light.
Zhang, Xu; Li, Youfu; Zhu, Limin
2012-08-01
Color code is widely employed in coded structured light to reconstruct the three-dimensional shape of objects. Before determining the correspondence, a very important step is to identify the color code. Until now, the lack of an effective evaluation standard has hindered progress in this unsupervised classification task. In this paper, we propose a benchmark-based framework to explore this problem. Two basic facets of color code identification are discussed: color feature selection and clustering algorithm design. First, we adopt analysis methods to evaluate the performance of different color features, and the ranking of these color features by discriminating power is established after a large number of experiments. Second, in order to overcome the drawback of K-means, a decision-directed method is introduced to find the initial centroids. Quantitative comparisons affirm that our method is robust with high accuracy, and it can find or closely approach the global peak. PMID:22859022
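A minimal K-means baseline for clustering color-feature vectors might look like the following. The deterministic farthest-point seeding here is a generic stand-in for the initial-centroid sensitivity the paper addresses; the authors' own decision-directed initialization is not reproduced.

```python
def dist(p, q):
    """Squared Euclidean distance between two feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def centroid(cluster):
    """Component-wise mean of a nonempty list of vectors."""
    return tuple(sum(c[i] for c in cluster) / len(cluster)
                 for i in range(len(cluster[0])))

def kmeans(points, k, iters=20):
    """Plain K-means with deterministic farthest-point seeding
    (a common remedy for K-means' sensitivity to initial centroids)."""
    cents = [points[0]]
    while len(cents) < k:
        cents.append(max(points, key=lambda p: min(dist(p, c) for c in cents)))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: dist(p, cents[i]))].append(p)
        cents = [centroid(cl) if cl else cents[i]
                 for i, cl in enumerate(clusters)]
    return cents
```

In the structured-light setting, `points` would be the per-pixel color features (e.g., in an RGB or hue-based space) and `k` the number of code colors.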
Arcetri Spectral Code for Thin Plasmas
NASA Astrophysics Data System (ADS)
Landi, E.; Landini, M.
2010-07-01
The Arcetri spectral code allows evaluation of the spectrum of the radiation emitted by hot, optically thin plasmas in the spectral range 1-2000 Angstroms. The database has been updated to include atomic data and radiative and collisional rates for calculating level populations and line emissivities for a number of ions of the minor elements; a critical compilation of the electron collision excitation data for these elements has been performed. The present version of the program includes the CHIANTI database for the most abundant elements, the minor-element data, and an Fe III atomic model with radiative and collisional data.
The Arcetri spectral code for thin plasmas
NASA Astrophysics Data System (ADS)
Landi, E.; Landini, M.
1998-12-01
The Arcetri spectral code allows evaluation of the spectrum of the radiation emitted by hot, optically thin plasmas in the spectral range 1-2000 Angstroms. The database has been updated to include atomic data and radiative and collisional rates for calculating level populations and line emissivities for a number of ions of the minor elements; a critical compilation of the electron collision excitation data for these elements has been performed. The present version of the program includes the CHIANTI database for the most abundant elements, the minor-element data, and an Fe III atomic model with radiative and collisional data.
Software Certification - Coding, Code, and Coders
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Holzmann, Gerard J.
2011-01-01
We describe a certification approach for software development that has been adopted at our organization. JPL develops robotic spacecraft for the exploration of the solar system. The flight software that controls these spacecraft is considered to be mission critical. We argue that the goal of a software certification process cannot be the development of "perfect" software, i.e., software that can be formally proven to be correct under all imaginable and unimaginable circumstances. More realistically, the goal is to guarantee a software development process that is conducted by knowledgeable engineers, who follow generally accepted procedures to control known risks, while meeting agreed upon standards of workmanship. We target three specific issues that must be addressed in such a certification procedure: the coding process, the code that is developed, and the skills of the coders. The coding process is driven by standards (e.g., a coding standard) and tools. The code is mechanically checked against the standard with the help of state-of-the-art static source code analyzers. The coders, finally, are certified in on-site training courses that include formal exams.
Jow, Hong-Nian; Murfin, W.B.; Johnson, J.D.
1993-11-01
This report describes the source term estimation codes, XSORs. The codes are written for three pressurized water reactors (Surry, Sequoyah, and Zion) and two boiling water reactors (Peach Bottom and Grand Gulf). The ensemble of codes has been named XSOR. The purpose of the XSOR codes is to estimate the source terms which would be released to the atmosphere in severe accidents. A source term includes the release fractions of several radionuclide groups, the timing and duration of releases, the rates of energy release, and the elevation of releases. The codes have been developed by Sandia National Laboratories for the US Nuclear Regulatory Commission (NRC) in support of the NUREG-1150 program. The XSOR codes are fast-running parametric codes and are used as surrogates for detailed mechanistic codes. The XSOR codes also provide the capability to explore phenomena and their uncertainty which are not currently modeled by the mechanistic codes. The uncertainty distributions of input parameters may be used by an XSOR code to estimate the uncertainty of source terms.
Greg Flach, Frank Smith
2014-05-14
DLLExternalCode is a general dynamic-link library (DLL) interface for linking GoldSim (www.goldsim.com) with external codes. The overall concept is to use GoldSim as the top-level modeling software, with interfaces to external codes for specific calculations. The DLLExternalCode DLL that performs the linking function is designed to take a list of code inputs from GoldSim, create an input file for the external application, run the external code, and return a list of outputs, read from files created by the external application, back to GoldSim. Instructions for creating the input file, running the external code, and reading the output are contained in an instructions file that is read and interpreted by the DLL.
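The write-run-read loop described above can be sketched conceptually in Python. The real interface is a compiled DLL driven by GoldSim; the `fmt` and `parse` callables here are hypothetical stand-ins for what its instructions file specifies, and the external program's command-line convention is assumed.

```python
import os
import subprocess
import tempfile

def run_external(inputs, exe, fmt, parse):
    """Conceptual sketch of the DLLExternalCode workflow: write an input
    file for the external application, run it, and read the outputs back.
    fmt(inputs) -> input-file text; parse(text) -> list of outputs."""
    with tempfile.TemporaryDirectory() as d:
        infile = os.path.join(d, "model.in")
        outfile = os.path.join(d, "model.out")
        with open(infile, "w") as f:
            f.write(fmt(inputs))                            # 1. create input file
        subprocess.run([exe, infile, outfile], check=True)  # 2. run external code
        with open(outfile) as f:
            return parse(f.read())                          # 3. return outputs
```

The key design point mirrored here is that the linking layer knows nothing about the external code's physics; only the formatting and parsing rules change per application.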
STATUS OF THE MCNPX TRANSPORT CODE
Hughes, H.G.; Chadwick, M.B.
2000-10-01
The Monte Carlo particle transport code MCNPX and its associated data have been the focus of a major development effort at Los Alamos for several years. The system has reached a mature state and has become a significant tool for many intermediate- and high-energy particle transport applications. A recent version has been released to the Radiation Safety Information Computational Center (RSICC). A recent report provides an overview of the code and an extensive set of references for the component physics modules used in the code. In this paper we review the status of the developmental version of MCNPX and describe some important new enhancements, including the use of evaluated nuclear data files for proton transport; the use of photonuclear reaction data; improved elastic and inelastic reaction cross sections for nucleons, antinucleons, pions, and kaons; and two new modes of operation of the code. We also illustrate the use of the new proton and photonuclear data in two representative applications.
Synchrotron Radiation Workshop (SRW)
Chubar, O.; Elleaume, P.
2013-03-01
"Synchrotron Radiation Workshop" (SRW) is a physical optics computer code for calculation of detailed characteristics of Synchrotron Radiation (SR) generated by relativistic electrons in magnetic fields of arbitrary configuration and for simulation of the radiation wavefront propagation through optical systems of beamlines. Frequency-domain near-field methods are used for the SR calculation, and the Fourier-optics based approach is generally used for the wavefront propagation simulation. The code enables both fully- and partially-coherent radiation propagation simulations in steady-state and in frequency-/time-dependent regimes. With these features, the code has already proven its utility for a large number of applications in infrared, UV, soft and hard X-ray spectral range, in such important areas as analysis of spectral performances of new synchrotron radiation sources, optimization of user beamlines, development of new optical elements, source and beamline diagnostics, and even complete simulation of SR based experiments. Besides the SR applications, the code can be efficiently used for various simulations involving conventional lasers and other sources. SRW versions interfaced to Python and to IGOR Pro (WaveMetrics), as well as cross-platform library with C API, are available.
Type I X-ray burst simulation code
2007-07-01
dAGILE is an astrophysical code that simulates accretion of matter onto a neutron star and the subsequent x-ray burst. It is a one-dimensional time-dependent spherically symmetric code with generalized nuclear reaction networks, diffusive radiation/conduction, realistic boundary conditions, and general relativistic hydrodynamics. The code is described in more detail in Astrophysical Journal 650(2006)332 and Astrophysical Journal Supplements 174(2008)261.
Adaptive entropy coded subband coding of images.
Kim, Y H; Modestino, J W
1992-01-01
The authors describe a design approach, called 2-D entropy-constrained subband coding (ECSBC), based upon recently developed 2-D entropy-constrained vector quantization (ECVQ) schemes. The output indexes of the embedded quantizers are further compressed by use of noiseless entropy coding schemes, such as Huffman or arithmetic codes, resulting in variable-rate outputs. Depending upon the specific configurations of the ECVQ and ECPVQ schemes over the subbands, many different types of SBC schemes can be derived within the generic 2-D ECSBC framework. Among these, the authors concentrate on three representative types of 2-D ECSBC schemes and provide relative performance evaluations. They also describe an adaptive buffer-instrumented version of 2-D ECSBC, called 2-D ECSBC/AEC, for use with fixed-rate channels, which completely eliminates buffer overflow/underflow problems. This adaptive scheme achieves performance quite close to that of the corresponding ideal 2-D ECSBC system. PMID:18296138
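The "noiseless entropy coding" stage mentioned above can be illustrated with a plain Huffman code over quantizer output indexes. This is a generic sketch of that stage, not the paper's ECSBC design: frequent indexes get short codewords, producing the variable-rate output the abstract describes.

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code from a sequence of quantizer output indexes.
    Returns {symbol: bitstring}. Assumes at least two distinct symbols.
    Heap entries are [weight, tiebreak, partial_codebook] so comparisons
    never reach the dict."""
    heap = [[w, i, {s: ""}] for i, (s, w) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)   # two least-frequent subtrees
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + p for s, p in c1.items()}
        merged.update({s: "1" + p for s, p in c2.items()})
        heapq.heappush(heap, [w1 + w2, tie, merged])
        tie += 1
    return heap[0][2]
```

For the index stream `[0, 0, 0, 0, 1, 1, 2]` the most frequent index receives a 1-bit codeword, so the coded length (10 bits) beats the 14 bits of a fixed 2-bit-per-index representation.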
Generating code adapted for interlinking legacy scalar code and extended vector code
Gschwind, Michael K
2013-06-04
Mechanisms for intermixing code are provided. Source code is received for compilation using an extended Application Binary Interface (ABI) that extends a legacy ABI and uses a different register configuration than the legacy ABI. First compiled code is generated based on the source code, the first compiled code comprising code for accommodating the difference in register configurations used by the extended ABI and the legacy ABI. The first compiled code and second compiled code are intermixed to generate intermixed code, the second compiled code being compiled code that uses the legacy ABI. The intermixed code comprises at least one call instruction that is one of a call from the first compiled code to the second compiled code or a call from the second compiled code to the first compiled code. The code for accommodating the difference in register configurations is associated with the at least one call instruction.
Peter, Frank J.; Dalton, Larry J.; Plummer, David W.
2002-01-01
A new class of mechanical code comparators is described which have broad potential for application in safety, surety, and security applications. These devices can be implemented as micro-scale electromechanical systems that isolate a secure or otherwise controlled device until an access code is entered. This access code is converted into a series of mechanical inputs to the mechanical code comparator, which compares the access code to a pre-input combination, entered previously into the mechanical code comparator by an operator at the system security control point. These devices provide extremely high levels of robust security. Being totally mechanical in operation, an access control system properly based on such devices cannot be circumvented by software attack alone.
Elder, D
1984-06-01
The logic of genetic control of development may be based on a binary epigenetic code. This paper revises the author's previous scheme dealing with the numerology of annelid metamerism in these terms. Certain features of the code had been deduced to be combinatorial, others not. This paradoxical contrast is resolved here by the interpretation that these features relate to different operations of the code: the combinatorial to coding the identity of units, the non-combinatorial to coding the production of units. Consideration of a second paradox in the theory of epigenetic coding leads to a new solution which further provides a basis for epimorphic regeneration, and may in particular throw light on the "regeneration-duplication" phenomenon. A possible test of the model is also put forward. PMID:6748695
Robinson, David; Comp, Dip; Schulz, Erich; Brown, Philip; Price, Colin
1997-01-01
The Read Codes are a hierarchically arranged controlled clinical vocabulary introduced in the early 1980s and now consisting of three maintained versions of differing complexity. The code sets are dynamic and are updated quarterly in response to requests from users, including clinicians in both primary and secondary care, software suppliers, and advice from a network of specialist healthcare professionals. The codes' continual evolution of content, both across and within versions, highlights tensions between different users and uses of coded clinical data. Internal processes, external interactions, and new structural features implemented by the NHS Centre for Coding and Classification (NHSCCC) for user-interactive maintenance of the Read Codes are described, and over 2000 items of user feedback received over a 15-month period are analysed. PMID:9391934
Brownell, J.H.; Bowers, R.L.
1997-04-01
The Los Alamos foil implosion program has the goal of producing an intense, high-energy density x-ray source by converting the energy of a magnetically imploded plasma into radiation and material energy. One of the methods for converting the plasma energy into thermal energy and radiation and utilizing it for experiments is called the flying radiation case (FRC). In this paper the authors shall model the FRC and provide a physical description of the processes involved. An analytic model of a planar FRC in the hydrodynamic approximation is used to describe the assembly and shock heating of a central cushion by a conducting liner driver. The results are also used to benchmark a hydrodynamics code for modeling an FRC. They then use a radiation-hydrodynamics computational model to explore the effects of radiation production and transport when a gold plasma assembles on a CH cushion. Results are presented for the structure and evolution of the radiation hohlraum.
NASA Astrophysics Data System (ADS)
Bravyi, Sergey
Combining protection from noise with computational universality is one of the biggest challenges in fault-tolerant quantum computing. Topological stabilizer codes such as the 2D surface code can tolerate a high level of noise, but implementing logical gates, especially non-Clifford ones, requires a prohibitively large overhead due to the need for state distillation. In this talk I will describe a new family of 2D quantum error-correcting codes that enable a transversal implementation of all logical gates required for universal quantum computing. Transversal logical gates (TLGs) are encoded operations that can be realized by applying a single-qubit rotation to each physical qubit. TLGs are highly desirable since they introduce no overhead and do not spread errors. It was known before that a quantum code can have only a finite number of TLGs, which rules out computational universality. Our scheme circumvents this no-go result by combining the TLGs of two different quantum codes using the gauge-fixing method pioneered by Paetznick and Reichardt. The first code, closely related to the 2D color code, enables a transversal implementation of all single-qubit Clifford gates, such as the Hadamard gate and the π/2 phase shift. The second code, which we call a doubled color code, provides a transversal T-gate, where T is the π/4 phase shift. The Clifford+T gate set is known to be computationally universal. The two codes can be laid out on the honeycomb lattice with two qubits per site, such that the code conversion requires parity measurements for six-qubit Pauli operators supported on faces of the lattice. I will also describe numerical simulations of logical Clifford+T circuits encoded by the distance-3 doubled color code. Based on joint work with Andrew Cross.
Phonological coding during reading
Leinenger, Mallorie
2014-01-01
The exact role that phonological coding (the recoding of written, orthographic information into a sound-based code) plays during silent reading has been extensively studied for more than a century. Despite the large body of research surrounding the topic, varying theories as to the time course and function of this recoding still exist. The present review synthesizes this body of research, addressing the topics of time course and function in tandem. The varying theories surrounding the function of phonological coding (e.g., that phonological codes aid lexical access, that phonological codes aid comprehension and bolster short-term memory, or that phonological codes are largely epiphenomenal in skilled readers) are first outlined, and the time courses that each maps onto (e.g., that phonological codes come online early (pre-lexical) or that phonological codes come online late (post-lexical)) are discussed. Next the research relevant to each of these proposed functions is reviewed, discussing the varying methodologies that have been used to investigate phonological coding (e.g., response time methods, reading while eye-tracking or recording EEG and MEG, concurrent articulation) and highlighting the advantages and limitations of each with respect to the study of phonological coding. In response to the view that phonological coding is largely epiphenomenal in skilled readers, research on the use of phonological codes in prelingually, profoundly deaf readers is reviewed. Finally, implications for current models of word identification (activation-verification model (Van Orden, 1987), dual-route model (e.g., Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001), parallel distributed processing model (Seidenberg & McClelland, 1989)) are discussed. PMID:25150679
Reid, R.L.; Barrett, R.J.; Brown, T.G.; Gorker, G.E.; Hooper, R.J.; Kalsi, S.S.; Metzler, D.H.; Peng, Y.K.M.; Roth, K.E.; Spampinato, P.T.
1985-03-01
The FEDC Tokamak Systems Code calculates tokamak performance, cost, and configuration as a function of plasma engineering parameters. This version of the code models experimental tokamaks. It does not currently consider tokamak configurations that generate electrical power or incorporate breeding blankets. The code has a modular (or subroutine) structure to allow independent modeling for each major tokamak component or system. A primary benefit of modularization is that a component module may be updated without disturbing the remainder of the systems code as long as the input to or output from the module remains unchanged.
Domino, Stefan; Luketa-Hanlin, Anay; Gallegos, Carlos
2006-10-27
The FAA Smoke Transport Code, a physics-based computational fluid dynamics tool that couples heat, mass, and momentum transfer, has been developed to provide information on smoke transport in cargo compartments with various geometries and flight conditions. The software package contains a graphical user interface for specification of geometry and boundary conditions, an analysis module for solving the governing equations, and a post-processing tool. The current code was produced by making substantial improvements and additions to a code obtained from a university. The original code was able to compute steady, uniform, isothermal turbulent pressurization. In addition, a preprocessor and postprocessor were added to arrive at the current software package.
NASA Astrophysics Data System (ADS)
Tang, Bin; Yang, Shenghao; Ye, Baoliu; Yin, Yitong; Lu, Sanglu
2015-12-01
Chunked codes are efficient random linear network coding (RLNC) schemes with low computational cost, where the input packets are encoded into small chunks (i.e., subsets of the coded packets). During the network transmission, RLNC is performed within each chunk. In this paper, we first introduce a simple transfer matrix model to characterize the transmission of chunks and derive some basic properties of the model to facilitate the performance analysis. We then focus on the design of overlapped chunked codes, a class of chunked codes whose chunks are non-disjoint subsets of input packets, which are of special interest since they can be encoded with negligible computational cost and in a causal fashion. We propose expander chunked (EC) codes, the first class of overlapped chunked codes that have an analyzable performance, where the construction of the chunks makes use of regular graphs. Numerical and simulation results show that in some practical settings, EC codes can achieve rates within 91 to 97 % of the optimum and outperform the state-of-the-art overlapped chunked codes significantly.
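The chunk-level encoding described above can be illustrated with a toy Python sketch. It performs RLNC within a single chunk over GF(2); the packet and chunk sizes, the binary field, and all function names are illustrative assumptions for the demo, not the expander chunked construction from the paper:

```python
import numpy as np

# Toy sketch of random linear network coding (RLNC) within one chunk over
# GF(2). Illustrative only: sizes and field are assumptions for the demo.
rng = np.random.default_rng(1)
chunk = rng.integers(0, 2, size=(4, 16))   # one chunk = 4 input packets

def rlnc_encode(chunk, n_coded, rng):
    """Each coded packet is a random GF(2) combination of the chunk's packets."""
    coeffs = rng.integers(0, 2, size=(n_coded, chunk.shape[0]))
    return coeffs, (coeffs @ chunk) % 2

def rlnc_decode(coeffs, coded, k):
    """Gauss-Jordan elimination over GF(2); returns None if rank < k."""
    a = np.concatenate([coeffs, coded], axis=1).astype(int)
    r = 0
    for c in range(k):
        piv = next((i for i in range(r, len(a)) if a[i, c]), None)
        if piv is None:
            return None                    # coefficient matrix rank-deficient
        a[[r, piv]] = a[[piv, r]]
        for i in range(len(a)):
            if i != r and a[i, c]:
                a[i] = (a[i] + a[r]) % 2
        r += 1
    return a[:k, k:]

# A receiver keeps collecting coded packets until the chunk decodes.
recovered = None
while recovered is None:
    coeffs, coded = rlnc_encode(chunk, 6, rng)
    recovered = rlnc_decode(coeffs, coded, 4)
assert np.array_equal(recovered, chunk)
```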
A user's manual for Electromagnetic Surface Patch (ESP) code. Version 2: Polygonal plates and wires
NASA Astrophysics Data System (ADS)
Newman, E. H.; Alexandropoulos, P.
1983-09-01
This report serves as a user's manual for the Electromagnetic Surface Patch (ESP) Code. This code is a method of moments solution for interconnections of thin wires and polygonal plates. The code can compute currents, input impedance, efficiency, mutual coupling, and far-zone radiation and scattering patterns. In addition to describing the code input and output, the use of the code is illustrated by simple examples. Subroutine descriptions are also given.
Research on Universal Combinatorial Coding
Lu, Jun; Zhang, Zhuo; Mo, Juan
2014-01-01
The concept of universal combinatorial coding is proposed. Because many coding methods are related to one another to some degree, a universal coding method objectively exists that can serve as a bridge connecting them. Universal combinatorial coding is lossless and is based on combinatorics; its combinational and exhaustive properties make it closely related to existing coding methods. It does not depend on the probability statistics of the information source, and it has characteristics spanning all three coding branches. This paper analyzes the relationship between universal combinatorial coding and a variety of coding methods, investigates several application technologies of the method, and analyzes its efficiency theoretically. The multiple characteristics and applications of universal combinatorial coding are unique among existing coding methods, and the method has both theoretical research and practical application value. PMID:24772019
Benchmarking of Neutron Production of Heavy-Ion Transport Codes
Remec, Igor; Ronningen, Reginald M.; Heilbronn, Lawrence
2012-01-01
Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in design and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required.
Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding
Wu, Yueying; Jia, Kebin; Gao, Guandong
2016-01-01
In high efficiency video coding (HEVC), the coding tree contributes to excellent compression performance, but it also brings extremely high computational complexity. This paper presents innovative work on improving the coding tree to further reduce encoding time, proposing a novel low-complexity coding tree mechanism for fast HEVC coding unit (CU) encoding. First, the paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP), and content change (CC). Second, a CU coding tree probability model is proposed for modeling and predicting the CU distribution. Finally, a CU coding tree probability update is proposed to address probabilistic-model distortion caused by CC. Experimental results show that the proposed mechanism significantly reduces encoding time, by 27% for lossy coding and 42% for visually lossless and lossless coding, and improves coding performance under various application conditions. PMID:26999741
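A heavily simplified sketch of the underlying idea, predicting CU splits from past statistics and letting old statistics fade as content changes, is below. The class, thresholds, and exponential-forgetting update rule are our illustrative assumptions, not the published model:

```python
# Toy CU-split predictor with exponential forgetting (illustrative only).
class CuSplitModel:
    def __init__(self, decay=0.9):
        self.split = 1.0    # pseudo-count of observed "split" decisions
        self.keep = 1.0     # pseudo-count of observed "no split" decisions
        self.decay = decay  # forgetting factor: adapts to content change

    def p_split(self):
        return self.split / (self.split + self.keep)

    def update(self, was_split):
        # Decay old statistics so the model tracks content change (CC).
        self.split *= self.decay
        self.keep *= self.decay
        if was_split:
            self.split += 1
        else:
            self.keep += 1

    def try_split_first(self, threshold=0.5):
        # Order (or prune) the rate-distortion search by predicted outcome.
        return self.p_split() >= threshold

model = CuSplitModel()
for outcome in [True, True, True, False, True, True]:
    model.update(outcome)
assert model.try_split_first()      # recent history is split-heavy
assert 0.0 < model.p_split() < 1.0
```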
GALPROP: New Developments in CR Propagation Code
NASA Technical Reports Server (NTRS)
Moskalenko, I. V.; Jones, F. C.; Mashnik, S. G.; Strong, A. W.; Ptuskin, V. S.
2003-01-01
The numerical Galactic CR propagation code GALPROP has been shown to reproduce simultaneously observational data of many kinds related to CR origin and propagation. It has been validated on direct measurements of nuclei, antiprotons, electrons, positrons as well as on astronomical measurements of gamma rays and synchrotron radiation. Such data provide many independent constraints on model parameters while revealing some contradictions in the conventional view of Galactic CR propagation. Using a new version of GALPROP we study new effects such as processes of wave-particle interactions in the interstellar medium. We also report about other developments in the CR propagation code.
NASA Technical Reports Server (NTRS)
Whalen, Michael; Schumann, Johann; Fischer, Bernd
2002-01-01
Code certification is a lightweight approach to demonstrate software quality on a formal level. Its basic idea is to require producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates which can be checked independently. Since code certification uses the same underlying technology as program verification, it also requires many detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding these annotations to the code is time-consuming and error-prone. We address this problem by combining code certification with automatic program synthesis. We propose an approach to generate simultaneously, from a high-level specification, code and all annotations required to certify the generated code. Here, we describe a certification extension of AUTOBAYES, a synthesis tool which automatically generates complex data analysis programs from compact specifications. AUTOBAYES contains sufficient high-level domain knowledge to generate detailed annotations. This allows us to use a general-purpose verification condition generator to produce a set of proof obligations in first-order logic. The obligations are then discharged using the automated theorem prover E-SETHEO. We demonstrate our approach by certifying operator safety for a generated iterative data classification program without manual annotation of the code.
Lichenase and coding sequences
Li, Xin-Liang; Ljungdahl, Lars G.; Chen, Huizhong
2000-08-15
The present invention provides a fungal lichenase, i.e., an endo-1,3-1,4-.beta.-D-glucanohydrolase, its coding sequence, recombinant DNA molecules comprising the lichenase coding sequences, recombinant host cells and methods for producing same. The present lichenase is from Orpinomyces PC-2.
ERIC Educational Resources Information Center
Million, June
2004-01-01
Most schools have a code of conduct, pledge, or behavioral standards, set by the district or school board with the school community. In this article, the author features some schools that created a new vision of instilling codes of conduct in students based on work quality, respect, safety and courtesy. She suggests that communicating the code…
ERIC Educational Resources Information Center
Division for Early Childhood, Council for Exceptional Children, 2009
2009-01-01
The Code of Ethics of the Division for Early Childhood (DEC) of the Council for Exceptional Children is a public statement of principles and practice guidelines supported by the mission of DEC. The foundation of this Code is based on sound ethical reasoning related to professional practice with young children with disabilities and their families…
NASA Technical Reports Server (NTRS)
Hribar, Michelle R.; Frumkin, Michael; Jin, Haoqiang; Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)
1998-01-01
Over the past decade, high performance computing has evolved rapidly; systems based on commodity microprocessors have been introduced in quick succession from at least seven vendors/families. Porting codes to every new architecture is a difficult problem; in particular, here at NASA, there are many large CFD applications that are very costly to port to new machines by hand. The LCM ("Legacy Code Modernization") Project is the development of an integrated parallelization environment (IPE) which performs the automated mapping of legacy CFD (Fortran) applications to state-of-the-art high performance computers. While most projects to port codes focus on the parallelization of the code, we consider porting to be an iterative process consisting of several steps: 1) code cleanup, 2) serial optimization,3) parallelization, 4) performance monitoring and visualization, 5) intelligent tools for automated tuning using performance prediction and 6) machine specific optimization. The approach for building this parallelization environment is to build the components for each of the steps simultaneously and then integrate them together. The demonstration will exhibit our latest research in building this environment: 1. Parallelizing tools and compiler evaluation. 2. Code cleanup and serial optimization using automated scripts 3. Development of a code generator for performance prediction 4. Automated partitioning 5. Automated insertion of directives. These demonstrations will exhibit the effectiveness of an automated approach for all the steps involved with porting and tuning a legacy code application for a new architecture.
Lakhani, Gopal
2003-01-01
It is a well-observed characteristic that when a DCT block is traversed in zigzag order, the AC coefficients generally decrease in size and the run-lengths of zero coefficients increase in number. This article presents a minor modification to the Huffman coding of the JPEG baseline compression algorithm to exploit this redundancy. For this purpose, DCT blocks are divided into bands so that each band can be coded using a separate code table. Three implementations are presented, all of which move the end-of-block marker up into the middle of the DCT block and use it to indicate the band boundaries. Experimental results are presented to compare the reduction in code size obtained by our methods with the JPEG sequential-mode Huffman coding and arithmetic coding methods. The average code reduction relative to the total image code size for one of our methods is 4%. Our methods can also be used for progressive image transmission and hence, experimental results are also given to compare them with two-, three-, and four-band implementations of the JPEG spectral selection method. PMID:18237897
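The zigzag traversal and band division described above can be sketched in Python as follows. The three-band boundaries chosen here are arbitrary illustrations, not the paper's:

```python
import numpy as np

# Sketch of the JPEG zigzag scan and a band split of an 8x8 DCT block.
def zigzag_order(n=8):
    """(row, col) pairs of an n x n block in JPEG zigzag scan order:
    anti-diagonals in order, direction alternating on each diagonal."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def zigzag(block):
    return [block[r, c] for r, c in zigzag_order(block.shape[0])]

# Hypothetical band boundaries over the 63 AC positions (index 0 is DC).
BANDS = [(1, 6), (6, 21), (21, 64)]

block = np.zeros((8, 8), dtype=int)
block[0, 0], block[0, 1], block[1, 0] = 100, 12, -7   # low-frequency energy
seq = zigzag(block)
assert seq[:3] == [100, 12, -7]           # DC first, then the strongest ACs
bands = [seq[lo:hi] for lo, hi in BANDS]  # each band gets its own code table
assert sum(len(b) for b in bands) == 63   # all 63 AC coefficients covered
```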
Binary concatenated coding system
NASA Technical Reports Server (NTRS)
Monford, L. G., Jr.
1973-01-01
Coding, using 3-bit binary words, is applicable to any measurement having integer scale up to 100. System using 6-bit data words can be expanded to read from 1 to 10,000, and 9-bit data words can increase range to 1,000,000. Code may be "read" directly by observation after memorizing simple listing of 9's and 10's.
Computerized mega code recording.
Burt, T W; Bock, H C
1988-04-01
A system has been developed to facilitate recording of advanced cardiac life support mega code testing scenarios. By scanning a paper "keyboard" using a bar code wand attached to a portable microcomputer, the person assigned to record the scenario can easily generate an accurate, complete, timed, and typewritten record of the given situations and the obtained responses. PMID:3354937
NASA Technical Reports Server (NTRS)
Baumert, L. D.; Mceliece, R. J.; Rumsey, H., Jr.
1979-01-01
In a previous paper Pierce considered the problem of optical communication from a novel viewpoint, and concluded that performance will likely be limited by issues of coding complexity rather than by thermal noise. This paper reviews the model proposed by Pierce and presents some results on the analysis and design of codes for this application.
Energy Conservation Code Decoded
Cole, Pam C.; Taylor, Zachary T.
2006-09-01
Designing an energy-efficient, affordable, and comfortable home is a lot easier thanks to a slim, easier-to-read booklet, the 2006 International Energy Conservation Code (IECC), published in March 2006. States, counties, and cities have begun reviewing the new code as a potential upgrade to their existing codes. Maintained under the public consensus process of the International Code Council, the IECC is designed to do just what its title says: promote the design and construction of energy-efficient homes and commercial buildings. "Homes" in this case means traditional single-family homes, duplexes, condominiums, and apartment buildings having three or fewer stories. The U.S. Department of Energy, which played a key role in proposing the changes that resulted in the new code, is offering a free training course that covers the residential provisions of the 2006 IECC.
Astrophysics Source Code Library
NASA Astrophysics Data System (ADS)
Allen, A.; DuPrie, K.; Berriman, B.; Hanisch, R. J.; Mink, J.; Teuben, P. J.
2013-10-01
The Astrophysics Source Code Library (ASCL), founded in 1999, is a free on-line registry for source codes of interest to astronomers and astrophysicists. The library is housed on the discussion forum for Astronomy Picture of the Day (APOD) and can be accessed at http://ascl.net. The ASCL has a comprehensive listing that covers a significant number of the astrophysics source codes used to generate results published in or submitted to refereed journals and continues to grow. The ASCL currently has entries for over 500 codes; its records are citable and are indexed by ADS. The editors of the ASCL and members of its Advisory Committee were on hand at a demonstration table in the ADASS poster room to present the ASCL, accept code submissions, show how the ASCL is starting to be used by the astrophysics community, and take questions on and suggestions for improving the resource.
The program RADLST (Radiation Listing)
Burrows, T.W.
1988-02-29
The program RADLST (Radiation Listing) is designed to calculate the nuclear and atomic radiations associated with the radioactive decay of nuclei. It uses as its primary input nuclear decay data in the Evaluated Nuclear Structure Data File (ENSDF) format. The code is written in FORTRAN 77 and, with a few exceptions, is consistent with the ANSI standard. 65 refs.
Effects of Nuclear Interactions in Space Radiation Transport
NASA Technical Reports Server (NTRS)
Lin, Zi-Wei; Barghouty, A. F.
2004-01-01
Space radiation transport codes have been developed to calculate radiation effects behind materials in human missions to the Moon, Mars or beyond. We study how nuclear fragmentation processes affect predictions from such radiation transport codes. In particular, we investigate the effects of fragmentation cross sections at different energies on fluxes, dose and dose-equivalent from galactic cosmic rays behind typical shielding materials.
Quantum convolutional codes derived from constacyclic codes
NASA Astrophysics Data System (ADS)
Yan, Tingsu; Huang, Xinmei; Tang, Yuansheng
2014-12-01
In this paper, three families of quantum convolutional codes are constructed. The first one and the second one can be regarded as a generalization of Theorems 3, 4, 7 and 8 [J. Chen, J. Li, F. Yang and Y. Huang, Int. J. Theor. Phys., doi:10.1007/s10773-014-2214-6 (2014)], in the sense that we drop the constraint q ≡ 1 (mod 4). Furthermore, the second one and the third one attain the quantum generalized Singleton bound.
Huffman coding in advanced audio coding standard
NASA Astrophysics Data System (ADS)
Brzuchalski, Grzegorz
2012-05-01
This article presents several hardware architectures for an Advanced Audio Coding (AAC) Huffman noiseless encoder, its optimisations, and a working implementation. Much attention has been paid to optimising the demand for hardware resources, especially memory size. The aim of the design was to produce as short a binary stream as possible within this standard. The Huffman encoder, together with the whole audio-video system, has been implemented in FPGA devices.
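As a software illustration of the goal stated above, producing the shortest possible bitstream, here is a minimal generic Huffman coder in Python. It is not the AAC codebook machinery or a hardware architecture; all names are our own:

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Build a prefix-free code: rare symbols get long codewords, frequent
    symbols short ones, minimising the total encoded length."""
    freq = Counter(symbols)
    if len(freq) == 1:                    # degenerate one-symbol alphabet
        return {next(iter(freq)): "0"}
    # Heap entries carry a unique tiebreaker so dicts are never compared.
    heap = [(n, i, {s: ""}) for i, (s, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)   # two least-frequent subtrees
        n2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (n1 + n2, tie, merged))
        tie += 1
    return heap[0][2]

data = "aaaabbbccd"
codes = huffman_codes(data)
bitstream = "".join(codes[s] for s in data)
assert len(codes["a"]) < len(codes["d"])   # frequent symbol, shorter code
assert len(bitstream) < 2 * len(data)      # beats fixed 2-bit coding here
```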
Collaborative Comparison of High-Energy-Density Physics Codes
NASA Astrophysics Data System (ADS)
Fatenejad, M.; Fryer, C.; Fryxell, B.; Lamb, D.; Myra, E.; Wohlbier, J.
2011-10-01
We will describe a collaborative effort involving the Flash Center for Computational Science, The Center for Radiative Shock Hydrodynamics (CRASH), LANL, and LLNL to compare several sophisticated radiation-hydrodynamics codes on a variety of HEDP test problems and experiments. Currently we are comparing efforts to simulate ongoing radiative shock experiments being conducted by CRASH at the OMEGA laser facility that are relevant to a wide range of astrophysical problems. The experiments drive a collapsed planar radiative shock through a Xenon-filled shock tube. Attempts to simulate these experiments have uncovered various challenges to obtaining agreement with experimental results. We will present the results of code-to-code comparisons that have enabled us to understand the impact of differences in numerical methods, physical approximations, microphysical parameters, etc. This work was supported in part by the US Department of Energy.
Coded aperture computed tomography
NASA Astrophysics Data System (ADS)
Choi, Kerkil; Brady, David J.
2009-08-01
Diverse physical measurements can be modeled by X-ray transforms. While X-ray tomography is the canonical example, reference structure tomography (RST) and coded aperture snapshot spectral imaging (CASSI) are examples of physically unrelated but mathematically equivalent sensor systems. Historically, most X-ray-transform-based systems sample continuous distributions and apply analytical inversion processes. On the other hand, RST and CASSI generate discrete multiplexed measurements implemented with coded apertures. This multiplexing of coded measurements allows for compression of measurements from a compressed sensing perspective. Compressed sensing (CS) is the observation that if the object has a sparse representation in some basis, then a certain number of random projections, typically far fewer than prescribed by the Shannon sampling rate, captures enough information for a highly accurate reconstruction of the object. This paper investigates the role of coded apertures in x-ray transform measurement systems (XTMs) in terms of data efficiency and reconstruction fidelity from a CS perspective. To conduct this, we construct a unified analysis using RST and CASSI measurement models. Also, we propose a novel compressive x-ray tomography measurement scheme which also exploits coding and multiplexing, and hence shares the analysis of the other two XTMs. Using this analysis, we perform a qualitative study on how coded apertures can be exploited to implement physical random projections by "regularizing" the measurement systems. Numerical studies and simulation results demonstrate several examples of the impact of coding.
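The compressed-sensing claim above can be demonstrated with a toy sketch: a sparse signal is recovered from far fewer random projections than its ambient dimension, here via orthogonal matching pursuit. The dimensions, Gaussian measurement matrix, and solver are our illustrative choices, not the paper's XTM models:

```python
import numpy as np

# Toy CS demo: a K-sparse signal recovered from M << N random projections.
rng = np.random.default_rng(0)
N, M, K = 64, 32, 3
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.normal(size=K) + 3.0
A = rng.normal(size=(M, N)) / np.sqrt(M)   # "coded" random projections
y = A @ x                                  # multiplexed measurements

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick the column most
    correlated with the residual, re-fit by least squares, repeat."""
    support, resid = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ resid))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        resid = y - A[:, support] @ coef
    xh = np.zeros(A.shape[1])
    xh[support] = coef
    return xh

xh = omp(A, y, K)
assert np.allclose(xh, x, atol=1e-6)   # correct support -> exact recovery
```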
Nelson, R.N.
1985-05-01
This publication lists all report number codes processed by the Office of Scientific and Technical Information. The report codes are substantially based on the American National Standards Institute standard "Standard Technical Report Number (STRN): Format and Creation" (ANSI Z39.23-1983). The STRN provides one of the primary methods of identifying a specific technical report and consists of two parts: the report code and the sequential number. The report code identifies the issuing organization, a specific program, or a type of document. The sequential number, which is assigned in sequence by each report-issuing entity, is not included in this publication. Part I of this compilation is alphabetized by report codes followed by issuing installations. Part II lists the issuing organization followed by the assigned report code(s). In both Parts I and II, the names of issuing organizations appear for the most part in the form used at the time the reports were issued. However, for some of the more prolific installations which have had name changes, all entries have been merged under the current name.
CosmoRec: Cosmological Recombination code
NASA Astrophysics Data System (ADS)
Chluba, Jens; Thomas, Rajat Mani
2013-04-01
CosmoRec solves the recombination problem including recombinations to highly excited states, corrections to the 2s-1s two-photon channel, HI Lyn-feedback, n>2 two-photon profile corrections, and n≥2 Raman-processes. The code can solve the radiative transfer equation of the Lyman-series photon field to obtain the required modifications to the rate equations of the resolved levels, and handles electron scattering, the effect of HeI intercombination transitions, and absorption of helium photons by hydrogen. It also allows accounting for dark matter annihilation and optionally includes detailed helium radiative transfer effects.
Hirayama, Hideo; Namito, Yoshihito; Bielajew, Alex F.; Wilderman, Scott J.; U., Michigan; Nelson, Walter R.; /SLAC
2005-12-20
In the nineteen years since EGS4 was released, it has been used in a wide variety of applications, particularly in medical physics, radiation measurement studies, and industrial development. Every new user and every new application bring new challenges for Monte Carlo code designers, and code refinements and bug fixes eventually result in a code that becomes difficult to maintain. Several of the code modifications represented significant advances in electron and photon transport physics, and required a more substantial invocation than code patching. Moreover, the arcane MORTRAN3[48] computer language of EGS4, was highest on the complaint list of the users of EGS4. The size of the EGS4 user base is difficult to measure, as there never existed a formal user registration process. However, some idea of the numbers may be gleaned from the number of EGS4 manuals that were produced and distributed at SLAC: almost three thousand. Consequently, the EGS5 project was undertaken. It was decided to employ the FORTRAN 77 compiler, yet include as much as possible, the structural beauty and power of MORTRAN3. This report consists of four chapters and several appendices. Chapter 1 is an introduction to EGS5 and to this report in general. We suggest that you read it. Chapter 2 is a major update of similar chapters in the old EGS4 report[126] (SLAC-265) and the old EGS3 report[61] (SLAC-210), in which all the details of the old physics (i.e., models which were carried over from EGS4) and the new physics are gathered together. The descriptions of the new physics are extensive, and not for the faint of heart. Detailed knowledge of the contents of Chapter 2 is not essential in order to use EGS, but sophisticated users should be aware of its contents. In particular, details of the restrictions on the range of applicability of EGS are dispersed throughout the chapter. First-time users of EGS should skip Chapter 2 and come back to it later if necessary. With the release of the EGS4 version
Weaver, H.J.
1981-11-01
The TRANSF code is a semi-interactive FORTRAN IV program designed to calculate the modal parameters of a (structural) system by performing a least-squares parameter fit to measured transfer function data. The code is available at LLNL on both the 7600 and the Cray machines. The transfer function data to be fit are read into the code via a disk file. The primary mode of output is FR80 graphics, although it is also possible to have results written either to the TTY or to a disk file.
Nonequilibrium radiation and chemistry models for aerocapture vehicle flowfields
NASA Technical Reports Server (NTRS)
Carlson, Leland A.
1990-01-01
The continued development and improvement of the viscous shock layer (VSL) nonequilibrium chemistry blunt body engineering code, the incorporation in a coupled manner of radiation models into the VSL code, and the initial development of appropriate precursor models are presented.
Hybrid Compton camera/coded aperture imaging system
Mihailescu, Lucian; Vetter, Kai M.
2012-04-10
A system in one embodiment includes an array of radiation detectors; and an array of imagers positioned behind the array of detectors relative to an expected trajectory of incoming radiation. A method in another embodiment includes detecting incoming radiation with an array of radiation detectors; detecting the incoming radiation with an array of imagers positioned behind the array of detectors relative to a trajectory of the incoming radiation; and performing at least one of Compton imaging using at least the imagers and coded aperture imaging using at least the imagers. A method in yet another embodiment includes detecting incoming radiation with an array of imagers positioned behind an array of detectors relative to a trajectory of the incoming radiation; and performing Compton imaging using at least the imagers.
Indirect solar loading of waste heat radiators
Kirkpatrick, R.C.; Tabor, J.E.; Lindman, E.L.; Cooper, A.J.
1988-01-01
Waste heat from space-based power systems must ultimately be radiated away into space. The local topology around the radiators must be considered from two standpoints: the scattering of sunlight onto the surfaces of the radiator, and the heat load that the radiator may put on nearby components of the system. A view factor code (SNAP) developed at Los Alamos allows the computation of the steady-state radiation environment for complex 3-D geometries. An example of the code's utility is given. 4 refs., 2 figs., 1 tab.
FORTRAN code-evaluation system
NASA Technical Reports Server (NTRS)
Capps, J. D.; Kleir, R.
1977-01-01
An automated code-evaluation system can be used to detect coding errors and unsound coding practices in any ANSI FORTRAN IV source code before they cause execution-time malfunctions. The system concentrates on FORTRAN code features that, while acceptable under the standard, are likely to produce undesirable results.
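A checker of this kind can be approximated with a few pattern rules. The sketch below is purely illustrative; the rule set, function names, and patterns are assumptions and not features of the actual NASA system:

```python
import re

# Hypothetical rules in the spirit of the evaluation system: constructs that
# are legal ANSI FORTRAN but are prone to run-time surprises.
RISKY_PATTERNS = {
    "floating-point equality test": re.compile(r"\.EQ\.", re.IGNORECASE),
    "computed GOTO": re.compile(r"GO\s*TO\s*\(", re.IGNORECASE),
    "EQUIVALENCE aliasing": re.compile(r"\bEQUIVALENCE\b", re.IGNORECASE),
}

def scan_fortran(source: str):
    """Return (line_number, description) pairs for risky constructs found."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for desc, pattern in RISKY_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, desc))
    return findings
```

Note the first rule over-reports: it flags `.EQ.` on integers too, which is harmless; a real checker would use type information rather than raw patterns.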
2013-04-18
The HotSpot Health Physics Codes were created to provide emergency response personnel and emergency planners with a fast, field-portable set of software tools for evaluating incidents involving radioactive material. The software is also used for safety analysis of facilities handling nuclear material. HotSpot provides a fast and usually conservative means for estimating the radiation effects associated with the short-term (less than 24 hours) atmospheric release of radioactive materials.
2010-03-02
The HotSpot Health Physics Codes were created to provide emergency response personnel and emergency planners with a fast, field-portable set of software tools for evaluating incidents involving radioactive material. The software is also used for safety analysis of facilities handling nuclear material. HotSpot provides a fast and usually conservative means for estimating the radiation effects associated with the short-term (less than 24 hours) atmospheric release of radioactive materials.
Wilson, R.E.; Freeman, L.N.; Walker, S.N.
1995-09-01
The FAST2 Code, which is capable of determining the structural loads of a flexible, teetering, horizontal-axis wind turbine, is described, and comparisons of calculated loads with test data at two wind speeds for the ESI-80 are given. The FAST2 Code models a two-bladed HAWT with degrees of freedom for blade flap, teeter, drive-train flexibility, yaw, and windwise and crosswind tower motion. The code allows blade dimensions, stiffness, and weights to differ and models tower shadow, wind shear, and turbulence. Additionally, dynamic stall is included, as are delta-3 and an underslung rotor. Load comparisons are made with ESI-80 test data in the form of power spectral density, rainflow counting, occurrence histograms, and azimuth-averaged bin plots. It is concluded that agreement between the FAST2 Code and test results is good.
NASA Technical Reports Server (NTRS)
1991-01-01
In recognition of a deficiency in the current modeling capability for seals, an effort was established by NASA to develop verified computational fluid dynamic concepts, codes, and analyses for seals. The objectives were to develop advanced concepts for the design and analysis of seals, to effectively disseminate the information to potential users by way of annual workshops, and to provide experimental verification for the models and codes under a wide range of operating conditions.
TACO: a finite element heat transfer code
Mason, W.E. Jr.
1980-02-01
TACO is a two-dimensional implicit finite element code for heat transfer analysis. It can perform both linear and nonlinear analyses and can be used to solve either transient or steady-state problems. Either plane or axisymmetric geometries can be analyzed. TACO can handle time- or temperature-dependent material properties, and materials may be either isotropic or orthotropic. A variety of time- and temperature-dependent loadings and boundary conditions are available, including temperature, flux, convection, and radiation boundary conditions and internal heat generation. Additionally, TACO has some specialized features such as internal surface conditions (e.g., contact resistance), bulk nodes, enclosure radiation with view factor calculations, and chemical reaction kinetics. A user subprogram feature allows for any type of functional representation of any independent variable. A bandwidth and profile minimization option is also available in the code. Graphical representation of data generated by TACO is provided by a companion post-processor named POSTACO. The theory on which TACO is based is outlined, the capabilities of the code are explained, and the input data required to perform an analysis with TACO are described. Some simple examples are provided to illustrate the use of the code.
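In its simplest one-dimensional form, the implicit transient formulation described above reduces to one tridiagonal solve per time step. The sketch below is not TACO itself; the backward-Euler discretization, function names, and fixed-temperature ends are illustrative assumptions:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system Ax = d; a = sub-, b = main, c = super-
    diagonal (a[0] and c[-1] are unused)."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def implicit_heat_step(T, r):
    """One backward-Euler step of 1-D heat conduction with the two end
    temperatures held fixed; r = alpha * dt / dx**2.  The implicit scheme
    is stable for any r, which is why implicit codes favor it."""
    n = len(T)
    sub = [0.0] + [-r] * (n - 2) + [0.0]
    diag = [1.0] + [1.0 + 2.0 * r] * (n - 2) + [1.0]
    sup = [0.0] + [-r] * (n - 2) + [0.0]
    return thomas(sub, diag, sup, list(T))
```

For a three-node bar with the middle node at 100 and both ends clamped at 0, one step with r = 0.5 relaxes the middle node to 100/(1 + 2r) = 50.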
Coded-aperture imaging in nuclear medicine
NASA Astrophysics Data System (ADS)
Smith, Warren E.; Barrett, Harrison H.; Aarsvold, John N.
1989-11-01
Coded-aperture imaging is a technique for imaging sources that emit high-energy radiation. This type of imaging involves shadow casting and not reflection or refraction. High-energy sources exist in x-ray and gamma-ray astronomy, nuclear reactor fuel-rod imaging, and nuclear medicine. Of these three areas nuclear medicine is perhaps the most challenging because of the limited amount of radiation available and because a three-dimensional source distribution is to be determined. In nuclear medicine a radioactive pharmaceutical is administered to a patient. The pharmaceutical is designed to be taken up by a particular organ of interest, and its distribution provides clinical information about the function of the organ, or the presence of lesions within the organ. This distribution is determined from spatial measurements of the radiation emitted by the radiopharmaceutical. The principles of imaging radiopharmaceutical distributions with coded apertures are reviewed. Included is a discussion of linear shift-variant projection operators and the associated inverse problem. A system developed at the University of Arizona in Tucson consisting of small modular gamma-ray cameras fitted with coded apertures is described.
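The shadow-casting principle reviewed here can be demonstrated in one dimension: a mask built from a perfect difference set has a two-valued cyclic autocorrelation, so correlating the recorded shadowgram with the mask recovers the source on a flat background. A minimal sketch (illustrative only; the (7,3,1) difference set stands in for the real camera apertures):

```python
def cyclic_conv(x, h):
    """Cyclic convolution: the shadowgram a far-field source casts
    through a periodically repeating mask."""
    n = len(x)
    return [sum(x[j] * h[(i - j) % n] for j in range(n)) for i in range(n)]

def cyclic_corr(x, h):
    """Cyclic cross-correlation, used here as the decoding step."""
    n = len(x)
    return [sum(x[(i + j) % n] * h[j] for j in range(n)) for i in range(n)]

# The (7,3,1) planar difference set {1, 2, 4}: every nonzero shift of the
# mask overlaps it in exactly one open position, so the mask's cyclic
# autocorrelation is two-valued (3 in phase, 1 otherwise).
mask = [1 if i in (1, 2, 4) else 0 for i in range(7)]
source = [0, 0, 0, 5, 0, 0, 0]            # point source of strength 5 at bin 3
shadowgram = cyclic_conv(source, mask)     # what the detector records
decoded = cyclic_corr(shadowgram, mask)    # peak reappears at bin 3
```

Here `decoded` is [5, 5, 5, 15, 5, 5, 5]: the source position is recovered as the peak, sitting on the uniform pedestal that the decoding correlation leaves behind.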
Coded-aperture imaging in nuclear medicine
NASA Technical Reports Server (NTRS)
Smith, Warren E.; Barrett, Harrison H.; Aarsvold, John N.
1989-01-01
Coded-aperture imaging is a technique for imaging sources that emit high-energy radiation. This type of imaging involves shadow casting and not reflection or refraction. High-energy sources exist in x-ray and gamma-ray astronomy, nuclear reactor fuel-rod imaging, and nuclear medicine. Of these three areas nuclear medicine is perhaps the most challenging because of the limited amount of radiation available and because a three-dimensional source distribution is to be determined. In nuclear medicine a radioactive pharmaceutical is administered to a patient. The pharmaceutical is designed to be taken up by a particular organ of interest, and its distribution provides clinical information about the function of the organ, or the presence of lesions within the organ. This distribution is determined from spatial measurements of the radiation emitted by the radiopharmaceutical. The principles of imaging radiopharmaceutical distributions with coded apertures are reviewed. Included is a discussion of linear shift-variant projection operators and the associated inverse problem. A system developed at the University of Arizona in Tucson consisting of small modular gamma-ray cameras fitted with coded apertures is described.
NASA Astrophysics Data System (ADS)
Yang, Qianli; Pitkow, Xaq
2015-03-01
Most interesting natural sensory stimuli are encoded in the brain in a form that can only be decoded nonlinearly. But despite being a core function of the brain, nonlinear population codes are rarely studied and poorly understood. Interestingly, the few existing models of nonlinear codes are inconsistent with known architectural features of the brain. In particular, these codes have information content that scales with the size of the cortical population, even when that violates the data processing inequality by exceeding the amount of information entering the sensory system. Here we provide a valid theory of nonlinear population codes by generalizing recent work on information-limiting correlations in linear population codes. Although these generalized, nonlinear information-limiting correlations bound the performance of any decoder, they also make decoding more robust to suboptimal computation, allowing many suboptimal decoders to achieve nearly the same efficiency as an optimal decoder. Although these correlations are extremely difficult to measure directly, particularly for nonlinear codes, we provide a simple, practical test by which one can use choice-related activity in small populations of neurons to determine whether decoding is suboptimal or optimal and limited by correlated noise. We conclude by describing an example computation in the vestibular system where this theory applies. QY and XP were supported by a grant from the McNair Foundation.
Radiation transport Part B: Applications with examples
Beutler, D.E.
1997-06-01
In the previous sections, Len Lorence described the need for, theory behind, and types of radiation codes that can be applied to model the results of radiation effects tests or working environments for electronics. For the rest of this segment, the author will concentrate on the specific ways the codes can be used to predict device response or analyze radiation test results. Regardless of whether one is predicting responses in a working or test environment, the procedures are virtually the same. The same can be said for the use of 1-, 2-, or 3-dimensional codes and Monte Carlo or discrete ordinates codes. No attempt is made to instruct the student on the specifics of the code. For example, the author will not discuss details such as the number of meshes, energy groups, etc. that are appropriate for a discrete ordinates code. For the sake of simplicity, he will restrict himself to the 1-dimensional code CEPXS/ONELD. This code, along with a wide variety of other radiation codes, can be obtained from the Radiation Safety Information Computational Center (RSICC) for a nominal handling fee.
Radiation from advanced solid rocket motor plumes
NASA Technical Reports Server (NTRS)
Farmer, Richard C.; Smith, Sheldon D.; Myruski, Brian L.
1994-01-01
The overall objective of this study was to develop an understanding of solid rocket motor (SRM) plumes in sufficient detail to accurately explain the majority of plume radiation test data. Improved flowfield and radiation analysis codes were developed to accurately and efficiently account for all the factors which affect radiation heating from rocket plumes. These codes were verified by comparing predicted plume behavior with measured NASA/MSFC ASRM test data. Upon conducting a thorough review of the current state-of-the-art of SRM plume flowfield and radiation prediction methodology and the pertinent data base, the following analyses were developed for future design use. The NOZZRAD code was developed for preliminary base heating design and Al2O3 particle optical property data evaluation using a generalized two-flux solution to the radiative transfer equation. The IDARAD code was developed for rapid evaluation of plume radiation effects using the spherical harmonics method of differential approximation to the radiative transfer equation. The FDNS CFD code with fully coupled Euler-Lagrange particle tracking was validated by comparison to predictions made with the industry standard RAMP code for SRM nozzle flowfield analysis. The FDNS code provides the ability to analyze not only rocket nozzle flow, but also axisymmetric and three-dimensional plume flowfields with state-of-the-art CFD methodology. Procedures for conducting meaningful thermo-vision camera studies were developed.
Modular optimization code package: MOZAIK
NASA Astrophysics Data System (ADS)
Bekar, Kursat B.
This dissertation addresses the development of a modular optimization code package, MOZAIK, for geometric shape optimization problems in nuclear engineering applications. MOZAIK's first mission, determining the optimal shape of the D2O moderator tank for the current and new beam tube configurations for the Penn State Breazeale Reactor's (PSBR) beam port facility, is used to demonstrate its capabilities and test its performance. MOZAIK was designed as a modular optimization sequence including three primary independent modules: the initializer, the physics and the optimizer, each having a specific task. By using fixed interface blocks among the modules, the code attains its two most important characteristics: generic form and modularity. The benefit of this modular structure is that the contents of the modules can be switched depending on the requirements of accuracy, computational efficiency, or compatibility with the other modules. Oak Ridge National Laboratory's discrete ordinates transport code TORT was selected as the transport solver in the physics module of MOZAIK, and two different optimizers, Min-max and Genetic Algorithms (GA), were implemented in the optimizer module of the code package. A distributed memory parallelism was also applied to MOZAIK via MPI (Message Passing Interface) to execute the physics module concurrently on a number of processors for various states in the same search. Moreover, dynamic scheduling was enabled to enhance load balance among the processors while running MOZAIK's physics module thus improving the parallel speedup and efficiency. In this way, the total computation time consumed by the physics module is reduced by a factor close to M, where M is the number of processors. This capability also encourages the use of MOZAIK for shape optimization problems in nuclear applications because many traditional codes related to radiation transport do not have parallel execution capability. A set of computational models based on the
The PARTRAC code: Status and recent developments
NASA Astrophysics Data System (ADS)
Friedland, Werner; Kundrat, Pavel
Biophysical modeling is of particular value for predictions of radiation effects due to manned space missions. PARTRAC is an established tool for Monte Carlo-based simulations of radiation track structures, damage induction in cellular DNA and its repair [1]. Dedicated modules describe interactions of ionizing particles with the traversed medium, the production and reactions of reactive species, and score DNA damage determined by overlapping track structures with multi-scale chromatin models. The DNA repair module describes the repair of DNA double-strand breaks (DSB) via the non-homologous end-joining pathway; the code explicitly simulates the spatial mobility of individual DNA ends in parallel with their processing by major repair enzymes [2]. To simulate the yields and kinetics of radiation-induced chromosome aberrations, the repair module has been extended by tracking the information on the chromosome origin of ligated fragments as well as the presence of centromeres [3]. PARTRAC calculations have been benchmarked against experimental data on various biological endpoints induced by photon and ion irradiation. The calculated DNA fragment distributions after photon and ion irradiation reproduce corresponding experimental data and their dose- and LET-dependence. However, in particular for high-LET radiation many short DNA fragments are predicted below the detection limits of the measurements, so that the experiments significantly underestimate DSB yields by high-LET radiation [4]. The DNA repair module correctly describes the LET-dependent repair kinetics after 60Co gamma-rays and different N-ion radiation qualities [2]. First calculations on the induction of chromosome aberrations have overestimated the absolute yields of dicentrics, but correctly reproduced their relative dose-dependence and the difference between gamma- and alpha particle irradiation [3]. Recent developments of the PARTRAC code include a model of hetero- vs euchromatin structures to enable
Cameron, J
1991-01-01
This article summarizes the basic facts about the measurement of ionizing radiation, usually referred to as radiation dosimetry. The article defines the common radiation quantities and units; gives typical levels of natural radiation and medical exposures; and describes the most important biological effects of radiation and the methods used to measure radiation. Finally, a proposal is made for a new radiation risk unit to make radiation risks more understandable to nonspecialists. PMID:2040250
NASA Technical Reports Server (NTRS)
Woo, Simon S.; Cheng, Michael K.
2011-01-01
The original Luby Transform (LT) coding scheme is extended to account for data transmissions where some information symbols in a message block are more important than others. Prioritized LT codes provide unequal error protection (UEP) of data on an erasure channel by modifying the original LT encoder. The prioritized algorithm improves high-priority data protection without penalizing low-priority data recovery. Moreover, low-latency decoding is also obtained for high-priority data due to fast encoding. Prioritized LT codes only require a slight change in the original encoding algorithm, and no changes at all at the decoder. Hence, with a small complexity increase in the LT encoder, an improved UEP and low-decoding latency performance for high-priority data can be achieved. LT encoding partitions a data stream into fixed-sized message blocks, each with a constant number of information symbols. To generate a code symbol from the information symbols in a message, the Robust Soliton probability distribution is first applied in order to determine the number of information symbols to be used to compute the code symbol. Then, the specific information symbols are chosen uniformly at random from the message block. Finally, the selected information symbols are XORed to form the code symbol. The Prioritized LT code construction includes an additional restriction that code symbols formed by a relatively small number of XORed information symbols select some of these information symbols from the pool of high-priority data. Once high-priority data are fully covered, encoding continues with the conventional LT approach where code symbols are generated by selecting information symbols from the entire message block including all different priorities. Therefore, if code symbols derived from high-priority data experience an unusual high number of erasures, Prioritized LT codes can still reliably recover both high- and low-priority data. This hybrid approach decides not only "how to encode
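The encoding steps described above — draw a degree from the Robust Soliton distribution, select that many information symbols, XOR them — can be sketched as follows. This is an illustrative reconstruction, not the scheme's actual implementation: the distribution parameters and the degree-2 threshold used for the high-priority pool are assumptions.

```python
import math
import random

def robust_soliton(k, c=0.1, delta=0.5):
    """Robust Soliton degree distribution over degrees 1..k.
    c and delta are tuning parameters; the values here are illustrative."""
    R = c * math.log(k / delta) * math.sqrt(k)
    rho = [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
    spike = max(int(k / R), 1)
    tau = [0.0] * k
    for d in range(1, k + 1):
        if d < spike:
            tau[d - 1] = R / (d * k)
        elif d == spike:
            tau[d - 1] = R * math.log(R / delta) / k
    weights = [r + t for r, t in zip(rho, tau)]
    total = sum(weights)
    return [w / total for w in weights]

def lt_encode_symbol(message, dist, high_priority, rng):
    """Form one code symbol.  Low-degree symbols draw their neighbours from
    the high-priority index pool first (the prioritization restriction);
    the degree <= 2 cutoff is an assumption for illustration."""
    k = len(message)
    degree = rng.choices(range(1, k + 1), weights=dist)[0]
    pool = high_priority if (degree <= 2 and high_priority) else list(range(k))
    chosen = rng.sample(pool, min(degree, len(pool)))
    symbol = 0
    for i in chosen:
        symbol ^= message[i]       # XOR of the selected information symbols
    return chosen, symbol
```

A decoder only needs the index list `chosen` (in practice conveyed by a shared seed) to peel degree-one symbols and back-substitute, which is why no decoder change is required.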
Bingham, Philip R; Santos-Villalobos, Hector J
2011-01-01
Coded aperture techniques have been applied to neutron radiography to address limitations in neutron flux and resolution of neutron detectors in a system labeled coded source imaging (CSI). By coding the neutron source, a magnified imaging system is designed with small spot size aperture holes (10 and 100 μm) for improved resolution beyond the detector limits and with many holes in the aperture (50% open) to account for flux losses due to the small pinhole size. An introduction to neutron radiography and coded aperture imaging is presented. A system design is developed for a CSI system with a development of equations for limitations on the system based on the coded image requirements and the neutron source characteristics of size and divergence. Simulation has been applied to the design using McStas to provide qualitative measures of performance with simulations of pinhole array objects, followed by a quantitative measure through simulation of a tilted edge and calculation of the modulation transfer function (MTF) from the line spread function. MTF results for both 100 μm and 10 μm aperture hole diameters show resolutions matching the hole diameters.
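The tilted-edge analysis mentioned at the end — differentiate the edge spread function to obtain the line spread function, then take its normalized Fourier magnitude — can be sketched as follows (function names are illustrative, not from the paper):

```python
import math
import cmath

def lsf_from_esf(esf):
    """Line spread function as the discrete derivative of the edge spread
    function measured across a tilted edge."""
    return [b - a for a, b in zip(esf, esf[1:])]

def mtf_from_lsf(lsf):
    """MTF = magnitude of the LSF's discrete Fourier transform, normalized
    to unity at zero spatial frequency."""
    n = len(lsf)
    mags = []
    for k in range(n // 2 + 1):
        s = sum(lsf[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))
        mags.append(abs(s))
    return [m / mags[0] for m in mags]
```

A perfectly sharp system (delta-function LSF) gives a flat MTF of 1 at every frequency; any blur widens the LSF and pulls the high-frequency MTF down, which is how the hole-diameter-limited resolution shows up.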
NASA Technical Reports Server (NTRS)
Noble, Viveca K.
1993-01-01
There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal to noise needed for a certain desired bit error rate. The use of concatenated coding, e.g. inner convolutional code and outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
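The 16-bit CRC recommended by CCSDS for error detection uses the CCITT polynomial x^16 + x^12 + x^5 + 1. A bitwise sketch (the function name is ours; the 0x1021 polynomial and all-ones preset follow the common CRC-16/CCITT-FALSE convention):

```python
def crc16_ccitt(data: bytes, poly: int = 0x1021, init: int = 0xFFFF) -> int:
    """MSB-first bitwise CRC-16: shift each message bit into a 16-bit
    register, XORing in the polynomial whenever the top bit falls out."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

# Sender appends the CRC; receiver recomputes it over the payload and
# compares, flagging the frame on mismatch.
payload = b"123456789"
frame = payload + crc16_ccitt(payload).to_bytes(2, "big")
```

The standard check value for this convention is crc16_ccitt(b"123456789") == 0x29B1. Because the CRC spans 16 bits, any error burst of 16 bits or fewer (including any single corrupted byte) is guaranteed to change the checksum.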
NASA Astrophysics Data System (ADS)
Noble, Viveca K.
1993-11-01
There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal to noise needed for a certain desired bit error rate. The use of concatenated coding, e.g. inner convolutional code and outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
NASA Technical Reports Server (NTRS)
Steyn, J. J.; Born, U.
1970-01-01
A FORTRAN code was developed for the Univac 1108 digital computer to unfold polyenergetic gamma-photon experimental distributions from lithium-drifted germanium semiconductor spectrometers. It was designed to analyze the combined continuous and monoenergetic gamma radiation field of radioisotope volumetric sources. The code generates the detector system response matrix function and applies it to monoenergetic spectral components discretely and to the continuum iteratively. It corrects for system drift, source decay, background, and detection efficiency. Results are presented in digital form for differential and integrated photon number and energy distributions, and for exposure dose.
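The iterative continuum correction described — fold the current estimate through the response matrix, compare with the measurement, and correct — can be illustrated with a Van Cittert-style iteration. This is a generic stand-in under assumed notation, not the Univac code's actual algorithm:

```python
def matvec(R, x):
    """Apply the response matrix R to a spectrum x."""
    return [sum(rij * xj for rij, xj in zip(row, x)) for row in R]

def unfold(R, measured, iterations=50):
    """Van Cittert iteration: start from the measured spectrum and
    repeatedly add back the residual (measured - R @ estimate).
    Converges when the response is diagonally dominant enough."""
    est = list(measured)
    for _ in range(iterations):
        folded = matvec(R, est)
        est = [e + (m - f) for e, m, f in zip(est, measured, folded)]
    return est

# Toy 3-group response: full-energy peak on the diagonal, Compton
# downscatter feeding the lower-energy groups (values are illustrative).
R = [[1.0, 0.0, 0.0],
     [0.3, 0.7, 0.0],
     [0.1, 0.3, 0.6]]
true = [5.0, 3.0, 2.0]
measured = matvec(R, true)
recovered = unfold(R, measured)
```

With this well-conditioned toy response the iteration recovers the true group fluxes essentially exactly; real Ge(Li) response matrices are far worse conditioned, which is why the original code treats the monoenergetic peaks discretely and only the continuum iteratively.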
Phase-coded pulse aperiodic transmitter coding
NASA Astrophysics Data System (ADS)
Virtanen, I. I.; Vierinen, J.; Lehtinen, M. S.
2009-07-01
Both ionospheric and weather radar communities have already adopted the method of transmitting radar pulses in an aperiodic manner when measuring moderately overspread targets. Among the users of the ionospheric radars, this method is called Aperiodic Transmitter Coding (ATC), whereas the weather radar users have adopted the term Simultaneous Multiple Pulse-Repetition Frequency (SMPRF). When probing the ionosphere at the carrier frequencies of the EISCAT Incoherent Scatter Radar facilities, the range extent of the detectable target is typically of the order of one thousand kilometers - about seven milliseconds - whereas the characteristic correlation time of the scattered signal varies from a few milliseconds in the D-region to only tens of microseconds in the F-region. If one is interested in estimating the scattering autocorrelation function (ACF) at time lags shorter than the F-region correlation time, the D-region must be considered as a moderately overspread target, whereas the F-region is a severely overspread one. Given the technical restrictions of the radar hardware, a combination of ATC and phase-coded long pulses is advantageous for this kind of target. We evaluate such an experiment under infinitely low signal-to-noise ratio (SNR) conditions using lag profile inversion. In addition, a qualitative evaluation under high-SNR conditions is performed by analysing simulated data. The results show that an acceptable estimation accuracy and a very good lag resolution in the D-region can be achieved with a pulse length long enough for simultaneous E- and F-region measurements with a reasonable lag extent. The new experiment design is tested with the EISCAT Tromsø VHF (224 MHz) radar. An example of a full D/E/F-region ACF from the test run is shown at the end of the paper.
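The quantity being estimated throughout is the scattering autocorrelation function at a set of lags, which at its simplest is a lagged-product average of the complex received signal. A minimal sketch (ignoring range gating, decoding, and noise subtraction, all of which the lag profile inversion handles):

```python
def acf_estimate(samples, max_lag):
    """Lagged-product estimate of the autocorrelation function of a
    complex signal, normalized by the number of products at each lag."""
    n = len(samples)
    return [
        sum(samples[t] * samples[t + lag].conjugate()
            for t in range(n - lag)) / (n - lag)
        for lag in range(max_lag + 1)
    ]
```

The target regimes in the abstract map directly onto this estimator: for the D-region the correlation time is long relative to the inter-pulse period, so many usable lags exist (moderately overspread), while F-region correlation times of tens of microseconds leave almost none (severely overspread), motivating phase-coded long pulses.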
2006-10-27
FAA Smoke Transport Code, a physics-based Computational Fluid Dynamics tool, which couples heat, mass, and momentum transfer, has been developed to provide information on smoke transport in cargo compartments with various geometries and flight conditions. The software package contains a graphical user interface for specification of geometry and boundary conditions, an analysis module for solving the governing equations, and a post-processing tool. The current code was produced by making substantial improvements and additions to a code obtained from a university. The original code was able to compute steady, uniform, isothermal turbulent pressurization. In addition, a preprocessor and postprocessor were added to arrive at the current software package.
Seals Code Development Workshop
NASA Technical Reports Server (NTRS)
Hendricks, Robert C. (Compiler); Liang, Anita D. (Compiler)
1996-01-01
Seals Workshop of 1995 industrial code (INDSEAL) release include ICYL, GCYLT, IFACE, GFACE, SPIRALG, SPIRALI, DYSEAL, and KTK. The scientific code (SCISEAL) release includes conjugate heat transfer and multidomain with rotordynamic capability. Several seals and bearings codes (e.g., HYDROFLEX, HYDROTRAN, HYDROB3D, FLOWCON1, FLOWCON2) are presented and results compared. Current computational and experimental emphasis includes multiple connected cavity flows with goals of reducing parasitic losses and gas ingestion. Labyrinth seals continue to play a significant role in sealing with face, honeycomb, and new sealing concepts under investigation for advanced engine concepts in view of strict environmental constraints. The clean sheet approach to engine design is advocated with program directions and anticipated percentage SFC reductions cited. Future activities center on engine applications with coupled seal/power/secondary flow streams.
NASA Astrophysics Data System (ADS)
Vaucouleur, Sebastien
2011-02-01
We introduce code query by example for customisation of evolvable software products in general and of enterprise resource planning systems (ERPs) in particular. The concept is based on an initial empirical study on practices around ERP systems. We motivate our design choices based on those empirical results, and we show how the proposed solution helps with respect to the infamous upgrade problem: the conflict between the need for customisation and the need for upgrade of ERP systems. We further show how code query by example can be used as a form of lightweight static analysis, to detect automatically potential defects in large software products. Code query by example as a form of lightweight static analysis is particularly interesting in the context of ERP systems: it is often the case that programmers working in this field are not computer science specialists but rather domain experts. Hence, they require a simple language in which to express custom rules.
Code inspection instructional validation
NASA Technical Reports Server (NTRS)
Orr, Kay; Stancil, Shirley
1992-01-01
The Shuttle Data Systems Branch (SDSB) of the Flight Data Systems Division (FDSD) at Johnson Space Center contracted with Southwest Research Institute (SwRI) to validate the effectiveness of an interactive video course on the code inspection process. The purpose of this project was to determine if this course could be effective for teaching NASA analysts the process of code inspection. In addition, NASA was interested in the effectiveness of this unique type of instruction (Digital Video Interactive), for providing training on software processes. This study found the Carnegie Mellon course, 'A Cure for the Common Code', effective for teaching the process of code inspection. In addition, analysts prefer learning with this method of instruction, or this method in combination with other methods. As is, the course is definitely better than no course at all; however, findings indicate changes are needed. Following are conclusions of this study. (1) The course is instructionally effective. (2) The simulation has a positive effect on student's confidence in his ability to apply new knowledge. (3) Analysts like the course and prefer this method of training, or this method in combination with current methods of training in code inspection, over the way training is currently being conducted. (4) Analysts responded favorably to information presented through scenarios incorporating full motion video. (5) Some course content needs to be changed. (6) Some content needs to be added to the course. SwRI believes this study indicates interactive video instruction combined with simulation is effective for teaching software processes. Based on the conclusions of this study, SwRI has outlined seven options for NASA to consider. SwRI recommends the option which involves creation of new source code and data files, but uses much of the existing content and design from the current course. Although this option involves a significant software development effort, SwRI believes this option
NASA Astrophysics Data System (ADS)
Gilleron, Franck; Piron, Robin
2015-12-01
We present Dédale, a fast code implementing a simplified non-local-thermodynamic-equilibrium (NLTE) plasma model. In this approach, the stationary collisional-radiative rate equations are solved for a set of well-chosen Layzer complexes in order to determine the ion state populations. The electronic structure is approximated using the screened hydrogenic model (SHM) of More with relativistic corrections. The radiative and collisional cross-sections are based on the Kramers and Van Regemorter formulae, respectively, which are extrapolated to derive analytical expressions for all the rates. The latter are improved thereafter using Gaunt factors or more accurate tabulated data. Special care is taken for dielectronic rates, which are compared and rescaled with quantum calculations from the Averroès code. The emissivity and opacity spectra are calculated under the same assumptions as for the radiative rates, either in a detailed manner by summing the transitions between each pair of complexes, or in a coarser statistical way by summing the one-electron transitions averaged over the complexes. Optionally, nℓ-splitting can be accounted for using a WKB approach in an approximate potential reconstructed analytically from the screened charges. It is also possible to improve the spectra by replacing some transition arrays with more accurate data tabulated using the SCO-RCG or FAC codes. This latter option is particularly useful for K-shell emission spectroscopy. The Dédale code was used to submit neon and tungsten cases in the last NLTE-8 workshop (Santa Fe, November 4-8, 2013). Some of these results are presented, as well as comparisons with Averroès calculations.
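The screened hydrogenic model underlying such codes assigns each principal shell a hydrogen-like energy seen through an effective (screened) nuclear charge. A minimal sketch of that idea only (the function names are ours, effective charges are taken as given, and More's relativistic corrections are not shown):

```python
RYDBERG_EV = 13.606  # hydrogen ground-state binding energy, eV

def shell_energy(z_eff, n):
    """Hydrogen-like energy (eV) of principal shell n with effective
    charge z_eff: E = -Ry * (z_eff / n)**2."""
    return -RYDBERG_EV * (z_eff / n) ** 2

def line_energy(z_eff_lower, n_lower, z_eff_upper, n_upper):
    """Photon energy (eV) of a bound-bound transition between two
    screened shells."""
    return shell_energy(z_eff_upper, n_upper) - shell_energy(z_eff_lower, n_lower)
```

For hydrogen itself (z_eff = 1) this reproduces Lyman-alpha at about 10.2 eV; in a real SHM the z_eff values come from screening coefficients that depend on the occupation of all other shells.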
NASA Technical Reports Server (NTRS)
Gliebe, P; Mani, R.; Shin, H.; Mitchell, B.; Ashford, G.; Salamah, S.; Connell, S.; Huff, Dennis (Technical Monitor)
2000-01-01
This report describes work performed on Contract NAS3-27720AoI 13 as part of the NASA Advanced Subsonic Transport (AST) Noise Reduction Technology effort. Computer codes were developed to provide quantitative prediction, design, and analysis capability for several aircraft engine noise sources. The objective was to provide improved, physics-based tools for exploration of noise-reduction concepts and understanding of experimental results. Methods and codes focused on fan broadband and 'buzz saw' noise and on low-emissions combustor noise, and complement work done by other contractors under the NASA AST program to develop methods and codes for fan harmonic tone noise and jet noise. The methods and codes developed and reported herein employ a wide range of approaches, from the strictly empirical to the completely computational, with some being semiempirical, analytical, and/or analytical/computational. Emphasis was on capturing the essential physics while still considering method or code utility as a practical design and analysis tool for everyday engineering use. Codes and prediction models were developed for: (1) an improved empirical correlation model for fan rotor exit flow mean and turbulence properties, for use in predicting broadband noise generated by rotor exit flow turbulence interaction with downstream stator vanes; (2) fan broadband noise models for rotor and stator/turbulence interaction sources including 3D effects, noncompact-source effects, directivity modeling, and extensions to the rotor supersonic tip-speed regime; (3) fan multiple-pure-tone in-duct sound pressure prediction methodology based on computational fluid dynamics (CFD) analysis; and (4) low-emissions combustor prediction methodology and computer code based on CFD and actuator disk theory. In addition, the relative importance of dipole and quadrupole source mechanisms was studied using direct CFD source computation for a simple cascade/gust interaction problem, and an empirical combustor
Link, Hamilton E.; Schroeppel, Richard Crabtree; Neumann, William Douglas; Campbell, Philip LaRoche; Beaver, Cheryl Lynn; Pierson, Lyndon George; Anderson, William Erik
2004-10-01
If software is designed so that it can issue functions that move it from one computing platform to another, the software is said to be 'mobile'. There are two general areas of security problems associated with mobile code. The 'secure host' problem involves protecting the host from malicious mobile code. The 'secure mobile code' problem, on the other hand, involves protecting the code from malicious hosts. This report focuses on the latter problem. We have found three distinct camps of opinion regarding how to secure mobile code. There are those who believe special distributed hardware is necessary, those who believe special distributed software is necessary, and those who believe neither is necessary. We examine all three camps, with a focus on the third. In the distributed software camp we examine some commonly proposed techniques including Java, D'Agents and Flask. For the specialized hardware camp, we propose a cryptographic technique for 'tamper-proofing' code over a large portion of the software/hardware life cycle by careful modification of current architectures. This method culminates by decrypting/authenticating each instruction within a physically protected CPU, thereby protecting against subversion by malicious code. Our main focus is on the camp that believes that neither specialized software nor hardware is necessary. We concentrate on methods of code obfuscation to render an entire program or a data segment on which a program depends incomprehensible. The hope is to prevent, or at least slow down, reverse engineering efforts and to prevent goal-oriented attacks on the software and its execution. The field of obfuscation is still in a state of development, with the central problem being the lack of a basis for evaluating the protection schemes. We give a brief introduction to some of the main ideas in the field, followed by an in-depth analysis of a technique called 'white-boxing'. We put forth some new attacks and improvements
DOE 2012 occupational radiation exposure
none,
2013-10-01
The U.S. Department of Energy (DOE) Office of Analysis within the Office of Health, Safety and Security (HSS) publishes the annual DOE Occupational Radiation Exposure Report to provide an overview of the status of radiation protection practices at DOE (including the National Nuclear Security Administration [NNSA]). The DOE 2012 Occupational Radiation Exposure Report provides an evaluation of DOE-wide performance regarding compliance with Title 10, Code of Federal Regulations (C.F.R.), Part 835, Occupational Radiation Protection dose limits and as low as reasonably achievable (ALARA) process requirements. In addition, the report provides data to DOE organizations responsible for developing policies for protection of individuals from the adverse health effects of radiation. The report provides a summary and an analysis of occupational radiation exposure information from the monitoring of individuals involved in DOE activities. Over the past 5-year period, the occupational radiation exposure information is analyzed in terms of aggregate data, dose to individuals, and dose by site.
DOE 2011 occupational radiation exposure
none,
2012-12-01
The U.S. Department of Energy (DOE) Office of Analysis within the Office of Health, Safety and Security (HSS) publishes the annual DOE Occupational Radiation Exposure Report to provide an overview of the status of radiation protection practices at DOE (including the National Nuclear Security Administration [NNSA]). The DOE 2011 Occupational Radiation Exposure Report provides an evaluation of DOE-wide performance regarding compliance with Title 10, Code of Federal Regulations (C.F.R.), Part 835, Occupational Radiation Protection dose limits and as low as reasonably achievable (ALARA) process requirements. In addition, the report provides data to DOE organizations responsible for developing policies for protection of individuals from the adverse health effects of radiation. The report provides a summary and an analysis of occupational radiation exposure information from the monitoring of individuals involved in DOE activities. The occupational radiation exposure information is analyzed in terms of aggregate data, dose to individuals, and dose by site over the past five years.
HOTSPOT Health Physics codes for the PC
Homann, S.G.
1994-03-01
The HOTSPOT Health Physics codes were created to provide Health Physics personnel with a fast, field-portable calculation tool for evaluating accidents involving radioactive materials. HOTSPOT codes are a first-order approximation of the radiation effects associated with the atmospheric release of radioactive materials. HOTSPOT programs are reasonably accurate for a timely initial assessment. More importantly, HOTSPOT codes produce a consistent output for the same input assumptions and minimize the probability of errors associated with reading a graph incorrectly or scaling a universal nomogram during an emergency. The HOTSPOT codes are designed for short-term (less than 24 hours) release durations. Users requiring radiological release consequences for release scenarios over a longer time period, e.g., annual wind rose data, are directed to such long-term models as CAP88-PC (Parks, 1992). Users requiring more sophisticated modeling capabilities, e.g., complex terrain or multi-location real-time wind field data, are directed to such capabilities as the Department of Energy's ARAC computer codes (Sullivan, 1993). Four general programs -- Plume, Explosion, Fire, and Resuspension -- calculate a downwind assessment following the release of radioactive material resulting from a continuous or puff release, explosive release, fuel fire, or an area contamination event. Other programs deal with the release of plutonium, uranium, and tritium to expedite an initial assessment of accidents involving nuclear weapons. Additional programs estimate the dose commitment from the inhalation of any one of the radionuclides listed in the database of radionuclides; calibrate a radiation survey instrument for ground-survey measurements; and screen plutonium uptake in the lung (see FIDLER Calibration and LUNG Screening sections).
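The downwind assessment in tools of this kind rests on a first-order Gaussian plume model. The sketch below illustrates the standard ground-reflecting form of that model; the function and its inputs are illustrative, not HOTSPOT's actual source, and the dispersion coefficients would in practice come from stability-class curves rather than being passed in directly.

```python
import math

def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
    """Ground-reflecting Gaussian plume concentration (e.g. Bq/m^3).

    q: source term (Bq/s); u: wind speed (m/s); y, z: crosswind and
    vertical coordinates (m); h: effective release height (m);
    sigma_y, sigma_z: dispersion coefficients (m) evaluated at the
    downwind distance of interest."""
    lateral = math.exp(-0.5 * (y / sigma_y) ** 2)
    # The second exponential is the image term reflecting the plume
    # off the ground plane (z = 0).
    vertical = (math.exp(-0.5 * ((z - h) / sigma_z) ** 2)
                + math.exp(-0.5 * ((z + h) / sigma_z) ** 2))
    return q * lateral * vertical / (2.0 * math.pi * u * sigma_y * sigma_z)
```

Concentration is highest on the plume centerline (y = 0) and falls off symmetrically in the crosswind direction, which is the behavior an initial field assessment reads off a nomogram.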
Accumulate Repeat Accumulate Coded Modulation
NASA Technical Reports Server (NTRS)
Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung
2004-01-01
In this paper we propose an innovative coded modulation scheme called 'Accumulate Repeat Accumulate Coded Modulation' (ARA coded modulation). This class of codes can be viewed as serial turbo-like codes, or as a subclass of Low Density Parity Check (LDPC) codes, combined with high-level modulation. Thus, at the decoder, belief propagation can be used for iterative decoding of ARA coded modulation on a graph, provided a demapper transforms the received in-phase and quadrature samples into bit reliabilities.
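The encoder chain behind such a scheme can be sketched as accumulate, repeat, permute, accumulate, followed by a mapping of code bits onto a constellation. The sketch below is a simplified illustration, not the paper's design: real ARA codes precode only a fraction of the bits and puncture the accumulators, and the interleaver and Gray-mapped QPSK here are stand-ins.

```python
def accumulate(bits):
    """Running XOR: the rate-1 1/(1+D) 'accumulate' stage."""
    out, acc = [], 0
    for b in bits:
        acc ^= b
        out.append(acc)
    return out

def ara_encode(bits, repeat=3, interleaver=None):
    """Accumulate -> repeat -> permute -> accumulate (simplified)."""
    outer = accumulate(bits)                      # outer accumulator (precoder)
    rep = [b for b in outer for _ in range(repeat)]
    if interleaver is None:
        interleaver = list(range(len(rep)))       # identity permutation here
    permuted = [rep[i] for i in interleaver]
    return accumulate(permuted)                   # inner accumulator

def qpsk_map(bits):
    """Gray-mapped QPSK: two code bits per complex symbol."""
    table = {(0, 0): 1 + 1j, (0, 1): -1 + 1j,
             (1, 1): -1 - 1j, (1, 0): 1 - 1j}
    if len(bits) % 2:
        bits = bits + [0]                         # pad to an even length
    return [table[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]
```

At the receiver, a demapper would convert each noisy symbol back into per-bit reliabilities for the belief-propagation decoder.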
Multiple trellis coded modulation
NASA Technical Reports Server (NTRS)
Simon, Marvin K. (Inventor); Divsalar, Dariush (Inventor)
1990-01-01
A technique for designing trellis codes to minimize bit error performance for a fading channel. The invention provides a criterion which may be used in the design of such codes that is significantly different from the one used for additive white Gaussian noise channels. The method of multiple trellis coded modulation of the present invention comprises the steps of: (a) coding b bits of input data into s intermediate outputs; (b) grouping said s intermediate outputs into k groups of s_i intermediate outputs each, where the sum of all the s_i equals s and k is equal to at least 2; (c) mapping each of said k groups of intermediate outputs into one of a plurality of symbols in accordance with a plurality of modulation schemes, one for each group, such that the first group is mapped in accordance with a first modulation scheme and the second group is mapped in accordance with a second modulation scheme; and (d) outputting each of said symbols to provide k output symbols for each b bits of input data.
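Steps (b) through (d) above can be sketched as follows. This is only an illustration of the grouping and per-group mapping; the trellis encoding of step (a) and the set-partitioned mappings that produce the actual coding gain are omitted, and the PSK constellations are stand-ins for whatever modulation schemes a design would assign to each group.

```python
import cmath, math

def psk_point(index, order):
    """Unit-circle PSK constellation point."""
    return cmath.exp(2j * math.pi * index / order)

def multiple_tcm_map(intermediate, group_sizes):
    """Split s intermediate outputs into k groups and map each group
    with its own modulation scheme (here, PSK of order 2**size)."""
    assert sum(group_sizes) == len(intermediate) and len(group_sizes) >= 2
    symbols, pos = [], 0
    for size in group_sizes:
        group = intermediate[pos:pos + size]
        pos += size
        index = int("".join(map(str, group)), 2)  # group bits -> point index
        symbols.append(psk_point(index, 2 ** size))
    return symbols
```

With group sizes (2, 3), each set of five intermediate outputs yields one QPSK symbol and one 8-PSK symbol, i.e. k = 2 output symbols per b input bits.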
ERIC Educational Resources Information Center
American Sociological Association, Washington, DC.
The American Sociological Association's code of ethics for sociologists is presented. For sociological research and practice, 10 requirements for ethical behavior are identified, including: maintaining objectivity and integrity; fully reporting findings and research methods, without omission of significant data; reporting fully all sources of…
ERIC Educational Resources Information Center
Olsen, Florence
2003-01-01
Colleges and universities are beginning to consider collaborating on open-source-code projects as a way to meet critical software and computing needs. Points out the attractive features of noncommercial open-source software and describes some examples in use now, especially for the creation of Web infrastructure. (SLD)
Jones, Dean P.
2015-01-01
Abstract Significance: The redox code is a set of principles that defines the positioning of the nicotinamide adenine dinucleotide (NAD, NADP) and thiol/disulfide and other redox systems as well as the thiol redox proteome in space and time in biological systems. The code is richly elaborated in an oxygen-dependent life, where activation/deactivation cycles involving O2 and H2O2 contribute to spatiotemporal organization for differentiation, development, and adaptation to the environment. Disruption of this organizational structure during oxidative stress represents a fundamental mechanism in system failure and disease. Recent Advances: Methodology in assessing components of the redox code under physiological conditions has progressed, permitting insight into spatiotemporal organization and allowing for identification of redox partners in redox proteomics and redox metabolomics. Critical Issues: Complexity of redox networks and redox regulation is being revealed step by step, yet much still needs to be learned. Future Directions: Detailed knowledge of the molecular patterns generated from the principles of the redox code under defined physiological or pathological conditions in cells and organs will contribute to understanding the redox component in health and disease. Ultimately, there will be a scientific basis to a modern redox medicine. Antioxid. Redox Signal. 23, 734–746. PMID:25891126
Environmental Fluid Dynamics Code
The Environmental Fluid Dynamics Code (EFDC)is a state-of-the-art hydrodynamic model that can be used to simulate aquatic systems in one, two, and three dimensions. It has evolved over the past two decades to become one of the most widely used and technically defensible hydrodyn...
Heuristic dynamic complexity coding
NASA Astrophysics Data System (ADS)
Škorupa, Jozef; Slowack, Jürgen; Mys, Stefaan; Lambert, Peter; Van de Walle, Rik
2008-04-01
Distributed video coding is a new video coding paradigm that shifts the computationally intensive motion estimation from encoder to decoder. This results in a lightweight encoder and a complex decoder, as opposed to the predictive video coding scheme (e.g., MPEG-X and H.26X) with a complex encoder and a lightweight decoder. Both schemes, however, lack the ability to adapt to varying complexity constraints imposed by encoder and decoder, which is an essential ability for applications targeting a wide range of devices with different complexity constraints or applications with temporarily variable complexity constraints. Moreover, the effect of complexity adaptation on the overall compression performance is of great importance and has not yet been investigated. To address this need, we have developed a video coding system with the ability to adapt itself to complexity constraints by dynamically sharing the motion estimation computations between both components. On this system we have studied the effect of the complexity distribution on the compression performance. This paper describes how motion estimation can be shared using heuristic dynamic complexity and how the distribution of complexity affects the overall compression performance of the system. The results show that the complexity can indeed be shared between encoder and decoder in an efficient way at acceptable rate-distortion performance.
ERIC Educational Resources Information Center
Association of College Unions-International, Bloomington, IN.
The code of ethics for the college union and student activities professional is presented by the Association of College Unions-International. The preamble identifies the objectives of the college union as providing campus community centers and social programs that enhance the quality of life for members of the academic community. Ethics for…
ERIC Educational Resources Information Center
Burton, John K.; Wildman, Terry M.
The purpose of this study was to test the applicability of the dual coding hypothesis to children's recall performance. The hypothesis predicts that visual interference will have a small effect on the recall of visually presented words or pictures, but that acoustic interference will cause a decline in recall of visually presented words and…
NASA Astrophysics Data System (ADS)
Ninio, Jacques
1990-03-01
Recent findings on the genetic code are reviewed, including selenocysteine usage, deviations in the assignments of sense and nonsense codons, RNA editing, natural ribosomal frameshifts and non-orthodox codon-anticodon pairings. A multi-stage codon reading process is presented.
ERIC Educational Resources Information Center
Lumsden, Linda; Miller, Gabriel
2002-01-01
Students do not always make choices that adults agree with in their choice of school dress. Dress-code issues are explored in this Research Roundup, and guidance is offered to principals seeking to maintain a positive school climate. In "Do School Uniforms Fit?" Kerry White discusses arguments for and against school uniforms and summarizes the…
MAGEE,GLEN I.
2000-08-03
Computers transfer data in a number of different ways. Whether through a serial port, a parallel port, over a modem, over an ethernet cable, or internally from a hard disk to memory, some data will be lost. To compensate for that loss, numerous error detection and correction algorithms have been developed. One of the most common error correction codes is the Reed-Solomon code, which is a special subset of BCH (Bose-Chaudhuri-Hocquenghem) linear cyclic block codes. In the AURA project, an unmanned aircraft sends the data it collects back to earth so it can be analyzed during flight and possible flight modifications made. To counter possible data corruption during transmission, the data is encoded using a multi-block Reed-Solomon implementation with a possibly shortened final block. In order to maximize the amount of data transmitted, it was necessary to reduce the computation time of a Reed-Solomon encoding to three percent of the processor's time. To achieve such a reduction, many code optimization techniques were employed. This paper outlines the steps taken to reduce the processing time of a Reed-Solomon encoding and the insight into modern optimization techniques gained from the experience.
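The systematic Reed-Solomon encoding mentioned above works by dividing the message polynomial by a generator polynomial over GF(2^8) and appending the remainder as parity. The sketch below is the generic textbook encoder with table-based field arithmetic (one of the classic optimizations); it is not the optimized AURA flight implementation, and the primitive polynomial 0x11d is the common choice, not necessarily the project's.

```python
# Build GF(2^8) log/antilog tables (primitive polynomial 0x11d).
GF_EXP, GF_LOG = [0] * 512, [0] * 256
x = 1
for i in range(255):
    GF_EXP[i] = x
    GF_LOG[x] = i
    x <<= 1
    if x & 0x100:
        x ^= 0x11d
for i in range(255, 512):                 # doubled table avoids a mod 255
    GF_EXP[i] = GF_EXP[i - 255]

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return GF_EXP[GF_LOG[a] + GF_LOG[b]]

def poly_mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] ^= gf_mul(a, b)
    return r

def rs_generator(nsym):
    """Generator polynomial (x - a^0)(x - a^1)...(x - a^(nsym-1))."""
    g = [1]
    for i in range(nsym):
        g = poly_mul(g, [1, GF_EXP[i]])
    return g

def rs_encode(msg, nsym):
    """Systematic encoding: append the remainder of msg(x)*x^nsym / g(x)."""
    gen = rs_generator(nsym)
    rem = [0] * nsym
    for byte in msg:
        coef = byte ^ rem[0]              # leading coefficient to cancel
        rem = rem[1:] + [0]
        if coef:
            for j in range(nsym):
                rem[j] ^= gf_mul(gen[j + 1], coef)
    return list(msg) + rem

def poly_eval(p, a):
    """Horner's rule over GF(2^8); a valid codeword evaluates to 0
    at every root a^i of the generator polynomial."""
    y = 0
    for c in p:
        y = gf_mul(y, a) ^ c
    return y
```

Because the encoding is systematic, the message bytes pass through unchanged and only the parity bytes are computed, which is what makes multi-block operation with a shortened final block straightforward.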
Full 3D visualization tool-kit for Monte Carlo and deterministic transport codes
Frambati, S.; Frignani, M.
2012-07-01
We propose a package of tools capable of translating the geometric inputs and outputs of many Monte Carlo and deterministic radiation transport codes into open source file formats. These tools are aimed at bridging the gap between trusted, widely used radiation analysis codes and powerful, more recent, and commonly used visualization software, thus supporting the design process and helping with shielding optimization. Three main lines of development were followed: mesh-based analysis of Monte Carlo codes, mesh-based analysis of deterministic codes, and Monte Carlo surface meshing. The developed kit is considered a powerful and cost-effective tool for computer-aided design by radiation transport code users in the nuclear field, particularly in core design and radiation analysis. (authors)
Importance biasing scheme implemented in the PRIZMA code
Kandiev, I.Z.; Malyshkin, G.N.
1997-12-31
The PRIZMA code is intended for Monte Carlo calculations of linear radiation transport problems. The code has extensive capabilities for describing geometry, sources, and material composition, and for obtaining parameters specified by the user. It can calculate the paths of particle cascades (including neutrons, photons, electrons, positrons, and heavy charged particles), taking possible transmutations into account. An importance biasing scheme was implemented to solve problems that require calculation of functionals related to small probabilities (for example, problems of protection against radiation, problems of detection, etc.). The scheme enables the trajectory-building algorithm to adapt to the peculiarities of the problem.
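Importance biasing of this kind is commonly realized through particle splitting and Russian roulette at importance-region boundaries. The sketch below shows the generic population-control step, not PRIZMA's actual scheme: the key invariant is that the expected total statistical weight is preserved, so the tally stays unbiased while more samples are spent in important regions.

```python
import random

def adjust_population(weights, ratio, rng=random.random):
    """Splitting / Russian roulette at an importance boundary.

    `ratio` = importance(new region) / importance(old region).
    ratio > 1: split each particle into about `ratio` copies;
    ratio < 1: roulette with survival probability `ratio`.
    Either way, weights are divided by `ratio`, so the expected
    total statistical weight is unchanged."""
    out = []
    for w in weights:
        if ratio >= 1.0:
            n = int(ratio)
            if rng() < ratio - n:      # probabilistic rounding of the split
                n += 1
            out.extend([w / ratio] * n)
        elif rng() < ratio:            # survivor carries the lost weight
            out.append(w / ratio)
    return out
```

For an integer ratio the weight bookkeeping is exact: two particles of total weight 3.0 crossing into a region twice as important become four particles still totaling 3.0.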
Radiation Pattern of Chair Armed Microstrip Antenna
NASA Astrophysics Data System (ADS)
Mishra, Rabindra Kishore; Sahu, Kumar Satyabrat
2016-07-01
This work analyzes planar antenna conformable to chair arm shaped surfaces for WLAN application. Closed form expressions for its radiation pattern are developed and validated using measurements on prototype and commercial EM code at 2.4 GHz.
The Arcetri Spectral Code for optically thin plasmas
NASA Astrophysics Data System (ADS)
Landi, E.; Landini, M.
2002-03-01
The Arcetri Spectral Code allows one to evaluate the spectrum of the radiation emitted by hot and optically thin plasmas in the spectral range 1-2000 Å. The Arcetri Code consists of a series of files that contain the emissivity of the plasma as a function of electron temperature and density. Both line and continuum emission are considered. These quantities are calculated using a database of atomic data and transition probabilities, mostly taken from the CHIANTI database. In the present work we describe the updates to the spectrum and present the new results. A comparison with the previous version of the code allows us to assess the improvements to the spectrum; comparison with other spectral codes allows us to assess the completeness of the Arcetri Code and of the CHIANTI database.
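A spectral code of this kind stores emissivities tabulated on a grid of electron temperature and density, so evaluating the spectrum at arbitrary plasma conditions requires interpolating the tables. The sketch below is a generic bilinear interpolator, not the Arcetri Code's actual lookup routine; the log-T/log-n grid is an assumption for illustration.

```python
import bisect

def bilinear(table, xs, ys, x, y):
    """Bilinear interpolation of table[i][j] tabulated at grid points
    (xs[i], ys[j]), e.g. log-emissivity vs. log T_e and log n_e.
    Queries outside the grid are clamped to the edge cells."""
    i = min(max(bisect.bisect_right(xs, x) - 1, 0), len(xs) - 2)
    j = min(max(bisect.bisect_right(ys, y) - 1, 0), len(ys) - 2)
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    return ((1 - tx) * (1 - ty) * table[i][j]
            + tx * (1 - ty) * table[i + 1][j]
            + (1 - tx) * ty * table[i][j + 1]
            + tx * ty * table[i + 1][j + 1])
```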
Development of a shuttle plume radiation heating indicator
NASA Technical Reports Server (NTRS)
Reardon, John E.
1988-01-01
The primary objectives were to develop a Base Heating Indicator Code and a new plume radiation code for the Space Shuttle. Additional work included: revision of the Space Shuttle plume radiation environment for changes in configuration and correction of errors, evaluation of radiation measurements to establish a plume radiation model for the SRB High Performance Motor (HPM) plume, radiation predictions for preliminary designs, and participation in hydrogen disposal analysis and testing for the VAFB Shuttle launch site. The two most significant accomplishments were the development of the Base Heating Indicator Code and the Shuttle Engine Plume Radiation (SEPRAD) Code. The major efforts in revising the current Shuttle plume radiation environment were for the Orbiter base heat shield and the ET components in the Orbiter-ET interface region. The work performed is summarized in the technical discussion section with references to the documents containing detailed results. The technical discussion is followed by a summary of conclusions and recommendations for future work.
Radiation therapy is a cancer treatment. It uses high doses of radiation to kill cancer cells and stop them from ... half of all cancer patients receive it. The radiation may be external, from special machines, or internal, ...
Radiation therapy uses high-powered x-rays, particles, or radioactive seeds to kill cancer cells. ... faster than normal cells in the body. Because radiation is most harmful to quickly growing cells, radiation ...
Radiation therapy is a cancer treatment. It uses high doses of radiation to kill cancer cells and stop them ... places inside your body. The type of radiation therapy you receive depends on many factors, including The ...
... people who have radiation therapy may feel more tired than usual, not feel hungry, or lose their ... of radiation therapy include: Fatigue. Fatigue, or feeling tired, is the most common side effect of radiation ...
... day from sources such as sunlight. A radiation emergency would involve larger amounts of radiation and could ... are no guarantees of safety during a radiation emergency, you can take actions to protect yourself. You ...
Binary coding for hyperspectral imagery
NASA Astrophysics Data System (ADS)
Wang, Jing; Chang, Chein-I.; Chang, Chein-Chi; Lin, Chinsu
2004-10-01
Binary coding is one of the simplest ways to characterize spectral features. One commonly used method is a binary coding-based image software system, called the Spectral Analysis Manager (SPAM), developed for remotely sensed imagery by Mazer et al. For a given spectral signature, SPAM calculates its spectral mean and inter-band spectral difference and uses them as thresholds to generate a binary code word for that particular spectral signature. Such a coding scheme is generally effective and also very simple to implement. This paper revisits SPAM and further develops three new SPAM-based binary coding methods, called equal probability partition (EPP) binary coding, halfway partition (HP) binary coding, and median partition (MP) binary coding. These three binary coding methods, along with SPAM, will be evaluated for spectral discrimination and identification. In doing so, a new criterion, called the a posteriori discrimination probability (APDP), is also introduced as a performance measure.
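The thresholding idea can be sketched directly: each band is compared against a partition threshold to produce one bit of the code word. The sketch below shows a mean-threshold code in the spirit of SPAM (its inter-band-difference bits are omitted) and a median-threshold variant in the spirit of MP binary coding; both are illustrations, not the paper's exact definitions.

```python
def mean_code(signature):
    """Bit i is 1 iff band i lies at or above the spectral mean."""
    mean = sum(signature) / len(signature)
    return [1 if v >= mean else 0 for v in signature]

def median_code(signature):
    """Median-partition variant: threshold at the median, so the code
    word splits the bands into two equal-sized halves."""
    s = sorted(signature)
    n = len(s)
    median = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    return [1 if v >= median else 0 for v in signature]

def hamming(a, b):
    """Code-word distance used for spectral discrimination."""
    return sum(x != y for x, y in zip(a, b))
```

Discrimination then reduces to comparing Hamming distances between code words, which is what makes binary coding so cheap for large hyperspectral cubes.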
Improved Algorithms Speed It Up for Codes
Hazi, A
2005-09-20
Huge computers, huge codes, complex problems to solve. The longer it takes to run a code, the more it costs. One way to speed things up and save time and money is through hardware improvements--faster processors, different system designs, bigger computers. But another side of supercomputing can reap savings in time and speed: software improvements to make codes--particularly the mathematical algorithms that form them--run faster and more efficiently. Speed up math? Is that really possible? According to Livermore physicist Eugene Brooks, the answer is a resounding yes. ''Sure, you get great speed-ups by improving hardware,'' says Brooks, the deputy leader for Computational Physics in N Division, which is part of Livermore's Physics and Advanced Technologies (PAT) Directorate. ''But the real bonus comes on the software side, where improvements in software can lead to orders of magnitude improvement in run times.'' Brooks knows whereof he speaks. Working with Laboratory physicist Abraham Szoeke and others, he has been instrumental in devising ways to shrink the running time of what has, historically, been a tough computational nut to crack: radiation transport codes based on the statistical or Monte Carlo method of calculation. And Brooks is not the only one. Others around the Laboratory, including physicists Andrew Williamson, Randolph Hood, and Jeff Grossman, have come up with innovative ways to speed up Monte Carlo calculations using pure mathematics.
NASA Technical Reports Server (NTRS)
Mcaulay, Robert J.; Quatieri, Thomas F.
1988-01-01
It has been shown that an analysis/synthesis system based on a sinusoidal representation of speech leads to synthetic speech that is essentially perceptually indistinguishable from the original. Strategies for coding the amplitudes, frequencies and phases of the sine waves have been developed that have led to a multirate coder operating at rates from 2400 to 9600 bps. The encoded speech is highly intelligible at all rates with a uniformly improving quality as the data rate is increased. A real-time fixed-point implementation has been developed using two ADSP2100 DSP chips. The methods used for coding and quantizing the sine-wave parameters for operation at the various frame rates are described.
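The synthesis half of such a system reconstructs the waveform as a sum of sine waves from the decoded (amplitude, frequency, phase) triples. The sketch below shows that core operation only; frame-to-frame parameter interpolation, quantization, and the analysis (peak-picking) stage of a real coder are omitted.

```python
import math

def synthesize(sines, n_samples, sample_rate):
    """Sum-of-sinusoids synthesis from (amplitude, freq_hz, phase)
    triples, one output sample at a time."""
    return [sum(a * math.cos(2.0 * math.pi * f * n / sample_rate + p)
                for a, f, p in sines)
            for n in range(n_samples)]
```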
2006-03-08
MAPVAR-KD is designed to transfer solution results from one finite element mesh to another. MAPVAR-KD draws heavily from the structure and coding of MERLIN II, but it employs a new finite element data base, EXODUS II, and offers enhanced speed and new capabilities not available in MERLIN II. In keeping with the MERLIN II documentation, the computational algorithms used in MAPVAR-KD are described. User instructions are presented. Example problems are included to demonstrate the operation of the code and the effects of various input options. MAPVAR-KD is a modification of MAPVAR in which the search algorithm was replaced by a kd-tree-based search for better performance on large problems.
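The kd-tree search that gives MAPVAR-KD its speed on large meshes can be illustrated generically: the tree partitions the nodes by alternating coordinate axes, so a nearest-node query prunes whole subtrees instead of scanning every node. The 2-D sketch below is a textbook nearest-neighbor kd-tree, not MAPVAR-KD's actual EXODUS II search code.

```python
def dist2(a, b):
    """Squared Euclidean distance between two 2-D points."""
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def build_kdtree(points, depth=0):
    """Recursive kd-tree over 2-D points: (point, left, right) tuples,
    splitting on x and y coordinates alternately."""
    if not points:
        return None
    axis = depth % 2
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2
    return (pts[mid],
            build_kdtree(pts[:mid], depth + 1),
            build_kdtree(pts[mid + 1:], depth + 1))

def nearest(node, target, depth=0, best=None):
    """Nearest-neighbor query; visits the far subtree only when the
    splitting plane is closer than the best point found so far."""
    if node is None:
        return best
    point, left, right = node
    if best is None or dist2(point, target) < dist2(best, target):
        best = point
    axis = depth % 2
    diff = target[axis] - point[axis]
    near, far = (left, right) if diff < 0 else (right, left)
    best = nearest(near, target, depth + 1, best)
    if diff * diff < dist2(best, target):   # far side may hold a closer point
        best = nearest(far, target, depth + 1, best)
    return best
```

Against a brute-force scan the result is identical, but on large point sets the pruned search touches only a small fraction of the nodes, which is exactly the gain over MERLIN II's approach that the abstract refers to.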
N.V. Mokhov
2003-04-09
Status and recent developments of the MARS 14 Monte Carlo code system for simulation of hadronic and electromagnetic cascades in shielding, accelerator, and detector components in the energy range from a fraction of an electronvolt up to 100 TeV are described. These include physics models in both the strong and electromagnetic interaction sectors, variance reduction techniques, residual dose, geometry, tracking, and histogramming, as well as the MAD-MARS Beam Line Builder and the graphical user interface.
NASA Astrophysics Data System (ADS)
Tóth, Gábor; Keppens, Rony
2012-07-01
The Versatile Advection Code (VAC) is a freely available general hydrodynamic and magnetohydrodynamic simulation software that works in 1, 2 or 3 dimensions on Cartesian and logically Cartesian grids. VAC runs on any Unix/Linux system with a Fortran 90 (or 77) compiler and Perl interpreter. VAC can run on parallel machines using either the Message Passing Interface (MPI) library or a High Performance Fortran (HPF) compiler.
NASA Technical Reports Server (NTRS)
Bjork, C.
1981-01-01
The REEDS (rocket exhaust effluent diffusion single layer) computer code is used for the estimation of certain rocket exhaust effluent concentrations and dosages and their distributions near the Earth's surface following a rocket launch event. Output from REEDS is used in producing near real time air quality and environmental assessments of the effects of certain potentially harmful effluents, namely HCl, Al2O3, CO, and NO.
Harshvardhan, M.R.
1991-01-01
Studies of atmospheric radiative processes are summarized for the period 1987-1990. Topics discussed include radiation modeling; clouds and radiation; radiative effects in dynamics and climate; radiation budget and aerosol effects; and gaseous absorption, particulate scattering, and surface reflection. It is concluded that the key developments of the period are the definition of the radiative forcing of the climate system by trace gases and clouds, the recognition that cloud microphysics and morphology need to be incorporated not only into radiation models but also into climate models, and the isolation of a few important unsolved theoretical problems in atmospheric radiation.
Computer aided radiation analysis for manned spacecraft
NASA Technical Reports Server (NTRS)
Appleby, Matthew H.; Griffin, Brand N.; Tanner, Ernest R., II; Pogue, William R.; Golightly, Michael J.
1991-01-01
In order to assist in the design of radiation shielding, an analytical tool is presented that can be employed in combination with CAD facilities and NASA transport codes. The nature of radiation in space is described, and the operational requirements for protection are listed as background information for the use of the technique. The method is based on the Boeing radiation exposure model (BREM) for combining NASA radiation transport codes and CAD facilities, and the output is given as contour maps of the radiation-shield distribution so that dangerous areas can be identified. Computational models are used to solve the 1D Boltzmann transport equation and determine the shielding needs for the worst-case scenario. BREM can be employed directly with the radiation computations to assess radiation protection during all phases of design, which saves time and ultimately spacecraft weight.
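The simplest shielding estimate underlying such analyses is exponential attenuation through a slab. The sketch below is only the first-order narrow-beam relation, with an illustrative function name; it ignores buildup factors and the full 1D Boltzmann transport that codes like BREM actually solve, but it shows why shield thickness trades directly against dose.

```python
import math

def slab_dose(dose_unshielded, mu, thickness):
    """First-order narrow-beam attenuation through a shielding slab:
    D = D0 * exp(-mu * t), with mu the linear attenuation
    coefficient (1/cm) and t the slab thickness (cm)."""
    return dose_unshielded * math.exp(-mu * thickness)

def thickness_for_dose(dose_unshielded, mu, dose_limit):
    """Slab thickness needed to bring the dose down to dose_limit,
    inverting the relation above."""
    return math.log(dose_unshielded / dose_limit) / mu
```

Each half-value layer (t = ln 2 / mu) halves the dose, which is why contour maps of shield distribution make thin spots easy to spot and fix.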
Radiation physics, biophysics, and radiation biology
Hall, E.J.; Zaider, M.
1993-05-01
Research at the Center for Radiological Research is a multidisciplinary blend of physics, chemistry, and biology aimed at understanding the mechanisms involved in the health problems resulting from human exposure to ionizing radiation. The focus is increasingly on biochemistry and the application of the techniques of molecular biology to the problems of radiation biology. Research highlights of the program from the past year are described. A mathematical model describing the production of single-strand and double-strand breaks in DNA as a function of radiation quality has been completed. For the first time, Monte Carlo techniques have been used to obtain directly the spatial distribution of DNA moieties altered by radiation. This information was obtained by including in the transport codes a realistic description of the electronic structure of DNA. We have investigated structure-activity relationships for the potential oncogenicity of a new generation of bioreductive drugs that function as hypoxic cytotoxins. Experimental and theoretical investigation of the inverse dose-rate effect, whereby medium-LET radiations actually produce an enhanced effect when the dose is protracted, is now at a point where the basic mechanisms are reasonably understood, and the complex interplay between dose, dose rate, and radiation quality that is necessary for the effect to be present can now be predicted, at least in vitro. In terms of early radiobiological damage, a quantitative link has been established between basic energy deposition and locally multiply damaged sites (LMDS), the radiochemical precursors of DNA double-strand breaks; specifically, the spatial and energy deposition requirements necessary to form LMDS have been evaluated. For the first time, a mechanistically understood biological 'fingerprint' of high-LET radiation has been established: specifically, measurement of the ratio of inter- to intra-chromosomal aberrations produces a unique signature from alpha particles or neutrons.
Summers, R.M.; Cole, R.K. Jr.; Smith, R.C.; Stuart, D.S.; Thompson, S.L.; Hodge, S.A.; Hyman, C.R.; Sanders, R.L.
1995-03-01
MELCOR is a fully integrated, engineering-level computer code that models the progression of severe accidents in light water reactor nuclear power plants. MELCOR is being developed at Sandia National Laboratories for the U.S. Nuclear Regulatory Commission as a second-generation plant risk assessment tool and the successor to the Source Term Code Package. A broad spectrum of severe accident phenomena in both boiling and pressurized water reactors is treated in MELCOR in a unified framework. These include: thermal-hydraulic response in the reactor coolant system, reactor cavity, containment, and confinement buildings; core heatup, degradation, and relocation; core-concrete attack; hydrogen production, transport, and combustion; fission product release and transport; and the impact of engineered safety features on thermal-hydraulic and radionuclide behavior. Current uses of MELCOR include estimation of severe accident source terms and their sensitivities and uncertainties in a variety of applications. This publication of the MELCOR computer code manuals corresponds to MELCOR 1.8.3, released to users in August 1994. Volume 1 contains a primer that describes MELCOR's phenomenological scope, organization (by package), and documentation. The remainder of Volume 1 contains the MELCOR Users Guides, which provide the input instructions and guidelines for each package. Volume 2 contains the MELCOR Reference Manuals, which describe the phenomenological models that have been implemented in each package.
Bar coded retroreflective target
Vann, Charles S.
2000-01-01
This small, inexpensive, non-contact laser sensor can detect the location of a retroreflective target in a relatively large volume and in up to six degrees of position. The tracker's laser beam is formed into a plane of light which is swept across the space of interest. When the beam illuminates the retroreflector, some of the light returns to the tracker. The intensity, angle, and time of the return beam are measured to calculate the three-dimensional location of the target. With three retroreflectors on the target, the locations of three points on the target are measured, enabling the calculation of all six degrees of target position. Until now, devices for three-dimensional tracking of objects in a large volume have been heavy, large, and very expensive. Because of the simplicity and unique characteristics of this tracker, it is capable of three-dimensional tracking of one to several objects in a large volume, yet it is compact, lightweight, and relatively inexpensive. Alternatively, a tracker produces a diverging laser beam which is directed towards a fixed position, and senses when a retroreflective target enters the fixed field of view. An optically bar-coded target can be read by the tracker to provide information about the target. The target can be formed of a ball lens with a bar code on one end. As the target moves through the field, the ball lens causes the laser beam to scan across the bar code.
Suboptimum decoding of block codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Kasami, Tadao
1991-01-01
This paper investigates a class of decomposable codes and their distance and structural properties. It is shown that this class includes several classes of well-known and efficient codes as subclasses. Several methods for constructing decomposable codes or decomposing codes are presented. A two-stage soft-decision decoding scheme for decomposable codes, their translates, or unions of translates is devised. This two-stage soft-decision decoding is suboptimum, and provides an excellent trade-off between error performance and decoding complexity for codes of moderate and long block length.
Preliminary Assessment of Turbomachinery Codes
NASA Technical Reports Server (NTRS)
Mazumder, Quamrul H.
2007-01-01
This report assesses different CFD codes developed and currently being used at Glenn Research Center to predict turbomachinery fluid flow and heat transfer behavior: APNASA, TURBO, GlennHT, H3D, and SWIFT. Each code is described separately in the following section with its current modeling capabilities, level of validation, pre/post processing, and future development and validation requirements. This report addresses only previously published validations of the codes; the codes have, however, since been further developed to extend their capabilities.
Structural coding versus free-energy predictive coding.
van der Helm, Peter A
2016-06-01
Focusing on visual perceptual organization, this article contrasts the free-energy (FE) version of predictive coding (a recent Bayesian approach) with structural coding (a long-standing representational approach). Both use free-energy minimization as a metaphor for processing in the brain, but their formal elaborations of this metaphor are fundamentally different. FE predictive coding formalizes it by minimization of prediction errors, whereas structural coding formalizes it by minimization of the descriptive complexity of predictions. Here, both sides are evaluated. A conclusion regarding competence is that FE predictive coding uses a powerful modeling technique, but that structural coding has more explanatory power. A conclusion regarding performance is that FE predictive coding, though more detailed in its account of neurophysiological data, provides a less compelling cognitive architecture than that of structural coding, which, for instance, supplies formal support for the computationally powerful role it attributes to neuronal synchronization. PMID:26407895
Convolutional coding techniques for data protection
NASA Technical Reports Server (NTRS)
Massey, J. L.
1975-01-01
Results of research on the use of convolutional codes in data communications are presented. Convolutional coding fundamentals are discussed along with modulation and coding interaction. Concatenated coding systems and data compression with convolutional codes are described.
LDEF geometry/mass model for radiation analyses
NASA Technical Reports Server (NTRS)
Colborn, B. L.; Armstrong, T. W.
1992-01-01
A three-dimensional geometry/mass model of LDEF is under development for ionizing radiation analyses. This model, together with ray tracing algorithms, is being programmed for use both as a stand alone code in determining three-dimensional shielding distributions at dosimetry locations and as a geometry module that can be interfaced with radiation transport codes.
Initial conditions of radiative shock experiments
Kuranz, C. C.; Drake, R. P.; Krauland, C. M.; Marion, D. C.; Grosskopf, M. J.; Rutter, E.; Torralva, B.; Holloway, J. P.; Bingham, D.; Goh, J.; Boehly, T. R.; Sorce, A. T.
2013-05-15
We performed experiments at the Omega Laser Facility to characterize the initial, laser-driven state of a radiative shock experiment. These experiments aimed to measure the shock breakout time from a thin, laser-irradiated Be disk. The data are then used to inform a range of valid model parameters, such as electron flux limiter and polytropic γ, used when simulating radiative shock experiments using radiation hydrodynamics codes. The characterization experiment and the radiative shock experiment use a laser irradiance of ∼7 × 10^14 W cm^-2 to launch a shock in the Be disk. A velocity interferometer and a streaked optical pyrometer were used to infer the amount of time for the shock to move through the Be disk. The experimental results were compared with simulation results from the Hyades code, which can be used to model the initial conditions of a radiative shock system simulated with the CRASH code.
ERIC Educational Resources Information Center
American Inst. of Architects, Washington, DC.
A model building code for fallout shelters was drawn up for inclusion in four national model building codes. Discussion is given of fallout shelters with respect to (1) nuclear radiation, (2) national policies, and (3) community planning. Fallout shelter requirements for shielding, space, ventilation, construction, and services such as electrical…
Combinatorial neural codes from a mathematical coding theory perspective.
Curto, Carina; Itskov, Vladimir; Morrison, Katherine; Roth, Zachary; Walker, Judy L
2013-07-01
Shannon's seminal 1948 work gave rise to two distinct areas of research: information theory and mathematical coding theory. While information theory has had a strong influence on theoretical neuroscience, ideas from mathematical coding theory have received considerably less attention. Here we take a new look at combinatorial neural codes from a mathematical coding theory perspective, examining the error correction capabilities of familiar receptive field codes (RF codes). We find, perhaps surprisingly, that the high levels of redundancy present in these codes do not support accurate error correction, although the error-correcting performance of receptive field codes catches up to that of random comparison codes when a small tolerance to error is introduced. However, receptive field codes are good at reflecting distances between represented stimuli, while the random comparison codes are not. We suggest that a compromise in error-correcting capability may be a necessary price to pay for a neural code whose structure serves not only error correction, but must also reflect relationships between stimuli. PMID:23724797
Dual-sided coded-aperture imager
Ziock, Klaus-Peter
2009-09-22
In a vehicle, a single detector plane simultaneously measures radiation coming through two coded-aperture masks, one on either side of the detector. To determine which side of the vehicle a source is on, the two shadow masks are inverses of each other, i.e., one is the mask and the other is the anti-mask. All of the collected data are processed through two versions of an image reconstruction algorithm: one treats the data as if they were obtained through the mask, the other as though they were obtained through the anti-mask.
NASA Astrophysics Data System (ADS)
Gao, Wen; Jiang, Minqiang; Yu, Haoping
2013-02-01
In this paper, we first review the lossless coding mode in version 1 of the recently finalized HEVC standard. We then provide a performance comparison between the lossless coding modes in the HEVC and MPEG-AVC/H.264 standards and show that HEVC lossless coding has limited coding efficiency. To improve the performance of the lossless coding mode, several new coding tools that were contributed to JCT-VC but not adopted in version 1 of the HEVC standard are introduced. In particular, we discuss sample-based intra prediction and coding of residual coefficients in more detail. Finally, we briefly address a new class of coding tools, i.e., a dictionary-based coder, that is efficient in encoding screen content, including graphics and text.
Time coded distribution via broadcasting stations
NASA Technical Reports Server (NTRS)
Leschiutta, S.; Pettiti, V.; Detoma, E.
1979-01-01
The distribution of standard time signals via AM and FM broadcasting stations presents the distinct advantages of offering wide-area coverage and allowing the use of inexpensive receivers, but the signals are radiated a limited number of times per day, are not usually available during the night, and no full and automatic synchronization of a remote clock is possible. In an attempt to overcome some of these problems, a time-coded signal with complete date information is diffused by the IEN via the national broadcasting networks in Italy. These signals are radiated by some 120 AM and about 3000 FM and TV transmitters around the country. In such a way, a time-ordered system with an accuracy of a couple of milliseconds is easily achieved.
NASA Astrophysics Data System (ADS)
Borrero, J. M.; Lites, B. W.; Lagg, A.; Rezaei, R.; Rempel, M.
2014-12-01
Milne-Eddington (M-E) inversion codes for the radiative transfer equation are the most widely used tools to infer the magnetic field from observations of the polarization signals in photospheric and chromospheric spectral lines. Unfortunately, a comprehensive comparison between the different M-E codes available to the solar physics community is still missing, and so is a physical interpretation of their inferences. In this contribution we offer a comparison between three of those codes (VFISV, ASP/HAO, and HeLIx+). These codes are used to invert synthetic Stokes profiles that were previously obtained from realistic non-grey three-dimensional magnetohydrodynamical (3D MHD) simulations. The results of the inversion are compared with each other and with those from the MHD simulations. In the first case, the M-E codes retrieve values for the magnetic field strength, inclination and line-of-sight velocity that agree with each other within σB ≤ 35 (Gauss), σγ ≤ 1.2°, and σv ≤ 10 m s-1, respectively. Additionally, M-E inversion codes agree with the numerical simulations, when compared at a fixed optical depth, within σB ≤ 130 (Gauss), σγ ≤ 5°, and σv ≤ 320 m s-1. Finally, we show that employing generalized response functions to determine the height at which M-E codes measure physical parameters is more meaningful than comparing at a fixed geometrical height or optical depth. In this case the differences between M-E inferences and the 3D MHD simulations decrease to σB ≤ 90 (Gauss), σγ ≤ 3°, and σv ≤ 90 m s-1.
Noiseless Coding Of Magnetometer Signals
NASA Technical Reports Server (NTRS)
Rice, Robert F.; Lee, Jun-Ji
1989-01-01
Report discusses application of noiseless data-compression coding to digitized readings of spaceborne magnetometers for transmission back to Earth. Objective of such coding to increase efficiency by decreasing rate of transmission without sacrificing integrity of data. Adaptive coding compresses data by factors ranging from 2 to 6.
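The adaptive noiseless coding described above is in the family of Golomb-Rice techniques long associated with Rice's work. The sketch below is illustrative only: the flight coder adapts its code parameter per block of samples, whereas here a fixed parameter k and made-up magnetometer-like readings are assumed.

```python
def zigzag(v: int) -> int:
    # Interleave signed prediction residuals into non-negative integers:
    # 0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ...
    return (v << 1) if v >= 0 else ((-v << 1) - 1)

def rice_encode(n: int, k: int) -> str:
    # Rice code: unary-coded quotient (q ones plus a terminating zero),
    # followed by the k-bit binary remainder.
    q, r = n >> k, n & ((1 << k) - 1)
    bits = "1" * q + "0"
    return bits + format(r, "b").zfill(k) if k else bits

# Slowly varying magnetometer-like samples compress well once a simple
# first-difference predictor removes the mean level.
samples = [1000, 1001, 1003, 1002, 1002, 1005]
residuals = [b - a for a, b in zip(samples, samples[1:])]
encoded = "".join(rice_encode(zigzag(r), k=2) for r in residuals)
```

Small residuals map to short codewords, which is how compression factors of 2 to 6 arise on well-behaved telemetry.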
Energy Codes and Standards: Facilities
Bartlett, Rosemarie; Halverson, Mark A.; Shankle, Diana L.
2007-01-01
Energy codes and standards play a vital role in the marketplace by setting minimum requirements for energy-efficient design and construction. They outline uniform requirements for new buildings as well as additions and renovations. This article covers basic knowledge of codes and standards; development processes of each; adoption, implementation, and enforcement of energy codes and standards; and voluntary energy efficiency programs.
Coding Issues in Grounded Theory
ERIC Educational Resources Information Center
Moghaddam, Alireza
2006-01-01
This paper discusses grounded theory as one of the qualitative research designs. It describes how grounded theory generates from data. Three phases of grounded theory--open coding, axial coding, and selective coding--are discussed, along with some of the issues which are the source of debate among grounded theorists, especially between its…
Authorship Attribution of Source Code
ERIC Educational Resources Information Center
Tennyson, Matthew F.
2013-01-01
Authorship attribution of source code is the task of deciding who wrote a program, given its source code. Applications include software forensics, plagiarism detection, and determining software ownership. A number of methods for the authorship attribution of source code have been presented in the past. A review of those existing methods is…
Ethical Codes in the Professions.
ERIC Educational Resources Information Center
Schmeiser, Cynthia B.
1992-01-01
Whether the measurement profession should consider developing and adopting a code of professional conduct is explored after a brief review of existing references to standards of conduct and a review of other professional codes. Issues include the need for a code of ethics, its usefulness, and its enforcement. (SLD)
2005-05-07
CONEX is a code for joining sequentially in time multiple exodusII database files which all represent the same base mesh topology and geometry. It is used to create a single results or restart file from multiple results or restart files, which typically arise as the result of multiple restarted analyses. CONEX is used to postprocess the results from a series of finite element analyses. It can join sequentially the data from multiple results databases into a single database, which makes it easier to postprocess the results data.
2005-06-26
Exotxt is an analysis code that reads finite element results data stored in an exodusII file and generates a file in a structured text format. The text file can be edited or modified via a number of text formatting tools. Exotxt is used by analysts to translate data from the binary exodusII format into a structured text format which can then be edited or modified and then either translated back to exodusII format or to another format.
Low Density Parity Check Codes: Bandwidth Efficient Channel Coding
NASA Technical Reports Server (NTRS)
Fong, Wai; Lin, Shu; Maki, Gary; Yeh, Pen-Shu
2003-01-01
Low Density Parity Check (LDPC) Codes provide near-Shannon-capacity performance for NASA missions. These codes have high coding rates, R = 0.82 and 0.875, with moderate code lengths, n = 4096 and 8176. Their decoders have inherently parallel structures which allow for high-speed implementation. Two codes based on Euclidean Geometry (EG) were selected for flight ASIC implementation. These codes are cyclic and quasi-cyclic in nature and therefore have a simple encoder structure, which results in power and size benefits. These codes also have a large minimum distance, as much as d_min = 65, giving them powerful error-correcting capabilities and very low error floors. This paper will present development of the LDPC flight encoder and decoder, its applications, and status.
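The parity-check principle behind such decoders can be seen in miniature. The sketch below is a toy assumption, not the EG-based flight codes: it uses the parity-check matrix of a (7,4) Hamming code and single-error syndrome lookup, whereas real LDPC decoders apply iterative message passing over a long, sparse H.

```python
import numpy as np

# Parity-check matrix H of a (7,4) Hamming code (illustrative example).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def syndrome(word):
    # A zero syndrome means every parity check is satisfied.
    return H @ word % 2

def correct(received):
    # For a single bit flip, the syndrome equals the column of H at the
    # flipped position, so the error can be located and inverted.
    s = syndrome(received)
    if not s.any():
        return received
    for j in range(H.shape[1]):
        if np.array_equal(H[:, j], s):
            fixed = received.copy()
            fixed[j] ^= 1
            return fixed
    return received  # uncorrectable with this toy code

codeword = np.array([1, 0, 1, 1, 0, 1, 0])  # satisfies all three checks
corrupted = codeword.copy()
corrupted[1] ^= 1                           # a single bit error
recovered = correct(corrupted)
```

The parallelism noted in the abstract comes from the fact that each parity check (each row of H) can be evaluated independently.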
New quantum codes constructed from quaternary BCH codes
NASA Astrophysics Data System (ADS)
Xu, Gen; Li, Ruihu; Guo, Luobin; Ma, Yuena
2016-07-01
In this paper, we first study the construction of new quantum error-correcting codes (QECCs) from three classes of quaternary imprimitive BCH codes. As a result, the improved maximal designed distance of these narrow-sense imprimitive Hermitian dual-containing quaternary BCH codes is determined to be much larger than the result given by Aly et al. (IEEE Trans Inf Theory 53:1183-1188, 2007) for each code length. Thus, families of new QECCs are obtained, and the constructed QECCs have larger distance than those in the previous literature. Secondly, we apply a combinatorial construction to the imprimitive BCH codes and their corresponding primitive counterparts and construct many new linear quantum codes with good parameters, some of which have parameters exceeding the finite Gilbert-Varshamov bound for linear quantum codes.
Structured error recovery for code-word-stabilized quantum codes
Li Yunfan; Dumer, Ilya; Grassl, Markus; Pryadko, Leonid P.
2010-05-15
Code-word-stabilized (CWS) codes are, in general, nonadditive quantum codes that can correct errors by an exhaustive search of different error patterns, similar to the way that we decode classical nonlinear codes. For an n-qubit quantum code correcting errors on up to t qubits, this brute-force approach consecutively tests different errors of weight t or less and employs a separate n-qubit measurement in each test. In this article, we suggest an error grouping technique that allows one to simultaneously test large groups of errors in a single measurement. This structured error recovery technique exponentially reduces the number of measurements by about 3^t times. While it still leaves exponentially many measurements for a generic CWS code, the technique is equivalent to syndrome-based recovery for the special case of additive CWS codes.
User's manual for the ORIGEN2 computer code
Croff, A.G.
1980-07-01
This report describes how to use a revised version of the ORIGEN computer code, designated ORIGEN2. Included are a description of the input data, input deck organization, and sample input and output. ORIGEN2 can be obtained from the Radiation Shielding Information Center at ORNL.
Computational radiology and imaging with the MCNP Monte Carlo code
Estes, G.P.; Taylor, W.M.
1995-05-01
MCNP, a 3D coupled neutron/photon/electron Monte Carlo radiation transport code, is currently used in medical applications such as cancer radiation treatment planning, interpretation of diagnostic radiation images, and treatment beam optimization. This paper will discuss MCNP's current uses and capabilities, as well as envisioned improvements that would further enhance MCNP's role in computational medicine. It will be demonstrated that the methodology exists to simulate medical images (e.g., SPECT). Techniques will be discussed that would enable the construction of 3D computational geometry models of individual patients for use in patient-specific studies that would improve the quality of care for patients.
Obituary: Arthur Dodd Code (1923-2009)
NASA Astrophysics Data System (ADS)
Marché, Jordan D., II
2009-12-01
Former AAS president Arthur Dodd Code, age 85, passed away at Meriter Hospital in Madison, Wisconsin on 11 March 2009, from complications involving a long-standing pulmonary condition. Code was born in Brooklyn, New York on 13 August 1923, as the only child of former Canadian businessman Lorne Arthur Code and Jesse (Dodd) Code. An experienced ham radio operator, he entered the University of Chicago in 1940, but then enlisted in the U.S. Navy (1943-45) and was later stationed as an instructor at the Naval Research Laboratory, Washington, D.C. During the war, he gained extensive practical experience with the design and construction of technical equipment that served him well in years ahead. Concurrently, he took physics courses at George Washington University (some under the tutelage of George Gamow). In 1945, he was admitted to the graduate school of the University of Chicago, without having received his formal bachelor's degree. In 1950, he was awarded his Ph.D. for a theoretical study of radiative transfer in O- and B-type stars, directed by Subrahmanyan Chandrasekhar. He was then hired onto the faculty of the Department of Astronomy at the University of Wisconsin-Madison (1951-56). He then accepted a tenured appointment at the California Institute of Technology and the Mount Wilson and Palomar Observatories (1956-58). But following the launch of Sputnik, Code returned to Wisconsin in 1958 as full professor of astronomy, director of the Washburn Observatory, and department chairman so that he could more readily pursue his interest in space astronomy. That same year, he was chosen a member of the Space Science Board of the National Academy of Sciences (created during the International Geophysical Year) and shortly became one of five principal investigators of the original NASA Space Science Working Group. In a cogent 1960 essay, Code argued that astrophysical investigations, when conducted from beyond the Earth's atmosphere, "cannot fail to have a tremendous impact on the
Fault-Tolerant Coding for State Machines
NASA Technical Reports Server (NTRS)
Naegle, Stephanie Taft; Burke, Gary; Newell, Michael
2008-01-01
Two reliable fault-tolerant coding schemes have been proposed for state machines that are used in field-programmable gate arrays and application-specific integrated circuits to implement sequential logic functions. The schemes apply to strings of bits in state registers, which are typically implemented in practice as assemblies of flip-flop circuits. If a single-event upset (SEU, a radiation-induced change in the bit in one flip-flop) occurs in a state register, the state machine that contains the register could go into an erroneous state or could hang, by which is meant that the machine could remain in undefined states indefinitely. The proposed fault-tolerant coding schemes are intended to prevent the state machine from going into an erroneous or hang state when an SEU occurs. To ensure reliability of the state machine, the coding scheme for bits in the state register must satisfy the following criteria: 1. All possible states are defined. 2. An SEU brings the state machine to a known state. 3. There is no possibility of a hang state. 4. No false state is entered. 5. An SEU exerts no effect on the state machine. Fault-tolerant coding schemes that have been commonly used include binary encoding and "one-hot" encoding. Binary encoding is the simplest state machine encoding and satisfies criteria 1 through 3 if all possible states are defined. Binary encoding is a binary count of the state machine number in sequence; the table represents an eight-state example. In one-hot encoding, N bits are used to represent N states: All except one of the bits in a string are 0, and the position of the 1 in the string represents the state. With proper circuit design, one-hot encoding can satisfy criteria 1 through 4. Unfortunately, the requirement to use N bits to represent N states makes one-hot coding inefficient.
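The five criteria above can be checked mechanically for small machines. The sketch below is a hypothetical illustration (not from the report) that enumerates single-bit upsets of binary and one-hot encodings for the eight-state example:

```python
# Compare binary and one-hot state encodings under a single-event upset
# (SEU), i.e., exactly one flipped bit in the state register.
N = 8
binary_states = {format(i, "03b") for i in range(N)}           # 3-bit count
one_hot_states = {format(1 << i, f"0{N}b") for i in range(N)}  # one bit set

def seu_neighbors(word):
    # All register values reachable from `word` by flipping one bit.
    return {word[:i] + ("1" if b == "0" else "0") + word[i + 1:]
            for i, b in enumerate(word)}

# Binary encoding: every SEU lands in another *defined* state, satisfying
# criteria 1-3, but the machine may silently enter a wrong state.
assert all(seu_neighbors(s) <= binary_states for s in binary_states)

# One-hot encoding: an SEU yields a word with zero or two bits set, which
# is never a valid state, so the fault is detectable (criterion 4), at the
# cost of N flip-flops for N states.
assert all(f not in one_hot_states
           for s in one_hot_states for f in seu_neighbors(s))
```

This makes the trade-off in the abstract concrete: binary encoding is compact but cannot detect upsets, while one-hot detects them but is inefficient in register bits.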
Measuring Diagnoses: ICD Code Accuracy
O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M
2005-01-01
Objective To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. Data Sources/Study Setting The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. Study Design/Methods We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Principal Findings Main error sources along the “patient trajectory” include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the “paper trail” include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. Conclusions By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways. PMID:16178999
NASA Astrophysics Data System (ADS)
Abdullah, Alyasa Gan; Wah, Yap Bee
2015-02-01
The computation of approximate values of the trigonometric sines was discovered by Bhaskara I (c. 600-c. 680), a seventh-century Indian mathematician, and is known as Bhaskara I's sine approximation formula. The formula is given in his treatise titled Mahabhaskariya. In the 14th century, Madhava of Sangamagrama, a Kerala mathematician-astronomer, constructed a table of trigonometric sines of various angles. Madhava's table gives the measure of angles in arcminutes, arcseconds, and sixtieths of an arcsecond. The search for more accurate formulas led to the discovery of the power series expansion by Madhava of Sangamagrama (c. 1350-c. 1425), the founder of the Kerala school of astronomy and mathematics. In 1715, the Taylor series was introduced by Brook Taylor, an English mathematician. If a Taylor series is centered at zero, it is called a Maclaurin series, named after the Scottish mathematician Colin Maclaurin. Some of the important Maclaurin series expansions include the trigonometric functions. This paper introduces the genetic code of the sine of an angle without using power series expansion. The genetic code using a square root approach reveals the pattern in the signs (plus, minus) and sequence of numbers in the sine of an angle. The square root approach complements the Pythagoras method, provides a better understanding of calculating an angle, and will be useful for teaching the concepts of angles in trigonometry.
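For reference, the classical approximation the abstract names can be stated in a few lines. This sketch shows Bhaskara I's seventh-century rational formula itself, not the paper's square-root "genetic code" method:

```python
import math

# Bhaskara I's sine approximation: for x in degrees, 0 <= x <= 180,
#   sin(x) ~= 4x(180 - x) / (40500 - x(180 - x)).
def bhaskara_sin(x_deg: float) -> float:
    t = x_deg * (180.0 - x_deg)
    return 4.0 * t / (40500.0 - t)

# The formula is exact at 0, 30, 90, 150, and 180 degrees and stays within
# about 0.0016 of the true sine everywhere in between.
worst = max(abs(bhaskara_sin(x) - math.sin(math.radians(x)))
            for x in range(181))
```

Its accuracy without any series expansion is what made the formula remarkable for its era.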
Radiation View Factor With Shadowing
1992-02-24
FACET calculates the radiation geometric view factor (alternatively called shape factor, angle factor, or configuration factor) between surfaces for axisymmetric, two-dimensional planar and three-dimensional geometries with interposed third surface obstructions. FACET was developed to calculate view factors as input data to finite element heat transfer analysis codes.
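View-factor codes are commonly checked against the handful of configurations with closed-form answers. The sketch below evaluates one such classical case (via the crossed-strings method); it is an illustrative check, not FACET's algorithm, and the function name is assumed:

```python
import math

# View factor between two infinitely long, directly opposed parallel strips
# of equal width w separated by distance h (crossed-strings result):
#   F = sqrt(1 + (h/w)^2) - h/w
def parallel_strip_view_factor(w: float, h: float) -> float:
    H = h / w
    return math.sqrt(1.0 + H * H) - H

# Sanity limits: F -> 1 as the strips approach, F -> 0 as they separate;
# reciprocity A1*F12 = A2*F21 holds trivially here since the areas are equal.
```

Obstructed (shadowed) geometries like those FACET handles have no such closed form, which is why a numerical code is needed.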
Determinate-state convolutional codes
NASA Technical Reports Server (NTRS)
Collins, O.; Hizlan, M.
1991-01-01
A determinate-state convolutional code is formed from a conventional convolutional code by pruning away some of the possible state transitions in the decoding trellis. The type of staged power transfer used in determinate-state convolutional codes proves to be an extremely efficient way of enhancing the performance of a concatenated coding system. The decoder complexity is analyzed along with the free distances of these new codes, and extensive simulation results are provided of their performance at the low signal-to-noise ratios where a real communication system would operate. Concise, practical examples are provided.
Coding for reliable satellite communications
NASA Technical Reports Server (NTRS)
Gaarder, N. T.; Lin, S.
1986-01-01
This research project was set up to study various kinds of coding techniques for error control in satellite and space communications for NASA Goddard Space Flight Center. During the project period, researchers investigated the following areas: (1) decoding of Reed-Solomon codes in terms of dual basis; (2) concatenated and cascaded error control coding schemes for satellite and space communications; (3) use of hybrid coding schemes (error correction and detection incorporated with retransmission) to improve system reliability and throughput in satellite communications; (4) good codes for simultaneous error correction and error detection, and (5) error control techniques for ring and star networks.
Model formulation of non-equilibrium gas radiation for hypersonic flight vehicles
NASA Technical Reports Server (NTRS)
Chang, Ing
1989-01-01
Several radiation models for low density nonequilibrium hypersonic flow are studied. It is proposed that these models should be tested by the 3-D VRFL code developed at NASA/JSC. A modified and optimized radiation model may be obtained from the testing. Then, the current VRFL code could be expanded to solve hypersonic flow problems with nonequilibrium thermal radiation.
Circular codes, symmetries and transformations.
Fimmel, Elena; Giannerini, Simone; Gonzalez, Diego Luis; Strüngmann, Lutz
2015-06-01
Circular codes, putative remnants of primeval comma-free codes, have gained considerable attention in recent years. In fact, they represent a second kind of genetic code potentially involved in detecting and maintaining the normal reading frame in protein coding sequences. The discovery of a universal code across species suggested many theoretical and experimental questions. However, there is a key aspect that relates circular codes to symmetries and transformations that remains to a large extent unexplored. In this article we aim at addressing the issue by studying the symmetries and transformations that connect different circular codes. The main result is that the class of 216 C3 maximal self-complementary codes can be partitioned into 27 equivalence classes defined by a particular set of transformations. We show that such transformations can be put in a group-theoretic framework with an intuitive geometric interpretation. More general mathematical results about symmetry transformations which are valid for any kind of circular codes are also presented. Our results pave the way to the study of the biological consequences of the mathematical structure behind circular codes and contribute to shed light on the evolutionary steps that led to the observed symmetries of present codes. PMID:25008961
NASA Technical Reports Server (NTRS)
Deng, Robert H.; Herro, Mark A.
1988-01-01
A class of block coset codes with disparity and run-length constraints is studied. These codes are particularly well suited for high-speed optical fiber links and similar channels, where dc-free pulse formats, channel error control, and low-complexity encoder-decoder implementations are required. The codes are derived by partitioning linear block codes. The encoder and decoder structures are the same as those of linear block codes with only slight modifications. A special class of dc-free coset block codes is derived from BCH codes with specified bounds on minimum distance, disparity, and run length. The codes have low disparity levels (a small running digital sum) and good error-correcting capabilities.
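The two properties the abstract bounds, disparity (running digital sum) and maximum run length, can be computed directly from a candidate codeword. Filtering the codewords of a linear block code against both constraints mimics the partitioning idea only in spirit; the threshold values below are illustrative:

```python
def disparity(word):
    # running digital sum of a binary word: '1' -> +1, '0' -> -1
    return sum(+1 if b == "1" else -1 for b in word)

def max_run_length(word):
    # length of the longest run of identical symbols
    longest = run = 1
    for prev, cur in zip(word, word[1:]):
        run = run + 1 if cur == prev else 1
        longest = max(longest, run)
    return longest

def coset_subcode(codewords, max_disparity, max_run):
    # keep only codewords meeting the dc-free and run-length constraints
    return [w for w in codewords
            if abs(disparity(w)) <= max_disparity and max_run_length(w) <= max_run]
```

A word like "1111" is rejected by the disparity bound (its running digital sum drifts), which is exactly the dc component the channel cannot pass.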
Permutation-invariant quantum codes
NASA Astrophysics Data System (ADS)
Ouyang, Yingkai
2014-12-01
A quantum code is a subspace of the Hilbert space of a physical system, chosen to be correctable against a given class of errors, in which information can be encoded. Ideally, the quantum code lies within the ground space of the physical system. When the physical model is the Heisenberg ferromagnet in the absence of an external magnetic field, the corresponding ground space contains all permutation-invariant states. We use techniques from combinatorics and operator theory to construct families of permutation-invariant quantum codes. These codes have length proportional to t^2; one family perfectly corrects arbitrary weight-t errors, while the other approximately corrects t spontaneous decay errors. The analysis of our codes' performance with respect to spontaneous decay errors uses elementary matrix analysis, in which we revisit and extend the quantum error correction criterion of Knill and Laflamme, and of Leung, Chuang, Nielsen and Yamamoto.
Making your code citable with the Astrophysics Source Code Library
NASA Astrophysics Data System (ADS)
Allen, Alice; DuPrie, Kimberly; Schmidt, Judy; Berriman, G. Bruce; Hanisch, Robert J.; Mink, Jessica D.; Nemiroff, Robert J.; Shamir, Lior; Shortridge, Keith; Taylor, Mark B.; Teuben, Peter J.; Wallin, John F.
2016-01-01
The Astrophysics Source Code Library (ASCL, ascl.net) is a free online registry of codes used in astronomy research. With nearly 1,200 codes, it is the largest indexed resource for astronomy codes in existence. Established in 1999, it offers software authors a path to citation of their research codes even without publication of a paper describing the software, and offers scientists a way to find codes used in refereed publications, thus improving the transparency of the research. It also provides a method to quantify the impact of source codes in a fashion similar to the science metrics of journal articles. Citations using ASCL IDs are accepted by major astronomy journals and, if formatted properly, are tracked by ADS and other indexing services. The number of citations to ASCL entries increased sharply from 110 citations in January 2014 to 456 citations in September 2015. The percentage of code entries in ASCL that were cited at least once rose from 7.5% in January 2014 to 17.4% in September 2015. The ASCL's mid-2014 infrastructure upgrade added an easy entry submission form, more flexible browsing, search capabilities, and an RSS feed for updates. A Changes/Additions form added this past fall lets authors submit links for papers that use their codes for addition to the ASCL entry even if those papers don't formally cite the codes, thus increasing the transparency of that research and capturing the value of their software to the community.
Practices in Code Discoverability: Astrophysics Source Code Library
NASA Astrophysics Data System (ADS)
Allen, A.; Teuben, P.; Nemiroff, R. J.; Shamir, L.
2012-09-01
Here we describe the Astrophysics Source Code Library (ASCL), which takes an active approach to sharing astrophysics source code. ASCL's editor seeks out both new and old peer-reviewed papers that describe methods or experiments that involve the development or use of source code, and adds entries for the found codes to the library. This approach ensures that source codes are added without requiring authors to actively submit them, resulting in a comprehensive listing that covers a significant number of the astrophysics source codes used in peer-reviewed studies. The ASCL now holds over 340 codes and continues to grow; in 2011 it has added an average of 19 codes per month. An advisory committee has been established to provide input and guide the development and expansion of the new site, and a marketing plan has been developed and is being executed. All ASCL source codes have been used to generate results published in or submitted to a refereed journal and are freely available either via a download site or from an identified source. This paper provides the history and description of the ASCL. It lists the requirements for including codes, examines the advantages of the ASCL, and outlines some of its future plans.
Overview of radiation protection at the Superconducting Super Collider Laboratory
Baker, S.; Britvich, G.; Bull, J.; Coulson, L.; Coyne, J.; Mokhov, N.; Romero, V.; Stapleton, G.
1994-03-01
The radiation protection program at the Superconducting Super Collider Laboratory is described. After establishing a set of stringent design guidelines for radiation protection, both normal and accidental beam losses for each accelerator were estimated. From these parameters, shielding requirements were specified using Monte-Carlo radiation transport codes. A groundwater activation model was developed to demonstrate compliance with federal drinking water standards. Finally, the environmental radiation monitoring program was implemented to determine the effect of the facility operation on the radiation environment.
WhiskyMHD: Numerical Code for General Relativistic Magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Baiotti, Luca; Giacomazzo, Bruno; Hawke, Ian; et al.
2010-10-01
Whisky is a code to evolve the equations of general relativistic hydrodynamics (GRHD) and magnetohydrodynamics (GRMHD) in 3D Cartesian coordinates on a curved dynamical background. It was originally developed by and for members of the EU Network on Sources of Gravitational Radiation and is based on the Cactus Computational Toolkit. Whisky can also implement adaptive mesh refinement (AMR) if compiled together with Carpet. Whisky has grown from earlier codes such as GR3D and GRAstro_Hydro, but has been rewritten to take advantage of some of the latest research performed in the EU. The motivation behind Whisky is to compute gravitational radiation waveforms for systems that involve matter. Examples include mergers of binary systems containing a neutron star, which are expected to be reasonably common in the universe and to produce substantial amounts of radiation. Other possible sources are given in the projects list.
Kalugin, M. A.
2010-12-15
In the present work, a set of codes is described that is used to simulate the radiation fields from ionizing radiation sources inside the containment during an accident. A method of evaluating the gamma dose rate from a spatially and energy-distributed source is given. The dose rate is calculated by the point-kernel method using buildup factors. The code MCU-REA with the ORIMCU module is used for the burnup calculations.
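The point-kernel method described above sums, over source points, an attenuated inverse-square kernel multiplied by a buildup factor. The sketch below uses a simple linear (Taylor-like) buildup B = 1 + mu*r purely for illustration; the buildup factors and distributed-source treatment in the actual codes are certainly more elaborate:

```python
import math

def point_kernel_dose(source_pts, detector, mu, kappa=1.0):
    """Gamma dose rate at `detector` from point isotropic sources.

    source_pts : list of (x, y, z, strength) emission points
    mu         : linear attenuation coefficient of the medium (1/cm)
    kappa      : flux-to-dose conversion factor (assumed constant here)
    """
    total = 0.0
    for x, y, z, s in source_pts:
        r = math.dist((x, y, z), detector)
        B = 1.0 + mu * r                 # crude linear buildup approximation
        total += kappa * s * B * math.exp(-mu * r) / (4 * math.pi * r * r)
    return total
```

A spatially distributed source is handled by listing many point sources, each carrying its share of the total emission rate.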
Methods of treating complex space vehicle geometry for charged particle radiation transport
NASA Technical Reports Server (NTRS)
Hill, C. W.
1973-01-01
Current methods of treating complex geometry models for space radiation transport calculations are reviewed. The geometric techniques used in three computer codes are outlined. Evaluations of geometric capability and speed are provided for these codes. Although no code development work is included, several suggestions for significantly improving complex geometry codes are offered.
NASA Technical Reports Server (NTRS)
Wu, Honglu
2006-01-01
Astronauts receive the highest occupational radiation exposure. Effective protections are needed to ensure the safety of astronauts on long duration space missions. Increased cancer morbidity or mortality risk in astronauts may be caused by occupational radiation exposure. Acute and late radiation damage to the central nervous system (CNS) may lead to changes in motor function and behavior, or neurological disorders. Radiation exposure may result in degenerative tissue diseases (non-cancer or non-CNS) such as cardiac, circulatory, or digestive diseases, as well as cataracts. Acute radiation syndromes may occur due to occupational radiation exposure.
Updates to the NEQAIR Radiation Solver
NASA Technical Reports Server (NTRS)
Cruden, Brett A.; Brandis, Aaron M.
2014-01-01
The NEQAIR code is one of the original heritage solvers for radiative heating prediction in aerothermal environments, and is still used today for mission design purposes. This paper discusses the implementation of the first major revision to the NEQAIR code in the last five years, NEQAIR v14.0. The most notable features of NEQAIR v14.0 are the parallelization of the radiation computation, reducing runtimes by about 30×, and the inclusion of mid-wave CO2 infrared radiation.
Surface acoustic wave coding for orthogonal frequency coded devices
NASA Technical Reports Server (NTRS)
Malocha, Donald (Inventor); Kozlovski, Nikolai (Inventor)
2011-01-01
Methods and systems for coding SAW OFC devices to mitigate code collisions in a wireless multi-tag system. Each device produces plural stepped frequencies as an OFC signal with a chip offset delay to increase code diversity. A method for assigning a different OFC to each device includes using a matrix based on the number of OFCs needed and the number of chips per code, populating each matrix cell with an OFC chip, and assigning the codes from the matrix to the devices. The asynchronous passive multi-tag system includes plural surface acoustic wave devices, each producing a different OFC signal having the same number of chips and including a chip offset time delay; an algorithm for assigning OFCs to each device; and a transceiver to transmit an interrogation signal and receive OFC signals in response with minimal code collisions during transmission.
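The matrix-based assignment outlined in the patent abstract can be caricatured in a few lines: treat each OFC code as a distinct ordering of the chip frequencies, populate a matrix with those orderings, and hand out rows to devices. This toy model ignores the chip offset delays and collision metrics of the real system:

```python
from itertools import permutations

def assign_ofc_codes(n_devices, n_chips):
    """Toy OFC assignment: each device's code is a distinct ordering of the
    n_chips orthogonal chip frequencies, drawn row by row from a code matrix."""
    matrix = list(permutations(range(n_chips)))   # all possible chip orderings
    if n_devices > len(matrix):
        raise ValueError("not enough distinct chip orderings for the device count")
    return {device: matrix[device] for device in range(n_devices)}
```

Every assigned code uses the same chip frequencies (so bandwidth is shared) but in a different time order, which is the source of the code diversity.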
Liman, Emily R.; Zhang, Yali V.; Montell, Craig
2014-01-01
Five canonical tastes, bitter, sweet, umami (amino acid), salty, and sour (acid), are detected by animals as diverse as fruit flies and humans, consistent with a near universal drive to consume fundamental nutrients and to avoid toxins or other harmful compounds. Surprisingly, despite this strong conservation of basic taste qualities between vertebrates and invertebrates, the receptors and signaling mechanisms that mediate taste in each are highly divergent. The identification over the last two decades of receptors and other molecules that mediate taste has led to stunning advances in our understanding of the basic mechanisms of transduction and coding of information by the gustatory systems of vertebrates and invertebrates. In this review, we discuss recent advances in taste research, mainly from the fly and mammalian systems, and we highlight principles that are common across species, despite stark differences in receptor types. PMID:24607224
NASA Astrophysics Data System (ADS)
Dauro, Vincent A., Sr.
IMP (Integrated Mission Program) is a simulation language and code used to model present and future Earth, Moon, or Mars missions. The profile is user controlled through selection from a large menu of events and maneuvers. A Fehlberg 7/13 Runge-Kutta integrator with error and step size control is used to numerically integrate the differential equations of motion (DEQ) of three spacecraft, a main, a target, and an observer. Through selection, the DEQ's include guided thrust, oblate gravity, atmosphere drag, solar pressure, and Moon gravity effects. Guide parameters for thrust events and performance parameters of velocity changes (Delta-V) and propellant usage (maximum of five systems) are developed as needed. Print, plot, summary, and debug files are output.
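IMP's Fehlberg 7/13 integrator adapts its step size from an embedded error estimate. The exact Fehlberg coefficients are not reproduced here; as a stand-in, the sketch below obtains error and step-size control from classical RK4 step doubling, applied to a point-mass two-body problem (a toy version of IMP's guided, perturbed dynamics):

```python
import math

def accel(state, mu=1.0):
    # point-mass gravity in the plane: state = (x, y, vx, vy)
    x, y, vx, vy = state
    r3 = (x * x + y * y) ** 1.5
    return (vx, vy, -mu * x / r3, -mu * y / r3)

def rk4_step(f, s, h):
    k1 = f(s)
    k2 = f(tuple(si + h / 2 * ki for si, ki in zip(s, k1)))
    k3 = f(tuple(si + h / 2 * ki for si, ki in zip(s, k2)))
    k4 = f(tuple(si + h * ki for si, ki in zip(s, k3)))
    return tuple(si + h / 6 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

def propagate(f, s, t_end, h=0.1, tol=1e-9):
    # step-doubling error control: compare one full step against two half steps
    t = 0.0
    while t < t_end:
        h = min(h, t_end - t)
        full = rk4_step(f, s, h)
        half = rk4_step(f, rk4_step(f, s, h / 2), h / 2)
        err = max(abs(a - b) for a, b in zip(full, half))
        if err < tol or h < 1e-12:
            s, t = half, t + h          # accept the more accurate estimate
            h *= 1.5                    # grow the step cautiously
        else:
            h /= 2                      # reject and retry with a smaller step
    return s
```

Propagating a unit circular orbit for one period returns the spacecraft very nearly to its starting state, a standard sanity check for any mission-propagation integrator.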
Telescope Adaptive Optics Code
Phillion, D.
2005-07-28
The Telescope AO Code has general adaptive optics capabilities plus specialized models for three telescopes with either adaptive optics or active optics systems. It has the capability to generate either single-layer or distributed Kolmogorov turbulence phase screens using the FFT. Missing low-order spatial frequencies are added using the Karhunen-Loeve expansion. The phase structure curve is extremely close to the theoretical. Secondly, it has the capability to simulate an adaptive optics control system. The default parameters are those of the Keck II adaptive optics system. Thirdly, it has a general wave optics capability to model the science camera halo due to scintillation from atmospheric turbulence and the telescope optics. Although this capability was implemented for the Gemini telescopes, the only default parameter specific to the Gemini telescopes is the primary mirror diameter. Finally, it has a model for the LSST active optics alignment strategy. This last model is highly specific to the LSST.
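The FFT phase-screen step can be sketched as filtering complex white noise by the square root of the Kolmogorov spectral density (~f^(-11/3)). The normalization below follows a common convention but should be treated as approximate; as the abstract notes, the FFT method under-represents the lowest spatial frequencies, which the real code restores with a Karhunen-Loeve expansion (omitted here):

```python
import numpy as np

def kolmogorov_phase_screen(n, dx, r0, seed=0):
    """Single-layer Kolmogorov turbulence phase screen via FFT filtering.

    n : grid size (pixels); dx : pixel pitch (m); r0 : Fried parameter (m).
    """
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    f = np.hypot(FX, FY)
    f[0, 0] = np.inf                                   # suppress undefined piston term
    psd = 0.023 * r0 ** (-5.0 / 3.0) * f ** (-11.0 / 3.0)  # Kolmogorov phase PSD
    df = 1.0 / (n * dx)
    noise = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    coeffs = noise * np.sqrt(psd) * df                 # filter white noise by sqrt(PSD)
    return np.real(np.fft.ifft2(coeffs)) * n * n
```

The resulting screen has zero piston by construction; its structure function can then be compared against the theoretical 6.88 (r/r0)^(5/3) curve, as the abstract describes.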
Boltzmann Transport Code Update: Parallelization and Integrated Design Updates
NASA Technical Reports Server (NTRS)
Heinbockel, J. H.; Nealy, J. E.; DeAngelis, G.; Feldman, G. A.; Chokshi, S.
2003-01-01
The ongoing effort to develop a web site for radiation analysis is expected to increase usage of the High Charge and Energy Transport Code HZETRN, so the requested calculations need to run quickly and efficiently. This raised the question: could the implementation of parallel processing speed up the calculations required? To answer this question, two modifications of the HZETRN computer code were created. The first modification selected shield materials of Al(2219), then polyethylene, and then Al(2219); this modified Fortran code was labeled 1SSTRN.F. The second modification considered shield materials of CO2 and Martian regolith; this modified Fortran code was labeled MARSTRN.F.
Coded Aperture Imaging for Fluorescent X-rays-Biomedical Applications
Haboub, Abdel; MacDowell, Alastair; Marchesini, Stefano; Parkinson, Dilworth
2013-06-01
Employing a coded aperture pattern in front of a pixelated charge-coupled device (CCD) detector allows imaging of fluorescent x-rays (6-25 keV) emitted from samples irradiated with x-rays. Coded apertures encode the angular direction of x-rays and allow for a large numerical-aperture x-ray imaging system. An algorithm was developed to generate the self-supporting coded aperture pattern of the Non-Two-Holes-Touching (NTHT) type. The algorithms to reconstruct the x-ray image from the recorded encoded pattern were developed by means of modeling and confirmed by experiments. Samples were irradiated by monochromatic synchrotron x-ray radiation, and fluorescent x-rays from several different test metal samples were imaged through the newly developed coded aperture imaging system. By choice of the exciting energy, the different metals were speciated.
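The NTHT constraint itself, no two open holes adjacent (so the opaque mask remains self-supporting), is easy to realize with a greedy random placement. This is an illustrative sketch, not the pattern-generation algorithm of the paper:

```python
import random

def ntht_aperture(n, n_holes, seed=0):
    """Random Non-Two-Holes-Touching (NTHT) mask on an n x n grid.

    Holes (1s) are placed greedily so that no two are 8-neighbors, keeping
    the surrounding opaque material connected.
    """
    rng = random.Random(seed)
    grid = [[0] * n for _ in range(n)]
    cells = [(r, c) for r in range(n) for c in range(n)]
    rng.shuffle(cells)
    placed = 0
    for r, c in cells:
        if placed == n_holes:
            break
        # accept the cell only if its whole 8-neighborhood is still empty
        if all(grid[r + dr][c + dc] == 0
               for dr in (-1, 0, 1) for dc in (-1, 0, 1)
               if 0 <= r + dr < n and 0 <= c + dc < n):
            grid[r][c] = 1
            placed += 1
    return grid
```

Because each hole blocks at most nine cells, modest hole counts are always achievable on the grid; real NTHT designs additionally optimize the pattern's autocorrelation for clean image reconstruction.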
User's manual for the GABAS spectrum computer code. Final report
Thayer, D.D.; Lurie, N.A.
1982-01-01
The Gamma and Beta Spectrum computer code (GABAS) was developed at IRT Corporation for calculating time-dependent beta and/or gamma spectra from decaying fission products. GABAS calculates composite fission product spectra based on the technique used by England, et al., in conjunction with the CINDER family of fission product codes. Multigroup beta and gamma spectra for individual nuclides are folded with their corresponding time-dependent activities (usually generated by a fission product inventory code) to produce a composite time-dependent fission product spectrum. This manual contains the methodology employed by GABAS, input requirements for proper execution, a sample problem and a FORTRAN listing compatible with a UNIVAC machine. The code is available in a UNIVAC 1100/81 version and a VAX 11/780 version. The former may be obtained from the Radiation Shielding Information Center (RSIC); the latter may be obtained directly from IRT Corporation.
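The folding operation GABAS performs, weighting each nuclide's multigroup spectrum by its time-dependent activity and summing, is a plain weighted sum over energy groups. A minimal sketch (the group structure and nuclide names are placeholders, not GABAS input formats):

```python
def composite_spectrum(spectra, activities):
    """Fold per-nuclide multigroup spectra with activities at one time point.

    spectra    : {nuclide: [per-group emission intensity]}
    activities : {nuclide: activity at the time of interest}
    Returns the composite per-group spectrum.
    """
    n_groups = len(next(iter(spectra.values())))
    total = [0.0] * n_groups
    for nuclide, spectrum in spectra.items():
        a = activities.get(nuclide, 0.0)
        for g in range(n_groups):
            total[g] += a * spectrum[g]
    return total
```

Repeating this at each decay time (with activities from a fission product inventory code) yields the time-dependent composite spectrum the manual describes.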
Transionospheric Propagation Code (TIPC)
Roussel-Dupre, R.; Kelley, T.A.
1990-10-01
The Transionospheric Propagation Code is a computer program developed at Los Alamos National Lab to perform certain tasks related to the detection of VHF signals following propagation through the ionosphere. The code is written in Fortran 77, runs interactively and was designed to be as machine independent as possible. A menu format in which the user is prompted to supply appropriate parameters for a given task has been adopted for the input while the output is primarily in the form of graphics. The user has the option of selecting from five basic tasks, namely transionospheric propagation, signal filtering, signal processing, delta-time-of-arrival (DTOA) study, and DTOA uncertainty study. For the first task a specified signal is convolved against the impulse response function of the ionosphere to obtain the transionospheric signal. The user is given a choice of four analytic forms for the input pulse or of supplying a tabular form. The option of adding Gaussian-distributed white noise or spectral noise to the input signal is also provided. The deterministic ionosphere is characterized to first order in terms of a total electron content (TEC) along the propagation path. In addition, a scattering model parameterized in terms of a frequency coherence bandwidth is also available. In the second task, detection is simulated by convolving a given filter response against the transionospheric signal. The user is given a choice of a wideband filter or a narrowband Gaussian filter. It is also possible to input a filter response. The third task provides for quadrature detection, envelope detection, and three different techniques for time-tagging the arrival of the transionospheric signal at specified receivers. The latter algorithms can be used to determine a TEC and thus take out the effects of the ionosphere to first order. Task four allows the user to construct a table of delta-times-of-arrival (DTOAs) vs TECs for a specified pair of receivers.
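The first-order characterization used above (TEC along the path) implies a frequency-dependent group delay, which is what makes DTOA-vs-TEC tables possible. A sketch of the standard first-order formulas, our illustration rather than TIPC's implementation:

```python
def ionospheric_group_delay(tec, freq_hz, c=2.998e8):
    """Excess group delay (s) through total electron content `tec`
    (electrons/m^2) at carrier frequency `freq_hz`, to first order."""
    return 40.3 * tec / (c * freq_hz ** 2)

def dtoa(tec, f1_hz, f2_hz):
    # delta-time-of-arrival between two carrier frequencies; inverting this
    # relation for a measured DTOA recovers the TEC to first order
    return ionospheric_group_delay(tec, f1_hz) - ionospheric_group_delay(tec, f2_hz)
```

At VHF the effect is large: 10 TECU (1e17 electrons/m^2) delays a 100 MHz pulse by roughly a microsecond, so time-tagging at two frequencies pins down the TEC.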
NASA Astrophysics Data System (ADS)
Ahmed, Hassan Yousif; Nisar, K. S.
2013-08-01
Codes with ideal in-phase cross-correlation (CC) and practical code lengths that support large numbers of users are required in spectral amplitude coding-optical code division multiple access (SAC-OCDMA) systems. SAC systems are increasingly attractive in the field of OCDMA because of their ability to eliminate the influence of multiple access interference (MAI) and to suppress the effect of phase-induced intensity noise (PIIN). In this paper, we propose new Diagonal Eigenvalue Unity (DEU) code families with ideal in-phase CC, based on the Jordan block matrix and constructed by simple algebraic methods. Four sets of DEU code families based on the code weight W and number of users N for the combinations (even, even), (even, odd), (odd, odd) and (odd, even) are constructed. This combination gives the DEU code more flexibility in the selection of code weight and number of users, making it a compelling candidate for future optical communication systems. Numerical results show that the proposed DEU system outperforms previously reported codes. In addition, simulation results from a commercial optical systems simulator, Virtual Photonic Instrument (VPI™), show that, using point-to-multipoint transmission in a passive optical network (PON), DEU has better performance and can support long spans at high data rates.
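The in-phase cross-correlation the DEU construction controls is simply the zero-shift overlap between two devices' binary spreading codes. The helper below computes it for toy code sequences, not the actual DEU codes from the paper's Jordan-block construction:

```python
from itertools import combinations

def in_phase_cc(x, y):
    # in-phase (zero-shift) cross-correlation of two 0/1 code sequences
    return sum(a & b for a, b in zip(x, y))

def max_cross_correlation(codes):
    # worst-case pairwise overlap across a code family; a SAC-OCDMA design
    # aims to keep this fixed and small for every pair of users
    return max(in_phase_cc(x, y) for x, y in combinations(codes, 2))
```

Bounding this quantity for every user pair is what limits MAI, since a receiver's correlator only picks up the overlapping spectral chips of interfering users.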
Some easily analyzable convolutional codes
NASA Technical Reports Server (NTRS)
Mceliece, R.; Dolinar, S.; Pollara, F.; Vantilborg, H.
1989-01-01
Convolutional codes have played and will play a key role in the downlink telemetry systems on many NASA deep-space probes, including Voyager, Magellan, and Galileo. One of the chief difficulties associated with the use of convolutional codes, however, is the notorious difficulty of analyzing them. Given a convolutional code as specified, say, by its generator polynomials, it is no easy matter to say how well that code will perform on a given noisy channel. The usual first step in such an analysis is to compute the code's free distance; this can be done with an algorithm whose complexity is exponential in the code's constraint length. The second step is often to calculate the transfer function in one, two, or three variables, or at least a few terms in its power series expansion. This step is quite hard, and for many codes of relatively short constraint lengths, it can be intractable. However, a large class of convolutional codes was discovered for which the free distance can be computed by inspection, and for which there is a closed-form expression for the three-variable transfer function. Although for large constraint lengths these codes have relatively low rates, they are nevertheless interesting and potentially useful. Furthermore, the ideas developed here to analyze these specialized codes may well extend to a much larger class.
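The first analysis step mentioned above, computing the free distance, can be cast as a shortest-path search over the code's 2^(K-1)-state trellis: the minimum output weight of any path that diverges from the all-zero state and remerges. The search is exponential in the constraint length K, exactly as the abstract notes. This toy version uses the familiar rate-1/2 (7,5) code as an example and is our illustration, not the authors' method:

```python
import heapq

def conv_outputs(state, u, gens, K):
    """One trellis transition: shift input bit u into a register whose
    K-1 `state` bits are the previous inputs; return (next_state, weight)."""
    reg = (u << (K - 1)) | state                    # bits: u, s1, ..., s_{K-1}
    weight = sum(bin(reg & g).count("1") % 2 for g in gens)
    return reg >> 1, weight                         # drop the oldest bit

def free_distance(gens, K):
    """Minimum Hamming weight over paths diverging from and remerging with
    the all-zero state (Dijkstra over the 2^(K-1)-state trellis)."""
    start, w0 = conv_outputs(0, 1, gens, K)         # force a '1' to diverge
    dist = {start: w0}
    pq = [(w0, start)]
    while pq:
        d, s = heapq.heappop(pq)
        if s == 0:                                  # remerged: d is the free distance
            return d
        if d > dist.get(s, float("inf")):
            continue
        for u in (0, 1):
            nxt, w = conv_outputs(s, u, gens, K)
            if d + w < dist.get(nxt, float("inf")):
                dist[nxt] = d + w
                heapq.heappush(pq, (d + w, nxt))
    return None
```

For the standard K=3 code with generators 7 and 5 (octal), the search returns the well-known free distance of 5.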
Point Kernel Gamma-Ray Shielding Code With Geometric Progression Buildup Factors.
1990-11-30
Version 00 QADMOD-GP is a PC version of the mainframe code CCC-396/QADMOD-G, a point-kernel integration code for calculating gamma ray fluxes and dose rates or heating rates at specific detector locations within a three-dimensional shielding geometry configuration due to radiation from a volume-distributed source.
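The geometric-progression (GP) buildup factors referenced in the code's name follow the ANSI/ANS-6.4.3 parameterization B(x) = 1 + (b-1)(K^x - 1)/(K - 1) at x mean free paths. The sketch below implements that form with the standard K(x) fitting function; the coefficient values in the test are illustrative only, since real values must come from the published GP tables:

```python
import math

def gp_buildup(x, b, c, a, Xk, d):
    """Geometric-progression buildup factor at x mean free paths.

    (b, c, a, Xk, d) are the tabulated GP fitting coefficients for a given
    material and photon energy (ANSI/ANS-6.4.3 form).
    """
    # dose-multiplication ratio per mean free path
    K = c * x ** a + d * (math.tanh(x / Xk - 2) - math.tanh(-2)) / (1 - math.tanh(-2))
    if abs(K - 1.0) < 1e-9:
        return 1.0 + (b - 1.0) * x          # limiting form as K -> 1
    return 1.0 + (b - 1.0) * (K ** x - 1.0) / (K - 1.0)
```

In a point-kernel code the uncollided kernel exp(-mu*r)/(4*pi*r^2) is multiplied by B evaluated at x = mu*r to account for scattered photons reaching the detector.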
Radiation therapy
MedlinePlus (medlineplus.gov/ency/article/001918.htm)
Radiation therapy uses high-powered x-rays, particles, or ...
Benchmarking studies for the DESCARTES and CIDER codes
Eslinger, P.W.; Ouderkirk, S.J.; Nichols, W.E.
1993-01-01
The Hanford Environmental Dose Reconstruction (HEDR) project is developing several computer codes to model the airborne release, transport, and environmental accumulation of radionuclides resulting from Hanford operations from 1944 through 1972. In order to calculate the dose of radiation a person may have received in any given location, the geographic area addressed by the HEDR Project will be divided into a grid. The grid size suggested by the draft requirements contains 2091 units called nodes. Two of the codes being developed are DESCARTES and CIDER. The DESCARTES code will be used to estimate the concentration of radionuclides in environmental pathways from the output of the air transport code RATCHET. The CIDER code will use information provided by DESCARTES to estimate the dose received by an individual. The requirements that Battelle (BNW) set for these two codes were released to the HEDR Technical Steering Panel (TSP) in a draft document on November 10, 1992. This document reports on the preliminary work performed by the code development team to determine if the requirements could be met.
Nonlinear, nonbinary cyclic group codes
NASA Technical Reports Server (NTRS)
Solomon, G.
1992-01-01
New cyclic group codes of length 2(exp m) - 1 over (m - j)-bit symbols are introduced. These codes can be systematically encoded and decoded algebraically. The code rates are very close to Reed-Solomon (RS) codes and are much better than Bose-Chaudhuri-Hocquenghem (BCH) codes (a former alternative). The binary (m - j)-tuples are identified with a subgroup of the binary m-tuples which represents the field GF(2 exp m). Encoding is systematic and involves a two-stage procedure consisting of the usual linear feedback register (using the division or check polynomial) and a small table lookup. For low rates, a second shift-register encoding operation may be invoked. Decoding uses the RS error-correcting procedures for the m-tuple codes for m = 4, 5, and 6.
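The "usual linear feedback register" stage of the systematic encoder amounts to polynomial division by the generator: append the remainder of x^(n-k) m(x) mod g(x) to the message. A bit-level sketch for a small binary cyclic code, using the (7,4) code with g(x) = x^3 + x + 1 as an example far simpler than the nonbinary group codes of the abstract:

```python
def cyclic_encode(msg_bits, gen, n, k):
    """Systematic encoding of a binary cyclic (n, k) code.

    msg_bits : k message bits, most significant first
    gen      : generator polynomial as an integer bitmask (MSB = highest degree)
    """
    r = n - k
    m = 0
    for bit in msg_bits:                      # message polynomial as an integer
        m = (m << 1) | bit
    rem = m << r                              # x^(n-k) * m(x)
    gdeg = gen.bit_length() - 1
    for i in range(n - 1, gdeg - 1, -1):      # long division over GF(2)
        if rem & (1 << i):
            rem ^= gen << (i - gdeg)
    codeword = (m << r) | rem                 # message followed by parity bits
    return [(codeword >> i) & 1 for i in range(n - 1, -1, -1)]
```

The table-lookup and second shift-register stages of the abstract's two-stage procedure sit on top of this basic division step and are not modeled here.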
QR code for medical information uses.
Fontelo, Paul; Liu, Fang; Ducut, Erick G
2008-01-01
We developed QR code online tools, simulated and tested QR code applications for medical information uses including scanning QR code labels, URLs and authentication. Our results show possible applications for QR code in medicine. PMID:18998785