Development of probabilistic multimedia multipathway computer codes.
Yu, C.; LePoire, D.; Gnanapragasam, E.; Arnish, J.; Kamboj, S.; Biwer, B. M.; Cheng, J.-J.; Zielen, A. J.; Chen, S. Y.; Mo, T.; Abu-Eid, R.; Thaggard, M.; Sallo, A., III.; Peterson, H., Jr.; Williams, W. A.; Environmental Assessment; NRC; EM
2002-01-01
The deterministic multimedia dose/risk assessment codes RESRAD and RESRAD-BUILD have been widely used for many years for evaluation of sites contaminated with residual radioactive materials. The RESRAD code applies to the cleanup of sites (soils) and the RESRAD-BUILD code applies to the cleanup of buildings and structures. This work describes the procedure used to enhance the deterministic RESRAD and RESRAD-BUILD codes for probabilistic dose analysis. A six-step procedure was used in developing default parameter distributions and the probabilistic analysis modules. These six steps include (1) listing and categorizing parameters; (2) ranking parameters; (3) developing parameter distributions; (4) testing parameter distributions for probabilistic analysis; (5) developing probabilistic software modules; and (6) testing probabilistic modules and integrated codes. The procedures used can be applied to the development of other multimedia probabilistic codes. The probabilistic versions of RESRAD and RESRAD-BUILD codes provide tools for studying the uncertainty in dose assessment caused by uncertain input parameters. The parameter distribution data collected in this work can also be applied to other multimedia assessment tasks and multimedia computer codes.
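Steps (3) and (4) of the procedure above amount to sampling each uncertain input parameter from its assigned distribution and propagating the samples through the dose model. A minimal sketch of that Monte Carlo propagation, using a hypothetical linear pathway model and illustrative distributions (none of these are RESRAD defaults):

```python
import random

def dose(concentration, intake_rate, dose_coeff):
    # Hypothetical linear pathway model: dose = C * IR * DC (illustrative only).
    return concentration * intake_rate * dose_coeff

def monte_carlo_dose(n_samples=10_000, seed=42):
    rng = random.Random(seed)
    doses = []
    for _ in range(n_samples):
        c = rng.lognormvariate(0.0, 0.5)    # soil concentration (assumed lognormal)
        ir = rng.triangular(0.5, 1.5, 1.0)  # intake rate (assumed triangular)
        dc = rng.uniform(0.8, 1.2)          # dose coefficient (assumed uniform)
        doses.append(dose(c, ir, dc))
    doses.sort()
    return {"mean": sum(doses) / n_samples,
            "p95": doses[int(0.95 * n_samples)]}
```

The right-skewed output distribution, summarized here by its mean and 95th percentile, is the kind of result the probabilistic RESRAD modules report for each pathway.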
Benchmarking computer codes for cask development
Glass, R.E.
1987-07-01
Sandia National Laboratories has taken the lead in developing a series of standard problems to evaluate structural and thermal computer codes used in the development of nuclear materials packagings for the United States Department of Energy (DOE). These problems were designed to simulate the hypothetical accident conditions given in Title 10 of the Code of Federal Regulation, Part 71 (10CFR71) while retaining simple geometries. This problem set exercises the ability of the codes to model pertinent physical phenomena without requiring extensive use of computer resources. The development of the standard problem set has involved both national and international consensus groups. Nationally, both the problems and consensus solutions have been developed and discussed at Industry/Government Joint Thermal and Structural Code Information Exchange meetings. Internationally, the Organization for Economic Cooperation and Development (OECD), through its Nuclear Energy Agency's Committee on Reactor Physics, has hosted a series of Specialists' Meetings on Heat Transfer to develop standard thermal problems and solutions. Each of these activities is represented in the standard problem set and the consensus solutions. This paper presents a brief overview of the structural and thermal problem sets.
High-altitude plume computer code development
NASA Technical Reports Server (NTRS)
Audeh, B. J.; Murphy, J. E.
1985-01-01
The flowfield codes developed to predict rocket motor plumes at high altitude were used to predict plume properties for the RCS motor, and the predictions show reasonable agreement with experimental data. A systematic technique was established for the calculation of high-altitude plumes, and the communication of data between the computer codes was standardized. It is recommended that these procedures be more completely documented and updated as the plume methodology is applied to the varied problems of plume flow and plume impingement encountered in space station design and operation.
New developments in the Saphire computer codes
Russell, K.D.; Wood, S.T.; Kvarfordt, K.J.
1996-03-01
The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a suite of computer programs that were developed to create and analyze a probabilistic risk assessment (PRA) of a nuclear power plant. Many recent enhancements to this suite of codes have been made. This presentation will provide an overview of these features and capabilities. The presentation will include a discussion of the new GEM module. This module greatly reduces and simplifies the work necessary to use the SAPHIRE code in event assessment applications. An overview of the features provided in the new Windows version will also be provided. This version is a full Windows 32-bit implementation and offers many new and exciting features. [A separate computer demonstration was held to allow interested participants to get a preview of these features.] The new capabilities that have been added since version 5.0 will be covered. Some of these major new features include the ability to store an unlimited number of basic events, gates, systems, sequences, etc.; the addition of improved reporting capabilities to allow the user to generate and "scroll" through custom reports; the addition of multi-variable importance measures; and the simplification of the user interface. Although originally designed as a PRA Level 1 suite of codes, capabilities have recently been added to SAPHIRE to allow the user to apply the code in Level 2 analyses. These features will be discussed in detail during the presentation. The modifications and capabilities added to this version of SAPHIRE significantly extend the code in many important areas. Together, these extensions represent a major step forward in PC-based risk analysis tools. This presentation provides a current up-to-date status of these important PRA analysis tools.
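Level 1 PRA quantification of the kind SAPHIRE performs reduces, at its core, to combining basic-event probabilities through minimal cut sets. A sketch of the standard minimal-cut-set upper bound (the event names and probabilities below are invented for illustration, not taken from any SAPHIRE model):

```python
def cut_set_probability(cut_set, basic_event_probs):
    # Probability of one minimal cut set, assuming independent basic events.
    p = 1.0
    for event in cut_set:
        p *= basic_event_probs[event]
    return p

def top_event_probability(cut_sets, basic_event_probs):
    # Minimal-cut-set upper bound: 1 - prod_i (1 - P(cut set i)).
    q = 1.0
    for cs in cut_sets:
        q *= 1.0 - cut_set_probability(cs, basic_event_probs)
    return 1.0 - q

# Illustrative two-cut-set example: redundant pumps, or a single valve failure.
probs = {"pump_a": 1e-3, "pump_b": 1e-3, "valve": 5e-4}
cut_sets = [{"pump_a", "pump_b"}, {"valve"}]
```

Here the single-event cut set dominates, so the top-event probability is close to the valve failure probability.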
Development of probabilistic internal dosimetry computer code
NASA Astrophysics Data System (ADS)
Noh, Siwan; Kwon, Tae-Eun; Lee, Jai-Ki
2017-02-01
Internal radiation dose assessment involves biokinetic models, the corresponding parameters, measured data, and many assumptions. Every component considered in the internal dose assessment has its own uncertainty, which is propagated into the intake activity and internal dose estimates. For research or scientific purposes, and for retrospective dose reconstruction for accidents in workplaces handling large quantities of unsealed radionuclides, such as nuclear power plants, nuclear fuel cycle facilities, and facilities in which nuclear medicine is practiced, a quantitative uncertainty assessment of the internal dose is often required. However, no calculation tools or computer codes that incorporate all the relevant processes and their corresponding uncertainties, i.e., from the measured data to the committed dose, are available. Thus, the objective of the present study is to develop an integrated probabilistic internal-dose-assessment computer code. First, the uncertainty components in internal dosimetry are identified, and quantitative uncertainty data are collected. Then, an uncertainty database is established for each component. To propagate these uncertainties in an internal dose assessment, a probabilistic internal-dose-assessment system that employs Bayesian and Monte Carlo methods was developed. Based on this system, we developed a probabilistic internal-dose-assessment code in MATLAB to estimate dose distributions from measured data with uncertainty. Using the developed code, we calculated the internal dose distribution and statistical values (e.g., the 2.5th, 5th, median, 95th, and 97.5th percentiles) for three sample scenarios. On the basis of the distributions, we performed a sensitivity analysis to determine the influence of each component on the resulting dose, in order to identify the major component of the uncertainty in a bioassay. The results of this study can be applied to various situations.
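The Bayesian step described above infers the intake from a bioassay measurement through a retention function m(t): the measurement model is M = I * m(t) * error. A grid-based sketch with an assumed single-exponential retention function and an assumed lognormal measurement error (both placeholders, not the biokinetic models of the paper):

```python
import math

def biokinetic_fraction(t_days, half_life=10.0):
    # Hypothetical single-exponential retention function m(t); real biokinetic
    # models are multi-compartment.
    return math.exp(-math.log(2.0) * t_days / half_life)

def posterior_intake(measurement, t_days, intakes, sigma_log=0.3):
    # Lognormal measurement model: log M ~ Normal(log(I * m(t)), sigma_log),
    # with a flat prior over the intake grid.
    weights = []
    for intake in intakes:
        predicted = intake * biokinetic_fraction(t_days)
        z = (math.log(measurement) - math.log(predicted)) / sigma_log
        weights.append(math.exp(-0.5 * z * z))
    total = sum(weights)
    return [w / total for w in weights]
```

Percentiles of the posterior (and of the dose obtained by multiplying through a dose coefficient) then quantify the uncertainty, as in the paper's sample scenarios.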
Development and application of computational aerothermodynamics flowfield computer codes
NASA Technical Reports Server (NTRS)
Venkatapathy, Ethiraj
1994-01-01
Research was performed in the area of computational modeling and application of hypersonic, high-enthalpy, thermo-chemical nonequilibrium flow (Aerothermodynamics) problems. A number of computational fluid dynamic (CFD) codes were developed and applied to simulate high altitude rocket-plume, the Aeroassist Flight Experiment (AFE), hypersonic base flow for planetary probes, the single expansion ramp model (SERN) connected with the National Aerospace Plane, hypersonic drag devices, hypersonic ramp flows, ballistic range models, shock tunnel facility nozzles, transient and steady flows in the shock tunnel facility, arc-jet flows, thermochemical nonequilibrium flows around simple and complex bodies, axisymmetric ionized flows of interest to re-entry, unsteady shock induced combustion phenomena, high enthalpy pulsed facility simulations, and unsteady shock boundary layer interactions in shock tunnels. Computational modeling involved developing appropriate numerical schemes for the flows on interest and developing, applying, and validating appropriate thermochemical processes. As part of improving the accuracy of the numerical predictions, adaptive grid algorithms were explored, and a user-friendly, self-adaptive code (SAGE) was developed. Aerothermodynamic flows of interest included energy transfer due to strong radiation, and a significant level of effort was spent in developing computational codes for calculating radiation and radiation modeling. In addition, computational tools were developed and applied to predict the radiative heat flux and spectra that reach the model surface.
Liquid rocket combustor computer code development
NASA Technical Reports Server (NTRS)
Liang, P. Y.
1985-01-01
The Advanced Rocket Injector/Combustor Code (ARICC), developed to model the complete chemical/fluid/thermal processes occurring inside rocket combustion chambers, is highlighted. The code, derived from the CONCHAS-SPRAY code originally developed at Los Alamos National Laboratory, incorporates powerful features such as the ability to model complex injector combustion chamber geometries, Lagrangian tracking of droplets, full chemical equilibrium and kinetic reactions for multiple species, a fractional volume of fluid (VOF) description of liquid jet injection in addition to the gaseous phase fluid dynamics, and turbulent mass, energy, and momentum transport. Atomization and droplet dynamic models from earlier generation codes are transplanted into the present code. Currently, ARICC is specialized for liquid oxygen/hydrogen propellants, although other fuel/oxidizer pairs can be easily substituted.
Computer codes developed and under development at Lewis
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
1992-01-01
The objective of this summary is to provide a brief description of: (1) codes developed or under development at LeRC; and (2) the development status of IPACS with some typical early results. The computer codes that have been developed and/or are under development at LeRC are listed in the accompanying charts. This list includes: (1) the code acronym; (2) select physics descriptors; (3) current enhancements; and (4) present (9/91) code status with respect to its availability and documentation. The computer codes list is grouped by related functions such as: (1) composite mechanics; (2) composite structures; (3) integrated and 3-D analysis; (4) structural tailoring; and (5) probabilistic structural analysis. These codes provide a broad computational simulation infrastructure (technology base-readiness) for assessing the structural integrity/durability/reliability of propulsion systems. These codes serve two other very important functions: they provide an effective means of technology transfer; and they constitute a depository of corporate memory.
SGEMP Phenomenology and Computer Code Development
1974-11-01
Keywords: SGEMP; experiments; calculational methods; quasi-static and dynamic EM. Two new computer codes [...] The calculations are for end-on irradiation of the cylinders, which is simulated by specified emission of electrons [...] cylindrical cavity. The two cylinders can be isolated from one another or connected by an arbitrary load. The outputs of the code are fields and
Development Of A Navier-Stokes Computer Code
NASA Technical Reports Server (NTRS)
Yoon, Seokkwan; Kwak, Dochan
1993-01-01
Report discusses aspects of development of CENS3D computer code, solving three-dimensional Navier-Stokes equations of compressible, viscous, unsteady flow. Implements implicit finite-difference or finite-volume numerical-integration scheme, called "lower-upper symmetric-Gauss-Seidel" (LU-SGS), offering potential for very low computer time per iteration and for fast convergence.
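LU-SGS achieves its low cost per iteration by sweeping the unknowns once in a "lower" (forward) ordering and once in an "upper" (backward) ordering, with no stored factorization. A scalar analogue of that sweep structure on a small linear system (this illustrates only the symmetric Gauss-Seidel idea, not the CENS3D implementation):

```python
def sgs_solve(a, b, iterations=50):
    # Symmetric Gauss-Seidel: one forward (lower) and one backward (upper)
    # sweep per iteration, echoing the lower-upper sweep order of LU-SGS.
    n = len(b)
    x = [0.0] * n
    for _ in range(iterations):
        for i in range(n):                       # forward sweep
            s = sum(a[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / a[i][i]
        for i in reversed(range(n)):             # backward sweep
            s = sum(a[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / a[i][i]
    return x
```

For a diagonally dominant system the sweeps converge quickly, which is the property the flow solver exploits for fast per-iteration turnaround.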
Development of non-linear finite element computer code
NASA Technical Reports Server (NTRS)
Becker, E. B.; Miller, T.
1985-01-01
Recent work has shown that the use of separable symmetric functions of the principal stretches can adequately describe the response of certain propellant materials and, further, that a data reduction scheme gives a convenient way of obtaining the values of the functions from experimental data. Based on representation of the energy, a computational scheme was developed that allows finite element analysis of boundary value problems of arbitrary shape and loading. The computational procedure was implemental in a three-dimensional finite element code, TEXLESP-S, which is documented herein.
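A separable symmetric energy of the principal stretches takes the Valanis-Landel form W = w(λ1) + w(λ2) + w(λ3). A sketch of the resulting uniaxial Cauchy stress for an incompressible material, using an illustrative single-term w (the exponent and modulus are placeholders, not the propellant data of the report):

```python
def w_prime(stretch, mu=1.0, alpha=2.0):
    # Derivative of an assumed single-term separable energy
    # w(lam) = mu * (lam**alpha - 1) / alpha.
    return mu * stretch ** (alpha - 1.0)

def uniaxial_cauchy_stress(stretch, mu=1.0, alpha=2.0):
    # Incompressible uniaxial extension: lateral stretches are stretch**-0.5,
    # and the pressure term is eliminated by the traction-free lateral faces:
    # sigma = lam * w'(lam) - lam**-0.5 * w'(lam**-0.5).
    lateral = stretch ** -0.5
    return stretch * w_prime(stretch, mu, alpha) - lateral * w_prime(lateral, mu, alpha)
```

With alpha = 2 this reduces to the neo-Hookean result sigma = mu * (lam**2 - 1/lam), a convenient check on the separable form.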
Development and application of computational aerothermodynamics flowfield computer codes
NASA Technical Reports Server (NTRS)
Venkatapathy, Ethiraj
1987-01-01
The multiple nozzle plume flowfield is computed with a 3-D Navier-Stokes solver. Numerical simulation is performed with the flux-split, two-factor, time-asymptotic viscous flow solver of Ying and Steger. The two-factor splitting provides a stable 3-D solution procedure under ideal-gas assumptions. An ad hoc acceleration procedure that shows promise in improving the convergence rate by a factor of three for steady-state problems is utilized. Computed solutions to generic problems at various altitudes and flight conditions show flowfield complexity and three-dimensional effects due to multiple nozzle jet interactions. Viscous, ideal-gas solutions for the symmetric nozzle are compared with other numerical solutions.
Development and application of computational aerothermodynamics flowfield computer codes
NASA Technical Reports Server (NTRS)
Venkatapathy, Ethiraj
1993-01-01
Computations are presented for one-dimensional, strong shock waves that are typical of those that form in front of a reentering spacecraft. The fluid mechanics and thermochemistry are modeled using two different approaches. The first employs traditional continuum techniques in solving the Navier-Stokes equations. The second approach employs a particle simulation technique (the direct simulation Monte Carlo method, DSMC). The thermochemical models employed in these two techniques are quite different. The present investigation presents an evaluation of thermochemical models for nitrogen under hypersonic flow conditions. Four separate cases are considered. The cases are governed, respectively, by the following: vibrational relaxation; weak dissociation; strong dissociation; and weak ionization. In near-continuum, hypersonic flow, the nonequilibrium thermochemical models employed in continuum and particle simulations produce nearly identical solutions. Further, the two approaches are evaluated successfully against available experimental data for weakly and strongly dissociating flows.
PLUTO code for computational Astrophysics: News and Developments
NASA Astrophysics Data System (ADS)
Tzeferacos, P.; Mignone, A.
2012-01-01
We present an overview of recent developments and functionalities available with the PLUTO code for astrophysical fluid dynamics. The recent extension of the code to a conservative finite difference formulation and high order spatial discretization of the compressible equations of magneto-hydrodynamics (MHD), complementary to its finite volume approach, allows for a highly accurate treatment of smooth flows, while avoiding loss of accuracy near smooth extrema and providing sharp non-oscillatory transitions at discontinuities. Among the novel features, we present alternative, fully explicit treatments to include non-ideal dissipative processes (namely viscosity, resistivity, and anisotropic thermal conduction) that do not suffer from the usual timestep limitation of explicit time stepping. These methods, offspring of the multistep Runge-Kutta family that use a Chebyshev polynomial recursion, are competitive substitutes for computationally expensive implicit schemes that involve sparse matrix inversion. Several multi-dimensional benchmarks and applications assess the potential of PLUTO to efficiently handle many astrophysical problems.
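The explicit schemes mentioned above relax the diffusive timestep limit by taking a cycle of Chebyshev-derived substeps whose composite remains stable. A sketch of first-order super time stepping for 1-D diffusion (the Runge-Kutta-Legendre schemes in PLUTO are higher-order relatives of this idea; the stage count and damping parameter here are illustrative):

```python
import math

def sts_substeps(dt_expl, n_stages=5, nu=0.05):
    # Chebyshev substep sequence (Alexiades-Amiez-Gremaud super time stepping):
    # individual substeps exceed the explicit limit, but the cycle as a whole
    # is stable and covers far more time than n_stages explicit steps
    # (approaching n_stages**2 times dt_expl as nu -> 0).
    return [
        dt_expl / ((nu - 1.0) * math.cos(math.pi * (2*j - 1) / (2*n_stages)) + 1.0 + nu)
        for j in range(1, n_stages + 1)
    ]

def diffuse_sts(u, kappa, dx, n_supersteps):
    # Explicit 1-D diffusion advanced in whole super-steps only, since
    # stability holds for the complete substep cycle, not for its pieces.
    dt_expl = 0.5 * dx * dx / kappa        # standard explicit stability limit
    for _ in range(n_supersteps):
        for tau in sts_substeps(dt_expl):
            un = u[:]
            for i in range(1, len(u) - 1):
                u[i] = un[i] + kappa * tau / dx**2 * (un[i+1] - 2.0*un[i] + un[i-1])
    return u
```

Diffusing an initial spike shows the expected behavior: the peak decays while the total is conserved, despite substeps several times larger than the explicit limit.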
Development and application of the GIM code for the Cyber 203 computer
NASA Technical Reports Server (NTRS)
Stainaker, J. F.; Robinson, M. A.; Rawlinson, E. G.; Anderson, P. G.; Mayne, A. W.; Spradley, L. W.
1982-01-01
The GIM computer code for fluid dynamics research was developed. Enhancement of the computer code, implicit algorithm development, turbulence model implementation, chemistry model development, interactive input module coding, and wing/body flowfield computation are described. The GIM quasi-parabolic code development was completed, and the code was used to compute a number of example cases. Turbulence models, both algebraic and differential-equation, were added to the basic viscous code. An equilibrium reacting chemistry model and an implicit finite difference scheme were also added. Development was completed on the interactive module for generating the input data for GIM. Solutions for inviscid hypersonic flow over a wing/body configuration are also presented.
Two-Phase Flow in Geothermal Wells: Development and Uses of a Good Computer Code
Ortiz-Ramirez, Jaime
1983-06-01
A computer code is developed for vertical two-phase flow in geothermal wellbores. The two-phase correlations used were developed by Orkiszewski (1967) and others and are widely applied in the oil and gas industry. The computer code is compared successfully with flowing survey measurements from wells in the East Mesa, Cerro Prieto, and Roosevelt Hot Springs geothermal fields. Well data from the Svartsengi field in Iceland are also used. Several applications of the computer code are considered, ranging from reservoir analysis to wellbore deposition studies. Accurate and workable wellbore simulators have an important role to play in geothermal reservoir engineering.
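A wellbore simulator of this kind marches the pressure along the well, summing hydrostatic and frictional gradients at each step. A deliberately simplified single-phase sketch (constant density, friction factor, and velocity are assumed; a two-phase code such as this one would instead evaluate an Orkiszewski-type correlation per flow regime at each depth):

```python
def wellbore_pressure_profile(p_wellhead, depth, n_steps=100,
                              rho=900.0, g=9.81, f=0.02, d=0.2, v=2.0):
    # March downward from the wellhead in n_steps segments, accumulating the
    # hydrostatic (rho*g) and Darcy-Weisbach friction (f*rho*v^2 / (2*d))
    # pressure gradients, both in Pa/m. All property values are illustrative.
    dz = depth / n_steps
    pressures = [p_wellhead]
    for _ in range(n_steps):
        dp_dz = rho * g + f * rho * v * v / (2.0 * d)
        pressures.append(pressures[-1] + dp_dz * dz)
    return pressures
```

A real simulator updates rho, v, and the flow-regime correlation segment by segment as pressure and steam quality change; the marching structure stays the same.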
Development of a predictive computer code for heat losses from solar external receivers
Afshari, B.; Ferziger, J.H.
1983-02-01
The problem of mixed convection from external receivers is considered. The governing partial differential equations are stated, and the numerical solution is obtained using Keller's box method. The computer code developed to predict the heat losses in the attached flow region is described. Special attention is given to treatment of some numerical difficulties that arise. Preliminary computational results for laminar three-dimensional mixed convection are presented and discussed. Future project efforts and planned extensions of the present code are described.
Barre, F.; Sun, C.; Dor, I.
1995-12-31
A new version of the French thermal-hydraulics safety code CATHARE 2 has been developed. It is a fast-running version able to take advantage of vector and parallel computing, and it will be used as the thermal-hydraulics kernel of the new generation of full-scope simulators and study simulators. One objective is also to provide an advanced three-dimensional module with high CPU-time performance; an effort has been made to develop a three-step numerical method with a maximum level of implicitness. In the field of thermal-hydraulics, new needs have been defined, especially for containment calculations, and second-order schemes and turbulence models for two-phase flow are under development. A final objective is to develop a code that is easy to couple with large system codes, for example those dealing with severe accidents. The structure of the new codes developed at the CEA allows parallel computing to be used to manage this coupling.
The development of an intelligent interface to a computational fluid dynamics flow-solver code
NASA Technical Reports Server (NTRS)
Williams, Anthony D.
1988-01-01
Researchers at NASA Lewis are currently developing an 'intelligent' interface to aid in the development and use of large, computational fluid dynamics flow-solver codes for studying the internal fluid behavior of aerospace propulsion systems. This paper discusses the requirements, design, and implementation of an intelligent interface to Proteus, a general purpose, 3-D, Navier-Stokes flow solver. The interface is called PROTAIS to denote its introduction of artificial intelligence (AI) concepts to the Proteus code.
Development of MCNPX-ESUT computer code for simulation of neutron/gamma pulse height distribution
NASA Astrophysics Data System (ADS)
Abolfazl Hosseini, Seyed; Vosoughi, Naser; Zangian, Mehdi
2015-05-01
In this paper, the development of the MCNPX-ESUT (MCNPX-Energy Engineering of Sharif University of Technology) computer code for simulation of neutron/gamma pulse height distributions is reported. Since liquid organic scintillators like NE-213 are well suited and routinely used for spectrometry in mixed neutron/gamma fields, this type of detector was selected for simulation in the present study. The proposed algorithm includes four main steps. The first step is the modeling of neutron/gamma transport and interactions with the materials in the environment and the detector volume. In the second step, the number of scintillation photons generated by charged particles such as electrons, alphas, protons, and carbon nuclei in the scintillator material is calculated. In the third step, the transport of scintillation photons in the scintillator and light guide is simulated. Finally, the resolution corresponding to the experiment is applied in the last step of the simulation. Unlike similar computer codes such as SCINFUL, NRESP7, and PHRESP, the developed code is applicable to both neutron and gamma sources; hence, discrimination of neutrons and gammas in mixed fields may be performed using MCNPX-ESUT. The main feature of the MCNPX-ESUT computer code is that the neutron/gamma pulse height simulation may be performed without any post-processing. In the present study, the pulse height distributions due to a monoenergetic neutron/gamma source in an NE-213 detector are simulated using MCNPX-ESUT. The simulated neutron pulse height distributions are validated by comparison with experimental data (Gohil et al., Nuclear Instruments and Methods in Physics Research Section A, 664 (2012) 304-309) and with results obtained from similar computer codes such as SCINFUL, NRESP7, and Geant4. The simulated gamma pulse height distribution for a 137Cs
Solution of 3-dimensional time-dependent viscous flows. Part 2: Development of the computer code
NASA Technical Reports Server (NTRS)
Weinberg, B. C.; Mcdonald, H.
1980-01-01
There is considerable interest in developing a numerical scheme for solving the time-dependent viscous compressible three-dimensional flow equations to aid in the design of helicopter rotors. The development of a computer code to solve a three-dimensional unsteady approximate form of the Navier-Stokes equations employing a linearized block implicit technique in conjunction with a QR operator scheme is described. Results of calculations of several Cartesian test cases are presented. The computer code can be applied to more complex flow fields such as those encountered on rotating airfoils.
Development of a model and computer code to describe solar grade silicon production processes
NASA Technical Reports Server (NTRS)
Gould, R. K.; Srivastava, R.
1979-01-01
Two computer codes were developed for describing flow reactors in which high-purity, solar grade silicon is produced via reduction of gaseous silicon halides. The first is the CHEMPART code, an axisymmetric marching code which treats two-phase flows with models describing detailed gas-phase chemical kinetics, particle formation, and particle growth. It can be used to describe flow reactors in which reactants mix, react, and form a particulate phase. Detailed radial gas-phase composition, temperature, velocity, and particle size distribution profiles are computed, and deposition of heat, momentum, and mass (either particulate or vapor) on reactor walls is described. The second code is a modified version of the GENMIX boundary layer code, used to compute rates of heat, momentum, and mass transfer to the reactor walls. This code lacks the detailed chemical kinetics and particle handling features of the CHEMPART code but runs much more rapidly, while treating the phenomena occurring in the boundary layer in more detail.
Development of a cryogenic mixed fluid J-T cooling computer code, 'JTMIX'
NASA Technical Reports Server (NTRS)
Jones, Jack A.
1991-01-01
An initial study was performed for analyzing and predicting the temperatures and cooling capacities when mixtures of fluids are used in Joule-Thomson coolers and in heat pipes. A computer code, JTMIX, was developed for mixed gas J-T analysis for any fluid combination of neon, nitrogen, various hydrocarbons, argon, oxygen, carbon monoxide, carbon dioxide, and hydrogen sulfide. When used in conjunction with the NIST computer code, DDMIX, it has accurately predicted order-of-magnitude increases in J-T cooling capacities when various hydrocarbons are added to nitrogen, and it predicts nitrogen normal boiling point depressions to as low as 60 K when neon is added.
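A J-T analysis of this kind hinges on isenthalpic expansion: the outlet temperature is the temperature at which the low-pressure enthalpy equals the inlet enthalpy. A sketch using linear interpolation in an invented (T, h) table (a real code such as JTMIX would draw mixture properties from fluid-property software like DDMIX rather than this toy data):

```python
def jt_outlet_temperature(h_table_low_p, h_in):
    # Isenthalpic expansion: find T_out where h(T_out, p_low) = h_in by linear
    # interpolation in a (temperature, enthalpy) table for the low-pressure side.
    for (t1, h1), (t2, h2) in zip(h_table_low_p, h_table_low_p[1:]):
        if h1 <= h_in <= h2:
            return t1 + (h_in - h1) * (t2 - t1) / (h2 - h1)
    raise ValueError("enthalpy outside table range")

# Illustrative low-pressure (T [K], h [kJ/kg]) table, monotonic in both columns.
table = [(60.0, 50.0), (80.0, 80.0), (100.0, 115.0), (120.0, 155.0)]
```

Cooling capacity then follows from the mass flow times the enthalpy difference available at the load temperature, which is where mixture composition enters.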
Summers, R.M.; Cole, R.K. Jr.; Smith, R.C.; Stuart, D.S.; Thompson, S.L.; Hodge, S.A.; Hyman, C.R.; Sanders, R.L.
1995-03-01
MELCOR is a fully integrated, engineering-level computer code that models the progression of severe accidents in light water reactor nuclear power plants. MELCOR is being developed at Sandia National Laboratories for the U.S. Nuclear Regulatory Commission as a second-generation plant risk assessment tool and the successor to the Source Term Code Package. A broad spectrum of severe accident phenomena in both boiling and pressurized water reactors is treated in MELCOR in a unified framework. These include: thermal-hydraulic response in the reactor coolant system, reactor cavity, containment, and confinement buildings; core heatup, degradation, and relocation; core-concrete attack; hydrogen production, transport, and combustion; fission product release and transport; and the impact of engineered safety features on thermal-hydraulic and radionuclide behavior. Current uses of MELCOR include estimation of severe accident source terms and their sensitivities and uncertainties in a variety of applications. This publication of the MELCOR computer code manuals corresponds to MELCOR 1.8.3, released to users in August 1994. Volume 1 contains a primer that describes MELCOR's phenomenological scope, organization (by package), and documentation. The remainder of Volume 1 contains the MELCOR Users Guides, which provide the input instructions and guidelines for each package. Volume 2 contains the MELCOR Reference Manuals, which describe the phenomenological models that have been implemented in each package.
Development of structured ICD-10 and its application to computer-assisted ICD coding.
Imai, Takeshi; Kajino, Masayuki; Sato, Megumi; Ohe, Kazuhiko
2010-01-01
This paper presents: (1) a framework for formal representation of ICD-10, which functions as a bridge between ontological information and natural language expressions; and (2) a methodology for using the formally described ICD-10 for computer-assisted ICD coding. First, we analyzed and structured the meanings of categories in 15 chapters of ICD-10. Then we expanded the structured ICD-10 (S-ICD10) by adding subordinate concepts and labels derived from Japanese Standard Disease Names. The information model used for the formal representation was refined repeatedly; the resultant model includes 74 types of semantic links. We also developed an ICD coding module based on S-ICD10 and a 'Coding Principle,' which achieved high accuracy (>70%) for four chapters. These results not only demonstrate the basic feasibility of our coding framework but might also inform the development of the information model for the formal description framework in the ICD-11 revision.
Development of a Model and Computer Code to Describe Solar Grade Silicon Production Processes
NASA Technical Reports Server (NTRS)
Srivastava, R.; Gould, R. K.
1979-01-01
Mathematical models, and computer codes based on these models, were developed to allow prediction of the product distribution in chemical reactors in which gaseous silicon compounds are converted to condensed-phase silicon. The following tasks were accomplished: (1) formulation of a model for silicon vapor separation/collection from the developing turbulent flow stream within reactors of the Westinghouse type; (2) modification of an available general parabolic code to achieve solutions to the governing partial differential equations (boundary layer type) which describe migration of the vapor to the reactor walls; (3) a parametric study using the boundary layer code to optimize the performance characteristics of the Westinghouse reactor; (4) calculations relating to the collection efficiency of the new AeroChem reactor; and (5) final testing of the modified LAPP code for use as a method of predicting Si(l) droplet sizes in these reactors.
Development of a new generation solid rocket motor ignition computer code
NASA Technical Reports Server (NTRS)
Foster, Winfred A., Jr.; Jenkins, Rhonald M.; Ciucci, Alessandro; Johnson, Shelby D.
1994-01-01
This report presents the results of experimental and numerical investigations of the flow field in the head-end star grain slots of the Space Shuttle Solid Rocket Motor. This work provided the basis for the development of an improved solid rocket motor ignition transient code which is also described in this report. The correlation between the experimental and numerical results is excellent and provides a firm basis for the development of a fully three-dimensional solid rocket motor ignition transient computer code.
Computer algorithm for coding gain
NASA Technical Reports Server (NTRS)
Dodd, E. E.
1974-01-01
A computer algorithm for coding gain was developed for use in an automated communications link design system. Using an empirical formula that defines coding gain as used in space communications engineering, the algorithm is constructed from available performance data for nonsystematic convolutional encoding with soft-decision (eight-level) Viterbi decoding.
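The definition underlying such an algorithm, coding gain as the reduction in required Eb/N0 at a fixed bit-error rate, can be sketched as follows. The tabulated performance values below are illustrative stand-ins for the empirical Viterbi-decoder data the abstract refers to, not the report's actual numbers.

```python
import math

# Coding gain (dB) at a target BER =
#   (Eb/N0 required uncoded) - (Eb/N0 required with coding).
# (BER, required Eb/N0 in dB) pairs, sorted by descending BER; hypothetical values.
UNCODED = [(1e-3, 6.8), (1e-4, 8.4), (1e-5, 9.6)]
CODED = [(1e-3, 3.1), (1e-4, 4.0), (1e-5, 4.6)]  # rate-1/2, K=7, soft-decision (assumed)

def required_ebno(table, ber):
    """Interpolate the required Eb/N0 linearly in log10(BER)."""
    xs = [math.log10(b) for b, _ in table]
    ys = [e for _, e in table]
    x = math.log10(ber)
    for i in range(len(xs) - 1):
        if xs[i] >= x >= xs[i + 1]:
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return ys[i] + t * (ys[i + 1] - ys[i])
    raise ValueError("BER outside tabulated range")

def coding_gain(ber):
    return required_ebno(UNCODED, ber) - required_ebno(CODED, ber)
```

In an automated link-design loop, `coding_gain` would be subtracted from the uncoded Eb/N0 requirement when sizing the link margin.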
NASA Astrophysics Data System (ADS)
Zhang, Shuai; Morita, Koji; Shirakawa, Noriyuki; Yamamoto, Yuichi
The COMPASS code is designed, based on the moving particle semi-implicit (MPS) method, to simulate various complex mesoscale phenomena relevant to core disruptive accidents of sodium-cooled fast reactors. In this study, a computational framework for fluid-solid mixture flow simulations was developed for the COMPASS code. The passively moving solid model was used to simulate hydrodynamic interactions between fluid and solids, while mechanical interactions between solids were modeled by the distinct element method (DEM). A multi-time-step algorithm was introduced to couple these two calculations. The proposed framework was verified by comparison between experimental and numerical studies of a water-dam break with multiple solid rods.
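The multi-time-step coupling described above, one large fluid step spanning several small contact substeps, can be sketched in miniature. The one-dimensional physics below (linear drag standing in for the hydrodynamic force, a stiff penalty spring standing in for DEM contact, unit masses) is a toy under stated assumptions, not the COMPASS models.

```python
def fluid_drag(v, v_fluid=0.0, c=0.5):
    """Hydrodynamic force on a solid (simple linear drag; assumed form)."""
    return c * (v_fluid - v)

def contact_force(x1, x2, k=1.0e4, d0=0.1):
    """DEM-style penalty spring: repulsive force once the gap closes below d0."""
    overlap = d0 - (x2 - x1)
    return k * overlap if overlap > 0 else 0.0

def step_coupled(state, dt_fluid, n_sub=10):
    """One fluid time step containing n_sub DEM substeps (multi-time-step coupling).
    The fluid force is evaluated once and held fixed over the substeps."""
    x1, v1, x2, v2 = state
    f1, f2 = fluid_drag(v1), fluid_drag(v2)
    dt = dt_fluid / n_sub                 # small DEM substep
    for _ in range(n_sub):
        fc = contact_force(x1, x2)
        v1 += (f1 - fc) * dt              # unit masses assumed
        v2 += (f2 + fc) * dt
        x1 += v1 * dt
        x2 += v2 * dt
    return x1, v1, x2, v2

state = (0.0, 1.0, 0.2, -1.0)             # two solids approaching head-on
for _ in range(200):                      # 200 fluid steps of 1 ms each
    state = step_coupled(state, 1.0e-3)
```

The stiff contact spring is what forces the small DEM substep; the fluid step can stay an order of magnitude larger, which is the point of the multi-time-step algorithm.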
Development of a numerical computer code and circuit element models for simulation of firing systems
Carpenter, K.H. (Dept. of Electrical and Computer Engineering)
1990-07-02
Numerical simulation of firing systems requires both an appropriate circuit analysis framework and the special element models required by the application. We have modified the SPICE circuit analysis code (version 2G.6), developed originally at the Electronics Research Laboratory of the University of California, Berkeley, to allow it to be used on MS-DOS-based personal computers and to give it two additional circuit elements needed by firing systems: fuses and saturating inductances. An interactive editor and a batch driver have been written to ease the use of the SPICE program by system designers, and the interactive graphical post-processor NUTMEG, supplied by U.C. Berkeley with SPICE version 3B1, has been interfaced to the output from the modified SPICE. Documentation and installation aids have been provided to make the total software system accessible to PC users. Sample problems show that the resulting code is in agreement with the FIRESET code on which the fuse model was based (with some modifications to the dynamics of scaling fuse parameters). To allow for more complex simulations of firing systems, studies have been made of additional special circuit elements: switches and ferrite-cored inductances. A simple switch model has been investigated that promises to give at least a first approximation to the physical effects of a non-ideal switch, and that can be added to existing SPICE circuits without changing the SPICE code itself. The effect of fast rise-time pulses on ferrites has been studied experimentally to provide a base for future modeling and for incorporating the dynamic effects of changes in core magnetization into the SPICE code. This report contains detailed accounts of the work on these topics performed during the period it covers, and has appendices listing all source code written and documentation produced.
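A common way to model the exploding-fuse element mentioned above (in FIRESET-style codes) is to switch its resistance from a low "intact" value to a high "burst" value once the action integral of i² dt reaches a threshold. The sketch below illustrates that behavior with invented parameter values; it is not the FIRESET model or the report's SPICE element itself.

```python
class Fuse:
    """Exploding-fuse circuit element: low resistance until the action
    integral of i^2 dt reaches a burst threshold, then effectively open."""
    def __init__(self, r_intact=0.01, r_open=1.0e6, action_limit=1000.0):
        self.r_intact, self.r_open = r_intact, r_open
        self.action_limit = action_limit      # burst action in A^2*s (invented)
        self.action = 0.0

    def resistance(self, i, dt):
        self.action += i * i * dt
        return self.r_open if self.action >= self.action_limit else self.r_intact

# Capacitor discharge through the fuse, crude explicit time stepping.
C, V = 1.0e-3, 200.0                          # 1 mF charged to 200 V (illustrative)
fuse, dt, opened_at = Fuse(), 1.0e-6, None
R = fuse.r_intact
for n in range(200000):
    i = V / R
    R = fuse.resistance(i, dt)
    V -= i * dt / C                           # capacitor loses charge
    if opened_at is None and R == fuse.r_open:
        opened_at = n * dt                    # time at which the fuse bursts
```

Once the fuse bursts, the current collapses and the remaining capacitor voltage is frozen, which is the qualitative behavior a firing-system simulation needs from this element.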
Development of a Model and Computer Code to Describe Solar Grade Silicon Production Processes
NASA Technical Reports Server (NTRS)
Srivastava, R.; Gould, R. K.
1979-01-01
The program aims at developing mathematical models and computer codes based on these models, which allow prediction of the product distribution in chemical reactors for converting gaseous silicon compounds to condensed-phase silicon. The major interest is in collecting silicon as a liquid on the reactor walls and other collection surfaces. Two reactor systems are of major interest, a SiCl4/Na reactor in which Si(l) is collected on the flow tube reactor walls and a reactor in which Si(l) droplets formed by the SiCl4/Na reaction are collected by a jet impingement method. During this quarter the following tasks were accomplished: (1) particle deposition routines were added to the boundary layer code; and (2) Si droplet sizes in SiCl4/Na reactors at temperatures below the dew point of Si are being calculated.
NASA Technical Reports Server (NTRS)
Bjork, C.
1981-01-01
The REEDS (rocket exhaust effluent diffusion single layer) computer code is used for the estimation of certain rocket exhaust effluent concentrations and dosages and their distributions near the Earth's surface following a rocket launch event. Output from REEDS is used in producing near real time air quality and environmental assessments of the effects of certain potentially harmful effluents, namely HCl, Al2O3, CO, and NO.
Development of a model and computer code to describe solar grade silicon production processes
NASA Technical Reports Server (NTRS)
Srivastava, R.; Gould, R. K.
1979-01-01
Mathematical models, and computer codes based on these models, were developed that allow prediction of the product distribution in chemical reactors in which gaseous silicon compounds are converted to condensed-phase silicon. The reactors to be modeled are flow reactors in which silane or one of the halogenated silanes is thermally decomposed or reacted with an alkali metal, H2, or H atoms. Because the product of interest is particulate silicon, the processes to be modeled, in addition to mixing and reaction of gas-phase reactants, include the nucleation and growth of condensed Si via coagulation, condensation, and heterogeneous reaction.
Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics
NASA Technical Reports Server (NTRS)
Goodrich, John W.; Dyson, Rodger W.
1999-01-01
The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems are being done to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that
NASA Technical Reports Server (NTRS)
1991-01-01
In recognition of a deficiency in the current modeling capability for seals, an effort was established by NASA to develop verified computational fluid dynamic concepts, codes, and analyses for seals. The objectives were to develop advanced concepts for the design and analysis of seals, to effectively disseminate the information to potential users by way of annual workshops, and to provide experimental verification for the models and codes under a wide range of operating conditions.
NASA Technical Reports Server (NTRS)
Goerke, W. S.
1972-01-01
A manual is presented as an aid in using the STEEP32 code. The code is the EXEC 8 version of the STEEP code (STEEP is an acronym for shock two-dimensional Eulerian elastic plastic). The major steps in a STEEP32 run are illustrated in a sample problem. There is a detailed discussion of the internal organization of the code, including a description of each subroutine.
Computer Code Validation in Electromagnetics
1989-06-01
modeling code. This user perception of validity is based on documentation, peer review, user experience, and computer resource management. Keywords: Electromagnetic environment effects; Electromagnetic interference.
Computer Code Aids Design Of Wings
NASA Technical Reports Server (NTRS)
Carlson, Harry W.; Darden, Christine M.
1993-01-01
The AERO2S computer code was developed to aid design engineers in the selection and evaluation of aerodynamically efficient wing/canard and wing/horizontal-tail configurations that include simple hinged-flap systems. The code rapidly estimates the longitudinal aerodynamic characteristics of conceptual airplane lifting-surface arrangements. It was developed in FORTRAN V on a CDC 6000 computer system and ported to the MS-DOS environment.
NASA Technical Reports Server (NTRS)
1979-01-01
A comprehensive review of all NASA airfoil research, conducted both in-house and under grant and contract, as well as a broad spectrum of airfoil research outside of NASA is presented. Emphasis is placed on the development of computational aerodynamic codes for airfoil analysis and design, the development of experimental facilities and test techniques, and all types of airfoil applications.
NASA Technical Reports Server (NTRS)
Doane, George B., III; Armstrong, Wilbur C.
1991-01-01
Research into component modeling and system synthesis, leading to the analysis of the major types of propulsion system instabilities and the characterization of various components' characteristics, is presented. Last year, several programs designed to run on a PC were developed for Marshall Space Flight Center. These codes covered the low, intermediate, and high frequency modes of oscillation of a liquid rocket propulsion system. No graphics were built into these programs, and only simple piping layouts were supported. This year's effort was to add run-time graphics to the low and intermediate frequency codes, allow new types of piping elements (accumulators, pumps, and split pipes) in the low frequency code, and develop a new code for the PC to generate Nyquist plots.
NASA Technical Reports Server (NTRS)
Shapiro, Wilbur
1996-01-01
This is an overview of new and updated industrial codes for seal design and testing: GCYLT (gas cylindrical seals, turbulent), SPIRALI (spiral-groove seals, incompressible), the KTK (knife-to-knife) labyrinth seal code, and DYSEAL (dynamic seal analysis). GCYLT uses G-factors for Poiseuille and Couette turbulence coefficients. SPIRALI is updated to include turbulence and inertia but maintains the narrow-groove theory. The KTK labyrinth seal code handles straight or stepped seals, and DYSEAL provides dynamics for the seal geometry.
Chen, S. Y.; Yu, C.; Mo, T.; Trottier, C.
2000-10-17
In 1999, the US Nuclear Regulatory Commission (NRC) tasked Argonne National Laboratory to modify the existing RESRAD and RESRAD-BUILD codes to perform probabilistic, site-specific dose analysis for use with the NRC's Standard Review Plan for demonstrating compliance with the license termination rule. The RESRAD codes were developed by Argonne to support the US Department of Energy's (DOE's) cleanup efforts. Through more than a decade of application, the codes have established a large user base in the nation and rigorous QA support. The primary objectives of the NRC task were to: (1) extend the codes' capabilities to include probabilistic analysis, and (2) develop parameter distribution functions and perform probabilistic analysis with the codes. The new codes also contain user-friendly features built around a graphical user interface. In October 2000, the revised RESRAD (version 6.0) and RESRAD-BUILD (version 3.0), together with the user's guide and relevant parameter information, were completed and made available to the general public via the Internet.
Kostin, Mikhail; Mokhov, Nikolai; Niita, Koji
2013-09-25
A parallel computing framework has been developed for use with general-purpose radiation transport codes. The framework was implemented as a C++ module that uses MPI for message passing and is intended to be used with older radiation transport codes implemented in Fortran 77, Fortran 90, or C. The module is largely independent of the radiation transport code it is used with and is connected to the code by means of a number of interface functions. The framework was developed and tested in conjunction with the MARS15 code; it can be used with other codes such as PHITS, FLUKA, and MCNP after certain adjustments. Besides the parallel computing functionality, the framework offers a checkpoint facility that allows restarting calculations from a saved checkpoint file. The checkpoint facility can be used in single-process calculations as well as in the parallel regime. The framework corrects some of the known scheduling and load-balancing problems found in the original implementations of the parallel computing functionality in MARS15 and PHITS, and it can be used efficiently on homogeneous systems and on networks of workstations where interference from other users is possible.
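The checkpoint facility described above can be illustrated in miniature, here in Python with pickle rather than the framework's C++/MPI implementation: periodically save the history counter, the running tallies, and the random-number state, and resume from the last checkpoint file if one exists. The file layout and interval are illustrative assumptions.

```python
import os
import pickle
import random
import tempfile

def run_histories(n_total, ckpt_path, ckpt_every=1000):
    """Transport-style history loop with periodic checkpointing."""
    if os.path.exists(ckpt_path):                  # restart from a checkpoint
        with open(ckpt_path, "rb") as f:
            start, tally, rng_state = pickle.load(f)
        random.setstate(rng_state)
    else:                                          # fresh start
        start, tally = 0, 0.0
        random.seed(12345)
    for n in range(start, n_total):
        tally += random.random()                   # stand-in for one particle history
        if (n + 1) % ckpt_every == 0:
            with open(ckpt_path, "wb") as f:
                pickle.dump((n + 1, tally, random.getstate()), f)
    return tally

tmp = tempfile.mkdtemp()
full = run_histories(5000, os.path.join(tmp, "no_ckpt.pkl"))   # uninterrupted run
partial_path = os.path.join(tmp, "ckpt.pkl")
run_histories(2500, partial_path)                  # run "crashes" after 2500 histories
resumed = run_histories(5000, partial_path)        # restarts at the last checkpoint
```

Because the saved state includes the random-number generator, the resumed run reproduces the uninterrupted run exactly, which is the property a restart facility must guarantee.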
Development of a three-dimensional Navier-Stokes code on CDC star-100 computer
NASA Technical Reports Server (NTRS)
Vatsa, V. N.; Goglia, G. L.
1978-01-01
A three-dimensional code in body-fitted coordinates was developed using MacCormack's algorithm. The code is structured to be compatible with any general configuration, provided that the metric coefficients for the transformation are available. The governing equations are developed in primitive variables in order to facilitate the incorporation of physical boundary conditions and turbulence-closure models. MacCormack's two-step, unsplit, time-marching algorithm is used to solve the unsteady Navier-Stokes equations until a steady-state solution is achieved. Cases discussed include (1) a flat plate in a supersonic free stream; (2) supersonic flow along an axial corner; (3) subsonic flow in an axial corner at M infinity = 0.95; and (4) supersonic flow in an axial corner at M infinity = 1.5.
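MacCormack's two-step predictor-corrector scheme named above can be shown on the one-dimensional linear advection equation; the code itself applies it to the full three-dimensional Navier-Stokes equations, so this is only the time-marching skeleton, with periodic boundaries assumed for simplicity.

```python
import math

def maccormack_step(u, c):
    """One MacCormack step for u_t + a*u_x = 0 on a periodic grid.
    c = a*dt/dx is the CFL number."""
    n = len(u)
    # predictor: forward spatial difference
    up = [u[i] - c * (u[(i + 1) % n] - u[i]) for i in range(n)]
    # corrector: backward difference on the predicted values, then average
    return [0.5 * (u[i] + up[i] - c * (up[i] - up[i - 1])) for i in range(n)]

n, c = 100, 0.5
u = [math.sin(2 * math.pi * i / n) for i in range(n)]
for _ in range(int(n / c)):              # advect exactly once around the domain
    u = maccormack_step(u, c)
exact = [math.sin(2 * math.pi * i / n) for i in range(n)]
err = max(abs(a - b) for a, b in zip(u, exact))
```

After one full traversal the sine wave should return to its initial position; the small residual `err` reflects the scheme's second-order dispersion error.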
Development of Computer Codes to Model Dynamics of the Earth’s Magnetosphere
1989-04-01
feedback effect. Because electrons tend to collect in the field excursions, electrostatic fields also develop. As the amplitude of the disturbance grows ... subsection has been used to investigate tearing mode instabilities in asymmetric tangential discontinuities of the type observed on the dayside ... directions transverse to the magnetic field, large amplitude Langmuir plasma oscillations are seen. This is a feature which is observed in most particle code
Chemical Laser Computer Code Survey,
1980-12-01
DOCUMENTATION: Resonator Geometry Synthesis Code Requirement (V. L. Gamiz); Incorporate General Resonator into Ray Trace Code (W. H. Southwell); Synthesis Code Development (L. R. Stidhm); Optimization Algorithms and Equations. CATEGORY: optics, kinetics, gasdynamics. LEVEL: simple Fabry-Perot, simple saturated gain.
Swarmathon 2017 - Students Develop Computer Code to Support Exploration at Kennedy
2017-04-19
Students from colleges and universities from across the nation recently participated in a robotic programming competition at NASA's Kennedy Space Center in Florida. Their research may lead to technology which will help astronauts find needed resources when exploring the moon or Mars. In the spaceport's second annual Swarmathon competition, aspiring engineers from 20 teams representing 22 minority serving universities and community colleges were invited to develop software code to operate innovative robots called "Swarmies." The event took place April 18-20, 2017, at the Kennedy Space Center Visitor Complex.
NASA Astrophysics Data System (ADS)
Restrepo, Louis Fernando
The close location of most DOE non-reactor nuclear facilities to site boundaries, and the potential for receptors in the proximity of such facilities, make it extremely important to accurately address the impact of plume rise and building wake effects on the consequences to such individuals. Unfortunately, no current single computer code or model adequately addresses the consequences to receptors postulated to be located within the building wake of such facilities. Existing state-of-the-art models have relied on overly simplistic plume rise and parametric wake models that were developed from a very limited amount of data or from assumptions, potentially leading to large errors in calculations. Building wake and plume rise models implemented in existing consequence computer codes have been identified and evaluated, drawing on an extensive literature review of dispersion, transport, and consequence modeling of airborne radioactive material releases that extends over 25 years. This dissertation focuses on the evaluation of existing state-of-the-art parametric building wake dispersion models by the use of computational fluid dynamics (CFD) codes, on developing potential improvements to such models, and on comparing the results of such improvements to those generated by CFD models and by models implemented in state-of-the-art computer codes. It also presents new dispersion models and a new analytical parametric model to deal with transient releases that decay or transform during transport.
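The kind of parametric building-wake correction the dissertation evaluates can be sketched as a standard Gaussian plume whose lateral and vertical spreads are inflated by a term proportional to the building's projected cross-section. The dispersion-coefficient fit and the wake constant below are illustrative choices, not the specific models assessed in the work.

```python
import math

def sigma(x, a=0.08, b=0.0001):
    """Illustrative open-country dispersion coefficient fit, sigma(x) in meters."""
    return a * x / math.sqrt(1.0 + b * x)

def centerline_conc(q, u, x, wake_area=0.0, c=0.5):
    """Centerline concentration at the plume height for a ground-level release.
    q: release rate, u: wind speed, x: downwind distance (m),
    wake_area: projected building area A; spreads inflated to sqrt(s^2 + c*A/pi)."""
    sy = math.sqrt(sigma(x) ** 2 + c * wake_area / math.pi)
    sz = math.sqrt(sigma(x) ** 2 + c * wake_area / math.pi)
    return q / (math.pi * u * sy * sz)

q, u = 1.0, 2.0
no_wake = centerline_conc(q, u, 100.0)
with_wake = centerline_conc(q, u, 100.0, wake_area=400.0)   # 20 m x 20 m building
```

The wake term adds initial dilution, so near-field concentrations drop relative to the unobstructed plume; this is exactly the behavior whose adequacy the dissertation tests against CFD results.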
NASA Technical Reports Server (NTRS)
Collins, Earl R., Jr.
1990-01-01
Authorized users respond to changing challenges with changing passwords. Scheme for controlling access to computers defeats eavesdroppers and "hackers". Based on password system of challenge and password or sign, challenge, and countersign correlated with random alphanumeric codes in matrices of two or more dimensions. Codes stored on floppy disk or plug-in card and changed frequently. For even higher security, matrices of four or more dimensions used, just as cubes compounded into hypercubes in concurrent processing.
Development of a helically coiled tube steam generator model for the SASSYS computer code
Pizzica, P.A.
1994-12-31
A helically coiled steam generator design has been found to provide many advantages when considering the requirements of a liquid-metal reactor (LMR) power plant. A few of these advantages are a smaller number of longer, larger-diameter, thicker-walled tubes; fewer tube-to-tubesheet welds; better accommodation of thermal expansion; compact heat transfer geometry; and the mitigation of departure from nucleate boiling (DNB) effects. Therefore, this type of steam generator was chosen as the reference design for the Advanced Liquid-Metal Reactor (ALMR) project. This design is a vertically oriented, helical coil, sodium-to-water counter-cross-flow shell and tube heat exchanger with water on the tube side. The SASSYS LMR accident analysis computer code has been improved over the last several years by the addition of a number of new component models, one of which is for the steam generator. In addition to this straight-tube model, a new model now treats helically coiled tubes in the steam generator. Both models are available to calculate once-through as well as recirculation-type designs.
NASA Technical Reports Server (NTRS)
Bartels, Robert E.
2012-01-01
This paper presents the implementation of gust modeling capability in the CFD code FUN3D. The gust capability is verified by computing the response of an airfoil to a sharp-edged gust and comparing the result with theory; the present simulations are also compared with other CFD gust simulations. The paper additionally serves as a user's manual for FUN3D gust analyses using a variety of gust profiles. Finally, the development of an Auto-Regressive Moving-Average (ARMA) reduced-order gust model, built in the FUN3D code using a gust with a Gaussian profile, is presented. ARMA-simulated results for a sequence of one-minus-cosine gusts are shown to compare well with the same gust profile computed with FUN3D. Proper Orthogonal Decomposition (POD) is combined with the ARMA modeling technique to predict the time-varying pressure coefficient increment distribution due to a novel gust profile. The aeroelastic response of a pitch/plunge airfoil to a gust environment is computed with the reduced-order model and compared with a direct simulation of the system in the FUN3D code; the two results agree very well.
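The ARMA reduced-order idea can be sketched with a scalar stand-in: fit a low-order difference equation to input/output samples from a "truth" model, then reuse the fitted model to predict the response to a new one-minus-cosine gust. The first-order truth dynamics below are an invented stand-in for the FUN3D response, and the fit is a plain least-squares solve of the 2x2 normal equations.

```python
import math

def one_minus_cosine(n, length):
    """Classic one-minus-cosine gust profile, zero after `length` samples."""
    return 0.5 * (1 - math.cos(2 * math.pi * n / length)) if n < length else 0.0

def simulate_truth(u):
    """Stand-in for the full CFD response: a first-order lag (assumed)."""
    y, out = 0.0, []
    for un in u:
        y = 0.9 * y + 0.1 * un
        out.append(y)
    return out

def fit_arma(u, y):
    """Least-squares fit of y[n] = a*y[n-1] + b*u[n] via the normal equations."""
    idx = range(1, len(y))
    s11 = sum(y[n - 1] ** 2 for n in idx)
    s12 = sum(y[n - 1] * u[n] for n in idx)
    s22 = sum(u[n] ** 2 for n in idx)
    r1 = sum(y[n] * y[n - 1] for n in idx)
    r2 = sum(y[n] * u[n] for n in idx)
    det = s11 * s22 - s12 * s12
    return (r1 * s22 - r2 * s12) / det, (s11 * r2 - s12 * r1) / det

u_train = [one_minus_cosine(n, 40) for n in range(200)]
a, b = fit_arma(u_train, simulate_truth(u_train))

u_new = [one_minus_cosine(n, 25) for n in range(200)]    # a different gust length
y_rom, y = [], 0.0
for un in u_new:
    y = a * y + b * un
    y_rom.append(y)
err = max(abs(p - t) for p, t in zip(y_rom, simulate_truth(u_new)))
```

Because the toy truth model is itself first order, the fitted coefficients recover it essentially exactly; with a real CFD response the ARMA order would be higher and `err` would be small but nonzero.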
NASA Technical Reports Server (NTRS)
Klopfer, Goetz H.
1993-01-01
The work performed during the past year on this cooperative agreement covered two major areas and two lesser ones. The two major items included further development and validation of the Compressible Navier-Stokes Finite Volume (CNSFV) code and providing computational support for the Laminar Flow Supersonic Wind Tunnel (LFSWT). The two lesser items involve a Navier-Stokes simulation of an oscillating control surface at transonic speeds and improving the basic algorithm used in the CNSFV code for faster convergence rates and more robustness. The work done in all four areas is in support of the High Speed Research Program at NASA Ames Research Center.
Using the DEWSBR computer code
Cable, G.D.
1989-09-01
A computer code is described which is designed to determine the fraction of time during which a given ground location is observable from one or more members of a satellite constellation in Earth orbit. Ground visibility parameters are determined from the orientation and strength of an appropriate ionized cylinder (used to simulate a beam experiment) at the selected location. Satellite orbits are computed using a simplified two-body approximation. A variety of printed and graphical outputs is provided. 9 refs., 50 figs., 2 tabs.
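The visibility fraction such a code computes can be illustrated in the simplest case: one satellite in a circular equatorial orbit over a non-rotating Earth, a ground site on the equator, and "visible" meaning the satellite is above the site's horizon. These simplifying assumptions are mine; the actual code adds the ionized-cylinder beam geometry and fuller orbit computation.

```python
import math

EARTH_RADIUS_KM = 6371.0

def visible(central_angle, r_orbit):
    """Above-horizon test: the line of sight clears the Earth when the central
    angle between site and satellite satisfies cos(angle) > Re/r."""
    return math.cos(central_angle) > EARTH_RADIUS_KM / r_orbit

def visibility_fraction(altitude_km, samples=100000):
    """Fraction of the orbit during which the site sees the satellite.
    For a circular orbit the angular rate is uniform, so the fraction of
    orbit angle equals the fraction of time."""
    r = EARTH_RADIUS_KM + altitude_km
    hits = sum(visible(2 * math.pi * k / samples, r) for k in range(samples))
    return hits / samples
```

For this geometry the answer has the closed form acos(Re/r)/pi, which makes the sampled estimate easy to check; a 500 km orbit is visible only about 12% of the time, which is why constellations are needed for continuous coverage.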
Computer access security code system
NASA Technical Reports Server (NTRS)
Collins, Earl R., Jr. (Inventor)
1990-01-01
A security code system for controlling access to computer and computer-controlled entry situations comprises a plurality of subsets of alphanumeric characters disposed in random order in matrices of at least two dimensions forming theoretical rectangles, cubes, etc., such that when access is desired, at least one pair of previously unused character subsets not found in the same row or column of the matrix is chosen at random and transmitted by the computer. The proper response to gain access is transmittal of the subsets that complete the rectangle, and/or a parallelepiped whose opposite corners were defined by the first group of code subsets. Once used, subsets are not used again, to absolutely defeat unauthorized access by eavesdropping and the like.
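For the two-dimensional case, the challenge/response exchange can be sketched as follows. The matrix size, subset length, and alphabet are illustrative choices, not the patent's parameters.

```python
import itertools
import random
import string

def make_matrix(rows, cols, size=3, seed=7):
    """Secret matrix of distinct random character subsets (shared in advance,
    e.g. on a floppy disk or plug-in card per the abstract)."""
    rng = random.Random(seed)
    pool = ["".join(p) for p in itertools.product(string.ascii_uppercase, repeat=size)]
    cells = rng.sample(pool, rows * cols)          # distinct by construction
    return [cells[r * cols:(r + 1) * cols] for r in range(rows)]

def challenge(matrix, rng):
    """Host picks two subsets at diagonally opposite corners of a rectangle;
    the expected response is the subsets at the other two corners."""
    r1, r2 = rng.sample(range(len(matrix)), 2)     # distinct rows and distinct
    c1, c2 = rng.sample(range(len(matrix[0])), 2)  # columns define a rectangle
    return (matrix[r1][c1], matrix[r2][c2]), (matrix[r1][c2], matrix[r2][c1])

def respond(matrix, chal):
    """What a legitimate user holding the matrix computes from the challenge."""
    pos = {matrix[r][c]: (r, c)
           for r in range(len(matrix)) for c in range(len(matrix[0]))}
    (r1, c1), (r2, c2) = pos[chal[0]], pos[chal[1]]
    return (matrix[r1][c2], matrix[r2][c1])

m = make_matrix(4, 4)
chal, expected = challenge(m, random.Random(42))
```

An eavesdropper who captures one exchange learns only four of the matrix cells, and since each subset pair is used once, a replay never succeeds.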
A generalized computer code for developing dynamic gas turbine engine models (DIGTEM)
NASA Technical Reports Server (NTRS)
Daniele, C. J.
1984-01-01
This paper describes DIGTEM (digital turbofan engine model), a computer program that simulates two-spool, two-stream (turbofan) engines. DIGTEM was developed to support the development of a real-time, multiprocessor-based engine simulator being designed at the Lewis Research Center. The turbofan engine model in DIGTEM contains steady-state performance maps for all the components and has control volumes where continuity and energy balances are maintained. Rotor dynamics and duct momentum dynamics are also included. DIGTEM features an implicit integration scheme for integrating stiff systems and trims the model equations to match a prescribed design point by calculating correction coefficients that balance out the dynamic equations. It uses the same coefficients at off-design points and iterates to a balanced engine condition. Transients are generated by defining the engine inputs as functions of time in a user-written subroutine (TMRSP). Closed-loop controls can also be simulated. DIGTEM is generalized in its aerothermodynamic treatment of components. This feature, along with DIGTEM's trimming at a design point, makes it a very useful tool for developing a model of a specific turbofan engine.
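The trimming step described above, computing correction coefficients that balance the dynamic equations at the design point and carrying the same coefficients to off-design conditions, can be sketched on a toy two-state model. The dynamics below are invented for illustration and are not DIGTEM's engine equations.

```python
def f(x, u):
    """Toy stand-in for the engine dynamic equations; the constant offsets
    play the role of imperfect component-map data."""
    n1, n2 = x
    return (u - 0.5 * n1 - 0.1 * n2 + 0.02,
            0.3 * n1 - 0.4 * n2 - 0.03)

x_design, u_design = (1.8, 1.3), 1.0

# Trim: correction coefficients chosen once so that the derivatives vanish
# exactly at the prescribed design point.
c = tuple(-fd for fd in f(x_design, u_design))

def xdot(x, u):
    """Trimmed dynamic equations; the same c is reused at off-design points."""
    return tuple(fi + ci for fi, ci in zip(f(x, u), c))

# Off-design transient: explicit Euler integration back toward the design point.
x = (1.0, 1.0)
for _ in range(2000):
    x = tuple(xi + 0.01 * di for xi, di in zip(x, xdot(x, u_design)))
```

Because the corrections are additive constants, the trimmed model is exactly balanced at the design point yet retains the map-driven dynamics everywhere else, which is the property the abstract credits for DIGTEM's usefulness.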
A generalized computer code for developing dynamic gas turbine engine models (DIGTEM)
NASA Technical Reports Server (NTRS)
Daniele, C. J.
1983-01-01
This paper describes DIGTEM (digital turbofan engine model), a computer program that simulates two-spool, two-stream (turbofan) engines. DIGTEM was developed to support the development of a real-time, multiprocessor-based engine simulator being designed at the Lewis Research Center. The turbofan engine model in DIGTEM contains steady-state performance maps for all the components and has control volumes where continuity and energy balances are maintained. Rotor dynamics and duct momentum dynamics are also included. DIGTEM features an implicit integration scheme for integrating stiff systems and trims the model equations to match a prescribed design point by calculating correction coefficients that balance out the dynamic equations. It uses the same coefficients at off-design points and iterates to a balanced engine condition. Transients are generated by defining the engine inputs as functions of time in a user-written subroutine (TMRSP). Closed-loop controls can also be simulated. DIGTEM is generalized in its aerothermodynamic treatment of components. This feature, along with DIGTEM's trimming at a design point, makes it a very useful tool for developing a model of a specific turbofan engine.
Development of Computational Aeroacoustics Code for Jet Noise and Flow Prediction
NASA Technical Reports Server (NTRS)
Keith, Theo G., Jr.; Hixon, Duane R.
2002-01-01
Accurate prediction of jet fan and exhaust plume flow and noise generation and propagation is very important in developing advanced aircraft engines that will pass current and future noise regulations. In jet fan flows, as well as in exhaust plumes, two major sources of noise are present: large-scale coherent instabilities and small-scale turbulent eddies. In previous work for the NASA Glenn Research Center, three strategies have been explored in an effort to computationally predict the noise radiation from supersonic jet exhaust plumes. In order from the least to the most computationally expensive, these are: (1) linearized Euler equations (LEE); (2) Very Large Eddy Simulations (VLES); (3) Large Eddy Simulations (LES). The first method solves the linearized Euler equations, obtained by linearizing about a given mean flow and neglecting viscous effects. In this way, the noise from large-scale instabilities can be found for a given mean flow. The linearized Euler equations are computationally inexpensive and have produced good noise results for supersonic jets, where the large-scale instability noise dominates, as well as for the tone noise from a jet engine blade row. However, these linear equations do not predict the absolute magnitude of the noise; only the relative magnitude is predicted. Also, the predicted disturbances do not modify the mean flow, removing a physical mechanism by which the amplitude of the disturbance may be controlled. Recent research for isolated airfoils indicates that this may not affect the solution greatly at low frequencies. The second method addresses some of the concerns raised by the LEE method. In this approach, called Very Large Eddy Simulation (VLES), the unsteady Reynolds-averaged Navier-Stokes equations are solved directly using a high-accuracy computational aeroacoustics numerical scheme. With the addition of a two-equation turbulence model and the use of a relatively
Development of Computational Aeroacoustics Code for Jet Noise and Flow Prediction
NASA Astrophysics Data System (ADS)
Keith, Theo G., Jr.; Hixon, Duane R.
2002-07-01
Accurate prediction of jet fan and exhaust plume flow and noise generation and propagation is very important in developing advanced aircraft engines that will pass current and future noise regulations. In jet fan flows as well as exhaust plumes, two major sources of noise are present: large-scale, coherent instabilities and small-scale turbulent eddies. In previous work for the NASA Glenn Research Center, three strategies have been explored in an effort to computationally predict the noise radiation from supersonic jet exhaust plumes. In order from the least expensive computationally to the most expensive computationally, these are: 1) Linearized Euler equations (LEE). 2) Very Large Eddy Simulations (VLES). 3) Large Eddy Simulations (LES). The first method solves the linearized Euler equations (LEE). These equations are obtained by linearizing about a given mean flow and neglecting viscous effects. In this way, the noise from large-scale instabilities can be found for a given mean flow. The linearized Euler equations are computationally inexpensive, and have produced good noise results for supersonic jets where the large-scale instability noise dominates, as well as for the tone noise from a jet engine blade row. However, these linear equations do not predict the absolute magnitude of the noise; instead, only the relative magnitude is predicted. Also, the predicted disturbances do not modify the mean flow, removing a physical mechanism by which the amplitude of the disturbance may be controlled. Recent research for isolated airfoils indicates that this may not affect the solution greatly at low frequencies. The second method addresses some of the concerns raised by the LEE method. In this approach, called Very Large Eddy Simulation (VLES), the unsteady Reynolds averaged Navier-Stokes equations are solved directly using a high-accuracy computational aeroacoustics numerical scheme. With the addition of a two-equation turbulence model and the use of a relatively
New coding technique for computer generated holograms.
NASA Technical Reports Server (NTRS)
Haskell, R. E.; Culver, B. C.
1972-01-01
A coding technique is developed for recording computer generated holograms on a computer controlled CRT in which each resolution cell contains two beam spots of equal size and equal intensity. This provides a binary hologram in which only the position of the two dots is varied from cell to cell. The amplitude associated with each resolution cell is controlled by selectively diffracting unwanted light into a higher diffraction order. The recording of the holograms is fast and simple.
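The two equal-intensity spots per cell can be illustrated with a double-phase decomposition: any complex amplitude of modulus at most one is the mean of two unit phasors, and the in-cell positions of the two dots encode those two phases. This is a generic sketch of the idea, not necessarily the authors' exact coding rule.

```python
import numpy as np

def two_dot_phases(a):
    """Split a complex amplitude a (|a| <= 1) into the phases of two
    equal-intensity dots; the dot positions within a cell encode them."""
    A = np.clip(np.abs(a), 0.0, 1.0)
    phi = np.angle(a)
    delta = np.arccos(A)   # half the phase separation sets the amplitude
    return phi + delta, phi - delta

def cell_response(p1, p2):
    """Field reconstructed from one cell: the mean of the two unit phasors."""
    return 0.5 * (np.exp(1j * p1) + np.exp(1j * p2))
```

Because exp(i(phi+delta)) + exp(i(phi-delta)) = 2 cos(delta) exp(i phi), the cell reproduces the target amplitude exactly while each dot keeps unit intensity; the residual light ends up in higher diffraction orders, consistent with the abstract's description.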
Seals Code Development Workshop
NASA Technical Reports Server (NTRS)
Hendricks, Robert C. (Compiler); Liang, Anita D. (Compiler)
1996-01-01
The 1995 Seals Workshop covered the industrial code (INDSEAL) release, which includes ICYL, GCYLT, IFACE, GFACE, SPIRALG, SPIRALI, DYSEAL, and KTK. The scientific code (SCISEAL) release includes conjugate heat transfer and multidomain modeling with rotordynamic capability. Several seals and bearings codes (e.g., HYDROFLEX, HYDROTRAN, HYDROB3D, FLOWCON1, FLOWCON2) are presented and their results compared. Current computational and experimental emphasis includes multiply connected cavity flows, with goals of reducing parasitic losses and gas ingestion. Labyrinth seals continue to play a significant role in sealing, with face, honeycomb, and new sealing concepts under investigation for advanced engine concepts in view of strict environmental constraints. The clean-sheet approach to engine design is advocated, with program directions and anticipated percentage SFC reductions cited. Future activities center on engine applications with coupled seal/power/secondary flow streams.
Lee, Y.J.; Dalpiaz, E.L.
1997-08-01
A computer code, WTVFE (Waste Tank Ventilation Flow Evaluation), has been developed to evaluate the ventilation requirement for an underground storage tank for radioactive waste. Heat generated by the radioactive waste and mixing pumps in the tank is removed mainly through the ventilation system. The heat removal process by the ventilation system includes the evaporation of water from the waste and heat transfer by natural convection from the waste surface. Also, a portion of the heat will be removed through the soil and the air circulating through the gap between the primary and secondary tanks. The heat loss caused by evaporation is modeled based on recent evaporation test results by the Westinghouse Hanford Company using a simulated small-scale waste tank. Other heat transfer phenomena are evaluated based on well-established conduction and convection heat transfer relationships. 10 refs., 3 tabs.
Computer design code for conical ribbon parachutes
Waye, D.E.
1986-01-01
An interactive computer design code has been developed to aid in the design of conical ribbon parachutes. The program is written to include single conical and polyconical parachute designs. The code determines the pattern length, vent diameter, radial length, ribbon top and bottom lengths, and geometric local and average porosity for the designer with inputs of constructed diameter, ribbon widths, ribbon spacings, radial width, and number of gores. The gores are designed with one mini-radial in the center with an option for the addition of two outer mini-radials. The output provides all of the dimensions necessary for the construction of the parachute. These results could also be used as input into other computer codes used to predict parachute loads.
Efficient tree codes on SIMD computer architectures
NASA Astrophysics Data System (ADS)
Olson, Kevin M.
1996-11-01
This paper describes changes made to a previous implementation of an N-body tree code developed for a fine-grained, SIMD computer architecture. These changes include (1) switching from a balanced binary tree to a balanced oct tree, (2) addition of quadrupole corrections, and (3) having the particles search the tree in groups rather than individually. An algorithm for limiting errors is also discussed. In aggregate, these changes have led to a performance increase of over a factor of 10 compared to the previous code. For problems several times larger than the processor array, the code now achieves performance levels of ~1 Gflop on the MasPar MP-2, or roughly 20% of the quoted peak performance of this machine. This percentage is competitive with other parallel implementations of tree codes on MIMD architectures. This is significant, considering the low relative cost of SIMD architectures.
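A minimal monopole-only oct tree in the Barnes-Hut style illustrates the data structure and the opening criterion underlying such codes; the paper's implementation adds quadrupole corrections and group-based tree searches, which are omitted here, and all names and parameter values are illustrative.

```python
import numpy as np

THETA = 0.5  # opening criterion: accept a cell when size/distance < THETA

class Node:
    def __init__(self, center, size):
        self.center, self.size = np.asarray(center, float), size
        self.mass, self.com = 0.0, np.zeros(3)
        self.children, self.body = [], None

def build(pts, masses, idx, center, size):
    """Recursively build an oct tree over particle indices idx."""
    node = Node(center, size)
    if len(idx) == 1:
        i = idx[0]
        node.mass, node.com, node.body = masses[i], pts[i], i
        return node
    octants = {}
    for i in idx:  # classify each particle into one of eight octants
        octants.setdefault(tuple(pts[i] > node.center), []).append(i)
    for key, sub in octants.items():
        off = (np.array(key, float) - 0.5) * (size / 2.0)
        node.children.append(build(pts, masses, sub, node.center + off, size / 2.0))
    node.mass = sum(c.mass for c in node.children)
    node.com = sum(c.mass * c.com for c in node.children) / node.mass
    return node

def accel(node, p):
    """Gravitational acceleration at point p (G = 1), monopole only."""
    d = node.com - p
    r = np.linalg.norm(d)
    if node.body is not None or node.size / r < THETA:
        return node.mass * d / r**3  # leaf, or cell far enough to accept
    return sum((accel(c, p) for c in node.children), np.zeros(3))
```

A quick check against direct summation for a point well outside a random particle cloud shows the monopole walk reproducing the direct force closely; tightening THETA trades speed for accuracy.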
Eslinger, Paul W.; Aaberg, Rosanne L.; Lopresti, Charles A.; Miley, Terri B.; Nichols, William E.; Strenge, Dennis L.
2004-09-14
This document contains detailed user instructions for the suite of utility codes developed for Rev. 1 of the Systems Assessment Capability; the suite performs many of the functions supporting that capability.
An Object-Oriented Approach to Writing Computational Electromagnetics Codes
NASA Technical Reports Server (NTRS)
Zimmerman, Martin; Mallasch, Paul G.
1996-01-01
Presently, most computer software development in the Computational Electromagnetics (CEM) community employs the structured programming paradigm, particularly using the Fortran language. Other segments of the software community began switching to an Object-Oriented Programming (OOP) paradigm in recent years to help ease design and development of highly complex codes. This paper examines design of a time-domain numerical analysis CEM code using the OOP paradigm, comparing OOP code and structured programming code in terms of software maintenance, portability, flexibility, and speed.
Implementing a modular system of computer codes
Vondy, D.R.; Fowler, T.B.
1983-07-01
A modular computation system has been developed for nuclear reactor core analysis. The codes can be applied repeatedly in blocks without extensive user input data, as needed for reactor history calculations. The primary control options over the calculational paths and task assignments within the codes are blocked separately from other instructions, admitting ready access by user input instruction or directions from automated procedures and promoting flexible and diverse applications at minimum application cost. Data interfacing is done under formal specifications with data files manipulated by an informed manager. This report emphasizes the system aspects and the development of useful capability; it is intended to inform anyone developing a sophisticated modular code system. Overall, the report summarizes, in a general way, the many factors and difficulties faced in making reactor core calculations, based on the experience of the authors. It provides the background on which work on HTGR reactor physics is being carried out.
NASA Technical Reports Server (NTRS)
Kaul, Upender K.
2005-01-01
A three-dimensional numerical solver based on finite-difference solution of three-dimensional elastodynamic equations in generalized curvilinear coordinates has been developed and used to generate data such as radial and tangential stresses over various gear component geometries under rotation. The geometries considered are an annulus, a thin annular disk, and a thin solid disk. The solution is based on first principles and does not involve lumped parameter or distributed parameter systems approach. The elastodynamic equations in the velocity-stress formulation that are considered here have been used in the solution of problems of geophysics where non-rotating Cartesian grids are considered. For arbitrary geometries, these equations along with the appropriate boundary conditions have been cast in generalized curvilinear coordinates in the present study.
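The velocity-stress formulation referenced above can be illustrated in one dimension with a staggered-grid finite-difference update. The actual solver is three-dimensional and works in generalized curvilinear coordinates, so this is only a structural sketch with assumed material values and grid parameters.

```python
import numpy as np

# 1-D elastic wave in velocity-stress form:
#   rho * dv/dt = d(sigma)/dx,   d(sigma)/dt = E * dv/dx
n, dx = 400, 1.0
rho, E = 1.0, 1.0
c = np.sqrt(E / rho)        # elastic wave speed
dt = 0.5 * dx / c           # CFL-stable time step (Courant number 0.5)

v = np.zeros(n + 1)         # velocities on nodes (ends held fixed)
# Initial stress: a Gaussian pulse centered at cell 200, width 10 cells.
s = np.exp(-0.5 * ((np.arange(n) - 200) / 10.0) ** 2)

for _ in range(200):        # staggered leapfrog updates
    v[1:-1] += dt / (rho * dx) * (s[1:] - s[:-1])
    s += dt * E / dx * (v[1:] - v[:-1])
```

With zero initial velocity the pulse splits, d'Alembert style, into two half-amplitude waves travelling left and right at speed c; after 200 steps (ct = 100 cells) the peaks sit near cells 100 and 300.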
Computer Code for Nanostructure Simulation
NASA Technical Reports Server (NTRS)
Filikhin, Igor; Vlahovic, Branislav
2009-01-01
Due to their small size, nanostructures can have stress and thermal gradients that are larger than any macroscopic analogue. These gradients can lead to specific regions that are susceptible to failure via processes such as plastic deformation by dislocation emission, chemical debonding, and interfacial alloying. A program has been developed that rigorously simulates and predicts optoelectronic properties of nanostructures of virtually any geometrical complexity and material composition. It can be used in simulations of energy level structure, wave functions, density of states of spatially configured phonon-coupled electrons, excitons in quantum dots, quantum rings, quantum ring complexes, and more. The code can be used to calculate stress distributions and thermal transport properties for a variety of nanostructures and interfaces, transport and scattering at nanoscale interfaces and surfaces under various stress states, and alloy compositional gradients. The code allows users to perform modeling of charge transport processes through quantum-dot (QD) arrays as functions of inter-dot distance, array order versus disorder, QD orientation, shape, size, and chemical composition for applications in photovoltaics and physical properties of QD-based biochemical sensors. The code can be used to study the hot exciton formation/relaxation dynamics in arrays of QDs of different shapes and sizes at different temperatures. It also can be used to understand the relation among the deposition parameters and inherent stresses, strain deformation, heat flow, and failure of nanostructures.
Present state of the SOURCES computer code
Shores, E. F.
2002-01-01
In various stages of development for over two decades, the SOURCES computer code continues to calculate neutron production rates and spectra from four types of problems: homogeneous media, two-region interfaces, three-region interfaces and that of a monoenergetic alpha particle beam incident on a slab of target material. Graduate work at the University of Missouri - Rolla, in addition to user feedback from a tutorial course, provided the impetus for a variety of code improvements. Recently upgraded to version 4B, initial modifications to SOURCES focused on updates to the 'tape5' decay data library. Shortly thereafter, efforts focused on development of a graphical user interface for the code. This paper documents the Los Alamos SOURCES Tape1 Creator and Library Link (LASTCALL) and describes additional library modifications in more detail. Minor improvements and planned enhancements are discussed.
Epetra developers coding guidelines.
Heroux, Michael Allen; Sexton, Paul Michael
2003-12-01
Epetra is a package of classes for the construction and use of serial and distributed parallel linear algebra objects. It is one of the base packages in Trilinos. This document describes guidelines for Epetra coding style. The issues discussed here go beyond correct C++ syntax to address issues that make code more readable and self-consistent. The guidelines presented here are intended to aid current and future development of Epetra specifically. They reflect design decisions that were made in the early development stages of Epetra. Some of the guidelines are contrary to more commonly used conventions, but we choose to continue these practices for the purposes of self-consistency. These guidelines are intended to be complementary to policies established in the Trilinos Developers Guide.
Computer-Based Coding of Occupation Codes for Epidemiological Analyses
Russ, Daniel E.; Ho, Kwan-Yuet; Johnson, Calvin A.; Friesen, Melissa C.
2014-01-01
Mapping job titles to Standard Occupational Classification (SOC) codes is an important step in evaluating changes in health risks over time as measured in inspection databases. However, manual SOC coding is cost prohibitive for very large studies. Computer-based SOC coding systems can improve the efficiency of incorporating occupational risk factors into large-scale epidemiological studies. We present a novel method of mapping verbatim job titles to SOC codes using a large table of prior knowledge available in the public domain that includes detailed descriptions of the tasks and activities, and their synonyms, relevant to each SOC code. Job titles are compared to our knowledge base to find the closest matching SOC code. A soft Jaccard index is used to measure the similarity between a previously unseen job title and the knowledge base. Additional information such as standardized industrial codes can be incorporated to improve the SOC code determination by providing additional context to break ties in matches. PMID:25221787
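One plausible formulation of the soft Jaccard matching described above is sketched below: tokens of the job title are softly matched against a code's task-description tokens using a character-level similarity, and a match counts toward the intersection only when it clears a threshold. The knowledge base, threshold, and similarity function here are illustrative assumptions; the paper's exact formulation may differ.

```python
from difflib import SequenceMatcher

def token_sim(a, b):
    """Character-level similarity between two tokens, in [0, 1]."""
    return SequenceMatcher(None, a, b).ratio()

def soft_jaccard(query_tokens, kb_tokens, threshold=0.8):
    """Soft Jaccard index: tokens 'match' when similarity clears a threshold."""
    inter = 0.0
    for q in query_tokens:
        best = max((token_sim(q, t) for t in kb_tokens), default=0.0)
        if best >= threshold:
            inter += best
    union = len(query_tokens) + len(kb_tokens) - inter
    return inter / union if union else 0.0

def assign_soc(job_title, knowledge_base):
    """Return the SOC code whose task-description tokens best match the title."""
    toks = job_title.lower().split()
    return max(knowledge_base,
               key=lambda code: soft_jaccard(toks, knowledge_base[code]))

# Hypothetical two-entry knowledge base (real SOC tables are far larger).
KB = {
    "15-1132": ["software", "developers", "applications", "programmer"],
    "47-2061": ["construction", "laborers", "site", "preparation"],
}
```

The soft matching lets "software" in a verbatim title absorb minor misspellings or inflections that an exact-set Jaccard index would miss.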
NASA Astrophysics Data System (ADS)
Ramamoorthy, Karthikeyan
The main aim of this research is the development and validation of computational schemes for advanced lattice codes. The advanced lattice code that forms the primary part of this research is "DRAGON Version4". The code has unique features like self shielding calculation with capabilities to represent distributed and mutual resonance shielding effects, leakage models with space-dependent isotropic or anisotropic streaming effect, availability of the method of characteristics (MOC), burnup calculation with reaction-detailed energy production etc. Qualified reactor physics codes are essential for the study of all existing and envisaged designs of nuclear reactors. Any new design would require a thorough analysis of all the safety parameters and burnup-dependent behaviour. Any reactor physics calculation requires the estimation of neutron fluxes in various regions of the problem domain. The calculation goes through several levels before the desired solution is obtained. Each level of the lattice calculation has its own significance and any compromise at any step will lead to a poor final result. The various levels include: choice of nuclear data library and energy group boundaries into which the multigroup library is cast; self shielding of nuclear data depending on the heterogeneous geometry and composition; tracking of geometry, keeping error in volume and surface to an acceptable minimum; generation of regionwise and groupwise collision probabilities or MOC-related information and their subsequent normalization; solution of the transport equation using the previously generated groupwise information and obtaining the fluxes and reaction rates in various regions of the lattice; depletion of fuel and of other materials based on normalization with constant power or constant flux. Of the above mentioned levels, the present research will mainly focus on two aspects, namely self shielding and depletion. The behaviour of the system is determined by composition of resonant
2003-02-10
HYCOM code development. Alan J. Wallcraft, Naval Research Laboratory. Presented at the 2003 Layered Ocean Model Users' Workshop (LOM 2003), Miami, FL, February 10, 2003; distribution unlimited. Developments noted include a Kraus-Turner mixed layer, an Energy-Loan (passive) ice model, high-frequency atmospheric forcing, a new I/O scheme (.a and .b files), and scalability.
NASA Technical Reports Server (NTRS)
Sawyer, Scott
2004-01-01
The BASS computational aeroacoustic code solves the fully nonlinear Euler equations in the time domain in two dimensions. The acoustic response of the stator is determined simultaneously for the first three harmonics of the convected vortical gust of the rotor. The spatial mode generation, propagation and decay characteristics are predicted by assuming the acoustic field away from the stator can be represented as a uniform flow with small harmonic perturbations superimposed. The computed field is then decomposed using a joint temporal-spatial transform to determine the wave amplitudes as a function of rotor harmonic and spatial mode order. This report details the following technical aspects of the computations and analysis: 1) the BASS computational technique; 2) the application of periodic time-shifted boundary conditions; 3) the linear theory aspects unique to rotor-stator interactions; and 4) the joint spatial-temporal transform. The computational results presented herein are twofold. The fan under consideration, like modern fans, is cut off at the blade passing frequency (BPF), so propagating acoustic waves are only expected at 2BPF and 3BPF. In the first case, the computations showed excellent agreement with linear theory predictions. The frequency and spatial mode order of the acoustic field were computed and found consistent with linear theory. Further, the propagation of the generated modes was also correctly predicted. The upstream-going waves propagated from the domain without reflection from the inflow boundary. However, reflections from the outflow boundary were noticed; the amplitude of the reflected wave was approximately 5% of the incident wave. The second set of computations was used to determine the influence of steady loading on the generated noise. Toward this end, the acoustic response was determined with three steady loading
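The joint temporal-spatial decomposition described above amounts to a two-dimensional Fourier transform over time and circumferential angle: each spinning mode shows up as a peak at its (rotor harmonic, spatial mode order) index pair. The sketch below recovers one such peak from synthetic data; the mode numbers, amplitude, and shaft frequency are invented for illustration, not taken from the report.

```python
import numpy as np

Nt, Nth = 64, 64
Omega = 1.0                                    # rotor shaft frequency (assumed)
t = 2 * np.pi / (Omega * Nt) * np.arange(Nt)   # one shaft period
theta = 2 * np.pi / Nth * np.arange(Nth)       # full circumference
T, TH = np.meshgrid(t, theta, indexing="ij")

# Synthetic pressure field: one spinning mode, order m = 4, harmonic n = 2.
p = 0.7 * np.cos(4 * TH - 2 * Omega * T)

# Joint temporal-spatial transform: a 2-D DFT over (t, theta).
P = np.fft.fft2(p) / (Nt * Nth)

# A real cosine splits into two conjugate peaks of magnitude 0.35; search the
# positive-frequency temporal half to locate one of them unambiguously.
n_idx, m_idx = np.unravel_index(np.argmax(np.abs(P[:Nt // 2])),
                                (Nt // 2, Nth))
```

Under NumPy's FFT sign convention the spatial index Nth - 4 = 60 corresponds to mode order m = 4 for this temporal harmonic, so the peak lands at (2, 60); reading wave amplitudes per harmonic and mode order off such a transform is the essence of the decomposition.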
Computational electromagnetics: Codes and capabilities
Sprouse, D.
1997-03-01
Livermore's EM field experts study and model wave phenomena covering almost the entire electromagnetic spectrum. Applications are as varied as the wavelengths of interest: particle accelerator components, material science and pulsed power subsystems, photonic and optoelectronic devices, aerospace and radar systems, and microwave and microelectronics devices. Building from its seminal work on time-domain algorithms in the 1960s, the Laboratory has fashioned top-notch resources in electromagnetic and electronics modeling and characterization. Using Laboratory-developed two-dimensional and three-dimensional EM field and propagation modeling codes and EM measurement facilities, Livermore personnel can evaluate, design, fabricate, and test a wide range of accelerator systems and both impulse and continuous-wave RF (radio-frequency) microwave systems.
Clouse, C. J.; Edwards, M. J.; McCoy, M. G.; Marinak, M. M.; Verdon, C. P.
2015-07-07
Through its Advanced Scientific Computing (ASC) and Inertial Confinement Fusion (ICF) code development efforts, Lawrence Livermore National Laboratory (LLNL) provides a world-leading numerical simulation capability for the National HED/ICF program in support of the Stockpile Stewardship Program (SSP). In addition, the ASC effort provides high performance computing platform capabilities upon which these codes are run. LLNL remains committed to, and will work with, the national HED/ICF program community to help ensure that numerical simulation needs are met and to make those capabilities available, consistent with programmatic priorities and available resources.
Computing Challenges in Coded Mask Imaging
NASA Technical Reports Server (NTRS)
Skinner, Gerald
2009-01-01
This slide presentation reviews the complications and challenges in developing computer systems for coded mask imaging telescopes. The coded mask technique is used when there is no other way to build the telescope (i.e., when wide fields of view and very good angular resolution are required at energies too high for focusing optics and too low for the Compton/tracker techniques). The coded mask telescope is described, and the mask itself is reviewed. The coded masks for the INTErnational Gamma-Ray Astrophysics Laboratory (INTEGRAL) instruments are shown, along with a chart of the types of position-sensitive detectors used in coded mask telescopes. Slides describe the mechanism of recovering an image from the masked pattern, including correlation with the mask pattern. The matrix approach is reviewed, and other approaches to image reconstruction are described. Also included is a review of the Energetic X-ray Imaging Survey Telescope (EXIST) / High Energy Telescope (HET), with information about the mission, the operation of the telescope, a comparison of EXIST/HET with SWIFT/BAT, and details of the design of EXIST/HET.
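The correlation-based image recovery mentioned above can be demonstrated in a few lines: the detector records the sky convolved with the mask pattern, and cross-correlating the detector image with the mask recovers the source position as the autocorrelation peak of the mask. A random open/closed mask is used here for simplicity; real instruments such as those on INTEGRAL use optimized patterns (e.g., uniformly redundant arrays) with much flatter sidelobes.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 32
mask = (rng.random((n, n)) < 0.5).astype(float)   # random open/closed mask

# Sky with a single point source at (i0, j0).
i0, j0 = 10, 20
sky = np.zeros((n, n))
sky[i0, j0] = 1.0

# Detector image: cyclic convolution of the sky with the mask pattern.
D = np.fft.ifft2(np.fft.fft2(sky) * np.fft.fft2(mask)).real

# Reconstruction: cross-correlate the detector image with the mask.
R = np.fft.ifft2(np.fft.fft2(D) * np.conj(np.fft.fft2(mask))).real
peak = np.unravel_index(np.argmax(R), R.shape)
```

The reconstruction equals the mask autocorrelation shifted to the source position, so its maximum lands at (i0, j0) atop a flat background of sidelobes.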
NASA Technical Reports Server (NTRS)
Marconi, F.; Yaeger, L.
1976-01-01
A numerical procedure was developed to compute the inviscid super/hypersonic flow field about complex vehicle geometries accurately and efficiently. A second-order accurate finite difference scheme is used to integrate the three-dimensional Euler equations in regions of continuous flow, while all shock waves are computed as discontinuities via the Rankine-Hugoniot jump conditions. Conformal mappings are used to develop a computational grid. The effects of blunt nose entropy layers are computed in detail. Real gas effects for equilibrium air are included using curve fits of Mollier charts. Typical calculated results for shuttle orbiter, hypersonic transport, and supersonic aircraft configurations are included to demonstrate the usefulness of this tool.
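For a normal shock in a perfect gas, the Rankine-Hugoniot jump conditions used to fit the shocks reduce to the standard closed-form relations below. In the actual code, equilibrium-air curve fits of Mollier charts replace the constant specific-heat ratio assumed here.

```python
def normal_shock(M1, gamma=1.4):
    """Pressure ratio, density ratio, and post-shock Mach number across a
    normal shock, from the Rankine-Hugoniot relations for a perfect gas."""
    p_ratio = 1.0 + 2.0 * gamma / (gamma + 1.0) * (M1**2 - 1.0)
    rho_ratio = (gamma + 1.0) * M1**2 / ((gamma - 1.0) * M1**2 + 2.0)
    M2 = (((gamma - 1.0) * M1**2 + 2.0)
          / (2.0 * gamma * M1**2 - (gamma - 1.0))) ** 0.5
    return p_ratio, rho_ratio, M2
```

At Mach 2 in air (gamma = 1.4) these give the textbook values p2/p1 = 4.5, rho2/rho1 = 8/3, and a subsonic post-shock Mach number of about 0.577.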
Dual-code quantum computation model
NASA Astrophysics Data System (ADS)
Choi, Byung-Soo
2015-08-01
In this work, we propose the dual-code quantum computation model—a fault-tolerant quantum computation scheme which alternates between two different quantum error-correction codes. Since the chosen two codes have different sets of transversal gates, we can implement a universal set of gates transversally, thereby reducing the overall cost. We use code teleportation to convert between quantum states in different codes. The overall cost is decreased if code teleportation requires fewer resources than the fault-tolerant implementation of the non-transversal gate in a specific code. To analyze the cost reduction, we investigate two cases with different base codes, namely the Steane and Bacon-Shor codes. For the Steane code, neither the proposed dual-code model nor another variation of it achieves any cost reduction since the conventional approach is simple. For the Bacon-Shor code, the three proposed variations of the dual-code model reduce the overall cost. However, as the encoding level increases, the cost reduction decreases and becomes negative. Therefore, the proposed dual-code model is advantageous only when the encoding level is low and the cost of the non-transversal gate is relatively high.
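The cost argument above can be made concrete with a toy resource count: the dual-code scheme wins when a round trip of code teleportations plus a transversal gate in the second code costs less than one fault-tolerant construction (e.g., magic-state distillation) of the non-transversal gate in the base code. All numbers and cost units here are illustrative, not taken from the paper.

```python
def single_code_cost(n_gates, n_hard, gate_cost, hard_gate_cost):
    """Every gate transversal except the hard one, which needs a costly
    fault-tolerant construction in the single base code."""
    return n_gates * gate_cost + n_hard * hard_gate_cost

def dual_code_cost(n_gates, n_hard, gate_cost, teleport_cost):
    """Hard gate done transversally in the second code; each use pays for a
    teleportation into that code and one back."""
    return n_gates * gate_cost + n_hard * (2 * teleport_cost + gate_cost)
```

With cheap teleportation the dual-code total drops below the single-code total, and with expensive teleportation the ordering reverses, mirroring the abstract's conclusion that the advantage exists only while the non-transversal gate is relatively costly.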
Fallout Computer Codes. A Bibliographic Perspective
1994-07-01
This report reviews the features and differences among the major radioactive fallout models and computer codes that are either in current use or that form the basis for more contemporary codes and other computational tools. The DELFIC, WSEG-10, KDFOC2, SEER3, and DNAF-1 codes and the EM-1 model are addressed. One model calculates g(t) by assuming that fallout descends from a nuclear cloud that is characterized initially by a Gaussian distribution.
Overview of CODE V development
NASA Astrophysics Data System (ADS)
Harris, Thomas I.
1991-01-01
This paper is part of a session aimed at briefly describing some of today's optical design software packages, with emphasis on each program's philosophy and technology. CODE V is the ongoing result of a development process that began in the 1960s; it is now the result of many people's efforts. This paper summarizes the roots of the program, some of its history, and the dominant philosophies and technologies that have contributed to its usefulness and that drive its continued development. ROOTS OF CODE V: Conceived in the early 60s, at a time when there was skepticism that "automatic design" could design lenses equal to or better than "hand" methods. The concepts underlying CODE V and its predecessors were based on ten years of experience and exposure to the problems of a group of lens designers in a design-for-manufacture environment. The basic challenge was to show that lens design could be done better, easier, and faster with high-quality computer-assisted design tools. The earliest development was for our own use as an engineering services organization: an in-house tool for custom design. As a tool it had to make us efficient in providing lens design and engineering services as a self-sustaining business. PHILOSOPHY OF OPTIMIZATION IN CODE V: Error function formation. Based on experience as a designer, we felt very strongly that there should be a clear separation of
Developing a forensic dental code and programme.
Pierce, L; Lindsay, J; Lautenschlager, E P; Smith, E S; Harcourt, J K
1982-02-01
An attempt was made to develop a computer-assisted programme to aid in the positive identification of human remains. The first and second generations of a computer code and programme for entering ante-mortem and post-mortem records into a computer data bank are discussed. The initial code is presented along with the results following the computer evaluation of 50 record pairs. The self-evaluating aspects of the programme are discussed and the proposed second generation code is presented. The programme exhibits some promising characteristics but further refinement based on increased data banks is needed.
Volume accumulator design analysis computer codes
NASA Technical Reports Server (NTRS)
Whitaker, W. D.; Shimazaki, T. T.
1973-01-01
The computer codes, VANEP and VANES, were written and used to aid in the design and performance calculation of the volume accumulator units (VAU) for the 5-kwe reactor thermoelectric system. VANEP computes the VAU design which meets the primary coolant loop VAU volume and pressure performance requirements. VANES computes the performance of the VAU design, determined from the VANEP code, at the conditions of the secondary coolant loop. The codes can also compute the performance characteristics of the VAUs under conditions of possible modes of failure which still permit continued system operation.
MAGNUM-2D computer code: user's guide
England, R.L.; Kline, N.W.; Ekblad, K.J.; Baca, R.G.
1985-01-01
Information relevant to the general use of the MAGNUM-2D computer code is presented. This computer code was developed for the purpose of modeling (i.e., simulating) the thermal and hydraulic conditions in the vicinity of a waste package emplaced in a deep geologic repository. The MAGNUM-2D computer code computes (1) the temperature field surrounding the waste package as a function of the heat generation rate of the nuclear waste and thermal properties of the basalt and (2) the hydraulic head distribution and associated groundwater flow fields as a function of the temperature gradients and hydraulic properties of the basalt. MAGNUM-2D is a two-dimensional numerical model for transient or steady-state analysis of coupled heat transfer and groundwater flow in a fractured porous medium. The governing equations consist of a set of coupled, quasi-linear partial differential equations that are solved using a Galerkin finite-element technique. A Newton-Raphson algorithm is embedded in the Galerkin functional to formulate the problem in terms of the incremental changes in the dependent variables. Both triangular and quadrilateral finite elements are used to represent the continuum portions of the spatial domain. Line elements may be used to represent discrete conduits. 18 refs., 4 figs., 1 tab.
Parallel-vector computation for CSI-design code
NASA Technical Reports Server (NTRS)
Nguyen, Duc T.
1990-01-01
Computational aspects of the Control-Structure Interaction (CSI) DESIGN code are reviewed, and its numerically intensive portions are identified. Improvements in computational speed for the CSI-DESIGN code can be achieved by exploiting the parallel and vector capabilities offered by modern computers, such as the Alliant, Convex, Cray-2, and Cray-YMP. Four options to generate the coefficient stiffness matrix and to solve the system of linear simultaneous equations are currently available in the CSI-DESIGN code. A preprocessor that uses the RCM (Reverse Cuthill-McKee) algorithm for bandwidth minimization was also developed for the code. Preliminary results obtained by solving a small-scale, 97-node CSI finite element model (for eigensolution) indicate that the new CSI-DESIGN code is 5 to 6 times faster (using one Alliant processor) than the old version. This speed-up was achieved through the RCM algorithm and the use of a new skyline solver. Efforts are underway to further improve the vector speed of the code, to evaluate its performance on a larger-scale CSI model (such as the phase-zero CSI model), to make the code run efficiently in a multiprocessor parallel computing environment, and to make the code portable among the different parallel computers available at NASA LaRC, such as the Alliant, Convex, and Cray computers.
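The Reverse Cuthill-McKee preprocessing step mentioned above can be sketched as a breadth-first traversal that visits neighbors in order of increasing degree and then reverses the resulting ordering. The graph and labels below are illustrative: a path graph with scrambled labels, whose natural ordering gives a wide matrix band that RCM collapses to the ideal bandwidth of 1.

```python
def rcm_order(adj):
    """Reverse Cuthill-McKee ordering of a graph given as adjacency lists."""
    n = len(adj)
    degree = [len(nbrs) for nbrs in adj]
    visited = [False] * n
    order = []
    for start in sorted(range(n), key=lambda i: degree[i]):  # low degree first
        if visited[start]:
            continue
        visited[start] = True
        queue = [start]
        while queue:                       # breadth-first sweep
            u = queue.pop(0)
            order.append(u)
            for w in sorted(adj[u], key=lambda x: degree[x]):
                if not visited[w]:
                    visited[w] = True
                    queue.append(w)
    return order[::-1]                     # the reversal gives the "R" in RCM

def bandwidth(adj, order):
    """Matrix bandwidth implied by a vertex ordering."""
    pos = {v: k for k, v in enumerate(order)}
    return max(abs(pos[u] - pos[w]) for u in range(len(adj)) for w in adj[u])

# Path graph 0-5-2-7-1-8-3-6-4: consecutive labels are far apart.
edges = [(0, 5), (5, 2), (2, 7), (7, 1), (1, 8), (8, 3), (3, 6), (6, 4)]
adj = [[] for _ in range(9)]
for u, w in edges:
    adj[u].append(w)
    adj[w].append(u)
```

A smaller bandwidth concentrates nonzeros near the diagonal, which is exactly what a skyline solver of the kind described above exploits.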
Network Coding for Function Computation
ERIC Educational Resources Information Center
Appuswamy, Rathinakumar
2011-01-01
In this dissertation, the following "network computing problem" is considered. Source nodes in a directed acyclic network generate independent messages and a single receiver node computes a target function f of the messages. The objective is to maximize the average number of times f can be computed per network usage, i.e., the "computing…
NASA Technical Reports Server (NTRS)
Fishbach, L. H.
1983-01-01
The Weight Analysis of Turbine Engines (WATE) computer code was developed by Boeing under contract to NASA Lewis. It was designed to function as an adjunct to the Navy/NASA Engine Program (NNEP). NNEP calculates the design and off-design thrust and sfc performance of User defined engine cycles. The thermodynamic parameters throughout the engine as generated by NNEP are then combined with input parameters defining the component characteristics in WATE to calculate the bare engine weight of this User defined engine. Preprocessor programs for NNEP were previously developed to simplify the task of creating input datasets. This report describes a similar preprocessor for the WATE code.
Development of a Space Radiation Monte-Carlo Computer Simulation Based on the FLUKA and ROOT Codes
NASA Technical Reports Server (NTRS)
Pinsky, L. S.; Wilson, T. L.; Ferrari, A.; Sala, Paola; Carminati, F.; Brun, R.
2001-01-01
The radiation environment in space is a complex problem to model. Trying to extrapolate the projections of that environment into all areas of the internal spacecraft geometry is even more daunting. With the support of our CERN colleagues, our research group in Houston is embarking on a project to develop a radiation transport tool that is tailored to the problem of taking the external radiation flux incident on any particular spacecraft and simulating the evolution of that flux through a geometrically accurate model of the spacecraft material. The output will be a prediction of the detailed nature of the resulting internal radiation environment within the spacecraft as well as its secondary albedo. Beyond doing the physics transport of the incident flux, the software tool we are developing will provide a self-contained stand-alone object-oriented analysis and visualization infrastructure. It will also include a graphical user interface and a set of input tools to facilitate the simulation of space missions in terms of nominal radiation models and mission trajectory profiles. The goal of this project is to produce a code that is considerably more accurate and user-friendly than existing Monte-Carlo-based tools for the evaluation of the space radiation environment. Furthermore, the code will be an essential complement to the currently existing analytic codes in the BRYNTRN/HZETRN family for the evaluation of radiation shielding. The code will be directly applicable to the simulation of environments in low earth orbit, on the lunar surface, on planetary surfaces (including the Earth) and in the interplanetary medium such as on a transit to Mars (and even in the interstellar medium). The software will include modules whose underlying physics base can continue to be enhanced and updated for physics content, as future data become available beyond the timeframe of the initial development now foreseen. This future maintenance will be available from the authors of FLUKA as
NASA Astrophysics Data System (ADS)
Mork, M. W.; Kracht, O.
2012-04-01
When investigating stability relations in aquatic solutions or rock-water interactions, the number of dissolved species and mineral phases involved can be overwhelming. To facilitate an overview of equilibrium relationships and of how chemical elements are distributed between different aqueous ions, complexes, and solids, predominance diagrams are a widely used tool in aquatic chemistry. In the simplest approach, the predominance field boundaries can be calculated from a set of mass action equations and log K values for the reactions between different species. For example, for the popular redox diagram (pe-pH diagram), half-cell reactions according to Nernst's equation can be used (Garrels & Christ 1965). In such a case, boundaries between different species are "equal-activity" lines. However, for boundaries between solids and dissolved species a specific concentration needs to be stipulated, and the same applies if components other than those displayed in the diagram are involved in the possible reactions. In such a case, the predominance field boundaries depend on the actual concentration values chosen. An alternative approach is the computation of predominance diagrams using the full speciation obtained from a geochemical speciation program, which then needs to be coupled with an external wrapper code for appropriate control and data pre- and post-processing. In this way, the distribution of different species can be based on complete chemical analyses obtained from laboratory investigations. We present the results of a student semester project that aimed to develop and test an external wrapper program for the computation of pe-pH diagrams based on modeling outputs obtained with PHREEQC (Parkhurst & Appelo 1999). We chose PHREEQC as the geochemical calculation module for this core task because of its capability to simulate a wide range of equilibrium reactions between water and minerals. Due to the intended final users, a free
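The "simplest approach" the abstract describes, equal-activity boundary lines from half-cell reactions via Nernst's equation, can be sketched in a few lines. The log K values below are illustrative textbook assumptions for 25 °C, not outputs of the student project or of PHREEQC.

```python
def pe_boundary(log_k, n_electrons, n_protons, pH, log_activity_ratio=0.0):
    """Equal-activity pe boundary for the half-cell reaction
    Ox + n_protons H+ + n_electrons e- = Red:
    pe = (log K - n_H * pH + log(a_Ox / a_Red)) / n_e."""
    return (log_k - n_protons * pH + log_activity_ratio) / n_electrons

# Water stability limits at 25 C (assumed log K values, for illustration):
#   upper: O2(g) + 4 H+ + 4 e- = 2 H2O, log K ~ 83.1  ->  pe ~ 20.78 - pH
#   lower: 2 H+ + 2 e- = H2(g),         log K = 0     ->  pe = -pH
for pH in (4.0, 7.0, 10.0):
    upper = pe_boundary(83.1, 4, 4, pH)
    lower = pe_boundary(0.0, 2, 2, pH)
    print(f"pH {pH}: water stable for {lower:.2f} < pe < {upper:.2f}")
```

A wrapper code of the kind described would instead read the full speciation from PHREEQC output and locate the boundaries where the dominant species changes, rather than relying on fixed log K expressions.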
Quantum computation with Turaev-Viro codes
Koenig, Robert; Kuperberg, Greg; Reichardt, Ben W.
2010-12-15
For a 3-manifold with triangulated boundary, the Turaev-Viro topological invariant can be interpreted as a quantum error-correcting code. The code has local stabilizers, identified by Levin and Wen, on a qudit lattice. Kitaev's toric code arises as a special case. The toric code corresponds to an abelian anyon model, and therefore requires out-of-code operations to obtain universal quantum computation. In contrast, for many categories, such as the Fibonacci category, the Turaev-Viro code realizes a non-abelian anyon model. A universal set of fault-tolerant operations can be implemented by deforming the code with local gates, in order to implement anyon braiding. We identify the anyons in the code space, and present schemes for initialization, computation and measurement. This provides a family of constructions for fault-tolerant quantum computation that are closely related to topological quantum computation, but for which the fault tolerance is implemented in software rather than coming from a physical medium.
Para: a computer simulation code for plasma driven electromagnetic launchers
Thio, Y.-C.
1983-03-01
A computer code for the simulation of rail-type accelerators utilizing a plasma armature has been developed and is described in detail. Some time-varying properties of the plasma are taken into account in this code, thus allowing the development of a dynamical model of the behavior of a plasma in a rail-type electromagnetic launcher. The code is being successfully used to predict and analyse experiments on small-calibre rail-gun launchers.
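A heavily simplified, lumped-parameter flavor of such a simulation can be sketched with a constant-current armature model; the parameter values below are made up for illustration and are not taken from the Para code, which models time-varying plasma properties that this sketch ignores.

```python
def simulate_railgun(current, inductance_gradient, mass, length, dt=1e-6):
    """Integrate projectile motion for a constant-current rail launcher.
    Lorentz force on the armature: F = 0.5 * L' * I**2 (L' in H/m)."""
    force = 0.5 * inductance_gradient * current**2
    x = v = t = 0.0
    while x < length:           # semi-implicit Euler time stepping
        v += (force / mass) * dt
        x += v * dt
        t += dt
    return v, t

# Illustrative (assumed) parameters: 300 kA, L' = 0.5 uH/m, 10 g, 2 m rails.
v_exit, t_exit = simulate_railgun(3e5, 0.5e-6, 0.01, 2.0)
```

With these numbers the analytic exit velocity is sqrt(2 * F/m * length) = 3000 m/s, and the integration reproduces it to within the time-step error.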
NASA Technical Reports Server (NTRS)
Marconi, F.; Salas, M.; Yaeger, L.
1976-01-01
A numerical procedure has been developed to compute the inviscid super/hypersonic flow field about complex vehicle geometries accurately and efficiently. A second order accurate finite difference scheme is used to integrate the three dimensional Euler equations in regions of continuous flow, while all shock waves are computed as discontinuities via the Rankine Hugoniot jump conditions. Conformal mappings are used to develop a computational grid. The effects of blunt nose entropy layers are computed in detail. Real gas effects for equilibrium air are included using curve fits of Mollier charts. Typical calculated results for shuttle orbiter, hypersonic transport, and supersonic aircraft configurations are included to demonstrate the usefulness of this tool.
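The Rankine-Hugoniot jump conditions used above to fit shock waves as discontinuities reduce, for a perfect gas, to the familiar normal-shock relations; a minimal sketch (a generic gas-dynamics illustration, not the paper's three-dimensional shock-fitting procedure):

```python
def normal_shock_ratios(M1, gamma=1.4):
    """Rankine-Hugoniot jump conditions across a normal shock in a perfect gas.
    Returns (p2/p1, rho2/rho1, M2) for upstream Mach number M1 > 1."""
    p_ratio = 1.0 + 2.0 * gamma / (gamma + 1.0) * (M1**2 - 1.0)
    rho_ratio = (gamma + 1.0) * M1**2 / ((gamma - 1.0) * M1**2 + 2.0)
    M2 = (((gamma - 1.0) * M1**2 + 2.0)
          / (2.0 * gamma * M1**2 - (gamma - 1.0))) ** 0.5
    return p_ratio, rho_ratio, M2
```

The code described in the abstract additionally replaces the perfect-gas relations with equilibrium-air curve fits of Mollier charts in the real-gas case.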
Talking about Code: Integrating Pedagogical Code Reviews into Early Computing Courses
ERIC Educational Resources Information Center
Hundhausen, Christopher D.; Agrawal, Anukrati; Agarwal, Pawan
2013-01-01
Given the increasing importance of soft skills in the computing profession, there is good reason to provide students with more opportunities to learn and practice those skills in undergraduate computing courses. Toward that end, we have developed an active learning approach for computing education called the "Pedagogical Code Review"…
TAIR- TRANSONIC AIRFOIL ANALYSIS COMPUTER CODE
NASA Technical Reports Server (NTRS)
Dougherty, F. C.
1994-01-01
The Transonic Airfoil analysis computer code, TAIR, was developed to employ a fast, fully implicit algorithm to solve the conservative full-potential equation for the steady transonic flow field about an arbitrary airfoil immersed in a subsonic free stream. The full-potential formulation is considered exact under the assumptions of irrotational, isentropic, and inviscid flow. These assumptions are valid for a wide range of practical transonic flows typical of modern aircraft cruise conditions. The primary features of TAIR include: a new fully implicit iteration scheme which is typically many times faster than classical successive line overrelaxation algorithms; a new, reliable artificial density spatial differencing scheme treating the conservative form of the full-potential equation; and a numerical mapping procedure capable of generating curvilinear, body-fitted finite-difference grids about arbitrary airfoil geometries. Three aspects emphasized during the development of the TAIR code were reliability, simplicity, and speed. The reliability of TAIR comes from two sources: the new algorithm employed and the implementation of effective convergence monitoring logic. TAIR achieves ease of use by employing a "default mode" that greatly simplifies code operation, especially by inexperienced users, and many useful options including: several airfoil-geometry input options, flexible user controls over program output, and a multiple solution capability. The speed of the TAIR code is attributed to the new algorithm and the manner in which it has been implemented. Input to the TAIR program consists of airfoil coordinates, aerodynamic and flow-field convergence parameters, and geometric and grid convergence parameters. The airfoil coordinates for many airfoil shapes can be generated in TAIR from just a few input parameters. Most of the other input parameters have default values which allow the user to run an analysis in the default mode by specifying only a few input parameters
ICAN Computer Code Adapted for Building Materials
NASA Technical Reports Server (NTRS)
Murthy, Pappu L. N.
1997-01-01
The NASA Lewis Research Center has been involved in developing composite micromechanics and macromechanics theories over the last three decades. These activities have resulted in several composite mechanics theories and structural analysis codes whose applications range from material behavior design and analysis to structural component response. One of these computer codes, the Integrated Composite Analyzer (ICAN), is designed primarily to address issues related to designing polymer matrix composites and predicting their properties - including hygral, thermal, and mechanical load effects. Recently, under a cost-sharing cooperative agreement with a Fortune 500 corporation, Master Builders Inc., ICAN was adapted to analyze building materials. The high costs and technical difficulties involved with the fabrication of continuous-fiber-reinforced composites sometimes limit their use. Particulate-reinforced composites can be thought of as a viable alternative. They are as easily processed to near-net shape as monolithic materials, yet have the improved stiffness, strength, and fracture toughness that is characteristic of continuous-fiber-reinforced composites. For example, particle-reinforced metal-matrix composites show great potential for a variety of automotive applications, such as disk brake rotors, connecting rods, cylinder liners, and other high-temperature applications. Building materials, such as concrete, can be thought of as one of the oldest materials in this category of multiphase, particle-reinforced materials. The adaptation of ICAN to analyze particle-reinforced composite materials involved the development of new micromechanics-based theories. A derivative of the ICAN code, ICAN/PART, was developed and delivered to Master Builders Inc. as a part of the cooperative activity.
Superimposed Code Theoretic Analysis of DNA Codes and DNA Computing
2010-03-01
that the hybridization that occurs between a DNA strand and its Watson-Crick complement can be used to perform mathematical computation. This research… Watson-Crick (WC) duplex, e.g., TCGCA TCGCA. Note that non-WC duplexes can form, and such a formation is called a cross-hybridization. Cross… [Figure residue: example strands labeled Watson-Crick (WC) duplexes, cross-hybridized (CH) duplexes, and coding strands for ligation.]
Development of the Code RITRACKS
NASA Technical Reports Server (NTRS)
Plante, Ianik; Cucinotta, Francis A.
2013-01-01
A document discusses the code RITRACKS (Relativistic Ion Tracks), which was developed to simulate heavy ion track structure at the microscopic and nanoscopic scales. It is a Monte-Carlo code that simulates the production of radiolytic species in water, event-by-event, and which may be used to simulate tracks and also to calculate dose in targets and voxels of different sizes. The dose deposited by the radiation can be calculated in nanovolumes (voxels). RITRACKS allows simulation of radiation tracks without the need of extensive knowledge of computer programming or Monte-Carlo simulations. It is installed as a regular application on Windows systems. The main input parameters entered by the user are the type and energy of the ion, the length and size of the irradiated volume, the number of ions impacting the volume, and the number of histories. The simulation can be started after the input parameters are entered in the GUI. The number of each kind of interactions for each track is shown in the result details window. The tracks can be visualized in 3D after the simulation is complete. It is also possible to see the time evolution of the tracks and zoom on specific parts of the tracks. The software RITRACKS can be very useful for radiation scientists to investigate various problems in the fields of radiation physics, radiation chemistry, and radiation biology. For example, it can be used to simulate electron ejection experiments (radiation physics).
Secure Computation from Random Error Correcting Codes
NASA Astrophysics Data System (ADS)
Chen, Hao; Cramer, Ronald; Goldwasser, Shafi; de Haan, Robbert; Vaikuntanathan, Vinod
Secure computation consists of protocols for secure arithmetic: secret values are added and multiplied securely by networked processors. The striking feature of secure computation is that security is maintained even in the presence of an adversary who corrupts a quorum of the processors and who exercises full, malicious control over them. One of the fundamental primitives at the heart of secure computation is secret-sharing. Typically, the required secret-sharing techniques build on Shamir's scheme, which can be viewed as a cryptographic twist on the Reed-Solomon error correcting code. In this work we further the connections between secure computation and error correcting codes. We demonstrate that threshold secure computation in the secure channels model can be based on arbitrary codes. For a network of size n, we then show a reduction in communication for secure computation amounting to a multiplicative logarithmic factor (in n) compared to classical methods for small, e.g., constant size fields, while tolerating t < (1/2 - ε)n corrupted players, where ε > 0 can be arbitrarily small. For large networks this implies considerable savings in communication. Our results hold in the broadcast/negligible error model of Rabin and Ben-Or, and complement results from CRYPTO 2006 for the zero-error model of Ben-Or, Goldwasser and Wigderson (BGW). Our general theory can be extended so as to encompass those results from CRYPTO 2006 as well. We also present a new method for constructing high information rate ramp schemes based on arbitrary codes, and in particular we give a new construction based on algebraic geometry codes.
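Shamir's scheme, which the abstract identifies as the classical secret-sharing primitive underlying secure computation, can be sketched as polynomial evaluation and Lagrange interpolation over a prime field. This is a toy illustration of the baseline primitive only, not the paper's construction from arbitrary codes; the field size is an assumption.

```python
import random

PRIME = 2**61 - 1  # Mersenne prime field; assumed large enough for the secret

def make_shares(secret, threshold, n_shares, rng=random):
    """Split `secret` into n shares; any `threshold` of them reconstruct it."""
    coeffs = [secret] + [rng.randrange(PRIME) for _ in range(threshold - 1)]
    def poly(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation mod PRIME
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, poly(x)) for x in range(1, n_shares + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # pow(den, PRIME - 2, PRIME) is the modular inverse (Fermat)
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

Viewed as a code, the shares are symbols of a Reed-Solomon codeword whose message polynomial has the secret as its constant term, which is exactly the connection the paper generalizes.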
Multitasking the code ARC3D. [for computational fluid dynamics
NASA Technical Reports Server (NTRS)
Barton, John T.; Hsiung, Christopher C.
1986-01-01
The CRAY multitasking system was developed in order to utilize all four processors and sharply reduce the wall-clock run time. This paper describes the techniques used to modify the computational fluid dynamics code ARC3D for this run and analyzes the achieved speedup. The ARC3D code solves either the Euler or thin-layer Navier-Stokes equations using an implicit approximate-factorization scheme. Results indicate that multitask processing can be used to achieve wall-clock speedup factors of over three, depending on the nature of the program code being used. Multitasking appears to be particularly advantageous for large-memory problems running on multiple-CPU computers.
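The dependence of the achievable speedup on "the nature of the program code" is captured by Amdahl's law; a quick sketch (the 90% parallel fraction below is an illustrative assumption, not a measurement from the paper):

```python
def amdahl_speedup(parallel_fraction, processors):
    """Amdahl's law: overall speedup when only `parallel_fraction` of the
    serial runtime benefits from `processors`-way parallel execution."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / processors)

# A speedup slightly above 3 on 4 processors is roughly what a code with
# about 90% of its runtime parallelized would deliver.
speedup = amdahl_speedup(0.9, 4)
```

This is why the abstract's "over three times" on four processors is a strong result: the residual serial fraction, not the processor count, sets the ceiling.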
Error-correcting codes in computer arithmetic.
NASA Technical Reports Server (NTRS)
Massey, J. L.; Garcia, O. N.
1972-01-01
Summary of the most important results so far obtained in the theory of coding for the correction and detection of errors in computer arithmetic. Attempts to satisfy the stringent reliability demands upon the arithmetic unit are considered, and special attention is given to attempts to incorporate redundancy into the numbers themselves which are being processed so that erroneous results can be detected and corrected.
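One classical way to "incorporate redundancy into the numbers themselves" is an AN code, in which every valid value is premultiplied by a check constant A, so that a residue check survives addition. A minimal sketch (A = 3 is an assumed example constant, not a value from the paper):

```python
A = 3  # check modulus; valid codewords are exactly the multiples of A

def encode(n):
    """AN code: represent the integer n by the codeword A*n."""
    return A * n

def check(word):
    """A nonzero residue mod A flags an arithmetic error."""
    return word % A == 0

# Addition preserves the code: A*x + A*y = A*(x + y), so the residue
# check can be applied to the result of the arithmetic operation itself.
a, b = encode(17), encode(25)
s = a + b
assert check(s) and s // A == 42
bad = s + 1          # a single erroneous unit breaks the residue
assert not check(bad)
```

Real AN and residue codes choose A more carefully (e.g., to catch common single-bit arithmetic faults), but the detection principle is the same.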
Thermoelectric pump performance analysis computer code
NASA Technical Reports Server (NTRS)
Johnson, J. L.
1973-01-01
A computer program is presented that was used to analyze and design dual-throat electromagnetic dc conduction pumps for the 5-kwe ZrH reactor thermoelectric system. In addition to a listing of the code and corresponding identification of symbols, the bases for this analytical model are provided.
RESRAD-CHEM: A computer code for chemical risk assessment
Cheng, J.J.; Yu, C.; Hartmann, H.M.; Jones, L.G.; Biwer, B.M.; Dovel, E.S.
1993-10-01
RESRAD-CHEM is a computer code developed at Argonne National Laboratory for the U.S. Department of Energy to evaluate chemically contaminated sites. The code is designed to predict human health risks from multipathway exposure to hazardous chemicals and to derive cleanup criteria for chemically contaminated soils. The method used in RESRAD-CHEM is based on the pathway analysis method in the RESRAD code and follows the U.S. Environmental Protection Agency's (EPA's) guidance on chemical risk assessment. RESRAD-CHEM can be used to evaluate a chemically contaminated site and, in conjunction with the use of the RESRAD code, a mixed waste site.
NASA Space Radiation Transport Code Development Consortium.
Townsend, Lawrence W
2005-01-01
Recently, NASA established a consortium involving the University of Tennessee (lead institution), the University of Houston, Roanoke College and various government and national laboratories, to accelerate the development of a standard set of radiation transport computer codes for NASA human exploration applications. This effort involves further improvements of the Monte Carlo codes HETC and FLUKA and the deterministic code HZETRN, including developing nuclear reaction databases necessary to extend the Monte Carlo codes to carry out heavy ion transport, and extending HZETRN to three dimensions. The improved codes will be validated by comparing predictions with measured laboratory transport data, provided by an experimental measurements consortium, and measurements in the upper atmosphere on the balloon-borne Deep Space Test Bed (DSTB). In this paper, we present an overview of the consortium members and the current status and future plans of consortium efforts to meet the research goals and objectives of this extensive undertaking.
Quantified PIRT and Uncertainty Quantification for Computer Code Validation
NASA Astrophysics Data System (ADS)
Luo, Hu
This study investigates and proposes a systematic method of uncertainty quantification for computer code validation applications. Uncertainty quantification has gained more and more attention in recent years. The U.S. Nuclear Regulatory Commission (NRC) requires the use of realistic best-estimate (BE) computer codes following the rigorous Code Scaling, Applicability, and Uncertainty (CSAU) methodology. In CSAU, the Phenomena Identification and Ranking Table (PIRT) was developed to identify important code uncertainty contributors. To support and examine the traditional PIRT with quantified judgments, this study proposes a novel approach, the Quantified PIRT (QPIRT), to identify important code models and parameters for uncertainty quantification. Dimensionless analysis of the code field equations to generate dimensionless (pi) groups from code simulation results serves as the foundation for QPIRT. Uncertainty quantification using the DAKOTA code is proposed in this study based on a sampling approach. Nonparametric statistical theory identifies the fixed number of code runs needed to assure 95 percent probability and 95 percent confidence in the code uncertainty intervals.
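The "fixed number of code runs" for a 95 percent probability / 95 percent confidence statement comes from Wilks' nonparametric formula; for first-order, one-sided tolerance limits it is the smallest n with 1 - p^n >= beta, which can be computed directly:

```python
def wilks_sample_size(probability, confidence):
    """Smallest first-order, one-sided Wilks sample size: the least n
    such that 1 - probability**n >= confidence."""
    n = 1
    while 1.0 - probability**n < confidence:
        n += 1
    return n

# The classic 95%/95% criterion used in BE-plus-uncertainty analyses
# yields the well-known figure of 59 code runs.
runs_95_95 = wilks_sample_size(0.95, 0.95)
```

Higher-order formulations (covering the k-th largest output rather than the maximum) require more runs, but the first-order case above is the one behind the familiar 59-run rule.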
User's manual for HDR3 computer code
Arundale, C.J.
1982-10-01
A description of the HDR3 computer code and instructions for its use are provided. HDR3 calculates space heating costs for a hot dry rock (HDR) geothermal space heating system. The code also compares these costs to those of a specific oil heating system in use at the National Aeronautics and Space Administration Flight Center at Wallops Island, Virginia. HDR3 allows many HDR system parameters to be varied so that the user may examine various reservoir management schemes and may optimize reservoir design to suit a particular set of geophysical and economic parameters.
Experimental methodology for computational fluid dynamics code validation
Aeschliman, D.P.; Oberkampf, W.L.
1997-09-01
Validation of Computational Fluid Dynamics (CFD) codes is an essential element of the code development process. Typically, CFD code validation is accomplished through comparison of computed results to previously published experimental data that were obtained for some other purpose, unrelated to code validation. As a result, it is a near certainty that not all of the information required by the code, particularly the boundary conditions, will be available. The common approach is therefore unsatisfactory, and a different method is required. This paper describes a methodology developed specifically for experimental validation of CFD codes. The methodology requires teamwork and cooperation between code developers and experimentalists throughout the validation process, and takes advantage of certain synergisms between CFD and experiment. The methodology employs a novel uncertainty analysis technique which helps to define the experimental plan for code validation wind tunnel experiments, and to distinguish between and quantify various types of experimental error. The methodology is demonstrated with an example of surface pressure measurements over a model of varying geometrical complexity in laminar, hypersonic, near perfect gas, 3-dimensional flow.
Recent Developments in the Community Code ASPECT
NASA Astrophysics Data System (ADS)
Heister, T.; Bangerth, W.; Dannberg, J.; Gassmoeller, R.
2015-12-01
The Computational Geosciences have long used community codes to provide simulation capabilities to large numbers of users. We here report on the mantle convection code ASPECT (the Advanced Solver for Problems in Earth ConvecTion) that is developed to be a community tool with a focus on bringing modern numerical methods such as adaptive meshes, large parallel computations, algebraic multigrid solvers, and modern software design. We will comment in particular on two aspects: First, the more recent additions to its numerical capabilities, such as compressible models, averaging of material parameters, melt transport, free surfaces, and plasticity. We will demonstrate these capabilities using examples from computations by members of the ASPECT user community. Second, we will discuss lessons learned in writing a code specifically for community use. This includes our experience with a software design that is fundamentally based on a plugin system for practically all areas that a user may want to describe for the particular geophysical setup they want to simulate. It also includes our experience with leading and organizing a community of users and developers, for example by organizing annual "hackathons", by encouraging code submission via github over keeping modifications private, and by designing a code for which extensions can easily be written as separate plugins rather than requiring knowledge of the computational core.
Computer codes used in particle accelerator design: First edition
Not Available
1987-01-01
This paper contains a listing of more than 150 programs that have been used in the design and analysis of accelerators. Each citation gives the person to contact, the classification of the computer code, publications describing the code, the computer and language it runs on, and a short description of the code. Codes are indexed by subject, contact person, and code acronym. (LEW)
ERIC Educational Resources Information Center
Djambong, Takam; Freiman, Viktor
2016-01-01
While today's schools in several countries, like Canada, are about to bring programming back to their curricula, a new conceptual angle, namely that of computational thinking, draws the attention of researchers. In order to understand the articulation between computational thinking tasks on one side, students' targeted skills, and the types of problems…
HUDU: The Hanford Unified Dose Utility computer code
Scherpelz, R.I.
1991-02-01
The Hanford Unified Dose Utility (HUDU) computer program was developed to provide rapid initial assessment of radiological emergency situations. The HUDU code uses a straight-line Gaussian atmospheric dispersion model to estimate the transport of radionuclides released from an accident site. For dose points on the plume centerline, it calculates internal doses due to inhalation and external doses due to exposure to the plume. The program incorporates a number of features unique to the Hanford Site (operated by the US Department of Energy), including a library of source terms derived from various facilities' safety analysis reports. The HUDU code was designed to run on an IBM-PC or compatible personal computer. The user interface was designed for fast and easy operation with minimal user training. The theoretical basis and mathematical models used in the HUDU computer code are described, as are the computer code itself and the data libraries used. Detailed instructions for operating the code are also included. Appendices to the report contain descriptions of the program modules, listings of HUDU's data library, and descriptions of the verification tests that were run as part of the code development. 14 refs., 19 figs., 2 tabs.
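The straight-line Gaussian dispersion model that HUDU uses can be sketched for a ground-level receptor on the plume centerline. The formula and numbers below are a generic textbook illustration (with ground reflection), not HUDU's implementation or its Hanford-specific source-term library.

```python
import math

def plume_centerline_conc(Q, u, sigma_y, sigma_z, H=0.0):
    """Ground-level, plume-centerline air concentration from a straight-line
    Gaussian model with ground reflection:
        chi = Q / (pi * u * sigma_y * sigma_z) * exp(-H**2 / (2 * sigma_z**2))
    Q in Bq/s gives chi in Bq/m^3; u is wind speed (m/s); sigma_y, sigma_z
    are the dispersion coefficients (m) at the downwind distance of interest;
    H is the effective release height (m)."""
    return (Q / (math.pi * u * sigma_y * sigma_z)
            * math.exp(-H**2 / (2.0 * sigma_z**2)))

# Illustrative (assumed) release: 1e9 Bq/s, 5 m/s wind, sigmas chosen for
# some downwind distance; a dose code multiplies chi by breathing rate and
# dose conversion factors for the inhalation pathway.
chi = plume_centerline_conc(Q=1e9, u=5.0, sigma_y=200.0, sigma_z=100.0)
```

In practice sigma_y and sigma_z are looked up from stability-class curves (e.g., Pasquill-Gifford) as functions of downwind distance, which is where most of the model's empiricism lives.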
An integrated radiation physics computer code system.
NASA Technical Reports Server (NTRS)
Steyn, J. J.; Harris, D. W.
1972-01-01
An integrated computer code system for the semi-automatic and rapid analysis of experimental and analytic problems in gamma photon and fast neutron radiation physics is presented. Such problems as the design of optimum radiation shields and radioisotope power source configurations may be studied. The system codes allow for the unfolding of complex neutron and gamma photon experimental spectra. Monte Carlo and analytic techniques are used for the theoretical prediction of radiation transport. The system includes a multichannel pulse-height analyzer scintillation and semiconductor spectrometer coupled to an on-line digital computer with appropriate peripheral equipment. The system is geometry generalized as well as self-contained with respect to material nuclear cross sections and the determination of the spectrometer response functions. Input data may be either analytic or experimental.
1982-03-01
symbol can be displaced in more complex manners, which can be classified as methods of animation. Apparent Movement-coded Line Symbols Two basic...dimensional and more complex multidimensional symbol systems. A model is presented which identifies the relationships between many factors and symbology system...adequate resolution is employed. These conclusions are: Simple geometric symbols are identified more accurately and quickly than complex geometric symbols
Connecting Neural Coding to Number Cognition: A Computational Account
ERIC Educational Resources Information Center
Prather, Richard W.
2012-01-01
The current study presents a series of computational simulations that demonstrate how the neural coding of numerical magnitude may influence number cognition and development. This includes behavioral phenomena cataloged in cognitive literature such as the development of numerical estimation and operational momentum. Though neural research has…
The KORSAR Computer Code: Modeling of Stratified Two-Phase Flow Hydrodynamics in Horizontal Pipes
Yudov, Yu. V.
2002-07-01
The KORSAR best-estimate computer code has been under development at NITI since 1996. It is designed to numerically simulate transient and accident conditions in VVER-type reactors /1/. Since 1999, the code development activity has been coordinated by the Center for Computer Code Development under Russia's Minatom. (authors)
Hanford meteorological station computer codes: Volume 9, The quality assurance computer codes
Burk, K.W.; Andrews, G.L.
1989-02-01
The Hanford Meteorological Station (HMS) was established in 1944 on the Hanford Site to collect and archive meteorological data and to provide weather forecasts and related services for Hanford Site operations. The HMS is located approximately 1/2 mile east of the 200 West Area and is operated by PNL for the US Department of Energy. Meteorological data are collected from various sensors and equipment located on and off the Hanford Site. These data are stored in data bases on the Digital Equipment Corporation (DEC) VAX 11/750 at the HMS (hereafter referred to as the HMS computer). Files from those data bases are routinely transferred to the Emergency Management System (EMS) computer at the Unified Dose Assessment Center (UDAC). To ensure the quality and integrity of the HMS data, a set of Quality Assurance (QA) computer codes has been written. The codes will be routinely used by the HMS system manager or the data base custodian. The QA codes provide detailed output files that will be used in correcting erroneous data. The following sections in this volume describe the implementation and operation of the QA computer codes. The appendices contain detailed descriptions, flow charts, and source code listings of each computer code. 2 refs.
Seals Flow Code Development 1993
NASA Technical Reports Server (NTRS)
Liang, Anita D. (Compiler); Hendricks, Robert C. (Compiler)
1994-01-01
Code releases from the 1993 Seals Workshop include SPIRALI for spiral-grooved cylindrical and face seal configurations; IFACE for face seals with pockets, steps, tapers, turbulence, and cavitation; GFACE for gas face seals with 'lift pad' configurations; and SCISEAL, a CFD code for research and design of seals of cylindrical configuration. GUI (graphical user interface) and code usage were discussed, with hands-on use of the codes, discussions, comparisons, and industry feedback. Other highlights of the 1993 Seals Workshop include environmental and customer-driven seal requirements; 'what's coming'; and brush seal developments including flow visualization, numerical analysis, bench testing, T-700 engine testing, tribological pairing and ceramic configurations, and cryogenic and hot gas facility brush seal results. Also discussed are seals for hypersonic engines and dynamic results for spiral-groove and smooth annular seals.
Codes That Support Smart Growth Development
Provides examples of local zoning codes that support smart growth development, categorized by: unified development code, form-based code, transit-oriented development, design guidelines, street design standards, and zoning overlay.
Survey of computer codes applicable to waste facility performance evaluations
Alsharif, M.; Pung, D.L.; Rivera, A.L.; Dole, L.R.
1988-01-01
This study is an effort to review existing information that is useful to develop an integrated model for predicting the performance of a radioactive waste facility. A summary description of 162 computer codes is given. The identified computer programs address the performance of waste packages, waste transport and equilibrium geochemistry, hydrological processes in unsaturated and saturated zones, and general waste facility performance assessment. Some programs also deal with thermal analysis, structural analysis, and special purposes. A number of these computer programs are being used by the US Department of Energy, the US Nuclear Regulatory Commission, and their contractors to analyze various aspects of waste package performance. Fifty-five of these codes were identified as being potentially useful in the analysis of low-level radioactive waste facilities located above the water table. The code summaries include authors, identification data, model types, and pertinent references. 14 refs., 5 tabs.
Computer code for double beta decay QRPA based calculations
Barbero, C. A.; Mariano, A.; Krmpotić, F.; Samana, A. R.; Ferreira, V. dos Santos; Bertulani, C. A.
2014-11-11
The computer code developed by our group some years ago for the evaluation of nuclear matrix elements involved in neutrino-nucleus reactions, muon capture, and β± processes, within the QRPA and PQRPA nuclear structure models, is extended to also include nuclear double beta decay.
General review of the MOSTAS computer code for wind turbines
NASA Technical Reports Server (NTRS)
Dungundji, J.; Wendell, J. H.
1981-01-01
The MOSTAS computer code for wind turbine analysis is reviewed, and techniques and methods used in its analyses are described. Impressions of its strengths and weaknesses are given, and recommendations for its application, modification, and further development are made. Basic techniques used in wind turbine stability and response analyses for systems with constant and periodic coefficients are reviewed.
FLASH: A finite element computer code for variably saturated flow
Baca, R.G.; Magnuson, S.O.
1992-05-01
A numerical model was developed for use in performance assessment studies at the INEL. The numerical model, referred to as the FLASH computer code, is designed to simulate two-dimensional fluid flow in fractured-porous media. The code is specifically designed to model variably saturated flow in an arid site vadose zone and saturated flow in an unconfined aquifer. In addition, the code also has the capability to simulate heat conduction in the vadose zone. This report presents the following: description of the conceptual framework and mathematical theory; derivations of the finite element techniques and algorithms; computational examples that illustrate the capability of the code; and input instructions for the general use of the code. The FLASH computer code is aimed at providing environmental scientists at the INEL with a predictive tool for the subsurface water pathway. This numerical model is expected to be widely used in performance assessments for: (1) the Remedial Investigation/Feasibility Study process and (2) compliance studies required by the US Department of Energy Order 5820.2A.
Analog system for computing sparse codes
Rozell, Christopher John; Johnson, Don Herrick; Baraniuk, Richard Gordon; Olshausen, Bruno A.; Ortman, Robert Lowell
2010-08-24
A parallel dynamical system for computing sparse representations of data (i.e., where the data can be fully represented in terms of a small number of non-zero code elements) and for reconstructing compressively sensed images. The system is based on the principles of thresholding and local competition, and solves a family of sparse approximation problems corresponding to various sparsity metrics. The system utilizes Locally Competitive Algorithms (LCAs), in which nodes in a population continually compete with neighboring units using (usually one-way) lateral inhibition to calculate coefficients representing an input in an overcomplete dictionary.
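As a rough illustration of the thresholding-and-local-competition principle described above, the sketch below implements a minimal soft-threshold LCA; the dynamics discretization, parameter values, and dictionary are illustrative assumptions, not taken from the patented system.

```python
import numpy as np

def lca_sparse_code(x, Phi, lam=0.1, tau=10.0, n_steps=200):
    """Minimal soft-threshold Locally Competitive Algorithm (illustrative).

    Approximately solves min_a 0.5*||x - Phi a||^2 + lam*||a||_1 by letting
    internal node potentials u evolve under leaky integration with lateral
    inhibition between dictionary elements.
    """
    def soft(u):
        return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

    b = Phi.T @ x                            # feed-forward drive
    G = Phi.T @ Phi - np.eye(Phi.shape[1])   # lateral inhibition weights
    u = np.zeros(Phi.shape[1])
    for _ in range(n_steps):
        u += (b - u - G @ soft(u)) / tau     # Euler step of the LCA dynamics
    return soft(u)

# Toy usage: recover a 2-sparse signal in an overcomplete dictionary.
rng = np.random.default_rng(0)
Phi = rng.normal(size=(8, 16))
Phi /= np.linalg.norm(Phi, axis=0)           # unit-norm atoms
a_true = np.zeros(16)
a_true[[2, 9]] = [1.0, -0.5]
x = Phi @ a_true
a = lca_sparse_code(x, Phi)
print("active coefficients:", np.count_nonzero(np.abs(a) > 1e-3))
```

Because the threshold zeroes any node whose potential stays below lam, most coefficients remain exactly zero, which is what makes the representation sparse.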
Spiking network simulation code for petascale computers
Kunkel, Susanne; Schmidt, Maximilian; Eppler, Jochen M.; Plesser, Hans E.; Masumoto, Gen; Igarashi, Jun; Ishii, Shin; Fukai, Tomoki; Morrison, Abigail; Diesmann, Markus; Helias, Moritz
2014-01-01
Brain-scale networks exhibit a breathtaking heterogeneity in the dynamical properties and parameters of their constituents. At cellular resolution, the entities of theory are neurons and synapses and over the past decade researchers have learned to manage the heterogeneity of neurons and synapses with efficient data structures. Already early parallel simulation codes stored synapses in a distributed fashion such that a synapse solely consumes memory on the compute node harboring the target neuron. As petaflop computers with some 100,000 nodes become increasingly available for neuroscience, new challenges arise for neuronal network simulation software: Each neuron contacts on the order of 10,000 other neurons and thus has targets only on a fraction of all compute nodes; furthermore, for any given source neuron, at most a single synapse is typically created on any compute node. From the viewpoint of an individual compute node, the heterogeneity in the synaptic target lists thus collapses along two dimensions: the dimension of the types of synapses and the dimension of the number of synapses of a given type. Here we present a data structure taking advantage of this double collapse using metaprogramming techniques. After introducing the relevant scaling scenario for brain-scale simulations, we quantitatively discuss the performance on two supercomputers. We show that the novel architecture scales to the largest petascale supercomputers available today. PMID:25346682
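The storage scheme the abstract describes, where a synapse consumes memory only on the compute node harboring its target neuron, can be sketched as follows; this is a purely illustrative stand-in for the actual metaprogramming-based data structure, with a hypothetical round-robin neuron distribution.

```python
# Illustrative sketch (not the actual implementation): each compute node
# stores only the synapses whose target neuron it hosts, keyed by source.

N_NODES = 4

def host_node(neuron_id):
    # hypothetical round-robin distribution of neurons over compute nodes
    return neuron_id % N_NODES

class ComputeNode:
    def __init__(self, rank):
        self.rank = rank
        self.synapses = {}  # source neuron -> list of (target, weight)

    def connect(self, src, tgt, weight):
        if host_node(tgt) != self.rank:
            return  # a synapse consumes memory only on its target's node
        # on petascale machines each such list typically holds a single
        # synapse per source -- the "collapse" the data structure exploits
        self.synapses.setdefault(src, []).append((tgt, weight))

nodes = [ComputeNode(r) for r in range(N_NODES)]
for src, tgt, w in [(0, 1, 0.5), (0, 2, 0.5), (7, 1, -1.0), (3, 6, 0.2)]:
    for node in nodes:
        node.connect(src, tgt, w)

print("node 1 stores:", nodes[1].synapses)
```

In this toy run, node 1 ends up holding only the two synapses onto neuron 1, while nodes hosting no targets hold nothing, mirroring the memory distribution described above.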
NASA Technical Reports Server (NTRS)
Gould, R. K.
1978-01-01
Mechanisms for the SiCl4/Na and SiF4/Na reaction systems were examined. Reaction schemes which include 25 elementary reactions were formulated for each system and run to test the sensitivity of the computed concentration and temperature profiles to the values assigned to estimated rate coefficients. It was found that, for SiCl4/Na, the rate of production of free Si is largely mixing-limited for reasonable rate coefficient estimates. For the SiF4/Na system, the results indicate that the endothermicities of many of the reactions involved in producing Si from SiF4/Na cause this system to be chemistry-limited rather than mixing-limited.
Parallel CARLOS-3D code development
Putnam, J.M.; Kotulski, J.D.
1996-02-01
CARLOS-3D is a three-dimensional scattering code which was developed under the sponsorship of the Electromagnetic Code Consortium, and is currently used by over 80 aerospace companies and government agencies. The code has been extensively validated and runs on both serial workstations and parallel supercomputers such as the Intel Paragon. CARLOS-3D is a three-dimensional surface integral equation scattering code based on a Galerkin method of moments formulation employing Rao-Wilton-Glisson roof-top basis functions for triangular faceted surfaces. Fully arbitrary 3D geometries composed of multiple conducting and homogeneous bulk dielectric materials can be modeled. This presentation describes some of the extensions to the CARLOS-3D code and how the operator structure of the code facilitated these improvements. Body-of-revolution (BOR) and two-dimensional geometries were incorporated by simply including new input routines and the appropriate Galerkin matrix operator routines. Some additional modifications were required in the combined field integral equation matrix generation routine due to the symmetric nature of the BOR and 2D operators. Quadrilateral patched surfaces with linear roof-top basis functions were also implemented in the same manner. Quadrilateral facets and triangular facets can be used in combination to model geometries with both large smooth surfaces and surfaces with fine detail, such as gaps and cracks, more efficiently. Since the parallel implementation in CARLOS-3D is at a high level, these changes were independent of the computer platform being used. This approach minimizes code maintenance while providing new capabilities with little additional effort. Results are presented showing the performance and accuracy of the code for some large scattering problems. Comparisons between triangular faceted and quadrilateral faceted geometry representations are shown for some complex scatterers.
Fluid Film Bearing Code Development
NASA Technical Reports Server (NTRS)
1995-01-01
The next generation of rocket engine turbopumps is being developed by industry through Government-directed contracts. These turbopumps will use fluid film bearings because they eliminate the life and shaft-speed limitations of rolling-element bearings, increase turbopump design flexibility, and reduce the need for turbopump overhauls and maintenance. The design of the fluid film bearings for these turbopumps, however, requires sophisticated analysis tools to model the complex physical behavior characteristic of fluid film bearings operating at high speeds with low viscosity fluids. State-of-the-art analysis and design tools are being developed at Texas A&M University under a grant guided by the NASA Lewis Research Center. The latest version of the code, HYDROFLEXT, is a thermohydrodynamic bulk flow analysis with fluid compressibility, full inertia, and fully developed turbulence models. It can predict the static and dynamic force response of rigid and flexible pad hydrodynamic bearings and of rigid and tilting pad hydrostatic bearings. The Texas A&M code is a comprehensive analysis tool, incorporating key fluid phenomena pertinent to bearings that operate at high speeds with low-viscosity fluids typical of those used in rocket engine turbopumps. Specifically, the energy equation was implemented in the code to enable fluid properties to vary with temperature and pressure. This is particularly important for cryogenic fluids because their properties are sensitive to temperature as well as pressure. As shown in the figure, predicted bearing mass flow rates vary significantly depending on the fluid model used. Because cryogens are semicompressible fluids and the bearing dynamic characteristics are highly sensitive to fluid compressibility, fluid compressibility effects are also modeled. The code contains fluid properties for liquid hydrogen, liquid oxygen, and liquid nitrogen as well as for water and air. Other fluids can be handled by the code provided that the
Improved neutron activation prediction code system development
NASA Technical Reports Server (NTRS)
Saqui, R. M.
1971-01-01
Two integrated neutron activation prediction code systems have been developed by modifying and integrating existing computer programs to perform the necessary computations to determine neutron induced activation gamma ray doses and dose rates in complex geometries. Each of the two systems is comprised of three computational modules. The first program module computes the spatial and energy distribution of the neutron flux from an input source and prepares input data for the second program which performs the reaction rate, decay chain and activation gamma source calculations. A third module then accepts input prepared by the second program to compute the cumulative gamma doses and/or dose rates at specified detector locations in complex, three-dimensional geometries.
NASA Technical Reports Server (NTRS)
Molthan, Andrew; Case, Jonathan; Venner, Jason; Moreno-Madrinan, Max J.; Delgado, Francisco
2012-01-01
Two projects at NASA Marshall Space Flight Center have collaborated to develop a high-resolution weather forecast model for Mesoamerica: the NASA Short-term Prediction Research and Transition (SPoRT) Center, which integrates unique NASA satellite and weather forecast modeling capabilities into the operational weather forecasting community, and NASA's SERVIR Program, which integrates satellite observations, ground-based data, and forecast models to improve disaster response in Central America, the Caribbean, Africa, and the Himalayas.
NASA Technical Reports Server (NTRS)
Kontos, Karen B.; Kraft, Robert E.; Gliebe, Philip R.
1996-01-01
The Aircraft Noise Prediction Program (ANOPP) is an industry-wide tool used to predict turbofan engine flyover noise in system noise optimization studies. Its goal is to provide the best currently available methods for source noise prediction. As part of a program to improve the Heidmann fan noise model, models for fan inlet and fan exhaust noise suppression estimation that are based on simple engine and acoustic geometry inputs have been developed. The models can be used to predict sound power level suppression and sound pressure level suppression at a position specified relative to the engine inlet.
Upgrades of Two Computer Codes for Analysis of Turbomachinery
NASA Technical Reports Server (NTRS)
Chima, Rodrick V.; Liou, Meng-Sing
2005-01-01
Major upgrades have been made in two of the programs reported in "Five Computer Codes for Analysis of Turbomachinery". The affected programs are: Swift -- a code for three-dimensional (3D) multiblock analysis; and TCGRID, which generates a 3D grid used with Swift. Originally utilizing only a central-differencing scheme for numerical solution, Swift was augmented by addition of two upwind schemes that give greater accuracy but take more computing time. Other improvements in Swift include addition of a shear-stress-transport turbulence model for better prediction of adverse pressure gradients, addition of an H-grid capability for flexibility in modeling flows in pumps and ducts, and modification to enable simultaneous modeling of hub and tip clearances. Improvements in TCGRID include modifications to enable generation of grids for more complicated flow paths and addition of an option to generate grids compatible with the ADPAC code used at NASA and in industry. For both codes, new test cases were developed and documentation was updated. Both codes were converted to Fortran 90, with dynamic memory allocation. Both codes were also modified for ease of use in both UNIX and Windows operating systems.
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 25 2011-07-01 2011-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 26 2013-07-01 2013-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 25 2014-07-01 2014-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 26 2012-07-01 2011-07-01 true Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...
Standard problems to evaluate piping response computer codes
Bezler, P.; Subudhi, M.
1984-01-01
A program has been underway to evaluate the analysis methods used by industry to qualify nuclear power plant piping. Two objectives of this program are to develop physical benchmarks for validating the accuracy of computer codes used to simulate piping response and to develop improved procedures for calculating the response of multiple supported piping with independent seismic inputs. The status of the program in these two areas is reviewed.
NASA Technical Reports Server (NTRS)
Harp, J. L., Jr.; Oatway, T. P.
1975-01-01
A research effort was conducted with the goal of reducing the computer time of a Navier-Stokes computer code for prediction of viscous flow fields about lifting bodies. A two-dimensional, time-dependent, laminar, transonic computer code (STOKES) was modified to incorporate a non-uniform time-step procedure. The non-uniform time step requires updating of a zone only as often as required by its own stability criterion or that of its immediate neighbors. In the uniform time-step scheme, each zone is updated as often as required by the least stable zone of the finite-difference mesh. Because of the less frequent updating of program variables, it was expected that the non-uniform time step would reduce execution time by a factor of five to ten. Available funding was exhausted prior to successful demonstration of the benefits to be derived from the non-uniform time-step method.
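The non-uniform time-step idea described in the abstract (advance each zone only as often as its own stability limit, or a neighbor's, requires, rather than marching every zone at the global minimum step) can be sketched as a scheduling loop. The per-zone stable steps below are hypothetical, and the flow update itself is elided; this is not the STOKES code.

```python
# Illustrative non-uniform time-step scheduler: one stiff zone forces a
# small step only on itself and its immediate neighbors.
stable_dt = [0.1, 0.1, 0.02, 0.1, 0.1]   # hypothetical per-zone limits

def local_dt(i):
    # a zone may not outrun its immediate neighbors' stability limits
    lo, hi = max(i - 1, 0), min(i + 1, len(stable_dt) - 1)
    return min(stable_dt[lo:hi + 1])

t_end = 1.0
dt_min = min(stable_dt)                   # global step of the uniform scheme
next_due = [0.0] * len(stable_dt)
updates = 0
t = 0.0
while t < t_end - 1e-12:
    t += dt_min
    for i in range(len(stable_dt)):
        if t + 1e-12 >= next_due[i]:
            # ... update zone i's flow variables here ...
            next_due[i] += local_dt(i)
            updates += 1

uniform_updates = round(t_end / dt_min) * len(stable_dt)
print(updates, "zone updates vs", uniform_updates, "with a uniform step")
```

Even in this tiny example the non-stiff zones are touched far less often, which is the source of the expected (but undemonstrated) speedup.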
A surface code quantum computer in silicon
Hill, Charles D.; Peretz, Eldad; Hile, Samuel J.; House, Matthew G.; Fuechsle, Martin; Rogge, Sven; Simmons, Michelle Y.; Hollenberg, Lloyd C. L.
2015-01-01
The exceptionally long quantum coherence times of phosphorus donor nuclear spin qubits in silicon, coupled with the proven scalability of silicon-based nano-electronics, make them attractive candidates for large-scale quantum computing. However, the high threshold of topological quantum error correction can only be captured in a two-dimensional array of qubits operating synchronously and in parallel—posing formidable fabrication and control challenges. We present an architecture that addresses these problems through a novel shared-control paradigm that is particularly suited to the natural uniformity of the phosphorus donor nuclear spin qubit states and electronic confinement. The architecture comprises a two-dimensional lattice of donor qubits sandwiched between two vertically separated control layers forming a mutually perpendicular crisscross gate array. Shared-control lines facilitate loading/unloading of single electrons to specific donors, thereby activating multiple qubits in parallel across the array on which the required operations for surface code quantum error correction are carried out by global spin control. The complexities of independent qubit control, wave function engineering, and ad hoc quantum interconnects are explicitly avoided. With many of the basic elements of fabrication and control based on demonstrated techniques and with simulated quantum operation below the surface code error threshold, the architecture represents a new pathway for large-scale quantum information processing in silicon and potentially in other qubit systems where uniformity can be exploited. PMID:26601310
Computer code for determination of thermally perfect gas properties
NASA Technical Reports Server (NTRS)
Witte, David W.; Tatum, Kenneth E.
1994-01-01
A set of one-dimensional compressible flow relations for a thermally perfect, calorically imperfect gas is derived for the specific heat c_p expressed as a polynomial function of temperature, and developed into the thermally perfect gas (TPG) computer code. The code produces tables of compressible flow properties similar to those of NACA Rep. 1135. Unlike the tables of NACA Rep. 1135, which are valid only in the calorically perfect temperature regime, the TPG code results are also valid in the thermally perfect, calorically imperfect temperature regime, which considerably extends the range of temperature application. Accuracy of the TPG code in the calorically perfect temperature regime is verified by comparisons with the tables of NACA Rep. 1135. In the thermally perfect, calorically imperfect temperature regime, the TPG code is validated by comparisons with results obtained from the method of NACA Rep. 1135 for calculating the thermally perfect, calorically imperfect compressible flow properties. The temperature limits for application of the TPG code are also examined. The advantage of the TPG code is its applicability to any type of gas (monatomic, diatomic, triatomic, or polyatomic) or any specified mixture thereof, whereas the method of NACA Rep. 1135 is restricted to diatomic gases.
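The core idea, c_p as a polynomial in temperature so that the ratio of specific heats varies with T instead of staying at the calorically perfect 1.4, can be illustrated with a toy calculation; the quadratic fit below is a loose, invented stand-in for the TPG code's actual curve fits.

```python
# Thermally perfect gas sketch: gamma = c_p / (c_p - R) varies with T.
R = 287.05  # J/(kg K), specific gas constant for air

def cp(T):
    # hypothetical quadratic fit, loosely air-like over ~200-2000 K;
    # not the polynomial used in the TPG code
    return 1004.0 - 0.10 * T + 2.0e-4 * T**2

def gamma(T):
    return cp(T) / (cp(T) - R)

for T in (300.0, 1500.0):
    print(f"T = {T:6.1f} K  cp = {cp(T):7.1f}  gamma = {gamma(T):.4f}")
```

At low temperature gamma stays near the calorically perfect value, while at high temperature it drops noticeably, which is why the calorically perfect tables lose validity there.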
NASA Astrophysics Data System (ADS)
Alipchenkov, V. M.; Anfimov, A. M.; Afremov, D. A.; Gorbunov, V. S.; Zeigarnik, Yu. A.; Kudryavtsev, A. V.; Osipov, S. L.; Mosunova, N. A.; Strizhov, V. F.; Usov, E. V.
2016-02-01
The conceptual fundamentals of the development of the new-generation system thermal-hydraulic computational HYDRA-IBRAE/LM code are presented. The code is intended to simulate the thermal-hydraulic processes that take place in the loops and the heat-exchange equipment of liquid-metal-cooled fast reactor systems under normal operation and anticipated operational occurrences and during accidents. The paper provides a brief overview of Russian and foreign system thermal-hydraulic codes for modeling liquid-metal coolants and gives grounds for the necessity of development of a new-generation HYDRA-IBRAE/LM code. Considering the specific engineering features of the nuclear power plants (NPPs) equipped with the BN-1200 and the BREST-OD-300 reactors, the processes and the phenomena are singled out that require a detailed analysis and development of the models to be correctly described by the system thermal-hydraulic code in question. Information on the functionality of the computational code is provided, viz., the thermal-hydraulic two-phase model, the properties of the sodium and the lead coolants, the closing equations for simulation of the heat-mass exchange processes, the models to describe the processes that take place during the steam-generator tube rupture, etc. The article gives a brief overview of the usability of the computational code, including a description of the support documentation and the supply package, as well as possibilities of taking advantage of modern computer technologies, such as parallel computations. The paper shows the current state of verification and validation of the computational code; it also presents information on the principles of constructing and populating the verification matrices for the BREST-OD-300 and the BN-1200 reactor systems. The prospects are outlined for further development of the HYDRA-IBRAE/LM code, introduction of new models into it, and enhancement of its usability. It is shown that the program of development and
LMFBR models for the ORIGEN2 computer code
Croff, A.G.; McAdoo, J.W.; Bjerke, M.A.
1981-10-01
Reactor physics calculations have led to the development of nine liquid-metal fast breeder reactor (LMFBR) models for the ORIGEN2 computer code. Four of the models are based on the U-Pu fuel cycle, two are based on the Th-U-Pu fuel cycle, and three are based on the Th-233U fuel cycle. The reactor models are based on cross sections taken directly from the reactor physics codes. Descriptions of the reactor models as well as values for the ORIGEN2 flux parameters THERM, RES, and FAST are given.
LMFBR models for the ORIGEN2 computer code
Croff, A.G.; McAdoo, J.W.; Bjerke, M.A.
1983-06-01
Reactor physics calculations have led to the development of nine liquid-metal fast breeder reactor (LMFBR) models for the ORIGEN2 computer code. Four of the models are based on the U-Pu fuel cycle, two are based on the Th-U-Pu fuel cycle, and three are based on the Th-²³³U fuel cycle. The reactor models are based on cross sections taken directly from the reactor physics codes. Descriptions of the reactor models as well as values for the ORIGEN2 flux parameters THERM, RES, and FAST are given.
Computer code for intraply hybrid composite design
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Sinclair, J. H.
1981-01-01
A computer program has been developed and is described herein for intraply hybrid composite design (INHYD). The program includes several composite micromechanics theories, intraply hybrid composite theories and a hygrothermomechanical theory. These theories provide INHYD with considerable flexibility and capability which the user can exercise through several available options. Key features and capabilities of INHYD are illustrated through selected samples.
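INHYD's own micromechanics theories are not reproduced in the abstract, but the basic idea of combining constituent properties into a hybrid composite property can be sketched with a simple rule-of-mixtures estimate. The function name, materials, and numbers below are illustrative assumptions, not INHYD's actual formulation.

```python
def longitudinal_modulus(plies):
    """Rule-of-mixtures estimate of the longitudinal modulus (GPa) of an
    intraply hybrid. plies: list of (hybrid_volume_fraction, fiber_modulus,
    matrix_modulus, fiber_volume_fraction) tuples, one per constituent."""
    E = 0.0
    for hybrid_fraction, Ef, Em, Vf in plies:
        E_ply = Vf * Ef + (1.0 - Vf) * Em   # rule of mixtures within one constituent
        E += hybrid_fraction * E_ply        # volume-weighted hybrid average
    return E

# Graphite/epoxy (70%) hybridized with S-glass/epoxy (30%), both at Vf = 0.6
# (illustrative property values)
E = longitudinal_modulus([(0.7, 230.0, 3.5, 0.6), (0.3, 85.0, 3.5, 0.6)])
```

Real intraply hybrid theories account for load sharing and failure interaction between the constituents; the linear average above is only the leading-order estimate.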
Additional extensions to the NASCAP computer code, volume 3
NASA Technical Reports Server (NTRS)
Mandell, M. J.; Cooke, D. L.
1981-01-01
The ION computer code is designed to calculate charge exchange ion densities, electric potentials, plasma temperatures, and current densities external to a neutralized ion engine in R-Z geometry. The present version assumes the beam ion current and density to be known and specified, and the neutralizing electrons to originate from a hot-wire ring surrounding the beam orifice. The plasma is treated as being resistive, with an electron relaxation time comparable to the plasma frequency. Together with the thermal and electrical boundary conditions described below and other straightforward engine parameters, these assumptions suffice to determine the required quantities. The ION code, written in ASCII FORTRAN for UNIVAC 1100 series computers, is designed to be run interactively, although it can also be run in batch mode. The input is free-format, and the output is mainly graphical, using the machine-independent graphics developed for the NASCAP code. The executive routine calls the code's major subroutines in user-specified order, and the code allows great latitude for restart and parameter change.
CFD and Neutron codes coupling on a computational platform
NASA Astrophysics Data System (ADS)
Cerroni, D.; Da Vià, R.; Manservisi, S.; Menghini, F.; Scardovelli, R.
2017-01-01
In this work we investigate the thermal-hydraulics behavior of a PWR nuclear reactor core, evaluating the power generation distribution while taking into account the local temperature field. The temperature field, evaluated using a self-developed CFD module, is exchanged with a neutron code, DONJON-DRAGON, which updates the macroscopic cross sections and evaluates the new neutron flux. From the updated neutron flux the new peak factor is evaluated and the new temperature field is computed. The exchange of data between the two codes is achieved through their inclusion in the computational platform SALOME, an open-source tool developed by the collaborative project NURESAFE. The MEDmem numerical libraries, included in the SALOME platform, are used in this work for the projection of computational fields from one problem to another. The two problems are driven by a common supervisor that can access the computational fields of both systems: in every time step, the temperature field is extracted from the CFD problem and set into the neutron problem. After this iteration the new power peak factor is projected back into the CFD problem and the new time step can be computed. Several computational examples, in which both neutron and thermal-hydraulics quantities are parametrized, are finally reported in this work.
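The supervisor-driven exchange described above can be sketched as a fixed-point iteration between two solvers within a time step. The toy `cfd_temperature` and `neutronics_power` models below are stand-ins invented for illustration; they are not DONJON-DRAGON, the CFD module, or the SALOME/MEDmem projection machinery.

```python
def cfd_temperature(power):
    # Toy CFD stand-in: local temperature rises linearly with local power
    return [300.0 + 0.5 * p for p in power]

def neutronics_power(temperature):
    # Toy Doppler-like feedback: power falls as temperature rises
    return [100.0 * (1.0 - 1.0e-4 * (t - 300.0)) for t in temperature]

def couple(n_cells=4, tol=1e-8, max_iter=100):
    """Iterate temperature/power exchange to a fixed point, as a coupling
    supervisor would do within one time step."""
    power = [100.0] * n_cells
    for _ in range(max_iter):
        temp = cfd_temperature(power)       # CFD step with current power
        new_power = neutronics_power(temp)  # neutronics step with updated T
        if max(abs(a - b) for a, b in zip(new_power, power)) < tol:
            return temp, new_power
        power = new_power
    raise RuntimeError("coupling did not converge")

temp, power = couple()
```

Because the toy feedback is weakly contracting, the plain fixed-point sweep converges in a few iterations; stiffer feedback would call for under-relaxation.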
New Parallel computing framework for radiation transport codes
Kostin, M.A.; Mokhov, N.V.; Niita, K.; /JAERI, Tokai
2010-09-01
A new parallel computing framework has been developed to use with general-purpose radiation transport codes. The framework was implemented as a C++ module that uses MPI for message passing. The module is significantly independent of radiation transport codes it can be used with, and is connected to the codes by means of a number of interface functions. The framework was integrated with the MARS15 code, and an effort is under way to deploy it in PHITS. Besides the parallel computing functionality, the framework offers a checkpoint facility that allows restarting calculations with a saved checkpoint file. The checkpoint facility can be used in single process calculations as well as in the parallel regime. Several checkpoint files can be merged into one thus combining results of several calculations. The framework also corrects some of the known problems with the scheduling and load balancing found in the original implementations of the parallel computing functionality in MARS15 and PHITS. The framework can be used efficiently on homogeneous systems and networks of workstations, where the interference from the other users is possible.
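The checkpoint-merging facility can be illustrated with a minimal sketch. The checkpoint structure assumed here (a history count plus per-tally mean values) is hypothetical, not the actual MARS15/PHITS format; the point is that results of several calculations combine as history-weighted means.

```python
def merge_checkpoints(checkpoints):
    """Merge several checkpoints into one.
    checkpoints: list of {"histories": int, "tallies": {name: mean_value}}.
    Tally means are combined weighted by the number of histories."""
    total = sum(c["histories"] for c in checkpoints)
    weighted = {}
    for c in checkpoints:
        w = c["histories"]
        for name, mean in c["tallies"].items():
            weighted[name] = weighted.get(name, 0.0) + w * mean
    return {"histories": total,
            "tallies": {k: v / total for k, v in weighted.items()}}

combined = merge_checkpoints([
    {"histories": 1_000_000, "tallies": {"dose": 2.0}},
    {"histories": 3_000_000, "tallies": {"dose": 4.0}},
])
```

A real implementation would also merge the random-number state and statistical second moments so that uncertainties, not just means, survive the merge.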
Implementation of a 3D mixing layer code on parallel computers
Roe, K.; Thakur, R.; Dang, T.; Bogucz, E.
1995-09-01
This paper summarizes our progress and experience in the development of a Computational-Fluid-Dynamics code on parallel computers to simulate three-dimensional spatially-developing mixing layers. In this initial study, the three-dimensional time-dependent Euler equations are solved using a finite-volume explicit time-marching algorithm. The code was first programmed in Fortran 77 for sequential computers. The code was then converted for use on parallel computers using the conventional message-passing technique, although we have not been able to compile the code with the present version of HPF compilers.
NASA Multidimensional Stirling Convertor Code Developed
NASA Technical Reports Server (NTRS)
Tew, Roy C.; Thieme, Lanny G.
2004-01-01
A high-efficiency Stirling Radioisotope Generator (SRG) for use on potential NASA Space Science missions is being developed by the Department of Energy, Lockheed Martin, Stirling Technology Company, and the NASA Glenn Research Center. These missions may include providing spacecraft onboard electric power for deep space missions or power for unmanned Mars rovers. Glenn is also developing advanced technology for Stirling convertors, aimed at substantially improving the specific power and efficiency of the convertor and the overall power system. Performance and mass improvement goals have been established for second- and third-generation Stirling radioisotope power systems. Multiple efforts are underway to achieve these goals, both in house at Glenn and under various grants and contracts. These efforts include the development of a multidimensional Stirling computational fluid dynamics code, high-temperature materials, advanced controllers, an end-to-end system dynamics model, low-vibration techniques, advanced regenerators, and a lightweight convertor. Under a NASA grant, Cleveland State University (CSU) and its subcontractors, the University of Minnesota (UMN) and Gedeon Associates, have developed a two-dimensional computer simulation of a CSUmod Stirling convertor. The CFD-ACE commercial software developed by CFD Research Corp. of Huntsville, Alabama, is being used. The CSUmod is a scaled version of the Stirling Technology Demonstrator Convertor (TDC), which was designed and fabricated by the Stirling Technology Company and is being tested by NASA. The schematic illustrates the structure of this model. Modeled are the fluid-flow and heat-transfer phenomena that occur in the expansion space, the heater, the regenerator, the cooler, the compression space, the surrounding walls, and the moving piston and displacer. In addition, the overall heat transfer, the indicated power, and the efficiency can be calculated. The CSUmod model is being converted to a two
Additional extensions to the NASCAP computer code, volume 1
NASA Technical Reports Server (NTRS)
Mandell, M. J.; Katz, I.; Stannard, P. R.
1981-01-01
Extensions and revisions to a computer code that comprehensively analyzes problems of spacecraft charging (NASCAP) are documented. Using a fully three-dimensional approach, it can accurately predict spacecraft potentials under a variety of conditions. Among the extensions are a multiple electron/ion gun test tank capability, and the ability to model anisotropic and time-dependent space environments. Also documented are a greatly extended MATCHG program and the preliminary version of NASCAP/LEO. The interactive MATCHG code was developed into an extremely powerful tool for the study of material-environment interactions. NASCAP/LEO, a three-dimensional code to study current collection under conditions of high voltages and short Debye lengths, was distributed for preliminary testing.
Computer Code For Turbocompounded Adiabatic Diesel Engine
NASA Technical Reports Server (NTRS)
Assanis, D. N.; Heywood, J. B.
1988-01-01
Computer simulation developed to study advantages of increased exhaust enthalpy in adiabatic turbocompounded diesel engine. Subsystems of conceptual engine include compressor, reciprocator, turbocharger turbine, compounded turbine, ducting, and heat exchangers. Focus of simulation of total system is to define transfers of mass and energy, including release and transfer of heat and transfer of work in each subsystem, and relationship among subsystems. Written in FORTRAN IV.
Computer code for the prediction of nozzle admittance
NASA Technical Reports Server (NTRS)
Nguyen, Thong V.
1988-01-01
A procedure which can accurately characterize injector designs for large thrust (0.5 to 1.5 million pounds), high pressure (500 to 3000 psia) LOX/hydrocarbon engines is currently under development. In this procedure, a rectangular cross-sectional combustion chamber is to be used to simulate the lower transverse frequency modes of the large scale chamber. The chamber will be sized so that the first width mode of the rectangular chamber corresponds to the first tangential mode of the full-scale chamber. Test data to be obtained from the rectangular chamber will be used to assess the full scale engine stability. This requires the development of combustion stability models for rectangular chambers. As part of the combustion stability model development, a computer code, NOAD, based on existing theory was developed to calculate the nozzle admittances for both rectangular and axisymmetric nozzles. This code is detailed.
Decoding the LIM development code.
Gill, Gordon N
2003-01-01
During development a vast number of distinct cell types arise from dividing progenitor cells. Concentration gradients of ligands that act via cell surface receptors signal transcriptional regulators that repress and activate particular genes. LIM homeodomain proteins are an important class of transcriptional regulators that direct cell fate. Although in C. elegans only a single LIM homeodomain protein is expressed in a particular cell type, in vertebrates combinations of LIM homeodomain proteins are expressed in cells that determine cell fates. We have investigated the molecular basis of the LIM domain "code" that determines cell fates such as wing formation in Drosophila and motor neuron formation in chicks. The basic code is a homotetramer of 2 LIM homeodomain proteins bridged by the adaptor protein, nuclear LIM interactor (NLI). A more complex molecular language consisting of a hexamer complex involving NLI and 2 LIM homeodomain proteins, Lhx3 and Isl1, determines ventral motor neuron formation. The same molecular "words" adopt different meanings depending on the context of expression of other molecular "words."
Hanford Meteorological Station computer codes: Volume 4, The SUM computer code
Andrews, G.L.; Buck, J.W.
1987-09-01
At the end of each swing shift, the Hanford Meteorological Station (HMS), operated by Pacific Northwest Laboratory, archives a set of daily weather observations. These weather observations are a summary of the maximum and minimum temperature, total precipitation, maximum and minimum relative humidity, total snowfall, total snow depth at 1200 Greenwich Mean Time (GMT), and maximum wind speed plus the direction from which the wind occurred and the time it occurred. This summary also indicates the occurrence of rain, snow, and other weather phenomena. The SUM computer code is used to archive the summary and apply quality assurance checks to the data. This code accesses an input file that contains the date of the previous archive and an output file that contains a daily weather summary for the current month. As part of the program, a data entry form consisting of 21 fields must be filled in by the user. The information on the form is appended to the monthly file, which provides an archive for the daily weather summary. This volume describes the implementation and operation of the SUM computer code at the HMS.
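The quality-assurance step described above can be sketched as a set of sanity checks applied before a day's summary is appended to the monthly archive. The field names and limits below are hypothetical, not the HMS's actual 21-field data entry form.

```python
def qa_check(summary):
    """Return a list of QA failures for one day's weather summary
    (empty list means the record passes)."""
    errors = []
    if summary["t_max"] < summary["t_min"]:
        errors.append("max temperature below min temperature")
    if summary["precip_total"] < 0.0:
        errors.append("negative precipitation")
    if not (0.0 <= summary["rh_min"] <= summary["rh_max"] <= 100.0):
        errors.append("relative humidity out of range")
    return errors

monthly_archive = []   # stand-in for the current month's summary file
day = {"t_max": 21.7, "t_min": 9.3, "precip_total": 0.0,
       "rh_min": 35.0, "rh_max": 88.0}
if not qa_check(day):
    monthly_archive.append(day)   # archive only records that pass QA
```

The real code appends fixed-format records to a monthly file rather than an in-memory list, but the check-then-append flow is the same.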
Convergence acceleration of the Proteus computer code with multigrid methods
NASA Technical Reports Server (NTRS)
Demuren, A. O.; Ibraheem, S. O.
1992-01-01
Presented here is the first part of a study to implement convergence acceleration techniques based on the multigrid concept in the Proteus computer code. A review is given of previous studies on the implementation of multigrid methods in computer codes for compressible flow analysis. Also presented is a detailed stability analysis of upwind and central-difference based numerical schemes for solving the Euler and Navier-Stokes equations. Results are given of a convergence study of the Proteus code on computational grids of different sizes. The results presented here form the foundation for the implementation of multigrid methods in the Proteus code.
Matsumoto, Masaki; Yamanaka, Tsuneyasu; Hayakawa, Nobuhiro; Iwai, Satoshi; Sugiura, Nobuyuki
2015-03-01
This paper describes the Basic Radionuclide vAlue for Internal Dosimetry (BRAID) code, which was developed to calculate the time-dependent activity distribution in each organ and tissue characterised by the biokinetic compartmental models provided by the International Commission on Radiological Protection (ICRP). Translocation from one compartment to the next is taken to be governed by first-order kinetics, which is formulated as a system of first-order differential equations. In the source program of this code, the conservation equations are solved for the mass balance that describes the transfer of a radionuclide between compartments. This code is applicable to the evaluation of the radioactivity of nuclides in an organ or tissue without modification of the source program. It is also possible to easily handle cases where the biokinetic model is revised or a uniquely defined model is applied by a user, because this code is designed so that all information on the biokinetic model structure is imported from an input file. Sample calculations are performed with the ICRP model, and the results are compared with the analytic solutions of simple models. It is suggested that this code provides sufficient results for dose estimation and the interpretation of monitoring data.
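The kind of calculation BRAID performs can be sketched for a minimal biokinetic model: first-order transfer between compartments, integrated numerically and checked against the analytic (Bateman) solution, in the spirit of the abstract's comparison with simple models. The rate constants and compartment layout below are illustrative, not ICRP values.

```python
import math

def step_rk4(q, rates, dt):
    """One RK4 step of dq/dt = M q, with M built from first-order transfer
    rates: rates[(i, j)] = rate constant from compartment i to j."""
    n = len(q)
    def deriv(q):
        dq = [0.0] * n
        for (i, j), k in rates.items():
            flow = k * q[i]
            dq[i] -= flow
            if j is not None:          # j = None means excretion (leaves the body)
                dq[j] += flow
        return dq
    k1 = deriv(q)
    k2 = deriv([qi + 0.5 * dt * ki for qi, ki in zip(q, k1)])
    k3 = deriv([qi + 0.5 * dt * ki for qi, ki in zip(q, k2)])
    k4 = deriv([qi + dt * ki for qi, ki in zip(q, k3)])
    return [qi + dt / 6.0 * (a + 2 * b + 2 * c + d)
            for qi, a, b, c, d in zip(q, k1, k2, k3, k4)]

# Two-compartment chain: blood -> organ -> excretion, 1 Bq intake to blood
k_blood_organ, k_organ_out = 0.5, 0.1          # per day (illustrative)
rates = {(0, 1): k_blood_organ, (1, None): k_organ_out}
q, dt = [1.0, 0.0], 0.01
for _ in range(500):                           # integrate to t = 5 days
    q = step_rk4(q, rates, dt)

# Analytic Bateman solution at t = 5 d, for the check
t = 5.0
q1_exact = math.exp(-k_blood_organ * t)
q2_exact = (k_blood_organ / (k_organ_out - k_blood_organ)
            * (math.exp(-k_blood_organ * t) - math.exp(-k_organ_out * t)))
```

A production code would read the compartment structure and rates from an input file, as BRAID does, rather than hard-coding them.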
Appendage flow computations using the INS3D computer code
NASA Astrophysics Data System (ADS)
Ohring, Samuel
1989-10-01
The INS3D code, a steady state incompressible, fully 3-D Navier-Stokes solver, was applied to the computation of flow past an appendage mounted between two parallel flat plates of infinite extent at a Reynolds number of one-half million. The Baldwin-Lomax turbulence model was used to compute the eddy viscosity. The appendage consisted of a 1.5:1 elliptical nose and a NACA 0020 tail joined at maximum thickness of 0.24 chordlengths. A detailed description of the flow results covers all the major features of appendage flow and the results, for an unfilleted appendage, are in general agreement with experimental and other numerical results, except that the lateral location of the horseshoe vortex is larger than that in the experimental results. A detailed description is presented of the important trailing edge vortex. Detailed results for a second flow case, in which filleting is applied mainly to the front and side of the aforementioned appendage, show a greatly weakened horseshoe vortex but a still significant trailing edge vortex, that prevented velocity-deficit reduction in the wake, compared to the unfilleted appendage flow case. The calculations for the filleted case also exhibited an upstream instability. The plotting program PLOT3D was used to obtain color photos for flow visualization.
Proceduracy: Computer Code Writing in the Continuum of Literacy
ERIC Educational Resources Information Center
Vee, Annette
2010-01-01
This dissertation looks at computer programming through the lens of literacy studies, building from the concept of code as a written text with expressive and rhetorical power. I focus on the intersecting technological and social factors of computer code writing as a literacy--a practice I call "proceduracy". Like literacy, proceduracy is a human…
Micromechanics Analysis Code (MAC) Developed
NASA Technical Reports Server (NTRS)
1997-01-01
The ability to accurately predict the thermomechanical deformation response of advanced composite materials continues to play an important role in the development of these strategic materials. Analytical models that predict the effective behavior of composites are used not only by engineers in performing structural analysis of large-scale composite components but also by material scientists in developing new material systems. For an analytical model to fulfill these two distinct functions, it must be based on a micromechanics approach that uses physically based deformation and life constitutive models, and it must allow one to generate the average (macro) response of a composite material given the properties of the individual constituents and their geometric arrangement. Only then can such a model be used by a material scientist to investigate the effect of different deformation mechanisms on the overall response of the composite and, thereby, identify the appropriate constituents for a given application. However, if a micromechanical model is to be used in a large-scale structural analysis it must be (1) computationally efficient, (2) able to generate accurate displacement and stress fields at both the macro and micro level, and (3) compatible with the finite element method. In addition, new advancements in processing and fabrication techniques now make it possible to engineer the architectures of these advanced composite systems. Full utilization of these emerging manufacturing capabilities requires the development of a computationally efficient micromechanics analysis tool that can accurately predict the effect of microstructural details on the internal and macroscopic behavior of composites. Computational efficiency is required because (1) a large number of parameters must be varied in the course of engineering (or designing) composite materials and (2) the optimization of a material's microstructure requires that the micromechanics model be integrated with
Python interface generator for Fortran based codes (a code development aid)
Grote, D. P.
2012-02-22
Forthon generates links between Fortran and Python. Python is a high level, object oriented, interactive and scripting language that allows a flexible and versatile interface to computational tools. The Forthon package generates the necessary wrapping code which allows access to the Fortran database and to the Fortran subroutines and functions. This provides a development package where the computationally intensive parts of a code can be written in efficient Fortran, and the high level controlling code can be written in the much more versatile Python language.
Panel-Method Computer Code For Potential Flow
NASA Technical Reports Server (NTRS)
Ashby, Dale L.; Dudley, Michael R.; Iguchi, Steven K.
1992-01-01
Low-order panel method used to reduce computation time. Panel code PMARC (Panel Method Ames Research Center) numerically simulates flow field around or through complex three-dimensional bodies such as complete aircraft models or wind tunnel. Based on potential-flow theory. Facilitates addition of new features to code and tailoring of code to specific problems and computer-hardware constraints. Written in standard FORTRAN 77.
Hanford Meteorological Station computer codes: Volume 7, The RIVER computer code
Andrews, G.L.; Buck, J.W.
1988-03-01
The RIVER computer code is used to archive Columbia River data measured at the 100N reactor. The data are recorded every other hour starting at 0100 Pacific Standard Time (12 observations in a day), and consist of river elevation, temperature, and flow rate. The program prompts the user for river data by using a data entry form. After the data have been entered and verified, the program appends each hour of river data to the end of each corresponding surface observation record for the current day. The appended data are then stored in the current month's surface observation file.
3-D field computation: The near-triumph of commercial codes
Turner, L.R.
1995-07-01
In recent years, more and more of those who design and analyze magnets and other devices are using commercial codes rather than developing their own. This paper considers the commercial codes and the features available with them. Other recent trends with 3-D field computation include parallel computation and visualization methods such as virtual reality systems.
GERMINAL — A computer code for predicting fuel pin behaviour
NASA Astrophysics Data System (ADS)
Melis, J. C.; Roche, L.; Piron, J. P.; Truffert, J.
1992-06-01
In the frame of the R and D on FBR fuels, CEA/DEC is developing the computer code GERMINAL to study the fuel pin thermal-mechanical behaviour during steady-state and incidental conditions. The development of GERMINAL is foreseen in two steps: (1) The GERMINAL 1 code, designed as a "workhorse" for immediate applications. Version 1 of GERMINAL 1 is presently delivered fully documented with a physical qualification guaranteed up to 8 at%. (2) Version 2 of GERMINAL 1, which, in addition to what is presently treated in GERMINAL 1, includes the treatment of high burnup effects on the fission gas release and the fuel-clad joint. This version, GERMINAL 1.2, is presently under testing and will be completed by the end of 1991. The GERMINAL 2 code, designed as a reference code for future applications, will cover all the aspects of GERMINAL 1 (including high burnup effects) with a more general mechanical treatment and a completely revised and modernized software structure.
CFD Code Development for Combustor Flows
NASA Technical Reports Server (NTRS)
Norris, Andrew
2003-01-01
During the lifetime of this grant, work has been performed in the areas of model development, code development, code validation and code application. Model development has included the PDF combustion module, chemical kinetics based on thermodynamics, neural network storage of chemical kinetics, ILDM chemical kinetics and assumed PDF work. Many of these models were then implemented in the code, and in addition many improvements were made to the code, including the addition of new chemistry integrators, property evaluation schemes, new chemistry models and turbulence-chemistry interaction methodology. Validation of all new models and code improvements was also performed, and the code was applied to the ZCET program and to the NPSS GEW combustor program. Several important items remain under development, including the NOx post processing, assumed PDF model development and chemical kinetic development. It is expected that this work will continue under the new grant.
NASA Astrophysics Data System (ADS)
Moon, Hongsik
What is the impact of multicore and associated advanced technologies on computational software for science? Most researchers and students have multicore laptops or desktops for their research, and they need computing power to run computational software packages. Computing power was initially derived from Central Processing Unit (CPU) clock speed. That changed when increases in clock speed became constrained by power requirements. Chip manufacturers turned to multicore CPU architectures and associated technological advancements to create the CPUs for the future. Most software applications benefited from the increased computing power the same way that increases in clock speed helped applications run faster. However, for Computational ElectroMagnetics (CEM) software developers, this change was not an obvious benefit - it appeared to be a detriment. Developers were challenged to find a way to correctly utilize the advancements in hardware so that their codes could benefit. The solution was parallelization, and this dissertation details the investigation to address these challenges. Prior to multicore CPUs, advanced computer technologies were compared using benchmark software, and the metric was FLoating-point Operations Per Second (FLOPS), which indicates system performance for scientific applications that make heavy use of floating-point calculations. Is FLOPS an effective metric for parallelized CEM simulation tools on new multicore systems? Parallel CEM software needs to be benchmarked not only by FLOPS but also by the performance of other parameters related to type and utilization of the hardware, such as CPU, Random Access Memory (RAM), hard disk, network, etc. The codes need to be optimized for more than just FLOPS, and new parameters must be included in benchmarking. In this dissertation, the parallel CEM software named High Order Basis Based Integral Equation Solver (HOBBIES) is introduced. This code was developed to address the needs of the
Interactive computer code for dynamic and soil structure interaction analysis
Mulliken, J.S.
1995-12-01
A new interactive computer code is presented in this paper for dynamic and soil-structure interaction (SSI) analyses. The computer program FETA (Finite Element Transient Analysis) is a self contained interactive graphics environment for IBM-PCs that is used for the development of structural and soil models as well as post-processing dynamic analysis output. Full 3-D isometric views of the soil-structure system, animation of displacements, frequency and time domain responses at nodes, and response spectra are all graphically available simply by pointing and clicking with a mouse. FETA's finite element solver performs 2-D and 3-D frequency and time domain soil-structure interaction analyses. The solver can be directly accessed from the graphical interface on a PC, or run on a number of other computer platforms.
Development of a CFD code for casting simulation
NASA Technical Reports Server (NTRS)
Murph, Jesse E.
1992-01-01
The task of developing a computational fluid dynamics (CFD) code to accurately model the mold filling phase of a casting operation was accomplished in a systematic manner. First the state-of-the-art was determined through a literature search, a code search, and participation with casting industry personnel involved in consortium startups. From this material and inputs from industry personnel, an evaluation of the currently available codes was made. It was determined that a few of the codes already contained sophisticated CFD algorithms and further validation of one of these codes could preclude the development of a new CFD code for this purpose. With industry concurrence, ProCAST was chosen for further evaluation. Two benchmark cases were used to evaluate the code's performance using a Silicon Graphics Personal Iris system. The results of these limited evaluations (because of machine and time constraints) are presented along with discussions of possible improvements and recommendations for further evaluation.
Improved Flow Modeling in Transient Reactor Safety Analysis Computer Codes
Holowach, M.J.; Hochreiter, L.E.; Cheung, F.B.
2002-07-01
A method of accounting for fluid-to-fluid shear between calculational cells over a wide range of flow conditions envisioned in reactor safety studies has been developed such that it may be easily implemented in a computer code such as COBRA-TF for more detailed subchannel analysis. At a given nodal height in the calculational model, equivalent hydraulic diameters are determined for each specific calculational cell using either laminar or turbulent velocity profiles. The velocity profile may be determined from a separate CFD (Computational Fluid Dynamics) analysis, experimental data, or existing semi-empirical relationships. The equivalent hydraulic diameter is then applied to the wall drag force calculation so as to determine the appropriate equivalent fluid-to-fluid shear caused by the wall for each cell based on the input velocity profile. This means of assigning the shear to a specific cell is independent of the actual wetted perimeter and flow area for the calculational cell. The use of this equivalent hydraulic diameter for each cell within a calculational subchannel results in a representative velocity profile which can further increase the accuracy and detail of heat transfer and fluid flow modeling within the subchannel when utilizing a thermal hydraulics systems analysis computer code such as COBRA-TF. Utilizing COBRA-TF with the flow modeling enhancement results in increased accuracy for a coarse-mesh model without the significantly greater computational and time requirements of a full-scale 3D (three-dimensional) transient CFD calculation. (authors)
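In its simplest geometric form, the hydraulic diameter at the core of the method above is D_h = 4A/P. The sketch below only illustrates that definition for a circular tube and a hypothetical square rod-bundle subchannel; the paper's cell-wise equivalent diameters are derived from velocity profiles, which this sketch does not attempt.

```python
import math

def hydraulic_diameter(flow_area, wetted_perimeter):
    """Hydraulic diameter D_h = 4 * A / P_wetted."""
    return 4.0 * flow_area / wetted_perimeter

# Sanity check: a circular tube of diameter D recovers D_h = D
D = 0.012                                      # m
assert abs(hydraulic_diameter(math.pi * D**2 / 4.0, math.pi * D) - D) < 1e-12

# Interior subchannel of a square rod array: pitch p, rod diameter d
p, d = 0.0126, 0.0095                          # m (illustrative dimensions)
area = p**2 - math.pi * d**2 / 4.0             # flow area per subchannel
perimeter = math.pi * d                        # wetted perimeter (rod surface only)
d_h = hydraulic_diameter(area, perimeter)
```

For an interior subchannel the four quarter-rod surfaces sum to one full rod perimeter, which is why `perimeter` above is simply pi * d.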
WSRC approach to validation of criticality safety computer codes
Finch, D.R.; Mincey, J.F.
1991-12-31
Recent hardware and operating system changes at Westinghouse Savannah River Site (WSRC) have necessitated review of the validation for JOSHUA criticality safety computer codes. As part of the planning for this effort, a policy for validation of JOSHUA and other criticality safety codes has been developed. This policy will be illustrated with the steps being taken at WSRC. The objective in validating a specific computational method is to reliably correlate its calculated neutron multiplication factor (K_eff) with known values over a well-defined set of neutronic conditions. Said another way, such correlations should be: (1) repeatable; (2) demonstrated with defined confidence; and (3) identify the range of neutronic conditions (area of applicability) for which the correlations are valid. The general approach to validation of computational methods at WSRC must encompass a large number of diverse types of fissile material processes in different operations. Special problems are presented in validating computational methods when very few experiments are available (such as for enriched uranium systems with principal second isotope ²³⁶U). To cover all process conditions at WSRC, a broad validation approach has been used. Broad validation is based upon calculation of many experiments to span all possible ranges of reflection, nuclide concentrations, moderation ratios, etc. Narrow validation, in comparison, relies on calculations of a few experiments very near anticipated worst-case process conditions. The methods and problems of broad validation are discussed.
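The correlation step, relating calculated K_eff values to known benchmark values with defined confidence, can be sketched as a bias calculation over a benchmark set. The statistical treatment here (sample mean and standard deviation of the calculated-minus-experimental differences) is a simplification of actual validation practice, which applies one-sided tolerance factors and administrative margins, and the benchmark values below are invented.

```python
import statistics

def bias_statistics(k_calc, k_exp):
    """Bias (mean of calculated minus experimental K_eff over the benchmark
    set) and its sample standard deviation."""
    diffs = [c - e for c, e in zip(k_calc, k_exp)]
    return statistics.mean(diffs), statistics.stdev(diffs)

# Invented benchmark results: five critical (K_eff = 1) experiments
k_calc = [0.998, 1.002, 0.995, 1.001, 0.997]
k_exp = [1.000, 1.000, 1.000, 1.000, 1.000]
bias, sigma = bias_statistics(k_calc, k_exp)
```

A negative bias means the method underpredicts K_eff on the benchmark set; in practice only a conservative (non-positive) credit is taken for it, together with a statistical uncertainty term scaled by a tolerance factor.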
Space Radiation Transport Code Development: 3DHZETRN
NASA Technical Reports Server (NTRS)
Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.
2015-01-01
The space radiation transport code, HZETRN, has been used extensively for research, vehicle design optimization, risk analysis, and related applications. One of the simplifying features of the HZETRN transport formalism is the straight-ahead approximation, wherein all particles are assumed to travel along a common axis. This reduces the governing equation to one spatial dimension allowing enormous simplification and highly efficient computational procedures to be implemented. Despite the physical simplifications, the HZETRN code is widely used for space applications and has been found to agree well with fully 3D Monte Carlo simulations in many circumstances. Recent work has focused on the development of 3D transport corrections for neutrons and light ions (Z ≤ 2) for which the straight-ahead approximation is known to be less accurate. Within the development of 3D corrections, well-defined convergence criteria have been considered, allowing approximation errors at each stage in model development to be quantified. The present level of development assumes the neutron cross sections have an isotropic component treated within N explicit angular directions and a forward component represented by the straight-ahead approximation. The N = 1 solution refers to the straight-ahead treatment, while N = 2 represents the bi-directional model in current use for engineering design. The figure below shows neutrons, protons, and alphas for various values of N at locations in an aluminum sphere exposed to a solar particle event (SPE) spectrum. The neutron fluence converges quickly in simple geometry with N > 14 directions. The improved code, 3DHZETRN, transports neutrons, light ions, and heavy ions under space-like boundary conditions through general geometry while maintaining a high degree of computational efficiency. A brief overview of the 3D transport formalism for neutrons and light ions is given, and extensive benchmarking results with the Monte Carlo codes Geant4, FLUKA, and
Validation of the RESRAD-RECYCLE computer code.
Cheng, J.-J.; Yu, C.; Williams, W. A.; Murphie, W.
2002-02-01
The RESRAD-RECYCLE computer code was developed by Argonne National Laboratory under the sponsorship of the U.S. Department of Energy. It was designed to analyze potential radiation exposures resulting from the reuse and recycling of radioactively contaminated scrap metal and equipment. It was one of two codes selected in an international model validation study concerning recycling of radioactively contaminated metals. In the validation study, dose measurements at various stages of melting a spent nuclear fuel rack at Studsvik RadWaste AB, Sweden, were collected and compared with modeling results. The comparison shows that the RESRAD-RECYCLE results agree fairly well with the measurement data. Among the scenarios considered, dose results and measurement data agree within a factor of 6. Discrepancies may be explained by the geometrical limitation of the RESRAD-RECYCLE's external exposure model, the dynamic nature of the recycling activities, and inaccuracy in the input parameter values used in dose calculations.
TRACKING CODE DEVELOPMENT FOR BEAM DYNAMICS OPTIMIZATION
Yang, L.
2011-03-28
Dynamic aperture (DA) optimization with direct particle tracking is a straightforward approach when sufficient computing power is available. It can include various realistic errors and is closer to reality than theoretical estimates. In this approach, a fast, parallel tracking code can be very helpful. In this presentation, we describe an implementation of the storage ring particle tracking code TESLA for beam dynamics optimization. It supports MPI-based parallel computing and is robust as a DA calculation engine. This code has been used in the NSLS-II dynamics optimizations and has shown promising performance.
Improvements to the fastex flutter analysis computer code
NASA Technical Reports Server (NTRS)
Taylor, Ronald F.
1987-01-01
Modifications to the FASTEX flutter analysis computer code (UDFASTEX) are described. The objectives were to increase the problem size capacity of FASTEX, to reduce run times by modifying the modal interpolation procedure, and to add new user features. All modifications to the program are operable on the VAX 11/700 series computers under the VMS operating system. Interfaces were provided to aid in the inclusion of alternate aerodynamic and flutter eigenvalue calculations. Plots can be made of the flutter velocity, damping, and frequency data. A preliminary capability was also developed to plot contours of unsteady pressure amplitude and phase. The relevant equations of motion, modal interpolation procedures, and control system considerations are described, and software developments are summarized. Additional information documenting input instructions, procedures, and details of the plate spline algorithm is found in the appendices.
A computer code for performance of spur gears
NASA Technical Reports Server (NTRS)
Wang, K. L.; Cheng, H. S.
1983-01-01
In spur gears, both performance and failure predictions are known to depend strongly on the variation of load, lubricant film thickness, and total flash or contact temperature of the contacting point as it moves along the contact path. The need for an accurate tool for predicting these variables has prompted the development of a computer code based on recent findings in elastohydrodynamic lubrication (EHL) and on finite element methods. The analyses and some typical results are presented to illustrate the effects of gear geometry, velocity, load, lubricant viscosity, and surface convective heat transfer coefficient on the performance of spur gears.
PORPST: A statistical postprocessor for the PORMC computer code
Eslinger, P.W.; Didier, B.T.
1991-06-01
This report describes the theory underlying the PORPST code and gives details for using the code. The PORPST code is designed to do statistical postprocessing on files written by the PORMC computer code. The data written by PORMC are summarized in terms of means, variances, standard deviations, or statistical distributions. In addition, the PORPST code provides for plotting of the results, either internal to the code or through use of the CONTOUR3 postprocessor. Section 2.0 discusses the mathematical basis of the code, and Section 3.0 discusses the code structure. Section 4.0 describes the free-format point command language. Section 5.0 describes in detail the commands to run the program. Section 6.0 provides an example program run, and Section 7.0 provides the references. 11 refs., 1 fig., 17 tabs.
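The summarization PORPST performs, reducing realization files to means, variances, standard deviations, and statistical distributions, can be sketched as below. The input layout and variable names are hypothetical stand-ins for the PORMC output files; only the statistics mirror those named in the abstract.

```python
# Sketch: summarize Monte Carlo realizations the way a statistical
# postprocessor such as PORPST might (mean, variance, standard deviation,
# empirical distribution).  The input layout is a hypothetical stand-in
# for the PORMC output files.

def summarize(realizations):
    n = len(realizations)
    mean = sum(realizations) / n
    var = sum((x - mean) ** 2 for x in realizations) / (n - 1)  # sample variance
    return {"n": n, "mean": mean, "var": var, "std": var ** 0.5}

def empirical_cdf(realizations, x):
    """Fraction of realizations not exceeding x (empirical distribution)."""
    return sum(1 for r in realizations if r <= x) / len(realizations)

heads = [10.2, 9.8, 10.5, 10.1, 9.9, 10.0]  # e.g. one output variable at a node
stats = summarize(heads)
```

The same reduction would be repeated per node and per output variable, with the resulting summaries handed to a plotting stage such as the one PORPST provides.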
Hanford Meteorological Station computer codes: Volume 8, The REVIEW computer code
Andrews, G.L.; Burk, K.W.
1988-08-01
The Hanford Meteorological Station (HMS) routinely collects meteorological data from sources on and off the Hanford Site. The data are averaged over both 15 minutes and 1 hour and are maintained in separate databases on the Digital Equipment Corporation (DEC) VAX 11/750 at the HMS. The databases are transferred to the Emergency Management System (EMS) DEC VAX 11/750 computer. The EMS is part of the Unified Dose Assessment Center, which is located on the ground-level floor of the Federal building in Richland and operated by Pacific Northwest Laboratory. The computer program REVIEW is used to display meteorological data in graphical and alphanumeric form from either the 15-minute or hourly database. The code is available on the HMS and EMS computers. The REVIEW program helps maintain a high level of quality assurance on the instruments that collect the data and provides a convenient mechanism for analyzing meteorological data on a routine basis and during emergency response situations.
Optimization of KINETICS Chemical Computation Code
NASA Technical Reports Server (NTRS)
Donastorg, Cristina
2012-01-01
NASA JPL has been creating a code in FORTRAN called KINETICS to model the chemistry of planetary atmospheres. Recently there has been an effort to introduce the Message Passing Interface (MPI) into the code so as to cut down the run time of the program. There has been some implementation of MPI into KINETICS; however, the code could still be more efficient than it currently is. One way to increase efficiency is to send only certain variables to all the processes when an MPI subroutine is called and to gather only certain variables when the subroutine finishes. Therefore, all the variables used in three of the main subroutines needed to be investigated. Because of the sheer amount of code to comb through, this task was given as a ten-week project. I have been able to create flowcharts outlining the subroutines, common blocks, and functions used within the three main subroutines. From these flowcharts I created tables outlining the variables used in each block and important information about each. All this information will be used to determine how to run MPI in KINETICS in the most efficient way possible.
Advances in Parallel Electromagnetic Codes for Accelerator Science and Development
Ko, Kwok; Candel, Arno; Ge, Lixin; Kabel, Andreas; Lee, Rich; Li, Zenghai; Ng, Cho; Rawat, Vineet; Schussman, Greg; Xiao, Liling; /SLAC
2011-02-07
Over a decade of concerted effort in code development for accelerator applications has resulted in a new set of electromagnetic codes which are based on higher-order finite elements for superior geometry fidelity and better solution accuracy. SLAC's ACE3P code suite is designed to harness the power of massively parallel computers to tackle large complex problems with the increased memory and solve them at greater speed. The US DOE supports the computational science R&D under the SciDAC project to improve the scalability of ACE3P, and provides the high performance computing resources needed for the applications. This paper summarizes the advances in the ACE3P set of codes, explains the capabilities of the modules, and presents results from selected applications covering a range of problems in accelerator science and development important to the Office of Science.
GAM-HEAT -- a computer code to compute heat transfer in complex enclosures. Revision 1
Cooper, R.E.; Taylor, J.R.; Kielpinski, A.L.; Steimke, J.L.
1991-02-01
The GAM-HEAT code was developed for heat transfer analyses associated with postulated Double Ended Guillotine Break Loss Of Coolant Accidents (DEGB LOCA) resulting in a drained reactor vessel. In these analyses the gamma radiation resulting from fission product decay constitutes the primary source of energy as a function of time. This energy is deposited into the various reactor components and is re-radiated as thermal energy. The code accounts for all radiant heat exchanges within and leaving the reactor enclosure. The SRS reactors constitute complex radiant exchange enclosures since there are many assemblies of various types within the primary enclosure and most of the assemblies themselves constitute enclosures. GAM-HEAT accounts for this complexity by processing externally generated view factors and connectivity matrices, and also accounts for convective, conductive, and advective heat exchanges. The code is applicable to many situations involving heat exchange between surfaces within a radiatively passive medium. The GAM-HEAT code has been exercised extensively for computing transient temperatures in SRS reactors with specific charges and control components. Results from these computations have been used to establish the need for and to evaluate hardware modifications designed to mitigate results of postulated accident scenarios, and to assist in the specification of safe reactor operating power limits. The code utilizes temperature-dependent material properties. The efficiency of the code has been enhanced by the use of an iterative equation solver. Verification of the code to date consists of comparisons with parallel efforts at Los Alamos National Laboratory, with similar efforts at Westinghouse Science and Technology Center in Pittsburgh, PA, and with benchmark problems having known analytical or iterated solutions. All comparisons and tests yield results that indicate the GAM-HEAT code performs as intended.
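The view-factor-driven radiant exchange such a code performs can be illustrated with the classic net-radiation (radiosity) method for a gray-diffuse enclosure. The two-surface view-factor matrix below is the textbook infinite-parallel-plates case, chosen because it has a closed-form answer; this is an illustration of the method, not GAM-HEAT's model.

```python
# Sketch: gray-diffuse enclosure exchange via the radiosity method.
# J_i = eps_i*sigma*T_i^4 + (1 - eps_i) * sum_j F_ij * J_j is solved by
# fixed-point iteration; the net flux leaving surface i is q_i = J_i - G_i.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def radiosity(temps, eps, F, iters=200):
    n = len(temps)
    J = [e * SIGMA * T ** 4 for e, T in zip(eps, temps)]  # initial guess
    for _ in range(iters):
        G = [sum(F[i][j] * J[j] for j in range(n)) for i in range(n)]  # irradiation
        J = [eps[i] * SIGMA * temps[i] ** 4 + (1 - eps[i]) * G[i]
             for i in range(n)]
    G = [sum(F[i][j] * J[j] for j in range(n)) for i in range(n)]
    return [J[i] - G[i] for i in range(n)]   # net radiative flux, W/m^2

# Two infinite parallel plates: each surface sees only the other (F_12 = F_21 = 1).
q = radiosity(temps=[600.0, 300.0], eps=[0.8, 0.6],
              F=[[0.0, 1.0], [1.0, 0.0]])
```

For this geometry the iteration reproduces the known result q = sigma*(T1^4 - T2^4)/(1/eps1 + 1/eps2 - 1); a code like GAM-HEAT does the analogous solve over many surfaces using externally generated view-factor and connectivity matrices.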
PLASIM: A computer code for simulating charge exchange plasma propagation
NASA Technical Reports Server (NTRS)
Robinson, R. S.; Deininger, W. D.; Winder, D. R.; Kaufman, H. R.
1982-01-01
The propagation of the charge exchange plasma for an electrostatic ion thruster is crucial in determining the interaction of that plasma with the associated spacecraft. A model that describes this plasma and its propagation is described, together with a computer code based on this model. The structure and calling sequence of the code, named PLASIM, is described. An explanation of the program's input and output is included, together with samples of both. The code is written in ANSI Standard FORTRAN.
Computer code for charge-exchange plasma propagation
NASA Technical Reports Server (NTRS)
Robinson, R. S.; Kaufman, H. R.
1981-01-01
The propagation of the charge-exchange plasma from an electrostatic ion thruster is crucial in determining the interaction of that plasma with the associated spacecraft. A model that describes this plasma and its propagation is described, together with a computer code based on this model. The structure and calling sequence of the code, named PLASIM, is described. An explanation of the program's input and output is included, together with samples of both. The code is written in ANSI Standard FORTRAN.
Code Verification of the HIGRAD Computational Fluid Dynamics Solver
Van Buren, Kendra L.; Canfield, Jesse M.; Hemez, Francois M.; Sauer, Jeremy A.
2012-05-04
The purpose of this report is to outline code and solution verification activities applied to HIGRAD, a Computational Fluid Dynamics (CFD) solver of the compressible Navier-Stokes equations developed at the Los Alamos National Laboratory and used to simulate various phenomena such as the propagation of wildfires and atmospheric hydrodynamics. Code verification efforts, as described in this report, are an important first step to establish the credibility of numerical simulations. They provide evidence that the mathematical formulation is properly implemented without significant mistakes that would adversely impact the application of interest. Highly accurate analytical solutions are derived for four code verification test problems that exercise different aspects of the code. These test problems are referred to as: (i) the quiet start, (ii) the passive advection, (iii) the passive diffusion, and (iv) the piston-like problem. These problems are simulated using HIGRAD with different levels of mesh discretization, and the numerical solutions are compared to their analytical counterparts. In addition, the rates of convergence are estimated to verify the numerical performance of the solver. The first three test problems produce numerical approximations as expected. The fourth test problem (piston-like) indicates the extent to which the code is able to simulate a 'mild' discontinuity, which is a condition that would typically be better handled by a Lagrangian formulation. The current investigation concludes that the numerical implementation of the solver performs as expected. The quality of solutions is sufficient to provide credible simulations of fluid flows around wind turbines. The main caveat associated with these findings is the low coverage provided by these four problems and the somewhat limited scope of the verification activities. A more comprehensive evaluation of HIGRAD may be beneficial for future studies.
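The rate-of-convergence estimation mentioned above is commonly done from errors on successively refined meshes: if the error behaves as C*h^p, two refinements give the observed order p. This is generic verification practice, sketched below with hypothetical error values, not HIGRAD's own tooling.

```python
# Sketch: observed order of accuracy from errors on two mesh refinements.
# With mesh sizes h and h/r, error ~ C*h^p gives p = log(e1/e2) / log(r).

import math

def observed_order(e_coarse, e_fine, refinement_ratio=2.0):
    return math.log(e_coarse / e_fine) / math.log(refinement_ratio)

# Hypothetical errors vs. an exact solution, consistent with a 2nd-order scheme.
errors = {0.1: 4.0e-3, 0.05: 1.0e-3, 0.025: 2.5e-4}
p = observed_order(errors[0.1], errors[0.05])
```

Comparing the observed p against the scheme's formal order on each verification problem is what distinguishes a properly implemented discretization from one with a subtle bug.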
Computer-assisted coding and clinical documentation: first things first.
Tully, Melinda; Carmichael, Angela
2012-10-01
Computer-assisted coding tools have the potential to drive improvements in seven areas: transparency of coding; productivity (generally by 20 to 25 percent for inpatient claims); accuracy (by improving specificity of documentation); cost containment (by reducing overtime expenses, audit fees, and denials); compliance; efficiency; and consistency.
A generalized one-dimensional computer code for turbomachinery cooling passage flow calculations
NASA Technical Reports Server (NTRS)
Kumar, Ganesh N.; Roelke, Richard J.; Meitner, Peter L.
1989-01-01
A generalized one-dimensional computer code for analyzing the flow and heat transfer in turbomachinery cooling passages was developed. This code is capable of handling rotating cooling passages with turbulators, 180-degree turns, pin fins, finned passages, bypass flows, tip cap impingement flows, and flow branching. The code is an extension of a one-dimensional code developed by P. Meitner. In the subject code, correlations for both heat transfer coefficient and pressure loss computations were developed to model each of the above-mentioned types of coolant passages. The code has the capability of independently computing the friction factor and heat transfer coefficient on each side of a rectangular passage. Either the mass flow at the inlet to the channel or the exit plane pressure can be specified. For a specified inlet total temperature, inlet total pressure, and exit static pressure, the code computes the flow rates through the main branch and the subbranches and the flow through the tip cap for impingement cooling, in addition to computing the coolant pressure, temperature, and heat transfer coefficient distribution in each coolant flow branch. Predictions from the subject code for both nonrotating and rotating passages agree well with experimental data. The code was used to analyze the cooling passage of a research cooled radial rotor.
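The kind of one-dimensional marching such a cooling-passage code performs can be sketched as below: each segment adds heat to the coolant via an energy balance and drops its pressure through a friction closure. The geometry, correlations, and numbers are illustrative assumptions, not the code's actual models or correlations.

```python
# Sketch: 1-D marching through a cooling passage.  Each segment heats the
# coolant (energy balance) and drops its pressure (Darcy friction loss).
# All correlations and input values are illustrative.

def march_passage(mdot, t_in, p_in, segments, cp=1005.0, rho=4.0, f=0.03):
    """segments: list of (length_m, diameter_m, heat_in_W) tuples."""
    T, p = t_in, p_in
    for length, d, q in segments:
        T += q / (mdot * cp)                        # energy balance on segment
        area = 3.14159265 * d ** 2 / 4.0
        u = mdot / (rho * area)                     # bulk velocity in segment
        p -= f * (length / d) * rho * u ** 2 / 2.0  # Darcy pressure drop
    return T, p

segs = [(0.05, 0.004, 150.0), (0.05, 0.004, 200.0), (0.05, 0.003, 250.0)]
t_out, p_out = march_passage(mdot=0.01, t_in=400.0, p_in=4.0e5, segments=segs)
```

A full code layers onto this skeleton the rotation terms, turbulator and pin-fin correlations, and branch mass-flow balancing described in the abstract, iterating on the flow split until the specified exit static pressure is matched.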
Code 672 observational science branch computer networks
NASA Technical Reports Server (NTRS)
Hancock, D. W.; Shirk, H. G.
1988-01-01
In general, networking increases productivity due to the speed of transmission, easy access to remote computers, ability to share files, and increased availability of peripherals. Two different networks within the Observational Science Branch are described in detail.
APC: A New Code for Atmospheric Polarization Computations
NASA Technical Reports Server (NTRS)
Korkin, Sergey V.; Lyapustin, Alexei I.; Rozanov, Vladimir V.
2014-01-01
A new polarized radiative transfer code, Atmospheric Polarization Computations (APC), is described. The code is based on separation of the diffuse light field into anisotropic and smooth (regular) parts. The anisotropic part is computed analytically. The smooth regular part is computed numerically using the discrete ordinates method. Vertical stratification of the atmosphere, common types of bidirectional surface reflection, and scattering by spherical particles or spheroids are included. Particular consideration is given to computation of the bidirectional polarization distribution function (BPDF) of the wavy ocean surface.
NASA Technical Reports Server (NTRS)
Hartenstein, Richard G., Jr.
1985-01-01
Computer codes have been developed to analyze antennas on aircraft and in the presence of scatterers. The purpose of this study is to use these codes to develop accurate computer models of various aircraft and antenna systems. The antenna systems analyzed are a P-3B L-band antenna, an A-7E UHF relay pod antenna, and a traffic advisory antenna system installed on a Bell Long Ranger helicopter. Computer results are compared to measured ones, with good agreement. These codes can be used in the design stage of an antenna system to determine the optimum antenna location and save valuable time and costly flight hours.
Enhancements to the STAGS computer code
NASA Technical Reports Server (NTRS)
Rankin, C. C.; Stehlin, P.; Brogan, F. A.
1986-01-01
The power of the STAGS family of programs was greatly enhanced. Members of the family include STAGS-C1 and RRSYS. As a result of improvements implemented, it is now possible to address the full collapse of a structural system, up to and beyond critical points where its resistance to the applied loads vanishes or suddenly changes. This also includes the important class of problems where a multiplicity of solutions exists at a given point (bifurcation), and where until now no solution could be obtained along any alternate (secondary) load path with any standard production finite element code.
High Energy Radiation Transport Codes: Their Development and Application
NASA Astrophysics Data System (ADS)
Gabriel, Tony A.
1996-05-01
The development of high energy radiation transport codes has been very strongly correlated with the development of higher energy accelerators and more powerful computers. During the early 1960's, a Nucleon Transport Code (NTC) was developed to transport neutrons and protons at energies below the pion threshold. During the middle 1960's, this code, renamed NMTC, was expanded to include multiple pion production and could be used for particle energies up to 3.5 GeV. During the late 1960's and early 1970's, with the development of the Fermi National Accelerator Laboratory (FNAL), NMTC was again refined by the inclusion of a particle-nucleus collision scaling model that could generate reliable collision information at the higher energies necessary for the development of radiation shielding at FNAL. This was HETC. During the 1970's, HETC was coupled with the EGS code for electromagnetic particle transport, the MORSE code for low-energy (<20 MeV) neutron transport, and SPECT, a HETC analysis code for obtaining energy deposition, to produce the CALOR code system, a complete high energy radiation transport code package. In this paper CALOR is described in detail and some recent applications are presented. The strengths and weaknesses as well as the applicability of other radiation transport code systems such as FLUKA are briefly discussed.
The PARA: A computer simulation code for plasma driven electromagnetic launchers
NASA Astrophysics Data System (ADS)
Thio, Y. C.
1983-03-01
A computer code for simulation of rail-type accelerators utilizing a plasma armature was developed and is described in detail. Some time-varying properties of the plasma are taken into account in this code, allowing the development of a dynamical model of the behaviour of the plasma in a rail-type electromagnetic launcher. The code is being used successfully to predict and analyze experiments on small-calibre rail gun launchers.
NASA Lewis Stirling engine computer code evaluation
Sullivan, T.J.
1989-01-01
In support of the US Department of Energy's Stirling Engine Highway Vehicle Systems program, the NASA Lewis Stirling engine performance code was evaluated by comparing code predictions without engine-specific calibration factors to GPU-3, P-40, and RE-1000 Stirling engine test data. The error in predicting power output was -11 percent for the P-40 and 12 percent for the RE-1000 at design conditions and 16 percent for the GPU-3 at near-design conditions (2000 rpm engine speed versus 3000 rpm at design). The efficiency and heat input predictions showed better agreement with engine test data than did the power predictions. Concerning all data points, the error in predicting the GPU-3 brake power was significantly larger than for the other engines and was mainly a result of inaccuracy in predicting the pressure phase angle. Analysis of this pressure phase angle prediction error suggested that improvements to the cylinder hysteresis loss model could have a significant effect on overall Stirling engine performance predictions. 13 refs., 26 figs., 3 tabs.
Coupling the FIRAC and CFAST computer codes. Final report
Claybrook, S.W.
1993-10-01
This report summarizes the work performed by Numerical Applications, Inc. on LANL subcontract 3876L0013-9Q. The primary objectives of this work were to generalize the fire compartment interface in the IBM PC version of FIRAC and to couple FIRAC with the CFAST computer code. The resulting FIRAC/CFAST computer code would combine the ventilation system and particulate transport modeling capabilities of FIRAC with the fire room modeling capabilities of CFAST. Additional project objectives included debugging FIRAC to correct errors that had been reported by the FIRAC-PC evaluators and evaluating requirements for modifying the FIRAC preprocessor and postprocessor to work with the combined code.
Conversion of radionuclide transport codes from mainframes to personal computers
Pon, W.D.; Marschke, S.F.
1987-01-01
Converting a mainframe computer code to run on a personal computer (PC) calls for more than just a simple translation -- the converted program and associated data files must be modified to fit the PC's environment. This has been done for three well-known mainframe codes that are used to estimate the impacts of normal operational radiological releases from nuclear power plants: GALE, GASPAR, and LADTAP. The programs were converted to run on an IBM PC and combined into a single integrated package. This article describes the steps in the conversion process and shows how the mainframe codes were modified and enhanced to take advantage of the PC's ease of use.
Advanced LMR safety analysis capabilities in the SASSYS-1 and SAS4A computer codes
Cahalan, J.E.; Tentner, A.M.; Morris, E.E.
1994-03-01
This paper provides an overview of recent modeling developments in the SAS4A and SASSYS-1 computer codes. The paper focuses on both phenomenological model descriptions for new thermal, hydraulic, and mechanical modules, and on new applications.
Terminal Ballistic Application of Hydrodynamic Computer Code Calculations.
1977-04-01
Results are summarized from applying a particular numerical method, the method programmed in the HEMP code, to the simulation of fragmenting shells and Misznay-Schardin devices. Topics include background and summary, HEMP code formulation, accuracy of HEMP code solutions, the computation of an explosively loaded cylinder with the HEMP code, and comparison of the calculated fragment velocity distribution with arena test data.
Computer code for intraply hybrid composite design
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Sinclair, J. H.
1981-01-01
A computer program is described for intraply hybrid composite design (INHYD). The program includes several composite micromechanics theories, intraply hybrid composite theories, and a hygrothermomechanical theory. These theories provide INHYD with considerable flexibility and capability which the user can exercise through several available options. Key features and capabilities of INHYD are illustrated through selected samples.
Codes & standards research, development & demonstration Roadmap
None, None
2008-07-22
This Roadmap is a guide to the Research, Development & Demonstration activities that will provide the data required by standards development organizations (SDOs) to develop performance-based codes and standards for a commercial hydrogen-fueled transportation sector in the U.S.
Computer vision cracks the leaf code.
Wilf, Peter; Zhang, Shengping; Chikkerur, Sharat; Little, Stefan A; Wing, Scott L; Serre, Thomas
2016-03-22
Understanding the extremely variable, complex shape and venation characters of angiosperm leaves is one of the most challenging problems in botany. Machine learning offers opportunities to analyze large numbers of specimens, to discover novel leaf features of angiosperm clades that may have phylogenetic significance, and to use those characters to classify unknowns. Previous computer vision approaches have primarily focused on leaf identification at the species level. It remains an open question whether learning and classification are possible among major evolutionary groups such as families and orders, which usually contain hundreds to thousands of species each and exhibit many times the foliar variation of individual species. Here, we tested whether a computer vision algorithm could use a database of 7,597 leaf images from 2,001 genera to learn features of botanical families and orders, then classify novel images. The images are of cleared leaves, specimens that are chemically bleached, then stained to reveal venation. Machine learning was used to learn a codebook of visual elements representing leaf shape and venation patterns. The resulting automated system learned to classify images into families and orders with a success rate many times greater than chance. Of direct botanical interest, the responses of diagnostic features can be visualized on leaf images as heat maps, which are likely to prompt recognition and evolutionary interpretation of a wealth of novel morphological characters. With assistance from computer vision, leaves are poised to make numerous new contributions to systematic and paleobotanical studies.
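The codebook ("bag of visual words") step in the abstract can be sketched in miniature: quantize local descriptors to their nearest codeword, histogram the counts per image, and classify by nearest class-mean histogram. Real descriptors are high-dimensional and the codebooks large; the scalar descriptors and two-word codebook below are purely illustrative, not the authors' pipeline.

```python
# Miniature bag-of-visual-words sketch: quantize descriptors to a codebook,
# build a normalized histogram per image, classify by nearest class-mean
# histogram.  Scalar "descriptors" keep the illustration tiny.

def quantize(descriptor, codebook):
    """Index of the nearest codeword to this descriptor."""
    return min(range(len(codebook)), key=lambda k: abs(descriptor - codebook[k]))

def histogram(descriptors, codebook):
    h = [0.0] * len(codebook)
    for d in descriptors:
        h[quantize(d, codebook)] += 1.0
    total = sum(h)
    return [c / total for c in h]            # normalized word frequencies

def classify(descriptors, codebook, class_means):
    h = histogram(descriptors, codebook)
    dist = {label: sum((a - b) ** 2 for a, b in zip(h, mean))
            for label, mean in class_means.items()}
    return min(dist, key=dist.get)

codebook = [0.0, 1.0]                        # two visual "words"
class_means = {                              # mean histograms per training class
    "family_A": histogram([0.1, 0.0, 0.2, 0.9], codebook),
    "family_B": histogram([0.8, 1.0, 0.9, 0.1], codebook),
}
label = classify([0.05, 0.15, 0.2, 1.0], codebook, class_means)
```

The heat-map visualization described in the abstract corresponds to mapping each image region's contribution to the winning class back onto the leaf, which is what makes the learned characters interpretable to botanists.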
The DEPOSIT computer code based on the low rank approximations
NASA Astrophysics Data System (ADS)
Litsarev, Mikhail S.; Oseledets, Ivan V.
2014-10-01
We present a new version of the DEPOSIT computer code based on low-rank approximations. The approach rests on two-dimensional cross decomposition of matrices and separated representations of analytical functions. The cross algorithm is available in the distributed package and can be used independently. All integration routines related to the computation of the deposited energy T(b) are implemented in a new way (a low-rank separated representation format on homogeneous meshes). Using this approach, a bug in the integration routines of the previous version of the code was found and fixed in the current version. The total computation time has been reduced significantly, to several minutes.
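The two-dimensional cross (skeleton) decomposition underlying DEPOSIT can be sketched for a generic matrix as follows (a minimal full-pivoting illustration of the idea, not the distributed package's actual routine):

```python
import numpy as np

def cross_approximation(A, rank):
    """Greedy cross (skeleton) approximation: choose pivot rows/columns
    so that A is approximated by C @ inv(U) @ R using only a few
    rows and columns of A."""
    residual = A.astype(float).copy()
    rows, cols = [], []
    for _ in range(rank):
        i, j = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
        if residual[i, j] == 0.0:
            break  # exact rank reached early
        rows.append(i)
        cols.append(j)
        # rank-1 update removing the contribution of the chosen cross
        residual = residual - np.outer(residual[:, j], residual[i, :]) / residual[i, j]
    C = A[:, cols]                 # selected columns
    U = A[np.ix_(rows, cols)]      # intersection submatrix
    R = A[rows, :]                 # selected rows
    return C @ np.linalg.solve(U, R)

# A separable (rank-2) function sampled on a grid is recovered exactly
x = np.linspace(0.0, 1.0, 50)
A = np.outer(np.sin(x), np.cos(x)) + np.outer(x, x**2)
A2 = cross_approximation(A, 2)
print(np.allclose(A, A2))
```

For a matrix of exact rank r, r cross steps reproduce it exactly; for the smooth separable functions arising in the T(b) integrands, a small rank already gives high accuracy, which is the source of the reported speedup.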
Calculations of reactor-accident consequences, Version 2. CRAC2: computer code user's guide
Ritchie, L.T.; Johnson, J.D.; Blond, R.M.
1983-02-01
The CRAC2 computer code is a revision of the Calculation of Reactor Accident Consequences computer code, CRAC, developed for the Reactor Safety Study. The CRAC2 computer code incorporates significant modeling improvements in the areas of weather sequence sampling and emergency response, and refinements to the plume rise, atmospheric dispersion, and wet deposition models. New output capabilities have also been added. This guide is to facilitate the informed and intelligent use of CRAC2. It includes descriptions of the input data, the output results, the file structures, control information, and five sample problems.
Interface design of VSOP'94 computer code for safety analysis
NASA Astrophysics Data System (ADS)
Natsir, Khairina; Yazid, Putranto Ilham; Andiwijayakusuma, D.; Wahanani, Nursinta Adi
2014-09-01
Today, most software applications, including those in the nuclear field, come with a graphical user interface. VSOP'94 (Very Superior Old Program) was designed to simplify the process of performing reactor simulation. VSOP is an integrated code system that simulates the life history of a nuclear reactor and is devoted to education and research. One advantage of the VSOP program is its ability to calculate neutron spectrum estimation, fuel cycle, 2-D diffusion, resonance integrals, estimation of reactor fuel costs, and integrated thermal hydraulics. VSOP can also be used for comparative studies and simulation of reactor safety. However, the existing VSOP is a conventional program, developed in Fortran 65, with several usability problems: it runs only on DEC Alpha mainframe platforms, provides text-based output, and is difficult to use, especially in data preparation and interpretation of results. We developed GUI-VSOP, an interface program that facilitates data preparation, runs the VSOP code, and presents the results in a more user-friendly way on a personal computer (PC). Modifications include the development of interfaces for preprocessing, processing, and postprocessing. The GUI-based preprocessing interface provides a convenient way to prepare data. The processing interface provides convenience in configuring input files and libraries and in compiling the VSOP code. The postprocessing interface is designed to visualize VSOP output in table and graphic forms. GUI-VSOP is expected to simplify and speed up the processing and analysis of safety aspects.
SCDAP/RELAP5 code development and assessment
Allison, C.M.; Hohorst, J.K.
1996-03-01
The SCDAP/RELAP5 computer code is designed to describe the overall reactor coolant system thermal-hydraulic response, core damage progression, and fission product release during severe accidents. The code is being developed at the Idaho National Engineering Laboratory under the primary sponsorship of the Office of Nuclear Regulatory Research of the U.S. Nuclear Regulatory Commission. The current version of the code is SCDAP/RELAP5/MOD3.1e. Although MOD3.1e contains a number of significant improvements since the initial version of MOD3.1 was released, new models to treat the behavior of the fuel and cladding during reflood have had the most dramatic impact on the code's calculations. This paper provides a brief description of the new reflood models, presents highlights of the assessment of the current version of MOD3.1, and discusses future SCDAP/RELAP5/MOD3.2 model development activities.
Space radiator simulation manual for computer code
NASA Technical Reports Server (NTRS)
Black, W. Z.; Wulff, W.
1972-01-01
A computer program that simulates the performance of a space radiator is presented. The program basically consists of a rigorous analysis, which analyzes a symmetrical fin panel, and an approximate analysis, which predicts system characteristics for cases of non-symmetrical operation. The rigorous analysis accounts for both transient and steady-state performance, including aerodynamic and radiant heating of the radiator system. The approximate analysis considers only steady-state operation with no aerodynamic heating. A description of the radiator system and instructions to the user for program operation are included. The input required for the execution of all program options is described, and several examples of program output are presented, including the radiator performance during ascent, reentry, and orbit.
NASA Technical Reports Server (NTRS)
Kumar, A.; Graves, R. A., Jr.; Weilmuenster, K. J.
1980-01-01
A vectorized code, EQUIL, was developed for calculating the equilibrium chemistry of a reacting gas mixture on the Control Data STAR-100 computer. The code provides species mole fractions, mass fractions, and thermodynamic and transport properties of the mixture for a given temperature, pressure, and elemental mass fractions. The code is set up for a system consisting of electrons and the elements H, He, C, O, and N; in all, 24 chemical species are included.
A Model Code of Ethics for the Use of Computers in Education.
ERIC Educational Resources Information Center
Shere, Daniel T.; Cannings, Terence R.
Two Delphi studies were conducted by the Ethics and Equity Committee of the International Council for Computers in Education (ICCE) to obtain the opinions of experts on areas that should be covered by ethical guides for the use of computers in education and for software development, and to develop a model code of ethics for each of these areas.…
Recent applications of the transonic wing analysis computer code, TWING
NASA Technical Reports Server (NTRS)
Subramanian, N. R.; Holst, T. L.; Thomas, S. D.
1982-01-01
An evaluation of the transonic-wing-analysis computer code TWING is given. TWING utilizes a fully implicit approximate factorization iteration scheme to solve the full potential equation in conservative form. A numerical elliptic-solver grid-generation scheme is used to generate the required finite-difference mesh. Several wing configurations were analyzed, and the limits of applicability of the code were evaluated. Comparisons of computed results were made with available experimental data. Results indicate that the code is robust, accurate (when significant viscous effects are not present), and efficient. TWING generally produces solutions an order of magnitude faster than other conservative full potential codes using successive-line overrelaxation. The present method is applicable to a wide range of isolated wing configurations, including high-aspect-ratio transport wings and low-aspect-ratio, high-sweep fighter configurations.
A computer code for beam dynamics simulations in SFRFQ structure
NASA Astrophysics Data System (ADS)
Wang, Z.; Chen, J. E.; Lu, Y. R.; Yan, X. Q.; Zhu, K.; Fang, J. X.; Guo, Z. Y.
2007-03-01
A computer code (SFRFQCODEv1.0) has been developed to analyze the beam dynamics of the Separated Function Radio Frequency Quadrupole (SFRFQ) structure. Calculations show that transverse and longitudinal stability can be ensured by selecting proper dynamic and structure parameters. This paper describes the beam dynamics mechanism of the SFRFQ and presents a design example of an SFRFQ cavity, which will be used as a post-accelerator for a 26 MHz, 1 MeV O+ Integrated Split Ring (ISR) RFQ and will accelerate O+ ions from 1 to 1.5 MeV. Three electrostatic quadrupoles are adopted to realize transverse beam matching from the ISR RFQ to the SFRFQ cavity. This setting is also useful for beam size adjustment and its applications.
Benchmark Solutions for Computational Aeroacoustics (CAA) Code Validation
NASA Technical Reports Server (NTRS)
Scott, James R.
2004-01-01
NASA has conducted a series of Computational Aeroacoustics (CAA) Workshops on Benchmark Problems to develop a set of realistic CAA problems that can be used for code validation. In the Third (1999) and Fourth (2003) Workshops, the single airfoil gust response problem, with real geometry effects, was included as one of the benchmark problems. Respondents were asked to calculate the airfoil RMS pressure and far-field acoustic intensity for different airfoil geometries and a wide range of gust frequencies. This paper presents the validated solutions that have been obtained for the benchmark problem and compares them with classical flat plate results. It is seen that airfoil geometry has a strong effect on the airfoil unsteady pressure, and a significant effect on the far-field acoustic intensity. Those parts of the benchmark problem that have not yet been adequately solved are identified and presented as a challenge to the CAA research community.
MARS code developments, benchmarking and applications
Mokhov, N.V.
2000-03-24
Recent developments of the MARS Monte Carlo code system for simulation of hadronic and electromagnetic cascades in shielding, accelerator, and detector components in the energy range from a fraction of an electron volt up to 100 TeV are described. The physical model of hadron and lepton interactions with nuclei and atoms has undergone substantial improvements. These include a new nuclear cross section library, a model for soft pion production, a cascade-exciton model, a dual parton model, deuteron-nucleus and neutrino-nucleus interaction models, a detailed description of negative hadron and muon absorption, and a unified treatment of muon and charged hadron electromagnetic interactions with matter. New algorithms have been implemented into the code and benchmarked against experimental data. A new graphical user interface has been developed. The code's capabilities to simulate cascades and generate a variety of results in complex systems have been enhanced. The MARS system includes links to the MCNP code for neutron and photon transport below 20 MeV, to the ANSYS code for thermal and stress analyses, and to the STRUCT code for multi-turn particle tracking in large synchrotrons and collider rings. Results of recent benchmarking of the MARS code are presented. Examples of nontrivial code applications are given for the Fermilab Booster and Main Injector, for a 1.5 MW target station, and for a muon storage ring.
CATHARE code development and assessment methodologies
Micaelli, J.C.; Barre, F.; Bestion, D.
1995-12-31
The CATHARE thermal-hydraulic code has been developed jointly by the Commissariat a l'Energie Atomique (CEA), Electricite de France (EdF), and Framatome for safety analysis. Since the beginning of the project (September 1979), development and assessment activities have followed a methodology supported by two series of experimental tests: separate effects tests and integral effects tests. The purpose of this paper is to describe this methodology, the code assessment status, and the evolution to take into account two new components of this program: the modeling of three-dimensional phenomena and the requirements of code uncertainty evaluation.
Joshua J. Cogliati; Abderrafi M. Ougouag
2006-10-01
A comprehensive, high fidelity model for pebble flow has been developed and embodied in the PEBBLES computer code. In this paper, a description of the physical artifacts included in the model is presented and some results from using the computer code for predicting the features of pebble flow and packing in a realistic pebble bed reactor design are shown. The sensitivity of models to various physical parameters is also discussed.
CORMLT code for the analysis of degraded core accidents. Computer code manual. [PWR
Denny, V.E.
1984-12-01
A computer code (CORMLT) has been developed to predict the effects of buoyancy-driven convection on the progression of core-degrading accidents in PWR vessels. Thermal/hydraulics modeling includes the downcomer/bottom-head regions, as well as the upper vessel and adjacent hot-leg portions of the primary coolant system, for which gas communication is limited to the intervening discharge nozzles (so-called dead-end volumes). CORMLT requires flow rates and temperatures of any water feed (to the downcomer) versus time. CORMLT provides the composition, enthalpy, temperature, and flow rate of steam/hydrogen mixtures within the vessel above the (receding) water surface, as well as estimates of these quantities for interaction between the plenum and the rest of the PCS. CORMLT also provides graphical representations of the morphological behavior of the progression of core meltdown accidents.
Bandy, P.J.; Hall, L.F.
1993-03-01
This report presents information on computer codes for numerical and analytical models that have been used at the Idaho National Engineering Laboratory (INEL) to model ground water and surface water flow and contaminant transport. Organizations conducting modeling at the INEL include EG&G Idaho, Inc., the US Geological Survey, and Westinghouse Idaho Nuclear Company. The information given for each computer code includes: the agency responsible for the modeling effort, the name of the computer code, the proprietor of the code (copyright holder or original author), validation and verification studies, applications of the model at INEL, the prime user of the model, a computer code description, computing environment requirements, and documentation and references for the computer code.
Application of the RESRAD computer code to VAMP scenario S
Gnanapragasam, E.K.; Yu, C.
1997-03-01
The RESRAD computer code developed at Argonne National Laboratory was among 11 models from 11 countries participating in the international Scenario S validation of radiological assessment models with Chernobyl fallout data from southern Finland. The validation test was conducted by the Multiple Pathways Assessment Working Group of the Validation of Environmental Model Predictions (VAMP) program coordinated by the International Atomic Energy Agency. RESRAD was enhanced to provide an output of contaminant concentrations in environmental media and in food products to compare with measured data from southern Finland. Probability distributions for inputs that were judged to be most uncertain were obtained from the literature and from information provided in the scenario description prepared by the Finnish Centre for Radiation and Nuclear Safety. The deterministic version of RESRAD was run repeatedly to generate probability distributions for the required predictions. These predictions were used later to verify the probabilistic RESRAD code. The RESRAD predictions of radionuclide concentrations are compared with measured concentrations in selected food products. The radiological doses predicted by RESRAD are also compared with those estimated by the Finnish Centre for Radiation and Nuclear Safety.
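The procedure of running the deterministic code repeatedly over sampled inputs to generate output distributions can be sketched generically (the toy model, distributions, and names below are hypothetical, not RESRAD's actual pathway equations):

```python
import numpy as np

rng = np.random.default_rng(42)

def deterministic_model(k_d, infiltration):
    """Stand-in for a deterministic dose code: a toy dose expression.
    (Hypothetical formula, not RESRAD's actual pathway model.)"""
    return 100.0 * infiltration / (1.0 + k_d)

# Sample the uncertain inputs from assumed distributions...
k_d = rng.lognormal(mean=1.0, sigma=0.5, size=5_000)
infiltration = rng.uniform(0.1, 0.5, size=5_000)

# ...and rerun the deterministic model once per sample
doses = deterministic_model(k_d, infiltration)

# The repeated runs yield an output distribution, summarized by percentiles
print(np.percentile(doses, 50), np.percentile(doses, 95))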
Fire aerosol experiment and comparisons with computer code predictions
NASA Astrophysics Data System (ADS)
Gregory, W. S.; Nichols, B. D.; White, B. W.; Smith, P. R.; Leslie, I. H.; Corkran, J. R.
1988-08-01
Los Alamos National Laboratory, in cooperation with New Mexico State University, has carried out a series of tests to provide experimental data on fire-generated aerosol transport. These data will be used to verify the aerosol transport capabilities of the FIRAC computer code. FIRAC was developed by Los Alamos for the U.S. Nuclear Regulatory Commission. It is intended to be used by safety analysts to evaluate the effects of hypothetical fires on nuclear plants. One of the most significant aspects of this analysis deals with smoke and radioactive material movement throughout the plant. The tests have been carried out using an industrial furnace that can generate gas temperatures up to 300 C. To date, we have used quartz aerosol with a median diameter of about 10 microns as the fire aerosol simulant. We also plan to use fire-generated aerosols of polystyrene and polymethyl methacrylate (PMMA). The test variables include two nominal gas flow rates (150 and 300 cu ft/min) and three nominal gas temperatures (ambient, 150 C, and 300 C). The test results are presented in the form of plots of aerosol deposition vs. length of duct. In addition, the mass of aerosol caught in a high-efficiency particulate air (HEPA) filter during the tests is reported. The tests are simulated with the FIRAC code, and the results are compared with the experimental data.
HYDRA, A finite element computational fluid dynamics code: User manual
Christon, M.A.
1995-06-01
HYDRA is a finite element code developed specifically to attack the class of transient, incompressible, viscous computational fluid dynamics problems that are predominant in the world around us. The goal for HYDRA has been to achieve high performance across a spectrum of supercomputer architectures without sacrificing any of the aspects of the finite element method that make it so flexible and permit application to a broad class of problems. As supercomputer algorithms evolve, the continuing development of HYDRA will strive to achieve optimal mappings of the most advanced flow solution algorithms onto supercomputer architectures. HYDRA has drawn upon the many years of finite element expertise embodied in DYNA3D and NIKE3D. Certain key architectural ideas from both DYNA3D and NIKE3D have been adopted and further improved to fit the advanced dynamic memory management and data structures implemented in HYDRA. The philosophy for HYDRA is to focus on mapping flow algorithms to computer architectures to achieve a high level of performance, rather than simply performing a port.
Foundational development of an advanced nuclear reactor integrated safety code.
Clarno, Kevin; Lorber, Alfred Abraham; Pryor, Richard J.; Spotz, William F.; Schmidt, Rodney Cannon; Belcourt, Kenneth; Hooper, Russell Warren; Humphries, Larry LaRon
2010-02-01
This report describes the activities and results of a Sandia LDRD project whose objective was to develop and demonstrate foundational aspects of a next-generation nuclear reactor safety code that leverages advanced computational technology. The project scope was directed towards the systems-level modeling and simulation of an advanced, sodium cooled fast reactor, but the approach developed has a more general applicability. The major accomplishments of the LDRD are centered around the following two activities. (1) The development and testing of LIME, a Lightweight Integrating Multi-physics Environment for coupling codes that is designed to enable both 'legacy' and 'new' physics codes to be combined and strongly coupled using advanced nonlinear solution methods. (2) The development and initial demonstration of BRISC, a prototype next-generation nuclear reactor integrated safety code. BRISC leverages LIME to tightly couple the physics models in several different codes (written in a variety of languages) into one integrated package for simulating accident scenarios in a liquid sodium cooled 'burner' nuclear reactor. Other activities and accomplishments of the LDRD include (a) further development, application and demonstration of the 'non-linear elimination' strategy to enable physics codes that do not provide residuals to be incorporated into LIME, (b) significant extensions of the RIO CFD code capabilities, (c) complex 3D solid modeling and meshing of major fast reactor components and regions, and (d) an approach for multi-physics coupling across non-conformal mesh interfaces.
Parallelization of Finite Element Analysis Codes Using Heterogeneous Distributed Computing
NASA Technical Reports Server (NTRS)
Ozguner, Fusun
1996-01-01
Performance gains in computer design are quickly consumed as users seek to analyze larger problems to a higher degree of accuracy. Innovative computational methods, such as parallel and distributed computing, seek to multiply the power of existing hardware technology to satisfy the computational demands of large applications. In the early stages of this project, experiments were performed using two large, coarse-grained applications, CSTEM and METCAN. These applications were parallelized on an Intel iPSC/860 hypercube. It was found that the overall speedup was very low because of large, inherently sequential code segments present in the applications. The overall parallel execution time, T_par, of the application depends on these sequential segments: if they make up a significant fraction of the overall code, the application will have a poor speedup measure.
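The speedup limit imposed by sequential code segments is Amdahl's law; a quick numerical illustration (generic numbers, not the measured CSTEM/METCAN figures):

```python
def amdahl_speedup(serial_fraction, processors):
    """Amdahl's law: speedup = 1 / (s + (1 - s) / p), where s is the
    fraction of the code that must run sequentially."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

# A 20% sequential portion caps a 32-node hypercube's speedup well below 32x
print(round(amdahl_speedup(0.20, 32), 2))      # → 4.44
# Even with unbounded processors, the limit is 1/s
print(round(amdahl_speedup(0.20, 10**9), 2))   # → 5.0
```

This is why the project's later phases focused on restructuring the sequential segments rather than simply adding processors.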
Assessment of the computer code COBRA/CFTL
Baxi, C. B.; Burhop, C. J.
1981-07-01
The COBRA/CFTL code has been developed by Oak Ridge National Laboratory (ORNL) for thermal-hydraulic analysis of simulated gas-cooled fast breeder reactor (GCFR) core assemblies to be tested in the core flow test loop (CFTL). The COBRA/CFTL code was obtained by modifying the General Atomic code COBRA*GCFR. This report discusses these modifications, compares the two code results for three cases which represent conditions from fully rough turbulent flow to laminar flow. Case 1 represented fully rough turbulent flow in the bundle. Cases 2 and 3 represented laminar and transition flow regimes. The required input for the COBRA/CFTL code, a sample problem input/output and the code listing are included in the Appendices.
Guidelines for developing vectorizable computer programs
NASA Technical Reports Server (NTRS)
Miner, E. W.
1982-01-01
Some fundamental principles for developing computer programs which are compatible with array-oriented computers are presented. The emphasis is on basic techniques for structuring computer codes which are applicable in FORTRAN and do not require a special programming language or exact a significant penalty on a scalar computer. Researchers who are using numerical techniques to solve problems in engineering can apply these basic principles and thus develop transportable computer programs (in FORTRAN) which contain much vectorizable code. The vector architecture of the ASC is discussed so that the requirements of array processing can be better appreciated. The "vectorization" of a finite-difference viscous shock-layer code is used as an example to illustrate the benefits and some of the difficulties involved. Increases in computing speed with vectorization are illustrated with results from the viscous shock-layer code and from a finite-element shock tube code. The applicability of these principles was substantiated through running programs on other computers with array-associated computing characteristics, such as the Hewlett-Packard (H-P) 1000-F.
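The loop-to-array restructuring these guidelines describe can be shown by analogy in NumPy (the document's examples are FORTRAN on the ASC; this Python sketch with a hypothetical 1-D diffusion update merely illustrates the same principle):

```python
import numpy as np

# Scalar-style loop: each element depends only on its own neighborhood
# in the OLD array, so the loop body has no carried dependence and is
# vectorizable
def scalar_update(u, dt, nu):
    out = u.copy()
    for i in range(1, len(u) - 1):
        out[i] = u[i] + dt * nu * (u[i + 1] - 2 * u[i] + u[i - 1])
    return out

# Array-form equivalent: the whole inner loop becomes one array expression,
# which an array-oriented machine (or NumPy) executes without a scalar loop
def vector_update(u, dt, nu):
    out = u.copy()
    out[1:-1] = u[1:-1] + dt * nu * (u[2:] - 2 * u[1:-1] + u[:-2])
    return out

u = np.sin(np.linspace(0.0, np.pi, 10_000))
a = scalar_update(u, 1e-4, 1.0)
b = vector_update(u, 1e-4, 1.0)
print(np.allclose(a, b))   # same result, loop-free form
```

The key property, as in the guidelines, is that the update reads only from the old array and writes only to the new one, so no iteration depends on a result computed earlier in the same loop.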
Proceedings of the conference on computer codes and the linear accelerator community
Cooper, R.K.
1990-07-01
The conference whose proceedings you are reading was envisioned as the second in a series, the first having been held in San Diego in January 1988. The intended participants were those people who are actively involved in writing and applying computer codes for the solution of problems related to the design and construction of linear accelerators. The first conference reviewed many of the codes both extant and under development. This second conference provided an opportunity to update the status of those codes, and to provide a forum in which emerging new 3D codes could be described and discussed. The afternoon poster session on the second day of the conference provided an opportunity for extended discussion. All in all, this conference was felt to be quite a useful interchange of ideas and developments in the field of 3D calculations, parallel computation, higher-order optics calculations, and code documentation and maintenance for the linear accelerator community. A third conference is planned.
Software requirements specification document for the AREST code development
Engel, D.W.; McGrail, B.P.; Whitney, P.D.; Gray, W.J.; Williford, R.E.; White, M.D.; Eslinger, P.W.; Altenhofen, M.K.
1993-11-01
The Analysis of the Repository Source Term (AREST) computer code was selected in 1992 by the U.S. Department of Energy. The AREST code will be used to analyze the performance of an underground high level nuclear waste repository. The AREST code is being modified by the Pacific Northwest Laboratory (PNL) in order to evaluate the engineered barrier and waste package designs, model regulatory compliance, analyze sensitivities, and support total systems performance assessment modeling. The current version of the AREST code was developed to be a very useful tool for analyzing model uncertainties and sensitivities to input parameters. The code has also been used successfully in supplying source-terms that were used in a total systems performance assessment. The current version, however, has been found to be inadequate for the comparison and selection of a design for the waste package. This is due to the assumptions and simplifications made in the selection of the process and system models. Thus, the new version of the AREST code will be designed to focus on the details of the individual processes and implementation of more realistic models. This document describes the requirements of the new models that will be implemented. Included in this document is a section describing the near-field environmental conditions for this waste package modeling, description of the new process models that will be implemented, and a description of the computer requirements for the new version of the AREST code.
SCDAP/RELAP5/MOD3 code development
Allison, C.M.; Siefken, J.L.; Coryell, E.W.
1992-01-01
The SCDAP/RELAP5/MOD3 computer code is designed to describe the overall reactor coolant system (RCS) thermal-hydraulic response, core damage progression, and fission product release and transport during severe accidents. The code is being developed at the Idaho National Engineering Laboratory (INEL) under the primary sponsorship of the Office of Nuclear Regulatory Research of the US Nuclear Regulatory Commission (NRC). Code development activities are currently focused on three main areas - (a) code usability, (b) early phase melt progression model improvements, and (c) advanced reactor thermal-hydraulic model extensions. This paper describes the first two activities. A companion paper describes the advanced reactor model improvements being performed under RELAP5/MOD3 funding.
Plagiarism Detection Algorithm for Source Code in Computer Science Education
ERIC Educational Resources Information Center
Liu, Xin; Xu, Chan; Ouyang, Boyu
2015-01-01
Nowadays, computer programming is getting more necessary in the course of program design in college education. However, the trick of plagiarizing plus a little modification exists among some students' home works. It's not easy for teachers to judge if there's plagiarizing in source code or not. Traditional detection algorithms cannot fit this…
User's manual for the ORIGEN2 computer code
Croff, A.G.
1980-07-01
This report describes how to use a revised version of the ORIGEN computer code, designated ORIGEN2. Included are a description of the input data, input deck organization, and sample input and output. ORIGEN2 can be obtained from the Radiation Shielding Information Center at ORNL.
Validation of Numerical Codes to Compute Tsunami Runup And Inundation
NASA Astrophysics Data System (ADS)
Velioğlu, Deniz; Cevdet Yalçıner, Ahmet; Kian, Rozita; Zaytsev, Andrey
2015-04-01
FLOW 3D and NAMI DANCE are two numerical codes which can be applied to analysis of flow and motion of long waves. Flow 3D simulates linear and nonlinear propagating surface waves as well as irregular waves including long waves. NAMI DANCE uses finite difference computational method to solve nonlinear shallow water equations (NSWE) in long wave problems, specifically tsunamis. Both codes can be applied to tsunami simulations and visualization of long waves. Both codes are capable of solving flooding problems. However, FLOW 3D is designed mainly to solve flooding problem from land and NAMI DANCE is designed to solve flooding problem from the sea. These numerical codes are applied to some benchmark problems for validation and verification. One useful benchmark problem is the runup of solitary waves which is investigated analytically and experimentally by Synolakis (1987). Since 1970s, solitary waves have commonly been used to model tsunamis especially in experimental and numerical studies. In this respect, a benchmark problem on runup of solitary waves is a relevant choice to assess the capability and validity of the numerical codes on amplification of tsunamis. In this study both codes have been tested, compared and validated by applying to the analytical benchmark problem of solitary wave runup on a sloping beach. Comparison of the results showed that both codes are in good agreement with the analytical and experimental results and thus can be proposed to be used in inundation of long waves and tsunami hazard analysis.
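The solitary-wave benchmark mentioned above has a closed-form target: the Synolakis (1987) runup law for non-breaking solitary waves on a plane beach. A minimal sketch follows; the wave height and slope values are illustrative, not output from FLOW 3D or NAMI DANCE.

```python
# Sketch of the Synolakis (1987) runup law for non-breaking solitary waves:
# R/d = 2.831 * sqrt(cot(beta)) * (H/d)**(5/4),
# where H/d is wave height over depth and beta the beach slope angle.
import math

def synolakis_runup(H_over_d, beta_deg):
    """Maximum runup R/d on a plane beach of slope angle beta (degrees)."""
    cot_beta = 1.0 / math.tan(math.radians(beta_deg))
    return 2.831 * math.sqrt(cot_beta) * H_over_d ** 1.25

# Canonical laboratory case: 1:19.85 beach, H/d = 0.0185.
r = synolakis_runup(0.0185, math.degrees(math.atan(1.0 / 19.85)))
```

Analytical relations like this are what make the solitary-wave runup problem a convenient yardstick for validating numerical codes.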
Health Code Number (HCN) Development Procedure
Petrocchi, Rocky; Craig, Douglas K.; Bond, Jayne-Anne; Trott, Donna M.; Yu, Xiao-Ying
2013-09-01
This report provides a detailed description of health code numbers (HCNs) and the procedure by which each HCN is assigned, including many guidelines and rationales. HCNs are used in the chemical mixture methodology (CMM), a method recommended by the Department of Energy (DOE) for assessing health effects resulting from exposures to airborne aerosols in an emergency. The procedure is a useful tool for proficient HCN code developers. Intensive training and quality assurance with qualified HCN developers are required before an individual can apply the procedure to develop HCNs for DOE.
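The chemical mixture methodology that HCNs feed into is, at its core, a sum-of-ratios screening. The following is a hedged sketch of that idea only; the chemical names, concentrations, and limits are invented placeholders, not real PAC/TEEL values or the report's procedure.

```python
# Sketch of a sum-of-ratios hazard index for a chemical mixture:
# HI = sum(C_i / L_i); HI >= 1 flags a potential health concern.
# All numbers below are illustrative placeholders.
def hazard_index(concentrations, limits):
    """Sum of concentration-to-limit ratios over all chemicals present."""
    return sum(concentrations[c] / limits[c] for c in concentrations)

conc = {"chem_A": 2.0, "chem_B": 0.5}    # mg/m^3, hypothetical
lims = {"chem_A": 10.0, "chem_B": 2.0}   # mg/m^3, hypothetical limits
hi = hazard_index(conc, lims)
```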
A new computational decoding complexity measure of convolutional codes
NASA Astrophysics Data System (ADS)
Benchimol, Isaac B.; Pimentel, Cecilio; Souza, Richard Demo; Uchôa-Filho, Bartolomeu F.
2014-12-01
This paper presents a computational complexity measure of convolutional codes well suitable for software implementations of the Viterbi algorithm (VA) operating with hard decision. We investigate the number of arithmetic operations performed by the decoding process over the conventional and minimal trellis modules. A relation between the complexity measure defined in this work and the one defined by McEliece and Lin is investigated. We also conduct a refined computer search for good convolutional codes (in terms of distance spectrum) with respect to two minimal trellis complexity measures. Finally, the computational cost of implementation of each arithmetic operation is determined in terms of machine cycles taken by its execution using a typical digital signal processor widely used for low-power telecommunications applications.
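A hard-decision Viterbi decoder of the kind analyzed above can be sketched compactly. The following is an illustrative implementation for the rate-1/2, constraint-length-3 convolutional code with octal generators (7, 5), using a simplified operation count (one add and one compare per trellis branch); it is not the complexity measure defined in the paper.

```python
# Minimal hard-decision Viterbi decoder with a crude arithmetic-operation
# counter. Rate-1/2, constraint length K=3, generators (7, 5) octal.
G = (0b111, 0b101)   # generator polynomials
K = 3                # constraint length; 2**(K-1) = 4 trellis states

def parity(x):
    return bin(x).count("1") & 1

def encode(bits):
    state, out = 0, []
    for b in bits:
        full = (b << (K - 1)) | state           # new bit plus K-1 old bits
        out += [parity(full & g) for g in G]
        state = full >> 1
    return out

def viterbi(received):
    ns = 1 << (K - 1)
    INF = float("inf")
    metric = [0.0] + [INF] * (ns - 1)           # start in the all-zero state
    paths = [[] for _ in range(ns)]
    ops = 0
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metric, new_paths = [INF] * ns, [None] * ns
        for s in range(ns):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                full = (b << (K - 1)) | s
                out = [parity(full & g) for g in G]
                dist = (out[0] != r[0]) + (out[1] != r[1])
                m = metric[s] + dist            # add (branch metric)
                ops += 2                        # one add, one compare per branch
                nxt = full >> 1
                if m < new_metric[nxt]:         # compare-select
                    new_metric[nxt] = m
                    new_paths[nxt] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    best = min(range(ns), key=lambda s: metric[s])
    return paths[best], ops
```

Counting operations this way over the conventional trellis module is the kind of bookkeeping a software complexity measure refines.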
Covariance Generation Using CONRAD and SAMMY Computer Codes
Leal, Luiz C; Derrien, Herve; De Saint Jean, C; Noguere, G; Ruggieri, J M
2009-01-01
Covariances in the resolved resonance region can be generated using the computer codes CONRAD and SAMMY. These codes use formalisms derived from the R-matrix methodology together with the generalized least squares technique to obtain resonance parameters; the resonance parameter covariance is obtained as well. Results of covariance calculations are compared for a simple case, the s-wave resonance parameters of 48Ti in the energy region 10^-5 eV to 300 keV. The retroactive approach included in CONRAD and SAMMY was used.
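The generalized-least-squares step that yields a parameter covariance can be illustrated generically: for a sensitivity matrix G and data covariance V, the fitted-parameter covariance is (G^T V^-1 G)^-1. The matrices below are toy numbers, not SAMMY or CONRAD inputs.

```python
# Generic GLS parameter-covariance sketch (toy data, not resonance data).
import numpy as np

G = np.array([[1.0, 0.5],
              [1.0, 1.0],
              [1.0, 1.5]])            # sensitivities: 3 data points, 2 parameters
V = np.diag([0.04, 0.01, 0.04])      # data covariance (uncorrelated here)

Vinv = np.linalg.inv(V)
cov_p = np.linalg.inv(G.T @ Vinv @ G)   # parameter covariance matrix
```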
Development of a CFD code for casting simulation
NASA Technical Reports Server (NTRS)
Murph, Jesse E.
1993-01-01
Because of high rejection rates for large structural castings (e.g., the Space Shuttle Main Engine Alternate Turbopump Design Program), a reliable casting simulation computer code is very desirable. This code would reduce both the development time and life cycle costs by allowing accurate modeling of the entire casting process. While this code could be used for other types of castings, the most significant reductions of time and cost would probably be realized in complex investment castings, where any reduction in the number of development castings would be of significant benefit. The casting process is conveniently divided into three distinct phases: (1) mold filling, where the melt is poured or forced into the mold cavity; (2) solidification, where the melt undergoes a phase change to the solid state; and (3) cool down, where the solidified part continues to cool to ambient conditions. While these phases may appear to be separate and distinct, temporal overlaps do exist between phases (e.g., local solidification occurring during mold filling), and some phenomenological events are affected by others (e.g., residual stresses depend on solidification and cooling rates). Therefore, a reliable code must accurately model all three phases and the interactions between each. While many codes have been developed (to various stages of complexity) to model the solidification and cool down phases, only a few codes have been developed to model mold filling.
Computer code simulations of explosions in flow networks and comparison with experiments
NASA Astrophysics Data System (ADS)
Gregory, W. S.; Nichols, B. D.; Moore, J. A.; Smith, P. R.; Steinke, R. G.; Idzorek, R. D.
1987-10-01
A program of experimental testing and computer code development for predicting the effects of explosions in air-cleaning systems is being carried out for the Department of Energy. This work is a combined effort by the Los Alamos National Laboratory and New Mexico State University (NMSU). Los Alamos has the lead responsibility in the project and develops the computer codes; NMSU performs the experimental testing. The emphasis in the program is on obtaining experimental data to verify the analytical work. The primary benefit of this work will be the development of a verified computer code that safety analysts can use to analyze the effects of hypothetical explosions in nuclear plant air cleaning systems. The experimental data show the combined effects of explosions in air-cleaning systems that contain all of the important air-cleaning elements (blowers, dampers, filters, ductwork, and cells). A small experimental set-up consisting of multiple rooms, ductwork, a damper, a filter, and a blower was constructed. Explosions were simulated with a shock tube, hydrogen/air-filled gas balloons, and blasting caps. Analytical predictions were made using the EVENT84 and NF85 computer codes. The EVENT84 code predictions were in good agreement with the effects of the hydrogen/air explosions, but they did not model the blasting cap explosions adequately. NF85 predicted shock entrance to and within the experimental set-up very well. The NF85 code was not used to model the hydrogen/air or blasting cap explosions.
Development of Tritium Permeation Analysis Code (TPAC)
Eung S. Kim; Chang H. Oh; Mike Patterson
2010-10-01
Idaho National Laboratory developed the Tritium Permeation Analysis Code (TPAC) for tritium permeation in the Very High Temperature Gas-Cooled Reactor (VHTR). All the component models in the VHTR were developed and embedded into the MATLAB Simulink package with a graphical user interface. The governing equations of the nuclear ternary reaction and of thermal neutron capture reactions from impurities in helium and in the graphite core, reflector, and control rods were implemented. The TPAC code was verified using analytical solutions for the tritium birth rates from ternary fission, from 3He, and from 10B. This paper also provides comparisons of TPAC with other existing codes. A VHTR reference design was selected to study tritium permeation from the reactor to the nuclear-assisted hydrogen production plant, and some sensitivity study results are presented based on an HTGR outlet temperature of 750 degrees C.
Computer codes for evaluation of control room habitability (HABIT)
Stage, S.A.
1996-06-01
This report describes the Computer Codes for Evaluation of Control Room Habitability (HABIT). HABIT is a package of computer codes designed to be used for the evaluation of control room habitability in the event of an accidental release of toxic chemicals or radioactive materials. Given information about the design of a nuclear power plant, a scenario for the release of toxic chemicals or radionuclides, and information about the air flows and protection systems of the control room, HABIT can be used to estimate the chemical exposure or radiological dose to control room personnel. HABIT is an integrated package of several programs that previously needed to be run separately and required considerable user intervention. This report discusses the theoretical basis and physical assumptions made by each of the modules in HABIT and gives detailed information about the data entry windows. Sample runs are given for each of the modules. A brief section of programming notes is included. A set of computer disks will accompany this report if the report is ordered from the Energy Science and Technology Software Center. The disks contain the files needed to run HABIT on a personal computer running DOS. Source codes for the various HABIT routines are on the disks. Also included are input and output files for three demonstration runs.
War of ontology worlds: mathematics, computer code, or Esperanto?
Rzhetsky, Andrey; Evans, James A
2011-09-01
The use of structured knowledge representations-ontologies and terminologies-has become standard in biomedicine. Definitions of ontologies vary widely, as do the values and philosophies that underlie them. In seeking to make these views explicit, we conducted and summarized interviews with a dozen leading ontologists. Their views clustered into three broad perspectives that we summarize as mathematics, computer code, and Esperanto. Ontology as mathematics puts the ultimate premium on rigor and logic, symmetry and consistency of representation across scientific subfields, and the inclusion of only established, non-contradictory knowledge. Ontology as computer code focuses on utility and cultivates diversity, fitting ontologies to their purpose. Like computer languages C++, Prolog, and HTML, the code perspective holds that diverse applications warrant custom designed ontologies. Ontology as Esperanto focuses on facilitating cross-disciplinary communication, knowledge cross-referencing, and computation across datasets from diverse communities. We show how these views align with classical divides in science and suggest how a synthesis of their concerns could strengthen the next generation of biomedical ontologies.
Convergence acceleration of the Proteus computer code with multigrid methods
NASA Technical Reports Server (NTRS)
Demuren, A. O.; Ibraheem, S. O.
1995-01-01
This report presents the results of a study to implement convergence acceleration techniques based on the multigrid concept in the two-dimensional and three-dimensional versions of the Proteus computer code. The first section presents a review of the relevant literature on the implementation of the multigrid methods in computer codes for compressible flow analysis. The next two sections present detailed stability analysis of numerical schemes for solving the Euler and Navier-Stokes equations, based on conventional von Neumann analysis and the bi-grid analysis, respectively. The next section presents details of the computational method used in the Proteus computer code. Finally, the multigrid implementation and applications to several two-dimensional and three-dimensional test problems are presented. The results of the present study show that the multigrid method always leads to a reduction in the number of iterations (or time steps) required for convergence. However, there is an overhead associated with the use of multigrid acceleration. The overhead is higher in 2-D problems than in 3-D problems, thus overall multigrid savings in CPU time are in general better in the latter. Savings of about 40-50 percent are typical in 3-D problems, but they are about 20-30 percent in large 2-D problems. The present multigrid method is applicable to steady-state problems and is therefore ineffective in problems with inherently unstable solutions.
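The coarse-grid correction idea behind such acceleration can be sketched on a model problem. The following two-grid cycle for the 1-D Poisson equation -u'' = f is a minimal illustration of the multigrid concept, not the Proteus implementation; it uses weighted-Jacobi smoothing, injection restriction, and an exact coarse solve.

```python
# Minimal two-grid sketch for -u'' = f on [0,1] with u(0)=u(1)=0.
import numpy as np

def jacobi(u, f, h, sweeps, w=2.0 / 3.0):
    for _ in range(sweeps):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def two_grid_cycle(u, f, h):
    u = jacobi(u, f, h, 3)                                    # pre-smoothing
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h**2  # residual
    rc = r[::2]                                               # restrict (injection)
    nc, H = rc.size, 2 * h
    A = (np.diag(2 * np.ones(nc - 2)) - np.diag(np.ones(nc - 3), 1)
         - np.diag(np.ones(nc - 3), -1)) / H**2               # coarse operator
    ec = np.zeros(nc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])                   # exact coarse solve
    e = np.interp(np.arange(u.size), np.arange(0, u.size, 2), ec)  # prolongate
    return jacobi(u + e, f, h, 3)                             # post-smoothing

n = 65
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.pi**2 * np.sin(np.pi * x)    # exact solution: u = sin(pi x)
u = np.zeros(n)
for _ in range(8):
    u = two_grid_cycle(u, f, h)
err = np.max(np.abs(u - np.sin(np.pi * x)))
```

A plain smoother alone would need O(n^2) sweeps to reach the same error; the coarse-grid correction is what removes the smooth error components cheaply, which is the source of the CPU-time savings reported above.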
Simulation of spacecraft attitude dynamics using TREETOPS and model-specific computer codes
NASA Technical Reports Server (NTRS)
Cochran, John E.; No, T. S.; Fitz-Coy, Norman G.
1989-01-01
The simulation of spacecraft attitude dynamics and control using the generic, multi-body code called TREETOPS and other codes written especially to simulate particular systems is discussed. Differences in the methods used to derive equations of motion--Kane's method for TREETOPS and the Lagrangian and Newton-Euler methods, respectively, for the other two codes--are considered. Simulation results from the TREETOPS code are compared with those from the other two codes for two example systems. One system is a chain of rigid bodies; the other consists of two rigid bodies attached to a flexible base body. Since the computer codes were developed independently, consistent results serve as a verification of the correctness of all the programs. Differences in the results are discussed. Results for the two-rigid-body, one-flexible-body system are useful also as information on multi-body, flexible, pointing payload dynamics.
A three-dimensional spacecraft-charging computer code
NASA Technical Reports Server (NTRS)
Rubin, A. G.; Katz, I.; Mandell, M.; Schnuelle, G.; Steen, P.; Parks, D.; Cassidy, J.; Roche, J.
1980-01-01
A computer code is described which simulates the interaction of the space environment with a satellite at geosynchronous altitude. Employing finite elements, a three-dimensional satellite model has been constructed with more than 1000 surface cells and 15 different surface materials. Free space around the satellite is modeled by nesting grids within grids. Applications of this NASA Spacecraft Charging Analyzer Program (NASCAP) code to the study of a satellite photosheath and the differential charging of the SCATHA (satellite charging at high altitudes) satellite in eclipse and in sunlight are discussed. In order to understand detector response when the satellite is charged, the code is used to trace the trajectories of particles reaching the SCATHA detectors. Particle trajectories from positive and negative emitters on SCATHA also are traced to determine the location of returning particles, to estimate the escaping flux, and to simulate active control of satellite potentials.
NASA Astrophysics Data System (ADS)
Markina, Natalya V.; Shimansky, Gregory A.
A method of controlling the systematic error in transmutation computations is described for the class of problems in which each nuclear transformation channel has exactly one parent and one residual nucleus. A discrete-logical algorithm is presented that reduces the matrix of the system of differential equations to block-triangular form. A computing procedure is developed that provides a strict estimate of the computational error for each value in the results, for the above class of transmutation problems subject to some additional restrictions on the complexity of the nuclide transformation scheme. The computer code implementing this procedure, TRANS_MU, has a number of advantages over analogous approaches. Besides the quantitative control of systematic and computational errors, an important feature of TRANS_MU is the calculation of the contribution of each considered reaction to transmutant accumulation and gas production. Application of the TRANS_MU code is illustrated with copper alloys, both in planning irradiation experiments on fusion reactor material specimens in fission reactors and in processing the experimental results.
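The problem class described above is the familiar system of linear depletion equations dN/dt = A N with a (block-)triangular rate matrix. The following is a generic sketch, not the TRANS_MU algorithm: it integrates a two-member decay chain with RK4 and checks the result against the analytic Bateman solution; the decay constants are illustrative.

```python
# Two-member chain N1 -> N2 -> (sink): dN/dt = A N with lower-triangular A.
import math
import numpy as np

lam1, lam2 = 0.10, 0.05              # illustrative decay constants (1/s)
A = np.array([[-lam1, 0.0],
              [ lam1, -lam2]])       # lower-triangular rate matrix

def rk4(N, t_end, dt):
    t = 0.0
    while t < t_end - 1e-12:
        k1 = A @ N
        k2 = A @ (N + 0.5 * dt * k1)
        k3 = A @ (N + 0.5 * dt * k2)
        k4 = A @ (N + dt * k3)
        N = N + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return N

t = 10.0
N = rk4(np.array([1.0, 0.0]), t, 0.01)
# Analytic Bateman solution for the daughter nuclide:
bateman = lam1 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))
```

The triangular structure is what makes a per-value error estimate tractable: each nuclide's equation depends only on nuclides earlier in the chain.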
GALPROP: New Developments in CR Propagation Code
NASA Technical Reports Server (NTRS)
Moskalenko, I. V.; Jones, F. C.; Mashnik, S. G.; Strong, A. W.; Ptuskin, V. S.
2003-01-01
The numerical Galactic CR propagation code GALPROP has been shown to reproduce simultaneously observational data of many kinds related to CR origin and propagation. It has been validated on direct measurements of nuclei, antiprotons, electrons, positrons as well as on astronomical measurements of gamma rays and synchrotron radiation. Such data provide many independent constraints on model parameters while revealing some contradictions in the conventional view of Galactic CR propagation. Using a new version of GALPROP we study new effects such as processes of wave-particle interactions in the interstellar medium. We also report about other developments in the CR propagation code.
The 1992 Seals Flow Code Development Workshop
NASA Technical Reports Server (NTRS)
Liang, Anita D.; Hendricks, Robert C.
1993-01-01
A two-day meeting was conducted at the NASA Lewis Research Center on August 5 and 6, 1992, to inform the technical community of the progress of NASA Contract NAS3-26544. This contract was established in 1990 to develop industrial and CFD codes for the design and analysis of seals. Codes were demonstrated and disseminated to the user community for evaluation. The peer review panel which was formed in 1991 provided recommendations on this effort. The technical community presented results of their activities in the area of seals, with particular emphasis on brush seal systems.
Geothermal reservoir engineering computer code comparison and validation
Faust, C.R.; Mercer, J.W.; Miller, W.J.
1980-11-12
The results of computer simulations for a set of six problems typical of geothermal reservoir engineering applications are presented. These results are compared to those obtained by others using similar geothermal reservoir simulators on the same problem set. The purpose of this code comparison is to check the performance of participating codes on a set of typical reservoir problems. The results provide a measure of the validity and appropriateness of the simulators in terms of major assumptions, governing equations, numerical accuracy, and computational procedures. A description is given of the general reservoir simulator - its major assumptions, mathematical formulation, and numerical techniques. Following the description of the model is the presentation of the results for the six problems. Included with the results for each problem is a discussion of the results; problem descriptions and result tabulations are included in appendixes. Each of the six problems specified in the contract was successfully simulated. (MHR)
Scheme for fault-tolerant holonomic computation on stabilizer codes
NASA Astrophysics Data System (ADS)
Oreshkov, Ognyan; Brun, Todd A.; Lidar, Daniel A.
2009-08-01
This paper generalizes and expands upon the work [O. Oreshkov, T. A. Brun, and D. A. Lidar, Phys. Rev. Lett. 102, 070502 (2009)] where we introduced a scheme for fault-tolerant holonomic quantum computation (HQC) on stabilizer codes. HQC is an all-geometric strategy based on non-Abelian adiabatic holonomies, which is known to be robust against various types of errors in the control parameters. The scheme we present shows that HQC is a scalable method of computation and opens the possibility for combining the benefits of error correction with the inherent resilience of the holonomic approach. We show that with the Bacon-Shor code the scheme can be implemented using Hamiltonian operators of weights 2 and 3.
Validation and testing of the VAM2D computer code
Kool, J.B.; Wu, Y.S.
1991-10-01
This document describes two modeling studies conducted by HydroGeoLogic, Inc. for the US NRC under contract no. NRC-04089-090, entitled "Validation and Testing of the VAM2D Computer Code." VAM2D is a two-dimensional, variably saturated flow and transport code, with applications for performance assessment of nuclear waste disposal. The computer code itself is documented in a separate NUREG document (NUREG/CR-5352, 1989). The studies presented in this report involve application of the VAM2D code to two diverse subsurface modeling problems. The first one involves modeling of infiltration and redistribution of water and solutes in an initially dry, heterogeneous field soil. This application involves detailed modeling over a relatively short, 9-month time period. The second problem pertains to the application of VAM2D to the modeling of a waste disposal facility in a fractured clay, over much larger space and time scales and with particular emphasis on the applicability and reliability of using an equivalent porous medium approach for simulating flow and transport in fractured geologic media. Reflecting the separate and distinct nature of the two problems studied, this report is organized in two separate parts. 61 refs., 31 figs., 9 tabs.
Bragg optics computer codes for neutron scattering instrument design
Popovici, M.; Yelon, W.B.; Berliner, R.R.; Stoica, A.D.
1997-09-01
Computer codes for neutron crystal spectrometer design, optimization and experiment planning are described. Phase space distributions, linewidths and absolute intensities are calculated by matrix methods in an extension of the Cooper-Nathans resolution function formalism. For modeling the Bragg reflection on bent crystals the lamellar approximation is used. Optimization is done by satisfying conditions of focusing in scattering and in real space, and by numerically maximizing figures of merit. Examples for three-axis and two-axis spectrometers are given.
Computational radiology and imaging with the MCNP Monte Carlo code
Estes, G.P.; Taylor, W.M.
1995-05-01
MCNP, a 3D coupled neutron/photon/electron Monte Carlo radiation transport code, is currently used in medical applications such as cancer radiation treatment planning, interpretation of diagnostic radiation images, and treatment beam optimization. This paper will discuss MCNP's current uses and capabilities, as well as envisioned improvements that would further enhance MCNP's role in computational medicine. It will be demonstrated that the methodology exists to simulate medical images (e.g., SPECT). Techniques will be discussed that would enable the construction of 3D computational geometry models of individual patients for use in patient-specific studies that would improve the quality of care for patients.
Methodology for computational fluid dynamics code verification/validation
Oberkampf, W.L.; Blottner, F.G.; Aeschliman, D.P.
1995-07-01
The issues of verification, calibration, and validation of computational fluid dynamics (CFD) codes have been receiving increasing levels of attention in the research literature and in engineering technology. Both CFD researchers and users of CFD codes are asking more critical and detailed questions concerning the accuracy, range of applicability, reliability and robustness of CFD codes and their predictions. This is a welcome trend because it demonstrates that CFD is maturing from a research tool into one that impacts engineering hardware and system design. In this environment, the broad issue of code quality assurance becomes paramount. However, the philosophy and methodology of building confidence in CFD code predictions have proven to be more difficult than many expected. A wide variety of physical modeling errors and discretization errors are discussed. Here, discretization errors refer to all errors caused by conversion of the original partial differential equations to algebraic equations, and their solution. Boundary conditions for both the partial differential equations and the discretized equations will be discussed. Contrasts are drawn between the assumptions and actual use of numerical method consistency and stability. Comments are also made concerning the existence and uniqueness of solutions for both the partial differential equations and the discrete equations. Various techniques are suggested for the detection and estimation of errors caused by physical modeling and discretization of the partial differential equations.
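One standard discretization-error check used in code verification is the observed order of accuracy computed from solutions on three systematically refined grids. A brief sketch with manufactured second-order data follows; the values are illustrative, not results from the paper.

```python
# Observed order of accuracy from three grids with refinement ratio r:
# p = ln((f_coarse - f_medium) / (f_medium - f_fine)) / ln(r)
import math

def observed_order(f_coarse, f_medium, f_fine, r=2.0):
    return math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)

# Manufactured data f(h) = 1 + 0.5*h**2 for h = 0.4, 0.2, 0.1 (second order).
p = observed_order(1 + 0.5 * 0.4**2, 1 + 0.5 * 0.2**2, 1 + 0.5 * 0.1**2)
```

If the observed p matches the formal order of the scheme, the asymptotic range has been reached and Richardson extrapolation gives a discretization-error estimate.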
User's manual for the vertical axis wind turbine performance computer code DARTER
Klimas, P. C.; French, R. E.
1980-05-01
The computer code DARTER (DARrieus Turbine, Elemental Reynolds number) is an aerodynamic performance/loads prediction scheme based upon the conservation of momentum principle. It is the latest evolution in a sequence which began with a model developed by Templin of NRC, Canada, and progressed through the Sandia National Laboratories-developed SIMOSS (SImple MOmentum, Single Streamtube) and DART (DARrieus Turbine) to DARTER.
SAMDIST: A Computer Code for Calculating Statistical Distributions for R-Matrix Resonance Parameters
Leal, L.C.
1995-01-01
The SAMDIST computer code has been developed to calculate distributions of resonance parameters of the Reich-Moore R-matrix type. The program assumes the parameters are in a format compatible with that of the multilevel R-matrix code SAMMY. SAMDIST calculates the energy-level spacing distribution, the resonance width distribution, and the long-range correlation of the energy levels. Results of these calculations are presented in both graphic and tabular forms.
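For resonance sequences of GOE type, the nearest-neighbor level-spacing distribution that SAMDIST computes is expected to follow the Wigner surmise, P(s) = (pi*s/2) exp(-pi*s^2/4) in units of the mean spacing. A small numerical sketch of that reference distribution (not SAMDIST's implementation):

```python
# Wigner surmise for nearest-neighbor level spacings, with a numerical
# check that it is normalized and has unit mean spacing.
import math

def wigner(s):
    return 0.5 * math.pi * s * math.exp(-math.pi * s * s / 4.0)

ds = 1e-4
norm = sum(wigner(i * ds) * ds for i in range(1, 100000))          # integral of P(s)
mean = sum((i * ds) * wigner(i * ds) * ds for i in range(1, 100000))  # integral of s*P(s)
```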
Fault-tolerant quantum computation with asymmetric Bacon-Shor codes
NASA Astrophysics Data System (ADS)
Brooks, Peter; Preskill, John
2013-03-01
We develop a scheme for fault-tolerant quantum computation based on asymmetric Bacon-Shor codes, which works effectively against highly biased noise dominated by dephasing. We find the optimal Bacon-Shor block size as a function of the noise strength and the noise bias, and estimate the logical error rate and overhead cost achieved by this optimal code. Our fault-tolerant gadgets, based on gate teleportation, are well suited for hardware platforms with geometrically local gates in two dimensions.
ASHMET: A computer code for estimating insolation incident on tilted surfaces
NASA Technical Reports Server (NTRS)
Elkin, R. F.; Toelle, R. G.
1980-01-01
A computer code, ASHMET, was developed by MSFC to estimate the amount of solar insolation incident on the surfaces of solar collectors. Both tracking and fixed-position collectors were included. Climatological data for 248 U. S. locations are built into the code. The basic methodology used by ASHMET is the ASHRAE clear-day insolation relationships modified by a clearness index derived from SOLMET-measured solar radiation data to a horizontal surface.
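The ASHRAE clear-day relationship referred to above has the form I_DN = A * exp(-B / sin(beta)), where beta is the solar altitude. The sketch below is a hedged illustration only: the A and B coefficients are placeholders rather than the tabulated monthly ASHRAE values, and the SOLMET-derived clearness-index modification is omitted.

```python
# Sketch of the ASHRAE clear-day direct normal irradiance relation.
# Coefficients A (W/m^2) and B (dimensionless) are illustrative placeholders.
import math

def clear_day_direct_normal(beta_deg, A=1100.0, B=0.18):
    """Direct normal irradiance for solar altitude beta (degrees)."""
    sin_beta = math.sin(math.radians(beta_deg))
    return A * math.exp(-B / sin_beta) if sin_beta > 0 else 0.0

def clear_day_horizontal(beta_deg, **kw):
    """Direct component on a horizontal surface: the normal component."""
    return clear_day_direct_normal(beta_deg, **kw) * math.sin(math.radians(beta_deg))
```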
Utilization of recently developed codes for high power Brayton and Rankine cycle power systems
NASA Technical Reports Server (NTRS)
Doherty, Michael P.
1993-01-01
Two recently developed FORTRAN computer codes for high power Brayton and Rankine thermodynamic cycle analysis for space power applications are presented. The codes were written in support of an effort to develop a series of subsystem models for multimegawatt Nuclear Electric Propulsion, but their use is not limited just to nuclear heat sources or to electric propulsion. Code development background, a description of the codes, some sample input/output from one of the codes, and state future plans/implications for the use of these codes by NASA's Lewis Research Center are provided.
Code manual for CONTAIN 2.0: A computer code for nuclear reactor containment analysis
Murata, K.K.; Williams, D.C.; Griffith, R.O.; Gido, R.G.; Tadios, E.L.; Davis, F.J.; Martinez, G.M.; Washington, K.E.; Tills, J.
1997-12-01
The CONTAIN 2.0 computer code is an integrated analysis tool used for predicting the physical conditions, chemical compositions, and distributions of radiological materials inside a containment building following the release of material from the primary system in a light-water reactor accident. It can also predict the source term to the environment. CONTAIN 2.0 is intended to replace the earlier CONTAIN 1.12, which was released in 1991. The purpose of this Code Manual is to provide full documentation of the features and models in CONTAIN 2.0. Besides complete descriptions of the models, this Code Manual provides a complete description of the input and output from the code. CONTAIN 2.0 is a highly flexible and modular code that can run problems that are either quite simple or highly complex. An important aspect of CONTAIN is that the interactions among thermal-hydraulic phenomena, aerosol behavior, and fission product behavior are taken into account. The code includes atmospheric models for steam/air thermodynamics, intercell flows, condensation/evaporation on structures and aerosols, aerosol behavior, and gas combustion. It also includes models for reactor cavity phenomena such as core-concrete interactions and coolant pool boiling. Heat conduction in structures, fission product decay and transport, radioactive decay heating, and the thermal-hydraulic and fission product decontamination effects of engineered safety features are also modeled. To the extent possible, the best available models for severe accident phenomena have been incorporated into CONTAIN, but it is intrinsic to the nature of accident analysis that significant uncertainty exists regarding numerous phenomena. In those cases, sensitivity studies can be performed with CONTAIN by means of user-specified input parameters. Thus, the code can be viewed as a tool designed to assist the knowledgeable reactor safety analyst in evaluating the consequences of specific modeling assumptions.
Code for Multiblock CFD and Heat-Transfer Computations
NASA Technical Reports Server (NTRS)
Fabian, John C.; Heidmann, James D.; Lucci, Barbara L.; Ameri, Ali A.; Rigby, David L.; Steinthorsson, Erlendur
2006-01-01
The NASA Glenn Research Center General Multi-Block Navier-Stokes Convective Heat Transfer Code, Glenn-HT, has been used extensively to predict heat transfer and fluid flow for a variety of steady gas turbine engine problems. Recently, the Glenn-HT code has been completely rewritten in Fortran 90/95, a more object-oriented language that allows programmers to create code that is more modular and makes more efficient use of data structures. The new implementation takes full advantage of the capabilities of the Fortran 90/95 programming language. As a result, the Glenn-HT code now provides dynamic memory allocation, modular design, and unsteady flow capability. This allows for the heat-transfer analysis of a full turbine stage. The code has been demonstrated for an unsteady inflow condition, and gridding efforts have been initiated for a full turbine stage unsteady calculation. This analysis will be the first to simultaneously include the effects of rotation, blade interaction, film cooling, and tip clearance with recessed tip on turbine heat transfer and cooling performance. Future plans call for the application of the new Glenn-HT code to a range of gas turbine engine problems of current interest to the heat-transfer community. The new unsteady flow capability will allow researchers to predict the effect of unsteady flow phenomena upon the convective heat transfer of turbine blades and vanes. Work will also continue on the development of conjugate heat-transfer capability in the code, where simultaneous solution of convective and conductive heat-transfer domains is accomplished. Finally, advanced turbulence and fluid flow models and automatic gridding techniques are being developed that will be applied to the Glenn-HT code and solution process.
Software Quality-Control Guidelines for Codes Developed for the NWTC
H. James Green.
1999-06-16
Members of the wind-energy research, development, deployment, and production communities use computer codes for many purposes and base important decisions on their results. It is therefore important that the developers of these codes scrutinize them to assure an appropriate level of quality. The National Wind Technology Center (NWTC) and its subcontractors have developed many computer codes now in use in the United States and around the world. This document presents guidelines for ensuring the quality of programs developed for the NWTC.
Initial Testing of a Two-Dimensional Computer Code for Microwave-Induced Surface Breakdown in Air
1991-06-01
A two-dimensional computer code for microwave-induced surface breakdown in air, developed by D.J. Mayhall and J.H. Yee of Lawrence Livermore, is described. The code is based on finite difference approximations to Maxwell's curl equations and is a step toward further understanding of electron emission and surface flashover phenomena in the operation of high-voltage electrical equipment.
Carbajo, Juan; Jeong, Hae-Yong; Wigeland, Roald; Corradini, Michael; Schmidt, Rodney Cannon; Thomas, Justin; Wei, Tom; Sofu, Tanju; Ludewig, Hans; Tobita, Yoshiharu; Ohshima, Hiroyuki; Serre, Frederic
2011-06-01
This report summarizes the results of an expert-opinion elicitation activity designed to qualitatively assess the status and capabilities of currently available computer codes and models for accident analysis and reactor safety calculations of advanced sodium fast reactors, and identify important gaps. The twelve-member panel consisted of representatives from five U.S. National Laboratories (SNL, ANL, INL, ORNL, and BNL), the University of Wisconsin, KAERI, JAEA, and the CEA. The major portion of this elicitation activity occurred during a two-day meeting held on Aug. 10-11, 2010 at Argonne National Laboratory. There were two primary objectives of this work: (1) Identify computer codes currently available for SFR accident analysis and reactor safety calculations; and (2) Assess the status and capability of current US computer codes to adequately model the required accident scenarios and associated phenomena, and identify important gaps. During the review, panel members identified over 60 computer codes that are currently available in the international community to perform different aspects of SFR safety analysis for various event scenarios and accident categories. A brief description of each of these codes together with references (when available) is provided. An adaptation of the Predictive Capability Maturity Model (PCMM) for computational modeling and simulation is described for use in this work. The panel's assessment of the available US codes is presented in the form of nine tables, organized into groups of three for each of three risk categories considered: anticipated operational occurrences (AOOs), design basis accidents (DBA), and beyond design basis accidents (BDBA). A set of summary conclusions is drawn from the results obtained. At the highest level, the panel judged that current US code capabilities are adequate for licensing given reasonable margins, but expressed concern that US code development activities had stagnated and that the
CURRENT - A Computer Code for Modeling Two-Dimensional, Chemically Reacting, Low Mach Number Flows
Winters, W.S.; Evans, G.H.; Moen, C.D.
1996-10-01
This report documents CURRENT, a computer code for modeling two-dimensional, chemically reacting, low Mach number flows, including the effects of surface chemistry. CURRENT is a finite volume code based on the SIMPLER algorithm. Additional convergence acceleration for low Peclet number flows is provided using improved boundary condition coupling and preconditioned gradient methods. Gas-phase and surface chemistry are modeled using the CHEMKIN software libraries. The CURRENT user interface has been designed to be compatible with the Sandia-developed mesh generator and post-processor ANTIPASTO and the post-processor TECPLOT. This report describes the theory behind the code and also serves as a user's manual.
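The finite-volume discretization underlying SIMPLER-type solvers can be illustrated with a minimal example. The sketch below solves steady 1-D diffusion on a uniform finite-volume grid with Dirichlet boundaries; it is a hypothetical illustration of the discretization style, not code from CURRENT (all names are invented):

```python
import numpy as np

def solve_1d_diffusion(n=20, length=1.0, phi_left=0.0, phi_right=1.0, gamma=1.0):
    """Steady 1-D diffusion d/dx(Gamma dphi/dx) = 0 on a uniform
    finite-volume grid with Dirichlet boundaries at both ends."""
    dx = length / n
    a = gamma / dx                      # face conductance on a uniform grid
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        aw = a if i > 0 else 2 * a      # half-cell distance to the boundary
        ae = a if i < n - 1 else 2 * a
        A[i, i] = aw + ae
        if i > 0:
            A[i, i - 1] = -aw
        else:
            b[i] += 2 * a * phi_left
        if i < n - 1:
            A[i, i + 1] = -ae
        else:
            b[i] += 2 * a * phi_right
    return np.linalg.solve(A, b)

phi = solve_1d_diffusion()
# With no source term, the cell-center values lie on the straight line
# between the two boundary values.
```

A production solver would replace the dense solve with a tridiagonal (TDMA) sweep and embed this step inside the SIMPLER pressure-velocity iteration.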
FLAME: A finite element computer code for contaminant transport in variably-saturated media
Baca, R.G.; Magnuson, S.O.
1992-06-01
A numerical model was developed for use in performance assessment studies at the INEL. The numerical model, referred to as the FLAME computer code, is designed to simulate subsurface contaminant transport in variably-saturated media. The code can be applied to model two-dimensional contaminant transport in an arid site vadose zone or in an unconfined aquifer. In addition, the code has the capability to describe transport processes in porous media with discrete fractures. This report presents the following: a description of the conceptual framework and mathematical theory; derivations of the finite element techniques and algorithms; computational examples that illustrate the capability of the code; and input instructions for the general use of the code. The development of the FLAME computer code is aimed at providing environmental scientists at the INEL with a predictive tool for the subsurface water pathway. This numerical model is expected to be widely used in performance assessments for: (1) the Remedial Investigation/Feasibility Study process and (2) compliance studies required by US Department of Energy Order 5820.2A.
NASA Astrophysics Data System (ADS)
Pucha, Ragadeepika; Hiremath, K. M.; Gurumath, Shashanka R.
2016-03-01
Sunspots are the most conspicuous aspects of the Sun. They have a lower temperature compared to the surrounding photosphere; hence, sunspots appear as dark regions on a brighter background. Sunspots cyclically appear and disappear with an 11-year periodicity and are associated with a strong magnetic field (∼10³ G) structure. Sunspots consist of a dark umbra surrounded by a lighter penumbra. Study of the umbra-penumbra area ratio can give a rough idea of how the convective energy of the Sun is transported from the interior, as the sunspot's thermal structure is related to this convective medium. An algorithm to extract sunspots from the white-light solar images obtained from the Kodaikanal Observatory is proposed. This algorithm computes the radius and center of the solar disk uniquely and removes the limb darkening from the image. It also separates the umbra and computes the position as well as the area of the sunspots. The estimated results are compared with the Debrecen photoheliographic results. It is shown that both area and position measurements are in quite good agreement.
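The extraction steps the abstract describes (limb-darkening removal, thresholding, and area/position measurement) can be sketched on a synthetic image. Everything below (the function, the threshold value, the darkening law) is an invented illustration, not the Kodaikanal pipeline itself:

```python
import numpy as np

def detect_sunspot(image, disk_mask, threshold=0.85):
    """Toy sunspot extraction: flatten limb darkening by dividing each
    pixel by the azimuthally averaged intensity at its radius, then
    threshold the flattened disk to isolate dark pixels."""
    yy, xx = np.indices(image.shape)
    cy = (image.shape[0] - 1) / 2.0
    cx = (image.shape[1] - 1) / 2.0
    r = np.hypot(yy - cy, xx - cx).astype(int)
    counts = np.bincount(r[disk_mask])
    sums = np.bincount(r[disk_mask], weights=image[disk_mask])
    prof = sums / np.maximum(counts, 1)          # limb-darkening profile
    flattened = np.ones_like(image)
    flattened[disk_mask] = image[disk_mask] / prof[r[disk_mask]]
    spot = disk_mask & (flattened < threshold)
    area = int(spot.sum())                        # spot area in pixels
    pos = (yy[spot].mean(), xx[spot].mean()) if area else None
    return area, pos

# Synthetic limb-darkened disk with one dark spot at (row=80, col=120)
n = 201
yy, xx = np.indices((n, n))
rr = np.hypot(yy - 100, xx - 100)
disk = rr < 90
mu = np.sqrt(np.clip(1.0 - (rr / 90.0) ** 2, 0.0, 1.0))
img = np.where(disk, 0.4 + 0.6 * mu, 0.0)        # Eddington-like darkening
img[np.hypot(yy - 80, xx - 120) < 5] *= 0.5      # inject a dark spot
area, pos = detect_sunspot(img, disk)
```

Dividing by the azimuthally averaged profile makes quiet-Sun pixels come out near 1.0 regardless of limb distance, so a single global threshold can pick out the spot.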
NASA Astrophysics Data System (ADS)
Aeschliman, D. P.; Oberkampf, W. L.; Blottner, F. G.
1995-07-01
Verification, calibration, and validation (VCV) of Computational Fluid Dynamics (CFD) codes is an essential element of the code development process. The exact manner in which code VCV activities are planned and conducted, however, is critically important. It is suggested that the way in which code validation, in particular, is often conducted--by comparison to published experimental data obtained for other purposes--is in general difficult and unsatisfactory, and that a different approach is required. This paper describes a proposed methodology for CFD code VCV that meets the technical requirements and is philosophically consistent with code development needs. The proposed methodology stresses teamwork and cooperation between code developers and experimentalists throughout the VCV process, and takes advantage of certain synergisms between CFD and experiment. A novel approach to uncertainty analysis is described which can both distinguish between and quantify various types of experimental error, and whose attributes are used to help define an appropriate experimental design for code VCV experiments. The methodology is demonstrated with an example of laminar, hypersonic, near perfect gas, 3-dimensional flow over a sliced sphere/cone of varying geometrical complexity.
A Compact Code for Simulations of Quantum Error Correction in Classical Computers
Nyman, Peter
2009-03-10
This study considers implementations of error correction in a simulation language on a classical computer. Error correction will be necessary in quantum computing and quantum information. We give some examples of implementations of error correction codes. These implementations are made in a more general quantum simulation language on a classical computer, in the language Mathematica. The intention of this research is to develop a programming language that is able to simulate all quantum algorithms and error corrections in the same framework. The program code implemented on a classical computer provides a connection between the mathematical formulation of quantum mechanics and computational methods. This gives us a clear, uncomplicated language for the implementation of algorithms.
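Although the paper works in Mathematica, the idea of simulating an error-correction code on a classical computer can be sketched in a few lines. The example below (all names hypothetical) implements the standard 3-qubit bit-flip code on a state vector: encode, inject a single X error, read the two stabilizer parities, and correct:

```python
import numpy as np

def encode(alpha, beta):
    """Encode a|0> + b|1> into the 3-qubit bit-flip code a|000> + b|111>."""
    psi = np.zeros(8, dtype=complex)
    psi[0b000] = alpha
    psi[0b111] = beta
    return psi

def apply_x(psi, qubit):
    """Pauli X (bit flip) on the given qubit (0 = leftmost)."""
    out = np.empty_like(psi)
    for i in range(8):
        out[i ^ (1 << (2 - qubit))] = psi[i]
    return out

def syndrome(psi):
    """Parities Z0Z1 and Z1Z2, read off any basis state in the support
    (a Pauli-X error keeps the state in a two-dimensional flipped-codeword
    subspace, so one support element determines both parities)."""
    i = int(np.flatnonzero(np.abs(psi) > 1e-12)[0])
    bits = [(i >> 2) & 1, (i >> 1) & 1, i & 1]
    return bits[0] ^ bits[1], bits[1] ^ bits[2]

def correct(psi):
    """Map the syndrome to the flipped qubit and undo the error."""
    fix = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[syndrome(psi)]
    return psi if fix is None else apply_x(psi, fix)

psi = encode(0.6, 0.8j)
corrupted = apply_x(psi, 1)      # single bit-flip error on the middle qubit
recovered = correct(corrupted)
```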
Application of software to development of reactor-safety codes
Wilburn, N.P.; Niccoli, L.G.
1980-09-01
Over the past two-and-a-half decades, the application of new techniques has reduced hardware costs for digital computer systems and increased computational speed by several orders of magnitude. A corresponding cost reduction in business and scientific software development has not occurred. The same situation is seen for software developed to model the thermohydraulic behavior of nuclear systems under hypothetical accident situations. In all cases this is particularly noted when costs over the total software life cycle are considered. A solution to this dilemma for reactor safety code systems has been demonstrated by applying the software engineering techniques which have been developed over the course of the last few years in the aerospace and business communities. These techniques have been applied recently with a great deal of success in three major projects at the Hanford Engineering Development Laboratory (HEDL): 1) a rewrite of a major safety code (MELT); 2) development of a new code system (CONACS) for description of the response of LMFBR containment to hypothetical accidents; and 3) development of two new modules for reactor safety analysis.
NEWSPEC: A computer code to unfold neutron spectra from Bonner sphere data
Lemley, E.C.; West, L.
1996-12-31
A new computer code, NEWSPEC, is in development at the University of Arkansas. The NEWSPEC code allows a user to unfold, fold, rebin, display, and manipulate neutron spectra as applied to Bonner sphere measurements. The SPUNIT unfolding algorithm, a new rebinning algorithm, and the graphical capabilities of Microsoft (MS) Windows and MS Excel are utilized to perform these operations. The computer platform for NEWSPEC is a personal computer (PC) running MS Windows 3.x or Win95, while the code is written in MS Visual Basic (VB) and MS VB for Applications (VBA) under Excel. One of the most useful attributes of the NEWSPEC software is the link to Excel allowing additional manipulation of program output or creation of program input.
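The kind of iterative unfolding NEWSPEC performs can be sketched with a multiplicative (MLEM-style) update; the actual SPUNIT algorithm may use a different update rule, and all names and data below are invented:

```python
import numpy as np

def unfold(response, measured, n_iter=2000):
    """Iterative multiplicative unfolding of a few-channel measurement.
    response[i, j]: sensitivity of Bonner sphere i to energy bin j.
    measured[i]:   count rate of sphere i.
    The update refolds the current spectrum, compares it with the
    measurements, and rescales each bin; positivity is preserved."""
    phi = np.ones(response.shape[1])            # flat starting spectrum
    norm = response.sum(axis=0)
    for _ in range(n_iter):
        predicted = response @ phi              # fold: C = R @ phi
        phi *= (response.T @ (measured / predicted)) / norm
    return phi

rng = np.random.default_rng(1)
R = rng.random((6, 12)) + 0.01                  # 6 spheres, 12 energy bins
phi_true = rng.random(12) + 0.1
M = R @ phi_true                                # noise-free "measurements"
phi_est = unfold(R, M)
# The refolded estimate R @ phi_est should approach the measurements,
# even though the 6-measurement, 12-bin problem is underdetermined.
```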
Computational Aeroacoustic Analysis System Development
NASA Technical Reports Server (NTRS)
Hadid, A.; Lin, W.; Ascoli, E.; Barson, S.; Sindir, M.
2001-01-01
Many industrial and commercial products operate in a dynamic flow environment and the aerodynamically generated noise has become a very important factor in the design of these products. In light of the importance in characterizing this dynamic environment, Rocketdyne has initiated a multiyear effort to develop an advanced general-purpose Computational Aeroacoustic Analysis System (CAAS) to address these issues. This system will provide a high fidelity predictive capability for aeroacoustic design and analysis. The numerical platform is able to provide high temporal and spatial accuracy that is required for aeroacoustic calculations through the development of a high order spectral element numerical algorithm. The analysis system is integrated with well-established CAE tools, such as a graphical user interface (GUI) through PATRAN, to provide cost-effective access to all of the necessary tools. These include preprocessing (geometry import, grid generation and boundary condition specification), code set up (problem specification, user parameter definition, etc.), and postprocessing. The purpose of the present paper is to assess the feasibility of such a system and to demonstrate the efficiency and accuracy of the numerical algorithm through numerical examples. Computations of vortex shedding noise were carried out in the context of a two-dimensional low Mach number turbulent flow past a square cylinder. The computational aeroacoustic approach that is used in CAAS relies on coupling a base flow solver to the acoustic solver throughout a computational cycle. The unsteady fluid motion, which is responsible for both the generation and propagation of acoustic waves, is calculated using a high order flow solver. The results of the flow field are then passed to the acoustic solver through an interpolator to map the field values into the acoustic grid. The acoustic field, which is governed by the linearized Euler equations, is then calculated using the flow results computed
Heat pipe design handbook, part 2. [digital computer code specifications
NASA Technical Reports Server (NTRS)
Skrabek, E. A.
1972-01-01
The utilization of a digital computer code for heat pipe analysis and design (HPAD) is described. The code calculates the steady-state hydrodynamic heat transport capability of a heat pipe with a particular wick configuration and working fluid, as a function of wick cross-sectional area. Heat load, orientation, operating temperature, and heat pipe geometry are specified. Both one 'g' and zero 'g' environments are considered, and, at the user's option, the code will also perform a weight analysis and will calculate heat pipe temperature drops. The central porous slab, circumferential porous wick, arterial wick, annular wick, and axial rectangular grooves are the wick configurations which HPAD has the capability of analyzing. For Vol. 1, see N74-22569.
Majorana fermion surface code for universal quantum computation
Vijay, Sagar; Hsieh, Timothy H.; Fu, Liang
2015-12-10
In this study, we introduce an exactly solvable model of interacting Majorana fermions realizing Z2 topological order with a Z2 fermion parity grading and lattice symmetries permuting the three fundamental anyon types. We propose a concrete physical realization by utilizing quantum phase slips in an array of Josephson-coupled mesoscopic topological superconductors, which can be implemented in a wide range of solid-state systems, including topological insulators, nanowires, or two-dimensional electron gases, proximitized by s-wave superconductors. Our model finds a natural application as a Majorana fermion surface code for universal quantum computation, with a single-step stabilizer measurement requiring no physical ancilla qubits, increased error tolerance, and simpler logical gates than a surface code with bosonic physical qubits. We thoroughly discuss protocols for stabilizer measurements, encoding and manipulating logical qubits, and gate implementations.
Using Coding Apps to Support Literacy Instruction and Develop Coding Literacy
ERIC Educational Resources Information Center
Hutchison, Amy; Nadolny, Larysa; Estapa, Anne
2016-01-01
In this article the authors present the concept of Coding Literacy and describe the ways in which coding apps can support the development of Coding Literacy and disciplinary and digital literacy skills. Through detailed examples, we describe how coding apps can be integrated into literacy instruction to support learning of the Common Core English…
Development of Parallel Code for the Alaska Tsunami Forecast Model
NASA Astrophysics Data System (ADS)
Bahng, B.; Knight, W. R.; Whitmore, P.
2014-12-01
The Alaska Tsunami Forecast Model (ATFM) is a numerical model used to forecast propagation and inundation of tsunamis generated by earthquakes and other means in both the Pacific and Atlantic Oceans. At the U.S. National Tsunami Warning Center (NTWC), the model is mainly used in a pre-computed fashion. That is, results for hundreds of hypothetical events are computed before alerts, and are accessed and calibrated with observations during tsunamis to immediately produce forecasts. ATFM uses the non-linear, depth-averaged, shallow-water equations of motion with multiply nested grids in two-way communications between domains of each parent-child pair as waves get closer to coastal waters. Even with the pre-computation the task becomes non-trivial as sub-grid resolution gets finer. Currently, the finest resolution Digital Elevation Models (DEM) used by ATFM are 1/3 arc-seconds. With a serial code, large or multiple areas of very high resolution can produce run-times that are unrealistic even in a pre-computed approach. One way to increase the model performance is code parallelization used in conjunction with a multi-processor computing environment. NTWC developers have undertaken an ATFM code-parallelization effort to streamline the creation of the pre-computed database of results with the long term aim of tsunami forecasts from source to high resolution shoreline grids in real time. Parallelization will also permit timely regeneration of the forecast model database with new DEMs; and, will make possible future inclusion of new physics such as the non-hydrostatic treatment of tsunami propagation. The purpose of our presentation is to elaborate on the parallelization approach and to show the compute speed increase on various multi-processor systems.
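The parallelization pattern described, each subdomain advancing its own cells after exchanging one-cell halos with its neighbors, can be emulated serially. The toy below steps the linearized 1-D shallow-water equations two ways (whole grid at once, and per-subdomain with halo exchange); it is an invented illustration of the pattern, not ATFM code:

```python
import numpy as np

def step_serial(eta, u, dx, dt, g=9.81, h0=100.0):
    """One forward-Euler step of the linearized 1-D shallow-water
    equations on a periodic grid (a toy stand-in for the full
    nonlinear, nested-grid solver)."""
    deta = -h0 * (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
    du = -g * (np.roll(eta, -1) - np.roll(eta, 1)) / (2 * dx)
    return eta + dt * deta, u + dt * du

def step_decomposed(eta, u, dx, dt, nproc=4, g=9.81, h0=100.0):
    """The same update computed on nproc subdomains with one-cell halo
    exchange -- a serial emulation of the MPI pattern."""
    n = eta.size
    chunk = n // nproc
    eta_new, u_new = np.empty(n), np.empty(n)
    for p in range(nproc):                       # each pass = one "rank"
        lo, hi = p * chunk, (p + 1) * chunk
        halo = np.arange(lo - 1, hi + 1) % n     # local cells plus halos
        e, v = eta[halo], u[halo]                # "received" neighbor data
        eta_new[lo:hi] = e[1:-1] - dt * h0 * (v[2:] - v[:-2]) / (2 * dx)
        u_new[lo:hi] = v[1:-1] - dt * g * (e[2:] - e[:-2]) / (2 * dx)
    return eta_new, u_new

# A Gaussian hump of sea-surface height, stepped both ways
n, dx, dt = 64, 1000.0, 0.5
x = np.arange(n) * dx
eta0 = np.exp(-((x - 32000.0) / 5000.0) ** 2)
es, us = eta0.copy(), np.zeros(n)
ed, ud = eta0.copy(), np.zeros(n)
for _ in range(10):
    es, us = step_serial(es, us, dx, dt)
    ed, ud = step_decomposed(ed, ud, dx, dt)
```

Because each subdomain sees exactly the neighbor cells it needs, the decomposed result matches the serial one, which is the property that lets the real model scale across processors.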
Development of a massively parallel parachute performance prediction code
Peterson, C.W.; Strickland, J.H.; Wolfe, W.P.; Sundberg, W.D.; McBride, D.D.
1997-04-01
The Department of Energy has given Sandia full responsibility for the complete life cycle (cradle to grave) of all nuclear weapon parachutes. Sandia National Laboratories is initiating development of a complete numerical simulation of parachute performance, beginning with parachute deployment and continuing through inflation and steady state descent. The purpose of the parachute performance code is to predict the performance of stockpile weapon parachutes as these parachutes continue to age well beyond their intended service life. A new massively parallel computer will provide unprecedented speed and memory for solving this complex problem, and new software will be written to treat the coupled fluid, structure and trajectory calculations as part of a single code. Verification and validation experiments have been proposed to provide the necessary confidence in the computations.
NASA Technical Reports Server (NTRS)
George, M. C.
1972-01-01
Gamma-ray production cross sections for the inelastic scattering of neutrons from light nuclei are considered. The applicability of the optical model potential is discussed. Based on experimental data, a cascade approach is developed to calculate the inelastic gamma production cross sections. In the case of O-16, the computer code LINGAP is used in conjunction with ABACUS-2; results are compared with reported values.
Multicode comparison of selected source-term computer codes
Hermann, O.W.; Parks, C.V.; Renier, J.P.; Roddy, J.W.; Ashline, R.C.; Wilson, W.B.; LaBauve, R.J.
1989-04-01
This report summarizes the results of a study to assess the predictive capabilities of three radionuclide inventory/depletion computer codes, ORIGEN2, ORIGEN-S, and CINDER-2. The task was accomplished through a series of comparisons of their output for several light-water reactor (LWR) models (i.e., verification). Of the five cases chosen, two modeled typical boiling-water reactors (BWR) at burnups of 27.5 and 40 GWd/MTU and two represented typical pressurized-water reactors (PWR) at burnups of 33 and 50 GWd/MTU. In the fifth case, identical input data were used for each of the codes to examine the results of decay only and to show differences in nuclear decay constants and decay heat rates. Comparisons were made for several different characteristics (mass, radioactivity, and decay heat rate) for 52 radionuclides and for nine decay periods ranging from 30 d to 10,000 years. Only fission products and actinides were considered. The results are presented in comparative-ratio tables for each of the characteristics, decay periods, and cases. A brief summary description of each of the codes has been included. Of the more than 21,000 individual comparisons made for the three codes (taken two at a time), nearly half (45%) agreed to within 1%, and an additional 17% fell within the range of 1 to 5%. Approximately 8% of the comparison results disagreed by more than 30%. However, relatively good agreement was obtained for most of the radionuclides that are expected to contribute the greatest impact to waste disposal. Even though some defects have been noted, each of the codes in the comparison appears to produce respectable results. 12 figs., 12 tabs.
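The report's bookkeeping, comparative-ratio tables binned by agreement level, is easy to reproduce in miniature. The sketch below (with made-up numbers, not the report's data) computes the fraction of pairwise comparisons agreeing to within 1%, within 1-5%, and disagreeing by more than 30%:

```python
import numpy as np

def agreement_summary(a, b, bands=(0.01, 0.05, 0.30)):
    """Pairwise comparison of two codes' predictions, summarized the way
    the report tabulates it: fractions agreeing to within 1%, within
    1-5%, and disagreeing by more than 30%."""
    ratio = np.asarray(a, float) / np.asarray(b, float)
    dev = np.abs(ratio - 1.0)                  # relative deviation of the ratio
    within_1 = np.mean(dev <= bands[0])
    within_1_to_5 = np.mean((dev > bands[0]) & (dev <= bands[1]))
    over_30 = np.mean(dev > bands[2])
    return within_1, within_1_to_5, over_30

# Hypothetical predictions (e.g. decay heat rates) from two codes
code_a = [1.00, 2.04, 3.10, 5.0, 0.995]
code_b = [1.005, 2.00, 3.00, 8.0, 1.00]
w1, w15, o30 = agreement_summary(code_a, code_b)
```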
Computer codes for thermal analysis of a solid rocket motor nozzle
NASA Technical Reports Server (NTRS)
Chauhan, Rajinder Singh
1988-01-01
A number of computer codes are available for performing thermal analysis of solid rocket motor nozzles. The Aerotherm Chemical Equilibrium (ACE) computer program can be used to perform a one-dimensional gas expansion to determine the state of the gas at each location of a nozzle. The ACE outputs can be used as input to a computer program called Momentum/Energy Integral Technique (MEIT) for predicting boundary layer development, shear, and heating on the surface of the nozzle. The output from MEIT can be used as input to another computer program called the Aerotherm Charring Material Thermal Response and Ablation Program (CMA). This program is used to calculate the ablation or decomposition response of the nozzle material. A code called the Failure Analysis Nonlinear Thermal and Structural Integrated Code (FANTASTIC) is also likely to be used for performing thermal analysis of solid rocket motor nozzles after the program is duly verified. Part of the verification work on FANTASTIC was done by using one- and two-dimensional heat transfer examples with known answers. An attempt was made to prepare input for performing thermal analysis of the CCT nozzle using the FANTASTIC computer code. The CCT nozzle problem will first be solved using ACE, MEIT, and CMA. The same problem will then be solved using FANTASTIC, and the results will be compared for verification of FANTASTIC.
SCALE: A modular code system for performing standardized computer analyses for licensing evaluation
1997-03-01
This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This manual covers an array of modules written for the SCALE package, consisting of drivers, system libraries, cross section and materials properties libraries, input/output routines, storage modules, and help files.
FURN3D: A computer code for radiative heat transfer in pulverized coal furnaces
Ahluwalia, R.K.; Im, K.H.
1992-08-01
A computer code FURN3D has been developed for assessing the impact of burning different coals on the heat absorption pattern in pulverized coal furnaces. The code is unique in its ability to conduct detailed spectral calculations of radiation transport in furnaces, fully accounting for the size distributions of char, soot and ash particles, ash content, and ash composition. The code uses a hybrid technique of solving the three-dimensional radiation transport equation for absorbing, emitting and anisotropically scattering media. The technique achieves an optimal mix of computational speed and accuracy by combining the discrete ordinate method (S₄), modified differential approximation (MDA) and P₁ approximation in different ranges of optical thickness. The code uses spectroscopic data for estimating the absorption coefficients of the participating gases CO₂, H₂O and CO. It invokes Mie theory for determining the extinction and scattering coefficients of combustion particulates. The optical constants of char, soot and ash are obtained from dispersion relations derived from reflectivity, transmissivity and extinction measurements. A control-volume formulation is adopted for determining the temperature field inside the furnace. A simple char burnout model is employed for estimating heat release and evolution of particle size distribution. The code is written in Fortran 77, has modular form, and is machine-independent. The computer memory required by the code depends upon the number of grid points specified and whether the transport calculations are performed on a spectral or gray basis.
Wilson, R.D.; Price, R.K.; Kosanke, K.L.
1983-03-01
As part of the Department of Energy's National Uranium Resource Evaluation (NURE) project's Technology Development effort, a number of computer codes and accompanying data bases were assembled for use in modeling responses of nuclear borehole logging sondes. The logging methods include fission neutron, active and passive gamma-ray, and gamma-gamma. These CDC-compatible computer codes and data bases are available on magnetic tape from the DOE Technical Library at its Grand Junction Area Office. Some of the computer codes are standard radiation-transport programs that have been available to the radiation shielding community for several years. Other codes were specifically written to model the response of borehole radiation detectors or are specialized borehole modeling versions of existing Monte Carlo transport programs. Results from several radiation modeling studies are available as two large data bases (neutron and gamma-ray). These data bases are accompanied by appropriate processing programs that permit the user to model a wide range of borehole and formation-parameter combinations for fission-neutron, neutron-activation, and gamma-gamma logs. The first part of this report consists of a brief abstract for each code or data base. The abstract gives the code name and title, short description, auxiliary requirements, typical running time (CDC 6600), and a list of references. The next section gives format specifications and/or a directory for the tapes. The final section of the report presents listings for programs used to convert data bases between machine floating-point and EBCDIC.
GEOS Code Development Road Map - May, 2013
Johnson, Scott; Settgast, Randolph; Fu, Pengcheng; Antoun, Tarabay; Ryerson, F. J.
2013-05-03
GEOS is a massively parallel computational framework designed to enable HPC-based simulations of subsurface reservoir stimulation activities with the goal of optimizing current operations and evaluating innovative stimulation methods. GEOS will enable coupling of different solvers associated with the various physical processes occurring during reservoir stimulation in unique and sophisticated ways, adapted to various geologic settings, materials and stimulation methods. The overall architecture of the framework includes consistent data structures and will allow incorporation of additional physical and materials models as demanded by future applications. Along with predicting the initiation, propagation and reactivation of fractures, GEOS will also generate a seismic source term that can be linked with seismic wave propagation codes to generate synthetic microseismicity at surface and downhole arrays. Similarly, the output from GEOS can be linked with existing fluid/thermal transport codes. GEOS can also be linked with existing, non-intrusive uncertainty quantification schemes to constrain uncertainty in its predictions and sensitivity to the various parameters describing the reservoir and stimulation operations. We anticipate that an implicit-explicit 3D version of GEOS, including a preliminary seismic source model, will be available for parametric testing and validation against experimental and field data by Oct. 1, 2013.
ERIC Educational Resources Information Center
Knowlton, Marie; Wetzel, Robin
2006-01-01
This study compared the length of text in English Braille American Edition, the Nemeth code, and the computer braille code with the Unified English Braille Code (UEBC)--also known as Unified English Braille (UEB). The findings indicate that differences in the length of text are dependent on the type of material that is transcribed and the grade…
pyro: A teaching code for computational astrophysical hydrodynamics
NASA Astrophysics Data System (ADS)
Zingale, M.
2014-10-01
We describe pyro: a simple, freely-available code to aid students in learning the computational hydrodynamics methods widely used in astrophysics. pyro is written with simplicity and learning in mind and intended to allow students to experiment with various methods popular in the field, including those for advection, compressible and incompressible hydrodynamics, multigrid, and diffusion in a finite-volume framework. We show some of the test problems from pyro, describe its design philosophy, and suggest extensions for students to build their understanding of these methods.
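Of the methods a teaching code like pyro covers, finite-volume advection is the simplest entry point. A first-order upwind update on a periodic 1-D grid can be sketched as follows (illustrative only, not pyro's actual API):

```python
import numpy as np

def upwind_advect(q, u, dx, dt, nsteps):
    """Advance the linear advection equation q_t + u q_x = 0 with the
    first-order upwind finite-volume scheme on a periodic 1-D grid.

    The flux difference form keeps the update conservative: the total
    amount of q is preserved to round-off.
    """
    for _ in range(nsteps):
        flux = u * q
        if u >= 0:
            # information travels rightward: difference against the left neighbor
            q = q - dt / dx * (flux - np.roll(flux, 1))
        else:
            # information travels leftward: difference against the right neighbor
            q = q - dt / dx * (np.roll(flux, -1) - flux)
    return q
```

Students can verify conservation and observe the scheme's characteristic numerical diffusion by advecting a top-hat profile around the periodic domain.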
Nyx: A MASSIVELY PARALLEL AMR CODE FOR COMPUTATIONAL COSMOLOGY
Almgren, Ann S.; Bell, John B.; Lijewski, Mike J.; Lukic, Zarija; Van Andel, Ethan
2013-03-01
We present a new N-body and gas dynamics code, called Nyx, for large-scale cosmological simulations. Nyx follows the temporal evolution of a system of discrete dark matter particles gravitationally coupled to an inviscid ideal fluid in an expanding universe. The gas is advanced in an Eulerian framework with block-structured adaptive mesh refinement; a particle-mesh scheme using the same grid hierarchy is used to solve for self-gravity and advance the particles. Computational results demonstrating the validation of Nyx on standard cosmological test problems, and the scaling behavior of Nyx to 50,000 cores, are presented.
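The particle-mesh gravity step described above begins by depositing particle mass onto the grid. A cloud-in-cell (linear-weighting) deposition in one dimension can be sketched as follows; this is an illustrative sketch, not Nyx's implementation:

```python
import numpy as np

def cic_deposit(positions, masses, ngrid, boxsize):
    """Deposit point masses onto a periodic 1-D mesh with cloud-in-cell
    (linear) weights, returning a density field.

    Each particle's mass is shared between its two nearest cell centers,
    so the integral of the density recovers the total mass exactly.
    """
    rho = np.zeros(ngrid)
    dx = boxsize / ngrid
    for x, m in zip(positions, masses):
        s = x / dx - 0.5           # position in cell-centered coordinates
        i = int(np.floor(s))
        w = s - i                  # linear weight given to the right neighbor
        rho[i % ngrid] += m * (1.0 - w) / dx
        rho[(i + 1) % ngrid] += m * w / dx
    return rho
```

In a full particle-mesh solver, the resulting density would be fed to a Poisson solve for the potential, and forces interpolated back to the particles with the same weights.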
Methodology, status and plans for development and assessment of Cathare code
Bestion, D.; Barre, F.; Faydide, B.
1997-07-01
This paper presents the methodology, status, and plans for the development, assessment, and uncertainty evaluation of the Cathare code. Cathare is a thermalhydraulic code developed by CEA (DRN), IPSN, EDF, and FRAMATOME for PWR safety analysis. First, the status of code development and assessment is presented, together with the general strategy used for developing and assessing the code. Analytical experiments with separate-effect tests and component tests are used for the development and validation of closure laws. Successive Revisions of constitutive laws are implemented in successive Versions of the code and assessed. System tests or integral tests are used to validate the general consistency of the Revision. Each delivery of a code Version + Revision is fully assessed and documented. A methodology is being developed to determine the uncertainty in all constitutive laws of the code using calculations of many analytical tests and applying the Discrete Adjoint Sensitivity Method (DASM). Finally, the plans for future development of the code are presented. They concern optimization of code performance through parallel computing (the code will be used for real-time, full-scope plant simulators), coupling with many other codes (neutronics codes, severe accident codes), and application of the code to containment thermalhydraulics. Physical improvements are also required in the field of low-pressure transients and in the modeling for the 3-D model.
Verification of the VARSKIN beta skin dose calculation computer code.
Sherbini, Sami; DeCicco, Joseph; Gray, Anita Turner; Struckmeyer, Richard
2008-06-01
The computer code VARSKIN is used extensively to calculate dose to the skin resulting from contaminants on the skin or on protective clothing covering the skin. The code uses six pre-programmed source geometries, four of which are volume sources, and a wide range of user-selectable radionuclides. Some verification of this code had been carried out before the current version, version 3.0, was released, but it was limited in extent and did not include all the source geometries that the code is capable of modeling. This work extends that verification to include all the source geometries programmed in the code, over a wide range of beta radiation energies and skin depths. Verification was carried out by comparing the doses calculated using VARSKIN with the doses for similar geometries calculated using the Monte Carlo radiation transport code MCNP5. Beta end-point energies used in the calculations ranged from 0.3 MeV up to 2.3 MeV. The results showed excellent agreement between the MCNP and VARSKIN calculations: within a few percent for point and disc sources, and within 20% for the other sources with the exception of a few cases, mainly at the low end of the beta end-point energies. Based on the work in this paper, VARSKIN is sufficiently accurate for calculating skin doses resulting from skin contamination, and the uncertainties arising from its use are likely to be small compared with other uncertainties that typically arise in this type of dose assessment, such as those resulting from a lack of exact information on the size, shape, and density of the contaminant; the depth of the sensitive layer of the skin at the location of the contamination; the duration of the exposure; and the possibility of the source moving over various areas of the skin during the exposure period if the contaminant is on protective clothing.
Schrödinger's code-script: not a genetic cipher but a code of development.
Walsby, A E; Hodge, M J S
2017-06-01
In his book What is Life? Erwin Schrödinger coined the term 'code-script', thought by some to be the first published suggestion of a hereditary code and perhaps a forerunner of the genetic code. The etymology of 'code' suggests three meanings relevant to 'code-script', which we distinguish as 'cipher-code', 'word-code' and 'rule-code'. Cipher-codes and word-codes entail translation of one set of characters into another. The genetic code comprises not one but two cipher-codes: the first is the DNA 'base-pairing cipher'; the second is the 'nucleotide-amino-acid cipher', which involves the translation of DNA base sequences into amino-acid sequences. We suggest that Schrödinger's code-script is a form of 'rule-code', a set of rules that, like the 'highway code' or 'penal code', requires no translation of a message. Schrödinger first relates his code-script to chromosomal genes made of protein. Ignorant of its properties, however, he later abandons 'protein' and adopts in its place a hypothetical, isomeric 'aperiodic solid' whose atoms he imagines rearranged in countless different conformations, which together are responsible for the patterns of ontogenetic development. In an attempt to explain the large number of combinations required, Schrödinger referred to the Morse code (a cipher), but in doing so unwittingly misled readers into believing that he intended a cipher-code resembling the genetic code. We argue that the modern equivalent of Schrödinger's code-script is a rule-code of organismal development based largely on the synthesis, folding, properties and interactions of numerous proteins, each performing a specific task. Copyright © 2016. Published by Elsevier Ltd.
Development of Tripropellant CFD Design Code
NASA Technical Reports Server (NTRS)
Farmer, Richard C.; Cheng, Gary C.; Anderson, Peter G.
1998-01-01
A CFD design code for tripropellant systems, such as GO2/H2/RP-1, has been developed to predict the local mixing of multiple propellant streams as they are injected into a rocket motor. The code utilizes real fluid properties to account for the mixing and finite-rate combustion processes which occur near an injector faceplate; thus the analysis serves as a multi-phase homogeneous spray combustion model. Proper accounting of the combustion allows accurate gas-side temperature predictions, which are essential for accurate wall heating analyses. The complex secondary flows which are predicted to occur near a faceplate cannot be quantitatively predicted by less accurate methodology. Test cases have been simulated to describe an axisymmetric tripropellant coaxial injector and a 3-dimensional RP-1/LO2 impinger injector system. The analysis has been shown to realistically describe such injector combustion flowfields. The code is also valuable for designing meaningful future experiments by determining the critical location and type of measurements needed.
NASA Technical Reports Server (NTRS)
Ameri, Ali; Shyam, Vikram; Rigby, David; Poinsatte, Phillip; Thurman, Douglas; Steinthorsson, Erlendur
2014-01-01
Computational fluid dynamics (CFD) analysis using the Reynolds-averaged Navier-Stokes (RANS) formulation for turbomachinery-related flows has enabled improved engine component designs. RANS methodology has limitations related to its inability to accurately describe the spectrum of flow phenomena encountered in engines. Examples of flows that are difficult to compute accurately with RANS include laminar/turbulent transition, turbulent mixing due to the mixing of streams, and separated flows. Large eddy simulation (LES) can improve accuracy, but at a considerably higher cost. In recent years, hybrid schemes that take advantage of both unsteady RANS and LES have been proposed. This study investigated an alternative scheme, the time-filtered Navier-Stokes (TFNS) method, applied to compressible flows. The method developed by Shih and Liu was implemented in the Glenn Heat Transfer (Glenn-HT) code and applied to film-cooling flows. In this report the method and its implementation are briefly described. Film effectiveness results are shown for film cooling from a row of 30° holes with a pitch of 3.0 diameters emitting air at a nominal density ratio of unity and two blowing ratios of 0.5 and 1.0. Flow features under those conditions are also described.
NASA Technical Reports Server (NTRS)
Ameri, Ali; Shyam, Vikram; Rigby, David; Poinsatte, Philip; Thurman, Douglas; Steinthorsson, Erlendur
2014-01-01
Computational fluid dynamics (CFD) analysis using the Reynolds-averaged Navier-Stokes (RANS) formulation for turbomachinery-related flows has enabled improved engine component designs. RANS methodology has limitations related to its inability to accurately describe the spectrum of flow phenomena encountered in engines. Examples of flows that are difficult to compute accurately with RANS include laminar/turbulent transition, turbulent mixing due to the mixing of streams, and separated flows. Large eddy simulation (LES) can improve accuracy, but at a considerably higher cost. In recent years, hybrid schemes that take advantage of both unsteady RANS and LES have been proposed. This study investigated an alternative scheme, the time-filtered Navier-Stokes (TFNS) method, applied to compressible flows. The method developed by Shih and Liu was implemented in the Glenn-HT code and applied to film-cooling flows. In this report the method and its implementation are briefly described. Film effectiveness results are shown for film cooling from a row of 30° holes with a pitch of 3.0 diameters emitting air at a nominal density ratio of unity and four blowing ratios of 0.5, 1.0, 1.5, and 2.0. Flow features under those conditions are also described.
Large-Eddy Simulation Code Developed for Propulsion Applications
NASA Technical Reports Server (NTRS)
DeBonis, James R.
2003-01-01
A large-eddy simulation (LES) code was developed at the NASA Glenn Research Center to provide more accurate and detailed computational analyses of propulsion flow fields. The accuracy of current computational fluid dynamics (CFD) methods is limited primarily by their inability to properly account for the turbulent motion present in virtually all propulsion flows. Because the efficiency and performance of a propulsion system are highly dependent on the details of this turbulent motion, it is critical for CFD to accurately model it. The LES code promises to give new CFD simulations an advantage over older methods by directly computing the large turbulent eddies, to correctly predict their effect on a propulsion system. Turbulent motion is a random, unsteady process whose behavior is difficult to predict through computer simulations. Current methods are based on Reynolds-Averaged Navier-Stokes (RANS) analyses that rely on models to represent the effect of turbulence within a flow field. The quality of the results depends on the quality of the model and its applicability to the type of flow field being studied. LES promises to be more accurate because it drastically reduces the amount of modeling necessary. It is the logical step toward improving turbulent flow predictions. In LES, the large-scale dominant turbulent motion is computed directly, leaving only the less significant small turbulent scales to be modeled. As part of the prediction, the LES method generates detailed information on the turbulence itself, providing important information for other applications, such as aeroacoustics. The LES code developed at Glenn for propulsion flow fields is being used both to analyze propulsion system components and to test improved LES algorithms (subgrid-scale models, filters, and numerical schemes). The code solves the compressible Favre-filtered Navier-Stokes equations using an explicit fourth-order accurate numerical scheme; it incorporates a compressible form of
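The scale separation at the heart of LES is a spatial filtering operation. A top-hat (box) filter on a periodic one-dimensional field can be sketched as follows; this is an illustrative sketch of the concept, not the Glenn code's compressible Favre-filtered formulation:

```python
import numpy as np

def box_filter(u, width):
    """Apply a top-hat (box) filter of the given cell width to a periodic
    1-D field: each point becomes the average of `width` neighboring cells.

    In LES terms, the returned field is the resolved (large-scale) part;
    the difference u - box_filter(u, width) is the subgrid part that a
    model must represent.
    """
    half = width // 2
    return sum(np.roll(u, s) for s in range(-half, width - half)) / width
```

Because the filter is a simple average over a periodic domain, it preserves the mean of the field, which mirrors the fact that filtering removes small scales without creating or destroying the conserved quantity.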
Biwer, B.M.; LePoire, D.J.; Chen, S.Y.
1996-03-01
The RISKIND computer program was developed for the analysis of radiological consequences and health risks to individuals and the collective population from exposures associated with the transportation of spent nuclear fuel (SNF) or other radioactive materials. The code is intended to provide scenario-specific analyses when evaluating alternatives for environmental assessment activities, including those for major federal actions involving radioactive material transport as required by the National Environmental Policy Act (NEPA). As such, rigorous procedures have been implemented to enhance the code's credibility, and strenuous efforts have been made to enhance the ease of use of the code. To increase the code's reliability and credibility, a new version of RISKIND was produced under a quality assurance plan that covered code development and testing, and a peer review process was conducted. During development of the new version, the flexibility and ease of use of RISKIND were enhanced through several major changes: (1) a Windows(TM) point-and-click interface replaced the old DOS menu system, (2) the remaining model input parameters were added to the interface, (3) databases were updated, (4) the program output was revised, and (5) on-line help was added. RISKIND has been well received by users and has been established as a key component in radiological transportation risk assessments through its acceptance by the U.S. Department of Energy community in recent environmental impact statements (EISs) and its continued use in the current preparation of several EISs.
NASA Technical Reports Server (NTRS)
Lilley, D. G.; Rhode, D. L.
1982-01-01
A primitive-variable (pressure-velocity) finite difference computer code was developed to predict swirling recirculating inert turbulent flows in axisymmetric combustors in general, and for application to a specific idealized combustion chamber with sudden or gradual expansion. The technique involves a staggered grid system for axial and radial velocities, a line relaxation procedure for efficient solution of the equations, a two-equation k-epsilon turbulence model, a stairstep boundary representation of the expansion flow, and realistic accommodation of swirl effects. A user's manual dealing with the computational problem is presented, showing how the mathematical basis and computational scheme may be translated into a computer program. A flow chart, FORTRAN IV listing, notes about various subroutines, and a user's guide are supplied as an aid to prospective users of the code.
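Line relaxation of the kind mentioned above reduces each grid line to a tridiagonal system, which is typically solved with the Thomas algorithm (TDMA). A generic sketch follows; this is not the original FORTRAN IV routine:

```python
def tdma(a, b, c, d):
    """Solve a tridiagonal system with the Thomas algorithm.

    a: sub-diagonal (a[0] unused), b: main diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side.
    Forward elimination is followed by back substitution; cost is O(n).
    """
    n = len(d)
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]          # eliminated pivot
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):           # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

In a line-relaxation sweep, one such solve is performed per grid line per iteration, which is what makes the procedure efficient compared with point-by-point relaxation.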
Geometric plane shapes for computer-generated holographic engraving codes
NASA Astrophysics Data System (ADS)
Augier, Ángel G.; Rabal, Héctor; Sánchez, Raúl B.
2017-04-01
We report a new theoretical and experimental study of hologravures, or holographic computer-generated laser engravings. A geometric theory of images based on the general principles of light-ray behaviour is presented. The models used are also applicable to similar engravings obtained by any non-laser method, and the solutions allow the analysis of particular situations, not only in light-reflection mode but also in transmission-mode geometry. This approach offers a novel perspective allowing the three-dimensional (3D) design of engraved images for specific ends. We prove theoretically that plane curves of very general geometric shapes can be used to encode image information onto a two-dimensional (2D) engraving, and that the choice of curve has a notable influence on the behaviour of the reconstructed images, which emerges as an exciting topic for investigation and extends the technique's applications. Several cases of codes using particular curvilinear shapes are studied experimentally. The computer-generated objects are coded using the chosen curve type and engraved by a laser on a plane surface of suitable material. All images are recovered optically by adequate illumination. The pseudoscopic or orthoscopic character of these images is considered, and an appropriate interpretation is presented.
GALPROP: New Developments in CR Propagation Code
NASA Astrophysics Data System (ADS)
Moskalenko, I. V.; Jones, F. C.; Mashnik, S. G.; Ptuskin, V. S.; Strong, A. W.
2003-07-01
The numerical Galactic CR propagation code GALPROP has been shown to reproduce simultaneously observational data of many kinds related to CR origin and propagation. Its ability to propagate all CR species in a self-consistent way has led to new results and also revealed new puzzles. We report on the latest updates of GALPROP, development of a Web-based user interface to facilitate access to the results of our models, and a library of evaluated isotopic production cross sections. Using an updated version of GALPROP, we study effects of wave-particle interactions in the interstellar medium (ISM).
An Object-oriented Computer Code for Aircraft Engine Weight Estimation
NASA Technical Reports Server (NTRS)
Tong, Michael T.; Naylor, Bret A.
2008-01-01
Reliable engine-weight estimation at the conceptual design stage is critical to the development of new aircraft engines. It helps to identify the best engine concept amongst several candidates. At NASA Glenn (GRC), the Weight Analysis of Turbine Engines (WATE) computer code, originally developed by Boeing Aircraft, has been used to estimate the engine weight of various conceptual engine designs. The code, written in FORTRAN, was originally developed for NASA in 1979. Since then, substantial improvements have been made to the code to improve the weight calculations for most of the engine components. Most recently, to improve the maintainability and extensibility of WATE, the FORTRAN code has been converted into an object-oriented version. The conversion was done within the NASA's NPSS (Numerical Propulsion System Simulation) framework. This enables WATE to interact seamlessly with the thermodynamic cycle model, which provides component flow data such as airflows, temperatures, and pressures that are required for sizing the components and weight calculations. The tighter integration between the NPSS and WATE would greatly enhance system-level analysis and optimization capabilities. It also would facilitate the enhancement of the WATE code for next-generation aircraft and space propulsion systems. In this paper, the architecture of the object-oriented WATE code (or WATE++) is described. Both the FORTRAN and object-oriented versions of the code are employed to compute the dimensions and weight of a 300-passenger aircraft engine (GE90 class). Both versions of the code produce essentially identical results, as should be the case. Keywords: NASA, aircraft engine, weight, object-oriented
An Object-Oriented Computer Code for Aircraft Engine Weight Estimation
NASA Technical Reports Server (NTRS)
Tong, Michael T.; Naylor, Bret A.
2009-01-01
Reliable engine-weight estimation at the conceptual design stage is critical to the development of new aircraft engines. It helps to identify the best engine concept amongst several candidates. At NASA Glenn Research Center (GRC), the Weight Analysis of Turbine Engines (WATE) computer code, originally developed by Boeing Aircraft, has been used to estimate the engine weight of various conceptual engine designs. The code, written in FORTRAN, was originally developed for NASA in 1979. Since then, substantial improvements have been made to the code to improve the weight calculations for most of the engine components. Most recently, to improve the maintainability and extensibility of WATE, the FORTRAN code has been converted into an object-oriented version. The conversion was done within the NASA's NPSS (Numerical Propulsion System Simulation) framework. This enables WATE to interact seamlessly with the thermodynamic cycle model which provides component flow data such as airflows, temperatures, and pressures, etc., that are required for sizing the components and weight calculations. The tighter integration between the NPSS and WATE would greatly enhance system-level analysis and optimization capabilities. It also would facilitate the enhancement of the WATE code for next-generation aircraft and space propulsion systems. In this paper, the architecture of the object-oriented WATE code (or WATE++) is described. Both the FORTRAN and object-oriented versions of the code are employed to compute the dimensions and weight of a 300-passenger aircraft engine (GE90 class). Both versions of the code produce essentially identical results as should be the case.
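The object-oriented component pattern described for WATE++ can be sketched in miniature as follows. The class names and structure here are hypothetical, since the abstracts do not reproduce the actual WATE++ architecture:

```python
class EngineComponent:
    """A sized engine component with a computed weight.

    Hypothetical illustration of the object-oriented pattern: each
    component owns its own data and sizing result, and the engine
    aggregates component weights.
    """
    def __init__(self, name, weight):
        self.name = name
        self.weight = weight   # lb or kg, as long as units are consistent

class Engine:
    """Aggregate of components; total weight is the sum of the parts."""
    def __init__(self):
        self.components = []

    def add(self, component):
        self.components.append(component)

    def total_weight(self):
        return sum(c.weight for c in self.components)
```

The benefit of this structure, as the abstract notes for WATE++, is extensibility: adding a new component type means adding a class, not editing a monolithic FORTRAN routine.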
The GC computer code for flow sheet simulation of pyrochemical processing of spent nuclear fuels
Ahluwalia, R.K.; Geyer, H.K.
1996-11-01
The GC computer code has been developed for flow sheet simulation of pyrochemical processing of spent nuclear fuel. It utilizes a robust algorithm SLG for analyzing simultaneous chemical reactions between species distributed across many phases. Models have been developed for analysis of the oxide fuel reduction process, salt recovery by electrochemical decomposition of lithium oxide, uranium separation from the reduced fuel by electrorefining, and extraction of fission products into liquid cadmium. The versatility of GC is demonstrated by applying the code to a flow sheet of current interest.
ERIC Educational Resources Information Center
Good, Jonathon; Keenan, Sarah; Mishra, Punya
2016-01-01
The popular press is rife with examples of how students in the United States and around the globe are learning to program, make, and tinker. The Hour of Code, maker-education, and similar efforts are advocating that more students be exposed to principles found within computer science. We propose an expansion beyond simply teaching computational…
XSECT: A computer code for generating fuselage cross sections - user's manual
NASA Technical Reports Server (NTRS)
Ames, K. R.
1982-01-01
A computer code, XSECT, has been developed to generate fuselage cross sections from a given area distribution and wing definition. The cross sections are generated to match the wing definition while conforming to the area requirement. An iterative procedure is used to generate each cross section. Fuselage area balancing may be included in this procedure if desired. The code is intended as an aid for engineers who must first design a wing under certain aerodynamic constraints and then design a fuselage for the wing such that the constraints remain satisfied. This report contains the information necessary for accessing and executing the code, which is written in FORTRAN to execute on the Cyber 170 series computers (NOS operating system) and produces graphical output for a Tektronix 4014 CRT. The LRC graphics software is used in combination with the interface between this software and the PLOT 10 software.
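The iterative area-matching idea can be illustrated with a toy version that rescales a closed cross section until its area meets a prescribed value. The real XSECT procedure also honors the wing definition, which this sketch omits:

```python
def scale_section_to_area(points, target_area, tol=1e-10, max_iter=50):
    """Iteratively scale a closed polygonal cross section about the origin
    until its (shoelace) area matches target_area.

    A toy stand-in for XSECT's area-matching iteration; the actual code
    must also keep the section conformal to the wing definition.
    """
    def shoelace(pts):
        n = len(pts)
        return abs(sum(pts[i][0] * pts[(i + 1) % n][1]
                       - pts[(i + 1) % n][0] * pts[i][1]
                       for i in range(n))) / 2.0

    for _ in range(max_iter):
        area = shoelace(points)
        if abs(area - target_area) < tol:
            break
        s = (target_area / area) ** 0.5   # area scales with the square of length
        points = [(x * s, y * s) for x, y in points]
    return points
```

For pure scaling the iteration converges in one step; with additional constraints (such as fixed wing intersection points) several passes are needed, which is why an iterative procedure is used.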
MMA, A Computer Code for Multi-Model Analysis
Poeter, Eileen P.; Hill, Mary C.
2007-01-01
This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that they tend to favor more complicated models as more data become available than do the other methods, which makes sense in many situations. Many applications of MMA will
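The default model discrimination criteria named above have standard textbook forms for least-squares calibrations. A sketch of the criteria and the resulting model weights follows; this uses the common Gaussian-likelihood form and is not MMA's exact implementation:

```python
import math

def information_criteria(ss_res, n, k):
    """AIC, AICc, and BIC for a least-squares fit with n observations and
    k estimated parameters, using the Gaussian-likelihood form.

    AICc adds a small-sample correction to AIC; BIC penalizes parameters
    more heavily as n grows.
    """
    aic = n * math.log(ss_res / n) + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)
    bic = n * math.log(ss_res / n) + k * math.log(n)
    return aic, aicc, bic

def model_weights(criteria):
    """Turn criterion values (smaller is better) into normalized model
    weights, exp(-delta/2) rescaled to sum to one, as used for model
    ranking and model-averaged predictions."""
    cmin = min(criteria)
    w = [math.exp(-0.5 * (c - cmin)) for c in criteria]
    total = sum(w)
    return [wi / total for wi in w]
```

Given criterion values for each calibrated alternative model, the weights can then be used directly for the model-averaged parameter estimates and predictions the report describes.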
Users manual for CAFE-3D : a computational fluid dynamics fire code.
Khalil, Imane; Lopez, Carlos; Suo-Anttila, Ahti Jorma
2005-03-01
The Container Analysis Fire Environment (CAFE) computer code has been developed to model all relevant fire physics for predicting the thermal response of massive objects engulfed in large fires. It provides realistic fire thermal boundary conditions for use in design of radioactive material packages and in risk-based transportation studies. The CAFE code can be coupled to commercial finite-element codes such as MSC PATRAN/THERMAL and ANSYS. This coupled system of codes can be used to determine the internal thermal response of finite element models of packages to a range of fire environments. This document is a user manual describing how to use the three-dimensional version of CAFE, as well as a description of CAFE input and output parameters. Since this is a user manual, only a brief theoretical description of the equations and physical models is included.
TEMP: a computer code to calculate fuel pin temperatures during a transient. [LMFBR
Bard, F E; Christensen, B Y; Gneiting, B C
1980-04-01
The computer code TEMP calculates fuel pin temperatures during a transient. It was developed to accommodate temperature calculations in any system of axi-symmetric concentric cylinders. When used to calculate fuel pin temperatures, the code will handle a fuel pin as simple as a solid cylinder or as complex as a central void surrounded by fuel that is broken into three regions by two circumferential cracks. Any fuel situation between these two extremes can be analyzed along with additional cladding, heat sink, coolant or capsule regions surrounding the fuel. The one-region version of the code accurately calculates the solution to two problems having closed-form solutions. The code uses an implicit method, an explicit method and a Crank-Nicolson (implicit-explicit) method.
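The Crank-Nicolson (implicit-explicit) method mentioned above can be illustrated on a simplified one-dimensional slab with fixed-temperature boundaries. TEMP itself handles axi-symmetric concentric cylinders, so the geometry, grid, and property values here are illustrative assumptions only:

```python
import numpy as np

def crank_nicolson_step(T, alpha, dt, dx):
    """One Crank-Nicolson time step for 1D conduction with fixed-temperature
    (Dirichlet) boundaries -- a simplified slab stand-in for the
    concentric-cylinder geometry handled by TEMP."""
    n = len(T)
    r = alpha * dt / (2.0 * dx**2)
    # Tridiagonal systems: (I - r*L) T_new = (I + r*L) T_old
    A = np.eye(n) * (1 + 2 * r)
    B = np.eye(n) * (1 - 2 * r)
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = -r
        B[i, i + 1] = B[i + 1, i] = r
    # Hold the end temperatures fixed
    A[0, :] = 0.0; A[0, 0] = 1.0; B[0, :] = 0.0; B[0, 0] = 1.0
    A[-1, :] = 0.0; A[-1, -1] = 1.0; B[-1, :] = 0.0; B[-1, -1] = 1.0
    return np.linalg.solve(A, B @ T)

T = np.full(21, 600.0)   # illustrative initial fuel temperature, K
T[0] = T[-1] = 400.0     # coolant-side boundary temperatures, K
for _ in range(200):
    T = crank_nicolson_step(T, alpha=1e-5, dt=0.05, dx=1e-3)
```

The Crank-Nicolson average of the implicit and explicit operators is second-order accurate in time and unconditionally stable, which is why codes like TEMP offer it alongside the purely implicit and explicit schemes.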
NASA Technical Reports Server (NTRS)
Salazar, Giovanni; Droba, Justin C.; Oliver, Brandon; Amar, Adam J.
2016-01-01
With the recent development of multi-dimensional thermal protection system (TPS) material response codes, the capability to account for radiative heating has become a requirement. This paper presents recent efforts to implement such capabilities in the CHarring Ablator Response (CHAR) code developed at NASA's Johnson Space Center. This work also describes the different numerical methods implemented in the code to compute view factors for radiation problems involving multiple surfaces. Furthermore, verification and validation of the code's radiation capabilities are demonstrated by comparing solutions to analytical results, to other codes, and to radiant test data.
Reasoning with Computer Code: a new Mathematical Logic
NASA Astrophysics Data System (ADS)
Pissanetzky, Sergio
2013-01-01
A logic is a mathematical model of knowledge used to study how we reason, how we describe the world, and how we infer the conclusions that determine our behavior. The logic presented here is natural. It has been experimentally observed, not designed. It represents knowledge as a causal set, includes a new type of inference based on the minimization of an action functional, and generates its own semantics, making it unnecessary to prescribe one. This logic is suitable for high-level reasoning with computer code, including tasks such as self-programming, object-oriented analysis, refactoring, systems integration, code reuse, and automated programming from sensor-acquired data. A strong theoretical foundation exists for the new logic. The inference derives laws of conservation from the permutation symmetry of the causal set, and calculates the corresponding conserved quantities. The association between symmetries and conservation laws is a fundamental and well-known law of nature and a general principle in modern theoretical Physics. The conserved quantities take the form of a nested hierarchy of invariant partitions of the given set. The logic associates elements of the set and binds them together to form the levels of the hierarchy. It is conjectured that the hierarchy corresponds to the invariant representations that the brain is known to generate. The hierarchies also represent fully object-oriented, self-generated code that can be directly compiled and executed (when a compiler becomes available), or translated to a suitable programming language. The approach is constructivist because all entities are constructed bottom-up, with the fundamental principles of nature being at the bottom, and their existence is proved by construction. The new logic is mathematically introduced and later discussed in the context of transformations of algorithms and computer programs. We discuss what a full self-programming capability would really mean. We argue that self
NASA Technical Reports Server (NTRS)
Chambers, James P.; Cleveland, Robin O.; Bass, David T.; Raspet, Richard; Blackstock, David T.; Hamilton, Mark F.
1996-01-01
A numerical exercise to compare computer codes for the propagation of sonic booms through the atmosphere is reported. For the initial portion of the comparison, artificial, yet realistic, waveforms were numerically propagated through identical atmospheres. In addition to this comparison, one of these codes has been used to make preliminary predictions of the boom generated from a recent SR-71 flight. For the initial comparison, ground waveforms are calculated using four different codes or algorithms: (1) weak shock theory, an analytical prediction, (2) SHOCKN, a mixed time and frequency domain code developed at the University of Mississippi, (3) ZEPHYRUS, another mixed time and frequency code developed at the University of Texas, and (4) THOR, a pure time domain code recently developed at the University of Texas. The codes are described and their differences noted.
Guide to AERO2S and WINGDES Computer Codes for Prediction and Minimization of Drag Due to Lift
NASA Technical Reports Server (NTRS)
Carlson, Harry W.; Chu, Julio; Ozoroski, Lori P.; McCullers, L. Arnold
1997-01-01
The computer codes, AERO2S and WINGDES, are now widely used for the analysis and design of airplane lifting surfaces under conditions that tend to induce flow separation. These codes have undergone continued development to provide additional capabilities since the introduction of the original versions over a decade ago. This code development has been reported in a variety of publications (NASA technical papers, NASA contractor reports, and society journals). Some modifications have not been publicized at all. Users of these codes have suggested the desirability of combining in a single document the descriptions of the code development, an outline of the features of each code, and suggestions for effective code usage. This report is intended to supply that need.
PIC codes for plasma accelerators on emerging computer architectures (GPUS, Multicore/Manycore CPUS)
NASA Astrophysics Data System (ADS)
Vincenti, Henri
2016-03-01
The advent of exascale computers will enable 3D simulations of new laser-plasma interaction regimes that were previously out of reach of current Petascale computers. However, the paradigm used to write current PIC codes will have to change in order to fully exploit the potential of these new computing architectures. Indeed, achieving Exascale computing facilities in the next decade will be a great challenge in terms of energy consumption and will imply hardware developments directly impacting our way of implementing PIC codes. As data movement (from die to network) is by far the most energy-consuming part of an algorithm, future computers will tend to increase memory locality at the hardware level and reduce energy consumption related to data movement by using more and more cores on each compute node ("fat nodes") that will have a reduced clock speed to allow for efficient cooling. To compensate for the frequency decrease, CPU vendors are making use of long SIMD instruction registers that are able to process multiple data with one arithmetic operator in one clock cycle. SIMD register length is expected to double every four years. GPUs also have a reduced clock speed per core and can process Multiple Instructions on Multiple Data (MIMD). At the software level, Particle-In-Cell (PIC) codes will thus have to achieve both good memory locality and vectorization (for Multicore/Manycore CPUs) to fully take advantage of these upcoming architectures. In this talk, we present the portable solutions we implemented in our high-performance skeleton PIC code PICSAR to achieve both good memory locality and cache reuse as well as good vectorization on SIMD architectures. We also present the portable solutions used to parallelize the pseudo-spectral quasi-cylindrical code FBPIC on GPUs using the Numba python compiler.
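The memory-locality and vectorization theme above can be illustrated with a structure-of-arrays particle layout, where each attribute is a contiguous array and the particle push reduces to unit-stride loops that a compiler (or NumPy/Numba) can vectorize. This is a toy sketch of the layout idea only, not PICSAR or FBPIC code:

```python
import numpy as np

# Structure-of-arrays layout: positions and velocities are separate
# contiguous arrays, so the update below is a unit-stride SIMD-friendly
# loop. An array-of-structures layout (one record per particle) would
# scatter each attribute in memory and defeat vectorization.
n = 1_000_000
x = np.zeros(n)
v = np.random.default_rng(0).normal(size=n)

def push(x, v, E, qm, dt):
    """Leapfrog push of all particles in a uniform electric field E
    (charge-to-mass ratio qm); both updates are fully vectorized."""
    v += qm * E * dt   # velocity update
    x += v * dt        # position update
    return x, v

x, v = push(x, v, E=1.0, qm=-1.0, dt=1e-3)
```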
The Physics Computer Development Project.
ERIC Educational Resources Information Center
Bork, Alfred M.; Ballard, Richard
This paper describes the design, development and use of computer-based teaching materials in the Physics Computer Development Project at the University of California at Irvine. The decision was made to develop dialogs and simulations related to existing material using the computational mode. The principal thrust has been in wave motion and…
Three-dimensional radiation dose mapping with the TORT computer code
Slater, C.O.; Pace, J.V., III; Childs, R.L.; Haire, M.J.; Koyama, T.
1991-01-01
The Consolidated Fuel Reprocessing Program (CFRP) at Oak Ridge National Laboratory (ORNL) has performed radiation shielding studies in support of various facility designs for many years. Computer codes employing the point-kernel method have been used, and the accuracy of these codes is within acceptable limits. However, to further improve the accuracy and to calculate dose at a larger number of locations, a higher order method is desired, even for analyses performed in the early stages of facility design. Consequently, the three-dimensional discrete ordinates transport code TORT, developed at ORNL in the mid-1980s, was selected to examine in detail the dose received at equipment locations. The capabilities of the code have been previously reported. Recently, the Power Reactor and Nuclear Fuel Development Corporation in Japan and the US Department of Energy have used the TORT code as part of a collaborative agreement to jointly develop breeder reactor fuel reprocessing technology. In particular, CFRP used the TORT code to estimate radiation dose levels within the main process cell for a conceptual plant design and to establish process equipment lifetimes. The results reported in this paper are for a conceptual plant design that included the mechanical head end (i.e., the disassembly and shear machines), solvent extraction equipment, and miscellaneous process support equipment.
ZEUS-MP/2: Computational Fluid Dynamics Code
NASA Astrophysics Data System (ADS)
Hayes, John C.; Norman, Michael L.; Fiedler, Robert A.; Bordner, James O.; Li, Pak Shing; Clark, Stephen E.; Ud-Doula, Asif; Mac Low, Mordecai-Mark
2011-02-01
ZEUS-MP is a multiphysics, massively parallel, message-passing implementation of the ZEUS code. ZEUS-MP offers an MHD algorithm that is better suited for multidimensional flows than the ZEUS-2D module by virtue of modifications to the method of characteristics scheme first suggested by Hawley & Stone. This MHD module is shown to compare quite favorably to the TVD scheme described by Ryu et al. ZEUS-MP is the first publicly available ZEUS code to allow the advection of multiple chemical (or nuclear) species. Radiation hydrodynamic simulations are enabled via an implicit flux-limited radiation diffusion (FLD) module. The hydrodynamic, MHD, and FLD modules can be used, singly or in concert, in one, two, or three space dimensions. In addition, so-called 1.5D and 2.5D grids, in which the "half-D" denotes a symmetry axis along which a constant but nonzero value of velocity or magnetic field is evolved, are supported. Self-gravity can be included either through the assumption of a GM/r potential or through a solution of Poisson's equation using one of three linear solver packages (conjugate gradient, multigrid, and FFT) provided for that purpose. Point-mass potentials are also supported. Because ZEUS-MP is designed for large simulations on parallel computing platforms, considerable attention is paid to the parallel performance characteristics of each module in the code. Strong-scaling tests involving pure hydrodynamics (with and without self-gravity), MHD, and RHD are performed in which large problems (256³ zones) are distributed among as many as 1024 processors of an IBM SP3. Parallel efficiency is a strong function of the amount of communication required between processors in a given algorithm, but all modules are shown to scale well on up to 1024 processors for the chosen fixed problem size.
Computer Tensor Codes to Design the Warp Drive
NASA Astrophysics Data System (ADS)
Maccone, C.
To address problems in Breakthrough Propulsion Physics (BPP) and design the Warp Drive one needs sheer computing capabilities. This is because General Relativity (GR) and Quantum Field Theory (QFT) are so mathematically sophisticated that the amount of analytical calculations is prohibitive and one can hardly do all of them by hand. In this paper we make a comparative review of the main tensor calculus capabilities of the three most advanced and commercially available “symbolic manipulator” codes. We also point out that currently one faces such a variety of different conventions in tensor calculus that it is difficult or impossible to compare results obtained by different scholars in GR and QFT. Mathematical physicists, experimental physicists and engineers have each their own way of customizing tensors, especially by using different metric signatures, different metric determinant signs, different definitions of the basic Riemann and Ricci tensors, and by adopting different systems of physical units. This chaos greatly hampers progress toward the design of the Warp Drive. It is thus suggested that NASA would be a suitable organization to establish standards in symbolic tensor calculus and anyone working in BPP should adopt these standards. Alternatively other institutions, like CERN in Europe, might consider the challenge of starting the preliminary implementation of a Universal Tensor Code to design the Warp Drive.
A proposed framework for computational fluid dynamics code calibration/validation
Oberkampf, W.L.
1993-12-31
The paper reviews the terminology and methodology that have been introduced during the last several years for building confidence in the predictions from Computational Fluid Dynamics (CFD) codes. Code validation terminology developed for nuclear reactor analyses and aerospace applications is reviewed and evaluated. Currently used terminology such as "calibrated code," "validated code," and "validation experiment" is discussed along with the shortcomings and criticisms of these terms. A new framework is proposed for building confidence in CFD code predictions that overcomes some of the difficulties of past procedures and delineates the causes of uncertainty in CFD predictions. Building on previous work, new definitions of code verification and calibration are proposed. These definitions provide more specific requirements for the knowledge level of the flow physics involved and the solution accuracy of the given partial differential equations. As part of the proposed framework, categories are also proposed for flow physics research, flow modeling research, and the application of numerical predictions. The contributions of physical experiments, analytical solutions, and other numerical solutions are discussed, showing that each should be designed to achieve a distinctively separate purpose in building confidence in the accuracy of CFD predictions. A number of examples are given for each approach to suggest methods for obtaining the highest value for CFD code quality assurance.
The PARTRAC code: Status and recent developments
NASA Astrophysics Data System (ADS)
Friedland, Werner; Kundrat, Pavel
Biophysical modeling is of particular value for predictions of radiation effects due to manned space missions. PARTRAC is an established tool for Monte Carlo-based simulations of radiation track structures, damage induction in cellular DNA and its repair [1]. Dedicated modules describe interactions of ionizing particles with the traversed medium, the production and reactions of reactive species, and score DNA damage determined by overlapping track structures with multi-scale chromatin models. The DNA repair module describes the repair of DNA double-strand breaks (DSB) via the non-homologous end-joining pathway; the code explicitly simulates the spatial mobility of individual DNA ends in parallel with their processing by major repair enzymes [2]. To simulate the yields and kinetics of radiation-induced chromosome aberrations, the repair module has been extended by tracking the information on the chromosome origin of ligated fragments as well as the presence of centromeres [3]. PARTRAC calculations have been benchmarked against experimental data on various biological endpoints induced by photon and ion irradiation. The calculated DNA fragment distributions after photon and ion irradiation reproduce corresponding experimental data and their dose- and LET-dependence. However, in particular for high-LET radiation many short DNA fragments are predicted below the detection limits of the measurements, so that the experiments significantly underestimate DSB yields by high-LET radiation [4]. The DNA repair module correctly describes the LET-dependent repair kinetics after ⁶⁰Co gamma-rays and different N-ion radiation qualities [2]. First calculations on the induction of chromosome aberrations have overestimated the absolute yields of dicentrics, but correctly reproduced their relative dose-dependence and the difference between gamma- and alpha-particle irradiation [3]. Recent developments of the PARTRAC code include a model of hetero- vs euchromatin structures to enable
NASA Technical Reports Server (NTRS)
Mcgaw, Michael A.; Saltsman, James F.
1991-01-01
A recently developed high-temperature fatigue life prediction computer code is presented, based on the Total Strain version of Strainrange Partitioning (TS-SRP). Included in this code are procedures for characterizing the creep-fatigue durability behavior of an alloy according to TS-SRP guidelines and predicting cyclic life for complex cycle types for both isothermal and thermomechanical conditions. A reasonably extensive materials properties database is included with the code.
TP Clement
1999-06-24
RT3DV1 (Reactive Transport in 3-Dimensions) is a computer code that solves the coupled partial differential equations that describe reactive-flow and transport of multiple mobile and/or immobile species in three-dimensional saturated groundwater systems. RT3D is a generalized multi-species version of the US Environmental Protection Agency (EPA) transport code, MT3D (Zheng, 1990). The current version of RT3D uses the advection and dispersion solvers from the DOD-1.5 (1997) version of MT3D. As with MT3D, RT3D also requires the groundwater flow code MODFLOW for computing spatial and temporal variations in groundwater head distribution. The RT3D code was originally developed to support the contaminant transport modeling efforts at natural attenuation demonstration sites. As a research tool, RT3D has also been used to model several laboratory and pilot-scale active bioremediation experiments. The performance of RT3D has been validated by comparing the code results against various numerical and analytical solutions. The code is currently being used to model field-scale natural attenuation at multiple sites. The RT3D code is unique in that it includes an implicit reaction solver that makes the code sufficiently flexible for simulating various types of chemical and microbial reaction kinetics. RT3D V1.0 supports seven pre-programmed reaction modules that can be used to simulate different types of reactive contaminants including benzene-toluene-xylene mixtures (BTEX), and chlorinated solvents such as tetrachloroethene (PCE) and trichloroethene (TCE). In addition, RT3D has a user-defined reaction option that can be used to simulate any other types of user-specified reactive transport systems. This report describes the mathematical details of the RT3D computer code and its input/output data structure. It is assumed that the user is familiar with the basics of groundwater flow and contaminant transport mechanics. In addition, RT3D users are expected to have some experience in
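The chlorinated-solvent kinetics that pre-programmed reaction modules of this kind represent are, in their simplest form, a first-order sequential decay chain (e.g., PCE degrading to TCE). A sketch using the standard analytical solution for a two-member chain, with illustrative rate constants rather than RT3D defaults:

```python
import math

def sequential_decay(c_pce0, k1, k2, t):
    """Analytical first-order decay chain PCE -> TCE -> (products):
    the simplest form of the chlorinated-solvent kinetics such
    reaction modules represent. Rates and concentrations here are
    illustrative, not RT3D defaults."""
    c_pce = c_pce0 * math.exp(-k1 * t)
    if abs(k1 - k2) < 1e-12:
        # Degenerate case of equal rate constants
        c_tce = c_pce0 * k1 * t * math.exp(-k1 * t)
    else:
        c_tce = c_pce0 * k1 / (k2 - k1) * (math.exp(-k1 * t) - math.exp(-k2 * t))
    return c_pce, c_tce

# 1.0 mg/L initial PCE, rates in 1/day, evaluated at 30 days
pce, tce = sequential_decay(1.0, k1=0.05, k2=0.02, t=30.0)
```

In RT3D itself these kinetics are handled numerically by the implicit reaction solver, which is what allows arbitrary user-defined rate laws rather than only cases with closed-form solutions.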
Computer code for the atomistic simulation of lattice defects and dynamics. [COMENT code
Schiffgens, J.O.; Graves, N.J.; Oster, C.A.
1980-04-01
This document has been prepared to satisfy the need for a detailed, up-to-date description of a computer code that can be used to simulate phenomena on an atomistic level. COMENT was written in FORTRAN IV and COMPASS (CDC assembly language) to solve the classical equations of motion for a large number of atoms interacting according to a given force law, and to perform the desired ancillary analysis of the resulting data. COMENT is a dual-purpose code intended to describe static defect configurations as well as the detailed motion of atoms in a crystal lattice. It can be used to simulate the effect of temperature, impurities, and pre-existing defects on radiation-induced defect production mechanisms, defect migration, and defect stability.
ER@CEBAF: Modeling code developments
Meot, F.; Roblin, Y.
2016-04-13
A proposal for a multiple-pass, high energy, energy-recovery experiment using CEBAF is under preparation in the frame of a JLab-BNL collaboration. In view of beam dynamics investigations regarding this project, in addition to the existing model in use in Elegant, a version of CEBAF is developed in the stepwise ray-tracing code Zgoubi. Beyond the ER experiment, it is also planned to use the latter for the study of polarization transport in the presence of synchrotron radiation, down to the Hall D line where a 12 GeV polarized beam can be delivered. This Note briefly reports on the preliminary steps, and preliminary outcomes, based on an Elegant to Zgoubi translation.
Recent developments in DYNSUB: New models, code optimization and parallelization
Daeubler, M.; Trost, N.; Jimenez, J.; Sanchez, V.
2013-07-01
DYNSUB is a high-fidelity coupled code system consisting of the reactor simulator DYN3D and the sub-channel code SUBCHANFLOW. It describes nuclear reactor core behavior with pin-by-pin resolution for both steady-state and transient scenarios. In the course of the coupled code system's active development, super-homogenization (SPH) and generalized equivalence theory (GET) discontinuity factors may be computed with and employed in DYNSUB to compensate for pin-level homogenization errors. Because of the largely increased numerical problem size for pin-by-pin simulations, DYNSUB has benefitted from HPC techniques to improve its numerical performance. DYNSUB's coupling scheme has been structurally revised. Computational bottlenecks have been identified and parallelized for shared memory systems using OpenMP. Comparing the elapsed time for simulating a PWR core with one-eighth symmetry under hot zero power conditions applying the original and the optimized DYNSUB using 8 cores, overall speed up factors greater than 10 have been observed. The corresponding reduction in execution time enables a routine application of DYNSUB to study pin level safety parameters for engineering sized cases in a scientific environment. (authors)
Trinity Multiscale Transport Code Development for Experimental Comparison
NASA Astrophysics Data System (ADS)
Highcock, E.; Barnes, M.; Colyer, G.; Citrin, J.; Dickinson, D.; Mandel, N.; van Wyk, F.; Roach, C.; Schekochihin, A.; Dorland, W.
2014-10-01
The Trinity multiscale transport code has been extensively upgraded to further its use in experimental comparison. The upgrades to Trinity have extended its capability to work with experimental data, allowed it to evolve the magnetic equilibrium self-consistently (at fixed current) and significantly enhanced the range and performance of its turbulent transport modeling options. To enhance its capability to reproduce experiment, Trinity is now able to take output from the CRONOS integrated modelling suite, which is able to provide high quality reconstructions of experimental equilibria of, for example, JET. Trinity has also been integrated with the CHEASE Grad-Shafranov code. This allows the magnetic equilibrium to be re-computed self consistently as the pressure gradient evolves. Trinity has been given new options for modeling turbulent transport. These include the well-known TGLF framework, and the newly developed GPU-based nonlinear code GRYFX. These will allow rapid initial scans with Trinity before more detailed gyrokinetic modeling. Trinity's performance will benefit from an extensive programme to upgrade one of its primary gyrokinetic turbulence modeling options, GS2. We present a summary of these improvements and preliminary results. This work was supported by STFC and the Culham Centre for Fusion Energy. Computing time was provided by IFERC grants MULTEI and GKDELB, The Hartree Centre, and EPSRC Grants EP/H002081/1 and EP/L000237/1.
Development of parallel DEM for the open source code MFIX
Gopalakrishnan, Pradeep; Tafti, Danesh
2013-02-01
The paper presents the development of a parallel Discrete Element Method (DEM) solver for the open source code, Multiphase Flow with Interphase eXchange (MFIX) based on the domain decomposition method. The performance of the code was evaluated by simulating a bubbling fluidized bed with 2.5 million particles. The DEM solver shows strong scalability up to 256 processors with an efficiency of 81%. Further, to analyze weak scaling, the static height of the fluidized bed was increased to hold 5 and 10 million particles. The results show that global communication cost increases with problem size while the computational cost remains constant. Further, the effects of static bed height on the bubble hydrodynamics and mixing characteristics are analyzed.
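The strong-scaling figure quoted above (81% efficiency on 256 processors) follows from the usual definition E = T₁ / (N · T_N). A small sketch with hypothetical timings chosen to be consistent with that number (the times are invented for illustration, not taken from the paper):

```python
def strong_scaling_efficiency(t_serial, t_parallel, n_procs):
    """Parallel efficiency for a fixed problem size:
    E = T_1 / (N * T_N), where T_1 is the serial run time and
    T_N the run time on N processors."""
    return t_serial / (n_procs * t_parallel)

# Hypothetical timings (seconds) consistent with 81% efficiency on 256 cores
eff = strong_scaling_efficiency(t_serial=10368.0, t_parallel=50.0, n_procs=256)
```

In the weak-scaling experiments the paper describes, the problem size grows with the processor count instead, which is why the rising global communication cost dominates there while the per-processor computational cost stays constant.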
Once-through CANDU reactor models for the ORIGEN2 computer code
Croff, A.G.; Bjerke, M.A.
1980-11-01
Reactor physics calculations have led to the development of two CANDU reactor models for the ORIGEN2 computer code. The model CANDUs are based on (1) the existing once-through fuel cycle with feed comprised of natural uranium and (2) a projected slightly enriched (1.2 wt % ²³⁵U) fuel cycle. The reactor models are based on cross sections taken directly from the reactor physics codes. Descriptions of the reactor models, as well as values for the ORIGEN2 flux parameters THERM, RES, and FAST, are given.
The MELTSPREAD-1 computer code for the analysis of transient spreading in containments
Farmer, M.T.; Sienicki, J.J.; Spencer, B.W.
1990-01-01
A one-dimensional, multicell, Eulerian finite difference computer code (MELTSPREAD-1) has been developed to provide an improved prediction of the gravity driven spreading and thermal interactions of molten corium flowing over a concrete or steel surface. In this paper, the modeling incorporated into the code is described and the spreading models are benchmarked against a simple "dam break" problem as well as water simulant spreading data obtained in a scaled apparatus of the Mk I containment. Results are also presented for a scoping calculation of the spreading behavior and shell thermal response in the full scale Mk I system following vessel meltthrough. 24 refs., 15 figs.
A silicon-based surface code quantum computer
NASA Astrophysics Data System (ADS)
O'Gorman, Joe; Nickerson, Naomi H.; Ross, Philipp; Morton, John J. L.; Benjamin, Simon C.
2016-02-01
Individual impurity atoms in silicon can make superb individual qubits, but it remains an immense challenge to build a multi-qubit processor: there is a basic conflict between nanometre separation desired for qubit-qubit interactions and the much larger scales that would enable control and addressing in a manufacturable and fault-tolerant architecture. Here we resolve this conflict by establishing the feasibility of surface code quantum computing using solid-state spins, or ‘data qubits’, that are widely separated from one another. We use a second set of ‘probe’ spins that are mechanically separate from the data qubits and move in and out of their proximity. The spin dipole-dipole interactions give rise to phase shifts; measuring a probe’s total phase reveals the collective parity of the data qubits along the probe’s path. Using a protocol that balances the systematic errors due to imperfect device fabrication, our detailed simulations show that substantial misalignments can be handled within fault-tolerant operations. We conclude that this simple ‘orbital probe’ architecture overcomes many of the difficulties facing solid-state quantum computing, while minimising the complexity and offering qubit densities that are several orders of magnitude greater than other systems.
NASA Technical Reports Server (NTRS)
Schuster, David M.; Scott, Robert C.; Bartels, Robert E.; Edwards, John W.; Bennett, Robert M.
2000-01-01
As computational fluid dynamics methods mature, code development is rapidly transitioning from prediction of steady flowfields to unsteady flows. This change in emphasis offers a number of new challenges to the research community, not the least of which is obtaining detailed, accurate unsteady experimental data with which to evaluate new methods. Researchers at NASA Langley Research Center (LaRC) have been actively measuring unsteady pressure distributions for nearly 40 years. Over the last 20 years, these measurements have focused on developing high-quality datasets for use in code evaluation. This paper provides a sample of unsteady pressure measurements obtained by LaRC and available for government, university, and industry researchers to evaluate new and existing unsteady aerodynamic analysis methods. A number of cases are highlighted and discussed with attention focused on the unique character of the individual datasets and their perceived usefulness for code evaluation. Ongoing LaRC research in this area is also presented.
NASA Technical Reports Server (NTRS)
Stahara, S. S.; Klenke, D.; Trudinger, B. C.; Spreiter, J. R.
1980-01-01
Computational procedures are developed and applied to the prediction of solar wind interaction with nonmagnetic terrestrial planet atmospheres, with particular emphasis to Venus. The theoretical method is based on a single fluid, steady, dissipationless, magnetohydrodynamic continuum model, and is appropriate for the calculation of axisymmetric, supersonic, super-Alfvenic solar wind flow past terrestrial planets. The procedures, which consist of finite difference codes to determine the gasdynamic properties and a variety of special purpose codes to determine the frozen magnetic field, streamlines, contours, plots, etc. of the flow, are organized into one computational program. Theoretical results based upon these procedures are reported for a wide variety of solar wind conditions and ionopause obstacle shapes. Plasma and magnetic field comparisons in the ionosheath are also provided with actual spacecraft data obtained by the Pioneer Venus Orbiter.
Methodology, status and plans for development and assessment of TUF and CATHENA codes
Luxat, J.C.; Liu, W.S.; Leung, R.K.
1997-07-01
An overview is presented of the Canadian two-fluid computer codes TUF and CATHENA with specific focus on the constraints imposed during development of these codes and the areas of application for which they are intended. Additionally a process for systematic assessment of these codes is described which is part of a broader, industry based initiative for validation of computer codes used in all major disciplines of safety analysis. This is intended to provide both the licensee and the regulator in Canada with an objective basis for assessing the adequacy of codes for use in specific applications. Although focused specifically on CANDU reactors, Canadian experience in developing advanced two-fluid codes to meet wide-ranging application needs while maintaining past investment in plant modelling provides a useful contribution to international efforts in this area.
A computer code for calculating subcooled boiling pressure drop in forced convective tube flows
NASA Astrophysics Data System (ADS)
Wong, Christopher F.
1988-12-01
A calculation procedure, embodied in a computer code, was developed to calculate the convective subcooled boiling (SCB) pressure drop of water flowing in small-diameter vertical or horizontal tubes under conditions of high heat flux. The present investigation is an extension of previous work performed by C. T. Kline in 1985. The computer code, presented then and now, numerically integrates the single-phase and separated-flow-model pressure drop equations from the inlet to the outlet of a heated tube. Efforts in this study were concentrated on identifying weaknesses in Kline's best code version and investigating his recommendations for future work. The calculation procedures for each flow regime in the tube were modified to give better overall results. New work focused primarily on the partially-developed boiling (PDB) and fully-developed boiling (FDB) regimes. The pressure drop predictions from each code version were compared with experimental results from the investigations of Dormer/Bergles, Owens/Schrock, and Reynolds.
[Series: Medical Applications of the PHITS Code (2): Acceleration by Parallel Computing].
Furuta, Takuya; Sato, Tatsuhiko
2015-01-01
Time-consuming Monte Carlo dose calculation has become feasible owing to the development of computer technology. However, the recent gains are due to the emergence of multi-core high-performance computers, so parallel computing has become a key to achieving good software performance. The Monte Carlo simulation code PHITS contains two parallel computing functions: distributed-memory parallelization using the message passing interface (MPI) protocol and shared-memory parallelization using open multi-processing (OpenMP) directives. Users can choose between the two functions according to their needs. This paper explains the two functions, with their advantages and disadvantages. Some test applications are also provided to show their performance on a typical multi-core high-performance workstation.
RIVER-RAD: A computer code for simulating the transport of radionuclides in rivers
Hetrick, D.M.; McDowell-Boyer, L.M.; Sjoreen, A.L.; Thorne, D.J.; Patterson, M.R.
1992-11-01
A screening-level model, RIVER-RAD, has been developed to assess the potential fate of radionuclides released to rivers. The model is simplified in nature and is intended to provide guidance in determining the potential importance of the surface water pathway, relevant transport mechanisms, and key radionuclides in estimating radiological dose to man. The purpose of this report is to provide a description of the model and a user's manual for the FORTRAN computer code.
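The screening logic described above can be illustrated with a toy dilution-and-decay estimate. This is a generic formulation assuming complete mixing in the river cross-section and first-order radioactive decay over the travel time, not the actual RIVER-RAD equations; all parameter names are illustrative:

```python
import math

def screening_concentration(release_rate_bq_s, river_flow_m3_s,
                            decay_const_per_s, distance_m, velocity_m_s):
    """Generic screening estimate: fully mixed dilution with first-order
    decay during the travel time x/u. Illustrative only; not the
    RIVER-RAD formulation. Returns concentration in Bq/m^3."""
    c0 = release_rate_bq_s / river_flow_m3_s   # fully mixed concentration
    travel_time = distance_m / velocity_m_s    # seconds in transit
    return c0 * math.exp(-decay_const_per_s * travel_time)
```

For example, with a half-life equal to the travel time, the fully mixed concentration is halved at the downstream location; with a stable nuclide the estimate reduces to simple dilution.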
NASA Technical Reports Server (NTRS)
Rutishauser, David
2006-01-01
The motivation for this work comes from the observation that, amid the push for Massively Parallel (MP) solutions to high-end computing problems such as numerical physical simulations, large amounts of legacy code exist that are highly optimized for vector supercomputers. Because re-hosting legacy code often requires a complete re-write of the original code, which can be a very long and expensive effort, this work examines the potential to exploit reconfigurable computing machines in place of a vector supercomputer to implement an essentially unmodified legacy source code. Custom and reconfigurable computing resources could be used to emulate an original application's target platform to the extent required to achieve high performance. Arriving at an architecture that delivers the desired performance subject to limited resources involves solving a multi-variable optimization problem with constraints. Prior research in the area of reconfigurable computing has demonstrated that designing an optimum hardware implementation of a given application under hardware resource constraints is an NP-complete problem. The premise of the approach is that the general issue of applying reconfigurable computing resources to the implementation of an application, maximizing the performance of the computation subject to physical resource constraints, can be made a tractable problem by assuming a computational paradigm, such as vector processing. This research contributes a formulation of the problem and a methodology to design a reconfigurable vector processing implementation of a given application that satisfies a performance metric. A generic, parametric, architectural framework for vector processing implemented in reconfigurable logic is developed as a target for a scheduling/mapping algorithm that maps an input computation to a given instance of the architecture. This algorithm is integrated with an optimization framework to arrive at a specification of the architecture parameters
Plutonium explosive dispersal modeling using the MACCS2 computer code
Steele, C.M.; Wald, T.L.; Chanin, D.I.
1998-11-01
The purpose of this paper is to derive the necessary parameters to be used to establish a defensible methodology to perform explosive dispersal modeling of respirable plutonium using Gaussian methods. A particular code, MACCS2, has been chosen for this modeling effort due to its application of sophisticated meteorological statistical sampling in accordance with the philosophy of Nuclear Regulatory Commission (NRC) Regulatory Guide 1.145, "Atmospheric Dispersion Models for Potential Accident Consequence Assessments at Nuclear Power Plants". A second advantage supporting the selection of the MACCS2 code for modeling purposes is that meteorological data sets are readily available at most Department of Energy (DOE) and NRC sites. This particular MACCS2 modeling effort focuses on the calculation of respirable doses and not ground deposition. Once the necessary parameters for the MACCS2 modeling are developed and presented, the model is benchmarked against empirical test data from the Double Tracks shot of Project Roller Coaster (Shreve 1965) and applied to a hypothetical plutonium explosive dispersal scenario. Further modeling with the MACCS2 code is performed to determine a defensible method of treating the effects of building structure interaction on the respirable fraction distribution as a function of height. These results are related to the Clean Slate 2 and Clean Slate 3 bunkered shots of Project Roller Coaster. Lastly, a method is presented to determine the peak 99.5% sector doses on an irregular site boundary in the manner specified in NRC Regulatory Guide 1.145 (1983). Parametric analyses are performed on the major analytic assumptions in the MACCS2 model to define the potential errors that are possible in using this methodology.
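MACCS2 itself is not reproduced here, but the Gaussian model class it samples over weather sequences can be sketched. The block below is the textbook ground-reflected Gaussian plume formula with assumed inputs; in a real assessment the dispersion coefficients sigma_y and sigma_z would come from stability-class correlations at each downwind distance:

```python
import math

def gaussian_plume(q_rel, u, y, z, h_eff, sig_y, sig_z):
    """Textbook Gaussian plume with ground reflection.
    q_rel: release rate; u: wind speed; y: crosswind offset;
    z: receptor height; h_eff: effective release height;
    sig_y, sig_z: dispersion coefficients at this downwind distance.
    Illustrative of the model class only, not MACCS2's implementation."""
    crosswind = math.exp(-y**2 / (2.0 * sig_y**2))
    # Image-source term reflects the plume at the ground plane.
    vertical = (math.exp(-(z - h_eff)**2 / (2.0 * sig_z**2))
                + math.exp(-(z + h_eff)**2 / (2.0 * sig_z**2)))
    return q_rel / (2.0 * math.pi * u * sig_y * sig_z) * crosswind * vertical
```

For a ground-level release and receptor, the reflection term doubles the unreflected centerline value, and the concentration falls off symmetrically with crosswind offset.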
Visualization of elastic wavefields computed with a finite difference code
Larsen, S.; Harris, D.
1994-11-15
The authors have developed a finite difference elastic propagation model to simulate seismic wave propagation through geophysically complex regions. To facilitate debugging and to assist seismologists in interpreting the seismograms generated by the code, they have developed an X Windows interface that permits viewing of successive temporal snapshots of the (2D) wavefield as they are calculated. The authors present a brief video displaying the generation of seismic waves by an explosive source on a continent, which propagate to the edge of the continent then convert to two types of acoustic waves. This sample calculation was part of an effort to study the potential of offshore hydroacoustic systems to monitor seismic events occurring onshore.
Developing Fortran Code for Kriging on the Stampede Supercomputer
NASA Astrophysics Data System (ADS)
Hodgess, Erin
2016-04-01
Kriging is easily accessible in the open source statistical language R (R Core Team, 2015) in the gstat (Pebesma, 2004) package. It works very well but can be slow on large data sets, particularly if the prediction space is large as well. We are working on the Stampede supercomputer at the Texas Advanced Computing Center to develop code using a combination of R and the Message Passing Interface (MPI) bindings to Fortran. We have a function similar to autofitVariogram found in the automap (Hiemstra et al., 2008) package, and it is very effective. We are comparing R with MPI/Fortran, MPI/Fortran alone, and R with the Rmpi package, which uses bindings to C. We will present results from simulation studies and real-world examples. References: Hiemstra, P.H., Pebesma, E.J., Twenhofel, C.J.W. and G.B.M. Heuvelink, 2008. Real-time automatic interpolation of ambient gamma dose rates from the Dutch Radioactivity Monitoring Network. Computers and Geosciences, accepted for publication. Pebesma, E.J., 2004. Multivariable geostatistics in S: the gstat package. Computers and Geosciences, 30: 683-691. R Core Team, 2015. R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.R-project.org/.
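As a point of comparison with the R/gstat and MPI/Fortran implementations discussed above, the linear algebra at the heart of ordinary kriging is small enough to sketch in plain Python. The exponential variogram and its parameters are assumed for illustration; a production code would fit them to the data (as autofitVariogram does) and use a tuned linear solver:

```python
import math

def exp_variogram(h, sill=1.0, rng=1.0):
    """Assumed exponential semivariogram with zero nugget."""
    return sill * (1.0 - math.exp(-h / rng))

def solve(a, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def ordinary_kriging(points, values, target):
    """Ordinary kriging prediction at `target` from 2-D sample points.
    Solves [Gamma 1; 1^T 0][w; mu] = [gamma0; 1] for the weights w."""
    n = len(points)
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    a = [[exp_variogram(dist(points[i], points[j])) for j in range(n)] + [1.0]
         for i in range(n)]
    a.append([1.0] * n + [0.0])          # unbiasedness constraint row
    b = [exp_variogram(dist(p, target)) for p in points] + [1.0]
    sol = solve(a, b)
    weights = sol[:n]
    return sum(w * v for w, v in zip(weights, values)), weights
```

With a zero nugget the predictor is an exact interpolator: at a sample location it returns the sample value, and the weights always sum to one.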
Computer code simulations of the formation of Meteor Crater, Arizona - Calculations MC-1 and MC-2
NASA Technical Reports Server (NTRS)
Roddy, D. J.; Schuster, S. H.; Kreyenhagen, K. N.; Orphal, D. L.
1980-01-01
It has been widely accepted that hypervelocity impact processes play a major role in the evolution of the terrestrial planets and satellites. In connection with the development of quantitative methods for the description of impact cratering, it was found that the results provided by two-dimensional finite difference computer codes are greatly improved when initial impact conditions can be defined and when the numerical results can be tested against field and laboratory data. In order to address this problem, a numerical code study of the formation of Meteor (Barringer) Crater, Arizona, has been undertaken. A description is presented of the major results from the first two code calculations, MC-1 and MC-2, that have been completed for Meteor Crater. Both calculations used an iron meteorite with a kinetic energy of 3.8 Megatons. Calculation MC-1 had an impact velocity of 25 km/sec and MC-2 had an impact velocity of 15 km/sec.
Moreno, Maggie; Baggio, Giosuè
2015-07-01
In signaling games, a sender has private access to a state of affairs and uses a signal to inform a receiver about that state. If no common association of signals and states is initially available, sender and receiver must coordinate to develop one. How do players divide coordination labor? We show experimentally that, if players switch roles at each communication round, coordination labor is shared. However, in games with fixed roles, coordination labor is divided: Receivers adjust their mappings more frequently, whereas senders maintain the initial code, which is transmitted to receivers and becomes the common code. In a series of computer simulations, player and role asymmetry as observed experimentally were accounted for by a model in which the receiver in the first signaling round has a higher chance of adjusting its code than its partner. From this basic division of labor among players, certain properties of role asymmetry, in particular correlations with game complexity, are seen to follow.
ERIC Educational Resources Information Center
Pon-Barry, Heather; Packard, Becky Wai-Ling; St. John, Audrey
2017-01-01
A dilemma within computer science departments is developing sustainable ways to expand capacity within introductory computer science courses while remaining committed to inclusive practices. Training near-peer mentors for peer code review is one solution. This paper describes the preparation of near-peer mentors for their role, with a focus on…
Performance of a parallel code for the Euler equations on hypercube computers
NASA Technical Reports Server (NTRS)
Barszcz, Eric; Chan, Tony F.; Jesperson, Dennis C.; Tuminaro, Raymond S.
1990-01-01
The performance of hypercubes was evaluated on a computational fluid dynamics problem, and the parallel-environment issues that must be addressed were considered, such as algorithm changes, implementation choices, programming effort, and programming environment. The evaluation focuses on a widely used fluid dynamics code, FLO52, which solves the two-dimensional steady Euler equations describing flow around an airfoil. The code development experience is described, including interacting with the operating system, utilizing the message-passing communication system, and code modifications necessary to increase parallel efficiency. Results from two hypercube parallel computers (a 16-node iPSC/2 and a 512-node NCUBE/ten) are discussed and compared. In addition, a mathematical model of the execution time was developed as a function of several machine and algorithm parameters. This model accurately predicts the actual run times obtained and is used to explore the performance of the code in interesting yet physically realizable regions of the parameter space. Based on this model, predictions about future hypercubes are made.
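The abstract's execution-time model is not given explicitly, so the sketch below uses a generic form common for hypercube machines: computation time scaling as N/P and communication growing with the hypercube dimension log2(P). The coefficients and message counts are invented for illustration, not FLO52 measurements:

```python
import math

def run_time(n_cells, procs, t_calc, t_msg, msgs_per_step):
    """Toy hypercube timing model: per-cell work divided over P
    processors plus per-step messages along log2(P) cube dimensions.
    All coefficients here are assumed, not measured values."""
    compute = t_calc * n_cells / procs
    communicate = t_msg * msgs_per_step * math.log2(procs) if procs > 1 else 0.0
    return compute + communicate

def speedup(n_cells, procs, t_calc, t_msg, msgs_per_step):
    """Speedup relative to a single processor under the same model."""
    t1 = run_time(n_cells, 1, t_calc, t_msg, msgs_per_step)
    return t1 / run_time(n_cells, procs, t_calc, t_msg, msgs_per_step)
```

Even this toy model reproduces the qualitative behavior such a study explores: speedup grows with P but stays below the ideal P because the communication term does not shrink.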
Leonard, P.R.; Seitz, R.R.
1992-04-01
The US Nuclear Regulatory Commission (NRC) recently announced a revision to Title 10 of the Code of Federal Regulations, Part 20 (10 CFR 20), "Standards for Protection Against Radiation," which incorporates recommendations contained in Publications 26 and 30 of the International Commission on Radiological Protection (ICRP), issued in 1977 and 1979, respectively. The revision to 10 CFR 20 was also developed in parallel with Presidential Guidance on occupational radiation protection published in the Federal Register. This study concludes that the issuance of the revised 10 CFR 20 will not affect calculations using the computer codes considered in this report: in general, the computer codes, and the EPA and DOE guidance on which they are based, were developed in a manner consistent with the guidance provided in ICRP 26/30, well before the revision of 10 CFR 20.
Novokhatski, Alexander; /SLAC
2005-12-01
The problem of electromagnetic interaction of a beam with accelerator elements is very important for linear colliders, electron-positron factories, and free electron lasers. Precise calculation of wake fields is required for beam dynamics studies in these machines. We describe a method which allows computation of the wake fields of very short bunches. The computer code NOVO was developed based on this method. The method is free of unphysical solutions, such as "self-acceleration" of the bunch head, which are common in well-known wake field codes. Code NOVO was used for wake field studies for many accelerator projects all over the world.
Computation of Thermodynamic Equilibria Pertinent to Nuclear Materials in Multi-Physics Codes
NASA Astrophysics Data System (ADS)
Piro, Markus Hans Alexander
Nuclear energy plays a vital role in supporting electrical needs and fulfilling commitments to reduce greenhouse gas emissions. Research is a continuing necessity to improve the predictive capabilities of fuel behaviour in order to reduce costs and to meet increasingly stringent safety requirements by the regulator. Moreover, a renewed interest in nuclear energy has given rise to a "nuclear renaissance" and the necessity to design the next generation of reactors. In support of this goal, significant research efforts have been dedicated to the advancement of numerical modelling and computational tools in simulating various physical and chemical phenomena associated with nuclear fuel behaviour. This undertaking, in effect, collects the experience and observations of a past generation of nuclear engineers and scientists in a meaningful way for future design purposes. There is an increasing desire to integrate thermodynamic computations directly into multi-physics nuclear fuel performance and safety codes. A new equilibrium thermodynamic solver is being developed with this matter as a primary objective. This solver is intended to provide thermodynamic material properties and boundary conditions for continuum transport calculations. There are several concerns with the use of existing commercial thermodynamic codes: computational performance; limited capabilities in handling large multi-component systems of interest to the nuclear industry; convenient incorporation into other codes with quality assurance considerations; and licensing entanglements associated with code distribution. The development of this software in this research is aimed at addressing all of these concerns. The approach taken in this work exploits fundamental principles of equilibrium thermodynamics to simplify the numerical optimization equations. In brief, the chemical potentials of all species and phases in the system are constrained by estimates of the chemical potentials of the system
NASA Astrophysics Data System (ADS)
McGaw, Michael A.; Saltsman, James F.
1993-10-01
A recently developed high-temperature fatigue life prediction computer code is presented and an example of its usage given. The code discussed is based on the Total Strain version of Strainrange Partitioning (TS-SRP). Included in this code are procedures for characterizing the creep-fatigue durability behavior of an alloy according to TS-SRP guidelines and predicting cyclic life for complex cycle types for both isothermal and thermomechanical conditions. A reasonably extensive materials properties database is included with the code.
NASA Technical Reports Server (NTRS)
Mcgaw, Michael A.; Saltsman, James F.
1993-01-01
A recently developed high-temperature fatigue life prediction computer code is presented and an example of its usage given. The code discussed is based on the Total Strain version of Strainrange Partitioning (TS-SRP). Included in this code are procedures for characterizing the creep-fatigue durability behavior of an alloy according to TS-SRP guidelines and predicting cyclic life for complex cycle types for both isothermal and thermomechanical conditions. A reasonably extensive materials properties database is included with the code.
Methodology, status, and plans for development and assessment of the RELAP5 code
Johnson, G.W.; Riemke, R.A.
1997-07-01
RELAP/MOD3 is a computer code used for the simulation of transients and accidents in light-water nuclear power plants. The objective of the program to develop and maintain RELAP5 was and is to provide the U.S. Nuclear Regulatory Commission with an independent tool for assessing reactor safety. This paper describes code requirements, models, solution scheme, language and structure, user interface validation, and documentation. The paper also describes the current and near term development program and provides an assessment of the code`s strengths and limitations.
The TESS (Tandem Experiment Simulation Studies) computer code user's manual
Procassini, R.J. . Dept. of Nuclear Engineering); Cohen, B.I. )
1990-06-01
TESS (Tandem Experiment Simulation Studies) is a one-dimensional, bounded particle-in-cell (PIC) simulation code designed to investigate the confinement and transport of plasma in a magnetic mirror device, including tandem mirror configurations. Mirror plasmas may be modeled in a system which includes an applied magnetic field and/or a self-consistent or applied electrostatic potential. The PIC code TESS is similar to the PIC code DIPSI (Direct Implicit Plasma Surface Interactions) which is designed to study plasma transport to and interaction with a solid surface. The codes TESS and DIPSI are direct descendants of the PIC code ES1 that was created by A. B. Langdon. This document provides the user with a brief description of the methods used in the code and a tutorial on the use of the code. 10 refs., 2 tabs.
NASA Technical Reports Server (NTRS)
Chen, Y. S.
1986-01-01
In this report, a numerical method for solving the equations of motion of three-dimensional incompressible flows in nonorthogonal body-fitted coordinate (BFC) systems has been developed. The equations of motion are transformed to a generalized curvilinear coordinate system from which the transformed equations are discretized using finite difference approximations in the transformed domain. The hybrid scheme is used to approximate the convection terms in the governing equations. Solutions of the finite difference equations are obtained iteratively by using a pressure-velocity correction algorithm (SIMPLE-C). Numerical examples of two- and three-dimensional, laminar and turbulent flow problems are employed to evaluate the accuracy and efficiency of the present computer code. The user's guide and computer program listing of the present code are also included.
Computer code for predicting coolant flow and heat transfer in turbomachinery
NASA Technical Reports Server (NTRS)
Meitner, Peter L.
1990-01-01
A computer code was developed to analyze any turbomachinery coolant flow path geometry that consists of a single flow passage with a unique inlet and exit. Flow can be bled off for tip-cap impingement cooling, and a flow bypass can be specified in which coolant flow is taken off at one point in the flow channel and reintroduced at a point farther downstream in the same channel. The user may either choose the coolant flow rate or let the program determine the flow rate from specified inlet and exit conditions. The computer code integrates the 1-D momentum and energy equations along a defined flow path and calculates the coolant's flow rate, temperature, pressure, and velocity and the heat transfer coefficients along the passage. The equations account for area change, mass addition or subtraction, pumping, friction, and heat transfer.
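For the special case of a constant-area adiabatic passage with wall friction only, the 1-D momentum integration reduces to marching the Darcy friction term along the duct. The sketch below shows only that marching strategy with held-constant properties; the actual code re-evaluates density, velocity, and friction at each step from the coupled momentum and energy equations:

```python
def friction_pressure_drop(length, diameter, density, velocity, f_darcy,
                           steps=1000):
    """March dp/dx = -f * rho * v^2 / (2 * D) along a constant-area
    adiabatic passage. A simplified special case of the code's 1-D
    momentum/energy integration; in general rho, v, and f would be
    updated at every step. Returns the total pressure drop (Pa)."""
    dx = length / steps
    dp = 0.0
    for _ in range(steps):
        dp += f_darcy * density * velocity**2 / (2.0 * diameter) * dx
    return dp
```

With constant properties the marching result collapses to the closed-form Darcy-Weisbach value f (L/D) rho v^2 / 2, which makes a convenient sanity check on the integrator.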
Abstracts of digital computer code packages assembled by the Radiation Shielding Information Center
Carter, B.J.; Maskewitz, B.F.
1985-04-01
This publication, ORNL/RSIC-13, Volumes I to III Revised, has resulted from an internal audit of the first 168 packages of computing technology in the Computer Codes Collection (CCC) of the Radiation Shielding Information Center (RSIC). It replaces the earlier three documents published as single volumes between 1966 and 1972. A significant number of the early code packages were considered obsolete and were removed from the collection in the audit process, and their CCC numbers were not reassigned. Others not currently being used by the nuclear R and D community were retained in the collection to preserve technology not replaced by newer methods, or were considered of potential value for reference purposes. Much of the early technology, however, has improved through developer/RSIC/user interaction and continues at the forefront of the advancing state-of-the-art.
Probabilistic load simulation: Code development status
NASA Astrophysics Data System (ADS)
Newell, J. F.; Ho, H.
1991-05-01
The objective of the Composite Load Spectra (CLS) project is to develop generic load models to simulate the composite load spectra that are included in space propulsion system components. The probabilistic loads thus generated are part of the probabilistic design analysis (PDA) of a space propulsion system that also includes probabilistic structural analyses, reliability, and risk evaluations. Probabilistic load simulation for space propulsion systems demands sophisticated probabilistic methodology and requires large amounts of load information and engineering data. The CLS approach is to implement a knowledge based system coupled with a probabilistic load simulation module. The knowledge base manages and furnishes load information and expertise and sets up the simulation runs. The load simulation module performs the numerical computation to generate the probabilistic loads with load information supplied from the CLS knowledge base.
Probabilistic load simulation: Code development status
NASA Technical Reports Server (NTRS)
Newell, J. F.; Ho, H.
1991-01-01
The objective of the Composite Load Spectra (CLS) project is to develop generic load models to simulate the composite load spectra that are included in space propulsion system components. The probabilistic loads thus generated are part of the probabilistic design analysis (PDA) of a space propulsion system that also includes probabilistic structural analyses, reliability, and risk evaluations. Probabilistic load simulation for space propulsion systems demands sophisticated probabilistic methodology and requires large amounts of load information and engineering data. The CLS approach is to implement a knowledge based system coupled with a probabilistic load simulation module. The knowledge base manages and furnishes load information and expertise and sets up the simulation runs. The load simulation module performs the numerical computation to generate the probabilistic loads with load information supplied from the CLS knowledge base.
Spent fuel management fee methodology and computer code user's manual.
Engel, R.L.; White, M.K.
1982-01-01
The methodology and computer model described here were developed to analyze the cash flows for the federal government taking title to and managing spent nuclear fuel. The methodology has been used by the US Department of Energy (DOE) to estimate the spent fuel disposal fee that will provide full cost recovery. Although the methodology was designed to analyze interim storage followed by spent fuel disposal, it could be used to calculate a fee for reprocessing spent fuel and disposing of the waste. The methodology consists of two phases. The first phase estimates government expenditures for spent fuel management. The second phase determines the fees that will result in revenues such that the government attains full cost recovery assuming various revenue collection philosophies. These two phases are discussed in detail in subsequent sections of this report. Each of the two phases constitutes a computer module, called SPADE (SPent fuel Analysis and Disposal Economics) and FEAN (FEe ANalysis), respectively.
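One of the possible revenue-collection philosophies, full cost recovery on a present-value basis, can be sketched as a single discounted ratio: the fee per kilogram that makes the present value of fee revenues equal the present value of program costs. This is a simplified stand-in for the SPADE/FEAN modules; the cash-flow streams and discount rate are hypothetical:

```python
def full_cost_recovery_fee(costs, fuel_kg, rate):
    """Fee ($/kg) such that PV(fee revenues) = PV(program costs).
    costs[t] and fuel_kg[t] are annual streams; rate is the annual
    discount rate. A simplified sketch of one collection philosophy,
    not the SPADE or FEAN model."""
    pv_cost = sum(c / (1.0 + rate) ** t for t, c in enumerate(costs))
    pv_kg = sum(k / (1.0 + rate) ** t for t, k in enumerate(fuel_kg))
    return pv_cost / pv_kg
```

By construction, charging this fee on each year's fuel deliveries drives the net present value of the program to zero, which is the full-cost-recovery condition.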
Parallelized tree-code for clusters of personal computers
NASA Astrophysics Data System (ADS)
Viturro, H. R.; Carpintero, D. D.
2000-02-01
We present a tree-code for integrating the equations of motion of collisionless systems, which has been fully parallelized and adapted to run on several PC-based processors simultaneously, using the well-known PVM message passing library software. SPH algorithms, not yet included, may be easily incorporated into the code. The code is written in ANSI C; it can be freely downloaded from a public ftp site. Simulations of collisions of galaxies are presented, with which the performance of the code is tested.
Status of Continuum Edge Gyrokinetic Code Physics Development
Xu, X Q; Xiong, Z; Dorr, M R; Hittinger, J A; Kerbel, G D; Nevins, W M; Cohen, B I; Cohen, R H
2005-05-31
We are developing an edge gyro-kinetic continuum simulation code to study the boundary plasma over a region extending from inside the H-mode pedestal across the separatrix to the divertor plates. A 4-D (ψ, θ, ε, μ) version of this code is presently being implemented, en route to a full 5-D version. A set of gyrokinetic equations [1] are discretized on a computational grid which incorporates X-point divertor geometry. The present implementation is a Method of Lines approach where the phase-space derivatives are discretized with finite differences and implicit backwards differencing formulas are used to advance the system in time. A fourth-order upwinding algorithm is used for particle cross-field drifts, parallel streaming, and acceleration. Boundary conditions at conducting material surfaces are implemented on the plasma side of the sheath. The Poisson-like equation is solved using GMRES with a multi-grid preconditioner from HYPRE. A nonlinear Fokker-Planck collision operator from STELLA [2] in (ν∥, ν⊥) has been streamlined and integrated into the gyro-kinetic package using the same implicit Newton-Krylov solver, interpolating F and dF/dt|coll to/from (ε, μ) space. With our 4D code we compute the ion thermal flux, ion parallel velocity, self-consistent electric field, and geo-acoustic oscillations, which we compare with standard neoclassical theory for core plasma parameters; and we study the transition from collisional to collisionless end-loss. In the real X-point geometry, we find that particles are trapped near the outside midplane and in the X-point regions due to the magnetic configuration. The sizes of banana orbits are comparable to the pedestal width and/or the SOL width for energetic trapped particles. The effect of the real X-point geometry and edge plasma conditions on standard neoclassical theory will be evaluated, including a comparison of our 4D code with other kinetic
MMA, A Computer Code for Multi-Model Analysis
Eileen P. Poeter and Mary C. Hill
2007-08-20
This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and the system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that, as more data become available, they tend to favor more complicated models than the other methods do, which makes sense in many situations.
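The posterior model probabilities mentioned above follow a standard recipe once a discrimination criterion has been computed for each calibrated model: subtract the minimum and exponentiate. A minimal sketch, assuming the least-squares Gaussian-error form of AIC (MMA's exact conventions may differ):

```python
import math

def aic(ss_res, n_obs, k_params):
    """AIC for a least-squares calibration, assuming Gaussian errors:
    n * ln(SSR/n) + 2k. Additive constants are dropped, which is fine
    because only differences between models matter."""
    return n_obs * math.log(ss_res / n_obs) + 2 * k_params

def model_probabilities(criteria):
    """Model weights from any discrimination criterion (AIC, AICc,
    BIC, or KIC): w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2),
    where delta_i is the criterion minus the minimum over models."""
    best = min(criteria)
    raw = [math.exp(-(c - best) / 2.0) for c in criteria]
    total = sum(raw)
    return [r / total for r in raw]
```

The weights sum to one, the best-criterion model receives the largest weight, and the same weights can then be used for model-averaged parameter estimates and predictions.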
MOMDIS: a Glauber model computer code for knockout reactions
NASA Astrophysics Data System (ADS)
Bertulani, C. A.; Gade, A.
2006-09-01
A computer program is described to calculate momentum distributions in stripping and diffraction dissociation reactions. A Glauber model is used with the scattering wavefunctions calculated in the eikonal approximation. The program is appropriate for knockout reactions at intermediate energy collisions (30 MeV ⩽ E/nucleon ⩽ 2000 MeV). It is particularly useful for reactions involving unstable nuclear beams, or exotic nuclei (e.g., neutron-rich nuclei), and studies of single-particle occupancy probabilities (spectroscopic factors) and other related physical observables. Such studies are an essential part of the scientific program of radioactive beam facilities, as, for instance, the proposed RIA (Rare Isotope Accelerator) facility in the US. Program summary: Title of program: MOMDIS (MOMentum DIStributions) Catalogue identifier: ADXZ_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXZ_v1_0 Computers: The code has been created on an IBM-PC, but also runs on UNIX or LINUX machines Operating systems: WINDOWS or UNIX Program language used: Fortran-77 Memory required to execute with typical data: 16 Mbytes of RAM memory and 2 MB of hard disk space No. of lines in distributed program, including test data, etc.: 6255 No. of bytes in distributed program, including test data, etc.: 63 568 Distribution format: tar.gz Nature of physical problem: The program calculates bound wavefunctions, eikonal S-matrices, total cross-sections and momentum distributions of interest in nuclear knockout reactions at intermediate energies. Method of solution: Solves the radial Schrödinger equation for bound states. A Numerov integration is used outwardly and inwardly and a matching at the nuclear surface is done to obtain the energy and the bound state wavefunction with good accuracy. The S-matrices are obtained using eikonal wavefunctions and the "t-ρρ" method to obtain the eikonal phase-shifts. The momentum distributions are obtained by means of a Gaussian expansion of
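The Numerov integration named in the method of solution can be shown compactly. The propagator below implements the standard three-point Numerov recursion for equations of the form u''(x) = f(x) u(x); the harmonic-oscillator check in the usage note is an assumed stand-in for a nuclear bound-state problem, and the eigenvalue search and surface matching that MOMDIS performs are not included:

```python
import math

def numerov(f, x0, x1, u0, u1, h, x_end):
    """Propagate u''(x) = f(x) * u(x) forward with the Numerov scheme
    (O(h^6) local error). Starts from two known values u0 = u(x0) and
    u1 = u(x1) with x1 = x0 + h. Returns the grids of x and u values.
    A sketch of the propagation step only; no eigenvalue search."""
    def coeff(x):
        return 1.0 - h * h * f(x) / 12.0
    xs = [x0, x1]
    us = [u0, u1]
    x = x1
    while x < x_end - 1e-12:
        x_next = x + h
        u_next = ((2.0 + 5.0 * h * h * f(x) / 6.0) * us[-1]
                  - coeff(xs[-2]) * us[-2]) / coeff(x_next)
        xs.append(x_next)
        us.append(u_next)
        x = x_next
    return xs, us
```

A convenient check is the harmonic-oscillator ground state u = exp(-x^2/2), which satisfies u'' = (x^2 - 1) u: seeding the recursion with exact values at x = -5 and propagating to x = 0 should recover u(0) = 1 to high accuracy.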
Software tools for developing parallel applications. Part 1: Code development and debugging
Brown, J.; Geist, A.; Pancake, C.; Rover, D.
1997-04-01
Developing an application for parallel computers can be a lengthy and frustrating process, making it a perfect candidate for software tool support. Yet application programmers are often the last to hear about new tools emerging from R&D efforts. This paper provides an overview of two focus areas of tool support: code development and debugging. Each is discussed in terms of the programmer needs addressed, the extent to which representative current tools meet those needs, and what new levels of tool support are important if parallel computing is to become more widespread.
T-Matrix: Codes for Computing Electromagnetic Scattering by Nonspherical and Aggregated Particles
NASA Astrophysics Data System (ADS)
Waterman, Peter; Mishchenko, Michael I.; Travis, Larry D.; Mackowski, Daniel W.
2015-11-01
The T-Matrix package includes codes to compute electromagnetic scattering by homogeneous, rotationally symmetric nonspherical particles in fixed and random orientations, by randomly oriented two-sphere clusters with touching or separated components, and by multi-sphere clusters in fixed and random orientations. All codes are written in Fortran-77. LAPACK-based, extended-precision, Gauss-elimination-based, NAG-based, and superposition codes are available, as are double-precision superposition codes, parallelized double-precision codes, double-precision Lorenz-Mie codes, and codes for computing the coefficients of the generalized Chebyshev shape.
High pressure humidification columns: Design equations, algorithm, and computer code
Enick, R.M.; Klara, S.M.; Marano, J.J.
1994-07-01
This report describes the detailed development of a computer model to simulate the humidification of an air stream in contact with a water stream in a countercurrent, packed tower, humidification column. The computer model has been developed as a user model for the Advanced System for Process Engineering (ASPEN) simulator. This was done to utilize the powerful ASPEN flash algorithms as well as to provide ease of use when using ASPEN to model systems containing humidification columns. The model can easily be modified for stand-alone use by incorporating any standard algorithm for performing flash calculations. The model was primarily developed to analyze Humid Air Turbine (HAT) power cycles; however, it can be used for any application that involves a humidifier or saturator. The solution is based on a multiple-stage model of a packed column which incorporates mass and energy balances, mass transfer and heat transfer rate expressions, the Lewis relation, and a thermodynamic equilibrium model for the air-water system. The inlet air properties, inlet water properties, and a measure of the mass transfer and heat transfer which occur in the column are the only required input parameters to the model. Several example problems are provided to illustrate the algorithm's ability to generate the temperature of the water, flow rate of the water, temperature of the air, flow rate of the air, and humidity of the air as a function of height in the column. The algorithm can be used to model any high-pressure air humidification column operating at pressures up to 50 atm. This discussion includes descriptions of various humidification processes, detailed derivations of the relevant expressions, and methods of incorporating these equations into a computer model for a humidification column.
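A toy illustration of the mass-transfer rate expression and air-water equilibrium the report combines (not the ASPEN user model itself; the constant water-film temperature, the Antoine fit, and operation at 1 atm are simplifying assumptions):

```python
import math

def p_ws(T_c):
    """Water saturation pressure (kPa) from an Antoine fit, ~0-100 C."""
    return math.exp(16.3872 - 3885.70 / (T_c + 230.170))

def humidity_profile(T_w, P_kpa, W_in, kya_over_G, height, n=200):
    """Euler march of the air humidity ratio up a packed column.
    Assumes a constant water-film temperature T_w, a strong
    simplification of the report's stage-wise model."""
    W_star = 0.622 * p_ws(T_w) / (P_kpa - p_ws(T_w))  # equilibrium humidity
    dz = height / n
    W, profile = W_in, [W_in]
    for _ in range(n):
        W += kya_over_G * (W_star - W) * dz           # mass-transfer rate law
        profile.append(W)
    return profile

# Air entering dry at the bottom of a 2 m column at 1 atm, 60 C water film
prof = humidity_profile(T_w=60.0, P_kpa=101.325, W_in=0.005,
                        kya_over_G=1.5, height=2.0)
```

The humidity approaches the interface equilibrium value exponentially with height; the full model additionally marches the water temperature and flow through coupled energy balances and the Lewis relation.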
Assessment of uncertainties of the models used in thermal-hydraulic computer codes
NASA Astrophysics Data System (ADS)
Gricay, A. S.; Migrov, Yu. A.
2015-09-01
The article deals with the problem of determining the statistical characteristics of variable parameters (the variation range and distribution law) when analyzing the uncertainty and sensitivity of calculation results to uncertainty in input data. A comparative analysis of modern approaches to uncertainty in input data is presented. The need to develop an alternative method for estimating the uncertainty of model parameters used in thermal-hydraulic computer codes, in particular in the closing correlations of the loop thermal-hydraulics block, is shown. Such a method should involve a minimal degree of subjectivity and must be based on objective quantitative assessment criteria. The method includes three sequential stages: selecting experimental data satisfying the specified criteria, identifying the key closing correlation using a sensitivity analysis, and carrying out case calculations followed by statistical processing of the results. By using the method, one can estimate the uncertainty range of a variable parameter and establish its distribution law in that range, provided that the experimental information is sufficiently representative. Practical application of the method is demonstrated using the problem of estimating the uncertainty of a parameter appearing in the model describing transition to post-burnout heat transfer that is used in the thermal-hydraulic computer code KORSAR. The study revealed the need to narrow the previously established uncertainty range of this parameter and to replace the uniform distribution law in that range with a Gaussian distribution law. The proposed method can be applied to different thermal-hydraulic computer codes. In some cases, application of the method can achieve a smaller degree of conservatism in the expert estimates of uncertainties pertinent to the model parameters used in computer codes.
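The third stage (case calculations followed by statistical processing) can be caricatured in a few lines; the code-to-experiment ratios below are synthetic stand-ins, not KORSAR data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic code-to-experiment ratios for the screened experiments
# (stand-ins for stage-1 data; not KORSAR or real measurements).
ratios = rng.normal(loc=1.0, scale=0.12, size=40)

# Stage 3: statistical processing of the case-calculation results gives
# the variation range and a candidate distribution law for the multiplier
# applied to the closing correlation.
mu = ratios.mean()
sd = ratios.std(ddof=1)
lo, hi = mu - 2.0 * sd, mu + 2.0 * sd   # ~95% range under a Gaussian law
```

In practice one would also test the hypothesized distribution law with a goodness-of-fit criterion before adopting the Gaussian form, as the article does for the post-burnout heat-transfer parameter.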
GIANT: a computer code for General Interactive ANalysis of Trajectories
Jaeger, J.; Lee, M.; Servranckx, R.; Shoaee, H.
1985-04-01
Many model-driven diagnostic and correction procedures have been developed at SLAC for the on-line computer controlled operation of SPEAR, PEP, the LINAC, and the Electron Damping Ring. In order to facilitate future applications and enhancements, these procedures are being collected into a single program, GIANT. The program allows interactive diagnosis as well as performance optimization of any beam transport line or circular machine. The test systems for GIANT are those of the SLC project. The organization of this program and some of the recent applications of the procedures will be described in this paper.
1997-03-01
This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume consists of the section of the manual dealing with three of the functional modules in the code. Those are the Morse-SGC for the SCALE system, Heating 7.2, and KENO V.a. The manual describes the latest released versions of the codes.
Mothers as Mediators of Cognitive Development: A Coding Manual. Updated.
ERIC Educational Resources Information Center
Friedman, Sarah L.; Sherman, Tracy L.
Coding systems developed for a study of the way mothers influence the cognitive development of their 2- to 4-year-old children are described in this report. The coding systems were developed for the analysis of data recorded on videotapes of 3 mother-child situations: 8 minutes of interaction starting with a reunion between mother and child, 5…
Operations analysis (study 2.1). Program listing for the LOVES computer code
NASA Technical Reports Server (NTRS)
Wray, S. T., Jr.
1974-01-01
A listing of the LOVES computer program is presented. The program is coded partially in SIMSCRIPT and FORTRAN. This version of LOVES is compatible with both the CDC 7600 and the UNIVAC 1108 computers. The code has been compiled, loaded, and executed successfully on the EXEC 8 system for the UNIVAC 1108.
Development of a 3-D upwind PNS code for chemically reacting hypersonic flowfields
NASA Technical Reports Server (NTRS)
Tannehill, J. C.; Wadawadigi, G.
1992-01-01
Two new parabolized Navier-Stokes (PNS) codes were developed to compute the three-dimensional, viscous, chemically reacting flow of air around hypersonic vehicles such as the National Aero-Space Plane (NASP). The first code (TONIC) solves the gas dynamic and species conservation equations in a fully coupled manner using an implicit, approximately-factored, central-difference algorithm. This code was upgraded to include shock fitting and the capability of computing the flow around complex body shapes. The revised TONIC code was validated by computing the chemically-reacting (M(sub infinity) = 25.3) flow around a 10 deg half-angle cone at various angles of attack and the Ames All-Body model at 0 deg angle of attack. The results of these calculations were in good agreement with the results from the UPS code. One of the major drawbacks of the TONIC code is that the central-differencing of fluxes across interior flowfield discontinuities tends to introduce errors into the solution in the form of local flow property oscillations. The second code (UPS), originally developed for a perfect gas, has been extended to permit either perfect gas, equilibrium air, or nonequilibrium air computations. The code solves the PNS equations using a finite-volume, upwind TVD method based on Roe's approximate Riemann solver that was modified to account for real gas effects. The dissipation term associated with this algorithm is sufficiently adaptive to flow conditions that, even when attempting to capture very strong shock waves, no additional smoothing is required. For nonequilibrium calculations, the code solves the fluid dynamic and species continuity equations in a loosely-coupled manner. This code was used to calculate the hypersonic, laminar flow of chemically reacting air over cones at various angles of attack. In addition, the flow around the McDonnell Douglas generic option blended-wing-body was computed and comparisons were made between the perfect gas, equilibrium air, and the
Multiphase integral reacting flow computer code (ICOMFLO): User's guide
Chang, S.L.; Lottes, S.A.; Petrick, M.
1997-11-01
A copyrighted computational fluid dynamics computer code, ICOMFLO, has been developed for the simulation of multiphase reacting flows. The code solves conservation equations for gaseous species and droplets (or solid particles) of various sizes. General conservation laws, expressed by elliptic type partial differential equations, are used in conjunction with rate equations governing the mass, momentum, enthalpy, species, turbulent kinetic energy, and turbulent dissipation. Associated phenomenological submodels of the code include integral combustion, two parameter turbulence, particle evaporation, and interfacial submodels. A newly developed integral combustion submodel replacing an Arrhenius type differential reaction submodel has been implemented to improve numerical convergence and enhance numerical stability. A two parameter turbulence submodel is modified for both gas and solid phases. An evaporation submodel treats not only droplet evaporation but also size dispersion. Interfacial submodels use correlations to model interfacial momentum and energy transfer. The ICOMFLO code solves the governing equations in three steps. First, a staggered grid system is constructed in the flow domain. The staggered grid system defines gas velocity components on the surfaces of a control volume, while the other flow properties are defined at the volume center. A blocked cell technique is used to handle complex geometry. Then, the partial differential equations are integrated over each control volume and transformed into discrete difference equations. Finally, the difference equations are solved iteratively by using a modified SIMPLER algorithm. The results of the solution include gas flow properties (pressure, temperature, density, species concentration, velocity, and turbulence parameters) and particle flow properties (number density, temperature, velocity, and void fraction). The code has been used in many engineering applications, such as coal-fired combustors, air
Development of the Tensoral Computer Language
NASA Technical Reports Server (NTRS)
Ferziger, Joel; Dresselhaus, Eliot
1996-01-01
The research scientist or engineer wishing to perform large scale simulations or to extract useful information from existing databases is required to have expertise in the details of the particular database, the numerical methods and the computer architecture to be used. This poses a significant practical barrier to the use of simulation data. The goal of this research was to develop a high-level computer language called Tensoral, designed to remove this barrier. The Tensoral language provides a framework in which efficient generic data manipulations can be easily coded and implemented. First of all, Tensoral is general. The fundamental objects in Tensoral represent tensor fields and the operators that act on them. The numerical implementation of these tensors and operators is completely and flexibly programmable. New mathematical constructs and operators can be easily added to the Tensoral system. Tensoral is compatible with existing languages. Tensoral tensor operations co-exist in a natural way with a host language, which may be any sufficiently powerful computer language such as Fortran, C, or Vectoral. Tensoral is very-high-level. Tensor operations in Tensoral typically act on entire databases (i.e., arrays) at one time and may, therefore, correspond to many lines of code in a conventional language. Tensoral is efficient. Tensoral is a compiled language. Database manipulations are simplified, optimized, and scheduled by the compiler, eventually resulting in efficient machine code to implement them.
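A rough feel for the "whole database at once" style (in plain NumPy, not Tensoral syntax) is given by a divergence computed as a single expression over an entire discretized field:

```python
import numpy as np

class Field:
    """Toy whole-field tensor object: each operation acts on the entire
    discretized array at once, in the spirit the abstract describes."""
    def __init__(self, data, dx):
        self.data, self.dx = np.asarray(data, dtype=float), dx
    def ddx(self, axis):
        # one vectorized central-difference derivative over the whole field
        return Field(np.gradient(self.data, self.dx, axis=axis), self.dx)
    def __add__(self, other):
        return Field(self.data + other.data, self.dx)

n = 64
dx = 2.0 * np.pi / n
x, y = np.meshgrid(np.arange(n) * dx, np.arange(n) * dx, indexing="ij")
u = Field(np.sin(x) * np.cos(y), dx)
v = Field(-np.cos(x) * np.sin(y), dx)

# divergence of an analytically divergence-free field, as one expression
div = u.ddx(0) + v.ddx(1)
```

In a conventional language the same operation would be nested loops; a Tensoral-style compiler additionally chooses and schedules the numerical implementation behind such expressions.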
Recent developments in the Los Alamos radiation transport code system
Forster, R.A.; Parsons, K.
1997-06-01
A brief progress report on updates to the Los Alamos Radiation Transport Code System (LARTCS) for solving criticality and fixed-source problems is provided. LARTCS integrates the Diffusion Accelerated Neutral Transport (DANT) discrete ordinates codes with the Monte Carlo N-Particle (MCNP) code. The LARTCS code is being developed with a graphical user interface for problem setup and analysis. Progress in the DANT system for criticality applications includes a two-dimensional module which can be linked to a mesh-generation code and a faster iteration scheme. Updates to MCNP Version 4A allow statistical checks of calculated Monte Carlo results.
TRANS4: a computer code calculation of solid fuel penetration of a concrete barrier. [LMFBR; GCFR]
Ono, C. M.; Kumar, R.; Fink, J. K.
1980-07-01
The computer code, TRANS4, models the melting and penetration of a solid barrier by a solid disc of fuel following a core disruptive accident. This computer code has been used to model fuel debris penetration of basalt, limestone concrete, basaltic concrete, and magnetite concrete. Sensitivity studies were performed to assess the importance of various properties on the rate of penetration. Comparisons were made with results from the GROWS II code.
NASA Technical Reports Server (NTRS)
Salazar, Giovanni; Droba, Justin C.; Oliver, Brandon; Amar, Adam J.
2016-01-01
With the recent development of multi-dimensional thermal protection system (TPS) material response codes, the capability to account for surface-to-surface radiation exchange in complex geometries is critical. This paper presents recent efforts to implement such capabilities in the CHarring Ablator Response (CHAR) code developed at NASA's Johnson Space Center. This work also describes the different numerical methods implemented in the code to compute geometric view factors for radiation problems involving multiple surfaces. Verification of the code's radiation capabilities and results of a code-to-code comparison are presented. Finally, a demonstration case of a two-dimensional ablating cavity with enclosure radiation accounting for a changing geometry is shown.
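View factors of the kind mentioned here are typically verified against analytic cases; a hedged Monte Carlo sketch for coaxial parallel disks (an illustration of the general approach, not the CHAR implementation) is:

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_view_factor(r1, r2, h, n=200_000):
    """Monte Carlo view factor from a disk of radius r1 to a coaxial
    parallel disk of radius r2 at separation h."""
    # uniform emission points on the lower disk
    rad = r1 * np.sqrt(rng.random(n))
    ang = 2.0 * np.pi * rng.random(n)
    px, py = rad * np.cos(ang), rad * np.sin(ang)
    # cosine-weighted directions over the upper hemisphere
    u1, u2 = rng.random(n), rng.random(n)
    rd, th = np.sqrt(u1), 2.0 * np.pi * u2
    dx, dy, dz = rd * np.cos(th), rd * np.sin(th), np.sqrt(1.0 - u1)
    # intersect each ray with the upper plane and test the hit point
    t = h / dz
    hx, hy = px + t * dx, py + t * dy
    return float(np.mean(hx * hx + hy * hy <= r2 * r2))

F = mc_view_factor(1.0, 1.0, 1.0)
# analytic result for equal coaxial disks: X = 1 + (1 + R2^2)/R1^2 = 3 here
F_exact = 0.5 * (3.0 - np.sqrt(9.0 - 4.0))   # = (3 - sqrt(5))/2
```

Cosine-weighted ray sampling makes the hit fraction an unbiased estimator of the view factor; production codes add occlusion tests and recompute the factors as the ablating geometry changes.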
Finite element code development for modeling detonation of HMX composites
NASA Astrophysics Data System (ADS)
Duran, Adam; Sundararaghavan, Veera
2015-06-01
In this talk, we present a hydrodynamics code for modeling shock and detonation waves in HMX. A stable, efficient solution strategy based on a Taylor-Galerkin finite element (FE) discretization was developed to solve the reactive Euler equations. In our code, well-calibrated equations of state for the solid unreacted material and gaseous reaction products have been implemented, along with a chemical reaction scheme and a mixing rule to define the properties of partially reacted states. A linear Gruneisen equation of state was employed for the unreacted HMX, calibrated from experiments. The JWL form was used to model the EOS of gaseous reaction products. It is assumed that the unreacted explosive and reaction products are in both pressure and temperature equilibrium. The overall specific volume and internal energy were computed using the rule of mixtures. An Arrhenius kinetics scheme was integrated to model the chemical reactions. A locally controlled dissipation was introduced that induces a non-oscillatory stabilized scheme for the shock front. The FE model was validated using analytical solutions for the Sod shock and ZND strong detonation models and then used to perform 2D and 3D shock simulations. We will present benchmark problems for geometries in which a single HMX crystal is subjected to a shock condition. Our current progress towards developing microstructural models of HMX/binder composites will also be discussed.
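The JWL product EOS named above has a standard closed form; the sketch below evaluates it with illustrative HMX-like constants (assumed values for demonstration — a production calculation should use a properly calibrated parameter set):

```python
import math

def jwl_pressure(v_rel, e_vol, A, B, R1, R2, omega):
    """Standard JWL product EOS:
    p = A(1 - w/(R1 V))exp(-R1 V) + B(1 - w/(R2 V))exp(-R2 V) + w E / V,
    with V the relative volume and E the internal energy per unit
    initial volume (units here: GPa, with E in GPa = GJ/m^3)."""
    return (A * (1.0 - omega / (R1 * v_rel)) * math.exp(-R1 * v_rel)
            + B * (1.0 - omega / (R2 * v_rel)) * math.exp(-R2 * v_rel)
            + omega * e_vol / v_rel)

# Illustrative HMX-like constants (assumptions, not the talk's calibration)
A, B = 778.3, 7.071          # GPa
R1, R2, omega = 4.2, 1.0, 0.30
e0 = 10.5                    # GPa (GJ/m^3)

p_cj_like = jwl_pressure(0.74, e0, A, B, R1, R2, omega)   # tens of GPa
p_expanded = jwl_pressure(2.0, e0, A, B, R1, R2, omega)   # far lower
```

The exponential terms dominate near the Chapman-Jouguet state, while the ω E/V term controls the late expansion; mixing this with a Gruneisen EOS for the solid under pressure-temperature equilibrium gives the partially reacted states the abstract describes.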
Verification of RESRAD-BUILD computer code, version 3.1.
2003-06-02
RESRAD-BUILD is a computer model for analyzing the radiological doses resulting from the remediation and occupancy of buildings contaminated with radioactive material. It is part of a family of codes that includes RESRAD, RESRAD-CHEM, RESRAD-RECYCLE, RESRAD-BASELINE, and RESRAD-ECORISK. The RESRAD-BUILD models were developed and codified by Argonne National Laboratory (ANL); version 1.5 of the code and the user's manual were publicly released in 1994. The original version of the code was written for the Microsoft DOS operating system. However, subsequent versions of the code were written for the Microsoft Windows operating system. The purpose of the present verification task (which includes validation as defined in the standard) is to provide an independent review of the latest version of RESRAD-BUILD under the guidance provided by ANSI/ANS-10.4 for verification and validation of existing computer programs. This approach consists of an a posteriori V&V review which takes advantage of available program development products as well as user experience. The purpose, as specified in ANSI/ANS-10.4, is to determine whether the program produces valid responses when used to analyze problems within a specific domain of applications, and to document the level of verification. The culmination of these efforts is the production of this formal Verification Report. The first step in performing the verification of an existing program was the preparation of a Verification Review Plan. The review plan consisted of identifying: reason(s) why a posteriori verification is to be performed; scope and objectives for the level of verification selected; development products to be used for the review; availability and use of user experience; and actions to be taken to supplement missing or unavailable development products. The purpose, scope and objectives for the level of verification selected are described in this section of the Verification Report. The development products that were used for
The Proteus Navier-Stokes code. [two and three dimensional computational fluid dynamics
NASA Technical Reports Server (NTRS)
Towne, Charles E.; Schwab, John R.
1992-01-01
An effort is currently underway at NASA Lewis to develop two and three dimensional Navier-Stokes codes, called Proteus, for aerospace propulsion applications. Proteus solves the Reynolds-averaged, unsteady, compressible Navier-Stokes equations in strong conservation law form. Turbulence is modeled using a Baldwin-Lomax based algebraic eddy viscosity model. In addition, options are available to solve thin layer or Euler equations, and to eliminate the energy equation by assuming constant stagnation enthalpy. An extensive series of validation cases has been run, primarily using the two dimensional planar/axisymmetric version of the code. Several flows were computed that have exact solutions, such as: fully developed channel and pipe flow; Couette flow with and without pressure gradients; unsteady Couette flow formation; flow near a suddenly accelerated flat plate; flow between concentric rotating cylinders; and flow near a rotating disk. The two dimensional version of the Proteus code has been released, and the three dimensional code is scheduled for release in late 1991.
The role of the PIRT process in identifying code improvements and executing code development
Wilson, G.E.; Boyack, B.E.
1997-07-01
In September 1988, the USNRC issued a revised ECCS rule for light water reactors that allows, as an option, the use of best estimate (BE) plus uncertainty methods in safety analysis. The key feature of this licensing option relates to quantification of the uncertainty in the determination that an NPP has a "low" probability of violating the safety criteria specified in 10 CFR 50. To support the 1988 licensing revision, the USNRC and its contractors developed the CSAU evaluation methodology to demonstrate the feasibility of the BE plus uncertainty approach. The PIRT process, Step 3 in the CSAU methodology, was originally formulated to support the BE plus uncertainty licensing option as executed in the CSAU approach to safety analysis. Subsequent work has shown the PIRT process to be a much more powerful tool than conceived in its original form. Through further development and application, the PIRT process has shown itself to be a robust means to establish safety analysis computer code phenomenological requirements in their order of importance to such analyses. Used early in research directed toward these objectives, PIRT results also provide the technical basis and cost effective organization for new experimental programs needed to improve the safety analysis codes for new applications. The primary purpose of this paper is to describe the generic PIRT process, including typical and common illustrations from prior applications. The secondary objective is to provide guidance to future applications of the process to help them focus, in a graded approach, on systems, components, processes and phenomena that have been common in several prior applications.
Russ, Daniel E; Ho, Kwan-Yuet; Colt, Joanne S; Armenti, Karla R; Baris, Dalsu; Chow, Wong-Ho; Davis, Faith; Johnson, Alison; Purdue, Mark P; Karagas, Margaret R; Schwartz, Kendra; Schwenn, Molly; Silverman, Debra T; Johnson, Calvin A; Friesen, Melissa C
2016-06-01
Mapping job titles to standardised occupation classification (SOC) codes is an important step in identifying occupational risk factors in epidemiological studies. Because manual coding is time-consuming and has moderate reliability, we developed an algorithm called SOCcer (Standardized Occupation Coding for Computer-assisted Epidemiologic Research) to assign SOC-2010 codes based on free-text job description components. Job title and task-based classifiers were developed by comparing job descriptions to multiple sources linking job and task descriptions to SOC codes. An industry-based classifier was developed based on the SOC prevalence within an industry. These classifiers were used in a logistic model trained using 14 983 jobs with expert-assigned SOC codes to obtain empirical weights for an algorithm that scored each SOC/job description. We assigned the highest scoring SOC code to each job. SOCcer was validated in 2 occupational data sources by comparing SOC codes obtained from SOCcer to expert assigned SOC codes and lead exposure estimates obtained by linking SOC codes to a job-exposure matrix. For 11 991 case-control study jobs, SOCcer-assigned codes agreed with 44.5% and 76.3% of manually assigned codes at the 6-digit and 2-digit level, respectively. Agreement increased with the score, providing a mechanism to identify assignments needing review. Good agreement was observed between lead estimates based on SOCcer and manual SOC assignments (κ 0.6-0.8). Poorer performance was observed for inspection job descriptions, which included abbreviations and worksite-specific terminology. Although some manual coding will remain necessary, using SOCcer may improve the efficiency of incorporating occupation into large-scale epidemiological studies.
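The scoring scheme can be sketched abstractly: per-candidate classifier scores combined through logistic weights, with the top-scoring SOC code assigned and the score available to flag low-confidence picks. All weights, scores, and codes below are invented for illustration, not SOCcer's fitted model:

```python
import math

# Invented weights standing in for the logistic-regression fit described
# in the abstract; the candidate SOC codes and scores are illustrative.
WEIGHTS = {"intercept": -4.0, "title": 5.0, "task": 2.5, "industry": 1.5}

def soc_score(features):
    """Combine per-classifier scores through logistic weights."""
    z = WEIGHTS["intercept"] + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))   # logistic score in (0, 1)

def assign_soc(candidates):
    """candidates: {soc_code: {'title': s, 'task': s, 'industry': s}}.
    Returns the top-scoring SOC code and its score; the score can be
    thresholded to route uncertain jobs to manual review."""
    scored = {soc: soc_score(f) for soc, f in candidates.items()}
    best = max(scored, key=scored.get)
    return best, scored[best]

code, score = assign_soc({
    "47-2111": {"title": 0.9, "task": 0.7, "industry": 0.8},
    "47-2031": {"title": 0.2, "task": 0.4, "industry": 0.8},
})
```

This mirrors the abstract's finding that agreement rises with the score: a high logistic score signals a confident assignment, while low scores identify the jobs worth expert review.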
Russ, Daniel E.; Ho, Kwan-Yuet; Colt, Joanne S.; Armenti, Karla R.; Baris, Dalsu; Chow, Wong-Ho; Davis, Faith; Johnson, Alison; Purdue, Mark P.; Karagas, Margaret R.; Schwartz, Kendra; Schwenn, Molly; Silverman, Debra T.; Johnson, Calvin A.; Friesen, Melissa C.
2016-01-01
Background Mapping job titles to standardized occupation classification (SOC) codes is an important step in identifying occupational risk factors in epidemiologic studies. Because manual coding is time-consuming and has moderate reliability, we developed an algorithm called SOCcer (Standardized Occupation Coding for Computer-assisted Epidemiologic Research) to assign SOC-2010 codes based on free-text job description components. Methods Job title and task-based classifiers were developed by comparing job descriptions to multiple sources linking job and task descriptions to SOC codes. An industry-based classifier was developed based on the SOC prevalence within an industry. These classifiers were used in a logistic model trained using 14,983 jobs with expert-assigned SOC codes to obtain empirical weights for an algorithm that scored each SOC/job description. We assigned the highest scoring SOC code to each job. SOCcer was validated in two occupational data sources by comparing SOC codes obtained from SOCcer to expert assigned SOC codes and lead exposure estimates obtained by linking SOC codes to a job-exposure matrix. Results For 11,991 case-control study jobs, SOCcer-assigned codes agreed with 44.5% and 76.3% of manually assigned codes at the 6- and 2-digit level, respectively. Agreement increased with the score, providing a mechanism to identify assignments needing review. Good agreement was observed between lead estimates based on SOCcer and manual SOC assignments (kappa: 0.6–0.8). Poorer performance was observed for inspection job descriptions, which included abbreviations and worksite-specific terminology. Conclusions Although some manual coding will remain necessary, using SOCcer may improve the efficiency of incorporating occupation into large-scale epidemiologic studies. PMID:27102331
NASA Astrophysics Data System (ADS)
Cummings, Mary L.
1994-09-01
A FORTRAN computer code (SKINTEMP) has been developed to calculate transient missile/aircraft aerodynamic heating parameters utilizing basic flight parameters such as altitude, Mach number, and angle of attack. The insulated skin temperature of a vehicle surface on either the fuselage (axisymmetric body) or wing (two-dimensional body) is computed from a basic heat balance relationship throughout the entire spectrum (subsonic, transonic, supersonic, hypersonic) of flight. This calculation method employs a simple finite difference procedure which considers radiation, forced convection, and non-reactive chemistry. Surface pressure estimates are based on a modified Newtonian flow model. Eckert's reference temperature method is used as the forced convection heat transfer model. SKINTEMP predictions are compared with a limited number of test cases. SKINTEMP was developed as a tool to enhance the conceptual design process of high speed missiles and aircraft. Recommendations are made for possible future development of SKINTEMP to further support the design process.
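The ingredients of that heat balance (a recovery-factor adiabatic wall temperature, Eckert's reference temperature, and an explicit finite-difference skin update with radiation and forced convection) can be sketched as follows; the constants and property values are illustrative assumptions, not SKINTEMP's models:

```python
# All constants below are illustrative assumptions, not SKINTEMP's models.
def adiabatic_wall_temp(T_e, M_e, r=0.85, gamma=1.4):
    """Adiabatic wall temperature with recovery factor r."""
    return T_e * (1.0 + r * 0.5 * (gamma - 1.0) * M_e ** 2)

def eckert_reference_temp(T_e, T_w, T_aw):
    """Eckert's reference temperature for evaluating fluid properties."""
    return T_e + 0.5 * (T_w - T_e) + 0.22 * (T_aw - T_e)

def skin_temp_step(T_w, h_c, T_aw, eps, dt, rho_c_thick, sigma=5.670e-8):
    """One explicit finite-difference step of the insulated-skin balance:
    thermal storage = forced convection in - reradiation out."""
    q_net = h_c * (T_aw - T_w) - eps * sigma * T_w ** 4
    return T_w + dt * q_net / rho_c_thick

T_e, M = 220.0, 4.0                  # edge temperature (K) and Mach number
T_aw = adiabatic_wall_temp(T_e, M)
T_star = eckert_reference_temp(T_e, 300.0, T_aw)  # where properties are taken

T_w = 300.0
for _ in range(600):                 # 60 s transient, dt = 0.1 s
    T_w = skin_temp_step(T_w, h_c=120.0, T_aw=T_aw, eps=0.8,
                         dt=0.1, rho_c_thick=8.0e3)
```

The skin relaxes toward the radiation-limited equilibrium where convective heating balances reradiation; SKINTEMP additionally recomputes the convection coefficient along the trajectory from the Eckert reference-temperature properties and a modified Newtonian pressure estimate.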
Agarwal, Sapan; Quach, Tu -Thach; Parekh, Ojas; DeBenedictis, Erik P.; James, Conrad D.; Marinella, Matthew J.; Aimone, James B.
2016-01-06
The exponential increase in data over the last decade presents a significant challenge to analytics efforts that seek to process and interpret such data for various applications. Neural-inspired computing approaches are being developed in order to leverage the computational properties of the analog, low-power data processing observed in biological systems. Analog resistive memory crossbars can perform a parallel read or a vector-matrix multiplication as well as a parallel write or a rank-1 update with high computational efficiency. For an N × N crossbar, these two kernels can be O(N) more energy efficient than a conventional digital memory-based architecture. If the read operation is noise limited, the energy to read a column can be independent of the crossbar size (O(1)). These two kernels form the basis of many neuromorphic algorithms such as image, text, and speech recognition. For instance, these kernels can be applied to a neural sparse coding algorithm to give an O(N) reduction in energy for the entire algorithm when run with finite precision. Sparse coding is a rich problem with a host of applications including computer vision, object tracking, and more generally unsupervised learning.
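The two kernels are easy to state in code; plain NumPy stands in for the analog crossbar here (a sketch of the operations themselves, not of the hardware's energy behavior):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 64
G = rng.uniform(0.0, 1.0, size=(N, N))   # conductance matrix of the crossbar

def crossbar_read(G, v):
    """Parallel read: one vector-matrix multiply against the array."""
    return v @ G

def crossbar_write(G, row_v, col_v, lr=0.01):
    """Parallel write: a rank-1 (outer-product) update of the array."""
    return G + lr * np.outer(row_v, col_v)

v = rng.uniform(size=N)
i_out = crossbar_read(G, v)              # read kernel (vector-matrix multiply)
G2 = crossbar_write(G, v, i_out)         # write kernel (rank-1 update)
```

On the analog array both operations happen across all N² cells at once, which is the source of the O(N) energy advantage the abstract claims; in a sparse coding loop the read computes neuron inputs and the rank-1 write implements the dictionary update.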
Agarwal, Sapan; Quach, Tu-Thach; Parekh, Ojas; Hsia, Alexander H.; DeBenedictis, Erik P.; James, Conrad D.; Marinella, Matthew J.; Aimone, James B.
2016-01-01
The exponential increase in data over the last decade presents a significant challenge to analytics efforts that seek to process and interpret such data for various applications. Neural-inspired computing approaches are being developed in order to leverage the computational properties of the analog, low-power data processing observed in biological systems. Analog resistive memory crossbars can perform a parallel read or a vector-matrix multiplication as well as a parallel write or a rank-1 update with high computational efficiency. For an N × N crossbar, these two kernels can be O(N) more energy efficient than a conventional digital memory-based architecture. If the read operation is noise limited, the energy to read a column can be independent of the crossbar size (O(1)). These two kernels form the basis of many neuromorphic algorithms such as image, text, and speech recognition. For instance, these kernels can be applied to a neural sparse coding algorithm to give an O(N) reduction in energy for the entire algorithm when run with finite precision. Sparse coding is a rich problem with a host of applications including computer vision, object tracking, and more generally unsupervised learning. PMID:26778946
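The two crossbar kernels described above map directly onto dense linear algebra: a parallel read is a vector-matrix multiply, and a parallel write is a rank-1 (outer-product) update. A minimal NumPy sketch of that correspondence follows; the 4 × 4 conductance matrix and the learning rate are invented for illustration and do not come from the paper.

```python
import numpy as np

# Hypothetical illustration of the two crossbar kernels on an N x N
# conductance matrix G: a "parallel read" is a vector-matrix multiply,
# and a "parallel write" is a rank-1 (outer-product) update.

N = 4
rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(N, N))   # crossbar conductances (arbitrary)

def parallel_read(G, v):
    """Vector-matrix multiply: output currents for input voltages v."""
    return v @ G

def parallel_write(G, x, y, lr=0.1):
    """Rank-1 update: each cell (i, j) is nudged by lr * x[i] * y[j]."""
    return G + lr * np.outer(x, y)

v = np.ones(N)
currents = parallel_read(G, v)       # one read touches all N^2 cells at once
G = parallel_write(G, v, currents)   # one write updates all N^2 cells at once
```

In hardware the multiply-accumulates happen simultaneously in the analog domain, which is where the O(N) energy advantage over a digital memory-based architecture comes from; the sketch only mirrors the arithmetic.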
Codebase: A commercially developed code management system and code transfer facility
NASA Astrophysics Data System (ADS)
Mueller, K.; Pfeifer, P.
1989-12-01
The CODEBASE package has been developed by the commercial software house BASE GmbH according to requirements typical of large HEP collaborations. CODEBASE currently runs on IBM/MVS, IBM/VM, VAX/VMS, and UNIX machines. An installation on CRAYs under UNICOS is foreseen. With code being developed by geographically distributed teams, priority has been given to powerful tools for code distribution and for well-documented version keeping. Truly alternative versions of code are supported. Ease of use is provided by a menu-type full-screen interface which is common to all installations and which is itself interfaced to the respective local editor.
On the Development of a Gridless Inflation Code for Parachute Simulations
STRICKLAND,JAMES H.; HOMICZ,GREGORY F.; GOSSLER,ALBERT A.; WOLFE,WALTER P.; PORTER,VICKI L.
2000-08-29
In this paper the authors present the current status of an unsteady 3D parachute simulation code which is being developed at Sandia National Laboratories under the Department of Energy's Accelerated Strategic Computing Initiative (ASCI). The Vortex Inflation PARachute code (VIPAR) which embodies this effort will eventually be able to perform complete numerical simulations of ribbon parachute deployment, inflation, and steady descent. At the present time they have a working serial version of the uncoupled fluids code which can simulate unsteady 3D incompressible flows around bluff bodies made up of triangular membrane elements. A parallel version of the code has just been completed which will allow one to compute flows over complex geometries utilizing several thousand processors on one of the new DOE teraFLOP computers.
A Monte Carlo Code to Compute Energy Fluxes in Cometary Nuclei
NASA Astrophysics Data System (ADS)
Moreno, F.; Muñoz, O.; López-Moreno, J. J.; Molina, A.; Ortiz, J. L.
2002-04-01
A Monte Carlo model designed to compute both the input and output radiation fields from spherical-shell cometary atmospheres has been developed. The code is an improved version of that by H. Salo (1988, Icarus 76, 253-269); it includes the computation of the full Stokes vector and can compute both the input fluxes impinging on the nucleus surface and the output radiation. This will have specific applications for the near-nucleus photometry, polarimetry, and imaging data collection planned in the near future from space probes. After carrying out some validation tests of the code, we consider here the effects of including the full 4×4 scattering matrix in the calculations of the radiative flux impinging on cometary nuclei. As input to the code we used realistic phase matrices derived by fitting the observed behavior of the linear polarization as a function of phase angle. The observed single scattering linear polarization phase curves of comets are fairly well represented by a mixture of magnesium-rich olivine particles and small carbonaceous particles. The input matrix of the code is thus given by the phase matrix for olivine as obtained in the laboratory plus a variable scattering fraction phase matrix for absorbing carbonaceous particles. These fractions are 3.5% for Comet Halley and 6% for Comet Hale-Bopp, the comet with the highest percentage of all those observed. The errors in the total input flux impinging on the nucleus surface caused by neglecting polarization are found to be within 10% for the full range of solar zenith angles. Additional tests on the resulting linear polarization of the light emerging from cometary nuclei in near-nucleus observation conditions at a variety of coma optical thicknesses show that the polarization phase curves do not experience any significant changes for optical thicknesses τ≳0.25 and Halley-like surface albedo, except near 90° phase angle.
NASA Astrophysics Data System (ADS)
Kharkhordin, I. L.
2013-12-01
Correct calculation of multistep radioactive decay is important for forecasting radionuclide transport at contaminated sites and for designing radionuclide storage facilities, as well as for a number of applications of natural radioactive tracers to understanding groundwater flow in complex hydrogeological systems. Radioactive chains can involve a number of branches with certain decay probabilities and up to fourteen steps. The general description of radioactive decay in a complex system can be presented as a system of linear differential equations. Numerical solution of this system encounters difficulties connected with the wide range over which radioactive decay constants vary. In the present work, a database with 1253 records of radioactive isotope decay parameters for 97 elements was created. An algorithm for constructing and evaluating analytical solutions was elaborated for an arbitrary system of radioactive isotopes, taking into account possible chain branching and connection. The algorithm is based on radionuclide decay graphs. The main steps of the algorithm are as follows: a) searching for all possible isotopes in the database and creating the full isotope list; b) looking for the main parent isotopes; c) constructing all possible radioactive chains; d) looking for branchings and connections in the decay chains, marking links as primary (left chain in the graph for the main parent isotope), secondary (after a connection), or recurring (before a branching); e) constructing and calculating the coefficients for the analytical solutions. The developed computer code was tested on a few simple systems: Cs-135, a one-step decay; Sr-90 (Y-90), a two-step decay; and a U-238 + U-235 mixture, a complex decay with branching. Calculation of radiogenic He-4 is also possible, which could be an important application for calibrating groundwater flow and transport models with natural tracers. The computer code for multistep radioactive decay calculation was elaborated for incorporation into the NIMFA code. NIMFA is a parallel computer code
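For an unbranched chain with distinct decay constants, the analytical solution the abstract refers to reduces to the classical Bateman form. A minimal sketch follows; the decay constants and initial inventories used below are invented for illustration, and real chains require the branching and connection handling described above.

```python
import math

def bateman(N0, lams, t):
    """Number of atoms of the last isotope in a linear decay chain at
    time t, given N0 initial atoms of the parent and distinct decay
    constants lams[0..n-1] (classical Bateman solution)."""
    n = len(lams)
    total = 0.0
    for i in range(n):
        denom = 1.0
        for j in range(n):
            if j != i:
                denom *= (lams[j] - lams[i])
        total += math.exp(-lams[i] * t) / denom
    prefac = N0
    for j in range(n - 1):          # product of all decay constants but the last
        prefac *= lams[j]
    return prefac * total

# One-step decay: reduces to plain exponential decay of the parent.
n_parent = bateman(1000.0, [0.1], 2.0)   # = 1000 * exp(-0.2)
```

For a single isotope the formula collapses to N0·exp(-λt); for two steps it reproduces the familiar parent-daughter expression N0·λ1/(λ2-λ1)·(e^{-λ1 t} - e^{-λ2 t}).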
Application of the TEMPEST computer code to canister-filling heat transfer problems
Farnsworth, R.K.; Faletti, D.W.; Budden, M.J.
1988-03-01
Pacific Northwest Laboratory (PNL) researchers used the TEMPEST computer code to simulate thermal cooldown behavior of nuclear waste glass after it was poured into steel canisters for long-term storage. The objective of this work was to determine the accuracy and applicability of the TEMPEST code when used to compute canister thermal histories. First, experimental data were obtained to provide the basis for comparing TEMPEST-generated predictions. Five canisters were instrumented with appropriately located radial and axial thermocouples. The canisters were filled using the pilot-scale ceramic melter (PSCM) at PNL. Each canister was filled in either a continuous or a batch filling mode. One of the canisters was also filled within a turntable simulant (a group of cylindrical shells with heat transfer resistances similar to those in an actual melter turntable). This was necessary to provide a basis for assessing the ability of the TEMPEST code to also model the transient cooling of canisters in a melter turntable. The continuous-fill model, Version M, was found to predict temperatures with more accuracy. The turntable simulant experiment demonstrated that TEMPEST can adequately model the asymmetric temperature field caused by the turntable geometry. Further, TEMPEST can acceptably predict the canister cooling history within a turntable, despite code limitations in computing simultaneous radiation and convection heat transfer between shells, along with uncertainty in stainless-steel surface emissivities. Based on the successful performance of TEMPEST Version M, development was initiated to incorporate (1) full viscous glass convection, (2) a dynamically adaptive grid that automatically follows the glass/air interface throughout the transient, and (3) a full enclosure radiation model to allow radiation heat transfer to non-nearest-neighbor cells. 5 refs., 47 figs., 17 tabs.
[Vascular assessment in stroke codes: role of computed tomography angiography].
Mendigaña Ramos, M; Cabada Giadas, T
2015-01-01
Advances in imaging studies for acute ischemic stroke are largely due to the development of new efficacious treatments carried out in the acute phase. Together with computed tomography (CT) perfusion studies, CT angiography facilitates the selection of patients who are likely to benefit from appropriate early treatment. CT angiography plays an important role in the workup for acute ischemic stroke because it makes it possible to confirm vascular occlusion, assess the collateral circulation, and obtain an arterial map that is very useful for planning endovascular treatment. In this review about CT angiography, we discuss the main technical characteristics, emphasizing the usefulness of the technique in making the right diagnosis and improving treatment strategies.
Computation of turbine flowfields with a Navier-Stokes code
NASA Technical Reports Server (NTRS)
Hobson, G. V.; Lakshminarayana, B.
1990-01-01
A new technique has been developed for the solution of the incompressible Navier-Stokes equations. The numerical technique, derived from a pressure substitution method (PSM), overcomes many of the deficiencies of the pressure correction method. This technique allows for the direct solution of the actual pressure in the form of a Poisson equation, which is derived from the pressure-weighted substitution of the full momentum equations into the continuity equation. In two dimensions, a turbine flowfield, including heat transfer, has been computed with this method, and the prediction of the cascade performance is presented. The extension of the pressure correction method to the solution of three-dimensional flows is also presented for laminar flow in an S-shaped duct and turbulent flow in the end-wall region of a turbine cascade.
NASA Technical Reports Server (NTRS)
Gardner, Kevin D.; Liu, Jong-Shang; Murthy, Durbha V.; Kruse, Marlin J.; James, Darrell
1999-01-01
AlliedSignal Engines, in cooperation with NASA GRC (National Aeronautics and Space Administration Glenn Research Center), completed an evaluation of recently-developed aeroelastic computer codes using test cases from the AlliedSignal Engines fan blisk and turbine databases. Test data included strain gage, performance, and steady-state pressure information obtained for conditions where synchronous or flutter vibratory conditions were found to occur. Aeroelastic codes evaluated included quasi 3-D UNSFLO (MIT Developed/AE Modified, Quasi 3-D Aeroelastic Computer Code), 2-D FREPS (NASA-Developed Forced Response Prediction System Aeroelastic Computer Code), and 3-D TURBO-AE (NASA/Mississippi State University Developed 3-D Aeroelastic Computer Code). Unsteady pressure predictions for the turbine test case were used to evaluate the forced response prediction capabilities of each of the three aeroelastic codes. Additionally, one of the fan flutter cases was evaluated using TURBO-AE. The UNSFLO and FREPS evaluation predictions showed good agreement with the experimental test data trends, but quantitative improvements are needed. UNSFLO over-predicted turbine blade response reductions, while FREPS under-predicted them. The inviscid TURBO-AE turbine analysis predicted no discernible blade response reduction, indicating the necessity of including viscous effects for this test case. For the TURBO-AE fan blisk test case, significant effort was expended getting the viscous version of the code to give converged steady flow solutions for the transonic flow conditions. Once converged, the steady solutions provided an excellent match with test data and the calibrated DAWES (AlliedSignal 3-D Viscous Steady Flow CFD Solver). However, efforts expended establishing quality steady-state solutions prevented exercising the unsteady portion of the TURBO-AE code during the present program. AlliedSignal recommends that unsteady pressure measurement data be obtained for both test cases examined
Benchmark Problems Used to Assess Computational Aeroacoustics Codes
NASA Technical Reports Server (NTRS)
Dahl, Milo D.; Envia, Edmane
2005-01-01
The field of computational aeroacoustics (CAA) encompasses numerical techniques for calculating all aspects of sound generation and propagation in air directly from fundamental governing equations. Aeroacoustic problems typically involve flow-generated noise, with and without the presence of a solid surface, and the propagation of the sound to a receiver far away from the noise source. It is a challenge to obtain accurate numerical solutions to these problems. The NASA Glenn Research Center has been at the forefront in developing and promoting the development of CAA techniques and methodologies for computing the noise generated by aircraft propulsion systems. To assess the technological advancement of CAA, Glenn, in cooperation with the Ohio Aerospace Institute and the AeroAcoustics Research Consortium, organized and hosted the Fourth CAA Workshop on Benchmark Problems. Participants from industry and academia from both the United States and abroad joined to present and discuss solutions to benchmark problems. These demonstrated technical progress ranging from the basic challenges of accurate CAA calculations to the solution of CAA problems of increasing complexity and difficulty. The results are documented in the proceedings of the workshop. Problems were solved in five categories. In three of the five categories, exact solutions were available for comparison with CAA results. A fourth category of problems representing sound generation from either a single airfoil or a blade row interacting with a gust (i.e., problems relevant to fan noise) had approximate analytical or completely numerical solutions. The fifth category of problems involved sound generation in a viscous flow. In this case, the CAA results were compared with experimental data.
Development and application of computational aerothermodynamics flowfield computer codes
NASA Technical Reports Server (NTRS)
Venkatapathy, Ethiraj
1992-01-01
Presented is a collection of papers on research activities carried out during the funding period of October 1991 to March 1992. Topics covered include: blunt body flows in thermochemical equilibrium; thermochemical relaxation in high enthalpy nozzle flow; single expansion ramp nozzle simulations; lunar return aerobraking; line boundary problem for three dimensional grids; and unsteady shock induced combustion.
BRYNTRN: A baryon transport computer code, computation procedures and data base
NASA Technical Reports Server (NTRS)
Wilson, John W.; Townsend, Lawrence W.; Chun, Sang Y.; Buck, Warren W.; Khan, Ferdous; Cucinotta, Frank
1988-01-01
The development of an interaction data base and of a numerical solution for the transport of baryons through arbitrary shield materials, based on a straight-ahead approximation of the Boltzmann equation, is described. The code is most accurate for continuous energy boundary values but gives reasonable results for discrete spectra at the boundary, even with a relatively coarse energy grid (30 points) and large spatial increments (1 cm in H2O).
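Under the straight-ahead approximation, particles travel along a single direction, so a monoenergetic beam with no secondary production simply attenuates with depth. A toy sketch of marching through a shield in the 1 cm spatial increments quoted above follows; the macroscopic cross section is invented, and the real code transports a coupled energy spectrum with secondary-particle production.

```python
import math

# Toy straight-ahead transport: with no angular scattering or secondaries,
# a monoenergetic beam obeys phi(x) = phi0 * exp(-sigma * x). Marching in
# 1-cm steps with exact per-step attenuation reproduces the analytic result.
# The macroscopic cross section below is invented for the sketch.

sigma = 0.05          # per cm, hypothetical removal cross section
phi = 1.0             # unit incident flux
dx = 1.0              # 1-cm spatial increment, as in the abstract
for _ in range(30):   # march 30 cm into the shield
    phi *= math.exp(-sigma * dx)

analytic = math.exp(-sigma * 30.0)   # closed-form attenuation at 30 cm
```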
REMAP: A computer code that transfers node information between dissimilar grids
Shapiro, A.B.
1990-04-01
REMAP is a computer code that transfers the axisymmetric, two-dimensional planar, or three-dimensional temperature field from one finite element mesh to another. The meshes may be arbitrary in the number of elements and in their geometry. REMAP interpolates or extrapolates the node temperatures from the old mesh to the new mesh using linear, bilinear, or trilinear isoparametric finite element shape functions. REMAP is used to transfer the temperature field from a thermal analysis mesh to a more finely discretized structural analysis mesh when performing a thermal stress analysis. REMAP was designed to be used with the finite element heat transfer codes TOPAZ2D and TOPAZ3D and the solid mechanics codes NIKE2D and NIKE3D. The I/O formats in REMAP can be easily modified to accept input from other codes (e.g., finite difference) and to generate output files for other structural codes. REMAP can be used to transfer any scalar field variable between dissimilar finite element meshes. The idea of a coarse filter followed by a fine filter was used to determine which element from the old mesh contains a node point from the new mesh. The coarse filter determines a subset of elements from the old mesh that may contain the new node point; the fine filter determines the element that actually contains it. REMAP uses the ray-surface intersection algorithm developed for the FACET code for the fine filter. This algorithm has the added capability of determining which element the node is closest to if the node point lies outside the perimeter of the old mesh. Once an element from the old mesh has been identified as containing, or closest to, the new node point, the natural coordinates of the node point are calculated. The isoparametric finite element shape functions are calculated next. These shape functions are then used to interpolate or extrapolate the temperatures from the nodes comprising the old element to the new node point.
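The interpolation step can be illustrated with the bilinear isoparametric case: once the containing old-mesh element and the natural coordinates (xi, eta) of the new node are known, the node temperature is a shape-function-weighted sum of the old element's corner temperatures. A minimal sketch, with hypothetical node ordering and temperatures:

```python
import numpy as np

def bilinear_shape(xi, eta):
    """Bilinear isoparametric shape functions on the reference square
    [-1, 1] x [-1, 1], corner nodes ordered counter-clockwise."""
    return 0.25 * np.array([(1 - xi) * (1 - eta),
                            (1 + xi) * (1 - eta),
                            (1 + xi) * (1 + eta),
                            (1 - xi) * (1 + eta)])

def interpolate(node_temps, xi, eta):
    """Temperature at natural coordinates (xi, eta) inside the old element."""
    return float(bilinear_shape(xi, eta) @ node_temps)

# A new-mesh node at the element centre gets the average of the corners:
T = np.array([100.0, 200.0, 300.0, 400.0])
t_centre = interpolate(T, 0.0, 0.0)
```

Extrapolation to a node just outside the old mesh uses the same formula with |xi| or |eta| slightly greater than 1, which is why the closest-element search matters.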
Finite element code development for modeling detonation of HMX composites
NASA Astrophysics Data System (ADS)
Duran, Adam V.; Sundararaghavan, Veera
2017-01-01
In this work, we present a hydrodynamics code for modeling shock and detonation waves in HMX. A stable, efficient solution strategy based on a Taylor-Galerkin finite element (FE) discretization was developed to solve the reactive Euler equations. In our code, well-calibrated equations of state for the solid unreacted material and the gaseous reaction products have been implemented, along with a chemical reaction scheme and a mixing rule to define the properties of partially reacted states. A linear Grüneisen equation of state, calibrated from experiments, was employed for the unreacted HMX. The JWL form was used to model the EOS of the gaseous reaction products. It is assumed that the unreacted explosive and the reaction products are in both pressure and temperature equilibrium. The overall specific volume and internal energy were computed using the rule of mixtures. An Arrhenius kinetics scheme was integrated to model the chemical reactions. A locally controlled dissipation was introduced that yields a non-oscillatory stabilized scheme for the shock front. The FE model was validated using analytical solutions for the Sod shock tube and ZND strong detonation models. Benchmark problems are presented for geometries in which a single HMX crystal is subjected to a shock condition.
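The JWL product EOS mentioned above gives pressure as a function of relative volume V = v/v0 and internal energy density E: p = A(1 - ω/(R1·V))·exp(-R1·V) + B(1 - ω/(R2·V))·exp(-R2·V) + ωE/V. A minimal sketch of evaluating it; the coefficients below are illustrative placeholders, not the calibrated HMX values from the paper.

```python
import math

def jwl_pressure(V, E, A, B, R1, R2, omega):
    """JWL equation of state for detonation products: pressure as a
    function of relative volume V = v/v0 and internal energy density E
    (units consistent with the A, B, E inputs, e.g. GPa)."""
    return (A * (1 - omega / (R1 * V)) * math.exp(-R1 * V)
            + B * (1 - omega / (R2 * V)) * math.exp(-R2 * V)
            + omega * E / V)

# Illustrative (hypothetical) coefficients, roughly HMX-like in magnitude:
p = jwl_pressure(V=1.0, E=10.0, A=778.0, B=7.07, R1=4.2, R2=1.0, omega=0.3)
```

In a mixture-rule code like the one described, this products pressure is equilibrated against the unreacted-solid Grüneisen pressure at each partially reacted state.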
Development of covariance capabilities in EMPIRE code
Herman,M.; Pigni, M.T.; Oblozinsky, P.; Mughabghab, S.F.; Mattoon, C.M.; Capote, R.; Cho, Young-Sik; Trkov, A.
2008-06-24
The nuclear reaction code EMPIRE has been extended to provide evaluation capabilities for neutron cross section covariances in the thermal, resolved resonance, unresolved resonance and fast neutron regions. The Atlas of Neutron Resonances by Mughabghab is used as a primary source of information on uncertainties at low energies. Care is taken to ensure consistency among the resonance parameter uncertainties and those for thermal cross sections. The resulting resonance parameter covariances are formatted in the ENDF-6 File 32. In the fast neutron range our methodology is based on model calculations with the code EMPIRE combined with experimental data through several available approaches. The model-based covariances can be obtained using deterministic (Kalman) or stochastic (Monte Carlo) propagation of model parameter uncertainties. We show that these two procedures yield comparable results. The Kalman filter and/or the generalized least-squares fitting procedures are employed to incorporate experimental information. We compare the two approaches by analyzing results for the major reaction channels on 89Y. We also discuss a long-standing issue of unreasonably low uncertainties and link it to the rigidity of the model.
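The stochastic (Monte Carlo) route to a model-based covariance can be sketched in a few lines: sample the model parameters from their assumed uncertainties, evaluate the model on an energy grid, and take the sample covariance of the outputs. Everything below (the stand-in model, the 10% parameter uncertainties, the energy grid) is invented purely for illustration and is not EMPIRE's reaction model.

```python
import numpy as np

# Monte Carlo propagation of model-parameter uncertainties into a
# cross-section covariance matrix, using an invented two-parameter model.

rng = np.random.default_rng(42)
energies = np.array([0.1, 1.0, 5.0, 14.0])        # MeV, arbitrary grid

def model(E, a, b):
    """Stand-in for a nuclear-reaction model: sigma(E; a, b)."""
    return a / np.sqrt(E) + b * np.exp(-E / 10.0)

a0, b0 = 2.0, 0.5                                  # nominal parameters
samples = np.array([model(energies,
                          a0 * (1 + 0.1 * rng.standard_normal()),
                          b0 * (1 + 0.1 * rng.standard_normal()))
                    for _ in range(5000)])

cov = np.cov(samples, rowvar=False)                # 4 x 4 covariance matrix
corr = cov / np.sqrt(np.outer(np.diag(cov), np.diag(cov)))
```

The deterministic (Kalman) route instead linearizes the model with sensitivity vectors; the abstract's point is that both procedures yield comparable covariances.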
1997-03-01
This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U. S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume is part of the manual related to the control modules for the newest updated version of this computational package.
Computation of Supersonic Jet Mixing Noise Using PARC Code With a kappa-epsilon Turbulence Model
NASA Technical Reports Server (NTRS)
Khavaran, A.; Kim, C. M.
1999-01-01
A number of modifications have been proposed in order to improve the jet noise prediction capabilities of the MGB code. This code, which was developed at General Electric, employs the concept of acoustic analogy for the prediction of turbulent mixing noise. The source convection and also the refraction of sound due to the shrouding effect of the mean flow are accounted for by incorporating the high-frequency solution to Lilley's equation for cylindrical jets (Balsa and Mani). The broadband shock-associated noise is estimated using Harper-Bourne and Fisher's shock noise theory. The proposed modifications are aimed at improving the aerodynamic predictions (source/spectrum computations) and allowing for the non-axisymmetric effects in the jet plume and nozzle geometry (sound/flow interaction). In addition, recent advances in shock noise prediction as proposed by Tam can be employed to predict the shock-associated noise as an addition to the jet mixing noise when the flow is not perfectly expanded. Here we concentrate on the aerodynamic predictions using the PARC code with a k-epsilon turbulence model and the ensuing turbulent mixing noise. The geometry under consideration is an axisymmetric convergent-divergent nozzle at its design operating conditions. Aerodynamic and acoustic computations are compared with data as well as with predictions due to the original MGB model using Reichardt's aerodynamic theory.
Light curves for bump Cepheids computed with a dynamically zoned pulsation code
NASA Technical Reports Server (NTRS)
Adams, T. F.; Castor, J. I.; Davis, C. G.
1980-01-01
The dynamically zoned pulsation code developed by Castor, Davis, and Davison was used to recalculate the Goddard model and to calculate three other Cepheid models with the same period (9.8 days). This family of models shows how the bumps and other features of the light and velocity curves change as the mass is varied at constant period. The use of a code that is capable of producing reliable light curves demonstrates that the light and velocity curves for 9.8 day Cepheid models with standard homogeneous compositions do not show bumps like those that are observed unless the mass is significantly lower than the 'evolutionary mass.' The light and velocity curves for the Goddard model presented here are similar to those computed independently by Fischel, Sparks, and Karp. They should be useful as standards for future investigators.
Thermodynamic analysis of five compressed-air energy-storage cycles. [Using CAESCAP computer code
Fort, J. A.
1983-03-01
One important aspect of the Compressed-Air Energy-Storage (CAES) Program is the evaluation of alternative CAES plant designs. The thermodynamic performance of the various configurations is particularly critical to the successful demonstration of CAES as an economically feasible energy-storage option. A computer code, the Compressed-Air Energy-Storage Cycle-Analysis Program (CAESCAP), was developed in 1982 at the Pacific Northwest Laboratory. This code was designed specifically to calculate overall thermodynamic performance of proposed CAES-system configurations. The results of applying this code to the analysis of five CAES plant designs are presented in this report. The designs analyzed were: conventional CAES; adiabatic CAES; hybrid CAES; pressurized fluidized-bed CAES; and direct coupled steam-CAES. Inputs to the code were based on published reports describing each plant cycle. For each cycle analyzed, CAESCAP calculated the thermodynamic station conditions and individual-component efficiencies, as well as overall cycle-performance-parameter values. These data were then used to diagram the availability and energy flow for each of the five cycles. The resulting diagrams graphically illustrate the overall thermodynamic performance inherent in each plant configuration, and enable a more accurate and complete understanding of each design.
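A conventional CAES plant consumes both off-peak electricity (to charge the air store) and fuel (during generation), so its overall thermodynamic performance combines a charging-energy ratio with a heat rate. A toy calculation follows, with all numbers invented, purely to illustrate the kind of overall cycle-performance parameter a code like CAESCAP evaluates.

```python
# Hypothetical overall figure of merit for a conventional CAES cycle.
# Both input quantities below are invented for the sketch.

e_in = 0.75      # kWh of off-peak electricity stored per kWh generated
q_fuel = 4200.0  # kJ of fuel burned per kWh generated (a "heat rate")

kj_per_kwh = 3600.0
primary_in = e_in * kj_per_kwh + q_fuel        # total energy in, kJ per kWh out
overall_eff = kj_per_kwh / primary_in          # dimensionless overall efficiency
```

Adiabatic and hybrid variants change the balance between the two input terms (e.g., storing compression heat reduces or eliminates q_fuel), which is exactly the kind of configuration comparison described above.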
On the performance of a 2D unstructured computational rheology code on a GPU
NASA Astrophysics Data System (ADS)
Pereira, Simão P.; Vuik, Kees; Pinho, Fernando T.; Nóbrega, João M.
2013-04-01
The present work explores the massively parallel capabilities of the most advanced graphics processing unit (GPU) architecture, codenamed "Fermi", on a two-dimensional unstructured cell-centred finite volume code. We use the SIMPLE algorithm, fully ported to the GPU, to solve the continuity and momentum equations. The benefits of this implementation are compared with a serial implementation that traditionally runs on the central processing unit (CPU). The developed codes were assessed with the benchmark problems of Poiseuille flow, for Newtonian and generalized Newtonian fluids, as well as with the lid-driven cavity and sudden expansion flows for Newtonian fluids. The parallel (GPU) code accelerated the resolution of those three problems by factors of 19, 10 and 11, respectively, in comparison with the corresponding CPU single-core counterpart. The results are a clear indication that GPUs are and will be useful in the field of computational fluid dynamics (CFD) for rheologically simple and complex fluids.
Fast Computation of Pulse Height Spectra Using SGRD Code
NASA Astrophysics Data System (ADS)
Humbert, Philippe; Méchitoua, Boukhmès
2017-09-01
SGRD (Spectroscopy, Gamma rays, Rapid, Deterministic) code is used for fast calculation of the gamma ray spectrum produced by a spherical shielded source and measured by a detector. The photon source lines originate from the radioactive decay of the unstable isotopes. The emission rate and spectrum of these primary sources are calculated using the DARWIN code. The leakage spectrum is separated in two parts: the uncollided component is transported by ray-tracing, and the scattered component is calculated using a multigroup discrete ordinates method. The pulse height spectrum is then simulated by folding the leakage spectrum with the detector response functions, which are pre-calculated using the MCNP5 code for each considered detector type. An application to the simulation of the gamma spectrum produced by a natural uranium ball coated with plexiglass and measured using a NaI detector is presented.
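The folding step described above is a matrix-vector product: each column of the pre-calculated response matrix gives the pulse height distribution the detector records for photons leaking in one energy group. A minimal sketch with an invented 4-channel, 3-group response matrix and leakage spectrum:

```python
import numpy as np

# Folding a (hypothetical) leakage spectrum with a detector response matrix.
# R[i, j] = probability that a photon leaking in energy group j produces a
# count in measured channel i; the pulse height spectrum is then R @ phi.

R = np.array([[0.6, 0.1, 0.0],
              [0.2, 0.5, 0.1],
              [0.1, 0.2, 0.5],
              [0.0, 0.1, 0.3]])        # columns sum to <= 1 (some photons escape)
phi = np.array([1.0e4, 5.0e3, 2.0e3])  # leakage spectrum per group (photons/s)

pulse_height = R @ phi                  # counts/s in each measured channel
```

In SGRD the response functions come from MCNP5 runs per detector type, so the fast deterministic part of the calculation reduces to building phi and applying this product.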
NASA Astrophysics Data System (ADS)
Reznik, A. L.; Tuzikov, A. V.; Solov'ev, A. A.; Torgov, A. V.
2016-11-01
Original codes and combinatorial-geometrical computational schemes are presented, which are developed and applied for finding exact analytical formulas that describe the probability of errorless readout of random point images recorded by a scanning aperture with a limited number of threshold levels. Combinatorial problems encountered in the course of the study and associated with the new generalization of Catalan numbers are formulated and solved. An attempt is made to find the explicit analytical form of these numbers, which is, on the one hand, a necessary stage of solving the basic research problem and, on the other hand, an independent self-consistent problem.
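For context, the ordinary Catalan numbers that the abstract's generalization extends satisfy the convolution recurrence C_{n+1} = Σ_{i=0..n} C_i · C_{n-i}. A short sketch of computing them follows; the generalized numbers studied in the paper are not specified in the abstract, so only the classical sequence is shown.

```python
def catalan(n_max):
    """Ordinary Catalan numbers C_0..C_{n_max} via the convolution
    recurrence C_{n+1} = sum_{i=0}^{n} C_i * C_{n-i}."""
    C = [1]
    for n in range(n_max):
        C.append(sum(C[i] * C[n - i] for i in range(n + 1)))
    return C

first_seven = catalan(6)   # [1, 1, 2, 5, 14, 42, 132]
```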
UCODE, a computer code for universal inverse modeling
Poeter, E.P.; Hill, M.C.
1999-01-01
This article presents the US Geological Survey computer program UCODE, which was developed in collaboration with the US Army Corps of Engineers Waterways Experiment Station and the International Ground Water Modeling Center of the Colorado School of Mines. UCODE performs inverse modeling, posed as a parameter-estimation problem, using nonlinear regression. Any application model or set of models can be used; the only requirement is that they have numerical (ASCII or text only) input and output files and that the numbers in these files have sufficient significant digits. Application models can include preprocessors and postprocessors as well as models related to the processes of interest (physical, chemical and so on), making UCODE extremely powerful for model calibration. Estimated parameters can be defined flexibly with user-specified functions. Observations to be matched in the regression can be any quantity for which a simulated equivalent value can be produced; simulated equivalent values are calculated using values that appear in the application model output files and can be manipulated with additive and multiplicative functions, if necessary. Prior, or direct, information on estimated parameters also can be included in the regression. The nonlinear regression problem is solved by minimizing a weighted least-squares objective function with respect to the parameter values using a modified Gauss-Newton method. Sensitivities needed for the method are calculated approximately by forward or central differences and problems and solutions related to this approximation are discussed. Statistics are calculated and printed for use in (1) diagnosing inadequate data or identifying parameters that probably cannot be estimated with the available data, (2) evaluating estimated parameter values, (3) evaluating the model representation of the actual processes and (4) quantifying the uncertainty of model simulated values. UCODE is intended for use on any computer operating
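The core iteration described above can be sketched compactly: form the weighted least-squares residual, approximate the sensitivities by forward differences, and solve the Gauss-Newton normal equations for a parameter update. This is a bare-bones illustration, not UCODE's modified Gauss-Newton (which adds damping and scaling), and the linear test model is invented.

```python
import numpy as np

# Minimal Gauss-Newton for a weighted least-squares objective
# S(p) = r(p)^T W r(p), with r = observed - simulated, and sensitivities
# (the Jacobian of the simulated values) from forward differences.

def gauss_newton(simulate, obs, w, p0, steps=20, h=1e-6):
    p = np.asarray(p0, dtype=float)
    W = np.diag(w)
    for _ in range(steps):
        r = obs - simulate(p)
        J = np.column_stack([(simulate(p + h * e) - simulate(p)) / h
                             for e in np.eye(len(p))])
        dp = np.linalg.solve(J.T @ W @ J, J.T @ W @ r)   # normal equations
        p = p + dp
    return p

# Recover slope and intercept of a linear "application model" from
# exact observations (hypothetical data):
x = np.linspace(0.0, 1.0, 5)
fit = gauss_newton(lambda p: p[0] * x + p[1],
                   obs=2.0 * x + 1.0, w=np.ones(5), p0=[0.0, 0.0])
```

For a linear model the method converges essentially in one step; the regression statistics UCODE prints are then built from the same Jacobian and weight matrix.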
UCODE, a computer code for universal inverse modeling
NASA Astrophysics Data System (ADS)
Poeter, Eileen P.; Hill, Mary C.
1999-05-01
This article presents the US Geological Survey computer program UCODE, which was developed in collaboration with the US Army Corps of Engineers Waterways Experiment Station and the International Ground Water Modeling Center of the Colorado School of Mines. UCODE performs inverse modeling, posed as a parameter-estimation problem, using nonlinear regression. Any application model or set of models can be used; the only requirement is that they have numerical (ASCII or text only) input and output files and that the numbers in these files have sufficient significant digits. Application models can include preprocessors and postprocessors as well as models related to the processes of interest (physical, chemical and so on), making UCODE extremely powerful for model calibration. Estimated parameters can be defined flexibly with user-specified functions. Observations to be matched in the regression can be any quantity for which a simulated equivalent value can be produced; simulated equivalent values are calculated using values that appear in the application model output files and can be manipulated with additive and multiplicative functions, if necessary. Prior, or direct, information on estimated parameters also can be included in the regression. The nonlinear regression problem is solved by minimizing a weighted least-squares objective function with respect to the parameter values using a modified Gauss-Newton method. Sensitivities needed for the method are calculated approximately by forward or central differences, and problems and solutions related to this approximation are discussed. Statistics are calculated and printed for use in (1) diagnosing inadequate data or identifying parameters that probably cannot be estimated with the available data, (2) evaluating estimated parameter values, (3) evaluating the model representation of the actual processes and (4) quantifying the uncertainty of model simulated values. UCODE is intended for use on any computer operating system.
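The modified Gauss-Newton iteration described here, weighted least squares with forward-difference sensitivities, can be sketched in a few lines. The following is a hypothetical two-parameter illustration, not UCODE's implementation (which adds Marquardt damping and convergence controls this sketch omits):

```python
import math

def gauss_newton_2p(model, xs, ys, ws, b, steps=100, h=1e-7):
    """Minimize sum_i w_i * (y_i - model(x_i, b))**2 for a 2-parameter
    model, with sensitivities approximated by forward differences
    (one of the two options the article describes)."""
    for _ in range(steps):
        r = [y - model(x, b) for x, y in zip(xs, ys)]
        # forward-difference Jacobian of the model
        J = []
        for x in xs:
            base = model(x, b)
            row = []
            for j in range(2):
                bp = list(b)
                bp[j] += h
                row.append((model(x, bp) - base) / h)
            J.append(row)
        # normal equations (J^T W J) db = J^T W r, solved as a 2x2 system
        A = [[sum(w * Ji[p] * Ji[q] for w, Ji in zip(ws, J)) for q in (0, 1)]
             for p in (0, 1)]
        g = [sum(w * Ji[p] * ri for w, Ji, ri in zip(ws, J, r)) for p in (0, 1)]
        det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
        db = [(A[1][1] * g[0] - A[0][1] * g[1]) / det,
              (A[0][0] * g[1] - A[1][0] * g[0]) / det]
        b = [bi + di for bi, di in zip(b, db)]
        if max(abs(d) for d in db) < 1e-12:
            break
    return b

# usage: recover a and k in y = a*exp(-k*x) from exact synthetic data
xs = [0.2 * i for i in range(20)]
ys = [2.0 * math.exp(-0.7 * x) for x in xs]
a_hat, k_hat = gauss_newton_2p(lambda x, b: b[0] * math.exp(-b[1] * x),
                               xs, ys, [1.0] * len(xs), [1.0, 1.0])
```

With exact data the residual vanishes at the solution, so the iteration converges to the true parameters despite the approximate Jacobian.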
Validation of NASA Thermal Ice Protection Computer Codes. Part 1; Program Overview
NASA Technical Reports Server (NTRS)
Miller, Dean; Bond, Thomas; Sheldon, David; Wright, William; Langhals, Tammy; Al-Khalil, Kamel; Broughton, Howard
1996-01-01
The Icing Technology Branch at NASA Lewis has been involved in an effort to validate two thermal ice protection codes developed at the NASA Lewis Research Center: LEWICE/Thermal (electrothermal deicing and anti-icing) and ANTICE (hot-gas and electrothermal anti-icing). The Thermal Code Validation effort was designated as a priority during a 1994 'peer review' of the NASA Lewis Icing program and was implemented as a cooperative effort with industry. During April 1996, the first of a series of experimental validation tests was conducted in the NASA Lewis Icing Research Tunnel (IRT). The purpose of the April 1996 test was to validate the electrothermal predictive capabilities of both LEWICE/Thermal and ANTICE. A heavily instrumented test article was designed and fabricated for this test, with the capability of simulating electrothermal de-icing and anti-icing modes of operation. Thermal measurements were then obtained over a range of test conditions for comparison with analytical predictions. This paper will present an overview of the test, including a detailed description of: (1) the validation process; (2) test article design; (3) test matrix development; and (4) test procedures. Selected experimental results will be presented for de-icing and anti-icing modes of operation. Finally, the status of the validation effort at this point will be summarized. Detailed comparisons between analytical predictions and experimental results are contained in the following two papers: 'Validation of NASA Thermal Ice Protection Computer Codes: Part 2 - The Validation of LEWICE/Thermal' and 'Validation of NASA Thermal Ice Protection Computer Codes: Part 3 - The Validation of ANTICE'.
Sienicki, J.J.
1997-06-01
A fast running and simple computer code has been developed to calculate pressure loadings inside light water reactor containments/confinements under loss-of-coolant accident conditions. PACER was originally developed to calculate containment/confinement pressure and temperature time histories for loss-of-coolant accidents in Soviet-designed VVER reactors and is relevant to the activities of the US International Nuclear Safety Center. The code employs a multicompartment representation of the containment volume and is focused upon application to early time containment phenomena during and immediately following blowdown. Flashing from coolant release, condensation heat transfer, intercompartment transport, and engineered safety features are described using best estimate models and correlations often based upon experiment analyses. Two notable capabilities of PACER that differ from most other containment loads codes are the modeling of the rates of steam and water formation accompanying coolant release as well as the correlations for steam condensation upon structure.
NASA Astrophysics Data System (ADS)
Peng, Liang-You; Gong, Qihuang
2010-12-01
The accurate computation of hydrogenic continuum wave functions is very important in many branches of physics, such as electron-atom collisions, cold atom physics, and atomic ionization in strong laser fields. Although various algorithms and codes already exist, most of them are reliable only in a certain range of parameters. In some practical applications, accurate continuum wave functions need to be calculated at extremely low energies, large radial distances, and/or large angular momentum number. Here we provide such a code, which can generate accurate hydrogenic continuum wave functions and corresponding Coulomb phase shifts over a wide range of parameters. Without any essential restriction on the angular momentum number, the present code is able to give reliable results at the electron energy range [10,10] eV for radial distances of [10,10] a.u. We also find the present code is very efficient, and it should find numerous applications in many fields such as strong field physics. Program summary: Program title: HContinuumGautchi. Catalogue identifier: AEHD_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHD_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 1233. No. of bytes in distributed program, including test data, etc.: 7405. Distribution format: tar.gz. Programming language: Fortran90 in fixed format. Computer: AMD Processors. Operating system: Linux. RAM: 20 MBytes. Classification: 2.7, 4.5. Nature of problem: The accurate computation of atomic continuum wave functions is very important in many research fields such as strong field physics and cold atom physics. Although various algorithms and codes already exist, most of them are applicable and reliable only in a certain range of parameters. We present here an accurate FORTRAN program for
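As a small illustration of one quantity involved, the Coulomb phase shift sigma_l = arg Gamma(l+1+i*eta) can be evaluated in pure Python from the Weierstrass product for the Gamma function plus the standard recurrence sigma_l = sigma_{l-1} + arctan(eta/l). This is an independent sketch, not code from the HContinuumGautchi distribution:

```python
import math

EULER_GAMMA = 0.5772156649015329

def coulomb_sigma(l, eta, terms=100_000):
    """Coulomb phase shift sigma_l = arg Gamma(l + 1 + i*eta).
    sigma_0 follows from the Weierstrass product for Gamma(1 + i*eta):
        arg Gamma(1+i*eta) = -gamma*eta + sum_k [eta/k - atan(eta/k)],
    with terms decaying like k**-3; the recurrence
        sigma_l = sigma_{l-1} + atan(eta/l)
    then climbs to the desired l."""
    s = -EULER_GAMMA * eta
    for k in range(1, terms + 1):
        s += eta / k - math.atan(eta / k)
    for m in range(1, l + 1):
        s += math.atan(eta / m)
    return s

# usage: sigma_0 at eta = 1 equals arg Gamma(1 + i) ~ -0.30164
s0 = coulomb_sigma(0, 1.0)
s1 = coulomb_sigma(1, 1.0)
```

The truncated tail of the series is of order eta**3 / (6 * terms**2), so the default gives roughly ten significant digits for moderate eta.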
Horak, W.C.; Lu, Ming-Shih
1991-12-01
This paper reviews the accuracy and precision of methods used by United States electric utilities to determine the actinide isotopic and element content of irradiated fuel. After an extensive literature search, three key code suites were selected for review. Two suites of computer codes, CASMO and ARMP, are used for reactor physics calculations; the ORIGEN code is used for spent fuel calculations. They are also the most widely used codes in the nuclear industry throughout the world. Although none of these codes calculate actinide isotopics as their primary variables intended for safeguards applications, accurate calculation of actinide isotopic content is necessary to fulfill their function.
Computed neutron tomography and coded aperture holography from real time neutron images
NASA Astrophysics Data System (ADS)
Sulcoski, Mark F.
1986-10-01
The uses of neutron tomography and holography for nondestructive evaluation applications are developed and investigated. A real time neutron imaging system was coupled with an image processing system to obtain neutron tomographs. Experiments utilized a Thomson-CSF neutron camera coupled to a computer based system used for image processing. Work included configuration of a reactor neutron beam port for neutron imaging and the development and implementation of a convolution method tomographic algorithm suitable for neutron imaging. Results to date have demonstrated the proof of principle of this neutron tomography system. Coded aperture neutron holography is under investigation using a cadmium Fresnel zone plate as the coded aperture and the real time imaging system as the detection and holographic reconstruction system. Coded aperture imaging uses the zone plate to encode scattered radiation; the pattern recorded at the detector is used as input data to a convolution algorithm which reconstructs the scattering source. This technique has not yet been successfully implemented and is still under development.
Chan, M.K.; Ballinger, M.Y.; Owczarski, P.C.
1989-02-01
This manual describes the technical bases and use of the computer code FIRIN. This code was developed to estimate the source term release of smoke and radioactive particles from potential fires in nuclear fuel cycle facilities. FIRIN is a product of a broader study, Fuel Cycle Accident Analysis, which Pacific Northwest Laboratory conducted for the US Nuclear Regulatory Commission. The technical bases of FIRIN consist of a nonradioactive fire source term model, compartment effects modeling, and radioactive source term models. These three elements interact with each other in the code affecting the course of the fire. This report also serves as a complete FIRIN user's manual. Included are the FIRIN code description with methods/algorithms of calculation and subroutines, code operating instructions with input requirements, and output descriptions. 40 refs., 5 figs., 31 tabs.
Second Generation Integrated Composite Analyzer (ICAN) Computer Code
NASA Technical Reports Server (NTRS)
Murthy, Pappu L. N.; Ginty, Carol A.; Sanfeliz, Jose G.
1993-01-01
This manual updates the original 1986 NASA TP-2515, Integrated Composite Analyzer (ICAN) Users and Programmers Manual. The various enhancements and newly added features are described to enable the user to prepare the appropriate input data to run this updated version of the ICAN code. For reference, the micromechanics equations are provided in an appendix and should be compared to those in the original manual for modifications. A complete output for a sample case is also provided in a separate appendix. The input to the code includes constituent material properties, factors reflecting the fabrication process, and laminate configuration. The code performs micromechanics, macromechanics, and laminate analyses, including the hygrothermal response of polymer-matrix-based fiber composites. The output includes the various ply and composite properties, the composite structural response, and the composite stress analysis results with details on failure. The code is written in FORTRAN 77 and can be used efficiently as a self-contained package (or as a module) in complex structural analysis programs. The input-output format has changed considerably from the original version of ICAN and is described extensively through the use of a sample problem.
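As a reminder of the kind of relation the micromechanics step evaluates, the simplest fiber/matrix mixing rules can be sketched as follows. ICAN's actual micromechanics equations (reproduced in the manual's appendix) are far more complete; the rule of mixtures below is a textbook simplification for illustration only:

```python
def rule_of_mixtures(Ef, Em, Vf):
    """Textbook ply-level micromechanics: longitudinal modulus by the
    rule of mixtures (fiber and matrix in parallel), transverse modulus
    by the inverse rule (in series). Ef, Em are fiber and matrix moduli,
    Vf the fiber volume fraction."""
    Vm = 1.0 - Vf
    E11 = Vf * Ef + Vm * Em
    E22 = 1.0 / (Vf / Ef + Vm / Em)
    return E11, E22

# usage: hypothetical carbon/epoxy ply, Ef = 230 GPa, Em = 3.5 GPa, Vf = 0.6
E11, E22 = rule_of_mixtures(230e9, 3.5e9, 0.6)
```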
RTE: A computer code for Rocket Thermal Evaluation
NASA Technical Reports Server (NTRS)
Naraghi, Mohammad H. N.
1995-01-01
The numerical model for a rocket thermal analysis code (RTE) is discussed. RTE is a comprehensive code for thermal analysis of regeneratively cooled rocket engines. The input to the code consists of the composition of the fuel/oxidant mixture and flow rates, chamber pressure, coolant temperature and pressure, dimensions of the engine, materials, and the number of nodes in different parts of the engine. The code allows for temperature variation in the axial, radial, and circumferential directions. By implementing an iterative scheme, it provides nodal temperature distribution, rates of heat transfer, and hot gas and coolant thermal and transport properties. The fuel/oxidant mixture ratio can be varied along the thrust chamber. This feature allows the user to incorporate a non-equilibrium model or an energy release model for the hot-gas side. The user has the option of bypassing the hot-gas-side calculations and directly inputting the gas-side fluxes. This feature is used to link RTE to a boundary layer module for the hot-gas-side heat flux calculations.
A fast technique for computing syndromes of BCH and RS codes. [deep space network
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.; Miller, R. L.
1979-01-01
A combination of the Chinese Remainder Theorem and Winograd's algorithm is used to compute transforms of odd length over GF(2^m). Such transforms are used to compute the syndromes needed for decoding BCH and RS codes. The present scheme requires substantially fewer multiplications and additions than the conventional method of computing the syndromes directly.
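The conventional direct method that the transform scheme improves on is easy to sketch: evaluate the received polynomial at consecutive powers of a primitive element alpha of GF(2^m). A small GF(2^4) illustration (not the paper's fast algorithm):

```python
# GF(2^4) arithmetic with the primitive polynomial x^4 + x + 1 (0b10011).
def gf_mul(a, b, poly=0b10011, m=4):
    """Carry-less multiply of two field elements, reducing modulo poly."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        b >>= 1
        a <<= 1
        if a & (1 << m):
            a ^= poly
    return p

def gf_pow(a, n):
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

def syndromes(received, num, alpha=0b0010):
    """Conventional (direct) syndrome evaluation S_j = r(alpha^j),
    j = 1..num -- the baseline the transform-based scheme beats.
    `received` holds polynomial coefficients, lowest degree first."""
    out = []
    for j in range(1, num + 1):
        x = gf_pow(alpha, j)
        s, xp = 0, 1
        for c in received:
            s ^= gf_mul(c, xp)
            xp = gf_mul(xp, x)
        out.append(s)
    return out

# usage: g(x) = (x + alpha)(x + alpha^2) = x^2 + 6x + 8 has roots
# alpha and alpha^2, so its own coefficients give zero syndromes.
clean = syndromes([8, 6, 1], 2)
corrupted = syndromes([9, 6, 1], 2)   # one symbol flipped
```

An error-free codeword yields all-zero syndromes; any corruption leaves at least one syndrome nonzero, which is what the decoder keys on.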
Code of Ethical Conduct for Computer-Using Educators: An ICCE Policy Statement.
ERIC Educational Resources Information Center
Computing Teacher, 1987
1987-01-01
Prepared by the International Council for Computers in Education's Ethics and Equity Committee, this code of ethics for educators using computers covers nine main areas: curriculum issues, issues relating to computer access, privacy/confidentiality issues, teacher-related issues, student issues, the community, school organizational issues,…
Energy standards and model codes development, adoption, implementation, and enforcement
Conover, D.R.
1994-08-01
This report provides an overview of the energy standards and model codes process for the voluntary sector within the United States. The report was prepared by Pacific Northwest Laboratory (PNL) for the Building Energy Standards Program and is intended to be used as a primer or reference on this process. Building standards and model codes that address energy have been developed by organizations in the voluntary sector since the early 1970s. These standards and model codes provide minimum energy-efficient design and construction requirements for new buildings and, in some instances, existing buildings. The first step in the process is developing new or revising existing standards or codes. There are two overall differences between standards and codes. Energy standards are developed by a consensus process and are revised as needed. Model codes are revised on a regular annual cycle through a public hearing process. In addition to these overall differences, the specific steps in developing/revising energy standards differ from model codes. These energy standards or model codes are then available for adoption by states and local governments. Typically, energy standards are adopted by or adopted into model codes. Model codes are in turn adopted by states through either legislation or regulation. Enforcement is essential to the implementation of energy standards and model codes. Low-rise residential construction is generally evaluated for compliance at the local level, whereas state agencies tend to be more involved with other types of buildings. Low-rise residential buildings also may be more easily evaluated for compliance because the governing requirements tend to be less complex than for commercial buildings.
The Development of a Discipline Code for Sue Bennett College.
ERIC Educational Resources Information Center
McLendon, Sandra F.
A Student Discipline Code (SDC) was developed to govern student life at Sue Bennett College (SBC), Kentucky, a private two-year college affiliated with the Methodist Church. Steps taken in the process included the following: a review of relevant literature on student discipline; examination of discipline codes from six other educational…
Beatty, J.A.; Younker, J.L.; Rousseau, W.F.; Elayat, H.A.
1983-12-06
A Decision Tree Computer Model was adapted for the purposes of a Pilot Salt Site Selection Project conducted by the Office of Nuclear Waste Isolation (ONWI). A deterministic computer model was developed to structure the site selection problem with submodels reflecting the five major outcome categories (Cost, Safety, Delay, Environment, Community Impact) to be evaluated in the decision process. Time-saving modifications were made in the tree code as part of the effort. In addition, format changes allowed retention of information items which are valuable in directing future research and in isolation of key variabilities in the Site Selection Decision Model. The deterministic code was linked to the modified tree code and the entire program was transferred to the ONWI-VAX computer for future use by the ONWI project.
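The five-category outcome structure lends itself to a tiny deterministic sketch: score each candidate site in every category, combine with weights, and pick the minimum-penalty site. The weights and scores below are invented for illustration; the ONWI model's actual structure and values are in the project documentation:

```python
# Hypothetical category weights (must sum to 1); lower scores are better.
WEIGHTS = {"cost": 0.30, "safety": 0.30, "delay": 0.15,
           "environment": 0.15, "community": 0.10}

def weighted_penalty(site_scores):
    """Combine per-category penalties (0..1) into one deterministic score."""
    return sum(WEIGHTS[c] * site_scores[c] for c in WEIGHTS)

def select_site(candidates):
    """Return the candidate name with the lowest weighted penalty."""
    return min(candidates, key=lambda name: weighted_penalty(candidates[name]))

# usage with two made-up candidate sites
candidates = {
    "site_a": {"cost": 0.7, "safety": 0.2, "delay": 0.5,
               "environment": 0.4, "community": 0.6},
    "site_b": {"cost": 0.5, "safety": 0.3, "delay": 0.4,
               "environment": 0.3, "community": 0.5},
}
best = select_site(candidates)
```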
Computational Methods Development at Ames
NASA Technical Reports Server (NTRS)
Kwak, Dochan; Smith, Charles A. (Technical Monitor)
1998-01-01
This viewgraph presentation outlines the development at Ames Research Center of advanced computational methods to provide appropriate-fidelity computational analysis/design capabilities. Current thrusts of the Ames research include: (1) methods to enhance/accelerate viscous flow simulation procedures, and the development of hybrid/polyhedral-grid procedures for viscous flow; (2) the development of real time transonic flow simulation procedures for a production wind tunnel, and intelligent data management technology; and (3) the validation of methods and the study of flow physics. The presentation gives historical precedents for the above research and speculates on its future course.
Ritter, E R
1991-08-01
A computer package has been developed called THERM, an acronym for THermodynamic property Estimation for Radicals and Molecules. THERM is a versatile computer code designed to automate the estimation of ideal gas phase thermodynamic properties for radicals and molecules important to combustion and reaction-modeling studies. Thermodynamic properties calculated include heat of formation and entropies at 298 K and heat capacities from 300 to 1500 K. Heat capacity estimates are then extrapolated to above 5000 K, and NASA format polynomial thermodynamic property representations valid from 298 to 5000 K are generated. This code is written in Microsoft Fortran version 5.0 for use on machines running under MSDOS. THERM uses group additivity principles of Benson and current best values for bond strengths, changes in entropy, and loss of vibrational degrees of freedom to estimate properties for radical species from parent molecules. This ensemble of computer programs can be used to input literature data, estimate data when not available, and review, update, and revise entries to reflect improvements and modifications to the group contribution and bond dissociation databases. All input and output files are ASCII so that they can be easily edited, updated, or expanded. In addition, heats of reaction, entropy changes, Gibbs free-energy changes, and equilibrium constants can be calculated as functions of temperature from a NASA format polynomial database.
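The group additivity principle THERM automates can be sketched in a few lines: a molecule's ideal-gas property is the sum of contributions from its constituent groups. The two group values below are commonly quoted Benson contributions for the CH3 and CH2 groups (delta-Hf at 298 K, kcal/mol); treat them as illustrative inputs rather than entries from THERM's database:

```python
# Benson-style group contributions to delta-Hf(298 K) in kcal/mol
# (illustrative values, not THERM's database).
GROUP_DHF = {"C-(C)(H)3": -10.2, "C-(C)2(H)2": -4.93}

def dhf_n_alkane(n_carbons):
    """Estimate delta-Hf(298 K) of an n-alkane CnH2n+2 (n >= 2):
    two terminal C-(C)(H)3 groups plus (n-2) C-(C)2(H)2 groups."""
    if n_carbons < 2:
        raise ValueError("group counts below assume n >= 2")
    return (2 * GROUP_DHF["C-(C)(H)3"]
            + (n_carbons - 2) * GROUP_DHF["C-(C)2(H)2"])

# usage: n-butane, experimental delta-Hf ~ -30.0 kcal/mol
dhf_butane = dhf_n_alkane(4)
```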
Computational Participation: Understanding Coding as an Extension of Literacy Instruction
ERIC Educational Resources Information Center
Burke, Quinn; O'Byrne, W. Ian; Kafai, Yasmin B.
2016-01-01
Understanding the computational concepts on which countless digital applications run offers learners the opportunity to no longer simply read such media but also become more discerning end users and potentially innovative "writers" of new media themselves. To think computationally--to solve problems, to design systems, and to process and…
GAM-HEAT: A computer code to compute heat transfer in complex enclosures. Revision 2
Cooper, R.E.; Taylor, J.R.
1992-12-01
This report discusses the GAM-HEAT code, which was developed for heat transfer analyses associated with postulated Double Ended Guillotine Break Loss Of Coolant Accidents (DEGB LOCA) resulting in a drained reactor vessel. In these analyses the gamma radiation resulting from fission product decay constitutes the primary source of energy as a function of time. This energy is deposited into the various reactor components and is re-radiated as thermal energy. The code accounts for all radiant heat exchanges within and leaving the reactor enclosure. The SRS reactors constitute complex radiant exchange enclosures since there are many assemblies of various types within the primary enclosure and most of the assemblies themselves constitute enclosures. GAM-HEAT accounts for this complexity by processing externally generated view factors and connectivity matrices, and also accounts for convective, conductive, and advective heat exchanges. The code is structured such that it is applicable for many situations involving heat exchange between surfaces within a radiatively passive medium.
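The radiant exchange step can be illustrated with the standard radiosity formulation on a given view factor matrix. A minimal sketch with hypothetical surfaces (GAM-HEAT's actual treatment also couples convection, conduction, and advection, which this omits):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiosities(T, eps, F, iters=200):
    """Solve the gray-enclosure radiosity equations
        J_i = eps_i * sigma * T_i**4 + (1 - eps_i) * sum_j F_ij * J_j
    by fixed-point iteration; the iteration contracts because each row
    of the view factor matrix F sums to 1 and eps_i > 0."""
    n = len(T)
    J = [SIGMA * t ** 4 for t in T]
    for _ in range(iters):
        J = [eps[i] * SIGMA * T[i] ** 4
             + (1.0 - eps[i]) * sum(F[i][j] * J[j] for j in range(n))
             for i in range(n)]
    return J

# usage: two infinite parallel gray plates (each sees only the other)
T = [1000.0, 500.0]
eps = [0.8, 0.8]
F = [[0.0, 1.0], [1.0, 0.0]]
J = radiosities(T, eps, F)
# net flux leaving surface 1: q1 = eps/(1-eps) * (sigma*T1^4 - J1)
q1 = eps[0] / (1.0 - eps[0]) * (SIGMA * T[0] ** 4 - J[0])
```

For this geometry q1 should reproduce the textbook result sigma*(T1^4 - T2^4) / (1/eps1 + 1/eps2 - 1), which is a handy consistency check on any enclosure solver.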
GAM-HEAT: A computer code to compute heat transfer in complex enclosures
Cooper, R.E.; Taylor, J.R.
1992-12-01
This report discusses the GAM-HEAT code, which was developed for heat transfer analyses associated with postulated Double Ended Guillotine Break Loss Of Coolant Accidents (DEGB LOCA) resulting in a drained reactor vessel. In these analyses the gamma radiation resulting from fission product decay constitutes the primary source of energy as a function of time. This energy is deposited into the various reactor components and is re-radiated as thermal energy. The code accounts for all radiant heat exchanges within and leaving the reactor enclosure. The SRS reactors constitute complex radiant exchange enclosures since there are many assemblies of various types within the primary enclosure and most of the assemblies themselves constitute enclosures. GAM-HEAT accounts for this complexity by processing externally generated view factors and connectivity matrices, and also accounts for convective, conductive, and advective heat exchanges. The code is structured such that it is applicable for many situations involving heat exchange between surfaces within a radiatively passive medium.
Rowland, R.
1994-07-01
This report is a summary overview of the basic features and differences among the major radioactive fallout models and computer codes that are either in current use or that form the basis for more contemporary codes and other computational tools. The DELFIC, WSEG-10, KDFOC2, SEER3, and DNAF-1 codes and the EM-1 model are addressed. The review is based only on the information that is available in the general body of literature. This report describes the fallout process, gives an overview of each code/model, summarizes how each code/model handles the basic fallout parameters (initial cloud, particle distributions, fall mechanics, total activity and activity to dose rate conversion, and transport), cites the literature references used, and provides an annotated bibliography for other fallout code literature that was not cited. Nuclear weapons, Radiation, Radioactivity, Fallout, DELFIC, WSEG, Nuclear weapon effects, KDFOC, SEER, DNAF, EM-1.
A new 3-D integral code for computation of accelerator magnets
Turner, L.R.; Kettunen, L.
1991-01-01
For computing accelerator magnets, integral codes have several advantages over finite element codes: far-field boundaries are treated automatically, and computed fields in the bore region satisfy Maxwell's equations exactly. A new integral code employing edge elements rather than nodal elements has overcome the difficulties associated with earlier integral codes. By the use of field integrals (potential differences) as solution variables, the number of unknowns is reduced to one less than the number of nodes. Two examples, a hollow iron sphere and the dipole magnet of the Advanced Photon Source injector synchrotron, show the capability of the code. The CPU time requirements are comparable to those of three-dimensional (3-D) finite-element codes. Experiments show that in practice it can realize much of the potential CPU time saving that parallel processing makes possible. 8 refs., 4 figs., 1 tab.
Independent verification and benchmark testing of the UNSAT-H computer code, Version 2.0
Baca, R.G.; Magnuson, S.O.
1990-02-01
Independent testing of the UNSAT-H computer code, Version 2.0, was conducted to establish confidence that the code is ready for general use in performance assessment applications. Verification and benchmark test problems were used to check the correctness of the FORTRAN coding, the computational efficiency and accuracy of the numerical algorithm, and the code's capability to simulate diverse hydrologic conditions. This testing was performed using a structured and quantitative evaluation protocol. The protocol consisted of blind testing, independent applications, maintaining test equivalence, and use of graduated test cases. Graphical comparisons and calculation of relative root mean square (RRMS) values were used as indicators of accuracy and consistency levels. Four specific ranges of RRMS values were chosen for judging the quality of the comparison. Four verification test problems were used to check the computational accuracy of UNSAT-H in solving the uncoupled fluid flow and heat transport equations. Five benchmark test problems, ranging in complexity, were used to check the code's simulation capability. Some of the benchmark test cases include comparisons with laboratory and field data. The primary finding of this independent testing is that UNSAT-H is fully operational. In general, the test results showed that the computer code produced unsaturated flow simulations with excellent stability, reasonable accuracy, and acceptable speed. This report describes the technical basis, approach, and results of the independent testing. A number of future refinements to the UNSAT-H code are recommended that would improve computational speed and accuracy, code usability, and code portability. Aspects of the code that warrant further testing are outlined.
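The RRMS accuracy indicator is straightforward to state. One common definition, expressed as a percentage of the benchmark magnitude, is sketched below; the report's exact normalization may differ:

```python
import math

def rrms(simulated, benchmark):
    """Relative root-mean-square difference between code output and a
    benchmark solution, in percent: the RMS of the differences divided
    by the RMS of the benchmark values."""
    n = len(benchmark)
    num = math.sqrt(sum((s - b) ** 2 for s, b in zip(simulated, benchmark)) / n)
    den = math.sqrt(sum(b ** 2 for b in benchmark) / n)
    return 100.0 * num / den

# usage: a perfect match scores 0; a uniform 2x overshoot scores 100
perfect = rrms([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
doubled = rrms([2.0, 2.0], [1.0, 1.0])
```

Banding such a scalar into a few quality ranges, as the UNSAT-H protocol does, gives a compact pass/fail-style summary of each comparison.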
Advanced technology development for image gathering, coding, and processing
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.
1990-01-01
Three overlapping areas of research activities are presented: (1) Information theory and optimal filtering are extended to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing. (2) Focal-plane processing techniques and technology are developed to combine effectively image gathering with coding. The emphasis is on low-level vision processing akin to the retinal processing in human vision. (3) A breadboard adaptive image-coding system is being assembled. This system will be used to develop and evaluate a number of advanced image-coding technologies and techniques as well as research the concept of adaptive image coding.
Validation of the NCC Code for Staged Transverse Injection and Computations for a RBCC Combustor
NASA Technical Reports Server (NTRS)
Ajmani, Kumud; Liu, Nan-Suey
2005-01-01
The NCC code was validated for a case involving staged transverse injection into Mach 2 flow behind a rearward-facing step, with comparisons against experimental data and against solutions from the FPVortex code. The NCC code was then used to perform computations to study fuel-air mixing for the combustor of a candidate rocket based combined cycle engine geometry. Comparisons with a one-dimensional analysis and a three-dimensional code (VULCAN) were performed to assess the qualitative and quantitative performance of the NCC solver.
Developing Computational Physics in Nigeria
NASA Astrophysics Data System (ADS)
Akpojotor, Godfrey; Enukpere, Emmanuel; Akpojotor, Famous; Ojobor, Sunny
2009-03-01
Computer based instruction is permeating the educational curricula of many countries owing to the realization that computational physics, which involves computer modeling, enhances the teaching/learning process when combined with theory and experiment. For the students, it gives more insight and understanding in the learning process and thereby equips them with scientific and computing skills to excel in industrial and commercial environments as well as at the Masters and doctoral levels. For the teachers, among other benefits, the availability of open access sites with both instructional and evaluation materials can improve their performance. With a growing population of students and new challenges to meet developmental goals, this paper examines the challenges and prospects of the current drive to develop computational physics as a university undergraduate programme, or as a choice of specialized modules or laboratories within the mainstream physics programme, in Nigerian institutions. In particular, the current effort of the Nigerian Computational Physics Working Group to design computational physics programmes to meet the developmental goals of the country is discussed.
Description of the FCUP code used to compute currents due to recoil protons from CH2 foils
Stelts, M.L.; Glasgow, D.W.; Wood, B.E.; Craft, A.D.
1982-07-01
A computer code, FCUP, was developed at EG and G during the period from 1973 to the present to compute proton currents produced by a time- and energy-dependent neutron flux striking a CH2 foil and knocking protons into a detector placed at an angle with respect to the target foil and the neutron beam. This report describes the methods of calculation used and the physical assumptions and limitations involved, and suggests possibilities for improving the calculations.
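The kinematics underlying such a calculation are simple: for nonrelativistic n-p elastic scattering with nearly equal masses, a proton emitted at lab angle theta from the beam axis carries E_p = E_n cos^2(theta). A sketch of that relation (illustration only; FCUP additionally folds in the time dependence, cross sections, and foil geometry):

```python
import math

def recoil_proton_energy(E_n, theta_deg):
    """Nonrelativistic n-p elastic scattering: a proton knocked out of
    a CH2 foil at lab angle theta (measured from the incident neutron
    direction) carries E_p = E_n * cos^2(theta), since m_n ~ m_p."""
    return E_n * math.cos(math.radians(theta_deg)) ** 2

# usage: 14 MeV neutrons, detector on-axis and at 60 degrees
e_forward = recoil_proton_energy(14.0, 0.0)
e_60 = recoil_proton_energy(14.0, 60.0)
```

A detector at 60 degrees therefore sees quarter-energy protons, which is why the detector angle is a key design parameter in such spectrometers.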
Developing a computational model of human hand kinetics using AVS
Abramowitz, Mark S.
1996-05-01
As part of an ongoing effort to develop a finite element model of the human hand at the Institute for Scientific Computing Research (ISCR), this project extended existing computational tools for analyzing and visualizing hand kinetics. These tools employ a commercial, scientific visualization package called AVS. FORTRAN and C code, originally written by David Giurintano of the Gillis W. Long Hansen's Disease Center, was ported to a different computing platform, debugged, and documented. Usability features were added and the code was made more modular and readable. When the code is used to visualize bone movement and tendon paths for the thumb, graphical output is consistent with expected results. However, numerical values for forces and moments at the thumb joints do not yet appear to be accurate enough to be included in ISCR's finite element model. Future work includes debugging the parts of the code that calculate forces and moments and verifying the correctness of these values.
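The joint moments in question reduce, for each tendon force, to a cross product M = r x F taken about the joint center, which suggests a simple sanity check for the debugging effort. A minimal sketch (not the ported Giurintano code):

```python
def cross(r, F):
    """Cross product of two 3-vectors given as tuples."""
    return (r[1] * F[2] - r[2] * F[1],
            r[2] * F[0] - r[0] * F[2],
            r[0] * F[1] - r[1] * F[0])

def joint_moment(tendon_point, joint_center, force):
    """Moment of a tendon force about a joint center: M = r x F,
    where r runs from the joint center to the force's point of
    application (all coordinates in a common frame)."""
    r = tuple(p - c for p, c in zip(tendon_point, joint_center))
    return cross(r, force)

# usage: unit force along +y applied 1 unit along +x from the joint
M = joint_moment((1.0, 0.0, 0.0), (0.0, 0.0, 0.0), (0.0, 1.0, 0.0))
```

Checking such hand-computable configurations against the code's output is one way to localize where the force/moment calculations go wrong.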
Müller, C.; Hughes, E. D.; Niederauer, G. F.; Wilkening, H.; Travis, J. R.; Spore, J. W.; Royl, P.; Baumann, W.
1998-10-01
Los Alamos National Laboratory (LANL) and Forschungszentrum Karlsruhe (FzK) are developing GASFLOW, a three-dimensional (3D) fluid dynamics field code as a best- estimate tool to characterize local phenomena within a flow field. Examples of 3D phenomena include circulation patterns; flow stratification; hydrogen distribution mixing and stratification; combustion and flame propagation; effects of noncondensable gas distribution on local condensation and evaporation; and aerosol entrainment, transport, and deposition. An analysis with GASFLOW will result in a prediction of the gas composition and discrete particle distribution in space and time throughout the facility and the resulting pressure and temperature loadings on the walls and internal structures with or without combustion. A major application of GASFLOW is for predicting the transport, mixing, and combustion of hydrogen and other gases in nuclear reactor containment and other facilities. It has been applied to situations involving transporting and distributing combustible gas mixtures. It has been used to study gas dynamic behavior in low-speed, buoyancy-driven flows, as well as sonic flows or diffusion dominated flows; and during chemically reacting flows, including deflagrations. The effects of controlling such mixtures by safety systems can be analyzed. The code version described in this manual is designated GASFLOW 2.1, which combines previous versions of the United States Nuclear Regulatory Commission code HMS (for Hydrogen Mixing Studies) and the Department of Energy and FzK versions of GASFLOW. The code was written in standard Fortran 90. This manual comprises three volumes. Volume I describes the governing physical equations and computational model. Volume II describes how to use the code to set up a model geometry, specify gas species and material properties, define initial and boundary conditions, and specify different outputs, especially graphical displays. Sample problems are included. Volume
Nichols, B. D.; Mueller, C.; Necker, G. A.; Travis, J. R.; Spore, J. W.; Lam, K. L.; Royl, P.; Wilson, T. L.
1998-10-01
Los Alamos National Laboratory (LANL) and Forschungszentrum Karlsruhe (FzK) are developing GASFLOW, a three-dimensional (3D) fluid dynamics field code as a best-estimate tool to characterize local phenomena within a flow field. Examples of 3D phenomena include circulation patterns; flow stratification; hydrogen distribution mixing and stratification; combustion and flame propagation; effects of noncondensable gas distribution on local condensation and evaporation; and aerosol entrainment, transport, and deposition. An analysis with GASFLOW will result in a prediction of the gas composition and discrete particle distribution in space and time throughout the facility and the resulting pressure and temperature loadings on the walls and internal structures with or without combustion. A major application of GASFLOW is for predicting the transport, mixing, and combustion of hydrogen and other gases in nuclear reactor containment and other facilities. It has been applied to situations involving transporting and distributing combustible gas mixtures. It has been used to study gas dynamic behavior in low-speed, buoyancy-driven flows, as well as sonic flows or diffusion dominated flows; and during chemically reacting flows, including deflagrations. The effects of controlling such mixtures by safety systems can be analyzed. The code version described in this manual is designated GASFLOW 2.1, which combines previous versions of the United States Nuclear Regulatory Commission code HMS (for Hydrogen Mixing Studies) and the Department of Energy and FzK versions of GASFLOW. The code was written in standard Fortran 90. This manual comprises three volumes. Volume I describes the governing physical equations and computational model. Volume II describes how to use the code to set up a model geometry, specify gas species and material properties, define initial and boundary conditions, and specify different outputs, especially graphical displays. Sample problems are included. Volume III
GATO Code Modification to Compute Plasma Response to External Perturbations
NASA Astrophysics Data System (ADS)
Turnbull, A. D.; Chu, M. S.; Ng, E.; Li, X. S.; James, A.
2006-10-01
It has become increasingly clear that the plasma response to an external nonaxisymmetric magnetic perturbation cannot be neglected in many situations of interest. This response can be described as a linear combination of the eigenmodes of the ideal MHD operator. The eigenmodes of the system can be obtained numerically with the GATO ideal MHD stability code, which has been modified for this purpose. A key requirement is the removal of inadmissible continuum modes. For finite hybrid element codes such as GATO, a prerequisite for this is their numerical restabilization by the addition of small numerical terms to δW to cancel the analytic numerical destabilization. In addition, the robustness of the code was improved and the solution method sped up by use of the SuperLU package, to facilitate calculation of the full set of eigenmodes in a reasonable time. To treat resonant plasma responses, the finite element basis has been extended to include eigenfunctions with finite jumps at rational surfaces. Some preliminary numerical results for DIII-D equilibria will be given.
Digital Poetry: A Narrow Relation between Poetics and the Codes of the Computational Logic
NASA Astrophysics Data System (ADS)
Laurentiz, Silvia
The project "Percorrendo Escrituras" (Walking Through Writings Project) has been developed at ECA-USP Fine Arts Department. Summarizing, it intends to study different structures of digital information that share the same universe and are generators of a new aesthetics condition. The aim is to search which are the expressive possibilities of the computer among the algorithm functions and other of its specific properties. It is a practical, theoretical and interdisciplinary project where the study of programming evolutionary language, logic and mathematics take us to poetic experimentations. The focus of this research is the digital poetry, and it comes from poetics of permutation combinations and culminates with dynamic and complex systems, autonomous, multi-user and interactive, through agents generation derivations, filtration and emergent standards. This lecture will present artworks that use some mechanisms introduced by cybernetics and the notion of system in digital poetry that demonstrate the narrow relationship between poetics and the codes of computational logic.
NASA Technical Reports Server (NTRS)
Thompson, Richard A.; Lee, Kam-Pui; Gupta, Roop N.
1991-01-01
The computer codes developed here provide self-consistent thermodynamic and transport properties for equilibrium air for temperatures from 500 to 30,000 K over a pressure range of 10^-4 to 10^-2 atm. These properties are computed through the use of temperature-dependent curve fits for discrete values of pressure; interpolation is employed for intermediate values of pressure. The curve fits are based on mixture values calculated from an 11-species air model. Individual species properties used in the mixture relations are obtained from a recent study by the present authors. A review and discussion of the sources and accuracy of the curve-fitted data used herein are given in NASA RP 1260.
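The evaluation scheme described above, curve fits in temperature at discrete pressures with interpolation in between, can be sketched as follows. The linear-in-log-pressure interpolation and the sample curve fits are illustrative assumptions; the report defines its own curve-fit forms and interpolation rule.

```python
import math

def property_at(T, p, fits, pressures):
    """Evaluate a thermodynamic property at temperature T and pressure p
    from temperature-dependent curve fits tabulated at discrete pressures,
    interpolating linearly in log(pressure) between the bracketing fits.

    fits[i] is a callable f(T) valid at pressures[i] (ascending order).
    """
    if p <= pressures[0]:
        return fits[0](T)
    if p >= pressures[-1]:
        return fits[-1](T)
    for i in range(len(pressures) - 1):
        if pressures[i] <= p <= pressures[i + 1]:
            lo, hi = pressures[i], pressures[i + 1]
            w = (math.log(p) - math.log(lo)) / (math.log(hi) - math.log(lo))
            return (1.0 - w) * fits[i](T) + w * fits[i + 1](T)

# Hypothetical curve fits for one property at two pressure decades.
fits = [lambda T: 1.0 + 1e-4 * T, lambda T: 2.0 + 1e-4 * T]
# Geometric-mean pressure lies halfway in log space between the two fits.
val = property_at(5000.0, math.sqrt(1e-4 * 1e-3), fits, [1e-4, 1e-3])
```

At the geometric-mean pressure the interpolation weight is 0.5, so the result is the average of the two curve-fit values at that temperature.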
Balancing Particle and Mesh Computation in a Particle-In-Cell Code
Worley, Patrick H; D'Azevedo, Eduardo; Hager, Robert; Ku, Seung-Hoe; Yoon, Eisung; Chang, C. S.
2016-01-01
The XGC1 plasma microturbulence particle-in-cell simulation code has both particle-based and mesh-based computational kernels that dominate performance. Both of these are subject to load imbalances that can degrade performance and that evolve during a simulation. Each separately can be addressed adequately, but optimizing just for one can introduce significant load imbalances in the other, degrading overall performance. A technique has been developed based on Golden Section Search that minimizes wallclock time given prior information on wallclock time, and on current particle distribution and mesh cost per cell, and also adapts to evolution in load imbalance in both particle and mesh work. In problems of interest this doubled the performance on full system runs on the XK7 at the Oak Ridge Leadership Computing Facility compared to load balancing only one of the kernels.
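The Golden Section Search underlying the balancing technique can be sketched as below. The cost model and the load-balancing interpretation are illustrative assumptions, not the actual XGC1 implementation, which optimizes against measured wallclock times.

```python
import math

def golden_section_minimize(f, a, b, tol=1e-3):
    """Minimize a unimodal function f on [a, b] via Golden Section Search.

    Each iteration shrinks the bracket by a factor of 1/phi ~ 0.618,
    reusing one interior evaluation point per step.
    """
    invphi = (math.sqrt(5.0) - 1.0) / 2.0  # 1/phi
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while (b - a) > tol:
        if f(c) < f(d):
            b, d = d, c              # minimum lies in [a, d_old]
            c = b - invphi * (b - a)
        else:
            a, c = c, d              # minimum lies in [c_old, b]
            d = a + invphi * (b - a)
    return 0.5 * (a + b)

# Hypothetical cost model: wallclock time as a function of the fraction of
# work assigned to the particle kernel (the remainder goes to the mesh kernel).
cost = lambda x: (x - 0.4) ** 2 + 1.0
x_opt = golden_section_minimize(cost, 0.0, 1.0)
```

Golden Section Search is a natural fit here because it needs only comparisons of timing measurements, not derivatives, and tolerates the noisy, evolving cost landscape the abstract describes.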
Keshavarz, Mohammad Hossein; Motamedoshariati, Hadi; Moghayadnia, Reza; Nazari, Hamid Reza; Azarniamehraban, Jamshid
2009-12-30
In this paper a new simple user-friendly computer code, in Visual Basic, has been introduced to evaluate detonation performance of high explosives and their thermochemical properties. The code is based on recently developed methods to obtain thermochemical and performance parameters of energetic materials, which can complement the computer outputs of the other thermodynamic chemical equilibrium codes. It can predict various important properties of high explosive including velocity of detonation, detonation pressure, heat of detonation, detonation temperature, Gurney velocity, adiabatic exponent and specific impulse of high explosives. It can also predict detonation performance of aluminized explosives that can have non-ideal behaviors. This code has been validated with well-known and standard explosives and compared the predicted results, where the predictions of desired properties were possible, with outputs of some computer codes. A large amount of data for detonation performance on different classes of explosives from C-NO(2), O-NO(2) and N-NO(2) energetic groups have also been generated and compared with well-known complex code BKW.
McGrail, B.P.; Bacon, D.H.
1998-02-01
Planned performance assessments for the proposed disposal of low-activity waste (LAW) glass produced from remediation of wastes stored in underground tanks at Hanford, Washington will require calculations of radionuclide release rates from the subsurface disposal facility. These calculations will be done with the aid of computer codes. The available computer codes with suitable capabilities at the time Revision 0 of this document was prepared were ranked in terms of the feature sets implemented in the code that match a set of physical, chemical, numerical, and functional capabilities needed to assess release rates from the engineered system. The needed capabilities were identified from an analysis of the important physical and chemical processes expected to affect LAW glass corrosion and the mobility of radionuclides. This analysis was repeated in this report but updated to include additional processes that have been found to be important since Revision 0 was issued and to include additional codes that have been released. The highest ranked computer code was found to be the STORM code developed at PNNL for the US Department of Energy for evaluation of arid land disposal sites.
Thermal-hydraulic/heat transfer code development for sphere-pac-fueled LMFBRs. [COBRA-3SP code
Morris, D.G.
1980-06-01
Sphere-pac fuel has received much attention recently in light of the development of proliferation-resistant fuel cycles for the Fast Breeder Reactor Program in the United States. However, for sphere-pac fuel to be a viable alternative to conventional pellet fuel, a means to analyze the thermal behavior of sphere-pac-fueled pin bundles is needed. To meet this need, a thermal-hydraulic/heat transfer computer code has been developed for sphere-pac-fueled fast breeder reactors. The code, COBRA-3SP, is a modified version of COBRA-3M incorporating a three-region sphere-pac fuel pin model which permits fuel restructuring. With COBRA-3SP, steady-state and transient analysis of sphere-pac-fueled pin bundles is possible. The validity of the sphere-pac fuel pin model has been verified using experimental results of irradiated sphere-pac fuel.
Ramsdell, J.V.
1991-03-01
This report presents the NRC staff with a tool for assessing the potential effects of accidental releases of radioactive materials and toxic substances on habitability of nuclear facility control rooms. The tool is a computer code that estimates concentrations at nuclear facility control room air intakes given information about the release and the environmental conditions. The name of the computer code is EXTRAN. EXTRAN combines procedures for estimating the amount of airborne material, a Gaussian puff dispersion model, and the most recent algorithms for estimating diffusion coefficients in building wakes. It is a modular computer code, written in FORTRAN-77, that runs on personal computers. It uses a math coprocessor, if present, but does not require one. Code output may be directed to a printer or disk files. 25 refs., 8 figs., 4 tabs.
Computational perspectives on cognitive development.
Mareschal, Denis
2010-09-01
This article reviews the efforts to develop process models of infants' and children's cognition. Computational process models provide a tool for elucidating the causal mechanisms involved in learning and development. The history of computational modeling in developmental psychology broadly follows the same trends that have run throughout cognitive science-including rule-based models, neural network (connectionist) models, ACT-R models, ART models, decision tree models, reinforcement learning models, and hybrid models among others. Copyright © 2010 John Wiley & Sons, Ltd. For further resources related to this article, please visit the WIREs website.
ERIC Educational Resources Information Center
Moral, Cristian; de Antonio, Angelica; Ferre, Xavier; Lara, Graciela
2015-01-01
Introduction: In this article we propose a qualitative analysis tool--a coding system--that can support the formalisation of the information-seeking process in a specific field: research in computer science. Method: In order to elaborate the coding system, we have conducted a set of qualitative studies, more specifically a focus group and some…
ERIC Educational Resources Information Center
Holbrook, M. Cay; MacCuspie, P. Ann
2010-01-01
Braille-reading mathematicians, scientists, and computer scientists were asked to examine the usability of the Unified English Braille Code (UEB) for technical materials. They had little knowledge of the code prior to the study. The research included two reading tasks, a short tutorial about UEB, and a focus group. The results indicated that the…
Additional development of the XTRAN3S computer program
NASA Technical Reports Server (NTRS)
Borland, C. J.
1989-01-01
Additional developments and enhancements to the XTRAN3S computer program, a code for calculation of steady and unsteady aerodynamics, and associated aeroelastic solutions, for 3-D wings in the transonic flow regime are described. Algorithm improvements for the XTRAN3S program were provided including an implicit finite difference scheme to enhance the allowable time step and vectorization for improved computational efficiency. The code was modified to treat configurations with a fuselage, multiple stores/nacelles/pylons, and winglets. Computer program changes (updates) for error corrections and updates for version control are provided.
Proposed standards for peer-reviewed publication of computer code
USDA-ARS?s Scientific Manuscript database
Computer simulation models are mathematical abstractions of physical systems. In the area of natural resources and agriculture, these physical systems encompass selected interacting processes in plants, soils, animals, or watersheds. These models are scientific products and have become important i...
Projectile base bleed technology. Part 2: User's guide CMINT computer code, version 5.04-BRL
NASA Astrophysics Data System (ADS)
Gibeling, Howard J.; Buggeln, Richard C.
1992-11-01
Detailed finite rate chemistry models for H2 and H2-CO combustion have been incorporated into a Navier-Stokes computer code and applied to flow field simulation in the base region of an M864 base burning projectile. Results without base injection were obtained using a low Reynolds number k-epsilon turbulence model and several mixing length turbulence models. The results with base injection utilized only the Baldwin-Lomax model for the projectile forebody and the Chow wake mixing model downstream of the projectile base. A validation calculation was performed for a supersonic hydrogen-air burner using an H2 reaction set which is a subset of the H2-CO reaction set developed for the base combustion modeling. The comparison with the available experimental data was good, and provides a level of validation for the technique and code developed. Projectile base injection calculations were performed for a flat base M864 projectile at M(infinity) = 2. Hot air injection, H2 injection, and H2-CO injection were modeled, and computed results show reasonable trends in the base pressure increase (base drag reduction), base corner expansion, and downstream wake closure location.
Benchmark testing and independent verification of the VS2DT computer code
McCord, J.T.; Goodrich, M.T.
1994-11-01
The finite difference flow and transport simulator VS2DT was benchmark tested against several other codes which solve the same equations (Richards equation for flow and the Advection-Dispersion equation for transport). The benchmark problems investigated transient two-dimensional flow in a heterogeneous soil profile with a localized water source at the ground surface. The VS2DT code performed as well as or better than all other codes when considering mass balance characteristics and computational speed. It was also rated highly relative to the other codes with regard to ease-of-use. Following the benchmark study, the code was verified against two analytical solutions, one for two-dimensional flow and one for two-dimensional transport. These independent verifications show reasonable agreement with the analytical solutions, and complement the one-dimensional verification problems published in the code's original documentation.
Kirk, B.L.; Sartori, E.
1997-06-01
Subsequent to the introduction of High Performance Computing in the developed countries, the Organization for Economic Cooperation and Development/Nuclear Energy Agency (OECD/NEA) created the Task Force on Adapting Computer Codes in Nuclear Applications to Parallel Architectures (under the guidance of the Nuclear Science Committee's Working Party on Advanced Computing) to study the growth area in supercomputing and its applicability to the nuclear community's computer codes. The result has been four years of investigation for the Task Force in different subject fields - deterministic and Monte Carlo radiation transport, computational mechanics and fluid dynamics, nuclear safety, atmospheric models and waste management.
Intelligent Automated Process Planning and Code Generation for Computer-Controlled Inspection
1994-01-01
Software modules to automate the design, manufacture, and inspection aspects of product fabrication, as well as an artificial intelligence memory to provide... (WL-TR-94-4002; AD-A275 346; Steven M. Ruegsegger; contract F33615-87-C-5250)
Real-time C Code Generation in Ptolemy II for the Giotto Model of Computation
2009-05-20
Shanna-Shaye Forbes, Electrical Engineering and Computer Sciences... periodic and there are multiple modes of operation. Ptolemy II is a university-based open-source modeling and simulation framework that supports model...
Sasaki, K.; Terashita, N.; Ogino, T. (Central Research Lab.)
1989-06-01
This article discusses a pressurized water reactor plant analyzer code (NUPAC-1) that has been developed for application to an operator support system or an advanced training simulator. The simulation code must produce reasonably accurate results and run in a fast mode to realize functions such as anomaly detection, estimation of unobservable plant internal states, and prediction of plant state trends. To attain both accuracy and fast-running capability, the NUPAC-1 code adopts fast computing methods: table fitting of the state variables, time-step control, and calculation control of heat transfer coefficients.
Characterizing the Properties of a Woven SiC/SiC Composite Using W-CEMCAN Computer Code
NASA Technical Reports Server (NTRS)
Murthy, Pappu L. N.; Mital, Subodh K.; DiCarlo, James A.
1999-01-01
A micromechanics based computer code to predict the thermal and mechanical properties of woven ceramic matrix composites (CMC) is developed. This computer code, W-CEMCAN (Woven CEramic Matrix Composites ANalyzer), predicts the properties of two-dimensional woven CMC at any temperature and takes into account various constituent geometries and volume fractions. This computer code is used to predict the thermal and mechanical properties of an advanced CMC composed of 0/90 five-harness (5 HS) Sylramic fiber which had been chemically vapor infiltrated (CVI) with boron nitride (BN) and SiC interphase coatings and melt-infiltrated (MI) with SiC. The predictions, based on the bulk constituent properties from the literature, are compared with measured experimental data. Based on the comparison, improved or calibrated properties for the constituent materials are then developed for use by material developers/designers. The computer code is then used to predict the properties of a composite with the same constituents but with different fiber volume fractions. The predictions are compared with measured data and a good agreement is achieved.
Independent verification and benchmark testing of the UNSAT-H computer code, Version 2. 0
Baca, R.G.; Magnuson, S.O.
1990-02-01
Independent testing of the UNSAT-H computer code, Version 2.0, was conducted to establish confidence that the code is ready for general use in performance assessment applications. Verification and benchmark test problems were used to check the correctness of the FORTRAN coding, the computational efficiency and accuracy of the numerical algorithm, and the code's capability to simulate diverse hydrologic conditions. This testing was performed using a structured and quantitative evaluation protocol consisting of blind testing, independent applications, maintenance of test equivalence, and use of graduated test cases. Graphical comparisons and calculation of relative root mean square (RRMS) values were used as indicators of accuracy and consistency. Four specific ranges of RRMS values were chosen for judging the quality of the comparisons. Four verification test problems were used to check the computational accuracy of UNSAT-H in solving the uncoupled fluid flow and heat transport equations. Five benchmark test problems, ranging in complexity, were used to check the code's simulation capability; some of the benchmark test cases include comparisons with laboratory and field data. The primary finding of this independent testing is that UNSAT-H is fully operational. In general, the test results showed that the code produced unsaturated flow simulations with excellent stability, reasonable accuracy, and acceptable speed. This report describes the technical basis, approach, and results of the independent testing. A number of future refinements to the UNSAT-H code are recommended that would improve computational speed and accuracy, code usability, and code portability. Aspects of the code that warrant further testing are outlined.
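An RRMS comparison of the kind used above can be sketched as follows. The normalization chosen here (RMS of the residuals divided by the RMS of the benchmark values) is one common convention and an assumption, since the abstract does not reproduce the report's exact formula.

```python
import math

def rrms(simulated, observed):
    """Relative root-mean-square difference between a simulated series and a
    benchmark (analytical, laboratory, or field) series of equal length."""
    if len(simulated) != len(observed):
        raise ValueError("series must have equal length")
    num = sum((s - o) ** 2 for s, o in zip(simulated, observed))
    den = sum(o ** 2 for o in observed)
    return math.sqrt(num / den)

# Example: a simulation that tracks the benchmark closely yields a small RRMS,
# which would fall in the best of several graduated quality ranges.
bench = [1.0, 2.0, 3.0, 4.0]
sim = [1.05, 1.95, 3.1, 3.9]
score = rrms(sim, bench)
```

A single scalar like this complements graphical comparisons: the graphs reveal where a simulation diverges, while the RRMS value supports the kind of graduated pass/fail ranges the protocol describes.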
Computation of high Reynolds number internal/external flows. [VNAP2 computer code
Cline, M.C.; Wilmoth, R.G.
1981-01-01
A general, user oriented computer program, called VNAP2, has been developed to calculate high Reynolds number, internal/external flows. VNAP2 solves the two-dimensional, time-dependent Navier-Stokes equations. The turbulence is modeled with either a mixing-length, a one transport equation, or a two transport equation model. Interior grid points are computed using the explicit MacCormack scheme with special procedures to speed up the calculation in the fine grid. All boundary conditions are calculated using a reference plane characteristic scheme with the viscous terms treated as source terms. Several internal, external, and internal/external flow calculations are presented.
User's manual for airfoil flow field computer code SRAIR
NASA Technical Reports Server (NTRS)
Shamroth, S. J.
1985-01-01
A two-dimensional unsteady Navier-Stokes calculation procedure with specific application to the isolated airfoil problem is presented. The procedure solves the full, ensemble-averaged Navier-Stokes equations with turbulence represented by a mixing length model. The equations are solved in a general nonorthogonal coordinate system which is obtained via an external source. Specific Cartesian locations of grid points are required as input for this code. The method of solution is based upon the Briley-McDonald LBI procedure. The manual discusses the analysis, flow of the program, control stream, and input and output.
Evaluation of Computational Codes for Underwater Hull Analysis Model Applications
2014-02-05
file is text and could be created by the user, but the format is very exacting and difficult to get correct. This makes BEASY GiD very useful. Rhino3D... The user can manually write the text material data file, but it is exceedingly difficult to get the format precisely right. Figure 4-1: Screen shot of... a code that is an add-on to the software SolidWorks [8]. It runs on Windows on a laptop, desktop, or workstation. It is not portable to Macintosh or...
NASTRAN as a resource in code development
NASA Technical Reports Server (NTRS)
Stanton, E. L.; Crain, L. M.; Neu, T. F.
1975-01-01
A case history is presented in which the NASTRAN system provided both guidelines and working software for use in the development of a discrete element program, PATCHES-111. To avoid duplication and to take advantage of the wide spread user familiarity with NASTRAN, the PATCHES-111 system uses NASTRAN bulk data syntax, NASTRAN matrix utilities, and the NASTRAN linkage editor. Problems in developing the program are discussed along with details on the architecture of the PATCHES-111 parametric cubic modeling system. The system includes model construction procedures, checkpoint/restart strategies, and other features.
NASA Technical Reports Server (NTRS)
Tatchell, D. G.
1979-01-01
A code, CATHY3/M, was prepared and demonstrated by application to a sample case. The preparation is reviewed, a summary of the capabilities and main features of the code is given, and the sample case results are discussed. Recommendations for future use and development of the code are provided.
A Computer Oriented Scheme for Coding Chemicals in the Field of Biomedicine.
ERIC Educational Resources Information Center
Bobka, Marilyn E.; Subramaniam, J.B.
The chemical coding scheme of the Medical Coding Scheme (MCS), developed for use in the Comparative Systems Laboratory (CSL), is outlined and evaluated in this report. The chemical coding scheme provides a classification scheme and encoding method for drugs and chemical terms. Using the scheme complicated chemical structures may be expressed…
Verification and Validation: High Charge and Energy (HZE) Transport Codes and Future Development
NASA Technical Reports Server (NTRS)
Wilson, John W.; Tripathi, Ram K.; Mertens, Christopher J.; Blattnig, Steve R.; Clowdsley, Martha S.; Cucinotta, Francis A.; Tweed, John; Heinbockel, John H.; Walker, Steven A.; Nealy, John E.
2005-01-01
In the present paper, we give the formalism for further developing a fully three-dimensional HZETRN code using marching procedures but also development of a new Green's function code is discussed. The final Green's function code is capable of not only validation in the space environment but also in ground based laboratories with directed beams of ions of specific energy and characterized with detailed diagnostic particle spectrometer devices. Special emphasis is given to verification of the computational procedures and validation of the resultant computational model using laboratory and spaceflight measurements. Due to historical requirements, two parallel development paths for computational model implementation using marching procedures and Green s function techniques are followed. A new version of the HZETRN code capable of simulating HZE ions with either laboratory or space boundary conditions is under development. Validation of computational models at this time is particularly important for President Bush s Initiative to develop infrastructure for human exploration with first target demonstration of the Crew Exploration Vehicle (CEV) in low Earth orbit in 2008.
Additional extensions to the NASCAP computer code, volume 2
NASA Technical Reports Server (NTRS)
Stannard, P. R.; Katz, I.; Mandell, M. J.
1982-01-01
Particular attention is given to comparison of the actual response of the SCATHA (Spacecraft Charging AT High Altitudes) P78-2 satellite with theoretical (NASCAP) predictions. Extensive comparisons for a variety of environmental conditions confirm the validity of the NASCAP model. A summary of the capabilities and range of validity of NASCAP is presented, with extensive reference to previously published applications. It is shown that NASCAP is capable of providing quantitatively accurate results when the object and environment are adequately represented and fall within the range of conditions for which NASCAP was intended. Three-dimensional electric field effects play an important role in determining the potential of dielectric surfaces and electrically isolated conducting surfaces, particularly in the presence of artificially imposed high voltages. A theory for such phenomena is presented and applied to the active control experiments carried out on SCATHA, as well as other space and laboratory experiments. Finally, some preliminary work toward modeling large spacecraft in polar Earth orbit is presented. An initial physical model is presented, including charge emission. A simple code based upon the model is described along with code test results.
Hyperbolic/parabolic development for the GIM-STAR code. [flow fields in supersonic inlets
NASA Technical Reports Server (NTRS)
Spradley, L. W.; Stalnaker, J. F.; Ratliff, A. W.
1980-01-01
Flow fields in supersonic inlet configurations were computed using the elliptic GIM code on the STAR computer. Spillage flow under the lower cowl was calculated to be 33% of the incoming stream. The shock/boundary layer interaction on the upper propulsive surface was computed, including separation. All shocks produced by the flow system were captured. Linearized block implicit (LBI) schemes were examined to determine their application to the GIM code. Pure explicit methods have stability limitations and fully implicit schemes are inherently inefficient; however, LBI schemes show promise as an effective compromise. A quasiparabolic version of the GIM code was developed using classical parabolized Navier-Stokes methods combined with quasitime relaxation. This scheme is referred to as quasiparabolic, although it applies equally well to hyperbolic supersonic inviscid flows. Second order windward differences are used in the marching coordinate, and either explicit or linear block implicit time relaxation can be incorporated.
Overview of NASA Multi-Dimensional Stirling Convertor Code Development and Validation Effort
NASA Astrophysics Data System (ADS)
Tew, Roy C.; Cairelli, James E.; Ibrahim, Mounir B.; Simon, Terrence W.; Gedeon, David
2003-01-01
A NASA grant has been awarded to Cleveland State University (CSU) to develop a multi-dimensional (multi-D) Stirling computer code with the goals of improving loss predictions and identifying component areas for improvements. The University of Minnesota (UMN) and Gedeon Associates are teamed with CSU. Development of test rigs at UMN and CSU and validation of the code against test data are part of the effort. The one-dimensional (1-D) Stirling codes used for design and performance prediction do not rigorously model regions of the working space where abrupt changes in flow area occur (such as manifolds and other transitions between components). Certain hardware experiences have demonstrated large performance gains by varying manifolds and heat exchanger designs to improve flow distributions in the heat exchangers. 1-D codes were not able to predict these performance gains. An accurate multi-D code should improve understanding of the effects of area changes along the main flow axis, sensitivity of performance to slight changes in internal geometry, and, in general, the understanding of various internal thermodynamic losses. The commercial CFD-ACE code has been chosen for development of the multi-D code. This 2-D/3-D code has highly developed pre- and post-processors, and moving boundary capability. Preliminary attempts at validation of CFD-ACE models of MIT gas spring and "two space" test rigs were encouraging. Also, CSU's simulations of the UMN oscillating-flow rig compare well with flow visualization results from UMN. A complementary Department of Energy (DOE) Regenerator Research effort is aiding in development of regenerator matrix models that will be used in the multi-D Stirling code. This paper reports on the progress and challenges of this multi-D code development effort.
TPASS: a gamma-ray spectrum analysis and isotope identification computer code
Dickens, J.K.
1981-03-01
The gamma-ray spectral data-reduction and analysis computer code TPASS is described. TPASS is used to analyze complex Ge(Li) gamma-ray spectra to obtain peak areas corrected for detector efficiencies, from which gamma-ray yields are determined. These yields are compared with an isotope gamma-ray data file to determine the contributions to the observed spectrum from the decay of specific radionuclides. A complete FORTRAN listing of the code and a complex test case are given.
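The workflow the abstract describes, correcting peak areas for detector efficiency and matching the resulting yields against an isotope gamma-ray library, can be sketched as follows. This is an illustrative sketch, not TPASS itself: the function names, library entries, and tolerances are assumptions.

```python
# Hypothetical sketch of efficiency-corrected yield computation and isotope
# matching, in the spirit of the TPASS workflow. All names and data here are
# illustrative; none are taken from the actual TPASS code.

def corrected_yield(peak_area, live_time, efficiency):
    """Convert a net peak area (counts) to a gamma-ray emission rate (gammas/s)."""
    return peak_area / (live_time * efficiency)

# Tiny stand-in for an isotope gamma-ray data file:
# energy (keV) -> (nuclide, branching ratio)
GAMMA_LIBRARY = {
    661.7: ("Cs-137", 0.851),
    1173.2: ("Co-60", 0.999),
    1332.5: ("Co-60", 1.000),
}

def identify(peaks, live_time, tolerance_kev=1.0):
    """Match observed peaks to library lines and estimate nuclide activities (Bq).

    `peaks` is a list of (energy_keV, net_area_counts, detector_efficiency).
    """
    results = {}
    for energy, area, eff in peaks:
        for lib_energy, (nuclide, branch) in GAMMA_LIBRARY.items():
            if abs(energy - lib_energy) <= tolerance_kev:
                activity = corrected_yield(area, live_time, eff) / branch
                results.setdefault(nuclide, []).append(activity)
    # average the activity over all lines attributed to each nuclide
    return {n: sum(a) / len(a) for n, a in results.items()}

peaks = [(661.6, 8500.0, 0.012), (1173.0, 4200.0, 0.008), (1332.4, 3800.0, 0.007)]
print(identify(peaks, live_time=600.0))
```

In a real analysis the library would hold thousands of lines, peak areas would come from a spectral fit, and ambiguous matches would be resolved by requiring consistent activities across all lines of a candidate nuclide.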
Nichols, B.D.; Mueller, C.; Necker, G.A.; Travis, J.R.; Spore, J.W.; Lam, K.L.; Royl, P.; Redlinger, R.; Wilson, T.L.
1998-10-01
Los Alamos National Laboratory (LANL) and Forschungszentrum Karlsruhe (FzK) are developing GASFLOW, a three-dimensional (3D) fluid dynamics field code as a best-estimate tool to characterize local phenomena within a flow field. Examples of 3D phenomena include circulation patterns; flow stratification; hydrogen distribution mixing and stratification; combustion and flame propagation; effects of noncondensable gas distribution on local condensation and evaporation; and aerosol entrainment, transport, and deposition. An analysis with GASFLOW will result in a prediction of the gas composition and discrete particle distribution in space and time throughout the facility and the resulting pressure and temperature loadings on the walls and internal structures with or without combustion. A major application of GASFLOW is for predicting the transport, mixing, and combustion of hydrogen and other gases in nuclear reactor containments and other facilities. It has been applied to situations involving transporting and distributing combustible gas mixtures. It has been used to study gas dynamic behavior (1) in low-speed, buoyancy-driven flows, as well as sonic flows or diffusion dominated flows; and (2) during chemically reacting flows, including deflagrations. The effects of controlling such mixtures by safety systems can be analyzed. The code version described in this manual is designated GASFLOW 2.1, which combines previous versions of the United States Nuclear Regulatory Commission code HMS (for Hydrogen Mixing Studies) and the Department of Energy and FzK versions of GASFLOW. The code was written in standard Fortran 90. This manual comprises three volumes. Volume I describes the governing physical equations and computational model. Volume II describes how to use the code to set up a model geometry, specify gas species and material properties, define initial and boundary conditions, and specify different outputs, especially graphical displays. Sample problems are included
Watson, S.B.; Ford, M.R.
1980-02-01
A computer code has been developed that implements the recommendations of ICRP Committee 2 for computing limits for occupational exposure to radionuclides. The purpose of this report is to describe the modules of the computer code and to present the methods and criteria used to compute the tables published in the Committee 2 report. The computer code contains three modules: (1) one computes specific effective energy; (2) one calculates cumulated activity; and (3) one computes dose and the series of ICRP tables. The description of the first two modules emphasizes the new ICRP Committee 2 recommendations for computing specific effective energy and cumulated activity. For the third module, the complex criteria used to calculate the tables of committed dose equivalent, weighted committed dose equivalents, annual limits on intake, and derived air concentrations are discussed.
TVENT1: a computer code for analyzing tornado-induced flow in ventilation systems
Andrae, R.W.; Tang, P.K.; Gregory, W.S.
1983-07-01
TVENT1 is a new version of the TVENT computer code, which was designed to predict the flows and pressures in a ventilation system subjected to a tornado. TVENT1 is essentially the same code but has added features for turning blowers off and on, changing blower speeds, and changing the resistance of dampers and filters. These features make it possible to depict a sequence of events during a single run. Other features also have been added to make the code more versatile. Example problems are included to demonstrate the code's applications.
Development of the KIVA-2 CFD code for rocket propulsion applications
NASA Technical Reports Server (NTRS)
Shannon, Robert V., Jr.; Murray, Alvin L.
1992-01-01
The KIVA-2 code, originally developed to solve computational fluid dynamics problems in internal combustion engines, has been extended to solve rocket propulsion flows. The objective of the work was to develop a code in which both liquid and solid particle motion could be simulated for arbitrary geometries and for high-speed as well as low-speed reacting flows. Modifications to the original code include independently specified supersonic and subsonic inflows and outflows; symmetric as well as periodic boundary conditions; and the capability to use generalized single- or multi-species thermodynamic data and transport coefficients, allowing the user to specify arbitrary wall temperature/heat flux distributions. The code has been shown to successfully solve rocket propulsion flows, as well as flows with entrained particles, for several different rocket nozzles.
Atmospheric Transmittance/Radiance Computer Code FASCOD2,
1981-10-16
transmittance and radiance are listed in Table 4. The non-LTE calculation uses a straightforward extension of the HIRAC line-by-line algorithm ... CONT is called by HIRAC to merge ABSRB and R4 into R3 ... HIRAC1, CNVFNV, and PANEL. The subroutine STRTHS is used by HIRACQ to compute the two
Comparison of various NLTE codes in computing the charge-state populations of an argon plasma
Stone, S.R.; Weisheit, J.C.
1984-11-01
A comparison among nine computer codes shows surprisingly large differences in an area where the theory had been believed to be well understood. Each code treats an argon plasma, optically thin and with no external photon flux; temperatures vary around 1 keV, and ion densities vary from 6 x 10^17 cm^-3 to 6 x 10^21 cm^-3. Under these conditions most ions have three or fewer bound electrons. The calculated populations of 0-, 1-, 2-, and 3-electron ions differ from code to code typically by factors of 2, and in some cases by factors greater than 300. These differences depend as sensitively on how many Rydberg states a code allows as on variations among the computed collision rates. 29 refs., 23 figs.
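The steady-state balance underlying such charge-state calculations can be sketched in a few lines: in equilibrium, ionization out of stage i balances recombination back into it, so the population ratio between adjacent stages is the ratio of the ionization to recombination rate coefficients. The rate values below are placeholders, not argon data from any of the nine codes compared.

```python
# Minimal steady-state ionization-balance sketch. In equilibrium,
# N[i] * S_i = N[i+1] * alpha_i, so N[i+1]/N[i] = S_i / alpha_i.
# The rate coefficients here are made-up placeholders.

def charge_state_populations(ionization_rates, recombination_rates):
    """Return normalized populations for charge stages 0..n, given per-stage
    ionization (S_i) and recombination (alpha_i) rate coefficients."""
    assert len(ionization_rates) == len(recombination_rates)
    pops = [1.0]
    for s, a in zip(ionization_rates, recombination_rates):
        pops.append(pops[-1] * s / a)  # ratio between adjacent stages
    total = sum(pops)
    return [p / total for p in pops]

# three stages with made-up rates (cm^3/s); stage ratios are 2.0 and 0.5
pops = charge_state_populations([2e-8, 1e-8], [1e-8, 2e-8])
print(pops)  # populations in the ratio 1 : 2 : 1
```

The sensitivity the comparison found follows directly from this structure: the number of Rydberg states a code retains changes its effective S_i and alpha_i (via stepwise ionization and cascading recombination through high-lying levels), and those changes propagate directly into the stage ratios.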
NASA Technical Reports Server (NTRS)
Cole, Gary L.; Richard, Jacques C.
1991-01-01
An approach to simulating the internal flows of supersonic propulsion systems is presented. The approach is based on a fairly simple modification of the Large Perturbation Inlet (LAPIN) computer code. LAPIN uses a quasi-one-dimensional, inviscid, unsteady formulation of the continuity, momentum, and energy equations. The equations are solved with a shock-capturing, finite-difference algorithm. The original code, developed for simulating supersonic inlets, includes engineering models of unstart/restart, bleed, bypass, and variable duct geometry by means of source terms in the equations. The source terms also provide a mechanism for incorporating, along with the inlet, propulsion system components such as compressor stages, combustors, and turbine stages. This requires each component to be distributed axially over a number of grid points. Because of the distributed nature of such components, this representation should be more accurate than a lumped-parameter model. Components can be modeled by performance maps, which in turn are used to compute the source terms. The general approach is described, and the simulation of a compressor/fan stage is then discussed to show the approach in detail.
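The quasi-one-dimensional, unsteady, source-term formulation described above can be sketched with a simple shock-capturing update. This is a hedged illustration of the general technique, not LAPIN itself: it uses a basic Lax-Friedrichs scheme, and the `source` hook stands in for the distributed component models (bleed, compressor stages, combustors) that the real code computes from performance maps.

```python
# Quasi-1D unsteady Euler sketch with an area-change source term and a hook
# for distributed component sources. Illustrative only; LAPIN's actual
# numerics and source-term models are more elaborate.

import numpy as np

GAMMA = 1.4

def flux(Ui, Ai):
    """Flux for one cell of conserved variables Ui = [rho*A, rho*u*A, E*A]."""
    rho = Ui[0] / Ai
    u = Ui[1] / Ui[0]
    E = Ui[2] / Ai
    p = (GAMMA - 1.0) * (E - 0.5 * rho * u * u)
    return np.array([Ui[1], (rho * u * u + p) * Ai, u * (E + p) * Ai]), p

def step(U, A, dAdx, dx, dt, source=None):
    """One Lax-Friedrichs time step over the duct; `source(i)` is where
    component models would add mass/momentum/energy at grid point i."""
    n = U.shape[1]
    F = np.empty_like(U)
    p = np.empty(n)
    for i in range(n):
        F[:, i], p[i] = flux(U[:, i], A[i])
    Unew = U.copy()  # boundary cells held fixed in this sketch
    for i in range(1, n - 1):
        Unew[:, i] = 0.5 * (U[:, i - 1] + U[:, i + 1]) \
                     - 0.5 * dt / dx * (F[:, i + 1] - F[:, i - 1])
        Unew[1, i] += dt * p[i] * dAdx[i]      # quasi-1D area-change source
        if source is not None:
            Unew[:, i] += dt * source(i)       # distributed component source
    return Unew

# sanity check: a uniform state with constant area should not change
U = np.array([[1.0] * 5, [0.0] * 5, [2.5] * 5])  # rho=1, u=0, p=1
Unew = step(U, np.ones(5), np.zeros(5), dx=0.1, dt=0.01)
print(np.allclose(Unew, U))
```

Representing a compressor stage then amounts to spreading its map-derived momentum and energy addition over the grid points it occupies, exactly the distributed-versus-lumped distinction the abstract draws.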
1997-03-01
This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969, when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume consists of the section of the manual dealing with eight of the functional modules in the code. These are: BONAMI - resonance self-shielding by the Bondarenko method; NITAWL-II - SCALE system module for performing resonance shielding and working library production; XSDRNPM - a one-dimensional discrete-ordinates code for transport analysis; XSDOSE - a module for calculating fluxes and dose rates at points outside a shield; KENO IV/S - an improved Monte Carlo criticality program; COUPLE; ORIGEN-S - SCALE system module to calculate fuel depletion, actinide transmutation, fission product buildup and decay, and associated radiation source terms; ICE.
Computer code to predict the heat of explosion of high energy materials.
Muthurajan, H; Sivabalan, R; Pon Saravanan, N; Talawar, M B
2009-01-30
The computational approach to the thermochemical changes involved in the explosion of high energy materials (HEMs) vis-à-vis their molecular structure aids HEMs chemists and engineers in predicting important thermodynamic parameters such as the heat of explosion. Such computer-aided design is useful for predicting the performance of a given HEM as well as for conceiving futuristic high energy molecules with significant potential in the field of explosives and propellants. The software code LOTUSES, developed by the authors, predicts various characteristics of HEMs such as explosion products (including balanced explosion reactions), density, velocity of detonation, CJ pressure, etc. The new computational approach described in this paper allows the prediction of the heat of explosion (ΔH_e) without any experimental data for different HEMs, with results comparable to the experimental values reported in the literature. The new algorithm, which does not require any complex input parameters, is incorporated in LOTUSES (version 1.5), and the results are presented in this paper. Linear regression analysis of all data points yields the correlation coefficient R^2 = 0.9721 with the linear equation y = 0.9262x + 101.45. The value R^2 = 0.9721 reveals that the computed values are in good agreement with the experimental values and are useful for rapid hazard assessment of energetic materials.
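The regression analysis reported above, fitting experimental against computed heats of explosion and quoting R^2, can be reproduced with a short least-squares routine. The fitting code is standard; the data points below are placeholders for illustration, not the HEM values from the paper.

```python
# Least-squares linear fit and coefficient of determination, as used to
# correlate computed vs. experimental heats of explosion. The data below
# are made-up placeholders, not values from the LOTUSES study.

def linear_fit(x, y):
    """Return (slope, intercept, R^2) for the least-squares line y = slope*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1.0 - ss_res / ss_tot

# placeholder computed vs. experimental heats of explosion (kJ/kg)
computed = [4000.0, 4500.0, 5200.0, 6100.0]
experimental = [3810.0, 4300.0, 4950.0, 5700.0]
slope, intercept, r2 = linear_fit(computed, experimental)
print(slope, intercept, r2)
```

With the paper's full data set, this kind of fit is what produced y = 0.9262x + 101.45 with R^2 = 0.9721; a slope near 1 and a small intercept indicate that the code's predictions track the measurements nearly one-to-one.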