Science.gov

Sample records for computer code development

  1. Development of probabilistic multimedia multipathway computer codes.

    SciTech Connect

    Yu, C.; LePoire, D.; Gnanapragasam, E.; Arnish, J.; Kamboj, S.; Biwer, B. M.; Cheng, J.-J.; Zielen, A. J.; Chen, S. Y.; Mo, T.; Abu-Eid, R.; Thaggard, M.; Sallo, A., III.; Peterson, H., Jr.; Williams, W. A.; Environmental Assessment; NRC; EM

    2002-01-01

    The deterministic multimedia dose/risk assessment codes RESRAD and RESRAD-BUILD have been widely used for many years for evaluation of sites contaminated with residual radioactive materials. The RESRAD code applies to the cleanup of sites (soils) and the RESRAD-BUILD code applies to the cleanup of buildings and structures. This work describes the procedure used to enhance the deterministic RESRAD and RESRAD-BUILD codes for probabilistic dose analysis. A six-step procedure was used in developing default parameter distributions and the probabilistic analysis modules. These six steps include (1) listing and categorizing parameters; (2) ranking parameters; (3) developing parameter distributions; (4) testing parameter distributions for probabilistic analysis; (5) developing probabilistic software modules; and (6) testing probabilistic modules and integrated codes. The procedures used can be applied to the development of other multimedia probabilistic codes. The probabilistic versions of RESRAD and RESRAD-BUILD codes provide tools for studying the uncertainty in dose assessment caused by uncertain input parameters. The parameter distribution data collected in this work can also be applied to other multimedia assessment tasks and multimedia computer codes.
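
    A minimal sketch of the idea behind steps (3) through (6): sample each uncertain input from its assigned distribution and push every sample through the deterministic dose model, so the computed dose itself acquires a distribution. The dose_rate() function and all three parameter distributions below are hypothetical placeholders, not RESRAD's pathway models or default parameters.

      # Monte Carlo propagation of parameter distributions through a
      # deterministic dose model (all models and numbers are illustrative).
      import random

      def dose_rate(soil_conc, ingestion_rate, transfer_factor):
          # Hypothetical single-pathway model: dose = concentration x intake x transfer
          return soil_conc * ingestion_rate * transfer_factor

      N = 10_000
      doses = []
      for _ in range(N):
          soil = random.lognormvariate(0.0, 0.5)          # assumed lognormal, Bq/g
          intake = random.uniform(50.0, 200.0)            # assumed uniform, g/yr
          transfer = random.triangular(1e-4, 1e-2, 1e-3)  # assumed triangular, Sv/Bq
          doses.append(dose_rate(soil, intake, transfer))

      doses.sort()
      print("mean dose:", sum(doses) / N)
      print("95th percentile dose:", doses[int(0.95 * N)])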

  2. New developments in the Saphire computer codes

    SciTech Connect

    Russell, K.D.; Wood, S.T.; Kvarfordt, K.J.

    1996-03-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a suite of computer programs that were developed to create and analyze a probabilistic risk assessment (PRA) of a nuclear power plant. Many recent enhancements to this suite of codes have been made. This presentation will provide an overview of these features and capabilities. The presentation will include a discussion of the new GEM module. This module greatly reduces and simplifies the work necessary to use the SAPHIRE code in event assessment applications. An overview of the features provided in the new Windows version will also be provided. This version is a full Windows 32-bit implementation and offers many new and exciting features. [A separate computer demonstration was held to allow interested participants to get a preview of these features.] The new capabilities that have been added since version 5.0 will be covered. Some of these major new features include the ability to store an unlimited number of basic events, gates, systems, sequences, etc.; the addition of improved reporting capabilities to allow the user to generate and "scroll" through custom reports; the addition of multi-variable importance measures; and the simplification of the user interface. Although originally designed as a PRA Level 1 suite of codes, capabilities have recently been added to SAPHIRE to allow the user to apply the code in Level 2 analyses. These features will be discussed in detail during the presentation. The modifications and capabilities added to this version of SAPHIRE significantly extend the code in many important areas. Together, these extensions represent a major step forward in PC-based risk analysis tools. This presentation provides an up-to-date status of these important PRA analysis tools.
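
    For readers unfamiliar with the quantities SAPHIRE manipulates, the sketch below shows the generic arithmetic of a Level 1 PRA: a top-event probability assembled from minimal cut sets, plus a Fussell-Vesely importance measure for each basic event. The cut sets and probabilities are invented; this is not SAPHIRE's algorithm, only the kind of bookkeeping such codes automate.

      # Top-event probability from minimal cut sets (min-cut upper bound)
      # and Fussell-Vesely importance, with invented example data.
      from math import prod

      basic_events = {"PUMP_A": 1e-3, "PUMP_B": 1e-3, "VALVE_C": 5e-4, "DG_D": 2e-2}
      cut_sets = [("PUMP_A", "PUMP_B"), ("VALVE_C",), ("PUMP_A", "DG_D")]

      def cut_prob(cs):
          return prod(basic_events[e] for e in cs)

      # Min-cut upper bound: 1 - prod(1 - P(cut_i))
      p_top = 1.0 - prod(1.0 - cut_prob(cs) for cs in cut_sets)

      for ev in basic_events:
          # Fraction of top-event probability carried by cut sets containing ev
          fv = sum(cut_prob(cs) for cs in cut_sets if ev in cs) / p_top
          print(f"{ev}: FV = {fv:.3f}")
      print("top-event probability ~", p_top)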

  3. Development and application of computational aerothermodynamics flowfield computer codes

    NASA Technical Reports Server (NTRS)

    Venkatapathy, Ethiraj

    1994-01-01

    Research was performed in the area of computational modeling and application of hypersonic, high-enthalpy, thermo-chemical nonequilibrium flow (aerothermodynamics) problems. A number of computational fluid dynamic (CFD) codes were developed and applied to simulate high-altitude rocket plumes, the Aeroassist Flight Experiment (AFE), hypersonic base flow for planetary probes, the single expansion ramp nozzle (SERN) connected with the National Aerospace Plane, hypersonic drag devices, hypersonic ramp flows, ballistic range models, shock tunnel facility nozzles, transient and steady flows in the shock tunnel facility, arc-jet flows, thermochemical nonequilibrium flows around simple and complex bodies, axisymmetric ionized flows of interest to re-entry, unsteady shock-induced combustion phenomena, high-enthalpy pulsed facility simulations, and unsteady shock boundary layer interactions in shock tunnels. Computational modeling involved developing appropriate numerical schemes for the flows of interest and developing, applying, and validating appropriate thermochemical processes. As part of improving the accuracy of the numerical predictions, adaptive grid algorithms were explored, and a user-friendly, self-adaptive code (SAGE) was developed. Aerothermodynamic flows of interest included energy transfer due to strong radiation, and a significant level of effort was spent in developing computational codes for calculating radiation and radiation modeling. In addition, computational tools were developed and applied to predict the radiative heat flux and spectra that reach the model surface.

  4. Liquid rocket combustor computer code development

    NASA Technical Reports Server (NTRS)

    Liang, P. Y.

    1985-01-01

    The Advanced Rocket Injector/Combustor Code (ARICC), developed to model the complete chemical/fluid/thermal processes occurring inside rocket combustion chambers, is highlighted. The code, derived from the CONCHAS-SPRAY code originally developed at Los Alamos National Laboratory, incorporates powerful features such as the ability to model complex injector combustion chamber geometries, Lagrangian tracking of droplets, full chemical equilibrium and kinetic reactions for multiple species, a fractional volume of fluid (VOF) description of liquid jet injection in addition to the gaseous phase fluid dynamics, and turbulent mass, energy, and momentum transport. Atomization and droplet dynamic models from earlier-generation codes are transplanted into the present code. Currently, ARICC is specialized for liquid oxygen/hydrogen propellants, although other fuel/oxidizer pairs can be easily substituted.

  5. Development of non-linear finite element computer code

    NASA Technical Reports Server (NTRS)

    Becker, E. B.; Miller, T.

    1985-01-01

    Recent work has shown that the use of separable symmetric functions of the principal stretches can adequately describe the response of certain propellant materials and, further, that a data reduction scheme gives a convenient way of obtaining the values of the functions from experimental data. Based on this representation of the energy, a computational scheme was developed that allows finite element analysis of boundary value problems of arbitrary shape and loading. The computational procedure was implemented in a three-dimensional finite element code, TEXLESP-S, which is documented herein.

  6. Development and application of computational aerothermodynamics flowfield computer codes

    NASA Technical Reports Server (NTRS)

    Venkatapathy, Ethiraj

    1993-01-01

    Computations are presented for one-dimensional, strong shock waves that are typical of those that form in front of a reentering spacecraft. The fluid mechanics and thermochemistry are modeled using two different approaches. The first employs traditional continuum techniques in solving the Navier-Stokes equations. The second approach employs a particle simulation technique (the direct simulation Monte Carlo method, DSMC). The thermochemical models employed in these two techniques are quite different. The present investigation presents an evaluation of thermochemical models for nitrogen under hypersonic flow conditions. Four separate cases are considered, governed, respectively, by vibrational relaxation, weak dissociation, strong dissociation, and weak ionization. In near-continuum, hypersonic flow, the nonequilibrium thermochemical models employed in continuum and particle simulations produce nearly identical solutions. Further, the two approaches are evaluated successfully against available experimental data for weakly and strongly dissociating flows.
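
    The first of the four cases can be made concrete with the Landau-Teller equation, the standard continuum model for vibrational relaxation: de_v/dt = (e_v_eq(T) - e_v)/tau. The sketch below integrates it for nitrogen behind a notional shock; the temperature, relaxation time, and harmonic-oscillator energy function are illustrative assumptions, not values from the paper.

      # Landau-Teller vibrational relaxation toward equilibrium at the
      # post-shock translational temperature (constants are illustrative).
      import math

      THETA_V = 3371.0                 # N2 characteristic vibrational temperature, K

      def e_vib(T):
          # Harmonic-oscillator vibrational energy (per-unit-mass scale omitted)
          return THETA_V / (math.exp(THETA_V / T) - 1.0)

      T_trans = 8000.0                 # assumed post-shock temperature, K
      tau = 1.0e-6                     # assumed constant relaxation time, s

      ev, dt = e_vib(300.0), 1.0e-8    # start near free-stream equilibrium
      for _ in range(1000):            # integrate 10 microseconds
          ev += dt * (e_vib(T_trans) - ev) / tau
      print("e_v / e_v_eq after 10 us:", ev / e_vib(T_trans))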

  7. Theoretical Atomic Physics code development IV: LINES, A code for computing atomic line spectra

    SciTech Connect

    Abdallah, J. Jr.; Clark, R.E.H.

    1988-12-01

    A new computer program, LINES, has been developed for simulating atomic line emission and absorption spectra using the accurate fine structure energy levels and transition strengths calculated by the (CATS) Cowan Atomic Structure code. Population distributions for the ion stages are obtained in LINES by using the Local Thermodynamic Equilibrium (LTE) model. LINES is also useful for displaying the pertinent atomic data generated by CATS. This report describes the use of LINES. Both CATS and LINES are part of the Theoretical Atomic PhysicS (TAPS) code development effort at Los Alamos. 11 refs., 9 figs., 1 tab.

  8. Development and application of the GIM code for the Cyber 203 computer

    NASA Technical Reports Server (NTRS)

    Stainaker, J. F.; Robinson, M. A.; Rawlinson, E. G.; Anderson, P. G.; Mayne, A. W.; Spradley, L. W.

    1982-01-01

    The GIM computer code for fluid dynamics research was developed. Enhancement of the computer code, implicit algorithm development, turbulence model implementation, chemistry model development, interactive input module coding, and wing/body flowfield computation are described. The GIM quasi-parabolic code development was completed, and the code was used to compute a number of example cases. Turbulence models, in both algebraic and differential-equation form, were added to the basic viscous code. An equilibrium reacting chemistry model and an implicit finite difference scheme were also added. Development was completed on the interactive module for generating the input data for GIM. Solutions for inviscid hypersonic flow over a wing/body configuration are also presented.

  9. Two-Phase Flow in Geothermal Wells: Development and Uses of a Good Computer Code

    SciTech Connect

    Ortiz-Ramirez, Jaime

    1983-06-01

    A computer code is developed for vertical two-phase flow in geothermal wellbores. The two-phase correlations used were developed by Orkiszewski (1967) and others and are widely applicable in the oil and gas industry. The computer code is compared to the flowing survey measurements from wells in the East Mesa, Cerro Prieto, and Roosevelt Hot Springs geothermal fields with success. Well data from the Svartsengi field in Iceland are also used. Several applications of the computer code are considered. They range from reservoir analysis to wellbore deposition studies. It is considered that accurate and workable wellbore simulators have an important role to play in geothermal reservoir engineering.
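
    The core of such a wellbore simulator is a marching pressure traverse: starting from a known bottomhole pressure, integrate the two-phase pressure gradient up the well, with a correlation such as Orkiszewski's supplying the gradient at each step. In the skeleton below, a crude homogeneous hydrostatic-plus-friction gradient stands in for the correlation, and every number is invented.

      # Marching pressure traverse up a wellbore; the gradient function is a
      # placeholder for a two-phase correlation such as Orkiszewski (1967).
      G = 9.81

      def pressure_gradient(rho_mix, f=0.02, v=2.0, d=0.2):
          # dp/dz (Pa/m): hydrostatic + frictional terms for vertical upflow
          return rho_mix * G + f * rho_mix * v * v / (2.0 * d)

      p = 15.0e6        # assumed bottomhole pressure, Pa
      rho = 800.0       # mixture density, kg/m3 (a real code updates this each
                        # step from holdup and flash calculations)
      dz = 10.0
      for _ in range(100):              # 1000 m of wellbore in 10 m steps
          p -= pressure_gradient(rho) * dz
      print("wellhead pressure, MPa:", p / 1e6)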

  10. Development and validation of GWHEAD, a three-dimensional groundwater head computer code

    SciTech Connect

    Beckmeyer, R.R.; Root, R.W.; Routt, K.R.

    1980-03-01

    A computer code has been developed to solve the groundwater flow equation in three dimensions. The code uses finite-difference approximations solved by the strongly implicit solution procedure. Input parameters to the code include hydraulic conductivity, specific storage, porosity, accretion (recharge), and initial hydraulic head. These parameters may be specified as varying spatially. The hydraulic conductivity may be input as isotropic or anisotropic. The boundaries either may permit flow across them or may be impermeable. The code has been used to model leaky confined groundwater conditions and spherical flow to a continuous point sink, both of which have exact analytical solutions. The results generated by the computer code compare well with those of the analytical solutions. The code was designed to model groundwater flow beneath fuel reprocessing and waste storage areas at the Savannah River Plant.
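
    A one-dimensional steady-state analogue shows the discretization such a code is built on: the flow equation K d2h/dx2 + W = 0 becomes a tridiagonal system with fixed heads at the boundaries. GWHEAD itself is three-dimensional, transient, and uses the strongly implicit procedure; the direct solve and all parameter values below are illustrative simplifications.

      # Finite-difference solution of 1-D steady groundwater flow with
      # recharge; boundary heads are fixed (Dirichlet conditions).
      import numpy as np

      n, L = 51, 1000.0            # nodes and domain length (m)
      dx = L / (n - 1)
      K = 1.0e-4                   # hydraulic conductivity, m/s (assumed uniform)
      W = 1.0e-8                   # accretion (recharge), m/s (assumed)

      A = np.zeros((n, n))
      b = np.full(n, -W * dx * dx / K)
      for i in range(1, n - 1):
          A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
      A[0, 0] = A[-1, -1] = 1.0
      b[0], b[-1] = 10.0, 8.0      # fixed heads at the two boundaries, m

      h = np.linalg.solve(A, b)    # GWHEAD would use an iterative SIP solve
      print("peak head, m:", h.max())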

  11. Development of DUST: A computer code that calculates release rates from a LLW disposal unit

    SciTech Connect

    Sullivan, T.M.

    1992-01-01

    Performance assessment of a Low-Level Waste (LLW) disposal facility begins with an estimation of the rate at which radionuclides migrate out of the facility (i.e., the disposal unit source term). The major physical processes that influence the source term are water flow, container degradation, waste form leaching, and radionuclide transport. A computer code, DUST (Disposal Unit Source Term) has been developed which incorporates these processes in a unified manner. The DUST code improves upon existing codes as it has the capability to model multiple container failure times, multiple waste form release properties, and radionuclide specific transport properties. Verification studies performed on the code are discussed.

  12. Development of DUST: A computer code that calculates release rates from a LLW disposal unit

    SciTech Connect

    Sullivan, T.M.

    1992-04-01

    Performance assessment of a Low-Level Waste (LLW) disposal facility begins with an estimation of the rate at which radionuclides migrate out of the facility (i.e., the disposal unit source term). The major physical processes that influence the source term are water flow, container degradation, waste form leaching, and radionuclide transport. A computer code, DUST (Disposal Unit Source Term) has been developed which incorporates these processes in a unified manner. The DUST code improves upon existing codes as it has the capability to model multiple container failure times, multiple waste form release properties, and radionuclide specific transport properties. Verification studies performed on the code are discussed.

  13. The development of an intelligent interface to a computational fluid dynamics flow-solver code

    NASA Technical Reports Server (NTRS)

    Williams, Anthony D.

    1988-01-01

    Researchers at NASA Lewis are currently developing an 'intelligent' interface to aid in the development and use of large, computational fluid dynamics flow-solver codes for studying the internal fluid behavior of aerospace propulsion systems. This paper discusses the requirements, design, and implementation of an intelligent interface to Proteus, a general purpose, 3-D, Navier-Stokes flow solver. The interface is called PROTAIS to denote its introduction of artificial intelligence (AI) concepts to the Proteus code.

  14. Developing a coding scheme for detecting usability and fun problems in computer games for young children.

    PubMed

    Barendregt, W; Bekker, M M

    2006-08-01

    This article describes the development and assessment of a coding scheme for finding both usability and fun problems through observations of young children playing computer games during user tests. The proposed coding scheme is based on an existing list of breakdown indication types of the detailed video analysis method (DEVAN). This method was developed to detect usability problems in task-based products for adults. However, the new coding scheme for children's computer games takes into account that in games, fun, in addition to usability, is an important factor and that children behave differently from adults. Therefore, the proposed coding scheme uses 8 of the 14 original breakdown indications and has 7 new indications. The article first discusses the development of the new coding scheme. Subsequently, the article describes the reliability assessment of the coding scheme. The any-two agreement measure of 38.5% shows that thresholds for when certain user behavior is worth coding will be different for different evaluators. However, the any-two agreement of .92 for a fixed list of observation points shows that the distinction between the available codes is clear to most evaluators. Finally, a pilot study shows that training can increase any-two agreement considerably by decreasing the number of unique observations, in comparison with the number of agreed upon observations.
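
    The any-two agreement figures quoted above can be computed as follows: for every pair of evaluators, divide the observations both made by the observations either made, then average over all pairs. The three evaluators and their coded observations below are invented, and this is the commonly used formula, which may differ in detail from the authors' exact procedure.

      # Any-two agreement over pairs of evaluators (invented example data).
      from itertools import combinations

      observations = {
          "eval1": {(10, "ACT"), (25, "PUZ"), (40, "STP")},   # (time, code)
          "eval2": {(10, "ACT"), (25, "PUZ"), (55, "IMP")},
          "eval3": {(10, "ACT"), (40, "STP"), (55, "IMP")},
      }

      scores = []
      for a, b in combinations(observations, 2):
          both = observations[a] & observations[b]
          either = observations[a] | observations[b]
          scores.append(len(both) / len(either))

      print("any-two agreement:", sum(scores) / len(scores))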

  15. Solution of 3-dimensional time-dependent viscous flows. Part 2: Development of the computer code

    NASA Technical Reports Server (NTRS)

    Weinberg, B. C.; Mcdonald, H.

    1980-01-01

    There is considerable interest in developing a numerical scheme for solving the time-dependent, viscous, compressible, three-dimensional flow equations to aid in the design of helicopter rotors. The development of a computer code to solve a three-dimensional unsteady approximate form of the Navier-Stokes equations employing a linearized block implicit technique in conjunction with a QR operator scheme is described. Results of calculations of several Cartesian test cases are presented. The computer code can be applied to more complex flow fields such as those encountered on rotating airfoils.

  16. Development of a model and computer code to describe solar grade silicon production processes

    NASA Technical Reports Server (NTRS)

    Gould, R. K.; Srivastava, R.

    1979-01-01

    Two computer codes were developed for describing flow reactors in which high-purity, solar-grade silicon is produced via reduction of gaseous silicon halides. The first is the CHEMPART code, an axisymmetric, marching code which treats two-phase flows with models describing detailed gas-phase chemical kinetics, particle formation, and particle growth. It can be used to describe flow reactors in which reactants mix, react, and form a particulate phase. Detailed radial gas-phase composition, temperature, velocity, and particle-size-distribution profiles are computed. Also, deposition of heat, momentum, and mass (either particulate or vapor) on reactor walls is described. The second code is a modified version of the GENMIX boundary layer code which is used to compute rates of heat, momentum, and mass transfer to the reactor walls. This code lacks the detailed chemical kinetics and particle handling features of the CHEMPART code but has the virtue of running much more rapidly than CHEMPART, while treating the phenomena occurring in the boundary layer in more detail.

  17. MELCOR computer code manuals

    SciTech Connect

    Summers, R.M.; Cole, R.K. Jr.; Smith, R.C.; Stuart, D.S.; Thompson, S.L.; Hodge, S.A.; Hyman, C.R.; Sanders, R.L.

    1995-03-01

    MELCOR is a fully integrated, engineering-level computer code that models the progression of severe accidents in light water reactor nuclear power plants. MELCOR is being developed at Sandia National Laboratories for the U.S. Nuclear Regulatory Commission as a second-generation plant risk assessment tool and the successor to the Source Term Code Package. A broad spectrum of severe accident phenomena in both boiling and pressurized water reactors is treated in MELCOR in a unified framework. These include: thermal-hydraulic response in the reactor coolant system, reactor cavity, containment, and confinement buildings; core heatup, degradation, and relocation; core-concrete attack; hydrogen production, transport, and combustion; fission product release and transport; and the impact of engineered safety features on thermal-hydraulic and radionuclide behavior. Current uses of MELCOR include estimation of severe accident source terms and their sensitivities and uncertainties in a variety of applications. This publication of the MELCOR computer code manuals corresponds to MELCOR 1.8.3, released to users in August 1994. Volume 1 contains a primer that describes MELCOR's phenomenological scope, organization (by package), and documentation. The remainder of Volume 1 contains the MELCOR Users Guides, which provide the input instructions and guidelines for each package. Volume 2 contains the MELCOR Reference Manuals, which describe the phenomenological models that have been implemented in each package.

  18. Development of a Model and Computer Code to Describe Solar Grade Silicon Production Processes

    NASA Technical Reports Server (NTRS)

    Srivastava, R.; Gould, R. K.

    1979-01-01

    Mathematical models and computer codes based on these models, which allow prediction of the product distribution in chemical reactors for converting gaseous silicon compounds to condensed-phase silicon, were developed. The following tasks were accomplished: (1) formulation of a model for silicon vapor separation/collection from the developing turbulent flow stream within reactors of the Westinghouse type; (2) modification of an available general parabolic code to achieve solutions to the governing partial differential equations (boundary layer type) which describe migration of the vapor to the reactor walls; (3) a parametric study using the boundary layer code to optimize the performance characteristics of the Westinghouse reactor; (4) calculations relating to the collection efficiency of the new AeroChem reactor; and (5) final testing of the modified LAPP code for use as a method of predicting Si(l) droplet sizes in these reactors.

  19. Development of a new generation solid rocket motor ignition computer code

    NASA Technical Reports Server (NTRS)

    Foster, Winfred A., Jr.; Jenkins, Rhonald M.; Ciucci, Alessandro; Johnson, Shelby D.

    1994-01-01

    This report presents the results of experimental and numerical investigations of the flow field in the head-end star grain slots of the Space Shuttle Solid Rocket Motor. This work provided the basis for the development of an improved solid rocket motor ignition transient code which is also described in this report. The correlation between the experimental and numerical results is excellent and provides a firm basis for the development of a fully three-dimensional solid rocket motor ignition transient computer code.

  20. Computer algorithm for coding gain

    NASA Technical Reports Server (NTRS)

    Dodd, E. E.

    1974-01-01

    Development of a computer algorithm for coding gain for use in an automated communications link design system. Using an empirical formula which defines coding gain as used in space communications engineering, an algorithm is constructed on the basis of available performance data for nonsystematic convolutional encoding with soft-decision (eight-level) Viterbi decoding.
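
    Coding gain here is the reduction in required Eb/N0 at a target bit error rate when coding is used. The sketch below inverts the exact uncoded BPSK error curve by bisection and compares it with an assumed coded requirement; the 4.4 dB figure is a placeholder typical of constraint-length-7, rate-1/2 soft-decision Viterbi decoding, standing in for the empirical performance data such an algorithm interpolates.

      # Coding gain = (Eb/N0 required uncoded) - (Eb/N0 required coded) at a
      # fixed target BER. Uncoded BPSK is exact; the coded value is assumed.
      import math

      def ber_uncoded_bpsk(ebn0_db):
          ebn0 = 10.0 ** (ebn0_db / 10.0)
          return 0.5 * math.erfc(math.sqrt(ebn0))

      def required_ebn0_db(target_ber, lo=-5.0, hi=20.0):
          for _ in range(60):          # bisection; BER falls as Eb/N0 rises
              mid = 0.5 * (lo + hi)
              lo, hi = (mid, hi) if ber_uncoded_bpsk(mid) > target_ber else (lo, mid)
          return hi

      target = 1.0e-5
      uncoded = required_ebn0_db(target)   # about 9.6 dB for BPSK
      coded = 4.4                          # assumed soft-decision Viterbi requirement
      print("coding gain at BER 1e-5: %.1f dB" % (uncoded - coded))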

  1. Development and assessment of U.S. Nuclear Regulatory Commission thermal-hydraulic system computer codes

    SciTech Connect

    Shotkin, L.M.

    1996-11-01

    A review is provided of the reasons why the US Nuclear Regulatory Commission needs thermal-hydraulic system computer codes, the assumptions and approximations contained within these codes, and the reasons why test data are required to assess the accuracy of the codes. Specific examples of codes and test programs are given. The use of computer codes assessed against data from scaled test facilities to predict the full-scale plant response is discussed. A method to help focus resources and the need for quantifying code uncertainties are discussed. This paper concentrates on the loss-of-coolant accident (LOCA) because most of the analytical and experimental research has concentrated on LOCAs.

  2. Thermal-hydraulic computer code development and assessment process for ALWRs

    SciTech Connect

    Lauben, G.N.

    1994-12-31

    In September 1988, the U.S. Nuclear Regulatory Commission (NRC) issued a revised emergency core cooling system (ECCS) rule (10CFR50.46) for light water (nuclear power) reactors (LWRs) to allow the use of best-estimate computer codes in safety analysis as an option. A key feature of this option requires the licensee to quantify the uncertainty of the calculations and include that uncertainty when comparing the calculated results with acceptance limits provided in 10CFR50.46. To support the revised ECCS rule and illustrate its application, the NRC and its contractors and consultants developed and demonstrated an uncertainty evaluation methodology called code scaling, applicability, and uncertainty (CSAU). The CSAU methodology as described in NUREG/CR-5249 is the culmination of 20 yr of ECCS research on current LWR designs involving extensive iteration of experiments and analysis in which the developmental process was essentially completed. This allowed establishment of a structured top-down process of determining code capabilities and adequacy of code assessment and a bottom-up process of code sensitivity and uncertainty analysis.

  3. Development of a numerical computer code and circuit element models for simulation of firing systems

    SciTech Connect

    Carpenter, K.H. (Dept. of Electrical and Computer Engineering)

    1990-07-02

    Numerical simulation of firing systems requires both the appropriate circuit analysis framework and the special element models required by the application. We have modified the SPICE circuit analysis code (version 2G.6), developed originally at the Electronic Research Laboratory of the University of California, Berkeley, to allow it to be used on MS-DOS-based personal computers and to give it two additional circuit elements needed by firing systems--fuses and saturating inductances. An interactive editor and a batch driver have been written to ease the use of the SPICE program by system designers, and the interactive graphical post processor, NUTMEG, supplied by U.C. Berkeley with SPICE version 3B1, has been interfaced to the output from the modified SPICE. Documentation and installation aids have been provided to make the total software system accessible to PC users. Sample problems show that the resulting code is in agreement with the FIRESET code on which the fuse model was based (with some modifications to the dynamics of scaling fuse parameters). In order to allow for more complex simulations of firing systems, studies have been made of additional special circuit elements--switches and ferrite-cored inductances. A simple switch model has been investigated which promises to give at least a first approximation to the physical effects of a nonideal switch, and which can be added to the existing SPICE circuits without changing the SPICE code itself. The effect of fast-rise-time pulses on ferrites has been studied experimentally in order to provide a base for future modeling and incorporation of the dynamic effects of changes in core magnetization into the SPICE code. This report contains detailed accounts of the work on these topics performed during the period it covers, and has appendices listing all source code written and documentation produced.
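
    One plausible reading of the fuse element is a resistance switched by the action integral int(i^2 dt): the fuse stays at its cold resistance until the accumulated action reaches a burst value, then jumps to a high resistance. The single-loop circuit and every constant below are invented for illustration; the FIRESET-derived model in the report also rescales fuse parameters dynamically.

      # Time-stepped single loop (V source, series R-L, fuse) with a fuse
      # modeled as an action-integral-triggered resistance change.
      V, R, L = 100.0, 0.05, 1.0e-6      # source volts, series ohms, henries
      R_COLD, R_BURST = 0.01, 10.0       # fuse resistance before/after burst, ohms
      ACTION_BURST = 5.0                 # assumed burst action, A^2*s

      i = action = t = 0.0
      dt = 1.0e-8
      while t < 50.0e-6:
          r_fuse = R_COLD if action < ACTION_BURST else R_BURST
          didt = (V - (R + r_fuse) * i) / L      # KVL around the loop
          i += didt * dt
          action += i * i * dt
          t += dt
      print("current at 50 us: %.1f A, action: %.1f A^2*s" % (i, action))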

  4. Development of a Model and Computer Code to Describe Solar Grade Silicon Production Processes

    NASA Technical Reports Server (NTRS)

    Srivastava, R.; Gould, R. K.

    1979-01-01

    The program aims at developing mathematical models and computer codes based on these models, which allow prediction of the product distribution in chemical reactors for converting gaseous silicon compounds to condensed-phase silicon. The major interest is in collecting silicon as a liquid on the reactor walls and other collection surfaces. Two reactor systems are of major interest, a SiCl4/Na reactor in which Si(l) is collected on the flow tube reactor walls and a reactor in which Si(l) droplets formed by the SiCl4/Na reaction are collected by a jet impingement method. During this quarter the following tasks were accomplished: (1) particle deposition routines were added to the boundary layer code; and (2) Si droplet sizes in SiCl4/Na reactors at temperatures below the dew point of Si are being calculated.

  5. Development of a model and computer code to describe solar grade silicon production processes

    NASA Technical Reports Server (NTRS)

    Srivastava, R.; Gould, R. K.

    1979-01-01

    Mathematical models, and computer codes based on these models, were developed which allow prediction of the product distribution in chemical reactors in which gaseous silicon compounds are converted to condensed-phase silicon. The reactors to be modeled are flow reactors in which silane or one of the halogenated silanes is thermally decomposed or reacted with an alkali metal, H2, or H atoms. Because the product of interest is particulate silicon, processes which must be modeled, in addition to mixing and reaction of gas-phase reactants, include the nucleation and growth of condensed Si via coagulation, condensation, and heterogeneous reaction.

  6. Reeds computer code

    NASA Technical Reports Server (NTRS)

    Bjork, C.

    1981-01-01

    The REEDS (rocket exhaust effluent diffusion single layer) computer code is used for the estimation of certain rocket exhaust effluent concentrations and dosages and their distributions near the Earth's surface following a rocket launch event. Output from REEDS is used in producing near real time air quality and environmental assessments of the effects of certain potentially harmful effluents, namely HCl, Al2O3, CO, and NO.

  7. Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.; Dyson, Rodger W.

    1999-01-01

    The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time-dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems is being done to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that

  8. Seals Flow Code Development

    NASA Technical Reports Server (NTRS)

    1991-01-01

    In recognition of a deficiency in the current modeling capability for seals, an effort was established by NASA to develop verified computational fluid dynamic concepts, codes, and analyses for seals. The objectives were to develop advanced concepts for the design and analysis of seals, to effectively disseminate the information to potential users by way of annual workshops, and to provide experimental verification for the models and codes under a wide range of operating conditions.

  9. Development of a space radiation Monte Carlo computer simulation based on the FLUKA and ROOT codes.

    PubMed

    Pinsky, L S; Wilson, T L; Ferrari, A; Sala, P; Carminati, F; Brun, R

    2001-01-01

    This NASA funded project is proceeding to develop a Monte Carlo-based computer simulation of the radiation environment in space. With actual funding only initially in place at the end of May 2000, the study is still in the early stage of development. The general tasks have been identified and personnel have been selected. The code to be assembled will be based upon two major existing software packages. The radiation transport simulation will be accomplished by updating the FLUKA Monte Carlo program, and the user interface will employ the ROOT software being developed at CERN. The end-product will be a Monte Carlo-based code which will complement the existing analytic codes such as BRYNTRN/HZETRN presently used by NASA to evaluate the effects of radiation shielding in space. The planned code will possess the ability to evaluate the radiation environment for spacecraft and habitats in Earth orbit, in interplanetary space, on the lunar surface, or on a planetary surface such as Mars. Furthermore, it will be useful in the design and analysis of experiments such as ACCESS (Advanced Cosmic-ray Composition Experiment for Space Station), which is an Office of Space Science payload currently under evaluation for deployment on the International Space Station (ISS). FLUKA will be significantly improved and tailored for use in simulating space radiation in four ways. First, the additional physics not presently within the code that is necessary to simulate the problems of interest, namely the heavy ion inelastic processes, will be incorporated. Second, the internal geometry package will be replaced with one that will substantially increase the calculation speed as well as simplify the data input task. Third, default incident flux packages that include all of the different space radiation sources of interest will be included. Finally, the user interface and internal data structure will be melded together with ROOT, the object-oriented data analysis infrastructure system. Beyond

  10. Advanced Technology Airfoil Research, volume 1, part 1. [conference on development of computational codes and test facilities

    NASA Technical Reports Server (NTRS)

    1979-01-01

    A comprehensive review of all NASA airfoil research, conducted both in-house and under grant and contract, as well as a broad spectrum of airfoil research outside of NASA is presented. Emphasis is placed on the development of computational aerodynamic codes for airfoil analysis and design, the development of experimental facilities and test techniques, and all types of airfoil applications.

  11. Computer Code Aids Design Of Wings

    NASA Technical Reports Server (NTRS)

    Carlson, Harry W.; Darden, Christine M.

    1993-01-01

    AERO2S computer code developed to aid design engineers in selection and evaluation of aerodynamically efficient wing/canard and wing/horizontal-tail configurations that includes simple hinged-flap systems. Code rapidly estimates longitudinal aerodynamic characteristics of conceptual airplane lifting-surface arrangements. Developed in FORTRAN V on CDC 6000 computer system, and ported to MS-DOS environment.

  12. Propulsion stability codes for liquid propellant propulsion systems developed for use on a PC computer

    NASA Technical Reports Server (NTRS)

    Doane, George B., III; Armstrong, Wilbur C.

    1991-01-01

    Research into component modeling and system synthesis, leading to the analysis of the major types of propulsion system instabilities and the characterization of various component characteristics, is presented. Last year, several programs designed to run on a PC were developed for Marshall Space Flight Center. These codes covered the low, intermediate, and high frequency modes of oscillation of a liquid rocket propulsion system. No graphics were built into these programs, and only simple piping layouts were supported. This year's effort was to add run-time graphics to the low and intermediate frequency codes, to allow new types of piping elements (accumulators, pumps, and split pipes) in the low frequency code, and to develop a new code for the PC to generate Nyquist plots.

  13. Development of a locally mass flux conservative computer code for calculating 3-D viscous flow in turbomachines

    NASA Technical Reports Server (NTRS)

    Walitt, L.

    1982-01-01

    The VANS successive approximation numerical method was extended to the computation of three-dimensional, viscous, transonic flows in turbomachines. A cross-sectional computer code, which conserves mass flux at each point of the cross-sectional surface of computation, was developed. In the VANS numerical method, the cross-sectional computation follows a blade-to-blade calculation. Numerical calculations were made for an axial annular turbine cascade and a transonic centrifugal impeller with splitter vanes. The subsonic turbine cascade computation was performed on the blade-to-blade surface to evaluate the accuracy of the blade-to-blade mode of marching. Calculated blade pressures at the hub, mid, and tip radii of the cascade agreed with corresponding measurements. The transonic impeller computation was conducted to test the newly developed locally mass flux conservative cross-sectional computer code. Both blade-to-blade and cross-sectional modes of calculation were implemented for this problem. A triple-point shock structure was computed in the inducer region of the impeller. In addition, time-averaged shroud static pressures generally agreed with measured shroud pressures. It is concluded that the blade-to-blade computation produces a useful engineering flow field in regions of subsonic relative flow, and that cross-sectional computation, with a locally mass flux conservative continuity equation, is required to compute the shock waves in regions of supersonic relative flow.

  14. Development of Parallel Computing Framework to Enhance Radiation Transport Code Capabilities for Rare Isotope Beam Facility Design

    SciTech Connect

    Kostin, Mikhail; Mokhov, Nikolai; Niita, Koji

    2013-09-25

    A parallel computing framework has been developed for use with general-purpose radiation transport codes. The framework was implemented as a C++ module that uses MPI for message passing. It is intended to be used with older radiation transport codes implemented in Fortran 77, Fortran 90, or C. The module is largely independent of the radiation transport code it is paired with and is connected to the code by means of a number of interface functions. The framework was developed and tested in conjunction with the MARS15 code. It can be used with other codes such as PHITS, FLUKA, and MCNP after certain adjustments. Besides the parallel computing functionality, the framework offers a checkpoint facility that allows restarting calculations from a saved checkpoint file. The checkpoint facility can be used in single-process calculations as well as in the parallel regime. The framework corrects some of the known problems with scheduling and load balancing found in the original implementations of the parallel computing functionality in MARS15 and PHITS. The framework can be used efficiently on homogeneous systems and on networks of workstations, where interference from other users is possible.
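
    The pattern such a framework provides can be sketched with mpi4py: each rank runs a disjoint, decorrelated batch of histories, partial tallies are reduced to rank 0, and a periodic checkpoint allows restart. This is an illustration of the idea only; the actual module is C++ bound to Fortran codes, and the checkpoint here saves just rank 0's state for brevity.

      # Rank-decomposed Monte Carlo batches with reduction and a simple
      # checkpoint/restart, sketched with mpi4py.
      import os, pickle, random
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      CKPT, start, tally = "tally.ckpt", 0, 0.0
      if rank == 0 and os.path.exists(CKPT):
          with open(CKPT, "rb") as f:
              start, tally = pickle.load(f)        # resume from a saved run
      start = comm.bcast(start, root=0)

      rng = random.Random(1234 + rank)             # decorrelated stream per rank
      for batch in range(start, 100):
          for _ in range(10_000):                  # one "history" = one sample here
              tally += rng.random() < 0.05         # stand-in for transport physics
          if rank == 0 and batch % 10 == 9:
              with open(CKPT, "wb") as f:
                  pickle.dump((batch + 1, tally), f)

      total = comm.reduce(tally, op=MPI.SUM, root=0)
      if rank == 0:
          print("scored fraction:", total / (100 * 10_000 * size))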

  15. Industrial Computer Codes

    NASA Technical Reports Server (NTRS)

    Shapiro, Wilbur

    1996-01-01

    This is an overview of new and updated industrial codes for seal design and testing. GCYLT (gas cylindrical seals -- turbulent), SPIRALI (spiral-groove seals -- incompressible), the KTK (knife-to-knife) labyrinth seal code, and DYSEAL (dynamic seal analysis) are covered. GCYLT uses G-factors for Poiseuille and Couette turbulence coefficients. SPIRALI is updated to include turbulence and inertia but maintains the narrow-groove theory. The KTK labyrinth seal code handles straight or stepped seals. And DYSEAL provides dynamics for the seal geometry.

  16. Computer Code Generates Homotopic Grids

    NASA Technical Reports Server (NTRS)

    Moitra, Anutosh

    1992-01-01

    HOMAR is a computer code that uses a homotopic procedure to produce two-dimensional grids in cross-sectional planes; these grids are then stacked to produce quasi-three-dimensional grid systems for aerospace configurations. The program produces grids for use in both Euler and Navier-Stokes computations of flows. Written in FORTRAN 77.

  17. Computer-Access-Code Matrices

    NASA Technical Reports Server (NTRS)

    Collins, Earl R., Jr.

    1990-01-01

    Authorized users respond to changing challenges with changing passwords. Scheme for controlling access to computers defeats eavesdroppers and "hackers". Based on password system of challenge and password or sign, challenge, and countersign correlated with random alphanumeric codes in matrices of two or more dimensions. Codes stored on floppy disk or plug-in card and changed frequently. For even higher security, matrices of four or more dimensions used, just as cubes compounded into hypercubes in concurrent processing.

  18. Development, Verification and Use of Gust Modeling in the NASA Computational Fluid Dynamics Code FUN3D

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.

    2012-01-01

    This paper presents the implementation of a gust modeling capability in the CFD code FUN3D. The gust capability is verified by computing the response of an airfoil to a sharp-edged gust and comparing with the theoretical result; the present simulations are also compared with other CFD gust simulations. This paper also serves as a user's manual for FUN3D gust analyses using a variety of gust profiles. Finally, the development of an Auto-Regressive Moving-Average (ARMA) reduced-order gust model using a gust with a Gaussian profile in the FUN3D code is presented. ARMA-simulated results of a sequence of one-minus-cosine gusts are shown to compare well with the same gust profile computed with FUN3D. Proper Orthogonal Decomposition (POD) is combined with the ARMA modeling technique to predict the time-varying pressure coefficient increment distribution due to a novel gust profile. The aeroelastic response of a pitch/plunge airfoil to a gust environment is computed with a reduced-order model and compared with a direct simulation of the system in the FUN3D code. The two results are found to agree very well.
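
    The one-minus-cosine gust and the ARMA recursion are both simple to state: the gust is w(s) = (A/2)(1 - cos(2*pi*s/L)) over one gust length, and the reduced-order model is y[n] = sum_k a_k y[n-k] + sum_k b_k u[n-k]. The coefficients below are invented stable values; in the paper they are identified from FUN3D responses.

      # One-minus-cosine gust driving a generic ARMA recursion; the ARMA
      # coefficients are hypothetical, not identified from FUN3D data.
      import math

      def one_minus_cosine(t, gust_len=50.0, amp=1.0, speed=100.0):
          s = speed * t                      # distance into the gust
          if 0.0 <= s <= gust_len:
              return 0.5 * amp * (1.0 - math.cos(2.0 * math.pi * s / gust_len))
          return 0.0

      a = [1.6, -0.64]                       # AR part: double pole at z = 0.8 (stable)
      b = [0.02, 0.015]                      # MA part (hypothetical)

      dt, N = 0.001, 1000
      u = [one_minus_cosine(n * dt) for n in range(N)]
      y = [0.0] * N
      for n in range(N):
          ar = sum(a[k] * y[n - 1 - k] for k in range(len(a)) if n - 1 - k >= 0)
          ma = sum(b[k] * u[n - k] for k in range(len(b)) if n - k >= 0)
          y[n] = ar + ma
      print("peak ARMA response:", max(y))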

  19. CNSFV code development, virtual zone Navier-Stokes computations of oscillating control surfaces and computational support of the laminar flow supersonic wind tunnel

    NASA Technical Reports Server (NTRS)

    Klopfer, Goetz H.

    1993-01-01

    The work performed during the past year on this cooperative agreement covered two major areas and two lesser ones. The two major items included further development and validation of the Compressible Navier-Stokes Finite Volume (CNSFV) code and providing computational support for the Laminar Flow Supersonic Wind Tunnel (LFSWT). The two lesser items involve a Navier-Stokes simulation of an oscillating control surface at transonic speeds and improving the basic algorithm used in the CNSFV code for faster convergence rates and more robustness. The work done in all four areas is in support of the High Speed Research Program at NASA Ames Research Center.

  1. Development of one-dimensional computational fluid dynamics code 'GFLOW' for groundwater flow and contaminant transport analysis

    SciTech Connect

    Rahatgaonkar, P. S.; Datta, D.; Malhotra, P. K.; Ghadge, S. G.

    2012-07-01

    Prediction of groundwater movement and contaminant transport in soil is an important problem in many branches of science and engineering, including groundwater hydrology, environmental engineering, soil science, agricultural engineering, and nuclear engineering. In nuclear engineering it is applicable to the design of spent fuel storage pools and waste management sites at nuclear power plants. Groundwater modeling involves the simulation of flow and contaminant transport by groundwater flow. In the context of a contaminated soil and groundwater system, numerical simulations are typically used to demonstrate compliance with regulatory standards. A one-dimensional computational fluid dynamics code, GFLOW, has been developed based on the finite difference method for simulating groundwater flow and contaminant transport through saturated and unsaturated soil. The code is validated against an analytical model and benchmarking cases available in the literature. (authors)
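
    The governing equation for such a code is the one-dimensional advection-dispersion equation, dC/dt = D d2C/dx2 - v dC/dx. The sketch below discretizes it with an explicit upwind finite-difference scheme and a constant-concentration inlet; GFLOW's actual discretization, boundary handling, and saturated/unsaturated coupling are not reproduced here, and all coefficients are illustrative.

      # Explicit upwind finite-difference solution of 1-D advection-dispersion.
      n, dx, dt = 200, 0.5, 0.05      # nodes, m, days
      v, D = 1.0, 0.5                 # pore velocity m/day, dispersion m2/day
      assert v * dt / dx <= 1.0 and D * dt / dx**2 <= 0.5   # stability limits

      C = [0.0] * n
      C[0] = 1.0                      # constant-concentration source at the inlet
      for _ in range(1000):           # 50 days
          Cn = C[:]
          for i in range(1, n - 1):
              adv = -v * (C[i] - C[i - 1]) / dx               # upwind difference
              dif = D * (C[i + 1] - 2.0 * C[i] + C[i - 1]) / dx**2
              Cn[i] = C[i] + dt * (adv + dif)
          Cn[-1] = Cn[-2]             # zero-gradient outflow boundary
          C = Cn
          C[0] = 1.0
      front = next(i for i in range(n) if C[i] < 0.5) * dx
      print("C = 0.5 front near x =", front, "m")   # roughly v*t = 50 m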

  2. Using the DEWSBR computer code

    SciTech Connect

    Cable, G.D.

    1989-09-01

    A computer code is described which is designed to determine the fraction of time during which a given ground location is observable from one or more members of a satellite constellation in earth orbit. Ground visibility parameters are determined from the orientation and strength of an appropriate ionized cylinder (used to simulate a beam experiment) at the selected location. Satellite orbits are computed in a simplified two-body approximation computation. A variety of printed and graphical outputs is provided. 9 refs., 50 figs., 2 tabs.
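
    The visibility fraction such a code computes has a closed form in the simplest case: for a circular orbit passing directly over the site, a spherical non-rotating Earth, and a zero-degree elevation mask, the satellite is in view while the Earth-central angle to the site is less than arccos(Re/(Re+h)). The sketch below evaluates that bound; DEWSBR handles the general time-stepped, multi-satellite, rotating-Earth case.

      # Closed-form visibility fraction for an overhead pass of a circular
      # orbit (non-rotating Earth, 0-degree elevation mask).
      import math

      RE = 6378.0                              # Earth radius, km
      for h in (500.0, 1000.0, 20000.0):       # sample orbit altitudes, km
          lam = math.acos(RE / (RE + h))       # max central angle in view, rad
          print(f"h = {h:7.0f} km: in view {lam / math.pi:.1%} of the orbit")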

  3. Computer access security code system

    NASA Technical Reports Server (NTRS)

    Collins, Earl R., Jr. (Inventor)

    1990-01-01

    A security code system for controlling access to computer and computer-controlled entry situations comprises a plurality of subsets of alpha-numeric characters disposed in random order in matrices of at least two dimensions forming theoretical rectangles, cubes, etc., such that when access is desired, at least one pair of previously unused character subsets not found in the same row or column of the matrix is chosen at random and transmitted by the computer. The proper response to gain access is transmittal of subsets which complete the rectangle, and/or a parallelepiped whose opposite corners were defined by first groups of code. Once used, subsets are not used again to absolutely defeat unauthorized access by eavesdropping, and the like.
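
    A two-dimensional toy version of the scheme makes the mechanics concrete: the computer transmits two matrix cells as the challenge, and access requires answering with the two subsets at the opposite corners of the rectangle those cells define (used pairs would then be retired, as the abstract notes). The matrix size and subset length below are arbitrary choices for illustration.

      # Complete-the-rectangle challenge/response over a secret matrix of
      # random alphanumeric subsets (2-D toy version of the scheme).
      import random, string

      SIZE, rng = 6, random.Random()
      matrix = [[''.join(rng.choices(string.ascii_uppercase + string.digits, k=3))
                 for _ in range(SIZE)] for _ in range(SIZE)]

      def challenge():
          r1, r2 = rng.sample(range(SIZE), 2)   # distinct rows and columns so the
          c1, c2 = rng.sample(range(SIZE), 2)   # two cells span a true rectangle
          return (r1, c1), (r2, c2)

      def expected_response(cell_a, cell_b):
          (r1, c1), (r2, c2) = cell_a, cell_b
          return {matrix[r1][c2], matrix[r2][c1]}   # the two completing corners

      a, b = challenge()
      print("challenge:", matrix[a[0]][a[1]], matrix[b[0]][b[1]])
      print("accepted response:", expected_response(a, b))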

  4. Development of Computational Aeroacoustics Code for Jet Noise and Flow Prediction

    NASA Technical Reports Server (NTRS)

    Keith, Theo G., Jr.; Hixon, Duane R.

    2002-01-01

    Accurate prediction of jet fan and exhaust plume flow and noise generation and propagation is very important in developing advanced aircraft engines that will pass current and future noise regulations. In jet fan flows as well as exhaust plumes, two major sources of noise are present: large-scale, coherent instabilities and small-scale turbulent eddies. In previous work for the NASA Glenn Research Center, three strategies have been explored in an effort to computationally predict the noise radiation from supersonic jet exhaust plumes. In order from the least expensive computationally to the most expensive computationally, these are: 1) Linearized Euler equations (LEE). 2) Very Large Eddy Simulations (VLES). 3) Large Eddy Simulations (LES). The first method solves the linearized Euler equations (LEE). These equations are obtained by linearizing about a given mean flow and neglecting viscous effects. In this way, the noise from large-scale instabilities can be found for a given mean flow. The linearized Euler equations are computationally inexpensive, and have produced good noise results for supersonic jets where the large-scale instability noise dominates, as well as for the tone noise from a jet engine blade row. However, these linear equations do not predict the absolute magnitude of the noise; instead, only the relative magnitude is predicted. Also, the predicted disturbances do not modify the mean flow, removing a physical mechanism by which the amplitude of the disturbance may be controlled. Recent research on isolated airfoils indicates that this may not affect the solution greatly at low frequencies. The second method addresses some of the concerns raised by the LEE method. In this approach, called Very Large Eddy Simulation (VLES), the unsteady Reynolds averaged Navier-Stokes equations are solved directly using a high-accuracy computational aeroacoustics numerical scheme. With the addition of a two-equation turbulence model and the use of a relatively

  5. Development of Computational Aeroacoustics Code for Jet Noise and Flow Prediction

    NASA Astrophysics Data System (ADS)

    Keith, Theo G., Jr.; Hixon, Duane R.

    2002-07-01

    Accurate prediction of jet fan and exhaust plume flow and noise generation and propagation is very important in developing advanced aircraft engines that will pass current and future noise regulations. In jet fan flows as well as exhaust plumes, two major sources of noise are present: large-scale, coherent instabilities and small-scale turbulent eddies. In previous work for the NASA Glenn Research Center, three strategies have been explored in an effort to computationally predict the noise radiation from supersonic jet exhaust plumes. In order from the least expensive computationally to the most expensive computationally, these are: 1) Linearized Euler equations (LEE). 2) Very Large Eddy Simulations (VLES). 3) Large Eddy Simulations (LES). The first method solves the linearized Euler equations (LEE). These equations are obtained by linearizing about a given mean flow and neglecting viscous effects. In this way, the noise from large-scale instabilities can be found for a given mean flow. The linearized Euler equations are computationally inexpensive, and have produced good noise results for supersonic jets where the large-scale instability noise dominates, as well as for the tone noise from a jet engine blade row. However, these linear equations do not predict the absolute magnitude of the noise; instead, only the relative magnitude is predicted. Also, the predicted disturbances do not modify the mean flow, removing a physical mechanism by which the amplitude of the disturbance may be controlled. Recent research on isolated airfoils indicates that this may not affect the solution greatly at low frequencies. The second method addresses some of the concerns raised by the LEE method. In this approach, called Very Large Eddy Simulation (VLES), the unsteady Reynolds averaged Navier-Stokes equations are solved directly using a high-accuracy computational aeroacoustics numerical scheme. With the addition of a two-equation turbulence model and the use of a relatively

  6. New coding technique for computer generated holograms.

    NASA Technical Reports Server (NTRS)

    Haskell, R. E.; Culver, B. C.

    1972-01-01

    A coding technique is developed for recording computer-generated holograms on a computer-controlled CRT in which each resolution cell contains two beam spots of equal size and equal intensity. This provides a binary hologram in which only the position of the two dots is varied from cell to cell. The amplitude associated with each resolution cell is controlled by selectively diffracting unwanted light into a higher diffraction order. The recording of the holograms is fast and simple.

  7. Development of a computer code to predict a ventilation requirement for an underground radioactive waste storage tank

    SciTech Connect

    Lee, Y.J.; Dalpiaz, E.L.

    1997-08-01

    A computer code, WTVFE (Waste Tank Ventilation Flow Evaluation), has been developed to evaluate the ventilation requirement for an underground storage tank for radioactive waste. Heat generated by the radioactive waste and mixing pumps in the tank is removed mainly through the ventilation system. The heat removal process by the ventilation system includes the evaporation of water from the waste and the heat transfer by natural convection from the waste surface. Also, a portion of the heat will be removed through the soil and the air circulating through the gap between the primary and secondary tanks. The heat loss caused by evaporation is modeled based on recent evaporation test results by the Westinghouse Hanford Company using a simulated small-scale waste tank. Other heat transfer phenomena are evaluated based on well-established conduction and convection heat transfer relationships. 10 refs., 3 tabs.

  8. Seals Code Development Workshop

    NASA Technical Reports Server (NTRS)

    Hendricks, Robert C. (Compiler); Liang, Anita D. (Compiler)

    1996-01-01

    The 1995 Seals Workshop covered the industrial code (INDSEAL) release, which includes ICYL, GCYLT, IFACE, GFACE, SPIRALG, SPIRALI, DYSEAL, and KTK. The scientific code (SCISEAL) release includes conjugate heat transfer and multidomain modeling with rotordynamic capability. Several seals and bearings codes (e.g., HYDROFLEX, HYDROTRAN, HYDROB3D, FLOWCON1, FLOWCON2) are presented and their results compared. Current computational and experimental emphasis includes multiple connected cavity flows, with the goals of reducing parasitic losses and gas ingestion. Labyrinth seals continue to play a significant role in sealing, with face, honeycomb, and new sealing concepts under investigation for advanced engine concepts in view of strict environmental constraints. The clean-sheet approach to engine design is advocated, with program directions and anticipated percentage SFC reductions cited. Future activities center on engine applications with coupled seal/power/secondary flow streams.

  9. Computer design code for conical ribbon parachutes

    SciTech Connect

    Waye, D.E.

    1986-01-01

    An interactive computer design code has been developed to aid in the design of conical ribbon parachutes. The program is written to include single conical and polyconical parachute designs. The code determines the pattern length, vent diameter, radial length, ribbon top and bottom lengths, and geometric local and average porosity for the designer with inputs of constructed diameter, ribbon widths, ribbon spacings, radial width, and number of gores. The gores are designed with one mini-radial in the center with an option for the addition of two outer mini-radials. The output provides all of the dimensions necessary for the construction of the parachute. These results could also be used as input into other computer codes used to predict parachute loads.
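
    The geometric-porosity part of the calculation is plain arithmetic: for a gore built from horizontal ribbons, the open fraction is the gap area over the total gore area, which for uniform ribbons and spacings reduces to the ratio along the radial. The dimensions below are invented, and the flat-gore approximation ignores the vent, mini-radials, and conical geometry.

      # Average geometric porosity of a ribbon gore from ribbon widths and
      # spacings (uniform ribbons, flat-gore approximation, invented numbers).
      ribbon_w = 0.05       # ribbon width, m
      spacing = 0.01        # gap between adjacent ribbons, m
      n_ribbons = 20

      radial_len = n_ribbons * ribbon_w + (n_ribbons - 1) * spacing
      porosity = (n_ribbons - 1) * spacing / radial_len   # open area / total area
      print("radial length: %.2f m" % radial_len)
      print("average geometric porosity: %.1f%%" % (100.0 * porosity))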

  10. Development and validation of burnup dependent computational schemes for the analysis of assemblies with advanced lattice codes

    NASA Astrophysics Data System (ADS)

    Ramamoorthy, Karthikeyan

    The main aim of this research is the development and validation of computational schemes for advanced lattice codes. The advanced lattice code which forms the primary part of this research is "DRAGON Version4". The code has unique features like self-shielding calculation with capabilities to represent distributed and mutual resonance shielding effects, leakage models with space-dependent isotropic or anisotropic streaming effect, availability of the method of characteristics (MOC), burnup calculation with reaction-detailed energy production, etc. Qualified reactor physics codes are essential for the study of all existing and envisaged designs of nuclear reactors. Any new design would require a thorough analysis of all the safety parameters and burnup-dependent behaviour. Any reactor physics calculation requires the estimation of neutron fluxes in various regions of the problem domain. The calculation goes through several levels before the desired solution is obtained. Each level of the lattice calculation has its own significance, and any compromise at any step will lead to a poor final result. The various levels include: choice of the nuclear data library and the energy group boundaries into which the multigroup library is cast; self-shielding of nuclear data depending on the heterogeneous geometry and composition; tracking of the geometry, keeping error in volume and surface to an acceptable minimum; generation of regionwise and groupwise collision probabilities or MOC-related information and their subsequent normalization; solution of the transport equation using the previously generated groupwise information to obtain the fluxes and reaction rates in various regions of the lattice; and depletion of fuel and other materials based on normalization with constant power or constant flux. Of the above-mentioned levels, the present research will mainly focus on two aspects, namely self-shielding and depletion. The behaviour of the system is determined by composition of resonant

  11. An Object-Oriented Approach to Writing Computational Electromagnetics Codes

    NASA Technical Reports Server (NTRS)

    Zimmerman, Martin; Mallasch, Paul G.

    1996-01-01

    Presently, most computer software development in the Computational Electromagnetics (CEM) community employs the structured programming paradigm, particularly using the Fortran language. Other segments of the software community began switching to an Object-Oriented Programming (OOP) paradigm in recent years to help ease design and development of highly complex codes. This paper examines design of a time-domain numerical analysis CEM code using the OOP paradigm, comparing OOP code and structured programming code in terms of software maintenance, portability, flexibility, and speed.
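
    To make the comparison concrete, here is a minimal sketch (ours, not from the paper) of how a time-domain CEM update can be encapsulated behind a class interface; the Fdtd1D class, its fields, and the source are hypothetical illustrations of the OOP style discussed:

        # A toy 1-D FDTD solver written in an object-oriented style: grid
        # state lives inside the class, and the update rule is a method.
        import numpy as np

        class Fdtd1D:
            def __init__(self, n_cells, courant=0.5):
                self.ez = np.zeros(n_cells)      # electric field at cell centers
                self.hy = np.zeros(n_cells - 1)  # magnetic field at cell edges
                self.courant = courant           # normalized Courant number

            def step(self):
                """Advance both fields one time step (free space, crude boundaries)."""
                self.hy += self.courant * np.diff(self.ez)
                self.ez[1:-1] += self.courant * np.diff(self.hy)

            def inject(self, cell, value):
                self.ez[cell] += value           # soft source

        sim = Fdtd1D(200)
        for t in range(100):
            sim.inject(100, np.exp(-0.5 * ((t - 30) / 10.0) ** 2))  # Gaussian pulse
            sim.step()
        print(float(sim.ez.max()))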

  12. User Instructions for the Systems Assessment Capability, Rev. 1, Computer Codes Volume 3: Utility Codes

    SciTech Connect

    Eslinger, Paul W.; Aaberg, Rosanne L.; Lopresti, Charles A.; Miley, Terri B.; Nichols, William E.; Strenge, Dennis L.

    2004-09-14

    This document contains detailed user instructions for the suite of utility codes developed for Rev. 1 of the Systems Assessment Capability, which performs many supporting functions.

  13. FIDDLE: A Computer Code for Finite Difference Development of Linear Elasticity in Generalized Curvilinear Coordinates

    NASA Technical Reports Server (NTRS)

    Kaul, Upender K.

    2005-01-01

    A three-dimensional numerical solver based on finite-difference solution of three-dimensional elastodynamic equations in generalized curvilinear coordinates has been developed and used to generate data such as radial and tangential stresses over various gear component geometries under rotation. The geometries considered are an annulus, a thin annular disk, and a thin solid disk. The solution is based on first principles and does not involve lumped parameter or distributed parameter systems approach. The elastodynamic equations in the velocity-stress formulation that are considered here have been used in the solution of problems of geophysics where non-rotating Cartesian grids are considered. For arbitrary geometries, these equations along with the appropriate boundary conditions have been cast in generalized curvilinear coordinates in the present study.

  14. Computer Code for Nanostructure Simulation

    NASA Technical Reports Server (NTRS)

    Filikhin, Igor; Vlahovic, Branislav

    2009-01-01

    Due to their small size, nanostructures can have stress and thermal gradients that are larger than any macroscopic analogue. These gradients can lead to specific regions that are susceptible to failure via processes such as plastic deformation by dislocation emission, chemical debonding, and interfacial alloying. A program has been developed that rigorously simulates and predicts optoelectronic properties of nanostructures of virtually any geometrical complexity and material composition. It can be used in simulations of energy level structure, wave functions, density of states of spatially configured phonon-coupled electrons, excitons in quantum dots, quantum rings, quantum ring complexes, and more. The code can be used to calculate stress distributions and thermal transport properties for a variety of nanostructures and interfaces, transport and scattering at nanoscale interfaces and surfaces under various stress states, and alloy compositional gradients. The code allows users to perform modeling of charge transport processes through quantum-dot (QD) arrays as functions of inter-dot distance, array order versus disorder, QD orientation, shape, size, and chemical composition for applications in photovoltaics and physical properties of QD-based biochemical sensors. The code can be used to study the hot exciton formation/relaxation dynamics in arrays of QDs of different shapes and sizes at different temperatures. It also can be used to understand the relationship among the deposition parameters and inherent stresses, strain deformation, heat flow, and failure of nanostructures.

  15. Present state of the SOURCES computer code

    SciTech Connect

    Shores, E. F.

    2002-01-01

    In various stages of development for over two decades, the SOURCES computer code continues to calculate neutron production rates and spectra from four types of problems: homogeneous media, two-region interfaces, three-region interfaces and that of a monoenergetic alpha particle beam incident on a slab of target material. Graduate work at the University of Missouri - Rolla, in addition to user feedback from a tutorial course, provided the impetus for a variety of code improvements. Recently upgraded to version 4B, initial modifications to SOURCES focused on updates to the 'tape5' decay data library. Shortly thereafter, efforts focused on development of a graphical user interface for the code. This paper documents the Los Alamos SOURCES Tape1 Creator and Library Link (LASTCALL) and describes additional library modifications in more detail. Minor improvements and planned enhancements are discussed.

  17. Probabilistic structural analysis computer code (NESSUS)

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.

    1988-01-01

    Probabilistic structural analysis has been developed to analyze the effects of fluctuating loads, variable material properties, and uncertain analytical models, especially for high-performance structures such as SSME turbopump blades. The computer code NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) was developed to serve as a primary computational tool for characterizing, by statistical description, the probabilistic structural response to stochastic environments. The code consists of three major modules: NESSUS/PRE, NESSUS/FEM, and NESSUS/FPI. NESSUS/PRE is a preprocessor which decomposes the spatially correlated random variables into a set of uncorrelated random variables using a modal analysis method. NESSUS/FEM is a finite element module which provides structural sensitivities to all the random variables considered. NESSUS/FPI is a Fast Probability Integration module by which a cumulative distribution function or a probability density function is calculated.

  18. H⁰ precessor computer code

    SciTech Connect

    van Dyck, O.B.; Floyd, R.A.

    1981-05-01

    A spin precessor using H⁻ to H⁰ stripping, followed by small precession magnets, has been developed for the LAMPF 800-MeV polarized H⁻ beam. The performance of the system was studied with the computer code documented in this report. The report starts from the fundamental physics of a system of spins with hyperfine coupling in a magnetic field and contains many examples of beam behavior as calculated by the program.

  19. Computer-Based Coding of Occupation Codes for Epidemiological Analyses.

    PubMed

    Russ, Daniel E; Ho, Kwan-Yuet; Johnson, Calvin A; Friesen, Melissa C

    2014-05-01

    Mapping job titles to standardized occupation classification (SOC) codes is an important step in evaluating changes in health risks over time as measured in inspection databases. However, manual SOC coding is cost-prohibitive for very large studies. Computer-based SOC coding systems can improve the efficiency of incorporating occupational risk factors into large-scale epidemiological studies. We present a novel method of mapping verbatim job titles to SOC codes using a large table of prior knowledge available in the public domain that included detailed descriptions of the tasks and activities and their synonyms relevant to each SOC code. Job titles are compared to our knowledge base to find the closest matching SOC code. A soft Jaccard index is used to measure the similarity between a previously unseen job title and the knowledge base. Additional information, such as standardized industrial codes, can be incorporated to improve the SOC code determination by providing additional context to break ties in matches. PMID:25221787
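
    The abstract does not give the exact formula, but one plausible reading of a "soft" Jaccard index replaces the exact token matches of the ordinary Jaccard index with best near-matches between tokens; the sketch below is that illustrative reading, with a hypothetical cutoff parameter, not the authors' implementation:

        # Illustrative soft Jaccard similarity between a job title and a
        # knowledge-base phrase: tokens earn partial credit for near-matches
        # above a cutoff.
        from difflib import SequenceMatcher

        def token_sim(a, b):
            return SequenceMatcher(None, a, b).ratio()

        def soft_jaccard(title, phrase, cutoff=0.8):
            s, t = set(title.lower().split()), set(phrase.lower().split())
            if not s or not t:
                return 0.0
            best = [max(token_sim(a, b) for b in t) for a in s]
            soft_overlap = sum(x for x in best if x >= cutoff)
            return soft_overlap / len(s | t)

        print(soft_jaccard("registered nurse", "nurse registered rn"))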

  20. GMRES acceleration of computational fluid dynamics codes

    NASA Technical Reports Server (NTRS)

    Wigton, L. B.; Yu, N. J.; Young, D. P.

    1985-01-01

    The generalized minimal residual algorithm (GMRES) is a conjugate-gradient-like method that applies directly to nonsymmetric linear systems of equations. In this paper, GMRES is modified to handle nonlinear equations characteristic of computational fluid dynamics. Attention is devoted to the concept of preconditioning and the role it plays in assuring rapid convergence. A formulation is developed that allows GMRES to be preconditioned by the solution procedures already built into existing computer codes. Examples are provided that demonstrate the ability of GMRES to greatly improve the robustness and rate of convergence of current state-of-the-art fluid dynamics codes. Theoretical aspects of GMRES are presented that explain why it works. Finally, the advantages GMRES enjoys over related methods such as conjugate gradients are discussed.
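
    In modern terms, the idea of preconditioning GMRES with an existing solver can be sketched as follows; this is our illustration, with an incomplete LU factorization standing in for a flow code's built-in iteration, applied to a toy nonsymmetric operator:

        # GMRES on a nonsymmetric system, preconditioned by an approximate
        # inverse (here ILU). Matrix, sizes, and drop tolerance are illustrative.
        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        n = 500
        A = sp.diags([-1.2, 2.5, -0.8], [-1, 0, 1], shape=(n, n), format="csc")
        b = np.ones(n)

        ilu = spla.spilu(A, drop_tol=1e-4)                 # approximate factorization
        M = spla.LinearOperator((n, n), matvec=ilu.solve)  # preconditioner M ~ A^-1

        x, info = spla.gmres(A, b, M=M, restart=30)        # info == 0 means converged
        print(info, np.linalg.norm(A @ x - b))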

  1. Lawrence Livermore National Laboratories Perspective on Code Development and High Performance Computing Resources in Support of the National HED/ICF Effort

    SciTech Connect

    Clouse, C. J.; Edwards, M. J.; McCoy, M. G.; Marinak, M. M.; Verdon, C. P.

    2015-07-07

    Through its Advanced Scientific Computing (ASC) and Inertial Confinement Fusion (ICF) code development efforts, Lawrence Livermore National Laboratory (LLNL) provides a world-leading numerical simulation capability for the National HED/ICF program in support of the Stockpile Stewardship Program (SSP). In addition, the ASC effort provides the high performance computing platform capabilities upon which these codes are run. LLNL remains committed to, and will work with, the national HED/ICF program community to help ensure numerical simulation needs are met and to make those capabilities available, consistent with programmatic priorities and available resources.

  2. Thermal Hydraulic Computer Code System.

    1999-07-16

    Version 00 RELAP5 was developed to describe the behavior of a light water reactor (LWR) subjected to postulated transients such as loss of coolant from large or small pipe breaks, pump failures, etc. RELAP5 calculates fluid conditions such as velocities, pressures, densities, qualities, temperatures; thermal conditions such as surface temperatures, temperature distributions, heat fluxes; pump conditions; trip conditions; reactor power and reactivity from point reactor kinetics; and control system variables. In addition to reactor applications, the program can be applied to transient analysis of other thermal-hydraulic systems with water as the fluid. This package contains RELAP5/MOD1/029 for CDC computers and RELAP5/MOD1/025 for VAX or IBM mainframe computers.

  3. Development of a computer code for calculating the steady super/hypersonic inviscid flow around real configurations. Volume 2: Code description

    NASA Technical Reports Server (NTRS)

    Marconi, F.; Yaeger, L.

    1976-01-01

    A numerical procedure was developed to compute the inviscid super/hypersonic flow field about complex vehicle geometries accurately and efficiently. A second-order accurate finite difference scheme is used to integrate the three-dimensional Euler equations in regions of continuous flow, while all shock waves are computed as discontinuities via the Rankine-Hugoniot jump conditions. Conformal mappings are used to develop a computational grid. The effects of blunt nose entropy layers are computed in detail. Real gas effects for equilibrium air are included using curve fits of Mollier charts. Typical calculated results for shuttle orbiter, hypersonic transport, and supersonic aircraft configurations are included to demonstrate the usefulness of this tool.

  4. Computing Challenges in Coded Mask Imaging

    NASA Technical Reports Server (NTRS)

    Skinner, Gerald

    2009-01-01

    This slide presentation reviews the complications and challenges in developing computer systems for coded mask imaging telescopes. The coded mask technique is used when there is no other way to create the telescope (i.e., when wide fields of view, energies too high for focusing optics or too low for Compton/tracker techniques, and very good angular resolution are required). The coded mask telescope is described, and the mask is reviewed. The coded masks for the INTErnational Gamma-Ray Astrophysics Laboratory (INTEGRAL) instruments are shown, and a chart showing the types of position-sensitive detectors used for coded mask telescopes is also reviewed. Slides describe the mechanism of recovering an image from the masked pattern. The correlation with the mask pattern is described. The matrix approach is reviewed, and other approaches to image reconstruction are described. Included in the presentation is a review of the Energetic X-ray Imaging Survey Telescope (EXIST) / High Energy Telescope (HET), with information about the mission, the operation of the telescope, comparison of the EXIST/HET with the SWIFT/BAT, and details of the design of the EXIST/HET.
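
    The correlation step mentioned above can be illustrated with a toy 1-D model (ours, not from the presentation): the detector records the sky seen through shifted copies of the mask, and correlating the counts with a balanced decoding array recovers a point source.

        # Toy correlation-based reconstruction for a coded mask instrument.
        # A random 1-D mask stands in for a real URA-style pattern.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 64
        mask = rng.integers(0, 2, n)   # open (1) / closed (0) elements
        decoder = 2 * mask - 1         # balanced decoding array (+1 / -1)

        sky = np.zeros(n)
        sky[20] = 100.0                # a single point source

        # Detector counts under a cyclic shift model of the mask shadow.
        detector = np.array([np.dot(np.roll(mask, k), sky) for k in range(n)])

        # Cross-correlate with the decoding array to reconstruct the sky map.
        recon = np.array([np.dot(np.roll(decoder, k), detector) for k in range(n)])
        print(int(recon.argmax()))     # peaks at the source position (20)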

  5. Overview of CODE V development

    NASA Astrophysics Data System (ADS)

    Harris, Thomas I.

    1991-01-01

    This paper is part of a session aimed at briefly describing some of today's optical design software packages, with emphasis on each program's philosophy and technology. CODE V is the ongoing result of a development process that began in the 1960's; it is now the result of many people's efforts. This paper summarizes the roots of the program, some of its history, and the dominant philosophies and technologies that have contributed to its usefulness, including some that drive its continued development. ROOTS OF CODE V: Conceived in the early 60's, this was at a time when there was skepticism that "automatic design" could design lenses equal to or better than "hand" methods. The concepts underlying CODE V and its predecessors were based on ten years of experience and exposure to the problems of a group of lens designers in a design-for-manufacture environment. The basic challenge was to show that lens design could be done better, easier, and faster by high-quality computer-assisted design tools. The earliest development was for our own use as an engineering services organization - an in-house tool for custom design. As a tool it had to make us efficient in providing lens design and engineering services as a self-sustaining business. PHILOSOPHY OF OPTIMIZATION IN CODE V: Error function formation. Based on experience as a designer, we felt very strongly that there should be a clear separation of

  6. Volume accumulator design analysis computer codes

    NASA Technical Reports Server (NTRS)

    Whitaker, W. D.; Shimazaki, T. T.

    1973-01-01

    The computer codes, VANEP and VANES, were written and used to aid in the design and performance calculation of the volume accumulator units (VAU) for the 5-kwe reactor thermoelectric system. VANEP computes the VAU design which meets the primary coolant loop VAU volume and pressure performance requirements. VANES computes the performance of the VAU design, determined from the VANEP code, at the conditions of the secondary coolant loop. The codes can also compute the performance characteristics of the VAU's under conditions of possible modes of failure which still permit continued system operation.

  7. MAGNUM-2D computer code: user's guide

    SciTech Connect

    England, R.L.; Kline, N.W.; Ekblad, K.J.; Baca, R.G.

    1985-01-01

    Information relevant to the general use of the MAGNUM-2D computer code is presented. This computer code was developed for the purpose of modeling (i.e., simulating) the thermal and hydraulic conditions in the vicinity of a waste package emplaced in a deep geologic repository. The MAGNUM-2D computer code computes (1) the temperature field surrounding the waste package as a function of the heat generation rate of the nuclear waste and thermal properties of the basalt and (2) the hydraulic head distribution and associated groundwater flow fields as a function of the temperature gradients and hydraulic properties of the basalt. MAGNUM-2D is a two-dimensional numerical model for transient or steady-state analysis of coupled heat transfer and groundwater flow in a fractured porous medium. The governing equations consist of a set of coupled, quasi-linear partial differential equations that are solved using a Galerkin finite-element technique. A Newton-Raphson algorithm is embedded in the Galerkin functional to formulate the problem in terms of the incremental changes in the dependent variables. Both triangular and quadrilateral finite elements are used to represent the continuum portions of the spatial domain. Line elements may be used to represent discrete conduits. 18 refs., 4 figs., 1 tab.

  8. electromagnetics, eddy current, computer codes

    2002-03-12

    TORO Version 4 is designed for finite element analysis of steady, transient and time-harmonic, multi-dimensional, quasi-static problems in electromagnetics. The code allows simulation of electrostatic fields, steady current flows, magnetostatics and eddy current problems in plane or axisymmetric, two-dimensional geometries. TORO is easily coupled to heat conduction and solid mechanics codes to allow multi-physics simulations to be performed.

  9. Network Coding for Function Computation

    ERIC Educational Resources Information Center

    Appuswamy, Rathinakumar

    2011-01-01

    In this dissertation, the following "network computing problem" is considered. Source nodes in a directed acyclic network generate independent messages and a single receiver node computes a target function f of the messages. The objective is to maximize the average number of times f can be computed per network usage, i.e., the "computing…

  10. PREWATE: An interactive preprocessing computer code to the Weight Analysis of Turbine Engines (WATE) computer code

    NASA Technical Reports Server (NTRS)

    Fishbach, L. H.

    1983-01-01

    The Weight Analysis of Turbine Engines (WATE) computer code was developed by Boeing under contract to NASA Lewis. It was designed to function as an adjunct to the Navy/NASA Engine Program (NNEP). NNEP calculates the design and off-design thrust and sfc performance of user-defined engine cycles. The thermodynamic parameters throughout the engine as generated by NNEP are then combined with input parameters defining the component characteristics in WATE to calculate the bare engine weight of this user-defined engine. Preprocessor programs for NNEP were previously developed to simplify the task of creating input datasets. This report describes a similar preprocessor for the WATE code.

  11. Development of a Space Radiation Monte-Carlo Computer Simulation Based on the FLUKA and ROOT Codes

    NASA Technical Reports Server (NTRS)

    Pinsky, L. S.; Wilson, T. L.; Ferrari, A.; Sala, Paola; Carminati, F.; Brun, R.

    2001-01-01

    The radiation environment in space is a complex problem to model. Trying to extrapolate the projections of that environment into all areas of the internal spacecraft geometry is even more daunting. With the support of our CERN colleagues, our research group in Houston is embarking on a project to develop a radiation transport tool that is tailored to the problem of taking the external radiation flux incident on any particular spacecraft and simulating the evolution of that flux through a geometrically accurate model of the spacecraft material. The output will be a prediction of the detailed nature of the resulting internal radiation environment within the spacecraft as well as its secondary albedo. Beyond doing the physics transport of the incident flux, the software tool we are developing will provide a self-contained stand-alone object-oriented analysis and visualization infrastructure. It will also include a graphical user interface and a set of input tools to facilitate the simulation of space missions in terms of nominal radiation models and mission trajectory profiles. The goal of this project is to produce a code that is considerably more accurate and user-friendly than existing Monte-Carlo-based tools for the evaluation of the space radiation environment. Furthermore, the code will be an essential complement to the currently existing analytic codes in the BRYNTRN/HZETRN family for the evaluation of radiation shielding. The code will be directly applicable to the simulation of environments in low earth orbit, on the lunar surface, on planetary surfaces (including the Earth) and in the interplanetary medium such as on a transit to Mars (and even in the interstellar medium). The software will include modules whose underlying physics base can continue to be enhanced and updated for physics content, as future data become available beyond the timeframe of the initial development now foreseen. This future maintenance will be available from the authors of FLUKA as

  12. Industrial Code Development

    NASA Technical Reports Server (NTRS)

    Shapiro, Wilbur

    1991-01-01

    The industrial codes will consist of modules of 2-D and simplified 2-D or 1-D codes, intended for expeditious parametric studies, analysis, and design of a wide variety of seals. Integration into a unified system is accomplished by the industrial Knowledge Based System (KBS), which will also provide user-friendly interaction, context-sensitive and hypertext help, design guidance, and an expandable database. The types of analysis to be included with the industrial codes are interfacial performance (leakage, load, stiffness, friction losses, etc.), thermoelastic distortions, and dynamic response to rotor excursions. The first three codes to be completed and which are presently being incorporated into the KBS are the incompressible cylindrical code, ICYL, and the compressible cylindrical code, GCYL.

  13. Development of a computer code for calculating the steady super/hypersonic inviscid flow around real configurations. Volume 1: Computational technique

    NASA Technical Reports Server (NTRS)

    Marconi, F.; Salas, M.; Yaeger, L.

    1976-01-01

    A numerical procedure has been developed to compute the inviscid super/hypersonic flow field about complex vehicle geometries accurately and efficiently. A second-order accurate finite difference scheme is used to integrate the three-dimensional Euler equations in regions of continuous flow, while all shock waves are computed as discontinuities via the Rankine-Hugoniot jump conditions. Conformal mappings are used to develop a computational grid. The effects of blunt nose entropy layers are computed in detail. Real gas effects for equilibrium air are included using curve fits of Mollier charts. Typical calculated results for shuttle orbiter, hypersonic transport, and supersonic aircraft configurations are included to demonstrate the usefulness of this tool.
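
    Since the procedure treats shocks via the Rankine-Hugoniot jump conditions, a worked example helps: for a perfect gas, the normal-shock relations below give the post-shock state from the upstream Mach number. These are the standard textbook relations, not an excerpt of the Marconi code, which also handles equilibrium real-gas air:

        # Normal-shock Rankine-Hugoniot relations for a perfect gas.
        def normal_shock(M1, gamma=1.4):
            """Return (p2/p1, rho2/rho1, M2) across a normal shock at Mach M1 > 1."""
            p_ratio = 1.0 + 2.0 * gamma / (gamma + 1.0) * (M1**2 - 1.0)
            rho_ratio = (gamma + 1.0) * M1**2 / ((gamma - 1.0) * M1**2 + 2.0)
            M2 = ((1.0 + 0.5 * (gamma - 1.0) * M1**2)
                  / (gamma * M1**2 - 0.5 * (gamma - 1.0))) ** 0.5
            return p_ratio, rho_ratio, M2

        print(normal_shock(3.0))  # ~ (10.33, 3.857, 0.475)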

  14. Para: a computer simulation code for plasma driven electromagnetic launchers

    SciTech Connect

    Thio, Y.-C.

    1983-03-01

    A computer code for simulation of rail-type accelerators utilizing a plasma armature has been developed and is described in detail. Some time varying properties of the plasma are taken into account in this code thus allowing the development of a dynamical model of the behavior of a plasma in a rail-type electromagnetic launcher. The code is being successfully used to predict and analyse experiments on small calibre rail-gun launchers.

  15. Talking about Code: Integrating Pedagogical Code Reviews into Early Computing Courses

    ERIC Educational Resources Information Center

    Hundhausen, Christopher D.; Agrawal, Anukrati; Agarwal, Pawan

    2013-01-01

    Given the increasing importance of soft skills in the computing profession, there is good reason to provide students with more opportunities to learn and practice those skills in undergraduate computing courses. Toward that end, we have developed an active learning approach for computing education called the "Pedagogical Code Review"…

  16. TAIR- TRANSONIC AIRFOIL ANALYSIS COMPUTER CODE

    NASA Technical Reports Server (NTRS)

    Dougherty, F. C.

    1994-01-01

    The Transonic Airfoil analysis computer code, TAIR, was developed to employ a fast, fully implicit algorithm to solve the conservative full-potential equation for the steady transonic flow field about an arbitrary airfoil immersed in a subsonic free stream. The full-potential formulation is considered exact under the assumptions of irrotational, isentropic, and inviscid flow. These assumptions are valid for a wide range of practical transonic flows typical of modern aircraft cruise conditions. The primary features of TAIR include: a new fully implicit iteration scheme which is typically many times faster than classical successive line overrelaxation algorithms; a new, reliable artificial density spatial differencing scheme treating the conservative form of the full-potential equation; and a numerical mapping procedure capable of generating curvilinear, body-fitted finite-difference grids about arbitrary airfoil geometries. Three aspects emphasized during the development of the TAIR code were reliability, simplicity, and speed. The reliability of TAIR comes from two sources: the new algorithm employed and the implementation of effective convergence monitoring logic. TAIR achieves ease of use by employing a "default mode" that greatly simplifies code operation, especially by inexperienced users, and by providing many useful options, including several airfoil-geometry input options, flexible user controls over program output, and a multiple solution capability. The speed of the TAIR code is attributed to the new algorithm and the manner in which it has been implemented. Input to the TAIR program consists of airfoil coordinates, aerodynamic and flow-field convergence parameters, and geometric and grid convergence parameters. The airfoil coordinates for many airfoil shapes can be generated in TAIR from just a few input parameters. Most of the other input parameters have default values which allow the user to run an analysis in the default mode by specifying only a few input parameters.

  17. ICAN Computer Code Adapted for Building Materials

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.

    1997-01-01

    The NASA Lewis Research Center has been involved in developing composite micromechanics and macromechanics theories over the last three decades. These activities have resulted in several composite mechanics theories and structural analysis codes whose applications range from material behavior design and analysis to structural component response. One of these computer codes, the Integrated Composite Analyzer (ICAN), is designed primarily to address issues related to designing polymer matrix composites and predicting their properties - including hygral, thermal, and mechanical load effects. Recently, under a cost-sharing cooperative agreement with a Fortune 500 corporation, Master Builders Inc., ICAN was adapted to analyze building materials. The high costs and technical difficulties involved with the fabrication of continuous-fiber-reinforced composites sometimes limit their use. Particulate-reinforced composites can be thought of as a viable alternative. They are as easily processed to near-net shape as monolithic materials, yet have the improved stiffness, strength, and fracture toughness that is characteristic of continuous-fiber-reinforced composites. For example, particle-reinforced metal-matrix composites show great potential for a variety of automotive applications, such as disk brake rotors, connecting rods, cylinder liners, and other high-temperature applications. Building materials, such as concrete, can be thought of as one of the oldest materials in this category of multiphase, particle-reinforced materials. The adaptation of ICAN to analyze particle-reinforced composite materials involved the development of new micromechanics-based theories. A derivative of the ICAN code, ICAN/PART, was developed and delivered to Master Builders Inc. as a part of the cooperative activity.

  18. Independent peer review of nuclear safety computer codes

    SciTech Connect

    Boyack, B.E.; Jenks, R.P.

    1993-02-01

    A structured process of independent computer code peer review has been developed to assist the US Nuclear Regulatory Commission (NRC) and the US Department of Energy in their nuclear safety missions. This paper focuses on the process that evolved during recent reviews of NRC codes.

  19. Secure Computation from Random Error Correcting Codes

    NASA Astrophysics Data System (ADS)

    Chen, Hao; Cramer, Ronald; Goldwasser, Shafi; de Haan, Robbert; Vaikuntanathan, Vinod

    Secure computation consists of protocols for secure arithmetic: secret values are added and multiplied securely by networked processors. The striking feature of secure computation is that security is maintained even in the presence of an adversary who corrupts a quorum of the processors and who exercises full, malicious control over them. One of the fundamental primitives at the heart of secure computation is secret-sharing. Typically, the required secret-sharing techniques build on Shamir's scheme, which can be viewed as a cryptographic twist on the Reed-Solomon error correcting code. In this work we further the connections between secure computation and error correcting codes. We demonstrate that threshold secure computation in the secure channels model can be based on arbitrary codes. For a network of size n, we then show a reduction in communication for secure computation amounting to a multiplicative logarithmic factor (in n) compared to classical methods for small, e.g., constant size fields, while tolerating t < (1/2 - ε)n players to be corrupted, where ε > 0 can be arbitrarily small. For large networks this implies considerable savings in communication. Our results hold in the broadcast/negligible error model of Rabin and Ben-Or, and complement results from CRYPTO 2006 for the zero-error model of Ben-Or, Goldwasser and Wigderson (BGW). Our general theory can be extended so as to encompass those results from CRYPTO 2006 as well. We also present a new method for constructing high information rate ramp schemes based on arbitrary codes, and in particular we give a new construction based on algebraic geometry codes.
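
    For readers unfamiliar with the primitive, a compact sketch of Shamir's scheme over a prime field (the Reed-Solomon-flavored building block the abstract starts from) looks like this; the prime and the 3-of-5 threshold are illustrative choices of ours:

        # Shamir secret sharing over GF(P): a random degree-(t-1) polynomial
        # with the secret as constant term; any t points reconstruct it by
        # Lagrange interpolation at x = 0.
        import random

        P = 2**61 - 1  # a Mersenne prime used as the field modulus

        def share(secret, t, n):
            coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
            return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
                    for x in range(1, n + 1)]

        def reconstruct(shares):
            total = 0
            for i, (xi, yi) in enumerate(shares):
                num, den = 1, 1
                for j, (xj, _) in enumerate(shares):
                    if i != j:
                        num = num * (-xj) % P
                        den = den * (xi - xj) % P
                total = (total + yi * num * pow(den, P - 2, P)) % P
            return total

        shares = share(secret=123456789, t=3, n=5)
        print(reconstruct(shares[:3]))  # any 3 of the 5 shares recover the secret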

  20. Multitasking the code ARC3D. [for computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Barton, John T.; Hsiung, Christopher C.

    1986-01-01

    The CRAY multitasking system was developed in order to utilize all four processors and sharply reduce the wall clock run time. This paper describes the techniques used to modify the computational fluid dynamics code ARC3D for this run and analyzes the achieved speedup. The ARC3D code solves either the Euler or thin-layer N-S equations using an implicit approximate factorization scheme. Results indicate that multitask processing can be used to achieve wall clock speedup factors of over three times, depending on the nature of the program code being used. Multitasking appears to be particularly advantageous for large-memory problems running on multiple CPU computers.

  1. Thermoelectric pump performance analysis computer code

    NASA Technical Reports Server (NTRS)

    Johnson, J. L.

    1973-01-01

    A computer program is presented that was used to analyze and design dual-throat electromagnetic dc conduction pumps for the 5-kwe ZrH reactor thermoelectric system. In addition to a listing of the code and corresponding identification of symbols, the bases for this analytical model are provided.

  2. COLD-SAT Dynamic Model Computer Code

    NASA Technical Reports Server (NTRS)

    Bollenbacher, G.; Adams, N. S.

    1995-01-01

    COLD-SAT Dynamic Model (CSDM) computer code implements six-degree-of-freedom, rigid-body mathematical model for simulation of spacecraft in orbit around Earth. Investigates flow dynamics and thermodynamics of subcritical cryogenic fluids in microgravity. Consists of three parts: translation model, rotation model, and slosh model. Written in FORTRAN 77.

  3. RESRAD-CHEM: A computer code for chemical risk assessment

    SciTech Connect

    Cheng, J.J.; Yu, C.; Hartmann, H.M.; Jones, L.G.; Biwer, B.M.; Dovel, E.S.

    1993-10-01

    RESRAD-CHEM is a computer code developed at Argonne National Laboratory for the U.S. Department of Energy to evaluate chemically contaminated sites. The code is designed to predict human health risks from multipathway exposure to hazardous chemicals and to derive cleanup criteria for chemically contaminated soils. The method used in RESRAD-CHEM is based on the pathway analysis method in the RESRAD code and follows the U.S. Environmental Protection Agency's (EPA's) guidance on chemical risk assessment. RESRAD-CHEM can be used to evaluate a chemically contaminated site and, in conjunction with the use of the RESRAD code, a mixed waste site.

  4. NASA Space Radiation Transport Code Development Consortium.

    PubMed

    Townsend, Lawrence W

    2005-01-01

    Recently, NASA established a consortium involving the University of Tennessee (lead institution), the University of Houston, Roanoke College and various government and national laboratories, to accelerate the development of a standard set of radiation transport computer codes for NASA human exploration applications. This effort involves further improvements of the Monte Carlo codes HETC and FLUKA and the deterministic code HZETRN, including developing nuclear reaction databases necessary to extend the Monte Carlo codes to carry out heavy ion transport, and extending HZETRN to three dimensions. The improved codes will be validated by comparing predictions with measured laboratory transport data, provided by an experimental measurements consortium, and measurements in the upper atmosphere on the balloon-borne Deep Space Test Bed (DSTB). In this paper, we present an overview of the consortium members and the current status and future plans of consortium efforts to meet the research goals and objectives of this extensive undertaking.

  5. User's manual for HDR3 computer code

    SciTech Connect

    Arundale, C.J.

    1982-10-01

    A description of the HDR3 computer code and instructions for its use are provided. HDR3 calculates space heating costs for a hot dry rock (HDR) geothermal space heating system. The code also compares these costs to those of a specific oil heating system in use at the National Aeronautics and Space Administration Flight Center at Wallops Island, Virginia. HDR3 allows many HDR system parameters to be varied so that the user may examine various reservoir management schemes and may optimize reservoir design to suit a particular set of geophysical and economic parameters.

  6. Experimental methodology for computational fluid dynamics code validation

    SciTech Connect

    Aeschliman, D.P.; Oberkampf, W.L.

    1997-09-01

    Validation of Computational Fluid Dynamics (CFD) codes is an essential element of the code development process. Typically, CFD code validation is accomplished through comparison of computed results to previously published experimental data that were obtained for some other purpose, unrelated to code validation. As a result, it is a near certainty that not all of the information required by the code, particularly the boundary conditions, will be available. The common approach is therefore unsatisfactory, and a different method is required. This paper describes a methodology developed specifically for experimental validation of CFD codes. The methodology requires teamwork and cooperation between code developers and experimentalists throughout the validation process, and takes advantage of certain synergisms between CFD and experiment. The methodology employs a novel uncertainty analysis technique which helps to define the experimental plan for code validation wind tunnel experiments, and to distinguish between and quantify various types of experimental error. The methodology is demonstrated with an example of surface pressure measurements over a model of varying geometrical complexity in laminar, hypersonic, near perfect gas, 3-dimensional flow.

  7. Neural coding: computational and biophysical perspectives

    NASA Astrophysics Data System (ADS)

    Kreiman, Gabriel

    2004-07-01

    While recognizing a face or kicking a ball may seem to be easy tasks for us, they still constitute challenging problems for even the most sophisticated computer algorithms available nowadays. The brain has evolved complex mechanisms to encode behaviorally relevant information. Here we review the types of codes used by the brain, what their constraints are and how they map the sensory environment or the motor output. We start by defining neural codes and briefly describing some of the current tools available to record activity from the brain. We give several examples of coding strategies used by different systems and multiple organisms and discuss how spiking patterns can be read out. Going beyond correlations between physiology and stimuli, we show what is currently known about the direct causal link between neuronal responses and behavioral output or sensory input. Finally, we identify what we consider to be some of the pressing questions in the field.

  8. HUDU: The Hanford Unified Dose Utility computer code

    SciTech Connect

    Scherpelz, R.I.

    1991-02-01

    The Hanford Unified Dose Utility (HUDU) computer program was developed to provide rapid initial assessment of radiological emergency situations. The HUDU code uses a straight-line Gaussian atmospheric dispersion model to estimate the transport of radionuclides released from an accident site. For dose points on the plume centerline, it calculates internal doses due to inhalation and external doses due to exposure to the plume. The program incorporates a number of features unique to the Hanford Site (operated by the US Department of Energy), including a library of source terms derived from various facilities' safety analysis reports. The HUDU code was designed to run on an IBM-PC or compatible personal computer. The user interface was designed for fast and easy operation with minimal user training. The theoretical basis and mathematical models used in the HUDU computer code are described, as are the computer code itself and the data libraries used. Detailed instructions for operating the code are also included. Appendices to the report contain descriptions of the program modules, listings of HUDU's data library, and descriptions of the verification tests that were run as part of the code development. 14 refs., 19 figs., 2 tabs.
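
    The underlying straight-line Gaussian plume estimate can be sketched in a few lines. This is the textbook ground-level centerline formula, not HUDU's site-specific implementation, and the sigma coefficients are illustrative stand-ins for stability-class curves:

        # Ground-level, plume-centerline air concentration for a continuous
        # release of strength Q at effective height H, with reflection off
        # the ground: C = Q / (pi * u * sy * sz) * exp(-H^2 / (2 * sz^2)).
        import math

        def centerline_concentration(Q, u, x, H, a=0.08, b=0.06):
            """C (Bq/m^3) at downwind distance x (m); Q in Bq/s, u in m/s."""
            sy, sz = a * x, b * x          # crude linear dispersion sigmas
            return (Q / (math.pi * u * sy * sz)) * math.exp(-H**2 / (2 * sz**2))

        print(centerline_concentration(Q=1e9, u=3.0, x=1000.0, H=50.0))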

  9. An integrated radiation physics computer code system.

    NASA Technical Reports Server (NTRS)

    Steyn, J. J.; Harris, D. W.

    1972-01-01

    An integrated computer code system for the semi-automatic and rapid analysis of experimental and analytic problems in gamma photon and fast neutron radiation physics is presented. Such problems as the design of optimum radiation shields and radioisotope power source configurations may be studied. The system codes allow for the unfolding of complex neutron and gamma photon experimental spectra. Monte Carlo and analytic techniques are used for the theoretical prediction of radiation transport. The system includes a multichannel pulse-height analyzer scintillation and semiconductor spectrometer coupled to an on-line digital computer with appropriate peripheral equipment. The system is geometry generalized as well as self-contained with respect to material nuclear cross sections and the determination of the spectrometer response functions. Input data may be either analytic or experimental.

  10. TAIR: A transonic airfoil analysis computer code

    NASA Technical Reports Server (NTRS)

    Dougherty, F. C.; Holst, T. L.; Grundy, K. L.; Thomas, S. D.

    1981-01-01

    The operation of the TAIR (Transonic AIRfoil) computer code, which uses a fast, fully implicit algorithm to solve the conservative full-potential equation for transonic flow fields about arbitrary airfoils, is described on two levels of sophistication: simplified operation and detailed operation. The program organization and theory are elaborated to simplify modification of TAIR for new applications. Examples with input and output are given for a wide range of cases, including incompressible, subcritical compressible, and transonic calculations.

  11. Connecting Neural Coding to Number Cognition: A Computational Account

    ERIC Educational Resources Information Center

    Prather, Richard W.

    2012-01-01

    The current study presents a series of computational simulations that demonstrate how the neural coding of numerical magnitude may influence number cognition and development. This includes behavioral phenomena cataloged in cognitive literature such as the development of numerical estimation and operational momentum. Though neural research has…

  12. Seals Flow Code Development 1993

    NASA Technical Reports Server (NTRS)

    Liang, Anita D. (Compiler); Hendricks, Robert C. (Compiler)

    1994-01-01

    Seals Workshop of 1993 code releases include SPIRALI for spiral-grooved cylindrical and face seal configurations; IFACE for face seals with pockets, steps, tapers, turbulence, and cavitation; GFACE for gas face seals with 'lift pad' configurations; and SCISEAL, a CFD code for research and design of seals of cylindrical configuration. GUI (graphical user interface) and code usage were discussed, with hands-on usage of the codes, discussions, comparisons, and industry feedback. Other highlights of the Seals Workshop-93 include environmental and customer-driven seal requirements; 'what's coming'; and brush seal developments, including flow visualization, numerical analysis, bench testing, T-700 engine testing, tribological pairing and ceramic configurations, and cryogenic and hot gas facility brush seal results. Also discussed are seals for hypersonic engines and dynamic results for spiral groove and smooth annular seals.

  13. Computer code for double beta decay QRPA based calculations

    SciTech Connect

    Barbero, C. A.; Mariano, A.; Krmpotić, F.; Samana, A. R.; Ferreira, V. dos Santos; Bertulani, C. A.

    2014-11-11

    The computer code developed by our group some years ago for the evaluation of nuclear matrix elements, within the QRPA and PQRPA nuclear structure models, involved in neutrino-nucleus reactions, muon capture and β± processes, is extended to include also the nuclear double beta decay.

  14. FLASH: A finite element computer code for variably saturated flow

    SciTech Connect

    Baca, R.G.; Magnuson, S.O.

    1992-05-01

    A numerical model was developed for use in performance assessment studies at the INEL. The numerical model, referred to as the FLASH computer code, is designed to simulate two-dimensional fluid flow in fractured-porous media. The code is specifically designed to model variably saturated flow in an arid site vadose zone and saturated flow in an unconfined aquifer. In addition, the code also has the capability to simulate heat conduction in the vadose zone. This report presents the following: description of the conceptual framework and mathematical theory; derivations of the finite element techniques and algorithms; computational examples that illustrate the capability of the code; and input instructions for the general use of the code. The FLASH computer code is aimed at providing environmental scientists at the INEL with a predictive tool for the subsurface water pathway. This numerical model is expected to be widely used in performance assessments for: (1) the Remedial Investigation/Feasibility Study process and (2) compliance studies required by the US Department of Energy Order 5820.2A.

  15. High Hydrogen Concentrations Detected In The Underground Vaults For RH-TRU Waste At INEEL Compared With Calculated Values Using The INEEL-Developed Computer Code

    SciTech Connect

    Rajiv Bhatt; Soli Khericha

    2005-02-01

    About 700 remote-handled transuranic (RH-TRU) waste drums are stored in about 144 underground vaults at the Intermediate-Level Transuranic Storage Facility at the Idaho National Environmental and Engineering Laboratory's (INEEL's) Radioactive Waste Management Complex (RWMC). These drums were shipped to the INEEL from 1976 through 1996. During recent monitoring, concentrations of hydrogen were found to be in excess of lower explosive limits. The hydrogen concentration in one vault was detected to be as high as 18% (by volume). This condition required evaluation of the safety basis for the facility. The INEEL has developed a computer program to estimate the hydrogen gas generation as a function of time and diffusion through a series of layers (volumes), with a maximum of five layers plus a sink/environment. The program solves the first-order diffusion equations as a function of time. The current version of the code is more flexible in terms of user input. The program allows the user to estimate hydrogen concentrations in the different layers of a configuration and then change the configuration after a given time, e.g., installation of a filter on an unvented drum or placement of a drum in a vault or in a shipping cask. The code has been used to predict vault concentrations and to identify potential problems during retrieval and aboveground storage. The code has generally predicted higher hydrogen concentrations than the measured values, particularly for drums older than 20 years, which could be due to uncertainty and conservative assumptions in drum age, heat generation rate, hydrogen generation rate, Geff, and diffusion rates through the layers.
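
    The layered-diffusion idea can be illustrated with a small compartment model (a schematic of ours, not the INEEL code): hydrogen generated in the innermost volume exchanges with successive layers at first-order rates, and the outermost layer leaks to the environment, treated as an infinite sink.

        # Schematic chain-of-compartments hydrogen transport model. All
        # volumes, conductances, and the generation rate are illustrative.
        import numpy as np
        from scipy.integrate import solve_ivp

        G = 1.0e-7                                # generation in layer 0 (mol/s)
        V = np.array([0.2, 0.05, 3.0])            # layer volumes (m^3)
        k = np.array([1.0e-6, 5.0e-6, 2.0e-6])    # exchange conductances (m^3/s)

        def rhs(t, c):
            flows = np.empty(len(c) + 1)
            flows[0] = 0.0                           # sealed inner boundary
            flows[1:-1] = k[:-1] * (c[:-1] - c[1:])  # inter-layer diffusion
            flows[-1] = k[-1] * c[-1]                # leakage to the environment
            dc = (flows[:-1] - flows[1:]) / V
            dc[0] += G / V[0]                        # gas generation source term
            return dc

        sol = solve_ivp(rhs, (0.0, 3.15e8), np.zeros(3), method="LSODA")
        print(sol.y[:, -1])                       # ~10-year concentrations (mol/m^3)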

  16. Analog system for computing sparse codes

    DOEpatents

    Rozell, Christopher John; Johnson, Don Herrick; Baraniuk, Richard Gordon; Olshausen, Bruno A.; Ortman, Robert Lowell

    2010-08-24

    A parallel dynamical system for computing sparse representations of data, i.e., where the data can be fully represented in terms of a small number of non-zero code elements, and for reconstructing compressively sensed images. The system is based on the principles of thresholding and local competition that solve a family of sparse approximation problems corresponding to various sparsity metrics. The system utilizes Locally Competitive Algorithms (LCAs), in which nodes in a population continually compete with neighboring units using (usually one-way) lateral inhibition to calculate coefficients representing an input in an overcomplete dictionary.
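
    A discrete-time simulation of the LCA dynamics reads as follows; this is an illustrative sketch under our own toy dictionary and parameters, not the patented analog circuit:

        # Locally Competitive Algorithm: leaky integration of internal states
        # driven by the input, with soft thresholding and lateral inhibition.
        import numpy as np

        rng = np.random.default_rng(1)
        n, m = 16, 64                              # signal dim, dictionary size
        Phi = rng.standard_normal((n, m))
        Phi /= np.linalg.norm(Phi, axis=0)         # unit-norm dictionary elements

        x = Phi[:, [3, 40]] @ np.array([1.5, -2.0])  # signal with a 2-sparse code

        lam, dt = 0.1, 0.1
        b = Phi.T @ x                              # feedforward drive
        Gmat = Phi.T @ Phi - np.eye(m)             # lateral inhibition weights
        u = np.zeros(m)                            # internal node states
        for _ in range(500):
            a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)  # soft threshold
            u += dt * (b - u - Gmat @ a)           # competition dynamics
        print(np.nonzero(np.abs(a) > 1e-3)[0])     # active coefficients (near {3, 40})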

  17. Spiking network simulation code for petascale computers

    PubMed Central

    Kunkel, Susanne; Schmidt, Maximilian; Eppler, Jochen M.; Plesser, Hans E.; Masumoto, Gen; Igarashi, Jun; Ishii, Shin; Fukai, Tomoki; Morrison, Abigail; Diesmann, Markus; Helias, Moritz

    2014-01-01

    Brain-scale networks exhibit a breathtaking heterogeneity in the dynamical properties and parameters of their constituents. At cellular resolution, the entities of theory are neurons and synapses and over the past decade researchers have learned to manage the heterogeneity of neurons and synapses with efficient data structures. Already early parallel simulation codes stored synapses in a distributed fashion such that a synapse solely consumes memory on the compute node harboring the target neuron. As petaflop computers with some 100,000 nodes become increasingly available for neuroscience, new challenges arise for neuronal network simulation software: Each neuron contacts on the order of 10,000 other neurons and thus has targets only on a fraction of all compute nodes; furthermore, for any given source neuron, at most a single synapse is typically created on any compute node. From the viewpoint of an individual compute node, the heterogeneity in the synaptic target lists thus collapses along two dimensions: the dimension of the types of synapses and the dimension of the number of synapses of a given type. Here we present a data structure taking advantage of this double collapse using metaprogramming techniques. After introducing the relevant scaling scenario for brain-scale simulations, we quantitatively discuss the performance on two supercomputers. We show that the novel architecture scales to the largest petascale supercomputers available today. PMID:25346682

  20. Development of a High Resolution Weather Forecast Model for Mesoamerica Using the NASA Ames Code I Private Cloud Computing Environment

    NASA Technical Reports Server (NTRS)

    Molthan, Andrew; Case, Jonathan; Venner, Jason; Moreno-Madrinan, Max J.; Delgado, Francisco

    2012-01-01

    Two projects at NASA Marshall Space Flight Center have collaborated to develop a high-resolution weather forecast model for Mesoamerica: the NASA Short-term Prediction Research and Transition (SPoRT) Center, which integrates unique NASA satellite and weather forecast modeling capabilities into the operational weather forecasting community, and NASA's SERVIR Program, which integrates satellite observations, ground-based data, and forecast models to improve disaster response in Central America, the Caribbean, Africa, and the Himalayas.

  1. Parallel CARLOS-3D code development

    SciTech Connect

    Putnam, J.M.; Kotulski, J.D.

    1996-02-01

    CARLOS-3D is a three-dimensional scattering code which was developed under the sponsorship of the Electromagnetic Code Consortium, and is currently used by over 80 aerospace companies and government agencies. The code has been extensively validated and runs on both serial workstations and parallel supercomputers such as the Intel Paragon. CARLOS-3D is a three-dimensional surface integral equation scattering code based on a Galerkin method of moments formulation employing Rao-Wilton-Glisson roof-top basis functions for triangular faceted surfaces. Fully arbitrary 3D geometries composed of multiple conducting and homogeneous bulk dielectric materials can be modeled. This presentation describes some of the extensions to the CARLOS-3D code, and how the operator structure of the code facilitated these improvements. Body of revolution (BOR) and two-dimensional geometries were incorporated by simply including new input routines, and the appropriate Galerkin matrix operator routines. Some additional modifications were required in the combined field integral equation matrix generation routine due to the symmetric nature of the BOR and 2D operators. Quadrilateral patched surfaces with linear roof-top basis functions were also implemented in the same manner. Quadrilateral facets and triangular facets can be used in combination to more efficiently model geometries with both large smooth surfaces and surfaces with fine detail such as gaps and cracks. Since the parallel implementation in CARLOS-3D is at a high level, these changes were independent of the computer platform being used. This approach minimizes code maintenance, while providing capabilities with little additional effort. Results are presented showing the performance and accuracy of the code for some large scattering problems. Comparisons between triangular faceted and quadrilateral faceted geometry representations will be shown for some complex scatterers.

  2. Development and testing of FIDELE: a computer code for finite-difference solution to harmonic magnetic-dipole excitation of an azimuthally symmetric horizontally and radially layered earth

    SciTech Connect

    Vittitoe, C.N.

    1981-04-01

    The FORTRAN IV computer code FIDELE simulates the high-frequency electrical logging of a well in which induction and receiving coils are mounted in an instrument sonde immersed in a drilling fluid. The fluid invades layers of surrounding rock in an azimuthally symmetric pattern, superimposing radial layering upon the horizontally layered earth. Maxwell's equations are reduced to a second-order elliptic differential equation for the azimuthal electric-field intensity. The equation is solved at each spatial position where the complex dielectric constant, magnetic permeability, and electrical conductivity have been assigned. Receiver response is given as the complex open-circuit voltage on receiver coils. The logging operation is simulated by a succession of such solutions as the sonde traverses the borehole. Test problems verify consistency with available results for simple geometries. The code's main advantage is its treatment of a two-dimensional earth; its chief disadvantage is the large computer time required for typical problems. Possible code improvements are noted. Use of the computer code is outlined, and tests of most code features are presented.

  3. Improved NASA-ANOPP Noise Prediction Computer Code for Advanced Subsonic Propulsion Systems. Volume 2; Fan Suppression Model Development

    NASA Technical Reports Server (NTRS)

    Kontos, Karen B.; Kraft, Robert E.; Gliebe, Philip R.

    1996-01-01

    The Aircraft Noise Prediction Program (ANOPP) is an industry-wide tool used to predict turbofan engine flyover noise in system noise optimization studies. Its goal is to provide the best currently available methods for source noise prediction. As part of a program to improve the Heidmann fan noise model, models for fan inlet and fan exhaust noise suppression estimation that are based on simple engine and acoustic geometry inputs have been developed. The models can be used to predict sound power level suppression and sound pressure level suppression at a position specified relative to the engine inlet.

  4. Comparison of computer codes for calculating dynamic loads in wind turbines

    NASA Technical Reports Server (NTRS)

    Spera, D. A.

    1978-01-01

    The development of computer codes for calculating dynamic loads in horizontal-axis wind turbines was examined, and a brief overview of each code was given. The performance of individual codes was compared against two sets of test data measured on a 100-kW Mod-0 wind turbine. All codes are aeroelastic and include loads which are gravitational, inertial, and aerodynamic in origin.

  5. Hanford Meteorological Station computer codes: Volume 2, The PROD computer code

    SciTech Connect

    Andrews, G.L.; Buck, J.W.

    1987-09-01

    At the end of each work shift (day, swing, and graveyard), the Hanford Meteorological Station (HMS), operated by Pacific Northwest Laboratory, issues a forecast of the 200-ft-level wind speed and direction and the weather for use at B Plant and PUREX. These forecasts are called production forecasts. The PROD computer code is used to archive these production forecasts and apply quality assurance checks to the forecasts. The code accesses an input file, which contains the previous forecast's date and shift number, and an output file, which contains the production forecasts for the current month. A data entry form consisting of 20 fields is included in the program. The fields must be filled in by the user. The information entered is appended to the current production monthly forecast file, which provides an archive for the production forecasts. This volume describes the implementation and operation of the PROD computer code at the HMS.
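
    The archive-and-check pattern described above can be sketched in a few lines of Python. Field names, quality assurance rules, and file names below are hypothetical stand-ins, not the actual PROD implementation:

      SHIFTS = {1: "day", 2: "swing", 3: "graveyard"}

      def qa_check(entry):
          # illustrative quality assurance checks on the entry form fields
          if entry["shift"] not in SHIFTS:
              raise ValueError("shift must be 1 (day), 2 (swing), or 3 (graveyard)")
          if not 0 <= entry["wind_dir"] <= 360:
              raise ValueError("wind direction out of range")
          if entry["wind_speed"] < 0:
              raise ValueError("wind speed cannot be negative")

      def archive_forecast(entry, state_path="prod_state.txt",
                           month_path="prod_current_month.txt"):
          qa_check(entry)
          try:  # the state file holds the previous forecast's date and shift
              with open(state_path) as f:
                  last = f.read().split()
          except FileNotFoundError:
              last = None
          key = [entry["date"], str(entry["shift"])]
          if last == key:
              raise ValueError("forecast for this date and shift already archived")
          with open(month_path, "a") as f:  # append to the monthly archive
              f.write("{date} {shift} {wind_dir:3.0f} {wind_speed:4.1f} {weather}\n"
                      .format(**entry))
          with open(state_path, "w") as f:
              f.write(" ".join(key))

      archive_forecast({"date": "1987-09-14", "shift": 2, "wind_dir": 250,
                        "wind_speed": 12.0, "weather": "clear"})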

  6. Development of an efficient computer code to solve the time-dependent Navier-Stokes equations. [for predicting viscous flow fields about lifting bodies

    NASA Technical Reports Server (NTRS)

    Harp, J. L., Jr.; Oatway, T. P.

    1975-01-01

    A research effort was conducted with the goal of reducing the computer time of a Navier-Stokes computer code for prediction of viscous flow fields about lifting bodies. A two-dimensional, time-dependent, laminar, transonic computer code (STOKES) was modified to incorporate a non-uniform time-step procedure. The non-uniform time-step requires updating of a zone only as often as required by its own stability criteria or those of its immediate neighbors. In the uniform time-step scheme, each zone is updated as often as required by the least stable zone of the finite difference mesh. Because of the less frequent update of program variables, it was expected that the non-uniform time-step would result in a reduction of execution time by a factor of five to ten. Available funding was exhausted prior to successful demonstration of the benefits to be derived from the non-uniform time-step method.
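
    The non-uniform time-step idea admits a compact illustration: each zone carries its own stable step and is advanced only while it does not run ahead of its immediate neighbors, so the number of zone updates is set by local rather than global stability limits. The following Python sketch is purely illustrative and simply counts updates under both schemes:

      import numpy as np

      def local_march(dt_stable, t_end):
          # Advance every zone to t_end; a zone is updated only when it does
          # not run ahead of its immediate neighbors, so each zone is stepped
          # roughly t_end / dt_stable[i] times instead of t_end / min(dt) times.
          t = np.zeros(len(dt_stable))
          updates = 0
          while t.min() < t_end:
              for i in range(len(t)):
                  left = t[i - 1] if i > 0 else np.inf
                  right = t[i + 1] if i + 1 < len(t) else np.inf
                  if t[i] < t_end and t[i] <= min(left, right):
                      t[i] = min(t[i] + dt_stable[i], t_end)
                      updates += 1
          return updates

      dt = np.array([1e-3, 5e-3, 1e-2, 1e-2])        # per-zone stability limits
      print(local_march(dt, 0.1))                    # 140 zone updates
      print(len(dt) * int(np.ceil(0.1 / dt.min())))  # uniform scheme: 400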

  7. Upgrades of Two Computer Codes for Analysis of Turbomachinery

    NASA Technical Reports Server (NTRS)

    Chima, Rodrick V.; Liou, Meng-Sing

    2005-01-01

    Major upgrades have been made in two of the programs reported in "Five Computer Codes for Analysis of Turbomachinery". The affected programs are: Swift -- a code for three-dimensional (3D) multiblock analysis; and TCGRID, which generates a 3D grid used with Swift. Originally utilizing only a central-differencing scheme for numerical solution, Swift was augmented by the addition of two upwind schemes that give greater accuracy but take more computing time. Other improvements in Swift include the addition of a shear-stress-transport turbulence model for better prediction of adverse pressure gradients, the addition of an H-grid capability for flexibility in modeling flows in pumps and ducts, and a modification to enable simultaneous modeling of hub and tip clearances. Improvements in TCGRID include modifications to enable generation of grids for more complicated flow paths and the addition of an option to generate grids compatible with the ADPAC code used at NASA and in industry. For both codes, new test cases were developed and documentation was updated. Both codes were converted to Fortran 90, with dynamic memory allocation. Both codes were also modified for ease of use in both UNIX and Windows operating systems.

  8. Fundamentals, current state of the development of, and prospects for further improvement of the new-generation thermal-hydraulic computational HYDRA-IBRAE/LM code for simulation of fast reactor systems

    NASA Astrophysics Data System (ADS)

    Alipchenkov, V. M.; Anfimov, A. M.; Afremov, D. A.; Gorbunov, V. S.; Zeigarnik, Yu. A.; Kudryavtsev, A. V.; Osipov, S. L.; Mosunova, N. A.; Strizhov, V. F.; Usov, E. V.

    2016-02-01

    The conceptual fundamentals of the development of the new-generation system thermal-hydraulic computational HYDRA-IBRAE/LM code are presented. The code is intended to simulate the thermal-hydraulic processes that take place in the loops and the heat-exchange equipment of liquid-metal cooled fast reactor systems under normal operation and anticipated operational occurrences and during accidents. The paper provides a brief overview of Russian and foreign system thermal-hydraulic codes for modeling liquid-metal coolants and explains the need for the development of a new-generation HYDRA-IBRAE/LM code. Considering the specific engineering features of the nuclear power plants (NPPs) equipped with the BN-1200 and the BREST-OD-300 reactors, the processes and phenomena are singled out that require detailed analysis and model development in order to be correctly described by the system thermal-hydraulic code in question. Information on the functionality of the computational code is provided, viz., the thermal-hydraulic two-phase model, the properties of the sodium and the lead coolants, the closing equations for simulation of the heat-mass exchange processes, the models to describe the processes that take place during a steam-generator tube rupture, etc. The article gives a brief overview of the usability of the computational code, including a description of the support documentation and the supply package, as well as the possibilities of taking advantage of modern computer technologies, such as parallel computations. The paper shows the current state of verification and validation of the computational code; it also presents information on the principles of constructing and populating the verification matrices for the BREST-OD-300 and the BN-1200 reactor systems. The prospects are outlined for further development of the HYDRA-IBRAE/LM code, introduction of new models into it, and enhancement of its usability. It is shown that the program of development and

  9. A surface code quantum computer in silicon

    PubMed Central

    Hill, Charles D.; Peretz, Eldad; Hile, Samuel J.; House, Matthew G.; Fuechsle, Martin; Rogge, Sven; Simmons, Michelle Y.; Hollenberg, Lloyd C. L.

    2015-01-01

    The exceptionally long quantum coherence times of phosphorus donor nuclear spin qubits in silicon, coupled with the proven scalability of silicon-based nano-electronics, make them attractive candidates for large-scale quantum computing. However, the high threshold of topological quantum error correction can only be captured in a two-dimensional array of qubits operating synchronously and in parallel—posing formidable fabrication and control challenges. We present an architecture that addresses these problems through a novel shared-control paradigm that is particularly suited to the natural uniformity of the phosphorus donor nuclear spin qubit states and electronic confinement. The architecture comprises a two-dimensional lattice of donor qubits sandwiched between two vertically separated control layers forming a mutually perpendicular crisscross gate array. Shared-control lines facilitate loading/unloading of single electrons to specific donors, thereby activating multiple qubits in parallel across the array on which the required operations for surface code quantum error correction are carried out by global spin control. The complexities of independent qubit control, wave function engineering, and ad hoc quantum interconnects are explicitly avoided. With many of the basic elements of fabrication and control based on demonstrated techniques and with simulated quantum operation below the surface code error threshold, the architecture represents a new pathway for large-scale quantum information processing in silicon and potentially in other qubit systems where uniformity can be exploited. PMID:26601310

  10. A surface code quantum computer in silicon.

    PubMed

    Hill, Charles D; Peretz, Eldad; Hile, Samuel J; House, Matthew G; Fuechsle, Martin; Rogge, Sven; Simmons, Michelle Y; Hollenberg, Lloyd C L

    2015-10-01

    The exceptionally long quantum coherence times of phosphorus donor nuclear spin qubits in silicon, coupled with the proven scalability of silicon-based nano-electronics, make them attractive candidates for large-scale quantum computing. However, the high threshold of topological quantum error correction can only be captured in a two-dimensional array of qubits operating synchronously and in parallel-posing formidable fabrication and control challenges. We present an architecture that addresses these problems through a novel shared-control paradigm that is particularly suited to the natural uniformity of the phosphorus donor nuclear spin qubit states and electronic confinement. The architecture comprises a two-dimensional lattice of donor qubits sandwiched between two vertically separated control layers forming a mutually perpendicular crisscross gate array. Shared-control lines facilitate loading/unloading of single electrons to specific donors, thereby activating multiple qubits in parallel across the array on which the required operations for surface code quantum error correction are carried out by global spin control. The complexities of independent qubit control, wave function engineering, and ad hoc quantum interconnects are explicitly avoided. With many of the basic elements of fabrication and control based on demonstrated techniques and with simulated quantum operation below the surface code error threshold, the architecture represents a new pathway for large-scale quantum information processing in silicon and potentially in other qubit systems where uniformity can be exploited. PMID:26601310

  11. Computer code for determination of thermally perfect gas properties

    NASA Technical Reports Server (NTRS)

    Witte, David W.; Tatum, Kenneth E.

    1994-01-01

    A set of one-dimensional compressible flow relations for a thermally perfect, calorically imperfect gas is derived for the specific heat c(sub p), expressed as a polynomial function of temperature, and developed into the thermally perfect gas (TPG) computer code. The code produces tables of compressible flow properties similar to those of NACA Rep. 1135. Unlike the tables of NACA Rep. 1135, which are valid only in the calorically perfect temperature regime, the TPG code results are also valid in the thermally perfect, calorically imperfect temperature regime, which considerably extends the temperature range of application. Accuracy of the TPG code in the calorically perfect temperature regime is verified by comparisons with the tables of NACA Rep. 1135. In the thermally perfect, calorically imperfect temperature regime, the TPG code is validated by comparisons with results obtained from the method of NACA Rep. 1135 for calculating the thermally perfect, calorically imperfect compressible flow properties. The temperature limits for application of the TPG code are also examined. The advantage of the TPG code is its applicability to any type of gas (monatomic, diatomic, triatomic, or polyatomic) or any specified mixture thereof, whereas the method of NACA Rep. 1135 is restricted to diatomic gases.
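
    The approach can be illustrated with a minimal Python sketch: given a polynomial c_p(T) (the coefficients below are hypothetical, not those of the TPG code), the enthalpy follows by integration, and the static temperature behind a given Mach number is found by solving the adiabatic energy balance h(T_t) = h(T) + M^2 gamma(T) R T / 2 by bisection:

      R = 287.05                      # J/(kg K); illustrative working gas
      CP = (948.0, 0.183, -4.0e-5)    # hypothetical cp(T) polynomial coefficients

      def cp(T):
          return sum(c * T ** i for i, c in enumerate(CP))

      def h(T):                       # enthalpy by integrating cp, with h(0) = 0
          return sum(c * T ** (i + 1) / (i + 1) for i, c in enumerate(CP))

      def gamma(T):
          return cp(T) / (cp(T) - R)

      def static_temperature(Tt, M):
          """Solve h(Tt) = h(T) + M**2 * gamma(T) * R * T / 2 by bisection;
          gamma varies with T, which is the thermally perfect feature."""
          lo, hi = 1.0, Tt
          for _ in range(60):
              T = 0.5 * (lo + hi)
              if h(T) + 0.5 * M * M * gamma(T) * R * T < h(Tt):
                  lo = T
              else:
                  hi = T
          return T

      print(static_temperature(Tt=2000.0, M=2.0))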

  12. Verification and validation plan for reactor analysis computer codes

    SciTech Connect

    Toffer, H.; Crowe, R.D.; Schwinkendorf, K.N.; Pevey, R.E.

    1989-11-01

    This report presents a verification and validation (V&V) plan for reactor analysis computer codes used in Technical Specifications development and for other safety and production support calculations. This plan fulfills the commitments by Westinghouse Savannah River Company (WSRC) to the Department of Energy Savannah River (DOE-SR) as identified in a letter to R.E. Tiller (Reference 1). The plan stresses verification and validation by demonstrating successful application of the codes to predict reactor data, special measurements, and benchmarks. This is in compliance with the intent of the WSRC quality assurance requirements. Restructuring of software especially to achieve verification compliance is not recommended.

  13. Additional extensions to the NASCAP computer code, volume 3

    NASA Technical Reports Server (NTRS)

    Mandell, M. J.; Cooke, D. L.

    1981-01-01

    The ION computer code is designed to calculate charge exchange ion densities, electric potentials, plasma temperatures, and current densities external to a neutralized ion engine in R-Z geometry. The present version assumes the beam ion current and density to be known and specified, and the neutralizing electrons to originate from a hot-wire ring surrounding the beam orifice. The plasma is treated as being resistive, with an electron relaxation time comparable to the plasma frequency. Together with the thermal and electrical boundary conditions described below and other straightforward engine parameters, these assumptions suffice to determine the required quantities. The ION code, written in ASCII FORTRAN for UNIVAC 1100 series computers, is designed to be run interactively, although it can also be run in batch mode. The input is free-format, and the output is mainly graphical, using the machine-independent graphics developed for the NASCAP code. The executive routine calls the code's major subroutines in user-specified order, and the code allows great latitude for restart and parameter change.

  14. A DOE Computer Code Toolbox: Issues and Opportunities

    SciTech Connect

    Vincent, A.M. III

    2001-06-12

    The initial activities of a Department of Energy (DOE) Safety Analysis Software Group to establish a Safety Analysis Toolbox of computer models are discussed. The toolbox shall be a DOE Complex repository of verified and validated computer models that are configuration-controlled and made available for specific accident analysis applications. The toolbox concept was recommended by the Defense Nuclear Facilities Safety Board staff as a mechanism to partially address Software Quality Assurance issues. Toolbox candidate codes have been identified through review of a DOE Survey of Software practices and processes, and through consideration of earlier findings of the Accident Phenomenology and Consequence Evaluation program sponsored by the DOE National Nuclear Security Administration/Office of Defense Programs. Planning is described to collect these high-use codes, apply tailored SQA specific to the individual codes, and implement the software toolbox concept. While issues exist, such as resource allocation and the interface among code developers, code users, and toolbox maintainers, significant benefits can be achieved through a centralized toolbox and subsequent standardized applications.

  15. New Parallel computing framework for radiation transport codes

    SciTech Connect

    Kostin, M.A.; Mokhov, N.V.; Niita, K.; /JAERI, Tokai

    2010-09-01

    A new parallel computing framework has been developed for use with general-purpose radiation transport codes. The framework was implemented as a C++ module that uses MPI for message passing. The module is largely independent of the radiation transport code it is used with and is connected to the code by means of a number of interface functions. The framework was integrated with the MARS15 code, and an effort is under way to deploy it in PHITS. Besides the parallel computing functionality, the framework offers a checkpoint facility that allows restarting calculations from a saved checkpoint file. The checkpoint facility can be used in single-process calculations as well as in the parallel regime. Several checkpoint files can be merged into one, thus combining the results of several calculations. The framework also corrects some of the known problems with the scheduling and load balancing found in the original implementations of the parallel computing functionality in MARS15 and PHITS. The framework can be used efficiently on homogeneous systems and on networks of workstations, where interference from other users is possible.
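
    The checkpoint-merge facility can be pictured with a short sketch. The checkpoint layout below (a pickled dictionary holding a history count and tally sums) is a hypothetical stand-in for the framework's actual file format, and Python is used here in place of the C++/MPI implementation:

      import pickle

      def merge_checkpoints(paths, out_path):
          """Combine several checkpoint files into one by adding history
          counts and tally sums, mimicking the framework's merge facility."""
          merged = {"histories": 0, "tally": None}
          for path in paths:
              with open(path, "rb") as f:
                  ck = pickle.load(f)
              merged["histories"] += ck["histories"]
              if merged["tally"] is None:
                  merged["tally"] = list(ck["tally"])
              else:
                  merged["tally"] = [a + b for a, b in
                                     zip(merged["tally"], ck["tally"])]
          with open(out_path, "wb") as f:
              pickle.dump(merged, f)
          return merged

      # create two dummy checkpoints, then merge them
      for i, tally in enumerate([[1.0, 2.0], [0.5, 0.25]]):
          with open(f"ck{i}.pkl", "wb") as f:
              pickle.dump({"histories": 1000, "tally": tally}, f)
      print(merge_checkpoints(["ck0.pkl", "ck1.pkl"], "merged.pkl"))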

  16. Wind tunnel requirements for computational fluid dynamics code verification

    NASA Technical Reports Server (NTRS)

    Marvin, Joseph G.

    1987-01-01

    The role of experiment in the development of Computational Fluid Dynamics (CFD) for aerodynamic flow field prediction is discussed. Requirements for code verification from two sources that pace the development of CFD are described: (1) development of adequate flow modeling and (2) establishment of confidence in the use of CFD to predict complex flows. The types of data needed and their accuracy differ in detail and scope and lead to definite wind tunnel requirements. Examples of testing to assess and develop turbulence models, and to verify code development, are used to establish future wind tunnel testing requirements. Versatility, appropriate scale and speed range, accessibility for nonintrusive instrumentation, computerized data systems, and dedicated use for verification were among the more important requirements identified.

  17. Implementation of a 3D mixing layer code on parallel computers

    NASA Technical Reports Server (NTRS)

    Roe, K.; Thakur, R.; Dang, T.; Bogucz, E.

    1995-01-01

    This paper summarizes our progress and experience in the development of a computational fluid dynamics code on parallel computers to simulate three-dimensional, spatially-developing mixing layers. In this initial study, the three-dimensional time-dependent Euler equations are solved using a finite-volume explicit time-marching algorithm. The code was first programmed in Fortran 77 for sequential computers and was then converted for use on parallel computers using the conventional message-passing technique; we have not yet been able to compile the code with the present version of HPF compilers.

  18. Simulation Code Development and Its Applications

    NASA Astrophysics Data System (ADS)

    Li, Zenghai

    2015-10-01

    Under the support of the U.S. DOE SciDAC program, SLAC has been developing a suite of 3D parallel finite-element codes aimed at high-accuracy, high-fidelity electromagnetic and beam physics simulations for the design and optimization of next-generation particle accelerators. Running on the latest supercomputers, these codes have made great strides in advancing the state of the art in applied math and computer science at the petascale that enable the integrated modeling of electromagnetics, self-consistent Particle-In-Cell (PIC) particle dynamics as well as thermal, mechanical, and multi-physics effects. This paper will present the latest development and application of ACE3P to a wide range of accelerator projects.

  1. Additional extensions to the NASCAP computer code, volume 1

    NASA Technical Reports Server (NTRS)

    Mandell, M. J.; Katz, I.; Stannard, P. R.

    1981-01-01

    Extensions and revisions to a computer code that comprehensively analyzes problems of spacecraft charging (NASCAP) are documented. Using a fully three-dimensional approach, it can accurately predict spacecraft potentials under a variety of conditions. Among the extensions are a multiple electron/ion gun test tank capability and the ability to model anisotropic and time-dependent space environments. Also documented are a greatly extended MATCHG program and the preliminary version of NASCAP/LEO. The interactive MATCHG code was developed into an extremely powerful tool for the study of material-environment interactions. NASCAP/LEO, a three-dimensional code to study current collection under conditions of high voltages and short Debye lengths, was distributed for preliminary testing.

  2. RESRAD-ECORISK: A computer code for ecological risk assessment

    SciTech Connect

    Cheng, J.J.

    1995-12-01

    RESRAD-ECORISK is a PC-based computer code developed by Argonne National Laboratory (ANL) to estimate risks from exposure of ecological receptors at sites contaminated with potentially hazardous chemicals. The code is based on and is consistent with the methodologies of RESRAD-CHEM, an ANL-developed computer code for assessments of human health risk. RESRAD-ECORISK uses environmental fate and transport models to estimate contaminant concentrations in environmental media from an initial contaminated soil source and food-web uptake models to estimate contaminant doses to ecological receptors. The dose estimates are then used to estimate a risk for the ecological receptor and to calculate preliminary soil guidelines for reducing risks to acceptable levels. Specifically, RESRAD-ECORISK calculates (1) a species-specific applied daily dose for each contaminant (using species-specific life history information and site-specific environmental media concentrations), (2) an ecological hazard quotient (EHQ) for each contaminant and species, and (3) preliminary soil cleanup criteria for each contaminant and receptor. RESRAD-ECORISK incorporates a user-friendly menu-driven interface, databases and default values for a variety of ecological and chemical parameters, and on-line help for easy operation. The code is sufficiently flexible to simulate different contaminated sites and incorporate site-specific ecological data.
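
    The dose-to-guideline chain can be conveyed by a deliberately simplified sketch; the real code derives media concentrations and intakes from fate-and-transport and food-web uptake models, and all parameter values below are hypothetical:

      def applied_daily_dose(c_soil, soil_intake, c_food, food_intake, body_weight):
          """Contaminant dose in mg per kg body weight per day from
          incidental soil ingestion plus food-web intake."""
          return (c_soil * soil_intake + c_food * food_intake) / body_weight

      def ecological_hazard_quotient(dose, trv):
          """EHQ = applied daily dose / toxicity reference value (TRV)."""
          return dose / trv

      # a small mammal exposed to a hypothetical contaminant
      dose = applied_daily_dose(c_soil=120.0, soil_intake=0.0005,
                                c_food=8.0, food_intake=0.003, body_weight=0.024)
      ehq = ecological_hazard_quotient(dose, trv=1.5)
      # preliminary cleanup criterion: soil concentration scaled to EHQ = 1,
      # assuming intakes scale linearly with the soil concentration
      print(ehq, 120.0 / ehq)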

  3. NASA Multidimensional Stirling Convertor Code Developed

    NASA Technical Reports Server (NTRS)

    Tew, Roy C.; Thieme, Lanny G.

    2004-01-01

    A high-efficiency Stirling Radioisotope Generator (SRG) for use on potential NASA Space Science missions is being developed by the Department of Energy, Lockheed Martin, Stirling Technology Company, and the NASA Glenn Research Center. These missions may include providing spacecraft onboard electric power for deep space missions or power for unmanned Mars rovers. Glenn is also developing advanced technology for Stirling convertors, aimed at substantially improving the specific power and efficiency of the convertor and the overall power system. Performance and mass improvement goals have been established for second- and third-generation Stirling radioisotope power systems. Multiple efforts are underway to achieve these goals, both in house at Glenn and under various grants and contracts. These efforts include the development of a multidimensional Stirling computational fluid dynamics code, high-temperature materials, advanced controllers, an end-to-end system dynamics model, low-vibration techniques, advanced regenerators, and a lightweight convertor. Under a NASA grant, Cleveland State University (CSU) and its subcontractors, the University of Minnesota (UMN) and Gedeon Associates, have developed a two-dimensional computer simulation of a CSUmod Stirling convertor. The CFD-ACE commercial software developed by CFD Research Corp. of Huntsville, Alabama, is being used. The CSUmod is a scaled version of the Stirling Technology Demonstrator Convertor (TDC), which was designed and fabricated by the Stirling Technology Company and is being tested by NASA. The schematic illustrates the structure of this model. Modeled are the fluid-flow and heat-transfer phenomena that occur in the expansion space, the heater, the regenerator, the cooler, the compression space, the surrounding walls, and the moving piston and displacer. In addition, the overall heat transfer, the indicated power, and the efficiency can be calculated. The CSUmod model is being converted to a two

  4. Computer Code For Turbocompounded Adiabatic Diesel Engine

    NASA Technical Reports Server (NTRS)

    Assanis, D. N.; Heywood, J. B.

    1988-01-01

    Computer simulation developed to study advantages of increased exhaust enthalpy in adiabatic turbocompounded diesel engine. Subsystems of conceptual engine include compressor, reciprocator, turbocharger turbine, compounded turbine, ducting, and heat exchangers. Focus of simulation of total system is to define transfers of mass and energy, including release and transfer of heat and transfer of work in each subsystem, and relationships among subsystems. Written in FORTRAN IV.

  5. Computer code for the prediction of nozzle admittance

    NASA Technical Reports Server (NTRS)

    Nguyen, Thong V.

    1988-01-01

    A procedure which can accurately characterize injector designs for large-thrust (0.5 to 1.5 million pounds), high-pressure (500 to 3000 psia) LOX/hydrocarbon engines is currently under development. In this procedure, a rectangular cross-sectional combustion chamber is to be used to simulate the lower transverse frequency modes of the large-scale chamber. The chamber will be sized so that the first width mode of the rectangular chamber corresponds to the first tangential mode of the full-scale chamber. Test data to be obtained from the rectangular chamber will be used to assess the full-scale engine stability. This requires the development of combustion stability models for rectangular chambers. As part of the combustion stability model development, a computer code, NOAD, based on existing theory, was developed to calculate the nozzle admittances for both rectangular and axisymmetric nozzles. This code is described in detail.

  6. Optimization of Russian roulette parameters for the KENO computer code

    SciTech Connect

    Hoffman, T.J.

    1982-10-01

    Proper specification of the (statistical) weight standards for Monte Carlo calculations can lead to a substantial reduction in computer time. Frequently these weights are set intuitively. When optimization is performed, it is usually based on a simplified model (to enable mathematical analysis) and involves minimization of the sample variance. In this report, weight standards are optimized through consideration of the actual implementation of Russian roulette in the KENO computer code. The goal is minimization of computer time rather than minimization of sample variance. Verification of the development and assumptions is obtained from Monte Carlo simulations. The results indicate that the current default weight standards are appropriate for most problems in which thermal neutron transport is not a major consumer of computer time. For thermal systems, the optimization technique described in this report should be used.
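
    For reference, the roulette game itself is compact. The sketch below shows generic Monte Carlo practice rather than the KENO implementation, and the parameter names are not KENO input keywords; the survival probability is chosen so that the expected particle weight is conserved:

      import random

      def russian_roulette(weight, w_cutoff, w_survival, rng=random.random):
          """Particles whose statistical weight falls below w_cutoff are
          killed with probability 1 - weight / w_survival; survivors continue
          with weight w_survival."""
          if weight >= w_cutoff:
              return weight                 # no roulette above the cutoff
          if rng() < weight / w_survival:
              return w_survival             # survivor carries the survival weight
          return 0.0                        # killed; history terminated
      # unbiased: E[new weight] = (weight / w_survival) * w_survival = weight

      print(russian_roulette(0.01, w_cutoff=0.1, w_survival=0.5))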

  7. Hanford Meteorological Station computer codes: Volume 4, The SUM computer code

    SciTech Connect

    Andrews, G.L.; Buck, J.W.

    1987-09-01

    At the end of each swing shift, the Hanford Meteorological Station (HMS), operated by Pacific Northwest Laboratory, archives a set of daily weather observations. These weather observations are a summary of the maximum and minimum temperature, total precipitation, maximum and minimum relative humidity, total snowfall, total snow depth at 1200 Greenwich Mean Time (GMT), and maximum wind speed plus the direction from which the wind occurred and the time it occurred. This summary also indicates the occurrence of rain, snow, and other weather phenomena. The SUM computer code is used to archive the summary and apply quality assurance checks to the data. This code accesses an input file that contains the date of the previous archive and an output file that contains a daily weather summary for the current month. As part of the program, a data entry form consisting of 21 fields must be filled in by the user. The information on the form is appended to the monthly file, which provides an archive for the daily weather summary. This volume describes the implementation and operation of the SUM computer code at the HMS.

  8. Hanford Meteorological Station computer codes: Volume 6, The SFC computer code

    SciTech Connect

    Andrews, G.L.; Buck, J.W.

    1987-11-01

    Each hour the Hanford Meteorological Station (HMS), operated by Pacific Northwest Laboratory, records and archives weather observations. Hourly surface weather observations consist of weather phenomena such as cloud type and coverage; dry bulb, wet bulb, and dew point temperatures; relative humidity; atmospheric pressure; and wind speed and direction. The SFC computer code is used to archive those weather observations and apply quality assurance checks to the data. This code accesses an input file, which contains the previous archive's date and hour, and an output file, which contains surface observations for the current day. As part of the program, a data entry form consisting of 24 fields must be filled in. The information on the form is appended to the daily file, which provides an archive for the hourly surface observations.

  9. Convergence acceleration of the Proteus computer code with multigrid methods

    NASA Technical Reports Server (NTRS)

    Demuren, A. O.; Ibraheem, S. O.

    1992-01-01

    Presented here is the first part of a study to implement convergence acceleration techniques based on the multigrid concept in the Proteus computer code. A review is given of previous studies on the implementation of multigrid methods in computer codes for compressible flow analysis. Also presented is a detailed stability analysis of upwind and central-difference based numerical schemes for solving the Euler and Navier-Stokes equations. Results are given of a convergence study of the Proteus code on computational grids of different sizes. The results presented here form the foundation for the implementation of multigrid methods in the Proteus code.
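
    The mechanics of the approach can be recalled with a bare-bones one-dimensional V-cycle for the Poisson model problem -u'' = f with homogeneous Dirichlet boundaries; this is only a schematic stand-in for the compressible-flow setting of the Proteus study:

      import numpy as np

      def smooth(u, f, h, sweeps=3, w=2.0 / 3.0):
          for _ in range(sweeps):  # weighted Jacobi relaxation
              u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])

      def residual(u, f, h):
          r = np.zeros_like(u)
          r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
          return r

      def restrict(r):  # full weighting onto the coarse grid
          m = (len(r) - 1) // 2
          rc = np.zeros(m + 1)
          for j in range(1, m):
              rc[j] = 0.25 * (r[2 * j - 1] + 2.0 * r[2 * j] + r[2 * j + 1])
          return rc

      def prolong(ec, n_fine):  # linear interpolation onto the fine grid
          e = np.zeros(n_fine)
          e[::2] = ec
          e[1::2] = 0.5 * (ec[:-1] + ec[1:])
          return e

      def vcycle(u, f, h):
          if len(u) <= 3:                   # coarsest grid: direct solve
              u[1] = 0.5 * h * h * f[1]
              return u
          smooth(u, f, h)
          ec = vcycle(np.zeros((len(u) - 1) // 2 + 1),
                      restrict(residual(u, f, h)), 2 * h)
          u += prolong(ec, len(u))
          smooth(u, f, h)
          return u

      n, h = 65, 1.0 / 64.0
      x = np.linspace(0.0, 1.0, n)
      f = np.pi ** 2 * np.sin(np.pi * x)    # exact solution: sin(pi x)
      u = np.zeros(n)
      for cycle in range(8):
          u = vcycle(u, f, h)
          print(cycle, np.max(np.abs(residual(u, f, h))))  # drops each cycle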

  10. Proceduracy: Computer Code Writing in the Continuum of Literacy

    ERIC Educational Resources Information Center

    Vee, Annette

    2010-01-01

    This dissertation looks at computer programming through the lens of literacy studies, building from the concept of code as a written text with expressive and rhetorical power. I focus on the intersecting technological and social factors of computer code writing as a literacy--a practice I call "proceduracy". Like literacy, proceduracy is a human…

  11. 40 CFR 194.23 - Models and computer codes.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... COMPLIANCE WITH THE 40 CFR PART 191 DISPOSAL REGULATIONS Compliance Certification and Re-certification General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... 40 Protection of Environment 25 2014-07-01 2014-07-01 false Models and computer codes....

  12. 40 CFR 194.23 - Models and computer codes.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... COMPLIANCE WITH THE 40 CFR PART 191 DISPOSAL REGULATIONS Compliance Certification and Re-certification General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... 40 Protection of Environment 25 2011-07-01 2011-07-01 false Models and computer codes....

  13. 40 CFR 194.23 - Models and computer codes.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... COMPLIANCE WITH THE 40 CFR PART 191 DISPOSAL REGULATIONS Compliance Certification and Re-certification General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... 40 Protection of Environment 26 2013-07-01 2013-07-01 false Models and computer codes....

  14. 40 CFR 194.23 - Models and computer codes.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... COMPLIANCE WITH THE 40 CFR PART 191 DISPOSAL REGULATIONS Compliance Certification and Re-certification General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... 40 Protection of Environment 26 2012-07-01 2011-07-01 true Models and computer codes....

  15. 40 CFR 194.23 - Models and computer codes.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... COMPLIANCE WITH THE 40 CFR PART 191 DISPOSAL REGULATIONS Compliance Certification and Re-certification General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Models and computer codes....

  16. Hanford Meteorological Station computer codes: Volume 7, The RIVER computer code

    SciTech Connect

    Andrews, G.L.; Buck, J.W.

    1988-03-01

    The RIVER computer code is used to archive Columbia River data measured at the 100N reactor. The data are recorded every other hour starting at 0100 Pacific Standard Time (12 observations in a day) and consist of river elevation, temperature, and flow rate. The program prompts the user for river data by using a data entry form. After the data have been entered and verified, the program appends each hour of river data to the end of the corresponding surface observation record for the current day. The appended data are then stored in the current month's surface observation file.

  17. Micromechanics Analysis Code (MAC) Developed

    NASA Technical Reports Server (NTRS)

    1997-01-01

    The ability to accurately predict the thermomechanical deformation response of advanced composite materials continues to play an important role in the development of these strategic materials. Analytical models that predict the effective behavior of composites are used not only by engineers in performing structural analysis of large-scale composite components but also by material scientists in developing new material systems. For an analytical model to fulfill these two distinct functions, it must be based on a micromechanics approach that uses physically based deformation and life constitutive models, and it must allow one to generate the average (macro) response of a composite material given the properties of the individual constituents and their geometric arrangement. Only then can such a model be used by a material scientist to investigate the effect of different deformation mechanisms on the overall response of the composite and, thereby, identify the appropriate constituents for a given application. However, if a micromechanical model is to be used in a large-scale structural analysis, it must be (1) computationally efficient, (2) able to generate accurate displacement and stress fields at both the macro and micro level, and (3) compatible with the finite element method. In addition, new advancements in processing and fabrication techniques now make it possible to engineer the architectures of these advanced composite systems. Full utilization of these emerging manufacturing capabilities requires the development of a computationally efficient micromechanics analysis tool that can accurately predict the effect of microstructural details on the internal and macroscopic behavior of composites. Computational efficiency is required because (1) a large number of parameters must be varied in the course of engineering (or designing) composite materials and (2) the optimization of a material's microstructure requires that the micromechanics model be integrated with

  18. Tuning Complex Computer Codes to Data and Optimal Designs

    NASA Astrophysics Data System (ADS)

    Park, Jeong Soo

    Modern scientific researchers often use complex computer simulation codes for theoretical investigations. We model the response of a computer simulation code as the realization of a stochastic process. This approach, design and analysis of computer experiments (DACE), provides a statistical basis for analyzing computer data, for designing experiments for efficient prediction, and for comparing computer-encoded theory to experiments. An objective of research in a large class of dynamic systems is to determine any unknown coefficients in a theory. The coefficients can be determined by "tuning" the computer model to the real data so that the tuned code gives a good match to the real experimental data. Three design strategies for computer experiments are considered: data-adaptive sequential A-optimal design, maximum entropy design, and optimal Latin-hypercube design. The following "code tuning" methodologies are proposed: nonlinear least squares, joint MLE, "separated" joint MLE, and a Bayesian method. The performance of these methods has been studied in several toy models. In the application to nuclear fusion devices, a cheaper emulator of the simulation code (BALDUR) has been constructed, and the transport coefficients were estimated from data of two tokamaks (ASDEX and PDX). Tuning complex computer codes to data using statistical estimation methods and a cheap emulator of the code, along with careful designs of computer experiments, with applications to nuclear fusion devices, is the topic of this thesis.
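
    The flavor of code tuning can be conveyed by a toy example: a cheap simulator (standing in for the expensive code or its emulator) is matched to noisy observations by minimizing the sum of squared errors over the unknown coefficient. Everything below is illustrative:

      import numpy as np

      def simulator(x, theta):
          # stand-in for an expensive code or its cheap emulator
          return np.exp(-theta * x) * np.cos(x)

      rng = np.random.default_rng(1)
      x_obs = np.linspace(0.0, 3.0, 25)
      y_obs = simulator(x_obs, 0.7) + 0.01 * rng.standard_normal(x_obs.size)

      def sse(theta):  # nonlinear least-squares objective
          return np.sum((simulator(x_obs, theta) - y_obs) ** 2)

      # a dense scan in place of a full optimizer, cheap because the
      # simulator (emulator) is cheap to evaluate
      grid = np.linspace(0.1, 2.0, 2000)
      theta_hat = grid[np.argmin([sse(t) for t in grid])]
      print(theta_hat)  # close to the true value 0.7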

  19. Python interface generator for Fortran based codes (a code development aid)

    SciTech Connect

    Grote, D. P.

    2012-02-22

    Forthon generates links between Fortran and Python. Python is a high level, object oriented, interactive and scripting language that allows a flexible and versatile interface to computational tools. The Forthon package generates the necessary wrapping code which allows access to the Fortran database and to the Fortran subroutines and functions. This provides a development package where the computationally intensive parts of a code can be written in efficient Fortran, and the high level controlling code can be written in the much more versatile Python language.

  20. Development of a CFD code for casting simulation

    NASA Technical Reports Server (NTRS)

    Murph, Jesse E.

    1992-01-01

    The task of developing a computational fluid dynamics (CFD) code to accurately model the mold filling phase of a casting operation was accomplished in a systematic manner. First the state-of-the-art was determined through a literature search, a code search, and participation with casting industry personnel involved in consortium startups. From this material and inputs from industry personnel, an evaluation of the currently available codes was made. It was determined that a few of the codes already contained sophisticated CFD algorithms and further validation of one of these codes could preclude the development of a new CFD code for this purpose. With industry concurrence, ProCAST was chosen for further evaluation. Two benchmark cases were used to evaluate the code's performance using a Silicon Graphics Personal Iris system. The results of these limited evaluations (because of machine and time constraints) are presented along with discussions of possible improvements and recommendations for further evaluation.

  1. Space Radiation Transport Code Development: 3DHZETRN

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.

    2015-01-01

    The space radiation transport code, HZETRN, has been used extensively for research, vehicle design optimization, risk analysis, and related applications. One of the simplifying features of the HZETRN transport formalism is the straight-ahead approximation, wherein all particles are assumed to travel along a common axis. This reduces the governing equation to one spatial dimension allowing enormous simplification and highly efficient computational procedures to be implemented. Despite the physical simplifications, the HZETRN code is widely used for space applications and has been found to agree well with fully 3D Monte Carlo simulations in many circumstances. Recent work has focused on the development of 3D transport corrections for neutrons and light ions (Z < 2) for which the straight-ahead approximation is known to be less accurate. Within the development of 3D corrections, well-defined convergence criteria have been considered, allowing approximation errors at each stage in model development to be quantified. The present level of development assumes the neutron cross sections have an isotropic component treated within N explicit angular directions and a forward component represented by the straight-ahead approximation. The N = 1 solution refers to the straight-ahead treatment, while N = 2 represents the bi-directional model in current use for engineering design. The figure below shows neutrons, protons, and alphas for various values of N at locations in an aluminum sphere exposed to a solar particle event (SPE) spectrum. The neutron fluence converges quickly in simple geometry with N > 14 directions. The improved code, 3DHZETRN, transports neutrons, light ions, and heavy ions under space-like boundary conditions through general geometry while maintaining a high degree of computational efficiency. A brief overview of the 3D transport formalism for neutrons and light ions is given, and extensive benchmarking results with the Monte Carlo codes Geant4, FLUKA, and

  2. A computer code for performance of spur gears

    NASA Technical Reports Server (NTRS)

    Wang, K. L.; Cheng, H. S.

    1983-01-01

    In spur gears, both performance and failure predictions are known to be strongly dependent on the variation of load, lubricant film thickness, and total flash or contact temperature of the contacting point as it moves along the contact path. The need for an accurate tool for predicting these variables has prompted the development of a computer code based on recent findings in EHL and on finite element methods. The analyses and some typical results are presented to illustrate the effects of gear geometry, velocity, load, lubricant viscosity, and surface convective heat transfer coefficient on the performance of spur gears.

  3. The Basis Code Development System

    1994-03-15

    BASIS9.4 is a system for developing interactive computer programs in Fortran, with some support for C and C++ as well. Using BASIS9.4 you can create a program that has a sophisticated programming language as its user interface, so that the user can set, calculate with, and plot all the major variables in the program. The program author writes only the scientific part of the program; BASIS9.4 supplies an environment in which to exercise that scientific programming, which includes an interactive language, an interpreter, graphics, terminal logs, error recovery, macros, saving and retrieving variables, formatted I/O, and online documentation.

  4. TRACKING CODE DEVELOPMENT FOR BEAM DYNAMICS OPTIMIZATION

    SciTech Connect

    Yang, L.

    2011-03-28

    Dynamic aperture (DA) optimization with direct particle tracking is a straightforward approach when the computing power permits. It can include various realistic errors and is closer to reality than theoretical estimates. In this approach, a fast and parallel tracking code can be very helpful. In this presentation, we describe an implementation of the storage ring particle tracking code TESLA for beam dynamics optimization. It supports MPI-based parallel computing and is robust as a DA calculation engine. This code has been used in the NSLS-II dynamics optimizations and has shown promising performance.

  5. SLAC Parallel Tracking Code Development and Applications

    SciTech Connect

    McCandless, Brian C.

    2001-01-19

    The increase in single-processor speed based on Moore's law alone will not be able to deliver the dramatic speedup needed in many beam tracking simulations to uncover very slowly evolving effects in a reasonable time. SLAC has embarked on an effort to bring the power of parallel computing to bear on such computations, with the goal of reducing the turnaround time by orders of magnitude so that the results may impact present facilities and future machine designs. This poster will describe the approaches adopted for parallelizing the LIAR code and the ION_MAD code. The scalability of these tracking codes and their further improvement will be discussed.

  6. Hanford Meteorological Station computer codes: Volume 8, The REVIEW computer code

    SciTech Connect

    Andrews, G.L.; Burk, K.W.

    1988-08-01

    The Hanford Meteorological Station (HMS) routinely collects meteorological data from sources on and off the Hanford Site. The data are averaged over both 15 minutes and 1 hour and are maintained in separate databases on the Digital Equipment Corporation (DEC) VAX 11/750 at the HMS. The databases are transferred to the Emergency Management System (EMS) DEC VAX 11/750 computer. The EMS is part of the Unified Dose Assessment Center, which is located on the ground-level floor of the Federal Building in Richland and operated by Pacific Northwest Laboratory. The computer program REVIEW is used to display meteorological data in graphical and alphanumeric form from either the 15-minute or hourly database. The code is available on the HMS and EMS computers. The REVIEW program helps maintain a high level of quality assurance on the instruments that collect the data and provides a convenient mechanism for analyzing meteorological data on a routine basis and during emergency response situations.

  7. SWAAM-LT: The long-term, sodium/water reaction analysis method computer code

    SciTech Connect

    Shin, Y.W.; Chung, H.H.; Wiedermann, A.H.; Tanabe, H.

    1993-01-01

    The SWAAM-LT Code, developed for analysis of long-term effects of sodium/water reactions, is discussed. The theoretical formulation of the code is described, including the introduction of system matrices for ease of computer programming as a general system code. Also, some typical results of the code predictions for available large scale tests are presented. Test data for the steam generator design with the cover-gas feature and without the cover-gas feature are available and analyzed. The capabilities and limitations of the code are then discussed in light of the comparison between the code prediction and the test data.

  8. Optimization of KINETICS Chemical Computation Code

    NASA Technical Reports Server (NTRS)

    Donastorg, Cristina

    2012-01-01

    NASA JPL has been creating a code in FORTRAN called KINETICS to model the chemistry of planetary atmospheres. Recently there has been an effort to introduce the Message Passing Interface (MPI) into the code so as to cut down the run time of the program. There has been some implementation of MPI into KINETICS; however, the code could still be more efficient than it currently is. One way to increase efficiency is to send only certain variables to all the processes when an MPI subroutine is called and to gather only certain variables when the subroutine is finished. Therefore, all the variables that are used in three of the main subroutines needed to be investigated. Because of the sheer amount of code to comb through, this task was given as a ten-week project. I have been able to create flowcharts outlining the subroutines, common blocks, and functions used within the three main subroutines. From these flowcharts I created tables outlining the variables used in each block and important information about each. All this information will be used to determine how to run MPI in KINETICS in the most efficient way possible.

  9. Advances in Parallel Electromagnetic Codes for Accelerator Science and Development

    SciTech Connect

    Ko, Kwok; Candel, Arno; Ge, Lixin; Kabel, Andreas; Lee, Rich; Li, Zenghai; Ng, Cho; Rawat, Vineet; Schussman, Greg; Xiao, Liling; /SLAC

    2011-02-07

    Over a decade of concerted effort in code development for accelerator applications has resulted in a new set of electromagnetic codes which are based on higher-order finite elements for superior geometry fidelity and better solution accuracy. SLAC's ACE3P code suite is designed to harness the power of massively parallel computers to tackle large complex problems with the increased memory and solve them at greater speed. The US DOE supports the computational science R&D under the SciDAC project to improve the scalability of ACE3P, and provides the high performance computing resources needed for the applications. This paper summarizes the advances in the ACE3P set of codes, explains the capabilities of the modules, and presents results from selected applications covering a range of problems in accelerator science and development important to the Office of Science.

  10. Computer code for charge-exchange plasma propagation

    NASA Technical Reports Server (NTRS)

    Robinson, R. S.; Kaufman, H. R.

    1981-01-01

    The propagation of the charge-exchange plasma from an electrostatic ion thruster is crucial in determining the interaction of that plasma with the associated spacecraft. A model that describes this plasma and its propagation is described, together with a computer code based on this model. The structure and calling sequence of the code, named PLASIM, are described. An explanation of the program's input and output is included, together with samples of both. The code is written in ANSI Standard FORTRAN.

  11. Code Verification of the HIGRAD Computational Fluid Dynamics Solver

    SciTech Connect

    Van Buren, Kendra L.; Canfield, Jesse M.; Hemez, Francois M.; Sauer, Jeremy A.

    2012-05-04

    The purpose of this report is to outline code and solution verification activities applied to HIGRAD, a Computational Fluid Dynamics (CFD) solver of the compressible Navier-Stokes equations developed at the Los Alamos National Laboratory and used to simulate various phenomena such as the propagation of wildfires and atmospheric hydrodynamics. Code verification efforts, as described in this report, are an important first step to establish the credibility of numerical simulations. They provide evidence that the mathematical formulation is properly implemented without significant mistakes that would adversely impact the application of interest. Highly accurate analytical solutions are derived for four code verification test problems that exercise different aspects of the code. These test problems are referred to as: (i) the quiet start, (ii) the passive advection, (iii) the passive diffusion, and (iv) the piston-like problem. These problems are simulated using HIGRAD with different levels of mesh discretization, and the numerical solutions are compared to their analytical counterparts. In addition, the rates of convergence are estimated to verify the numerical performance of the solver. The first three test problems produce numerical approximations as expected. The fourth test problem (piston-like) indicates the extent to which the code is able to simulate a 'mild' discontinuity, which is a condition that would typically be better handled by a Lagrangian formulation. The current investigation concludes that the numerical implementation of the solver performs as expected. The quality of solutions is sufficient to provide credible simulations of fluid flows around wind turbines. The main caveat associated with these findings is the limited coverage provided by these four problems and the somewhat limited scope of the verification activities. A more comprehensive evaluation of HIGRAD may be beneficial for future studies.
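
    The rate-of-convergence estimation mentioned above follows a standard recipe: if the discretization error behaves as e(h) ~ C h^p, two runs at mesh sizes H and h = H/r yield the observed order p = ln(e_H / e_h) / ln(r). A minimal sketch with made-up error values:

      import math

      def observed_order(err_coarse, err_fine, ratio):
          """p in e(h) ~ C * h**p from two mesh levels:
          p = ln(e_H / e_h) / ln(r), with refinement ratio r = H / h."""
          return math.log(err_coarse / err_fine) / math.log(ratio)

      # hypothetical L2 errors from runs at h and h/2 of a second-order scheme
      print(observed_order(4.0e-3, 1.0e-3, 2.0))   # -> 2.0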

  12. A generalized one-dimensional computer code for turbomachinery cooling passage flow calculations

    NASA Technical Reports Server (NTRS)

    Kumar, Ganesh N.; Roelke, Richard J.; Meitner, Peter L.

    1989-01-01

    A generalized one-dimensional computer code for analyzing the flow and heat transfer in turbomachinery cooling passages was developed. This code is capable of handling rotating cooling passages with turbulators, 180 degree turns, pin fins, finned passages, by-pass flows, tip cap impingement flows, and flow branching. The code is an extension of a one-dimensional code developed by P. Meitner. In the subject code, correlations for both heat transfer coefficient and pressure loss computations were developed to model each of the above-mentioned types of coolant passages. The code has the capability of independently computing the friction factor and heat transfer coefficient on each side of a rectangular passage. Either the mass flow at the inlet to the channel or the exit plane pressure can be specified. For a specified inlet total temperature, inlet total pressure, and exit static pressure, the code computes the flow rates through the main branch and the subbranches and the flow through the tip cap for impingement cooling, in addition to computing the coolant pressure, temperature, and heat transfer coefficient distribution in each coolant flow branch. Predictions from the subject code for both nonrotating and rotating passages agree well with experimental data. The code was used to analyze the cooling passage of a research cooled radial rotor.
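
    A heavily simplified sketch of such a one-dimensional marching calculation is given below. It uses textbook correlations (Blasius friction factor, Dittus-Boelter Nusselt number) in place of the passage-specific correlations developed for the code, and all geometry and property values are hypothetical:

      import math

      def march(mdot, T_in, P_in, T_wall, D, L, n_seg=50):
          """March along a heated circular passage, updating coolant
          temperature and pressure segment by segment."""
          cp, mu, k, Pr, R = 1005.0, 1.8e-5, 0.026, 0.7, 287.0  # air-like coolant
          A = math.pi * D * D / 4.0
          dx = L / n_seg
          T, P = T_in, P_in
          for _ in range(n_seg):
              rho = P / (R * T)
              V = mdot / (rho * A)
              Re = rho * V * D / mu
              f = 0.316 * Re ** -0.25             # Blasius friction factor
              Nu = 0.023 * Re ** 0.8 * Pr ** 0.4  # Dittus-Boelter correlation
              hc = Nu * k / D
              # heat pickup from the wall and friction pressure drop over dx
              T += hc * math.pi * D * dx * (T_wall - T) / (mdot * cp)
              P -= f * (dx / D) * 0.5 * rho * V * V
          return T, P

      print(march(mdot=0.005, T_in=400.0, P_in=4e5,
                  T_wall=900.0, D=0.004, L=0.2))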

  13. Computer Code Systems for Use with Meteorological Data.

    1983-09-14

    Version 00 The staff of the Nuclear Regulatory Commission uses the computer codes in this collection to examine, assess, and utilize the hourly values of meteorological data which are received on magnetic tapes in a specified format.

  14. Computer-assisted coding and clinical documentation: first things first.

    PubMed

    Tully, Melinda; Carmichael, Angela

    2012-10-01

    Computer-assisted coding tools have the potential to drive improvements in seven areas: transparency of coding; productivity (generally by 20 to 25 percent for inpatient claims); accuracy (by improving the specificity of documentation); cost containment (by reducing overtime expenses, audit fees, and denials); compliance; efficiency; and consistency.

  15. Code 672 observational science branch computer networks

    NASA Technical Reports Server (NTRS)

    Hancock, D. W.; Shirk, H. G.

    1988-01-01

    In general, networking increases productivity due to the speed of transmission, easy access to remote computers, ability to share files, and increased availability of peripherals. Two different networks within the Observational Science Branch are described in detail.

  16. APC: A New Code for Atmospheric Polarization Computations

    NASA Technical Reports Server (NTRS)

    Korkin, Sergey V.; Lyapustin, Alexei I.; Rozanov, Vladimir V.

    2014-01-01

    A new polarized radiative transfer code Atmospheric Polarization Computations (APC) is described. The code is based on separation of the diffuse light field into anisotropic and smooth (regular) parts. The anisotropic part is computed analytically. The smooth regular part is computed numerically using the discrete ordinates method. Vertical stratification of the atmosphere, common types of bidirectional surface reflection, and scattering by spherical particles or spheroids are included. Particular consideration is given to the computation of the bidirectional polarization distribution function (BPDF) of the waved ocean surface.

  17. Hanford Meteorological Station computer codes: Volume 10, The ARCHIVE computer code

    SciTech Connect

    Andrews, G.L.; Burk, K.W.

    1989-08-01

    The purpose of the ARCHIVE computer program is twofold: (1) convert selected hourly binary data into formatted ASCII data, and (2) organize the converted data into monthly files. Formatted ASCII files are easier to access on a routine basis. The program is executed once a day and is initiated from a command file that submits itself to the SYS$BATCH queue on a daily basis. The monthly files are stored on the HMS computer's fixed hard disk and are merged into yearly files (located on removable disk packs) at the end of each year. This report describes the data bases maintained at the HMS, gives an overview of the ARCHIVE program, describes input and output files accessed by the ARCHIVE program, provides a description of program initiation, and discusses the limitations of the ARCHIVE program. A section on trouble-shooting is included. In addition, the appendixes contain flow charts, detailed descriptions, and source code listings for the ARCHIVE program and related subroutines. A description of the ARCHIVE command file and the data input and output files completes the report. 3 refs., 1 fig.

  18. Analysis of airborne antenna systems using geometrical theory of diffraction and moment method computer codes

    NASA Technical Reports Server (NTRS)

    Hartenstein, Richard G., Jr.

    1985-01-01

    Computer codes have been developed to analyze antennas on aircraft and in the presence of scatterers. The purpose of this study is to use these codes to develop accurate computer models of various aircraft and antenna systems. The antenna systems analyzed are a P-3B L-Band antenna, an A-7E UHF relay pod antenna, and a traffic advisory antenna system installed on a Bell Long Ranger helicopter. Computer results are compared to measured ones with good agreement. These codes can be used in the design stage of an antenna system to determine the optimum antenna location and save valuable time and costly flight hours.

  19. Enhancements to the STAGS computer code

    NASA Technical Reports Server (NTRS)

    Rankin, C. C.; Stehlin, P.; Brogan, F. A.

    1986-01-01

    The power of the STAGS family of programs was greatly enhanced. Members of the family include STAGS-C1 and RRSYS. As a result of improvements implemented, it is now possible to address the full collapse of a structural system, up to and beyond critical points where its resistance to the applied loads vanishes or suddenly changes. This also includes the important class of problems where a multiplicity of solutions exists at a given point (bifurcation), and where until now no solution could be obtained along any alternate (secondary) load path with any standard production finite element code.

  20. NASA Lewis Stirling engine computer code evaluation

    NASA Technical Reports Server (NTRS)

    Sullivan, Timothy J.

    1989-01-01

    In support of the U.S. Department of Energy's Stirling Engine Highway Vehicle Systems program, the NASA Lewis Stirling engine performance code was evaluated by comparing code predictions without engine-specific calibration factors to GPU-3, P-40, and RE-1000 Stirling engine test data. The error in predicting power output was -11 percent for the P-40 and 12 percent for the RE-1000 at design conditions and 16 percent for the GPU-3 at near-design conditions (2000 rpm engine speed versus 3000 rpm at design). The efficiency and heat input predictions showed better agreement with engine test data than did the power predictions. Considering all data points, the error in predicting the GPU-3 brake power was significantly larger than for the other engines and was mainly a result of inaccuracy in predicting the pressure phase angle. Analysis of this pressure phase angle prediction error suggested that improvements to the cylinder hysteresis loss model could have a significant effect on overall Stirling engine performance predictions.

  1. NASA Lewis Stirling engine computer code evaluation

    SciTech Connect

    Sullivan, T.J.

    1989-01-01

    In support of the US Department of Energy's Stirling Engine Highway Vehicle Systems program, the NASA Lewis Stirling engine performance code was evaluated by comparing code predictions without engine-specific calibration factors to GPU-3, P-40, and RE-1000 Stirling engine test data. The error in predicting power output was -11 percent for the P-40 and 12 percent for the RE-1000 at design conditions and 16 percent for the GPU-3 at near-design conditions (2000 rpm engine speed versus 3000 rpm at design). The efficiency and heat input predictions showed better agreement with engine test data than did the power predictions. Considering all data points, the error in predicting the GPU-3 brake power was significantly larger than for the other engines and was mainly a result of inaccuracy in predicting the pressure phase angle. Analysis of this pressure phase angle prediction error suggested that improvement to the cylinder hysteresis loss model could have a significant effect on overall Stirling engine performance predictions. 13 refs., 26 figs., 3 tabs.

  2. Computer code for intraply hybrid composite design

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Sinclair, J. H.

    1981-01-01

    A computer program is described for intraply hybrid composite design (INHYD). The program includes several composite micromechanics theories, intraply hybrid composite theories, and a hygrothermomechanical theory. These theories provide INHYD with considerable flexibility and capability which the user can exercise through several available options. Key features and capabilities of INHYD are illustrated through selected samples.

  3. An algorithm for computing the distance spectrum of trellis codes

    NASA Technical Reports Server (NTRS)

    Rouanne, Marc; Costello, Daniel J., Jr.

    1989-01-01

    A class of quasiregular codes is defined for which the distance spectrum can be calculated from the codeword corresponding to the all-zero information sequence. Convolutional codes and regular codes are both quasiregular, as well as most of the best known trellis codes. An algorithm to compute the distance spectrum of linear, regular, and quasiregular trellis codes is presented. In particular, it can calculate the weight spectrum of convolutional (linear trellis) codes and the distance spectrum of most of the best known trellis codes. The codes do not have to be linear or regular, and the signals do not have to be used with equal probabilities. The algorithm is derived from a bidirectional stack algorithm, although it could also be based on the Viterbi algorithm. The algorithm is used to calculate the beginning of the distance spectrum of some of the best known trellis codes and to compute tight estimates on the first-event-error probability and on the bit-error probability.
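
    The flavor of such a computation can be conveyed with a much simpler breadth-first enumeration (not the paper's bidirectional stack algorithm) that counts first-event error paths by output weight for a small textbook code, here the rate-1/2, memory-2 convolutional code with octal generators (7, 5).

    ```python
    from collections import Counter

    G = (0b111, 0b101)   # generators (7, 5) octal; rate 1/2, memory M = 2
    M = 2

    def step(state, bit):
        """One trellis transition: returns (next_state, branch output weight)."""
        reg = (bit << M) | state                  # (newest bit, state bits)
        weight = sum(bin(reg & g).count("1") % 2 for g in G)
        return reg >> 1, weight

    def weight_spectrum(max_weight=10):
        """Count first-event error paths against the all-zero path (the code
        is linear, so this yields the distance spectrum)."""
        spectrum = Counter()
        frontier = [step(0, 1)]                   # diverge from the zero state
        while frontier:
            nxt = []
            for state, w in frontier:
                for bit in (0, 1):
                    ns, dw = step(state, bit)
                    if w + dw > max_weight:
                        continue                  # prune heavy partial paths
                    if ns == 0:
                        spectrum[w + dw] += 1     # remerged: record the event
                    else:
                        nxt.append((ns, w + dw))
            frontier = nxt
        return dict(sorted(spectrum.items()))

    # Free distance 5 with multiplicities 1, 2, 4, 8, ... for this code.
    print(weight_spectrum())
    ```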

  4. Python interface generator for Fortran based codes (a code development aid)

    2012-02-22

    Forthon generates links between Fortran and Python. Python is a high level, object oriented, interactive and scripting language that allows a flexible and versatile interface to computational tools. The Forthon package generates the necessary wrapping code which allows access to the Fortran database and to the Fortran subroutines and functions. This provides a development package where the computationally intensive parts of a code can be written in efficient Fortran, and the high level controlling code can be written in the much more versatile Python language.
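
    The same Fortran-from-Python pattern can be illustrated with NumPy's f2py, a standard tool analogous to (but distinct from) Forthon; the subroutine, module name, and call below are illustrative, and the exact generated signature can vary.

    ```python
    # saxpy.f90 -- the computationally intensive kernel, in Fortran:
    #
    #   subroutine saxpy(n, a, x, y)
    #     integer :: n
    #     real(8), intent(in) :: a, x(n)
    #     real(8), intent(inout) :: y(n)
    #     y = a * x + y
    #   end subroutine saxpy
    #
    # Built once into a Python extension module:
    #   python -m numpy.f2py -c saxpy.f90 -m flib
    import numpy as np
    import flib                       # the generated wrapper module

    x = np.arange(5, dtype=np.float64)
    y = np.ones(5, dtype=np.float64)
    flib.saxpy(2.0, x, y)             # f2py infers n from the array extent
    print(y)                          # [1. 3. 5. 7. 9.]; control stays in Python
    ```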

  5. Computer vision cracks the leaf code

    PubMed Central

    Wilf, Peter; Zhang, Shengping; Chikkerur, Sharat; Little, Stefan A.; Wing, Scott L.; Serre, Thomas

    2016-01-01

    Understanding the extremely variable, complex shape and venation characters of angiosperm leaves is one of the most challenging problems in botany. Machine learning offers opportunities to analyze large numbers of specimens, to discover novel leaf features of angiosperm clades that may have phylogenetic significance, and to use those characters to classify unknowns. Previous computer vision approaches have primarily focused on leaf identification at the species level. It remains an open question whether learning and classification are possible among major evolutionary groups such as families and orders, which usually contain hundreds to thousands of species each and exhibit many times the foliar variation of individual species. Here, we tested whether a computer vision algorithm could use a database of 7,597 leaf images from 2,001 genera to learn features of botanical families and orders, then classify novel images. The images are of cleared leaves, specimens that are chemically bleached, then stained to reveal venation. Machine learning was used to learn a codebook of visual elements representing leaf shape and venation patterns. The resulting automated system learned to classify images into families and orders with a success rate many times greater than chance. Of direct botanical interest, the responses of diagnostic features can be visualized on leaf images as heat maps, which are likely to prompt recognition and evolutionary interpretation of a wealth of novel morphological characters. With assistance from computer vision, leaves are poised to make numerous new contributions to systematic and paleobotanical studies. PMID:26951664

  6. Computer vision cracks the leaf code.

    PubMed

    Wilf, Peter; Zhang, Shengping; Chikkerur, Sharat; Little, Stefan A; Wing, Scott L; Serre, Thomas

    2016-03-22

    Understanding the extremely variable, complex shape and venation characters of angiosperm leaves is one of the most challenging problems in botany. Machine learning offers opportunities to analyze large numbers of specimens, to discover novel leaf features of angiosperm clades that may have phylogenetic significance, and to use those characters to classify unknowns. Previous computer vision approaches have primarily focused on leaf identification at the species level. It remains an open question whether learning and classification are possible among major evolutionary groups such as families and orders, which usually contain hundreds to thousands of species each and exhibit many times the foliar variation of individual species. Here, we tested whether a computer vision algorithm could use a database of 7,597 leaf images from 2,001 genera to learn features of botanical families and orders, then classify novel images. The images are of cleared leaves, specimens that are chemically bleached, then stained to reveal venation. Machine learning was used to learn a codebook of visual elements representing leaf shape and venation patterns. The resulting automated system learned to classify images into families and orders with a success rate many times greater than chance. Of direct botanical interest, the responses of diagnostic features can be visualized on leaf images as heat maps, which are likely to prompt recognition and evolutionary interpretation of a wealth of novel morphological characters. With assistance from computer vision, leaves are poised to make numerous new contributions to systematic and paleobotanical studies. PMID:26951664

  7. Codes & standards research, development & demonstration Roadmap

    SciTech Connect

    None, None

    2008-07-22

    This Roadmap is a guide to the Research, Development & Demonstration activities that will provide data required for SDOs to develop performance-based codes and standards for a commercial hydrogen fueled transportation sector in the U.S.

  8. Application of computational fluid dynamics methods to improve thermal hydraulic code analysis

    NASA Astrophysics Data System (ADS)

    Sentell, Dennis Shannon, Jr.

    A computational fluid dynamics code is used to model the primary natural circulation loop of a proposed small modular reactor for comparison to experimental data and best-estimate thermal-hydraulic code results. Recent advances in computational fluid dynamics code modeling capabilities make them attractive alternatives to the current conservative approach of coupled best-estimate thermal hydraulic codes and uncertainty evaluations. The results from a computational fluid dynamics analysis are benchmarked against the experimental test results of a 1:3 length, 1:254 volume, full pressure and full temperature scale small modular reactor during steady-state power operations and during a depressurization transient. A comparative evaluation of the experimental data, the thermal hydraulic code results and the computational fluid dynamics code results provides an opportunity to validate the best-estimate thermal hydraulic code's treatment of a natural circulation loop and provide insights into expanded use of the computational fluid dynamics code in future designs and operations. Additionally, a sensitivity analysis is conducted to determine those physical phenomena most impactful on operations of the proposed reactor's natural circulation loop. The combination of the comparative evaluation and sensitivity analysis provides the resources for increased confidence in model developments for natural circulation loops and provides for reliability improvements of the thermal hydraulic code.

  9. Calculations of reactor-accident consequences, Version 2. CRAC2: computer code user's guide

    SciTech Connect

    Ritchie, L.T.; Johnson, J.D.; Blond, R.M.

    1983-02-01

    The CRAC2 computer code is a revision of the Calculation of Reactor Accident Consequences computer code, CRAC, developed for the Reactor Safety Study. The CRAC2 computer code incorporates significant modeling improvements in the areas of weather sequence sampling and emergency response, and refinements to the plume rise, atmospheric dispersion, and wet deposition models. New output capabilities have also been added. This guide is intended to facilitate the informed and intelligent use of CRAC2. It includes descriptions of the input data, the output results, the file structures, control information, and five sample problems.

  10. Interface design of VSOP'94 computer code for safety analysis

    SciTech Connect

    Natsir, Khairina; Andiwijayakusuma, D.; Wahanani, Nursinta Adi; Yazid, Putranto Ilham

    2014-09-30

    Today, most software applications, including those in the nuclear field, come with a graphical user interface. VSOP'94 (Very Superior Old Program) was designed to simplify the process of performing reactor simulation. VSOP is an integrated code system that simulates the life history of a nuclear reactor and is devoted to education and research. One advantage of the VSOP program is its ability to calculate the neutron spectrum estimation, fuel cycle, 2-D diffusion, resonance integral, estimation of reactor fuel costs, and integrated thermal hydraulics. VSOP can also be used for comparative studies and simulation of reactor safety. However, the existing VSOP is a conventional program, developed in Fortran 65, with several usability problems: it runs only on DEC Alpha mainframe platforms, provides text-based output, and is difficult to use, especially in data preparation and interpretation of results. We developed GUI-VSOP, an interface program that facilitates the preparation of data, runs the VSOP code, and reads the results in a more user-friendly way, and that is usable on a personal computer (PC). Modifications include the development of interfaces for preprocessing, processing, and postprocessing. The GUI-based preprocessing interface aims to provide a convenient way of preparing data. The processing interface is intended to provide convenience in configuring input files and libraries and in compiling the VSOP code. The postprocessing interface is designed to visualize the VSOP output in table and graphic forms. GUI-VSOP is expected to be useful in simplifying and speeding up the process and the analysis of safety aspects.

  11. Interface design of VSOP'94 computer code for safety analysis

    NASA Astrophysics Data System (ADS)

    Natsir, Khairina; Yazid, Putranto Ilham; Andiwijayakusuma, D.; Wahanani, Nursinta Adi

    2014-09-01

    Today, most software applications, including those in the nuclear field, come with a graphical user interface. VSOP'94 (Very Superior Old Program) was designed to simplify the process of performing reactor simulation. VSOP is an integrated code system that simulates the life history of a nuclear reactor and is devoted to education and research. One advantage of the VSOP program is its ability to calculate the neutron spectrum estimation, fuel cycle, 2-D diffusion, resonance integral, estimation of reactor fuel costs, and integrated thermal hydraulics. VSOP can also be used for comparative studies and simulation of reactor safety. However, the existing VSOP is a conventional program, developed in Fortran 65, with several usability problems: it runs only on DEC Alpha mainframe platforms, provides text-based output, and is difficult to use, especially in data preparation and interpretation of results. We developed GUI-VSOP, an interface program that facilitates the preparation of data, runs the VSOP code, and reads the results in a more user-friendly way, and that is usable on a personal computer (PC). Modifications include the development of interfaces for preprocessing, processing, and postprocessing. The GUI-based preprocessing interface aims to provide a convenient way of preparing data. The processing interface is intended to provide convenience in configuring input files and libraries and in compiling the VSOP code. The postprocessing interface is designed to visualize the VSOP output in table and graphic forms. GUI-VSOP is expected to be useful in simplifying and speeding up the process and the analysis of safety aspects.

  12. Analyzing Pulse-Code Modulation On A Small Computer

    NASA Technical Reports Server (NTRS)

    Massey, David E.

    1988-01-01

    System for analysis of pulse-code modulation (PCM) comprises personal computer, computer program, and peripheral interface adapter on circuit board that plugs into expansion bus of computer. Functions essentially as "snapshot" PCM decommutator, which accepts and stores thousands of frames of PCM data, then sifts through them repeatedly to process them according to routines specified by operator. Enables faster testing and involves less equipment than older testing systems.

  13. SCDAP/RELAP5 code development and assessment

    SciTech Connect

    Allison, C.M.; Hohorst, J.K.

    1996-03-01

    The SCDAP/RELAP5 computer code is designed to describe the overall reactor coolant system thermal-hydraulic response, core damage progression, and fission product release during severe accidents. The code is being developed at the Idaho National Engineering Laboratory under the primary sponsorship of the Office of Nuclear Regulatory Research of the U.S. Nuclear Regulatory Commission. The current version of the code is SCDAP/RELAP5/MOD3.1e. Although MOD3.1e contains a number of significant improvements since the initial version of MOD3.1 was released, new models to treat the behavior of the fuel and cladding during reflood have had the most dramatic impact on the code's calculations. This paper provides a brief description of the new reflood models, presents highlights of the assessment of the current version of MOD3.1, and discusses future SCDAP/RELAP5/MOD3.2 model development activities.

  14. Guidelines for developing vectorizable computer programs

    NASA Astrophysics Data System (ADS)

    Miner, E. W.

    Some fundamental principles for developing computer programs which are compatible with array-oriented computers are presented. The emphasis is on basic techniques for structuring computer codes which are applicable in FORTRAN and do not require a special programming language or exact a significant penalty on a scalar computer. Researchers who are using numerical techniques to solve problems in engineering can apply these basic principles and thus develop transportable computer programs (in FORTRAN) which contain much vectorizable code. The vector architecture of the ASC is discussed so that the requirements of array processing can be better appreciated. The "vectorization" of a finite-difference viscous shock-layer code is used as an example to illustrate the benefits and some of the difficulties involved. Increases in computing speed with vectorization are illustrated with results from the viscous shock-layer code and from a finite-element shock tube code. The applicability of these principles was substantiated through running programs on other computers with array-associated computing characteristics, such as the Hewlett-Packard (H-P) 1000-F.
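
    The structuring principle carries over directly to modern array languages; the sketch below (in NumPy rather than the report's FORTRAN) contrasts a scalar loop with its vectorizable whole-array equivalent for a three-point smoothing stencil.

    ```python
    import numpy as np

    # Scalar-style loop: one trip per element, hard for a vector unit to use.
    def smooth_scalar(u):
        v = u.copy()
        for i in range(1, len(u) - 1):
            v[i] = 0.25 * u[i - 1] + 0.5 * u[i] + 0.25 * u[i + 1]
        return v

    # Vectorizable form: whole-array shifted operations with no loop-carried
    # dependence, which is the structure array-oriented hardware exploits.
    def smooth_vector(u):
        v = u.copy()
        v[1:-1] = 0.25 * u[:-2] + 0.5 * u[1:-1] + 0.25 * u[2:]
        return v

    u = np.random.rand(10_000)
    assert np.allclose(smooth_scalar(u), smooth_vector(u))
    ```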

  15. Preliminary blade design using integrated computer codes

    NASA Astrophysics Data System (ADS)

    Ryan, Arve

    1988-12-01

    Loads on the root of a horizontal axis wind turbine (HAWT) rotor blade were analyzed. A design solution for the root area is presented. The loads on the blades are given by different load cases that are specified. To get a clear picture of the influence of different parameters, the whole blade is designed from scratch. This is only a preliminary design study and the blade should not be looked upon as a construction reference. The use of computer programs for the design and optimization is extensive. After the external geometry is set and the aerodynamic loads calculated, parameters like design stresses and laminate thicknesses are run through the available programs, and a blade design, optimized on the basis of the facts and estimates used, is shown.

  16. User's guide for vectorized code EQUIL for calculating equilibrium chemistry on Control Data STAR-100 computer

    NASA Technical Reports Server (NTRS)

    Kumar, A.; Graves, R. A., Jr.; Weilmuenster, K. J.

    1980-01-01

    A vectorized code, EQUIL, was developed for calculating the equilibrium chemistry of a reacting gas mixture on the Control Data STAR-100 computer. The code provides species mole fractions, mass fractions, and thermodynamic and transport properties of the mixture for given temperature, pressure, and elemental mass fractions. The code is set up for a system of elements comprising electrons, H, He, C, O, and N. In all, 24 chemical species are included.

  17. External exposure model in the RESRAD computer code.

    SciTech Connect

    Kamboj, S.; Yu, C.; Environmental Assessment

    2002-06-01

    An external exposure model has been developed for the RESRAD computer code that provides flexibility in modeling soil contamination configurations for calculating external doses to exposed individuals. This model is based on the dose coefficients given in the U.S. Environmental Protection Agency's Federal Guidance Report No. 12 (FGR-12) and the point kernel method. It extends the applicability of FGR-12 data to include the effects of different source geometries, such as cover thickness, source thickness, source area, and shape of contaminated area of a specific site. A depth factor function was developed to express the dependence of the dose on the source thickness. A cover-and-depth factor function, derived from this depth factor function, takes into account the dependence of dose on the thickness of the source region and the thickness of the cover above the source region. To further extend the model for realistic geometries, area and shape factors were derived that depend not only on the lateral extent of the contamination, but also on source thickness, cover thickness, and radionuclides present. Results obtained with the model generally compare well with those from the Monte Carlo N-Particle transport code.
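
    How such factors compose can be sketched as follows; the exponential forms and all numbers here are hypothetical stand-ins, whereas the actual RESRAD factor functions are fitted to FGR-12 dose coefficients.

    ```python
    import math

    MU = 0.12   # assumed effective attenuation coefficient, 1/cm (illustrative)

    def depth_factor(source_cm):
        """Dose dependence on source thickness; saturates for a thick source."""
        return 1.0 - math.exp(-MU * source_cm)

    def cover_and_depth_factor(source_cm, cover_cm):
        """Attenuate the depth factor by the clean cover above the source."""
        return math.exp(-MU * cover_cm) * depth_factor(source_cm)

    # Dose rate = concentration x infinite-slab dose coefficient x geometry
    # corrections (the area and shape factors are set to 1 in this sketch).
    conc = 1.0          # pCi/g (illustrative)
    dcf_infinite = 2.0  # mrem/yr per pCi/g for an infinite slab (illustrative)
    dose = conc * dcf_infinite * cover_and_depth_factor(source_cm=15.0,
                                                        cover_cm=5.0)
    print(f"external dose rate ~ {dose:.3f} mrem/yr")
    ```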

  18. A three-dimensional magnetostatics computer code for insertion devices.

    PubMed

    Chubar, O; Elleaume, P; Chavanne, J

    1998-05-01

    RADIA is a three-dimensional magnetostatics computer code optimized for the design of undulators and wigglers. It solves boundary magnetostatics problems with magnetized and current-carrying volumes using the boundary integral approach. The magnetized volumes can be arbitrary polyhedrons with non-linear (iron) or linear anisotropic (permanent magnet) characteristics. The current-carrying elements can be straight or curved blocks with rectangular cross sections. Boundary conditions are simulated by the technique of mirroring. Analytical formulae used for the computation of the field produced by a magnetized volume of a polyhedron shape are detailed. The RADIA code is written in object-oriented C++ and interfaced to Mathematica [Mathematica is a registered trademark of Wolfram Research, Inc.]. The code outperforms currently available finite-element packages with respect to the CPU time of the solver and accuracy of the field integral estimations. An application of the code to the case of a wedge-pole undulator is presented.

  19. Recent applications of the transonic wing analysis computer code, TWING

    NASA Technical Reports Server (NTRS)

    Subramanian, N. R.; Holst, T. L.; Thomas, S. D.

    1982-01-01

    An evaluation of the transonic-wing-analysis computer code TWING is given. TWING utilizes a fully implicit approximate factorization iteration scheme to solve the full potential equation in conservative form. A numerical elliptic-solver grid-generation scheme is used to generate the required finite-difference mesh. Several wing configurations were analyzed, and the limits of applicability of this code were evaluated. Comparisons of computed results were made with available experimental data. Results indicate that the code is robust, accurate (when significant viscous effects are not present), and efficient. TWING generally produces solutions an order of magnitude faster than other conservative full potential codes using successive-line overrelaxation. The present method is applicable to a wide range of isolated wing configurations including high-aspect-ratio transport wings and low-aspect-ratio, high-sweep, fighter configurations.

  1. Benchmark Solutions for Computational Aeroacoustics (CAA) Code Validation

    NASA Technical Reports Server (NTRS)

    Scott, James R.

    2004-01-01

    NASA has conducted a series of Computational Aeroacoustics (CAA) Workshops on Benchmark Problems to develop a set of realistic CAA problems that can be used for code validation. In the Third (1999) and Fourth (2003) Workshops, the single airfoil gust response problem, with real geometry effects, was included as one of the benchmark problems. Respondents were asked to calculate the airfoil RMS pressure and far-field acoustic intensity for different airfoil geometries and a wide range of gust frequencies. This paper presents the validated solutions that have been obtained for the benchmark problem and, in addition, compares them with classical flat plate results. It is seen that airfoil geometry has a strong effect on the airfoil unsteady pressure, and a significant effect on the far-field acoustic intensity. Those parts of the benchmark problem that have not yet been adequately solved are identified and presented as a challenge to the CAA research community.

  2. PEBBLES: A COMPUTER CODE FOR MODELING PACKING, FLOW AND RECIRCULATION OF PEBBLES IN A PEBBLE BED REACTOR

    SciTech Connect

    Joshua J. Cogliati; Abderrafi M. Ougouag

    2006-10-01

    A comprehensive, high fidelity model for pebble flow has been developed and embodied in the PEBBLES computer code. In this paper, a description of the physical artifacts included in the model is presented and some results from using the computer code for predicting the features of pebble flow and packing in a realistic pebble bed reactor design are shown. The sensitivity of models to various physical parameters is also discussed.

  3. A FORTRAN computer code for calculating flows in multiple-blade-element cascades

    NASA Technical Reports Server (NTRS)

    Mcfarland, E. R.

    1985-01-01

    A solution technique has been developed for solving the multiple-blade-element, surface-of-revolution, blade-to-blade flow problem in turbomachinery. The calculation solves approximate flow equations which include the effects of compressibility, radius change, blade-row rotation, and variable stream sheet thickness. An integral equation solution (i.e., panel method) is used to solve the equations. A description of the computer code and computer code input is given in this report.

  4. A Computer Code for TRIGA Type Reactors.

    SciTech Connect

    1992-04-09

    Version 00 TRIGAP was developed for reactor physics calculations of the 250 kW TRIGA reactor. The program can be used for criticality predictions, power peaking predictions, fuel element burn-up calculations and data logging, and in-core fuel management and fuel utilization improvement.

  5. Selection of a computer code for Hanford low-level waste engineered-system performance assessment

    SciTech Connect

    McGrail, B.P.; Mahoney, L.A.

    1995-10-01

    Planned performance assessments for the proposed disposal of low-level waste (LLW) glass produced from remediation of wastes stored in underground tanks at Hanford, Washington will require calculations of radionuclide release rates from the subsurface disposal facility. These calculations will be done with the aid of computer codes. Currently available computer codes were ranked in terms of the feature sets implemented in the code that match a set of physical, chemical, numerical, and functional capabilities needed to assess release rates from the engineered system. The needed capabilities were identified from an analysis of the important physical and chemical processes expected to affect LLW glass corrosion and the mobility of radionuclides. The highest ranked computer code was found to be the ARES-CT code, developed at PNL for the US Department of Energy for the evaluation of land disposal sites.

  6. Fire aerosol experiment and comparisons with computer code predictions

    NASA Astrophysics Data System (ADS)

    Gregory, W. S.; Nichols, B. D.; White, B. W.; Smith, P. R.; Leslie, I. H.; Corkran, J. R.

    1988-08-01

    Los Alamos National Laboratory, in cooperation with New Mexico State University, has carried out a series of tests to provide experimental data on fire-generated aerosol transport. These data will be used to verify the aerosol transport capabilities of the FIRAC computer code. FIRAC was developed by Los Alamos for the U.S. Nuclear Regulatory Commission. It is intended to be used by safety analysts to evaluate the effects of hypothetical fires on nuclear plants. One of the most significant aspects of this analysis deals with smoke and radioactive material movement throughout the plant. The tests have been carried out using an industrial furnace that can generate gas temperatures up to 300 C. To date, we have used quartz aerosol with a median diameter of about 10 microns as the fire aerosol simulant. We also plan to use fire-generated aerosols of polystyrene and polymethyl methacrylate (PMMA). The test variables include two nominal gas flow rates (150 and 300 cu ft/min) and three nominal gas temperatures (ambient, 150 C, and 300 C). The test results are presented in the form of plots of aerosol deposition vs length of duct. In addition, the mass of aerosol caught in a high-efficiency particulate air (HEPA) filter during the tests is reported. The tests are simulated with the FIRAC code, and the results are compared with the experimental data.
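
    For orientation, deposition along a duct is often described by a simple plug-flow relation, C(x)/C0 = exp(-v_d P x / Q); the sketch below uses that generic relation with assumed numbers and is not FIRAC's transport model.

    ```python
    import math

    # Generic plug-flow duct-deposition relation (not FIRAC's actual model):
    # C(x) = C0 * exp(-v_d * P * x / Q), with deposition velocity v_d,
    # duct perimeter P, and volumetric flow Q. Numbers are illustrative.
    v_d = 0.005                  # deposition velocity, m/s (assumed)
    side = 0.3                   # square duct side, m (assumed)
    P = 4 * side                 # wetted perimeter, m
    Q = 0.14                     # roughly 300 cu ft/min in m^3/s
    C0 = 1.0                     # inlet aerosol concentration (normalized)

    for x in (0.0, 2.0, 5.0, 10.0):
        C = C0 * math.exp(-v_d * P * x / Q)
        print(f"x = {x:4.1f} m  ->  C/C0 = {C:.3f}")
    ```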

  7. Application of the RESRAD computer code to VAMP scenario S

    SciTech Connect

    Gnanapragasam, E.K.; Yu, C.

    1997-03-01

    The RESRAD computer code developed at Argonne National Laboratory was among 11 models from 11 countries participating in the international Scenario S validation of radiological assessment models with Chernobyl fallout data from southern Finland. The validation test was conducted by the Multiple Pathways Assessment Working Group of the Validation of Environmental Model Predictions (VAMP) program coordinated by the International Atomic Energy Agency. RESRAD was enhanced to provide an output of contaminant concentrations in environmental media and in food products to compare with measured data from southern Finland. Probability distributions for inputs that were judged to be most uncertain were obtained from the literature and from information provided in the scenario description prepared by the Finnish Centre for Radiation and Nuclear Safety. The deterministic version of RESRAD was run repeatedly to generate probability distributions for the required predictions. These predictions were used later to verify the probabilistic RESRAD code. The RESRAD predictions of radionuclide concentrations are compared with measured concentrations in selected food products. The radiological doses predicted by RESRAD are also compared with those estimated by the Finnish Centre for Radiation and Nuclear Safety.

  8. HYDRA, A finite element computational fluid dynamics code: User manual

    SciTech Connect

    Christon, M.A.

    1995-06-01

    HYDRA is a finite element code which has been developed specifically to attack the class of transient, incompressible, viscous, computational fluid dynamics problems which are predominant in the world which surrounds us. The goal for HYDRA has been to achieve high performance across a spectrum of supercomputer architectures without sacrificing any of the aspects of the finite element method which make it so flexible and permit application to a broad class of problems. As supercomputer algorithms evolve, the continuing development of HYDRA will strive to achieve optimal mappings of the most advanced flow solution algorithms onto supercomputer architectures. HYDRA has drawn upon the many years of finite element expertise constituted by DYNA3D and NIKE3D. Certain key architectural ideas from both DYNA3D and NIKE3D have been adopted and further improved to fit the advanced dynamic memory management and data structures implemented in HYDRA. The philosophy for HYDRA is to focus on mapping flow algorithms to computer architectures to try to achieve a high level of performance, rather than just performing a port.

  9. Foundational development of an advanced nuclear reactor integrated safety code.

    SciTech Connect

    Clarno, Kevin; Lorber, Alfred Abraham; Pryor, Richard J.; Spotz, William F.; Schmidt, Rodney Cannon; Belcourt, Kenneth; Hooper, Russell Warren; Humphries, Larry LaRon

    2010-02-01

    This report describes the activities and results of a Sandia LDRD project whose objective was to develop and demonstrate foundational aspects of a next-generation nuclear reactor safety code that leverages advanced computational technology. The project scope was directed towards the systems-level modeling and simulation of an advanced, sodium cooled fast reactor, but the approach developed has a more general applicability. The major accomplishments of the LDRD are centered around the following two activities. (1) The development and testing of LIME, a Lightweight Integrating Multi-physics Environment for coupling codes that is designed to enable both 'legacy' and 'new' physics codes to be combined and strongly coupled using advanced nonlinear solution methods. (2) The development and initial demonstration of BRISC, a prototype next-generation nuclear reactor integrated safety code. BRISC leverages LIME to tightly couple the physics models in several different codes (written in a variety of languages) into one integrated package for simulating accident scenarios in a liquid sodium cooled 'burner' nuclear reactor. Other activities and accomplishments of the LDRD include (a) further development, application and demonstration of the 'non-linear elimination' strategy to enable physics codes that do not provide residuals to be incorporated into LIME, (b) significant extensions of the RIO CFD code capabilities, (c) complex 3D solid modeling and meshing of major fast reactor components and regions, and (d) an approach for multi-physics coupling across non-conformal mesh interfaces.

  10. Parallelization of Finite Element Analysis Codes Using Heterogeneous Distributed Computing

    NASA Technical Reports Server (NTRS)

    Ozguner, Fusun

    1996-01-01

    Performance gains in computer design are quickly consumed as users seek to analyze larger problems to a higher degree of accuracy. Innovative computational methods, such as parallel and distributed computing, seek to multiply the power of existing hardware technology to satisfy the computational demands of large applications. In the early stages of this project, experiments were performed using two large, coarse-grained applications, CSTEM and METCAN. These applications were parallelized on an Intel iPSC/860 hypercube. It was found that the overall speedup was very low, due to large, inherently sequential code segments present in the applications. The overall execution time, T_par, of the application is dependent on these sequential segments. If these segments make up a significant fraction of the overall code, the application will have a poor speedup measure.
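
    This is Amdahl's law in miniature: with a serial fraction f, the speedup on n processors is S(n) = 1 / (f + (1 - f)/n), which saturates at 1/f no matter how many processors are added. The fractions below are illustrative, not measurements from CSTEM or METCAN.

    ```python
    # Amdahl's-law estimate of the behavior described above.
    def speedup(f_serial, n_procs):
        return 1.0 / (f_serial + (1.0 - f_serial) / n_procs)

    for f in (0.05, 0.20, 0.40):           # illustrative serial fractions
        print(f"f = {f:.2f}:",
              [round(speedup(f, n), 2) for n in (2, 8, 32, 128)])
    ```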

  11. Proceedings of the conference on computer codes and the linear accelerator community

    SciTech Connect

    Cooper, R.K.

    1990-07-01

    The conference whose proceedings you are reading was envisioned as the second in a series, the first having been held in San Diego in January 1988. The intended participants were those people who are actively involved in writing and applying computer codes for the solution of problems related to the design and construction of linear accelerators. The first conference reviewed many of the codes both extant and under development. This second conference provided an opportunity to update the status of those codes, and to provide a forum in which emerging new 3D codes could be described and discussed. The afternoon poster session on the second day of the conference provided an opportunity for extended discussion. All in all, this conference was felt to be quite a useful interchange of ideas and developments in the field of 3D calculations, parallel computation, higher-order optics calculations, and code documentation and maintenance for the linear accelerator community. A third conference is planned.

  12. RESRAD: A computer code for evaluating radioactively contaminated sites

    SciTech Connect

    Yu, C.; Zielen, A.J.; Cheng, J.J.

    1993-12-31

    This document briefly describes the uses of the RESRAD computer code in calculating site-specific residual radioactive material guidelines and radiation dose-risk to an on-site individual (worker or resident) at a radioactively contaminated site. The adoption by the DOE in order 5400.5, pathway analysis methods, computer requirements, data display, the inclusion of chemical contaminants, benchmarking efforts, and supplemental information sources are all described. (GHH)

  13. DYNA3D Code Practices and Developments

    SciTech Connect

    Lin, L.; Zywicz, E.; Raboin, P.

    2000-04-21

    DYNA3D is an explicit, finite element code developed to solve high rate dynamic simulations for problems of interest to the engineering mechanics community. The DYNA3D code has been under continuous development since 1976[1] by the Methods Development Group in the Mechanical Engineering Department of Lawrence Livermore National Laboratory. The pace of code development activities has substantially increased in the past five years, growing from one to between four and six code developers. This has necessitated the use of software tools such as CVS (Concurrent Versions System) to help manage multiple version updates. While on-line documentation with an Adobe PDF manual helps to communicate software developments, periodically a summary document describing recent changes and improvements in DYNA3D software is needed. The first part of this report describes issues surrounding software versions and source control. The remainder of this report details the major capability improvements since the last publicly released version of DYNA3D in 1996. Not included here are the many hundreds of bug corrections and minor enhancements, nor the development in DYNA3D between the manual release in 1993[2] and the public code release in 1996.

  14. Software requirements specification document for the AREST code development

    SciTech Connect

    Engel, D.W.; McGrail, B.P.; Whitney, P.D.; Gray, W.J.; Williford, R.E.; White, M.D.; Eslinger, P.W.; Altenhofen, M.K.

    1993-11-01

    The Analysis of the Repository Source Term (AREST) computer code was selected in 1992 by the U.S. Department of Energy. The AREST code will be used to analyze the performance of an underground high level nuclear waste repository. The AREST code is being modified by the Pacific Northwest Laboratory (PNL) in order to evaluate the engineered barrier and waste package designs, model regulatory compliance, analyze sensitivities, and support total systems performance assessment modeling. The current version of the AREST code was developed to be a very useful tool for analyzing model uncertainties and sensitivities to input parameters. The code has also been used successfully in supplying source-terms that were used in a total systems performance assessment. The current version, however, has been found to be inadequate for the comparison and selection of a design for the waste package. This is due to the assumptions and simplifications made in the selection of the process and system models. Thus, the new version of the AREST code will be designed to focus on the details of the individual processes and implementation of more realistic models. This document describes the requirements of the new models that will be implemented. Included in this document is a section describing the near-field environmental conditions for this waste package modeling, description of the new process models that will be implemented, and a description of the computer requirements for the new version of the AREST code.

  15. User's manual for the ORIGEN2 computer code

    SciTech Connect

    Croff, A.G.

    1980-07-01

    This report describes how to use a revised version of the ORIGEN computer code, designated ORIGEN2. Included are a description of the input data, input deck organization, and sample input and output. ORIGEN2 can be obtained from the Radiation Shielding Information Center at ORNL.

  16. Plagiarism Detection Algorithm for Source Code in Computer Science Education

    ERIC Educational Resources Information Center

    Liu, Xin; Xu, Chan; Ouyang, Boyu

    2015-01-01

    Nowadays, computer programming is getting more necessary in the course of program design in college education. However, the trick of plagiarizing plus a little modification exists among some students' home works. It's not easy for teachers to judge if there's plagiarizing in source code or not. Traditional detection algorithms cannot fit this…
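
    As a point of reference, detectors in this family typically normalize identifiers and compare token n-gram fingerprints; the sketch below is a generic illustration of that idea, not the algorithm proposed in the paper.

    ```python
    import re

    # Minimal sketch of fingerprint-style source-code plagiarism detection:
    # tokenize, replace identifiers/literals with placeholders so renaming
    # variables does not defeat the comparison, then hash token n-grams.
    def fingerprints(code, n=4):
        tokens = re.findall(r"[A-Za-z_]\w*|\d+|\S", code)
        norm = ["ID" if re.match(r"[A-Za-z_]", t) else
                "NUM" if t.isdigit() else t for t in tokens]
        return {hash(tuple(norm[i:i + n])) for i in range(len(norm) - n + 1)}

    def similarity(a, b):
        fa, fb = fingerprints(a), fingerprints(b)
        return len(fa & fb) / max(1, len(fa | fb))   # Jaccard index

    original = "total = 0\nfor i in range(10): total += i"
    renamed  = "s = 0\nfor k in range(10): s += k"
    print(f"similarity = {similarity(original, renamed):.2f}")  # near 1.0
    ```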

  17. Validation of Numerical Codes to Compute Tsunami Runup And Inundation

    NASA Astrophysics Data System (ADS)

    Velioğlu, Deniz; Cevdet Yalçıner, Ahmet; Kian, Rozita; Zaytsev, Andrey

    2015-04-01

    FLOW 3D and NAMI DANCE are two numerical codes which can be applied to the analysis of the flow and motion of long waves. FLOW 3D simulates linear and nonlinear propagating surface waves as well as irregular waves, including long waves. NAMI DANCE uses a finite difference computational method to solve the nonlinear shallow water equations (NSWE) in long wave problems, specifically tsunamis. Both codes can be applied to tsunami simulations and the visualization of long waves, and both are capable of solving flooding problems. However, FLOW 3D is designed mainly to solve the flooding problem from land, and NAMI DANCE is designed to solve the flooding problem from the sea. These numerical codes are applied to some benchmark problems for validation and verification. One useful benchmark problem is the runup of solitary waves, which was investigated analytically and experimentally by Synolakis (1987). Since the 1970s, solitary waves have commonly been used to model tsunamis, especially in experimental and numerical studies. In this respect, a benchmark problem on the runup of solitary waves is a relevant choice for assessing the capability and validity of the numerical codes for the amplification of tsunamis. In this study both codes have been tested, compared, and validated by applying them to the analytical benchmark problem of solitary wave runup on a sloping beach. Comparison of the results showed that both codes are in good agreement with the analytical and experimental results and thus can be proposed for use in the inundation of long waves and tsunami hazard analysis.
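
    The analytical target for this benchmark is Synolakis' runup law for non-breaking solitary waves, R/d = 2.831 sqrt(cot beta) (H/d)^(5/4); the sketch below evaluates it for the canonical 1:19.85 laboratory beach, with wave heights chosen purely for illustration.

    ```python
    import math

    # Synolakis (1987) runup law for non-breaking solitary waves:
    # beach slope angle beta, wave height H, offshore depth d.
    def runup_ratio(H_over_d, cot_beta):
        return 2.831 * math.sqrt(cot_beta) * H_over_d**1.25

    cot_beta = 19.85                       # the 1:19.85 laboratory beach
    for H_over_d in (0.005, 0.01, 0.02):   # small, non-breaking waves
        print(f"H/d = {H_over_d:.3f} -> "
              f"R/d = {runup_ratio(H_over_d, cot_beta):.4f}")
    ```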

  18. Health Code Number (HCN) Development Procedure

    SciTech Connect

    Petrocchi, Rocky; Craig, Douglas K.; Bond, Jayne-Anne; Trott, Donna M.; Yu, Xiao-Ying

    2013-09-01

    This report provides a detailed description of health code numbers (HCNs) and of the procedure by which each HCN is assigned. It contains many guidelines and rationales for HCNs. HCNs are used in the chemical mixture methodology (CMM), a method recommended by the Department of Energy (DOE) for assessing health effects as a result of exposures to airborne aerosols in an emergency. The procedure is a useful tool for proficient HCN code developers. Intensive training and quality assurance with qualified HCN developers are required before an individual comprehends the procedure well enough to develop HCNs for DOE.

  19. A new computational decoding complexity measure of convolutional codes

    NASA Astrophysics Data System (ADS)

    Benchimol, Isaac B.; Pimentel, Cecilio; Souza, Richard Demo; Uchôa-Filho, Bartolomeu F.

    2014-12-01

    This paper presents a computational complexity measure of convolutional codes well suited to software implementations of the Viterbi algorithm (VA) operating with hard decision. We investigate the number of arithmetic operations performed by the decoding process over the conventional and minimal trellis modules. A relation between the complexity measure defined in this work and the one defined by McEliece and Lin is investigated. We also conduct a refined computer search for good convolutional codes (in terms of distance spectrum) with respect to two minimal trellis complexity measures. Finally, the computational cost of implementation of each arithmetic operation is determined in terms of machine cycles taken by its execution using a typical digital signal processor widely used for low-power telecommunications applications.
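
    A simplified tally in the same spirit (not the paper's exact measure, and for the conventional trellis only) counts the arithmetic of hard-decision Viterbi decoding per decoded bit for a rate-1/n, memory-nu code.

    ```python
    # Rough per-bit operation count for hard-decision Viterbi decoding on the
    # conventional trellis of a rate-1/n, memory-nu convolutional code.
    def viterbi_ops_per_bit(n, nu):
        states = 2 ** nu
        branches = 2 * states            # two branches into/out of each state
        metric_adds = branches * (n - 1) # Hamming branch metrics (n-1 adds each)
        acs_adds = branches              # add branch metric to path metric
        acs_compares = states            # one compare per two-way ACS
        return {"additions": metric_adds + acs_adds,
                "comparisons": acs_compares,
                "total": metric_adds + acs_adds + acs_compares}

    for nu in (2, 4, 6):                 # e.g., the common K = 7 code has nu = 6
        print(f"nu = {nu}:", viterbi_ops_per_bit(n=2, nu=nu))
    ```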

  20. Computer code for the calculation of the temperature distribution of cooled turbine blades

    NASA Astrophysics Data System (ADS)

    Tietz, Thomas A.; Koschel, Wolfgang W.

    A generalized computer code for the calculation of the temperature distribution in a cooled turbine blade is presented. Using an iterative procedure, this program allows, in particular, the coupling of the aerothermodynamic values of the internal flow with the corresponding temperature distribution of the blade material. The temperature distribution of the turbine blade is calculated using a fully three-dimensional finite element computer code, so that the radial heat flux is taken into account. This code was extended to 4-node tetrahedral elements, enabling adaptive grid generation. To facilitate the mesh generation of the usually complex blade geometries, a computer program was developed which performs the grid generation of blades having basically arbitrary shape on the basis of two-dimensional cuts. The performance of the code is demonstrated with reference to a typical cooling configuration of a modern turbine blade.

  1. Development of a CFD code for casting simulation

    NASA Technical Reports Server (NTRS)

    Murph, Jesse E.

    1993-01-01

    Because of high rejection rates for large structural castings (e.g., the Space Shuttle Main Engine Alternate Turbopump Design Program), a reliable casting simulation computer code is very desirable. This code would reduce both the development time and life cycle costs by allowing accurate modeling of the entire casting process. While this code could be used for other types of castings, the most significant reductions of time and cost would probably be realized in complex investment castings, where any reduction in the number of development castings would be of significant benefit. The casting process is conveniently divided into three distinct phases: (1) mold filling, where the melt is poured or forced into the mold cavity; (2) solidification, where the melt undergoes a phase change to the solid state; and (3) cool down, where the solidified part continues to cool to ambient conditions. While these phases may appear to be separate and distinct, temporal overlaps do exist between phases (e.g., local solidification occurring during mold filling), and some phenomenological events are affected by others (e.g., residual stresses depend on solidification and cooling rates). Therefore, a reliable code must accurately model all three phases and the interactions between each. While many codes have been developed (to various stages of complexity) to model the solidification and cool down phases, only a few codes have been developed to model mold filling.

  2. Convergence acceleration of the Proteus computer code with multigrid methods

    NASA Technical Reports Server (NTRS)

    Demuren, A. O.; Ibraheem, S. O.

    1995-01-01

    This report presents the results of a study to implement convergence acceleration techniques based on the multigrid concept in the two-dimensional and three-dimensional versions of the Proteus computer code. The first section presents a review of the relevant literature on the implementation of the multigrid methods in computer codes for compressible flow analysis. The next two sections present detailed stability analysis of numerical schemes for solving the Euler and Navier-Stokes equations, based on conventional von Neumann analysis and the bi-grid analysis, respectively. The next section presents details of the computational method used in the Proteus computer code. Finally, the multigrid implementation and applications to several two-dimensional and three-dimensional test problems are presented. The results of the present study show that the multigrid method always leads to a reduction in the number of iterations (or time steps) required for convergence. However, there is an overhead associated with the use of multigrid acceleration. The overhead is higher in 2-D problems than in 3-D problems, thus overall multigrid savings in CPU time are in general better in the latter. Savings of about 40-50 percent are typical in 3-D problems, but they are about 20-30 percent in large 2-D problems. The present multigrid method is applicable to steady-state problems and is therefore ineffective in problems with inherently unstable solutions.
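
    The coarse-grid correction idea behind this acceleration is easy to show in one dimension; the sketch below is a minimal two-grid V-cycle for the 1-D Poisson problem with damped-Jacobi smoothing, far simpler than the Proteus implementation.

    ```python
    import numpy as np

    # Two-grid V-cycle for -u'' = f on (0, 1) with u(0) = u(1) = 0.
    def jacobi(u, f, h, iters, w=2.0 / 3.0):
        for _ in range(iters):           # damped Jacobi smoothing sweeps
            u[1:-1] += w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]
                                  - 2.0 * u[1:-1])
        return u

    def two_grid(u, f, h, pre=3, post=3):
        u = jacobi(u, f, h, pre)                                  # pre-smooth
        r = np.zeros_like(u)
        r[1:-1] = f[1:-1] + (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2  # residual
        rc = r[::2].copy()                          # restrict by injection
        ec = jacobi(np.zeros_like(rc), rc, 2.0 * h, 50)  # coarse "solve"
        e = np.zeros_like(u)
        e[::2] = ec                                 # prolong: copy coarse pts
        e[1:-1:2] = 0.5 * (ec[:-1] + ec[1:])        # ... and interpolate
        u += e                                      # coarse-grid correction
        return jacobi(u, f, h, post)                # post-smooth

    N = 64
    x = np.linspace(0.0, 1.0, N + 1)
    h = 1.0 / N
    f = np.pi**2 * np.sin(np.pi * x)                # exact solution sin(pi x)
    u = np.zeros_like(x)
    for _ in range(10):
        u = two_grid(u, f, h)
    print("max error:", np.abs(u - np.sin(np.pi * x)).max())
    ```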

  3. Computer codes for evaluation of control room habitability (HABIT)

    SciTech Connect

    Stage, S.A.

    1996-06-01

    This report describes the Computer Codes for Evaluation of Control Room Habitability (HABIT). HABIT is a package of computer codes designed to be used for the evaluation of control room habitability in the event of an accidental release of toxic chemicals or radioactive materials. Given information about the design of a nuclear power plant, a scenario for the release of toxic chemicals or radionuclides, and information about the air flows and protection systems of the control room, HABIT can be used to estimate the chemical exposure or radiological dose to control room personnel. HABIT is an integrated package of several programs that previously needed to be run separately and required considerable user intervention. This report discusses the theoretical basis and physical assumptions made by each of the modules in HABIT and gives detailed information about the data entry windows. Sample runs are given for each of the modules. A brief section of programming notes is included. A set of computer disks will accompany this report if the report is ordered from the Energy Science and Technology Software Center. The disks contain the files needed to run HABIT on a personal computer running DOS. Source codes for the various HABIT routines are on the disks. Also included are input and output files for three demonstration runs.

  4. War of Ontology Worlds: Mathematics, Computer Code, or Esperanto?

    PubMed Central

    Rzhetsky, Andrey; Evans, James A.

    2011-01-01

    The use of structured knowledge representations—ontologies and terminologies—has become standard in biomedicine. Definitions of ontologies vary widely, as do the values and philosophies that underlie them. In seeking to make these views explicit, we conducted and summarized interviews with a dozen leading ontologists. Their views clustered into three broad perspectives that we summarize as mathematics, computer code, and Esperanto. Ontology as mathematics puts the ultimate premium on rigor and logic, symmetry and consistency of representation across scientific subfields, and the inclusion of only established, non-contradictory knowledge. Ontology as computer code focuses on utility and cultivates diversity, fitting ontologies to their purpose. Like computer languages C++, Prolog, and HTML, the code perspective holds that diverse applications warrant custom designed ontologies. Ontology as Esperanto focuses on facilitating cross-disciplinary communication, knowledge cross-referencing, and computation across datasets from diverse communities. We show how these views align with classical divides in science and suggest how a synthesis of their concerns could strengthen the next generation of biomedical ontologies. PMID:21980276

  5. Development of Tritium Permeation Analysis Code (TPAC)

    SciTech Connect

    Eung S. Kim; Chang H. Oh; Mike Patterson

    2010-10-01

    Idaho National Laboratory developed the Tritium Permeation Analysis Code (TPAC) for tritium permeation in the Very High Temperature Gas Cooled Reactor (VHTR). All the component models in the VHTR were developed and embedded into the MATLAB SIMULINK package with a graphical user interface. The governing equations of the nuclear ternary reaction and of thermal neutron capture reactions from impurities in the helium and in the graphite core, reflector, and control rods were implemented. The TPAC code was verified using analytical solutions for the tritium birth rate from the ternary fission, the birth rate from 3He, and the birth rate from 10B. This paper also provides comparisons of TPAC with other existing codes. A VHTR reference design was selected for a study of tritium permeation from the reactor to the nuclear-assisted hydrogen production plant, and some sensitivity study results are presented based on an HTGR outlet temperature of 750 degrees C.
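
    The verification strategy described above, comparing a numerically integrated tritium inventory against its analytical solution, can be sketched as follows (Python rather than MATLAB SIMULINK; the lumped birth rate S is a placeholder, not a TPAC value):

      import numpy as np

      LAMBDA = np.log(2.0)/(12.32*365.25*24*3600)  # tritium decay constant, 1/s
      S = 1.0e12   # lumped birth rate (ternary fission + 3He + 10B), atoms/s -- placeholder

      dt = 3600.0
      n_steps = int(10*365.25*24)                  # ten years in one-hour steps
      N = 0.0
      for _ in range(n_steps):
          N += dt*(S - LAMBDA*N)                   # explicit Euler: dN/dt = S - lambda*N
      t = n_steps*dt
      N_exact = (S/LAMBDA)*(1.0 - np.exp(-LAMBDA*t))   # analytic solution used for verification
      print("numerical:", N, "analytic:", N_exact)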

  6. Computational Aeroacoustic Analysis System Development

    NASA Technical Reports Server (NTRS)

    Hadid, A.; Lin, W.; Ascoli, E.; Barson, S.; Sindir, M.

    2001-01-01

    Many industrial and commercial products operate in a dynamic flow environment and the aerodynamically generated noise has become a very important factor in the design of these products. In light of the importance of characterizing this dynamic environment, Rocketdyne has initiated a multiyear effort to develop an advanced general-purpose Computational Aeroacoustic Analysis System (CAAS) to address these issues. This system will provide a high fidelity predictive capability for aeroacoustic design and analysis. The numerical platform is able to provide the high temporal and spatial accuracy that is required for aeroacoustic calculations through the development of a high order spectral element numerical algorithm. The analysis system is integrated with well-established CAE tools, such as a graphical user interface (GUI) through PATRAN, to provide cost-effective access to all of the necessary tools. These include preprocessing (geometry import, grid generation and boundary condition specification), code set up (problem specification, user parameter definition, etc.), and postprocessing. The purpose of the present paper is to assess the feasibility of such a system and to demonstrate the efficiency and accuracy of the numerical algorithm through numerical examples. Computations of vortex shedding noise were carried out in the context of a two-dimensional low Mach number turbulent flow past a square cylinder. The computational aeroacoustic approach that is used in CAAS relies on coupling a base flow solver to the acoustic solver throughout a computational cycle. The unsteady fluid motion, which is responsible for both the generation and propagation of acoustic waves, is calculated using a high order flow solver. The results of the flow field are then passed to the acoustic solver through an interpolator to map the field values onto the acoustic grid. The acoustic field, which is governed by the linearized Euler equations, is then calculated using the flow results computed by the flow solver.
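
    The interpolation step that hands flow results to the acoustic solver can be illustrated in one dimension (a minimal sketch; the actual CAAS mapping is between spectral-element grids):

      import numpy as np

      x_flow = np.linspace(0.0, 1.0, 51)          # flow-solver grid
      p_flow = np.sin(2*np.pi*x_flow)             # unsteady pressure from the flow solver

      x_acou = np.linspace(0.0, 1.0, 201)         # finer acoustic grid
      p_acou = np.interp(x_acou, x_flow, p_flow)  # map field values onto the acoustic grid
      print("interpolation error:", np.abs(p_acou - np.sin(2*np.pi*x_acou)).max())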

  7. Benchmarking of computer codes and approaches for modeling exposure scenarios

    SciTech Connect

    Seitz, R.R.; Rittmann, P.D.; Wood, M.I.; Cook, J.R.

    1994-08-01

    The US Department of Energy Headquarters established a performance assessment task team (PATT) to integrate the activities of DOE sites that are preparing performance assessments for the disposal of newly generated low-level waste. The PATT chartered a subteam with the task of comparing computer codes and exposure scenarios used for dose calculations in performance assessments. This report documents the efforts of the subteam. Computer codes considered in the comparison include GENII, PATHRAE-EPA, MICROSHIELD, and ISOSHLD. Calculations were also conducted using spreadsheets to provide a comparison at the most fundamental level. Calculations and modeling approaches are compared for unit radionuclide concentrations in water and soil for the ingestion, inhalation, and external dose pathways. Over 30 tables comparing inputs and results are provided.
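
    The spreadsheet-level comparison mentioned above reduces each pathway to a product of a unit concentration, an intake or exposure factor, and a dose coefficient. A hedged sketch of that arithmetic (every coefficient below is an illustrative placeholder, not a value from GENII, PATHRAE-EPA, MICROSHIELD, or ISOSHLD):

      C_water = 1.0          # radionuclide concentration in water, Bq/L
      C_soil  = 1.0          # radionuclide concentration in soil, Bq/kg

      U_water   = 730.0      # water ingestion rate, L/yr
      dust_load = 1.0e-7     # airborne dust loading, kg/m^3
      B_inh     = 8400.0     # breathing rate, m^3/yr
      t_ext     = 8760.0     # external exposure time, h/yr

      DCF_ing = 2.8e-8       # ingestion dose coefficient, Sv/Bq
      DCF_inh = 5.0e-8       # inhalation dose coefficient, Sv/Bq
      DCF_ext = 1.0e-12      # external dose rate coefficient, (Sv/h)/(Bq/kg)

      dose_ing = C_water*U_water*DCF_ing            # ingestion pathway, Sv/yr
      dose_inh = C_soil*dust_load*B_inh*DCF_inh     # inhalation pathway, Sv/yr
      dose_ext = C_soil*DCF_ext*t_ext               # external pathway, Sv/yr
      print(dose_ing, dose_inh, dose_ext)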

  8. The 1992 Seals Flow Code Development Workshop

    NASA Technical Reports Server (NTRS)

    Liang, Anita D.; Hendricks, Robert C.

    1993-01-01

    A two-day meeting was conducted at the NASA Lewis Research Center on August 5 and 6, 1992, to inform the technical community of the progress of NASA Contract NAS3-26544. This contract was established in 1990 to develop industrial and CFD codes for the design and analysis of seals. Codes were demonstrated and disseminated to the user community for evaluation. The peer review panel which was formed in 1991 provided recommendations on this effort. The technical community presented results of their activities in the area of seals, with particular emphasis on brush seal systems.

  9. Computational radiology and imaging with the MCNP Monte Carlo code

    SciTech Connect

    Estes, G.P.; Taylor, W.M.

    1995-05-01

    MCNP, a 3D coupled neutron/photon/electron Monte Carlo radiation transport code, is currently used in medical applications such as cancer radiation treatment planning, interpretation of diagnostic radiation images, and treatment beam optimization. This paper will discuss MCNP's current uses and capabilities, as well as envisioned improvements that would further enhance MCNP's role in computational medicine. It will be demonstrated that the methodology exists to simulate medical images (e.g., SPECT). Techniques will be discussed that would enable the construction of 3D computational geometry models of individual patients for use in patient-specific studies that would improve the quality of care for patients.
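
    The Monte Carlo transport idea at the core of such codes can be shown in miniature: sample photon free paths through a slab and compare the transmitted fraction with the analytic attenuation law (a toy absorb-on-first-collision model, not MCNP physics):

      import numpy as np

      rng = np.random.default_rng(1)
      mu = 0.2          # total attenuation coefficient, 1/cm (illustrative)
      thickness = 10.0  # slab thickness, cm
      n = 1_000_000     # photon histories

      # sample exponential free paths; a photon "survives" if its first
      # collision lies beyond the slab (no scattering in this toy model)
      paths = rng.exponential(1.0/mu, size=n)
      transmitted = np.count_nonzero(paths > thickness)/n
      print("Monte Carlo:", transmitted, "analytic exp(-mu*t):", np.exp(-mu*thickness))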

  10. Computer Code For Calculation Of The Mutual Coherence Function

    NASA Astrophysics Data System (ADS)

    Bugnolo, Dimitri S.

    1986-05-01

    We present a computer code in FORTRAN 77 for the calculation of the mutual coherence function (MCF) of a plane wave normally incident on a stochastic half-space. This is an exact result. The user need only input the path length, the wavelength, the outer scale size, and the structure constant. This program may be used to calculate the MCF of a well-collimated laser beam in the atmosphere.
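
    For a plane wave in Kolmogorov turbulence the MCF reduces, when outer-scale effects are neglected, to an analytic form; a sketch using the standard plane-wave constants (the inputs mirror those listed above minus the outer scale, and the numerical values are illustrative):

      import numpy as np

      wavelength = 1.064e-6   # m
      L   = 5.0e3             # path length, m
      Cn2 = 1.0e-15           # structure constant, m^(-2/3)

      k = 2*np.pi/wavelength
      rho0 = (1.46*Cn2*k**2*L)**(-3.0/5.0)        # plane-wave coherence length
      rho = np.linspace(0.0, 5*rho0, 200)         # transverse separation, m
      mcf = np.exp(-(rho/rho0)**(5.0/3.0))        # MCF = exp(-D_wave/2)
      print("coherence length rho0 =", rho0, "m")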

  11. Bragg optics computer codes for neutron scattering instrument design

    SciTech Connect

    Popovici, M.; Yelon, W.B.; Berliner, R.R.; Stoica, A.D.

    1997-09-01

    Computer codes for neutron crystal spectrometer design, optimization and experiment planning are described. Phase space distributions, linewidths and absolute intensities are calculated by matrix methods in an extension of the Cooper-Nathans resolution function formalism. For modeling the Bragg reflection on bent crystals the lamellar approximation is used. Optimization is done by satisfying conditions of focusing in scattering and in real space, and by numerically maximizing figures of merit. Examples for three-axis and two-axis spectrometers are given.

  12. User's manual for the vertical axis wind turbine performance computer code DARTER

    SciTech Connect

    Klimas, P. C.; French, R. E.

    1980-05-01

    The computer code DARTER (DARrieus Turbine, Elemental Reynolds number) is an aerodynamic performance/loads prediction scheme based upon the conservation of momentum principle. It is the latest evolution in a sequence which began with a model developed by Templin of NRC, Canada and progressed through the Sandia National Laboratories-developed SIMOSS (SImple MOmentum, Single Streamtube) and DART (DARrieus Turbine) to DARTER.

  13. Fault-tolerant quantum computation with asymmetric Bacon-Shor codes

    NASA Astrophysics Data System (ADS)

    Brooks, Peter; Preskill, John

    2013-03-01

    We develop a scheme for fault-tolerant quantum computation based on asymmetric Bacon-Shor codes, which works effectively against highly biased noise dominated by dephasing. We find the optimal Bacon-Shor block size as a function of the noise strength and the noise bias, and estimate the logical error rate and overhead cost achieved by this optimal code. Our fault-tolerant gadgets, based on gate teleportation, are well suited for hardware platforms with geometrically local gates in two dimensions.

  14. Methodology for computational fluid dynamics code verification/validation

    SciTech Connect

    Oberkampf, W.L.; Blottner, F.G.; Aeschliman, D.P.

    1995-07-01

    The issues of verification, calibration, and validation of computational fluid dynamics (CFD) codes have been receiving increasing levels of attention in the research literature and in engineering technology. Both CFD researchers and users of CFD codes are asking more critical and detailed questions concerning the accuracy, range of applicability, reliability and robustness of CFD codes and their predictions. This is a welcome trend because it demonstrates that CFD is maturing from a research tool into one that impacts engineering hardware and system design. In this environment, the broad issue of code quality assurance becomes paramount. However, the philosophy and methodology of building confidence in CFD code predictions have proven to be more difficult than many expected. A wide variety of physical modeling errors and discretization errors are discussed. Here, discretization errors refer to all errors caused by conversion of the original partial differential equations to algebraic equations, and their solution. Boundary conditions for both the partial differential equations and the discretized equations are discussed. Contrasts are drawn between the assumptions and actual use of numerical method consistency and stability. Comments are also made concerning the existence and uniqueness of solutions for both the partial differential equations and the discrete equations. Various techniques are suggested for the detection and estimation of errors caused by physical modeling and discretization of the partial differential equations.

  15. User's manual for PELE3D: a computer code for three-dimensional incompressible fluid dynamics

    SciTech Connect

    McMaster, W H

    1982-05-07

    The PELE3D code is a three-dimensional semi-implicit Eulerian hydrodynamics computer program for the solution of incompressible fluid flow coupled to a structure. The fluid and coupling algorithms have been adapted from the previously developed two-dimensional code PELE-IC. The PELE3D code is written in both plane and cylindrical coordinates. The coupling algorithm is general enough to handle a variety of structural shapes. The free surface algorithm is able to accommodate a top surface and several independent bubbles. The code remains in a developmental status, since not all of the intended options have been fully implemented and tested. Development of this code ended in 1980 upon termination of the contract with the Nuclear Regulatory Commission.

  16. Theoretical atomic physics code development at Los Alamos

    SciTech Connect

    Clark, R.E.H.; Abdallah, J. Jr.

    1989-01-01

    We have developed a set of computer codes for atomic physics calculations at Los Alamos. These codes can calculate a large variety of data with a minimum of effort on the part of the user. In particular, differential cross sections and electron impact coherence parameters can be readily obtained for arbitrary ions or atoms. Currently, the theory consists of non-relativistic Hartree-Fock structure calculations and non-relativistic distorted wave approximation or first-order many-body theory collisional calculations. 12 refs., 2 figs., 5 tabs.

  17. Utilization of recently developed codes for high power Brayton and Rankine cycle power systems

    NASA Technical Reports Server (NTRS)

    Doherty, Michael P.

    1993-01-01

    Two recently developed FORTRAN computer codes for high power Brayton and Rankine thermodynamic cycle analysis for space power applications are presented. The codes were written in support of an effort to develop a series of subsystem models for multimegawatt Nuclear Electric Propulsion, but their use is not limited just to nuclear heat sources or to electric propulsion. Code development background, a description of the codes, some sample input/output from one of the codes, and future plans and implications for the use of these codes at NASA's Lewis Research Center are provided.
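
    As a minimal stand-in for the cycle bookkeeping such codes perform (real codes use component maps, pressure losses, and temperature-dependent properties), an ideal cold-air-standard Brayton cycle with illustrative state-point values:

      gamma, cp = 1.4, 1005.0     # air, J/(kg K)
      T1, r_p   = 300.0, 20.0     # compressor inlet temperature (K), pressure ratio
      T3        = 1400.0          # turbine inlet temperature, K

      T2 = T1*r_p**((gamma - 1.0)/gamma)    # isentropic compression
      T4 = T3/r_p**((gamma - 1.0)/gamma)    # isentropic expansion
      w_net = cp*((T3 - T4) - (T2 - T1))    # net specific work, J/kg
      eta   = 1.0 - r_p**(-(gamma - 1.0)/gamma)
      print("net work:", w_net, "J/kg; thermal efficiency:", eta)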

  18. Code manual for CONTAIN 2.0: A computer code for nuclear reactor containment analysis

    SciTech Connect

    Murata, K.K.; Williams, D.C.; Griffith, R.O.; Gido, R.G.; Tadios, E.L.; Davis, F.J.; Martinez, G.M.; Washington, K.E.; Tills, J.

    1997-12-01

    The CONTAIN 2.0 computer code is an integrated analysis tool used for predicting the physical conditions, chemical compositions, and distributions of radiological materials inside a containment building following the release of material from the primary system in a light-water reactor accident. It can also predict the source term to the environment. CONTAIN 2.0 is intended to replace the earlier CONTAIN 1.12, which was released in 1991. The purpose of this Code Manual is to provide full documentation of the features and models in CONTAIN 2.0. Besides complete descriptions of the models, this Code Manual provides a complete description of the input and output from the code. CONTAIN 2.0 is a highly flexible and modular code that can run problems that are either quite simple or highly complex. An important aspect of CONTAIN is that the interactions among thermal-hydraulic phenomena, aerosol behavior, and fission product behavior are taken into account. The code includes atmospheric models for steam/air thermodynamics, intercell flows, condensation/evaporation on structures and aerosols, aerosol behavior, and gas combustion. It also includes models for reactor cavity phenomena such as core-concrete interactions and coolant pool boiling. Heat conduction in structures, fission product decay and transport, radioactive decay heating, and the thermal-hydraulic and fission product decontamination effects of engineered safety features are also modeled. To the extent possible, the best available models for severe accident phenomena have been incorporated into CONTAIN, but it is intrinsic to the nature of accident analysis that significant uncertainty exists regarding numerous phenomena. In those cases, sensitivity studies can be performed with CONTAIN by means of user-specified input parameters. Thus, the code can be viewed as a tool designed to assist the knowledgeable reactor safety analyst in evaluating the consequences of specific modeling assumptions.

  19. Sodium fast reactor gaps analysis of computer codes and models for accident analysis and reactor safety.

    SciTech Connect

    Carbajo, Juan; Jeong, Hae-Yong; Wigeland, Roald; Corradini, Michael; Schmidt, Rodney Cannon; Thomas, Justin; Wei, Tom; Sofu, Tanju; Ludewig, Hans; Tobita, Yoshiharu; Ohshima, Hiroyuki; Serre, Frederic

    2011-06-01

    This report summarizes the results of an expert-opinion elicitation activity designed to qualitatively assess the status and capabilities of currently available computer codes and models for accident analysis and reactor safety calculations of advanced sodium fast reactors, and identify important gaps. The twelve-member panel consisted of representatives from five U.S. National Laboratories (SNL, ANL, INL, ORNL, and BNL), the University of Wisconsin, the KAERI, the JAEA, and the CEA. The major portion of this elicitation activity occurred during a two-day meeting held on Aug. 10-11, 2010 at Argonne National Laboratory. There were two primary objectives of this work: (1) Identify computer codes currently available for SFR accident analysis and reactor safety calculations; and (2) Assess the status and capability of current US computer codes to adequately model the required accident scenarios and associated phenomena, and identify important gaps. During the review, panel members identified over 60 computer codes that are currently available in the international community to perform different aspects of SFR safety analysis for various event scenarios and accident categories. A brief description of each of these codes together with references (when available) is provided. An adaptation of the Predictive Capability Maturity Model (PCMM) for computational modeling and simulation is described for use in this work. The panel's assessment of the available US codes is presented in the form of nine tables, organized into groups of three for each of three risk categories considered: anticipated operational occurrences (AOOs), design basis accidents (DBA), and beyond design basis accidents (BDBA). A set of summary conclusions are drawn from the results obtained. At the highest level, the panel judged that current US code capabilities are adequate for licensing given reasonable margins, but expressed concern that US code development activities had stagnated.

  20. CURRENT - A Computer Code for Modeling Two-Dimensional, Chemically Reacting, Low Mach Number Flows

    SciTech Connect

    Winters, W.S.; Evans, G.H.; Moen, C.D.

    1996-10-01

    This report documents CURRENT, a computer code for modeling two-dimensional, chemically reacting, low Mach number flows including the effects of surface chemistry. CURRENT is a finite volume code based on the SIMPLER algorithm. Additional convergence acceleration for low Peclet number flows is provided using improved boundary condition coupling and preconditioned gradient methods. Gas-phase and surface chemistry is modeled using the CHEMKIN software libraries. The CURRENT user interface has been designed to be compatible with the Sandia-developed mesh generator and post processor ANTIPASTO and the post processor TECPLOT. This report describes the theory behind the code and also serves as a user's manual.

  1. ASHMET: a computer code for estimating insolation incident on tilted surfaces

    SciTech Connect

    Elkin, R.F.; Toelle, R.G.

    1980-05-01

    A computer code, ASHMET, has been developed by MSFC to estimate the amount of solar insolation incident on the surfaces of solar collectors. Both tracking and fixed-position collectors have been included. Climatological data for 248 US locations are built into the code. This report describes the methodology of the code and its input and output. The basic methodology used by ASHMET is the ASHRAE clear-day insolation relationships, modified by a clearness index derived from SOLMET-measured solar radiation data on a horizontal surface.
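
    The ASHRAE clear-day relationships that ASHMET builds on can be sketched directly; A, B, and C below are representative mid-year ASHRAE constants rather than ASHMET data, and the SOLMET-derived clearness index is simply set to 1:

      import numpy as np

      A, B, C = 1088.0, 0.205, 0.134   # W/m^2, dimensionless, dimensionless
      kc      = 1.0                    # location-specific clearness index

      beta  = np.radians(60.0)   # solar altitude angle
      theta = np.radians(25.0)   # angle of incidence on the collector
      tilt  = np.radians(30.0)   # collector tilt from horizontal

      I_dn  = kc*A*np.exp(-B/np.sin(beta))        # direct normal irradiance
      I_dir = I_dn*np.cos(theta)                  # direct component on the tilted surface
      I_dif = C*I_dn*(1.0 + np.cos(tilt))/2.0     # isotropic sky diffuse component
      print("incident insolation:", I_dir + I_dif, "W/m^2")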

  2. FLAME: A finite element computer code for contaminant transport in variably-saturated media

    SciTech Connect

    Baca, R.G.; Magnuson, S.O.

    1992-06-01

    A numerical model was developed for use in performance assessment studies at the INEL. The numerical model, referred to as the FLAME computer code, is designed to simulate subsurface contaminant transport in variably-saturated media. The code can be applied to model two-dimensional contaminant transport in an arid site vadose zone or in an unconfined aquifer. In addition, the code has the capability to describe transport processes in a porous media with discrete fractures. This report presents the following: a description of the conceptual framework and mathematical theory, derivations of the finite element techniques and algorithms, computational examples that illustrate the capability of the code, and input instructions for the general use of the code. The development of the FLAME computer code is aimed at providing environmental scientists at the INEL with a predictive tool for the subsurface water pathway. This numerical model is expected to be widely used in performance assessments for: (1) the Remedial Investigation/Feasibility Study process and (2) compliance studies required by US Department of Energy Order 5820.2A.
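
    FLAME itself is a two-dimensional finite element code; as a compact illustration of the advection-dispersion transport it solves, a one-dimensional explicit finite difference sketch (all parameter values are arbitrary):

      import numpy as np

      # dc/dt = -v dc/dx + D d2c/dx2, upwind advection plus central dispersion
      nx, dx = 200, 0.5                               # grid
      v, D   = 1.0e-2, 1.0e-3                         # velocity (m/d), dispersion (m^2/d)
      dt     = 0.4*min(dx/v, dx*dx/(2*D))             # stability-limited time step

      c = np.zeros(nx)
      c[0] = 1.0                                      # fixed-concentration source boundary
      for _ in range(200):
          adv = -v*(c[1:-1] - c[:-2])/dx
          dsp = D*(c[2:] - 2*c[1:-1] + c[:-2])/dx**2
          c[1:-1] += dt*(adv + dsp)
          c[0] = 1.0
      print("plume front (c < 0.01) at x =", dx*np.argmax(c < 0.01), "m")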

  3. A Compact Code for Simulations of Quantum Error Correction in Classical Computers

    SciTech Connect

    Nyman, Peter

    2009-03-10

    This study considers implementations of error correction in a simulation language on a classical computer. Error correction will be necessary in quantum computing and quantum information. We give some examples of implementations of error correction codes. These implementations are made in a more general quantum simulation language on a classical computer, in the language Mathematica. The intention of this research is to develop a programming language that is able to make simulations of all quantum algorithms and error corrections in the same framework. The program code implemented on a classical computer provides a connection between the mathematical formulation of quantum mechanics and computational methods. This gives us a clear, uncomplicated language for the implementation of algorithms.
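
    The study uses Mathematica; as a minimal Python analogue of the same kind of classical simulation, a statevector implementation of the three-qubit bit-flip code (encode, inject an X error, measure the two Z-parity stabilizers, correct):

      import numpy as np
      from functools import reduce

      I = np.eye(2)
      X = np.array([[0.0, 1.0], [1.0, 0.0]])
      Z = np.diag([1.0, -1.0])

      def op(gates):                       # 3-qubit operator via tensor products
          return reduce(np.kron, gates)

      a, b = 0.6, 0.8                      # encode a|0> + b|1> as a|000> + b|111>
      state = np.zeros(8)
      state[0b000], state[0b111] = a, b

      state = op([I, X, I]) @ state        # bit-flip error on the middle qubit

      s01 = state @ (op([Z, Z, I]) @ state)   # stabilizer expectation, +1 or -1
      s12 = state @ (op([I, Z, Z]) @ state)
      syndrome = (s01 < 0, s12 < 0)           # (1,0): qubit 0, (1,1): qubit 1, (0,1): qubit 2
      fix = {(0, 0): [I, I, I], (1, 0): [X, I, I],
             (1, 1): [I, X, I], (0, 1): [I, I, X]}
      state = op(fix[syndrome]) @ state
      print("recovered amplitudes:", state[0b000], state[0b111])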

  4. Recommended documentation plan for the FLAG and CHEMFLUB computer codes

    SciTech Connect

    1983-09-02

    Reviews have been conducted of both FLAG and CHEMFLUB's documentation and computer codes. The documentation of both models is: (1) incomplete, (2) confusing, (3) not helpful to the reader, (4) filled with extraneous information, and (5) lacking the claimed versatility in analyzing coal gasifier systems. The documentation is such that the computer coding itself must be used as a reference to complete the documentation. Once the codes are set up, they are relatively easy to run; we have exercised both of them. Most of our efforts thus far have been concentrated on FLAG because of its importance and complexity. FLAG in its present form cannot be expected to yield meaningful data applicable to coal gasifier systems. The reasons for this are twofold. First, the model is incorrect in describing some aspects of fluid particle behavior in coal gasifier systems. Second, the numerical formulation/solution methodology is incorrectly implemented and introduces spurious numerical effects, thereby obscuring the physics of the model. In brief, this means that the resulting calculations are not correctly related to the physics. Our less extensive exercise of CHEMFLUB shows that it is best utilized as a tool for generating first approximations. We have concluded from these reviews that we cannot perform meaningful comparisons as required under tasks 3.3, 3.4, and 3.5 without first reconstructing and, where necessary, correcting the physical/numerical models. A plan is presented for accomplishing this reconstruction/modification.

  5. A proposed methodology for computational fluid dynamics code verification, calibration, and validation

    NASA Astrophysics Data System (ADS)

    Aeschliman, D. P.; Oberkampf, W. L.; Blottner, F. G.

    Verification, calibration, and validation (VCV) of Computational Fluid Dynamics (CFD) codes is an essential element of the code development process. The exact manner in which code VCV activities are planned and conducted, however, is critically important. It is suggested that the way in which code validation, in particular, is often conducted--by comparison to published experimental data obtained for other purposes--is in general difficult and unsatisfactory, and that a different approach is required. This paper describes a proposed methodology for CFD code VCV that meets the technical requirements and is philosophically consistent with code development needs. The proposed methodology stresses teamwork and cooperation between code developers and experimentalists throughout the VCV process, and takes advantage of certain synergisms between CFD and experiment. A novel approach to uncertainty analysis is described which can both distinguish between and quantify various types of experimental error, and whose attributes are used to help define an appropriate experimental design for code VCV experiments. The methodology is demonstrated with an example of laminar, hypersonic, near perfect gas, 3-dimensional flow over a sliced sphere/cone of varying geometrical complexity.

  6. A proposed methodology for computational fluid dynamics code verification, calibration, and validation

    SciTech Connect

    Aeschliman, D.P.; Oberkampf, W.L.; Blottner, F.G.

    1995-07-01

    Verification, calibration, and validation (VCV) of Computational Fluid Dynamics (CFD) codes is an essential element of the code development process. The exact manner in which code VCV activities are planned and conducted, however, is critically important. It is suggested that the way in which code validation, in particular, is often conducted--by comparison to published experimental data obtained for other purposes--is in general difficult and unsatisfactory, and that a different approach is required. This paper describes a proposed methodology for CFD code VCV that meets the technical requirements and is philosophically consistent with code development needs. The proposed methodology stresses teamwork and cooperation between code developers and experimentalists throughout the VCV process, and takes advantage of certain synergisms between CFD and experiment. A novel approach to uncertainty analysis is described which can both distinguish between and quantify various types of experimental error, and whose attributes are used to help define an appropriate experimental design for code VCV experiments. The methodology is demonstrated with an example of laminar, hypersonic, near perfect gas, 3-dimensional flow over a sliced sphere/cone of varying geometrical complexity.

  7. Application of software to development of reactor-safety codes

    SciTech Connect

    Wilburn, N.P.; Niccoli, L.G.

    1980-09-01

    Over the past two-and-a-half decades, the application of new techniques has reduced hardware costs for digital computer systems and increased computational speed by several orders of magnitude. A corresponding cost reduction in business and scientific software development has not occurred. The same situation is seen for software developed to model the thermohydraulic behavior of nuclear systems under hypothetical accident situations. This is particularly evident when costs over the total software life cycle are considered. A solution to this dilemma for reactor safety code systems has been demonstrated by applying the software engineering techniques developed over the course of the last few years in the aerospace and business communities. These techniques have been applied recently, with a great deal of success, in several major projects at the Hanford Engineering Development Laboratory (HEDL): (1) a rewrite of a major safety code (MELT); (2) development of a new code system (CONACS) for description of the response of LMFBR containment to hypothetical accidents; and (3) development of two new modules for reactor safety analysis.

  8. Computer code to interchange CDS and wave-drag geometry formats

    NASA Technical Reports Server (NTRS)

    Johnson, V. S.; Turnock, D. L.

    1986-01-01

    A computer program has been developed on the PRIME minicomputer to provide an interface for the passage of aircraft configuration geometry data between the Rockwell Configuration Development System (CDS) and a wireframe geometry format used by aerodynamic design and analysis codes. The interface program allows aircraft geometry which has been developed in CDS to be directly converted to the wireframe geometry format for analysis. Geometry which has been modified in the analysis codes can be transformed back to a CDS geometry file and examined for physical viability. Previously created wireframe geometry files may also be converted into CDS geometry files. The program provides a useful link between a geometry creation and manipulation code and analysis codes by providing rapid and accurate geometry conversion.

  9. Heat pipe design handbook, part 2. [digital computer code specifications

    NASA Technical Reports Server (NTRS)

    Skrabek, E. A.

    1972-01-01

    The utilization of a digital computer code for heat pipe analysis and design (HPAD) is described. HPAD calculates the steady-state hydrodynamic heat transport capability of a heat pipe with a particular wick configuration and working fluid as a function of wick cross-sectional area. Heat load, orientation, operating temperature, and heat pipe geometry are specified. Both one 'g' and zero 'g' environments are considered, and, at the user's option, the code will also perform a weight analysis and will calculate heat pipe temperature drops. The central porous slab, circumferential porous wick, arterial wick, annular wick, and axial rectangular grooves are the wick configurations which HPAD has the capability of analyzing. For Vol. 1, see N74-22569.

  10. Majorana Fermion Surface Code for Universal Quantum Computation

    NASA Astrophysics Data System (ADS)

    Vijay, Sagar; Hsieh, Tim; Fu, Liang

    We introduce an exactly solvable model of interacting Majorana fermions realizing Z2 topological order with a Z2 fermion parity grading and lattice symmetries permuting the three fundamental anyon types. We propose a concrete physical realization by utilizing quantum phase slips in an array of Josephson-coupled mesoscopic topological superconductors, which can be implemented in a wide range of solid state systems, including topological insulators, nanowires or two-dimensional electron gases, proximitized by s-wave superconductors. Our model finds a natural application as a Majorana fermion surface code for universal quantum computation, with a single-step stabilizer measurement requiring no physical ancilla qubits, increased error tolerance, and simpler logical gates than a surface code with bosonic physical qubits. We thoroughly discuss protocols for stabilizer measurements, encoding and manipulating logical qubits, and gate implementations.

  11. Majorana Fermion Surface Code for Universal Quantum Computation

    NASA Astrophysics Data System (ADS)

    Vijay, Sagar; Hsieh, Timothy H.; Fu, Liang

    2015-10-01

    We introduce an exactly solvable model of interacting Majorana fermions realizing Z2 topological order with a Z2 fermion parity grading and lattice symmetries permuting the three fundamental anyon types. We propose a concrete physical realization by utilizing quantum phase slips in an array of Josephson-coupled mesoscopic topological superconductors, which can be implemented in a wide range of solid-state systems, including topological insulators, nanowires, or two-dimensional electron gases, proximitized by s -wave superconductors. Our model finds a natural application as a Majorana fermion surface code for universal quantum computation, with a single-step stabilizer measurement requiring no physical ancilla qubits, increased error tolerance, and simpler logical gates than a surface code with bosonic physical qubits. We thoroughly discuss protocols for stabilizer measurements, encoding and manipulating logical qubits, and gate implementations.

  12. Multicode comparison of selected source-term computer codes

    SciTech Connect

    Hermann, O.W.; Parks, C.V.; Renier, J.P.; Roddy, J.W.; Ashline, R.C.; Wilson, W.B.; LaBauve, R.J.

    1989-04-01

    This report summarizes the results of a study to assess the predictive capabilities of three radionuclide inventory/depletion computer codes, ORIGEN2, ORIGEN-S, and CINDER-2. The task was accomplished through a series of comparisons of their output for several light-water reactor (LWR) models (i.e., verification). Of the five cases chosen, two modeled typical boiling-water reactors (BWR) at burnups of 27.5 and 40 GWd/MTU and two represented typical pressurized-water reactors (PWR) at burnups of 33 and 50 GWd/MTU. In the fifth case, identical input data were used for each of the codes to examine the results of decay only and to show differences in nuclear decay constants and decay heat rates. Comparisons were made for several different characteristics (mass, radioactivity, and decay heat rate) for 52 radionuclides and for nine decay periods ranging from 30 d to 10,000 years. Only fission products and actinides were considered. The results are presented in comparative-ratio tables for each of the characteristics, decay periods, and cases. A brief summary description of each of the codes has been included. Of the more than 21,000 individual comparisons made for the three codes (taken two at a time), nearly half (45%) agreed to within 1%, and an additional 17% fell within the range of 1 to 5%. Approximately 8% of the comparison results disagreed by more than 30%. However, relatively good agreement was obtained for most of the radionuclides that are expected to contribute the greatest impact to waste disposal. Even though some defects have been noted, each of the codes in the comparison appears to produce respectable results. 12 figs., 12 tabs.

  13. Development of Parallel Code for the Alaska Tsunami Forecast Model

    NASA Astrophysics Data System (ADS)

    Bahng, B.; Knight, W. R.; Whitmore, P.

    2014-12-01

    The Alaska Tsunami Forecast Model (ATFM) is a numerical model used to forecast propagation and inundation of tsunamis generated by earthquakes and other means in both the Pacific and Atlantic Oceans. At the U.S. National Tsunami Warning Center (NTWC), the model is mainly used in a pre-computed fashion. That is, results for hundreds of hypothetical events are computed before alerts, and are accessed and calibrated with observations during tsunamis to immediately produce forecasts. ATFM uses the non-linear, depth-averaged, shallow-water equations of motion with multiply nested grids in two-way communications between domains of each parent-child pair as waves get closer to coastal waters. Even with the pre-computation the task becomes non-trivial as sub-grid resolution gets finer. Currently, the finest resolution Digital Elevation Models (DEM) used by ATFM are 1/3 arc-seconds. With a serial code, large or multiple areas of very high resolution can produce run-times that are unrealistic even in a pre-computed approach. One way to increase the model performance is code parallelization used in conjunction with a multi-processor computing environment. NTWC developers have undertaken an ATFM code-parallelization effort to streamline the creation of the pre-computed database of results with the long term aim of tsunami forecasts from source to high resolution shoreline grids in real time. Parallelization will also permit timely regeneration of the forecast model database with new DEMs; and, will make possible future inclusion of new physics such as the non-hydrostatic treatment of tsunami propagation. The purpose of our presentation is to elaborate on the parallelization approach and to show the compute speed increase on various multi-processor systems.
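
    The core of such a parallelization is a domain decomposition with ghost-cell (halo) exchange at each time step. A hedged skeleton using mpi4py (illustrative only, not the NTWC implementation; run with, e.g., mpiexec -n 4 python halo_demo.py):

      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      n_local = 100                                # interior cells per rank (1-D split)
      eta = np.full(n_local + 2, float(rank))      # wave height, one ghost cell per side
      left  = rank - 1 if rank > 0 else MPI.PROC_NULL
      right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

      for step in range(10):
          # refresh ghost cells from neighbors before each finite-difference update
          comm.Sendrecv(eta[1:2],   dest=left,  recvbuf=eta[-1:], source=right)
          comm.Sendrecv(eta[-2:-1], dest=right, recvbuf=eta[0:1], source=left)
          # ... the shallow-water stencil update using eta[0] and eta[-1] goes here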

  14. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation

    SciTech Connect

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This manual covers an array of modules written for the SCALE package, consisting of drivers, system libraries, cross section and materials properties libraries, input/output routines, storage modules, and help files.

  15. FURN3D: A computer code for radiative heat transfer in pulverized coal furnaces

    SciTech Connect

    Ahluwalia, R.K.; Im, K.H.

    1992-08-01

    A computer code FURN3D has been developed for assessing the impact of burning different coals on heat absorption pattern in pulverized coal furnaces. The code is unique in its ability to conduct detailed spectral calculations of radiation transport in furnaces fully accounting for the size distributions of char, soot and ash particles, ash content, and ash composition. The code uses a hybrid technique of solving the three-dimensional radiation transport equation for absorbing, emitting and anisotropically scattering media. The technique achieves an optimal mix of computational speed and accuracy by combining the discrete ordinate method (S[sub 4]), modified differential approximation (MDA) and P, approximation in different range of optical thicknesses. The code uses spectroscopic data for estimating the absorption coefficients of participating gases C0[sub 2], H[sub 2]0 and CO. It invokes Mie theory for determining the extinction and scattering coefficients of combustion particulates. The optical constants of char, soot and ash are obtained from dispersion relations derived from reflectivity, transmissivity and extinction measurements. A control-volume formulation is adopted for determining the temperature field inside the furnace. A simple char burnout model is employed for estimating heat release and evolution of particle size distribution. The code is written in Fortran 77, has modular form, and is machine-independent. The computer memory required by the code depends upon the number of grid points specified and whether the transport calculations are performed on spectral or gray basis.

  16. FURN3D: A computer code for radiative heat transfer in pulverized coal furnaces

    SciTech Connect

    Ahluwalia, R.K.; Im, K.H.

    1992-08-01

    A computer code FURN3D has been developed for assessing the impact of burning different coals on heat absorption pattern in pulverized coal furnaces. The code is unique in its ability to conduct detailed spectral calculations of radiation transport in furnaces fully accounting for the size distributions of char, soot and ash particles, ash content, and ash composition. The code uses a hybrid technique of solving the three-dimensional radiation transport equation for absorbing, emitting and anisotropically scattering media. The technique achieves an optimal mix of computational speed and accuracy by combining the discrete ordinate method (S{sub 4}), modified differential approximation (MDA) and P, approximation in different range of optical thicknesses. The code uses spectroscopic data for estimating the absorption coefficients of participating gases C0{sub 2}, H{sub 2}0 and CO. It invokes Mie theory for determining the extinction and scattering coefficients of combustion particulates. The optical constants of char, soot and ash are obtained from dispersion relations derived from reflectivity, transmissivity and extinction measurements. A control-volume formulation is adopted for determining the temperature field inside the furnace. A simple char burnout model is employed for estimating heat release and evolution of particle size distribution. The code is written in Fortran 77, has modular form, and is machine-independent. The computer memory required by the code depends upon the number of grid points specified and whether the transport calculations are performed on spectral or gray basis.

  17. Development of a massively parallel parachute performance prediction code

    SciTech Connect

    Peterson, C.W.; Strickland, J.H.; Wolfe, W.P.; Sundberg, W.D.; McBride, D.D.

    1997-04-01

    The Department of Energy has given Sandia full responsibility for the complete life cycle (cradle to grave) of all nuclear weapon parachutes. Sandia National Laboratories is initiating development of a complete numerical simulation of parachute performance, beginning with parachute deployment and continuing through inflation and steady state descent. The purpose of the parachute performance code is to predict the performance of stockpile weapon parachutes as these parachutes continue to age well beyond their intended service life. A new massively parallel computer will provide unprecedented speed and memory for solving this complex problem, and new software will be written to treat the coupled fluid, structure and trajectory calculations as part of a single code. Verification and validation experiments have been proposed to provide the necessary confidence in the computations.

  18. Development of the Glenn-Heat-Transfer (Glenn-HT) Computer Code to Enable Time-Filtered Navier Stokes (TFNS) Simulations and Application to Film Cooling on a Flat Plate Through Long Cooling Tubes

    NASA Technical Reports Server (NTRS)

    Ameri, Ali A.; Shyam, Vikram; Rigby, David; Poinsatte, Phillip; Thurman, Douglas; Steinthorsson, Erlendur

    2014-01-01

    Computational fluid dynamics (CFD) analysis using the Reynolds-averaged Navier-Stokes (RANS) formulation for turbomachinery-related flows has enabled improved engine component designs. RANS methodology has limitations that are related to its inability to accurately describe the spectrum of flow phenomena encountered in engines. Examples of flows that are difficult to compute accurately with RANS include phenomena such as laminar/turbulent transition, turbulent mixing due to mixing of streams, and separated flows. Large eddy simulation (LES) can improve accuracy but at a considerably higher cost. In recent years, hybrid schemes that take advantage of both unsteady RANS and LES have been proposed. This study investigated an alternative scheme, the time-filtered Navier-Stokes (TFNS) method, applied to compressible flows. The method developed by Shih and Liu was implemented in the Glenn-Heat-Transfer (Glenn-HT) code and applied to film-cooling flows. In this report the method and its implementation are briefly described. The film effectiveness results obtained for film cooling from a row of 30° holes with a pitch of 3.0 diameters, emitting air at a nominal density ratio of unity and two blowing ratios of 0.5 and 1.0, are shown. Flow features under those conditions are also described.
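
    The time filtering itself can be illustrated on a scalar signal: a causal exponential filter d(phi_bar)/dt = (phi - phi_bar)/Delta retains scales slower than the filter width Delta and removes faster content, which is left to the subscale model (a conceptual sketch, not the Glenn-HT implementation):

      import numpy as np

      dt, Delta = 1.0e-4, 5.0e-3
      t = np.arange(0.0, 0.2, dt)
      phi = np.sin(2*np.pi*20*t) + 0.3*np.sin(2*np.pi*2000*t)   # slow + fast content

      phi_bar = np.zeros_like(phi)
      for i in range(1, len(t)):
          phi_bar[i] = phi_bar[i-1] + dt*(phi[i-1] - phi_bar[i-1])/Delta
      # phi_bar keeps the 20 Hz wave and strongly attenuates the 2 kHz component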

  19. Development of the Glenn-HT Computer Code to Enable Time-Filtered Navier-Stokes (TFNS) Simulations and Application to Film Cooling on a Flat Plate Through Long Cooling Tubes

    NASA Technical Reports Server (NTRS)

    Ameri, Ali; Shyam, Vikram; Rigby, David; Poinsatte, Philip; Thurman, Douglas; Steinthorsson, Erlendur

    2014-01-01

    Computational fluid dynamics (CFD) analysis using the Reynolds-averaged Navier-Stokes (RANS) formulation for turbomachinery-related flows has enabled improved engine component designs. RANS methodology has limitations that are related to its inability to accurately describe the spectrum of flow phenomena encountered in engines. Examples of flows that are difficult to compute accurately with RANS include phenomena such as laminar/turbulent transition, turbulent mixing due to mixing of streams, and separated flows. Large eddy simulation (LES) can improve accuracy but at a considerably higher cost. In recent years, hybrid schemes that take advantage of both unsteady RANS and LES have been proposed. This study investigated an alternative scheme, the time-filtered Navier-Stokes (TFNS) method, applied to compressible flows. The method developed by Shih and Liu was implemented in the Glenn-HT code and applied to film cooling flows. In this report the method and its implementation are briefly described. The film effectiveness results obtained for film cooling from a row of 30° holes with a pitch of 3.0 diameters, emitting air at a nominal density ratio of unity and four blowing ratios of 0.5, 1.0, 1.5 and 2.0, are shown. Flow features under those conditions are also described.

  20. Development of the Glenn Heat-Transfer (Glenn-HT) Computer Code to Enable Time-Filtered Navier-Stokes (TFNS) Simulations and Application to Film Cooling on a Flat Plate Through Long Cooling Tubes

    NASA Technical Reports Server (NTRS)

    Ameri, Ali; Shyam, Vikram; Rigby, David; Poinsatte, Phillip; Thurman, Douglas; Steinthorsson, Erlendur

    2014-01-01

    Computational fluid dynamics (CFD) analysis using the Reynolds-averaged Navier-Stokes (RANS) formulation for turbomachinery-related flows has enabled improved engine component designs. RANS methodology has limitations that are related to its inability to accurately describe the spectrum of flow phenomena encountered in engines. Examples of flows that are difficult to compute accurately with RANS include phenomena such as laminar/turbulent transition, turbulent mixing due to mixing of streams, and separated flows. Large eddy simulation (LES) can improve accuracy but at a considerably higher cost. In recent years, hybrid schemes that take advantage of both unsteady RANS and LES have been proposed. This study investigated an alternative scheme, the time-filtered Navier-Stokes (TFNS) method, applied to compressible flows. The method developed by Shih and Liu was implemented in the Glenn-Heat-Transfer (Glenn-HT) code and applied to film-cooling flows. In this report the method and its implementation are briefly described. The film effectiveness results obtained for film cooling from a row of 30° holes with a pitch of 3.0 diameters, emitting air at a nominal density ratio of unity and two blowing ratios of 0.5 and 1.0, are shown. Flow features under those conditions are also described.

  1. Using Coding Apps to Support Literacy Instruction and Develop Coding Literacy

    ERIC Educational Resources Information Center

    Hutchison, Amy; Nadolny, Larysa; Estapa, Anne

    2016-01-01

    In this article the authors present the concept of Coding Literacy and describe the ways in which coding apps can support the development of Coding Literacy and disciplinary and digital literacy skills. Through detailed examples, we describe how coding apps can be integrated into literacy instruction to support learning of the Common Core English…

  2. Nyx: A MASSIVELY PARALLEL AMR CODE FOR COMPUTATIONAL COSMOLOGY

    SciTech Connect

    Almgren, Ann S.; Bell, John B.; Lijewski, Mike J.; Lukic, Zarija; Van Andel, Ethan

    2013-03-01

    We present a new N-body and gas dynamics code, called Nyx, for large-scale cosmological simulations. Nyx follows the temporal evolution of a system of discrete dark matter particles gravitationally coupled to an inviscid ideal fluid in an expanding universe. The gas is advanced in an Eulerian framework with block-structured adaptive mesh refinement; a particle-mesh scheme using the same grid hierarchy is used to solve for self-gravity and advance the particles. Computational results demonstrating the validation of Nyx on standard cosmological test problems, and the scaling behavior of Nyx to 50,000 cores, are presented.
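
    The particle-mesh half of such a scheme rests on depositing particle mass onto the grid; a one-dimensional cloud-in-cell sketch (a toy analogue, not Nyx code):

      import numpy as np

      def deposit_cic(x, n_cells, box=1.0):
          # 1-D cloud-in-cell deposition: each particle shares its mass
          # linearly between its two nearest cell centers (periodic domain)
          dx = box/n_cells
          rho = np.zeros(n_cells)
          xi = x/dx - 0.5                  # position in cell-centered units
          i = np.floor(xi).astype(int)
          w = xi - i                       # fractional distance to the right cell
          np.add.at(rho, i % n_cells, (1.0 - w)/dx)
          np.add.at(rho, (i + 1) % n_cells, w/dx)
          return rho

      rng = np.random.default_rng(0)
      rho = deposit_cic(rng.random(100_000), 64)
      print("mean density:", rho.mean())   # ~ number of particles per unit length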

  3. Methodology, status and plans for development and assessment of Cathare code

    SciTech Connect

    Bestion, D.; Barre, F.; Faydide, B.

    1997-07-01

    This paper presents the methodology, status and plans for the development, assessment and uncertainty evaluation of the Cathare code. Cathare is a thermalhydraulic code developed by CEA (DRN), IPSN, EDF and FRAMATOME for PWR safety analysis. First, the status of the code development and assessment is presented, together with the general strategy used for developing and assessing the code. Analytical experiments with separate effect tests and component tests are used for the development and validation of closure laws. Successive Revisions of constitutive laws are implemented in successive Versions of the code and assessed. System tests or integral tests are used to validate the general consistency of the Revision. Each delivery of a code Version + Revision is fully assessed and documented. A methodology is being developed to determine the uncertainty on all constitutive laws of the code using calculations of many analytical tests and applying the Discrete Adjoint Sensitivity Method (DASM). Finally, the plans for future developments of the code are presented. They concern optimization of code performance through parallel computing (the code will be used for real-time, full-scope plant simulators), coupling with many other codes (neutronic codes, severe accident codes), and application of the code to containment thermalhydraulics. Physical improvements are also required in the field of low-pressure transients and in the 3-D model.

  4. GEOS Code Development Road Map - May, 2013

    SciTech Connect

    Johnson, Scott; Settgast, Randolph; Fu, Pengcheng; Antoun, Tarabay; Ryerson, F. J.

    2013-05-03

    GEOS is a massively parallel computational framework designed to enable HPC-based simulations of subsurface reservoir stimulation activities with the goal of optimizing current operations and evaluating innovative stimulation methods. GEOS will enable coupling of different solvers associated with the various physical processes occurring during reservoir stimulation in unique and sophisticated ways, adapted to various geologic settings, materials and stimulation methods. The overall architecture of the framework includes consistent data structures and will allow incorporation of additional physical and materials models as demanded by future applications. Along with predicting the initiation, propagation and reactivation of fractures, GEOS will also generate a seismic source term that can be linked with seismic wave propagation codes to generate synthetic microseismicity at surface and downhole arrays. Similarly, the output from GEOS can be linked with existing fluid/thermal transport codes. GEOS can also be linked with existing, non-intrusive uncertainty quantification schemes to constrain uncertainty in its predictions and sensitivity to the various parameters describing the reservoir and stimulation operations. We anticipate that an implicit-explicit 3D version of GEOS, including a preliminary seismic source model, will be available for parametric testing and validation against experimental and field data by Oct. 1, 2013.

  5. Analysis of the Length of Braille Texts in English Braille American Edition, the Nemeth Code, and Computer Braille Code versus the Unified English Braille Code

    ERIC Educational Resources Information Center

    Knowlton, Marie; Wetzel, Robin

    2006-01-01

    This study compared the length of text in English Braille American Edition, the Nemeth code, and the computer braille code with the Unified English Braille Code (UEBC)--also known as Unified English Braille (UEB). The findings indicate that differences in the length of text are dependent on the type of material that is transcribed and the grade…

  6. Temporal codes and computations for sensory representation and scene analysis.

    PubMed

    Cariani, Peter A

    2004-09-01

    This paper considers a space of possible temporal codes, surveys neurophysiological and psychological evidence for their use in nervous systems, and presents examples of neural timing networks that operate in the time-domain. Sensory qualities can be encoded temporally by means of two broad strategies: stimulus-driven temporal correlations (phase-locking) and stimulus-triggering of endogenous temporal response patterns. Evidence for stimulus-related spike timing patterns exists in nearly every sensory modality, and such information can be potentially utilized for representation of stimulus qualities, localization of sources, and perceptual grouping. Multiple strategies for temporal (time, frequency, and code-division) multiplexing of information for transmission and grouping are outlined. Using delays and multiplications (coincidences), neural timing networks perform time-domain signal processing operations to compare, extract and separate temporal patterns. Separation of synthetic double vowels by a recurrent neural timing network is used to illustrate how coherences in temporal fine structure can be exploited to build up and separate periodic signals with different fundamentals. Timing nets constitute a time-domain scene analysis strategy based on temporal pattern invariance rather than feature-based labeling, segregation and binding of channels. Further potential implications of temporal codes and computations for new kinds of neural networks are explored.
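
    The "delays and multiplications (coincidences)" operation described above can be sketched directly: delay one spike train, count coincidences with another, and scan the delay (a toy model with random spiking phase-locked to a common 8 Hz drive; all rates are illustrative):

      import numpy as np

      rng = np.random.default_rng(3)
      dt = 1e-3                                # 1 ms bins
      t = np.arange(0.0, 2.0, dt)

      drive = 0.5*(1 + np.sin(2*np.pi*8*t))    # shared periodic drive
      s1 = rng.random(t.size) < 0.05*drive     # two phase-locked spike trains
      s2 = rng.random(t.size) < 0.05*drive

      def coincidences(s1, s2, d):             # count s1(t - d) AND s2(t)
          return np.count_nonzero(s1[:s1.size - d] & s2[d:])

      counts = [coincidences(s1, s2, d) for d in range(200)]
      print("best delay: %.3f s" % (np.argmax(counts)*dt))  # ~0 or a 125 ms multiple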

  7. RISKIND: An enhanced computer code for National Environmental Policy Act transportation consequence analysis

    SciTech Connect

    Biwer, B.M.; LePoire, D.J.; Chen, S.Y.

    1996-03-01

    The RISKIND computer program was developed for the analysis of radiological consequences and health risks to individuals and the collective population from exposures associated with the transportation of spent nuclear fuel (SNF) or other radioactive materials. The code is intended to provide scenario-specific analyses when evaluating alternatives for environmental assessment activities, including those for major federal actions involving radioactive material transport as required by the National Environmental Policy Act (NEPA). As such, rigorous procedures have been implemented to enhance the code's credibility and strenuous efforts have been made to enhance ease of use of the code. To increase the code's reliability and credibility, a new version of RISKIND was produced under a quality assurance plan that covered code development and testing, and a peer review process was conducted. During development of the new version, the flexibility and ease of use of RISKIND were enhanced through several major changes: (1) a Windows™ point-and-click interface replaced the old DOS menu system, (2) the remaining model input parameters were added to the interface, (3) databases were updated, (4) the program output was revised, and (5) on-line help has been added. RISKIND has been well received by users and has been established as a key component in radiological transportation risk assessments through its acceptance by the U.S. Department of Energy community in recent environmental impact statements (EISs) and its continued use in the current preparation of several EISs.

  8. Development of Tripropellant CFD Design Code

    NASA Technical Reports Server (NTRS)

    Farmer, Richard C.; Cheng, Gary C.; Anderson, Peter G.

    1998-01-01

    A tripropellant (e.g., GO2/H2/RP-1) CFD design code has been developed to predict the local mixing of multiple propellant streams as they are injected into a rocket motor. The code utilizes real fluid properties to account for the mixing and finite-rate combustion processes which occur near an injector faceplate; thus the analysis serves as a multi-phase homogeneous spray combustion model. Proper accounting of the combustion allows accurate gas-side temperature predictions, which are essential for accurate wall heating analyses. The complex secondary flows which are predicted to occur near a faceplate cannot be quantitatively captured by less accurate methodology. Test cases have been simulated to describe an axisymmetric tripropellant coaxial injector and a 3-dimensional RP-1/LO2 impinger injector system. The analysis has been shown to realistically describe such injector combustion flowfields. The code is also valuable for designing meaningful future experiments by determining the critical location and type of measurements needed.

  9. A Computer Code for Swirling Turbulent Axisymmetric Recirculating Flows in Practical Isothermal Combustor Geometries

    NASA Technical Reports Server (NTRS)

    Lilley, D. G.; Rhode, D. L.

    1982-01-01

    A primitive pressure-velocity variable finite difference computer code was developed to predict swirling recirculating inert turbulent flows in axisymmetric combustors in general, and for application to a specific idealized combustion chamber with sudden or gradual expansion. The technique involves a staggered grid system for axial and radial velocities, a line relaxation procedure for efficient solution of the equations, a two-equation k-epsilon turbulence model, a stairstep boundary representation of the expansion flow, and realistic accommodation of swirl effects. A user's manual dealing with the computational problem is presented, showing how the mathematical basis and computational scheme may be translated into a computer program. A flow chart, a FORTRAN IV listing, notes about various subroutines, and a user's guide are supplied as an aid to prospective users of the code.

  10. High frame rate photoacoustic computed tomography using coded excitation

    NASA Astrophysics Data System (ADS)

    Azuma, Masataka; Zhang, Haichong K.; Kondo, Kengo; Namita, Takeshi; Yamakawa, Makoto; Shiina, Tsuyoshi

    2015-03-01

    Photoacoustic Computed Tomography (PACT) records signals from a wide range of angles to achieve uniform, high-resolution images. A high-power laser is generally used for PACT, but the long acquisition time with a single probe is a problem due to the low pulse-repetition frequency (PRF). For PACT, this degrades image resolution and contrast because it is hard to scan with a small step interval. Moreover, in vivo measurement requires a fast image acquisition system to avoid motion artifacts. The problem can be resolved by using a high-PRF laser, which provides only weak energy. Averaging measured signals many times can mitigate the low signal-to-noise issue, but the PRF is restricted by the acoustic time of flight, so averaging again increases the measurement time. Here, we present the coded-excitation approach, which we previously proposed for linear scanning, to increase the PACT frame rate. Coded excitation irradiates temporally encoded pulses and enhances the signal amplitude through decoding; the PRF is thus not restricted by the acoustic time of flight. Consequently, acquisition time can be shortened by increasing the PRF, and the SNR increases for the same measurement time. To validate the proposed idea, we conducted experiments using a high-PRF laser with a revolving motor and compared the performance of coded excitation to that of averaging. Results demonstrated that the contamination of signals acquired from different angles was negligible, and that the scanning pitch was remarkably improved because the start point of decoding can be set at any code in the periodic sequence.
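
    A rough sketch of the decoding idea follows. The code sequence, firing interval, and noise level are invented, and np.roll makes the sequence effectively periodic, mirroring the periodic codes mentioned above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Length-7 binary sequence mapped to +/-1 (an illustrative choice, not the authors' code)
code = np.array([1, 1, 1, -1, 1, -1, -1])
T = 100                        # firing interval in samples (assumed)
n = len(code) * T + 400

# Toy photoacoustic impulse response: a single echo at sample 250
h = np.zeros(n)
h[250] = 1.0

# Received signal: each laser firing is weighted by one code element
r = np.zeros(n)
for i, c in enumerate(code):
    r += c * np.roll(h, i * T)
r += 0.5 * rng.standard_normal(n)   # additive noise, level chosen for illustration

# Decoding: correlate with the code at the firing interval
y = np.zeros(n)
for i, c in enumerate(code):
    y += c * np.roll(r, -i * T)

# Signal amplitude grows ~len(code), noise only ~sqrt(len(code))
print("peak before decoding:", np.max(np.abs(r)))
print("decoded echo amplitude:", y[250])
```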

  11. An Object-oriented Computer Code for Aircraft Engine Weight Estimation

    NASA Technical Reports Server (NTRS)

    Tong, Michael T.; Naylor, Bret A.

    2008-01-01

    Reliable engine-weight estimation at the conceptual design stage is critical to the development of new aircraft engines. It helps to identify the best engine concept amongst several candidates. At NASA Glenn (GRC), the Weight Analysis of Turbine Engines (WATE) computer code, originally developed by Boeing Aircraft, has been used to estimate the engine weight of various conceptual engine designs. The code, written in FORTRAN, was originally developed for NASA in 1979. Since then, substantial improvements have been made to the code to improve the weight calculations for most of the engine components. Most recently, to improve the maintainability and extensibility of WATE, the FORTRAN code has been converted into an object-oriented version. The conversion was done within NASA's NPSS (Numerical Propulsion System Simulation) framework. This enables WATE to interact seamlessly with the thermodynamic cycle model, which provides component flow data such as airflows, temperatures, and pressures that are required for sizing the components and weight calculations. The tighter integration between NPSS and WATE would greatly enhance system-level analysis and optimization capabilities. It also would facilitate the enhancement of the WATE code for next-generation aircraft and space propulsion systems. In this paper, the architecture of the object-oriented WATE code (or WATE++) is described. Both the FORTRAN and object-oriented versions of the code are employed to compute the dimensions and weight of a 300-passenger aircraft engine (GE90 class). Both versions of the code produce essentially identical results, as should be the case. Keywords: NASA, aircraft engine, weight, object-oriented

  12. An Object-Oriented Computer Code for Aircraft Engine Weight Estimation

    NASA Technical Reports Server (NTRS)

    Tong, Michael T.; Naylor, Bret A.

    2009-01-01

    Reliable engine-weight estimation at the conceptual design stage is critical to the development of new aircraft engines. It helps to identify the best engine concept amongst several candidates. At NASA Glenn Research Center (GRC), the Weight Analysis of Turbine Engines (WATE) computer code, originally developed by Boeing Aircraft, has been used to estimate the engine weight of various conceptual engine designs. The code, written in FORTRAN, was originally developed for NASA in 1979. Since then, substantial improvements have been made to the code to improve the weight calculations for most of the engine components. Most recently, to improve the maintainability and extensibility of WATE, the FORTRAN code has been converted into an object-oriented version. The conversion was done within NASA's NPSS (Numerical Propulsion System Simulation) framework. This enables WATE to interact seamlessly with the thermodynamic cycle model, which provides component flow data such as airflows, temperatures, and pressures that are required for sizing the components and weight calculations. The tighter integration between NPSS and WATE would greatly enhance system-level analysis and optimization capabilities. It also would facilitate the enhancement of the WATE code for next-generation aircraft and space propulsion systems. In this paper, the architecture of the object-oriented WATE code (or WATE++) is described. Both the FORTRAN and object-oriented versions of the code are employed to compute the dimensions and weight of a 300-passenger aircraft engine (GE90 class). Both versions of the code produce essentially identical results, as should be the case.

  13. A 3D-PNS computer code for the calculation of supersonic combusting flows

    NASA Technical Reports Server (NTRS)

    Chitsomboon, Tawit; Northam, G. Burton

    1988-01-01

    A computer code has been developed based on the three-dimensional parabolized Navier-Stokes (PNS) equations which govern the supersonic combusting flow of the hydrogen-air system. The finite difference algorithm employed was a hybrid of the Schiff-Steger and Vigneron et al. algorithms, and is fully implicit and fully coupled. The combustion of hydrogen and air was modeled by the finite-rate two-step combustion model of Rogers-Chinitz. A new dependent variable vector was introduced to simplify the numerical algorithm. Robustness of the algorithm was considerably enhanced by introducing an adjustable parameter. The computer code was used to solve a premixed shock-induced combustion problem and the results were compared with those of a full Navier-Stokes code. Reasonably good agreement was obtained at a fraction of the cost of the full Navier-Stokes procedure.

  14. Inlet-Compressor Analysis Performed Using Coupled Computational Fluid Dynamics Codes

    NASA Technical Reports Server (NTRS)

    Cole, Gary L.; Suresh, Ambady; Townsend, Scott

    1999-01-01

    A thorough understanding of dynamic interactions between inlets and compressors is extremely important to the design and development of propulsion control systems, particularly for supersonic aircraft such as the High-Speed Civil Transport (HSCT). Computational fluid dynamics (CFD) codes are routinely used to analyze individual propulsion components. By coupling the appropriate CFD component codes, it is possible to investigate inlet-compressor interactions. The objectives of this work were to gain a better understanding of inlet-compressor interaction physics, to formulate a more realistic compressor-face boundary condition for time-accurate CFD simulations of inlets, and to take a first step toward the CFD simulation of an entire engine by coupling multidimensional component codes. This work was conducted at the NASA Lewis Research Center by a team of civil servants and support service contractors as part of the High Performance Computing and Communications Program (HPCCP).

  15. XSECT: A computer code for generating fuselage cross sections - user's manual

    NASA Technical Reports Server (NTRS)

    Ames, K. R.

    1982-01-01

    A computer code, XSECT, has been developed to generate fuselage cross sections from a given area distribution and wing definition. The cross sections are generated to match the wing definition while conforming to the area requirement. An iterative procedure is used to generate each cross section. Fuselage area balancing may be included in this procedure if desired. The code is intended as an aid for engineers who must first design a wing under certain aerodynamic constraints and then design a fuselage for the wing such that the constraints remain satisfied. This report contains the information necessary for accessing and executing the code, which is written in FORTRAN to execute on the Cyber 170 series computers (NOS operating system) and produces graphical output for a Tektronix 4014 CRT. The LRC graphics software is used in combination with the interface between this software and the PLOT 10 software.

  16. MMA, A Computer Code for Multi-Model Analysis

    USGS Publications Warehouse

    Poeter, Eileen P.; Hill, Mary C.

    2007-01-01

    This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that they tend to favor more complicated models as more data become available than do the other methods, which makes sense in many situations. Many applications of MMA will
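
    The ranking step can be illustrated with the criteria named above. Below is a minimal sketch assuming least-squares calibration statistics; the SSWR values and parameter counts are invented:

```python
import numpy as np

# Hypothetical calibration statistics for three alternative models:
# sum of squared weighted residuals (SSWR) and number of parameters (k).
models = {"model_A": (12.4, 4), "model_B": (10.1, 6), "model_C": (9.8, 9)}
n = 50  # number of observations, shared by all models

def aicc(sswr, k):
    # Second-order-bias-corrected AIC for least-squares calibration
    return n * np.log(sswr / n) + 2 * k + 2 * k * (k + 1) / (n - k - 1)

def bic(sswr, k):
    # Bayesian Information Criterion
    return n * np.log(sswr / n) + k * np.log(n)

def model_weights(criterion):
    scores = {m: criterion(s, k) for m, (s, k) in models.items()}
    best = min(scores.values())
    raw = {m: np.exp(-0.5 * (sc - best)) for m, sc in scores.items()}
    z = sum(raw.values())
    return {m: r / z for m, r in raw.items()}

print("AICc weights:", model_weights(aicc))
print("BIC weights :", model_weights(bic))
# A model-averaged prediction is the weight-weighted sum of model predictions.
```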

  17. TEMP: a computer code to calculate fuel pin temperatures during a transient. [LMFBR

    SciTech Connect

    Bard, F E; Christensen, B Y; Gneiting, B C

    1980-04-01

    The computer code TEMP calculates fuel pin temperatures during a transient. It was developed to accommodate temperature calculations in any system of axi-symmetric concentric cylinders. When used to calculate fuel pin temperatures, the code will handle a fuel pin as simple as a solid cylinder or as complex as a central void surrounded by fuel that is broken into three regions by two circumferential cracks. Any fuel situation between these two extremes can be analyzed along with additional cladding, heat sink, coolant or capsule regions surrounding the fuel. The one-region version of the code accurately calculates the solution to two problems having closed-form solutions. The code uses an implicit method, an explicit method and a Crank-Nicolson (implicit-explicit) method.
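
    A minimal sketch of the three time-integration options the abstract names, applied to a single homogeneous annular region with fixed surface temperatures, is shown below; the geometry, material properties, and boundary values are assumed, not TEMP's:

```python
import numpy as np

r_in, r_out, N = 0.001, 0.003, 41   # radii [m] and node count (illustrative)
alpha = 1e-6                         # thermal diffusivity [m^2/s] (assumed)
dt, steps = 0.01, 200
theta = 0.5                          # 0 = explicit, 1 = implicit, 0.5 = Crank-Nicolson

r = np.linspace(r_in, r_out, N)
dr = r[1] - r[0]

# Assemble the radial conduction operator A: dT/dt = alpha * (1/r) d/dr (r dT/dr)
A = np.zeros((N, N))
for j in range(1, N - 1):
    rp, rm = 0.5 * (r[j] + r[j + 1]), 0.5 * (r[j] + r[j - 1])
    A[j, j - 1] = alpha * rm / (r[j] * dr**2)
    A[j, j + 1] = alpha * rp / (r[j] * dr**2)
    A[j, j] = -(A[j, j - 1] + A[j, j + 1])
# Boundary rows stay zero, so the surface temperatures are held fixed (Dirichlet)

T = np.full(N, 300.0)
T[0], T[-1] = 1000.0, 400.0          # inner/outer surface temperatures [K]

I_mat = np.eye(N)
lhs = I_mat - theta * dt * A         # theta-method: implicit part
rhs_op = I_mat + (1.0 - theta) * dt * A  # explicit part
for _ in range(steps):
    T = np.linalg.solve(lhs, rhs_op @ T)

print("temperature profile (every 10th node):", np.round(T[::10], 1))
```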

  18. Users manual for CAFE-3D : a computational fluid dynamics fire code.

    SciTech Connect

    Khalil, Imane; Lopez, Carlos; Suo-Anttila, Ahti Jorma

    2005-03-01

    The Container Analysis Fire Environment (CAFE) computer code has been developed to model all relevant fire physics for predicting the thermal response of massive objects engulfed in large fires. It provides realistic fire thermal boundary conditions for use in design of radioactive material packages and in risk-based transportation studies. The CAFE code can be coupled to commercial finite-element codes such as MSC PATRAN/THERMAL and ANSYS. This coupled system of codes can be used to determine the internal thermal response of finite element models of packages to a range of fire environments. This document is a user manual that describes how to use the three-dimensional version of CAFE and details the CAFE input and output parameters. Since this is a user manual, only a brief theoretical description of the equations and physical models is included.

  19. Reasoning with Computer Code: a new Mathematical Logic

    NASA Astrophysics Data System (ADS)

    Pissanetzky, Sergio

    2013-01-01

    A logic is a mathematical model of knowledge used to study how we reason, how we describe the world, and how we infer the conclusions that determine our behavior. The logic presented here is natural. It has been experimentally observed, not designed. It represents knowledge as a causal set, includes a new type of inference based on the minimization of an action functional, and generates its own semantics, making it unnecessary to prescribe one. This logic is suitable for high-level reasoning with computer code, including tasks such as self-programming, object-oriented analysis, refactoring, systems integration, code reuse, and automated programming from sensor-acquired data. A strong theoretical foundation exists for the new logic. The inference derives laws of conservation from the permutation symmetry of the causal set, and calculates the corresponding conserved quantities. The association between symmetries and conservation laws is a fundamental and well-known law of nature and a general principle in modern theoretical Physics. The conserved quantities take the form of a nested hierarchy of invariant partitions of the given set. The logic associates elements of the set and binds them together to form the levels of the hierarchy. It is conjectured that the hierarchy corresponds to the invariant representations that the brain is known to generate. The hierarchies also represent fully object-oriented, self-generated code, that can be directly compiled and executed (when a compiler becomes available), or translated to a suitable programming language. The approach is constructivist because all entities are constructed bottom-up, with the fundamental principles of nature being at the bottom, and their existence is proved by construction. The new logic is mathematically introduced and later discussed in the context of transformations of algorithms and computer programs. We discuss what a full self-programming capability would really mean. We argue that self

  20. Guide to AERO2S and WINGDES Computer Codes for Prediction and Minimization of Drag Due to Lift

    NASA Technical Reports Server (NTRS)

    Carlson, Harry W.; Chu, Julio; Ozoroski, Lori P.; McCullers, L. Arnold

    1997-01-01

    The computer codes AERO2S and WINGDES are now widely used for the analysis and design of airplane lifting surfaces under conditions that tend to induce flow separation. These codes have undergone continued development to provide additional capabilities since the introduction of the original versions over a decade ago. This code development has been reported in a variety of publications (NASA technical papers, NASA contractor reports, and society journals). Some modifications have not been publicized at all. Users of these codes have suggested the desirability of combining in a single document the descriptions of the code development, an outline of the features of each code, and suggestions for effective code usage. This report is intended to supply that need.

  1. Development, Verification and Validation of Enclosure Radiation Capabilities in the CHarring Ablator Response (CHAR) Code

    NASA Technical Reports Server (NTRS)

    Salazar, Giovanni; Droba, Justin C.; Oliver, Brandon; Amar, Adam J.

    2016-01-01

    With the recent development of multi-dimensional thermal protection system (TPS) material response codes, the capability to account for radiative heating has become a requirement. This paper presents recent efforts to implement such capabilities in the CHarring Ablator Response (CHAR) code developed at NASA's Johnson Space Center. This work also describes the different numerical methods implemented in the code to compute view factors for radiation problems involving multiple surfaces. Furthermore, verification and validation of the code's radiation capabilities are demonstrated by comparing solutions to analytical results, to other codes, and to radiant test data.
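
    View factors of the kind mentioned can be estimated by Monte Carlo integration over surface-point pairs. The following sketch treats two parallel, coaxial unit squares, an assumed test geometry rather than one of CHAR's implemented methods:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two unit squares, parallel and coaxial, separated by distance h (assumed setup)
h, n_samples = 1.0, 200_000

# Uniformly sampled point pairs, one point on each surface
p1 = np.column_stack([rng.random(n_samples), rng.random(n_samples), np.zeros(n_samples)])
p2 = np.column_stack([rng.random(n_samples), rng.random(n_samples), np.full(n_samples, h)])

d = p2 - p1
s2 = np.sum(d * d, axis=1)          # squared distance between sample pairs
cos1 = d[:, 2] / np.sqrt(s2)        # cosine to the +z normal of surface 1
cos2 = cos1                          # parallel facing surfaces share the cosine

# F12 = (1/A1) * double integral of cos1*cos2 / (pi s^2); here A1 = A2 = 1,
# so the Monte Carlo estimate reduces to the sample mean of the integrand.
F12 = np.mean(cos1 * cos2 / (np.pi * s2))
print(f"Monte Carlo view factor F12 ~ {F12:.4f}")
```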

  2. Neural coding of computational factors affecting decision making.

    PubMed

    Dreher, Jean-Claude

    2013-01-01

    We constantly need to make decisions that can result in rewards of different amounts, with different probabilities, and at different times. To characterize the neural coding of such computational factors affecting value-based decision making, we have investigated how reward information processing is influenced by parameters such as reward magnitude, probability, delay, effort, and uncertainty, using either fMRI in healthy humans or intracranial recordings in patients with epilepsy. We decomposed brain signals modulated by these computational factors, showing that prediction error (PE), salient PE, and uncertainty signals are computed in partially overlapping brain circuits and that both transient and sustained uncertainty signals coexist in the brain. When investigating the neural representation of primary and secondary rewards, we found both a common brain network, including the ventromedial prefrontal cortex and ventral striatum, and a functional organization of the orbitofrontal cortex according to reward type. Moreover, separate valuation systems were engaged for delay and effort costs when deciding between options. Finally, genetic variations in dopamine-related genes influenced the response of the reward system and may contribute to individual differences in reward-seeking behavior and in predisposition to neuropsychiatric disorders.

  3. PIC codes for plasma accelerators on emerging computer architectures (GPUS, Multicore/Manycore CPUS)

    NASA Astrophysics Data System (ADS)

    Vincenti, Henri

    2016-03-01

    The advent of exascale computers will enable 3D simulations of new laser-plasma interaction regimes that were previously out of reach of current petascale computers. However, the paradigm used to write current PIC codes will have to change in order to fully exploit the potential of these new computing architectures. Indeed, achieving exascale computing facilities in the next decade will be a great challenge in terms of energy consumption and will imply hardware developments directly impacting our way of implementing PIC codes. As data movement (from die to network) is by far the most energy-consuming part of an algorithm, future computers will tend to increase memory locality at the hardware level and reduce energy consumption related to data movement by using more and more cores on each compute node ("fat nodes") with a reduced clock speed to allow for efficient cooling. To compensate for the frequency decrease, CPU vendors are making use of long SIMD instruction registers that are able to process multiple data with one arithmetic operator in one clock cycle. SIMD register length is expected to double every four years. GPUs also have a reduced clock speed per core and can process Multiple Instructions on Multiple Data (MIMD). At the software level, Particle-In-Cell (PIC) codes will thus have to achieve both good memory locality and vectorization (for multicore/manycore CPUs) to fully take advantage of these upcoming architectures. In this talk, we present the portable solutions we implemented in our high-performance skeleton PIC code PICSAR to achieve both good memory locality and cache reuse as well as good vectorization on SIMD architectures. We also present the portable solutions used to parallelize the pseudo-spectral quasi-cylindrical code FBPIC on GPUs using the Numba Python compiler.
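
    As a toy illustration of the memory-locality and vectorization point, the sketch below uses a structure-of-arrays layout so the particle push reduces to unit-stride array operations. It is a NumPy stand-in for the SIMD kernels discussed; the field and parameters are invented:

```python
import numpy as np

# Structure-of-arrays layout: one contiguous array per particle attribute,
# so the position and velocity updates map onto unit-stride SIMD loops.
n_part = 1_000_000
rng = np.random.default_rng(2)
x = rng.random(n_part)             # positions on a periodic [0, 1) domain
v = rng.standard_normal(n_part)    # velocities
q_over_m, dt = -1.0, 1e-3           # normalized charge-to-mass ratio (assumed)

nx = 256
Ex = np.sin(2 * np.pi * np.linspace(0, 1, nx, endpoint=False))  # toy field

def push(x, v, Ex, dt):
    # Gather the field at particle positions (nearest grid point, for brevity),
    # then leapfrog-update velocity and position, all as vectorized operations.
    idx = (x * nx).astype(np.int64) % nx
    v += q_over_m * Ex[idx] * dt
    x = (x + v * dt) % 1.0
    return x, v

for _ in range(10):
    x, v = push(x, v, Ex, dt)
print("mean kinetic energy:", 0.5 * np.mean(v * v))
```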

  4. Three-dimensional radiation dose mapping with the TORT computer code

    SciTech Connect

    Slater, C.O.; Pace, J.V. III; Childs, R.L.; Haire, M.J. ); Koyama, T. )

    1991-01-01

    The Consolidated Fuel Reprocessing Program (CFRP) at Oak Ridge National Laboratory (ORNL) has performed radiation shielding studies in support of various facility designs for many years. Computer codes employing the point-kernel method have been used, and the accuracy of these codes is within acceptable limits. However, to further improve the accuracy and to calculate dose at a larger number of locations, a higher order method is desired, even for analyses performed in the early stages of facility design. Consequently, the three-dimensional discrete ordinates transport code TORT, developed at ORNL in the mid-1980s, was selected to examine in detail the dose received at equipment locations. The capabilities of the code have been previously reported. Recently, the Power Reactor and Nuclear Fuel Development Corporation in Japan and the US Department of Energy have used the TORT code as part of a collaborative agreement to jointly develop breeder reactor fuel reprocessing technology. In particular, CFRP used the TORT code to estimate radiation dose levels within the main process cell for a conceptual plant design and to establish process equipment lifetimes. The results reported in this paper are for a conceptual plant design that included the mechanical head end (i.e., the disassembly and shear machines), solvent extraction equipment, and miscellaneous process support equipment.

  5. Computer Tensor Codes to Design the Warp Drive

    NASA Astrophysics Data System (ADS)

    Maccone, C.

    To address problems in Breakthrough Propulsion Physics (BPP) and design the Warp Drive one needs sheer computing capabilities. This is because General Relativity (GR) and Quantum Field Theory (QFT) are so mathematically sophisticated that the amount of analytical calculations is prohibitive and one can hardly do all of them by hand. In this paper we make a comparative review of the main tensor calculus capabilities of the three most advanced and commercially available “symbolic manipulator” codes. We also point out that currently one faces such a variety of different conventions in tensor calculus that it is difficult or impossible to compare results obtained by different scholars in GR and QFT. Mathematical physicists, experimental physicists and engineers have each their own way of customizing tensors, especially by using different metric signatures, different metric determinant signs, different definitions of the basic Riemann and Ricci tensors, and by adopting different systems of physical units. This chaos greatly hampers progress toward the design of the Warp Drive. It is thus suggested that NASA would be a suitable organization to establish standards in symbolic tensor calculus and anyone working in BPP should adopt these standards. Alternatively other institutions, like CERN in Europe, might consider the challenge of starting the preliminary implementation of a Universal Tensor Code to design the Warp Drive.

  6. A proposed framework for computational fluid dynamics code calibration/validation

    SciTech Connect

    Oberkampf, W.L.

    1993-12-31

    The paper reviews the terminology and methodology that have been introduced during the last several years for building confidence in the predictions from Computational Fluid Dynamics (CFD) codes. Code validation terminology developed for nuclear reactor analyses and aerospace applications is reviewed and evaluated. Currently used terminology such as "calibrated code," "validated code," and a "validation experiment" is discussed along with the shortcomings and criticisms of these terms. A new framework is proposed for building confidence in CFD code predictions that overcomes some of the difficulties of past procedures and delineates the causes of uncertainty in CFD predictions. Building on previous work, new definitions of code verification and calibration are proposed. These definitions provide more specific requirements for the knowledge level of the flow physics involved and the solution accuracy of the given partial differential equations. As part of the proposed framework, categories are also proposed for flow physics research, flow modeling research, and the application of numerical predictions. The contributions of physical experiments, analytical solutions, and other numerical solutions are discussed, showing that each should be designed to achieve a distinctively separate purpose in building confidence in the accuracy of CFD predictions. A number of examples are given for each approach to suggest methods for obtaining the highest value for CFD code quality assurance.

  7. A high temperature fatigue life prediction computer code based on the Total Strain Version of Strainrange Partitioning (SRP)

    NASA Technical Reports Server (NTRS)

    Mcgaw, Michael A.; Saltsman, James F.

    1991-01-01

    A recently developed high-temperature fatigue life prediction computer code is presented, based on the Total Strain version of Strainrange Partitioning (TS-SRP). Included in this code are procedures for characterizing the creep-fatigue durability behavior of an alloy according to TS-SRP guidelines and predicting cyclic life for complex cycle types for both isothermal and thermomechanical conditions. A reasonably extensive materials properties database is included with the code.

  8. A Modular Computer Code for Simulating Reactive Multi-Species Transport in 3-Dimensional Groundwater Systems

    SciTech Connect

    TP Clement

    1999-06-24

    RT3DV1 (Reactive Transport in 3-Dimensions) is a computer code that solves the coupled partial differential equations that describe reactive flow and transport of multiple mobile and/or immobile species in three-dimensional saturated groundwater systems. RT3D is a generalized multi-species version of the US Environmental Protection Agency (EPA) transport code, MT3D (Zheng, 1990). The current version of RT3D uses the advection and dispersion solvers from the DOD-1.5 (1997) version of MT3D. As with MT3D, RT3D also requires the groundwater flow code MODFLOW for computing spatial and temporal variations in groundwater head distribution. The RT3D code was originally developed to support the contaminant transport modeling efforts at natural attenuation demonstration sites. As a research tool, RT3D has also been used to model several laboratory and pilot-scale active bioremediation experiments. The performance of RT3D has been validated by comparing the code results against various numerical and analytical solutions. The code is currently being used to model field-scale natural attenuation at multiple sites. The RT3D code is unique in that it includes an implicit reaction solver that makes the code sufficiently flexible for simulating various types of chemical and microbial reaction kinetics. RT3D V1.0 supports seven pre-programmed reaction modules that can be used to simulate different types of reactive contaminants including benzene-toluene-xylene mixtures (BTEX), and chlorinated solvents such as tetrachloroethene (PCE) and trichloroethene (TCE). In addition, RT3D has a user-defined reaction option that can be used to simulate any other types of user-specified reactive transport systems. This report describes the mathematical details of the RT3D computer code and its input/output data structure. It is assumed that the user is familiar with the basics of groundwater flow and contaminant transport mechanics. In addition, RT3D users are expected to have some experience in
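
    Reactive-transport codes of this kind commonly advance transport and chemistry sequentially within a time step (operator splitting). A minimal sketch follows, with first-order decay standing in for a reaction module; the parameters are invented and these are not RT3D's solvers:

```python
import numpy as np

# 1D column: first-order upwind advection followed by a kinetic reaction step
# (sequential operator splitting). Parameters are illustrative only.
nx, dx, dt = 200, 1.0, 0.5
vel = 1.0                  # pore velocity [m/d]
k_decay = 0.05             # first-order decay rate [1/d], e.g. biodegradation

c = np.zeros(nx)
c[0] = 1.0                 # constant-concentration inlet boundary

def advect(c):
    out = c.copy()
    out[1:] -= vel * dt / dx * (c[1:] - c[:-1])   # upwind difference (CFL = 0.5)
    out[0] = 1.0
    return out

def react(c):
    return c * np.exp(-k_decay * dt)              # exact solution of dc/dt = -k*c

for _ in range(300):
    c = react(advect(c))

print("concentration at 50 m:", c[50])
```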

  9. CIRCE.001: A computer code for analysis of point-focus concentrators with flat targets

    SciTech Connect

    Ratzel, A.C.; Boughton, B.D.

    1987-02-11

    In this report, a computer simulation code called CIRCE is discussed and examples of its application to several solar collector geometries are presented. CIRCE, an acronym for Convolution of Incident Radiation with Concentrator Errors, was developed for the optical analysis of point-focus concentrating dish collector systems. CIRCE, as in Greek mythology, is the "daughter" of HELIOS, a computer code developed at Sandia National Laboratories, Albuquerque, NM, for evaluating the optical performance of solar central receiver systems. CIRCE was developed from HELIOS specifically for the analysis of dish systems with the objective of providing users with a design tool that is relatively easy to implement and does not require a large investment of time to obtain results.

  10. The PARTRAC code: Status and recent developments

    NASA Astrophysics Data System (ADS)

    Friedland, Werner; Kundrat, Pavel

    Biophysical modeling is of particular value for predictions of radiation effects due to manned space missions. PARTRAC is an established tool for Monte Carlo-based simulations of radiation track structures, damage induction in cellular DNA and its repair [1]. Dedicated modules describe interactions of ionizing particles with the traversed medium, the production and reactions of reactive species, and score DNA damage determined by overlapping track structures with multi-scale chromatin models. The DNA repair module describes the repair of DNA double-strand breaks (DSB) via the non-homologous end-joining pathway; the code explicitly simulates the spatial mobility of individual DNA ends in parallel with their processing by major repair enzymes [2]. To simulate the yields and kinetics of radiation-induced chromosome aberrations, the repair module has been extended by tracking the information on the chromosome origin of ligated fragments as well as the presence of centromeres [3]. PARTRAC calculations have been benchmarked against experimental data on various biological endpoints induced by photon and ion irradiation. The calculated DNA fragment distributions after photon and ion irradiation reproduce corresponding experimental data and their dose- and LET-dependence. However, in particular for high-LET radiation many short DNA fragments are predicted below the detection limits of the measurements, so that the experiments significantly underestimate DSB yields by high-LET radiation [4]. The DNA repair module correctly describes the LET-dependent repair kinetics after 60Co gamma-rays and different N-ion radiation qualities [2]. First calculations on the induction of chromosome aberrations have overestimated the absolute yields of dicentrics, but correctly reproduced their relative dose-dependence and the difference between gamma- and alpha-particle irradiation [3]. Recent developments of the PARTRAC code include a model of hetero- vs euchromatin structures to enable

  11. The MELTSPREAD-1 computer code for the analysis of transient spreading in containments

    SciTech Connect

    Farmer, M.T.; Sienicki, J.J.; Spencer, B.W.

    1990-01-01

    A one-dimensional, multicell, Eulerian finite difference computer code (MELTSPREAD-1) has been developed to provide an improved prediction of the gravity-driven spreading and thermal interactions of molten corium flowing over a concrete or steel surface. In this paper, the modeling incorporated into the code is described and the spreading models are benchmarked against a simple "dam break" problem as well as water simulant spreading data obtained in a scaled apparatus of the Mk I containment. Results are also presented for a scoping calculation of the spreading behavior and shell thermal response in the full-scale Mk I system following vessel meltthrough. 24 refs., 15 figs.

  12. Recent developments in DYNSUB: New models, code optimization and parallelization

    SciTech Connect

    Daeubler, M.; Trost, N.; Jimenez, J.; Sanchez, V.

    2013-07-01

    DYNSUB is a high-fidelity coupled code system consisting of the reactor simulator DYN3D and the sub-channel code SUBCHANFLOW. It describes nuclear reactor core behavior with pin-by-pin resolution for both steady-state and transient scenarios. In the course of the coupled code system's active development, super-homogenization (SPH) and generalized equivalence theory (GET) discontinuity factors may be computed with and employed in DYNSUB to compensate for pin-level homogenization errors. Because of the greatly increased numerical problem size for pin-by-pin simulations, DYNSUB has benefited from HPC techniques to improve its numerical performance. DYNSUB's coupling scheme has been structurally revised. Computational bottlenecks have been identified and parallelized for shared memory systems using OpenMP. Comparing the elapsed time for simulating a PWR core with one-eighth symmetry under hot zero power conditions using the original and the optimized DYNSUB on 8 cores, overall speed-up factors greater than 10 have been observed. The corresponding reduction in execution time enables routine application of DYNSUB to study pin-level safety parameters for engineering-sized cases in a scientific environment. (authors)

  13. Development of parallel DEM for the open source code MFIX

    SciTech Connect

    Gopalakrishnan, Pradeep; Tafti, Danesh

    2013-02-01

    The paper presents the development of a parallel Discrete Element Method (DEM) solver for the open source code, Multiphase Flow with Interphase eXchange (MFIX) based on the domain decomposition method. The performance of the code was evaluated by simulating a bubbling fluidized bed with 2.5 million particles. The DEM solver shows strong scalability up to 256 processors with an efficiency of 81%. Further, to analyze weak scaling, the static height of the fluidized bed was increased to hold 5 and 10 million particles. The results show that global communication cost increases with problem size while the computational cost remains constant. Further, the effects of static bed height on the bubble hydrodynamics and mixing characteristics are analyzed.

  14. Computer code for the atomistic simulation of lattice defects and dynamics. [COMENT code

    SciTech Connect

    Schiffgens, J.O.; Graves, N.J.; Oster, C.A.

    1980-04-01

    This document has been prepared to satisfy the need for a detailed, up-to-date description of a computer code that can be used to simulate phenomena on an atomistic level. COMENT was written in FORTRAN IV and COMPASS (CDC assembly language) to solve the classical equations of motion for a large number of atoms interacting according to a given force law, and to perform the desired ancillary analysis of the resulting data. COMENT is a dual-purpose code intended to describe static defect configurations as well as the detailed motion of atoms in a crystal lattice. It can be used to simulate the effect of temperature, impurities, and pre-existing defects on radiation-induced defect production mechanisms, defect migration, and defect stability.

  15. Application of advanced computational procedures for modeling solar-wind interactions with Venus: Theory and computer code

    NASA Technical Reports Server (NTRS)

    Stahara, S. S.; Klenke, D.; Trudinger, B. C.; Spreiter, J. R.

    1980-01-01

    Computational procedures are developed and applied to the prediction of solar wind interaction with nonmagnetic terrestrial planet atmospheres, with particular emphasis to Venus. The theoretical method is based on a single fluid, steady, dissipationless, magnetohydrodynamic continuum model, and is appropriate for the calculation of axisymmetric, supersonic, super-Alfvenic solar wind flow past terrestrial planets. The procedures, which consist of finite difference codes to determine the gasdynamic properties and a variety of special purpose codes to determine the frozen magnetic field, streamlines, contours, plots, etc. of the flow, are organized into one computational program. Theoretical results based upon these procedures are reported for a wide variety of solar wind conditions and ionopause obstacle shapes. Plasma and magnetic field comparisons in the ionosheath are also provided with actual spacecraft data obtained by the Pioneer Venus Orbiter.

  16. A Sample of NASA Langley Unsteady Pressure Experiments for Computational Aerodynamics Code Evaluation

    NASA Technical Reports Server (NTRS)

    Schuster, David M.; Scott, Robert C.; Bartels, Robert E.; Edwards, John W.; Bennett, Robert M.

    2000-01-01

    As computational fluid dynamics methods mature, code development is rapidly transitioning from prediction of steady flowfields to unsteady flows. This change in emphasis offers a number of new challenges to the research community, not the least of which is obtaining detailed, accurate unsteady experimental data with which to evaluate new methods. Researchers at NASA Langley Research Center (LaRC) have been actively measuring unsteady pressure distributions for nearly 40 years. Over the last 20 years, these measurements have focused on developing high-quality datasets for use in code evaluation. This paper provides a sample of unsteady pressure measurements obtained by LaRC and available for government, university, and industry researchers to evaluate new and existing unsteady aerodynamic analysis methods. A number of cases are highlighted and discussed with attention focused on the unique character of the individual datasets and their perceived usefulness for code evaluation. Ongoing LaRC research in this area is also presented.

  17. Advanced turboprop noise prediction: Development of a code at NASA Langley based on recent theoretical results

    NASA Technical Reports Server (NTRS)

    Farassat, F.; Dunn, M. H.; Padula, S. L.

    1986-01-01

    The development of a high speed propeller noise prediction code at Langley Research Center is described. The code utilizes two recent acoustic formulations in the time domain for subsonic and supersonic sources. The structure and capabilities of the code are discussed. A grid size study for accuracy and speed of execution is also presented. The code is tested against an earlier Langley code. Considerable increases in accuracy and speed of execution are observed. Some examples of noise prediction for a high speed propeller for which acoustic test data are available are given. A brief derivation of the formulations used is given in an appendix.

  18. Methodology, status and plans for development and assessment of TUF and CATHENA codes

    SciTech Connect

    Luxat, J.C.; Liu, W.S.; Leung, R.K.

    1997-07-01

    An overview is presented of the Canadian two-fluid computer codes TUF and CATHENA with specific focus on the constraints imposed during development of these codes and the areas of application for which they are intended. Additionally, a process for systematic assessment of these codes is described, which is part of a broader, industry-based initiative for validation of computer codes used in all major disciplines of safety analysis. This is intended to provide both the licensee and the regulator in Canada with an objective basis for assessing the adequacy of codes for use in specific applications. Although focused specifically on CANDU reactors, Canadian experience in developing advanced two-fluid codes to meet wide-ranging application needs while maintaining past investment in plant modelling provides a useful contribution to international efforts in this area.

  19. [Series: Medical Applications of the PHITS Code (2): Acceleration by Parallel Computing].

    PubMed

    Furuta, Takuya; Sato, Tatsuhiko

    2015-01-01

    Time-consuming Monte Carlo dose calculations have become feasible owing to the development of computer technology, in particular the recent emergence of multi-core high-performance computers. Parallel computing is therefore a key to achieving good performance of software programs. The Monte Carlo simulation code PHITS contains two parallel computing functions: distributed-memory parallelization using the Message Passing Interface (MPI) protocol and shared-memory parallelization using Open Multi-Processing (OpenMP) directives. Users can choose between the two functions according to their needs. This paper explains the two functions along with their advantages and disadvantages. Some test applications are also provided to show their performance on a typical multi-core high-performance workstation.
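
    As a loose Python analogue of the shared-memory mode (PHITS itself is a Fortran code using OpenMP and MPI, and the "dose" tally below is toy physics), independent particle histories can be farmed out to worker processes and their tallies combined:

```python
import numpy as np
from multiprocessing import Pool

# Toy Monte Carlo "dose" tally: mean energy deposited by particles whose
# penetration depth follows an exponential attenuation law.
def run_batch(seed, n_hist=1_000_000):
    rng = np.random.default_rng(seed)
    depth = rng.exponential(scale=1.0, size=n_hist)     # mean free path = 1
    energy = rng.random(n_hist)                          # deposited fraction
    return np.sum(energy[depth < 2.0]), n_hist           # tally within 2 mfp

if __name__ == "__main__":
    n_workers = 4                                        # cores on one node
    with Pool(n_workers) as pool:
        results = pool.map(run_batch, range(n_workers))  # independent RNG streams
    total, n = map(sum, zip(*results))
    print("mean dose per history:", total / n)
```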

  20. GASPS: A time-dependent, one-dimensional, planar gas dynamics computer code

    SciTech Connect

    Pierce, R.E.; Sutton, S.B.; Comfort, W.J. III

    1986-12-05

    GASP is a transient, one-dimensional planar gas dynamic computer code that can be used to calculate the propagation of a shock wave. GASP, developed at LLNL, solves the one-dimensional planar equations governing momentum, mass and energy conservation. The equations are cast in an Eulerian formulation where the mesh is fixed in space, and material flows through it. Thus it is necessary to account for convection of material from one cell to its neighbor.
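
    A minimal sketch of the kind of calculation described, a shock propagating on a fixed Eulerian mesh, is shown below using a first-order Lax-Friedrichs scheme for the 1D planar Euler equations. The Sod-type initial condition is an assumed test case, not GASP's actual numerics:

```python
import numpy as np

# Sod shock-tube problem; conserved variables are density, momentum, total energy.
nx, gamma = 400, 1.4
dx, cfl = 1.0 / nx, 0.4

rho = np.where(np.arange(nx) < nx // 2, 1.0, 0.125)
vel = np.zeros(nx)
p = np.where(np.arange(nx) < nx // 2, 1.0, 0.1)
U = np.array([rho, rho * vel, p / (gamma - 1) + 0.5 * rho * vel**2])

def flux(U):
    rho, mom, E = U
    v = mom / rho
    p = (gamma - 1) * (E - 0.5 * rho * v**2)
    return np.array([mom, mom * v + p, (E + p) * v])

t, t_end = 0.0, 0.2
while t < t_end:
    rho, mom, E = U
    v = mom / rho
    p = (gamma - 1) * (E - 0.5 * rho * v**2)
    c = np.sqrt(gamma * p / rho)
    dt = cfl * dx / np.max(np.abs(v) + c)       # time step from the CFL limit
    F = flux(U)
    # Lax-Friedrichs update for interior cells; end cells hold their initial states
    U[:, 1:-1] = 0.5 * (U[:, 2:] + U[:, :-2]) - 0.5 * dt / dx * (F[:, 2:] - F[:, :-2])
    t += dt

print("density between contact and shock:", U[0, 3 * nx // 4])
```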

  1. IMPLEMENTING SCIENTIFIC SIMULATION CODES HIGHLY TAILORED FOR VECTOR ARCHITECTURES USING CUSTOM CONFIGURABLE COMPUTING MACHINES

    NASA Technical Reports Server (NTRS)

    Rutishauser, David K.

    2006-01-01

    The motivation for this work comes from an observation that amidst the push for Massively Parallel (MP) solutions to high-end computing problems such as numerical physical simulations, large amounts of legacy code exist that are highly optimized for vector supercomputers. Because re-hosting legacy code often requires a complete re-write of the original code, which can be a very long and expensive effort, this work examines the potential to exploit reconfigurable computing machines in place of a vector supercomputer to implement an essentially unmodified legacy source code. Custom and reconfigurable computing resources could be used to emulate an original application's target platform to the extent required to achieve high performance. To arrive at an architecture that delivers the desired performance subject to limited resources involves solving a multi-variable optimization problem with constraints. Prior research in the area of reconfigurable computing has demonstrated that designing an optimum hardware implementation of a given application under hardware resource constraints is an NP-complete problem. The premise of the approach is that the general issue of applying reconfigurable computing resources to the implementation of an application, maximizing the performance of the computation subject to physical resource constraints, can be made a tractable problem by assuming a computational paradigm, such as vector processing. This research contributes a formulation of the problem and a methodology to design a reconfigurable vector processing implementation of a given application that satisfies a performance metric. A generic, parametric, architectural framework for vector processing implemented in reconfigurable logic is developed as a target for a scheduling/mapping algorithm that maps an input computation to a given instance of the architecture. This algorithm is integrated with an optimization framework to arrive at a specification of the architecture parameters

  2. Plutonium explosive dispersal modeling using the MACCS2 computer code

    SciTech Connect

    Steele, C.M.; Wald, T.L.; Chanin, D.I.

    1998-11-01

    The purpose of this paper is to derive the necessary parameters to be used to establish a defensible methodology to perform explosive dispersal modeling of respirable plutonium using Gaussian methods. A particular code, MACCS2, has been chosen for this modeling effort due to its application of sophisticated meteorological statistical sampling in accordance with the philosophy of Nuclear Regulatory Commission (NRC) Regulatory Guide 1.145, "Atmospheric Dispersion Models for Potential Accident Consequence Assessments at Nuclear Power Plants". A second advantage supporting the selection of the MACCS2 code for modeling purposes is that meteorological data sets are readily available at most Department of Energy (DOE) and NRC sites. This particular MACCS2 modeling effort focuses on the calculation of respirable doses and not ground deposition. Once the necessary parameters for the MACCS2 modeling are developed and presented, the model is benchmarked against empirical test data from the Double Tracks shot of project Roller Coaster (Shreve 1965) and applied to a hypothetical plutonium explosive dispersal scenario. Further modeling with the MACCS2 code is performed to determine a defensible method of treating the effects of building structure interaction on the respirable fraction distribution as a function of height. These results are related to the Clean Slate 2 and Clean Slate 3 bunkered shots of Project Roller Coaster. Lastly, a method is presented to determine the peak 99.5% sector doses on an irregular site boundary in the manner specified in NRC Regulatory Guide 1.145 (1983). Parametric analyses are performed on the major analytic assumptions in the MACCS2 model to define the potential errors that are possible in using this methodology.
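
    The Gaussian methods referred to evaluate air concentration from a source term, wind speed, and dispersion parameters, with a ground-reflection term. A sketch follows; the sigma values and inputs are illustrative, not MACCS2's implementation:

```python
import numpy as np

def gaussian_plume(q, u, y, z, h_rel, sig_y, sig_z):
    """Gaussian plume air concentration with ground reflection.

    q: source strength, u: wind speed [m/s], (y, z): crosswind offset and
    height [m], h_rel: effective release height [m]. sig_y and sig_z are the
    dispersion parameters evaluated at the downwind distance of interest.
    """
    lateral = np.exp(-0.5 * (y / sig_y) ** 2)
    vertical = (np.exp(-0.5 * ((z - h_rel) / sig_z) ** 2)
                + np.exp(-0.5 * ((z + h_rel) / sig_z) ** 2))  # image source term
    return q / (2 * np.pi * u * sig_y * sig_z) * lateral * vertical

# Illustrative sigma values for one stability class at ~1 km downwind (assumed)
chi = gaussian_plume(q=1.0, u=4.0, y=0.0, z=1.5,
                     h_rel=10.0, sig_y=70.0, sig_z=32.0)
print(f"centerline air concentration: {chi:.2e}")
```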

  3. Visualization of elastic wavefields computed with a finite difference code

    SciTech Connect

    Larsen, S.; Harris, D.

    1994-11-15

    The authors have developed a finite difference elastic propagation model to simulate seismic wave propagation through geophysically complex regions. To facilitate debugging and to assist seismologists in interpreting the seismograms generated by the code, they have developed an X Windows interface that permits viewing of successive temporal snapshots of the (2D) wavefield as they are calculated. The authors present a brief video displaying the generation of seismic waves by an explosive source on a continent, which propagate to the edge of the continent then convert to two types of acoustic waves. This sample calculation was part of an effort to study the potential of offshore hydroacoustic systems to monitor seismic events occurring onshore.

  4. Developing Fortran Code for Kriging on the Stampede Supercomputer

    NASA Astrophysics Data System (ADS)

    Hodgess, Erin

    2016-04-01

    Kriging is easily accessible in the open source statistical language R (R Core Team, 2015) in the gstat (Pebesma, 2004) package. It works very well, but can be slow on large data sets, particularly if the prediction space is large as well. We are working on the Stampede supercomputer at the Texas Advanced Computing Center to develop code using a combination of R and the Message Passing Interface (MPI) bindings to Fortran. We have a function similar to the autofitVariogram found in the automap (Hiemstra et al., 2008) package and it is very effective. We are comparing R with MPI/Fortran, MPI/Fortran alone, and R with the Rmpi package, which uses bindings to C. We will present results from simulation studies and real-world examples. References: Hiemstra, P.H., Pebesma, E.J., Twenhofel, C.J.W. and G.B.M. Heuvelink, 2008. Real-time automatic interpolation of ambient gamma dose rates from the Dutch Radioactivity Monitoring Network. Computers and Geosciences, accepted for publication. Pebesma, E.J., 2004. Multivariable geostatistics in S: the gstat package. Computers and Geosciences, 30: 683-691. R Core Team, 2015. R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.R-project.org/.

  5. Computer code simulations of the formation of Meteor Crater, Arizona - Calculations MC-1 and MC-2

    NASA Technical Reports Server (NTRS)

    Roddy, D. J.; Schuster, S. H.; Kreyenhagen, K. N.; Orphal, D. L.

    1980-01-01

    It has been widely accepted that hypervelocity impact processes play a major role in the evolution of the terrestrial planets and satellites. In connection with the development of quantitative methods for the description of impact cratering, it was found that the results provided by two-dimensional finite difference computer codes are greatly improved when initial impact conditions can be defined and when the numerical results can be tested against field and laboratory data. In order to address this problem, a numerical code study of the formation of Meteor (Barringer) Crater, Arizona, has been undertaken. A description is presented of the major results from the first two code calculations, MC-1 and MC-2, that have been completed for Meteor Crater. Both calculations used an iron meteorite with a kinetic energy of 3.8 Megatons. Calculation MC-1 had an impact velocity of 25 km/sec and MC-2 had an impact velocity of 15 km/sec.

  6. Role asymmetry and code transmission in signaling games: an experimental and computational investigation.

    PubMed

    Moreno, Maggie; Baggio, Giosuè

    2015-07-01

    In signaling games, a sender has private access to a state of affairs and uses a signal to inform a receiver about that state. If no common association of signals and states is initially available, sender and receiver must coordinate to develop one. How do players divide coordination labor? We show experimentally that, if players switch roles at each communication round, coordination labor is shared. However, in games with fixed roles, coordination labor is divided: Receivers adjust their mappings more frequently, whereas senders maintain the initial code, which is transmitted to receivers and becomes the common code. In a series of computer simulations, player and role asymmetry as observed experimentally were accounted for by a model in which the receiver in the first signaling round has a higher chance of adjusting its code than its partner. From this basic division of labor among players, certain properties of role asymmetry, in particular correlations with game complexity, are seen to follow.

  7. Performance of a parallel code for the Euler equations on hypercube computers

    NASA Technical Reports Server (NTRS)

    Barszcz, Eric; Chan, Tony F.; Jesperson, Dennis C.; Tuminaro, Raymond S.

    1990-01-01

    The performance of hypercubes was evaluated on a computational fluid dynamics problem, and consideration was given to the parallel environment issues that must be addressed, such as algorithm changes, implementation choices, programming effort, and programming environment. The evaluation focuses on a widely used fluid dynamics code, FLO52, which solves the two-dimensional steady Euler equations describing flow around an airfoil. The code development experience is described, including interacting with the operating system, utilizing the message-passing communication system, and code modifications necessary to increase parallel efficiency. Results from two hypercube parallel computers (a 16-node iPSC/2 and a 512-node NCUBE/ten) are discussed and compared. In addition, a mathematical model of the execution time was developed as a function of several machine and algorithm parameters. This model accurately predicts the actual run times obtained and is used to explore the performance of the code in interesting but physically realizable regions of the parameter space. Based on this model, predictions about future hypercubes are made.

  8. HIBRA: A computer code for heavy ion binary reaction analysis employing ion track detectors

    NASA Astrophysics Data System (ADS)

    Jamil, Khalid; Ahmad, Siraj-ul-Islam; Manzoor, Shahid

    2016-01-01

    Collisions of heavy ions often result in the production of only two reaction products. Study of heavy ions using ion track detectors allows experimentalists to observe the track length in the plane of the detector, the depth of the tracks in the volume of the detector, and the angles between the tracks on the detector surface, all known as track parameters. How can these be converted into useful physics parameters such as the masses, energies, and momenta of the reaction products and the Q-values of the reaction? This paper describes (a) the model used to analyze binary reactions in terms of measured etched track parameters of the reaction products recorded in ion track detectors, and (b) the code developed for computing useful physics parameters for fast and accurate analysis of a large number of binary events. A computer code, HIBRA (Heavy Ion Binary Reaction Analysis), has been developed in both the C++ and FORTRAN programming languages. It has been tested on binary reactions from 12.5 MeV/u 84Kr ions incident upon a natural uranium target deposited on a mica ion track detector. The HIBRA code can be employed with any ion track detector for which a range-velocity relation is available, including the widely used CR-39 ion track detectors. This paper provides the source code of HIBRA in C++ along with input and output data to test the program.
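
    For a binary event, the two product momenta follow from momentum conservation and the measured emission angles. Below is a simplified, non-relativistic sketch; the masses, angles, and energies are illustrative, and HIBRA's actual track-parameter analysis, which works from etched ranges via a range-velocity relation, is more involved:

```python
import numpy as np

amu = 931.494  # MeV/c^2

def binary_momenta(t_beam, m_beam, theta1, theta2):
    """Solve p1, p2 [MeV/c] from longitudinal and transverse momentum balance:
       p1*cos(th1) + p2*cos(th2) = p_beam  and  p1*sin(th1) = p2*sin(th2)."""
    p_beam = np.sqrt(2 * m_beam * t_beam)       # non-relativistic beam momentum
    th1, th2 = np.radians(theta1), np.radians(theta2)
    a = np.array([[np.cos(th1), np.cos(th2)],
                  [np.sin(th1), -np.sin(th2)]])
    return np.linalg.solve(a, np.array([p_beam, 0.0]))

m_beam = 84 * amu                    # 84Kr projectile
t_beam = 12.5 * 84                   # 12.5 MeV/u beam energy [MeV]
m1, m2 = 100 * amu, 222 * amu        # assumed product masses (sum matches Kr+U)
p1, p2 = binary_momenta(t_beam, m_beam, theta1=35.0, theta2=20.0)
t1, t2 = p1**2 / (2 * m1), p2**2 / (2 * m2)
print(f"Q-value estimate: {t1 + t2 - t_beam:.1f} MeV")
```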

  9. Impact of revised 10 CFR 20 on existing performance assessment computer codes used for LLW disposal facilities

    SciTech Connect

    Leonard, P.R.; Seitz, R.R.

    1992-04-01

    The US Nuclear Regulatory Commission (NRC) recently announced a revision to Title 10 of the Code of Federal Regulations, Part 20 (10 CFR 20), "Standards for Protection Against Radiation," which incorporates recommendations contained in Publications 26 and 30 of the International Commission on Radiological Protection (ICRP), issued in 1977 and 1979, respectively. The revision to 10 CFR 20 was also developed in parallel with Presidential Guidance on occupational radiation protection published in the Federal Register. This study concludes that the issuance of the revised 10 CFR 20 will not affect calculations using the computer codes considered in this report. In general, the computer codes, and the EPA and DOE guidance on which they are based, were developed in a manner consistent with the guidance provided in ICRP 26/30, well before the revision of 10 CFR 20.

  10. Development of the Tensoral Computer Language

    NASA Technical Reports Server (NTRS)

    Ferziger, Joel; Dresselhaus, Eliot

    1996-01-01

    The research scientist or engineer wishing to perform large-scale simulations or to extract useful information from existing databases is required to have expertise in the details of the particular database, the numerical methods, and the computer architecture to be used. This poses a significant practical barrier to the use of simulation data. The goal of this research was to develop a high-level computer language called Tensoral, designed to remove this barrier. The Tensoral language provides a framework in which efficient generic data manipulations can be easily coded and implemented. First of all, Tensoral is general. The fundamental objects in Tensoral represent tensor fields and the operators that act on them. The numerical implementation of these tensors and operators is completely and flexibly programmable. New mathematical constructs and operators can be easily added to the Tensoral system. Tensoral is compatible with existing languages. Tensoral tensor operations co-exist in a natural way with a host language, which may be any sufficiently powerful computer language such as Fortran, C, or Vectoral. Tensoral is very high level. Tensor operations in Tensoral typically act on entire databases (i.e., arrays) at one time and may, therefore, correspond to many lines of code in a conventional language. Tensoral is efficient. Tensoral is a compiled language. Database manipulations are simplified, optimized, and scheduled by the compiler, eventually resulting in efficient machine code to implement them.

  11. The Computer Code NOVO for the Calculation of Wake Potentials of the Very Short Ultra-relativistic Bunches

    SciTech Connect

    Novokhatski, Alexander; /SLAC

    2005-12-01

    The problem of electromagnetic interaction of a beam and accelerator elements is very important for linear colliders, electron-positron factories, and free electron lasers. Precise calculation of wake fields is required for beam dynamics studies in these machines. We describe a method which allows computation of the wake fields of very short bunches. The computer code NOVO was developed based on this method. This method is free of unphysical solutions such as "self-acceleration" of a bunch head, which are common to well-known wake field codes. Code NOVO was used for wake field studies for many accelerator projects all over the world.

  12. Computation of Thermodynamic Equilibria Pertinent to Nuclear Materials in Multi-Physics Codes

    NASA Astrophysics Data System (ADS)

    Piro, Markus Hans Alexander

    Nuclear energy plays a vital role in supporting electrical needs and fulfilling commitments to reduce greenhouse gas emissions. Research is a continuing necessity to improve the predictive capabilities of fuel behaviour in order to reduce costs and to meet increasingly stringent safety requirements by the regulator. Moreover, a renewed interest in nuclear energy has given rise to a "nuclear renaissance" and the necessity to design the next generation of reactors. In support of this goal, significant research efforts have been dedicated to the advancement of numerical modelling and computational tools in simulating various physical and chemical phenomena associated with nuclear fuel behaviour. This undertaking in effect is collecting the experience and observations of a past generation of nuclear engineers and scientists in a meaningful way for future design purposes. There is an increasing desire to integrate thermodynamic computations directly into multi-physics nuclear fuel performance and safety codes. A new equilibrium thermodynamic solver is being developed with this matter as a primary objective. This solver is intended to provide thermodynamic material properties and boundary conditions for continuum transport calculations. There are several concerns with the use of existing commercial thermodynamic codes: computational performance; limited capabilities in handling large multi-component systems of interest to the nuclear industry; convenient incorporation into other codes with quality assurance considerations; and, licensing entanglements associated with code distribution. The development of this software in this research is aimed at addressing all of these concerns. The approach taken in this work exploits fundamental principles of equilibrium thermodynamics to simplify the numerical optimization equations. In brief, the chemical potentials of all species and phases in the system are constrained by estimates of the chemical potentials of the system

  13. A high temperature fatigue life prediction computer code based on the total strain version of StrainRange Partitioning (SRP)

    NASA Technical Reports Server (NTRS)

    Mcgaw, Michael A.; Saltsman, James F.

    1993-01-01

    A recently developed high-temperature fatigue life prediction computer code is presented and an example of its usage is given. The code discussed is based on the Total Strain version of Strainrange Partitioning (TS-SRP). Included in this code are procedures for characterizing the creep-fatigue durability behavior of an alloy according to TS-SRP guidelines and for predicting cyclic life for complex cycle types under both isothermal and thermomechanical conditions. A reasonably extensive materials properties database is included with the code.
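
    As a minimal sketch of the kind of life solve such a code performs, assume a generic total-strain relation in which elastic and inelastic strainrange-life power laws sum to the applied total strainrange; the coefficients below are invented, not TS-SRP database values.

      from scipy.optimize import brentq

      B, b = 0.01, -0.1  # elastic strainrange-life coefficients (assumed)
      C, c = 0.50, -0.6  # inelastic strainrange-life coefficients (assumed)

      def total_strainrange(N):
          # total strainrange sustainable for N cycles to failure
          return B * N**b + C * N**c

      def predict_life(delta_eps_total):
          # invert the monotonic strain-life curve for cyclic life N
          return brentq(lambda N: total_strainrange(N) - delta_eps_total,
                        1.0, 1e9)

      print(f"predicted life: {predict_life(0.004):.0f} cycles")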

  14. A computer code for three-dimensional incompressible flows using nonorthogonal body-fitted coordinate systems

    NASA Astrophysics Data System (ADS)

    Chen, Y. S.

    1986-03-01

    In this report, a numerical method for solving the equations of motion of three-dimensional incompressible flows in nonorthogonal body-fitted coordinate (BFC) systems has been developed. The equations of motion are transformed to a generalized curvilinear coordinate system from which the transformed equations are discretized using finite difference approximations in the transformed domain. The hybrid scheme is used to approximate the convection terms in the governing equations. Solutions of the finite difference equations are obtained iteratively by using a pressure-velocity correction algorithm (SIMPLE-C). Numerical examples of two- and three-dimensional, laminar and turbulent flow problems are employed to evaluate the accuracy and efficiency of the present computer code. The user's guide and computer program listing of the present code are also included.
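
    The hybrid scheme mentioned above blends central differencing and upwinding according to the cell Peclet number. A minimal sketch of the standard Patankar-style neighbor-coefficient formula follows; the function name and sample numbers are illustrative, not taken from the code.

      def hybrid_coefficient(F, D):
          """Neighbor coefficient for convective flux F and diffusive
          conductance D at a control-volume face (west-face convention):
          effectively central differencing for |Pe| <= 2 and first-order
          upwinding beyond."""
          return max(F, D + F / 2.0, 0.0)

      for F in (1.0, 10.0):  # cell Peclet numbers 1 and 10 (with D = 1)
          print(f"Pe = {F:.0f}: a_W = {hybrid_coefficient(F, 1.0)}")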

  16. Computer code for predicting coolant flow and heat transfer in turbomachinery

    NASA Technical Reports Server (NTRS)

    Meitner, Peter L.

    1990-01-01

    A computer code was developed to analyze any turbomachinery coolant flow path geometry that consists of a single flow passage with a unique inlet and exit. Flow can be bled off for tip-cap impingement cooling, and a flow bypass can be specified in which coolant flow is taken off at one point in the flow channel and reintroduced at a point farther downstream in the same channel. The user may either choose the coolant flow rate or let the program determine the flow rate from specified inlet and exit conditions. The computer code integrates the 1-D momentum and energy equations along a defined flow path and calculates the coolant's flow rate, temperature, pressure, and velocity and the heat transfer coefficients along the passage. The equations account for area change, mass addition or subtraction, pumping, friction, and heat transfer.
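
    A minimal sketch of the marching integration described here, reduced to the energy equation alone: the coolant temperature responds to wall heat transfer step by step along the passage. All geometry, property, and boundary values below are hypothetical.

      mdot, cp = 0.05, 1005.0      # coolant flow rate [kg/s], cp [J/kg-K]
      h, perimeter = 2000.0, 0.02  # heat transfer coeff. [W/m^2-K], wetted perimeter [m]
      T_wall, T = 900.0, 500.0     # wall and inlet coolant temperatures [K]
      L, n = 0.1, 100              # passage length [m], number of march steps
      dx = L / n

      for _ in range(n):
          # energy balance on a slice: mdot*cp*dT = h*P*(T_wall - T)*dx
          T += h * perimeter * (T_wall - T) * dx / (mdot * cp)

      print(f"coolant exit temperature: {T:.1f} K")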

  17. Methodology, status, and plans for development and assessment of the RELAP5 code

    SciTech Connect

    Johnson, G.W.; Riemke, R.A.

    1997-07-01

    RELAP5/MOD3 is a computer code used for the simulation of transients and accidents in light-water nuclear power plants. The objective of the program to develop and maintain RELAP5 was and is to provide the U.S. Nuclear Regulatory Commission with an independent tool for assessing reactor safety. This paper describes code requirements, models, solution scheme, language and structure, user interface, validation, and documentation. The paper also describes the current and near-term development program and provides an assessment of the code's strengths and limitations.

  18. Development of UMARC (University of Maryland Advanced Rotorcraft Code)

    NASA Technical Reports Server (NTRS)

    Bir, Gunjit; Chopra, Inderjit; Nguyen, Khanh

    1990-01-01

    The University of Maryland Advanced Rotorcraft Code (UMARC) is a user-friendly, FEM-based comprehensive helicopter rotor simulation code of high numerical robustness and computational efficiency. UMARC formulates the rotor-fuselage equations using Hamilton's principle and discretizes them using finite elements in space and time. The FEM formulation allows the code to analyze a wide variety of rotor designs. Dynamic inflow modeling is used for unsteady wake inflow computations. Predicted stability, response, and blade-load data are validated against experimental data for several configurations, including representative articulated, hingeless, and bearingless rotors.

  19. Spent fuel management fee methodology and computer code user's manual.

    SciTech Connect

    Engel, R.L.; White, M.K.

    1982-01-01

    The methodology and computer model described here were developed to analyze the cash flows for the federal government taking title to and managing spent nuclear fuel. The methodology has been used by the US Department of Energy (DOE) to estimate the spent fuel disposal fee that will provide full cost recovery. Although the methodology was designed to analyze interim storage followed by spent fuel disposal, it could be used to calculate a fee for reprocessing spent fuel and disposing of the waste. The methodology consists of two phases. The first phase estimates government expenditures for spent fuel management. The second phase determines the fees that will result in revenues such that the government attains full cost recovery under various revenue collection philosophies. These two phases are discussed in detail in subsequent sections of this report. Each phase constitutes a computer module; the two modules are called SPADE (SPent fuel Analysis and Disposal Economics) and FEAN (FEe ANalysis), respectively.
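
    A toy sketch of the full-cost-recovery idea behind the second phase: choose the fee so that the present value of fee revenues equals the present value of government expenditures. All cash flows and quantities below are invented.

      discount = 0.03
      costs = [200e6] * 10  # annual program expenditures [$] (assumed)
      fuel = [2000.0] * 10  # spent fuel received each year [MTU] (assumed)

      def pv(series):
          # present value of a cash-flow or quantity series
          return sum(x / (1 + discount) ** t for t, x in enumerate(series))

      fee = pv(costs) / pv(fuel)  # levelized fee giving full cost recovery
      print(f"full-cost-recovery fee: ${fee:,.0f} per MTU")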

  1. MELMRK 2.0: A description of computer models and results of code testing

    SciTech Connect

    Wittman, R.S.; Denny, V.; Mertol, A.

    1992-05-31

    An advanced version of the MELMRK computer code has been developed that provides detailed models for conservation of mass, momentum, and thermal energy within relocating streams of molten metallics during meltdown of Savannah River Site (SRS) reactor assemblies. In addition to a mechanistic treatment of transport phenomena within a relocating stream, MELMRK 2.0 retains the MOD1 capability for real-time coupling of the in-depth thermal response of participating assembly heat structure and, further, augments this capability with models for self-heating of relocating melt owing to steam oxidation of metallics and fission product decay power. As was the case for MELMRK 1.0, the MOD2 version offers state-of-the-art numerics for solving coupled sets of nonlinear differential equations. Principal features include application of multi-dimensional Newton-Raphson techniques to accelerate convergence behavior and direct matrix inversion to advance primitive variables from one iterate to the next. Additionally, MELMRK 2.0 provides logical event flags for managing the broad range of code options available for treating such features as (1) coexisting flow regimes, (2) dynamic transitions between flow regimes, and (3) linkages between heatup and relocation code modules. The purpose of this report is to provide a detailed description of the MELMRK 2.0 computer models for melt relocation. Also included are illustrative results for code testing, as well as an integrated calculation for meltdown of a Mark 31a assembly.
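
    The numerics described here center on multi-dimensional Newton-Raphson iteration with a direct solve of the linearized system at each iterate. The sketch below applies that scheme to a toy 2x2 nonlinear system, not to the MELMRK melt-relocation equations.

      import numpy as np

      def residual(x):
          # toy nonlinear system with exact solution x = (1, 2)
          return np.array([x[0]**2 + x[1] - 3.0,
                           x[0] + x[1]**2 - 5.0])

      def jacobian(x):
          return np.array([[2 * x[0], 1.0],
                           [1.0, 2 * x[1]]])

      x = np.array([1.0, 1.0])  # initial iterate
      for it in range(20):
          r = residual(x)
          if np.linalg.norm(r) < 1e-12:
              break
          x -= np.linalg.solve(jacobian(x), r)  # direct solve of J*dx = r

      print(f"converged in {it} iterations to x = {x}")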

  2. Software tools for developing parallel applications. Part 1: Code development and debugging

    SciTech Connect

    Brown, J.; Geist, A.; Pancake, C.; Rover, D.

    1997-04-01

    Developing an application for parallel computers can be a lengthy and frustrating process, making it a perfect candidate for software tool support. Yet application programmers are often the last to hear about new tools emerging from R and D efforts. This paper provides an overview of two focus areas of tool support: code development and debugging. Each is discussed in terms of the programmer needs addressed, the extent to which representative current tools meet those needs, and what new levels of tool support are important if parallel computing is to become more widespread.

  3. Applications of RESRAD family of computer codes to sites contaminated with radioactive residues.

    SciTech Connect

    Yu, C.; Kamboj, S.; Cheng, J.-J.; LePoire, D.; Gnanapragasam, E.; Zielen, A.; Williams, W. A.; Wallo, A.; Peterson, H.

    1999-10-21

    The RESRAD family of computer codes was developed to provide a scientifically defensible answer to the question "How clean is clean?" and to provide useful tools for evaluating human health risk at sites contaminated with radioactive residues. The RESRAD codes include (1) RESRAD for soil contaminated with radionuclides; (2) RESRAD-BUILD for buildings contaminated with radionuclides; (3) RESRAD-CHEM for soil contaminated with hazardous chemicals; (4) RESRAD-BASELINE for baseline risk assessment with measured media concentrations of both radionuclides and chemicals; (5) RESRAD-ECORISK for ecological risk assessment; (6) RESRAD-RECYCLE for recycle and reuse of radiologically contaminated metals and equipment; and (7) RESRAD-OFFSITE for off-site receptor radiological dose assessment. Four of these seven codes (RESRAD, RESRAD-BUILD, RESRAD-RECYCLE, and RESRAD-OFFSITE) also have uncertainty analysis capabilities that allow the user to input distributions of parameters. RESRAD has been widely used in the United States and abroad and approved by many federal and state agencies. Experience has shown that the RESRAD codes are useful tools for evaluating sites contaminated with radioactive residues. The use of the RESRAD codes has resulted in significant savings in cleanup costs. An analysis of 19 site-specific uranium guidelines is also discussed in the paper.

  4. MMA, A Computer Code for Multi-Model Analysis

    SciTech Connect

    Eileen P. Poeter and Mary C. Hill

    2007-08-20

    This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that they tend to favor more complicated models as more data become available than do the other methods, which makes sense in many situations.
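
    A sketch of the ranking arithmetic MMA automates: information criteria computed from each calibrated model's fit, then converted into relative model weights. The residuals, parameter counts, and the Gaussian-likelihood form of AIC below are illustrative stand-ins for MMA's inputs.

      import math

      # model name -> (sum of squared weighted residuals, parameter count)
      models = {"steady_state": (42.0, 4), "transient": (35.0, 7)}
      n_obs = 30

      def aicc(ssr, k, n):
          # AIC in Gaussian-likelihood form, with small-sample correction
          aic = n * math.log(ssr / n) + 2 * k
          return aic + 2 * k * (k + 1) / (n - k - 1)

      scores = {m: aicc(ssr, k, n_obs) for m, (ssr, k) in models.items()}
      best = min(scores.values())
      weights = {m: math.exp(-0.5 * (s - best)) for m, s in scores.items()}
      total = sum(weights.values())
      for m, s in scores.items():
          print(f"{m}: AICc = {s:.2f}, weight = {weights[m] / total:.3f}")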

  5. High pressure humidification columns: Design equations, algorithm, and computer code

    SciTech Connect

    Enick, R.M.; Klara, S.M.; Marano, J.J.

    1994-07-01

    This report describes the detailed development of a computer model to simulate the humidification of an air stream in contact with a water stream in a countercurrent, packed-tower humidification column. The computer model has been developed as a user model for the Advanced System for Process Engineering (ASPEN) simulator. This was done to utilize the powerful ASPEN flash algorithms as well as to provide ease of use when using ASPEN to model systems containing humidification columns. The model can easily be modified for stand-alone use by incorporating any standard algorithm for performing flash calculations. The model was primarily developed to analyze Humid Air Turbine (HAT) power cycles; however, it can be used for any application that involves a humidifier or saturator. The solution is based on a multiple stage model of a packed column which incorporates mass and energy balances, mass transfer and heat transfer rate expressions, the Lewis relation, and a thermodynamic equilibrium model for the air-water system. The inlet air properties, inlet water properties, and a measure of the mass transfer and heat transfer which occur in the column are the only required input parameters to the model. Several example problems are provided to illustrate the algorithm's ability to generate the temperature of the water, flow rate of the water, temperature of the air, flow rate of the air, and humidity of the air as a function of height in the column. The algorithm can be used to model any high-pressure air humidification column operating at pressures up to 50 atm. This discussion includes descriptions of various humidification processes, detailed derivations of the relevant expressions, and methods of incorporating these equations into a computer model for a humidification column.
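
    One small relation a column model of this kind needs is the humidity ratio of air, obtained from the water-vapor partial pressure at the column's total pressure. The sketch below assumes the commonly tabulated Antoine constants for water (temperature in degrees C, pressure in mmHg) and should be treated as illustrative.

      def p_sat_water_mmHg(T_c):
          A, B, C = 8.07131, 1730.63, 233.426  # Antoine constants for water
          return 10 ** (A - B / (C + T_c))

      def humidity_ratio(T_c, P_atm, rh=1.0):
          p_w = rh * p_sat_water_mmHg(T_c) / 760.0  # vapor pressure [atm]
          return 0.622 * p_w / (P_atm - p_w)        # kg water / kg dry air

      # saturated air leaving a stage of a high-pressure humidifier
      print(f"w = {humidity_ratio(80.0, 20.0):.4f} kg/kg at 80 C, 20 atm")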

  6. T-Matrix: Codes for Computing Electromagnetic Scattering by Nonspherical and Aggregated Particles

    NASA Astrophysics Data System (ADS)

    Waterman, Peter; Mishchenko, Michael I.; Travis, Larry D.; Mackowski, Daniel W.

    2015-11-01

    The T-Matrix package includes codes to compute electromagnetic scattering by homogeneous, rotationally symmetric nonspherical particles in fixed and random orientations, randomly oriented two-sphere clusters with touching or separated components, and multi-sphere clusters in fixed and random orientations. All codes are written in Fortran-77. Available variants include LAPACK-based, extended-precision, Gauss-elimination-based, and NAG-based codes; double-precision and parallelized double-precision superposition codes; double-precision Lorenz-Mie codes; and codes for computing the coefficients of the generalized Chebyshev shape.

  7. Status of Continuum Edge Gyrokinetic Code Physics Development

    SciTech Connect

    Xu, X Q; Xiong, Z; Dorr, M R; Hittinger, J A; Kerbel, G D; Nevins, W M; Cohen, B I; Cohen, R H

    2005-05-31

    We are developing an edge gyrokinetic continuum simulation code to study the boundary plasma over a region extending from inside the H-mode pedestal across the separatrix to the divertor plates. A 4-D (ψ, θ, ε, μ) version of this code is presently being implemented, en route to a full 5-D version. A set of gyrokinetic equations [1] is discretized on a computational grid which incorporates X-point divertor geometry. The present implementation is a method-of-lines approach in which the phase-space derivatives are discretized with finite differences and implicit backward differencing formulas are used to advance the system in time. A fourth-order upwinding algorithm is used for particle cross-field drifts, parallel streaming, and acceleration. Boundary conditions at conducting material surfaces are implemented on the plasma side of the sheath. The Poisson-like equation is solved using GMRES with a multigrid preconditioner from HYPRE. A nonlinear Fokker-Planck collision operator from STELLA [2] in (v∥, v⊥) has been streamlined and integrated into the gyrokinetic package using the same implicit Newton-Krylov solver, interpolating F and dF/dt|coll to/from (ε, μ) space. With our 4-D code we compute the ion thermal flux, ion parallel velocity, self-consistent electric field, and geo-acoustic oscillations, which we compare with standard neoclassical theory for core plasma parameters; and we study the transition from collisional to collisionless end-loss. In the real X-point geometry, we find that particles are trapped near the outside midplane and in the X-point regions due to the magnetic configuration. The sizes of banana orbits are comparable to the pedestal width and/or the SOL width for energetic trapped particles. The effect of the real X-point geometry and edge plasma conditions on standard neoclassical theory will be evaluated, including a comparison of our 4D code with other kinetic

  8. GIANT: a computer code for General Interactive ANalysis of Trajectories

    SciTech Connect

    Jaeger, J.; Lee, M.; Servranckx, R.; Shoaee, H.

    1985-04-01

    Many model-driven diagnostic and correction procedures have been developed at SLAC for the on-line computer controlled operation of SPEAR, PEP, the LINAC, and the Electron Damping Ring. In order to facilitate future applications and enhancements, these procedures are being collected into a single program, GIANT. The program allows interactive diagnosis as well as performance optimization of any beam transport line or circular machine. The test systems for GIANT are those of the SLC project. The organization of this program and some of the recent applications of the procedures will be described in this paper.

  9. Assessment of uncertainties of the models used in thermal-hydraulic computer codes

    NASA Astrophysics Data System (ADS)

    Gricay, A. S.; Migrov, Yu. A.

    2015-09-01

    The article deals with matters concerned with the problem of determining the statistical characteristics of variable parameters (the variation range and distribution law) in analyzing the uncertainty and sensitivity of calculation results to uncertainty in input data. A comparative analysis of modern approaches to uncertainty in input data is presented. The need to develop an alternative method for estimating the uncertainty of model parameters used in thermal-hydraulic computer codes, in particular, in the closing correlations of the loop thermal hydraulics block, is shown. Such a method should involve a minimal degree of subjectivity and must be based on objective quantitative assessment criteria. The method includes three sequential stages: selecting experimental data satisfying the specified criteria, identifying the key closing correlation using a sensitivity analysis, and carrying out case calculations followed by statistical processing of the results. By using the method, one can estimate the uncertainty range of a variable parameter and establish its distribution law in the above-mentioned range provided that the experimental information is sufficiently representative. Practical application of the method is demonstrated taking as an example the problem of estimating the uncertainty of a parameter appearing in the model describing transition to post-burnout heat transfer that is used in the thermal-hydraulic computer code KORSAR. The performed study revealed the need to narrow the previously established uncertainty range of this parameter and to replace the uniform distribution law in the above-mentioned range by the Gaussian distribution law. The proposed method can be applied to different thermal-hydraulic computer codes. In some cases, application of the method can make it possible to achieve a smaller degree of conservatism in the expert estimates of uncertainties pertinent to the model parameters used in computer codes.
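
    A sketch of the final stage of the method, the statistical processing of case-calculation results for one closing-correlation parameter: estimate a range and fit a distribution. The sample values below are synthetic stand-ins for best-fit correlation multipliers inferred experiment by experiment.

      import numpy as np

      # best-fit multiplier of the closing correlation for each experiment
      samples = np.array([0.92, 1.05, 0.98, 1.10, 1.01, 0.95, 1.04, 0.99])

      mu, sigma = samples.mean(), samples.std(ddof=1)
      lo, hi = mu - 2 * sigma, mu + 2 * sigma  # ~95% range if Gaussian

      print(f"fitted N(mu={mu:.3f}, sigma={sigma:.3f}); "
            f"uncertainty range [{lo:.3f}, {hi:.3f}]")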

  10. Developing Improved MD Codes for Understanding Processive Cellulases

    SciTech Connect

    Crowley, M. F.; Uberbacher, E. C.; Brooks III, C. L.; Walker, R.C.; Nimlos, M. R.; Himmel, M. E.

    2008-01-01

    The mechanism of action of cellulose-degrading enzymes is illuminated through a multidisciplinary collaboration that uses molecular dynamics (MD) simulations and expands the capabilities of MD codes to allow simulations of enzymes and substrates on petascale computational facilities. There is a class of glycoside hydrolase enzymes called cellulases that are thought to decrystallize and processively depolymerize cellulose using biochemical processes that are largely not understood. Understanding the mechanisms involved and improving the efficiency of this hydrolysis process through computational models and protein engineering presents a compelling grand challenge. A detailed understanding of cellulose structure, dynamics and enzyme function at the molecular level is required to direct protein engineers to the right modifications or to understand if natural thermodynamic or kinetic limits are in play. Much can be learned about processivity by conducting carefully designed molecular dynamics (MD) simulations of the binding and catalytic domains of cellulases with various substrate configurations, solvation models and thermodynamic protocols. Most of these numerical experiments, however, will require significant modification of existing code and algorithms in order to efficiently use current (terascale) and future (petascale) hardware to the degree of parallelism necessary to simulate a system of the size proposed here. This work will develop MD codes that can efficiently use terascale and petascale systems, not just for simple classical MD simulations, but also for more advanced methods, including umbrella sampling with complex restraints and reaction coordinates, transition path sampling, steered molecular dynamics, and quantum mechanical/molecular mechanical simulations of systems the size of cellulose degrading enzymes acting on cellulose.

  11. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation: Functional modules, F9-F11

    SciTech Connect

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume consists of the section of the manual dealing with three of the functional modules in the code. Those are the Morse-SGC for the SCALE system, Heating 7.2, and KENO V.a. The manual describes the latest released versions of the codes.

  12. Operations analysis (study 2.1). Program listing for the LOVES computer code

    NASA Technical Reports Server (NTRS)

    Wray, S. T., Jr.

    1974-01-01

    A listing of the LOVES computer program is presented. The program is coded partially in SIMSCRIPT and FORTRAN. This version of LOVES is compatible with both the CDC 7600 and the UNIVAC 1108 computers. The code has been compiled, loaded, and executed successfully on the EXEC 8 system for the UNIVAC 1108.

  13. Performance analysis of large scale parallel CFD computing based on Code_Saturne

    NASA Astrophysics Data System (ADS)

    Shang, Zhi

    2013-02-01

    In order to run computational fluid dynamics (CFD) codes at large scale, parallel computing has to be employed. At the petascale, general parallel computing without any optimization is not enough, especially for complex industrial problems that employ a large number of mesh cells to capture the details of the geometry. Distributing these mesh cells among multiple processors so as to obtain good parallel performance on terascale and petascale systems is a real challenge. Several mesh partitioning software packages, such as Metis, ParMetis, PT-Scotch and Zoltan, were chosen as candidates and ported into Code_Saturne to test whether they can lead Code_Saturne towards petascale and exascale parallel CFD computing. Through these studies, it was found that mesh partitioning optimization packages based on the graph partitioning method can help the CFD code obtain good mesh distributions for high performance computing (HPC).
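
    Two quantities on which any mesh partitioner is judged are load imbalance (cells per processor) and edge cut (faces whose two cells land on different processors). The tiny cell-adjacency graph and partition below are invented for illustration.

      adjacency = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
      part = {0: 0, 1: 0, 2: 1, 3: 1, 4: 1}  # cell -> processor rank

      counts = {}
      for cell, rank in part.items():
          counts[rank] = counts.get(rank, 0) + 1
      imbalance = max(counts.values()) / (len(part) / len(counts))

      # each cut edge is seen from both sides, hence the division by two
      edge_cut = sum(part[a] != part[b]
                     for a in adjacency for b in adjacency[a]) // 2

      print(f"load imbalance = {imbalance:.2f}, edge cut = {edge_cut}")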

  14. Multiphase integral reacting flow computer code (ICOMFLO): User's guide

    SciTech Connect

    Chang, S.L.; Lottes, S.A.; Petrick, M.

    1997-11-01

    A copyrighted computational fluid dynamics computer code, ICOMFLO, has been developed for the simulation of multiphase reacting flows. The code solves conservation equations for gaseous species and droplets (or solid particles) of various sizes. General conservation laws, expressed by elliptic type partial differential equations, are used in conjunction with rate equations governing the mass, momentum, enthalpy, species, turbulent kinetic energy, and turbulent dissipation. Associated phenomenological submodels of the code include integral combustion, two parameter turbulence, particle evaporation, and interfacial submodels. A newly developed integral combustion submodel replacing an Arrhenius type differential reaction submodel has been implemented to improve numerical convergence and enhance numerical stability. A two parameter turbulence submodel is modified for both gas and solid phases. An evaporation submodel treats not only droplet evaporation but also size dispersion. Interfacial submodels use correlations to model interfacial momentum and energy transfer. The ICOMFLO code solves the governing equations in three steps. First, a staggered grid system is constructed in the flow domain. The staggered grid system defines gas velocity components on the surfaces of a control volume, while the other flow properties are defined at the volume center. A blocked cell technique is used to handle complex geometry. Then, the partial differential equations are integrated over each control volume and transformed into discrete difference equations. Finally, the difference equations are solved iteratively by using a modified SIMPLER algorithm. The results of the solution include gas flow properties (pressure, temperature, density, species concentration, velocity, and turbulence parameters) and particle flow properties (number density, temperature, velocity, and void fraction). The code has been used in many engineering applications, such as coal-fired combustors, air

  15. Development and testing of a Monte Carlo code system for analysis of ionization chamber responses

    SciTech Connect

    Johnson, J.O.; Gabriel, T.A.

    1986-01-01

    To predict how the presence of a detector perturbs the interactions between radiation and material, a differential Monte Carlo computer code system entitled MICAP was developed and tested. This code system determines the neutron, photon, and total response of an ionization chamber to mixed-field radiation environments. To demonstrate the ability of MICAP to calculate an ionization chamber response function, a comparison was made to 05S, an established Monte Carlo code extensively used to accurately calibrate liquid organic scintillators. Both code systems modeled an organic scintillator with a parallel beam of monoenergetic neutrons incident on the scintillator.

  16. Development of a 3-D upwind PNS code for chemically reacting hypersonic flowfields

    NASA Technical Reports Server (NTRS)

    Tannehill, J. C.; Wadawadigi, G.

    1992-01-01

    Two new parabolized Navier-Stokes (PNS) codes were developed to compute the three-dimensional, viscous, chemically reacting flow of air around hypersonic vehicles such as the National Aero-Space Plane (NASP). The first code (TONIC) solves the gas dynamic and species conservation equations in a fully coupled manner using an implicit, approximately-factored, central-difference algorithm. This code was upgraded to include shock fitting and the capability of computing the flow around complex body shapes. The revised TONIC code was validated by computing the chemically reacting (M∞ = 25.3) flow around a 10 deg half-angle cone at various angles of attack and the Ames All-Body model at 0 deg angle of attack. The results of these calculations were in good agreement with the results from the UPS code. One of the major drawbacks of the TONIC code is that the central-differencing of fluxes across interior flowfield discontinuities tends to introduce errors into the solution in the form of local flow property oscillations. The second code (UPS), originally developed for a perfect gas, has been extended to permit either perfect gas, equilibrium air, or nonequilibrium air computations. The code solves the PNS equations using a finite-volume, upwind TVD method based on Roe's approximate Riemann solver that was modified to account for real gas effects. The dissipation term associated with this algorithm is sufficiently adaptive to flow conditions that, even when attempting to capture very strong shock waves, no additional smoothing is required. For nonequilibrium calculations, the code solves the fluid dynamic and species continuity equations in a loosely-coupled manner. This code was used to calculate the hypersonic, laminar flow of chemically reacting air over cones at various angles of attack. In addition, the flow around the McDonnell Douglas generic option blended-wing-body was computed and comparisons were made between the perfect gas, equilibrium air, and the

  17. Computational Methods Development at Ames

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan; Smith, Charles A. (Technical Monitor)

    1998-01-01

    This viewgraph presentation outlines the development at Ames Research Center of advanced computational methods to provide appropriate-fidelity computational analysis/design capabilities. Current thrusts of the Ames research include: 1) methods to enhance/accelerate viscous flow simulation procedures, and the development of hybrid/polyhedral-grid procedures for viscous flow; 2) the development of real-time transonic flow simulation procedures for a production wind tunnel, and intelligent data management technology; and 3) the validation of methods and flow physics studies. The presentation gives historical precedents to the above research and speculates on its future course.

  18. Phase code multiplexed ROM type holographic memory using the computer generated hologram

    NASA Astrophysics Data System (ADS)

    Ohuchi, Yasuhiro; Takahata, Yosuke; Yoshida, Shuhei; Yamamoto, Manabu

    2009-05-01

    For holographic memory, write-once data recording has been studied using photopolymer materials. Given that optical disks have been developed in both ROM and recordable types, there appears to be a need to develop a ROM-type disk for holographic memory as well. For this ROM-type disk, the desired manufacturing method would be the one used for DVD disk production. Also, from the viewpoint of data transfer speed, the ability to reproduce data from a disk continuously rotating at high speed seems necessary. This paper describes a phase-code multiplexed ROM-type holographic memory using a computer generated hologram as the recorded data.
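
    The multiplexing relies on mutually orthogonal phase codes for the reference beam. A minimal sketch, assuming binary (0 or pi) segment phases taken from rows of a Hadamard matrix: the Gram matrix of the resulting reference fields is the identity, so each stored page can be addressed independently.

      import numpy as np
      from scipy.linalg import hadamard

      H = hadamard(8)                       # 8 binary codes (+1/-1 rows)
      phases = np.where(H > 0, 0.0, np.pi)  # phase pattern per reference beam

      fields = np.exp(1j * phases)          # complex reference fields
      gram = np.abs(fields @ fields.conj().T) / 8
      print(np.round(gram, 3))              # identity => orthogonal codes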

  19. Benchmark and partial validation testing of the FLASH computer code, Version 3.0

    SciTech Connect

    Martian, P.; Smith, C.S.

    1993-09-01

    This document presents methods and results of benchmark testing (i.e., code-to-code comparisons) and partial validation testing (i.e., tests which compare field data to the computer generated solutions) of the FLASH computer code, Version 3.0, which were conducted to determine if the code is ready for performance assessment studies of the Radioactive Waste Management Complex. Three test problems are presented that were designed to check computational efficiency, the accuracy of the numerical algorithms, and the capability of the code to simulate diverse hydrological conditions. These test problems were designed specifically to test the code's ability to simulate (a) seasonal infiltration in response to meteorological conditions, (b) changing water table elevations due to a transient areal source of water (i.e., influx from spreading basins), and (c) infiltration into fractured basalt as a result of seasonal water in drainage ditches. The FLASH simulations generally compared well with the benchmark codes, indicating good stability and acceptable computational efficiency while simulating a wide range of conditions. The code appears operational for modeling both unsaturated and saturated flow in fractured, heterogeneous porous media. However, the code failed to converge when an unsaturated-to-saturated transition occurred. Consequently, the code should not be used when this condition occurs or is expected to occur, i.e., when perched water is present or when infiltration rates exceed the saturated conductivity of the soil.

  20. Development and Verification of Enclosure Radiation Capabilities in the CHarring Ablator Response (CHAR) Code

    NASA Technical Reports Server (NTRS)

    Salazar, Giovanni; Droba, Justin C.; Oliver, Brandon; Amar, Adam J.

    2016-01-01

    With the recent development of multi-dimensional thermal protection system (TPS) material response codes, the capability to account for surface-to-surface radiation exchange in complex geometries is critical. This paper presents recent efforts to implement such capabilities in the CHarring Ablator Response (CHAR) code developed at NASA's Johnson Space Center. This work also describes the different numerical methods implemented in the code to compute geometric view factors for radiation problems involving multiple surfaces. Verification of the code's radiation capabilities and results of a code-to-code comparison are presented. Finally, a demonstration case of a two-dimensional ablating cavity with enclosure radiation accounting for a changing geometry is shown.

  1. Recent developments in the Los Alamos radiation transport code system

    SciTech Connect

    Forster, R.A.; Parsons, K.

    1997-06-01

    A brief progress report on updates to the Los Alamos Radiation Transport Code System (LARTCS) for solving criticality and fixed-source problems is provided. LARTCS integrates the Diffusion Accelerated Neutral Transport (DANT) discrete ordinates codes with the Monte Carlo N-Particle (MCNP) code. The LARTCS code is being developed with a graphical user interface for problem setup and analysis. Progress in the DANT system for criticality applications includes a two-dimensional module which can be linked to a mesh-generation code and a faster iteration scheme. Updates to MCNP Version 4A allow statistical checks of calculated Monte Carlo results.

  2. PROTEUS two-dimensional Navier-Stokes computer code, Version 1.0. Volume 1: Analysis description

    SciTech Connect

    Towne, C.E.; Schwab, J.R.; Benson, T.J.; Suresh, A.

    1990-03-01

    A new computer code was developed to solve the two-dimensional or axisymmetric, Reynolds averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The thin-layer or Euler equations may also be solved. Turbulence is modeled using an algebraic eddy viscosity model. The objective was to develop a code for aerospace applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The equations are written in nonorthogonal body-fitted coordinates, and solved by marching in time using a fully-coupled alternating direction-implicit procedure with generalized first- or second-order time differencing. All terms are linearized using second-order Taylor series. The boundary conditions are treated implicitly, and may be steady, unsteady, or spatially periodic. Simple Cartesian or polar grids may be generated internally by the program. More complex geometries require an externally generated computational coordinate system. The documentation is divided into three volumes. Volume 1 is the Analysis Description, and describes in detail the governing equations, the turbulence model, the linearization of the equations and boundary conditions, the time and space differencing formulas, the ADI solution procedure, and the artificial viscosity models.
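
    Each implicit ADI sweep reduces to tridiagonal systems along one grid direction. The sketch below is the standard Thomas algorithm such a sweep would call, a generic numerics illustration rather than PROTEUS source.

      def thomas(a, b, c, d):
          """Solve a tridiagonal system with sub-, main-, and
          super-diagonals a, b, c and right-hand side d."""
          n = len(d)
          cp, dp = [0.0] * n, [0.0] * n
          cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
          for i in range(1, n):
              m = b[i] - a[i] * cp[i - 1]
              cp[i] = c[i] / m if i < n - 1 else 0.0
              dp[i] = (d[i] - a[i] * dp[i - 1]) / m
          x = [0.0] * n
          x[-1] = dp[-1]
          for i in range(n - 2, -1, -1):
              x[i] = dp[i] - cp[i] * x[i + 1]
          return x

      # one implicit 1-D heat-equation-like step: (I - r*Laplacian) x = d
      n, r = 5, 0.5
      print(thomas([-r] * n, [1 + 2 * r] * n, [-r] * n, [1.0] * n))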

  3. The Proteus Navier-Stokes code [two- and three-dimensional computational fluid dynamics]

    NASA Technical Reports Server (NTRS)

    Towne, Charles E.; Schwab, John R.

    1992-01-01

    An effort is currently underway at NASA Lewis to develop two- and three-dimensional Navier-Stokes codes, called Proteus, for aerospace propulsion applications. Proteus solves the Reynolds-averaged, unsteady, compressible Navier-Stokes equations in strong conservation law form. Turbulence is modeled using a Baldwin-Lomax based algebraic eddy viscosity model. In addition, options are available to solve thin-layer or Euler equations, and to eliminate the energy equation by assuming constant stagnation enthalpy. An extensive series of validation cases has been run, primarily using the two-dimensional planar/axisymmetric version of the code. Several flows were computed that have exact solutions, such as: fully developed channel and pipe flow; Couette flow with and without pressure gradients; unsteady Couette flow formation; flow near a suddenly accelerated flat plate; flow between concentric rotating cylinders; and flow near a rotating disk. The two-dimensional version of the Proteus code has been released, and the three-dimensional code is scheduled for release in late 1991.

  4. Development and application of computational aerothermodynamics flowfield computer codes

    NASA Technical Reports Server (NTRS)

    Venkatapathy, Ethiraj

    1992-01-01

    Presented is a collection of papers on research activities carried out during the funding period of October 1991 to March 1992. Topics covered include: blunt body flows in thermochemical equilibrium; thermochemical relaxation in high enthalpy nozzle flow; single expansion ramp nozzle simulations; lunar return aerobraking; line boundary problem for three dimensional grids; and unsteady shock induced combustion.

  5. Development of depletion perturbation theory for a reactor nodal code

    SciTech Connect

    Bowman, S.M.

    1981-09-01

    A generalized depletion perturbation (DPT) theory formulation for light water reactor (LWR) depletion problems is developed and implemented into the three-dimensional LWR nodal code SIMULATE. This development applies the principles of the original derivation by M.L. Williams to the nodal equations solved by SIMULATE. The present formulation is first described in detail, and the nodal coupling methodology in SIMULATE is used to determine partial derivatives of the coupling coefficients. The modifications to the original code and the new DPT options available to the user are discussed. Finally, the accuracy and the applicability of the new DPT capability to LWR design analysis are examined for several LWR depletion test cases. The cases range from simple static cases to a realistic PWR model for an entire fuel cycle. Responses of interest included k-eff, nodal peaking, and peak nodal exposure. The nonlinear behavior of responses with respect to perturbations of the various types of cross sections was also investigated. The time-dependence of the sensitivity coefficients for different responses was examined and compared. Comparison of DPT results for these examples to direct calculations reveals the limited applicability of depletion perturbation theory to LWR design calculations at the present. The reasons for these restrictions are discussed, and several methods which might improve the computational accuracy of DPT are proposed for future research.

  6. Finite element code development for modeling detonation of HMX composites

    NASA Astrophysics Data System (ADS)

    Duran, Adam; Sundararaghavan, Veera

    2015-06-01

    In this talk, we present a hydrodynamics code for modeling shock and detonation waves in HMX. A stable, efficient solution strategy based on a Taylor-Galerkin finite element (FE) discretization was developed to solve the reactive Euler equations. In our code, well-calibrated equations of state for the solid unreacted material and gaseous reaction products have been implemented, along with a chemical reaction scheme and a mixing rule to define the properties of partially reacted states. A linear Gruneisen equation of state, calibrated from experiments, was employed for the unreacted HMX. The JWL form was used to model the EOS of the gaseous reaction products. It is assumed that the unreacted explosive and reaction products are in both pressure and temperature equilibrium. The overall specific volume and internal energy were computed using the rule of mixtures. An Arrhenius kinetics scheme was integrated to model the chemical reactions. A locally controlled dissipation was introduced that yields a non-oscillatory stabilized scheme at the shock front. The FE model was validated using analytical solutions for the Sod shock tube and ZND strong detonation models and then used to perform 2D and 3D shock simulations. We will present benchmark problems for geometries in which a single HMX crystal is subjected to a shock condition. Our current progress towards developing microstructural models of HMX/binder composites will also be discussed.
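
    For reference, the JWL product equation of state named above has the standard pressure form p(V, E) = A(1 - w/(R1 V))exp(-R1 V) + B(1 - w/(R2 V))exp(-R2 V) + wE/V. The parameter set below is a commonly quoted HMX-like set and should be treated as illustrative only, not as the values calibrated in this work.

      import math

      def jwl_pressure(V, E, A=7.783e11, B=7.071e9, R1=4.2, R2=1.0, w=0.30):
          """Pressure [Pa] from relative volume V = v/v0 and energy per
          unit initial volume E [Pa]."""
          return (A * (1 - w / (R1 * V)) * math.exp(-R1 * V)
                  + B * (1 - w / (R2 * V)) * math.exp(-R2 * V)
                  + w * E / V)

      # smoke test near a Chapman-Jouguet-like state (values illustrative)
      print(f"p = {jwl_pressure(V=0.73, E=1.05e10):.3e} Pa")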

  7. A computer code (SKINTEMP) for predicting transient missile and aircraft heat transfer characteristics

    NASA Astrophysics Data System (ADS)

    Cummings, Mary L.

    1994-09-01

    A FORTRAN computer code (SKINTEMP) has been developed to calculate transient missile/aircraft aerodynamic heating parameters utilizing basic flight parameters such as altitude, Mach number, and angle of attack. The insulated skin temperature of a vehicle surface on either the fuselage (axisymmetric body) or wing (two-dimensional body) is computed from a basic heat balance relationship throughout the entire spectrum (subsonic, transonic, supersonic, hypersonic) of flight. This calculation method employs a simple finite difference procedure which considers radiation, forced convection, and non-reactive chemistry. Surface pressure estimates are based on a modified Newtonian flow model. Eckert's reference temperature method is used as the forced convection heat transfer model. SKINTEMP predictions are compared with a limited number of test cases. SKINTEMP was developed as a tool to enhance the conceptual design process of high speed missiles and aircraft. Recommendations are made for possible future development of SKINTEMP to further support the design process.
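
    A minimal sketch of the heat-balance march such a code performs for a thin, insulated skin: convective heating in, reradiation out, absorbed by the skin's thermal mass. All property and flight values below are hypothetical.

      SIGMA = 5.670e-8                  # Stefan-Boltzmann [W/m^2-K^4]
      rho, c, t = 2700.0, 900.0, 0.002  # skin density, heat capacity, thickness
      eps = 0.8                         # surface emissivity
      h, T_r = 500.0, 800.0             # film coefficient, recovery temperature [K]

      T, dt = 300.0, 0.01               # initial skin temperature [K], time step [s]
      for _ in range(int(60 / dt)):     # march one minute of flight
          q = h * (T_r - T) - eps * SIGMA * T**4  # net surface heat flux [W/m^2]
          T += q * dt / (rho * c * t)             # lumped-thermal-mass update

      print(f"skin temperature after 60 s: {T:.0f} K")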

  8. Developing Computational Physics in Nigeria

    NASA Astrophysics Data System (ADS)

    Akpojotor, Godfrey; Enukpere, Emmanuel; Akpojotor, Famous; Ojobor, Sunny

    2009-03-01

    Computer based instruction is permeating the educational curricula of many countries owing to the realization that computational physics, which involves computer modeling, enhances the teaching/learning process when combined with theory and experiment. For the students, it gives more insight and understanding in the learning process and thereby equips them with scientific and computing skills to excel in industrial and commercial environments as well as at the Masters and doctoral levels. For the teachers, among other benefits, the availability of open-access sites on both instructional and evaluation materials can improve their performance. With a growing population of students and new challenges to meet developmental goals, this paper examines the challenges and prospects of the current drive to develop computational physics as a university undergraduate programme, or as a choice of specialized modules or laboratories within the mainstream physics programme, in Nigerian institutions. In particular, the current effort of the Nigerian Computational Physics Working Group to design computational physics programmes to meet the developmental goals of the country is discussed.

  9. Energy Scaling Advantages of Resistive Memory Crossbar Based Computation and Its Application to Sparse Coding.

    PubMed

    Agarwal, Sapan; Quach, Tu-Thach; Parekh, Ojas; Hsia, Alexander H; DeBenedictis, Erik P; James, Conrad D; Marinella, Matthew J; Aimone, James B

    2015-01-01

    The exponential increase in data over the last decade presents a significant challenge to analytics efforts that seek to process and interpret such data for various applications. Neural-inspired computing approaches are being developed in order to leverage the computational properties of the analog, low-power data processing observed in biological systems. Analog resistive memory crossbars can perform a parallel read or a vector-matrix multiplication as well as a parallel write or a rank-1 update with high computational efficiency. For an N × N crossbar, these two kernels can be O(N) more energy efficient than a conventional digital memory-based architecture. If the read operation is noise limited, the energy to read a column can be independent of the crossbar size (O(1)). These two kernels form the basis of many neuromorphic algorithms such as image, text, and speech recognition. For instance, these kernels can be applied to a neural sparse coding algorithm to give an O(N) reduction in energy for the entire algorithm when run with finite precision. Sparse coding is a rich problem with a host of applications including computer vision, object tracking, and more generally unsupervised learning. PMID:26778946
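
    A sketch of the two crossbar kernels in conventional code: Ohm's law and current summation give the vector-matrix multiply, and an outer product gives the rank-1 write. NumPy stands in for the analog physics; sizes and magnitudes are illustrative.

      import numpy as np

      N = 4
      G = np.random.uniform(1e-6, 1e-4, size=(N, N))  # device conductances [S]
      v = np.random.uniform(0.0, 0.2, size=N)         # row read voltages [V]

      # parallel read: each column wire sums its devices' currents, i = G^T v
      i = G.T @ v
      print(i)

      # parallel write: a rank-1 (outer product) conductance update
      v_row = np.random.uniform(0.0, 0.2, size=N)
      v_col = np.random.uniform(0.0, 0.2, size=N)
      G += 1e-7 * np.outer(v_row, v_col)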

  10. Energy scaling advantages of resistive memory crossbar based computation and its application to sparse coding

    DOE PAGESBeta

    Agarwal, Sapan; Quach, Tu-Thach; Parekh, Ojas; DeBenedictis, Erik P.; James, Conrad D.; Marinella, Matthew J.; Aimone, James B.

    2016-01-06

    In this study, the exponential increase in data over the last decade presents a significant challenge to analytics efforts that seek to process and interpret such data for various applications. Neural-inspired computing approaches are being developed in order to leverage the computational properties of the analog, low-power data processing observed in biological systems. Analog resistive memory crossbars can perform a parallel read or a vector-matrix multiplication as well as a parallel write or a rank-1 update with high computational efficiency. For an N × N crossbar, these two kernels can be O(N) more energy efficient than a conventional digital memory-based architecture. If the read operation is noise limited, the energy to read a column can be independent of the crossbar size (O(1)). These two kernels form the basis of many neuromorphic algorithms such as image, text, and speech recognition. For instance, these kernels can be applied to a neural sparse coding algorithm to give an O(N) reduction in energy for the entire algorithm when run with finite precision. Sparse coding is a rich problem with a host of applications including computer vision, object tracking, and more generally unsupervised learning.

  13. The role of the PIRT process in identifying code improvements and executing code development

    SciTech Connect

    Wilson, G.E.; Boyack, B.E.

    1997-07-01

    In September 1988, the USNRC issued a revised ECCS rule for light water reactors that allows, as an option, the use of best estimate (BE) plus uncertainty methods in safety analysis. The key feature of this licensing option relates to quantification of the uncertainty in the determination that an NPP has a "low" probability of violating the safety criteria specified in 10 CFR 50. To support the 1988 licensing revision, the USNRC and its contractors developed the CSAU evaluation methodology to demonstrate the feasibility of the BE plus uncertainty approach. The PIRT process, Step 3 in the CSAU methodology, was originally formulated to support the BE plus uncertainty licensing option as executed in the CSAU approach to safety analysis. Subsequent work has shown the PIRT process to be a much more powerful tool than conceived in its original form. Through further development and application, the PIRT process has shown itself to be a robust means to establish safety analysis computer code phenomenological requirements in their order of importance to such analyses. Used early in research directed toward these objectives, PIRT results also provide the technical basis and cost effective organization for new experimental programs needed to improve the safety analysis codes for new applications. The primary purpose of this paper is to describe the generic PIRT process, including typical and common illustrations from prior applications. The secondary objective is to provide guidance to future applications of the process to help them focus, in a graded approach, on systems, components, processes and phenomena that have been common in several prior applications.

  14. BRYNTRN: A baryon transport computer code, computation procedures and data base

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Townsend, Lawrence W.; Chun, Sang Y.; Buck, Warren W.; Khan, Ferdous; Cucinotta, Frank

    1988-01-01

    The development of an interaction data base and a numerical solution to the transport of baryons through arbitrary shield material is described, based on a straight-ahead approximation of the Boltzmann equation. The code is most accurate for continuous energy boundary values, but gives reasonable results for discrete spectra at the boundary with even a relatively coarse energy grid (30 points) and large spatial increments (1 cm in H2O).
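
    A toy sketch of the straight-ahead idea, assuming pure attenuation (the actual BRYNTRN solution also produces secondary particles and shifts the energy spectrum): the fluence at each energy decays along the beam axis, marched here on the coarse 30-point energy grid and 1 cm spatial increment quoted above. The cross sections and boundary spectrum are invented.

      import numpy as np

      # Straight-ahead attenuation of a boundary fluence spectrum; a toy
      # stand-in for the full Boltzmann solution (no secondaries here).
      energies = np.linspace(1.0, 1000.0, 30)   # 30-point energy grid (MeV)
      sigma = 0.01 + 0.05 / np.sqrt(energies)   # assumed cross sections (1/cm)
      phi = np.exp(-energies / 200.0)           # assumed boundary spectrum

      dx = 1.0                                  # 1 cm spatial increment
      for _ in range(30):                       # march 30 cm into the shield
          phi = phi * np.exp(-sigma * dx)       # exact attenuation per step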

  15. Application of the TEMPEST computer code to canister-filling heat transfer problems

    SciTech Connect

    Farnsworth, R.K.; Faletti, D.W.; Budden, M.J.

    1988-03-01

    Pacific Northwest Laboratory (PNL) researchers used the TEMPEST computer code to simulate thermal cooldown behavior of nuclear waste glass after it was poured into steel canisters for long-term storage. The objective of this work was to determine the accuracy and applicability of the TEMPEST code when used to compute canister thermal histories. First, experimental data were obtained to provide the basis for comparing TEMPEST-generated predictions. Five canisters were instrumented with appropriately located radial and axial thermocouples. The canisters were filled using the pilot-scale ceramic melter (PSCM) at PNL. Each canister was filled in either a continuous or a batch filling mode. One of the canisters was also filled within a turntable simulant (a group of cylindrical shells with heat transfer resistances similar to those in an actual melter turntable). This was necessary to provide a basis for assessing the ability of the TEMPEST code to also model the transient cooling of canisters in a melter turntable. The continuous-fill model, Version M, was found to predict temperatures with more accuracy. The turntable simulant experiment demonstrated that TEMPEST can adequately model the asymmetric temperature field caused by the turntable geometry. Further, TEMPEST can acceptably predict the canister cooling history within a turntable, despite code limitations in computing simultaneous radiation and convection heat transfer between shells, along with uncertainty in stainless-steel surface emissivities. Based on the successful performance of TEMPEST Version M, development was initiated to incorporate 1) full viscous glass convection, 2) a dynamically adaptive grid that automatically follows the glass/air interface throughout the transient, and 3) a full enclosure radiation model to allow radiation heat transfer to non-nearest neighbor cells. 5 refs., 47 figs., 17 tabs.

  16. Benchmark Problems Used to Assess Computational Aeroacoustics Codes

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.; Envia, Edmane

    2005-01-01

    The field of computational aeroacoustics (CAA) encompasses numerical techniques for calculating all aspects of sound generation and propagation in air directly from fundamental governing equations. Aeroacoustic problems typically involve flow-generated noise, with and without the presence of a solid surface, and the propagation of the sound to a receiver far away from the noise source. It is a challenge to obtain accurate numerical solutions to these problems. The NASA Glenn Research Center has been at the forefront in developing and promoting the development of CAA techniques and methodologies for computing the noise generated by aircraft propulsion systems. To assess the technological advancement of CAA, Glenn, in cooperation with the Ohio Aerospace Institute and the AeroAcoustics Research Consortium, organized and hosted the Fourth CAA Workshop on Benchmark Problems. Participants from industry and academia from both the United States and abroad joined to present and discuss solutions to benchmark problems. These demonstrated technical progress ranging from the basic challenges to accurate CAA calculations, to the solution of CAA problems of increasing complexity and difficulty. The results are documented in the proceedings of the workshop. Problems were solved in five categories. In three of the five categories, exact solutions were available for comparison with CAA results. A fourth category of problems representing sound generation from either a single airfoil or a blade row interacting with a gust (i.e., problems relevant to fan noise) had approximate analytical or completely numerical solutions. The fifth category of problems involved sound generation in a viscous flow. In this case, the CAA results were compared with experimental data.

  17. On the Development of a Gridless Inflation Code for Parachute Simulations

    SciTech Connect

    STRICKLAND,JAMES H.; HOMICZ,GREGORY F.; GOSSLER,ALBERT A.; WOLFE,WALTER P.; PORTER,VICKI L.

    2000-08-29

    In this paper the authors present the current status of an unsteady 3D parachute simulation code which is being developed at Sandia National Laboratories under the Department of Energy's Accelerated Strategic Computing Initiative (ASCI). The Vortex Inflation PARachute code (VIPAR) which embodies this effort will eventually be able to perform complete numerical simulations of ribbon parachute deployment, inflation, and steady descent. At the present time they have a working serial version of the uncoupled fluids code which can simulate unsteady 3D incompressible flows around bluff bodies made up of triangular membrane elements. A parallel version of the code has just been completed which will allow one to compute flows over complex geometries utilizing several thousand processors on one of the new DOE teraFLOP computers.

  18. Advanced Subsonic Technology (AST) Area of Interest (AOI) 6: Develop and Validate Aeroelastic Codes for Turbomachinery

    NASA Technical Reports Server (NTRS)

    Gardner, Kevin D.; Liu, Jong-Shang; Murthy, Durbha V.; Kruse, Marlin J.; James, Darrell

    1999-01-01

    AlliedSignal Engines, in cooperation with NASA GRC (National Aeronautics and Space Administration Glenn Research Center), completed an evaluation of recently-developed aeroelastic computer codes using test cases from the AlliedSignal Engines fan blisk and turbine databases. Test data included strain gage, performance, and steady-state pressure information obtained for conditions where synchronous or flutter vibratory conditions were found to occur. Aeroelastic codes evaluated included quasi 3-D UNSFLO (MIT Developed/AE Modified, Quasi 3-D Aeroelastic Computer Code), 2-D FREPS (NASA-Developed Forced Response Prediction System Aeroelastic Computer Code), and 3-D TURBO-AE (NASA/Mississippi State University Developed 3-D Aeroelastic Computer Code). Unsteady pressure predictions for the turbine test case were used to evaluate the forced response prediction capabilities of each of the three aeroelastic codes. Additionally, one of the fan flutter cases was evaluated using TURBO-AE. The UNSFLO and FREPS evaluation predictions showed good agreement with the experimental test data trends, but quantitative improvements are needed. UNSFLO over-predicted turbine blade response reductions, while FREPS under-predicted them. The inviscid TURBO-AE turbine analysis predicted no discernible blade response reduction, indicating the necessity of including viscous effects for this test case. For the TURBO-AE fan blisk test case, significant effort was expended getting the viscous version of the code to give converged steady flow solutions for the transonic flow conditions. Once converged, the steady solutions provided an excellent match with test data and the calibrated DAWES (AlliedSignal 3-D Viscous Steady Flow CFD Solver). However, efforts expended establishing quality steady-state solutions prevented exercising the unsteady portion of the TURBO-AE code during the present program. AlliedSignal recommends that unsteady pressure measurement data be obtained for both test cases examined

  19. REMAP: A computer code that transfers node information between dissimilar grids

    SciTech Connect

    Shapiro, A.B.

    1990-04-01

    REMAP is a computer code that transfers the axisymmetric, two-dimensional planar, or three-dimensional temperature field from one finite element mesh to another. The meshes may be arbitrary in the number of elements and their geometry. REMAP interpolates or extrapolates the node temperatures from the old mesh to the new mesh using linear, bilinear, or trilinear isoparametric finite element shape functions. REMAP is used to transfer the temperature field from a thermal analysis mesh to a more finely discretized structural analysis mesh when performing a thermal stress analysis. REMAP was designed to be used with the finite element heat transfer codes TOPAZ2D and TOPAZ3D, and the solid mechanics codes NIKE2D and NIKE3D. The I/O formats in REMAP can be easily modified to accept input from other codes (e.g., finite difference) and generate output files for other structural codes. REMAP can be used to transfer any scalar field variable between dissimilar finite element meshes. A coarse filter followed by a fine filter is used to determine which element from the old mesh contains a node point from the new mesh. The coarse filter determines a subset of elements from the old mesh that may contain the new node point. The fine filter determines the element that contains the new node point. REMAP uses the ray-surface intersection algorithm developed for the FACET code for the fine filter. This algorithm has the added capability to determine which element the node is closest to if the node point lies outside the perimeter of the old mesh. Once an element from the old mesh has been identified as containing or closest to the new node point, the natural coordinates for the node point are calculated. The isoparametric finite element shape functions are calculated next. These shape functions are then used to interpolate or extrapolate the temperatures from the nodes comprising the old element to the new node point.
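
    The final interpolation step is compact enough to sketch. Assuming a two-dimensional four-node element from the old mesh has already been located by the coarse and fine filters, the bilinear isoparametric shape functions evaluated at the new node's natural coordinates weight the old nodal temperatures (a hypothetical standalone fragment, not REMAP source):

      import numpy as np

      def bilinear_shape(xi, eta):
          # Shape functions of a 4-node quadrilateral at natural coordinates
          # (xi, eta); both lie in [-1, 1] inside the element, and values
          # outside that range extrapolate, as for new nodes that fall
          # beyond the perimeter of the old mesh.
          return 0.25 * np.array([(1 - xi) * (1 - eta),
                                  (1 + xi) * (1 - eta),
                                  (1 + xi) * (1 + eta),
                                  (1 - xi) * (1 + eta)])

      T_old = np.array([100.0, 150.0, 180.0, 120.0])  # old-element nodal temperatures
      xi, eta = 0.3, -0.5                             # natural coordinates of the new node
      T_new = bilinear_shape(xi, eta) @ T_old         # interpolated temperature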

  20. Developing a computational model of human hand kinetics using AVS

    SciTech Connect

    Abramowitz, Mark S.

    1996-05-01

    As part of an ongoing effort to develop a finite element model of the human hand at the Institute for Scientific Computing Research (ISCR), this project extended existing computational tools for analyzing and visualizing hand kinetics. These tools employ a commercial, scientific visualization package called AVS. FORTRAN and C code, originally written by David Giurintano of the Gillis W. Long Hansen's Disease Center, was ported to a different computing platform, debugged, and documented. Usability features were added and the code was made more modular and readable. When the code is used to visualize bone movement and tendon paths for the thumb, graphical output is consistent with expected results. However, numerical values for forces and moments at the thumb joints do not yet appear to be accurate enough to be included in ISCR's finite element model. Future work includes debugging the parts of the code that calculate forces and moments and verifying the correctness of these values.

  1. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation: Control modules C4, C6

    SciTech Connect

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U. S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume is part of the manual related to the control modules for the newest updated version of this computational package.

  2. User Instructions for the Systems Assessment Capability, Rev. 0, Computer Codes Volume 2: Impact Modules

    SciTech Connect

    Eslinger, Paul W.; Arimescu, Carmen; Kanyid, Beverly A.; Miley, Terri B.

    2001-12-01

    One activity of the Department of Energy's Groundwater/Vadose Zone Integration Project is an assessment of cumulative impacts from Hanford Site wastes on the subsurface environment and the Columbia River. Through the application of a system assessment capability (SAC), decisions for each cleanup and disposal action will be able to take into account the composite effect of other cleanup and disposal actions. The SAC has developed a suite of computer programs to simulate the migration of contaminants (analytes) present on the Hanford Site and to assess the potential impacts of the analytes, including dose to humans, socio-cultural impacts, economic impacts, and ecological impacts. The general approach to handling uncertainty in the SAC computer codes is a Monte Carlo approach. Conceptually, one generates a value for every stochastic parameter in the code (the entire sequence of modules from inventory through transport and impacts) and then executes the simulation, obtaining an output value, or result. This document provides user instructions for the SAC codes that generate human, ecological, economic, and cultural impacts.
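
    The Monte Carlo approach described above reduces to a simple driver loop: sample every stochastic parameter, execute the module chain once, and record the result. The sketch below assumes a placeholder run_simulation function and invented parameter distributions; it is not SAC code.

      import numpy as np

      rng = np.random.default_rng(42)

      def run_simulation(params):
          # Placeholder for the full module sequence (inventory through
          # transport and impacts); returns a single made-up result.
          return params["inventory"] * params["transfer_factor"]

      results = []
      for _ in range(1000):                        # one realization per pass
          params = {                               # sample every stochastic input
              "inventory": rng.lognormal(mean=0.0, sigma=0.5),
              "transfer_factor": rng.uniform(0.1, 0.9),
          }
          results.append(run_simulation(params))

      print(np.mean(results), np.percentile(results, 95))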

  3. Light curves for bump Cepheids computed with a dynamically zoned pulsation code

    NASA Astrophysics Data System (ADS)

    Adams, T. F.; Castor, J. I.; Davis, C. G.

    1980-05-01

    The dynamically zoned pulsation code developed by Castor, Davis, and Davison was used to recalculate the Goddard model and to calculate three other Cepheid models with the same period (9.8 days). This family of models shows how the bumps and other features of the light and velocity curves change as the mass is varied at constant period. The use of a code that is capable of producing reliable light curves demonstrates that the light and velocity curves for 9.8 day Cepheid models with standard homogeneous compositions do not show bumps like those that are observed unless the mass is significantly lower than the 'evolutionary mass.' The light and velocity curves for the Goddard model presented here are similar to those computed independently by Fischel, Sparks, and Karp. They should be useful as standards for future investigators.

  4. Light curves for bump Cepheids computed with a dynamically zoned pulsation code

    NASA Technical Reports Server (NTRS)

    Adams, T. F.; Castor, J. I.; Davis, C. G.

    1980-01-01

    The dynamically zoned pulsation code developed by Castor, Davis, and Davison was used to recalculate the Goddard model and to calculate three other Cepheid models with the same period (9.8 days). This family of models shows how the bumps and other features of the light and velocity curves change as the mass is varied at constant period. The use of a code that is capable of producing reliable light curves demonstrates that the light and velocity curves for 9.8 day Cepheid models with standard homogeneous compositions do not show bumps like those that are observed unless the mass is significantly lower than the 'evolutionary mass.' The light and velocity curves for the Goddard model presented here are similar to those computed independently by Fischel, Sparks, and Karp. They should be useful as standards for future investigators.

  5. Additional development of the XTRAN3S computer program

    NASA Technical Reports Server (NTRS)

    Borland, C. J.

    1989-01-01

    Additional developments and enhancements to the XTRAN3S computer program, a code for calculation of steady and unsteady aerodynamics, and associated aeroelastic solutions, for 3-D wings in the transonic flow regime are described. Algorithm improvements for the XTRAN3S program were provided including an implicit finite difference scheme to enhance the allowable time step and vectorization for improved computational efficiency. The code was modified to treat configurations with a fuselage, multiple stores/nacelles/pylons, and winglets. Computer program changes (updates) for error corrections and updates for version control are provided.

  6. User's manual for GEOTEMP, a computer code for predicting downhole wellbore and soil temperatures in geothermal wells. Appendix to Part I report

    SciTech Connect

    Wooley, G.R.

    1980-03-01

    GEOTEMP is a computer code that calculates downhole temperatures in and surrounding a well. Temperatures are computed as a function of time in a flowing stream, in the wellbore, and in the soil. Flowing options available in the model include the following: injection/production, forward/reverse circulation, and drilling. This manual describes how to input data to the code and what results are printed out, provides six examples of both input and output, and supplies a listing of the code. The user's manual is an appendix to the Part I report Development of Computer Code and Acquisition of Field Temperature Data.

  7. TEMPEST: A computer code for three-dimensional analysis of transient fluid dynamics

    SciTech Connect

    Fort, J.A.

    1995-06-01

    TEMPEST (Transient Energy Momentum and Pressure Equations Solutions in Three dimensions) is a powerful tool for solving engineering problems in nuclear energy, waste processing, chemical processing, and environmental restoration because it performs 3-D, time-dependent computational fluid dynamics and heat transfer analysis. It is a family of codes with two primary versions, an N-Version (available to the public) and a T-Version (not currently available to the public). This handout discusses its capabilities, applications, numerical algorithms, development status, and availability and assistance.

  8. Input to the PRAST computer code used in the SRS probabilistic risk assessment

    SciTech Connect

    Kearnaghan, D.P.

    1992-10-15

    The PRAST (Production Reactor Algorithm for Source Terms) computer code was developed by Westinghouse Savannah River Company and Science Application International Corporation for the quantification of source terms for the Savannah River Site (SRS) Reactor Probabilistic Risk Assessment. PRAST requires as input a set of release fractions, decontamination factors, transfer fractions, and source term characteristics that accurately reflect the conditions that are evaluated by PRAST. This document links the analyses which form the basis for the PRAST input parameters. In addition, it gives the distribution of the input parameters that are uncertain and considered to be important to the evaluation of the source terms to the environment.

  9. UCODE, a computer code for universal inverse modeling

    USGS Publications Warehouse

    Poeter, E.P.; Hill, M.C.

    1999-01-01

    This article presents the US Geological Survey computer program UCODE, which was developed in collaboration with the US Army Corps of Engineers Waterways Experiment Station and the International Ground Water Modeling Center of the Colorado School of Mines. UCODE performs inverse modeling, posed as a parameter-estimation problem, using nonlinear regression. Any application model or set of models can be used; the only requirement is that they have numerical (ASCII or text only) input and output files and that the numbers in these files have sufficient significant digits. Application models can include preprocessors and postprocessors as well as models related to the processes of interest (physical, chemical and so on), making UCODE extremely powerful for model calibration. Estimated parameters can be defined flexibly with user-specified functions. Observations to be matched in the regression can be any quantity for which a simulated equivalent value can be produced; simulated equivalent values are calculated from values that appear in the application model output files and can be manipulated with additive and multiplicative functions, if necessary. Prior, or direct, information on estimated parameters also can be included in the regression. The nonlinear regression problem is solved by minimizing a weighted least-squares objective function with respect to the parameter values using a modified Gauss-Newton method. Sensitivities needed for the method are calculated approximately by forward or central differences, and problems and solutions related to this approximation are discussed. Statistics are calculated and printed for use in (1) diagnosing inadequate data or identifying parameters that probably cannot be estimated with the available data, (2) evaluating estimated parameter values, (3) evaluating the model representation of the actual processes and (4) quantifying the uncertainty of model simulated values. UCODE is intended for use on any computer operating
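
    The core numerical loop is small enough to sketch. Assuming a generic model function and diagonal weights (UCODE itself uses a modified Gauss-Newton method with additional safeguards), each iteration builds a finite-difference sensitivity matrix and solves the weighted normal equations:

      import numpy as np

      def forward_diff_jacobian(f, b, h=1e-6):
          # Sensitivities approximated by forward differences.
          f0 = f(b)
          J = np.empty((f0.size, b.size))
          for j in range(b.size):
              bp = b.copy()
              bp[j] += h
              J[:, j] = (f(bp) - f0) / h
          return J

      def gauss_newton(f, y, w, b, iters=20):
          # Minimize S(b) = sum_i w_i * (y_i - f_i(b))**2 by Gauss-Newton steps.
          W = np.diag(w)
          for _ in range(iters):
              r = y - f(b)
              J = forward_diff_jacobian(f, b)
              b = b + np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
          return b

      # Toy "application model": a two-parameter exponential decay.
      t = np.linspace(0.0, 10.0, 20)
      model = lambda b: b[0] * np.exp(-b[1] * t)
      y_obs = model(np.array([5.0, 0.3]))          # synthetic observations
      b_est = gauss_newton(model, y_obs, np.ones_like(t), np.array([1.0, 0.1]))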

  10. Validation of NASA Thermal Ice Protection Computer Codes. Part 1; Program Overview

    NASA Technical Reports Server (NTRS)

    Miller, Dean; Bond, Thomas; Sheldon, David; Wright, William; Langhals, Tammy; Al-Khalil, Kamel; Broughton, Howard

    1996-01-01

    The Icing Technology Branch at NASA Lewis has been involved in an effort to validate two thermal ice protection codes developed at the NASA Lewis Research Center: LEWICE/Thermal (electrothermal deicing and anti-icing) and ANTICE (hot-gas and electrothermal anti-icing). The Thermal Code Validation effort was designated as a priority during a 1994 'peer review' of the NASA Lewis Icing program, and was implemented as a cooperative effort with industry. During April 1996, the first of a series of experimental validation tests was conducted in the NASA Lewis Icing Research Tunnel (IRT). The purpose of the April 1996 test was to validate the electrothermal predictive capabilities of both LEWICE/Thermal and ANTICE. A heavily instrumented test article was designed and fabricated for this test, with the capability of simulating electrothermal de-icing and anti-icing modes of operation. Thermal measurements were then obtained over a range of test conditions, for comparison with analytical predictions. This paper will present an overview of the test, including a detailed description of: (1) the validation process; (2) test article design; (3) test matrix development; and (4) test procedures. Selected experimental results will be presented for de-icing and anti-icing modes of operation. Finally, the status of the validation effort at this point will be summarized. Detailed comparisons between analytical predictions and experimental results are contained in the following two papers: 'Validation of NASA Thermal Ice Protection Computer Codes: Part 2 - The Validation of LEWICE/Thermal' and 'Validation of NASA Thermal Ice Protection Computer Codes: Part 3 - The Validation of ANTICE'.

  11. Development of Covariance Capabilities in EMPIRE Code

    SciTech Connect

    Herman, M.; Pigni, M.T.; Oblozinsky, P.; Mughabghab, S.F.; Mattoon, C.M.; Capote, R.; Cho, Young-Sik; Trkov, A.

    2008-12-15

    The nuclear reaction code EMPIRE has been extended to provide evaluation capabilities for neutron cross section covariances in the thermal, resolved resonance, unresolved resonance and fast neutron regions. The Atlas of Neutron Resonances by Mughabghab is used as a primary source of information on uncertainties at low energies. Care is taken to ensure consistency among the resonance parameter uncertainties and those for thermal cross sections. The resulting resonance parameter covariances are formatted in the ENDF-6 File 32. In the fast neutron range our methodology is based on model calculations with the code EMPIRE combined with experimental data through several available approaches. The model-based covariances can be obtained using deterministic (Kalman) or stochastic (Monte Carlo) propagation of model parameter uncertainties. We show that these two procedures yield comparable results. The Kalman filter and/or the generalized least square fitting procedures are employed to incorporate experimental information. We compare the two approaches analyzing results for the major reaction channels on {sup 89}Y. We also discuss a long-standing issue of unreasonably low uncertainties and link it to the rigidity of the model.
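
    A minimal sketch of the stochastic (Monte Carlo) propagation route, with an invented two-parameter model standing in for an EMPIRE calculation: parameters are drawn from an assumed parameter covariance, the model is evaluated for each draw, and the cross-section covariance is formed from the ensemble.

      import numpy as np

      rng = np.random.default_rng(0)

      def model_cross_section(params, energies):
          # Stand-in for an EMPIRE model run; any smooth function works here.
          a, b = params
          return a * np.exp(-b * energies)

      energies = np.linspace(0.1, 20.0, 50)   # MeV; illustrative grid
      p_mean = np.array([2.0, 0.1])           # nominal model parameters
      p_cov = np.diag([0.04, 1e-4])           # assumed parameter covariance

      # Sample parameters, evaluate the model, and form the covariance
      # of the resulting cross sections across the ensemble.
      samples = rng.multivariate_normal(p_mean, p_cov, size=2000)
      xs = np.array([model_cross_section(p, energies) for p in samples])
      xs_cov = np.cov(xs, rowvar=False)       # 50 x 50 cross-section covariance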

  12. Development of covariance capabilities in EMPIRE code

    SciTech Connect

    Herman, M.; Pigni, M.T.; Oblozinsky, P.; Mughabghab, S.F.; Mattoon, C.M.; Capote, R.; Cho, Young-Sik; Trkov, A.

    2008-06-24

    The nuclear reaction code EMPIRE has been extended to provide evaluation capabilities for neutron cross section covariances in the thermal, resolved resonance, unresolved resonance and fast neutron regions. The Atlas of Neutron Resonances by Mughabghab is used as a primary source of information on uncertainties at low energies. Care is taken to ensure consistency among the resonance parameter uncertainties and those for thermal cross sections. The resulting resonance parameter covariances are formatted in the ENDF-6 File 32. In the fast neutron range our methodology is based on model calculations with the code EMPIRE combined with experimental data through several available approaches. The model-based covariances can be obtained using deterministic (Kalman) or stochastic (Monte Carlo) propagation of model parameter uncertainties. We show that these two procedures yield comparable results. The Kalman filter and/or the generalized least square fitting procedures are employed to incorporate experimental information. We compare the two approaches analyzing results for the major reaction channels on {sup 89}Y. We also discuss a long-standing issue of unreasonably low uncertainties and link it to the rigidity of the model.

  13. Development of Visualization Software for McCARD Code

    NASA Astrophysics Data System (ADS)

    Park, Chang Je; Lee, Byungchul; Shim, Hyung Jin; Choi, Kwang Yoeng; Roh, Chang Hyun

    2014-06-01

    In order to confirm the geometrical modeling in input files for the McCARD (Monte Carlo Code for Advanced Reactor Design and analysis) code, a 2D visualization program with 3D modeling has been developed. Designing a graphical program for complicated geometries requires a large number of mathematical operations and advanced techniques. The software is coded in the Visual C++ language and runs under the Windows PC environment.

  14. An accurate Fortran code for computing hydrogenic continuum wave functions at a wide range of parameters

    NASA Astrophysics Data System (ADS)

    Peng, Liang-You; Gong, Qihuang

    2010-12-01

    The accurate computations of hydrogenic continuum wave functions are very important in many branches of physics such as electron-atom collisions, cold atom physics, and atomic ionization in strong laser fields, etc. Although there already exist various algorithms and codes, most of them are reliable only in certain ranges of parameters. In some practical applications, accurate continuum wave functions need to be calculated at extremely low energies, large radial distances and/or large angular momentum number. Here we provide such a code, which can generate accurate hydrogenic continuum wave functions and corresponding Coulomb phase shifts over a wide range of parameters. Without any essential restriction on the angular momentum number, the present code is able to give reliable results over the electron energy range [10,10] eV for radial distances of [10,10] a.u. We also find the present code is very efficient, which should find numerous applications in many fields such as strong field physics.
    Program summary
    Program title: HContinuumGautchi
    Catalogue identifier: AEHD_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHD_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 1233
    No. of bytes in distributed program, including test data, etc.: 7405
    Distribution format: tar.gz
    Programming language: Fortran90 in fixed format
    Computer: AMD Processors
    Operating system: Linux
    RAM: 20 MBytes
    Classification: 2.7, 4.5
    Nature of problem: The accurate computation of atomic continuum wave functions is very important in many research fields such as strong field physics and cold atom physics. Although various algorithms and codes already exist, most of them are applicable and reliable only in a certain range of parameters. We present here an accurate FORTRAN program for

  15. Literature review of United States utilities computer codes for calculating actinide isotope content in irradiated fuel

    SciTech Connect

    Horak, W.C.; Lu, Ming-Shih

    1991-12-01

    This paper reviews the accuracy and precision of methods used by United States electric utilities to determine the actinide isotopic and element content of irradiated fuel. After an extensive literature search, three key code suites were selected for review. Two suites of computer codes, CASMO and ARMP, are used for reactor physics calculations; the ORIGEN code is used for spent fuel calculations. They are also the most widely used codes in the nuclear industry throughout the world. Although none of these codes calculates actinide isotopics as a primary output intended for safeguards applications, accurate calculation of actinide isotopic content is necessary for the codes to fulfill their function.

  16. A fast technique for computing syndromes of BCH and RS codes. [deep space network

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.; Miller, R. L.

    1979-01-01

    A combination of the Chinese Remainder Theorem and Winograd's algorithm is used to compute transforms of odd length over GF(2^m). Such transforms are used to compute the syndromes needed for decoding BCH and RS codes. The present scheme requires substantially fewer multiplications and additions than the conventional method of computing the syndromes directly.
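
    For contrast, the conventional direct method that the abstract compares against is easy to sketch: each syndrome is the received polynomial evaluated at a power of a primitive element alpha, here by Horner's rule over GF(2^4) with primitive polynomial x^4 + x + 1 (the field size and data are illustrative, and this is the baseline method, not the transform scheme of the paper):

      # Field arithmetic in GF(2^4) with primitive polynomial x^4 + x + 1.
      PRIM_POLY = 0b10011

      def gf_mult(a, b):
          # Carry-less multiplication modulo the primitive polynomial.
          r = 0
          while b:
              if b & 1:
                  r ^= a
              a <<= 1
              if a & 0b10000:
                  a ^= PRIM_POLY
              b >>= 1
          return r

      def syndrome(received, alpha_j):
          # Evaluate the received polynomial (coefficients listed from the
          # highest-degree term down) at alpha**j by Horner's rule.
          s = 0
          for coeff in received:
              s = gf_mult(s, alpha_j) ^ coeff
          return s

      # A t-error-correcting RS code needs syndromes S_1 .. S_2t.
      received = [1, 0, 7, 3, 9, 0, 2, 5, 1, 4, 0, 6, 8, 2, 11]  # 15 symbols
      t, alpha = 3, 2                      # alpha = 2 is primitive in this field
      a_j, synd = alpha, []
      for _ in range(2 * t):
          synd.append(syndrome(received, a_j))
          a_j = gf_mult(a_j, alpha)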

  17. Second Generation Integrated Composite Analyzer (ICAN) Computer Code

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.; Ginty, Carol A.; Sanfeliz, Jose G.

    1993-01-01

    This manual updates the original 1986 NASA TP-2515, Integrated Composite Analyzer (ICAN) Users and Programmers Manual. The various enhancements and newly added features are described to enable the user to prepare the appropriate input data to run this updated version of the ICAN code. For reference, the micromechanics equations are provided in an appendix and should be compared to those in the original manual for modifications. A complete output for a sample case is also provided in a separate appendix. The input to the code includes constituent material properties, factors reflecting the fabrication process, and laminate configuration. The code performs micromechanics, macromechanics, and laminate analyses, including the hygrothermal response of polymer-matrix-based fiber composites. The output includes the various ply and composite properties, the composite structural response, and the composite stress analysis results with details on failure. The code is written in FORTRAN 77 and can be used efficiently as a self-contained package (or as a module) in complex structural analysis programs. The input-output format has changed considerably from the original version of ICAN and is described extensively through the use of a sample problem.

  18. RTE: A computer code for Rocket Thermal Evaluation

    NASA Technical Reports Server (NTRS)

    Naraghi, Mohammad H. N.

    1995-01-01

    The numerical model for a rocket thermal analysis code (RTE) is discussed. RTE is a comprehensive code for thermal analysis of regeneratively cooled rocket engines. The input to the code consists of the composition of the fuel/oxidant mixture and flow rates, chamber pressure, coolant temperature and pressure, dimensions of the engine, materials, and the number of nodes in different parts of the engine. The code allows for temperature variation in the axial, radial, and circumferential directions. By implementing an iterative scheme, it provides nodal temperature distribution, rates of heat transfer, and hot-gas and coolant thermal and transport properties. The fuel/oxidant mixture ratio can be varied along the thrust chamber. This feature allows the user to incorporate a non-equilibrium model or an energy release model for the hot-gas side. The user has the option of bypassing the hot-gas-side calculations and directly inputting the gas-side fluxes. This feature is used to link RTE to a boundary layer module for the hot-gas-side heat flux calculations.
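
    The iterative scheme is not specified in the abstract, so the sketch below uses plain Gauss-Seidel relaxation on a small nodal conduction network as a generic stand-in (not RTE's actual solver): each sweep updates every nodal temperature from its neighbors until the energy balance converges.

      import numpy as np

      def solve_temperatures(K, q, T, iters=500):
          # Gauss-Seidel sweeps solving K @ T = q for nodal temperatures,
          # where K is a conductance matrix and q holds nodal heat loads.
          n = len(T)
          for _ in range(iters):
              for i in range(n):
                  off = sum(K[i, j] * T[j] for j in range(n) if j != i)
                  T[i] = (q[i] - off) / K[i, i]
          return T

      K = np.array([[ 2.0, -1.0,  0.0],
                    [-1.0,  2.0, -1.0],
                    [ 0.0, -1.0,  2.0]])   # simple 1-D conduction chain (W/K)
      q = np.array([300.0, 0.0, 150.0])    # nodal heat inputs (W)
      T = solve_temperatures(K, q, np.full(3, 400.0))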

  19. Benchmarking computations using the Monte Carlo code ritracks with data from a tissue equivalent proportional counter

    NASA Astrophysics Data System (ADS)

    Brogan, John

    Understanding the dosimetry for high-energy, heavy ions (HZE), especially within living systems, is complex and requires the use of both experimental and computational methods. Tissue-equivalent proportional counters (TEPCs) have been used experimentally to measure energy deposition in volumes similar in dimension to a mammalian cell. As these experiments begin to include a wider range of ions and energies, considerations of cost, time, and radiation protection become necessary and may limit the extent of these studies. Multiple Monte Carlo computational codes have been created to address this problem and to serve as a means of verifying previous experimental methods. One such code, Relativistic-Ion Tracks (RITRACKS), is currently being developed at the NASA Johnson Space Center. RITRACKS was designed to describe patterns of ionizations responsible for DNA damage on the molecular scale (nanometers). This study extends RITRACKS version 3.07 into the microdosimetric scale (microns) and compares computational results to previous experimental TEPC data. Energy deposition measurements for 1000 MeV/nucleon Fe ions in a 1 micron spherical target were compared. Different settings within RITRACKS were tested to verify their effects on dose to a target and the resulting energy deposition frequency distribution. The results were then compared to the TEPC data.

  20. GAM-HEAT: A computer code to compute heat transfer in complex enclosures. Revision 2

    SciTech Connect

    Cooper, R.E.; Taylor, J.R.

    1992-12-01

    This report discusses the GAM_HEAT code, which was developed for heat transfer analyses associated with postulated Double Ended Guillotine Break Loss Of Coolant Accidents (DEGB LOCA) resulting in a drained reactor vessel. In these analyses the gamma radiation resulting from fission product decay constitutes the primary source of energy as a function of time. This energy is deposited into the various reactor components and is re-radiated as thermal energy. The code accounts for all radiant heat exchanges within and leaving the reactor enclosure. The SRS reactors constitute complex radiant exchange enclosures since there are many assemblies of various types within the primary enclosure and most of the assemblies themselves constitute enclosures. GAM-HEAT accounts for this complexity by processing externally generated view factors and connectivity matrices as discussed below, and also accounts for convective, conductive, and advective heat exchanges. The code is structured such that it is applicable for many situations involving heat exchange between surfaces within a radiatively passive medium.
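
    The radiant-exchange step can be sketched as a gray-diffuse radiosity solve, with the view factor matrix supplied externally exactly as the abstract describes (the enclosure, emissivities, and temperatures below are invented):

      import numpy as np

      SIGMA = 5.670e-8  # Stefan-Boltzmann constant (W/m^2-K^4)

      F = np.array([[0.0, 0.6, 0.4],
                    [0.6, 0.0, 0.4],
                    [0.2, 0.2, 0.6]])      # view factors; rows sum to 1 for a closed enclosure
      eps = np.array([0.8, 0.8, 0.5])      # surface emissivities
      T = np.array([900.0, 600.0, 400.0])  # surface temperatures (K)

      # Radiosity balance J = eps*sigma*T^4 + (1 - eps) * F @ J, solved
      # as a linear system for the three surface radiosities.
      emissive = eps * SIGMA * T**4
      J = np.linalg.solve(np.eye(3) - (1 - eps)[:, None] * F, emissive)

      # Net radiative flux leaving each surface (W/m^2): radiosity minus
      # the irradiation arriving from the rest of the enclosure.
      q_net = J - F @ J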

  1. GAM-HEAT: A computer code to compute heat transfer in complex enclosures

    SciTech Connect

    Cooper, R.E.; Taylor, J.R.

    1992-12-01

    This report discusses the GAM_HEAT code, which was developed for heat transfer analyses associated with postulated Double Ended Guillotine Break Loss Of Coolant Accidents (DEGB LOCA) resulting in a drained reactor vessel. In these analyses the gamma radiation resulting from fission product decay constitutes the primary source of energy as a function of time. This energy is deposited into the various reactor components and is re-radiated as thermal energy. The code accounts for all radiant heat exchanges within and leaving the reactor enclosure. The SRS reactors constitute complex radiant exchange enclosures since there are many assemblies of various types within the primary enclosure and most of the assemblies themselves constitute enclosures. GAM-HEAT accounts for this complexity by processing externally generated view factors and connectivity matrices as discussed below, and also accounts for convective, conductive, and advective heat exchanges. The code is structured such that it is applicable for many situations involving heat exchange between surfaces within a radiatively passive medium.

  2. Computational Participation: Understanding Coding as an Extension of Literacy Instruction

    ERIC Educational Resources Information Center

    Burke, Quinn; O'Byrne, W. Ian; Kafai, Yasmin B.

    2016-01-01

    Understanding the computational concepts on which countless digital applications run offers learners the opportunity to no longer simply read such media but also become more discerning end users and potentially innovative "writers" of new media themselves. To think computationally--to solve problems, to design systems, and to process and…

  3. Energy standards and model codes development, adoption, implementation, and enforcement

    SciTech Connect

    Conover, D.R.

    1994-08-01

    This report provides an overview of the energy standards and model codes process for the voluntary sector within the United States. The report was prepared by Pacific Northwest Laboratory (PNL) for the Building Energy Standards Program and is intended to be used as a primer or reference on this process. Building standards and model codes that address energy have been developed by organizations in the voluntary sector since the early 1970s. These standards and model codes provide minimum energy-efficient design and construction requirements for new buildings and, in some instances, existing buildings. The first step in the process is developing new or revising existing standards or codes. There are two overall differences between standards and codes. Energy standards are developed by a consensus process and are revised as needed. Model codes are revised on a regular annual cycle through a public hearing process. In addition to these overall differences, the specific steps in developing/revising energy standards differ from model codes. These energy standards or model codes are then available for adoption by states and local governments. Typically, energy standards are adopted by or adopted into model codes. Model codes are in turn adopted by states through either legislation or regulation. Enforcement is essential to the implementation of energy standards and model codes. Low-rise residential construction is generally evaluated for compliance at the local level, whereas state agencies tend to be more involved with other types of buildings. Low-rise residential buildings also may be more easily evaluated for compliance because the governing requirements tend to be less complex than for commercial buildings.

  4. New developments of the CARTE thermochemical code: I-parameter optimization

    NASA Astrophysics Data System (ADS)

    Desbiens, N.; Dubois, V.

    We present the calibration of the CARTE thermochemical code, which computes the properties of a wide variety of CHON explosives. We have developed an optimization procedure to obtain an accurate multicomponent EOS (fluid phase and condensed carbon phase). We show here that the results of the CARTE code are in good agreement with the specific data of molecular systems, and we extensively compare our calculations with measured detonation properties for several explosives.

  5. Development of the 3DHZETRN code for space radiation protection

    NASA Astrophysics Data System (ADS)

    Wilson, John; Badavi, Francis; Slaba, Tony; Reddell, Brandon; Bahadori, Amir; Singleterry, Robert

    Space radiation protection requires computationally efficient shield assessment methods that have been verified and validated. The HZETRN code is the engineering design code used for low Earth orbit dosimetric analysis and astronaut record keeping, with end-to-end validation to twenty percent in Space Shuttle and International Space Station operations. HZETRN treated diffusive leakage only at the distal surface, limiting its application to systems with a large radius of curvature. A revision of HZETRN that included forward and backward diffusion allowed neutron leakage to be evaluated at both the near and distal surfaces. That revision provided a deterministic code of high computational efficiency that was in substantial agreement with Monte Carlo (MC) codes in flat plates (at least to the degree that MC codes agree among themselves). In the present paper, the 3DHZETRN formalism capable of evaluation in general geometry is described. Benchmarking against MC codes (Geant4, FLUKA, MCNP6, and PHITS) in simple shapes such as spheres within spherical shells and boxes will help quantify uncertainty. Connection of the 3DHZETRN to general geometry will be discussed.

  6. A new 3-D integral code for computation of accelerator magnets

    SciTech Connect

    Turner, L.R.; Kettunen, L.

    1991-01-01

    For computing accelerator magnets, integral codes have several advantages over finite element codes; far-field boundaries are treated automatically, and computed fields in the bore region satisfy Maxwell's equations exactly. A new integral code employing edge elements rather than nodal elements has overcome the difficulties associated with earlier integral codes. By the use of field integrals (potential differences) as solution variables, the number of unknowns is reduced to one less than the number of nodes. Two examples, a hollow iron sphere and the dipole magnet of the Advanced Photon Source injector synchrotron, show the capability of the code. The CPU time requirements are comparable to those of three-dimensional (3-D) finite-element codes. Experiments show that in practice it can realize much of the potential CPU time saving that parallel processing makes possible. 8 refs., 4 figs., 1 tab.

  7. Fallout computer codes. A bibliographic perspective. Technical report, 1 November 1992-1 September 1993

    SciTech Connect

    Rowland, R.

    1994-07-01

    This report is a summary overview of the basic features and differences among the major radioactive fallout models and computer codes that are either in current use or that form the basis for more contemporary codes and other computational tools. The DELFIC, WSEG-10, KDFOC2, SEER3, and DNAF-1 codes and the EM-1 model are addressed. The review is based only on the information that is available in the general body of literature. This report describes the fallout process, gives an overview of each code/model, summarizes how each code/model handles the basic fallout parameters (initial cloud, particle distributions, fall mechanics, total activity and activity to dose rate conversion, and transport), cites the literature references used, and provides an annotated bibliography for other fallout code literature that was not cited. Keywords: Nuclear weapons, Radiation, Radioactivity, Fallout, DELFIC, WSEG, Nuclear weapon effects, KDFOC, SEER, DNAF, EM-1.

  8. Validation of the NCC Code for Staged Transverse Injection and Computations for a RBCC Combustor

    NASA Technical Reports Server (NTRS)

    Ajmani, Kumud; Liu, Nan-Suey

    2005-01-01

    The NCC code was validated for a case involving staged transverse injection into Mach 2 flow behind a rearward-facing step, using comparisons with experimental data and with solutions from the FPVortex code. The validated code was then used to perform computations to study fuel-air mixing for the combustor of a candidate rocket-based combined-cycle engine geometry. Comparisons with a one-dimensional analysis and a three-dimensional code (VULCAN) were performed to assess the qualitative and quantitative performance of the NCC solver.

  9. Computer codes for the evaluation of thermodynamic and transport properties for equilibrium air to 30000 K

    NASA Technical Reports Server (NTRS)

    Thompson, Richard A.; Lee, Kam-Pui; Gupta, Roop N.

    1991-01-01

    The computer codes developed here provide self-consistent thermodynamic and transport properties for equilibrium air for temperatures from 500 to 30000 K over a pressure range of 10^-4 to 10^-2 atm. These properties are computed through the use of temperature-dependent curve fits for discrete values of pressure. Interpolation is employed for intermediate values of pressure. The curve fits are based on mixture values calculated from an 11-species air model. Individual species properties used in the mixture relations are obtained from a recent study by the present authors. A review and discussion of the sources and accuracy of the curve-fitted data used herein are given in NASA RP 1260.
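
    A sketch of that evaluation strategy, assuming quadratic fits in temperature and linear interpolation in log pressure (the real codes use more elaborate fits and many discrete pressures; every coefficient below is invented):

      import numpy as np

      # Curve-fit coefficients c0 + c1*T + c2*T**2 at two discrete pressures.
      fits = {
          1e-4: np.array([2.5e3, 1.2, -3.0e-5]),
          1e-2: np.array([2.8e3, 1.1, -2.5e-5]),
      }

      def property_at(T, p):
          # Evaluate the fit at each bracketing pressure, then interpolate
          # linearly in log(p) for the intermediate pressure.
          p_lo, p_hi = 1e-4, 1e-2
          f = lambda c: c[0] + c[1] * T + c[2] * T**2
          w = (np.log(p) - np.log(p_lo)) / (np.log(p_hi) - np.log(p_lo))
          return (1 - w) * f(fits[p_lo]) + w * f(fits[p_hi])

      h = property_at(T=12000.0, p=1e-3)   # an enthalpy-like property, say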

  10. Balancing Particle and Mesh Computation in a Particle-In-Cell Code

    SciTech Connect

    Worley, Patrick H; D'Azevedo, Eduardo; Hager, Robert; Ku, Seung-Hoe; Yoon, Eisung; Chang, C. S.

    2016-01-01

    The XGC1 plasma microturbulence particle-in-cell simulation code has both particle-based and mesh-based computational kernels that dominate performance. Both are subject to load imbalances that can degrade performance and that evolve during a simulation. Each can be addressed adequately on its own, but optimizing for just one can introduce significant load imbalances in the other, degrading overall performance. A technique based on Golden Section Search has been developed that minimizes wallclock time given prior information on wallclock time, current particle distribution, and mesh cost per cell, and that adapts to evolving load imbalance in both particle and mesh work. In problems of interest this doubled the performance of full-system runs on the XK7 at the Oak Ridge Leadership Computing Facility compared with balancing the load for only one of the kernels.
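
    A sketch of the underlying optimization, assuming the measured step wallclock time is a unimodal function of a single balance fraction x (the weight given to particle work when partitioning): golden section search brackets the minimizer without derivatives.

      import math

      def golden_section_minimize(cost, lo=0.0, hi=1.0, tol=1e-3):
          # Minimize a unimodal cost(x) on [lo, hi] by golden section search.
          invphi = (math.sqrt(5.0) - 1.0) / 2.0
          a, b = lo, hi
          c, d = b - invphi * (b - a), a + invphi * (b - a)
          fc, fd = cost(c), cost(d)
          while b - a > tol:
              if fc < fd:                  # minimum lies in [a, d]
                  b, d, fd = d, c, fc
                  c = b - invphi * (b - a)
                  fc = cost(c)
              else:                        # minimum lies in [c, b]
                  a, c, fc = c, d, fd
                  d = a + invphi * (b - a)
                  fd = cost(d)
          return 0.5 * (a + b)

      # Made-up convex stand-in for the measured wallclock time of one
      # step as a function of the particle/mesh balance fraction.
      wallclock = lambda x: max(1.0 + 2.0 * x, 3.0 - 1.5 * x)
      best_fraction = golden_section_minimize(wallclock)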

  11. Digital Poetry: A Narrow Relation between Poetics and the Codes of the Computational Logic

    NASA Astrophysics Data System (ADS)

    Laurentiz, Silvia

    The project "Percorrendo Escrituras" (Walking Through Writings Project) has been developed at the ECA-USP Fine Arts Department. In summary, it intends to study different structures of digital information that share the same universe and generate a new aesthetic condition. The aim is to explore the expressive possibilities of the computer through its algorithmic functions and other specific properties. It is a practical, theoretical, and interdisciplinary project in which the study of evolutionary programming languages, logic, and mathematics leads to poetic experimentation. The focus of this research is digital poetry, which begins with the poetics of permutational combination and culminates in dynamic, complex systems that are autonomous, multi-user, and interactive, built through agent generation, derivation, filtering, and emergent patterns. This lecture will present artworks that use mechanisms introduced by cybernetics and the notion of system in digital poetry, demonstrating the narrow relationship between poetics and the codes of computational logic.

  12. Computer Code System to Assess Skin Dose from Skin Contamination

    2011-07-10

    Version 00 of the VARSKIN 4 code is designed to operate in both Windows and Macintosh environments and is expected to be significantly easier to learn and use than its predecessors. PC and Mac users will unzip different executable files, but the functionality is identical. Five predefined source configurations are available in VARSKIN 4 to allow simulations of point, disk, cylinder, sphere, and slab sources.

  13. GASFLOW: A Computational Fluid Dynamics Code for Gases, Aerosols, and Combustion, Volume 3: Assessment Manual

    SciTech Connect

    Müller, C.; Hughes, E. D.; Niederauer, G. F.; Wilkening, H.; Travis, J. R.; Spore, J. W.; Royl, P.; Baumann, W.

    1998-10-01

    Los Alamos National Laboratory (LANL) and Forschungszentrum Karlsruhe (FzK) are developing GASFLOW, a three-dimensional (3D) fluid dynamics field code as a best-estimate tool to characterize local phenomena within a flow field. Examples of 3D phenomena include circulation patterns; flow stratification; hydrogen distribution mixing and stratification; combustion and flame propagation; effects of noncondensable gas distribution on local condensation and evaporation; and aerosol entrainment, transport, and deposition. An analysis with GASFLOW will result in a prediction of the gas composition and discrete particle distribution in space and time throughout the facility and the resulting pressure and temperature loadings on the walls and internal structures with or without combustion. A major application of GASFLOW is for predicting the transport, mixing, and combustion of hydrogen and other gases in nuclear reactor containment and other facilities. It has been applied to situations involving transporting and distributing combustible gas mixtures. It has been used to study gas dynamic behavior in low-speed, buoyancy-driven flows, as well as sonic flows or diffusion dominated flows; and during chemically reacting flows, including deflagrations. The effects of controlling such mixtures by safety systems can be analyzed. The code version described in this manual is designated GASFLOW 2.1, which combines previous versions of the United States Nuclear Regulatory Commission code HMS (for Hydrogen Mixing Studies) and the Department of Energy and FzK versions of GASFLOW. The code was written in standard Fortran 90. This manual comprises three volumes. Volume I describes the governing physical equations and computational model. Volume II describes how to use the code to set up a model geometry, specify gas species and material properties, define initial and boundary conditions, and specify different outputs, especially graphical displays. Sample problems are included. Volume

  14. GASFLOW: A Computational Fluid Dynamics Code for Gases, Aerosols, and Combustion, Volume 2: User's Manual

    SciTech Connect

    Nichols, B. D.; Mueller, C.; Necker, G. A.; Travis, J. R.; Spore, J. W.; Lam, K. L.; Royl, P.; Wilson, T. L.

    1998-10-01

    Los Alamos National Laboratory (LANL) and Forschungszentrum Karlsruhe (FzK) are developing GASFLOW, a three-dimensional (3D) fluid dynamics field code as a best-estimate tool to characterize local phenomena within a flow field. Examples of 3D phenomena include circulation patterns; flow stratification; hydrogen distribution mixing and stratification; combustion and flame propagation; effects of noncondensable gas distribution on local condensation and evaporation; and aerosol entrainment, transport, and deposition. An analysis with GASFLOW will result in a prediction of the gas composition and discrete particle distribution in space and time throughout the facility and the resulting pressure and temperature loadings on the walls and internal structures with or without combustion. A major application of GASFLOW is for predicting the transport, mixing, and combustion of hydrogen and other gases in nuclear reactor containment and other facilities. It has been applied to situations involving transporting and distributing combustible gas mixtures. It has been used to study gas dynamic behavior in low-speed, buoyancy-driven flows, as well as sonic flows or diffusion dominated flows; and during chemically reacting flows, including deflagrations. The effects of controlling such mixtures by safety systems can be analyzed. The code version described in this manual is designated GASFLOW 2.1, which combines previous versions of the United States Nuclear Regulatory Commission code HMS (for Hydrogen Mixing Studies) and the Department of Energy and FzK versions of GASFLOW. The code was written in standard Fortran 90. This manual comprises three volumes. Volume I describes the governing physical equations and computational model. Volume II describes how to use the code to set up a model geometry, specify gas species and material properties, define initial and boundary conditions, and specify different outputs, especially graphical displays. Sample problems are included. Volume III

  15. Selection of a computer code for Hanford low-level waste engineered-system performance assessment. Revision 1

    SciTech Connect

    McGrail, B.P.; Bacon, D.H.

    1998-02-01

    Planned performance assessments for the proposed disposal of low-activity waste (LAW) glass produced from remediation of wastes stored in underground tanks at Hanford, Washington will require calculations of radionuclide release rates from the subsurface disposal facility. These calculations will be done with the aid of computer codes. The available computer codes with suitable capabilities at the time Revision 0 of this document was prepared were ranked in terms of the feature sets implemented in the code that match a set of physical, chemical, numerical, and functional capabilities needed to assess release rates from the engineered system. The needed capabilities were identified from an analysis of the important physical and chemical processes expected to affect LAW glass corrosion and the mobility of radionuclides. This analysis was repeated in this report but updated to include additional processes that have been found to be important since Revision 0 was issued and to include additional codes that have been released. The highest ranked computer code was found to be the STORM code developed at PNNL for the US Department of Energy for evaluation of arid land disposal sites.

  16. GATO Code Modification to Compute Plasma Response to External Perturbations

    NASA Astrophysics Data System (ADS)

    Turnbull, A. D.; Chu, M. S.; Ng, E.; Li, X. S.; James, A.

    2006-10-01

    It has become increasingly clear that the plasma response to an external nonaxisymmetric magnetic perturbation cannot be neglected in many situations of interest. This response can be described as a linear combination of the eigenmodes of the ideal MHD operator. The eigenmodes of the system can be obtained numerically with the GATO ideal MHD stability code, which has been modified for this purpose. A key requirement is the removal of inadmissible continuum modes. For Finite Hybrid Element codes such as GATO, a prerequisite for this is their numerical restabilization by addition of small numerical terms to δW to cancel the analytic numerical destabilization. In addition, robustness of the code was improved and the solution method sped up by use of the SuperLU package to facilitate calculation of the full set of eigenmodes in a reasonable time. To treat resonant plasma responses, the finite element basis has been extended to include eigenfunctions with finite jumps at rational surfaces. Some preliminary numerical results for DIII-D equilibria will be given.

  17. Development of tokamak reactor system analysis code NEW-TORSAC

    NASA Astrophysics Data System (ADS)

    Kasai, Masao; Ida, Toshio; Nishikawa, Masana; Kameari, Akihisa; Nishio, Satoshi; Tone, Tatsuzo

    1987-07-01

    A systems analysis code named NEW-TORSAC (TOkamak Reactor Systems Analysis Code) has been developed by modifying the TORSAC code previously developed by the authors. NEW-TORSAC is applicable to tokamak reactor designs and evaluations ranging from experimental machines to commercial reactor plants. It has functions to design tokamaks automatically, from plasma parameter setting to determining the configurations of reactor equipment, and to calculate the main characteristic parameters of auxiliary systems and the capital costs. When analyzing tokamak reactor plants, the code can calculate busbar energy costs. In addition to numerical output, some outputs of this code, such as the reactor configuration, plasma equilibrium, and electromagnetic forces, are displayed graphically. The code has been successfully applied to scoping studies of next-generation machines and commercial reactor plants.

  18. The Unified English Braille Code: Examination by Science, Mathematics, and Computer Science Technical Expert Braille Readers

    ERIC Educational Resources Information Center

    Holbrook, M. Cay; MacCuspie, P. Ann

    2010-01-01

    Braille-reading mathematicians, scientists, and computer scientists were asked to examine the usability of the Unified English Braille Code (UEB) for technical materials. They had little knowledge of the code prior to the study. The research included two reading tasks, a short tutorial about UEB, and a focus group. The results indicated that the…

  19. A Coding System for Qualitative Studies of the Information-Seeking Process in Computer Science Research

    ERIC Educational Resources Information Center

    Moral, Cristian; de Antonio, Angelica; Ferre, Xavier; Lara, Graciela

    2015-01-01

    Introduction: In this article we propose a qualitative analysis tool--a coding system--that can support the formalisation of the information-seeking process in a specific field: research in computer science. Method: In order to elaborate the coding system, we have conducted a set of qualitative studies, more specifically a focus group and some…

  20. Proposed standards for peer-reviewed publication of computer code

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Computer simulation models are mathematical abstractions of physical systems. In the area of natural resources and agriculture, these physical systems encompass selected interacting processes in plants, soils, animals, or watersheds. These models are scientific products and have become important i...

  1. An overview of the activities of the OECD/NEA Task Force on adapting computer codes in nuclear applications to parallel architectures

    SciTech Connect

    Kirk, B.L.; Sartori, E.

    1997-06-01

    Subsequent to the introduction of High Performance Computing in the developed countries, the Organization for Economic Cooperation and Development/Nuclear Energy Agency (OECD/NEA) created the Task Force on Adapting Computer Codes in Nuclear Applications to Parallel Architectures (under the guidance of the Nuclear Science Committee's Working Party on Advanced Computing) to study the growth area in supercomputing and its applicability to the nuclear community's computer codes. The result has been four years of investigation for the Task Force in different subject fields - deterministic and Monte Carlo radiation transport, computational mechanics and fluid dynamics, nuclear safety, atmospheric models and waste management.

  3. Landsat, computers, and development projects.

    PubMed

    Adrien, P M; Baumgardner, M F

    1977-11-01

    Data provided by earth-orbiting satellites and analyzed through specific computer techniques are rapidly providing policy-makers around the world with new information on the location and extent of their countries' renewable and nonrenewable resources. Development projects utilizing remote sensing technology are being supported, for example, by the Inter-American Development Bank, the World Bank, and other international funding agencies. The Inter-American Development Bank is financing a natural resources inventory of five countries in Central America, and this will require the application of remote sensing in the analysis of 33 Landsat images covering the area. Although the Landsat program remains experimental in nature, studies pertaining to its follow-on aspects will ensure continuation of the program so that developed and developing countries will be able to maintain better control of the management of their natural resources. PMID:17842110

  4. Benchmark testing and independent verification of the VS2DT computer code

    SciTech Connect

    McCord, J.T.; Goodrich, M.T.

    1994-11-01

    The finite difference flow and transport simulator VS2DT was benchmark tested against several other codes which solve the same equations (Richards equation for flow and the Advection-Dispersion equation for transport). The benchmark problems investigated transient two-dimensional flow in a heterogeneous soil profile with a localized water source at the ground surface. The VS2DT code performed as well as or better than all other codes when considering mass balance characteristics and computational speed. It was also rated highly relative to the other codes with regard to ease-of-use. Following the benchmark study, the code was verified against two analytical solutions, one for two-dimensional flow and one for two-dimensional transport. These independent verifications show reasonable agreement with the analytical solutions, and complement the one-dimensional verification problems published in the code's original documentation.

  5. Characterizing the Properties of a Woven SiC/SiC Composite Using W-CEMCAN Computer Code

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.; Mital, Subodh K.; DiCarlo, James A.

    1999-01-01

    A micromechanics-based computer code to predict the thermal and mechanical properties of woven ceramic matrix composites (CMC) is developed. This computer code, W-CEMCAN (Woven CEramic Matrix Composites ANalyzer), predicts the properties of two-dimensional woven CMC at any temperature and takes into account various constituent geometries and volume fractions. This computer code is used to predict the thermal and mechanical properties of an advanced CMC composed of 0/90 five-harness (5 HS) Sylramic fiber which had been chemically vapor infiltrated (CVI) with boron nitride (BN) and SiC interphase coatings and melt-infiltrated (MI) with SiC. The predictions, based on the bulk constituent properties from the literature, are compared with measured experimental data. Based on the comparison, improved or calibrated properties for the constituent materials are then developed for use by material developers/designers. The computer code is then used to predict the properties of a composite with the same constituents but with different fiber volume fractions. The predictions are compared with measured data and good agreement is achieved.
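
    W-CEMCAN's actual micromechanics are more elaborate, but the flavor of a constituent-based property estimate can be sketched with the elementary rule-of-mixtures bounds. All moduli and volume fractions below are illustrative placeholders, not calibrated Sylramic/BN/SiC data.

    # Rule-of-mixtures bounds on an effective composite modulus from
    # constituent moduli E and volume fractions v (illustrative values only).

    def voigt(E, v):
        """Parallel (Voigt) estimate: upper bound on the effective modulus."""
        return sum(Ei * vi for Ei, vi in zip(E, v))

    def reuss(E, v):
        """Series (Reuss) estimate: lower bound on the effective modulus."""
        return 1.0 / sum(vi / Ei for Ei, vi in zip(E, v))

    E = [400.0, 30.0, 350.0]     # fiber, interphase coating, matrix moduli (GPa)
    v = [0.36, 0.04, 0.60]       # volume fractions, summing to 1

    print(f"upper (Voigt) estimate: {voigt(E, v):6.1f} GPa")
    print(f"lower (Reuss) estimate: {reuss(E, v):6.1f} GPa")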

  6. ATHENA/MOD1 code development requirements document

    SciTech Connect

    Carlson, K.E.

    1991-09-01

    This document states the requirements and capabilities of the ATHENA code for the Fusion Safety Program. The history of the development of ATHENA is given, along with requirements for the Fusion Safety Program, manuals, quality assessment, documentation, and support.

  7. User's manual for airfoil flow field computer code SRAIR

    NASA Technical Reports Server (NTRS)

    Shamroth, S. J.

    1985-01-01

    A two-dimensional unsteady Navier-Stokes calculation procedure with specific application to the isolated airfoil problem is presented. The procedure solves the full, ensemble-averaged Navier-Stokes equations with turbulence represented by a mixing length model. The equations are solved in a general nonorthogonal coordinate system which is obtained via an external source. Specific Cartesian locations of grid points are required as input for this code. The method of solution is based upon the Briley-McDonald LBI procedure. The manual discusses the analysis, the flow of the program, the control stream, and input and output.
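
    The mixing-length closure mentioned above can be illustrated with a minimal sketch of Prandtl's model, nu_t = l^2 |du/dy| with l = kappa*y near a wall. The velocity profile and constants below are illustrative, not SRAIR's actual formulation.

    import numpy as np

    kappa = 0.41                       # von Karman constant
    y = np.linspace(1e-4, 0.05, 200)   # wall distance (m)
    u = 30.0 * (y / y[-1]) ** (1 / 7)  # hypothetical 1/7-power-law profile (m/s)

    dudy = np.gradient(u, y)           # velocity gradient
    l_mix = kappa * y                  # mixing length near the wall
    nu_t = l_mix**2 * np.abs(dudy)     # eddy viscosity (m^2/s)

    print(f"peak eddy viscosity: {nu_t.max():.3e} m^2/s")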

  8. Theoretical developments in evolutionary computation

    NASA Astrophysics Data System (ADS)

    Fogel, David B.

    1999-11-01

    Recent developments in the theory of evolutionary computation offer evidence and proof that overturns several conventionally held beliefs. In particular, the no free lunch theorem and other related theorems show that there can be no best evolutionary algorithm, and that no particular variation operator or selection mechanism provides a general advantage over another choice. Furthermore, the fundamental nature of the notion of schema processing is called into question by recent theory that shows that the schema theorem does not hold when schema fitness is stochastic. Moreover, the analysis that underlies schema theory, namely the k-armed bandit analysis, does not generate a sampling plan that yields an optimal allocation of trials, as has been suggested in the literature for almost 25 years. The importance of these new findings is discussed in the context of future progress in the field of evolutionary computation.

  9. GASFLOW: A Computational Fluid Dynamics Code for Gases, Aerosols, and Combustion, Volume 1: Theory and Computational Model

    SciTech Connect

    Nichols, B.D.; Mueller, C.; Necker, G.A.; Travis, J.R.; Spore, J.W.; Lam, K.L.; Royl, P.; Redlinger, R.; Wilson, T.L.

    1998-10-01

    Los Alamos National Laboratory (LANL) and Forschungszentrum Karlsruhe (FzK) are developing GASFLOW, a three-dimensional (3D) fluid dynamics field code as a best-estimate tool to characterize local phenomena within a flow field. Examples of 3D phenomena include circulation patterns; flow stratification; hydrogen distribution mixing and stratification; combustion and flame propagation; effects of noncondensable gas distribution on local condensation and evaporation; and aerosol entrainment, transport, and deposition. An analysis with GASFLOW will result in a prediction of the gas composition and discrete particle distribution in space and time throughout the facility and the resulting pressure and temperature loadings on the walls and internal structures with or without combustion. A major application of GASFLOW is for predicting the transport, mixing, and combustion of hydrogen and other gases in nuclear reactor containments and other facilities. It has been applied to situations involving transporting and distributing combustible gas mixtures. It has been used to study gas dynamic behavior (1) in low-speed, buoyancy-driven flows, as well as sonic flows or diffusion dominated flows; and (2) during chemically reacting flows, including deflagrations. The effects of controlling such mixtures by safety systems can be analyzed. The code version described in this manual is designated GASFLOW 2.1, which combines previous versions of the United States Nuclear Regulatory Commission code HMS (for Hydrogen Mixing Studies) and the Department of Energy and FzK versions of GASFLOW. The code was written in standard Fortran 90. This manual comprises three volumes. Volume I describes the governing physical equations and computational model. Volume II describes how to use the code to set up a model geometry, specify gas species and material properties, define initial and boundary conditions, and specify different outputs, especially graphical displays. Sample problems are included.

  10. Computation of a Canadian SCWR unit cell with deterministic and Monte Carlo codes

    SciTech Connect

    Harrisson, G.; Marleau, G.

    2012-07-01

    The Canadian SCWR has the potential to achieve the goals that generation IV nuclear reactors must meet. As part of the optimization process for this design concept, lattice cell calculations are routinely performed using deterministic codes. In this study, the first step (self-shielding treatment) of the computation scheme developed with the deterministic code DRAGON for the Canadian SCWR has been validated. Some options available in the module responsible for the resonance self-shielding calculation in DRAGON 3.06 and different microscopic cross-section libraries based on the ENDF/B-VII.0 evaluated nuclear data file have been tested and compared to a reference calculation performed with the Monte Carlo code SERPENT under the same conditions. Compared to SERPENT, DRAGON underestimates the infinite multiplication factor in all cases. In general, the original Stammler model with the Livolant-Jeanpierre approximations is the most appropriate self-shielding option for this case study. In addition, the 89-group WIMS-AECL library for slightly enriched uranium and the 172-group WLUP library for a mixture of plutonium and thorium give results most consistent with those of SERPENT. (authors)

  11. A Computer Oriented Scheme for Coding Chemicals in the Field of Biomedicine.

    ERIC Educational Resources Information Center

    Bobka, Marilyn E.; Subramaniam, J.B.

    The chemical coding scheme of the Medical Coding Scheme (MCS), developed for use in the Comparative Systems Laboratory (CSL), is outlined and evaluated in this report. The chemical coding scheme provides a classification scheme and encoding method for drugs and chemical terms. Using the scheme complicated chemical structures may be expressed…

  12. Computer code for analysing three-dimensional viscous flows in impeller passages and other duct geometries

    NASA Technical Reports Server (NTRS)

    Tatchell, D. G.

    1979-01-01

    A code, CATHY3/M, was prepared and demonstrated by application to a sample case. The preparation is reviewed, a summary of the capabilities and main features of the code is given, and the sample case results are discussed. Recommendations for future use and development of the code are provided.

  13. User's manual to the ICRP Code: a series of computer programs to perform dosimetric calculations for the ICRP Committee 2 report

    SciTech Connect

    Watson, S.B.; Ford, M.R.

    1980-02-01

    A computer code has been developed that implements the recommendations of ICRP Committee 2 for computing limits for occupational exposure to radionuclides. The purpose of this report is to describe the various modules of the computer code and to present a description of the methods and criteria used to compute the tables published in the Committee 2 report. The computer code contains three modules: (1) one computes specific effective energy; (2) one calculates cumulated activity; and (3) one computes dose and the series of ICRP tables. The description of the first two modules emphasizes the new ICRP Committee 2 recommendations for computing specific effective energy and cumulated activity. For the third module, the complex criteria are discussed for calculating the tables of committed dose equivalent, weighted committed dose equivalents, annual limits on intake, and derived air concentrations.
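
    A minimal sketch of how the three modules combine under the ICRP 30 methodology: the committed dose equivalent to a target organ T is H50(T) = 1.6e-10 * sum over source organs S of U_S * SEE(T <- S) sievert, with cumulated activities U_S (transformations) and specific effective energies SEE (MeV per gram per transformation). The numbers below are illustrative, not values from the ICRP tables.

    # Committed dose equivalent to a target organ from cumulated activities
    # and specific effective energies; all values are made-up placeholders.

    U = {"lung": 4.2e13, "liver": 1.1e12}            # cumulated activity U_S
    SEE_to_lung = {"lung": 2.5e-4, "liver": 3.0e-7}  # SEE(lung <- S), MeV/g per transformation

    H50_lung = 1.6e-10 * sum(U[S] * SEE_to_lung[S] for S in U)
    print(f"committed dose equivalent to lung: {H50_lung:.2e} Sv")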

  14. Additional extensions to the NASCAP computer code, volume 2

    NASA Technical Reports Server (NTRS)

    Stannard, P. R.; Katz, I.; Mandell, M. J.

    1982-01-01

    Particular attention is given to comparison of the actual response of the SCATHA (Spacecraft Charging AT High Altitudes) P78-2 satellite with theoretical (NASCAP) predictions. Extensive comparisons for a variety of environmental conditions confirm the validity of the NASCAP model. A summary of the capabilities and range of validity of NASCAP is presented, with extensive reference to previously published applications. It is shown that NASCAP is capable of providing quantitatively accurate results when the object and environment are adequately represented and fall within the range of conditions for which NASCAP was intended. Three-dimensional electric field effects play an important role in determining the potential of dielectric surfaces and electrically isolated conducting surfaces, particularly in the presence of artificially imposed high voltages. A theory for such phenomena is presented and applied to the active control experiments carried out on SCATHA, as well as to other space and laboratory experiments. Finally, some preliminary work toward modeling large spacecraft in polar Earth orbit is presented. An initial physical model is presented, including charge emission. A simple code based upon the model is described along with code test results.

  15. Verification and Validation: High Charge and Energy (HZE) Transport Codes and Future Development

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Tripathi, Ram K.; Mertens, Christopher J.; Blattnig, Steve R.; Clowdsley, Martha S.; Cucinotta, Francis A.; Tweed, John; Heinbockel, John H.; Walker, Steven A.; Nealy, John E.

    2005-01-01

    In the present paper, we give the formalism for further developing a fully three-dimensional HZETRN code using marching procedures; the development of a new Green's function code is also discussed. The final Green's function code is capable of validation not only in the space environment but also in ground-based laboratories with directed beams of ions of specific energy, characterized with detailed diagnostic particle spectrometer devices. Special emphasis is given to verification of the computational procedures and validation of the resultant computational model using laboratory and spaceflight measurements. Due to historical requirements, two parallel development paths for computational model implementation, using marching procedures and Green's function techniques, are followed. A new version of the HZETRN code capable of simulating HZE ions with either laboratory or space boundary conditions is under development. Validation of computational models at this time is particularly important for President Bush's Initiative to develop infrastructure for human exploration, with first target demonstration of the Crew Exploration Vehicle (CEV) in low Earth orbit in 2008.
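
    The marching procedures referred to above can be illustrated with a toy one-species, straight-ahead transport step, d(phi)/dx = -sigma*phi + q, advanced through a slab in depth. Real HZETRN couples many ion species and energies; every parameter here is made up, and the analytic solution is included only as a check.

    import math

    sigma, q, phi0 = 0.05, 0.002, 1.0     # removal cross section (1/cm), source, boundary flux
    dx, nsteps = 0.5, 200                 # marching step (cm) and number of steps

    phi = phi0
    for _ in range(nsteps):
        phi += dx * (-sigma * phi + q)    # explicit marching step in depth

    x = dx * nsteps
    exact = q / sigma + (phi0 - q / sigma) * math.exp(-sigma * x)
    print(f"marched flux: {phi:.4f}   analytic: {exact:.4f}")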

  16. TPASS: a gamma-ray spectrum analysis and isotope identification computer code

    SciTech Connect

    Dickens, J.K.

    1981-03-01

    The gamma-ray spectral data-reduction and analysis computer code TPASS is described. This computer code is used to analyze complex Ge(Li) gamma-ray spectra to obtain peak areas corrected for detector efficiencies, from which are determined gamma-ray yields. These yields are compared with an isotope gamma-ray data file to determine the contributions to the observed spectrum from decay of specific radionuclides. A complete FORTRAN listing of the code and a complex test case are given.

  17. Scalable Computational Chemistry: New Developments and Applications

    SciTech Connect

    Yuri Alexeev

    2002-12-31

    The computational part of the thesis is the investigation of titanium(II) chloride as a potential catalyst for the bis-silylation reaction of ethylene with hexachlorodisilane at different levels of theory. Bis-silylation is an important reaction for producing bis(silyl) compounds and new C-Si bonds, which can serve as monomers for silicon-containing polymers and silicon carbides. Ab initio calculations on the steps involved in a proposed mechanism are presented. This choice of reactants allows the reaction to be studied at reliable levels of theory without compromising accuracy. The calculations indicate that this is a highly exothermic, barrierless reaction. The TiCl2 catalyst removes a 50 kcal/mol activation energy barrier required for the reaction without the catalyst. The first step is interaction of TiCl2 with ethylene to form an intermediate that is 60 kcal/mol below the energy of the reactants. This is the driving force for the entire reaction. Dynamic correlation plays a significant role because RHF calculations indicate that the net barrier for the catalyzed reaction is 50 kcal/mol. It is concluded that divalent Ti has the potential to become an important industrial catalyst for silylation reactions. In the programming part of the thesis, the parallelization of different quantum chemistry methods is presented. Parallelization is becoming an important aspect of quantum chemistry code development. Two trends contribute to this: the desire to study large chemical systems and the desire to employ highly correlated methods, which are usually computationally and memory intensive. In the distributed-data algorithms presented, computation is parallelized and the largest arrays are evenly distributed among CPUs. First, the parallelization of the Hartree-Fock self-consistent field (SCF) method is considered. The SCF method is the most common starting point for more accurate calculations. The Fock build (a substep of SCF) from AO integrals is also

  18. NASTRAN as a resource in code development

    NASA Technical Reports Server (NTRS)

    Stanton, E. L.; Crain, L. M.; Neu, T. F.

    1975-01-01

    A case history is presented in which the NASTRAN system provided both guidelines and working software for use in the development of a discrete element program, PATCHES-III. To avoid duplication and to take advantage of widespread user familiarity with NASTRAN, the PATCHES-III system uses NASTRAN bulk data syntax, NASTRAN matrix utilities, and the NASTRAN linkage editor. Problems in developing the program are discussed, along with details on the architecture of the PATCHES-III parametric cubic modeling system. The system includes model construction procedures, checkpoint/restart strategies, and other features.

  19. Hyperbolic/parabolic development for the GIM-STAR code. [flow fields in supersonic inlets

    NASA Technical Reports Server (NTRS)

    Spradley, L. W.; Stalnaker, J. F.; Ratliff, A. W.

    1980-01-01

    Flow fields in supersonic inlet configurations were computed using the elliptic GIM code on the STAR computer. Spillage flow under the lower cowl was calculated to be 33% of the incoming stream. The shock/boundary-layer interaction on the upper propulsive surface was computed, including separation. All shocks produced by the flow system were captured. Linearized block implicit (LBI) schemes were examined to determine their application to the GIM code. Pure explicit methods have stability limitations and fully implicit schemes are inherently inefficient; however, LBI schemes show promise as an effective compromise. A quasiparabolic version of the GIM code was developed using classical parabolized Navier-Stokes methods combined with quasitime relaxation. This scheme is referred to as quasiparabolic, although it applies equally well to hyperbolic supersonic inviscid flows. Second-order windward differences are used in the marching coordinate, and either explicit or linear block implicit time relaxation can be incorporated.

  20. TVENT1: a computer code for analyzing tornado-induced flow in ventilation systems

    SciTech Connect

    Andrae, R.W.; Tang, P.K.; Gregory, W.S.

    1983-07-01

    TVENT1 is a new version of the TVENT computer code, which was designed to predict the flows and pressures in a ventilation system subjected to a tornado. TVENT1 is essentially the same code but has added features for turning blowers off and on, changing blower speeds, and changing the resistance of dampers and filters. These features make it possible to depict a sequence of events during a single run. Other features also have been added to make the code more versatile. Example problems are included to demonstrate the code's applications.

  1. Practical methods to improve the development of computational software

    SciTech Connect

    Osborne, A. G.; Harding, D. W.; Deinert, M. R.

    2013-07-01

    The use of computation has become ubiquitous in science and engineering. As the complexity of computer codes has increased, so has the need for robust methods to minimize errors. Past work has shown that the number of functional errors is related to the number of commands that a code executes. Since the late 1960s, major participants in the field of computation have encouraged the development of best practices for programming to help reduce coder-induced error, and this has led to the emergence of 'software engineering' as a field of study. Best practices for coding and software production have now evolved and become common in the development of commercial software. These same techniques, however, are largely absent from the development of computational codes by research groups. Many of the best-practice techniques from the professional software community would be easy for research groups in nuclear science and engineering to adopt. This paper outlines the history of software engineering, as well as issues in modern scientific computation, and recommends practices that should be adopted by individual scientific programmers and university research groups. (authors)
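
    One practice that transfers directly from commercial software development to research codes is automated unit testing against known answers. A minimal pytest-style sketch follows; the decay function and tolerances are hypothetical examples chosen for illustration, not taken from the paper.

    import math

    def decay_activity(a0, half_life, t):
        """Activity after time t under a simple exponential decay law."""
        return a0 * math.exp(-math.log(2.0) * t / half_life)

    def test_one_half_life():
        # After exactly one half-life the activity must halve.
        assert math.isclose(decay_activity(100.0, 8.02, 8.02), 50.0, rel_tol=1e-12)

    def test_activity_never_negative():
        assert decay_activity(1.0, 5.0, 1000.0) >= 0.0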

  2. abcpmc: Approximate Bayesian Computation for Population Monte-Carlo code

    NASA Astrophysics Data System (ADS)

    Akeret, Joel

    2015-04-01

    abcpmc is a Python Approximate Bayesian Computing (ABC) Population Monte Carlo (PMC) implementation based on Sequential Monte Carlo (SMC) with particle filtering techniques. It is extendable with k-nearest-neighbour (KNN) or optimal local covariance matrix (OLCM) perturbation kernels and has built-in support for massively parallelized sampling on a cluster using MPI.
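
    As a schematic of the ABC idea underlying such samplers, the sketch below runs a plain rejection loop with a shrinking tolerance schedule. This is not the abcpmc API: a true PMC sampler would also re-perturb the surviving particles with kernels such as the KNN or OLCM ones mentioned above, which is omitted here.

    import numpy as np

    rng = np.random.default_rng(1)
    data = rng.normal(2.0, 1.0, 500)              # "observed" data
    y_stat = data.mean()                          # summary statistic

    def simulate(theta):
        """Forward model: simulate data under theta, return its summary."""
        return rng.normal(theta, 1.0, 500).mean()

    theta = rng.uniform(-5, 5, 2000)              # draws from a flat prior
    for eps in [1.0, 0.3, 0.1]:                   # decreasing tolerance schedule
        # keep parameters whose simulated summary lies within eps of the data
        theta = np.array([t for t in theta
                          if abs(simulate(t) - y_stat) < eps])
        print(f"eps={eps:4.1f}: {theta.size:4d} accepted, "
              f"posterior mean ~ {theta.mean():.2f}")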

  3. A Line Source Shielding Code for Personal Computers.

    1990-12-22

    Version 00 LINEDOSE computes the gamma-ray dose from a pipe source modeled as a line. The pipe is assumed to be iron and has a concrete shield of arbitrary thickness. The calculation is made for eight source energies between 0.1 and 3.5 MeV.
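
    A minimal sketch of the kind of integral such a code evaluates: the uncollided flux at a detector a perpendicular distance d from a line source behind a slab shield of thickness t_c, where a ray reaching the detector at slant range r traverses a slab path of t_c*r/d. Buildup is neglected, and the source strength and attenuation coefficient are illustrative values, not LINEDOSE's data.

    import numpy as np

    S_L = 1.0e6        # line source strength (photons/cm/s), illustrative
    d = 100.0          # perpendicular source-to-detector distance (cm)
    t_c = 30.0         # concrete slab thickness (cm)
    mu = 0.15          # attenuation coefficient of concrete (1/cm), illustrative

    z = np.linspace(-1000.0, 1000.0, 20001)       # truncate the infinite line
    r = np.hypot(d, z)                            # slant range to each source element
    f = np.exp(-mu * t_c * r / d) / (4.0 * np.pi * r**2)

    # trapezoidal quadrature of the line-source integral
    phi = S_L * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z))
    print(f"uncollided flux at detector: {phi:.3e} photons/cm^2/s")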

  4. Overview of NASA Multi-Dimensional Stirling Convertor Code Development and Validation Effort

    NASA Astrophysics Data System (ADS)

    Tew, Roy C.; Cairelli, James E.; Ibrahim, Mounir B.; Simon, Terrence W.; Gedeon, David

    2003-01-01

    A NASA grant has been awarded to Cleveland State University (CSU) to develop a multi-dimensional (multi-D) Stirling computer code with the goals of improving loss predictions and identifying component areas for improvements. The University of Minnesota (UMN) and Gedeon Associates are teamed with CSU. Development of test rigs at UMN and CSU and validation of the code against test data are part of the effort. The one-dimensional (1-D) Stirling codes used for design and performance prediction do not rigorously model regions of the working space where abrupt changes in flow area occur (such as manifolds and other transitions between components). Certain hardware experiences have demonstrated large performance gains by varying manifolds and heat exchanger designs to improve flow distributions in the heat exchangers. 1-D codes were not able to predict these performance gains. An accurate multi-D code should improve understanding of the effects of area changes along the main flow axis, sensitivity of performance to slight changes in internal geometry, and, in general, the understanding of various internal thermodynamic losses. The commercial CFD-ACE code has been chosen for development of the multi-D code. This 2-D/3-D code has highly developed pre- and post-processors, and moving boundary capability. Preliminary attempts at validation of CFD-ACE models of MIT gas spring and "two space" test rigs were encouraging. Also, CSU's simulations of the UMN oscillating-flow rig compare well with flow visualization results from UMN. A complementary Department of Energy (DOE) Regenerator Research effort is aiding in development of regenerator matrix models that will be used in the multi-D Stirling code. This paper reports on the progress and challenges of this multi-D code development effort.

  5. Comparison of various NLTE codes in computing the charge-state populations of an argon plasma

    SciTech Connect

    Stone, S.R.; Weisheit, J.C.

    1984-11-01

    A comparison among nine computer codes shows surprisingly large differences where it had been believed that the theory was well understood. Each code treats an argon plasma, optically thin and with no external photon flux; temperatures vary around 1 keV and ion densities vary from 6 x 10^17 cm^-3 to 6 x 10^21 cm^-3. At these conditions most ions have three or fewer bound electrons. The calculated populations of 0-, 1-, 2-, and 3-electron ions differ from code to code by typical factors of 2, in some cases by factors greater than 300. These differences depend as sensitively on how many Rydberg states a code allows as they do on variations among the computed collision rates. 29 refs., 23 figs.
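
    In the simplest steady-state picture, the charge-state populations being compared follow a ladder balance in which adjacent ionization stages satisfy n[i+1]/n[i] = S[i]/alpha[i+1], with ionization rate coefficients S and recombination rate coefficients alpha. The toy sketch below uses made-up rates; the codes compared above differ precisely in how such rates and the Rydberg-state ladder are computed.

    import numpy as np

    # Made-up rate coefficients for four stage boundaries (cm^3/s).
    S     = np.array([8.0e-9, 5.0e-9, 2.0e-9, 6.0e-10])    # ionization
    alpha = np.array([1.0e-11, 2.0e-11, 4.0e-11, 8.0e-11])  # recombination

    ratios = S / alpha                        # n[i+1]/n[i] at each boundary
    n = np.concatenate(([1.0], np.cumprod(ratios)))
    n /= n.sum()                              # normalize to fractional populations

    for i, f in enumerate(n):
        print(f"charge state {i}: fraction {f:.3e}")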

  6. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation: Functional modules F1-F8

    SciTech Connect

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume consists of the section of the manual dealing with eight of the functional modules in the code. Those are: BONAMI - resonance self-shielding by the Bondarenko method; NITAWL-II - SCALE system module for performing resonance shielding and working library production; XSDRNPM - a one-dimensional discrete-ordinates code for transport analysis; XSDOSE - a module for calculating fluxes and dose rates at points outside a shield; KENO IV/S - an improved Monte Carlo criticality program; COUPLE; ORIGEN-S - SCALE system module to calculate fuel depletion, actinide transmutation, fission product buildup and decay, and associated radiation source terms; ICE.

  7. Theoretical and user's manual for pc-PRAISE: A probabilistic fracture mechanics computer code for piping reliability analysis

    SciTech Connect

    Harris, D.O.; Dedhia, D.D.; Lu, S.C.

    1992-07-01

    The purpose of this document is to collect under one cover and update the documentation related to the PRAISE Computer Code. pc-PRAISE is the most recent version of the code, which is a probabilistic fracture mechanics code that has recently been modified to run on an IBM personal computer to evaluate the reliability of welds in nuclear power plant piping systems. pc-PRAISE was adapted from the PRAISE Computer Code, which was originally developed in 1980-81 by Lawrence Livermore National Laboratory (LLNL) under funding from the US Nuclear Regulatory Commission for assessment of the influence of seismic events on the failure probability of piping in pressurized water reactors. PRAISE is an acronym for Piping Reliability Analysis Including Seismic Events, and has been significantly expanded in recent years to allow consideration of both crack initiation and growth in a variety of piping materials in pressurized and boiling water reactors. PRAISE has a deterministic basis in fracture mechanics. Some of the inputs, such as initial crack size and inspection detection probability, are considered to be random variables, and failure probability versus time for a given weldment is evaluated by Monte Carlo simulation. Complex realistic stress histories can be treated by the code, and sets of random material properties for representative piping materials are built into the code. This document provides a comprehensive summary of the deterministic basis of the code, along with description of statistical distributions of random variables. Code inputs are described and an extensive set of sample problems is provided along with descriptions of representative outputs.

  9. STEALTH: a Lagrange explicit finite difference code for solids, structural, and thermohydraulic analysis. Volume 3: programmer's manual. Computer code manual. [PWR; BWR]

    SciTech Connect

    Hofmann, R.

    1981-11-01

    This volume contains a description of a programming and documentation structure for the STEALTH finite difference computer programs based on general principles applicable to most large scientific computer programs. Program modularization (as well as documentation format) is based entirely on the theoretical elements of analysis of a physical system that were presented in Volume 1. FORTRAN programming and naming conventions are also described. Among the programming formats presented is a FORTRAN manual (Appendix FTN) which can be used as the basis for developing portable codes. STEALTH was developed on a CDC 7600. However, it has been designed so that it can be installed on most large scientific computers. Installation documentation exists for some facilities and can be generated easily for others.

  10. Using a Serious Game Approach to Teach Secure Coding in Introductory Programming: Development and Initial Findings

    ERIC Educational Resources Information Center

    Adamo-Villani, Nicoletta; Oania, Marcus; Cooper, Stephen

    2013-01-01

    We report the development and initial evaluation of a serious game that, in conjunction with appropriately designed matching laboratory exercises, can be used to teach secure coding and Information Assurance (IA) concepts across a range of introductory computing courses. The IA Game is a role-playing serious game (RPG) in which the student travels…

  11. FACET: a radiation view factor computer code for axisymmetric, 2D planar, and 3D geometries with shadowing

    SciTech Connect

    Shapiro, A.B.

    1983-08-01

    The computer code FACET calculates the radiation geometric view factor (alternatively called shape factor, angle factor, or configuration factor) between surfaces for axisymmetric, two-dimensional planar and three-dimensional geometries with interposed third surface obstructions. FACET was developed to calculate view factors for input to finite-element heat-transfer analysis codes. The first section of this report is a brief review of previous radiation-view-factor computer codes. The second section presents the defining integral equation for the geometric view factor between two surfaces and the assumptions made in its derivation. Also in this section are the numerical algorithms used to integrate this equation for the various geometries. The third section presents the algorithms used to detect self-shadowing and third-surface shadowing between the two surfaces for which a view factor is being calculated. The fourth section provides a user's input guide followed by several example problems.
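
    The defining view-factor integral mentioned above can be evaluated directly by a midpoint rule when the geometry is simple and unobstructed. The sketch below does so for two directly opposed parallel unit squares (grid resolution and separation are illustrative) and should reproduce the known analytic value of about 0.1998 for unit separation.

    import numpy as np

    # Midpoint-rule evaluation of
    #   F12 = (1/A1) * double-integral of cos(t1)*cos(t2) / (pi * r^2) dA2 dA1
    # for two parallel unit squares separated by h, with no shadowing.
    n, h = 40, 1.0
    c = (np.arange(n) + 0.5) / n                     # cell-center coordinates
    x, y = np.meshgrid(c, c, indexing="ij")
    p1 = np.column_stack([x.ravel(), y.ravel(), np.zeros(n * n)])
    p2 = p1 + np.array([0.0, 0.0, h])                # second square at z = h
    dA = (1.0 / n) ** 2                              # cell area on each surface

    F12 = 0.0
    for q in p1:                                     # loop over cells of surface 1
        v = p2 - q                                   # vectors to cells of surface 2
        r2 = np.einsum("ij,ij->i", v, v)
        cos1 = v[:, 2] / np.sqrt(r2)                 # both normals along the z axis
        F12 += np.sum(cos1 * cos1 / (np.pi * r2)) * dA * dA
    # A1 = 1 here, so no division by the emitting area is needed.

    print(f"F12 ~ {F12:.4f} (analytic value is about 0.1998 for h = 1)")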

  12. High-Performance Java Codes for Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Riley, Christopher; Chatterjee, Siddhartha; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The computational science community is reluctant to write large-scale computationally intensive applications in Java due to concerns over Java's poor performance, despite the claimed software engineering advantages of its object-oriented features. Naive Java implementations of numerical algorithms can perform poorly compared to corresponding Fortran or C implementations. To achieve high performance, Java applications must be designed with good performance as a primary goal. This paper presents the object-oriented design and implementation of two real-world applications from the field of Computational Fluid Dynamics (CFD): a finite-volume fluid flow solver (LAURA, from NASA Langley Research Center), and an unstructured mesh adaptation algorithm (2D_TAG, from NASA Ames Research Center). This work builds on our previous experience with the design of high-performance numerical libraries in Java. We examine the performance of the applications using the currently available Java infrastructure and show that the Java version of the flow solver LAURA performs almost within a factor of 2 of the original procedural version. Our Java version of the mesh adaptation algorithm 2D_TAG performs within a factor of 1.5 of its original procedural version on certain platforms. Our results demonstrate that object-oriented software design principles are not necessarily inimical to high performance.

  13. Benchmark testing and independent verification of the VS2DT computer code

    NASA Astrophysics Data System (ADS)

    McCord, James T.; Goodrich, Michael T.

    1994-11-01

    The finite difference flow and transport simulator VS2DT was benchmark tested against several other codes which solve the same equations (Richards equation for flow and the Advection-Dispersion equation for transport). The benchmark problems investigated transient two-dimensional flow in a heterogeneous soil profile with a localized water source at the ground surface. The VS2DT code performed as well as or better than all other codes when considering mass balance characteristics and computational speed. It was also rated highly relative to the other codes with regard to ease-of-use. Following the benchmark study, the code was verified against two analytical solutions, one for two-dimensional flow and one for two-dimensional transport. These independent verifications show reasonable agreement with the analytical solutions, and complement the one-dimensional verification problems published in the code's original documentation.

  14. Development of a coupling code for PWR reactor cavity radiation streaming calculation

    SciTech Connect

    Zheng, Z.; Wu, H.; Cao, L.; Zheng, Y.; Zhang, H.; Wang, M.

    2012-07-01

    PWR reactor cavity radiation streaming is important for the safety of personnel and equipment, so calculations have to be performed to evaluate the neutron flux distribution around the reactor. For this calculation, deterministic codes have difficulty with fine geometrical modeling and need huge computer resources, while Monte Carlo codes require very long sampling times to obtain results with acceptable precision. Therefore, a coupling method has been developed to eliminate these two problems. In this study, we develop a coupling code named DORT2MCNP to link the Sn code DORT and the Monte Carlo code MCNP. DORT2MCNP is used to produce a combined surface source containing top, bottom, and side surfaces simultaneously. Because the SDEF card is unsuitable for the combined surface source, we modify the SOURCE subroutine of MCNP and compile MCNP for this application. Numerical results demonstrate the correctness of the coupling code DORT2MCNP and show reasonable agreement between the coupling method and the other two codes (DORT and MCNP). (authors)

  15. Structural reliability methods: Code development status

    NASA Technical Reports Server (NTRS)

    Millwater, Harry R.; Thacker, Ben H.; Wu, Y.-T.; Cruse, T. A.

    1991-01-01

    The Probabilistic Structures Analysis Method (PSAM) program integrates state of the art probabilistic algorithms with structural analysis methods in order to quantify the behavior of Space Shuttle Main Engine structures subject to uncertain loadings, boundary conditions, material parameters, and geometric conditions. An advanced, efficient probabilistic structural analysis software program, NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) was developed as a deliverable. NESSUS contains a number of integrated software components to perform probabilistic analysis of complex structures. A nonlinear finite element module NESSUS/FEM is used to model the structure and obtain structural sensitivities. Some of the capabilities of NESSUS/FEM are shown. A Fast Probability Integration module NESSUS/FPI estimates the probability given the structural sensitivities. A driver module, PFEM, couples the FEM and FPI. NESSUS, version 5.0, addresses component reliability, resistance, and risk.
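
    The kind of estimate a fast probability integrator produces can be sketched, under strong simplifications, by first-order reliability for an assumed limit state g = R - S with independent normal resistance R and load S: beta = (muR - muS) / sqrt(sigR^2 + sigS^2) and Pf = Phi(-beta). The statistics below are illustrative, not Space Shuttle data.

    import math

    muR, sigR = 500.0, 40.0    # resistance mean / standard deviation (e.g., MPa)
    muS, sigS = 350.0, 30.0    # load-effect mean / standard deviation

    beta = (muR - muS) / math.hypot(sigR, sigS)    # reliability index
    Pf = 0.5 * math.erfc(beta / math.sqrt(2.0))    # standard normal CDF at -beta

    print(f"reliability index beta = {beta:.2f}, failure probability = {Pf:.2e}")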

  16. Codes and standards research, development and demonstration roadmap

    SciTech Connect

    None, None

    2008-07-22

    C&S RD&D Roadmap - 2008: This Roadmap is a guide to the Research, Development & Demonstration activities that will provide data required for Standards Development Organizations (SDOs) to develop performance-based codes and standards for a commercial hydrogen fueled transportation sector in the U.S.

  17. Software Development Environment with Integrated Code Rocket Capabilities

    NASA Astrophysics Data System (ADS)

    Parkes, Steve; Paterson, David; Spark, Alan; Yu, Bruce Guoxia

    2013-08-01

    The development of software for embedded systems such as spacecraft instruments, data processing or other on-board applications, faces a number of challenges not always fully met by many of the currently available software development environments. In this paper we describe a new suite of software tools, the STAR Software Development Environment (SSDE), which is intended to address many of these challenges, and which should simplify the development of software for spacecraft applications, and for other embedded environments. The SSDE includes Code Rocket, a code visualisation and documentation tool, which provides both pseudocode and flowchart editing facilities. These are fully integrated with the code editing and debugging features of the underlying integrated development environment (IDE).

  18. Present capabilities and new developments in antenna modeling with the numerical electromagnetics code NEC

    SciTech Connect

    Burke, G.J.

    1988-04-08

    Computer modeling of antennas, since its start in the late 1960's, has become a powerful and widely used tool for antenna design. Computer codes have been developed based on the Method-of-Moments, Geometrical Theory of Diffraction, or integration of Maxwell's equations. Of such tools, the Numerical Electromagnetics Code-Method of Moments (NEC) has become one of the most widely used codes for modeling resonant sized antennas. There are several reasons for this including the systematic updating and extension of its capabilities, extensive user-oriented documentation and accessibility of its developers for user assistance. The result is that there are estimated to be several hundred users of various versions of NEC world wide. 23 refs., 10 figs.

  19. Toward Reproducible Computational Research: An Empirical Analysis of Data and Code Policy Adoption by Journals

    PubMed Central

    Stodden, Victoria; Guo, Peixuan; Ma, Zhaokun

    2013-01-01

    Journal policy on research data and code availability is an important part of the ongoing shift toward publishing reproducible computational science. This article extends the literature by studying journal data sharing policies by year (for both 2011 and 2012) for a referent set of 170 journals. We make a further contribution by evaluating code sharing policies, supplemental materials policies, and open access status for these 170 journals for each of 2011 and 2012. We build a predictive model of open data and code policy adoption as a function of impact factor and publisher and find higher impact journals more likely to have open data and code policies and scientific societies more likely to have open data and code policies than commercial publishers. We also find open data policies tend to lead open code policies, and we find no relationship between open data and code policies and either supplemental material policies or open access journal status. Of the journals in this study, 38% had a data policy, 22% had a code policy, and 66% had a supplemental materials policy as of June 2012. This reflects a striking one year increase of 16% in the number of data policies, a 30% increase in code policies, and a 7% increase in the number of supplemental materials policies. We introduce a new dataset to the community that categorizes data and code sharing, supplemental materials, and open access policies in 2011 and 2012 for these 170 journals. PMID:23805293
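
    A sketch of the kind of predictive model described above, fit to synthetic data: the generating coefficients and variables below are invented for illustration, and the study's actual dataset is the one introduced in the article.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 170
    impact = rng.lognormal(1.0, 0.8, n)              # journal impact factors
    society = rng.integers(0, 2, n)                  # 1 = scientific-society publisher
    logit = -2.0 + 0.4 * impact + 1.0 * society      # assumed true effects
    has_policy = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

    X = np.column_stack([impact, society])
    model = LogisticRegression().fit(X, has_policy)
    print("fitted coefficients (impact factor, society publisher):", model.coef_[0])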

  20. The modification and application of RAMS computer code. Final report

    SciTech Connect

    McKee, T.B.

    1995-01-17

    The Regional Atmospheric Modeling System (RAMS) has been utilized in its most updated form, version 3a, to simulate a case night from the Atmospheric Studies in COmplex Terrain (ASCOT) experimental program. ASCOT held a wintertime observational campaign during February 1991 to observe the often strong drainage flows that form on the Great Plains and in the canyons embedded within the slope from the Continental Divide to the Great Plains. A high-resolution (500 m grid spacing) simulation of the 4-5 February 1991 case night using the more advanced turbulence closure now available in RAMS 3a allowed greater analysis of the physical processes governing the drainage flows. It is found that shear interactions above and within the drainage flow are important and are overpredicted by the new scheme at small grid spacing (< ~1000 m). The implication is that contaminants trapped in nighttime stable flows such as these will be mixed too strongly in the vertical, reducing predicted ground-level concentrations. The HYPACT code has been added to the capability at LANL, although due to the reduced scope of work, no simulations with HYPACT were performed.

  1. Development of a parallelization strategy for the VARIANT code

    SciTech Connect

    Hanebutte, U.R.; Khalil, H.S.; Palmiotti, G.; Tatsumi, M.

    1996-12-31

    The VARIANT code solves the multigroup steady-state neutron diffusion and transport equation in three-dimensional Cartesian and hexagonal geometries using the variational nodal method. VARIANT consists of four major parts that must be executed sequentially: input handling, calculation of response matrices, the solution algorithm (i.e., inner-outer iteration), and output of results. The objective of the parallelization effort was to reduce the overall computing time by distributing the work of the two computationally intensive (sequential) tasks, the coupling-coefficient calculation and the iterative solver, equally among a group of processors. This report describes the code's calculations and gives performance results on one of the benchmark problems used to test the code. The performance analysis on the IBM SPx system shows good efficiency for well-load-balanced programs. Even for relatively small problem sizes, respectable efficiencies are seen on the SPx. An extension to achieve a higher degree of parallelism will be addressed in future work. 7 refs., 1 tab.

  2. Issues in computational fluid dynamics code verification and validation

    SciTech Connect

    Oberkampf, W.L.; Blottner, F.G.

    1997-09-01

    A broad range of mathematical modeling errors of fluid flow physics and numerical approximation errors are addressed in computational fluid dynamics (CFD). It is strongly believed that if CFD is to have a major impact on the design of engineering hardware and flight systems, the level of confidence in complex simulations must substantially improve. To better understand the present limitations of CFD simulations, a wide variety of physical modeling, discretization, and solution errors are identified and discussed. Here, discretization and solution errors refer to all errors caused by conversion of the original partial differential, or integral, conservation equations representing the physical process, to algebraic equations and their solution on a computer. The impact of boundary conditions on the solution of the partial differential equations and their discrete representation will also be discussed. Throughout the article, clear distinctions are made between the analytical mathematical models of fluid dynamics and the numerical models. Lax's Equivalence Theorem and its frailties in practical CFD solutions are pointed out. Distinctions are also made between the existence and uniqueness of solutions to the partial differential equations as opposed to the discrete equations. Two techniques are briefly discussed for the detection and quantification of certain types of discretization and grid resolution errors.

  3. The development and performance of a message-passing version of the PAGOSA shock-wave physics code

    SciTech Connect

    Gardner, D.R.; Vaughan, C.T.

    1997-10-01

    A message-passing version of the PAGOSA shock-wave physics code has been developed at Sandia National Laboratories for multiple-instruction, multiple-data stream (MIMD) computers. PAGOSA is an explicit, Eulerian code for modeling the three-dimensional, high-speed hydrodynamic flow of fluids and the dynamic deformation of solids under high rates of strain. It was originally developed at Los Alamos National Laboratory for the single-instruction, multiple-data (SIMD) Connection Machine parallel computers. The performance of Sandia's message-passing version of PAGOSA has been measured on two MIMD machines, the nCUBE 2 and the Intel Paragon XP/S. No special efforts were made to optimize the code for either machine. The measured scaled speedup (computational time for a single computational node divided by the computational time per node for fixed computational load) and grind time (computational time per cell per time step) show that the MIMD PAGOSA code scales linearly with the number of computational nodes used on a variety of problems, including the simulation of shaped-charge jets perforating an oil well casing. Scaled parallel efficiencies for MIMD PAGOSA are greater than 0.70 when the available memory per node is filled (or nearly filled) on hundreds to a thousand or more computational nodes on these two machines, indicating that the code scales very well. Thus good parallel performance can be achieved for complex and realistic applications when they are first implemented on MIMD parallel computers.
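
    The two performance metrics quoted above reduce to simple arithmetic on raw timings. A small sketch follows, using illustrative numbers and one common convention for scaled efficiency.

    def grind_time(wall_seconds, ncells, nsteps):
        """Computational time per cell per time step."""
        return wall_seconds / (ncells * nsteps)

    def scaled_efficiency(t_one_node, t_n_nodes):
        """Single-node time divided by the time on n nodes with the
        computational load per node held fixed; scaled speedup is this
        value times the node count."""
        return t_one_node / t_n_nodes

    print(f"{grind_time(3600.0, 1_000_000, 500):.2e} s per cell per step")
    print(f"{scaled_efficiency(10.0, 13.5):.2f} scaled efficiency")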

  4. The 3D MHD code GOEMHD3 for astrophysical plasmas with large Reynolds numbers. Code description, verification, and computational performance

    NASA Astrophysics Data System (ADS)

    Skála, J.; Baruffa, F.; Büchner, J.; Rampp, M.

    2015-08-01

    Context. The numerical simulation of turbulence and flows in almost ideal astrophysical plasmas with large Reynolds numbers motivates the implementation of magnetohydrodynamical (MHD) computer codes with low resistivity. They need to be computationally efficient and scale well with large numbers of CPU cores, allow obtaining a high grid resolution over large simulation domains, and be easily and modularly extensible, for instance, to new initial and boundary conditions. Aims: Our aims are the implementation, optimization, and verification of a computationally efficient, highly scalable, and easily extensible low-dissipative MHD simulation code for the numerical investigation of the dynamics of astrophysical plasmas with large Reynolds numbers in three dimensions (3D). Methods: The new GOEMHD3 code discretizes the ideal part of the MHD equations using a fast and efficient leap-frog scheme that is second-order accurate in space and time and whose initial and boundary conditions can easily be modified. For the investigation of diffusive and dissipative processes the corresponding terms are discretized by a DuFort-Frankel scheme. To always fulfill the Courant-Friedrichs-Lewy stability criterion, the time step of the code is adapted dynamically. Numerically induced local oscillations are suppressed by explicit, externally controlled diffusion terms. Non-equidistant grids are implemented, which enhance the spatial resolution, where needed. GOEMHD3 is parallelized based on the hybrid MPI-OpenMP programming paradigm, adopting a standard two-dimensional domain-decomposition approach. Results: The ideal part of the equation solver is verified by performing numerical tests of the evolution of the well-understood Kelvin-Helmholtz instability and of Orszag-Tang vortices. The accuracy of solving the (resistive) induction equation is tested by simulating the decay of a cylindrical current column. Furthermore, we show that the computational performance of the code scales very
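
    The named ingredients of the ideal-part solver, a leap-frog scheme that is second-order in space and time together with a CFL-limited time step, can be illustrated on toy 1D advection. This is a generic sketch, not GOEMHD3 itself; all parameters are illustrative.

    import numpy as np

    nx, c, cfl, nsteps = 200, 1.0, 0.5, 400
    x = np.linspace(0.0, 1.0, nx, endpoint=False)   # periodic domain
    dx = x[1] - x[0]
    dt = cfl * dx / abs(c)                          # CFL-limited time step

    u_old = np.exp(-200.0 * (x - 0.5) ** 2)         # initial Gaussian pulse
    u = u_old - c * dt / dx * (u_old - np.roll(u_old, 1))   # upwind starter step

    for _ in range(nsteps):                         # leap-frog main loop
        u_new = u_old - c * dt / dx * (np.roll(u, -1) - np.roll(u, 1))
        u_old, u = u, u_new

    print(f"pulse peak after t = {(nsteps + 1) * dt:.2f}: {u.max():.3f}")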

  5. Multiplexing Genetic and Nucleosome Positioning Codes: A Computational Approach

    PubMed Central

    Eslami-Mossallam, Behrouz; Schram, Raoul D.; Tompitak, Marco; van Noort, John; Schiessel, Helmut

    2016-01-01

    Eukaryotic DNA is strongly bent inside fundamental packaging units: the nucleosomes. It is known that their positions are strongly influenced by the mechanical properties of the underlying DNA sequence. Here we discuss the possibility that these mechanical properties and the concomitant nucleosome positions are not just a side product of the given DNA sequence, e.g. that of the genes, but that a mechanical evolution of DNA molecules might have taken place. We first demonstrate the possibility of multiplexing classical and mechanical genetic information using a computational nucleosome model. In a second step we give evidence for genome-wide multiplexing in Saccharomyces cerevisiae and Schizosaccharomyces pombe. This suggests that the exact positions of nucleosomes play crucial roles in chromatin function. PMID:27272176

  6. Symbolic coding for noninvertible systems: uniform approximation and numerical computation

    NASA Astrophysics Data System (ADS)

    Beyn, Wolf-Jürgen; Hüls, Thorsten; Schenke, Andre

    2016-11-01

    It is well known that the homoclinic theorem, which conjugates a map near a transversal homoclinic orbit to a Bernoulli subshift, extends from invertible to specific noninvertible dynamical systems. In this paper, we provide a unifying approach that combines such a result with a fully discrete analog of the conjugacy for finite but sufficiently long orbit segments. The underlying idea is to solve appropriate discrete boundary value problems in both cases, and to use the theory of exponential dichotomies to control the errors. This leads to a numerical approach that allows us to compute the conjugacy to any prescribed accuracy. The method is demonstrated for several examples where invertibility of the map fails in different ways.

  7. Universal holonomic quantum computing with cat-codes

    NASA Astrophysics Data System (ADS)

    Albert, Victor V.; Shu, Chi; Krastanov, Stefan; Shen, Chao; Liu, Ren-Bao; Yang, Zhen-Biao; Schoelkopf, Robert J.; Mirrahimi, Mazyar; Devoret, Michel H.; Jiang, Liang

    2016-05-01

    Universal computation of a quantum system consisting of superpositions of well-separated coherent states of multiple harmonic oscillators can be achieved by three families of adiabatic holonomic gates. The first gate consists of moving a coherent state around a closed path in phase space, resulting in a relative Berry phase between that state and the other states. The second gate consists of ``colliding'' two coherent states of the same oscillator, resulting in coherent population transfer between them. The third gate is an effective controlled-phase gate on coherent states of two different oscillators. Such gates should be realizable via reservoir engineering of systems which support tunable nonlinearities, such as trapped ions and circuit QED.

  8. Analysis of BIOMOVS II Uranium Mill Tailings scenario 1.07 with the RESRAD computer code

    SciTech Connect

    Gnanapragasam, E.K.; Yu, C.

    1997-08-01

    The residual radioactive material guidelines (RESRAD) computer code developed at Argonne National Laboratory was selected for participation in the model intercomparison test scenario, version 1.07, conducted by the Uranium Mill Tailings Working Group in the second phase of the international Biospheric Model Validation Study. The RESRAD code was enhanced to provide an output attributing radiological dose to the nuclide at the point of exposure, in addition to the existing output attributing radiological dose to the nuclide in the contaminated zone. A conceptual model to account for off-site accumulation following atmospheric deposition was developed and showed the importance of considering this process for this off-site scenario. The RESRAD predictions for the atmospheric release compared well with most of the other models. The peak and steady-state doses and concentrations predicted by RESRAD for the groundwater release also agreed well with most of the other models participating in the study; however, the RESRAD plots show a later breakthrough time and sharp changes compared with the plots of the predictions of other models. These differences were due to differences in the formulation of the retardation factor and to the neglect of longitudinal dispersion.

  9. Development of 1D Liner Compression Code for IDL

    NASA Astrophysics Data System (ADS)

    Shimazu, Akihisa; Slough, John; Pancotti, Anthony

    2015-11-01

    A 1D liner compression code has been developed to model liner implosion dynamics in the Inductively Driven Liner Experiment (IDL), in which an FRC plasmoid is compressed by inductively driven metal liners. The driver circuit, magnetic field, joule heating, and liner dynamics calculations are performed in sequence at each time step to couple these effects in the code. To obtain more realistic magnetic field results for a given drive coil geometry, 2D and 3D effects are incorporated into the 1D field calculation through a correction-factor table-lookup approach. The commercial low-frequency electromagnetic field solver ANSYS Maxwell 3D is used to solve the magnetic field profile for a static liner at various liner radii in order to derive correction factors for the 1D field calculation in the code. The liner dynamics results from the code are verified to be in good agreement with results from the commercial explicit dynamics solver ANSYS Explicit Dynamics and with a previous liner experiment. The developed code is used to optimize the capacitor bank and driver coil design for better energy transfer and coupling. FRC gain calculations are also performed using the liner compression data from the code for the conceptual design of a reactor-sized system for fusion energy gains.
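
    The per-time-step sequencing described above (driver circuit, then field, then liner dynamics) can be sketched as follows. This is a deliberately crude stand-in: the series-RLC circuit, the long-solenoid field estimate, and the thin-shell pressure balance are illustrative assumptions, joule heating is omitted, and none of the parameter values come from the IDL design.

      import numpy as np

      MU0 = 4e-7 * np.pi

      # hypothetical parameters, for illustration only
      C_bank, V0 = 1e-3, 5e3         # capacitor bank [F], charge voltage [V]
      L_stray, R_circ = 50e-9, 1e-3  # circuit inductance [H], resistance [ohm]
      m_lin, r = 0.1, 0.1            # liner mass per length [kg/m], radius [m]
      n_turns = 1000.0               # effective coil turns per unit length [1/m]

      dt, n_steps = 1e-8, 20000
      q, I, v = C_bank * V0, 0.0, 0.0  # bank charge, coil current, liner velocity

      for _ in range(n_steps):
          # 1) driver circuit: explicit series-RLC update
          I += (q / C_bank - R_circ * I) / L_stray * dt
          q -= I * dt
          # 2) field at the liner: long-solenoid estimate (the real code
          #    applies tabulated 2D/3D correction factors instead)
          B = MU0 * n_turns * I
          # 3) liner dynamics: inward magnetic pressure on a thin shell
          v += -(B**2 / (2.0 * MU0)) * (2.0 * np.pi * r) / m_lin * dt
          r = max(r + v * dt, 1e-3)  # crude floor keeps the radius positive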

  10. Open-Source Development of the Petascale Reactive Flow and Transport Code PFLOTRAN

    NASA Astrophysics Data System (ADS)

    Hammond, G. E.; Andre, B.; Bisht, G.; Johnson, T.; Karra, S.; Lichtner, P. C.; Mills, R. T.

    2013-12-01

    Open-source software development has become increasingly popular in recent years. Open-source encourages collaborative and transparent software development and promotes unlimited free redistribution of source code to the public. Open-source development is good for science as it reveals implementation details that are critical to scientific reproducibility, but generally excluded from journal publications. In addition, research funds that would have been spent on licensing fees can be redirected to code development that benefits more scientists. In 2006, the developers of PFLOTRAN open-sourced their code under the U.S. Department of Energy SciDAC-II program. Since that time, the code has gained popularity among code developers and users from around the world seeking to employ PFLOTRAN to simulate thermal, hydraulic, mechanical and biogeochemical processes in the Earth's surface/subsurface environment. PFLOTRAN is a massively-parallel subsurface reactive multiphase flow and transport simulator designed from the ground up to run efficiently on computing platforms ranging from the laptop to leadership-class supercomputers, all from a single code base. The code employs domain decomposition for parallelism and is founded upon the well-established and open-source parallel PETSc and HDF5 frameworks. PFLOTRAN leverages modern Fortran (i.e. Fortran 2003-2008) in its extensible object-oriented design. The use of this progressive, yet domain-friendly programming language has greatly facilitated collaboration in the code's software development. Over the past year, PFLOTRAN's top-level data structures were refactored as Fortran classes (i.e. extendible derived types) to improve the flexibility of the code, ease the addition of new process models, and enable coupling to external simulators. For instance, PFLOTRAN has been coupled to the parallel electrical resistivity tomography code E4D to enable hydrogeophysical inversion while the same code base can be used as a third
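
    The refactoring described above (top-level data structures as extensible derived types, so that new process models plug into an unchanged driver) follows a pattern that can be sketched in Python. PFLOTRAN itself is written in Fortran 2003; the class and method names below are invented for illustration and are not PFLOTRAN's API.

      from abc import ABC, abstractmethod

      class ProcessModel(ABC):
          # stands in for an extensible derived type
          @abstractmethod
          def residual(self, state, dt):
              ...

      class RichardsFlow(ProcessModel):
          def residual(self, state, dt):
              return state["pressure"] * dt        # placeholder physics

      class ReactiveTransport(ProcessModel):
          def residual(self, state, dt):
              return state["concentration"] * dt   # placeholder physics

      # a new process model is added by subclassing; the loop is unchanged
      models = [RichardsFlow(), ReactiveTransport()]
      state = {"pressure": 1.0e5, "concentration": 0.01}
      residuals = [m.residual(state, dt=1.0) for m in models]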

  11. Overview of numerical codes developed for predicted electrothermal deicing of aircraft blades

    NASA Technical Reports Server (NTRS)

    Keith, Theo G.; De Witt, Kenneth J.; Wright, William B.; Masiulaniec, K. Cyril

    1988-01-01

    An overview of the deicing computer codes that have been developed at the University of Toledo under sponsorship of the NASA-Lewis Research Center is presented. These codes simulate the transient heat conduction and phase change occurring in an electrothermal deicer pad that has an arbitrary accreted ice shape on its surface. The codes are one-dimensional rectangular, two-dimensional rectangular, and two-dimensional with a coordinate transformation to model the true blade geometry. All modifications relating to the thermal physics of the deicing problem that have been incorporated into the codes are discussed. Recent results of reformulating the codes using different numerical methods to increase program efficiency are described. In particular, this reformulation has enabled a more comprehensive two-dimensional code to run in much less CPU time than the original version. The code predictions are compared with experimental data obtained in the NASA-Lewis Icing Research Tunnel with a UH-1H blade fitted with a B. F. Goodrich electrothermal deicer pad. Both continuous and cyclic heater firing cases are considered. The major objective in this comparison is to illustrate which codes give acceptable results in different regions of the airfoil for different heater firing sequences.
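
    At its core, such a simulation advances the transient heat-conduction equation through the pad; the fragment below shows a minimal explicit (FTCS) 1-D sketch. The material data, heater flux, and boundary treatment are illustrative assumptions, not the Toledo codes' models, which also track the accreted ice layer and its phase change.

      import numpy as np

      alpha, k = 1e-6, 0.2      # assumed diffusivity [m^2/s], conductivity [W/(m K)]
      L, nx = 0.01, 51          # pad thickness [m], grid points
      dx = L / (nx - 1)
      dt = 0.4 * dx**2 / alpha  # meets the FTCS stability limit dt <= dx^2/(2*alpha)

      T = np.full(nx, 263.15)   # initial temperature [K]
      q_heater = 2.0e4          # assumed heater flux at the inner face [W/m^2]

      for _ in range(2000):
          Tn = T.copy()
          # interior nodes: dT/dt = alpha * d2T/dx2
          T[1:-1] = Tn[1:-1] + alpha * dt / dx**2 * (Tn[2:] - 2 * Tn[1:-1] + Tn[:-2])
          T[0] = Tn[1] + q_heater * dx / k  # imposed heater flux (one-sided)
          T[-1] = 263.15                    # outer face held at ambient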

  12. Value Conflicts in Computing Developments: Developed and Developing Countries.

    ERIC Educational Resources Information Center

    Kling, Rob

    1983-01-01

    This paper examines the value conflicts engendered by computing developments in two different institutional settings: electronic funds transfer systems and instructional computing in primary and secondary schools. While specific values depend upon culture and upon the character of the particular institutional setting studied, these two cases serve…

  13. Overview of NASA Multi-dimensional Stirling Convertor Code Development and Validation Effort

    NASA Technical Reports Server (NTRS)

    Tew, Roy C.; Cairelli, James E.; Ibrahim, Mounir B.; Simon, Terrence W.; Gedeon, David

    2002-01-01

    A NASA grant has been awarded to Cleveland State University (CSU) to develop a multi-dimensional (multi-D) Stirling computer code with the goals of improving loss predictions and identifying component areas for improvements. The University of Minnesota (UMN) and Gedeon Associates are teamed with CSU. Development of test rigs at UMN and CSU and validation of the code against test data are part of the effort. The one-dimensional (1-D) Stirling codes used for design and performance prediction do not rigorously model regions of the working space where abrupt changes in flow area occur (such as manifolds and other transitions between components). Certain hardware experiences have demonstrated large performance gains by varying manifolds and heat exchanger designs to improve flow distributions in the heat exchangers. 1-D codes were not able to predict these performance gains. An accurate multi-D code should improve understanding of the effects of area changes along the main flow axis, sensitivity of performance to slight changes in internal geometry, and, in general, the understanding of various internal thermodynamic losses. The commercial CFD-ACE code has been chosen for development of the multi-D code. This 2-D/3-D code has highly developed pre- and post-processors, and moving boundary capability. Preliminary attempts at validation of CFD-ACE models of MIT gas spring and "two space" test rigs were encouraging. Also, CSU's simulations of the UMN oscillating-flow rig compare well with flow visualization results from UMN. A complementary Department of Energy (DOE) Regenerator Research effort is aiding in development of regenerator matrix models that will be used in the multi-D Stirling code. This paper reports on the progress and challenges of this effort.

  15. Development of a code for wall contour design in the transonic region of axisymmetric and square nozzles

    NASA Technical Reports Server (NTRS)

    Alcenius, Timothy; Schneider, Steven P.

    1994-01-01

    Nozzle design codes developed earlier under NAG1-1133 were modified and used to design a supersonic wind-tunnel nozzle with square cross sections. As part of the design process, a computer code was written to implement the Hopkins and Hill perturbation solution for the flow in the transonic region of axisymmetric nozzles. This technique is used to design the bleed slot of quiet-flow nozzles. The new design code is documented in this report.

  16. Decision framework for technology choice. Volume 2: decision analysis user's manual. [TCM computer code

    SciTech Connect

    Sicherman, A.; Keeney, R.L.

    1982-03-01

    A computer program was developed to aid decision makers in choosing among alternatives. It facilitates the implementation of the decision analysis approach to multiobjective decision-making problems. The program's main functions are to store the information and perform all the necessary computations required by the approach. The program is designed so that only a few basic commands need to be understood in order to use it effectively. Input can be supplied in both batch and interactive modes: detailed specification of preferences and alternatives is usually done in batch mode, while sensitivity analysis can be performed interactively. The output consists of ranking, preference, and alternative information displays. The program is quite general and should be applicable to a wide variety of problems. The code allows for an interface to user-supplied models when that is desirable. It is designed to run on most computer systems with no or only minor system-specific modifications. This report presents a user's manual for the program that includes a simple illustrative example.

  17. A parametric study of smoke propagation using the CFAST computer code

    SciTech Connect

    Kalinich, D.A.; Bailey, R.T.

    1993-09-01

    When performing a Fire Hazards Analysis (FHA), fire protection engineers are often interested in determining the degree and timing of smoke propagation from one room to another within buildings. Often, the engineer must make judgments based on a limited set of guidelines and available information. In order to provide additional data to assist in these judgments, the Risk and Source Term Technology (R&STT) Group of Westinghouse Savannah River Company (WSRC) has conducted a parametric study of smoke propagation in a single-room/adjacent-hallway geometry. The computer code CFAST (Consolidated Model of Fire Growth And Smoke Transport) [1], developed by the National Institute of Standards and Technology (NIST), was used to perform the calculations.

  18. Finite Element Simulation Code for Computing Thermal Radiation from a Plasma

    NASA Astrophysics Data System (ADS)

    Nguyen, C. N.; Rappaport, H. L.

    2004-11-01

    A finite element code, ``THERMRAD,'' for computing thermal radiation from a plasma is under development. Radiation from plasma test particles is found in cylindrical geometry. Although the plasma equilibrium is assumed axisymmetric, individual test-particle excitation produces a non-axisymmetric electromagnetic response. Specially designed Whitney-class basis functions are to be used to allow the solution to be obtained on a two-dimensional grid. The basis functions enforce both a vanishing divergence of the electric field within grid elements, where the complex index of refraction is assumed constant, and continuity of the tangential electric field across grid elements, while allowing the normal component of the electric field to be discontinuous. An appropriate variational principle, which incorporates the Sommerfeld radiation condition on the simulation boundary, as well as its discretization by the Rayleigh-Ritz technique, is given. 1. ``Finite Element Method for Electromagnetics Problems,'' Volakis et al., Wiley, 1998.

  19. Compendium of computer codes for the researcher in magnetic fusion energy

    SciTech Connect

    Porter, G.D.

    1989-03-10

    This is a compendium of computer codes that are available to the fusion researcher. It is intended to be a document that permits a quick evaluation of the tools available to the experimenter who wants both to analyze his data and to compare the results of his analysis with the predictions of available theories. This document will be updated frequently to maintain its usefulness. I would appreciate receiving further information about codes not included here from anyone who has used them. The information required includes a brief description of the code (including any special features), a bibliography of the documentation available for the code and/or the underlying physics, a list of people to contact for help in running the code, instructions on how to access the code, and a description of the output from the code. Wherever possible, the code contacts should include people from each of the fusion facilities so that the novice can talk to someone "down the hall" when he first tries to use a code. I would also appreciate any comments about possible additions and improvements in the index, and I encourage any additional criticism of this document. 137 refs.

  20. Development and testing of the autobiographical memory coding tool.

    PubMed

    Kovach, C R

    1993-04-01

    Development and testing of the autobiographical memory coding tool (AMCT) is detailed. The tool uses quantitative content analysis procedures to code interpretations of autobiographical memories as validating or lamenting. Development of a system of measuring autobiographical memories from transcribed reminiscence interviews involved defining the units of analysis, defining the categories and themes, constructing a codebook, assessing content validity, assessing reliability and making revisions. Thirty-nine transcripts of reminiscence interviews were used for tool development and testing. The AMCT contains a series of step-by-step guidelines for conducting the analysis and includes an autobiographical memory thematic dictionary and descriptions of range and variations in each theme to assist with coding data. Intercoder reliability estimates were 0.93, 0.93 and 0.95. Test-retest reliability was 1.00.

  1. HIFI: a computer code for projectile fragmentation accompanied by incomplete fusion

    SciTech Connect

    Wu, J.R.

    1980-07-01

    A brief summary of a model proposed to describe projectile fragmentation accompanied by incomplete fusion and the instructions for the use of the computer code HIFI are given. The code HIFI calculates single inclusive spectra, coincident spectra and excitation functions resulting from particle-induced reactions. It is a multipurpose program which can calculate any type of coincident spectra as long as the reaction is assumed to take place in two steps.

  2. Computer code for controller partitioning with IFPC application: A user's manual

    NASA Technical Reports Server (NTRS)

    Schmidt, Phillip H.; Yarkhan, Asim

    1994-01-01

    A user's manual for the computer code for partitioning a centralized controller into decentralized subcontrollers with applicability to Integrated Flight/Propulsion Control (IFPC) is presented. Partitioning of a centralized controller into two subcontrollers is described, and the algorithm on which the code is based is discussed. The algorithm uses parameter optimization of a cost function, which is described. The major data structures and functions are described, and specific usage instructions are given. The user is led through an example of an IFPC application.

  3. Verification of a Viscous Computational Aeroacoustics Code using External Verification Analysis

    NASA Technical Reports Server (NTRS)

    Ingraham, Daniel; Hixon, Ray

    2015-01-01

    The External Verification Analysis approach to code verification is extended to solve the three-dimensional Navier-Stokes equations with constant properties, and is used to verify a high-order computational aeroacoustics (CAA) code. After a brief review of the relevant literature, the details of the EVA approach are presented and compared to the similar Method of Manufactured Solutions (MMS). Pseudocode representations of EVA's algorithms are included, along with the recurrence relations needed to construct the EVA solution. The code verification results show that EVA was able to convincingly verify a high-order, viscous CAA code without the addition of MMS-style source terms, or any other modifications to the code.
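
    Verification studies of this kind typically close with a grid-refinement check; the fragment below shows the standard observed-order-of-accuracy computation. The error values are made up for illustration and are not results from the paper.

      import math

      h1, h2 = 1.0 / 32, 1.0 / 64   # two grid spacings
      e1, e2 = 3.2e-5, 2.1e-6       # hypothetical discretization errors
      p_observed = math.log(e1 / e2) / math.log(h1 / h2)
      print(f"observed order ~ {p_observed:.2f}")  # should approach the design order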

  5. A Multiple Sphere T-Matrix Fortran Code for Use on Parallel Computer Clusters

    NASA Technical Reports Server (NTRS)

    Mackowski, D. W.; Mishchenko, M. I.

    2011-01-01

    A general-purpose Fortran-90 code for calculation of the electromagnetic scattering and absorption properties of multiple sphere clusters is described. The code can calculate the efficiency factors and scattering matrix elements of the cluster for either fixed or random orientation with respect to the incident beam and for plane wave or localized-approximation Gaussian incident fields. In addition, the code can calculate maps of the electric field both interior and exterior to the spheres. The code is written with message passing interface (MPI) instructions to enable use on distributed-memory compute clusters, and for such platforms the code can make feasible the calculation of absorption, scattering, and general EM characteristics of systems containing several thousand spheres.

  6. Verification of computational aerodynamic predictions for complex hypersonic vehicles using the INCA{trademark} code

    SciTech Connect

    Payne, J.L.; Walker, M.A.

    1995-01-01

    This paper describes a process of combining two state-of-the-art CFD tools, SPRINT and INCA, in a manner which extends the utility of both codes beyond what is possible from either code alone. The speed and efficiency of the PNS code SPRINT have been combined with the capability of a Navier-Stokes code to model fully elliptic, viscous separated regions on high-performance, high-speed flight systems. The coupled SPRINT/INCA capability is applicable to the design and evaluation of high-speed flight vehicles in the supersonic to hypersonic speed regimes. This paper describes the codes involved, the interface process, and a few selected test cases which illustrate the SPRINT/INCA coupling process. Results have shown that the combination of SPRINT and INCA produces correct results and can lead to improved computational analyses for complex, three-dimensional problems.

  7. Items Supporting the Hanford Internal Dosimetry Program Implementation of the IMBA Computer Code

    SciTech Connect

    Carbaugh, Eugene H.; Bihl, Donald E.

    2008-01-07

    The Hanford Internal Dosimetry Program has adopted the computer code IMBA (Integrated Modules for Bioassay Analysis) as its primary code for bioassay data evaluation and dose assessment using the methodologies of ICRP Publications 60, 66, 67, 68, and 78. The adoption of this code was part of the implementation plan for the June 8, 2007 amendments to 10 CFR 835. This information release includes action items unique to IMBA that were required by PNNL quality assurance standards for implementation of safety software. Copies of the IMBA software verification test plan and the outline of the briefing given to new users are also included.

  8. The FLUKA code for space applications: recent developments

    NASA Technical Reports Server (NTRS)

    Andersen, V.; Ballarini, F.; Battistoni, G.; Campanella, M.; Carboni, M.; Cerutti, F.; Empl, A.; Fasso, A.; Ferrari, A.; Gadioli, E.; Garzelli, M. V.; Lee, K.; Ottolenghi, A.; Pelliccioni, M.; Pinsky, L. S.; Ranft, J.; Roesler, S.; Sala, P. R.; Wilson, T. L.; Townsend, L. W. (Principal Investigator)

    2004-01-01

    The FLUKA Monte Carlo transport code is widely used for fundamental research, radioprotection and dosimetry, hybrid nuclear energy system and cosmic ray calculations. The validity of its physical models has been benchmarked against a variety of experimental data over a wide range of energies, ranging from accelerator data to cosmic ray showers in the Earth's atmosphere. The code is presently undergoing several developments in order to better fit the needs of space applications. The generation of particle spectra according to up-to-date cosmic ray data as well as the effect of the solar and geomagnetic modulation have been implemented and already successfully applied to a variety of problems. The implementation of suitable models for heavy ion nuclear interactions has reached an operational stage. At medium/high energy FLUKA is using the DPMJET model. The major task of incorporating heavy ion interactions from a few GeV/n down to the threshold for inelastic collisions is also progressing, and promising results have been obtained using a modified version of the RQMD-2.4 code. This interim solution is now fully operational, while waiting for the development of new models based on the FLUKA hadron-nucleus interaction code, a newly developed QMD code, and the implementation of the Boltzmann master equation theory for low-energy ion interactions. © 2004 COSPAR. Published by Elsevier Ltd. All rights reserved.

  9. VARSKIN MOD 2 and SADDE MOD 2: Computer codes for assessing skin dose from skin contamination

    SciTech Connect

    Durham, J.S.

    1992-12-01

    The computer code VARSKIN has been modified to calculate dose to skin from three-dimensional sources, sources separated from the skin by layers of protective clothing, and gamma dose from certain radionuclides. A correction for backscatter has also been incorporated for certain geometries. This document describes the new code, VARSKIN Mod 2, including installation and operation instructions, provides detailed descriptions of the models used, and suggests methods for avoiding misuse of the code. The input data file for VARSKIN Mod 2 has been modified to reflect current physical data, to include the contribution to dose from internal conversion and Auger electrons, and to reflect a correction for low-energy electrons. In addition, the computer code SADDE (Scaled Absorbed Dose Distribution Evaluator) has been modified to allow the generation of scaled absorbed dose distributions for mixtures of radionuclides and internal conversion and Auger electrons. This new code, SADDE Mod 2, is also described in this document. Instructions for installation and operation of the code and detailed descriptions of the models used in the code are provided.

  10. MULTI2D - a computer code for two-dimensional radiation hydrodynamics

    NASA Astrophysics Data System (ADS)

    Ramis, R.; Meyer-ter-Vehn, J.; Ramírez, J.

    2009-06-01

    Simulation of radiation hydrodynamics in two spatial dimensions is developed, having in mind, in particular, target design for indirectly driven inertial fusion energy (IFE) and the interpretation of related experiments. Intense radiation pulses by laser or particle beams heat high-Z target configurations of different geometries and lead to a regime which is optically thick in some regions and optically thin in others. A diffusion description is inadequate in this situation. A new numerical code has been developed which describes hydrodynamics in two spatial dimensions (cylindrical R-Z geometry) and radiation transport along rays in three dimensions, with the 4π solid angle discretized in direction. Matter moves on a non-structured mesh composed of trilateral and quadrilateral elements. Radiation flux of a given direction enters on two (one) sides of a triangle and leaves on the opposite side(s) in proportion to the viewing angles, depending on the geometry. This scheme allows sharply edged beams to be propagated without ray tracing, though at the price of some lateral diffusion. The algorithm treats correctly both the optically thin and optically thick regimes. A symmetric semi-implicit (SSI) method is used to guarantee numerical stability.

    Program summary. Program title: MULTI2D. Catalogue identifier: AECV_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECV_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 151 098. No. of bytes in distributed program, including test data, etc.: 889 622. Distribution format: tar.gz. Programming language: C. Computer: PC (32-bit architecture). Operating system: Linux/Unix. RAM: 2 Mbytes. Word size: 32 bits. Classification: 19.7. External routines: X-window standard library (libX11.so) and corresponding header files (X11/*.h) are required.

  11. A Computer Code for Gas Turbine Engine Weight And Disk Life Estimation

    NASA Technical Reports Server (NTRS)

    Tong, Michael T.; Ghosn, Louis J.; Halliwell, Ian; Wickenheiser, Tim (Technical Monitor)

    2002-01-01

    Reliable engine-weight estimation at the conceptual design stage is critical to the development of new aircraft engines. It helps to identify the best engine concept among several candidates. In this paper, the major enhancements to NASA's engine-weight estimation computer code (WATE) are described. These enhancements include the incorporation of improved weight-calculation routines for the compressor and turbine disks using the finite-difference technique. Furthermore, stress distributions for various disk geometries were incorporated so that a life-prediction module can calculate disk life. A material database, consisting of the material data of most of the commonly used aerospace materials, has also been incorporated into WATE. Collectively, these enhancements provide a more realistic and systematic way to calculate the engine weight. They also provide additional insight into the design trade-off between engine life and engine weight. To demonstrate the new capabilities, the enhanced WATE code is used to perform an engine weight/life trade-off assessment on a production aircraft engine.
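
    The disk-weight portion of such a calculation reduces to integrating density times swept area over the radius; the sketch below shows that step with a trapezoid rule. The taper profile and material density are hypothetical stand-ins, not WATE's finite-difference disk models.

      import numpy as np

      rho = 8190.0                          # assumed nickel-alloy density [kg/m^3]
      r = np.linspace(0.05, 0.30, 101)      # bore-to-rim radius [m]
      t = 0.08 * np.exp(-4.0 * (r - r[0]))  # assumed tapered axial thickness [m]

      f = rho * 2.0 * np.pi * r * t         # mass per unit radius
      mass = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r)))  # trapezoid rule
      print(f"disk mass ~ {mass:.1f} kg")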

  12. A computational theory for the classification of natural biosonar targets based on a spike code.

    PubMed

    Müller, Rolf

    2003-08-01

    A computational theory for the classification of natural biosonar targets is developed based on the properties of an example stimulus ensemble. An extensive set of echoes (84 800) from four different foliages was transcribed into a spike code using a parsimonious model (linear filtering, half-wave rectification, thresholding). The spike code is assumed to consist of time differences (interspike intervals) between threshold crossings. Among the elementary interspike intervals flanked by exceedances of adjacent thresholds, a few intervals triggered by disjoint half-cycles of the carrier oscillation stand out in terms of resolvability, visibility across resolution scales and a simple stochastic structure (uncorrelatedness). They are therefore argued to be a stochastic analogue to edges in vision. A three-dimensional feature vector representing these interspike intervals sustained a reliable target classification performance (0.06% classification error) in a sequential probability ratio test, which models sequential processing of echo trains by biological sonar systems. The dimensions of the representation are the first moments of duration and amplitude location of these interspike intervals as well as their number. All three quantities are readily reconciled with known principles of neural signal representation, since they correspond to the centre of gravity of excitation on a neural map and the total amount of excitation.
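
    The transcription model named above (linear filtering, half-wave rectification, thresholding, then interspike intervals from threshold crossings) is simple enough to sketch directly; the waveform, filter, and threshold below are synthetic stand-ins, not the paper's fitted values.

      import numpy as np

      rng = np.random.default_rng(0)
      echo = rng.standard_normal(4096)              # stand-in echo waveform
      kernel = np.hanning(31)                       # stand-in linear filter
      filtered = np.convolve(echo, kernel / kernel.sum(), mode="same")
      rectified = np.maximum(filtered, 0.0)         # half-wave rectification

      threshold = 0.2
      up = (rectified[:-1] < threshold) & (rectified[1:] >= threshold)
      crossings = np.flatnonzero(up)                # upward threshold crossings
      isis = np.diff(crossings)                     # interspike intervals [samples]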

  13. NIMROD: A Customer Focused, Team Driven Approach for Fusion Code Development

    NASA Astrophysics Data System (ADS)

    Karandikar, H. M.; Schnack, D. D.

    1996-11-01

    NIMROD is a new code that will be used for the analysis of existing fusion experiments, prediction of operational limits, and design of future devices. An approach called Integrated Product Development (IPD) is being used for the development of NIMROD. It is a dramatic departure from existing practice in the fusion program. Code development is being done by a self-directed, multi-disciplinary, multi-institutional team that consists of experts in plasma theory, experiment, computational physics, and computer science. Customer representatives (ITER, US experiments) are an integral part of the team. The team is using techniques such as Quality Function Deployment (QFD), Pugh Concept Selection, Rapid Prototyping, and Risk Management, during the design phase of NIMROD. Extensive use is made of communication and internet technology to support collaborative work. Our experience with using these team techniques for such a complex software development project will be reported.

  14. Design geometry and design/off-design performance computer codes for compressors and turbines

    NASA Technical Reports Server (NTRS)

    Glassman, Arthur J.

    1995-01-01

    This report summarizes some NASA Lewis (i.e., government owned) computer codes capable of being used for airbreathing propulsion system studies to determine the design geometry and to predict the design/off-design performance of compressors and turbines. These are not CFD codes; velocity-diagram energy and continuity computations are performed fore and aft of the blade rows using meanline, spanline, or streamline analyses. Losses are provided by empirical methods. Both axial-flow and radial-flow configurations are included.

  15. Equivalence of computer codes for calculation of coincidence summing correction factors - Part II.

    PubMed

    Vidmar, T; Camp, A; Hurtado, S; Jäderström, H; Kastlander, J; Lépy, M-C; Lutter, G; Ramebäck, H; Sima, O; Vargas, A

    2016-03-01

    The aim of this study was to check for equivalence of computer codes that are capable of performing calculations of true coincidence summing (TCS) correction factors. All calculations were performed for a set of well-defined detector parameters, sample parameters and decay scheme data. The studied geometry was a point source of (133)Ba positioned directly on the detector window of a low-energy (n-type) detector. Good agreement was established between the TCS correction factors computed by the different codes. PMID:26651169

  16. Modeling Improvements and Users Manual for Axial-flow Turbine Off-design Computer Code AXOD

    NASA Technical Reports Server (NTRS)

    Glassman, Arthur J.

    1994-01-01

    An axial-flow turbine off-design performance computer code used for preliminary studies of gas turbine systems was modified and calibrated based on the experimental performance of large aircraft-type turbines. The flow- and loss-model modifications and calibrations are presented in this report. Comparisons are made between computed performances and experimental data for seven turbines over wide ranges of speed and pressure ratio. This report also serves as the users manual for the revised code, which is named AXOD.

  17. HOMAR: A computer code for generating homotopic grids using algebraic relations: User's manual

    NASA Technical Reports Server (NTRS)

    Moitra, Anutosh

    1989-01-01

    A computer code for fast automatic generation of quasi-three-dimensional grid systems for aerospace configurations is described. The code employs a homotopic method to algebraically generate two-dimensional grids in cross-sectional planes, which are stacked to produce a three-dimensional grid system. Implementation of the algebraic equivalents of the homotopic relations for generating body geometries and grids are explained. Procedures for controlling grid orthogonality and distortion are described. Test cases with description and specification of inputs are presented in detail. The FORTRAN computer program and notes on implementation and use are included.
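
    The homotopic idea is that interior grid lines are generated by blending an inner body contour into an outer boundary; a minimal algebraic version is sketched below. The shapes and the stretching exponent are illustrative assumptions, not HOMAR's input geometry or control scheme.

      import numpy as np

      ns, nt = 61, 21
      s = np.linspace(0.0, 2.0 * np.pi, ns)
      inner = np.stack([0.3 * np.cos(s), 0.2 * np.sin(s)], axis=1)  # body section
      outer = np.stack([np.cos(s), np.sin(s)], axis=1)              # far boundary

      grid = np.empty((nt, ns, 2))
      for j, t in enumerate(np.linspace(0.0, 1.0, nt)):
          w = t**1.2  # simple stretching exponent controls point clustering
          grid[j] = (1.0 - w) * inner + w * outer   # homotopy between contours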

  18. Developing PYTHON Codes for the Undergraduate ALFALFA Team

    NASA Astrophysics Data System (ADS)

    Troischt, Parker; Ryan, Nicholas; Alfalfa Team

    2016-03-01

    We describe here progress toward developing a number of new PYTHON routines to be used by members of the Undergraduate ALFALFA Team. The codes are designed to analyze HI spectra and assist in identifying and categorizing some of the intriguing sources found in the initial blind ALFALFA survey. Numerical integration is performed on extragalactic sources using 21cm line spectra produced with the L-Band Wide receiver at the National Astronomy and Ionosphere Center. Prior to the integration, polynomial fits are employed to obtain an appropriate baseline for each source. The codes developed here are part of a larger team effort to use new PYTHON routines in order to replace, upgrade, or supplement a wealth of existing IDL codes within the collaboration. This work has been supported by NSF Grant AST-1211005.
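
    The two processing steps described above (polynomial baseline fit over line-free channels, then numerical integration of the source profile) can be sketched as follows; the spectrum, fit order, and integration window are synthetic placeholders rather than the team's actual routines.

      import numpy as np

      vel = np.linspace(-500.0, 500.0, 1024)   # velocity axis [km/s]
      rng = np.random.default_rng(1)
      spec = 2e-4 * vel + rng.normal(0.0, 0.05, vel.size)
      spec += np.exp(-0.5 * ((vel - 50.0) / 40.0) ** 2)  # mock HI line

      line_free = np.abs(vel - 50.0) > 150.0   # channels away from the source
      coeffs = np.polyfit(vel[line_free], spec[line_free], deg=3)
      resid = spec - np.polyval(coeffs, vel)   # baseline-subtracted spectrum

      win = np.abs(vel - 50.0) <= 150.0
      v, y = vel[win], resid[win]
      flux = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(v)))  # trapezoid rule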

  19. Engine structures modeling software system: Computer code. User's manual

    NASA Technical Reports Server (NTRS)

    1992-01-01

    ESMOSS is a specialized software system for the construction of geometric descriptive and discrete analytical models of engine parts, components and substructures which can be transferred to finite element analysis programs such as NASTRAN. The software architecture of ESMOSS is designed in modular form with a central executive module through which the user controls and directs the development of the analytical model. Modules consist of a geometric shape generator, a library of discretization procedures, interfacing modules to join both geometric and discrete models, a deck generator to produce input for NASTRAN and a 'recipe' processor which generates geometric models from parametric definitions. ESMOSS can be executed both in interactive and batch modes. Interactive mode is considered to be the default mode and that mode will be assumed in the discussion in this document unless stated otherwise.

  20. Users manual and modeling improvements for axial turbine design and performance computer code TD2-2

    NASA Technical Reports Server (NTRS)

    Glassman, Arthur J.

    1992-01-01

    Computer code TD2 computes design point velocity diagrams and performance for multistage, multishaft, cooled or uncooled, axial flow turbines. This streamline analysis code was recently modified to upgrade modeling related to turbine cooling and to the internal loss correlation. These modifications are presented in this report along with descriptions of the code's expanded input and output. This report serves as the users manual for the upgraded code, which is named TD2-2.

  1. TERRA: a computer code for simulating the transport of environmentally released radionuclides through agriculture

    SciTech Connect

    Baes, C.F. III; Sharp, R.D.; Sjoreen, A.L.; Hermann, O.W.

    1984-11-01

    TERRA is a computer code which calculates concentrations of radionuclides and ingrowing daughters in surface and root-zone soil, produce and feed, beef, and milk from a given deposition rate at any location in the conterminous United States. The code is fully integrated with seven other computer codes which together comprise a Computerized Radiological Risk Investigation System, CRRIS. Output from either the long range (>100 km) atmospheric dispersion code RETADD-II or the short range (<80 km) atmospheric dispersion code ANEMOS, in the form of radionuclide air concentrations and ground deposition rates by downwind location, serves as input to TERRA. User-defined deposition rates and air concentrations may also be provided as input to TERRA through use of the PRIMUS computer code. The environmental concentrations of radionuclides predicted by TERRA serve as input to the ANDROS computer code which calculates population and individual intakes, exposures, doses, and risks. TERRA incorporates models to calculate uptake from soil and atmospheric deposition on four groups of produce for human consumption and four groups of livestock feeds. During the environmental transport simulation, intermediate calculations of interception fraction for leafy vegetables, produce directly exposed to atmospherically depositing material, pasture, hay, and silage are made based on location-specific estimates of standing crop biomass. Pasture productivity is estimated by a model which considers the number and types of cattle and sheep, pasture area, and annual production of other forages (hay and silage) at a given location. Calculations are made of the fraction of grain imported from outside the assessment area. TERRA output includes the above calculations and estimated radionuclide concentrations in plant produce, milk, and a beef composite by location.
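
    For orientation, the deposition-to-foliage step in codes of this kind reduces to a balance between intercepted deposition and removal by weathering and decay; a minimal steady-state sketch follows. All parameter values are illustrative placeholders, not TERRA's location-specific estimates.

      import math

      D = 1.0e-3      # assumed deposition rate [Bq/(m^2 s)]
      f_int = 0.4     # assumed interception fraction on foliage
      Y = 1.5         # assumed standing biomass [kg/m^2]
      lam_w = math.log(2) / (14 * 86400)              # 14 d weathering half-life
      lam_r = math.log(2) / (30.17 * 365.25 * 86400)  # Cs-137 decay constant

      # steady state of dC/dt = D * f_int / Y - (lam_w + lam_r) * C
      C_veg = D * f_int / (Y * (lam_w + lam_r))
      print(f"steady foliar concentration ~ {C_veg:.0f} Bq/kg")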

  2. Independent verification and validation testing of the FLASH computer code, Version 3.0

    SciTech Connect

    Martian, P.; Chung, J.N. (Dept. of Mechanical and Materials Engineering)

    1992-06-01

    Independent testing of the FLASH computer code, Version 3.0, was conducted to determine if the code is ready for use in hydrological and environmental studies at various Department of Energy sites. This report describes the technical basis, approach, and results of this testing. Verification and validation tests were used to determine the operational status of the FLASH computer code. These tests were specifically designed to test the correctness of the FORTRAN coding, computational accuracy, and suitability for simulating actual hydrologic conditions. The testing was performed using a structured evaluation protocol which consisted of blind testing, independent applications, and graduated difficulty of test cases. Both quantitative and qualitative testing was performed through evaluating relative root mean square values and graphical comparisons of the numerical, analytical, and experimental data. Four verification tests were used to check the computational accuracy and correctness of the FORTRAN coding, and three validation tests were used to check the suitability for simulating actual conditions. These test cases ranged in complexity from simple 1-D saturated flow to 2-D variably saturated problems. The verification tests showed excellent quantitative agreement between the FLASH results and analytical solutions. The validation tests showed good qualitative agreement with the experimental data. Based on the results of this testing, it was concluded that the FLASH code is a versatile and powerful two-dimensional analysis tool for fluid flow. All aspects of the code that were tested, except for the unit gradient bottom boundary condition, were found to be fully operational and ready for use in hydrological and environmental studies.

  4. TRIO-EF: a general thermal hydraulics computer code applied to the AVLIS process

    NASA Astrophysics Data System (ADS)

    Magnaud, Jean P.; Claveau, Michel; Coulon, Nadia; Yala, Philippe; Guilbaud, Daniel; Mejane, Albert

    1993-05-01

    TRIO-EF is a general-purpose fluid mechanics 3D finite element code. The system capabilities cover areas such as steady-state or transient, laminar or turbulent, isothermal or temperature-dependent fluid flows; it is applicable to the study of coupled thermo-fluid problems involving heat conduction and possibly radiative heat transfer. TRIO-EF is developed by the Heat Transfer and Structural Mechanics Department of the French Atomic Energy Commission CEA/DMT. It is widely used for applications in reactor design, safety analysis, and final nuclear waste disposal. More recently, it has been used to study the thermal behavior of the AVLIS process separation module. In this process, a linear electron beam impinges on the free surface of a uranium ingot, generating a two-dimensional curtain emission of vapor. The metal is contained in a water-cooled crucible. The energy transferred to the metal causes its partial melting, forming a pool where strong convective motion increases heat transfer towards the crucible. In the upper part of the separation module, the internal structures are devoted to two main functions: vapor containment and reflux, irradiation and physical separation. They are subjected to very high temperature levels, and heat transfer occurs mainly by radiation. Moreover, special attention has to be paid to electron backscattering. These two major points have been simulated numerically with TRIO-EF, and in this paper we present and discuss the results of such computations for each of them. After a brief overview of the computer code, two examples of the TRIO-EF capabilities are given: a crucible thermal hydraulics model and a thermal analysis of the internal structures.

  5. Computer Developments in Air Conditioning.

    ERIC Educational Resources Information Center

    Pancoast, Ferendino, Grafton and Skeels, Architects, Miami, FL.

    Proceedings of a conference on the present and future uses of computer techniques in the air conditioning field. The recommendation of this report is, for the most part, negative insofar as it applies to the use of computers for design by the small office. However, there should be an awareness of their usefulness in controlling the environmental…

  6. Developing a Working Code of Ethics for Human Resource Personnel.

    ERIC Educational Resources Information Center

    Rampal, Kuldip R.

    1991-01-01

    To develop codes of ethics for their profession, college human resources personnel must first understand their primary job-related responsibilities. These include being alert to evolving organizational needs; coordinating needed training of employees; appreciating the nuances of psychology, communication, and motivation; and observing employee…

  7. The Facial Expression Coding System (FACES): Development, Validation, and Utility

    ERIC Educational Resources Information Center

    Kring, Ann M.; Sloan, Denise M.

    2007-01-01

    This article presents information on the development and validation of the Facial Expression Coding System (FACES; A. M. Kring & D. Sloan, 1991). Grounded in a dimensional model of emotion, FACES provides information on the valence (positive, negative) of facial expressive behavior. In 5 studies, reliability and validity data from 13 diverse…

  8. Development of a CFD Code for Analysis of Fluid Dynamic Forces in Seals

    NASA Technical Reports Server (NTRS)

    Athavale, Mahesh M.; Przekwas, Andrzej J.; Singhal, Ashok K.

    1991-01-01

    The aim is to develop a 3-D computational fluid dynamics (CFD) code for the analysis of fluid flow in cylindrical seals and evaluation of the dynamic forces on the seals. This code is expected to serve as a scientific tool for detailed flow analysis as well as a check for the accuracy of the 2D industrial codes. The features necessary in the CFD code are outlined. The initial focus was to develop or modify and implement new techniques and physical models. These include collocated grid formulation, rotating coordinate frames and moving grid formulation. Other advanced numerical techniques include higher order spatial and temporal differencing and an efficient linear equation solver. These techniques were implemented in a 2D flow solver for initial testing. Several benchmark test cases were computed using the 2D code, and the results of these were compared to analytical solutions or experimental data to check the accuracy. Tests presented here include planar wedge flow, flow due to an enclosed rotor, and flow in a 2D seal with a whirling rotor. Comparisons between numerical and experimental results for an annular seal and a 7-cavity labyrinth seal are also included.

  9. Algorithm and code development for unsteady three-dimensional Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Obayashi, Shigeru

    1994-01-01

    Aeroelastic tests involve substantial cost and risk. An aeroelastic wind-tunnel experiment is an order of magnitude more expensive than a parallel experiment involving only aerodynamics. By complementing the wind-tunnel experiments with numerical simulations, the overall cost of the development of aircraft can be considerably reduced. In order to compute aeroelastic phenomena accurately, it is necessary to solve the unsteady Euler/Navier-Stokes equations simultaneously with the structural equations of motion. These equations accurately describe the flow phenomena for aeroelastic applications. At ARC a code, ENSAERO, is being developed for computing the unsteady aerodynamics and aeroelasticity of aircraft, and it solves the Euler/Navier-Stokes equations. The purpose of this cooperative agreement was to enhance ENSAERO in both algorithmic and geometric capabilities. During the last five years, the algorithms of the code have been enhanced extensively by using high-resolution upwind algorithms and efficient implicit solvers. The zonal capability of the code has been extended from a one-to-one grid interface to a mismatched unsteady zonal interface. The geometric capability of the code has been extended from a single oscillating wing case to a full-span wing-body configuration with oscillating control surfaces. Each time a new capability was added, a proper validation case was simulated, and the capability of the code was demonstrated.

  10. Non-coding RNAs in Mammary Gland Development and Disease.

    PubMed

    Sandhu, Gurveen K; Milevskiy, Michael J G; Wilson, Wesley; Shewan, Annette M; Brown, Melissa A

    2016-01-01

    Non-coding RNAs (ncRNAs) are untranslated RNA molecules that function to regulate the expression of numerous genes and associated biochemical pathways and cellular functions. NcRNAs include small interfering RNAs (siRNAs), microRNAs (miRNAs), PIWI-interacting RNAs (piRNAs), small nucleolar RNAs (snoRNAs) and long non-coding RNAs (lncRNAs). They participate in the regulation of all developmental processes and are frequently aberrantly expressed or functionally defective in disease. This Chapter will focus on the role of ncRNAs, in particular miRNAs and lncRNAs, in mammary gland development and disease. PMID:26659490

  12. Application of the TEMPEST computer code for simulating hydrogen distribution in model containment structures. [PWR; BWR

    SciTech Connect

    Trent, D.S.; Eyler, L.L.

    1982-09-01

    In this study several aspects of simulating hydrogen distribution in geometric configurations relevant to reactor containment structures were investigated using the TEMPEST computer code. Of particular interest was the performance of the TEMPEST turbulence model in a density-stratified environment. Computed results illustrated that the TEMPEST numerical procedures predicted the measured phenomena with good accuracy under a variety of conditions and that the turbulence model used is a viable approach in complex turbulent flow simulation.

  13. VTLOGANL: A Computer Program for Coding and Analyzing Data Gathered on Video Tape.

    ERIC Educational Resources Information Center

    Hecht, Jeffrey B.; And Others

    To code and analyze research data on videotape, a methodology is needed that allows the researcher to code directly and then analyze the degree of intensity of the observed events. The establishment of such a methodology is the next logical step in the development of the use of video-recorded data in research. The Technological…

  14. Users manual for updated computer code for axial-flow compressor conceptual design

    NASA Technical Reports Server (NTRS)

    Glassman, Arthur J.

    1992-01-01

    An existing computer code that determines the flow path for an axial-flow compressor either for a given number of stages or for a given overall pressure ratio was modified for use in air-breathing engine conceptual design studies. This code uses a rapid approximate design methodology that is based on isentropic simple radial equilibrium. Calculations are performed at constant-span-fraction locations from tip to hub. Energy addition per stage is controlled by specifying the maximum allowable values for several aerodynamic design parameters. New modeling was introduced to the code to overcome perceived limitations. Specific changes included variable rather than constant tip radius, flow path inclination added to the continuity equation, input of mass flow rate directly rather than indirectly as inlet axial velocity, solution for the exact value of overall pressure ratio rather than for any value that met or exceeded it, and internal computation of efficiency rather than the use of input values. The modified code was shown to be capable of computing efficiencies that are compatible with those of five multistage compressors and one fan that were tested experimentally. This report serves as a users manual for the revised code, Compressor Spanline Analysis (CSPAN). The modeling modifications, including two internal loss correlations, are presented. Program input and output are described. A sample case for a multistage compressor is included.
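
    The "solution for the exact value of overall pressure ratio" can be pictured as a stage-stacking loop: stages are added at a loading-limited pressure ratio, and the final stage closes the remaining gap exactly. The sketch below uses a single fixed stage limit in place of CSPAN's aerodynamic correlations.

      overall_pr_target = 12.0
      stage_pr_max = 1.45  # stand-in for the loading-limited stage pressure ratio

      pr, stage_prs = 1.0, []
      while pr * stage_pr_max < overall_pr_target:
          stage_prs.append(stage_pr_max)
          pr *= stage_pr_max
      stage_prs.append(overall_pr_target / pr)  # last stage meets the target exactly
      print(len(stage_prs), round(stage_prs[-1], 3))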

  15. Computer code for space-time diagnostics of nuclear safety parameters

    SciTech Connect

    Solovyev, D. A.; Semenov, A. A.; Gruzdov, F. V.; Druzhaev, A. A.; Shchukin, N. V.; Dolgenko, S. G.; Solovyeva, I. V.; Ovchinnikova, E. A.

    2012-07-01

    The computer code ECRAN 3D (Experimental and Calculation Reactor Analysis) is designed for continuous monitoring and diagnostics of reactor cores and databases for the RBMK-1000, on the basis of analytical methods relating the parameters of nuclear safety. The code algorithms are based on the analysis of deviations between measured values and the results of neutron-physical and thermal-hydraulic calculations. A discrepancy between the measured and calculated signals is equivalent to an inadequacy between the behavior of the physical device and that of its simulator. The diagnostics system can solve the following problems: identifying the occurrence and time of inconsistent results, localizing failures, and identifying and quantifying the causes of the inconsistencies. These problems can be solved effectively only when the computer code runs in real time, which raises the performance requirements on the code. Because false operation can lead to significant economic losses, the diagnostics system must be based on certified software tools. POLARIS, version 4.2.1, is used for the neutron-physical calculations in the computer code ECRAN 3D. (authors)
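
    The deviation analysis described above reduces, at its simplest, to a residual check between measured and calculated signals. A minimal Python sketch of that check follows; the function name, arrays, and tolerance are illustrative assumptions, not ECRAN 3D internals.

        # Flag the indices at which measured and calculated signals diverge
        # beyond a relative tolerance (a toy version of deviation analysis).
        import numpy as np

        def flag_inconsistencies(measured, calculated, rel_tol=0.05):
            """Indices where the measured/calculated residual exceeds rel_tol."""
            residual = np.abs(np.asarray(measured) - np.asarray(calculated))
            return np.where(residual > rel_tol * np.abs(measured))[0]

        measured   = np.array([100.0, 101.0, 99.5, 120.0, 100.5])
        calculated = np.array([100.2, 100.8, 99.9, 100.1, 100.4])
        print(flag_inconsistencies(measured, calculated))   # -> [3]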

  16. Development of a fusion fuel cycle systems code

    SciTech Connect

    Brereton, S.J.

    1991-12-31

    The tritium inventory in a D-T fusion experiment, like ITER, may be the major hazard onsite. This tritium is distributed throughout various systems and components. A major thrust of safety work has been aimed at reducing these tritium inventories, or at least at minimizing the amount of tritium that could be mobilized. I have developed models for a time-dependent fuel cycle systems code, which will aid in directing designers towards safer, lower inventory designs. The code will provide a self-consistent picture of system interactions and system interdependencies, and provide a better understanding of how tritium inventories are influenced. A "systems" approach is valuable in that a wide range of parameters can be studied, and more promising regions of parameter space can be identified. Ultimately, designers can use this information to specify a machine with minimum tritium inventory, given various constraints. Here, I discuss the models that describe tritium inventory in various components as a function of system parameters, and the unique capabilities of a code that will implement them. The models are time dependent and reflect a level of detail consistent with a systems type of analysis. The models support both a stand-alone Tritium Systems Code, and a module for the SUPERCODE, a time-dependent tokamak systems code. Through both versions, we should gain a better understanding of the interactions among the various components of the fuel cycle systems.

  17. The COSIMA experiments and their verification, a data base for the validation of two phase flow computer codes

    NASA Astrophysics Data System (ADS)

    Class, G.; Meyder, R.; Stratmanns, E.

    1985-12-01

    The large data base for validation and development of computer codes for two-phase flow, generated at the COSIMA facility, is reviewed. The aim of COSIMA is to simulate the hydraulic, thermal, and mechanical conditions in the subchannel and the cladding of fuel rods in pressurized water reactors during the blowout phase of a loss of coolant accident. In terms of fuel rod behavior, it is found that during blowout under realistic conditions only small strains are reached. For cladding rupture extremely high rod internal pressures are necessary. The behavior of fuel rod simulators and the effect of thermocouples attached to the cladding outer surface are clarified. Calculations performed with the codes RELAP and DRUFAN show satisfactory agreement with experiments. This can be improved by updating the phase separation models in the codes.

  18. Computer codes for the evaluation of thermodynamic properties, transport properties, and equilibrium constants of an 11-species air model

    NASA Technical Reports Server (NTRS)

    Thompson, Richard A.; Lee, Kam-Pui; Gupta, Roop N.

    1990-01-01

    The computer codes developed provide data up to 30,000 K for the thermodynamic and transport properties of individual species and the reaction rates for the prominent reactions occurring in an 11-species nonequilibrium air model. These properties and the reaction-rate data are computed through curve-fit relations which are functions of temperature (and of number density for the equilibrium constants). The curve fits were made using the most accurate data believed available. A detailed review and discussion of the sources and accuracy of the curve-fitted data used herein are given in NASA RP 1232.
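
    To fix ideas, a piecewise polynomial curve fit of this kind can be evaluated as sketched below. The coefficients are placeholders for illustration, not the actual fits documented in NASA RP 1232.

        # Evaluate a piecewise polynomial curve fit of Cp/R versus temperature.
        # (T_low, T_high, coefficients of Cp/R = a0 + a1*T + a2*T^2 + ...)
        CP_FIT_N2 = [
            (300.0, 1000.0,  [3.53, -1.2e-4, 5.0e-8, 0.0]),      # placeholder values
            (1000.0, 6000.0, [2.95,  1.4e-3, -5.0e-7, 6.0e-11]), # placeholder values
        ]

        def cp_over_R(T, fits):
            """Evaluate the curve fit at temperature T (K) in its valid range."""
            for T_low, T_high, coeffs in fits:
                if T_low <= T <= T_high:
                    return sum(a * T**i for i, a in enumerate(coeffs))
            raise ValueError(f"T = {T} K outside tabulated range")

        print(cp_over_R(500.0, CP_FIT_N2))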

  19. Development of three-dimensional hydrodynamical and MHD codes using Adaptive Mesh Refinement scheme with TVD

    NASA Astrophysics Data System (ADS)

    den, M.; Yamashita, K.; Ogawa, T.

    Three-dimensional (3D) hydrodynamical (HD) and magnetohydrodynamical (MHD) simulation codes using an adaptive mesh refinement (AMR) scheme have been developed. This method places fine grids over areas of interest, such as shock waves, to obtain high resolution, and places uniform grids with lower resolution elsewhere. The AMR scheme can thus provide a combination of high solution accuracy and computational robustness. We demonstrate numerical results for a simplified model of shock propagation, which strongly indicate that the AMR technique can resolve disturbances in interplanetary space. We also present simulation results from the MHD code.
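
    The core of such an AMR scheme is the refinement criterion deciding where the fine grids go. A minimal one-dimensional sketch is given below, flagging cells near a steep density jump; it illustrates the general technique only and is not the authors' code.

        # Flag cells for refinement where the relative density jump between
        # neighbors exceeds a threshold (e.g. near a shock front).
        import numpy as np

        def flag_for_refinement(rho, threshold=0.1):
            """Boolean flags marking cells adjacent to a large density jump."""
            jump = np.abs(np.diff(rho)) / np.minimum(rho[:-1], rho[1:])
            flags = np.zeros(rho.size, dtype=bool)
            flags[:-1] |= jump > threshold
            flags[1:]  |= jump > threshold
            return flags

        rho = np.where(np.linspace(0.0, 1.0, 50) < 0.5, 1.0, 4.0)  # idealized shock
        print(np.where(flag_for_refinement(rho))[0])  # only cells at the jump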

  1. New developments of the CARTE thermochemical code: Calculation of detonation properties of high explosives

    NASA Astrophysics Data System (ADS)

    Dubois, Vincent; Desbiens, Nicolas; Auroux, Eric

    2010-07-01

    We present improvements to the CARTE thermochemical code, which provides thermodynamic properties and chemical compositions of CHON systems over a large range of temperatures and pressures at very small computational cost. The detonation products are split into one or two fluid phases, treated with the MCRSR equation of state (EOS), and one condensed carbon phase, modeled with a multiphase EOS that evolves with the chemical composition of the explosive. We have developed a new optimization procedure to obtain an accurate multicomponent EOS. We show here that the results of the CARTE code are in good agreement with specific data for molecular systems and with measured detonation properties for several explosives.

  2. Theoretical atomic physics code development I: CATS: Cowan Atomic Structure Code

    SciTech Connect

    Abdallah, J. Jr.; Clark, R.E.H.; Cowan, R.D.

    1988-12-01

    An adaptation of R.D. Cowan's Atomic Structure program, CATS, has been developed as part of the Theoretical Atomic Physics (TAPS) code development effort at Los Alamos. CATS has been designed to be easy to run and to produce data files that can easily interface with other programs. The data files produced by CATS currently include wave functions, energy levels, oscillator strengths, plane-wave-Born electron-ion collision strengths, photoionization cross sections, and a variety of other quantities. This paper describes the use of CATS. 10 refs.

  3. Developing a Multi-Dimensional Hydrodynamics Code with Astrochemical Reactions

    NASA Astrophysics Data System (ADS)

    Kwak, Kyujin; Yang, Seungwon

    2015-08-01

    The Atacama Large Millimeter/submillimeter Array (ALMA) has revealed high-resolution molecular lines, some of which remain unidentified. Because the formation of these astrochemical molecules has seldom been studied in traditional chemistry, observations of new molecular lines have drawn considerable attention from chemists, both experimental and theoretical, as well as from astronomers. Theoretical calculations of the formation of these molecules have provided reaction rates for some important species, and some theoretical predictions have been confirmed by laboratory measurements. The reaction rates for astronomically important molecules are now collected in databases, some of which are publicly available. By utilizing these databases, we develop a multi-dimensional hydrodynamics code that includes the reaction rates of astrochemical molecules. Because this type of hydrodynamics code can trace molecular formation in a non-equilibrium fashion, it is useful for studying the formation history of these molecules, which affects the spatial distribution of some specific species. We present the development procedure of this code and some test problems used to verify and validate it.
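
    The coupling described above amounts to advancing a chemical reaction network, with database rate coefficients, in each hydrodynamic cell every time step. The Python sketch below integrates a two-species toy network; the species, rate values, and function names are placeholders, not the authors' network.

        # Advance a toy H/H2 reaction network with placeholder rate coefficients,
        # as one would do per hydrodynamic cell during a chemistry substep.
        import numpy as np
        from scipy.integrate import solve_ivp

        K_FORM = 1.0e-17   # cm^3 s^-1, placeholder two-body formation rate
        K_DEST = 1.0e-11   # s^-1, placeholder destruction rate

        def rhs(t, y):
            n_H, n_H2 = y
            form = K_FORM * n_H * n_H      # H + H -> H2 (toy)
            dest = K_DEST * n_H2           # H2 destruction (toy)
            return [-2.0 * form + 2.0 * dest, form - dest]

        sol = solve_ivp(rhs, (0.0, 1.0e10), [1.0e4, 0.0], method="LSODA")
        print(sol.y[:, -1])                # abundances after the substep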

  4. The Automatic Parallelisation of Scientific Application Codes Using a Computer Aided Parallelisation Toolkit

    NASA Technical Reports Server (NTRS)

    Ierotheou, C.; Johnson, S.; Leggett, P.; Cross, M.; Evans, E.; Jin, Hao-Qiang; Frumkin, M.; Yan, J.; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The shared-memory programming model is a very effective way to achieve parallelism on shared memory parallel computers. Historically, the lack of a programming standard for using directives and the rather limited performance due to scalability have affected the take-up of this programming model approach. Significant progress has been made in hardware and software technologies, as a result the performance of parallel programs with compiler directives has also made improvements. The introduction of an industrial standard for shared-memory programming with directives, OpenMP, has also addressed the issue of portability. In this study, we have extended the computer aided parallelization toolkit (developed at the University of Greenwich), to automatically generate OpenMP based parallel programs with nominal user assistance. We outline the way in which loop types are categorized and how efficient OpenMP directives can be defined and placed using the in-depth interprocedural analysis that is carried out by the toolkit. We also discuss the application of the toolkit on the NAS Parallel Benchmarks and a number of real-world application codes. This work not only demonstrates the great potential of using the toolkit to quickly parallelize serial programs but also the good performance achievable on up to 300 processors for hybrid message passing and directive-based parallelizations.

  5. Enhancement of the CAVE computer code. [aerodynamic heating package for nose cones and scramjet engine sidewalls]

    NASA Technical Reports Server (NTRS)

    Rathjen, K. A.; Burk, H. O.

    1983-01-01

    The computer code CAVE (Conduction Analysis via Eigenvalues) is a convenient and efficient tool for predicting two-dimensional temperature histories within thermal protection systems for hypersonic vehicles. The capabilities of CAVE were enhanced by incorporating the following features into the code: real-gas effects in the aerodynamic heating predictions; a geometry and aerodynamic heating package for analyses of cone-shaped bodies; an input option to change from laminar to turbulent heating predictions on leading edges; a modification to account for the reduction in adiabatic wall temperature with increasing leading-edge sweep; a geometry package for two-dimensional scramjet engine sidewalls, with an option for heat transfer to external and internal surfaces; printout modifications to provide tables of selected temperatures for plotting and storage; and modifications to the radiation calculation procedure to eliminate temperature oscillations induced by high heating rates. These new features are described.
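
    The code's name points at the underlying method: semi-discrete conduction equations of the form dT/dt = A T can be advanced analytically through the eigendecomposition of the conduction matrix A. The sketch below applies that generic idea to an assumed 3-node rod; it is not the CAVE implementation.

        # Exact solution of dT/dt = A T via the eigen-expansion of A,
        # for a 3-node rod with fixed (zero) end temperatures.
        import numpy as np

        alpha, dx = 1.0e-5, 0.01
        A = (alpha / dx**2) * np.array([[-2.0,  1.0,  0.0],
                                        [ 1.0, -2.0,  1.0],
                                        [ 0.0,  1.0, -2.0]])
        T0 = np.array([100.0, 200.0, 100.0])    # initial temperatures

        lam, V = np.linalg.eig(A)               # eigenvalues/eigenvectors of A
        c = np.linalg.solve(V, T0)              # expansion coefficients

        def temperature(t):
            """T(t) = sum_j c_j * exp(lam_j * t) * v_j."""
            return (V * np.exp(lam * t)) @ c

        print(temperature(60.0))                # temperatures after 60 s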

  6. Users' Manual for Computer Code SPIRALI Incompressible, Turbulent Spiral Grooved Cylindrical and Face Seals

    NASA Technical Reports Server (NTRS)

    Walowit, Jed A.; Shapiro, Wilbur

    2005-01-01

    The SPIRALI code predicts the performance characteristics of incompressible cylindrical and face seals with or without the inclusion of spiral grooves. Performance characteristics include load capacity (for face seals), leakage flow, power requirements and dynamic characteristics in the form of stiffness, damping and apparent mass coefficients in 4 degrees of freedom for cylindrical seals and 3 degrees of freedom for face seals. These performance characteristics are computed as functions of seal and groove geometry, load or film thickness, running and disturbance speeds, fluid viscosity, and boundary pressures. A derivation of the equations governing the performance of turbulent, incompressible, spiral groove cylindrical and face seals along with a description of their solution is given. The computer codes are described, including an input description, sample cases, and comparisons with results of other codes.

  7. Modeling of BWR core meltdown accidents - for application in the MELRPI.MOD2 computer code

    SciTech Connect

    Koh, B.R.; Kim, S.H.; Taleyarkhan, R.P.; Podowski, M.Z.; Lahey, R.T., Jr.

    1985-04-01

    This report summarizes improvements and modifications made in the MELRPI computer code. A major difference between this new, updated version of the code, called MELRPI.MOD2, and the one reported previously, concerns the inclusion of a model for the BWR emergency core cooling systems (ECCS). This model and its computer implementation, the ECCRPI subroutine, account for various emergency injection modes, for both intact and rubblized geometries. Other changes to MELRPI deal with an improved model for canister wall oxidation, rubble bed modeling, and numerical integration of system equations. A complete documentation of the entire MELRPI.MOD2 code is also given, including an input guide, list of subroutines, sample input/output and program listing.

  8. Modern Teaching Methods in Physics with the Aid of Original Computer Codes and Graphical Representations

    ERIC Educational Resources Information Center

    Ivanov, Anisoara; Neacsu, Andrei

    2011-01-01

    This study describes the possibility and advantages of utilizing simple computer codes to complement the teaching techniques for high school physics. The authors have begun working on a collection of open-source programs which allow students to compare the results and graphics from classroom exercises with the correct solutions and furthermore to…

  9. A New Package of Computer Codes for Analyzing Light Curves of Eclipsing Pre-Cataclysmic Binaries

    NASA Astrophysics Data System (ADS)

    Pustynski, V.-V.; Pustylnik, I. B.

    2005-04-01

    Using the new package of computer codes for analyzing light curves of the two eclipsing pre-cataclysmic binary systems (PCBs) UU Sge and V471 Lyr we find updated values of the physical parameters and discuss the evolutionary state of these PCBs.

  10. Automated computer software development standards enforcement

    SciTech Connect

    Yule, H.P.; Formento, J.W.

    1991-01-01

    The Uniform Development Environment (UDE) is being investigated as a means of enforcing software engineering standards. For the programmer, it provides an environment containing the tools and utilities necessary for orderly and controlled development and maintenance of code according to requirements. In addition, it provides DoD management and developer management with the tools needed for all phases of software life cycle management and control, from project planning and management, to code development, configuration management, version control, and change control. This paper reports the status of UDE development and field testing. 5 refs.

  11. Computational Code to Determine the Optical Constants of Materials with Astrophysical Importance

    NASA Astrophysics Data System (ADS)

    Robson Rocha, Will; Pilling, Sergio

    Several environments in the interstellar medium (ISM) are composed of dust grains (e.g. silicates) that can be covered by astrophysical ices (frozen molecular species). The presence of these materials inside dense and cold regions of space, such as molecular clouds and circumstellar disks around young stars, has been established by space telescopes (e.g. Herschel, Spitzer, ISO) using infrared spectroscopy. In such environments, molecules such as H2O, CO, CO2, NH3 and CH3OH, among others, may exist in the solid phase and constitute what we call interstellar ices. In this work we present a code called NKABS (acronym for "N and K determination from ABSorbance data") to calculate the optical constants of materials with astrophysical importance directly from absorbance data in the infrared. It is a free code, developed in the Python programming language, and available for the Windows operating system. The parameters obtained using the NKABS code are essential for studies involving computational modeling of star-forming regions in the infrared. The experimental data were obtained using a high-vacuum portable chamber at the Laboratorio de Astroquímica e Astrobiologia (LASA/UNIVAP). The samples used to calculate the optical constants presented here were obtained from the condensation of pure gases (e.g. CO, CO2, NH3, SO2), from the sublimation in vacuum of pure liquids (e.g. water, acetone, acetonitrile, acetic acid, formic acid, ethanol and methanol) and from mixtures of different species (e.g. H2O:CO2, H2O:CO:NH3, H2O:CO2:NH3:CH4). Additionally, films of solid biomolecule samples of astrochemical/astrobiological interest (e.g. glycine, adenine) were probed. The NKABS code can also calculate the optical constants of materials processed by radiation, a scenario very common in star-forming regions. The authors would like to thank the agencies FAPESP (JP#2009/18304-0 and PHD#2013/07657-5), FVE
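
    The first step of such a calculation is standard: the imaginary part k of the refractive index follows from the absorbance A of a film of thickness d via the Lambert-Beer relation alpha = ln(10)*A/d and k = alpha/(4*pi*nu), with the wavenumber nu in cm^-1; the real part n then follows from a Kramers-Kronig transform. The sketch below shows only the k step, with invented numbers; it is not the NKABS source.

        # Imaginary refractive index k from base-10 absorbance data.
        import numpy as np

        def k_from_absorbance(wavenumber_cm, absorbance, thickness_cm):
            """k = alpha / (4 * pi * nu), with alpha = ln(10) * A / d."""
            alpha = np.log(10.0) * absorbance / thickness_cm     # cm^-1
            return alpha / (4.0 * np.pi * wavenumber_cm)

        nu = np.array([2140.0, 2145.0, 2150.0])   # cm^-1, near the CO stretch band
        A  = np.array([0.05, 0.30, 0.04])         # illustrative absorbance values
        print(k_from_absorbance(nu, A, thickness_cm=1.0e-4))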

  12. An examination of Sandia's phenomenological computer codes and the use of intelligent searching in risk assessments

    SciTech Connect

    Benjamin, A.S.

    1996-07-01

    Because many of the phenomenologically based codes used to support risk assessments require long execution times, it is important to have a rationally based means for optimizing the choice of parameter values that are input to the code calculations. For this reason, we have developed a method for intelligently searching the space of parameter values to deduce, with as few computations as possible, the values that are most likely to lead to high risk. We have applied the method to a problem involving electrical initiation of an explosive due to the response of the system to fires. We have shown that our method can locate potential risk vulnerabilities with far fewer time-consuming physical response computations than would be necessary using standard sampling approaches.
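
    The flavor of such an intelligent search can be conveyed in a few lines: spend a small budget of expensive code runs by sampling coarsely, then concentrating new samples around the highest-risk point found so far. The sketch below is a generic illustration of that idea, not Sandia's method; the risk model is a stand-in function.

        # Adaptive search: focus a limited evaluation budget near the current
        # highest-risk point instead of blanket Monte Carlo sampling.
        import numpy as np

        rng = np.random.default_rng(0)

        def expensive_risk_model(x):
            """Stand-in for a long-running physics code; peak risk near x = 0.7."""
            return np.exp(-50.0 * (x - 0.7) ** 2)

        def adaptive_search(budget=30, n_init=10, shrink=0.5):
            xs = list(rng.uniform(0.0, 1.0, n_init))        # coarse global sample
            ys = [expensive_risk_model(x) for x in xs]
            width = 0.25
            for _ in range(budget - n_init):
                best = xs[int(np.argmax(ys))]
                x = float(np.clip(rng.normal(best, width), 0.0, 1.0))
                xs.append(x)
                ys.append(expensive_risk_model(x))
                width *= shrink ** (1.0 / (budget - n_init))  # slowly focus
            return xs[int(np.argmax(ys))], max(ys)

        print(adaptive_search())   # converges near the x = 0.7 risk peak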

  13. Recommendations for computer modeling codes to support the UMTRA groundwater restoration project

    SciTech Connect

    Tucker, M.D.; Khan, M.A.

    1996-04-01

    The Uranium Mill Tailings Remedial Action (UMTRA) Project is responsible for the assessment and remedial action at the 24 former uranium mill tailings sites located in the US. The surface restoration phase, which includes containment and stabilization of the abandoned uranium mill tailings piles, has a specific termination date and is nearing completion. Therefore, attention has now turned to the groundwater restoration phase, which began in 1991. Regulated constituents in groundwater whose concentrations or activities exceed maximum contaminant levels (MCLs) or background levels at one or more sites include, but are not limited to, uranium, selenium, arsenic, molybdenum, nitrate, gross alpha, radium-226 and radium-228. The purpose of this report is to recommend computer codes that can be used to assist the UMTRA groundwater restoration effort. The report includes a survey of applicable codes in each of the following areas: (1) groundwater flow and contaminant transport modeling codes, (2) hydrogeochemical modeling codes, (3) pump and treat optimization codes, and (4) decision support tools. Following the survey of the applicable codes, specific codes that can best meet the needs of the UMTRA groundwater restoration program in each of the four areas are recommended.

  14. PROTEUS two-dimensional Navier-Stokes computer code, version 1.0. Volume 2: User's guide

    NASA Technical Reports Server (NTRS)

    Towne, Charles E.; Schwab, John R.; Benson, Thomas J.; Suresh, Ambady

    1990-01-01

    A new computer code was developed to solve the two-dimensional or axisymmetric, Reynolds averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The thin-layer or Euler equations may also be solved. Turbulence is modeled using an algebraic eddy viscosity model. The objective was to develop a code for aerospace applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The equations are written in nonorthogonal body-fitted coordinates, and solved by marching in time using a fully-coupled alternating direction-implicit procedure with generalized first- or second-order time differencing. All terms are linearized using second-order Taylor series. The boundary conditions are treated implicitly, and may be steady, unsteady, or spatially periodic. Simple Cartesian or polar grids may be generated internally by the program. More complex geometries require an externally generated computational coordinate system. The documentation is divided into three volumes. Volume 2 is the User's Guide, and describes the program's general features, the input and output, the procedure for setting up initial conditions, the computer resource requirements, the diagnostic messages that may be generated, the job control language used to run the program, and several test cases.
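
    PROTEUS itself solves the coupled Navier-Stokes equations; as a far simpler stand-in, the sketch below applies the same alternating-direction-implicit idea to the two-dimensional heat equation, where each half step is implicit in one coordinate direction only and therefore requires just tridiagonal solves. All details are illustrative.

        # Peaceman-Rachford ADI for u_t = alpha * (u_xx + u_yy) on a unit square
        # with zero Dirichlet boundaries; each half step solves tridiagonal systems.
        import numpy as np
        from scipy.linalg import solve_banded

        n = 32                                    # interior nodes per direction
        alpha, dx = 1.0, 1.0 / (n + 1)
        dt = 0.25 * dx * dx / alpha
        r = alpha * dt / (2.0 * dx * dx)

        # Banded form of the tridiagonal operator (I - r * delta^2).
        ab = np.zeros((3, n))
        ab[0, 1:] = -r                            # superdiagonal
        ab[1, :] = 1.0 + 2.0 * r                  # diagonal
        ab[2, :-1] = -r                           # subdiagonal

        def d2(u, axis):
            """Second difference with zero Dirichlet boundaries."""
            up = np.pad(u, 1)
            if axis == 0:
                return up[2:, 1:-1] - 2.0 * u + up[:-2, 1:-1]
            return up[1:-1, 2:] - 2.0 * u + up[1:-1, :-2]

        def adi_step(u):
            rhs = u + r * d2(u, axis=1)           # explicit in y ...
            u_star = np.empty_like(u)
            for j in range(n):                    # ... implicit in x
                u_star[:, j] = solve_banded((1, 1), ab, rhs[:, j])
            rhs = u_star + r * d2(u_star, axis=0) # explicit in x ...
            u_new = np.empty_like(u)
            for i in range(n):                    # ... implicit in y
                u_new[i, :] = solve_banded((1, 1), ab, rhs[i, :])
            return u_new

        u = np.zeros((n, n))
        u[n // 2, n // 2] = 1.0                   # initial hot spot
        for _ in range(100):
            u = adi_step(u)
        print(u.max())                            # peak decays as heat diffuses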

  15. New Paradigms for Developing Peta-scalable Codes Workshop - May 3-4, 2004

    SciTech Connect

    Michael Levine

    2005-04-30

    On May 3 & 4, 2004, sixty-two of North America's finest computational scientists gathered in Pittsburgh, Pennsylvania to discuss the future of high-performance computing. Sponsored by the National Science Foundation, the Department of Energy, the Department of Defense and the Hewlett-Packard Corporation, New Methods for Developing Peta-scalable Codes introduced the tools and techniques that will be required to efficiently exploit the next generation of supercomputers. This workshop provided an opportunity for computational scientists to consider parallel programming methods other than the currently prevalent one in which they explicitly and directly manage all parallelism via MPI. Specifically, the question is how best to program the upcoming generation of computer systems that will use massive parallelism and complex memory hierarchies to reach from the terascale into the petascale regime over the next five years. The presentations, by leading computer scientists, focused on languages, runtimes and libraries, tool collections and I/O methods.

  16. Software Development Processes Applied to Computational Icing Simulation

    NASA Technical Reports Server (NTRS)

    Levinson, Laurie H.; Potapczuk, Mark G.; Mellor, Pamela A.

    1999-01-01

    The development of computational icing simulation methods is making the transition from research to commonplace use in design and certification efforts. As such, standards of code management, design validation, and documentation must be adjusted to accommodate the increased expectations of the user community with respect to accuracy, reliability, capability, and usability. This paper discusses these concepts with regard to current and future icing simulation code development efforts as implemented by the Icing Branch of the NASA Lewis Research Center in collaboration with the NASA Lewis Engineering Design and Analysis Division. With the application of the techniques outlined in this paper, the LEWICE ice accretion code has become a more stable and reliable software product.

  17. Nuclear Energy Advanced Modeling and Simulation (NEAMS) waste Integrated Performance and Safety Codes (IPSC) : gap analysis for high fidelity and performance assessment code development.

    SciTech Connect

    Lee, Joon H.; Siegel, Malcolm Dean; Arguello, Jose Guadalupe, Jr.; Webb, Stephen Walter; Dewers, Thomas A.; Mariner, Paul E.; Edwards, Harold Carter; Fuller, Timothy J.; Freeze, Geoffrey A.; Jove-Colon, Carlos F.; Wang, Yifeng

    2011-03-01

    This report describes a gap analysis performed in the process of developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with rigorous verification, validation, and software quality requirements. The gap analyses documented in this report were performed during an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC, and during follow-on activities that delved into more detailed assessments of the various codes that were acquired, studied, and tested. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. The gap analysis indicates that significant capabilities may already exist in existing THC codes, although there is no single code able to fully account for all physical and chemical processes involved in a waste disposal system. Large gaps exist in modeling chemical processes and their couplings with other processes. The coupling of chemical processes with flow transport and mechanical deformation remains challenging. The data for extreme environments (e.g., for elevated temperature and high ionic strength media) that are

  18. Recent Developments in the CONRAD Code regarding Experimental Corrections

    NASA Astrophysics Data System (ADS)

    Archier, P.; De Saint Jean, C.; Kopecky, S.; Litaize, O.; Noguère, G.; Schillebeeckx, P.; Volev, K.

    2013-03-01

    The CONRAD code is an object-oriented software tool developed at CEA Cadarache since 2005 to deal with problems arising during the evaluation process (data assimilation and analysis, physical modelling, propagation of uncertainties…). This paper presents recent developments concerning the experimental corrections that are required when a neutron resonance shape analysis is performed. Several experimental aspects are detailed in this work: the possibility of using spectra in energy as well as in time; the implementation of both analytical (chi-square) and Monte Carlo resolution functions; and sample homogeneity corrections using log-normal distributions. Each development is illustrated with several examples and with comparisons to other resonance analysis codes (SAMMY, REFIT).
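
    One experimental correction of the kind listed is the broadening of a theoretical spectrum by the measurement resolution. The sketch below convolves a toy resonance dip with a Gaussian resolution function; CONRAD's analytical and Monte Carlo resolution treatments are, of course, more elaborate, and every number here is invented.

        # Broaden a toy transmission dip with a unit-area Gaussian resolution
        # function; only the dip is convolved, so zero padding is harmless.
        import numpy as np

        t = np.linspace(0.0, 100.0, 2001)                   # time of flight (a.u.)
        dip = 0.8 * np.exp(-0.5 * ((t - 50.0) / 0.5) ** 2)  # toy resonance dip
        transmission = 1.0 - dip

        dt = t[1] - t[0]
        sigma = 1.5                                         # resolution width
        kernel_t = np.arange(-5.0 * sigma, 5.0 * sigma + dt, dt)
        kernel = np.exp(-0.5 * (kernel_t / sigma) ** 2)
        kernel /= kernel.sum()                              # unit-area kernel

        broadened = 1.0 - np.convolve(dip, kernel, mode="same")
        print(transmission.min(), broadened.min())          # dip is shallower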

  19. Development of a multi-grid FDTD code for three-dimensional simulation of large microwave sintering experiments

    SciTech Connect

    White, M.J.; Iskander, M.F.; Kimrey, H.D.

    1996-12-31

    The Finite-Difference Time-Domain (FDTD) code available at the University of Utah has been used to simulate the sintering of ceramics in single-mode and multimode cavities, and many useful results have been reported in the literature. More detailed and accurate results, specifically around and including the ceramic sample, are often desired to help evaluate the adequacy of the heating procedure. In electrically large multimode cavities, however, computer memory requirements limit the number of mathematical cells, and the desired resolution is impractical to achieve with limited computer resources. An FDTD algorithm that incorporates multiple grid regions with variable grid sizes is therefore required to perform the desired simulations adequately. In this paper the authors describe the development of a three-dimensional multi-grid FDTD code that focuses a large number of cells around the region of interest. Test geometries were solved using both a uniform grid and the developed multi-grid code to help validate the results of the new code. Results from these comparisons, as well as comparisons between the developed FDTD code and other available variable-grid codes, are presented. In addition, simulations of realistic microwave sintering experiments showed improved resolution at critical sites inside the three-dimensional sintering cavity. With the code validated, simulations were performed for electrically large, multimode microwave sintering cavities to fully demonstrate the advantages of the developed multi-grid FDTD code.
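
    For orientation, the uniform-grid scheme that the multi-grid development extends is the standard Yee update; a one-dimensional version in normalized units is sketched below. The subgridding logic that exchanges fields between coarse and fine grid regions is the difficult part and is not reproduced here.

        # One-dimensional Yee FDTD leapfrog update in normalized units, with
        # PEC ends and a soft Gaussian source at the grid center.
        import numpy as np

        n_cells, n_steps = 200, 400
        c, dx = 3.0e8, 1.0e-3
        dt = 0.99 * dx / c                    # Courant-stable time step
        coef = c * dt / dx

        ez = np.zeros(n_cells)                # electric field at integer nodes
        hy = np.zeros(n_cells - 1)            # magnetic field at half nodes

        for step in range(n_steps):
            hy += coef * np.diff(ez)          # update H from the curl of E
            ez[1:-1] += coef * np.diff(hy)    # update E from the curl of H
            ez[100] += np.exp(-0.5 * ((step - 30) / 10.0) ** 2)  # soft source
        print(np.abs(ez).max())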

  20. Assessment of three-dimensional inviscid codes and loss calculations for turbine aerodynamic computations

    NASA Technical Reports Server (NTRS)

    Povinelli, L. A.

    1984-01-01

    An assessment of several three-dimensional inviscid turbine aerodynamic computer codes and loss models used at the NASA Lewis Research Center is presented. Five flow situations, for which both experimental data and computational results are available, are examined. The five flows form a basis for the evaluation of the computational procedures. It was concluded that stator flows may be calculated with a high degree of accuracy, whereas rotor flow fields are less accurately determined. Exploitation of contouring, leaning, bowing, and sweeping will require a three-dimensional viscous analysis technique.