Science.gov

Sample records for computer code physical

  1. An integrated radiation physics computer code system.

    NASA Technical Reports Server (NTRS)

    Steyn, J. J.; Harris, D. W.

    1972-01-01

    An integrated computer code system for the semi-automatic and rapid analysis of experimental and analytic problems in gamma photon and fast neutron radiation physics is presented. Such problems as the design of optimum radiation shields and radioisotope power source configurations may be studied. The system codes allow for the unfolding of complex neutron and gamma photon experimental spectra. Monte Carlo and analytic techniques are used for the theoretical prediction of radiation transport. The system includes a multichannel pulse-height analyzer scintillation and semiconductor spectrometer coupled to an on-line digital computer with appropriate peripheral equipment. The system is geometry generalized as well as self-contained with respect to material nuclear cross sections and the determination of the spectrometer response functions. Input data may be either analytic or experimental.
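
    The report itself includes no source listing. As a minimal illustration of the Monte Carlo transport technique the abstract describes, the following sketch (with illustrative cross-section values, not data from the report) estimates the fraction of photons transmitted through a one-dimensional slab shield:

      import math
      import random

      def transmitted_fraction(sigma_t, absorb_prob, thickness_cm, histories=100_000):
          """Fraction of normally incident photons that cross a 1-D slab.

          sigma_t: total macroscopic cross section (1/cm); absorb_prob: probability
          that a collision absorbs the photon. Both values are illustrative.
          """
          transmitted = 0
          for _ in range(histories):
              x, mu = 0.0, 1.0  # depth (cm) and direction cosine
              while True:
                  # Sample the distance to the next collision from exp(-sigma_t * s)
                  x += mu * (-math.log(1.0 - random.random()) / sigma_t)
                  if x >= thickness_cm:
                      transmitted += 1
                      break
                  if x < 0.0:                        # backscattered out of the slab
                      break
                  if random.random() < absorb_prob:  # collision: absorbed
                      break
                  mu = random.uniform(-1.0, 1.0)     # collision: isotropic scatter
          return transmitted / histories

      print(transmitted_fraction(sigma_t=0.5, absorb_prob=0.3, thickness_cm=4.0))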

  2. Theoretical Atomic Physics code development IV: LINES, A code for computing atomic line spectra

    SciTech Connect

    Abdallah, J. Jr.; Clark, R.E.H.

    1988-12-01

    A new computer program, LINES, has been developed for simulating atomic line emission and absorption spectra using the accurate fine structure energy levels and transition strengths calculated by the (CATS) Cowan Atomic Structure code. Population distributions for the ion stages are obtained in LINES by using the Local Thermodynamic Equilibrium (LTE) model. LINES is also useful for displaying the pertinent atomic data generated by CATS. This report describes the use of LINES. Both CATS and LINES are part of the Theoretical Atomic PhysicS (TAPS) code development effort at Los Alamos. 11 refs., 9 figs., 1 tab.
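
    The abstract states that LINES obtains population distributions from the LTE model. Within a single ion stage, LTE level populations follow a Boltzmann distribution (the stage-to-stage balance uses the Saha equation, omitted here); a minimal sketch with hypothetical level data:

      import numpy as np

      def lte_level_populations(energies_ev, degeneracies, kT_ev):
          """Fractional populations n_i/n = g_i exp(-E_i/kT) / Z for one ion stage."""
          g = np.asarray(degeneracies, dtype=float)
          e = np.asarray(energies_ev, dtype=float)
          w = g * np.exp(-e / kT_ev)      # Boltzmann weights
          return w / w.sum()              # normalize by the partition function Z

      # Hypothetical three-level ion at kT = 2 eV
      print(lte_level_populations([0.0, 1.5, 3.1], [2, 4, 6], kT_ev=2.0))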

  3. Modern Teaching Methods in Physics with the Aid of Original Computer Codes and Graphical Representations

    ERIC Educational Resources Information Center

    Ivanov, Anisoara; Neacsu, Andrei

    2011-01-01

    This study describes the possibility and advantages of utilizing simple computer codes to complement the teaching techniques for high school physics. The authors have begun working on a collection of open source programs which allow students to compare the results and graphics from classroom exercises with the correct solutions and furthermore to…

  4. Computation of Thermodynamic Equilibria Pertinent to Nuclear Materials in Multi-Physics Codes

    NASA Astrophysics Data System (ADS)

    Piro, Markus Hans Alexander

    Nuclear energy plays a vital role in supporting electrical needs and fulfilling commitments to reduce greenhouse gas emissions. Research is a continuing necessity to improve the predictive capabilities of fuel behaviour in order to reduce costs and to meet increasingly stringent safety requirements by the regulator. Moreover, a renewed interest in nuclear energy has given rise to a "nuclear renaissance" and the necessity to design the next generation of reactors. In support of this goal, significant research efforts have been dedicated to the advancement of numerical modelling and computational tools in simulating various physical and chemical phenomena associated with nuclear fuel behaviour. This undertaking in effect is collecting the experience and observations of a past generation of nuclear engineers and scientists in a meaningful way for future design purposes. There is an increasing desire to integrate thermodynamic computations directly into multi-physics nuclear fuel performance and safety codes. A new equilibrium thermodynamic solver is being developed with this matter as a primary objective. This solver is intended to provide thermodynamic material properties and boundary conditions for continuum transport calculations. There are several concerns with the use of existing commercial thermodynamic codes: computational performance; limited capabilities in handling large multi-component systems of interest to the nuclear industry; convenient incorporation into other codes with quality assurance considerations; and, licensing entanglements associated with code distribution. The development of this software in this research is aimed at addressing all of these concerns. The approach taken in this work exploits fundamental principles of equilibrium thermodynamics to simplify the numerical optimization equations. In brief, the chemical potentials of all species and phases in the system are constrained by estimates of the chemical potentials of the system

  5. Computational Physics.

    ERIC Educational Resources Information Center

    Borcherds, P. H.

    1986-01-01

    Describes an optional course in "computational physics" offered at the University of Birmingham. Includes an introduction to numerical methods and presents exercises involving fast-Fourier transforms, non-linear least-squares, Monte Carlo methods, and the three-body problem. Recommends adding laboratory work into the course in the future. (TW)

  6. REEDS computer code

    NASA Technical Reports Server (NTRS)

    Bjork, C.

    1981-01-01

    The REEDS (rocket exhaust effluent diffusion single layer) computer code is used for the estimation of certain rocket exhaust effluent concentrations and dosages and their distributions near the Earth's surface following a rocket launch event. Output from REEDS is used in producing near real time air quality and environmental assessments of the effects of certain potentially harmful effluents, namely HCl, Al2O3, CO, and NO.
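
    REEDS' actual dispersion model is not reproduced in the abstract. As a sketch of the kind of single-layer Gaussian plume estimate such codes evaluate, assuming total ground reflection and entirely illustrative parameter values:

      import math

      def ground_concentration(q, u, sigma_y, sigma_z, y, h):
          """Gaussian plume concentration at ground level (z = 0), total reflection.

          q: emission rate (g/s), u: wind speed (m/s), h: effective source height (m),
          sigma_y/sigma_z: dispersion parameters (m), which in practice depend on
          downwind distance and atmospheric stability class.
          """
          return (q / (math.pi * u * sigma_y * sigma_z)
                  * math.exp(-y**2 / (2.0 * sigma_y**2))
                  * math.exp(-h**2 / (2.0 * sigma_z**2)))

      # Illustrative numbers only
      print(ground_concentration(q=100.0, u=5.0, sigma_y=60.0, sigma_z=30.0, y=0.0, h=50.0))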

  7. MELCOR computer code manuals

    SciTech Connect

    Summers, R.M.; Cole, R.K. Jr.; Smith, R.C.; Stuart, D.S.; Thompson, S.L.; Hodge, S.A.; Hyman, C.R.; Sanders, R.L.

    1995-03-01

    MELCOR is a fully integrated, engineering-level computer code that models the progression of severe accidents in light water reactor nuclear power plants. MELCOR is being developed at Sandia National Laboratories for the U.S. Nuclear Regulatory Commission as a second-generation plant risk assessment tool and the successor to the Source Term Code Package. A broad spectrum of severe accident phenomena in both boiling and pressurized water reactors is treated in MELCOR in a unified framework. These include: thermal-hydraulic response in the reactor coolant system, reactor cavity, containment, and confinement buildings; core heatup, degradation, and relocation; core-concrete attack; hydrogen production, transport, and combustion; fission product release and transport; and the impact of engineered safety features on thermal-hydraulic and radionuclide behavior. Current uses of MELCOR include estimation of severe accident source terms and their sensitivities and uncertainties in a variety of applications. This publication of the MELCOR computer code manuals corresponds to MELCOR 1.8.3, released to users in August, 1994. Volume 1 contains a primer that describes MELCOR's phenomenological scope, organization (by package), and documentation. The remainder of Volume 1 contains the MELCOR Users Guides, which provide the input instructions and guidelines for each package. Volume 2 contains the MELCOR Reference Manuals, which describe the phenomenological models that have been implemented in each package.

  8. Quasi-optical converters for high-power gyrotrons: a brief review of physical models, numerical methods and computer codes

    NASA Astrophysics Data System (ADS)

    Sabchevski, S.; Zhelyazkov, I.; Benova, E.; Atanassov, V.; Dankov, P.; Thumm, M.; Arnold, A.; Jin, J.; Rzesnicki, T.

    2006-07-01

    Quasi-optical (QO) mode converters are used to transform electromagnetic waves of complex structure and polarization generated in gyrotron cavities into a linearly polarized, Gaussian-like beam suitable for transmission. The efficiency of this conversion as well as the maintenance of low level of diffraction losses are crucial for the implementation of powerful gyrotrons as radiation sources for electron-cyclotron-resonance heating of fusion plasmas. The use of adequate physical models, efficient numerical schemes and up-to-date computer codes may provide the high accuracy necessary for the design and analysis of these devices. In this review, we briefly sketch the most commonly used QO converters, the mathematical base they have been treated on and the basic features of the numerical schemes used. Further on, we discuss the applicability of several commercially available and free software packages, their advantages and drawbacks, for solving QO related problems.

  9. Reactivity effects in VVER-1000 of the third unit of the Kalinin nuclear power plant at physical start-up. Computations in ShIPR intellectual code system with library of two-group cross sections generated by UNK code

    SciTech Connect

    Zizin, M. N.; Zimin, V. G.; Zizina, S. N.; Kryakvin, L. V.; Pitilimov, V. A.; Tereshonok, V. A.

    2010-12-15

    The ShIPR intellectual code system for mathematical simulation of nuclear reactors includes a set of computing modules implementing the preparation of macro cross sections on the basis of the two-group library of neutron-physics cross sections obtained for the SKETCH-N nodal code. This library is created by using the UNK code for 3D diffusion computation of first VVER-1000 fuel loadings. Computation of neutron fields in the ShIPR system is performed using the DP3 code in the two-group diffusion approximation in 3D triangular geometry. The efficiency of all groups of control rods for the first fuel loading of the third unit of the Kalinin Nuclear Power Plant is computed. The temperature, barometric, and density effects of reactivity as well as the reactivity coefficient due to the concentration of boric acid in the reactor were computed additionally. Results of computations are compared with the experiment.
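
    For reference, the steady-state two-group neutron diffusion equations that such codes solve may be written in textbook form (the DP3 discretization itself is not given in the abstract) as

      -\nabla \cdot D_1 \nabla \phi_1 + (\Sigma_{a,1} + \Sigma_{1\to 2})\,\phi_1 = \frac{1}{k_{\mathrm{eff}}}\left(\nu\Sigma_{f,1}\,\phi_1 + \nu\Sigma_{f,2}\,\phi_2\right),
      -\nabla \cdot D_2 \nabla \phi_2 + \Sigma_{a,2}\,\phi_2 = \Sigma_{1\to 2}\,\phi_1,

    where \phi_1 and \phi_2 are the fast and thermal fluxes and \Sigma_{1\to 2} is the down-scattering cross section.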

  10. Reactivity effects in VVER-1000 of the third unit of the Kalinin nuclear power plant at physical start-up. Computations in ShIPR intellectual code system with library of two-group cross sections generated by UNK code

    NASA Astrophysics Data System (ADS)

    Zizin, M. N.; Zimin, V. G.; Zizina, S. N.; Kryakvin, L. V.; Pitilimov, V. A.; Tereshonok, V. A.

    2010-12-01

    The ShIPR intellectual code system for mathematical simulation of nuclear reactors includes a set of computing modules implementing the preparation of macro cross sections on the basis of the two-group library of neutron-physics cross sections obtained for the SKETCH-N nodal code. This library is created by using the UNK code for 3D diffusion computation of first VVER-1000 fuel loadings. Computation of neutron fields in the ShIPR system is performed using the DP3 code in the two-group diffusion approximation in 3D triangular geometry. The efficiency of all groups of control rods for the first fuel loading of the third unit of the Kalinin Nuclear Power Plant is computed. The temperature, barometric, and density effects of reactivity as well as the reactivity coefficient due to the concentration of boric acid in the reactor were computed additionally. Results of computations are compared with the experiment.

  11. Industrial Computer Codes

    NASA Technical Reports Server (NTRS)

    Shapiro, Wilbur

    1996-01-01

    This is an overview of new and updated industrial codes for seal design and testing. GCYLT (gas cylindrical seals -- turbulent), SPIRALI (spiral-groove seals -- incompressible), KTK (knife to knife) Labyrinth Seal Code, and DYSEAL (dynamic seal analysis) are covered. GCYLT uses G-factors for Poiseuille and Couette turbulence coefficients. SPIRALI is updated to include turbulence and inertia, but maintains the narrow groove theory. The KTK labyrinth seal code handles straight or stepped seals. DYSEAL provides dynamics for the seal geometry.

  12. Physical sputtering code for fusion applications

    SciTech Connect

    Smith, D.L.; Brooks, J.N.; Post, D.E.

    1981-10-01

    A computer code, DSPUT, has been developed to compute the physical sputtering yields for various plasma particles incident on candidate fusion-reactor first-wall materials. The code, which incorporates the energy and angular-dependence of the sputtering yield, treats both high- and low-Z incident particles bombarding high- and low-Z wall materials. The physical sputtering yield is expressed in terms of the atomic and mass numbers of the incident and target atoms, the surface binding energy of the wall materials, and the incident angle and energy of the particle. An auxiliary code has been written to provide sputtering yields for a Maxwellian-averaged incident particle flux. The code DSPUT has been used as part of a Monte Carlo code for analyzing plasma-wall interactions.
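
    DSPUT's exact expressions are not given in the abstract. One widely used analytic form built from the same ingredients (incident energy, a threshold energy tied to the surface binding energy, and a nuclear stopping term) is the Bohdansky-type fit sketched below; the fit constants and the toy stopping function are illustrative, not DSPUT's:

      def sputtering_yield(energy_ev, q_fit, e_th_ev, nuclear_stopping):
          """Bohdansky-type physical sputtering yield at normal incidence.

          q_fit and e_th_ev (threshold energy, related to the surface binding
          energy) are material-dependent fit constants; nuclear_stopping(E)
          supplies the nuclear stopping term. All values here are stand-ins.
          """
          if energy_ev <= e_th_ev:
              return 0.0                       # no sputtering below threshold
          r = e_th_ev / energy_ev
          return (q_fit * nuclear_stopping(energy_ev)
                  * (1.0 - r**(2.0 / 3.0)) * (1.0 - r)**2)

      # Toy stopping function, for illustration only
      print(sputtering_yield(300.0, q_fit=0.1, e_th_ev=50.0,
                             nuclear_stopping=lambda e: e / (e + 500.0)))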

  13. ACCELERATOR PHYSICS CODE WEB REPOSITORY.

    SciTech Connect

    WEI, J.

    2006-06-26

    In the framework of the CARE HHH European Network, we have developed a web-based dynamic accelerator-physics code repository. We describe the design, structure and contents of this repository, illustrate its usage, and discuss our future plans, with emphasis on code benchmarking.

  14. Accelerator Physics Code Web Repository

    SciTech Connect

    Zimmermann, F.; Basset, R.; Bellodi, G.; Benedetto, E.; Dorda, U.; Giovannozzi, M.; Papaphilippou, Y.; Pieloni, T.; Ruggiero, F.; Rumolo, G.; Schmidt, F.; Todesco, E.; Zotter, B.W.; Payet, J.; Bartolini, R.; Farvacque, L.; Sen, T.; Chin, Y.H.; Ohmi, K.; Oide, K.; Furman, M.; /LBL, Berkeley /Oak Ridge /Pohang Accelerator Lab. /SLAC /TRIUMF /Tech-X, Boulder /UC, San Diego /Darmstadt, GSI /Rutherford /Brookhaven

    2006-10-24

    In the framework of the CARE HHH European Network, we have developed a web-based dynamic accelerator-physics code repository. We describe the design, structure and contents of this repository, illustrate its usage, and discuss our future plans, with emphasis on code benchmarking.

  15. Computer algorithm for coding gain

    NASA Technical Reports Server (NTRS)

    Dodd, E. E.

    1974-01-01

    Development of a computer algorithm for coding gain for use in an automated communications link design system. Using an empirical formula which defines coding gain as used in space communications engineering, an algorithm is constructed on the basis of available performance data for nonsystematic convolutional encoding with soft-decision (eight-level) Viterbi decoding.
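
    The empirical formula itself is not given in the abstract, but the underlying definition is standard: coding gain is the reduction in required Eb/N0 at a fixed bit error rate. The sketch below computes the uncoded coherent BPSK requirement analytically and subtracts a hypothetical coded-system requirement of the sort tabulated for soft-decision Viterbi decoding:

      import math
      from scipy.special import erfcinv

      def required_ebno_db_uncoded_bpsk(ber):
          """Eb/N0 (dB) for uncoded coherent BPSK at a target bit error rate,
          inverted from Pb = 0.5 * erfc(sqrt(Eb/N0))."""
          return 10.0 * math.log10(erfcinv(2.0 * ber) ** 2)

      # Hypothetical coded-system requirement of 4.4 dB at BER = 1e-5
      uncoded_db = required_ebno_db_uncoded_bpsk(1e-5)   # about 9.6 dB
      print(f"coding gain: {uncoded_db - 4.4:.1f} dB")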

  16. Computer Code Generates Homotopic Grids

    NASA Technical Reports Server (NTRS)

    Moitra, Anutosh

    1992-01-01

    HOMAR is a computer code that uses a homotopic procedure to produce two-dimensional grids in cross-sectional planes; these grids are then stacked to produce quasi-three-dimensional grid systems for aerospace configurations. Program produces grids for use in both Euler and Navier-Stokes computations of flows. Written in FORTRAN 77.

  17. Computer-Access-Code Matrices

    NASA Technical Reports Server (NTRS)

    Collins, Earl R., Jr.

    1990-01-01

    Authorized users respond to changing challenges with changing passwords. Scheme for controlling access to computers defeats eavesdroppers and "hackers". Based on password system of challenge and password or sign, challenge, and countersign correlated with random alphanumeric codes in matrices of two or more dimensions. Codes stored on floppy disk or plug-in card and changed frequently. For even higher security, matrices of four or more dimensions used, just as cubes compounded into hypercubes in concurrent processing.

  18. electromagnetics, eddy current, computer codes

    2002-03-12

    TORO Version 4 is designed for finite element analysis of steady, transient and time-harmonic, multi-dimensional, quasi-static problems in electromagnetics. The code allows simulation of electrostatic fields, steady current flows, magnetostatics and eddy current problems in plane or axisymmetric, two-dimensional geometries. TORO is easily coupled to heat conduction and solid mechanics codes to allow multi-physics simulations to be performed.
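
    For reference, the eddy current problem such codes solve is often posed in the quasi-static magnetic vector potential formulation (a textbook form; TORO's own formulation is described in its documentation):

      \sigma\,\frac{\partial \mathbf{A}}{\partial t} + \nabla \times \left(\nu\,\nabla \times \mathbf{A}\right) = \mathbf{J}_s,

    where \mathbf{A} is the magnetic vector potential, \sigma the electrical conductivity, \nu = 1/\mu the reluctivity, and \mathbf{J}_s the impressed source current density.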

  19. Using the DEWSBR computer code

    SciTech Connect

    Cable, G.D.

    1989-09-01

    A computer code is described which is designed to determine the fraction of time during which a given ground location is observable from one or more members of a satellite constellation in earth orbit. Ground visibility parameters are determined from the orientation and strength of an appropriate ionized cylinder (used to simulate a beam experiment) at the selected location. Satellite orbits are computed in a simplified two-body approximation. A variety of printed and graphical outputs is provided. 9 refs., 50 figs., 2 tabs.

  20. Computer access security code system

    NASA Technical Reports Server (NTRS)

    Collins, Earl R., Jr. (Inventor)

    1990-01-01

    A security code system for controlling access to computer and computer-controlled entry situations comprises a plurality of subsets of alpha-numeric characters disposed in random order in matrices of at least two dimensions forming theoretical rectangles, cubes, etc., such that when access is desired, at least one pair of previously unused character subsets not found in the same row or column of the matrix is chosen at random and transmitted by the computer. The proper response to gain access is transmittal of subsets which complete the rectangle, and/or a parallelepiped whose opposite corners were defined by first groups of code. Once used, subsets are not used again to absolutely defeat unauthorized access by eavesdropping, and the like.
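
    A minimal two-dimensional sketch of the challenge/response scheme the patent describes (higher-dimensional matrices and the bookkeeping that retires used subsets are omitted; all names are illustrative):

      import random
      import string

      ALPHABET = string.ascii_uppercase + string.digits

      def make_matrix(rows, cols, subset_len=3):
          """The shared secret: a matrix of random alphanumeric character subsets."""
          return [["".join(random.choices(ALPHABET, k=subset_len)) for _ in range(cols)]
                  for _ in range(rows)]

      def make_challenge(rows, cols):
          """Two cells in distinct rows and columns; the pair defines a rectangle."""
          r1, r2 = random.sample(range(rows), 2)
          c1, c2 = random.sample(range(cols), 2)
          return (r1, c1), (r2, c2)

      def correct_response(matrix, cell_a, cell_b):
          """The subsets at the rectangle's two remaining corners grant access."""
          (r1, c1), (r2, c2) = cell_a, cell_b
          return matrix[r1][c2], matrix[r2][c1]

      m = make_matrix(5, 5)
      a, b = make_challenge(5, 5)
      print("challenge:", m[a[0]][a[1]], m[b[0]][b[1]],
            "response:", *correct_response(m, a, b))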

  1. The Intercomparison of 3D Radiation Codes (I3RC): Showcasing Mathematical and Computational Physics in a Critical Atmospheric Application

    NASA Astrophysics Data System (ADS)

    Davis, A. B.; Cahalan, R. F.

    2001-05-01

    The Intercomparison of 3D Radiation Codes (I3RC) is an on-going initiative involving an international group of over 30 researchers engaged in the numerical modeling of three-dimensional radiative transfer as applied to clouds. Because of their strong variability and extreme opacity, clouds are indeed a major source of uncertainty in the Earth's local radiation budget (at GCM grid scales). Also, 3D effects (at satellite pixel scales) invalidate the standard plane-parallel assumption made in routine cloud-property remote sensing at NASA and NOAA. Accordingly, the test-cases used in I3RC are based on inputs and outputs which relate to cloud effects in atmospheric heating rates and in real-world remote sensing geometries. The main objectives of I3RC are to (1) enable participants to improve their models, (2) publish results as a community, (3) archive source code, and (4) educate. We will survey the status of I3RC and its plans for the near future with a special emphasis on the mathematical models and computational approaches. We will also describe some of the prime applications of I3RC's efforts in climate models, cloud-resolving models, and remote-sensing observations of clouds, or of the surface in their presence. In all these application areas, computational efficiency is the main concern, not accuracy. One of I3RC's main goals is to document the performance of as wide a variety as possible of three-dimensional radiative transfer models for a small but representative number of "cases." However, the project is dominated by modelers working at the level of linear transport theory (i.e., they solve the radiative transfer equation), and an overwhelming majority of these participants use slow-but-robust Monte Carlo techniques. This means that only a small portion of the efficiency vs. accuracy vs. flexibility domain is currently populated by I3RC participants. To balance this natural clustering, the present authors have organized a systematic outreach towards

  2. H⁰ precessor computer code

    SciTech Connect

    van Dyck, O.B.; Floyd, R.A.

    1981-05-01

    A spin precessor using H⁻ to H⁰ stripping, followed by small precession magnets, has been developed for the LAMPF 800-MeV polarized H⁻ beam. The performance of the system was studied with the computer code documented in this report. The report starts from the fundamental physics of a system of spins with hyperfine coupling in a magnetic field and contains many examples of beam behavior as calculated by the program.

  3. GeoPhysical Analysis Code

    SciTech Connect

    2011-05-21

    GPAC is a code that integrates open source libraries for element formulations, linear algebra, and I/O with two main LLNL-written components: (i) a set of standard finite element physics solvers for resolving Darcy fluid flow, explicit mechanics, implicit mechanics, and fluid-mediated fracturing, including resolution of contact both implicitly and explicitly, and (ii) an MPI-based parallelization implementation for use on generic HPC distributed memory architectures. The resultant code can be used alone for linearly elastic problems and problems involving hydraulic fracturing, where the mesh topology is dynamically changed. The key application domain is low-rate stimulation and fracture control in subsurface reservoirs (e.g., enhanced geothermal sites and unconventional shale gas stimulation). GPAC also has interfaces to call external libraries for, e.g., material models and equations of state; however, LLNL-developed EOS and material models will not be part of the current release.

  4. GeoPhysical Analysis Code

    2011-05-21

    GPAC is a code that integrates open source libraries for element formulations, linear algebra, and I/O with two main LLNL-written components: (i) a set of standard finite element physics solvers for resolving Darcy fluid flow, explicit mechanics, implicit mechanics, and fluid-mediated fracturing, including resolution of contact both implicitly and explicitly, and (ii) an MPI-based parallelization implementation for use on generic HPC distributed memory architectures. The resultant code can be used alone for linearly elastic problems and problems involving hydraulic fracturing, where the mesh topology is dynamically changed. The key application domain is low-rate stimulation and fracture control in subsurface reservoirs (e.g., enhanced geothermal sites and unconventional shale gas stimulation). GPAC also has interfaces to call external libraries for, e.g., material models and equations of state; however, LLNL-developed EOS and material models will not be part of the current release.

  5. Optimizing Nuclear Physics Codes on the XT5

    SciTech Connect

    Hartman-Baker, Rebecca J; Nam, Hai Ah

    2011-01-01

    Scientists studying the structure and behavior of the atomic nucleus require immense high-performance computing resources to gain scientific insights. Several nuclear physics codes are capable of scaling to more than 100,000 cores on Oak Ridge National Laboratory's petaflop Cray XT5 system, Jaguar. In this paper, we present our work on optimizing codes in the nuclear physics domain.

  6. Computer Code for Nanostructure Simulation

    NASA Technical Reports Server (NTRS)

    Filikhin, Igor; Vlahovic, Branislav

    2009-01-01

    Due to their small size, nanostructures can have stress and thermal gradients that are larger than any macroscopic analogue. These gradients can lead to specific regions that are susceptible to failure via processes such as plastic deformation by dislocation emission, chemical debonding, and interfacial alloying. A program has been developed that rigorously simulates and predicts optoelectronic properties of nanostructures of virtually any geometrical complexity and material composition. It can be used in simulations of energy level structure, wave functions, density of states of spatially configured phonon-coupled electrons, excitons in quantum dots, quantum rings, quantum ring complexes, and more. The code can be used to calculate stress distributions and thermal transport properties for a variety of nanostructures and interfaces, transport and scattering at nanoscale interfaces and surfaces under various stress states, and alloy compositional gradients. The code allows users to perform modeling of charge transport processes through quantum-dot (QD) arrays as functions of inter-dot distance, array order versus disorder, QD orientation, shape, size, and chemical composition for applications in photovoltaics and physical properties of QD-based biochemical sensors. The code can be used to study the hot exciton formation/relaxation dynamics in arrays of QDs of different shapes and sizes at different temperatures. It also can be used to understand the relation among the deposition parameters and inherent stresses, strain deformation, heat flow, and failure of nanostructures.

  7. Computational capabilities of physical systems.

    PubMed

    Wolpert, David H

    2002-01-01

    In this paper strong limits on the accuracy of real-world physical computation are established. To derive these results a non-Turing machine formulation of physical computation is used. First it is proven that there cannot be a physical computer C to which one can pose any and all computational tasks concerning the physical universe. Next it is proven that no physical computer C can correctly carry out every computational task in the subset of such tasks that could potentially be posed to C. This means in particular that there cannot be a physical computer that can be assured of correctly "processing information faster than the universe does." Because this result holds independent of how or if the computer is physically coupled to the rest of the universe, it also means that there cannot exist an infallible, general-purpose observation apparatus, nor an infallible, general-purpose control apparatus. These results do not rely on systems that are infinite, and/or nonclassical, and/or obey chaotic dynamics. They also hold even if one could use an infinitely fast, infinitely dense computer, with computational powers greater than that of a Turing machine (TM). After deriving these results, analogs of the TM Halting theorem are derived for the novel kind of computer considered in this paper, as are results concerning the (im)possibility of certain kinds of error-correcting codes. In addition, an analog of algorithmic information complexity, "prediction complexity," is elaborated. A task-independent bound is derived on how much the prediction complexity of a computational task can differ for two different reference universal physical computers used to solve that task. This is analogous to the "encoding" bound governing how much the algorithmic information complexity of a TM calculation can differ for two reference universal TMs. It is proven that either the Hamiltonian of our universe proscribes a certain type of computation, or prediction complexity is unique (unlike

  8. Computational capabilities of physical systems.

    PubMed

    Wolpert, David H

    2002-01-01

    In this paper strong limits on the accuracy of real-world physical computation are established. To derive these results a non-Turing machine formulation of physical computation is used. First it is proven that there cannot be a physical computer C to which one can pose any and all computational tasks concerning the physical universe. Next it is proven that no physical computer C can correctly carry out every computational task in the subset of such tasks that could potentially be posed to C. This means in particular that there cannot be a physical computer that can be assured of correctly "processing information faster than the universe does." Because this result holds independent of how or if the computer is physically coupled to the rest of the universe, it also means that there cannot exist an infallible, general-purpose observation apparatus, nor an infallible, general-purpose control apparatus. These results do not rely on systems that are infinite, and/or nonclassical, and/or obey chaotic dynamics. They also hold even if one could use an infinitely fast, infinitely dense computer, with computational powers greater than that of a Turing machine (TM). After deriving these results, analogs of the TM Halting theorem are derived for the novel kind of computer considered in this paper, as are results concerning the (im)possibility of certain kinds of error-correcting codes. In addition, an analog of algorithmic information complexity, "prediction complexity," is elaborated. A task-independent bound is derived on how much the prediction complexity of a computational task can differ for two different reference universal physical computers used to solve that task. This is analogous to the "encoding" bound governing how much the algorithmic information complexity of a TM calculation can differ for two reference universal TMs. It is proven that either the Hamiltonian of our universe proscribes a certain type of computation, or prediction complexity is unique (unlike

  9. GeoPhysical Analysis Code

    2012-12-21

    GPAC is a code that integrates open source libraries for element formulations, linear algebra, and I/O with two main LLNL-written components: (i) a set of standard finite, discrete, and discontinuous displacement element physics solvers for resolving Darcy fluid flow, explicit mechanics, implicit mechanics, fault rupture and earthquake nucleation, and fluid-mediated fracturing, including resolution of physical behaviors both implicitly and explicitly, and (ii) an MPI-based parallelization implementation for use on generic HPC distributed memory architectures. The resultant code can be used alone for linearly elastic problems; problems involving hydraulic fracturing, where the mesh topology is dynamically changed; fault rupture modeling and seismic risk assessment; and general granular materials behavior. The key application domain is low-rate stimulation and fracture control in subsurface reservoirs (e.g., enhanced geothermal sites and unconventional shale gas stimulation). GPAC also has interfaces to call external libraries for, e.g., material models and equations of state; however, LLNL-developed EOS and material models will not be part of the current release. GPAC's secondary applications include modeling fault evolution for predicting the statistical distribution of earthquake events and capturing granular materials behavior under different load paths.

  10. GeoPhysical Analysis Code

    SciTech Connect

    2012-12-21

    GPAC is a code that integrates open source libraries for element formulations, linear algebra, and I/O with two main LLNL-written components: (i) a set of standard finite, discrete, and discontinuous displacement element physics solvers for resolving Darcy fluid flow, explicit mechanics, implicit mechanics, fault rupture and earthquake nucleation, and fluid-mediated fracturing, including resolution of physical behaviors both implicitly and explicitly, and (ii) an MPI-based parallelization implementation for use on generic HPC distributed memory architectures. The resultant code can be used alone for linearly elastic problems; problems involving hydraulic fracturing, where the mesh topology is dynamically changed; fault rupture modeling and seismic risk assessment; and general granular materials behavior. The key application domain is low-rate stimulation and fracture control in subsurface reservoirs (e.g., enhanced geothermal sites and unconventional shale gas stimulation). GPAC also has interfaces to call external libraries for, e.g., material models and equations of state; however, LLNL-developed EOS and material models will not be part of the current release. GPAC's secondary applications include modeling fault evolution for predicting the statistical distribution of earthquake events and capturing granular materials behavior under different load paths.

  11. Computational physics: a perspective.

    PubMed

    Stoneham, A M

    2002-06-15

    Computing comprises three distinct strands: hardware, software and the ways they are used in real or imagined worlds. Its use in research is more than writing or running code. Having something significant to compute and deploying judgement in what is attempted and achieved are especially challenging. In science or engineering, one must define a central problem in computable form, run such software as is appropriate and, last but by no means least, convince others that the results are both valid and useful. These several strands are highly interdependent. A major scientific development can transform disparate aspects of information and computer technologies. Computers affect the way we do science, as well as changing our personal worlds. Access to information is being transformed, with consequences beyond research or even science. Creativity in research is usually considered uniquely human, with inspiration a central factor. Scientific and technological needs are major forces in innovation, and these include hardware and software opportunities. One can try to define the scientific needs for established technologies (atomic energy, the early semiconductor industry), for rapidly developing technologies (advanced materials, microelectronics) and for emerging technologies (nanotechnology, novel information technologies). Did these needs define new computing, or was science diverted into applications of then-available codes? Regarding credibility, why is it that engineers accept computer realizations when designing engineered structures, whereas predictive modelling of materials has yet to achieve industrial confidence outside very special cases? The tensions between computing and traditional science are complex, unpredictable and potentially powerful.

  12. Computations in Plasma Physics.

    ERIC Educational Resources Information Center

    Cohen, Bruce I.; Killeen, John

    1983-01-01

    Discusses contributions of computers to research in magnetic and inertial-confinement fusion, charged-particle-beam propagation, and space sciences. Considers use in design/control of laboratory and spacecraft experiments and in data acquisition; and reviews major plasma computational methods and some of the important physics problems they…

  13. Computer-Based Coding of Occupation Codes for Epidemiological Analyses.

    PubMed

    Russ, Daniel E; Ho, Kwan-Yuet; Johnson, Calvin A; Friesen, Melissa C

    2014-05-01

    Mapping job titles to standardized occupation classification (SOC) codes is an important step in evaluating changes in health risks over time as measured in inspection databases. However, manual SOC coding is cost prohibitive for very large studies. Computer-based SOC coding systems can improve the efficiency of incorporating occupational risk factors into large-scale epidemiological studies. We present a novel method of mapping verbatim job titles to SOC codes using a large table of prior knowledge available in the public domain that included detailed description of the tasks and activities and their synonyms relevant to each SOC code. Job titles are compared to our knowledge base to find the closest matching SOC code. A soft Jaccard index is used to measure the similarity between a previously unseen job title and the knowledge base. Additional information such as standardized industrial codes can be incorporated to improve the SOC code determination by providing additional context to break ties in matches. PMID:25221787
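
    The paper's exact soft Jaccard definition is not reproduced in the abstract. A common formulation relaxes the 0/1 token match of the ordinary Jaccard index to a best-match string similarity, for example:

      import difflib

      def soft_jaccard(title_a, title_b):
          """Soft Jaccard similarity between two token sets: each token's 0/1
          set membership is relaxed to its best string-similarity match."""
          a, b = title_a.lower().split(), title_b.lower().split()
          soft_overlap = sum(
              max(difflib.SequenceMatcher(None, t, o).ratio() for o in b) for t in a
          )
          # |A ∩ B| / |A ∪ B| with the soft overlap standing in for |A ∩ B|
          return soft_overlap / (len(a) + len(b) - soft_overlap)

      print(soft_jaccard("registered nurse", "registered nurses"))  # close to 1.0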

  14. Computational Physics' Greatest Hits

    NASA Astrophysics Data System (ADS)

    Bug, Amy

    2011-03-01

    The digital computer has worked its way so effectively into our profession that now, roughly 65 years after its invention, it is virtually impossible to find a field of experimental or theoretical physics unaided by computational innovation. It is tough to think of another device about which one can make that claim. In the session "What is computational physics?" speakers will distinguish computation within the field of computational physics from this ubiquitous importance across all subfields of physics. This talk will recap the invited session "Great Advances...Past, Present and Future" in which five dramatic areas of discovery (five of our "greatest hits") are chronicled: The physics of many-boson systems via Path Integral Monte Carlo, the thermodynamic behavior of a huge number of diverse systems via Monte Carlo Methods, the discovery of new pharmaceutical agents via molecular dynamics, predictive simulations of global climate change via detailed, cross-disciplinary earth system models, and an understanding of the formation of the first structures in our universe via galaxy formation simulations. The talk will also identify "greatest hits" in our field from the teaching and research perspectives of other members of DCOMP, including its Executive Committee.

  15. HOTSPOT Health Physics codes for the PC

    SciTech Connect

    Homann, S.G.

    1994-03-01

    The HOTSPOT Health Physics codes were created to provide Health Physics personnel with a fast, field-portable calculation tool for evaluating accidents involving radioactive materials. HOTSPOT codes are a first-order approximation of the radiation effects associated with the atmospheric release of radioactive materials. HOTSPOT programs are reasonably accurate for a timely initial assessment. More importantly, HOTSPOT codes produce a consistent output for the same input assumptions and minimize the probability of errors associated with reading a graph incorrectly or scaling a universal nomogram during an emergency. The HOTSPOT codes are designed for short-term (less than 24 hours) release durations. Users requiring radiological release consequences for release scenarios over a longer time period, e.g., annual windrose data, are directed to such long-term models as CAP88-PC (Parks, 1992). Users requiring more sophisticated modeling capabilities, e.g., complex terrain; multi-location real-time wind field data; etc., are directed to such capabilities as the Department of Energy's ARAC computer codes (Sullivan, 1993). Four general programs -- Plume, Explosion, Fire, and Resuspension -- calculate a downwind assessment following the release of radioactive material resulting from a continuous or puff release, explosive release, fuel fire, or an area contamination event. Other programs deal with the release of plutonium, uranium, and tritium to expedite an initial assessment of accidents involving nuclear weapons. Additional programs estimate the dose commitment from the inhalation of any one of the radionuclides listed in the database of radionuclides; calibrate a radiation survey instrument for ground-survey measurements; and screen plutonium uptake in the lung (see FIDLER Calibration and LUNG Screening sections).

  16. Teaching Physics with Computers

    NASA Astrophysics Data System (ADS)

    Botet, R.; Trizac, E.

    2005-09-01

    Computers are now so common in our everyday life that it is difficult to imagine the computer-free scientific life of the years before the 1980s. And yet, in spite of an unquestionable rise, the use of computers in the realm of education is still in its infancy. This is not a problem with students: for the new generation, the pre-computer age seems as far in the past as the age of the dinosaurs. It may instead be more a question of teacher attitude. Traditional education is based on centuries of polished concepts and equations, while computers require us to think differently about our method of teaching, and to revise the content accordingly. Our brains do not work in terms of numbers, but use abstract and visual concepts; hence, communication between computer and man boomed when computers escaped the world of numbers to reach a visual interface. From this time on, computers have generated new knowledge and, more importantly for teaching, new ways to grasp concepts. Therefore, just as real experiments were the starting point for theory, virtual experiments can be used to understand theoretical concepts. But there are important differences. Some of them are fundamental: a virtual experiment may allow for the exploration of length and time scales together with a level of microscopic complexity not directly accessible to conventional experiments. Others are practical: numerical experiments are completely safe, unlike some dangerous but essential laboratory experiments, and are often less expensive. Finally, some numerical approaches are suited only to teaching, as the concept necessary for the physical problem, or its solution, lies beyond the scope of traditional methods. For all these reasons, computers open physics courses to novel concepts, bringing education and research closer. In addition, and this is not a minor point, they respond naturally to the basic pedagogical needs of interactivity, feedback, and individualization of instruction. This is why one can

  17. Computer Code Aids Design Of Wings

    NASA Technical Reports Server (NTRS)

    Carlson, Harry W.; Darden, Christine M.

    1993-01-01

    AERO2S computer code developed to aid design engineers in selection and evaluation of aerodynamically efficient wing/canard and wing/horizontal-tail configurations that include simple hinged-flap systems. Code rapidly estimates longitudinal aerodynamic characteristics of conceptual airplane lifting-surface arrangements. Developed in FORTRAN V on CDC 6000 computer system, and ported to MS-DOS environment.

  18. Volume accumulator design analysis computer codes

    NASA Technical Reports Server (NTRS)

    Whitaker, W. D.; Shimazaki, T. T.

    1973-01-01

    The computer codes, VANEP and VANES, were written and used to aid in the design and performance calculation of the volume accumulator units (VAU) for the 5-kwe reactor thermoelectric system. VANEP computes the VAU design which meets the primary coolant loop VAU volume and pressure performance requirements. VANES computes the performance of the VAU design, determined from the VANEP code, at the conditions of the secondary coolant loop. The codes can also compute the performance characteristics of the VAU's under conditions of possible modes of failure which still permit continued system operation.

  19. Network Coding for Function Computation

    ERIC Educational Resources Information Center

    Appuswamy, Rathinakumar

    2011-01-01

    In this dissertation, the following "network computing problem" is considered. Source nodes in a directed acyclic network generate independent messages and a single receiver node computes a target function f of the messages. The objective is to maximize the average number of times f can be computed per network usage, i.e., the "computing…

  20. Computational Knowledge for Toroidal Confinement Physics: Part I

    SciTech Connect

    Chang, C. S.

    2009-02-19

    Basic high-level computational knowledge for studying toroidal confinement physics is discussed. Topics include the primacy hierarchy of simulation quantities in statistical plasma physics, the importance of nonlinear multiscale self-organization phenomena in a computational study, different types of codes for different applications, and different types of computer architectures for different types of codes.

  1. Computational Physics at Haverford College

    NASA Astrophysics Data System (ADS)

    Love, Peter

    2009-03-01

    We will describe two new physics courses at Haverford College: Physics/CS 304, Computational Physics, an upper-level elective for Physics, CS, and Math majors, and Physics 412, Research in Theoretical and Computational Physics. These courses are designed to extend students' experience of physics using computation. They are also part of an interdisciplinary Concentration in Computational Science mounted jointly by the departments of Computer Science, Economics, Biology, Chemistry, and Mathematics. These courses make extensive use of Python, SciPy, NumPy, and Visual Python, and include extensive independent projects. We will describe some results obtained and lessons learned.

  2. Thermal Hydraulic Computer Code System.

    1999-07-16

    Version 00 RELAP5 was developed to describe the behavior of a light water reactor (LWR) subjected to postulated transients such as loss of coolant from large or small pipe breaks, pump failures, etc. RELAP5 calculates fluid conditions such as velocities, pressures, densities, qualities, temperatures; thermal conditions such as surface temperatures, temperature distributions, heat fluxes; pump conditions; trip conditions; reactor power and reactivity from point reactor kinetics; and control system variables. In addition to reactor applications, the program can be applied to transient analysis of other thermal-hydraulic systems with water as the fluid. This package contains RELAP5/MOD1/029 for CDC computers and RELAP5/MOD1/025 for VAX or IBM mainframe computers.

  3. Computational Accelerator Physics Working Group Summary

    SciTech Connect

    Cary, John R.; Bohn, Courtlandt L.

    2004-08-27

    The working group on computational accelerator physics at the 11th Advanced Accelerator Concepts Workshop held a series of meetings during the Workshop. Verification, i.e., showing that a computational application correctly solves the assumed model, and validation, i.e., showing that the model correctly describes the modeled system, were discussed for a number of systems. In particular, the predictions of the massively parallel codes, OSIRIS and VORPAL, used for modeling advanced accelerator concepts, were compared and shown to agree, thereby establishing some verification of both codes. In addition, a number of talks on the status and frontiers of computational accelerator physics were presented, to include the modeling of ultrahigh-brightness electron photoinjectors and the physics of beam halo production. Finally, talks discussing computational needs were presented.

  4. Computational Accelerator Physics Working Group Summary

    SciTech Connect

    Cary, John R.; Bohn, Courtlandt L.

    2004-12-07

    The working group on computational accelerator physics at the 11th Advanced Accelerator Concepts Workshop held a series of meetings during the Workshop. Verification, i.e., showing that a computational application correctly solves the assumed model, and validation, i.e., showing that the model correctly describes the modeled system, were discussed for a number of systems. In particular, the predictions of the massively parallel codes, OSIRIS and VORPAL, used for modeling advanced accelerator concepts, were compared and shown to agree, thereby establishing some verification of both codes. In addition, a number of talks on the status and frontiers of computational accelerator physics were presented, to include the modeling of ultrahigh-brightness electron photoinjectors and the physics of beam halo production. Finally, talks discussing computational needs were presented.

  5. Physics Division computer facilities

    SciTech Connect

    Cyborski, D.R.; Teh, K.M.

    1995-08-01

    The Physics Division maintains several computer systems for data analysis, general-purpose computing, and word processing. While the VMS VAX clusters are still used, this past year saw a greater shift to the Unix Cluster with the addition of more RISC-based Unix workstations. The main Divisional VAX cluster consists of two VAX 3300s configured as a dual-host system, which serve as boot nodes and disk servers to seven other satellite nodes consisting of two VAXstation 3200s, three VAXstation 3100 machines, a VAX-11/750, and a MicroVAX II. There are three 6250/1600 bpi 9-track tape drives, six 8-mm tapes, and about 9.1 GB of disk storage served to the cluster by the various satellites. Also, two of the satellites (the MicroVAX and VAX-11/750) have DAPHNE front-end interfaces for data acquisition. Since the tape drives are accessible cluster-wide via a software package, they are, in addition to replay, used for tape-to-tape copies. There is, however, a satellite node outfitted with two 8-mm drives available for this purpose. Although not part of the main cluster, a DEC 3000 Alpha machine obtained for data acquisition is also available for data replay. In one case, users reported a performance increase by a factor of 10 when using this machine.

  6. High-Productivity Computing in Computational Physics Education

    NASA Astrophysics Data System (ADS)

    Tel-Zur, Guy

    2011-03-01

    We describe the development of a new course in Computational Physics at Ben-Gurion University. This elective course for 3rd-year undergraduates and M.Sc. students is taught during one semester. Computational Physics is by now well accepted as the Third Pillar of Science. This paper's claim is that modern Computational Physics education should also deal with High-Productivity Computing. The traditional approach to teaching Computational Physics emphasizes "Correctness" and then "Accuracy"; we add "Performance." Along with topics in Mathematical Methods and case studies in Physics, the course devotes a significant amount of time to "Mini-Courses" on topics such as: High-Throughput Computing - Condor, Parallel Programming - MPI and OpenMP, How to build a Beowulf, Visualization, and Grid and Cloud Computing. The course intends to teach neither new physics nor new mathematics; it is focused on an integrated approach to solving problems, starting from the physics problem and its mathematical solution, through the numerical scheme, to writing an efficient computer code and finally analysis and visualization.
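
    As an example of the kind of exercise such a Parallel Programming mini-course might assign (an assumption, not material from the paper), here is a minimal mpi4py midpoint-rule estimate of pi:

      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      # Midpoint-rule estimate of pi = integral of 4/(1+x^2) over [0, 1],
      # with the sample points strided across the MPI ranks.
      n = 10_000_000
      x = (np.arange(rank, n, size) + 0.5) / n
      local_sum = np.sum(4.0 / (1.0 + x * x)) / n

      pi = comm.reduce(local_sum, op=MPI.SUM, root=0)
      if rank == 0:
          print(pi)   # run with, e.g.: mpiexec -n 4 python pi_mpi.py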

  7. Development of probabilistic multimedia multipathway computer codes.

    SciTech Connect

    Yu, C.; LePoire, D.; Gnanapragasam, E.; Arnish, J.; Kamboj, S.; Biwer, B. M.; Cheng, J.-J.; Zielen, A. J.; Chen, S. Y.; Mo, T.; Abu-Eid, R.; Thaggard, M.; Sallo, A., III.; Peterson, H., Jr.; Williams, W. A.; Environmental Assessment; NRC; EM

    2002-01-01

    The deterministic multimedia dose/risk assessment codes RESRAD and RESRAD-BUILD have been widely used for many years for evaluation of sites contaminated with residual radioactive materials. The RESRAD code applies to the cleanup of sites (soils) and the RESRAD-BUILD code applies to the cleanup of buildings and structures. This work describes the procedure used to enhance the deterministic RESRAD and RESRAD-BUILD codes for probabilistic dose analysis. A six-step procedure was used in developing default parameter distributions and the probabilistic analysis modules. These six steps include (1) listing and categorizing parameters; (2) ranking parameters; (3) developing parameter distributions; (4) testing parameter distributions for probabilistic analysis; (5) developing probabilistic software modules; and (6) testing probabilistic modules and integrated codes. The procedures used can be applied to the development of other multimedia probabilistic codes. The probabilistic versions of RESRAD and RESRAD-BUILD codes provide tools for studying the uncertainty in dose assessment caused by uncertain input parameters. The parameter distribution data collected in this work can also be applied to other multimedia assessment tasks and multimedia computer codes.
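
    A minimal sketch of steps (3), (5), and (6): sample assumed input distributions and propagate them through a deterministic model. The one-line dose expression is an entirely hypothetical stand-in for a RESRAD evaluation, and all distribution parameters are illustrative:

      import numpy as np

      rng = np.random.default_rng(seed=1)
      n = 10_000

      # Step (3): assumed input distributions (illustrative values only)
      soil_conc = rng.lognormal(mean=0.0, sigma=0.5, size=n)               # pCi/g
      intake = rng.triangular(left=50.0, mode=100.0, right=200.0, size=n)  # mg/day
      dose_factor = 1.3e-4   # hypothetical constant converting to mrem/yr

      # Steps (5)-(6): run the deterministic model once per parameter sample;
      # this product stands in for a full deterministic code evaluation.
      dose = soil_conc * intake * dose_factor

      print(np.percentile(dose, [5, 50, 95]))   # uncertainty band on the dose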

  8. The Computational Physics Program of the national MFE Computer Center

    SciTech Connect

    Mirin, A.A.

    1989-01-01

    Since June 1974, the MFE Computer Center has been engaged in a significant computational physics effort. The principal objective of the Computational Physics Group is to develop advanced numerical models for the investigation of plasma phenomena and the simulation of present and future magnetic confinement devices. Another major objective of the group is to develop efficient algorithms and programming techniques for current and future generations of supercomputers. The Computational Physics Group has been involved in several areas of fusion research. One main area is the application of Fokker-Planck/quasilinear codes to tokamaks. Another major area is the investigation of resistive magnetohydrodynamics in three dimensions, with applications to tokamaks and compact toroids. A third area is the investigation of kinetic instabilities using a 3-D particle code; this work is often coupled with the task of numerically generating equilibria which model experimental devices. Ways to apply statistical closure approximations to study tokamak-edge plasma turbulence have been under examination, with the hope of being able to explain anomalous transport. Also, we are collaborating in an international effort to evaluate fully three-dimensional linear stability of toroidal devices. In addition to these computational physics studies, the group has developed a number of linear systems solvers for general classes of physics problems and has been making a major effort at ascertaining how to efficiently utilize multiprocessor computers. A summary of these programs is included in this paper. 6 tabs.

  9. Computational plasma physics and supercomputers

    SciTech Connect

    Killeen, J.; McNamara, B.

    1984-09-01

    The supercomputers of the 1980s are introduced. They are 10 to 100 times more powerful than today's machines. The range of physics modeling in the fusion program is outlined. New machine architecture will influence particular codes, but parallel processing poses new coding difficulties. Increasing realism in simulations will require better numerics and more elaborate mathematics.

  10. Establishing confidence in complex physics codes: Art or science?

    SciTech Connect

    Trucano, T.

    1997-12-31

    The ALEGRA shock wave physics code, currently under development at Sandia National Laboratories and partially supported by the US Accelerated Strategic Computing Initiative (ASCI), is generic to a certain class of physics codes: large, multi-application, intended to support a broad user community on the latest generation of massively parallel supercomputer, and in a continual state of formal development. To say that the author has "confidence" in the results of ALEGRA is to say something different than that he believes that ALEGRA is "predictive." It is the purpose of this talk to illustrate the distinction between these two concepts. The author elects to perform this task in a somewhat historical manner. He will summarize certain older approaches to code validation. He views these methods as aiming to establish the predictive behavior of the code. These methods are distinguished by their emphasis on local information. He will conclude that these approaches are more art than science.

  11. New coding technique for computer generated holograms.

    NASA Technical Reports Server (NTRS)

    Haskell, R. E.; Culver, B. C.

    1972-01-01

    A coding technique is developed for recording computer generated holograms on a computer controlled CRT in which each resolution cell contains two beam spots of equal size and equal intensity. This provides a binary hologram in which only the position of the two dots is varied from cell to cell. The amplitude associated with each resolution cell is controlled by selectively diffracting unwanted light into a higher diffraction order. The recording of the holograms is fast and simple.

  12. Secure Computation from Random Error Correcting Codes

    NASA Astrophysics Data System (ADS)

    Chen, Hao; Cramer, Ronald; Goldwasser, Shafi; de Haan, Robbert; Vaikuntanathan, Vinod

    Secure computation consists of protocols for secure arithmetic: secret values are added and multiplied securely by networked processors. The striking feature of secure computation is that security is maintained even in the presence of an adversary who corrupts a quorum of the processors and who exercises full, malicious control over them. One of the fundamental primitives at the heart of secure computation is secret-sharing. Typically, the required secret-sharing techniques build on Shamir's scheme, which can be viewed as a cryptographic twist on the Reed-Solomon error correcting code. In this work we further the connections between secure computation and error correcting codes. We demonstrate that threshold secure computation in the secure channels model can be based on arbitrary codes. For a network of size n, we then show a reduction in communication for secure computation amounting to a multiplicative logarithmic factor (in n) compared to classical methods for small, e.g., constant size fields, while tolerating t < (1/2 - ε)n players to be corrupted, where ε > 0 can be arbitrarily small. For large networks this implies considerable savings in communication. Our results hold in the broadcast/negligible error model of Rabin and Ben-Or, and complement results from CRYPTO 2006 for the zero-error model of Ben-Or, Goldwasser and Wigderson (BGW). Our general theory can be extended so as to encompass those results from CRYPTO 2006 as well. We also present a new method for constructing high information rate ramp schemes based on arbitrary codes, and in particular we give a new construction based on algebraic geometry codes.
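
    As background for the Shamir/Reed-Solomon connection the abstract draws, a minimal sketch of Shamir's scheme over a prime field (the modulus and sizes are illustrative; real protocols choose fields to match the network and security parameters):

      import random

      P = 2**31 - 1   # a prime modulus, chosen here purely for illustration

      def share(secret, t, n):
          """(t+1)-out-of-n Shamir sharing: evaluate a random degree-t polynomial
          with constant term `secret` at x = 1..n (a Reed-Solomon codeword)."""
          coeffs = [secret] + [random.randrange(P) for _ in range(t)]
          return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
                  for x in range(1, n + 1)]

      def reconstruct(shares):
          """Lagrange-interpolate the polynomial at x = 0 from any t+1 shares."""
          secret = 0
          for i, (xi, yi) in enumerate(shares):
              num, den = 1, 1
              for j, (xj, _) in enumerate(shares):
                  if i != j:
                      num = num * -xj % P
                      den = den * (xi - xj) % P
              secret = (secret + yi * num * pow(den, P - 2, P)) % P  # den^-1 mod P
          return secret

      shares = share(271828, t=2, n=5)
      print(reconstruct(shares[:3]))   # any 3 of the 5 shares recover 271828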

  13. Computer design code for conical ribbon parachutes

    SciTech Connect

    Waye, D.E.

    1986-01-01

    An interactive computer design code has been developed to aid in the design of conical ribbon parachutes. The program is written to include single conical and polyconical parachute designs. The code determines the pattern length, vent diameter, radial length, ribbon top and bottom lengths, and geometric local and average porosity for the designer with inputs of constructed diameter, ribbon widths, ribbon spacings, radial width, and number of gores. The gores are designed with one mini-radial in the center with an option for the addition of two outer mini-radials. The output provides all of the dimensions necessary for the construction of the parachute. These results could also be used as input into other computer codes used to predict parachute loads.
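
    As a hedged illustration of one of the outputs, the snippet below computes an average geometric porosity as the open-area fraction of a stack of ribbons and gaps of uniform width; this one-dimensional ratio is an assumed simplification, since Waye's code works from the full gore geometry.

    ```python
    def average_geometric_porosity(ribbon_widths, ribbon_spacings):
        """Illustrative definition only: average geometric porosity taken
        as open (gap) area over total area for a strip of uniform width.
        The actual code derives porosity from the constructed gore
        geometry; this ratio is an assumed simplification."""
        open_len = sum(ribbon_spacings)
        total_len = sum(ribbon_widths) + open_len
        return open_len / total_len

    # Example: ten 2-in ribbons separated by nine 0.5-in gaps.
    print(average_geometric_porosity([2.0] * 10, [0.5] * 9))  # ~0.18
    ```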

  14. Thermoelectric pump performance analysis computer code

    NASA Technical Reports Server (NTRS)

    Johnson, J. L.

    1973-01-01

    A computer program is presented that was used to analyze and design dual-throat electromagnetic dc conduction pumps for the 5-kwe ZrH reactor thermoelectric system. In addition to a listing of the code and corresponding identification of symbols, the bases for this analytical model are provided.

  15. COLD-SAT Dynamic Model Computer Code

    NASA Technical Reports Server (NTRS)

    Bollenbacher, G.; Adams, N. S.

    1995-01-01

    COLD-SAT Dynamic Model (CSDM) computer code implements six-degree-of-freedom, rigid-body mathematical model for simulation of spacecraft in orbit around Earth. Investigates flow dynamics and thermodynamics of subcritical cryogenic fluids in microgravity. Consists of three parts: translation model, rotation model, and slosh model. Written in FORTRAN 77.
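
    A minimal sketch of the translation/rotation split that such a six-degree-of-freedom model implies is given below (the slosh model is omitted, and the inertia and orbit values are assumed): translation follows two-body gravity, rotation follows Euler's rigid-body equations.

    ```python
    import numpy as np

    MU = 3.986004418e14                    # Earth's GM, m^3/s^2
    I = np.diag([1200.0, 1500.0, 900.0])   # assumed inertia tensor, kg m^2

    def derivatives(r, v, w, torque):
        """Translation model: two-body gravity. Rotation model: Euler's
        rigid-body equations, I w' = T - w x (I w). Slosh is omitted."""
        a = -MU * r / np.linalg.norm(r)**3
        w_dot = np.linalg.solve(I, torque - np.cross(w, I @ w))
        return v, a, w_dot

    def step(r, v, w, torque, dt):
        """Explicit Euler update of the six-degree-of-freedom state."""
        dr, dv, dw = derivatives(r, v, w, torque)
        return r + dr * dt, v + dv * dt, w + dw * dt

    # One step of an assumed 400-km circular orbit with a small body torque.
    r = np.array([6.778e6, 0.0, 0.0]); v = np.array([0.0, 7.67e3, 0.0])
    w = np.array([0.0, 0.0, 0.01]); torque = np.array([0.1, 0.0, 0.0])
    r, v, w = step(r, v, w, torque, dt=1.0)
    ```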

  16. Design of a physical format coding system

    NASA Astrophysics Data System (ADS)

    Hu, Beibei; Pei, Jing; Zhang, Qicheng; Liu, Hailong; Tang, Yi

    2008-12-01

    A novel design of a physical format coding system (PFCS) is presented, based on the multi-level read-only memory disc (ML ROM), in order to solve the problem of the low efficiency and long duration of disc testing during system development. The PFCS is composed of five units: 'Encode', 'Add Noise', 'Decode', 'Error Rate', and 'Information'. It is developed with MFC in the VC++ 6.0 environment and can visually simulate the data-processing procedure for ML ROM. The system can also be used for developing other optical disc storage systems or similar channel coding systems.
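
    To make the unit structure concrete, here is a minimal sketch of an Encode / Add Noise / Decode / Error Rate pipeline. A triple-repetition code and a binary symmetric channel stand in for the actual ML ROM channel code and noise model, which the abstract does not specify.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def encode(bits, rep=3):
        # Stand-in channel code: 3x repetition (the actual ML ROM code
        # is not specified in the abstract).
        return np.repeat(bits, rep)

    def add_noise(symbols, flip_prob=0.05):
        # Binary symmetric channel: flip each symbol with probability p.
        flips = rng.random(symbols.size) < flip_prob
        return symbols ^ flips

    def decode(symbols, rep=3):
        # Majority vote within each repetition group.
        return (symbols.reshape(-1, rep).sum(axis=1) > rep // 2).astype(int)

    def error_rate(sent, received):
        return np.mean(sent != received)

    data = rng.integers(0, 2, 10_000)
    ber = error_rate(data, decode(add_noise(encode(data))))
    print(f"bit error rate: {ber:.4f}")
    ```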

  17. User's manual for HDR3 computer code

    SciTech Connect

    Arundale, C.J.

    1982-10-01

    A description of the HDR3 computer code and instructions for its use are provided. HDR3 calculates space heating costs for a hot dry rock (HDR) geothermal space heating system. The code also compares these costs to those of a specific oil heating system in use at the National Aeronautics and Space Administration Flight Center at Wallops Island, Virginia. HDR3 allows many HDR system parameters to be varied so that the user may examine various reservoir management schemes and may optimize reservoir design to suit a particular set of geophysical and economic parameters.

  18. Neural coding: computational and biophysical perspectives

    NASA Astrophysics Data System (ADS)

    Kreiman, Gabriel

    2004-07-01

    While recognizing a face or kicking a ball may seem to be easy tasks for us, they still constitute challenging problems for even the most sophisticated computer algorithms available nowadays. The brain has evolved complex mechanisms to encode behaviorally relevant information. Here we review the types of codes used by the brain, what their constraints are and how they map the sensory environment or the motor output. We start by defining neural codes and briefly describing some of the current tools available to record activity from the brain. We give several examples of coding strategies used by different systems and multiple organisms and discuss how spiking patterns can be read out. Going beyond correlations between physiology and stimuli, we show what is currently known about the direct causal link between neuronal responses and behavioral output or sensory input. Finally, we identify what we consider to be some of the pressing questions in the field.

  19. Present state of the SOURCES computer code

    SciTech Connect

    Shores, E. F.

    2002-01-01

    In various stages of development for over two decades, the SOURCES computer code continues to calculate neutron production rates and spectra from four types of problems: homogeneous media, two-region interfaces, three-region interfaces and that of a monoenergetic alpha particle beam incident on a slab of target material. Graduate work at the University of Missouri - Rolla, in addition to user feedback from a tutorial course, provided the impetus for a variety of code improvements. Recently upgraded to version 4B, initial modifications to SOURCES focused on updates to the 'tape5' decay data library. Shortly thereafter, efforts focused on development of a graphical user interface for the code. This paper documents the Los Alamos SOURCES Tape1 Creator and Library Link (LASTCALL) and describes additional library modifications in more detail. Minor improvements and planned enhancements are discussed.

  20. A Spectral Verification of the HELIOS-2 Lattice Physics Code

    SciTech Connect

    D. S. Crawford; B. D. Ganapol; D. W. Nigg

    2012-11-01

    Core modeling of the Advanced Test Reactor (ATR) at INL is currently undergoing a significant update through the Core Modeling Update Project [1]. The intent of the project is to bring ATR core modeling in line with today’s standards of computational efficiency and verification and validation practices. The HELIOS-2 lattice physics code [2] is the lead code among several reactor physics codes dedicated to modernizing ATR core analysis. This presentation is concerned with an independent verification of the HELIOS-2 spectral representation, including the slowing-down and thermalization algorithm and its data dependency. Here, we describe and demonstrate a recently developed, simple cross-section generation algorithm based entirely on analytical multigroup parameters for both the slowing-down and thermal spectrum. The new capability features fine group detail to assess the flux and multiplication-factor dependencies on cross-section data sets, using the fundamental infinite medium as an example.
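
    As a hedged illustration of an analytical infinite-medium spectrum of the kind described, the sketch below joins a Maxwellian thermal peak to a 1/E slowing-down tail on a fine group structure. The shapes are textbook forms and the join energy is an assumption; this is not the actual HELIOS-2 verification algorithm.

    ```python
    import numpy as np

    # Illustrative infinite-medium spectrum: Maxwellian thermal peak joined
    # continuously to a 1/E slowing-down tail. Textbook shapes only; the
    # join energy is an assumed thermal/epithermal cut.
    kT = 0.0253          # thermal temperature, eV (room temperature)
    E_join = 0.625       # assumed thermal/epithermal cut, eV

    edges = np.logspace(-4, 7, 201)           # fine group edges, eV
    E = np.sqrt(edges[:-1] * edges[1:])       # group midpoint energies

    phi = np.where(E < E_join,
                   E / kT**2 * np.exp(-E / kT),   # Maxwellian
                   1.0 / E)                        # slowing-down tail
    # Scale the tail so the two shapes meet continuously at E_join.
    maxw = E_join / kT**2 * np.exp(-E_join / kT)
    phi[E >= E_join] *= maxw * E_join
    phi_group = phi * (edges[1:] - edges[:-1])    # group-integrated flux
    ```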

  1. GMRES acceleration of computational fluid dynamics codes

    NASA Technical Reports Server (NTRS)

    Wigton, L. B.; Yu, N. J.; Young, D. P.

    1985-01-01

    The generalized minimal residual algorithm (GMRES) is a conjugate-gradient-like method that applies directly to nonsymmetric linear systems of equations. In this paper, GMRES is modified to handle the nonlinear equations characteristic of computational fluid dynamics. Attention is devoted to the concept of preconditioning and the role it plays in assuring rapid convergence. A formulation is developed that allows GMRES to be preconditioned by the solution procedures already built into existing computer codes. Examples are provided that demonstrate the ability of GMRES to greatly improve the robustness and rate of convergence of current state-of-the-art fluid dynamics codes. Theoretical aspects of GMRES are presented that explain why it works. Finally, the advantages GMRES enjoys over related methods such as conjugate gradients are discussed.
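
    The paper's central idea, preconditioning GMRES with the solution procedure already built into an existing code, can be sketched with SciPy: wrap an approximate solve as the preconditioner M and hand it to GMRES. Here an incomplete LU factorization stands in for the flow code's built-in iteration, and the nonsymmetric test matrix is illustrative.

    ```python
    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import LinearOperator, gmres, spilu

    # Nonsymmetric test system standing in for a linearized flow residual.
    n = 200
    A = diags([-1.0, 2.0, -0.3], [-1, 0, 1], shape=(n, n)).tocsc()
    b = np.ones(n)

    # Wrap an approximate solver as the preconditioner M ~ A^-1. In the
    # paper's setting this would be the iteration already built into the
    # existing code; here an incomplete LU factorization stands in.
    ilu = spilu(A)
    M = LinearOperator((n, n), matvec=ilu.solve)

    x, info = gmres(A, b, M=M)
    print(info, np.linalg.norm(A @ x - b))  # info == 0 means converged
    ```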

  2. Probabilistic structural analysis computer code (NESSUS)

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.

    1988-01-01

    Probabilistic structural analysis has been developed to analyze the effects of fluctuating loads, variable material properties, and uncertain analytical models, especially for high-performance structures such as SSME turbopump blades. The computer code NESSUS (Numerical Evaluation of Stochastic Structure Under Stress) was developed to serve as a primary computational tool for the statistical characterization of the probabilistic structural response to stochastic environments. The code consists of three major modules: NESSUS/PRE, NESSUS/FEM, and NESSUS/FPI. NESSUS/PRE is a preprocessor which decomposes the spatially correlated random variables into a set of uncorrelated random variables using a modal analysis method. NESSUS/FEM is a finite element module which provides structural sensitivities to all the random variables considered. NESSUS/FPI is a Fast Probability Integration module by which a cumulative distribution function or a probability density function is calculated.
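
    As a hedged illustration of the overall idea (not the NESSUS algorithms themselves), the sketch below propagates a fluctuating load and a variable material property through a hypothetical response function and reads a percentile off the empirical distribution, with plain Monte Carlo standing in for fast probability integration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def blade_stress(load, modulus):
        # Hypothetical response function standing in for a structural
        # model like NESSUS/FEM; the form and units are illustrative only.
        return load / modulus * 1.0e5

    # Stochastic environment: fluctuating load, variable material property.
    load = rng.normal(1.0e4, 1.5e3, 100_000)
    modulus = rng.lognormal(np.log(2.0e11), 0.05, 100_000)

    stress = blade_stress(load, modulus)
    # Empirical CDF quantile in place of Fast Probability Integration.
    level = np.quantile(stress, 0.99)
    print(f"99th-percentile stress: {level:.3e}")
    ```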

  3. HotSpot Health Physics Codes

    SciTech Connect

    Homann, S. G.

    2013-04-18

    The HotSpot Health Physics Codes were created to provide emergency response personnel and emergency planners with a fast, field-portable set of software tools for evaluating incidents involving radioactive material. The software is also used for safety analysis of facilities handling nuclear material. HotSpot provides a fast and usually conservative means for estimating the radiation effects associated with the short-term (less than 24 hours) atmospheric release of radioactive materials.

  4. HotSpot Health Physics Codes

    2010-03-02

    The HotSpot Health Physics Codes were created to provide emergency response personnel and emergency planners with a fast, field-portable set of software tools for evaluating incidents involving radioactive material. The software is also used for safety analysis of facilities handling nuclear material. HotSpot provides a fast and usually conservative means for estimating the radiation effects associated with the short-term (less than 24 hours) atmospheric release of radioactive materials.

  5. TAIR: A transonic airfoil analysis computer code

    NASA Technical Reports Server (NTRS)

    Dougherty, F. C.; Holst, T. L.; Grundy, K. L.; Thomas, S. D.

    1981-01-01

    The operation of the TAIR (Transonic AIRfoil) computer code, which uses a fast, fully implicit algorithm to solve the conservative full-potential equation for transonic flow fields about arbitrary airfoils, is described on two levels of sophistication: simplified operation and detailed operation. The program organization and theory are elaborated to simplify modification of TAIR for new applications. Examples with input and output are given for a wide range of cases, including incompressible, subcritical compressible, and transonic calculations.

  6. Computing Challenges in Coded Mask Imaging

    NASA Technical Reports Server (NTRS)

    Skinner, Gerald

    2009-01-01

    This slide presentation reviews the complications and challenges in developing computer systems for coded mask imaging telescopes. The coded mask technique is used when there is no other way to create the telescope (i.e., when wide fields of view, energies too high for focusing optics or too low for Compton/tracker techniques, and very good angular resolution are required). The coded mask telescope is described, and the mask is reviewed. The coded masks for the INTErnational Gamma-Ray Astrophysics Laboratory (INTEGRAL) instruments are shown, and a chart showing the types of position-sensitive detectors used for coded mask telescopes is also reviewed. Slides describe the mechanism of recovering an image from the masked pattern, and the correlation with the mask pattern is described. The matrix approach is reviewed, and other approaches to image reconstruction are described. Included in the presentation is a review of the Energetic X-ray Imaging Survey Telescope (EXIST) / High Energy Telescope (HET), with information about the mission, the operation of the telescope, a comparison of the EXIST/HET with the SWIFT/BAT, and details of the design of the EXIST/HET.
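
    The correlation step mentioned above can be sketched in a few lines: each point source casts a shifted copy of the mask onto the detector, and cross-correlating the detector image with a mean-subtracted (balanced) copy of the mask recovers the sky. The circular-convolution geometry and random mask below are idealizations of the real imaging geometry.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Random mask (1 = open, 0 = opaque) and a toy sky with two point sources.
    mask = rng.integers(0, 2, (64, 64)).astype(float)
    sky = np.zeros((64, 64))
    sky[20, 30] = 100.0
    sky[45, 10] = 60.0

    # Detector image: each source casts a shifted copy of the mask pattern.
    # Circular convolution is an idealization of the true geometry.
    detector = np.real(np.fft.ifft2(np.fft.fft2(sky) * np.fft.fft2(mask)))

    # Correlation with the balanced (mean-subtracted) mask recovers the sky.
    decoder = mask - mask.mean()
    image = np.real(np.fft.ifft2(np.fft.fft2(detector) *
                                 np.conj(np.fft.fft2(decoder))))
    print(np.unravel_index(image.argmax(), image.shape))  # brightest source
    ```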

  7. Theoretical atomic physics code development at Los Alamos

    SciTech Connect

    Clark, R.E.H.; Abdallah, J. Jr.

    1989-01-01

    We have developed a set of computer codes for atomic physics calculations at Los Alamos. These codes can calculate a large variety of data with a minimum of effort on the part of the user. In particular, differential cross sections and electron impact coherence parameters can be readily obtained for arbitrary ions or atoms. Currently, the theory consists of non-relativistic Hartree-Fock structure calculations and non-relativistic distorted wave approximation or first-order many-body theory collisional calculations. 12 refs., 2 figs., 5 tabs.

  8. Computational physics program of the National MFE Computer Center

    SciTech Connect

    Mirin, A.A.

    1980-08-01

    The computational physics group is involved in several areas of fusion research. One main area is the application of multidimensional Fokker-Planck, transport and combined Fokker-Planck/transport codes to both toroidal and mirror devices. Another major area is the investigation of linear and nonlinear resistive magnetohydrodynamics in two and three dimensions, with applications to all types of fusion studies. Investigations of more efficient numerical algorithms are also being carried out.

  9. The computational physics program of the National MFE Computer Center

    SciTech Connect

    Mirin, A.A.

    1988-01-01

    The principal objective of the Computational Physics Group is to develop advanced numerical models for the investigation of plasma phenomena and the simulation of present and future magnetic confinement devices. Another major objective of the group is to develop efficient algorithms and programming techniques for current and future generations of supercomputers. The computational physics group is involved in several areas of fusion research. One main area is the application of Fokker-Planck/quasilinear codes to tokamaks. Another major area is the investigation of resistive magnetohydrodynamics in three dimensions, with applications to compact toroids. Another major area is the investigation of kinetic instabilities using a 3-D particle code. This work is often coupled with the task of numerically generating equilibria which model experimental devices. Ways to apply statistical closure approximations to study tokamak-edge plasma turbulence are being examined. In addition to these computational physics studies, the group has developed a number of linear systems solvers for general classes of physics problems and has been making a major effort at ascertaining how to efficiently utilize multiprocessor computers.

  10. New developments in the Saphire computer codes

    SciTech Connect

    Russell, K.D.; Wood, S.T.; Kvarfordt, K.J.

    1996-03-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a suite of computer programs that were developed to create and analyze a probabilistic risk assessment (PRA) of a nuclear power plant. Many recent enhancements to this suite of codes have been made. This presentation will provide an overview of these features and capabilities. The presentation will include a discussion of the new GEM module. This module greatly reduces and simplifies the work necessary to use the SAPHIRE code in event assessment applications. An overview of the features provided in the new Windows version will also be provided. This version is a full Windows 32-bit implementation and offers many new and exciting features. [A separate computer demonstration was held to allow interested participants to get a preview of these features.] The new capabilities that have been added since version 5.0 will be covered. Some of these major new features include the ability to store an unlimited number of basic events, gates, systems, sequences, etc.; the addition of improved reporting capabilities to allow the user to generate and "scroll" through custom reports; the addition of multi-variable importance measures; and the simplification of the user interface. Although originally designed as a PRA Level 1 suite of codes, capabilities have recently been added to SAPHIRE to allow the user to apply the code in Level 2 analyses. These features will be discussed in detail during the presentation. The modifications and capabilities added to this version of SAPHIRE significantly extend the code in many important areas. Together, these extensions represent a major step forward in PC-based risk analysis tools. This presentation provides a current up-to-date status of these important PRA analysis tools.

  11. MAGNUM-2D computer code: user's guide

    SciTech Connect

    England, R.L.; Kline, N.W.; Ekblad, K.J.; Baca, R.G.

    1985-01-01

    Information relevant to the general use of the MAGNUM-2D computer code is presented. This computer code was developed for the purpose of modeling (i.e., simulating) the thermal and hydraulic conditions in the vicinity of a waste package emplaced in a deep geologic repository. The MAGNUM-2D code computes (1) the temperature field surrounding the waste package as a function of the heat generation rate of the nuclear waste and thermal properties of the basalt and (2) the hydraulic head distribution and associated groundwater flow fields as a function of the temperature gradients and hydraulic properties of the basalt. MAGNUM-2D is a two-dimensional numerical model for transient or steady-state analysis of coupled heat transfer and groundwater flow in a fractured porous medium. The governing equations consist of a set of coupled, quasi-linear partial differential equations that are solved using a Galerkin finite-element technique. A Newton-Raphson algorithm is embedded in the Galerkin functional to formulate the problem in terms of the incremental changes in the dependent variables. Both triangular and quadrilateral finite elements are used to represent the continuum portions of the spatial domain. Line elements may be used to represent discrete conduits. 18 refs., 4 figs., 1 tab.
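
    A one-dimensional sketch of the solution strategy described above (Galerkin linear elements with a Newton-Raphson iteration on incremental changes) is given below for steady nonlinear heat conduction. The conductivity law and boundary values are illustrative, not the basalt properties used in MAGNUM-2D.

    ```python
    import numpy as np

    # 1D Galerkin sketch: linear elements for -d/dx( k(T) dT/dx ) = 0,
    # Newton-Raphson on incremental updates. Values are illustrative.
    n, L = 21, 1.0
    x = np.linspace(0.0, L, n)
    h = x[1] - x[0]
    k = lambda T: 1.0 + 0.5 * T            # assumed conductivity law

    def residual(T):
        R = np.zeros(n)
        for e in range(n - 1):              # assemble element contributions
            ke = k(0.5 * (T[e] + T[e + 1])) / h
            R[e]     += ke * (T[e] - T[e + 1])
            R[e + 1] += ke * (T[e + 1] - T[e])
        R[0], R[-1] = T[0] - 1.0, T[-1] - 0.0   # Dirichlet end conditions
        return R

    T = np.linspace(1.0, 0.0, n)            # initial guess
    for _ in range(20):                     # Newton-Raphson iteration
        R = residual(T)
        J = np.array([(residual(T + 1e-7 * np.eye(n)[j]) - R) / 1e-7
                      for j in range(n)]).T  # numerical Jacobian
        dT = np.linalg.solve(J, -R)          # incremental change
        T += dT
        if np.linalg.norm(dT) < 1e-10:
            break
    ```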

  12. Development and application of computational aerothermodynamics flowfield computer codes

    NASA Technical Reports Server (NTRS)

    Venkatapathy, Ethiraj

    1994-01-01

    Research was performed in the area of computational modeling and application of hypersonic, high-enthalpy, thermo-chemical nonequilibrium flow (aerothermodynamics) problems. A number of computational fluid dynamic (CFD) codes were developed and applied to simulate high-altitude rocket plumes, the Aeroassist Flight Experiment (AFE), hypersonic base flow for planetary probes, the single expansion ramp nozzle (SERN) model connected with the National Aerospace Plane, hypersonic drag devices, hypersonic ramp flows, ballistic range models, shock tunnel facility nozzles, transient and steady flows in the shock tunnel facility, arc-jet flows, thermochemical nonequilibrium flows around simple and complex bodies, axisymmetric ionized flows of interest to re-entry, unsteady shock-induced combustion phenomena, high-enthalpy pulsed facility simulations, and unsteady shock boundary layer interactions in shock tunnels. Computational modeling involved developing appropriate numerical schemes for the flows of interest and developing, applying, and validating appropriate thermochemical processes. As part of improving the accuracy of the numerical predictions, adaptive grid algorithms were explored, and a user-friendly, self-adaptive code (SAGE) was developed. Aerothermodynamic flows of interest included energy transfer due to strong radiation, and a significant level of effort was spent in developing computational codes for calculating radiation and radiation modeling. In addition, computational tools were developed and applied to predict the radiative heat flux and spectra that reach the model surface.

  13. Majorana Fermion Surface Code for Universal Quantum Computation

    NASA Astrophysics Data System (ADS)

    Vijay, Sagar; Hsieh, Tim; Fu, Liang

    We introduce an exactly solvable model of interacting Majorana fermions realizing Z2 topological order with a Z2 fermion parity grading and lattice symmetries permuting the three fundamental anyon types. We propose a concrete physical realization by utilizing quantum phase slips in an array of Josephson-coupled mesoscopic topological superconductors, which can be implemented in a wide range of solid state systems, including topological insulators, nanowires or two-dimensional electron gases, proximitized by s-wave superconductors. Our model finds a natural application as a Majorana fermion surface code for universal quantum computation, with a single-step stabilizer measurement requiring no physical ancilla qubits, increased error tolerance, and simpler logical gates than a surface code with bosonic physical qubits. We thoroughly discuss protocols for stabilizer measurements, encoding and manipulating logical qubits, and gate implementations.

  14. Majorana Fermion Surface Code for Universal Quantum Computation

    NASA Astrophysics Data System (ADS)

    Vijay, Sagar; Hsieh, Timothy H.; Fu, Liang

    2015-10-01

    We introduce an exactly solvable model of interacting Majorana fermions realizing Z2 topological order with a Z2 fermion parity grading and lattice symmetries permuting the three fundamental anyon types. We propose a concrete physical realization by utilizing quantum phase slips in an array of Josephson-coupled mesoscopic topological superconductors, which can be implemented in a wide range of solid-state systems, including topological insulators, nanowires, or two-dimensional electron gases, proximitized by s -wave superconductors. Our model finds a natural application as a Majorana fermion surface code for universal quantum computation, with a single-step stabilizer measurement requiring no physical ancilla qubits, increased error tolerance, and simpler logical gates than a surface code with bosonic physical qubits. We thoroughly discuss protocols for stabilizer measurements, encoding and manipulating logical qubits, and gate implementations.

  15. Quantum computing classical physics.

    PubMed

    Meyer, David A

    2002-03-15

    In the past decade, quantum algorithms have been found which outperform the best classical solutions known for certain classical problems as well as the best classical methods known for simulation of certain quantum systems. This suggests that they may also speed up the simulation of some classical systems. I describe one class of discrete quantum algorithms which do so--quantum lattice-gas automata--and show how to implement them efficiently on standard quantum computers.

  16. The Physics of Quantum Computation

    NASA Astrophysics Data System (ADS)

    Falci, Giuseppe; Paladino, Elisabette

    2015-10-01

    Quantum Computation has emerged in the past decades as a consequence of the down-scaling of electronic devices to the mesoscopic regime and of advances in the ability to control and measure microscopic quantum systems. QC has many interdisciplinary aspects, ranging from physics and chemistry to mathematics and computer science. In these lecture notes we focus on physical hardware, present-day challenges, and future directions for the design of quantum architectures.

  17. Developing Computational Physics in Nigeria

    NASA Astrophysics Data System (ADS)

    Akpojotor, Godfrey; Enukpere, Emmanuel; Akpojotor, Famous; Ojobor, Sunny

    2009-03-01

    Computer-based instruction is permeating the educational curricula of many countries owing to the realization that computational physics, which involves computer modeling, enhances the teaching/learning process when combined with theory and experiment. For the students, it gives more insight and understanding in the learning process and thereby equips them with the scientific and computing skills to excel in industrial and commercial environments as well as at the Masters and doctoral levels. For the teachers, among other benefits, the availability of open-access sites with both instructional and evaluation materials can improve their performance. With a growing population of students and new challenges in meeting developmental goals, this paper examines the challenges and prospects of the current drive to develop computational physics as a university undergraduate programme, or as a choice of specialized modules or laboratories within the mainstream physics programme, in Nigerian institutions. In particular, the current effort of the Nigerian Computational Physics Working Group to design computational physics programmes that meet the developmental goals of the country is discussed.

  18. COPE: Computer Organized Physical Education.

    ERIC Educational Resources Information Center

    Lambdin, Dolly

    1997-01-01

    Reviews the need for and appropriate use of individual assessment in physical education and explains how computerized data management can combat the logistical difficulties of using the data. Describes project COPE (Computer Organized Physical Education), a computerized data management system for improving recordkeeping, planning, and…

  19. Analog system for computing sparse codes

    DOEpatents

    Rozell, Christopher John; Johnson, Don Herrick; Baraniuk, Richard Gordon; Olshausen, Bruno A.; Ortman, Robert Lowell

    2010-08-24

    A parallel dynamical system for computing sparse representations of data, i.e., where the data can be fully represented in terms of a small number of non-zero code elements, and for reconstructing compressively sensed images. The system is based on the principles of thresholding and local competition and solves a family of sparse approximation problems corresponding to various sparsity metrics. The system utilizes Locally Competitive Algorithms (LCAs), in which nodes in a population continually compete with neighboring units using (usually one-way) lateral inhibition to calculate coefficients representing an input in an overcomplete dictionary.
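
    A minimal LCA sketch follows: each node's internal state is driven toward the dictionary correlation with the input, inhibited by its active neighbors, and thresholded to produce the sparse code. The dynamics below are the standard LCA form with symmetric inhibition and assumed parameter values; the patented analog, usually one-way circuitry is idealized here.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def lca(x, Phi, lam=0.1, dt=0.01, steps=500):
        """Locally Competitive Algorithm sketch: internal states u are
        driven toward Phi^T x, active nodes inhibit one another through
        the lateral weights Phi^T Phi - I (symmetric here; the analog
        system is usually one-way), and outputs a are soft-thresholded
        states."""
        G = Phi.T @ Phi - np.eye(Phi.shape[1])     # lateral inhibition
        b = Phi.T @ x                              # driving input
        u = np.zeros(Phi.shape[1])
        for _ in range(steps):
            a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)  # threshold
            u += dt * (b - u - G @ a)              # competitive dynamics
        return a

    # Overcomplete dictionary: 50-dimensional data, 128 unit-norm atoms.
    Phi = rng.normal(size=(50, 128))
    Phi /= np.linalg.norm(Phi, axis=0)
    a = lca(Phi @ (2.0 * np.eye(128)[3]), Phi)     # input built from atom 3
    print(np.count_nonzero(a), a[3])               # few nonzeros; a[3] large
    ```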

  20. Spiking network simulation code for petascale computers

    PubMed Central

    Kunkel, Susanne; Schmidt, Maximilian; Eppler, Jochen M.; Plesser, Hans E.; Masumoto, Gen; Igarashi, Jun; Ishii, Shin; Fukai, Tomoki; Morrison, Abigail; Diesmann, Markus; Helias, Moritz

    2014-01-01

    Brain-scale networks exhibit a breathtaking heterogeneity in the dynamical properties and parameters of their constituents. At cellular resolution, the entities of theory are neurons and synapses and over the past decade researchers have learned to manage the heterogeneity of neurons and synapses with efficient data structures. Already early parallel simulation codes stored synapses in a distributed fashion such that a synapse solely consumes memory on the compute node harboring the target neuron. As petaflop computers with some 100,000 nodes become increasingly available for neuroscience, new challenges arise for neuronal network simulation software: Each neuron contacts on the order of 10,000 other neurons and thus has targets only on a fraction of all compute nodes; furthermore, for any given source neuron, at most a single synapse is typically created on any compute node. From the viewpoint of an individual compute node, the heterogeneity in the synaptic target lists thus collapses along two dimensions: the dimension of the types of synapses and the dimension of the number of synapses of a given type. Here we present a data structure taking advantage of this double collapse using metaprogramming techniques. After introducing the relevant scaling scenario for brain-scale simulations, we quantitatively discuss the performance on two supercomputers. We show that the novel architecture scales to the largest petascale supercomputers available today. PMID:25346682

  1. Spiking network simulation code for petascale computers.

    PubMed

    Kunkel, Susanne; Schmidt, Maximilian; Eppler, Jochen M; Plesser, Hans E; Masumoto, Gen; Igarashi, Jun; Ishii, Shin; Fukai, Tomoki; Morrison, Abigail; Diesmann, Markus; Helias, Moritz

    2014-01-01

    Brain-scale networks exhibit a breathtaking heterogeneity in the dynamical properties and parameters of their constituents. At cellular resolution, the entities of theory are neurons and synapses and over the past decade researchers have learned to manage the heterogeneity of neurons and synapses with efficient data structures. Already early parallel simulation codes stored synapses in a distributed fashion such that a synapse solely consumes memory on the compute node harboring the target neuron. As petaflop computers with some 100,000 nodes become increasingly available for neuroscience, new challenges arise for neuronal network simulation software: Each neuron contacts on the order of 10,000 other neurons and thus has targets only on a fraction of all compute nodes; furthermore, for any given source neuron, at most a single synapse is typically created on any compute node. From the viewpoint of an individual compute node, the heterogeneity in the synaptic target lists thus collapses along two dimensions: the dimension of the types of synapses and the dimension of the number of synapses of a given type. Here we present a data structure taking advantage of this double collapse using metaprogramming techniques. After introducing the relevant scaling scenario for brain-scale simulations, we quantitatively discuss the performance on two supercomputers. We show that the novel architecture scales to the largest petascale supercomputers available today. PMID:25346682

  2. Spiking network simulation code for petascale computers.

    PubMed

    Kunkel, Susanne; Schmidt, Maximilian; Eppler, Jochen M; Plesser, Hans E; Masumoto, Gen; Igarashi, Jun; Ishii, Shin; Fukai, Tomoki; Morrison, Abigail; Diesmann, Markus; Helias, Moritz

    2014-01-01

    Brain-scale networks exhibit a breathtaking heterogeneity in the dynamical properties and parameters of their constituents. At cellular resolution, the entities of theory are neurons and synapses and over the past decade researchers have learned to manage the heterogeneity of neurons and synapses with efficient data structures. Already early parallel simulation codes stored synapses in a distributed fashion such that a synapse solely consumes memory on the compute node harboring the target neuron. As petaflop computers with some 100,000 nodes become increasingly available for neuroscience, new challenges arise for neuronal network simulation software: Each neuron contacts on the order of 10,000 other neurons and thus has targets only on a fraction of all compute nodes; furthermore, for any given source neuron, at most a single synapse is typically created on any compute node. From the viewpoint of an individual compute node, the heterogeneity in the synaptic target lists thus collapses along two dimensions: the dimension of the types of synapses and the dimension of the number of synapses of a given type. Here we present a data structure taking advantage of this double collapse using metaprogramming techniques. After introducing the relevant scaling scenario for brain-scale simulations, we quantitatively discuss the performance on two supercomputers. We show that the novel architecture scales to the largest petascale supercomputers available today.

  3. Computational Physics Across the Disciplines

    NASA Astrophysics Data System (ADS)

    Crespi, Vincent; Lammert, Paul; Engstrom, Tyler; Owen, Ben

    2011-03-01

    In this informal talk, I will present two case studies of the unexpected convergence of computational techniques across disciplines. First, the marriage of neutron star astrophysics and the materials theory of the mechanical and thermal response of crystalline solids. Although the lower reaches of a neutron star host exotic nuclear physics, the upper few meters of the crust exist in a regime that is surprisingly amenable to standard molecular dynamics simulation, albeit in a physical regime of density orders of magnitude different from those familiar to most condensed matter folk. Computational results on shear strength, thermal conductivity, and other properties here are very relevant to possible gravitational wave signals from these sources. The second example connects not two disciplines of computational physics, but experimental and computational physics, and not from the traditional direction of computation progressively approaching experiment. Instead, experiment is approaching computation: regular lattices of single-domain magnetic islands whose magnetic microstates can be exhaustively enumerated by magnetic force microscopy. The resulting images of island magnetization patterns look essentially like the results of Monte Carlo simulations of Ising systems... statistical physics with the microstate revealed.

  4. Methodology for computational fluid dynamics code verification/validation

    SciTech Connect

    Oberkampf, W.L.; Blottner, F.G.; Aeschliman, D.P.

    1995-07-01

    The issues of verification, calibration, and validation of computational fluid dynamics (CFD) codes have been receiving increasing levels of attention in the research literature and in engineering technology. Both CFD researchers and users of CFD codes are asking more critical and detailed questions concerning the accuracy, range of applicability, reliability and robustness of CFD codes and their predictions. This is a welcome trend because it demonstrates that CFD is maturing from a research tool into one that impacts engineering hardware and system design. In this environment, the broad issue of code quality assurance becomes paramount. However, the philosophy and methodology of building confidence in CFD code predictions has proven to be more difficult than many expected. A wide variety of physical modeling errors and discretization errors are discussed. Here, discretization errors refer to all errors caused by conversion of the original partial differential equations to algebraic equations, and their solution. Boundary conditions for both the partial differential equations and the discretized equations are discussed. Contrasts are drawn between the assumptions and actual use of numerical method consistency and stability. Comments are also made concerning the existence and uniqueness of solutions for both the partial differential equations and the discrete equations. Various techniques are suggested for the detection and estimation of errors caused by physical modeling and discretization of the partial differential equations.
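
    One standard discretization-error technique of the kind discussed can be shown in a few lines: solve the same model problem on three systematically refined grids and compare the observed order of accuracy against the scheme's formal order. The model problem and scheme below are illustrative, not taken from the paper.

    ```python
    import numpy as np

    def solve(n):
        """First-order explicit scheme for u' = -u on [0,1], u(0) = 1;
        returns the discrete u(1). Exact answer is exp(-1)."""
        u, dx = 1.0, 1.0 / n
        for _ in range(n):
            u -= dx * u
        return u

    f_coarse, f_medium, f_fine = solve(50), solve(100), solve(200)
    # Observed order p from three solutions with refinement ratio r = 2.
    p = np.log((f_coarse - f_medium) / (f_medium - f_fine)) / np.log(2.0)
    print(f"observed order of accuracy: {p:.3f}")   # ~1 for this scheme
    ```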

  5. Recommended documentation plan for the FLAG and CHEMFLUB computer codes

    SciTech Connect

    1983-09-02

    Reviews have been conducted of both the FLAG and CHEMFLUB documentation and computer codes. The documentation of both models is (1) incomplete, (2) confusing, (3) not helpful to the reader, (4) filled with extraneous information, and (5) lacking the claimed versatility in analyzing coal gasifier systems. The documentation is such that the computer coding itself must be used as a reference to complete it. Once the codes are set up they are relatively easy to run, and we have exercised both of them. Most of our efforts thus far have been concentrated on FLAG because of its importance and complexity. FLAG in its present form cannot be expected to yield meaningful data applicable to coal gasifier systems, for two reasons. First, the model is incorrect in describing some aspects of fluid-particle behavior in coal gasifier systems. Second, the numerical formulation/solution methodology is incorrectly implemented and introduces spurious numerical effects, thereby obscuring the physics of the model. In brief, this means that the resulting calculations are not correctly related to the physics. CHEMFLUB, while less extensively exercised, is best utilized as a tool for generating first approximations. We have concluded from these reviews that we cannot perform meaningful comparisons as required under tasks 3.3, 3.4, and 3.5 without first reconstructing and, where necessary, correcting the physical/numerical models. A plan is presented for accomplishing this reconstruction/modification.

  6. The Mystery Behind the Code: Differentiated Instruction with Quick Response Codes in Secondary Physical Education

    ERIC Educational Resources Information Center

    Adkins, Megan; Wajciechowski, Misti R.; Scantling, Ed

    2013-01-01

    Quick response codes, better known as QR codes, are small barcodes scanned to receive information about a specific topic. This article explains QR code technology and the utility of QR codes in the delivery of physical education instruction. Consideration is given to how QR codes can be used to accommodate learners of varying ability levels as…

  7. Computer codes for evaluation of control room habitability (HABIT)

    SciTech Connect

    Stage, S.A.

    1996-06-01

    This report describes the Computer Codes for Evaluation of Control Room Habitability (HABIT). HABIT is a package of computer codes designed to be used for the evaluation of control room habitability in the event of an accidental release of toxic chemicals or radioactive materials. Given information about the design of a nuclear power plant, a scenario for the release of toxic chemicals or radionuclides, and information about the air flows and protection systems of the control room, HABIT can be used to estimate the chemical exposure or radiological dose to control room personnel. HABIT is an integrated package of several programs that previously needed to be run separately and required considerable user intervention. This report discusses the theoretical basis and physical assumptions made by each of the modules in HABIT and gives detailed information about the data entry windows. Sample runs are given for each of the modules. A brief section of programming notes is included. A set of computer disks will accompany this report if the report is ordered from the Energy Science and Technology Software Center. The disks contain the files needed to run HABIT on a personal computer running DOS. Source codes for the various HABIT routines are on the disks. Also included are input and output files for three demonstration runs.

  8. TAIR- TRANSONIC AIRFOIL ANALYSIS COMPUTER CODE

    NASA Technical Reports Server (NTRS)

    Dougherty, F. C.

    1994-01-01

    The Transonic Airfoil analysis computer code, TAIR, was developed to employ a fast, fully implicit algorithm to solve the conservative full-potential equation for the steady transonic flow field about an arbitrary airfoil immersed in a subsonic free stream. The full-potential formulation is considered exact under the assumptions of irrotational, isentropic, and inviscid flow. These assumptions are valid for a wide range of practical transonic flows typical of modern aircraft cruise conditions. The primary features of TAIR include: a new fully implicit iteration scheme which is typically many times faster than classical successive line overrelaxation algorithms; a new, reliable artificial density spatial differencing scheme treating the conservative form of the full-potential equation; and a numerical mapping procedure capable of generating curvilinear, body-fitted finite-difference grids about arbitrary airfoil geometries. Three aspects emphasized during the development of the TAIR code were reliability, simplicity, and speed. The reliability of TAIR comes from two sources: the new algorithm employed and the implementation of effective convergence monitoring logic. TAIR achieves ease of use by employing a "default mode" that greatly simplifies code operation, especially by inexperienced users, and many useful options including: several airfoil-geometry input options, flexible user controls over program output, and a multiple solution capability. The speed of the TAIR code is attributed to the new algorithm and the manner in which it has been implemented. Input to the TAIR program consists of airfoil coordinates, aerodynamic and flow-field convergence parameters, and geometric and grid convergence parameters. The airfoil coordinates for many airfoil shapes can be generated in TAIR from just a few input parameters. Most of the other input parameters have default values which allow the user to run an analysis in the default mode by specifying only a few input parameters.

  9. Hanford Meteorological Station computer codes: Volume 2, The PROD computer code

    SciTech Connect

    Andrews, G.L.; Buck, J.W.

    1987-09-01

    At the end of each work shift (day, swing, and graveyard), the Hanford Meteorological Station (HMS), operated by Pacific Northwest Laboratory, issues a forecast of the 200-ft-level wind speed and direction and the weather for use at B Plant and PUREX. These forecasts are called production forecasts. The PROD computer code is used to archive these production forecasts and apply quality assurance checks to the forecasts. The code accesses an input file, which contains the previous forecast's date and shift number, and an output file, which contains the production forecasts for the current month. A data entry form consisting of 20 fields is included in the program. The fields must be filled in by the user. The information entered is appended to the current production monthly forecast file, which provides an archive for the production forecasts. This volume describes the implementation and operation of the PROD computer code at the HMS.

  10. ICAN Computer Code Adapted for Building Materials

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.

    1997-01-01

    The NASA Lewis Research Center has been involved in developing composite micromechanics and macromechanics theories over the last three decades. These activities have resulted in several composite mechanics theories and structural analysis codes whose applications range from material behavior design and analysis to structural component response. One of these computer codes, the Integrated Composite Analyzer (ICAN), is designed primarily to address issues related to designing polymer matrix composites and predicting their properties - including hygral, thermal, and mechanical load effects. Recently, under a cost-sharing cooperative agreement with a Fortune 500 corporation, Master Builders Inc., ICAN was adapted to analyze building materials. The high costs and technical difficulties involved with the fabrication of continuous-fiber-reinforced composites sometimes limit their use. Particulate-reinforced composites can be thought of as a viable alternative. They are as easily processed to near-net shape as monolithic materials, yet have the improved stiffness, strength, and fracture toughness that is characteristic of continuous-fiber-reinforced composites. For example, particle-reinforced metal-matrix composites show great potential for a variety of automotive applications, such as disk brake rotors, connecting rods, cylinder liners, and other high-temperature applications. Building materials, such as concrete, can be thought of as one of the oldest materials in this category of multiphase, particle-reinforced materials. The adaptation of ICAN to analyze particle-reinforced composite materials involved the development of new micromechanics-based theories. A derivative of the ICAN code, ICAN/PART, was developed and delivered to Master Builders Inc. as a part of the cooperative activity.

  11. Enhanced Verification Test Suite for Physics Simulation Codes

    SciTech Connect

    Kamm, J R; Brock, J S; Brandon, S T; Cotrell, D L; Johnson, B; Knupp, P; Rider, W; Trucano, T; Weirs, V G

    2008-10-10

    This document discusses problems with which to augment, in quantity and in quality, the existing tri-laboratory suite of verification problems used by Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and Sandia National Laboratories (SNL). The purpose of verification analysis is to demonstrate whether the numerical results of the discretization algorithms in physics and engineering simulation codes provide correct solutions of the corresponding continuum equations. The key points of this document are: (1) Verification deals with mathematical correctness of the numerical algorithms in a code, while validation deals with physical correctness of a simulation in a regime of interest. This document is about verification. (2) The current seven-problem Tri-Laboratory Verification Test Suite, which has been used for approximately five years at the DOE WP laboratories, is limited. (3) Both the methodology for and technology used in verification analysis have evolved and been improved since the original test suite was proposed. (4) The proposed test problems are in three basic areas: (a) Hydrodynamics; (b) Transport processes; and (c) Dynamic strength-of-materials. (5) For several of the proposed problems we provide a 'strong sense verification benchmark', consisting of (i) a clear mathematical statement of the problem with sufficient information to run a computer simulation, (ii) an explanation of how the code result and benchmark solution are to be evaluated, and (iii) a description of the acceptance criterion for simulation code results. (6) It is proposed that the set of verification test problems with which any particular code be evaluated include some of the problems described in this document. Analysis of the proposed verification test problems constitutes part of a necessary--but not sufficient--step that builds confidence in physics and engineering simulation codes. More complicated test cases, including physics models of greater

  12. A surface code quantum computer in silicon

    PubMed Central

    Hill, Charles D.; Peretz, Eldad; Hile, Samuel J.; House, Matthew G.; Fuechsle, Martin; Rogge, Sven; Simmons, Michelle Y.; Hollenberg, Lloyd C. L.

    2015-01-01

    The exceptionally long quantum coherence times of phosphorus donor nuclear spin qubits in silicon, coupled with the proven scalability of silicon-based nano-electronics, make them attractive candidates for large-scale quantum computing. However, the high threshold of topological quantum error correction can only be captured in a two-dimensional array of qubits operating synchronously and in parallel—posing formidable fabrication and control challenges. We present an architecture that addresses these problems through a novel shared-control paradigm that is particularly suited to the natural uniformity of the phosphorus donor nuclear spin qubit states and electronic confinement. The architecture comprises a two-dimensional lattice of donor qubits sandwiched between two vertically separated control layers forming a mutually perpendicular crisscross gate array. Shared-control lines facilitate loading/unloading of single electrons to specific donors, thereby activating multiple qubits in parallel across the array on which the required operations for surface code quantum error correction are carried out by global spin control. The complexities of independent qubit control, wave function engineering, and ad hoc quantum interconnects are explicitly avoided. With many of the basic elements of fabrication and control based on demonstrated techniques and with simulated quantum operation below the surface code error threshold, the architecture represents a new pathway for large-scale quantum information processing in silicon and potentially in other qubit systems where uniformity can be exploited. PMID:26601310

  13. A surface code quantum computer in silicon.

    PubMed

    Hill, Charles D; Peretz, Eldad; Hile, Samuel J; House, Matthew G; Fuechsle, Martin; Rogge, Sven; Simmons, Michelle Y; Hollenberg, Lloyd C L

    2015-10-01

    The exceptionally long quantum coherence times of phosphorus donor nuclear spin qubits in silicon, coupled with the proven scalability of silicon-based nano-electronics, make them attractive candidates for large-scale quantum computing. However, the high threshold of topological quantum error correction can only be captured in a two-dimensional array of qubits operating synchronously and in parallel-posing formidable fabrication and control challenges. We present an architecture that addresses these problems through a novel shared-control paradigm that is particularly suited to the natural uniformity of the phosphorus donor nuclear spin qubit states and electronic confinement. The architecture comprises a two-dimensional lattice of donor qubits sandwiched between two vertically separated control layers forming a mutually perpendicular crisscross gate array. Shared-control lines facilitate loading/unloading of single electrons to specific donors, thereby activating multiple qubits in parallel across the array on which the required operations for surface code quantum error correction are carried out by global spin control. The complexities of independent qubit control, wave function engineering, and ad hoc quantum interconnects are explicitly avoided. With many of the basic elements of fabrication and control based on demonstrated techniques and with simulated quantum operation below the surface code error threshold, the architecture represents a new pathway for large-scale quantum information processing in silicon and potentially in other qubit systems where uniformity can be exploited. PMID:26601310

  14. A surface code quantum computer in silicon.

    PubMed

    Hill, Charles D; Peretz, Eldad; Hile, Samuel J; House, Matthew G; Fuechsle, Martin; Rogge, Sven; Simmons, Michelle Y; Hollenberg, Lloyd C L

    2015-10-01

    The exceptionally long quantum coherence times of phosphorus donor nuclear spin qubits in silicon, coupled with the proven scalability of silicon-based nano-electronics, make them attractive candidates for large-scale quantum computing. However, the high threshold of topological quantum error correction can only be captured in a two-dimensional array of qubits operating synchronously and in parallel-posing formidable fabrication and control challenges. We present an architecture that addresses these problems through a novel shared-control paradigm that is particularly suited to the natural uniformity of the phosphorus donor nuclear spin qubit states and electronic confinement. The architecture comprises a two-dimensional lattice of donor qubits sandwiched between two vertically separated control layers forming a mutually perpendicular crisscross gate array. Shared-control lines facilitate loading/unloading of single electrons to specific donors, thereby activating multiple qubits in parallel across the array on which the required operations for surface code quantum error correction are carried out by global spin control. The complexities of independent qubit control, wave function engineering, and ad hoc quantum interconnects are explicitly avoided. With many of the basic elements of fabrication and control based on demonstrated techniques and with simulated quantum operation below the surface code error threshold, the architecture represents a new pathway for large-scale quantum information processing in silicon and potentially in other qubit systems where uniformity can be exploited.

  15. PEBBLES: A COMPUTER CODE FOR MODELING PACKING, FLOW AND RECIRCULATION OF PEBBLES IN A PEBBLE BED REACTOR

    SciTech Connect

    Joshua J. Cogliati; Abderrafi M. Ougouag

    2006-10-01

    A comprehensive, high fidelity model for pebble flow has been developed and embodied in the PEBBLES computer code. In this paper, a description of the physical artifacts included in the model is presented and some results from using the computer code for predicting the features of pebble flow and packing in a realistic pebble bed reactor design are shown. The sensitivity of models to various physical parameters is also discussed.

  16. Relevance of Computational Rock Physics

    NASA Astrophysics Data System (ADS)

    Dvorkin, J. P.

    2014-12-01

    The advent of computational rock physics has brought to light an often ignored question: how applicable are controlled-experiment data acquired at one scale to interpreting measurements obtained at a different scale? An answer is not to use a single data point or even a few data points, but rather to find a trend that links two or more rock properties to each other in a selected rock type. In the physical laboratory, these trends are generated by measuring a significant number of samples. In contrast, in the computational laboratory, these trends are hidden inside a very small digital sample and can be derived by subsampling it. Often, the internal heterogeneity of measurable properties inside a small sample mimics the large-scale heterogeneity, making the trend applicable across a range of scales. Computational rock physics is uniquely tooled for finding such trends: although it is virtually impossible to subsample a physical sample and consistently conduct the same laboratory experiments on each of the subsamples, it is straightforward to accomplish this task in the computer.

  17. PREWATE: An interactive preprocessing computer code to the Weight Analysis of Turbine Engines (WATE) computer code

    NASA Technical Reports Server (NTRS)

    Fishbach, L. H.

    1983-01-01

    The Weight Analysis of Turbine Engines (WATE) computer code was developed by Boeing under contract to NASA Lewis. It was designed to function as an adjunct to the Navy/NASA Engine Program (NNEP). NNEP calculates the design and off-design thrust and sfc performance of user-defined engine cycles. The thermodynamic parameters throughout the engine as generated by NNEP are then combined with input parameters defining the component characteristics in WATE to calculate the bare engine weight of the user-defined engine. Preprocessor programs for NNEP were previously developed to simplify the task of creating input datasets. This report describes a similar preprocessor for the WATE code.

  18. Computational physics of the mind

    NASA Astrophysics Data System (ADS)

    Duch, Włodzisław

    1996-08-01

    In the XIX century and earlier, physicists such as Newton, Mayer, Hooke, Helmholtz and Mach were actively engaged in research on psychophysics, trying to relate psychological sensations to the intensities of physical stimuli. Computational physics makes it possible to simulate complex neural processes, giving a chance to answer not only the original psychophysical questions but also to create models of the mind. In this paper several approaches relevant to modeling of the mind are outlined. Since direct modeling of brain functions is rather limited due to the complexity of such models, a number of approximations are introduced. The path from the brain, or computational neurosciences, to the mind, or cognitive sciences, is sketched, with emphasis on higher cognitive functions such as memory and consciousness. No fundamental problems in understanding the mind seem to arise. From a computational point of view, realistic models require massively parallel architectures.

  19. Numerical uncertainty in computational engineering and physics

    SciTech Connect

    Hemez, Francois M

    2009-01-01

    Obtaining a solution that approximates ordinary or partial differential equations on a computational mesh or grid does not necessarily mean that the solution is accurate or even 'correct'. Unfortunately, assessing the quality of discrete solutions by questioning the role played by spatial and temporal discretizations generally comes as a distant third to test-analysis comparison and model calibration. This publication aims to raise awareness of the fact that discrete solutions introduce numerical uncertainty. This uncertainty may, in some cases, overwhelm in complexity and magnitude other sources of uncertainty, including experimental variability, parametric uncertainty and modeling assumptions. The concepts of consistency, convergence and truncation error are overviewed to explain the articulation between the exact solution of the continuous equations, the solution of the modified equations and the discrete solutions computed by a code. The current state of the practice of code and solution verification activities is discussed. An example in the discipline of hydrodynamics illustrates the significant effect that meshing can have on the quality of code predictions. A simple method is proposed to derive bounds on solution uncertainty in cases where the exact solution of the continuous equations, or of its modified equations, is unknown. It is argued that numerical uncertainty originating from mesh discretization should always be quantified and accounted for in the overall uncertainty 'budget' that supports decision-making for applications in computational physics and engineering.
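
    In the spirit of the simple bounding method mentioned (though not necessarily identical to it), Richardson extrapolation offers one standard way to estimate solution uncertainty from two meshes when the exact solution is unknown; the numbers below are illustrative.

    ```python
    # Richardson extrapolation as a stand-in for the paper's bounding
    # method: estimate the mesh-converged value from two discrete
    # solutions and quote their difference as numerical uncertainty.
    # f_h and f_2h are solutions on a fine mesh and a 2x-coarser mesh;
    # p is the formal order of accuracy (assumed values, for illustration).
    f_h, f_2h, p = 0.36695, 0.36605, 1.0

    f_exact_est = f_h + (f_h - f_2h) / (2**p - 1)   # extrapolated value
    u_num = abs(f_exact_est - f_h)                   # numerical uncertainty
    print(f"extrapolated: {f_exact_est:.5f} +/- {u_num:.5f}")
    ```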

  20. An Object-Oriented Approach to Writing Computational Electromagnetics Codes

    NASA Technical Reports Server (NTRS)

    Zimmerman, Martin; Mallasch, Paul G.

    1996-01-01

    Presently, most computer software development in the Computational Electromagnetics (CEM) community employs the structured programming paradigm, particularly using the Fortran language. Other segments of the software community began switching to an Object-Oriented Programming (OOP) paradigm in recent years to help ease design and development of highly complex codes. This paper examines design of a time-domain numerical analysis CEM code using the OOP paradigm, comparing OOP code and structured programming code in terms of software maintenance, portability, flexibility, and speed.

  1. Computational Physics of Small Meteors

    NASA Astrophysics Data System (ADS)

    Surzhikov, S. T.

    2015-10-01

    This paper applies modern computational aerophysics models, originally developed for the mathematical modeling of the aerothermodynamics and radiative gasdynamics of space vehicles, to the investigation of meteoric phenomena. A short analysis of modern problems in meteor physics is presented, and the typical chemical compositions of meteoric bodies are discussed. Considerable attention is given to the non-equilibrium physical-chemical processes accompanying a relatively small meteor at an altitude of 70 km, under conditions where the vibrational relaxation zone exceeds the size of the meteoric body. A two-dimensional radiative gasdynamics model of the physically and chemically nonequilibrium flow field around meteoroid bodies entering the Earth's atmosphere is presented.

  2. Code manual for CONTAIN 2.0: A computer code for nuclear reactor containment analysis

    SciTech Connect

    Murata, K.K.; Williams, D.C.; Griffith, R.O.; Gido, R.G.; Tadios, E.L.; Davis, F.J.; Martinez, G.M.; Washington, K.E.; Tills, J.

    1997-12-01

    The CONTAIN 2.0 computer code is an integrated analysis tool used for predicting the physical conditions, chemical compositions, and distributions of radiological materials inside a containment building following the release of material from the primary system in a light-water reactor accident. It can also predict the source term to the environment. CONTAIN 2.0 is intended to replace the earlier CONTAIN 1.12, which was released in 1991. The purpose of this Code Manual is to provide full documentation of the features and models in CONTAIN 2.0. Besides complete descriptions of the models, this Code Manual provides a complete description of the input and output from the code. CONTAIN 2.0 is a highly flexible and modular code that can run problems that are either quite simple or highly complex. An important aspect of CONTAIN is that the interactions among thermal-hydraulic phenomena, aerosol behavior, and fission product behavior are taken into account. The code includes atmospheric models for steam/air thermodynamics, intercell flows, condensation/evaporation on structures and aerosols, aerosol behavior, and gas combustion. It also includes models for reactor cavity phenomena such as core-concrete interactions and coolant pool boiling. Heat conduction in structures, fission product decay and transport, radioactive decay heating, and the thermal-hydraulic and fission product decontamination effects of engineered safety features are also modeled. To the extent possible, the best available models for severe accident phenomena have been incorporated into CONTAIN, but it is intrinsic to the nature of accident analysis that significant uncertainty exists regarding numerous phenomena. In those cases, sensitivity studies can be performed with CONTAIN by means of user-specified input parameters. Thus, the code can be viewed as a tool designed to assist the knowledgeable reactor safety analyst in evaluating the consequences of specific modeling assumptions.

  3. User Instructions for the Systems Assessment Capability, Rev. 1, Computer Codes Volume 3: Utility Codes

    SciTech Connect

    Eslinger, Paul W.; Aaberg, Rosanne L.; Lopresti, Charles A.; Miley, Terri B.; Nichols, William E.; Strenge, Dennis L.

    2004-09-14

    This document contains detailed user instructions for a suite of utility codes developed for Rev. 1 of the Systems Assessment Capability. The utility codes perform many supporting functions for the capability.

  4. A theory manual for multi-physics code coupling in LIME.

    SciTech Connect

    Belcourt, Noel; Bartlett, Roscoe Ainsworth; Pawlowski, Roger Patrick; Schmidt, Rodney Cannon; Hooper, Russell Warren

    2011-03-01

    The Lightweight Integrating Multi-physics Environment (LIME) is a software package for creating multi-physics simulation codes. Its primary application space is situations in which computer codes already exist to solve different parts of a multi-physics problem and now need to be coupled with one another. In this report we define a common domain language for discussing multi-physics coupling and describe the basic theory associated with the multi-physics coupling algorithms that LIME supports. We provide an assessment of coupling techniques for both steady-state and time-dependent coupled systems. Example couplings are also demonstrated.
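
    As an illustration of the class of coupling algorithms the report discusses, the sketch below couples two toy single-physics solvers with a fixed-point (Picard) iteration, one of the standard techniques for steady-state coupled systems. The solver functions, feedback coefficients, and tolerance are invented for illustration; this is not LIME's API.

```python
# Minimal sketch of fixed-point (Picard) coupling between two
# single-physics "codes". The solver functions, feedback
# coefficients, and tolerance below are illustrative stand-ins,
# not LIME's actual interface.

def solve_thermal(power):
    # Toy thermal code: temperature responds linearly to power.
    return 300.0 + 0.5 * power

def solve_neutronics(temperature):
    # Toy neutronics code: power drops with temperature feedback.
    return 1000.0 / (1.0 + 0.001 * (temperature - 300.0))

def picard_couple(tol=1e-8, max_iters=100):
    power = 1000.0                          # initial guess
    for iteration in range(1, max_iters + 1):
        temperature = solve_thermal(power)
        new_power = solve_neutronics(temperature)
        if abs(new_power - power) < tol:    # transfers have converged
            return temperature, new_power, iteration
        power = new_power
    raise RuntimeError("Picard iteration did not converge")

T, P, iters = picard_couple()
print(f"converged in {iters} iterations: T = {T:.2f} K, P = {P:.2f} W")
```

    Newton-based coupling replaces the simple substitution step with a Jacobian solve, trading robustness for per-iteration cost; that trade-off is the kind of assessment the report provides.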

  5. Computational Methods for Collisional Plasma Physics

    SciTech Connect

    Lasinski, B F; Larson, D J; Hewett, D W; Langdon, A B; Still, C H

    2004-02-18

    Modeling the high density, high temperature plasmas produced by intense laser or particle beams requires accurate simulation of a large range of plasma collisionality. Current simulation algorithms accurately and efficiently model collisionless and collision-dominated plasmas. The important parameter regime between these extremes, semi-collisional plasmas, has been inadequately addressed to date. LLNL efforts to understand and harness high-energy-density physics phenomena for stockpile stewardship require accurate simulation of such plasmas. We have made significant progress towards our goal: building a new modeling capability to accurately simulate the full range of collisional plasma physics phenomena. Our project has developed a computer model using a two-pronged approach: a new adaptive-resolution, "smart" particle-in-cell (PIC) algorithm, complex particle kinetics (CPK), and a robust 3D massively parallel plasma production code, Z3, with collisional extensions. Our new CPK algorithms expand the function of point particles in traditional plasma PIC models by including finite size and internal dynamics. This project has enhanced LLNL's competency in computational plasma physics and contributed to LLNL's expertise and forefront position in plasma modeling. The computational models developed will be applied to plasma problems of interest to LLNL's stockpile stewardship mission. Such problems include semi-collisional behavior in hohlraums, high-energy-density physics experiments, and the physics of high altitude nuclear explosions (HANE). Over the course of this LDRD project, the world's largest fully electromagnetic PIC calculation was run, enabled by the adaptation of Z3 to the Advanced Simulation and Computing (ASCI) White system. This milestone calculation simulated an entire laser illumination speckle, brought new realism to laser-plasma interaction simulations, and was directly applicable to laser target physics. For the first time, magnetic

  6. Hanford Meteorological Station computer codes: Volume 4, The SUM computer code

    SciTech Connect

    Andrews, G.L.; Buck, J.W.

    1987-09-01

    At the end of each swing shift, the Hanford Meteorological Station (HMS), operated by Pacific Northwest Laboratory, archives a set of daily weather observations. These weather observations are a summary of the maximum and minimum temperature, total precipitation, maximum and minimum relative humidity, total snowfall, total snow depth at 1200 Greenwich Mean Time (GMT), and maximum wind speed plus the direction from which the wind occurred and the time it occurred. This summary also indicates the occurrence of rain, snow, and other weather phenomena. The SUM computer code is used to archive the summary and apply quality assurance checks to the data. This code accesses an input file that contains the date of the previous archive and an output file that contains a daily weather summary for the current month. As part of the program, a data entry form consisting of 21 fields must be filled in by the user. The information on the form is appended to the monthly file, which provides an archive for the daily weather summary. This volume describes the implementation and operation of the SUM computer code at the HMS.
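
    The following sketch illustrates the kind of range-based quality-assurance check a code like SUM applies before appending a daily summary to the monthly archive. The field names, limits, and file format are invented for illustration; they are not the actual HMS data entry form.

```python
# Illustrative range-style QA checks before archiving a daily weather
# summary. Field names and limits are invented, not the HMS format.

QA_LIMITS = {
    "max_temp_f":   (-30.0, 120.0),
    "min_temp_f":   (-40.0, 100.0),
    "precip_in":    (0.0, 10.0),
    "max_wind_mph": (0.0, 150.0),
}

def qa_check(record):
    """Return a list of QA failures for one daily summary record."""
    failures = []
    for field, (lo, hi) in QA_LIMITS.items():
        value = record[field]
        if not lo <= value <= hi:
            failures.append(f"{field}={value} outside [{lo}, {hi}]")
    if record["min_temp_f"] > record["max_temp_f"]:
        failures.append("min_temp_f exceeds max_temp_f")
    return failures

def append_to_monthly_archive(record, path):
    failures = qa_check(record)
    if failures:
        raise ValueError("QA failed: " + "; ".join(failures))
    with open(path, "a") as f:
        f.write(",".join(str(record[k]) for k in sorted(record)) + "\n")

record = {"max_temp_f": 78.0, "min_temp_f": 55.0,
          "precip_in": 0.12, "max_wind_mph": 23.0}
append_to_monthly_archive(record, "summary_1987_09.csv")
```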

  7. Selection of a computer code for Hanford low-level waste engineered-system performance assessment

    SciTech Connect

    McGrail, B.P.; Mahoney, L.A.

    1995-10-01

    Planned performance assessments for the proposed disposal of low-level waste (LLW) glass produced from remediation of wastes stored in underground tanks at Hanford, Washington, will require calculations of radionuclide release rates from the subsurface disposal facility. These calculations will be done with the aid of computer codes. Currently available computer codes were ranked in terms of the feature sets implemented in each code that match a set of physical, chemical, numerical, and functional capabilities needed to assess release rates from the engineered system. The needed capabilities were identified from an analysis of the important physical and chemical processes expected to affect LLW glass corrosion and the mobility of radionuclides. The highest-ranked computer code was found to be the ARES-CT code, developed at PNL for the US Department of Energy for the evaluation of land disposal sites.
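
    A feature-set ranking of this kind reduces to a weighted-score matrix. The sketch below shows the mechanics with invented codes, criteria, weights, and scores; it is not the actual PNL evaluation.

```python
# Toy weighted-score ranking of candidate codes against needed
# capabilities. Codes, criteria, weights, and scores are placeholders.

criteria_weights = {"chemistry": 0.4, "transport": 0.3,
                    "numerics": 0.2, "usability": 0.1}

# Score each candidate code 0-10 on each criterion.
scores = {
    "CODE-A": {"chemistry": 9, "transport": 7, "numerics": 8, "usability": 6},
    "CODE-B": {"chemistry": 5, "transport": 9, "numerics": 7, "usability": 8},
}

def weighted_total(code_scores):
    return sum(criteria_weights[c] * s for c, s in code_scores.items())

ranking = sorted(scores, key=lambda code: weighted_total(scores[code]),
                 reverse=True)
for code in ranking:
    print(f"{code}: {weighted_total(scores[code]):.2f}")
```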

  8. Hanford Meteorological Station computer codes: Volume 6, The SFC computer code

    SciTech Connect

    Andrews, G.L.; Buck, J.W.

    1987-11-01

    Each hour the Hanford Meteorological Station (HMS), operated by Pacific Northwest Laboratory, records and archives weather observations. Hourly surface weather observations consist of weather phenomena such as cloud type and coverage; dry bulb, wet bulb, and dew point temperatures; relative humidity; atmospheric pressure; and wind speed and direction. The SFC computer code is used to archive those weather observations and apply quality assurance checks to the data. This code accesses an input file, which contains the previous archive's date and hour, and an output file, which contains surface observations for the current day. As part of the program, a data entry form consisting of 24 fields must be filled in by the user. The information on the form is appended to the daily file, which provides an archive for the hourly surface observations.

  9. Convergence acceleration of the Proteus computer code with multigrid methods

    NASA Technical Reports Server (NTRS)

    Demuren, A. O.; Ibraheem, S. O.

    1992-01-01

    Presented here is the first part of a study to implement convergence acceleration techniques based on the multigrid concept in the Proteus computer code. A review is given of previous studies on the implementation of multigrid methods in computer codes for compressible flow analysis. Also presented is a detailed stability analysis of upwind and central-difference based numerical schemes for solving the Euler and Navier-Stokes equations. Results are given of a convergence study of the Proteus code on computational grids of different sizes. The results presented here form the foundation for the implementation of multigrid methods in the Proteus code.
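
    The multigrid concept behind this convergence acceleration can be shown in a few lines. The sketch below is a textbook two-grid correction scheme for the 1D Poisson equation with weighted-Jacobi smoothing; it illustrates the idea only and is unrelated to the Proteus implementation.

```python
import numpy as np

# Textbook two-grid correction scheme for -u'' = f on (0, 1) with
# zero Dirichlet boundaries, illustrating the multigrid concept.

def jacobi(u, f, h, sweeps, omega=2.0 / 3.0):
    # Weighted Jacobi smoothing for A u = f, A = tridiag(-1, 2, -1)/h^2.
    for _ in range(sweeps):
        u[1:-1] = ((1.0 - omega) * u[1:-1]
                   + omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]))
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / h**2
    return r

def coarse_solve(r2, h2):
    # Exact solve of the coarse residual equation at interior points.
    m = len(r2) - 2
    A = (np.diag(2.0 * np.ones(m)) + np.diag(-np.ones(m - 1), 1)
         + np.diag(-np.ones(m - 1), -1)) / h2**2
    e2 = np.zeros_like(r2)
    e2[1:-1] = np.linalg.solve(A, r2[1:-1])
    return e2

def two_grid(u, f, h):
    u = jacobi(u, f, h, sweeps=3)               # pre-smooth
    r2 = residual(u, f, h)[::2].copy()          # restrict by injection
    e2 = coarse_solve(r2, 2.0 * h)              # coarse-grid correction
    e = np.zeros_like(u)
    e[::2] = e2                                 # prolong: copy coarse values
    e[1::2] = 0.5 * (e2[:-1] + e2[1:])          # prolong: interpolate
    return jacobi(u + e, f, h, sweeps=3)        # post-smooth

n = 65                                          # fine-grid points
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)                # exact solution sin(pi x)
u = np.zeros(n)
for cycle in range(10):
    u = two_grid(u, f, h)
print("max error:", np.abs(u - np.sin(np.pi * x)).max())
```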

  10. Proceduracy: Computer Code Writing in the Continuum of Literacy

    ERIC Educational Resources Information Center

    Vee, Annette

    2010-01-01

    This dissertation looks at computer programming through the lens of literacy studies, building from the concept of code as a written text with expressive and rhetorical power. I focus on the intersecting technological and social factors of computer code writing as a literacy--a practice I call "proceduracy". Like literacy, proceduracy is a human…

  11. 40 CFR 194.23 - Models and computer codes.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    COMPLIANCE WITH THE 40 CFR PART 191 DISPOSAL REGULATIONS, Compliance Certification and Re-certification, General Requirements, § 194.23 Models and computer codes. (a) Any compliance application shall include: (1) ... [40 CFR, Protection of Environment, Vol. 25, revised as of 2014-07-01]

  12. 40 CFR 194.23 - Models and computer codes.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    COMPLIANCE WITH THE 40 CFR PART 191 DISPOSAL REGULATIONS, Compliance Certification and Re-certification, General Requirements, § 194.23 Models and computer codes. (a) Any compliance application shall include: (1) ... [40 CFR, Protection of Environment, Vol. 25, revised as of 2011-07-01]

  13. 40 CFR 194.23 - Models and computer codes.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    COMPLIANCE WITH THE 40 CFR PART 191 DISPOSAL REGULATIONS, Compliance Certification and Re-certification, General Requirements, § 194.23 Models and computer codes. (a) Any compliance application shall include: (1) ... [40 CFR, Protection of Environment, Vol. 26, revised as of 2013-07-01]

  14. 40 CFR 194.23 - Models and computer codes.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    COMPLIANCE WITH THE 40 CFR PART 191 DISPOSAL REGULATIONS, Compliance Certification and Re-certification, General Requirements, § 194.23 Models and computer codes. (a) Any compliance application shall include: (1) ... [40 CFR, Protection of Environment, Vol. 26, revised as of 2012-07-01]

  15. 40 CFR 194.23 - Models and computer codes.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    COMPLIANCE WITH THE 40 CFR PART 191 DISPOSAL REGULATIONS, Compliance Certification and Re-certification, General Requirements, § 194.23 Models and computer codes. (a) Any compliance application shall include: (1) ... [40 CFR, Protection of Environment, Vol. 24, revised as of 2010-07-01]

  16. Liquid rocket combustor computer code development

    NASA Technical Reports Server (NTRS)

    Liang, P. Y.

    1985-01-01

    The Advanced Rocket Injector/Combustor Code (ARICC), developed to model the complete chemical/fluid/thermal processes occurring inside rocket combustion chambers, is highlighted. The code, derived from the CONCHAS-SPRAY code originally developed at Los Alamos National Laboratory, incorporates powerful features such as the ability to model complex injector and combustion chamber geometries, Lagrangian tracking of droplets, full chemical equilibrium and kinetic reactions for multiple species, a fractional volume of fluid (VOF) description of liquid jet injection in addition to the gaseous-phase fluid dynamics, and turbulent mass, energy, and momentum transport. Atomization and droplet dynamic models from earlier-generation codes are transplanted into the present code. Currently, ARICC is specialized for liquid oxygen/hydrogen propellants, although other fuel/oxidizer pairs can easily be substituted.

  17. Computational space physics in the undergraduate physics curriculum

    NASA Astrophysics Data System (ADS)

    Martin, R. F.

    2006-12-01

    Computational physics education is a significant aspect of the undergraduate physics curriculum at a growing number of colleges and universities as exhibited, for example, by special sessions on this topic at recent APS (March 2004) and AAPT (Summer 2006) conferences. Since computational space physics has been a forefront research area for decades, it is only natural that examples from our discipline have a presence in this educational trend. The Illinois State University physics department has integrated computing throughout its physics curriculum and has developed a targeted undergraduate major sequence in computational physics. Several examples from magnetospheric physics have been integrated as computational projects in courses such as electromagnetism, advanced computational physics, nonlinear dynamics, and a capstone research course. I will discuss the movement toward more computer simulation in the undergraduate curriculum and give some specific examples of space physics course projects.

  18. Literature review of United States utilities computer codes for calculating actinide isotope content in irradiated fuel

    SciTech Connect

    Horak, W.C.; Lu, Ming-Shih

    1991-12-01

    This paper reviews the accuracy and precision of methods used by United States electric utilities to determine the actinide isotopic and elemental content of irradiated fuel. After an extensive literature search, three key code suites were selected for review. Two suites of computer codes, CASMO and ARMP, are used for reactor physics calculations; the ORIGEN code is used for spent fuel calculations. They are also the most widely used codes in the nuclear industry throughout the world. Although none of these codes calculates actinide isotopic content as a primary output intended for safeguards applications, accurate calculation of actinide isotopic content is necessary for them to fulfill their function.

  19. Hanford Meteorological Station computer codes: Volume 7, The RIVER computer code

    SciTech Connect

    Andrews, G.L.; Buck, J.W.

    1988-03-01

    The RIVER computer code is used to archive Columbia River data measured at the 100N reactor. The data are recorded every other hour starting at 0100 Pacific Standard Time (12 observations in a day) and consist of river elevation, temperature, and flow rate. The program prompts the user for river data by using a data entry form. After the data have been entered and verified, the program appends each hour of river data to the end of the corresponding surface observation record for the current day. The appended data are then stored in the current month's surface observation file.

  20. When does a physical system compute?

    PubMed Central

    Horsman, Clare; Stepney, Susan; Wagner, Rob C.; Kendon, Viv

    2014-01-01

    Computing is a high-level process of a physical system. Recent interest in non-standard computing systems, including quantum and biological computers, has brought this physical basis of computing to the forefront. There has been, however, no consensus on how to tell if a given physical system is acting as a computer or not; leading to confusion over novel computational devices, and even claims that every physical event is a computation. In this paper, we introduce a formal framework that can be used to determine whether a physical system is performing a computation. We demonstrate how the abstract computational level interacts with the physical device level, in comparison with the use of mathematical models in experimental science. This powerful formulation allows a precise description of experiments, technology, computation and simulation, giving our central conclusion: physical computing is the use of a physical system to predict the outcome of an abstract evolution. We give conditions for computing, illustrated using a range of non-standard computing scenarios. The framework also covers broader computing contexts, where there is no obvious human computer user. We introduce the notion of a ‘computational entity’, and its critical role in defining when computing is taking place in physical systems. PMID:25197245

  1. Tuning Complex Computer Codes to Data and Optimal Designs

    NASA Astrophysics Data System (ADS)

    Park, Jeong Soo

    Modern scientific researchers often use complex computer simulation codes for theoretical investigations. We model the response of a computer simulation code as the realization of a stochastic process. This approach, design and analysis of computer experiments (DACE), provides a statistical basis for analysing computer data, for designing experiments for efficient prediction, and for comparing computer-encoded theory to experiments. An objective of research in a large class of dynamic systems is to determine any unknown coefficients in a theory. The coefficients can be determined by "tuning" the computer model to the real data so that the tuned code gives a good match to the real experimental data. Three design strategies for computer experiments are considered: data-adaptive sequential A-optimal design, maximum entropy design, and optimal Latin-hypercube design. The following "code tuning" methodologies are proposed: nonlinear least squares, joint MLE, "separated" joint MLE, and a Bayesian method. The performance of these methods has been studied in several toy models. In an application to nuclear fusion devices, a cheaper emulator of the simulation code (BALDUR) was constructed, and the transport coefficients were estimated from data of two tokamaks (ASDEX and PDX). Tuning complex computer codes to data using statistical estimation methods and a cheap emulator of the code, along with careful designs of computer experiments, with applications to nuclear fusion devices, is the topic of this thesis.
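
    A minimal sketch of the "code tuning" idea, assuming a toy one-parameter simulation code and synthetic data: evaluate the code on a coarse parameter grid (a crude stand-in for a cheap emulator), then refine the least-squares estimate by golden-section search. The code, data, and parameter ranges are invented; the thesis tunes BALDUR with statistical emulators and real tokamak data.

```python
import numpy as np

# Sketch of "code tuning" by least squares with a toy one-parameter
# "simulation code" and synthetic data. Illustrative only.

rng = np.random.default_rng(0)

def simulation_code(theta, x):
    # Stand-in for an expensive computer code with coefficient theta.
    return np.exp(-theta * x)

x_data = np.linspace(0.1, 2.0, 15)
theta_true = 1.3
y_data = simulation_code(theta_true, x_data) + 0.01 * rng.standard_normal(15)

def sse(theta):
    # Sum-of-squares mismatch between code output and "experiment".
    return np.sum((simulation_code(theta, x_data) - y_data) ** 2)

# Coarse grid of code runs, then golden-section refinement around
# the best grid point.
thetas = np.linspace(0.5, 2.5, 41)
i = int(np.argmin([sse(t) for t in thetas]))
a, b = thetas[max(i - 1, 0)], thetas[min(i + 1, len(thetas) - 1)]

phi = (np.sqrt(5.0) - 1.0) / 2.0
for _ in range(40):
    c, d = b - phi * (b - a), a + phi * (b - a)
    a, b = (a, d) if sse(c) < sse(d) else (c, b)

print("tuned theta:", 0.5 * (a + b))   # close to theta_true = 1.3
```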

  2. Computer Tensor Codes to Design the Warp Drive

    NASA Astrophysics Data System (ADS)

    Maccone, C.

    To address problems in Breakthrough Propulsion Physics (BPP) and design the Warp Drive one needs sheer computing capabilities. This is because General Relativity (GR) and Quantum Field Theory (QFT) are so mathematically sophisticated that the amount of analytical calculations is prohibitive and one can hardly do all of them by hand. In this paper we make a comparative review of the main tensor calculus capabilities of the three most advanced and commercially available “symbolic manipulator” codes. We also point out that currently one faces such a variety of different conventions in tensor calculus that it is difficult or impossible to compare results obtained by different scholars in GR and QFT. Mathematical physicists, experimental physicists and engineers have each their own way of customizing tensors, especially by using different metric signatures, different metric determinant signs, different definitions of the basic Riemann and Ricci tensors, and by adopting different systems of physical units. This chaos greatly hampers progress toward the design of the Warp Drive. It is thus suggested that NASA would be a suitable organization to establish standards in symbolic tensor calculus and anyone working in BPP should adopt these standards. Alternatively other institutions, like CERN in Europe, might consider the challenge of starting the preliminary implementation of a Universal Tensor Code to design the Warp Drive.

  3. Computational Physics and Evolutionary Dynamics

    NASA Astrophysics Data System (ADS)

    Fontana, Walter

    2000-03-01

    One aspect of computational physics deals with the characterization of statistical regularities in materials. Computational physics meets biology when these materials can evolve. RNA molecules are a case in point. The folding of RNA sequences into secondary structures (shapes) inspires a simple biophysically grounded genotype-phenotype map that can be explored computationally and in the laboratory. We have identified some statistical regularities of this map and begin to understand their evolutionary consequences. (1) ``typical shapes'': Only a small subset of shapes realized by the RNA folding map is typical, in the sense of containing shapes that are realized significantly more often than others. Consequence: evolutionary histories mostly involve typical shapes, and thus exhibit generic properties. (2) ``neutral networks'': Sequences folding into the same shape are mutationally connected into a network that reaches across sequence space. Consequence: Evolutionary transitions between shapes reflect the fraction of boundary shared by the corresponding neutral networks in sequence space. The notion of a (dis)continuous transition can be made rigorous. (3) ``shape space covering'': Given a random sequence, a modest number of mutations suffices to reach a sequence realizing any typical shape. Consequence: The effective search space for evolutionary optimization is greatly reduced, and adaptive success is less dependent on initial conditions. (4) ``plasticity mirrors variability'': The repertoire of low energy shapes of a sequence is an indicator of how much and in which ways its energetically optimal shape can be altered by a single point mutation. Consequence: (i) Thermodynamic shape stability and mutational robustness are intimately linked. (ii) When natural selection favors the increase of stability, extreme mutational robustness -- to the point of an evolutionary dead-end -- is produced as a side effect. (iii) The hallmark of robust shapes is modularity.

  4. Application of computational fluid dynamics methods to improve thermal hydraulic code analysis

    NASA Astrophysics Data System (ADS)

    Sentell, Dennis Shannon, Jr.

    A computational fluid dynamics code is used to model the primary natural circulation loop of a proposed small modular reactor for comparison to experimental data and best-estimate thermal-hydraulic code results. Recent advances in computational fluid dynamics modeling capabilities make such codes attractive alternatives to the current conservative approach of coupled best-estimate thermal-hydraulic codes and uncertainty evaluations. The results from a computational fluid dynamics analysis are benchmarked against the experimental test results of a 1:3 length, 1:254 volume, full-pressure, full-temperature scale small modular reactor during steady-state power operations and during a depressurization transient. A comparative evaluation of the experimental data, the thermal-hydraulic code results, and the computational fluid dynamics code results provides an opportunity to validate the best-estimate thermal-hydraulic code's treatment of a natural circulation loop and provides insights into expanded use of the computational fluid dynamics code in future designs and operations. Additionally, a sensitivity analysis is conducted to determine the physical phenomena that most strongly affect operation of the proposed reactor's natural circulation loop. The combination of the comparative evaluation and the sensitivity analysis increases confidence in model development for natural circulation loops and supports reliability improvements of the thermal-hydraulic code.

  5. The r-Java 2.0 code: nuclear physics

    NASA Astrophysics Data System (ADS)

    Kostka, M.; Koning, N.; Shand, Z.; Ouyed, R.; Jaikumar, P.

    2014-08-01

    Aims: We present r-Java 2.0, a nucleosynthesis code for open use that performs r-process calculations, along with a suite of other analysis tools. Methods: Equipped with a straightforward graphical user interface, r-Java 2.0 is capable of simulating nuclear statistical equilibrium (NSE), calculating r-process abundances for a wide range of input parameters and astrophysical environments, computing the mass fragmentation from neutron-induced fission and studying individual nucleosynthesis processes. Results: In this paper we discuss enhancements to this version of r-Java, especially the ability to solve the full reaction network. The sophisticated fission methodology incorporated in r-Java 2.0 that includes three fission channels (beta-delayed, neutron-induced, and spontaneous fission), along with computation of the mass fragmentation, is compared to the upper limit on mass fission approximation. The effects of including beta-delayed neutron emission on r-process yield is studied. The role of Coulomb interactions in NSE abundances is shown to be significant, supporting previous findings. A comparative analysis was undertaken during the development of r-Java 2.0 whereby we reproduced the results found in the literature from three other r-process codes. This code is capable of simulating the physical environment of the high-entropy wind around a proto-neutron star, the ejecta from a neutron star merger, or the relativistic ejecta from a quark nova. Likewise the users of r-Java 2.0 are given the freedom to define a custom environment. This software provides a platform for comparing proposed r-process sites.
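
    For illustration, the sketch below integrates the simplest possible reaction network, a three-species sequential decay chain, and checks the result against the analytic Bateman solution. The decay constants are arbitrary; r-Java 2.0 solves a full r-process network with stiff implicit methods rather than the forward Euler used here.

```python
import numpy as np

# Illustrative integration of a tiny reaction network: a sequential
# beta-decay chain A -> B -> C. Decay constants are arbitrary.

lam_A, lam_B = 2.0, 0.5                 # decay constants (1/s)

def rhs(n):
    nA, nB, nC = n
    return np.array([-lam_A * nA,
                     lam_A * nA - lam_B * nB,
                     lam_B * nB])

n = np.array([1.0, 0.0, 0.0])           # initial abundances (pure A)
t, dt, t_end = 0.0, 1.0e-3, 10.0
while t < t_end:
    n += dt * rhs(n)                    # forward Euler step
    t += dt

# Analytic Bateman solution for species B as a cross-check.
nB_exact = lam_A / (lam_B - lam_A) * (np.exp(-lam_A * t_end)
                                      - np.exp(-lam_B * t_end))
print(f"numerical nB = {n[1]:.6f}, Bateman nB = {nB_exact:.6f}")
```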

  6. Hanford Meteorological Station computer codes: Volume 8, The REVIEW computer code

    SciTech Connect

    Andrews, G.L.; Burk, K.W.

    1988-08-01

    The Hanford Meteorological Station (HMS) routinely collects meteorological data from sources on and off the Hanford Site. The data are averaged over both 15 minutes and 1 hour and are maintained in separate databases on the Digital Equipment Corporation (DEC) VAX 11/750 at the HMS. The databases are transferred to the Emergency Management System (EMS) DEC VAX 11/750 computer. The EMS is part of the Unified Dose Assessment Center, which is located on the ground-level floor of the Federal building in Richland and operated by Pacific Northwest Laboratory. The computer program REVIEW is used to display meteorological data in graphical and alphanumeric form from either the 15-minute or hourly database. The code is available on the HMS and EMS computers. The REVIEW program helps maintain a high level of quality assurance on the instruments that collect the data and provides a convenient mechanism for analyzing meteorological data on a routine basis and during emergency response situations.

  7. Physics and numerics of the tensor code (incomplete preliminary documentation)

    SciTech Connect

    Burton, D.E.; Lettis, L.A. Jr.; Bryan, J.B.; Frary, N.R.

    1982-07-15

    The present TENSOR code is a descendant of a code originally conceived by Maenchen and Sack and later adapted by Cherry. Originally, the code was a two-dimensional Lagrangian explicit finite difference code which solved the equations of continuum mechanics. Since then, implicit and arbitrary Lagrange-Euler (ALE) algorithms have been added. The code has been used principally to solve problems involving the propagation of stress waves through earth materials, and considerable development of rock and soil constitutive relations has been done. The code has been applied extensively to the containment of underground nuclear tests, nuclear and high explosive surface and subsurface cratering, and energy and resource recovery. TENSOR is supported by a substantial array of ancillary routines. The initial conditions are set up by a generator code, TENGEN. ZON is a multipurpose code which can be used for zoning, rezoning, overlaying, and linking from other codes. Linking from some codes is facilitated by another code, RADTEN. TENPLT is a fixed-time graphics code which provides a wide variety of plotting options and output devices, and which is capable of producing computer movies by postprocessing problem dumps. Time history graphics are provided by the TIMPLT code from temporal dumps produced during production runs. While TENSOR can be run as a stand-alone controllee, a special controller code, TCON, is available to better interface the code with the LLNL computer system during production jobs. In order to standardize compilation procedures and provide quality control, a special compiler code, BC, is used. A number of equation-of-state generators are available, among them ROC and PMUGEN.

  8. Computational physics with PetaFlops computers

    NASA Astrophysics Data System (ADS)

    Attig, Norbert

    2009-04-01

    Driven by technology, Scientific Computing is rapidly entering the PetaFlops era. The Jülich Supercomputing Centre (JSC), one of three German national supercomputing centres, is focusing on the IBM Blue Gene architecture to provide computer resources of this class to its users, the majority of whom are computational physicists. Details of the system will be discussed and applications will be described which significantly benefit from this new architecture.

  9. Optimization of KINETICS Chemical Computation Code

    NASA Technical Reports Server (NTRS)

    Donastorg, Cristina

    2012-01-01

    NASA JPL has been creating a code in FORTRAN called KINETICS to model the chemistry of planetary atmospheres. Recently there has been an effort to introduce the Message Passing Interface (MPI) into the code so as to cut down the run time of the program. There has been some implementation of MPI into KINETICS; however, the code could still be more efficient than it currently is. One way to increase efficiency is to send only certain variables to all the processes when an MPI subroutine is called and to gather only certain variables when the subroutine is finished. Therefore, all the variables used in three of the main subroutines needed to be investigated. Because of the sheer amount of code to comb through, this task was given as a ten-week project. I have been able to create flowcharts outlining the subroutines, common blocks, and functions used within the three main subroutines. From these flowcharts I created tables outlining the variables used in each block and important information about each. All this information will be used to determine how to run MPI in KINETICS in the most efficient way possible.
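
    The optimization described, sending only the variables a subroutine actually reads and gathering only those it writes, can be sketched as follows. The example uses mpi4py for brevity, whereas KINETICS itself is Fortran; the array names are invented.

```python
from mpi4py import MPI
import numpy as np

# Sketch: broadcast only what a worker reads, gather only what it
# writes, instead of shipping every common block. Illustrative names;
# run with, e.g., "mpirun -n 4 python kinetics_mpi_sketch.py".

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

n_species = 8
rates = np.empty(n_species)

if rank == 0:
    rates[:] = np.linspace(1.0, 2.0, n_species)   # read by all ranks

comm.Bcast(rates, root=0)        # send only what the kernel needs

# Each rank updates its slice of the (toy) concentration update...
local = rates[rank::size] * 0.1

# ...and only the updated slices are gathered back on rank 0.
gathered = comm.gather(local, root=0)
if rank == 0:
    print("gathered pieces:", gathered)
```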

  10. Talking about Code: Integrating Pedagogical Code Reviews into Early Computing Courses

    ERIC Educational Resources Information Center

    Hundhausen, Christopher D.; Agrawal, Anukrati; Agarwal, Pawan

    2013-01-01

    Given the increasing importance of soft skills in the computing profession, there is good reason to provide students with more opportunities to learn and practice those skills in undergraduate computing courses. Toward that end, we have developed an active learning approach for computing education called the "Pedagogical Code Review"…

  11. Computer code for charge-exchange plasma propagation

    NASA Technical Reports Server (NTRS)

    Robinson, R. S.; Kaufman, H. R.

    1981-01-01

    The propagation of the charge-exchange plasma from an electrostatic ion thruster is crucial in determining the interaction of that plasma with the associated spacecraft. A model that describes this plasma and its propagation is described, together with a computer code based on this model. The structure and calling sequence of the code, named PLASIM, is described. An explanation of the program's input and output is included, together with samples of both. The code is written in ANSI Standard FORTRAN.

  12. Para: a computer simulation code for plasma driven electromagnetic launchers

    SciTech Connect

    Thio, Y.-C.

    1983-03-01

    A computer code for the simulation of rail-type accelerators utilizing a plasma armature has been developed and is described in detail. Some time-varying properties of the plasma are taken into account in this code, thus allowing the development of a dynamical model of the behavior of a plasma in a rail-type electromagnetic launcher. The code is being successfully used to predict and analyse experiments on small calibre rail-gun launchers.

  13. Computer Code Systems for Use with Meteorological Data.

    1983-09-14

    Version 00 The staff of the Nuclear Regulatory Commission uses the computer codes in this collection to examine, assess, and utilize the hourly values of meteorological data which are received on magnetic tapes in a specified format.

  14. Development of DUST: A computer code that calculates release rates from a LLW disposal unit

    SciTech Connect

    Sullivan, T.M.

    1992-01-01

    Performance assessment of a Low-Level Waste (LLW) disposal facility begins with an estimation of the rate at which radionuclides migrate out of the facility (i.e., the disposal unit source term). The major physical processes that influence the source term are water flow, container degradation, waste form leaching, and radionuclide transport. A computer code, DUST (Disposal Unit Source Term) has been developed which incorporates these processes in a unified manner. The DUST code improves upon existing codes as it has the capability to model multiple container failure times, multiple waste form release properties, and radionuclide specific transport properties. Verification studies performed on the code are discussed.

  15. Development of DUST: A computer code that calculates release rates from a LLW disposal unit

    SciTech Connect

    Sullivan, T.M.

    1992-04-01

    Performance assessment of a Low-Level Waste (LLW) disposal facility begins with an estimation of the rate at which radionuclides migrate out of the facility (i.e., the disposal unit source term). The major physical processes that influence the source term are water flow, container degradation, waste form leaching, and radionuclide transport. A computer code, DUST (Disposal Unit Source Term) has been developed which incorporates these processes in a unified manner. The DUST code improves upon existing codes as it has the capability to model multiple container failure times, multiple waste form release properties, and radionuclide specific transport properties. Verification studies performed on the code are discussed.
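
    A minimal sketch of a disposal-unit source term combining the processes named above, assuming a single nuclide, a step container-failure time, and first-order leaching; all parameter values are illustrative, and this is not the DUST model itself.

```python
import numpy as np

# Toy disposal-unit source term: container failure, fractional
# leaching, and radioactive decay for a single nuclide.

lam = np.log(2) / 30.0      # decay constant, 30-yr half-life (1/yr)
k_leach = 0.05              # fractional leach rate once exposed (1/yr)
t_fail = 20.0               # container failure time (yr)
A0 = 1.0                    # initial inventory (Ci)

def release_rate(t):
    """Release rate (Ci/yr) at time t for a step container failure."""
    if t < t_fail:
        return 0.0          # container intact: no release
    # Inventory decays throughout and leaches after failure.
    inv = A0 * np.exp(-lam * t) * np.exp(-k_leach * (t - t_fail))
    return k_leach * inv

for t in np.linspace(0.0, 100.0, 11):
    print(f"t = {t:5.1f} yr  release = {release_rate(t):.4e} Ci/yr")
```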

  16. Independent peer review of nuclear safety computer codes

    SciTech Connect

    Boyack, B.E.; Jenks, R.P.

    1993-02-01

    A structured process of independent computer code peer review has been developed to assist the US Nuclear Regulatory Commission (NRC) and the US Department of Energy in their nuclear safety missions. This paper focuses on the process that evolved during recent reviews of NRC codes.

  17. Computer-assisted coding and clinical documentation: first things first.

    PubMed

    Tully, Melinda; Carmichael, Angela

    2012-10-01

    Computer-assisted coding tools have the potential to drive improvements in seven areas: transparency of coding; productivity (generally by 20 to 25 percent for inpatient claims); accuracy (by improving specificity of documentation); cost containment (by reducing overtime expenses, audit fees, and denials); compliance; efficiency; and consistency.

  18. Code 672 observational science branch computer networks

    NASA Technical Reports Server (NTRS)

    Hancock, D. W.; Shirk, H. G.

    1988-01-01

    In general, networking increases productivity due to the speed of transmission, easy access to remote computers, ability to share files, and increased availability of peripherals. Two different networks within the Observational Science Branch are described in detail.

  19. APC: A New Code for Atmospheric Polarization Computations

    NASA Technical Reports Server (NTRS)

    Korkin, Sergey V.; Lyapustin, Alexei I.; Rozanov, Vladimir V.

    2014-01-01

    A new polarized radiative transfer code, Atmospheric Polarization Computations (APC), is described. The code is based on separation of the diffuse light field into anisotropic and smooth (regular) parts. The anisotropic part is computed analytically. The smooth regular part is computed numerically using the discrete ordinates method. Vertical stratification of the atmosphere, common types of bidirectional surface reflection, and scattering by spherical particles or spheroids are included. Particular consideration is given to computation of the bidirectional polarization distribution function (BPDF) of the wavy ocean surface.

  20. Hanford Meteorological Station computer codes: Volume 10, The ARCHIVE computer code

    SciTech Connect

    Andrews, G.L.; Burk, K.W.

    1989-08-01

    The purpose of the ARCHIVE computer program is twofold: (1) convert selected hourly binary data into formatted ASCII data, and (2) organize the converted data into monthly files. Formatted ASCII files are easier to access on a routine basis. The program is executed once a day and is initiated from a command file that submits itself to the SYS$BATCH queue on a daily basis. The monthly files are stored on the HMS computer's fixed hard disk and are merged into yearly files (located on removable disk packs) at the end of each year. This report describes the data bases maintained at the HMS, gives an overview of the ARCHIVE program, describes input and output files accessed by the ARCHIVE program, provides a description of program initiation, and discusses the limitations of the ARCHIVE program. A section on trouble-shooting is included. In addition, the appendixes contain flow charts, detailed descriptions, and source code listings for the ARCHIVE program and related subroutines. A description of the ARCHIVE command file and the data input and output files completes the report. 3 refs., 1 fig.
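
    The binary-to-ASCII conversion step can be sketched as follows, assuming an invented fixed-size hourly record layout; the actual HMS binary format differs.

```python
import struct

# Sketch: read fixed-size binary records and append formatted lines
# to a monthly text file. The record layout (two floats and an int
# per hour) is invented for illustration, not the HMS format.

RECORD = struct.Struct("<ffi")   # temperature, wind speed, hour

def convert(binary_path, monthly_ascii_path):
    with open(binary_path, "rb") as src, \
         open(monthly_ascii_path, "a") as dst:
        while chunk := src.read(RECORD.size):
            temp, wind, hour = RECORD.unpack(chunk)
            dst.write(f"{hour:02d} {temp:6.1f} {wind:5.1f}\n")

# Example: write one day's fake binary data, then convert it.
with open("hourly.bin", "wb") as f:
    for hour in range(24):
        f.write(RECORD.pack(20.0 + hour * 0.1, 3.5, hour))
convert("hourly.bin", "1989_08.txt")
```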

  1. Displaying Computer Simulations Of Physical Phenomena

    NASA Technical Reports Server (NTRS)

    Watson, Val

    1991-01-01

    Paper discusses computer simulation as a means of experiencing and learning to understand physical phenomena. Covers both present simulation capabilities and major advances expected in the near future. Visual, aural, tactile, and kinesthetic effects used to teach such physical sciences as dynamics of fluids. Recommends that classrooms in universities, government, and industry be linked to advanced computing centers so computer simulations can be integrated into the education process.

  2. Health Physics Code System for Evaluating Accidents Involving Radioactive Materials.

    SciTech Connect

    2014-10-01

    Version 03 The HOTSPOT Health Physics codes were created to provide Health Physics personnel with a fast, field-portable calculational tool for evaluating accidents involving radioactive materials. HOTSPOT codes provide a first-order approximation of the radiation effects associated with the atmospheric release of radioactive materials. The developer's website is: http://www.llnl.gov/nhi/hotspot/. Four general programs, PLUME, EXPLOSION, FIRE, and RESUSPENSION, calculate a downwind assessment following the release of radioactive material resulting from a continuous or puff release, explosive release, fuel fire, or an area contamination event. Additional programs deal specifically with the release of plutonium, uranium, and tritium to expedite an initial assessment of accidents involving nuclear weapons. The FIDLER program can calibrate radiation survey instruments for ground survey measurements and initial screening of personnel for possible plutonium uptake in the lung. The HOTSPOT codes are fast, portable, easy to use, and fully documented in electronic help files. HOTSPOT supports color high-resolution monitors and printers for concentration plots and contours. The codes have been extensively used by the DOS community since 1985. Tables and graphical output can be directed to the computer screen, printer, or a disk file. The graphical output consists of dose and ground contamination as a function of plume centerline downwind distance, and radiation dose and ground contamination contours. Users have the option of displaying scenario text on the plots. HOTSPOT 3.0.1 fixes significant Windows 7 issues: the executable now installs properly under "Program Files/HotSpot 3.0", and the installation package is smaller because a dependency on older Windows DLL files was removed; forms now properly scale based on DPI instead of font for users who change their screen resolution to something other than 100%, a more common practice in Windows 7.

  3. Health Physics Code System for Evaluating Accidents Involving Radioactive Materials.

    2014-10-01

    Version 03 The HOTSPOT Health Physics codes were created to provide Health Physics personnel with a fast, field-portable calculational tool for evaluating accidents involving radioactive materials. HOTSPOT codes provide a first-order approximation of the radiation effects associated with the atmospheric release of radioactive materials. The developer's website is: http://www.llnl.gov/nhi/hotspot/. Four general programs, PLUME, EXPLOSION, FIRE, and RESUSPENSION, calculate a downwind assessment following the release of radioactive material resulting from a continuous or puff release, explosive release, fuel fire, or an area contamination event. Additional programs deal specifically with the release of plutonium, uranium, and tritium to expedite an initial assessment of accidents involving nuclear weapons. The FIDLER program can calibrate radiation survey instruments for ground survey measurements and initial screening of personnel for possible plutonium uptake in the lung. The HOTSPOT codes are fast, portable, easy to use, and fully documented in electronic help files. HOTSPOT supports color high-resolution monitors and printers for concentration plots and contours. The codes have been extensively used by the DOS community since 1985. Tables and graphical output can be directed to the computer screen, printer, or a disk file. The graphical output consists of dose and ground contamination as a function of plume centerline downwind distance, and radiation dose and ground contamination contours. Users have the option of displaying scenario text on the plots. HOTSPOT 3.0.1 fixes significant Windows 7 issues: the executable now installs properly under "Program Files/HotSpot 3.0", and the installation package is smaller because a dependency on older Windows DLL files was removed; forms now properly scale based on DPI instead of font for users who change their screen resolution to something other than 100%, a more common practice in Windows 7.

  4. Enhancements to the STAGS computer code

    NASA Technical Reports Server (NTRS)

    Rankin, C. C.; Stehlin, P.; Brogan, F. A.

    1986-01-01

    The power of the STAGS family of programs was greatly enhanced. Members of the family include STAGS-C1 and RRSYS. As a result of improvements implemented, it is now possible to address the full collapse of a structural system, up to and beyond critical points where its resistance to the applied loads vanishes or suddenly changes. This also includes the important class of problems where a multiplicity of solutions exists at a given point (bifurcation), and where until now no solution could be obtained along any alternate (secondary) load path with any standard production finite element code.

  5. NASA Lewis Stirling engine computer code evaluation

    NASA Technical Reports Server (NTRS)

    Sullivan, Timothy J.

    1989-01-01

    In support of the U.S. Department of Energy's Stirling Engine Highway Vehicle Systems program, the NASA Lewis Stirling engine performance code was evaluated by comparing code predictions without engine-specific calibration factors to GPU-3, P-40, and RE-1000 Stirling engine test data. The error in predicting power output was -11 percent for the P-40 and 12 percent for the RE-1000 at design conditions and 16 percent for the GPU-3 at near-design conditions (2000 rpm engine speed versus 3000 rpm at design). The efficiency and heat input predictions showed better agreement with engine test data than did the power predictions. Concerning all data points, the error in predicting the GPU-3 brake power was significantly larger than for the other engines and was mainly a result of inaccuracy in predicting the pressure phase angle. Analysis into this pressure phase angle prediction error suggested that improvements to the cylinder hysteresis loss model could have a significant effect on overall Stirling engine performance predictions.

  6. NASA Lewis Stirling engine computer code evaluation

    SciTech Connect

    Sullivan, T.J.

    1989-01-01

    In support of the US Department of Energy's Stirling Engine Highway Vehicle Systems program, the NASA Lewis Stirling engine performance code was evaluated by comparing code predictions without engine-specific calibration factors to GPU-3, P-40, and RE-1000 Stirling engine test data. The error in predicting power output was -11 percent for the P-40 and 12 percent for the RE-1000 at design conditions and 16 percent for the GPU-3 at near-design conditions (2000 rpm engine speed versus 3000 rpm at design). The efficiency and heat input predictions showed better agreement with engine test data than did the power predictions. Concerning all data points, the error in predicting the GPU-3 brake power was significantly larger than for the other engines and was mainly a result of inaccuracy in predicting the pressure phase angle. Analysis into this pressure phase angle prediction error suggested that improvement to the cylinder hysteresis loss model could have a significant effect on overall Stirling engine performance predictions. 13 refs., 26 figs., 3 tabs.

  7. Proposed standards for peer-reviewed publication of computer code

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Computer simulation models are mathematical abstractions of physical systems. In the area of natural resources and agriculture, these physical systems encompass selected interacting processes in plants, soils, animals, or watersheds. These models are scientific products and have become important i...

  8. Multitasking the code ARC3D. [for computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Barton, John T.; Hsiung, Christopher C.

    1986-01-01

    The CRAY multitasking system was developed in order to utilize all four processors and sharply reduce the wall clock run time. This paper describes the techniques used to modify the computational fluid dynamics code ARC3D for this run and analyzes the achieved speedup. The ARC3D code solves either the Euler or thin-layer N-S equations using an implicit approximate factorization scheme. Results indicate that multitask processing can be used to achieve wall clock speedup factors of over three times, depending on the nature of the program code being used. Multitasking appears to be particularly advantageous for large-memory problems running on multiple CPU computers.

  9. RESRAD-CHEM: A computer code for chemical risk assessment

    SciTech Connect

    Cheng, J.J.; Yu, C.; Hartmann, H.M.; Jones, L.G.; Biwer, B.M.; Dovel, E.S.

    1993-10-01

    RESRAD-CHEM is a computer code developed at Argonne National Laboratory for the U.S. Department of Energy to evaluate chemically contaminated sites. The code is designed to predict human health risks from multipathway exposure to hazardous chemicals and to derive cleanup criteria for chemically contaminated soils. The method used in RESRAD-CHEM is based on the pathway analysis method in the RESRAD code and follows the U.S. Environmental Protection Agency's (EPA's) guidance on chemical risk assessment. RESRAD-CHEM can be used to evaluate a chemically contaminated site and, in conjunction with the use of the RESRAD code, a mixed waste site.

  10. Reasoning with Computer Code: a new Mathematical Logic

    NASA Astrophysics Data System (ADS)

    Pissanetzky, Sergio

    2013-01-01

    A logic is a mathematical model of knowledge used to study how we reason, how we describe the world, and how we infer the conclusions that determine our behavior. The logic presented here is natural. It has been experimentally observed, not designed. It represents knowledge as a causal set, includes a new type of inference based on the minimization of an action functional, and generates its own semantics, making it unnecessary to prescribe one. This logic is suitable for high-level reasoning with computer code, including tasks such as self-programming, object-oriented analysis, refactoring, systems integration, code reuse, and automated programming from sensor-acquired data. A strong theoretical foundation exists for the new logic. The inference derives laws of conservation from the permutation symmetry of the causal set, and calculates the corresponding conserved quantities. The association between symmetries and conservation laws is a fundamental and well-known law of nature and a general principle in modern theoretical Physics. The conserved quantities take the form of a nested hierarchy of invariant partitions of the given set. The logic associates elements of the set and binds them together to form the levels of the hierarchy. It is conjectured that the hierarchy corresponds to the invariant representations that the brain is known to generate. The hierarchies also represent fully object-oriented, self-generated code, that can be directly compiled and executed (when a compiler becomes available), or translated to a suitable programming language. The approach is constructivist because all entities are constructed bottom-up, with the fundamental principles of nature being at the bottom, and their existence is proved by construction. The new logic is mathematically introduced and later discussed in the context of transformations of algorithms and computer programs. We discuss what a full self-programming capability would really mean. We argue that self

  11. Computer code for intraply hybrid composite design

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Sinclair, J. H.

    1981-01-01

    A computer program is described for intraply hybrid composite design (INHYD). The program includes several composite micromechanics theories, intraply hybrid composite theories, and a hygrothermomechanical theory. These theories provide INHYD with considerable flexibility and capability which the user can exercise through several available options. Key features and capabilities of INHYD are illustrated through selected samples.

  12. Physics/computer science. Passing messages between disciplines.

    PubMed

    Mézard, Marc

    2003-09-19

    Problems in computer science, such as error correction in information transfer and "satisfiability" in optimization, show phase transitions familiar from solid-state physics. In his Perspective, Mézard explains how recent advances in these three fields originate in similar "message passing" procedures. The exchange of elaborate messages between different variables and constraints, used in the study of phase transitions in physical systems, helps to make error correction and satisfiability codes more efficient.

  13. A Computer Code for TRIGA Type Reactors.

    SciTech Connect

    1992-04-09

    Version 00 TRIGAP was developed for reactor physics calculations of the 250 kW TRIGA reactor. The program can be used for criticality predictions, power peaking predictions, fuel element burn-up calculations and data logging, and in-core fuel management and fuel utilization improvement.

  14. Statistical physics, optimization and source coding

    NASA Astrophysics Data System (ADS)

    Zechhina, Riccardo

    2005-06-01

    The combinatorial problem of satisfying a given set of constraints that depend on N discrete variables is a fundamental one in optimization and coding theory. Even for instances of randomly generated problems, the question ``does there exist an assignment to the variables that satisfies all constraints?'' may become extraordinarily difficult to solve in some range of parameters where a glass phase sets in. We shall provide a brief review of the recent advances in the statistical mechanics approach to these satisfiability problems and show how the analytic results have helped to design a new class of message-passing algorithms -- the survey propagation (SP) algorithms -- that can efficiently solve some combinatorial problems considered intractable. As an application, we discuss how the packing properties of clusters of solutions in randomly generated satisfiability problems can be exploited in the design of simple lossy data compression algorithms.

  15. An algorithm for computing the distance spectrum of trellis codes

    NASA Technical Reports Server (NTRS)

    Rouanne, Marc; Costello, Daniel J., Jr.

    1989-01-01

    A class of quasiregular codes is defined for which the distance spectrum can be calculated from the codeword corresponding to the all-zero information sequence. Convolutional codes and regular codes are both quasiregular, as well as most of the best known trellis codes. An algorithm to compute the distance spectrum of linear, regular, and quasiregular trellis codes is presented. In particular, it can calculate the weight spectrum of convolutional (linear trellis) codes and the distance spectrum of most of the best known trellis codes. The codes do not have to be linear or regular, and the signals do not have to be used with equal probabilities. The algorithm is derived from a bidirectional stack algorithm, although it could also be based on the Viterbi algorithm. The algorithm is used to calculate the beginning of the distance spectrum of some of the best known trellis codes and to compute tight estimates on the first-event-error probability and on the bit-error probability.
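
    For a small code, the beginning of the distance spectrum can be computed by brute force. The sketch below enumerates all paths of the standard rate-1/2, constraint-length-3 convolutional code (generators 7,5 in octal) that diverge from and remerge with the all-zero state, pruning on a weight cap. It illustrates what the spectrum is; the paper's bidirectional stack algorithm computes it far more efficiently and also handles nonlinear and quasiregular trellis codes.

```python
# Brute-force start of the weight spectrum of the rate-1/2,
# constraint-length-3 convolutional code with generator taps (7,5)
# octal. Expected output begins 5:1, 6:2, 7:4, 8:8 (free distance 5).

G = (0b111, 0b101)            # generator taps, (7,5) in octal
MAX_WEIGHT = 9                # compute spectrum terms up to this weight

def branch(state, u):
    """Next state and output Hamming weight for input bit u."""
    reg = (u << 2) | state                        # register bits: u, s1, s0
    weight = sum(bin(reg & g).count("1") % 2 for g in G)
    next_state = ((u << 1) | (state >> 1)) & 0b11
    return next_state, weight

spectrum = {}

def dfs(state, weight):
    if weight > MAX_WEIGHT:
        return                                    # prune heavy paths
    if state == 0:                                # remerged with all-zero path
        spectrum[weight] = spectrum.get(weight, 0) + 1
        return
    for u in (0, 1):
        nxt, w = branch(state, u)
        dfs(nxt, weight + w)

first_state, first_weight = branch(0, 1)          # diverge with input 1
dfs(first_state, first_weight)

for d in sorted(spectrum):
    print(f"weight {d}: {spectrum[d]} path(s)")
```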

  16. Physical Computing and Its Scope--Towards a Constructionist Computer Science Curriculum with Physical Computing

    ERIC Educational Resources Information Center

    Przybylla, Mareen; Romeike, Ralf

    2014-01-01

    Physical computing covers the design and realization of interactive objects and installations and allows students to develop concrete, tangible products of the real world, which arise from the learners' imagination. This can be used in computer science education to provide students with interesting and motivating access to the different topic…

  17. Computer Code For Turbocompounded Adiabatic Diesel Engine

    NASA Technical Reports Server (NTRS)

    Assanis, D. N.; Heywood, J. B.

    1988-01-01

    Computer simulation developed to study advantages of increased exhaust enthalpy in adiabatic turbocompounded diesel engine. Subsystems of conceptual engine include compressor, reciprocator, turbocharger turbine, compounded turbine, ducting, and heat exchangers. Focus of simulation of total system is to define transfers of mass and energy, including release and transfer of heat and transfer of work in each subsystem, and relationship among subsystems. Written in FORTRAN IV.

  18. Computer vision cracks the leaf code

    PubMed Central

    Wilf, Peter; Zhang, Shengping; Chikkerur, Sharat; Little, Stefan A.; Wing, Scott L.; Serre, Thomas

    2016-01-01

    Understanding the extremely variable, complex shape and venation characters of angiosperm leaves is one of the most challenging problems in botany. Machine learning offers opportunities to analyze large numbers of specimens, to discover novel leaf features of angiosperm clades that may have phylogenetic significance, and to use those characters to classify unknowns. Previous computer vision approaches have primarily focused on leaf identification at the species level. It remains an open question whether learning and classification are possible among major evolutionary groups such as families and orders, which usually contain hundreds to thousands of species each and exhibit many times the foliar variation of individual species. Here, we tested whether a computer vision algorithm could use a database of 7,597 leaf images from 2,001 genera to learn features of botanical families and orders, then classify novel images. The images are of cleared leaves, specimens that are chemically bleached, then stained to reveal venation. Machine learning was used to learn a codebook of visual elements representing leaf shape and venation patterns. The resulting automated system learned to classify images into families and orders with a success rate many times greater than chance. Of direct botanical interest, the responses of diagnostic features can be visualized on leaf images as heat maps, which are likely to prompt recognition and evolutionary interpretation of a wealth of novel morphological characters. With assistance from computer vision, leaves are poised to make numerous new contributions to systematic and paleobotanical studies. PMID:26951664

  1. Utility subroutine package used by Applied Physics Division export codes. [LMFBR]

    SciTech Connect

    Adams, C.H.; Derstine, K.L.; Henryson, H. II; Hosteny, R.P.; Toppel, B.J.

    1983-04-01

    This report describes the current state of the utility subroutine package used with codes being developed by the staff of the Applied Physics Division. The package provides a variety of useful functions for BCD input processing, dynamic core-storage allocation and management, binary I/O and data manipulation. The routines were written to conform to coding standards which facilitate the exchange of programs between different computers.

  2. HUDU: The Hanford Unified Dose Utility computer code

    SciTech Connect

    Scherpelz, R.I.

    1991-02-01

    The Hanford Unified Dose Utility (HUDU) computer program was developed to provide rapid initial assessment of radiological emergency situations. The HUDU code uses a straight-line Gaussian atmospheric dispersion model to estimate the transport of radionuclides released from an accident site. For dose points on the plume centerline, it calculates internal doses due to inhalation and external doses due to exposure to the plume. The program incorporates a number of features unique to the Hanford Site (operated by the US Department of Energy), including a library of source terms derived from various facilities' safety analysis reports. The HUDU code was designed to run on an IBM-PC or compatible personal computer. The user interface was designed for fast and easy operation with minimal user training. The theoretical basis and mathematical models used in the HUDU computer code are described, as are the computer code itself and the data libraries used. Detailed instructions for operating the code are also included. Appendices to the report contain descriptions of the program modules, listings of HUDU's data library, and descriptions of the verification tests that were run as part of the code development. 14 refs., 19 figs., 2 tabs.
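
    The centerline calculation in such a straight-line Gaussian plume model is compact. In the Python sketch below, chi = Q / (pi * sigma_y * sigma_z * u) for a ground-level release with ground reflection; the Briggs-type dispersion coefficients are common open-country, neutral-stability fits, and the release rate, breathing rate, and dose conversion factor are illustrative stand-ins, not HUDU's library data.

        import math

        def centerline_conc(Q_Bq_s, x_m, u_ms, release_height_m=0.0):
            # Ground-level centerline air concentration (Bq/m^3), ground reflection included.
            sy = 0.08 * x_m / math.sqrt(1.0 + 0.0001 * x_m)  # Briggs open-country, neutral
            sz = 0.06 * x_m / math.sqrt(1.0 + 0.0015 * x_m)
            return (Q_Bq_s / (math.pi * sy * sz * u_ms)
                    * math.exp(-release_height_m**2 / (2.0 * sz**2)))

        Q = 1.0e9      # release rate, Bq/s (illustrative)
        u = 5.0        # wind speed, m/s
        BR = 3.3e-4    # adult breathing rate, m^3/s
        DCF = 1.1e-8   # inhalation dose conversion factor, Sv/Bq (nuclide-dependent; illustrative)

        for x in (100.0, 1000.0, 10000.0):
            chi = centerline_conc(Q, x, u)
            print(f"x = {x:7.0f} m: air conc {chi:9.3e} Bq/m^3, "
                  f"inhalation dose rate {chi * BR * DCF:9.3e} Sv/s")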

  3. Analyzing Pulse-Code Modulation On A Small Computer

    NASA Technical Reports Server (NTRS)

    Massey, David E.

    1988-01-01

    System for analysis of pulse-code modulation (PCM) comprises personal computer, computer program, and peripheral interface adapter on circuit board that plugs into expansion bus of computer. Functions essentially as "snapshot" PCM decommutator, which accepts and stores thousands of frames of PCM data, then sifts through them repeatedly to process them according to routines specified by operator. Enables faster testing and involves less equipment than older testing systems.

  4. Experimental methodology for computational fluid dynamics code validation

    SciTech Connect

    Aeschliman, D.P.; Oberkampf, W.L.

    1997-09-01

    Validation of Computational Fluid Dynamics (CFD) codes is an essential element of the code development process. Typically, CFD code validation is accomplished through comparison of computed results to previously published experimental data that were obtained for some other purpose, unrelated to code validation. As a result, it is a near certainty that not all of the information required by the code, particularly the boundary conditions, will be available. The common approach is therefore unsatisfactory, and a different method is required. This paper describes a methodology developed specifically for experimental validation of CFD codes. The methodology requires teamwork and cooperation between code developers and experimentalists throughout the validation process, and takes advantage of certain synergisms between CFD and experiment. The methodology employs a novel uncertainty analysis technique which helps to define the experimental plan for code validation wind tunnel experiments, and to distinguish between and quantify various types of experimental error. The methodology is demonstrated with an example of surface pressure measurements over a model of varying geometrical complexity in laminar, hypersonic, near perfect gas, 3-dimensional flow.

  5. Computer code for space-time diagnostics of nuclear safety parameters

    SciTech Connect

    Solovyev, D. A.; Semenov, A. A.; Gruzdov, F. V.; Druzhaev, A. A.; Shchukin, N. V.; Dolgenko, S. G.; Solovyeva, I. V.; Ovchinnikova, E. A.

    2012-07-01

    The computer code ECRAN 3D (Experimental and Calculation Reactor Analysis) is designed for continuous monitoring and diagnostics of RBMK-1000 reactor cores and databases, on the basis of analytical methods for the interrelation of nuclear safety parameters. The code algorithms are based on the analysis of deviations between physically measured values and the results of neutron-physical and thermal-hydraulic calculations. Discrepancies between the measured and calculated signals are equivalent to an inadequacy between the performance of the physical device and its simulator. The diagnostics system can solve the following problems: identification of the occurrence and time of inconsistent results, localization of failures, and identification and quantification of the causes of the inconsistencies. These problems can be effectively solved only when the computer code works in real time, which places increased demands on code performance. As false operations can lead to significant economic losses, the diagnostics system must be based on certified software tools. POLARIS, version 4.2.1 is used for the neutron-physical calculation in the computer code ECRAN 3D. (authors)
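
    The deviation analysis at the core of such diagnostics can be illustrated with a small Python sketch (the tolerance, run length, and injected fault below are invented for illustration and are unrelated to ECRAN 3D's certified algorithms). It localizes in time the intervals where the measured and calculated signals disagree:

        import numpy as np

        def flag_inconsistencies(measured, calculated, tol, min_run=3):
            # Return (start, end) index ranges where |measured - calculated| > tol
            # for at least min_run consecutive samples.
            bad = np.abs(measured - calculated) > tol
            ranges, start = [], None
            for i, b in enumerate(bad):
                if b and start is None:
                    start = i
                elif not b and start is not None:
                    if i - start >= min_run:
                        ranges.append((start, i - 1))
                    start = None
            if start is not None and len(bad) - start >= min_run:
                ranges.append((start, len(bad) - 1))
            return ranges

        t = np.arange(200)
        calculated = np.sin(0.1 * t)
        measured = calculated + 0.02 * np.random.default_rng(1).standard_normal(t.size)
        measured[120:140] += 0.3  # injected sensor/model mismatch
        print(flag_inconsistencies(measured, calculated, tol=0.1))  # -> [(120, 139)]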

  6. Preliminary blade design using integrated computer codes

    NASA Astrophysics Data System (ADS)

    Ryan, Arve

    1988-12-01

    Loads on the root of a horizontal axis wind turbine (HAWT) rotor blade were analyzed, and a design solution for the root area is presented. The loads on the blades are given by the different load cases that are specified. To get a clear picture of the influence of different parameters, the whole blade is designed from scratch; this is only a preliminary design study and the blade should not be looked upon as a construction reference. The use of computer programs for the design and optimization is extensive. After the external geometry is set and the aerodynamic loads are calculated, parameters like design stresses and laminate thicknesses are run through the available programs, and a blade design optimized on the basis of the facts and estimates used is shown.

  7. A three-dimensional magnetostatics computer code for insertion devices.

    PubMed

    Chubar, O; Elleaume, P; Chavanne, J

    1998-05-01

    RADIA is a three-dimensional magnetostatics computer code optimized for the design of undulators and wigglers. It solves boundary magnetostatics problems with magnetized and current-carrying volumes using the boundary integral approach. The magnetized volumes can be arbitrary polyhedrons with non-linear (iron) or linear anisotropic (permanent magnet) characteristics. The current-carrying elements can be straight or curved blocks with rectangular cross sections. Boundary conditions are simulated by the technique of mirroring. Analytical formulae used for the computation of the field produced by a magnetized volume of a polyhedron shape are detailed. The RADIA code is written in object-oriented C++ and interfaced to Mathematica [Mathematica is a registered trademark of Wolfram Research, Inc.]. The code outperforms currently available finite-element packages with respect to the CPU time of the solver and accuracy of the field integral estimations. An application of the code to the case of a wedge-pole undulator is presented.

  8. Recent applications of the transonic wing analysis computer code, TWING

    NASA Technical Reports Server (NTRS)

    Subramanian, N. R.; Holst, T. L.; Thomas, S. D.

    1982-01-01

    An evaluation of the transonic-wing-analysis computer code TWING is given. TWING utilizes a fully implicit approximate factorization iteration scheme to solve the full potential equation in conservative form. A numerical elliptic-solver grid-generation scheme is used to generate the required finite-difference mesh. Several wing configurations were analyzed, and the limits of applicability of the code were evaluated. Comparisons of computed results were made with available experimental data. Results indicate that the code is robust, accurate (when significant viscous effects are not present), and efficient. TWING generally produces solutions an order of magnitude faster than other conservative full potential codes using successive-line overrelaxation. The present method is applicable to a wide range of isolated wing configurations including high-aspect-ratio transport wings and low-aspect-ratio, high-sweep, fighter configurations.

  9. A New Package of Computer Codes for Analyzing Light Curves of Eclipsing Pre-Cataclysmic Binaries

    NASA Astrophysics Data System (ADS)

    Pustynski, V.-V.; Pustylnik, I. B.

    2005-04-01

    Using the new package of computer codes for analyzing light curves of the two eclipsing pre-cataclysmic binary systems (PCBs) UU Sge and V477 Lyr, we find updated values of the physical parameters and discuss the evolutionary state of these PCBs.

  10. FLASH: A finite element computer code for variably saturated flow

    SciTech Connect

    Baca, R.G.; Magnuson, S.O.

    1992-05-01

    A numerical model was developed for use in performance assessment studies at the INEL. The numerical model, referred to as the FLASH computer code, is designed to simulate two-dimensional fluid flow in fractured-porous media. The code is specifically designed to model variably saturated flow in an arid site vadose zone and saturated flow in an unconfined aquifer. In addition, the code also has the capability to simulate heat conduction in the vadose zone. This report presents the following: a description of the conceptual framework and mathematical theory; derivations of the finite element techniques and algorithms; computational examples that illustrate the capability of the code; and input instructions for the general use of the code. The FLASH computer code is aimed at providing environmental scientists at the INEL with a predictive tool for the subsurface water pathway. This numerical model is expected to be widely used in performance assessments for: (1) the Remedial Investigation/Feasibility Study process and (2) compliance studies required by US Department of Energy Order 5820.2A.

  11. Nuclear Physics computer networking: Report of the Nuclear Physics Panel on Computer Networking

    SciTech Connect

    Bemis, C.; Erskine, J.; Franey, M.; Greiner, D.; Hoehn, M.; Kaletka, M.; LeVine, M.; Roberson, R.

    1990-05-01

    This paper discusses: the state of computer networking within the nuclear physics program; network requirements for nuclear physics; management structure; and issues of special interest to the nuclear physics program office.

  12. Osiris: A Modern, High-Performance, Coupled, Multi-Physics Code For Nuclear Reactor Core Analysis

    SciTech Connect

    Procassini, R J; Chand, K K; Clouse, C J; Ferencz, R M; Grandy, J M; Henshaw, W D; Kramer, K J; Parsons, I D

    2007-02-26

    To meet the simulation needs of the GNEP program, LLNL is leveraging a suite of high-performance codes to be used in the development of a multi-physics tool for modeling nuclear reactor cores. The Osiris code project, which began last summer, is employing modern computational science techniques in the development of the individual physics modules and the coupling framework. Initial development is focused on coupling thermal-hydraulics and neutral-particle transport, while later phases of the project will add thermal-structural mechanics and isotope depletion. Osiris will be applicable to the design of existing and future reactor systems through the use of first-principles, coupled physics models with fine-scale spatial resolution in three dimensions and fine-scale particle-energy resolution. Our intent is to replace an existing set of legacy, serial codes which require significant approximations and assumptions, with an integrated, coupled code that permits the design of a reactor core using a first-principles physics approach on a wide range of computing platforms, including the world's most powerful parallel computers. A key research activity of this effort deals with the efficient and scalable coupling of physics modules which utilize rather disparate mesh topologies. Our approach allows each code module to use a mesh topology and resolution that is optimal for the physics being solved, and employs a mesh-mapping and data-transfer module to effect the coupling. Additional research is planned in the area of scalable, parallel thermal-hydraulics, high-spatial-accuracy depletion and coupled-physics simulation using Monte Carlo transport.

  13. Parallelization of Finite Element Analysis Codes Using Heterogeneous Distributed Computing

    NASA Technical Reports Server (NTRS)

    Ozguner, Fusun

    1996-01-01

    Performance gains in computer design are quickly consumed as users seek to analyze larger problems to a higher degree of accuracy. Innovative computational methods, such as parallel and distributed computing, seek to multiply the power of existing hardware technology to satisfy the computational demands of large applications. In the early stages of this project, experiments were performed using two large, coarse-grained applications, CSTEM and METCAN. These applications were parallelized on an Intel iPSC/860 hypercube. It was found that the overall speedup was very low, due to large, inherently sequential code segments present in the applications. The overall parallel execution time T_par of the application is dominated by these sequential segments. If these segments make up a significant fraction of the overall code, the application will have a poor speedup measure.
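
    This is Amdahl's law: a sequential fraction f bounds the speedup on p processors by 1 / (f + (1 - f) / p). A short Python illustration, with arbitrary processor counts:

        def amdahl_speedup(serial_fraction, n_procs):
            # Upper bound on speedup when part of the code is inherently sequential.
            return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

        for f in (0.05, 0.20, 0.40):  # sequential fraction of the application
            bounds = [amdahl_speedup(f, p) for p in (8, 32, 128)]
            print(f"f = {f:.2f}: speedup on 8/32/128 procs = "
                  + " / ".join(f"{s:.1f}" for s in bounds))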

  14. Users manual for CAFE-3D : a computational fluid dynamics fire code.

    SciTech Connect

    Khalil, Imane; Lopez, Carlos; Suo-Anttila, Ahti Jorma

    2005-03-01

    The Container Analysis Fire Environment (CAFE) computer code has been developed to model all relevant fire physics for predicting the thermal response of massive objects engulfed in large fires. It provides realistic fire thermal boundary conditions for use in design of radioactive material packages and in risk-based transportation studies. The CAFE code can be coupled to commercial finite-element codes such as MSC PATRAN/THERMAL and ANSYS. This coupled system of codes can be used to determine the internal thermal response of finite element models of packages to a range of fire environments. This document is a user manual describing how to use the three-dimensional version of CAFE, as well as a description of CAFE input and output parameters. Since this is a user manual, only a brief theoretical description of the equations and physical models is included.

  15. RESRAD: A computer code for evaluating radioactively contaminated sites

    SciTech Connect

    Yu, C.; Zielen, A.J.; Cheng, J.J.

    1993-12-31

    This document briefly describes the uses of the RESRAD computer code in calculating site-specific residual radioactive material guidelines and radiation dose and risk to an on-site individual (worker or resident) at a radioactively contaminated site. Its adoption by the DOE in Order 5400.5, pathway analysis methods, computer requirements, data display, the inclusion of chemical contaminants, benchmarking efforts, and supplemental information sources are all described. (GHH)

  16. Recent improvements of reactor physics codes in MHI

    NASA Astrophysics Data System (ADS)

    Kosaka, Shinya; Yamaji, Kazuya; Kirimura, Kazuki; Kamiyama, Yohei; Matsumoto, Hideki

    2015-12-01

    This paper introduces recent improvements to reactor physics codes at Mitsubishi Heavy Industries, Ltd. (MHI). MHI has developed a new neutronics design code system, Galaxy/Cosmo-S (GCS), for PWR core analysis. After TEPCO's Fukushima Daiichi accident, it became necessary to consider design extension conditions that had not been covered explicitly by earlier safety licensing analyses. Under these circumstances, MHI made several improvements to the GCS code system. A new resonance calculation model for the lattice physics code and a homogeneous cross-section representation model for the core simulator have been developed to cover a wider range of core conditions, corresponding to severe accident states such as anticipated transient without scram (ATWS) and criticality evaluation of a dried-up spent fuel pit. As a result of these improvements, the GCS code system is applicable to a very wide range of calculations with good accuracy for any core condition, as long as the fuel is not damaged. In this paper, the outline of the GCS code system is described briefly and recent relevant development activities are presented.

  17. Recent improvements of reactor physics codes in MHI

    SciTech Connect

    Kosaka, Shinya; Yamaji, Kazuya; Kirimura, Kazuki; Kamiyama, Yohei; Matsumoto, Hideki

    2015-12-31

    This paper introduces recent improvements to reactor physics codes at Mitsubishi Heavy Industries, Ltd. (MHI). MHI has developed a new neutronics design code system, Galaxy/Cosmo-S (GCS), for PWR core analysis. After TEPCO's Fukushima Daiichi accident, it became necessary to consider design extension conditions that had not been covered explicitly by earlier safety licensing analyses. Under these circumstances, MHI made several improvements to the GCS code system. A new resonance calculation model for the lattice physics code and a homogeneous cross-section representation model for the core simulator have been developed to cover a wider range of core conditions, corresponding to severe accident states such as anticipated transient without scram (ATWS) and criticality evaluation of a dried-up spent fuel pit. As a result of these improvements, the GCS code system is applicable to a very wide range of calculations with good accuracy for any core condition, as long as the fuel is not damaged. In this paper, the outline of the GCS code system is described briefly and recent relevant development activities are presented.

  18. Upgrades of Two Computer Codes for Analysis of Turbomachinery

    NASA Technical Reports Server (NTRS)

    Chima, Rodrick V.; Liou, Meng-Sing

    2005-01-01

    Major upgrades have been made in two of the programs reported in "Five Computer Codes for Analysis of Turbomachinery". The affected programs are: Swift -- a code for three-dimensional (3D) multiblock analysis; and TCGRID, which generates a 3D grid used with Swift. Originally utilizing only a central-differencing scheme for numerical solution, Swift was augmented by the addition of two upwind schemes that give greater accuracy but take more computing time. Other improvements in Swift include the addition of a shear-stress-transport turbulence model for better prediction of adverse pressure gradients, the addition of an H-grid capability for flexibility in modeling flows in pumps and ducts, and a modification to enable simultaneous modeling of hub and tip clearances. Improvements in TCGRID include modifications to enable generation of grids for more complicated flow paths and the addition of an option to generate grids compatible with the ADPAC code used at NASA and in industry. For both codes, new test cases were developed and documentation was updated. Both codes were converted to Fortran 90, with dynamic memory allocation, and both were modified for ease of use in both UNIX and Windows operating systems.

  19. A proposed framework for computational fluid dynamics code calibration/validation

    SciTech Connect

    Oberkampf, W.L.

    1993-12-31

    The paper reviews the terminology and methodology that have been introduced during the last several years for building confidence in the predictions from Computational Fluid Dynamics (CFD) codes. Code validation terminology developed for nuclear reactor analyses and aerospace applications is reviewed and evaluated. Currently used terminology such as "calibrated code," "validated code," and "validation experiment" is discussed along with the shortcomings and criticisms of these terms. A new framework is proposed for building confidence in CFD code predictions that overcomes some of the difficulties of past procedures and delineates the causes of uncertainty in CFD predictions. Building on previous work, new definitions of code verification and calibration are proposed. These definitions provide more specific requirements for the knowledge level of the flow physics involved and the solution accuracy of the given partial differential equations. As part of the proposed framework, categories are also proposed for flow physics research, flow modeling research, and the application of numerical predictions. The contributions of physical experiments, analytical solutions, and other numerical solutions are discussed, showing that each should be designed to achieve a distinctively separate purpose in building confidence in the accuracy of CFD predictions. A number of examples are given for each approach to suggest methods for obtaining the highest value for CFD code quality assurance.

  20. Physical Model for the Evolution of the Genetic Code

    NASA Astrophysics Data System (ADS)

    Yamashita, Tatsuro; Narikiyo, Osamu

    2011-12-01

    Using the shape space of codons and tRNAs, we give a physical description of the evolution of the genetic code on the basis of the codon capture and ambiguous intermediate scenarios in a consistent manner. In the lowest-dimensional version of our description, a physical quantity, the codon level, is introduced. In terms of the codon levels, the two scenarios are classified into two different routes of the evolutionary process. In the case of the ambiguous intermediate scenario, we perform an evolutionary simulation implementing cost selection of amino acids and confirm a rapid transition of the code change. Such rapidity lessens the problem of non-unique translation of the code at the intermediate state, which is the weakness of the scenario. In the case of the codon capture scenario, survival against mutations under a mutational pressure minimizing the GC content of genomes is simulated, and it is demonstrated that cells which experience only neutral mutations survive.

  1. Connecting Neural Coding to Number Cognition: A Computational Account

    ERIC Educational Resources Information Center

    Prather, Richard W.

    2012-01-01

    The current study presents a series of computational simulations that demonstrate how the neural coding of numerical magnitude may influence number cognition and development. This includes behavioral phenomena cataloged in the cognitive literature, such as the development of numerical estimation and operational momentum. Though neural research has…

  2. User's manual for the ORIGEN2 computer code

    SciTech Connect

    Croff, A.G.

    1980-07-01

    This report describes how to use a revised version of the ORIGEN computer code, designated ORIGEN2. Included are a description of the input data, input deck organization, and sample input and output. ORIGEN2 can be obtained from the Radiation Shielding Information Center at ORNL.

  3. Plagiarism Detection Algorithm for Source Code in Computer Science Education

    ERIC Educational Resources Information Center

    Liu, Xin; Xu, Chan; Ouyang, Boyu

    2015-01-01

    Nowadays, computer programming is getting more necessary in the course of program design in college education. However, the trick of plagiarizing plus a little modification exists in some students' homework. It is not easy for teachers to judge whether source code has been plagiarized or not. Traditional detection algorithms cannot fit this…

  4. Computer code for double beta decay QRPA based calculations

    SciTech Connect

    Barbero, C. A.; Mariano, A.; Krmpotić, F.; Samana, A. R.; Ferreira, V. dos Santos; Bertulani, C. A.

    2014-11-11

    The computer code developed by our group some years ago for the evaluation of nuclear matrix elements, within the QRPA and PQRPA nuclear structure models, involved in neutrino-nucleus reactions, muon capture and β± processes, is extended to include also the nuclear double beta decay.

  5. Computational Physics as a Path for Physics Education

    NASA Astrophysics Data System (ADS)

    Landau, Rubin H.

    2008-04-01

    Evidence and arguments will be presented that modifications in the undergraduate physics curriculum are necessary to maintain the long-term relevance of physics. Suggested will be a balance of analytic, experimental, computational, and communication skills that in many cases will require an increased inclusion of computation and its associated skill set into the undergraduate physics curriculum. The general arguments will be followed by a detailed enumeration of suggested subjects and student learning outcomes, many of which have already been adopted or advocated by the computational science community, and which permit high performance computing and communication. Several alternative models for how these computational topics can be incorporated into the undergraduate curriculum will be discussed. This includes enhanced topics in the standard existing courses, as well as stand-alone courses. Applications and demonstrations will be presented throughout the talk, as well as prototype video-based materials and electronic books.

  6. Computer code for determination of thermally perfect gas properties

    NASA Technical Reports Server (NTRS)

    Witte, David W.; Tatum, Kenneth E.

    1994-01-01

    A set of one-dimensional compressible flow relations for a thermally perfect, calorically imperfect gas is derived for the specific heat c_p expressed as a polynomial function of temperature, and developed into the thermally perfect gas (TPG) computer code. The code produces tables of compressible flow properties similar to those of NACA Rep. 1135. Unlike the tables of NACA Rep. 1135, which are valid only in the calorically perfect temperature regime, the TPG code results are also valid in the thermally perfect, calorically imperfect temperature regime, which considerably extends the range of temperature application. Accuracy of the TPG code in the calorically perfect temperature regime is verified by comparisons with the tables of NACA Rep. 1135. In the thermally perfect, calorically imperfect temperature regime, the TPG code is validated by comparisons with results obtained from the method of NACA Rep. 1135 for calculating the thermally perfect, calorically imperfect compressible flow properties. The temperature limits for application of the TPG code are also examined. The advantage of the TPG code is its applicability to any type of gas (monatomic, diatomic, triatomic, or polyatomic) or any specified mixture thereof, whereas the method of NACA Rep. 1135 is restricted to only diatomic gases.
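
    The flavor of the underlying computation can be sketched in a few lines of Python. The c_p(T) polynomial coefficients below are invented for illustration (they are not the fits used by the TPG code); the sketch integrates c_p to obtain enthalpy, solves h(T0) = h(T) + u^2/2 for the stagnation temperature of a thermally perfect gas, and compares against the calorically perfect result T0 = T (1 + (gamma - 1)/2 M^2):

        import math
        from scipy.optimize import brentq

        R = 287.05  # gas constant for air, J/(kg K)
        CP = [1049.4, -0.3123, 7.083e-4, -2.705e-7]  # hypothetical c_p(T) polynomial, ~200-2000 K

        def cp(T):
            return sum(a * T**i for i, a in enumerate(CP))

        def h(T):  # enthalpy relative to a 200 K reference, by integrating the c_p polynomial
            return sum(a * (T**(i + 1) - 200.0**(i + 1)) / (i + 1) for i, a in enumerate(CP))

        def gamma(T):
            return cp(T) / (cp(T) - R)

        def stagnation_temperature(T, M):
            u = M * math.sqrt(gamma(T) * R * T)  # flow speed from Mach number
            return brentq(lambda T0: h(T0) - h(T) - 0.5 * u * u, T, 2000.0)

        T, M = 600.0, 3.0
        print(f"thermally perfect T0 = {stagnation_temperature(T, M):.0f} K, "
              f"calorically perfect T0 = {T * (1 + 0.2 * M**2):.0f} K")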

  7. Validation of Numerical Codes to Compute Tsunami Runup And Inundation

    NASA Astrophysics Data System (ADS)

    Velioğlu, Deniz; Cevdet Yalçıner, Ahmet; Kian, Rozita; Zaytsev, Andrey

    2015-04-01

    FLOW 3D and NAMI DANCE are two numerical codes which can be applied to the analysis of the flow and motion of long waves. FLOW 3D simulates linear and nonlinear propagating surface waves as well as irregular waves, including long waves. NAMI DANCE uses a finite difference computational method to solve the nonlinear shallow water equations (NSWE) for long wave problems, specifically tsunamis. Both codes can be applied to tsunami simulations and visualization of long waves, and both are capable of solving flooding problems. However, FLOW 3D is designed mainly to solve flooding from land, and NAMI DANCE is designed to solve flooding from the sea. These numerical codes are applied to benchmark problems for validation and verification. One useful benchmark problem is the runup of solitary waves, which was investigated analytically and experimentally by Synolakis (1987). Since the 1970s, solitary waves have commonly been used to model tsunamis, especially in experimental and numerical studies. In this respect, a benchmark problem on the runup of solitary waves is a relevant choice for assessing the capability and validity of the numerical codes on the amplification of tsunamis. In this study both codes have been tested, compared and validated by applying them to the analytical benchmark problem of solitary wave runup on a sloping beach. Comparison of the results showed that both codes are in good agreement with the analytical and experimental results and thus can be proposed for use in the analysis of long wave inundation and tsunami hazard.

  8. STEALTH: a Lagrange explicit finite difference code for solids, structural, and thermohydraulic analysis. Volume 1B: user's manual - input instructions. Computer code manual. [PWR; BWR]

    SciTech Connect

    Hofmann, R.

    1981-11-01

    A useful computer simulation method based on the explicit finite difference technique can be used to address transient dynamic situations associated with nuclear reactor design and analysis. This volume is divided into two parts. Part A contains the theoretical background (physical and numerical) and the numerical equations for the STEALTH 1D, 2D, and 3D computer codes. Part B contains input instructions for all three codes. The STEALTH codes are based entirely on the published technology of the Lawrence Livermore National Laboratory, Livermore, California, and Sandia National Laboratories, Albuquerque, New Mexico.

  9. STEALTH: a Lagrange explicit finite difference code for solids, structural, and thermohydraulic analysis. Volume 1A: user's manual - theoretical background and numerical equations. Computer code manual. [PWR; BWR]

    SciTech Connect

    Hofmann, R.

    1981-11-01

    A useful computer simulation method based on the explicit finite difference technique can be used to address transient dynamic situations associated with nuclear reactor design and analysis. This volume is divided into two parts. Part A contains the theoretical background (physical and numerical) and the numerical equations for the STEALTH 1D, 2D, and 3D computer codes. Part B contains input instructions for all three codes. The STEALTH codes are based entirely on the published technology of the Lawrence Livermore National Laboratory, Livermore, California, and Sandia National Laboratories, Albuquerque, New Mexico.

  10. Development and application of computational aerothermodynamics flowfield computer codes

    NASA Technical Reports Server (NTRS)

    Venkatapathy, Ethiraj

    1993-01-01

    Computations are presented for one-dimensional, strong shock waves that are typical of those that form in front of a reentering spacecraft. The fluid mechanics and thermochemistry are modeled using two different approaches. The first employs traditional continuum techniques in solving the Navier-Stokes equations. The second approach employs a particle simulation technique (the direct simulation Monte Carlo method, DSMC). The thermochemical models employed in these two techniques are quite different. The present investigation presents an evaluation of thermochemical models for nitrogen under hypersonic flow conditions. Four separate cases are considered. The cases are governed, respectively, by the following: vibrational relaxation; weak dissociation; strong dissociation; and weak ionization. In near-continuum, hypersonic flow, the nonequilibrium thermochemical models employed in continuum and particle simulations produce nearly identical solutions. Further, the two approaches are evaluated successfully against available experimental data for weakly and strongly dissociating flows.

  11. Effective Computer Use in Physics Education

    ERIC Educational Resources Information Center

    Bork, Alfred M.

    1975-01-01

    Illustrates a sample remedial program in mathematics for physics students. Describes two computer games with successful instructional strategies and programs which help mathematically unsophisticated students to grasp the notion of a differential equation. (GH)

  12. A new computational decoding complexity measure of convolutional codes

    NASA Astrophysics Data System (ADS)

    Benchimol, Isaac B.; Pimentel, Cecilio; Souza, Richard Demo; Uchôa-Filho, Bartolomeu F.

    2014-12-01

    This paper presents a computational complexity measure of convolutional codes well suited to software implementations of the Viterbi algorithm (VA) operating with hard decision. We investigate the number of arithmetic operations performed by the decoding process over the conventional and minimal trellis modules. A relation between the complexity measure defined in this work and the one defined by McEliece and Lin is investigated. We also conduct a refined computer search for good convolutional codes (in terms of distance spectrum) with respect to two minimal trellis complexity measures. Finally, the computational cost of implementing each arithmetic operation is determined in terms of the machine cycles taken by its execution on a typical digital signal processor widely used for low-power telecommunications applications.
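
    A simplified version of the operation count over the conventional trellis module is easy to write down: per trellis section, a hard-decision Viterbi decoder for a rate k/n code with nu memory bits performs one metric accumulation per branch and a survivor selection per state. The Python sketch below counts only these add-compare-select operations; the paper's measure also covers the minimal trellis module and the per-operation DSP cycle costs.

        def viterbi_ops_conventional(k, nu):
            # Conventional trellis: 2**nu states, 2**k branches entering each state.
            states, branches = 2**nu, 2**k
            adds = states * branches             # accumulate one branch metric per branch
            compares = states * (branches - 1)   # pairwise compares to pick each survivor
            return adds, compares

        for k, n, nu in [(1, 2, 2), (1, 2, 6), (2, 3, 4)]:
            adds, compares = viterbi_ops_conventional(k, nu)
            print(f"rate {k}/{n}, nu = {nu}: {adds} adds, {compares} compares per section")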

  13. Applications of the ARGUS code in accelerator physics

    SciTech Connect

    Petillo, J.J.; Mankofsky, A.; Krueger, W.A.; Kostas, C.; Mondelli, A.A.; Drobot, A.T.

    1993-12-31

    ARGUS is a three-dimensional, electromagnetic, particle-in-cell (PIC) simulation code that is being distributed to U.S. accelerator laboratories in a collaboration between SAIC and the Los Alamos Accelerator Code Group. It uses a modular architecture that allows multiple physics modules to share common utilities for grid and structure input, memory management, disk I/O, and diagnostics. Physics modules are in place for electrostatic and electromagnetic field solutions, frequency-domain (eigenvalue) solutions, time-dependent PIC, and steady-state PIC simulations. All of the modules are implemented with a domain-decomposition architecture that allows large problems to be broken up into pieces that fit in core and that facilitates the adaptation of ARGUS for parallel processing. ARGUS operates on either Cray or workstation platforms, and a MOTIF-based user interface is available for X-windows terminals. Applications of ARGUS in accelerator physics and design are described in this paper.

  14. Construction of large-scale simulation codes using ALPAL (A Livermore Physics Applications Language)

    SciTech Connect

    Cook, G.

    1990-10-01

    A Livermore Physics Applications Language (ALPAL) is a new computer tool that is designed to leverage the abilities and creativity of computational scientists. Some of the ways that ALPAL provides this leverage are: first, it eliminates many sources of errors; second, it permits building code modules with far greater speed than is otherwise possible; third, it provides a means of specifying almost any numerical algorithm; and fourth, it is a language that is close to a journal-style presentation of physics models and numerical methods for solving them. 13 refs., 9 figs.

  15. Additional extensions to the NASCAP computer code, volume 3

    NASA Technical Reports Server (NTRS)

    Mandell, M. J.; Cooke, D. L.

    1981-01-01

    The ION computer code is designed to calculate charge exchange ion densities, electric potentials, plasma temperatures, and current densities external to a neutralized ion engine in R-Z geometry. The present version assumes the beam ion current and density to be known and specified, and the neutralizing electrons to originate from a hot-wire ring surrounding the beam orifice. The plasma is treated as being resistive, with an electron relaxation time comparable to the plasma frequency. Together with the thermal and electrical boundary conditions described below and other straightforward engine parameters, these assumptions suffice to determine the required quantities. The ION code, written in ASCII FORTRAN for UNIVAC 1100 series computers, is designed to be run interactively, although it can also be run in batch mode. The input is free-format, and the output is mainly graphical, using the machine-independent graphics developed for the NASCAP code. The executive routine calls the code's major subroutines in user-specified order, and the code allows great latitude for restart and parameter change.

  16. A DOE Computer Code Toolbox: Issues and Opportunities

    SciTech Connect

    Vincent, A.M. III

    2001-06-12

    The initial activities of a Department of Energy (DOE) Safety Analysis Software Group to establish a Safety Analysis Toolbox of computer models are discussed. The toolbox shall be a DOE Complex repository of verified and validated computer models that are configuration-controlled and made available for specific accident analysis applications. The toolbox concept was recommended by the Defense Nuclear Facilities Safety Board staff as a mechanism to partially address Software Quality Assurance issues. Toolbox candidate codes have been identified through review of a DOE survey of software practices and processes, and through consideration of earlier findings of the Accident Phenomenology and Consequence Evaluation program sponsored by the DOE National Nuclear Security Administration/Office of Defense Programs. Planning is described to collect these high-use codes, apply tailored SQA specific to the individual codes, and implement the software toolbox concept. While issues exist, such as resource allocation and the interface among code developers, code users, and toolbox maintainers, significant benefits can be achieved through a centralized toolbox and subsequent standardized applications.

  17. Compendium of computer codes for the researcher in magnetic fusion energy

    SciTech Connect

    Porter, G.D.

    1989-03-10

    This is a compendium of computer codes which are available to the fusion researcher. It is intended to be a document that permits a quick evaluation of the tools available to the experimenter who wants to both analyze his data and compare the results of his analysis with the predictions of available theories. This document will be updated frequently to maintain its usefulness. I would appreciate receiving further information about codes not included here from anyone who has used them. The information required includes a brief description of the code (including any special features), a bibliography of the documentation available for the code and/or the underlying physics, a list of people to contact for help in running the code, instructions on how to access the code, and a description of the output from the code. Wherever possible, the code contacts should include people from each of the fusion facilities so that the novice can talk to someone "down the hall" when he first tries to use a code. I would also appreciate any comments about possible additions and improvements in the index. I encourage any additional criticism of this document. 137 refs.

  18. New Parallel computing framework for radiation transport codes

    SciTech Connect

    Kostin, M.A.; Mokhov, N.V.; Niita, K.

    2010-09-01

    A new parallel computing framework has been developed for use with general-purpose radiation transport codes. The framework was implemented as a C++ module that uses MPI for message passing. The module is largely independent of the radiation transport codes it is used with, and is connected to the codes by means of a number of interface functions. The framework was integrated with the MARS15 code, and an effort is under way to deploy it in PHITS. Besides the parallel computing functionality, the framework offers a checkpoint facility that allows restarting calculations from a saved checkpoint file. The checkpoint facility can be used in single-process calculations as well as in the parallel regime. Several checkpoint files can be merged into one, thus combining the results of several calculations. The framework also corrects some of the known problems with the scheduling and load balancing found in the original implementations of the parallel computing functionality in MARS15 and PHITS. The framework can be used efficiently on homogeneous systems and on networks of workstations, where interference from other users is possible.
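
    The checkpoint-merging facility is easy to mimic. Assuming a toy JSON checkpoint layout (a history count plus summed tallies; this is not the framework's actual file format), merging independent runs is a sum, after which mean estimates are total tallies divided by total histories:

        import glob
        import json

        def merge_checkpoints(pattern, out_path):
            # Each file is assumed to hold {"n_histories": int, "tallies": {name: summed_score}}.
            total = {"n_histories": 0, "tallies": {}}
            for path in sorted(glob.glob(pattern)):
                with open(path) as f:
                    ck = json.load(f)
                total["n_histories"] += ck["n_histories"]
                for name, s in ck["tallies"].items():
                    total["tallies"][name] = total["tallies"].get(name, 0.0) + s
            with open(out_path, "w") as f:
                json.dump(total, f)
            return total

        # Hypothetical usage with invented file names:
        # merged = merge_checkpoints("run_*.ckpt.json", "merged.ckpt.json")
        # mean_dose = merged["tallies"]["dose"] / merged["n_histories"]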

  19. Application of computational physics within Northrop

    NASA Technical Reports Server (NTRS)

    George, M. W.; Ling, R. T.; Mangus, J. F.; Thompkins, W. T.

    1987-01-01

    An overview of Northrop programs in computational physics is presented. These programs depend on access to today's supercomputers, such as the Numerical Aerodynamic Simulator (NAS), and future growth on the continuing evolution of computational engines. Descriptions here are concentrated on the following areas: computational fluid dynamics (CFD), computational electromagnetics (CEM), computer architectures, and expert systems. Current efforts and future directions in these areas are presented. The impact of advances in the CFD area is described, and parallels are drawn to analogous developments in CEM. The relationship between advances in these areas and the development of advanced (parallel) architectures and expert systems is also presented.

  20. Use of Computers in Introductory Physics Teaching.

    ERIC Educational Resources Information Center

    Merrill, John R.

    This paper presents some of the preliminary results of Project COEXIST at Dartmouth College, an NSF sponsored project to investigate ways to use computers in introductory physics and mathematics teaching. Students use the computer in a number of ways on homework, on individual projects, and in the laboratory. Students write their own programs,…

  1. Verification and validation plan for reactor analysis computer codes

    SciTech Connect

    Toffer, H.; Crowe, R.D.; Schwinkendorf, K.N.; Pevey, R.E.

    1989-11-01

    This report presents a verification and validation (V&V) plan for reactor analysis computer codes used in Technical Specifications development and for other safety and production support calculations. This plan fulfills the commitments by Westinghouse Savannah River Company (WSRC) to the Department of Energy Savannah River (DOE-SR) as identified in a letter to R.E. Tiller (Reference 1). The plan stresses verification and validation by demonstrating successful application of the codes to predict reactor data, special measurements, and benchmarks. This is in compliance with the intent of the WSRC quality assurance requirements. Restructuring of software especially to achieve verification compliance is not recommended.

  2. Verification and validation plan for reactor analysis computer codes

    SciTech Connect

    Toffer, H.; Crowe, R.D.; Schwinkendorf, K.N.; Pevey, R.E.

    1989-11-01

    This report presents a verification and validation (V&V) plan for reactor analysis computer codes used in Technical Specifications development and for other safety and production support calculations. This plan fulfills the commitments by Westinghouse Savannah River Company (WSRC) to the Department of Energy Savannah River (DOE-SR) as identified in a letter to R.E. Tiller (Reference 1). The plan stresses verification and validation by demonstrating successful application of the codes to predict reactor data, special measurements, and benchmarks. This is in compliance with the intent of the WSRC quality assurance requirements. Restructuring of software especially to achieve verification compliance is not recommended.

  3. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation

    SciTech Connect

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This manual covers an array of modules written for the SCALE package, consisting of drivers, system libraries, cross section and materials properties libraries, input/output routines, storage modules, and help files.

  4. Convergence acceleration of the Proteus computer code with multigrid methods

    NASA Technical Reports Server (NTRS)

    Demuren, A. O.; Ibraheem, S. O.

    1995-01-01

    This report presents the results of a study to implement convergence acceleration techniques based on the multigrid concept in the two-dimensional and three-dimensional versions of the Proteus computer code. The first section presents a review of the relevant literature on the implementation of the multigrid methods in computer codes for compressible flow analysis. The next two sections present detailed stability analysis of numerical schemes for solving the Euler and Navier-Stokes equations, based on conventional von Neumann analysis and the bi-grid analysis, respectively. The next section presents details of the computational method used in the Proteus computer code. Finally, the multigrid implementation and applications to several two-dimensional and three-dimensional test problems are presented. The results of the present study show that the multigrid method always leads to a reduction in the number of iterations (or time steps) required for convergence. However, there is an overhead associated with the use of multigrid acceleration. The overhead is higher in 2-D problems than in 3-D problems, thus overall multigrid savings in CPU time are in general better in the latter. Savings of about 40-50 percent are typical in 3-D problems, but they are about 20-30 percent in large 2-D problems. The present multigrid method is applicable to steady-state problems and is therefore ineffective in problems with inherently unstable solutions.
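
    The two-grid cycle that underlies such an implementation fits in a short sketch. The Python code below (illustrative choices throughout: weighted-Jacobi smoothing, restriction by injection, an exact coarse-grid solve, and linear prolongation; these are not the schemes used in Proteus) reduces the residual of the 1-D Poisson problem -u'' = f by a roughly constant factor per cycle:

        import numpy as np

        def jacobi(u, f, h, sweeps, omega=2.0 / 3.0):
            # Weighted-Jacobi smoother for -u'' = f with homogeneous Dirichlet BCs.
            for _ in range(sweeps):
                u[1:-1] += omega * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
            return u

        def residual(u, f, h):
            r = np.zeros_like(u)
            r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
            return r

        def two_grid(u, f, h):
            u = jacobi(u, f, h, 3)                   # pre-smooth
            rc = residual(u, f, h)[::2].copy()       # restrict the residual by injection
            nc, H = rc.size, 2.0 * h
            A = (np.diag(2.0 * np.ones(nc - 2)) - np.diag(np.ones(nc - 3), 1)
                 - np.diag(np.ones(nc - 3), -1)) / (H * H)
            ec = np.zeros(nc)
            ec[1:-1] = np.linalg.solve(A, rc[1:-1])  # exact coarse-grid correction
            u += np.interp(np.arange(u.size), np.arange(0, u.size, 2), ec)
            return jacobi(u, f, h, 3)                # post-smooth

        n = 129
        x = np.linspace(0.0, 1.0, n)
        h = x[1] - x[0]
        f = np.pi**2 * np.sin(np.pi * x)             # exact solution: sin(pi x)
        u = np.zeros(n)
        for cycle in range(8):
            u = two_grid(u, f, h)
            print(cycle, float(np.max(np.abs(residual(u, f, h)))))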

  5. War of Ontology Worlds: Mathematics, Computer Code, or Esperanto?

    PubMed Central

    Rzhetsky, Andrey; Evans, James A.

    2011-01-01

    The use of structured knowledge representations—ontologies and terminologies—has become standard in biomedicine. Definitions of ontologies vary widely, as do the values and philosophies that underlie them. In seeking to make these views explicit, we conducted and summarized interviews with a dozen leading ontologists. Their views clustered into three broad perspectives that we summarize as mathematics, computer code, and Esperanto. Ontology as mathematics puts the ultimate premium on rigor and logic, symmetry and consistency of representation across scientific subfields, and the inclusion of only established, non-contradictory knowledge. Ontology as computer code focuses on utility and cultivates diversity, fitting ontologies to their purpose. Like computer languages C++, Prolog, and HTML, the code perspective holds that diverse applications warrant custom designed ontologies. Ontology as Esperanto focuses on facilitating cross-disciplinary communication, knowledge cross-referencing, and computation across datasets from diverse communities. We show how these views align with classical divides in science and suggest how a synthesis of their concerns could strengthen the next generation of biomedical ontologies. PMID:21980276

  6. VARSKIN MOD 2 and SADDE MOD 2: Computer codes for assessing skin dose from skin contamination

    SciTech Connect

    Durham, J.S.

    1992-12-01

    The computer code VARSKIN has been modified to calculate dose to skin from three-dimensional sources, sources separated from the skin by layers of protective clothing, and gamma dose from certain radionuclides; a correction for backscatter has also been incorporated for certain geometries. This document describes the new code, VARSKIN Mod 2, including installation and operation instructions, provides detailed descriptions of the models used, and suggests methods for avoiding misuse of the code. The input data file for VARSKIN Mod 2 has been modified to reflect current physical data, to include the contribution to dose from internal conversion and Auger electrons, and to reflect a correction for low-energy electrons. In addition, the computer code SADDE: Scaled Absorbed Dose Distribution Evaluator has been modified to allow the generation of scaled absorbed dose distributions for mixtures of radionuclides and internal conversion and Auger electrons. This new code, SADDE Mod 2, is also described in this document. Instructions for installation and operation of the code and detailed descriptions of the models used in the code are provided.

  7. Computing in high-energy physics

    DOE PAGES Beta

    Mount, Richard P.

    2016-05-31

    I present a very personalized journey through more than three decades of computing for experimental high-energy physics, pointing out the enduring lessons that I learned. This is followed by a vision of how the computing environment will evolve in the coming ten years and the technical challenges that this will bring. I then address the scale and cost of high-energy physics software and examine the many current and future challenges, particularly those of management, funding and software-lifecycle management. Lastly, I describe recent developments aimed at improving the overall coherence of high-energy physics software.

  8. The Entangled Histories of Physics and Computation

    NASA Astrophysics Data System (ADS)

    Rodriguez, Cesar

    2007-03-01

    The history of physics and computation intertwine in a fascinating manner that is relevant to the field of quantum computation. This talk focuses on the interconnections between both by examining their rhyming philosophies, recurrent characters and common themes. Leibniz not only was one of the lead figures of calculus, but also left his footprint in physics and invented the concept of a universal computational language. This last idea was further developed by Boole, Russell, Hilbert and Gödel. Physicists such as Boltzmann and Maxwell also established the foundation of the field of information theory later developed by Shannon. The war efforts of von Neumann and Turing can be juxtaposed to the Manhattan Project. Professional and personal connections of these characters to the development of physics will be emphasized. Recently, new cryptographic developments have led to a reexamination of the fundamentals of quantum mechanics, while quantum computation is discovering a new perspective on the nature of information itself.

  9. Computing in the Introductory Physics Course

    NASA Astrophysics Data System (ADS)

    Chabay, Ruth; Sherwood, Bruce

    2004-03-01

    In the Matter & Interactions version of the calculus-based introductory physics course (http://www4.ncsu.edu/~rwchabay/mi), students write programs in VPython (http://vpython.org) to model physical systems and to calculate and visualize electric and magnetic fields. VPython is unusually easy to learn, produces navigable 3D animations as a side effect of physics computations, and supports full vector calculations. The high speed of current computers makes sophisticated numerical analysis techniques unnecessary. Students can use simple first-order Euler integration, cutting the step size until the behavior of the system no longer changes. In mechanics, iterative application of the momentum principle gives students a sense of the time-evolution character of Newton's second law which is usually missing from the standard course. In E&M, students calculate electric and magnetic fields numerically and display them in 3D. We are currently studying the impact of introducing computational physics into the introductory course.
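
    The iterative momentum-principle update described above fits in a few lines. A minimal sketch in plain Python with illustrative values (VPython would add the 3D animation) for a planet orbiting a fixed star:

      import numpy as np

      G, M, m = 6.7e-11, 2e30, 6e24          # SI units; illustrative values
      r = np.array([1.5e11, 0.0, 0.0])       # planet position (m)
      p = m * np.array([0.0, 3.0e4, 0.0])    # planet momentum (kg m/s)
      dt = 3600.0                            # step (s); halve until the orbit stops changing

      for step in range(24 * 365):           # one simulated year
          rmag = np.linalg.norm(r)
          F = -G * M * m * r / rmag**3       # gravitational force on the planet
          p = p + F * dt                     # momentum principle: delta-p = F dt
          r = r + (p / m) * dt               # position update from the new momentum
      print("position after one year (m):", r)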

  10. The solar physics FORWARD codes: Now with widgets!

    NASA Astrophysics Data System (ADS)

    Forland, Blake; Gibson, Sarah; Dove, James; Kucera, Therese

    2014-01-01

    We have developed a suite of forward-modeling IDL codes (FORWARD) to convert analytic models or simulation data cubes into coronal observables, allowing a direct comparison with observations. Observables such as extreme ultraviolet, soft X-ray, white light, and polarization images from the Coronal Multichannel Polarimeter (CoMP) can be reproduced. The observer's viewpoint is also incorporated in the FORWARD analysis and the codes can output the results in a variety of forms in order to easily create movies, Carrington maps, or simply observable information at a particular point in the plane of the sky. We present a newly developed front end to the FORWARD codes which utilizes IDL widgets to facilitate ease of use by the solar physics community. Our ultimate goal is to provide as useful a tool as possible for a broad range of scientific applications.

  11. Computational Physics at a Liberal Arts College

    NASA Astrophysics Data System (ADS)

    Christian, Wolfgang

    1997-11-01

    Since students have different skills, computational physics at an undergraduate liberal arts college must be flexible. Some students write well; other students have good graphical design skills; and other students have mathematical ability. Most students will not major in physics and many will not major in science. We believe, however, that Computational Physics has broad appeal since it is an effective way to develop problem solving skills and to become computer literate. Students perceive that they are not well educated without a good understanding of a computer's power and its limitations. Learning to write and to design an interface that communicates an idea is part of our program. So is downloading information via the World Wide Web, FTP-ing homework, getting help from Computer Services, and emailing other students or the instructor. We have adopted a web-based approach throughout the curriculum and have added Computational Physics as a required course for majors. It is our intent (following a philosophy pioneered by the M.U.P.P.E.T. team at the University of Maryland) that students use the computer to explore real scientific problems early in their undergraduate career. Examples of student work will be presented.

  12. Additional extensions to the NASCAP computer code, volume 1

    NASA Technical Reports Server (NTRS)

    Mandell, M. J.; Katz, I.; Stannard, P. R.

    1981-01-01

    Extensions and revisions to a computer code that comprehensively analyzes problems of spacecraft charging (NASCAP) are documented. Using a fully three-dimensional approach, it can accurately predict spacecraft potentials under a variety of conditions. Among the extensions are a multiple electron/ion gun test tank capability and the ability to model anisotropic and time-dependent space environments. Also documented are a greatly extended MATCHG program and the preliminary version of NASCAP/LEO. The interactive MATCHG code was developed into an extremely powerful tool for the study of material-environment interactions. NASCAP/LEO, a three-dimensional code to study current collection under conditions of high voltages and short Debye lengths, was distributed for preliminary testing.

  13. On The Computational Capabilities of Physical Systems. Part 2; Relationship With Conventional Computer Science

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.; Koga, Dennis (Technical Monitor)

    2000-01-01

    In the first of this pair of papers, it was proven that there cannot be a physical computer to which one can properly pose any and all computational tasks concerning the physical universe. It was then further proven that no physical computer C can correctly carry out all computational tasks that can be posed to C. As a particular example, this result means that there cannot be a physical computer that can, for any physical system external to that computer, take the specification of that external system's state as input and then correctly predict its future state before that future state actually occurs; one cannot build a physical computer that can be assured of correctly "processing information faster than the universe does". These results do not rely on systems that are infinite, and/or non-classical, and/or obey chaotic dynamics. They also hold even if one uses an infinitely fast, infinitely dense computer, with computational powers greater than that of a Turing Machine. This generality is a direct consequence of the fact that a novel definition of computation - "physical computation" - is needed to address the issues considered in these papers, which concern real physical computers. While this novel definition does not fit into the traditional Chomsky hierarchy, the mathematical structure and impossibility results associated with it have parallels in the mathematics of the Chomsky hierarchy. This second paper of the pair presents a preliminary exploration of some of this mathematical structure. Analogues of Chomskian results concerning universal Turing Machines and the Halting theorem are derived, as are results concerning the (im)possibility of certain kinds of error-correcting codes. In addition, an analogue of algorithmic information complexity, "prediction complexity", is elaborated. A task-independent bound is derived on how much the prediction complexity of a computational task can differ for two different reference universal physical computers used to solve that task.

  14. Benchmarking of computer codes and approaches for modeling exposure scenarios

    SciTech Connect

    Seitz, R.R.; Rittmann, P.D.; Wood, M.I.; Cook, J.R.

    1994-08-01

    The US Department of Energy Headquarters established a performance assessment task team (PATT) to integrate the activities of DOE sites that are preparing performance assessments for the disposal of newly generated low-level waste. The PATT chartered a subteam with the task of comparing computer codes and exposure scenarios used for dose calculations in performance assessments. This report documents the efforts of the subteam. Computer codes considered in the comparison include GENII, PATHRAE-EPA, MICROSHIELD, and ISOSHLD. Calculations were also conducted using spreadsheets to provide a comparison at the most fundamental level. Calculations and modeling approaches are compared for unit radionuclide concentrations in water and soil for the ingestion, inhalation, and external dose pathways. Over 30 tables comparing inputs and results are provided.
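
    At the spreadsheet level mentioned above, a unit-concentration pathway estimate is just a product of factors summed over pathways. A minimal sketch; every coefficient is a placeholder, not a value from GENII, PATHRAE-EPA, or the other codes:

      C_water = 1.0        # pCi/L, unit radionuclide concentration in water
      C_soil = 1.0         # pCi/g, unit radionuclide concentration in soil
      dcf_ing = 5.0e-5     # mrem per pCi ingested (placeholder)
      dcf_ext = 2.0e-6     # mrem/yr per pCi/g of contaminated soil (placeholder)

      dose_ing = C_water * 2.0 * 350.0 * dcf_ing   # 2 L/d drinking water, 350 d/yr
      dose_ext = C_soil * dcf_ext * 0.8            # 0.8 shielding/occupancy factor
      print(f"ingestion {dose_ing:.2e} + external {dose_ext:.2e} "
            f"= {dose_ing + dose_ext:.2e} mrem/yr")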

  15. RESRAD-ECORISK: A computer code for ecological risk assessment

    SciTech Connect

    Cheng, J.J.

    1995-12-01

    RESRAD-ECORISK is a PC-based computer code developed by Argonne National Laboratory (ANL) to estimate risks from exposure of ecological receptors at sites contaminated with potentially hazardous chemicals. The code is based on and is consistent with the methodologies of RESRAD-CHEM, an ANL-developed computer code for assessments of human health risk. RESRAD-ECORISK uses environmental fate and transport models to estimate contaminant concentrations in environmental media from an initial contaminated soil source and food-web uptake models to estimate contaminant doses to ecological receptors. The dose estimates are then used to estimate a risk for the ecological receptor and to calculate preliminary soil guidelines for reducing risks to acceptable levels. Specifically, RESRAD-ECORISK calculates (1) a species-specific applied daily dose for each contaminant (using species-specific life history information and site-specific environmental media concentrations), (2) an ecological hazard quotient (EHQ) for each contaminant and species, and (3) preliminary soil cleanup criteria for each contaminant and receptor. RESRAD-ECORISK incorporates a user-friendly menu-driven interface, databases and default values for a variety of ecological and chemical parameters, and on-line help for easy operation. The code is sufficiently flexible to simulate different contaminated sites and incorporate site-specific ecological data.
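
    The three quantities listed above chain together directly: dose, hazard quotient, then a cleanup level that scales the quotient back to unity. A sketch with hypothetical parameter values (not RESRAD-ECORISK defaults):

      soil_conc = 120.0    # mg contaminant per kg soil at the site
      intake_soil = 0.05   # kg soil-derived intake per day (life history data)
      diet_factor = 0.8    # dietary uptake multiplier from the food-web model
      body_mass = 1.2      # kg, receptor body mass
      TRV = 5.0            # mg/kg-day toxicity reference value

      dose = soil_conc * intake_soil * diet_factor / body_mass   # (1) daily dose
      EHQ = dose / TRV                                           # (2) hazard quotient
      cleanup = soil_conc / EHQ if EHQ > 1.0 else soil_conc      # (3) soil criterion
      print(f"dose={dose:.2f} mg/kg-d, EHQ={EHQ:.2f}, cleanup={cleanup:.0f} mg/kg")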

  16. Inlet-Compressor Analysis Performed Using Coupled Computational Fluid Dynamics Codes

    NASA Technical Reports Server (NTRS)

    Cole, Gary L.; Suresh, Ambady; Townsend, Scott

    1999-01-01

    A thorough understanding of dynamic interactions between inlets and compressors is extremely important to the design and development of propulsion control systems, particularly for supersonic aircraft such as the High-Speed Civil Transport (HSCT). Computational fluid dynamics (CFD) codes are routinely used to analyze individual propulsion components. By coupling the appropriate CFD component codes, it is possible to investigate inlet-compressor interactions. The objectives of this work were to gain a better understanding of inlet-compressor interaction physics, to formulate a more realistic compressor-face boundary condition for time-accurate CFD simulations of inlets, and to take a first step toward the CFD simulation of an entire engine by coupling multidimensional component codes. This work was conducted at the NASA Lewis Research Center by a team of civil servants and support service contractors as part of the High Performance Computing and Communications Program (HPCCP).

  17. Optimization of Russian roulette parameters for the KENO computer code

    SciTech Connect

    Hoffman, T.J.

    1982-10-01

    Proper specification of the (statistical) weight standards for Monte Carlo calculations can lead to a substantial reduction in computer time. Frequently these weights are set intuitively. When optimization is performed, it is usually based on a simplified model (to enable mathematical analysis) and involves minimization of the sample variance. In this report, weight standards are optimized through consideration of the actual implementation of Russian roulette in the KENO computer code. The goal is minimization of computer time rather than minimization of sample variance. Verification of the development and assumptions is obtained from Monte Carlo simulations. The results indicate that the current default weight standards are appropriate for most problems in which thermal neutron transport is not a major consumer of computer time. For thermal systems, the optimization technique described in this report should be used.
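
    For reference, the game whose parameters are being optimized can be sketched generically as follows (a weight-cutoff Russian roulette, not KENO's exact implementation): a particle whose weight falls below a low-weight cutoff survives with probability weight/w_avg and is restored to the standard weight, so the expected weight is preserved.

      import random

      def russian_roulette(weight, w_low, w_avg):
          """Generic low-weight Russian roulette; unbiased by construction."""
          if weight >= w_low:
              return weight                  # above the cutoff: no game played
          if random.random() < weight / w_avg:
              return w_avg                   # survivor restored to standard weight
          return 0.0                         # killed; history terminated

      random.seed(1)
      outcomes = [russian_roulette(0.1, w_low=0.25, w_avg=0.5) for _ in range(100000)]
      print("mean weight:", sum(outcomes) / len(outcomes))   # ~0.1, i.e. unbiased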

  18. Computational radiology and imaging with the MCNP Monte Carlo code

    SciTech Connect

    Estes, G.P.; Taylor, W.M.

    1995-05-01

    MCNP, a 3D coupled neutron/photon/electron Monte Carlo radiation transport code, is currently used in medical applications such as cancer radiation treatment planning, interpretation of diagnostic radiation images, and treatment beam optimization. This paper will discuss MCNP's current uses and capabilities, as well as envisioned improvements that would further enhance MCNP's role in computational medicine. It will be demonstrated that the methodology exists to simulate medical images (e.g., SPECT). Techniques will be discussed that would enable the construction of 3D computational geometry models of individual patients for use in patient-specific studies that would improve the quality of care for patients.

  19. Development of non-linear finite element computer code

    NASA Technical Reports Server (NTRS)

    Becker, E. B.; Miller, T.

    1985-01-01

    Recent work has shown that the use of separable symmetric functions of the principal stretches can adequately describe the response of certain propellant materials and, further, that a data reduction scheme gives a convenient way of obtaining the values of the functions from experimental data. Based on this representation of the energy, a computational scheme was developed that allows finite element analysis of boundary value problems of arbitrary shape and loading. The computational procedure was implemented in a three-dimensional finite element code, TEXLESP-S, which is documented herein.

  20. Computer code to interchange CDS and wave-drag geometry formats

    NASA Technical Reports Server (NTRS)

    Johnson, V. S.; Turnock, D. L.

    1986-01-01

    A computer program has been developed on the PRIME minicomputer to provide an interface for the passage of aircraft configuration geometry data between the Rockwell Configuration Development System (CDS) and a wireframe geometry format used by aerodynamic design and analysis codes. The interface program allows aircraft geometry which has been developed in CDS to be directly converted to the wireframe geometry format for analysis. Geometry which has been modified in the analysis codes can be transformed back to a CDS geometry file and examined for physical viability. Previously created wireframe geometry files may also be converted into CDS geometry files. The program provides a useful link between a geometry creation and manipulation code and analysis codes by providing rapid and accurate geometry conversion.

  1. Computer Code For Calculation Of The Mutual Coherence Function

    NASA Astrophysics Data System (ADS)

    Bugnolo, Dimitri S.

    1986-05-01

    We present a computer code in FORTRAN 77 for the calculation of the mutual coherence function (MCF) of a plane wave normally incident on a stochastic half-space. This is an exact result. The user need only input the path length, the wavelength, the outer scale size, and the structure constant. This program may be used to calculate the MCF of a well-collimated laser beam in the atmosphere.
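
    For orientation, in the Kolmogorov limit of an infinite outer scale the plane-wave MCF reduces to the closed form MCF(rho) = exp(-0.5 * D(rho)), with the wave structure function D = 2.91 k^2 Cn^2 L rho^(5/3); the code described above additionally accounts for the finite outer scale, which is omitted in this sketch:

      import numpy as np

      wavelength = 1.064e-6     # m
      L = 1000.0                # path length, m
      Cn2 = 1.0e-14             # structure constant, m^(-2/3)
      k = 2.0 * np.pi / wavelength

      rho = np.logspace(-4, -1, 50)                   # transverse separation, m
      D = 2.91 * k**2 * Cn2 * L * rho ** (5.0 / 3.0)  # wave structure function
      mcf = np.exp(-0.5 * D)                          # plane-wave MCF

      # Coherence radius where the MCF has fallen to 1/e:
      rho0 = (2.0 / (2.91 * k**2 * Cn2 * L)) ** 0.6
      print(f"coherence radius ~ {rho0 * 1e3:.2f} mm")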

  2. Bragg optics computer codes for neutron scattering instrument design

    SciTech Connect

    Popovici, M.; Yelon, W.B.; Berliner, R.R.; Stoica, A.D.

    1997-09-01

    Computer codes for neutron crystal spectrometer design, optimization and experiment planning are described. Phase space distributions, linewidths and absolute intensities are calculated by matrix methods in an extension of the Cooper-Nathans resolution function formalism. For modeling the Bragg reflection on bent crystals the lamellar approximation is used. Optimization is done by satisfying conditions of focusing in scattering and in real space, and by numerically maximizing figures of merit. Examples for three-axis and two-axis spectrometers are given.

  3. Extreme Scale Computing for First-Principles Plasma Physics Research

    SciTech Connect

    Chang, Choogn-Seock

    2011-10-12

    World superpowers are in the middle of the “Computnik” race. The US Department of Energy (and National Nuclear Security Administration) wishes to launch exascale computer systems into the scientific (and national security) world by 2018. The objective is to solve important scientific problems and to predict the outcomes using the most fundamental scientific laws, which would not be possible otherwise. Being chosen into the next “frontier” group can be of great benefit to a scientific discipline. An extreme scale computer system requires different types of algorithms and programming philosophy from those we have been accustomed to. Only a handful of scientific codes are blessed to be capable of scalable usage of today’s largest computers in operation at petascale (using more than 100,000 cores concurrently). Fortunately, a few magnetic fusion codes are competing well in this race using the “first principles” gyrokinetic equations. These codes are beginning to study the fusion plasma dynamics in full-scale realistic diverted device geometry in natural nonlinear multiscale, including the large scale neoclassical and small scale turbulence physics, but excluding some ultra fast dynamics. In this talk, most of the above mentioned topics will be introduced at the executive level. Representative properties of the extreme scale computers, modern programming exercises to take advantage of them, and different philosophies in the data flows and analyses will be presented. Examples of the multi-scale multi-physics scientific discoveries made possible by solving the gyrokinetic equations on extreme scale computers will be described. Future directions into “virtual tokamak experiments” will also be discussed.

  4. Computer code for the prediction of nozzle admittance

    NASA Technical Reports Server (NTRS)

    Nguyen, Thong V.

    1988-01-01

    A procedure which can accurately characterize injector designs for large thrust (0.5 to 1.5 million pounds), high pressure (500 to 3000 psia) LOX/hydrocarbon engines is currently under development. In this procedure, a rectangular cross-sectional combustion chamber is to be used to simulate the lower transverse frequency modes of the large scale chamber. The chamber will be sized so that the first width mode of the rectangular chamber corresponds to the first tangential mode of the full-scale chamber. Test data to be obtained from the rectangular chamber will be used to assess the full scale engine stability. This requires the development of combustion stability models for rectangular chambers. As part of the combustion stability model development, a computer code, NOAD, was developed based on existing theory to calculate the nozzle admittances for both rectangular and axisymmetric nozzles. This code is described in detail.

  5. Heat pipe design handbook, part 2. [digital computer code specifications

    NASA Technical Reports Server (NTRS)

    Skrabek, E. A.

    1972-01-01

    The utilization of a digital computer code for heat pipe analysis and design (HPAD) is described which calculates the steady state hydrodynamic heat transport capability of a heat pipe with a particular wick configuration and working fluid, as a function of wick cross-sectional area. Heat load, orientation, operating temperature, and heat pipe geometry are specified. Both one 'g' and zero 'g' environments are considered, and, at the user's option, the code will also perform a weight analysis and will calculate heat pipe temperature drops. The central porous slab, circumferential porous wick, arterial wick, annular wick, and axial rectangular grooves are the wick configurations which HPAD has the capability of analyzing. For Vol. 1, see N74-22569.
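
    At its core, the hydrodynamic limit such a code evaluates is a pressure balance: the capillary pumping head 2*sigma/r_eff must cover the liquid, vapor, and gravitational pressure losses. A much-simplified sketch (Darcy liquid flow only, vapor losses neglected, illustrative water heat pipe properties; this is not the HPAD model):

      sigma = 0.06     # surface tension, N/m
      r_eff = 50e-6    # effective pore radius of the wick, m
      mu_l = 3.0e-4    # liquid viscosity, Pa s
      rho_l = 960.0    # liquid density, kg/m^3
      h_fg = 2.26e6    # latent heat of vaporization, J/kg
      K = 1.0e-10      # wick permeability, m^2
      A_w = 1.0e-5     # wick cross-sectional area, m^2
      L_eff = 0.3      # effective transport length, m
      dp_gravity = 0.0 # horizontal or zero-g orientation

      dp_capillary = 2.0 * sigma / r_eff
      # Darcy flow: dp_liquid = mu_l * L_eff * (Q / h_fg) / (rho_l * K * A_w)
      Q_max = (dp_capillary - dp_gravity) * rho_l * K * A_w * h_fg / (mu_l * L_eff)
      print(f"capillary-limited heat transport ~ {Q_max:.0f} W")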

  6. Wind tunnel requirements for computational fluid dynamics code verification

    NASA Technical Reports Server (NTRS)

    Marvin, Joseph G.

    1987-01-01

    The role of experiment in the development of Computational Fluid Dynamics (CFD) for aerodynamic flow field prediction is discussed. Requirements for code verification from two sources that pace the development of CFD are described: (1) development of adequate flow modeling, and (2) establishment of confidence in the use of CFD to predict complex flows. The types of data needed and their accuracy differ in detail and scope and lead to definite wind tunnel requirements. Examples of testing to assess and develop turbulence models, and to verify code development, are used to establish future wind tunnel testing requirements. Versatility, appropriate scale and speed range, accessibility for nonintrusive instrumentation, computerized data systems, and dedicated use for verification were among the more important requirements identified.

  7. Computer-Based Physics: An Anthology.

    ERIC Educational Resources Information Center

    Blum, Ronald, Ed.

    Designed to serve as a guide for integrating interactive problem-solving or simulating computers into a college-level physics course, this anthology contains nine articles each of which includes an introduction, a student manual, and a teacher's guide. Among areas covered in the articles are the computerized reduction of data to a Gaussian…

  8. Statistical and computational challenges in physical mapping

    SciTech Connect

    Nelson, D.O.; Speed, T.P.

    1994-06-01

    One of the great success stories of modern molecular genetics has been the ability of biologists to isolate and characterize the genes responsible for serious inherited diseases like Huntington's disease, cystic fibrosis, and myotonic dystrophy. Instrumental in these efforts has been the construction of so-called "physical maps" of large regions of human chromosomes. Constructing a physical map of a chromosome presents a number of interesting challenges to the computational statistician. In addition to the general ill-posedness of the problem, complications include the size of the data sets, computational complexity, and the pervasiveness of experimental error. The nature of the problem and the presence of many levels of experimental uncertainty make statistical approaches to map construction appealing. Simultaneously, however, the size and combinatorial complexity of the problem make such approaches computationally demanding. In this paper we discuss what physical maps are and describe three different kinds of physical maps, outlining issues which arise in constructing them. In addition, we describe our experience with powerful, interactive statistical computing environments. We found that the ability to create high-level specifications of proposed algorithms which could then be directly executed provided a flexible rapid prototyping facility for developing new statistical models and methods. The ability to check the implementation of an algorithm by comparing its results to that of an executable specification enabled us to rapidly debug both specification and implementation in an environment of changing needs.

  9. The Computer in Second Semester Introductory Physics.

    ERIC Educational Resources Information Center

    Merrill, John R.

    This supplementary text material is meant to suggest ways in which the computer can increase students' intuitive understanding of fields and waves. The first way allows the student to produce a number of examples of the physics discussed in the text. For example, more complicated field and potential maps, or intensity patterns, can be drawn from…

  10. The Fundamental Physical Limits of Computation.

    ERIC Educational Resources Information Center

    Bennett, Charles H.; Landauer, Rolf

    1985-01-01

    Examines what constraints govern the physical process of computation, considering such areas as whether a minimum amount of energy is required per logic step. Indicates that although there seems to be no minimum, answers to other questions are unresolved. Examples used include DNA/RNA, a Brownian clockwork Turing machine, and others. (JN)

  11. Computer Network Resources for Physical Geography Instruction.

    ERIC Educational Resources Information Center

    Bishop, Michael P.; And Others

    1993-01-01

    Asserts that the use of computer networks provides an important and effective resource for geography instruction. Describes the use of the Internet network in physical geography instruction. Provides an example of the use of Internet resources in a climatology/meteorology course. (CFR)

  12. The development and performance of a message-passing version of the PAGOSA shock-wave physics code

    SciTech Connect

    Gardner, D.R.; Vaughan, C.T.

    1997-10-01

    A message-passing version of the PAGOSA shock-wave physics code has been developed at Sandia National Laboratories for multiple-instruction, multiple-data stream (MIMD) computers. PAGOSA is an explicit, Eulerian code for modeling the three-dimensional, high-speed hydrodynamic flow of fluids and the dynamic deformation of solids under high rates of strain. It was originally developed at Los Alamos National Laboratory for the single-instruction, multiple-data (SIMD) Connection Machine parallel computers. The performance of Sandia's message-passing version of PAGOSA has been measured on two MIMD machines, the nCUBE 2 and the Intel Paragon XP/S. No special efforts were made to optimize the code for either machine. The measured scaled speedup (computational time for a single computational node divided by the computational time per node for fixed computational load) and grind time (computational time per cell per time step) show that the MIMD PAGOSA code scales linearly with the number of computational nodes used on a variety of problems, including the simulation of shaped-charge jets perforating an oil well casing. Scaled parallel efficiencies for MIMD PAGOSA are greater than 0.70 when the available memory per node is filled (or nearly filled) on hundreds to a thousand or more computational nodes on these two machines, indicating that the code scales very well. Thus good parallel performance can be achieved for complex and realistic applications when they are first implemented on MIMD parallel computers.
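
    Both figures of merit quoted above are simple ratios, as the following sketch with hypothetical timings (not the published PAGOSA measurements) shows:

      def grind_time(wall_time, n_cells, n_steps):
          """Grind time: wall-clock time per cell per time step."""
          return wall_time / (n_cells * n_steps)

      def scaled_efficiency(t_one_node, t_n_nodes):
          """Single-node time divided by per-node time when the problem grows
          with the node count (fixed load per node); scaled speedup is N times
          this value, and 1.0 is ideal."""
          return t_one_node / t_n_nodes

      # Hypothetical: one node holding a 32^3-cell block vs. 512 nodes, each
      # holding its own 32^3 block, run for 50 time steps.
      t1, t512, steps = 100.0, 135.0, 50
      print("grind time:", grind_time(t1, 32**3, steps), "s/cell/step")
      print("scaled efficiency:", round(scaled_efficiency(t1, t512), 2))  # 0.74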

  13. Multicode comparison of selected source-term computer codes

    SciTech Connect

    Hermann, O.W.; Parks, C.V.; Renier, J.P.; Roddy, J.W.; Ashline, R.C.; Wilson, W.B.; LaBauve, R.J.

    1989-04-01

    This report summarizes the results of a study to assess the predictive capabilities of three radionuclide inventory/depletion computer codes, ORIGEN2, ORIGEN-S, and CINDER-2. The task was accomplished through a series of comparisons of their output for several light-water reactor (LWR) models (i.e., verification). Of the five cases chosen, two modeled typical boiling-water reactors (BWR) at burnups of 27.5 and 40 GWd/MTU and two represented typical pressurized-water reactors (PWR) at burnups of 33 and 50 GWd/MTU. In the fifth case, identical input data were used for each of the codes to examine the results of decay only and to show differences in nuclear decay constants and decay heat rates. Comparisons were made for several different characteristics (mass, radioactivity, and decay heat rate) for 52 radionuclides and for nine decay periods ranging from 30 d to 10,000 years. Only fission products and actinides were considered. The results are presented in comparative-ratio tables for each of the characteristics, decay periods, and cases. A brief summary description of each of the codes has been included. Of the more than 21,000 individual comparisons made for the three codes (taken two at a time), nearly half (45%) agreed to within 1%, and an additional 17% fell within the range of 1 to 5%. Approximately 8% of the comparison results disagreed by more than 30%. However, relatively good agreement was obtained for most of the radionuclides that are expected to contribute the greatest impact to waste disposal. Even though some defects have been noted, each of the codes in the comparison appears to produce respectable results. 12 figs., 12 tabs.

  14. Code Verification of the HIGRAD Computational Fluid Dynamics Solver

    SciTech Connect

    Van Buren, Kendra L.; Canfield, Jesse M.; Hemez, Francois M.; Sauer, Jeremy A.

    2012-05-04

    The purpose of this report is to outline code and solution verification activities applied to HIGRAD, a Computational Fluid Dynamics (CFD) solver of the compressible Navier-Stokes equations developed at the Los Alamos National Laboratory and used to simulate various phenomena such as the propagation of wildfires and atmospheric hydrodynamics. Code verification efforts, as described in this report, are an important first step to establish the credibility of numerical simulations. They provide evidence that the mathematical formulation is properly implemented without significant mistakes that would adversely impact the application of interest. Highly accurate analytical solutions are derived for four code verification test problems that exercise different aspects of the code. These test problems are referred to as: (i) the quiet start, (ii) the passive advection, (iii) the passive diffusion, and (iv) the piston-like problem. These problems are simulated using HIGRAD with different levels of mesh discretization, and the numerical solutions are compared to their analytical counterparts. In addition, the rates of convergence are estimated to verify the numerical performance of the solver. The first three test problems produce numerical approximations as expected. The fourth test problem (piston-like) indicates the extent to which the code is able to simulate a 'mild' discontinuity, which is a condition that would typically be better handled by a Lagrangian formulation. The current investigation concludes that the numerical implementation of the solver performs as expected. The quality of solutions is sufficient to provide credible simulations of fluid flows around wind turbines. The main caveat associated with these findings is the low coverage provided by these four problems and the somewhat limited scope of the verification activities. A more comprehensive evaluation of HIGRAD may be beneficial for future studies.
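
    The rate-of-convergence estimate mentioned above follows from the errors on two successively refined meshes: p = log(e_coarse/e_fine) / log(r) for refinement ratio r. A minimal sketch with hypothetical error values:

      import numpy as np

      def observed_order(err_coarse, err_fine, ratio=2.0):
          """Observed order of accuracy from two mesh levels."""
          return np.log(err_coarse / err_fine) / np.log(ratio)

      # Hypothetical L2 errors against an analytical solution on meshes
      # refined by a factor of two; a second-order scheme should give p ~ 2.
      errors = [4.0e-3, 1.1e-3, 2.8e-4]
      for e1, e2 in zip(errors, errors[1:]):
          print(f"observed order: {observed_order(e1, e2):.2f}")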

  15. Teaching Computational Physics to High School Teachers

    NASA Astrophysics Data System (ADS)

    Cancio, Antonio C.

    2007-10-01

    This talk describes my experience in developing and giving an experimental workshop to expose high school teachers to basic concepts in computer modeling and give them tools to make simple 3D simulations for class demos and student projects. Teachers learned basic techniques of simulating dynamics using high school and introductory college level physics and basic elements of programming. High quality graphics were implemented in an easy to use, open source software package, VPython, currently in use in college introductory courses. Simulations covered areas of everyday physics accessible to computational approaches which would otherwise be hard to treat at introductory level, such as the physics of sports, realistic planetary motion and chaotic motion. The challenges and successes of teaching this subject in an experimental one-week-long workshop format, and to an audience completely new to the subject will be discussed.

  16. Nyx: A MASSIVELY PARALLEL AMR CODE FOR COMPUTATIONAL COSMOLOGY

    SciTech Connect

    Almgren, Ann S.; Bell, John B.; Lijewski, Mike J.; Lukic, Zarija; Van Andel, Ethan

    2013-03-01

    We present a new N-body and gas dynamics code, called Nyx, for large-scale cosmological simulations. Nyx follows the temporal evolution of a system of discrete dark matter particles gravitationally coupled to an inviscid ideal fluid in an expanding universe. The gas is advanced in an Eulerian framework with block-structured adaptive mesh refinement; a particle-mesh scheme using the same grid hierarchy is used to solve for self-gravity and advance the particles. Computational results demonstrating the validation of Nyx on standard cosmological test problems, and the scaling behavior of Nyx to 50,000 cores, are presented.

  17. A computer code for performance of spur gears

    NASA Technical Reports Server (NTRS)

    Wang, K. L.; Cheng, H. S.

    1983-01-01

    In spur gears, both performance and failure predictions are known to be strongly dependent on the variation of load, lubricant film thickness, and total flash or contact temperature of the contacting point as it moves along the contact path. The need for an accurate tool for predicting these variables has prompted the development of a computer code based on recent findings in EHL and on finite element methods. The analyses are described, and typical results are presented to illustrate the effects of gear geometry, velocity, load, lubricant viscosity, and surface convective heat transfer coefficient on the performance of spur gears.

  18. HIBRA: A computer code for heavy ion binary reaction analysis employing ion track detectors

    NASA Astrophysics Data System (ADS)

    Jamil, Khalid; Ahmad, Siraj-ul-Islam; Manzoor, Shahid

    2016-01-01

    Collisions of heavy ions often result in the production of only two reaction products. Studying heavy ions with ion track detectors allows experimentalists to observe the track length in the plane of the detector, the depth of the tracks in the volume of the detector, and the angles between the tracks on the detector surface, all known as track parameters. How can these be converted into useful physics parameters such as the masses, energies, and momenta of the reaction products and the Q-value of the reaction? This paper describes (a) the model used to analyze binary reactions in terms of the measured etched track parameters of the reaction products recorded in ion track detectors, and (b) the code developed for computing useful physics parameters for fast and accurate analysis of a large number of binary events. A computer code, HIBRA (Heavy Ion Binary Reaction Analysis), has been developed both in C++ and FORTRAN programming languages. It has been tested on binary reactions from 12.5 MeV/u 84Kr ions incident upon a natural uranium target deposited on a mica ion track detector. The HIBRA code can be employed with any ion track detector for which a range-velocity relation is available, including the widely used CR-39 ion track detectors. This paper provides the source code of HIBRA in C++ along with input and output data to test the program.
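
    Once a range-energy relation has converted the etched track parameters into kinetic energies, the binary-event bookkeeping is elementary two-body kinematics: check momentum balance and form the Q-value. A hedged sketch with illustrative numbers (this is not the HIBRA source):

      import numpy as np

      # Masses in MeV/c^2 (approximate), kinetic energies in MeV, lab angles in rad.
      E_beam, m_beam = 1050.0, 78200.0     # 12.5 MeV/u * 84 for the 84Kr beam
      E1, m1, th1 = 620.0, 78200.0, 0.20   # product 1 (illustrative values)
      E2, m2, th2 = 410.0, 221700.0, -0.07 # product 2, other side of the beam axis

      p = lambda E, m: np.sqrt(E**2 + 2.0 * E * m)   # relativistic momentum, MeV/c

      dpz = p(E_beam, m_beam) - p(E1, m1) * np.cos(th1) - p(E2, m2) * np.cos(th2)
      dpx = p(E1, m1) * np.sin(th1) + p(E2, m2) * np.sin(th2)
      Q = E1 + E2 - E_beam                           # reaction Q-value

      # Small residuals flag a genuine, well-measured binary event.
      print(f"Q = {Q:.1f} MeV, momentum residual = ({dpx:.1f}, {dpz:.1f}) MeV/c")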

  19. The Physical Models and Statistical Procedures Used in the RACER Monte Carlo Code

    SciTech Connect

    Sutton, T.M.; Brown, F.B.; Bischoff, F.G.; MacMillan, D.B.; Ellis, C.L.; Ward, J.T.; Ballinger, C.T.; Kelly, D.J.; Schindler, L.

    1999-07-01

    This report describes the MCV (Monte Carlo - Vectorized) Monte Carlo neutron transport code [Brown, 1982, 1983; Brown and Mendelson, 1984a]. MCV is a module in the RACER system of codes that is used for Monte Carlo reactor physics analysis. The MCV module contains all of the neutron transport and statistical analysis functions of the system, while other modules perform various input-related functions such as geometry description, material assignment, output edit specification, etc. MCV is very closely related to the 05R neutron Monte Carlo code [Irving et al., 1965] developed at Oak Ridge National Laboratory. 05R evolved into the 05RR module of the STEMB system, which was the forerunner of the RACER system. Much of the overall logic and physics treatment of 05RR has been retained and, indeed, the original verification of MCV was achieved through comparison with STEMB results. MCV has been designed to be very computationally efficient [Brown, 1981; Brown and Martin, 1984b; Brown, 1986]. It was originally programmed to make use of vector-computing architectures such as those of the CDC Cyber-205 and Cray X-MP. MCV was the first full-scale production Monte Carlo code to effectively utilize vector-processing capabilities. Subsequently, MCV was modified to utilize both distributed-memory [Sutton and Brown, 1994] and shared-memory parallelism. The code has been compiled and run on platforms ranging from 32-bit UNIX workstations to clusters of 64-bit vector-parallel supercomputers. The computational efficiency of the code allows the analyst to perform calculations using many more neutron histories than is practical with most other Monte Carlo codes, thereby yielding results with smaller statistical uncertainties. MCV also utilizes variance reduction techniques such as survival biasing, splitting, and rouletting to permit additional reduction in uncertainties. While a general-purpose neutron Monte Carlo code, MCV is optimized for reactor physics calculations. It has the

  20. Analysis of the Length of Braille Texts in English Braille American Edition, the Nemeth Code, and Computer Braille Code versus the Unified English Braille Code

    ERIC Educational Resources Information Center

    Knowlton, Marie; Wetzel, Robin

    2006-01-01

    This study compared the length of text in English Braille American Edition, the Nemeth code, and the computer braille code with the Unified English Braille Code (UEBC)--also known as Unified English Braille (UEB). The findings indicate that differences in the length of text are dependent on the type of material that is transcribed and the grade…

  1. Theoretical atomic physics code development I: CATS: Cowan Atomic Structure Code

    SciTech Connect

    Abdallah, J. Jr.; Clark, R.E.H.; Cowan, R.D.

    1988-12-01

    An adaptation of R.D. Cowan's Atomic Structure program, CATS, has been developed as part of the Theoretical Atomic Physics (TAPS) code development effort at Los Alamos. CATS has been designed to be easy to run and to produce data files that can interface with other programs easily. The CATS produced data files currently include wave functions, energy levels, oscillator strengths, plane-wave-Born electron-ion collision strengths, photoionization cross sections, and a variety of other quantities. This paper describes the use of CATS. 10 refs.

  2. Status of Continuum Edge Gyrokinetic Code Physics Development

    SciTech Connect

    Xu, X Q; Xiong, Z; Dorr, M R; Hittinger, J A; Kerbel, G D; Nevins, W M; Cohen, B I; Cohen, R H

    2005-05-31

    We are developing an edge gyro-kinetic continuum simulation code to study the boundary plasma over a region extending from inside the H-mode pedestal across the separatrix to the divertor plates. A 4-D (ψ, θ, ε, μ) version of this code is presently being implemented, en route to a full 5-D version. A set of gyrokinetic equations [1] are discretized on a computational grid which incorporates X-point divertor geometry. The present implementation is a Method of Lines approach where the phase-space derivatives are discretized with finite differences and implicit backwards differencing formulas are used to advance the system in time. A fourth order upwinding algorithm is used for particle cross-field drifts, parallel streaming, and acceleration. Boundary conditions at conducting material surfaces are implemented on the plasma side of the sheath. The Poisson-like equation is solved using GMRES with a multi-grid preconditioner from HYPRE. A nonlinear Fokker-Planck collision operator from STELLA [2] in (ν‖, ν⊥) has been streamlined and integrated into the gyro-kinetic package using the same implicit Newton-Krylov solver and interpolating F and dF/dt|_coll to/from (ε, μ) space. With our 4D code we compute the ion thermal flux, ion parallel velocity, self-consistent electric field, and geo-acoustic oscillations, which we compare with standard neoclassical theory for core plasma parameters; and we study the transition from collisional to collisionless end-loss. In the real X-point geometry, we find that the particles are trapped near the outside midplane and in the X-point regions due to the magnetic configuration. The sizes of banana orbits are comparable to the pedestal width and/or the SOL width for energetic trapped particles. The effect of the real X-point geometry and edge plasma conditions on standard neoclassical theory will be evaluated, including a comparison of our 4D code with other kinetic

  3. An accurate Fortran code for computing hydrogenic continuum wave functions at a wide range of parameters

    NASA Astrophysics Data System (ADS)

    Peng, Liang-You; Gong, Qihuang

    2010-12-01

    The accurate computations of hydrogenic continuum wave functions are very important in many branches of physics such as electron-atom collisions, cold atom physics, and atomic ionization in strong laser fields, etc. Although various algorithms and codes already exist, most of them are reliable only in certain ranges of parameters. In some practical applications, accurate continuum wave functions need to be calculated at extremely low energies, large radial distances and/or large angular momentum number. Here we provide such a code, which can generate accurate hydrogenic continuum wave functions and corresponding Coulomb phase shifts at a wide range of parameters. Without any essential restriction on the angular momentum number, the present code is able to give reliable results at the electron energy range [10,10] eV for radial distances of [10,10] a.u. We also find the present code is very efficient, which should find numerous applications in many fields such as strong field physics. Program summary: Program title: HContinuumGautchi. Catalogue identifier: AEHD_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHD_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 1233. No. of bytes in distributed program, including test data, etc.: 7405. Distribution format: tar.gz. Programming language: Fortran90 in fixed format. Computer: AMD Processors. Operating system: Linux. RAM: 20 MBytes. Classification: 2.7, 4.5. Nature of problem: The accurate computation of atomic continuum wave functions is very important in many research fields such as strong field physics and cold atom physics. Most existing algorithms and codes are applicable and reliable only in a certain range of parameters. We present here an accurate FORTRAN program for
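
    An arbitrary-precision reference is useful for spot-checking such a code. A usage sketch with mpmath's built-in regular Coulomb wave function (an independent tool, not the program described above), in atomic units for an attractive hydrogenic potential:

      from mpmath import mp, coulombf

      mp.dps = 30                # work at 30 significant digits
      Z = 1.0                    # hydrogenic charge
      E = 1.0e-4                 # electron energy, a.u. (low-energy regime)
      k = (2.0 * E) ** 0.5       # wave number
      eta = -Z / k               # Sommerfeld parameter, negative for attraction

      for l in (0, 1, 5):        # regular Coulomb function F_l(eta, k*r) at r = 100 a.u.
          print(l, coulombf(l, eta, k * 100.0))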

  4. Temporal codes and computations for sensory representation and scene analysis.

    PubMed

    Cariani, Peter A

    2004-09-01

    This paper considers a space of possible temporal codes, surveys neurophysiological and psychological evidence for their use in nervous systems, and presents examples of neural timing networks that operate in the time-domain. Sensory qualities can be encoded temporally by means of two broad strategies: stimulus-driven temporal correlations (phase-locking) and stimulus-triggering of endogenous temporal response patterns. Evidence for stimulus-related spike timing patterns exists in nearly every sensory modality, and such information can be potentially utilized for representation of stimulus qualities, localization of sources, and perceptual grouping. Multiple strategies for temporal (time, frequency, and code-division) multiplexing of information for transmission and grouping are outlined. Using delays and multiplications (coincidences), neural timing networks perform time-domain signal processing operations to compare, extract and separate temporal patterns. Separation of synthetic double vowels by a recurrent neural timing network is used to illustrate how coherences in temporal fine structure can be exploited to build up and separate periodic signals with different fundamentals. Timing nets constitute a time-domain scene analysis strategy based on temporal pattern invariance rather than feature-based labeling, segregation and binding of channels. Further potential implications of temporal codes and computations for new kinds of neural networks are explored.

  5. IMPLEMENTING SCIENTIFIC SIMULATION CODES HIGHLY TAILORED FOR VECTOR ARCHITECTURES USING CUSTOM CONFIGURABLE COMPUTING MACHINES

    NASA Technical Reports Server (NTRS)

    Rutishauser, David K.

    2006-01-01

    The motivation for this work comes from an observation that amidst the push for Massively Parallel (MP) solutions to high-end computing problems such as numerical physical simulations, large amounts of legacy code exist that are highly optimized for vector supercomputers. Because re-hosting legacy code often requires a complete re-write of the original code, which can be a very long and expensive effort, this work examines the potential to exploit reconfigurable computing machines in place of a vector supercomputer to implement an essentially unmodified legacy source code. Custom and reconfigurable computing resources could be used to emulate an original application's target platform to the extent required to achieve high performance. To arrive at an architecture that delivers the desired performance subject to limited resources involves solving a multi-variable optimization problem with constraints. Prior research in the area of reconfigurable computing has demonstrated that designing an optimum hardware implementation of a given application under hardware resource constraints is an NP-complete problem. The premise of the approach is that the general issue of applying reconfigurable computing resources to the implementation of an application, maximizing the performance of the computation subject to physical resource constraints, can be made a tractable problem by assuming a computational paradigm, such as vector processing. This research contributes a formulation of the problem and a methodology to design a reconfigurable vector processing implementation of a given application that satisfies a performance metric. A generic, parametric, architectural framework for vector processing implemented in reconfigurable logic is developed as a target for a scheduling/mapping algorithm that maps an input computation to a given instance of the architecture. This algorithm is integrated with an optimization framework to arrive at a specification of the architecture parameters

  6. Lattice physics capabilities of the SCALE code system using TRITON

    SciTech Connect

    DeHart, M. D.

    2006-07-01

    This paper describes ongoing calculations used to validate the TRITON depletion module in SCALE for light water reactor (LWR) fuel lattices. TRITON has been developed to provide improved resolution for lattice physics analysis of mixed-oxide fuel assemblies as programs to burn such fuel in the United States begin to come online. Results are provided for coupled TRITON/PARCS analyses of an LWR core in which TRITON was employed for generation of appropriately weighted few-group nodal cross-section sets for use in core-level calculations using PARCS. Additional results are provided for code-to-code comparisons of TRITON with a suite of other depletion packages in the modeling of a conceptual next-generation boiling water reactor fuel assembly design. Results indicate that the set of SCALE functional modules used within TRITON provides an accurate means for lattice physics calculations. Because the transport solution within TRITON provides a generalized-geometry capability, this capability is extensible to a wide variety of non-traditional and advanced fuel assembly designs. (authors)

  7. The physics of the FLUKA code: recent developments

    NASA Astrophysics Data System (ADS)

    Sala, P. R.; Fluka Collaboration

    FLUKA is a Monte Carlo code able to simulate interaction and transport of hadrons, heavy ions, and electromagnetic particles from a few keV (or thermal neutron energies) to cosmic ray energies in any material. It has proven capabilities in accelerator design and shielding, ADS studies and experiments, dosimetry and hadrontherapy, space radiation, and cosmic ray shower studies in the atmosphere. The highest priority in the design and development of the code has always been the implementation and improvement of sound and modern physical models. A summary of the FLUKA physical models is given, while recent developments are described in detail, among others extensions of the intermediate energy hadronic interaction generator, improvements in the equilibrium stage of hadronic interactions, refinements in photon cross sections and interaction models, and analytical on-line evolution of radio-activation and remnant dose. In particular, new developments in the nucleus-nucleus interaction models are discussed. Comparisons with experimental data and examples of applications of relevance for space radiation are also provided.

  8. Benchmarking of epithermal methods in the lattice-physics code EPRI-CELL

    NASA Astrophysics Data System (ADS)

    Williams, M. L.; Wright, R. Q.; Barhen, J.; Rothenstein, W.; Toney, B.

    The epithermal cross section shielding methods used in the lattice physics code EPRI-CELL (E-C) were extensively studied to determine their major approximations and to examine the sensitivity of computed results to these approximations. Several improvements in the original methodology resulted. These include: treatment of the external moderator source with intermediate resonance (IR) theory, development of a new Dancoff factor expression to account for clad interactions, development of a new method for treating resonance interference, and application of a generalized least squares method to compute best-estimate values for the Bell factor and group-dependent IR parameters. The modified E-C code with its new ENDF/B-V cross section library is tested on several numerical benchmark problems.

  9. Development and application of the GIM code for the Cyber 203 computer

    NASA Technical Reports Server (NTRS)

    Stainaker, J. F.; Robinson, M. A.; Rawlinson, E. G.; Anderson, P. G.; Mayne, A. W.; Spradley, L. W.

    1982-01-01

    The GIM computer code for fluid dynamics research was developed. Enhancement of the computer code, implicit algorithm development, turbulence model implementation, chemistry model development, interactive input module coding and wing/body flowfield computation are described. The GIM quasi-parabolic code development was completed, and the code used to compute a number of example cases. Turbulence models, algebraic and differential equations, were added to the basic viscous code. An equilibrium reacting chemistry model and implicit finite difference scheme were also added. Development was completed on the interactive module for generating the input data for GIM. Solutions for inviscid hypersonic flow over a wing/body configuration are also presented.

  10. Automatic code generation in SPARK: Applications of computer algebra and compiler-compilers

    SciTech Connect

    Nataf, J.M.; Winkelmann, F.

    1992-09-01

    We show how computer algebra and compiler-compilers are used for automatic code generation in the Simulation Problem Analysis and Research Kernel (SPARK), an object oriented environment for modeling complex physical systems that can be described by differential-algebraic equations. After a brief overview of SPARK, we describe the use of computer algebra in SPARK's symbolic interface, which generates solution code for equations that are entered in symbolic form. We also describe how the Lex/Yacc compiler-compiler is used to achieve important extensions to the SPARK simulation language, including parametrized macro objects and steady-state resetting of a dynamic simulation. The application of these methods to solving the partial differential equations for two-dimensional heat flow is illustrated.
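
    The symbolic-interface idea is easy to illustrate. A tiny Python sketch with SymPy standing in for SPARK's own computer algebra machinery (an illustrative substitution, not the actual SPARK toolchain): take an equation in symbolic form, solve it for the requested variable, and emit executable solution code.

      import sympy as sp

      T1, T2, R, q = sp.symbols("T1 T2 R q")
      equation = sp.Eq(q, (T1 - T2) / R)          # a simple heat-flow relation

      solved = sp.solve(equation, T2)[0]          # symbolic inverse: T2 = T1 - R*q
      solve_T2 = sp.lambdify((T1, q, R), solved)  # generated numeric solution code
      print(sp.pycode(solved))                    # the generated expression
      print(solve_T2(300.0, 10.0, 0.5))           # -> 295.0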

  11. High frame rate photoacoustic computed tomography using coded excitation

    NASA Astrophysics Data System (ADS)

    Azuma, Masataka; Zhang, Haichong K.; Kondo, Kengo; Namita, Takeshi; Yamakawa, Makoto; Shiina, Tsuyoshi

    2015-03-01

    Photoacoustic Computed Tomography (PACT) records signals from a wide range of angles to achieve uniform, high-resolution images. A high-power laser is generally used for PACT, but the long acquisition time with a single probe is a problem due to the low pulse-repetition frequency (PRF). This degrades image resolution and contrast because it is hard to scan with a small step interval. Moreover, in vivo measurement requires a fast image acquisition system to avoid motion artifacts. The problem can be resolved by using a high-PRF laser, which provides only weak energy. Averaging the measured signals many times can mitigate the low signal-to-noise ratio, but the PRF is restricted by the acoustic time of flight, so averaging becomes a new source of increased measurement time. Here, we present the coded-excitation approach, which we previously proposed for linear scanning, to increase the PACT frame rate. Coded excitation irradiates temporally encoded pulses and enhances the signal amplitude through decoding; the PRF is thus not restricted by the acoustic time of flight. Consequently, acquisition time can be shortened by increasing the PRF, and the SNR increases for the same measurement time. To validate the proposed idea, we conducted experiments using a high-PRF laser with a revolving motor and compared the performance of coded excitation with that of averaging. Results demonstrated that contamination between signals acquired from different angles was negligible, and that the scanning pitch was remarkably improved because the start point of decoding can be set at any code in the periodic sequence.
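
    The decoding step at the heart of this approach is a cross-correlation with the transmit code. A hedged sketch (bipolar +/-1 code for simplicity; real unipolar laser firing uses devices such as complementary code pairs to the same end):

      import numpy as np

      rng = np.random.default_rng(0)
      N = 256
      code = rng.choice([-1.0, 1.0], N)           # pseudorandom transmit code
      impulse = np.array([0.0, 1.0, 0.5, 0.2])    # toy single-shot PA response

      # Received trace: the code convolved with the response, plus noise.
      received = np.convolve(code, impulse) + 0.8 * rng.standard_normal(N + 3)

      # Correlating with the code compresses the pulse train back to a single
      # response, with an SNR gain of roughly sqrt(N) over one firing.
      decoded = np.correlate(received, code, mode="full")[N - 1:] / N
      print("recovered response:", np.round(decoded[:4], 2))   # ~ [0, 1, 0.5, 0.2]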

  12. High-performance computational condensed-matter physics in the cloud

    NASA Astrophysics Data System (ADS)

    Rehr, J. J.; Svec, L.; Gardner, J. P.; Prange, M. P.

    2009-03-01

    We demonstrate the feasibility of high performance scientific computation in condensed-matter physics using cloud computers as an alternative to traditional computational tools. The availability of these large, virtualized pools of compute resources raises the possibility of a new compute paradigm for scientific research with many advantages. For research groups, cloud computing provides convenient access to reliable, high performance clusters and storage, without the need to purchase and maintain sophisticated hardware. For developers, virtualization allows scientific codes to be pre-installed on machine images, facilitating control over the computational environment. Detailed tests are presented for the parallelized versions of the electronic structure code SIESTA [J. Soler et al., J. Phys.: Condens. Matter 14, 2745 (2002)] and for the x-ray spectroscopy code FEFF [A. Ankudinov et al., Phys. Rev. B 65, 104107 (2002)], including CPU, network, and I/O performance, using the Amazon EC2 Elastic Cloud.

  13. Neural coding of computational factors affecting decision making.

    PubMed

    Dreher, Jean-Claude

    2013-01-01

    We constantly need to make decisions that can result in rewards of different amounts with different probabilities and at different timing. To characterize the neural coding of such computational factors affecting value-based decision making, we have investigated how reward information processing is influenced by parameters such as reward magnitude, probability, delay, effort, and uncertainty using either fMRI in healthy humans or intracranial recordings in patients with epilepsy. We decomposed brain signals modulated by these computational factors, showing that prediction error (PE), salient PE, and uncertainty signals are computed in partially overlapping brain circuits and that both transient and sustained uncertainty signals coexist in the brain. When investigating the neural representation of primary and secondary rewards, we found both a common brain network, including the ventromedial prefrontal cortex and ventral striatum, and a functional organization of the orbitofrontal cortex according to reward type. Moreover, separate valuation systems were engaged for delay and effort costs when deciding between options. Finally, genetic variations in dopamine-related genes influenced the response of the reward system and may contribute to individual differences in reward-seeking behavior and in predisposition to neuropsychiatric disorders.

  15. COBRA-SFS (Spent Fuel Storage): A thermal-hydraulic analysis computer code: Volume 3, Validation assessments

    SciTech Connect

    Lombardo, N.J.; Cuta, J.M.; Michener, T.E.; Rector, D.R.; Wheeler, C.L.

    1986-12-01

    This report presents the results of the COBRA-SFS (Spent Fuel Storage) computer code validation effort. COBRA-SFS, while refined and specialized for spent fuel storage system analyses, is a lumped-volume thermal-hydraulic analysis computer code that predicts temperature and velocity distributions in a wide variety of systems. Through comparisons of code predictions with spent fuel storage system test data, the code's mathematical, physical, and mechanistic models are assessed, and empirical relations defined. The six test cases used to validate the code and code models include single-assembly and multiassembly storage systems under a variety of fill media and system orientations and include unconsolidated and consolidated spent fuel. In its entirety, the test matrix investigates the contributions of convection, conduction, and radiation heat transfer in spent fuel storage systems. To demonstrate the code's performance for a wide variety of storage systems and conditions, comparisons of code predictions with data are made for 14 runs from the experimental data base. The cases selected exercise the important code models and code logic pathways and are representative of the types of simulations required for spent fuel storage system design and licensing safety analyses. For each test, a test description, a summary of the COBRA-SFS computational model, assumptions, and correlations employed are presented. For the cases selected, axial and radial temperature profile comparisons of code predictions with test data are provided, and conclusions drawn concerning the code models and the ability to predict the data and data trends. Comparisons of code predictions with test data demonstrate the ability of COBRA-SFS to successfully predict temperature distributions in unconsolidated or consolidated single and multiassembly spent fuel storage systems.

  16. Good relationships between computational image analysis and radiological physics

    SciTech Connect

    Arimura, Hidetaka; Kamezawa, Hidemi; Jin, Ze; Nakamoto, Takahiro; Soufi, Mazen

    2015-09-30

    Good relationships between computational image analysis and radiological physics have been established to increase the accuracy of medical diagnostic imaging and radiation therapy. Computational image analysis is founded on applied mathematics, physics, and engineering. This review paper introduces how computational image analysis is useful in radiation therapy with respect to radiological physics.

  17. Exascale computing and what it means for shock physics

    NASA Astrophysics Data System (ADS)

    Germann, Timothy

    2015-06-01

    The U.S. Department of Energy is preparing to launch an Exascale Computing Initiative, to address the myriad challenges required to deploy and effectively utilize an exascale-class supercomputer (i.e., one capable of performing 10^18 operations per second) in the 2023 timeframe. Since physical (power dissipation) requirements limit clock rates to at most a few GHz, this will necessitate the coordination of on the order of a billion concurrent operations, requiring sophisticated system and application software, and underlying mathematical algorithms, that may differ radically from traditional approaches. Even at the smaller workstation or cluster level of computation, the massive concurrency and heterogeneity within each processor will impact computational scientists. Through the multi-institutional, multi-disciplinary Exascale Co-design Center for Materials in Extreme Environments (ExMatEx), we have initiated an early and deep collaboration between domain (computational materials) scientists, applied mathematicians, computer scientists, and hardware architects, in order to establish the relationships between algorithms, software stacks, and architectures needed to enable exascale-ready materials science application codes within the next decade. In my talk, I will discuss these challenges, and what it will mean for exascale-era electronic structure, molecular dynamics, and engineering-scale simulations of shock-compressed condensed matter. In particular, we anticipate that the emerging hierarchical, heterogeneous architectures can be exploited to achieve higher physical fidelity simulations using adaptive physics refinement. This work is supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research.

  18. "SMART": A Compact and Handy FORTRAN Code for the Physics of Stellar Atmospheres

    NASA Astrophysics Data System (ADS)

    Sapar, A.; Poolamäe, R.

    2003-01-01

    A new computer code SMART (Spectra from Model Atmospheres by Radiative Transfer) for computing the stellar spectra forming in plane-parallel atmospheres has been compiled by us and A. Aret. To guarantee wide compatibility of the code with shell environments, we chose FORTRAN-77 as the programming language and tried to confine ourselves to the common part of its numerous versions under both WINDOWS and LINUX. SMART can be used to study several processes in stellar atmospheres. The current version of the program is undergoing rapid changes, reflecting our goal of elaborating a simple, handy, and compact code. Instead of linearisation (a mathematical method of recurrent approximations), we propose to follow the physical relaxation of quantum-state populations from LTE to NLTE, using a small number of NLTE states. This computational scheme is essentially simpler and more compact than linearisation. The relaxation scheme replaces the Λ-iteration procedure with a physically evolving emissivity (or source function) that incorporates changing Menzel coefficients for the NLTE quantum-state populations. However, light scattering on free electrons is, in terms of Feynman graphs, a genuine second-order quantum process and cannot be reduced to consecutive processes of absorption and emission as in the case of radiative transfer in spectral lines. With duly chosen input parameters, SMART can compute the radiative acceleration imparted to the matter of the stellar atmosphere in turbulence clumps, which also connects the model atmosphere in more detail with the problem of stellar wind triggering. Another process incorporated into SMART is the diffusion of chemical elements and their isotopes in the atmospheres of chemically peculiar (CP) stars due to the usual radiative acceleration and the essential additional acceleration generated by the light-induced drift.

  19. Interface design of VSOP'94 computer code for safety analysis

    SciTech Connect

    Natsir, Khairina; Andiwijayakusuma, D.; Wahanani, Nursinta Adi; Yazid, Putranto Ilham

    2014-09-30

    Today, most software applications, including those in the nuclear field, come with a graphical user interface. VSOP'94 (Very Superior Old Program) was designed to simplify the process of performing reactor simulation. VSOP is an integrated code system that simulates the life history of a nuclear reactor and is devoted to education and research. One advantage of the VSOP program is its ability to calculate the neutron spectrum, fuel cycle, 2-D diffusion, resonance integrals, estimated reactor fuel costs, and integrated thermal hydraulics. VSOP can also be used for comparative studies and simulation of reactor safety. However, the existing VSOP is a conventional program, developed in Fortran 65, with several usability problems: for example, it runs only on DEC Alpha mainframe platforms, provides text-based output, and is difficult to use, especially in data preparation and interpretation of results. We have developed GUI-VSOP, an interface program that facilitates data preparation, runs the VSOP code, and presents the results in a more user-friendly way on a personal computer (PC). Modifications include the development of interfaces for preprocessing, processing, and postprocessing. The GUI-based preprocessing interface aims to make data preparation convenient. The processing interface provides convenience in configuring input files and libraries and in compiling the VSOP code. The postprocessing interface is designed to visualize the VSOP output in table and graphic forms. GUI-VSOP is expected to simplify and speed up the process and the analysis of safety aspects.

  20. Interface design of VSOP'94 computer code for safety analysis

    NASA Astrophysics Data System (ADS)

    Natsir, Khairina; Yazid, Putranto Ilham; Andiwijayakusuma, D.; Wahanani, Nursinta Adi

    2014-09-01

    Today, most software applications, including those in the nuclear field, come with a graphical user interface. VSOP'94 (Very Superior Old Program) was designed to simplify the process of performing reactor simulation. VSOP is an integrated code system that simulates the life history of a nuclear reactor and is devoted to education and research. One advantage of the VSOP program is its ability to calculate the neutron spectrum, fuel cycle, 2-D diffusion, resonance integrals, estimated reactor fuel costs, and integrated thermal hydraulics. VSOP can also be used for comparative studies and simulation of reactor safety. However, the existing VSOP is a conventional program, developed in Fortran 65, with several usability problems: for example, it runs only on DEC Alpha mainframe platforms, provides text-based output, and is difficult to use, especially in data preparation and interpretation of results. We have developed GUI-VSOP, an interface program that facilitates data preparation, runs the VSOP code, and presents the results in a more user-friendly way on a personal computer (PC). Modifications include the development of interfaces for preprocessing, processing, and postprocessing. The GUI-based preprocessing interface aims to make data preparation convenient. The processing interface provides convenience in configuring input files and libraries and in compiling the VSOP code. The postprocessing interface is designed to visualize the VSOP output in table and graphic forms. GUI-VSOP is expected to simplify and speed up the process and the analysis of safety aspects.

  1. Issues in computational fluid dynamics code verification and validation

    SciTech Connect

    Oberkampf, W.L.; Blottner, F.G.

    1997-09-01

    A broad range of mathematical modeling errors of fluid flow physics and numerical approximation errors are addressed in computational fluid dynamics (CFD). It is strongly believed that if CFD is to have a major impact on the design of engineering hardware and flight systems, the level of confidence in complex simulations must substantially improve. To better understand the present limitations of CFD simulations, a wide variety of physical modeling, discretization, and solution errors are identified and discussed. Here, discretization and solution errors refer to all errors caused by conversion of the original partial differential, or integral, conservation equations representing the physical process, to algebraic equations and their solution on a computer. The impact of boundary conditions on the solution of the partial differential equations and their discrete representation will also be discussed. Throughout the article, clear distinctions are made between the analytical mathematical models of fluid dynamics and the numerical models. Lax's Equivalence Theorem and its frailties in practical CFD solutions are pointed out. Distinctions are also made between the existence and uniqueness of solutions to the partial differential equations as opposed to the discrete equations. Two techniques are briefly discussed for the detection and quantification of certain types of discretization and grid resolution errors.

  2. Modeling of Ionization Physics with the PIC Code OSIRIS

    SciTech Connect

    Deng, S.; Tsung, F.; Lee, S.; Lu, W.; Mori, W.B.; Katsouleas, T.; Muggli, P.; Blue, B.E.; Clayton, C.E.; O'Connell, C.; Dodd, E.; Decker, F.J.; Huang, C.; Hogan, M.J.; Hemker, R.; Iverson, R.H.; Joshi, C.; Ren, C.; Raimondi, P.; Wang, S.; Walz, D.

    2005-09-27

    When considering intense particle or laser beams propagating in dense plasma or gas, ionization plays an important role. Impact ionization and tunnel ionization may create new plasma electrons, altering the physics of wakefield accelerators, causing blue shifts in laser spectra, creating and modifying instabilities, etc. Here we describe the addition of an impact ionization package into the 3-D, object-oriented, fully parallel PIC code OSIRIS. We apply the simulation tool to simulate the parameters of the upcoming E164 Plasma Wakefield Accelerator experiment at the Stanford Linear Accelerator Center (SLAC). We find that impact ionization is dominated by the plasma electrons moving in the wake rather than the 30 GeV drive beam electrons. Impact ionization leads to a significant number of trapped electrons accelerated from rest in the wake.
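
    For readers unfamiliar with how such a package operates per time step, the following is a generic Monte Carlo impact-ionization step of the standard form P = 1 - exp(-n*sigma*v*dt); the density, cross section, and speeds are invented, and this is not the OSIRIS implementation.

        # Generic Monte Carlo impact-ionization step (illustrative, not OSIRIS).
        import numpy as np

        rng = np.random.default_rng(2)
        n_gas = 1e25     # neutral density, m^-3 (assumed)
        sigma = 1e-20    # ionization cross section, m^2 (assumed)
        dt = 1e-12       # time step, s (assumed)

        v = np.abs(rng.normal(1e7, 1e6, size=100_000))    # electron speeds, m/s
        p_ionize = 1.0 - np.exp(-n_gas * sigma * v * dt)  # per-particle probability
        created = int((rng.random(v.size) < p_ionize).sum())
        print(created)   # plasma electrons added this step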

  3. Performance of a parallel code for the Euler equations on hypercube computers

    NASA Technical Reports Server (NTRS)

    Barszcz, Eric; Chan, Tony F.; Jesperson, Dennis C.; Tuminaro, Raymond S.

    1990-01-01

    The performance of hypercubes was evaluated on a computational fluid dynamics problem, and the parallel-environment issues that must be addressed were considered, such as algorithm changes, implementation choices, programming effort, and programming environment. The evaluation focuses on a widely used fluid dynamics code, FLO52, which solves the two-dimensional steady Euler equations describing flow around an airfoil. The code development experience is described, including interacting with the operating system, utilizing the message-passing communication system, and the code modifications necessary to increase parallel efficiency. Results from two hypercube parallel computers (a 16-node iPSC/2 and a 512-node NCUBE/ten) are discussed and compared. In addition, a mathematical model of the execution time was developed as a function of several machine and algorithm parameters. This model accurately predicts the actual run times obtained and is used to explore the performance of the code in interesting but physically realizable regions of the parameter space. Based on this model, predictions about future hypercubes are made.
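
    The execution-time model itself is not reproduced in the abstract; a generic model of the same shape (a compute term scaling as N/P plus a per-step message cost) can be sketched as follows, with all constants invented.

        # Generic hypercube execution-time model (constants are placeholders,
        # not the paper's fitted parameters).
        import math

        def exec_time(N, P, t_flop=1e-7, t_lat=1e-4, t_byte=1e-6, flops_per_cell=50):
            compute = N / P * flops_per_cell * t_flop
            msg_bytes = 8 * 4 * math.sqrt(N / P)   # boundary exchange of a 2-D block
            communicate = 4 * (t_lat + t_byte * msg_bytes)
            return compute + communicate

        for P in (16, 64, 256, 512):
            print(P, exec_time(N=256 * 256, P=P))  # compute shrinks, communication dominates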

  4. Benchmark Solutions for Computational Aeroacoustics (CAA) Code Validation

    NASA Technical Reports Server (NTRS)

    Scott, James R.

    2004-01-01

    NASA has conducted a series of Computational Aeroacoustics (CAA) Workshops on Benchmark Problems to develop a set of realistic CAA problems that can be used for code validation. In the Third (1999) and Fourth (2003) Workshops, the single-airfoil gust response problem, with real geometry effects, was included as one of the benchmark problems. Respondents were asked to calculate the airfoil RMS pressure and far-field acoustic intensity for different airfoil geometries and a wide range of gust frequencies. This paper presents the validated solutions that have been obtained to the benchmark problem and, in addition, compares them with classical flat-plate results. It is seen that airfoil geometry has a strong effect on the airfoil unsteady pressure and a significant effect on the far-field acoustic intensity. Those parts of the benchmark problem that have not yet been adequately solved are identified and presented as a challenge to the CAA research community.

  5. Fire aerosol experiment and comparisons with computer code predictions

    NASA Astrophysics Data System (ADS)

    Gregory, W. S.; Nichols, B. D.; White, B. W.; Smith, P. R.; Leslie, I. H.; Corkran, J. R.

    1988-08-01

    Los Alamos National Laboratory, in cooperation with New Mexico State University, has carried out a series of tests to provide experimental data on fire-generated aerosol transport. These data will be used to verify the aerosol transport capabilities of the FIRAC computer code. FIRAC was developed by Los Alamos for the U.S. Nuclear Regulatory Commission. It is intended to be used by safety analysts to evaluate the effects of hypothetical fires on nuclear plants. One of the most significant aspects of this analysis deals with smoke and radioactive material movement throughout the plant. The tests have been carried out using an industrial furnace that can generate gas temperatures up to 300 C. To date, we have used quartz aerosol with a median diameter of about 10 microns as the fire aerosol simulant. We also plan to use fire-generated aerosols of polystyrene and polymethyl methacrylate (PMMA). The test variables include two nominal gas flow rates (150 and 300 cu ft/min) and three nominal gas temperatures (ambient, 150 C, and 300 C). The test results are presented in the form of plots of aerosol deposition vs. length of duct. In addition, the mass of aerosol caught in a high-efficiency particulate air (HEPA) filter during the tests is reported. The tests are simulated with the FIRAC code, and the results are compared with the experimental data.

  6. External exposure model in the RESRAD computer code.

    SciTech Connect

    Kamboj, S.; Yu, C.; Environmental Assessment

    2002-06-01

    An external exposure model has been developed for the RESRAD computer code that provides flexibility in modeling soil contamination configurations for calculating external doses to exposed individuals. This model is based on the dose coefficients given in the U.S. Environmental Protection Agency's Federal Guidance Report No. 12 (FGR-12) and the point kernel method. It extends the applicability of FGR-12 data to include the effects of different source geometries, such as cover thickness, source thickness, source area, and shape of contaminated area of a specific site. A depth factor function was developed to express the dependence of the dose on the source thickness. A cover-and-depth factor function, derived from this depth factor function, takes into account the dependence of dose on the thickness of the source region and the thickness of the cover above the source region. To further extend the model for realistic geometries, area and shape factors were derived that depend not only on the lateral extent of the contamination, but also on source thickness, cover thickness, and radionuclides present. Results obtained with the model generally compare well with those from the Monte Carlo N-Particle transport code.
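
    The multiplicative structure described above can be sketched generically; the factor forms and numbers below are placeholders and are not RESRAD's actual functions or FGR-12 coefficients.

        # Sketch of dose = coefficient x geometry factors (placeholders throughout).
        import math

        def cover_and_depth_factor(source_cm, cover_cm, mu=0.08):
            # hypothetical attenuation-style dependence on source and cover thickness
            return (1.0 - math.exp(-mu * source_cm)) * math.exp(-mu * cover_cm)

        def dose_rate(conc, dose_coeff, source_cm, cover_cm, area_shape_factor):
            # infinite-slab dose coefficient scaled down for finite geometry
            return (conc * dose_coeff
                    * cover_and_depth_factor(source_cm, cover_cm)
                    * area_shape_factor)

        print(dose_rate(conc=1.0, dose_coeff=2.0e-3,
                        source_cm=15.0, cover_cm=5.0, area_shape_factor=0.8))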

  7. Application of the RESRAD computer code to VAMP scenario S

    SciTech Connect

    Gnanapragasam, E.K.; Yu, C.

    1997-03-01

    The RESRAD computer code developed at Argonne National Laboratory was among 11 models from 11 countries participating in the international Scenario S validation of radiological assessment models with Chernobyl fallout data from southern Finland. The validation test was conducted by the Multiple Pathways Assessment Working Group of the Validation of Environmental Model Predictions (VAMP) program coordinated by the International Atomic Energy Agency. RESRAD was enhanced to provide an output of contaminant concentrations in environmental media and in food products to compare with measured data from southern Finland. Probability distributions for inputs that were judged to be most uncertain were obtained from the literature and from information provided in the scenario description prepared by the Finnish Centre for Radiation and Nuclear Safety. The deterministic version of RESRAD was run repeatedly to generate probability distributions for the required predictions. These predictions were used later to verify the probabilistic RESRAD code. The RESRAD predictions of radionuclide concentrations are compared with measured concentrations in selected food products. The radiological doses predicted by RESRAD are also compared with those estimated by the Finnish Centre for Radiation and Nuclear Safety.
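
    The probabilistic procedure described (repeated deterministic runs over sampled inputs) can be sketched as follows; the stand-in model and input distributions are invented, not the VAMP scenario values.

        # Repeated deterministic runs over sampled inputs -> prediction distributions.
        import numpy as np

        rng = np.random.default_rng(3)

        def deterministic_model(kd, infiltration):
            # stand-in for one deterministic RESRAD run (invented relationship)
            return 100.0 / (1.0 + kd) * infiltration

        kd = rng.lognormal(mean=1.0, sigma=0.5, size=1000)  # assumed distribution
        infil = rng.uniform(0.1, 0.4, size=1000)            # assumed distribution
        predictions = deterministic_model(kd, infil)
        print(np.percentile(predictions, [5, 50, 95]))      # summary of the spread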

  8. Additional extensions to the NASCAP computer code, volume 2

    NASA Technical Reports Server (NTRS)

    Stannard, P. R.; Katz, I.; Mandell, M. J.

    1982-01-01

    Particular attention is given to comparison of the actual response of the SCATHA (Spacecraft Charging AT High Altitudes) P78-2 satellite with theoretical (NASCAP) predictions. Extensive comparisons for a variety of environmental conditions confirm the validity of the NASCAP model. A summary of the capabilities and range of validity of NASCAP is presented, with extensive reference to previously published applications. It is shown that NASCAP is capable of providing quantitatively accurate results when the object and environment are adequately represented and fall within the range of conditions for which NASCAP was intended. Three-dimensional electric field effects play an important role in determining the potential of dielectric surfaces and electrically isolated conducting surfaces, particularly in the presence of artificially imposed high voltages. A theory for such phenomena is presented and applied to the active control experiments carried out on SCATHA, as well as other space and laboratory experiments. Finally, some preliminary work toward modeling large spacecraft in polar Earth orbit is presented. An initial physical model is presented, including charge emission. A simple code based upon the model is described along with code test results.

  9. Comparison of computer codes for calculating dynamic loads in wind turbines

    NASA Technical Reports Server (NTRS)

    Spera, D. A.

    1978-01-01

    The development of computer codes for calculating dynamic loads in horizontal-axis wind turbines was examined, and a brief overview of each code is given. The performance of the individual codes was compared against two sets of test data measured on a 100 kW Mod-0 wind turbine. All codes are aeroelastic and include loads which are gravitational, inertial, and aerodynamic in origin.

  10. HYDRA-II: A hydrothermal analysis computer code: Volume 2, User's manual

    SciTech Connect

    McCann, R.A.; Lowery, P.S.; Lessor, D.L.

    1987-09-01

    HYDRA-II is a hydrothermal computer code capable of three-dimensional analysis of coupled conduction, convection, and thermal radiation problems. This code is especially appropriate for simulating the steady-state performance of spent fuel storage systems. The code has been evaluated for this application for the US Department of Energy's Commercial Spent Fuel Management Program. HYDRA-II provides a finite-difference solution in cartesian coordinates to the equations governing the conservation of mass, momentum, and energy. A cylindrical coordinate system may also be used to enclose the cartesian coordinate system. This exterior coordinate system is useful for modeling cylindrical cask bodies. The difference equations for conservation of momentum incorporate directional porosities and permeabilities that are available to model solid structures whose dimensions may be smaller than the computational mesh. The equation for conservation of energy permits modeling of orthotropic physical properties and film resistances. Several automated methods are available to model radiation transfer within enclosures and from fuel rod to fuel rod. The documentation of HYDRA-II is presented in three separate volumes. Volume 1 - Equations and Numerics describes the basic differential equations, illustrates how the difference equations are formulated, and gives the solution procedures employed. This volume, Volume 2 - User's Manual, contains code flow charts, discusses the code structure, provides detailed instructions for preparing an input file, and illustrates the operation of the code by means of a sample problem. The final volume, Volume 3 - Verification/Validation Assessments, provides a comparison between the analytical solution and the numerical simulation for problems with a known solution. 6 refs.

  11. HYDRA-II: A hydrothermal analysis computer code: Volume 3, Verification/validation assessments

    SciTech Connect

    McCann, R.A.; Lowery, P.S.

    1987-10-01

    HYDRA-II is a hydrothermal computer code capable of three-dimensional analysis of coupled conduction, convection, and thermal radiation problems. This code is especially appropriate for simulating the steady-state performance of spent fuel storage systems. The code has been evaluated for this application for the US Department of Energy's Commercial Spent Fuel Management Program. HYDRA-II provides a finite difference solution in cartesian coordinates to the equations governing the conservation of mass, momentum, and energy. A cylindrical coordinate system may also be used to enclose the cartesian coordinate system. This exterior coordinate system is useful for modeling cylindrical cask bodies. The difference equations for conservation of momentum are enhanced by the incorporation of directional porosities and permeabilities that aid in modeling solid structures whose dimensions may be smaller than the computational mesh. The equation for conservation of energy permits modeling of orthotropic physical properties and film resistances. Several automated procedures are available to model radiation transfer within enclosures and from fuel rod to fuel rod. The documentation of HYDRA-II is presented in three separate volumes. Volume I - Equations and Numerics describes the basic differential equations, illustrates how the difference equations are formulated, and gives the solution procedures employed. Volume II - User's Manual contains code flow charts, discusses the code structure, provides detailed instructions for preparing an input file, and illustrates the operation of the code by means of a model problem. This volume, Volume III - Verification/Validation Assessments, provides a comparison between the analytical solution and the numerical simulation for problems with a known solution. This volume also documents comparisons between the results of simulations of single- and multiassembly storage systems and actual experimental data. 11 refs., 55 figs., 13 tabs.

  12. HYDRA-II: A hydrothermal analysis computer code: Volume 1, Equations and numerics

    SciTech Connect

    McCann, R.A.

    1987-04-01

    HYDRA-II is a hydrothermal computer code capable of three-dimensional analysis of coupled conduction, convection, and thermal radiation problems. This code is especially appropriate for simulating the steady-state performance of spent fuel storage systems. The code has been evaluated for this application for the US Department of Energy's Commercial Spent Fuel Management Program. HYDRA-II provides a finite difference solution in Cartesian coordinates to the equations governing the conservation of mass, momentum, and energy. A cylindrical coordinate system may also be used to enclose the Cartesian coordinate system. This exterior coordinate system is useful for modeling cylindrical cask bodies. The difference equations for conservation of momentum are enhanced by the incorporation of directional porosities and permeabilities that aid in modeling solid structures whose dimensions may be smaller than the computational mesh. The equation for conservation of energy permits modeling of orthotropic physical properties and film resistances. Several automated procedures are available to model radiation transfer within enclosures and from fuel rod to fuel rod. The documentation of HYDRA-II is presented in three separate volumes. This volume, Volume I - Equations and Numerics, describes the basic differential equations, illustrates how the difference equations are formulated, and gives the solution procedures employed. Volume II - User's Manual contains code flow charts, discusses the code structure, provides detailed instructions for preparing an input file, and illustrates the operation of the code by means of a model problem. The final volume, Volume III - Verification/Validation Assessments, presents results of numerical simulations of single- and multiassembly storage systems and comparisons with experimental data. 4 refs.
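
    Across all three volumes, the core numerical idea is a finite-difference solve of the discretized conservation equations; a generic 1-D explicit conduction step of that general kind (not HYDRA-II's actual scheme, coordinates, or porosity treatment) looks like this.

        # Generic explicit finite-difference conduction step (illustration only).
        import numpy as np

        nx, dx, dt, alpha = 50, 0.01, 0.1, 1e-6   # grid and diffusivity (assumed)
        T = np.full(nx, 300.0)
        T[0], T[-1] = 400.0, 300.0                # fixed-temperature boundaries

        r = alpha * dt / dx**2                    # explicit stability needs r <= 0.5
        for _ in range(5000):
            T[1:-1] += r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
        print(T[:5])                              # temperatures relaxing toward steady state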

  13. Computer code for the atomistic simulation of lattice defects and dynamics. [COMENT code

    SciTech Connect

    Schiffgens, J.O.; Graves, N.J.; Oster, C.A.

    1980-04-01

    This document has been prepared to satisfy the need for a detailed, up-to-date description of a computer code that can be used to simulate phenomena on an atomistic level. COMENT was written in FORTRAN IV and COMPASS (CDC assembly language) to solve the classical equations of motion for a large number of atoms interacting according to a given force law, and to perform the desired ancillary analysis of the resulting data. COMENT is a dual-purpose code intended to describe static defect configurations as well as the detailed motion of atoms in a crystal lattice. It can be used to simulate the effect of temperature, impurities, and pre-existing defects on radiation-induced defect production mechanisms, defect migration, and defect stability.

  14. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation: Functional modules, F9-F11

    SciTech Connect

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume consists of the section of the manual dealing with three of the functional modules in the code. Those are the Morse-SGC for the SCALE system, Heating 7.2, and KENO V.a. The manual describes the latest released versions of the codes.

  15. HYDRA, A finite element computational fluid dynamics code: User manual

    SciTech Connect

    Christon, M.A.

    1995-06-01

    HYDRA is a finite element code which has been developed specifically to attack the class of transient, incompressible, viscous, computational fluid dynamics problems which are predominant in the world which surrounds us. The goal for HYDRA has been to achieve high performance across a spectrum of supercomputer architectures without sacrificing any of the aspects of the finite element method which make it so flexible and permit application to a broad class of problems. As supercomputer algorithms evolve, the continuing development of HYDRA will strive to achieve optimal mappings of the most advanced flow solution algorithms onto supercomputer architectures. HYDRA has drawn upon the many years of finite element expertise embodied in DYNA3D and NIKE3D. Certain key architectural ideas from both DYNA3D and NIKE3D have been adopted and further improved to fit the advanced dynamic memory management and data structures implemented in HYDRA. The philosophy for HYDRA is to focus on mapping flow algorithms to computer architectures to try to achieve a high level of performance, rather than just performing a port.

  16. Computational and Physical Analysis of Catalytic Compounds

    NASA Astrophysics Data System (ADS)

    Wu, Richard; Sohn, Jung Jae; Kyung, Richard

    2015-03-01

    Nanoparticles exhibit unique physical and chemical properties depending on their geometrical properties. For this reason, synthesis of nanoparticles with controlled shape and size is important for exploiting their unique properties. Catalyst supports are usually made of high-surface-area porous oxides or carbon nanomaterials. These support materials stabilize metal catalysts against sintering at high reaction temperatures. Many studies have demonstrated large enhancements of catalytic behavior due to the role of the oxide-metal interface. In this paper, the catalyzing ability of supported nano metal oxides, such as silicon oxide and titanium oxide compounds, is analyzed using computational chemistry methods. Computational programs such as GAMESS and Chemcraft have been used to compute the efficiencies of the catalytic compounds and the bonding-energy changes during optimization convergence. The results illustrate how the metal oxides stabilize and the steps this takes. The plot of computation step (N) versus energy (kcal/mol) shows that the energy of the titania converges faster, at the 7th iteration, whereas the silica converges at the 9th iteration.

  17. ASTRAL Code for Problems of Astrophysics and High Energy Density Physics

    NASA Astrophysics Data System (ADS)

    Chizhkova, N. E.; Ionov, G. V.; Karlykhanov, N. G.; Simonenko, V. A.

    2006-08-01

    The paper gives a brief description of the ASTRAL code package for astrophysics simulations, including features of the implementation of basic physical processes and two tests. A sketch of the structure of the code is provided.

  18. Selection of a computer code for Hanford low-level waste engineered-system performance assessment. Revision 1

    SciTech Connect

    McGrail, B.P.; Bacon, D.H.

    1998-02-01

    Planned performance assessments for the proposed disposal of low-activity waste (LAW) glass produced from remediation of wastes stored in underground tanks at Hanford, Washington will require calculations of radionuclide release rates from the subsurface disposal facility. These calculations will be done with the aid of computer codes. The available computer codes with suitable capabilities at the time Revision 0 of this document was prepared were ranked in terms of the feature sets implemented in the code that match a set of physical, chemical, numerical, and functional capabilities needed to assess release rates from the engineered system. The needed capabilities were identified from an analysis of the important physical and chemical processes expected to affect LAW glass corrosion and the mobility of radionuclides. This analysis was repeated in this report but updated to include additional processes that have been found to be important since Revision 0 was issued and to include additional codes that have been released. The highest ranked computer code was found to be the STORM code developed at PNNL for the US Department of Energy for evaluation of arid land disposal sites.

  19. Large Scale Computing and Storage Requirements for Nuclear Physics Research

    SciTech Connect

    Gerber, Richard A.; Wasserman, Harvey J.

    2012-03-02

    The National Energy Research Scientific Computing Center (NERSC) is the primary computing center for the DOE Office of Science, serving approximately 4,000 users and hosting some 550 projects that involve nearly 700 codes for a wide variety of scientific disciplines. In addition to large-scale computing resources NERSC provides critical staff support and expertise to help scientists make the most efficient use of these resources to advance the scientific mission of the Office of Science. In May 2011, NERSC, DOE’s Office of Advanced Scientific Computing Research (ASCR) and DOE’s Office of Nuclear Physics (NP) held a workshop to characterize HPC requirements for NP research over the next three to five years. The effort is part of NERSC’s continuing involvement in anticipating future user needs and deploying necessary resources to meet these demands. The workshop revealed several key requirements, in addition to achieving its goal of characterizing NP computing. The key requirements include: 1. Larger allocations of computational resources at NERSC; 2. Visualization and analytics support; and 3. Support at NERSC for the unique needs of experimental nuclear physicists. This report expands upon these key points and adds others. The results are based upon representative samples, called “case studies,” of the needs of science teams within NP. The case studies were prepared by NP workshop participants and contain a summary of science goals, methods of solution, current and future computing requirements, and special software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, “multi-core” environment that is expected to dominate HPC architectures over the next few years. The report also includes a section with NERSC responses to the workshop findings. NERSC has many initiatives already underway that address key workshop findings and all of the action items are aligned with NERSC strategic plans.

  20. Error threshold in topological quantum-computing models with color codes

    NASA Astrophysics Data System (ADS)

    Katzgraber, Helmut; Bombin, Hector; Martin-Delgado, Miguel A.

    2009-03-01

    Dealing with errors in quantum computing systems is possibly one of the hardest tasks when attempting to realize physical devices. By encoding the qubits in topological properties of a system, an inherent protection of the quantum states can be achieved. Traditional topologically-protected approaches are based on the braiding of quasiparticles. Recently, a braid-less implementation using brane-net condensates in 3-colexes has been proposed. In 2D it allows the transversal implementation of the whole Clifford group of quantum gates. In this work, we compute the error threshold for this topologically-protected quantum computing system in 2D, by means of mapping its error correction process onto a random 3-body Ising model on a triangular lattice. Errors manifest themselves as random perturbation of the plaquette interaction terms thus introducing frustration. Our results from Monte Carlo simulations suggest that these topological color codes are similarly robust to perturbations as the toric codes. Furthermore, they provide more computational capabilities and the possibility of having more qubits encoded in the quantum memory.
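
    The mapping onto a disordered Ising model can be illustrated with a much-simplified Metropolis sketch; for brevity this uses two-body bonds on a square lattice rather than the paper's three-body triangular-lattice model, and all parameters are invented.

        # Simplified random-bond Ising Metropolis sweep (two-body, square lattice;
        # the paper's actual model is a 3-body triangular-lattice Hamiltonian).
        import numpy as np

        rng = np.random.default_rng(4)
        L, T, p = 16, 2.0, 0.05                         # size, temperature, error rate
        J = np.where(rng.random((2, L, L)) < p, -1, 1)  # bonds flipped with probability p
        s = rng.choice([-1, 1], size=(L, L))

        def local_field(i, j):
            return (J[0, i, j] * s[(i + 1) % L, j] + J[0, (i - 1) % L, j] * s[(i - 1) % L, j]
                    + J[1, i, j] * s[i, (j + 1) % L] + J[1, i, (j - 1) % L] * s[i, (j - 1) % L])

        for sweep in range(200):
            for i in range(L):
                for j in range(L):
                    dE = 2.0 * s[i, j] * local_field(i, j)
                    if dE <= 0 or rng.random() < np.exp(-dE / T):
                        s[i, j] *= -1
        print(abs(s.mean()))   # magnetization as a rough order parameter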

  1. Physics Computing '92: Proceedings of the 4th International Conference

    NASA Astrophysics Data System (ADS)

    de Groot, Robert A.; Nadrchal, Jaroslav

    1993-04-01

    * Ordered Particle Simulations for Serial and MIMD Parallel Computers
    * "NOLP" -- Program Package for Laser Plasma Nonlinear Optics
    * Algorithms to Solve Nonlinear Least Square Problems
    * Distribution of Hydrogen Atoms in Pd-H Computed by Molecular Dynamics
    * A Ray Tracing of Optical System for Protein Crystallography Beamline at Storage Ring-SIBERIA-2
    * Vibrational Properties of a Pseudobinary Linear Chain with Correlated Substitutional Disorder
    * Application of the Software Package Mathematica in Generalized Master Equation Method
    * Linelist: An Interactive Program for Analysing Beam-foil Spectra
    * GROMACS: A Parallel Computer for Molecular Dynamics Simulations
    * GROMACS Method of Virial Calculation Using a Single Sum
    * The Interactive Program for the Solution of the Laplace Equation with the Elimination of Singularities for Boundary Functions
    * Random-Number Generators: Testing Procedures and Comparison of RNG Algorithms
    * Micro-TOPIC: A Tokamak Plasma Impurities Code
    * Rotational Molecular Scattering Calculations
    * Orthonormal Polynomial Method for Calibrating of Cryogenic Temperature Sensors
    * Frame-based System Representing Basis of Physics
    * The Role of Massively Data-parallel Computers in Large Scale Molecular Dynamics Simulations
    * Short-range Molecular Dynamics on a Network of Processors and Workstations
    * An Algorithm for Higher-order Perturbation Theory in Radiative Transfer Computations
    * Hydrostochastics: The Master Equation Formulation of Fluid Dynamics
    * HPP Lattice Gas on Transputers and Networked Workstations
    * Study on the Hysteresis Cycle Simulation Using Modeling with Different Functions on Intervals
    * Refined Pruning Techniques for Feed-forward Neural Networks
    * Random Walk Simulation of the Motion of Transient Charges in Photoconductors
    * The Optical Hysteresis in Hydrogenated Amorphous Silicon
    * Diffusion Monte Carlo Analysis of Modern Interatomic Potentials for He
    * A Parallel Strategy for Molecular Dynamics Simulations of Polar Liquids on

  2. Development and assessment of U.S. Nuclear Regulatory Commission thermal-hydraulic system computer codes

    SciTech Connect

    Shotkin, L.M.

    1996-11-01

    A review is provided of the reasons why the US Nuclear Regulatory Commission needs thermal-hydraulic system computer codes, the assumptions and approximations contained within these codes, and the reasons why test data are required to assess the accuracy of the codes. Specific examples of codes and test programs are given. The use of computer codes assessed against data from scaled test facilities to predict the full-scale plant response is discussed. A method to help focus resources and the need for quantifying code uncertainties are discussed. This paper concentrates on the loss-of-coolant accident (LOCA) because most of the analytical and experimental research has been concentrated in LOCAs.

  3. 2015 Final Reports from the Los Alamos National Laboratory Computational Physics Student Summer Workshop

    SciTech Connect

    Runnels, Scott Robert; Caldwell, Wendy; Brown, Barton Jed; Pederson, Clark; Brown, Justin; Burrill, Daniel; Feinblum, David; Hyde, David; Levick, Nathan; Lyngaas, Isaac; Maeng, Brad; Reed, Richard LeRoy; Sarno-Smith, Lois; Shohet, Gil; Skarda, Jinhie; Stevens, Josey; Zeppetello, Lucas; Grossman-Ponemon, Benjamin; Bottini, Joseph Larkin; Loudon, Tyson Shane; VanGessel, Francis Gilbert; Nagaraj, Sriram; Price, Jacob

    2015-10-15

    The two primary purposes of LANL's Computational Physics Student Summer Workshop are (1) to educate graduate and exceptional undergraduate students in the challenges and applications of computational physics of interest to LANL, and (2) to entice their interest toward those challenges. Computational physics is emerging as a discipline in its own right, combining expertise in mathematics, physics, and computer science. The mathematical aspects focus on numerical methods for solving equations on the computer as well as developing test problems with analytical solutions. The physics aspects are very broad, ranging from low-temperature material modeling to extremely high temperature plasma physics, radiation transport, and neutron transport. The computer science issues are concerned with matching numerical algorithms to emerging architectures and maintaining the quality of extremely large codes built to perform multi-physics calculations. Although graduate programs associated with computational physics are emerging, it is apparent that the pool of U.S. citizens in this multi-disciplinary field is relatively small and is typically not focused on the aspects that are of primary interest to LANL. Furthermore, more structured foundations for LANL interaction with universities in computational physics are needed; historically, interactions rely heavily on individuals' personalities and personal contacts. Thus a tertiary purpose of the Summer Workshop is to build an educational network of LANL researchers, university professors, and emerging students to advance the field and LANL's involvement in it. This report includes both the background for the program and the reports from the students.

  4. Computational methods for physical mapping of chromosomes

    SciTech Connect

    Torney, D.C.; Schenk, K.R. ); Whittaker, C.C. Los Alamos National Lab., NM ); White, S.W. )

    1990-01-01

    A standard technique for mapping a chromosome is to randomly select pieces, to use restriction enzymes to cut these pieces into fragments, and then to use the fragments for estimating the probability of overlap of these pieces. Typically, the order of the fragments within a piece is not determined, and the observed fragment data from each pair of pieces must be permuted N1 x N2 ways to evaluate the probability of overlap, N1 and N2 being the observed number of fragments in the two selected pieces. We will describe computational approaches used to substantially reduce the computational complexity of the calculation of overlap probability from fragment data. Presently, about 10^-4 CPU seconds on one processor of an IBM 3090 is required for calculation of overlap probability from the fragment data of two randomly selected pieces, with an average of ten fragments per piece. A parallel version has been written using IBM clustered FORTRAN. Parallel measurements for 1, 6, and 12 processors will be presented. This approach has proven promising in the mapping of chromosome 16 at Los Alamos National Laboratory. We will also describe other computational challenges presented by physical mapping. 4 refs., 4 figs., 1 tab.
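
    One ingredient of such an overlap calculation, matching unordered fragment lengths between two pieces within a measurement tolerance, can be sketched in a toy form; the lengths and tolerance below are made up, and the paper's actual probability statistic is more involved.

        # Toy greedy matching of unordered restriction-fragment lengths.
        def matching_fragments(frags_a, frags_b, tol=0.02):
            used, matches = set(), 0
            for a in frags_a:
                for k, b in enumerate(frags_b):
                    if k not in used and abs(a - b) <= tol * b:
                        used.add(k)          # each fragment matches at most once
                        matches += 1
                        break
            return matches

        clone1 = [1.2, 3.4, 0.9, 5.1, 2.2]   # fragment lengths in kb (made up)
        clone2 = [3.4, 2.2, 5.0, 7.7]
        print(matching_fragments(clone1, clone2))   # 3 plausibly shared fragments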

  5. Making FLASH an Open Code for the Academic High-Energy Density Physics Community

    NASA Astrophysics Data System (ADS)

    Lamb, D. Q.; Couch, S. M.; Dubey, A.; Gopal, S.; Graziani, C.; Lee, D.; Weide, K.; Xia, G.

    2010-11-01

    High-energy density physics (HEDP) is an active and growing field of research. DOE has recently decided to make FLASH a code for the academic HEDP community. FLASH is a modular and extensible compressible spatially adaptive hydrodynamics code that incorporates capabilities for a broad range of physical processes, performs well on a wide range of existing advanced computer architectures, and has a broad user base. A rigorous software maintenance process allows the code to operate simultaneously in production and development modes. We summarize the work we are doing to add HEDP capabilities to FLASH. We are adding (1) Spitzer conductivity, (2) super time-stepping to handle the disparity between diffusion and advection time scales, and (3) a description of electrons, ions, and radiation (in the diffusion approximation) by 3 temperatures (3T) to both the hydrodynamics and the MHD solvers. We are also adding (4) ray tracing, (5) laser energy deposition, and (6) a multi-species equation of state incorporating ionization to the hydrodynamics solver; and (7) Hall MHD, and (8) the Biermann battery term to the MHD solver.

  6. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation: Control modules C4, C6

    SciTech Connect

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U. S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume is part of the manual related to the control modules for the newest updated version of this computational package.

  7. Implementation of a 3D mixing layer code on parallel computers

    NASA Technical Reports Server (NTRS)

    Roe, K.; Thakur, R.; Dang, T.; Bogucz, E.

    1995-01-01

    This paper summarizes our progress and experience in developing a computational fluid dynamics code on parallel computers to simulate three-dimensional, spatially developing mixing layers. In this initial study, the three-dimensional time-dependent Euler equations are solved using a finite-volume explicit time-marching algorithm. The code was first programmed in Fortran 77 for sequential computers and was then converted for use on parallel computers using the conventional message-passing technique; we have not yet been able to compile the code with the present version of HPF compilers.

  8. Physics models in the toroidal transport code PROCTR

    SciTech Connect

    Howe, H.C.

    1990-08-01

    The physics models that are contained in the toroidal transport code PROCTR are described in detail. Time- and space-dependent models are included for the plasma hydrogenic-ion, helium, and impurity densities, the electron and ion temperatures, the toroidal rotation velocity, and the toroidal current profile. Time- and depth-dependent models for the trapped and mobile hydrogenic particle concentrations in the wall and a time-dependent point model for the number of particles in the limiter are also included. Time-dependent models for neutral particle transport, neutral beam deposition and thermalization, fusion heating, impurity radiation, pellet injection, and the radial electric potential are included and recalculated periodically as the time-dependent models evolve. The plasma solution is obtained either in simple flux coordinates, where the radial shift of each elliptical, toroidal flux surface is included to maintain an approximate pressure equilibrium, or in general three-dimensional torsatron coordinates represented by series of helical harmonics. The detailed coupling of the plasma, scrape-off layer, limiter, and wall models through the neutral transport model makes PROCTR especially suited for modeling of recycling and particle control in toroidal plasmas. The model may also be used in a steady-state profile analysis mode for studying energy and particle balances starting with measured plasma profiles.

  9. The Los Alamos suite of relativistic atomic physics codes

    SciTech Connect

    Fontes, C. J.; Zhang, H. L.; Abdallah, J. Jr.; Clark, R. E. H.; Kilcrease, D. P.; Colgan, J.; Cunningham, R. T.; Hakel, P.; Magee, N. H.; Sherrill, M. E.

    2015-05-28

    The Los Alamos SuitE of Relativistic (LASER) atomic physics codes is a robust, mature platform that has been used to model highly charged ions in a variety of ways. The suite includes capabilities for calculating data related to fundamental atomic structure, as well as the processes of photoexcitation, electron-impact excitation and ionization, photoionization and autoionization within a consistent framework. These data can be of a basic nature, such as cross sections and collision strengths, which are useful in making predictions that can be compared with experiments to test fundamental theories of highly charged ions, such as quantum electrodynamics. The suite can also be used to generate detailed models of energy levels and rate coefficients, and to apply them in the collisional-radiative modeling of plasmas over a wide range of conditions. Such modeling is useful, for example, in the interpretation of spectra generated by a variety of plasmas. In this work, we provide a brief overview of the capabilities within the Los Alamos relativistic suite along with some examples of its application to the modeling of highly charged ions.

  10. The Los Alamos suite of relativistic atomic physics codes

    DOE PAGES

    Fontes, C. J.; Zhang, H. L.; Abdallah, J. Jr.; Clark, R. E. H.; Kilcrease, D. P.; Colgan, J.; Cunningham, R. T.; Hakel, P.; Magee, N. H.; Sherrill, M. E.

    2015-05-28

    The Los Alamos SuitE of Relativistic (LASER) atomic physics codes is a robust, mature platform that has been used to model highly charged ions in a variety of ways. The suite includes capabilities for calculating data related to fundamental atomic structure, as well as the processes of photoexcitation, electron-impact excitation and ionization, photoionization and autoionization within a consistent framework. These data can be of a basic nature, such as cross sections and collision strengths, which are useful in making predictions that can be compared with experiments to test fundamental theories of highly charged ions, such as quantum electrodynamics. The suite can also be used to generate detailed models of energy levels and rate coefficients, and to apply them in the collisional-radiative modeling of plasmas over a wide range of conditions. Such modeling is useful, for example, in the interpretation of spectra generated by a variety of plasmas. In this work, we provide a brief overview of the capabilities within the Los Alamos relativistic suite along with some examples of its application to the modeling of highly charged ions.

  11. The modification and application of RAMS computer code. Final report

    SciTech Connect

    McKee, T.B.

    1995-01-17

    The Regional Atmospheric Modeling System (RAMS) has been utilized in its most up-to-date form, version 3a, to simulate a case night from the Atmospheric Studies in COmplex Terrain (ASCOT) experimental program. ASCOT held a wintertime observational campaign during February 1991 to observe the often strong drainage flows that form on the Great Plains and in the canyons embedded within the slope from the Continental Divide to the Great Plains. A high-resolution (500 m grid spacing) simulation of the 4-5 February 1991 case night using the more advanced turbulence closure now available in RAMS 3a allowed greater analysis of the physical processes governing the drainage flows. It is found that shear interactions above and within the drainage flow are important and are overpredicted with the new scheme at small grid spacing (< ~1000 m). The implication is that contaminants trapped in nighttime stable flows such as these will be mixed too strongly in the vertical, reducing predicted ground concentrations. The HYPACT code has been added to the capability at LANL, although, due to the reduced scope of work, no simulations with HYPACT were performed.

  12. Additions and Improvements to the FLASH Code for Simulating High Energy Density Physics Experiments

    NASA Astrophysics Data System (ADS)

    Lamb, D. Q.; Daley, C.; Dubey, A.; Fatenejad, M.; Flocke, N.; Graziani, C.; Lee, D.; Tzeferacos, P.; Weide, K.

    2015-11-01

    FLASH is an open source, finite-volume Eulerian, spatially adaptive radiation hydrodynamics and magnetohydrodynamics code that incorporates capabilities for a broad range of physical processes, performs well on a wide range of computer architectures, and has a broad user base. Extensive capabilities have been added to FLASH to make it an open toolset for the academic high energy density physics (HEDP) community. We summarize these capabilities, with particular emphasis on recent additions and improvements. These include advancements in the optical ray tracing laser package, with methods such as bi-cubic 2D and tri-cubic 3D interpolation of electron number density, adaptive stepping and 2nd-, 3rd-, and 4th-order Runge-Kutta integration methods. Moreover, we showcase the simulated magnetic field diagnostic capabilities of the code, including induction coils, Faraday rotation, and proton radiography. We also describe several collaborations with the National Laboratories and the academic community in which FLASH has been used to simulate HEDP experiments. This work was supported in part at the University of Chicago by the DOE NNSA ASC through the Argonne Institute for Computing in Science under field work proposal 57789; and the NSF under grant PHY-0903997.

  13. Development of a numerical computer code and circuit element models for simulation of firing systems

    SciTech Connect

    Carpenter, K.H. (Dept. of Electrical and Computer Engineering)

    1990-07-02

    Numerical simulation of firing systems requires both the appropriate circuit analysis framework and the special element models required by the application. We have modified the SPICE circuit analysis code (version 2G.6), developed originally at the Electronic Research Laboratory of the University of California, Berkeley, to allow it to be used on MSDOS-based, personal computers and to give it two additional circuit elements needed by firing systems--fuses and saturating inductances. An interactive editor and a batch driver have been written to ease the use of the SPICE program by system designers, and the interactive graphical post processor, NUTMEG, supplied by U. C. Berkeley with SPICE version 3B1, has been interfaced to the output from the modified SPICE. Documentation and installation aids have been provided to make the total software system accessible to PC users. Sample problems show that the resulting code is in agreement with the FIRESET code on which the fuse model was based (with some modifications to the dynamics of scaling fuse parameters). In order to allow for more complex simulations of firing systems, studies have been made of additional special circuit elements--switches and ferrite cored inductances. A simple switch model has been investigated which promises to give at least a first approximation to the physical effects of a non ideal switch, and which can be added to the existing SPICE circuits without changing the SPICE code itself. The effect of fast rise time pulses on ferrites has been studied experimentally in order to provide a base for future modeling and incorporation of the dynamic effects of changes in core magnetization into the SPICE code. This report contains detailed accounts of the work on these topics performed during the period it covers, and has appendices listing all source code written documentation produced.

  14. STEALTH: a Lagrange explicit finite difference code for solids, structural, and thermohydraulic analysis. Volume 7: implicit hydrodynamics. Computer code manual. [PWR; BWR

    SciTech Connect

    McKay, M.W.

    1982-06-01

    STEALTH is a family of computer codes that solve the equations of motion for a general continuum. These codes can be used to calculate a variety of physical processes in which the dynamic behavior of a continuum is involved. The versions of STEALTH described in this volume were designed for the calculation of problems involving low-speed fluid flow. They employ an implicit finite difference technique to solve the one- and two-dimensional equations of motion, written for an arbitrary coordinate system, for both incompressible and compressible fluids. The solution technique involves an iterative solution of the implicit, Lagrangian finite difference equations. Convection terms that result from the use of an arbitrarily-moving coordinate system are calculated separately. This volume provides the theoretical background, the finite difference equations, and the input instructions for the one- and two-dimensional codes; a discussion of several sample problems; and a listing of the input decks required to run those problems.

  15. MMA, A Computer Code for Multi-Model Analysis

    SciTech Connect

    Eileen P. Poeter and Mary C. Hill

    2007-08-20

    This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that they tend to favor more complicated models as more data become available than do the other methods, which makes sense in many situations.
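
    The four default criteria are standard closed-form expressions, so a compact sketch can show how they rank models. The snippet below computes AIC, AICc, and BIC from a least-squares calibration and converts AICc differences into model weights; the three "models" are hypothetical (SSE, n, k) triples, and MMA's exact likelihood convention may differ.

```python
import numpy as np

# Sketch of the model-discrimination criteria MMA reports, assuming each
# model was calibrated by least squares (n observations, k parameters,
# sse = sum of squared weighted residuals). Standard formulas, not MMA source.
def criteria(sse, n, k):
    loglik = -0.5 * n * (np.log(2 * np.pi * sse / n) + 1)  # concentrated log-likelihood
    aic = -2 * loglik + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)             # small-sample correction
    bic = -2 * loglik + k * np.log(n)
    return aic, aicc, bic

# Three hypothetical calibrated models of the same system: (SSE, n, k).
models = {"A": (12.4, 30, 3), "B": (10.9, 30, 5), "C": (10.7, 30, 8)}
aicc_vals = {m: criteria(*v)[1] for m, v in models.items()}
best = min(aicc_vals.values())
weights = {m: np.exp(-0.5 * (v - best)) for m, v in aicc_vals.items()}
total = sum(weights.values())
for m in models:
    print(f"model {m}: AICc = {aicc_vals[m]:7.2f}   weight = {weights[m]/total:.3f}")
```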

  16. T-Matrix: Codes for Computing Electromagnetic Scattering by Nonspherical and Aggregated Particles

    NASA Astrophysics Data System (ADS)

    Waterman, Peter; Mishchenko, Michael I.; Travis, Larry D.; Mackowski, Daniel W.

    2015-11-01

    The T-Matrix package includes codes to compute electromagnetic scattering by homogeneous, rotationally symmetric nonspherical particles in fixed and random orientations, randomly oriented two-sphere clusters with touching or separated components, and multi-sphere clusters in fixed and random orientations. All codes are written in Fortran-77. LAPACK-based, extended-precision, Gauss-elimination- and NAG-based, and superposition codes are available, as are double-precision superposition, parallelized double-precision, double-precision Lorenz-Mie codes, and codes for the computation of the coefficients for the generalized Chebyshev shape.

  17. Physics Computing '92: Proceedings of the 4th International Conference

    NASA Astrophysics Data System (ADS)

    de Groot, Robert A.; Nadrchal, Jaroslav

    1993-04-01

    * Ordered Particle Simulations for Serial and MIMD Parallel Computers * "NOLP" -- Program Package for Laser Plasma Nonlinear Optics * Algorithms to Solve Nonlinear Least Square Problems * Distribution of Hydrogen Atoms in Pd-H Computed by Molecular Dynamics * A Ray Tracing of Optical System for Protein Crystallography Beamline at Storage Ring-SIBERIA-2 * Vibrational Properties of a Pseudobinary Linear Chain with Correlated Substitutional Disorder * Application of the Software Package Mathematica in Generalized Master Equation Method * Linelist: An Interactive Program for Analysing Beam-foil Spectra * GROMACS: A Parallel Computer for Molecular Dynamics Simulations * GROMACS Method of Virial Calculation Using a Single Sum * The Interactive Program for the Solution of the Laplace Equation with the Elimination of Singularities for Boundary Functions * Random-Number Generators: Testing Procedures and Comparison of RNG Algorithms * Micro-TOPIC: A Tokamak Plasma Impurities Code * Rotational Molecular Scattering Calculations * Orthonormal Polynomial Method for Calibrating of Cryogenic Temperature Sensors * Frame-based System Representing Basis of Physics * The Role of Massively Data-parallel Computers in Large Scale Molecular Dynamics Simulations * Short-range Molecular Dynamics on a Network of Processors and Workstations * An Algorithm for Higher-order Perturbation Theory in Radiative Transfer Computations * Hydrostochastics: The Master Equation Formulation of Fluid Dynamics * HPP Lattice Gas on Transputers and Networked Workstations * Study on the Hysteresis Cycle Simulation Using Modeling with Different Functions on Intervals * Refined Pruning Techniques for Feed-forward Neural Networks * Random Walk Simulation of the Motion of Transient Charges in Photoconductors * The Optical Hysteresis in Hydrogenated Amorphous Silicon * Diffusion Monte Carlo Analysis of Modern Interatomic Potentials for He * A Parallel Strategy for Molecular Dynamics Simulations of Polar Liquids on

  18. UCODE, a computer code for universal inverse modeling

    USGS Publications Warehouse

    Poeter, E.P.; Hill, M.C.

    1999-01-01

    This article presents the US Geological Survey computer program UCODE, which was developed in collaboration with the US Army Corps of Engineers Waterways Experiment Station and the International Ground Water Modeling Center of the Colorado School of Mines. UCODE performs inverse modeling, posed as a parameter-estimation problem, using nonlinear regression. Any application model or set of models can be used; the only requirement is that they have numerical (ASCII or text only) input and output files and that the numbers in these files have sufficient significant digits. Application models can include preprocessors and postprocessors as well as models related to the processes of interest (physical, chemical and so on), making UCODE extremely powerful for model calibration. Estimated parameters can be defined flexibly with user-specified functions. Observations to be matched in the regression can be any quantity for which a simulated equivalent value can be produced. Simulated equivalent values are calculated using values that appear in the application model output files and can be manipulated with additive and multiplicative functions, if necessary. Prior, or direct, information on estimated parameters also can be included in the regression. The nonlinear regression problem is solved by minimizing a weighted least-squares objective function with respect to the parameter values using a modified Gauss-Newton method. Sensitivities needed for the method are calculated approximately by forward or central differences, and problems and solutions related to this approximation are discussed. Statistics are calculated and printed for use in (1) diagnosing inadequate data or identifying parameters that probably cannot be estimated with the available data, (2) evaluating estimated parameter values, (3) evaluating the model representation of the actual processes and (4) quantifying the uncertainty of model simulated values. UCODE is intended for use on any computer operating
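
    The regression machinery described here, a modified Gauss-Newton iteration with finite-difference sensitivities, is easy to sketch. The example below assumes a hypothetical two-parameter exponential model in place of the external application model (UCODE itself communicates with the model through its input and output files):

```python
import numpy as np

# Sketch of the core UCODE loop: weighted nonlinear regression by
# Gauss-Newton with sensitivities from forward differences. sim() is a
# hypothetical two-parameter stand-in for "run the application model".
def sim(p, t):
    return p[0] * np.exp(-p[1] * t)

def gauss_newton(obs, t, w, p, steps=20, fd=1e-6):
    for _ in range(steps):
        r = obs - sim(p, t)                        # weighted residual vector target
        J = np.empty((len(t), len(p)))             # forward-difference Jacobian
        for j in range(len(p)):
            dp = np.where(np.arange(len(p)) == j, fd * max(abs(p[j]), 1.0), 0.0)
            J[:, j] = (sim(p + dp, t) - sim(p, t)) / dp[j]
        # normal equations of the weighted least-squares step
        step = np.linalg.solve(J.T @ (w[:, None] * J), J.T @ (w * r))
        p = p + step
        if np.linalg.norm(step) < 1e-10:
            break
    return p

t = np.linspace(0.0, 4.0, 25)
rng = np.random.default_rng(0)
obs = sim(np.array([3.0, 0.7]), t) + 0.01 * rng.standard_normal(t.size)
print(gauss_newton(obs, t, np.ones_like(t), np.array([1.0, 0.1])))
```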

  19. Large Scale Computing and Storage Requirements for High Energy Physics

    SciTech Connect

    Gerber, Richard A.; Wasserman, Harvey

    2010-11-24

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years. The report includes

  20. Operations analysis (study 2.1). Program listing for the LOVES computer code

    NASA Technical Reports Server (NTRS)

    Wray, S. T., Jr.

    1974-01-01

    A listing of the LOVES computer program is presented. The program is coded partially in SIMSCRIPT and FORTRAN. This version of LOVES is compatible with both the CDC 7600 and the UNIVAC 1108 computers. The code has been compiled, loaded, and executed successfully on the EXEC 8 system for the UNIVAC 1108.

  1. Performance analysis of large scale parallel CFD computing based on Code_Saturne

    NASA Astrophysics Data System (ADS)

    Shang, Zhi

    2013-02-01

    Running computational fluid dynamics (CFD) codes at large scale requires parallel computing. At the petascale, general parallel computing without any optimization is not enough, especially for complex industrial problems that employ a large number of mesh cells to capture the details of the geometry. Distributing these mesh cells among the multi-processors of Terascale and Petascale systems so as to obtain good parallel performance is a real challenge. The mesh partitioning software packages Metis, ParMetis, PT-Scotch and Zoltan were chosen as candidates and ported into Code_Saturne to test whether they can lead Code_Saturne towards Petascale and Exascale parallel CFD computing. Through the studies, it was found that mesh partitioning packages based on graph partitioning methods can help the CFD code obtain good mesh distributions for high performance computing (HPC).
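
    To make the role of graph-based mesh partitioning concrete, the toy sketch below bisects a small structured-grid adjacency graph with the Fiedler vector (spectral bisection). This is a deliberately simple stand-in for what Metis, ParMetis, PT-Scotch, and Zoltan do with multilevel heuristics at scale; the grid graph is hypothetical.

```python
import numpy as np

# Spectral bisection of a small mesh adjacency graph: split cells into
# two balanced halves while keeping the edge cut (inter-process
# communication) small. Production partitioners use multilevel methods.
nx, ny = 6, 4
n = nx * ny
A = np.zeros((n, n))
for i in range(nx):
    for j in range(ny):
        u = i * ny + j
        if i + 1 < nx: A[u, u + ny] = A[u + ny, u] = 1.0  # east neighbour
        if j + 1 < ny: A[u, u + 1] = A[u + 1, u] = 1.0    # north neighbour

L = np.diag(A.sum(axis=1)) - A          # graph Laplacian
vals, vecs = np.linalg.eigh(L)
fiedler = vecs[:, 1]                    # eigenvector of 2nd-smallest eigenvalue
part = fiedler > np.median(fiedler)     # median split balances the two halves
cut = sum(A[u, v] for u in range(n) for v in range(u + 1, n) if part[u] != part[v])
print("partition sizes:", int(part.sum()), int((~part).sum()), " edge cut:", int(cut))
```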

  2. A Framework for Understanding Physics Students' Computational Modeling Practices

    NASA Astrophysics Data System (ADS)

    Lunk, Brandon Robert

    With the growing push to include computational modeling in the physics classroom, we are faced with the need to better understand students' computational modeling practices. While existing research on programming comprehension explores how novices and experts generate programming algorithms, little of this discusses how domain content knowledge, and physics knowledge in particular, can influence students' programming practices. In an effort to better understand this issue, I have developed a framework for modeling these practices based on a resource stance towards student knowledge. A resource framework models knowledge as the activation of vast networks of elements called "resources." Much like neurons in the brain, resources that become active can trigger cascading events of activation throughout the broader network. This model emphasizes the connectivity between knowledge elements and provides a description of students' knowledge base. Together with resources, the concepts of "epistemic games" and "frames" provide a means for addressing the interaction between content knowledge and practices. Although this framework has generally been limited to describing conceptual and mathematical understanding, it also provides a means for addressing students' programming practices. In this dissertation, I will demonstrate this facet of a resource framework as well as fill in an important missing piece: a set of epistemic games that can describe students' computational modeling strategies. The development of this theoretical framework emerged from the analysis of video data of students generating computational models during the laboratory component of a Matter & Interactions: Modern Mechanics course. Student participants across two semesters were recorded as they worked in groups to fix pre-written computational models that were initially missing key lines of code. Analysis of this video data showed that the students' programming practices were highly influenced by

  3. GASFLOW: A Computational Fluid Dynamics Code for Gases, Aerosols, and Combustion, Volume 3: Assessment Manual

    SciTech Connect

    Müller, C.; Hughes, E. D.; Niederauer, G. F.; Wilkening, H.; Travis, J. R.; Spore, J. W.; Royl, P.; Baumann, W.

    1998-10-01

    Los Alamos National Laboratory (LANL) and Forschungszentrum Karlsruhe (FzK) are developing GASFLOW, a three-dimensional (3D) fluid dynamics field code as a best- estimate tool to characterize local phenomena within a flow field. Examples of 3D phenomena include circulation patterns; flow stratification; hydrogen distribution mixing and stratification; combustion and flame propagation; effects of noncondensable gas distribution on local condensation and evaporation; and aerosol entrainment, transport, and deposition. An analysis with GASFLOW will result in a prediction of the gas composition and discrete particle distribution in space and time throughout the facility and the resulting pressure and temperature loadings on the walls and internal structures with or without combustion. A major application of GASFLOW is for predicting the transport, mixing, and combustion of hydrogen and other gases in nuclear reactor containment and other facilities. It has been applied to situations involving transporting and distributing combustible gas mixtures. It has been used to study gas dynamic behavior in low-speed, buoyancy-driven flows, as well as sonic flows or diffusion dominated flows; and during chemically reacting flows, including deflagrations. The effects of controlling such mixtures by safety systems can be analyzed. The code version described in this manual is designated GASFLOW 2.1, which combines previous versions of the United States Nuclear Regulatory Commission code HMS (for Hydrogen Mixing Studies) and the Department of Energy and FzK versions of GASFLOW. The code was written in standard Fortran 90. This manual comprises three volumes. Volume I describes the governing physical equations and computational model. Volume II describes how to use the code to set up a model geometry, specify gas species and material properties, define initial and boundary conditions, and specify different outputs, especially graphical displays. Sample problems are included. Volume

  4. GASFLOW: A Computational Fluid Dynamics Code for Gases, Aerosols, and Combustion, Volume 2: User's Manual

    SciTech Connect

    Nichols, B. D.; Mueller, C.; Necker, G. A.; Travis, J. R.; Spore, J. W.; Lam, K. L.; Royl, P.; Wilson, T. L.

    1998-10-01

    Los Alamos National Laboratory (LANL) and Forschungszentrum Karlsruhe (FzK) are developing GASFLOW, a three-dimensional (3D) fluid dynamics field code as a best-estimate tool to characterize local phenomena within a flow field. Examples of 3D phenomena include circulation patterns; flow stratification; hydrogen distribution mixing and stratification; combustion and flame propagation; effects of noncondensable gas distribution on local condensation and evaporation; and aerosol entrainment, transport, and deposition. An analysis with GASFLOW will result in a prediction of the gas composition and discrete particle distribution in space and time throughout the facility and the resulting pressure and temperature loadings on the walls and internal structures with or without combustion. A major application of GASFLOW is for predicting the transport, mixing, and combustion of hydrogen and other gases in nuclear reactor containment and other facilities. It has been applied to situations involving transporting and distributing combustible gas mixtures. It has been used to study gas dynamic behavior in low-speed, buoyancy-driven flows, as well as sonic flows or diffusion dominated flows; and during chemically reacting flows, including deflagrations. The effects of controlling such mixtures by safety systems can be analyzed. The code version described in this manual is designated GASFLOW 2.1, which combines previous versions of the United States Nuclear Regulatory Commission code HMS (for Hydrogen Mixing Studies) and the Department of Energy and FzK versions of GASFLOW. The code was written in standard Fortran 90. This manual comprises three volumes. Volume I describes the governing physical equations and computational model. Volume II describes how to use the code to set up a model geometry, specify gas species and material properties, define initial and boundary conditions, and specify different outputs, especially graphical displays. Sample problems are included. Volume III

  5. SWAAM-LT: The long-term, sodium/water reaction analysis method computer code

    SciTech Connect

    Shin, Y.W.; Chung, H.H.; Wiedermann, A.H.; Tanabe, H.

    1993-01-01

    The SWAAM-LT Code, developed for analysis of long-term effects of sodium/water reactions, is discussed. The theoretical formulation of the code is described, including the introduction of system matrices for ease of computer programming as a general system code. Also, some typical results of the code predictions for available large scale tests are presented. Test data for steam generator designs with and without the cover-gas feature are available and analyzed. The capabilities and limitations of the code are then discussed in light of the comparison between the code predictions and the test data.

  6. Assessment of the prevailing physics codes: LEOPARD, LASER, and EPRI-CELL

    SciTech Connect

    Lan, J.S.

    1981-01-01

    In order to analyze core performance and fuel management, it is necessary to verify reactor physics codes in great detail. This kind of work not only serves the purpose of understanding and controlling the characteristics of each code, but also ensures their reliability as codes continually change due to constant modifications and machine transfers. This paper will present the results of a comprehensive verification of three code packages - LEOPARD, LASER, and EPRI-CELL.

  7. Benchmark and partial validation testing of the FLASH computer code, Version 3.0

    SciTech Connect

    Martian, P.; Smith, C.S.

    1993-09-01

    This document presents methods and results of benchmark testing (i.e., code-to-code comparisons) and partial validation testing (i.e., tests which compare field data to the computer generated solutions) of the FLASH computer code, Version 3.0, which were conducted to determine if the code is ready for performance assessment studies of the Radioactive Waste Management Complex. Three test problems are presented that were designed to check computational efficiency, accuracy of the numerical algorithms, and the capability of the code to simulate diverse hydrological conditions. These test problems were designed to specifically test the code's ability to simulate (a) seasonal infiltration in response to meteorological conditions, (b) changing watertable elevations due to a transient areal source of water (i.e., influx from spreading basins), and (c) infiltration into fractured basalt as a result of seasonal water in drainage ditches. The FLASH simulations generally compared well with the benchmark codes, indicating good stability and acceptable computational efficiency while simulating a wide range of conditions. The code appears operational for modeling both unsaturated and saturated flow in fractured, heterogeneous porous media. However, the code failed to converge when an unsaturated-to-saturated transition occurred. Consequently, the code should not be used when this condition occurs or is expected to occur, i.e., when perched water is present or when infiltration rates exceed the saturated conductivity of the soil.

  8. MMA, A Computer Code for Multi-Model Analysis

    USGS Publications Warehouse

    Poeter, Eileen P.; Hill, Mary C.

    2007-01-01

    This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that they tend to favor more complicated models as more data become available than do the other methods, which makes sense in many situations. Many applications of MMA will

  9. Collaborative Physical Chemistry Projects Involving Computational Chemistry

    NASA Astrophysics Data System (ADS)

    Whisnant, David M.; Howe, Jerry J.; Lever, Lisa S.

    2000-02-01

    The physical chemistry classes from three colleges have collaborated on two computational chemistry projects using Quantum CAChe 3.0 and Gaussian 94W running on Pentium II PCs. Online communication by email and the World Wide Web was an important part of the collaboration. In the first project, students used molecular modeling to predict benzene derivatives that might be possible hair dyes. They used PM3 and ZINDO calculations to predict the electronic spectra of the molecules and tested the predicted spectra by comparing some with experimental measurements. They also did literature searches for real hair dyes and possible health effects. In the final phase of the project they proposed a synthetic pathway for one compound. In the second project the students were asked to predict which isomer of a small carbon cluster (C3, C4, or C5) was responsible for a series of IR lines observed in the spectrum of a carbon star. After preliminary PM3 calculations, they used ab initio calculations at the HF/6-31G(d) and MP2/6-31G(d) level to model the molecules and predict their vibrational frequencies and rotational constants. A comparison of the predictions with the experimental spectra suggested that the linear isomer of the C5 molecule was responsible for the lines.

  10. Computing support for High Energy Physics

    SciTech Connect

    Avery, P.; Yelton, J.

    1996-12-01

    This computing proposal (Task S) is submitted separately but in support of the High Energy Experiment (CLEO, Fermilab, CMS) and Theory tasks. The authors have built a very strong computing base at Florida over the past 8 years. In fact, computing has been one of the main contributions to their experimental collaborations, involving not just computing capacity for running Monte Carlos and data reduction, but participation in many computing initiatives, industrial partnerships, computing committees and collaborations. These facts justify the submission of a separate computing proposal.

  11. The role of computational physics in the liberal arts curriculum

    NASA Astrophysics Data System (ADS)

    Dominguez, Rachele; Huff, Benjamin

    2015-09-01

    The role of computational physics education varies dramatically from department to department. We will discuss a new computational physics course at Randolph-Macon College and our attempt to identify where it fits (or should fit) into the larger liberal arts curriculum and why. In doing so, we will describe the goals of the course, and how the liberal arts curriculum conditions the exploration of computational physics.

  12. A generalized one-dimensional computer code for turbomachinery cooling passage flow calculations

    NASA Technical Reports Server (NTRS)

    Kumar, Ganesh N.; Roelke, Richard J.; Meitner, Peter L.

    1989-01-01

    A generalized one-dimensional computer code for analyzing the flow and heat transfer in turbomachinery cooling passages was developed. This code is capable of handling rotating cooling passages with turbulators, 180 degree turns, pin fins, finned passages, by-pass flows, tip cap impingement flows, and flow branching. The code is an extension of a one-dimensional code developed by P. Meitner. In the subject code, correlations for both heat transfer coefficient and pressure loss computations were developed to model each of the above mentioned types of coolant passages. The code has the capability of independently computing the friction factor and heat transfer coefficient on each side of a rectangular passage. Either the mass flow at the inlet to the channel or the exit plane pressure can be specified. For a specified inlet total temperature, inlet total pressure, and exit static pressure, the code computes the flow rates through the main branch, the subbranches, and the tip cap for impingement cooling, in addition to computing the coolant pressure, temperature, and heat transfer coefficient distribution in each coolant flow branch. Predictions from the subject code for both nonrotating and rotating passages agree well with experimental data. The code was used to analyze the cooling passage of a research cooled radial rotor.

  13. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation: Functional modules F1-F8

    SciTech Connect

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume consists of the section of the manual dealing with eight of the functional modules in the code. Those are: BONAMI - resonance self-shielding by the Bondarenko method; NITAWL-II - SCALE system module for performing resonance shielding and working library production; XSDRNPM - a one-dimensional discrete-ordinates code for transport analysis; XSDOSE - a module for calculating fluxes and dose rates at points outside a shield; KENO IV/S - an improved Monte Carlo criticality program; COUPLE; ORIGEN-S - SCALE system module to calculate fuel depletion, actinide transmutation, fission product buildup and decay, and associated radiation source terms; ICE.

  14. XXV IUPAP Conference on Computational Physics (CCP2013): Preface

    NASA Astrophysics Data System (ADS)

    2014-05-01

    XXV IUPAP Conference on Computational Physics (CCP2013) was held from 20-24 August 2013 at the Russian Academy of Sciences in Moscow, Russia. The annual Conferences on Computational Physics (CCP) present an overview of the most recent developments and opportunities in computational physics across a broad range of topical areas. The CCP series aims to draw computational scientists from around the world and to stimulate interdisciplinary discussion and collaboration by putting together researchers interested in various fields of computational science. It is organized under the auspices of the International Union of Pure and Applied Physics and has been in existence since 1989. The CCP series alternates between Europe, America and Asia-Pacific. The conferences are traditionally supported by European Physical Society and American Physical Society. This year the Conference host was Landau Institute for Theoretical Physics. The Conference contained 142 presentations, and, in particular, 11 plenary talks with comprehensive reviews from airbursts to many-electron systems. We would like to take this opportunity to thank our sponsors: International Union of Pure and Applied Physics (IUPAP), European Physical Society (EPS), Division of Computational Physics of American Physical Society (DCOMP/APS), Russian Foundation for Basic Research, Department of Physical Sciences of Russian Academy of Sciences, RSC Group company. Further conference information and images from the conference are available in the pdf.

  15. Muon simulation codes MUSIC and MUSUN for underground physics

    NASA Astrophysics Data System (ADS)

    Kudryavtsev, V. A.

    2009-03-01

    The paper describes two Monte Carlo codes dedicated to muon simulations: MUSIC (MUon SImulation Code) and MUSUN (MUon Simulations UNderground). MUSIC is a package for muon transport through matter. It is particularly useful for propagating muons through large thicknesses of rock or water, for instance from the surface down to an underground/underwater laboratory. MUSUN is designed to use the results of muon transport through rock/water to generate muons in or around an underground laboratory, taking into account their energy spectrum and angular distribution.
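
    A minimal continuous-slowing-down sketch of muon propagation through rock can use the standard mean energy-loss form <dE/dX> = a + bE, which has the closed-form solution E(X) = (E0 + a/b) exp(-bX) - a/b. The coefficients below are rough textbook values for standard rock; MUSIC itself treats the stochastic radiative processes (bremsstrahlung, pair production, photonuclear interactions) explicitly, so this is only illustrative.

```python
import numpy as np

# Toy continuous-slowing-down model of muon transport through rock.
# a, b are approximate standard-rock values; real transport codes such
# as MUSIC simulate the stochastic losses individually.
a = 2.2e-3   # GeV cm^2/g  (ionization)
b = 4.0e-6   # cm^2/g      (radiative losses)
rho = 2.65   # g/cm^3      (standard-rock density)

def surviving_energy(E0, depth_m):
    X = depth_m * 100.0 * rho                     # slant depth in g/cm^2
    E = (E0 + a / b) * np.exp(-b * X) - a / b
    return np.maximum(E, 0.0)                     # 0 means the muon ranged out

rng = np.random.default_rng(1)
# Sample a ~E^-2.7 surface spectrum above 10 GeV by inverse transform.
E0 = 10.0 * (1.0 - rng.random(100_000)) ** (-1.0 / 1.7)
E = surviving_energy(E0, 1000.0)                  # propagate through 1 km of rock
print(f"survival fraction at 1 km: {np.mean(E > 0):.5f}")
print(f"mean surviving energy:     {E[E > 0].mean():.1f} GeV")
```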

  16. STEALTH: a Lagrange explicit finite difference code for solids, structural, and thermohydraulic analysis. Volume 3: programmer's manual. Computer code manual. [PWR; BWR

    SciTech Connect

    Hofmann, R.

    1981-11-01

    This volume contains a description of a programming and documentation structure for the STEALTH finite difference computer programs based on general principles applicable to most large scientific computer programs. Program modularization (as well as documentation format) is based entirely on the theoretical elements of analysis of a physical system that were presented in Volume 1. FORTRAN programming and naming conventions are also described. Among the programming formats presented is a FORTRAN manual (Appendix FTN) which can be used as the basis for developing portable codes. STEALTH was developed on a CDC 7600. However, it has been designed so that it can be installed on most large scientific computers. Installation documentation exists for some facilities and can be generated easily for others.

  17. JADAMILU: a software code for computing selected eigenvalues of large sparse symmetric matrices

    NASA Astrophysics Data System (ADS)

    Bollhöfer, Matthias; Notay, Yvan

    2007-12-01

    A new software code for computing selected eigenvalues and associated eigenvectors of a real symmetric matrix is described. The eigenvalues are either the smallest or those closest to some specified target, which may be in the interior of the spectrum. The underlying algorithm combines the Jacobi-Davidson method with efficient multilevel incomplete LU (ILU) preconditioning. Key features are modest memory requirements and robust convergence to accurate solutions. Parameters needed for incomplete LU preconditioning are automatically computed and may be updated at run time depending on the convergence pattern. The software is easy to use by non-experts and its top level routines are written in FORTRAN 77. Its potentialities are demonstrated on a few applications taken from computational physics. Program summary: Program title: JADAMILU Catalogue identifier: ADZT_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZT_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 101 359 No. of bytes in distributed program, including test data, etc.: 7 493 144 Distribution format: tar.gz Programming language: Fortran 77 Computer: Intel or AMD with g77 and pgf; Intel EM64T or Itanium with ifort; AMD Opteron with g77, pgf and ifort; Power (IBM) with xlf90. Operating system: Linux, AIX RAM: problem dependent Word size: real: 8; integer: 4 or 8, according to user's choice Classification: 4.8 Nature of problem: Any physical problem requiring the computation of a few eigenvalues of a symmetric matrix. Solution method: Jacobi-Davidson combined with multilevel ILU preconditioning. Additional comments: We supply binaries rather than source code because JADAMILU uses the following external packages: MC64. This software is copyrighted software and not freely available. COPYRIGHT (c) 1999
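
    SciPy provides no Jacobi-Davidson solver, so the sketch below pairs the same two ingredients (a sparse symmetric matrix and an incomplete-LU preconditioner) with LOBPCG instead, computing the smallest eigenvalues of a 1-D Laplacian test matrix. It illustrates the preconditioned sparse-eigensolver workflow, not JADAMILU's own algorithm or API.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Smallest eigenvalues of a sparse symmetric matrix via LOBPCG with an
# ILU preconditioner; a stand-in for JADAMILU's Jacobi-Davidson + ILU.
n = 400
A = sp.csc_matrix(sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)))

ilu = spla.spilu(A, drop_tol=1e-4)            # incomplete LU factorization
M = spla.LinearOperator(A.shape, ilu.solve)   # preconditioner M ~ A^{-1}

rng = np.random.default_rng(0)
X = rng.standard_normal((n, 4))               # 4 starting vectors
vals, vecs = spla.lobpcg(A, X, M=M, largest=False, tol=1e-8, maxiter=200)

# Analytic eigenvalues of the 1-D Laplacian for comparison.
exact = 4 * np.sin(np.arange(1, 5) * np.pi / (2 * (n + 1))) ** 2
print("computed:", np.sort(vals))
print("exact:   ", exact)
```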

  18. GASFLOW: A Computational Fluid Dynamics Code for Gases, Aerosols, and Combustion, Volume 1: Theory and Computational Model

    SciTech Connect

    Nichols, B.D.; Mueller, C.; Necker, G.A.; Travis, J.R.; Spore, J.W.; Lam, K.L.; Royl, P.; Redlinger, R.; Wilson, T.L.

    1998-10-01

    Los Alamos National Laboratory (LANL) and Forschungszentrum Karlsruhe (FzK) are developing GASFLOW, a three-dimensional (3D) fluid dynamics field code as a best-estimate tool to characterize local phenomena within a flow field. Examples of 3D phenomena include circulation patterns; flow stratification; hydrogen distribution mixing and stratification; combustion and flame propagation; effects of noncondensable gas distribution on local condensation and evaporation; and aerosol entrainment, transport, and deposition. An analysis with GASFLOW will result in a prediction of the gas composition and discrete particle distribution in space and time throughout the facility and the resulting pressure and temperature loadings on the walls and internal structures with or without combustion. A major application of GASFLOW is for predicting the transport, mixing, and combustion of hydrogen and other gases in nuclear reactor containments and other facilities. It has been applied to situations involving transporting and distributing combustible gas mixtures. It has been used to study gas dynamic behavior (1) in low-speed, buoyancy-driven flows, as well as sonic flows or diffusion dominated flows; and (2) during chemically reacting flows, including deflagrations. The effects of controlling such mixtures by safety systems can be analyzed. The code version described in this manual is designated GASFLOW 2.1, which combines previous versions of the United States Nuclear Regulatory Commission code HMS (for Hydrogen Mixing Studies) and the Department of Energy and FzK versions of GASFLOW. The code was written in standard Fortran 90. This manual comprises three volumes. Volume I describes the governing physical equations and computational model. Volume II describes how to use the code to set up a model geometry, specify gas species and material properties, define initial and boundary conditions, and specify different outputs, especially graphical displays. Sample problems are included

  19. Two-Phase Flow in Geothermal Wells: Development and Uses of a Good Computer Code

    SciTech Connect

    Ortiz-Ramirez, Jaime

    1983-06-01

    A computer code is developed for vertical two-phase flow in geothermal wellbores. The two-phase correlations used were developed by Orkiszewski (1967) and others and are widely applicable in the oil and gas industry. The computer code is compared to the flowing survey measurements from wells in the East Mesa, Cerro Prieto, and Roosevelt Hot Springs geothermal fields with success. Well data from the Svartsengi field in Iceland are also used. Several applications of the computer code are considered. They range from reservoir analysis to wellbore deposition studies. It is considered that accurate and workable wellbore simulators have an important role to play in geothermal reservoir engineering.

  20. Analysis of airborne antenna systems using geometrical theory of diffraction and moment method computer codes

    NASA Technical Reports Server (NTRS)

    Hartenstein, Richard G., Jr.

    1985-01-01

    Computer codes have been developed to analyze antennas on aircraft and in the presence of scatterers. The purpose of this study is to use these codes to develop accurate computer models of various aircraft and antenna systems. The antenna systems analyzed are a P-3B L-Band antenna, an A-7E UHF relay pod antenna, and a traffic advisory antenna system installed on a Bell Long Ranger helicopter. Computer results are compared to measured ones with good agreement. These codes can be used in the design stage of an antenna system to determine the optimum antenna location and save valuable time and costly flight hours.

  1. Calculations of reactor-accident consequences, Version 2. CRAC2: computer code user's guide

    SciTech Connect

    Ritchie, L.T.; Johnson, J.D.; Blond, R.M.

    1983-02-01

    The CRAC2 computer code is a revision of the Calculation of Reactor Accident Consequences computer code, CRAC, developed for the Reactor Safety Study. The CRAC2 computer code incorporates significant modeling improvements in the areas of weather sequence sampling and emergency response, and refinements to the plume rise, atmospheric dispersion, and wet deposition models. New output capabilities have also been added. This guide is intended to facilitate the informed and intelligent use of CRAC2. It includes descriptions of the input data, the output results, the file structures, control information, and five sample problems.

  2. A fast technique for computing syndromes of BCH and RS codes. [deep space network

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.; Miller, R. L.

    1979-01-01

    A combination of the Chinese Remainder Theorem and Winograd's algorithm is used to compute transforms of odd length over GF(2^m). Such transforms are used to compute the syndromes needed for decoding BCH and RS codes. The present scheme requires substantially fewer multiplications and additions than the conventional method of computing the syndromes directly.
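
    For contrast with the fast transform technique, the conventional direct computation of Reed-Solomon syndromes, S_j = r(α^j), is easy to sketch. The example below works over GF(2^4) with the primitive polynomial x^4 + x + 1 (an assumed, common choice) and a hypothetical received word.

```python
# Direct Reed-Solomon syndrome computation over GF(2^4): the
# "conventional method" that the paper's transform technique speeds up.
PRIM = 0b10011                      # x^4 + x + 1 (assumed primitive polynomial)
exp, log = [0] * 30, [0] * 16
x = 1
for i in range(15):                 # build log/antilog tables for GF(16)
    exp[i] = exp[i + 15] = x        # duplicate for wrap-around in gf_mul
    log[x] = i
    x <<= 1
    if x & 0b10000:
        x ^= PRIM

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else exp[log[a] + log[b]]

def syndromes(r, two_t):
    # Evaluate the received polynomial at alpha^1 .. alpha^{2t} (Horner).
    out = []
    for j in range(1, two_t + 1):
        s = 0
        for coeff in r:             # r[0] is the highest-degree coefficient
            s = gf_mul(s, exp[j]) ^ coeff
        out.append(s)
    return out

received = [1, 0, 3, 7, 0, 2, 9, 4, 0, 1, 5, 0, 8, 2, 6]  # hypothetical 15-symbol word
print(syndromes(received, two_t=4))  # all zero iff no detectable error
```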

  3. Second Generation Integrated Composite Analyzer (ICAN) Computer Code

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.; Ginty, Carol A.; Sanfeliz, Jose G.

    1993-01-01

    This manual updates the original 1986 NASA TP-2515, Integrated Composite Analyzer (ICAN) Users and Programmers Manual. The various enhancements and newly added features are described to enable the user to prepare the appropriate input data to run this updated version of the ICAN code. For reference, the micromechanics equations are provided in an appendix and should be compared to those in the original manual for modifications. A complete output for a sample case is also provided in a separate appendix. The input to the code includes constituent material properties, factors reflecting the fabrication process, and laminate configuration. The code performs micromechanics, macromechanics, and laminate analyses, including the hygrothermal response of polymer-matrix-based fiber composites. The output includes the various ply and composite properties, the composite structural response, and the composite stress analysis results with details on failure. The code is written in FORTRAN 77 and can be used efficiently as a self-contained package (or as a module) in complex structural analysis programs. The input-output format has changed considerably from the original version of ICAN and is described extensively through the use of a sample problem.
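
    ICAN's micromechanics equations are given in the manual's appendix; as a flavor of that homogenization step, here is the textbook rule-of-mixtures version with hypothetical constituent properties (ICAN's actual relations are more refined):

```python
# Textbook rule-of-mixtures micromechanics: constituent properties in,
# ply-level elastic properties out. Constituent values are hypothetical.
Ef, Em = 230.0e9, 3.5e9     # fiber / matrix Young's moduli, Pa
Gf, Gm = 90.0e9, 1.3e9      # fiber / matrix shear moduli, Pa
nu_f, nu_m = 0.20, 0.35     # Poisson's ratios
Vf = 0.60                   # fiber volume fraction
Vm = 1.0 - Vf

E11 = Vf * Ef + Vm * Em             # longitudinal: parallel (Voigt) average
E22 = 1.0 / (Vf / Ef + Vm / Em)     # transverse: series (Reuss) average
G12 = 1.0 / (Vf / Gf + Vm / Gm)     # in-plane shear: series average
nu12 = Vf * nu_f + Vm * nu_m        # major Poisson's ratio

print(f"E11 = {E11/1e9:6.1f} GPa   E22 = {E22/1e9:5.2f} GPa")
print(f"G12 = {G12/1e9:5.2f} GPa   nu12 = {nu12:.3f}")
```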

  4. RTE: A computer code for Rocket Thermal Evaluation

    NASA Technical Reports Server (NTRS)

    Naraghi, Mohammad H. N.

    1995-01-01

    The numerical model for a rocket thermal analysis code (RTE) is discussed. RTE is a comprehensive thermal analysis code for regeneratively cooled rocket engines. The input to the code consists of the composition of the fuel/oxidant mixture and flow rates, chamber pressure, coolant temperature and pressure, dimensions of the engine, materials, and the number of nodes in different parts of the engine. The code allows for temperature variation in axial, radial and circumferential directions. By implementing an iterative scheme, it provides nodal temperature distribution, rates of heat transfer, and hot gas and coolant thermal and transport properties. The fuel/oxidant mixture ratio can be varied along the thrust chamber. This feature allows the user to incorporate a non-equilibrium model or an energy release model for the hot-gas-side. The user has the option of bypassing the hot-gas-side calculations and directly inputting the gas-side fluxes. This feature is used to link RTE to a boundary layer module for the hot-gas-side heat flux calculations.

  5. Computational physics program of the National MFE Computer Center

    SciTech Connect

    Mirin, A.A.

    1985-12-01

    The development of numerical models for plasma phenomena and magnetic confinement devices is discussed. The multidimensional Fokker-Planck and transport codes are applied to toroidal mirror and compact toroid devices. Linear and nonlinear resistive magnetohydrodynamics in two and three dimensions are used in the investigation of various fusion devices. 362 refs., 4 tabs. (WRF)

  6. An evaluation of three two-dimensional computational fluid dynamics codes including low Reynolds numbers and transonic Mach numbers

    NASA Technical Reports Server (NTRS)

    Hicks, Raymond M.; Cliff, Susan E.

    1991-01-01

    Full-potential, Euler, and Navier-Stokes computational fluid dynamics (CFD) codes were evaluated for use in analyzing the flow field about airfoil sections operating at Mach numbers from 0.20 to 0.60 and Reynolds numbers from 500,000 to 2,000,000. The potential code (LBAUER) includes weakly coupled integral boundary layer equations for laminar and turbulent flow with simple transition and separation models. The Navier-Stokes code (ARC2D) uses the thin-layer formulation of the Reynolds-averaged equations with an algebraic turbulence model. The Euler code (ISES) includes strongly coupled integral boundary layer equations and advanced transition and separation calculations with the capability to model laminar separation bubbles and limited zones of turbulent separation. The best experiment/CFD correlation was obtained with the Euler code because its boundary layer equations model the physics of the flow better than the other two codes. An unusual reversal of boundary layer separation with increasing angle of attack, following initial shock formation on the upper surface of the airfoil, was found in the experimental data. This phenomenon was not predicted by the CFD codes evaluated.

  7. Computational Participation: Understanding Coding as an Extension of Literacy Instruction

    ERIC Educational Resources Information Center

    Burke, Quinn; O'Byrne, W. Ian; Kafai, Yasmin B.

    2016-01-01

    Understanding the computational concepts on which countless digital applications run offers learners the opportunity to no longer simply read such media but also become more discerning end users and potentially innovative "writers" of new media themselves. To think computationally--to solve problems, to design systems, and to process and…

  8. Advanced Computing Tools and Models for Accelerator Physics

    SciTech Connect

    Ryne, Robert; Ryne, Robert D.

    2008-06-11

    This paper is based on a transcript of my EPAC'08 presentation on advanced computing tools for accelerator physics. Following an introduction I present several examples, provide a history of the development of beam dynamics capabilities, and conclude with thoughts on the future of large scale computing in accelerator physics.

  9. A new 3-D integral code for computation of accelerator magnets

    SciTech Connect

    Turner, L.R.; Kettunen, L.

    1991-01-01

    For computing accelerator magnets, integral codes have several advantages over finite element codes; far-field boundaries are treated automatically, and computed fields in the bore region satisfy Maxwell's equations exactly. A new integral code employing edge elements rather than nodal elements has overcome the difficulties associated with earlier integral codes. By the use of field integrals (potential differences) as solution variables, the number of unknowns is reduced to one less than the number of nodes. Two examples, a hollow iron sphere and the dipole magnet of the Advanced Photon Source injector synchrotron, show the capability of the code. The CPU time requirements are comparable to those of three-dimensional (3-D) finite-element codes. Experiments show that in practice it can realize much of the potential CPU time saving that parallel processing makes possible. 8 refs., 4 figs., 1 tab.

  10. Development and validation of GWHEAD, a three-dimensional groundwater head computer code

    SciTech Connect

    Beckmeyer, R.R.; Root, R.W.; Routt, K.R.

    1980-03-01

    A computer code has been developed to solve the groundwater flow equation in three dimensions. The code uses finite-difference approximations solved by the strongly implicit solution procedure. Input parameters to the code include hydraulic conductivity, specific storage, porosity, accretion (recharge), and initial hydraulic head. These parameters may be input as spatially varying. The hydraulic conductivity may be input as isotropic or anisotropic. The boundaries either may permit flow across them or may be impermeable. The code has been used to model leaky confined groundwater conditions and spherical flow to a continuous point sink, both of which have exact analytical solutions. The results generated by the computer code compare well with those of the analytical solutions. The code was designed to be used to model groundwater flow beneath fuel reprocessing and waste storage areas at the Savannah River Plant.
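
    As a sketch of the underlying finite-difference head problem (reduced to 2-D, with plain Gauss-Seidel iteration standing in for the strongly implicit procedure, and uniform isotropic conductivity, which drops out of the steady-state equation), consider flow between two prescribed-head boundaries:

```python
import numpy as np

# 2-D steady-state groundwater head by finite differences: Laplace
# equation with fixed heads left/right and no-flow top/bottom.
# Gauss-Seidel is used here for brevity; GWHEAD itself is 3-D and uses
# the strongly implicit procedure.
nx, ny = 30, 20
h = np.zeros((ny, nx))
h[:, 0], h[:, -1] = 10.0, 5.0            # prescribed-head boundaries (m)

for sweep in range(5000):
    h_old = h.copy()
    for i in range(1, ny - 1):
        for j in range(1, nx - 1):       # 5-point stencil, in-place update
            h[i, j] = 0.25 * (h[i-1, j] + h[i+1, j] + h[i, j-1] + h[i, j+1])
    h[0, 1:-1] = h[1, 1:-1]              # no-flow: zero normal gradient
    h[-1, 1:-1] = h[-2, 1:-1]
    if np.max(np.abs(h - h_old)) < 1e-6:
        break

print(f"converged after {sweep} sweeps; mid-domain head = {h[ny//2, nx//2]:.3f} m")
```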

  11. Fallout computer codes. A bibliographic perspective. Technical report, 1 November 1992-1 September 1993

    SciTech Connect

    Rowland, R.

    1994-07-01

    This report is a summary overview of the basic features and differences among the major radioactive fallout models and computer codes that are either in current use or that form the basis for more contemporary codes and other computational tools. The DELFIC, WSEG-10, KDFOC2, SEER3, and DNAF-1 codes and the EM-1 model are addressed. The review is based only on the information that is available in the general body of literature. This report describes the fallout process, gives an overview of each code/model, summarizes how each code/model handles the basic fallout parameters (initial cloud, particle distributions, fall mechanics, total activity and activity to dose rate conversion, and transport), cites the literature references used, and provides an annotated bibliography for other fallout code literature that was not cited. Nuclear weapons, Radiation, Radioactivity, Fallout, DELFIC, WSEG, Nuclear weapon effects, KDFOC, SEER, DNAF, EM-1.

  12. Code System for Reactor Physics and Fuel Cycle Simulation.

    1999-04-21

    Version 00 VSOP94 (Very Superior Old Programs) is a system of codes linked together for the simulation of reactor life histories. It comprises neutron cross section libraries and processing routines, repeated neutron spectrum evaluation, 2-D diffusion calculation based on neutron flux synthesis with depletion and shut-down features, in-core and out-of-pile fuel management, fuel cycle cost analysis, and thermal hydraulics (at present restricted to Pebble Bed HTRs). Various techniques have been employed to accelerate the iterative processes and to optimize the internal data transfer. The code system has been used extensively for comparison studies of reactors, their fuel cycles, and related detailed features. In addition to its use in research and development work for the High Temperature Reactor, the system has been applied successfully to Light Water and Heavy Water Reactors.

  13. Code System for Reactor Physics and Fuel Cycle Simulation.

    SciTech Connect

    TEUCHERT, E.

    1999-04-21

    Version 00 VSOP94 (Very Superior Old Programs) is a system of codes linked together for the simulation of reactor life histories. It comprises neutron cross section libraries and processing routines, repeated neutron spectrum evaluation, 2-D diffusion calculation based on neutron flux synthesis with depletion and shut-down features, in-core and out-of-pile fuel management, fuel cycle cost analysis, and thermal hydraulics (at present restricted to Pebble Bed HTRs). Various techniques have been employed to accelerate the iterative processes and to optimize the internal data transfer. The code system has been used extensively for comparison studies of reactors, their fuel cycles, and related detailed features. In addition to its use in research and development work for the High Temperature Reactor, the system has been applied successfully to Light Water and Heavy Water Reactors.

  14. Atomic Structure Calculations from the Los Alamos Atomic Physics Codes

    DOE Data Explorer

    Cowan, R. D.

    The well known Hartree-Fock method of R.D. Cowan, developed at Los Alamos National Laboratory, is used for the atomic structure calculations. Electron impact excitation cross sections are calculated using either the distorted wave approximation (DWA) or the first order many body theory (FOMBT). Electron impact ionization cross sections can be calculated using the scaled hydrogenic method developed by Sampson and co-workers, the binary encounter method or the distorted wave method. Photoionization cross sections and, where appropriate, autoionizations are also calculated. Original manuals for the atomic structure code, the collisional excitation code, and the ionization code are available from this website. Using the specialized interface, you will be able to define the ionization stage of an element and pick the initial and final configurations. You will be led through a series of web pages ending with a display of results in the form of cross sections, collision strengths or rate coefficients. Results are available in tabular and graphic form.

  15. Validation of the NCC Code for Staged Transverse Injection and Computations for a RBCC Combustor

    NASA Technical Reports Server (NTRS)

    Ajmani, Kumud; Liu, Nan-Suey

    2005-01-01

    The NCC code was validated for a case involving staged transverse injection into Mach 2 flow behind a rearward facing step, with comparisons made against experimental data and solutions from the FPVortex code. The code was then used to perform computations to study fuel-air mixing for the combustor of a candidate rocket based combined cycle engine geometry. Comparisons with a one-dimensional analysis and a three-dimensional code (VULCAN) were performed to assess the qualitative and quantitative performance of the NCC solver.

  16. User's guide for vectorized code EQUIL for calculating equilibrium chemistry on Control Data STAR-100 computer

    NASA Technical Reports Server (NTRS)

    Kumar, A.; Graves, R. A., Jr.; Weilmuenster, K. J.

    1980-01-01

    A vectorized code, EQUIL, was developed for calculating the equilibrium chemistry of a reacting gas mixture on the Control Data STAR-100 computer. The code provides species mole fractions, mass fractions, and thermodynamic and transport properties of the mixture for given temperature, pressure, and elemental mass fractions. The code is set up for a system comprising electrons and the elements H, He, C, O, and N; in all, 24 chemical species are included.
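
    The flavor of such a calculation can be shown on the smallest possible case: a single dissociation equilibrium solved from its equilibrium constant at a given pressure. This is a minimal sketch, not EQUIL itself; the Kp value and pressure are placeholder numbers.

    ```python
    # Toy equilibrium-chemistry calculation: H2 <-> 2 H at fixed T and p.
    # Kp here is a placeholder value, not real thermochemical data.
    from scipy.optimize import brentq

    def equilibrium_mole_fractions(Kp, p_atm):
        """Degree of dissociation alpha for H2 <-> 2H from
        Kp = x_H^2 / x_H2 * p = 4 alpha^2 p / (1 - alpha^2)."""
        f = lambda a: 4.0 * a**2 * p_atm / (1.0 - a**2) - Kp
        alpha = brentq(f, 1e-12, 1.0 - 1e-12)   # root in (0, 1)
        n_tot = 1.0 + alpha                     # moles per mole of initial H2
        return {"H2": (1.0 - alpha) / n_tot, "H": 2.0 * alpha / n_tot}

    print(equilibrium_mole_fractions(Kp=0.05, p_atm=1.0))
    ```

    A production code such as EQUIL instead minimizes the Gibbs free energy over all 24 species simultaneously, subject to elemental mass balance, but the fixed-point structure is the same.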

  17. Computer Code System to Assess Skin Dose from Skin Contamination

    2011-07-10

    Version 00 VARSKIN 4 is designed to operate in both Windows and Macintosh environments and is expected to be significantly easier to learn and use than its predecessors. PC and Mac users will unzip different executable files, but the functionality is identical. Five different predefined source configurations are available in VARSKIN 4 to allow simulations of point, disk, cylinder, sphere, and slab sources.

  18. GATO Code Modification to Compute Plasma Response to External Perturbations

    NASA Astrophysics Data System (ADS)

    Turnbull, A. D.; Chu, M. S.; Ng, E.; Li, X. S.; James, A.

    2006-10-01

    It has become increasingly clear that the plasma response to an external nonaxisymmetric magnetic perturbation cannot be neglected in many situations of interest. This response can be described as a linear combination of the eigenmodes of the ideal MHD operator. The eigenmodes of the system can be obtained numerically with the GATO ideal MHD stability code, which has been modified for this purpose. A key requirement is the removal of inadmissible continuum modes. For Finite Hybrid Element codes such as GATO, a prerequisite for this is their numerical restabilization by the addition of small numerical terms to δW to cancel the analytic numerical destabilization. In addition, the robustness of the code was improved and the solution method sped up by use of the SuperLU package to facilitate calculation of the full set of eigenmodes in a reasonable time. To treat resonant plasma responses, the finite element basis has been extended to include eigenfunctions with finite jumps at rational surfaces. Some preliminary numerical results for DIII-D equilibria will be given.
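
    The modal-superposition formalism described above can be sketched in a few lines. This is a generic illustration with random stand-in matrices, not GATO data: for a discretized self-adjoint operator A and positive-definite "mass" matrix B, the response x to a forcing f is expanded in the eigenmodes of the generalized problem A v = λ B v.

    ```python
    # Response of a self-adjoint system by eigenmode superposition.
    # A and B are synthetic stand-ins for the discretized MHD operator.
    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(0)
    n = 50
    M = rng.standard_normal((n, n))
    A = M @ M.T + np.eye(n)                     # symmetric, well conditioned
    B = np.eye(n) + 0.1 * np.diag(rng.random(n))  # SPD "mass" matrix

    lam, V = eigh(A, B)        # eigenvectors are B-orthonormal: V.T @ B @ V = I
    f = rng.standard_normal(n)  # external perturbation (forcing vector)

    # Solve A x = f by modal superposition: x = sum_i (v_i . f / lambda_i) v_i
    x = V @ ((V.T @ f) / lam)
    print(np.allclose(A @ x, f))  # True: the modal sum reproduces a direct solve
    ```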

  19. The Unified English Braille Code: Examination by Science, Mathematics, and Computer Science Technical Expert Braille Readers

    ERIC Educational Resources Information Center

    Holbrook, M. Cay; MacCuspie, P. Ann

    2010-01-01

    Braille-reading mathematicians, scientists, and computer scientists were asked to examine the usability of the Unified English Braille Code (UEB) for technical materials. They had little knowledge of the code prior to the study. The research included two reading tasks, a short tutorial about UEB, and a focus group. The results indicated that the…

  20. A Coding System for Qualitative Studies of the Information-Seeking Process in Computer Science Research

    ERIC Educational Resources Information Center

    Moral, Cristian; de Antonio, Angelica; Ferre, Xavier; Lara, Graciela

    2015-01-01

    Introduction: In this article we propose a qualitative analysis tool--a coding system--that can support the formalisation of the information-seeking process in a specific field: research in computer science. Method: In order to elaborate the coding system, we have conducted a set of qualitative studies, more specifically a focus group and some…

  1. Comparing Participants' Rating and Compendium Coding to Estimate Physical Activity Intensities

    ERIC Educational Resources Information Center

    Masse, Louise C.; Eason, Karen E.; Tortolero, Susan R.; Kelder, Steven H.

    2005-01-01

    This study assessed agreement between participants' rating (PMET) and compendium coding (CMET) of estimating physical activity intensity in a population of older minority women. As part of the Women on the Move study, 224 women completed a 7-day activity diary and wore an accelerometer for 7 days. All activities recorded were coded using PMET and…

  2. Benchmark testing and independent verification of the VS2DT computer code

    SciTech Connect

    McCord, J.T.; Goodrich, M.T.

    1994-11-01

    The finite difference flow and transport simulator VS2DT was benchmark tested against several other codes which solve the same equations (Richards equation for flow and the Advection-Dispersion equation for transport). The benchmark problems investigated transient two-dimensional flow in a heterogeneous soil profile with a localized water source at the ground surface. The VS2DT code performed as well as or better than all other codes when considering mass balance characteristics and computational speed. It was also rated highly relative to the other codes with regard to ease-of-use. Following the benchmark study, the code was verified against two analytical solutions, one for two-dimensional flow and one for two-dimensional transport. These independent verifications show reasonable agreement with the analytical solutions, and complement the one-dimensional verification problems published in the code's original documentation.
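
    A minimal sketch of the advection-dispersion transport equation named above, hedged as a one-dimensional explicit finite-difference illustration rather than anything from VS2DT itself; the velocity, dispersion coefficient, and grid are placeholder values.

    ```python
    # Explicit FD for 1-D advection-dispersion: dC/dt = D d2C/dx2 - v dC/dx
    import numpy as np

    def advect_disperse(C, v, D, dx, t_end):
        dt = 0.4 * min(dx / v, dx**2 / (2.0 * D))   # advective/diffusive limits
        t = 0.0
        while t < t_end:
            dCdx  = (C[2:] - C[:-2]) / (2.0 * dx)            # central advection
            d2Cdx = (C[2:] - 2.0 * C[1:-1] + C[:-2]) / dx**2  # dispersion
            C[1:-1] += dt * (D * d2Cdx - v * dCdx)  # interior update;
            t += dt                                  # boundary values held fixed
        return C

    C = np.zeros(101); C[0] = 1.0                   # step input at the inlet
    print(advect_disperse(C, v=1e-3, D=1e-5, dx=0.01, t_end=50.0)[:5])
    ```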

  3. A FORTRAN computer code for calculating flows in multiple-blade-element cascades

    NASA Technical Reports Server (NTRS)

    Mcfarland, E. R.

    1985-01-01

    A solution technique has been developed for solving the multiple-blade-element, surface-of-revolution, blade-to-blade flow problem in turbomachinery. The calculation solves approximate flow equations which include the effects of compressibility, radius change, blade-row rotation, and variable stream sheet thickness. An integral equation solution (i.e., panel method) is used to solve the equations. A description of the computer code and computer code input is given in this report.

  4. Computational Physics in Africa, its Scope and Challenges

    NASA Astrophysics Data System (ADS)

    Sheth, Chandra

    2002-08-01

    There is a large pool of untapped scientific talent among the estimated 700 million people who live in Africa. Computational physics can play a key role in tapping this potential, and a number of challenges are associated with doing so. Since CCP2001, there has been a significant effort to address some of the problems faced in Africa. The effort of D. Stauffer through the website www.thp-uni-koeln.de under the umbrella of the EPS, and the Forum on Physics in Africa run through the efforts of researchers of African origin in the USA, are some of the positive developments. The Computational Physics Division of the APS can play a significant role in the successful implementation of computational physics in Africa. This article explains the role of computational physics in Africa, its need, its scope and challenges, and the contributions the APS can make. References: 1. C.V. Sheth, Proceedings of CCP2001. 2. E.F. Redish in: Computers in Physics Instruction, Proceedings, Aug 1-5, 1988, Raleigh, North Carolina, U.S.A., Addison-Wesley, 1990. Keywords: computers in physics education, computational physics. PACS: 07.05.-t, 01.50.Ht, 01.50.-i, 01.40.Gm, 01.40.-d

  5. Verification, validation, and predictive capability in computational engineering and physics.

    SciTech Connect

    Oberkampf, William Louis; Hirsch, Charles; Trucano, Timothy Guy

    2003-02-01

    Developers of computer codes, analysts who use the codes, and decision makers who rely on the results of the analyses face a critical question: How should confidence in modeling and simulation be critically assessed? Verification and validation (V&V) of computational simulations are the primary methods for building and quantifying this confidence. Briefly, verification is the assessment of the accuracy of the solution to a computational model. Validation is the assessment of the accuracy of a computational simulation by comparison with experimental data. In verification, the relationship of the simulation to the real world is not an issue. In validation, the relationship between computation and the real world, i.e., experimental data, is the issue.
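
    The verification half of this distinction lends itself to a concrete exercise. The sketch below is a generic illustration, not from the report: compute a quantity whose exact answer is known at two grid resolutions and confirm that the error shrinks at the scheme's formal order of accuracy.

    ```python
    # Code verification by observed order of accuracy: a central-difference
    # first derivative of sin(x) should converge at second order.
    import numpy as np

    def error(n):
        x = np.linspace(0.0, np.pi, n)
        h = x[1] - x[0]
        d = (np.sin(x[2:]) - np.sin(x[:-2])) / (2.0 * h)  # numerical derivative
        return np.max(np.abs(d - np.cos(x[1:-1])))         # error vs exact

    e_coarse, e_fine = error(101), error(201)              # halve the spacing
    p = np.log(e_coarse / e_fine) / np.log(2.0)
    print(f"observed order of accuracy: {p:.2f}")          # close to 2
    ```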

  6. Computer Integrated Manufacturing: Physical Modelling Systems Design. A Personal View.

    ERIC Educational Resources Information Center

    Baker, Richard

    A computer-integrated manufacturing (CIM) Physical Modeling Systems Design project was undertaken in a time of rapid change in the industrial, business, technological, training, and educational areas in Australia. A specification of a manufacturing physical modeling system was drawn up. Physical modeling provides a flexibility and configurability…

  7. Plutonium explosive dispersal modeling using the MACCS2 computer code

    SciTech Connect

    Steele, C.M.; Wald, T.L.; Chanin, D.I.

    1998-11-01

    The purpose of this paper is to derive the necessary parameters to be used to establish a defensible methodology to perform explosive dispersal modeling of respirable plutonium using Gaussian methods. A particular code, MACCS2, has been chosen for this modeling effort due to its application of sophisticated meteorological statistical sampling in accordance with the philosophy of Nuclear Regulatory Commission (NRC) Regulatory Guide 1.145, "Atmospheric Dispersion Models for Potential Accident Consequence Assessments at Nuclear Power Plants". A second advantage supporting the selection of the MACCS2 code for modeling purposes is that meteorological data sets are readily available at most Department of Energy (DOE) and NRC sites. This particular MACCS2 modeling effort focuses on the calculation of respirable doses and not ground deposition. Once the necessary parameters for the MACCS2 modeling are developed and presented, the model is benchmarked against empirical test data from the Double Tracks shot of project Roller Coaster (Shreve 1965) and applied to a hypothetical plutonium explosive dispersal scenario. Further modeling with the MACCS2 code is performed to determine a defensible method of treating the effects of building structure interaction on the respirable fraction distribution as a function of height. These results are related to the Clean Slate 2 and Clean Slate 3 bunkered shots of Project Roller Coaster. Lastly a method is presented to determine the peak 99.5% sector doses on an irregular site boundary in the manner specified in NRC Regulatory Guide 1.145 (1983). Parametric analyses are performed on the major analytic assumptions in the MACCS2 model to define the potential errors that are possible in using this methodology.
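
    The Gaussian method underlying such dispersal modeling is the textbook plume formula. The sketch below evaluates it directly; this is a generic illustration, not MACCS2's implementation, and the source term, wind speed, and dispersion parameters are placeholders.

    ```python
    # Generic ground-reflected Gaussian plume air concentration.
    import numpy as np

    def plume_concentration(Q, u, y, z, H, sigma_y, sigma_z):
        """Q: release rate, u: wind speed (m/s), H: effective height (m),
        sigma_y/sigma_z: dispersion parameters (m) at the downwind distance."""
        lateral  = np.exp(-y**2 / (2.0 * sigma_y**2))
        vertical = (np.exp(-(z - H)**2 / (2.0 * sigma_z**2)) +
                    np.exp(-(z + H)**2 / (2.0 * sigma_z**2)))  # ground reflection
        return Q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

    # centerline, ground-level concentration at one downwind location
    print(plume_concentration(Q=1.0, u=3.0, y=0.0, z=0.0, H=10.0,
                              sigma_y=30.0, sigma_z=15.0))
    ```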

  8. User's manual for airfoil flow field computer code SRAIR

    NASA Technical Reports Server (NTRS)

    Shamroth, S. J.

    1985-01-01

    A two-dimensional unsteady Navier-Stokes calculation procedure with specific application to the isolated airfoil problem is presented. The procedure solves the full, ensemble-averaged Navier-Stokes equations with turbulence represented by a mixing length model. The equations are solved in a general nonorthogonal coordinate system which is obtained via an external source. Specific Cartesian locations of grid points are required as input for this code. The method of solution is based upon the Briley-McDonald LBI procedure. The manual discusses the analysis, flow of the program, control stream, input, and output.

  9. Atomic physics: A milestone in quantum computing

    NASA Astrophysics Data System (ADS)

    Bartlett, Stephen D.

    2016-08-01

    Quantum computers require many quantum bits to perform complex calculations, but devices with more than a few bits are difficult to program. A device based on five atomic quantum bits shows a way forward. See Letter p.63

  10. TPASS: a gamma-ray spectrum analysis and isotope identification computer code

    SciTech Connect

    Dickens, J.K.

    1981-03-01

    The gamma-ray spectral data-reduction and analysis computer code TPASS is described. This computer code is used to analyze complex Ge(Li) gamma-ray spectra to obtain peak areas corrected for detector efficiencies, from which are determined gamma-ray yields. These yields are compared with an isotope gamma-ray data file to determine the contributions to the observed spectrum from decay of specific radionuclides. A complete FORTRAN listing of the code and a complex test case are given.
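
    The final conversion step such a spectrum-analysis code performs, from an efficiency-corrected photopeak area to a nuclide activity, is a one-line formula. The numbers below are illustrative, not from TPASS.

    ```python
    # Activity from a single gamma line: A = N / (eps * I_gamma * t_live)
    def activity_bq(net_counts, efficiency, gamma_yield, live_time_s):
        return net_counts / (efficiency * gamma_yield * live_time_s)

    # e.g. a Cs-137 661.7 keV peak: 5000 net counts, 1.2% detection
    # efficiency, 85.1% emission probability, 10-minute live time
    print(f"{activity_bq(5000, 0.012, 0.851, 600.0):.1f} Bq")
    ```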

  11. Development of a space radiation Monte Carlo computer simulation based on the FLUKA and ROOT codes.

    PubMed

    Pinsky, L S; Wilson, T L; Ferrari, A; Sala, P; Carminati, F; Brun, R

    2001-01-01

    This NASA funded project is proceeding to develop a Monte Carlo-based computer simulation of the radiation environment in space. With actual funding only initially in place at the end of May 2000, the study is still in the early stage of development. The general tasks have been identified and personnel have been selected. The code to be assembled will be based upon two major existing software packages. The radiation transport simulation will be accomplished by updating the FLUKA Monte Carlo program, and the user interface will employ the ROOT software being developed at CERN. The end-product will be a Monte Carlo-based code which will complement the existing analytic codes such as BRYNTRN/HZETRN presently used by NASA to evaluate the effects of radiation shielding in space. The planned code will possess the ability to evaluate the radiation environment for spacecraft and habitats in Earth orbit, in interplanetary space, on the lunar surface, or on a planetary surface such as Mars. Furthermore, it will be useful in the design and analysis of experiments such as ACCESS (Advanced Cosmic-ray Composition Experiment for Space Station), which is an Office of Space Science payload currently under evaluation for deployment on the International Space Station (ISS). FLUKA will be significantly improved and tailored for use in simulating space radiation in four ways. First, the additional physics not presently within the code that is necessary to simulate the problems of interest, namely the heavy ion inelastic processes, will be incorporated. Second, the internal geometry package will be replaced with one that will substantially increase the calculation speed as well as simplify the data input task. Third, default incident flux packages that include all of the different space radiation sources of interest will be included. Finally, the user interface and internal data structure will be melded together with ROOT, the object-oriented data analysis infrastructure system. Beyond

  13. An examination of Sandia's phenomenological computer codes and the use of intelligent searching in risk assessments

    SciTech Connect

    Benjamin, A.S.

    1996-07-01

    Because many of the phenomenologically based codes used to support risk assessments require long execution times, it is important to have a rationally based means for optimizing the choice of parameter values that are input to the code calculations. For this reason, we have developed a method for intelligently searching the space of parameter values to deduce, with as few computations as possible, the values that are most likely to lead to high risk. We have applied the method to a problem involving electrical initiation of an explosive due to the response of the system to fires. We have shown that our method can locate potential risk vulnerabilities with far fewer time-consuming physical response computations than would be necessary using standard sampling approaches.
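
    A toy version of this idea, hedged as an illustration of the general strategy rather than Sandia's actual method: spend a limited budget of expensive response evaluations by first sampling globally, then concentrating new samples near the highest-risk point found so far. The response function here is a cheap stand-in for a long-running code.

    ```python
    # Budgeted search for high-risk parameter values.
    import numpy as np

    def expensive_risk(x):                        # placeholder physics model
        return np.exp(-np.sum((x - 0.7)**2) / 0.01)

    rng = np.random.default_rng(1)
    dim, budget = 2, 60
    X = rng.random((budget // 2, dim))            # phase 1: global scatter
    y = np.array([expensive_risk(x) for x in X])

    for _ in range(budget - len(X)):              # phase 2: refine near the max
        center = X[np.argmax(y)]
        cand = np.clip(center + 0.05 * rng.standard_normal(dim), 0.0, 1.0)
        X = np.vstack([X, cand])
        y = np.append(y, expensive_risk(cand))

    print("highest risk found:", y.max(), "at", X[np.argmax(y)])
    ```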

  14. What Computational Approaches Should be Taught for Physics?

    NASA Astrophysics Data System (ADS)

    Landau, Rubin

    2005-03-01

    The standard Computational Physics courses are designed for upper-level physics majors who already have some computational skills. We believe that it is important for first-year physics students to learn modern computing techniques that will be useful throughout their college careers, even before they have learned the math and science required for Computational Physics. To teach such Introductory Scientific Computing courses requires that some choices be made as to what subjects and computer languages will be taught. Our survey of colleagues active in Computational Physics and Physics Education shows no predominant choice, with strong positions taken for the compiled languages Java, C, C++ and Fortran90, as well as for problem-solving environments like Maple and Mathematica. Over the last seven years we have developed an Introductory course and have written up those courses as textbooks for others to use. We will describe our model of using both a problem-solving environment and a compiled language. The developed materials are available in both Maple and Mathematica, and Java and Fortran90 (Princeton University Press, to be published; www.physics.orst.edu/~rubin/IntroBook/).

  15. TVENT1: a computer code for analyzing tornado-induced flow in ventilation systems

    SciTech Connect

    Andrae, R.W.; Tang, P.K.; Gregory, W.S.

    1983-07-01

    TVENT1 is a new version of the TVENT computer code, which was designed to predict the flows and pressures in a ventilation system subjected to a tornado. TVENT1 is essentially the same code but has added features for turning blowers off and on, changing blower speeds, and changing the resistance of dampers and filters. These features make it possible to depict a sequence of events during a single run. Other features also have been added to make the code more versatile. Example problems are included to demonstrate the code's applications.

  16. abcpmc: Approximate Bayesian Computation for Population Monte-Carlo code

    NASA Astrophysics Data System (ADS)

    Akeret, Joel

    2015-04-01

    abcpmc is a Python Approximate Bayesian Computation (ABC) Population Monte Carlo (PMC) implementation based on Sequential Monte Carlo (SMC) with Particle Filtering techniques. It is extendable with k-nearest neighbour (KNN) or optimal local covariance matrix (OLCM) perturbation kernels and has built-in support for massively parallelized sampling on a cluster using MPI.
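
    The basic idea behind such samplers can be shown in its simplest rejection form. This is a generic ABC illustration, not the abcpmc API; the forward model, summary statistic, and tolerance are all assumptions of the sketch.

    ```python
    # ABC rejection: keep parameters whose simulated data land near the
    # observation. PMC variants shrink eps over generations and reweight.
    import numpy as np

    rng = np.random.default_rng(2)
    data = rng.normal(3.0, 1.0, size=100)         # "observed" data, true mu = 3

    def simulate(mu):                             # forward model
        return rng.normal(mu, 1.0, size=100)

    def distance(a, b):                           # summary-statistic distance
        return abs(a.mean() - b.mean())

    eps = 0.05
    accepted = [mu for mu in rng.uniform(0.0, 6.0, size=20000)
                if distance(simulate(mu), data) < eps]
    print(f"posterior mean ~ {np.mean(accepted):.2f} "
          f"from {len(accepted)} accepted samples")
    ```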

  17. A Line Source Shielding Code for Personal Computers.

    1990-12-22

    Version 00 LINEDOSE computes the gamma-ray dose from a pipe source modeled as a line. The pipe is assumed to be iron and has a concrete shield of arbitrary thickness. The calculation is made for eight source energies between 0.1 and 3.5 MeV.
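
    A point-kernel sketch of such a line-source calculation: integrate unshielded point kernels along the pipe axis, attenuating each ray through the concrete slab. This is a generic illustration, not LINEDOSE; the source strength and attenuation coefficient are placeholders, and buildup factors are omitted.

    ```python
    # Dose-rate kernel integrated along a line source behind a slab shield.
    import numpy as np

    def line_dose_rate(S_L, length, d, mu_conc, t_conc, n=2000):
        """S_L: source strength per unit length, d: perpendicular distance (m),
        mu_conc: attenuation coefficient (1/m), t_conc: slab thickness (m)."""
        l = np.linspace(-length / 2.0, length / 2.0, n)  # positions on the line
        r = np.sqrt(d**2 + l**2)                         # source-to-point range
        slant = t_conc * r / d                           # slab path along each ray
        kernel = S_L * np.exp(-mu_conc * slant) / (4.0 * np.pi * r**2)
        return float(np.sum(kernel) * (l[1] - l[0]))     # simple quadrature

    print(line_dose_rate(S_L=1.0, length=4.0, d=1.0, mu_conc=15.0, t_conc=0.10))
    ```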

  18. Classical Mechanics with Computational Physics in the Undergraduate Curriculum

    NASA Astrophysics Data System (ADS)

    Hasbun, J. E.

    2006-11-01

    Efforts to incorporate computational physics in the undergraduate curriculum have made use of Matlab, IDL, Maple, Mathematica, Fortran, and C^1 as well as Java.^2 The benefits of similar undertakings in our undergraduate curriculum are that students learn ways to go beyond what they learn in the classroom and use computational techniques to explore more realistic physics applications. Students become better prepared to perform research that will be useful throughout their scientific careers.^3 Undergraduate physics in general can benefit by building on such efforts. Recently, I have developed a draft of a textbook for the junior level mechanics physics course with computer applications.^4 The text uses the traditional analytical approach, yet it incorporates computational physics to build on it. The text does not intend to teach students how to program; instead, it makes use of students' abilities to use programming to go beyond the analytical approach and complement their understanding. An in-house computational environment, however, is strongly encouraged. Selected examples of representative lecture problems will be discussed. ^1 "Computation and Problem Solving in Undergraduate Physics," David M. Cook, Lawrence University (2003). ^2 "Simulations in Physics: Applications to Physical Systems," H. Gould, J. Tobochnik, and W. Christian. ^3 R. Landau, APS Bull. Vol. 50, 1069 (2005). ^4 J. E. Hasbun, APS Bull. Vol. 51, 452 (2006)
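
    An example of the kind of lecture problem meant here (my illustration, not one of the author's): projectile motion with quadratic air drag, which has no closed-form solution and so invites a numerical treatment alongside the analytic drag-free case. The drag coefficient and launch velocity are placeholder values.

    ```python
    # RK4 integration of a projectile with quadratic drag.
    import numpy as np

    g, k = 9.81, 0.02                # gravity (m/s^2), drag per unit mass (1/m)

    def deriv(s):                    # state s = [x, y, vx, vy]
        v = np.hypot(s[2], s[3])
        return np.array([s[2], s[3], -k * v * s[2], -g - k * v * s[3]])

    def rk4_step(s, dt):
        k1 = deriv(s); k2 = deriv(s + 0.5 * dt * k1)
        k3 = deriv(s + 0.5 * dt * k2); k4 = deriv(s + dt * k3)
        return s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

    s, dt = np.array([0.0, 0.0, 30.0, 30.0]), 0.01
    while s[1] >= 0.0:               # integrate until the projectile lands
        s = rk4_step(s, dt)
    print(f"range with drag: {s[0]:.1f} m (vs {2*30*30/g:.1f} m without)")
    ```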

  19. HEPMath 1.4: A mathematica package for semi-automatic computations in high energy physics

    NASA Astrophysics Data System (ADS)

    Wiebusch, Martin

    2015-10-01

    This article introduces the Mathematica package HEPMath which provides a number of utilities and algorithms for High Energy Physics computations in Mathematica. Its functionality is similar to packages like FormCalc or FeynCalc, but it takes a more complete and extensible approach to implementing common High Energy Physics notations in the Mathematica language, in particular those related to tensors and index contractions. It also provides a more flexible method for the generation of numerical code which is based on new features for C code generation in Mathematica. In particular it can automatically generate Python extension modules which make the compiled functions callable from Python, thus eliminating the need to write any code in a low-level language like C or Fortran. It also contains seamless interfaces to LHAPDF, FeynArts, and LoopTools.

  20. Proceedings of the conference on computer codes and the linear accelerator community

    SciTech Connect

    Cooper, R.K.

    1990-07-01

    The conference whose proceedings you are reading was envisioned as the second in a series, the first having been held in San Diego in January 1988. The intended participants were those people who are actively involved in writing and applying computer codes for the solution of problems related to the design and construction of linear accelerators. The first conference reviewed many of the codes both extant and under development. This second conference provided an opportunity to update the status of those codes, and to provide a forum in which emerging new 3D codes could be described and discussed. The afternoon poster session on the second day of the conference provided an opportunity for extended discussion. All in all, this conference was felt to be quite a useful interchange of ideas and developments in the field of 3D calculations, parallel computation, higher-order optics calculations, and code documentation and maintenance for the linear accelerator community. A third conference is planned.

  1. Comparison of various NLTE codes in computing the charge-state populations of an argon plasma

    SciTech Connect

    Stone, S.R.; Weisheit, J.C.

    1984-11-01

    A comparison among nine computer codes shows surprisingly large differences where it had been believed that the theory was well understood. Each code treats an argon plasma, optically thin and with no external photon flux; temperatures vary around 1 keV and ion densities vary from 6 x 10^17 cm^-3 to 6 x 10^21 cm^-3. At these conditions most ions have three or fewer bound electrons. The calculated populations of 0-, 1-, 2-, and 3-electron ions differ from code to code by typical factors of 2, in some cases by factors greater than 300. These differences depend as sensitively on how many Rydberg states a code allows as they do on variations among computed collision rates. 29 refs., 23 figs.
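
    A two-level illustration of why such populations are model-sensitive: the excited-state fraction depends on the competition between collisional rates, which scale with electron density, and radiative decay. The rate coefficients below are placeholders, not argon data.

    ```python
    # n_upper/n_lower from a two-level collisional-radiative balance,
    # scanned over the density range quoted in the abstract.
    A_ul = 1.0e10                    # spontaneous decay rate (1/s)
    C_lu, C_ul = 1.0e-10, 3.0e-10    # collisional rate coefficients (cm^3/s)

    for n_e in [6e17, 6e19, 6e21]:   # electron densities (cm^-3)
        ratio = n_e * C_lu / (A_ul + n_e * C_ul)   # steady-state n_u / n_l
        print(f"n_e = {n_e:.0e} cm^-3: n_u/n_l = {ratio:.3e}")
    ```

    At low density the ratio grows linearly with n_e (coronal regime); at high density it saturates toward a collision-dominated value, which is where disagreements about included Rydberg states and collision rates become visible.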

  2. Integration of the DRAGON5/DONJON5 codes in the SALOME platform for performing multi-physics calculations in nuclear engineering

    NASA Astrophysics Data System (ADS)

    Hébert, Alain

    2014-06-01

    We present the computer science techniques involved in the integration of the codes DRAGON5 and DONJON5 in the SALOME platform. This integration brings new capabilities for designing multi-physics computational schemes, with the possibility to couple our reactor physics codes with thermal-hydraulics or thermo-mechanics codes from other organizations. A demonstration is presented in which two code components are coupled using the YACS module of SALOME, based on the CORBA protocol. The first component is a full-core 3D steady-state neutronic calculation in a PWR performed using DONJON5. The second component implements a set of 1D thermal-hydraulics calculations, each performed over a single assembly.
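
    The coupling pattern described here can be schematized as a Picard (fixed-point) iteration exchanging fields between the two components. The functions below are mock stand-ins, not the actual DRAGON5/DONJON5/YACS interfaces, and the feedback coefficients are invented for illustration.

    ```python
    # Fixed-point coupling of a mock neutronics and a mock TH component.
    import numpy as np

    def neutronics(T_fuel):              # stand-in for the DONJON5 component
        shape = 1.0 + 0.1 * np.cos(np.linspace(0, np.pi, 10))
        return 1.0e3 * shape * (1.0 - 1.0e-4 * (T_fuel - 900.0))  # Doppler term

    def thermal_hydraulics(power):       # stand-in for the 1-D TH component
        return 550.0 + 0.4 * power       # fuel temperature from local power

    T = np.full(10, 900.0)               # initial fuel-temperature guess (K)
    for it in range(100):
        P = neutronics(T)                # power field to the TH side
        T_new = thermal_hydraulics(P)    # temperature field back to neutronics
        if np.max(np.abs(T_new - T)) < 1e-6:
            break
        T = T_new
    print(f"converged in {it} iterations; peak power {P.max():.1f}")
    ```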

  3. TRIO-EF: a general thermal hydraulics computer code applied to the AVLIS process

    NASA Astrophysics Data System (ADS)

    Magnaud, Jean P.; Claveau, Michel; Coulon, Nadia; Yala, Philippe; Guilbaud, Daniel; Mejane, Albert

    1993-05-01

    TRIO-EF is a general-purpose fluid mechanics 3D finite element code. The system capabilities cover areas such as steady-state or transient, laminar or turbulent, isothermal or temperature-dependent fluid flows; it is applicable to the study of coupled thermo-fluid problems involving heat conduction and possibly radiative heat transfer. TRIO-EF is developed by the Heat Transfer and Structural Mechanics Department of the French Atomic Energy Commission CEA/DMT. It is widely used for applications in reactor design, safety analysis, and final nuclear waste disposal. More recently, it has been used to study the thermal behavior of the AVLIS process separation module. In this process, a linear electron beam impinges on the free surface of a uranium ingot, generating a two-dimensional curtain emission of vapor. The metal is contained in a water-cooled crucible. The energy transferred to the metal causes its partial melting, forming a pool where strong convective motion increases heat transfer towards the crucible. In the upper part of the separation module, the internal structures are devoted to two main functions: vapor containment and reflux, and irradiation and physical separation. They are subjected to very high temperature levels, and heat transfer occurs mainly by radiation. Moreover, special attention has to be paid to electron backscattering. These two major points have been simulated numerically with TRIO-EF, and in this paper we present and comment on the results of such a computation for each of them. After a brief overview of the computer code, two examples of the TRIO-EF capabilities are given: a crucible thermal hydraulics model, and a thermal analysis of the internal structures.

  4. Supercomputing with TOUGH2 family codes for coupled multi-physics simulations of geologic carbon sequestration

    NASA Astrophysics Data System (ADS)

    Yamamoto, H.; Nakajima, K.; Zhang, K.; Nanai, S.

    2015-12-01

    Powerful numerical codes that are capable of modeling complex coupled processes of physics and chemistry have been developed for predicting the fate of CO2 in reservoirs as well as its potential impacts on groundwater and subsurface environments. However, they are often computationally demanding for solving highly non-linear models in sufficient spatial and temporal resolution. Geological heterogeneity and uncertainties further increase the challenges in modeling work. Two-phase flow simulations in heterogeneous media usually require much longer computational times than those in homogeneous media. Uncertainties in reservoir properties may necessitate stochastic simulations with multiple realizations. Recently, massively parallel supercomputers with thousands of processors have become available to scientific and engineering communities. Such supercomputers may attract attention from geoscientists and reservoir engineers for solving large, non-linear models at higher resolution within a reasonable time. However, to make them a useful tool, it is essential to tackle several practical obstacles to utilizing large numbers of processors effectively in general-purpose reservoir simulators. We have implemented massively parallel versions of two TOUGH2 family codes (the multi-phase flow simulator TOUGH2 and the chemically reactive transport simulator TOUGHREACT) on two different types (vector- and scalar-type) of supercomputers with a thousand to tens of thousands of processors. After completing implementation and extensive tune-up on the supercomputers, the computational performance was measured for three simulations with multi-million grid models, including a simulation of the dissolution-diffusion-convection process that requires high spatial and temporal resolution to capture the growth of small convective fingers of CO2-dissolved water into larger ones at reservoir scale. The performance measurement confirmed that both simulators exhibit excellent

  5. Computer code for the calculation of the temperature distribution of cooled turbine blades

    NASA Astrophysics Data System (ADS)

    Tietz, Thomas A.; Koschel, Wolfgang W.

    A generalized computer code for the calculation of the temperature distribution in a cooled turbine blade is presented. Using an iterative procedure, this program especially allows the coupling of the aerothermodynamic values of the internal flow with the corresponding temperature distribution of the blade material. The temperature distribution of the turbine blade is calculated using a fully three-dimensional finite element computer code, so that the radial heat flux is taken into account. This code was extended to 4-node tetrahedral elements enabling an adaptive grid generation. To facilitate the mesh generation of the usually complex blade geometries, a computer program was developed, which performs the grid generation of blades having basically arbitrary shape on the basis of two-dimensional cuts. The performance of the code is demonstrated with reference to a typical cooling configuration of a modern turbine blade.

  6. Visualization of elastic wavefields computed with a finite difference code

    SciTech Connect

    Larsen, S.; Harris, D.

    1994-11-15

    The authors have developed a finite difference elastic propagation model to simulate seismic wave propagation through geophysically complex regions. To facilitate debugging and to assist seismologists in interpreting the seismograms generated by the code, they have developed an X Windows interface that permits viewing of successive temporal snapshots of the (2D) wavefield as they are calculated. The authors present a brief video displaying the generation of seismic waves by an explosive source on a continent, which propagate to the edge of the continent then convert to two types of acoustic waves. This sample calculation was part of an effort to study the potential of offshore hydroacoustic systems to monitor seismic events occurring onshore.
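
    A bare-bones scalar analog of the computation being visualized, offered as a sketch only: a 2-D acoustic finite-difference scheme producing temporal snapshots of a wavefield from a point source. An elastic code evolves vector fields with two wave speeds but follows the same update-and-snapshot pattern. Grid size and velocity are placeholder values.

    ```python
    # Second-order FD for the 2-D scalar wave equation, with snapshots.
    import numpy as np

    n, c, dx = 200, 2000.0, 10.0                 # grid, velocity (m/s), spacing
    dt = 0.4 * dx / c                            # within the 2-D CFL limit
    u_prev, u = np.zeros((n, n)), np.zeros((n, n))
    u[n // 2, n // 2] = 1.0                      # impulsive point source

    for step in range(1, 301):
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +      # 5-point Laplacian,
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / dx**2  # periodic
        u_prev, u = u, 2.0 * u - u_prev + (c * dt)**2 * lap
        if step % 100 == 0:                      # successive snapshots, as in
            print(step, float(np.abs(u).max()))  # the X Windows viewer
    ```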

  7. Spent fuel management fee methodology and computer code user's manual.

    SciTech Connect

    Engel, R.L.; White, M.K.

    1982-01-01

    The methodology and computer model described here were developed to analyze the cash flows for the federal government taking title to and managing spent nuclear fuel. The methodology has been used by the US Department of Energy (DOE) to estimate the spent fuel disposal fee that will provide full cost recovery. Although the methodology was designed to analyze interim storage followed by spent fuel disposal, it could be used to calculate a fee for reprocessing spent fuel and disposing of the waste. The methodology consists of two phases. The first phase estimates government expenditures for spent fuel management. The second phase determines the fees that will result in revenues such that the government attains full cost recovery assuming various revenue collection philosophies. These two phases are discussed in detail in subsequent sections of this report. Each of the two phases constitute a computer module, called SPADE (SPent fuel Analysis and Disposal Economics) and FEAN (FEe ANalysis), respectively.
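
    The revenue-requirement idea in miniature, hedged as an invented illustration of the fee logic rather than the SPADE/FEAN method: choose a per-kilogram fee so that the present value of fee revenues equals the present value of government expenditures. All cash flows and the discount rate below are placeholder numbers.

    ```python
    # Full-cost-recovery fee from discounted cash flows.
    costs = {2025: 4.0e8, 2030: 1.2e9, 2040: 6.0e9}     # $ outlays by year
    fuel  = {2025: 2.0e6, 2026: 2.1e6, 2027: 2.2e6}     # kg of fuel accepted
    r, base = 0.03, 2025                                # real discount rate

    def pv(flows):                                      # present value at base
        return sum(v / (1.0 + r)**(y - base) for y, v in flows.items())

    fee = pv(costs) / pv(fuel)                          # $/kg for PV(revenue)
    print(f"full-cost-recovery fee: ${fee:.2f}/kg")     # = PV(cost)
    ```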

  8. High-Performance Java Codes for Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Riley, Christopher; Chatterjee, Siddhartha; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The computational science community is reluctant to write large-scale, computationally intensive applications in Java due to concerns over Java's poor performance, despite the claimed software engineering advantages of its object-oriented features. Naive Java implementations of numerical algorithms can perform poorly compared to corresponding Fortran or C implementations. To achieve high performance, Java applications must be designed with good performance as a primary goal. This paper presents the object-oriented design and implementation of two real-world applications from the field of Computational Fluid Dynamics (CFD): a finite-volume fluid flow solver (LAURA, from NASA Langley Research Center), and an unstructured mesh adaptation algorithm (2D_TAG, from NASA Ames Research Center). This work builds on our previous experience with the design of high-performance numerical libraries in Java. We examine the performance of the applications using the currently available Java infrastructure and show that the Java version of the flow solver LAURA performs almost within a factor of 2 of the original procedural version. Our Java version of the mesh adaptation algorithm 2D_TAG performs within a factor of 1.5 of its original procedural version on certain platforms. Our results demonstrate that object-oriented software design principles are not necessarily inimical to high performance.

  9. Benchmark testing and independent verification of the VS2DT computer code

    NASA Astrophysics Data System (ADS)

    McCord, James T.; Goodrich, Michael T.

    1994-11-01

    The finite difference flow and transport simulator VS2DT was benchmark tested against several other codes which solve the same equations (Richards equation for flow and the Advection-Dispersion equation for transport). The benchmark problems investigated transient two-dimensional flow in a heterogeneous soil profile with a localized water source at the ground surface. The VS2DT code performed as well as or better than all other codes when considering mass balance characteristics and computational speed. It was also rated highly relative to the other codes with regard to ease-of-use. Following the benchmark study, the code was verified against two analytical solutions, one for two-dimensional flow and one for two-dimensional transport. These independent verifications show reasonable agreement with the analytical solutions, and complement the one-dimensional verification problems published in the code's original documentation.

  10. Computers in the General Physics Laboratory.

    ERIC Educational Resources Information Center

    Preston, Daryl W.; Good, R. H.

    1996-01-01

    Provides ideas and outcomes for nine computer laboratory experiments using a commercial eight-bit analog to digital (ADC) interface. Experiments cover statistics; rotation; harmonic motion; voltage, current, and resistance; ADC conversions; temperature measurement; single slit diffraction; and radioactive decay. Includes necessary schematics. (MVL)

  11. RELATIONSHIPS BETWEEN GIS ENVIRONMENTAL FEATURES AND ADOLESCENT MALE PHYSICAL ACTIVITY: GIS CODING DIFFERENCES

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Background: It is not clear if relationships between GIS obtained environmental features and physical activity differ according to the method used to code GIS data. Methods: Physical activity levels of 210 Boy Scouts were measured by accelerometer. Numbers of parks, trails, gymnasia, bus stops, groc...

  12. High pressure humidification columns: Design equations, algorithm, and computer code

    SciTech Connect

    Enick, R.M.; Klara, S.M.; Marano, J.J.

    1994-07-01

    This report describes the detailed development of a computer model to simulate the humidification of an air stream in contact with a water stream in a countercurrent, packed tower, humidification column. The computer model has been developed as a user model for the Advanced System for Process Engineering (ASPEN) simulator. This was done to utilize the powerful ASPEN flash algorithms as well as to provide ease of use when using ASPEN to model systems containing humidification columns. The model can easily be modified for stand-alone use by incorporating any standard algorithm for performing flash calculations. The model was primarily developed to analyze Humid Air Turbine (HAT) power cycles; however, it can be used for any application that involves a humidifier or saturator. The solution is based on a multiple stage model of a packed column which incorporates mass and energy balances, mass transfer and heat transfer rate expressions, the Lewis relation, and a thermodynamic equilibrium model for the air-water system. The inlet air properties, inlet water properties, and a measure of the mass transfer and heat transfer which occur in the column are the only required input parameters to the model. Several example problems are provided to illustrate the algorithm's ability to generate the temperature of the water, flow rate of the water, temperature of the air, flow rate of the air, and humidity of the air as a function of height in the column. The algorithm can be used to model any high-pressure air humidification column operating at pressures up to 50 atm. This discussion includes descriptions of various humidification processes, detailed derivations of the relevant expressions, and methods of incorporating these equations into a computer model for a humidification column.

  13. Toward Reproducible Computational Research: An Empirical Analysis of Data and Code Policy Adoption by Journals

    PubMed Central

    Stodden, Victoria; Guo, Peixuan; Ma, Zhaokun

    2013-01-01

    Journal policy on research data and code availability is an important part of the ongoing shift toward publishing reproducible computational science. This article extends the literature by studying journal data sharing policies by year (for both 2011 and 2012) for a referent set of 170 journals. We make a further contribution by evaluating code sharing policies, supplemental materials policies, and open access status for these 170 journals for each of 2011 and 2012. We build a predictive model of open data and code policy adoption as a function of impact factor and publisher and find higher impact journals more likely to have open data and code policies and scientific societies more likely to have open data and code policies than commercial publishers. We also find open data policies tend to lead open code policies, and we find no relationship between open data and code policies and either supplemental material policies or open access journal status. Of the journals in this study, 38% had a data policy, 22% had a code policy, and 66% had a supplemental materials policy as of June 2012. This reflects a striking one year increase of 16% in the number of data policies, a 30% increase in code policies, and a 7% increase in the number of supplemental materials policies. We introduce a new dataset to the community that categorizes data and code sharing, supplemental materials, and open access policies in 2011 and 2012 for these 170 journals. PMID:23805293

  14. GIANT: a computer code for General Interactive ANalysis of Trajectories

    SciTech Connect

    Jaeger, J.; Lee, M.; Servranckx, R.; Shoaee, H.

    1985-04-01

    Many model-driven diagnostic and correction procedures have been developed at SLAC for the on-line computer controlled operation of SPEAR, PEP, the LINAC, and the Electron Damping Ring. In order to facilitate future applications and enhancements, these procedures are being collected into a single program, GIANT. The program allows interactive diagnosis as well as performance optimization of any beam transport line or circular machine. The test systems for GIANT are those of the SLC project. The organization of this program and some of the recent applications of the procedures will be described in this paper.

  15. High-Precision Computation and Mathematical Physics

    SciTech Connect

    Bailey, David H.; Borwein, Jonathan M.

    2008-11-03

    At the present time, IEEE 64-bit floating-point arithmetic is sufficiently accurate for most scientific applications. However, for a rapidly growing body of important scientific computing applications, a higher level of numeric precision is required. Such calculations are facilitated by high-precision software packages that include high-level language translation modules to minimize the conversion effort. This paper presents a survey of recent applications of these techniques and provides some analysis of their numerical requirements. These applications include supernova simulations, climate modeling, planetary orbit calculations, Coulomb n-body atomic systems, scattering amplitudes of quarks, gluons and bosons, nonlinear oscillator theory, Ising theory, quantum field theory and experimental mathematics. We conclude that high-precision arithmetic facilities are now an indispensable component of a modern large-scale scientific computing environment.
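
    An example of the failure mode that motivates such packages: the expression (1 - cos x)/x^2 tends to 1/2 as x tends to 0, but catastrophic cancellation destroys it in 64-bit arithmetic, while 50-digit arithmetic (here via the mpmath package, one such high-precision library) recovers it.

    ```python
    # Double precision vs 50-digit arithmetic on (1 - cos x) / x^2.
    import math
    from mpmath import mp, mpf, cos as mpcos

    x = 1.0e-9
    print((1.0 - math.cos(x)) / x**2)   # 0.0: all significant digits lost

    mp.dps = 50                         # 50 significant decimal digits
    xm = mpf("1e-9")
    print((1 - mpcos(xm)) / xm**2)      # ~0.4999999999999999999999999999...
    ```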

  16. Computational mathematics and physics of fusion reactors

    PubMed Central

    Garabedian, Paul R.

    2003-01-01

    Theory has contributed significantly to recent advances in magnetic fusion research. New configurations have been found for a stellarator experiment by computational methods. Solutions of a free-boundary problem are applied to study the performance of the plasma and look for islands in the magnetic surfaces. Mathematical analysis and numerical calculations have been used to study equilibrium, stability, and transport of optimized fusion reactors. PMID:14614129

  17. Python: a language for computational physics

    NASA Astrophysics Data System (ADS)

    Borcherds, P. H.

    2007-07-01

    Python is a relatively new computing language, created by Guido van Rossum [A.S. Tanenbaum, R. van Renesse, H. van Staveren, G.J. Sharp, S.J. Mullender, A.J. Jansen, G. van Rossum, Experiences with the Amoeba distributed operating system, Communications of the ACM 33 (1990) 46-63; also online at http://www.cs.vu.nl/pub/amoeba/].

  18. Development of a model and computer code to describe solar grade silicon production processes

    NASA Technical Reports Server (NTRS)

    Gould, R. K.; Srivastava, R.

    1979-01-01

    Two computer codes were developed for describing flow reactors in which high-purity, solar-grade silicon is produced via reduction of gaseous silicon halides. The first is the CHEMPART code, an axisymmetric, marching code which treats two-phase flows with models describing detailed gas-phase chemical kinetics, particle formation, and particle growth. It can be used to describe flow reactors in which reactants mix, react, and form a particulate phase. Detailed radial gas-phase composition, temperature, velocity, and particle size distribution profiles are computed. Also, deposition of heat, momentum, and mass (either particulate or vapor) on reactor walls is described. The second code is a modified version of the GENMIX boundary layer code which is used to compute rates of heat, momentum, and mass transfer to the reactor walls. This code lacks the detailed chemical kinetics and particle handling features of the CHEMPART code but has the virtue of running much more rapidly than CHEMPART, while treating the phenomena occurring in the boundary layer in more detail.

  19. Imaging flow cytometer using computation and spatially coded filter

    NASA Astrophysics Data System (ADS)

    Han, Yuanyuan; Lo, Yu-Hwa

    2016-03-01

    Flow cytometry analyzes multiple physical characteristics of a large population of single cells as the cells flow in a fluid stream through an excitation light beam. Flow cytometers measure fluorescence and light scattering, from which information about the biological and physical properties of individual cells is obtained. Although flow cytometers have massive statistical power due to their single-cell resolution and high throughput, they produce no information about cell morphology or the spatial resolution offered by microscopy, a much-wanted feature missing from almost all flow cytometers. In this paper, we invent a method of spatial-temporal transformation to provide flow cytometers with cell imaging capabilities. The method uses mathematical algorithms and a specially designed spatial filter as the only hardware needed to give flow cytometers imaging capabilities. Instead of CCDs or any megapixel cameras found in imaging systems, we obtain high-quality images of fast-moving cells in a flow cytometer using photomultiplier tube (PMT) detectors, thus obtaining high throughput in a manner fully compatible with existing cytometers. In fact, our approach can be applied to retrofit traditional flow cytometers into imaging flow cytometers at minimum cost. To prove the concept, we demonstrate cell imaging for cells travelling at a velocity of 0.2 m/s in a microfluidic channel, corresponding to a throughput of approximately 1,000 cells per second.

  20. Sodium fast reactor gaps analysis of computer codes and models for accident analysis and reactor safety.

    SciTech Connect

    Carbajo, Juan; Jeong, Hae-Yong; Wigeland, Roald; Corradini, Michael; Schmidt, Rodney Cannon; Thomas, Justin; Wei, Tom; Sofu, Tanju; Ludewig, Hans; Tobita, Yoshiharu; Ohshima, Hiroyuki; Serre, Frederic

    2011-06-01

    This report summarizes the results of an expert-opinion elicitation activity designed to qualitatively assess the status and capabilities of currently available computer codes and models for accident analysis and reactor safety calculations of advanced sodium fast reactors, and identify important gaps. The twelve-member panel consisted of representatives from five U.S. National Laboratories (SNL, ANL, INL, ORNL, and BNL), the University of Wisconsin, the KAERI, the JAEA, and the CEA. The major portion of this elicitation activity occurred during a two-day meeting held on Aug. 10-11, 2010 at Argonne National Laboratory. There were two primary objectives of this work: (1) Identify computer codes currently available for SFR accident analysis and reactor safety calculations; and (2) Assess the status and capability of current US computer codes to adequately model the required accident scenarios and associated phenomena, and identify important gaps. During the review, panel members identified over 60 computer codes that are currently available in the international community to perform different aspects of SFR safety analysis for various event scenarios and accident categories. A brief description of each of these codes together with references (when available) is provided. An adaptation of the Predictive Capability Maturity Model (PCMM) for computational modeling and simulation is described for use in this work. The panel's assessment of the available US codes is presented in the form of nine tables, organized into groups of three for each of three risk categories considered: anticipated operational occurrences (AOOs), design basis accidents (DBA), and beyond design basis accidents (BDBA). A set of summary conclusions are drawn from the results obtained. At the highest level, the panel judged that current US code capabilities are adequate for licensing given reasonable margins, but expressed concern that US code development activities had stagnated and that the

  1. Developing a coding scheme for detecting usability and fun problems in computer games for young children.

    PubMed

    Barendregt, W; Bekker, M M

    2006-08-01

    This article describes the development and assessment of a coding scheme for finding both usability and fun problems through observations of young children playing computer games during user tests. The proposed coding scheme is based on an existing list of breakdown indication types of the detailed video analysis method (DEVAN). This method was developed to detect usability problems in task-based products for adults. However, the new coding scheme for children's computer games takes into account that in games, fun, in addition to usability, is an important factor and that children behave differently from adults. Therefore, the proposed coding scheme uses 8 of the 14 original breakdown indications and has 7 new indications. The article first discusses the development of the new coding scheme. Subsequently, the article describes the reliability assessment of the coding scheme. The any-two agreement measure of 38.5% shows that thresholds for when certain user behavior is worth coding will be different for different evaluators. However, the any-two agreement of .92 for a fixed list of observation points shows that the distinction between the available codes is clear to most evaluators. Finally, a pilot study shows that training can increase any-two agreement considerably by decreasing the number of unique observations, in comparison with the number of agreed upon observations.

  2. The 3D MHD code GOEMHD3 for astrophysical plasmas with large Reynolds numbers. Code description, verification, and computational performance

    NASA Astrophysics Data System (ADS)

    Skála, J.; Baruffa, F.; Büchner, J.; Rampp, M.

    2015-08-01

    Context. The numerical simulation of turbulence and flows in almost ideal astrophysical plasmas with large Reynolds numbers motivates the implementation of magnetohydrodynamical (MHD) computer codes with low resistivity. They need to be computationally efficient and scale well with large numbers of CPU cores, allow obtaining a high grid resolution over large simulation domains, and be easily and modularly extensible, for instance, to new initial and boundary conditions. Aims: Our aims are the implementation, optimization, and verification of a computationally efficient, highly scalable, and easily extensible low-dissipative MHD simulation code for the numerical investigation of the dynamics of astrophysical plasmas with large Reynolds numbers in three dimensions (3D). Methods: The new GOEMHD3 code discretizes the ideal part of the MHD equations using a fast and efficient leap-frog scheme that is second-order accurate in space and time and whose initial and boundary conditions can easily be modified. For the investigation of diffusive and dissipative processes, the corresponding terms are discretized by a DuFort-Frankel scheme. To always fulfill the Courant-Friedrichs-Lewy stability criterion, the time step of the code is adapted dynamically. Numerically induced local oscillations are suppressed by explicit, externally controlled diffusion terms. Non-equidistant grids are implemented, which enhance the spatial resolution where needed. GOEMHD3 is parallelized based on the hybrid MPI-OpenMP programming paradigm, adopting a standard two-dimensional domain-decomposition approach. Results: The ideal part of the equation solver is verified by performing numerical tests of the evolution of the well-understood Kelvin-Helmholtz instability and of Orszag-Tang vortices. The accuracy of solving the (resistive) induction equation is tested by simulating the decay of a cylindrical current column. Furthermore, we show that the computational performance of the code scales very
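
    A one-dimensional toy of the time discretization named above, offered as a sketch only (GOEMHD3 solves the full MHD system): the leap-frog scheme, second-order in space and time, applied to linear advection with a CFL-limited time step. The grid and advection speed are placeholder values.

    ```python
    # Leap-frog scheme for u_t + a u_x = 0 on a periodic grid.
    import numpy as np

    n, a = 200, 1.0
    dx = 1.0 / n
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    u_old = np.exp(-200.0 * (x - 0.3)**2)            # initial Gaussian pulse
    dt = 0.5 * dx / abs(a)                           # CFL-limited time step

    # one starter step (forward Euler, central space) to seed the two levels
    u = u_old - a * dt * (np.roll(u_old, -1) - np.roll(u_old, 1)) / (2 * dx)

    for _ in range(200):                             # leap over two time levels:
        u_new = u_old - a * dt * (np.roll(u, -1) - np.roll(u, 1)) / dx
        u_old, u = u, u_new                          # u^{n+1} from u^{n-1}, u^n
    print(f"pulse peak after advection: {u.max():.3f}")
    ```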

  3. Assessment of the 3-D Thermal-Hydraulic Nuclear Core Computer Code FLICA-IV on Rod Bundle Experiments

    SciTech Connect

    Bergeron, Andre; Caruge, Daniel; Clement, Philippe

    2001-04-15

    The physical validation of the thermal-hydraulic FLICA-IV nuclear core computer code against hydraulic and two-phase flow experiments, in the case of a pressurized water reactor, is presented. This three-dimensional two-phase flow code is devoted to steady-state and transient thermal-hydraulic analysis of nuclear reactor cores. The four balance equations used by the code and the closure relationships are first presented. Then, the facilities employed for the code validation are described. They use either laser velocimetry techniques, in the case of hydraulic validation, to measure accurately the flow field around rods, or isokinetic sampling, in the case of two-phase flow validation, to measure the qualities and axial mass velocities at the outlet of a rod bundle. Comparisons between experimental and computed values are then presented for the axial flow blockage simulation, inlet assembly flow mixing, axial flow spacer grid disturbance, and an outlet rod bundle map of qualities and axial mass velocities.

  4. Multiplexing Genetic and Nucleosome Positioning Codes: A Computational Approach

    PubMed Central

    Eslami-Mossallam, Behrouz; Schram, Raoul D.; Tompitak, Marco; van Noort, John; Schiessel, Helmut

    2016-01-01

    Eukaryotic DNA is strongly bent inside fundamental packaging units: the nucleosomes. It is known that their positions are strongly influenced by the mechanical properties of the underlying DNA sequence. Here we discuss the possibility that these mechanical properties and the concomitant nucleosome positions are not just a side product of the given DNA sequence, e.g. that of the genes, but that a mechanical evolution of DNA molecules might have taken place. We first demonstrate the possibility of multiplexing classical and mechanical genetic information using a computational nucleosome model. In a second step we give evidence for genome-wide multiplexing in Saccharomyces cerevisiae and Schizosaccharomyces pombe. This suggests that the exact positions of nucleosomes play crucial roles in chromatin function. PMID:27272176

  5. Symbolic coding for noninvertible systems: uniform approximation and numerical computation

    NASA Astrophysics Data System (ADS)

    Beyn, Wolf-Jürgen; Hüls, Thorsten; Schenke, Andre

    2016-11-01

    It is well known that the homoclinic theorem, which conjugates a map near a transversal homoclinic orbit to a Bernoulli subshift, extends from invertible to specific noninvertible dynamical systems. In this paper, we provide a unifying approach that combines such a result with a fully discrete analog of the conjugacy for finite but sufficiently long orbit segments. The underlying idea is to solve appropriate discrete boundary value problems in both cases, and to use the theory of exponential dichotomies to control the errors. This leads to a numerical approach that allows us to compute the conjugacy to any prescribed accuracy. The method is demonstrated for several examples where invertibility of the map fails in different ways.

  6. Universal holonomic quantum computing with cat-codes

    NASA Astrophysics Data System (ADS)

    Albert, Victor V.; Shu, Chi; Krastanov, Stefan; Shen, Chao; Liu, Ren-Bao; Yang, Zhen-Biao; Schoelkopf, Robert J.; Mirrahimi, Mazyar; Devoret, Michel H.; Jiang, Liang

    2016-05-01

    Universal computation of a quantum system consisting of superpositions of well-separated coherent states of multiple harmonic oscillators can be achieved by three families of adiabatic holonomic gates. The first gate consists of moving a coherent state around a closed path in phase space, resulting in a relative Berry phase between that state and the other states. The second gate consists of "colliding" two coherent states of the same oscillator, resulting in coherent population transfer between them. The third gate is an effective controlled-phase gate on coherent states of two different oscillators. Such gates should be realizable via reservoir engineering of systems which support tunable nonlinearities, such as trapped ions and circuit QED.

  7. A model-based view of physics for computational activities in the introductory physics course

    NASA Astrophysics Data System (ADS)

    Buffler, Andy; Pillay, Seshini; Lubben, Fred; Fearick, Roger

    2008-04-01

    A model-based view of physics provides a framework within which computational activities may be structured so as to present to students an authentic representation of physics as a discipline. The use of the framework in teaching computation at the introductory physics level is illustrated by a case study based on the simultaneous translation and rotation of a disk-shaped spaceship. Student responses to an interactive worksheet are used to support guidelines for the design of computational tasks to enhance the understanding of physical systems through numerical problem solving.
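
    A minimal sketch of the kind of computational task described, the simultaneous translation and rotation of a disk integrated with the Euler method, is given below in Python; all parameter values, and the assumption of a lab-fixed thrust direction and lever arm, are illustrative rather than taken from the article.

      import numpy as np

      # Toy version of the worksheet task: a disk-shaped spaceship with an
      # off-center thruster translates and rotates at the same time.
      m, R = 1000.0, 2.0               # mass (kg) and disk radius (m)
      I = 0.5 * m * R**2               # moment of inertia of a uniform disk
      F = np.array([500.0, 0.0])       # constant thrust (N), lab frame
      d = 1.0                          # perpendicular lever arm (m)
      tau = d * F[0]                   # constant torque about the center

      x, v = np.zeros(2), np.zeros(2)  # center-of-mass position, velocity
      theta, omega = 0.0, 0.0          # orientation, angular velocity
      dt = 0.01
      for _ in range(1000):            # 10 s of motion, Euler updates
          v += (F / m) * dt
          x += v * dt
          omega += (tau / I) * dt
          theta += omega * dt
      print(x, theta)                  # translation and spin evolve independently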

  8. Benchmark Problems Used to Assess Computational Aeroacoustics Codes

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.; Envia, Edmane

    2005-01-01

    The field of computational aeroacoustics (CAA) encompasses numerical techniques for calculating all aspects of sound generation and propagation in air directly from fundamental governing equations. Aeroacoustic problems typically involve flow-generated noise, with and without the presence of a solid surface, and the propagation of the sound to a receiver far away from the noise source. It is a challenge to obtain accurate numerical solutions to these problems. The NASA Glenn Research Center has been at the forefront in developing and promoting the development of CAA techniques and methodologies for computing the noise generated by aircraft propulsion systems. To assess the technological advancement of CAA, Glenn, in cooperation with the Ohio Aerospace Institute and the AeroAcoustics Research Consortium, organized and hosted the Fourth CAA Workshop on Benchmark Problems. Participants from industry and academia from both the United States and abroad joined to present and discuss solutions to benchmark problems. These demonstrated technical progress ranging from the basic challenges to accurate CAA calculations to the solution of CAA problems of increasing complexity and difficulty. The results are documented in the proceedings of the workshop. Problems were solved in five categories. In three of the five categories, exact solutions were available for comparison with CAA results. A fourth category of problems representing sound generation from either a single airfoil or a blade row interacting with a gust (i.e., problems relevant to fan noise) had approximate analytical or completely numerical solutions. The fifth category of problems involved sound generation in a viscous flow. In this case, the CAA results were compared with experimental data.

  9. STEALTH - a Lagrange explicit finite-difference code for solid, structural, and thermohydraulic analysis. Volume 8A: STEALTH/WHAMSE - a 2-D fluid-structure interaction code. Computer code manual

    SciTech Connect

    Gross, M.B.

    1984-10-01

    STEALTH is a family of computer codes that can be used to calculate a variety of physical processes in which the dynamic behavior of a continuum is involved. The version of STEALTH described in this volume is designed for calculations of fluid-structure interaction. This version of the program consists of a hydrodynamic version of STEALTH which has been coupled to a finite-element code, WHAMSE. STEALTH computes the transient response of the fluid continuum, while WHAMSE computes the transient response of shell and beam structures under external fluid loadings. The coupling between STEALTH and WHAMSE is performed during each cycle or step of a calculation. Separate calculations of fluid response and structural response are avoided, thereby giving a more accurate model of the dynamic coupling between fluid and structure. This volume provides the theoretical background, the finite-difference equations, the finite-element equations, a discussion of several sample problems, a listing of the input decks for the sample problems, a programmer's manual and a description of the input records for the STEALTH/WHAMSE computer program.

  10. Computers in Undergraduate Education: Mathematics, Physics, Statistics, and Chemistry.

    ERIC Educational Resources Information Center

    Lockard, J. David

    This is the report of a conference which was initiated by the National Science Foundation's Office of Computing Activities and which explored and summarized current thinking about the role of the computer for undergraduate curricula in the physical and mathematical sciences. The conference focused on deciding which goals of the existing…

  11. Explore Effective Use of Computer Simulations for Physics Education

    ERIC Educational Resources Information Center

    Lee, Yu-Fen; Guo, Yuying

    2008-01-01

    The dual purpose of this article is to provide a synthesis of the findings related to the use of computer simulations in physics education and to present implications for teachers and researchers in science education. We try to establish a conceptual framework for the utilization of computer simulations as a tool for learning and instruction in…

  12. Computational mechanics and physics at NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    South, Jerry C., Jr.

    1987-01-01

    An overview is given of computational mechanics and physics at NASA Langley Research Center. Computational analysis is a major component and tool in many of Langley's diverse research disciplines, as well as in the interdisciplinary research. Examples are given for algorithm development and advanced applications in aerodynamics, transition to turbulence and turbulence simulation, hypersonics, structures, and interdisciplinary optimization.

  13. PREFACE: IUPAP C20 Conference on Computational Physics (CCP 2011)

    NASA Astrophysics Data System (ADS)

    Troparevsky, Claudia; Stocks, George Malcolm

    2012-12-01

    Increasingly, computational physics stands alongside experiment and theory as an integral part of the modern approach to solving the great scientific challenges of the day on all scales - from cosmology and astrophysics, through climate science, to materials physics, and the fundamental structure of matter. Computational physics touches aspects of science and technology with direct relevance to our everyday lives, such as communication technologies and securing a clean and efficient energy future. This volume of Journal of Physics: Conference Series contains the proceedings of the scientific contributions presented at the 23rd Conference on Computational Physics held in Gatlinburg, Tennessee, USA, in November 2011. The annual Conferences on Computational Physics (CCP) are dedicated to presenting an overview of the most recent developments and opportunities in computational physics across a broad range of topical areas and from around the world. The CCP series has been in existence for more than 20 years, serving as a lively forum for computational physicists. The topics covered by this conference were: Materials/Condensed Matter Theory and Nanoscience, Strongly Correlated Systems and Quantum Phase Transitions, Quantum Chemistry and Atomic Physics, Quantum Chromodynamics, Astrophysics, Plasma Physics, Nuclear and High Energy Physics, Complex Systems: Chaos and Statistical Physics, Macroscopic Transport and Mesoscopic Methods, Biological Physics and Soft Materials, Supercomputing and Computational Physics Teaching, Computational Physics and Sustainable Energy. We would like to take this opportunity to thank our sponsors: International Union of Pure and Applied Physics (IUPAP), IUPAP Commission on Computational Physics (C20), American Physical Society Division of Computational Physics (APS-DCOMP), Oak Ridge National Laboratory (ORNL), Center for Defect Physics (CDP), the University of Tennessee (UT)/ORNL Joint Institute for Computational Sciences (JICS) and Cray, Inc

  14. Physical Optics Based Computational Imaging Systems

    NASA Astrophysics Data System (ADS)

    Olivas, Stephen Joseph

    There is an ongoing demand on behalf of the consumer, medical and military industries to make lighter weight, higher resolution, wider field-of-view and extended depth-of-focus cameras. This leads to design trade-offs between performance and cost, be it size, weight, power, or expense. This has brought attention to finding new ways to extend the design space while adhering to cost constraints. Extending the functionality of an imager in order to achieve extraordinary performance is a common theme of computational imaging, a field of study which uses additional hardware along with tailored algorithms to formulate and solve inverse problems in imaging. This dissertation details four specific systems within this emerging field: a Fiber Bundle Relayed Imaging System, an Extended Depth-of-Focus Imaging System, a Platform Motion Blur Image Restoration System, and a Compressive Imaging System. The Fiber Bundle Relayed Imaging System is part of a larger project, where the work presented in this thesis was to use image processing techniques to mitigate problems inherent to fiber bundle image relay and then, form high-resolution wide field-of-view panoramas captured from multiple sensors within a custom state-of-the-art imager. The Extended Depth-of-Focus System goals were to characterize the angular and depth dependence of the PSF of a focal swept imager in order to increase the acceptably focused imaged scene depth. The goal of the Platform Motion Blur Image Restoration System was to build a system that can capture a high signal-to-noise ratio (SNR), long-exposure image which is inherently blurred while at the same time capturing motion data using additional optical sensors in order to deblur the degraded images. Lastly, the objective of the Compressive Imager was to design and build a system functionally similar to the Single Pixel Camera and use it to test new sampling methods for image generation and to characterize it against a traditional camera. These computational

  15. Computer Self-Efficacy, Computer Anxiety, Performance and Personal Outcomes of Turkish Physical Education Teachers

    ERIC Educational Resources Information Center

    Aktag, Isil

    2015-01-01

    The purpose of this study is to determine the computer self-efficacy, performance outcome, personal outcome, and affect and anxiety level of physical education teachers. Influence of teaching experience, computer usage and participation of seminars or in-service programs on computer self-efficacy level were determined. The subjects of this study…

  16. Fault-tolerant quantum computation with asymmetric Bacon-Shor codes

    NASA Astrophysics Data System (ADS)

    Brooks, Peter; Preskill, John

    2013-03-01

    We develop a scheme for fault-tolerant quantum computation based on asymmetric Bacon-Shor codes, which works effectively against highly biased noise dominated by dephasing. We find the optimal Bacon-Shor block size as a function of the noise strength and the noise bias, and estimate the logical error rate and overhead cost achieved by this optimal code. Our fault-tolerant gadgets, based on gate teleportation, are well suited for hardware platforms with geometrically local gates in two dimensions.

  17. The development of an intelligent interface to a computational fluid dynamics flow-solver code

    NASA Technical Reports Server (NTRS)

    Williams, Anthony D.

    1988-01-01

    Researchers at NASA Lewis are currently developing an 'intelligent' interface to aid in the development and use of large, computational fluid dynamics flow-solver codes for studying the internal fluid behavior of aerospace propulsion systems. This paper discusses the requirements, design, and implementation of an intelligent interface to Proteus, a general purpose, 3-D, Navier-Stokes flow solver. The interface is called PROTAIS to denote its introduction of artificial intelligence (AI) concepts to the Proteus code.

  18. HIFI: a computer code for projectile fragmentation accompanied by incomplete fusion

    SciTech Connect

    Wu, J.R.

    1980-07-01

    A brief summary of a model proposed to describe projectile fragmentation accompanied by incomplete fusion and the instructions for the use of the computer code HIFI are given. The code HIFI calculates single inclusive spectra, coincident spectra and excitation functions resulting from particle-induced reactions. It is a multipurpose program which can calculate any type of coincident spectra as long as the reaction is assumed to take place in two steps.

  19. Computer code for controller partitioning with IFPC application: A user's manual

    NASA Technical Reports Server (NTRS)

    Schmidt, Phillip H.; Yarkhan, Asim

    1994-01-01

    A user's manual for the computer code for partitioning a centralized controller into decentralized subcontrollers with applicability to Integrated Flight/Propulsion Control (IFPC) is presented. Partitioning of a centralized controller into two subcontrollers is described and the algorithm on which the code is based is discussed. The algorithm uses parameter optimization of a cost function, which is described. The major data structures and functions are described, and specific usage instructions are given. The user is led through an example of an IFPC application.
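
    The manual documents the full dynamic partitioning algorithm; the toy Python sketch below shows only the underlying idea of fitting a decentralized structure by parameter optimization of a matching cost, with an invented static gain matrix standing in for the centralized controller.

      import numpy as np
      from scipy.optimize import minimize

      # Toy analogue of controller partitioning by parameter optimization:
      # approximate a centralized static gain K by a decentralized
      # (diagonal) gain that minimizes a quadratic matching cost.
      K = np.array([[2.0, 0.3],
                    [0.4, 1.5]])       # hypothetical centralized gain

      def cost(p):
          K_dec = np.diag(p)           # decentralized structure: no cross terms
          return np.linalg.norm(K - K_dec, 'fro')**2

      res = minimize(cost, x0=np.ones(2))
      print(res.x)                     # ~[2.0, 1.5]: the diagonal entries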

  20. Benchmarking the SPHINX and CTH shock physics codes for three problems in ballistics

    SciTech Connect

    Wilson, L.T.; Hertel, E.; Schwalbe, L.; Wingate, C.

    1998-02-01

    The CTH Eulerian hydrocode, and the SPHINX smooth particle hydrodynamics (SPH) code were used to model a shock tube, two long rod penetrations into semi-infinite steel targets, and a long rod penetration into a spaced plate array. The results were then compared to experimental data. Both SPHINX and CTH modeled the one-dimensional shock tube problem well. Both codes did a reasonable job in modeling the outcome of the axisymmetric rod impact problem. Neither code correctly reproduced the depth of penetration in both experiments. In the 3-D problem, both codes reasonably replicated the penetration of the rod through the first plate. After this, however, the predictions of both codes began to diverge from the results seen in the experiment. In terms of computer resources, the run times are problem dependent, and are discussed in the text.

  1. A Compact Code for Simulations of Quantum Error Correction in Classical Computers

    SciTech Connect

    Nyman, Peter

    2009-03-10

    This study considers implementations of error correction in a simulation language on a classical computer. Error correction will be necessary in quantum computing and quantum information. We give examples of implementations of some error correction codes, made in a general quantum simulation language on a classical computer, namely Mathematica. The intention of this research is to develop a programming language that is able to simulate all quantum algorithms and error corrections in the same framework. The program code implemented on a classical computer provides a connection between the mathematical formulation of quantum mechanics and computational methods. This gives us a clear, uncomplicated language for the implementation of algorithms.
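
    A minimal classical simulation in the same spirit, written here in Python with NumPy rather than in the author's Mathematica framework, encodes one logical qubit in the 3-qubit bit-flip code, injects an error, and corrects it from the parity syndrome.

      import numpy as np

      I2 = np.eye(2)
      X = np.array([[0.0, 1.0], [1.0, 0.0]])

      def op(gate, qubit):
          """Embed a single-qubit gate on `qubit` (0..2) into 3-qubit space."""
          mats = [I2, I2, I2]
          mats[qubit] = gate
          return np.kron(np.kron(mats[0], mats[1]), mats[2])

      a, b = 0.6, 0.8                      # encode a|000> + b|111>
      state = np.zeros(8)
      state[0b000], state[0b111] = a, b

      state = op(X, 1) @ state             # bit-flip error on qubit 1

      # An X error permutes basis states, and both components of the
      # corrupted state share the same syndrome, so the parities
      # (q0 xor q1, q1 xor q2) can be read off any nonzero basis label.
      idx = np.flatnonzero(state)[0]
      q0, q1, q2 = (idx >> 2) & 1, (idx >> 1) & 1, idx & 1
      syndrome = (q0 ^ q1, q1 ^ q2)
      flipped = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[syndrome]
      if flipped is not None:
          state = op(X, flipped) @ state   # apply the correction
      print(state[0b000], state[0b111])    # amplitudes a, b recovered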

  2. Verification of a Viscous Computational Aeroacoustics Code using External Verification Analysis

    NASA Technical Reports Server (NTRS)

    Ingraham, Daniel; Hixon, Ray

    2015-01-01

    The External Verification Analysis approach to code verification is extended to solve the three-dimensional Navier-Stokes equations with constant properties, and is used to verify a high-order computational aeroacoustics (CAA) code. After a brief review of the relevant literature, the details of the EVA approach are presented and compared to the similar Method of Manufactured Solutions (MMS). Pseudocode representations of EVA's algorithms are included, along with the recurrence relations needed to construct the EVA solution. The code verification results show that EVA was able to convincingly verify a high-order, viscous CAA code without the addition of MMS-style source terms, or any other modifications to the code.
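
    For contrast with EVA, the Python sketch below illustrates the Method of Manufactured Solutions named above on a 1-D Poisson problem (an illustrative problem and solver, not the paper's): choose an exact solution, derive the source term it implies, and confirm the solver's observed order of accuracy.

      import numpy as np

      u_exact = np.sin                     # manufactured solution u(x) = sin x
      f = lambda x: -np.sin(x)             # source term so that u'' = f exactly

      def solve(n):
          """Second-order FD solve of u'' = f on [0, pi]; return max error."""
          x = np.linspace(0.0, np.pi, n)
          h = x[1] - x[0]
          A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
               + np.diag(np.ones(n - 1), -1)) / h**2
          rhs = f(x)
          A[0, :], A[-1, :] = 0.0, 0.0     # Dirichlet boundary rows
          A[0, 0] = A[-1, -1] = 1.0
          rhs[0], rhs[-1] = u_exact(0.0), u_exact(np.pi)
          return np.max(np.abs(np.linalg.solve(A, rhs) - u_exact(x)))

      e1, e2 = solve(41), solve(81)        # halve the mesh spacing
      print(np.log2(e1 / e2))              # observed order of accuracy, ~2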

  3. A Multiple Sphere T-Matrix Fortran Code for Use on Parallel Computer Clusters

    NASA Technical Reports Server (NTRS)

    Mackowski, D. W.; Mishchenko, M. I.

    2011-01-01

    A general-purpose Fortran-90 code for calculation of the electromagnetic scattering and absorption properties of multiple sphere clusters is described. The code can calculate the efficiency factors and scattering matrix elements of the cluster for either fixed or random orientation with respect to the incident beam and for plane wave or localized-approximation Gaussian incident fields. In addition, the code can calculate maps of the electric field both interior and exterior to the spheres. The code is written with message passing interface instructions to enable use on distributed memory compute clusters, and for such platforms the code can make feasible the calculation of absorption, scattering, and general EM characteristics of systems containing several thousand spheres.

  4. User's manual for PELE3D: a computer code for three-dimensional incompressible fluid dynamics

    SciTech Connect

    McMaster, W H

    1982-05-07

    The PELE3D code is a three-dimensional semi-implicit Eulerian hydrodynamics computer program for the solution of incompressible fluid flow coupled to a structure. The fluid and coupling algorithms have been adapted from the previously developed two-dimensional code PELE-IC. The PELE3D code is written in both plane and cylindrical coordinates. The coupling algorithm is general enough to handle a variety of structural shapes. The free surface algorithm is able to accommodate a top surface and several independent bubbles. The code is in a developmental status, since not all of the intended options have been fully implemented and tested. Development of this code ended in 1980 upon termination of the contract with the Nuclear Regulatory Commission.

  5. Verification of computational aerodynamic predictions for complex hypersonic vehicles using the INCA{trademark} code

    SciTech Connect

    Payne, J.L.; Walker, M.A.

    1995-01-01

    This paper describes a process of combining two state-of-the-art CFD tools, SPRINT and INCA, in a manner which extends the utility of both codes beyond what is possible from either code alone. The speed and efficiency of the PNS code, SPRINT, have been combined with the capability of a Navier-Stokes code to model fully elliptic, viscous separated regions on high performance, high speed flight systems. The coupled SPRINT/INCA capability is applicable for design and evaluation of high speed flight vehicles in the supersonic to hypersonic speed regimes. This paper describes the codes involved, the interface process and a few selected test cases which illustrate the SPRINT/INCA coupling process. Results have shown that the combination of SPRINT and INCA produces correct results and can lead to improved computational analyses for complex, three-dimensional problems.

  6. FURN3D: A computer code for radiative heat transfer in pulverized coal furnaces

    SciTech Connect

    Ahluwalia, R.K.; Im, K.H.

    1992-08-01

    A computer code FURN3D has been developed for assessing the impact of burning different coals on the heat absorption pattern in pulverized coal furnaces. The code is unique in its ability to conduct detailed spectral calculations of radiation transport in furnaces, fully accounting for the size distributions of char, soot and ash particles, ash content, and ash composition. The code uses a hybrid technique of solving the three-dimensional radiation transport equation for absorbing, emitting and anisotropically scattering media. The technique achieves an optimal mix of computational speed and accuracy by combining the discrete ordinate method (S[sub 4]), modified differential approximation (MDA) and P[sub 1] approximation in different ranges of optical thickness. The code uses spectroscopic data for estimating the absorption coefficients of the participating gases CO[sub 2], H[sub 2]O and CO. It invokes Mie theory for determining the extinction and scattering coefficients of combustion particulates. The optical constants of char, soot and ash are obtained from dispersion relations derived from reflectivity, transmissivity and extinction measurements. A control-volume formulation is adopted for determining the temperature field inside the furnace. A simple char burnout model is employed for estimating heat release and the evolution of the particle size distribution. The code is written in Fortran 77, has modular form, and is machine-independent. The computer memory required by the code depends upon the number of grid points specified and whether the transport calculations are performed on a spectral or gray basis.

  7. FLAME: A finite element computer code for contaminant transport in variably-saturated media

    SciTech Connect

    Baca, R.G.; Magnuson, S.O.

    1992-06-01

    A numerical model was developed for use in performance assessment studies at the INEL. The numerical model, referred to as the FLAME computer code, is designed to simulate subsurface contaminant transport in variably-saturated media. The code can be applied to model two-dimensional contaminant transport in an arid-site vadose zone or in an unconfined aquifer. In addition, the code has the capability to describe transport processes in a porous media with discrete fractures. This report presents the following: description of the conceptual framework and mathematical theory, derivations of the finite element techniques and algorithms, computational examples that illustrate the capability of the code, and input instructions for the general use of the code. The development of the FLAME computer code is aimed at providing environmental scientists at the INEL with a predictive tool for the subsurface water pathway. This numerical model is expected to be widely used in performance assessments for: (1) the Remedial Investigation/Feasibility Study process and (2) compliance studies required by US Department of Energy Order 5820.2A.

  8. CURRENT - A Computer Code for Modeling Two-Dimensional, Chemically Reacting, Low Mach Number Flows

    SciTech Connect

    Winters, W.S.; Evans, G.H.; Moen, C.D.

    1996-10-01

    This report documents CURRENT, a computer code for modeling two-dimensional, chemically reacting, low Mach number flows including the effects of surface chemistry. CURRENT is a finite volume code based on the SIMPLER algorithm. Additional convergence acceleration for low Peclet number flows is provided using improved boundary condition coupling and preconditioned gradient methods. Gas-phase and surface chemistry is modeled using the CHEMKIN software libraries. The CURRENT user interface has been designed to be compatible with the Sandia-developed mesh generator and post processor ANTIPASTO and the post processor TECPLOT. This report describes the theory behind the code and also serves as a user's manual.

  9. ASHMET: a computer code for estimating insolation incident on tilted surfaces

    SciTech Connect

    Elkin, R.F.; Toelle, R.G.

    1980-05-01

    A computer code, ASHMET, has been developed by MSFC to estimate the amount of solar insolation incident on the surfaces of solar collectors. Both tracking and fixed-position collectors have been included. Climatological data for 248 US locations are built into the code. This report describes the methodology of the code, and its input and output. The basic methodology used by ASHMET is the ASHRAE clear-day insolation relationships modified by a clearness index derived from SOLMET-measured solar radiation data on a horizontal surface.
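
    A Python sketch of the ASHRAE clear-day beam model that ASHMET builds on is shown below; the coefficient values and the simple clearness scaling are illustrative assumptions, not ASHMET's actual monthly tables.

      import numpy as np

      A = 1085.0    # W/m^2, apparent extraterrestrial irradiance (illustrative)
      B = 0.207     # atmospheric extinction coefficient (illustrative)

      def beam_on_surface(beta_deg, theta_deg, clearness=1.0):
          """Clear-day beam irradiance on a surface.

          beta_deg  -- solar altitude angle above the horizon
          theta_deg -- incidence angle between sun ray and surface normal
          clearness -- clearness index (ASHMET derives it from SOLMET data)
          """
          beta = np.radians(beta_deg)
          I_dn = A * np.exp(-B / np.sin(beta))   # direct normal irradiance
          return clearness * I_dn * max(0.0, np.cos(np.radians(theta_deg)))

      print(beam_on_surface(60.0, 25.0, clearness=0.95))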

  10. Items Supporting the Hanford Internal Dosimetry Program Implementation of the IMBA Computer Code

    SciTech Connect

    Carbaugh, Eugene H.; Bihl, Donald E.

    2008-01-07

    The Hanford Internal Dosimetry Program has adopted the computer code IMBA (Integrated Modules for Bioassay Analysis) as its primary code for bioassay data evaluation and dose assessment using methodologies of ICRP Publications 60, 66, 67, 68, and 78. The adoption of this code was part of the implementation plan for the June 8, 2007 amendments to 10 CFR 835. This information release includes action items unique to IMBA that were required by PNNL quality assurance standards for implementation of safety software. Copies of the IMBA software verification test plan and the outline of the briefing given to new users are also included.

  11. Proton computed tomography from multiple physics processes

    NASA Astrophysics Data System (ADS)

    Bopp, C.; Colin, J.; Cussol, D.; Finck, Ch; Labalme, M.; Rousseau, M.; Brasse, D.

    2013-10-01

    Proton CT (pCT) nowadays aims at improving hadron therapy treatment planning by mapping the relative stopping power (RSP) of materials with respect to water. The RSP depends mainly on the electron density of the materials. The main information used is the energy of the protons. However, during a pCT acquisition, the spatial and angular deviation of each particle is recorded and the information about its transmission is implicitly available. The potential use of those observables in order to get information about the materials is being investigated. Monte Carlo simulations of protons sent into homogeneous materials were performed, and the influence of the chemical composition on the outputs was studied. A pCT acquisition of a head phantom scan was simulated. Brain lesions with the same electron density but different concentrations of oxygen were used to evaluate the different observables. Tomographic images from the different physics processes were reconstructed using a filtered back-projection algorithm. Preliminary results indicate that information is present in the reconstructed images of transmission and angular deviation that may help differentiate tissues. However, the statistical uncertainty on these observables generates further challenge in order to obtain an optimal reconstruction and extract the most pertinent information.
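
    A generic filtered back-projection step of the kind used to reconstruct these observables can be sketched with scikit-image in Python; this is a stand-in for, not the authors', reconstruction code, and assumes a recent scikit-image release.

      import numpy as np
      from skimage.data import shepp_logan_phantom
      from skimage.transform import radon, iradon

      image = shepp_logan_phantom()                  # 400x400 test object
      angles = np.linspace(0.0, 180.0, 180, endpoint=False)
      sinogram = radon(image, theta=angles)          # simulated projections
      recon = iradon(sinogram, theta=angles, filter_name='ramp')
      print(np.abs(recon - image).mean())            # reconstruction error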

  12. Design geometry and design/off-design performance computer codes for compressors and turbines

    NASA Technical Reports Server (NTRS)

    Glassman, Arthur J.

    1995-01-01

    This report summarizes some NASA Lewis (i.e., government owned) computer codes capable of being used for airbreathing propulsion system studies to determine the design geometry and to predict the design/off-design performance of compressors and turbines. These are not CFD codes; velocity-diagram energy and continuity computations are performed fore and aft of the blade rows using meanline, spanline, or streamline analyses. Losses are provided by empirical methods. Both axial-flow and radial-flow configurations are included.
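
    At a single mean line, the velocity-diagram energy computation these codes perform reduces to Euler's turbomachinery equation; a minimal Python sketch with illustrative numbers follows.

      # Euler's turbomachinery equation: stage specific work equals blade
      # speed times the change in tangential (swirl) velocity across the
      # rotor. All values are illustrative.
      U = 350.0                  # mean-line blade speed, m/s
      c_theta_in = 420.0         # swirl velocity entering the rotor, m/s
      c_theta_out = -60.0        # swirl velocity leaving the rotor, m/s
      w = U * (c_theta_in - c_theta_out)   # specific work, J/kg (turbine)

      cp, T0_in = 1005.0, 1400.0
      print(w, T0_in - w / cp)   # work and exit total temperature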

  13. Equivalence of computer codes for calculation of coincidence summing correction factors - Part II.

    PubMed

    Vidmar, T; Camp, A; Hurtado, S; Jäderström, H; Kastlander, J; Lépy, M-C; Lutter, G; Ramebäck, H; Sima, O; Vargas, A

    2016-03-01

    The aim of this study was to check for equivalence of computer codes that are capable of performing calculations of true coincidence summing (TCS) correction factors. All calculations were performed for a set of well-defined detector parameters, sample parameters and decay scheme data. The studied geometry was a point source of (133)Ba positioned directly on the detector window of a low-energy (n-type) detector. Good agreement was established between the TCS correction factors computed by the different codes. PMID:26651169

  14. Modeling Improvements and Users Manual for Axial-flow Turbine Off-design Computer Code AXOD

    NASA Technical Reports Server (NTRS)

    Glassman, Arthur J.

    1994-01-01

    An axial-flow turbine off-design performance computer code used for preliminary studies of gas turbine systems was modified and calibrated based on the experimental performance of large aircraft-type turbines. The flow- and loss-model modifications and calibrations are presented in this report. Comparisons are made between computed performances and experimental data for seven turbines over wide ranges of speed and pressure ratio. This report also serves as the users manual for the revised code, which is named AXOD.

  15. Solution of 3-dimensional time-dependent viscous flows. Part 2: Development of the computer code

    NASA Technical Reports Server (NTRS)

    Weinberg, B. C.; Mcdonald, H.

    1980-01-01

    There is considerable interest in developing a numerical scheme for solving the time dependent viscous compressible three dimensional flow equations to aid in the design of helicopter rotors. The development of a computer code to solve a three dimensional unsteady approximate form of the Navier-Stokes equations employing a linearized block implicit technique in conjunction with a QR operator scheme is described. Results of calculations of several Cartesian test cases are presented. The computer code can be applied to more complex flow fields such as those encountered on rotating airfoils.

  16. HOMAR: A computer code for generating homotopic grids using algebraic relations: User's manual

    NASA Technical Reports Server (NTRS)

    Moitra, Anutosh

    1989-01-01

    A computer code for fast automatic generation of quasi-three-dimensional grid systems for aerospace configurations is described. The code employs a homotopic method to algebraically generate two-dimensional grids in cross-sectional planes, which are stacked to produce a three-dimensional grid system. Implementation of the algebraic equivalents of the homotopic relations for generating body geometries and grids are explained. Procedures for controlling grid orthogonality and distortion are described. Test cases with description and specification of inputs are presented in detail. The FORTRAN computer program and notes on implementation and use are included.
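
    The homotopic idea behind the algebraic generation of cross-sectional grids can be sketched in a few lines of Python; the boundary curves and the purely linear homotopy are illustrative simplifications, not HOMAR's actual relations.

      import numpy as np

      # Blend an inner (body) curve into an outer boundary curve; each grid
      # line is an intermediate curve of the homotopy H(s, t).
      ns, nt = 33, 17
      s = np.linspace(0.0, 2.0 * np.pi, ns)
      inner = np.stack([np.cos(s), 0.5 * np.sin(s)], axis=1)        # ellipse
      outer = np.stack([3.0 * np.cos(s), 3.0 * np.sin(s)], axis=1)  # circle

      grid = np.empty((nt, ns, 2))
      for j, t in enumerate(np.linspace(0.0, 1.0, nt)):
          grid[j] = (1.0 - t) * inner + t * outer   # linear homotopy
      print(grid.shape)                             # (nt, ns, 2) grid points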

  17. On the Computational Capabilities of Physical Systems. Part 1; The Impossibility of Infallible Computation

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.; Koga, Dennis (Technical Monitor)

    2000-01-01

    In this first of two papers, strong limits on the accuracy of physical computation are established. First it is proven that there cannot be a physical computer C to which one can pose any and all computational tasks concerning the physical universe. Next it is proven that no physical computer C can correctly carry out any computational task in the subset of such tasks that can be posed to C. This result holds whether the computational tasks concern a system that is physically isolated from C, or instead concern a system that is coupled to C. As a particular example, this result means that there cannot be a physical computer that can, for any physical system external to that computer, take the specification of that external system's state as input and then correctly predict its future state before that future state actually occurs; one cannot build a physical computer that can be assured of correctly 'processing information faster than the universe does'. The results also mean that there cannot exist an infallible, general-purpose observation apparatus, and that there cannot be an infallible, general-purpose control apparatus. These results do not rely on systems that are infinite, and/or non-classical, and/or obey chaotic dynamics. They also hold even if one uses an infinitely fast, infinitely dense computer, with computational powers greater than that of a Turing Machine. This generality is a direct consequence of the fact that a novel definition of computation - a definition of 'physical computation' - is needed to address the issues considered in these papers. While this definition does not fit into the traditional Chomsky hierarchy, the mathematical structure and impossibility results associated with it have parallels in the mathematics of the Chomsky hierarchy. The second in this pair of papers presents a preliminary exploration of some of this mathematical structure, including in particular that of prediction complexity, which is a 'physical computation

  18. Physical Activity and Influenza-Coded Outpatient Visits, a Population-Based Cohort Study

    PubMed Central

    Siu, Eric; Campitelli, Michael A.; Kwong, Jeffrey C.

    2012-01-01

    Background Although the benefits of physical activity in preventing chronic medical conditions are well established, its impacts on infectious diseases, and seasonal influenza in particular, are less clearly defined. We examined the association between physical activity and influenza-coded outpatient visits, as a proxy for influenza infection. Methodology/Principal Findings We conducted a cohort study of Ontario respondents to Statistics Canada’s population health surveys over 12 influenza seasons. We assessed physical activity levels through survey responses, and influenza-coded physician office and emergency department visits through physician billing claims. We used logistic regression to estimate the risk of influenza-coded outpatient visits during influenza seasons. The cohort comprised 114,364 survey respondents who contributed 357,466 person-influenza seasons of observation. Compared to inactive individuals, moderately active (OR 0.83; 95% CI 0.74–0.94) and active (OR 0.87; 95% CI 0.77–0.98) individuals were less likely to experience an influenza-coded visit. Stratifying by age, the protective effect of physical activity remained significant for individuals <65 years (active OR 0.86; 95% CI 0.75–0.98, moderately active: OR 0.85; 95% CI 0.74–0.97) but not for individuals ≥65 years. The main limitations of this study were the use of influenza-coded outpatient visits rather than laboratory-confirmed influenza as the outcome measure, the reliance on self-report for assessing physical activity and various covariates, and the observational study design. Conclusion/Significance Moderate to high amounts of physical activity may be associated with reduced risk of influenza for individuals <65 years. Future research should use laboratory-confirmed influenza outcomes to confirm the association between physical activity and influenza. PMID:22737242
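
    Odds ratios and confidence intervals of the kind reported above come from exponentiating fitted logistic regression coefficients; the Python sketch below uses statsmodels on synthetic data, not the study's cohort.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 20000
      active = rng.integers(0, 2, n)           # 1 = physically active
      logit = -3.0 - 0.2 * active              # true OR = exp(-0.2) ~ 0.82
      p = 1.0 / (1.0 + np.exp(-logit))
      visit = rng.binomial(1, p)               # influenza-coded visit

      X = sm.add_constant(active.astype(float))
      fit = sm.Logit(visit, X).fit(disp=0)
      print(np.exp(fit.params[1]))             # odds ratio estimate
      print(np.exp(fit.conf_int()[1]))         # 95% CI on the odds ratio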

  1. Users manual and modeling improvements for axial turbine design and performance computer code TD2-2

    NASA Technical Reports Server (NTRS)

    Glassman, Arthur J.

    1992-01-01

    Computer code TD2 computes design point velocity diagrams and performance for multistage, multishaft, cooled or uncooled, axial flow turbines. This streamline analysis code was recently modified to upgrade modeling related to turbine cooling and to the internal loss correlation. These modifications are presented in this report along with descriptions of the code's expanded input and output. This report serves as the users manual for the upgraded code, which is named TD2-2.

  2. A proposed methodology for computational fluid dynamics code verification, calibration, and validation

    SciTech Connect

    Aeschliman, D.P.; Oberkampf, W.L.; Blottner, F.G.

    1995-07-01

    Verification, calibration, and validation (VCV) of Computational Fluid Dynamics (CFD) codes is an essential element of the code development process. The exact manner in which code VCV activities are planned and conducted, however, is critically important. It is suggested that the way in which code validation, in particular, is often conducted--by comparison to published experimental data obtained for other purposes--is in general difficult and unsatisfactory, and that a different approach is required. This paper describes a proposed methodology for CFD code VCV that meets the technical requirements and is philosophically consistent with code development needs. The proposed methodology stresses teamwork and cooperation between code developers and experimentalists throughout the VCV process, and takes advantage of certain synergisms between CFD and experiment. A novel approach to uncertainty analysis is described which can both distinguish between and quantify various types of experimental error, and whose attributes are used to help define an appropriate experimental design for code VCV experiments. The methodology is demonstrated with an example of laminar, hypersonic, near perfect gas, 3-dimensional flow over a sliced sphere/cone of varying geometrical complexity.

  3. TERRA: a computer code for simulating the transport of environmentally released radionuclides through agriculture

    SciTech Connect

    Baes, C.F. III; Sharp, R.D.; Sjoreen, A.L.; Hermann, O.W.

    1984-11-01

    TERRA is a computer code which calculates concentrations of radionuclides and ingrowing daughters in surface and root-zone soil, produce and feed, beef, and milk from a given deposition rate at any location in the conterminous United States. The code is fully integrated with seven other computer codes which together comprise a Computerized Radiological Risk Investigation System, CRRIS. Output from either the long range (> 100 km) atmospheric dispersion code RETADD-II or the short range (<80 km) atmospheric dispersion code ANEMOS, in the form of radionuclide air concentrations and ground deposition rates by downwind location, serves as input to TERRA. User-defined deposition rates and air concentrations may also be provided as input to TERRA through use of the PRIMUS computer code. The environmental concentrations of radionuclides predicted by TERRA serve as input to the ANDROS computer code which calculates population and individual intakes, exposures, doses, and risks. TERRA incorporates models to calculate uptake from soil and atmospheric deposition on four groups of produce for human consumption and four groups of livestock feeds. During the environmental transport simulation, intermediate calculations of interception fraction for leafy vegetables, produce directly exposed to atmospherically depositing material, pasture, hay, and silage are made based on location-specific estimates of standing crop biomass. Pasture productivity is estimated by a model which considers the number and types of cattle and sheep, pasture area, and annual production of other forages (hay and silage) at a given location. Calculations are made of the fraction of grain imported from outside the assessment area. TERRA output includes the above calculations and estimated radionuclide concentrations in plant produce, milk, and a beef composite by location.

  4. Independent verification and validation testing of the FLASH computer code, Version 3.0

    SciTech Connect

    Martian, P.; Chung, J.N. (Dept. of Mechanical and Materials Engineering)

    1992-06-01

    Independent testing of the FLASH computer code, Version 3.0, was conducted to determine if the code is ready for use in hydrological and environmental studies at various Department of Energy sites. This report describes the technical basis, approach, and results of this testing. Verification tests and validation tests were used to determine the operational status of the FLASH computer code. These tests were specifically designed to test: correctness of the FORTRAN coding, computational accuracy, and suitability for simulating actual hydrologic conditions. This testing was performed using a structured evaluation protocol which consisted of: blind testing, independent applications, and graduated difficulty of test cases. Both quantitative and qualitative testing was performed through evaluating relative root mean square values and graphical comparisons of the numerical, analytical, and experimental data. Four verification tests were used to check the computational accuracy and correctness of the FORTRAN coding, and three validation tests were used to check the suitability for simulating actual conditions. These test cases ranged in complexity from simple 1-D saturated flow to 2-D variably saturated problems. The verification tests showed excellent quantitative agreement between the FLASH results and analytical solutions. The validation tests showed good qualitative agreement with the experimental data. Based on the results of this testing, it was concluded that the FLASH code is a versatile and powerful two-dimensional analysis tool for fluid flow. In conclusion, all aspects of the code that were tested, except for the unit gradient bottom boundary condition, were found to be fully operational and ready for use in hydrological and environmental studies.
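
    One common definition of the relative root mean square measure used in such quantitative comparisons is sketched below in Python; the sample values are invented.

      import numpy as np

      def relative_rms(computed, reference):
          """RMS of the discrepancy, normalized by the RMS of the reference."""
          computed, reference = np.asarray(computed), np.asarray(reference)
          return (np.sqrt(np.mean((computed - reference) ** 2))
                  / np.sqrt(np.mean(reference ** 2)))

      h_numeric = [0.98, 0.81, 0.65, 0.52]   # e.g. computed pressure heads
      h_exact = [1.00, 0.80, 0.64, 0.50]     # analytical solution
      print(relative_rms(h_numeric, h_exact))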

  5. The Accuracy of ICD Codes: Identifying Physical Abuse in 4 Children’s Hospitals

    PubMed Central

    Hooft, Anneka M.; Asnes, Andrea G.; Livingston, Nina; Deutsch, Stephanie; Cahill, Linda; Wood, Joanne N.; Leventhal, John M.

    2016-01-01

    Objective To assess the accuracy of International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM), codes in identifying cases of child physical abuse in 4 children’s hospitals. Methods We included all children evaluated by a child abuse pediatrician (CAP) for suspicion of abuse at 4 children’s hospitals from January 1, 2007, to December 31, 2010. Subjects included both patients judged to have injuries from abuse and those judged to have injuries from accidents or to have medical problems. The ICD-9-CM codes entered in the hospital discharge database for each child were compared to the decisions made by the CAPs on the likelihood of abuse. Sensitivity and specificity were calculated. Medical records for discordant cases were abstracted and reviewed to assess factors contributing to coding discrepancies. Results Of 936 cases of suspected physical abuse, 65.8% occurred in children <1 year of age. CAPs rated 32.7% as abuse, 18.2% as unknown cause, and 49.1% as accident/medical cause. Sensitivity and specificity of ICD-9-CM codes for abuse were 73.5% (95% confidence interval 68.2, 78.4), and 92.4% (95% confidence interval 90.0, 94.0), respectively. Among hospitals, sensitivity ranged from 53.8% to 83.8% and specificity from 85.4% to 100%. Analysis of discordant cases revealed variations in coding practices and physicians’ notations among hospitals that contributed to differences in sensitivity and specificity of ICD-9-CM codes in child physical abuse. Conclusions Overall, the sensitivity and specificity of ICD-9-CM codes in identifying cases of child physical abuse were relatively low, suggesting both an under- and overcounting of abuse cases. PMID:26142071
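
    Sensitivity and specificity, with normal-approximation confidence intervals, follow directly from the 2x2 table of ICD coding versus CAP judgment; the Python sketch below uses counts reconstructed approximately from the reported percentages, not the study's exact table.

      import math

      # Approximate counts: ~306 abuse cases with sensitivity 73.5%, and
      # ~460 accident/medical cases with specificity 92.4%.
      tp, fn = 225, 81    # abuse per CAP: coded / not coded as abuse
      tn, fp = 425, 35    # not abuse per CAP: not coded / coded as abuse

      def prop_ci(k, n):
          """Proportion with a 95% normal-approximation interval."""
          p = k / n
          half = 1.96 * math.sqrt(p * (1.0 - p) / n)
          return p, (p - half, p + half)

      print("sensitivity:", prop_ci(tp, tp + fn))
      print("specificity:", prop_ci(tn, tn + fp))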

  6. Computational physics in the introductory calculus-based course

    NASA Astrophysics Data System (ADS)

    Chabay, Ruth; Sherwood, Bruce

    2008-04-01

    The integration of computation into the introductory calculus-based physics course can potentially provide significant support for the development of conceptual understanding. Computation can support three-dimensional visualizations of abstract quantities, offer opportunities to construct symbolic rather than numeric solutions to problems, and provide experience with the use of vectors as coordinate-free entities. Computation can also allow students to explore models in a way not possible using the analytical tools available to first-year students. We describe how we have incorporated computer programming into an introductory calculus-based course taken by science and engineering students.

  7. Digitized forensics: retaining a link between physical and digital crime scene traces using QR-codes

    NASA Astrophysics Data System (ADS)

    Hildebrandt, Mario; Kiltz, Stefan; Dittmann, Jana

    2013-03-01

    The digitization of physical traces from crime scenes in forensic investigations in effect creates a digital chain-of-custody and entrains the challenge of creating a link between the two or more representations of the same trace. In order to be forensically sound, the two security aspects of integrity and authenticity in particular need to be maintained at all times. Ensuring authenticity by technical means proves especially challenging at the boundary between the physical object and its digital representations. In this article we propose a new method of linking physical objects with their digital counterparts using two-dimensional bar codes and additional meta-data accompanying the acquired data, for integration into the conventional documentation of the collection of items of evidence (the bagging and tagging process). Using the exemplary chosen QR-code as a particular implementation of a bar code and a model of the forensic process, we also supply a means to integrate our suggested approach into forensically sound proceedings as described by Holder et al. [1]. We use the example of digital dactyloscopy as a forensic discipline where progress is currently being made by digitizing some of the processing steps. We show an exemplary demonstrator of the suggested approach using a smartphone as a mobile device for the verification of the physical trace, to extend the chain-of-custody from the physical to the digital domain. Our evaluation of the demonstrator addresses the readability and the verification of its contents. We can read the bar code despite its limited size of 42 x 42 mm and rather large amount of embedded data using various devices. Furthermore, the QR-code's error correction features help to recover the contents of damaged codes. Subsequently, our appended digital signature allows for detecting malicious manipulations of the embedded data.
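
    A minimal sketch of the tagging step is given below in Python with the third-party qrcode package; the file names and metadata fields are illustrative assumptions, and the digital signature the article appends is omitted for brevity.

      import hashlib
      import json
      import qrcode   # third-party package (pip install qrcode)

      # Link a physical evidence bag to its digital counterpart: embed case
      # metadata plus a hash of the acquired file in a printable QR code.
      digital_trace = open("scan_0042.png", "rb").read()   # hypothetical file
      payload = {
          "case": "2013-0042",
          "item": "latent print, kitchen door",
          "sha256": hashlib.sha256(digital_trace).hexdigest(),
      }
      img = qrcode.make(json.dumps(payload))
      img.save("evidence_tag_0042.png")   # printed and attached to the bag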

  8. A Computer Code for Swirling Turbulent Axisymmetric Recirculating Flows in Practical Isothermal Combustor Geometries

    NASA Technical Reports Server (NTRS)

    Lilley, D. G.; Rhode, D. L.

    1982-01-01

    A primitive-variable (pressure-velocity) finite difference computer code was developed to predict swirling recirculating inert turbulent flows in axisymmetric combustors in general, and for application to a specific idealized combustion chamber with sudden or gradual expansion. The technique involves a staggered grid system for axial and radial velocities, a line relaxation procedure for efficient solution of the equations, a two-equation k-epsilon turbulence model, a stairstep boundary representation of the expansion flow, and realistic accommodation of swirl effects. A user's manual is presented, dealing with the computational problem and showing how the mathematical basis and computational scheme may be translated into a computer program. A flow chart, FORTRAN IV listing, notes about various subroutines and a user's guide are supplied as an aid to prospective users of the code.

  9. Application of the TEMPEST computer code for simulating hydrogen distribution in model containment structures. [PWR; BWR

    SciTech Connect

    Trent, D.S.; Eyler, L.L.

    1982-09-01

    In this study several aspects of simulating hydrogen distribution in geometric configurations relevant to reactor containment structures were investigated using the TEMPEST computer code. Of particular interest was the performance of the TEMPEST turbulence model in a density-stratified environment. Computed results illustrated that the TEMPEST numerical procedures predicted the measured phenomena with good accuracy under a variety of conditions and that the turbulence model used is a viable approach in complex turbulent flow simulation.

  10. Users manual for updated computer code for axial-flow compressor conceptual design

    NASA Technical Reports Server (NTRS)

    Glassman, Arthur J.

    1992-01-01

    An existing computer code that determines the flow path for an axial-flow compressor either for a given number of stages or for a given overall pressure ratio was modified for use in air-breathing engine conceptual design studies. This code uses a rapid approximate design methodology that is based on isentropic simple radial equilibrium. Calculations are performed at constant-span-fraction locations from tip to hub. Energy addition per stage is controlled by specifying the maximum allowable values for several aerodynamic design parameters. New modeling was introduced to the code to overcome perceived limitations. Specific changes included variable rather than constant tip radius, flow path inclination added to the continuity equation, input of mass flow rate directly rather than indirectly as inlet axial velocity, solution for the exact value of overall pressure ratio rather than for any value that met or exceeded it, and internal computation of efficiency rather than the use of input values. The modified code was shown to be capable of computing efficiencies that are compatible with those of five multistage compressors and one fan that were tested experimentally. This report serves as a users manual for the revised code, Compressor Spanline Analysis (CSPAN). The modeling modifications, including two internal loss correlations, are presented. Program input and output are described. A sample case for a multistage compressor is included.

  11. PIC codes for plasma accelerators on emerging computer architectures (GPUs, Multicore/Manycore CPUs)

    NASA Astrophysics Data System (ADS)

    Vincenti, Henri

    2016-03-01

    The advent of exascale computers will enable 3D simulations of new laser-plasma interaction regimes that were previously out of reach of current petascale computers. However, the paradigm used to write current PIC codes will have to change in order to fully exploit the potential of these new computing architectures. Indeed, achieving exascale computing facilities in the next decade will be a great challenge in terms of energy consumption and will imply hardware developments directly impacting our way of implementing PIC codes. As data movement (from die to network) is by far the most energy-consuming part of an algorithm, future computers will tend to increase memory locality at the hardware level and reduce energy consumption related to data movement by using more and more cores on each compute node (''fat nodes'') with a reduced clock speed to allow for efficient cooling. To compensate for the frequency decrease, CPU vendors are making use of long SIMD instruction registers that are able to process multiple data with one arithmetic operator in one clock cycle. SIMD register length is expected to double every four years. GPUs also have a reduced clock speed per core and can process Multiple Instructions on Multiple Data (MIMD). At the software level, Particle-In-Cell (PIC) codes will thus have to achieve both good memory locality and vectorization (for multicore/manycore CPUs) to fully take advantage of these upcoming architectures. In this talk, we present the portable solutions we implemented in our high performance skeleton PIC code PICSAR to achieve both good memory locality and cache reuse as well as good vectorization on SIMD architectures. We also present the portable solutions used to parallelize the pseudo-spectral quasi-cylindrical code FBPIC on GPUs using the Numba Python compiler.
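
    The data-layout point can be illustrated with a structure-of-arrays particle container and a whole-array, SIMD-friendly push, sketched here in Python with NumPy; the field and the leapfrog-style update are toys, not PICSAR's actual kernels.

      import numpy as np

      # Structure-of-arrays layout: each particle attribute is one
      # contiguous array, so the update maps naturally onto SIMD units.
      n = 1_000_000
      x = np.zeros(n)                                # positions
      v = np.random.default_rng(1).normal(size=n)    # velocities
      q_over_m, dt = -1.0, 1e-3

      def push(x, v, E):
          v += q_over_m * E(x) * dt                  # whole-array operations
          x += v * dt

      E_field = lambda x: np.sin(2.0 * np.pi * x)    # toy electric field
      for _ in range(10):                            # a full PIC step would add
          push(x, v, E_field)                        # field gather and deposition
      print(x[:3], v[:3])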

  14. EDITORIAL: XXVI IUPAP Conference on Computational Physics (CCP2014)

    NASA Astrophysics Data System (ADS)

    Sandvik, A. W.; Campbell, D. K.; Coker, D. F.; Tang, Y.

    2015-09-01

    The 26th IUPAP Conference on Computational Physics, CCP2014, was held in Boston, Massachusetts, during August 11-14, 2014. Almost 400 participants from 38 countries convened at the George Sherman Union at Boston University for four days of plenary and parallel sessions spanning a broad range of topics in computational physics and related areas. The first meeting in the series that developed into the annual Conference on Computational Physics (CCP) was held in 1989, also on the campus of Boston University and chaired by our colleague Claudio Rebbi. The express purpose of that meeting was to discuss the progress, opportunities and challenges of common interest to physicists engaged in computational research. The conference having returned to the site of its inception, it is interesting to reflect on the development of the field during the intervening years. Though 25 years is a short time for mankind, computational physics has taken giant leaps during these years, not only because of the enormous increases in computer power but especially because of the development of new methods and algorithms, and the growing awareness of the opportunities the new technologies and methods can offer. Computational physics now represents a "third leg" of research alongside analytical theory and experiments in almost all subfields of physics, and because of this there is also increasing specialization within the community of computational physicists. It is therefore a challenge to organize a meeting such as CCP, which must have sufficient depth in different areas to hold the interest of experts while at the same time being broad and accessible. Still, at a time when computational research continues to gain in importance, the CCP series is critical in the way it fosters cross-fertilization among fields, with many participants specifically attending in order to get exposure to new methods in fields outside their own. As organizers and editors of these Proceedings, we are very pleased

  15. A Modular Computer Code for Simulating Reactive Multi-Species Transport in 3-Dimensional Groundwater Systems

    SciTech Connect

    TP Clement

    1999-06-24

    RT3DV1 (Reactive Transport in 3-Dimensions) is a computer code that solves the coupled partial differential equations that describe reactive flow and transport of multiple mobile and/or immobile species in three-dimensional saturated groundwater systems. RT3D is a generalized multi-species version of the US Environmental Protection Agency (EPA) transport code, MT3D (Zheng, 1990). The current version of RT3D uses the advection and dispersion solvers from the DOD-1.5 (1997) version of MT3D. As with MT3D, RT3D also requires the groundwater flow code MODFLOW for computing spatial and temporal variations in groundwater head distribution. The RT3D code was originally developed to support the contaminant transport modeling efforts at natural attenuation demonstration sites. As a research tool, RT3D has also been used to model several laboratory and pilot-scale active bioremediation experiments. The performance of RT3D has been validated by comparing the code results against various numerical and analytical solutions. The code is currently being used to model field-scale natural attenuation at multiple sites. The RT3D code is unique in that it includes an implicit reaction solver that makes the code sufficiently flexible for simulating various types of chemical and microbial reaction kinetics. RT3D V1.0 supports seven pre-programmed reaction modules that can be used to simulate different types of reactive contaminants including benzene-toluene-xylene mixtures (BTEX), and chlorinated solvents such as tetrachloroethene (PCE) and trichloroethene (TCE). In addition, RT3D has a user-defined reaction option that can be used to simulate any other types of user-specified reactive transport systems. This report describes the mathematical details of the RT3D computer code and its input/output data structure. It is assumed that the user is familiar with the basics of groundwater flow and contaminant transport mechanics. In addition, RT3D users are expected to have some experience in
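
    For orientation, the mobile-species balance solved by multi-species transport codes of this kind can be written in the standard advection-dispersion-reaction form (notation assumed here; RT3D's full formulation also handles immobile species and source/sink terms):

    \[
    \frac{\partial C_k}{\partial t}
      = \frac{\partial}{\partial x_i}\!\left( D_{ij}\,\frac{\partial C_k}{\partial x_j} \right)
      - \frac{\partial}{\partial x_i}\!\left( v_i\, C_k \right)
      + r_k(C_1,\dots,C_m), \qquad k = 1,\dots,m,
    \]

    where the reaction rates r_k couple the species and are evaluated either by a pre-programmed module or by the user-defined reaction option.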

  16. Modeling Warm Dense Matter Experiments using the 3D ALE-AMR Code and the Move Toward Exascale Computing

    SciTech Connect

    Koniges, A; Eder, E; Liu, W; Barnard, J; Friedman, A; Logan, G; Fisher, A; Masers, N; Bertozzi, A

    2011-11-04

    The Neutralized Drift Compression Experiment II (NDCX II) is an induction accelerator planned for initial commissioning in 2012. The final design calls for a 3 MeV, Li+ ion beam, delivered in a bunch with characteristic pulse duration of 1 ns, and transverse dimension of order 1 mm. The NDCX II will be used in studies of material in the warm dense matter (WDM) regime, and ion beam/hydrodynamic coupling experiments relevant to heavy ion based inertial fusion energy. We discuss recent efforts to adapt the 3D ALE-AMR code to model WDM experiments on NDCX II. The code, which combines Arbitrary Lagrangian Eulerian (ALE) hydrodynamics with Adaptive Mesh Refinement (AMR), has physics models that include ion deposition, radiation hydrodynamics, thermal diffusion, anisotropic material strength with material time history, and advanced models for fragmentation. Experiments at NDCX-II will explore the process of bubble and droplet formation (two-phase expansion) of superheated metal solids using ion beams. Experiments at higher temperatures will explore equation of state and heavy ion fusion beam-to-target energy coupling efficiency. Ion beams allow precise control of local beam energy deposition providing uniform volumetric heating on a timescale shorter than that of hydrodynamic expansion. The ALE-AMR code does not have any export control restrictions and is currently running at the National Energy Research Scientific Computing Center (NERSC) at LBNL and has been shown to scale well to thousands of CPUs. New surface tension models are being implemented and applied to WDM experiments. Some of the approaches use a diffuse interface surface tension model that is based on the advective Cahn-Hilliard equations, which allows for droplet breakup in divergent velocity fields without the need for imposed perturbations. Other methods require seeding or similar mechanisms for droplet breakup. We also briefly discuss the effects of the move to exascale computing and related
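
    The advective Cahn-Hilliard system mentioned above has, in its standard form (given here for orientation; the precise variant implemented in ALE-AMR may differ),

    \[
    \frac{\partial c}{\partial t} + \nabla \cdot (\mathbf{u}\, c)
      = \nabla \cdot \big( M\, \nabla \mu \big),
    \qquad
    \mu = f'(c) - \epsilon^{2} \nabla^{2} c,
    \]

    where c is the phase field, M a mobility, f a double-well free energy, and \epsilon sets the diffuse-interface width; the surface energy implicit in the \epsilon^2 term is what permits droplet breakup without imposed perturbations.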

  17. Enhancement of the CAVE computer code. [aerodynamic heating package for nose cones and scramjet engine sidewalls

    NASA Technical Reports Server (NTRS)

    Rathjen, K. A.; Burk, H. O.

    1983-01-01

    The computer code CAVE (Conduction Analysis via Eigenvalues) is a convenient and efficient computer code for predicting two-dimensional temperature histories within thermal protection systems for hypersonic vehicles. The capabilities of CAVE were enhanced by incorporation of the following features into the code: real gas effects in the aerodynamic heating predictions, geometry and aerodynamic heating package for analyses of cone shaped bodies, input option to change from laminar to turbulent heating predictions on leading edges, modification to account for reduction in adiabatic wall temperature with increase in leading-edge sweep, geometry package for two-dimensional scramjet engine sidewall, with an option for heat transfer to external and internal surfaces, printout modification to provide tables of select temperatures for plotting and storage, and modifications to the radiation calculation procedure to eliminate temperature oscillations induced by high heating rates. These new features are described.

  18. A 3D-PNS computer code for the calculation of supersonic combusting flows

    NASA Technical Reports Server (NTRS)

    Chitsomboon, Tawit; Northam, G. Burton

    1988-01-01

    A computer code has been developed based on the three-dimensional parabolized Navier-Stokes (PNS) equations which govern the supersonic combusting flow of the hydrogen-air system. The finite difference algorithm employed was a hybrid of the Schiff-Steger algorithm and the Vigneron, et al., algorithm which is fully implicit and fully coupled. The combustion of hydrogen and air was modeled by the finite-rate two-step combustion model of Rogers-Chinitz. A new dependent variable vector was introduced to simplify the numerical algorithm. Robustness of the algorithm was considerably enhanced by introducing an adjustable parameter. The computer code was used to solve a premixed shock-induced combustion problem and the results were compared with those of a full Navier-Stokes code. Reasonably good agreement was obtained at a fraction of the cost of the full Navier-Stokes procedure.

  19. Users' Manual for Computer Code SPIRALI Incompressible, Turbulent Spiral Grooved Cylindrical and Face Seals

    NASA Technical Reports Server (NTRS)

    Walowit, Jed A.; Shapiro, Wilbur

    2005-01-01

    The SPIRALI code predicts the performance characteristics of incompressible cylindrical and face seals with or without the inclusion of spiral grooves. Performance characteristics include load capacity (for face seals), leakage flow, power requirements and dynamic characteristics in the form of stiffness, damping and apparent mass coefficients in 4 degrees of freedom for cylindrical seals and 3 degrees of freedom for face seals. These performance characteristics are computed as functions of seal and groove geometry, load or film thickness, running and disturbance speeds, fluid viscosity, and boundary pressures. A derivation of the equations governing the performance of turbulent, incompressible, spiral groove cylindrical and face seals along with a description of their solution is given. The computer codes are described, including an input description, sample cases, and comparisons with results of other codes.

  20. Modeling of BWR core meltdown accidents - for application in the MELRPI. MOD2 computer code

    SciTech Connect

    Koh, B R; Kim, S H; Taleyarkhan, R P; Podowski, M Z; Lahey, Jr, R T

    1985-04-01

    This report summarizes improvements and modifications made in the MELRPI computer code. A major difference between this new, updated version of the code, called MELRPI.MOD2, and the one reported previously, concerns the inclusion of a model for the BWR emergency core cooling systems (ECCS). This model and its computer implementation, the ECCRPI subroutine, account for various emergency injection modes, for both intact and rubblized geometries. Other changes to MELRPI deal with an improved model for canister wall oxidation, rubble bed modeling, and numerical integration of system equations. A complete documentation of the entire MELRPI.MOD2 code is also given, including an input guide, list of subroutines, sample input/output and program listing.

  1. XSECT: A computer code for generating fuselage cross sections - user's manual

    NASA Technical Reports Server (NTRS)

    Ames, K. R.

    1982-01-01

    A computer code, XSECT, has been developed to generate fuselage cross sections from a given area distribution and wing definition. The cross sections are generated to match the wing definition while conforming to the area requirement. An iterative procedure is used to generate each cross section. Fuselage area balancing may be included in this procedure if desired. The code is intended as an aid for engineers who must first design a wing under certain aerodynamic constraints and then design a fuselage for the wing such that the constraints remain satisfied. This report contains the information necessary for accessing and executing the code, which is written in FORTRAN to execute on the Cyber 170 series computers (NOS operating system) and produces graphical output for a Tektronix 4014 CRT. The LRC graphics software is used in combination with the interface between this software and the PLOT 10 software.
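
    The iterative area-matching step can be pictured with a toy calculation. The following sketch is purely illustrative (the function names and the uniform-scaling rule are assumptions, not XSECT's procedure): a candidate section is rescaled until its enclosed area matches the prescribed value, whereas the real code would additionally hold the wing-intersection points fixed.

```python
# Toy area-matching iteration (illustrative only, not XSECT's algorithm).
import numpy as np

def section_area(r, dtheta):
    """Area of a cross-section given polar radii at equal angular steps."""
    return 0.5 * np.sum(r**2) * dtheta

def match_area(r, target, dtheta, tol=1e-8, max_iter=100):
    for _ in range(max_iter):
        a = section_area(r, dtheta)
        if abs(a - target) < tol:
            break
        r *= np.sqrt(target / a)   # uniform radial scaling preserves shape
    return r

theta = np.linspace(0.0, 2.0 * np.pi, 181)[:-1]
r0 = 1.0 + 0.1 * np.cos(2.0 * theta)            # initial section guess
r = match_area(r0, target=4.0, dtheta=theta[1] - theta[0])
```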

  2. A Computer-Based Observational Assessment of the Teaching Behaviours that Influence Motivational Climate in Physical Education

    ERIC Educational Resources Information Center

    Morgan, Kevin; Sproule, John; Weigand, Daniel; Carpenter, Paul

    2005-01-01

    The primary purpose of this study was to use an established behavioural taxonomy (Ames, 1992b) as a computer-based observational coding system to assess the teaching behaviours that influence perceptions of the motivational climate in Physical Education (PE). The secondary purpose was to determine the degree of congruence between the behavioural…

  3. User's manual for the vertical axis wind turbine performance computer code DARTER

    SciTech Connect

    Klimas, P. C.; French, R. E.

    1980-05-01

    The computer code DARTER (DARrieus Turbine, Elemental Reynolds number) is an aerodynamic performance/loads prediction scheme based upon the conservation of momentum principle. It is the latest evolution in a sequence which began with a model developed by Templin of NRC, Canada, and progressed through the Sandia National Laboratories-developed SIMOSS (SImple MOmentum, Single Streamtube) and DART (DARrieus Turbine) to DARTER.
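
    The momentum principle underlying this family of models can be sketched in a few lines. The following is a generic single-streamtube illustration with a placeholder blade-element thrust model, not DARTER's formulation: the induction factor a is iterated until the blade-element thrust is consistent with the momentum-theory value CT = 4a(1 - a).

```python
# Generic single-streamtube momentum iteration (illustrative only;
# DARTER's blade-element and Reynolds-number modeling is more detailed).
import math

def induction_factor(ct_from_blades, n_iter=50):
    """Fixed-point iteration on momentum theory, CT = 4a(1-a)."""
    a = 0.0
    for _ in range(n_iter):
        ct = ct_from_blades(a)                # thrust from blade elements
        ct = min(ct, 0.96)                    # stay below momentum limit
        a = 0.5 * (1.0 - math.sqrt(1.0 - ct))
    return a

# Placeholder thrust model, decreasing with induction as expected:
a = induction_factor(lambda a: 0.8 * (1.0 - a) ** 2)
print(f"converged induction factor a = {a:.3f}")
```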

  4. 29 CFR 1910.144 - Safety color code for marking physical hazards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 5 2010-07-01 2010-07-01 false Safety color code for marking physical hazards. 1910.144 Section 1910.144 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR OCCUPATIONAL SAFETY AND HEALTH STANDARDS General Environmental...

  5. 29 CFR 1910.144 - Safety color code for marking physical hazards.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 5 2014-07-01 2014-07-01 false Safety color code for marking physical hazards. 1910.144 Section 1910.144 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR OCCUPATIONAL SAFETY AND HEALTH STANDARDS General Environmental...

  6. 29 CFR 1910.144 - Safety color code for marking physical hazards.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 5 2011-07-01 2011-07-01 false Safety color code for marking physical hazards. 1910.144 Section 1910.144 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR OCCUPATIONAL SAFETY AND HEALTH STANDARDS General Environmental...

  7. 29 CFR 1910.144 - Safety color code for marking physical hazards.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 5 2013-07-01 2013-07-01 false Safety color code for marking physical hazards. 1910.144 Section 1910.144 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR OCCUPATIONAL SAFETY AND HEALTH STANDARDS General Environmental...

  8. 29 CFR 1910.144 - Safety color code for marking physical hazards.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 5 2012-07-01 2012-07-01 false Safety color code for marking physical hazards. 1910.144 Section 1910.144 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR OCCUPATIONAL SAFETY AND HEALTH STANDARDS General Environmental...

  9. TEMP: a computer code to calculate fuel pin temperatures during a transient. [LMFBR

    SciTech Connect

    Bard, F E; Christensen, B Y; Gneiting, B C

    1980-04-01

    The computer code TEMP calculates fuel pin temperatures during a transient. It was developed to accommodate temperature calculations in any system of axi-symmetric concentric cylinders. When used to calculate fuel pin temperatures, the code will handle a fuel pin as simple as a solid cylinder or as complex as a central void surrounded by fuel that is broken into three regions by two circumferential cracks. Any fuel situation between these two extremes can be analyzed along with additional cladding, heat sink, coolant or capsule regions surrounding the fuel. The one-region version of the code accurately calculates the solution to two problems having closed-form solutions. The code uses an implicit method, an explicit method and a Crank-Nicolson (implicit-explicit) method.
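
    As a reminder of what the Crank-Nicolson option does, the sketch below applies the implicit-explicit average to plain one-dimensional conduction with fixed end temperatures. This is an illustration only, with assumed names and values; TEMP's axisymmetric, multi-region formulation is more involved.

```python
# One Crank-Nicolson step for dT/dt = alpha * d2T/dx2 with Dirichlet ends
# (illustrative sketch, not TEMP's cylindrical multi-region solver).
import numpy as np

def crank_nicolson_step(T, alpha, dx, dt):
    n = len(T)
    r = alpha * dt / (2.0 * dx**2)
    A = np.eye(n) * (1.0 + 2.0 * r)   # implicit (new-time) operator
    B = np.eye(n) * (1.0 - 2.0 * r)   # explicit (old-time) operator
    for i in range(1, n - 1):
        A[i, i - 1] = A[i, i + 1] = -r
        B[i, i - 1] = B[i, i + 1] = r
    A[0, :], A[-1, :] = 0.0, 0.0      # hold boundary temperatures fixed
    A[0, 0] = A[-1, -1] = 1.0
    B[0, :], B[-1, :] = 0.0, 0.0
    B[0, 0] = B[-1, -1] = 1.0
    return np.linalg.solve(A, B @ T)

T = np.linspace(900.0, 500.0, 51)     # initial temperature profile, K
T = crank_nicolson_step(T, alpha=1e-5, dx=1e-4, dt=0.01)
```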

  10. Development of a Model and Computer Code to Describe Solar Grade Silicon Production Processes

    NASA Technical Reports Server (NTRS)

    Srivastava, R.; Gould, R. K.

    1979-01-01

    Mathematical models and computer codes based on these models, which allow prediction of the product distribution in chemical reactors for converting gaseous silicon compounds to condensed-phase silicon, were developed. The following tasks were accomplished: (1) formulation of a model for silicon vapor separation/collection from the developing turbulent flow stream within reactors of the Westinghouse type, (2) modification of an available general parabolic code to achieve solutions to the governing partial differential equations (boundary layer type) which describe migration of the vapor to the reactor walls, (3) a parametric study using the boundary layer code to optimize the performance characteristics of the Westinghouse reactor, (4) calculations relating to the collection efficiency of the new AeroChem reactor, and (5) final testing of the modified LAPP code for use as a method of predicting Si(l) droplet sizes in these reactors.

  11. RISKIND: An enhanced computer code for National Environmental Policy Act transportation consequence analysis

    SciTech Connect

    Biwer, B.M.; LePoire, D.J.; Chen, S.Y.

    1996-03-01

    The RISKIND computer program was developed for the analysis of radiological consequences and health risks to individuals and the collective population from exposures associated with the transportation of spent nuclear fuel (SNF) or other radioactive materials. The code is intended to provide scenario-specific analyses when evaluating alternatives for environmental assessment activities, including those for major federal actions involving radioactive material transport as required by the National Environmental Policy Act (NEPA). As such, rigorous procedures have been implemented to enhance the code's credibility and strenuous efforts have been made to enhance ease of use of the code. To increase the code's reliability and credibility, a new version of RISKIND was produced under a quality assurance plan that covered code development and testing, and a peer review process was conducted. During development of the new version, the flexibility and ease of use of RISKIND were enhanced through several major changes: (1) a Windows(TM) point-and-click interface replaced the old DOS menu system, (2) the remaining model input parameters were added to the interface, (3) databases were updated, (4) the program output was revised, and (5) on-line help has been added. RISKIND has been well received by users and has been established as a key component in radiological transportation risk assessments through its acceptance by the U.S. Department of Energy community in recent environmental impact statements (EISs) and its continued use in the current preparation of several EISs.

  12. Recommendations for computer modeling codes to support the UMTRA groundwater restoration project

    SciTech Connect

    Tucker, M.D.; Khan, M.A.

    1996-04-01

    The Uranium Mill Tailings Remediation Action (UMTRA) Project is responsible for the assessment and remedial action at the 24 former uranium mill tailings sites located in the US. The surface restoration phase, which includes containment and stabilization of the abandoned uranium mill tailings piles, has a specific termination date and is nearing completion. Therefore, attention has now turned to the groundwater restoration phase, which began in 1991. Regulated constituents in groundwater whose concentrations or activities exceed maximum contaminant levels (MCLs) or background levels at one or more sites include, but are not limited to, uranium, selenium, arsenic, molybdenum, nitrate, gross alpha, radium-226 and radium-228. The purpose of this report is to recommend computer codes that can be used to assist the UMTRA groundwater restoration effort. The report includes a survey of applicable codes in each of the following areas: (1) groundwater flow and contaminant transport modeling codes, (2) hydrogeochemical modeling codes, (3) pump and treat optimization codes, and (4) decision support tools. Following the survey of the applicable codes, specific codes that can best meet the needs of the UMTRA groundwater restoration program in each of the four areas are recommended.

  13. PREFACE: New trends in Computer Simulations in Physics and not only in physics

    NASA Astrophysics Data System (ADS)

    Shchur, Lev N.; Krashakov, Serge A.

    2016-02-01

    In this volume we have collected papers based on the presentations given at the International Conference on Computer Simulations in Physics and beyond (CSP2015), held in Moscow, September 6-10, 2015. We hope that this volume will be helpful and scientifically interesting for readers. The Conference was organized for the first time through the joint efforts of the Moscow Institute for Electronics and Mathematics (MIEM) of the National Research University Higher School of Economics, the Landau Institute for Theoretical Physics, and the Science Center in Chernogolovka. The name of the Conference emphasizes the multidisciplinary nature of computational physics. Its methods are applied to a broad range of current research in science and society. The choice of venue was motivated by the multidisciplinary character of the MIEM. It is a former independent university, which has recently become part of the National Research University Higher School of Economics. The Conference Computer Simulations in Physics and beyond (CSP) is planned to be organized biannually. This year's Conference featured 99 presentations, including 21 plenary and invited talks ranging from the analysis of Irish myths with recent methods of statistical physics, to computing with the novel quantum computers D-Wave and D-Wave2. This volume covers various areas of computational physics and emerging subjects within the computational physics community. Each section was preceded by invited talks presenting the latest algorithms and methods in computational physics, as well as new scientific results. Both parallel and poster sessions paid special attention to numerical methods, applications and results. For all the abstracts presented at the conference please follow the link http://csp2015.ac.ru/files/book5x.pdf

  14. An Object-oriented Computer Code for Aircraft Engine Weight Estimation

    NASA Technical Reports Server (NTRS)

    Tong, Michael T.; Naylor, Bret A.

    2008-01-01

    Reliable engine-weight estimation at the conceptual design stage is critical to the development of new aircraft engines. It helps to identify the best engine concept amongst several candidates. At NASA Glenn (GRC), the Weight Analysis of Turbine Engines (WATE) computer code, originally developed by Boeing Aircraft, has been used to estimate the engine weight of various conceptual engine designs. The code, written in FORTRAN, was originally developed for NASA in 1979. Since then, substantial improvements have been made to the code to improve the weight calculations for most of the engine components. Most recently, to improve the maintainability and extensibility of WATE, the FORTRAN code has been converted into an object-oriented version. The conversion was done within NASA's NPSS (Numerical Propulsion System Simulation) framework. This enables WATE to interact seamlessly with the thermodynamic cycle model which provides component flow data such as airflows, temperatures, and pressures, etc. that are required for sizing the components and weight calculations. The tighter integration between the NPSS and WATE would greatly enhance system-level analysis and optimization capabilities. It also would facilitate the enhancement of the WATE code for next-generation aircraft and space propulsion systems. In this paper, the architecture of the object-oriented WATE code (or WATE++) is described. Both the FORTRAN and object-oriented versions of the code are employed to compute the dimensions and weight of a 300-passenger aircraft engine (GE90 class). Both versions of the code produce essentially identical results as should be the case. Keywords: NASA, aircraft engine, weight, object-oriented

  15. An Object-Oriented Computer Code for Aircraft Engine Weight Estimation

    NASA Technical Reports Server (NTRS)

    Tong, Michael T.; Naylor, Bret A.

    2009-01-01

    Reliable engine-weight estimation at the conceptual design stage is critical to the development of new aircraft engines. It helps to identify the best engine concept amongst several candidates. At NASA Glenn Research Center (GRC), the Weight Analysis of Turbine Engines (WATE) computer code, originally developed by Boeing Aircraft, has been used to estimate the engine weight of various conceptual engine designs. The code, written in FORTRAN, was originally developed for NASA in 1979. Since then, substantial improvements have been made to the code to improve the weight calculations for most of the engine components. Most recently, to improve the maintainability and extensibility of WATE, the FORTRAN code has been converted into an object-oriented version. The conversion was done within the NASA's NPSS (Numerical Propulsion System Simulation) framework. This enables WATE to interact seamlessly with the thermodynamic cycle model which provides component flow data such as airflows, temperatures, and pressures, etc., that are required for sizing the components and weight calculations. The tighter integration between the NPSS and WATE would greatly enhance system-level analysis and optimization capabilities. It also would facilitate the enhancement of the WATE code for next-generation aircraft and space propulsion systems. In this paper, the architecture of the object-oriented WATE code (or WATE++) is described. Both the FORTRAN and object-oriented versions of the code are employed to compute the dimensions and weight of a 300-passenger aircraft engine (GE90 class). Both versions of the code produce essentially identical results as should be the case.

  16. PREFACE: 25th IUPAP Conference on Computational Physics (CCP2013)

    NASA Astrophysics Data System (ADS)

    Shchur, Lev N.; Barash, Lev Yu

    2014-05-01

    Participants of the XXV IUPAP Conference on Computational Physics came to Moscow at the end of August during a hot period. It was not hot because of the summer; in fact, the weather was quite comfortable. It was a hot period for the scientific community in Russia, especially for scientists working for the Russian Academy of Sciences. Four years ago, the C20 IUPAP Commission on Computational Physics and the Computational Physics Group of the European Physical Society chose Moscow for several reasons. The first reason was connected to the high level and deep traditions of computational physics in Russia. It is known from experience at former CCP conferences that native participants contribute about half of the presentations, which form the solid scientific background of the conference, and a good level of domestic science makes the conference interesting and successful. The second reason was that for the last twenty years there had not been many IUPAP conferences in Russia, and it was time to open more venues for information exchange and to intensify scientific collaboration. Thirdly, it was the common opinion four years ago that the situation in Russia had become stable enough after the transition to a modern society, which took almost a quarter of a century. The conference preface is continued in the pdf.

  17. Application of advanced computational procedures for modeling solar-wind interactions with Venus: Theory and computer code

    NASA Technical Reports Server (NTRS)

    Stahara, S. S.; Klenke, D.; Trudinger, B. C.; Spreiter, J. R.

    1980-01-01

    Computational procedures are developed and applied to the prediction of solar wind interaction with nonmagnetic terrestrial planet atmospheres, with particular emphasis on Venus. The theoretical method is based on a single fluid, steady, dissipationless, magnetohydrodynamic continuum model, and is appropriate for the calculation of axisymmetric, supersonic, super-Alfvenic solar wind flow past terrestrial planets. The procedures, which consist of finite difference codes to determine the gasdynamic properties and a variety of special purpose codes to determine the frozen magnetic field, streamlines, contours, plots, etc. of the flow, are organized into one computational program. Theoretical results based upon these procedures are reported for a wide variety of solar wind conditions and ionopause obstacle shapes. Plasma and magnetic field comparisons in the ionosheath are also provided with actual spacecraft data obtained by the Pioneer Venus Orbiter.

  18. Computer architectures for computational physics work done by Computational Research and Technology Branch and Advanced Computational Concepts Group

    NASA Technical Reports Server (NTRS)

    1985-01-01

    Slides are reproduced that describe the importance of having high performance number crunching and graphics capability. They also indicate the types of research and development underway at Ames Research Center to ensure that, in the near term, Ames is a smart buyer and user, and in the long-term that Ames knows the best possible solutions for number crunching and graphics needs. The drivers for this research are real computational physics applications of interest to Ames and NASA. They are concerned with how to map the applications, and how to maximize the physics learned from the results of the calculations. The computer graphics activities are aimed at getting maximum information from the three-dimensional calculations by using the real time manipulation of three-dimensional data on the Silicon Graphics workstation. Work is underway on new algorithms that will permit the display of experimental results that are sparse and random, the same way that the dense and regular computed results are displayed.

  19. Assessment of three-dimensional inviscid codes and loss calculations for turbine aerodynamic computations

    NASA Technical Reports Server (NTRS)

    Povinelli, L. A.

    1984-01-01

    An assessment of several three-dimensional inviscid turbine aerodynamic computer codes and loss models used at the NASA Lewis Research Center is presented. Five flow situations are examined, for which both experimental data and computational results are available. The five flows form a basis for the evaluation of the computational procedures. It was concluded that stator flows may be calculated with a high degree of accuracy, whereas rotor flow fields are less accurately determined. Exploitation of contouring, leaning, bowing, and sweeping will require a three-dimensional viscous analysis technique.

  20. Effect of Physical Education Teachers' Computer Literacy on Technology Use in Physical Education

    ERIC Educational Resources Information Center

    Kretschmann, Rolf

    2015-01-01

    Teachers' computer literacy has been identified as a factor that determines their technology use in class. The aim of this study was to investigate the relationship between physical education (PE) teachers' computer literacy and their technology use in PE. The study group consisted of 57 high school level in-service PE teachers. A survey was used…

  1. Infrared imaging - A validation technique for computational fluid dynamics codes used in STOVL applications

    NASA Technical Reports Server (NTRS)

    Hardman, R. R.; Mahan, J. R.; Smith, M. H.; Gelhausen, P. A.; Van Dalsem, W. R.

    1991-01-01

    The need for a validation technique for computational fluid dynamics (CFD) codes in STOVL applications has led to research efforts to apply infrared thermal imaging techniques to visualize gaseous flow fields. Specifically, a heated, free-jet test facility was constructed. The gaseous flow field of the jet exhaust was characterized using an infrared imaging technique in the 2 to 5.6 micron wavelength band as well as conventional pitot tube and thermocouple methods. These infrared images are compared to computer-generated images using the equations of radiative exchange based on the temperature distribution in the jet exhaust measured with the thermocouple traverses. Temperature and velocity measurement techniques, infrared imaging, and the computer model of the infrared imaging technique are presented and discussed. From the study, it is concluded that infrared imaging techniques coupled with the radiative exchange equations applied to CFD models are a valid method to qualitatively verify CFD codes used in STOVL applications.

  2. Problems associated with application of a wellbore heat transmission computer code

    SciTech Connect

    Dash, Z.V.; Zyvoloski, G.A.

    1982-01-01

    An analysis of the discrepancies between actual temperature surveys and results obtained from a wellbore heat transmission computer code is presented for recent workover operations in well EE-2 at the Fenton Hill Hot Dry Rock Geothermal site. Several sources of error in modeling the thermal behavior of wellbores are considered. These are errors in the estimation of in-situ properties, particularly thermal conductivity, the failure to include frictional heating effects when high flow rates are involved, and error in reporting the flow rate history. These errors were also found to have a cumulative effect. A sensitivity analysis of the computed results to each error type is presented for countercurrent flow. It is concluded that all the errors considered can cause temperature discrepancies between measured and computed temperature. Wellbore codes should have provisions for variable thermal properties and frictional heating. In addition, modeling efforts should be coordinated with periodic temperature surveys so cumulative errors can be minimized.

  3. Analytical and experimental studies of ventilation systems subjected to simulated tornado conditions: Verification of the TVENT computer code

    SciTech Connect

    Martin, R.A.; Gregory, W.S.; Ricketts, C.I.; Smith, P.R.; Littleton, P.E.; Talbott, D.V.

    1988-04-01

    Analytical and experimental studies of ventilation systems have been conducted to verify the Los Alamos National Laboratory TVENT accident analysis computer code for simulated tornado conditions. This code was developed to be a user-friendly analysis tool for designers and regulatory personnel and was designed to predict pressure and flow transients in arbitrary ventilation systems. The experimental studies used two relatively simple, yet sensitive, physical systems designed using similitude analysis. These physical models were instrumented end-to-end for pressure and volumetric flow rate and then subjected to the worst credible tornado conditions using a special blowdown apparatus. We verified TVENT by showing that it successfully predicted our experimental results. By comparing experimental results from both physical models with TVENT results, we showed that we have derived the proper similitude relations (governed by compressibility effects) for all sizes of ventilation systems. As a by-product of our studies, we determined the need for fan speed variation modeling in TVENT. This modification was made and resulted in a significant improvement in our comparisons of analytical and experimental results.

  4. Why I think Computational Physics has been the most valuable part of my undergraduate physics education

    NASA Astrophysics Data System (ADS)

    Parsons, Matthew

    2015-04-01

    Computational physics is a rich and vibrant field in its own right, but often not given the attention that it should receive in the typical undergraduate physics curriculum. It appears that the partisan theorist vs. experimentalist view is still pervasive in academia, or at least still portrayed to students, while in fact there is a continuous spectrum of opportunities in between these two extremes. As a case study, I'll give my perspective as a graduating physics student with examples of computational coursework at Drexel University and research opportunities that this experience has led to.

  5. BRYNTRN: A baryon transport computer code, computation procedures and data base

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Townsend, Lawrence W.; Chun, Sang Y.; Buck, Warren W.; Khan, Ferdous; Cucinotta, Frank

    1988-01-01

    The development is described of an interaction data base and a numerical solution to the transport of baryons through arbitrary shield material based on a straight-ahead approximation of the Boltzmann equation. The code is most accurate for continuous energy boundary values but gives reasonable results for discrete spectra at the boundary with even a relatively coarse energy grid (30 points) and large spatial increments (1 cm in H2O).
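
    In its commonly quoted form, the straight-ahead approximation reduces the transport problem to one spatial variable per energy group. The notation below is the standard one for this class of codes and is given for orientation; BRYNTRN's interaction data base enters through the coefficients:

    \[
    \left[ \frac{\partial}{\partial x}
         - \frac{\partial}{\partial E}\, \tilde S_j(E)
         + \sigma_j(E) \right] \phi_j(x,E)
    = \sum_k \int_E^{\infty} \sigma_{jk}(E,E')\, \phi_k(x,E')\, dE',
    \]

    where \phi_j is the flux of type-j particles, \tilde S_j the stopping power (for charged particles), \sigma_j the total macroscopic cross section, and \sigma_{jk} the differential cross section for producing type-j particles from collisions of type-k particles.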

  6. Integrating Computational Chemistry into the Physical Chemistry Curriculum

    ERIC Educational Resources Information Center

    Johnson, Lewis E.; Engel, Thomas

    2011-01-01

    Relatively few undergraduate physical chemistry programs integrate molecular modeling into their quantum mechanics curriculum owing to concerns about limited access to computational facilities, the cost of software, and concerns about increasing the course material. However, modeling exercises can be integrated into an undergraduate course at a…

  7. Greek Undergraduate Physical Education Students' Basic Computer Skills

    ERIC Educational Resources Information Center

    Adamakis, Manolis; Zounhia, Katerina

    2013-01-01

    The purposes of this study were to determine how undergraduate physical education (PE) students feel about their level of competence concerning basic computer skills and to examine possible differences between groups (gender, specialization, high school graduation type, and high school direction). Although many students and educators believe…

  8. Future of computing technology in physics - the potentials and pitfalls

    SciTech Connect

    Brenner, A.E.

    1984-02-01

    The impact of the developments of modern digital computers is discussed, especially with respect to physics research in the future. The effects of large data processing capability and increasing rates at which data can be acquired and processed are considered. (GHT)

  9. User manual for INVICE 0.1-beta: a computer code for inverse analysis of isentropic compression experiments.

    SciTech Connect

    Davis, Jean-Paul

    2005-03-01

    INVICE (INVerse analysis of Isentropic Compression Experiments) is a FORTRAN computer code that implements the inverse finite-difference method to analyze velocity data from isentropic compression experiments. This report gives a brief description of the methods used and the options available in the first beta version of the code, as well as instructions for using the code.

  10. GAM-HEAT: A computer code to compute heat transfer in complex enclosures. Revision 2

    SciTech Connect

    Cooper, R.E.; Taylor, J.R.

    1992-12-01

    This report discusses the GAM_HEAT code which was developed for heat transfer analyses associated with postulated Double Ended Guillotine Break Loss Of Coolant Accidents (DEGB LOCA) resulting in a drained reactor vessel. In these analyses the gamma radiation resulting from fission product decay constitutes the primary source of energy as a function of time. This energy is deposited into the various reactor components and is re-radiated as thermal energy. The code accounts for all radiant heat exchanges within and leaving the reactor enclosure. The SRS reactors constitute complex radiant exchange enclosures since there are many assemblies of various types within the primary enclosure and most of the assemblies themselves constitute enclosures. GAM_HEAT accounts for this complexity by processing externally generated view factors and connectivity matrices as discussed below, and also accounts for convective, conductive, and advective heat exchanges. The code is structured such that it is applicable for many situations involving heat exchange between surfaces within a radiatively passive medium.
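
    The core radiant-exchange computation in enclosure codes of this kind can be sketched as a radiosity solve over the view-factor matrix. The following is a minimal gray-diffuse illustration with an assumed two-surface test case, not GAM_HEAT itself, which processes externally generated view factors and connectivity for far more complex enclosures:

```python
# Gray-diffuse enclosure radiosity sketch (illustrative only).
import numpy as np

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiosity(T, eps, F):
    """Solve J_i = eps_i*sigma*T_i^4 + (1 - eps_i) * sum_j F_ij * J_j."""
    A = np.eye(len(T)) - (1.0 - eps)[:, None] * F
    b = eps * SIGMA * T**4
    return np.linalg.solve(A, b)

# Two infinite parallel plates: each surface sees only the other.
T = np.array([1000.0, 300.0])                  # surface temperatures, K
eps = np.array([0.8, 0.8])                     # emissivities
F = np.array([[0.0, 1.0], [1.0, 0.0]])         # view-factor matrix
J = radiosity(T, eps, F)
q = eps / (1.0 - eps) * (SIGMA * T**4 - J)     # net flux per surface
```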

  11. GAM-HEAT: A computer code to compute heat transfer in complex enclosures

    SciTech Connect

    Cooper, R.E.; Taylor, J.R.

    1992-12-01

    This report discusses the GAM_HEAT code which was developed for heat transfer analyses associated with postulated Double Ended Guillotine Break Loss Of Coolant Accidents (DEGB LOCA) resulting in a drained reactor vessel. In these analyses the gamma radiation resulting from fission product decay constitutes the primary source of energy as a function of time. This energy is deposited into the various reactor components and is re-radiated as thermal energy. The code accounts for all radiant heat exchanges within and leaving the reactor enclosure. The SRS reactors constitute complex radiant exchange enclosures since there are many assemblies of various types within the primary enclosure and most of the assemblies themselves constitute enclosures. GAM_HEAT accounts for this complexity by processing externally generated view factors and connectivity matrices as discussed below, and also accounts for convective, conductive, and advective heat exchanges. The code is structured such that it is applicable for many situations involving heat exchange between surfaces within a radiatively passive medium.

  12. European Code against Cancer 4th Edition: Physical activity and cancer.

    PubMed

    Leitzmann, Michael; Powers, Hilary; Anderson, Annie S; Scoccianti, Chiara; Berrino, Franco; Boutron-Ruault, Marie-Christine; Cecchini, Michele; Espina, Carolina; Key, Timothy J; Norat, Teresa; Wiseman, Martin; Romieu, Isabelle

    2015-12-01

    Physical activity is a complex, multidimensional behavior, the precise measurement of which is challenging in free-living individuals. Nonetheless, representative survey data show that 35% of the European adult population is physically inactive. Inadequate levels of physical activity are disconcerting given substantial epidemiologic evidence showing that physical activity is associated with decreased risks of colon, endometrial, and breast cancers. For example, insufficient physical activity levels are thought to cause 9% of breast cancer cases and 10% of colon cancer cases in Europe. By comparison, the evidence for a beneficial effect of physical activity is less consistent for cancers of the lung, pancreas, ovary, prostate, kidney, and stomach. The biologic pathways underlying the association between physical activity and cancer risk are incompletely defined, but potential etiologic pathways include insulin resistance, growth factors, adipocytokines, steroid hormones, and immune function. In recent years, sedentary behavior has emerged as a potential independent determinant of cancer risk. In cancer survivors, physical activity has shown positive effects on body composition, physical fitness, quality of life, anxiety, and self-esteem. Physical activity may also carry benefits regarding cancer survival, but more evidence linking increased physical activity to prolonged cancer survival is needed. Future studies using new technologies - such as accelerometers and e-tools - will contribute to improved assessments of physical activity. Such advancements in physical activity measurement will help clarify the relationship between physical activity and cancer risk and survival. Taking the overall existing evidence into account, the fourth edition of the European Code against Cancer recommends that people be physically active in everyday life and limit the time spent sitting. PMID:26187327

  13. HADOC: a computer code for calculation of external and inhalation doses from acute radionuclide releases

    SciTech Connect

    Strenge, D.L.; Peloquin, R.A.

    1981-04-01

    The computer code HADOC (Hanford Acute Dose Calculations) is described and instructions for its use are presented. The code calculates external dose from air submersion and inhalation doses following acute radionuclide releases. Atmospheric dispersion is calculated using the Hanford model with options to determine maximum conditions. Building wake effects and terrain variation may also be considered. Doses are calculated using dose conversion factors supplied in a data library. Doses are reported for one- and fifty-year dose commitment periods for the maximum individual and the regional population (within 50 miles). The fractional contributions to dose by radionuclide and exposure mode are also printed if requested.
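
    For context, straight-line atmospheric dispersion models of the kind referenced here are typically built on the Gaussian plume form, shown below in standard notation (the abstract does not spell out HADOC's exact formulation):

    \[
    \chi(x,y,z) = \frac{Q}{2\pi\,\sigma_y(x)\,\sigma_z(x)\,\bar u}
      \exp\!\left(-\frac{y^2}{2\sigma_y^2}\right)
      \left[ \exp\!\left(-\frac{(z-H)^2}{2\sigma_z^2}\right)
           + \exp\!\left(-\frac{(z+H)^2}{2\sigma_z^2}\right) \right],
    \]

    where Q is the release rate, \bar u the mean wind speed, H the effective release height, and \sigma_y, \sigma_z empirical dispersion coefficients; doses then follow by folding the air concentration \chi with the dose conversion factors.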

  14. The MELTSPREAD-1 computer code for the analysis of transient spreading in containments

    SciTech Connect

    Farmer, M.T.; Sienicki, J.J.; Spencer, B.W.

    1990-01-01

    A one-dimensional, multicell, Eulerian finite difference computer code (MELTSPREAD-1) has been developed to provide an improved prediction of the gravity-driven spreading and thermal interactions of molten corium flowing over a concrete or steel surface. In this paper, the modeling incorporated into the code is described and the spreading models are benchmarked against a simple "dam break" problem as well as water simulant spreading data obtained in a scaled apparatus of the Mk I containment. Results are also presented for a scoping calculation of the spreading behavior and shell thermal response in the full scale Mk I system following vessel meltthrough. 24 refs., 15 figs.
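
    The "dam break" benchmark is convenient for spreading solvers because it has a closed-form (Ritter) solution for an instantaneous release over a frictionless dry bed, stated here in standard shallow-water notation (the paper's benchmark setup may differ in detail):

    \[
    h(x,t) = \frac{1}{9g}\left( 2\sqrt{g h_0} - \frac{x}{t} \right)^{2},
    \qquad
    u(x,t) = \frac{2}{3}\left( \frac{x}{t} + \sqrt{g h_0} \right),
    \qquad
    -\sqrt{g h_0}\, t \le x \le 2\sqrt{g h_0}\, t,
    \]

    where h_0 is the initial reservoir depth behind a dam located at x = 0.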

  15. Comparison of computer codes for calculating dynamic loads in wind turbines

    NASA Technical Reports Server (NTRS)

    Spera, D. A.

    1977-01-01

    Seven computer codes for analyzing performance and loads in large, horizontal-axis wind turbines were used to calculate blade bending moment loads for two operational conditions of the 100 kW Mod-0 wind turbine. Results are compared with test data on the basis of cyclic loads, peak loads, and harmonic contents. Four of the seven codes include rotor-tower interaction and three are limited to rotor analysis. With a few exceptions, all calculated loads were within 25% of nominal test data.

  16. Comparison of computer codes for calculating dynamic loads in wind turbines

    NASA Technical Reports Server (NTRS)

    Spera, D. A.

    1977-01-01

    Seven computer codes for analyzing performance and loads in large, horizontal axis wind turbines were used to calculate blade bending moment loads for two operational conditions of the 100 kW Mod-0 wind turbine. Results were compared with test data on the basis of cyclic loads, peak loads, and harmonic contents. Four of the seven codes include rotor-tower interaction and three were limited to rotor analysis. With a few exceptions, all calculated loads were within 25 percent of nominal test data.

  17. Development of a new generation solid rocket motor ignition computer code

    NASA Technical Reports Server (NTRS)

    Foster, Winfred A., Jr.; Jenkins, Rhonald M.; Ciucci, Alessandro; Johnson, Shelby D.

    1994-01-01

    This report presents the results of experimental and numerical investigations of the flow field in the head-end star grain slots of the Space Shuttle Solid Rocket Motor. This work provided the basis for the development of an improved solid rocket motor ignition transient code which is also described in this report. The correlation between the experimental and numerical results is excellent and provides a firm basis for the development of a fully three-dimensional solid rocket motor ignition transient computer code.

  18. Experimental assessment of computer codes used for safety analysis of integral reactors

    SciTech Connect

    Falkov, A.A.; Kuul, V.S.; Samoilov, O.B.

    1995-09-01

    Peculiarities of integral reactor thermohydraulics in accidents are associated with the presence of noncondensable gas in the built-in pressurizer, the absence of pumped ECCS, the use of a guard vessel for LOCA localisation, and passive RHRS through in-reactor HXs. These features defined the main trends in experimental investigations and verification efforts for the computer codes applied. The paper reviews briefly the performed experimental investigations of the thermohydraulics of AST-500 and VPBER600-type integral reactors. The characteristics of the UROVEN/MB-3 code for LOCA analysis in integral reactors and the results of its verification are given. The assessment of RELAP5/mod3 applicability for accident analysis in integral reactors is presented.

  19. PREFACE: IUPAP C20 Conference on Computational Physics (CCP 2011)

    NASA Astrophysics Data System (ADS)

    Troparevsky, Claudia; Stocks, George Malcolm

    2012-12-01

    Increasingly, computational physics stands alongside experiment and theory as an integral part of the modern approach to solving the great scientific challenges of the day on all scales - from cosmology and astrophysics, through climate science, to materials physics, and the fundamental structure of matter. Computational physics touches aspects of science and technology with direct relevance to our everyday lives, such as communication technologies and securing a clean and efficient energy future. This volume of Journal of Physics: Conference Series contains the proceedings of the scientific contributions presented at the 23rd Conference on Computational Physics held in Gatlinburg, Tennessee, USA, in November 2011. The annual Conferences on Computational Physics (CCP) are dedicated to presenting an overview of the most recent developments and opportunities in computational physics across a broad range of topical areas and from around the world. The CCP series has been in existence for more than 20 years, serving as a lively forum for computational physicists. The topics covered by this conference were: Materials/Condensed Matter Theory and Nanoscience, Strongly Correlated Systems and Quantum Phase Transitions, Quantum Chemistry and Atomic Physics, Quantum Chromodynamics, Astrophysics, Plasma Physics, Nuclear and High Energy Physics, Complex Systems: Chaos and Statistical Physics, Macroscopic Transport and Mesoscopic Methods, Biological Physics and Soft Materials, Supercomputing and Computational Physics Teaching, Computational Physics and Sustainable Energy. We would like to take this opportunity to thank our sponsors: International Union of Pure and Applied Physics (IUPAP), IUPAP Commission on Computational Physics (C20), American Physical Society Division of Computational Physics (APS-DCOMP), Oak Ridge National Laboratory (ORNL), Center for Defect Physics (CDP), the University of Tennessee (UT)/ORNL Joint Institute for Computational Sciences (JICS) and Cray, Inc

  20. CAST2D: A finite element computer code for casting process modeling

    SciTech Connect

    Shapiro, A.B.; Hallquist, J.O.

    1991-10-01

    CAST2D is a coupled thermal-stress finite element computer code for casting process modeling. This code can be used to predict the final shape and stress state of cast parts. CAST2D couples the heat transfer code TOPAZ2D and solid mechanics code NIKE2D. CAST2D has the following features in addition to all the features contained in the TOPAZ2D and NIKE2D codes: (1) a general purpose thermal-mechanical interface algorithm (i.e., slide line) that calculates the thermal contact resistance across the part-mold interface as a function of interface pressure and gap opening; (2) a new phase change algorithm, the delta function method, that is a robust method for materials undergoing isothermal phase change; (3) a constitutive model that transitions between fluid behavior and solid behavior, and accounts for material volume change on phase change; and (4) a modified plot file data base that allows plotting of thermal variables (e.g., temperature, heat flux) on the deformed geometry. Although the code is specialized for casting modeling, it can be used for other thermal stress problems (e.g., metal forming).

  1. Three-dimensional radiation dose mapping with the TORT computer code

    SciTech Connect

    Slater, C.O.; Pace, J.V. III; Childs, R.L.; Haire, M.J. ); Koyama, T. )

    1991-01-01

    The Consolidated Fuel Reprocessing Program (CFRP) at Oak Ridge National Laboratory (ORNL) has performed radiation shielding studies in support of various facility designs for many years. Computer codes employing the point-kernel method have been used, and the accuracy of these codes is within acceptable limits. However, to further improve the accuracy and to calculate dose at a larger number of locations, a higher order method is desired, even for analyses performed in the early stages of facility design. Consequently, the three-dimensional discrete ordinates transport code TORT, developed at ORNL in the mid-1980s, was selected to examine in detail the dose received at equipment locations. The capabilities of the code have been previously reported. Recently, the Power Reactor and Nuclear Fuel Development Corporation in Japan and the US Department of Energy have used the TORT code as part of a collaborative agreement to jointly develop breeder reactor fuel reprocessing technology. In particular, CFRP used the TORT code to estimate radiation dose levels within the main process cell for a conceptual plant design and to establish process equipment lifetimes. The results reported in this paper are for a conceptual plant design that included the mechanical head end (i.e., the disassembly and shear machines), solvent extraction equipment, and miscellaneous process support equipment.

  2. Physical aspects of computing the flow of a viscous fluid

    NASA Technical Reports Server (NTRS)

    Mehta, U. B.

    1984-01-01

    One of the main themes in fluid dynamics at present and in the future is going to be computational fluid dynamics with the primary focus on the determination of drag, flow separation, vortex flows, and unsteady flows. A computation of the flow of a viscous fluid requires an understanding and consideration of the physical aspects of the flow. This is done by identifying the flow regimes and the scales of fluid motion, and the sources of vorticity. Discussions of flow regimes deal with conditions of incompressibility, transitional and turbulent flows, Navier-Stokes and non-Navier-Stokes regimes, shock waves, and strain fields. Discussions of the scales of fluid motion consider transitional and turbulent flows, thin- and slender-shear layers, triple- and four-deck regions, viscous-inviscid interactions, shock waves, strain rates, and temporal scales. In addition, the significance and generation of vorticity are discussed. These physical aspects mainly guide computations of the flow of a viscous fluid.

  3. A uniform algebraically-based approach to computational physics and efficient programming

    NASA Astrophysics Data System (ADS)

    Raynolds, James; Mullin, Lenore

    2007-03-01

    We present an approach to computational physics in which a common formalism is used both to express the physical problem as well as to describe the underlying details of how computation is realized on arbitrary multiprocessor/memory computer architectures. This formalism is the embodiment of a generalized algebra of multi-dimensional arrays (A Mathematics of Arrays) and an efficient computational implementation is obtained through the composition of array indices (the psi-calculus) of algorithms defined using matrices, tensors, and arrays in general. The power of this approach arises from the fact that multiple computational steps (e.g. Fourier Transform followed by convolution, etc.) can be algebraically composed and reduced to a simplified expression (i.e., an Operational Normal Form) that, when directly translated into computer code, can be mathematically proven to be the most efficient implementation with the least number of temporary variables, etc. This approach will be illustrated in the context of a cache-optimized FFT that outperforms or is competitive with established library routines: ESSL, FFTW, IMSL, NAG.
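
    The flavor of index composition can be shown with a toy example. The sketch below (illustrative Python/NumPy, not the authors' formalism or code) composes a row reversal followed by a transpose into a single index map, eliminating the intermediate array in the way psi-reduction eliminates temporaries:

```python
# Composing reverse-then-transpose into one index map (illustrative).
import numpy as np

A = np.arange(12).reshape(3, 4)

# Naive: two steps with an intermediate result.
tmp = A[::-1, :]        # reverse rows: tmp[r, c] = A[n_rows-1-r, c]
out1 = tmp.T            # transpose:    out1[i, j] = tmp[j, i]

# Composed: index algebra gives out[i, j] = A[n_rows-1-j, i] directly.
n_rows = A.shape[0]
out2 = np.empty((A.shape[1], A.shape[0]), dtype=A.dtype)
for i in range(A.shape[1]):
    for j in range(A.shape[0]):
        out2[i, j] = A[n_rows - 1 - j, i]

assert (out1 == out2).all()
```

    (NumPy happens to return views for these particular operations, but the composed index expression illustrates what the psi-calculus derives mechanically for arbitrary chains of array operations.)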

  4. High-performance computational fluid dynamics: a custom-code approach

    NASA Astrophysics Data System (ADS)

    Fannon, James; Loiseau, Jean-Christophe; Valluri, Prashant; Bethune, Iain; Náraigh, Lennon Ó.

    2016-07-01

    We introduce a modified and simplified version of the pre-existing fully parallelized three-dimensional Navier–Stokes flow solver known as TPLS. We demonstrate how the simplified version can be used as a pedagogical tool for the study of computational fluid dynamics (CFDs) and parallel computing. TPLS is at its heart a two-phase flow solver, and uses calls to a range of external libraries to accelerate its performance. However, in the present context we narrow the focus of the study to basic hydrodynamics and parallel computing techniques, and the code is therefore simplified and modified to simulate pressure-driven single-phase flow in a channel, using only relatively simple Fortran 90 code with MPI parallelization, but no calls to any other external libraries. The modified code is analysed in order to both validate its accuracy and investigate its scalability up to 1000 CPU cores. Simulations are performed for several benchmark cases in pressure-driven channel flow, including a turbulent simulation, wherein the turbulence is incorporated via the large-eddy simulation technique. The work may be of use to advanced undergraduate and graduate students as an introductory study in CFDs, while also providing insight for those interested in more general aspects of high-performance computing.
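
    A standard validation for a pressure-driven channel solver of this kind is the analytic laminar profile. The sketch below (illustrative names and values, not TPLS input) checks a computed maximum velocity against the Poiseuille solution:

```python
# Poiseuille benchmark for pressure-driven laminar channel flow
# (illustrative sketch; TPLS itself is Fortran 90 with MPI).
import numpy as np

def poiseuille(y, dpdx, mu, h):
    """u(y) = (-dp/dx)/(2 mu) * y * (h - y), plates at y = 0 and y = h."""
    return (-dpdx) / (2.0 * mu) * y * (h - y)

y = np.linspace(0.0, 1.0, 65)
u = poiseuille(y, dpdx=-1.0, mu=1.0e-2, h=1.0)
u_max = u.max()          # analytic maximum: (-dp/dx) h^2 / (8 mu) = 12.5
assert abs(u_max - 12.5) < 1e-9
```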

  6. Physics Based Model for Cryogenic Chilldown and Loading. Part IV: Code Structure

    NASA Technical Reports Server (NTRS)

    Luchinsky, D. G.; Smelyanskiy, V. N.; Brown, B.

    2014-01-01

    This is the fourth report in a series of technical reports that describe the application of a separated two-phase flow model to the cryogenic loading operation. In this report we present the structure of the code. The code consists of five major modules: (1) geometry module; (2) solver; (3) material properties; (4) correlations; and finally (5) stability control module. The two key modules - solver and correlations - are further divided into a number of submodules. Most of the physics and knowledge databases related to the properties of cryogenic two-phase flow are included in the cryogenic correlations module. The functional form of those correlations is not well established and is a subject of extensive research. Multiple parametric forms for various correlations are currently available. Some of them are included in the correlations module, as will be described in detail in a separate technical report. Here we describe the overall structure of the code and focus on the details of the solver and stability control modules.
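
    A hypothetical skeleton of the five-module decomposition might look as follows (all class and method names are ours, invented for illustration; the report describes the real submodule structure).

        # Sketch only: the report's modules rendered as placeholder classes.
        class Geometry:            # (1) pipe network: sections, areas, elevations
            ...

        class MaterialProperties:  # (3) cryogen property and equation-of-state data
            ...

        class Correlations:        # (4) two-phase closures; parametric forms under study
            ...

        class StabilityControl:    # (5) time-step limiting and stiffness handling
            ...

        class Solver:              # (2) advances the separated two-phase flow equations
            def __init__(self, geom, props, correl, stab):
                self.geom, self.props = geom, props
                self.correl, self.stab = correl, stab

            def step(self, state, dt):
                raise NotImplementedError  # evaluate closures, update state, check stability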

  7. Verification, Validation, and Solution Quality in Computational Physics: CFD Methods Applied to Ice Sheet Physics

    NASA Technical Reports Server (NTRS)

    Thompson, David E.

    2005-01-01

    Procedures and methods for verification of coding algebra and for validation of models and calculations used in the aerospace computational fluid dynamics (CFD) community would be efficacious if used by the glacier dynamics modeling community. This paper presents some of those methods, and how they might be applied to uncertainty management supporting code verification and model validation for glacier dynamics. The similarities and differences between their use in CFD analysis and the proposed application of these methods to glacier modeling are discussed. After establishing sources of uncertainty and methods for code verification, the paper looks at a representative sampling of verification and validation efforts that are underway in the glacier modeling community, and establishes a context for these within an overall solution quality assessment. Finally, a vision of a new information architecture and interactive scientific interface is introduced and advocated.
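
    One concrete verification tool from the CFD toolbox referred to here is the observed order of accuracy, computed from solutions on three systematically refined grids. The sketch below (ours) applies the standard formula p = ln((f3 - f2)/(f2 - f1))/ln(r) to synthetic second-order data.

        import numpy as np

        # Observed order of accuracy from solutions on grids with spacings
        # h, r*h, r**2*h (f1 finest, f3 coarsest).
        def observed_order(f1, f2, f3, r=2.0):
            return np.log((f3 - f2) / (f2 - f1)) / np.log(r)

        # Synthetic data from a second-order scheme: f(h) = exact + C*h**2.
        exact, C = 1.0, 0.1
        f1, f2, f3 = (exact + C * h**2 for h in (0.01, 0.02, 0.04))
        print(observed_order(f1, f2, f3))   # -> 2.0, recovering the formal order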

  8. Computer code simulations of the formation of Meteor Crater, Arizona - Calculations MC-1 and MC-2

    NASA Technical Reports Server (NTRS)

    Roddy, D. J.; Schuster, S. H.; Kreyenhagen, K. N.; Orphal, D. L.

    1980-01-01

    It has been widely accepted that hypervelocity impact processes play a major role in the evolution of the terrestrial planets and satellites. In connection with the development of quantitative methods for the description of impact cratering, it was found that the results provided by two-dimensional finite-difference computer codes are greatly improved when initial impact conditions can be defined and when the numerical results can be tested against field and laboratory data. In order to address this problem, a numerical code study of the formation of Meteor (Barringer) Crater, Arizona, has been undertaken. A description is presented of the major results from the first two code calculations, MC-1 and MC-2, that have been completed for Meteor Crater. Both calculations used an iron meteorite with a kinetic energy of 3.8 megatons. Calculation MC-1 had an impact velocity of 25 km/sec and MC-2 had an impact velocity of 15 km/sec.
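
    A back-of-envelope check (ours, not from the paper) shows what holding the kinetic energy fixed at 3.8 megatons implies for the impactor mass at the two velocities, taking 1 Mt TNT = 4.184e15 J and m = 2*E/v**2.

        # Implied impactor mass at fixed kinetic energy.
        E = 3.8 * 4.184e15                     # joules
        for v_km_s in (25.0, 15.0):
            v = v_km_s * 1.0e3                 # m/s
            m = 2.0 * E / v**2
            print(f"{v_km_s:4.0f} km/s -> {m:.2e} kg")
        # 25 km/s -> ~5.1e7 kg; 15 km/s -> ~1.4e8 kg (slower body must be heavier)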

  9. Enhancement of the Probabilistic CEramic Matrix Composite ANalyzer (PCEMCAN) Computer Code

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin

    2000-01-01

    This report is the final technical report for Order No. C-78019-J, entitled "Enhancement of the Probabilistic Ceramic Matrix Composite Analyzer (PCEMCAN) Computer Code." The scope of the enhancement relates to including the probabilistic evaluation of the D-Matrix terms in the MAT2 and MAT9 material property cards (available in the CEMCAN code) for MSC/NASTRAN. Technical activities performed during the period June 1, 1999 through September 3, 1999 are summarized, and the final version of the enhanced PCEMCAN code and revisions to the User's Manual are delivered with this report. The performed activities were discussed with the NASA Project Manager during the performance period. The enhanced capabilities have been demonstrated using sample problems.

  10. WOLF: a computer code package for the calculation of ion beam trajectories

    SciTech Connect

    Vogel, D.L.

    1985-10-01

    The WOLF code solves Poisson's equation within a user-defined problem boundary of arbitrary shape. The code is compatible with ANSI FORTRAN and uses a two-dimensional Cartesian coordinate geometry represented on a triangular lattice. The vacuum electric fields and equipotential lines are calculated for the input problem. The user may then introduce a series of emitters from which particles of different charge-to-mass ratios and initial energies can originate. These non-relativistic particles are then traced by WOLF through the user-defined region. Effects of ion and electron space charge are included in the calculation. A subprogram PISA forms part of this code and enables optimization of various aspects of the problem. The WOLF package also allows detailed graphics analysis of the computed results to be performed.
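
    The core task, solving Poisson's equation in a bounded region, reduces in the vacuum case to Laplace's equation. The sketch below (ours; WOLF itself uses a triangular lattice, arbitrary boundaries, and space charge) relaxes it with the Jacobi method on a square grid.

        import numpy as np

        n = 65
        phi = np.zeros((n, n))
        phi[0, :] = 1.0                          # one electrode at 1 V, other walls grounded

        for _ in range(5000):                    # relax until the interior converges
            phi[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1] +
                                      phi[1:-1, 2:] + phi[1:-1, :-2])

        # equipotential lines can now be contoured from phi; particle
        # trajectories would be traced through the field E = -grad(phi)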

  11. Role asymmetry and code transmission in signaling games: an experimental and computational investigation.

    PubMed

    Moreno, Maggie; Baggio, Giosuè

    2015-07-01

    In signaling games, a sender has private access to a state of affairs and uses a signal to inform a receiver about that state. If no common association of signals and states is initially available, sender and receiver must coordinate to develop one. How do players divide coordination labor? We show experimentally that, if players switch roles at each communication round, coordination labor is shared. However, in games with fixed roles, coordination labor is divided: receivers adjust their mappings more frequently, whereas senders maintain the initial code, which is transmitted to receivers and becomes the common code. In a series of computer simulations, the player and role asymmetries observed experimentally were accounted for by a model in which the receiver in the first signaling round has a higher chance of adjusting its code than its partner. From this basic division of labor among players, certain properties of role asymmetry, in particular correlations with game complexity, are seen to follow.
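
    The asymmetric-adjustment idea lends itself to a compact simulation. The toy version below (ours; the probabilities are invented placeholders, not the paper's fitted values) re-maps the receiver with high probability and the sender with low probability after each failed round, so the sender's initial code tends to become the common code.

        import random

        random.seed(1)
        STATES = SIGNALS = range(3)
        p_send, p_recv = 0.05, 0.6
        sender = {s: random.choice(SIGNALS) for s in STATES}
        receiver = {g: random.choice(STATES) for g in SIGNALS}

        for _ in range(2000):
            state = random.choice(STATES)
            signal = sender[state]
            if receiver[signal] != state:              # coordination failure
                if random.random() < p_recv:
                    receiver[signal] = state           # receiver adapts to the sender
                if random.random() < p_send:
                    sender[state] = random.choice(SIGNALS)

        print(sum(receiver[sender[s]] == s for s in STATES), "of 3 states coordinated")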

  12. A Sample of NASA Langley Unsteady Pressure Experiments for Computational Aerodynamics Code Evaluation

    NASA Technical Reports Server (NTRS)

    Schuster, David M.; Scott, Robert C.; Bartels, Robert E.; Edwards, John W.; Bennett, Robert M.

    2000-01-01

    As computational fluid dynamics methods mature, code development is rapidly transitioning from prediction of steady flowfields to unsteady flows. This change in emphasis offers a number of new challenges to the research community, not the least of which is obtaining detailed, accurate unsteady experimental data with which to evaluate new methods. Researchers at NASA Langley Research Center (LaRC) have been actively measuring unsteady pressure distributions for nearly 40 years. Over the last 20 years, these measurements have focused on developing high-quality datasets for use in code evaluation. This paper provides a sample of unsteady pressure measurements obtained by LaRC and available for government, university, and industry researchers to evaluate new and existing unsteady aerodynamic analysis methods. A number of cases are highlighted and discussed with attention focused on the unique character of the individual datasets and their perceived usefulness for code evaluation. Ongoing LaRC research in this area is also presented.

  13. MULTI-IFE-A one-dimensional computer code for Inertial Fusion Energy (IFE) target simulations

    NASA Astrophysics Data System (ADS)

    Ramis, R.; Meyer-ter-Vehn, J.

    2016-06-01

    The code MULTI-IFE is a numerical tool devoted to the study of Inertial Fusion Energy (IFE) microcapsules. It includes the relevant physics for the implosion and thermonuclear ignition and burning: hydrodynamics of two-component plasmas (ions and electrons), three-dimensional laser light ray-tracing, thermal diffusion, multigroup radiation transport, deuterium-tritium burning, and alpha particle diffusion. The corresponding differential equations are discretized in spherical one-dimensional Lagrangian coordinates. Two typical application examples, a high-gain laser-driven capsule and a low-gain radiation-driven marginally igniting capsule, are discussed. In addition to phenomena relevant for IFE, the code also includes components (planar and cylindrical geometries, transport coefficients at low temperature, explicit treatment of Maxwell's equations) that extend its range of applicability to laser-matter interaction at moderate intensities (<10^16 W cm^-2). The source code design has been kept simple and structured, with the aim of encouraging users' modifications for specialized purposes.
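
    To illustrate the discretization style, the sketch below (ours; it omits artificial viscosity, transport, radiation, burn, and everything else MULTI-IFE actually treats) advances a bare one-dimensional planar Lagrangian hydrodynamics step in which the mesh nodes move with the fluid.

        import numpy as np

        gamma = 5.0 / 3.0
        x = np.linspace(0.0, 1.0, 101)                 # node positions
        u = np.zeros_like(x)                           # node velocities
        p = np.where(np.arange(100) < 50, 1.0, 0.1)    # shock-tube-like cell pressures
        m = 1.0 * np.diff(x)                           # fixed cell masses (rho0 = 1)
        dt = 1.0e-4

        for _ in range(500):
            # interior nodes accelerate under the pressure jump across them
            u[1:-1] -= dt * (p[1:] - p[:-1]) / (0.5 * (m[1:] + m[:-1]))
            V_old = np.diff(x)
            x += dt * u                                # Lagrangian mesh motion
            V_new = np.diff(x)
            p *= (V_old / V_new) ** gamma              # isentropic pressure update
            rho = m / V_new                            # mass conservation per cell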

  14. An Object-Oriented Network-Centric Software Architecture for Physical Computing

    NASA Astrophysics Data System (ADS)

    Palmer, Richard

    1997-08-01

    Recent developments in object-oriented computer languages and infrastructure such as the Internet, Web browsers, and the like provide an opportunity to define a more productive computational environment for scientific programming that is based more closely on the underlying mathematics describing physics than traditional programming languages such as FORTRAN or C++. In this talk I describe an object-oriented software architecture for representing physical problems that includes classes for such common mathematical objects as geometry, boundary conditions, partial differential and integral equations, discretization and numerical solution methods, etc. In practice, a scientific program written using this architecture looks remarkably like the mathematics used to understand the problem, is typically an order of magnitude smaller than traditional FORTRAN or C++ codes, and is hence easier to understand, debug, describe, etc. All objects in this architecture are ``network-enabled,'' which means that components of a software solution to a physical problem can be transparently loaded from anywhere on the Internet or other global network. The architecture is expressed as an ``API,'' or application programmer's interface specification, with reference embeddings in Java, Python, and C++. A C++ class library for an early version of this API has been implemented for machines ranging from PCs to the IBM SP2, meaning that identical codes run on all architectures.
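
    A minimal sketch of such a math-mirroring class design, with all names invented by us rather than taken from the API described in the talk, might read:

        class Interval:
            def __init__(self, a, b):
                self.a, self.b = a, b

        class BoundaryCondition:
            def __init__(self, where, value):
                self.where, self.value = where, value

        class PDE:
            """A problem statement that reads like the mathematics itself."""
            def __init__(self, operator, source, domain, bcs):
                self.operator, self.source = operator, source
                self.domain, self.bcs = domain, bcs

        class Discretization:
            def __init__(self, pde, n):
                self.pde, self.n = pde, n
            def solve(self):
                raise NotImplementedError  # a concrete numerical method plugs in here

        poisson = PDE(operator="laplacian", source=lambda x: 1.0,
                      domain=Interval(0.0, 1.0),
                      bcs=[BoundaryCondition(0.0, 0.0), BoundaryCondition(1.0, 0.0)])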

  15. Reduced-Order Modeling: New Approaches for Computational Physics

    NASA Technical Reports Server (NTRS)

    Beran, Philip S.; Silva, Walter A.

    2001-01-01

    In this paper, we review the development of new reduced-order modeling techniques and discuss their applicability to various problems in computational physics. Emphasis is given to methods based on Volterra series representations and the proper orthogonal decomposition. Results are reported for different nonlinear systems to provide clear examples of the construction and use of reduced-order models, particularly in the multi-disciplinary field of computational aeroelasticity. Unsteady aerodynamic and aeroelastic behaviors of two-dimensional and three-dimensional geometries are described. Large increases in computational efficiency are obtained through the use of reduced-order models, thereby justifying the initial computational expense of constructing these models and motivating their use for multi-disciplinary design analysis.
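
    The proper orthogonal decomposition step at the heart of many such reduced-order models is compactly expressed via the SVD. The sketch below (ours, on synthetic data) extracts a rank-r POD basis from a snapshot matrix and measures the reconstruction error.

        import numpy as np

        rng = np.random.default_rng(0)
        n, m, r = 1000, 50, 5                   # state size, snapshots, kept modes
        X = rng.standard_normal((n, 3)) @ rng.standard_normal((3, m))  # rank-3 data

        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        Phi = U[:, :r]                          # POD basis (n x r)
        a = Phi.T @ X                           # reduced coordinates (r x m)
        X_rom = Phi @ a                         # rank-r reconstruction
        print(np.linalg.norm(X - X_rom) / np.linalg.norm(X))   # ~0: r exceeds the rank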

  16. Validation of the transportation computer codes HIGHWAY, INTERLINE, RADTRAN 4, and RISKIND

    SciTech Connect

    Maheras, S.J.; Pippen, H.K.

    1995-05-01

    The computer codes HIGHWAY, INTERLINE, RADTRAN 4, and RISKIND were used to estimate radiation doses from the transportation of radioactive material in the Department of Energy Programmatic Spent Nuclear Fuel Management and Idaho National Engineering Laboratory Environmental Restoration and Waste Management Programs Environmental Impact Statement. HIGHWAY and INTERLINE were used to estimate transportation routes for truck and rail shipments, respectively. RADTRAN 4 was used to estimate collective doses from incident-free transportation and the risk (probability × consequence) from transportation accidents. RISKIND was used to estimate incident-free radiation doses for maximally exposed individuals and the consequences from reasonably foreseeable transportation accidents. The purpose of this analysis is to validate the estimates made by these computer codes; critiques of the conceptual models used in RADTRAN 4 are also discussed. Validation is defined as ``the test and evaluation of the completed software to ensure compliance with software requirements.'' In this analysis, validation means that the differences between the estimates generated by these codes and independent observations are small (i.e., within the acceptance criterion established for the validation analysis). In some cases, the independent observations used in the validation were measurements; in other cases, they were generated using hand calculations. The results of the validation analyses performed for HIGHWAY, INTERLINE, RADTRAN 4, and RISKIND show that the differences between the estimates generated using the computer codes and independent observations were small. Based on the acceptance criterion established for the validation analyses, the codes yielded acceptable results; in all cases the estimates met the requirements for successful validation.
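
    The validation logic itself is simple to state. In the sketch below (ours; the 5% criterion is an invented placeholder, not the analysis' actual acceptance criterion), a code estimate passes if it differs from the independent observation by less than the criterion.

        def validate(code_estimate, independent_obs, criterion=0.05):
            """Pass if the relative difference is within the acceptance criterion."""
            return abs(code_estimate - independent_obs) / abs(independent_obs) <= criterion

        print(validate(1.02e-4, 1.00e-4))   # True: a 2% difference passes at 5%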

  17. Opacity calculations for ICF target physics using the ABAKO/RAPCAL code

    NASA Astrophysics Data System (ADS)

    Mínguez, E.; Florido, R.; Rodriguez, R.; Gil, J. M.; Rubiano, J. G.; Mendoz, M. A.; Suárez, D.; Martel, P.

    2010-08-01

    In this work we present a set of atomic models (called ABAKO/RAPCAL) and its validation against experiments and against other NLTE models. We consider that our code permits plasma diagnosis and the determination of opacity data. A review of calculations and simulations for the validation of this set is presented. As an interesting product of these calculations, we can obtain accurate analytical formulas for Rosseland and Planck mean opacities. These formulas are useful as input data in hydrodynamic simulations of targets where the computational task is so demanding that in-line computation with sophisticated opacity codes is prohibitive. Analytical opacities for several Z-plasmas are presented in this work.
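
    For reference, the two means named here have standard textbook definitions: the Planck mean weights the opacity by the Planck function B_nu, while the Rosseland mean is a harmonic-type mean weighted by dB_nu/dT. The sketch below (ours; the kappa(nu) profile is made up) evaluates both by direct quadrature.

        import numpy as np

        #   Planck:    k_P   = Int(kappa * B_nu d nu)      / Int(B_nu d nu)
        #   Rosseland: 1/k_R = Int((1/kappa) * dB/dT d nu) / Int(dB/dT d nu)
        h, kB, T = 6.626e-34, 1.381e-23, 1.0e6           # SI units, T = 1 MK
        nu = np.linspace(1e14, 1e17, 200000)             # uniform grid: d nu cancels
        x = h * nu / (kB * T)

        B = nu**3 / np.expm1(x)                          # Planck function, constants dropped
        dBdT = nu**4 * np.exp(x) / np.expm1(x)**2        # dB/dT, constants dropped (they cancel)
        kappa = 1.0 + 1.0e3 / (1.0 + (nu / 1e15)**2)     # toy opacity, cm^2/g

        k_planck = (kappa * B).sum() / B.sum()
        k_ross = dBdT.sum() / (dBdT / kappa).sum()
        print(k_planck, k_ross)   # the harmonic-type Rosseland mean is the smaller here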

  18. Complex network problems in physics, computer science and biology

    NASA Astrophysics Data System (ADS)

    Cojocaru, Radu Ionut

    There is a close relation between physics and mathematics, and the exchange of ideas between these two sciences is well established. However, until a few years ago there was no such close relation between physics and computer science. Moreover, only recently have biologists started to use methods and tools from statistical physics to study the behavior of complex systems. In this thesis we concentrate on applying and analyzing several methods borrowed from computer science in biology, and we also use methods from statistical physics to solve hard problems from computer science. In recent years physicists have been interested in studying the behavior of complex networks. Physics is an experimental science in which theoretical predictions are compared to experiments. In this definition, the term prediction plays a very important role: although the system is complex, it is still possible to get predictions for its behavior, but these predictions are of a probabilistic nature. Spin glasses, lattice gases, and the Potts model are a few examples of complex systems in physics. Spin glasses and many frustrated antiferromagnets map exactly to computer science problems in the NP-hard class defined in Chapter 1. In Chapter 1 we discuss a common result from artificial intelligence (AI) which shows that some problems are NP-complete, with the implication that these problems are difficult to solve. We introduce a few well-known hard problems from computer science (Satisfiability, Coloring, Vertex Cover together with Maximum Independent Set, and Number Partitioning) and then discuss their mapping to problems from physics. In Chapter 2 we provide a short review of combinatorial optimization algorithms and their applications to ground-state problems in disordered systems. We discuss the cavity method initially developed for studying the Sherrington-Kirkpatrick model of spin glasses. We extend this model to the study of a specific case of spin glass on the Bethe
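
    The classic flavor of such a mapping, in our own textbook example rather than the thesis' formulation: number partitioning becomes an Ising ground-state search with energy E(s) = (sum_i s_i * a_i)**2 over spins s_i = +/-1, where a perfect partition has E = 0.

        from itertools import product

        a = [4, 5, 6, 7, 8]

        def energy(s):
            return sum(si * ai for si, ai in zip(s, a)) ** 2

        best = min(product([-1, 1], repeat=len(a)), key=energy)
        print(best, energy(best))   # (-1, -1, -1, 1, 1): {4,5,6} vs {7,8}, E = 0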

  19. GPU Acceleration of the Locally Selfconsistent Multiple Scattering Code for First Principles Calculation of the Ground State and Statistical Physics of Materials

    SciTech Connect

    Eisenbach, Markus; Larkin, Jeff; Lutjens, Justin; Rennich, Steven; Rogers, James H

    2016-01-01

    The Locally Self-consistent Multiple Scattering (LSMS) code solves the first-principles Density Functional Theory Kohn-Sham equation for a wide range of materials, with a special focus on metals, alloys, and metallic nanostructures. It has traditionally exhibited near-perfect scalability on massively parallel high-performance computer architectures. We present our efforts to exploit GPUs to accelerate the LSMS code, enabling first-principles calculations of O(100,000) atoms and statistical physics sampling of finite-temperature properties. Using the Cray XK7 system Titan at the Oak Ridge Leadership Computing Facility, we achieve a sustained performance of 14.5 PFlop/s and a speedup of 8.6 compared to the CPU-only code.

  20. Computing element evolution towards Exascale and its impact on legacy simulation codes

    NASA Astrophysics Data System (ADS)

    Colin de Verdière, Guillaume J. L.

    2015-12-01

    In the light of the current race towards the Exascale, this article highlights the main features of the forthcoming computing elements that will be at the core of the next generations of supercomputers. The market analysis underlying this work shows that computers are facing a major evolution in terms of architecture. As a consequence, it is important to understand the impact of those evolutions on legacy codes and programming methods. The problems of dissipated power and memory access are discussed and lead to a vision of what an exascale system should be. To survive, programming languages have had to respond to the hardware evolutions, either by evolving or through the creation of new ones. From the previous elements, we elaborate on why vectorization, multithreading, data locality awareness, and hybrid programming will be the keys to reaching the exascale, implying that it is time to start rewriting codes.