An integrated radiation physics computer code system.
NASA Technical Reports Server (NTRS)
Steyn, J. J.; Harris, D. W.
1972-01-01
An integrated computer code system for the semi-automatic and rapid analysis of experimental and analytic problems in gamma photon and fast neutron radiation physics is presented. Such problems as the design of optimum radiation shields and radioisotope power source configurations may be studied. The system codes allow for the unfolding of complex neutron and gamma photon experimental spectra. Monte Carlo and analytic techniques are used for the theoretical prediction of radiation transport. The system includes a multichannel pulse-height analyzer scintillation and semiconductor spectrometer coupled to an on-line digital computer with appropriate peripheral equipment. The system is geometry generalized as well as self-contained with respect to material nuclear cross sections and the determination of the spectrometer response functions. Input data may be either analytic or experimental.
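The spectrum-unfolding step described above can be illustrated with a minimal sketch: given a known detector response matrix, the source spectrum is recovered from the measured pulse-height distribution by non-negative least squares. All numbers below are invented for illustration and are not from the original system.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical 4-bin response matrix: R[i, j] = probability that a photon
# emitted in source-energy bin j is recorded in pulse-height channel i.
R = np.array([
    [0.70, 0.10, 0.05, 0.02],
    [0.20, 0.65, 0.10, 0.05],
    [0.05, 0.15, 0.60, 0.13],
    [0.05, 0.10, 0.25, 0.80],
])

true_spectrum = np.array([100.0, 50.0, 25.0, 10.0])  # photons per source bin
measured = R @ true_spectrum                         # idealized pulse-height data

# Unfold: solve R x = measured subject to x >= 0 (non-negative least squares).
unfolded, _ = nnls(R, measured)
print(unfolded)
```

In the noise-free, well-conditioned case sketched here the unfolded spectrum matches the true one; real unfolding must also contend with counting statistics and regularization.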
NASA Technical Reports Server (NTRS)
1985-01-01
COSMIC MINIVER, a computer code developed by NASA for analyzing aerodynamic heating and heat transfer on the Space Shuttle, has been used by Marquardt Company to analyze heat transfer on Navy/Air Force missile bodies. The code analyzes heat transfer by four different methods whose results can be compared for accuracy. MINIVER saved Marquardt three months of effort and $15,000 in computer time.
ERIC Educational Resources Information Center
Ivanov, Anisoara; Neacsu, Andrei
2011-01-01
This study describes the possibility and advantages of utilizing simple computer codes to complement the teaching techniques for high school physics. The authors have begun working on a collection of open source programs which allow students to compare the results and graphics from classroom exercises with the correct solutions and furthermore to…
Computation of Thermodynamic Equilibria Pertinent to Nuclear Materials in Multi-Physics Codes
NASA Astrophysics Data System (ADS)
Piro, Markus Hans Alexander
Nuclear energy plays a vital role in supporting electrical needs and fulfilling commitments to reduce greenhouse gas emissions. Research is a continuing necessity to improve the predictive capabilities of fuel behaviour in order to reduce costs and to meet increasingly stringent safety requirements by the regulator. Moreover, a renewed interest in nuclear energy has given rise to a "nuclear renaissance" and the necessity to design the next generation of reactors. In support of this goal, significant research efforts have been dedicated to the advancement of numerical modelling and computational tools in simulating various physical and chemical phenomena associated with nuclear fuel behaviour. This undertaking in effect is collecting the experience and observations of a past generation of nuclear engineers and scientists in a meaningful way for future design purposes. There is an increasing desire to integrate thermodynamic computations directly into multi-physics nuclear fuel performance and safety codes. A new equilibrium thermodynamic solver is being developed with this matter as a primary objective. This solver is intended to provide thermodynamic material properties and boundary conditions for continuum transport calculations. There are several concerns with the use of existing commercial thermodynamic codes: computational performance; limited capabilities in handling large multi-component systems of interest to the nuclear industry; convenient incorporation into other codes with quality assurance considerations; and, licensing entanglements associated with code distribution. The development of this software in this research is aimed at addressing all of these concerns. The approach taken in this work exploits fundamental principles of equilibrium thermodynamics to simplify the numerical optimization equations. In brief, the chemical potentials of all species and phases in the system are constrained by estimates of the chemical potentials of the system
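The equilibrium computation described above, constrained minimization of the total Gibbs energy, can be sketched for a toy ideal-mixture system. The species, standard chemical potentials, and element inventory below are illustrative values only, not nuclear-fuel data, and this is not the solver developed in the thesis.

```python
import numpy as np
from scipy.optimize import minimize

# Toy system: gaseous dimerization A2 <-> 2 A at fixed T and P.
# mu0_i are dimensionless standard chemical potentials mu_i^0 / (R*T).
mu0 = np.array([-1.0, 0.5])    # [A2, A] (invented values)
b = 2.0                        # moles of element A: 2*n_A2 + n_A = b

def gibbs(n):
    # Ideal-mixture molar Gibbs energy: sum_i n_i (mu0_i + ln x_i)
    N = n.sum()
    return float(n @ (mu0 + np.log(n / N)))

cons = {"type": "eq", "fun": lambda n: 2.0 * n[0] + n[1] - b}
res = minimize(gibbs, x0=[0.5, 1.0], method="SLSQP",
               bounds=[(1e-10, None)] * 2, constraints=cons)
n = res.x
x = n / n.sum()
# At equilibrium the chemical potentials balance: 2*mu_A = mu_A2,
# i.e. 2*(mu0_A + ln x_A) = mu0_A2 + ln x_A2.
print(n, 2 * (mu0[1] + np.log(x[1])) - (mu0[0] + np.log(x[0])))
```

The Lagrange condition recovered by the optimizer, equality of element-weighted chemical potentials, is the same stationarity principle the abstract refers to, here for the simplest possible phase model.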
Porting plasma physics simulation codes to modern computing architectures using the
NASA Astrophysics Data System (ADS)
Germaschewski, Kai; Abbott, Stephen
2015-11-01
Available computing power has continued to grow exponentially even after single-core performance saturated in the last decade. The increase has since been driven by greater parallelism, both by using more cores and by having more parallelism within each core, e.g. in GPUs and the Intel Xeon Phi. Adapting existing plasma physics codes is challenging, in particular because there is no single programming model that covers current and future architectures. We will introduce the open-source
Coded aperture computed tomography
NASA Astrophysics Data System (ADS)
Choi, Kerkil; Brady, David J.
2009-08-01
Diverse physical measurements can be modeled by X-ray transforms. While X-ray tomography is the canonical example, reference structure tomography (RST) and coded aperture snapshot spectral imaging (CASSI) are examples of physically unrelated but mathematically equivalent sensor systems. Historically, most x-ray transform based systems sample continuous distributions and apply analytical inversion processes. On the other hand, RST and CASSI generate discrete multiplexed measurements implemented with coded apertures. This multiplexing of coded measurements allows for compression of measurements from a compressed sensing perspective. Compressed sensing (CS) establishes that if the object has a sparse representation in some basis, then a number of random projections, typically far fewer than prescribed by Shannon's sampling rate, captures enough information for a highly accurate reconstruction of the object. This paper investigates the role of coded apertures in x-ray transform measurement systems (XTMs) in terms of data efficiency and reconstruction fidelity from a CS perspective. To this end, we construct a unified analysis using RST and CASSI measurement models. Also, we propose a novel compressive x-ray tomography measurement scheme which also exploits coding and multiplexing, and hence shares the analysis of the other two XTMs. Using this analysis, we perform a qualitative study on how coded apertures can be exploited to implement physical random projections by "regularizing" the measurement systems. Numerical studies and simulation results demonstrate several examples of the impact of coding.
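The CS recovery referred to above can be sketched with plain ISTA (iterative soft thresholding) on a synthetic sparse signal; the random matrix A stands in for the coded-aperture projections. All dimensions and parameters are invented, and this is a generic CS demonstration, not the paper's measurement models.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 100, 40, 3               # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 3.0 + rng.normal(0.0, 0.5, k)
A = rng.normal(0.0, 1.0 / np.sqrt(m), (m, n))   # random projections
y = A @ x_true                                   # multiplexed measurements

# ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1
lam = 0.01
L = np.linalg.norm(A, 2) ** 2                    # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(3000):
    g = x - A.T @ (A @ x - y) / L                            # gradient step
    x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)    # soft threshold

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))   # small relative error
```

With m = 40 measurements of a length-100 signal with only 3 nonzeros, the L1-regularized reconstruction recovers the signal far below the classical sampling requirement, which is the point the abstract makes.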
NASA Technical Reports Server (NTRS)
Bjork, C.
1981-01-01
The REEDS (rocket exhaust effluent diffusion single layer) computer code is used for the estimation of certain rocket exhaust effluent concentrations and dosages and their distributions near the Earth's surface following a rocket launch event. Output from REEDS is used in producing near real time air quality and environmental assessments of the effects of certain potentially harmful effluents, namely HCl, Al2O3, CO, and NO.
Summers, R.M.; Cole, R.K. Jr.; Smith, R.C.; Stuart, D.S.; Thompson, S.L.; Hodge, S.A.; Hyman, C.R.; Sanders, R.L.
1995-03-01
MELCOR is a fully integrated, engineering-level computer code that models the progression of severe accidents in light water reactor nuclear power plants. MELCOR is being developed at Sandia National Laboratories for the U.S. Nuclear Regulatory Commission as a second-generation plant risk assessment tool and the successor to the Source Term Code Package. A broad spectrum of severe accident phenomena in both boiling and pressurized water reactors is treated in MELCOR in a unified framework. These include: thermal-hydraulic response in the reactor coolant system, reactor cavity, containment, and confinement buildings; core heatup, degradation, and relocation; core-concrete attack; hydrogen production, transport, and combustion; fission product release and transport; and the impact of engineered safety features on thermal-hydraulic and radionuclide behavior. Current uses of MELCOR include estimation of severe accident source terms and their sensitivities and uncertainties in a variety of applications. This publication of the MELCOR computer code manuals corresponds to MELCOR 1.8.3, released to users in August 1994. Volume 1 contains a primer that describes MELCOR's phenomenological scope, organization (by package), and documentation. The remainder of Volume 1 contains the MELCOR Users Guides, which provide the input instructions and guidelines for each package. Volume 2 contains the MELCOR Reference Manuals, which describe the phenomenological models that have been implemented in each package.
NASA Astrophysics Data System (ADS)
Zizin, M. N.; Zimin, V. G.; Zizina, S. N.; Kryakvin, L. V.; Pitilimov, V. A.; Tereshonok, V. A.
2010-12-01
The ShIPR intellectual code system for mathematical simulation of nuclear reactors includes a set of computing modules implementing the preparation of macro cross sections on the basis of the two-group library of neutron-physics cross sections obtained for the SKETCH-N nodal code. This library is created by using the UNK code for 3D diffusion computation of first VVER-1000 fuel loadings. Computation of neutron fields in the ShIPR system is performed using the DP3 code in the two-group diffusion approximation in 3D triangular geometry. The efficiency of all groups of control rods for the first fuel loading of the third unit of the Kalinin Nuclear Power Plant is computed. The temperature, barometric, and density effects of reactivity as well as the reactivity coefficient due to the concentration of boric acid in the reactor were computed additionally. Results of computations are compared with the experiment.
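The multigroup diffusion computations described above reduce, in the simplest case, to a one-group eigenvalue problem; the sketch below solves a 1-D bare slab by finite differences and power (source) iteration, and checks the result against the analytic buckling formula. The cross sections are invented one-group constants, not VVER-1000 data, and this is not the DP3 or SKETCH-N scheme.

```python
import numpy as np

# One-group, 1-D bare-slab diffusion eigenvalue problem,
#   -D phi'' + Sa phi = (1/k) nuSf phi,  phi = 0 at both faces,
# solved by finite differences and power (source) iteration.
D, Sa, nuSf, a = 1.0, 0.07, 0.08, 60.0   # invented constants; slab width a in cm
N = 200
h = a / (N + 1)

M = np.zeros((N, N))                      # discrete leakage + removal operator
for i in range(N):
    M[i, i] = 2.0 * D / h**2 + Sa
    if i > 0:
        M[i, i - 1] = -D / h**2
    if i < N - 1:
        M[i, i + 1] = -D / h**2

phi = np.ones(N)
k = 1.0
for _ in range(300):
    phi_new = np.linalg.solve(M, nuSf * phi / k)  # sweep with current fission source
    k *= phi_new.sum() / phi.sum()                # update k from source ratio
    phi = phi_new

k_analytic = nuSf / (Sa + D * (np.pi / a) ** 2)   # bare-slab buckling formula
print(k, k_analytic)
```

The converged k-effective agrees with the analytic value to well under a percent, which is the kind of verification baseline nodal codes are typically checked against before tackling 3D triangular geometry.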
NASA Technical Reports Server (NTRS)
Goerke, W. S.
1972-01-01
A manual is presented as an aid in using the STEEP32 code. The code is the EXEC 8 version of the STEEP code (STEEP is an acronym for shock two-dimensional Eulerian elastic plastic). The major steps in a STEEP32 run are illustrated in a sample problem. There is a detailed discussion of the internal organization of the code, including a description of each subroutine.
Topics in computational physics
NASA Astrophysics Data System (ADS)
Monville, Maura Edelweiss
Computational Physics spans a broad range of applied fields extending beyond the border of traditional physics tracks. Demonstrated flexibility and the capability to switch to a new project, and to pick up the basics of the new field quickly, are among the essential requirements for a computational physicist. In line with the above-mentioned prerequisites, my thesis describes the development and results of two computational projects belonging to two different applied science areas. The first project is a Materials Science application. It is a prescription for an innovative nano-fabrication technique that is built out of two other known techniques. The preliminary results of the simulation of this novel nano-patterning fabrication method show an average improvement of roughly 18% over the individual techniques it draws on. The second project is a Homeland Security application aimed at preventing smuggling of nuclear material at ports of entry. It is concerned with a simulation of an active material interrogation system based on the analysis of induced photo-nuclear reactions. This project consists of a preliminary evaluation of the photo-fission implementation in the more robust radiation transport Monte Carlo codes, followed by the customization and extension of MCNPX, a Monte Carlo code developed at Los Alamos National Laboratory, and MCNP-PoliMi. The final stage of the project consists of testing the interrogation system against some real world scenarios, for the purpose of determining the system's reliability, material discrimination power, and limitations.
Accelerator Physics Code Web Repository
Zimmermann, F.; Basset, R.; Bellodi, G.; Benedetto, E.; Dorda, U.; Giovannozzi, M.; Papaphilippou, Y.; Pieloni, T.; Ruggiero, F.; Rumolo, G.; Schmidt, F.; Todesco, E.; Zotter, B.W.; Payet, J.; Bartolini, R.; Farvacque, L.; Sen, T.; Chin, Y.H.; Ohmi, K.; Oide, K.; Furman, M.; /LBL, Berkeley /Oak Ridge /Pohang Accelerator Lab. /SLAC /TRIUMF /Tech-X, Boulder /UC, San Diego /Darmstadt, GSI /Rutherford /Brookhaven
2006-10-24
In the framework of the CARE HHH European Network, we have developed a web-based dynamic accelerator-physics code repository. We describe the design, structure and contents of this repository, illustrate its usage, and discuss our future plans, with emphasis on code benchmarking.
Computer algorithm for coding gain
NASA Technical Reports Server (NTRS)
Dodd, E. E.
1974-01-01
Development of a computer algorithm for coding gain for use in an automated communications link design system. Using an empirical formula which defines coding gain as used in space communications engineering, an algorithm is constructed on the basis of available performance data for nonsystematic convolutional encoding with soft-decision (eight-level) Viterbi decoding.
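The empirical-table approach described above can be sketched as interpolation over tabulated required Eb/N0 values, differencing the uncoded and coded curves at a target bit-error rate. The numbers below are illustrative placeholders, not the performance data used in the original algorithm.

```python
import numpy as np

# Hypothetical link performance table (illustrative numbers only):
# Eb/N0 (dB) required to reach a given BER for uncoded BPSK and for
# rate-1/2 convolutional coding with soft-decision Viterbi decoding.
ber_grid   = np.array([1e-3, 1e-4, 1e-5, 1e-6])
uncoded_dB = np.array([6.8, 8.4, 9.6, 10.5])
coded_dB   = np.array([3.0, 3.9, 4.4, 5.0])

def coding_gain_db(ber):
    """Interpolate both curves in log10(BER) and difference them."""
    lb = np.log10(ber)
    grid = np.log10(ber_grid)[::-1]          # np.interp needs increasing xp
    u = np.interp(lb, grid, uncoded_dB[::-1])
    c = np.interp(lb, grid, coded_dB[::-1])
    return u - c

print(coding_gain_db(1e-5))   # 9.6 - 4.4 = 5.2 dB for this table
```

A link-design system would evaluate such a function inside its margin budget, trading coding gain against bandwidth expansion and decoder complexity.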
Topological Code Architectures for Quantum Computation
NASA Astrophysics Data System (ADS)
Cesare, Christopher Anthony
This dissertation is concerned with quantum computation using many-body quantum systems encoded in topological codes. The interest in these topological systems has increased in recent years as devices in the lab begin to reach the fidelities required for performing arbitrarily long quantum algorithms. The most well-studied system, Kitaev's toric code, provides both a physical substrate for performing universal fault-tolerant quantum computations and a useful pedagogical tool for explaining the way other topological codes work. In this dissertation, I first review the necessary formalism for quantum information and quantum stabilizer codes, and then I introduce two families of topological codes: Kitaev's toric code and Bombin's color codes. I then present three chapters of original work. First, I explore the distinctness of encoding schemes in the color codes. Second, I introduce a model of quantum computation based on the toric code that uses adiabatic interpolations between static Hamiltonians with gaps constant in the system size. Lastly, I describe novel state distillation protocols that are naturally suited for topological architectures and show that they provide resource savings in terms of the number of required ancilla states when compared to more traditional approaches to quantum gate approximation.
NASA Technical Reports Server (NTRS)
Collins, Earl R., Jr.
1990-01-01
Authorized users respond to changing challenges with changing passwords. Scheme for controlling access to computers defeats eavesdroppers and "hackers". Based on password system of challenge and password or sign, challenge, and countersign correlated with random alphanumeric codes in matrices of two or more dimensions. Codes stored on floppy disk or plug-in card and changed frequently. For even higher security, matrices of four or more dimensions used, just as cubes compounded into hypercubes in concurrent processing.
electromagnetics, eddy current, computer codes
Energy Science and Technology Software Center (ESTSC)
2002-03-12
TORO Version 4 is designed for finite element analysis of steady, transient and time-harmonic, multi-dimensional, quasi-static problems in electromagnetics. The code allows simulation of electrostatic fields, steady current flows, magnetostatics and eddy current problems in plane or axisymmetric, two-dimensional geometries. TORO is easily coupled to heat conduction and solid mechanics codes to allow multi-physics simulations to be performed.
Using the DEWSBR computer code
Cable, G.D.
1989-09-01
A computer code is described which is designed to determine the fraction of time during which a given ground location is observable from one or more members of a satellite constellation in earth orbit. Ground visibility parameters are determined from the orientation and strength of an appropriate ionized cylinder (used to simulate a beam experiment) at the selected location. Satellite orbits are computed in a simplified two-body approximation. A variety of printed and graphical outputs is provided. 9 refs., 50 figs., 2 tabs.
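The visibility-fraction computation can be sketched in a heavily simplified geometry: a circular equatorial orbit over a non-rotating Earth with the ground site on the equator. None of these simplifications is claimed of the original code; they just admit an analytic cross-check.

```python
import numpy as np

Re, alt = 6378.0, 1000.0          # Earth radius and orbit altitude, km (illustrative)
r = Re + alt
site = np.array([Re, 0.0, 0.0])   # ground site on the equator

# Sample the circular orbit uniformly in true anomaly.
theta = np.linspace(0.0, 2 * np.pi, 200_000, endpoint=False)
sat = np.stack([r * np.cos(theta), r * np.sin(theta),
                np.zeros_like(theta)], axis=1)

los = sat - site                          # line-of-sight vectors
zenith = site / np.linalg.norm(site)
visible = (los @ zenith) > 0.0            # satellite above the local horizon plane
fraction = visible.mean()

# Analytic check: visible while the central angle is below arccos(Re/r).
analytic = np.arccos(Re / r) / np.pi
print(fraction, analytic)
```

The sampled fraction matches the closed-form value; a production code would add Earth rotation, orbit inclination, and a minimum-elevation mask.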
Computer access security code system
NASA Technical Reports Server (NTRS)
Collins, Earl R., Jr. (Inventor)
1990-01-01
A security code system for controlling access to computer and computer-controlled entry situations comprises a plurality of subsets of alpha-numeric characters disposed in random order in matrices of at least two dimensions forming theoretical rectangles, cubes, etc., such that when access is desired, at least one pair of previously unused character subsets not found in the same row or column of the matrix is chosen at random and transmitted by the computer. The proper response to gain access is transmittal of subsets which complete the rectangle, and/or a parallelepiped whose opposite corners were defined by first groups of code. Once used, subsets are not used again to absolutely defeat unauthorized access by eavesdropping, and the like.
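The rectangle-completion response described in the claim can be sketched as follows. For simplicity the challenge is represented here by matrix coordinates rather than transmitted character subsets, and the re-use tracking that defeats eavesdropping is omitted; this is an illustration of the geometry, not the patented system.

```python
import secrets
import string

# Both parties hold the same secret matrix of random alphanumeric codes.
SIZE = 6
rng = secrets.SystemRandom()
alphabet = string.ascii_uppercase + string.digits
matrix = [[''.join(rng.choice(alphabet) for _ in range(3))
           for _ in range(SIZE)] for _ in range(SIZE)]

def challenge():
    # Pick two cells in different rows AND different columns.
    r1, r2 = rng.sample(range(SIZE), 2)
    c1, c2 = rng.sample(range(SIZE), 2)
    return (r1, c1), (r2, c2)

def respond(cell_a, cell_b):
    # The correct response completes the rectangle: the codes at the
    # two opposite corners defined by the challenge cells.
    (r1, c1), (r2, c2) = cell_a, cell_b
    return matrix[r1][c2], matrix[r2][c1]

a, b = challenge()
print(matrix[a[0]][a[1]], matrix[b[0]][b[1]], "->", respond(a, b))
```

Extending the matrix to three or more dimensions, as the claim describes, turns the rectangle into a parallelepiped whose remaining corners form the countersign.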
NASA Astrophysics Data System (ADS)
Davis, A. B.; Cahalan, R. F.
2001-05-01
The Intercomparison of 3D Radiation Codes (I3RC) is an on-going initiative involving an international group of over 30 researchers engaged in the numerical modeling of three-dimensional radiative transfer as applied to clouds. Because of their strong variability and extreme opacity, clouds are indeed a major source of uncertainty in the Earth's local radiation budget (at GCM grid scales). Also 3D effects (at satellite pixel scales) invalidate the standard plane-parallel assumption made in the routine of cloud-property remote sensing at NASA and NOAA. Accordingly, the test-cases used in I3RC are based on inputs and outputs which relate to cloud effects in atmospheric heating rates and in real-world remote sensing geometries. The main objectives of I3RC are to (1) enable participants to improve their models, (2) publish results as a community, (3) archive source code, and (4) educate. We will survey the status of I3RC and its plans for the near future with a special emphasis on the mathematical models and computational approaches. We will also describe some of the prime applications of I3RC's efforts in climate models, cloud-resolving models, and remote-sensing observations of clouds, or that of the surface in their presence. In all these application areas, computational efficiency is the main concern and not accuracy. One of I3RC's main goals is to document the performance of as wide a variety as possible of three-dimensional radiative transfer models for a small but representative number of "cases." However, it is dominated by modelers working at the level of linear transport theory (i.e., they solve the radiative transfer equation) and an overwhelming majority of these participants use slow-but-robust Monte Carlo techniques. This means that only a small portion of the efficiency vs. accuracy vs. flexibility domain is currently populated by I3RC participants. To balance this natural clustering the present authors have organized a systematic outreach towards
Documentation for computer code NACL
Weres, O.; Peiper, J.C.; Pitzer, K.S.; Pabalan, R.
1987-02-01
The computer program NACL incorporates the empirical model of the thermodynamic properties of the system NaCl-H2O recently published by Pitzer et al. NACL is derived from the research codes used by Pitzer et al. to analyze the experimental data and fix the parameters in their model. NACL calculates values for all thermodynamic properties which are identical to values tabulated in Ref. 1. NACL is written in VAX/VMS FORTRAN, and was developed on a VAX 8600 computer. Machine specific features have been avoided, and NACL should require few changes to compile and run with other compilers and computers. A sample output and full code listing of NACL are appended to this document. For one year following the publication of this document, the code will be made available to interested users on 5.25" floppy diskette in MS-DOS 2.11 format. Please send a formatted diskette and a stamped, self-addressed mailer to Oleh Weres, Lawrence Berkeley Laboratory, 50E, Berkeley, CA 94720. Please put your name and address on the diskette.
H0 precessor computer code
van Dyck, O.B.; Floyd, R.A.
1981-05-01
A spin precessor using H- to H0 stripping, followed by small precession magnets, has been developed for the LAMPF 800-MeV polarized H- beam. The performance of the system was studied with the computer code documented in this report. The report starts from the fundamental physics of a system of spins with hyperfine coupling in a magnetic field and contains many examples of beam behavior as calculated by the program.
2011-05-21
GPAC is a code that integrates open source libraries for element formulations, linear algebra, and I/O with two main LLNL-written components: (i) a set of standard finite element physics solvers for resolving Darcy fluid flow, explicit mechanics, implicit mechanics, and fluid-mediated fracturing, including resolution of contact both implicitly and explicitly, and (ii) an MPI-based parallelization implementation for use on generic HPC distributed memory architectures. The resultant code can be used alone for linearly elastic problems and problems involving hydraulic fracturing, where the mesh topology is dynamically changed. The key application domain is low-rate stimulation and fracture control in subsurface reservoirs (e.g., enhanced geothermal sites and unconventional shale gas stimulation). GPAC also has interfaces to call external libraries for, e.g., material models and equations of state; however, LLNL-developed EOS and material models will not be part of the current release.
Tang Haibin; Cheng Jiao; Liu Chang; York, Thomas M.
2012-07-15
A two-dimensional axisymmetric electromagnetic particle-in-cell code with Monte Carlo collision conditions has been developed for an applied-field magnetoplasmadynamic thruster simulation. This theoretical approach establishes a particle acceleration model to investigate the microscopic and macroscopic characteristics of particles. This new simulation code was used to study the physical processes associated with applied magnetic fields. In this paper (I), details of the computational procedure and predictions of local plasma and field properties are presented. The numerical model was applied to the configuration of a NASA Lewis Research Center 100-kW magnetoplasmadynamic thruster which has well-documented experimental results. The applied magnetic field strength was varied from 0 to 0.12 T, and the effects on thrust were calculated as a basis for verification of the theoretical approach. With this confirmation, the changes in the distributions of ion density, velocity, and temperature throughout the acceleration region related to the applied magnetic fields were investigated. Using these results, the effects of applied field on physical processes in the thruster discharge region could be represented in detail, and those results are reported.
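The particle push at the heart of an electromagnetic PIC code is commonly the Boris scheme; below is a minimal single-particle sketch with illustrative parameters. The field gather, current deposit, and collision handling are omitted, and this is not necessarily the exact mover used by the authors.

```python
import numpy as np

# Boris rotation: the standard leapfrog velocity update for a charged
# particle in E and B fields (normalized units, invented values).
q_m = 1.0                      # charge-to-mass ratio
dt = 1e-3
E = np.array([0.0, 0.0, 0.0])
B = np.array([0.0, 0.0, 1.0])

def boris_push(v, E, B, dt):
    t = 0.5 * q_m * dt * B                 # half-step rotation vector
    s = 2.0 * t / (1.0 + t @ t)
    v_minus = v + 0.5 * q_m * dt * E       # first half electric kick
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)  # magnetic rotation
    return v_plus + 0.5 * q_m * dt * E     # second half electric kick

v = np.array([1.0, 0.0, 0.0])
for _ in range(1000):
    v = boris_push(v, E, B, dt)
print(np.linalg.norm(v))   # with E = 0 the rotation conserves |v| to round-off
```

The scheme's exact energy conservation in a pure magnetic field is the property that makes it the default choice in PIC movers.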
Surface code quantum computing by lattice surgery
NASA Astrophysics Data System (ADS)
Horsman, Clare; Fowler, Austin G.; Devitt, Simon; Van Meter, Rodney
2012-12-01
In recent years, surface codes have become a leading method for quantum error correction in theoretical large-scale computational and communications architecture designs. Their comparatively high fault-tolerant thresholds and their natural two-dimensional nearest-neighbour (2DNN) structure make them an obvious choice for large scale designs in experimentally realistic systems. While fundamentally based on the toric code of Kitaev, there are many variants, two of which are the planar- and defect-based codes. Planar codes require fewer qubits to implement (for the same strength of error correction), but are restricted to encoding a single qubit of information. Interactions between encoded qubits are achieved via transversal operations, thus destroying the inherent 2DNN nature of the code. In this paper we introduce a new technique enabling the coupling of two planar codes without transversal operations, maintaining the 2DNN of the encoded computer. Our lattice surgery technique comprises splitting and merging planar code surfaces, and enables us to perform universal quantum computation (including magic state injection) while removing the need for braided logic in a strictly 2DNN design, and hence reduces the overall qubit resources for logic operations. Those resources are further reduced by the use of a rotated lattice for the planar encoding. We show how lattice surgery allows us to distribute encoded GHZ states in a more direct (and overhead friendly) manner, and how a demonstration of an encoded CNOT between two distance-3 logical states is possible with 53 physical qubits, half of that required in any other known construction in 2D.
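The commutation property that underpins surface and toric codes can be checked directly for a small torus. The sketch below uses standard toric-code conventions (qubits on edges, X-type stars on vertices, Z-type plaquettes on faces), not the rotated planar lattice of the paper; an X-type and a Z-type Pauli operator commute iff their supports overlap on an even number of qubits.

```python
import numpy as np
from itertools import product

# Kitaev's toric code on an L x L torus: qubits on edges, star (vertex)
# operators apply X to the four incident edges, plaquette (face)
# operators apply Z to the four boundary edges.
L = 3
def h(i, j): return 2 * ((i % L) * L + (j % L))       # horizontal edge index
def v(i, j): return 2 * ((i % L) * L + (j % L)) + 1   # vertical edge index

stars = [{h(i, j), h(i, j - 1), v(i, j), v(i - 1, j)}
         for i, j in product(range(L), range(L))]
plaquettes = [{h(i, j), h(i + 1, j), v(i, j), v(i, j + 1)}
              for i, j in product(range(L), range(L))]

# Every star must overlap every plaquette on an even number of edges
# (0 for distant pairs, 2 for neighbouring ones), so all stabilizers commute.
all_commute = all(len(s & p) % 2 == 0 for s in stars for p in plaquettes)
print(all_commute)
```

Lattice surgery preserves exactly this structure while splitting and merging code surfaces, which is why the merged patches remain valid stabilizer codes.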
Optimizing Nuclear Physics Codes on the XT5
Hartman-Baker, Rebecca J; Nam, Hai Ah
2011-01-01
Scientists studying the structure and behavior of the atomic nucleus require immense high-performance computing resources to gain scientific insights. Several nuclear physics codes are capable of scaling to more than 100,000 cores on Oak Ridge National Laboratory's petaflop Cray XT5 system, Jaguar. In this paper, we present our work on optimizing codes in the nuclear physics domain.
Computational capabilities of physical systems
NASA Astrophysics Data System (ADS)
Wolpert, David H.
2002-01-01
In this paper strong limits on the accuracy of real-world physical computation are established. To derive these results a non-Turing machine formulation of physical computation is used. First it is proven that there cannot be a physical computer C to which one can pose any and all computational tasks concerning the physical universe. Next it is proven that no physical computer C can correctly carry out every computational task in the subset of such tasks that could potentially be posed to C. This means in particular that there cannot be a physical computer that can be assured of correctly "processing information faster than the universe does." Because this result holds independent of how or if the computer is physically coupled to the rest of the universe, it also means that there cannot exist an infallible, general-purpose observation apparatus, nor an infallible, general-purpose control apparatus. These results do not rely on systems that are infinite, and/or nonclassical, and/or obey chaotic dynamics. They also hold even if one could use an infinitely fast, infinitely dense computer, with computational powers greater than that of a Turing machine (TM). After deriving these results analogs of the TM Halting theorem are derived for the novel kind of computer considered in this paper, as are results concerning the (im)possibility of certain kinds of error-correcting codes. In addition, an analog of algorithmic information complexity, "prediction complexity," is elaborated. A task-independent bound is derived on how much the prediction complexity of a computational task can differ for two different reference universal physical computers used to solve that task. This is analogous to the "encoding" bound governing how much the algorithmic information complexity of a TM calculation can differ for two reference universal TMs. It is proven that either the Hamiltonian of our universe proscribes a certain type of computation, or prediction complexity is unique (unlike
Free electron laser physical process code (FELPPC)
Thode, L.E.; Chan, K.C.D.; Schmitt, M.J.
1995-02-01
Even at the conceptual level, the strong coupling between subsystem elements complicates the understanding and design of a free electron laser (FEL). Given the requirements for high-performance FELs, the coupling between subsystems must be included to obtain a realistic picture of the potential operational capability. The concept of an Integrated Numerical Experiment (INEX) was implemented to accurately calculate the coupling between the FEL subsystems. During the late 1980s, the INEX approach was successfully applied to a large number of accelerator and FEL experiments. Unfortunately, because of significant manpower and computational requirements, the integrated approach is difficult to apply to trade-off and initial design studies. However, the INEX codes provided a base from which realistic accelerator, wiggler, optics, and control models could be developed. The Free Electron Laser Physical Process Code (FELPPC) includes models developed from the INEX codes, provides coupling between the subsystem models, and incorporates application models relevant to a specific study. In other words, FELPPC solves the complete physical process model using realistic physics and technology constraints. FELPPC can calculate complex FEL configurations including multiple accelerator and wiggler combinations. When compared with the INEX codes, the subsystem models have been found to be quite accurate over many orders of magnitude. As a result, FELPPC has been used for the initial design studies of a large number of FEL applications: high-average-power ground-, space-, plane-, and ship-based FELs; beacon and illuminator FELs; medical and compact FELs; and XUV FELs.
Computer Code for Nanostructure Simulation
NASA Technical Reports Server (NTRS)
Filikhin, Igor; Vlahovic, Branislav
2009-01-01
Due to their small size, nanostructures can have stress and thermal gradients that are larger than any macroscopic analogue. These gradients can lead to specific regions that are susceptible to failure via processes such as plastic deformation by dislocation emission, chemical debonding, and interfacial alloying. A program has been developed that rigorously simulates and predicts optoelectronic properties of nanostructures of virtually any geometrical complexity and material composition. It can be used in simulations of energy level structure, wave functions, density of states of spatially configured phonon-coupled electrons, excitons in quantum dots, quantum rings, quantum ring complexes, and more. The code can be used to calculate stress distributions and thermal transport properties for a variety of nanostructures and interfaces, transport and scattering at nanoscale interfaces and surfaces under various stress states, and alloy compositional gradients. The code allows users to perform modeling of charge transport processes through quantum-dot (QD) arrays as functions of inter-dot distance, array order versus disorder, QD orientation, shape, size, and chemical composition for applications in photovoltaics and physical properties of QD-based biochemical sensors. The code can be used to study the hot exciton formation/relaxation dynamics in arrays of QDs of different shapes and sizes at different temperatures. It also can be used to understand the relation among the deposition parameters and inherent stresses, strain deformation, heat flow, and failure of nanostructures.
Computations in Plasma Physics.
ERIC Educational Resources Information Center
Cohen, Bruce I.; Killeen, John
1983-01-01
Discusses contributions of computers to research in magnetic and inertial-confinement fusion, charged-particle-beam propagation, and space sciences. Considers use in design/control of laboratory and spacecraft experiments and in data acquisition; and reviews major plasma computational methods and some of the important physics problems they…
Energy Science and Technology Software Center (ESTSC)
2012-12-21
GPAC is a code that integrates open source libraries for element formulations, linear algebra, and I/O with two main LLNL-written components: (i) a set of standard finite, discrete, and discontinuous displacement element physics solvers for resolving Darcy fluid flow, explicit mechanics, implicit mechanics, fault rupture and earthquake nucleation, and fluid-mediated fracturing, including resolution of physical behaviors both implicitly and explicitly, and (ii) an MPI-based parallelization implementation for use on generic HPC distributed memory architectures. The resultant code can be used alone for linearly elastic problems; problems involving hydraulic fracturing, where the mesh topology is dynamically changed; fault rupture modeling and seismic risk assessment; and general granular materials behavior. The key application domain is low-rate stimulation and fracture control in subsurface reservoirs (e.g., enhanced geothermal sites and unconventional shale gas stimulation). GPAC also has interfaces to call external libraries for, e.g., material models and equations of state; however, LLNL-developed EOS and material models will not be part of the current release. GPAC's secondary applications include modeling fault evolution for predicting the statistical distribution of earthquake events and capturing granular materials behavior under different load paths.
Teaching Physics with Computers
NASA Astrophysics Data System (ADS)
Botet, R.; Trizac, E.
2005-09-01
Computers are now so common in our everyday life that it is difficult to imagine the computer-free scientific life of the years before the 1980s. And yet, in spite of an unquestionable rise, the use of computers in the realm of education is still in its infancy. This is not a problem with students: for the new generation, the pre-computer age seems as far in the past as the age of the dinosaurs. It may instead be more a question of teacher attitude. Traditional education is based on centuries of polished concepts and equations, while computers require us to think differently about our method of teaching, and to revise the content accordingly. Our brains do not work in terms of numbers, but use abstract and visual concepts; hence, communication between computer and man boomed when computers escaped the world of numbers to reach a visual interface. From this time on, computers have generated new knowledge and, more importantly for teaching, new ways to grasp concepts. Therefore, just as real experiments were the starting point for theory, virtual experiments can be used to understand theoretical concepts. But there are important differences. Some of them are fundamental: a virtual experiment may allow for the exploration of length and time scales together with a level of microscopic complexity not directly accessible to conventional experiments. Others are practical: numerical experiments are completely safe, unlike some dangerous but essential laboratory experiments, and are often less expensive. Finally, some numerical approaches are suited only to teaching, as the concept necessary for the physical problem, or its solution, lies beyond the scope of traditional methods. For all these reasons, computers open physics courses to novel concepts, bringing education and research closer. In addition, and this is not a minor point, they respond naturally to the basic pedagogical needs of interactivity, feedback, and individualization of instruction. This is why one can…
HOTSPOT Health Physics codes for the PC
Homann, S.G.
1994-03-01
The HOTSPOT Health Physics codes were created to provide Health Physics personnel with a fast, field-portable calculation tool for evaluating accidents involving radioactive materials. HOTSPOT codes are a first-order approximation of the radiation effects associated with the atmospheric release of radioactive materials. HOTSPOT programs are reasonably accurate for a timely initial assessment. More importantly, HOTSPOT codes produce a consistent output for the same input assumptions and minimize the probability of errors associated with reading a graph incorrectly or scaling a universal nomogram during an emergency. The HOTSPOT codes are designed for short-term (less than 24 hours) release durations. Users requiring radiological release consequences for release scenarios over a longer time period, e.g., annual windrose data, are directed to such long-term models as CAP88-PC (Parks, 1992). Users requiring more sophisticated modeling capabilities, e.g., complex terrain; multi-location real-time wind field data; etc., are directed to such capabilities as the Department of Energy's ARAC computer codes (Sullivan, 1993). Four general programs -- Plume, Explosion, Fire, and Resuspension -- calculate a downwind assessment following the release of radioactive material resulting from a continuous or puff release, explosive release, fuel fire, or an area contamination event. Other programs deal with the release of plutonium, uranium, and tritium to expedite an initial assessment of accidents involving nuclear weapons. Additional programs estimate the dose commitment from the inhalation of any one of the radionuclides listed in the database of radionuclides; calibrate a radiation survey instrument for ground-survey measurements; and screen plutonium uptake in the lung (see FIDLER Calibration and LUNG Screening sections).
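The first-order atmospheric model behind this kind of field-assessment code is the classic Gaussian plume. A minimal sketch of that model follows (illustrative only, not HOTSPOT's actual implementation; the function name and parameter set are assumptions):

```python
import math

def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
    """Steady-state Gaussian plume concentration with ground reflection.

    Q: source term (e.g. Bq/s), u: wind speed (m/s), y/z: crosswind and
    vertical receptor coordinates (m), H: effective release height (m),
    sigma_y/sigma_z: dispersion coefficients (m) at the downwind distance.
    """
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    # Image source below ground: material reaching the surface is reflected.
    vertical = (math.exp(-(z - H)**2 / (2 * sigma_z**2))
                + math.exp(-(z + H)**2 / (2 * sigma_z**2)))
    return Q * lateral * vertical / (2 * math.pi * u * sigma_y * sigma_z)
```

For a ground-level release observed at ground level on the plume centerline (y = z = H = 0), this reduces to the familiar Q/(pi u sigma_y sigma_z).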
Computer Augmented Physics Lab
ERIC Educational Resources Information Center
Geller, Kenneth N.; Newstein, Herman
1972-01-01
Experiments are designed with application to phenomena of the real world for use in a third-quarter introductory college physics course in wave motion, sound, and light. The computer is used to process and analyze experimental data and to extend experimental observations through simulation. (Author/TS)
Computer codes for RF cavity design
Ko, K.
1992-08-01
In RF cavity design, numerical modeling is assuming an increasingly important role with the help of sophisticated computer codes and powerful yet affordable computers. A description of the cavity codes in use in the accelerator community has been given previously. The present paper will address the latest developments and discuss their applications to cavity tuning and matching problems.
Computer Code Aids Design Of Wings
NASA Technical Reports Server (NTRS)
Carlson, Harry W.; Darden, Christine M.
1993-01-01
AERO2S computer code developed to aid design engineers in selection and evaluation of aerodynamically efficient wing/canard and wing/horizontal-tail configurations that includes simple hinged-flap systems. Code rapidly estimates longitudinal aerodynamic characteristics of conceptual airplane lifting-surface arrangements. Developed in FORTRAN V on CDC 6000 computer system, and ported to MS-DOS environment.
Computational Knowledge for Toroidal Confinement Physics: Part I
Chang, C. S.
2009-02-19
Basic high level computational knowledge for studying the toroidal confinement physics is discussed. Topics include the primacy hierarchy of simulation quantities in statistical plasma physics, importance of the nonlinear-multiscale self-organization phenomena in a computational study, different types of codes for different applications, and different types of computer architectures for different types of codes.
Volume accumulator design analysis computer codes
NASA Technical Reports Server (NTRS)
Whitaker, W. D.; Shimazaki, T. T.
1973-01-01
The computer codes, VANEP and VANES, were written and used to aid in the design and performance calculation of the volume accumulator units (VAU) for the 5-kwe reactor thermoelectric system. VANEP computes the VAU design which meets the primary coolant loop VAU volume and pressure performance requirements. VANES computes the performance of the VAU design, determined from the VANEP code, at the conditions of the secondary coolant loop. The codes can also compute the performance characteristics of the VAU's under conditions of possible modes of failure which still permit continued system operation.
Network Coding for Function Computation
ERIC Educational Resources Information Center
Appuswamy, Rathinakumar
2011-01-01
In this dissertation, the following "network computing problem" is considered. Source nodes in a directed acyclic network generate independent messages and a single receiver node computes a target function f of the messages. The objective is to maximize the average number of times f can be computed per network usage, i.e., the "computing…
Computational Physics in the Undergraduate Physics Curriculum
NASA Astrophysics Data System (ADS)
Hasbun, J. E.
2006-03-01
Recent efforts to incorporate computational physics in the undergraduate physics curriculum have made use of Matlab, IDL, Maple, Mathematica, Fortran, and C^1 as well as Java.^2 The benefits of similar efforts in our undergraduate physics curriculum are that students learn ways to go beyond what they learn in the classroom and use computational techniques to explore realistic physics applications. In so doing students become better prepared to perform undergraduate research that will be useful throughout their scientific careers.^3 Our standard computational physics course uses some of the above tools.^1 More recently, we have developed a first draft of a textbook for the junior level mechanics physics course that incorporates computational techniques. In addition to employing the invaluable traditional analytical approach to problem solving, the text being developed incorporates computational physics to build on those problems. In particular, the course makes use of students' abilities to use programming to go beyond the analytical approach and complement their understanding. Selected examples of representative lecture problems will be presented. ^1 ``Computation and Problem Solving in Undergraduate Physics,'' David M. Cook, Lawrence University (2003), http://www.lawrence.edu/dept/physics/ccli. ^2 ``Simulations in Physics: Applications to Physical Systems,'' H. Gould, J. Tobochnik, and W Christian; see also, http://www.opensourcephysics.org. ^3 R. Landau, APS Bull. Vol 50, No.1, 1069 (2005)
Computational Accelerator Physics Working Group Summary
Cary, John R.; Bohn, Courtlandt L.
2004-08-27
The working group on computational accelerator physics at the 11th Advanced Accelerator Concepts Workshop held a series of meetings during the Workshop. Verification, i.e., showing that a computational application correctly solves the assumed model, and validation, i.e., showing that the model correctly describes the modeled system, were discussed for a number of systems. In particular, the predictions of the massively parallel codes, OSIRIS and VORPAL, used for modeling advanced accelerator concepts, were compared and shown to agree, thereby establishing some verification of both codes. In addition, a number of talks on the status and frontiers of computational accelerator physics were presented, to include the modeling of ultrahigh-brightness electron photoinjectors and the physics of beam halo production. Finally, talks discussing computational needs were presented.
Physics Division computer facilities
Cyborski, D.R.; Teh, K.M.
1995-08-01
The Physics Division maintains several computer systems for data analysis, general-purpose computing, and word processing. While the VMS VAX clusters are still used, this past year saw a greater shift to the Unix Cluster with the addition of more RISC-based Unix workstations. The main Divisional VAX cluster which consists of two VAX 3300s configured as a dual-host system serves as boot nodes and disk servers to seven other satellite nodes consisting of two VAXstation 3200s, three VAXstation 3100 machines, a VAX-11/750, and a MicroVAX II. There are three 6250/1600 bpi 9-track tape drives, six 8-mm tapes and about 9.1 GB of disk storage served to the cluster by the various satellites. Also, two of the satellites (the MicroVAX and VAX-11/750) have DAPHNE front-end interfaces for data acquisition. Since the tape drives are accessible cluster-wide via a software package, they are, in addition to replay, used for tape-to-tape copies. There is, however, a satellite node outfitted with two 8 mm drives available for this purpose. Although not part of the main cluster, a DEC 3000 Alpha machine obtained for data acquisition is also available for data replay. In one case, users reported a performance increase by a factor of 10 when using this machine.
High-Productivity Computing in Computational Physics Education
NASA Astrophysics Data System (ADS)
Tel-Zur, Guy
2011-03-01
We describe the development of a new course in Computational Physics at the Ben-Gurion University. This elective course for 3rd year undergraduates and MSc. students is being taught during one semester. Computational Physics is by now well accepted as the Third Pillar of Science. This paper's claim is that modern Computational Physics education should deal also with High-Productivity Computing. The traditional approach of teaching Computational Physics emphasizes ``Correctness'' and then ``Accuracy,'' and we add also ``Performance.'' Along with topics in Mathematical Methods and case studies in Physics, the course devotes a significant amount of time to ``Mini-Courses'' in topics such as: High-Throughput Computing - Condor, Parallel Programming - MPI and OpenMP, How to build a Beowulf, Visualization, and Grid and Cloud Computing. The course intends to teach neither new physics nor new mathematics; rather, it is focused on an integrated approach for solving problems, starting from the physics problem, the corresponding mathematical solution, and the numerical scheme, to writing an efficient computer code and finally analysis and visualization.
Thermal Hydraulic Computer Code System.
Energy Science and Technology Software Center (ESTSC)
1999-07-16
Version 00 RELAP5 was developed to describe the behavior of a light water reactor (LWR) subjected to postulated transients such as loss of coolant from large or small pipe breaks, pump failures, etc. RELAP5 calculates fluid conditions such as velocities, pressures, densities, qualities, temperatures; thermal conditions such as surface temperatures, temperature distributions, heat fluxes; pump conditions; trip conditions; reactor power and reactivity from point reactor kinetics; and control system variables. In addition to reactor applications, the program can be applied to transient analysis of other thermal-hydraulic systems with water as the fluid. This package contains RELAP5/MOD1/029 for CDC computers and RELAP5/MOD1/025 for VAX or IBM mainframe computers.
Computational plasma physics and supercomputers
Killeen, J.; McNamara, B.
1984-09-01
The supercomputers of the 1980s are introduced. They are 10 to 100 times more powerful than today's machines. The range of physics modeling in the fusion program is outlined. New machine architecture will influence particular codes, but parallel processing poses new coding difficulties. Increasing realism in simulations will require better numerics and more elaborate mathematics.
The Computational Physics Program of the national MFE Computer Center
Mirin, A.A.
1989-01-01
Since June 1974, the MFE Computer Center has been engaged in a significant computational physics effort. The principal objective of the Computational Physics Group is to develop advanced numerical models for the investigation of plasma phenomena and the simulation of present and future magnetic confinement devices. Another major objective of the group is to develop efficient algorithms and programming techniques for current and future generations of supercomputers. The Computational Physics Group has been involved in several areas of fusion research. One main area is the application of Fokker-Planck/quasilinear codes to tokamaks. Another major area is the investigation of resistive magnetohydrodynamics in three dimensions, with applications to tokamaks and compact toroids. A third area is the investigation of kinetic instabilities using a 3-D particle code; this work is often coupled with the task of numerically generating equilibria which model experimental devices. Ways to apply statistical closure approximations to study tokamak-edge plasma turbulence have been under examination, with the hope of being able to explain anomalous transport. Also, we are collaborating in an international effort to evaluate fully three-dimensional linear stability of toroidal devices. In addition to these computational physics studies, the group has developed a number of linear systems solvers for general classes of physics problems and has been making a major effort at ascertaining how to efficiently utilize multiprocessor computers. A summary of these programs is included in this paper. 6 tabs.
Establishing confidence in complex physics codes: Art or science?
Trucano, T.
1997-12-31
The ALEGRA shock wave physics code, currently under development at Sandia National Laboratories and partially supported by the US Advanced Strategic Computing Initiative (ASCI), is generic to a certain class of physics codes: large, multi-application, intended to support a broad user community on the latest generation of massively parallel supercomputer, and in a continual state of formal development. To say that the author has ``confidence'' in the results of ALEGRA is to say something different than that he believes that ALEGRA is ``predictive.'' It is the purpose of this talk to illustrate the distinction between these two concepts. The author elects to perform this task in a somewhat historical manner. He will summarize certain older approaches to code validation. He views these methods as aiming to establish the predictive behavior of the code. These methods are distinguished by their emphasis on local information. He will conclude that these approaches are more art than science.
Development of probabilistic multimedia multipathway computer codes.
Yu, C.; LePoire, D.; Gnanapragasam, E.; Arnish, J.; Kamboj, S.; Biwer, B. M.; Cheng, J.-J.; Zielen, A. J.; Chen, S. Y.; Mo, T.; Abu-Eid, R.; Thaggard, M.; Sallo, A., III.; Peterson, H., Jr.; Williams, W. A.; Environmental Assessment; NRC; EM
2002-01-01
The deterministic multimedia dose/risk assessment codes RESRAD and RESRAD-BUILD have been widely used for many years for evaluation of sites contaminated with residual radioactive materials. The RESRAD code applies to the cleanup of sites (soils) and the RESRAD-BUILD code applies to the cleanup of buildings and structures. This work describes the procedure used to enhance the deterministic RESRAD and RESRAD-BUILD codes for probabilistic dose analysis. A six-step procedure was used in developing default parameter distributions and the probabilistic analysis modules. These six steps include (1) listing and categorizing parameters; (2) ranking parameters; (3) developing parameter distributions; (4) testing parameter distributions for probabilistic analysis; (5) developing probabilistic software modules; and (6) testing probabilistic modules and integrated codes. The procedures used can be applied to the development of other multimedia probabilistic codes. The probabilistic versions of RESRAD and RESRAD-BUILD codes provide tools for studying the uncertainty in dose assessment caused by uncertain input parameters. The parameter distribution data collected in this work can also be applied to other multimedia assessment tasks and multimedia computer codes.
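The sampling-and-propagation loop at the heart of such a probabilistic module can be sketched in a few lines. The one-line dose kernel and the two input distributions below are hypothetical stand-ins for RESRAD's pathway models and default parameter distributions:

```python
import random
import statistics

def sample_dose(n_trials, seed=1):
    """Monte Carlo propagation of uncertain inputs through a dose model.

    Each trial draws one realization of every uncertain parameter from its
    assigned distribution; the spread of the resulting doses quantifies
    the parameter-driven uncertainty in the assessment.
    """
    rng = random.Random(seed)
    doses = []
    for _ in range(n_trials):
        soil_conc = rng.lognormvariate(0.0, 0.5)     # pCi/g, lognormal
        intake = rng.triangular(50.0, 200.0, 100.0)  # g/yr soil ingestion
        dcf = 1.0e-3  # dose conversion factor, mrem/pCi (fixed, illustrative)
        doses.append(soil_conc * intake * dcf)
    return doses

doses = sample_dose(10_000)
p95 = statistics.quantiles(doses, n=20)[-1]  # 95th-percentile dose
```

Reporting a high percentile of the output distribution, rather than a single deterministic number, is what distinguishes the probabilistic versions of these codes from their deterministic ancestors.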
LMFBR models for the ORIGEN2 computer code
Croff, A.G.; McAdoo, J.W.; Bjerke, M.A.
1981-10-01
Reactor physics calculations have led to the development of nine liquid-metal fast breeder reactor (LMFBR) models for the ORIGEN2 computer code. Four of the models are based on the U-Pu fuel cycle, two are based on the Th-U-Pu fuel cycle, and three are based on the Th-²³³U fuel cycle. The reactor models are based on cross sections taken directly from the reactor physics codes. Descriptions of the reactor models as well as values for the ORIGEN2 flux parameters THERM, RES, and FAST are given.
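The three flux parameters fold a three-part spectrum into one-group reaction rates: roughly, THERM weights the 2200 m/s cross section, RES the resonance integral, and FAST the fission-spectrum-averaged cross section. A sketch of that weighting convention (hedged: the function name is ours, and the ORIGEN2 manual should be consulted for the exact parameter definitions):

```python
def effective_cross_section(sigma_2200, res_integral, sigma_fission_avg,
                            therm, res, fast):
    """One-group effective cross section (barns) from ORIGEN2-style flux
    parameters: THERM scales the thermal (2200 m/s) cross section, RES the
    resonance integral, FAST the fission-spectrum-averaged cross section."""
    return therm * sigma_2200 + res * res_integral + fast * sigma_fission_avg
```

Multiplying this effective cross section by the total flux gives the reaction rate used in the depletion equations.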
LMFBR models for the ORIGEN2 computer code
Croff, A.G.; McAdoo, J.W.; Bjerke, M.A.
1983-06-01
Reactor physics calculations have led to the development of nine liquid-metal fast breeder reactor (LMFBR) models for the ORIGEN2 computer code. Four of the models are based on the U-Pu fuel cycle, two are based on the Th-U-Pu fuel cycle, and three are based on the Th-²³³U fuel cycle. The reactor models are based on cross sections taken directly from the reactor physics codes. Descriptions of the reactor models as well as values for the ORIGEN2 flux parameters THERM, RES, and FAST are given.
Computational-physics program of the National MFE Computer Center
Mirin, A.A.
1982-02-01
The computational physics group is involved in several areas of fusion research. One main area is the application of multidimensional Fokker-Planck, transport and combined Fokker-Planck/transport codes to both toroidal and mirror devices. Another major area is the investigation of linear and nonlinear resistive magnetohydrodynamics in two and three dimensions, with applications to all types of fusion devices. The MHD work is often coupled with the task of numerically generating equilibria which model experimental devices. In addition to these computational physics studies, investigations of more efficient numerical algorithms are being carried out.
Secure Computation from Random Error Correcting Codes
NASA Astrophysics Data System (ADS)
Chen, Hao; Cramer, Ronald; Goldwasser, Shafi; de Haan, Robbert; Vaikuntanathan, Vinod
Secure computation consists of protocols for secure arithmetic: secret values are added and multiplied securely by networked processors. The striking feature of secure computation is that security is maintained even in the presence of an adversary who corrupts a quorum of the processors and who exercises full, malicious control over them. One of the fundamental primitives at the heart of secure computation is secret-sharing. Typically, the required secret-sharing techniques build on Shamir's scheme, which can be viewed as a cryptographic twist on the Reed-Solomon error correcting code. In this work we further the connections between secure computation and error correcting codes. We demonstrate that threshold secure computation in the secure channels model can be based on arbitrary codes. For a network of size n, we then show a reduction in communication for secure computation amounting to a multiplicative logarithmic factor (in n) compared to classical methods for small, e.g., constant size fields, while tolerating t < (1/2 − ε)n players to be corrupted, where ε > 0 can be arbitrarily small. For large networks this implies considerable savings in communication. Our results hold in the broadcast/negligible error model of Rabin and Ben-Or, and complement results from CRYPTO 2006 for the zero-error model of Ben-Or, Goldwasser and Wigderson (BGW). Our general theory can be extended so as to encompass those results from CRYPTO 2006 as well. We also present a new method for constructing high information rate ramp schemes based on arbitrary codes, and in particular we give a new construction based on algebraic geometry codes.
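Shamir's scheme, the Reed-Solomon-flavored primitive the abstract starts from, fits in a few lines: the secret is the constant term of a random degree t−1 polynomial over a prime field, shares are evaluations at distinct points, and any t shares recover the secret by Lagrange interpolation at zero. A pedagogical sketch (not the ramp or algebraic-geometry constructions of the paper):

```python
import random

PRIME = 2**61 - 1  # Mersenne prime; all arithmetic is in GF(PRIME)

def share(secret, t, n, rng=random.Random(0)):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [rng.randrange(PRIME) for _ in range(t - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation of the polynomial
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # pow(den, PRIME - 2, PRIME) is the modular inverse (Fermat)
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

The shares are exactly a Reed-Solomon codeword of the message (secret, randomness), which is the view the paper generalizes to arbitrary codes.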
Computer codes developed and under development at Lewis
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
1992-01-01
The objective of this summary is to provide a brief description of: (1) codes developed or under development at LeRC; and (2) the development status of IPACS with some typical early results. The computer codes that have been developed and/or are under development at LeRC are listed in the accompanying charts. This list includes: (1) the code acronym; (2) select physics descriptors; (3) current enhancements; and (4) present (9/91) code status with respect to its availability and documentation. The computer codes list is grouped by related functions such as: (1) composite mechanics; (2) composite structures; (3) integrated and 3-D analysis; (4) structural tailoring; and (5) probabilistic structural analysis. These codes provide a broad computational simulation infrastructure (technology base-readiness) for assessing the structural integrity/durability/reliability of propulsion systems. These codes serve two other very important functions: they provide an effective means of technology transfer; and they constitute a depository of corporate memory.
Computer design code for conical ribbon parachutes
Waye, D.E.
1986-01-01
An interactive computer design code has been developed to aid in the design of conical ribbon parachutes. The program is written to include single conical and polyconical parachute designs. The code determines the pattern length, vent diameter, radial length, ribbon top and bottom lengths, and geometric local and average porosity for the designer with inputs of constructed diameter, ribbon widths, ribbon spacings, radial width, and number of gores. The gores are designed with one mini-radial in the center with an option for the addition of two outer mini-radials. The output provides all of the dimensions necessary for the construction of the parachute. These results could also be used as input into other computer codes used to predict parachute loads.
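As a toy example of the geometric-porosity bookkeeping such a design code performs, the open fraction of a ribbon canopy can be computed from ribbon widths and spacings (a flat-panel approximation invented for illustration, not the Sandia code's actual algorithm):

```python
def geometric_porosity(ribbon_width, ribbon_spacing):
    """Local geometric porosity of a ribbon panel: open fraction of the
    surface, assuming one gap of `ribbon_spacing` per ribbon pitch."""
    pitch = ribbon_width + ribbon_spacing
    return ribbon_spacing / pitch

def average_porosity(widths, spacings):
    """Average porosity over a stack of ribbon/gap pairs, weighting each
    pair by its pitch (illustrative; a real gore weights by panel area)."""
    total_open = sum(spacings)
    total = sum(w + s for w, s in zip(widths, spacings))
    return total_open / total
```

A designer would iterate ribbon widths and spacings until the average porosity lands in the target range for the intended opening loads.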
Thermoelectric pump performance analysis computer code
NASA Technical Reports Server (NTRS)
Johnson, J. L.
1973-01-01
A computer program is presented that was used to analyze and design dual-throat electromagnetic dc conduction pumps for the 5-kwe ZrH reactor thermoelectric system. In addition to a listing of the code and corresponding identification of symbols, the bases for this analytical model are provided.
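The idealized developed pressure of a dc conduction pump follows directly from the J × B body force: integrating the force density along the duct gives Δp = I·B/h, with h the channel dimension parallel to the magnetic field. A sketch of that lossless idealization (not the listed NASA code, which also models end effects and circuit losses):

```python
def dc_pump_pressure_rise(current_A, field_T, channel_height_m):
    """Ideal developed pressure (Pa) of a dc conduction EM pump.

    The J x B body force integrated along the duct gives dp = I*B/h,
    where h is the channel dimension parallel to the magnetic field;
    fringe fields and back-EMF losses are neglected.
    """
    return current_A * field_T / channel_height_m
```

For example, 1000 A across a 0.5 T field in a 1 cm channel develops on the order of 50 kPa before losses.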
COLD-SAT Dynamic Model Computer Code
NASA Technical Reports Server (NTRS)
Bollenbacher, G.; Adams, N. S.
1995-01-01
COLD-SAT Dynamic Model (CSDM) computer code implements six-degree-of-freedom, rigid-body mathematical model for simulation of spacecraft in orbit around Earth. Investigates flow dynamics and thermodynamics of subcritical cryogenic fluids in microgravity. Consists of three parts: translation model, rotation model, and slosh model. Written in FORTRAN 77.
Efficient tree codes on SIMD computer architectures
NASA Astrophysics Data System (ADS)
Olson, Kevin M.
1996-11-01
This paper describes changes made to a previous implementation of an N-body tree code developed for a fine-grained, SIMD computer architecture. These changes include (1) switching from a balanced binary tree to a balanced oct tree, (2) addition of quadrupole corrections, and (3) having the particles search the tree in groups rather than individually. An algorithm for limiting errors is also discussed. In aggregate, these changes have led to a performance increase of over a factor of 10 compared to the previous code. For problems several times larger than the processor array, the code now achieves performance levels of ~ 1 Gflop on the Maspar MP-2 or roughly 20% of the quoted peak performance of this machine. This percentage is competitive with other parallel implementations of tree codes on MIMD architectures. This is significant, considering the low relative cost of SIMD architectures.
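A sketch of the group-based tree walk described in change (3), assuming a simple opening-angle acceptance test (the paper's actual error-limiting algorithm is more refined; the function name and parameters here are illustrative):

```python
import numpy as np

def accept_cell(cell_center, cell_size, group_center, group_radius, theta=0.5):
    """Accept a tree cell's multipole expansion for an entire particle group:
    the cell must be well separated from every particle in the group, so all
    group members can share one traversal. 'theta' is the opening angle."""
    # distance from the cell to the nearest possible member of the group
    d = np.linalg.norm(np.asarray(cell_center) - np.asarray(group_center)) - group_radius
    return d > 0 and cell_size / d < theta
```

Walking the tree once per group rather than once per particle amortizes the traversal cost, which is what makes the approach attractive on a SIMD machine where all processors execute the same instruction stream.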
Computational... Physics Education: Letting physics learning drive the computational learning
NASA Astrophysics Data System (ADS)
Chonacky, Norman
2011-03-01
For several years I have been part of a team researching and rethinking why physicists are more willing to admit the value of computational modeling than to include it in what they teach. We have concluded that undergraduate faculty face characteristic barriers that discourage them from starting to integrate computation into their courses. Computational tools and resources are already developed and freely available for them to use. But there loom ill-defined "costs" to their course learning objectives and to them personally as instructors in undertaking this. In an attempt to understand these issues more deeply, I placed myself in the mindset of a relative novice to computational applications. My approach: focus on a physics problem first and then on the computation needed to address it. I asked: could I deepen my understanding of physics while simultaneously mastering new computational skills? My results may aid appreciation of the plight of both a novice professor contemplating the introduction of computation into a course and the students taking it. These may also provide insight into practical ways that computational physics might be integrated into an entire undergraduate curriculum. Research support from: Shodor Education Foundation; IEEE-Computer Society; and Teragrid Project. Research collaboration from Partnership for Integration of Computation into Undergraduate Physics.
Concatenated codes for fault tolerant quantum computing
Knill, E.; Laflamme, R.; Zurek, W.
1995-05-01
The application of concatenated codes to fault tolerant quantum computing is discussed. We have previously shown that for quantum memories and quantum communication, a state can be transmitted with error ε provided each gate has error at most cε. We show how this can be used with Shor's fault tolerant operations to reduce the accuracy requirements when maintaining states not currently participating in the computation. Viewing Shor's fault tolerant operations as a method for reducing the error of operations, we give a concatenated implementation which promises to propagate the reduction hierarchically. This has the potential of reducing the accuracy requirements in long computations.
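The hierarchical error reduction promised here follows the familiar concatenation scaling, in which each level squares the ratio of the error rate to the threshold. A small illustrative calculation using the textbook formula (not taken from this paper):

```python
def concatenated_error(p, p_threshold, levels):
    """Textbook threshold-theorem scaling: after k levels of concatenation,
    a code correcting one error maps a physical error rate p to roughly
    p_threshold * (p / p_threshold) ** (2 ** k).  Doubly-exponential
    suppression is what makes long computations feasible once p is below
    the threshold."""
    return p_threshold * (p / p_threshold) ** (2 ** levels)
```

For example, with p = 1e-4 and a threshold of 1e-2, two levels of concatenation already push the effective error rate to about 1e-10.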
Undergraduate computational physics projects on quantum computing
NASA Astrophysics Data System (ADS)
Candela, D.
2015-08-01
Computational projects on quantum computing suitable for students in a junior-level quantum mechanics course are described. In these projects students write their own programs to simulate quantum computers. Knowledge is assumed of introductory quantum mechanics through the properties of spin 1/2. Initial, more easily programmed projects treat the basics of quantum computation, quantum gates, and Grover's quantum search algorithm. These are followed by more advanced projects to increase the number of qubits and implement Shor's quantum factoring algorithm. The projects can be run on a typical laptop or desktop computer, using most programming languages. Supplementing resources available elsewhere, the projects are presented here in a self-contained format especially suitable for a short computational module for physics students.
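As a flavor of the initial projects, Grover's search for a few qubits can be simulated directly on the state vector; this sketch is illustrative, not the article's own code:

```python
import numpy as np

def grover_search(n_qubits, marked, n_iter):
    """Minimal state-vector simulation of Grover's search.  The 'oracle'
    flips the sign of the marked basis state; the 'diffusion' step inverts
    every amplitude about the mean."""
    N = 2 ** n_qubits
    psi = np.full(N, 1 / np.sqrt(N))        # uniform superposition
    for _ in range(n_iter):
        psi[marked] *= -1                   # oracle: phase-flip marked state
        psi = 2 * psi.mean() - psi          # diffusion: invert about the mean
    return np.abs(psi) ** 2                 # measurement probabilities

# ~ (pi/4) * sqrt(8) suggests 2 iterations for 3 qubits
probs = grover_search(3, marked=5, n_iter=2)
```

After two iterations on eight basis states, nearly all of the probability is concentrated on the marked state, which students can verify against the analytic success probability.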
User's manual for HDR3 computer code
Arundale, C.J.
1982-10-01
A description of the HDR3 computer code and instructions for its use are provided. HDR3 calculates space heating costs for a hot dry rock (HDR) geothermal space heating system. The code also compares these costs to those of a specific oil heating system in use at the National Aeronautics and Space Administration Flight Center at Wallops Island, Virginia. HDR3 allows many HDR system parameters to be varied so that the user may examine various reservoir management schemes and may optimize reservoir design to suit a particular set of geophysical and economic parameters.
Framework for Physics Computation
Schwan, Karsten
2012-01-13
The Georgia Tech team has been working in collaboration with ORNL and Rutgers on improved I/O for petascale fusion codes, specifically, to integrate staging methods into the ADIOS framework. As part of this on-going work, we have released the DataTap server as part of the ADIOS release, and we have been working on improving the ‘in situ’ processing capabilities of the ADIOS framework. In particular, we have been moving forward with a design that adds additional metadata to describe the layout and structure of data that is being moved for I/O purposes, building on the FFS type system developed in our past research.
The computational physics program of the National MFE Computer Center
Mirin, A.A.
1988-01-01
The principal objective of the Computational Physics Group is to develop advanced numerical models for the investigation of plasma phenomena and the simulation of present and future magnetic confinement devices. Another major objective of the group is to develop efficient algorithms and programming techniques for current and future generations of supercomputers. The computational physics group is involved in several areas of fusion research. One main area is the application of Fokker-Planck/quasilinear codes to tokamaks. A second major area is the investigation of resistive magnetohydrodynamics in three dimensions, with applications to compact toroids. A third is the investigation of kinetic instabilities using a 3-D particle code. This work is often coupled with the task of numerically generating equilibria which model experimental devices. Ways to apply statistical closure approximations to study tokamak-edge plasma turbulence are being examined. In addition to these computational physics studies, the group has developed a number of linear systems solvers for general classes of physics problems and has been making a major effort at ascertaining how to efficiently utilize multiprocessor computers.
Computational physics program of the National MFE Computer Center
Mirin, A.A.
1980-08-01
The computational physics group is involved in several areas of fusion research. One main area is the application of multidimensional Fokker-Planck, transport, and combined Fokker-Planck/transport codes to both toroidal and mirror devices. Another major area is the investigation of linear and nonlinear resistive magnetohydrodynamics in two and three dimensions, with applications to all types of fusion studies. In addition, investigations of more efficient numerical algorithms are being carried out.
A Spectral Verification of the HELIOS-2 Lattice Physics Code
D. S. Crawford; B. D. Ganapol; D. W. Nigg
2012-11-01
Core modeling of the Advanced Test Reactor (ATR) at INL is currently undergoing a significant update through the Core Modeling Update Project. The intent of the project is to bring ATR core modeling in line with today's standard of computational efficiency and verification and validation practices. The HELIOS-2 lattice physics code is the lead code among several reactor physics codes dedicated to modernizing ATR core analysis. This presentation is concerned with an independent verification of the HELIOS-2 spectral representation, including the slowing-down and thermalization algorithm and its data dependency. Here, we describe and demonstrate a recently developed, simple cross-section generation algorithm based entirely on analytical multigroup parameters for both the slowing-down and thermal spectrum. The new capability features fine group detail to assess the flux and multiplication factor dependencies on cross-section data sets, using the fundamental infinite medium as an example.
Energy Science and Technology Software Center (ESTSC)
2013-04-18
The HotSpot Health Physics Codes were created to provide emergency response personnel and emergency planners with a fast, field-portable set of software tools for evaluating incidents involving radioactive material. The software is also used for safety analysis of facilities handling nuclear material. HotSpot provides a fast and usually conservative means for estimating the radiation effects associated with the short-term (less than 24 hours) atmospheric release of radioactive materials.
Energy Science and Technology Software Center (ESTSC)
2010-03-02
The HotSpot Health Physics Codes were created to provide emergency response personnel and emergency planners with a fast, field-portable set of software tools for evaluating incidents involving radioactive material. The software is also used for safety analysis of facilities handling nuclear material. HotSpot provides a fast and usually conservative means for estimating the radiation effects associated with the short-term (less than 24 hours) atmospheric release of radioactive materials.
The Physics of Quantum Computation
NASA Astrophysics Data System (ADS)
Falci, Giuseppe; Paladino, Elisabette
2015-10-01
Quantum Computation has emerged in the past decades as a consequence of down-scaling of electronic devices to the mesoscopic regime and of advances in the ability of controlling and measuring microscopic quantum systems. QC has many interdisciplinary aspects, ranging from physics and chemistry to mathematics and computer science. In these lecture notes we focus on physical hardware, present day challenges and future directions for design of quantum architectures.
Probabilistic structural analysis computer code (NESSUS)
NASA Technical Reports Server (NTRS)
Shiao, Michael C.
1988-01-01
Probabilistic structural analysis has been developed to analyze the effects of fluctuating loads, variable material properties, and uncertain analytical models, especially for high-performance structures such as SSME turbopump blades. The computer code NESSUS (Numerical Evaluation of Stochastic Structure Under Stress) was developed to serve as a primary computational tool for characterizing, by statistical description, the probabilistic structural response to stochastic environments. The code consists of three major modules: NESSUS/PRE, NESSUS/FEM, and NESSUS/FPI. NESSUS/PRE is a preprocessor which decomposes the spatially correlated random variables into a set of uncorrelated random variables using a modal analysis method. NESSUS/FEM is a finite element module which provides structural sensitivities to all the random variables considered. NESSUS/FPI is a fast probability integration module by which a cumulative distribution function or a probability density function is calculated.
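A brute-force Monte Carlo sketch of the quantity NESSUS/FPI computes by fast analytic approximation, the cumulative distribution function of a structural response; the response function and distributions below are hypothetical illustrations, not NESSUS internals:

```python
import numpy as np

def response_cdf(response, samples, thresholds):
    """Estimate P[g(X) <= z] for each threshold z by sampling the random
    inputs X and evaluating the response function g.  FPI replaces this
    sampling with fast analytic probability integration."""
    g = response(samples)
    return np.array([(g <= z).mean() for z in thresholds])

rng = np.random.default_rng(0)
# hypothetical inputs: load L ~ N(100, 10), section property A ~ N(2, 0.1)
X = rng.normal([100.0, 2.0], [10.0, 0.1], size=(100_000, 2))
# hypothetical response: stress-like ratio L / A
cdf = response_cdf(lambda x: x[:, 0] / x[:, 1], X, thresholds=[40, 50, 60, 70])
```

The resulting curve is the statistical description of the response referred to above; its derivative would estimate the probability density function.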
Developing Computational Physics in Nigeria
NASA Astrophysics Data System (ADS)
Akpojotor, Godfrey; Enukpere, Emmanuel; Akpojotor, Famous; Ojobor, Sunny
2009-03-01
Computer-based instruction is permeating the educational curricula of many countries owing to the realization that computational physics, which involves computer modeling, enhances the teaching/learning process when combined with theory and experiment. For students, it gives more insight and understanding in the learning process and thereby equips them with scientific and computing skills to excel in industrial and commercial environments as well as at the master's and doctoral levels. For teachers, among other benefits, the availability of open-access sites with both instructional and evaluation materials can improve their performance. With a growing population of students and new challenges to meet developmental goals, this paper examines the challenges and prospects of the current drive to develop computational physics as a university undergraduate programme, or as a choice of specialized modules or laboratories within the mainstream physics programme, in Nigerian institutions. In particular, the current effort of the Nigerian Computational Physics Working Group to design computational physics programmes that meet the developmental goals of the country is discussed.
TAIR: A transonic airfoil analysis computer code
NASA Technical Reports Server (NTRS)
Dougherty, F. C.; Holst, T. L.; Grundy, K. L.; Thomas, S. D.
1981-01-01
The operation of the TAIR (Transonic AIRfoil) computer code, which uses a fast, fully implicit algorithm to solve the conservative full-potential equation for transonic flow fields about arbitrary airfoils, is described on two levels of sophistication: simplified operation and detailed operation. The program organization and theory are elaborated to simplify modification of TAIR for new applications. Examples with input and output are given for a wide range of cases, including incompressible, subcritical compressible, and transonic calculations.
Computer code to assess accidental pollutant releases
Pendergast, M.M.; Huang, J.C.
1980-07-01
A computer code was developed to calculate the cumulative frequency distributions of relative concentrations of an air pollutant following an accidental release from a stack or from a building penetration such as a vent. The calculations of relative concentration are based on the Gaussian plume equations. The meteorological data used for the calculation are in the form of joint frequency distributions of wind and atmospheric stability.
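The relative-concentration calculation rests on the standard Gaussian plume equation with ground reflection; a sketch of the textbook form (the code's own dispersion-coefficient parameterizations and joint-frequency bookkeeping are not reproduced here):

```python
import numpy as np

def relative_concentration(y, z, u, sigma_y, sigma_z, h):
    """Gaussian plume relative concentration chi/Q (s/m^3) with ground
    reflection.  y, z: crosswind and vertical coordinates (m); u: wind
    speed (m/s); h: effective release height (m); sigma_y, sigma_z:
    dispersion coefficients (m) evaluated at the downwind distance of
    interest, which depend on atmospheric stability."""
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - h)**2 / (2 * sigma_z**2))
                + np.exp(-(z + h)**2 / (2 * sigma_z**2)))   # image source term
    return lateral * vertical / (2 * np.pi * u * sigma_y * sigma_z)
```

Evaluating this over the joint frequency distribution of wind speed, direction, and stability class yields the cumulative frequency distributions of relative concentration the abstract describes.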
Computing Challenges in Coded Mask Imaging
NASA Technical Reports Server (NTRS)
Skinner, Gerald
2009-01-01
This slide presentation reviews the complications and challenges in developing computer systems for coded mask imaging telescopes. The coded mask technique is used when there is no other way to build the telescope (i.e., when wide fields of view are needed, energies are too high for focusing optics or too low for Compton/tracker techniques, and very good angular resolution is required). The coded mask telescope is described, and the mask is reviewed. The coded masks for the INTErnational Gamma-Ray Astrophysics Laboratory (INTEGRAL) instruments are shown, and a chart showing the types of position-sensitive detectors used for coded mask telescopes is also reviewed. Slides describe the mechanism of recovering an image from the masked pattern. The correlation with the mask pattern is described. The matrix approach is reviewed, and other approaches to image reconstruction are described. Included in the presentation is a review of the Energetic X-ray Imaging Survey Telescope (EXIST) / High Energy Telescope (HET), with information about the mission, the operation of the telescope, a comparison of the EXIST/HET with the SWIFT/BAT, and details of the design of the EXIST/HET.
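The correlation-based image recovery the slides describe can be sketched in one dimension: for a URA-like mask, correlating the detector shadowgram with a balanced version of the mask pattern recovers the sky (real instruments use 2-D masks and the more refined matrix methods mentioned above):

```python
import numpy as np

def decode(detector, mask):
    """Cyclic cross-correlation decoding: map the open/closed mask (1/0) to
    a balanced +1/-1 decoding array G, then correlate the detector counts
    with G via FFT.  For a uniformly redundant array, a point source shows
    up as a single peak with flat sidelobes."""
    G = np.where(mask == 1, 1.0, -1.0)
    return np.real(np.fft.ifft(np.fft.fft(detector) * np.conj(np.fft.fft(G))))

# toy example: length-7 quadratic-residue mask, point source shifting the shadow
mask = np.array([0, 1, 1, 0, 1, 0, 0])
sky = decode(np.roll(mask, 3), mask)   # peak appears at the source position
```

The flat sidelobes of the toy example illustrate why URA-family patterns are preferred: the correlation sidelobes are constant and can be subtracted exactly.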
Hanford Meteorological Station computer codes: Volume 1, The GEN computer code
Buck, J.W.; Andrews, G.L.
1987-07-01
The Hanford Meteorological Station, operated by Pacific Northwest Laboratory, issues general weather forecasts twice a day. The GEN computer code is used to archive the 24-hour forecasts and apply quality assurance checks to the forecast data. This code accesses an input file, which contains the date and hour of the previous forecast, and an output file, which contains 24-hour forecasts for the current month. As part of the program, a data entry form consisting of 14 fields that describe various weather conditions must be filled in. The information on the form is appended to the current 24-hour monthly forecast file, which provides an archive for the 24-hour general weather forecasts. This report consists of several volumes documenting the various computer codes used at the Hanford Meteorological Station. This volume describes the implementation and operation of the GEN computer code at the station.
Hanford meteorological station computer codes: Volume 9, The quality assurance computer codes
Burk, K.W.; Andrews, G.L.
1989-02-01
The Hanford Meteorological Station (HMS) was established in 1944 on the Hanford Site to collect and archive meteorological data and provide weather forecasts and related services for the Hanford Site. The HMS is located approximately 1/2 mile east of the 200 West Area and is operated by PNL for the US Department of Energy. Meteorological data are collected from various sensors and equipment located on and off the Hanford Site. These data are stored in data bases on the Digital Equipment Corporation (DEC) VAX 11/750 at the HMS (hereafter referred to as the HMS computer). Files from those data bases are routinely transferred to the Emergency Management System (EMS) computer at the Unified Dose Assessment Center (UDAC). To ensure the quality and integrity of the HMS data, a set of Quality Assurance (QA) computer codes has been written. The codes will be routinely used by the HMS system manager or the data base custodian. The QA codes provide detailed output files that will be used in correcting erroneous data. The following sections in this volume describe the implementation and operation of the QA computer codes. The appendices contain detailed descriptions, flow charts, and source code listings of each computer code. 2 refs.
Development and application of computational aerothermodynamics flowfield computer codes
NASA Technical Reports Server (NTRS)
Venkatapathy, Ethiraj
1994-01-01
Research was performed in the area of computational modeling and application of hypersonic, high-enthalpy, thermo-chemical nonequilibrium flow (Aerothermodynamics) problems. A number of computational fluid dynamic (CFD) codes were developed and applied to simulate high altitude rocket-plume, the Aeroassist Flight Experiment (AFE), hypersonic base flow for planetary probes, the single expansion ramp nozzle (SERN) connected with the National Aerospace Plane, hypersonic drag devices, hypersonic ramp flows, ballistic range models, shock tunnel facility nozzles, transient and steady flows in the shock tunnel facility, arc-jet flows, thermochemical nonequilibrium flows around simple and complex bodies, axisymmetric ionized flows of interest to re-entry, unsteady shock induced combustion phenomena, high enthalpy pulsed facility simulations, and unsteady shock boundary layer interactions in shock tunnels. Computational modeling involved developing appropriate numerical schemes for the flows of interest and developing, applying, and validating appropriate thermochemical processes. As part of improving the accuracy of the numerical predictions, adaptive grid algorithms were explored, and a user-friendly, self-adaptive code (SAGE) was developed. Aerothermodynamic flows of interest included energy transfer due to strong radiation, and a significant level of effort was spent in developing computational codes for calculating radiation and radiation modeling. In addition, computational tools were developed and applied to predict the radiative heat flux and spectra that reach the model surface.
New developments in the Saphire computer codes
Russell, K.D.; Wood, S.T.; Kvarfordt, K.J.
1996-03-01
The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a suite of computer programs that were developed to create and analyze a probabilistic risk assessment (PRA) of a nuclear power plant. Many recent enhancements to this suite of codes have been made. This presentation will provide an overview of these features and capabilities. The presentation will include a discussion of the new GEM module. This module greatly reduces and simplifies the work necessary to use the SAPHIRE code in event assessment applications. An overview of the features provided in the new Windows version will also be provided. This version is a full Windows 32-bit implementation and offers many new and exciting features. [A separate computer demonstration was held to allow interested participants to get a preview of these features.] The new capabilities that have been added since version 5.0 will be covered. Some of these major new features include the ability to store an unlimited number of basic events, gates, systems, sequences, etc.; the addition of improved reporting capabilities to allow the user to generate and "scroll" through custom reports; the addition of multi-variable importance measures; and the simplification of the user interface. Although originally designed as a PRA Level 1 suite of codes, capabilities have recently been added to SAPHIRE to allow the user to apply the code in Level 2 analyses. These features will be discussed in detail during the presentation. The modifications and capabilities added to this version of SAPHIRE significantly extend the code in many important areas. Together, these extensions represent a major step forward in PC-based risk analysis tools. This presentation provides a current up-to-date status of these important PRA analysis tools.
MAGNUM-2D computer code: user's guide
England, R.L.; Kline, N.W.; Ekblad, K.J.; Baca, R.G.
1985-01-01
Information relevant to the general use of the MAGNUM-2D computer code is presented. This computer code was developed for the purpose of modeling (i.e., simulating) the thermal and hydraulic conditions in the vicinity of a waste package emplaced in a deep geologic repository. The MAGNUM-2D computer code computes (1) the temperature field surrounding the waste package as a function of the heat generation rate of the nuclear waste and thermal properties of the basalt and (2) the hydraulic head distribution and associated groundwater flow fields as a function of the temperature gradients and hydraulic properties of the basalt. MAGNUM-2D is a two-dimensional numerical model for transient or steady-state analysis of coupled heat transfer and groundwater flow in a fractured porous medium. The governing equations consist of a set of coupled, quasi-linear partial differential equations that are solved using a Galerkin finite-element technique. A Newton-Raphson algorithm is embedded in the Galerkin functional to formulate the problem in terms of the incremental changes in the dependent variables. Both triangular and quadrilateral finite elements are used to represent the continuum portions of the spatial domain. Line elements may be used to represent discrete conduits. 18 refs., 4 figs., 1 tab.
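The embedded Newton-Raphson iteration on incremental changes in the dependent variables can be sketched generically as follows (a sketch of the standard scheme; MAGNUM-2D's actual Galerkin finite-element formulation in FORTRAN is not reproduced here):

```python
import numpy as np

def newton_raphson(residual, jacobian, u0, tol=1e-10, max_iter=50):
    """Solve R(u) = 0 for the nodal unknowns u by repeated linearization:
    at each step solve J(u) du = -R(u) for the increment du, then update
    u += du until the increment is negligible."""
    u = np.array(u0, dtype=float)
    for _ in range(max_iter):
        du = np.linalg.solve(jacobian(u), -residual(u))
        u += du
        if np.linalg.norm(du) < tol:
            return u
    raise RuntimeError("Newton iteration did not converge")
```

In a coupled heat/flow problem, u would gather the nodal temperatures and hydraulic heads, and J would be the tangent matrix assembled from the Galerkin discretization.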
Majorana Fermion Surface Code for Universal Quantum Computation
NASA Astrophysics Data System (ADS)
Vijay, Sagar; Hsieh, Tim; Fu, Liang
We introduce an exactly solvable model of interacting Majorana fermions realizing Z2 topological order with a Z2 fermion parity grading and lattice symmetries permuting the three fundamental anyon types. We propose a concrete physical realization by utilizing quantum phase slips in an array of Josephson-coupled mesoscopic topological superconductors, which can be implemented in a wide range of solid state systems, including topological insulators, nanowires or two-dimensional electron gases, proximitized by s-wave superconductors. Our model finds a natural application as a Majorana fermion surface code for universal quantum computation, with a single-step stabilizer measurement requiring no physical ancilla qubits, increased error tolerance, and simpler logical gates than a surface code with bosonic physical qubits. We thoroughly discuss protocols for stabilizer measurements, encoding and manipulating logical qubits, and gate implementations.
Majorana Fermion Surface Code for Universal Quantum Computation
NASA Astrophysics Data System (ADS)
Vijay, Sagar; Hsieh, Timothy H.; Fu, Liang
2015-10-01
We introduce an exactly solvable model of interacting Majorana fermions realizing Z2 topological order with a Z2 fermion parity grading and lattice symmetries permuting the three fundamental anyon types. We propose a concrete physical realization by utilizing quantum phase slips in an array of Josephson-coupled mesoscopic topological superconductors, which can be implemented in a wide range of solid-state systems, including topological insulators, nanowires, or two-dimensional electron gases, proximitized by s -wave superconductors. Our model finds a natural application as a Majorana fermion surface code for universal quantum computation, with a single-step stabilizer measurement requiring no physical ancilla qubits, increased error tolerance, and simpler logical gates than a surface code with bosonic physical qubits. We thoroughly discuss protocols for stabilizer measurements, encoding and manipulating logical qubits, and gate implementations.
Collaborative Comparison of High-Energy-Density Physics Codes
NASA Astrophysics Data System (ADS)
Fatenejad, M.; Fryer, C.; Fryxell, B.; Lamb, D.; Myra, E.; Wohlbier, J.
2011-10-01
We will describe a collaborative effort involving the Flash Center for Computational Science, The Center for Radiative Shock Hydrodynamics (CRASH), LANL, and LLNL to compare several sophisticated radiation-hydrodynamics codes on a variety of HEDP test problems and experiments. Currently we are comparing efforts to simulate ongoing radiative shock experiments being conducted by CRASH at the OMEGA laser facility that are relevant to a wide range of astrophysical problems. The experiments drive a collapsed planar radiative shock through a Xenon-filled shock tube. Attempts to simulate these experiments have uncovered various challenges to obtaining agreement with experimental results. We will present the results of code-to-code comparisons that have enabled us to understand the impact of differences in numerical methods, physical approximations, microphysical parameters, etc. This work was supported in part by the US Department of Energy.
Analog system for computing sparse codes
Rozell, Christopher John; Johnson, Don Herrick; Baraniuk, Richard Gordon; Olshausen, Bruno A.; Ortman, Robert Lowell
2010-08-24
A parallel dynamical system for computing sparse representations of data, i.e., where the data can be fully represented in terms of a small number of non-zero code elements, and for reconstructing compressively sensed images. The system is based on the principles of thresholding and local competition, and solves a family of sparse approximation problems corresponding to various sparsity metrics. The system utilizes Locally Competitive Algorithms (LCAs), in which nodes in a population continually compete with neighboring units using (usually one-way) lateral inhibition to calculate coefficients representing an input in an overcomplete dictionary.
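A minimal discrete-time sketch of the LCA dynamics (the standard thresholding/lateral-inhibition form from the published algorithm family; the patented analog circuit implementation is not reproduced here):

```python
import numpy as np

def lca_sparse_code(x, Phi, lam=0.1, tau=0.01, n_steps=400):
    """Locally Competitive Algorithm sketch: internal states u are driven
    toward b = Phi^T x, while active nodes a = T_lam(u) (soft threshold)
    inhibit each other through the dictionary's Gram matrix, yielding a
    sparse code a with x ~= Phi a."""
    b = Phi.T @ x                                  # feedforward drive
    G = Phi.T @ Phi - np.eye(Phi.shape[1])         # lateral inhibition weights
    u = np.zeros(Phi.shape[1])
    for _ in range(n_steps):
        a = np.where(np.abs(u) > lam, u - lam * np.sign(u), 0.0)
        u += tau * (b - u - G @ a)                 # leaky competitive dynamics
    return np.where(np.abs(u) > lam, u - lam * np.sign(u), 0.0)
```

Because each node's update uses only its own drive and inhibition from currently active neighbors, the dynamics map naturally onto parallel analog hardware.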
Spiking network simulation code for petascale computers
Kunkel, Susanne; Schmidt, Maximilian; Eppler, Jochen M.; Plesser, Hans E.; Masumoto, Gen; Igarashi, Jun; Ishii, Shin; Fukai, Tomoki; Morrison, Abigail; Diesmann, Markus; Helias, Moritz
2014-01-01
Brain-scale networks exhibit a breathtaking heterogeneity in the dynamical properties and parameters of their constituents. At cellular resolution, the entities of theory are neurons and synapses and over the past decade researchers have learned to manage the heterogeneity of neurons and synapses with efficient data structures. Already early parallel simulation codes stored synapses in a distributed fashion such that a synapse solely consumes memory on the compute node harboring the target neuron. As petaflop computers with some 100,000 nodes become increasingly available for neuroscience, new challenges arise for neuronal network simulation software: Each neuron contacts on the order of 10,000 other neurons and thus has targets only on a fraction of all compute nodes; furthermore, for any given source neuron, at most a single synapse is typically created on any compute node. From the viewpoint of an individual compute node, the heterogeneity in the synaptic target lists thus collapses along two dimensions: the dimension of the types of synapses and the dimension of the number of synapses of a given type. Here we present a data structure taking advantage of this double collapse using metaprogramming techniques. After introducing the relevant scaling scenario for brain-scale simulations, we quantitatively discuss the performance on two supercomputers. We show that the novel architecture scales to the largest petascale supercomputers available today. PMID:25346682
Plasma physics via computer simulation
Birdsall, C.K.; Langdon, A.B.
1985-01-01
This book describes the computerized simulation of plasma kinetics. Topics considered include why attempting to do plasma physics via computer simulation using particles makes good physical sense; overall view of a one-dimensional electrostatic program; a one-dimensional electrostatic program; introduction to the numerical methods used; a 1d electromagnetic program; projects for EM1; effects of the spatial grid; effects of the finite time step; energy-conserving simulation models; multipole models; kinetic theory for fluctuations and noise; collisions; statistical mechanics of a sheet plasma; electrostatic programs in two and three dimensions; electromagnetic programs in 2D and 3D; design of computer experiments; and the choice of parameters.
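In the spirit of the book's one-dimensional electrostatic program, the basic PIC cycle (weight particles to a grid, solve for the field, push particles) can be sketched as follows; this uses nearest-grid-point weighting and an FFT field solve for brevity and is a sketch in normalized units, not the book's code:

```python
import numpy as np

def pic_1d_electrostatic(n_steps=200, n_part=10_000, n_grid=64, L=2 * np.pi,
                         dt=0.05, amplitude=0.01):
    """Minimal 1-D electrostatic PIC loop with a uniform ion background.
    Normalized units: plasma frequency = 1, electron charge/mass = -1.
    Returns the mean kinetic energy per step, which oscillates as the
    seeded density wave rings at the plasma frequency."""
    dx = L / n_grid
    x = np.linspace(0, L, n_part, endpoint=False)
    x += amplitude * np.sin(2 * np.pi * x / L)       # seed a density wave
    v = np.zeros(n_part)
    k = 2 * np.pi * np.fft.fftfreq(n_grid, d=dx)     # angular wavenumbers
    k[0] = 1.0                                       # avoid divide-by-zero
    energies = []
    for _ in range(n_steps):
        cell = (x / dx).astype(int) % n_grid
        # charge density: uniform ions (+1) minus electron counts (NGP weighting)
        rho = 1.0 - np.bincount(cell, minlength=n_grid) / (n_part / n_grid)
        phi_k = np.fft.fft(rho) / k**2               # Poisson: -d2phi/dx2 = rho
        phi_k[0] = 0.0                               # zero-mean potential
        E = np.real(np.fft.ifft(-1j * k * phi_k))    # E = -dphi/dx
        v -= E[cell] * dt                            # kick (q/m = -1)
        x = (x + v * dt) % L                         # drift, periodic box
        energies.append(0.5 * np.mean(v**2))
    return np.array(energies)
```

The book's ES1 program uses linear (CIC) weighting and a properly staggered leapfrog start; those refinements reduce grid noise but do not change the structure of the cycle shown here.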
Methodology for computational fluid dynamics code verification/validation
Oberkampf, W.L.; Blottner, F.G.; Aeschliman, D.P.
1995-07-01
The issues of verification, calibration, and validation of computational fluid dynamics (CFD) codes have been receiving increasing levels of attention in the research literature and in engineering technology. Both CFD researchers and users of CFD codes are asking more critical and detailed questions concerning the accuracy, range of applicability, reliability, and robustness of CFD codes and their predictions. This is a welcome trend because it demonstrates that CFD is maturing from a research tool into one that impacts engineering hardware and system design. In this environment, the broad issue of code quality assurance becomes paramount. However, the philosophy and methodology of building confidence in CFD code predictions have proven to be more difficult than many expected. A wide variety of physical modeling errors and discretization errors are discussed. Here, discretization errors refer to all errors caused by conversion of the original partial differential equations to algebraic equations, and their solution. Boundary conditions for both the partial differential equations and the discretized equations are discussed. Contrasts are drawn between the assumptions and actual use of numerical method consistency and stability. Comments are also made concerning the existence and uniqueness of solutions for both the partial differential equations and the discrete equations. Various techniques are suggested for the detection and estimation of errors caused by physical modeling and discretization of the partial differential equations.
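One standard discretization-error technique consistent with the paper's theme (a generic grid-convergence sketch, not an algorithm taken from the paper) is to estimate the observed order of accuracy from solutions on three systematically refined grids:

```python
import numpy as np

def observed_order(f_coarse, f_medium, f_fine, r):
    """From a solution functional computed on three grids with a constant
    refinement ratio r, the observed order of accuracy is
        p = ln((f_coarse - f_medium) / (f_medium - f_fine)) / ln(r).
    Agreement of p with the scheme's formal order is evidence that the
    computation is in the asymptotic convergence range."""
    return np.log((f_coarse - f_medium) / (f_medium - f_fine)) / np.log(r)
```

With p in hand, Richardson extrapolation gives an estimate of the grid-converged value and hence of the discretization error on the finest grid.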
Recommended documentation plan for the FLAG and CHEMFLUB computer codes
1983-09-02
Reviews have been conducted of both the FLAG and CHEMFLUB documentation and computer codes. The documentation of both models is (1) incomplete, (2) confusing, (3) not helpful to the reader, (4) filled with extraneous information, and (5) lacking the claimed versatility in analyzing coal gasifier systems. The documentation is such that the computer coding itself must be used as a reference to complete the documentation. Once the codes are set up, they are relatively easy to run. We have exercised both of them. Most of our efforts thus far have been concentrated on FLAG because of its importance and complexity. FLAG in its present form cannot be expected to yield meaningful data applicable to coal gasifier systems. The reasons for this are twofold. First, the model is incorrect in describing some aspects of fluid-particle behavior in coal gasifier systems. Second, the numerical formulation/solution methodology is incorrectly implemented and introduces spurious numerical effects, thereby obscuring the physics of the model. In brief, this means that the resulting calculations are not correctly related to the physics. CHEMFLUB, while less extensively exercised, appears best utilized as a tool for generating first approximations. We have concluded from these reviews that we cannot perform meaningful comparisons as required under tasks 3.3, 3.4, and 3.5 without first reconstructing and, where necessary, correcting the physical/numerical models. A plan is presented for accomplishing this reconstruction/modification.
ERIC Educational Resources Information Center
Adkins, Megan; Wajciechowski, Misti R.; Scantling, Ed
2013-01-01
Quick response codes, better known as QR codes, are small barcodes scanned to receive information about a specific topic. This article explains QR code technology and the utility of QR codes in the delivery of physical education instruction. Consideration is given to how QR codes can be used to accommodate learners of varying ability levels as…
Computer Graphics and Physics Teaching.
ERIC Educational Resources Information Center
Bork, Alfred M.; Ballard, Richard
New, more versatile and inexpensive terminals will make computer graphics more feasible in science instruction than before. This paper describes the use of graphics in physics teaching at the University of California at Irvine. Commands and software are detailed in established programs, which include a lunar landing simulation and a program which…
Relevance of Computational Rock Physics
NASA Astrophysics Data System (ADS)
Dvorkin, J. P.
2014-12-01
The advent of computational rock physics has brought to light an often ignored question: how applicable are controlled-experiment data acquired at one scale to interpreting measurements obtained at a different scale? An answer is not to use a single data point or even a few data points, but rather to find a trend that links two or more rock properties to each other in a selected rock type. In the physical laboratory, these trends are generated by measuring a significant number of samples. In contrast, in the computational laboratory, these trends are hidden inside a very small digital sample and can be derived by subsampling it. Often, the internal heterogeneity of measurable properties inside a small sample mimics the large-scale heterogeneity, making the trend applicable across a range of scales. Computational rock physics is uniquely tooled for finding such trends: although it is virtually impossible to subsample a physical sample and consistently conduct the same laboratory experiments on each of the subsamples, it is straightforward to accomplish this task in the computer.
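The subsampling workflow described above can be sketched generically: carve sub-volumes out of one digital sample, compute a property pair for each, and collect the pairs into a trend. A toy illustration on a synthetic porosity field; the grid size, porosity range, and linear velocity-porosity model are all illustrative assumptions, not values from the paper:

```python
import random

random.seed(0)
# Synthetic "digital sample": a 16x16x16 grid of local porosity values.
N = 16
sample = [[[random.uniform(0.1, 0.3) for _ in range(N)]
           for _ in range(N)] for _ in range(N)]

def subsample_mean(x0, y0, z0, w):
    """Mean porosity of a w^3 sub-volume with corner (x0, y0, z0)."""
    vals = [sample[x][y][z]
            for x in range(x0, x0 + w)
            for y in range(y0, y0 + w)
            for z in range(z0, z0 + w)]
    return sum(vals) / len(vals)

# Assumed toy physics: P-wave velocity decreases linearly with porosity.
velocity = lambda phi: 6.0 - 8.0 * phi  # km/s, illustrative only

# Subsample the single digital volume to generate (porosity, velocity) pairs.
points = [(phi, velocity(phi))
          for x0 in range(0, N, 8)
          for y0 in range(0, N, 8)
          for z0 in range(0, N, 8)
          for phi in [subsample_mean(x0, y0, z0, 8)]]
print(len(points))  # 8 sub-volumes, hence 8 trend points from one sample
```

In a real computational-rock-physics workflow each sub-volume would be run through the same numerical experiment (e.g., a digital elastic simulation) rather than a closed-form model.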
Computational physics of the mind
NASA Astrophysics Data System (ADS)
Duch, Włodzisław
1996-08-01
In the nineteenth century and earlier, physicists such as Newton, Mayer, Hooke, Helmholtz, and Mach were actively engaged in research on psychophysics, trying to relate psychological sensations to intensities of physical stimuli. Computational physics makes it possible to simulate complex neural processes, offering a chance to answer not only the original psychophysical questions but also to create models of the mind. In this paper several approaches relevant to modeling of the mind are outlined. Since direct modeling of brain functions is rather limited by the complexity of such models, a number of approximations are introduced. The path from the brain, or computational neurosciences, to the mind, or cognitive sciences, is sketched, with emphasis on higher cognitive functions such as memory and consciousness. No fundamental problems in the understanding of the mind seem to arise. From a computational point of view, realistic models require massively parallel architectures.
Computer codes for evaluation of control room habitability (HABIT)
Stage, S.A.
1996-06-01
This report describes the Computer Codes for Evaluation of Control Room Habitability (HABIT). HABIT is a package of computer codes designed to be used for the evaluation of control room habitability in the event of an accidental release of toxic chemicals or radioactive materials. Given information about the design of a nuclear power plant, a scenario for the release of toxic chemicals or radionuclides, and information about the air flows and protection systems of the control room, HABIT can be used to estimate the chemical exposure or radiological dose to control room personnel. HABIT is an integrated package of several programs that previously needed to be run separately and required considerable user intervention. This report discusses the theoretical basis and physical assumptions made by each of the modules in HABIT and gives detailed information about the data entry windows. Sample runs are given for each of the modules. A brief section of programming notes is included. A set of computer disks will accompany this report if the report is ordered from the Energy Science and Technology Software Center. The disks contain the files needed to run HABIT on a personal computer running DOS. Source codes for the various HABIT routines are on the disks. Also included are input and output files for three demonstration runs.
Computing Across the Physics and Astrophysics Curriculum
NASA Astrophysics Data System (ADS)
DeGioia Eastwood, Kathy; James, M.; Dolle, E.
2012-01-01
Computational skills are essential in today's marketplace. Bachelors entering the STEM workforce report that their undergraduate education does not adequately prepare them to use scientific software and to write programs. Computation can also increase student learning; not only are the students actively engaged, but computational problems allow them to explore physical problems that are more realistic than the few that can be solved analytically. We have received a grant from the NSF CCLI Phase I program to integrate computing into our upper division curriculum. Our language of choice is Matlab; this language had already been chosen for our required sophomore course in Computational Physics because of its prevalence in industry. For two summers we have held faculty workshops to help our professors develop the needed expertise, and we are now in the implementation and evaluation stage. The end product will be a set of learning materials in the form of computational modules that we will make freely available. These modules will include the assignment, pedagogical goals, Matlab code, samples of student work, and instructor comments. At this meeting we present an overview of the project as well as modules written for a course in upper division stellar astrophysics. We acknowledge the support of the NSF through DUE-0837368.
TAIR- TRANSONIC AIRFOIL ANALYSIS COMPUTER CODE
NASA Technical Reports Server (NTRS)
Dougherty, F. C.
1994-01-01
The Transonic Airfoil analysis computer code, TAIR, was developed to employ a fast, fully implicit algorithm to solve the conservative full-potential equation for the steady transonic flow field about an arbitrary airfoil immersed in a subsonic free stream. The full-potential formulation is considered exact under the assumptions of irrotational, isentropic, and inviscid flow. These assumptions are valid for a wide range of practical transonic flows typical of modern aircraft cruise conditions. The primary features of TAIR include: a new fully implicit iteration scheme which is typically many times faster than classical successive line overrelaxation algorithms; a new, reliable artificial density spatial differencing scheme treating the conservative form of the full-potential equation; and a numerical mapping procedure capable of generating curvilinear, body-fitted finite-difference grids about arbitrary airfoil geometries. Three aspects emphasized during the development of the TAIR code were reliability, simplicity, and speed. The reliability of TAIR comes from two sources: the new algorithm employed and the implementation of effective convergence monitoring logic. TAIR achieves ease of use by employing a "default mode" that greatly simplifies code operation, especially by inexperienced users, and many useful options including: several airfoil-geometry input options, flexible user controls over program output, and a multiple solution capability. The speed of the TAIR code is attributed to the new algorithm and the manner in which it has been implemented. Input to the TAIR program consists of airfoil coordinates, aerodynamic and flow-field convergence parameters, and geometric and grid convergence parameters. The airfoil coordinates for many airfoil shapes can be generated in TAIR from just a few input parameters. Most of the other input parameters have default values which allow the user to run an analysis in the default mode by specifying only a few input parameters.
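TAIR's speed claim is made relative to classical successive line overrelaxation. The relaxation idea itself can be shown on the simplest possible problem; this is a generic point-SOR sketch for a 1-D Laplace equation, not TAIR's algorithm:

```python
# Point SOR for u'' = 0 on [0, 1] with u(0) = 0, u(1) = 1; exact solution u(x) = x.
n = 21                      # grid points
u = [0.0] * n
u[-1] = 1.0                 # Dirichlet boundary conditions
omega = 1.7                 # over-relaxation factor (1 < omega < 2)

for _ in range(200):        # relaxation sweeps
    for i in range(1, n - 1):
        gauss_seidel = 0.5 * (u[i - 1] + u[i + 1])
        u[i] += omega * (gauss_seidel - u[i])

# Interior values relax toward the linear exact solution.
err = max(abs(u[i] - i / (n - 1)) for i in range(n))
print(err < 1e-6)  # True once the sweeps have converged
```

SLOR replaces the pointwise update with a simultaneous (tridiagonal) solve along each grid line; TAIR's fully implicit scheme goes further still, which is where its reported speedup comes from.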
Numerical uncertainty in computational engineering and physics
Hemez, Francois M
2009-01-01
Obtaining a solution that approximates ordinary or partial differential equations on a computational mesh or grid does not necessarily mean that the solution is accurate or even 'correct'. Unfortunately, assessing the quality of discrete solutions by questioning the role played by spatial and temporal discretizations generally comes as a distant third to test-analysis comparison and model calibration. This publication aims to raise awareness of the fact that discrete solutions introduce numerical uncertainty. This uncertainty may, in some cases, overwhelm in complexity and magnitude other sources of uncertainty that include experimental variability, parametric uncertainty and modeling assumptions. The concepts of consistency, convergence and truncation error are overviewed to explain the articulation between the exact solution of continuous equations, the solution of modified equations and discrete solutions computed by a code. The current state-of-the-practice of code and solution verification activities is discussed. An example in the discipline of hydro-dynamics illustrates the significant effect that meshing can have on the quality of code predictions. A simple method is proposed to derive bounds of solution uncertainty in cases where the exact solution of the continuous equations, or its modified equations, is unknown. It is argued that numerical uncertainty originating from mesh discretization should always be quantified and accounted for in the overall uncertainty 'budget' that supports decision-making for applications in computational physics and engineering.
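The truncation error the author overviews is easy to exhibit directly: a centered difference approximates a derivative with an O(h^2) error, so halving the step should cut the error roughly fourfold. A generic sketch, not from the publication:

```python
import math

def centered_diff(f, x, h):
    """Centered finite difference (f(x+h) - f(x-h)) / (2h); truncation error O(h^2)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

exact = math.cos(1.0)                    # d/dx sin(x) at x = 1
errors = [abs(centered_diff(math.sin, 1.0, h) - exact)
          for h in (0.1, 0.05, 0.025)]

# Each halving of h reduces the error by roughly a factor of four.
ratios = [errors[i] / errors[i + 1] for i in range(2)]
print(all(3.8 < r < 4.2 for r in ratios))  # True for a second-order scheme
```

Observing the ratio drift away from four on a real code is precisely the kind of signal that the mesh is not yet in the asymptotic convergence range.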
Enhanced Verification Test Suite for Physics Simulation Codes
Kamm, J R; Brock, J S; Brandon, S T; Cotrell, D L; Johnson, B; Knupp, P; Rider, W; Trucano, T; Weirs, V G
2008-10-10
This document discusses problems with which to augment, in quantity and in quality, the existing tri-laboratory suite of verification problems used by Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and Sandia National Laboratories (SNL). The purpose of verification analysis is to demonstrate whether the numerical results of the discretization algorithms in physics and engineering simulation codes provide correct solutions of the corresponding continuum equations. The key points of this document are: (1) Verification deals with mathematical correctness of the numerical algorithms in a code, while validation deals with physical correctness of a simulation in a regime of interest. This document is about verification. (2) The current seven-problem Tri-Laboratory Verification Test Suite, which has been used for approximately five years at the DOE WP laboratories, is limited. (3) Both the methodology for and technology used in verification analysis have evolved and been improved since the original test suite was proposed. (4) The proposed test problems are in three basic areas: (a) Hydrodynamics; (b) Transport processes; and (c) Dynamic strength-of-materials. (5) For several of the proposed problems we provide a 'strong sense verification benchmark', consisting of (i) a clear mathematical statement of the problem with sufficient information to run a computer simulation, (ii) an explanation of how the code result and benchmark solution are to be evaluated, and (iii) a description of the acceptance criterion for simulation code results. (6) It is proposed that the set of verification test problems with which any particular code be evaluated include some of the problems described in this document. Analysis of the proposed verification test problems constitutes part of a necessary--but not sufficient--step that builds confidence in physics and engineering simulation codes. More complicated test cases, including physics models of greater…
ICAN Computer Code Adapted for Building Materials
NASA Technical Reports Server (NTRS)
Murthy, Pappu L. N.
1997-01-01
The NASA Lewis Research Center has been involved in developing composite micromechanics and macromechanics theories over the last three decades. These activities have resulted in several composite mechanics theories and structural analysis codes whose applications range from material behavior design and analysis to structural component response. One of these computer codes, the Integrated Composite Analyzer (ICAN), is designed primarily to address issues related to designing polymer matrix composites and predicting their properties, including hygral, thermal, and mechanical load effects. Recently, under a cost-sharing cooperative agreement with a Fortune 500 corporation, Master Builders Inc., ICAN was adapted to analyze building materials. The high costs and technical difficulties involved with the fabrication of continuous-fiber-reinforced composites sometimes limit their use. Particulate-reinforced composites can be thought of as a viable alternative. They are as easily processed to near-net shape as monolithic materials, yet have the improved stiffness, strength, and fracture toughness that is characteristic of continuous-fiber-reinforced composites. For example, particle-reinforced metal-matrix composites show great potential for a variety of automotive applications, such as disk brake rotors, connecting rods, cylinder liners, and other high-temperature applications. Building materials, such as concrete, can be thought of as one of the oldest materials in this category of multiphase, particle-reinforced materials. The adaptation of ICAN to analyze particle-reinforced composite materials involved the development of new micromechanics-based theories. A derivative of the ICAN code, ICAN/PART, was developed and delivered to Master Builders Inc. as a part of the cooperative activity.
A surface code quantum computer in silicon.
Hill, Charles D; Peretz, Eldad; Hile, Samuel J; House, Matthew G; Fuechsle, Martin; Rogge, Sven; Simmons, Michelle Y; Hollenberg, Lloyd C L
2015-10-01
The exceptionally long quantum coherence times of phosphorus donor nuclear spin qubits in silicon, coupled with the proven scalability of silicon-based nano-electronics, make them attractive candidates for large-scale quantum computing. However, the high threshold of topological quantum error correction can only be captured in a two-dimensional array of qubits operating synchronously and in parallel-posing formidable fabrication and control challenges. We present an architecture that addresses these problems through a novel shared-control paradigm that is particularly suited to the natural uniformity of the phosphorus donor nuclear spin qubit states and electronic confinement. The architecture comprises a two-dimensional lattice of donor qubits sandwiched between two vertically separated control layers forming a mutually perpendicular crisscross gate array. Shared-control lines facilitate loading/unloading of single electrons to specific donors, thereby activating multiple qubits in parallel across the array on which the required operations for surface code quantum error correction are carried out by global spin control. The complexities of independent qubit control, wave function engineering, and ad hoc quantum interconnects are explicitly avoided. With many of the basic elements of fabrication and control based on demonstrated techniques and with simulated quantum operation below the surface code error threshold, the architecture represents a new pathway for large-scale quantum information processing in silicon and potentially in other qubit systems where uniformity can be exploited. PMID:26601310
Joshua J. Cogliati; Abderrafi M. Ougouag
2006-10-01
A comprehensive, high fidelity model for pebble flow has been developed and embodied in the PEBBLES computer code. In this paper, a description of the physical artifacts included in the model is presented and some results from using the computer code for predicting the features of pebble flow and packing in a realistic pebble bed reactor design are shown. The sensitivity of models to various physical parameters is also discussed.
NASA Technical Reports Server (NTRS)
Fishbach, L. H.
1983-01-01
The Weight Analysis of Turbine Engines (WATE) computer code was developed by Boeing under contract to NASA Lewis. It was designed to function as an adjunct to the Navy/NASA Engine Program (NNEP). NNEP calculates the design and off-design thrust and specific fuel consumption (sfc) performance of user-defined engine cycles. The thermodynamic parameters throughout the engine as generated by NNEP are then combined with input parameters defining the component characteristics in WATE to calculate the bare engine weight of this user-defined engine. Preprocessor programs for NNEP were previously developed to simplify the task of creating input datasets. This report describes a similar preprocessor for the WATE code.
Computational Physics for Space Flight Applications
NASA Technical Reports Server (NTRS)
Reed, Robert A.
2004-01-01
This paper presents viewgraphs on computational physics for space flight applications. The topics include: 1) Introduction to space radiation effects in microelectronics; 2) Using applied physics to help NASA meet mission objectives; 3) Example of applied computational physics; and 4) Future directions in applied computational physics.
An Object-Oriented Approach to Writing Computational Electromagnetics Codes
NASA Technical Reports Server (NTRS)
Zimmerman, Martin; Mallasch, Paul G.
1996-01-01
Presently, most computer software development in the Computational Electromagnetics (CEM) community employs the structured programming paradigm, particularly using the Fortran language. Other segments of the software community began switching to an Object-Oriented Programming (OOP) paradigm in recent years to help ease design and development of highly complex codes. This paper examines design of a time-domain numerical analysis CEM code using the OOP paradigm, comparing OOP code and structured programming code in terms of software maintenance, portability, flexibility, and speed.
A theory manual for multi-physics code coupling in LIME.
Belcourt, Noel; Bartlett, Roscoe Ainsworth; Pawlowski, Roger Patrick; Schmidt, Rodney Cannon; Hooper, Russell Warren
2011-03-01
The Lightweight Integrating Multi-physics Environment (LIME) is a software package for creating multi-physics simulation codes. Its primary application space is when computer codes are currently available to solve different parts of a multi-physics problem and now need to be coupled with other such codes. In this report we define a common domain language for discussing multi-physics coupling and describe the basic theory associated with multi-physics coupling algorithms that are to be supported in LIME. We provide an assessment of coupling techniques for both steady-state and time-dependent coupled systems. Example couplings are also demonstrated.
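Steady-state coupling algorithms of the kind assessed in such frameworks are often fixed-point (Picard) iterations: each single-physics solver is called in turn with the other's latest state until the coupled residual is small. A minimal two-code sketch; the two scalar "solvers" and their coupling are invented purely for illustration and are not part of LIME:

```python
# Two stand-in "physics codes" coupled through their scalar states:
# code A solves x = (y + 2) / 3 given y; code B solves y = (x + 1) / 2 given x.
def solve_A(y):
    return (y + 2.0) / 3.0

def solve_B(x):
    return (x + 1.0) / 2.0

x, y = 0.0, 0.0
for iteration in range(100):            # Picard (successive substitution) loop
    x_new = solve_A(y)
    y_new = solve_B(x_new)
    converged = abs(x_new - x) + abs(y_new - y) < 1e-12
    x, y = x_new, y_new
    if converged:
        break

print(round(x, 6), round(y, 6))  # the fixed point of the coupled system
```

Picard iteration converges here because the coupled map is a contraction; for stiffer couplings a framework would switch to a Newton-based scheme, one of the trade-offs such theory manuals analyze.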
Code manual for CONTAIN 2.0: A computer code for nuclear reactor containment analysis
Murata, K.K.; Williams, D.C.; Griffith, R.O.; Gido, R.G.; Tadios, E.L.; Davis, F.J.; Martinez, G.M.; Washington, K.E.; Tills, J.
1997-12-01
The CONTAIN 2.0 computer code is an integrated analysis tool used for predicting the physical conditions, chemical compositions, and distributions of radiological materials inside a containment building following the release of material from the primary system in a light-water reactor accident. It can also predict the source term to the environment. CONTAIN 2.0 is intended to replace the earlier CONTAIN 1.12, which was released in 1991. The purpose of this Code Manual is to provide full documentation of the features and models in CONTAIN 2.0. Besides complete descriptions of the models, this Code Manual provides a complete description of the input and output from the code. CONTAIN 2.0 is a highly flexible and modular code that can run problems that are either quite simple or highly complex. An important aspect of CONTAIN is that the interactions among thermal-hydraulic phenomena, aerosol behavior, and fission product behavior are taken into account. The code includes atmospheric models for steam/air thermodynamics, intercell flows, condensation/evaporation on structures and aerosols, aerosol behavior, and gas combustion. It also includes models for reactor cavity phenomena such as core-concrete interactions and coolant pool boiling. Heat conduction in structures, fission product decay and transport, radioactive decay heating, and the thermal-hydraulic and fission product decontamination effects of engineered safety features are also modeled. To the extent possible, the best available models for severe accident phenomena have been incorporated into CONTAIN, but it is intrinsic to the nature of accident analysis that significant uncertainty exists regarding numerous phenomena. In those cases, sensitivity studies can be performed with CONTAIN by means of user-specified input parameters. Thus, the code can be viewed as a tool designed to assist the knowledgeable reactor safety analyst in evaluating the consequences of specific modeling assumptions.
Eslinger, Paul W.; Aaberg, Rosanne L.; Lopresti, Charles A.; Miley, Terri B.; Nichols, William E.; Strenge, Dennis L.
2004-09-14
This document contains detailed user instructions for a suite of utility codes developed for Rev. 1 of the Systems Assessment Capability. The suite performs many functions.
RMC - A Monte Carlo code for reactor physics analysis
Wang, K.; Li, Z.; She, D.; Liang, J.; Xu, Q.; Qiu, A.; Yu, J.; Sun, J.; Fan, X.; Yu, G.
2013-07-01
A new Monte Carlo neutron transport code, RMC, is being developed by the Department of Engineering Physics, Tsinghua University, Beijing, as a tool for reactor physics analysis on high-performance computing platforms. To meet the requirements of reactor analysis, RMC now has such functions as criticality calculation, fixed-source calculation, burnup calculation, and kinetics simulations. Techniques for geometry treatment, a new burnup algorithm, source convergence acceleration, massive tallies, parallel calculation, and temperature-dependent cross section processing have been researched and implemented in RMC to improve its efficiency. Validation results for criticality calculation, burnup calculation, source convergence acceleration, tally performance, and parallel performance shown in this paper demonstrate the capability of RMC to handle reactor analysis problems with good performance. (authors)
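The fixed-source side of a Monte Carlo transport code rests on sampling exponential path lengths against a total cross section. A stripped-down, single-slab estimate of uncollided transmission; this is a generic illustration of the sampling principle, not RMC's algorithm, and all numbers are assumed:

```python
import math
import random

random.seed(1)
sigma_t = 1.0        # total macroscopic cross section (1/cm), assumed
thickness = 2.0      # slab thickness (cm), assumed
histories = 200_000

# Sample the distance to first collision s = -ln(xi) / sigma_t for each neutron;
# the neutron escapes uncollided if s exceeds the slab thickness.
escaped = sum(1 for _ in range(histories)
              if -math.log(random.random()) / sigma_t > thickness)

estimate = escaped / histories
analytic = math.exp(-sigma_t * thickness)   # uncollided transmission, exp(-2)
print(abs(estimate - analytic) < 0.01)      # True within statistical error
```

Production codes like RMC layer scattering physics, tallies, and variance reduction on top of this same path-length sampling kernel.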
When does a physical system compute?
Horsman, Clare; Stepney, Susan; Wagner, Rob C.; Kendon, Viv
2014-01-01
Computing is a high-level process of a physical system. Recent interest in non-standard computing systems, including quantum and biological computers, has brought this physical basis of computing to the forefront. There has been, however, no consensus on how to tell if a given physical system is acting as a computer or not, leading to confusion over novel computational devices, and even claims that every physical event is a computation. In this paper, we introduce a formal framework that can be used to determine whether a physical system is performing a computation. We demonstrate how the abstract computational level interacts with the physical device level, in comparison with the use of mathematical models in experimental science. This powerful formulation allows a precise description of experiments, technology, computation and simulation, giving our central conclusion: physical computing is the use of a physical system to predict the outcome of an abstract evolution. We give conditions for computing, illustrated using a range of non-standard computing scenarios. The framework also covers broader computing contexts, where there is no obvious human computer user. We introduce the notion of a ‘computational entity’, and its critical role in defining when computing is taking place in physical systems. PMID:25197245
Computer Analysis of a Physical Pendulum.
ERIC Educational Resources Information Center
Priest, Joseph; Potts, Larry
1990-01-01
The interfacing of a physical pendulum to an Apple IIe computer and the physics instruction associated with it are discussed. Laboratory procedures, software commands, and computations used in this lesson are described. (CW)
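The central computation in any physical-pendulum lesson is the small-angle period T = 2π√(I/(m g d)), with I the moment of inertia about the pivot and d the pivot-to-center-of-mass distance. A quick check for a uniform rod pivoted at one end; the rod's length and mass are assumed values, not from the article:

```python
import math

g = 9.81          # m/s^2
L = 1.0           # rod length (m), assumed
m = 0.5           # rod mass (kg), assumed

# Uniform rod about one end: I = m L^2 / 3; pivot-to-CM distance d = L / 2.
I = m * L**2 / 3.0
d = L / 2.0
T = 2.0 * math.pi * math.sqrt(I / (m * g * d))
print(round(T, 3))  # small-angle period in seconds: 1.638
```

Note that the mass cancels, so T = 2π√(2L/(3g)) depends only on the rod's length, a point such labs often ask students to verify against timed measurements.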
Computer programs for the characterization of protein coding genes.
Pierno, G; Barni, N; Candurro, M; Cipollaro, M; Franzè, A; Juliano, L; Macchiato, M F; Mastrocinque, G; Moscatelli, C; Scarlato, V
1984-01-11
Computer programs, implemented on a Univac 1100/80 computer system, for the identification and characterization of protein coding genes and for the analysis of nucleic acid sequences are described. PMID:6546420
Fault-tolerant Holonomic Quantum Computation in Surface Codes
NASA Astrophysics Data System (ADS)
Zheng, Yicong; Brun, Todd; USC QIP Team Team
2015-03-01
We show that universal holonomic quantum computation (HQC) can be achieved by adiabatically deforming the gapped stabilizer Hamiltonian of the surface code, where quantum information is encoded in the degenerate ground space of the system Hamiltonian. We explicitly propose procedures to perform each logical operation, including logical state initialization, logical state measurement, logical CNOT, state injection and distillation, etc. In particular, adiabatic braiding of different types of holes on the surface leads to a topologically protected, non-Abelian geometric logical CNOT. Throughout the computation, quantum information is protected from both small perturbations and low-weight thermal excitations by a constant energy gap that is independent of the system size. Moreover, the Hamiltonian terms have weight at most four during the whole process. The effect of thermal error propagation is considered during the adiabatic code deformation. With the help of active error correction, this scheme is fault-tolerant, in the sense that the computation time can be arbitrarily long for large enough lattice size. It is shown that the frequency of error correction and the physical resources needed can be greatly reduced by the constant energy gap.
Hanford Meteorological Station computer codes: Volume 3, The TANK computer code
Buck, J.W.; Andrews, G.L.
1987-09-01
At the end of each graveyard shift the Hanford Meteorological Station (HMS), operated by Pacific Northwest Laboratory, issues a forecast of eight hourly average wind speeds and wind gusts for the 50-ft level. The Hanford Waste Management crew uses this forecast, called the tank farm forecast, to schedule daily work loads. The forecast covers an 8-hour period (0800 to 1600 during Pacific Standard Time (PST) or 0700 to 1500 during Pacific Daylight Time (PDT)) and the day-shift forecaster may modify the tank farm forecast to reflect changing wind conditions. The TANK computer code is used to archive these forecasts and apply quality assurance checks to the forecast data. The code accesses an input file, which contains the date of the previous forecast, and an output file, which contains tank farm forecasts for the current month. The program includes a data entry form consisting of 12 fields that must be filled in by the user. The information entered on the form is appended to the monthly forecast file, which provides an archive for the tank farm forecasts. This volume describes the implementation and operation of the TANK computer code at the HMS.
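The archive-and-QA pattern the TANK code implements can be sketched generically: validate each forecast field against simple range checks before appending the record to the monthly file. The field names, limits, and CSV layout below are invented for illustration and do not reflect the actual HMS file format:

```python
import csv
import io

def qa_check(record):
    """Return True if all forecast fields pass basic range checks (assumed limits)."""
    return (all(0 <= s <= 120 for s in record["speeds_mph"])     # hourly speeds
            and 0 <= record["gust_mph"] <= 150                   # peak gust
            and record["gust_mph"] >= max(record["speeds_mph"]))

monthly_file = io.StringIO()                  # stands in for the monthly archive file
writer = csv.writer(monthly_file)

forecast = {"date": "1987-09-01",
            "speeds_mph": [8, 10, 12, 12, 14, 15, 13, 10],       # eight hourly values
            "gust_mph": 22}

if qa_check(forecast):                        # append only records that pass QA
    writer.writerow([forecast["date"], forecast["gust_mph"],
                     *forecast["speeds_mph"]])

print(monthly_file.getvalue().strip())
```

The real code additionally tracks the date of the previous forecast in an input file and drives the data entry form described in the report; only the validate-then-append core is sketched here.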
Hanford Meteorological Station computer codes: Volume 4, The SUM computer code
Andrews, G.L.; Buck, J.W.
1987-09-01
At the end of each swing shift, the Hanford Meteorological Station (HMS), operated by Pacific Northwest Laboratory, archives a set of daily weather observations. These weather observations are a summary of the maximum and minimum temperature, total precipitation, maximum and minimum relative humidity, total snowfall, total snow depth at 1200 Greenwich Mean Time (GMT), and maximum wind speed plus the direction from which the wind occurred and the time it occurred. This summary also indicates the occurrence of rain, snow, and other weather phenomena. The SUM computer code is used to archive the summary and apply quality assurance checks to the data. This code accesses an input file that contains the date of the previous archive and an output file that contains a daily weather summary for the current month. As part of the program, a data entry form consisting of 21 fields must be filled in by the user. The information on the form is appended to the monthly file, which provides an archive for the daily weather summary. This volume describes the implementation and operation of the SUM computer code at the HMS.
Computational Physics and Evolutionary Dynamics
NASA Astrophysics Data System (ADS)
Fontana, Walter
2000-03-01
One aspect of computational physics deals with the characterization of statistical regularities in materials. Computational physics meets biology when these materials can evolve. RNA molecules are a case in point. The folding of RNA sequences into secondary structures (shapes) inspires a simple biophysically grounded genotype-phenotype map that can be explored computationally and in the laboratory. We have identified some statistical regularities of this map and begin to understand their evolutionary consequences. (1) ``typical shapes'': Only a small subset of shapes realized by the RNA folding map is typical, in the sense of containing shapes that are realized significantly more often than others. Consequence: evolutionary histories mostly involve typical shapes, and thus exhibit generic properties. (2) ``neutral networks'': Sequences folding into the same shape are mutationally connected into a network that reaches across sequence space. Consequence: Evolutionary transitions between shapes reflect the fraction of boundary shared by the corresponding neutral networks in sequence space. The notion of a (dis)continuous transition can be made rigorous. (3) ``shape space covering'': Given a random sequence, a modest number of mutations suffices to reach a sequence realizing any typical shape. Consequence: The effective search space for evolutionary optimization is greatly reduced, and adaptive success is less dependent on initial conditions. (4) ``plasticity mirrors variability'': The repertoire of low energy shapes of a sequence is an indicator of how much and in which ways its energetically optimal shape can be altered by a single point mutation. Consequence: (i) Thermodynamic shape stability and mutational robustness are intimately linked. (ii) When natural selection favors the increase of stability, extreme mutational robustness -- to the point of an evolutionary dead-end -- is produced as a side effect. (iii) The hallmark of robust shapes is modularity.
Selection of a computer code for Hanford low-level waste engineered-system performance assessment
McGrail, B.P.; Mahoney, L.A.
1995-10-01
Planned performance assessments for the proposed disposal of low-level waste (LLW) glass produced from remediation of wastes stored in underground tanks at Hanford, Washington, will require calculations of radionuclide release rates from the subsurface disposal facility. These calculations will be done with the aid of computer codes. Currently available computer codes were ranked in terms of the feature sets implemented in the code that match a set of physical, chemical, numerical, and functional capabilities needed to assess release rates from the engineered system. The needed capabilities were identified from an analysis of the important physical and chemical processes expected to affect LLW glass corrosion and the mobility of radionuclides. The highest ranked computer code was found to be the AREST-CT code developed at PNL for the US Department of Energy for evaluation of arid land disposal sites.
Hanford Meteorological Station computer codes: Volume 6, The SFC computer code
Andrews, G.L.; Buck, J.W.
1987-11-01
Each hour the Hanford Meteorological Station (HMS), operated by Pacific Northwest Laboratory, records and archives weather observations. Hourly surface weather observations consist of weather phenomena such as cloud type and coverage; dry bulb, wet bulb, and dew point temperatures; relative humidity; atmospheric pressure; and wind speed and direction. The SFC computer code is used to archive those weather observations and apply quality assurance checks to the data. This code accesses an input file, which contains the previous archive's date and hour, and an output file, which contains surface observations for the current day. As part of the program, a data entry form consisting of 24 fields must be filled in. The information on the form is appended to the daily file, which provides an archive for the hourly surface observations.
Convergence acceleration of the Proteus computer code with multigrid methods
NASA Technical Reports Server (NTRS)
Demuren, A. O.; Ibraheem, S. O.
1992-01-01
Presented here is the first part of a study to implement convergence acceleration techniques based on the multigrid concept in the Proteus computer code. A review is given of previous studies on the implementation of multigrid methods in computer codes for compressible flow analysis. Also presented is a detailed stability analysis of upwind and central-difference based numerical schemes for solving the Euler and Navier-Stokes equations. Results are given of a convergence study of the Proteus code on computational grids of different sizes. The results presented here form the foundation for the implementation of multigrid methods in the Proteus code.
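The multigrid concept reviewed above, smoothing on the fine grid and eliminating the remaining smooth error on a coarser grid, can be illustrated with a minimal two-grid cycle for the 1D Poisson equation. This is a generic textbook sketch under standard choices (weighted Jacobi, injection, linear prolongation), not the Proteus implementation:

```python
import numpy as np

def jacobi(u, f, h, sweeps=3, w=2/3):
    """Weighted-Jacobi smoothing for -u'' = f on a uniform grid."""
    for _ in range(sweeps):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h*h*f[1:-1])
    return u

def residual(u, f, h):
    """Residual r = f - A u for the standard 3-point Laplacian."""
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2*u[1:-1] - u[:-2] - u[2:]) / (h*h)
    return r

def two_grid(u, f, h):
    """One two-grid cycle: pre-smooth, restrict, coarse solve, correct, post-smooth."""
    u = jacobi(u, f, h)
    r = residual(u, f, h)
    rc = r[::2].copy()                 # restriction by injection
    n_c, hc = rc.size, 2 * h
    # direct coarse-grid solve of -e'' = r_c with homogeneous boundaries
    A = (np.diag(2*np.ones(n_c-2)) - np.diag(np.ones(n_c-3), 1)
         - np.diag(np.ones(n_c-3), -1)) / (hc*hc)
    ec = np.zeros(n_c)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    # linear prolongation of the coarse correction back to the fine grid
    e = np.interp(np.linspace(0, 1, u.size), np.linspace(0, 1, n_c), ec)
    return jacobi(u + e, f, h)

n = 65
h = 1 / (n - 1)
x = np.linspace(0, 1, n)
f = np.pi**2 * np.sin(np.pi * x)       # exact solution u = sin(pi x)
u = np.zeros(n)
for _ in range(20):
    u = two_grid(u, f, h)
```

The coarse-grid correction is what removes the smooth error components that Jacobi alone damps very slowly, which is the source of the convergence acceleration the study pursues.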
Proceduracy: Computer Code Writing in the Continuum of Literacy
ERIC Educational Resources Information Center
Vee, Annette
2010-01-01
This dissertation looks at computer programming through the lens of literacy studies, building from the concept of code as a written text with expressive and rhetorical power. I focus on the intersecting technological and social factors of computer code writing as a literacy--a practice I call "proceduracy". Like literacy, proceduracy is a human…
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2012 CFR
2012-07-01
... COMPLIANCE WITH THE 40 CFR PART 191 DISPOSAL REGULATIONS Compliance Certification and Re-certification General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... 40 Protection of Environment 26 2012-07-01 2011-07-01 true Models and computer codes....
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2011 CFR
2011-07-01
... COMPLIANCE WITH THE 40 CFR PART 191 DISPOSAL REGULATIONS Compliance Certification and Re-certification General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... 40 Protection of Environment 25 2011-07-01 2011-07-01 false Models and computer codes....
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2013 CFR
2013-07-01
... COMPLIANCE WITH THE 40 CFR PART 191 DISPOSAL REGULATIONS Compliance Certification and Re-certification General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... 40 Protection of Environment 26 2013-07-01 2013-07-01 false Models and computer codes....
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2010 CFR
2010-07-01
... COMPLIANCE WITH THE 40 CFR PART 191 DISPOSAL REGULATIONS Compliance Certification and Re-certification General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Models and computer codes....
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2014 CFR
2014-07-01
... COMPLIANCE WITH THE 40 CFR PART 191 DISPOSAL REGULATIONS Compliance Certification and Re-certification General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... 40 Protection of Environment 25 2014-07-01 2014-07-01 false Models and computer codes....
Liquid rocket combustor computer code development
NASA Technical Reports Server (NTRS)
Liang, P. Y.
1985-01-01
The Advanced Rocket Injector/Combustor Code (ARICC), developed to model the complete chemical/fluid/thermal processes occurring inside rocket combustion chambers, is highlighted. The code, derived from the CONCHAS-SPRAY code originally developed at Los Alamos National Laboratory, incorporates powerful features such as the ability to model complex injector combustion chamber geometries, Lagrangian tracking of droplets, full chemical equilibrium and kinetic reactions for multiple species, a fractional volume of fluid (VOF) description of liquid jet injection in addition to the gaseous phase fluid dynamics, and turbulent mass, energy, and momentum transport. Atomization and droplet dynamic models from earlier-generation codes are transplanted into the present code. Currently, ARICC is specialized for liquid oxygen/hydrogen propellants, although other fuel/oxidizer pairs can be easily substituted.
Horak, W.C.; Lu, Ming-Shih
1991-12-01
This paper reviews the accuracy and precision of methods used by United States electric utilities to determine the actinide isotopic and elemental content of irradiated fuel. After an extensive literature search, three key code suites were selected for review. Two suites of computer codes, CASMO and ARMP, are used for reactor physics calculations; the ORIGEN code is used for spent fuel calculations. They are also the most widely used codes in the nuclear industry throughout the world. Although none of these codes calculates actinide isotopics as a primary output intended for safeguards applications, accurate calculation of actinide isotopic content is necessary for them to fulfill their function.
Hanford Meteorological Station computer codes: Volume 7, The RIVER computer code
Andrews, G.L.; Buck, J.W.
1988-03-01
The RIVER computer code is used to archive Columbia River data measured at the 100N reactor. The data are recorded every other hour starting at 0100 Pacific Standard Time (12 observations a day) and consist of river elevation, temperature, and flow rate. The program prompts the user for river data by using a data entry form. After the data have been entered and verified, the program appends each hour of river data to the end of each corresponding surface observation record for the current day. The appended data are then stored in the current month's surface observation file.
Panel-Method Computer Code For Potential Flow
NASA Technical Reports Server (NTRS)
Ashby, Dale L.; Dudley, Michael R.; Iguchi, Steven K.
1992-01-01
Low-order panel method used to reduce computation time. Panel code PMARC (Panel Method Ames Research Center) numerically simulates flow field around or through complex three-dimensional bodies such as complete aircraft models or wind tunnel. Based on potential-flow theory. Facilitates addition of new features to code and tailoring of code to specific problems and computer-hardware constraints. Written in standard FORTRAN 77.
Computer Tensor Codes to Design the Warp Drive
NASA Astrophysics Data System (ADS)
Maccone, C.
To address problems in Breakthrough Propulsion Physics (BPP) and design the Warp Drive one needs sheer computing capabilities. This is because General Relativity (GR) and Quantum Field Theory (QFT) are so mathematically sophisticated that the amount of analytical calculations is prohibitive and one can hardly do all of them by hand. In this paper we make a comparative review of the main tensor calculus capabilities of the three most advanced and commercially available “symbolic manipulator” codes. We also point out that currently one faces such a variety of different conventions in tensor calculus that it is difficult or impossible to compare results obtained by different scholars in GR and QFT. Mathematical physicists, experimental physicists and engineers have each their own way of customizing tensors, especially by using different metric signatures, different metric determinant signs, different definitions of the basic Riemann and Ricci tensors, and by adopting different systems of physical units. This chaos greatly hampers progress toward the design of the Warp Drive. It is thus suggested that NASA would be a suitable organization to establish standards in symbolic tensor calculus and anyone working in BPP should adopt these standards. Alternatively other institutions, like CERN in Europe, might consider the challenge of starting the preliminary implementation of a Universal Tensor Code to design the Warp Drive.
The r-Java 2.0 code: nuclear physics
NASA Astrophysics Data System (ADS)
Kostka, M.; Koning, N.; Shand, Z.; Ouyed, R.; Jaikumar, P.
2014-08-01
Aims: We present r-Java 2.0, a nucleosynthesis code for open use that performs r-process calculations, along with a suite of other analysis tools. Methods: Equipped with a straightforward graphical user interface, r-Java 2.0 is capable of simulating nuclear statistical equilibrium (NSE), calculating r-process abundances for a wide range of input parameters and astrophysical environments, computing the mass fragmentation from neutron-induced fission and studying individual nucleosynthesis processes. Results: In this paper we discuss enhancements to this version of r-Java, especially the ability to solve the full reaction network. The sophisticated fission methodology incorporated in r-Java 2.0 that includes three fission channels (beta-delayed, neutron-induced, and spontaneous fission), along with computation of the mass fragmentation, is compared to the upper limit on mass fission approximation. The effects of including beta-delayed neutron emission on r-process yield is studied. The role of Coulomb interactions in NSE abundances is shown to be significant, supporting previous findings. A comparative analysis was undertaken during the development of r-Java 2.0 whereby we reproduced the results found in the literature from three other r-process codes. This code is capable of simulating the physical environment of the high-entropy wind around a proto-neutron star, the ejecta from a neutron star merger, or the relativistic ejecta from a quark nova. Likewise the users of r-Java 2.0 are given the freedom to define a custom environment. This software provides a platform for comparing proposed r-process sites.
Application of computational fluid dynamics methods to improve thermal hydraulic code analysis
NASA Astrophysics Data System (ADS)
Sentell, Dennis Shannon, Jr.
A computational fluid dynamics code is used to model the primary natural circulation loop of a proposed small modular reactor for comparison to experimental data and best-estimate thermal-hydraulic code results. Recent advances in computational fluid dynamics code modeling capabilities make them attractive alternatives to the current conservative approach of coupled best-estimate thermal hydraulic codes and uncertainty evaluations. The results from a computational fluid dynamics analysis are benchmarked against the experimental test results of a 1:3 length, 1:254 volume, full pressure and full temperature scale small modular reactor during steady-state power operations and during a depressurization transient. A comparative evaluation of the experimental data, the thermal hydraulic code results and the computational fluid dynamics code results provides an opportunity to validate the best-estimate thermal hydraulic code's treatment of a natural circulation loop and provide insights into expanded use of the computational fluid dynamics code in future designs and operations. Additionally, a sensitivity analysis is conducted to determine those physical phenomena most impactful on operations of the proposed reactor's natural circulation loop. The combination of the comparative evaluation and sensitivity analysis provides the resources for increased confidence in model developments for natural circulation loops and provides for reliability improvements of the thermal hydraulic code.
Hanford Meteorological Station computer codes: Volume 8, The REVIEW computer code
Andrews, G.L.; Burk, K.W.
1988-08-01
The Hanford Meteorological Station (HMS) routinely collects meteorological data from sources on and off the Hanford Site. The data are averaged over both 15 minutes and 1 hour and are maintained in separate databases on the Digital Equipment Corporation (DEC) VAX 11/750 at the HMS. The databases are transferred to the Emergency Management System (EMS) DEC VAX 11/750 computer. The EMS is part of the Unified Dose Assessment Center, which is located on the ground floor of the Federal Building in Richland and operated by Pacific Northwest Laboratory. The computer program REVIEW is used to display meteorological data in graphical and alphanumeric form from either the 15-minute or hourly database. The code is available on the HMS and EMS computers. The REVIEW program helps maintain a high level of quality assurance on the instruments that collect the data and provides a convenient mechanism for analyzing meteorological data on a routine basis and during emergency response situations.
Codes of Ethics for Computing at Russian Institutions and Universities.
ERIC Educational Resources Information Center
Pourciau, Lester J.; Spain, Victoria, Ed.
1997-01-01
To determine the degree to which Russian institutions and universities have formulated and promulgated codes of ethics or policies for acceptable computer use, the author examined Russian institution and university home pages. Lists home pages examined, 10 commandments for computer ethics from the Computer Ethics Institute, and a policy statement…
Optimization of KINETICS Chemical Computation Code
NASA Technical Reports Server (NTRS)
Donastorg, Cristina
2012-01-01
NASA JPL has been creating a code in FORTRAN called KINETICS to model the chemistry of planetary atmospheres. Recently there has been an effort to introduce the Message Passing Interface (MPI) into the code to cut down the run time of the program. There has been some implementation of MPI into KINETICS; however, the code could still be more efficient than it currently is. One way to increase efficiency is to send only certain variables to all the processes when an MPI subroutine is called and to gather only certain variables when the subroutine finishes. Therefore, all the variables used in three of the main subroutines needed to be investigated. Because of the sheer amount of code to comb through, this task was scoped as a ten-week project. I have been able to create flowcharts outlining the subroutines, common blocks, and functions used within the three main subroutines. From these flowcharts I created tables outlining the variables used in each block and important information about each. All this information will be used to determine how to run MPI in KINETICS in the most efficient way possible.
ERIC Educational Resources Information Center
Przybylla, Mareen; Romeike, Ralf
2014-01-01
Physical computing covers the design and realization of interactive objects and installations and allows students to develop concrete, tangible products of the real world, which arise from the learners' imagination. This can be used in computer science education to provide students with interesting and motivating access to the different topic…
Talking about Code: Integrating Pedagogical Code Reviews into Early Computing Courses
ERIC Educational Resources Information Center
Hundhausen, Christopher D.; Agrawal, Anukrati; Agarwal, Pawan
2013-01-01
Given the increasing importance of soft skills in the computing profession, there is good reason to provide students with more opportunities to learn and practice those skills in undergraduate computing courses. Toward that end, we have developed an active learning approach for computing education called the "Pedagogical Code Review"…
A comparison between several computer codes for calculations on microwave propagation
NASA Astrophysics Data System (ADS)
Vogel, M. H.
1993-03-01
Microwave propagation in the troposphere is largely dependent on the variation of the air's refractive index with altitude. For a radar system, this strongly affects the probability of detection of a given target. Six computer codes for calculations on microwave propagation are compared. The advantages and disadvantages of each code when used in a maritime environment are discussed. The analyzed codes are: PCPEM (Personal Computer Parabolic Equation Method), EMPE (Electromagnetic Parabolic Equation), RPO (Radio Physical Optics), IREPS (Integrated-Refractive Effects Prediction System), EREPS (Engineers' Refraction Effects Prediction System), and MPPM (Microwave Propagation Prediction Model).
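The dependence on the air's refractive-index profile that all six codes model can be illustrated with the standard radio-refractivity expression and its height-corrected ("modified") form; a layer in which modified refractivity decreases with height traps (ducts) microwave energy, a common situation in maritime environments. The sample profile values below are hypothetical:

```python
def refractivity(p_hpa, t_kelvin, e_hpa):
    """Radio refractivity N (N-units) from total pressure, temperature,
    and water-vapour partial pressure (standard two-term expression)."""
    return 77.6 * p_hpa / t_kelvin + 3.73e5 * e_hpa / t_kelvin**2

def modified_refractivity(n, height_m):
    """Modified refractivity M adds ~0.157 M-units per metre to account
    for Earth curvature."""
    return n + 0.157 * height_m

# hypothetical near-surface profile: moist at the sea surface, drier aloft
profile = [(0,  refractivity(1013.0, 288.0, 10.2)),
           (50, refractivity(1007.0, 288.5, 4.0))]
m = [modified_refractivity(n, z) for z, n in profile]
ducting = m[1] < m[0]   # M decreasing with height => trapping layer
```

Parabolic-equation codes such as PCPEM and EMPE take a full M(h) profile as input; this two-level check only indicates whether a duct exists at all.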
Computer code for charge-exchange plasma propagation
NASA Technical Reports Server (NTRS)
Robinson, R. S.; Kaufman, H. R.
1981-01-01
The propagation of the charge-exchange plasma from an electrostatic ion thruster is crucial in determining the interaction of that plasma with the associated spacecraft. A model that describes this plasma and its propagation is described, together with a computer code based on this model. The structure and calling sequence of the code, named PLASIM, is described. An explanation of the program's input and output is included, together with samples of both. The code is written in ANSI Standard FORTRAN.
PLASIM: A computer code for simulating charge exchange plasma propagation
NASA Technical Reports Server (NTRS)
Robinson, R. S.; Deininger, W. D.; Winder, D. R.; Kaufman, H. R.
1982-01-01
The propagation of the charge exchange plasma for an electrostatic ion thruster is crucial in determining the interaction of that plasma with the associated spacecraft. A model that describes this plasma and its propagation is described, together with a computer code based on this model. The structure and calling sequence of the code, named PLASIM, is described. An explanation of the program's input and output is included, together with samples of both. The code is written in ANSI Standard FORTRAN.
Computer Code Systems for Use with Meteorological Data.
Energy Science and Technology Software Center (ESTSC)
1983-09-14
Version 00 The staff of the Nuclear Regulatory Commission uses the computer codes in this collection to examine, assess, and utilize the hourly values of meteorological data which are received on magnetic tapes in a specified format.
Code 672 observational science branch computer networks
NASA Technical Reports Server (NTRS)
Hancock, D. W.; Shirk, H. G.
1988-01-01
In general, networking increases productivity due to the speed of transmission, easy access to remote computers, ability to share files, and increased availability of peripherals. Two different networks within the Observational Science Branch are described in detail.
Health Physics Code System for Evaluating Accidents Involving Radioactive Materials.
2014-10-01
Version 03 The HOTSPOT Health Physics codes were created to provide Health Physics personnel with a fast, field-portable calculational tool for evaluating accidents involving radioactive materials. HOTSPOT codes provide a first-order approximation of the radiation effects associated with the atmospheric release of radioactive materials. The developer's website is: http://www.llnl.gov/nhi/hotspot/. Four general programs, PLUME, EXPLOSION, FIRE, and RESUSPENSION, calculate a downwind assessment following the release of radioactive material resulting from a continuous or puff release, explosive release, fuel fire, or an area contamination event. Additional programs deal specifically with the release of plutonium, uranium, and tritium to expedite an initial assessment of accidents involving nuclear weapons. The FIDLER program can calibrate radiation survey instruments for ground survey measurements and initial screening of personnel for possible plutonium uptake in the lung. The HOTSPOT codes are fast, portable, easy to use, and fully documented in electronic help files. HOTSPOT supports color high-resolution monitors and printers for concentration plots and contours. The codes have been extensively used by the DOE community since 1985. Tables and graphical output can be directed to the computer screen, printer, or a disk file. The graphical output consists of dose and ground contamination as a function of plume centerline downwind distance, and radiation dose and ground contamination contours. Users have the option of displaying scenario text on the plots. HOTSPOT 3.0.1 fixes significant Windows 7 issues: (1) the executable now installs properly under "Program Files/HotSpot 3.0", and the installation package is smaller because a dependency on older Windows DLL files was removed; (2) forms now scale properly based on DPI instead of font for users who set their screen resolution to something other than 100%, a more common setting in Windows 7.
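As a generic illustration of the first-order "downwind assessment" idea, the following sketches a textbook ground-reflected Gaussian plume calculation. It is not HOTSPOT's documented implementation, and the source term and dispersion coefficients are hypothetical (in practice sigma_y and sigma_z come from stability-class curves as functions of downwind distance):

```python
import math

def gaussian_plume(q, u, y, z, h_eff, sigma_y, sigma_z):
    """Ground-reflected Gaussian plume concentration (e.g. Bq/m^3) for a
    source term q (Bq/s) and wind speed u (m/s) at a receptor offset y
    from the centreline at height z, with effective release height h_eff."""
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - h_eff)**2 / (2 * sigma_z**2))
                + math.exp(-(z + h_eff)**2 / (2 * sigma_z**2)))  # ground reflection
    return q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# hypothetical centreline, ground-level receptor ~1 km downwind
c = gaussian_plume(q=1.0e9, u=5.0, y=0.0, z=0.0,
                   h_eff=10.0, sigma_y=70.0, sigma_z=35.0)
```

A first-order model of this kind is what makes a field-portable tool feasible: it needs only a handful of scalar inputs rather than a full numerical weather field.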
APC: A New Code for Atmospheric Polarization Computations
NASA Technical Reports Server (NTRS)
Korkin, Sergey V.; Lyapustin, Alexei I.; Rozanov, Vladimir V.
2014-01-01
A new polarized radiative transfer code Atmospheric Polarization Computations (APC) is described. The code is based on separation of the diffuse light field into anisotropic and smooth (regular) parts. The anisotropic part is computed analytically. The smooth regular part is computed numerically using the discrete ordinates method. Vertical stratification of the atmosphere, common types of bidirectional surface reflection and scattering by spherical particles or spheroids are included. A particular consideration is given to computation of the bidirectional polarization distribution function (BPDF) of the waved ocean surface.
Los Alamos radiation transport code system on desktop computing platforms
Briesmeister, J.F.; Brinkley, F.W.; Clark, B.A.; West, J.T.
1990-01-01
The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. These codes were originally developed many years ago and have undergone continual improvement. With a large initial effort and continued vigilance, the codes are easily portable from one type of hardware to another. The performance of scientific workstations (SWS) has evolved to the point that such platforms can be used routinely to perform sophisticated radiation transport calculations. As personal computer (PC) performance approaches that of the SWS, the hardware options for desktop radiation transport calculations expand considerably. The current status of the radiation transport codes within the LARTCS is described: MCNP, SABRINA, LAHET, ONEDANT, TWODANT, TWOHEX, and ONELD. Specifically, the authors discuss hardware systems on which the codes run and present code performance comparisons for various machines.
Hanford Meteorological Station computer codes: Volume 10, The ARCHIVE computer code
Andrews, G.L.; Burk, K.W.
1989-08-01
The purpose of the ARCHIVE computer program is twofold: (1) convert selected hourly binary data into formatted ASCII data, and (2) organize the converted data into monthly files. Formatted ASCII files are easier to access on a routine basis. The program is executed once a day and is initiated from a command file that submits itself to the SYS$BATCH queue on a daily basis. The monthly files are stored on the HMS computer's fixed hard disk and are merged into yearly files (located on removable disk packs) at the end of each year. This report describes the data bases maintained at the HMS, gives an overview of the ARCHIVE program, describes input and output files accessed by the ARCHIVE program, provides a description of program initiation, and discusses the limitations of the ARCHIVE program. A section on trouble-shooting is included. In addition, the appendixes contain flow charts, detailed descriptions, and source code listings for the ARCHIVE program and related subroutines. A description of the ARCHIVE command file and the data input and output files completes the report. 3 refs., 1 fig.
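The binary-to-ASCII conversion step described above can be sketched with Python's struct module; the four-field, fixed-width record layout here is hypothetical, not the HMS binary format:

```python
import struct

# hypothetical hourly record: hour plus wind speed, direction, temperature
RECORD = struct.Struct("<Ifff")   # little-endian: uint32 + three float32

def binary_to_ascii(raw: bytes) -> list[str]:
    """Unpack fixed-width binary records and format them as ASCII lines."""
    lines = []
    for hour, wspd, wdir, temp in RECORD.iter_unpack(raw):
        lines.append(f"{hour:02d} {wspd:6.1f} {wdir:5.0f} {temp:6.1f}")
    return lines

raw = RECORD.pack(0, 4.2, 270.0, 12.5) + RECORD.pack(1, 5.0, 265.0, 12.1)
for line in binary_to_ascii(raw):
    print(line)
```

Formatted ASCII output trades storage space for exactly the benefit the abstract cites: the files can be inspected and accessed routinely without the unpacking program.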
Enhancements to the STAGS computer code
NASA Technical Reports Server (NTRS)
Rankin, C. C.; Stehlin, P.; Brogan, F. A.
1986-01-01
The power of the STAGS family of programs was greatly enhanced. Members of the family include STAGS-C1 and RRSYS. As a result of improvements implemented, it is now possible to address the full collapse of a structural system, up to and beyond critical points where its resistance to the applied loads vanishes or suddenly changes. This also includes the important class of problems where a multiplicity of solutions exists at a given point (bifurcation), and where until now no solution could be obtained along any alternate (secondary) load path with any standard production finite element code.
Statistical physics, optimization and source coding
NASA Astrophysics Data System (ADS)
Zecchina, Riccardo
2005-06-01
The combinatorial problem of satisfying a given set of constraints that depend on N discrete variables is a fundamental one in optimization and coding theory. Even for instances of randomly generated problems, the question ``does there exist an assignment to the variables that satisfies all constraints?'' may become extraordinarily difficult to solve in some range of parameters where a glass phase sets in. We shall provide a brief review of the recent advances in the statistical mechanics approach to these satisfiability problems and show how the analytic results have helped to design a new class of message-passing algorithms -- the survey propagation (SP) algorithms -- that can efficiently solve some combinatorial problems considered intractable. As an application, we discuss how the packing properties of clusters of solutions in randomly generated satisfiability problems can be exploited in the design of simple lossy data compression algorithms.
Proposed standards for peer-reviewed publication of computer code
Technology Transfer Automated Retrieval System (TEKTRAN)
Computer simulation models are mathematical abstractions of physical systems. In the area of natural resources and agriculture, these physical systems encompass selected interacting processes in plants, soils, animals, or watersheds. These models are scientific products and have become important i...
NASA Lewis Stirling engine computer code evaluation
Sullivan, T.J.
1989-01-01
In support of the US Department of Energy's Stirling Engine Highway Vehicle Systems program, the NASA Lewis Stirling engine performance code was evaluated by comparing code predictions without engine-specific calibration factors to GPU-3, P-40, and RE-1000 Stirling engine test data. The error in predicting power output was -11 percent for the P-40 and 12 percent for the RE-1000 at design conditions and 16 percent for the GPU-3 at near-design conditions (2000 rpm engine speed versus 3000 rpm at design). The efficiency and heat input predictions showed better agreement with engine test data than did the power predictions. Concerning all data points, the error in predicting the GPU-3 brake power was significantly larger than for the other engines and was mainly a result of inaccuracy in predicting the pressure phase angle. Analysis of this pressure phase angle prediction error suggested that improvement to the cylinder hysteresis loss model could have a significant effect on overall Stirling engine performance predictions. 13 refs., 26 figs., 3 tabs.
NASA Lewis Stirling engine computer code evaluation
NASA Technical Reports Server (NTRS)
Sullivan, Timothy J.
1989-01-01
In support of the U.S. Department of Energy's Stirling Engine Highway Vehicle Systems program, the NASA Lewis Stirling engine performance code was evaluated by comparing code predictions without engine-specific calibration factors to GPU-3, P-40, and RE-1000 Stirling engine test data. The error in predicting power output was -11 percent for the P-40 and 12 percent for the RE-1000 at design conditions and 16 percent for the GPU-3 at near-design conditions (2000 rpm engine speed versus 3000 rpm at design). The efficiency and heat input predictions showed better agreement with engine test data than did the power predictions. Concerning all data points, the error in predicting the GPU-3 brake power was significantly larger than for the other engines and was mainly a result of inaccuracy in predicting the pressure phase angle. Analysis of this pressure phase angle prediction error suggested that improvements to the cylinder hysteresis loss model could have a significant effect on overall Stirling engine performance predictions.
Conversion of radionuclide transport codes from mainframes to personal computers
Pon, W.D.; Marschke, S.F.
1987-01-01
Converting a mainframe computer code to run on a personal computer (PC) calls for more than just a simple translation -- the converted program and associated data files must be modified to fit the PC's environment. This has been done for three well-known mainframe codes that are used to estimate the impacts of normal operational radiological releases from nuclear power plants: GALE, GASPAR, and LADTAP. The programs were converted to run on an IBM PC and combined into a single integrated package. This article describes the steps in the conversion process and shows how the mainframe codes were modified and enhanced to take advantage of the PC's ease of use.
Reasoning with Computer Code: a new Mathematical Logic
NASA Astrophysics Data System (ADS)
Pissanetzky, Sergio
2013-01-01
A logic is a mathematical model of knowledge used to study how we reason, how we describe the world, and how we infer the conclusions that determine our behavior. The logic presented here is natural. It has been experimentally observed, not designed. It represents knowledge as a causal set, includes a new type of inference based on the minimization of an action functional, and generates its own semantics, making it unnecessary to prescribe one. This logic is suitable for high-level reasoning with computer code, including tasks such as self-programming, object-oriented analysis, refactoring, systems integration, code reuse, and automated programming from sensor-acquired data. A strong theoretical foundation exists for the new logic. The inference derives laws of conservation from the permutation symmetry of the causal set, and calculates the corresponding conserved quantities. The association between symmetries and conservation laws is a fundamental and well-known law of nature and a general principle in modern theoretical physics. The conserved quantities take the form of a nested hierarchy of invariant partitions of the given set. The logic associates elements of the set and binds them together to form the levels of the hierarchy. It is conjectured that the hierarchy corresponds to the invariant representations that the brain is known to generate. The hierarchies also represent fully object-oriented, self-generated code, which can be directly compiled and executed (when a compiler becomes available), or translated to a suitable programming language. The approach is constructivist because all entities are constructed bottom-up, with the fundamental principles of nature being at the bottom, and their existence is proved by construction. The new logic is mathematically introduced and later discussed in the context of transformations of algorithms and computer programs. We discuss what a full self-programming capability would really mean. We argue that self
Computer code for intraply hybrid composite design
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Sinclair, J. H.
1981-01-01
A computer program is described for intraply hybrid composite design (INHYD). The program includes several composite micromechanics theories, intraply hybrid composite theories, and a hygrothermomechanical theory. These theories provide INHYD with considerable flexibility and capability which the user can exercise through several available options. Key features and capabilities of INHYD are illustrated through selected samples.
Computer code for intraply hybrid composite design
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Sinclair, J. H.
1981-01-01
A computer program has been developed and is described herein for intraply hybrid composite design (INHYD). The program includes several composite micromechanics theories, intraply hybrid composite theories and a hygrothermomechanical theory. These theories provide INHYD with considerable flexibility and capability which the user can exercise through several available options. Key features and capabilities of INHYD are illustrated through selected samples.
RESRAD-CHEM: A computer code for chemical risk assessment
Cheng, J.J.; Yu, C.; Hartmann, H.M.; Jones, L.G.; Biwer, B.M.; Dovel, E.S.
1993-10-01
RESRAD-CHEM is a computer code developed at Argonne National Laboratory for the U.S. Department of Energy to evaluate chemically contaminated sites. The code is designed to predict human health risks from multipathway exposure to hazardous chemicals and to derive cleanup criteria for chemically contaminated soils. The method used in RESRAD-CHEM is based on the pathway analysis method in the RESRAD code and follows the U.S. Environmental Protection Agency`s (EPA`s) guidance on chemical risk assessment. RESRAD-CHEM can be used to evaluate a chemically contaminated site and, in conjunction with the use of the RESRAD code, a mixed waste site.
Code system to compute radiation dose in human phantoms
Ryman, J.C.; Cristy, M.; Eckerman, K.F.; Davis, J.L.; Tang, J.S.; Kerr, G.D.
1986-01-01
A Monte Carlo photon transport code and a code using Monte Carlo integration of a point kernel have been revised to incorporate human phantom models for an adult female, juveniles of various ages, and a pregnant female at the end of the first trimester of pregnancy, in addition to the adult male used earlier. An analysis code has been developed for deriving recommended values of specific absorbed fractions of photon energy. The computer code system and calculational method are described, emphasizing recent improvements in methods. (LEW)
A Computer Code for TRIGA Type Reactors.
Energy Science and Technology Software Center (ESTSC)
1992-04-09
Version 00 TRIGAP was developed for reactor physics calculations of the 250 kW TRIGA reactor. The program can be used for criticality predictions, power peaking predictions, fuel element burn-up calculations and data logging, and in-core fuel management and fuel utilization improvement.
Utility subroutine package used by Applied Physics Division export codes. [LMFBR
Adams, C.H.; Derstine, K.L.; Henryson, H. II; Hosteny, R.P.; Toppel, B.J.
1983-04-01
This report describes the current state of the utility subroutine package used with codes being developed by the staff of the Applied Physics Division. The package provides a variety of useful functions for BCD input processing, dynamic core-storage allocation and management, binary I/O, and data manipulation. The routines were written to conform to coding standards which facilitate the exchange of programs between different computers.
An algorithm for computing the distance spectrum of trellis codes
NASA Technical Reports Server (NTRS)
Rouanne, Marc; Costello, Daniel J., Jr.
1989-01-01
A class of quasiregular codes is defined for which the distance spectrum can be calculated from the codeword corresponding to the all-zero information sequence. Convolutional codes and regular codes are both quasiregular, as well as most of the best known trellis codes. An algorithm to compute the distance spectrum of linear, regular, and quasiregular trellis codes is presented. In particular, it can calculate the weight spectrum of convolutional (linear trellis) codes and the distance spectrum of most of the best known trellis codes. The codes do not have to be linear or regular, and the signals do not have to be used with equal probabilities. The algorithm is derived from a bidirectional stack algorithm, although it could also be based on the Viterbi algorithm. The algorithm is used to calculate the beginning of the distance spectrum of some of the best known trellis codes and to compute tight estimates on the first-event-error probability and on the bit-error probability.
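The notion of a distance spectrum can be made concrete on a small convolutional (linear trellis) code. The sketch below brute-forces the error events of the standard rate-1/2, memory-2 code with octal generators (7, 5); this example code and the enumeration bound are illustrative choices, not the paper's algorithm, which avoids exhaustive search via a bidirectional stack:

```python
from collections import Counter
from itertools import product

# Rate-1/2, memory-2 convolutional encoder, generators g1 = 1+D+D^2 (octal 7)
# and g2 = 1+D^2 (octal 5); two flush zeros return the trellis to state zero.
def encode(bits):
    u = list(bits) + [0, 0]
    out = []
    for t in range(len(u)):
        u1 = u[t - 1] if t >= 1 else 0
        u2 = u[t - 2] if t >= 2 else 0
        out.append(u[t] ^ u1 ^ u2)   # g1 output bit
        out.append(u[t] ^ u2)        # g2 output bit
    return out

def distance_spectrum(max_len=12):
    """Count error events (paths diverging from and remerging with the
    all-zero path exactly once) by output Hamming weight."""
    spectrum = Counter()
    for m in range(1, max_len + 1):
        for bits in product([0, 1], repeat=m):
            # an error event's input starts and ends with 1 and never has two
            # consecutive zeros (which would remerge the state mid-event)
            if bits[0] != 1 or bits[-1] != 1:
                continue
            if any(bits[i] == 0 and bits[i + 1] == 0 for i in range(m - 1)):
                continue
            spectrum[sum(encode(bits))] += 1
    return spectrum
```

For this code the spectrum begins 1, 2, 4, 8, ... at free distance 5, matching the known transfer-function result.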
Computer codes for dispersion of dense gas
Weber, A.H.; Watts, J.R.
1982-02-01
Two models for describing the behavior of dense gases have been adapted for specific applications at the Savannah River Plant (SRP) and have been programmed on the IBM computer. One of the models has been used to predict the effect of a ruptured H2S storage tank at the 400 Area. The other model has been used to simulate the effect of an unignited release of H2S from the 400-Area flare tower.
Computer Code For Turbocompounded Adiabatic Diesel Engine
NASA Technical Reports Server (NTRS)
Assanis, D. N.; Heywood, J. B.
1988-01-01
Computer simulation developed to study advantages of increased exhaust enthalpy in adiabatic turbocompounded diesel engine. Subsystems of conceptual engine include compressor, reciprocator, turbocharger turbine, compounded turbine, ducting, and heat exchangers. Focus of simulation of total system is to define transfers of mass and energy, including release and transfer of heat and transfer of work in each subsystem, and relationship among subsystems. Written in FORTRAN IV.
Computer vision cracks the leaf code.
Wilf, Peter; Zhang, Shengping; Chikkerur, Sharat; Little, Stefan A; Wing, Scott L; Serre, Thomas
2016-03-22
Understanding the extremely variable, complex shape and venation characters of angiosperm leaves is one of the most challenging problems in botany. Machine learning offers opportunities to analyze large numbers of specimens, to discover novel leaf features of angiosperm clades that may have phylogenetic significance, and to use those characters to classify unknowns. Previous computer vision approaches have primarily focused on leaf identification at the species level. It remains an open question whether learning and classification are possible among major evolutionary groups such as families and orders, which usually contain hundreds to thousands of species each and exhibit many times the foliar variation of individual species. Here, we tested whether a computer vision algorithm could use a database of 7,597 leaf images from 2,001 genera to learn features of botanical families and orders, then classify novel images. The images are of cleared leaves, specimens that are chemically bleached, then stained to reveal venation. Machine learning was used to learn a codebook of visual elements representing leaf shape and venation patterns. The resulting automated system learned to classify images into families and orders with a success rate many times greater than chance. Of direct botanical interest, the responses of diagnostic features can be visualized on leaf images as heat maps, which are likely to prompt recognition and evolutionary interpretation of a wealth of novel morphological characters. With assistance from computer vision, leaves are poised to make numerous new contributions to systematic and paleobotanical studies. PMID:26951664
Nuclear Physics computer networking: Report of the Nuclear Physics Panel on Computer Networking
Bemis, C.; Erskine, J.; Franey, M.; Greiner, D.; Hoehn, M.; Kaletka, M.; LeVine, M.; Roberson, R. (Duke Univ., Durham, NC)
1990-05-01
This paper discusses: the state of computer networking within the nuclear physics program; network requirements for nuclear physics; management structure; and issues of special interest to the nuclear physics program office.
HUDU: The Hanford Unified Dose Utility computer code
Scherpelz, R.I.
1991-02-01
The Hanford Unified Dose Utility (HUDU) computer program was developed to provide rapid initial assessment of radiological emergency situations. The HUDU code uses a straight-line Gaussian atmospheric dispersion model to estimate the transport of radionuclides released from an accident site. For dose points on the plume centerline, it calculates internal doses due to inhalation and external doses due to exposure to the plume. The program incorporates a number of features unique to the Hanford Site (operated by the US Department of Energy), including a library of source terms derived from various facilities' safety analysis reports. The HUDU code was designed to run on an IBM-PC or compatible personal computer. The user interface was designed for fast and easy operation with minimal user training. The theoretical basis and mathematical models used in the HUDU computer code are described, as are the computer code itself and the data libraries used. Detailed instructions for operating the code are also included. Appendices to the report contain descriptions of the program modules, listings of HUDU's data library, and descriptions of the verification tests that were run as part of the code development. 14 refs., 19 figs., 2 tabs.
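The straight-line Gaussian plume model the abstract mentions has a compact closed form for ground-level, plume-centerline concentration. The sketch below assumes generic power-law dispersion coefficients for roughly neutral stability; these placeholder formulas and function names are illustrative, not HUDU's site-specific values:

```python
import math

# Illustrative horizontal/vertical dispersion coefficients (m) versus
# downwind distance x (m); placeholders for ~neutral stability, not Hanford's.
def sigma_y(x):
    return 0.08 * x / math.sqrt(1.0 + 0.0001 * x)

def sigma_z(x):
    return 0.06 * x / math.sqrt(1.0 + 0.0015 * x)

def centerline_conc(q_bq_s, u_m_s, x_m, release_height_m=0.0):
    """Ground-level, plume-centerline air concentration (Bq/m^3) for a
    release rate q (Bq/s) and wind speed u (m/s), straight-line Gaussian model."""
    sy, sz = sigma_y(x_m), sigma_z(x_m)
    return (q_bq_s / (math.pi * u_m_s * sy * sz)
            * math.exp(-release_height_m ** 2 / (2.0 * sz ** 2)))
```

For a ground-level release the centerline concentration falls off monotonically with downwind distance, which is the quick-assessment behavior a tool like HUDU exploits.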
Automated uncertainty analysis methods in the FRAP computer codes. [PWR
Peck, S O
1980-01-01
A user-oriented, automated uncertainty analysis capability has been incorporated in the Fuel Rod Analysis Program (FRAP) computer codes. The FRAP codes have been developed for the analysis of Light Water Reactor fuel rod behavior during steady state (FRAPCON) and transient (FRAP-T) conditions as part of the United States Nuclear Regulatory Commission's Water Reactor Safety Research Program. The objective of uncertainty analysis of these codes is to obtain estimates of the uncertainty in computed outputs of the codes as a function of known uncertainties in input variables. This paper presents the methods used to generate an uncertainty analysis of a large computer code, discusses the assumptions that are made, and shows techniques for testing them. An uncertainty analysis of FRAP-T calculated fuel rod behavior during a hypothetical loss-of-coolant transient is presented as an example and carried through the discussion to illustrate the various concepts.
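The stated objective, uncertainty in computed outputs as a function of known input uncertainties, can be illustrated with plain Monte Carlo sampling around a toy surrogate model. The surrogate, the distributions, and all numbers below are hypothetical; the FRAP codes' actual uncertainty machinery is more elaborate:

```python
import random
import statistics

# Hypothetical surrogate for a code output: peak fuel temperature (K) as a
# function of linear power and gap conductance (toy model, not FRAP physics).
def fuel_temp(power_kw_m, gap_conductance):
    return 600.0 + 80.0 * power_kw_m / gap_conductance

def propagate(n=10_000, seed=42):
    """Sample the uncertain inputs, run the model, and summarize the output."""
    rng = random.Random(seed)
    samples = [fuel_temp(rng.gauss(30.0, 1.5),   # power: nominal 30, +/-5% (1 sigma)
                         rng.gauss(1.0, 0.1))    # conductance: nominal 1, +/-10%
               for _ in range(n)]
    return statistics.mean(samples), statistics.stdev(samples)
```

The output spread (here a few hundred kelvin around the nominal 3000 K) is the kind of estimate an automated capability delivers without hand-built sensitivity studies.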
Analyzing Pulse-Code Modulation On A Small Computer
NASA Technical Reports Server (NTRS)
Massey, David E.
1988-01-01
System for analysis of pulse-code modulation (PCM) comprises personal computer, computer program, and peripheral interface adapter on circuit board that plugs into expansion bus of computer. Functions essentially as "snapshot" PCM decommutator, which accepts and stores thousands of frames of PCM data, then sifts through them repeatedly to process according to routines specified by operator. Enables faster testing and involves less equipment than older testing systems.
Computer code for space-time diagnostics of nuclear safety parameters
Solovyev, D. A.; Semenov, A. A.; Gruzdov, F. V.; Druzhaev, A. A.; Shchukin, N. V.; Dolgenko, S. G.; Solovyeva, I. V.; Ovchinnikova, E. A.
2012-07-01
The computer code ECRAN 3D (Experimental and Calculation Reactor Analysis) is designed for continuous monitoring and diagnostics of RBMK-1000 reactor cores and databases on the basis of analytical methods for interrelating nuclear safety parameters. The code algorithms are based on the analysis of deviations between physically measured values and the results of neutron-physical and thermal-hydraulic calculations. Discrepancies between the measured and calculated signals are equivalent to an inadequacy between the performance of the physical device and its simulator. The diagnostics system can solve the following problems: identifying the fact and time of inconsistent results, localizing failures, and identifying and quantifying the causes of the inconsistencies. These problems can be solved effectively only when the computer code runs in real time, which raises the requirements on code performance. Because false operations can lead to significant economic losses, the diagnostics system must be based on certified software tools. POLARIS, version 4.2.1, is used for the neutron-physical calculation in the computer code ECRAN 3D. (authors)
Experimental methodology for computational fluid dynamics code validation
Aeschliman, D.P.; Oberkampf, W.L.
1997-09-01
Validation of Computational Fluid Dynamics (CFD) codes is an essential element of the code development process. Typically, CFD code validation is accomplished through comparison of computed results to previously published experimental data that were obtained for some other purpose, unrelated to code validation. As a result, it is a near certainty that not all of the information required by the code, particularly the boundary conditions, will be available. The common approach is therefore unsatisfactory, and a different method is required. This paper describes a methodology developed specifically for experimental validation of CFD codes. The methodology requires teamwork and cooperation between code developers and experimentalists throughout the validation process, and takes advantage of certain synergisms between CFD and experiment. The methodology employs a novel uncertainty analysis technique which helps to define the experimental plan for code validation wind tunnel experiments, and to distinguish between and quantify various types of experimental error. The methodology is demonstrated with an example of surface pressure measurements over a model of varying geometrical complexity in laminar, hypersonic, near perfect gas, 3-dimensional flow.
Space radiator simulation manual for computer code
NASA Technical Reports Server (NTRS)
Black, W. Z.; Wulff, W.
1972-01-01
A computer program that simulates the performance of a space radiator is presented. The program basically consists of a rigorous analysis which analyzes a symmetrical fin panel and an approximate analysis that predicts system characteristics for cases of non-symmetrical operation. The rigorous analysis accounts for both transient and steady state performance, including aerodynamic and radiant heating of the radiator system. The approximate analysis considers only steady state operation with no aerodynamic heating. A description of the radiator system and instructions to the user for program operation are included. The input required for the execution of all program options is described, and several examples of program output are provided, including the radiator performance during ascent, reentry, and orbit.
Preliminary blade design using integrated computer codes
NASA Astrophysics Data System (ADS)
Ryan, Arve
1988-12-01
Loads on the root of a horizontal axis wind turbine (HAWT) rotor blade were analyzed, and a design solution for the root area is presented. The loads on the blades are given by different load cases that are specified. To get a clear picture of the influence of different parameters, the whole blade is designed from scratch. This is only a preliminary design study and the blade should not be looked upon as a construction reference. The use of computer programs for the design and optimization is extensive. After the external geometry is set and the aerodynamic loads are calculated, parameters like design stresses and laminate thicknesses are run through the available programs, and a blade design optimized on the basis of the facts and estimates used is shown.
Recent applications of the transonic wing analysis computer code, TWING
NASA Technical Reports Server (NTRS)
Subramanian, N. R.; Holst, T. L.; Thomas, S. D.
1982-01-01
An evaluation of the transonic-wing-analysis computer code TWING is given. TWING utilizes a fully implicit approximate factorization iteration scheme to solve the full potential equation in conservative form. A numerical elliptic-solver grid-generation scheme is used to generate the required finite-difference mesh. Several wing configurations were analyzed, and the limits of applicability of the code were evaluated. Comparisons of computed results were made with available experimental data. Results indicate that the code is robust, accurate (when significant viscous effects are not present), and efficient. TWING generally produces solutions an order of magnitude faster than other conservative full potential codes using successive-line overrelaxation. The present method is applicable to a wide range of isolated wing configurations, including high-aspect-ratio transport wings and low-aspect-ratio, high-sweep, fighter configurations.
A three-dimensional magnetostatics computer code for insertion devices.
Chubar, O; Elleaume, P; Chavanne, J
1998-05-01
RADIA is a three-dimensional magnetostatics computer code optimized for the design of undulators and wigglers. It solves boundary magnetostatics problems with magnetized and current-carrying volumes using the boundary integral approach. The magnetized volumes can be arbitrary polyhedrons with non-linear (iron) or linear anisotropic (permanent magnet) characteristics. The current-carrying elements can be straight or curved blocks with rectangular cross sections. Boundary conditions are simulated by the technique of mirroring. Analytical formulae used for the computation of the field produced by a magnetized volume of a polyhedron shape are detailed. The RADIA code is written in object-oriented C++ and interfaced to Mathematica [Mathematica is a registered trademark of Wolfram Research, Inc.]. The code outperforms currently available finite-element packages with respect to the CPU time of the solver and accuracy of the field integral estimations. An application of the code to the case of a wedge-pole undulator is presented. PMID:15263552
Teaching Computational Physics Using Spreadsheets
NASA Astrophysics Data System (ADS)
Lee, Jaebong; Shin, K.; Lee, S.
2006-12-01
In recent years, many research groups have been developing spreadsheet programs for physics teaching. For example, spreadsheets have been used to solve Laplace's equation [1], to visualize potential surfaces [2], and to animate physical content [3]. Because Microsoft Excel is easy to learn, it can be applied to many physics problems, and Excel includes Visual Basic for Applications (VBA), a user-friendly programming tool. Using Excel-VBA and operations on cells, we developed a range of programs on simple harmonic motion, the pendulum, satellite orbits, diffraction, and so on. We also taught undergraduate students how to program physics content using Excel-VBA. We discuss its effect and the students' responses. 1. T. T. Crow, "Solution to Laplace's equation using spreadsheets on a personal computer", Am. J. Phys. 55, 817-823 (Sept. 1987). 2. R. J. Beichner, "Visualizing potential surfaces with a spreadsheet", Phys. Teach. 35, 95-97 (Feb. 1997). 3. O. A. Haugland, "Spreadsheet waves", Phys. Teach. 37, 14 (Jan. 1999). *Supported by the Brain Korea 21 project in 2006.
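A row-by-row worksheet for simple harmonic motion, of the kind described above, reduces to two cell formulas copied down a column. A minimal Python transcription of that update is sketched below (Euler-Cromer form, chosen here because it keeps the oscillation bounded; the parameter names are illustrative):

```python
# Each spreadsheet row computes v and x from the row above; the "velocity"
# cell is updated first and the "position" cell uses the updated v.
def simulate_shm(x0=1.0, v0=0.0, omega=1.0, dt=0.001, steps=6284):
    x, v = x0, v0
    rows = [(0.0, x, v)]                 # (time, position, velocity)
    for i in range(1, steps + 1):
        v = v - omega ** 2 * x * dt      # next velocity cell
        x = x + v * dt                   # next position cell (Euler-Cromer)
        rows.append((i * dt, x, v))
    return rows
```

With omega = 1 the period is 2*pi, so after about 6284 steps of 0.001 the oscillator returns close to its starting position, which is easy for students to verify against the exact solution in an adjacent column.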
Computational Physics as a Path for Physics Education
NASA Astrophysics Data System (ADS)
Landau, Rubin H.
2008-04-01
Evidence and arguments will be presented that modifications in the undergraduate physics curriculum are necessary to maintain the long-term relevance of physics. Suggested will be a balance of analytic, experimental, computational, and communication skills, which in many cases will require an increased inclusion of computation and its associated skill set into the undergraduate physics curriculum. The general arguments will be followed by a detailed enumeration of suggested subjects and student learning outcomes, many of which have already been adopted or advocated by the computational science community, and which permit high performance computing and communication. Several alternative models for how these computational topics can be incorporated into the undergraduate curriculum will be discussed. This includes enhanced topics in the standard existing courses, as well as stand-alone courses. Applications and demonstrations will be presented throughout the talk, as well as prototype video-based materials and electronic books.
Bandy, P.J.; Hall, L.F.
1993-03-01
This report presents information on computer codes for numerical and analytical models that have been used at the Idaho National Engineering Laboratory (INEL) to model ground water and surface water flow and contaminant transport. Organizations conducting modeling at the INEL include: EG&G Idaho, Inc., US Geological Survey, and Westinghouse Idaho Nuclear Company. Information concerning computer codes included in this report are: agency responsible for the modeling effort, name of the computer code, proprietor of the code (copyright holder or original author), validation and verification studies, applications of the model at INEL, the prime user of the model, computer code description, computing environment requirements, and documentation and references for the computer code.
Survey of computer codes applicable to waste facility performance evaluations
Alsharif, M.; Pung, D.L.; Rivera, A.L.; Dole, L.R.
1988-01-01
This study is an effort to review existing information that is useful to develop an integrated model for predicting the performance of a radioactive waste facility. A summary description of 162 computer codes is given. The identified computer programs address the performance of waste packages, waste transport and equilibrium geochemistry, hydrological processes in unsaturated and saturated zones, and general waste facility performance assessment. Some programs also deal with thermal analysis, structural analysis, and special purposes. A number of these computer programs are being used by the US Department of Energy, the US Nuclear Regulatory Commission, and their contractors to analyze various aspects of waste package performance. Fifty-five of these codes were identified as being potentially useful in the analysis of low-level radioactive waste facilities located above the water table. The code summaries include authors, identification data, model types, and pertinent references. 14 refs., 5 tabs.
Open Source Physics: Code and Curriculum Material for Teachers, Authors, and Developers
NASA Astrophysics Data System (ADS)
Christian, Wolfgang
2004-03-01
The continued use of procedural languages in education is due in part to the lack of up-to-date curricular materials that combine science topics with an object-oriented programming framework. Although there are many resources for teaching computational physics, few are object-oriented. What is needed by the broader science education community is not another computational physics, numerical analysis, or Java programming book (although such books are essential for discipline-specific practitioners), but a synthesis of curriculum development, computational physics, computer science, and physics education that will be useful for scientists and students wishing to write their own simulations and develop their own curricular material. The Open Source Physics (OSP) project was established to meet this need. OSP is an NSF-funded curriculum development project that is developing and distributing a code library, programs, and examples of computer-based interactive curricular material. In this talk, we will describe this library, demonstrate its use, and report on its adoption by curriculum authors. The Open Source Physics code library, documentation, and sample curricular material can be downloaded from http://www.opensourcephysics.org/. Partial funding for this work was obtained through NSF grant DUE-0126439.
Effective Computer Use in Physics Education
ERIC Educational Resources Information Center
Bork, Alfred M.
1975-01-01
Illustrates a sample remedial program in mathematics for physics students. Describes two computer games with successful instructional strategies and programs which help mathematically unsophisticated students to grasp the notion of a differential equation. (GH)
Osiris: A Modern, High-Performance, Coupled, Multi-Physics Code For Nuclear Reactor Core Analysis
Procassini, R J; Chand, K K; Clouse, C J; Ferencz, R M; Grandy, J M; Henshaw, W D; Kramer, K J; Parsons, I D
2007-02-26
To meet the simulation needs of the GNEP program, LLNL is leveraging a suite of high-performance codes to be used in the development of a multi-physics tool for modeling nuclear reactor cores. The Osiris code project, which began last summer, is employing modern computational science techniques in the development of the individual physics modules and the coupling framework. Initial development is focused on coupling thermal-hydraulics and neutral-particle transport, while later phases of the project will add thermal-structural mechanics and isotope depletion. Osiris will be applicable to the design of existing and future reactor systems through the use of first-principles, coupled physics models with fine-scale spatial resolution in three dimensions and fine-scale particle-energy resolution. Our intent is to replace an existing set of legacy, serial codes which require significant approximations and assumptions, with an integrated, coupled code that permits the design of a reactor core using a first-principles physics approach on a wide range of computing platforms, including the world's most powerful parallel computers. A key research activity of this effort deals with the efficient and scalable coupling of physics modules which utilize rather disparate mesh topologies. Our approach allows each code module to use a mesh topology and resolution that is optimal for the physics being solved, and employs a mesh-mapping and data-transfer module to effect the coupling. Additional research is planned in the area of scalable, parallel thermal-hydraulics, high-spatial-accuracy depletion and coupled-physics simulation using Monte Carlo transport.
Learning from computers about physics teaching
NASA Astrophysics Data System (ADS)
Taylor, Edwin F.
1988-11-01
Experience with teaching uses of computers and an analogy between education and nutrition help in reexamining the ways physics is taught, both old and new. The levels of 16 ``educational nutrients'' for four conventional teaching modes (textbooks, lectures, homework/exams, and standard laboratory) and five uses of computers in education (Tutorial, Demonstration/Simulation, Modeling Toolkit, Laboratory Aid, and Student as Programmer) are estimated. This analysis is used to predict some future developments in college physics teaching.
ARMP-02 documentation: Part 2, Chapter 6: CPM-2 computer code manual: Volume 3, Programmer's manual
Jones, D.B.
1987-04-01
CPM-2 is a lattice physics computer code which employs two-dimensional, multigroup neutron transport equations to solve for detailed neutron flux distributions and eigenvalues in fuel assembly designs typical of light water reactors. CPM-2 employs a special predictor-corrector methodology for calculating the burnup of fuel isotopics. CPM-2 combines the rigorous theory of collision probabilities, simple input and enhanced Restart/Data file management capabilities to form an accurate, production-oriented tool for the analysis of nuclear fuel assemblies. CPM-2 is a single-source computer code written entirely in FORTRAN and is available in CDC and IBM versions.
ARMP-02 documentation: Part 2, Chapter 6: CPM-2 computer code manual: Volume 2, User's manual
Jones, D.B.
1987-04-01
CPM-2 is a lattice physics computer code which employs two-dimensional, multigroup neutron transport equations to solve for detailed neutron flux distributions and eigenvalues in fuel assembly designs typical of light water reactors. CPM-2 employs a special predictor-corrector methodology for calculating the burnup of fuel isotopics. CPM-2 combines the rigorous theory of collision probabilities, simple input and enhanced Restart/Data file management capabilities to form an accurate, production-oriented tool for the analysis of nuclear fuel assemblies. CPM-2 is a single-source computer code written entirely in FORTRAN and is available in CDC and IBM versions.
Once-through CANDU reactor models for the ORIGEN2 computer code
Croff, A.G.; Bjerke, M.A.
1980-11-01
Reactor physics calculations have led to the development of two CANDU reactor models for the ORIGEN2 computer code. The model CANDUs are based on (1) the existing once-through fuel cycle with feed comprised of natural uranium and (2) a projected slightly enriched (1.2 wt % ²³⁵U) fuel cycle. The reactor models are based on cross sections taken directly from the reactor physics codes. Descriptions of the reactor models, as well as values for the ORIGEN2 flux parameters THERM, RES, and FAST, are given.
Recent improvements of reactor physics codes in MHI
NASA Astrophysics Data System (ADS)
Kosaka, Shinya; Yamaji, Kazuya; Kirimura, Kazuki; Kamiyama, Yohei; Matsumoto, Hideki
2015-12-01
This paper introduces recent improvements to reactor physics codes at Mitsubishi Heavy Industries, Ltd. (MHI). MHI has developed a new neutronics design code system, Galaxy/Cosmo-S (GCS), for PWR core analysis. After TEPCO's Fukushima Daiichi accident, designers are required to consider design-extension conditions that were not covered explicitly by the former safety licensing analyses. Under these circumstances, MHI made several improvements to the GCS code system. A new resonance calculation model for the lattice physics code and a homogeneous cross-section representation model for the core simulator have been developed to cover a wider range of core conditions corresponding to severe-accident states, such as anticipated transient without scram (ATWS) analysis and criticality evaluation of a dried-up spent fuel pit. As a result of these improvements, the GCS code system has very wide calculation applicability with good accuracy for any core condition as long as the fuel is not damaged. In this paper, the outline of the GCS code system is described briefly and recent relevant development activities are presented.
Parallelization of Finite Element Analysis Codes Using Heterogeneous Distributed Computing
NASA Technical Reports Server (NTRS)
Ozguner, Fusun
1996-01-01
Performance gains in computer design are quickly consumed as users seek to analyze larger problems to a higher degree of accuracy. Innovative computational methods, such as parallel and distributed computing, seek to multiply the power of existing hardware technology to satisfy the computational demands of large applications. In the early stages of this project, experiments were performed using two large, coarse-grained applications, CSTEM and METCAN. These applications were parallelized on an Intel iPSC/860 hypercube. It was found that the overall speedup was very low, due to large, inherently sequential code segments present in the applications. The overall execution time T(sub par) of the application depends on these sequential segments. If these segments make up a significant fraction of the overall code, the application will have a poor speedup measure.
Application of computational physics within Northrop
NASA Technical Reports Server (NTRS)
George, M. W.; Ling, R. T.; Mangus, J. F.; Thompkins, W. T.
1987-01-01
An overview of Northrop programs in computational physics is presented. These programs depend on access to today's supercomputers, such as the Numerical Aerodynamical Simulator (NAS), and future growth on the continuing evolution of computational engines. Descriptions here are concentrated on the following areas: computational fluid dynamics (CFD), computational electromagnetics (CEM), computer architectures, and expert systems. Current efforts and future directions in these areas are presented. The impact of advances in the CFD area is described, and parallels are drawn to analogous developments in CEM. The relationship between advances in these areas and the development of advanced (parallel) architectures and expert systems is also presented.
Users manual for CAFE-3D : a computational fluid dynamics fire code.
Khalil, Imane; Lopez, Carlos; Suo-Anttila, Ahti Jorma
2005-03-01
The Container Analysis Fire Environment (CAFE) computer code has been developed to model all relevant fire physics for predicting the thermal response of massive objects engulfed in large fires. It provides realistic fire thermal boundary conditions for use in design of radioactive material packages and in risk-based transportation studies. The CAFE code can be coupled to commercial finite-element codes such as MSC PATRAN/THERMAL and ANSYS. This coupled system of codes can be used to determine the internal thermal response of finite element models of packages to a range of fire environments. This document is a user manual describing how to use the three-dimensional version of CAFE, as well as a description of CAFE input and output parameters. Since this is a user manual, only a brief theoretical description of the equations and physical models is included.
Conference Abstracts: Computers in Physics Instruction.
ERIC Educational Resources Information Center
Baird, William E.
1989-01-01
Provides selected abstracts from the Computers in Physics Instruction conference held on August 1-5, 1988. Topics include: wave and particle motion, the CT programming language, microcomputer-based laboratories, student written simulations, concept maps, summer institutes, computer bulletin boards, interactive video, and videodisks. (MVL)
Physical Model for the Evolution of the Genetic Code
NASA Astrophysics Data System (ADS)
Yamashita, Tatsuro; Narikiyo, Osamu
2011-12-01
Using the shape space of codons and tRNAs, we give a physical description of genetic code evolution on the basis of the codon capture and ambiguous intermediate scenarios in a consistent manner. In the lowest-dimensional version of our description, a physical quantity, the codon level, is introduced. In terms of codon levels, the two scenarios are classified into two different routes of the evolutionary process. In the case of the ambiguous intermediate scenario, we perform an evolutionary simulation implementing cost selection of amino acids and confirm a rapid transition of the code change. Such rapidness mitigates the weakness of the scenario, namely the non-unique translation of the code at the intermediate state. In the case of the codon capture scenario, survival against mutations under a mutational pressure minimizing the GC content of genomes is simulated, and it is demonstrated that cells which experience only neutral mutations survive.
Physical-layer network coding in coherent optical OFDM systems.
Guan, Xun; Chan, Chun-Kit
2015-04-20
We present the first experimental demonstration and characterization of the application of optical physical-layer network coding in coherent optical OFDM systems. It combines two optical OFDM frames to share the same link so as to enhance system throughput, while individual OFDM frames can be recovered with digital signal processing at the destined node. PMID:25969046
RESRAD: A computer code for evaluating radioactively contaminated sites
Yu, C.; Zielen, A.J.; Cheng, J.J.
1993-12-31
This document briefly describes the uses of the RESRAD computer code in calculating site-specific residual radioactive material guidelines and radiation dose-risk to an on-site individual (worker or resident) at a radioactively contaminated site. The adoption by the DOE in order 5400.5, pathway analysis methods, computer requirements, data display, the inclusion of chemical contaminants, benchmarking efforts, and supplemental information sources are all described. (GHH)
Upgrades of Two Computer Codes for Analysis of Turbomachinery
NASA Technical Reports Server (NTRS)
Chima, Rodrick V.; Liou, Meng-Sing
2005-01-01
Major upgrades have been made in two of the programs reported in "Five Computer Codes for Analysis of Turbomachinery". The affected programs are: Swift -- a code for three-dimensional (3D) multiblock analysis; and TCGRID, which generates a 3D grid used with Swift. Originally utilizing only a central-differencing scheme for numerical solution, Swift was augmented by addition of two upwind schemes that give greater accuracy but take more computing time. Other improvements in Swift include addition of a shear-stress-transport turbulence model for better prediction of adverse pressure gradients, addition of an H-grid capability for flexibility in modeling flows in pumps and ducts, and modification to enable simultaneous modeling of hub and tip clearances. Improvements in TCGRID include modifications to enable generation of grids for more complicated flow paths and addition of an option to generate grids compatible with the ADPAC code used at NASA and in industry. For both codes, new test cases were developed and documentation was updated. Both codes were converted to Fortran 90, with dynamic memory allocation. Both codes were also modified for ease of use in both UNIX and Windows operating systems.
A proposed framework for computational fluid dynamics code calibration/validation
Oberkampf, W.L.
1993-12-31
The paper reviews the terminology and methodology that have been introduced during the last several years for building confidence in the predictions from Computational Fluid Dynamics (CFD) codes. Code validation terminology developed for nuclear reactor analyses and aerospace applications is reviewed and evaluated. Currently used terminology such as "calibrated code," "validated code," and "validation experiment" is discussed along with the shortcomings and criticisms of these terms. A new framework is proposed for building confidence in CFD code predictions that overcomes some of the difficulties of past procedures and delineates the causes of uncertainty in CFD predictions. Building on previous work, new definitions of code verification and calibration are proposed. These definitions provide more specific requirements for the knowledge level of the flow physics involved and the solution accuracy of the given partial differential equations. As part of the proposed framework, categories are also proposed for flow physics research, flow modeling research, and the application of numerical predictions. The contributions of physical experiments, analytical solutions, and other numerical solutions are discussed, showing that each should be designed to achieve a distinctively separate purpose in building confidence in accuracy of CFD predictions. A number of examples are given for each approach to suggest methods for obtaining the highest value for CFD code quality assurance.
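A standard ingredient of the code-verification activity this abstract discusses is estimating the observed order of accuracy from solutions on systematically refined grids. The Richardson-type estimate below is a generic illustration of that practice, not a method taken from the paper itself.

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order of accuracy from three grid solutions with a
    constant refinement ratio r (standard Richardson-type estimate)."""
    return math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)

# Manufactured example: a quantity converging as f(h) = 1.0 + 2.0 * h**2,
# sampled on grids with spacing h = 0.4, 0.2, 0.1 (refinement ratio 2).
f = lambda h: 1.0 + 2.0 * h**2
p = observed_order(f(0.4), f(0.2), f(0.1), r=2.0)
# p recovers the formal order of the discretization, here 2
```

Agreement between the observed order p and the formal order of the scheme is one piece of evidence that the equations are being solved correctly, independent of any comparison with experiment.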
Fault-tolerant quantum computation in multiqubit block codes: performance and overhead
NASA Astrophysics Data System (ADS)
Brun, Todd
Fault-tolerant quantum computation requires that quantum information remain encoded in a quantum error-correcting code at all times; that a universal set of logical unitary gates and measurements is available; and that the probability of an uncorrectable error is low for the duration of the computation. Quantum computation can in principle be scaled up to unlimited size if the rate of decoherence is below a threshold. The main constructions that have been studied involve encoding each logical qubit in a separate block (either a concatenated code or a block of the surface code), which typically requires thousands of physical qubits per logical qubit, if not more. To reduce this overhead, we consider using multiqubit codes to achieve much higher storage rates. We estimate performance and overhead for certain families of codes, and ask: how large a quantum computation can be done as a function of the decoherence rate for a fixed size code block? Finally, we consider remaining open questions and limitations to this approach. This work is supported by NSF Grant No. CCF-1421078.
Computing in high-energy physics
Mount, Richard P.
2016-05-31
I present a very personalized journey through more than three decades of computing for experimental high-energy physics, pointing out the enduring lessons that I learned. This is followed by a vision of how the computing environment will evolve in the coming ten years and the technical challenges that this will bring. I then address the scale and cost of high-energy physics software and examine the many current and future challenges, particularly those of management, funding and software-lifecycle management. Lastly, I describe recent developments aimed at improving the overall coherence of high-energy physics software.
Plagiarism Detection Algorithm for Source Code in Computer Science Education
ERIC Educational Resources Information Center
Liu, Xin; Xu, Chan; Ouyang, Boyu
2015-01-01
Nowadays, computer programming is becoming more necessary in program design courses in college education. However, the trick of plagiarizing plus a little modification exists in some students' homework. It is not easy for teachers to judge whether source code has been plagiarized. Traditional detection algorithms cannot fit this…
General review of the MOSTAS computer code for wind turbines
NASA Technical Reports Server (NTRS)
Dungundji, J.; Wendell, J. H.
1981-01-01
The MOSTAS computer code for wind turbine analysis is reviewed, and techniques and methods used in its analyses are described. Impressions of its strengths and weaknesses, and recommendations for its application, modification, and further development are made. Basic techniques used in wind turbine stability and response analyses for systems with constant and periodic coefficients are reviewed.
User's manual for the ORIGEN2 computer code
Croff, A.G.
1980-07-01
This report describes how to use a revised version of the ORIGEN computer code, designated ORIGEN2. Included are a description of the input data, input deck organization, and sample input and output. ORIGEN2 can be obtained from the Radiation Shielding Information Center at ORNL.
Computer code for double beta decay QRPA based calculations
Barbero, C. A.; Mariano, A.; Krmpotić, F.; Samana, A. R.; Ferreira, V. dos Santos; Bertulani, C. A.
2014-11-11
The computer code developed by our group some years ago for the evaluation of nuclear matrix elements, within the QRPA and PQRPA nuclear structure models, involved in neutrino-nucleus reactions, muon capture and β± processes, is extended to include also the nuclear double beta decay.
Connecting Neural Coding to Number Cognition: A Computational Account
ERIC Educational Resources Information Center
Prather, Richard W.
2012-01-01
The current study presents a series of computational simulations that demonstrate how the neural coding of numerical magnitude may influence number cognition and development. This includes behavioral phenomena cataloged in cognitive literature such as the development of numerical estimation and operational momentum. Though neural research has…
User's manual for the GABAS spectrum computer code. Final report
Thayer, D.D.; Lurie, N.A.
1982-01-01
The Gamma and Beta Spectrum computer code (GABAS) was developed at IRT Corporation for calculating time-dependent beta and/or gamma spectra from decaying fission products. GABAS calculates composite fission product spectra based on the technique used by England, et al., in conjunction with the CINDER family of fission product codes. Multigroup beta and gamma spectra for individual nuclides are folded with their corresponding time-dependent activities (usually generated by a fission product inventory code) to produce a composite time-dependent fission product spectrum. This manual contains the methodology employed by GABAS, input requirements for proper execution, a sample problem and a FORTRAN listing compatible with a UNIVAC machine. The code is available in a UNIVAC 1100/81 version and a VAX 11/780 version. The former may be obtained from the Radiation Shielding Information Center (RSIC); the latter may be obtained directly from IRT Corporation.
GPU-computing in econophysics and statistical physics
NASA Astrophysics Data System (ADS)
Preis, T.
2011-03-01
A recent trend in computer science and related fields is general purpose computing on graphics processing units (GPUs), which can yield impressive performance. With multiple cores connected by high memory bandwidth, today's GPUs offer resources for non-graphics parallel processing. This article provides a brief introduction into the field of GPU computing and includes examples. In particular computationally expensive analyses employed in financial market context are coded on a graphics card architecture which leads to a significant reduction of computing time. In order to demonstrate the wide range of possible applications, a standard model in statistical physics - the Ising model - is ported to a graphics card architecture as well, resulting in large speedup values.
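As an illustration of the kind of kernel the article ports to a graphics card, here is a minimal CPU Metropolis sweep for the 2D Ising model. This is a generic reference sketch of the model, not the article's GPU code; a GPU port would typically be validated against exactly this sort of serial implementation.

```python
import math
import random

def metropolis_sweep(spins, L, beta, rng):
    """One Metropolis sweep of the 2D Ising model on an L x L periodic
    lattice (illustrative CPU reference, not the article's code)."""
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        # Sum of the four nearest neighbours with periodic boundaries.
        nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2.0 * spins[i][j] * nb  # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[i][j] = -spins[i][j]

rng = random.Random(0)
L = 16
spins = [[1] * L for _ in range(L)]  # start fully magnetized
for _ in range(50):
    metropolis_sweep(spins, L, beta=1.0, rng=rng)  # low-temperature run
magnetization = sum(map(sum, spins)) / (L * L)
```

On a GPU the same update is applied to many non-interacting (checkerboard) sites in parallel, which is where the reported speedups come from.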
The Entangled Histories of Physics and Computation
NASA Astrophysics Data System (ADS)
Rodriguez, Cesar
2007-03-01
The history of physics and computation intertwine in a fascinating manner that is relevant to the field of quantum computation. This talk focuses on the interconnections between both by examining their rhyming philosophies, recurrent characters and common themes. Leibniz not only was one of the lead figures of calculus, but also left his footprint in physics and invented the concept of a universal computational language. This last idea was further developed by Boole, Russell, Hilbert and Gödel. Physicists such as Boltzmann and Maxwell also established the foundation of the field of information theory later developed by Shannon. The war efforts of von Neumann and Turing can be juxtaposed to the Manhattan Project. Professional and personal connections of these characters to the development of physics will be emphasized. Recently, new cryptographic developments lead to a reexamination of the fundamentals of quantum mechanics, while quantum computation is discovering a new perspective on the nature of information itself.
Computing in the Introductory Physics Course
NASA Astrophysics Data System (ADS)
Chabay, Ruth; Sherwood, Bruce
2004-03-01
In the Matter & Interactions version of the calculus-based introductory physics course (http://www4.ncsu.edu/~rwchabay/mi), students write programs in VPython (http://vpython.org) to model physical systems and to calculate and visualize electric and magnetic fields. VPython is unusually easy to learn, produces navigable 3D animations as a side effect of physics computations, and supports full vector calculations. The high speed of current computers makes sophisticated numerical analysis techniques unnecessary. Students can use simple first-order Euler integration, cutting the step size until the behavior of the system no longer changes. In mechanics, iterative application of the momentum principle gives students a sense of the time-evolution character of Newton's second law which is usually missing from the standard course. In E&M, students calculate electric and magnetic fields numerically and display them in 3D. We are currently studying the impact of introducing computational physics into the introductory course.
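The iterative scheme the abstract describes (apply the momentum principle, update the position, and cut the step size until the behavior stops changing) can be sketched in plain Python, without VPython, for a 1D mass on a spring; the parameters are illustrative, not from the course materials.

```python
import math

# First-order Euler update in the style taught in the course:
# momentum first (p += F dt), then position with the new momentum.
m, k = 0.1, 4.0    # mass (kg) and spring constant (N/m), illustrative
x, p = 0.05, 0.0   # initial stretch (m) and momentum (kg m/s)
dt = 1e-4          # step size: halve it and rerun to check convergence
t = 0.0
while t < 1.0:
    F = -k * x            # spring force
    p = p + F * dt        # momentum principle
    x = x + (p / m) * dt  # position update
    t += dt

# The resulting oscillation period approaches 2*pi*sqrt(m/k)
# as dt is cut further.
```

Updating the position with the freshly updated momentum keeps the energy bounded over many oscillations, which is one reason this simple scheme works well in an introductory course.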
Computer code for determination of thermally perfect gas properties
NASA Technical Reports Server (NTRS)
Witte, David W.; Tatum, Kenneth E.
1994-01-01
A set of one-dimensional compressible flow relations for a thermally perfect, calorically imperfect gas is derived for the specific heat c(sub p), expressed as a polynomial function of temperature, and developed into the thermally perfect gas (TPG) computer code. The code produces tables of compressible flow properties similar to those of NACA Rep. 1135. Unlike the tables of NACA Rep. 1135 which are valid only in the calorically perfect temperature regime, the TPG code results are also valid in the thermally perfect calorically imperfect temperature regime which considerably extends the range of temperature application. Accuracy of the TPG code in the calorically perfect temperature regime is verified by comparisons with the tables of NACA Rep. 1135. In the thermally perfect, calorically imperfect temperature regime, the TPG code is validated by comparisons with results obtained from the method of NACA Rep. 1135 for calculating the thermally perfect calorically imperfect compressible flow properties. The temperature limits for application of the TPG code are also examined. The advantage of the TPG code is its applicability to any type of gas (monatomic, diatomic, triatomic, or polyatomic) or any specified mixture thereof, whereas the method of NACA Rep. 1135 is restricted to only diatomic gases.
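The central idea of the report, representing c(sub p) as a polynomial in temperature so that enthalpy integrates in closed form and gamma varies with T, can be sketched as follows. The coefficients and the bisection solver are illustrative assumptions, not the TPG code's actual data or method.

```python
# Thermally perfect gas: cp is a polynomial in T, so enthalpy is its
# exact integral and gamma = cp / (cp - R) varies with temperature.
# Coefficients below are illustrative, not those used by the TPG code.
R = 287.05                        # J/(kg K), air
cp_coeffs = [1000.0, 0.02, 1e-5]  # cp(T) = 1000 + 0.02 T + 1e-5 T^2

def cp(T):
    return sum(c * T**k for k, c in enumerate(cp_coeffs))

def enthalpy(T):
    # Closed-form integral of the cp polynomial from 0 to T.
    return sum(c * T**(k + 1) / (k + 1) for k, c in enumerate(cp_coeffs))

def gamma(T):
    return cp(T) / (cp(T) - R)

def total_temperature(T, mach):
    """Total temperature from the energy equation h(T0) = h(T) + u^2/2,
    solved by bisection since h(T) is no longer linear in T."""
    u = mach * (gamma(T) * R * T) ** 0.5
    target = enthalpy(T) + 0.5 * u * u
    lo, hi = T, 5000.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if enthalpy(mid) > target else (mid, hi)
    return 0.5 * (lo + hi)
```

For air near room temperature this reduces to the familiar calorically perfect result (gamma close to 1.4, T0/T close to 1 + 0.2 M²), while at high temperatures the two diverge, which is the regime the TPG code targets.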
Validation of Numerical Codes to Compute Tsunami Runup And Inundation
NASA Astrophysics Data System (ADS)
Velioğlu, Deniz; Cevdet Yalçıner, Ahmet; Kian, Rozita; Zaytsev, Andrey
2015-04-01
FLOW 3D and NAMI DANCE are two numerical codes which can be applied to analysis of flow and motion of long waves. Flow 3D simulates linear and nonlinear propagating surface waves as well as irregular waves including long waves. NAMI DANCE uses finite difference computational method to solve nonlinear shallow water equations (NSWE) in long wave problems, specifically tsunamis. Both codes can be applied to tsunami simulations and visualization of long waves. Both codes are capable of solving flooding problems. However, FLOW 3D is designed mainly to solve flooding problem from land and NAMI DANCE is designed to solve flooding problem from the sea. These numerical codes are applied to some benchmark problems for validation and verification. One useful benchmark problem is the runup of solitary waves which is investigated analytically and experimentally by Synolakis (1987). Since 1970s, solitary waves have commonly been used to model tsunamis especially in experimental and numerical studies. In this respect, a benchmark problem on runup of solitary waves is a relevant choice to assess the capability and validity of the numerical codes on amplification of tsunamis. In this study both codes have been tested, compared and validated by applying to the analytical benchmark problem of solitary wave runup on a sloping beach. Comparison of the results showed that both codes are in good agreement with the analytical and experimental results and thus can be proposed to be used in inundation of long waves and tsunami hazard analysis.
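Runup benchmarks of the Synolakis (1987) type initialize the free surface with a solitary wave; a standard first-order choice is the profile eta = H sech²(sqrt(3H/4d³)(x − x₀)). The sketch below assumes that standard form with illustrative parameters, not the exact setup of this study.

```python
import math

def solitary_wave(x, x0, H, d):
    """First-order solitary-wave surface elevation commonly used to
    initialize runup benchmarks: H is the wave height, d the
    undisturbed depth, x0 the crest position (all in metres)."""
    k = math.sqrt(3.0 * H / (4.0 * d**3))
    return H / math.cosh(k * (x - x0)) ** 2

# Sample the free surface for an H/d = 0.019 wave over depth d = 1 m,
# on a 0.5 m grid from x = 0 to x = 60 m with the crest at x = 20 m.
d, H, x0 = 1.0, 0.019, 20.0
eta = [solitary_wave(i * 0.5, x0, H, d) for i in range(121)]
```

Feeding such a profile into a nonlinear shallow water solver and comparing the computed maximum runup against the analytical and laboratory results is exactly the validation exercise the abstract describes.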
Development and application of computational aerothermodynamics flowfield computer codes
NASA Technical Reports Server (NTRS)
Venkatapathy, Ethiraj
1993-01-01
Computations are presented for one-dimensional, strong shock waves that are typical of those that form in front of a reentering spacecraft. The fluid mechanics and thermochemistry are modeled using two different approaches. The first employs traditional continuum techniques in solving the Navier-Stokes equations. The second approach employs a particle simulation technique (the direct simulation Monte Carlo method, DSMC). The thermochemical models employed in these two techniques are quite different. The present investigation presents an evaluation of thermochemical models for nitrogen under hypersonic flow conditions. Four separate cases are considered. The cases are governed, respectively, by the following: vibrational relaxation; weak dissociation; strong dissociation; and weak ionization. In near-continuum, hypersonic flow, the nonequilibrium thermochemical models employed in continuum and particle simulations produce nearly identical solutions. Further, the two approaches are evaluated successfully against available experimental data for weakly and strongly dissociating flows.
Computational Physics at a Liberal Arts College
NASA Astrophysics Data System (ADS)
Christian, Wolfgang
1997-11-01
Since students have different skills, computational physics at an undergraduate liberal arts college must be flexible. Some students write well; other students have good graphical design skills; and other students have mathematical ability. Most students will not major in physics and many will not major in science. We believe, however, that Computational Physics has broad appeal since it is an effective way to develop problem solving skills and to become computer literate. Students perceive that they are not well educated without a good understanding of a computer's power and its limitations. Learning to write and to design an interface that communicates an idea is part of our program. So is downloading information via the World Wide Web, FTP-ing homework, getting help from Computer Services, and emailing other students or the instructor. We have adopted a web-based approach throughout the curriculum and have added Computational Physics as a required course for majors. It is our intent (following a philosophy pioneered by the M.U.P.P.E.T. team at the University of Maryland) that students use the computer to explore real scientific problems early in their undergraduate career. Examples of student work will be presented.
Simulating physical phenomena with a quantum computer
NASA Astrophysics Data System (ADS)
Ortiz, Gerardo
2003-03-01
In a keynote speech at MIT in 1981 Richard Feynman raised some provocative questions in connection to the exact simulation of physical systems using a special device named a ``quantum computer'' (QC). At the time it was known that deterministic simulations of quantum phenomena in classical computers required a number of resources that scaled exponentially with the number of degrees of freedom, and also that the probabilistic simulation of certain quantum problems were limited by the so-called sign or phase problem, a problem believed to be of exponential complexity. Such a QC was intended to mimic physical processes in exactly the same way as Nature. Certainly, remarks coming from such an influential figure generated widespread interest in these ideas, and today after 21 years there are still some open questions. What kinds of physical phenomena can be simulated with a QC? How? And what are its limitations? Addressing and attempting to answer these questions is what this talk is about. Definitively, the goal of physics simulation using controllable quantum systems (``physics imitation'') is to exploit quantum laws to advantage, and thus accomplish efficient imitation. Fundamental is the connection between a quantum computational model and a physical system by transformations of operator algebras. This concept is a necessary one because in Quantum Mechanics each physical system is naturally associated with a language of operators and thus can be considered as a possible model of quantum computation. The remarkable result is that an arbitrary physical system is naturally simulatable by another physical system (or QC) whenever a ``dictionary'' between the two operator algebras exists. I will explain these concepts and address some of Feynman's concerns regarding the simulation of fermionic systems. Finally, I will illustrate the main ideas by imitating simple physical phenomena borrowed from condensed matter physics using quantum algorithms, and present experimental
Applications of the ARGUS code in accelerator physics
Petillo, J.J.; Mankofsky, A.; Krueger, W.A.; Kostas, C.; Mondelli, A.A.; Drobot, A.T.
1993-12-31
ARGUS is a three-dimensional, electromagnetic, particle-in-cell (PIC) simulation code that is being distributed to U.S. accelerator laboratories in a collaboration between SAIC and the Los Alamos Accelerator Code Group. It uses a modular architecture that allows multiple physics modules to share common utilities for grid and structure input, memory management, disk I/O, and diagnostics. Physics modules are in place for electrostatic and electromagnetic field solutions, frequency-domain (eigenvalue) solutions, time-dependent PIC, and steady-state PIC simulations. All of the modules are implemented with a domain-decomposition architecture that allows large problems to be broken up into pieces that fit in core and that facilitates the adaptation of ARGUS for parallel processing. ARGUS operates on either Cray or workstation platforms, and a MOTIF-based user interface is available for X-windows terminals. Applications of ARGUS in accelerator physics and design are described in this paper.
Hofmann, R.
1981-11-01
A useful computer simulation method based on the explicit finite difference technique can be used to address transient dynamic situations associated with nuclear reactor design and analysis. This volume is divided into two parts. Part A contains the theoretical background (physical and numerical) and the numerical equations for the STEALTH 1D, 2D, and 3D computer codes. Part B contains input instructions for all three codes. The STEALTH codes are based entirely on the published technology of the Lawrence Livermore National Laboratory, Livermore, California, and Sandia National Laboratories, Albuquerque, New Mexico.
Construction of large-scale simulation codes using ALPAL (A Livermore Physics Applications Language)
Cook, G.
1990-10-01
A Livermore Physics Applications Language (ALPAL) is a new computer tool that is designed to leverage the abilities and creativity of computational scientists. Some of the ways that ALPAL provides this leverage are: first, it eliminates many sources of errors; second, it permits building code modules with far greater speed than is otherwise possible; third, it provides a means of specifying almost any numerical algorithm; and fourth, it is a language that is close to a journal-style presentation of physics models and numerical methods for solving them. 13 refs., 9 figs.
Verification, Validation, and Predictive Capability in Computational Engineering and Physics
OBERKAMPF, WILLIAM L.; TRUCANO, TIMOTHY G.; HIRSCH, CHARLES
2003-02-01
Developers of computer codes, analysts who use the codes, and decision makers who rely on the results of the analyses face a critical question: How should confidence in modeling and simulation be critically assessed? Verification and validation (V&V) of computational simulations are the primary methods for building and quantifying this confidence. Briefly, verification is the assessment of the accuracy of the solution to a computational model. Validation is the assessment of the accuracy of a computational simulation by comparison with experimental data. In verification, the relationship of the simulation to the real world is not an issue. In validation, the relationship between computation and the real world, i.e., experimental data, is the issue. This paper presents our viewpoint of the state of the art in V&V in computational physics. (In this paper we refer to all fields of computational engineering and physics, e.g., computational fluid dynamics, computational solid mechanics, structural dynamics, shock wave physics, computational chemistry, etc., as computational physics.) We do not provide a comprehensive review of the multitudinous contributions to V&V, although we do reference a large number of previous works from many fields. We have attempted to bring together many different perspectives on V&V, highlight those perspectives that are effective from a practical engineering viewpoint, suggest future research topics, and discuss key implementation issues that are necessary to improve the effectiveness of V&V. We describe our view of the framework in which predictive capability relies on V&V, as well as other factors that affect predictive capability. Our opinions about the research needs and management issues in V&V are very practical: What methods and techniques need to be developed and what changes in the views of management need to occur to increase the usefulness, reliability, and impact of computational physics for decision making about engineering
NASA Technical Reports Server (NTRS)
Wolpert, David H.; Koga, Dennis (Technical Monitor)
2000-01-01
In the first of this pair of papers, it was proven that there cannot be a physical computer to which one can properly pose any and all computational tasks concerning the physical universe. It was then further proven that no physical computer C can correctly carry out all computational tasks that can be posed to C. As a particular example, this result means that there can be no physical computer that, for any physical system external to that computer, takes the specification of that external system's state as input and then correctly predicts its future state before that future state actually occurs; one cannot build a physical computer that can be assured of correctly "processing information faster than the universe does". These results do not rely on systems that are infinite, and/or non-classical, and/or obey chaotic dynamics. They also hold even if one uses an infinitely fast, infinitely dense computer, with computational powers greater than that of a Turing Machine. This generality is a direct consequence of the fact that a novel definition of computation - "physical computation" - is needed to address the issues considered in these papers, which concern real physical computers. While this novel definition does not fit into the traditional Chomsky hierarchy, the mathematical structure and impossibility results associated with it have parallels in the mathematics of the Chomsky hierarchy. This second paper of the pair presents a preliminary exploration of some of this mathematical structure. Analogues of Chomskian results concerning universal Turing Machines and the Halting theorem are derived, as are results concerning the (im)possibility of certain kinds of error-correcting codes. In addition, an analogue of algorithmic information complexity, "prediction complexity", is elaborated. A task-independent bound is derived on how much the prediction complexity of a computational task can differ for two different reference universal physical computers used to solve that task.
Additional extensions to the NASCAP computer code, volume 3
NASA Technical Reports Server (NTRS)
Mandell, M. J.; Cooke, D. L.
1981-01-01
The ION computer code is designed to calculate charge exchange ion densities, electric potentials, plasma temperatures, and current densities external to a neutralized ion engine in R-Z geometry. The present version assumes the beam ion current and density to be known and specified, and the neutralizing electrons to originate from a hot-wire ring surrounding the beam orifice. The plasma is treated as being resistive, with an electron relaxation time comparable to the plasma frequency. Together with the thermal and electrical boundary conditions described below and other straightforward engine parameters, these assumptions suffice to determine the required quantities. The ION code, written in ASCII FORTRAN for UNIVAC 1100 series computers, is designed to be run interactively, although it can also be run in batch mode. The input is free-format, and the output is mainly graphical, using the machine-independent graphics developed for the NASCAP code. The executive routine calls the code's major subroutines in user-specified order, and the code allows great latitude for restart and parameter change.
Compendium of computer codes for the researcher in magnetic fusion energy
Porter, G.D.
1989-03-10
This is a compendium of computer codes which are available to the fusion researcher. It is intended to be a document that permits a quick evaluation of the tools available to the experimenter who wants both to analyze his data and to compare the results of his analysis with the predictions of available theories. This document will be updated frequently to maintain its usefulness. I would appreciate receiving further information about codes not included here from anyone who has used them. The information required includes a brief description of the code (including any special features), a bibliography of the documentation available for the code and/or the underlying physics, a list of people to contact for help in running the code, instructions on how to access the code, and a description of the output from the code. Wherever possible, the code contacts should include people from each of the fusion facilities so that the novice can talk to someone "down the hall" when he first tries to use a code. I would also appreciate any comments about possible additions and improvements in the index. I encourage any additional criticism of this document. 137 refs.
New Parallel computing framework for radiation transport codes
Kostin, M.A.; Mokhov, N.V.; Niita, K. (JAERI, Tokai)
2010-09-01
A new parallel computing framework has been developed to use with general-purpose radiation transport codes. The framework was implemented as a C++ module that uses MPI for message passing. The module is significantly independent of radiation transport codes it can be used with, and is connected to the codes by means of a number of interface functions. The framework was integrated with the MARS15 code, and an effort is under way to deploy it in PHITS. Besides the parallel computing functionality, the framework offers a checkpoint facility that allows restarting calculations with a saved checkpoint file. The checkpoint facility can be used in single process calculations as well as in the parallel regime. Several checkpoint files can be merged into one thus combining results of several calculations. The framework also corrects some of the known problems with the scheduling and load balancing found in the original implementations of the parallel computing functionality in MARS15 and PHITS. The framework can be used efficiently on homogeneous systems and networks of workstations, where the interference from the other users is possible.
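The checkpoint-merging feature described above can be sketched as a history-weighted combination of per-run tallies. This is a minimal illustration only: the field names below (`histories`, `tally_sum`) are invented for the sketch, and the real MARS15/PHITS checkpoint files use their own binary formats.

```python
# Hedged sketch: merging several Monte Carlo checkpoint records into one,
# combining results of separate runs as the framework's merge step does.
# Field names are illustrative assumptions, not the real file format.

def merge_checkpoints(checkpoints):
    """History-weighted combination of per-run Monte Carlo tallies.

    Each record carries the number of particle histories run and the sum
    of per-history scores, so the merged mean is the combined sum divided
    by the combined history count.
    """
    total_hist = sum(c["histories"] for c in checkpoints)
    total_sum = sum(c["tally_sum"] for c in checkpoints)
    return {"histories": total_hist,
            "tally_sum": total_sum,
            "mean": total_sum / total_hist}

runs = [{"histories": 1000, "tally_sum": 52.0},
        {"histories": 3000, "tally_sum": 150.0}]
merged = merge_checkpoints(runs)
```

Keeping sums and history counts (rather than per-run means) is what makes the merge associative, so any number of checkpoint files can be folded together in any order.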
RADTRAN 5: A computer code for transportation risk analysis
Neuhauser, K. S.; Kanipe, F. L.
1991-01-01
RADTRAN 5 is a computer code developed at Sandia National Laboratories (SNL) in Albuquerque, NM, to estimate radiological and nonradiological risks of radioactive materials transportation. RADTRAN 5 is written in ANSI Standard FORTRAN 77 and contains significant advances in the methodology for route-specific analysis first developed by SNL for RADTRAN 4 (Neuhauser and Kanipe, 1992). Like the previous RADTRAN codes, RADTRAN 5 contains two major modules for incident-free and accident risk analysis, respectively. All commercially important transportation modes may be analyzed with RADTRAN 5: highway by combination truck; highway by light-duty vehicle; rail; barge; ocean-going ship; cargo air; and passenger air.
SCALE: A modular code system for performing standardized computer analyses for licensing evaluation
1997-03-01
This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This manual covers an array of modules written for the SCALE package, consisting of drivers, system libraries, cross section and materials properties libraries, input/output routines, storage modules, and help files.
Convergence acceleration of the Proteus computer code with multigrid methods
NASA Technical Reports Server (NTRS)
Demuren, A. O.; Ibraheem, S. O.
1995-01-01
This report presents the results of a study to implement convergence acceleration techniques based on the multigrid concept in the two-dimensional and three-dimensional versions of the Proteus computer code. The first section presents a review of the relevant literature on the implementation of the multigrid methods in computer codes for compressible flow analysis. The next two sections present detailed stability analysis of numerical schemes for solving the Euler and Navier-Stokes equations, based on conventional von Neumann analysis and the bi-grid analysis, respectively. The next section presents details of the computational method used in the Proteus computer code. Finally, the multigrid implementation and applications to several two-dimensional and three-dimensional test problems are presented. The results of the present study show that the multigrid method always leads to a reduction in the number of iterations (or time steps) required for convergence. However, there is an overhead associated with the use of multigrid acceleration. The overhead is higher in 2-D problems than in 3-D problems, thus overall multigrid savings in CPU time are in general better in the latter. Savings of about 40-50 percent are typical in 3-D problems, but they are about 20-30 percent in large 2-D problems. The present multigrid method is applicable to steady-state problems and is therefore ineffective in problems with inherently unstable solutions.
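The mechanics of the multigrid acceleration studied in the report can be illustrated with a minimal two-grid cycle for the 1-D Poisson problem. The smoother, grid sizes, and cycle structure below are illustrative choices for the sketch, not the Proteus implementation or its compressible-flow equations.

```python
# Minimal two-grid sketch for -u'' = f on (0,1), u(0) = u(1) = 0,
# demonstrating why a coarse-grid correction beats smoothing alone.
import math

def smooth(u, f, h, sweeps, omega=2/3):
    """Weighted-Jacobi sweeps (the classic multigrid smoother)."""
    n = len(u) - 1
    for _ in range(sweeps):
        v = u[:]
        for i in range(1, n):
            v[i] = (1 - omega) * u[i] + omega * 0.5 * (u[i-1] + u[i+1] + h*h*f[i])
        u = v
    return u

def residual(u, f, h):
    """r = f - A u for the standard 3-point Laplacian."""
    n = len(u) - 1
    r = [0.0] * (n + 1)
    for i in range(1, n):
        r[i] = f[i] - (2*u[i] - u[i-1] - u[i+1]) / (h*h)
    return r

def solve_direct(rhs, h):
    """Thomas-algorithm direct solve of the coarse tridiagonal system."""
    n = len(rhs) - 1
    a, c = 2.0/(h*h), -1.0/(h*h)
    d = [0.0]*(n+1); y = [0.0]*(n+1)
    d[1], y[1] = a, rhs[1]
    for i in range(2, n):
        m = c / d[i-1]
        d[i] = a - m*c
        y[i] = rhs[i] - m*y[i-1]
    x = [0.0]*(n+1)
    x[n-1] = y[n-1] / d[n-1]
    for i in range(n-2, 0, -1):
        x[i] = (y[i] - c*x[i+1]) / d[i]
    return x

def two_grid(u, f, h):
    """One cycle: pre-smooth, coarse-grid correction, post-smooth."""
    u = smooth(u, f, h, 2)
    r = residual(u, f, h)
    nc = (len(u) - 1) // 2
    rc = [0.0] * (nc + 1)
    for i in range(1, nc):                      # full-weighting restriction
        rc[i] = 0.25*r[2*i-1] + 0.5*r[2*i] + 0.25*r[2*i+1]
    ec = solve_direct(rc, 2*h)
    for i in range(nc + 1):                     # linear-interpolation prolongation
        u[2*i] += ec[i]
    for i in range(nc):
        u[2*i+1] += 0.5 * (ec[i] + ec[i+1])
    return smooth(u, f, h, 2)

n, h = 64, 1.0/64
f = [math.pi**2 * math.sin(math.pi * i * h) for i in range(n + 1)]
norm = lambda r: max(abs(v) for v in r)
u_mg = two_grid([0.0]*(n+1), f, h)             # one two-grid cycle (4 sweeps total)
u_j = smooth([0.0]*(n+1), f, h, 4)             # 4 plain Jacobi sweeps, no coarse grid
res_mg = norm(residual(u_mg, f, h))
res_j = norm(residual(u_j, f, h))
```

For the same number of smoothing sweeps, the coarse-grid correction removes the smooth error component that Jacobi barely touches, which is the same effect behind the iteration-count reductions reported for Proteus.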
War of Ontology Worlds: Mathematics, Computer Code, or Esperanto?
Rzhetsky, Andrey; Evans, James A.
2011-01-01
The use of structured knowledge representations—ontologies and terminologies—has become standard in biomedicine. Definitions of ontologies vary widely, as do the values and philosophies that underlie them. In seeking to make these views explicit, we conducted and summarized interviews with a dozen leading ontologists. Their views clustered into three broad perspectives that we summarize as mathematics, computer code, and Esperanto. Ontology as mathematics puts the ultimate premium on rigor and logic, symmetry and consistency of representation across scientific subfields, and the inclusion of only established, non-contradictory knowledge. Ontology as computer code focuses on utility and cultivates diversity, fitting ontologies to their purpose. Like computer languages C++, Prolog, and HTML, the code perspective holds that diverse applications warrant custom designed ontologies. Ontology as Esperanto focuses on facilitating cross-disciplinary communication, knowledge cross-referencing, and computation across datasets from diverse communities. We show how these views align with classical divides in science and suggest how a synthesis of their concerns could strengthen the next generation of biomedical ontologies. PMID:21980276
VARSKIN MOD 2 and SADDE MOD2: Computer codes for assessing skin dose from skin contamination
Durham, J.S.
1992-12-01
The computer code VARSKIN has been modified to calculate dose to skin from three-dimensional sources, sources separated from the skin by layers of protective clothing, and gamma dose from certain radionuclides; a correction for backscatter has also been incorporated for certain geometries. This document describes the new code, VARSKIN Mod 2, including installation and operation instructions, provides detailed descriptions of the models used, and suggests methods for avoiding misuse of the code. The input data file for VARSKIN Mod 2 has been modified to reflect current physical data, to include the contribution to dose from internal conversion and Auger electrons, and to reflect a correction for low-energy electrons. In addition, the computer code SADDE: Scaled Absorbed Dose Distribution Evaluator has been modified to allow the generation of scaled absorbed dose distributions for mixtures of radionuclides and internal conversion and Auger electrons. This new code, SADDE Mod 2, is also described in this document. Instructions for installation and operation of the code and detailed descriptions of the models used in the code are provided.
Nanostructure symmetry: Relevance for physics and computing
Dupertuis, Marc-André; Oberli, D. Y.; Karlsson, K. F.; Dalessi, S.; Gallinet, B.; Svendsen, G.
2014-03-31
We review the research done in recent years in our group on the effects of nanostructure symmetry, and outline its relevance both for nanostructure physics and for computations of their electronic and optical properties. The examples of C3v and C2v quantum dots are used. A number of surprises and non-trivial aspects are outlined, and a few symmetry-based tools for computing and analysis are briefly presented.
A domain decomposition scheme for Eulerian shock physics codes
Bell, R.L.; Hertel, E.S. Jr.
1994-08-01
A new algorithm which allows for complex domain decomposition in Eulerian codes was developed at Sandia National Laboratories. This new feature allows a user to customize the zoning for each portion of a calculation and to refine volumes of the computational space of particular interest. This option is available in one, two, and three dimensions. The new technique will be described in detail, and several examples of its effectiveness will also be discussed.
Additional extensions to the NASCAP computer code, volume 1
NASA Technical Reports Server (NTRS)
Mandell, M. J.; Katz, I.; Stannard, P. R.
1981-01-01
Extensions and revisions to a computer code that comprehensively analyzes problems of spacecraft charging (NASCAP) are documented. Using a fully three-dimensional approach, it can accurately predict spacecraft potentials under a variety of conditions. Among the extensions are a multiple electron/ion gun test tank capability, and the ability to model anisotropic and time dependent space environments. Also documented are a greatly extended MATCHG program and the preliminary version of NASCAP/LEO. The interactive MATCHG code was developed into an extremely powerful tool for the study of material-environment interactions. NASCAP/LEO, a three-dimensional code to study current collection under conditions of high voltages and short Debye lengths, was distributed for preliminary testing.
Interaction of Intuitive Physics with Computer-Simulated Physics.
ERIC Educational Resources Information Center
Flick, Lawrence B.
1990-01-01
The question of how children solve force and motion problems in computer simulations without explicit knowledge of the underlying physics was investigated. Keystroke sequences made by children were saved and analyzed, and children were interviewed to understand their perception of the relationship between keyboard input and on-screen action. (CW)
Geothermal reservoir engineering computer code comparison and validation
Faust, C.R.; Mercer, J.W.; Miller, W.J.
1980-11-12
The results of computer simulations for a set of six problems typical of geothermal reservoir engineering applications are presented. These results are compared to those obtained by others using similar geothermal reservoir simulators on the same problem set. The purpose of this code comparison is to check the performance of participating codes on a set of typical reservoir problems. The results provide a measure of the validity and appropriateness of the simulators in terms of major assumptions, governing equations, numerical accuracy, and computational procedures. A description is given of the general reservoir simulator - its major assumptions, mathematical formulation, and numerical techniques. Following the description of the model is the presentation of the results for the six problems. Included with the results for each problem is a discussion of the results; problem descriptions and result tabulations are included in appendixes. Each of the six problems specified in the contract was successfully simulated. (MHR)
Inlet-Compressor Analysis Performed Using Coupled Computational Fluid Dynamics Codes
NASA Technical Reports Server (NTRS)
Cole, Gary L.; Suresh, Ambady; Townsend, Scott
1999-01-01
A thorough understanding of dynamic interactions between inlets and compressors is extremely important to the design and development of propulsion control systems, particularly for supersonic aircraft such as the High-Speed Civil Transport (HSCT). Computational fluid dynamics (CFD) codes are routinely used to analyze individual propulsion components. By coupling the appropriate CFD component codes, it is possible to investigate inlet-compressor interactions. The objectives of this work were to gain a better understanding of inlet-compressor interaction physics, formulate a more realistic compressor-face boundary condition for time-accurate CFD simulations of inlets, and to take a first step toward the CFD simulation of an entire engine by coupling multidimensional component codes. This work was conducted at the NASA Lewis Research Center by a team of civil servants and support service contractors as part of the High Performance Computing and Communications Program (HPCCP).
Validation and testing of the VAM2D computer code
Kool, J.B.; Wu, Y.S.
1991-10-01
This document describes two modeling studies conducted by HydroGeoLogic, Inc. for the US NRC under contract no. NRC-04089-090, entitled "Validation and Testing of the VAM2D Computer Code." VAM2D is a two-dimensional, variably saturated flow and transport code, with applications for performance assessment of nuclear waste disposal. The computer code itself is documented in a separate NUREG document (NUREG/CR-5352, 1989). The studies presented in this report involve application of the VAM2D code to two diverse subsurface modeling problems. The first one involves modeling of infiltration and redistribution of water and solutes in an initially dry, heterogeneous field soil. This application involves detailed modeling over a relatively short, 9-month time period. The second problem pertains to the application of VAM2D to the modeling of a waste disposal facility in a fractured clay, over much larger space and time scales and with particular emphasis on the applicability and reliability of using an equivalent porous medium approach for simulating flow and transport in fractured geologic media. Reflecting the separate and distinct nature of the two problems studied, this report is organized in two separate parts. 61 refs., 31 figs., 9 tabs.
Development of non-linear finite element computer code
NASA Technical Reports Server (NTRS)
Becker, E. B.; Miller, T.
1985-01-01
Recent work has shown that the use of separable symmetric functions of the principal stretches can adequately describe the response of certain propellant materials and, further, that a data reduction scheme gives a convenient way of obtaining the values of the functions from experimental data. Based on representation of the energy, a computational scheme was developed that allows finite element analysis of boundary value problems of arbitrary shape and loading. The computational procedure was implemental in a three-dimensional finite element code, TEXLESP-S, which is documented herein.
Computational radiology and imaging with the MCNP Monte Carlo code
Estes, G.P.; Taylor, W.M.
1995-05-01
MCNP, a 3D coupled neutron/photon/electron Monte Carlo radiation transport code, is currently used in medical applications such as cancer radiation treatment planning, interpretation of diagnostic radiation images, and treatment beam optimization. This paper will discuss MCNP's current uses and capabilities, as well as envisioned improvements that would further enhance MCNP's role in computational medicine. It will be demonstrated that the methodology exists to simulate medical images (e.g. SPECT). Techniques will be discussed that would enable the construction of 3D computational geometry models of individual patients for use in patient-specific studies that would improve the quality of care for patients.
Bragg optics computer codes for neutron scattering instrument design
Popovici, M.; Yelon, W.B.; Berliner, R.R.; Stoica, A.D.
1997-09-01
Computer codes for neutron crystal spectrometer design, optimization and experiment planning are described. Phase space distributions, linewidths and absolute intensities are calculated by matrix methods in an extension of the Cooper-Nathans resolution function formalism. For modeling the Bragg reflection on bent crystals the lamellar approximation is used. Optimization is done by satisfying conditions of focusing in scattering and in real space, and by numerically maximizing figures of merit. Examples for three-axis and two-axis spectrometers are given.
Computer Code For Calculation Of The Mutual Coherence Function
NASA Astrophysics Data System (ADS)
Bugnolo, Dimitri S.
1986-05-01
We present a computer code in FORTRAN 77 for the calculation of the mutual coherence function (MCF) of a plane wave normally incident on a stochastic half-space. This is an exact result. The user need only input the path length, the wavelength, the outer scale size, and the structure constant. This program may be used to calculate the MCF of a well-collimated laser beam in the atmosphere.
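A flavor of the calculation can be given by the corresponding textbook result: the modulus of the plane-wave MCF under Kolmogorov turbulence. Note the hedges: unlike Bugnolo's code, this sketch has no outer-scale input (the pure Kolmogorov spectrum has none), and his exact stochastic half-space model may differ in detail; the numerical inputs in the example are arbitrary illustrative values.

```python
import math

# Hedged sketch: |MCF| of a plane wave after a turbulent path, using the
# standard Kolmogorov-spectrum wave structure function
#   D(rho) = 2.914 * k^2 * Cn^2 * L * rho^(5/3),
# with MCF(rho) = exp(-D(rho)/2). This ignores the outer scale that the
# FORTRAN 77 code accepts as input.

def mcf_plane_wave(rho, wavelength, cn2, path_length):
    """Modulus of the mutual coherence function at transverse separation rho [m]."""
    k = 2.0 * math.pi / wavelength                      # optical wavenumber
    d_wsf = 2.914 * k**2 * cn2 * path_length * rho**(5.0/3.0)
    return math.exp(-0.5 * d_wsf)

# Example: HeNe laser, 1 km path, moderate turbulence strength
gamma = mcf_plane_wave(rho=0.01, wavelength=633e-9, cn2=1e-14, path_length=1e3)
```

As expected, coherence is perfect at zero separation and decays monotonically with separation, path length, and structure constant.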
Computer Network Resources for Physical Geography Instruction.
ERIC Educational Resources Information Center
Bishop, Michael P.; And Others
1993-01-01
Asserts that the use of computer networks provides an important and effective resource for geography instruction. Describes the use of the Internet network in physical geography instruction. Provides an example of the use of Internet resources in a climatology/meteorology course. (CFR)
Computer-Based Physics: An Anthology.
ERIC Educational Resources Information Center
Blum, Ronald, Ed.
Designed to serve as a guide for integrating interactive problem-solving or simulating computers into a college-level physics course, this anthology contains nine articles each of which includes an introduction, a student manual, and a teacher's guide. Among areas covered in the articles are the computerized reduction of data to a Gaussian…
The Fundamental Physical Limits of Computation.
ERIC Educational Resources Information Center
Bennett, Charles H.; Landauer, Rolf
1985-01-01
Examines what constraints govern the physical process of computation, considering such areas as whether a minimum amount of energy is required per logic step. Indicates that although there seems to be no minimum, answers to other questions are unresolved. Examples used include DNA/RNA, a Brownian clockwork Turing machine, and others. (JN)
Statistical and computational challenges in physical mapping
Nelson, D.O.; Speed, T.P.
1994-06-01
One of the great success stories of modern molecular genetics has been the ability of biologists to isolate and characterize the genes responsible for serious inherited diseases like Huntington's disease, cystic fibrosis, and myotonic dystrophy. Instrumental in these efforts has been the construction of so-called "physical maps" of large regions of human chromosomes. Constructing a physical map of a chromosome presents a number of interesting challenges to the computational statistician. In addition to the general ill-posedness of the problem, complications include the size of the data sets, computational complexity, and the pervasiveness of experimental error. The nature of the problem and the presence of many levels of experimental uncertainty make statistical approaches to map construction appealing. Simultaneously, however, the size and combinatorial complexity of the problem make such approaches computationally demanding. In this paper we discuss what physical maps are and describe three different kinds of physical maps, outlining issues which arise in constructing them. In addition, we describe our experience with powerful, interactive statistical computing environments. We found that the ability to create high-level specifications of proposed algorithms which could then be directly executed provided a flexible rapid prototyping facility for developing new statistical models and methods. The ability to check the implementation of an algorithm by comparing its results to that of an executable specification enabled us to rapidly debug both specification and implementation in an environment of changing needs.
The Computer in Second Semester Introductory Physics.
ERIC Educational Resources Information Center
Merrill, John R.
This supplementary text material is meant to suggest ways in which the computer can increase students' intuitive understanding of fields and waves. The first way allows the student to produce a number of examples of the physics discussed in the text. For example, more complicated field and potential maps, or intensity patterns, can be drawn from…
Extreme Scale Computing for First-Principles Plasma Physics Research
Chang, Choong-Seock
2011-10-12
World superpowers are in the middle of the “Computnik” race. US Department of Energy (and National Nuclear Security Administration) wishes to launch exascale computer systems into the scientific (and national security) world by 2018. The objective is to solve important scientific problems and to predict the outcomes using the most fundamental scientific laws, which would not be possible otherwise. Being chosen into the next “frontier” group can be of great benefit to a scientific discipline. An extreme scale computer system requires different types of algorithms and programming philosophy from those we have been accustomed to. Only a handful of scientific codes are blessed to be capable of scalable usage of today’s largest computers in operation at petascale (using more than 100,000 cores concurrently). Fortunately, a few magnetic fusion codes are competing well in this race using the “first principles” gyrokinetic equations. These codes are beginning to study the fusion plasma dynamics in full-scale realistic diverted device geometry in natural nonlinear multiscale, including the large scale neoclassical and small scale turbulence physics, but excluding some ultra fast dynamics. In this talk, most of the above-mentioned topics will be introduced at an executive level. Representative properties of the extreme scale computers, modern programming exercises to take advantage of them, and different philosophies in the data flows and analyses will be presented. Examples of the multi-scale multi-physics scientific discoveries made possible by solving the gyrokinetic equations on extreme scale computers will be described. Future directions into “virtual tokamak experiments” will also be discussed.
Teaching Computational Physics to High School Teachers
NASA Astrophysics Data System (ADS)
Cancio, Antonio C.
2007-10-01
This talk describes my experience in developing and giving an experimental workshop to expose high school teachers to basic concepts in computer modeling and give them tools to make simple 3D simulations for class demos and student projects. Teachers learned basic techniques of simulating dynamics using high school and introductory college level physics, along with basic elements of programming. High-quality graphics were implemented in an easy-to-use, open-source software package, VPython, currently in use in college introductory courses. Simulations covered areas of everyday physics accessible to computational approaches that would otherwise be hard to treat at the introductory level, such as the physics of sports, realistic planetary motion, and chaotic motion. The challenges and successes of teaching this subject to an audience completely new to it, in an experimental one-week workshop format, will be discussed.
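The kind of exercise described (the physics of sports, simulated with simple stepping) can be sketched in a few lines. This is plain Python rather than VPython, since the graphics are omitted here, and the drag coefficient and launch velocities are arbitrary illustrative values, not from the workshop materials.

```python
# Illustrative workshop-style exercise: a ball in flight with quadratic
# air drag, integrated with the simple Euler stepping taught to the
# teachers. Parameter values are arbitrary for demonstration.

def fly(v0x, v0y, dt=1e-3, g=9.8, b=0.005):
    """Euler-integrate 2D projectile motion with drag acceleration -b*|v|*v;
    returns the horizontal range when the ball returns to the ground."""
    x = y = 0.0
    vx, vy = v0x, v0y
    while y >= 0.0:
        speed = (vx*vx + vy*vy) ** 0.5
        ax = -b * speed * vx          # drag only, horizontally
        ay = -g - b * speed * vy      # gravity plus drag, vertically
        x += vx * dt
        y += vy * dt
        vx += ax * dt
        vy += ay * dt
    return x

range_drag = fly(20.0, 20.0)          # with air drag
range_vac = fly(20.0, 20.0, b=0.0)    # vacuum case, for comparison
```

Comparing the two runs against the analytic vacuum range 2*v0x*v0y/g is exactly the sort of check that makes the effect of drag concrete for students without any calculus beyond the update rule.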
Singular Function Integration in Computational Physics
NASA Astrophysics Data System (ADS)
Hasbun, Javier
2009-03-01
In teaching computational methods in the undergraduate physics curriculum, standard integration approaches taught include the rectangular, trapezoidal, Simpson, Romberg, and others. Over time, these techniques have proven to be invaluable, and students are encouraged to employ the most efficient method that is expected to perform best when applied to a given problem. However, some physics research applications require techniques that can handle singularities. While decreasing the step size in traditional approaches is one alternative, it does not always work, and the repeated evaluations make this route inefficient. Here, I present two existing integration rules designed to handle singular integrals. I compare them to traditional rules as well as to the exact analytic results. I suggest that it is perhaps time to include such approaches in the undergraduate computational physics course.
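To illustrate why singular integrands defeat the standard rules, the sketch below applies one common remedy, a change of variables, to the integral of exp(x)/sqrt(x) from 0 to 1. This is a standard textbook technique chosen for illustration, not one of the two specific rules presented in the talk. Substituting x = t**2 turns the integrand into the smooth 2*exp(t**2), which Simpson's rule handles easily, while the direct midpoint rule converges slowly near the singularity.

```python
import math

def midpoint(f, a, b, n):
    """Composite midpoint rule (never evaluates at the endpoints)."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def simpson(f, a, b, n):
    """Composite Simpson rule; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2*i - 1) * h) for i in range(1, n//2 + 1))
    s += 2 * sum(f(a + 2*i*h) for i in range(1, n//2))
    return s * h / 3

f = lambda x: math.exp(x) / math.sqrt(x)   # integrand singular at x = 0
g = lambda t: 2.0 * math.exp(t * t)        # after x = t**2, singularity removed

ref = simpson(g, 0.0, 1.0, 2000)           # effectively exact reference value
err_direct = abs(midpoint(f, 0.0, 1.0, 1000) - ref)   # 1000 points, poor accuracy
err_subst = abs(simpson(g, 0.0, 1.0, 100) - ref)      # 100 points, far better
```

Even with ten times as many function evaluations, the direct rule remains orders of magnitude less accurate than the transformed one, which motivates teaching singularity-aware rules alongside the classics.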