Science.gov

Sample records for advanced simulation codes

  1. An Advanced Simulation Code for Modeling Inductive Output Tubes

    SciTech Connect

    Thuc Bui; R. Lawrence Ives

    2012-04-27

    During the Phase I program, CCR completed several major building blocks for a 3D large-signal, inductive output tube (IOT) code using modern computer languages and programming techniques. These included a 3D, time-harmonic (Helmholtz) field solver with a fully functional graphical user interface (GUI), automeshing, and adaptivity. Other building blocks included an improved electrostatic Poisson solver with temporal boundary conditions to provide time-varying fields for the time-stepping particle pusher, as well as the self-electric field caused by time-varying space charge. The magnetostatic field solver was also updated to solve for the self-magnetic field caused by the time-varying current density in the output cavity gap. The goal function for optimizing an IOT cavity was also formulated, and optimization methodologies were investigated.
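    As a toy illustration of the kind of time-harmonic (Helmholtz) field problem such a solver addresses, a 1-D finite-difference version can be sketched as follows (a generic sketch, not CCR's 3D solver; the source term, grid size, and Dirichlet boundaries are arbitrary illustrative choices):

```python
import numpy as np

def helmholtz_1d(k, n=200, L=1.0):
    """Finite-difference solve of u'' + k^2 u = f on (0, L) with u = 0 at
    both ends -- a toy 1-D stand-in for the time-harmonic (Helmholtz)
    field problem, just to show the discretization idea."""
    h = L / (n + 1)                       # interior grid spacing
    x = np.linspace(h, L - h, n)          # interior nodes only
    f = np.sin(np.pi * x / L)             # sample source term (vanishes at ends)
    # Tridiagonal operator: second difference plus k^2 on the diagonal.
    main = np.full(n, -2.0 / h**2 + k**2)
    off = np.full(n - 1, 1.0 / h**2)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return x, np.linalg.solve(A, f)
```

    With the sample source f = sin(pi x/L), the discrete solution agrees with the closed form sin(pi x/L)/(k^2 - (pi/L)^2) to second order in the grid spacing.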

  2. SAC: Sheffield Advanced Code

    NASA Astrophysics Data System (ADS)

    Griffiths, Mike; Fedun, Viktor; Mumford, Stuart; Gent, Frederick

    2013-06-01

    The Sheffield Advanced Code (SAC) is a fully non-linear MHD code designed for simulations of linear and non-linear wave propagation in strongly gravitationally stratified magnetized plasmas. It was developed primarily for the forward modelling of helioseismological processes and for the coupling processes in the solar interior, photosphere, and corona; it is built on the well-known VAC platform that allows robust simulation of the macroscopic processes in gravitationally stratified (non-)magnetized plasmas. The code has no limitations of simulation length in time imposed by complications originating from the upper boundary, nor does it require implementation of special procedures to treat the upper boundaries. SAC inherited its modular structure from VAC, making it easy to add new physics modules.

  3. Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC).

    SciTech Connect

    Schultz, Peter Andrew

    2011-12-01

    The objective of the U.S. Department of Energy Office of Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC) is to provide an integrated suite of computational modeling and simulation (M&S) capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive-waste storage facility or disposal repository. Achieving the objective of modeling the performance of a disposal scenario requires describing processes involved in waste form degradation and radionuclide release at the subcontinuum scale, beginning with mechanistic descriptions of chemical reactions and chemical kinetics at the atomic scale, and upscaling into effective, validated constitutive models for input to high-fidelity continuum-scale codes for coupled multiphysics simulations of release and transport. Verification and validation (V&V) is required throughout the system to establish evidence-based metrics for the level of confidence in M&S codes and capabilities, including at the subcontinuum scale and for the constitutive models those simulations inform or generate. This report outlines the nature of the V&V challenge at the subcontinuum scale, an approach to incorporate V&V concepts into subcontinuum-scale modeling and simulation, and a plan to incrementally incorporate effective V&V into subcontinuum-scale M&S destined for use in the NEAMS Waste IPSC workflow, to meet requirements of quantitative confidence in the constitutive models informed by subcontinuum-scale phenomena.

  4. Nuclear Energy Advanced Modeling and Simulation (NEAMS) waste Integrated Performance and Safety Codes (IPSC) : gap analysis for high fidelity and performance assessment code development.

    SciTech Connect

    Lee, Joon H.; Siegel, Malcolm Dean; Arguello, Jose Guadalupe, Jr.; Webb, Stephen Walter; Dewers, Thomas A.; Mariner, Paul E.; Edwards, Harold Carter; Fuller, Timothy J.; Freeze, Geoffrey A.; Jove-Colon, Carlos F.; Wang, Yifeng

    2011-03-01

    This report describes a gap analysis performed in the process of developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with rigorous verification, validation, and software quality requirements. The gap analyses documented in this report were performed during an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC, and during follow-on activities that delved into more detailed assessments of the various codes that were acquired, studied, and tested. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and to develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. The gap analysis indicates that significant capabilities may already exist in existing thermal-hydrologic-chemical (THC) codes, although no single code is able to fully account for all physical and chemical processes involved in a waste disposal system. Large gaps exist in modeling chemical processes and their couplings with other processes. The coupling of chemical processes with flow, transport, and mechanical deformation remains challenging. The data for extreme environments (e.g., for elevated temperature and high ionic strength media) that are

  5. Nuclear Energy Advanced Modeling and Simulation (NEAMS) Waste Integrated Performance and Safety Codes (IPSC) : FY10 development and integration.

    SciTech Connect

    Criscenti, Louise Jacqueline; Sassani, David Carl; Arguello, Jose Guadalupe, Jr.; Dewers, Thomas A.; Bouchard, Julie F.; Edwards, Harold Carter; Freeze, Geoffrey A.; Wang, Yifeng; Schultz, Peter Andrew

    2011-02-01

    This report describes the progress in fiscal year 2010 in developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with robust verification, validation, and software quality requirements. Waste IPSC activities in fiscal year 2010 focused on specifying a challenge problem to demonstrate proof of concept, developing a verification and validation plan, and performing an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. This year-end progress report documents the FY10 status of acquisition, development, and integration of thermal-hydrologic-chemical-mechanical (THCM) code capabilities, frameworks, and enabling tools and infrastructure.

  6. Electrical Circuit Simulation Code

    SciTech Connect

    Wix, Steven D.; Waters, Arlon J.; Shirley, David

    2001-08-09

    Massively-Parallel Electrical Circuit Simulation Code. CHILESPICE is a massively-parallel, distributed-memory electrical circuit simulation tool that contains many enhanced radiation, time-based, and thermal features and models. It provides large-scale electronic circuit simulation; shared-memory parallel processing with enhanced convergence; and Sandia-specific device models.

  7. Compressible Astrophysics Simulation Code

    SciTech Connect

    Howell, L.; Singer, M.

    2007-07-18

    This is an astrophysics simulation code involving a radiation diffusion module developed at LLNL coupled to compressible hydrodynamics and adaptive mesh infrastructure developed at LBNL. One intended application is to neutrino diffusion in core collapse supernovae.

  8. MHD Simulation of Magnetic Nozzle Plasma with the NIMROD Code: Applications to the VASIMR Advanced Space Propulsion Concept

    NASA Astrophysics Data System (ADS)

    Tarditi, Alfonso G.; Shebalin, John V.

    2002-11-01

    A simulation study with the NIMROD code [1] is being carried out to investigate the efficiency of the thrust generation process and the properties of plasma detachment in a magnetic nozzle. In the simulation, hot plasma is injected into the magnetic nozzle, modeled as a 2D, axisymmetric domain. NIMROD has two-fluid, 3D capabilities, but the present runs are being conducted within the MHD, 2D approximation. As the plasma travels through the magnetic field, part of its thermal energy is converted into longitudinal kinetic energy along the axis of the nozzle. The plasma eventually detaches from the magnetic field at a certain distance from the nozzle throat, where the kinetic energy becomes larger than the magnetic energy. Preliminary NIMROD 2D runs have been benchmarked against a particle trajectory code with satisfactory results [2]. Further testing is reported here, with emphasis on the analysis of the diffusion rate across the field lines and of the overall nozzle efficiency. These simulation runs are specifically designed to allow comparisons with laboratory measurements of the VASIMR experiment, by looking at the evolution of the radial plasma density and temperature profiles in the nozzle. VASIMR (Variable Specific Impulse Magnetoplasma Rocket, [3]) is an advanced space propulsion concept currently under experimental development at the Advanced Space Propulsion Laboratory, NASA Johnson Space Center. A plasma (typically ionized hydrogen or helium) is generated by an RF (helicon) discharge and heated by an ion cyclotron resonance heating antenna. The heated plasma is then guided into a magnetic nozzle to convert the thermal plasma energy into effective thrust. The VASIMR system has no electrodes, and a solenoidal magnetic field produced by an asymmetric mirror configuration ensures magnetic insulation of the plasma from the material surfaces. By powering the plasma source and the heating antenna at different levels it is possible to smoothly vary the
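    The detachment criterion mentioned above (directed kinetic energy density exceeding magnetic energy density) can be sketched numerically; the axial profiles below are purely illustrative assumptions, not VASIMR data:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [H/m]

def detachment_index(rho, v, B):
    """Return the first axial index where the plasma's directed kinetic
    energy density exceeds the magnetic energy density (the detachment
    criterion described above), or None if it never does."""
    kinetic = 0.5 * rho * v**2           # J/m^3
    magnetic = B**2 / (2.0 * MU0)        # J/m^3
    idx = np.nonzero(kinetic > magnetic)[0]
    return int(idx[0]) if idx.size else None

# Illustrative axial profiles along the nozzle (not measured data):
z = np.linspace(0.0, 1.0, 200)           # axial coordinate [m]
B = 0.2 * np.exp(-3.0 * z)               # decaying solenoidal field [T]
rho = 1e-7 * np.exp(-1.0 * z)            # expanding mass density [kg/m^3]
v = 2e4 + 3e4 * z                        # accelerating axial flow [m/s]

i = detachment_index(rho, v, B)          # index of the detachment point
```

    The ratio of the two energy densities is the directed-flow analogue of the plasma beta; detachment is declared where it first exceeds unity.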

  9. Application of Advanced Concepts and Techniques in Electromagnetic Topology Based Simulations: CRIPTE and Related Codes

    DTIC Science & Technology

    2008-12-01

    The report describes the simulation validation process. The experimental setup used to study the interaction of the electromagnetic field with an aperture, shown in Fig. IVA-1, uses a semi-anechoic chamber with a distance of 3 meters to the metallic plate to satisfy the far-field condition at low frequencies. A transfer function is then calculated from the recorded electric fields (Eq. IVA-1).

  10. Challenge problem and milestones for : Nuclear Energy Advanced Modeling and Simulation (NEAMS) waste Integrated Performance and Safety Codes (IPSC).

    SciTech Connect

    Freeze, Geoffrey A.; Wang, Yifeng; Howard, Robert; McNeish, Jerry A.; Schultz, Peter Andrew; Arguello, Jose Guadalupe, Jr.

    2010-09-01

    This report describes the specification of a challenge problem and associated challenge milestones for the Waste Integrated Performance and Safety Codes (IPSC) supporting the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The NEAMS challenge problems are designed to demonstrate proof of concept and progress towards IPSC goals. The goal of the Waste IPSC is to develop an integrated suite of modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with robust verification, validation, and software quality requirements. To demonstrate proof of concept and progress towards these goals and requirements, a Waste IPSC challenge problem is specified that includes coupled thermal-hydrologic-chemical-mechanical (THCM) processes that describe (1) the degradation of a borosilicate glass waste form and the corresponding mobilization of radionuclides (i.e., the processes that produce the radionuclide source term), (2) the associated near-field physical and chemical environment for waste emplacement within a salt formation, and (3) radionuclide transport in the near field (i.e., through the engineered components - waste form, waste package, and backfill - and the immediately adjacent salt). The initial details of a set of challenge milestones that collectively comprise the full challenge problem are also specified.

  11. Error coding simulations

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1993-01-01

    There are various elements, such as radio frequency interference (RFI), which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable, and it becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and to evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal-to-noise ratio needed for a given desired bit error rate. The use of concatenated coding, e.g. an inner convolutional code and an outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
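    As a minimal illustration of the 16-bit CRC error-detection idea, here is a bitwise sketch using the CCITT generator polynomial commonly associated with the CCSDS recommendation (initial value 0xFFFF; treat the exact parameters as assumptions here rather than a restatement of the standard):

```python
def crc16_ccitt(data: bytes, init: int = 0xFFFF) -> int:
    """Bitwise CRC-16 with the CCITT polynomial x^16 + x^12 + x^5 + 1
    (0x1021), MSB-first, no final XOR -- a sketch of the 16-bit CRC
    variant commonly associated with the CCSDS recommendation."""
    crc = init
    for byte in data:
        crc ^= byte << 8                 # fold the next byte into the register
        for _ in range(8):
            if crc & 0x8000:             # top bit set: shift and apply polynomial
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc
```

    For the conventional check string "123456789" this variant yields 0x29B1, and any corrupted byte produces a different checksum, which is how the receiver detects the error.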

  12. Electromagnetic particle simulation codes

    NASA Technical Reports Server (NTRS)

    Pritchett, P. L.

    1985-01-01

    Electromagnetic particle simulations solve the full set of Maxwell's equations. They thus include the effects of self-consistent electric and magnetic fields, magnetic induction, and electromagnetic radiation. The algorithms for an electromagnetic code which works directly with the electric and magnetic fields are described. The fields and current are separated into transverse and longitudinal components. The transverse E and B fields are integrated in time using a leapfrog scheme applied to the Fourier components. The particle pushing is performed via the relativistic Lorentz force equation for the particle momentum. As an example, simulation results are presented for the electron cyclotron maser instability which illustrate the importance of relativistic effects on the wave-particle resonance condition and on wave dispersion.
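    The relativistic momentum push described above can be illustrated with the standard Boris scheme, a widely used leapfrog-compatible discretization of the Lorentz force equation (the abstract does not name a specific pusher, so this is a representative stand-in rather than the paper's algorithm):

```python
import numpy as np

def boris_push(u, E, B, q_m, dt):
    """One relativistic Boris step for u = gamma*v (momentum per unit mass).
    q_m is charge-to-mass ratio; E, B are the fields at the particle (SI).
    Discretizes du/dt = (q/m)(E + v x B) as half-E-kick, B-rotation,
    half-E-kick."""
    c = 299792458.0
    u_minus = u + 0.5 * q_m * dt * E                      # first half electric kick
    gamma = np.sqrt(1.0 + np.dot(u_minus, u_minus) / c**2)
    t = 0.5 * q_m * dt * B / gamma                        # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    u_prime = u_minus + np.cross(u_minus, t)              # exact-norm rotation about B
    u_plus = u_minus + np.cross(u_prime, s)
    return u_plus + 0.5 * q_m * dt * E                    # second half electric kick

# Electron gyrating in a pure magnetic field (E = 0):
u0 = np.array([1e7, 0.0, 0.0])                            # gamma*v [m/s]
u1 = boris_push(u0, np.zeros(3), np.array([0.0, 0.0, 1e-3]), -1.7588e11, 1e-12)
```

    With E = 0 the update is a pure rotation, so |u| (and hence the particle energy) is conserved exactly, the property that makes Boris-type pushers attractive for long cyclotron-resonance runs like the maser simulation described above.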

  13. LFSC - Linac Feedback Simulation Code

    SciTech Connect

    Ivanov, Valentin; /Fermilab

    2008-05-01

    The computer program LFSC (Linac Feedback Simulation Code) is a numerical tool for simulating beam-based feedback in high-performance linacs. The code is based on an earlier version developed by a collective of authors at SLAC (L. Hendrickson, R. McEwen, T. Himel, H. Shoaee, S. Shah, P. Emma, P. Schultz) during 1990-2005. That code was successively used in simulations for the SLC, TESLA, CLIC and NLC projects. LFSC can simulate both pulse-to-pulse feedback on timescales corresponding to 5-100 Hz and slower feedbacks operating in the 0.1-1 Hz range in the Main Linac and Beam Delivery System. The code runs under Matlab on the MS Windows operating system. It contains about 30,000 lines of source code in more than 260 subroutines. The code uses LIAR ('Linear Accelerator Research code') for particle tracking under ground motion and technical noise perturbations, and the Guinea Pig code to simulate luminosity performance. A set of input files includes the lattice description (XSIF format) and plain text files with numerical parameters, wake fields, ground motion data, etc. The Matlab environment provides a flexible system for graphical output.

  14. Experience with advanced nodal codes at YAEC

    SciTech Connect

    Cacciapouti, R.J.

    1990-01-01

    Yankee Atomic Electric Company (YAEC) has been performing reload licensing analysis since 1969. The basic pressurized water reactor (PWR) methodology involves the use of LEOPARD for cross-section generation, PDQ for radial power distributions and integral control rod worth, and SIMULATE for axial power distributions and differential control rod worth. In 1980, YAEC began performing reload licensing analysis for the Vermont Yankee boiling water reactor (BWR). The basic BWR methodology involves the use of CASMO for cross-section generation and SIMULATE for three-dimensional power distributions. In 1986, YAEC began investigating the use of CASMO-3 for cross-section generation and the advanced nodal code SIMULATE-3 for power distribution analysis. Based on the evaluation, the CASMO-3/SIMULATE-3 methodology satisfied all requirements. After careful consideration, the cost of implementing the new methodology is expected to be offset by reduced computing costs, improved engineering productivity, and fuel-cycle performance gains.

  15. Flight code validation simulator

    SciTech Connect

    Sims, B.A.

    1995-08-01

    An End-To-End Simulation capability for software development and validation of missile flight software on the actual embedded computer has been developed utilizing a 486 PC, i860 DSP coprocessor, embedded flight computer and custom dual port memory interface hardware. This system allows real-time interrupt driven embedded flight software development and checkout. The flight software runs in a Sandia Digital Airborne Computer (SANDAC) and reads and writes actual hardware sensor locations in which IMU (Inertial Measurements Unit) data resides. The simulator provides six degree of freedom real-time dynamic simulation, accurate real-time discrete sensor data and acts on commands and discretes from the flight computer. This system was utilized in the development and validation of the successful premier flight of the Digital Miniature Attitude Reference System (DMARS) in January 1995 at the White Sands Missile Range on a two stage attitude controlled sounding rocket.

  16. HADES, A Radiographic Simulation Code

    SciTech Connect

    Aufderheide, M.B.; Slone, D.M.; Schach von Wittenau, A.E.

    2000-08-18

    We describe features of the HADES radiographic simulation code. We begin with a discussion of why it is useful to simulate transmission radiography. The capabilities of HADES are described, followed by an application of HADES to a dynamic experiment recently performed at the Los Alamos Neutron Science Center. We describe quantitative comparisons between experimental data and HADES simulations using a copper step wedge. We conclude with a short discussion of future work planned for HADES.

  17. Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC) verification and validation plan. version 1.

    SciTech Connect

    Bartlett, Roscoe Ainsworth; Arguello, Jose Guadalupe, Jr.; Urbina, Angel; Bouchard, Julie F.; Edwards, Harold Carter; Freeze, Geoffrey A.; Knupp, Patrick Michael; Wang, Yifeng; Schultz, Peter Andrew; Howard, Robert; McCornack, Marjorie Turner

    2011-01-01

    The objective of the U.S. Department of Energy Office of Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC) is to provide an integrated suite of computational modeling and simulation (M&S) capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive-waste storage facility or disposal repository. To meet this objective, NEAMS Waste IPSC M&S capabilities will be applied to challenging spatial domains, temporal domains, multiphysics couplings, and multiscale couplings. A strategic verification and validation (V&V) goal is to establish evidence-based metrics for the level of confidence in M&S codes and capabilities. Because it is economically impractical to apply the maximum V&V rigor to each and every M&S capability, M&S capabilities will be ranked for their impact on the performance assessments of various components of the repository systems. Those M&S capabilities with greater impact will require a greater level of confidence and a correspondingly greater investment in V&V. This report includes five major components: (1) a background summary of the NEAMS Waste IPSC to emphasize M&S challenges; (2) the conceptual foundation for verification, validation, and confidence assessment of NEAMS Waste IPSC M&S capabilities; (3) specifications for the planned verification, validation, and confidence-assessment practices; (4) specifications for the planned evidence information management system; and (5) a path forward for the incremental implementation of this V&V plan.

  18. Computer Code for Nanostructure Simulation

    NASA Technical Reports Server (NTRS)

    Filikhin, Igor; Vlahovic, Branislav

    2009-01-01

    Due to their small size, nanostructures can have stress and thermal gradients that are larger than any macroscopic analogue. These gradients can lead to specific regions that are susceptible to failure via processes such as plastic deformation by dislocation emission, chemical debonding, and interfacial alloying. A program has been developed that rigorously simulates and predicts optoelectronic properties of nanostructures of virtually any geometrical complexity and material composition. It can be used in simulations of energy level structure, wave functions, density of states of spatially configured phonon-coupled electrons, excitons in quantum dots, quantum rings, quantum ring complexes, and more. The code can be used to calculate stress distributions and thermal transport properties for a variety of nanostructures and interfaces, transport and scattering at nanoscale interfaces and surfaces under various stress states, and alloy compositional gradients. The code allows users to perform modeling of charge transport processes through quantum-dot (QD) arrays as functions of inter-dot distance, array order versus disorder, QD orientation, shape, size, and chemical composition for applications in photovoltaics and physical properties of QD-based biochemical sensors. The code can be used to study the hot exciton formation/relaxation dynamics in arrays of QDs of different shapes and sizes at different temperatures. It also can be used to understand the relationships among the deposition parameters and inherent stresses, strain deformation, heat flow, and failure of nanostructures.

  19. Advanced Code for Photocathode Design

    SciTech Connect

    Ives, Robert Lawrence; Jensen, Kevin; Montgomery, Eric; Bui, Thuc

    2015-12-15

    The Phase I activity demonstrated that PhotoQE could be upgraded and modified to allow input using a graphical user interface. Specific calls to platform-dependent (e.g. IMSL) functions were removed, and Fortran77 components were rewritten for Fortran95 compliance. The subroutines, specifically the common block structures and shared data parameters, were reworked to allow the GUI to update material parameter data, and the system was targeted for desktop personal computer operation. The new structure overcomes the previously rigid and unmodifiable library structure by implementing new materials-library data sets and moving the library values to external files. Material data may originate from published literature or experimental measurements. Further optimization and restructuring would allow custom and specific emission models for beam codes that rely on parameterized photoemission algorithms. These would be based on simplified and parametric representations updated and extended from previous versions (e.g., Modified Fowler-DuBridge, Modified Three-Step, etc.).

  20. Foundational development of an advanced nuclear reactor integrated safety code.

    SciTech Connect

    Clarno, Kevin; Lorber, Alfred Abraham; Pryor, Richard J.; Spotz, William F.; Schmidt, Rodney Cannon; Belcourt, Kenneth; Hooper, Russell Warren; Humphries, Larry LaRon

    2010-02-01

    This report describes the activities and results of a Sandia LDRD project whose objective was to develop and demonstrate foundational aspects of a next-generation nuclear reactor safety code that leverages advanced computational technology. The project scope was directed towards the systems-level modeling and simulation of an advanced, sodium cooled fast reactor, but the approach developed has a more general applicability. The major accomplishments of the LDRD are centered around the following two activities. (1) The development and testing of LIME, a Lightweight Integrating Multi-physics Environment for coupling codes that is designed to enable both 'legacy' and 'new' physics codes to be combined and strongly coupled using advanced nonlinear solution methods. (2) The development and initial demonstration of BRISC, a prototype next-generation nuclear reactor integrated safety code. BRISC leverages LIME to tightly couple the physics models in several different codes (written in a variety of languages) into one integrated package for simulating accident scenarios in a liquid sodium cooled 'burner' nuclear reactor. Other activities and accomplishments of the LDRD include (a) further development, application and demonstration of the 'non-linear elimination' strategy to enable physics codes that do not provide residuals to be incorporated into LIME, (b) significant extensions of the RIO CFD code capabilities, (c) complex 3D solid modeling and meshing of major fast reactor components and regions, and (d) an approach for multi-physics coupling across non-conformal mesh interfaces.

  1. NASA National Combustion Code Simulations

    NASA Technical Reports Server (NTRS)

    Iannetti, Anthony; Davoudzadeh, Farhad

    2001-01-01

    A systematic effort is in progress to further validate the National Combustion Code (NCC) that has been developed at NASA Glenn Research Center (GRC) for comprehensive modeling and simulation of aerospace combustion systems. The validation efforts include numerical simulation of the gas-phase combustor experiments conducted at the Center for Turbulence Research (CTR), Stanford University, followed by comparison and evaluation of the computed results against the experimental data. Presently, at GRC, a numerical model of the experimental gaseous combustor has been built to simulate the experimental configuration. The constructed numerical geometry includes the flow development sections for the air annulus and fuel pipe, the 24-channel air and fuel swirlers, hub, combustor, and tail pipe. Furthermore, a three-dimensional multi-block grid (1.6 million grid points, 3 levels of multi-grid) has been generated. Computational simulation of the gaseous combustor flow field operating on methane fuel has started. The computational domain includes the whole flow regime, starting from the fuel pipe and the air annulus, through the 12 air and 12 fuel channels, into the combustion region, and through the tail pipe.

  2. Advanced Modulation and Coding Technology Conference

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The objectives, approach, and status of all current LeRC-sponsored industry contracts and university grants are presented. The following topics are covered: (1) the LeRC Space Communications Program, and Advanced Modulation and Coding Projects; (2) the status of four contracts for development of proof-of-concept modems; (3) modulation and coding work done under three university grants, two small business innovation research contracts, and two demonstration model hardware development contracts; and (4) technology needs and opportunities for future missions.

  3. Advanced Imaging Optics Utilizing Wavefront Coding.

    SciTech Connect

    Scrymgeour, David; Boye, Robert; Adelsberger, Kathleen

    2015-06-01

    Image processing offers a potential to simplify an optical system by shifting some of the imaging burden from lenses to the more cost-effective electronics. Wavefront coding using a cubic phase plate combined with image processing can extend the system's depth of focus, reducing many of the focus-related aberrations as well as material-related chromatic aberrations. However, the optimal design process and physical limitations of wavefront coding systems with respect to first-order optical parameters and noise are not well documented. We examined the image quality of simulated and experimental wavefront-coded images before and after reconstruction in the presence of noise. Challenges in the implementation of cubic phase in an optical system are discussed. In particular, we found that limitations must be placed on system noise, aperture, field of view and bandwidth to develop a robust wavefront-coded system.
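    The effect of a cubic phase mask on the point spread function (PSF) can be sketched with a Fourier-optics toy model (the pupil sampling and mask strength below are illustrative assumptions, not the parameters studied in the report):

```python
import numpy as np

def cubic_phase_psf(n=128, alpha=20.0):
    """PSF of a circular pupil carrying a cubic phase mask
    phi = alpha * (x^3 + y^3), the wavefront-coding element discussed
    above. alpha = 0 recovers the ordinary diffraction-limited PSF."""
    x = np.linspace(-1.0, 1.0, n)
    X, Y = np.meshgrid(x, x)
    pupil = (X**2 + Y**2 <= 1.0).astype(float)        # circular aperture
    phase = alpha * (X**3 + Y**3)                     # cubic phase mask
    field = pupil * np.exp(1j * phase)
    # Far-field intensity is the squared magnitude of the pupil's FFT.
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
    return psf / psf.sum()                            # normalize total energy
```

    Comparing against alpha = 0 shows the characteristic trade: the coded PSF has a much lower, broader peak, which is what makes it nearly invariant to defocus but requires digital reconstruction afterwards.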

  4. ADVANCED ELECTRIC AND MAGNETIC MATERIAL MODELS FOR FDTD ELECTROMAGNETIC CODES

    SciTech Connect

    Poole, B R; Nelson, S D; Langdon, S

    2005-05-05

    The modeling of dielectric and magnetic materials in the time domain is required for pulsed power applications, pulsed induction accelerators, and advanced transmission lines. For example, most induction accelerator modules require the use of magnetic materials to provide adequate volt-seconds during the acceleration pulse. These models require hysteresis and saturation to simulate the saturation wavefront in a multipulse environment. In high voltage transmission line applications such as shock or soliton lines, the dielectric operates in a highly nonlinear regime, which requires nonlinear models. Simple 1-D models are developed for fast parameterization of transmission line structures. In the case of nonlinear dielectrics, a simple analytic model describing the permittivity in terms of electric field is used in a 3-D finite-difference time-domain (FDTD) code. In the case of magnetic materials, both rate-independent and rate-dependent Hodgdon magnetic material models have been implemented in 3-D FDTD codes and 1-D codes.
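    The "simple analytic model describing the permittivity in terms of electric field" can be illustrated with a 1-D FDTD sketch; the particular form eps(E) = eps_r + chi*E^2, the normalized units, and the soft source are illustrative assumptions, not the model used in the report:

```python
import numpy as np

def fdtd_1d_nonlinear(nsteps=200, nz=400, eps_r=2.0, chi=0.05):
    """1-D FDTD sketch with a field-dependent relative permittivity
    eps(E) = eps_r + chi*E^2 -- an illustrative analytic stand-in for the
    nonlinear-dielectric idea above. Normalized units (c = dz = 1)."""
    dz = 1.0
    dt = 0.5 * dz                      # Courant-stable for eps >= 1
    Ex = np.zeros(nz)
    Hy = np.zeros(nz)
    for n in range(nsteps):
        Hy[:-1] += dt / dz * (Ex[1:] - Ex[:-1])       # magnetic update
        eps = eps_r + chi * Ex**2                     # local nonlinear permittivity
        Ex[1:] += dt / (dz * eps[1:]) * (Hy[1:] - Hy[:-1])
        Ex[nz // 4] += np.exp(-((n - 30) / 10.0)**2)  # soft Gaussian source
    return Ex
```

    Because eps only grows with |E| here (for chi > 0), the local wave speed only decreases, so the Courant condition chosen for the linear case remains sufficient for the nonlinear run.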

  5. The stellar atmosphere simulation code Bifrost. Code description and validation

    NASA Astrophysics Data System (ADS)

    Gudiksen, B. V.; Carlsson, M.; Hansteen, V. H.; Hayek, W.; Leenaarts, J.; Martínez-Sykora, J.

    2011-07-01

    Context: Numerical simulations of stellar convection and photospheres have been developed to the point where detailed shapes of observed spectral lines can be explained. Stellar atmospheres are very complex, and very different physical regimes are present in the convection zone, photosphere, chromosphere, transition region and corona. To understand the details of the atmosphere it is necessary to simulate the whole atmosphere, since the different layers interact strongly. These physical regimes are very diverse, and it takes a highly efficient massively parallel numerical code to solve the associated equations. Aims: The design, implementation and validation of the massively parallel numerical code Bifrost for simulating stellar atmospheres from the convection zone to the corona. Methods: The code is subjected to a number of validation tests, among them the Sod shock tube test, the Orszag-Tang colliding shock test, boundary condition tests and tests of how the code treats magnetic field advection, chromospheric radiation, radiative transfer in an isothermal scattering atmosphere, hydrogen ionization and thermal conduction. Results: Bifrost completes the tests with good results and shows near-linear efficiency scaling to thousands of computing cores.

  6. Development of Design Technology on Thermal-Hydraulic Performance in Tight-Lattice Rod Bundle: IV Large Paralleled Simulation by the Advanced Two-fluid Model Code

    NASA Astrophysics Data System (ADS)

    Misawa, Takeharu; Yoshida, Hiroyuki; Akimoto, Hajime

    At the Japan Atomic Energy Agency (JAEA), the Innovative Water Reactor for Flexible Fuel Cycle (FLWR) has been under development. The thermal design of the FLWR requires an analytical method to predict its boiling transition. JAEA has been developing the three-dimensional two-fluid model analysis code ACE-3D, which adopts a boundary-fitted coordinate system to simulate flow in complex-shaped channels. In this paper, as part of extending ACE-3D to rod bundle analysis, the parallelization of ACE-3D and its assessment are described. Analysis of a large-scale domain such as a rod bundle is computationally expensive even with a two-fluid model, and exceeds the memory available to a single CPU. Parallelization was therefore introduced to ACE-3D to distribute the data for large-scale domains over many CPUs, and it was confirmed that domains such as rod bundles can be analyzed while maintaining parallel performance even on large numbers of CPUs. ACE-3D adopts two-phase flow models, some of which depend on channel geometry. Analyses of domains simulating an individual subchannel and a 37-rod bundle were therefore performed and compared with experiments. The results of both analyses agree qualitatively with past experimental results.

  7. Advanced Vadose Zone Simulations Using TOUGH

    SciTech Connect

    Finsterle, S.; Doughty, C.; Kowalsky, M.B.; Moridis, G.J.; Pan,L.; Xu, T.; Zhang, Y.; Pruess, K.

    2007-02-01

    The vadose zone can be characterized as a complex subsurface system in which intricate physical and biogeochemical processes occur in response to a variety of natural forcings and human activities. This makes it difficult to describe, understand, and predict the behavior of this specific subsurface system. The TOUGH nonisothermal multiphase flow simulators are well-suited to perform advanced vadose zone studies. The conceptual models underlying the TOUGH simulators are capable of representing features specific to the vadose zone, and of addressing a variety of coupled phenomena. Moreover, the simulators are integrated into software tools that enable advanced data analysis, optimization, and system-level modeling. We discuss fundamental and computational challenges in simulating vadose zone processes, review recent advances in modeling such systems, and demonstrate some capabilities of the TOUGH suite of codes using illustrative examples.

  8. Advanced electromagnetic gun simulation

    NASA Astrophysics Data System (ADS)

    Brown, J. L.; George, E. B.; Lippert, J. R.; Balius, A. R.

    1986-11-01

    The architecture, software and application of a simulation system for evaluating electromagnetic gun (EMG) operability, maintainability, test data and performance tradeoffs are described. The system features a generic preprocessor designed to handle the large data rates necessary for EMG simulations. The preprocessor and postprocessor operate independently of the EMG simulation, which is viewed through windows by the user, who can then select the areas of the simulation desired. The simulation considers a homopolar generator, busbars, pulse-shaping coils, the barrel, switches, and prime movers. In particular, account is taken of barrel loading by the magnetic field, the Lorentz force and plasma pressure.
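
    The Lorentz-force acceleration at the heart of such a simulation reduces, in the simplest lumped-parameter view, to F = 0.5*L'*I^2, where L' is the rail inductance gradient. A sketch with illustrative round-number parameters (not values from the paper):

```python
def railgun_velocity(current=1.0e6, lprime=0.4e-6, mass=0.1, length=5.0):
    """Muzzle velocity from the lumped railgun model F = 0.5*L'*I^2,
    assuming constant current along a barrel of the given length.
    All parameter values are illustrative assumptions."""
    force = 0.5 * lprime * current**2      # N (Lorentz force on armature)
    accel = force / mass                   # m/s^2
    return (2.0 * accel * length) ** 0.5   # v^2 = 2*a*s from rest
```

With 1 MA through 0.4 uH/m rails and a 100 g projectile this gives a velocity of a few km/s, the regime EMG simulations target.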

  9. Edge-relevant plasma simulations with the continuum code COGENT

    NASA Astrophysics Data System (ADS)

    Dorf, M.; Dorr, M.; Ghosh, D.; Hittinger, J.; Rognlien, T.; Cohen, R.; Lee, W.; Schwartz, P.

    2016-10-01

    We describe recent advances in cross-separatrix and other edge-relevant plasma simulations with COGENT, a continuum gyro-kinetic code being developed by the Edge Simulation Laboratory (ESL) collaboration. The distinguishing feature of the COGENT code is its high-order finite-volume discretization methods, which employ arbitrary mapped multiblock grid technology (nearly field-aligned on blocks) to handle the complexity of tokamak divertor geometry with high accuracy. This paper discusses the 4D (axisymmetric) electrostatic version of the code, and the presented topics include: (a) initial simulations with kinetic electrons and development of reduced fluid models; (b) development and application of implicit-explicit (IMEX) time integration schemes; and (c) conservative modeling of drift-waves and the universal instability. Work performed for USDOE, at LLNL under contract DE-AC52-07NA27344 and at LBNL under contract DE-AC02-05CH11231.
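
    The IMEX idea mentioned in item (b) can be shown on a scalar model problem: treat the stiff term implicitly and the rest explicitly, so the step size is not limited by the stiff scale. A first-order sketch (the model equation and coefficients are assumptions for illustration, not COGENT's equations):

```python
import math

def imex_euler(y0=1.0, t_end=1.0, dt=0.01, lam=1000.0):
    """First-order IMEX step for y' = -lam*y + sin(t): the stiff linear
    term -lam*y is implicit, the non-stiff forcing sin(t) is explicit.
    Stable even though dt*lam = 10 here."""
    y, t = y0, 0.0
    while t < t_end - 1e-12:
        y = (y + dt * math.sin(t)) / (1.0 + dt * lam)
        t += dt
    return y
```

A fully explicit Euler step would be unstable at this dt (amplification factor |1 - dt*lam| = 9), while the IMEX update tracks the quasi-steady solution sin(t)/lam.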

  10. Advanced coding and modulation schemes for TDRSS

    NASA Technical Reports Server (NTRS)

    Harrell, Linda; Kaplan, Ted; Berman, Ted; Chang, Susan

    1993-01-01

    This paper describes the performance of the Ungerboeck and pragmatic 8-Phase Shift Key (PSK) Trellis Code Modulation (TCM) coding techniques with and without a (255,223) Reed-Solomon outer code as they are used for Tracking Data and Relay Satellite System (TDRSS) S-Band and Ku-Band return services. The performance of these codes at high data rates is compared to uncoded Quadrature PSK (QPSK) and rate 1/2 convolutionally coded QPSK in the presence of Radio Frequency Interference (RFI), self-interference, and hardware distortions. This paper shows that the outer Reed-Solomon code is necessary to achieve a 10(exp -5) Bit Error Rate (BER) with an acceptable level of degradation in the presence of RFI. This paper also shows that the TCM codes with or without the Reed-Solomon outer code do not perform well in the presence of self-interference. In fact, the uncoded QPSK signal performs better than the TCM coded signal in the self-interference situation considered in this analysis. Finally, this paper shows that the E(sub b)/N(sub 0) degradation due to TDRSS hardware distortions is approximately 1.3 dB with a TCM coded signal or a rate 1/2 convolutionally coded QPSK signal and is 3.2 dB with an uncoded QPSK signal.
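
    The uncoded QPSK baseline used for comparison above has a closed-form AWGN bit error rate, BER = 0.5*erfc(sqrt(Eb/N0)); the following sketch evaluates that ideal curve (RFI, self-interference and hardware distortions, the paper's subject, would degrade it):

```python
import math

def qpsk_ber(ebn0_db):
    """Theoretical uncoded QPSK (or BPSK) bit error rate on an ideal
    AWGN channel: BER = 0.5*erfc(sqrt(Eb/N0))."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)   # dB -> linear
    return 0.5 * math.erfc(math.sqrt(ebn0))
```

This reproduces the familiar benchmark that uncoded QPSK needs roughly 9.6 dB of Eb/N0 to reach the 10^-5 BER target discussed in the paper.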

  11. Advanced coding and modulation schemes for TDRSS

    NASA Astrophysics Data System (ADS)

    Harrell, Linda; Kaplan, Ted; Berman, Ted; Chang, Susan

    1993-11-01

    This paper describes the performance of the Ungerboeck and pragmatic 8-Phase Shift Key (PSK) Trellis Code Modulation (TCM) coding techniques with and without a (255,223) Reed-Solomon outer code as they are used for Tracking Data and Relay Satellite System (TDRSS) S-Band and Ku-Band return services. The performance of these codes at high data rates is compared to uncoded Quadrature PSK (QPSK) and rate 1/2 convolutionally coded QPSK in the presence of Radio Frequency Interference (RFI), self-interference, and hardware distortions. This paper shows that the outer Reed-Solomon code is necessary to achieve a 10(exp -5) Bit Error Rate (BER) with an acceptable level of degradation in the presence of RFI. This paper also shows that the TCM codes with or without the Reed-Solomon outer code do not perform well in the presence of self-interference. In fact, the uncoded QPSK signal performs better than the TCM coded signal in the self-interference situation considered in this analysis. Finally, this paper shows that the E(sub b)/N(sub 0) degradation due to TDRSS hardware distortions is approximately 1.3 dB with a TCM coded signal or a rate 1/2 convolutionally coded QPSK signal and is 3.2 dB with an uncoded QPSK signal.

  12. Nuclear Energy -- Knowledge Base for Advanced Modeling and Simulation (NE-KAMS) Code Verification and Validation Data Standards and Requirements: Fluid Dynamics Version 1.0

    SciTech Connect

    Greg Weirs; Hyung Lee

    2011-09-01

    V&V and UQ are the primary means to assess the accuracy and reliability of M&S and, hence, to establish confidence in M&S. Though other industries are establishing standards and requirements for the performance of V&V and UQ, at present, the nuclear industry has not established such standards or requirements. However, the nuclear industry is beginning to recognize that such standards are needed and that the resources needed to support V&V and UQ will be very significant. In fact, no single organization has sufficient resources or expertise required to organize, conduct and maintain a comprehensive V&V and UQ program. What is needed is a systematic and standardized approach to establish and provide V&V and UQ resources at a national or even international level, with a consortium of partners from government, academia and industry. Specifically, what is needed is a structured and cost-effective knowledge base that collects, evaluates and stores verification and validation data, and shows how it can be used to perform V&V and UQ, leveraging collaboration and sharing of resources to support existing engineering and licensing procedures as well as science-based V&V and UQ processes. The Nuclear Energy Knowledge base for Advanced Modeling and Simulation (NE-KAMS) is being developed at the Idaho National Laboratory in conjunction with Bettis Laboratory, Sandia National Laboratories, Argonne National Laboratory, Utah State University and others with the objective of establishing a comprehensive and web-accessible knowledge base to provide V&V and UQ resources for M&S for nuclear reactor design, analysis and licensing. The knowledge base will serve as an important resource for technical exchange and collaboration that will enable credible and reliable computational models and simulations for application to nuclear power. NE-KAMS will serve as a valuable resource for the nuclear industry, academia, the national laboratories, the U.S. Nuclear Regulatory Commission (NRC) and

  13. A MULTIPURPOSE COHERENT INSTABILITY SIMULATION CODE

    SciTech Connect

    Blaskiewicz, M.

    2007-06-25

    A multipurpose coherent instability simulation code has been written, documented, and released for use. TRANFT (tran-eff-tee) uses fast Fourier transforms to model transverse wakefields, transverse detuning wakes and longitudinal wakefields in a computationally efficient way. Dual harmonic RF allows for the study of enhanced synchrotron frequency spread. When coupled with chromaticity, the theoretically challenging but highly practical post head-tail regime is open to study. Detuning wakes allow for transverse space charge forces in low energy hadron beams, and a switch allowing for radiation damping makes the code useful for electrons.
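
    The FFT-based wakefield modeling mentioned above amounts to a fast causal convolution of the bunch line density with a wake function. A generic sketch of that technique (not TRANFT's source; the zero-padding avoids circular wrap-around):

```python
import numpy as np

def wake_kick(profile, wake, dz):
    """Energy kick along a bunch: causal convolution of line density
    with a wake function, computed via FFT in O(n log n)."""
    n = len(profile)
    P = np.fft.rfft(profile, 2 * n)   # zero-pad to length 2n
    W = np.fft.rfft(wake, 2 * n)
    return np.fft.irfft(P * W)[:n] * dz
```

A delta-function profile returns the wake itself (scaled by dz), which is a convenient unit check.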

  14. Maestro and Castro: Simulation Codes for Astrophysical Flows

    NASA Astrophysics Data System (ADS)

    Zingale, Michael; Almgren, Ann; Beckner, Vince; Bell, John; Friesen, Brian; Jacobs, Adam; Katz, Maximilian P.; Malone, Christopher; Nonaka, Andrew; Zhang, Weiqun

    2017-01-01

    Stellar explosions are multiphysics problems—modeling them requires the coordinated input of gravity solvers, reaction networks, radiation transport, and hydrodynamics together with microphysics recipes to describe the physics of matter under extreme conditions. Furthermore, these models involve following a wide range of spatial and temporal scales, which puts tough demands on simulation codes. We developed the codes Maestro and Castro to meet the computational challenges of these problems. Maestro uses a low Mach number formulation of the hydrodynamics to efficiently model convection. Castro solves the fully compressible radiation hydrodynamics equations to capture the explosive phases of stellar phenomena. Both codes are built upon the BoxLib adaptive mesh refinement library, which prepares them for next-generation exascale computers. Common microphysics shared between the codes allows us to transfer a problem from the low Mach number regime in Maestro to the explosive regime in Castro. Importantly, both codes are freely available (https://github.com/BoxLib-Codes). We will describe the design of the codes and some of their science applications, as well as future development directions. Support for development was provided by NSF award AST-1211563 and DOE/Office of Nuclear Physics grant DE-FG02-87ER40317 to Stony Brook and by the Applied Mathematics Program of the DOE Office of Advanced Scientific Computing Research under US DOE contract DE-AC02-05CH11231 to LBNL.

  15. Automatic differentiation of advanced CFD codes for multidisciplinary design

    SciTech Connect

    Bischof, C.; Corliss, G.; Griewank, A.; Green, L.; Haigler, K.; Newman, P.

    1992-12-31

    Automated multidisciplinary design of aircraft and other flight vehicles requires the optimization of complex performance objectives with respect to a number of design parameters and constraints. The effect of these independent design variables on the system performance criteria can be quantified in terms of sensitivity derivatives which must be calculated and propagated by the individual discipline simulation codes. Typical advanced CFD analysis codes do not provide such derivatives as part of a flow solution; these derivatives are very expensive to obtain by divided (finite) differences from perturbed solutions. It is shown here that sensitivity derivatives can be obtained accurately and efficiently using the ADIFOR source translator for automatic differentiation. In particular, it is demonstrated that the 3-D, thin-layer Navier-Stokes, multigrid flow solver called TLNS3D is amenable to automatic differentiation in the forward mode even with its implicit iterative solution algorithm and complex turbulence modeling. It is significant that using computational differentiation, consistent discrete nongeometric sensitivity derivatives have been obtained from an aerodynamic 3-D CFD code in a relatively short time, e.g. O(man-week) not O(man-year).
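
    Forward-mode automatic differentiation, which tools like ADIFOR apply at the Fortran source level, can be sketched with a minimal dual-number class: each value carries its derivative, and the chain rule propagates through every operation exactly (no finite-difference truncation error). This toy supports only + and *:

```python
class Dual:
    """Minimal forward-mode AD: (val, dot) pairs propagated by the
    chain rule. Illustrative sketch, not ADIFOR's mechanism in detail."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def f(x):
    return x * x * x + 2.0 * x   # f(x) = x^3 + 2x, so f'(x) = 3x^2 + 2

y = f(Dual(2.0, 1.0))            # seed dx/dx = 1 at x = 2
```

Here `y.val` is f(2) = 12 and `y.dot` is the exact derivative f'(2) = 14, obtained in one pass, which is the "accurate and efficient" property the abstract contrasts with divided differences.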

  16. Automatic differentiation of advanced CFD codes for multidisciplinary design

    SciTech Connect

    Bischof, C.; Corliss, G.; Griewank, A.; Green, L.; Haigler, K.; Newman, P. (Langley Research Center)

    1992-01-01

    Automated multidisciplinary design of aircraft and other flight vehicles requires the optimization of complex performance objectives with respect to a number of design parameters and constraints. The effect of these independent design variables on the system performance criteria can be quantified in terms of sensitivity derivatives which must be calculated and propagated by the individual discipline simulation codes. Typical advanced CFD analysis codes do not provide such derivatives as part of a flow solution; these derivatives are very expensive to obtain by divided (finite) differences from perturbed solutions. It is shown here that sensitivity derivatives can be obtained accurately and efficiently using the ADIFOR source translator for automatic differentiation. In particular, it is demonstrated that the 3-D, thin-layer Navier-Stokes, multigrid flow solver called TLNS3D is amenable to automatic differentiation in the forward mode even with its implicit iterative solution algorithm and complex turbulence modeling. It is significant that using computational differentiation, consistent discrete nongeometric sensitivity derivatives have been obtained from an aerodynamic 3-D CFD code in a relatively short time, e.g. O(man-week) not O(man-year).

  17. Transferring ecosystem simulation codes to supercomputers

    NASA Technical Reports Server (NTRS)

    Skiles, J. W.; Schulbach, C. H.

    1995-01-01

    Many ecosystem simulation computer codes have been developed in the last twenty-five years. This development took place initially on mainframe computers, then minicomputers, and more recently, on microcomputers and workstations. Supercomputing platforms (both parallel and distributed systems) have been largely unused, however, because of the perceived difficulty in accessing and using the machines. Also, significant differences in the system architectures of sequential, scalar computers and parallel and/or vector supercomputers must be considered. We have transferred a grassland simulation model (developed on a VAX) to a Cray Y-MP/C90. We describe porting the model to the Cray and the changes we made to exploit the parallelism in the application and improve code execution. The Cray executed the model 30 times faster than the VAX and 10 times faster than a Unix workstation. We achieved an additional speedup of 30 percent by using the compiler's vectorization and in-lining capabilities. The code runs at only about 5 percent of the Cray's peak speed because it ineffectively uses the vector and parallel processing capabilities of the Cray. We expect that by restructuring the code, it could execute an additional six to ten times faster.
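
    The kind of restructuring that lets a vector machine (or a modern SIMD compiler) run such a model faster is replacing per-cell scalar loops with whole-array operations. A schematic example with a hypothetical growth update (the update rule is an illustration, not the grassland model's equations), shown here in NumPy:

```python
import numpy as np

def grow_loop(biomass, rate, dt):
    """Scalar loop over grid cells, the style a sequential VAX code uses."""
    out = biomass.copy()
    for i in range(len(out)):
        out[i] += rate[i] * out[i] * dt
    return out

def grow_vec(biomass, rate, dt):
    """Identical update expressed as one vectorized array operation."""
    return biomass + rate * biomass * dt
```

Both produce the same result; the vectorized form is what the Cray's vector units (and today's compilers) can execute at full throughput.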

  18. ALEGRA -- code validation: Experiments and simulations

    SciTech Connect

    Chhabildas, L.C.; Konrad, C.H.; Mosher, D.A.; Reinhart, W.D.; Duggins, B.D.; Rodeman, R.; Trucano, T.G.; Summers, R.M.; Peery, J.S.

    1998-03-16

    In this study, the authors are providing an experimental test bed for validating features of the ALEGRA code over a broad range of strain rates with overlapping diagnostics that encompass the multiple responses. A unique feature of the Arbitrary Lagrangian Eulerian Grid for Research Applications (ALEGRA) code is that it allows simultaneous computational treatment, within one code, of a wide range of strain rates varying from hydrodynamic to structural conditions. This range encompasses strain rates characteristic of shock-wave propagation (10^7/s) and those characteristic of structural response (10^2/s). Most previous code validation experimental studies, however, have been restricted to simulating or investigating a single strain-rate regime. What is new and different in this investigation is that the authors have performed well-instrumented experiments which capture features relevant to both hydrodynamic and structural response in a single experiment. Aluminum was chosen for use in this study because it is a well characterized material--its EOS and constitutive material properties are well defined over a wide range of loading rates. The current experiments span strain rate regimes of over 10^7/s to less than 10^2/s in a single experiment. The input conditions are extremely well defined. Velocity interferometers are used to record the high strain-rate response, while low strain-rate data were collected using strain gauges.

  19. Advanced concepts flight simulation facility.

    PubMed

    Chappell, S L; Sexton, G A

    1986-12-01

    The cockpit environment is changing rapidly. New technology allows airborne computerised information, flight automation and data transfer with the ground. By 1995, not only will the pilot's task have changed, but also the tools for doing that task. To provide knowledge and direction for these changes, the National Aeronautics and Space Administration (NASA) and the Lockheed-Georgia Company have completed three identical Advanced Concepts Flight Simulation Facilities. Many advanced features have been incorporated into the simulators, e.g., cathode ray tube (CRT) displays of flight and systems information operated via touch-screen or voice, print-outs of clearances, cockpit traffic displays, and current databases containing navigational charts, weather and flight plan information, as well as fuel-efficient autopilot control from take-off to touchdown. More importantly, this cockpit is a versatile test bed for studying displays, controls, procedures and crew management in a full-mission context. The facility also has an air traffic control simulation, with radio and data communications, and an outside visual scene with variable weather conditions. These provide a veridical flight environment in which to accurately evaluate advanced concepts in flight stations.

  20. Progress in Advanced Spray Combustion Code Integration

    NASA Technical Reports Server (NTRS)

    Liang, Pak-Yan

    1993-01-01

    A multiyear project to assemble a robust, multiphase spray combustion code is now underway and gradually building up to full speed. The overall effort involves several university and government research teams as well as Rocketdyne. The first part of this paper will give an overview of the respective roles of the different participants involved, the master strategy, the evolutionary milestones, and an assessment of the state of the art of various key components. The second half of this paper will highlight the progress made to date in extending the baseline Navier-Stokes solver to handle multiphase, multispecies, chemically reactive sub- to supersonic flows. The major hurdles to overcome in order to achieve significant speedups are delineated, and the approaches to overcoming them will be discussed.

  1. Time parallelization of advanced operation scenario simulations of ITER plasma

    SciTech Connect

    Samaddar, D.; Casper, T. A.; Kim, S. H.; Berry, Lee A; Elwasif, Wael R; Batchelor, Donald B; Houlberg, Wayne A

    2013-01-01

    This work demonstrates that simulations of advanced burning plasma operation scenarios can be successfully parallelized in time using the parareal algorithm. CORSICA, an advanced operation scenario code for tokamak plasmas, is used as a test case. This is a unique application, since the parareal algorithm has so far been applied to much simpler systems, with the exception of turbulence. In the present application, a computational gain of an order of magnitude has been achieved, which is extremely promising. A successful implementation of the parareal algorithm in codes like CORSICA ushers in the possibility of time-efficient simulations of ITER plasmas.
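
    The parareal algorithm itself is compact: a cheap coarse propagator G sweeps serially over time slices, an expensive fine propagator F runs on all slices concurrently, and the correction y[i+1] = G_new + F_old - G_old is iterated. A sketch on the model problem y' = -y (the propagators and step counts are illustrative, not CORSICA's):

```python
import math

def fine(y, t0, t1, nsub=100):
    """Fine propagator: many small explicit Euler steps (the costly solve)."""
    dt = (t1 - t0) / nsub
    for _ in range(nsub):
        y += dt * (-y)
    return y

def coarse(y, t0, t1):
    """Coarse propagator: a single explicit Euler step (the cheap solve)."""
    return y + (t1 - t0) * (-y)

def parareal(y0=1.0, t_end=1.0, nslices=10, niter=5):
    ts = [t_end * i / nslices for i in range(nslices + 1)]
    y = [y0]                                  # initial serial coarse sweep
    for i in range(nslices):
        y.append(coarse(y[i], ts[i], ts[i + 1]))
    for _ in range(niter):
        # the fine solves below are independent -> parallel in time
        f_vals = [fine(y[i], ts[i], ts[i + 1]) for i in range(nslices)]
        g_old = [coarse(y[i], ts[i], ts[i + 1]) for i in range(nslices)]
        y_new = [y0]
        for i in range(nslices):              # cheap serial correction sweep
            g_new = coarse(y_new[i], ts[i], ts[i + 1])
            y_new.append(g_new + f_vals[i] - g_old[i])
        y = y_new
    return y[-1]
```

After k iterations the first k slices match the serial fine solution exactly; the speedup comes from running the fine solves for all slices at once.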

  2. Low-temperature plasma simulations with the LSP PIC code

    NASA Astrophysics Data System (ADS)

    Carlsson, Johan; Khrabrov, Alex; Kaganovich, Igor; Keating, David; Selezneva, Svetlana; Sommerer, Timothy

    2014-10-01

    The LSP (Large-Scale Plasma) PIC-MCC code has been used to simulate several low-temperature plasma configurations, including a gas switch for high-power AC/DC conversion, a glow discharge and a Hall thruster. Simulation results will be presented with an emphasis on code comparison and validation against experiment. High-voltage, direct-current (HVDC) power transmission is becoming more common as it can reduce construction costs and power losses. Solid-state power-electronics devices are presently used, but it has been proposed that gas switches could become a compact, less costly, alternative. A gas-switch conversion device would be based on a glow discharge, with a magnetically insulated cold cathode. Its operation is similar to that of a sputtering magnetron, but with much higher pressure (0.1 to 0.3 Torr) in order to achieve high current density. We have performed 1D (axial) and 2D (axial/radial) simulations of such a gas switch using LSP. The 1D results were compared with results from the EDIPIC code. To test and compare the collision models used by the LSP and EDIPIC codes in more detail, a validation exercise was performed for the cathode fall of a glow discharge. We will also present some 2D (radial/azimuthal) LSP simulations of a Hall thruster. The information, data, or work presented herein was funded in part by the Advanced Research Projects Agency-Energy (ARPA-E), U.S. Department of Energy, under Award Number DE-AR0000298.
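
    The particle advance at the core of any PIC-MCC code like LSP is typically the Boris rotation, which conserves kinetic energy exactly in a pure magnetic field. A generic sketch of one velocity step (standard textbook algorithm, not LSP's actual source):

```python
import numpy as np

def boris_push(v, E, B, qm, dt):
    """One Boris velocity step: half electric kick, magnetic rotation,
    half electric kick. qm is the charge-to-mass ratio."""
    v_minus = v + 0.5 * qm * dt * E           # first half E-kick
    t = 0.5 * qm * dt * B                     # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)  # rotation, step 1
    v_plus = v_minus + np.cross(v_prime, s)   # rotation, step 2
    return v_plus + 0.5 * qm * dt * E         # second half E-kick
```

With E = 0 the particle gyrates without gaining or losing speed, which is why the scheme is the workhorse for magnetized configurations like the magnetically insulated gas switch and Hall thruster described above.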

  3. Advances in space radiation shielding codes

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Tripathi, Ram K.; Qualls, Garry D.; Cucinotta, Francis A.; Prael, Richard E.; Norbury, John W.; Heinbockel, John H.; Tweed, John; De Angelis, Giovanni

    2002-01-01

    Early space radiation shield code development relied on Monte Carlo methods and made important contributions to the space program. Monte Carlo methods, however, were often restricted to one-dimensional problems, leading to imperfect representation of appropriate boundary conditions. Even so, the computational requirements were intensive, and shield evaluation was performed near the end of the design process. Resolving shielding issues usually had a negative impact on the design. Improved spacecraft shield design requires early entry of radiation constraints into the design process to maximize performance and minimize costs. As a result, we have been investigating high-speed computational procedures to allow shield analysis from the preliminary concept to the final design. For the last few decades, we have pursued deterministic solutions of the Boltzmann equation, allowing field mapping within the International Space Station (ISS) in tens of minutes using standard Finite Element Method (FEM) geometry common to engineering design methods. A single ray trace in such geometry requires 14 milliseconds, which limits the application of Monte Carlo methods to such engineering models. A potential means of improving the Monte Carlo efficiency in coupling to spacecraft geometry is given.

  4. Simulation Toolkit for Renewable Energy Advanced Materials Modeling

    SciTech Connect

    Sides, Scott; Kemper, Travis; Larsen, Ross; Graf, Peter

    2013-11-13

    STREAMM is a collection of python classes and scripts that enables and eases the setup of input files and configuration files for simulations of advanced energy materials. The core STREAMM python classes provide a general framework for storing, manipulating and analyzing atomic/molecular coordinates to be used in quantum chemistry and classical molecular dynamics simulations of soft materials systems. The design focuses on enabling the interoperability of materials simulation codes such as GROMACS, LAMMPS and Gaussian.
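
    The core idea of such a framework, a neutral in-memory container for atomic structures that can be exported to each simulation code's input format, can be sketched as follows (class and method names here are illustrative, not STREAMM's actual API):

```python
class StructureContainer:
    """Toy structure container: store atoms once, export to code-specific
    formats (here, the simple xyz format) as needed."""
    def __init__(self):
        self.symbols, self.positions = [], []

    def add_atom(self, symbol, xyz):
        self.symbols.append(symbol)
        self.positions.append(tuple(float(c) for c in xyz))

    def to_xyz(self, comment="generated structure"):
        """Serialize to xyz: atom count, comment, then one atom per line."""
        lines = [str(len(self.symbols)), comment]
        for s, (x, y, z) in zip(self.symbols, self.positions):
            lines.append(f"{s} {x:.6f} {y:.6f} {z:.6f}")
        return "\n".join(lines)
```

Additional `to_lammps()` or `to_gaussian()` methods on the same container would provide the interoperability between GROMACS, LAMMPS and Gaussian that the abstract describes.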

  5. Export Controls on Astrophysical Simulation Codes

    NASA Astrophysics Data System (ADS)

    Whalen, Daniel

    2015-01-01

    Amidst concerns about nuclear proliferation, the US government has established guidelines on what types of astrophysical simulation codes can be run and disseminated on open systems. I will review the basic export controls that have been enacted by the federal government to slow the pace of software acquisition by potential adversaries who seek to develop weapons of mass destruction. The good news is that it is relatively simple to avoid ITAR issues with the Department of Energy if one remembers a few simple rules. I will discuss in particular what types of algorithm development can get researchers into trouble if they are not aware of the regulations and how to avoid these pitfalls while doing world class science.

  6. Advanced simulation of digital filters

    NASA Astrophysics Data System (ADS)

    Doyle, G. S.

    1980-09-01

    An Advanced Simulation of Digital Filters has been implemented on the IBM 360/67 computer utilizing Tektronix hardware and software. The program package is appropriate for use by persons beginning their study of digital signal processing or for filter analysis. The ASDF programs provide the user with an interactive method by which filter pole and zero locations can be manipulated. Graphical output on both the Tektronix graphics screen and the Versatec plotter are provided to observe the effects of pole-zero movement.
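
    The quantity such a pole-zero tool plots is the magnitude of H(z) evaluated on the unit circle. A sketch of that computation (a generic reimplementation of the idea, not the ASDF code):

```python
import numpy as np

def freq_response(zeros, poles, gain, nfreq=512):
    """|H(e^{jw})| for H(z) = gain * prod(z - z_k) / prod(z - p_k),
    evaluated at nfreq points on the upper unit circle."""
    w = np.linspace(0.0, np.pi, nfreq)
    z = np.exp(1j * w)
    H = gain * np.ones_like(z)
    for zk in zeros:
        H *= (z - zk)
    for pk in poles:
        H /= (z - pk)
    return w, np.abs(H)
```

Moving a zero onto z = 1, for instance, immediately notches out DC, which is exactly the kind of pole-zero experiment the interactive program supports.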

  7. Advanced technology development for image gathering, coding, and processing

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.

    1990-01-01

    Three overlapping areas of research activities are presented: (1) Information theory and optimal filtering are extended to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing. (2) Focal-plane processing techniques and technology are developed to combine effectively image gathering with coding. The emphasis is on low-level vision processing akin to the retinal processing in human vision. (3) A breadboard adaptive image-coding system is being assembled. This system will be used to develop and evaluate a number of advanced image-coding technologies and techniques as well as research the concept of adaptive image coding.

  8. The ADVANCE Code of Conduct for collaborative vaccine studies.

    PubMed

    Kurz, Xavier; Bauchau, Vincent; Mahy, Patrick; Glismann, Steffen; van der Aa, Lieke Maria; Simondon, François

    2017-04-04

    Lessons learnt from the 2009 (H1N1) flu pandemic highlighted factors limiting the capacity to collect European data on vaccine exposure, safety and effectiveness, including lack of rapid access to available data sources or expertise, difficulties to establish efficient interactions between multiple parties, lack of confidence between private and public sectors, concerns about possible or actual conflicts of interest (or perceptions thereof) and inadequate funding mechanisms. The Innovative Medicines Initiative's Accelerated Development of VAccine benefit-risk Collaboration in Europe (ADVANCE) consortium was established to create an efficient and sustainable infrastructure for rapid and integrated monitoring of post-approval benefit-risk of vaccines, including a code of conduct and governance principles for collaborative studies. The development of the code of conduct was guided by three core and common values (best science, strengthening public health, transparency) and a review of existing guidance and relevant published articles. The ADVANCE Code of Conduct includes 45 recommendations in 10 topics (Scientific integrity, Scientific independence, Transparency, Conflicts of interest, Study protocol, Study report, Publication, Subject privacy, Sharing of study data, Research contract). Each topic includes a definition, a set of recommendations and a list of additional reading. The concept of the study team is introduced as a key component of the ADVANCE Code of Conduct with a core set of roles and responsibilities. It is hoped that adoption of the ADVANCE Code of Conduct by all partners involved in a study will facilitate and speed-up its initiation, design, conduct and reporting. Adoption of the ADVANCE Code of Conduct should be stated in the study protocol, study report and publications and journal editors are encouraged to use it as an indication that good principles of public health, science and transparency were followed throughout the study.

  9. The Modeling of Advanced BWR Fuel Designs with the NRC Fuel Depletion Codes PARCS/PATHS

    SciTech Connect

    Ward, Andrew; Downar, Thomas J.; Xu, Y.; March-Leuba, Jose A; Thurston, Carl; Hudson, Nathanael H.; Ireland, A.; Wysocki, A.

    2015-04-22

    The PATHS (PARCS Advanced Thermal Hydraulic Solver) code was developed at the University of Michigan in support of U.S. Nuclear Regulatory Commission research to solve the steady-state, two-phase, thermal-hydraulic equations for a boiling water reactor (BWR) and to provide thermal-hydraulic feedback for BWR depletion calculations with the neutronics code PARCS (Purdue Advanced Reactor Core Simulator). The simplified solution methodology, including a three-equation drift flux formulation and an optimized iteration scheme, yields very fast run times in comparison to conventional thermal-hydraulic systems codes used in the industry, while still retaining sufficient accuracy for applications such as BWR depletion calculations. Lastly, the capability to model advanced BWR fuel designs with part-length fuel rods and heterogeneous axial channel flow geometry has been implemented in PATHS, and the code has been validated against previously benchmarked advanced core simulators as well as BWR plant and experimental data. We describe the modifications to the codes and the results of the validation in this paper.
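
A three-equation drift-flux formulation closes the void fraction with an algebraic relation between flow quality and mass flux. As a hedged illustration (not the PATHS implementation; the distribution parameter `C0` and drift velocity `v_gj` below are placeholder values), the classic Zuber-Findlay form can be sketched as:

```python
def void_fraction(x, G, rho_l, rho_g, C0=1.13, v_gj=0.24):
    """Zuber-Findlay drift-flux relation for void fraction alpha, given
    flow quality x, mass flux G (kg/m^2/s), and phase densities (kg/m^3).
    C0 (distribution parameter) and v_gj (drift velocity, m/s) are
    illustrative placeholder values, not PATHS constitutive models."""
    return x / (C0 * (x + (1.0 - x) * rho_g / rho_l) + rho_g * v_gj / G)
```

With these placeholder parameters, a flow at roughly BWR-pressure densities and 10% quality yields a void fraction near 0.59, rising monotonically with quality.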

  10. The Modeling of Advanced BWR Fuel Designs with the NRC Fuel Depletion Codes PARCS/PATHS

    DOE PAGES

    Ward, Andrew; Downar, Thomas J.; Xu, Y.; ...

    2015-04-22

    The PATHS (PARCS Advanced Thermal Hydraulic Solver) code was developed at the University of Michigan in support of U.S. Nuclear Regulatory Commission research to solve the steady-state, two-phase, thermal-hydraulic equations for a boiling water reactor (BWR) and to provide thermal-hydraulic feedback for BWR depletion calculations with the neutronics code PARCS (Purdue Advanced Reactor Core Simulator). The simplified solution methodology, including a three-equation drift flux formulation and an optimized iteration scheme, yields very fast run times in comparison to conventional thermal-hydraulic systems codes used in the industry, while still retaining sufficient accuracy for applications such as BWR depletion calculations. Lastly, the capability to model advanced BWR fuel designs with part-length fuel rods and heterogeneous axial channel flow geometry has been implemented in PATHS, and the code has been validated against previously benchmarked advanced core simulators as well as BWR plant and experimental data. We describe the modifications to the codes and the results of the validation in this paper.

  11. Advances in Parallel Electromagnetic Codes for Accelerator Science and Development

    SciTech Connect

    Ko, Kwok; Candel, Arno; Ge, Lixin; Kabel, Andreas; Lee, Rich; Li, Zenghai; Ng, Cho; Rawat, Vineet; Schussman, Greg; Xiao, Liling; /SLAC

    2011-02-07

    Over a decade of concerted effort in code development for accelerator applications has resulted in a new set of electromagnetic codes which are based on higher-order finite elements for superior geometry fidelity and better solution accuracy. SLAC's ACE3P code suite is designed to harness the power of massively parallel computers to tackle large complex problems with the increased memory and solve them at greater speed. The US DOE supports the computational science R&D under the SciDAC project to improve the scalability of ACE3P, and provides the high performance computing resources needed for the applications. This paper summarizes the advances in the ACE3P set of codes, explains the capabilities of the modules, and presents results from selected applications covering a range of problems in accelerator science and development important to the Office of Science.

  12. Quantization and psychoacoustic model in audio coding in advanced audio coding

    NASA Astrophysics Data System (ADS)

    Brzuchalski, Grzegorz

    2011-10-01

    This paper presents a complete optimized architecture for Advanced Audio Coding quantization with Huffman coding. Psychoacoustic model theory is then presented, and several bit-allocation algorithms are described: the standard Two Loop Search and its modifications, the Genetic, Just Noticeable Level Difference, and Trellis-Based algorithms, and a modification of the latter, the Cascaded Trellis-Based Algorithm.
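
The standard Two Loop Search pairs an inner rate loop with an outer distortion loop. A simplified sketch of that structure (with a toy uniform quantizer and toy bit model standing in for AAC's nonlinear quantizer and Huffman tables; not the reference implementation) is:

```python
import numpy as np

def quantize(x, step):
    """Toy uniform quantizer standing in for AAC's nonlinear quantizer."""
    return np.round(x / step)

def bit_cost(q):
    """Toy bit count standing in for Huffman codebook table lookups."""
    return float(np.sum(2.0 * np.log2(1.0 + np.abs(q))))

def two_loop_search(spec, bands, thresholds, budget, max_outer=32):
    """Inner (rate) loop: coarsen the step until the bit budget is met.
    Outer (distortion) loop: amplify scale factor bands whose quantization
    noise exceeds the masking threshold, then repeat.
    `bands` must partition the spectrum contiguously, in order."""
    amp = np.ones(len(bands))
    step = 1.0
    for _ in range(max_outer):
        amped = np.concatenate([spec[b] * amp[i] for i, b in enumerate(bands)])
        while bit_cost(quantize(amped, step)) > budget:   # rate loop
            step *= 2 ** 0.25
        err = (amped - quantize(amped, step) * step) ** 2
        noisy = [i for i, b in enumerate(bands)
                 if np.sum(err[b]) / amp[i] ** 2 > thresholds[i]]
        if not noisy:
            break
        for i in noisy:                                   # distortion loop
            amp[i] *= 2 ** 0.25
    return step, amp
```

The returned global step and per-band amplifications play the roles of the global gain and scalefactors, respectively.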

  13. Communication Systems Simulator with Error Correcting Codes Using MATLAB

    ERIC Educational Resources Information Center

    Gomez, C.; Gonzalez, J. E.; Pardo, J. M.

    2003-01-01

    In this work, the characteristics of a simulator for channel coding techniques used in communication systems, are described. This software has been designed for engineering students in order to facilitate the understanding of how the error correcting codes work. To help students understand easily the concepts related to these kinds of codes, a…

  14. Recent advances to NEC (Numerical Electromagnetics Code): Applications and validation

    SciTech Connect

    Burke, G.J.

    1989-03-03

    Capabilities of the antenna modeling code NEC are reviewed and results are presented to illustrate typical applications. Recent developments are discussed that will improve accuracy in modeling electrically small antennas, stepped-radius wires and junctions of tightly coupled wires, and also a new capability for modeling insulated wires in air or earth is described. These advances will be included in a future release of NEC, while for now the results serve to illustrate limitations of the present code. NEC results are compared with independent analytical and numerical solutions and measurements to validate the model for wires near ground and for insulated wires. 41 refs., 26 figs., 1 tab.

  15. Software quality and process improvement in scientific simulation codes

    SciTech Connect

    Ambrosiano, J.; Webster, R.

    1997-11-01

    This report contains viewgraphs on the quest to develop better simulation code quality through process modeling and improvement. This study is based on the experience of the authors and interviews with ten subjects chosen from simulation code development teams at LANL. This study is descriptive rather than scientific.

  16. Overview of the Tusas Code for Simulation of Dendritic Solidification

    SciTech Connect

    Trainer, Amelia J.; Newman, Christopher Kyle; Francois, Marianne M.

    2016-01-07

    The aim of this project is to conduct a parametric investigation into the modeling of two dimensional dendrite solidification, using the phase field model. Specifically, we use the Tusas code, which is for coupled heat and phase-field simulation of dendritic solidification. Dendritic solidification, which may occur in the presence of an unstable solidification interface, results in treelike microstructures that often grow perpendicular to the rest of the growth front. The interface may become unstable if the enthalpy of the solid material is less than that of the liquid material, or if the solute is less soluble in solid than it is in liquid, potentially causing a partition [1]. A key motivation behind this research is that a broadened understanding of phase-field formulation and microstructural developments can be utilized for macroscopic simulations of phase change. This may be directly implemented as a part of the Telluride project at Los Alamos National Laboratory (LANL), through which a computational additive manufacturing simulation tool is being developed, ultimately to become part of the Advanced Simulation and Computing Program within the U.S. Department of Energy [2].
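
The phase-field idea described above can be illustrated with a minimal Allen-Cahn relaxation, in which a sharp interface between two phases smooths into a diffuse profile (a toy scalar model, not the coupled heat/phase-field system solved by Tusas):

```python
import numpy as np

def allen_cahn_step(phi, dt, dx, W=1.0):
    """One explicit step of a toy Allen-Cahn phase-field equation,
    d(phi)/dt = W^2 * phi_xx + phi - phi^3, with zero-gradient boundaries."""
    lap = np.empty_like(phi)
    lap[1:-1] = (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / dx**2
    lap[0] = (phi[1] - phi[0]) / dx**2
    lap[-1] = (phi[-2] - phi[-1]) / dx**2
    return phi + dt * (W**2 * lap + phi - phi**3)

# a sharp solid/liquid interface relaxes toward a smooth diffuse profile
x = np.linspace(-5.0, 5.0, 201)
phi = np.where(x < 0.0, -1.0, 1.0)
for _ in range(500):
    phi = allen_cahn_step(phi, dt=1e-3, dx=x[1] - x[0])
```

The explicit step is stable here because dt * W^2 / dx^2 = 0.4 is below the diffusive limit of 0.5.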

  17. User's manual: Subsonic/supersonic advanced panel pilot code

    NASA Technical Reports Server (NTRS)

    Moran, J.; Tinoco, E. N.; Johnson, F. T.

    1978-01-01

    Sufficient instructions for running the subsonic/supersonic advanced panel pilot code were developed. This software was developed as a vehicle for numerical experimentation and should not be construed to represent a finished production program. The pilot code is based on a higher order panel method using linearly varying source and quadratically varying doublet distributions for computing both linearized supersonic and subsonic flow over arbitrary wings and bodies. This user's manual contains complete input and output descriptions. A brief description of the method is given, as well as practical instructions for proper configuration modeling. Computed results are also included to demonstrate some of the capabilities of the pilot code. The computer program is written in FORTRAN IV for the SCOPE 3.4.4 operating system of the Ames CDC 7600 computer. The program uses overlay structure and thirteen disk files, and it requires approximately 132000 (octal) central memory words.

  18. On-line application of the PANTHER advanced nodal code

    SciTech Connect

    Hutt, P.K.; Knight, M.P.

    1992-01-01

    Over the last few years, Nuclear Electric has developed an integrated core performance code package for both light water reactors (LWRs) and advanced gas-cooled reactors (AGRs) that can perform a comprehensive range of calculations for fuel cycle design, safety analysis, and on-line operational support for such plants. The package consists of the following codes: WIMS for lattice physics, PANTHER whole reactor nodal flux and AGR thermal hydraulics, VIPRE for LWR thermal hydraulics, and ENIGMA for fuel performance. These codes are integrated within a UNIX-based interactive system called the Reactor Physics Workbench (RPW), which provides an interactive graphic user interface and quality assurance records/data management. The RPW can also control calculational sequences and data flows. The package has been designed to run both off-line and on-line accessing plant data through the RPW.

  19. The development of the fast-running simulation pressurized water reactor plant analyzer code (NUPAC-1)

    SciTech Connect

    Sasaki, K.; Terashita, N.; Ogino, T. (Central Research Lab.)

    1989-06-01

    This article discusses the pressurized water reactor plant analyzer code NUPAC-1, which has been developed for application to an operator support system or an advanced training simulator. The simulation code must produce reasonably accurate results as well as run in a fast mode to realize functions such as anomaly detection, estimation of unobservable plant internal states, and prediction of plant state trends. The NUPAC-1 code adopts fast computing methods, i.e., the table fitting method of the state variables, time-step control, and calculation control of heat transfer coefficients, in order to attain accuracy and fast-running capability.

  20. The TESS (Tandem Experiment Simulation Studies) computer code user's manual

    SciTech Connect

    Procassini, R.J. (Dept. of Nuclear Engineering); Cohen, B.I.

    1990-06-01

    TESS (Tandem Experiment Simulation Studies) is a one-dimensional, bounded particle-in-cell (PIC) simulation code designed to investigate the confinement and transport of plasma in a magnetic mirror device, including tandem mirror configurations. Mirror plasmas may be modeled in a system which includes an applied magnetic field and/or a self-consistent or applied electrostatic potential. The PIC code TESS is similar to the PIC code DIPSI (Direct Implicit Plasma Surface Interactions) which is designed to study plasma transport to and interaction with a solid surface. The codes TESS and DIPSI are direct descendants of the PIC code ES1 that was created by A. B. Langdon. This document provides the user with a brief description of the methods used in the code and a tutorial on the use of the code. 10 refs., 2 tabs.
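
The basic PIC cycle that ES1-descended codes such as TESS build on (deposit charge, solve for the field, gather, push) can be sketched for a 1D periodic electrostatic plasma. This is a generic sketch in normalized units with unit-weight particles; TESS itself is bounded and far more complete:

```python
import numpy as np

def pic_step(x, v, qm, L, ng, dt):
    """One cycle of a minimal 1D periodic electrostatic PIC step: CIC charge
    deposition over a neutralizing background, FFT Poisson solve of
    -phi'' = rho (normalized units), field gather, and a velocity/position push."""
    dx = L / ng
    g = x / dx
    i = np.floor(g).astype(int) % ng
    w = g - np.floor(g)                      # linear (CIC) weights
    rho = np.zeros(ng)
    np.add.at(rho, i, 1.0 - w)
    np.add.at(rho, (i + 1) % ng, w)
    rho = rho / dx
    rho -= rho.mean()                        # neutralizing background
    k = 2.0 * np.pi * np.fft.fftfreq(ng, d=dx)
    rhok = np.fft.fft(rho)
    phik = np.zeros(ng, dtype=complex)
    phik[1:] = rhok[1:] / k[1:] ** 2         # -phi'' = rho  ->  phi_k = rho_k / k^2
    phi = np.real(np.fft.ifft(phik))
    E = -np.gradient(phi, dx)                # E = -dphi/dx on the grid
    Ep = E[i] * (1.0 - w) + E[(i + 1) % ng] * w
    v = v + qm * Ep * dt                     # accelerate
    x = (x + v * dt) % L                     # move, periodic wrap
    return x, v
```

A bounded code like TESS replaces the periodic wrap and FFT solve with wall boundary conditions and a bounded field solve.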

  1. The Particle Accelerator Simulation Code PyORBIT

    SciTech Connect

    Gorlov, Timofey V; Holmes, Jeffrey A; Cousineau, Sarah M; Shishlo, Andrei P

    2015-01-01

    The particle accelerator simulation code PyORBIT is presented. The structure, implementation, history, parallel and simulation capabilities, and future development of the code are discussed. The PyORBIT code is a new implementation and extension of algorithms of the original ORBIT code that was developed for the Spallation Neutron Source accelerator at the Oak Ridge National Laboratory. The PyORBIT code has a two-level structure. The upper level uses the Python programming language to control the flow of intensive calculations performed by the lower-level code implemented in the C++ language. The parallel capabilities are based on MPI communications. PyORBIT is an open-source code accessible to the public through the Google Open Source Projects Hosting service.

  2. Neoclassical Simulation of Tokamak Plasmas using Continuum Gyrokinetic Code TEMPEST

    SciTech Connect

    Xu, X Q

    2007-11-09

    We present gyrokinetic neoclassical simulations of tokamak plasmas with self-consistent electric field for the first time using a fully nonlinear (full-f) continuum code TEMPEST in a circular geometry. A set of gyrokinetic equations are discretized on a five dimensional computational grid in phase space. The present implementation is a Method of Lines approach where the phase-space derivatives are discretized with finite differences and implicit backwards differencing formulas are used to advance the system in time. The fully nonlinear Boltzmann model is used for electrons. The neoclassical electric field is obtained by solving gyrokinetic Poisson equation with self-consistent poloidal variation. With our 4D ({psi}, {theta}, {epsilon}, {mu}) version of the TEMPEST code we compute radial particle and heat flux, the Geodesic-Acoustic Mode (GAM), and the development of neoclassical electric field, which we compare with neoclassical theory with a Lorentz collision model. The present work provides a numerical scheme and a new capability for self-consistently studying important aspects of neoclassical transport and rotations in toroidal magnetic fusion devices.
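
The Method of Lines with implicit backward differencing described above can be illustrated on a toy 1D advection problem: finite differences in space turn the PDE into an ODE system, and each implicit step is a linear solve. This is a sketch, not TEMPEST's 5D discretization:

```python
import numpy as np

def backward_euler_advection(f0, v, dx, dt, nsteps):
    """Method of Lines sketch for f_t + v f_x = 0 on a periodic grid:
    central differences in space, implicit backward Euler in time,
    i.e. one linear solve (I + dt*v*D) f^{n+1} = f^n per step."""
    n = len(f0)
    D = np.zeros((n, n))
    for j in range(n):
        D[j, (j + 1) % n] = 1.0 / (2.0 * dx)   # central-difference stencil
        D[j, (j - 1) % n] = -1.0 / (2.0 * dx)
    A = np.eye(n) + dt * v * D
    f = f0.copy()
    for _ in range(nsteps):
        f = np.linalg.solve(A, f)
    return f
```

Because the columns of the central-difference matrix sum to zero, the scheme conserves the discrete integral of f exactly, a property worth checking in any such solver.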

  3. Code qualification of structural materials for AFCI advanced recycling reactors.

    SciTech Connect

    Natesan, K.; Li, M.; Majumdar, S.; Nanstad, R.K.; Sham, T.-L.

    2012-05-31

    This report summarizes the further findings from the assessments of current status and future needs in code qualification and licensing of reference structural materials and new advanced alloys for advanced recycling reactors (ARRs) in support of Advanced Fuel Cycle Initiative (AFCI). The work is a combined effort between Argonne National Laboratory (ANL) and Oak Ridge National Laboratory (ORNL) with ANL as the technical lead, as part of Advanced Structural Materials Program for AFCI Reactor Campaign. The report is the second deliverable in FY08 (M505011401) under the work package 'Advanced Materials Code Qualification'. The overall objective of the Advanced Materials Code Qualification project is to evaluate key requirements for the ASME Code qualification and the Nuclear Regulatory Commission (NRC) approval of structural materials in support of the design and licensing of the ARR. Advanced materials are a critical element in the development of sodium reactor technologies. Enhanced materials performance not only improves safety margins and provides design flexibility, but also is essential for the economics of future advanced sodium reactors. Code qualification and licensing of advanced materials are prominent needs for developing and implementing advanced sodium reactor technologies. Nuclear structural component design in the U.S. must comply with the ASME Boiler and Pressure Vessel Code Section III (Rules for Construction of Nuclear Facility Components) and the NRC grants the operational license. As the ARR will operate at higher temperatures than the current light water reactors (LWRs), the design of elevated-temperature components must comply with ASME Subsection NH (Class 1 Components in Elevated Temperature Service). However, the NRC has not approved the use of Subsection NH for reactor components, and this puts additional burdens on materials qualification of the ARR. 
In the past licensing review for the Clinch River Breeder Reactor Project (CRBRP) and the

  4. Aerosol kinetic code "AERFORM": Model, validation and simulation results

    NASA Astrophysics Data System (ADS)

    Gainullin, K. G.; Golubev, A. I.; Petrov, A. M.; Piskunov, V. N.

    2016-06-01

    The aerosol kinetic code "AERFORM" is modified to simulate droplet and ice particle formation in mixed clouds. The splitting method is used to calculate condensation and coagulation simultaneously. The method is calibrated with analytic solutions of kinetic equations. Condensation kinetic model is based on cloud particle growth equation, mass and heat balance equations. The coagulation kinetic model includes Brownian, turbulent and precipitation effects. The real values are used for condensation and coagulation growth of water droplets and ice particles. The model and the simulation results for two full-scale cloud experiments are presented. The simulation model and code may be used autonomously or as an element of another code.
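
The splitting method used above to advance condensation and coagulation together can be sketched with Strang splitting on toy scalar operators (illustrative stand-ins, not the AERFORM kinetics):

```python
import numpy as np

def strang_step(y, dt, step_a, step_b):
    """One Strang-split step: half step of process A, full step of B, then
    another half step of A (second-order accurate for non-commuting operators)."""
    y = step_a(y, 0.5 * dt)
    y = step_b(y, dt)
    return step_a(y, 0.5 * dt)

# toy stand-ins for the two processes advanced together:
# "condensation" as exponential growth, "coagulation" as exponential decay
cond = lambda y, dt: y * np.exp(0.5 * dt)
coag = lambda y, dt: y * np.exp(-0.2 * dt)

y = 1.0
for _ in range(100):
    y = strang_step(y, 0.01, cond, coag)
# these toy operators commute, so the split result tracks exp(0.3) closely
```

Calibrating such a scheme against analytic solutions of the combined equation, as the abstract describes, is exactly this kind of comparison.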

  5. ParaDiS-FEM dislocation dynamics simulation code primer

    SciTech Connect

    Tang, M; Hommes, G; Aubry, S; Arsenlis, A

    2011-09-27

    The ParaDiS code is developed to study bulk systems with periodic boundary conditions. When we try to perform discrete dislocation dynamics simulations for finite systems such as thin films or cylinders, the ParaDiS code must be extended. First, dislocations need to be contained inside the finite simulation box; Second, dislocations inside the finite box experience image stresses due to the free surfaces. We have developed in-house FEM subroutines to couple with the ParaDiS code to deal with free surface related issues in the dislocation dynamics simulations. This primer explains how the coupled code was developed, the main changes from the ParaDiS code, and the functions of the new FEM subroutines.

  6. Advancement of liquefaction assessment in Chinese building codes

    NASA Astrophysics Data System (ADS)

    Sun, H.; Liu, F.; Jiang, M.

    2015-09-01

    China has suffered extensive liquefaction hazards in destructive earthquakes. The post-earthquake reconnaissance effort in the country largely advances the methodology of liquefaction assessment distinct from other countries. This paper reviews the evolution of the specifications regarding liquefaction assessment in the seismic design building code of mainland China, which first appeared in 1974, came into shape in 1989, and received major amendments in 2001 and 2010 as a result of accumulated knowledge on liquefaction phenomenon. The current version of the code requires a detailed assessment of liquefaction based on in situ test results if liquefaction concern cannot be eliminated by a preliminary assessment based on descriptive information with respect to site characterization. In addition, a liquefaction index is evaluated to recognize liquefaction severity, and to choose the most appropriate engineering measures for liquefaction mitigation at a site being considered.
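
The liquefaction index mentioned above weights the liquefaction potential of each susceptible layer by its thickness and a depth-dependent weight. A sketch of the standard-penetration-test form used in Chinese practice follows; the layer values below are invented for illustration, and the code's own tables supply the critical blow counts and depth weights:

```python
def liquefaction_index(layers):
    """Layer-weighted liquefaction index in the spirit of the Chinese seismic
    design code: I = sum (1 - N/Ncr) * d * w over layers whose measured blow
    count N falls below the critical value Ncr. `layers` is a list of
    (N, Ncr, thickness_m, weight) tuples with illustrative values only."""
    return sum((1.0 - N / Ncr) * d * w for N, Ncr, d, w in layers if N < Ncr)
```

Layers whose measured blow count exceeds the critical value contribute nothing, so the index directly reflects how severe and how thick the liquefiable layers are.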

  7. Beam Optics Analysis - An Advanced 3D Trajectory Code

    SciTech Connect

    Ives, R. Lawrence; Bui, Thuc; Vogler, William; Neilson, Jeff; Read, Mike; Shephard, Mark; Bauer, Andrew; Datta, Dibyendu; Beal, Mark

    2006-01-03

    Calabazas Creek Research, Inc. has completed initial development of an advanced 3D program for modeling electron trajectories in electromagnetic fields. The code is being used to design complex guns and collectors. Beam Optics Analysis (BOA) is a fully relativistic, charged particle code using adaptive, finite element meshing. Geometrical input is imported from CAD programs generating ACIS-formatted files. Parametric data is input using an intuitive graphical user interface (GUI), which also provides control of convergence, accuracy, and post processing. The program includes a magnetic field solver, and magnetic information can be imported from Maxwell 2D/3D and other programs. The program supports thermionic emission and injected beams. Secondary electron emission is also supported, including multiple generations. Work on field emission is in progress, as is implementation of computer optimization of both the geometry and operating parameters. The principal features of the program and its capabilities are presented.

  8. Para: a computer simulation code for plasma driven electromagnetic launchers

    SciTech Connect

    Thio, Y.-C.

    1983-03-01

    A computer code for simulation of rail-type accelerators utilizing a plasma armature has been developed and is described in detail. Some time varying properties of the plasma are taken into account in this code thus allowing the development of a dynamical model of the behavior of a plasma in a rail-type electromagnetic launcher. The code is being successfully used to predict and analyse experiments on small calibre rail-gun launchers.
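
A minimal lumped-parameter sketch of a rail-type launcher (not the Para model, which resolves time-varying plasma armature properties) couples the circuit equation to the armature dynamics through the rail inductance gradient:

```python
def railgun_sim(V0, R, L0, Lp, m, dt=1e-6, t_end=2e-3):
    """Lumped-parameter railgun sketch: circuit V0 = I*R + d/dt[(L0 + Lp*x)*I]
    with inductance gradient Lp (H/m), thrust F = 0.5*Lp*I^2 on an armature
    of mass m, advanced by explicit Euler. Parameter values are illustrative."""
    I, x, v, t = 0.0, 0.0, 0.0, 0.0
    while t < t_end:
        L = L0 + Lp * x                       # position-dependent inductance
        dIdt = (V0 - I * R - Lp * v * I) / L  # Lp*v*I is the back-EMF term
        I += dIdt * dt
        v += (0.5 * Lp * I * I / m) * dt      # F = 0.5 * Lp * I^2
        x += v * dt
        t += dt
    return I, x, v
```

The back-EMF term is what couples the armature motion back into the circuit: as the projectile speeds up, the effective resistance seen by the source grows and the current saturates.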

  9. Multi-dimensional free-electron laser simulation codes: a comparison study

    NASA Astrophysics Data System (ADS)

    Biedron, S. G.; Chae, Y. C.; Dejus, R. J.; Faatz, B.; Freund, H. P.; Milton, S. V.; Nuhn, H.-D.; Reiche, S.

    2000-05-01

    A self-amplified spontaneous emission (SASE) free-electron laser (FEL) is under construction at the Advanced Photon Source (APS). Five FEL simulation codes were used in the design phase: GENESIS, GINGER, MEDUSA, RON, and TDA3D. Initial comparisons between each of these independent formulations show good agreement for the parameters of the APS SASE FEL.

  10. Multi-dimensional free-electron laser simulation codes : a comparison study.

    SciTech Connect

    Biedron, S. G.; Chae, Y. C.; Dejus, R. J.; Faatz, B.; Freund, H. P.; Milton, S. V.; Nuhn, H.-D.; Reiche, S.

    1999-08-23

    A self-amplified spontaneous emission (SASE) free-electron laser (FEL) is under construction at the Advanced Photon Source (APS). Five FEL simulation codes were used in the design phase: GENESIS, GINGER, MEDUSA, RON, and TDA3D. Initial comparisons between each of these independent formulations show good agreement for the parameters of the APS SASE FEL.

  11. Multi-Dimensional Free-Electron Laser Simulation Codes: A Comparison Study

    SciTech Connect

    Nuhn, Heinz-Dieter

    2003-04-28

    A self-amplified spontaneous emission (SASE) free-electron laser (FEL) is under construction at the Advanced Photon Source (APS). Five FEL simulation codes were used in the design phase: GENESIS, GINGER, MEDUSA, RON, and TDA3D. Initial comparisons between each of these independent formulations show good agreement for the parameters of the APS SASE FEL.

  12. Fast Huffman encoding algorithms in MPEG-4 advanced audio coding

    NASA Astrophysics Data System (ADS)

    Brzuchalski, Grzegorz

    2014-11-01

    This paper addresses the optimisation problem of Huffman encoding in the MPEG-4 Advanced Audio Coding standard. First, the Huffman encoding problem and the need to encode two side-info parameters, scale factor and Huffman codebook, are presented. Next, the Two Loop Search, Maximum Noise Mask Ratio and Trellis Based algorithms of bit allocation are briefly described. Further, Huffman encoding optimisations are shown. The new methods try to check and change scale factor bands as little as possible when estimating the bitrate cost or its change. Finally, the complexity of the old and new methods is calculated and compared, and the measured encoding time is given.
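
The Huffman stage itself can be sketched with the standard greedy code construction (a generic implementation, not the MPEG-4 AAC codebook search, which selects among predefined codebooks rather than building trees):

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Greedy Huffman code construction: repeatedly merge the two least
    frequent subtrees, prefixing '0'/'1' to the codes on each side."""
    freq = Counter(symbols)
    if len(freq) == 1:                       # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    heap = [(n, i, {s: ""}) for i, (s, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)
        n2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (n1 + n2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]
```

More frequent symbols end up with shorter codewords, which is the property the bit-allocation loops above exploit when estimating bitrate cost.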

  13. Probabilistic load simulation: Code development status

    NASA Technical Reports Server (NTRS)

    Newell, J. F.; Ho, H.

    1991-01-01

    The objective of the Composite Load Spectra (CLS) project is to develop generic load models to simulate the composite load spectra that are included in space propulsion system components. The probabilistic loads thus generated are part of the probabilistic design analysis (PDA) of a space propulsion system that also includes probabilistic structural analyses, reliability, and risk evaluations. Probabilistic load simulation for space propulsion systems demands sophisticated probabilistic methodology and requires large amounts of load information and engineering data. The CLS approach is to implement a knowledge based system coupled with a probabilistic load simulation module. The knowledge base manages and furnishes load information and expertise and sets up the simulation runs. The load simulation module performs the numerical computation to generate the probabilistic loads with load information supplied from the CLS knowledge base.
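
The simulation module's job, turning load information into sampled composite spectra, can be sketched with a toy Monte Carlo combination. The distribution choices and parameters below are invented placeholders, not CLS data:

```python
import random

def simulate_composite_load(n_samples=10000, seed=1):
    """Toy Monte Carlo sketch of composite load generation: sample each
    component load from its distribution and sum into a composite load.
    All distribution parameters are illustrative placeholders."""
    rng = random.Random(seed)
    composite = []
    for _ in range(n_samples):
        thrust = rng.gauss(100.0, 5.0)       # quasi-static component
        vibration = rng.gauss(0.0, 2.0)      # dynamic component
        transient = rng.expovariate(1.0)     # occasional spike
        composite.append(thrust + vibration + transient)
    return composite

loads = simulate_composite_load()
# empirical upper percentile of the composite load distribution
p99 = sorted(loads)[int(0.99 * len(loads))]
```

In the CLS architecture, the knowledge base would supply the distributions and their parameters; the sampling loop is the numerical core.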

  14. Probabilistic load simulation: Code development status

    NASA Astrophysics Data System (ADS)

    Newell, J. F.; Ho, H.

    1991-05-01

    The objective of the Composite Load Spectra (CLS) project is to develop generic load models to simulate the composite load spectra that are included in space propulsion system components. The probabilistic loads thus generated are part of the probabilistic design analysis (PDA) of a space propulsion system that also includes probabilistic structural analyses, reliability, and risk evaluations. Probabilistic load simulation for space propulsion systems demands sophisticated probabilistic methodology and requires large amounts of load information and engineering data. The CLS approach is to implement a knowledge based system coupled with a probabilistic load simulation module. The knowledge base manages and furnishes load information and expertise and sets up the simulation runs. The load simulation module performs the numerical computation to generate the probabilistic loads with load information supplied from the CLS knowledge base.

  15. Advanced Geophysical Environmental Simulation Techniques

    DTIC Science & Technology

    2007-11-02

    Efforts included development of cloud property retrieval algorithms for processing of large multiple-satellite data sets; development and application of improved cloud-phase and cloud optical property retrieval algorithms; and investigation of techniques potentially applicable for retrieval of cloud spatial properties. Subject terms: cirrus cloud retrieval, satellite meteorology, polar-orbiting, geostationary.

  16. Hybrid simulation codes with application to shocks and upstream waves

    NASA Technical Reports Server (NTRS)

    Winske, D.

    1985-01-01

    Hybrid codes in which part of the plasma is represented as particles and the rest as a fluid are discussed. In the past few years such codes with particle ions and massless, fluid electrons have been applied to space plasmas, especially to collisionless shocks. All of these simulation codes are one-dimensional and similar in structure, except for how the field equations are solved. The various approaches that are used (resistive Ohm's law, predictor-corrector, Hamiltonian) are described in detail and results from the various codes are compared with examples taken from collisionless shocks and low frequency wave phenomena upstream of shocks.
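
Of the field-solve approaches listed, the resistive Ohm's law is the simplest: with massless fluid electrons, the electric field follows algebraically from the current. A reduced 1D sketch (SI units; the pressure-gradient and Hall terms of a full hybrid Ohm's law are omitted):

```python
import numpy as np

def resistive_ohm_E(By, dx, eta=1e-4):
    """Field solve for a 1D hybrid model with massless fluid electrons and a
    resistive Ohm's law: Ampere's law gives Jz = -dBy/dx / mu0, and then the
    electric field follows algebraically as E = eta * J."""
    mu0 = 4.0e-7 * np.pi
    Jz = -np.gradient(By, dx) / mu0
    return eta * Jz
```

The predictor-corrector and Hamiltonian approaches mentioned above differ in how they advance the coupled field-particle system in time, but each still closes the field equations with some form of electron Ohm's law.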

  17. Advanced Space Shuttle simulation model

    NASA Technical Reports Server (NTRS)

    Tatom, F. B.; Smith, S. R.

    1982-01-01

    A non-recursive model (based on von Karman spectra) for atmospheric turbulence along the flight path of the shuttle orbiter was developed. It provides for simulation of instantaneous vertical and horizontal gusts at the vehicle center-of-gravity, and also for simulation of instantaneous gust gradients. Based on this model, the time series for both gusts and gust gradients were generated and stored on a series of magnetic tapes, entitled Shuttle Simulation Turbulence Tapes (SSTT). The time series are designed to represent atmospheric turbulence from ground level to an altitude of 120,000 meters. A description of the turbulence generation procedure is provided. The results of validating the simulated turbulence are described. Conclusions and recommendations are presented. One-dimensional von Karman spectra are tabulated, and the minimum frequency simulated is discussed. The results of spectral and statistical analyses of the SSTT are presented.
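
A common non-recursive way to generate such gust time series is to shape white Gaussian noise by the square root of the target von Karman spectrum in the frequency domain and invert with an FFT. The sketch below uses the standard longitudinal spectrum form and pins the sample variance to the target intensity; it is a generic frequency-domain method, not necessarily the exact SSTT procedure:

```python
import numpy as np

def von_karman_gust(n, dt, sigma, L, U, seed=0):
    """Shape white Gaussian noise by sqrt of the von Karman longitudinal
    spectrum  S(w) = sigma^2 * (2L/(pi*U)) / (1 + (1.339*L*w/U)^2)^(5/6),
    with L the turbulence length scale (m) and U the airspeed (m/s), then
    rescale so the sample standard deviation equals the target sigma."""
    rng = np.random.default_rng(seed)
    w = 2.0 * np.pi * np.fft.rfftfreq(n, d=dt)
    S = sigma**2 * (2.0 * L / (np.pi * U)) / (1.0 + (1.339 * L * w / U) ** 2) ** (5.0 / 6.0)
    shaped = np.fft.rfft(rng.normal(size=n)) * np.sqrt(S)
    gust = np.fft.irfft(shaped, n)
    return gust * (sigma / np.std(gust))
```

The final rescaling sidesteps the spectrum normalization constants, which is adequate for a sketch but would be made exact in a production generator.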

  18. A methodology for the rigorous verification of plasma simulation codes

    NASA Astrophysics Data System (ADS)

    Riva, Fabio

    2016-10-01

    The methodology used to assess the reliability of numerical simulation codes constitutes the Verification and Validation (V&V) procedure. V&V is composed of two separate tasks: the verification, a mathematical task aimed at assessing that the physical model is correctly solved, and the validation, which determines the consistency of the code results, and therefore of the physical model, with experimental data. In the present talk we focus our attention on the verification, which in turn is composed of the code verification, targeted to assess that a physical model is correctly implemented in a simulation code, and the solution verification, which quantifies the numerical error affecting a simulation. Bridging the gap between plasma physics and other scientific domains, we introduced for the first time in our domain a rigorous methodology for code verification, based on the method of manufactured solutions, as well as a solution verification based on the Richardson extrapolation. This methodology was applied to GBS, a three-dimensional fluid code based on a finite difference scheme, used to investigate plasma turbulence in basic plasma physics experiments and in the tokamak scrape-off layer. Overcoming the difficulty of dealing with a numerical method intrinsically affected by statistical noise, we have now generalized the rigorous verification methodology to simulation codes based on the particle-in-cell algorithm, which are employed to solve the Vlasov equation in the investigation of a number of plasma physics phenomena.
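
The two verification ingredients named above, manufactured solutions and observed order of convergence, can be sketched on a 1D Poisson solver: choose u(x) = sin(pi*x), derive the forcing analytically, and compare errors on two grids (a toy, not the GBS verification suite):

```python
import numpy as np

def max_error(n, f, u_exact):
    """Second-order finite-difference solve of -u'' = f on (0,1) with
    u(0) = u(1) = 0, returning the max-norm error against the manufactured
    solution on n interior nodes."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    return float(np.max(np.abs(np.linalg.solve(A, f(x)) - u_exact(x))))

# method of manufactured solutions: choose u, derive f = -u'' analytically
u_exact = lambda x: np.sin(np.pi * x)
f = lambda x: np.pi**2 * np.sin(np.pi * x)

e_coarse = max_error(31, f, u_exact)   # h = 1/32
e_fine = max_error(63, f, u_exact)     # h = 1/64
order = np.log2(e_coarse / e_fine)     # observed order: expect ~2 here
```

If the observed order matches the formal order of the scheme, the implementation is consistent with the model; Richardson extrapolation uses the same grid pair to estimate the remaining numerical error.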

  19. An approach for coupled-code multiphysics core simulations from a common input

    SciTech Connect

    Schmidt, Rodney; Belcourt, Kenneth; Hooper, Russell; Pawlowski, Roger P.; Clarno, Kevin T.; Simunovic, Srdjan; Slattery, Stuart R.; Turner, John A.; Palmtag, Scott

    2014-12-10

    This study describes an approach for coupled-code multiphysics reactor core simulations that is being developed by the Virtual Environment for Reactor Applications (VERA) project in the Consortium for Advanced Simulation of Light-Water Reactors (CASL). In this approach a user creates a single problem description, called the “VERAIn” common input file, to define and setup the desired coupled-code reactor core simulation. A preprocessing step accepts the VERAIn file and generates a set of fully consistent input files for the different physics codes being coupled. The problem is then solved using a single-executable coupled-code simulation tool applicable to the problem, which is built using VERA infrastructure software tools and the set of physics codes required for the problem of interest. The approach is demonstrated by performing an eigenvalue and power distribution calculation of a typical three-dimensional 17 × 17 assembly with thermal–hydraulic and fuel temperature feedback. All neutronics aspects of the problem (cross-section calculation, neutron transport, power release) are solved using the Insilico code suite and are fully coupled to a thermal–hydraulic analysis calculated by the Cobra-TF (CTF) code. The single-executable coupled-code (Insilico-CTF) simulation tool is created using several VERA tools, including LIME (Lightweight Integrating Multiphysics Environment for coupling codes), DTK (Data Transfer Kit), Trilinos, and TriBITS. Parallel calculations are performed on the Titan supercomputer at Oak Ridge National Laboratory using 1156 cores, and a synopsis of the solution results and code performance is presented. Finally, ongoing development of this approach is also briefly described.
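
The common-input pattern, one problem description expanded into mutually consistent per-code inputs, can be sketched as follows. The keys, file names, and physics split are illustrative placeholders, not the actual VERAIn schema:

```python
import json

def preprocess(common_input):
    """Toy sketch of the common-input idea: a single problem description is
    expanded into per-code input files whose shared quantities are copied
    from one source and therefore consistent by construction."""
    neutronics = {
        "assembly": common_input["assembly"],
        "power_MW": common_input["power_MW"],
    }
    thermal_hydraulics = {
        "assembly": common_input["assembly"],
        "inlet_temp_K": common_input["inlet_temp_K"],
        "power_MW": common_input["power_MW"],  # same value as neutronics
    }
    return {
        "neutronics.json": json.dumps(neutronics),
        "th.json": json.dumps(thermal_hydraulics),
    }
```

The point of the pattern is that shared quantities are stated once, so the coupled codes cannot silently disagree on them.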

  20. An approach for coupled-code multiphysics core simulations from a common input

    DOE PAGES

    Schmidt, Rodney; Belcourt, Kenneth; Hooper, Russell; ...

    2014-12-10

    This study describes an approach for coupled-code multiphysics reactor core simulations that is being developed by the Virtual Environment for Reactor Applications (VERA) project in the Consortium for Advanced Simulation of Light-Water Reactors (CASL). In this approach a user creates a single problem description, called the “VERAIn” common input file, to define and setup the desired coupled-code reactor core simulation. A preprocessing step accepts the VERAIn file and generates a set of fully consistent input files for the different physics codes being coupled. The problem is then solved using a single-executable coupled-code simulation tool applicable to the problem, which is built using VERA infrastructure software tools and the set of physics codes required for the problem of interest. The approach is demonstrated by performing an eigenvalue and power distribution calculation of a typical three-dimensional 17 × 17 assembly with thermal–hydraulic and fuel temperature feedback. All neutronics aspects of the problem (cross-section calculation, neutron transport, power release) are solved using the Insilico code suite and are fully coupled to a thermal–hydraulic analysis calculated by the Cobra-TF (CTF) code. The single-executable coupled-code (Insilico-CTF) simulation tool is created using several VERA tools, including LIME (Lightweight Integrating Multiphysics Environment for coupling codes), DTK (Data Transfer Kit), Trilinos, and TriBITS. Parallel calculations are performed on the Titan supercomputer at Oak Ridge National Laboratory using 1156 cores, and a synopsis of the solution results and code performance is presented. Finally, ongoing development of this approach is also briefly described.

  1. Code for Axial and Crossflow Turbine Simulation

    SciTech Connect

    Jonathan Murray, Matthew Barone

    2013-07-25

CACTUS is a code that calculates the performance and aero/hydrodynamic loads of a wind or water turbine. The turbine may be either a vertical-axis or a horizontal-axis machine, and may consist of one or more curved or straight blades. The performance model is based on a lifting-line free wake formulation that calculates rotor power and blade loads in the time domain. The rotor is treated as a rotating rigid body, such that structural dynamic motions are not modeled. The turbine is assumed to operate within a steady, but possibly sheared, wind or current velocity. For marine hydrokinetic energy applications, the presence of a river/tidal channel bed and water surface boundaries may be modeled.

  2. DSD - A Particle Simulation Code for Modeling Dusty Plasmas

    NASA Astrophysics Data System (ADS)

    Joyce, Glenn; Lampe, Martin; Ganguli, Gurudas

    1999-11-01

    The NRL Dynamically Shielded Dust code (DSD) is a particle simulation code developed to study the behavior of strongly coupled, dusty plasmas. The model includes the electrostatic wake effects of plasma ions flowing through plasma electrons, collisions of dust and plasma particles with each other and with neutrals. The simulation model contains the short-range strong forces of a shielded Coulomb system, and the long-range forces that are caused by the wake. It also includes other effects of a flowing plasma such as drag forces. In order to model strongly coupled dust in plasmas, we make use of the techniques of molecular dynamics simulation, PIC simulation, and the "particle-particle/particle-mesh" (P3M) technique of Hockney and Eastwood. We also make use of the dressed test particle representation of Rostoker and Rosenbluth. Many of the techniques we use in the model are common to all PIC plasma simulation codes. The unique properties of the code follow from the accurate representation of both the short-range aspects of the interaction between dust grains, and long-range forces mediated by the complete plasma dielectric response. If the streaming velocity is zero, the potential used in the model reduces to the Debye-Huckel potential, and the simulation is identical to molecular dynamics models of the Yukawa potential. The plasma appears only implicitly through the plasma dispersion function, so it is not necessary in the code to resolve the fast plasma time scales.
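
    The limiting case noted at the end of the abstract is easy to write down: with zero streaming velocity the grain-grain interaction reduces to the Debye-Hückel (Yukawa) potential, phi(r) = (Q / 4 pi eps0 r) exp(-r / lambda_D). The sketch below is illustrative, not DSD source code, and the grain charge and Debye length are assumed values.

    ```python
    import math

    EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

    def debye_hueckel(q, r, debye_length):
        """Shielded Coulomb potential of charge q at distance r (SI units)."""
        return q / (4.0 * math.pi * EPS0 * r) * math.exp(-r / debye_length)

    def coulomb(q, r):
        """Unshielded Coulomb potential, for comparison."""
        return q / (4.0 * math.pi * EPS0 * r)

    if __name__ == "__main__":
        q = 1.6e-19 * 1e4          # a grain carrying ~10^4 elementary charges
        lam = 1e-4                 # 100 micron Debye length (assumed value)
        # Shielding suppresses the interaction beyond a few Debye lengths:
        ratio = debye_hueckel(q, 3 * lam, lam) / coulomb(q, 3 * lam)
        print(round(ratio, 3))  # exp(-3) ~ 0.05
    ```

    In this limit a molecular-dynamics sum of such pair potentials reproduces the Yukawa-system simulations the abstract mentions.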

  3. Muon simulation codes MUSIC and MUSUN for underground physics

    NASA Astrophysics Data System (ADS)

    Kudryavtsev, V. A.

    2009-03-01

The paper describes two Monte Carlo codes dedicated to muon simulations: MUSIC (MUon SImulation Code) and MUSUN (MUon Simulations UNderground). MUSIC is a package for muon transport through matter. It is particularly useful for propagating muons through large thicknesses of rock or water, for instance from the surface down to an underground or underwater laboratory. MUSUN is designed to use the results of muon transport through rock/water to generate muons in or around an underground laboratory, taking into account their energy spectrum and angular distribution.
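
    Generating muons "taking into account their energy spectrum" comes down to sampling from a distribution by inverting its CDF. The toy below samples a generic power-law spectrum dN/dE ~ E^-gamma; the spectral index and energy bounds are illustrative, not MUSUN's actual underground parameterization.

    ```python
    import random

    # Toy spectrum sampling in the spirit of MUSUN (not its actual model):
    # draw energies from dN/dE ~ E^-gamma on [e_min, e_max] by inverse CDF.

    def sample_power_law(e_min, e_max, gamma, rng):
        u = rng.random()
        a = e_min ** (1.0 - gamma)
        b = e_max ** (1.0 - gamma)
        return (a + u * (b - a)) ** (1.0 / (1.0 - gamma))

    if __name__ == "__main__":
        rng = random.Random(42)
        energies = [sample_power_law(100.0, 1e5, 3.7, rng) for _ in range(10000)]
        assert all(100.0 <= e <= 1e5 for e in energies)
        # A steep spectrum concentrates events near the low-energy threshold:
        frac_soft = sum(e < 200.0 for e in energies) / len(energies)
        print(frac_soft > 0.7)  # True
    ```

    The angular distribution would be sampled the same way from its own CDF, then the two combined into a generated muon.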

  4. LOOPREF: A Fluid Code for the Simulation of Coronal Loops

    NASA Technical Reports Server (NTRS)

    deFainchtein, Rosalinda; Antiochos, Spiro; Spicer, Daniel

    1998-01-01

This report documents the code LOOPREF. LOOPREF is a semi-one-dimensional finite element code that is especially well suited to simulating coronal-loop phenomena. It has a full implementation of adaptive mesh refinement (AMR), which is crucial for this type of simulation. The AMR routines are an improved version of AMR1D. LOOPREF's versatility makes it suitable for simulating a wide variety of problems. In addition to efficiently providing very high resolution in rapidly changing regions of the domain, it is equipped to treat loops of variable cross section, any non-linear form of heat conduction, shocks, gravitational effects, and radiative loss.

  5. HADES, A Code for Simulating a Variety of Radiographic Techniques

    SciTech Connect

    Aufderheide, M B; Henderson, G; von Wittenau, A; Slone, D M; Barty, A; Martz, Jr., H E

    2004-10-28

    It is often useful to simulate radiographic images in order to optimize imaging trade-offs and to test tomographic techniques. HADES is a code that simulates radiography using ray tracing techniques. Although originally developed to simulate X-Ray transmission radiography, HADES has grown to simulate neutron radiography over a wide range of energy, proton radiography in the 1 MeV to 100 GeV range, and recently phase contrast radiography using X-Rays in the keV energy range. HADES can simulate parallel-ray or cone-beam radiography through a variety of mesh types, as well as through collections of geometric objects. HADES was originally developed for nondestructive evaluation (NDE) applications, but could be a useful tool for simulation of portal imaging, proton therapy imaging, and synchrotron studies of tissue. In this paper we describe HADES' current capabilities and discuss plans for a major revision of the code.
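
    The core of transmission radiography by ray tracing is a Beer-Lambert line integral: for each ray, the attenuation coefficients of the materials crossed are weighted by the path lengths through them, I = I0 exp(-sum(mu_i * L_i)). The sketch below is far simpler than HADES and its material values are invented for illustration.

    ```python
    import math

    def transmitted_intensity(i0, segments):
        """segments: list of (mu_per_cm, path_length_cm) crossed by one ray."""
        optical_depth = sum(mu * length for mu, length in segments)
        return i0 * math.exp(-optical_depth)

    if __name__ == "__main__":
        # One ray through 2 cm of a light material and 0.5 cm of a dense one:
        ray = [(0.5, 2.0), (2.0, 0.5)]
        print(round(transmitted_intensity(1.0, ray), 4))  # exp(-2.0) ~ 0.1353
    ```

    A full simulator repeats this per detector pixel, with the path lengths obtained by intersecting each ray against the mesh or geometric objects.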

  6. Advances in Simulation of Wave Interaction with Extended MHD Phenomena

    SciTech Connect

Batchelor, Donald B; Abla, Gheni; D'Azevedo, Ed F; Bateman, Glenn; Bernholdt, David E; Berry, Lee A; Bonoli, P.; Bramley, R; Breslau, Joshua; Chance, M.; Chen, J.; Choi, M.; Elwasif, Wael R; Foley, S.; Fu, GuoYong; Harvey, R. W.; Jaeger, Erwin Frederick; Jardin, S. C.; Jenkins, T; Keyes, David E; Klasky, Scott A; Kruger, Scott; Ku, Long-Poe; Lynch, Vickie E; McCune, Douglas; Ramos, J.; Schissel, D.; Schnack, D. D.; Wright, J.

    2009-01-01

The Integrated Plasma Simulator (IPS) provides a framework within which some of the most advanced, massively-parallel fusion modeling codes can be interoperated to provide a detailed picture of the multi-physics processes involved in fusion experiments. The presentation will cover four topics: 1) recent improvements to the IPS, 2) application of the IPS for very high resolution simulations of ITER scenarios, 3) studies of resistive and ideal MHD stability in tokamak discharges using IPS facilities, and 4) the application of RF power in the electron cyclotron range of frequencies to control slowly growing MHD modes in tokamaks and initial evaluations of optimized location for RF power deposition.

  7. Advances in Simulation of Wave Interactions with Extended MHD Phenomena

    SciTech Connect

    Batchelor, Donald B; D'Azevedo, Eduardo; Bateman, Glenn; Bernholdt, David E; Bonoli, P.; Bramley, Randall B; Breslau, Joshua; Elwasif, Wael R; Foley, S.; Jaeger, Erwin Frederick; Jardin, S. C.; Klasky, Scott A; Kruger, Scott E; Ku, Long-Poe; McCune, Douglas; Ramos, J.; Schissel, David P; Schnack, Dalton D

    2009-01-01

    The Integrated Plasma Simulator (IPS) provides a framework within which some of the most advanced, massively-parallel fusion modeling codes can be interoperated to provide a detailed picture of the multi-physics processes involved in fusion experiments. The presentation will cover four topics: (1) recent improvements to the IPS, (2) application of the IPS for very high resolution simulations of ITER scenarios, (3) studies of resistive and ideal MHD stability in tokamak discharges using IPS facilities, and (4) the application of RF power in the electron cyclotron range of frequencies to control slowly growing MHD modes in tokamaks and initial evaluations of optimized location for RF power deposition.

  8. Static benchmarking of the NESTLE advanced nodal code

    SciTech Connect

    Mosteller, R.D.

    1997-05-01

    Results from the NESTLE advanced nodal code are presented for multidimensional numerical benchmarks representing four different types of reactors, and predictions from NESTLE are compared with measured data from pressurized water reactors (PWRs). The numerical benchmarks include cases representative of PWRs, boiling water reactors (BWRs), CANDU heavy water reactors (HWRs), and high-temperature gas-cooled reactors (HTGRs). The measured PWR data include critical soluble boron concentrations and isothermal temperature coefficients of reactivity. The results demonstrate that NESTLE correctly solves the multigroup diffusion equations for both Cartesian and hexagonal geometries, that it reliably calculates k{sub eff} and reactivity coefficients for PWRs, and that--subsequent to the incorporation of additional thermal-hydraulic models--it will be able to perform accurate calculations for the corresponding parameters in BWRs, HWRs, and HTGRs as well.
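
    The kind of eigenvalue calculation benchmarked here can be illustrated with a toy solver: one-group, one-dimensional diffusion with power iteration on the fission source. NESTLE itself solves the multigroup equations on 3D Cartesian and hexagonal meshes; the cross sections below (D, sigma_a, nu*sigma_f) are illustrative values, not benchmark data.

    ```python
    # Toy k-eff solver: -D*phi'' + sig_a*phi = (1/k)*nu_sig_f*phi, phi = 0 at
    # both slab edges, discretized with central differences and solved by
    # power iteration on the fission source.

    def thomas(sub, diag, sup, rhs):
        """Solve a tridiagonal system by elimination and back substitution."""
        n = len(rhs)
        cp, dp = [0.0] * n, [0.0] * n
        cp[0], dp[0] = sup[0] / diag[0], rhs[0] / diag[0]
        for i in range(1, n):
            m = diag[i] - sub[i] * cp[i - 1]
            cp[i] = sup[i] / m
            dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / m
        x = [0.0] * n
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

    def k_eff(width=100.0, n=200, D=1.0, sig_a=0.1, nu_sig_f=0.12):
        h = width / (n + 1)
        sub = [-D / h**2] * n
        diag = [2.0 * D / h**2 + sig_a] * n
        sup = [-D / h**2] * n
        phi, k = [1.0] * n, 1.0
        for _ in range(500):
            src = [nu_sig_f * p / k for p in phi]   # scaled fission source
            phi_new = thomas(sub, diag, sup, src)
            k = k * sum(phi_new) / sum(phi)         # eigenvalue update
            s = sum(phi_new)
            phi = [p * n / s for p in phi_new]      # renormalize the flux
        return k

    if __name__ == "__main__":
        # Analytic check: k = nu_sig_f / (sig_a + D*(pi/width)^2) ~ 1.1883
        print(round(k_eff(), 3))  # 1.188
    ```

    The analytic slab solution provides exactly the kind of "numerical benchmark" the abstract describes: the code's k-eff can be compared against a known answer.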

  9. Simulating Marvel with the Stun Code

    SciTech Connect

    Glenn, L A

    2001-05-23

MARVEL, a nuclear-driven shock-tube experiment, consisted of a 2.2 kiloton nuclear explosive detonated 176 meters underground at one end of a 122-meter long, 1-meter diameter horizontal tunnel. Vaporization of material in the immediate vicinity of the explosive provided the source of high-energy driver gas. The driven gas was the ambient atmospheric air in the tunnel. The event was staged as an experimental and calculational study of the time-dependent flow of energy in the tunnel and surrounding alluvium. In this report we describe the derivation and implementation of a ''1-3/4D'' hydrocode to simulate the experiment. Calculations were performed to study the influence of energy transport to, and mass ablation from, the walls of the tunnel on the shock velocity.

  10. Nexus: A modular workflow management system for quantum simulation codes

    NASA Astrophysics Data System (ADS)

    Krogel, Jaron T.

    2016-01-01

    The management of simulation workflows represents a significant task for the individual computational researcher. Automation of the required tasks involved in simulation work can decrease the overall time to solution and reduce sources of human error. A new simulation workflow management system, Nexus, is presented to address these issues. Nexus is capable of automated job management on workstations and resources at several major supercomputing centers. Its modular design allows many quantum simulation codes to be supported within the same framework. Current support includes quantum Monte Carlo calculations with QMCPACK, density functional theory calculations with Quantum Espresso or VASP, and quantum chemical calculations with GAMESS. Users can compose workflows through a transparent, text-based interface, resembling the input file of a typical simulation code. A usage example is provided to illustrate the process.
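
    The essence of such a workflow manager is a dependency graph of simulation tasks executed in a valid order. The sketch below is an illustration of that idea only; the class and method names are hypothetical, not the Nexus API.

    ```python
    # Minimal dependency-ordered task scheduler (illustrative, not Nexus code).

    class Task:
        def __init__(self, name, deps=()):
            self.name, self.deps = name, list(deps)

    def schedule(tasks):
        """Return an execution order respecting dependencies (topological sort)."""
        order, done = [], set()
        def visit(t):
            if t.name in done:
                return
            for d in t.deps:
                visit(d)
            done.add(t.name)
            order.append(t.name)
        for t in tasks:
            visit(t)
        return order

    if __name__ == "__main__":
        relax = Task("dft_relax")               # e.g. a Quantum Espresso relaxation
        scf = Task("dft_scf", deps=[relax])     # ground-state calculation
        qmc = Task("qmc_dmc", deps=[scf])       # QMCPACK run consuming the orbitals
        print(schedule([qmc, scf, relax]))  # ['dft_relax', 'dft_scf', 'qmc_dmc']
    ```

    In a real system each task would also carry an input template and a job submission step for the target machine; the ordering logic is the common core.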

  11. Nexus: a modular workflow management system for quantum simulation codes

    SciTech Connect

    Krogel, Jaron T.

    2015-08-24

    The management of simulation workflows is a significant task for the individual computational researcher. Automation of the required tasks involved in simulation work can decrease the overall time to solution and reduce sources of human error. A new simulation workflow management system, Nexus, is presented to address these issues. Nexus is capable of automated job management on workstations and resources at several major supercomputing centers. Its modular design allows many quantum simulation codes to be supported within the same framework. Current support includes quantum Monte Carlo calculations with QMCPACK, density functional theory calculations with Quantum Espresso or VASP, and quantum chemical calculations with GAMESS. Users can compose workflows through a transparent, text-based interface, resembling the input file of a typical simulation code. A usage example is provided to illustrate the process.

  12. Nexus: a modular workflow management system for quantum simulation codes

    DOE PAGES

    Krogel, Jaron T.

    2015-08-24

The management of simulation workflows is a significant task for the individual computational researcher. Automation of the required tasks involved in simulation work can decrease the overall time to solution and reduce sources of human error. A new simulation workflow management system, Nexus, is presented to address these issues. Nexus is capable of automated job management on workstations and resources at several major supercomputing centers. Its modular design allows many quantum simulation codes to be supported within the same framework. Current support includes quantum Monte Carlo calculations with QMCPACK, density functional theory calculations with Quantum Espresso or VASP, and quantum chemical calculations with GAMESS. Users can compose workflows through a transparent, text-based interface, resembling the input file of a typical simulation code. A usage example is provided to illustrate the process.

  13. Advances in atomic oxygen simulation

    NASA Technical Reports Server (NTRS)

    Froechtenigt, Joseph F.; Bareiss, Lyle E.

    1990-01-01

Atomic oxygen (AO) present in the atmosphere at orbital altitudes of 200 to 700 km has been shown to degrade various exposed materials on Shuttle flights. The relative velocity of the AO with respect to the spacecraft, together with the AO density, combine to yield an environment consisting of a 5 eV beam energy with a flux of 10^14 to 10^15 oxygen atoms/sq cm/s. An AO ion beam apparatus that produces flux levels and energy similar to those encountered by spacecraft in low Earth orbit (LEO) has been in existence since 1987. Test data were obtained from the interaction of the AO ion beam with materials used in space applications (carbon, silver, kapton) and with several special coatings of interest deposited on various surfaces. The ultimate design goal of the AO beam simulation device is to produce neutral AO at sufficient flux levels to replicate on-orbit conditions. A newly acquired mass spectrometer with energy discrimination has allowed 5 eV neutral oxygen atoms to be separated and detected against the background of thermal oxygen atoms of approximately 0.2 eV. Neutralization of the AO ion beam at 5 eV was demonstrated at the Martin Marietta AO facility.

  14. Simulation of Aircraft Landing Gears with a Nonlinear Dynamic Finite Element Code

    NASA Technical Reports Server (NTRS)

    Lyle, Karen H.; Jackson, Karen E.; Fasanella, Edwin L.

    2000-01-01

    Recent advances in computational speed have made aircraft and spacecraft crash simulations using an explicit, nonlinear, transient-dynamic, finite element analysis code more feasible. This paper describes the development of a simple landing gear model, which accurately simulates the energy absorbed by the gear without adding substantial complexity to the model. For a crash model, the landing gear response is approximated with a spring where the force applied to the fuselage is computed in a user-written subroutine. Helicopter crash simulations using this approach are compared with previously acquired experimental data from a full-scale crash test of a composite helicopter.
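
    The modeling idea in the abstract, a landing gear replaced by a spring whose force on the fuselage comes from a user-written routine, can be sketched as a simple force-deflection function. The curve below is invented for illustration; in the paper the gear response is matched to experimental drop-test behavior, not to these numbers.

    ```python
    # Illustrative user-defined gear force routine (hypothetical constants):
    # a nonlinear spring plus a damper acting only while the gear is in contact.

    def gear_force(stroke_m, stroke_rate_ms):
        """Return the vertical force (N) applied to the fuselage attachment."""
        if stroke_m <= 0.0:
            return 0.0                  # gear fully extended / not in contact
        k1, k2 = 8.0e4, 6.0e5           # linear and cubic stiffness terms, N/m
        c = 4.0e3                       # damping coefficient, N*s/m
        spring = k1 * stroke_m + k2 * stroke_m ** 3
        damper = c * stroke_rate_ms
        return spring + damper

    if __name__ == "__main__":
        assert gear_force(-0.01, 0.0) == 0.0
        print(round(gear_force(0.1, 0.5), 1))  # 8000 + 600 + 2000 = 10600.0
    ```

    The crash code calls such a routine every time step with the current stroke and stroke rate, which is what keeps the gear model cheap compared with a full structural model.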

  15. Simulation of neoclassical transport with the continuum gyrokinetic code COGENT

    SciTech Connect

    Dorf, M. A.; Cohen, R. H.; Dorr, M.; Rognlien, T.; Hittinger, J.; Compton, J.; Colella, P.; Martin, D.; McCorquodale, P.

    2013-01-25

    The development of the continuum gyrokinetic code COGENT for edge plasma simulations is reported. The present version of the code models a nonlinear axisymmetric 4D (R, v∥, μ) gyrokinetic equation coupled to the long-wavelength limit of the gyro-Poisson equation. Here, R is the particle gyrocenter coordinate in the poloidal plane, and v∥ and μ are the guiding center velocity parallel to the magnetic field and the magnetic moment, respectively. The COGENT code utilizes a fourth-order finite-volume (conservative) discretization combined with arbitrary mapped multiblock grid technology (nearly field-aligned on blocks) to handle the complexity of tokamak divertor geometry with high accuracy. Furthermore, topics presented are the implementation of increasingly detailed model collision operators, and the results of neoclassical transport simulations including the effects of a strong radial electric field characteristic of a tokamak pedestal under H-mode conditions.

  16. Simulation of neoclassical transport with the continuum gyrokinetic code COGENT

    DOE PAGES

    Dorf, M. A.; Cohen, R. H.; Dorr, M.; ...

    2013-01-25

The development of the continuum gyrokinetic code COGENT for edge plasma simulations is reported. The present version of the code models a nonlinear axisymmetric 4D (R, v∥, μ) gyrokinetic equation coupled to the long-wavelength limit of the gyro-Poisson equation. Here, R is the particle gyrocenter coordinate in the poloidal plane, and v∥ and μ are the guiding center velocity parallel to the magnetic field and the magnetic moment, respectively. The COGENT code utilizes a fourth-order finite-volume (conservative) discretization combined with arbitrary mapped multiblock grid technology (nearly field-aligned on blocks) to handle the complexity of tokamak divertor geometry with high accuracy. Furthermore, topics presented are the implementation of increasingly detailed model collision operators, and the results of neoclassical transport simulations including the effects of a strong radial electric field characteristic of a tokamak pedestal under H-mode conditions.

  17. Center for Advanced Modeling and Simulation Intern

    SciTech Connect

    Gertman, Vanessa

    2010-01-01

    Some interns just copy papers and seal envelopes. Not at INL! Check out how Vanessa Gertman, an INL intern working at the Center for Advanced Modeling and Simulation, spent her summer working with some intense visualization software. Lots more content like this is available at INL's facebook page http://www.facebook.com/idahonationallaboratory.

  18. Center for Advanced Modeling and Simulation Intern

    ScienceCinema

    Gertman, Vanessa

    2016-07-12

    Some interns just copy papers and seal envelopes. Not at INL! Check out how Vanessa Gertman, an INL intern working at the Center for Advanced Modeling and Simulation, spent her summer working with some intense visualization software. Lots more content like this is available at INL's facebook page http://www.facebook.com/idahonationallaboratory.

  19. Advances in pleural disease management including updated procedural coding.

    PubMed

    Haas, Andrew R; Sterman, Daniel H

    2014-08-01

    Over 1.5 million pleural effusions occur in the United States every year as a consequence of a variety of inflammatory, infectious, and malignant conditions. Although rarely fatal in isolation, pleural effusions are often a marker of a serious underlying medical condition and contribute to significant patient morbidity, quality-of-life reduction, and mortality. Pleural effusion management centers on pleural fluid drainage to relieve symptoms and to investigate pleural fluid accumulation etiology. Many recent studies have demonstrated important advances in pleural disease management approaches for a variety of pleural fluid etiologies, including malignant pleural effusion, complicated parapneumonic effusion and empyema, and chest tube size. The last decade has seen greater implementation of real-time imaging assistance for pleural effusion management and increasing use of smaller bore percutaneous chest tubes. This article will briefly review recent pleural effusion management literature and update the latest changes in common procedural terminology billing codes as reflected in the changing landscape of imaging use and percutaneous approaches to pleural disease management.

  20. Enhanced Verification Test Suite for Physics Simulation Codes

    SciTech Connect

    Kamm, J R; Brock, J S; Brandon, S T; Cotrell, D L; Johnson, B; Knupp, P; Rider, W; Trucano, T; Weirs, V G

    2008-10-10

This document discusses problems with which to augment, in quantity and in quality, the existing tri-laboratory suite of verification problems used by Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and Sandia National Laboratories (SNL). The purpose of verification analysis is to demonstrate whether the numerical results of the discretization algorithms in physics and engineering simulation codes provide correct solutions of the corresponding continuum equations. The key points of this document are: (1) Verification deals with mathematical correctness of the numerical algorithms in a code, while validation deals with physical correctness of a simulation in a regime of interest. This document is about verification. (2) The current seven-problem Tri-Laboratory Verification Test Suite, which has been used for approximately five years at the DOE WP laboratories, is limited. (3) Both the methodology for and technology used in verification analysis have evolved and been improved since the original test suite was proposed. (4) The proposed test problems are in three basic areas: (a) Hydrodynamics; (b) Transport processes; and (c) Dynamic strength-of-materials. (5) For several of the proposed problems we provide a 'strong sense verification benchmark', consisting of (i) a clear mathematical statement of the problem with sufficient information to run a computer simulation, (ii) an explanation of how the code result and benchmark solution are to be evaluated, and (iii) a description of the acceptance criterion for simulation code results. (6) It is proposed that the set of verification test problems with which any particular code be evaluated include some of the problems described in this document. Analysis of the proposed verification test problems constitutes part of a necessary--but not sufficient--step that builds confidence in physics and engineering simulation codes. More complicated test cases, including physics models of greater
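
    A standard quantitative step in the verification analysis described above is the observed order of accuracy: given errors against an exact benchmark solution on two grids refined by ratio r, p = ln(e_coarse / e_fine) / ln(r), compared with the scheme's theoretical order as an acceptance check. A minimal sketch (the error values are made up):

    ```python
    import math

    def observed_order(err_coarse, err_fine, refinement_ratio=2.0):
        """Observed order of accuracy from errors on two successive grids."""
        return math.log(err_coarse / err_fine) / math.log(refinement_ratio)

    if __name__ == "__main__":
        # A second-order scheme should cut the error ~4x per grid doubling:
        p = observed_order(1.0e-2, 2.5e-3)
        print(round(p, 2))  # 2.0
    ```

    An acceptance criterion of the "strong sense benchmark" kind might then require p to lie within a stated tolerance of the theoretical order in the asymptotic range.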

  1. UNIPIC code for simulations of high power microwave devices

    SciTech Connect

    Wang Jianguo; Zhang Dianhui; Wang Yue; Qiao Hailiang; Li Xiaoze; Liu Chunliang; Li Yongdong; Wang Hongguang

    2009-03-15

In this paper, UNIPIC code, a new member in the family of fully electromagnetic particle-in-cell (PIC) codes for simulations of high power microwave (HPM) generation, is introduced. In the UNIPIC code, the electromagnetic fields are updated using the second-order, finite-difference time-domain (FDTD) method, and the particles are moved using the relativistic Newton-Lorentz force equation. The convolutional perfectly matched layer method is used to truncate the open boundaries of HPM devices. To model curved surfaces and avoid the time step reduction in the conformal-path FDTD method, the CP weakly conditionally stable FDTD (CP WCS-FDTD) method, which combines the WCS-FDTD and CP-FDTD methods, is implemented. UNIPIC is two-and-a-half dimensional, is written in the object-oriented C++ language, and can be run on a variety of platforms including WINDOWS, LINUX, and UNIX. Users can use the graphical user interface to create the geometric structures of the simulated HPM devices or to load previously created structures. Numerical experiments on some typical HPM devices using the UNIPIC code are presented, and the results agree well with those obtained from well-known PIC codes.
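
    The standard way to advance particles under the relativistic Newton-Lorentz equation in a PIC loop is the Boris scheme: a half electric kick, a magnetic rotation, and a second half kick. The sketch below shows that scheme generically (UNIPIC's exact integrator may differ in details); u = gamma*v is the momentum per unit mass.

    ```python
    import math

    C = 2.99792458e8  # speed of light, m/s

    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

    def boris_push(u, E, B, q_over_m, dt):
        """One relativistic Boris step for the Newton-Lorentz equation."""
        h = q_over_m * dt / 2.0
        u_minus = tuple(ui + h * Ei for ui, Ei in zip(u, E))    # half electric kick
        gamma = math.sqrt(1.0 + sum(ui * ui for ui in u_minus) / C**2)
        t = tuple(h * Bi / gamma for Bi in B)                   # rotation vector
        s_fac = 2.0 / (1.0 + sum(ti * ti for ti in t))
        u_prime = tuple(um + cu for um, cu in zip(u_minus, cross(u_minus, t)))
        u_plus = tuple(um + s_fac * cu for um, cu in zip(u_minus, cross(u_prime, t)))
        return tuple(ui + h * Ei for ui, Ei in zip(u_plus, E))  # second half kick

    if __name__ == "__main__":
        qm = -1.7588e11                  # electron charge-to-mass ratio, C/kg
        u0 = (1.0e8, 0.0, 0.0)           # initial gamma*v, m/s
        u1 = boris_push(u0, (0.0, 0.0, 0.0), (0.0, 0.0, 0.01), qm, 1.0e-12)
        # With E = 0 the Boris rotation preserves |u| (energy conservation):
        print(abs(math.sqrt(sum(x * x for x in u1)) / 1.0e8 - 1.0) < 1e-9)  # True
    ```

    The rotation step conserves particle energy exactly in a pure magnetic field, which is one reason the Boris scheme is the workhorse mover in PIC codes.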

  2. BEAM: a Monte Carlo code to simulate radiotherapy treatment units.

    PubMed

    Rogers, D W; Faddegon, B A; Ding, G X; Ma, C M; We, J; Mackie, T R

    1995-05-01

    This paper describes BEAM, a general purpose Monte Carlo code to simulate the radiation beams from radiotherapy units including high-energy electron and photon beams, 60Co beams and orthovoltage units. The code handles a variety of elementary geometric entities which the user puts together as needed (jaws, applicators, stacked cones, mirrors, etc.), thus allowing simulation of a wide variety of accelerators. The code is not restricted to cylindrical symmetry. It incorporates a variety of powerful variance reduction techniques such as range rejection, bremsstrahlung splitting and forcing photon interactions. The code allows direct calculation of charge in the monitor ion chamber. It has the capability of keeping track of each particle's history and using this information to score separate dose components (e.g., to determine the dose from electrons scattering off the applicator). The paper presents a variety of calculated results to demonstrate the code's capabilities. The calculated dose distributions in a water phantom irradiated by electron beams from the NRC 35 MeV research accelerator, a Varian Clinac 2100C, a Philips SL75-20, an AECL Therac 20 and a Scanditronix MM50 are all shown to be in good agreement with measurements at the 2 to 3% level. Eighteen electron spectra from four different commercial accelerators are presented and various aspects of the electron beams from a Clinac 2100C are discussed. Timing requirements and selection of parameters for the Monte Carlo calculations are discussed.
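
    Range rejection, one of the variance-reduction techniques named above, can be sketched in a few lines: if a charged particle's residual range is too short for it to reach the region of interest, its history is terminated and the remaining energy deposited locally. The range rule and cutoff below are invented toy values, not BEAM's data.

    ```python
    # Illustrative range-rejection decision (not BEAM/EGS code).

    def residual_range_cm(energy_mev):
        """Crude continuous-slowing-down range in a water-like medium (toy rule)."""
        return 0.5 * energy_mev  # ~0.5 cm per MeV, an illustrative rule of thumb

    def range_reject(energy_mev, distance_to_boundary_cm, e_cutoff_mev=0.7):
        """Return True if the particle history can be safely terminated."""
        if energy_mev < e_cutoff_mev:
            return True
        return residual_range_cm(energy_mev) < distance_to_boundary_cm

    if __name__ == "__main__":
        print(range_reject(2.0, 5.0))   # True: a 1 cm range cannot cross 5 cm
        print(range_reject(2.0, 0.3))   # False: the electron may escape the region
    ```

    The savings come from skipping the many condensed-history steps a doomed electron would otherwise take; the trade-off is the small bias controlled by the rejection threshold.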

  3. Development of a CFD code for casting simulation

    NASA Technical Reports Server (NTRS)

    Murph, Jesse E.

    1993-01-01

    Because of high rejection rates for large structural castings (e.g., the Space Shuttle Main Engine Alternate Turbopump Design Program), a reliable casting simulation computer code is very desirable. This code would reduce both the development time and life cycle costs by allowing accurate modeling of the entire casting process. While this code could be used for other types of castings, the most significant reductions of time and cost would probably be realized in complex investment castings, where any reduction in the number of development castings would be of significant benefit. The casting process is conveniently divided into three distinct phases: (1) mold filling, where the melt is poured or forced into the mold cavity; (2) solidification, where the melt undergoes a phase change to the solid state; and (3) cool down, where the solidified part continues to cool to ambient conditions. While these phases may appear to be separate and distinct, temporal overlaps do exist between phases (e.g., local solidification occurring during mold filling), and some phenomenological events are affected by others (e.g., residual stresses depend on solidification and cooling rates). Therefore, a reliable code must accurately model all three phases and the interactions between each. While many codes have been developed (to various stages of complexity) to model the solidification and cool down phases, only a few codes have been developed to model mold filling.

  4. Advanced Pellet Cladding Interaction Modeling Using the US DOE CASL Fuel Performance Code: Peregrine

    SciTech Connect

    Jason Hales; Various

    2014-06-01

The US DOE’s Consortium for Advanced Simulation of LWRs (CASL) program has undertaken an effort to enhance and develop modeling and simulation tools for a virtual reactor application, including high fidelity neutronics, fluid flow/thermal hydraulics, and fuel and material behavior. The fuel performance analysis efforts aim to provide 3-dimensional capabilities for single and multiple rods to assess safety margins and the impact of plant operation and fuel rod design on the fuel thermo-mechanical-chemical behavior, including Pellet-Cladding Interaction (PCI) failures and CRUD-Induced Localized Corrosion (CILC) failures in PWRs. [1-3] The CASL fuel performance code, Peregrine, is an engineering scale code that is built upon the MOOSE/ELK/FOX computational FEM framework, which is also common to the fuel modeling framework, BISON [4,5]. Peregrine uses both 2-D and 3-D geometric fuel rod representations and contains a materials properties and fuel behavior model library for the UO2 and Zircaloy system common to PWR fuel derived from both open literature sources and the FALCON code [6]. The primary purpose of Peregrine is to accurately calculate the thermal, mechanical, and chemical processes active throughout a single fuel rod during operation in a reactor, for both steady state and off-normal conditions.

  5. Advanced Pellet-Cladding Interaction Modeling using the US DOE CASL Fuel Performance Code: Peregrine

    SciTech Connect

    Montgomery, Robert O.; Capps, Nathan A.; Sunderland, Dion J.; Liu, Wenfeng; Hales, Jason; Stanek, Chris; Wirth, Brian D.

    2014-06-15

    The US DOE’s Consortium for Advanced Simulation of LWRs (CASL) program has undertaken an effort to enhance and develop modeling and simulation tools for a virtual reactor application, including high fidelity neutronics, fluid flow/thermal hydraulics, and fuel and material behavior. The fuel performance analysis efforts aim to provide 3-dimensional capabilities for single and multiple rods to assess safety margins and the impact of plant operation and fuel rod design on the fuel thermo-mechanical-chemical behavior, including Pellet-Cladding Interaction (PCI) failures and CRUD-Induced Localized Corrosion (CILC) failures in PWRs. [1-3] The CASL fuel performance code, Peregrine, is an engineering scale code that is built upon the MOOSE/ELK/FOX computational FEM framework, which is also common to the fuel modeling framework, BISON [4,5]. Peregrine uses both 2-D and 3-D geometric fuel rod representations and contains a materials properties and fuel behavior model library for the UO2 and Zircaloy system common to PWR fuel derived from both open literature sources and the FALCON code [6]. The primary purpose of Peregrine is to accurately calculate the thermal, mechanical, and chemical processes active throughout a single fuel rod during operation in a reactor, for both steady state and off-normal conditions.

  6. Recent advances in the COMMIX and BODYFIT codes

    SciTech Connect

    Sha, W.T.; Chen, B.C.J.; Domanus, H.M.; Wood, P.M.

    1983-01-01

Two general-purpose computer programs for thermal-hydraulic analysis have been developed. One is the COMMIX (COMponent MIXing) code; the other is the BODYFIT (BOunDary FITted Coordinate Transformation) code. Solution procedures based on both elliptic and parabolic systems of partial differential equations are provided in these two codes. The COMMIX code is designed to provide global analysis of the thermal-hydraulic behavior of a component or multiple components in engineering problems. The BODYFIT code is capable of treating irregular boundaries and gives more detailed local information on a subcomponent or component. These two codes are complementary to each other and represent the state of the art of thermal-hydraulic analysis. Effort will continue to make further improvements and to include additional capabilities in these codes.

  7. Full electromagnetic Vlasov code simulation of the Kelvin-Helmholtz instability

    SciTech Connect

    Umeda, Takayuki; Miwa, Jun-ichiro; Matsumoto, Yosuke; Togano, Kentaro; Nakamura, Takuma K. M.; Shinohara, Iku; Fukazawa, Keiichiro

    2010-05-15

Recent advances in numerical techniques for Vlasov simulations and their application to cross-scale coupling in the plasma universe are discussed. Magnetohydrodynamic (MHD) simulations are now widely used for numerical modeling of global and macroscopic phenomena. In the framework of the MHD approximation, however, diffusion coefficients such as resistivity and the adiabatic index are given by empirical models. Thus there are recent attempts to understand first-principle kinetic processes in macroscopic phenomena, such as magnetic reconnection and the Kelvin-Helmholtz (KH) instability, via full kinetic particle-in-cell and Vlasov codes. In the present study, a benchmark test for a new four-dimensional full electromagnetic Vlasov code is performed. First, the computational speed of the Vlasov code is measured, and a linear performance scaling is obtained on a massively parallel supercomputer with more than 12 000 cores. Second, a first-principle Vlasov simulation of the KH instability is performed in order to evaluate the current status of numerical techniques for Vlasov simulations. The KH instability is usually adopted as a benchmark test problem for guiding-center Vlasov codes, in which the cyclotron motion of charged particles is neglected. There had been no full electromagnetic Vlasov simulation of the KH instability because it is difficult to follow the E×B drift motion accurately without approximations. The present first-principle Vlasov simulation has successfully represented the formation of KH vortices and their secondary instability. These results suggest that Vlasov code simulations would be a powerful approach for studies of cross-scale coupling on future peta-scale supercomputers.
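
    The drift that a full electromagnetic Vlasov code must resolve without approximation is itself elementary: v_d = (E × B) / |B|^2. A quick sketch of that formula (field values below are arbitrary examples):

    ```python
    def exb_drift(E, B):
        """E in V/m, B in T; returns the E-cross-B drift velocity in m/s."""
        ex, ey, ez = E
        bx, by, bz = B
        b2 = bx * bx + by * by + bz * bz
        return ((ey * bz - ez * by) / b2,
                (ez * bx - ex * bz) / b2,
                (ex * by - ey * bx) / b2)

    if __name__ == "__main__":
        # A 1 kV/m field across a 0.01 T magnetic field drifts at E/B = 10^5 m/s:
        v = exb_drift((1000.0, 0.0, 0.0), (0.0, 0.0, 0.01))
        print(round(v[1]))  # -100000
    ```

    Guiding-center codes impose this drift analytically; the point of the benchmark is that a full Vlasov code must recover it from the phase-space dynamics.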

  8. Advances in NLTE Modeling for Integrated Simulations

    SciTech Connect

    Scott, H A; Hansen, S B

    2009-07-08

    The last few years have seen significant progress in constructing the atomic models required for non-local thermodynamic equilibrium (NLTE) simulations. Along with this has come an increased understanding of the requirements for accurately modeling the ionization balance, energy content and radiative properties of different elements for a wide range of densities and temperatures. Much of this progress is the result of a series of workshops dedicated to comparing the results from different codes and computational approaches applied to a series of test problems. The results of these workshops emphasized the importance of atomic model completeness, especially in doubly excited states and autoionization transitions, for calculating ionization balance, and the importance of accurate, detailed atomic data for producing reliable spectra. We describe a simple screened-hydrogenic model that calculates NLTE ionization balance with surprising accuracy, at a low enough computational cost for routine use in radiation-hydrodynamics codes. The model incorporates term splitting, Δn = 0 transitions, and approximate UTA widths for spectral calculations, with results comparable to those of much more detailed codes. Simulations done with this model have been increasingly successful at matching experimental data for laser-driven systems and hohlraums. Accurate and efficient atomic models are just one requirement for integrated NLTE simulations. Coupling the atomic kinetics to hydrodynamics and radiation transport constrains both discretizations and algorithms to retain energy conservation, accuracy and stability. In particular, the strong coupling between radiation and populations can require either very short timesteps or significantly modified radiation transport algorithms to account for NLTE material response. Considerations such as these continue to provide challenges for NLTE simulations.
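
    The screened-hydrogenic form at the heart of such models treats each shell as hydrogen-like with an effective nuclear charge. A toy illustration of that level structure (the screening value sigma below is user-supplied; the actual model's screening coefficients and term splitting are not reproduced here):

```python
RYDBERG_EV = 13.605693  # hydrogen ground-state binding energy, eV

def screened_level_energy(z, n, sigma):
    """Energy (eV) of principal shell n for nuclear charge z, with the
    inner-electron screening lumped into a single parameter sigma, so the
    effective charge seen by the electron is Q = z - sigma."""
    q_eff = z - sigma
    return -RYDBERG_EV * (q_eff / n) ** 2
```

    For hydrogen (z = 1, no screening) this recovers -13.6 eV for n = 1; the accuracy of a real screened-hydrogenic model lives entirely in how sigma is chosen per shell and occupation.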

  9. Unsteady Cascade Aerodynamic Response Using a Multiphysics Simulation Code

    NASA Technical Reports Server (NTRS)

    Lawrence, C.; Reddy, T. S. R.; Spyropoulos, E.

    2000-01-01

    The multiphysics code Spectrum(TM) is applied to calculate the unsteady aerodynamic pressures of an oscillating cascade of airfoils representing a blade row of a turbomachinery component. Multiphysics simulation is based on a single computational framework for the modeling of multiple interacting physical phenomena, in the present case the interaction between fluids and structures. Interaction constraints are enforced in a fully coupled manner using the augmented-Lagrangian method. The arbitrary Lagrangian-Eulerian method is utilized to account for deformable fluid domains resulting from blade motions. Unsteady pressures are calculated for a cascade designated as the tenth standard, undergoing plunging and pitching oscillations. The predicted unsteady pressures are compared with those obtained from an unsteady Euler code referred to in the literature. The Spectrum(TM) code predictions showed good correlation for the cases considered.

  10. Simulations of Laboratory Astrophysics Experiments using the CRASH code

    NASA Astrophysics Data System (ADS)

    Trantham, Matthew; Kuranz, Carolyn; Manuel, Mario; Keiter, Paul; Drake, R. P.

    2014-10-01

    Computer simulations can assist in the design and analysis of laboratory astrophysics experiments. The Center for Radiative Shock Hydrodynamics (CRASH) at the University of Michigan developed a code that has been used to design and analyze high-energy-density experiments on OMEGA, NIF, and other large laser facilities. This Eulerian code uses block-adaptive mesh refinement (AMR) with implicit multigroup radiation transport, electron heat conduction and laser ray tracing. This poster/talk will demonstrate some of the experiments the CRASH code has helped design or analyze including: Kelvin-Helmholtz, Rayleigh-Taylor, imploding bubbles, and interacting jet experiments. This work is funded by the Predictive Sciences Academic Alliances Program in NNSA-ASC via Grant DEFC52-08NA28616, by the NNSA-DS and SC-OFES Joint Program in High-Energy-Density Laboratory Plasmas, Grant Number DE-NA0001840, and by the National Laser User Facility Program, Grant Number DE-NA0000850.

  11. Simulations of Laboratory Astrophysics Experiments using the CRASH code

    NASA Astrophysics Data System (ADS)

    Trantham, Matthew; Kuranz, Carolyn; Fein, Jeff; Wan, Willow; Young, Rachel; Keiter, Paul; Drake, R. Paul

    2015-11-01

    Computer simulations can assist in the design and analysis of laboratory astrophysics experiments. The Center for Radiative Shock Hydrodynamics (CRASH) at the University of Michigan developed a code that has been used to design and analyze high-energy-density experiments on OMEGA, NIF, and other large laser facilities. This Eulerian code uses block-adaptive mesh refinement (AMR) with implicit multigroup radiation transport, electron heat conduction and laser ray tracing. This poster will demonstrate some of the experiments the CRASH code has helped design or analyze including: Kelvin-Helmholtz, Rayleigh-Taylor, magnetized flows, jets, and laser-produced plasmas. This work is funded by the following grants: DEFC52-08NA28616, DE-NA0001840, and DE-NA0002032.

  12. The Protoexist2 Advanced CZT Coded Aperture Telescope

    NASA Astrophysics Data System (ADS)

    Allen, Branden; Hong, J.; Grindlay, J.; Barthelmy, S.; Baker, R.

    2011-09-01

    The ProtoEXIST program was conceived for the development of a scalable detector plane architecture utilizing pixelated CdZnTe (CZT) detectors for eventual deployment in a large-scale (1-4 m² active area) coded aperture X-ray telescope for use as a wide-field (≈90° × 70° FOV) all-sky monitor and survey instrument for the 5-600 keV energy band. The first phase of the program recently concluded with the successful 6 hour high-altitude (39 km) flight of ProtoEXIST1, which utilized a closely tiled 8 × 8 array of 20 mm × 20 mm, 5 mm thick Redlen CZT crystals, each bonded to a RadNET ASIC via an interposer board. Each individual CZT crystal utilized an 8 × 8 pixelated anode to create a position-sensitive detector with 2.5 mm spatial resolution. Development of ProtoEXIST2, the second advanced CZT detector plane in this series, is currently under way. ProtoEXIST2 will be composed of a closely tiled 8 × 8 array of 20 mm × 20 mm, 5 mm thick Redlen CZT crystals, similar to ProtoEXIST1, but will now utilize the Nu-ASIC, which accommodates the direct bonding of CZT detectors with a 32 × 32 pixelated anode at a 604.8 μm pixel pitch. The characterization and performance of the ProtoEXIST2 detectors are discussed, as well as current progress in the integration of the ProtoEXIST2 detector plane.

  13. Progress on coupling UEDGE and Monte-Carlo simulation codes

    SciTech Connect

    Rensink, M.E.; Rognlien, T.D.

    1996-08-28

    Our objective is to develop an accurate self-consistent model for plasma and neutrals in the edge of tokamak devices such as DIII-D and ITER. The two-dimensional fluid model in the UEDGE code has been used successfully for simulating a wide range of experimental plasma conditions. However, when the neutral mean free path exceeds the gradient scale length of the background plasma, the validity of the diffusive and inertial fluid models in UEDGE is questionable. In the long mean free path regime, neutrals can be accurately and efficiently described by a Monte Carlo neutrals model. Coupling of the fluid plasma model in UEDGE with a Monte Carlo neutrals model should improve the accuracy of our edge plasma simulations. The results described here used the EIRENE Monte Carlo neutrals code, but since information is passed to and from the UEDGE plasma code via formatted text files, any similar neutrals code such as DEGAS2 or NIMBUS could, in principle, be used.
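
    The validity criterion described above, the neutral mean free path compared against the plasma gradient scale length, can be sketched as a Knudsen-number check. The cross-section, densities, and the 0.1 threshold below are illustrative assumptions, not values taken from UEDGE:

```python
def neutral_mean_free_path(n_plasma, sigma):
    """Mean free path (m) of a neutral against a plasma of density
    n_plasma (m^-3) with an effective cross-section sigma (m^2)."""
    return 1.0 / (n_plasma * sigma)

def fluid_model_valid(n_plasma, sigma, grad_scale, kn_max=0.1):
    """Crude check: a diffusive neutral model is reasonable when the
    Knudsen number lambda/L stays below kn_max (threshold assumed here)."""
    kn = neutral_mean_free_path(n_plasma, sigma) / grad_scale
    return kn < kn_max
```

    When the check fails, i.e. in the long mean free path regime, the coupling to a Monte Carlo neutrals code described in the abstract takes over.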

  14. Parallelization of a Monte Carlo particle transport simulation code

    NASA Astrophysics Data System (ADS)

    Hadjidoukas, P.; Bousis, C.; Emfietzoglou, D.

    2010-05-01

    We have developed a high performance version of the Monte Carlo particle transport simulation code MC4. The original application code, developed in Visual Basic for Applications (VBA) for Microsoft Excel, was first rewritten in the C programming language to improve code portability. Several pseudo-random number generators have also been integrated and studied. The new MC4 version was then parallelized for shared and distributed-memory multiprocessor systems using the Message Passing Interface. Two parallel pseudo-random number generator libraries (SPRNG and DCMT) have been seamlessly integrated. The performance speedup of parallel MC4 has been studied on a variety of parallel computing architectures including an Intel Xeon server with 4 dual-core processors, a Sun cluster consisting of 16 nodes of 2 dual-core AMD Opteron processors and a 200 dual-processor HP cluster. For large problem size, which is limited only by the physical memory of the multiprocessor server, the speedup results are almost linear on all systems. We have validated the parallel implementation against the serial VBA and C implementations using the same random number generator. Our experimental results on the transport and energy loss of electrons in a water medium show that the serial and parallel codes are equivalent in accuracy. The present improvements allow the study of higher particle energies with the use of more accurate physical models, and improved statistics, since more particle tracks can be simulated in a short response time.
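
    The key point of the parallel random-number handling above is that each process gets its own independent stream, which is what libraries like SPRNG and DCMT guarantee rigorously. A toy Monte Carlo π estimate illustrates the idea; here the "ranks" run in sequence and seeds are simply offset, a simplification of what a real parallel generator library provides:

```python
import random

def worker_estimate(seed, n_samples):
    """One 'rank': its own generator, seeded independently, so no two
    workers consume overlapping portions of a shared random stream."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y < 1.0:
            hits += 1
    return hits

def parallel_pi(n_workers=4, n_per_worker=50_000, base_seed=12345):
    """Combine per-worker tallies; in a real MPI code the combination is a
    reduction across ranks, here the workers simply run in sequence."""
    total = sum(worker_estimate(base_seed + rank, n_per_worker)
                for rank in range(n_workers))
    return 4.0 * total / (n_workers * n_per_worker)
```

    Because every tally is independent of how the work was split, the parallel result matches a serial run with the same per-stream seeds, which is exactly the validation strategy the abstract describes.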

  15. Parallel Monte Carlo Electron and Photon Transport Simulation Code (PMCEPT code)

    NASA Astrophysics Data System (ADS)

    Kum, Oyeon

    2004-11-01

    Simulations for customized cancer radiation treatment planning for each patient are very useful for both patient and doctor. These simulations can be used to find the most effective treatment with the least possible dose to the patient. Such a system, the so-called "Doctor by Information Technology," would be useful for providing high-quality medical services everywhere. However, the large amount of computing time required by the well-known general-purpose Monte Carlo (MC) codes has prevented their use for routine dose distribution calculations in customized radiation treatment planning. The optimal solution for providing an "accurate" dose distribution within an "acceptable" time limit is to develop a parallel simulation algorithm on a Beowulf PC cluster, because this is the most accurate, efficient, and economical approach. I developed a parallel MC electron and photon transport simulation code based on the standard MPI message passing interface. This algorithm solves the main difficulty of parallel MC simulation (overlapping random number series in the different processors) by using multiple random number seeds. The parallel results agreed well with the serial ones, and the parallel efficiency approached 100%, as expected.

  16. Simulation of dynamic material response with the PAGOSA code

    SciTech Connect

    Holian, K.S.; Adams, T.F.

    1993-08-01

    The 3D Eulerian PAGOSA hydrocode is being run on the massively parallel Connection Machine (CM) to simulate the response of materials to dynamic loading, such as by high explosives or high velocity impact. The code has a variety of equation of state forms, plastic yield models, and fracture and fragmentation models. The numerical algorithms in PAGOSA and the implementation of material models are discussed briefly.
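
    The plastic yield models mentioned above typically test a stress state against a yield surface. A minimal von Mises check in terms of principal stresses (illustrative only; PAGOSA's actual yield and fracture models are not reproduced here):

```python
import math

def von_mises(s):
    """von Mises equivalent stress from principal stresses (s1, s2, s3)."""
    s1, s2, s3 = s
    return math.sqrt(0.5 * ((s1 - s2) ** 2
                            + (s2 - s3) ** 2
                            + (s3 - s1) ** 2))

def yields(s, sigma_y):
    """True if the stress state lies outside the von Mises yield surface,
    i.e. the material would flow plastically at yield strength sigma_y."""
    return von_mises(s) > sigma_y
```

    In a hydrocode this check sits inside the strength update: stresses that land outside the surface are returned to it, which is how plastic flow is enforced cell by cell.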

  17. Advanced radiometric and interferometric millimeter-wave scene simulations

    NASA Technical Reports Server (NTRS)

    Hauss, B. I.; Moffa, P. J.; Steele, W. G.; Agravante, H.; Davidheiser, R.; Samec, T.; Young, S. K.

    1993-01-01

    Smart munitions and weapons utilize various imaging sensors (including passive IR, active and passive millimeter-wave, and visible wavebands) to detect/identify targets at short standoff ranges and in varied terrain backgrounds. In order to design and evaluate these sensors under a variety of conditions, a high-fidelity scene simulation capability is necessary. Such a capability for passive millimeter-wave scene simulation exists at TRW. TRW's Advanced Radiometric Millimeter-Wave Scene Simulation (ARMSS) code is a rigorous, benchmarked, end-to-end passive millimeter-wave scene simulation code for interpreting millimeter-wave data, establishing scene signatures and evaluating sensor performance. In passive millimeter-wave imaging, resolution is limited due to wavelength and aperture size. Where high resolution is required, the utility of passive millimeter-wave imaging is confined to short ranges. Recent developments in interferometry have made possible high resolution applications on military platforms. Interferometry or synthetic aperture radiometry allows the creation of a high resolution image with a sparsely filled aperture. Borrowing from research work in radio astronomy, we have developed and tested at TRW scene reconstruction algorithms that allow the recovery of the scene from a relatively small number of spatial frequency components. In this paper, the TRW modeling capability is described and numerical results are presented.

  18. Advanced radiometric and interferometric millimeter-wave scene simulations

    NASA Astrophysics Data System (ADS)

    Hauss, B. I.; Moffa, P. J.; Steele, W. G.; Agravante, H.; Davidheiser, R.; Samec, T.; Young, S. K.

    1993-12-01

    Smart munitions and weapons utilize various imaging sensors (including passive IR, active and passive millimeter-wave, and visible wavebands) to detect/identify targets at short standoff ranges and in varied terrain backgrounds. In order to design and evaluate these sensors under a variety of conditions, a high-fidelity scene simulation capability is necessary. Such a capability for passive millimeter-wave scene simulation exists at TRW. TRW's Advanced Radiometric Millimeter-Wave Scene Simulation (ARMSS) code is a rigorous, benchmarked, end-to-end passive millimeter-wave scene simulation code for interpreting millimeter-wave data, establishing scene signatures and evaluating sensor performance. In passive millimeter-wave imaging, resolution is limited due to wavelength and aperture size. Where high resolution is required, the utility of passive millimeter-wave imaging is confined to short ranges. Recent developments in interferometry have made possible high resolution applications on military platforms. Interferometry or synthetic aperture radiometry allows the creation of a high resolution image with a sparsely filled aperture. Borrowing from research work in radio astronomy, we have developed and tested at TRW scene reconstruction algorithms that allow the recovery of the scene from a relatively small number of spatial frequency components. In this paper, the TRW modeling capability is described and numerical results are presented.

  19. Monte Carlo code for high spatial resolution ocean color simulations.

    PubMed

    D'Alimonte, Davide; Zibordi, Giuseppe; Kajiyama, Tamito; Cunha, José C

    2010-09-10

    A Monte Carlo code for ocean color simulations has been developed to model in-water radiometric fields of downward and upward irradiance (E_d and E_u) and upwelling radiance (L_u) in a two-dimensional domain with high spatial resolution. The efficiency of the code has been optimized by applying state-of-the-art computing solutions, while the accuracy of simulation results has been quantified through benchmarks against the widely used Hydrolight code for various values of seawater inherent optical properties and different illumination conditions. Considering a seawater single-scattering albedo of 0.9, as well as surface waves of 5 m width and 0.5 m height, the study has shown that the number of photons required to quantify uncertainties induced by wave-focusing effects on E_d, E_u, and L_u data products is of the order of 10^6, 10^9, and 10^10, respectively. On this basis, the effects of sea-surface geometries on radiometric quantities have been investigated for different surface gravity waves. Data products from simulated radiometric profiles have finally been analyzed as a function of the deployment speed and sampling frequency of current free-fall systems in view of providing recommendations to improve measurement protocols.
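
    The photon counts quoted above follow from the 1/√N decay of Monte Carlo noise: if a single photon contributes roughly unit relative variance to a tally, reaching a target relative error takes N ≈ (1/err)² photons, and quantities with larger per-photon variance (such as L_u, which is tallied over a narrow solid angle) need correspondingly more. A back-of-the-envelope helper, with the per-photon variance as an assumed input:

```python
import math

def photons_for_relative_error(rel_err, rel_std_single=1.0):
    """Photons needed so that the standard error ~ rel_std_single/sqrt(N)
    falls below rel_err; rel_std_single is the assumed relative standard
    deviation contributed by a single photon history."""
    return math.ceil((rel_std_single / rel_err) ** 2)
```

    A 0.1% target with unit per-photon variance already requires 10^6 histories, consistent with the order of magnitude quoted for E_d.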

  20. CHOLLA: A New Massively Parallel Hydrodynamics Code for Astrophysical Simulation

    NASA Astrophysics Data System (ADS)

    Schneider, Evan E.; Robertson, Brant E.

    2015-04-01

    We present Computational Hydrodynamics On ParaLLel Architectures (Cholla), a new three-dimensional hydrodynamics code that harnesses the power of graphics processing units (GPUs) to accelerate astrophysical simulations. Cholla models the Euler equations on a static mesh using state-of-the-art techniques, including the unsplit Corner Transport Upwind algorithm, a variety of exact and approximate Riemann solvers, and multiple spatial reconstruction techniques including the piecewise parabolic method (PPM). Using GPUs, Cholla evolves the fluid properties of thousands of cells simultaneously and can update over 10 million cells per GPU-second while using an exact Riemann solver and PPM reconstruction. Owing to the massively parallel architecture of GPUs and the design of the Cholla code, astrophysical simulations with physically interesting grid resolutions (≳256^3) can easily be computed on a single device. We use the Message Passing Interface library to extend calculations onto multiple devices and demonstrate nearly ideal scaling beyond 64 GPUs. A suite of test problems highlights the physical accuracy of our modeling and provides a useful comparison to other codes. We then use Cholla to simulate the interaction of a shock wave with a gas cloud in the interstellar medium, showing that the evolution of the cloud is highly dependent on its density structure. We reconcile the computed mixing time of a turbulent cloud with a realistic density distribution destroyed by a strong shock with the existing analytic theory for spherical cloud destruction by describing the system in terms of its median gas density.
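
    The exact and approximate Riemann solvers mentioned above compute the interface flux between neighboring cells. A compact example is the HLL solver, one of the standard approximate choices (a generic sketch with the simplest possible wave-speed estimates; no claim is made that it matches Cholla's implementation):

```python
import math

GAMMA = 1.4  # ideal-gas ratio of specific heats (assumed)

def euler_flux(rho, mom, ener):
    """Physical flux of the 1D Euler equations for state (rho, rho*u, E)."""
    u = mom / rho
    p = (GAMMA - 1.0) * (ener - 0.5 * rho * u * u)
    return (mom, mom * u + p, u * (ener + p))

def hll_flux(left, right):
    """HLL approximate Riemann flux between conserved states left/right."""
    def speed_bounds(state):
        rho, mom, ener = state
        u = mom / rho
        p = (GAMMA - 1.0) * (ener - 0.5 * rho * u * u)
        c = math.sqrt(GAMMA * p / rho)  # sound speed
        return u - c, u + c
    sl_l, sr_l = speed_bounds(left)
    sl_r, sr_r = speed_bounds(right)
    s_l, s_r = min(sl_l, sl_r), max(sr_l, sr_r)
    f_l, f_r = euler_flux(*left), euler_flux(*right)
    if s_l >= 0.0:          # all waves move right: upwind on the left state
        return f_l
    if s_r <= 0.0:          # all waves move left: upwind on the right state
        return f_r
    return tuple((s_r * fl - s_l * fr + s_l * s_r * (qr - ql)) / (s_r - s_l)
                 for fl, fr, ql, qr in zip(f_l, f_r, left, right))
```

    For identical left and right states the HLL flux reduces to the physical flux, a quick sanity check for any Riemann solver.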

  1. CHOLLA: A NEW MASSIVELY PARALLEL HYDRODYNAMICS CODE FOR ASTROPHYSICAL SIMULATION

    SciTech Connect

    Schneider, Evan E.; Robertson, Brant E.

    2015-04-15

    We present Computational Hydrodynamics On ParaLLel Architectures (Cholla), a new three-dimensional hydrodynamics code that harnesses the power of graphics processing units (GPUs) to accelerate astrophysical simulations. Cholla models the Euler equations on a static mesh using state-of-the-art techniques, including the unsplit Corner Transport Upwind algorithm, a variety of exact and approximate Riemann solvers, and multiple spatial reconstruction techniques including the piecewise parabolic method (PPM). Using GPUs, Cholla evolves the fluid properties of thousands of cells simultaneously and can update over 10 million cells per GPU-second while using an exact Riemann solver and PPM reconstruction. Owing to the massively parallel architecture of GPUs and the design of the Cholla code, astrophysical simulations with physically interesting grid resolutions (≳256^3) can easily be computed on a single device. We use the Message Passing Interface library to extend calculations onto multiple devices and demonstrate nearly ideal scaling beyond 64 GPUs. A suite of test problems highlights the physical accuracy of our modeling and provides a useful comparison to other codes. We then use Cholla to simulate the interaction of a shock wave with a gas cloud in the interstellar medium, showing that the evolution of the cloud is highly dependent on its density structure. We reconcile the computed mixing time of a turbulent cloud with a realistic density distribution destroyed by a strong shock with the existing analytic theory for spherical cloud destruction by describing the system in terms of its median gas density.

  2. KULL: LLNL's ASCI Inertial Confinement Fusion Simulation Code

    SciTech Connect

    Rathkopf, J. A.; Miller, D. S.; Owen, J. M.; Zike, M. R.; Eltgroth, P. G.; Madsen, N. K.; McCandless, K. P.; Nowak, P. F.; Nemanic, M. K.; Gentile, N. A.; Stuart, L. M.; Keen, N. D.; Palmer, T. S.

    2000-01-10

    KULL is a three-dimensional, time-dependent radiation hydrodynamics simulation code under development at Lawrence Livermore National Laboratory. A part of the U.S. Department of Energy's Accelerated Strategic Computing Initiative (ASCI), KULL's purpose is to simulate the physical processes in Inertial Confinement Fusion (ICF) targets. The National Ignition Facility, where ICF experiments will be conducted, and ASCI are part of the experimental and computational components of DOE's Stockpile Stewardship Program. This paper provides an overview of ASCI and describes KULL, its hydrodynamic simulation capability and its three methods of simulating radiative transfer. Particular emphasis is given to the parallelization techniques essential to obtain the performance required of the Stockpile Stewardship Program and to exploit the massively parallel processor machines that ASCI is procuring.

  3. Generating performance portable geoscientific simulation code with Firedrake (Invited)

    NASA Astrophysics Data System (ADS)

    Ham, D. A.; Bercea, G.; Cotter, C. J.; Kelly, P. H.; Loriant, N.; Luporini, F.; McRae, A. T.; Mitchell, L.; Rathgeber, F.

    2013-12-01

    This presentation will demonstrate how a change in simulation programming paradigm can deliver sophisticated simulation capability that is far easier to program than conventional models, can exploit different emerging parallel hardware, and is tailored to the specific needs of geoscientific simulation. Geoscientific simulation represents a grand challenge computational task: many of the largest computers in the world are devoted to this field, and scientists' requirements for resolution and complexity are far from sated. However, single thread performance has stalled, even sometimes decreased, over the last decade, and has been replaced by ever more parallel systems: both as conventional multicore CPUs and in the emerging world of accelerators. At the same time, the need of scientists to couple ever-more complex dynamics and parametrisations into their models makes the model development task vastly more complex. The conventional approach of writing code in low level languages such as Fortran or C/C++ and then hand-coding parallelism for different platforms by adding library calls and directives forces the intermingling of the numerical code with its implementation. This results in an almost impossible set of skill requirements for developers, who must simultaneously be domain science experts, numericists, software engineers and parallelisation specialists. Even more critically, it requires code to be essentially rewritten for each emerging hardware platform. Since new platforms are emerging constantly, and since code owners do not usually control the procurement of the supercomputers on which they must run, this represents an unsustainable development load. The Firedrake system, conversely, offers the developer the opportunity to write PDE discretisations in the high-level mathematical language UFL from the FEniCS project (http://fenicsproject.org). Non-PDE model components, such as parametrisations

  4. New Particle-in-Cell Code for Numerical Simulation of Coherent Synchrotron Radiation

    SciTech Connect

    Terzic, Balsa; Li, Rui

    2010-05-01

    We present a first look at the new code for self-consistent, 2D simulations of beam dynamics affected by the coherent synchrotron radiation. The code is of the particle-in-cell variety: the beam bunch is sampled by point-charge particles, which are deposited on the grid; the corresponding forces on the grid are then computed using retarded potentials according to causality, and interpolated so as to advance the particles in time. The retarded potentials are evaluated by integrating over the 2D path history of the bunch, with the charge and current density at the retarded time obtained from interpolation of the particle distributions recorded at discrete timesteps. The code is benchmarked against analytical results obtained for a rigid-line bunch. We also outline the features and applications which are currently being developed.
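
    The deposit step described above, point charges shared onto the grid, is commonly done with linear "cloud-in-cell" weights. A 1D periodic sketch (the actual code is 2D and also records the particle history needed for the retarded potentials, which this omits; positions are assumed to lie in [0, n_cells*dx)):

```python
def deposit_cic(positions, charges, n_cells, dx):
    """Cloud-in-cell deposition: each particle's charge is shared between
    the two nearest grid points with linear weights (periodic grid)."""
    grid = [0.0] * n_cells
    for x, q in zip(positions, charges):
        s = x / dx                     # position in cell units
        i = int(s) % n_cells           # left grid point
        frac = s - int(s)              # fractional distance to it
        grid[i] += q * (1.0 - frac)
        grid[(i + 1) % n_cells] += q * frac
    return grid
```

    Linear weighting conserves total charge exactly, and the same weights are reused when interpolating grid forces back to the particles, which keeps the scheme self-consistent.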

  5. Recent advances in CZT strip detectors and coded mask imagers

    NASA Astrophysics Data System (ADS)

    Matteson, J. L.; Gruber, D. E.; Heindl, W. A.; Pelling, M. R.; Peterson, L. E.; Rothschild, R. E.; Skelton, R. T.; Hink, P. L.; Slavis, K. R.; Binns, W. R.; Tumer, T.; Visser, G.

    1999-09-01

    The UCSD, WU, UCR and Nova collaboration has made significant progress on the necessary techniques for coded mask imaging of gamma-ray bursts: position sensitive CZT detectors with good energy resolution, ASIC readout, coded mask imaging, and background properties at balloon altitudes. Results on coded mask imaging techniques appropriate for wide field imaging and localization of gamma-ray bursts are presented, including a shadowgram and deconvolved image taken with a prototype detector/ASIC and MURA mask. This research was supported by NASA Grants NAG5-5111, NAG5-5114, and NGT5-50170.
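
    The shadowgram-and-deconvolution cycle mentioned above can be reproduced in miniature with a 1D uniformly redundant array built from quadratic residues (length 11 here for brevity; the flight instrument uses a 2D MURA mask, which this sketch does not reproduce):

```python
N = 11                                            # prime with N % 4 == 3
QR = {(k * k) % N for k in range(1, N)}           # quadratic residues mod 11
MASK = [1 if i in QR else 0 for i in range(N)]    # open (1) / closed (0)
DECODER = [2 * m - 1 for m in MASK]               # balanced decoding array

def shadowgram(src_pos):
    """Shadow cast on the detector by a point source: the mask pattern
    cyclically shifted by the source position."""
    return [MASK[(j - src_pos) % N] for j in range(N)]

def decode(shadow):
    """Cross-correlate the shadowgram with the decoding array; the source
    reappears as a peak at its sky position, with flat sidelobes."""
    return [sum(shadow[j] * DECODER[(j - k) % N] for j in range(N))
            for k in range(N)]
```

    For this mask the decoded image of a point source is exactly 5 at the source position and -1 everywhere else, the two-valued autocorrelation that makes URA-family masks attractive.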

  6. Advanced Civil Transport Simulator Cockpit View

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The Advanced Civil Transport Simulator (ACTS) is a futuristic aircraft cockpit simulator designed to provide full-mission capabilities for researching issues that will affect future transport aircraft flight stations and crews. The objective is to heighten the pilot's situation awareness through improved information availability and ease of interpretation in order to reduce the possibility of misinterpreted data. The simulator's five 13-inch Cathode Ray Tubes are designed to display flight information in a logical, easy-to-see format. Two color flat panel Control Display Units with touch-sensitive screens provide monitoring and modification of aircraft parameters, flight plans, flight computers, and aircraft position. Three collimated visual display units have been installed to provide out-the-window scenes via the Computer Generated Image system. The major research objectives are to examine needs for transfer of information to and from the flight crew; study the use of advanced controls and displays for all-weather flying; explore ideas for using computers to help the crew in decision making; and study visual scanning and reach behavior under different conditions with various levels of automation and flight-deck arrangements.

  7. Code System for Reactor Physics and Fuel Cycle Simulation.

    SciTech Connect

    TEUCHERT, E.

    1999-04-21

    Version 00 VSOP94 (Very Superior Old Programs) is a system of codes linked together for the simulation of reactor life histories. It comprises neutron cross section libraries and processing routines, repeated neutron spectrum evaluation, 2-D diffusion calculation based on neutron flux synthesis with depletion and shut-down features, in-core and out-of-pile fuel management, fuel cycle cost analysis, and thermal hydraulics (at present restricted to Pebble Bed HTRs). Various techniques have been employed to accelerate the iterative processes and to optimize the internal data transfer. The code system has been used extensively for comparison studies of reactors, their fuel cycles, and related detailed features. In addition to its use in research and development work for the High Temperature Reactor, the system has been applied successfully to Light Water and Heavy Water Reactors.

  8. Upgrades to the NESS (Nuclear Engine System Simulation) Code

    NASA Technical Reports Server (NTRS)

    Fittje, James E.

    2007-01-01

    In support of the President's Vision for Space Exploration, the Nuclear Thermal Rocket (NTR) concept is being evaluated as a potential propulsion technology for human expeditions to the moon and Mars. The need for exceptional propulsion system performance in these missions has been documented in numerous studies, and was the primary focus of a considerable effort undertaken during the 1960s and 1970s. The NASA Glenn Research Center is leveraging this past NTR investment in their vehicle concepts and mission analysis studies with the aid of the Nuclear Engine System Simulation (NESS) code. This paper presents the additional capabilities and upgrades made to this code in order to perform higher fidelity NTR propulsion system analysis and design.

  9. Introduction to study and simulation of low rate video coding schemes

    NASA Technical Reports Server (NTRS)

    1992-01-01

    During this period, simulators for the various HDTV systems proposed to the FCC were developed. These simulators will be tested using test sequences from the MPEG committee, and the results will be extrapolated to HDTV video sequences. Currently, the simulator for the compression aspects of the Advanced Digital Television (ADTV) system is complete; other HDTV proposals are at various stages of development. A brief overview of the ADTV system is given, and some coding results obtained using the simulator are discussed. These results are compared to those obtained using the CCITT H.261 standard. The results are evaluated in the context of the CCSDS specifications, and some suggestions are made as to how the ADTV system could be implemented in the NASA network.

  10. Onyx-Advanced Aeropropulsion Simulation Framework Created

    NASA Technical Reports Server (NTRS)

    Reed, John A.

    2001-01-01

    The Numerical Propulsion System Simulation (NPSS) project at the NASA Glenn Research Center is developing a new software environment for analyzing and designing aircraft engines and, eventually, space transportation systems. Its purpose is to dramatically reduce the time, effort, and expense necessary to design and test jet engines by creating sophisticated computer simulations of an aerospace object or system (refs. 1 and 2). Through a university grant as part of that effort, researchers at the University of Toledo have developed Onyx, an extensible Java-based (Sun Microsystems, Inc.), object-oriented simulation framework, to investigate how advanced software design techniques can be successfully applied to aeropropulsion system simulation (refs. 3 and 4). The design of Onyx's architecture enables users to customize and extend the framework to add new functionality or adapt simulation behavior as required. It exploits object-oriented technologies, such as design patterns, domain frameworks, and software components, to develop a modular system in which users can dynamically replace components with others having different functionality.

  11. Axisymmetric Plume Simulations with NASA's DSMC Analysis Code

    NASA Technical Reports Server (NTRS)

    Stewart, B. D.; Lumpkin, F. E., III

    2012-01-01

    A comparison of axisymmetric Direct Simulation Monte Carlo (DSMC) Analysis Code (DAC) results to analytic and Computational Fluid Dynamics (CFD) solutions in the near continuum regime, and to 3D DAC solutions in the rarefied regime, for expansion plumes into a vacuum is performed to investigate the validity of the newest DAC axisymmetric implementation. This new implementation, based on the standard DSMC axisymmetric approach in which the representative molecules are allowed to move in all three dimensions but are rotated back to the plane of symmetry by the end of the move step, has been fully integrated into the 3D-based DAC code and therefore retains all of DAC's features, such as being able to compute flow over complex geometries and to model chemistry. Axisymmetric DAC results for a spherically symmetric isentropic expansion are in very good agreement with a source flow analytic solution in the continuum regime and show departure from equilibrium downstream of the estimated breakdown location. Axisymmetric density contours also compare favorably against CFD results for the R1E thruster, while temperature contours depart from equilibrium very rapidly away from the estimated breakdown surface. Finally, axisymmetric and 3D DAC results are in very good agreement over the entire plume region and, as expected, this new axisymmetric implementation shows a significant reduction in computer resources required to achieve accurate simulations for this problem over the 3D simulations.
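
    The rotate-back move step described above can be sketched directly: advance a particle in 3D Cartesian coordinates, then rotate it about the axis back into the symmetry plane, applying the same rotation to its velocity so the radial/azimuthal split stays consistent (a bare-bones illustration, not DAC's implementation):

```python
import math

def move_and_rotate(pos, vel, dt):
    """Advance (x, y, z) by vel*dt, then rotate about the x-axis so the
    particle lands back on the z = 0 half-plane with y = radius >= 0."""
    x = pos[0] + vel[0] * dt
    y = pos[1] + vel[1] * dt
    z = pos[2] + vel[2] * dt
    r = math.hypot(y, z)
    if r == 0.0:                         # on the axis: nothing to rotate
        return (x, 0.0, 0.0), vel
    cos_t, sin_t = y / r, z / r          # rotation mapping (y, z) -> (r, 0)
    vy = cos_t * vel[1] + sin_t * vel[2]
    vz = -sin_t * vel[1] + cos_t * vel[2]
    return (x, r, 0.0), (vel[0], vy, vz)
```

    The rotation is rigid, so the particle's radius and speed are preserved exactly; only the decomposition of the velocity into radial and azimuthal parts changes.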

  12. Large-Eddy Simulation Code Developed for Propulsion Applications

    NASA Technical Reports Server (NTRS)

    DeBonis, James R.

    2003-01-01

    A large-eddy simulation (LES) code was developed at the NASA Glenn Research Center to provide more accurate and detailed computational analyses of propulsion flow fields. The accuracy of current computational fluid dynamics (CFD) methods is limited primarily by their inability to properly account for the turbulent motion present in virtually all propulsion flows. Because the efficiency and performance of a propulsion system are highly dependent on the details of this turbulent motion, it is critical for CFD to accurately model it. The LES code promises to give new CFD simulations an advantage over older methods by directly computing the large turbulent eddies, to correctly predict their effect on a propulsion system. Turbulent motion is a random, unsteady process whose behavior is difficult to predict through computer simulations. Current methods are based on Reynolds-Averaged Navier-Stokes (RANS) analyses that rely on models to represent the effect of turbulence within a flow field. The quality of the results depends on the quality of the model and its applicability to the type of flow field being studied. LES promises to be more accurate because it drastically reduces the amount of modeling necessary. It is the logical step toward improving turbulent flow predictions. In LES, the large-scale dominant turbulent motion is computed directly, leaving only the less significant small turbulent scales to be modeled. As part of the prediction, the LES method generates detailed information on the turbulence itself, providing important information for other applications, such as aeroacoustics. The LES code developed at Glenn for propulsion flow fields is being used to both analyze propulsion system components and test improved LES algorithms (subgrid-scale models, filters, and numerical schemes). The code solves the compressible Favre-filtered Navier-Stokes equations using an explicit fourth-order accurate numerical scheme; it incorporates a compressible form of
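
    As an illustration of the subgrid-scale modeling that LES leaves to do, here is a minimal 1D sketch of the classic Smagorinsky eddy-viscosity model. The abstract does not state which subgrid model the Glenn code uses, so this is a generic example, not the code's actual scheme:

```python
import numpy as np

def smagorinsky_nu_t(u, dx, cs=0.17):
    """Illustrative 1D Smagorinsky subgrid-scale eddy viscosity:
    nu_t = (Cs * Delta)**2 * |dU/dx|, where |dU/dx| stands in for the
    strain-rate magnitude |S| of the full 3D model and the grid
    spacing dx plays the role of the filter width Delta."""
    dudx = np.gradient(u, dx)
    return (cs * dx) ** 2 * np.abs(dudx)
```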

  13. Preface: Research advances in vadose zone hydrology through simulations with the TOUGH codes

    SciTech Connect

    Finsterle, Stefan; Oldenburg, Curtis M.

    2004-07-12

    Numerical simulators are playing an increasingly important role in advancing our fundamental understanding of hydrological systems. They are indispensable tools for managing groundwater resources, analyzing proposed and actual remediation activities at contaminated sites, optimizing recovery of oil, gas, and geothermal energy, evaluating subsurface structures and mining activities, designing monitoring systems, assessing the long-term impacts of chemical and nuclear waste disposal, and devising improved irrigation and drainage practices in agricultural areas, among many other applications. The complexity of subsurface hydrology in the vadose zone calls for sophisticated modeling codes capable of handling the strong nonlinearities involved, the interactions of coupled physical, chemical and biological processes, and the multiscale heterogeneities inherent in such systems. The papers in this special section of "Vadose Zone Journal" are illustrative of the enormous potential of such numerical simulators as applied to the vadose zone. The papers describe recent developments and applications of one particular set of codes, the TOUGH family of codes, as applied to nonisothermal flow and transport in heterogeneous porous and fractured media (http://www-esd.lbl.gov/TOUGH2). The contributions were selected from presentations given at the TOUGH Symposium 2003, which brought together developers and users of the TOUGH codes at the Lawrence Berkeley National Laboratory (LBNL) in Berkeley, California, for three days of information exchange in May 2003 (http://www-esd.lbl.gov/TOUGHsymposium). The papers presented at the symposium covered a wide range of topics, including geothermal reservoir engineering, fracture flow and vadose zone hydrology, nuclear waste disposal, mining engineering, reactive chemical transport, environmental remediation, and gas transport. This special section of "Vadose Zone Journal" contains revised and expanded versions of selected papers from the

  14. Simulation methods for advanced scientific computing

    SciTech Connect

    Booth, T.E.; Carlson, J.A.; Forster, R.A.

    1998-11-01

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The objective of the project was to create effective new algorithms for solving N-body problems by computer simulation. The authors concentrated on developing advanced classical and quantum Monte Carlo techniques. For simulations of phase transitions in classical systems, they produced a framework generalizing the famous Swendsen-Wang cluster algorithms for Ising and Potts models. For spin-glass-like problems, they demonstrated the effectiveness of an extension of the multicanonical method for the two-dimensional, random bond Ising model. For quantum mechanical systems, they generated a new method to compute the ground-state energy of systems of interacting electrons. They also improved methods to compute excited states when the diffusion quantum Monte Carlo method is used and to compute longer time dynamics when the stationary phase quantum Monte Carlo method is used.
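
    The Swendsen-Wang cluster algorithm generalized by this work proceeds by activating bonds between aligned spins and flipping whole clusters at once. A minimal sketch for the standard 2D Ising model (not the authors' generalized framework):

```python
import math
import random

def swendsen_wang_step(spins, L, beta, J=1.0, rng=random):
    """One Swendsen-Wang update for a 2D Ising model on an L x L
    periodic lattice: activate bonds between aligned neighbours with
    probability p = 1 - exp(-2*beta*J), find the clusters with
    union-find, then flip each cluster with probability 1/2."""
    p = 1.0 - math.exp(-2.0 * beta * J)
    parent = list(range(L * L))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    # Activate bonds between equal spins with probability p.
    for x in range(L):
        for y in range(L):
            i = x * L + y
            for j in (((x + 1) % L) * L + y, x * L + (y + 1) % L):
                if spins[i] == spins[j] and rng.random() < p:
                    ri, rj = find(i), find(j)
                    if ri != rj:
                        parent[ri] = rj

    # Flip each cluster as a whole with probability 1/2.
    flip = {}
    for i in range(L * L):
        root = find(i)
        if root not in flip:
            flip[root] = rng.random() < 0.5
        if flip[root]:
            spins[i] = -spins[i]
    return spins
```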

  15. Heart simulation with surface equations for using on MCNP code

    SciTech Connect

    Rezaei-Ochbelagh, D.; Salman-Nezhad, S.; Asadi, A.; Rahimi, A.

    2011-12-26

    External photon beam radiotherapy is carried out in a way that achieves an 'as low as possible' dose in the healthy tissues surrounding the target. One of these surrounding tissues is the heart, a vital organ of the body. As it is impossible to directly measure the dose absorbed by the heart, using phantoms is one way to acquire information about it. The other way is the Monte Carlo method. In this work we present a simulation of the heart geometry by introducing different surfaces in the MCNP code. We used 14 surface equations to model the human heart. These surfaces are the borders of the heart walls and contents.

  16. Heart simulation with surface equations for using on MCNP code

    NASA Astrophysics Data System (ADS)

    Rezaei-Ochbelagh, D.; Salman-Nezhad, S.; Asadi, A.; Rahimi, A.

    2011-12-01

    External photon beam radiotherapy is carried out in a way that achieves an "as low as possible" dose in the healthy tissues surrounding the target. One of these surrounding tissues is the heart, a vital organ of the body. As it is impossible to directly measure the dose absorbed by the heart, using phantoms is one way to acquire information about it. The other way is the Monte Carlo method. In this work we present a simulation of the heart geometry by introducing different surfaces in the MCNP code. We used 14 surface equations to model the human heart. These surfaces are the borders of the heart walls and contents.

  17. Simulation of Code Spectrum and Code Flow of Cultured Neuronal Networks.

    PubMed

    Tamura, Shinichi; Nishitani, Yoshi; Hosokawa, Chie; Miyoshi, Tomomitsu; Sawai, Hajime

    2016-01-01

    It has been shown that, in cultured neuronal networks on a multielectrode, pseudorandom-like sequences (codes) are detected, and they flow with some spatial decay constant. Each cultured neuronal network is characterized by a specific spectrum curve. That is, we may consider the spectrum curve as a "signature" of its associated neuronal network that is dependent on the characteristics of neurons and network configuration, including the weight distribution. In the present study, we used an integrate-and-fire model of neurons with intrinsic and instantaneous fluctuations of characteristics for performing a simulation of a code spectrum from multielectrodes on a 2D mesh neural network. We showed that it is possible to estimate the characteristics of neurons such as the distribution of number of neurons around each electrode and their refractory periods. Although this process is a reverse problem and theoretically the solutions are not sufficiently guaranteed, the parameters seem to be consistent with those of neurons. That is, the proposed neural network model may adequately reflect the behavior of a cultured neuronal network. Furthermore, we discuss the prospect that code analysis will provide a basis for communication within a neural network, which may in turn form a basis of natural intelligence.
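
    A minimal leaky integrate-and-fire neuron of the kind the study builds on can be sketched as follows. Parameter values are illustrative, and the intrinsic and instantaneous fluctuation terms of the paper's model are omitted:

```python
def lif_spike_times(i_ext, dt=0.001, tau=0.02, v_th=1.0,
                    v_reset=0.0, t_ref=0.002):
    """Leaky integrate-and-fire neuron, tau * dV/dt = -V + I(t),
    integrated with forward Euler.  A spike is emitted when V crosses
    v_th; the membrane then resets and stays refractory for t_ref
    seconds.  Returns the list of spike times."""
    v, t, refr_until, spikes = 0.0, 0.0, -1.0, []
    for i in i_ext:                     # one input sample per time step
        if t >= refr_until:
            v += dt / tau * (-v + i)    # leaky integration
            if v >= v_th:
                spikes.append(t)
                v = v_reset
                refr_until = t + t_ref  # enter refractory period
        t += dt
    return spikes
```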

  18. Computer code for the atomistic simulation of lattice defects and dynamics. [COMENT code

    SciTech Connect

    Schiffgens, J.O.; Graves, N.J.; Oster, C.A.

    1980-04-01

    This document has been prepared to satisfy the need for a detailed, up-to-date description of a computer code that can be used to simulate phenomena on an atomistic level. COMENT was written in FORTRAN IV and COMPASS (CDC assembly language) to solve the classical equations of motion for a large number of atoms interacting according to a given force law, and to perform the desired ancillary analysis of the resulting data. COMENT is a dual-purpose code, intended to describe static defect configurations as well as the detailed motion of atoms in a crystal lattice. It can be used to simulate the effect of temperature, impurities, and pre-existing defects on radiation-induced defect production mechanisms, defect migration, and defect stability.
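
    Solving the classical equations of motion, as COMENT does, is typically done with a symplectic integrator. Here is a minimal velocity-Verlet sketch for a single particle; the abstract does not state COMENT's actual integration scheme, so this is a generic illustration:

```python
def velocity_verlet(x, v, force, m, dt, nsteps):
    """Integrate m * d2x/dt2 = force(x) with the velocity-Verlet
    scheme: half-kick, drift, recompute force, half-kick.  This
    family of integrators conserves energy well over long runs."""
    f = force(x)
    for _ in range(nsteps):
        v += 0.5 * dt * f / m   # half-step velocity update
        x += dt * v             # full-step position update
        f = force(x)            # force at the new position
        v += 0.5 * dt * f / m   # second half-step velocity update
    return x, v
```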

  19. The H.264/MPEG4 advanced video coding

    NASA Astrophysics Data System (ADS)

    Gromek, Artur

    2009-06-01

    H.264/MPEG4-AVC is the newest video coding standard recommended by the International Telecommunication Union - Telecommunication Standardization Sector (ITU-T) and the ISO/IEC Moving Picture Experts Group (MPEG). H.264/MPEG4-AVC has become the leading standard for generic audiovisual services since its deployment for digital television. Nowadays it is commonly used in a wide range of video applications, such as mobile services, videoconferencing, IPTV, HDTV, and video storage. In this article, the author briefly describes the technology applied in the H.264/MPEG4-AVC video coding standard, approaches to real-time implementation, and directions for future development.

  20. NASA. Lewis Research Center Advanced Modulation and Coding Project: Introduction and overview

    NASA Technical Reports Server (NTRS)

    Budinger, James M.

    1992-01-01

    The Advanced Modulation and Coding Project at LeRC is sponsored by the Office of Space Science and Applications, Communications Division, Code EC, at NASA Headquarters and conducted by the Digital Systems Technology Branch of the Space Electronics Division. Advanced Modulation and Coding is one of three focused technology development projects within the branch's overall Processing and Switching Program. The program consists of industry contracts for developing proof-of-concept (POC) and demonstration model hardware, university grants for analyzing advanced techniques, and in-house integration and testing of performance verification and systems evaluation. The Advanced Modulation and Coding Project is broken into five elements: (1) bandwidth- and power-efficient modems; (2) high-speed codecs; (3) digital modems; (4) multichannel demodulators; and (5) very high-data-rate modems. At least one contract and one grant were awarded for each element.

  1. 14 CFR Appendix H to Part 121 - Advanced Simulation

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... simulator instructors and check airmen must include training policies and procedures, instruction methods... Simulation This appendix provides guidelines and a means for achieving flightcrew training in advanced... simulator, as appropriate. Advanced Simulation Training Program For an operator to conduct Level C or...

  2. 14 CFR Appendix H to Part 121 - Advanced Simulation

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... simulator instructors and check airmen must include training policies and procedures, instruction methods... Simulation This appendix provides guidelines and a means for achieving flightcrew training in advanced... simulator, as appropriate. Advanced Simulation Training Program For an operator to conduct Level C or...

  3. 14 CFR Appendix H to Part 121 - Advanced Simulation

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... simulator instructors and check airmen must include training policies and procedures, instruction methods... Simulation This appendix provides guidelines and a means for achieving flightcrew training in advanced... simulator, as appropriate. Advanced Simulation Training Program For an operator to conduct Level C or...

  4. 14 CFR Appendix H to Part 121 - Advanced Simulation

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... simulator instructors and check airmen must include training policies and procedures, instruction methods... Simulation This appendix provides guidelines and a means for achieving flightcrew training in advanced... simulator, as appropriate. Advanced Simulation Training Program For an operator to conduct Level C or...

  5. The advanced computational testing and simulation toolkit (ACTS)

    SciTech Connect

    Drummond, L.A.; Marques, O.

    2002-05-21

    During the past decades there has been a continuous growth in the number of physical and societal problems that have been successfully studied and solved by means of computational modeling and simulation. Distinctively, a number of these are important scientific problems ranging in scale from the atomic to the cosmic. For example, ionization is a phenomenon as ubiquitous in modern society as the glow of fluorescent lights and the etching on silicon computer chips; but it was not until 1999 that researchers finally achieved a complete numerical solution to the simplest example of ionization, the collision of a hydrogen atom with an electron. On the opposite scale, cosmologists have long wondered whether the expansion of the Universe, which began with the Big Bang, would ever reverse itself, ending the Universe in a Big Crunch. In 2000, analysis of new measurements of the cosmic microwave background radiation showed that the geometry of the Universe is flat, and thus the Universe will continue expanding forever. Both of these discoveries depended on high performance computer simulations that utilized computational tools included in the Advanced Computational Testing and Simulation (ACTS) Toolkit. The ACTS Toolkit is an umbrella project that brought together a number of general purpose computational tool development projects funded and supported by the U.S. Department of Energy (DOE). These tools, which have been developed independently, mainly at DOE laboratories, make it easier for scientific code developers to write high performance applications for parallel computers. They tackle a number of computational issues that are common to a large number of scientific applications, mainly implementation of numerical algorithms, and support for code development, execution and optimization. The ACTS Toolkit Project enables the use of these tools by a much wider community of computational scientists, and promotes code portability, reusability, reduction of duplicate efforts

  6. The GBS code for tokamak scrape-off layer simulations

    SciTech Connect

    Halpern, F.D.; Ricci, P.; Jolliet, S.; Loizu, J.; Morales, J.; Mosetto, A.; Musil, F.; Riva, F.; Tran, T.M.; Wersal, C.

    2016-06-15

    We describe a new version of GBS, a 3D global, flux-driven plasma turbulence code to simulate the turbulent dynamics in the tokamak scrape-off layer (SOL), superseding the code presented by Ricci et al. (2012) [14]. The present work is driven by the objective of studying SOL turbulent dynamics in medium size tokamaks and beyond with a high-fidelity physics model. We emphasize an intertwining framework of improved physics models and the computational improvements that allow them. The model extensions include neutral atom physics, finite ion temperature, the addition of a closed field line region, and a non-Boussinesq treatment of the polarization drift. GBS has been completely refactored with the introduction of a 3-D Cartesian communicator and a scalable parallel multigrid solver. We report dramatically enhanced parallel scalability, with the possibility of treating electromagnetic fluctuations very efficiently. The method of manufactured solutions as a verification process has been carried out for this new code version, demonstrating the correct implementation of the physical model.

  7. ATES/heat pump simulations performed with ATESSS code

    NASA Astrophysics Data System (ADS)

    Vail, L. W.

    1989-01-01

    Modifications to the Aquifer Thermal Energy Storage System Simulator (ATESSS) allow simulation of aquifer thermal energy storage (ATES)/heat pump systems. The heat pump algorithm requires a coefficient of performance (COP) relationship of the form: COP = COP sub base + alpha (T sub ref minus T sub base). Initial applications of the modified ATES code to synthetic building load data for two sizes of buildings in two U.S. cities showed insignificant performance advantage of a series ATES heat pump system over a conventional groundwater heat pump system. The addition of algorithms for a cooling tower and solar array improved performance slightly. Small values of alpha in the COP relationship are the principal reason for the limited improvement in system performance. Future studies at Pacific Northwest Laboratory (PNL) are planned to investigate methods to increase system performance using alternative system configurations and operations scenarios.
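
    The COP relationship required by the heat-pump algorithm is a simple linear function of a reference temperature. In code, with coefficient values that are placeholders rather than those used in the PNL study:

```python
def heat_pump_cop(t_ref, cop_base=3.0, alpha=0.05, t_base=10.0):
    """COP relationship of the form used by the modified ATESSS
    heat-pump model: COP = COP_base + alpha * (T_ref - T_base).
    The default cop_base, alpha, and t_base values are illustrative
    placeholders; temperatures are in degrees C."""
    return cop_base + alpha * (t_ref - t_base)
```

    The abstract's observation follows directly: when `alpha` is small, raising the source temperature `t_ref` (the benefit an ATES system provides) barely moves the COP, which is why the series ATES/heat pump configuration showed little advantage.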

  8. Hybrid Particle Code Simulations of Mars: The Energy Budget.

    NASA Astrophysics Data System (ADS)

    Brecht, S. H.; Ledvina, S. A.

    2015-12-01

    The results of our latest hybrid particle simulations using the HALFSHEL code are discussed. The presentation will address the energy budget of the solar wind interaction with Mars. The simulations produce loss rates that are very consistent with measured data (Brecht and Ledvina [2014]); therefore, inspection of the details of the interaction is now warranted. This paper will address the relationship between the energy flowing into the planet and the energy flowing away from the planet. The partition of the energy between fields and individual ion species will be addressed, as well as the amount of energy deposited in the neutral atmosphere by incoming solar wind plasma and during the process of ion loss caused by acceleration via electric fields. Brecht, S.H. and S.A. Ledvina (2014), "The role of the Martian crustal magnetic fields in controlling ionospheric loss," Geophys. Res. Lett., 41, 5340-5346, doi:10.1002/2014GL060841.

  9. ROAR: A 3-D tethered rocket simulation code

    SciTech Connect

    York, A.R. II; Ludwigsen, J.S.

    1992-04-01

    A high-velocity impact testing technique, utilizing a tethered rocket, is being developed at Sandia National Laboratories. The technique involves tethering a rocket assembly to a pivot location and flying it in a semicircular trajectory to deliver the rocket and payload to an impact target location. Integral to developing this testing technique is the parallel development of accurate simulation models. An operational computer code, called ROAR (Rocket-on-a-Rope), has been developed to simulate the three-dimensional transient dynamic behavior of the tether and motor/payload assembly. This report presents a discussion of the parameters modeled, the governing set of equations, the through-time integration scheme, and the input required to set up a model. Also included is a sample problem and a comparison with experimental results.

  10. Software Framework for Advanced Power Plant Simulations

    SciTech Connect

    John Widmann; Sorin Munteanu; Aseem Jain; Pankaj Gupta; Mark Moales; Erik Ferguson; Lewis Collins; David Sloan; Woodrow Fiveland; Yi-dong Lang; Larry Biegler; Michael Locke; Simon Lingard; Jay Yun

    2010-08-01

    This report summarizes the work accomplished during the Phase II development effort of the Advanced Process Engineering Co-Simulator (APECS). The objective of the project is to develop the tools to efficiently combine high-fidelity computational fluid dynamics (CFD) models with process modeling software. During the course of the project, a robust integration controller was developed that can be used in any CAPE-OPEN compliant process modeling environment. The controller mediates the exchange of information between the process modeling software and the CFD software. Several approaches to reducing the time disparity between CFD simulations and process modeling have been investigated and implemented. These include enabling the CFD models to be run on a remote cluster and enabling multiple CFD models to be run simultaneously. Furthermore, computationally fast reduced-order models (ROMs) have been developed that can be 'trained' using the results from CFD simulations and then used directly within flowsheets. Unit operation models (both CFD and ROMs) can be uploaded to a model database and shared between multiple users.

  11. Film grain noise modeling in advanced video coding

    NASA Astrophysics Data System (ADS)

    Oh, Byung Tae; Kuo, C.-C. Jay; Sun, Shijun; Lei, Shawmin

    2007-01-01

    A new technique for film grain noise extraction, modeling and synthesis is proposed and applied to the coding of high definition video in this work. The film grain noise is viewed as a part of artistic presentation by people in the movie industry. On one hand, since the film grain noise can boost the natural appearance of pictures in high definition video, it should be preserved in high-fidelity video processing systems. On the other hand, video coding with film grain noise is expensive. It is desirable to extract film grain noise from the input video as a pre-processing step at the encoder and re-synthesize the film grain noise and add it back to the decoded video as a post-processing step at the decoder. Under this framework, the coding gain of the denoised video is higher while the quality of the final reconstructed video can still be well preserved. Following this idea, we present a method to remove film grain noise from image/video without distorting its original content. Besides, we describe a parametric model containing a small set of parameters to represent the extracted film grain noise. The proposed model generates the film grain noise that is close to the real one in terms of power spectral density and cross-channel spectral correlation. Experimental results are shown to demonstrate the efficiency of the proposed scheme.

  12. Recent advances in superconducting-mixer simulations

    NASA Technical Reports Server (NTRS)

    Withington, S.; Kennedy, P. R.

    1992-01-01

    Over the last few years, considerable progress has been made in the development of techniques for fabricating high-quality superconducting circuits, and this success, together with major advances in the theoretical understanding of quantum detection and mixing at millimeter and submillimeter wavelengths, has made the development of CAD techniques for superconducting nonlinear circuits an important new enterprise. For example, arrays of quasioptical mixers are now being manufactured, where the antennas, matching networks, filters, and superconducting tunnel junctions are all fabricated by depositing niobium and a variety of oxides on a single quartz substrate. There are no adjustable tuning elements on these integrated circuits, and therefore one must be able to predict their electrical behavior precisely. This requirement, together with a general interest in the generic behavior of devices such as direct detectors and harmonic mixers, has led us to develop a range of CAD tools for simulating the large-signal, small-signal, and noise behavior of superconducting tunnel junction circuits.

  13. Numerical simulations of hydrodynamic instabilities: Perturbation codes PANSY, PERLE, and 2D code CHIC applied to a realistic LIL target

    NASA Astrophysics Data System (ADS)

    Hallo, L.; Olazabal-Loumé, M.; Maire, P. H.; Breil, J.; Morse, R.-L.; Schurtz, G.

    2006-06-01

    This paper deals with simulations of ablation front instabilities in the context of direct-drive ICF. A simplified DT target, representative of a realistic target on the LIL, is considered. We describe here two numerical approaches: the linear perturbation method, using the perturbation codes Perle (planar) and Pansy (spherical), and the direct simulation method, using our two-dimensional hydrodynamic code Chic. Numerical solutions are shown to converge, in good agreement with analytical models.

  14. Coded aperture Fast Neutron Analysis: Latest design advances

    NASA Astrophysics Data System (ADS)

    Accorsi, Roberto; Lanza, Richard C.

    2001-07-01

    Past studies have shown that materials of concern like explosives or narcotics can be identified in bulk from their atomic composition. Fast Neutron Analysis (FNA) is a nuclear method capable of providing this information even when considerable penetration is needed. Unfortunately, the cross sections of the nuclear phenomena and the solid angles involved are typically small, so that it is difficult to obtain high signal-to-noise ratios in short inspection times. CAFNA aims at combining the compound specificity of FNA with the potentially high SNR of Coded Apertures, an imaging method successfully used in far-field 2D applications. The transition to a near-field, 3D and high-energy problem prevents a straightforward application of Coded Apertures and demands a thorough optimization of the system. In this paper, the considerations involved in the design of a practical CAFNA system for contraband inspection, its conclusions, and an estimate of the performance of such a system are presented as the evolution of the ideas presented in previous expositions of the CAFNA concept.

  15. Codesign approach towards an Exascale scalable plasma simulation code

    NASA Astrophysics Data System (ADS)

    Amaya, J.; Deca, J.; Innocenti, M. E.; Johnson, A.; Lapenta, G.; Markidis, S.; Olshevsky, V.; Vapirev, A.

    2013-10-01

    Particle-in-cell (PIC) simulations represent an excellent paradigm for codesign efforts. PIC codes are simple and flexible, with many variants addressing different physics applications (e.g. explicit, implicit, hybrid, gyrokinetic, fluid) and different architectures (e.g. vector, parallel, GPU). It is relatively easy to consider radical changes and test them in a short time. For this reason, the project DEEP funded by the European Commission (www.deep-project.eu) and the Intel Exascience Lab (www.exascience.com) have used PIC as one of their target applications for a codesign approach aimed at developing PIC methods for future exascale computers. The starting point is the iPic3D implicit PIC approach. Here we report on the analysis of code performance and on the use of GPUs and the new MICs (Intel Xeon Phi processors). We describe how the method can be rethought for hybrid architectures composed of MICs and CPUs (as in the new DEEP supercomputer in Juelich, as well as in others). The focus is on a codesign approach where computer science issues motivate modifications of the algorithms while physics constrains what should ultimately be achieved.

  16. VISRAD, 3-D Target Design and Radiation Simulation Code

    NASA Astrophysics Data System (ADS)

    Golovkin, Igor; Macfarlane, Joseph; Golovkina, Viktoriya

    2016-10-01

    The 3-D view factor code VISRAD is widely used in designing HEDP experiments at major laser and pulsed-power facilities, including NIF, OMEGA, OMEGA-EP, ORION, LMJ, Z, and PLX. It simulates target designs by generating a 3-D grid of surface elements, utilizing a variety of 3-D primitives and surface removal algorithms, and can be used to compute the radiation flux throughout the surface element grid by computing element-to-element view factors and solving power balance equations. Target set-up and beam pointing are facilitated by allowing users to specify positions and angular orientations using a variety of coordinates systems (e.g., that of any laser beam, target component, or diagnostic port). Analytic modeling for laser beam spatial profiles for OMEGA DPPs and NIF CPPs is used to compute laser intensity profiles throughout the grid of surface elements. We will discuss recent improvements to the software package and plans for future developments.
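
    For two small surface elements with unit normals, the element-to-element view factors that a code like VISRAD sums over its grid reduce to the standard differential form. A sketch of that quantity (not VISRAD's actual routine):

```python
import math

def element_view_factor(p1, n1, p2, n2, area2):
    """Differential element-to-element view factor,
    F_12 ~= cos(theta1) * cos(theta2) * A2 / (pi * r**2),
    for small elements at points p1, p2 with *unit* normals n1, n2.
    Returns 0 when the elements do not face each other.  Occlusion by
    other surfaces (which a full code must handle) is ignored here."""
    r = [b - a for a, b in zip(p1, p2)]          # vector from 1 to 2
    d2 = sum(c * c for c in r)
    d = math.sqrt(d2)
    cos1 = sum(a * b for a, b in zip(n1, r)) / d
    cos2 = -sum(a * b for a, b in zip(n2, r)) / d
    if cos1 <= 0.0 or cos2 <= 0.0:
        return 0.0
    return cos1 * cos2 * area2 / (math.pi * d2)
```

    Summing these factors over all element pairs gives the coupling matrix used in the power-balance equations the abstract mentions.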

  17. The Plasma Simulation Code: A modern particle-in-cell code with patch-based load-balancing

    NASA Astrophysics Data System (ADS)

    Germaschewski, Kai; Fox, William; Abbott, Stephen; Ahmadi, Narges; Maynard, Kristofor; Wang, Liang; Ruhl, Hartmut; Bhattacharjee, Amitava

    2016-08-01

    This work describes the Plasma Simulation Code (PSC), an explicit, electromagnetic particle-in-cell code with support for different order particle shape functions. We review the basic components of the particle-in-cell method as well as the computational architecture of the PSC code that allows support for modular algorithms and data structures in the code. We then describe and analyze in detail a distinguishing feature of PSC: patch-based load balancing using space-filling curves, which is shown to lead to major efficiency gains over unbalanced methods and over a previously used, simpler balancing method.
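
    Patch-based balancing along a space-filling curve can be sketched by ordering patches with a Z-order (Morton) key and cutting the ordering into contiguous chunks of roughly equal load. PSC's actual curve and cost model may differ; this is a simplified illustration:

```python
def morton_key(x, y, bits=16):
    """Interleave the bits of integer patch coordinates (x, y) to get
    a Z-order (Morton) key; sorting by this key keeps spatially
    nearby patches close together along the curve."""
    key = 0
    for b in range(bits):
        key |= ((x >> b) & 1) << (2 * b)
        key |= ((y >> b) & 1) << (2 * b + 1)
    return key

def balance(patches, loads, nranks):
    """Greedy split of the space-filling-curve ordering into nranks
    contiguous chunks of roughly equal total load, so each rank gets
    a spatially compact, balanced set of patches."""
    order = sorted(range(len(patches)), key=lambda i: morton_key(*patches[i]))
    target = sum(loads) / nranks
    parts, cur, acc = [[] for _ in range(nranks)], 0, 0.0
    for i in order:
        if acc >= target * (cur + 1) and cur < nranks - 1:
            cur += 1                      # move on to the next rank
        parts[cur].append(patches[i])
        acc += loads[i]
    return parts
```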

  18. NIMROD Code Simulation of Plasma Exhaust Expansion in the VASIMR Magnetic Nozzle

    NASA Astrophysics Data System (ADS)

    Tarditi, Alfonso G.

    2001-10-01

    The Variable Specific Impulse Magnetoplasma Rocket (VASIMR, [1]) engine is an advanced propulsion concept that uses radio frequency waves to accelerate a propellant (typically a Hydrogen or Helium plasma) at much higher speeds than can be reached by any conventional chemical rocket. The high exhaust speed results in a very efficient spacecraft design, as much less propellant mass is required to achieve the same acceleration of a vehicle in space. An experimental VASIMR prototype is currently under development at the Advanced Space Propulsion Laboratory, NASA Johnson Space Center. A magnetic nozzle is used to convert the thermal plasma energy into thrust along the longitudinal direction. A 3D, two-fluid simulation has been developed with the NIMROD code [2,3] to study the details of this process and the properties of the plasma detachment from the nozzle. The code has been customized with the introduction of a plasma source term and open-end boundary conditions for a cylindrical geometry. In the simulation, a source injects plasma in the nozzle-shaped external magnetic field. The initial plasma pulse expands in the vacuum region following the field lines and eventually evolves into a steady state profile where the plasma flow that crosses the open-end boundaries is balanced by the flow injected at the source. The NIMROD runs have been benchmarked with 2D simulations with a particle trajectory code. Initial comparisons with experimental probe measurements are also presented. The results of these test runs are being used to optimize the design parameters of the engine plasma acceleration section and of the magnetic nozzle field profile. [1] F. R. Chang-Diaz, Scientific American, p. 90, Nov. 2000. [2] A. H. Glasser, et al., Plasma Phys. Control. Fusion, 41, A74 (1999). [3] C. R. Sovinec, Int. Sherwood Fusion Theory Conf., Los Angeles, CA (USA), March 2000

  19. An Advanced Leakage Scheme for Neutrino Treatment in Astrophysical Simulations

    NASA Astrophysics Data System (ADS)

    Perego, A.; Cabezón, R. M.; Käppeli, R.

    2016-04-01

    We present an Advanced Spectral Leakage (ASL) scheme to model neutrinos in the context of core-collapse supernovae (CCSNe) and compact binary mergers. Based on previous gray leakage schemes, the ASL scheme computes the neutrino cooling rates by interpolating local production and diffusion rates (relevant in optically thin and thick regimes, respectively) separately for discretized values of the neutrino energy. Neutrino trapped components are also modeled, based on equilibrium and timescale arguments. The better accuracy achieved by the spectral treatment allows a more reliable computation of neutrino heating rates in optically thin conditions. The scheme has been calibrated and tested against Boltzmann transport in the context of Newtonian spherically symmetric models of CCSNe. ASL shows a very good qualitative and a partial quantitative agreement for key quantities from collapse to a few hundreds of milliseconds after core bounce. We have proved the adaptability and flexibility of our ASL scheme, coupling it to an axisymmetric Eulerian and to a three-dimensional smoothed particle hydrodynamics code to simulate core collapse. Therefore, the neutrino treatment presented here is ideal for large parameter-space explorations, parametric studies, high-resolution tests, code developments, and long-term modeling of asymmetric configurations, where more detailed neutrino treatments are not available or are currently computationally too expensive.
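
    The thin/thick interpolation at the heart of leakage schemes can be sketched per energy bin. The form below is the common gray-leakage interpolation, Q_eff = Q_prod / (1 + t_diff / t_prod); the ASL scheme's actual spectral interpolation differs in detail:

```python
def leakage_effective_rate(q_prod, t_prod, t_diff):
    """Leakage-style interpolation between the optically thin
    (production-dominated) and optically thick (diffusion-dominated)
    limits, applied independently to each neutrino-energy bin.
    q_prod: local production rates; t_prod, t_diff: production and
    diffusion timescales per bin."""
    return [q / (1.0 + td / tp)
            for q, tp, td in zip(q_prod, t_prod, t_diff)]
```

    In the thin limit (t_diff much smaller than t_prod) the local production rate is recovered; in the thick limit the rate is suppressed by t_prod / t_diff, mimicking slow diffusive escape.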

  20. AN ADVANCED LEAKAGE SCHEME FOR NEUTRINO TREATMENT IN ASTROPHYSICAL SIMULATIONS

    SciTech Connect

    Perego, A.; Cabezón, R. M.; Käppeli, R.

    2016-04-15

    We present an Advanced Spectral Leakage (ASL) scheme to model neutrinos in the context of core-collapse supernovae (CCSNe) and compact binary mergers. Based on previous gray leakage schemes, the ASL scheme computes the neutrino cooling rates by interpolating local production and diffusion rates (relevant in optically thin and thick regimes, respectively) separately for discretized values of the neutrino energy. Trapped neutrino components are also modeled, based on equilibrium and timescale arguments. The better accuracy achieved by the spectral treatment allows a more reliable computation of neutrino heating rates in optically thin conditions. The scheme has been calibrated and tested against Boltzmann transport in the context of Newtonian spherically symmetric models of CCSNe. ASL shows very good qualitative and partial quantitative agreement for key quantities from collapse to a few hundred milliseconds after core bounce. We have demonstrated the adaptability and flexibility of our ASL scheme by coupling it to an axisymmetric Eulerian code and to a three-dimensional smoothed particle hydrodynamics code to simulate core collapse. Therefore, the neutrino treatment presented here is ideal for large parameter-space explorations, parametric studies, high-resolution tests, code development, and long-term modeling of asymmetric configurations, where more detailed neutrino treatments are not available or are currently computationally too expensive.

  1. GRMHD Simulations of Jet Formation with a New Code

    NASA Technical Reports Server (NTRS)

    Mizuno, Y.; Nishikawa, K.-I.; Koide, S.; Hardee, P.; Fishman, G. J.

    2006-01-01

    We have developed a new three-dimensional general relativistic magnetohydrodynamic (GRMHD) code by using a conservative, high-resolution shock-capturing scheme. The numerical fluxes are calculated using the HLL approximate Riemann solver scheme. The flux-interpolated, constrained transport scheme is used to maintain a divergence-free magnetic field. Various one-dimensional test problems in both special and general relativity show significant improvements over our previous model. We have performed simulations of jet formation from a geometrically thin accretion disk near both nonrotating and rotating black holes. The new simulation results show that the jet is formed in the same manner as in previous work and propagates outward. In the rotating black hole cases, jets form much closer to the black hole's ergosphere and the magnetic field is strongly twisted due to the frame-dragging effect. As the magnetic field strength becomes weaker, a larger amount of matter is launched with the jet. On the other hand, when the magnetic field strength becomes stronger, the jet has less matter and becomes Poynting-flux dominated. We will also discuss how the jet properties depend on the rotation of the black hole.
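    The HLL flux used in such conservative shock-capturing schemes has a compact closed form in terms of the left/right conserved states, their physical fluxes, and estimated slowest/fastest signal speeds. A minimal sketch (the scalar advection sanity check at the end is illustrative, not a GRMHD test problem):

    ```python
    import numpy as np

    def hll_flux(u_l, u_r, f_l, f_r, s_l, s_r):
        """HLL approximate Riemann flux at a cell interface, given the
        left/right conserved states u, physical fluxes f, and signal
        speed estimates s_l <= s_r."""
        if s_l >= 0.0:
            return f_l          # all waves move right: upwind on the left
        if s_r <= 0.0:
            return f_r          # all waves move left: upwind on the right
        return (s_r * f_l - s_l * f_r + s_l * s_r * (u_r - u_l)) / (s_r - s_l)

    # Sanity check on 1D linear advection f = a*u with a > 0: both signal
    # speeds are positive, so the scheme reduces to left-upwinding.
    a = 1.0
    u_l, u_r = np.array([2.0]), np.array([1.0])
    flux = hll_flux(u_l, u_r, a * u_l, a * u_r, a, a)
    ```

    Because the formula only needs the conserved states and their fluxes, the same interface function applies unchanged to relativistic MHD once the state vector and wave-speed estimates are swapped in.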

  2. MPI parallelization of full PIC simulation code with Adaptive Mesh Refinement

    NASA Astrophysics Data System (ADS)

    Matsui, Tatsuki; Nunami, Masanori; Usui, Hideyuki; Moritaka, Toseo

    2010-11-01

    A new parallelization technique developed for the PIC method with adaptive mesh refinement (AMR) is introduced. In the AMR technique, the complicated cell arrangements are organized and managed as interconnected pointers with multiple resolution levels, forming a fully threaded tree structure as a whole. In order to retain this tree structure while it is distributed over multiple processes, remote memory access, an extended feature of the MPI-2 standard, is employed. Another important feature of the present simulation technique is domain decomposition according to a modified Morton ordering. This algorithm groups equal numbers of particle calculation loops, which allows for better load balance. Using this advanced simulation code, preliminary results for basic physical problems are presented as a validity check, together with benchmarks to test the performance and scalability.
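    The Morton (Z-order) ordering underlying such decompositions is generated by interleaving the bits of the cell indices; cells adjacent along the resulting curve are spatially clustered, so splitting the sorted cell list into equal chunks gives compact per-process domains. A minimal 2D sketch of the plain (unmodified) Morton key, since the abstract's modified variant is not specified:

    ```python
    def morton2d(ix, iy, bits=10):
        """Interleave the bits of (ix, iy) into a Morton (Z-order) key."""
        key = 0
        for b in range(bits):
            key |= ((ix >> b) & 1) << (2 * b)        # x bits go to even positions
            key |= ((iy >> b) & 1) << (2 * b + 1)    # y bits go to odd positions
        return key

    # Sorting cells by Morton key produces the space-filling Z curve along
    # which the domain decomposition assigns contiguous chunks to processes.
    cells = [(ix, iy) for ix in range(4) for iy in range(4)]
    ordered = sorted(cells, key=lambda c: morton2d(*c))
    ```

    For load balancing, chunk boundaries would be placed so that each chunk carries an equal share of particle work rather than an equal number of cells.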

  3. Advanced Modeling, Simulation and Analysis (AMSA) Capability Roadmap Progress Review

    NASA Technical Reports Server (NTRS)

    Antonsson, Erik; Gombosi, Tamas

    2005-01-01

    Contents include the following: NASA capability roadmap activity. Advanced modeling, simulation, and analysis overview. Scientific modeling and simulation. Operations modeling. Multi-spectral sensing (UV-gamma). System integration. M and S environments and infrastructure.

  4. Numerical simulation of turbomachinery flows with advanced turbulence models

    NASA Technical Reports Server (NTRS)

    Lakshminarayana, B.; Kunz, R.; Luo, J.; Fan, S.

    1992-01-01

    A three-dimensional full Navier-Stokes (FNS) code is used to simulate complex turbomachinery flows. The code incorporates an explicit multistep scheme and solves a conservative form of the density-averaged continuity, momentum, and energy equations. A compressible low-Reynolds-number form of the k-epsilon turbulence model, a q-omega model, and an algebraic Reynolds stress model have been incorporated in a fully coupled manner to approximate the Reynolds stresses. The code is used to predict the viscous flow field in a backswept transonic centrifugal compressor for which laser two-focus data are available. The code is also used to simulate the tip clearance flow in a cascade. The code has been extended to include unsteady Euler solutions for predicting the unsteady flow through a cascade due to incoming wakes, simulating rotor-stator interactions.

  5. THEHYCO-3DT: Thermal hydrodynamic code for the 3 dimensional transient calculation of advanced LMFBR core

    SciTech Connect

    Vitruk, S.G.; Korsun, A.S.; Ushakov, P.A.

    1995-09-01

    The multilevel mathematical model of neutron and thermal-hydrodynamic processes in a passive-safety core without assembly duct walls, and the corresponding computer code SKETCH, consisting of the thermal-hydrodynamic module THEHYCO-3DT and a neutron module, are described. A new effective discretization technique for the energy, momentum, and mass conservation equations is applied in hexagonal-z geometry. The adequacy and applicability of the model are presented. The results of the calculations show that the model and the computer code can be used in the conceptual design of advanced reactors.

  6. Particle tracking code of simulating global RF feedback

    SciTech Connect

    Mestha, L.K.

    1991-09-01

    It is well known in the control community that a good feedback controller design is deeply rooted in the physics of the system. For example, when accelerating the beam we must keep several parameters under control so that the beam travels within the confined space. Important parameters include the frequency and phase of the rf signal, the dipole field, and the cavity voltage. Because errors in these parameters will progressively mislead the beam from its projected path in the tube, feedback loops are used to correct the behavior. Since a feedback loop feeds energy to the system, it changes the overall behavior of the system and may drive it to instability. Various types of controllers are used to stabilize the feedback loop. Integrating the beam physics with the feedback controllers allows us to carefully analyze the beam behavior. This will not only guarantee optimal performance but will also significantly enhance the ability of the beam control engineer to deal effectively with the interaction of various feedback loops. Motivated by this theme, we developed a simple one-particle tracking code to simulate particle behavior with feedback controllers. In order to achieve our fundamental objective, we can ask some key questions: What are the input and output parameters? How can they be applied to the practical machine? How can one interface the rf system dynamics, such as the transfer characteristics of the rf cavities and the phasing between the cavities? Answers to these questions can be found by considering a simple case of a single cavity with one particle, tracking it turn by turn with appropriate initial conditions, then introducing constraints on crucial parameters. The critical parameters are the rf frequency, phase, and amplitude once the dipole field has been given. These are arranged in the tracking code so that we can interface the feedback system controlling them.
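    Turn-by-turn tracking of a single particle through one rf cavity reduces, in its simplest longitudinal form, to an energy kick at the gap followed by a phase slip proportional to the energy error. A schematic sketch with illustrative constants (not the machine parameters of any particular accelerator):

    ```python
    import math

    def track_turn(phi, dE, V=1.0e5, phi_s=0.0, slip=-1.0e-7):
        """One turn through a single rf cavity: energy kick from the gap
        voltage V [V], then phase slip proportional to the energy error.
        `slip` lumps h*eta/(beta^2*E_s) into one factor (negative below
        transition); all values here are illustrative."""
        dE = dE + V * (math.sin(phi) - math.sin(phi_s))   # energy kick [eV]
        phi = phi + 2.0 * math.pi * slip * dE             # phase slip [rad]
        return phi, dE

    # A particle launched off the synchronous phase executes bounded
    # synchrotron oscillations inside the rf bucket.
    phi, dE = 0.1, 0.0
    max_phi = 0.0
    for _ in range(1000):
        phi, dE = track_turn(phi, dE)
        max_phi = max(max_phi, abs(phi))
    ```

    A feedback loop would be layered on top of this map by perturbing V, phi_s, or the slip factor each turn from a controller acting on the measured phase and energy errors.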

  7. Advancements in Afterbody Radiative Heating Simulations for Earth Entry

    NASA Technical Reports Server (NTRS)

    Johnston, Christopher O.; Panesi, Marco; Brandis, Aaron M.

    2016-01-01

    Four advancements to the simulation of backshell radiative heating for Earth entry are presented. The first of these is the development of a flow field model that treats electronic levels of the dominant backshell radiator, N, as individual species. This is shown to allow improvements in the modeling of electron-ion recombination and two-temperature modeling, which are shown to increase backshell radiative heating by 10 to 40%. By computing the electronic state populations of N within the flow field solver, instead of through the quasi-steady state approximation in the radiation code, the coupling of radiative transition rates to the species continuity equations for the levels of N, including the impact of non-local absorption, becomes feasible. Implementation of this additional level of coupling between the flow field and radiation codes represents the second advancement presented in this work, which is shown to increase the backshell radiation by another 10 to 50%. The impact of radiative transition rates due to non-local absorption indicates the importance of accurate radiation transport in the relatively complex flow geometry of the backshell. This motivates the third advancement, which is the development of a ray-tracing radiation transport approach to compute the radiative transition rates and divergence of the radiative flux at every point for coupling to the flow field, therefore allowing the accuracy of the commonly applied tangent-slab approximation to be assessed for radiative source terms. For the sphere considered at lunar-return conditions, the tangent-slab approximation is shown to provide a sufficient level of accuracy for the radiative source terms, even for backshell cases. This is in contrast to the agreement between the two approaches for computing the radiative flux to the surface, which differ by up to 40%. The final advancement presented is the development of a nonequilibrium model for NO radiation, which provides significant backshell

  8. Compiled reports on the applicability of selected codes and standards to advanced reactors

    SciTech Connect

    Benjamin, E.L.; Hoopingarner, K.R.; Markowski, F.J.; Mitts, T.M.; Nickolaus, J.R.; Vo, T.V.

    1994-08-01

    The following papers were prepared for the Office of Nuclear Regulatory Research of the U.S. Nuclear Regulatory Commission under contract DE-AC06-76RLO-1830 NRC FIN L2207. This project, Applicability of Codes and Standards to Advanced Reactors, reviewed selected mechanical and electrical codes and standards to determine their applicability to the construction, qualification, and testing of advanced reactors and to develop recommendations as to where it might be useful and practical to revise them to suit the (design certification) needs of the NRC.

  9. Adaptation of the Advanced Spray Combustion Code to Cavitating Flow Problems

    NASA Technical Reports Server (NTRS)

    Liang, Pak-Yan

    1993-01-01

    A very important consideration in turbopump design is the prediction and prevention of cavitation. Thus far conventional CFD codes have not been generally applicable to the treatment of cavitating flows. Taking advantage of its two-phase capability, the Advanced Spray Combustion Code is being modified to handle flows with transient as well as steady-state cavitation bubbles. The volume-of-fluid approach incorporated into the code is extended and augmented with a liquid phase energy equation and a simple evaporation model. The strategy adopted also successfully deals with the cavity closure issue. Simple test cases will be presented and remaining technical challenges will be discussed.

  10. Advanced beam-dynamics simulation tools for RIA.

    SciTech Connect

    Garnett, R. W.; Wangler, T. P.; Billen, J. H.; Qiang, J.; Ryne, R.; Crandall, K. R.; Ostroumov, P.; York, R.; Zhao, Q.; Physics; LANL; LBNL; Tech Source; Michigan State Univ.

    2005-01-01

    We are developing multi-particle beam-dynamics simulation codes for RIA driver-linac simulations extending from the low-energy beam transport (LEBT) line to the end of the linac. These codes run on the NERSC parallel supercomputing platforms at LBNL, which allow us to run simulations with large numbers of macroparticles. The codes have the physics capabilities needed for RIA, including transport and acceleration of multiple-charge-state beams, beam-line elements such as high-voltage platforms within the linac, interdigital accelerating structures, charge-stripper foils, and capabilities for handling the effects of machine errors and other off-normal conditions. This year will mark the end of our project. In this paper we present the status of the work, describe some recent additions to the codes, and show some preliminary simulation results.

  11. Development of a CFD code for casting simulation

    NASA Technical Reports Server (NTRS)

    Murph, Jesse E.

    1992-01-01

    The task of developing a computational fluid dynamics (CFD) code to accurately model the mold filling phase of a casting operation was accomplished in a systematic manner. First the state-of-the-art was determined through a literature search, a code search, and participation with casting industry personnel involved in consortium startups. From this material and inputs from industry personnel, an evaluation of the currently available codes was made. It was determined that a few of the codes already contained sophisticated CFD algorithms and further validation of one of these codes could preclude the development of a new CFD code for this purpose. With industry concurrence, ProCAST was chosen for further evaluation. Two benchmark cases were used to evaluate the code's performance using a Silicon Graphics Personal Iris system. The results of these limited evaluations (because of machine and time constraints) are presented along with discussions of possible improvements and recommendations for further evaluation.

  12. Precision Casting via Advanced Simulation and Manufacturing

    NASA Technical Reports Server (NTRS)

    1997-01-01

    A two-year program was conducted to develop and commercially implement selected casting manufacturing technologies to enable significant reductions in the costs of castings, increase the complexity and dimensional accuracy of castings, and reduce the development times for delivery of high quality castings. The industry-led R&D project was cost shared with NASA's Aerospace Industry Technology Program (AITP). The Rocketdyne Division of Boeing North American, Inc. served as the team lead with participation from Lockheed Martin, Ford Motor Company, Howmet Corporation, PCC Airfoils, General Electric, UES, Inc., University of Alabama, Auburn University, Robinson, Inc., Aracor, and NASA-LeRC. The technical effort was organized into four distinct tasks. The accomplishments are reported herein. Task 1.0 developed advanced simulation technology for core molding. Ford headed up this task. On this program, a specialized core machine was designed and built. Task 2.0 focused on intelligent process control for precision core molding. Howmet led this effort. The primary focus of these experimental efforts was to characterize the process parameters that have a strong impact on dimensional control issues of injection molded cores during their fabrication. Task 3.0 developed and applied rapid prototyping to produce near net shape castings. Rocketdyne was responsible for this task. CAD files were generated using reverse engineering, rapid prototype patterns were fabricated using SLS and SLA, and castings produced and evaluated. Task 4.0 was aimed at developing technology transfer. Rocketdyne coordinated this task. Casting related technology, explored and evaluated in the first three tasks of this program, was implemented into manufacturing processes.

  13. SOFT-RT: Software for IMRT simulations based on MCNPx code.

    PubMed

    Fonseca, Telma Cristina Ferreira; Campos, Tarcisio Passos Ribeiro

    2016-11-01

    Intensity Modulated Radiation Therapy (IMRT) is an advanced treatment technique widely used in external radiotherapy. This paper presents SOFT-RT, which allows the simulation of an entire IMRT treatment protocol. SOFT-RT performs a full three-dimensional rendering of a set of patient images, including the definition of regions of interest with organs at risk (OIR) and the target tumor volume and margins (PTV). Thus, a more accurate analysis and planning can be performed, taking into account the features and orientation of the radiation beams. The exposed tissues, as well as the amount of dose absorbed in healthy and/or cancerous tissues, are depicted. In conclusion, SOFT-RT can predict the dose on the PTV accurately while preserving the surrounding healthy tissues. SOFT-RT is coupled with the SISCODES code. The SISCODES code is first applied to segment the set of CT or MRI patient images into distinct tissues, assigning each its respective density and chemical composition. The voxel model is then exported to the SOFT-RT IMRT planning module, in which a full treatment plan is created. All geometrical parameters are sent to the general-purpose Monte Carlo transport code MCNP to simulate the interaction of each incident beam directed toward the PTV while avoiding the OIR. The computational simulations run on MCNPx. The normalized dose results are exported to the SOFT-RT output module, in which the three-dimensional model is visualized using a transparent-glass procedure that adopts a gray scale dependent on the mass density of the correlated tissue and a color scale to depict dose values in a superimposed protocol.

  14. ADVANCED TECHNIQUES FOR RESERVOIR SIMULATION AND MODELING OF NONCONVENTIONAL WELLS

    SciTech Connect

    Louis J. Durlofsky; Khalid Aziz

    2004-08-20

    Nonconventional wells, which include horizontal, deviated, multilateral and ''smart'' wells, offer great potential for the efficient management of oil and gas reservoirs. These wells are able to contact larger regions of the reservoir than conventional wells and can also be used to target isolated hydrocarbon accumulations. The use of nonconventional wells instrumented with downhole inflow control devices allows for even greater flexibility in production. Because nonconventional wells can be very expensive to drill, complete and instrument, it is important to be able to optimize their deployment, which requires the accurate prediction of their performance. However, predictions of nonconventional well performance are often inaccurate. This is likely due to inadequacies in some of the reservoir engineering and reservoir simulation tools used to model and optimize nonconventional well performance. A number of new issues arise in the modeling and optimization of nonconventional wells. For example, the optimal use of downhole inflow control devices has not been addressed for practical problems. In addition, the impact of geological and engineering uncertainty (e.g., valve reliability) has not been previously considered. In order to model and optimize nonconventional wells in different settings, it is essential that the tools be implemented into a general reservoir simulator. This simulator must be sufficiently general and robust and must in addition be linked to a sophisticated well model. Our research under this five year project addressed all of the key areas indicated above. The overall project was divided into three main categories: (1) advanced reservoir simulation techniques for modeling nonconventional wells; (2) improved techniques for computing well productivity (for use in reservoir engineering calculations) and for coupling the well to the simulator (which includes the accurate calculation of well index and the modeling of multiphase flow in the wellbore

  15. Three-dimensional finite-element code for electrosurgery and thermal ablation simulations (Invited Paper)

    NASA Astrophysics Data System (ADS)

    Humphries, Stanley; Johnson, Kristin; Rick, Kyle; Liu, Zheng-jun; Goldberg, S. Nahum

    2005-04-01

    ETherm3 is a finite-element software suite for simulations of electrosurgery and RF thermal ablation processes. Program components cover the complete calculation process from mesh generation to solution analysis. The solutions employ three-dimensional conformal meshes to handle cluster probes and other asymmetric assemblies. The conformal-mesh approach is essential for high-accuracy surface integrals of net electrode currents. ETherm3 performs coupled calculations of RF electric fields in conductive dielectrics and thermal transport via dynamic solutions of the bioheat equation. The boundary-value RF field solution is updated periodically to reflect changes in material properties. ETherm3 features advanced material models with the option for arbitrary temperature variations of thermal and electrical conductivity, perfusion rate, and other quantities. The code handles irreversible changes by switching the material reference of individual elements at specified transition temperatures. ETherm3 is controlled through a versatile interpreter language to enable complex run sequences. The code can automatically maintain constant current or power, switch to different states in response to temperature or impedance information, and adjust parameters on the basis of user-supplied control functions. In this paper, we discuss the physical basis and novel features of the code suite and review application examples.
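    The dynamic thermal solution that such electrosurgery codes couple to the RF field solve is based on the bioheat equation; the Pennes form combines conduction, blood perfusion toward arterial temperature, and the deposited RF power. A minimal 1D explicit finite-difference sketch (the tissue constants, grid, and RF source below are illustrative, not ETherm3's material models):

    ```python
    import numpy as np

    # Illustrative soft-tissue constants in SI units.
    k, rho_c = 0.5, 3.6e6        # conductivity [W/m/K], volumetric heat capacity [J/m^3/K]
    w_cb = 2.0e3                 # perfusion term w_b*rho_b*c_b [W/m^3/K]
    T_art = 37.0                 # arterial blood temperature [C]
    dx, dt = 1.0e-3, 0.5         # grid spacing [m], time step [s] (explicit-stable)

    T = np.full(50, 37.0)
    q_rf = np.zeros(50)
    q_rf[25] = 5.0e5             # localized RF heating near the electrode [W/m^3]

    for _ in range(600):         # 5 minutes of heating
        lap = (np.roll(T, 1) - 2.0 * T + np.roll(T, -1)) / dx**2
        lap[0] = lap[-1] = 0.0   # boundaries held at body temperature
        # Pennes bioheat update: conduction + perfusion sink + RF source.
        T += dt / rho_c * (k * lap + w_cb * (T_art - T) + q_rf)
        T[0] = T[-1] = 37.0
    ```

    The coupling loop of a full code would periodically re-solve the RF boundary-value problem and re-evaluate the temperature-dependent conductivities before continuing this thermal march.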

  16. PEGASUS. 3D Direct Simulation Monte Carlo Code Which Solves for Geometrics

    SciTech Connect

    Bartel, T.J.

    1998-12-01

    Pegasus is a 3D Direct Simulation Monte Carlo Code which solves for geometries which can be represented by bodies of revolution. Included are all the surface chemistry enhancements in the 2D code Icarus as well as a real vacuum pump model. The code includes multiple species transport.
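    Restricting the geometry to bodies of revolution makes the inside/outside test for a particle particularly cheap: only the radius profile r(z) is needed. A minimal sketch (the nozzle-like profile is hypothetical, not a Pegasus input):

    ```python
    import math

    def inside_body_of_revolution(x, y, z, profile):
        """A body of revolution is fully described by its radius profile
        r(z); a point lies inside iff its cylindrical radius is below the
        profile value at that z."""
        return math.hypot(x, y) < profile(z)

    # Hypothetical conical profile widening with z.
    nozzle = lambda z: 1.0 + 0.5 * z
    ```

    The same radial comparison also localizes surface crossings for the DSMC surface-chemistry step, since a collision with the wall occurs where the cylindrical radius reaches r(z).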

  17. 3D Direct Simulation Monte Carlo Code Which Solves for Geometrics

    SciTech Connect

    Bartel, Timothy J.

    1998-01-13

    Pegasus is a 3D Direct Simulation Monte Carlo Code which solves for geometries which can be represented by bodies of revolution. Included are all the surface chemistry enhancements in the 2D code Icarus as well as a real vacuum pump model. The code includes multiple species transport.

  18. ANNarchy: a code generation approach to neural simulations on parallel hardware

    PubMed Central

    Vitay, Julien; Dinkelbach, Helge Ü.; Hamker, Fred H.

    2015-01-01

    Many modern neural simulators focus on the simulation of networks of spiking neurons on parallel hardware. Another important framework in computational neuroscience, rate-coded neural networks, is mostly difficult or impossible to implement using these simulators. We present here the ANNarchy (Artificial Neural Networks architect) neural simulator, which makes it easy to define and simulate rate-coded and spiking networks, as well as combinations of both. The Python interface has been designed to be close to the PyNN interface, while the definition of neuron and synapse models can be specified using an equation-oriented mathematical description similar to that of the Brian neural simulator. This information is used to generate C++ code that will efficiently perform the simulation on the chosen parallel hardware (multi-core system or graphical processing unit). Several numerical methods are available to transform ordinary differential equations into efficient C++ code. We compare the parallel performance of the simulator to existing solutions. PMID:26283957
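    An equation-oriented rate-neuron description such as "tau * dr/dt = -r + sum(exc) + input" compiles down to a vectorized update loop. A schematic Python sketch of the kind of computation the generated code performs (weights, sizes, and drive are illustrative; the real simulator emits C++ and supports several integration methods beyond explicit Euler):

    ```python
    import numpy as np

    # Hypothetical network: 100 rate-coded neurons with weak random
    # excitatory coupling and a constant external drive.
    rng = np.random.default_rng(0)
    n, tau, dt = 100, 10.0, 1.0
    W = rng.uniform(0.0, 1.0, size=(n, n)) / (2 * n)   # row sums ~0.25, so dynamics stay stable
    inp = np.full(n, 0.5)
    r = np.zeros(n)

    for _ in range(200):
        drive = W @ r + inp
        r += dt / tau * (-r + drive)   # explicit Euler step of the rate ODE
        r = np.maximum(r, 0.0)         # firing rates stay non-negative
    ```

    With the coupling kept weak, the rates converge to the fixed point of r = W r + inp; the simulator's job is to emit this loop (and its spiking analogues) as efficient parallel code rather than have the user write it.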

  19. PLASIM: A computer code for simulating charge exchange plasma propagation

    NASA Technical Reports Server (NTRS)

    Robinson, R. S.; Deininger, W. D.; Winder, D. R.; Kaufman, H. R.

    1982-01-01

    The propagation of the charge exchange plasma for an electrostatic ion thruster is crucial in determining the interaction of that plasma with the associated spacecraft. A model that describes this plasma and its propagation is described, together with a computer code based on this model. The structure and calling sequence of the code, named PLASIM, is described. An explanation of the program's input and output is included, together with samples of both. The code is written in ANSI Standard FORTRAN.

  20. Functions of Code-Switching among Iranian Advanced and Elementary Teachers and Students

    ERIC Educational Resources Information Center

    Momenian, Mohammad; Samar, Reza Ghafar

    2011-01-01

    This paper reports on the findings of a study carried out on the functions and patterns of code-switching among advanced and elementary teachers and students in Iranian English classrooms. This concept has been examined less in L2 (second language) classroom contexts than in natural contexts outside the classroom. Therefore, besides reporting on the…

  1. DNA strand breaks induced by electrons simulated with Nanodosimetry Monte Carlo Simulation Code: NASIC.

    PubMed

    Li, Junli; Li, Chunyan; Qiu, Rui; Yan, Congchong; Xie, Wenzhang; Wu, Zhen; Zeng, Zhi; Tung, Chuanjong

    2015-09-01

    The method of Monte Carlo simulation is a powerful tool to investigate the details of radiation-induced biological damage at the molecular level. In this paper, a Monte Carlo code called NASIC (Nanodosimetry Monte Carlo Simulation Code) was developed. It includes a physical module, a pre-chemical module, a chemical module, a geometric module and a DNA damage module. The physical module can simulate physical tracks of low-energy electrons in liquid water event by event. More than one set of inelastic cross sections was calculated by applying the dielectric-function method of Emfietzoglou's optical-data treatments, with different optical data sets and dispersion models. In the pre-chemical module, the ionised and excited water molecules undergo dissociation processes. In the chemical module, the produced radiolytic chemical species diffuse and react. In the geometric module, an atomic model of 46 chromatin fibres in a spherical nucleus of a human lymphocyte was established. In the DNA damage module, the direct damage induced by the energy depositions of the electrons and the indirect damage induced by the radiolytic chemical species were calculated. The parameters were adjusted so that the simulation results agree with the experimental results. In this paper, the influence of the inelastic cross sections and of the vibrational excitation reaction on the parameters and on the DNA strand break yields was studied. Further work on NASIC is underway.

  2. Additions and Improvements to the FLASH Code for Simulating High Energy Density Physics Experiments

    NASA Astrophysics Data System (ADS)

    Lamb, D. Q.; Daley, C.; Dubey, A.; Fatenejad, M.; Flocke, N.; Graziani, C.; Lee, D.; Tzeferacos, P.; Weide, K.

    2015-11-01

    FLASH is an open source, finite-volume Eulerian, spatially adaptive radiation hydrodynamics and magnetohydrodynamics code that incorporates capabilities for a broad range of physical processes, performs well on a wide range of computer architectures, and has a broad user base. Extensive capabilities have been added to FLASH to make it an open toolset for the academic high energy density physics (HEDP) community. We summarize these capabilities, with particular emphasis on recent additions and improvements. These include advancements in the optical ray tracing laser package, with methods such as bi-cubic 2D and tri-cubic 3D interpolation of electron number density, adaptive stepping and 2nd-, 3rd-, and 4th-order Runge-Kutta integration methods. Moreover, we showcase the simulated magnetic field diagnostic capabilities of the code, including induction coils, Faraday rotation, and proton radiography. We also describe several collaborations with the National Laboratories and the academic community in which FLASH has been used to simulate HEDP experiments. This work was supported in part at the University of Chicago by the DOE NNSA ASC through the Argonne Institute for Computing in Science under field work proposal 57789; and the NSF under grant PHY-0903997.

  3. A Self-consistent Simulation of KSTAR Reverse-shear Operation Mode using ASTRA Code

    NASA Astrophysics Data System (ADS)

    Kim, J. Y.; Jhang, Hogun

    2001-10-01

    A simulation study is presented on the reverse-shear operation mode of the KSTAR (Korea Superconducting Tokamak Advanced Research) device, using the ASTRA (Automatic System of TRansport Analysis in a tokamak) code. The heat deposition and the current profile evolution are modeled self-consistently with the ASTRA code, into which several heating and current-drive modules for NBI, ICRH/FWCD, and LHCD have been implemented. The anomalous transport is modeled more elaborately by using theory-based models, such as the IFS/PPPL model, rather than conventional empirical or artificial formulas. The effect of equilibrium flow shear and its time evolution are also included in the modeling for a more realistic description of the formation of the ITB and its spatial and temporal evolution. Finally, based on the simulation results we will discuss possible ways to obtain an AT-mode plasma with high beta and a high bootstrap current fraction while avoiding a steep pressure gradient and the related local MHD instabilities.

  4. Emulation of an Advanced G-Seat on the Advanced Simulator for Pilot Training.

    DTIC Science & Technology

    1978-04-01

    …(ASPT), which culminated in the emulation of an advanced approach to G-seat simulation. The development of the software, the design of the advanced seat…components, the implementation of the advanced design on the ASPT, and the results of the study are presented. (Author)
  5. Enabling Advanced Modeling and Simulations for Fuel-Flexible Combustors

    SciTech Connect

    Pitsch, Heinz

    2010-05-31

    The overall goal of the present project is to enable advanced modeling and simulations for the design and optimization of fuel-flexible turbine combustors. For this purpose we use a high-fidelity, extensively-tested large-eddy simulation (LES) code and state-of-the-art models for premixed/partially-premixed turbulent combustion developed in the PI's group. In the framework of the present project, these techniques are applied, assessed, and improved for hydrogen-enriched premixed and partially premixed gas-turbine combustion. Our innovative approaches include a completely consistent description of flame propagation, a coupled progress variable/level set method to resolve the detailed flame structure, and incorporation of thermal-diffusion (non-unity Lewis number) effects. In addition, we have developed a general flamelet-type transformation holding in the limits of both non-premixed and premixed burning. As a result, a model for partially premixed combustion has been derived. The coupled progress variable/level set method and the general flamelet transformation were validated by LES of a lean-premixed low-swirl burner that has been studied experimentally at Lawrence Berkeley National Laboratory. The model is extended to include the non-unity Lewis number effects, which play a critical role in fuel-flexible combustors with high-hydrogen-content fuel. More specifically, a two-scalar model for lean hydrogen and hydrogen-enriched combustion is developed and validated against experimental and direct numerical simulation (DNS) data. Results are presented to emphasize the importance of non-unity Lewis number effects in the lean-premixed low-swirl burner of interest in this project. The proposed model gives improved results, which shows that the inclusion of the non-unity Lewis number effects is essential for accurate prediction of the lean-premixed low-swirl flame.

  7. Study and simulation of low rate video coding schemes

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Chen, Yun-Chung; Kipp, G.

    1992-01-01

    The semiannual report is included. Topics covered include communication, information science, data compression, remote sensing, color mapped images, robust coding scheme for packet video, recursively indexed differential pulse code modulation, image compression technique for use on token ring networks, and joint source/channel coder design.

  8. Top 10 Tips for Using Advance Care Planning Codes in Palliative Medicine and Beyond.

    PubMed

    Jones, Christopher A; Acevedo, Jean; Bull, Janet; Kamal, Arif H

    2016-12-01

    Although recommended for all persons with serious illness, advance care planning (ACP) has historically been a charitable clinical service. Inadequate or unreliable provisions for reimbursement, among other barriers, have spurred a gap between the evidence demonstrating the importance of timely ACP and recognition by payers for its delivery.(1) For the first time, healthcare is experiencing a dramatic shift in billing codes that support increased care management and care coordination. ACP, chronic care management, and transitional care management codes are examples of this newer recognition of the value of these types of services. ACP discussions are an integral component of comprehensive, high-quality palliative care delivery. The advent of reimbursement mechanisms to recognize these services has an enormous potential to impact palliative care program sustainability and growth. In this article, we highlight 10 tips for effectively using the new ACP codes reimbursable under Medicare. The importance of documentation, proper billing, and nuances regarding coding are addressed.

  9. DgSMC-B code: A robust and autonomous direct simulation Monte Carlo code for arbitrary geometries

    NASA Astrophysics Data System (ADS)

    Kargaran, H.; Minuchehr, A.; Zolfaghari, A.

    2016-07-01

    In this paper, we describe the structure of a new Direct Simulation Monte Carlo (DSMC) code that takes advantage of combinatorial geometry (CG) to simulate rarefied gas flows in arbitrary media. The developed code, called DgSMC-B, is written in Fortran 90 with parallel-processing capability using the OpenMP framework. DgSMC-B can handle three-dimensional (3D) geometries constructed from first- and second-order surfaces. It performs independent particle tracking through complex geometry without requiring a mesh; in addition, it resolves the computational domain boundary and computes cell volumes in border grids using a hexahedral mesh. The code is robust and self-contained, requiring no separate tools such as mesh generators. Results for six test cases are presented to demonstrate its ability to handle a wide range of benchmark problems with sophisticated geometries, such as the NACA 0012 airfoil. The DgSMC-B code demonstrates its performance and accuracy in a variety of problems, and the results are in good agreement with reference and experimental data.
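Mesh-free particle tracking in a CG-based code reduces, at its core, to computing a particle's flight time to the nearest first- or second-order bounding surface. The sketch below is purely illustrative (the function name and interfaces are invented, not DgSMC-B's Fortran routines); it shows the quadratic-root test for a sphere, the simplest second-order surface:

```python
import math

def sphere_hit_time(pos, vel, center, radius):
    """Time of flight until a straight-moving particle crosses a sphere
    (a second-order CG surface); returns None if it never does."""
    # Solve |pos + t*vel - center|^2 = radius^2 for the smallest t > 0.
    dx = [p - c for p, c in zip(pos, center)]
    a = sum(v * v for v in vel)
    b = 2.0 * sum(d * v for d, v in zip(dx, vel))
    c = sum(d * d for d in dx) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None  # trajectory misses the surface entirely
    roots = [(-b - math.sqrt(disc)) / (2 * a), (-b + math.sqrt(disc)) / (2 * a)]
    hits = [t for t in roots if t > 1e-12]
    return min(hits) if hits else None

# A particle at the origin moving along +x reaches a unit sphere
# centered at (3, 0, 0) when it crosses the near face at x = 2.
t = sphere_hit_time((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (3.0, 0.0, 0.0), 1.0)
```

A full CG tracker would take the minimum hit time over all surfaces bounding the current cell and advance the particle by that amount.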

  10. Code Blue: methodology for a qualitative study of teamwork during simulated cardiac arrest

    PubMed Central

    Clarke, Samuel; Apesoa-Varano, Ester Carolina; Barton, Joseph

    2016-01-01

    Introduction In-hospital cardiac arrest (IHCA) is a particularly vexing entity from the perspective of preparedness, as it is neither common nor truly rare. Survival from IHCA requires the coordinated efforts of multiple providers with different skill sets who may have little prior experience working together. Survival rates have remained low despite advances in therapy, suggesting that human factors may be at play. Methods and analysis This qualitative study uses a quasi-ethnographic data collection approach, combining focus group interviews with providers involved in IHCA resuscitation and analysis of video recordings from in situ simulated cardiac arrest events. Using grounded theory-based analysis, we intend to understand the organisational, interpersonal, cognitive and behavioural dimensions of IHCA resuscitation, and to build a descriptive model of code team functioning. Ethics and dissemination This ongoing study has been approved by the IRB at UC Davis Medical Center. Results The results will be disseminated in a subsequent manuscript. PMID:26758258

  11. Simulation and calculation of particle trapping using a quasistatic 2D simulation code

    NASA Astrophysics Data System (ADS)

    Morshed, Sepehr; Antonsen, Thomas; Huang, Chengkun; Mori, Warren

    2008-11-01

    In laser wakefield acceleration (LWFA) schemes the laser pulse must propagate several centimeters, corresponding to many Rayleigh lengths, while maintaining its coherence. The wakefields and their effect on the laser can be simulated in the quasistatic approximation [1, 2], which assumes that the driver (the laser) does not change shape during the time it takes to pass by a plasma particle. As a result, particles that are trapped and moving with near-luminal velocity cannot be treated within this approximation. Here we have modified the 2D code WAKE with an alternate algorithm: when a plasma particle gains sufficient energy from the wakefields, it is promoted to beam-particle status and may later become trapped in the laser wakefields. Similar implementations have been made in the 3D code QUICKPIC [2]. We have also compared WAKE against results from 200 TW laser simulations using OSIRIS [3]. These changes give WAKE users a tool that can simulate GeV acceleration on a desktop machine. [1] P. Mora and T. M. Antonsen Jr., Phys. Plasmas 4, 217 (1997) [2] C. Huang et al., Comp. Phys. 217 (2006) [3] W. Lu et al., Phys. Rev. ST Accel. Beams 10, 061301 (2007)
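The promotion step described in this abstract can be sketched in a few lines. The data layout and the energy-threshold rule below are illustrative assumptions, not WAKE's actual implementation:

```python
def promote_trapped(particles, gamma_threshold):
    """Split quasistatic plasma particles into those kept in the quasistatic
    description and those promoted to beam-particle status once they gain
    sufficient energy (a sketch; the threshold rule is invented)."""
    plasma, beam = [], []
    for p in particles:
        # promote any particle whose Lorentz factor exceeds the threshold
        (beam if p["gamma"] >= gamma_threshold else plasma).append(p)
    return plasma, beam

# Three hypothetical macroparticles; only one has gained enough energy.
parts = [{"id": 0, "gamma": 1.01}, {"id": 1, "gamma": 9.5}, {"id": 2, "gamma": 1.2}]
plasma, beam = promote_trapped(parts, gamma_threshold=5.0)
```

In the real code the promoted particles would then be pushed with the full (non-quasistatic) equations of motion.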

  12. Comparison of DAC and MONACO DSMC Codes with Flat Plate Simulation

    NASA Technical Reports Server (NTRS)

    Padilla, Jose F.

    2010-01-01

    Various implementations of the direct simulation Monte Carlo (DSMC) method exist in academia, government and industry. By comparing implementations, deficiencies and merits of each can be discovered. This document reports comparisons between DSMC Analysis Code (DAC) and MONACO. DAC is NASA's standard DSMC production code and MONACO is a research DSMC code developed in academia. These codes have various differences; in particular, they employ distinct computational grid definitions. In this study, DAC and MONACO are compared by having each simulate a blunted flat plate wind tunnel test, using an identical volume mesh. Simulation expense and DSMC metrics are compared. In addition, flow results are compared with available laboratory data. Overall, this study revealed that both codes, excluding grid adaptation, performed similarly. For parallel processing, DAC was generally more efficient. As expected, code accuracy was mainly dependent on physical models employed.

  13. Simulation of spacecraft attitude dynamics using TREETOPS and model-specific computer codes

    NASA Technical Reports Server (NTRS)

    Cochran, John E.; No, T. S.; Fitz-Coy, Norman G.

    1989-01-01

    The simulation of spacecraft attitude dynamics and control using the generic, multi-body code called TREETOPS and other codes written especially to simulate particular systems is discussed. Differences in the methods used to derive equations of motion--Kane's method for TREETOPS and the Lagrangian and Newton-Euler methods, respectively, for the other two codes--are considered. Simulation results from the TREETOPS code are compared with those from the other two codes for two example systems. One system is a chain of rigid bodies; the other consists of two rigid bodies attached to a flexible base body. Since the computer codes were developed independently, consistent results serve as a verification of the correctness of all the programs. Differences in the results are discussed. Results for the two-rigid-body, one-flexible-body system are useful also as information on multi-body, flexible, pointing payload dynamics.

  14. Plug-in to Eclipse environment for VHDL source code editor with advanced formatting of text

    NASA Astrophysics Data System (ADS)

    Niton, B.; Pozniak, K. T.; Romaniuk, R. S.

    2011-10-01

    The paper describes the idea and realization of a smart plug-in for the Eclipse software environment. The plug-in is intended for editing VHDL source code and considerably extends the capabilities of the open-licensed VEditor program. Results of the formatting procedures performed on chosen examples of VHDL source code are presented. The work is part of a larger project to build a smart programming environment for the design of advanced photonic and electronic systems; examples of such systems are quoted in the references.

  15. The IDA Advanced Technology Combat Simulation Project

    DTIC Science & Technology

    1990-09-01

    This paper was prepared as part of IDA Project 9000-623 under the IDA Central Research Program.

  16. Simulator design for advanced ISDN satellite design and experiments

    NASA Technical Reports Server (NTRS)

    Pepin, Gerald R.

    1992-01-01

    This simulation design task completion report documents the simulation techniques associated with the network models of both the Interim Service ISDN (integrated services digital network) Satellite (ISIS) and the Full Service ISDN Satellite (FSIS) architectures. The ISIS network model design represents satellite systems like the Advanced Communication Technology Satellite (ACTS) orbiting switch. The FSIS architecture, the ultimate aim of this element of the Satellite Communications Applications Research (SCAR) program, moves all control and switching functions on board the next-generation ISDN communication satellite. The technical and operational parameters for the advanced ISDN communications satellite design will be obtained from simulation of the ISIS and FSIS engineering software models of their major subsystems. Discrete-event simulation experiments will be performed with these models using various traffic scenarios, design parameters, and operational procedures. The data from these simulations will be used to determine the engineering parameters for the advanced ISDN communications satellite.
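A discrete-event experiment of the kind described reduces to a priority queue of timestamped events whose handlers may schedule further events. The minimal loop below is a generic illustration; the names and the toy arrival process are invented and do not reflect the SCAR simulators' design:

```python
import heapq

def run_discrete_event(events, horizon):
    """Tiny discrete-event loop: pop the earliest event, let its handler
    schedule follow-up events, and stop at the time horizon."""
    queue = list(events)          # entries are (time, seq, handler) tuples
    heapq.heapify(queue)
    log, seq = [], len(queue)
    while queue:
        t, _, handler = heapq.heappop(queue)
        if t > horizon:
            break
        log.append(t)
        for dt, h in handler(t):  # handler returns (delay, handler) pairs
            seq += 1              # seq breaks ties so handlers never compare
            heapq.heappush(queue, (t + dt, seq, h))
    return log

def arrival(t):
    # toy traffic source: each arrival schedules the next one 2.0 units later
    return [(2.0, arrival)] if t < 6.0 else []

times = run_discrete_event([(0.0, 0, arrival)], horizon=10.0)
```

Real traffic scenarios would mix many such sources with service and switching events on the same queue.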

  17. Enhanced Capabilities of Advanced Airborne Radar Simulation.

    DTIC Science & Technology

    1996-01-01

    CPU run times for executing the enhanced simulation on the RCF UNIX-based machine BAUHAUS are presented to illustrate the enhancements in run time compared with the original version of the simulation [1].

  18. Program Code Generator for Cardiac Electrophysiology Simulation with Automatic PDE Boundary Condition Handling

    PubMed Central

    Punzalan, Florencio Rusty; Kunieda, Yoshitoshi; Amano, Akira

    2015-01-01

    Clinical and experimental studies involving human hearts can have certain limitations. Methods such as computer simulations can be an important alternative or supplemental tool. Physiological simulation at the tissue or organ level typically involves the handling of partial differential equations (PDEs). Boundary conditions and distributed parameters, such as those used in pharmacokinetics simulation, add to the complexity of the PDE solution. These factors can tailor PDE solutions and their corresponding program code to specific problems. Boundary condition and parameter changes in the customized code are usually error-prone and time-consuming. We propose a general approach for handling PDEs and boundary conditions in computational models using a replacement scheme for discretization. This study is an extension of a program generator that we introduced in a previous publication. The program generator can generate code for multi-cell simulations of cardiac electrophysiology. Improvements to the system allow it to handle simultaneous equations in the biological function model as well as implicit PDE numerical schemes. The replacement scheme involves substituting all partial differential terms with numerical solution equations. Once the model and boundary equations are discretized with the numerical solution scheme, instances of the equations are generated to undergo dependency analysis. The result of the dependency analysis is then used to generate the program code. The resulting program code is in the Java or C programming language. To validate the automatic handling of boundary conditions in the program code generator, we generated simulation code using the FHN, Luo-Rudy 1, and Hund-Rudy cell models and ran cell-to-cell coupling and action potential propagation simulations. One of the simulations is based on a published experiment and simulation results are compared with the experimental data. We conclude that the proposed program code generator can be used to
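The replacement scheme can be illustrated with plain string substitution: each symbolic partial-derivative term in a model equation is swapped for its finite-difference stencil before code generation. The term names and syntax below are invented for illustration and do not reflect the generator's actual input language:

```python
def discretize(model_eq, var, dx):
    """Replace symbolic PDE terms in a model-equation string with central
    finite-difference expressions (a toy version of the replacement scheme)."""
    laplacian = f"(({var}[i+1] - 2*{var}[i] + {var}[i-1]) / {dx}**2)"
    gradient = f"(({var}[i+1] - {var}[i-1]) / (2*{dx}))"
    # replace the second-derivative token first so its text is not
    # partially consumed by the first-derivative replacement
    return (model_eq
            .replace(f"d2{var}_dx2", laplacian)
            .replace(f"d{var}_dx", gradient))

# A cable-equation-like right-hand side with a hypothetical ionic term.
eq = "dV_dt = D * d2V_dx2 + I_ion"
code_line = discretize(eq, "V", "dx")
```

The actual generator additionally instantiates such lines per grid point and orders them by dependency analysis before emitting Java or C.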

  20. Predicting Performance in Technical Preclinical Dental Courses Using Advanced Simulation.

    PubMed

    Gottlieb, Riki; Baechle, Mary A; Janus, Charles; Lanning, Sharon K

    2017-01-01

    The aim of this study was to investigate whether advanced simulation parameters, such as simulation exam scores, number of student self-evaluations, time to complete the simulation, and time to complete self-evaluations, served as predictors of dental students' preclinical performance. Students from three consecutive classes (n=282) at one U.S. dental school completed advanced simulation training and exams within the first four months of their dental curriculum. The students then completed conventional preclinical instruction and exams in operative dentistry (OD) and fixed prosthodontics (FP) courses, taken during the first and second years of dental school, respectively. Two advanced simulation exam scores (ASES1 and ASES2) were tested as predictors of performance in the two preclinical courses based on final course grades. ASES1 and ASES2 were found to be predictors of OD and FP preclinical course grades. Other advanced simulation parameters were not significantly related to grades in the preclinical courses. These results highlight the value of an early psychomotor skills assessment in dentistry. Advanced simulation scores may allow early intervention in students' learning process and assist in efficient allocation of resources such as faculty coverage and tutor assignment.

  1. Neoclassical simulation of tokamak plasmas using the continuum gyrokinetic code TEMPEST.

    PubMed

    Xu, X Q

    2008-07-01

    We present gyrokinetic neoclassical simulations of tokamak plasmas with a self-consistent electric field using a fully nonlinear (full-f) continuum code, TEMPEST, in circular geometry. A set of gyrokinetic equations is discretized on a five-dimensional computational grid in phase space. The present implementation is a method-of-lines approach in which the phase-space derivatives are discretized with finite differences and implicit backward differencing formulas are used to advance the system in time. The fully nonlinear Boltzmann model is used for electrons. The neoclassical electric field is obtained by solving the gyrokinetic Poisson equation with self-consistent poloidal variation. With a four-dimensional (psi, theta, epsilon, mu) version of the TEMPEST code, we compute the radial particle and heat fluxes, the geodesic acoustic mode, and the development of the neoclassical electric field, which we compare with neoclassical theory using a Lorentz collision model. The present work provides a numerical scheme for self-consistently studying important dynamical aspects of neoclassical transport and the electric field in toroidal magnetic fusion devices.
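The method-of-lines approach described here, finite differences in space advanced with implicit backward differencing in time, can be illustrated on a far simpler problem. The sketch below applies backward Euler with a tridiagonal (Thomas-algorithm) solve to the 1D heat equation; it is a generic illustration of the numerical pattern, not TEMPEST code:

```python
def backward_euler_heat(u, D, dx, dt, steps):
    """Method of lines for u_t = D u_xx with fixed (Dirichlet) endpoints:
    central differences in space, implicit backward Euler in time.
    Each step solves a tridiagonal system with the Thomas algorithm."""
    n = len(u)
    r = D * dt / dx**2
    for _ in range(steps):
        # interior unknowns u[1..n-2]; boundary values held fixed
        a = [-r] * (n - 2)         # sub-diagonal
        b = [1 + 2 * r] * (n - 2)  # diagonal
        c = [-r] * (n - 2)         # super-diagonal
        d = u[1:n - 1]             # right-hand side (copy of old interior)
        d[0] += r * u[0]
        d[-1] += r * u[-1]
        for i in range(1, n - 2):  # forward elimination
            m = a[i] / b[i - 1]
            b[i] -= m * c[i - 1]
            d[i] -= m * d[i - 1]
        x = [0.0] * (n - 2)        # back substitution
        x[-1] = d[-1] / b[-1]
        for i in range(n - 4, -1, -1):
            x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
        u = [u[0]] + x + [u[-1]]
    return u

u0 = [0.0, 0.0, 1.0, 0.0, 0.0]  # a hot spot that diffuses away
u1 = backward_euler_heat(u0, D=1.0, dx=1.0, dt=0.5, steps=10)
```

The implicit step is what buys unconditional stability, the same reason TEMPEST uses implicit backward differencing formulas in time.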

  2. 14 CFR Appendix H to Part 121 - Advanced Simulation

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... check airmen must include training policies and procedures, instruction methods and techniques... and a means for achieving flightcrew training in advanced airplane simulators. The requirements in... Simulation Training Program For an operator to conduct Level C or D training under this appendix all...

  3. Molecular dynamics simulations: advances and applications

    PubMed Central

    Hospital, Adam; Goñi, Josep Ramon; Orozco, Modesto; Gelpí, Josep L

    2015-01-01

    Molecular dynamics simulations have evolved into a mature technique that can be used effectively to understand macromolecular structure-to-function relationships. Present simulation times are close to biologically relevant ones. The information gathered about the dynamic properties of macromolecules is rich enough to shift the usual paradigm of structural bioinformatics from studying single structures to analyzing conformational ensembles. Here, we describe the foundations of molecular dynamics and the improvements made toward obtaining such ensembles. Specific application of the technique to three main issues (allosteric regulation, docking, and structure refinement) is discussed. PMID:26604800

  4. SKIRT: An advanced dust radiative transfer code with a user-friendly architecture

    NASA Astrophysics Data System (ADS)

    Camps, P.; Baes, M.

    2015-03-01

    We discuss the architecture and design principles that underpin the latest version of SKIRT, a state-of-the-art open source code for simulating continuum radiation transfer in dusty astrophysical systems, such as spiral galaxies and accretion disks. SKIRT employs the Monte Carlo technique to emulate the relevant physical processes including scattering, absorption and emission by the dust. The code features a wealth of built-in geometries, radiation source spectra, dust characterizations, dust grids, and detectors, in addition to various mechanisms for importing snapshots generated by hydrodynamical simulations. The configuration for a particular simulation is defined at run-time through a user-friendly interface suitable for both occasional and power users. These capabilities are enabled by careful C++ code design. The programming interfaces between components are well defined and narrow. Adding a new feature is usually as simple as adding another class; the user interface automatically adjusts to allow configuring the new options. We argue that many scientific codes, like SKIRT, can benefit from careful object-oriented design and from a friendly user interface, even if it is not a graphical user interface.
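The core Monte Carlo loop of such a code, photon packets flying exponentially distributed optical depths and either scattering or being absorbed by dust, can be sketched for a 1D slab. This toy version is not SKIRT's C++ implementation; all names, the isotropic phase function, and the slab setup are illustrative assumptions:

```python
import random

def transmitted_fraction(n_packets, tau_slab, albedo, seed=1):
    """Fraction of photon packets crossing a 1D slab of optical depth
    tau_slab, with scattering probability `albedo` per interaction."""
    rng = random.Random(seed)
    escaped = 0
    for _ in range(n_packets):
        tau, mu = 0.0, 1.0                 # enter at the near face, heading in
        while True:
            tau += mu * rng.expovariate(1.0)  # fly an exp-distributed depth
            if tau >= tau_slab:
                escaped += 1                  # out through the far face
                break
            if tau <= 0.0:
                break                         # back out the near face
            if rng.random() > albedo:
                break                         # absorbed by the dust
            mu = 2.0 * rng.random() - 1.0     # isotropic scattering direction
    return escaped / n_packets

# An optically thin slab transmits most packets (direct beam ~ exp(-0.1)).
frac = transmitted_fraction(2000, tau_slab=0.1, albedo=0.5)
```

A production code layers geometry grids, wavelength-dependent dust properties, and detectors on top of exactly this kind of loop.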

  5. GYSELA, a full-f global gyrokinetic Semi-Lagrangian code for ITG turbulence simulations

    SciTech Connect

    Grandgirard, V.; Sarazin, Y.; Garbet, X.; Dif-Pradalier, G.; Ghendrih, Ph.; Besse, N.; Bertrand, P.

    2006-11-30

    This work addresses non-linear global gyrokinetic simulations of ion temperature gradient (ITG) driven turbulence with the GYSELA code. The particularity of the GYSELA code is its use of a fixed grid with a Semi-Lagrangian (SL) scheme, applied to the entire distribution function. The 4D non-linear drift-kinetic version of the code has already shown the interest of such an SL method, which exhibits good energy-conservation properties in the non-linear regime as well as an accurate description of fine spatial scales. The code has been upgraded to run 5D simulations of toroidal ITG turbulence. Linear benchmarks and first non-linear results show that semi-Lagrangian codes can be a credible alternative for gyrokinetic simulations.
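A semi-Lagrangian step on a fixed grid traces each grid point back along its characteristic and interpolates the distribution function at the departure point. The 1D constant-advection sketch below illustrates the idea; it is a toy example, not GYSELA's 5D scheme:

```python
def semi_lagrangian_step(f, c, dx, dt):
    """One semi-Lagrangian step for f_t + c f_x = 0 on a periodic grid:
    follow the characteristic back from each grid point and linearly
    interpolate the departure-point value."""
    n = len(f)
    out = []
    for i in range(n):
        x_dep = (i * dx - c * dt) / dx  # departure point, in grid units
        j = int(x_dep // 1)             # floor index
        w = x_dep - j                   # interpolation weight
        out.append((1 - w) * f[j % n] + w * f[(j + 1) % n])
    return out

f0 = [0.0, 0.0, 1.0, 0.0]
# With c*dt/dx = 1 the bump shifts by exactly one cell, with no diffusion.
f1 = semi_lagrangian_step(f0, c=1.0, dx=1.0, dt=1.0)
```

Because the foot of the characteristic is found exactly and only the interpolation is approximate, the scheme has no CFL stability limit, which is what makes it attractive on a fixed grid.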

  6. A code to simulate nuclear reactor inventories and associated gamma-ray spectra.

    PubMed

    Cresswell, A J; Allyson, J D; Sanderson, D C

    2001-01-01

    A computer code has been developed to simulate the gamma-ray spectra that would be measured by airborne gamma spectrometry (AGS) systems from sources containing short-lived fission products. The code uses simple numerical methods to simulate the production and decay of fission products and generates spectra for sodium iodide (NaI) detectors using Monte Carlo codes. A new Monte Carlo code using a virtual array of detectors to reduce simulation times for airborne geometries is described. Spectra generated for a short irradiation and laboratory geometry have been compared with an experimental data set. The agreement is good. Spectra have also been generated for airborne geometries and longer irradiation periods. The application of this code to generate AGS spectra for accident scenarios and their uses in the development and evaluation of spectral analysis methods for such situations are discussed.
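The production-and-decay balance such a code integrates has a closed form for a single nuclide produced at a constant rate, which is useful for checking the numerical scheme. The function name and parameter values below are invented for illustration, not the code's API:

```python
import math

def inventory_after(irradiation_time, decay_time, production_rate, half_life):
    """Atoms of a single fission product produced at a constant rate during
    irradiation (dN/dt = P - lam*N), then decaying freely afterwards."""
    lam = math.log(2.0) / half_life
    # build-up toward saturation: N(t) = (P/lam) * (1 - exp(-lam*t))
    n_end = (production_rate / lam) * (1.0 - math.exp(-lam * irradiation_time))
    # free decay after the irradiation ends
    return n_end * math.exp(-lam * decay_time)

# After many half-lives of irradiation the inventory saturates at P/lam.
n_sat = inventory_after(1e6, 0.0, production_rate=5.0, half_life=10.0)
```

Chains of short-lived fission products couple several such equations, which is why the code integrates them numerically rather than analytically.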

  7. Advanced Simulation and Computing Business Plan

    SciTech Connect

    Rummel, E.

    2015-07-09

    To maintain a credible nuclear weapons program, the National Nuclear Security Administration's (NNSA's) Office of Defense Programs (DP) needs to make certain that the capabilities, tools, and expert staff are in place and are able to deliver validated assessments. This requires a complete and robust simulation environment backed by an experimental program to test ASC Program models. This ASC Business Plan document encapsulates a complex set of elements, each of which is essential to the success of the simulation component of the Nuclear Security Enterprise. The ASC Business Plan addresses the hiring, mentoring, and retaining of programmatic technical staff responsible for building the simulation tools of the nuclear security complex. The ASC Business Plan describes how the ASC Program engages with industry partners, upon whom it relies for today's and tomorrow's high-performance architectures. Each piece in this chain is essential to assure policymakers, who must make decisions based on the results of simulations, that they are receiving all the actionable information they need.

  8. Interactive visualization to advance earthquake simulation

    USGS Publications Warehouse

    Kellogg, L.H.; Bawden, G.W.; Bernardin, T.; Billen, M.; Cowgill, E.; Hamann, B.; Jadamec, M.; Kreylos, O.; Staadt, O.; Sumner, D.

    2008-01-01

    The geological sciences are challenged to manage and interpret increasing volumes of data as observations and simulations increase in size and complexity. For example, simulations of earthquake-related processes typically generate complex, time-varying data sets in two or more dimensions. To facilitate interpretation and analysis of these data sets, evaluate the underlying models, and drive future calculations, we have developed methods of interactive visualization with a special focus on using immersive virtual reality (VR) environments to interact with models of Earth's surface and interior. Virtual mapping tools allow virtual "field studies" in inaccessible regions. Interactive tools allow us to manipulate shapes in order to construct models of geological features for geodynamic models, while feature extraction tools support quantitative measurement of structures that emerge from numerical simulation or field observations, thereby enabling us to improve our interpretation of the dynamical processes that drive earthquakes. VR has traditionally been used primarily as a presentation tool, albeit with active navigation through data. Reaping the full intellectual benefits of immersive VR as a tool for scientific analysis requires building on the method's strengths, that is, using both 3D perception and interaction with observed or simulated data. This approach also takes advantage of the specialized skills of geological scientists, who are trained to interpret the often limited geological and geophysical data available from field observations. © Birkhäuser 2008.

  9. Simulation Credibility: Advances in Verification, Validation, and Uncertainty Quantification

    NASA Technical Reports Server (NTRS)

    Mehta, Unmeel B. (Editor); Eklund, Dean R.; Romero, Vicente J.; Pearce, Jeffrey A.; Keim, Nicholas S.

    2016-01-01

    Decision makers and other users of simulations need to know quantified simulation credibility to make simulation-based critical decisions and effectively use simulations, respectively. The credibility of a simulation is quantified by its accuracy in terms of uncertainty, and the responsibility of establishing credibility lies with the creator of the simulation. In this volume, we present some state-of-the-art philosophies, principles, and frameworks. The contributing authors involved in this publication have been dedicated to advancing simulation credibility. They detail and provide examples of key advances over the last 10 years in the processes used to quantify simulation credibility: verification, validation, and uncertainty quantification. The philosophies and assessment methods presented here are anticipated to be useful to other technical communities conducting continuum physics-based simulations; for example, issues related to the establishment of simulation credibility in the discipline of propulsion are discussed. We envision that simulation creators will find this volume very useful to guide and assist them in quantitatively conveying the credibility of their simulations.

  10. Simulation and calculation of particle trapping using a quasistatic simulation code

    NASA Astrophysics Data System (ADS)

    Morshed, Sepehr; Palastro, John; Antonsen, Thomas; Huang, Chengkun; Mori, Warren

    2007-11-01

    In LWFA schemes the laser pulse must propagate several centimeters, corresponding to many Rayleigh lengths, while maintaining its coherence. The wakefields and their effect on the laser can be simulated in the quasistatic approximation [1, 2], which assumes that the driver (the laser) does not change shape during the time it takes to pass by a plasma particle. As a result, particles that are trapped and moving with near-luminal velocity cannot be treated within this approximation. Here we have modified the 2D code WAKE with an alternate algorithm: when a plasma particle gains sufficient energy from the wakefields to satisfy the trapping conditions, it becomes trapped. Similar implementations have been made in the 3D code QUICKPIC [2]. We have also simulated and compared results from WAKE for the centimeter-scale GeV electron accelerator experiments at LBL [3]. These changes give WAKE users a tool that can simulate GeV acceleration on a desktop machine. [1] P. Mora and T. M. Antonsen Jr., Phys. Plasmas 4, 217 (1997) [2] C. Huang et al., Comp. Phys. 217 (2006) [3] W. P. Leemans et al., Nature Phys. 2, 696 (2006)

  11. Medium-rate speech coding simulator for mobile satellite systems

    NASA Astrophysics Data System (ADS)

    Copperi, Maurizio; Perosino, F.; Rusina, F.; Albertengo, G.; Biglieri, E.

    1986-01-01

    Channel modeling and error protection schemes for speech coding are described. A residual-excited linear predictive (RELP) coder for bit rates of 4.8, 7.2, and 9.6 kbit/s is outlined. The coder at 9.6 kbit/s incorporates a number of channel error protection techniques, such as bit interleaving, error correction codes, and parameter repetition. Results of formal subjective experiments (DRT and DAM tests) under various channel conditions reveal that the proposed coder outperforms conventional LPC-10 vocoders by two subjective categories, confirming the suitability of the RELP coder at 9.6 kbit/s for good-quality speech transmission in mobile satellite systems.
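Block bit interleaving, one of the protection techniques mentioned, spreads a channel burst error across many codewords by writing bits row by row and reading them column by column. A minimal sketch follows; the frame size and function names are illustrative, not the coder's actual parameters:

```python
def interleave(bits, rows, cols):
    """Block interleaver: write row-wise, read column-wise, so a burst of
    consecutive channel errors lands in different codewords."""
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    """Inverse permutation: write column-wise, read row-wise."""
    out = [0] * (rows * cols)
    k = 0
    for c in range(cols):
        for r in range(rows):
            out[r * cols + c] = bits[k]
            k += 1
    return out

frame = list(range(12))  # stand-in for 12 coded bits in one frame
sent = interleave(frame, rows=3, cols=4)
back = deinterleave(sent, rows=3, cols=4)
```

After deinterleaving, a burst of errors on the channel appears as isolated single-bit errors, which the error-correction code can then repair.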

  12. Flash Galaxy Cluster Merger, Simulated using the Flash Code, Mass Ratio 1:1

    ScienceCinema

    None

    2016-07-12

    Since structure in the universe forms in a bottom-up fashion, with smaller structures merging to form larger ones, modeling the merging process in detail is crucial to our understanding of cosmology. At the current epoch, we observe clusters of galaxies undergoing mergers. It is seen that the two major components of galaxy clusters, the hot intracluster gas and the dark matter, behave very differently during the course of a merger. Using the N-body and hydrodynamics capabilities in the FLASH code, we have simulated a suite of representative galaxy cluster mergers, including the dynamics of both the dark matter, which is collisionless, and the gas, which has the properties of a fluid. 3-D visualizations such as these demonstrate clearly the different behavior of these two components over time. Credits: Science: John Zuhone (Harvard-Smithsonian Center for Astrophysics); Visualization: Jonathan Gallagher (Flash Center, University of Chicago)

 This research used resources of the Argonne Leadership Computing Facility at Argonne National Laboratory, which is supported by the Office of Science of the U.S. Dept. of Energy (DOE) under contract DE-AC02-06CH11357. This research was supported by the National Nuclear Security Administration's (NNSA) Advanced Simulation and Computing (ASC) Academic Strategic Alliance Program (ASAP).

  13. Simulated data and code for analysis of herpetofauna response to forest management in the Missouri Ozarks.

    PubMed

    Rota, Christopher T; Wolf, Alexander J; Renken, Rochelle B; Gitzen, Robert A; Fantz, Debby K; Montgomery, Robert A; Olson, Matthew G; Vangilder, Larry D; Millspaugh, Joshua J

    2016-12-01

    We present predictor variables and R and Stan code for simulating and analyzing counts of Missouri Ozark herpetofauna in response to three forest management strategies. Our code serves four primary purposes: importing predictor variables from spreadsheets; simulating synthetic response variables based on the imported predictors and user-supplied values for the data-generating parameters; formatting the synthetic data for export to Stan; and analyzing the synthetic data.
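The simulate-then-export workflow above can be sketched as follows. All variable names, the Poisson log-linear model, and the parameter values are assumptions for illustration; they are not the paper's actual model or data.

```python
# Hedged sketch: draw synthetic counts from a Poisson log-linear model using
# assumed predictors and data-generating parameters, then package for Stan.
import numpy as np

rng = np.random.default_rng(42)
n_sites = 100
treatment = rng.integers(0, 3, size=n_sites)   # 3 forest-management strategies
beta = np.array([1.0, -0.3, 0.2])              # assumed data-generating params

log_lambda = beta[treatment]                   # log expected count per site
counts = rng.poisson(np.exp(log_lambda))       # synthetic response variable

stan_data = {"N": n_sites,
             "treatment": (treatment + 1).tolist(),  # Stan uses 1-based indexing
             "y": counts.tolist()}
```

Fitting `stan_data` and checking that the posterior recovers `beta` is the standard way to validate such simulation code.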

  14. Process simulation for advanced composites production

    SciTech Connect

    Allendorf, M.D.; Ferko, S.M.; Griffiths, S.

    1997-04-01

    The objective of this project is to improve the efficiency and lower the cost of chemical vapor deposition (CVD) processes used to manufacture advanced ceramics by providing the physical and chemical understanding necessary to optimize and control these processes. Project deliverables include: numerical process models; databases of thermodynamic and kinetic information related to the deposition process; and process sensors and software algorithms that can be used for process control. Target manufacturing techniques include CVD fiber coating technologies (used to deposit interfacial coatings on continuous fiber ceramic preforms), chemical vapor infiltration, thin-film deposition processes used in the glass industry, and coating techniques used to deposit wear-, abrasion-, and corrosion-resistant coatings for use in the pulp and paper, metals processing, and aluminum industries.

  15. Interoperable Technologies for Advanced Petascale Simulations

    SciTech Connect

    Li, Xiaolin

    2013-01-14

    Our final report on the accomplishments of ITAPS at Stony Brook during the period covered by the research award includes component services, interface services, and applications. On the component service, we designed and implemented robust functionality for Lagrangian tracking of dynamic interfaces. We migrated the hyperbolic, parabolic, and elliptic solvers from stage-wise second-order toward globally second-order schemes. We implemented high-order coupling between interface propagation and interior PDE solvers. On the interface service, we constructed the FronTier application programmer's interface (API) and its manual pages using doxygen. We installed the FronTier functional interface to conform with the ITAPS specifications, especially the iMesh and iMeshP interfaces. On applications, we implemented deposition and dissolution models with flow, and implemented the two-reactant model for more realistic precipitation at the pore level and its coupling with the Darcy-level model. We continued our support of the study of the fluid mixing problem for inertial confinement fusion, and of the MHD model and its application to plasma liner implosion in fusion confinement. We simulated a step in the reprocessing and separation of spent fuel from nuclear power plant fuel rods. We implemented fluid-structure interaction for 3D windmill and parachute simulations. We continued our collaboration with PNNL, BNL, LANL, ORNL, and other SciDAC institutions.

  16. Development of MCNPX-ESUT computer code for simulation of neutron/gamma pulse height distribution

    NASA Astrophysics Data System (ADS)

    Abolfazl Hosseini, Seyed; Vosoughi, Naser; Zangian, Mehdi

    2015-05-01

    In this paper, the development of the MCNPX-ESUT (MCNPX-Energy Engineering of Sharif University of Technology) computer code for simulation of neutron/gamma pulse height distributions is reported. Since liquid organic scintillators like NE-213 are well suited and routinely used for spectrometry in mixed neutron/gamma fields, this type of detector is selected for simulation in the present study. The proposed simulation algorithm includes four main steps. The first step is modeling the neutron/gamma particle transport and the particles' interactions with the materials in the environment and detector volume. In the second step, the number of scintillation photons produced by charged particles such as electrons, alphas, protons, and carbon nuclei in the scintillator material is calculated. In the third step, the transport of scintillation photons in the scintillator and light guide is simulated. Finally, the resolution corresponding to the experiment is applied in the last step of the simulation. Unlike similar computer codes such as SCINFUL, NRESP7, and PHRESP, the developed code is applicable to both neutron and gamma sources; hence, discrimination of neutrons and gammas in mixed fields may be performed using MCNPX-ESUT. The main feature of the MCNPX-ESUT computer code is that the neutron/gamma pulse height simulation may be performed without any post-processing. In the present study, the pulse height distributions due to monoenergetic neutron/gamma sources in an NE-213 detector are simulated using the MCNPX-ESUT computer code. The simulated neutron pulse height distributions are validated through comparison with experimental data (Gohil et al., Nuclear Instruments and Methods in Physics Research Section A 664 (2012) 304-309) and with results obtained from similar computer codes such as SCINFUL, NRESP7, and Geant4. The simulated gamma pulse height distribution for a 137Cs source is also presented.
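The second step above (scintillation photon yield from charged particles) is commonly modeled with Birks' law, under which light output per unit path length saturates at high stopping power; this is why heavy particles such as alphas produce less light than electrons of the same energy. The sketch below uses Birks' law purely for illustration, with placeholder constants rather than the paper's NE-213 values.

```python
# Hedged sketch of a scintillation light-yield model (Birks' law).
# S and kB below are placeholders, not fitted NE-213 constants.
def birks_light(dE_dx, dx, S=1.0, kB=0.01):
    """Light produced over a path dx (arbitrary units).

    dE_dx : stopping power of the charged particle
    S     : scintillation efficiency (assumed)
    kB    : Birks' constant (assumed)
    """
    return S * dE_dx * dx / (1.0 + kB * dE_dx)
```

The saturation is visible directly: per unit deposited energy, a densely ionizing particle (large `dE_dx`) yields less light than a lightly ionizing one.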

  17. Aerodynamic analysis of three advanced configurations using the TranAir full-potential code

    NASA Technical Reports Server (NTRS)

    Madson, M. D.; Carmichael, R. L.; Mendoza, J. P.

    1989-01-01

    Computational results are presented for three advanced configurations: the F-16A with wing tip missiles and under wing fuel tanks, the Oblique Wing Research Aircraft, and an Advanced Turboprop research model. These results were generated by the latest version of the TranAir full potential code, which solves for transonic flow over complex configurations. TranAir embeds a surface paneled geometry definition in a uniform rectangular flow field grid, thus avoiding the use of surface conforming grids, and decoupling the grid generation process from the definition of the configuration. The new version of the code locally refines the uniform grid near the surface of the geometry, based on local panel size and/or user input. This method distributes the flow field grid points much more efficiently than the previous version of the code, which solved for a grid that was uniform everywhere in the flow field. TranAir results are presented for the three configurations and are compared with wind tunnel data.
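The local refinement strategy described above can be sketched as a simple rule: a uniform Cartesian cell is subdivided until it is no larger than the surface panel it intersects, concentrating flow-field points near the geometry. The rule and names are illustrative assumptions, not TranAir's actual refinement criterion.

```python
# Illustrative cell-refinement rule: halve the cell until it matches the
# local panel size, up to a maximum refinement depth.
def refine_cell(cell_size, panel_size, max_levels=5):
    """Return the refinement level chosen for one cell."""
    level = 0
    while level < max_levels and cell_size > panel_size:
        cell_size /= 2.0          # split cell in half along each axis
        level += 1
    return level

deep = refine_cell(1.0, 0.1)      # fine panel near the surface: deep refinement
shallow = refine_cell(1.0, 2.0)   # panel coarser than cell: no refinement
```

Cells far from the body, where panels are absent or coarse, stay at level 0, which is the efficiency gain over the earlier uniform-grid version of the code.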

  18. Large Eddy Simulation of Flow in Turbine Cascades Using LESTool and UNCLE Codes

    NASA Technical Reports Server (NTRS)

    Ashpis, David (Technical Monitor); Huang, P. G.

    2004-01-01

    Between December 23, 1997 and August 31, 2004, we accomplished the development of two CFD codes for DNS/LES/RANS simulation of turbine cascade flows, namely LESTool and UNCLE. LESTool is a structured code making use of a 5th-order upwind differencing scheme, and UNCLE is a second-order-accurate unstructured code. LESTool has both dynamic SGS and Spalart's DES models, and UNCLE makes use of URANS and DES models. The current report provides a description of the methodologies used in the codes.

  19. Large Eddy Simulation of Flow in Turbine Cascades Using LESTool and UNCLE Codes

    NASA Technical Reports Server (NTRS)

    Huang, P. G.

    2004-01-01

    Between December 23, 1997 and August 31, 2004, we accomplished the development of two CFD codes for DNS/LES/RANS simulation of turbine cascade flows, namely LESTool and UNCLE. LESTool is a structured code making use of a 5th-order upwind differencing scheme, and UNCLE is a second-order-accurate unstructured code. LESTool has both dynamic SGS and Spalart's DES models, and UNCLE makes use of URANS and DES models. The current report provides a description of the methodologies used in the codes.

  20. The FLUKA Code: An Accurate Simulation Tool for Particle Therapy

    PubMed Central

    Battistoni, Giuseppe; Bauer, Julia; Boehlen, Till T.; Cerutti, Francesco; Chin, Mary P. W.; Dos Santos Augusto, Ricardo; Ferrari, Alfredo; Ortega, Pablo G.; Kozłowska, Wioletta; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R.; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly widespread in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of an MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in vivo treatment verification. This contribution addresses the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field, as shown in the presented benchmarks against experimental data with both 4He and 12C ion beams. Accurate description of ionization energy losses and of particle scattering and interactions leads to excellent agreement of calculated depth–dose profiles with those measured at leading European hadron therapy centers, both with proton and ion beams. To support the application of FLUKA in hospital-based environments, Flair, the FLUKA graphical interface, has been enhanced with the capability of translating CT DICOM images into voxel-based computational phantoms in a fast and well-structured way. The interface is also capable of importing radiotherapy treatment data described in the DICOM RT standard. In addition, the interface is equipped with an intuitive PET scanner geometry generator and automatic recording of coincidence events. Clinically similar cases will be presented in terms of both absorbed dose and biological dose calculations, describing the various available features. PMID:27242956

  1. The FLUKA Code: An Accurate Simulation Tool for Particle Therapy.

    PubMed

    Battistoni, Giuseppe; Bauer, Julia; Boehlen, Till T; Cerutti, Francesco; Chin, Mary P W; Dos Santos Augusto, Ricardo; Ferrari, Alfredo; Ortega, Pablo G; Kozłowska, Wioletta; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly widespread in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of an MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in vivo treatment verification. This contribution addresses the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field, as shown in the presented benchmarks against experimental data with both (4)He and (12)C ion beams. Accurate description of ionization energy losses and of particle scattering and interactions leads to excellent agreement of calculated depth-dose profiles with those measured at leading European hadron therapy centers, both with proton and ion beams. To support the application of FLUKA in hospital-based environments, Flair, the FLUKA graphical interface, has been enhanced with the capability of translating CT DICOM images into voxel-based computational phantoms in a fast and well-structured way. The interface is also capable of importing radiotherapy treatment data described in the DICOM RT standard. In addition, the interface is equipped with an intuitive PET scanner geometry generator and automatic recording of coincidence events. Clinically similar cases will be presented in terms of both absorbed dose and biological dose calculations, describing the various available features.

  2. A Linac Simulation Code for Macro-Particles Tracking and Steering Algorithm Implementation

    SciTech Connect

    Sun, Yipeng

    2012-05-03

    In this paper, a linac simulation code written in Fortran90 is presented and several simulation examples are given. This code is optimized to implement linac alignment and steering algorithms and to evaluate accelerator errors such as RF phase and acceleration gradient errors, and quadrupole and BPM misalignments. It can track a single particle or a bunch of particles through normal linear accelerator elements such as quadrupoles, RF cavities, dipole correctors, and drift spaces. A one-to-one steering algorithm and a global alignment (steering) algorithm are implemented in this code.
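The one-to-one steering idea mentioned above can be sketched in a few lines: each dipole corrector is set so that the immediately downstream BPM reads zero. The response model below (a simple drift of length L between corrector and BPM) is an assumption for illustration, not the paper's lattice model.

```python
# Minimal one-to-one steering sketch: for a drift of length L, a kick theta
# moves the downstream BPM reading by L * theta, so the corrective kick is
# -reading / L.
def one_to_one_steering(bpm_readings, L=1.0):
    """Return corrector kick angles (rad) that zero each downstream BPM."""
    return [-x / L for x in bpm_readings]

kicks = one_to_one_steering([0.002, -0.001, 0.0005], L=2.0)
```

Global algorithms instead minimize all BPM readings at once, which avoids the large corrector strengths one-to-one steering can accumulate along a misaligned linac.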

  3. GPU-optimized Code for Long-term Simulations of Beam-beam Effects in Colliders

    SciTech Connect

    Roblin, Yves; Morozov, Vasiliy; Terzic, Balsa; Aturban, Mohamed A.; Ranjan, D.; Zubair, Mohammed

    2013-06-01

    We report on the development of a new code for long-term simulation of beam-beam effects in particle colliders. The underlying physical model relies on matrix-based arbitrary-order symplectic particle tracking for beam transport and the Bassetti-Erskine approximation for the beam-beam interaction. The computations are accelerated through a parallel implementation on a hybrid GPU/CPU platform. With the new code, previously computationally prohibitive long-term simulations become tractable. We use the new code to model the proposed medium-energy electron-ion collider (MEIC) at Jefferson Lab.
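The Bassetti-Erskine formula used above treats elliptical Gaussian beams via the complex error function; the simpler round-beam limit shown below conveys the same physics for illustration: the radial kick from the opposing bunch grows linearly inside the core and falls off as 1/r outside it. Names and the strength constant are assumptions.

```python
# Hedged sketch of a beam-beam kick in the round-Gaussian-beam limit
# (a simplification of Bassetti-Erskine, shown for illustration only).
import math

def round_beam_kick(x, y, sigma, k=1.0):
    """(dx', dy') kick from a round Gaussian beam of rms size sigma."""
    r2 = x * x + y * y
    if r2 == 0.0:
        return (0.0, 0.0)                     # no kick at the beam center
    factor = k * (1.0 - math.exp(-r2 / (2.0 * sigma**2))) / r2
    return (factor * x, factor * y)
```

Far outside the core the kick approaches k/r, the field of a line charge, while a particle at the center feels no net force; both limits are what a tracking code must reproduce turn after turn.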

  4. Applications of the lahet simulation code to relativistic heavy ion detectors

    SciTech Connect

    Waters, L.; Gavron, A.

    1991-12-31

    The Los Alamos High Energy Transport (LAHET) simulation code has been applied to test beam data from the lead/scintillator Participant Calorimeter of BNL AGS experiment E814. The LAHET code treats hadronic interactions with the LANL version of the Oak Ridge code HETC. LAHET has now been expanded to handle hadrons with kinetic energies greater than 5 GeV with the FLUKA code, while HETC is used exclusively below 2.0 GeV. FLUKA is phased in linearly between 2.0 and 5.0 GeV. Transport of electrons and photons is done with EGS4, and an interface to the Los Alamos HMCNP3B library based code is provided to analyze neutrons with kinetic energies less than 20 MeV. Excellent agreement is found between the test data and simulation, and results for 2.46 GeV/c protons and pions are illustrated in this article.
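The energy-range hand-off described above can be sketched as a per-particle model choice: HETC below 2 GeV, FLUKA above 5 GeV, and a linear blend in between. Interpreting "phased in linearly" as a linear selection probability is an assumption for illustration.

```python
# Sketch of the HETC/FLUKA hand-off: the probability of using FLUKA rises
# linearly from 0 at 2 GeV to 1 at 5 GeV (an illustrative reading of
# "phased in linearly", not LAHET's documented internals).
import random

def choose_model(kinetic_energy_gev, rng=random.random):
    if kinetic_energy_gev < 2.0:
        return "HETC"
    if kinetic_energy_gev > 5.0:
        return "FLUKA"
    p_fluka = (kinetic_energy_gev - 2.0) / 3.0   # 0 at 2 GeV -> 1 at 5 GeV
    return "FLUKA" if rng() < p_fluka else "HETC"
```

A blended hand-off like this avoids a discontinuity in predicted observables at the boundary between the two hadronic models.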

  5. Electromagnetic simulations of the ASDEX Upgrade ICRF Antenna with the TOPICA code

    SciTech Connect

    Krivska, A.; Milanesio, D.; Bobkov, V.; Braun, F.; Noterdaeme, J.-M.

    2009-11-26

    Accurate and efficient simulation tools are necessary to optimize the ICRF antenna design for a set of operational conditions. The TOPICA code was developed for performance prediction and for the analysis of ICRF antenna systems in the presence of plasma, given realistic antenna geometries. Fully 3D antenna geometries can be adopted in TOPICA, just as in available commercial codes. But while those commercial codes cannot operate with a plasma loading, the TOPICA code correctly accounts for realistic plasma loading conditions, by means of the coupling with 1D FELICE code. This paper presents the evaluation of the electric current distribution on the structure, of the parallel electric field in the region between the straps and the plasma and the computation of sheaths driving RF potentials. Results of TOPICA simulations will help to optimize and re-design the ICRF ASDEX Upgrade antenna in order to reduce tungsten (W) sputtering attributed to the rectified sheath effect during ICRF operation.

  6. Interoperable Technologies for Advanced Petascale Simulations (ITAPS)

    SciTech Connect

    Shephard, Mark S

    2010-02-05

    Efforts during the past year have contributed to the continued development of the ITAPS interfaces and services as well as specific efforts to support ITAPS applications. The ITAPS interface efforts have two components. The first is working with the ITAPS team on improving the ITAPS software infrastructure and the level of compliance of our implementations of the ITAPS interfaces (iMesh, iMeshP, iRel and iGeom). The second is involvement in discussions on the design of the iField fields interface. Efforts to move the ITAPS technologies to petascale computers have identified a number of key technical developments that are required to effectively execute the ITAPS interfaces and services; research to address these parallel method developments has been a major emphasis of the RPI team's efforts over the past year. The development of parallel unstructured mesh methods has considered the need to scale unstructured mesh solves to massively parallel computers. These efforts, summarized in section 2.1, show that with the addition of the ITAPS procedures described in sections 2.2 and 2.3 we are able to obtain excellent strong scaling with our unstructured mesh CFD code on up to 294,912 cores of IBM Blue Gene/P, the highest core count machine available. The ITAPS developments that have contributed to the scaling and performance of PHASTA include an iterative migration algorithm to improve the combined region and vertex balance of the mesh partition, which increases scalability, and mesh data reordering, which improves computational performance. The other developments are associated with the further development of the ITAPS parallel unstructured mesh

  7. Evaluation of the Aleph PIC Code on Benchmark Simulations

    NASA Astrophysics Data System (ADS)

    Boerner, Jeremiah; Pacheco, Jose; Grillet, Anne

    2016-09-01

    Aleph is a massively parallel, 3D unstructured mesh, Particle-in-Cell (PIC) code, developed to model low temperature plasma applications. In order to verify and validate performance, Aleph is benchmarked against a series of canonical problems to demonstrate statistical indistinguishability in the results. Here, a series of four problems is studied: Couette flows over a range of Knudsen number, sheath formation in an undriven plasma, the two-stream instability, and a capacitive discharge. These problems respectively exercise collisional processes, particle motion in electrostatic fields, electrostatic field solves coupled to particle motion, and a fully coupled reacting plasma. Favorable comparison with accepted results establishes confidence in Aleph's capability and accuracy as a general purpose PIC code. Finally, Aleph is used to investigate the sensitivity of a triggered vacuum gap switch to the particle injection conditions associated with arc breakdown at the trigger. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  8. Brush seal numerical simulation: Concepts and advances

    NASA Technical Reports Server (NTRS)

    Braun, M. J.; Kudriavtsev, V. V.

    1994-01-01

    The brush seal is considered the most promising of the advanced seal types presently in use in high-speed turbomachinery. The brush is usually mounted on the stationary portions of the engine and has direct contact with the rotating element, limiting the 'unwanted' leakage flows between stages or various engine cavities. This type of sealing technology provides high pressure drops in comparison with conventional seals, due mainly to the high packing density (around 100 bristles/sq mm) and the brush's compliance with rotor motions. In the design of modern aerospace turbomachinery, leakage flows between stages must be minimal, contributing to higher engine efficiency. Use of a brush seal instead of a labyrinth seal reduces the leakage flow by one order of magnitude. Brush seals have also been found to enhance dynamic performance, cost less, and weigh less than labyrinth seals. Even though industrial brush seals have been successfully developed through extensive experimentation, there is no comprehensive numerical methodology for the design or prediction of their performance. The existing analytical/numerical approaches are based on bulk flow models and do not allow investigation of the effects of brush morphology (bristle arrangement) or brush arrangement (number of brushes, spacing between them) on pressure drops and flow leakage. Increasing brush seal efficiency is clearly a complex problem that is closely related to the brush geometry and arrangement, and can most likely be solved only by means of a numerically distributed model.

  9. Development of an Implicit, Charge and Energy Conserving 2D Electromagnetic PIC Code on Advanced Architectures

    NASA Astrophysics Data System (ADS)

    Payne, Joshua; Taitano, William; Knoll, Dana; Liebs, Chris; Murthy, Karthik; Feltman, Nicolas; Wang, Yijie; McCarthy, Colleen; Cieren, Emanuel

    2012-10-01

    In order to solve problems such as the ion coalescence and slow MHD shocks fully kinetically we developed a fully implicit 2D energy and charge conserving electromagnetic PIC code, PlasmaApp2D. PlasmaApp2D differs from previous implicit PIC implementations in that it will utilize advanced architectures such as GPUs and shared memory CPU systems, with problems too large to fit into cache. PlasmaApp2D will be a hybrid CPU-GPU code developed primarily to run on the DARWIN cluster at LANL utilizing four 12-core AMD Opteron CPUs and two NVIDIA Tesla GPUs per node. MPI will be used for cross-node communication, OpenMP will be used for on-node parallelism, and CUDA will be used for the GPUs. Development progress and initial results will be presented.

  10. On the Development of a Gridless Inflation Code for Parachute Simulations

    SciTech Connect

    STRICKLAND,JAMES H.; HOMICZ,GREGORY F.; GOSSLER,ALBERT A.; WOLFE,WALTER P.; PORTER,VICKI L.

    2000-08-29

    In this paper the authors present the current status of an unsteady 3D parachute simulation code which is being developed at Sandia National Laboratories under the Department of Energy's Accelerated Strategic Computing Initiative (ASCI). The Vortex Inflation PARachute code (VIPAR) which embodies this effort will eventually be able to perform complete numerical simulations of ribbon parachute deployment, inflation, and steady descent. At the present time they have a working serial version of the uncoupled fluids code which can simulate unsteady 3D incompressible flows around bluff bodies made up of triangular membrane elements. A parallel version of the code has just been completed which will allow one to compute flows over complex geometries utilizing several thousand processors on one of the new DOE teraFLOP computers.

  11. Simulation of Surface Ship Dynamics Using Unsteady RANS Codes

    DTIC Science & Technology

    2003-03-01

    [...acquisition of military vehicles through advanced modeling and virtual product simulation] This entry is an extract of a larger compilation report. Cited references include the AIAA CFD Conference, Norfolk, VA (Jun 1999), and Gaither, J. A., "A Solid Modeling Topology Data Structure for General Grid Generation," Master's Thesis.

  12. A computer code for beam dynamics simulations in SFRFQ structure

    NASA Astrophysics Data System (ADS)

    Wang, Z.; Chen, J. E.; Lu, Y. R.; Yan, X. Q.; Zhu, K.; Fang, J. X.; Guo, Z. Y.

    2007-03-01

    A computer code (SFRFQCODEv1.0) is developed to analyze the beam dynamics of the Separated Function Radio Frequency Quadrupole (SFRFQ) structure. Calculations show that transverse and longitudinal stability can be ensured by selecting proper dynamic and structure parameters. This paper describes the beam dynamics of the SFRFQ and presents a design example of an SFRFQ cavity, which will be used as a post-accelerator of a 26 MHz, 1 MeV O+ Integrated Split Ring (ISR) RFQ and accelerate O+ from 1 to 1.5 MeV. Three electrostatic quadrupoles are adopted to realize transverse beam matching from the ISR RFQ to the SFRFQ cavity. This setting is also useful for beam size adjustment and its applications.

  13. Simulation of Laser Wake Field Acceleration using a 2.5D PIC Code

    SciTech Connect

    An, W. M.; Hua, J. F.; Huang, W. H.; Tang, Ch. X.; Lin, Y. Z.

    2006-11-27

    A 2.5D PIC simulation code is developed to study LWFA (Laser Wakefield Acceleration). Electron self-injection and the generation of a mono-energetic electron beam in LWFA are briefly discussed through the simulation. This year's experiment at the SILEX-I laser facility is also introduced.

  14. A lightweight in situ visualization and analysis infrastructure for multi-physics HPC simulation codes

    SciTech Connect

    Harrison, Cyrus; Larsen, Matt; Brugger, Eric

    2016-12-05

    Strawman is a system designed to explore the in situ visualization and analysis needs of simulation code teams running multi-physics calculations on many-core HPC architectures. It provides rendering pipelines that can leverage both many-core CPUs and GPUs to render images of simulation meshes.

  15. Code modernization and modularization of APEX and SWAT watershed simulation models

    Technology Transfer Automated Retrieval System (TEKTRAN)

    SWAT (Soil and Water Assessment Tool) and APEX (Agricultural Policy / Environmental eXtender) are respectively large and small watershed simulation models derived from EPIC (Environmental Policy Integrated Climate), a field-scale agroecology simulation model. All three models are coded in FORTRAN an...

  16. The development of CACTUS: a wind and marine turbine performance simulation code.

    SciTech Connect

    Barone, Matthew Franklin; Murray, Jonathan

    2010-12-01

    CACTUS (Code for Axial and Cross-flow TUrbine Simulation) is a turbine performance simulation code, based on a free wake vortex method, under development at Sandia National Laboratories (SNL) as part of a Department of Energy program to study marine hydrokinetic (MHK) devices. The current effort builds upon work previously done at SNL in the area of vertical axis wind turbine simulation, and aims to add models to handle generic device geometry and physical models specific to the marine environment. An overview of the current state of the project and validation effort is provided.
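The basic operation of a free wake vortex method like the one described above is evaluating the velocity induced at a point by each vortex filament segment via the Biot-Savart law. The sketch below uses the standard straight-segment form with an unregularized kernel; the names and the cutoff are illustrative assumptions, not CACTUS internals.

```python
# Illustrative Biot-Savart induction from a straight vortex filament segment
# (Katz & Plotkin form), with a crude cutoff for points on the filament axis.
import numpy as np

def biot_savart_segment(p, a, b, gamma=1.0):
    """Velocity induced at point p by a filament from a to b, strength gamma."""
    r1, r2 = p - a, p - b
    cross = np.cross(r1, r2)
    denom = np.linalg.norm(cross)**2
    if denom < 1e-12:                      # point on (or near) the filament axis
        return np.zeros(3)
    dot = np.dot(b - a, r1 / np.linalg.norm(r1) - r2 / np.linalg.norm(r2))
    return gamma / (4.0 * np.pi) * cross / denom * dot
```

As a sanity check, a very long segment reproduces the infinite-line result, speed gamma / (2 pi r) at distance r; summing such contributions over all wake segments at every time step is what makes free wake methods expensive.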

  17. A Novel Technique for Running the NASA Legacy Code LAPIN Synchronously With Simulations Developed Using Simulink

    NASA Technical Reports Server (NTRS)

    Vrnak, Daniel R.; Stueber, Thomas J.; Le, Dzu K.

    2012-01-01

    This report presents a method for running a dynamic legacy inlet simulation in concert with another dynamic simulation that uses a graphical interface. The legacy code, NASA's LArge Perturbation INlet (LAPIN) model, was coded using the FORTRAN 77 (The Portland Group, Lake Oswego, OR) programming language to run in a command shell similar to other applications that used the Microsoft Disk Operating System (MS-DOS) (Microsoft Corporation, Redmond, WA). Simulink (MathWorks, Natick, MA) is a dynamic simulation that runs on a modern graphical operating system. The product of this work has both simulations, LAPIN and Simulink, running synchronously on the same computer with periodic data exchanges. Implementing the method described in this paper avoided extensive changes to the legacy code and preserved its basic operating procedure. This paper presents a novel method that promotes inter-task data communication between the synchronously running processes.

  18. MOCCA code for star cluster simulation: comparison with optical observations using COCOA

    NASA Astrophysics Data System (ADS)

    Askar, Abbas; Giersz, Mirek; Pych, Wojciech; Olech, Arkadiusz; Hypki, Arkadiusz

    2016-02-01

    We introduce and present preliminary results from COCOA (Cluster simulatiOn Comparison with ObservAtions) code for a star cluster after 12 Gyr of evolution simulated using the MOCCA code. The COCOA code is being developed to quickly compare results of numerical simulations of star clusters with observational data. We use COCOA to obtain parameters of the projected cluster model. For comparison, a FITS file of the projected cluster was provided to observers so that they could use their observational methods and techniques to obtain cluster parameters. The results show that the similarity of cluster parameters obtained through numerical simulations and observations depends significantly on the quality of observational data and photometric accuracy.
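The comparison above hinges on projecting the simulated cluster onto the plane of the sky before extracting parameters, since observers only measure projected quantities. A minimal sketch of that step, with mock positions and an assumed column layout rather than actual MOCCA output, follows.

```python
# Minimal projection sketch: drop the line-of-sight coordinate and measure a
# projected radius, as one would when building a mock observation.
import numpy as np

rng = np.random.default_rng(0)
xyz = rng.normal(size=(1000, 3))           # mock 3-D star positions (cluster frame)

xy = xyz[:, :2]                            # project along the line of sight (z)
r_proj = np.hypot(xy[:, 0], xy[:, 1])      # projected radii
r_half = np.median(r_proj)                 # projected half-number radius
```

Parameters measured on such projections (half-light radius, surface density profile) are what can be compared like-for-like against photometry of the synthetic FITS image.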

  19. Intercomparision of Monte Carlo Radiation Transport Codes MCNPX, GEANT4, and FLUKA for Simulating Proton Radiotherapy of the Eye

    PubMed Central

    Randeniya, S. D.; Taddei, P. J.; Newhauser, W. D.; Yepes, P.

    2010-01-01

    Monte Carlo simulations of an ocular treatment beam-line consisting of a nozzle and a water phantom were carried out using MCNPX, GEANT4, and FLUKA to compare the dosimetric accuracy and the simulation efficiency of the codes. Simulated central axis percent depth-dose profiles and cross-field dose profiles were compared with experimentally measured data for the comparison. Simulation speed was evaluated by comparing the number of proton histories simulated per second using each code. The results indicate that all the Monte Carlo transport codes calculate sufficiently accurate proton dose distributions in the eye and that the FLUKA transport code has the highest simulation efficiency. PMID:20865141

  20. Simulations of Edge Current Driven Kink Modes with BOUT++ code

    NASA Astrophysics Data System (ADS)

    Li, G. Q.; Xu, X. Q.; Snyder, P. B.; Turnbull, A. D.; Xia, T. Y.; Ma, C. H.; Xi, P. W.

    2013-10-01

    Edge kink modes (or peeling modes) play a key role in ELMs. The edge kink modes are driven by the peak edge current, which comes from the bootstrap current. We calculated sequences of equilibria with different edge currents using CORSICA, keeping the total current and pressure profile fixed. Based on these equilibria, with the 3-field BOUT++ code, we calculated the MHD instabilities driven by edge current. For linear low-n ideal MHD modes, BOUT++ results agree with GATO results. With increasing edge current, the dominant modes change from high-n ballooning modes to low-n kink modes. The edge current also provides stabilizing effects on high-n ballooning modes. Furthermore, for an edge current scan without keeping the total current fixed, the increasing edge current can stabilize the high-n ballooning modes and cannot drive kink modes. The diamagnetic effect can stabilize the high-n ballooning modes, but has no effect on the low-n kink modes. The nonlinear behavior of the kink modes is also analyzed. Work supported by China MOST grant 2013GB111000 and by China NSF grant 10975161. Also performed for USDOE by LLNL under DE-AC52-07NA27344.

  1. Application of advanced computational codes in the design of an experiment for a supersonic throughflow fan rotor

    NASA Technical Reports Server (NTRS)

    Wood, Jerry R.; Schmidt, James F.; Steinke, Ronald J.; Chima, Rodrick V.; Kunik, William G.

    1987-01-01

    Increased emphasis on sustained supersonic or hypersonic cruise has revived interest in the supersonic throughflow fan as a possible component in advanced propulsion systems. Use of a fan that can operate with a supersonic inlet axial Mach number is attractive from the standpoint of reducing the inlet losses incurred in diffusing the flow from a supersonic flight Mach number to a subsonic one at the fan face. The design of the experiment using advanced computational codes to calculate the components required is described. The rotor was designed using existing turbomachinery design and analysis codes modified to handle fully supersonic axial flow through the rotor. A two-dimensional axisymmetric throughflow design code plus a blade element code were used to generate fan rotor velocity diagrams and blade shapes. A quasi-three-dimensional, thin shear layer Navier-Stokes code was used to assess the performance of the fan rotor blade shapes. The final design was stacked and checked for three-dimensional effects using a three-dimensional Euler code interactively coupled with a two-dimensional boundary layer code. The nozzle design in the expansion region was analyzed with a three-dimensional parabolized viscous code which corroborated the results from the Euler code. A translating supersonic diffuser was designed using these same codes.

  2. Laser-Plasma Modeling Using PERSEUS Extended-MHD Simulation Code for HED Plasmas

    NASA Astrophysics Data System (ADS)

    Hamlin, Nathaniel; Seyler, Charles

    2016-10-01

    We discuss the use of the PERSEUS extended-MHD simulation code for high-energy-density (HED) plasmas in modeling laser-plasma interactions in relativistic and nonrelativistic regimes. By formulating the fluid equations as a relaxation system in which the current is semi-implicitly time-advanced using the Generalized Ohm's Law, PERSEUS enables modeling of two-fluid phenomena in dense plasmas without the need to resolve the smallest electron length and time scales. For relativistic and nonrelativistic laser-target interactions, we have validated a cycle-averaged absorption (CAA) laser driver model against the direct approach of driving the electromagnetic fields. The CAA model refers to driving the radiation energy and flux rather than the fields, and using hyperbolic radiative transport, coupled to the plasma equations via energy source terms, to model absorption and propagation of the radiation. CAA has the advantage of not requiring adequate grid resolution of each laser wavelength, so that the system can span many wavelengths without requiring prohibitive CPU time. For several laser-target problems, we compare existing MHD results to extended-MHD results generated using PERSEUS with the CAA model, and examine effects arising from Hall physics. This work is supported by the National Nuclear Security Administration stewardship sciences academic program under Department of Energy cooperative agreements DE-FOA-0001153 and DE-NA0001836.

  3. ADVANCES IN COMPREHENSIVE GYROKINETIC SIMULATIONS OF TRANSPORT IN TOKAMAKS

    SciTech Connect

    WALTZ RE; CANDY J; HINTON FL; ESTRADA-MILA C; KINSEY JE

    2004-10-01

    A continuum global gyrokinetic code GYRO has been developed to comprehensively simulate core turbulent transport in actual experimental profiles and enable direct quantitative comparisons to the experimental transport flows. GYRO not only treats the now standard ion temperature gradient (ITG) mode turbulence, but also treats trapped and passing electrons with collisions and finite {beta}, equilibrium ExB shear stabilization, and all in real tokamak geometry. Most importantly the code operates at finite relative gyroradius ({rho}{sub *}) so as to treat the profile shear stabilization and nonlocal effects which can break gyroBohm scaling. The code operates in either a cyclic flux-tube limit (which allows only gyroBohm scaling) or globally with physical profile variation. Bohm scaling of DIII-D L-mode has been simulated with power flows matching experiment within error bars on the ion temperature gradient. Mechanisms for broken gyroBohm scaling, neoclassical ion flows embedded in turbulence, turbulent dynamos and profile corrugations, plasma pinches and impurity flow, and simulations at fixed flow rather than fixed gradient are illustrated and discussed.

  4. Simulations of Failure via Three-Dimensional Cracking in Fuel Cladding for Advanced Nuclear Fuels

    SciTech Connect

    Lu, Hongbing; Bukkapatnam, Satish; Harimkar, Sandip; Singh, Raman; Bardenhagen, Scott

    2014-01-09

    Enhancing performance of fuel cladding and duct alloys is a key means of increasing fuel burnup. This project will address the failure of fuel cladding via three-dimensional cracking models. Researchers will develop a simulation code for the failure of the fuel cladding and validate the code through experiments. The objective is to develop an algorithm to determine the failure of fuel cladding in the form of three-dimensional cracking due to prolonged exposure under varying conditions of pressure, temperature, chemical environment, and irradiation. This project encompasses the following tasks: 1. Simulate 3D crack initiation and growth under instantaneous and/or fatigue loads using a new variant of the material point method (MPM); 2. Simulate debonding of the materials in the crack path using cohesive elements, considering normal and shear traction separation laws; 3. Determine the crack propagation path, considering damage of the materials incorporated in the cohesive elements to allow the energy release rate to be minimized; 4. Simulate the three-dimensional fatigue crack growth as a function of loading histories; 5. Verify the simulation code by comparing results to theoretical and numerical studies available in the literature; 6. Conduct experiments to observe the crack path and surface profile in unused fuel cladding and validate against simulation results; and 7. Expand the adaptive mesh refinement infrastructure parallel processing environment to allow adaptive mesh refinement at the 3D crack fronts and adaptive mesh merging in the wake of cracks. Fuel cladding is made of materials such as stainless steels and ferritic steels with added alloying elements, which increase stability and durability under irradiation. As fuel cladding is subjected to water, chemicals, fission gas, pressure, high temperatures, and irradiation while in service, understanding performance is essential. In the fast fuel used in advanced burner reactors, simulations of the nuclear

  5. SpectralPlasmaSolver: a Spectral Code for Multiscale Simulations of Collisionless, Magnetized Plasmas

    NASA Astrophysics Data System (ADS)

    Vencels, Juris; Delzanno, Gian Luca; Manzini, Gianmarco; Markidis, Stefano; Peng, Ivy Bo; Roytershteyn, Vadim

    2016-05-01

    We present the design and implementation of a spectral code, called SpectralPlasmaSolver (SPS), for the solution of the multi-dimensional Vlasov-Maxwell equations. The method is based on a Hermite-Fourier decomposition of the particle distribution function. The code is written in Fortran and uses the PETSc library for solving the non-linear equations and preconditioning, and the FFTW library for the convolutions. SPS is parallelized for shared-memory machines using OpenMP. As a verification example, we discuss simulations of the two-dimensional Orszag-Tang vortex problem and successfully compare them against a fully kinetic Particle-In-Cell simulation. An assessment of the performance of the code is presented, showing a significant improvement in running time achieved by preconditioning, while strong scaling tests show a factor of 10 speed-up using 16 threads.
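The Hermite half of the Hermite-Fourier decomposition can be illustrated in a few lines: project a velocity-space factor onto the probabilists' Hermite basis with Gauss-Hermite quadrature. This is a sketch of the general idea only; SPS's actual asymmetric Hermite basis, normalization, and Fortran/PETSc implementation are not reproduced here.

```python
import numpy as np
from numpy.polynomial import hermite_e as He

def hermite_coefficients(g, n_modes, n_quad=40):
    """Coefficients c_n of g(v) = sum_n c_n He_n(v) in the probabilists'
    Hermite basis, via Gauss-Hermite quadrature (weight exp(-v^2/2)),
    using the orthogonality <He_m, He_n> = sqrt(2*pi) * n! * delta_mn."""
    x, w = He.hermegauss(n_quad)       # nodes/weights for weight exp(-x^2/2)
    norm = np.sqrt(2.0 * np.pi)
    coeffs, fact = [], 1.0
    for n in range(n_modes):
        if n > 0:
            fact *= n                  # running n!
        basis = He.hermeval(x, [0.0] * n + [1.0])   # He_n at the nodes
        coeffs.append(np.sum(w * g(x) * basis) / (norm * fact))
    return np.array(coeffs)

# A drifting perturbation of the Maxwellian factor, g(v) = 1 + v, is exactly
# He_0(v) + He_1(v): only the first two Hermite moments are nonzero.
c = hermite_coefficients(lambda v: 1.0 + v, 4)
```

The appeal of the basis, which the paper exploits, is that near-Maxwellian distributions are captured by very few modes, so the expansion can be truncated aggressively.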

  6. Numerical Simulation of Non-Isothermal CO2 Injection Using the Thermo-Hydro-Mechanical Code CODE_BRIGHT

    NASA Astrophysics Data System (ADS)

    Vilarrasa, V.; Olivella, S.; Silva, O.; Carrera, J.

    2012-04-01

    Storage of carbon dioxide (CO2) in deep geological formations is considered an option for reducing greenhouse gas emissions to the atmosphere. Injecting CO2 into aquifers at depths greater than 800 m brings CO2 to a supercritical state where its density is large enough to ensure an efficient use of pore space. However, CO2 will always be lighter than the resident brine. Therefore, it will flow along the top of the aquifer because of buoyancy. Thus, suitable aquifers should be capped by a low-permeability rock to avoid CO2 migration to upper aquifers and the surface. Therefore, ensuring mechanical stability of the caprock is critical to avoid CO2 leakage. Yet, CO2 injection can result in significant pressure buildup, which affects the stress field and may induce large deformations (Vilarrasa et al., 2010b). These can eventually damage the caprock and open up new flow paths. Moreover, inflowing CO2 may not be in thermal equilibrium with the aquifer, which induces stress changes that may affect the caprock stability. We use the coupled thermo-hydro-mechanical finite element numerical code CODE_BRIGHT (Olivella et al., 1994, 1996) to simulate these processes. We have extended the code to simulate CO2 as a non-wetting phase. To this end, we have implemented the Redlich-Kwong equation of state for CO2. As a first step, two-phase flow studies (Vilarrasa et al., 2010a) were carried out. Next, coupled hydro-mechanical simulations were performed (Vilarrasa et al., 2010b). Finally, we have implemented CO2 thermal properties to simulate non-isothermal CO2 injection in deformable deep saline formations. Coupled thermo-hydro-mechanical simulations of CO2 injection produce a region in thermal equilibrium with the injected CO2. The thermal transition is abrupt. A small rise in the temperature of the supercritical CO2 region is produced by the exothermal reaction of CO2 dissolution into the brine. An induced thermal stress change due to thermal contraction/expansion of the rock
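The Redlich-Kwong equation of state mentioned in the abstract has a compact closed form. The sketch below uses the standard textbook coefficients and CO2 critical constants; it is an illustration, not the CODE_BRIGHT implementation.

```python
import math

# Redlich-Kwong equation of state with the standard critical-point
# coefficients; critical constants here are for CO2.
R = 8.314                    # gas constant, J/(mol K)
TC, PC = 304.13, 7.377e6     # CO2 critical temperature (K) and pressure (Pa)
A = 0.42748 * R**2 * TC**2.5 / PC
B = 0.08664 * R * TC / PC

def rk_pressure(T, Vm):
    """Pressure (Pa) at temperature T (K) and molar volume Vm (m^3/mol)."""
    return R * T / (Vm - B) - A / (math.sqrt(T) * Vm * (Vm + B))
```

At T = 313 K and a dense molar volume of 1e-4 m^3/mol the predicted pressure is on the order of 10 MPa, consistent with supercritical CO2 at storage depths below 800 m, while at large molar volume the expression reduces to the ideal-gas law.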

  7. Implications of advanced collision operators for gyrokinetic simulation

    NASA Astrophysics Data System (ADS)

    Belli, E. A.; Candy, J.

    2017-04-01

    In this work, we explore both the potential improvements and pitfalls that arise when using advanced collision models in gyrokinetic simulations of plasma microinstabilities. Comparisons are made between the simple-but-standard electron Lorentz operator and specific variations of the advanced Sugama operator. The Sugama operator describes multi-species collisions including energy diffusion, momentum and energy conservation terms, and is valid for arbitrary wavelength. We report scans over collision frequency for both low and high {k}θ {ρ }s modes, with relevance for multiscale simulations that couple ion and electron scale physics. The influence of the ion–ion collision terms—not retained in the electron Lorentz model—on the damping of zonal flows is also explored. Collision frequency scans for linear and nonlinear simulations of ion-temperature-gradient instabilities including impurity ions are presented. Finally, implications for modeling turbulence in the highly collisional edge are discussed.

  8. Gasification CFD Modeling for Advanced Power Plant Simulations

    SciTech Connect

    Zitney, S.E.; Guenther, C.P.

    2005-09-01

    In this paper we have described recent progress on developing CFD models for two commercial-scale gasifiers, including a two-stage, coal slurry-fed, oxygen-blown, pressurized, entrained-flow gasifier and a scaled-up design of the PSDF transport gasifier. Also highlighted was NETL’s Advanced Process Engineering Co-Simulator for coupling high-fidelity equipment models with process simulation for the design, analysis, and optimization of advanced power plants. Using APECS, we have coupled the entrained-flow gasifier CFD model into a coal-fired, gasification-based FutureGen power and hydrogen production plant. The results for the FutureGen co-simulation illustrate how the APECS technology can help engineers better understand and optimize gasifier fluid dynamics and related phenomena that impact overall power plant performance.

  9. The GEANT low energy Compton scattering (GLECS) package for use in simulating advanced Compton telescopes

    NASA Astrophysics Data System (ADS)

    Kippen, R. Marc

    2004-02-01

    Compton γ-ray imaging is inherently based on the assumption of γ-rays scattering with free electrons. In reality, the non-zero momentum of target electrons bound in atoms blurs this ideal scattering response in a process known as Doppler broadening. The design and understanding of advanced Compton telescopes, thus, depends critically on the ability to accurately account for Doppler broadening effects. For this purpose, a Monte Carlo package that simulates detailed Doppler broadening has been developed for use with the powerful, general-purpose GEANT3 and GEANT4 radiation transport codes. This paper describes the design of this package, and illustrates results of comparison with selected experimental data.
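The free-electron kinematics that Doppler broadening blurs are given by the standard Compton formula. The sketch below shows that ideal response plus a Gaussian smear as a toy stand-in for broadening; GLECS itself samples physical Compton profiles of bound electrons, which this does not reproduce.

```python
import math
import random

ME_C2 = 511.0  # electron rest energy, keV

def compton_scattered_energy(e_kev, theta):
    """Scattered photon energy (keV) for scattering off a free electron at
    rest through angle theta: the ideal response assumed by Compton imaging."""
    return e_kev / (1.0 + (e_kev / ME_C2) * (1.0 - math.cos(theta)))

def toy_broadened_energy(e_kev, theta, sigma_frac=0.01, rng=random):
    """Toy stand-in for Doppler broadening: Gaussian smear of the
    free-electron line (illustration only, not a physical Compton profile)."""
    return compton_scattered_energy(e_kev, theta) * rng.gauss(1.0, sigma_frac)
```

For a 511 keV photon scattered through 90 degrees the free-electron formula gives exactly 255.5 keV; the broadening spreads events around that line, which is precisely the blurring an advanced Compton telescope design must account for.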

  10. On the Quantification of Incertitude in Astrophysical Simulation Codes

    NASA Astrophysics Data System (ADS)

    Hoffman, Melissa; Katz, Maximilian P.; Willcox, Donald E.; Ferson, Scott; Swesty, F. Douglas; Calder, Alan

    2017-01-01

    We present a pedagogical study of uncertainty quantification (UQ) due to epistemic uncertainties (incertitude) in astrophysical modeling using the stellar evolution software instrument MESA (Modules and Experiments for Stellar Astrophysics). We present a general methodology for UQ and examine the specific case of stars evolving from the main sequence to carbon/oxygen white dwarfs. Our study considers two epistemic variables: the wind parameters during the Red Giant and Asymptotic Giant branch phases of evolution. We choose uncertainty intervals for each variable, and use these as input to MESA simulations. Treating MESA as a "black box," we apply two UQ techniques, Cauchy deviates and Quadratic Response Surface Models, to obtain bounds for the final white dwarf masses. Our study is a proof of concept applicable to other computational problems to enable a more robust understanding of incertitude. This work was supported in part by the US Department of Energy under grant DE-FG02-87ER40317.
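One of the two UQ techniques named above, the Quadratic Response Surface Model, can be sketched as a least-squares quadratic surrogate fitted to black-box samples and then bounded over the uncertainty box. The toy model below is a hypothetical stand-in for MESA (which the study treats as a black box); because the stand-in is itself quadratic, the fit here is exact.

```python
import numpy as np

def toy_model(x1, x2):
    """Hypothetical stand-in for the black-box simulation (e.g. final white
    dwarf mass as a function of two wind parameters)."""
    return 0.6 + 0.05 * x1 - 0.02 * x2 + 0.01 * x1 * x2

def quadratic_response_surface(samples, values):
    """Least-squares fit of y ~ [1, x1, x2, x1^2, x2^2, x1*x2]."""
    x1, x2 = samples[:, 0], samples[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    coef, *_ = np.linalg.lstsq(A, values, rcond=None)
    return coef

def surface_bounds(coef, lo, hi, n=50):
    """Bound the surrogate over the uncertainty box by dense evaluation."""
    g1, g2 = np.meshgrid(np.linspace(lo[0], hi[0], n),
                         np.linspace(lo[1], hi[1], n))
    A = np.column_stack([np.ones(g1.size), g1.ravel(), g2.ravel(),
                         g1.ravel()**2, g2.ravel()**2, (g1 * g2).ravel()])
    y = A @ coef
    return y.min(), y.max()

rng = np.random.default_rng(0)
X = rng.uniform([0.0, 0.0], [1.0, 1.0], size=(30, 2))   # sampled inputs
coef = quadratic_response_surface(X, toy_model(X[:, 0], X[:, 1]))
lo_b, hi_b = surface_bounds(coef, [0.0, 0.0], [1.0, 1.0])
```

The payoff is that once the cheap surrogate is fitted, bounding the output over the epistemic intervals costs essentially nothing, whereas bounding the simulator directly would require many expensive runs.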

  11. Advancement of DOE's EnergyPlus Building Energy Simulation Program

    SciTech Connect

    Gu, Lixing; Shirey, Don; Raustad, Richard; Nigusse, Bereket; Sharma, Chandan; Lawrie, Linda; Strand, Rick; Pedersen, Curt; Fisher, Dan; Lee, Edwin; Witte, Mike; Glazer, Jason; Barnaby, Chip

    2011-09-30

    EnergyPlus{sup TM} is a new generation computer software analysis tool that has been developed, tested, and commercialized to support DOE's Building Technologies (BT) Program in terms of whole-building, component, and systems R&D (http://www.energyplus.gov). It is also being used to support evaluation and decision making of zero energy building (ZEB) energy efficiency and supply technologies during new building design and existing building retrofits. The 5-year project was managed by the National Energy Technology Laboratory and was divided into five budget periods between 2006 and 2011. During the project period, 11 versions of EnergyPlus were released. This report summarizes work performed by an EnergyPlus development team led by the University of Central Florida's Florida Solar Energy Center (UCF/FSEC). The team members consisted of DHL Consulting, C. O. Pedersen Associates, University of Illinois at Urbana-Champaign, Oklahoma State University, GARD Analytics, Inc., and WrightSoft Corporation. The project tasks involved new feature development, testing and validation, user support and training, and general EnergyPlus support. The team developed 146 new features during the 5-year period to advance the EnergyPlus capabilities. Annual contributions of new features were 7 in budget period 1, 19 in period 2, 36 in period 3, 41 in period 4, and 43 in period 5, respectively. The testing and validation task focused on running the test suite and publishing reports, developing new IEA test suite cases, testing and validating new source code, addressing change requests, and creating and testing installation packages. The user support and training task provided support for users and interface developers, and organized and taught workshops. The general support task involved upgrading StarTeam (team sharing) software and updating existing utility software. The project met the DOE objectives and completed all tasks successfully.
Although the EnergyPlus software was enhanced significantly

  12. Understanding Performance of Parallel Scientific Simulation Codes using Open|SpeedShop

    SciTech Connect

    Ghosh, K K

    2011-11-07

    Conclusions of this presentation are: (1) Open|SpeedShop (OSS) is convenient to use for large, parallel, scientific simulation codes; (2) Large codes benefit from uninstrumented execution; (3) Many experiments can be run in a short time - might need multiple shots e.g. usertime for caller-callee, hwcsamp for HW counters; (4) A decent idea of a code's performance is easily obtained; (5) Statistical sampling calls for a decent number of samples; and (6) HWC data is very useful for micro-analysis but can be tricky to analyze.

  13. Monte-Carlo simulation of a coded aperture SPECT apparatus using uniformly redundant arrays

    NASA Astrophysics Data System (ADS)

    Gemmill, Paul E.; Chaney, Roy C.; Fenyves, Ervin J.

    1995-09-01

    Coded apertures are used in tomographic imaging systems to improve the signal-to-noise ratio (SNR) of the apparatus with a larger aperture transmission area while maintaining the spatial resolution of a single pinhole. Coded apertures developed from uniformly redundant arrays (URAs) have an aperture transmission area of slightly over one half of the total aperture. Computer simulations show that the spatial resolution of a SPECT apparatus using a URA-generated coded aperture compares favorably with theoretical expectations and that its SNR is approximately 3.5 to 4 times that of a single pinhole camera for a variety of cases.
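A common URA construction, shown here in 1D as an illustration (the apparatus in the paper presumably used 2D arrays), takes the quadratic residues modulo a prime p with p ≡ 3 (mod 4), for which the residues form a perfect difference set. The resulting mask is open on slightly over half its cells and has the flat cyclic-autocorrelation sidelobes that let the larger open area be decoded without losing the single-pinhole resolution.

```python
def quadratic_residue_aperture(p):
    """1D mask from quadratic residues mod a prime p (use p = 3 mod 4):
    cell i is open (1) for i = 0 and for nonzero quadratic residues,
    giving (p+1)/2 open cells: slightly over one half of the aperture."""
    residues = {(k * k) % p for k in range(1, p)}
    return [1 if (i == 0 or i in residues) else 0 for i in range(p)]

def periodic_autocorrelation(mask, shift):
    """Cyclic autocorrelation; flat for all nonzero shifts (URA property)."""
    p = len(mask)
    return sum(mask[i] * mask[(i + shift) % p] for i in range(p))
```

For p = 7 the mask is [1, 1, 1, 0, 1, 0, 0]: four open cells, and every nonzero cyclic shift correlates to the same value, 2, which is the two-level autocorrelation the decoding step relies on.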

  14. Experimental benchmarking of a Monte Carlo dose simulation code for pediatric CT

    NASA Astrophysics Data System (ADS)

    Li, Xiang; Samei, Ehsan; Yoshizumi, Terry; Colsher, James G.; Jones, Robert P.; Frush, Donald P.

    2007-03-01

    In recent years, there has been a desire to reduce CT radiation dose to children because of their susceptibility and prolonged risk for cancer induction. Concerns arise, however, as to the impact of dose reduction on image quality and thus potentially on diagnostic accuracy. To study the dose and image quality relationship, we are developing a simulation code to calculate organ dose in pediatric CT patients. To benchmark this code, a cylindrical phantom was built to represent a pediatric torso, which allows measurements of dose distributions from its center to its periphery. Dose distributions for axial CT scans were measured on a 64-slice multidetector CT (MDCT) scanner (GE Healthcare, Chalfont St. Giles, UK). The same measurements were simulated using a Monte Carlo code (PENELOPE, Universitat de Barcelona) with the applicable CT geometry including bowtie filter. The deviations between simulated and measured dose values were generally within 5%. To our knowledge, this work is one of the first attempts to compare measured radial dose distributions on a cylindrical phantom with Monte Carlo simulated results. It provides a simple and effective method for benchmarking organ dose simulation codes and demonstrates the potential of Monte Carlo simulation for investigating the relationship between dose and image quality for pediatric CT patients.

  15. Testing and Modeling of a 3-MW Wind Turbine Using Fully Coupled Simulation Codes (Poster)

    SciTech Connect

    LaCava, W.; Guo, Y.; Van Dam, J.; Bergua, R.; Casanovas, C.; Cugat, C.

    2012-06-01

    This poster describes the NREL/Alstom Wind testing and model verification of the Alstom 3-MW wind turbine located at NREL's National Wind Technology Center. NREL, in collaboration with ALSTOM Wind, is studying a 3-MW wind turbine installed at the National Wind Technology Center (NWTC). The project analyzes the turbine design using a state-of-the-art simulation code validated with detailed test data. This poster describes the testing and the model validation effort, and provides conclusions about the performance of the unique drive train configuration used in this wind turbine. The 3-MW machine has been operating at the NWTC since March 2011, and drive train measurements will be collected through the spring of 2012. The NWTC testing site has particularly turbulent wind patterns that allow for the measurement of large transient loads and the resulting turbine response. This poster describes the 3-MW turbine test project, the instrumentation installed, and the load cases captured. The design of a reliable wind turbine drive train increasingly relies on the use of advanced simulation to predict structural responses in a varying wind field. This poster presents a fully coupled, aero-elastic and dynamic model of the wind turbine. It also shows the methodology used to validate the model, including the use of measured tower modes, model-to-model comparisons of the power curve, and mainshaft bending predictions for various load cases. The drivetrain is designed to only transmit torque to the gearbox, eliminating non-torque moments that are known to cause gear misalignment. Preliminary results show that the drivetrain is able to divert bending loads in extreme loading cases, and that a significantly smaller bending moment is induced on the mainshaft compared to a three-point mounting design.

  16. Simulation of a ceramic impact experiment using the SPHINX smooth particle hydrodynamics code

    SciTech Connect

    Mandell, D.A.; Wingate, C.A.; Schwalbe, L.A.

    1996-08-01

    We are developing statistically based, brittle-fracture models and are implementing them into hydrocodes that can be used for designing systems with components of ceramics, glass, and/or other brittle materials. Because of the advantages it has simulating fracture, we are working primarily with the smooth particle hydrodynamics code SPHINX. We describe a new brittle fracture model that we have implemented into SPHINX, and we discuss how the model differs from others. To illustrate the code's current capability, we simulate an experiment in which a tungsten rod strikes a target of heavily confined ceramic. Simulations in 3D at relatively coarse resolution yield poor results. However, 2D plane-strain approximations to the test produce crack patterns that are strikingly similar to the data, although the fracture model needs further refinement to match some of the finer details. We conclude with an outline of plans for continuing research and development.

  17. Parallel Grand Canonical Monte Carlo (ParaGrandMC) Simulation Code

    NASA Technical Reports Server (NTRS)

    Yamakov, Vesselin I.

    2016-01-01

    This report provides an overview of the Parallel Grand Canonical Monte Carlo (ParaGrandMC) simulation code. This is a highly scalable parallel FORTRAN code for simulating the thermodynamic evolution of metal alloy systems at the atomic level, and predicting the thermodynamic state, phase diagram, chemical composition and mechanical properties. The code is designed to simulate multi-component alloy systems, predict solid-state phase transformations such as austenite-martensite transformations, precipitate formation, recrystallization, capillary effects at interfaces, surface absorption, etc., which can aid the design of novel metallic alloys. While the software is mainly tailored for modeling metal alloys, it can also be used for other types of solid-state systems, and to some degree for liquid or gaseous systems, including multiphase systems forming solid-liquid-gas interfaces.

  18. Development and Test of 2.5-Dimensional Electromagnetic PIC Simulation Code

    NASA Astrophysics Data System (ADS)

    Lee, Sang-Yun; Lee, Ensang; Kim, Khan-Hyuk; Seon, Jongho; Lee, Dong-Hun; Ryu, Kwang-Sun

    2015-03-01

    We have developed a 2.5-dimensional electromagnetic particle simulation code using the particle-in-cell (PIC) method to investigate electromagnetic phenomena that occur in space plasmas. Our code is based on the leap-frog method and the centered difference method for integration and differentiation of the governing equations. We adopted the relativistic Buneman-Boris method to solve the Lorentz force equation and the Esirkepov method to calculate the current density while maintaining charge conservation. Using the developed code, we performed test simulations for electron two-stream instability and electron temperature anisotropy induced instability with the same initial parameters as used in previously reported studies. The test simulation results are almost identical with those of the previous papers.
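The Boris rotation at the heart of such a particle pusher can be sketched in a few lines. The nonrelativistic form is shown for brevity; the paper's code uses the relativistic Buneman-Boris variant, which this sketch does not reproduce.

```python
import numpy as np

def boris_push(v, E, B, qm, dt):
    """Advance velocity v one step under fields E, B (charge-to-mass ratio qm).

    Half electric kick, magnetic rotation, half electric kick: the rotation
    preserves |v| exactly, so kinetic energy is conserved when E = 0.
    """
    v_minus = v + 0.5 * qm * dt * E          # first half of the electric kick
    t = 0.5 * qm * dt * B                    # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)  # completed magnetic rotation
    return v_plus + 0.5 * qm * dt * E        # second half of the electric kick
```

Pure gyration about B with E = 0 keeps the particle speed constant to machine precision over many steps, a standard sanity check for any pusher implementation.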

  19. Enhancement and Extension of Porosity Model in the FDNS-500 Code to Provide Enhanced Simulations of Rocket Engine Components

    NASA Technical Reports Server (NTRS)

    Cheng, Gary

    2003-01-01

    In the past, the design of rocket engines has primarily relied on cold flow/hot fire tests and on empirical correlations developed from the database of previous designs. However, it is very costly to fabricate and test various hardware designs during the design cycle, whereas the empirical model becomes unreliable in designing an advanced rocket engine whose operating conditions exceed the range of the database. The main goal of the 2nd Generation Reusable Launch Vehicle (GEN-II RLV) is to reduce the cost per payload and to extend the life of the hardware, which poses a great challenge to rocket engine design. Hence, understanding the flow characteristics in each engine component is critical to the engine design. In the last few decades, the methodology of computational fluid dynamics (CFD) has matured into a tool for analyzing various engine components. Therefore, it is important for the CFD design tool to be able to properly simulate the hot flow environment near the liquid injector, and thus to accurately predict the heat load to the injector faceplate. However, to date it is still not feasible to conduct CFD simulations of the detailed flowfield with very complicated geometries, such as fluid flow and heat transfer in an injector assembly and through a porous plate, which would require enormous computer memory and power to resolve the detailed geometry. The rigimesh (a sintered metal material), utilized to reduce the heat load to the faceplate, is one of the design concepts for the injector faceplate of the GEN-II RLV. In addition, the injector assembly is designed to distribute propellants into the combustion chamber of the liquid rocket engine. A porosity model thus becomes a necessity for the CFD code in order to efficiently simulate the flow and heat transfer in these porous media, and maintain good accuracy in describing the flow fields.
Currently, the FDNS (Finite Difference Navier-Stokes) code is one of the CFD codes

  20. RAY-RAMSES: a code for ray tracing on the fly in N-body simulations

    NASA Astrophysics Data System (ADS)

    Barreira, Alexandre; Llinares, Claudio; Bose, Sownak; Li, Baojiu

    2016-05-01

    We present a ray tracing code to compute integrated cosmological observables on the fly in AMR N-body simulations. Unlike conventional ray tracing techniques, our code takes full advantage of the time and spatial resolution attained by the N-body simulation by computing the integrals along the line of sight on a cell-by-cell basis through the AMR simulation grid. Moreover, since it runs on the fly in the N-body run, our code can produce maps of the desired observables without storing large (or any) amounts of data for post-processing. We implemented our routines in the RAMSES N-body code and tested the implementation using an example of weak lensing simulation. We analyse basic statistics of lensing convergence maps and find good agreement with semi-analytical methods. The ray tracing methodology presented here can be used in several cosmological analyses, such as Sunyaev-Zel'dovich and integrated Sachs-Wolfe effect studies, as well as in modified gravity studies. Our code can also be used in cross-checks of the more conventional methods, which can be important in tests of theory systematics in preparation for upcoming large-scale structure surveys.

  1. Scalability study of parallel spatial direct numerical simulation code on IBM SP1 parallel supercomputer

    NASA Technical Reports Server (NTRS)

    Hanebutte, Ulf R.; Joslin, Ronald D.; Zubair, Mohammad

    1994-01-01

    The implementation and the performance of a parallel spatial direct numerical simulation (PSDNS) code are reported for the IBM SP1 supercomputer. The spatially evolving disturbances that are associated with laminar-to-turbulent transition in three-dimensional boundary-layer flows are computed with the PSDNS code. By remapping the distributed data structure during the course of the calculation, optimized serial library routines can be utilized that substantially increase the computational performance. Although the remapping incurs a high communication penalty, the parallel efficiency of the code remains above 40% for all performed calculations. By using appropriate compile options and optimized library routines, the serial code achieves 52-56 Mflops on a single node of the SP1 (45% of the theoretical peak performance). The actual performance of the PSDNS code on the SP1 is evaluated with a 'real world' simulation that consists of 1.7 million grid points. One time step of this simulation is calculated on eight nodes of the SP1 in the same time as required by a Cray Y/MP for the same simulation. The scalability information provides estimated computational costs that match the actual costs relative to changes in the number of grid points.
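The efficiency figure quoted above follows the usual definitions of speedup and parallel efficiency; a minimal helper, with the timing numbers below chosen purely for illustration:

```python
def speedup(t_serial, t_parallel):
    """Speedup S = T_1 / T_p: serial runtime over parallel runtime."""
    return t_serial / t_parallel

def parallel_efficiency(t_serial, t_parallel, n_procs):
    """Parallel efficiency E = S / p; E = 1.0 is ideal linear scaling."""
    return speedup(t_serial, t_parallel) / n_procs
```

For example, a run that takes 100 s serially and 25 s on 8 nodes has a speedup of 4 and an efficiency of 0.5, comfortably above the 40% floor the paper reports despite the remapping's communication penalty.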

  2. Lessons Learned From Dynamic Simulations of Advanced Fuel Cycles

    SciTech Connect

    Steven J. Piet; Brent W. Dixon; Jacob J. Jacobson; Gretchen E. Matthern; David E. Shropshire

    2009-04-01

    Years of performing dynamic simulations of advanced nuclear fuel cycle options provide insights into how they could work and how one might transition from the current once-through fuel cycle. This paper summarizes those insights from the context of the 2005 objectives and goals of the Advanced Fuel Cycle Initiative (AFCI). Our intent is not to compare options, assess options versus those objectives and goals, nor recommend changes to those objectives and goals. Rather, we organize what we have learned from dynamic simulations in the context of the AFCI objectives for waste management, proliferation resistance, uranium utilization, and economics. Thus, we do not merely describe “lessons learned” from dynamic simulations but attempt to answer the “so what” question by using this context. The analyses have been performed using the Verifiable Fuel Cycle Simulation of Nuclear Fuel Cycle Dynamics (VISION). We observe that the 2005 objectives and goals do not address many of the inherently dynamic discriminators among advanced fuel cycle options and transitions thereof.

  3. ADVANCES IN COMPREHENSIVE GYROKINETIC SIMULATIONS OF TRANSPORT IN TOKAMAKS

    SciTech Connect

    Waltz, R. E.; Candy, J.; Hinton, F. L.; Estrada-Mila, C.; Kinsey, J. E.

    2004-10-01

    A continuum global gyrokinetic code GYRO has been developed to comprehensively simulate core turbulent transport in actual experimental profiles and enable direct quantitative comparisons to the experimental transport flows. GYRO not only treats the now standard ion temperature gradient (ITG) mode turbulence, but also treats trapped and passing electrons with collisions and finite {beta}, equilibrium ExB shear stabilization, and all in real tokamak geometry. Most importantly the code operates at finite relative gyroradius ({rho}{sub *}) so as to treat the profile shear stabilization and nonlocal effects which can break gyroBohm scaling. The code operates in either a cyclic flux-tube limit (which allows only gyroBohm scaling) or globally with physical profile variation. Bohm scaling of DIII-D L-mode has been simulated with power flows matching experiment within error bars on the ion temperature gradient. Mechanisms for broken gyroBohm scaling, neoclassical ion flows embedded in turbulence, turbulent dynamos and profile corrugations, are illustrated.

  4. A CellML simulation compiler and code generator using ODE solving schemes.

    PubMed

    Punzalan, Florencio Rusty; Yamashita, Yoshiharu; Soejima, Naoki; Kawabata, Masanari; Shimayoshi, Takao; Kuwabara, Hiroaki; Kunieda, Yoshitoshi; Amano, Akira

    2012-10-19

    Models written in description languages such as CellML are becoming a popular solution to the handling of complex cellular physiological models in biological function simulations. However, in order to fully simulate a model, boundary conditions and ordinary differential equation (ODE) solving schemes have to be combined with it. Though boundary conditions can be described in CellML, it is difficult to explicitly specify ODE solving schemes using existing tools. In this study, we define an ODE solving scheme description language based on XML and propose a code generation system for biological function simulations. In the proposed system, biological simulation programs using various ODE solving schemes can be easily generated. We designed a two-stage approach in which the system first generates the equation set associating the physiological model variable values at a certain time t with the values at t + Δt; the second stage generates the simulation code for the model. This approach enables the flexible construction of code generation modules that can support complex sets of formulas. We evaluate the relationship between models and their calculation accuracies by simulating complex biological models using various ODE solving schemes. Results for the FHN model simulation showed good qualitative and quantitative correspondence with the theoretical predictions. Results for the Luo-Rudy 1991 model showed that only first-order precision was achieved. In addition, running the generated code in parallel on a GPU made it possible to speed up the calculation time by a factor of 50. The CellML Compiler source code is available for download at http://sourceforge.net/projects/cellmlcompiler.
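
The two-stage idea, a scheme description that is turned into integrator code, can be sketched as follows (a minimal illustration, not the CellML Compiler's actual output; the scheme table and test model are invented for the example):

```python
# Invented scheme table: update expressions keyed by scheme name.
SCHEMES = {
    "euler": "y + h*f(t, y)",
    "rk2":   "(lambda k1: y + h*f(t + h/2, y + (h/2)*k1))(f(t, y))",
}

def generate_stepper(scheme):
    # Stage 1: emit source code whose update formula is chosen from the
    # scheme description; stage 2 would wrap this in a full model loop.
    src = "def step(f, t, y, h):\n    return " + SCHEMES[scheme] + "\n"
    namespace = {}
    exec(src, namespace)
    return namespace["step"]

# Integrate dy/dt = -y, y(0) = 1, to t = 1 with the generated stepper.
step = generate_stepper("rk2")
y, t, h = 1.0, 0.0, 0.01
for _ in range(100):
    y = step(lambda t, y: -y, t, y, h)
    t += h
# y now approximates exp(-1) ≈ 0.36788
```

Swapping the scheme name changes the generated integrator without touching the model, which is the separation of concerns the paper argues for.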

  5. Integration of Advanced Simulation and Visualization for Manufacturing Process Optimization

    NASA Astrophysics Data System (ADS)

    Zhou, Chenn; Wang, Jichao; Tang, Guangwu; Moreland, John; Fu, Dong; Wu, Bin

    2016-05-01

    The integration of simulation and visualization can provide a cost-effective tool for process optimization, design, scale-up and troubleshooting. The Center for Innovation through Visualization and Simulation (CIVS) at Purdue University Northwest has developed methodologies for such integration with applications in various manufacturing processes. The methodologies have proven useful for virtual design and virtual training, providing solutions that address issues of energy, environment, productivity, safety, and quality in steel and other industries. In collaboration with its industrial partners, CIVS has provided solutions to companies, saving over US$38 million. CIVS is currently working with the steel industry to establish an industry-led Steel Manufacturing Simulation and Visualization Consortium through the support of a National Institute of Standards and Technology AMTech Planning Grant. The consortium focuses on supporting the development and implementation of simulation and visualization technologies to advance steel manufacturing across the value chain.

  6. Interfacing VPSC with finite element codes. Demonstration of irradiation growth simulation in a cladding tube

    SciTech Connect

    Patra, Anirban; Tome, Carlos

    2016-03-23

    This milestone report shows good progress in interfacing VPSC with the FE codes ABAQUS and MOOSE to perform component-level simulations of irradiation-induced deformation in zirconium alloys. In this preliminary application, we have performed an irradiation growth simulation in the quarter geometry of a cladding tube. We have benchmarked VPSC-ABAQUS and VPSC-MOOSE predictions against VPSC-SA predictions to verify the accuracy of the VPSC-FE interface. Predictions from the FE simulations are in general agreement with VPSC-SA simulations and also with experimental trends.

  7. Three dimensional nonlinear simulations of edge localized modes on the EAST tokamak using BOUT++ code

    SciTech Connect

    Liu, Z. X.; Xia, T. Y.; Liu, S. C.; Ding, S. Y.; Xu, X. Q.; Joseph, I.; Meyer, W. H.; Gao, X.; Xu, G. S.; Shao, L. M.; Li, G. Q.; Li, J. G.

    2014-09-15

    Experimental measurements of edge localized modes (ELMs) observed on the EAST experiment are compared to linear and nonlinear theoretical simulations of peeling-ballooning modes using the BOUT++ code. Simulations predict that the dominant toroidal mode number of the ELM instability becomes larger for lower current, which is consistent with the mode structure captured with visible light using an optical CCD camera. The poloidal mode number of the simulated pressure perturbation shows good agreement with the filamentary structure observed by the camera. The nonlinear simulation is also consistent with the experimentally measured energy loss during an ELM crash and with the radial speed of ELM effluxes measured using a gas puffing imaging diagnostic.

  8. Advanced Simulation and Computing FY17 Implementation Plan, Version 0

    SciTech Connect

    McCoy, Michel; Archer, Bill; Hendrickson, Bruce; Wade, Doug; Hoang, Thuc

    2016-08-29

    The Stockpile Stewardship Program (SSP) is an integrated technical program for maintaining the safety, surety, and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational capabilities to support them. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing the simulation capabilities and computational resources that support annual stockpile assessment and certification, the study of advanced nuclear weapons design and manufacturing processes, the analysis of accident scenarios and weapons aging, and the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions. ASC is now focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific detail) and for quantifying critical margins and uncertainties. Resolving each issue requires increasingly difficult analyses because the aging process has progressively moved the stockpile further away from the original test base. Where possible, the program also enables the use of high performance computing (HPC) and simulation tools to address broader national security needs, such as foreign nuclear weapon assessments and counter nuclear terrorism.

  9. Requirements for advanced simulation of nuclear reactor and chemical separation plants.

    SciTech Connect

    Palmiotti, G.; Cahalan, J.; Pfeiffer, P.; Sofu, T.; Taiwo, T.; Wei, T.; Yacout, A.; Yang, W.; Siegel, A.; Insepov, Z.; Anitescu, M.; Hovland, P.; Pereira, C.; Regalbuto, M.; Copple, J.; Willamson, M.

    2006-12-11

    This report presents requirements for advanced simulation of nuclear reactor and chemical processing plants that are of interest to the Global Nuclear Energy Partnership (GNEP) initiative. Justification for advanced simulation and some examples of grand challenges that will benefit from it are provided. An integrated software tool whose main components are, whenever possible, based on first principles is proposed as a possible future approach for dealing with the complex problems linked to the simulation of nuclear reactor and chemical processing plants. The main benefits associated with a better integrated simulation have been identified as: a reduction of design margins, a decrease in the number of experiments in support of the design process, a shortening of the developmental design cycle, and a better understanding of the physical phenomena and the related underlying fundamental processes. For each component of the proposed integrated software tool, background information, functional requirements, current tools and approaches, and proposed future approaches are provided. Whenever possible, current uncertainties have been quoted and existing limitations presented. Desired target accuracies, with associated benefits to the different aspects of the nuclear reactor and chemical processing plants, are also given. In many cases the possible gains associated with a better simulation have been identified, quantified, and translated into economic benefits.

  10. Hybrid and electric advanced vehicle systems (heavy) simulation

    NASA Technical Reports Server (NTRS)

    Hammond, R. A.; Mcgehee, R. K.

    1981-01-01

    A computer program to simulate hybrid and electric advanced vehicle systems (HEAVY) is described. It is intended for use early in the design process: concept evaluation, alternative comparison, preliminary design, control and management strategy development, component sizing, and sensitivity studies. It allows the designer to quickly, conveniently, and economically predict the performance of a proposed drive train. The user defines the system to be simulated using a library of predefined component models that may be connected to represent a wide variety of propulsion systems. The development of three example models is discussed.
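
The library-of-components idea can be sketched as follows (the component names and efficiencies are invented for illustration; HEAVY's actual models are far more detailed):

```python
class Component:
    """Minimal stand-in for a drive-train library element: power flows
    through each component with an efficiency loss."""
    def __init__(self, name, efficiency):
        self.name = name
        self.efficiency = efficiency

    def transmit(self, power_in):
        return power_in * self.efficiency

# A hypothetical series drive train assembled from predefined components.
drivetrain = [
    Component("battery",  0.95),
    Component("inverter", 0.97),
    Component("motor",    0.90),
    Component("gearbox",  0.98),
]

def power_at_wheels(source_power):
    p = source_power
    for c in drivetrain:          # propagate power down the chain
        p = c.transmit(p)
    return p

wheels = power_at_wheels(50e3)    # 50 kW at the source
```

Reconnecting or resizing components changes the predicted drive-train performance without rewriting the simulation, which is the convenience the abstract describes.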

  11. Preface to advances in numerical simulation of plasmas

    NASA Astrophysics Data System (ADS)

    Parker, Scott E.; Chacon, Luis

    2016-10-01

    This Journal of Computational Physics Special Issue, titled "Advances in Numerical Simulation of Plasmas," presents a snapshot of the international state of the art in the field of computational plasma physics. The articles herein are a subset of the topics presented as invited talks at the 24th International Conference on the Numerical Simulation of Plasmas (ICNSP), August 12-14, 2015 in Golden, Colorado. The choice of papers was highly selective. The ICNSP is held every other year and is the premier scientific meeting in the field of computational plasma physics.

  12. Flight investigation of cockpit-displayed traffic information utilizing coded symbology in an advanced operational environment

    NASA Technical Reports Server (NTRS)

    Abbott, T. S.; Moen, G. C.; Person, L. H., Jr.; Keyser, G. L., Jr.; Yenni, K. R.; Garren, J. F., Jr.

    1980-01-01

    Traffic symbology was encoded to provide additional information concerning the traffic, which was displayed on the pilot's electronic horizontal situation indicator (EHSI). A research airplane representing an advanced operational environment was used to assess the benefit of coded traffic symbology in a realistic workload environment. Traffic scenarios, involving both conflict-free and conflict situations, were employed. Subjective pilot commentary was obtained through the use of a questionnaire and extensive pilot debriefings. These results grouped conveniently under two categories: display factors and task performance. A major item under the display factor category was the problem of display clutter. The primary contributors to clutter were the use of large map-scale factors, the use of traffic data blocks, and the presentation of more than a few airplanes. In terms of task performance, the cockpit-displayed traffic information was found to provide excellent overall situation awareness. Additionally, mile separation prescribed during these tests.

  13. Fire simulation in nuclear facilities: the FIRAC code and supporting experiments

    SciTech Connect

    Burkett, M.W.; Martin, R.A.; Fenton, D.L.; Gunaji, M.V.

    1984-01-01

    The fire accident analysis computer code FIRAC was designed to estimate radioactive and nonradioactive source terms and predict fire-induced flows and thermal and material transport within the ventilation systems of nuclear fuel cycle facilities. FIRAC maintains its basic structure and features and has been expanded and modified to include the capabilities of the zone-type compartment fire model computer code FIRIN developed by Battelle Pacific Northwest Laboratory. The two codes have been coupled to provide an improved simulation of a fire-induced transient within a facility. The basic material transport capability of FIRAC has been retained and includes estimates of entrainment, convection, deposition, and filtration of material. The interrelated effects of filter plugging, heat transfer, gas dynamics, material transport, and fire and radioactive source terms also can be simulated. Also, a sample calculation has been performed to illustrate some of the capabilities of the code and how a typical facility is modeled with FIRAC. In addition to the analytical work being performed at Los Alamos, experiments are being conducted at New Mexico State University to support the FIRAC computer code development and verification. This paper summarizes two areas of the experimental work that support the material transport capabilities of the code: the plugging of high-efficiency particulate air (HEPA) filters by combustion aerosols and the transport and deposition of smoke in ventilation system ductwork.

  14. An introduction to LIME 1.0 and its use in coupling codes for multiphysics simulations.

    SciTech Connect

    Belcourt, Noel; Pawlowski, Roger Patrick; Schmidt, Rodney Cannon; Hooper, Russell Warren

    2011-11-01

    LIME is a small software package for creating multiphysics simulation codes. The name was formed as an acronym denoting 'Lightweight Integrating Multiphysics Environment for coupling codes.' LIME is intended to be especially useful when separate computer codes (which may be written in any standard computer language) already exist to solve different parts of a multiphysics problem. LIME provides the key high-level software (written in C++), a well defined approach (with example templates), and interface requirements to enable the assembly of multiple physics codes into a single coupled-multiphysics simulation code. In this report we introduce important software design characteristics of LIME, describe key components of a typical multiphysics application that might be created using LIME, and provide basic examples of its use - including the customized software that must be written by a user. We also describe the types of modifications that may be needed to individual physics codes in order for them to be incorporated into a LIME-based multiphysics application.
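
The kind of coupling LIME orchestrates can be sketched in miniature as a fixed-point iteration between two stand-in "physics codes" (the solvers and transfer quantities here are invented for illustration; LIME itself is C++ and supports more sophisticated coupling algorithms):

```python
def solve_thermal(power):
    # Stand-in physics code A: temperature responds linearly to power.
    return 100.0 + 0.5 * power

def solve_power(temperature):
    # Stand-in physics code B: power depends on temperature feedback.
    return 50.0 + 0.2 * temperature

def couple(tol=1e-10, max_iter=100):
    # Fixed-point (Picard) coupling: call each code in turn, passing
    # transfer data between them, until the coupled state stops changing.
    T, P = 300.0, 0.0
    for _ in range(max_iter):
        T_new = solve_thermal(P)
        P_new = solve_power(T_new)
        if abs(T_new - T) < tol and abs(P_new - P) < tol:
            return T_new, P_new
        T, P = T_new, P_new
    raise RuntimeError("coupled iteration did not converge")

T, P = couple()
```

The coupling driver never looks inside either solver; it only moves transfer data between them, which is the role the high-level LIME software plays for real physics codes.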

  15. Development of a Computational Framework on Fluid-Solid Mixture Flow Simulations for the COMPASS Code

    NASA Astrophysics Data System (ADS)

    Zhang, Shuai; Morita, Koji; Shirakawa, Noriyuki; Yamamoto, Yuichi

    The COMPASS code is designed based on the moving particle semi-implicit method to simulate various complex mesoscale phenomena relevant to core disruptive accidents of sodium-cooled fast reactors. In this study, a computational framework for fluid-solid mixture flow simulations was developed for the COMPASS code. The passively moving solid model was used to simulate hydrodynamic interactions between fluid and solids. Mechanical interactions between solids were modeled by the distinct element method. A multi-time-step algorithm was introduced to couple these two calculations. The proposed computational framework for fluid-solid mixture flow simulations was verified by comparison between experimental and numerical studies of a water dam break with multiple solid rods.
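
The multi-time-step coupling idea can be sketched as follows (a toy relaxation model with invented coefficients, not the COMPASS implementation): the fluid advances with a coarse step while the solid mechanics is sub-cycled within each fluid step.

```python
def fluid_step(u, dt):
    # Coarse fluid update (stand-in): the flow slowly decelerates.
    return u * (1.0 - 0.1 * dt)

def solid_substeps(v, u, dt, n_sub):
    # Fine solid update: n_sub explicit DEM-like substeps per fluid step,
    # relaxing the solid velocity toward the local fluid velocity (drag).
    h = dt / n_sub
    k = 5.0                     # illustrative drag coefficient
    for _ in range(n_sub):
        v += k * (u - v) * h
    return v

u, v = 1.0, 0.0
dt, n_sub = 0.1, 20
for _ in range(50):             # one coarse fluid step, then 20 solid substeps
    u = fluid_step(u, dt)
    v = solid_substeps(v, u, dt, n_sub)
```

Sub-cycling lets the stiff contact/drag dynamics use a stable small step without forcing the expensive fluid solve onto the same fine time scale.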

  16. Modified-Gravity-GADGET: a new code for cosmological hydrodynamical simulations of modified gravity models

    NASA Astrophysics Data System (ADS)

    Puchwein, Ewald; Baldi, Marco; Springel, Volker

    2013-11-01

    We present a new massively parallel code for N-body and cosmological hydrodynamical simulations of modified gravity models. The code employs a multigrid-accelerated Newton-Gauss-Seidel relaxation solver on an adaptive mesh to efficiently solve for perturbations in the scalar degree of freedom of the modified gravity model. As this new algorithm is implemented as a module for the P-GADGET3 code, it can at the same time follow the baryonic physics included in P-GADGET3, such as hydrodynamics, radiative cooling and star formation. We demonstrate that the code works reliably by applying it to simple test problems that can be solved analytically, as well as by comparing cosmological simulations to results from the literature. Using the new code, we perform the first non-radiative and radiative cosmological hydrodynamical simulations of an f (R)-gravity model. We also discuss the impact of active galactic nucleus feedback on the matter power spectrum, as well as degeneracies between the influence of baryonic processes and modifications of gravity.
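
The relaxation solver at the heart of the method can be illustrated on the simplest possible analogue, a linear 1-D Poisson problem (this sketch omits the Newton linearization, the multigrid acceleration, and the adaptive mesh of the actual code):

```python
import numpy as np

def gauss_seidel_poisson(f, n, sweeps):
    # Solve u'' = f on [0, 1] with u(0) = u(1) = 0 on n interior points
    # by in-place Gauss-Seidel sweeps (the smoother multigrid accelerates).
    h = 1.0 / (n + 1)
    u = np.zeros(n + 2)
    for _ in range(sweeps):
        for i in range(1, n + 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] - h * h * f[i])
    return u

n = 31
x = np.linspace(0.0, 1.0, n + 2)
f = np.full(n + 2, -2.0)        # u'' = -2 has exact solution u = x(1 - x)
u = gauss_seidel_poisson(f, n, sweeps=2000)
```

Plain Gauss-Seidel converges slowly on fine grids (hence the many sweeps here), which is exactly why the production solver wraps it in a multigrid hierarchy.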

  17. A general CellML simulation code generator using ODE solving scheme description.

    PubMed

    Amano, Akira; Soejima, Naoki; Shimayoshi, Takao; Kuwabara, Hiroaki; Kunieda, Yoshitoshi

    2011-01-01

    To cope with the complexity of the biological function simulation models, model representation with description language is becoming popular. However, simulation software itself becomes complex in these environment, thus, it is difficult to modify target computation resources or numerical calculation methods or simulation conditions. Typical biological function simulation software consists of 1) model equation, 2) boundary conditions and 3) ODE solving scheme. Introducing the description model file such as CellML is useful for generalizing the first point and partly second point, however, third point is difficult to handle. We introduce a simulation software generation system which use markup language based description of ODE solving scheme together with cell model description file. By using this software, we can easily generate biological simulation program code with different ODE solving schemes. To show the efficiency of our system, experimental results of several simulation models with different ODE scheme and different computation resources are shown.

  18. A program code generator for multiphysics biological simulation using markup languages.

    PubMed

    Amano, Akira; Kawabata, Masanari; Yamashita, Yoshiharu; Rusty Punzalan, Florencio; Shimayoshi, Takao; Kuwabara, Hiroaki; Kunieda, Yoshitoshi

    2012-01-01

    To cope with the complexity of the biological function simulation models, model representation with description language is becoming popular. However, simulation software itself becomes complex in these environment, thus, it is difficult to modify the simulation conditions, target computation resources or calculation methods. In the complex biological function simulation software, there are 1) model equations, 2) boundary conditions and 3) calculation schemes. Use of description model file is useful for first point and partly second point, however, third point is difficult to handle for various calculation schemes which is required for simulation models constructed from two or more elementary models. We introduce a simulation software generation system which use description language based description of coupling calculation scheme together with cell model description file. By using this software, we can easily generate biological simulation code with variety of coupling calculation schemes. To show the efficiency of our system, example of coupling calculation scheme with three elementary models are shown.

  19. On the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods

    PubMed Central

    Lee, Anthony; Yau, Christopher; Giles, Michael B.; Doucet, Arnaud; Holmes, Christopher C.

    2011-01-01

    We present a case-study on the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. Graphics cards, containing multiple Graphics Processing Units (GPUs), are self-contained parallel computational devices that can be housed in conventional desktop and laptop computers and can be thought of as prototypes of the next generation of many-core processors. For certain classes of population-based Monte Carlo algorithms they offer massively parallel simulation, with the added advantage over conventional distributed multi-core processors that they are cheap, easily accessible, easy to maintain, easy to code, dedicated local devices with low power consumption. On a canonical set of stochastic simulation examples including population-based Markov chain Monte Carlo methods and Sequential Monte Carlo methods, we find speedups from 35- to 500-fold over conventional single-threaded computer code. Our findings suggest that GPUs have the potential to facilitate the growth of statistical modelling into complex data rich domains through the availability of cheap and accessible many-core computation. We believe the speedup we observe should motivate wider use of parallelizable simulation methods and greater methodological attention to their design. PMID:22003276
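
The embarrassingly parallel structure these speedups rely on, independent samples with no inter-sample communication, can be shown with a minimal vectorized example (NumPy on the CPU stands in for GPU threads here; the paper's algorithms are far more elaborate):

```python
import numpy as np

def pi_monte_carlo(n_samples, seed=0):
    # Every sample is independent, so the whole batch maps onto parallel
    # hardware (GPU threads, or SIMD lanes as here) with no communication.
    rng = np.random.default_rng(seed)
    x = rng.random(n_samples)
    y = rng.random(n_samples)
    inside = x * x + y * y <= 1.0          # hit test, all samples at once
    return 4.0 * inside.mean()

estimate = pi_monte_carlo(1_000_000)
```

Population-based MCMC and SMC retain much of this structure: each chain or particle evolves independently between the (comparatively rare) interaction steps, which is what makes them a good fit for many-core devices.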

  20. On the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods.

    PubMed

    Lee, Anthony; Yau, Christopher; Giles, Michael B; Doucet, Arnaud; Holmes, Christopher C

    2010-12-01

    We present a case-study on the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. Graphics cards, containing multiple Graphics Processing Units (GPUs), are self-contained parallel computational devices that can be housed in conventional desktop and laptop computers and can be thought of as prototypes of the next generation of many-core processors. For certain classes of population-based Monte Carlo algorithms they offer massively parallel simulation, with the added advantage over conventional distributed multi-core processors that they are cheap, easily accessible, easy to maintain, easy to code, dedicated local devices with low power consumption. On a canonical set of stochastic simulation examples including population-based Markov chain Monte Carlo methods and Sequential Monte Carlo methods, we find speedups from 35- to 500-fold over conventional single-threaded computer code. Our findings suggest that GPUs have the potential to facilitate the growth of statistical modelling into complex data rich domains through the availability of cheap and accessible many-core computation. We believe the speedup we observe should motivate wider use of parallelizable simulation methods and greater methodological attention to their design.

  1. Advanced cardiac life support refresher course using standardized objective-based Mega Code testing.

    PubMed

    Kaye, W; Mancini, M E; Rallis, S F

    1987-01-01

    The American Heart Association (AHA) recommends that those whose daily work requires knowledge and skills in advanced cardiac life support (ACLS) not only be trained in ACLS but also receive refresher training at least every 2 yr. However, the AHA offers no recommended course for retraining, and no systematic studies of retraining have been conducted on which to base such recommendations. In this paper we review and present our recommendation for a standardized approach to refresher training. Using the goals and objectives of the ACLS training program as evaluation criteria, we tested with the Mega Code a sample population who had previously been trained in ACLS. The results revealed deficiencies in ACLS knowledge and skills in the areas of assessment, defibrillation, drug therapy, and determining the cause of an abnormal blood gas value. We combined this information with our knowledge of other deficiencies identified during actual resuscitation attempts and other basic life-support and ACLS teaching experiences. We then designed a refresher course which was consistent with the overall goals and objectives of the ACLS training program, but which placed emphasis on the deficiencies identified in the pretesting. We taught our newly designed refresher course in three sessions, which included basic life support, endotracheal intubation, arrhythmia recognition and therapeutic modalities, defibrillation, and Mega Code practice. In a fourth session, using Mega Code testing, we evaluated knowledge and skill learning immediately after training. We similarly tested retention 2 to 4 months later. Performance immediately after refresher training showed improvement in all areas where performance had been weak. (ABSTRACT TRUNCATED AT 250 WORDS)

  2. A Complex-Geometry Validation Experiment for Advanced Neutron Transport Codes

    SciTech Connect

    David W. Nigg; Anthony W. LaPorta; Joseph W. Nielsen; James Parry; Mark D. DeHart; Samuel E. Bays; William F. Skerjanc

    2013-11-01

    The Idaho National Laboratory (INL) has initiated a focused effort to upgrade legacy computational reactor physics software tools and protocols used for support of core fuel management and experiment management in the Advanced Test Reactor (ATR) and its companion critical facility (ATRC) at the INL. This will be accomplished through the introduction of modern high-fidelity computational software and protocols, with appropriate new Verification and Validation (V&V) protocols, over the next 12-18 months. Stochastic and deterministic transport theory based reactor physics codes and nuclear data packages that support this effort include MCNP5[1], SCALE/KENO6[2], HELIOS[3], SCALE/NEWT[2], and ATTILA[4]. Furthermore, a capability for sensitivity analysis and uncertainty quantification based on the TSUNAMI[5] system has also been implemented. Finally, we are also evaluating the Serpent[6] and MC21[7] codes, as additional verification tools in the near term as well as for possible application to full three-dimensional Monte Carlo based fuel management modeling in the longer term. On the experimental side, several new benchmark-quality code validation measurements based on neutron activation spectrometry have been conducted using the ATRC. Results for the first four experiments, focused on neutron spectrum measurements within the Northwest Large In-Pile Tube (NW LIPT) and in the core fuel elements surrounding the NW LIPT and the diametrically opposite Southeast IPT, have been reported [8,9]. A fifth, very recent, experiment focused on detailed measurements of the element-to-element core power distribution is summarized here, and examples of the use of the measured data for validation of corresponding MCNP5, HELIOS, NEWT, and Serpent computational models using modern least-squares adjustment methods are provided.
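
The least-squares adjustment mentioned at the end can be sketched in its generic form (a toy two-parameter example with invented numbers, not the TSUNAMI-based tooling): calculated quantities and their covariance are updated to be consistent with a measurement.

```python
import numpy as np

def gls_adjust(x, Cx, A, m, Vm):
    # Generalized least-squares adjustment: update prior parameters x
    # (covariance Cx) to be consistent with measurements m = A x + noise
    # (noise covariance Vm); the posterior covariance shrinks accordingly.
    S = A @ Cx @ A.T + Vm
    K = Cx @ A.T @ np.linalg.inv(S)
    x_adj = x + K @ (m - A @ x)
    C_adj = Cx - K @ A @ Cx
    return x_adj, C_adj

# Toy problem: two calculated quantities, one precise measurement of their sum.
x  = np.array([1.0, 2.0])
Cx = np.diag([0.04, 0.04])
A  = np.array([[1.0, 1.0]])
m  = np.array([3.2])
Vm = np.array([[1e-4]])
x_adj, C_adj = gls_adjust(x, Cx, A, m, Vm)
```

The adjusted values pull the calculated sum toward the measured 3.2 while reducing the stated uncertainty, which is the essence of using measured activation data to validate and tune computational models.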

  3. MPEG-2/4 Low-Complexity Advanced Audio Coding Optimization and Implementation on DSP

    NASA Astrophysics Data System (ADS)

    Wu, Bing-Fei; Huang, Hao-Yu; Chen, Yen-Lin; Peng, Hsin-Yuan; Huang, Jia-Hsiung

    This study presents several optimization approaches for the MPEG-2/4 Advanced Audio Coding (AAC) Low Complexity (LC) encoding and decoding processes. Considering the power consumption and the peripherals required for consumer electronics, this study adopts the TI OMAP5912 platform for portable devices. An important optimization issue for implementing an AAC codec on embedded and mobile devices is to reduce computational complexity and memory consumption. Due to power saving issues, most embedded and mobile systems can only provide very limited computational power and memory resources for the coding process. As a result, modifying and simplifying only one or two blocks is insufficient for optimizing the AAC encoder and enabling it to work well on embedded systems. It is therefore necessary to enhance the computational efficiency of other important modules in the encoding algorithm. This study focuses on optimizing the Temporal Noise Shaping (TNS), Mid/Side (M/S) Stereo, Modified Discrete Cosine Transform (MDCT) and Inverse Quantization (IQ) modules in the encoder and decoder. Furthermore, we also propose an efficient memory reduction approach that provides a satisfactory balance between the reduction of memory usage and the expansion of the encoded files. In the proposed design, both the AAC encoder and decoder are built with fixed-point arithmetic operations and implemented on a DSP processor combined with an ARM core for peripheral control. Experimental results demonstrate that the proposed AAC codec is computationally effective, has low memory consumption, and is suitable for low-cost embedded and mobile applications.
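
The fixed-point arithmetic the codec is built on can be illustrated with a Q15 multiply (a generic sketch, not the paper's implementation): values in [-1, 1) are stored as 16-bit integers and products are rounded back to 15 fractional bits.

```python
Q = 15                              # Q15: 1 sign bit, 15 fractional bits

def to_q15(value):
    # Quantize a real number in [-1, 1) to a saturated 16-bit integer.
    return max(-32768, min(32767, int(round(value * (1 << Q)))))

def q15_mul(a, b):
    # Fixed-point multiply with rounding: the wide integer product is
    # shifted back down to Q15, as a DSP without an FPU would do it.
    return (a * b + (1 << (Q - 1))) >> Q

a = to_q15(0.5)
b = to_q15(-0.25)
prod = q15_mul(a, b)                # represents 0.5 * -0.25 = -0.125
```

Replacing floating-point multiplies in the MDCT and quantization stages with integer operations like this is what makes the codec viable on FPU-less DSP cores, at the cost of managing precision and overflow by hand.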

  4. Non-coding RNAs deregulation in oral squamous cell carcinoma: advances and challenges.

    PubMed

    Yu, T; Li, C; Wang, Z; Liu, K; Xu, C; Yang, Q; Tang, Y; Wu, Y

    2016-05-01

    Oral squamous cell carcinoma (OSCC) is a common cause of cancer death. Despite decades of work exploring new treatments and considerable advances in multimodality treatment, satisfactory cure rates have not yet been reached. The difficulty of early diagnosis and the high prevalence of metastasis associated with OSCC contribute to its dismal prognosis. In the last few decades, emerging data from both tumor biology and clinical trials have led to growing interest in research on predictive biomarkers. Non-coding RNAs (ncRNAs) are promising biomarkers. Among the many kinds of ncRNAs, short ncRNAs, such as microRNAs (miRNAs), have been extensively investigated with regard to their biogenesis, function, and importance in carcinogenesis. In contrast to miRNAs, much less is known about the functions of long non-coding RNAs (lncRNAs) in human cancers, especially in OSCC. The present review highlights the roles of miRNAs and newly discovered lncRNAs in oral tumorigenesis and metastasis, and their clinical implications.

  5. Assessment of SFR fuel pin performance codes under advanced fuel for minor actinide transmutation

    SciTech Connect

    Bouineau, V.; Lainet, M.; Chauvin, N.; Pelletier, M.

    2013-07-01

    Americium is a strong contributor to the long term radiotoxicity of high activity nuclear waste. Transmutation by irradiation in nuclear reactors of long-lived nuclides like {sup 241}Am is, therefore, an option for reducing the radiotoxicity and residual power of waste packages, as well as the repository area. In the SUPERFACT experiment, four different oxide fuels containing high and low concentrations of {sup 237}Np and {sup 241}Am, representing the homogeneous and heterogeneous in-pile recycling concepts, were irradiated in the PHENIX reactor. The behavior of advanced fuel materials containing minor actinides needs to be fully characterized, understood, and modeled in order to optimize the design of this kind of fuel element and to evaluate its performance. This paper assesses the current predictability of the fuel performance codes TRANSURANUS and GERMINAL V2 on the basis of post irradiation examinations of the SUPERFACT experiment for pins with low minor actinide content. Their predictions have been compared to measured data in terms of geometrical changes of fuel and cladding, fission gas behavior, and actinide and fission product distributions. The results are in good agreement with the experimental results, although improvements are also pointed out for further studies, especially if larger minor actinide contents are to be taken into account in the codes. (authors)

  6. Stochastic Simulation of a Commander’s Decision Cycle (SSIM Code)

    DTIC Science & Technology

    2001-06-01

    STOCHASTIC SIMULATION OF A COMMANDER'S DECISION CYCLE (SSIM CODE) ...Department of Defense modeling and simulation community. This work was detailed during a Military Operations Research Society workshop titled: Evolving the...

  7. Subgrid Scale Modeling in Solar Convection Simulations Using the ASH Code

    DTIC Science & Technology

    2003-12-01

    Defense Technical Information Center Compilation Part Notice ADP014789. TITLE: Subgrid Scale Modeling in Solar Convection Simulations using the ASH code. Center for Turbulence Research, Annual Research Briefs 2003. By Y.-N. Young, M. Miesch and N. N. Mansour. 1. Motivation and objectives: The turbulent solar convection zone has remained one of

  8. The Nuclear Energy Advanced Modeling and Simulation Enabling Computational Technologies FY09 Report

    SciTech Connect

    Diachin, L F; Garaizar, F X; Henson, V E; Pope, G

    2009-10-12

    In this document we report on the status of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Enabling Computational Technologies (ECT) effort. In particular, we provide the context for ECT in the broader NEAMS program and describe the three pillars of the ECT effort, namely, (1) tools and libraries, (2) software quality assurance, and (3) computational facility (computers, storage, etc.) needs. We report on our FY09 deliverables to determine the needs of the integrated performance and safety codes (IPSCs) in these three areas and lay out the general plan for software quality assurance to meet the requirements of DOE and the DOE Advanced Fuel Cycle Initiative (AFCI). We conclude with a brief description of our interactions with the Idaho National Laboratory computer center to determine what is needed to expand their role as a NEAMS user facility.

  9. Simulations of one- and two-dimensional complex plasmas using a modular, object-oriented code

    NASA Astrophysics Data System (ADS)

    Jefferson, R. A.; Cianciosa, M.; Thomas, E.

    2010-11-01

    In a complex plasma, charged microparticles ("dust") are added to a background of ions, electrons, and neutral particles. This dust fully interacts with the surrounding plasma and self-consistently alters the plasma environment, leading to the emergence of new plasma behavior. Numerical tools that complement experimental investigations can provide important insights into the properties of complex plasmas. This paper discusses a newly developed code, named DEMON (dynamic exploration of microparticle clouds optimized numerically), for simulating a complex plasma. The DEMON code models the behavior of the charged particle component of a complex plasma in a uniform plasma background. The key feature of the DEMON code is the use of a modular force model that allows a wide variety of experimental configurations to be studied without varying the core code infrastructure. The flexibility of this modular approach is illustrated with examples of one- and two-dimensional complex plasmas.
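The core idea of such a modular force model can be sketched in a few lines. The function names and parameters below are hypothetical, chosen for illustration only; they are not DEMON's actual API:

```python
# Sketch of a modular force model for a dust-particle integrator (hypothetical
# API; the abstract does not describe DEMON's interfaces). Each force is a
# plain callable (positions, velocities) -> force components, so experimental
# configurations are assembled by composing a list rather than editing the
# core integrator.

def confinement(k):
    """Harmonic confinement toward the origin, spring constant k (assumed)."""
    def force(x, v):
        return [-k * xi for xi in x]
    return force

def drag(gamma):
    """Neutral-gas drag with damping coefficient gamma (assumed)."""
    def force(x, v):
        return [-gamma * vi for vi in v]
    return force

def step(x, v, forces, dt, m=1.0):
    """One semi-implicit Euler step summing all registered force modules."""
    contributions = [f(x, v) for f in forces]
    a = [sum(c[i] for c in contributions) / m for i in range(len(x))]
    v = [vi + ai * dt for vi, ai in zip(v, a)]
    x = [xi + vi * dt for xi, vi in zip(x, v)]
    return x, v

# 1D example: a confined, damped dust grain relaxes toward the trap center.
forces = [confinement(k=1.0), drag(gamma=0.5)]
x, v = [1.0], [0.0]
for _ in range(2000):
    x, v = step(x, v, forces, dt=0.01)
```

Swapping in a different experimental configuration then means changing only the `forces` list, which is the design choice the abstract highlights.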

  10. A New Code SORD for Simulation of Polarized Light Scattering in the Earth Atmosphere

    NASA Technical Reports Server (NTRS)

    Korkin, Sergey; Lyapustin, Alexei; Sinyuk, Aliaksandr; Holben, Brent

    2016-01-01

    We report a new publicly available radiative transfer (RT) code for numerical simulation of polarized light scattering in the plane-parallel atmosphere of the Earth. Using 44 benchmark tests, we demonstrate the high accuracy of the new RT code, SORD (Successive ORDers of scattering). We describe the capabilities of SORD and show the run time for each test on two different machines. At present, SORD is intended to work as part of the Aerosol Robotic NETwork (AERONET) inversion algorithm. For natural integration with the AERONET software, SORD is coded in Fortran 90/95. The code is available by email request from the corresponding (first) author or from ftp://climate1.gsfc.nasa.gov/skorkin/SORD/.
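The successive-orders idea named in the code's acronym can be illustrated with a toy discrete model. This is a generic sketch of the method, not SORD's plane-parallel discretization, and the scattering operator below is made up for illustration:

```python
# Toy illustration of successive orders of scattering. The radiation field
# obeys I = S + w * K I, where S is the unscattered source, K a discretized
# scattering operator, and w < 1 the single-scattering albedo. Summing
# scattering orders I = S + (wK)S + (wK)^2 S + ... converges geometrically.

def matvec(K, x):
    return [sum(kij * xj for kij, xj in zip(row, x)) for row in K]

def successive_orders(K, S, w, n_orders):
    I = [0.0] * len(S)
    order = S[:]                          # zeroth order: unscattered source
    for _ in range(n_orders):
        I = [Ii + oi for Ii, oi in zip(I, order)]
        order = [w * oi for oi in matvec(K, order)]   # next scattering order
    return I

# 3-stream example with an isotropic (averaging) scattering kernel (assumed).
K = [[1.0 / 3.0] * 3 for _ in range(3)]
S = [1.0, 0.5, 0.2]
w = 0.9
I = successive_orders(K, S, w, n_orders=60)
```

For this averaging kernel the exact solution is `S[i] + w * mean(S) / (1 - w)`, i.e. approximately `[6.1, 5.6, 5.3]`, which the truncated series approaches as the order count grows.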

  11. Parameter optimization capability in the trajectory code PMAST (Point-Mass Simulation Tool)

    SciTech Connect

    Outka, D.E.

    1987-01-28

    Trajectory optimization capability has been added to PMAST through addition of the Recursive Quadratic Programming code VF02AD. The scope of trajectory optimization problems the resulting code can solve is very broad, as it takes advantage of the versatility of the original PMAST code. Most three-degree-of-freedom flight-vehicle problems can be simulated with PMAST, and up to 25 parameters specifying initial conditions, weights, control histories and other problem-deck inputs can be used to meet trajectory constraints in some optimal manner. This report outlines the mathematical formulation of the optimization technique, describes the input requirements and suggests guidelines for problem formulation. An example problem is presented to demonstrate the use and features of the optimization portions of the code.

  12. A new code SORD for simulation of polarized light scattering in the Earth atmosphere

    NASA Astrophysics Data System (ADS)

    Korkin, Sergey; Lyapustin, Alexei; Sinyuk, Aliaksandr; Holben, Brent

    2016-05-01

    We report a new publicly available radiative transfer (RT) code for numerical simulation of polarized light scattering in the plane-parallel Earth atmosphere. Using 44 benchmark tests, we demonstrate the high accuracy of the new RT code, SORD (Successive ORDers of scattering). We describe the capabilities of SORD and show the run time for each test on two different machines. At present, SORD is intended to work as part of the Aerosol Robotic NETwork (AERONET) inversion algorithm. For natural integration with the AERONET software, SORD is coded in Fortran 90/95. The code is available by email request from the corresponding (first) author or from ftp://climate1.gsfc.nasa.gov/skorkin/SORD/ or ftp://maiac.gsfc.nasa.gov/pub/SORD.zip.

  13. Nonlinear Simulation of Alfven Eigenmodes driven by Energetic Particles: Comparison between HMGC and TAEFL Codes

    NASA Astrophysics Data System (ADS)

    Bierwage, Andreas; Spong, Donald A.

    2009-05-01

    The Hybrid-MHD-Gyrokinetic Code (HMGC) [1] and the gyrofluid code TAEFL [2,3] are used for nonlinear simulation of Alfven Eigenmodes in tokamak plasmas. We compare results obtained in two cases: (I) a case designed for cross-code benchmark of TAE excitation; (II) a case based on a dedicated DIII-D shot #132707 where RSAE and TAE activity is observed. Differences between the numerical simulation results are discussed and future directions are outlined. [1] S. Briguglio, G. Vlad, F. Zonca and C. Kar, Phys. Plasmas 2 (1995) 3711. [2] D.A. Spong, B.A. Carreras and C.L. Hedrick, Phys. Fluids B4 (1992) 3316. [3] D.A. Spong, B.A. Carreras and C.L. Hedrick, Phys. Plasmas 1 (1994) 1503.

  14. Reliability of astrophysical jet simulations in 2D. On inter-code reliability and numerical convergence

    NASA Astrophysics Data System (ADS)

    Krause, M.; Camenzind, M.

    2001-12-01

    In the present paper, we examine the convergence behavior and inter-code reliability of astrophysical jet simulations in axial symmetry. We consider both pure hydrodynamic jets and jets with a dynamically significant magnetic field. The setups were chosen to match those of two other publications, and recomputed with the MHD code NIRVANA. We show that NIRVANA and the two other codes give comparable, but not identical, results. In the case without magnetic field, we explain the differences by the different application of artificial viscosity in the three codes and by numerical details, which can be summarized as a resolution effect: NIRVANA turns out to be a fair code of medium efficiency. It needs approximately twice the resolution of the code by Lind (Lind et al. 1989) and half the resolution of the code by Kössl (Kössl & Müller 1988). We find that some global properties of a hydrodynamical jet simulation, such as the bow shock velocity, converge at 100 points per beam radius (ppb) with NIRVANA. The situation is quite different after switching on the toroidal magnetic field: in this case, global properties converge even at 10 ppb. In both cases, details of the inner jet structure, and especially the terminal shock region, are still insufficiently resolved, even at our highest resolution of 70 ppb in the magnetized case and 400 ppb for the pure hydrodynamic jet. The magnetized jet even suffers from a fatal retreat of the Mach disk towards the inflow boundary, which indicates that this simulation ultimately does not converge. This is in definite disagreement with earlier simulations, and calls for further studies of the problem with other codes. In our highest resolution simulation, we can report two new features: first, small-scale Kelvin-Helmholtz instabilities are excited at the contact discontinuity next to the jet head. This slows down the development of the long-wavelength Kelvin-Helmholtz instability and its turbulent cascade to smaller

  15. X-ray simulation with the Monte Carlo code PENELOPE. Application to Quality Control.

    PubMed

    Pozuelo, F; Gallardo, S; Querol, A; Verdú, G; Ródenas, J

    2012-01-01

    A realistic knowledge of the energy spectrum is very important in Quality Control (QC) of X-ray tubes in order to reduce dose to patients. However, due to the inherent difficulty of measuring the X-ray spectrum accurately, it is not normally obtained in routine QC. Instead, some parameters are measured and/or calculated. The PENELOPE and MCNP5 codes, based on the Monte Carlo method, can be used as complementary tools to verify parameters measured in QC. These codes allow estimating the Bremsstrahlung and characteristic lines from the anode, taking into account the specific characteristics of the equipment. They have been applied to simulate an X-ray spectrum. Results are compared with the theoretical IPEM 78 spectrum. A sensitivity analysis has been developed to estimate the influence on the simulated spectra of important parameters used in the simulation codes. This analysis showed that the FORCE factor is the most important parameter in PENELOPE simulations. The FORCE factor, a variance reduction method, improves the simulation but sharply increases computation time. The value of FORCE should therefore be optimized so that good agreement between simulated and theoretical spectra is reached with an acceptable computation time. Quality parameters such as the Half Value Layer (HVL) can be obtained with the PENELOPE model developed, but FORCE must take such a high value that computation time increases considerably. On the other hand, depth dose assessment can be achieved with acceptable results for small values of FORCE.
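The interaction-forcing idea behind a FORCE-style variance reduction can be shown with a toy Monte Carlo sketch. This is a generic illustration, not PENELOPE's actual sampling algorithm; all numbers below are made up:

```python
# Toy interaction forcing: a rare interaction with probability p per step is
# sampled `force` times more often, and each forced event is scored with a
# statistical weight 1/force, keeping the estimator unbiased while greatly
# increasing the number of scored events (and thus reducing variance).

import random

def mean_interactions(p, force, n_histories, n_steps, rng):
    """Mean number of interactions per history, scored with weight 1/force."""
    total = 0.0
    for _ in range(n_histories):
        for _ in range(n_steps):
            if rng.random() < p * force:   # assumes p * force <= 1
                total += 1.0 / force       # weighted contribution
    return total / n_histories

rng = random.Random(42)
p, n_steps = 0.001, 100                    # true expectation: p * n_steps = 0.1
analog = mean_interactions(p, 1, 10000, n_steps, rng)
forced = mean_interactions(p, 50, 10000, n_steps, rng)
```

Both estimators converge to the same expectation (0.1 here), but the forced run scores roughly fifty times more events per history, which is the trade-off between accuracy and run time the abstract describes.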

  16. Parallelization issues of a code for physically-based simulation of fabrics

    NASA Astrophysics Data System (ADS)

    Romero, Sergio; Gutiérrez, Eladio; Romero, Luis F.; Plata, Oscar; Zapata, Emilio L.

    2004-10-01

    The simulation of fabrics, clothes, and flexible materials is an essential topic in computer animation of realistic virtual humans and dynamic sceneries. New emerging technologies, such as interactive digital TV and multimedia products, make it necessary to develop powerful tools to perform real-time simulations. Parallelism is one such tool. When analyzing fabric simulations computationally, we found that these codes belong to the complex class of irregular applications. Frequently this kind of code includes reduction operations in its core, so that an important fraction of the computational time is spent on such operations. In fabric simulators these operations appear when evaluating forces, giving rise to the equation system to be solved. For this reason, this paper discusses only this phase of the simulation. The paper analyzes and evaluates different irregular reduction parallelization techniques on ccNUMA shared memory machines, applied to a real, physically-based fabric simulator we have developed. Several issues are taken into account in order to achieve high code performance, such as exploitation of data access locality and parallelism, as well as careful use of memory resources (memory overhead). We use the concept of data affinity to develop several efficient algorithms for reduction parallelization exploiting data locality.
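One classic technique for parallelizing such irregular reductions, accumulation into replicated private buffers followed by a conflict-free merge, can be sketched as follows. This is an illustrative Python sketch only; the paper's techniques target ccNUMA machines with native threads and compiler support:

```python
# Replicated-buffer parallelization of an irregular reduction, as it appears
# in force evaluation: each spring (edge) scatters contributions to two
# data-dependent node indices. Each worker accumulates into its own private
# copy of the force array, avoiding write conflicts on the shared output;
# the private copies are then merged in a regular, conflict-free final step.

from concurrent.futures import ThreadPoolExecutor

def reduce_private(n_nodes, chunk):
    """Accumulate one chunk of spring contributions into a private buffer."""
    private = [0.0] * n_nodes
    for (i, j, f) in chunk:
        private[i] += f                    # irregular (data-dependent) indices
        private[j] -= f
    return private

def parallel_force_reduction(edges, n_nodes, n_workers=4):
    size = (len(edges) + n_workers - 1) // n_workers
    chunks = [edges[k:k + size] for k in range(0, len(edges), size)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        privates = list(pool.map(lambda c: reduce_private(n_nodes, c), chunks))
    forces = [0.0] * n_nodes
    for buf in privates:                   # conflict-free merge phase
        for i, v in enumerate(buf):
            forces[i] += v
    return forces

# Example: 4 nodes in a chain, unit force on each spring.
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0)]
forces = parallel_force_reduction(edges, n_nodes=4)
```

The memory overhead the abstract mentions is visible here: each worker holds a full private copy of the output array, which is exactly the cost that locality- and affinity-based methods try to reduce.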

  17. The Consortium for Advanced Simulation of Light Water Reactors

    SciTech Connect

    Ronaldo Szilard; Hongbin Zhang; Doug Kothe; Paul Turinsky

    2011-10-01

    The Consortium for Advanced Simulation of Light Water Reactors (CASL) is a DOE Energy Innovation Hub for modeling and simulation of nuclear reactors. It brings together an exceptionally capable team from national labs, industry and academia that will apply existing modeling and simulation capabilities and develop advanced capabilities to create a usable environment for predictive simulation of light water reactors (LWRs). This environment, designated as the Virtual Environment for Reactor Applications (VERA), will incorporate science-based models, state-of-the-art numerical methods, modern computational science and engineering practices, and uncertainty quantification (UQ) and validation against data from operating pressurized water reactors (PWRs). It will couple state-of-the-art fuel performance, neutronics, thermal-hydraulics (T-H), and structural models with existing tools for systems and safety analysis and will be designed for implementation on both today's leadership-class computers and the advanced architecture platforms now under development by the DOE. CASL focuses on a set of challenge problems such as CRUD induced power shift and localized corrosion, grid-to-rod fretting fuel failures, pellet clad interaction, fuel assembly distortion, etc. that encompass the key phenomena limiting the performance of PWRs. It is expected that much of the capability developed will be applicable to other types of reactors. CASL's mission is to develop and apply modeling and simulation capabilities to address three critical areas of performance for nuclear power plants: (1) reduce capital and operating costs per unit energy by enabling power uprates and plant lifetime extension, (2) reduce nuclear waste volume generated by enabling higher fuel burnup, and (3) enhance nuclear safety by enabling high-fidelity predictive capability for component performance.

  18. Numerical Zooming Between a NPSS Engine System Simulation and a One-Dimensional High Compressor Analysis Code

    NASA Technical Reports Server (NTRS)

    Follen, Gregory; auBuchon, M.

    2000-01-01

    Within NASA's High Performance Computing and Communication (HPCC) program, NASA Glenn Research Center is developing an environment for the analysis/design of aircraft engines called the Numerical Propulsion System Simulation (NPSS). NPSS focuses on the integration of multiple disciplines such as aerodynamics, structures, and heat transfer along with the concept of numerical zooming between zero-dimensional to one-, two-, and three-dimensional component engine codes. In addition, the NPSS is refining the computing and communication technologies necessary to capture complex physical processes in a timely and cost-effective manner. The vision for NPSS is to create a "numerical test cell" enabling full engine simulations overnight on cost-effective computing platforms. Of the different technology areas that contribute to the development of the NPSS Environment, the subject of this paper is numerical zooming between a NPSS engine simulation and higher fidelity representations of the engine components (fan, compressor, burner, turbines, etc.). What follows is a description of successfully zooming one-dimensional (row-by-row) high-pressure compressor analysis results back to a zero-dimensional NPSS engine simulation and a discussion of the results illustrated using an advanced data visualization tool. This type of high fidelity system-level analysis, made possible by the zooming capability of the NPSS, will greatly improve the capability of the engine system simulation and increase the level of virtual testing conducted prior to committing the design to hardware.

  19. Code OK2—A simulation code of ion-beam illumination on an arbitrary shape and structure target

    NASA Astrophysics Data System (ADS)

    Ogoyski, A. I.; Kawata, S.; Someya, T.

    2004-08-01

    For computer simulations of heavy ion beam (HIB) irradiation on a spherical fuel pellet in heavy ion fusion (HIF), the code OK1 was developed and presented in [Comput. Phys. Commun. 157 (2004) 160-172]. The new code OK2 is a modified, upgraded computer program for more common purposes in the research fields of medical treatment and material processing as well as HIF. OK2 provides computational capabilities for a three-dimensional ion beam energy deposition on a target with an arbitrary shape and structure. Program summary: Title of program: OK2. Catalogue identifier: ADTZ. Other versions of this program [1]: title OK1, catalogue identifier ADST. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADTZ. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Computer: PC (Pentium 4, ~1 GHz or more recommended). Operating system: Windows or UNIX. Programming language used: C++. Memory required to execute with typical data: 2048 MB. No. of bits in a word: 32. No. of processors used: 1 CPU. Has the code been vectorized or parallelized: no. No. of bytes in distributed program, including test data: 17 334. No. of lines in distributed program, including test data: 1487. Distribution format: tar gzip file. Nature of physical problem: In the research areas of HIF (Heavy Ion Beam Inertial Fusion) energy [1-4] and medical and material sciences [5], ion energy deposition profiles need to be evaluated and calculated precisely. Due to the favorable energy deposition behavior of ions in matter [1-4], it is expected that ion beams will be one of the preferable candidates in various fields including HIF and material processing. Especially in HIF, for a successful fuel ignition and a sufficient fusion energy release, a stringent requirement is imposed on the HIB irradiation non-uniformity, which should be less than a few percent [4,6,7]. In order to meet this requirement we need to evaluate the uniformity of a realistic HIB irradiation and energy deposition pattern. The HIB

  20. A Modular Computer Code for Simulating Reactive Multi-Species Transport in 3-Dimensional Groundwater Systems

    SciTech Connect

    TP Clement

    1999-06-24

    RT3DV1 (Reactive Transport in 3-Dimensions) is a computer code that solves the coupled partial differential equations that describe reactive-flow and transport of multiple mobile and/or immobile species in three-dimensional saturated groundwater systems. RT3D is a generalized multi-species version of the US Environmental Protection Agency (EPA) transport code, MT3D (Zheng, 1990). The current version of RT3D uses the advection and dispersion solvers from the DOD-1.5 (1997) version of MT3D. As with MT3D, RT3D also requires the groundwater flow code MODFLOW for computing spatial and temporal variations in groundwater head distribution. The RT3D code was originally developed to support the contaminant transport modeling efforts at natural attenuation demonstration sites. As a research tool, RT3D has also been used to model several laboratory and pilot-scale active bioremediation experiments. The performance of RT3D has been validated by comparing the code results against various numerical and analytical solutions. The code is currently being used to model field-scale natural attenuation at multiple sites. The RT3D code is unique in that it includes an implicit reaction solver that makes the code sufficiently flexible for simulating various types of chemical and microbial reaction kinetics. RT3D V1.0 supports seven pre-programmed reaction modules that can be used to simulate different types of reactive contaminants including benzene-toluene-xylene mixtures (BTEX), and chlorinated solvents such as tetrachloroethene (PCE) and trichloroethene (TCE). In addition, RT3D has a user-defined reaction option that can be used to simulate any other types of user-specified reactive transport systems. This report describes the mathematical details of the RT3D computer code and its input/output data structure. It is assumed that the user is familiar with the basics of groundwater flow and contaminant transport mechanics. In addition, RT3D users are expected to have some experience in
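The coupled transport-plus-reaction structure that such codes solve can be illustrated with a minimal one-dimensional toy. This sketch is not RT3D's numerics (RT3D uses MT3D's advection/dispersion solvers and an implicit reaction solver); it only shows the common operator-splitting pattern for a single species with first-order decay:

```python
# Operator-split 1D transport of a single decaying species on a periodic
# grid: each step applies first-order upwind advection, then an exact
# first-order decay (dc/dt = -k c). Toy parameters for illustration only.

import math

def advect_upwind(c, courant):
    """First-order upwind advection for one step (requires courant <= 1)."""
    n = len(c)
    return [c[i] - courant * (c[i] - c[(i - 1) % n]) for i in range(n)]

def react_decay(c, k, dt):
    """Exact first-order decay over one step."""
    f = math.exp(-k * dt)
    return [ci * f for ci in c]

def simulate(c, courant, k, dt, n_steps):
    for _ in range(n_steps):
        c = advect_upwind(c, courant)      # transport half of the split
        c = react_decay(c, k, dt)          # reaction half of the split
    return c

# Square plume of 4 unit cells; after 10 steps the total mass has decayed
# by exp(-k * t) while advection merely moves (and smears) the plume.
c0 = [1.0 if 2 <= i <= 5 else 0.0 for i in range(20)]
c = simulate(c0, courant=0.5, k=0.1, dt=1.0, n_steps=10)
```

Because upwind advection on a periodic grid conserves mass exactly, the total mass after the run equals the initial mass times the analytical decay factor, which makes the split easy to verify.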

  1. Advanced simulation study on bunch gap transient effect

    NASA Astrophysics Data System (ADS)

    Kobayashi, Tetsuya; Akai, Kazunori

    2016-06-01

    Bunch phase shift along the train due to a bunch gap transient is a concern in high-current colliders. In KEKB operation, the measured phase shift along the train agreed well with a simulation and a simple analytical form in most part of the train. However, a rapid phase change was observed at the leading part of the train, which was not predicted by the simulation or by the analytical form. In order to understand the cause of this observation, we have developed an advanced simulation, which treats the transient loading in each of the cavities of the three-cavity system of the accelerator resonantly coupled with energy storage (ARES) instead of the equivalent single cavities used in the previous simulation, operating in the accelerating mode. In this paper, we show that the new simulation reproduces the observation, and clarify that the rapid phase change at the leading part of the train is caused by a transient loading in the three-cavity system of ARES. KEKB is being upgraded to SuperKEKB, which is aiming at 40 times higher luminosity than KEKB. The gap transient in SuperKEKB is investigated using the new simulation, and the result shows that the rapid phase change at the leading part of the train is much larger due to higher beam currents. We will also present measures to mitigate possible luminosity reduction or beam performance deterioration due to the rapid phase change caused by the gap transient.

  2. Phase 1 Validation Testing and Simulation for the WEC-Sim Open Source Code

    NASA Astrophysics Data System (ADS)

    Ruehl, K.; Michelen, C.; Gunawan, B.; Bosma, B.; Simmons, A.; Lomonaco, P.

    2015-12-01

    WEC-Sim is an open source code to model the performance of wave energy converters in operational waves, developed by Sandia and NREL and funded by the US DOE. The code is a time-domain modeling tool developed in MATLAB/SIMULINK using the multibody dynamics solver SimMechanics, and solves the WEC's governing equations of motion using the Cummins time-domain impulse response formulation in 6 degrees of freedom. The WEC-Sim code has undergone verification through code-to-code comparisons; however, validation of the code has been limited to publicly available experimental data sets. While these data sets provide preliminary code validation, the experimental tests were not explicitly designed for code validation, and as a result are limited in their ability to validate the full functionality of the WEC-Sim code. Therefore, dedicated physical model tests for WEC-Sim validation have been performed. This presentation provides an overview of the WEC-Sim validation experimental wave tank tests performed at the Oregon State University's Directional Wave Basin at Hinsdale Wave Research Laboratory. Phase 1 of experimental testing was focused on device characterization and completed in Fall 2015. Phase 2 is focused on WEC performance and scheduled for Winter 2015/2016. These experimental tests were designed explicitly to validate the performance of WEC-Sim code, and its new feature additions. Upon completion, the WEC-Sim validation data set will be made publicly available to the wave energy community. For the physical model test, a controllable model of a floating wave energy converter has been designed and constructed. The instrumentation includes state-of-the-art devices to measure pressure fields, motions in 6 DOF, multi-axial load cells, torque transducers, position transducers, and encoders. The model also incorporates a fully programmable Power-Take-Off system which can be used to generate or absorb wave energy. Numerical simulations of the experiments using WEC-Sim will be

  3. Simulation of the passive condensation cooling tank of the PASCAL test facility using the component thermal-hydraulic analysis code CUPID

    SciTech Connect

    Cho, H. K.; Lee, S. J.; Kang, K. H.; Yoon, H. Y.

    2012-07-01

    For the analysis of transient two-phase flows in nuclear reactor components, a three-dimensional thermal hydraulics code, named CUPID, has been being developed. In the present study, the CUPID code was applied for the simulation of the PASCAL (PAFS Condensing Heat Removal Assessment Loop) test facility constructed with an aim of validating the cooling and operational performance of the PAFS (Passive Auxiliary Feedwater System). The PAFS is one of the advanced safety features adopted in the APR+ (Advanced Power Reactor +), which is intended to completely replace the conventional active auxiliary feedwater system. This paper presents the preliminary simulation results of the PASCAL facility performed with the CUPID code in order to verify its applicability to the thermal-hydraulic phenomena inside the system. A standalone calculation for the passive condensation cooling tank was performed by imposing a heat source boundary condition and the transient thermal-hydraulic behaviors inside the system, such as the water level, temperature and velocity, were qualitatively investigated. The simulation results verified that the natural circulation and boiling phenomena in the water pool can be well reproduced by the CUPID code. (authors)

  4. A Compact Code for Simulations of Quantum Error Correction in Classical Computers

    SciTech Connect

    Nyman, Peter

    2009-03-10

    This study considers implementations of error correction in a simulation language on a classical computer. Error correction will be necessary in quantum computing and quantum information. We give some examples of implementations of error correction codes. These implementations are made in a more general quantum simulation language on a classical computer, in the language Mathematica. The intention of this research is to develop a programming language that is able to simulate all quantum algorithms and error corrections in the same framework. The program code implemented on a classical computer provides a connection between the mathematical formulation of quantum mechanics and computational methods. This gives us a clear, uncomplicated language for the implementation of algorithms.
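The simplest quantum error correction code, the three-qubit bit-flip code, can be simulated at the computational-basis level in a few lines. This is a generic Python illustration rather than the paper's Mathematica framework, and it tracks only basis states, which suffices for single bit-flip errors:

```python
# Three-qubit bit-flip code on computational-basis states: a logical bit is
# encoded redundantly, parity-check "syndromes" locate a single flipped
# qubit without reading the data directly, and the error is undone.

def encode(bit):
    """|0> -> |000>, |1> -> |111> (repetition encoding)."""
    return [bit, bit, bit]

def apply_bit_flip(qubits, i):
    flipped = qubits[:]
    flipped[i] ^= 1
    return flipped

def syndrome(qubits):
    """Parity checks Z1Z2 and Z2Z3; together they locate one flipped qubit."""
    return (qubits[0] ^ qubits[1], qubits[1] ^ qubits[2])

def correct(qubits):
    fix = {(1, 0): 0, (1, 1): 1, (0, 1): 2}   # syndrome -> flipped position
    s = syndrome(qubits)
    return apply_bit_flip(qubits, fix[s]) if s in fix else qubits

def decode(qubits):
    return max(set(qubits), key=qubits.count)  # majority vote

# Any single bit flip on an encoded |1> is detected and corrected.
recovered = [decode(correct(apply_bit_flip(encode(1), i))) for i in range(3)]
```

The same scheme protects an encoded `0`, and the full quantum version replaces the parity checks with measurements of the stabilizer operators Z1Z2 and Z2Z3, which preserve superpositions of the encoded states.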

  5. Simulating magnetospheres with numerical relativity: The GiRaFFE code

    NASA Astrophysics Data System (ADS)

    Babiuc-Hamilton, Maria; Etienne, Zach

    2016-01-01

    Numerical relativity has shown success over the past several years, especially in the simulation of black holes and gravitational waves. In recent years, teams have tackled the problem of the interaction of gravitational and electromagnetic waves. But where plasmas are involved, simulations often have trouble reproducing nature. Neutron stars, black hole accretion disks, astrophysical jets: all of these represent extreme environments both gravitationally and electromagnetically. We are creating the first open-source, dynamical-spacetime general relativistic force-free electrodynamics code: GiRaFFE. We present here the performance of GiRaFFE in testing. With this code, we will simulate neutron star magnetospheres and collisions between neutron stars and black holes; particular attention will be paid to the production of jets through the Blandford-Znajek mechanism. GiRaFFE will be made available to the community.

  6. Intercomparison of numerical simulation codes for geologic disposal of CO2

    SciTech Connect

    Pruess, Karsten; Garcia, Julio; Kovscek, Tony; Oldenburg, Curt; Rutqvist, Jonny; Steefel, Carl; Xu, Tianfu

    2002-11-27

    Numerical simulation codes were exercised on a suite of eight test problems that address CO2 disposal into geologic storage reservoirs, including depleted oil and gas reservoirs, and brine aquifers. Processes investigated include single- and multi-phase flow, gas diffusion, partitioning of CO2 into aqueous and oil phases, chemical interactions of CO2 with aqueous fluids and rock minerals, and mechanical changes due to changes in fluid pressures. Representation of fluid properties was also examined. In most cases results obtained from different simulation codes were in satisfactory agreement, providing confidence in the ability of current numerical simulation approaches to handle the physical and chemical processes that would be induced by CO2 disposal in geologic reservoirs. Some discrepancies were also identified and can be traced to differences in fluid property correlations, and space and time discretization.

  7. Development of the simulation platform between EAST plasma control system and the tokamak simulation code based on Simulink

    NASA Astrophysics Data System (ADS)

    Sen, WANG; Qiping, YUAN; Bingjia, XIAO

    2017-03-01

    The plasma control system (PCS), mainly developed for real-time feedback control calculations, plays a significant part during normal discharges in a magnetic fusion device, while the tokamak simulation code (TSC) is a nonlinear numerical model that studies the time evolution of an axisymmetric magnetized tokamak plasma. The motivation to combine these two codes for an integrated simulation is twofold: the control system module in TSC is relatively simple compared to PCS, and, meanwhile, newly implemented control algorithms in PCS, before being applied to experiments, require numerical validation against a tokamak plasma simulator, a role that TSC can play. In this paper, details of the establishment of the integrated simulation framework between the EAST PCS and TSC are presented, and the poloidal power supply model and data acquisition model that have been implemented in this framework are described as well. In addition, the correctness of data interactions among the EAST PCS, Simulink and TSC was confirmed in an interface test, and in a simulation test the RZIP control scheme in the EAST PCS was numerically validated using this simulation platform. Supported by the National Magnetic Confinement Fusion Science Program of China (No. 2014GB103000) and the National Natural Science Foundation of China (No. 11205200).

  8. Simulation of 2D Kinetic Effects in Plasmas using the Grid Based Continuum Code LOKI

    NASA Astrophysics Data System (ADS)

    Banks, Jeffrey; Berger, Richard; Chapman, Tom; Brunner, Stephan

    2016-10-01

    Kinetic simulation of multi-dimensional plasma waves through direct discretization of the Vlasov equation is a useful tool to study many physical interactions and is particularly attractive for situations where minimal fluctuation levels are desired, for instance, when measuring growth rates of plasma wave instabilities. However, direct discretization of phase space can be computationally expensive, and as a result there are few examples of published results using Vlasov codes in more than a single configuration space dimension. In an effort to fill this gap we have developed the Eulerian-based kinetic code LOKI that evolves the Vlasov-Poisson system in 2+2-dimensional phase space. The code is designed to reduce the cost of phase-space computation by using fully 4th order accurate conservative finite differencing, while retaining excellent parallel scalability that efficiently uses large scale computing resources. In this poster I will discuss the algorithms used in the code as well as some aspects of their parallel implementation using MPI. I will also overview simulation results of basic plasma wave instabilities relevant to laser plasma interaction, which have been obtained using the code.
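The flavor of the high-order conservative differencing used by grid-based kinetic codes like LOKI can be shown on 1D periodic advection (free streaming). This is a sketch only; LOKI itself solves the 2+2-dimensional Vlasov-Poisson system with its own conservative 4th-order scheme:

```python
# 4th-order central differencing plus classical RK4 for df/dt + v df/dx = 0
# on a periodic grid. The central stencil sums to zero over the grid, so the
# total "mass" sum(f)*dx is conserved to round-off, mimicking the
# conservation property of phase-space discretizations.

import math

def ddx4(f, dx):
    """4th-order central first derivative on a periodic grid."""
    n = len(f)
    return [(-f[(i + 2) % n] + 8 * f[(i + 1) % n]
             - 8 * f[(i - 1) % n] + f[(i - 2) % n]) / (12 * dx)
            for i in range(n)]

def rk4_advect(f, v, dx, dt, n_steps):
    """Classical RK4 time stepping of df/dt = -v df/dx."""
    rhs = lambda g: [-v * d for d in ddx4(g, dx)]
    for _ in range(n_steps):
        k1 = rhs(f)
        k2 = rhs([fi + 0.5 * dt * ki for fi, ki in zip(f, k1)])
        k3 = rhs([fi + 0.5 * dt * ki for fi, ki in zip(f, k2)])
        k4 = rhs([fi + dt / 1.0 * ki for fi, ki in zip(f, k3)])
        f = [fi + dt / 6.0 * (a + 2 * b + 2 * c + d)
             for fi, a, b, c, d in zip(f, k1, k2, k3, k4)]
    return f

n, v = 128, 1.0
dx, dt = 1.0 / n, 0.0025
f0 = [math.exp(-100 * (i * dx - 0.5) ** 2) for i in range(n)]
f = rk4_advect(f0, v, dx, dt, n_steps=400)   # advect through one full period
```

After one full period the Gaussian returns nearly to its initial profile, and the grid sum is unchanged to round-off, illustrating why conservative high-order schemes are attractive when low fluctuation levels matter.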

  9. Code Comparison Study Fosters Confidence in the Numerical Simulation of Enhanced Geothermal Systems

    SciTech Connect

    White, Mark D.; Phillips, Benjamin R.

    2015-01-26

    Numerical simulation has become a standard analytical tool for scientists and engineers to evaluate the potential and performance of enhanced geothermal systems. A variety of numerical simulators developed by industry, universities, and national laboratories are currently available and being applied to better understand enhanced geothermal systems at the field scale. To yield credible predictions and be of value to site operators, numerical simulators must be able to accurately represent the complex coupled processes induced by producing geothermal systems, such as fracture aperture changes due to thermal stimulation, fracture shear displacement with fluid injection, rate of thermal depletion of reservoir rocks, and permeability alteration with mineral precipitation or dissolution. A suite of numerical simulators was exercised on a series of test problems that considered coupled thermal, hydraulic, geomechanical, and geochemical (THMC) processes. Problems were selected and designed to isolate selected coupled processes, to be executed on workstation class computers, and have simple but illustrative metrics for result comparisons. This paper summarizes the initial suite of seven benchmark problems, describes the code comparison activities, provides example results for problems and documents the capabilities of currently available numerical simulation codes to represent coupled processes that occur during the production of geothermal resources. Code comparisons described in this paper use the ISO (International Organization for Standardization) standard ISO-13538 for proficiency testing of numerical simulators. This approach was adopted for a recent code comparison study within the radiation transfer-modeling field of atmospheric sciences, which was focused on canopy reflectance models. This standard specifies statistical methods for analyzing laboratory data from proficiency testing schemes to demonstrate that the measurement results do not exhibit evidence of an
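    As a rough illustration of the proficiency-testing statistics referenced above, a common approach scores each code's result with a robust z-score against a consensus value. The sketch below uses a median/MAD estimator, a typical choice in proficiency testing; it is not the ISO standard's exact procedure, and the code names and values are invented.

    ```python
    import statistics

    def z_scores(results, assigned=None, sigma=None):
        """results: dict mapping simulator name -> reported metric value.
        If no assigned value / sigma is supplied, use the participant
        median and a MAD-based robust spread."""
        vals = list(results.values())
        if assigned is None:                      # consensus value
            assigned = statistics.median(vals)
        if sigma is None:                         # robust spread via MAD
            mad = statistics.median([abs(v - assigned) for v in vals])
            sigma = 1.4826 * mad                  # ~sigma for normal data
        return {name: (v - assigned) / sigma for name, v in results.items()}

    # Four hypothetical simulators reporting the same benchmark metric
    scores = z_scores({"codeA": 101.0, "codeB": 99.5,
                       "codeC": 100.2, "codeD": 108.0})
    ```

    Under the usual convention, |z| ≤ 2 is satisfactory and |z| ≥ 3 signals a discrepancy worth investigating; in this made-up round, codeD would be flagged.
    
    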

  10. Recent advances in coding theory for near error-free communications

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Deutsch, L. J.; Dolinar, S. J.; McEliece, R. J.; Pollara, F.; Shahshahani, M.; Swanson, L.

    1991-01-01

    Channel and source coding theories are discussed. The following subject areas are covered: large constraint length convolutional codes (the Galileo code); decoder design (the big Viterbi decoder); Voyager's and Galileo's data compression scheme; current research in data compression for images; neural networks for soft decoding; neural networks for source decoding; finite-state codes; and fractals for data compression.
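    To make the channel-coding ideas concrete, here is a miniature rate-1/2, constraint-length-3 convolutional encoder with hard-decision Viterbi decoding. It is a toy cousin of the large constraint length Galileo code and the big Viterbi decoder, not the flight implementation; the message and error position below are arbitrary.

    ```python
    G = (0b111, 0b101)  # generator polynomials (7, 5 in octal)

    def encode(bits):
        s, out = 0, []  # s holds the two previous input bits
        for b in bits:
            full = (b << 2) | s  # register: [current, prev, prev-prev]
            out += [bin(full & g).count("1") & 1 for g in G]
            s = full >> 1
        return out

    def viterbi(received):
        # One survivor per state: (accumulated Hamming metric, decoded bits)
        survivors = {0: (0, [])}
        for i in range(0, len(received), 2):
            r, nxt = received[i:i + 2], {}
            for s, (m, path) in survivors.items():
                for b in (0, 1):
                    full = (b << 2) | s
                    expected = [bin(full & g).count("1") & 1 for g in G]
                    cost = m + sum(x != y for x, y in zip(expected, r))
                    ns = full >> 1
                    if ns not in nxt or cost < nxt[ns][0]:
                        nxt[ns] = (cost, path + [b])
            survivors = nxt
        return min(survivors.values(), key=lambda t: t[0])[1]

    msg = [1, 0, 1, 1, 0, 0]
    coded = encode(msg + [0, 0])   # two tail bits terminate the trellis
    coded[5] ^= 1                  # inject a single channel error
    decoded = viterbi(coded)[:len(msg)]  # recovers msg (free distance 5)
    ```

    The Galileo code works the same way in principle but with constraint length 15, which is why its decoder ("the big Viterbi decoder") needed 2^14 states and dedicated hardware.
    
    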

  11. Computer code simulations of explosions in flow networks and comparison with experiments

    NASA Astrophysics Data System (ADS)

    Gregory, W. S.; Nichols, B. D.; Moore, J. A.; Smith, P. R.; Steinke, R. G.; Idzorek, R. D.

    1987-10-01

    A program of experimental testing and computer code development for predicting the effects of explosions in air-cleaning systems is being carried out for the Department of Energy. This work is a combined effort by the Los Alamos National Laboratory and New Mexico State University (NMSU). Los Alamos has the lead responsibility in the project and develops the computer codes; NMSU performs the experimental testing. The emphasis in the program is on obtaining experimental data to verify the analytical work. The primary benefit of this work will be the development of a verified computer code that safety analysts can use to analyze the effects of hypothetical explosions in nuclear plant air cleaning systems. The experimental data show the combined effects of explosions in air-cleaning systems that contain all of the important air-cleaning elements (blowers, dampers, filters, ductwork, and cells). A small experimental set-up consisting of multiple rooms, ductwork, a damper, a filter, and a blower was constructed. Explosions were simulated with a shock tube, hydrogen/air-filled gas balloons, and blasting caps. Analytical predictions were made using the EVENT84 and NF85 computer codes. The EVENT84 code predictions were in good agreement with the effects of the hydrogen/air explosions, but they did not model the blasting cap explosions adequately. NF85 predicted shock entrance to and within the experimental set-up very well. The NF85 code was not used to model the hydrogen/air or blasting cap explosions.

  12. Relativistic Modeling Capabilities in PERSEUS Extended MHD Simulation Code for HED Plasmas

    NASA Astrophysics Data System (ADS)

    Hamlin, Nathaniel; Seyler, Charles

    2014-10-01

    We discuss the incorporation of relativistic modeling capabilities into the PERSEUS extended MHD simulation code for high-energy-density (HED) plasmas, and present the latest simulation results. The use of fully relativistic equations enables the model to remain self-consistent in simulations of such relativistic phenomena as hybrid X-pinches and laser-plasma interactions. A major challenge of a relativistic fluid implementation is the recovery of primitive variables (density, velocity, pressure) from conserved quantities at each time step of a simulation. This recovery, which reduces to straightforward algebra in non-relativistic simulations, becomes more complicated when the equations are made relativistic, and has thus far been a major impediment to two-fluid simulations of relativistic HED plasmas. By suitable formulation of the relativistic generalized Ohm's law as an evolution equation, we have reduced the central part of the primitive variable recovery problem to a straightforward algebraic computation, which enables efficient and accurate relativistic two-fluid simulations. Our code recovers expected non-relativistic results and reveals new physics in the relativistic regime. Work supported by the National Nuclear Security Administration stewardship sciences academic program under Department of Energy cooperative Agreement DE-NA0001836.
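    The primitive-variable recovery problem described above can be illustrated for ordinary single-fluid special-relativistic hydrodynamics with an ideal-gas EOS, where it reduces to a scalar root find on the pressure. This sketch uses bisection for robustness; PERSEUS's two-fluid formulation with its generalized Ohm's law is more involved, and the numbers below are an arbitrary test state.

    ```python
    import math

    def prim_from_cons(D, S, tau, Gamma=5/3, iters=200):
        """Recover (rho, v, p) from conserved (D, S, tau) in 1D special-
        relativistic hydrodynamics with p = (Gamma-1)*rho*eps,
        by bisecting on the pressure."""
        def resid(p):
            v = S / (tau + D + p)
            W = 1.0 / math.sqrt(1.0 - v * v)
            rho = D / W
            eps = (tau + D * (1.0 - W) + p * (1.0 - W * W)) / (D * W)
            return (Gamma - 1.0) * rho * eps - p
        lo, hi = 1e-16, 10.0 * (tau + D)  # bracket: resid(lo)>0, resid(hi)<0
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            if resid(lo) * resid(mid) <= 0:
                hi = mid
            else:
                lo = mid
        p = 0.5 * (lo + hi)
        v = S / (tau + D + p)
        W = 1.0 / math.sqrt(1.0 - v * v)
        return D / W, v, p

    # Round trip: build conserved quantities from known primitives, recover.
    Gamma, rho0, v0, p0 = 5/3, 1.0, 0.9, 0.1
    W0 = 1.0 / math.sqrt(1.0 - v0 * v0)
    h0 = 1.0 + p0 / ((Gamma - 1.0) * rho0) + p0 / rho0  # specific enthalpy
    D0 = rho0 * W0
    S0 = rho0 * h0 * W0 * W0 * v0
    tau0 = rho0 * h0 * W0 * W0 - p0 - D0
    rho, v, p = prim_from_cons(D0, S0, tau0, Gamma)  # ~ (1.0, 0.9, 0.1)
    ```

    The key point the abstract makes is that such an iteration is needed at every cell and time step, so reformulating the model to keep this step cheap and well-conditioned matters for the whole simulation.
    
    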

  13. Algorithm for loading shot noise microbunching in multi-dimensional, free-electron laser simulation codes

    SciTech Connect

    Fawley, William M.

    2002-03-25

    We discuss the underlying reasoning behind and the details of the numerical algorithm used in the GINGER free-electron laser (FEL) simulation code to load the initial shot noise microbunching on the electron beam. In particular, we point out that there are some additional subtleties which must be followed for multi-dimensional codes which are not necessary for one-dimensional formulations. Moreover, requiring that the higher harmonics of the microbunching also be properly initialized with the correct statistics leads to additional complexities. We present some numerical results including the predicted incoherent, spontaneous emission as tests of the shot noise algorithm's correctness.
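    The essence of shot-noise loading can be sketched in one dimension: start from a noiseless "quiet start" and add small random phase kicks sized so that each harmonic's bunching matches the statistics of Ne real electrons, ⟨|b_h|²⟩ = 1/Ne. This simplified sketch illustrates the idea only; it is not GINGER's algorithm and omits the multi-dimensional and harmonic-correlation subtleties the abstract refers to.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def load_shot_noise(M, Ne, harmonics=(1, 2, 3)):
        # Uniformly spaced quiet-start phases: bunching is exactly zero.
        theta = 2 * np.pi * (np.arange(M) + 0.5) / M
        # Kick each harmonic h with a random sinusoid; the amplitude is
        # drawn so that, to first order, <|b_h|^2> = 1/Ne.
        for h in harmonics:
            amp = rng.normal(0.0, 2.0 / (h * np.sqrt(Ne)))
            psi = rng.uniform(0.0, 2 * np.pi)
            theta = theta + amp * np.sin(h * theta + psi)
        return theta

    def bunching(theta, h):
        # |<exp(-i h theta)>|: the harmonic-h bunching factor
        return np.abs(np.mean(np.exp(-1j * h * theta)))

    # Ensemble average of |b_1|^2 approaches the shot-noise level 1/Ne,
    # even though the number of macroparticles M is much smaller than Ne.
    Ne, M = 1e6, 4096
    b2 = np.mean([bunching(load_shot_noise(M, Ne), 1) ** 2
                  for _ in range(200)])
    ```

    The same construction applied at h = 2, 3 initializes the higher harmonics with the correct statistics, which is exactly the extra requirement the abstract highlights.
    
    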

  14. Algorithm for loading shot noise microbunching in multidimensional, free-electron laser simulation codes

    NASA Astrophysics Data System (ADS)

    Fawley, William M.

    2002-07-01

    We discuss the underlying reasoning behind and the details of the numerical algorithm used in the GINGER free-electron laser simulation code to load the initial shot noise microbunching on the electron beam. In particular, we point out that there are some additional subtleties which must be followed for multidimensional codes which are not necessary for one-dimensional formulations. Moreover, requiring that the higher harmonics of the microbunching also be properly initialized with the correct statistics leads to additional complexities. We present some numerical results including the predicted incoherent, spontaneous emission as tests of the shot noise algorithm's correctness.

  15. Advanced Virtual Reality Simulations in Aerospace Education and Research

    NASA Astrophysics Data System (ADS)

    Plotnikova, L.; Trivailo, P.

    2002-01-01

    Recent research developments in Aerospace Engineering at RMIT University have demonstrated great potential for using Virtual Reality simulations as a very effective tool in advanced structures and dynamics applications. They have also been extremely successful in the teaching of various undergraduate and postgraduate courses, presenting complex concepts in structural and dynamics design. Characteristic examples are related to classical orbital mechanics, spacecraft attitude dynamics, and structural dynamics. Advanced simulations, reflecting current research by the authors, are mainly related to the implementation of various non-linear dynamic techniques, including the use of Kane's equations to study the dynamics of space tethered satellite systems and the co-rotational finite element method to study reconfigurable robotic systems undergoing large rotations and large translations. The current article describes the numerical implementation of these modern methods of dynamics and concentrates on the post-processing stage of the dynamic simulations. Numerous examples of building stand-alone Virtual Reality animations, designed by the authors, are discussed in detail. The striking feature of the developed technology is the use of standard mathematical packages, such as MATLAB, as a post-processing tool to generate Virtual Reality Modelling Language files with interactive graphics and audio effects. These stand-alone demonstration files can be run under Netscape or Microsoft Explorer and do not require MATLAB. This technology enables scientists to easily share their results with colleagues over the Internet, contributing to flexible learning development at schools and universities.

  16. Advanced Simulation and Computing Fiscal Year 2011-2012 Implementation Plan, Revision 0

    SciTech Connect

    McCoy, M; Phillips, J; Hopson, J; Meisner, R

    2010-04-22

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering (D&E) programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model

  17. Advanced Simulation and Computing FY08-09 Implementation Plan Volume 2 Revision 0

    SciTech Connect

    McCoy, M; Kusnezov, D; Bickel, T; Hopson, J

    2007-04-25

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the safety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future nonnuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear-weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable Stockpile Life Extension Programs (SLEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining the support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from one

  18. Advanced Simulation and Computing FY07-08 Implementation Plan Volume 2

    SciTech Connect

    Kusnezov, D; Hale, A; McCoy, M; Hopson, J

    2006-06-22

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the safety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future nonnuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program will require the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear-weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable Stockpile Life Extension Programs (SLEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining the support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from

  19. Advanced Simulation & Computing FY09-FY10 Implementation Plan Volume 2, Rev. 0

    SciTech Connect

    Meisner, R; Peery, J; McCoy, M; Hopson, J

    2008-04-30

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the safety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future nonnuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear-weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable Stockpile Life Extension Programs (SLEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining the support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from one

  20. Advanced Simulation and Computing FY09-FY10 Implementation Plan, Volume 2, Revision 0.5

    SciTech Connect

    Meisner, R; Hopson, J; Peery, J; McCoy, M

    2008-10-07

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from one

  1. Advanced Simulation and Computing FY08-09 Implementation Plan, Volume 2, Revision 0.5

    SciTech Connect

    Kusnezov, D; Bickel, T; McCoy, M; Hopson, J

    2007-09-13

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear-weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable Stockpile Life Extension Programs (SLEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining the support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from

  2. Advanced Simulation and Computing FY09-FY10 Implementation Plan Volume 2, Rev. 1

    SciTech Connect

    Kissel, L

    2009-04-01

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from one that

  3. Advanced Simulation and Computing FY10-FY11 Implementation Plan Volume 2, Rev. 0.5

    SciTech Connect

    Meisner, R; Peery, J; McCoy, M; Hopson, J

    2009-09-08

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering (D&E) programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model

  4. Simulated herbivory advances autumn phenology in Acer rubrum.

    PubMed

    Forkner, Rebecca E

    2014-05-01

    To determine the degree to which herbivory contributes to phenotypic variation in autumn phenology for deciduous trees, red maple (Acer rubrum) branches were subjected to low and high levels of simulated herbivory and surveyed at the end of the season to assess abscission and degree of autumn coloration. Overall, branches with simulated herbivory abscised ~7 % more leaves at each autumn survey date than did control branches within trees. While branches subjected to high levels of damage showed advanced phenology, abscission rates did not differ from those of undamaged branches within trees because heavy damage induced earlier leaf loss on adjacent branch nodes in this treatment. Damaged branches had greater proportions of leaf area colored than undamaged branches within trees, having twice the amount of leaf area colored at the onset of autumn and having ~16 % greater leaf area colored in late October when nearly all leaves were colored. When senescence was scored as the percent of all leaves abscised and/or colored, branches in both treatments reached peak senescence earlier than did control branches within trees: dates of 50 % senescence occurred 2.5 days earlier for low herbivory branches and 9.7 days earlier for branches with high levels of simulated damage. These advances are comparable in length to reported delays in autumn senescence and advances in spring onset due to climate warming. Thus, results suggest that should insect damage increase as a consequence of climate change, it may offset a lengthening of leaf life spans in some tree species.

  5. SIM_ADJUST -- A computer code that adjusts simulated equivalents for observations or predictions

    USGS Publications Warehouse

    Poeter, Eileen P.; Hill, Mary C.

    2008-01-01

    This report documents the SIM_ADJUST computer code. SIM_ADJUST surmounts an obstacle that is sometimes encountered when using universal model analysis computer codes such as UCODE_2005 (Poeter and others, 2005), PEST (Doherty, 2004), and OSTRICH (Matott, 2005; Fredrick and others, 2007). These codes often read simulated equivalents from a list in a file produced by a process model such as MODFLOW that represents a system of interest. At times values needed by the universal code are missing or assigned default values because the process model could not produce a useful solution. SIM_ADJUST can be used to (1) read a file that lists expected observation or prediction names and possible alternatives for the simulated values; (2) read a file produced by a process model that contains space or tab delimited columns, including a column of simulated values and a column of related observation or prediction names; (3) identify observations or predictions that have been omitted or assigned a default value by the process model; and (4) produce an adjusted file that contains a column of simulated values and a column of associated observation or prediction names. The user may provide alternatives that are constant values or that are alternative simulated values. The user may also provide a sequence of alternatives. For example, the heads from a series of cells may be specified to ensure that a meaningful value is available to compare with an observation located in a cell that may become dry. SIM_ADJUST is constructed using modules from the JUPITER API, and is intended for use on any computer operating system. SIM_ADJUST consists of algorithms programmed in Fortran 90, which efficiently perform numerical calculations.
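    The adjustment logic described in points (1)-(4) can be sketched with a hypothetical in-memory interface (the real SIM_ADJUST reads and writes column-formatted files via the JUPITER API; the names and values below are invented):

    ```python
    def adjust_simulated(expected, model_output, default=None):
        """For each expected observation name, take the process-model value
        if present; otherwise walk the user-supplied sequence of alternative
        names (e.g. heads in nearby cells) until one yields a value.
        expected:     dict name -> list of fallback names (may be empty)
        model_output: dict name -> simulated value (missing = model failed)
        """
        adjusted = {}
        for name, fallbacks in expected.items():
            for candidate in (name, *fallbacks):
                if candidate in model_output:
                    adjusted[name] = model_output[candidate]
                    break
            else:  # no candidate produced a value
                adjusted[name] = default
        return adjusted

    # head_w2's own cell went dry, so its value falls back to a nearby cell
    obs = {"head_w1": [], "head_w2": ["head_cell_42", "head_cell_43"]}
    sim = {"head_w1": 12.7, "head_cell_43": 9.9}
    adjusted = adjust_simulated(obs, sim)  # {'head_w1': 12.7, 'head_w2': 9.9}
    ```

    The fallback sequence mirrors the report's dry-cell example: a chain of neighboring cell heads guarantees the universal code always receives a meaningful simulated equivalent.
    
    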

  6. Numerical simulation of transonic propeller flow using a 3-dimensional small disturbance code employing novel helical coordinates

    NASA Technical Reports Server (NTRS)

    Snyder, Aaron

    1987-01-01

    The numerical simulation of three-dimensional transonic flow about propeller blades is discussed. The equations for the unsteady potential flow about propellers are given for an arbitrary coordinate system. From these, the small disturbance form of the equation is derived for a new helical coordinate system. The new coordinate system is suited to propeller flow and allows cascade boundary conditions to be applied straightforwardly. A numerical scheme is employed which solves the steady flow as an asymptotic limit of unsteady flow. Solutions are presented for subsonic and transonic flow about a 5 percent thick bicircular arc blade of an eight-bladed cascade. Both high and low advance ratio cases are given, including a lifting case as well as nonlifting cases. The nonlifting cases are compared to solutions from an Euler code.

  7. The Advanced Gamma-ray Imaging System (AGIS): Simulation studies

    SciTech Connect

    Maier, G.; Buckley, J.; Bugaev, V.; Fegan, S.; Funk, S.; Konopelko, A.; Vassiliev, V.V.

    2011-06-14

    The Advanced Gamma-ray Imaging System (AGIS) is a next-generation ground-based gamma-ray observatory being planned in the U.S. The anticipated sensitivity of AGIS is about one order of magnitude better than the sensitivity of current observatories, allowing it to measure gamma-ray emission from a large number of Galactic and extra-galactic sources. We present here results of simulation studies of various possible designs for AGIS. The primary characteristics of the array performance - collecting area, angular resolution, background rejection, and sensitivity - are discussed.

  8. The Advanced Gamma-ray Imaging System (AGIS) - Simulation Studies

    SciTech Connect

    Maier, G.; Buckley, J.; Bugaev, V.; Fegan, S.; Vassiliev, V. V.; Funk, S.; Konopelko, A.

    2008-12-24

    The Advanced Gamma-ray Imaging System (AGIS) is a US-led concept for a next-generation instrument in ground-based very-high-energy gamma-ray astronomy. The most important design requirement for AGIS is a sensitivity about 10 times greater than that of current observatories such as VERITAS, H.E.S.S., or MAGIC. We present results of simulation studies of various possible designs for AGIS. The primary characteristics of the array performance (collecting area, angular resolution, background rejection, and sensitivity) are discussed.

  9. EGR Distribution in Engine Cylinders Using Advanced Virtual Simulation

    SciTech Connect

    Fan, Xuetong

    2000-08-20

    Exhaust Gas Recirculation (EGR) is a well-known technology for reduction of NOx in diesel engines. With the demand for extremely low engine out NOx emissions, it is important to have a consistently balanced EGR flow to individual engine cylinders. Otherwise, the variation in the cylinders' NOx contribution to the overall engine emissions will produce unacceptable variability. This presentation will demonstrate the effective use of advanced virtual simulation in the development of a balanced EGR distribution in engine cylinders. An initial design is analyzed reflecting the variance in the EGR distribution, quantitatively and visually. Iterative virtual lab tests result in an optimized system.

  10. Half-Cell RF Gun Simulations with the Electromagnetic Particle-in-Cell Code VORPAL

    SciTech Connect

    Paul, K.; Dimitrov, D. A.; Busby, R.; Bruhwiler, D. L.; Smithe, D.; Cary, J. R.; Kewisch, J.; Kayran, D.; Calaga, R.; Ben-Zvi, I.

    2009-01-22

    We have simulated Brookhaven National Laboratory's half-cell superconducting RF gun design for a proposed high-current ERL using the three-dimensional, electromagnetic particle-in-cell code VORPAL. VORPAL computes the fully self-consistent electromagnetic fields produced by the electron bunches, meaning that it accurately models space-charge effects as well as bunch-to-bunch beam loading effects and the effects of higher-order cavity modes, though these are beyond the scope of this paper. We compare results from VORPAL to the well-established space-charge code PARMELA, using RF fields produced by SUPERFISH, as a benchmarking exercise in which the two codes should agree well.

  11. METHES: A Monte Carlo collision code for the simulation of electron transport in low temperature plasmas

    NASA Astrophysics Data System (ADS)

    Rabie, M.; Franck, C. M.

    2016-06-01

    We present a freely available MATLAB code for the simulation of electron transport in arbitrary gas mixtures in the presence of uniform electric fields. For steady-state electron transport, the program provides the transport coefficients, reaction rates and the electron energy distribution function. The program uses established Monte Carlo techniques and is compatible with the electron scattering cross section files from the open-access Plasma Data Exchange Project LXCat. The code follows an object-oriented design, allowing the tracing and visualization of the spatiotemporal evolution of electron swarms and the temporal development of the mean energy and the electron number due to attachment and/or ionization processes. We benchmark our code with well-known model gases as well as the real gases argon, N2, O2, CF4, SF6 and mixtures of N2 and O2.
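    The free-flight sampling at the heart of such Monte Carlo transport codes is the null-collision (constant trial frequency) technique, sketched below in Python (METHES itself is MATLAB). The collision-frequency model is a toy assumption, not interpolated LXCat cross-section data.

    ```python
    import math, random

    # Null-collision free-flight sampling: fly with a constant trial
    # frequency NU_MAX, then accept a "real" collision with probability
    # nu(E)/NU_MAX. The collision-frequency model below is a toy
    # assumption, not LXCat data.

    NU_MAX = 2.0e9          # s^-1, assumed bound on the total collision frequency

    def nu_total(energy_eV):
        """Toy total collision frequency; a real code tabulates LXCat data."""
        return 1.0e9 * (1.0 + 0.5 * math.tanh(energy_eV - 5.0))

    def flight_time(energy_eV):
        """Time to the next real collision, via null-collision rejection."""
        t = 0.0
        while True:
            t += -math.log(random.random()) / NU_MAX   # trial flight at NU_MAX
            if random.random() < nu_total(energy_eV) / NU_MAX:
                return t                               # accept: real collision

    random.seed(1)
    samples = [flight_time(5.0) for _ in range(20000)]
    mean_t = sum(samples) / len(samples)
    print(f"mean free time ~ {mean_t:.2e} s (expected {1.0/nu_total(5.0):.2e} s)")
    ```

    The rejection step makes the sampled flight times exponential with the true, energy-dependent rate even though only the constant bound NU_MAX is ever inverted.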

  12. 3-D kinetics simulations of the NRU reactor using the DONJON code

    SciTech Connect

    Leung, T. C.; Atfield, M. D.; Koclas, J.

    2006-07-01

    The NRU reactor is highly heterogeneous, heavy-water cooled and moderated, with online refuelling capability. It is licensed to operate at a maximum power of 135 MW, with a peak thermal flux of approximately 4.0 x 10{sup 18} n.m{sup -2} . s{sup -1}. In support of the safe operation of NRU, three-dimensional kinetics calculations for reactor transients have been performed using the DONJON code. The code was initially designed to perform space-time kinetics calculations for the CANDU{sup R} power reactors. This paper describes how the DONJON code can be applied to perform neutronic simulations for the analysis of reactor transients in NRU, and presents calculation results for some transients. (authors)
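    As a zero-dimensional illustration of the kinetics being solved (DONJON itself solves the full space-time problem), a one-delayed-group point-kinetics integrator looks like the sketch below; the parameter values are generic illustrations, not NRU data.

    ```python
    # Point kinetics with one delayed-neutron group, explicit Euler.
    # A zero-dimensional analogue of space-time kinetics; all parameter
    # values are generic illustrations, not NRU data.

    beta, lam, Lam = 0.0065, 0.08, 1.0e-3   # delayed fraction, decay const (1/s), generation time (s)

    def step(n, c, rho, dt):
        dn = ((rho - beta) / Lam) * n + lam * c      # neutron density
        dc = (beta / Lam) * n - lam * c              # precursor density
        return n + dt * dn, c + dt * dc

    n, c = 1.0, beta / (lam * Lam)   # precursors in equilibrium at rho = 0
    for _ in range(100000):          # integrate 10 s at dt = 1e-4 with rho = 0
        n, c = step(n, c, 0.0, 1.0e-4)
    print(round(n, 6))   # critical steady state is preserved: 1.0
    ```

    A reactivity transient is simulated by passing a nonzero rho to step; the point model drops all of the spatial heterogeneity that makes NRU require a full 3-D treatment.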

  13. Some Considerations on the FE Simulation of Orthogonal Cutting Using Different Classes of Numerical Codes

    NASA Astrophysics Data System (ADS)

    Rizzuti, Stefania; Filice, Luigino; Umbrello, Domenico

    2007-05-01

    Numerical simulation of machining is now evolving toward new directions, in particular taking into account 3D applications. On the other hand, some aspects are still not well solved, and different software packages are available today for carrying out fundamental analyses of orthogonal cutting. Some of these codes are very user-friendly but do not allow any tuning of the parameters. Others are "open codes" that permit a suitable set-up of the parameters, offering the possibility of reaching better results even if their use is reserved to specialists. However, both categories are affected by some problems when the analysis of different parameters is run. Global considerations on the databases on which these codes are based would be of interest, as would discussion of the implemented mathematical formulations. Some of the above aspects, in particular focused on the role played by friction, are discussed in the paper.

  14. Code validation for the simulation of supersonic viscous flow about the F-16XL

    NASA Technical Reports Server (NTRS)

    Flores, Jolen; Tu, Eugene; King, Lyndell

    1992-01-01

    The viewgraphs and discussion on code validation for the simulation of supersonic viscous flow about the F-16XL are provided. Because of the large potential gains related to laminar flow on the swept wings of supersonic aircraft, interest in the applications of laminar flow control (LFC) techniques in the supersonic regime has increased. A supersonic laminar flow control (SLFC) technology program is currently underway within NASA. The objective of this program is to develop the data base and design methods that are critical to the development of laminar flow control technology for application to supersonic transport aircraft design. Towards this end, the program integrates computational investigations underway at NASA Ames-Moffett and NASA Langley with flight-test investigations being conducted on the F-16XL at the NASA Ames-Dryden Research Facility in cooperation with Rockwell International. The computational goal at NASA Ames-Moffett is to integrate a thin-layer Reynolds averaged Navier-Stokes flow solver with a stability analysis code. The flow solver would provide boundary layer profiles to the stability analysis code which in turn would predict transition on the F-16XL wing. To utilize the stability analysis codes, reliable boundary layer data is necessary at off-design cases. Previously, much of the prediction of boundary layer transition has been accomplished through the coupling of boundary layer codes with stability theory. However, boundary layer codes may have difficulties at high Reynolds numbers, of the order of 100 million, and with the current complex geometry in question. Therefore, a reliable code which solves the thin-layer Reynolds averaged Navier-Stokes equations is needed. Two objectives are discussed, the first in greater depth. The first objective is method verification, via comparisons of computations with experiment, of the reliability and robustness of the code. To successfully implement LFC techniques to the F-16XL wing, the flow about

  15. New Developments in the Simulation of Advanced Accelerator Concepts

    SciTech Connect

    Bruhwiler, David L.; Cary, John R.; Cowan, Benjamin M.; Paul, Kevin; Mullowney, Paul J.; Messmer, Peter; Geddes, Cameron G. R.; Esarey, Eric; Cormier-Michel, Estelle; Leemans, Wim; Vay, Jean-Luc

    2009-01-22

    Improved computational methods are essential to the diverse and rapidly developing field of advanced accelerator concepts. We present an overview of some computational algorithms for laser-plasma concepts and high-brightness photocathode electron sources. In particular, we discuss algorithms for reduced laser-plasma models that can be orders of magnitude faster than their higher-fidelity counterparts, as well as important on-going efforts to include relevant additional physics that has been previously neglected. As an example of the former, we present 2D laser wakefield accelerator simulations in an optimal Lorentz frame, demonstrating >10 GeV energy gain of externally injected electrons over a 2 m interaction length, showing good agreement with predictions from scaled simulations and theory, with a speedup factor of approximately 2,000 as compared to standard particle-in-cell.

  16. Benchmarking of Advanced Control Strategies for a Simulated Hydroelectric System

    NASA Astrophysics Data System (ADS)

    Finotti, S.; Simani, S.; Alvisi, S.; Venturini, M.

    2017-01-01

    This paper analyses and develops the design of advanced control strategies for a typical hydroelectric plant during unsteady conditions, performed in the Matlab and Simulink environments. The hydraulic system consists of a high water head and a long penstock with upstream and downstream surge tanks, and is equipped with a Francis turbine. The nonlinear characteristics of hydraulic turbine and the inelastic water hammer effects were considered to calculate and simulate the hydraulic transients. With reference to the control solutions addressed in this work, the proposed methodologies rely on data-driven and model-based approaches applied to the system under monitoring. Extensive simulations and comparisons serve to determine the best solution for the development of the most effective, robust and reliable control tool when applied to the considered hydraulic system.

  17. New Developments in the Simulation of Advanced Accelerator Concepts

    SciTech Connect

    Paul, K.; Cary, J.R.; Cowan, B.; Bruhwiler, D.L.; Geddes, C.G.R.; Mullowney, P.J.; Messmer, P.; Esarey, E.; Cormier-Michel, E.; Leemans, W.P.; Vay, J.-L.

    2008-09-10

    Improved computational methods are essential to the diverse and rapidly developing field of advanced accelerator concepts. We present an overview of some computational algorithms for laser-plasma concepts and high-brightness photocathode electron sources. In particular, we discuss algorithms for reduced laser-plasma models that can be orders of magnitude faster than their higher-fidelity counterparts, as well as important on-going efforts to include relevant additional physics that has been previously neglected. As an example of the former, we present 2D laser wakefield accelerator simulations in an optimal Lorentz frame, demonstrating >10 GeV energy gain of externally injected electrons over a 2 m interaction length, showing good agreement with predictions from scaled simulations and theory, with a speedup factor of approximately 2,000 as compared to standard particle-in-cell.

  18. Recent advances of strong-strong beam-beam simulation

    SciTech Connect

    Qiang, Ji; Furman, Miguel A.; Ryne, Robert D.; Fischer, Wolfram; Ohmi, Kazuhito

    2004-09-15

    In this paper, we report on recent advances in strong-strong beam-beam simulation. Numerical methods used in the calculation of the beam-beam forces are reviewed. A new computational method to solve the Poisson equation on a nonuniform grid is presented. This method reduces the computational cost by half compared with the standard FFT-based method on a uniform grid. It is also more accurate than the standard method for a colliding beam with a low transverse aspect ratio. In applications, we present the study of coherent modes with multi-bunch, multi-collision beam-beam interactions at RHIC. We also present the strong-strong simulation of the luminosity evolution at KEKB with and without a finite crossing angle.
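    For contrast with the nonuniform-grid method, the standard uniform-grid FFT Poisson solve referred to above can be sketched as follows. Periodic boundaries are assumed purely to keep the sketch short; beam-beam codes typically use open boundaries with a free-space Green function.

    ```python
    import numpy as np

    # Standard uniform-grid FFT Poisson solve, sketched for the 2-D
    # equation -laplacian(phi) = rho with periodic boundaries. (Beam-beam
    # codes typically use open boundaries and a free-space Green function;
    # periodicity is an illustrative simplification.)

    def poisson_fft(rho, L=2.0 * np.pi):
        n = rho.shape[0]
        k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
        kx, ky = np.meshgrid(k, k, indexing='ij')
        k2 = kx**2 + ky**2
        k2[0, 0] = 1.0                     # avoid division by zero (mean mode)
        phi_hat = np.fft.fft2(rho) / k2
        phi_hat[0, 0] = 0.0                # fix the free constant: zero mean
        return np.real(np.fft.ifft2(phi_hat))

    # analytic check: rho = 2 sin(x) sin(y)  ->  phi = sin(x) sin(y)
    n = 64
    x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    X, Y = np.meshgrid(x, x, indexing='ij')
    phi = poisson_fft(2.0 * np.sin(X) * np.sin(Y))
    print(float(np.max(np.abs(phi - np.sin(X) * np.sin(Y)))))  # ~1e-15 (spectral accuracy)
    ```

    Division by k squared in Fourier space is the whole solve, which is why the uniform-grid method is the baseline any nonuniform-grid scheme is measured against.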

  19. Development of a numerical computer code and circuit element models for simulation of firing systems

    SciTech Connect

    Carpenter, K.H. (Dept. of Electrical and Computer Engineering)

    1990-07-02

    Numerical simulation of firing systems requires both the appropriate circuit analysis framework and the special element models required by the application. We have modified the SPICE circuit analysis code (version 2G.6), developed originally at the Electronic Research Laboratory of the University of California, Berkeley, to allow it to be used on MSDOS-based personal computers and to give it two additional circuit elements needed by firing systems: fuses and saturating inductances. An interactive editor and a batch driver have been written to ease the use of the SPICE program by system designers, and the interactive graphical post processor, NUTMEG, supplied by U.C. Berkeley with SPICE version 3B1, has been interfaced to the output from the modified SPICE. Documentation and installation aids have been provided to make the total software system accessible to PC users. Sample problems show that the resulting code is in agreement with the FIRESET code on which the fuse model was based (with some modifications to the dynamics of scaling fuse parameters). In order to allow for more complex simulations of firing systems, studies have been made of additional special circuit elements: switches and ferrite-cored inductances. A simple switch model has been investigated which promises to give at least a first approximation to the physical effects of a non-ideal switch, and which can be added to existing SPICE circuits without changing the SPICE code itself. The effect of fast rise-time pulses on ferrites has been studied experimentally in order to provide a base for future modeling and incorporation of the dynamic effects of changes in core magnetization into the SPICE code. This report contains detailed accounts of the work on these topics performed during the period it covers, and has appendices listing all source code written and documentation produced.
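    A fuse element of this kind can be illustrated with a minimal burst-action model: resistance stays at its cold value until the action integral of the current reaches a burst action, then rises steeply. The functional form and all constants below are assumptions for illustration, not the FIRESET fuse model itself.

    ```python
    # Minimal burst-action fuse sketch: the fuse resistance is R_COLD until
    # the action integral g = integral of i^2 dt reaches G_BURST, then rises
    # steeply toward R_OPEN. Functional form and constants are illustrative
    # assumptions, not the FIRESET model.

    R_COLD, R_OPEN, G_BURST = 0.01, 1.0e3, 2.5e-3   # ohm, ohm, A^2*s

    def fuse_resistance(g):
        if g < G_BURST:
            return R_COLD
        return min(R_OPEN, R_COLD * (g / G_BURST)**8)   # steep post-burst rise

    # drive with a constant 1 A current: burst expected at t = G/i^2 = 2.5 ms
    g, t, dt, i = 0.0, 0.0, 1.0e-5, 1.0
    while g < G_BURST:
        g += i * i * dt
        t += dt
    print(f"fuse bursts near t = {t*1e3:.2f} ms")
    ```

    In a circuit simulator the same state variable g would be advanced inside the time-stepping loop and the element's resistance updated each step.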

  20. Large eddy simulation of unsteady wind farm behavior using advanced actuator disk models

    NASA Astrophysics Data System (ADS)

    Moens, Maud; Duponcheel, Matthieu; Winckelmans, Gregoire; Chatelain, Philippe

    2014-11-01

    The present project aims at improving the level of fidelity of unsteady wind-farm-scale simulations through an effort on the representation and modeling of the rotors. The chosen tool for the simulations is a fourth-order finite difference code, developed at Universite catholique de Louvain; this solver implements Large Eddy Simulation (LES) approaches. The wind turbines are modeled as advanced actuator disks: these disks are coupled with the Blade Element Momentum (BEM) method and also take into account the turbine dynamics and controller. A special effort is made here to reproduce the specific wake behaviors. Wake decay and expansion are indeed initially governed by vortex instabilities, information that cannot be obtained from the BEM calculations. We thus aim at achieving this by matching the large scales of the actuator disk flow to high-fidelity wake simulations produced using a Vortex Particle-Mesh method; this is obtained by adding a controlled excitation at the disk. We apply this tool to the investigation of atmospheric turbulence effects on the power production and on the wake behavior at a wind farm level. A turbulent velocity field is then used as the inflow boundary condition for the simulations. We gratefully acknowledge the support of GDF Suez for the fellowship of Mrs Maud Moens.
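    The momentum theory underlying any actuator-disk representation can be stated in a few lines; this is the textbook baseline, not the advanced coupled BEM/controller model described above.

    ```python
    # 1-D momentum theory behind actuator-disk models: an axial induction
    # factor a gives thrust and power coefficients CT = 4a(1-a) and
    # CP = 4a(1-a)^2, with CP maximised at a = 1/3 (Betz limit, 16/27).
    # Textbook relation only; the advanced disks above add BEM loading,
    # turbine dynamics, and a controlled wake excitation on top of this.

    def ct(a):
        return 4.0 * a * (1.0 - a)

    def cp(a):
        return 4.0 * a * (1.0 - a)**2

    best = max(cp(i / 1000.0) for i in range(1000))
    print(f"max CP ~ {best:.4f}, Betz limit 16/27 = {16/27:.4f}")
    ```

    In an LES solver the disk applies the corresponding thrust as a body force on the grid cells it covers, with the BEM step distributing that load along the blade span.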

  1. Intercode Advanced Fuels and Cladding Comparison Using BISON, FRAPCON, and FEMAXI Fuel Performance Codes

    NASA Astrophysics Data System (ADS)

    Rice, Aaren

    As part of the Department of Energy's Accident Tolerant Fuels (ATF) campaign, new cladding designs and fuel types are being studied in order to help make nuclear energy a safer and more affordable source of power. This study focuses on the implementation and analysis of the SiC cladding and UN, UC, and U3Si2 fuels in three specific nuclear fuel performance codes: BISON, FRAPCON, and FEMAXI. These fuels boast a higher thermal conductivity and uranium density than traditional UO2 fuel, which could help lead to longer times in a reactor environment. The SiC cladding has been studied for its reduced production of hydrogen gas during an accident scenario; however, SiC is a brittle and unyielding material that may fracture during PCMI (Pellet Cladding Mechanical Interaction). This work focuses on steady-state operation with advanced fuel and cladding combinations. By implementing and performing analysis work with these materials, it is possible to better understand some of the mechanical interactions that could be seen as limiting factors. In addition to the analysis of the materials themselves, a further analysis is done on the effects of using a fuel creep model in combination with the SiC cladding. While fuel creep is commonly ignored in traditional UO2 fuel and Zircaloy cladding systems, fuel creep can be a significant factor in PCMI with SiC.

  2. Moving objects extraction method in H.264/advanced video coding bit stream of a complex scene

    NASA Astrophysics Data System (ADS)

    Mingsheng, Chen; Mingxin, Qin; Guangming, Liang; Jixiang, Sun; Xu, Ning

    2013-08-01

    For the purpose of extracting moving objects from an H.264/advanced video coding (AVC) bit stream of a complex scene, an algorithm based on a maximum a posteriori Markov random field (MRF) framework that extracts moving objects directly from H.264 compressed video is proposed in this paper. It mainly relies on the encoding information in the H.264/AVC bit stream, namely motion vectors (MVs) and block partition modes, and utilizes the temporal continuity and spatial consistency of a moving object's pieces. First, it retrieves the MVs and block partition modes of identical 4×4 pixel blocks in P frames and establishes a Gaussian mixture model (GMM) of the phase of the MVs as a reference background; it then creates an MRF model based on the MVs, block partition modes, the GMM of the background, and spatial and temporal consistency. The moving objects are retrieved by solving the MRF model. The experimental results show that the method performs robustly in a complex environment, and its precision and recall are improved over the existing algorithm.
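    The background-modeling step can be illustrated with a single-Gaussian simplification of the idea: keep a per-block Gaussian over the motion-vector phase, update it online, and flag blocks that deviate too far. The paper's method uses a full GMM combined with an MRF over neighbouring blocks; the learning rate and threshold below are assumed values.

    ```python
    import math, random

    # Single-Gaussian simplification of the background model above: one
    # running Gaussian per block over the MV phase, with a deviation test.
    # The full method uses a GMM plus an MRF over modes and neighbours;
    # ALPHA and K here are illustrative assumptions.

    ALPHA, K = 0.05, 2.5        # exponential learning rate, deviation threshold

    def update(mu, var, x):
        d = x - mu
        mu = mu + ALPHA * d
        var = (1.0 - ALPHA) * (var + ALPHA * d * d)
        return mu, var

    def is_foreground(mu, var, x):
        return abs(x - mu) > K * math.sqrt(var)

    random.seed(3)
    mu, var = 0.0, 0.01
    for _ in range(500):                      # background phase jitters near 0
        mu, var = update(mu, var, random.gauss(0.0, 0.1))
    print(is_foreground(mu, var, 1.2), is_foreground(mu, var, 0.0))
    ```

    A moving object produces a coherent phase far from the background mean, so its blocks test as foreground; the MRF in the full method then smooths these per-block decisions spatially and temporally.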

  3. EMPulse, a new 3-D simulation code for electromagnetic pulse studies

    NASA Astrophysics Data System (ADS)

    Cohen, Bruce; Eng, Chester; Farmer, William; Friedman, Alex; Grote, David; Kruger, Hans; Larson, David

    2016-10-01

    EMPulse is a comprehensive and modern 3-D simulation code for electromagnetic pulse (EMP) formation and propagation studies, being developed at LLNL as part of a suite of codes to study E1 EMP originating from prompt gamma rays. EMPulse builds upon the open-source Warp particle-in-cell code framework developed by members of this team and collaborators at other institutions. The goal of this endeavor is a new tool enabling the detailed and self-consistent study of multi-dimensional effects in geometries that have typically been treated only approximately. Here we present an overview of the project, the models and methods that have been developed and incorporated into EMPulse, tests of these models, comparisons to simulations undertaken in CHAP-lite (derived from the legacy code CHAP due to C. Longmire and co-workers), and some approaches to increased computational efficiency being studied within our project. This work was performed under the auspices of the U.S. DOE by Lawrence Livermore National Security, LLC, Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  4. Simulation of a tokamak edge plasma with the kinetic code COGENT

    NASA Astrophysics Data System (ADS)

    Dorf, M.; Cohen, R.; Dorr, M.; Hittinger, J.; Rognlien, T.; Colella, P.; Martin, D.; McCorquodale, P.

    2013-10-01

    Progress on the development of the continuum gyrokinetic code COGENT for edge plasma simulations is reported. The COGENT code models an axisymmetric gyrokinetic equation coupled to the long-wavelength limit of the gyro-Poisson equation. COGENT is distinguished by the application of fourth-order conservative discretization and mapped multiblock grid technology to handle the geometric complexity of the tokamak edge. The code also has a number of model collision operator options, which have been successfully verified in neoclassical simulations. Our recent development work has focused on incorporation of the full (nonlinear) Fokker-Planck collision model. The implementation of the Fokker-Planck operator is discussed in detail, and the results of the initial verification studies are presented. In addition, we report on progress and status of the newly available divertor version of the COGENT code that includes both closed and open magnetic field line regions and a model for recycled neutral gas. Work performed for USDOE, at LLNL under contract DE-AC52-07NA27344 and at LBNL under contract DE-AC02-05CH11231.

  5. Force field development with GOMC, a fast new Monte Carlo molecular simulation code

    NASA Astrophysics Data System (ADS)

    Mick, Jason Richard

    In this work, GOMC (GPU Optimized Monte Carlo), a new fast, flexible, and free molecular Monte Carlo code for the simulation of atomistic chemical systems, is presented. The results of a large Lennard-Jonesium simulation in the Gibbs ensemble are presented. Force fields developed using the code are also presented. To fit the models, a quantitative fitting process is outlined using a scoring function and heat maps. The presented n-6 force fields include force fields for noble gases and branched alkanes. These force fields are shown to be the most accurate LJ or n-6 force fields to date for these compounds, capable of reproducing pure fluid behavior and binary mixture behavior to a high degree of accuracy.
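    The kind of Metropolis sampling such a code performs can be sketched for a tiny Lennard-Jones cluster in reduced units; GOMC itself is a GPU-optimized C++ code with Gibbs-ensemble moves, so everything below is an illustrative toy.

    ```python
    import math, random

    # Toy Metropolis Monte Carlo for a three-atom Lennard-Jones cluster in
    # reduced units. Only a sketch of the sampling scheme GOMC implements
    # at scale; temperature, step size, and geometry are illustrative.

    def lj(r2):
        """Lennard-Jones pair energy 4(1/r^12 - 1/r^6), given r squared."""
        inv6 = 1.0 / r2**3
        return 4.0 * (inv6 * inv6 - inv6)

    def energy(pos):
        return sum(lj(sum((a - b)**2 for a, b in zip(pos[i], pos[j])))
                   for i in range(len(pos)) for j in range(i + 1, len(pos)))

    def metropolis_sweep(pos, T=0.5, dmax=0.1):
        """One attempted random displacement per atom; returns the energy."""
        e = energy(pos)
        for i in range(len(pos)):
            old = pos[i]
            pos[i] = tuple(c + random.uniform(-dmax, dmax) for c in old)
            e_new = energy(pos)
            if e_new > e and random.random() >= math.exp(-(e_new - e) / T):
                pos[i] = old                       # reject uphill move
            else:
                e = e_new                          # accept move
        return e

    random.seed(0)
    pos = [(0.0, 0.0, 0.0), (1.2, 0.0, 0.0), (0.0, 1.2, 0.0)]
    for _ in range(50):
        e = metropolis_sweep(pos)
    print(round(lj(2**(1/3)), 12))   # pair minimum at r = 2^(1/6): -1.0
    ```

    Fitting an n-6 force field amounts to tuning the well depth and size parameters (here fixed at 1 in reduced units) until simulated properties match reference data.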

  6. GRMHD Simulations of Jet Formation with a Newly-Developed GRMHD Code

    NASA Technical Reports Server (NTRS)

    Mizuno, Y.; Nishikawa, K.-I.; Koide, S.; Hardee, P.; Fishman, G. J.

    2006-01-01

    We have developed a new three-dimensional general relativistic magnetohydrodynamic code using a conservative, high-resolution shock-capturing scheme. The numerical fluxes are calculated using the HLL approximate Riemann solver scheme. The flux-CT scheme is used to maintain a divergence-free magnetic field. Various 1-dimensional test problems show significant improvements over our previous GRMHD code. We have performed simulations of jet formation from a geometrically thin accretion disk near both a non-rotating and a rotating black hole. The new simulation results show that the jet is formed in the same manner as in previous works and propagates outward. As the magnetic field strength becomes weaker, a larger amount of matter is launched with the jet. On the other hand, when the magnetic field strength becomes stronger, the jet carries less matter and becomes Poynting-flux dominated. We will also discuss how the jet properties depend on the rotation of the black hole.
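    The HLL approximate Riemann flux named above has a compact closed form; it is written here for a scalar conservation law (inviscid Burgers) with the simplest wave-speed estimates, rather than for the full GRMHD flux vector.

    ```python
    # HLL approximate Riemann flux for a scalar conservation law
    # u_t + f(u)_x = 0 (inviscid Burgers). A GRMHD code applies the same
    # formula component-wise to the relativistic MHD flux vector, with
    # characteristic wave speeds in place of the simple min/max below.

    def f(u):
        return 0.5 * u * u            # Burgers flux; f'(u) = u

    def hll_flux(uL, uR):
        sL, sR = min(uL, uR), max(uL, uR)   # left/right wave-speed estimates
        if sL >= 0.0:
            return f(uL)                    # all waves move right: upwind left
        if sR <= 0.0:
            return f(uR)                    # all waves move left: upwind right
        return (sR * f(uL) - sL * f(uR) + sL * sR * (uR - uL)) / (sR - sL)

    print(hll_flux(1.0, 0.0))   # right-moving shock, upwind flux: 0.5
    ```

    Because HLL averages over the intermediate states, it is robust but diffusive; pairing it with high-resolution reconstruction recovers sharp shocks, which is the design of the scheme described above.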

  7. Simulating hypervelocity impact effects on structures using the smoothed particle hydrodynamics code MAGI

    NASA Technical Reports Server (NTRS)

    Libersky, Larry; Allahdadi, Firooz A.; Carney, Theodore C.

    1992-01-01

    Analysis of the interactions occurring between space debris and orbiting structures is of great interest to the planning and survivability of space assets. Computer simulation of the impact events using hydrodynamic codes can provide some understanding of the processes, but the problems involved with this fundamental approach are formidable. First, any realistic simulation is necessarily three-dimensional, e.g., the impact and breakup of a satellite. Second, the thickness of important components such as satellite skins or bumper shields is small with respect to the dimension of the structure as a whole, presenting severe zoning problems for codes. Third, the debris cloud produced by the primary impact will yield many secondary impacts which will contribute to the damage and possible breakup of the structure. The problem was approached by choosing a relatively new computational technique that has virtues peculiar to space impacts. The method is called Smoothed Particle Hydrodynamics.
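    The two basic SPH ingredients, a smoothing kernel and the density summation, reduce in one dimension to the sketch below. MAGI itself is three-dimensional, and the cubic spline shown is the common textbook kernel, used here only for illustration.

    ```python
    # 1-D SPH building blocks: a cubic-spline smoothing kernel W(r, h) and
    # the density summation rho_i = sum_j m_j W(x_i - x_j, h). Illustrative
    # only; a production code like MAGI works in 3-D with its own kernel.

    def w_cubic(r, h):
        q, sigma = abs(r) / h, 2.0 / (3.0 * h)      # sigma: 1-D normalisation
        if q < 1.0:
            return sigma * (1.0 - 1.5 * q * q + 0.75 * q**3)
        if q < 2.0:
            return sigma * 0.25 * (2.0 - q)**3
        return 0.0

    def density(x, masses, h):
        return [sum(m * w_cubic(xi - xj, h) for xj, m in zip(x, masses))
                for xi in x]

    # the kernel must integrate to 1; check with a midpoint rule over [-2h, 2h]
    n, h = 8000, 1.0
    dx = 4.0 * h / n
    total = sum(w_cubic(-2.0 * h + (k + 0.5) * dx, h) for k in range(n)) * dx
    print(round(total, 6))   # 1.0
    ```

    Because the particles carry the mass and the kernel has compact support, thin shells like bumper shields need no grid zoning, which is the virtue the abstract alludes to.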

  8. TERRA: a computer code for simulating the transport of environmentally released radionuclides through agriculture

    SciTech Connect

    Baes, C.F. III; Sharp, R.D.; Sjoreen, A.L.; Hermann, O.W.

    1984-11-01

    TERRA is a computer code which calculates concentrations of radionuclides and ingrowing daughters in surface and root-zone soil, produce and feed, beef, and milk from a given deposition rate at any location in the conterminous United States. The code is fully integrated with seven other computer codes which together comprise a Computerized Radiological Risk Investigation System, CRRIS. Output from either the long range (> 100 km) atmospheric dispersion code RETADD-II or the short range (<80 km) atmospheric dispersion code ANEMOS, in the form of radionuclide air concentrations and ground deposition rates by downwind location, serves as input to TERRA. User-defined deposition rates and air concentrations may also be provided as input to TERRA through use of the PRIMUS computer code. The environmental concentrations of radionuclides predicted by TERRA serve as input to the ANDROS computer code which calculates population and individual intakes, exposures, doses, and risks. TERRA incorporates models to calculate uptake from soil and atmospheric deposition on four groups of produce for human consumption and four groups of livestock feeds. During the environmental transport simulation, intermediate calculations of interception fraction for leafy vegetables, produce directly exposed to atmospherically depositing material, pasture, hay, and silage are made based on location-specific estimates of standing crop biomass. Pasture productivity is estimated by a model which considers the number and types of cattle and sheep, pasture area, and annual production of other forages (hay and silage) at a given location. Calculations are made of the fraction of grain imported from outside the assessment area. TERRA output includes the above calculations and estimated radionuclide concentrations in plant produce, milk, and a beef composite by location.

  9. Simulation of nonlinear propagation of biomedical ultrasound using PZFlex and the KZK Texas code

    NASA Astrophysics Data System (ADS)

    Qiao, Shan; Jackson, Edward; Coussios, Constantin-C.; Cleveland, Robin

    2015-10-01

    In biomedical ultrasound, nonlinear acoustics can be important in both diagnostic and therapeutic applications, and robust simulation tools are needed in the design process but also for day-to-day use such as treatment planning. For most biomedical applications the ultrasound sources generate focused sound beams of finite amplitude. The KZK equation is a common model as it accounts for nonlinearity, absorption and paraxial diffraction, and there are a number of solvers available, primarily developed by research groups. We compare the predictions of the KZK Texas code (a finite-difference time-domain algorithm) to an FEM-based commercial software package, PZFlex. PZFlex solves the continuity equation and momentum conservation equation with a correction for nonlinearity in the equation of state, incorporated using an incrementally linear, 2nd-order-accurate, explicit algorithm in the time domain. Nonlinear ultrasound beams from two transducers, driven at 1 MHz and 3.3 MHz respectively, were simulated by both the KZK Texas code and PZFlex, and the pressure field was also measured by a fibre-optic hydrophone to validate the models. Further simulations were carried out over a wide range of frequencies. The comparisons showed good agreement for the fundamental frequency for PZFlex, the KZK Texas code and the experiments. For the harmonic components, the KZK Texas code was in good agreement with measurements but PZFlex underestimated the amplitude: by 32% for the 2nd harmonic and 66% for the 3rd harmonic. The underestimation of harmonics by PZFlex was more significant as the fundamental frequency increased. Furthermore, non-physical oscillations occurred in the axial profile of harmonics in the PZFlex results when the amplitudes were relatively low. These results suggest that careful benchmarking of nonlinear simulations is important.

  10. Simulation of nonlinear propagation of biomedical ultrasound using PZFlex and the KZK Texas code

    SciTech Connect

    Qiao, Shan; Jackson, Edward; Coussios, Constantin-C; Cleveland, Robin

    2015-10-28

    In biomedical ultrasound, nonlinear acoustics can be important in both diagnostic and therapeutic applications, and robust simulation tools are needed in the design process but also for day-to-day use such as treatment planning. For most biomedical applications the ultrasound sources generate focused sound beams of finite amplitude. The KZK equation is a common model as it accounts for nonlinearity, absorption and paraxial diffraction, and there are a number of solvers available, primarily developed by research groups. We compare the predictions of the KZK Texas code (a finite-difference time-domain algorithm) to an FEM-based commercial software package, PZFlex. PZFlex solves the continuity equation and momentum conservation equation with a correction for nonlinearity in the equation of state, incorporated using an incrementally linear, 2nd-order-accurate, explicit algorithm in the time domain. Nonlinear ultrasound beams from two transducers, driven at 1 MHz and 3.3 MHz respectively, were simulated by both the KZK Texas code and PZFlex, and the pressure field was also measured by a fibre-optic hydrophone to validate the models. Further simulations were carried out over a wide range of frequencies. The comparisons showed good agreement for the fundamental frequency for PZFlex, the KZK Texas code and the experiments. For the harmonic components, the KZK Texas code was in good agreement with measurements but PZFlex underestimated the amplitude: by 32% for the 2nd harmonic and 66% for the 3rd harmonic. The underestimation of harmonics by PZFlex was more significant as the fundamental frequency increased. Furthermore, non-physical oscillations occurred in the axial profile of harmonics in the PZFlex results when the amplitudes were relatively low. These results suggest that careful benchmarking of nonlinear simulations is important.

  11. Graphics simulation and training aids for advanced teleoperation

    NASA Technical Reports Server (NTRS)

    Kim, Won S.; Schenker, Paul S.; Bejczy, Antal K.

    1993-01-01

    Graphics displays can be of significant aid in accomplishing a teleoperation task throughout all three phases of off-line task analysis and planning, operator training, and online operation. In the first phase, graphics displays provide substantial aid to investigate work cell layout, motion planning with collision detection and with possible redundancy resolution, and planning for camera views. In the second phase, graphics displays can serve as very useful tools for introductory training of operators before training them on actual hardware. In the third phase, graphics displays can be used for previewing planned motions and monitoring actual motions in any desired viewing angle, or, when communication time delay prevails, for providing predictive graphics overlay on the actual camera view of the remote site to show the non-time-delayed consequences of commanded motions in real time. This paper addresses potential space applications of graphics displays in all three operational phases of advanced teleoperation. Possible applications are illustrated with techniques developed and demonstrated in the Advanced Teleoperation Laboratory at JPL. The examples described include task analysis and planning of a simulated Solar Maximum Satellite Repair task, a novel force-reflecting teleoperation simulator for operator training, and preview and predictive displays for on-line operations.

  12. Parallel code NSBC: Simulations of relativistic nuclei scattering by a bent crystal

    NASA Astrophysics Data System (ADS)

    Babaev, A. A.

    2014-01-01

    The presented program was designed to simulate the passage of relativistic nuclei through a bent crystal. The input data describe a beam of nuclei. The nuclei move through the crystal under planar channeling and quasichanneling conditions. The program implements a numerical algorithm to evaluate the trajectory of a nucleus in the bent crystal. The program output comprises the projectile motion data, including the angular distribution of nuclei behind the crystal. The program could be useful for simulating particle tracking at accelerator facilities that use crystal collimation systems. The code is written in C++ and designed for multiprocessor systems (clusters).

  13. Simulation of Feynman-alpha measurements from SILENE reactor using a discrete ordinates code

    SciTech Connect

    Humbert, P.; Mechitoua, B.; Verrey, B.

    2006-07-01

    In this paper we present the simulation of Feynman-alpha measurements from the SILENE reactor using the discrete ordinates code PANDA. A 2-D cylindrical model of the SILENE reactor is designed for computer simulations. Two methods are implemented for the variance-to-mean calculation. In the first method we use the Feynman point reactor formula, where the parameters (Diven factor, reactivity, detector efficiency and alpha eigenvalue) are obtained by 2-D PANDA calculations. In the second method the time-dependent adjoint equations for the first two moments are solved. The calculated results are compared to the measurements. Both methods are in excellent agreement with the experimental data. (authors)
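
    The variance-to-mean statistic behind both methods is simple to state: the Feynman-Y excess, Y = Var(C)/<C> - 1 over repeated counting gates, vanishes for uncorrelated (Poisson) counts and is positive when fission chains correlate detections within a gate. A small stdlib-Python sketch of the statistic itself (illustrative, not the PANDA implementation):

```python
import math
import random
from statistics import mean, pvariance

def feynman_y(counts):
    """Feynman-alpha excess variance: Y = Var(C)/<C> - 1.
    Zero for Poisson counts; positive for correlated detections."""
    m = mean(counts)
    return pvariance(counts) / m - 1.0

def poisson(lam):
    """Poisson sampler (Knuth's algorithm), stdlib-only."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

# For a purely Poisson (uncorrelated) source, Y should be near zero.
random.seed(1)
counts = [poisson(10.0) for _ in range(20000)]
y = feynman_y(counts)
```

    A real measurement repeats this for a range of gate widths T and fits the growth of Y(T) to extract the prompt-neutron decay constant alpha.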

  14. Benchmarking the codes VORPAL, OSIRIS, and QuickPIC with Laser Wakefield Acceleration Simulations

    SciTech Connect

    Paul, K.; Bruhwiler, D. L.; Cowan, B.; Cary, J. R.; Huang, C.; Mori, W. B.; Tsung, F. S.; Cormier-Michel, E.; Geddes, C. G. R.; Esarey, E.; Fonseca, R. A.; Martins, S. F.; Silva, L. O.

    2009-01-22

    Three-dimensional laser wakefield acceleration (LWFA) simulations have recently been performed to benchmark the commonly used particle-in-cell (PIC) codes VORPAL, OSIRIS, and QuickPIC. The simulations were run in parallel on over 100 processors, using parameters relevant to LWFA with ultra-short Ti-Sapphire laser pulses propagating in hydrogen gas. Both first-order and second-order particle shapes were employed. We present the results of this benchmarking exercise, and show that accelerating gradients from full PIC agree for all values of a_0 and that full and reduced PIC agree well for values of a_0 approaching 4.

  15. Simulations of steady-state scenarios for Tore Supra using the CRONOS code

    NASA Astrophysics Data System (ADS)

    Basiuk, V.; Artaud, J. F.; Imbeaux, F.; Litaudon, X.; Bécoulet, A.; Eriksson, L.-G.; Hoang, G. T.; Huysmans, G.; Mazon, D.; Moreau, D.; Peysson, Y.

    2003-09-01

    Scenarios of steady-state, fully non-inductive current in Tore Supra are predicted using a package of simulation codes (CRONOS). The plasma equilibrium and transport are consistently calculated with the deposition of power. The achievement of high injected energy discharges up to 1 GJ is shown. Two main scenarios are considered: a low density regime with 90% non-inductive current driven by lower hybrid waves—lower hybrid current drive (LHCD)—and a high density regime combining LHCD and ion cyclotron resonance heating with a bootstrap current fraction up to 25%. The predictive simulations of existing discharges are also reported.

  16. BIOTC: An open-source CFD code for simulating biomass fast pyrolysis

    NASA Astrophysics Data System (ADS)

    Xiong, Qingang; Aramideh, Soroush; Passalacqua, Alberto; Kong, Song-Charng

    2014-06-01

    The BIOTC code is a computer program that combines a multi-fluid model for multiphase hydrodynamics with global chemical kinetics to simulate fast pyrolysis of biomass at reactor scale. The object-oriented design of BIOTC makes it easy for researchers to insert their own sub-models, while the user-friendly interface provides users with an environment comparable to that of commercial software. A laboratory-scale bubbling fluidized bed reactor for biomass fast pyrolysis was simulated using BIOTC to demonstrate its capability.

  17. A Virtual Engineering Framework for Simulating Advanced Power System

    SciTech Connect

    Mike Bockelie; Dave Swensen; Martin Denison; Stanislav Borodai

    2008-06-18

    This report describes the work performed to provide NETL with VE-Suite based Virtual Engineering software and enhanced equipment models to support NETL's Advanced Process Engineering Co-simulation (APECS) framework for advanced power generation systems. Enhancements to the software framework established an important link between APECS and the virtual engineering capabilities provided by VE-Suite (e.g., equipment and process visualization, information assimilation). Model enhancements focused on improving predictions of the performance of entrained flow coal gasifiers and important auxiliary equipment (e.g., Air Separation Units) used in coal gasification systems. In addition, a Reduced Order Model generation tool and software coupling APECS/AspenPlus to the GE GateCycle simulation system were developed. CAPE-Open model interfaces were employed where needed. The improved simulation capability is demonstrated on selected test problems. As part of the project, an Advisory Panel was formed to provide guidance on the issues on which to focus the work effort. The Advisory Panel included experts from industry and academia in gasification, CO2 capture, and process simulation, as well as representatives from technology developers and the electric utility industry. To optimize the benefit to NETL, REI coordinated its efforts with NETL and with NETL-funded projects at Iowa State University, Carnegie Mellon University and ANSYS/Fluent, Inc. The improved simulation capabilities incorporated into APECS will enable researchers and engineers to better understand the interactions of different equipment components, identify weaknesses and processes needing improvement, and thereby allow more efficient, less expensive plants to be developed and brought on-line faster and more cost-effectively. These enhancements to APECS represent an important step toward a fully integrated environment for performing plant simulation and engineering

  18. Mechanisms of Core-Collapse Supernovae & Simulation Results from the CHIMERA Code

    NASA Astrophysics Data System (ADS)

    Bruenn, S. W.; Mezzacappa, A.; Hix, W. R.; Blondin, J. M.; Marronetti, P.; Messer, O. E. B.; Dirk, C. J.; Yoshida, S.

    2009-05-01

    Unraveling the mechanism for core-collapse supernova explosions is an outstanding computational challenge, and the problem remains essentially unsolved despite more than four decades of effort. However, much progress in realistic modeling has occurred recently through the availability of multi-teraflop machines and the increasing sophistication of supernova codes. These improvements have led to some key insights which may clarify the picture in the not too distant future. Here we briefly review the current status of the three explosion mechanisms (acoustic, MHD, and neutrino heating) that are currently under active investigation, concentrating on the neutrino heating mechanism as the one most likely responsible for producing explosions from progenitors in the mass range ~10 to ~25 Msolar. We then briefly describe the CHIMERA code, a supernova code we have developed to simulate core-collapse supernovae in 1, 2, and 3 spatial dimensions. We then present results from an ongoing suite of 2D simulations initiated from 12, 15, 20, and 25 Msolar progenitors. These have all exhibited explosions and are currently in the expanding phase, with the shock at between 5,000 and 10,000 km. Finally, we very briefly describe an ongoing simulation in 3 spatial dimensions initiated from the 15 Msolar progenitor.

  19. Generator of neutrino-nucleon interactions for the FLUKA based simulation code

    SciTech Connect

    Battistoni, G.; Sala, P. R.; Ferrari, A.; Lantz, M.; Smirnov, G. I.

    2009-11-25

    An event generator of neutrino-nucleon and neutrino-nucleus interactions has been developed for the general purpose Monte Carlo code FLUKA. The generator includes options for simulating quasi-elastic interactions, neutrino-induced resonance production and deep inelastic scattering. Moreover, it shares the hadronization routines developed earlier in the framework of the FLUKA package for simulating hadron-nucleon interactions. The simulation of neutrino-nuclear interactions makes use of the well developed PEANUT event generator implemented in FLUKA for modeling the interactions between hadrons and nuclei. The generator has been tested in the neutrino energy range from 0 to 10 TeV and is available in the standard FLUKA distribution. Limitations related to some particular kinematical conditions are discussed. A number of upgrades are foreseen for the generator, which will optimize its applications for simulating experiments in the CNGS beam.

  20. Simulation study of HL-2A-like plasma using integrated predictive modeling code

    SciTech Connect

    Poolyarat, N.; Onjun, T.; Promping, J.

    2009-11-15

    Self-consistent simulations of HL-2A-like plasma are carried out using the 1.5D BALDUR integrated predictive modeling code. In these simulations, the core transport is predicted using the combination of the Multi-mode (MMM95) anomalous core transport model and the NCLASS neoclassical transport model. The evolution of plasma current, temperature and density is carried out. Consequently, the plasma current, temperature and density profiles, as well as other plasma parameters, are obtained as the predictions in each simulation. It is found that the temperature and density profiles in these simulations are peaked near the plasma center. In addition, the sawtooth period is studied using the Porcelli model, and it is found that before, during, and after the electron cyclotron resonance heating (ECRH) operation the sawtooth period is approximately the same. It is also observed that the mixing radius of sawtooth crashes is reduced during the ECRH operation.

  1. YT: A Multi-Code Analysis Toolkit for Astrophysical Simulation Data

    SciTech Connect

    Turk, Matthew J.; Smith, Britton D.; Oishi, Jeffrey S.; Skory, Stephen; Skillman, Samuel W.; Abel, Tom; Norman, Michael L.; /aff San Diego, CASS

    2011-06-23

    The analysis of complex multiphysics astrophysical simulations presents a unique and rapidly growing set of challenges: reproducibility, parallelization, and vast increases in data size and complexity chief among them. In order to meet these challenges, and to open up new avenues for collaboration between users of multiple simulation platforms, we present yt (available at http://yt.enzotools.org/), an open source, community-developed astrophysical analysis and visualization toolkit. Analysis and visualization with yt are oriented around physically relevant quantities rather than quantities native to astrophysical simulation codes. While originally designed for handling Enzo's structured adaptive mesh refinement data, yt has been extended to work with several different simulation methods and codes, including Orion, RAMSES, and FLASH. We report on its methods for reading, handling, and visualizing data, including projections, multivariate volume rendering, multi-dimensional histograms, halo finding, light cone generation, and topologically connected isocontour identification. Furthermore, we discuss the underlying algorithms yt uses for processing and visualizing data, and its mechanisms for parallelization of analysis tasks.
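
    The idea of orienting analysis around physically relevant quantities rather than code-native fields can be illustrated with a toy derived-field registry. This is a hypothetical sketch of the pattern, not yt's actual API:

```python
# Toy derived-field registry: analysis code asks for a physical quantity,
# and the registry derives it from whatever native fields the simulation
# code wrote, recursing through dependencies as needed.
derived = {}

def register(name, deps, func):
    """Declare how to compute a derived field from its dependencies."""
    derived[name] = (deps, func)

def get_field(data, name):
    if name in data:                       # native field from the code
        return data[name]
    deps, func = derived[name]             # otherwise derive it
    return func(*(get_field(data, d) for d in deps))

# "velocity_magnitude" defined once, in terms of native components.
register("velocity_magnitude", ("vx", "vy", "vz"),
         lambda vx, vy, vz: [(x*x + y*y + z*z) ** 0.5
                             for x, y, z in zip(vx, vy, vz)])

cells = {"vx": [3.0, 0.0], "vy": [4.0, 0.0], "vz": [0.0, 1.0]}
vmag = get_field(cells, "velocity_magnitude")
```

    The same lookup works regardless of which simulation code produced the native fields, which is the portability property the toolkit is built around.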

  2. A PIC-MCC code for simulation of streamer propagation in air

    SciTech Connect

    Chanrion, O. Neubert, T.

    2008-07-20

    A particle code has been developed to study the distribution and acceleration of electrons in electric discharges in air. The code can follow the evolution of a discharge from the initial stage of a single free electron in a background electric field to the formation of an electron avalanche and its transition into a streamer. The code is in 2D axisymmetric coordinates, allowing quasi-3D simulations during the initial stages of streamer formation. This is important for realistic simulations of problems where space charge fields are essential, such as in streamer formation. The charged particles are followed in a Cartesian mesh, and the electric field is updated with Poisson's equation from the charged particle densities. Collisional processes between electrons and air molecules are simulated with a Monte Carlo technique, according to cross section probabilities. The code also includes photoionisation of air molecules by photons emitted by excited constituents. The paper describes the code and presents some results of streamer development at 70 km altitude in the mesosphere, where electrical discharges (sprites) are generated above severe thunderstorms, and at ~10 km, relevant for lightning and thundercloud electrification. The code is used to study acceleration of thermal seed electrons in streamers and to understand the conditions under which electrons may reach energies in the runaway regime. This is the first such study in air with a particle model including realistic spatial dependencies of the electrostatic field. It is shown that at 1 atm pressure the electric field must exceed ~7.5 times the breakdown field to observe runaway electrons in a constant electric field. This value is close to the field where the electric force on an electron equals the maximum frictional force on an electron - found at ~100 eV. It is also found that this value is reached in a negative streamer tip at 10 km altitude when the background electric field equals
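
    The Monte Carlo collision step described above, selecting collisions "according to cross section probabilities", is commonly implemented with the null-collision method: sample tentative collisions at a constant ceiling rate, then accept each as real with probability proportional to the energy-dependent cross section. A toy stdlib-Python sketch of that idea (the cross section, gas density, and time step are illustrative numbers, not values from the paper):

```python
import math
import random

N_GAS = 2.5e25                     # molecules per m^3 (about 1 atm), illustrative
E_CHARGE, M_E = 1.602e-19, 9.109e-31

def sigma_total(energy_ev):
    """Toy total cross section (m^2), decreasing with energy."""
    return 1e-19 / (1.0 + energy_ev / 10.0)

def collision_step(energy_ev, dt, nu_max):
    """True if a real collision occurs during this time step.
    nu_max must bound the true collision frequency from above."""
    if random.random() > 1.0 - math.exp(-nu_max * dt):
        return False                               # no tentative collision
    v = math.sqrt(2.0 * energy_ev * E_CHARGE / M_E)  # electron speed
    nu_real = N_GAS * sigma_total(energy_ev) * v     # true collision rate
    return random.random() < nu_real / nu_max        # else a "null" collision

random.seed(0)
steps = 100_000
hits = sum(collision_step(10.0, 1e-14, 1e13) for _ in range(steps))
rate = hits / steps   # close to nu_real * dt, about 0.02 per step here
```

    The ceiling rate lets every electron be advanced with the same cheap random test, regardless of its instantaneous energy; only accepted collisions then need the full scattering treatment.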

  3. Propulsion Simulations Using Advanced Turbulence Models with the Unstructured Grid CFD Tool, TetrUSS

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.; Frink, Neal T.; Deere, Karen A.; Pandya, Mohangna J.

    2004-01-01

    A computational investigation has been completed to assess the capability of TetrUSS for exhaust nozzle flows. Three configurations were chosen for this study: (1) an axisymmetric supersonic jet, (2) a transonic axisymmetric boattail with solid sting operated at different Reynolds and Mach numbers, and (3) an isolated non-axisymmetric nacelle with a supersonic cruise nozzle. These configurations were chosen because existing experimental data provided a means for measuring the ability of TetrUSS to simulate complex nozzle flows. The main objective of this paper is to validate the implementation of advanced two-equation turbulence models in the unstructured-grid CFD code USM3D for propulsion flow cases. USM3D is the flow solver of the TetrUSS system. Three different turbulence models, namely Menter's Shear Stress Transport (SST), basic k-epsilon, and Spalart-Allmaras (SA), are used in the present study. The results are generally in agreement with other implementations of these models in structured-grid CFD codes. Results indicate that USM3D provides accurate simulations for complex aerodynamic configurations with propulsion integration.

  4. Advanced modulation technology development for earth station demodulator applications. Coded modulation system development

    NASA Astrophysics Data System (ADS)

    Miller, Susan P.; Kappes, J. Mark; Layer, David H.; Johnson, Peter N.

    1990-04-01

    A jointly optimized coded modulation system is described which was designed, built, and tested by COMSAT Laboratories for NASA LeRC and which provides a bandwidth efficiency of 2 bits/s/Hz at an information rate of 160 Mbit/s. A high-speed rate 8/9 encoder with a Viterbi decoder and an octal PSK modem are used to achieve this. The BER performance is approximately 1 dB from the theoretically calculated value for this system at a BER of 5E-7 under nominal conditions. The system operates in burst mode for downlink applications, and tests have demonstrated very little degradation in performance with frequency and level offset. Unique word miss rate measurements were conducted which demonstrate reliable acquisition at low values of Eb/No. Codec self-tests have verified the performance of this subsystem in a stand-alone mode. The codec is capable of operation at a 200 Mbit/s information rate, as demonstrated using a codec test set which introduces noise digitally. The measured performance is within 0.2 dB of the computer-simulated predictions. A gate array implementation of the most time-critical element of the high-speed Viterbi decoder was completed. This gate-array add-compare-select chip significantly reduces the power consumption and improves the manufacturability of the decoder. This chip has general application in the implementation of high-speed Viterbi decoders.

  5. Advanced modulation technology development for earth station demodulator applications. Coded modulation system development

    NASA Technical Reports Server (NTRS)

    Miller, Susan P.; Kappes, J. Mark; Layer, David H.; Johnson, Peter N.

    1990-01-01

    A jointly optimized coded modulation system is described which was designed, built, and tested by COMSAT Laboratories for NASA LeRC and which provides a bandwidth efficiency of 2 bits/s/Hz at an information rate of 160 Mbit/s. A high-speed rate 8/9 encoder with a Viterbi decoder and an octal PSK modem are used to achieve this. The BER performance is approximately 1 dB from the theoretically calculated value for this system at a BER of 5E-7 under nominal conditions. The system operates in burst mode for downlink applications, and tests have demonstrated very little degradation in performance with frequency and level offset. Unique word miss rate measurements were conducted which demonstrate reliable acquisition at low values of Eb/No. Codec self-tests have verified the performance of this subsystem in a stand-alone mode. The codec is capable of operation at a 200 Mbit/s information rate, as demonstrated using a codec test set which introduces noise digitally. The measured performance is within 0.2 dB of the computer-simulated predictions. A gate array implementation of the most time-critical element of the high-speed Viterbi decoder was completed. This gate-array add-compare-select chip significantly reduces the power consumption and improves the manufacturability of the decoder. This chip has general application in the implementation of high-speed Viterbi decoders.

  6. Investigations and advanced concepts on gyrotron interaction modeling and simulations

    SciTech Connect

    Avramidis, K. A.

    2015-12-15

    In gyrotron theory, the interaction between the electron beam and the high frequency electromagnetic field is commonly modeled using the slow variables approach. The slow variables are quantities that vary slowly in time in comparison to the electron cyclotron frequency. They represent the electron momentum and the high frequency field of the resonant TE modes in the gyrotron cavity. For their definition, some reference frequencies need to be introduced. These include the so-called averaging frequency, used to define the slow variable corresponding to the electron momentum, and the carrier frequencies, used to define the slow variables corresponding to the field envelopes of the modes. From the mathematical point of view, the choice of the reference frequencies is, to some extent, arbitrary. However, from the numerical point of view, there are arguments that point toward specific choices, in the sense that these choices are advantageous in terms of simulation speed and accuracy. In this paper, the typical monochromatic gyrotron operation is considered, and the numerical integration of the interaction equations is performed by the trajectory approach, since it is the fastest, and therefore it is the one that is most commonly used. The influence of the choice of the reference frequencies on the interaction simulations is studied using theoretical arguments, as well as numerical simulations. From these investigations, appropriate choices for the values of the reference frequencies are identified. In addition, novel, advanced concepts for the definitions of these frequencies are addressed, and their benefits are demonstrated numerically.
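
    The slow-variables idea can be illustrated with a toy demodulation: multiplying a fast signal by exp(-i w_ref t) leaves an envelope that varies slowly only if the reference frequency is chosen close to the carrier. The following stdlib-Python sketch is illustrative only (the carrier and modulation frequencies are assumed, not taken from the paper):

```python
import cmath
import math

w_carrier = 2.0 * math.pi * 170e9   # gyrotron-range carrier, illustrative
w_mod = 2.0 * math.pi * 1e8         # slow 100 MHz amplitude modulation
dt = 1e-13
t = [k * dt for k in range(200)]
signal = [(1.0 + 0.5 * math.cos(w_mod * tk)) * cmath.exp(1j * w_carrier * tk)
          for tk in t]

def slow_variable(sig, times, w_ref):
    """Demodulate by the reference frequency to obtain the envelope."""
    return [s * cmath.exp(-1j * w_ref * tk) for s, tk in zip(sig, times)]

# With w_ref equal to the carrier, the slow variable is purely real and
# varies only on the 100 MHz scale; a poor choice of w_ref leaves a
# residual fast oscillation that the time integrator must then resolve.
env = slow_variable(signal, t, w_carrier)
```

    This is the numerical argument behind preferring particular reference frequencies: the better the choice, the larger the time step the envelope equations tolerate.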

  7. HADES code for numerical simulations of high-mach number astrophysical radiative flows

    NASA Astrophysics Data System (ADS)

    Michaut, C.; Di Menza, L.; Nguyen, H. C.; Bouquet, S. E.; Mancini, M.

    2017-03-01

    The understanding of astrophysical phenomena requires robust numerical tools in order to handle realistic scales in terms of energy, characteristic lengths and Mach number that cannot easily be reproduced by means of laboratory experiments. In this paper, we present the 2D numerical code HADES for the simulation of realistic astrophysical phenomena in various contexts, first taking into account radiative losses. The version of HADES including a multigroup modeling of radiative transfer will be presented in a forthcoming study. Validation of HADES is performed using several benchmark tests, and some realistic applications are discussed. Optically thin radiative loss is modeled by a cooling function in the conservation law of energy. Numerical methods involve the MUSCL-Hancock finite volume scheme as well as HLLC and HLLE Riemann solvers, coupled with a second-order ODE solver by means of a Strang splitting algorithm that handles source terms arising from geometrical or radiative contributions, for cartesian or axisymmetric configurations. Good agreement has been observed for all benchmark tests, in both hydrodynamic and radiative cases. Furthermore, an overview of the main astrophysical studies driven with this code is proposed. First, simulations of radiative shocks in accretion columns and of supernova remnant dynamics at large timescales, including the Vishniac instability, have improved the understanding of these phenomena. Finally, astrophysical jets are investigated and the influence of the cooling effect on the jet morphology is numerically demonstrated. It is also found that a periodic source recovers pulsating jets that mimic the structure of Herbig-Haro objects. The HADES code has revealed its robustness, especially for the wall-shock test and for the so-called implosion test, which turns out to be a severe one since the hydrodynamic variables are self-similar and become infinite at finite time. The simulations have proved the efficiency of

  8. A unified radiative magnetohydrodynamics code for lightning-like discharge simulations

    SciTech Connect

    Chen, Qiang Chen, Bin Xiong, Run; Cai, Zhaoyang; Chen, P. F.

    2014-03-15

    A two-dimensional Eulerian finite difference code is developed for solving the non-ideal magnetohydrodynamic (MHD) equations including the effects of self-consistent magnetic field, thermal conduction, resistivity, gravity, and radiation transfer, which, when combined with specified pulse current models and plasma equations of state, can be used as a unified lightning return stroke solver. The differential equations are written in covariant form in cylindrical geometry and kept in conservative form, which enables high-accuracy shock-capturing schemes to be applied to the lightning channel configuration naturally. In this code, the fifth-order weighted essentially non-oscillatory (WENO) scheme combined with the Lax-Friedrichs flux splitting method is introduced for computing the convection terms of the MHD equations. The third-order total variation diminishing Runge-Kutta integral operator is also employed to maintain consistent time-space accuracy. The numerical algorithms for non-ideal terms, e.g., artificial viscosity, resistivity, and thermal conduction, are introduced in the code via an operator splitting method. The code assumes the radiation is in local thermodynamic equilibrium with the plasma components, and the flux-limited diffusion algorithm with grey opacities is implemented for computing the radiation transfer. The transport coefficients and equation of state in this code are obtained from detailed particle population distribution calculations, which makes the numerical model self-consistent. The code is validated against the Sedov blast solutions and then used for lightning return stroke simulations with peak currents of 20 kA, 30 kA, and 40 kA. The results show that this numerical model is consistent with observations and previous numerical results. The population distribution evolution and energy conservation problems are also discussed.
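
    The third-order TVD Runge-Kutta operator mentioned above is usually the Shu-Osher strong-stability-preserving scheme. A minimal sketch applied to a scalar ODE for brevity (in the MHD code, the right-hand side would instead be the WENO-discretized flux divergence plus source terms):

```python
import math

def tvd_rk3_step(u, dt, rhs):
    """One third-order TVD (SSP) Runge-Kutta step in Shu-Osher form:
    a convex combination of forward-Euler stages, which preserves the
    stability properties of the underlying spatial discretization."""
    u1 = u + dt * rhs(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * rhs(u2))

# Usage: integrate du/dt = -u from u(0) = 1 to t = 1;
# the exact solution is exp(-1).
u, dt = 1.0, 1e-3
for _ in range(1000):
    u = tvd_rk3_step(u, dt, lambda x: -x)
```

    Because each stage is a convex combination of forward-Euler updates, any total-variation bound satisfied by a single Euler step carries over to the full third-order step, which is why the scheme pairs naturally with WENO reconstructions.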

  9. Direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL)

    SciTech Connect

    Carroll, C.C.; Owen, J.E.

    1988-05-01

    A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL) is presented which overcomes the traditional disadvantages of simulations executed on a digital computer. The incorporation of parallel processing allows the mapping of simulations into a digital computer to be done in the same inherently parallel manner as they are currently mapped onto an analog computer. The direct-execution format maximizes the efficiency of the executed code since the need for a high level language compiler is eliminated. Resolution is greatly increased over that which is available with an analog computer, without the sacrifice in execution speed normally expected with digital computer simulations. Although this report covers all aspects of the new architecture, key emphasis is placed on the processing element configuration and the microprogramming of the ACSL constructs. The execution times for all ACSL constructs are computed using a model of a processing element based on the AMD 29000 CPU and the AMD 29027 FPU. The increase in execution speed provided by parallel processing is exemplified by comparing the derived execution times of two ACSL programs with the execution times for the same programs executed on a similar sequential architecture.

  10. A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL)

    NASA Technical Reports Server (NTRS)

    Carroll, Chester C.; Owen, Jeffrey E.

    1988-01-01

    A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL) is presented which overcomes the traditional disadvantages of simulations executed on a digital computer. The incorporation of parallel processing allows the mapping of simulations into a digital computer to be done in the same inherently parallel manner as they are currently mapped onto an analog computer. The direct-execution format maximizes the efficiency of the executed code since the need for a high level language compiler is eliminated. Resolution is greatly increased over that which is available with an analog computer, without the sacrifice in execution speed normally expected with digital computer simulations. Although this report covers all aspects of the new architecture, key emphasis is placed on the processing element configuration and the microprogramming of the ACSL constructs. The execution times for all ACSL constructs are computed using a model of a processing element based on the AMD 29000 CPU and the AMD 29027 FPU. The increase in execution speed provided by parallel processing is exemplified by comparing the derived execution times of two ACSL programs with the execution times for the same programs executed on a similar sequential architecture.

  11. Advanced modeling and simulation to design and manufacture high performance and reliable advanced microelectronics and microsystems.

    SciTech Connect

    Nettleship, Ian (University of Pittsburgh, Pittsburgh, PA); Hinklin, Thomas; Holcomb, David Joseph; Tandon, Rajan; Arguello, Jose Guadalupe, Jr.; Dempsey, James Franklin; Ewsuk, Kevin Gregory; Neilsen, Michael K.; Lanagan, Michael (Pennsylvania State University, University Park, PA)

    2007-07-01

    An interdisciplinary team of scientists and engineers having broad expertise in materials processing and properties, materials characterization, and computational mechanics was assembled to develop science-based modeling/simulation technology to design and reproducibly manufacture high performance and reliable, complex microelectronics and microsystems. The team's efforts focused on defining and developing a science-based infrastructure to enable predictive compaction, sintering, stress, and thermomechanical modeling in ''real systems'', including: (1) developing techniques to determine the materials properties and constitutive behavior required for modeling; (2) developing new, improved/updated models and modeling capabilities; (3) ensuring that models are representative of the physical phenomena being simulated; and (4) assessing existing modeling capabilities to identify advances necessary to facilitate the practical application of Sandia's predictive modeling technology.

  12. Modeling Constituent Redistribution in U-Pu-Zr Metallic Fuel Using the Advanced Fuel Performance Code BISON

    SciTech Connect

    Douglas Porter; Steve Hayes; Various

    2014-06-01

    The Advanced Fuels Campaign (AFC) metallic fuels currently being tested have higher zirconium and plutonium concentrations than those tested in the past in EBR reactors. Current metal fuel performance codes have limitations and deficiencies in predicting AFC fuel performance, particularly in the modeling of constituent distribution. No fully validated code exists due to sparse data and unknown modeling parameters. Our primary objective is to develop an initial analysis tool by incorporating state-of-the-art knowledge, constitutive models and properties of AFC metal fuels into the MOOSE/BISON (1) framework in order to analyze AFC metallic fuel tests.

  13. SimER: An advanced three-dimensional environmental risk assessment code for contaminated land and radioactive waste disposal applications

    SciTech Connect

    Kwong, S.; Small, J.; Tahar, B.

    2007-07-01

    SimER (Simulations of Environmental Risks) is a powerful performance assessment code developed to undertake assessments of both contaminated land and radioactive waste disposal. The code can undertake both deterministic and probabilistic calculations, and is fully compatible with all available best practice guidance and regulatory requirements. SimER represents the first time-dependent performance assessment code capable of providing a detailed representation of system evolution that is designed specifically to address issues found across UK nuclear sites. The code adopts a flexible input language with built-in unit checking to model the whole system (i.e. near-field, geosphere and biosphere) in a single code, thus avoiding time-consuming data transfer and the often laborious interfacing between different codes. This greatly speeds up the assessment process and has major quality assurance advantages. SimER thus provides a cost-effective tool for undertaking projects involving risk assessment, from contaminated land assessments through to full post-closure safety cases and other work supporting key site endpoint decisions. A Windows version (v1.0) of the code was first released in June 2004. The code has subsequently been subject to further testing and development. In particular, Viewers have been developed to provide users with visual information to assist the development of SimER models, and output can now be produced in a format that can be used by the FieldView software to view results and produce animations from SimER calculations. More recently, a Linux version of the code has been produced to extend coverage to commonly used platforms and offer an improved operating environment for probabilistic assessments. Results from the verification of the SimER code for a sample of test cases for both contaminated land and waste disposal applications are presented. (authors)

  14. Simulation of chaotic electrokinetic transport: performance of commercial software versus custom-built direct numerical simulation codes.

    PubMed

    Karatay, Elif; Druzgalski, Clara L; Mani, Ali

    2015-05-15

    Many microfluidic and electrochemical applications involve chaotic transport phenomena that arise due to instabilities stemming from the coupling of hydrodynamics with ion transport and electrostatic forces. Recent investigations have revealed the contribution of a wide range of spatio-temporal scales in such electro-chaotic systems, similar to those observed in turbulent flows. Given that these scales can span several orders of magnitude, significant numerical resolution is needed for accurate prediction of these phenomena. The objective of this work is to assess the accuracy and efficiency of commercial software for prediction of such phenomena. We have considered the electroconvective flow induced by concentration polarization near an ion selective surface as a model problem representing chaotic electrokinetic phenomena. We present a detailed comparison of the performance of a general-purpose commercial computational fluid dynamics (CFD) and transport solver against a custom-built direct numerical simulation code that has been tailored to the specific physics of unsteady electrokinetic flows. We present detailed statistics, including velocity and ion concentration spectra over a wide range of frequencies, as well as time-averaged statistics and the computational time required for each simulation. Our results indicate that while accuracy can be guaranteed with proper mesh resolution and avoidance of numerical dissipation, commercial solvers are generally at least an order of magnitude slower than custom-built direct numerical simulation codes.
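
    The spectral comparison described above can be sketched in a few lines: a stdlib-only discrete Fourier transform that turns a probe time series into a single-sided power spectrum. The signal and sampling rate here are invented for illustration; real comparisons of this kind would use FFT libraries and segment averaging (Welch's method) rather than a plain DFT.

```python
import cmath
import math

def power_spectrum(signal, dt):
    """Single-sided power spectrum of a uniformly sampled time series
    via a plain DFT (O(N^2); fine for short probe records)."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [s - mean for s in signal]  # remove the DC component
    spec = []
    for k in range(n // 2):
        coeff = sum(centered[j] * cmath.exp(-2j * math.pi * k * j / n)
                    for j in range(n))
        spec.append((k / (n * dt), abs(coeff) ** 2 / n ** 2))
    return spec  # list of (frequency, power) pairs

# Synthetic "probe" signal: a 50 Hz oscillation sampled at 1 kHz
dt = 1e-3
sig = [math.sin(2 * math.pi * 50 * i * dt) for i in range(200)]
freqs_powers = power_spectrum(sig, dt)
peak_freq = max(freqs_powers, key=lambda fp: fp[1])[0]
```
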

  15. Simulation of positron backscattering and implantation profiles using Geant4 code

    NASA Astrophysics Data System (ADS)

    Huang, Shi-Juan; Pan, Zi-Wen; Liu, Jian-Dang; Han, Rong-Dian; Ye, Bang-Jiao

    2015-10-01

    For the proper interpretation of experimental data produced by the slow positron beam technique, the positron implantation properties are studied carefully using the latest Geant4 code. The simulated backscattering coefficients, implantation profiles, and median implantation depths for mono-energetic positrons with energies from 1 keV to 50 keV normally incident on different crystals are reported. Our simulated backscattering coefficients are in reasonable agreement with previous experimental results, and we think the accuracy may be related to how the structures of the host materials are treated in the Geant4 code. Based on the reasonable simulated backscattering coefficients, the adjustable parameters of the implantation profiles, which depend on material and implantation energy, are obtained. Most importantly, we calculate the positron backscattering coefficients and median implantation depths in amorphous polymers for the first time, and our simulations are in fairly good agreement with previous experimental results. Project supported by the National Natural Science Foundation of China (Grant Nos. 11175171 and 11105139).
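
    The "adjustable parameters of the implantation profiles" refers, in the positron literature, to Makhovian profile fits. A small sketch of the standard forms follows; the constants A and n in the mean-depth relation are typical textbook values used here only for illustration, not the fitted values from this paper.

```python
import math

def makhov_profile(z, z0, m):
    """Makhovian stopping profile P(z) commonly fitted to simulated
    positron implantation data (z0 and m are the adjustable parameters)."""
    return (m * z ** (m - 1) / z0 ** m) * math.exp(-((z / z0) ** m))

def median_depth(z0, m):
    """Depth below which half of the positrons stop: z0 * (ln 2)^(1/m)."""
    return z0 * math.log(2) ** (1.0 / m)

def mean_depth(E_keV, rho_g_cm3, A=4.0e-6, n=1.6):
    """Often-quoted empirical mean depth z_bar = (A/rho) * E^n in cm,
    with A in g/cm^2; A and n here are illustrative textbook constants."""
    return (A / rho_g_cm3) * E_keV ** n

# Example: 10 keV positrons into a material of density 2.33 g/cm^3 (Si-like)
zbar_nm = mean_depth(10.0, 2.33) * 1e7   # cm -> nm
zmed = median_depth(z0=1.0, m=2.0)       # in units of z0
```
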

  16. Advanced adaptive computational methods for Navier-Stokes simulations in rotorcraft aerodynamics

    NASA Technical Reports Server (NTRS)

    Stowers, S. T.; Bass, J. M.; Oden, J. T.

    1993-01-01

    A phase 2 research and development effort was conducted in the area of transonic, compressible, inviscid flows with the ultimate goal of numerically modeling complex flows inherent in advanced helicopter blade designs. The algorithms and methodologies developed are classified as adaptive methods: they use error estimation techniques to approximate the local numerical error and automatically refine or unrefine the mesh to deliver a specified level of accuracy. The result is a scheme that attempts to produce the best possible results with the fewest grid points, degrees of freedom, and operations. Such schemes automatically locate and resolve shocks, shear layers, and other flow details to an accuracy level specified by the user of the code. The phase 1 work involved a feasibility study of h-adaptive methods for steady viscous flows, with emphasis on accurate simulation of vortex initiation, migration, and interaction. The phase 2 effort focused on extending these algorithms and methodologies to a three-dimensional topology.
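
    The refine/unrefine cycle described above can be illustrated with a one-dimensional toy: estimate the interpolation error on each cell and split the cells whose estimate exceeds a tolerance. Real h-adaptive CFD codes use rigorous a posteriori error estimators; this sketch only shows the control flow, with an invented test function.

```python
import math

def refine_mesh(xs, f, tol):
    """One pass of h-adaptation: estimate the local error of a piecewise-
    linear interpolant on each cell (midpoint vs. true value) and split
    cells whose estimate exceeds tol."""
    new_xs = [xs[0]]
    for a, b in zip(xs, xs[1:]):
        mid = 0.5 * (a + b)
        est = abs(f(mid) - 0.5 * (f(a) + f(b)))  # interpolation error estimate
        if est > tol:
            new_xs.append(mid)  # refine: insert the midpoint node
        new_xs.append(b)
    return new_xs

# Uniform start mesh on [0, 1]; a steep "shock-like" profile near x = 0.5
f = lambda x: math.tanh(50 * (x - 0.5))
mesh = [i / 10 for i in range(11)]
for _ in range(3):
    mesh = refine_mesh(mesh, f, tol=0.05)
# The mesh is now refined only around the steep gradient at x = 0.5
```
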

  17. Validation of Simulation Codes for Future Systems: Motivations, Approach and the Role of Nuclear Data

    SciTech Connect

    G. Palmiotti; M. Salvatores; G. Aliberti

    2007-10-01

    The validation of advanced simulation tools will continue to play a very significant role in several areas of reactor system analysis. This is the case for reactor physics and neutronics, where nuclear data uncertainties still play a crucial role for many core and fuel cycle parameters. The present paper gives a summary of validation motivations, objectives and approach. A validation effort is particularly necessary in the frame of advanced (e.g. Generation-IV or GNEP) reactor and associated fuel cycle assessment and design.

  18. Advanced GF(32) nonbinary LDPC coded modulation with non-uniform 9-QAM outperforming star 8-QAM.

    PubMed

    Liu, Tao; Lin, Changyu; Djordjevic, Ivan B

    2016-06-27

    In this paper, we first describe a 9-symbol non-uniform signaling scheme based on a Huffman code, in which different symbols are transmitted with different probabilities. By using the Huffman procedure, a prefix code is designed to approach optimal performance. Then, we introduce an algorithm to determine the optimal signal constellation sets for our proposed non-uniform scheme with the criterion of maximizing the constellation figure of merit (CFM). The proposed non-uniform polarization-multiplexed 9-QAM signaling scheme has the same spectral efficiency as conventional 8-QAM. Additionally, we propose a specially designed GF(32) nonbinary quasi-cyclic LDPC code for the coded modulation system based on the 9-QAM non-uniform scheme. Further, we study the efficiency of our proposed non-uniform 9-QAM combined with nonbinary LDPC coding, and demonstrate by Monte Carlo simulation that the proposed GF(32) nonbinary LDPC coded 9-QAM scheme outperforms nonbinary LDPC coded uniform 8-QAM by at least 0.8 dB.
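
    The Huffman-based construction described above can be made concrete with dyadic probabilities: a symbol whose codeword has length L is transmitted with probability 2^-L. The code lengths below are one illustrative assignment for 9 symbols (not necessarily the one used in the paper); it reproduces the claimed property that the non-uniform 9-symbol scheme carries the same 3 bits/symbol as uniform 8-QAM.

```python
import math

# One illustrative complete prefix-code length assignment for 9 symbols.
lengths = [2, 3, 3, 3, 3, 4, 4, 4, 4]

# A symbol with codeword length L is sent with probability 2**-L.
probs = [2.0 ** -L for L in lengths]

kraft_sum = sum(probs)                               # 1 for a complete code
entropy = -sum(p * math.log2(p) for p in probs)      # bits/symbol
avg_len = sum(p * L for p, L in zip(probs, lengths)) # bits consumed/symbol
# For dyadic probabilities entropy == avg_len, here exactly 3 bits/symbol,
# matching the spectral efficiency of uniform 8-QAM.
```
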

  19. ETG turbulence simulation of tokamak edge plasmas via 3+1 gyrofluid code

    NASA Astrophysics Data System (ADS)

    Xi, P. W.; Xu, X. Q.; Dimits, A.; Umansky, M.; Joseph, I.; Kim, S. S.

    2012-10-01

    To study ETG-driven turbulence at the H-mode pedestal, which is important for the magnetic reconnection of ELM dynamics via ETG-MHD interaction, a 3+1 gyrofluid code has been developed under the BOUT++ framework. The four evolved quantities are electron density, parallel velocity, parallel pressure and perpendicular pressure; the ions are treated adiabatically. The gyro-average is performed using a Padé approximation, and a parallel Landau closure for Landau damping is implemented with a newly developed non-Fourier method. By calculating the ETG mode growth rate and real frequency for the ETG cyclone equilibrium, our code is benchmarked against gyrokinetic codes. We also calculated the electron heat transport level in the turbulence saturation phase for both the cyclone case and the H-mode pedestal. Because the pedestal width is typically ten times larger than the ETG simulation domain, the three different regions of the pedestal, i.e. the pedestal top, the peak gradient region and the pedestal bottom, are simulated separately. The dramatic differences in magnetic shear and temperature length scale among these three regions lead to different ETG linear and nonlinear behaviors.
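
    The Padé-approximated gyro-average can be checked numerically: the exact gyroaveraging factor is Gamma_0(b) = I0(b) exp(-b) with b = (k_perp rho)^2, and a low-order Padé approximant such as 1/(1+b) reproduces it to first order in b. A sketch follows; the specific approximant used in the code may differ.

```python
import math

def i0(x, terms=30):
    """Modified Bessel function I0 via its power series (small x)."""
    return sum((x / 2) ** (2 * k) / math.factorial(k) ** 2
               for k in range(terms))

def gamma0_exact(b):
    """Exact gyroaveraging factor Gamma_0(b) = I0(b) * exp(-b)."""
    return i0(b) * math.exp(-b)

def gamma0_pade(b):
    """[0/1] Pade approximant Gamma_0(b) ~ 1/(1+b): the kind of closed
    form a gyrofluid code can apply without evaluating Bessel functions."""
    return 1.0 / (1.0 + b)

# The approximant is accurate at small b = (k_perp * rho)^2 and degrades
# gracefully as b grows.
errs = [abs(gamma0_exact(b) - gamma0_pade(b)) for b in (0.01, 0.1, 0.5)]
```
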

  20. Acceleration of a Particle-in-Cell Code for Space Plasma Simulations with OpenACC

    NASA Astrophysics Data System (ADS)

    Peng, Ivy Bo; Markidis, Stefano; Vaivads, Andris; Vencels, Juris; Deca, Jan; Lapenta, Giovanni; Hart, Alistair; Laure, Erwin

    2015-04-01

    We simulate space plasmas with the Particle-in-Cell (PIC) method, which uses computational particles to mimic electrons and protons in the solar wind and in the Earth's magnetosphere. The magnetic and electric fields are computed by solving Maxwell's equations on a computational grid. Each PIC simulation step has four major phases: interpolation of fields to particles, updating the location and velocity of each particle, interpolation of particles to the grid, and solving Maxwell's equations on the grid. We use the iPIC3D code, implemented in C++ using both MPI and OpenMP, for our case study. As of November 2014, heterogeneous systems using hardware accelerators such as Graphics Processing Units (GPUs) and Many Integrated Core (MIC) coprocessors continued to grow in number among the 500 most powerful supercomputers worldwide. Scientific applications for numerical simulations need to adapt to accelerators to achieve portability and scalability on the coming exascale systems. In this work, we conduct a case study of using OpenACC to offload the computation-intensive parts, the particle mover and the interpolation of particles to the grid, of the massively parallel Particle-in-Cell simulation code iPIC3D to multi-GPU systems. We use MPI for inter-node communication, halo exchange and communicating particles. We identify the parts most suitable for GPU acceleration by profiling with the Cray Performance Analysis Tool (CrayPAT). We implemented a manual deep copy to address the challenges of porting C++ classes to the GPU, and we document the changes to the existing algorithms needed for GPU computation. We present the challenges and findings as well as our methodology for porting a Particle-in-Cell code to multi-GPU systems using OpenACC.
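
    The particle-mover phase named above is often implemented with the Boris scheme. A minimal, dependency-free sketch of one Boris velocity update follows; this is a generic textbook pusher, not the iPIC3D mover (which is implicit).

```python
def boris_push(v, E, B, qm, dt):
    """One Boris step: half electric kick, magnetic rotation, half kick.
    v, E, B are 3-vectors (tuples); qm is the charge-to-mass ratio."""
    def add(a, b): return tuple(x + y for x, y in zip(a, b))
    def scale(a, s): return tuple(x * s for x in a)
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])
    v_minus = add(v, scale(E, qm * dt / 2))   # first half E-kick
    t = scale(B, qm * dt / 2)                 # rotation vector
    s = scale(t, 2.0 / (1.0 + sum(x * x for x in t)))
    v_prime = add(v_minus, cross(v_minus, t))
    v_plus = add(v_minus, cross(v_prime, s))  # completed exact rotation
    return add(v_plus, scale(E, qm * dt / 2)) # second half E-kick

# In a pure magnetic field the rotation conserves kinetic energy exactly
v = (1.0, 0.0, 0.0)
for _ in range(1000):
    v = boris_push(v, E=(0.0, 0.0, 0.0), B=(0.0, 0.0, 1.0), qm=1.0, dt=0.1)
speed2 = sum(x * x for x in v)
```

    The energy-conserving rotation is the reason the Boris scheme remains the workhorse of explicit PIC movers, and it is also a convenient correctness check after an OpenACC port.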

  1. GAS-PASS/H : a simulation code for gas reactor plant systems.

    SciTech Connect

    Vilim, R. B.; Mertyurek, U.; Cahalan, J. E.; Nuclear Engineering Division; Texas A&M Univ.

    2004-01-01

    A simulation code for gas reactor plant systems has been developed. The code is intended for use in safety analysis and control studies for Generation-IV reactor concepts. We developed it anticipating an immediate application to direct-cycle gas reactors. By programming in flexibility as to how components can be configured, we believe the code can be adapted to indirect-cycle gas reactors relatively easily; the use of modular components and a general-purpose equation solver allows for this. Several capabilities are included for investigating issues associated with direct-cycle gas reactors, including the safety characteristics of single-shaft plants during coastdown and transition to shutdown heat removal following unprotected accidents (including depressurization), and the need for safety-grade control systems. Basic components provided include turbine, compressor, recuperator, cooler, bypass valve, leak, accumulator, containment, and flow junction. The code permits a more rapid assessment of design concepts than is achievable using RELAP, which requires detail beyond what is necessary at the design scoping stage; this increases the time to assemble an input deck and tends to make the code slower to run. The core neutronics and decay heat models of GAS-PASS/H are taken from the liquid-metal version of MINISAS. The ex-reactor component models were developed from first principles. The network-based method for assembling component models into a system uses a general nonlinear solver to find the solution of the steady-state equations; the transient time-differenced equations are solved implicitly using the same solver. A direct-cycle gas reactor is modeled and a loss-of-generator-load transient is simulated for this reactor. While normally the reactor is scrammed, the plant safety case will require analysis of this event with failure of various safety systems. Therefore, we simulated the loss of load transient with a combined failure of the
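
    The strategy of solving implicitly time-differenced equations with the same general nonlinear solver used for the steady state can be illustrated on a scalar stiff problem: backward Euler with a Newton iteration at each step. This is a toy sketch of the solution strategy, not GAS-PASS/H code.

```python
def newton(g, dg, x0, tol=1e-12, maxit=50):
    """Basic Newton iteration, standing in for the general nonlinear
    solver that handles both steady-state and transient equations."""
    x = x0
    for _ in range(maxit):
        dx = g(x) / dg(x)
        x -= dx
        if abs(dx) < tol:
            break
    return x

def implicit_euler_step(f, df, y, dt):
    """Backward Euler: solve y_new - y - dt*f(y_new) = 0 with Newton."""
    g = lambda x: x - y - dt * f(x)
    dg = lambda x: 1.0 - dt * df(x)
    return newton(g, dg, y)

# Stiff test problem y' = -50*y with dt = 0.1, far beyond the explicit
# stability limit (dt < 2/50): the implicit step decays stably.
f = lambda y: -50.0 * y
df = lambda y: -50.0
y = 1.0
for _ in range(10):
    y = implicit_euler_step(f, df, y, dt=0.1)
```

    An explicit step with the same dt would grow as (-4)^k and blow up; the implicit step damps it, which is why system codes favor implicit time differencing for stiff plant transients.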

  2. Advanced Simulation Capability for Environmental Management (ASCEM): Early Site Demonstration

    SciTech Connect

    Meza, Juan; Hubbard, Susan; Freshley, Mark D.; Gorton, Ian; Moulton, David; Denham, Miles E.

    2011-03-07

    The U.S. Department of Energy Office of Environmental Management, Technology Innovation and Development (EM-32), is supporting development of the Advanced Simulation Capability for Environmental Management (ASCEM). ASCEM is a state-of-the-art scientific tool and approach for understanding and predicting contaminant fate and transport in natural and engineered systems. The modular and open source high performance computing tool will facilitate integrated approaches to modeling and site characterization that enable robust and standardized assessments of performance and risk for EM cleanup and closure activities. As part of the initial development process, a series of demonstrations were defined to test ASCEM components and provide feedback to developers, engage end users in applications, and lead to an outcome that would benefit the sites. The demonstration was implemented for a sub-region of the Savannah River Site General Separations Area that includes the F-Area Seepage Basins. The physical domain included the unsaturated and saturated zones in the vicinity of the seepage basins and Fourmile Branch, using an unstructured mesh fit to the hydrostratigraphy and topography of the site. The calculations modeled variably saturated flow and the resulting flow field was used in simulations of the advection of non-reactive species and the reactive-transport of uranium. As part of the demonstrations, a new set of data management, visualization, and uncertainty quantification tools were developed to analyze simulation results and existing site data. These new tools can be used to provide summary statistics, including information on which simulation parameters were most important in the prediction of uncertainty and to visualize the relationships between model input and output.

  3. Simulation of Enhanced Geothermal Systems: A Benchmarking and Code Intercomparison Study

    SciTech Connect

    Scheibe, Timothy D.; White, Mark D.; White, Signe K.; Sivaramakrishnan, Chandrika; Purohit, Sumit; Black, Gary D.; Podgorney, Robert; Boyd, Lauren W.; Phillips, Benjamin R.

    2013-06-30

    Numerical simulation codes have become critical tools for understanding complex geologic processes, as applied to technology assessment, system design, monitoring, and operational guidance. Recently the need for quantitatively evaluating coupled Thermodynamic, Hydrologic, geoMechanical, and geoChemical (THMC) processes has grown, driven by new applications such as geologic sequestration of greenhouse gases and development of unconventional energy sources. Here we focus on Enhanced Geothermal Systems (EGS), which are man-made geothermal reservoirs created where hot rock exists but there is insufficient natural permeability and/or pore fluids to allow efficient energy extraction. In an EGS, carefully controlled subsurface fluid injection is performed to enhance the permeability of pre-existing fractures, which facilitates fluid circulation and heat transport. EGS technologies are relatively new, and pose significant simulation challenges. To become a trusted analytical tool for EGS, numerical simulation codes must be tested to demonstrate that they adequately represent the coupled THMC processes of concern. This presentation describes the approach and status of a benchmarking and code intercomparison effort currently underway, supported by the U. S. Department of Energy’s Geothermal Technologies Program. This study is being closely coordinated with a parallel international effort sponsored by the International Partnership for Geothermal Technology (IPGT). We have defined an extensive suite of benchmark problems, test cases, and challenge problems, ranging in complexity and difficulty, and a number of modeling teams are applying various simulation tools to these problems. The descriptions of the problems and modeling results are being compiled using the Velo framework, a scientific workflow and data management environment accessible through a simple web-based interface.

  4. Simulations of the Dynamics of the Coupled Energetic and Relativistic Electrons Using VERB Code

    NASA Astrophysics Data System (ADS)

    Shprits, Y.; Kellerman, A. C.; Drozdov, A.

    2015-12-01

    Modeling and understanding of the coupled ring current-radiation belt system has been a grand challenge since the beginning of the space age. In this study we show long-term simulations with the 3D VERB code modeling the radiation belts, with boundary conditions derived from observations around geosynchronous orbit. We also present 4D VERB simulations that include convective transport, radial diffusion, pitch angle scattering and local acceleration. The VERB simulations show that inward transport at lower energies is dominated by convection, while transport at higher energies is dominated by radial diffusion. We also show that at energies of hundreds of keV a number of processes operate simultaneously, including convective transport, radial diffusion, local acceleration, loss to the loss cone and loss to the magnetopause. The results of the simulation of the March 2013 storm are compared with Van Allen Probes observations.
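
    The radial diffusion component mentioned above evolves the phase-space density f according to df/dt = L^2 d/dL (D_LL L^-2 df/dL). A toy explicit finite-difference sketch follows; the grid, the D_LL ~ L^6 scaling and the boundary values are illustrative only, not the VERB discretization or coefficients.

```python
def radial_diffusion_step(f, L, D, dt):
    """One explicit step of df/dt = L^2 d/dL (D_LL / L^2 * df/dL), with
    f held fixed at both boundaries (the outer value playing the role of
    a geosynchronous boundary condition)."""
    dL = L[1] - L[0]
    new = f[:]
    for i in range(1, len(f) - 1):
        # flux (D/L^2) df/dL evaluated at the half-points i +/- 1/2
        flux_p = (0.5 * (D[i] + D[i + 1]) / (0.5 * (L[i] + L[i + 1])) ** 2
                  * (f[i + 1] - f[i]) / dL)
        flux_m = (0.5 * (D[i - 1] + D[i]) / (0.5 * (L[i - 1] + L[i])) ** 2
                  * (f[i] - f[i - 1]) / dL)
        new[i] = f[i] + dt * L[i] ** 2 * (flux_p - flux_m) / dL
    return new

n = 51
L = [2.0 + 4.0 * i / (n - 1) for i in range(n)]   # L-shell grid 2..6
D = [1e-3 * (l / 4.0) ** 6 for l in L]            # illustrative D_LL ~ L^6
f = [0.0] * n
f[-1] = 1.0                                       # outer boundary source
for _ in range(2000):
    f = radial_diffusion_step(f, L, D, 0.05)
# f now fills inward from the outer boundary, fastest where D_LL is large
```
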

  5. Modification of source contribution in PALS by simulation using Geant4 code

    NASA Astrophysics Data System (ADS)

    Ning, Xia; Cao, Xingzhong; Li, Chong; Li, Demin; Zhang, Peng; Gong, Yihao; Xia, Rui; Wang, Baoyi; Wei, Long

    2017-04-01

    The source contribution to the results of positron annihilation lifetime spectroscopy (PALS) is simulated using the Geant4 code. The PALS measurement system has a sandwich geometry: the 22Na radiation source is encapsulated by Kapton films, and the specimens are attached to the outside of the films. The probabilities of a positron being annihilated in the films or in the targets, and the effect of positrons reflected back from the specimen surface, are simulated. The probability of a positron being annihilated in the film depends on the target species and the source film thickness. The simulation results are in reasonable agreement with the available experimental data. Thus, modification of the source contribution as calculated by Geant4 is viable, and it is beneficial for the analysis of PALS results.

  6. Comparison of Geant4-DNA simulation of S-values with other Monte Carlo codes

    NASA Astrophysics Data System (ADS)

    André, T.; Morini, F.; Karamitros, M.; Delorme, R.; Le Loirec, C.; Campos, L.; Champion, C.; Groetz, J.-E.; Fromm, M.; Bordage, M.-C.; Perrot, Y.; Barberet, Ph.; Bernal, M. A.; Brown, J. M. C.; Deleuze, M. S.; Francis, Z.; Ivanchenko, V.; Mascialino, B.; Zacharatou, C.; Bardiès, M.; Incerti, S.

    2014-01-01

    Monte Carlo simulations of S-values have been carried out with the Geant4-DNA extension of the Geant4 toolkit. The S-values have been simulated for monoenergetic electrons with energies ranging from 0.1 keV up to 20 keV, in liquid water spheres (for four radii, chosen between 10 nm and 1 μm), and for electrons emitted by five isotopes of iodine (131, 132, 133, 134 and 135), in liquid water spheres of varying radius (from 15 μm up to 250 μm). The results have been compared to those obtained from other Monte Carlo codes and from other published data. The use of the Kolmogorov-Smirnov test has allowed confirming the statistical compatibility of all simulation results.
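
    For orientation, an S-value is the mean absorbed dose to a target per decay (or per emitted particle). In the simplest limiting case used to sanity-check such simulations, a monoenergetic electron that deposits all of its energy inside the water sphere gives S = E/m. The numbers below are illustrative, not taken from the compared codes.

```python
import math

def s_value_full_absorption(E_keV, radius_um, rho=1000.0):
    """S-value (Gy per emission) for an electron of energy E fully
    absorbed in a water sphere of the given radius: S = E / m, with
    m the sphere mass.  Full local absorption is an idealization valid
    only when the electron range is much smaller than the radius."""
    E_J = E_keV * 1e3 * 1.602176634e-19            # keV -> J
    r_m = radius_um * 1e-6
    mass = rho * (4.0 / 3.0) * math.pi * r_m ** 3  # kg, water density rho
    return E_J / mass

# 1 keV electron fully stopped in a 10 nm (0.01 um) radius water sphere:
# tiny target mass, hence a very large dose per emission.
S = s_value_full_absorption(1.0, 0.01)
```
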

  7. Simulation of charge breeding of rubidium using Monte Carlo charge breeding code and generalized ECRIS model

    SciTech Connect

    Zhao, L.; Cluggish, B.; Kim, J. S.; Pardo, R.; Vondrasek, R.

    2010-02-15

    A Monte Carlo charge breeding code (MCBC) is being developed by FAR-TECH, Inc. to model the capture and charge breeding of a 1+ ion beam in an electron cyclotron resonance ion source (ECRIS) device. The ECRIS plasma is simulated using the generalized ECRIS model, which has two choices of boundary setting: a free boundary condition and the Bohm condition. The charge state distribution of the extracted beam ions is calculated by solving the steady-state ion continuity equations, with the profiles of the captured ions used as source terms. MCBC simulations of the charge breeding of Rb+ showed good agreement with recent charge breeding experiments at Argonne National Laboratory (ANL). MCBC correctly predicted the peak of the highly charged ion output under the free boundary condition; under the Bohm condition it predicted a similar charge-state distribution width but a lower peak charge state. Comparisons between the simulation results and the ANL experimental measurements are presented and discussed.

  8. Simulation of the SRI International test Gun-27 using the PAGOSA code

    SciTech Connect

    Jacoby, J.J.

    1997-06-23

    SRI International conducted a set of impact tests with flat disks hitting water-filled chemical submunitions. One of these tests, called Gun-27, involved a 595 gram disk hitting the side of a submunition at 200 m/s. This test was simulated using the PAGOSA code with a materials model that was a good overall match to the data, and with a sequence of five mesh sizes. It was found that when a mesh was used which had at least five cells across the wall of the submunition, PAGOSA was able to provide reasonably satisfactory agreement with the test results, except for the partial fracture of a welded joint. One feature of the test that was reproduced very well by the simulation that used the finest mesh was the fracture of the diaphragm around its edge. Results are compared for all five simulations so that trends can be seen.

  9. TID Simulation of Advanced CMOS Devices for Space Applications

    NASA Astrophysics Data System (ADS)

    Sajid, Muhammad

    2016-07-01

    This paper focuses on Total Ionizing Dose (TID) effects caused by the accumulation of charge in the silicon dioxide, at the substrate/silicon dioxide interface, and in the Shallow Trench Isolation (STI) of scaled CMOS bulk devices, as well as in the Buried Oxide (BOX) layer of devices based on Silicon-On-Insulator (SOI) technology, for operation in the space radiation environment. The radiation-induced leakage current and the corresponding density/concentration of electrons in the leakage current path are presented for 180 nm, 130 nm and 65 nm NMOS and PMOS transistors in both bulk CMOS and SOI process technologies on board LEO and GEO satellites. On the basis of the simulation results, a TID robustness analysis for advanced deep sub-micron technologies was carried out up to 500 krad. The correlation between technology scaling and the magnitude of the leakage current at a given total dose was established using the Visual TCAD Genius program.

  10. Self-consistent simulation of plasma scenarios for ITER using a combination of 1.5D transport codes and free-boundary equilibrium codes

    NASA Astrophysics Data System (ADS)

    Parail, V.; Albanese, R.; Ambrosino, R.; Artaud, J.-F.; Besseghir, K.; Cavinato, M.; Corrigan, G.; Garcia, J.; Garzotti, L.; Gribov, Y.; Imbeaux, F.; Koechl, F.; Labate, C. V.; Lister, J.; Litaudon, X.; Loarte, A.; Maget, P.; Mattei, M.; McDonald, D.; Nardon, E.; Saibene, G.; Sartori, R.; Urban, J.

    2013-11-01

    Self-consistent transport simulation of ITER scenarios is a very important tool for the exploration of the operational space and for scenario optimization. It also provides an assessment of the compatibility of developed scenarios (which include fast transient events) with machine constraints, in particular with the poloidal field coil system, heating and current drive, fuelling and particle and energy exhaust systems. This paper discusses results of predictive modelling of all reference ITER scenarios and variants using two suites of linked transport and equilibrium codes. The first suite consisting of the 1.5D core/2D SOL code JINTRAC (Wiesen S. et al 2008 JINTRAC-JET modelling suite JET ITC-Report) and the free-boundary equilibrium evolution code CREATE-NL (Albanese R. et al 2003 ISEM 2003 (Versailles, France); Albanese R. et al 2004 Nucl. Fusion 44 999), was mainly used to simulate the inductive D-T reference Scenario-2 with fusion gain Q = 10 and its variants in H, D and He (including ITER scenarios with reduced current and toroidal field). The second suite of codes was used mainly for the modelling of hybrid and steady-state ITER scenarios. It combines the 1.5D core transport code CRONOS (Artaud J.F. et al 2010 Nucl. Fusion 50 043001) and the free-boundary equilibrium evolution code DINA-CH (Kim S.H. et al 2009 Plasma Phys. Control. Fusion 51 105007).

  11. Ideas for Advancing Code Sharing: A Different Kind of Hack Day

    NASA Astrophysics Data System (ADS)

    Teuben, P.; Allen, A.; Berriman, B.; DuPrie, K.; Hanisch, R. J.; Mink, J.; Nemiroff, R. J.; Shamir, L.; Shortridge, K.; Taylor, M. B.; Wallin, J. F.

    2014-05-01

    How do we as a community encourage the reuse of software for telescope operations, data processing, and more? How can we support making codes used in research available for others to examine? Continuing the discussion from last year's Bring out your codes! BoF session, participants separated into groups to brainstorm ideas to mitigate factors that inhibit code sharing and nurture those that encourage it. The BoF concluded with the sharing of ideas that arose from the brainstorming sessions and a brief summary by the moderator.

  12. Simulated Interactive Research Experiments as Educational Tools for Advanced Science.

    PubMed

    Tomandl, Mathias; Mieling, Thomas; Losert-Valiente Kroon, Christiane M; Hopf, Martin; Arndt, Markus

    2015-09-15

    Experimental research has become complex and thus a challenge to science education. Only very few students can typically be trained on advanced scientific equipment. It is therefore important to find new tools that allow all students to acquire laboratory skills individually and independent of where they are located. In a design-based research process we have investigated the feasibility of using a virtual laboratory as a photo-realistic and scientifically valid representation of advanced scientific infrastructure to teach modern experimental science, here, molecular quantum optics. We found a concept based on three educational principles that allows undergraduate students to become acquainted with procedures and concepts of a modern research field. We find a significant increase in student understanding using our Simulated Interactive Research Experiment (SiReX), by evaluating the learning outcomes with semi-structured interviews in a pre/post design. This suggests that this concept of an educational tool can be generalized to disseminate findings in other fields.

  13. Simulated Interactive Research Experiments as Educational Tools for Advanced Science

    PubMed Central

    Tomandl, Mathias; Mieling, Thomas; Losert-Valiente Kroon, Christiane M.; Hopf, Martin; Arndt, Markus

    2015-01-01

    Experimental research has become complex and thus a challenge to science education. Only very few students can typically be trained on advanced scientific equipment. It is therefore important to find new tools that allow all students to acquire laboratory skills individually and independent of where they are located. In a design-based research process we have investigated the feasibility of using a virtual laboratory as a photo-realistic and scientifically valid representation of advanced scientific infrastructure to teach modern experimental science, here, molecular quantum optics. We found a concept based on three educational principles that allows undergraduate students to become acquainted with procedures and concepts of a modern research field. We find a significant increase in student understanding using our Simulated Interactive Research Experiment (SiReX), by evaluating the learning outcomes with semi-structured interviews in a pre/post design. This suggests that this concept of an educational tool can be generalized to disseminate findings in other fields. PMID:26370627

  14. An evaluation of computer codes for simulating the Galileo Probe aerothermal entry environment

    NASA Technical Reports Server (NTRS)

    Menees, G. P.

    1981-01-01

    The approaches of three computer flow-field codes (HYVIS, COLTS, and RASLE), used to determine the Galileo Probe aerothermal environment and its effect on the design of the thermal protection system, are analyzed in order to resolve differences in their predicted results. All three codes account for the hypersonic, massively blown, radiating shock layers characteristic of Jupiter entry. Significant differences, however, are evident in their solution procedures: the governing conservation equations, the numerical differencing methods, the governing physics (chemical, radiation, diffusion, and turbulence models), and the basic physical data (thermodynamic, transport, chemical, and spectral properties for atomic and molecular species). Solutions are compared for two near-peak-heating entry conditions for a Galileo Probe baseline configuration, having an initial mass of 242 kg and simulating entry into the Orton nominal atmosphere. The modern numerical methodology of COLTS and RASLE appears to provide an improved capability for coupled flow-field solutions.

  15. Validating a Monotonically-Integrated Large Eddy Simulation Code for Subsonic Jet Acoustics

    NASA Technical Reports Server (NTRS)

    Ingraham, Daniel; Bridges, James

    2017-01-01

    The results of subsonic jet validation cases for the Naval Research Lab's Jet Engine Noise REduction (JENRE) code are reported. Two set points from the Tanna matrix, set point 3 (Ma = 0.5, unheated) and set point 7 (Ma = 0.9, unheated), are attempted on three different meshes. After a brief discussion of the JENRE code and the meshes constructed for this work, the turbulent statistics for the axial velocity are presented and compared to experimental data, with favorable results. Preliminary simulations for set point 23 (Ma = 0.5, Tj/T∞ = 1.764) on one of the meshes are also described. Finally, the proposed configuration for far-field noise prediction with JENRE's Ffowcs Williams-Hawkings solver is detailed.

  16. Simulation code for the interaction of 14 MeV neutrons on cells.

    PubMed

    Nénot, M L; Alard, J P; Dionet, C; Arnold, J; Tchirkov, A; Meunier, H; Bodez, V; Rapp, M; Verrelle, P

    2002-01-01

    The structure of the survival curve of melanoma cells irradiated by 14 MeV neutrons displays unusual features at very low dose rate, where a marked increase in cell killing at 0.05 Gy is followed by a plateau in survival from 0.1 to 0.32 Gy. In parallel, a simulation code was constructed for the interaction of 14 MeV neutrons with cellular cultures. The code describes the interaction of the neutrons with the atomic nuclei of the cellular medium and of the external medium (culture flask and culture medium), and is used to compute the energy deposited in the cell volume. It was found that large energy-transfer events associated with heavy charged recoils can occur and that a large part of the energy deposition events are due to recoil protons emitted from the external medium. It is suggested that such events could partially explain the experimental results.
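
    The relative importance of recoil protons versus heavy charged recoils can be bounded with elastic-scattering kinematics: the maximum fraction of the neutron's energy transferred to a nucleus of mass number A in a head-on elastic collision is 4A/(1+A)^2. A small classical sketch, for illustration only:

```python
def max_recoil_fraction(A):
    """Maximum fraction of a neutron's kinetic energy transferred to a
    nucleus of mass number A in an elastic head-on collision:
    4A / (1 + A)^2 (non-relativistic two-body kinematics)."""
    return 4.0 * A / (1.0 + A) ** 2

# 14 MeV neutrons: a recoil proton (A = 1) can take the full energy,
# while heavy recoils such as carbon or oxygen take far less.
E_n = 14.0  # MeV
recoils = {A: E_n * max_recoil_fraction(A) for A in (1, 12, 16)}
```

    This is consistent with the abstract's finding that recoil protons dominate the energy deposition while heavy recoils contribute rarer, large-energy-transfer events.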

  17. Enhancing the ABAQUS thermomechanics code to simulate multipellet steady and transient LWR fuel rod behavior

    SciTech Connect

    R. L. Williamson

    2011-08-01

    A powerful multidimensional fuels performance analysis capability, applicable to both steady and transient fuel behavior, is developed based on enhancements to the commercially available ABAQUS general-purpose thermomechanics code. Enhanced capabilities are described, including: UO2 temperature and burnup dependent thermal properties, solid and gaseous fission product swelling, fuel densification, fission gas release, cladding thermal and irradiation creep, cladding irradiation growth, gap heat transfer, and gap/plenum gas behavior during irradiation. This new capability is demonstrated using a 2D axisymmetric analysis of the upper section of a simplified multipellet fuel rod, during both steady and transient operation. Comparisons are made between discrete and smeared-pellet simulations. Computational results demonstrate the importance of a multidimensional, multipellet, fully-coupled thermomechanical approach. Interestingly, many of the inherent deficiencies in existing fuel performance codes (e.g., 1D thermomechanics, loose thermomechanical coupling, separate steady and transient analysis, cumbersome pre- and post-processing) are, in fact, ABAQUS strengths.
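
    As a flavor of the gap heat transfer physics listed above, the simplest open-gap conduction model gives the temperature drop across the fuel-cladding gap as dT = q'' * gap / k_gas. The sketch below uses illustrative numbers and ignores the radiation and solid-contact terms that a real gap model (including the ABAQUS enhancement described) would add.

```python
import math

def gap_temperature_drop(q_lin, r_fuel, gap, k_gas):
    """Temperature drop across an open fuel-cladding gap from pure
    conduction: dT = q'' * gap / k_gas, with the heat flux q'' taken at
    the fuel surface.  Inputs in SI units; illustrative model only."""
    q_flux = q_lin / (2.0 * math.pi * r_fuel)  # W/m^2 at the fuel surface
    return q_flux * gap / k_gas

# Illustrative LWR-like numbers: 20 kW/m linear power, 4.1 mm fuel
# radius, 80 micron helium-filled gap, k_He ~ 0.3 W/(m K) at temperature.
dT = gap_temperature_drop(q_lin=20e3, r_fuel=4.1e-3, gap=80e-6, k_gas=0.3)
```

    Even this crude estimate shows why gap conductance matters: a drop of order 200 K across tens of microns, which closes as the pellet swells and the gap shrinks during irradiation.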

  18. Observations on computational methodologies for use in large-scale, gradient-based, multidisciplinary design incorporating advanced CFD codes

    NASA Technical Reports Server (NTRS)

    Newman, P. A.; Hou, G. J.-W.; Jones, H. E.; Taylor, A. C., III; Korivi, V. M.

    1992-01-01

    How a combination of various computational methodologies could reduce the enormous computational costs envisioned in using advanced CFD codes in gradient-based, optimized multidisciplinary design (MdD) procedures is briefly outlined. The implications of these MdD requirements for advanced CFD codes differ somewhat from those imposed by single-discipline design. A means for satisfying these MdD requirements for gradient information is presented which appears to permit: (1) some leeway in the CFD solution algorithms that can be used; (2) an extension to 3-D problems; and (3) straightforward use of other computational methodologies. Many of these observations have previously been discussed as possibilities for doing parts of the problem more efficiently; the contribution here is observing how they fit together in a mutually beneficial way.
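    Gradient information of the kind discussed above is classically obtained by finite-differencing the CFD solution, which is expensive and step-size sensitive. One alternative from the same sensitivity-analysis literature, shown here purely as an illustration and not as this paper's method, is the complex-step derivative, sketched on an assumed scalar response function:

    ```python
    import cmath
    import math

    def complex_step_derivative(f, x, h=1e-30):
        """df/dx = Im(f(x + i*h)) / h: no subtractive cancellation,
        so the step can be tiny and the result machine-accurate."""
        return f(complex(x, h)).imag / h

    def forward_difference(f, x, h=1e-8):
        """Classic one-sided finite difference, limited by cancellation error."""
        return ((f(x + h) - f(x)) / h).real

    # A smooth surrogate "response function" (assumed for illustration)
    f = lambda z: cmath.exp(z) / cmath.sqrt(z)
    x0 = 1.5
    exact = math.exp(x0) * (x0 - 0.5) / x0 ** 1.5
    print(complex_step_derivative(f, x0) - exact)
    print(forward_difference(f, x0) - exact)
    ```

    The only requirement is that the solver's arithmetic be complex-safe, which is far less intrusive than deriving a full adjoint but more accurate than differencing.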

  19. Supercomputing with TOUGH2 family codes for coupled multi-physics simulations of geologic carbon sequestration

    NASA Astrophysics Data System (ADS)

    Yamamoto, H.; Nakajima, K.; Zhang, K.; Nanai, S.

    2015-12-01

    Powerful numerical codes that are capable of modeling complex coupled processes of physics and chemistry have been developed for predicting the fate of CO2 in reservoirs as well as its potential impacts on groundwater and subsurface environments. However, they are often computationally demanding for solving highly non-linear models at sufficient spatial and temporal resolution. Geological heterogeneity and uncertainties further increase the challenges in modeling work. Two-phase flow simulations in heterogeneous media usually require much longer computational times than those in homogeneous media, and uncertainties in reservoir properties may necessitate stochastic simulations with multiple realizations. Recently, massively parallel supercomputers with thousands of processors have become available to scientific and engineering communities. Such supercomputers may attract attention from geoscientists and reservoir engineers for solving large, non-linear models at higher resolution within a reasonable time. To make them a useful tool, however, it is essential to tackle several practical obstacles to utilizing large numbers of processors effectively in general-purpose reservoir simulators. We have implemented massively parallel versions of two TOUGH2 family codes (the multi-phase flow simulator TOUGH2 and the chemically reactive transport simulator TOUGHREACT) on two different types (vector- and scalar-type) of supercomputers with a thousand to tens of thousands of processors. After completing implementation and extensive tune-up on the supercomputers, the computational performance was measured for three simulations with multi-million grid models, including a simulation of the dissolution-diffusion-convection process, which requires high spatial and temporal resolution to capture the growth of small convective fingers of CO2-dissolved water into larger ones at reservoir scale. The performance measurement confirmed that both simulators exhibit excellent

  20. Simulation of Weld Mechanical Behavior to Include Welding-Induced Residual Stress and Distortion: Coupling of SYSWELD and Abaqus Codes

    DTIC Science & Technology

    2015-11-01

    Memorandum Simulation of Weld Mechanical Behavior to Include Welding-Induced Residual Stress and Distortion: Coupling of SYSWELD and Abaqus Codes ... Weld Mechanical Behavior to Include Welding-Induced Residual Stress and Distortion: Coupling of SYSWELD and Abaqus Codes by Charles R. Fisher ... Welding-Induced Residual Stress and Distortion: Coupling of SYSWELD and Abaqus Codes 5a. CONTRACT NUMBER N/A 5b. GRANT NUMBER N/A 5c

  1. An overview of the ENEA activities in the field of coupled codes NPP simulation

    SciTech Connect

    Parisi, C.; Negrenti, E.; Sepielli, M.; Del Nevo, A.

    2012-07-01

    In the framework of the nuclear research activities in the fields of safety, training and education, ENEA (the Italian National Agency for New Technologies, Energy and the Sustainable Development) is in charge of defining and pursuing all the necessary steps for the development of a NPP engineering simulator at the 'Casaccia' Research Center near Rome. A summary of the activities in the field of the nuclear power plants simulation by coupled codes is here presented with the long term strategy for the engineering simulator development. Specifically, results from the participation in international benchmarking activities like the OECD/NEA 'Kalinin-3' benchmark and the 'AER-DYN-002' benchmark, together with simulations of relevant events like the Fukushima accident, are here reported. The ultimate goal of such activities performed using state-of-the-art technology is the re-establishment of top level competencies in the NPP simulation field in order to facilitate the development of Enhanced Engineering Simulators and to upgrade competencies for supporting national energy strategy decisions, the nuclear national safety authority, and the R and D activities on NPP designs. (authors)

  2. A Mode Propagation Database Suitable for Code Validation Utilizing the NASA Glenn Advanced Noise Control Fan and Artificial Sources

    NASA Technical Reports Server (NTRS)

    Sutliff, Daniel L.

    2014-01-01

    The NASA Glenn Research Center's Advanced Noise Control Fan (ANCF) was developed in the early 1990s to provide a convenient test bed to measure and understand fan-generated acoustics, duct propagation, and radiation to the farfield. A series of tests were performed primarily for the use of code validation and tool validation. Rotating Rake mode measurements were acquired for parametric sets of: (1) mode blockage, (2) liner insertion loss, (3) short ducts, and (4) mode reflection.

  3. A Mode Propagation Database Suitable for Code Validation Utilizing the NASA Glenn Advanced Noise Control Fan and Artificial Sources

    NASA Technical Reports Server (NTRS)

    Sutliff, Daniel L.

    2014-01-01

    The NASA Glenn Research Center's Advanced Noise Control Fan (ANCF) was developed in the early 1990s to provide a convenient test bed to measure and understand fan-generated acoustics, duct propagation, and radiation to the farfield. A series of tests were performed primarily for the use of code validation and tool validation. Rotating Rake mode measurements were acquired for parametric sets of: (i) mode blockage, (ii) liner insertion loss, (iii) short ducts, and (iv) mode reflection.

  4. Improved NASA-ANOPP Noise Prediction Computer Code for Advanced Subsonic Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Kontos, K. B.; Janardan, B. A.; Gliebe, P. R.

    1996-01-01

    Recent experience using ANOPP to predict turbofan engine flyover noise suggests that it over-predicts overall EPNL by a significant amount. An improvement in this prediction method is desired for system optimization and assessment studies of advanced UHB engines. An assessment of the ANOPP fan inlet, fan exhaust, jet, combustor, and turbine noise prediction methods is made using static engine component noise data from the CF6-80C2, E(3), and QCSEE turbofan engines. It is shown that the ANOPP prediction results are generally higher than the measured GE data, and that the inlet noise prediction method (Heidmann method) is the most significant source of this overprediction. Fan noise spectral comparisons show that improvements to the fan tone, broadband, and combination tone noise models are required to yield results that more closely match the GE data. Suggested changes that yield improved fan noise predictions but preserve the Heidmann model structure are identified and described. These changes are based on the sets of engine data mentioned, as well as some CFM56 engine data that were used to expand the combination tone noise database. It should be noted that the recommended changes are based on an analysis of engines limited to single-stage fans with design tip relative Mach numbers greater than one.

  5. VINE-A NUMERICAL CODE FOR SIMULATING ASTROPHYSICAL SYSTEMS USING PARTICLES. II. IMPLEMENTATION AND PERFORMANCE CHARACTERISTICS

    SciTech Connect

    Nelson, Andrew F.; Wetzstein, M.; Naab, T.

    2009-10-01

    We continue our presentation of VINE. In this paper, we begin with a description of relevant architectural properties of the serial and shared memory parallel computers on which VINE is intended to run, and describe their influences on the design of the code itself. We continue with a detailed description of a number of optimizations made to the layout of the particle data in memory and to our implementation of a binary tree used to access that data for use in gravitational force calculations and searches for smoothed particle hydrodynamics (SPH) neighbor particles. We describe the modifications to the code necessary to obtain forces efficiently from special purpose 'GRAPE' hardware, the interfaces required to allow transparent substitution of those forces in the code instead of those obtained from the tree, and the modifications necessary to use both tree and GRAPE together as a fused GRAPE/tree combination. We conclude with an extensive series of performance tests, which demonstrate that the code can be run efficiently and without modification in serial on small workstations or in parallel using the OpenMP compiler directives on large-scale, shared memory parallel machines. We analyze the effects of the code optimizations and estimate that they improve its overall performance by more than an order of magnitude over that obtained by many other tree codes. Scaled parallel performance of the gravity and SPH calculations, together the most costly components of most simulations, is nearly linear up to at least 120 processors on moderate sized test problems using the Origin 3000 architecture, and to the maximum machine sizes available to us on several other architectures. At similar accuracy, performance of VINE, used in GRAPE-tree mode, is approximately a factor of 2 slower than that of VINE, used in host-only mode. Further optimizations of the GRAPE/host communications could improve the speed by as much as a factor of 3, but have not yet been implemented in VINE.
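    The tree-based neighbor search described above can be illustrated with a minimal, stand-alone sketch. The code below (Python; a k-d tree rather than VINE's actual binary-tree layout) performs a fixed-radius SPH-style neighbor search and is checked against brute force:

    ```python
    import math
    import random

    def build_kdtree(points, depth=0):
        """Recursively build a k-d tree as nested tuples (point, left, right, axis)."""
        if not points:
            return None
        axis = depth % len(points[0])
        points = sorted(points, key=lambda p: p[axis])
        mid = len(points) // 2
        return (points[mid],
                build_kdtree(points[:mid], depth + 1),
                build_kdtree(points[mid + 1:], depth + 1),
                axis)

    def radius_search(node, center, radius, found):
        """Collect all points within `radius` of `center`, pruning half-spaces."""
        if node is None:
            return
        point, left, right, axis = node
        if math.dist(point, center) <= radius:
            found.append(point)
        diff = center[axis] - point[axis]
        # Always descend the near side; descend the far side only if the
        # splitting plane is closer to the center than the search radius.
        near, far = (right, left) if diff > 0 else (left, right)
        radius_search(near, center, radius, found)
        if abs(diff) <= radius:
            radius_search(far, center, radius, found)
    ```

    The pruning step is what turns the O(N) scan per particle into roughly O(log N), the same economy the paper's tree buys for both gravity and SPH.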

  6. Vine—A Numerical Code for Simulating Astrophysical Systems Using Particles. II. Implementation and Performance Characteristics

    NASA Astrophysics Data System (ADS)

    Nelson, Andrew F.; Wetzstein, M.; Naab, T.

    2009-10-01

    We continue our presentation of VINE. In this paper, we begin with a description of relevant architectural properties of the serial and shared memory parallel computers on which VINE is intended to run, and describe their influences on the design of the code itself. We continue with a detailed description of a number of optimizations made to the layout of the particle data in memory and to our implementation of a binary tree used to access that data for use in gravitational force calculations and searches for smoothed particle hydrodynamics (SPH) neighbor particles. We describe the modifications to the code necessary to obtain forces efficiently from special purpose "GRAPE" hardware, the interfaces required to allow transparent substitution of those forces in the code instead of those obtained from the tree, and the modifications necessary to use both tree and GRAPE together as a fused GRAPE/tree combination. We conclude with an extensive series of performance tests, which demonstrate that the code can be run efficiently and without modification in serial on small workstations or in parallel using the OpenMP compiler directives on large-scale, shared memory parallel machines. We analyze the effects of the code optimizations and estimate that they improve its overall performance by more than an order of magnitude over that obtained by many other tree codes. Scaled parallel performance of the gravity and SPH calculations, together the most costly components of most simulations, is nearly linear up to at least 120 processors on moderate sized test problems using the Origin 3000 architecture, and to the maximum machine sizes available to us on several other architectures. At similar accuracy, performance of VINE, used in GRAPE-tree mode, is approximately a factor of 2 slower than that of VINE, used in host-only mode. Further optimizations of the GRAPE/host communications could improve the speed by as much as a factor of 3, but have not yet been implemented in VINE.

  7. Advancing Underwater Acoustic Communication for Autonomous Distributed Networks via Sparse Channel Sensing, Coding, and Navigation Support

    DTIC Science & Technology

    2011-09-30

    channel interference mitigation for underwater acoustic MIMO-OFDM. 3) Turbo equalization for OFDM modulated physical layer network coding. 4) Blind CFO... Localization and tracking of underwater physical systems. 7) NAMS: A networked acoustic modem system for underwater applications. 8) OFDM receiver design in... 3) Turbo Equalization for OFDM Modulated Physical Layer Network Coding. We have investigated a practical orthogonal frequency division multiplexing

  8. Simulation of image formation in x-ray coded aperture microscopy with polycapillary optics.

    PubMed

    Korecki, P; Roszczynialski, T P; Sowa, K M

    2015-04-06

    In x-ray coded aperture microscopy with polycapillary optics (XCAMPO), the microstructure of focusing polycapillary optics is used as a coded aperture and enables depth-resolved x-ray imaging at a resolution better than the focal spot dimensions. Improvements in the resolution and the development of 3D encoding procedures require a simulation model that can predict the outcome of XCAMPO experiments. In this work we introduce a model of image formation in XCAMPO which enables calculation of XCAMPO datasets for arbitrary positions of the object relative to the focal plane and can incorporate optics imperfections. In the model, the exit surface of the optics is treated as a micro-structured x-ray source that illuminates a periodic object. This makes it possible to express the intensity of XCAMPO images as a convolution series and to perform simulations by means of fast Fourier transforms. For non-periodic objects, the model can be applied by enforcing artificial periodicity and setting the spatial period larger than the field of view. Simulations are verified by comparison with experimental data.
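    The FFT evaluation of a convolution series rests on the convolution theorem for periodic signals, which is also why the authors must enforce artificial periodicity. A self-contained sketch (Python, with a hand-rolled radix-2 FFT and arbitrary test arrays, not XCAMPO data):

    ```python
    import cmath
    import random

    def fft(x, inverse=False):
        """Radix-2 Cooley-Tukey FFT; len(x) must be a power of two.
        The inverse transform is left unnormalized (divide by n once at the end)."""
        n = len(x)
        if n == 1:
            return list(x)
        sign = 1 if inverse else -1
        even = fft(x[0::2], inverse)
        odd = fft(x[1::2], inverse)
        out = [0j] * n
        for k in range(n // 2):
            w = cmath.exp(sign * 2j * cmath.pi * k / n) * odd[k]
            out[k] = even[k] + w
            out[k + n // 2] = even[k] - w
        return out

    def circular_convolve(a, b):
        """Periodic (circular) convolution via the convolution theorem:
        conv(a, b) = IFFT(FFT(a) * FFT(b))."""
        n = len(a)
        prod = [x * y for x, y in zip(fft(a), fft(b))]
        return [v.real / n for v in fft(prod, inverse=True)]

    rng = random.Random(1)
    a = [rng.random() for _ in range(8)]
    b = [rng.random() for _ in range(8)]
    print(circular_convolve(a, b))
    ```

    Each term of a convolution series becomes one per-frequency product, so the per-image cost drops from O(n^2) to O(n log n).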

  9. Assessment of a Hybrid Continuous/Discontinuous Galerkin Finite Element Code for Geothermal Reservoir Simulations

    NASA Astrophysics Data System (ADS)

    Xia, Yidong; Podgorney, Robert; Huang, Hai

    2017-03-01

    FALCON (Fracturing And Liquid CONvection) is a hybrid continuous/discontinuous Galerkin finite element geothermal reservoir simulation code based on the MOOSE (Multiphysics Object-Oriented Simulation Environment) framework being developed and used for multiphysics applications. In the present work, a suite of verification and validation (V&V) test problems for FALCON was defined to meet the design requirements and solved for problems of interest in enhanced geothermal system modeling and simulation. The intent of this test-problem suite is to provide baseline comparison data that demonstrate the performance of FALCON's solution methods. The test problems vary in complexity from a single mechanical or thermal process to coupled thermo-hydro-mechanical processes in a geological porous medium. Numerical results obtained with FALCON agree well with either available analytical solutions or experimental data, indicating verified and validated implementations of these capabilities in FALCON. Wherever possible, some form of solution verification has been attempted to identify sensitivities in the solution methods and to suggest best practices when using the FALCON code.

  10. GeNN: a code generation framework for accelerated brain simulations

    NASA Astrophysics Data System (ADS)

    Yavuz, Esin; Turner, James; Nowotny, Thomas

    2016-01-01

    Large-scale numerical simulations of detailed brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. An ongoing challenge for simulating realistic models is, however, computational speed. In this paper, we present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale neuronal networks to address this challenge. GeNN is an open source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs, through a flexible and extensible interface which does not require in-depth technical knowledge from the users. We present performance benchmarks showing that a 200-fold speedup compared to a single core of a CPU can be achieved for a network of one million conductance-based Hodgkin-Huxley neurons, but that for other models the speedup can differ. GeNN is available for Linux, Mac OS X and Windows platforms. The source code, user manual, tutorials, Wiki, in-depth example projects and all other related information can be found on the project website http://genn-team.github.io/genn/.
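    The code-generation idea itself is independent of GPUs. A toy sketch (Python; the template and names are invented for illustration and are not GeNN's API) that builds and compiles a per-neuron update function from a user-supplied model snippet:

    ```python
    UPDATE_TEMPLATE = '''\
    def step(state, dt, params):
        new_state = []
        for {vars} in state:
            {update_code}
            new_state.append(({vars}))
        return new_state
    '''

    def generate_stepper(var_names, update_code):
        """Generate Python source from the template, compile it, and return
        the resulting update function (toy analogue of GPU code generation)."""
        src = UPDATE_TEMPLATE.format(vars=", ".join(var_names),
                                     update_code=update_code)
        namespace = {}
        exec(compile(src, "<generated>", "exec"), namespace)
        return namespace["step"]

    # A leaky integrator: dv/dt = (-v + i_in) / tau, forward-Euler update
    step = generate_stepper(
        ["v", "i_in"],
        "v = v + dt * (-v + i_in) / params['tau']",
    )
    print(step([(0.0, 1.0)], 0.1, {"tau": 1.0}))
    ```

    GeNN does the analogous thing but emits CUDA kernels, so the model description stays simple while the generated code is specialized for the hardware.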

  11. Supersonic propulsion simulation by incorporating component models in the large perturbation inlet (LAPIN) computer code

    NASA Technical Reports Server (NTRS)

    Cole, Gary L.; Richard, Jacques C.

    1991-01-01

    An approach to simulating the internal flows of supersonic propulsion systems is presented. The approach is based on a fairly simple modification of the Large Perturbation Inlet (LAPIN) computer code. LAPIN uses a quasi-one dimensional, inviscid, unsteady formulation of the continuity, momentum, and energy equations. The equations are solved using a shock capturing, finite difference algorithm. The original code, developed for simulating supersonic inlets, includes engineering models of unstart/restart, bleed, bypass, and variable duct geometry, by means of source terms in the equations. The source terms also provide a mechanism for incorporating, with the inlet, propulsion system components such as compressor stages, combustors, and turbine stages. This requires each component to be distributed axially over a number of grid points. Because of the distributed nature of such components, this representation should be more accurate than a lumped parameter model. Components can be modeled by performance map(s), which in turn are used to compute the source terms. The general approach is described. Then, simulation of a compressor/fan stage is discussed to show the approach in detail.
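    The distributed-source representation described above can be sketched independently of LAPIN itself. In the toy Python sketch below (map values, spans, and names are invented for illustration, not LAPIN data), a lumped enthalpy rise looked up from a 1-D performance map is spread uniformly over the grid points a compressor stage occupies, so the lumped total is conserved:

    ```python
    def interpolate_map(map_points, x):
        """Piecewise-linear lookup in a 1-D performance map
        given as [(operating point, total effect), ...]."""
        pts = sorted(map_points)
        for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
            if x0 <= x <= x1:
                return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
        raise ValueError("operating point outside map")

    def distributed_source(n_grid, span, total):
        """Spread a lumped component effect uniformly over the grid
        cells [span[0], span[1]) that the component occupies."""
        i0, i1 = span
        per_cell = total / (i1 - i0)
        return [per_cell if i0 <= i < i1 else 0.0 for i in range(n_grid)]

    # Toy compressor map: (corrected speed, total enthalpy rise in J/kg)
    fan_map = [(0.8, 40e3), (1.0, 60e3)]
    rise = interpolate_map(fan_map, 0.9)
    source = distributed_source(50, (10, 20), rise)
    ```

    Each per-cell value would then be added to the right-hand side of the energy equation at that grid point, which is what makes the component axially distributed rather than lumped.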

  12. Nonlinear ELM simulations based on a nonideal peeling–ballooning model using the BOUT++ code

    DOE PAGES

    Xu, X. Q.; Dudson, B. D.; Snyder, P. B.; ...

    2011-09-23

    A minimum set of equations based on the peeling–ballooning (P–B) model with nonideal physics effects (diamagnetic drift, E × B drift, resistivity and anomalous electron viscosity) is found to simulate pedestal collapse when using the BOUT++ simulation code, developed in part from the original fluid edge code BOUT. Linear simulations of P–B modes find good agreement in growth rate and mode structure with ELITE calculations. The influence of the E × B drift, diamagnetic drift, resistivity, anomalous electron viscosity, ion viscosity and parallel thermal diffusivity on P–B modes is being studied; we find that (1) the diamagnetic drift and E × B drift stabilize the P–B mode in a manner consistent with theoretical expectations; (2) resistivity destabilizes the P–B mode, leading to resistive P–B mode; (3) anomalous electron and parallel ion viscosities destabilize the P–B mode, leading to a viscous P–B mode; (4) perpendicular ion viscosity and parallel thermal diffusivity stabilize the P–B mode. With addition of the anomalous electron viscosity under the assumption that the anomalous kinematic electron viscosity is comparable to the anomalous electron perpendicular thermal diffusivity, or the Prandtl number is close to unity, it is found from nonlinear simulations using a realistic high Lundquist number that the pedestal collapse is limited to the edge region and the ELM size is about 5–10% of the pedestal stored energy. Furthermore, this is consistent with many observations of large ELMs. The estimated island size is consistent with the size of fast pedestal pressure collapse. In the stable α-zones of ideal P–B modes, nonlinear simulations of viscous ballooning modes or current-diffusive ballooning mode (CDBM) for ITER H-mode scenarios are presented.

  13. Nonlinear ELM simulations based on a nonideal peeling–ballooning model using the BOUT++ code

    SciTech Connect

    Xu, X. Q.; Dudson, B. D.; Snyder, P. B.; Umansky, M. V.; Wilson, H. R.; Casper, T.

    2011-09-23

    A minimum set of equations based on the peeling–ballooning (P–B) model with nonideal physics effects (diamagnetic drift, E × B drift, resistivity and anomalous electron viscosity) is found to simulate pedestal collapse when using the BOUT++ simulation code, developed in part from the original fluid edge code BOUT. Linear simulations of P–B modes find good agreement in growth rate and mode structure with ELITE calculations. The influence of the E × B drift, diamagnetic drift, resistivity, anomalous electron viscosity, ion viscosity and parallel thermal diffusivity on P–B modes is being studied; we find that (1) the diamagnetic drift and E × B drift stabilize the P–B mode in a manner consistent with theoretical expectations; (2) resistivity destabilizes the P–B mode, leading to resistive P–B mode; (3) anomalous electron and parallel ion viscosities destabilize the P–B mode, leading to a viscous P–B mode; (4) perpendicular ion viscosity and parallel thermal diffusivity stabilize the P–B mode. With addition of the anomalous electron viscosity under the assumption that the anomalous kinematic electron viscosity is comparable to the anomalous electron perpendicular thermal diffusivity, or the Prandtl number is close to unity, it is found from nonlinear simulations using a realistic high Lundquist number that the pedestal collapse is limited to the edge region and the ELM size is about 5–10% of the pedestal stored energy. Furthermore, this is consistent with many observations of large ELMs. The estimated island size is consistent with the size of fast pedestal pressure collapse. In the stable α-zones of ideal P–B modes, nonlinear simulations of viscous ballooning modes or current-diffusive ballooning mode (CDBM) for ITER H-mode scenarios are presented.

  14. EPOCH code simulation of a non-thermal distribution driven by neutral beam injection in a high-beta plasma

    NASA Astrophysics Data System (ADS)

    Necas, A.; Tajima, T.; Nicks, S.; Magee, R.; Clary, R.; Roche, T.; Tri Alpha Energy Team

    2016-10-01

    In Tri Alpha Energy's C-2U experiment, advanced beam-driven field-reversed configuration (FRC) plasmas were sustained via tangential neutral beam injection. The dominant fast-ion population made a dramatic impact on the overall plasma performance. To explain an experimentally observed anomalous neutron signal (100x the thermonuclear level), we use the EPOCH PIC code to simulate possible beam-driven, non-destructive instabilities that transfer energy from fast ions to the plasma, causing phase-space bunching. We propose that the hydrogen beam ion population drives collective modes in the deuterium target plasma, giving rise to the instability and an increased fusion rate. The instability changes character from electrostatic in the low-beta edge to fully electromagnetic in the core, with an associated reduction in growth rates. The DD reactivity enhancement is calculated using a two-body correlation function and compared to the experimentally observed neutron yield. The high-energy tails in the distributions of the plasma deuterons and beam protons are observed via a mass-resolving Neutral Particle Analyzer (NPA) diagnostic. This observation is qualitatively consistent with EPOCH simulations of the beam-plasma instability.

  15. Simulation of gradient drift instabilities in Hall thruster plasmas with the BOUT++ code

    NASA Astrophysics Data System (ADS)

    Frias, Winston; Smolyakov, Andrei; Raitses, Yevgeny; Kaganovich, Igor; Umansky, Maxim

    2012-10-01

    Hall thruster plasmas are subject to several instabilities driven by gradients of plasma density and electron temperature, the magnetic field gradient, and the equilibrium electron flow due to the axial electric field. The effects of electron collisions and electron inertia are studied using the fluid BOUT++ code, and the results are compared with and validated against existing analytical theory. In the future, computer simulations of the nonlinear stage and the associated turbulent transport in Hall thrusters will be performed, and connections with other instabilities in Hall plasmas will be investigated.

  16. Numerical simulation of the Hall effect in magnetized accretion disks with the Pluto code

    NASA Astrophysics Data System (ADS)

    Nakhaei, Mohammad; Safaei, Ghasem; Abbassi, Shahram

    2014-01-01

    We investigate the Hall effect in a standard magnetized accretion disk which is accompanied by dissipation due to viscosity and magnetic resistivity. By considering an initial magnetic field, using the PLUTO code, we perform a numerical magnetohydrodynamic simulation in order to study the effect of Hall diffusion on the physical structure of the disk. Current density and temperature of the disk are significantly modified by Hall diffusion, but the global structure of the disk is not substantially affected. The changes in the current densities and temperature of the disk lead to a modification in the disk luminosity and radiation.

  17. Development of a dynamic simulation mode in Serpent 2 Monte Carlo code

    SciTech Connect

    Leppaenen, J.

    2013-07-01

    This paper presents a dynamic neutron transport mode, currently being implemented in the Serpent 2 Monte Carlo code for the purpose of simulating short reactivity transients with temperature feedback. The transport routine is introduced and validated by comparison to MCNP5 calculations. The method is also tested in combination with an internal temperature feedback module, which forms the inner part of a multi-physics coupling scheme in Serpent 2. The demo case for the coupled calculation is a reactivity-initiated accident (RIA) in PWR fuel. (authors)
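    The core of any dynamic Monte Carlo mode is sampling free-flight distances and keeping track of flight times. A deliberately minimal, purely absorbing 1-D sketch (Python; a toy, not Serpent's transport routine) shows the idea, with the transmitted fraction checked against the analytic exp(-τ):

    ```python
    import math
    import random

    def transmit_slab(thickness, sigma_t, speed, n_particles, rng):
        """Analog Monte Carlo through a purely absorbing slab: returns the
        transmitted fraction and each transmitted neutron's flight time
        (the time bookkeeping a dynamic mode must carry)."""
        transmitted, times = 0, []
        for _ in range(n_particles):
            x = -math.log(rng.random()) / sigma_t  # sampled free path
            if x >= thickness:
                transmitted += 1
                times.append(thickness / speed)
            # else: the neutron is absorbed at depth x
        return transmitted / n_particles, times

    rng = random.Random(7)
    frac, times = transmit_slab(2.0, 1.0, 2200.0, 100000, rng)
    print(frac, math.exp(-2.0))
    ```

    A real dynamic mode additionally tracks scattering, fission chains, and temperature feedback between time steps, but the free-path sampling and time accounting look like this at bottom.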

  18. A New Internal Energy Calculation for the HELP Code and Its Implications to Conical Shaped Charge Simulations

    DTIC Science & Technology

    1979-06-01

    TECHNICAL REPORT ARBRL-TR-02168 A New Internal Energy Calculation for the HELP Code and Its Implications to Conical Shaped Charge Simulations ... Energy Calculation for the HELP Code and Its Implications to Conical Shaped Charge Simulations 6. PERFORMING ORG. REPORT NUMBER 7. AUTHOR(S) 8. CONTRACT ... terms of the order of the truncation error in the kinetic energy calculation. A correction is given and qualitative thermal agreement is achieved, for

  19. Status report on multigroup cross section generation code development for high-fidelity deterministic neutronics simulation system.

    SciTech Connect

    Yang, W. S.; Lee, C. H.

    2008-05-16

    Under the fast reactor simulation program launched in April 2007, development of an advanced multigroup cross section generation code was initiated in July 2007, in conjunction with the development of the high-fidelity deterministic neutron transport code UNIC. The general objectives are to simplify the existing multi-step schemes and to improve the resolved and unresolved resonance treatments. Based on the review results of current methods and the fact that they have been applied successfully to fast critical experiment analyses and fast reactor designs for the last three decades, the methodologies of the ETOE-2/MC²-2/SDX code system were selected as the starting set of methodologies for multigroup cross section generation for fast reactor analysis. As the first step for coupling with the UNIC code and use in a parallel computing environment, the MC²-2 code was updated by modernizing the memory structure and replacing old data management package subroutines and functions with FORTRAN 90 based routines. Various modifications were also made in the ETOE-2 and MC²-2 codes to process the ENDF/B-VII.0 data properly. Using the updated ETOE-2/MC²-2 code system, the ENDF/B-VII.0 data was successfully processed for major heavy and intermediate nuclides employed in sodium-cooled fast reactors. Initial verification tests of the MC²-2 libraries generated from ENDF/B-VII.0 data were performed by inter-comparison of twenty-one-group infinite dilute total cross sections obtained from MC²-2, VIM, and NJOY. For almost all nuclides considered, MC²-2 cross sections agreed very well with those from VIM and NJOY. Preliminary validation tests of the ENDF/B-VII.0 libraries of MC²-2 were also performed using a set of sixteen fast critical benchmark problems. The deterministic results based on MC²-2/TWODANT calculations were in good agreement with MCNP solutions within ~0.25% Δρ, except a few small LANL fast assemblies
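    The flux-weighted collapse underlying such multigroup processing can be stated in a few lines. A sketch (Python, with assumed toy numbers rather than real ENDF/B data) that condenses fine groups into coarse groups while preserving reaction rates:

    ```python
    def collapse(fine_sigma, fine_flux, group_map):
        """Flux-weighted collapse of fine-group cross sections into coarse
        groups. group_map[g] lists the fine-group indices in coarse group g;
        each coarse value is (sum of sigma*flux) / (sum of flux)."""
        coarse = []
        for fine_indices in group_map:
            rate = sum(fine_sigma[i] * fine_flux[i] for i in fine_indices)
            flux = sum(fine_flux[i] for i in fine_indices)
            coarse.append(rate / flux)
        return coarse

    # Toy 4-group data collapsed to 2 groups
    fine_sigma = [10.0, 8.0, 4.0, 2.0]   # barns (invented)
    fine_flux = [1.0, 3.0, 2.0, 2.0]     # weighting spectrum (invented)
    coarse = collapse(fine_sigma, fine_flux, [[0, 1], [2, 3]])
    print(coarse)
    ```

    Reaction-rate preservation is the defining property: multiplying each coarse cross section by its coarse-group flux reproduces the fine-group reaction-rate sum exactly.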

  20. Three-dimensional simulations of solar granulation and blast wave using ZEUS-MP code

    NASA Astrophysics Data System (ADS)

    Nurzaman, M. Z.; Herdiwijaya, D.

    2015-09-01

    The Sun is the nearest star and the only one that can be observed in full-disk mode; other stars appear only as point sources. As a result, events on the Sun can be observed in detail: flares, prominences, granulation, and other features are seen far more easily than on other stars, and observational data are readily obtained. Computational simulation is needed for a better understanding of these data. In this paper we use ZEUS-MP, a numerical code for the simulation of fluid dynamical flows in astrophysics, to study granulation and blast waves in the Sun. ZEUS-MP allows users to run hydrodynamic (HD) or magnetohydrodynamic (MHD) simulations, singly or in concert, in one, two, or three space dimensions. For the granulation case we assume no influence from the magnetic field, so HD simulations suffice; the physical parameters analyzed are velocity and density. The results show that velocity as a function of time exhibits a more complex pattern than density. The blast-wave case is used to study one of the Sun's energetic events, namely coronal mass ejections (CMEs). Here the influence of the magnetic field cannot be ignored, so we use MHD simulations; the physical parameters analyzed are velocity and energy. The results show more complex patterns for both parameters, which appear to vary in opposition: when the energy is high, the velocity is not very fast, and conversely.

  1. Advanced Subsonic Technology (AST) Area of Interest (AOI) 6: Develop and Validate Aeroelastic Codes for Turbomachinery

    NASA Technical Reports Server (NTRS)

    Gardner, Kevin D.; Liu, Jong-Shang; Murthy, Durbha V.; Kruse, Marlin J.; James, Darrell

    1999-01-01

    AlliedSignal Engines, in cooperation with NASA GRC (National Aeronautics and Space Administration Glenn Research Center), completed an evaluation of recently-developed aeroelastic computer codes using test cases from the AlliedSignal Engines fan blisk and turbine databases. Test data included strain gage, performance, and steady-state pressure information obtained for conditions where synchronous or flutter vibratory conditions were found to occur. Aeroelastic codes evaluated included quasi 3-D UNSFLO (MIT Developed/AE Modified, Quasi 3-D Aeroelastic Computer Code), 2-D FREPS (NASA-Developed Forced Response Prediction System Aeroelastic Computer Code), and 3-D TURBO-AE (NASA/Mississippi State University Developed 3-D Aeroelastic Computer Code). Unsteady pressure predictions for the turbine test case were used to evaluate the forced response prediction capabilities of each of the three aeroelastic codes. Additionally, one of the fan flutter cases was evaluated using TURBO-AE. The UNSFLO and FREPS evaluation predictions showed good agreement with the experimental test data trends, but quantitative improvements are needed. UNSFLO over-predicted turbine blade response reductions, while FREPS under-predicted them. The inviscid TURBO-AE turbine analysis predicted no discernible blade response reduction, indicating the necessity of including viscous effects for this test case. For the TURBO-AE fan blisk test case, significant effort was expended getting the viscous version of the code to give converged steady flow solutions for the transonic flow conditions. Once converged, the steady solutions provided an excellent match with test data and the calibrated DAWES (AlliedSignal 3-D Viscous Steady Flow CFD Solver). However, efforts expended establishing quality steady-state solutions prevented exercising the unsteady portion of the TURBO-AE code during the present program. 
AlliedSignal recommends that unsteady pressure measurement data be obtained for both test cases examined.

  2. DL_POLY_3: the CCP5 national UK code for molecular-dynamics simulations.

    PubMed

    Todorov, I T; Smith, W

    2004-09-15

    DL_POLY_3 is a general-purpose molecular-dynamics simulation package embedding a highly efficient domain decomposition (DD) parallelization strategy. It was developed at Daresbury Laboratory under the auspices of the Engineering and Physical Sciences Research Council. Written to support academic research, it has a wide range of applications and will run on a wide range of computers, from single-processor workstations to multi-processor machines, with an emphasis on the efficient use of multi-processor power. A new DD adaptation of the smoothed particle mesh Ewald method for calculating long-range forces in molecular simulations, incorporating a novel three-dimensional fast Fourier transform (the Daresbury Advanced Fourier Transform), makes it possible to simulate systems of the order of one million particles and beyond. DL_POLY_3 structure, functionality, performance and availability are described in this feature paper.
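
    The domain decomposition strategy described above assigns each particle to the processor that owns the region of space containing it. A minimal sketch of that mapping (a hypothetical 2x2x2 processor grid; not DL_POLY_3's actual data structures):

```python
import numpy as np

def domain_rank(positions, box, grid=(2, 2, 2)):
    """Map particle positions to the rank owning each spatial sub-domain."""
    pos = np.asarray(positions, dtype=float)
    nx, ny, nz = grid
    # integer cell indices along each axis, clipped to stay inside the box
    ix = np.clip((pos[:, 0] / box[0] * nx).astype(int), 0, nx - 1)
    iy = np.clip((pos[:, 1] / box[1] * ny).astype(int), 0, ny - 1)
    iz = np.clip((pos[:, 2] / box[2] * nz).astype(int), 0, nz - 1)
    return (ix * ny + iy) * nz + iz  # flattened rank index

ranks = domain_rank([[0.1, 0.1, 0.1], [9.9, 9.9, 9.9]], box=(10.0, 10.0, 10.0))
```

    Each rank then computes forces only for its own particles, exchanging boundary-region data with neighboring ranks.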

  3. A Mathematical Model and MATLAB Code for Muscle-Fluid-Structure Simulations.

    PubMed

    Battista, Nicholas A; Baird, Austin J; Miller, Laura A

    2015-11-01

    This article provides models and code for numerically simulating muscle-fluid-structure interactions (FSIs). This work was presented as part of the symposium on Leading Students and Faculty to Quantitative Biology through Active Learning at the society-wide meeting of the Society for Integrative and Comparative Biology in 2015. Muscle mechanics and simple mathematical models to describe the forces generated by muscular contractions are introduced in most biomechanics and physiology courses. Often, however, the models are derived for simplifying cases such as isometric or isotonic contractions. In this article, we present a simple model of the force generated through active contraction of muscles. The muscles' forces are then used to drive the motion of flexible structures immersed in a viscous fluid. An example of an elastic band immersed in a fluid is first presented to illustrate a fully-coupled FSI in the absence of any external driving forces. In the second example, we present a valveless tube with model muscles that drive the contraction of the tube. We provide a brief overview of the numerical method used to generate these results. We also include as Supplementary Material a MATLAB code to generate these results. The code was written for flexibility so as to be easily modified to many other biological applications for educational purposes.
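
    The kind of active-contraction model the article describes can be caricatured as an activation signal scaling a length-tension curve. A toy sketch (Gaussian length-tension shape and all parameter names are illustrative assumptions, not the paper's model):

```python
import math

def muscle_force(activation, length, f_max=1.0, l_opt=1.0, width=0.25):
    """Active muscle force: activation in [0, 1] scales a Gaussian
    length-tension curve peaked at the optimal fiber length l_opt."""
    length_tension = math.exp(-((length - l_opt) / width) ** 2)
    return activation * f_max * length_tension

peak = muscle_force(1.0, 1.0)   # fully activated at optimal length
off  = muscle_force(0.0, 1.0)   # no activation -> no active force
```

    In a muscle-fluid-structure simulation, forces of this form are applied along model fibers and drive the motion of the immersed elastic structure.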

  4. X-ray FEL Simulation with the MPP version of the GINGER Code

    NASA Astrophysics Data System (ADS)

    Fawley, William

    2001-06-01

    GINGER is a polychromatic, 2D (r-z) PIC code originally developed in the 1980s to examine sideband growth in FEL amplifiers. In the last decade, GINGER simulations have examined various aspects of x-ray and XUV FELs based upon initiation by self-amplified spontaneous emission (SASE). Recently, GINGER's source code has been substantially updated to exploit many modern features of the Fortran90 language and extended to exploit multiprocessor hardware, with the result that the code now runs effectively on platforms ranging from single-processor workstations in serial mode to MPP hardware at NERSC such as the Cray T3E and IBM SP in full parallel mode. This poster discusses some of the numerical algorithms and structural details of GINGER which permitted relatively painless porting to parallel architectures. Examples of some recent SASE FEL modeling with GINGER will be given, including both existing experiments such as the LEUTL UV FEL at Argonne and proposed projects such as the LCLS x-ray FEL at SLAC.

  5. Traveling-wave-tube simulation: The IBC (Interactive Beam-Circuit) code

    SciTech Connect

    Morey, I.J.; Birdsall, C.K.

    1989-09-26

    Interactive Beam-Circuit (IBC) is a one-dimensional many particle simulation code which has been developed to run interactively on a PC or Workstation, and displaying most of the important physics of a traveling-wave-tube. The code is a substantial departure from previous efforts, since it follows all of the particles in the tube, rather than just those in one wavelength, as commonly done. This step allows for nonperiodic inputs in time, a nonuniform line and a large set of spatial diagnostics. The primary aim is to complement a microwave tube lecture course, although past experience has shown that such codes readily become research tools. Simple finite difference methods are used to model the fields of the coupled slow-wave transmission line. The coupling between the beam and the transmission line is based upon the finite difference equations of Brillouin. The space-charge effects are included, in a manner similar to that used by Hess; the original part is use of particle-in-cell techniques to model the space-charge fields. 11 refs., 11 figs.
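
    The slow-wave circuit in such a code reduces to finite-difference updates of a lossless LC ladder. A sketch of the leapfrog update for the telegrapher's equations, uncoupled from any beam (parameters arbitrary; not the IBC implementation):

```python
import numpy as np

# Leapfrog (staggered) update of a lossless transmission line:
#   dI/dt = -(1/L) dV/dx,   dV/dt = -(1/C) dI/dx
L, C, dx, dt = 1.0, 1.0, 1.0, 0.5   # stable: dt <= dx*sqrt(L*C)
n = 200
V = np.zeros(n)
I = np.zeros(n - 1)                  # currents live between voltage nodes
V[:50] = np.exp(-((np.arange(50) - 25) / 5.0) ** 2)   # initial voltage pulse

for _ in range(100):
    I += dt / (L * dx) * (V[:-1] - V[1:])
    V[1:-1] += dt / (C * dx) * (I[:-1] - I[1:])
```

    The pulse splits into two counter-propagating waves moving at 1/sqrt(LC); after 100 steps of dt = 0.5 the right-going half has traveled 50 cells. A beam-coupled code adds source terms to these updates from the particle currents.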

  6. Modelling and Simulation of National Electronic Product Code Network Demonstrator Project

    NASA Astrophysics Data System (ADS)

    Mo, John P. T.

    The National Electronic Product Code (EPC) Network Demonstrator Project (NDP) was the first large-scale consumer goods track and trace investigation in the world using the full EPC protocol system for applying RFID technology in supply chains. The NDP demonstrated the methods of sharing information securely using the EPC Network, providing authentication to interacting parties, and enhancing the ability to track and trace movement of goods within the entire supply chain involving transactions among multiple enterprises. Due to project constraints, the actual run of the NDP lasted only 3 months and could not be consolidated into quantitative results. This paper discusses the modelling and simulation of activities in the NDP in a discrete event simulation environment and provides an estimation of the potential benefits that could have been derived from the NDP had it continued for one whole year.
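
    At its core, a discrete event simulation of such a supply chain pops time-stamped events from a priority queue in time order. A minimal sketch (the read-point locations are invented for illustration):

```python
import heapq

def simulate(events):
    """Process (time, location) events in time order, as a discrete
    event simulation engine would."""
    queue = list(events)
    heapq.heapify(queue)
    log = []
    while queue:
        time, location = heapq.heappop(queue)
        log.append((time, location))   # a real model would update state here
    return log

trace = simulate([(5.0, "distributor"), (1.0, "manufacturer"), (9.0, "retailer")])
```

    A full model would, at each pop, update inventory state and schedule follow-on events (shipments, scans) back onto the queue.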

  7. Simulation of high-energy radiation belt electron fluxes using NARMAX-VERB coupled codes.

    PubMed

    Pakhotin, I P; Drozdov, A Y; Shprits, Y Y; Boynton, R J; Subbotin, D A; Balikhin, M A

    2014-10-01

    This study presents a fusion of data-driven and physics-driven methodologies of energetic electron flux forecasting in the outer radiation belt. Data-driven NARMAX (Nonlinear AutoRegressive Moving Averages with eXogenous inputs) model predictions for geosynchronous orbit fluxes have been used as an outer boundary condition to drive the physics-based Versatile Electron Radiation Belt (VERB) code, to simulate energetic electron fluxes in the outer radiation belt environment. The coupled system has been tested for three extended time periods totalling several weeks of observations. The time periods involved periods of quiet, moderate, and strong geomagnetic activity and captured a range of dynamics typical of the radiation belts. The model has successfully simulated energetic electron fluxes for various magnetospheric conditions. Physical mechanisms that may be responsible for the discrepancies between the model results and observations are discussed.

  8. Simulation of high-energy radiation belt electron fluxes using NARMAX-VERB coupled codes

    PubMed Central

    Pakhotin, I P; Drozdov, A Y; Shprits, Y Y; Boynton, R J; Subbotin, D A; Balikhin, M A

    2014-01-01

    This study presents a fusion of data-driven and physics-driven methodologies of energetic electron flux forecasting in the outer radiation belt. Data-driven NARMAX (Nonlinear AutoRegressive Moving Averages with eXogenous inputs) model predictions for geosynchronous orbit fluxes have been used as an outer boundary condition to drive the physics-based Versatile Electron Radiation Belt (VERB) code, to simulate energetic electron fluxes in the outer radiation belt environment. The coupled system has been tested for three extended time periods totalling several weeks of observations. The time periods involved periods of quiet, moderate, and strong geomagnetic activity and captured a range of dynamics typical of the radiation belts. The model has successfully simulated energetic electron fluxes for various magnetospheric conditions. Physical mechanisms that may be responsible for the discrepancies between the model results and observations are discussed. PMID:26167432

  9. Comparison of a laboratory spectrum of Eu-152 with results of simulation using the MCNP code

    NASA Astrophysics Data System (ADS)

    Ródenas, J.; Gallardo, S.; Ortiz, J.

    2007-09-01

    Detectors used for gamma spectrometry must be calibrated for each geometry considered in environmental radioactivity laboratories. This calibration is performed using a standard solution containing gamma-emitting sources. Nevertheless, the efficiency curves obtained are periodically checked using a source such as 152Eu, which emits many gamma rays covering a wide energy range (20-1500 keV). 152Eu presents a problem because it has many peaks affected by True Coincidence Summing (TCS). Two experimental measurements have been performed, placing the source (a Marinelli beaker) at 0 and 10 cm from the detector. Both spectra are simulated with the MCNP 4C code, which does not reproduce TCS. Therefore, the comparison between experimental and simulated peak net areas permits one to choose the most convenient peaks for checking the efficiency curves of the detector.
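
    The efficiency check described above rests on the full-energy-peak efficiency, eff = N_net / (A * t * I_gamma). A sketch of that arithmetic (all numbers are illustrative; the 0.286 yield is approximately the intensity of the 121.8 keV line of 152Eu):

```python
def peak_efficiency(net_counts, activity_bq, live_time_s, gamma_yield):
    """Full-energy-peak efficiency: detected peak counts divided by the
    number of photons of that energy emitted during the measurement."""
    emitted = activity_bq * live_time_s * gamma_yield
    return net_counts / emitted

eff = peak_efficiency(net_counts=12000, activity_bq=1000.0,
                      live_time_s=600.0, gamma_yield=0.286)
```

    Comparing such experimental efficiencies with MCNP-simulated ones, peak by peak, flags the lines distorted by TCS.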

  10. Assessment of PCMI Simulation Using the Multidimensional Multiphysics BISON Fuel Performance Code

    SciTech Connect

    Stephen R. Novascone; Jason D. Hales; Benjamin W. Spencer; Richard L. Williamson

    2012-09-01

    Since 2008, the Idaho National Laboratory (INL) has been developing a next-generation nuclear fuel performance code called BISON. BISON is built using INL’s Multiphysics Object-Oriented Simulation Environment, or MOOSE. MOOSE is a massively parallel, finite element-based framework to solve systems of coupled non-linear partial differential equations using the Jacobian-Free Newton-Krylov (JFNK) method. MOOSE supports the use of complex two- and three-dimensional meshes and uses implicit time integration, which is important for the widely varied time scales in nuclear fuel simulation. MOOSE’s object-oriented architecture minimizes the programming required to add new physics models. BISON has been applied to various nuclear fuel problems to assess the accuracy of its 2D and 3D capabilities. The benchmark results used in this assessment range from simulation results from other fuel performance codes to measurements from well-known and documented reactor experiments. An example of a well-documented experiment used in this assessment is the Third Risø Fission Gas Project, referred to as “Bump Test GE7”, which was performed on rod ZX115. This experiment was chosen because it allows for an evaluation of several aspects of the code, including fully coupled thermo-mechanics, contact, and several nonlinear material models. Bump Test GE7 consists of a base-irradiation period of a full-length rod in the Quad-Cities-1 BWR for nearly 7 years to a burnup of 4.17% FIMA. The base irradiation test is followed by a “bump test” of a sub-section of the original rod. The bump test takes place in the test reactor DR3 at Risø in a water-cooled HP1 rig under BWR conditions where the power level is increased by about 50% over base irradiation levels in the span of several hours. During base irradiation, the axial power profile is flat. During the bump test, the axial power profile changes so that the bottom half of the rod is at approximately 50% higher power than at the base

  11. Implementing Scientific Simulation Codes Highly Tailored for Vector Architectures Using Custom Configurable Computing Machines

    NASA Technical Reports Server (NTRS)

    Rutishauser, David

    2006-01-01

    The motivation for this work comes from an observation that amidst the push for Massively Parallel (MP) solutions to high-end computing problems such as numerical physical simulations, large amounts of legacy code exist that are highly optimized for vector supercomputers. Because re-hosting legacy code often requires a complete re-write of the original code, which can be a very long and expensive effort, this work examines the potential to exploit reconfigurable computing machines in place of a vector supercomputer to implement an essentially unmodified legacy source code. Custom and reconfigurable computing resources could be used to emulate an original application's target platform to the extent required to achieve high performance. To arrive at an architecture that delivers the desired performance subject to limited resources involves solving a multi-variable optimization problem with constraints. Prior research in the area of reconfigurable computing has demonstrated that designing an optimum hardware implementation of a given application under hardware resource constraints is an NP-complete problem. The premise of the approach is that the general issue of applying reconfigurable computing resources to the implementation of an application, maximizing the performance of the computation subject to physical resource constraints, can be made a tractable problem by assuming a computational paradigm, such as vector processing. This research contributes a formulation of the problem and a methodology to design a reconfigurable vector processing implementation of a given application that satisfies a performance metric. A generic, parametric, architectural framework for vector processing implemented in reconfigurable logic is developed as a target for a scheduling/mapping algorithm that maps an input computation to a given instance of the architecture. 
This algorithm is integrated with an optimization framework to arrive at a specification of the architecture parameters.

  12. The EPQ Code System for Simulating the Thermal Response of Plasma-Facing Components to High-Energy Electron Impact

    SciTech Connect

    Ward, Robert Cameron; Steiner, Don

    2004-06-15

    The generation of runaway electrons during a thermal plasma disruption is a concern for the safe and economical operation of a tokamak power system. Runaway electrons have high energy, 10 to 300 MeV, and may potentially cause extensive damage to plasma-facing components (PFCs) through large temperature increases, melting of metallic components, surface erosion, and possible burnout of coolant tubes. The EPQ code system was developed to simulate the thermal response of PFCs to a runaway electron impact. The EPQ code system consists of several parts: UNIX scripts that control the operation of an electron-photon Monte Carlo code to calculate the interaction of the runaway electrons with the plasma-facing materials; a finite difference code to calculate the thermal response, melting, and surface erosion of the materials; a code to process, scale, transform, and convert the electron Monte Carlo data to volumetric heating rates for use in the thermal code; and several minor and auxiliary codes for the manipulation and postprocessing of the data. The electron-photon Monte Carlo code used was Electron-Gamma-Shower (EGS), developed and maintained by the National Research Center of Canada. The Quick-Therm-Two-Dimensional-Nonlinear (QTTN) thermal code solves the two-dimensional cylindrical modified heat conduction equation using the Quickest third-order accurate and stable explicit finite difference method and is capable of tracking melting or surface erosion. The EPQ code system is validated using a series of analytical solutions and simulations of experiments. The verification of the QTTN thermal code with analytical solutions shows that the code with the Quickest method is better than 99.9% accurate. The benchmarking of the EPQ code system and QTTN versus experiments showed that QTTN's erosion tracking method is accurate within 30% and that EPQ is able to predict the occurrence of melting within the proper time constraints. 
QTTN and EPQ are verified and validated as able
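
    QTTN solves a two-dimensional cylindrical heat conduction equation with an explicit third-order scheme. A much simpler 1-D forward-time, centred-space sketch illustrates the explicit update and its stability constraint (alpha*dt/dx**2 <= 1/2); this is not the Quickest scheme itself:

```python
import numpy as np

def heat_step(T, alpha, dx, dt):
    """One explicit (FTCS) step of dT/dt = alpha * d2T/dx2 with fixed ends."""
    r = alpha * dt / dx**2
    assert r <= 0.5, "explicit scheme unstable for alpha*dt/dx^2 > 1/2"
    T_new = T.copy()
    T_new[1:-1] = T[1:-1] + r * (T[2:] - 2 * T[1:-1] + T[:-2])
    return T_new

T = np.zeros(51)
T[25] = 1.0                      # localized heat pulse
for _ in range(100):
    T = heat_step(T, alpha=1.0, dx=1.0, dt=0.4)
```

    The pulse spreads diffusively while (away from the boundaries) conserving total heat; a volumetric heating term from the electron Monte Carlo data would simply add to the right-hand side.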

  13. Development of Momentum Conserving Monte Carlo Simulation Code for ECCD Study in Helical Plasmas

    NASA Astrophysics Data System (ADS)

    Murakami, S.; Hasegawa, S.; Moriya, Y.

    2015-03-01

    A parallel momentum conserving collision model is developed for the GNET code, in which a linearized drift kinetic equation is solved in the five-dimensional phase space to study the electron cyclotron current drive (ECCD) in helical plasmas. In order to conserve the parallel momentum, we introduce a field particle collision term in addition to the test particle collision term. Two types of the field particle collision term are considered. One is the high speed limit model, where the momentum conserving term does not depend on the velocity of the background plasma and can be expressed in a simple form. The other is the velocity dependent model, which is derived directly from the Fokker-Planck collision term. In the velocity dependent model the field particle operator can be expressed using Legendre polynomials and, introducing the Rosenbluth potential, we derive the field particle term for each Legendre polynomial. In the GNET code, we introduce an iterative process to implement the momentum conserving collision operator. The high speed limit model is applied to the ECCD simulation of the Heliotron J plasma. The simulation results show good conservation of the momentum with the iterative scheme.
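
    The momentum-conserving idea can be caricatured in a few lines: after the test-particle scattering step, a field-particle term hands the momentum change back to the distribution. Here the restitution is a uniform shift, a crude stand-in for the high-speed-limit operator (GNET's velocity-dependent operator is far more sophisticated):

```python
import numpy as np

rng = np.random.default_rng(0)

v_par = rng.normal(size=10000)          # parallel velocities of test particles
p_before = v_par.sum()

# Test-particle collision step: random kicks change the total momentum.
v_scattered = v_par + 0.1 * rng.normal(size=v_par.size)
dp = v_scattered.sum() - p_before       # momentum the operator injected

# "Field particle" correction: return dp to the distribution uniformly.
v_corrected = v_scattered - dp / v_scattered.size
p_after = v_corrected.sum()
```

    After the correction the total parallel momentum matches the pre-collision value to round-off, which is the conservation property the iterative scheme in GNET enforces.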

  14. Implementation of a tree-code for numerical simulations of stellar systems

    NASA Astrophysics Data System (ADS)

    Marinho, Eraldo Pereira

    1991-10-01

    An implementation of a tree code for the force calculation in gravitational N-body system simulations is presented. The technique consists of virtualizing the entire system in a tree data-structure, which reduces the computational effort to Θ(N log N) instead of the Θ(N^2) typical of direct summation. The adopted time integrator is the simple leap-frog with second-order accuracy. A brief discussion about the truncation-error effects on the morphology of the system shows them to be essentially negligible. However, these errors do propagate in a Markovian way if a potential-adaptive time-step is used in order to maintain the expected truncation-error approximately constant in the entire system. The tests show that, even with totally arbitrary distributions, the total computation time obeys Θ(N log N). As an application of the code, we evolved an initially cold and homogeneous sphere of point masses to simulate a primordial process of galaxy formation. The evolution of the global entropy of the system suggests that a quasi-equilibrium configuration is achieved after approximately 2 × 10^9 years. It is shown that the final configuration displays a close resemblance to the well observed giant elliptical galaxies, in both kinematical and luminosity distribution properties. A discussion is given on the evolution of the important dynamic quantities characterizing the model. During all the computations, the energy is conserved to better than 0.1 percent.
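
    The leap-frog integrator mentioned above is a three-line kick-drift-kick update. A direct-summation sketch (a tree code would replace accel with its Θ(N log N) approximation) shows the good energy behaviour on a softened two-body orbit:

```python
import numpy as np

def accel(pos, mass, eps=1e-3):
    """Direct-sum gravitational accelerations (G = 1) with Plummer softening."""
    a = np.zeros_like(pos)
    for i in range(len(mass)):
        d = pos - pos[i]                          # vectors toward the other bodies
        r3 = (np.sum(d * d, axis=1) + eps**2) ** 1.5
        r3[i] = np.inf                            # skip self-interaction
        a[i] = np.sum(mass[:, None] * d / r3[:, None], axis=0)
    return a

def energy(pos, vel, mass):
    ke = 0.5 * np.sum(mass * np.sum(vel**2, axis=1))
    pe = 0.0
    for i in range(len(mass)):
        for j in range(i + 1, len(mass)):
            pe -= mass[i] * mass[j] / np.linalg.norm(pos[i] - pos[j])
    return ke + pe

mass = np.array([1.0, 1.0])
pos = np.array([[-0.5, 0.0], [0.5, 0.0]])
vel = np.array([[0.0, -1.0], [0.0, 1.0]]) * np.sqrt(0.5)   # near-circular orbit
e0 = energy(pos, vel, mass)
dt = 0.01
for _ in range(1000):                     # kick-drift-kick leap-frog
    vel += 0.5 * dt * accel(pos, mass)
    pos += dt * vel
    vel += 0.5 * dt * accel(pos, mass)
drift = abs((energy(pos, vel, mass) - e0) / e0)
```

    Being symplectic, leap-frog keeps the energy error bounded over many orbits rather than letting it grow secularly.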

  15. Validation of a Three-Dimensional Ablation and Thermal Response Simulation Code

    NASA Technical Reports Server (NTRS)

    Chen, Yih-Kanq; Milos, Frank S.; Gokcen, Tahir

    2010-01-01

    The 3dFIAT code simulates pyrolysis, ablation, and shape change of thermal protection materials and systems in three dimensions. The governing equations, which include energy conservation, a three-component decomposition model, and a surface energy balance, are solved with a moving grid system to simulate the shape change due to surface recession. This work is the first part of a code validation study for new capabilities that were added to 3dFIAT. These expanded capabilities include a multi-block moving grid system and an orthotropic thermal conductivity model. This paper focuses on conditions with minimal shape change in which the fluid/solid coupling is not necessary. Two groups of test cases of 3dFIAT analyses of Phenolic Impregnated Carbon Ablator in an arc-jet are presented. In the first group, axisymmetric iso-q shaped models are studied to check the accuracy of three-dimensional multi-block grid system. In the second group, similar models with various through-the-thickness conductivity directions are examined. In this group, the material thermal response is three-dimensional, because of the carbon fiber orientation. Predictions from 3dFIAT are presented and compared with arcjet test data. The 3dFIAT predictions agree very well with thermocouple data for both groups of test cases.

  16. Extension of the MURaM Radiative MHD Code for Coronal Simulations

    NASA Astrophysics Data System (ADS)

    Rempel, M.

    2017-01-01

    We present a new version of the MURaM radiative magnetohydrodynamics (MHD) code that allows for simulations spanning from the upper convection zone into the solar corona. We implement the relevant coronal physics in terms of optically thin radiative loss, field aligned heat conduction, and an equilibrium ionization equation of state. We artificially limit the coronal Alfvén and heat conduction speeds to computationally manageable values using an approximation to semi-relativistic MHD with an artificially reduced speed of light (Boris correction). We present example solutions ranging from quiet to active Sun in order to verify the validity of our approach. We quantify the role of numerical diffusivity for the effective coronal heating. We find that the (numerical) magnetic Prandtl number determines the ratio of resistive to viscous heating and that owing to the very large magnetic Prandtl number of the solar corona, heating is expected to happen predominantly through viscous dissipation. We find that reasonable solutions can be obtained with values of the reduced speed of light just marginally larger than the maximum sound speed. Overall this leads to a fully explicit code that can compute the time evolution of the solar corona in response to photospheric driving using numerical time steps not much smaller than 0.1 s. Numerical simulations of the coronal response to flux emergence covering a time span of a few days are well within reach using this approach.
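
    The benefit of the Boris correction can be seen with a little arithmetic: the semi-relativistic cap replaces v_A by v_A / sqrt(1 + v_A^2/c^2), so the explicit time step is set by the reduced light speed rather than by the enormous true Alfvén speed of the tenuous corona. A schematic sketch (code units and all numbers invented for illustration; not the MURaM implementation):

```python
import math

def alfven_speed(B, rho, c_red):
    """Alfven speed (mu0 = 1 units) with a semi-relativistic cap at c_red."""
    va = B / math.sqrt(rho)
    return va / math.sqrt(1.0 + (va / c_red) ** 2)   # Boris-style limiter

def time_step(dx, B, rho, cs, c_red, cfl=0.5):
    """Explicit CFL time step limited by the fastest remaining signal speed."""
    v_max = max(alfven_speed(B, rho, c_red), cs)
    return cfl * dx / v_max

# A corona-like cell: the uncapped v_A (~1e9) dwarfs the sound speed (2e7),
# and c_red is chosen just marginally above the sound speed.
dt_capped = time_step(dx=1e5, B=0.01, rho=1e-22, cs=2e7, c_red=3e7)
dt_uncapped = time_step(dx=1e5, B=0.01, rho=1e-22, cs=2e7, c_red=float("inf"))
```

    The capped run can take time steps more than an order of magnitude larger, which is what makes fully explicit coronal simulations over days of solar time affordable.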

  17. A PARALLEL MONTE CARLO CODE FOR SIMULATING COLLISIONAL N-BODY SYSTEMS

    SciTech Connect

    Pattabiraman, Bharath; Umbreit, Stefan; Liao, Wei-keng; Choudhary, Alok; Kalogera, Vassiliki; Memik, Gokhan; Rasio, Frederic A.

    2013-02-15

    We present a new parallel code for computing the dynamical evolution of collisional N-body systems with up to N ≈ 10^7 particles. Our code is based on the Hénon Monte Carlo method for solving the Fokker-Planck equation, and makes assumptions of spherical symmetry and dynamical equilibrium. The principal algorithmic developments involve optimizing data structures and the introduction of a parallel random number generation scheme as well as a parallel sorting algorithm required to find nearest neighbors for interactions and to compute the gravitational potential. The new algorithms we introduce along with our choice of decomposition scheme minimize communication costs and ensure optimal distribution of data and workload among the processing units. Our implementation uses the Message Passing Interface library for communication, which makes it portable to many different supercomputing architectures. We validate the code by calculating the evolution of clusters with initial Plummer distribution functions up to core collapse with the number of stars, N, spanning three orders of magnitude from 10^5 to 10^7. We find that our results are in good agreement with self-similar core-collapse solutions, and the core-collapse times generally agree with expectations from the literature. Also, we observe good total energy conservation, within ≲ 0.04% throughout all simulations. We analyze the performance of the code, and demonstrate near-linear scaling of the runtime with the number of processors up to 64 processors for N = 10^5, 128 for N = 10^6 and 256 for N = 10^7. The runtime reaches saturation with the addition of processors beyond these limits, which is a characteristic of the parallel sorting algorithm. The resulting maximum speedups we achieve are approximately 60×, 100×, and 220×, respectively.

  18. A Parallel Monte Carlo Code for Simulating Collisional N-body Systems

    NASA Astrophysics Data System (ADS)

    Pattabiraman, Bharath; Umbreit, Stefan; Liao, Wei-keng; Choudhary, Alok; Kalogera, Vassiliki; Memik, Gokhan; Rasio, Frederic A.

    2013-02-01

    We present a new parallel code for computing the dynamical evolution of collisional N-body systems with up to N ~ 10^7 particles. Our code is based on the Hénon Monte Carlo method for solving the Fokker-Planck equation, and makes assumptions of spherical symmetry and dynamical equilibrium. The principal algorithmic developments involve optimizing data structures and the introduction of a parallel random number generation scheme as well as a parallel sorting algorithm required to find nearest neighbors for interactions and to compute the gravitational potential. The new algorithms we introduce along with our choice of decomposition scheme minimize communication costs and ensure optimal distribution of data and workload among the processing units. Our implementation uses the Message Passing Interface library for communication, which makes it portable to many different supercomputing architectures. We validate the code by calculating the evolution of clusters with initial Plummer distribution functions up to core collapse with the number of stars, N, spanning three orders of magnitude from 10^5 to 10^7. We find that our results are in good agreement with self-similar core-collapse solutions, and the core-collapse times generally agree with expectations from the literature. Also, we observe good total energy conservation, within <~ 0.04% throughout all simulations. We analyze the performance of the code, and demonstrate near-linear scaling of the runtime with the number of processors up to 64 processors for N = 10^5, 128 for N = 10^6 and 256 for N = 10^7. The runtime reaches saturation with the addition of processors beyond these limits, which is a characteristic of the parallel sorting algorithm. The resulting maximum speedups we achieve are approximately 60×, 100×, and 220×, respectively.
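
    A "parallel random number generation scheme" of the kind the abstract mentions is commonly realized by giving each processor a statistically independent stream. With NumPy this can be sketched via seed spawning (illustrative only; not the authors' implementation):

```python
import numpy as np

# Spawn one independent child seed per (hypothetical) MPI rank, so that
# draws on different ranks never overlap or correlate.
root = np.random.SeedSequence(12345)
streams = [np.random.default_rng(s) for s in root.spawn(4)]  # one per rank

draws = [rng.random(3) for rng in streams]
```

    Each rank can then advance its own generator without any communication, which is essential for reproducible parallel Monte Carlo runs.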

  19. Transport simulations of the C-2 and C-2U Field Reversed Configurations with the Q2D code

    NASA Astrophysics Data System (ADS)

    Onofri, Marco; Dettrick, Sean; Barnes, Daniel; Tajima, Toshiki; TAE Team

    2016-10-01

    The Q2D code is a 2D MHD code, which includes a neutral fluid and separate ion and electron temperatures, coupled with a 3D Monte Carlo code, which is used to calculate source terms due to neutral beams. Q2D has been benchmarked against the 1D transport code Q1D and is used to simulate the evolution of the C-2 and C-2U field reversed configuration experiments [1]. Q2D simulations start from an initial equilibrium and transport coefficients are chosen to match C-2 experimental data. C-2U is an upgrade of C-2, with more beam power and angled beam injection, which demonstrates plasma sustainment for 5+ ms. The simulations use the same transport coefficients for C-2 and C-2U, showing the formation of a steady state in C-2U, sustained by fast ion pressure and current drive.

  20. Real simulation tools in introductory courses: packaging and repurposing our research code.

    NASA Astrophysics Data System (ADS)

    Heagy, L. J.; Cockett, R.; Kang, S.; Oldenburg, D.

    2015-12-01

    Numerical simulations are an important tool for scientific research and applications in industry. They provide a means to experiment with physics in a tangible, visual way, often providing insights into the problem. Over the last two years, we have been developing course and laboratory materials for an undergraduate geophysics course primarily taken by non-geophysics majors, including engineers and geologists. Our aim is to provide the students with resources to build intuition about geophysical techniques, promote curiosity-driven exploration, and help them develop the skills necessary to communicate across disciplines. Using open-source resources and our existing research code, we have built modules around simulations, with supporting content to give students interactive tools for exploration into the impacts of input parameters and visualization of the resulting fields, fluxes and data for a variety of problems in applied geophysics, including magnetics, seismic, electromagnetics, and direct current resistivity. The content provides context for the problems, along with exercises that are aimed at getting students to experiment and ask 'what if...?' questions. In this presentation, we will discuss our approach for designing the structure of the simulation-based modules, the resources we have used, challenges we have encountered, general feedback from students and instructors, as well as our goals and roadmap for future improvement. We hope that our experiences and approach will be beneficial to other instructors who aim to put simulation tools in the hands of students.

  1. Code System for Monte Carlo Simulation of Electron and Photon Transport.

    SciTech Connect

    2015-07-01

    Version 01 PENELOPE performs Monte Carlo simulation of coupled electron-photon transport in arbitrary materials and complex quadric geometries. A mixed procedure is used for the simulation of electron and positron interactions (elastic scattering, inelastic scattering and bremsstrahlung emission), in which ‘hard’ events (i.e. those with deflection angle and/or energy loss larger than pre-selected cutoffs) are simulated in a detailed way, while ‘soft’ interactions are calculated from multiple scattering approaches. Photon interactions (Rayleigh scattering, Compton scattering, photoelectric effect and electron-positron pair production) and positron annihilation are simulated in a detailed way. PENELOPE reads the required physical information about each material (which includes tables of physical properties, interaction cross sections, relaxation data, etc.) from the input material data file. The material data file is created by means of the auxiliary program MATERIAL, which extracts atomic interaction data from the database of ASCII files. PENELOPE mailing list archives and additional information about the code can be found at http://www.nea.fr/lists/penelope.html. See Abstract for additional features.
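
    In mixed schemes of this kind, the distance to the next 'hard' event is sampled from an exponential distribution whose mean is the hard mean free path; the soft interactions in between are handled by multiple-scattering theory. A sketch of the inverse-CDF sampling step (illustrative, not PENELOPE's source):

```python
import math
import random

random.seed(1)

def free_path(lam):
    """Sample the distance to the next hard interaction, exponentially
    distributed with mean free path lam (inverse-CDF method)."""
    return -lam * math.log(1.0 - random.random())   # 1 - u avoids log(0)

paths = [free_path(2.0) for _ in range(200000)]
mean = sum(paths) / len(paths)
```

    The sample mean converges to lam, as the exponential distribution requires.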

  2. Monte Carlo Simulation of three dimensional Edwards Anderson model with multi-spin coding and parallel tempering using MPI and CUDA

    NASA Astrophysics Data System (ADS)

    Feng, Sheng; Fang, Ye; Tam, Ka-Ming; Thakur, Bhupender; Yun, Zhifeng; Tomko, Karen; Moreno, Juana; Ramanujam, Jagannathan; Jarrell, Mark

    2013-03-01

    The Edwards Anderson model is a typical example of a randomly frustrated system. It has been a long-standing problem in computational physics due to its long relaxation time. Some important properties of the low-temperature spin glass phase are still poorly understood after decades of study. The recent advances of GPU computing provide a new opportunity to substantially improve the simulations. We developed an MPI-CUDA hybrid code with multi-spin coding for parallel tempering Monte Carlo simulation of the Edwards Anderson model. Since the system size is relatively small, and a large number of parallel replicas and Monte Carlo moves are required, the problem is well suited to modern GPUs with the CUDA architecture. We use the code to perform an extensive simulation on the three-dimensional Edwards Anderson model with an external field. This work is funded by the NSF EPSCoR LA-SiGMA project under award number EPS-1003897. This work is partly done on the machines of the Ohio Supercomputer Center.
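
    Multi-spin coding packs one Ising spin from each of up to 64 replicas into a single machine word, so one bitwise XOR evaluates a bond for all replicas at once. A minimal sketch for a ferromagnetic (+J) bond with 4 replicas (a toy illustration of the trick, not the authors' CUDA kernel):

```python
# Bit k of each word holds the spin (0 or 1) of replica k at that site.
s_i = 0b1010
s_j = 0b0110

# XOR sets bit k exactly where replica k has anti-parallel spins,
# i.e. where the ferromagnetic bond is unsatisfied.
unsatisfied = s_i ^ s_j

n_bad = bin(unsatisfied).count("1")   # replicas with an unsatisfied bond
```

    Summing such popcounts over all bonds gives the energy of every replica simultaneously, which is what makes packed parallel-tempering updates so fast on GPUs.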

  3. Electromagnetic self-consistent field initialization and fluid advance techniques for hybrid-kinetic PWFA code Architect

    NASA Astrophysics Data System (ADS)

    Massimo, F.; Marocchino, A.; Rossi, A. R.

    2016-09-01

The realization of Plasma Wakefield Acceleration experiments with high-quality accelerated bunches requires an increasing number of numerical simulations to perform first-order assessments for the experimental design and online analysis of the experimental results. Particle in Cell codes are the state-of-the-art tools to study the beam-plasma interaction mechanism, but their requirements in terms of number of cores and computational time make them unsuitable for quick parametric scans. Considerable interest has therefore been shown in methods which reduce the computational time needed for the simulation of plasma acceleration. Such methods include the use of hybrid kinetic-fluid models, which treat the relativistic bunches as in a PIC code and the background plasma electrons as a fluid. A technique to properly initialize the bunch electromagnetic fields in the time-explicit hybrid kinetic-fluid code Architect is presented, as well as the implementation of the Flux Corrected Transport scheme for the fluid equations integrated in the code.
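Flux Corrected Transport, mentioned above for the fluid equations, combines a monotone low-order flux with a limited antidiffusive correction. A one-dimensional scalar-advection sketch with the classic Boris-Book limiter follows; it illustrates the scheme's structure under simplifying assumptions (periodic grid, constant velocity) and is not Architect's implementation:

```python
def fct_advect_step(u, c):
    """One FCT step for u_t + a*u_x = 0 on a periodic grid, c = a*dt/dx in (0, 1].
    Low-order update: upwind. Antidiffusive flux: (Lax-Wendroff) - (upwind),
    limited with the Boris-Book prescription so no new extrema appear."""
    n = len(u)
    # transported low-order (upwind) solution, flux form
    td = [u[i] - c * (u[i] - u[i - 1]) for i in range(n)]
    # raw antidiffusive flux at face i+1/2, in units of u
    a = [0.5 * c * (1.0 - c) * (u[(i + 1) % n] - u[i]) for i in range(n)]
    ac = [0.0] * n
    for i in range(n):
        s = 1.0 if a[i] >= 0.0 else -1.0
        d_up = td[(i + 2) % n] - td[(i + 1) % n]
        d_dn = td[i] - td[i - 1]
        ac[i] = s * max(0.0, min(abs(a[i]), s * d_up, s * d_dn))
    # corrected update, still in conservative flux form
    return [td[i] - (ac[i] - ac[i - 1]) for i in range(n)]
```

Because both stages are in flux form, the scheme conserves the integral of u exactly while keeping a step profile free of spurious oscillations.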

  4. An efficient time advancing strategy for energy-preserving simulations

    NASA Astrophysics Data System (ADS)

    Capuano, F.; Coppola, G.; de Luca, L.

    2015-08-01

    Energy-conserving numerical methods are widely employed within the broad area of convection-dominated systems. Semi-discrete conservation of energy is usually obtained by adopting the so-called skew-symmetric splitting of the non-linear convective term, defined as a suitable average of the divergence and advective forms. Although generally allowing global conservation of kinetic energy, it has the drawback of being roughly twice as expensive as standard divergence or advective forms alone. In this paper, a general theoretical framework has been developed to derive an efficient time-advancement strategy in the context of explicit Runge-Kutta schemes. The novel technique retains the conservation properties of skew-symmetric-based discretizations at a reduced computational cost. It is found that optimal energy conservation can be achieved by properly constructed Runge-Kutta methods in which only divergence and advective forms for the convective term are used. As a consequence, a considerable improvement in computational efficiency over existing practices is achieved. The overall procedure has proved to be able to produce new schemes with a specified order of accuracy on both solution and energy. The effectiveness of the method as well as the asymptotic behavior of the schemes is demonstrated by numerical simulation of Burgers' equation.
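For Burgers' equation, the energy-conserving "suitable average" of the divergence and advective forms is the 2/3-1/3 combination; with central differences on a periodic grid the discrete convective term then does no net work on the solution. A sketch of that semi-discrete property (assumed notation, not the paper's code):

```python
def dcen(f, h):
    """Second-order central difference on a periodic grid."""
    n = len(f)
    return [(f[(i + 1) % n] - f[i - 1]) / (2.0 * h) for i in range(n)]

def conv_skew(u, h):
    """Skew-symmetric convective term for Burgers' equation:
    (2/3) * d/dx(u^2/2) + (1/3) * u * du/dx, which equals u*u_x in the
    continuum but conserves sum(u_i * conv_i) = 0 discretely."""
    div = dcen([0.5 * v * v for v in u], h)
    adv_grad = dcen(u, h)
    return [(2.0 / 3.0) * div[i] + (1.0 / 3.0) * u[i] * adv_grad[i]
            for i in range(len(u))]
```

The test below checks the defining property: the convective term is energy-neutral to machine precision, unlike the divergence or advective forms taken alone.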

  5. Full modelling of the MOSAIC animal PET system based on the GATE Monte Carlo simulation code

    NASA Astrophysics Data System (ADS)

    Merheb, C.; Petegnief, Y.; Talbot, J. N.

    2007-02-01

Positron emission tomography (PET) systems dedicated to animal imaging are now widely used for biological studies. The scanner performance strongly depends on the design and the characteristics of the system. Many parameters must be optimized, such as the dimensions and type of crystals, geometry and field-of-view (FOV), sampling, electronics, lightguide, shielding, etc. Monte Carlo modelling is a powerful tool to study the effect of each of these parameters on the basis of realistic simulated data. Performance assessment in terms of spatial resolution, count rates, scatter fraction and sensitivity is an important prerequisite before the model can be used instead of real data for a reliable description of the system response function or for optimization of reconstruction algorithms. The aim of this study is to model the performance of the Philips Mosaic™ animal PET system using a comprehensive PET simulation code in order to understand and describe the origin of important factors that influence image quality. We use GATE, a Monte Carlo simulation toolkit, for a realistic description of the ring PET model, the detectors, shielding, cap, electronic processing and dead times. We incorporate new features to adjust signal processing to the Anger logic underlying the Mosaic™ system. Special attention was paid to dead time and energy spectra descriptions. Sorting of simulated events in a list mode format similar to the system outputs was developed to compare experimental and simulated sensitivity and scatter fractions for different energy thresholds using various models of phantoms describing rat and mouse geometries. Count rates were compared for both cylindrical homogeneous phantoms. Simulated spatial resolution was fitted to experimental data for 18F point sources at different locations within the FOV with an analytical blurring function for electronic processing effects. Simulated and measured sensitivities differed by less than 3%, while scatter fractions agreed
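A toy version of the event sorting used for the scatter-fraction comparison: select events inside the energy window, then take the scattered share. The list-mode fields here are invented for illustration and do not match GATE's actual output format:

```python
def scatter_fraction(events, e_low, e_high):
    """events: (energy_keV, is_scattered) pairs from a simulated list mode.
    Only events inside the energy window count; SF = scattered / accepted."""
    accepted = [(e, s) for e, s in events if e_low <= e <= e_high]
    if not accepted:
        return 0.0
    return sum(1 for _, s in accepted if s) / len(accepted)
```

Tightening the lower energy threshold rejects more down-scattered photons and lowers the scatter fraction, which is the trade-off the study quantifies.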

  6. Look at nuclear artillery yield options using JANUS, a wargame simulation code

    SciTech Connect

    Andre, C.G.

    1982-06-15

    JANUS, a two-sided, interactive wargame simulation code, was used to explore how using each of several different yield options in a nuclear artillery shell might affect a tactical battlefield simulation. In a general sense, the results or outcomes of these simulations support the results or outcomes of previous studies. In these simulations the Red player knew of the anticipated nuclear capability of the Blue player. Neither side experienced a decisive win over the other, and both continued fighting and experienced losses that, under most historical circumstances, would have been termed unacceptable - that is, something else would have happened (the attack would have been called off). During play, each side had only fragmentary knowledge of the remaining resources on the other side - thus each side desired to continue fighting on the basis of known information. We found that the anticipated use of nuclear weapons by either side affects the character of a game significantly and that, if the employment of nuclear weapons is to have a decided effect on the progress and outcome of a battle, each side will have to have an adequate number of nuclear weapons. In almost all the simulations we ran using JANUS, enhanced radiation (ER) weapons were more effective than 1-kt fission weapons in imposing overall losses on Red. The typical visibility in the JANUS simulation limited each side's ability to acquire units deep into enemy territory and so the 10-kt fission weapon was not useful against enemy tanks that were not engaged in battle. (Troop safety constraints limited its use on tanks that were engaged in direct fire with the enemy).

  7. Modeling and simulation challenges pursued by the Consortium for Advanced Simulation of Light Water Reactors (CASL)

    NASA Astrophysics Data System (ADS)

    Turinsky, Paul J.; Kothe, Douglas B.

    2016-05-01

The Consortium for the Advanced Simulation of Light Water Reactors (CASL), the first Energy Innovation Hub of the Department of Energy, was established in 2010 with the goal of providing modeling and simulation (M&S) capabilities that support and accelerate the improvement of nuclear energy's economic competitiveness and the reduction of spent nuclear fuel volume per unit energy, all while assuring nuclear safety. To accomplish this requires advances in M&S capabilities in radiation transport, thermal-hydraulics, fuel performance and corrosion chemistry. To focus CASL's R&D, industry challenge problems have been defined, which equate with long-standing issues of the nuclear power industry that M&S can assist in addressing. To date CASL has developed a multi-physics "core simulator" based upon pin-resolved radiation transport and subchannel (within fuel assembly) thermal-hydraulics, capitalizing on the capabilities of high performance computing. CASL's fuel performance M&S capability can also be optionally integrated into the core simulator, yielding a coupled multi-physics capability with untapped predictive potential. Material models have been developed to enhance predictive capabilities of fuel clad creep and growth, along with deeper understanding of zirconium alloy clad oxidation and hydrogen pickup. Understanding of corrosion chemistry (e.g., CRUD formation) has evolved at all scales: micro, meso and macro. CFD R&D has focused on improvement in closure models for subcooled boiling and bubbly flow, and the formulation of robust numerical solution algorithms. For multi-physics integration, several iterative acceleration methods have been assessed, illuminating areas where further research is needed. Finally, uncertainty quantification and data assimilation techniques, based upon sampling approaches, have been made more feasible for practicing nuclear engineers via R&D on dimensional reduction and biased sampling. Industry adoption of CASL's evolving M

  8. COOL: A code for Dynamic Monte Carlo Simulation of molecular dynamics

    NASA Astrophysics Data System (ADS)

    Barletta, Paolo

    2012-02-01

COOL is a program to simulate evaporative and sympathetic cooling for a mixture of two gases co-trapped in a harmonic potential. The collisions involved are assumed to be exclusively elastic, and losses are due to evaporation from the trap. Each particle is followed individually in its trajectory, consequently properties such as spatial densities or energy distributions can be readily evaluated. The code can be used sequentially, by employing one output as input for another run. The code can be easily generalised to describe more complicated processes, such as the inclusion of inelastic collisions, or the possible presence of more than two species in the trap. New version program summary. Program title: COOL Catalogue identifier: AEHJ_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEHJ_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 1 097 733 No. of bytes in distributed program, including test data, etc.: 18 425 722 Distribution format: tar.gz Programming language: C++ Computer: Desktop Operating system: Linux RAM: 500 Mbytes Classification: 16.7, 23 Catalogue identifier of previous version: AEHJ_v1_0 Journal reference of previous version: Comput. Phys. Comm. 182 (2011) 388 Does the new version supersede the previous version?: Yes Nature of problem: Simulation of the sympathetic process occurring for two molecular gases co-trapped in a deep optical trap. Solution method: The Direct Simulation Monte Carlo method exploits the decoupling, over a short time period, of the inter-particle interaction from the trapping potential. The particle dynamics is thus exclusively driven by the external optical field. 
The rare inter-particle collisions are considered with an acceptance/rejection mechanism, that is, by comparing a random number to the collisional probability
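The elastic collision at the heart of a DSMC step can be sketched as: keep the centre-of-mass velocity, keep the relative speed, and redraw the relative direction isotropically, so momentum and kinetic energy are conserved by construction. This is a generic DSMC-style sketch for equal masses, not the COOL source:

```python
import math
import random

def elastic_collision(v1, v2, rng):
    """Hard-sphere elastic collision of two equal-mass particles (3-D):
    centre-of-mass velocity is preserved, the relative velocity keeps its
    magnitude and is rotated to a uniformly random direction on the sphere."""
    vcm = [(a + b) / 2.0 for a, b in zip(v1, v2)]
    g = math.dist(v1, v2)            # relative speed, conserved
    ct = 2.0 * rng.random() - 1.0    # cos(theta), uniform in [-1, 1]
    st = math.sqrt(1.0 - ct * ct)
    phi = 2.0 * math.pi * rng.random()
    gdir = [st * math.cos(phi), st * math.sin(phi), ct]
    v1p = [vcm[k] + 0.5 * g * gdir[k] for k in range(3)]
    v2p = [vcm[k] - 0.5 * g * gdir[k] for k in range(3)]
    return v1p, v2p
```

In a full DSMC loop, each candidate pair would first pass the acceptance/rejection test mentioned in the abstract (a uniform deviate compared against the collision probability) before this update is applied.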

  9. Tornado missile simulation and design methodology. Volume 1: simulation methodology, design applications, and TORMIS computer code. Final report

    SciTech Connect

    Twisdale, L.A.; Dunn, W.L.

    1981-08-01

    A probabilistic methodology has been developed to predict the probabilities of tornado-propelled missiles impacting and damaging nuclear power plant structures. Mathematical models of each event in the tornado missile hazard have been developed and sequenced to form an integrated, time-history simulation methodology. The models are data based where feasible. The data include documented records of tornado occurrence, field observations of missile transport, results of wind tunnel experiments, and missile impact tests. Probabilistic Monte Carlo techniques are used to estimate the risk probabilities. The methodology has been encoded in the TORMIS computer code to facilitate numerical analysis and plant-specific tornado missile probability assessments. Sensitivity analyses have been performed on both the individual models and the integrated methodology, and risk has been assessed for a hypothetical nuclear power plant design case study.
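The sequenced event models lend themselves to a straightforward Monte Carlo estimator: sample each event in the chain and count end-to-end successes. A deliberately simplified sketch; the real TORMIS models are data-based time-history simulations, not the independent coin flips assumed here:

```python
import random

def missile_impact_probability(n_trials, p_tornado, p_injection, p_hit, rng):
    """Chain the event models: a tornado occurs, a missile is injected,
    and its trajectory hits the target. Returns the Monte Carlo estimate
    of the end-to-end probability."""
    hits = 0
    for _ in range(n_trials):
        if (rng.random() < p_tornado
                and rng.random() < p_injection
                and rng.random() < p_hit):
            hits += 1
    return hits / n_trials
```

For independent stages the estimate converges to the product of the stage probabilities; the value of the full methodology lies in replacing each coin flip with a physically modelled, correlated event.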

  10. A Global Approach to the Physics Validation of Simulation Codes for Future Nuclear Systems

    SciTech Connect

    Giuseppe Palmiotti; Massimo Salvatores; Gerardo Aliberti; Hikarui Hiruta; R. McKnight; P. Oblozinsky; W. S. Yang

    2008-09-01

This paper presents a global approach to the validation of the parameters that enter into the neutronics simulation tools for advanced fast reactors, with the objective of reducing the uncertainties associated with crucial design parameters. This global approach makes use of sensitivity/uncertainty methods; statistical data adjustments; integral experiment selection, analysis and “representativity” quantification with respect to a reference system; scientifically based cross section covariance data and appropriate methods for their use in multigroup calculations. This global approach has been applied to the uncertainty reduction on the criticality of the Advanced Burner Reactor (both metal and oxide core versions) presently investigated in the frame of the GNEP initiative. The results obtained are very encouraging and point to possible improvements of the ENDF/B-VII data file.
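In its simplest scalar form, the statistical data adjustment referred to above is a generalised-least-squares update: the prior parameter moves toward reproducing the integral experiment, and its variance shrinks. A one-parameter, one-experiment sketch; the real machinery is multigroup and works with full covariance matrices:

```python
def gls_adjust(x, var_x, s, m, t, var_m):
    """Scalar generalised-least-squares adjustment.
    x      : prior parameter value, with variance var_x
    s      : sensitivity dt/dx of the calculated integral value to x
    m, t   : measured and calculated integral values, var_m = measurement variance
    Returns (adjusted value, posterior variance)."""
    gain = var_x * s / (s * var_x * s + var_m)
    return x + gain * (m - t), var_x - gain * s * var_x
```

The posterior variance is always smaller than the prior one, which is exactly the "uncertainty reduction" the abstract quantifies (there via representativity of the selected experiments).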

  11. Wavelet subband coding of computer simulation output using the A++ array class library

    SciTech Connect

    Bradley, J.N.; Brislawn, C.M.; Quinlan, D.J.; Zhang, H.D.; Nuri, V.

    1995-07-01

The goal of the project is to produce utility software for off-line compression of existing data and library code that can be called from a simulation program for on-line compression of data dumps as the simulation proceeds. Naturally, we would like the amount of CPU time required by the compression algorithm to be small in comparison to the requirements of typical simulation codes. We also want the algorithm to accommodate a wide variety of smooth, multidimensional data types. For these reasons, the subband vector quantization (VQ) approach employed in has been replaced by a scalar quantization (SQ) strategy using a bank of almost-uniform scalar subband quantizers in a scheme similar to that used in the FBI fingerprint image compression standard. This eliminates the considerable computational burdens of training VQ codebooks for each new type of data and performing nearest-vector searches to encode the data. The comparison of subband VQ and SQ algorithms in indicated that, in practice, there is relatively little additional gain from using vector as opposed to scalar quantization on DWT subbands, even when the source imagery is from a very homogeneous population, and our subjective experience with synthetic computer-generated data supports this stance. It appears that a careful study is needed of the tradeoffs involved in selecting scalar vs. vector subband quantization, but such an analysis is beyond the scope of this paper. Our present work is focused on the problem of generating wavelet transform/scalar quantization (WSQ) implementations that can be ported easily between different hardware environments. This is an extremely important consideration given the great profusion of different high-performance computing architectures available, the high cost associated with learning how to map algorithms effectively onto a new architecture, and the rapid rate of evolution in the world of high-performance computing.
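The WSQ chain above (wavelet transform, then uniform scalar quantization per subband) can be illustrated with a single-level orthonormal Haar transform. Because the transform is orthonormal, a per-coefficient quantization error of at most step/2 stays bounded in the reconstruction. A sketch, not the project's actual filter bank:

```python
import math

def haar_fwd(x):
    """One level of the orthonormal Haar DWT for an even-length signal."""
    r2 = math.sqrt(2.0)
    lo = [(x[2 * i] + x[2 * i + 1]) / r2 for i in range(len(x) // 2)]
    hi = [(x[2 * i] - x[2 * i + 1]) / r2 for i in range(len(x) // 2)]
    return lo, hi

def haar_inv(lo, hi):
    """Inverse of haar_fwd: perfect reconstruction without quantization."""
    r2 = math.sqrt(2.0)
    x = []
    for a, d in zip(lo, hi):
        x += [(a + d) / r2, (a - d) / r2]
    return x

def quantize(coeffs, step):
    """Uniform scalar quantizer applied per subband coefficient."""
    return [round(v / step) * step for v in coeffs]
```

No codebook training or nearest-vector search is needed, which is the computational argument the abstract makes for SQ over VQ.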

  12. Subgrid Scale Modeling in Solar Convection Simulations using the ASH Code

    NASA Technical Reports Server (NTRS)

    Young, Y.-N.; Miesch, M.; Mansour, N. N.

    2003-01-01

The turbulent solar convection zone has remained one of the most challenging and important subjects in physics. Understanding the complex dynamics in the solar convection zone is crucial for gaining insight into the solar dynamo problem. Many solar observatories have generated revealing data with great detail of large scale motions in the solar convection zone. For example, a strong differential rotation is observed: the angular rotation is observed to be faster at the equator than near the poles not only near the solar surface, but also deep in the convection zone. On the other hand, due to the wide range of dynamical scales of turbulence in the solar convection zone, both theory and simulation have had limited success. Thus, cutting edge solar models and numerical simulations of the solar convection zone have focused more narrowly on a few key features of the solar convection zone, such as the time-averaged differential rotation. For example, Brun & Toomre (2002) report the computational finding of differential rotation in an anelastic model for solar convection. A critical shortcoming in this model is that the viscous dissipation is based on application of mixing length theory to stellar dynamics with some ad hoc parameter tuning. The goal of our work is to implement the subgrid scale model developed at CTR into the solar simulation code and examine how the differential rotation will be affected as a result. Specifically, we implement a Smagorinsky-Lilly subgrid scale model into the ASH (anelastic spherical harmonic) code developed over the years by various authors. This paper is organized as follows. In §2 we briefly formulate the anelastic system that describes the solar convection. In §3 we formulate the Smagorinsky-Lilly subgrid scale model for unstably stratified convection. We then present some preliminary results in §4, where we also provide some conclusions and future directions.
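The Smagorinsky-Lilly closure named above models the unresolved stress through an eddy viscosity nu_t = (C_s * Delta)^2 * |S|, with |S| the strain-rate magnitude. A 2-D point-wise sketch with a constant coefficient; the stratification-aware version used in ASH is more involved, and the C_s value here is only a commonly quoted default:

```python
def smagorinsky_nu_t(dudy, dvdx, dudx, dvdy, delta, cs=0.17):
    """2-D Smagorinsky eddy viscosity: nu_t = (cs * delta)**2 * |S|,
    where |S| = sqrt(2 * S_ij * S_ij) and S_ij is the strain-rate tensor."""
    s11, s22 = dudx, dvdy
    s12 = 0.5 * (dudy + dvdx)
    s_mag = (2.0 * (s11 * s11 + s22 * s22 + 2.0 * s12 * s12)) ** 0.5
    return (cs * delta) ** 2 * s_mag
```

For a pure shear flow u = a*y the formula reduces to nu_t = (cs*delta)^2 * |a|, a convenient analytic check.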

  13. Simulation of fast-ion-driven Alfvén eigenmodes on the Experimental Advanced Superconducting Tokamak

    NASA Astrophysics Data System (ADS)

    Hu, Youjun; Todo, Y.; Pei, Youbin; Li, Guoqiang; Qian, Jinping; Xiang, Nong; Zhou, Deng; Ren, Qilong; Huang, Juan; Xu, Liqing

    2016-02-01

    Kinetic-MHD hybrid simulations are carried out to investigate possible fast-ion-driven modes on the Experimental Advanced Superconducting Tokamak. Three typical kinds of fast-ion-driven modes, namely, toroidicity-induced Alfvén eigenmodes, reversed shear Alfvén eigenmodes, and energetic-particle continuum modes, are observed simultaneously in the simulations. The simulation results are compared with the results of an ideal MHD eigenvalue code, which shows agreement with respect to the mode frequency, dominant poloidal mode numbers, and radial location. However, the modes in the hybrid simulations take a twisted structure on the poloidal plane, which is different from the results of the ideal MHD eigenvalue code. The twist is due to the radial phase variation of the eigenfunction, which may be attributed to the non-perturbative kinetic effects of the fast ions. By varying the stored energy of fast ions to change the fast ion drive in the simulations, it is demonstrated that the twist (i.e., the radial phase variation) is positively correlated with the fast ion drive.

  14. Implementation and evaluation of a simulation curriculum for paediatric residency programs including just-in-time in situ mock codes

    PubMed Central

    Sam, Jonathan; Pierse, Michael; Al-Qahtani, Abdullah; Cheng, Adam

    2012-01-01

    OBJECTIVE: To develop, implement and evaluate a simulation-based acute care curriculum in a paediatric residency program using an integrated and longitudinal approach. DESIGN: Curriculum framework consisting of three modular, year-specific courses and longitudinal just-in-time, in situ mock codes. SETTING: Paediatric residency program at BC Children’s Hospital, Vancouver, British Columbia. INTERVENTIONS: The three year-specific courses focused on the critical first 5 min, complex medical management and crisis resource management, respectively. The just-in-time in situ mock codes simulated the acute deterioration of an existing ward patient, prepared the actual multidisciplinary code team, and primed the surrounding crisis support systems. Each curriculum component was evaluated with surveys using a five-point Likert scale. RESULTS: A total of 40 resident surveys were completed after each of the modular courses, and an additional 28 surveys were completed for the overall simulation curriculum. The highest Likert scores were for hands-on skill stations, immersive simulation environment and crisis resource management teaching. Survey results also suggested that just-in-time mock codes were realistic, reinforced learning, and prepared ward teams for patient deterioration. CONCLUSIONS: A simulation-based acute care curriculum was successfully integrated into a paediatric residency program. It provides a model for integrating simulation-based learning into other training programs, as well as a model for any hospital that wishes to improve paediatric resuscitation outcomes using just-in-time in situ mock codes. PMID:23372405

  15. Advanced wellbore thermal simulator GEOTEMP2 research report

    SciTech Connect

    Mitchell, R.F.

    1982-02-01

    The development of the GEOTEMP2 wellbore thermal simulator is described. The major technical features include a general purpose air and mist drilling simulator and a two-phase steam flow simulator that can model either injection or production.

  16. Advancements and performance of iterative methods in industrial applications codes on CRAY parallel/vector supercomputers

    SciTech Connect

    Poole, G.; Heroux, M.

    1994-12-31

This paper will focus on recent work in two widely used industrial applications codes with iterative methods. The ANSYS program, a general purpose finite element code widely used in structural analysis applications, has now added an iterative solver option. Some results are given from real applications comparing performance with the traditional parallel/vector frontal solver used in ANSYS. Discussion of the applicability of iterative solvers as a general purpose solver will include the topics of robustness, as well as memory requirements and CPU performance. The FIDAP program is a widely used CFD code which uses iterative solvers routinely. A brief description of preconditioners used and some performance enhancements for CRAY parallel/vector systems is given. The solution of large-scale applications in structures and CFD includes examples from industry problems solved on CRAY systems.
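A minimal preconditioned conjugate-gradient loop with a Jacobi (diagonal) preconditioner, the simplest member of the preconditioner families such codes offer. This is a textbook sketch for dense symmetric positive-definite systems, not the ANSYS or FIDAP solver:

```python
def pcg(A, b, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradients for SPD A (dense, list-of-lists),
    with a Jacobi preconditioner M = diag(A)."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                  # residual for x = 0
    minv = [1.0 / A[i][i] for i in range(n)]  # Jacobi preconditioner inverse
    z = [minv[i] * r[i] for i in range(n)]
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if max(abs(v) for v in r) < tol:
            break
        z = [minv[i] * r[i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x
```

Unlike a frontal direct solver, memory use stays at a few vectors plus the matrix, which is the trade-off the paper discusses.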

  17. Advances in simulating radiance signatures for dynamic air/water interfaces

    NASA Astrophysics Data System (ADS)

    Goodenough, Adam A.; Brown, Scott D.; Gerace, Aaron

    2015-05-01

The air-water interface poses a number of problems for both collecting and simulating imagery. At the surface, the magnitude of observed radiance can change by multiple orders of magnitude at high spatiotemporal frequency due to glinting effects. In the volume, similarly high frequency focusing of photons by a dynamic wave surface significantly changes the reflected radiance of in-water objects and the scattered return of the volume itself. These phenomena are often manifest as saturated pixels and artifacts in collected imagery (often enhanced by time delays between neighboring pixels or interpolation between adjacent filters) and as noise and greater required computation times in simulated imagery. This paper describes recent advances made to the Digital Image and Remote Sensing Image Generation (DIRSIG) model to address the simulation issues to better facilitate an understanding of a multi/hyper-spectral collection. Glint effects are simulated using a dynamic height field that can be driven by wave frequency models and generates a sea state at arbitrary time scales. The volume scattering problem is handled by coupling the geometry representing the surface (facetization by the height field) with the single scattering contribution at any point in the water. The problem is constrained somewhat by assuming that contributions come from a Snell's window above the scattering point and by assuming a direct source (sun). Diffuse single scattered and multiple scattered energy contributions are handled by Monte Carlo techniques employed previously. The model is compared to existing radiative transfer codes where possible, with the objective of providing a robust model of time-dependent absolute radiance at many wavelengths.
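The Snell's-window constraint mentioned above has a simple geometric basis: refraction at a flat interface compresses the entire sky into a cone of half-angle asin(1/n) about the vertical, roughly 48.6 degrees for water. A small sketch of that bound:

```python
import math

def snells_window_half_angle_deg(n_water=1.33):
    """Half-angle of Snell's window: downwelling light from above the
    surface reaches an underwater point only within a cone of asin(1/n)
    about the vertical (about 48.6 degrees for n = 1.33)."""
    return math.degrees(math.asin(1.0 / n_water))
```

Restricting the single-scattering integral to this cone is what makes the in-water contribution tractable for a direct (sun) source.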

  18. Genome Reshuffling for Advanced Intercross Permutation (GRAIP): Simulation and permutation for advanced intercross population analysis

    SciTech Connect

    Pierce, Jeremy; Broman, Karl; Lu, Lu; Chesler, Elissa J; Zhou, Guomin; Airey, David; Birmingham, Amanda; Williams, Robert

    2008-04-01

Background: Advanced intercross lines (AIL) are segregating populations created using a multi-generation breeding protocol for fine mapping complex trait loci (QTL) in mice and other organisms. Applying QTL mapping methods for intercross and backcross populations, often followed by naïve permutation of individuals and phenotypes, does not account for the effect of AIL family structure in which final generations have been expanded and leads to inappropriately low significance thresholds. The critical problem with naïve mapping approaches in AIL populations is that the individual is not an exchangeable unit. Methodology/Principal Findings: The effect of family structure has immediate implications for optimal AIL creation (many crosses, few animals per cross, and population expansion before the final generation) and we discuss these and the utility of AIL populations for QTL fine mapping. We also describe Genome Reshuffling for Advanced Intercross Permutation (GRAIP), a method for analyzing AIL data that accounts for family structure. GRAIP permutes a more interchangeable unit in the final generation crosses - the parental genome - and simulates regeneration of a permuted AIL population based on exchanged parental identities. GRAIP determines appropriate genome-wide significance thresholds and locus-specific P-values for AILs and other populations with similar family structures. We contrast GRAIP with naïve permutation using a large densely genotyped mouse AIL population (1333 individuals from 32 crosses). A naïve permutation using coat color as a model phenotype demonstrates high false-positive locus identification and uncertain significance levels, which are corrected using GRAIP. GRAIP also detects an established hippocampus weight locus and a new locus, Hipp9a. Conclusions and Significance: GRAIP determines appropriate genome-wide significance thresholds and locus-specific P-values for AILs and other populations with similar family structures. The effect of
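The essence of the approach is to permute the exchangeable unit (here, family-level labels rather than individuals) and rebuild the null distribution of the test statistic from the permutations. A toy threshold calculation; the data layout and mean-difference statistic are invented for illustration, whereas GRAIP itself regenerates entire permuted AIL genomes:

```python
import random

def family_perm_threshold(family_geno, family_pheno, n_perm, rng, alpha=0.05):
    """Shuffle genotype labels across families (not individuals), recompute
    an allelic mean-difference statistic for each permutation, and return
    the (1 - alpha) quantile of the null distribution as the threshold."""
    def stat(g, p):
        a = [x for gi, x in zip(g, p) if gi == 1]
        b = [x for gi, x in zip(g, p) if gi == 0]
        if not a or not b:
            return 0.0
        return abs(sum(a) / len(a) - sum(b) / len(b))
    null = []
    for _ in range(n_perm):
        g = family_geno[:]
        rng.shuffle(g)
        null.append(stat(g, family_pheno))
    null.sort()
    return null[min(int((1.0 - alpha) * n_perm), n_perm - 1)]
```

Permuting at the family level keeps siblings' shared background intact under the null, which is precisely what naïve individual-level permutation destroys.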

  19. COOL: A code for dynamic Monte Carlo simulation of molecular dynamics

    NASA Astrophysics Data System (ADS)

    Barletta, Paolo

    2011-02-01

COOL is a program to simulate evaporative and sympathetic cooling for a mixture of two gases co-trapped in a harmonic potential. The collisions involved are assumed to be exclusively elastic, and losses are due to evaporation from the trap. Each particle is followed individually in its trajectory, consequently properties such as spatial densities or energy distributions can be readily evaluated. The code can be used sequentially, by employing one output as input for another run. The code can be easily generalised to describe more complicated processes, such as the inclusion of inelastic collisions, or the possible presence of more than two species in the trap. Program summary. Program title: COOL Catalogue identifier: AEHJ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEHJ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 1 111 674 No. of bytes in distributed program, including test data, etc.: 18 618 045 Distribution format: tar.gz Programming language: C++ Computer: Desktop Operating system: Linux RAM: 500 Mbytes Classification: 16.7, 23 Nature of problem: Simulation of the sympathetic process occurring for two molecular gases co-trapped in a deep optical trap. Solution method: The Direct Simulation Monte Carlo method exploits the decoupling, over a short time period, of the inter-particle interaction from the trapping potential. The particle dynamics is thus exclusively driven by the external optical field. The rare inter-particle collisions are considered with an acceptance/rejection mechanism, that is, by comparing a random number to the collisional probability defined in terms of the inter-particle cross section and centre-of-mass energy. 
All particles in the trap are individually simulated so that at each time step a number of useful quantities, such as

  20. Advanced Simulation Capability for Environmental Management (ASCEM) Phase II Demonstration

    SciTech Connect

    Freshley, M.; Hubbard, S.; Flach, G.; Freedman, V.; Agarwal, D.; Andre, B.; Bott, Y.; Chen, X.; Davis, J.; Faybishenko, B.; Gorton, I.; Murray, C.; Moulton, D.; Meyer, J.; Rockhold, M.; Shoshani, A.; Steefel, C.; Wainwright, H.; Waichler, S.

    2012-09-28

    In 2009, the National Academies of Science (NAS) reviewed and validated the U.S. Department of Energy Office of Environmental Management (EM) Technology Program in its publication, Advice on the Department of Energy’s Cleanup Technology Roadmap: Gaps and Bridges. The NAS report outlined prioritization needs for the Groundwater and Soil Remediation Roadmap, concluded that contaminant behavior in the subsurface is poorly understood, and recommended further research in this area as a high priority. To address this NAS concern, the EM Office of Site Restoration began supporting the development of the Advanced Simulation Capability for Environmental Management (ASCEM). ASCEM is a state-of-the-art scientific approach that uses an integration of toolsets for understanding and predicting contaminant fate and transport in natural and engineered systems. The ASCEM modeling toolset is modular and open source. It is divided into three thrust areas: Multi-Process High Performance Computing (HPC), Platform and Integrated Toolsets, and Site Applications. The ASCEM toolsets will facilitate integrated approaches to modeling and site characterization that enable robust and standardized assessments of performance and risk for EM cleanup and closure activities. During fiscal year 2012, the ASCEM project continued to make significant progress in capabilities development. Capability development occurred in both the Platform and Integrated Toolsets and Multi-Process HPC Simulator areas. The new Platform and Integrated Toolsets capabilities provide the user an interface and the tools necessary for end-to-end model development that includes conceptual model definition, data management for model input, model calibration and uncertainty analysis, and model output processing including visualization. The new HPC Simulator capabilities target increased functionality of process model representations, toolsets for interaction with the Platform, and model confidence testing and verification for

  1. Monte Carlo N-Particle Transport Code System To Simulate Time-Analysis Quantities.

    SciTech Connect

    PADOVANI, ENRICO

    2012-04-15

Version: 00 US DOE 10CFR810 Jurisdiction. The Monte Carlo simulation of correlation measurements that rely on the detection of fast neutrons and photons from fission requires that particle emissions and interactions following a fission event be described as close to reality as possible. The -PoliMi extension to MCNP and to MCNPX was developed to simulate correlated-particle emission and the subsequent interactions as closely as possible to the physical behavior. Initially, MCNP-PoliMi, a modification of MCNP4C, was developed. The first version was developed in 2001-2002 and released in early 2004 to the Radiation Safety Information Computational Center (RSICC). It was developed for research purposes, to simulate correlated counts in organic scintillation detectors, sensitive to fast neutrons and gamma rays. Originally, the field of application was nuclear safeguards; however, subsequent improvements have enhanced the ability to model measurements in other research fields as well. During 2010-2011 the -PoliMi modification was ported into MCNPX-2.7.0, leading to the development of MCNPX-PoliMi. Now the -PoliMi v2.0 modifications are distributed as a patch to MCNPX-2.7.0, which is currently distributed in the RSICC PACKAGE BCC-004 MCNP6_BETA2/MCNP5/MCNPX. Also included in the package is MPPost, a versatile code that provides simulated detector response. By taking advantage of the modifications in MCNPX-PoliMi, MPPost can provide an accurate simulation of the detector response for a variety of detection scenarios.

  2. Science based integrated approach to advanced nuclear fuel development - integrated multi-scale multi-physics hierarchical modeling and simulation framework Part III: cladding

    SciTech Connect

    Tome, Carlos N; Caro, J A; Lebensohn, R A; Unal, Cetin; Arsenlis, A; Marian, J; Pasamehmetoglu, K

    2010-01-01

Advancing the performance of Light Water Reactors, Advanced Nuclear Fuel Cycles, and Advanced Reactors, such as the Next Generation Nuclear Power Plants, requires enhancing our fundamental understanding of fuel and materials behavior under irradiation. The capability to accurately model nuclear fuel systems and thereby develop predictive tools is critical. Not only are fabrication and performance models needed to understand specific aspects of nuclear fuel behavior, but fully coupled fuel simulation codes are also required to achieve licensing of specific nuclear fuel designs for operation. The backbone of these codes, models, and simulations is a fundamental understanding of, and predictive capability for simulating, the phase and microstructural behavior of the nuclear fuel system materials and matrices. In this paper we review the current status of advanced modeling and simulation of nuclear reactor cladding, with emphasis on what is available and what is to be developed in each scale of the project, how we propose to pass information from one scale to the next, and what experimental information is required for benchmarking and advancing the modeling at each scale level.

  3. Five-field simulations of peeling-ballooning modes using BOUT++ code

    SciTech Connect

    Xia, T. Y.; Xu, X. Q.

    2013-05-15

Simulations of edge localized modes (ELMs) with a 5-field peeling-ballooning (P-B) model using the BOUT++ code are reported in this paper. In order to study particle and energy transport in the pedestal region, the pressure equation is separated into ion density and ion and electron temperature equations. Through the simulations, the length scale L{sub n} of the gradient of the equilibrium density n{sub i0} is found to destabilize the P-B modes in the ideal MHD model. With ion diamagnetic effects, the growth rate is inversely proportional to n{sub i0} at medium toroidal mode number n. In the nonlinear simulations, the gradient of n{sub i0} in the pedestal region can more than double the ELM size. This enhancement can be suppressed by thermal diffusivities χ{sub ∥} employing the flux-limited expression. Thermal diffusivities are sufficient to suppress the perturbations at the top of the pedestal region. These suppressing effects lead to a smaller ELM size of the P-B modes.
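
The flux-limited expression mentioned above caps the Spitzer-Harm parallel heat flux at a fraction of the free-streaming value. A minimal sketch of a common harmonic-form limiter (an illustrative choice, not necessarily the exact expression used in BOUT++):

```python
def flux_limited_heat_flux(q_sh, q_fs, alpha=1.0):
    """Harmonic flux limiter: recovers the Spitzer-Harm flux q_sh for gentle
    gradients, and saturates near alpha * q_fs (free-streaming) for steep ones."""
    return q_sh / (1.0 + q_sh / (alpha * q_fs))

# gentle gradient: flux is nearly the Spitzer-Harm value
print(flux_limited_heat_flux(1.0, 1000.0))   # ~1.0
# steep gradient: flux is capped near the free-streaming limit
print(flux_limited_heat_flux(1.0e6, 1000.0)) # ~1000
```

The harmonic form smoothly interpolates between the two regimes instead of switching discontinuously.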

  4. Simulation of Turbulent Combustion Fields of Shock-Dispersed Aluminum Using the AMR Code

    SciTech Connect

    Kuhl, A L; Bell, J B; Beckner, V E; Khasainov, B

    2006-11-02

We present a model for simulating combustion in Shock-Dispersed-Fuel (SDF) explosions. The SDF charge consisted of a 0.5-g spherical PETN booster surrounded by 1 g of fuel powder (flake aluminum). Detonation of the booster charge creates a high-temperature, high-pressure source (the PETN detonation product gases) that both disperses the fuel and heats it. Combustion ensues when the fuel mixes with air. The gas phase is governed by the gas-dynamic conservation laws, while the particle phase obeys the continuum mechanics laws for heterogeneous media. The two phases exchange mass, momentum and energy according to inter-phase interaction terms. The kinetics model used an empirical particle burn relation. The thermodynamic model considers the air, fuel and booster products to be of frozen composition, while the Al combustion products are assumed to be in equilibrium. The thermodynamic states were calculated with the Cheetah code; the resulting state points were fit with analytic functions suitable for numerical simulations. Numerical simulations of the combustion of an aluminum SDF charge in a 6.4-liter chamber were performed. Computed pressure histories agree with measurements.
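
The abstract cites an empirical particle burn relation without giving its form. A common choice for metal-particle combustion, shown here purely as an illustration with made-up constants, is the d²-law, in which the squared particle diameter decreases linearly in time:

```python
import math

def d2_law_diameter(d0, burn_k, t):
    """d^2-law: d(t)^2 = d0^2 - K * t, clipped to zero after burnout."""
    d_sq = d0 * d0 - burn_k * t
    return math.sqrt(d_sq) if d_sq > 0.0 else 0.0

def burnout_time(d0, burn_k):
    """Time for the particle to burn out completely: t_b = d0^2 / K."""
    return d0 * d0 / burn_k

d0 = 20e-6      # 20-micron flake diameter (illustrative)
burn_k = 1e-9   # burn-rate constant in m^2/s (illustrative, not from the paper)
print(burnout_time(d0, burn_k))  # ~0.4 s
```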

  5. Advanced Simulation and Computing: A Summary Report to the Director's Review

    SciTech Connect

    McCoy, M G; Peck, T

    2003-06-01

It has now been three years since the Advanced Simulation and Computing Program (ASCI), as managed by the Defense and Nuclear Technologies (DNT) Directorate, was reviewed by this Director's Review Committee (DRC). Since that time there has been considerable progress for all components of the ASCI Program, and these developments will be highlighted in this document and in the presentations planned for June 9 and 10, 2003. There have also been some name changes. Today the Program is called ''Advanced Simulation and Computing.'' Although it retains the familiar acronym ASCI, the initiative nature of the effort has given way to sustained services as an integral part of the Stockpile Stewardship Program (SSP). All computing efforts at LLNL and the other two Defense Program (DP) laboratories are funded and managed under ASCI. This includes the so-called legacy codes, which remain essential tools in stockpile stewardship. The contract between the Department of Energy (DOE) and the University of California (UC) specifies an independent appraisal of Directorate technical work and programmatic management; this report represents the work of the DNT Review Committee. Beginning this year, the Laboratory is implementing a new review system. This process was negotiated between UC, the National Nuclear Security Administration (NNSA), and the Laboratory Directors. Central to this approach are eight performance objectives that focus on key programmatic and administrative goals. Associated with each of these objectives are a number of performance measures that more clearly characterize the attainment of the objectives. Each performance measure has a lead directorate and one or more contributing directorates. Each measure has an evaluation plan and identifies the documentation expected to be included in the ''Assessment File''.

  6. The GENGA Code: Gravitational Encounters in N-body simulations with GPU Acceleration.

    NASA Astrophysics Data System (ADS)

    Grimm, Simon; Stadel, Joachim

    2013-07-01

We present a GPU (Graphics Processing Unit) implementation of a hybrid symplectic N-body integrator based on the Mercury code (Chambers 1999), which handles close encounters with very good energy conservation. It uses a combination of mixed-variable integration (Wisdom & Holman 1991) and a direct N-body Bulirsch-Stoer method. GENGA is written in CUDA C and runs on NVIDIA GPUs. The GENGA code supports three simulation modes: integration of up to 2048 massive bodies, integration with up to a million test particles, or parallel integration of a large number of individual planetary systems. To achieve the best performance, GENGA runs entirely on the GPU, where it can take advantage of the very fast, but limited, memory that exists there. All operations are performed in parallel, including close-encounter detection and the grouping of independent close-encounter pairs. Compared to Mercury, GENGA runs up to 30 times faster. Two applications of GENGA are presented: first, the dynamics of planetesimals and the late stage of rocky planet formation due to planetesimal collisions; second, a dynamical stability analysis of an exoplanetary system with an additional hypothetical super-Earth, which shows that in some multiple planetary systems additional super-Earths could exist without perturbing the dynamical stability of the other planets (Elser et al. 2013).
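
The bounded energy error that makes symplectic integrators attractive for long planetary integrations can be seen even in a plain kick-drift-kick leapfrog, used here as a simplified stand-in for GENGA's Wisdom-Holman map (this sketch is not GENGA's actual scheme):

```python
def leapfrog_orbit(x, y, vx, vy, dt, n_steps, gm=1.0):
    """Kick-drift-kick leapfrog for a test particle orbiting a mass with GM = gm."""
    for _ in range(n_steps):
        r3 = (x * x + y * y) ** 1.5
        vx -= 0.5 * dt * gm * x / r3    # half kick
        vy -= 0.5 * dt * gm * y / r3
        x += dt * vx                    # full drift
        y += dt * vy
        r3 = (x * x + y * y) ** 1.5
        vx -= 0.5 * dt * gm * x / r3    # half kick
        vy -= 0.5 * dt * gm * y / r3
    return x, y, vx, vy

def energy(x, y, vx, vy, gm=1.0):
    """Specific orbital energy: kinetic plus potential."""
    return 0.5 * (vx * vx + vy * vy) - gm / (x * x + y * y) ** 0.5

# Circular orbit at r = 1 (period 2*pi in these units): integrate ~one period
e0 = energy(1.0, 0.0, 0.0, 1.0)
x, y, vx, vy = leapfrog_orbit(1.0, 0.0, 0.0, 1.0, 0.01, 628)
print(energy(x, y, vx, vy) - e0)  # energy drift stays tiny and bounded
```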

  7. Stellarator Microinstability and Turbulence Simulations Using Gyrofluid (GryfX) and Gyrokinetic (GS2) Codes

    NASA Astrophysics Data System (ADS)

    Martin, Mike; Landreman, Matt; Mandell, Noah; Dorland, William

    2016-10-01

GryfX is a delta-f code that evolves the gyrofluid set of equations using sophisticated nonlinear closures, with the option to evolve zonal flows (ky = 0) kinetically. Because fluid models require less memory than kinetic models, GryfX is well suited to run on a Graphics Processing Unit (GPU), and its GPU implementation yields roughly a 1,200-fold performance advantage over GS2. Here we present the first stellarator simulations using GryfX. Results compare linear growth rates of the Ion Temperature Gradient (ITG) mode between GryfX and the gyrokinetic code GS2, using stellarator geometries from the National Compact Stellarator Experiment (NCSX) and Wendelstein 7-X (W7X). Agreement to within 10% in the maximum growth rates is observed between GS2 and GryfX for temperature gradients away from marginal stability, for both NCSX and W7X geometries. Nonlinear stellarator results using GS2/GryfX are also presented.

  8. CODE BLUE: Three dimensional massively-parallel simulation of multi-scale configurations

    NASA Astrophysics Data System (ADS)

    Juric, Damir; Kahouadji, Lyes; Chergui, Jalel; Shin, Seungwon; Craster, Richard; Matar, Omar

    2016-11-01

We present recent progress on BLUE, a solver for massively parallel simulations of fully three-dimensional multiphase flows, which runs on a variety of computer architectures from laptops to supercomputers and on 131072 threads or more (limited only by the number of threads available to us). The code is written entirely in Fortran 2003 and uses a domain decomposition strategy for parallelization with MPI. The fluid interface solver is based on a parallel implementation of a hybrid Front Tracking/Level Set method designed to handle highly deforming interfaces with complex topology changes. We developed parallel GMRES and multigrid iterative solvers suited to the linear systems arising from the implicit solution for the fluid velocities and pressure in the presence of strong density and viscosity discontinuities across fluid phases. Particular attention is given to the details and performance of the parallel multigrid solver. EPSRC UK Programme Grant MEMPHIS (EP/K003976/1).
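
The domain decomposition strategy BLUE uses with MPI can be illustrated, without MPI, by splitting a periodic 1-D grid among mock ranks and exchanging ghost cells each step; the decomposed result matches the serial stencil exactly (a schematic sketch, unrelated to BLUE's actual data structures):

```python
def diffuse_serial(u, n_steps, c=0.25):
    """Explicit diffusion stencil on a periodic 1-D grid (reference result)."""
    n = len(u)
    for _ in range(n_steps):
        u = [u[i] + c * (u[(i - 1) % n] - 2 * u[i] + u[(i + 1) % n])
             for i in range(n)]
    return u

def diffuse_decomposed(u, n_ranks, n_steps, c=0.25):
    """Same stencil via domain decomposition with ghost-cell exchange,
    mimicking the MPI halo exchange of a parallel flow solver."""
    n = len(u)
    chunk = n // n_ranks   # assumes n divisible by n_ranks
    parts = [u[r * chunk:(r + 1) * chunk] for r in range(n_ranks)]
    for _ in range(n_steps):
        new_parts = []
        for r, p in enumerate(parts):
            # ghost cells from periodic neighbours (stand-in for MPI send/recv)
            ext = [parts[(r - 1) % n_ranks][-1]] + p + [parts[(r + 1) % n_ranks][0]]
            new_parts.append([ext[i] + c * (ext[i - 1] - 2 * ext[i] + ext[i + 1])
                              for i in range(1, len(ext) - 1)])
        parts = new_parts
    return [x for p in parts for x in p]

u0 = [0.0] * 16
u0[8] = 1.0
a = diffuse_serial(u0, 5)
b = diffuse_decomposed(u0, 4, 5)
print(max(abs(x - y) for x, y in zip(a, b)))  # difference at machine-precision level
```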

  9. Simulating Magnetic Reconnection Experiment (MRX) with a Guide Field using Fluid Code, HiFi

    NASA Astrophysics Data System (ADS)

    Budner, Tamas; Chen, Yangao; Meier, Eric; Ji, Hantao; MRX Team

    2015-11-01

Magnetic reconnection is a phenomenon that occurs in plasmas when magnetic field lines effectively ``break'' and reconnect, resulting in a different topological configuration. In this process, energy that was once stored in the magnetic field is transferred into the thermal velocity of the particles, effectively heating the plasma. MRX at the Princeton Plasma Physics Laboratory creates the conditions under which reconnection can occur by initially ramping up the current in two adjacent coils and then rapidly decreasing it, with and without a guide magnetic field along the reconnecting current. We simulate this experiment using a fluid code called HiFi, an implicit and adaptive high-order spectral element modeling framework, and compare our results to experimental data from MRX. The purpose is to identify the physics behind the observed reconnection process, the field-line breaking, and the resultant plasma heating.

  10. Particle-in-cell/accelerator code for space-charge dominated beam simulation

    SciTech Connect

    2012-05-08

Warp is a multidimensional discrete-particle beam simulation program designed to be applicable where the beam space charge is non-negligible or dominant. It is being developed in a collaboration among LLNL, LBNL and the University of Maryland. It was originally designed and optimized for heavy ion fusion accelerator physics studies, but has received use in a broader range of applications, including for example laser wakefield accelerators, e-cloud studies in high energy accelerators, particle traps and other areas. At present it incorporates 3-D, axisymmetric (r,z), planar (x-z) and transverse slice (x,y) descriptions, with both electrostatic and electromagnetic fields, and a beam envelope model. The code is built atop the Python interpreter language.
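
A particle-in-cell cycle of the kind Warp performs alternates charge deposition, a field solve, and a particle push. A minimal 1-D electrostatic sketch with nearest-grid-point weighting (illustrative only; Warp's actual solvers and particle weighting are far more sophisticated):

```python
def deposit_charge(positions, q, n_cells, dx):
    """Nearest-grid-point charge deposition (the scatter step of a PIC cycle)."""
    rho = [0.0] * n_cells
    for x in positions:
        j = int(x / dx + 0.5) % n_cells
        rho[j] += q / dx
    return rho

def solve_field(rho, dx, eps0=1.0):
    """1-D periodic electrostatic field: integrate dE/dx = rho/eps0, zero-mean E."""
    e, acc = [0.0] * len(rho), 0.0
    for j, r in enumerate(rho):
        acc += r * dx / eps0
        e[j] = acc
    mean = sum(e) / len(e)
    return [v - mean for v in e]

def push(positions, velocities, e_field, q_over_m, dx, dt, length):
    """Leapfrog push: gather E at each particle, kick the velocity, then drift."""
    out_x, out_v = [], []
    for x, v in zip(positions, velocities):
        j = int(x / dx + 0.5) % len(e_field)
        v += q_over_m * e_field[j] * dt
        out_x.append((x + v * dt) % length)
        out_v.append(v)
    return out_x, out_v

rho = deposit_charge([0.2, 3.7], 1.0, 8, 1.0)
e = solve_field(rho, 1.0)
xs, vs = push([0.5], [1.0], e, -1.0, 1.0, 0.05, 8.0)
print(sum(rho))  # total deposited charge per dx: 2.0
```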

  11. CMAD: A Self-consistent Parallel Code to Simulate the Electron Cloud Build-up and Instabilities

    SciTech Connect

    Pivi, M.T.F.; /SLAC

    2007-11-07

We present the features of CMAD, a newly developed self-consistent code which simulates both the electron cloud build-up and the related beam instabilities. By means of parallel (Message Passing Interface - MPI) computation, the code tracks the beam in an existing (MAD-type) lattice and continuously resolves the interaction between the beam and the cloud at each element location, with different cloud distributions at each magnet location. The goal of CMAD is to simulate single- and coupled-bunch instabilities, allowing tune shift, dynamic aperture and frequency map analysis and the determination of the secondary electron yield instability threshold. The code is currently being developed and benchmarked against existing codes. Preliminary benchmarking results are presented in this paper.

  12. Neutrons Flux Distributions of the Pu-Be Source and its Simulation by the MCNP-4B Code

    NASA Astrophysics Data System (ADS)

    Faghihi, F.; Mehdizadeh, S.; Hadad, K.

The neutron fluence rate of a low-intensity Pu-Be source is measured by neutron activation analysis (NAA) of 197Au foils. In addition, the neutron fluence rate distribution versus energy is calculated using the MCNP-4B code based on the ENDF/B-V library. This combined theoretical and experimental study, the first of its kind in Iran, establishes the reliability of the code for further research. In the theoretical investigation, an isotropic Pu-Be source with a cylindrical volume distribution is simulated and the relative neutron fluence rate versus energy is calculated using the MCNP-4B code. The fast and thermal neutron fluence rates obtained from the NAA measurements and from the MCNP calculations are compared.
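
Foil activation measurements of this kind recover the fluence rate from the standard activation equation A = φσN(1 − e^(−λt)). A round-trip sketch with illustrative 197Au-like numbers (the foil parameters are assumptions, not values from the paper):

```python
import math

def flux_from_activity(activity, sigma_cm2, n_atoms, lam, t_irr):
    """Infer neutron flux [n/cm^2/s] from end-of-irradiation activity [Bq] via
    the activation equation A = phi * sigma * N * (1 - exp(-lambda * t_irr))."""
    saturation = 1.0 - math.exp(-lam * t_irr)
    return activity / (sigma_cm2 * n_atoms * saturation)

phi_true = 1.0e4                     # n/cm^2/s (illustrative)
sigma = 98.65e-24                    # 197Au thermal (n,gamma) cross section, cm^2
n_atoms = 3.0e20                     # atoms in the foil (illustrative)
lam = math.log(2) / (2.7 * 86400)    # 198Au decay constant (half-life ~2.7 d)
t_irr = 3600.0                       # 1 h irradiation
activity = phi_true * sigma * n_atoms * (1.0 - math.exp(-lam * t_irr))
print(flux_from_activity(activity, sigma, n_atoms, lam, t_irr))  # recovers ~1.0e4
```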

  13. Multi-Dimensional Simulation of LWR Fuel Behavior in the BISON Fuel Performance Code

    NASA Astrophysics Data System (ADS)

    Williamson, R. L.; Capps, N. A.; Liu, W.; Rashid, Y. R.; Wirth, B. D.

    2016-11-01

Nuclear fuel operates in an extreme environment that induces complex multiphysics phenomena occurring over distances ranging from inter-atomic spacing to meters, and time scales ranging from microseconds to years. Simulating this behavior requires a wide variety of material models that are often complex and nonlinear. The recently developed BISON code represents a powerful fuel performance simulation tool based on its material and physical behavior capabilities, the finite-element versatility of its spatial representation, and its use of parallel computing. The code can operate in full three-dimensional (3D) mode, as well as in reduced two-dimensional (2D) modes, e.g., axisymmetric radial-axial (R-Z) or plane radial-circumferential (R-θ), to suit the application and to allow treatment of global and local effects. A BISON case study was used to illustrate analysis of pellet-clad mechanical interaction failures from manufacturing defects using combined 2D and 3D analyses. The analysis involved commercial fuel rods and demonstrated successful computation of metrics of interest to fuel failures, including cladding peak hoop stress and strain energy density. In comparison with a failure threshold derived from power ramp tests, the results corroborate industry analyses of the root cause of the pellet-clad interaction failures and illustrate the importance of modeling 3D local effects around fuel pellet defects, which can produce complex effects including cold spots in the cladding, stress concentrations, and hot spots in the fuel that can lead to enhanced cladding degradation such as hydriding, oxidation, CRUD formation, and stress corrosion cracking.
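
BISON computes the full finite-element stress field, but the peak-hoop-stress metric can be anchored by the textbook thin-wall estimate σ_θ = Δp·r/t. A sketch with assumed, PWR-like cladding dimensions (illustrative, not values from the paper):

```python
def thin_wall_hoop_stress(delta_p, radius, thickness):
    """Thin-walled cylinder hoop stress: sigma_theta = delta_p * r / t."""
    return delta_p * radius / thickness

# Assumed numbers: 10 MPa pressure differential, 4.75 mm radius, 0.57 mm wall
sigma = thin_wall_hoop_stress(delta_p=10e6, radius=4.75e-3, thickness=0.57e-3)
print(sigma / 1e6)  # ~83 MPa
```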

  14. WESSEL: Code for Numerical Simulation of Two-Dimensional Time-Dependent Width-Averaged Flows with Arbitrary Boundaries.

    DTIC Science & Technology

    1985-08-01

This report should be cited as follows: Thompson, J. F., and Bernard, R. S. 1985. "WESSEL: Code for Numerical Simulation of Two-Dimensional Time... Bodies," Ph.D. Dissertation, Mississippi State University, Mississippi State, Miss. Thompson, J. F. 1983. "A Boundary-Fitted Coordinate Code for General... Vicksburg, Miss. Thompson, J. F., and Bernard, R. S. 1985. "Numerical Modeling of Two-Dimensional Width-Averaged Flows Using Boundary-Fitted Coordinate

  15. A highly vectorised "link-cell" FORTRAN code for the DL_POLY molecular dynamics simulation package

    NASA Astrophysics Data System (ADS)

    Kholmurodov, Kholmirzo; Smith, William; Yasuoka, Kenji; Ebisuzaki, Toshikazu

    2000-03-01

    Highly vectorised FORTRAN subroutines, based on the link-cell algorithm for DL_POLY molecular dynamics simulation package, are developed. For several specific benchmark systems the efficiency of the proposed codes on a Fujitsu VPP700/128E vector computer has been tested. It is shown that in constructing the neighbor list and in calculating atomic forces our link-cell method is significantly faster than the original code.
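
The link-cell idea is to bin particles into cells at least one cutoff radius wide, so each particle only needs to be tested against particles in its own and adjacent cells. A minimal 2-D pure-Python sketch (independent of DL_POLY's vectorised FORTRAN implementation):

```python
def neighbour_pairs(positions, box, cutoff):
    """Link-cell pair search on a periodic 2-D box: bin particles into cells
    at least `cutoff` wide, then test only particles in adjacent cells."""
    n = max(1, int(box / cutoff))   # cells per side; each cell is >= cutoff wide
    size = box / n
    cells = {}
    for i, (x, y) in enumerate(positions):
        cells.setdefault((int(x / size) % n, int(y / size) % n), []).append(i)
    pairs = set()
    for (cx, cy), members in cells.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for j in cells.get(((cx + dx) % n, (cy + dy) % n), []):
                    for i in members:
                        if i < j:
                            # minimum-image separation on the periodic box
                            ddx = (positions[i][0] - positions[j][0] + box / 2) % box - box / 2
                            ddy = (positions[i][1] - positions[j][1] + box / 2) % box - box / 2
                            if ddx * ddx + ddy * ddy <= cutoff * cutoff:
                                pairs.add((i, j))
    return pairs

pairs = neighbour_pairs([(0.5, 0.5), (1.4, 0.5), (9.8, 0.5), (5.0, 5.0)],
                        box=10.0, cutoff=1.5)
print(sorted(pairs))  # [(0, 1), (0, 2)] -- pair (0, 2) is found across the boundary
```

The scan cost is linear in particle number rather than quadratic, which is what makes the method attractive for large molecular dynamics systems.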

  16. STEMsalabim: A high-performance computing cluster friendly code for scanning transmission electron microscopy image simulations of thin specimens.

    PubMed

    Oelerich, Jan Oliver; Duschek, Lennart; Belz, Jürgen; Beyer, Andreas; Baranovskii, Sergei D; Volz, Kerstin

    2017-03-10

    We present a new multislice code for the computer simulation of scanning transmission electron microscope (STEM) images based on the frozen lattice approximation. Unlike existing software packages, the code is optimized to perform well on highly parallelized computing clusters, combining distributed and shared memory architectures. This enables efficient calculation of large lateral scanning areas of the specimen within the frozen lattice approximation and fine-grained sweeps of parameter space.
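
A multislice calculation alternates transmission through a thin slice of the specimen potential with Fresnel propagation to the next slice. A minimal 1-D sketch using a naive DFT (illustrative parameters; STEMsalabim itself is a parallel 3-D implementation):

```python
import cmath
import math

def dft(a):
    n = len(a)
    return [sum(a[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))
            for k in range(n)]

def idft(a):
    n = len(a)
    return [sum(a[k] * cmath.exp(2j * math.pi * k * j / n) for k in range(n)) / n
            for j in range(n)]

def multislice_step(psi, v_slice, sigma, wavelength, dz, dx):
    """One multislice step: transmit through the slice potential, then
    Fresnel-propagate to the next slice in Fourier space."""
    n = len(psi)
    transmitted = [p * cmath.exp(1j * sigma * v) for p, v in zip(psi, v_slice)]
    spectrum = dft(transmitted)
    out = []
    for k in range(n):
        freq = (k if k <= n // 2 else k - n) / (n * dx)  # spatial frequency
        out.append(spectrum[k] * cmath.exp(-1j * math.pi * wavelength * freq ** 2 * dz))
    return idft(out)

psi = [1.0 + 0j] * 8                           # plane-wave probe (illustrative)
v = [0.0, 0.1, 0.5, 0.1, 0.0, 0.0, 0.0, 0.0]   # toy projected slice potential
psi = multislice_step(psi, v, sigma=1.0, wavelength=0.02, dz=2.0, dx=0.5)
print(sum(abs(p) ** 2 for p in psi))           # total intensity is conserved
```

Both the transmission function and the propagator have unit modulus, so each step is unitary and the total intensity is preserved.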

  17. A Comparison Between GATE and MCNPX Monte Carlo Codes in Simulation of Medical Linear Accelerator

    PubMed Central

    Sadoughi, Hamid-Reza; Nasseri, Shahrokh; Momennezhad, Mahdi; Sadeghi, Hamid-Reza; Bahreyni-Toosi, Mohammad-Hossein

    2014-01-01

Radiotherapy dose calculations can be evaluated by Monte Carlo (MC) simulations with acceptable accuracy for dose prediction in complicated treatment plans. In this work, the Standard, Livermore and Penelope electromagnetic (EM) physics packages of the GEANT4 Application for Tomographic Emission (GATE) 6.1 were compared with Monte Carlo N-Particle eXtended (MCNPX) 2.6 in the simulation of a 6 MV photon linac. To do this, the same geometry was used for the two codes. The reference values of percentage depth dose (PDD) and beam profiles were obtained using a 6 MV Elekta Compact linear accelerator, a Scanditronix water phantom and diode detectors. No significant deviations were found in the PDD, dose profile, energy spectrum, radial mean energy and photon radial distribution calculated by the Standard and Livermore EM models and MCNPX. Nevertheless, the Penelope model showed an extreme difference. Statistical uncertainty in all the simulations was <1%, namely 0.51%, 0.27%, 0.27% and 0.29% for the PDDs of a 10 cm × 10 cm field size for MCNPX and the Standard, Livermore and Penelope models, respectively. Differences between spectra in various regions, in radial mean energy and in photon radial distribution were due to different cross-section and stopping-power data, and to the physics processes not being simulated identically in MCNPX and the three EM models. For example, in the Standard model the photoelectron direction is sampled from the Gavrila-Sauter distribution, whereas in the photoelectric process of the Livermore and Penelope models the photoelectron moves in the same direction as the incident photon. Using the same primary electron beam, the Standard and Livermore EM models of GATE and MCNPX showed similar output, but re-tuning of the primary electron beam is needed for the Penelope model. PMID:24696804
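
The percentage depth dose compared above is simply the depth-dose curve normalised to its maximum, PDD(d) = 100·D(d)/D_max. A sketch with an illustrative 6 MV-like build-up curve (the numbers are made up, not the paper's measurements):

```python
def percentage_depth_dose(doses):
    """Normalise a depth-dose curve to its maximum: PDD(d) = 100 * D(d) / D_max."""
    d_max = max(doses)
    return [100.0 * d / d_max for d in doses]

# Illustrative curve: build-up to a maximum near 1.5 cm, then fall-off
depths = [0.0, 0.5, 1.5, 5.0, 10.0]      # depth in water, cm
doses = [0.50, 0.85, 1.00, 0.86, 0.67]   # relative detector readings
pdd = percentage_depth_dose(doses)
print(pdd)  # peaks at 100 at the depth of maximum dose
```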

  18. Coronal extension of the MURaM radiative MHD code: From quiet sun to flare simulations

    NASA Astrophysics Data System (ADS)

    Rempel, Matthias D.; Cheung, Mark

    2016-05-01

We present a new version of the MURaM radiative MHD code, which includes a treatment of the solar corona in terms of MHD, optically thin radiative losses and field-aligned heat conduction. In order to relax the severe time-step constraints imposed by large Alfven velocities and heat conduction, we use a combination of semi-relativistic MHD with a reduced speed of light ("Boris correction") and a hyperbolic formulation of heat conduction. We apply the numerical scheme to four setups: a mixed-polarity quiet sun, an open-flux region, an arcade solution and an active region. In all cases we find an amount of coronal heating sufficient to maintain a corona with temperatures from 1 MK (quiet sun) to 2 MK (active region, arcade). In all our setups the Poynting flux is self-consistently created by photospheric and sub-photospheric magneto-convection in the lower part of the simulation domain. Varying the maximum allowed Alfven velocity ("reduced speed of light") leads to only minor changes in the coronal structure as long as the limited Alfven velocity remains larger than the speed of sound and about 1.5-3 times larger than the peak advection velocity. We also found that varying the details of the numerical diffusivities that govern the resistive and viscous energy dissipation does not strongly affect the overall coronal heating, but the ratio of resistive to viscous energy dissipation depends strongly on the effective numerical magnetic Prandtl number. We use our active region setup to simulate a flare triggered by the emergence of a twisted flux rope into a pre-existing bipolar active region. Our simulation yields a series of flares, with the strongest one reaching GOES M1 class. The simulation reproduces many observed properties of eruptions, such as flare ribbons, post-flare loops and a sunquake.

  20. The Yambo code: a comprehensive tool to perform ab-initio simulations of equilibrium and out-of-equilibrium properties

    NASA Astrophysics Data System (ADS)

    Marini, Andrea

Density functional theory and many-body perturbation theory methods (such as GW and the Bethe-Salpeter equation) are standard approaches to the equilibrium ground- and excited-state properties of condensed matter systems, surfaces, molecules and several other kinds of materials. At the same time, ultrafast optical spectroscopy is becoming a widely used and powerful tool for observing out-of-equilibrium dynamical processes. In this case the theoretical tools (such as the Baym-Kadanoff equations) are well known but have only recently been merged with the ab-initio approach, and for this reason highly parallel and efficient codes are lacking. Nevertheless, the combination of these two areas of research represents, for the ab-initio community, a challenging perspective, as it requires the development of advanced theoretical, methodological and numerical tools. Yambo is a popular community software package implementing the above methods using plane waves and pseudo-potentials. Yambo is available to the community as open-source software and is oriented to high-performance computing. The Yambo project aims at making the simulation of these equilibrium and out-of-equilibrium complex processes available to a wide community of users. Indeed the code is used, in practice, in many countries, well beyond the European borders. Yambo is a member of the suite of codes of the MAX European Center of Excellence (Materials design at the exascale). It is also used by the user facilities of the European Spectroscopy Facility and of the NFFA European Center (nanoscience foundries & fine analysis). In this talk I will discuss some recent numerical and methodological developments that have been implemented in Yambo toward the exploitation of next-generation HPC supercomputers. In particular, I will present the hybrid MPI+OpenMP parallelization and the specific case of the response function calculation.
I will also discuss the future plans of the Yambo project and its potential use as

  1. Advanced methods in global gyrokinetic full f particle simulation of tokamak transport

    SciTech Connect

    Ogando, F.; Heikkinen, J. A.; Henriksson, S.; Janhunen, S. J.; Kiviniemi, T. P.; Leerink, S.

    2006-11-30

A new full f nonlinear gyrokinetic simulation code, named ELMFIRE, has been developed for simulating transport phenomena in tokamak plasmas. The code is based on a gyrokinetic particle-in-cell algorithm, which can consider electrons and ions jointly or separately, as well as arbitrary impurities. The implicit treatment of the ion polarization drift and the use of full f methods allow for simulations of strongly perturbed plasmas including wide orbit effects, steep gradients and rapid dynamic changes. This article presents in more detail the algorithms incorporated into ELMFIRE, as well as benchmarking comparisons to both neoclassical theory and other codes. ELMFIRE calculates plasma dynamics by following the evolution of a number of sample particles. Because it uses a stochastic algorithm, its results are influenced by statistical noise; the effect of this noise on the relevant quantities is analyzed. Turbulence spectra of the FT-2 plasma have been calculated with ELMFIRE, yielding results consistent with experimental data.

  2. Preface: Recent Advances in Modeling Multiphase Flow and Transportwith the TOUGH Family of Codes

    SciTech Connect

    Liu, Hui-Hai; Illangasekare, Tissa H.

    2007-11-15

    A symposium on research carried out using the TOUGH family of numerical codes was held from May 15 to 17, 2006, at the Lawrence Berkeley National Laboratory. This special issue of the 'Vadose Zone Journal' contains revised and expanded versions of a selected set of papers presented at this symposium (TOUGH Symposium 2006; http://esd.lbl.gov/TOUGHsymposium), all of which focus on multiphase flow, including flow in the vadose zone.

  3. Simulation of IST Turbomachinery Power-Neutral Tests with the ANL Plant Dynamics Code

    SciTech Connect

    Moisseytsev, A.; Sienicki, J. J.

    2016-12-13

The validation of the Plant Dynamics Code (PDC) developed at Argonne National Laboratory (ANL) for the steady-state and transient analysis of supercritical carbon dioxide (sCO2) systems has been continued with new test data from the Naval Nuclear Laboratory (operated by Bechtel Marine Propulsion Corporation) Integrated System Test (IST). Although data from three runs were provided to ANL, only two of the data sets were analyzed and described in this report. The common feature of these tests is the power-neutral operation of the turbine-compressor shaft: no external power was provided through the alternator during the tests. Instead, the shaft speed was allowed to change as dictated by the power balance between the turbine, the compressor, and the power losses in the shaft. The new test data turned out to be important for code validation for several reasons. First, the power-neutral operation of the shaft allows validation of the shaft dynamics equations in asynchronous mode, when the shaft is disconnected from the grid. Second, the shaft speed control with the compressor recirculation (CR) valve not only tests the code control logic itself, but also serves as a good test for validation of both the compressor surge control and the turbine bypass control actions, since the effect of the CR action on the loop conditions is similar for both of these controls. Third, the compressor-inlet temperature variation test allows validation of the transient response of the precooler, a shell-and-tube heat exchanger. The first transient simulation of the compressor-inlet temperature variation Test 64661 showed a much slower precooler response in the calculations than in the test data. Further investigation revealed an error in calculating the heat exchanger tube mass for the PDC dynamic equations, which resulted in a slower change in the tube wall temperature than measured. The transient calculations for both tests were done in two steps. 
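
The tube-mass error matters because the wall temperature in a lumped model follows m·c·dT/dt = h·A·(T_fluid − T_wall), whose time constant τ = mc/(hA) scales linearly with tube mass, so an overestimated mass directly slows the simulated response. A sketch with illustrative numbers (assumed values, not IST data):

```python
import math

def wall_temperature(t, t_wall0, t_fluid, m, c, h, a):
    """Lumped-capacitance response: T(t) = T_f + (T_0 - T_f) * exp(-t / tau),
    with time constant tau = m * c / (h * a)."""
    tau = m * c / (h * a)
    return t_fluid + (t_wall0 - t_fluid) * math.exp(-t / tau)

# Illustrative values: doubling the tube mass m doubles tau and slows the response
m, c, h, a = 50.0, 500.0, 2000.0, 5.0   # kg, J/kg/K, W/m^2/K, m^2 (assumed)
tau = m * c / (h * a)
print(tau)                                        # 2.5 s time constant
print(wall_temperature(tau, 400.0, 300.0, m, c, h, a))  # after one tau: ~336.8 K
```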
The

  4. Monte Carlo simulation and scatter correction of the GE Advance PET scanner with SimSET and Geant4

    NASA Astrophysics Data System (ADS)

    Barret, Olivier; Carpenter, T. Adrian; Clark, John C.; Ansorge, Richard E.; Fryer, Tim D.

    2005-10-01

    For Monte Carlo simulations to be used as an alternative solution to perform scatter correction, accurate modelling of the scanner as well as speed is paramount. General-purpose Monte Carlo packages (Geant4, EGS, MCNP) allow a detailed description of the scanner but are not efficient at simulating voxel-based geometries (patient images). On the other hand, dedicated codes (SimSET, PETSIM) will perform well for voxel-based objects but will be poor in their capacity of simulating complex geometries such as a PET scanner. The approach adopted in this work was to couple a dedicated code (SimSET) with a general-purpose package (Geant4) to have the efficiency of the former and the capabilities of the latter. The combined SimSET+Geant4 code (SimG4) was assessed on the GE Advance PET scanner and compared to the use of SimSET only. A better description of the resolution and sensitivity of the scanner and of the scatter fraction was obtained with SimG4. The accuracy of scatter correction performed with SimG4 and SimSET was also assessed from data acquired with the 20 cm NEMA phantom. SimG4 was found to outperform SimSET and to give slightly better results than the GE scatter correction methods installed on the Advance scanner (curve fitting and scatter modelling for the 300-650 keV and 375-650 keV energy windows, respectively). In the presence of a hot source close to the edge of the field of view (as found in oxygen scans), the GE curve-fitting method was found to fail whereas SimG4 maintained its performance.
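
The scatter fraction assessed above is the fraction of detected coincidences that underwent scatter, SF = S/(S + T). A one-line sketch with illustrative counts:

```python
def scatter_fraction(scattered, trues):
    """Scatter fraction SF = S / (S + T): fraction of detected coincidences
    that were scattered rather than true events."""
    return scattered / (scattered + trues)

sf = scatter_fraction(450.0, 550.0)  # illustrative counts, not phantom data
print(sf)  # 0.45
```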

  6. Modeling and simulation challenges pursued by the Consortium for Advanced Simulation of Light Water Reactors (CASL)

    SciTech Connect

    Turinsky, Paul J.; Kothe, Douglas B.

    2016-05-15

    The Consortium for Advanced Simulation of Light Water Reactors (CASL), the first Energy Innovation Hub of the Department of Energy, was established in 2010 with the goal of providing modeling and simulation (M&S) capabilities that support and accelerate the improvement of nuclear energy's economic competitiveness and the reduction of spent nuclear fuel volume per unit energy, all while assuring nuclear safety. Accomplishing this requires advances in M&S capabilities in radiation transport, thermal-hydraulics, fuel performance, and corrosion chemistry. To focus CASL's R&D, industry challenge problems have been defined, which equate with long-standing issues of the nuclear power industry that M&S can assist in addressing. To date, CASL has developed a multi-physics “core simulator” based upon pin-resolved radiation transport and subchannel (within fuel assembly) thermal-hydraulics, capitalizing on the capabilities of high-performance computing. CASL's fuel performance M&S capability can also be optionally integrated into the core simulator, yielding a coupled multi-physics capability with untapped predictive potential. Material models have been developed to enhance predictive capabilities for fuel clad creep and growth, along with deeper understanding of zirconium alloy clad oxidation and hydrogen pickup. Understanding of corrosion chemistry (e.g., CRUD formation) has evolved at all scales: micro, meso, and macro. CFD R&D has focused on improving closure models for subcooled boiling and bubbly flow, and on the formulation of robust numerical solution algorithms. For multi-physics integration, several iterative acceleration methods have been assessed, illuminating areas where further research is needed. Finally, uncertainty quantification and data assimilation techniques, based upon sampling approaches, have been made more feasible for practicing nuclear engineers via R&D on dimensional reduction and biased sampling. Industry adoption of CASL's evolving M

  7. Particle-In-Cell (PIC) code simulation results and comparison with theory scaling laws for photoelectron-generated radiation

    SciTech Connect

    Dipp, T.M.

    1993-12-01

    The generation of radiation via photoelectrons induced off of a conducting surface was explored using Particle-In-Cell (PIC) code computer simulations. Using the MAGIC PIC code, the simulations were performed in one dimension to handle the diverse scale lengths of the particles and fields in the problem. The simulations involved monoenergetic, nonrelativistic photoelectrons emitted normal to the illuminated conducting surface. A sinusoidal, 100% modulated, 6.3263 ns pulse train, as well as unmodulated emission, were used to explore the behavior of the particles, fields, and generated radiation. A special postprocessor was written to convert the PIC code simulated electron sheath into far-field radiation parameters by means of rigorous retarded time calculations. The results of the small-spot PIC simulations were used to generate various graphs showing resonance and nonresonance radiation quantities such as radiated lobe patterns, frequency, and power. A database of PIC simulation results was created and, using a nonlinear curve-fitting program, compared with theoretical scaling laws. Overall, the small-spot behavior predicted by the theoretical scaling laws was generally observed in the PIC simulation data, providing confidence in both the theoretical scaling laws and the PIC simulations.
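The far-field quantities above follow from retarded-time fields of the accelerated photoelectrons. As a minimal related illustration (not the MAGIC postprocessor, and with an assumed driving field strength), the nonrelativistic Larmor formula gives the total power radiated by a single accelerated electron:

```python
import math

# Sketch: total radiated power of a nonrelativistic point charge,
# P = q^2 a^2 / (6 * pi * eps0 * c^3)  (Larmor formula).
Q_E = 1.602176634e-19      # C, elementary charge
M_E = 9.1093837015e-31     # kg, electron mass
EPS0 = 8.8541878128e-12    # F/m
C = 2.99792458e8           # m/s

def larmor_power(acceleration):
    """Radiated power (W) of a point charge with given acceleration (m/s^2)."""
    return Q_E**2 * acceleration**2 / (6.0 * math.pi * EPS0 * C**3)

# An electron in an assumed 1 MV/m field: a = qE/m
a = Q_E * 1e6 / M_E
p = larmor_power(a)   # a very small power per electron; coherent emission
                      # from many electrons scales this up dramatically
```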

  8. Validation and verification of RELAP5 for Advanced Neutron Source accident analysis: Part I, comparisons to ANSDM and PRSDYN codes

    SciTech Connect

    Chen, N.C.J.; Ibn-Khayat, M.; March-Leuba, J.A.; Wendel, M.W.

    1993-12-01

    As part of verification and validation, the Advanced Neutron Source reactor RELAP5 system model was benchmarked against the Advanced Neutron Source dynamic model (ANSDM) and the PRSDYN model. RELAP5 is a one-dimensional, two-phase transient code developed by the Idaho National Engineering Laboratory for reactor safety analysis. Both the ANSDM and PRSDYN models use a simplified single-phase equation set to predict transient thermal-hydraulic performance. Brief descriptions of each of the codes, models, and model limitations are included. Even though comparisons were limited to single-phase conditions, a broad spectrum of accidents was benchmarked: a small loss-of-coolant accident (LOCA), a large LOCA, a station blackout, and a reactivity insertion accident. The overall conclusion is that the three models yield similar results if the input parameters are the same. However, ANSDM does not capture pressure wave propagation through the coolant system. This difference is significant in very rapid pipe break events. Recommendations are provided for further model improvements.

  9. Genome Reshuffling for Advanced Intercross Permutation (GRAIP): Simulation and permutation for advanced intercross population analysis

    SciTech Connect

    Pierce, Jeremy; Broman, Karl; Chesler, Elissa J; Zhou, Guomin; Airey, David; Birmingham, Amanda; Williams, Robert

    2008-01-01

    Background: Advanced intercross lines (AIL) are segregating populations created using a multigeneration breeding protocol for fine mapping complex traits in mice and other organisms. Applying quantitative trait locus (QTL) mapping methods for intercross and backcross populations, often followed by naïve permutation of individuals and phenotypes, does not account for the effect of family structure in AIL populations in which final generations have been expanded, and leads to inappropriately low significance thresholds. The critical problem with a naïve mapping approach in such AIL populations is that the individual is not an exchangeable unit given the family structure. Methodology/Principal Findings: The effect of family structure has immediate implications for optimal AIL creation (many crosses, few animals per cross, and population expansion before the final generation); we discuss these and the utility of AIL populations for QTL fine mapping. We also describe Genome Reshuffling for Advanced Intercross Permutation (GRAIP), a method for analyzing AIL data that accounts for family structure. GRAIP permutes a more interchangeable unit in the final generation crosses - the parental genome - and simulates regeneration of a permuted AIL population based on exchanged parental identities. GRAIP determines appropriate genome-wide significance thresholds and locus-specific P-values for AILs and other populations with similar family structures. We contrast GRAIP with naïve permutation using a large, densely genotyped mouse AIL population (1333 individuals from 32 crosses). A naïve permutation using coat color as a model phenotype demonstrates high false-positive locus identification and uncertain significance levels in our AIL population, which are corrected by use of GRAIP. We also show that GRAIP detects an established hippocampus weight locus and a new locus, Hipp9a. 
Conclusions and Significance: GRAIP determines appropriate genome-wide significance thresholds
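The permutation logic behind such genome-wide thresholds can be sketched in miniature: recompute the maximum association statistic over many permutations and take its 95th percentile as the threshold. The toy below shuffles phenotypes naively over invented data for brevity; the actual GRAIP method instead permutes parental genomes and re-simulates the pedigree:

```python
import random

# Toy permutation-threshold sketch (hypothetical data, naive shuffling).
random.seed(0)

def max_statistic(phenotypes, genotypes):
    """Largest squared mean-difference across loci (toy association score)."""
    best = 0.0
    for locus in genotypes:
        g0 = [p for p, g in zip(phenotypes, locus) if g == 0]
        g1 = [p for p, g in zip(phenotypes, locus) if g == 1]
        if g0 and g1:
            diff = sum(g1) / len(g1) - sum(g0) / len(g0)
            best = max(best, diff * diff)
    return best

n, n_loci, n_perm = 100, 50, 200
genotypes = [[random.randint(0, 1) for _ in range(n)] for _ in range(n_loci)]
phenotypes = [random.gauss(0, 1) for _ in range(n)]

null_max = []
for _ in range(n_perm):
    perm = phenotypes[:]
    random.shuffle(perm)          # the exchangeable-unit shuffle (naive here)
    null_max.append(max_statistic(perm, genotypes))

threshold = sorted(null_max)[int(0.95 * n_perm)]  # genome-wide 5% threshold
```

Any observed statistic exceeding `threshold` would be declared genome-wide significant at the 5% level under this null.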

  10. Preliminary investigations on 3D PIC simulation of DPHC structure using NEPTUNE3D code

    NASA Astrophysics Data System (ADS)

    Zhao, Hailong; Dong, Ye; Zhou, Haijing; Zou, Wenkang; Wang, Qiang

    2016-10-01

    A cubic region (34 cm × 34 cm × 18 cm) including the double post-hole convolute (DPHC) structure was chosen to perform a series of fully 3D PIC simulations using the NEPTUNE3D code; massive data sets (~200 GB) could be acquired and solved in less than 5 hours. Cold-chamber tests were performed in which only cathode electron emission was considered, without temperature rise or ion emission; current loss efficiency was estimated by comparing output magnetic field profiles with and without electron emission. PIC simulation results showed three stages of the current transfer process with electron emission in the DPHC structure; the maximum (~20%) current loss was 437 kA at 15 ns, while only 0.46%-0.48% was lost when the driving current reached its peak. The DPHC structure proved to serve valuable functions during the energy transfer process in the PTS facility, and NEPTUNE3D provided the tools to explore this sophisticated physics. Project supported by the National Natural Science Foundation of China, Grants No. 11571293 and 11505172.

  11. An Approach to Assess Delamination Propagation Simulation Capabilities in Commercial Finite Element Codes

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2008-01-01

    An approach for assessing the delamination propagation simulation capabilities in commercial finite element codes is presented and demonstrated. For this investigation, the Double Cantilever Beam (DCB) specimen and the Single Leg Bending (SLB) specimen were chosen for full three-dimensional finite element simulations. First, benchmark results were created for both specimens. Second, starting from an initially straight front, the delamination was allowed to propagate. The load-displacement relationship and the total strain energy obtained from the propagation analysis were compared with the benchmark results, and good agreement could be achieved by selecting the appropriate input parameters. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Qualitatively, the delamination front computed for the DCB specimen did not take the shape of a curved front as expected. However, the analysis of the SLB specimen yielded a curved front, as was expected from the distribution of the energy release rate and the failure index across the width of the specimen. Overall, the results are encouraging, but further assessment on a structural level is required.

  12. SPILADY: A parallel CPU and GPU code for spin-lattice magnetic molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Ma, Pui-Wai; Dudarev, S. L.; Woo, C. H.

    2016-10-01

    Spin-lattice dynamics generalizes molecular dynamics to magnetic materials, where dynamic variables describing an evolving atomic system include not only coordinates and velocities of atoms but also directions and magnitudes of atomic magnetic moments (spins). Spin-lattice dynamics simulates the collective time evolution of spins and atoms, taking into account the effect of non-collinear magnetism on interatomic forces. Applications of the method include atomistic models for defects, dislocations and surfaces in magnetic materials, thermally activated diffusion of defects, magnetic phase transitions, and various magnetic and lattice relaxation phenomena. Spin-lattice dynamics retains all the capabilities of molecular dynamics, adding to them the treatment of non-collinear magnetic degrees of freedom. The spin-lattice dynamics time integration algorithm uses symplectic Suzuki-Trotter decomposition of atomic coordinate, velocity and spin evolution operators, and delivers highly accurate numerical solutions of dynamic evolution equations over extended intervals of time. The code is parallelized in coordinate and spin spaces, and is written in OpenMP C/C++ for CPU and in CUDA C/C++ for Nvidia GPU implementations. Temperatures of atoms and spins are controlled by Langevin thermostats. Conduction electrons are treated by coupling the discrete spin-lattice dynamics equations for atoms and spins to the heat transfer equation for the electrons. Worked examples include simulations of thermalization of ferromagnetic bcc iron, the dynamics of laser pulse demagnetization, and collision cascades.
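The symplectic Suzuki-Trotter scheme mentioned above is built from exact spin precessions about a momentarily fixed effective field. A minimal sketch of that building block (not SPILADY itself; the field and time step are assumed) is:

```python
import math

# Exact rotation of a spin about a fixed effective field H for time dt,
# dS/dt = S x H, implemented with the Rodrigues rotation formula.
# Because each sub-step is an exact rotation, |S| is conserved.

def rotate_spin(s, h, dt):
    """Rotate spin vector s about axis h by angle |h|*dt."""
    hx, hy, hz = h
    norm = math.sqrt(hx * hx + hy * hy + hz * hz)
    if norm == 0.0:
        return s
    ux, uy, uz = hx / norm, hy / norm, hz / norm
    theta = norm * dt
    c, si = math.cos(theta), math.sin(theta)
    # s_rot = s*cos + (u x s)*sin + u*(u.s)*(1 - cos)
    cross = (uy * s[2] - uz * s[1], uz * s[0] - ux * s[2], ux * s[1] - uy * s[0])
    dot = ux * s[0] + uy * s[1] + uz * s[2]
    u = (ux, uy, uz)
    return tuple(s[i] * c + cross[i] * si + u[i] * dot * (1 - c)
                 for i in range(3))

s = (1.0, 0.0, 0.0)
for _ in range(1000):
    s = rotate_spin(s, h=(0.0, 0.0, 1.0), dt=0.01)
length = math.sqrt(sum(x * x for x in s))   # stays 1 to machine precision
```

The norm conservation of each sub-step is what makes the decomposition attractive for the extended simulation intervals the abstract mentions.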

  13. Advanced Simulation in Undergraduate Pilot Training: Systems Integration. Final Report (February 1972-March 1975).

    ERIC Educational Resources Information Center

    Larson, D. F.; Terry, C.

    The Advanced Simulator for Undergraduate Pilot Training (ASUPT) was designed to investigate the role of simulation in the future Undergraduate Pilot Training (UPT) program. The problem addressed in this report was one of integrating two unlike components into one synchronized system. These two components were the Basic T-37 Simulators and their…

  14. SU-E-T-254: Optimization of GATE and PHITS Monte Carlo Code Parameters for Uniform Scanning Proton Beam Based On Simulation with FLUKA General-Purpose Code

    SciTech Connect

    Kurosu, K; Takashina, M; Koizumi, M; Das, I; Moskvin, V

    2014-06-01

    Purpose: Monte Carlo codes are becoming important tools for proton beam dosimetry. However, the relationships between the customizing parameters and the percentage depth dose (PDD) have not been reported for the GATE and PHITS codes; they are studied here for PDD and proton range, compared to the FLUKA code and to experimental data. Methods: The beam delivery system of the Indiana University Health Proton Therapy Center was modeled for the uniform scanning beam in FLUKA and transferred identically into GATE and PHITS. This computational model was built from the blueprint and validated with the commissioning data. The three parameters evaluated are the maximum step size, the cutoff energy, and the physics and transport models. The dependence of the PDDs on the customizing parameters was compared with the published results of previous studies. Results: The optimal parameters for the simulation of the whole beam delivery system were defined by referring to the calculation results obtained with each parameter. Although the PDDs from FLUKA and the experimental data show good agreement, those of GATE and PHITS obtained with our optimal parameters show a minor discrepancy. The measured proton range R90 was 269.37 mm, compared to calculated ranges of 269.63 mm, 268.96 mm, and 270.85 mm with FLUKA, GATE, and PHITS, respectively. Conclusion: We evaluated the dependence of the PDD results obtained with the GATE and PHITS general-purpose Monte Carlo codes on the customizing parameters by using the whole computational model of the treatment nozzle. The optimal parameters for the simulation were then defined by referring to the calculation results. The physics models, particle transport mechanics, and the different geometry-based descriptions need accurate customization in the three simulation codes to agree with experimental data for artifact-free Monte Carlo simulation. This study was supported by Grants-in Aid for Cancer Research (H22-3rd Term Cancer Control-General-043) from the Ministry of Health
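The R90 ranges quoted above can be extracted from a sampled PDD curve by linear interpolation distal to the peak. The sketch below uses hypothetical dose values, not the commissioning data:

```python
# Extract the distal R90 range (depth where dose falls to 90% of maximum,
# beyond the peak) from a sampled PDD curve by linear interpolation.

def distal_r90(depths_mm, doses):
    """Return the distal depth (mm) at 90% of the maximum dose."""
    d_max = max(doses)
    i_peak = doses.index(d_max)
    target = 0.9 * d_max
    for i in range(i_peak, len(doses) - 1):
        if doses[i] >= target >= doses[i + 1]:
            frac = (doses[i] - target) / (doses[i] - doses[i + 1])
            return depths_mm[i] + frac * (depths_mm[i + 1] - depths_mm[i])
    return None

# Hypothetical uniform-scanning-like PDD: flat plateau, peak, sharp falloff.
depths = [0, 50, 100, 150, 200, 250, 260, 270, 280]     # mm
doses  = [40, 42, 45, 50, 60, 95, 100, 70, 10]          # arbitrary units
r90 = distal_r90(depths, doses)   # falls between 260 and 270 mm here
```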

  15. ProtoEXIST: advanced prototype CZT coded aperture telescopes for EXIST

    NASA Astrophysics Data System (ADS)

    Allen, Branden; Hong, Jaesub; Grindlay, Josh; Barthelmy, Scott D.; Baker, Robert G.; Gehrels, Neil A.; Garson, Trey; Krawczynski, Henric S.; Cook, Walter R.; Harrison, Fiona A.; Apple, Jeffrey A.; Ramsey, Brian D.

    2010-07-01

    ProtoEXIST1 is a pathfinder for the EXIST-HET, a coded aperture hard X-ray telescope with a 4.5 m2 CZT detector plane and a 90x70 degree field of view, to be flown as the primary instrument on the EXIST mission and intended to monitor the full sky every 3 h in an effort to locate GRBs and other high-energy transients. ProtoEXIST1 consists of a 256 cm2 tiled CZT detector plane containing 4096 pixels, composed of an 8x8 array of individual 1.95 cm x 1.95 cm x 0.5 cm CZT detector modules, each with an 8x8 pixelated anode, configured as a coded aperture telescope with a fully coded 10° x 10° field of view employing passive side shielding and an active CsI anti-coincidence rear shield. It recently completed its maiden flight out of Ft. Sumner, NM on 9 October 2009. During its 6 hour flight, on-board calibration of the detector plane was carried out utilizing a single tagged 198.8 nCi Am-241 source, along with simultaneous measurement of the background spectrum and an observation of Cygnus X-1. Here we recount the events of the flight and report on the detector performance in a near-space environment. We also briefly discuss ProtoEXIST2, the next stage of detector development, which employs the NuSTAR ASIC, enabling finer (32x32) anode pixelation. When completed, ProtoEXIST2 will consist of a 256 cm2 tiled array and be flown simultaneously with the ProtoEXIST1 telescope.
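Coded aperture telescopes like ProtoEXIST1 recover source positions by correlating the detector shadowgram with the mask pattern. A toy one-dimensional, noiseless sketch of that decoding step (hypothetical random mask, not the actual ProtoEXIST mask) is:

```python
import random

# A point source casts a shifted copy of the mask pattern onto the
# detector; cross-correlating the shadowgram with the mask recovers the
# source position as the correlation peak.
random.seed(1)
n = 64
mask = [random.randint(0, 1) for _ in range(n)]   # open/closed elements

true_shift = 17
detector = [mask[(i - true_shift) % n] for i in range(n)]  # shadowgram

def decode(detector, mask):
    """Circular cross-correlation; argmax estimates the source shift."""
    m = len(mask)
    scores = [sum(detector[i] * mask[(i - shift) % m] for i in range(m))
              for shift in range(m)]
    return scores.index(max(scores))

estimated = decode(detector, mask)   # recovers the shift for a noiseless sky
```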

  16. The Monte Carlo code CSSE for the simulation of realistic thermal neutron sensor devices for Humanitarian Demining

    NASA Astrophysics Data System (ADS)

    Palomba, M.; D'Erasmo, G.; Pantaleo, A.

    2003-02-01

    The CSSE code, a GEANT3-based Monte Carlo simulation program, has been developed in the framework of the EXPLODET project (Nucl. Instr. and Meth. A 422 (1999) 918) with the aim of simulating experimental set-ups employed in Thermal Neutron Analysis (TNA) for landmine detection. Such a simulation code is useful for studying the background in the γ-ray spectra obtained with this technique, especially in the region where one expects to find the explosive signature (the γ-ray peak at 10.83 MeV coming from neutron capture by nitrogen). The main features of the CSSE code are introduced and its original innovations emphasized. Among the latter, an algorithm simulating the time correlation between primary particles, in accordance with their time distributions, is presented. Such a correlation is not usually achievable within standard GEANT-based codes, and it allows reproduction of some important phenomena, such as pulse pile-up inside the NaI(Tl) γ-ray detector employed, producing a more realistic detector response simulation. CSSE has been successfully tested by reproducing a real nuclear sensor prototype assembled at the Physics Department of Bari University.
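The pile-up phenomenon that the time-correlation algorithm makes reproducible can be illustrated with a toy model: events arriving closer together than the detector shaping time are recorded as a single summed pulse. The rate and shaping time below are assumed, not CSSE parameters:

```python
import random

# Toy pile-up model: Poisson arrivals, and any pulse arriving within the
# shaping time of the previous one is merged (amplitudes sum).
random.seed(42)
RATE = 1e5          # events per second (assumed)
SHAPING = 5e-6      # seconds; pulses closer than this pile up (assumed)

def simulate(n_events):
    """Return recorded amplitudes, merging pulses within the shaping time."""
    t = 0.0
    recorded, last_t = [], -1.0
    for _ in range(n_events):
        t += random.expovariate(RATE)          # Poisson inter-arrival time
        amp = random.uniform(0.1, 1.0)         # true pulse amplitude
        if recorded and t - last_t < SHAPING:
            recorded[-1] += amp                # pile-up: amplitudes sum
        else:
            recorded.append(amp)
        last_t = t
    return recorded

amps = simulate(10_000)
pileup_fraction = 1.0 - len(amps) / 10_000    # share of events merged away
```

With these assumed numbers the expected merge probability is 1 - exp(-RATE*SHAPING), i.e. roughly 39% of events pile up, distorting the high-energy end of the recorded spectrum.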

  17. GETRAN: A generic, modularly structured computer code for simulation of dynamic behavior of aero- and power generation gas turbine engines

    NASA Astrophysics Data System (ADS)

    Schobeiri, M. T.; Attia, M.; Lippke, C.

    1994-07-01

    The design concept, the theoretical background essential for the development of the modularly structured simulation code GETRAN, and several critical simulation cases are presented in this paper. The code, being developed under contract with NASA Lewis Research Center, is capable of simulating the nonlinear dynamic behavior of single- and multispool core engines, turbofan engines, and power generation gas turbine engines under adverse dynamic operating conditions. The modules implemented in GETRAN correspond to components of existing and new-generation aero- and stationary gas turbine engines with arbitrary configuration and arrangement. For precise simulation of turbine and compressor components, row-by-row diabatic and adiabatic calculation procedures are implemented that account for the specific turbine and compressor cascade, blade geometry, and characteristics. The nonlinear dynamic behavior of the subject engine is calculated by solving a number of systems of partial differential equations that describe the unsteady behavior of each component individually. To identify each differential equation system unambiguously, special attention is paid to the addressing of each component. The code is capable of executing the simulation procedure at four levels, which increase with the degree of complexity of the system and dynamic event. As representative simulations, four different transient cases with single- and multispool thrust and power generation engines were simulated. These transient cases vary from throttling the exit nozzle area, operation with fuel schedule, and rotor speed control, to rotating stall and surge.

  18. Some Specific CASL Requirements for Advanced Multiphase Flow Simulation of Light Water Reactors

    SciTech Connect

    R. A. Berry

    2010-11-01

    Because of the diversity of physical phenomena occurring in boiling, flashing, and bubble collapse, and of the length and time scales of LWR systems, it is imperative that the models have the following features: • Both vapor and liquid phases (and noncondensible phases, if present) must be treated as compressible. • Models must be mathematically and numerically well-posed. • The modeling methodology must be multi-scale. A fundamental derivation of the multiphase governing equation system, which should be used as a basis for advanced multiphase modeling in LWR coolant systems, is given in the Appendix using the ensemble averaging method. The remainder of this work focuses specifically on the compressible, well-posed, and multi-scale requirements of advanced simulation methods for these LWR coolant systems, because these are the most fundamental aspects, without which widespread advancement cannot be claimed. Because of the expense of developing multiple special-purpose codes and the inherent inability to couple information from the multiple, separate length and time scales, efforts within CASL should be focused on the development of multi-scale approaches to solve those multiphase flow problems relevant to LWR design and safety analysis. Efforts should be aimed at developing well-designed, unified physical/mathematical and high-resolution numerical models for compressible, all-speed multiphase flows spanning: (1) well-posed general mixture-level (true multiphase) models for fast transient situations and safety analysis, (2) DNS (Direct Numerical Simulation)-like models to resolve interface-level phenomena like flashing and boiling flows, and critical heat flux determination (necessarily including conjugate heat transfer), and (3) multi-scale methods to resolve both (1) and (2) automatically, depending upon the specified mesh resolution, and to couple different flow models (single-phase, multiphase with several velocities and pressures, multiphase with single
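A standard check behind the "mathematically well-posed" requirement is hyperbolicity: a first-order model u_t + A(u) u_x = 0 has a well-posed initial-value problem only if A has real eigenvalues. The 2x2 matrices below are illustrative stand-ins, not an actual two-fluid model Jacobian:

```python
import cmath

# Hyperbolicity check for a 2x2 quasi-linear matrix A in u_t + A u_x = 0:
# real eigenvalues -> hyperbolic (well-posed); complex -> ill-posed.

def eigenvalues_2x2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] from the characteristic polynomial."""
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4.0 * det)
    return (tr + disc) / 2.0, (tr - disc) / 2.0

def is_hyperbolic(a, b, c, d, tol=1e-12):
    """True if both eigenvalues are real to within tol."""
    return all(abs(lam.imag) < tol for lam in eigenvalues_2x2(a, b, c, d))

ok1 = is_hyperbolic(0.0, 1.0, 2.0, 0.0)   # eigenvalues +/- sqrt(2): real
ok2 = is_hyperbolic(0.0, -1.0, 1.0, 0.0)  # eigenvalues +/- i: complex
```

An ill-posed (non-hyperbolic) model exhibits unbounded growth of short-wavelength perturbations, which is why well-posedness is listed above as a hard requirement rather than a refinement.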

  19. An expanded framework for the advanced computational testing and simulation toolkit

    SciTech Connect

    Marques, Osni A.; Drummond, Leroy A.

    2003-11-09

    The Advanced Computational Testing and Simulation (ACTS) Toolkit is a set of computational tools developed primarily at DOE laboratories, aimed at simplifying the solution of common and important computational problems. The use of the tools reduces development time for new codes, and the tools provide functionality that might not otherwise be available. This document outlines an agenda for expanding the scope of the ACTS Project based on lessons learned from current activities. Highlights of this agenda include peer-reviewed certification of new tools; finding tools to solve problems that are not currently addressed by the Toolkit; working in collaboration with other software initiatives and DOE computer facilities; expanding outreach efforts; promoting interoperability and further development of the tools; and improving the functionality of the ACTS Information Center, among other tasks. The ultimate goal is to make the ACTS tools more widely used and more effective in solving DOE's and the nation's scientific problems through the creation of a reliable software infrastructure for scientific computing.

  20. Development and validation of burnup dependent computational schemes for the analysis of assemblies with advanced lattice codes

    NASA Astrophysics Data System (ADS)

    Ramamoorthy, Karthikeyan

    The main aim of this research is the development and validation of computational schemes for advanced lattice codes. The advanced lattice code that forms the primary part of this research is DRAGON Version 4. The code has unique features such as self-shielding calculation with capabilities to represent distributed and mutual resonance shielding effects, leakage models with space-dependent isotropic or anisotropic streaming effect, availability of the method of characteristics (MOC), and burnup calculation with reaction-detailed energy production. Qualified reactor physics codes are essential for the study of all existing and envisaged designs of nuclear reactors. Any new design would require a thorough analysis of all the safety parameters and burnup-dependent behaviour. Any reactor physics calculation requires the estimation of neutron fluxes in various regions of the problem domain. The calculation goes through several levels before the desired solution is obtained. Each level of the lattice calculation has its own significance, and any compromise at any step will lead to a poor final result. The various levels include: choice of nuclear data library and of the energy group boundaries into which the multigroup library is cast; self-shielding of nuclear data depending on the heterogeneous geometry and composition; tracking of the geometry, keeping errors in volumes and surfaces to an acceptable minimum; generation of region-wise and group-wise collision probabilities or MOC-related information and their subsequent normalization; solution of the transport equation using the previously generated group-wise information to obtain the fluxes and reaction rates in various regions of the lattice; and depletion of fuel and of other materials based on normalization with constant power or constant flux. Of the above-mentioned levels, the present research will mainly focus on two aspects, namely self-shielding and depletion. 
The behaviour of the system is determined by composition of resonant
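The depletion level described above amounts to solving Bateman-type equations for nuclide densities under an assumed constant flux. A minimal two-nuclide sketch (illustrative one-group cross-sections, not DRAGON library data) is:

```python
import math

# Two-nuclide depletion chain A -> B under a constant one-group flux:
# dN_A/dt = -la*N_A,  dN_B/dt = la*N_A - lb*N_B, with la = sigma_A*phi,
# lb = sigma_B*phi. Solved with the analytic Bateman solution.
SIGMA_A = 600e-24    # cm^2, assumed absorption cross-section of nuclide A
SIGMA_B = 100e-24    # cm^2, assumed for daughter B
PHI = 1e14           # n/cm^2/s, assumed constant flux

def bateman(n_a0, t):
    """Return (N_A(t), N_B(t)) for the chain A -> B, B initially absent."""
    la, lb = SIGMA_A * PHI, SIGMA_B * PHI     # effective removal rates, 1/s
    n_a = n_a0 * math.exp(-la * t)
    n_b = n_a0 * la / (lb - la) * (math.exp(-la * t) - math.exp(-lb * t))
    return n_a, n_b

n_a, n_b = bateman(1.0e24, t=3.15e7)   # roughly one year of irradiation
```

A lattice code performs this step region by region with hundreds of nuclides, renormalizing the flux to constant power between burnup steps.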