Sample records for accelerator physics code

  1. Computational Accelerator Physics. Proceedings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bisognano, J.J.; Mondelli, A.A.

    1997-04-01

    The sixty-two papers appearing in this volume were presented at CAP96, the Computational Accelerator Physics Conference held in Williamsburg, Virginia from September 24-27, 1996. Science Applications International Corporation (SAIC) and the Thomas Jefferson National Accelerator Facility (Jefferson Lab) jointly hosted CAP96, with financial support from the U.S. Department of Energy's Office of Energy Research and the Office of Naval Research. Topics ranged from descriptions of specific codes to advanced computing techniques and numerical methods. Update talks were presented on nearly all of the accelerator community's major electromagnetic and particle tracking codes. Thirty of the papers are abstracted for the Energy Science and Technology database. (AIP)

  2. GPU acceleration of the Locally Selfconsistent Multiple Scattering code for first principles calculation of the ground state and statistical physics of materials

    NASA Astrophysics Data System (ADS)

    Eisenbach, Markus; Larkin, Jeff; Lutjens, Justin; Rennich, Steven; Rogers, James H.

    2017-02-01

    The Locally Self-consistent Multiple Scattering (LSMS) code solves the first principles Density Functional theory Kohn-Sham equation for a wide range of materials with a special focus on metals, alloys and metallic nano-structures. It has traditionally exhibited near perfect scalability on massively parallel high performance computer architectures. We present our efforts to exploit GPUs to accelerate the LSMS code to enable first principles calculations of O(100,000) atoms and statistical physics sampling of finite temperature properties. We reimplement the scattering matrix calculation for GPUs with a block matrix inversion algorithm that only uses accelerator memory. Using the Cray XK7 system Titan at the Oak Ridge Leadership Computing Facility we achieve a sustained performance of 14.5 PFlop/s and a speedup of 8.6 compared to the CPU-only code.
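
    The block matrix inversion mentioned in the abstract rests on the Schur-complement identity; the sketch below illustrates that identity in plain NumPy. This is a generic illustration of the algebra, not the LSMS GPU kernel itself, and the matrix sizes are arbitrary test values.

```python
import numpy as np

def block_inverse(M, k):
    """Invert M through the Schur complement S of its leading k-by-k block A:
    M = [[A, B], [C, D]],  S = D - C A^-1 B."""
    A, B = M[:k, :k], M[:k, k:]
    C, D = M[k:, :k], M[k:, k:]
    Ainv = np.linalg.inv(A)
    S = D - C @ Ainv @ B                     # Schur complement of A
    Sinv = np.linalg.inv(S)
    return np.block([
        [Ainv + Ainv @ B @ Sinv @ C @ Ainv, -Ainv @ B @ Sinv],
        [-Sinv @ C @ Ainv,                   Sinv],
    ])

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6)) + 6.0 * np.eye(6)   # well-conditioned test matrix
print(np.allclose(block_inverse(M, 3), np.linalg.inv(M)))  # True
```

    Working block-by-block like this lets each sub-inverse be formed from pieces that fit in device memory, which is the property the abstract highlights.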

  3. GPU acceleration of the Locally Selfconsistent Multiple Scattering code for first principles calculation of the ground state and statistical physics of materials

    DOE PAGES

    Eisenbach, Markus; Larkin, Jeff; Lutjens, Justin; ...

    2016-07-12

    The Locally Self-consistent Multiple Scattering (LSMS) code solves the first principles Density Functional theory Kohn–Sham equation for a wide range of materials with a special focus on metals, alloys and metallic nano-structures. It has traditionally exhibited near perfect scalability on massively parallel high performance computer architectures. In this paper, we present our efforts to exploit GPUs to accelerate the LSMS code to enable first principles calculations of O(100,000) atoms and statistical physics sampling of finite temperature properties. We reimplement the scattering matrix calculation for GPUs with a block matrix inversion algorithm that only uses accelerator memory. Finally, using the Cray XK7 system Titan at the Oak Ridge Leadership Computing Facility we achieve a sustained performance of 14.5 PFlop/s and a speedup of 8.6 compared to the CPU-only code.

  4. Efficient modeling of laser-plasma accelerator staging experiments using INF&RNO

    NASA Astrophysics Data System (ADS)

    Benedetti, C.; Schroeder, C. B.; Geddes, C. G. R.; Esarey, E.; Leemans, W. P.

    2017-03-01

    The computational framework INF&RNO (INtegrated Fluid & paRticle simulatioN cOde) allows for fast and accurate modeling, in 2D cylindrical geometry, of several aspects of laser-plasma accelerator physics. In this paper, we present some of the new features of the code, including the quasistatic Particle-In-Cell (PIC)/fluid modality and the use of different computational grids and time steps for the laser envelope and the plasma wake. These and other features allow for a speedup of several orders of magnitude compared to standard full 3D PIC simulations while still retaining physical fidelity. INF&RNO is used to support the experimental activity at the BELLA Center, and we present an example of the application of the code to the laser-plasma accelerator staging experiment.

  5. Load management strategy for Particle-In-Cell simulations in high energy particle acceleration

    NASA Astrophysics Data System (ADS)

    Beck, A.; Frederiksen, J. T.; Dérouillat, J.

    2016-09-01

    In the wake of the intense effort made for the experimental CILEX project, numerical simulation campaigns have been carried out in order to finalize the design of the facility and to identify optimal laser and plasma parameters. These simulations bring, of course, important insight into the fundamental physics at play. As a by-product, they also characterize the quality of our theoretical and numerical models. In this paper, we compare the results given by different codes and point out algorithmic limitations both in terms of physical accuracy and computational performance. These limitations are illustrated in the context of electron laser wakefield acceleration (LWFA). The main limitation we identify in state-of-the-art Particle-In-Cell (PIC) codes is computational load imbalance. We propose an innovative algorithm to deal with this specific issue, as well as milestones towards a modern, accurate high-performance PIC code for high energy particle acceleration.
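
    The load imbalance described here arises when macro-particles bunch up in a few spatial subdomains. One common remedy is to repartition an ordered set of patches among ranks by their particle counts; the sketch below is a minimal illustration of that idea with a hypothetical helper name, not the algorithm proposed in the paper.

```python
def partition_patches(loads, n_ranks):
    """Split an ordered list of per-patch particle counts into n_ranks
    contiguous chunks, cutting where the running sum crosses each rank's
    fair share of the total load (illustrative helper)."""
    total = sum(loads)
    cuts, running = [0], 0
    for i, w in enumerate(loads):
        running += w
        # cut early if only just enough patches remain for the other ranks
        must_cut = len(loads) - (i + 1) == n_ranks - len(cuts)
        fair_share_met = running >= total * len(cuts) / n_ranks
        room_left = len(loads) - (i + 1) >= n_ranks - len(cuts)
        if must_cut or (fair_share_met and room_left):
            cuts.append(i + 1)
            if len(cuts) == n_ranks:
                break
    cuts.append(len(loads))
    return [loads[cuts[r]:cuts[r + 1]] for r in range(n_ranks)]

# nine patches, three of them heavily loaded; each rank ends up with load 12
chunks = partition_patches([10, 1, 1, 10, 1, 1, 10, 1, 1], 3)
print([sum(c) for c in chunks])  # [12, 12, 12]
```

    Real PIC codes rebalance dynamically as particles migrate, but the cost model (sum of per-patch particle counts per rank) is the same.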

  6. The FLUKA Code: An Overview

    NASA Technical Reports Server (NTRS)

    Ballarini, F.; Battistoni, G.; Campanella, M.; Carboni, M.; Cerutti, F.; Empl, A.; Fasso, A.; Ferrari, A.; Gadioli, E.; Garzelli, M. V., et al.

    2006-01-01

    FLUKA is a multipurpose Monte Carlo code which can transport a variety of particles over a wide energy range in complex geometries. The code is a joint project of INFN and CERN: part of its development is also supported by the University of Houston and NASA. FLUKA is successfully applied in several fields, including, but not limited to, particle physics, cosmic ray physics, dosimetry, radioprotection, hadron therapy, space radiation, accelerator design and neutronics. The code is the standard tool used at CERN for dosimetry, radioprotection and beam-machine interaction studies. Here we give a glimpse into the code physics models with particular emphasis on the hadronic and nuclear sector.

  7. Dissemination and support of ARGUS for accelerator applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The ARGUS code is a three-dimensional code system for simulating interactions between charged particles, electric and magnetic fields, and complex structures. It is a system of modules that share common utilities for grid and structure input, data handling, memory management, diagnostics, and other specialized functions. The code includes the fields due to the space charge and current density of the particles to achieve a self-consistent treatment of the particle dynamics. The physics modules in ARGUS include three-dimensional field solvers for electrostatics and electromagnetics, a three-dimensional electromagnetic frequency-domain module, a full particle-in-cell (PIC) simulation module, and a steady-state PIC model. These are described in the Appendix to this report. The project's primary mission is to develop the capabilities of ARGUS in accelerator modeling for release to the accelerator design community. Five major activities are being pursued in parallel during the first year of the project: improving the code and adding new modules that provide capabilities needed for accelerator design; producing a User's Guide that documents the use of the code for all users; releasing the code and the User's Guide to accelerator laboratories for their own use, and obtaining feedback from them; building an interactive user interface for setting up ARGUS calculations; and exploring the use of ARGUS on high-power workstation platforms.

  8. Particle-in-cell/accelerator code for space-charge dominated beam simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2012-05-08

    Warp is a multidimensional discrete-particle beam simulation program designed to be applicable where the beam space-charge is non-negligible or dominant. It is being developed in a collaboration among LLNL, LBNL and the University of Maryland. It was originally designed and optimized for heavy ion fusion accelerator physics studies, but has received use in a broader range of applications, including, for example, laser wakefield accelerators, e-cloud studies in high energy accelerators, particle traps and other areas. At present it incorporates 3-D, axisymmetric (r,z), planar (x-z) and transverse slice (x,y) descriptions, with both electrostatic and electromagnetic fields, and a beam envelope model. The code is built atop the Python interpreter language.

  9. Cloud-based design of high average power traveling wave linacs

    NASA Astrophysics Data System (ADS)

    Kutsaev, S. V.; Eidelman, Y.; Bruhwiler, D. L.; Moeller, P.; Nagler, R.; Barbe Welzel, J.

    2017-12-01

    The design of industrial high average power traveling wave linacs must accurately consider some specific effects. For example, acceleration of high current beam reduces power flow in the accelerating waveguide. Space charge may influence the stability of longitudinal or transverse beam dynamics. Accurate treatment of beam loading is central to the design of high-power TW accelerators, and it is especially difficult to model in the meter-scale region where the electrons are nonrelativistic. Currently, there are two types of available codes: tracking codes (e.g. PARMELA or ASTRA) that cannot solve self-consistent problems, and particle-in-cell codes (e.g. Magic 3D or CST Particle Studio) that can model the physics correctly but are very time-consuming and resource-demanding. Hellweg is a special tool for quick and accurate electron dynamics simulation in traveling wave accelerating structures. The underlying theory of this software is based on the differential equations of motion. The effects considered in this code include beam loading, space charge forces, and external magnetic fields. We present the current capabilities of the code, provide benchmarking results, and discuss future plans. We also describe the browser-based GUI for executing Hellweg in the cloud.

  10. Chromaticity calculations and code comparisons for x-ray lithography source XLS and SXLS rings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parsa, Z.

    1988-06-16

    This note presents the chromaticity calculations and code comparison results for the XLS (x-ray lithography source; Chasman-Green, XUV COSY lattice) and SXLS (2-magnet, 4 T) lattices, obtained with the standard beam optics codes, including the programs SYNCH88.5, MAD6, PATRICIA88.4, PATPET88.2, DIMAD, BETA, and MARYLIE. This analysis is a part of our ongoing accelerator physics code studies. 4 figs., 10 tabs.
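
    Chromaticity here is the slope of tune versus relative momentum offset, xi = dQ/d(delta) with delta = dp/p. A minimal sketch of extracting the linear chromaticity from tune data by a least-squares fit, using made-up numbers:

```python
def linear_chromaticity(deltas, tunes):
    """Least-squares slope xi of tune versus momentum offset delta = dp/p,
    i.e. the linear chromaticity in Q(delta) ~ Q0 + xi * delta."""
    n = len(deltas)
    md, mq = sum(deltas) / n, sum(tunes) / n
    return (sum((d - md) * (q - mq) for d, q in zip(deltas, tunes))
            / sum((d - md) ** 2 for d in deltas))

# synthetic data with made-up values Q0 = 8.42 and xi = -20
deltas = [-1e-3, 0.0, 1e-3]
tunes = [8.42 - 20.0 * d for d in deltas]
print(round(linear_chromaticity(deltas, tunes), 6))  # -20.0
```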

  11. Dissemination and support of ARGUS for accelerator applications. Technical progress report, April 24, 1991--January 20, 1992

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The ARGUS code is a three-dimensional code system for simulating interactions between charged particles, electric and magnetic fields, and complex structures. It is a system of modules that share common utilities for grid and structure input, data handling, memory management, diagnostics, and other specialized functions. The code includes the fields due to the space charge and current density of the particles to achieve a self-consistent treatment of the particle dynamics. The physics modules in ARGUS include three-dimensional field solvers for electrostatics and electromagnetics, a three-dimensional electromagnetic frequency-domain module, a full particle-in-cell (PIC) simulation module, and a steady-state PIC model. These are described in the Appendix to this report. The project's primary mission is to develop the capabilities of ARGUS in accelerator modeling for release to the accelerator design community. Five major activities are being pursued in parallel during the first year of the project: improving the code and adding new modules that provide capabilities needed for accelerator design; producing a User's Guide that documents the use of the code for all users; releasing the code and the User's Guide to accelerator laboratories for their own use, and obtaining feedback from them; building an interactive user interface for setting up ARGUS calculations; and exploring the use of ARGUS on high-power workstation platforms.

  12. Using the FLUKA Monte Carlo Code to Simulate the Interactions of Ionizing Radiation with Matter to Assist and Aid Our Understanding of Ground Based Accelerator Testing, Space Hardware Design, and Secondary Space Radiation Environments

    NASA Technical Reports Server (NTRS)

    Reddell, Brandon

    2015-01-01

    Designing hardware to operate in the space radiation environment is a very difficult and costly activity. Ground-based particle accelerators can be used to test for exposure to the radiation environment, one species at a time; however, the actual space environment cannot be duplicated because of the range of energies and the isotropic nature of space radiation. The FLUKA Monte Carlo code is an integrated physics package based at CERN that has been under development for the last 40+ years and includes the most up-to-date fundamental physics theory and particle physics data. This work presents an overview of FLUKA and how it has been used in conjunction with ground-based radiation testing for NASA to improve our understanding of secondary particle environments resulting from the interaction of space radiation with matter.

  13. Deploying electromagnetic particle-in-cell (EM-PIC) codes on Xeon Phi accelerator boards

    NASA Astrophysics Data System (ADS)

    Fonseca, Ricardo

    2014-10-01

    The complexity of the phenomena involved in several relevant plasma physics scenarios, where highly nonlinear and kinetic processes dominate, makes purely theoretical descriptions impossible. Further understanding of these scenarios requires detailed numerical modeling, but fully relativistic particle-in-cell codes such as OSIRIS are computationally intensive. The quest towards Exaflop computer systems has led to the development of HPC systems based on add-on accelerator cards, such as GPGPUs and, more recently, the Xeon Phi accelerators that power the current number 1 system in the world. These cards, also referred to as the Intel Many Integrated Core Architecture (MIC), offer peak theoretical performances of >1 TFlop/s for general purpose calculations on a single board, and are receiving significant attention as an attractive alternative to CPUs for plasma modeling. In this work we report on our efforts towards the deployment of an EM-PIC code on a Xeon Phi architecture system. We will focus on the parallelization and vectorization strategies followed, and present a detailed evaluation of code performance in comparison with the CPU code.

  14. Status of MAPA (Modular Accelerator Physics Analysis) and the Tech-X Object-Oriented Accelerator Library

    NASA Astrophysics Data System (ADS)

    Cary, J. R.; Shasharina, S.; Bruhwiler, D. L.

    1998-04-01

    The MAPA code is a fully interactive accelerator modeling and design tool consisting of a GUI and two object-oriented C++ libraries: a general library suitable for treatment of any dynamical system, and an accelerator library including many element types plus an accelerator class. The accelerator library inherits directly from the system library, which uses hash tables to store any relevant parameters or strings. The GUI can access these hash tables in a general way, allowing the user to invoke a window displaying all relevant parameters for a particular element type or for the accelerator class, with the option to change those parameters. The system library can advance an arbitrary number of dynamical variables through an arbitrary mapping. The accelerator class inherits this capability and overloads the relevant functions to advance the phase space variables of a charged particle through a string of elements. Among other things, the GUI makes phase space plots and finds fixed points of the map. We discuss the object hierarchy of the two libraries and use of the code.
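
    The hash-table parameter storage described above can be pictured with a toy Python analogue. MAPA itself is C++; the class and method names below are invented for this sketch only.

```python
class Element:
    """Toy analogue of a hash-table-backed element parameter store:
    every element type keeps its parameters in a generic table that a
    GUI can enumerate and edit without knowing the element's details."""
    def __init__(self, kind, **params):
        self.kind = kind
        self.params = dict(params)      # the per-element "hash table"

    def set(self, name, value):
        self.params[name] = value       # a GUI can edit any entry generically

    def get(self, name):
        return self.params[name]

    def names(self):
        return sorted(self.params)      # lets a GUI enumerate parameters

quad = Element("quadrupole", length=0.5, gradient=12.0)
quad.set("gradient", 11.5)              # e.g. edited from a parameter window
print(quad.get("gradient"))  # 11.5
```

    Storing parameters generically is what lets one window-display routine serve every element type, as the abstract describes.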

  15. Community Petascale Project for Accelerator Science and Simulation: Advancing Computational Science for Future Accelerators and Accelerator Technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spentzouris, Panagiotis (Fermilab); Cary, John

    The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modeling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single physics process modeling (covered under SciDAC-1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors.

  16. Code comparison for accelerator design and analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parsa, Z.

    1988-01-01

    We present a comparison between results obtained from standard accelerator physics codes used for the design and analysis of synchrotrons and storage rings, with programs SYNCH, MAD, HARMON, PATRICIA, PATPET, BETA, DIMAD, MARYLIE and RACE-TRACK. In our analysis we have considered five lattices of various sizes, with large and small angles, including the AGS Booster (10 degree bends), RHIC (2.24 degrees), SXLS, XLS (XUV ring with 45 degree bends) and X-RAY rings. The differences in the integration methods used and the treatment of the fringe fields in these codes could lead to different results. The inclusion of nonlinear (e.g., dipole) terms may be necessary in these calculations, especially for a small ring. 12 refs., 6 figs., 10 tabs.

  17. LEGO: A modular accelerator design code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, Y.; Donald, M.; Irwin, J.

    1997-08-01

    An object-oriented accelerator design code has been designed and implemented in a simple and modular fashion. It contains all major features of its predecessors, TRACY and DESPOT. All physics of single-particle dynamics is implemented based on the Hamiltonian in the local frame of the component. Components can be moved arbitrarily in three-dimensional space. Several symplectic integrators are used to approximate the integration of the Hamiltonian. A differential algebra class is introduced to extract a Taylor map up to arbitrary order. Analysis of optics is done in the same way for both the linear and nonlinear cases. Currently, the code is used to design and simulate the lattices of PEP-II. It will also be used for commissioning.
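
    A symplectic integrator of the kind the abstract mentions can be illustrated with the standard second-order drift-kick-drift (leapfrog) scheme for a one-degree-of-freedom Hamiltonian H = p^2/2 + V(q). This is a generic textbook example, not LEGO's actual integrator; the hallmark of symplecticity is that the energy error stays bounded instead of drifting.

```python
import math

def leapfrog(q, p, dV, dt, steps):
    """Second-order drift-kick-drift symplectic integrator for
    H = p**2/2 + V(q); dV is the derivative of the potential."""
    for _ in range(steps):
        q += 0.5 * dt * p        # half drift
        p -= dt * dV(q)          # kick
        q += 0.5 * dt * p        # half drift
    return q, p

def energy(q, p):
    # pendulum-like potential V(q) = 1 - cos(q), so dV = sin
    return 0.5 * p * p + (1.0 - math.cos(q))

q0, p0 = 1.0, 0.0
q1, p1 = leapfrog(q0, p0, math.sin, 0.05, 20000)
drift = abs(energy(q1, p1) - energy(q0, p0))
print(drift < 1e-3)  # True: the energy error oscillates but does not grow
```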

  18. Indication, from Pioneer 10/11, Galileo, and Ulysses Data, of an Apparent Anomalous, Weak, Long-Range Acceleration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, J.D.; Lau, E.L.; Turyshev, S.G.

    Radio metric data from the Pioneer 10/11, Galileo, and Ulysses spacecraft indicate an apparent anomalous, constant acceleration acting on the spacecraft with a magnitude of approximately 8.5×10^-8 cm/s^2, directed towards the Sun. Two independent codes and physical strategies have been used to analyze the data. A number of potential causes have been ruled out. We discuss future kinematic tests and possible origins of the signal. © 1998 The American Physical Society

  19. Physics and engineering design of the accelerator and electron dump for SPIDER

    NASA Astrophysics Data System (ADS)

    Agostinetti, P.; Antoni, V.; Cavenago, M.; Chitarin, G.; Marconato, N.; Marcuzzi, D.; Pilan, N.; Serianni, G.; Sonato, P.; Veltri, P.; Zaccaria, P.

    2011-06-01

    The ITER Neutral Beam Test Facility (PRIMA) is planned to be built at Consorzio RFX (Padova, Italy). PRIMA includes two experimental devices: a full size ion source with low voltage extraction called SPIDER and a full size neutral beam injector at full beam power called MITICA. SPIDER is the first experimental device to be built and operated, aiming at testing the extraction of a negative ion beam (made of H- and in a later stage D- ions) from an ITER size ion source. The main requirements of this experiment are a H-/D- extracted current density larger than 355/285 A m^-2, an energy of 100 keV and a pulse duration of up to 3600 s. Several analytical and numerical codes have been used for the design optimization process, some of which are commercial codes, while some others were developed ad hoc. The codes are used to simulate the electrical fields (SLACCAD, BYPO, OPERA), the magnetic fields (OPERA, ANSYS, COMSOL, PERMAG), the beam aiming (OPERA, IRES), the pressure inside the accelerator (CONDUCT, STRIP), the stripping reactions and transmitted/dumped power (EAMCC), the operating temperature, stress and deformations (ALIGN, ANSYS) and the heat loads on the electron dump (ED) (EDAC, BACKSCAT). An integrated approach, taking into consideration at the same time physics and engineering aspects, has been adopted all along the design process. Particular care has been taken in investigating the many interactions between physics and engineering aspects of the experiment. According to the 'robust design' philosophy, a comprehensive set of sensitivity analyses was performed, in order to investigate the influence of the design choices on the most relevant operating parameters. The design of the SPIDER accelerator, here described, has been developed in order to satisfy with reasonable margin all the requirements given by ITER, from the physics and engineering points of view. In particular, a new approach to the compensation of unwanted beam deflections inside the accelerator and a new concept for the ED have been introduced.

  20. Physical Models for Particle Tracking Simulations in the RF Gap

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shishlo, Andrei P.; Holmes, Jeffrey A.

    2015-06-01

    This document describes the algorithms that are used in the PyORBIT code to track particles accelerated in radio-frequency (RF) cavities. It gives the mathematical description of the algorithms and the assumptions made in each case. The derived formulas have been implemented in the PyORBIT code. The necessary data for each algorithm are described in detail.
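
    A thin RF-gap kick in its standard textbook (Panofsky) form, dW = q * V0 * T * cos(phi), can be sketched as follows. The numeric defaults (gap voltage, transit-time factor, beam energy) are illustrative assumptions, not PyORBIT's actual model parameters.

```python
import math

def rf_gap_kick(w_in_eV, phase_rad, charge=1.0,
                gap_voltage_V=0.5e6, transit_time_factor=0.8):
    """Thin-gap energy kick dW = q * V0 * T * cos(phi) (textbook form;
    the default voltage and transit-time factor are made-up values)."""
    dW = charge * gap_voltage_V * transit_time_factor * math.cos(phase_rad)
    return w_in_eV + dW

# a particle crossing the gap 30 degrees before crest still gains energy
w_out = rf_gap_kick(2.5e6, math.radians(-30.0))
print(w_out > 2.5e6)  # True
```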

  1. GPU Acceleration of the Locally Selfconsistent Multiple Scattering Code for First Principles Calculation of the Ground State and Statistical Physics of Materials

    NASA Astrophysics Data System (ADS)

    Eisenbach, Markus

    The Locally Self-consistent Multiple Scattering (LSMS) code solves the first principles Density Functional theory Kohn-Sham equation for a wide range of materials with a special focus on metals, alloys and metallic nano-structures. It has traditionally exhibited near perfect scalability on massively parallel high performance computer architectures. We present our efforts to exploit GPUs to accelerate the LSMS code to enable first principles calculations of O(100,000) atoms and statistical physics sampling of finite temperature properties. Using the Cray XK7 system Titan at the Oak Ridge Leadership Computing Facility we achieve a sustained performance of 14.5 PFlop/s and a speedup of 8.6 compared to the CPU-only code. This work has been sponsored by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Material Sciences and Engineering Division and by the Office of Advanced Scientific Computing. This work used resources of the Oak Ridge Leadership Computing Facility, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.

  2. Unaligned instruction relocation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertolli, Carlo; O'Brien, John K.; Sallenave, Olivier H.

    In one embodiment, a computer-implemented method includes receiving source code to be compiled into an executable file for an unaligned instruction set architecture (ISA). Aligned assembled code is generated, by a computer processor. The aligned assembled code complies with an aligned ISA and includes aligned processor code for a processor and aligned accelerator code for an accelerator. A first linking pass is performed on the aligned assembled code, including relocating a first relocation target in the aligned accelerator code that refers to a first object outside the aligned accelerator code. Unaligned assembled code is generated in accordance with the unaligned ISA and includes unaligned accelerator code for the accelerator and unaligned processor code for the processor. A second linking pass is performed on the unaligned assembled code, including relocating a second relocation target outside the unaligned accelerator code that refers to an object in the unaligned accelerator code.

  3. Unaligned instruction relocation

    DOEpatents

    Bertolli, Carlo; O'Brien, John K.; Sallenave, Olivier H.; Sura, Zehra N.

    2018-01-23

    In one embodiment, a computer-implemented method includes receiving source code to be compiled into an executable file for an unaligned instruction set architecture (ISA). Aligned assembled code is generated, by a computer processor. The aligned assembled code complies with an aligned ISA and includes aligned processor code for a processor and aligned accelerator code for an accelerator. A first linking pass is performed on the aligned assembled code, including relocating a first relocation target in the aligned accelerator code that refers to a first object outside the aligned accelerator code. Unaligned assembled code is generated in accordance with the unaligned ISA and includes unaligned accelerator code for the accelerator and unaligned processor code for the processor. A second linking pass is performed on the unaligned assembled code, including relocating a second relocation target outside the unaligned accelerator code that refers to an object in the unaligned accelerator code.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedman, A.; Barnard, J.J.; Briggs, R.J.

    The Heavy Ion Fusion Science Virtual National Laboratory (HIFS-VNL), a collaboration of LBNL, LLNL, and PPPL, has achieved 60-fold pulse compression of ion beams on the Neutralized Drift Compression eXperiment (NDCX) at LBNL. In NDCX, a ramped voltage pulse from an induction cell imparts a velocity "tilt" to the beam; the beam's tail then catches up with its head in a plasma environment that provides neutralization. The HIFS-VNL's mission is to carry out studies of Warm Dense Matter (WDM) physics using ion beams as the energy source; an emerging thrust is basic target physics for heavy ion-driven Inertial Fusion Energy (IFE). These goals require an improved platform, labeled NDCX-II. Development of NDCX-II at modest cost was recently enabled by the availability of induction cells and associated hardware from the decommissioned Advanced Test Accelerator (ATA) facility at LLNL. Our initial physics design concept accelerates a ~30 nC pulse of Li+ ions to ~3 MeV, then compresses it to ~1 ns while focusing it onto a mm-scale spot. It uses the ATA cells themselves (with waveforms shaped by passive circuits) to impart the final velocity tilt; smart pulsers provide small corrections. The ATA accelerated electrons; acceleration of non-relativistic ions involves more complex beam dynamics, both transversely and longitudinally. We are using analysis, an interactive one-dimensional kinetic simulation model, and multidimensional Warp-code simulations to develop the NDCX-II accelerator section. Both the LSP and Warp codes are being applied to the beam dynamics in the neutralized drift and final focus regions, and to the plasma injection process. The status of this effort is described.
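
    The head-tail velocity tilt compression described above is purely kinematic: with the tail moving faster than the head, the bunch length shrinks during the neutralized drift. The sketch below estimates the compression factor; the numbers are illustrative only, not NDCX parameters.

```python
def compression_factor(dz0, v_head, v_tail, drift_time):
    """Bunch length ratio after a ballistic drift with a head-to-tail
    velocity tilt (tail faster than head); neutralization is assumed
    to suppress space-charge forces during the drift."""
    dz = dz0 + (v_head - v_tail) * drift_time   # tail closes the gap
    return dz0 / abs(dz)

# illustrative numbers only (not NDCX parameters):
# a 10 cm bunch with a 2% tilt on 1e6 m/s, drifting for 4.9 microseconds
f = compression_factor(dz0=0.10, v_head=1.00e6, v_tail=1.02e6,
                       drift_time=4.9e-6)
print(round(f))  # 50
```

    Near the focus the factor diverges in this ballistic picture; in practice longitudinal temperature and residual space charge set the minimum pulse length.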

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lovelace, III, Henry H.

    In accelerator physics, models of a given machine are used to predict the behaviors of the beam, magnets, and radiofrequency cavities. The use of computational models has become widespread, easing the development period of the accelerator lattice. There are various programs that are used to create lattices and run simulations of both transverse and longitudinal beam dynamics. These programs include Methodical Accelerator Design (MAD) in its MAD8 and MADX versions, Zgoubi, the Polymorphic Tracking Code (PTC), and many others. In this discussion, BMAD (Baby Methodical Accelerator Design) is presented as an additional tool for creating and simulating accelerator lattices for the study of beam dynamics in the Relativistic Heavy Ion Collider (RHIC).

  6. A beamline systems model for Accelerator-Driven Transmutation Technology (ADTT) facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Todd, A.M.M.; Paulson, C.C.; Peacock, M.A.

    1995-10-01

    A beamline systems code that is being developed for Accelerator-Driven Transmutation Technology (ADTT) facility trade studies is described. The overall program is a joint Grumman, G.H. Gillespie Associates (GHGA) and Los Alamos National Laboratory effort. The GHGA Accelerator Systems Model (ASM) has been adopted as the framework on which this effort is based. Relevant accelerator and beam transport models from earlier Grumman systems codes are being adapted to this framework. Preliminary physics and engineering models for each ADTT beamline component have been constructed. Examples noted include a Bridge Coupled Drift Tube Linac (BCDTL) and the accelerator thermal system. A decision has been made to confine the ASM framework principally to beamline modeling, while detailed target/blanket, balance-of-plant and facility costing analysis will be performed externally. An interfacing external balance-of-plant and facility costing model, which will permit the performance of iterative facility trade studies, is under separate development. An ABC (Accelerator Based Conversion) example is used to highlight the present models and capabilities.

  7. A beamline systems model for Accelerator-Driven Transmutation Technology (ADTT) facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Todd, Alan M. M.; Paulson, C. C.; Peacock, M. A.

    1995-09-15

    A beamline systems code that is being developed for Accelerator-Driven Transmutation Technology (ADTT) facility trade studies is described. The overall program is a joint Grumman, G. H. Gillespie Associates (GHGA) and Los Alamos National Laboratory effort. The GHGA Accelerator Systems Model (ASM) has been adopted as the framework on which this effort is based. Relevant accelerator and beam transport models from earlier Grumman systems codes are being adapted to this framework. Preliminary physics and engineering models for each ADTT beamline component have been constructed. Examples noted include a Bridge Coupled Drift Tube Linac (BCDTL) and the accelerator thermal system. A decision has been made to confine the ASM framework principally to beamline modeling, while detailed target/blanket, balance-of-plant and facility costing analysis will be performed externally. An interfacing external balance-of-plant and facility costing model, which will permit the performance of iterative facility trade studies, is under separate development. An ABC (Accelerator Based Conversion) example is used to highlight the present models and capabilities.

  8. Collaborative Research: Simulation of Beam-Electron Cloud Interactions in Circular Accelerators Using Plasma Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katsouleas, Thomas; Decyk, Viktor

Final Report for grant DE-FG02-06ER54888, "Simulation of Beam-Electron Cloud Interactions in Circular Accelerators Using Plasma Models," Viktor K. Decyk, University of California, Los Angeles, Los Angeles, CA 90095-1547. The primary goal of this collaborative proposal was to modify the code QuickPIC and apply it to study the long-time stability of beam propagation in the low-density electron clouds present in circular accelerators. The UCLA contribution to this collaborative proposal was in supporting the development of the pipelining scheme for the QuickPIC code, which extended the parallel scaling of this code by two orders of magnitude. The USC work described here was the PhD research of Ms. Bing Feng, lead author of reference [2] below, who performed the research at USC under the guidance of the PI Tom Katsouleas and in collaboration with Dr. Decyk. The QuickPIC code [1] is a multi-scale Particle-in-Cell (PIC) code. The outer 3D code contains a beam which propagates through a long region of plasma and evolves slowly. The plasma response to this beam is modeled by slices of a 2D plasma code. This plasma response is then fed back to the beam code, and the process repeats. The pipelining is based on the observation that once the beam has passed a 2D slice, that slice's response can be fed back to the beam immediately, without waiting for the beam to pass all the other slices. Thus independent blocks of 2D slices from different time steps can run simultaneously. The major difficulty arose when particles at block edges needed to communicate with other blocks. Two versions of the pipelining scheme were developed, one for the full quasi-static code and the other for the basic quasi-static code used by this e-cloud proposal. Details of the pipelining scheme were published in [2]. The new version of QuickPIC was able to run with more than 1,000 processors, and was successfully applied in modeling e-clouds by our collaborators in this proposal [3-8].
Jean-Luc Vay at Lawrence Berkeley National Lab later implemented a similar basic quasistatic scheme including pipelining in the code WARP [9] and found good to very good quantitative agreement between the two codes in modeling e-clouds. References [1] C. Huang, V. K. Decyk, C. Ren, M. Zhou, W. Lu, W. B. Mori, J. H. Cooley, T. M. Antonsen, Jr., and T. Katsouleas, "QUICKPIC: A highly efficient particle-in-cell code for modeling wakefield acceleration in plasmas," J. Computational Phys. 217, 658 (2006). [2] B. Feng, C. Huang, V. K. Decyk, W. B. Mori, P. Muggli, and T. Katsouleas, "Enhancing parallel quasi-static particle-in-cell simulations with a pipelining algorithm," J. Computational Phys, 228, 5430 (2009). [3] C. Huang, V. K. Decyk, M. Zhou, W. Lu, W. B. Mori, J. H. Cooley, T. M. Antonsen, Jr., and B. Feng, T. Katsouleas, J. Vieira, and L. O. Silva, "QUICKPIC: A highly efficient fully parallelized PIC code for plasma-based acceleration," Proc. of the SciDAC 2006 Conf., Denver, Colorado, June, 2006 [Journal of Physics: Conference Series, W. M. Tang, Editor, vol. 46, Institute of Physics, Bristol and Philadelphia, 2006], p. 190. [4] B. Feng, C. Huang, V. Decyk, W. B. Mori, T. Katsouleas, P. Muggli, "Enhancing Plasma Wakefield and E-cloud Simulation Performance Using a Pipelining Algorithm," Proc. 12th Workshop on Advanced Accelerator Concepts, Lake Geneva, WI, July, 2006, p. 201 [AIP Conf. Proceedings, vol. 877, Melville, NY, 2006]. [5] B. Feng, P. Muggli, T. Katsouleas, V. Decyk, C. Huang, and W. Mori, "Long Time Electron Cloud Instability Simulation Using QuickPIC with Pipelining Algorithm," Proc. of the 2007 Particle Accelerator Conference, Albuquerque, NM, June, 2007, p. 3615. [6] B. Feng, C. Huang, V. Decyk, W. B. Mori, G. H. Hoffstaetter, P. Muggli, T. Katsouleas, "Simulation of Electron Cloud Effects on Electron Beam at ERL with Pipelined QuickPIC," Proc. 13th Workshop on Advanced Accelerator Concepts, Santa Cruz, CA, July-August, 2008, p. 340 [AIP Conf. 
Proceedings, vol. 1086, Melville, NY, 2008]. [7] B. Feng, C. Huang, V. K. Decyk, W. B. Mori, P. Muggli, and T. Katsouleas, "Enhancing parallel quasi-static particle-in-cell simulations with a pipelining algorithm," J. Computational Phys. 228, 5430 (2009). [8] C. Huang, W. An, V. K. Decyk, W. Lu, W. B. Mori, F. S. Tsung, M. Tzoufras, S. Morshed, T. Antonsen, B. Feng, T. Katsouleas, R. A. Fonseca, S. F. Martins, J. Vieira, L. O. Silva, E. Esarey, C. G. R. Geddes, W. P. Leemans, E. Cormier-Michel, J.-L. Vay, D. L. Bruhwiler, B. Cowan, J. R. Cary, and K. Paul, "Recent results and future challenges for large scale particle-in-cell simulations of plasma-based accelerator concepts," Proc. of the SciDAC 2009 Conf., San Diego, CA, June, 2009 [Journal of Physics: Conference Series, vol. 180, Institute of Physics, Bristol and Philadelphia, 2009], p. 012005. [9] J.-L. Vay, C. M. Celata, M. A. Furman, G. Penn, M. Venturini, D. P. Grote, and K. G. Sonnad, "Update on Electron-Cloud Simulations Using the Package WARP-POSINST," Proc. of the 2009 Particle Accelerator Conference PAC09, Vancouver, Canada, June, 2009, paper FR5RFP078.
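The pipelining described in this report amounts to a wavefront dependency pattern: a block of 2D slices at 3D step t can start once the same block has finished step t-1 and the upstream block has finished step t. The toy scheduler below (hypothetical names; one unit of wall time per block-step, one processor per block) is a sketch of why the scheme helps, not QuickPIC's actual implementation: P blocks finish T steps in T + P - 1 units instead of the serial P × T.

```python
def wavefront_schedule(num_blocks, num_steps):
    """Earliest-finish times for a pipelined quasi-static update.

    Block (b, t) may start only after block (b, t - 1) is done (same
    block, previous 3D time step) and block (b - 1, t) is done (the
    beam must first pass the upstream block of slices).  Each
    block-update costs one unit of wall time on its own processor."""
    finish = {}
    for t in range(num_steps):
        for b in range(num_blocks):
            start = max(finish.get((b, t - 1), 0), finish.get((b - 1, t), 0))
            finish[(b, t)] = start + 1
    return finish

P, T = 4, 10                       # 4 slice blocks, 10 beam time steps
serial_time = P * T                # one block at a time: 40 units
pipelined_time = wavefront_schedule(P, T)[(P - 1, T - 1)]  # wavefront: 13
```

The makespan T + P - 1 is the classic wavefront result: after a fill-in phase of P - 1 steps, all blocks run concurrently on different time steps.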

  9. Pressure profiles of the BRing based on the simulation used in the CSRm

    NASA Astrophysics Data System (ADS)

    Wang, J. C.; Li, P.; Yang, J. C.; Yuan, Y. J.; Wu, B.; Chai, Z.; Luo, C.; Dong, Z. Q.; Zheng, W. H.; Zhao, H.; Ruan, S.; Wang, G.; Liu, J.; Chen, X.; Wang, K. D.; Qin, Z. M.; Yin, B.

    2017-07-01

HIAF-BRing, a new multipurpose accelerator of the High Intensity heavy-ion Accelerator Facility (HIAF) project, requires an extremely high vacuum, lower than 10^-11 mbar, to fulfill the requirements of radioactive beam physics and high energy density physics. To achieve the required pressure, the benchmarked codes VAKTRAK and Molflow+ are used to simulate the pressure profiles of the BRing system. To ensure the accuracy of the VAKTRAK implementation, its computational results are verified against measured pressure data and compared with a new simulation code, BOLIDE, on the existing synchrotron CSRm. With VAKTRAK thus verified, the pressure profiles of the BRing are calculated for different parameters such as conductance, outgassing rates and pumping speeds. Based on these computational results, the optimal parameters are selected to achieve the required pressure for the BRing.
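The underlying balance solved by codes of this type can be illustrated with a minimal finite-difference sketch (an assumption-laden stand-in, not the actual VAKTRAK algorithm): specific conductance c and specific outgassing q along a uniform pipe, with lumped pumps of speed S at both ends, give c·P'' + q = 0 with pump boundary conditions.

```python
import numpy as np

def pressure_profile(length, npts, c, q, S):
    """Steady-state pressure in a uniform beam pipe (illustrative model):
    c * P'' + q = 0 inside the pipe (c: specific conductance, q: specific
    outgassing rate), lumped pumps of speed S at both ends.
    Control-volume finite differences; units arbitrary but consistent."""
    dx = length / (npts - 1)
    A = np.zeros((npts, npts))
    b = np.zeros(npts)
    for i in range(1, npts - 1):
        A[i, i - 1] = A[i, i + 1] = c / dx   # gas flow from neighbours
        A[i, i] = -2 * c / dx
        b[i] = -q * dx                       # outgassing load per node
    # end nodes: gas arriving from the pipe plus half-cell outgassing
    # is removed by the pump:  c*(P[1]-P[0])/dx + q*dx/2 = S*P[0]
    A[0, 0] = -(c / dx + S); A[0, 1] = c / dx; b[0] = -q * dx / 2
    A[-1, -1] = -(c / dx + S); A[-1, -2] = c / dx; b[-1] = -q * dx / 2
    return np.linalg.solve(A, b)

P = pressure_profile(length=10.0, npts=201, c=1.0, q=1e-3, S=0.5)
# analytic check: peak q*L^2/(8c) + q*L/(2S) = 0.0225 at the midpoint,
# and P = q*L/(2S) = 0.01 at the pumps
```

Raising the pumping speed S lowers the flat pedestal term q·L/(2S), while the conductance-limited bump q·L²/(8c) can only be reduced by shortening the pump spacing or enlarging the pipe, which is exactly the trade explored in the parameter scans above.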

  10. Analysis of the beam halo in negative ion sources by using 3D3V PIC code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miyamoto, K., E-mail: kmiyamot@naruto-u.ac.jp; Nishioka, S.; Goto, I.

The physical mechanism of the formation of the negative ion beam halo and the heat loads on the multi-stage acceleration grids are investigated with a 3D PIC (particle-in-cell) simulation. The following physical mechanism of beam halo formation is verified: the beam core and the halo consist of the negative ions extracted from the center and the periphery of the meniscus, respectively. This difference in extraction location results in a geometrical aberration. Furthermore, it is shown that the predicted heat loads on the first and second acceleration grids are quantitatively improved compared with those obtained from the 2D PIC simulation.

  11. Comparisons of time explicit hybrid kinetic-fluid code Architect for Plasma Wakefield Acceleration with a full PIC code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Massimo, F., E-mail: francesco.massimo@ensta-paristech.fr; Dipartimento SBAI, Università di Roma “La Sapienza”, Via A. Scarpa 14, 00161 Roma; Atzeni, S.

Architect, a time explicit hybrid code designed to perform quick simulations for electron driven plasma wakefield acceleration, is described. In order to obtain beam quality acceptable for applications, control of the beam-plasma dynamics is necessary. Particle in Cell (PIC) codes represent the state-of-the-art technique to investigate the underlying physics and possible experimental scenarios; however, PIC codes demand heavy computational resources. The Architect code substantially reduces this need by using a hybrid approach: relativistic electron bunches are treated kinetically, as in a PIC code, while the background plasma is treated as a fluid. Cylindrical symmetry is assumed for the solution of the electromagnetic fields and fluid equations. In this paper both the underlying algorithms and a comparison with a fully three dimensional particle in cell code are reported. The comparison highlights the good agreement between the two models up to weakly non-linear regimes. In highly non-linear regimes the two models disagree only in a localized region, where the plasma electrons expelled by the bunch close up at the end of the first plasma oscillation.
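In the linear limit, the fluid part of such a hybrid scheme reduces to a driven-oscillator equation for the wake, W'' + kp²W = kp²·n_b(ξ), marched in the co-moving coordinate behind a rigid kinetic bunch. The sketch below (normalized units; the reduced equation, bunch shape and parameters are illustrative assumptions, not Architect's actual field solver) shows the characteristic plasma oscillation left behind the driver.

```python
import numpy as np

kp = 1.0                                    # plasma wavenumber (normalized)
xi = np.arange(0.0, 40.0, 0.01)             # co-moving coordinate
dxi = xi[1] - xi[0]
nb = 0.1 * np.exp(-0.5 * ((xi - 3.0) / 0.5) ** 2)   # rigid Gaussian bunch

# March W'' + kp^2 W = kp^2 nb forward in xi with a semi-implicit
# (Euler-Cromer) step; the bunch drives the wake, then free oscillation.
W = np.zeros_like(xi)
V = 0.0                                     # dW/dxi
for i in range(1, xi.size):
    V += dxi * kp**2 * (nb[i - 1] - W[i - 1])
    W[i] = W[i - 1] + dxi * V

tail = W[xi > 10.0]                         # region well behind the driver
tail_amp = float(np.abs(tail).max())
crossings = int(np.sum(tail[:-1] * tail[1:] < 0))   # ~ one per pi/kp
```

Behind the bunch the wake oscillates at the plasma wavelength 2π/kp, so over a tail of length 30 one expects roughly 30/π ≈ 9-10 zero crossings; the non-linear blow-out physics where the full codes differ is, of course, beyond this linear sketch.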

  12. Enabling large-scale viscoelastic calculations via neural network acceleration

    NASA Astrophysics Data System (ADS)

    Robinson DeVries, P.; Thompson, T. B.; Meade, B. J.

    2017-12-01

One of the most significant challenges in understanding the effects of repeated earthquake cycle activity is the computational cost of large-scale viscoelastic earthquake cycle models. Deep artificial neural networks (ANNs) can be used to discover new, compact, and accurate computational representations of viscoelastic physics. Once found, these efficient ANN representations may replace computationally intensive viscoelastic codes and accelerate large-scale viscoelastic calculations by more than 50,000%. This magnitude of acceleration enables the modeling of geometrically complex faults over thousands of earthquake cycles, across wider ranges of model parameters, and at larger spatial and temporal scales than previously possible. Perhaps most interestingly from a scientific perspective, ANN representations of viscoelastic physics may lead to basic advances in the understanding of the underlying model phenomenology. We demonstrate the potential of artificial neural networks to illuminate fundamental physical insights with specific examples.
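The surrogate idea in miniature: train a small network on samples of an "expensive" response curve, then evaluate the cheap network in place of the solver. Below, a pure-NumPy one-hidden-layer network fits a stand-in relaxation-style curve; the architecture, training data and learning rate are illustrative assumptions, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for an expensive viscoelastic response (illustrative only).
x = np.linspace(0.0, 3.0, 64).reshape(-1, 1)
y = np.exp(-x) * np.sin(2.0 * x)

# One-hidden-layer tanh network, full-batch gradient descent on MSE.
W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

_, p0 = forward(x)
mse_initial = float(np.mean((p0 - y) ** 2))

for _ in range(3000):
    h, p = forward(x)
    g = 2.0 * (p - y) / len(x)               # dMSE/dprediction
    gW2 = h.T @ g; gb2 = g.sum(0)            # backprop through output layer
    gh = g @ W2.T * (1 - h ** 2)             # through tanh
    gW1 = x.T @ gh; gb1 = gh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, p = forward(x)
mse_final = float(np.mean((p - y) ** 2))     # far below the initial error
```

Once trained, a forward pass costs a couple of small matrix products, which is the source of the enormous speedups quoted above when the true solver is a large viscoelastic computation.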

  13. Introductory Physics Experiments Using the Wiimote

    NASA Astrophysics Data System (ADS)

    Somers, William; Rooney, Frank; Ochoa, Romulo

    2009-03-01

The Wii, a video game console, is a very popular device, with millions of units sold worldwide over the past two years. Although it is not a computationally powerful machine, to a physics educator its most valuable components are its controllers. The Wiimote (remote) controller contains three accelerometers, an infrared detector, and Bluetooth connectivity, all at a relatively low price. Thanks to available open source code, any PC with Bluetooth capability can detect the information sent out by the Wiimote. We have designed several experiments for introductory physics courses that make use of the accelerometers and Bluetooth connectivity. We have adapted the Wiimote to measure the variable acceleration in simple harmonic motion, the centripetal and tangential accelerations in circular motion, and the accelerations generated when students lift weights. We present the results of our experiments and compare them with those obtained using motion and/or force sensors.
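The SHM measurement can be sketched with synthetic accelerometer samples: in simple harmonic motion a(t) = -ω²x(t), so the oscillation frequency can be read straight off the acceleration spectrum. All parameters below (spring constant, mass, sample rate, noise level) are illustrative assumptions, not values from the actual experiments.

```python
import numpy as np

# Synthetic stand-in for Wiimote accelerometer data (~100 Hz) recorded
# while the remote rides a glider on a spring.
k, m, A = 20.0, 0.5, 0.1           # spring constant, mass, amplitude (assumed)
omega = np.sqrt(k / m)             # expected angular frequency [rad/s]
fs, T = 100.0, 10.0                # sample rate [Hz], record length [s]
t = np.arange(0.0, T, 1.0 / fs)
a = -A * omega**2 * np.cos(omega * t)                   # a(t) = -omega^2 x(t)
a += np.random.default_rng(1).normal(0, 0.05, t.size)   # sensor noise

# Recover the oscillation frequency from the acceleration spectrum.
spec = np.abs(np.fft.rfft(a))
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
f_peak = freqs[spec[1:].argmax() + 1]   # skip the DC bin
omega_measured = 2 * np.pi * f_peak     # close to sqrt(k/m)
```

The same spectral read-off works for the circular-motion and weight-lifting data, with the accelerometer axis chosen along the relevant direction.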

  14. Chaotic dynamics in accelerator physics

    NASA Astrophysics Data System (ADS)

    Cary, J. R.

    1992-11-01

Substantial progress was made in several areas of accelerator dynamics. We completed a design of an FEL wiggler with adiabatic trapping and detrapping sections, to develop an understanding of longitudinal adiabatic dynamics and to create efficiency enhancements for recirculating free-electron lasers. We developed a computer code for analyzing the critical KAM tori that bound the dynamic aperture in circular machines. Studies of modes that arise from the interaction of coasting beams with a narrow-spectrum impedance have begun. During this research, educational and research ties with the accelerator community at large were strengthened.
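The role of critical KAM tori as confinement boundaries can be illustrated with the Chirikov standard map, the textbook model for this kind of analysis (this sketch is not the code described in the abstract): below the critical parameter K_c ≈ 0.97, rotational invariant tori bound the momentum excursion of an orbit; well above it they are destroyed and the orbit diffuses.

```python
import math

def standard_map_excursion(K, n_iter=20000, x0=0.3, p0=0.0):
    """Iterate the Chirikov standard map
        p -> p + (K / 2 pi) sin(2 pi x),   x -> x + p  (mod 1)
    and return the largest |p| excursion of the orbit."""
    x, p = x0, p0
    pmax = abs(p)
    for _ in range(n_iter):
        p = p + K / (2 * math.pi) * math.sin(2 * math.pi * x)
        x = (x + p) % 1.0
        pmax = max(pmax, abs(p))
    return pmax

confined = standard_map_excursion(0.5)   # K < K_c: KAM tori confine |p|
diffusing = standard_map_excursion(5.0)  # K >> K_c: unbounded diffusion
```

In an accelerator the analogous last surviving torus bounds the dynamic aperture: orbits inside it stay trapped for all time, while orbits outside can wander to large amplitude.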

  15. New methods in WARP, a particle-in-cell code for space-charge dominated beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grote, D., LLNL

    1998-01-12

The current U.S. approach for a driver for inertial confinement fusion power production is a heavy-ion induction accelerator; high-current beams of heavy ions are focused onto the fusion target. The space charge of the high-current beams affects the behavior more strongly than does the temperature (the beams are described as being "space-charge dominated") and the beams behave like non-neutral plasmas. The particle simulation code WARP has been developed and used to study the transport and acceleration of space-charge dominated ion beams in a wide range of applications, from basic beam physics studies, to ongoing experiments, to fusion driver concepts. WARP combines aspects of a particle simulation code and an accelerator code; it uses multi-dimensional, electrostatic particle-in-cell (PIC) techniques and has a rich mechanism for specifying the lattice of externally applied fields. There are both two- and three-dimensional versions, the former including axisymmetric (r-z) and transverse slice (x-y) models. WARP includes a number of novel techniques and capabilities that both enhance its performance and make it applicable to a wide range of problems. Some of these have been described elsewhere. Several recent developments are discussed in this paper. A transverse slice model has been implemented with the novel capability of including bends, allowing more rapid simulation while retaining essential physics. An interface using Python as the interpreter layer, instead of Basis, has been developed. A parallel version of WARP has been developed using Python.
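The transverse slice idea can be caricatured as particles in a thin-lens FODO lattice: kick-drift maps applied to an (x, x') slice, with the beam size staying bounded when the cell is stable (for this map, |2 - L²/f²| < 2, i.e. f > L/2 with half-cell length L). A sketch with assumed parameters and without the self-consistent space-charge solve that distinguishes WARP:

```python
import numpy as np

f = 3.0      # thin-lens quad focal length [m] (assumed)
L = 4.0      # drift between quads [m]; stable since f > L/2

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1e-3, 1000)     # slice positions [m]
xp = rng.normal(0.0, 1e-4, 1000)    # slice angles [rad]

def fodo_cell(x, xp):
    """One thin-lens FODO cell: focusing kick, drift, defocusing kick, drift."""
    xp = xp - x / f
    x = x + L * xp
    xp = xp + x / f
    x = x + L * xp
    return x, xp

for _ in range(100):
    x, xp = fodo_cell(x, xp)
rms = float(x.std())   # bounded near its initial ~1e-3 for a stable cell
```

In the real slice model, a 2D field solve and space-charge kick would be inserted between the lattice maps each step; here the external lattice alone shows the bounded betatron motion.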

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedman, A; Barnard, J J; Briggs, R J

The Heavy Ion Fusion Science Virtual National Laboratory (HIFS-VNL), a collaboration of LBNL, LLNL, and PPPL, has achieved 60-fold pulse compression of ion beams on the Neutralized Drift Compression eXperiment (NDCX) at LBNL. In NDCX, a ramped voltage pulse from an induction cell imparts a velocity 'tilt' to the beam; the beam's tail then catches up with its head in a plasma environment that provides neutralization. The HIFS-VNL's mission is to carry out studies of warm dense matter (WDM) physics using ion beams as the energy source; an emerging thrust is basic target physics for heavy ion-driven inertial fusion energy (IFE). These goals require an improved platform, labeled NDCX-II. Development of NDCX-II at modest cost was recently enabled by the availability of induction cells and associated hardware from the decommissioned Advanced Test Accelerator (ATA) facility at LLNL. Our initial physics design concept accelerates a ~30 nC pulse of Li+ ions to ~3 MeV, then compresses it to ~1 ns while focusing it onto a mm-scale spot. It uses the ATA cells themselves (with waveforms shaped by passive circuits) to impart the final velocity tilt; smart pulsers provide small corrections. The ATA accelerated electrons; acceleration of non-relativistic ions involves more complex beam dynamics, both transversely and longitudinally. We are using an interactive one-dimensional kinetic simulation model and multidimensional Warp-code simulations to develop the NDCX-II accelerator section. Both the LSP and Warp codes are being applied to the beam dynamics in the neutralized drift and final focus regions, and to the plasma injection process. The status of this effort is described.

  17. Particle acceleration and transport at a 2D CME-driven shock using the HAFv3 and PATH Code

    NASA Astrophysics Data System (ADS)

    Li, G.; Ao, X.; Fry, C. D.; Verkhoglyadova, O. P.; Zank, G. P.

    2012-12-01

We study particle acceleration at a 2D CME-driven shock and the subsequent transport in the inner heliosphere (up to 2 AU) by coupling the kinematic Hakamada-Akasofu-Fry version 3 (HAFv3) solar wind model (Hakamada and Akasofu, 1982; Fry et al., 2003) with the Particle Acceleration and Transport in the Heliosphere (PATH) model (Zank et al., 2000; Li et al., 2003, 2005; Verkhoglyadova et al., 2009). HAFv3 provides the evolution of a two-dimensional shock geometry and other plasma parameters, which are fed into the PATH model to investigate the effect of a varying shock geometry on particle acceleration and transport. The transport module of the PATH model is parallelized and utilizes state-of-the-art GPU computation to achieve a rapid physics-based numerical description of the interplanetary energetic particles. Together with the fast execution of the HAFv3 model, the coupled code makes it possible to nowcast/forecast the interplanetary radiation environment.

  18. Constraining physical parameters of ultra-fast outflows in PDS 456 with Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Hagino, K.; Odaka, H.; Done, C.; Gandhi, P.; Takahashi, T.

    2014-07-01

Deep absorption lines with an extremely high velocity of ~0.3c observed in PDS 456 spectra strongly indicate the existence of ultra-fast outflows (UFOs). However, the launching and acceleration mechanisms of UFOs are still uncertain. One possible way to address this is to constrain the physical parameters as a function of distance from the source. In order to study the spatial dependence of the parameters, it is essential to adopt 3-dimensional Monte Carlo simulations that treat radiation transfer in arbitrary geometry. We have developed a new simulation code for X-ray radiation reprocessed in an AGN outflow. Our code implements radiative transfer in a 3-dimensional biconical disk wind geometry, based on the Monte Carlo simulation framework MONACO (Watanabe et al. 2006, Odaka et al. 2011). Our simulations reproduce the FeXXV and FeXXVI absorption features seen in the spectra. Broad Fe emission lines, which reflect the geometry and viewing angle, are also successfully reproduced. By comparing the simulated spectra with Suzaku data, we obtained constraints on the physical parameters. We discuss the launching and acceleration mechanisms of UFOs in PDS 456 based on our analysis.
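At its core, Monte Carlo radiative transfer draws photon path lengths from an exponential distribution in optical depth. A minimal purely-absorbing-slab version (geometry and parameters are illustrative; MONACO handles full 3D biconical winds with scattering and line physics) recovers the textbook transmission exp(-τ):

```python
import numpy as np

rng = np.random.default_rng(3)

def transmitted_fraction(tau, n_photons=200_000):
    """Monte Carlo transfer through a purely absorbing slab of optical
    depth tau: each photon's free path (in optical-depth units) is drawn
    from an exponential distribution; it escapes if the path exceeds tau."""
    depths = rng.exponential(1.0, n_photons)
    return float(np.mean(depths > tau))

frac = transmitted_fraction(2.0)
# analytic transmission: exp(-2) ~ 0.1353
```

In the full problem, each absorption event is replaced by sampling a scattering angle and a new free path in a velocity-shifted wind frame, which is what shapes the observed blueshifted FeXXV/FeXXVI troughs.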

  19. Physics and engineering studies on the MITICA accelerator: comparison among possible design solutions

    NASA Astrophysics Data System (ADS)

    Agostinetti, P.; Antoni, V.; Cavenago, M.; Chitarin, G.; Pilan, N.; Marcuzzi, D.; Serianni, G.; Veltri, P.

    2011-09-01

Consorzio RFX in Padova is currently using a comprehensive set of numerical and analytical codes for the physics and engineering design of the SPIDER (Source for Production of Ion of Deuterium Extracted from RF plasma) and MITICA (Megavolt ITER Injector Concept Advancement) experiments, planned to be built at Consorzio RFX. This paper presents a set of studies on different possible geometries for the MITICA accelerator, with the objective of comparing different design concepts and choosing the most suitable one (or ones) to be further developed and possibly adopted in the experiment. Different design solutions are discussed and compared, taking into account their advantages and drawbacks from both the physics and the engineering points of view.

  20. The ZPIC educational code suite

    NASA Astrophysics Data System (ADS)

    Calado, R.; Pardal, M.; Ninhos, P.; Helm, A.; Mori, W. B.; Decyk, V. K.; Vieira, J.; Silva, L. O.; Fonseca, R. A.

    2017-10-01

Particle-in-Cell (PIC) codes are used in almost all areas of plasma physics, such as fusion energy research, plasma accelerators, space physics, ion propulsion, and plasma processing. In this work, we present the ZPIC educational code suite, a new initiative to foster training in plasma physics using computer simulations. Leveraging our expertise and experience from the development and use of the OSIRIS PIC code, we have developed a suite of 1D/2D fully relativistic electromagnetic PIC codes, as well as a 1D electrostatic code. These codes are self-contained and require only a standard laptop/desktop computer with a C compiler to run. The output files are written in a new file format called ZDF that can be easily read using the supplied routines in a number of languages, such as Python and IDL. The code suite also includes a number of example problems that can be used to illustrate several textbook and advanced plasma mechanisms, including instructions for parameter space exploration. We also invite contributions to this repository of test problems, which will be made freely available to the community provided the input files comply with the format defined by the ZPIC team. The code suite is freely available and hosted on GitHub at https://github.com/zambzamb/zpic. Work partially supported by PICKSC.
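The 1D electrostatic case condenses to three steps per time step: charge deposition, a field solve, and a particle push, which together reproduce a Langmuir oscillation at ω_p. The sketch below is a stand-in in the spirit of the suite (normalized units, CIC deposition, FFT Poisson solve), not ZPIC's actual C source:

```python
import numpy as np

# Minimal 1D electrostatic PIC, normalized units (omega_p = 1, epsilon_0 = 1).
Ng, Np = 64, 10000
Lbox = 2 * np.pi
dx = Lbox / Ng
dt = 0.05
xp = (np.arange(Np) + 0.5) * Lbox / Np   # cold, uniform electron slab
vp = 0.01 * np.sin(xp)                   # seed the k = 1 Langmuir mode
qp = -Lbox / Np                          # macro-charge so that n0 = 1

def solve_field(xp):
    """CIC deposition + FFT Poisson solve of dE/dx = rho (+1 ion background)."""
    g = xp / dx
    i = np.floor(g).astype(int) % Ng
    w = g - np.floor(g)
    rho = (np.bincount(i, 1 - w, Ng) + np.bincount((i + 1) % Ng, w, Ng)) * qp / dx + 1.0
    rhok = np.fft.rfft(rho)
    k = np.arange(rhok.size) * 2 * np.pi / Lbox
    Ek = np.zeros_like(rhok)
    Ek[1:] = rhok[1:] / (1j * k[1:])     # ik * E_k = rho_k
    return np.fft.irfft(Ek, Ng), i, w

probe = []
for _ in range(252):                     # ~ two plasma periods (T = 2 pi)
    E, i, w = solve_field(xp)
    Epart = (1 - w) * E[i] + w * E[(i + 1) % Ng]   # gather field to particles
    vp -= Epart * dt                     # electrons: q/m = -1
    xp = (xp + vp * dt) % Lbox
    probe.append(E[Ng // 4])

probe = np.array(probe)
peak = float(np.abs(probe).max())        # ~ v0 = 0.01 for a linear mode
s = np.sign(probe[2:])
crossings = int(np.sum(s[:-1] * s[1:] < 0))   # ~3 sign changes in 2 periods
```

The probed field oscillates as sin(ω_p t) with amplitude equal to the seeded velocity perturbation, which is the standard first test problem for a code of this kind.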

  1. Accelerator System Model (ASM) user manual with physics and engineering model documentation. ASM version 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1993-07-01

The Accelerator System Model (ASM) is a computer program developed to model proton radiofrequency accelerators and to carry out system-level trade studies. The ASM FORTRAN subroutines are incorporated into an intuitive graphical user interface which provides for the "construction" of the accelerator in a window on the computer screen. The interface is based on the Shell for Particle Accelerator Related Codes (SPARC) software technology, written for the Macintosh operating system in the C programming language. This User Manual describes the operation and use of the ASM application within the SPARC interface. The Appendix provides a detailed description of the physics and engineering models used in ASM. ASM Version 1.0 is a joint project of G. H. Gillespie Associates, Inc. and the Accelerator Technology (AT) Division of the Los Alamos National Laboratory. Neither the ASM Version 1.0 software nor this ASM Documentation may be reproduced without the expressed written consent of both the Los Alamos National Laboratory and G. H. Gillespie Associates, Inc.

  2. Computational electronics and electromagnetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shang, C C

The Computational Electronics and Electromagnetics thrust area serves as the focal point for Engineering R&D activities in developing computer-based design and analysis tools. Representative applications include design of particle accelerator cells and beamline components; design of transmission line components; engineering analysis and design of high-power (optical and microwave) components; photonics and optoelectronics circuit design; electromagnetic susceptibility analysis; and antenna synthesis. The FY-97 effort focuses on development and validation of (1) accelerator design codes; (2) 3-D massively parallel, time-dependent EM codes; (3) material models; (4) coupling and application of engineering tools for analysis and design of high-power components; and (5) beam control algorithms coupled to beam transport physics codes. These efforts are in association with technology development in the power conversion, nondestructive evaluation, and microtechnology areas. The efforts complement technology development in Lawrence Livermore National Laboratory programs.

  3. SimTrack: A compact C++ code for particle orbit and spin tracking in accelerators

    DOE PAGES

    Luo, Yun

    2015-08-29

SimTrack is a compact C++ code for 6-d symplectic element-by-element particle tracking in accelerators, originally designed for head-on beam–beam compensation simulation studies in the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory. It provides 6-d symplectic orbit tracking with 4th-order symplectic integration for magnet elements and the 6-d symplectic synchro-beam map for beam–beam interaction. Since its inception in 2009, SimTrack has been used intensively for dynamic aperture calculations with beam–beam interaction for RHIC. Recently, proton spin tracking and electron energy loss due to synchrotron radiation were added. In this article, I present the code architecture, physics models, and some selected examples of its applications to RHIC and a future electron-ion collider design, eRHIC.
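Fourth-order symplectic integration of the kind mentioned above is typically a Yoshida/Forest-Ruth composition of exact drift and kick maps. A generic one-degree-of-freedom sketch (pendulum potential as a stand-in for a magnet element; Python rather than SimTrack's C++) shows the hallmark of symplectic tracking: a bounded, non-drifting energy error over many steps.

```python
import math

# Forest-Ruth / Yoshida 4th-order composition coefficients.
x1 = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
x0 = 1.0 - 2.0 * x1
drift_c = (x1 / 2.0, (x0 + x1) / 2.0, (x0 + x1) / 2.0, x1 / 2.0)
kick_d = (x1, x0, x1, 0.0)

def step(q, p, dt):
    """One 4th-order symplectic step for H = p^2/2 + (1 - cos q)."""
    for c, d in zip(drift_c, kick_d):
        q += c * dt * p               # drift: position advances
        p -= d * dt * math.sin(q)     # kick: dV/dq = sin(q)
    return q, p

def energy(q, p):
    return 0.5 * p * p + (1.0 - math.cos(q))

q, p = 1.0, 0.0
E0 = energy(q, p)
max_err = 0.0
for _ in range(10000):
    q, p = step(q, p, 0.1)
    max_err = max(max_err, abs(energy(q, p) - E0))
# max_err stays small and bounded: no secular energy drift
```

A non-symplectic scheme of the same order would show a slow secular drift in the energy, which over the millions of turns of a dynamic aperture study would falsify the result.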

  4. Microphysics of Waves and Instabilities in the Solar Wind and their Macro Manifestations in the Corona and Interplanetary Space

    NASA Technical Reports Server (NTRS)

    Habbal, Shadia R.; Gurman, Joseph (Technical Monitor)

    2003-01-01

Investigations of the physical processes responsible for the acceleration of the solar wind were pursued with the development of two new solar wind codes: a hybrid code and a 2-D MHD code. Hybrid simulations were performed to investigate the interaction between ions and parallel-propagating low-frequency ion cyclotron waves in a homogeneous plasma. In a low-beta plasma such as the solar wind plasma in the inner corona, the proton thermal speed is much smaller than the Alfven speed. Vlasov linear theory predicts that protons are not in resonance with low-frequency ion cyclotron waves. However, non-linear effects make it possible for these waves to strongly heat and accelerate protons. This study has important implications for studies of the corona and the solar wind. Low-frequency ion cyclotron waves, or Alfven waves, are commonly observed in the solar wind. Until now, it has been believed that these waves are not able to heat the solar wind plasma unless cascading processes transfer their energy to the high-frequency part of the spectrum. However, this study shows that these waves may directly heat and accelerate protons non-linearly. This process may play an important role in coronal heating and solar wind acceleration, at least in some parameter space.

  5. Computational tools and lattice design for the PEP-II B-Factory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, Y.; Irwin, J.; Nosochkov, Y.

    1997-02-01

Several accelerator codes were used to design the PEP-II lattices, ranging from matrix-based codes, such as MAD and DIMAD, to symplectic-integrator codes, such as TRACY and DESPOT. In addition to element-by-element tracking, we constructed maps to determine aberration strengths. Furthermore, we have developed a fast and reliable method (nPB tracking) to track particles with a one-turn map. This new technique allows us to evaluate the performance of the lattices over the entire tune plane. Recently, we designed and implemented an object-oriented code in C++, called LEGO, which integrates and expands upon TRACY and DESPOT. © 1997 American Institute of Physics.
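One-turn-map tracking reduces each turn to a single map application, after which the tune can be read from the turn-by-turn spectrum. A linear sketch (a pure rotation matrix with an assumed fractional tune of 0.31; nPB tracking itself uses truncated non-linear maps) shows the idea:

```python
import numpy as np

# One-turn linear map: rotation by the betatron phase advance 2*pi*nu
# in normalized (x, px) coordinates.  nu = 0.31 is an assumed tune.
nu = 0.31
mu = 2 * np.pi * nu
R = np.array([[np.cos(mu), np.sin(mu)],
              [-np.sin(mu), np.cos(mu)]])

turns = 1024
z = np.array([1e-3, 0.0])          # initial offset
x_turn = np.empty(turns)
for n in range(turns):
    z = R @ z                      # one application of the map = one turn
    x_turn[n] = z[0]

# Recover the fractional tune from the turn-by-turn position data.
spec = np.abs(np.fft.rfft(x_turn))
tune = float(np.fft.rfftfreq(turns)[spec.argmax()])   # ~ 0.31
```

With a non-linear one-turn map the same procedure yields amplitude-dependent tunes, which is what makes a fast map-based tracker useful for scanning the whole tune plane.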

  6. The radiation fields around a proton therapy facility: A comparison of Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Ottaviano, G.; Picardi, L.; Pillon, M.; Ronsivalle, C.; Sandri, S.

    2014-02-01

A proton therapy test facility with an average beam current lower than 10 nA and an energy up to 150 MeV is planned to be sited at the Frascati ENEA Research Center in Italy. The accelerator is composed of a sequence of linear sections. The first is a commercial 7 MeV proton linac, from which the beam is injected into a SCDTL (Side Coupled Drift Tube Linac) structure, reaching an energy of 52 MeV. A conventional CCL (Coupled Cavity Linac) with side coupling cavities then completes the accelerator. The linear structure has the important advantage that the main radiation losses during the acceleration process occur for protons with energies below 20 MeV, with a consequently low production of neutrons and secondary radiation. From the radiation protection point of view, the source of radiation for this facility is then almost completely located at the final target. Physical and geometrical models of the device have been developed and implemented in radiation transport computer codes based on the Monte Carlo method. The aim is to assess the radiation field around the main source in support of the safety analysis. For the assessment, independent researchers used two different Monte Carlo computer codes, FLUKA (FLUktuierende KAskade) and MCNPX (Monte Carlo N-Particle eXtended). Both are general-purpose tools for calculations of particle transport and interactions with matter, covering an extended range of applications including proton beam analysis. Nevertheless, each utilizes its own nuclear cross-section libraries and specific physics models for particle types and energies. The models implemented in the codes are described and the results are presented. The differences between the two calculations are reported and discussed, pointing out the advantages and disadvantages of each code in this specific application.

  7. Study of coherent synchrotron radiation effects by means of a new simulation code based on the non-linear extension of the operator splitting method

    NASA Astrophysics Data System (ADS)

    Dattoli, G.; Migliorati, M.; Schiavi, A.

    2007-05-01

Coherent synchrotron radiation (CSR) is one of the main problems limiting the performance of high-intensity electron accelerators. The complexity of the physical mechanisms underlying the onset of CSR-driven instabilities demands accurate descriptions, capable of including the large number of features of an actual accelerating device. A code devoted to the analysis of these types of problems should be fast and reliable, conditions that are rarely achieved at the same time. In the past, codes based on Lie algebraic techniques have been very efficient in treating transport problems in accelerators. The extension of these methods to the non-linear case is ideally suited to treating CSR instability problems. We report on the development of a numerical code, based on the solution of the Vlasov equation, that includes the non-linear contributions due to wake field effects. The proposed solution method exploits an algebraic technique based on exponential operators. We show that the integration procedure is capable of reproducing the onset of instability and the effects associated with the bunching mechanisms leading to the growth of the instability itself. In addition, considerations on the threshold of the instability are also developed.
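The exponential-operator splitting idea can be shown on the smallest non-trivial example: split the harmonic-oscillator generator into a drift and a kick, each with an exact matrix exponential (a shear), and compose them symmetrically. Halving the step should reduce the global error by roughly a factor of four, the signature of a second-order (Strang) composition. This is an illustration of the splitting technique only, not the paper's non-linear Vlasov solver.

```python
import numpy as np

def strang_error(dt, T=1.0):
    """Global error at time T of the Strang composition
    exp(dt/2 D) exp(dt K) exp(dt/2 D) applied to z' = (D + K) z, with
    drift D = [[0, 1], [0, 0]] and kick K = [[0, 0], [-1, 0]].
    Both factor exponentials are exact shear matrices."""
    n = int(round(T / dt))
    drift_half = np.array([[1.0, dt / 2.0], [0.0, 1.0]])   # exp(dt/2 D)
    kick = np.array([[1.0, 0.0], [-dt, 1.0]])              # exp(dt K)
    step = drift_half @ kick @ drift_half
    z = np.array([1.0, 0.0])
    for _ in range(n):
        z = step @ z
    exact = np.array([np.cos(T), -np.sin(T)])   # true rotation solution
    return float(np.linalg.norm(z - exact))

e1 = strang_error(0.01)
e2 = strang_error(0.005)
ratio = e1 / e2        # ~4: halving dt quarters the error (2nd order)
```

In the non-linear setting, the kick factor becomes the exponential of the wake-field operator evaluated on the current distribution, but the symmetric composition and its order of accuracy carry over.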

  8. Simulating Coupling Complexity in Space Plasmas: First Results from a new code

    NASA Astrophysics Data System (ADS)

    Kryukov, I.; Zank, G. P.; Pogorelov, N. V.; Raeder, J.; Ciardo, G.; Florinski, V. A.; Heerikhuisen, J.; Li, G.; Petrini, F.; Shematovich, V. I.; Winske, D.; Shaikh, D.; Webb, G. M.; Yee, H. M.

    2005-12-01

The development of codes that embrace 'coupling complexity' via the self-consistent incorporation of multiple physical scales and multiple physical processes in models has been identified by the NRC Decadal Survey in Solar and Space Physics as a crucial necessary development in simulation/modeling technology for the coming decade. The National Science Foundation, through its Information Technology Research (ITR) Program, is supporting our efforts to develop a new class of computational code for plasmas and neutral gases that integrates multiple scales and multiple physical processes and descriptions. We are developing a highly modular, parallelized, scalable code that incorporates multiple scales by synthesizing 3 simulation technologies: 1) computational fluid dynamics (hydrodynamics or magnetohydrodynamics, MHD) for the large-scale plasma; 2) direct Monte Carlo simulation of atoms/neutral gas; and 3) transport code solvers to model highly energetic particle distributions. We are constructing the code so that a fourth simulation technology, hybrid simulations for microscale structures and particle distributions, can be incorporated in future work, but for the present, this aspect will be addressed at a test-particle level. This synthesis will provide a computational tool that will advance our understanding of the physics of neutral and charged gases enormously. Besides making major advances in basic plasma physics and neutral gas problems, this project will address 3 Grand Challenge space physics problems that reflect our research interests: 1) To develop a temporal global heliospheric model which includes the interaction of solar and interstellar plasma with neutral populations (hydrogen, helium, etc., and dust), test-particle kinetic pickup ion acceleration at the termination shock, anomalous cosmic ray production, interaction with galactic cosmic rays, while incorporating the time variability of the solar wind and the solar cycle.
2) To develop a coronal mass ejection and interplanetary shock propagation model for the inner and outer heliosphere, including, at a test-particle level, wave-particle interactions and particle acceleration at traveling shock waves and compression regions. 3) To develop an advanced Geospace General Circulation Model (GGCM) capable of realistically modeling space weather events, in particular the interaction with CMEs and geomagnetic storms. Furthermore, by implementing scalable run-time supports and sophisticated off- and on-line prediction algorithms, we anticipate important advances in the development of automatic and intelligent system software to optimize a wide variety of 'embedded' computations on parallel computers. Finally, public domain MHD and hydrodynamic codes had a transforming effect on space and astrophysics. We expect that our new generation, open source, public domain multi-scale code will have a similar transformational effect in a variety of disciplines, opening up new classes of problems to physicists and engineers alike.

  9. Sci—Fri PM: Topics — 05: Experience with linac simulation software in a teaching environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlone, Marco; Harnett, Nicole; Jaffray, David

Medical linear accelerator education is usually restricted to use of academic textbooks and supervised access to accelerators. To facilitate the learning process, simulation software was developed to reproduce the effect of medical linear accelerator beam adjustments on resulting clinical photon beams. The purpose of this report is to briefly describe the method of operation of the software as well as the initial experience with it in a teaching environment. To first and higher orders, all components of medical linear accelerators can be described by analytical solutions. When appropriate calibrations are applied, these analytical solutions can accurately simulate the performance of all linear accelerator sub-components. Grouped together, an overall medical linear accelerator model can be constructed. Fifteen expressions in total were coded using MATLAB v 7.14. The program was called SIMAC. The SIMAC program was used in an accelerator technology course offered at our institution; 14 delegates attended the course. The professional breakdown of the participants was: 5 physics residents, 3 accelerator technologists, 4 regulators and 1 physics associate. The course consisted of didactic lectures supported by labs using SIMAC. At the conclusion of the course, eight of thirteen delegates were able to successfully perform advanced beam adjustments after two days of theory and use of the linac simulator program. We suggest that this demonstrates good proficiency in understanding the accelerator physics, which we hope will translate to a better ability to understand real-world beam adjustments on a functioning medical linear accelerator.
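A hypothetical illustration of the kind of first-order analytical relation such a simulator chains together (this is not one of SIMAC's fifteen coded expressions, and the function name is invented): the magnetic-rigidity relation p = qBρ, which links a bending-magnet field to the beam momentum it transmits.

```python
def momentum_mev_per_c(b_tesla, rho_m):
    """Beam momentum [MeV/c] selected by a bend of field B [T] and radius rho [m].

    Uses the standard rigidity relation p[MeV/c] = 299.792458 * B * rho.
    """
    return 299.792458 * b_tesla * rho_m
```

For example, a bend with rho = 0.1 m and B = 1 T transmits roughly 30 MeV/c electrons; sweeping B then mimics an energy-selection adjustment of the kind the course labs explore.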

  10. Charged particle tracking through electrostatic wire meshes using the finite element method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Devlin, L. J.; Karamyshev, O.; Welsch, C. P., E-mail: carsten.welsch@cockcroft.ac.uk

Wire meshes are used across many disciplines to accelerate and focus charged particles; however, analytical solutions are inexact, and few codes exist that simulate the exact fields around a mesh with physical wire sizes. A tracking code based in Matlab-Simulink, using field maps generated with finite element software, has been developed which tracks electrons or ions through electrostatic wire meshes. The fields around such a geometry can be presented as an analytical expression under several basic assumptions; however, it is apparent that computational calculations are required to obtain realistic values of electric potential and fields, particularly when multiple wire meshes are deployed. The tracking code is flexible in that any quantitatively describable particle distribution can be used for both electrons and ions, and it offers other benefits such as ease of export to other programs for analysis. The code is made freely available, and physical examples are highlighted where this code could be beneficial for different applications.
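A minimal sketch, not the authors' Matlab-Simulink code, of the gather-then-push structure such a field-map tracker uses: the tabulated field is linearly interpolated at the particle position, then the particle is advanced with an explicit push. The grid values, step sizes, and function names are illustrative.

```python
Q_E = -1.602e-19   # electron charge [C]
M_E = 9.109e-31    # electron mass [kg]

def field_at(z, z_grid, e_grid):
    """Linearly interpolate the tabulated field E(z) at position z."""
    if z <= z_grid[0]:
        return e_grid[0]
    if z >= z_grid[-1]:
        return e_grid[-1]
    for i in range(len(z_grid) - 1):
        if z_grid[i] <= z < z_grid[i + 1]:
            w = (z - z_grid[i]) / (z_grid[i + 1] - z_grid[i])
            return (1 - w) * e_grid[i] + w * e_grid[i + 1]

def track(z0, v0, z_grid, e_grid, dt, steps):
    """Advance one electron through the 1-D field map with an explicit push."""
    z, v = z0, v0
    for _ in range(steps):
        a = Q_E * field_at(z, z_grid, e_grid) / M_E
        v += a * dt
        z += v * dt
    return z, v
```

For a uniform field the result reduces to the analytic v = (qE/m)t, which is a convenient sanity check before running a realistic finite-element map.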

  11. Shielding analyses for repetitive high energy pulsed power accelerators

    NASA Astrophysics Data System (ADS)

    Jow, H. N.; Rao, D. V.

Sandia National Laboratories (SNL) designs, tests and operates a variety of accelerators that generate large amounts of high energy Bremsstrahlung radiation over an extended time. Typically, groups of similar accelerators are housed in a large building that is inaccessible to the general public. To facilitate independent operation of each accelerator, test cells are constructed around each accelerator to shield the radiation workers occupying surrounding test cells and work areas. These test cells, about 9 ft. high, are constructed of high density concrete block walls that provide direct radiation shielding. Above the target areas (radiation sources), lead or steel plates are used to minimize skyshine radiation. Space, accessibility and cost considerations impose certain restrictions on the design of these test cells. The SNL Health Physics division is tasked to evaluate the adequacy of each test cell design and compare resultant dose rates with the design criteria stated in DOE Order 5480.11. In response, SNL Health Physics has undertaken an intensive effort to assess existing radiation shielding codes and compare their predictions against measured dose rates. This paper provides a summary of the effort and its results.

  12. ICPP: Relativistic Plasma Physics with Ultra-Short High-Intensity Laser Pulses

    NASA Astrophysics Data System (ADS)

    Meyer-Ter-Vehn, Juergen

    2000-10-01

Recent progress in generating ultra-short high-intensity laser pulses has opened a new branch of relativistic plasma physics, which is discussed in this talk in terms of particle-in-cell (PIC) simulations. These pulses create small volumes of high-density plasma with fields above 10^12 V/m and 10^8 Gauss. At intensities beyond 10^18 W/cm^2, now available from table-top systems, they drive relativistic electron currents in self-focussing plasma channels. These currents are close to the Alfven limit and allow the study of relativistic current filamentation. A most remarkable feature is the generation of well-collimated relativistic electron beams emerging from the channels with energies up to GeV. In dense matter they trigger cascades of gamma-rays, e^+e^- pairs, and a host of nuclear and particle processes. One of the applications may be fast ignition of compressed inertial fusion targets. Above 10^23 W/cm^2, expected to be achieved in the future, solid-density matter becomes relativistically transparent for optical light, and the acceleration of protons to multi-GeV energies is predicted in plasma layers less than 1 mm thick. These results open completely new perspectives for plasma-based accelerator schemes. Three-dimensional PIC simulations turn out to be the superior tool to explore the relativistic plasma kinetics at such intensities. Results obtained with the VLPL code [1] are presented. Different mechanisms of particle acceleration are discussed. Both laser wakefield and direct laser acceleration in plasma channels (by a mechanism similar to inverse free electron lasers) have been identified. The latter describes recent MPQ experimental results. [1] A. Pukhov, J. Plasma Physics 61, 425 - 433 (1999): Three-dimensional electromagnetic relativistic particle-in-cell code VLPL (Virtual Laser Plasma Laboratory).

  13. Understanding large SEP events with the PATH code: Modeling of the 13 December 2006 SEP event

    NASA Astrophysics Data System (ADS)

    Verkhoglyadova, O. P.; Li, G.; Zank, G. P.; Hu, Q.; Cohen, C. M. S.; Mewaldt, R. A.; Mason, G. M.; Haggerty, D. K.; von Rosenvinge, T. T.; Looper, M. D.

    2010-12-01

The Particle Acceleration and Transport in the Heliosphere (PATH) numerical code was developed to understand solar energetic particle (SEP) events in the near-Earth environment. We discuss simulation results for the 13 December 2006 SEP event. The PATH code includes modeling a background solar wind through which a CME-driven oblique shock propagates. The code incorporates a mixed population of both flare and shock-accelerated solar wind suprathermal particles. The shock parameters derived from ACE measurements at 1 AU and observational flare characteristics are used as input into the numerical model. We assume that the diffusive shock acceleration mechanism is responsible for particle energization. We model the subsequent transport of particles originating at the flare site and particles escaping from the shock and propagating in the equatorial plane through the interplanetary medium. We derive spectra for protons, oxygen, and iron ions, together with their time-intensity profiles at 1 AU. Our modeling results show reasonable agreement with in situ measurements by ACE, STEREO, GOES, and SAMPEX for this event. We numerically estimate the Fe/O abundance ratio and discuss the physics underlying a mixed SEP event. We point out that the flare population is as important as shock geometry changes during shock propagation for modeling time-intensity profiles and spectra at 1 AU. The combined effects of seed population and shock geometry will be examined in the framework of an extended PATH code in future modeling efforts.
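For readers unfamiliar with the assumed energization mechanism, the standard test-particle result of diffusive shock acceleration (a textbook relation, not a quantity computed by PATH itself) gives the accelerated distribution as a power law f(p) ∝ p^(-q) whose index depends only on the shock compression ratio r:

```python
def dsa_spectral_index(r):
    """Power-law index q of f(p) ~ p**(-q) from test-particle DSA,
    for a shock with compression ratio r: q = 3r / (r - 1)."""
    if r <= 1.0:
        raise ValueError("compression ratio must exceed 1")
    return 3.0 * r / (r - 1.0)
```

A strong gas-dynamic shock (r = 4) gives the canonical q = 4; weaker or oblique shocks, like the one modeled here, give steeper spectra.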

  14. MuSim, a Graphical User Interface for Multiple Simulation Programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roberts, Thomas; Cummings, Mary Anne; Johnson, Rolland

    2016-06-01

MuSim is a new user-friendly program designed to interface to many different particle simulation codes, regardless of their data formats or geometry descriptions. It presents the user with a compelling graphical user interface that includes a flexible 3-D view of the simulated world plus powerful editing and drag-and-drop capabilities. All aspects of the design can be parametrized so that parameter scans and optimizations are easy. It is simple to create plots and display events in the 3-D viewer (with a slider to vary the transparency of solids), allowing for an effortless comparison of different simulation codes. Simulation codes: G4beamline, MAD-X, and MCNP; more coming. Many accelerator design tools and beam optics codes were written long ago, with primitive user interfaces by today's standards. MuSim is specifically designed to make it easy to interface to such codes, providing a common user experience for all, and permitting the construction and exploration of models with very little overhead. For today's technology-driven students, graphical interfaces meet their expectations far better than text-based tools, and education in accelerator physics is one of our primary goals.

  15. Recent Improvements of Particle and Heavy Ion Transport code System: PHITS

    NASA Astrophysics Data System (ADS)

    Sato, Tatsuhiko; Niita, Koji; Iwamoto, Yosuke; Hashimoto, Shintaro; Ogawa, Tatsuhiko; Furuta, Takuya; Abe, Shin-ichiro; Kai, Takeshi; Matsuda, Norihiro; Okumura, Keisuke; Kai, Tetsuya; Iwase, Hiroshi; Sihver, Lembit

    2017-09-01

The Particle and Heavy Ion Transport code System, PHITS, has been developed under the collaboration of several research institutes in Japan and Europe. This system can simulate the transport of most particles with energies up to 1 TeV (per nucleon for ions) using different nuclear reaction models and data libraries. More than 2,500 registered researchers and technicians have used this system for various applications such as accelerator design, radiation shielding and protection, medical physics, and space- and geo-sciences. This paper summarizes the physics models and functions recently implemented in PHITS, between versions 2.52 and 2.88, especially those related to source generation useful for simulating brachytherapy and internal exposures of radioisotopes.

  16. Influence of Ionization and Beam Quality on Interaction of TW-Peak CO2 Laser with Hydrogen Plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samulyak, Roman

3D numerical simulations of the interaction of a powerful CO2 laser with hydrogen jets demonstrating the role of ionization and laser beam quality are presented. Simulations are performed in support of the plasma wakefield accelerator experiments being conducted at the BNL Accelerator Test Facility (ATF). The CO2 laser at BNL ATF has several potential advantages for laser wakefield acceleration compared to widely used solid-state lasers. SPACE, a parallel relativistic Particle-in-Cell code, developed at SBU and BNL, has been used in these studies. A novelty of the code is its set of efficient atomic physics algorithms that compute ionization and recombination rates on the grid and transfer them to particles. The primary goal of the initial BNL experiments was to characterize the plasma density by measuring the sidebands in the spectrum of the probe laser. Simulations, that resolve hydrogen ionization and laser spectra, help explain several trends that were observed in the experiments.
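A minimal sketch of the grid-to-particle transfer the abstract describes: a quantity computed on the grid (here, some rate per cell node) is gathered onto each particle with linear, cloud-in-cell weights. The names and values are illustrative and are not taken from the SPACE code.

```python
def gather(x_particles, rate_grid, dx):
    """Linearly interpolate a grid-defined rate onto 1-D particle positions.

    rate_grid holds node values on a uniform grid of spacing dx.
    """
    rates = []
    for x in x_particles:
        i = int(x / dx)          # index of the cell's left node
        w = x / dx - i           # fractional distance into the cell
        rates.append((1 - w) * rate_grid[i] + w * rate_grid[i + 1])
    return rates
```

The same weights, applied in reverse, deposit particle quantities back onto the grid, which is the other half of a standard PIC cycle.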

  17. HEPLIB '91: International users meeting on the support and environments of high energy physics computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnstad, H.

The purpose of this meeting is to discuss the current and future HEP computing support and environments from the perspective of new horizons in accelerator, physics, and computing technologies. Topics of interest to the Meeting include (but are not limited to): the forming of the HEPLIB world user group for High Energy Physics computing; mandate, desirables, coordination, organization, funding; user experience, international collaboration; the roles of national labs, universities, and industry; range of software, Monte Carlo, mathematics, physics, interactive analysis, text processors, editors, graphics, data base systems, code management tools; program libraries, frequency of updates, distribution; distributed and interactive computing, data base systems, user interface, UNIX operating systems, networking, compilers, Xlib, X-Graphics; documentation, updates, availability, distribution; code management in large collaborations, keeping track of program versions; and quality assurance, testing, conventions, standards.

  19. Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, L.M.; Hochstedler, R.D.

    1997-02-01

Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code).
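A sketch of the first substitution the abstract names, in Python rather than the FORTRAN of ITS: replacing a linear scan of a sorted grid (e.g., an energy grid used for cross-section lookup) with a binary search. Both functions return the index of the last grid point not exceeding x; only the cost differs, O(n) versus O(log n).

```python
import bisect

def linear_lookup(grid, x):
    """Linear scan: index of the last grid point <= x."""
    i = 0
    while i + 1 < len(grid) and grid[i + 1] <= x:
        i += 1
    return i

def binary_lookup(grid, x):
    """Binary search giving the same index via the stdlib bisect module."""
    return bisect.bisect_right(grid, x) - 1
```

Since transport codes perform such lookups for every collision of every history, this kind of replacement alone can account for a sizable share of the reported speed-ups.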

  20. Modeling laser-driven electron acceleration using WARP with Fourier decomposition

    DOE PAGES

    Lee, P.; Audet, T. L.; Lehe, R.; ...

    2015-12-31

    WARP is used with the recent implementation of the Fourier decomposition algorithm to model laser-driven electron acceleration in plasmas. Simulations were carried out to analyze the experimental results obtained on ionization-induced injection in a gas cell. The simulated results are in good agreement with the experimental ones, confirming the ability of the code to take into account the physics of electron injection and reduce calculation time. We present a detailed analysis of the laser propagation, the plasma wave generation and the electron beam dynamics.

  2. The EGS4 Code System: Solution of Gamma-ray and Electron Transport Problems

    DOE R&D Accomplishments Database

    Nelson, W. R.; Namito, Yoshihito

    1990-03-01

    In this paper we present an overview of the EGS4 Code System -- a general purpose package for the Monte Carlo simulation of the transport of electrons and photons. During the last 10-15 years EGS has been widely used to design accelerators and detectors for high-energy physics. More recently the code has been found to be of tremendous use in medical radiation physics and dosimetry. The problem-solving capabilities of EGS4 will be demonstrated by means of a variety of practical examples. To facilitate this review, we will take advantage of a new add-on package, called SHOWGRAF, to display particle trajectories in complicated geometries. These are shown as 2-D laser pictures in the written paper and as photographic slides of a 3-D high-resolution color monitor during the oral presentation. 11 refs., 15 figs.

  3. Parallel and Portable Monte Carlo Particle Transport

    NASA Astrophysics Data System (ADS)

    Lee, S. R.; Cummings, J. C.; Nolen, S. D.; Keen, N. D.

    1997-08-01

    We have developed a multi-group, Monte Carlo neutron transport code in C++ using object-oriented methods and the Parallel Object-Oriented Methods and Applications (POOMA) class library. This transport code, called MC++, currently computes k and α eigenvalues of the neutron transport equation on a rectilinear computational mesh. It is portable to and runs in parallel on a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and, through the use of POOMA, for portable parallelism. Current capabilities are discussed, along with physics and performance results for several test problems on a variety of hardware, including all three Accelerated Strategic Computing Initiative (ASCI) platforms. Current parallel performance indicates the ability to compute α-eigenvalues in seconds or minutes rather than days or weeks. Current and future work on the implementation of a general transport physics framework (TPF) is also described. This TPF employs modern C++ programming techniques to provide simplified user interfaces, generic STL-style programming, and compile-time performance optimization. Physics capabilities of the TPF will be extended to include continuous energy treatments, implicit Monte Carlo algorithms, and a variety of convergence acceleration techniques such as importance combing.
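Not the MC++ algorithm itself, but the deterministic analogue its k-eigenvalue search is built around: k is the dominant eigenvalue of the fission-source iteration, estimated here by power iteration on a toy 2x2 operator (matrix values are purely illustrative).

```python
def power_iteration(matrix, iters=200):
    """Estimate the dominant eigenvalue of a small nonnegative matrix.

    Mirrors the source-iteration structure of a k-eigenvalue solve:
    apply the operator, normalize, repeat until the ratio converges.
    """
    v = [1.0] * len(matrix)
    k = 1.0
    for _ in range(iters):
        w = [sum(row[j] * v[j] for j in range(len(v))) for row in matrix]
        k = max(abs(x) for x in w)        # normalize by the dominant entry
        v = [x / k for x in w]
    return k
```

In a Monte Carlo code the matrix-vector product is replaced by transporting a generation of particle histories, but the outer iteration has the same shape.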

  4. Unified Models of Turbulence and Nonlinear Wave Evolution in the Extended Solar Corona and Solar Wind

    NASA Technical Reports Server (NTRS)

    Cranmer, Steven R.; Wagner, William (Technical Monitor)

    2004-01-01

The PI (Cranmer) and Co-I (A. van Ballegooijen) made substantial progress toward the goal of producing a unified model of the basic physical processes responsible for solar wind acceleration. The approach outlined in the original proposal comprised two complementary pieces: (1) to further investigate individual physical processes under realistic coronal and solar wind conditions, and (2) to extract the dominant physical effects from simulations and apply them to a 1D model of plasma heating and acceleration. The accomplishments in Year 2 are divided into these two categories: 1a. Focused Study of Kinetic Magnetohydrodynamic (MHD) Turbulence; 1b. Focused Study of Non-WKB Alfven Wave Reflection; and 2. The Unified Model Code. We have continued the development of the computational model of a time-steady open flux tube in the extended corona. The proton-electron Monte Carlo model is being tested, and collisionless wave-particle interactions are being included. In order to better understand how to easily incorporate various kinds of wave-particle processes into the code, the PI performed a detailed study of the so-called "Ito Calculus", i.e., the mathematical theory of how to update the positions of particles in a probabilistic manner when their motions are governed by diffusion in velocity space.

  5. LHC@Home: a BOINC-based volunteer computing infrastructure for physics studies at CERN

    NASA Astrophysics Data System (ADS)

    Barranco, Javier; Cai, Yunhai; Cameron, David; Crouch, Matthew; Maria, Riccardo De; Field, Laurence; Giovannozzi, Massimo; Hermes, Pascal; Høimyr, Nils; Kaltchev, Dobrin; Karastathis, Nikos; Luzzi, Cinzia; Maclean, Ewen; McIntosh, Eric; Mereghetti, Alessio; Molson, James; Nosochkov, Yuri; Pieloni, Tatiana; Reid, Ivan D.; Rivkin, Lenny; Segal, Ben; Sjobak, Kyrre; Skands, Peter; Tambasco, Claudia; Veken, Frederik Van der; Zacharov, Igor

    2017-12-01

The LHC@Home BOINC project has provided computing capacity for numerical simulations to researchers at CERN since 2004, and has since 2011 been expanded with a wider range of applications. The traditional CERN accelerator physics simulation code SixTrack enjoys continuing volunteer support, and thanks to virtualisation a number of applications from the LHC experiment collaborations and particle theory groups have joined the consolidated LHC@Home BOINC project. This paper addresses the challenges related to traditional and virtualized applications in the BOINC environment, and how volunteer computing has been integrated into the overall computing strategy of the laboratory through the consolidated LHC@Home service. Thanks to the computing power provided by volunteers joining LHC@Home, numerous accelerator beam physics studies have been carried out, yielding an improved understanding of charged particle dynamics in the CERN Large Hadron Collider (LHC) and its future upgrades. The main results are highlighted in this paper.

  6. Proton and Ion Acceleration using Multi-kJ Lasers

    NASA Astrophysics Data System (ADS)

    Wilks, S. C.; Ma, T.; Kemp, A. J.; Tabak, M.; Link, A. J.; Haefner, C.; Hermann, M. R.; Mariscal, D. A.; Rubenchik, S.; Sterne, P.; Kim, J.; McGuffey, C.; Bhutwala, K.; Beg, F.; Wei, M.; Kerr, S. M.; Sentoku, Y.; Iwata, N.; Norreys, P.; Sevin, A.

    2017-10-01

Short (<50 ps) laser pulses are capable of accelerating protons and ions from solid (or dense gas jet) targets, as demonstrated over the past 20 years by a number of laser facilities around the world that have accelerated protons to between 1 and 100 MeV, depending on specific laser parameters. Over this time, a distinct scaling has emerged showing a trend towards increasing maximum accelerated proton (ion) energy with increasing laser energy. We consider the physical basis underlying this scaling, and use this to estimate future results when multi-kJ laser systems begin operating in this new high energy regime. In particular, we consider the effects of laser prepulse, intensity, energy, and pulse length on the number and energy of the ions, as well as target size and composition. We also discuss potential uses of these ion beams in High Energy Density Physics experiments. This work was performed under the auspices of the U.S. Department of Energy (DOE) by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and funded by the LLNL LDRD program under tracking code 17-ERD-039.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedman, A.; Barnard, J. J.; Cohen, R. H.

The Heavy Ion Fusion Science Virtual National Laboratory (a collaboration of LBNL, LLNL, and PPPL) is using intense ion beams to heat thin foils to the "warm dense matter" regime at ≲1 eV, and is developing capabilities for studying target physics relevant to ion-driven inertial fusion energy. The need for rapid target heating led to the development of plasma-neutralized pulse compression, with current amplification factors exceeding 50 now routine on the Neutralized Drift Compression Experiment (NDCX). Construction of an improved platform, NDCX-II, has begun at LBNL with planned completion in 2012. Using refurbished induction cells from the Advanced Test Accelerator at LLNL, NDCX-II will compress a ~500 ns pulse of Li+ ions to ~1 ns while accelerating it to 3-4 MeV over ~15 m. Strong space charge forces are incorporated into the machine design at a fundamental level. We are using analysis, an interactive 1D PIC code (ASP) with optimizing capabilities and centroid tracking, and multi-dimensional Warp PIC simulations to develop the NDCX-II accelerator. This paper describes the computational models employed, and the resulting physics design for the accelerator.

  9. Microphysics of Waves and Instabilities in the Solar Wind and their Macro Manifestations in the Corona and Interplanetary Space

    NASA Technical Reports Server (NTRS)

    Gurman, Joseph (Technical Monitor); Habbal, Shadia Rifai

    2004-01-01

    Investigations of the physical processes responsible for coronal heating and the acceleration of the solar wind were pursued with the use of our recently developed 2D MHD solar wind code and our 1D multifluid code. In particular, we explored (1) the role of proton temperature anisotropy in the expansion of the solar wind, (2) the role of plasma parameters at the coronal base in the formation of high speed solar wind streams at mid-latitudes, and (3) the heating of coronal loops.

  10. Study of beam optics and beam halo by integrated modeling of negative ion beams from plasma meniscus formation to beam acceleration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miyamoto, K.; Okuda, S.; Hatayama, A.

    2013-01-14

    To understand the physical mechanism of the beam halo formation in negative ion beams, a two-dimensional particle-in-cell code for simulating the trajectories of negative ions created via surface production has been developed. The simulation code reproduces a beam halo observed in an actual negative ion beam. The negative ions extracted from the periphery of the plasma meniscus (an electro-static lens in a source plasma) are over-focused in the extractor due to large curvature of the meniscus.

  11. Summary Report of Working Group 2: Computation

    NASA Astrophysics Data System (ADS)

    Stoltz, P. H.; Tsung, R. S.

    2009-01-01

The working group on computation addressed three physics areas: (i) plasma-based accelerators (laser-driven and beam-driven), (ii) high gradient structure-based accelerators, and (iii) electron beam sources and transport [1]. Highlights of the talks in these areas included new models of breakdown on the microscopic scale, new three-dimensional multipacting calculations with both finite difference and finite element codes, and detailed comparisons of new electron gun models with standard models such as PARMELA. The group also addressed two areas of advances in computation: (i) new algorithms, including simulation in a Lorentz-boosted frame that can reduce computation time by orders of magnitude, and (ii) new hardware architectures, like graphics processing units and Cell processors, that promise dramatic increases in computing power. Highlights of the talks in these areas included results from the first large-scale parallel finite element particle-in-cell (PIC) code, a many-order-of-magnitude speedup, and details of porting the VPIC code to the Roadrunner supercomputer. The working group featured two plenary talks, one by Brian Albright of Los Alamos National Laboratory on the performance of the VPIC code on the Roadrunner supercomputer, and one by David Bruhwiler of Tech-X Corporation on recent advances in computation for advanced accelerators. Highlights of the talk by Albright included the first one-trillion-particle simulations, a sustained performance of 0.3 petaflops, and an eightfold speedup of science calculations, including back-scatter in laser-plasma interaction. Highlights of the talk by Bruhwiler included simulations of 10 GeV laser wakefield accelerator stages including external injection, and new developments in electromagnetic simulations of electron guns using finite difference and finite element approaches.
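A rough illustration of why the Lorentz-boosted frame helps: the disparity between the laser wavelength and the acceleration length contracts in the boosted frame, and for wakefield problems the expected speedup is commonly estimated to scale as (1 + β)²γ². This scaling is a widely quoted estimate from the boosted-frame literature, not a number reported in these talks.

```python
import math

def boosted_frame_speedup(gamma_boost):
    """Commonly quoted speedup estimate (1 + beta)**2 * gamma**2
    for simulating a wakefield stage in a frame boosted by gamma_boost."""
    beta = math.sqrt(1.0 - 1.0 / gamma_boost**2)
    return (1.0 + beta)**2 * gamma_boost**2
```

Even a modest boost of gamma = 10 suggests a speedup of several hundred, which is why the technique can turn week-long runs into hour-long ones.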

  12. Summary Report of Working Group 2: Computation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoltz, P. H.; Tsung, R. S.

    2009-01-22

    The working group on computation addressed three physics areas: (i) plasma-based accelerators (laser-driven and beam-driven), (ii) high-gradient structure-based accelerators, and (iii) electron beam sources and transport [1]. Highlights of the talks in these areas included new models of breakdown on the microscopic scale, new three-dimensional multipacting calculations with both finite difference and finite element codes, and detailed comparisons of new electron gun models with standard models such as PARMELA. The group also addressed two areas of advances in computation: (i) new algorithms, including simulation in a Lorentz-boosted frame that can reduce computation time by orders of magnitude, and (ii) new hardware architectures, such as graphics processing units and Cell processors, that promise dramatic increases in computing power. Highlights of the talks in these areas included results from the first large-scale parallel finite element particle-in-cell (PIC) code, speedups of many orders of magnitude, and details of porting the VPIC code to the Roadrunner supercomputer. The working group featured two plenary talks, one by Brian Albright of Los Alamos National Laboratory on the performance of the VPIC code on the Roadrunner supercomputer, and one by David Bruhwiler of Tech-X Corporation on recent advances in computation for advanced accelerators. Highlights of the talk by Albright included the first one-trillion-particle simulations, a sustained performance of 0.3 petaflops, and an eightfold speedup of science calculations, including back-scatter in laser-plasma interaction. Highlights of the talk by Bruhwiler included simulations of 10 GeV laser wakefield accelerator stages including external injection, and new developments in electromagnetic simulations of electron guns using finite difference and finite element approaches.

  13. JASMIN: Japanese-American study of muon interactions and neutron detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakashima, Hiroshi; /JAEA, Ibaraki; Mokhov, N.V.

    Experimental studies of shielding and radiation effects at Fermi National Accelerator Laboratory (FNAL) have been carried out under a collaboration between FNAL and Japan, aimed at benchmarking simulation codes and studying irradiation effects for the upgrade and design of new high-energy accelerator facilities. The purposes of this collaboration are (1) acquisition of shielding data in a proton beam energy domain above 100 GeV; (2) further evaluation of the predictive accuracy of the PHITS and MARS codes; (3) modification of physics models and data in these codes if needed; (4) establishment of an irradiation field for radiation effect tests; and (5) development of a code module for improved description of radiation effects. A series of experiments has been performed at the Pbar target station and the NuMI facility, using irradiation of targets with 120 GeV protons for antiproton and neutrino production, as well as the M-test beam line (M-test) for measuring nuclear data and detector responses. Various nuclear and shielding data have been measured by activation methods with chemical separation techniques as well as by other detectors such as a Bonner ball counter. Analyses of the experimental data are in progress for benchmarking the PHITS and MARS15 codes. In this presentation, recent activities and results are reviewed.

  14. Efficient Modeling of Laser-Plasma Accelerators with INF&RNO

    NASA Astrophysics Data System (ADS)

    Benedetti, C.; Schroeder, C. B.; Esarey, E.; Geddes, C. G. R.; Leemans, W. P.

    2010-11-01

    The numerical modeling code INF&RNO (INtegrated Fluid & paRticle simulatioN cOde, pronounced "inferno") is presented. INF&RNO is an efficient 2D cylindrical code to model the interaction of a short laser pulse with an underdense plasma. The code is based on an envelope model for the laser, while either a PIC or a fluid description can be used for the plasma. The effect of the laser pulse on the plasma is modeled with the time-averaged ponderomotive force. These and other features allow for a speedup of 2-4 orders of magnitude compared to standard full PIC simulations while still retaining physical fidelity. The code has been benchmarked against analytical solutions and 3D PIC simulations, and a set of validation tests, together with a discussion of performance, is presented here.
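    For reference, the time-averaged ponderomotive force mentioned above has, for an electron oscillating in a laser field of amplitude E_0 and frequency ω, the standard textbook form (the normalization used inside INF&RNO's envelope model may differ):

```latex
\mathbf{F}_p = -\frac{e^2}{4 m_e \omega^2}\,\nabla E_0^2
```

    Because the force scales with the gradient of the squared field amplitude rather than the rapidly oscillating field itself, an envelope code can step over the laser period entirely, which is the origin of the quoted 2-4 order-of-magnitude speedup.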

  15. Relaunch of the Interactive Plasma Physics Educational Experience (IPPEX)

    NASA Astrophysics Data System (ADS)

    Dominguez, A.; Rusaitis, L.; Zwicker, A.; Stotler, D. P.

    2015-11-01

    In the late 1990s, PPPL's Science Education Department developed an innovative online site called the Interactive Plasma Physics Educational Experience (IPPEX). It featured (among other modules) two Java-based applications that simulated tokamak physics: a steady-state tokamak (SST) and a time-dependent tokamak (TDT). The physics underlying the SST and the TDT is based on the ASPECT code, a global power balance code developed to evaluate the performance of fusion reactor designs. We have relaunched the IPPEX site with updated modules and functionality: The site itself is now dynamic on all platforms. The graphic design of the site has been updated to current standards. The virtual tokamak has been reprogrammed in JavaScript, taking advantage of the speed and compactness of the code. The GUI of the tokamak has been completely redesigned, including more intuitive representations of changes in the plasma, e.g., particles moving along magnetic field lines. The use of GPU-accelerated computation provides accurate and smooth visual representations of the plasma. We will present the current version of IPPEX as well as near-term plans for incorporating real-time NSTX-U data into the simulation.
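    The global power balance at the heart of such a steady-state tokamak model reduces, in its simplest zero-dimensional form, to a one-line relation (an illustrative sketch with hypothetical numbers, not the actual ASPECT model):

```python
# Zero-dimensional steady-state power balance: dW/dt = P_heat - W / tau_E.
# In steady state the stored energy is simply W = P_heat * tau_E.
def stored_energy_MJ(p_heat_MW, tau_E_s):
    """Stored plasma energy (MJ) for heating power (MW) and confinement time (s)."""
    return p_heat_MW * tau_E_s

# Hypothetical reactor-scale numbers: 50 MW of heating, 3.7 s energy confinement
print(stored_energy_MJ(50.0, 3.7))  # stored energy in MJ
```

    A real power-balance code adds radiation losses, density and temperature profiles, and confinement-time scalings on top of this skeleton.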

  16. Hybrid Vlasov simulations for alpha particles heating in the solar wind

    NASA Astrophysics Data System (ADS)

    Perrone, Denise; Valentini, Francesco; Veltri, Pierluigi

    2011-06-01

    Heating and acceleration of heavy ions in the solar wind and corona represent a long-standing theoretical problem in space physics and are distinct experimental signatures of kinetic processes occurring in collisionless plasmas. To address this problem, we propose the use of a low-noise hybrid-Vlasov code in a four-dimensional phase space configuration (1D in physical space and 3D in velocity space). We trigger a turbulent cascade by injecting energy at large wavelengths and analyze the role of kinetic effects in the development of the energy spectra. Following the evolution of both the proton and α-particle distribution functions shows that both ion species depart significantly from Maxwellian equilibrium, with the appearance of beams of accelerated particles in the direction parallel to the background magnetic field.

  17. Flexible Automatic Discretization for Finite Differences: Eliminating the Human Factor

    NASA Astrophysics Data System (ADS)

    Pranger, Casper

    2017-04-01

    In the geophysical numerical modelling community, finite differences are (in part due to their small footprint) a popular spatial discretization method for PDEs in the regular-shaped continuum that is the earth. However, they rapidly become prone to programming mistakes as the physics increases in complexity. To eliminate opportunities for human error, we have designed an automatic discretization algorithm using Wolfram Mathematica, in which the user supplies symbolic PDEs, the number of spatial dimensions, and a choice of symbolic boundary conditions, and the script transforms this information into matrix- and right-hand-side rules ready for use in a C++ code that will accept them. The symbolic PDEs are further used to automatically develop and perform manufactured-solution benchmarks, ensuring physical fidelity at all stages while providing pragmatic targets for numerical accuracy. We find that this procedure greatly accelerates code development and provides a great deal of flexibility in one's choice of physics.

  18. CISP: Simulation Platform for Collective Instabilities in the BRing of HIAF project

    NASA Astrophysics Data System (ADS)

    Liu, J.; Yang, J. C.; Xia, J. W.; Yin, D. Y.; Shen, G. D.; Li, P.; Zhao, H.; Ruan, S.; Wu, B.

    2018-02-01

    To simulate collective instabilities during the complicated beam manipulation in the BRing (Booster Ring) of HIAF (High Intensity heavy-ion Accelerator Facility) or other high intensity accelerators, a code named CISP (Simulation Platform for Collective Instabilities) has been designed and constructed at the Institute of Modern Physics (IMP) in China. CISP is a scalable multi-macroparticle simulation platform that can perform longitudinal and transverse tracking with chromaticity, space-charge effects, nonlinear magnets, and wakes included. Owing to its object-oriented design, CISP also serves as a base platform for developing many other applications (such as feedback). Several simulations presented in this paper agree well with analytical results, showing that CISP is fully functional and a powerful platform for further collective-instability research in the BRing or other accelerators. In the future, CISP can also be extended easily into a physics control system for HIAF or other facilities.
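    The core of any such macroparticle tracking platform is the repeated application of a one-turn map to an ensemble of particles. A toy sketch (hypothetical tune; no wakes or space charge, which a platform like CISP layers on top) shows the basic invariant such tracking must respect:

```python
import numpy as np

# One-turn linear map in normalized transverse phase space (x, x'):
# a pure rotation by the betatron phase advance 2*pi*tune.
tune = 0.31                     # hypothetical fractional tune
mu = 2 * np.pi * tune
M = np.array([[np.cos(mu),  np.sin(mu)],
              [-np.sin(mu), np.cos(mu)]])

rng = np.random.default_rng(0)
coords = rng.normal(size=(10000, 2))        # 10k macroparticles

emit0 = np.mean(coords[:, 0]**2 + coords[:, 1]**2)
for turn in range(1000):                    # track 1000 turns
    coords = coords @ M.T
emit1 = np.mean(coords[:, 0]**2 + coords[:, 1]**2)
# A symplectic (rotation) map preserves the rms emittance
print(emit0, emit1)
```

    Collective-instability studies then add turn-by-turn kicks from wake fields and space charge between applications of the map, which is where emittance growth and instability thresholds emerge.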

  19. Bunch modulation in LWFA blowout regime

    NASA Astrophysics Data System (ADS)

    Vyskočil, Jiří; Klimo, Ondřej; Vieira, Jorge; Korn, Georg

    2015-05-01

    Laser wakefield acceleration (LWFA) is able to produce high quality electron bunches interesting for many applications ranging from coherent light sources to high energy physics. The blowout regime of LWFA provides an excellent accelerating structure, able to maintain small transverse emittance and energy spread of the accelerating electron beam if combined with localized injection. A modulation of the back of a self-injected electron bunch in the blowout regime appears in 3D particle-in-cell simulations with the code OSIRIS. The shape of the modulation is connected to the polarization of the driving laser pulse, although the wavelength of the modulation is longer than that of the pulse. In particular, a circularly polarized laser pulse leads to a corkscrew-like modulation, while in the case of linear polarization the modulation lies in the polarization plane.

  20. Community Petascale Project for Accelerator Science and Simulation: Advancing Computational Science for Future Accelerators and Accelerator Technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spentzouris, P.; /Fermilab; Cary, J.

    The design and performance optimization of particle accelerators are essential for the success of the DOE scientific program in the next decade. Particle accelerators are very complex systems whose accurate description involves a large number of degrees of freedom and requires the inclusion of many physics processes. Building on the success of the SciDAC-1 Accelerator Science and Technology project, the SciDAC-2 Community Petascale Project for Accelerator Science and Simulation (ComPASS) is developing a comprehensive set of interoperable components for beam dynamics, electromagnetics, electron cooling, and laser/plasma acceleration modeling. ComPASS is providing accelerator scientists the tools required to enable the necessary accelerator simulation paradigm shift from high-fidelity single-physics process modeling (covered under SciDAC-1) to high-fidelity multiphysics modeling. Our computational frameworks have been used to model the behavior of a large number of accelerators and accelerator R&D experiments, assisting both their design and performance optimization. As parallel computational applications, the ComPASS codes have been shown to make effective use of thousands of processors. ComPASS is in the first year of executing its plan to develop the next-generation HPC accelerator modeling tools. ComPASS aims to develop an integrated simulation environment that will utilize existing and new accelerator physics modules with petascale capabilities, by employing modern computing and solver technologies. The ComPASS vision is to deliver to accelerator scientists a virtual accelerator and virtual prototyping modeling environment, with the necessary multiphysics, multiscale capabilities. The plan for this development includes delivering accelerator modeling applications appropriate for each stage of the ComPASS software evolution. Such applications are already being used to address challenging problems in accelerator design and optimization.
The ComPASS organization for software development and applications accounts for the natural domain areas (beam dynamics, electromagnetics, and advanced acceleration), and all areas depend on the enabling technologies activities, such as solvers and component technology, to deliver the desired performance and integrated simulation environment. The ComPASS applications focus on computationally challenging problems important for the design or performance optimization of all major HEP, NP, and BES accelerator facilities. With the cost and complexity of particle accelerators rising, the use of computation to optimize their designs and find improved operating regimes becomes essential, potentially leading to significant cost savings with modest investment.

  1. Forward and adjoint spectral-element simulations of seismic wave propagation using hardware accelerators

    NASA Astrophysics Data System (ADS)

    Peter, Daniel; Videau, Brice; Pouget, Kevin; Komatitsch, Dimitri

    2015-04-01

    Improving the resolution of tomographic images is crucial to answer important questions on the nature of Earth's subsurface structure and internal processes. Seismic tomography is the most prominent approach, in which seismic signals from ground-motion records are used to infer physical properties of internal structures such as compressional- and shear-wave speeds, anisotropy and attenuation. Recent advances in regional- and global-scale seismic inversions move towards full-waveform inversions, which require accurate simulations of seismic wave propagation in complex 3D media, providing access to the full 3D seismic wavefields. However, these numerical simulations are computationally very expensive and need high-performance computing (HPC) facilities for further improving the current state of knowledge. During recent years, many-core architectures such as graphics processing units (GPUs) have been added to available large HPC systems. Such GPU-accelerated computing, together with advances in multi-core central processing units (CPUs), can greatly accelerate scientific applications. There are mainly two choices of language support for GPU cards: the CUDA programming environment and the OpenCL language standard. CUDA software development targets NVIDIA graphics cards, while OpenCL is supported mainly by AMD graphics cards. In order to employ such hardware accelerators for seismic wave propagation simulations, we incorporated the code-generation tool BOAST into the existing spectral-element code package SPECFEM3D_GLOBE. This allows us to use meta-programming of computational kernels and generate optimized source code for both the CUDA and OpenCL languages, running simulations on either CUDA or OpenCL hardware accelerators. We show here applications of forward and adjoint seismic wave propagation on CUDA/OpenCL GPUs, validating results and comparing performance for different simulations and hardware usage.
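    The single-source kernel meta-programming idea can be illustrated in a few lines of Python. This is not BOAST's actual API, only a sketch of specializing one kernel template to the CUDA and OpenCL dialects:

```python
# One kernel template, specialized per GPU dialect at generation time.
# Doubled braces {{ }} are literal braces in the emitted C-like source.
TEMPLATE = """{qualifier} void saxpy({global_q} float *x, {global_q} float *y,
                                     float a, int n) {{
    int i = {index_expr};
    if (i < n) y[i] = a * x[i] + y[i];
}}"""

DIALECTS = {
    "cuda": dict(qualifier="__global__", global_q="",
                 index_expr="blockIdx.x * blockDim.x + threadIdx.x"),
    "opencl": dict(qualifier="__kernel", global_q="__global",
                   index_expr="get_global_id(0)"),
}

def generate(dialect):
    """Emit kernel source text for the requested GPU dialect."""
    return TEMPLATE.format(**DIALECTS[dialect])

print(generate("cuda"))
print(generate("opencl"))
```

    A real generator like BOAST additionally handles loop unrolling, memory-space annotations, and per-architecture tuning, but the payoff is the same: one maintained kernel description, many optimized backends.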

  2. Stokes versus Basset: comparison of forces governing motion of small bodies with high acceleration

    NASA Astrophysics Data System (ADS)

    Krafcik, A.; Babinec, P.; Frollo, I.

    2018-05-01

    In this paper, the importance of the forces governing the motion of a millimetre-sized sphere in a viscous fluid has been examined. As has been shown previously, for spheres moving with a high initial acceleration, the Basset history force should be used as well as the commonly used Stokes force. This paper introduces the concept of history forces, which are almost unknown to students despite their interesting mathematical structure and physical meaning, and shows the implementation of simple and efficient numerical methods as a MATLAB code to simulate the motion of a falling sphere. An important application of this code could be, for example, the simulation of microfluidic systems, where the external forces are very large and the relevant timescale is on the order of milliseconds to seconds, and therefore the Basset history force cannot be neglected.
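    In the Stokes-only limit the equation of motion is linear and has a closed-form solution, which makes a convenient check on any numerical scheme before the Basset history integral is added. A Python sketch with hypothetical sphere parameters (not the paper's MATLAB code); the Basset term would add an integral over the past acceleration weighted by 1/sqrt(t - tau):

```python
import numpy as np

# Hypothetical parameters: small steel sphere falling in water
rho_p, rho_f = 7800.0, 1000.0          # particle / fluid density, kg/m^3
a = 50e-6                               # sphere radius, m
mu = 1.0e-3                             # dynamic viscosity, Pa s
g = 9.81
vol = (4.0 / 3.0) * np.pi * a**3
m = rho_p * vol                         # sphere mass
drag = 6 * np.pi * mu * a               # Stokes drag coefficient
f_grav = (rho_p - rho_f) * vol * g      # gravity minus buoyancy

tau = m / drag                          # velocity relaxation time
v_term = f_grav / drag                  # terminal velocity

def simulate(dt, t_end):
    """Explicit Euler integration of m dv/dt = f_grav - drag * v."""
    v, t = 0.0, 0.0
    while t < t_end:
        v += dt * (f_grav - drag * v) / m
        t += dt
    return v

v_num = simulate(1e-6, 5 * tau)         # integrate to five relaxation times
v_exact = v_term * (1.0 - np.exp(-5.0)) # analytic solution at t = 5 * tau
```

    With the history force included, each step would also need the stored acceleration record, which is exactly why Basset-force simulations are costly and often (sometimes wrongly) neglected.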

  3. AX-GADGET: a new code for cosmological simulations of Fuzzy Dark Matter and Axion models

    NASA Astrophysics Data System (ADS)

    Nori, Matteo; Baldi, Marco

    2018-05-01

    We present a new module of the parallel N-body code P-GADGET3 for cosmological simulations of light bosonic non-thermal dark matter, often referred to as Fuzzy Dark Matter (FDM). The dynamics of FDM features a highly non-linear Quantum Potential (QP) that suppresses the growth of structures at small scales. Most previous attempts at FDM simulations either evolved suppressed initial conditions, completely neglecting the dynamical effects of the QP throughout cosmic evolution, or resorted to numerically challenging full-wave solvers. Our code provides an alternative, following the FDM evolution without impairing the overall performance. This is done by computing the QP acceleration through the Smoothed Particle Hydrodynamics (SPH) routines, with improved schemes to ensure precise and stable derivatives. As an extension of the P-GADGET3 code, it inherits all the additional physics modules implemented to date, opening a wide range of possibilities to constrain FDM models and explore their degeneracies with other physical phenomena. Simulations are compared with analytical predictions and results of other codes, validating the QP as a crucial player in structure formation at small scales.
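    The quantum potential referred to above is, in the standard Madelung formulation for a field of particle mass m and density ρ (a textbook form; AX-GADGET's SPH discretization of it may differ in detail):

```latex
Q = -\frac{\hbar^2}{2m}\,\frac{\nabla^2 \sqrt{\rho}}{\sqrt{\rho}}
```

    Because Q depends on second derivatives of the square root of the density, accurate and stable SPH derivative estimates are the crux of the method, which is why the abstract emphasizes the improved derivative schemes.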

  4. 3D Multispecies Nonlinear Perturbative Particle Simulation of Intense Nonneutral Particle Beams (Research supported by the Department of Energy and the Short Pulse Spallation Source Project and LANSCE Division of LANL.)

    NASA Astrophysics Data System (ADS)

    Qin, Hong; Davidson, Ronald C.; Lee, W. Wei-Li

    1999-11-01

    The Beam Equilibrium Stability and Transport (BEST) code, a 3D multispecies nonlinear perturbative particle simulation code, has been developed to study collective effects in intense charged particle beams described self-consistently by the Vlasov-Maxwell equations. A Darwin model is adopted for transverse electromagnetic effects. As a 3D multispecies perturbative particle simulation code, it provides several unique capabilities. Since the simulation particles are used to simulate only the perturbed distribution function and self-fields, the simulation noise is reduced significantly. The perturbative approach also enables the code to investigate different physics effects separately, as well as simultaneously. The code can be easily switched between linear and nonlinear operation, and used to study both linear stability properties and nonlinear beam dynamics. These features, combined with 3D and multispecies capabilities, provide an effective tool to investigate the electron-ion two-stream instability, periodically focused solutions in alternating focusing fields, and many other important problems in nonlinear beam dynamics and accelerator physics. Applications to the two-stream instability are presented.

  5. TORBEAM 2.0, a paraxial beam tracing code for electron-cyclotron beams in fusion plasmas for extended physics applications

    NASA Astrophysics Data System (ADS)

    Poli, E.; Bock, A.; Lochbrunner, M.; Maj, O.; Reich, M.; Snicker, A.; Stegmeir, A.; Volpe, F.; Bertelli, N.; Bilato, R.; Conway, G. D.; Farina, D.; Felici, F.; Figini, L.; Fischer, R.; Galperti, C.; Happel, T.; Lin-Liu, Y. R.; Marushchenko, N. B.; Mszanowski, U.; Poli, F. M.; Stober, J.; Westerhof, E.; Zille, R.; Peeters, A. G.; Pereverzev, G. V.

    2018-04-01

    The paraxial WKB code TORBEAM (Poli, 2001) is widely used for the description of electron-cyclotron waves in fusion plasmas, retaining diffraction effects through the solution of a set of ordinary differential equations. With respect to its original form, the code has undergone significant transformations and extensions, in terms of both the physical model and the spectrum of applications. The code has been rewritten in Fortran 90 and transformed into a library, which can be called from within different (not necessarily Fortran-based) workflows. The models for both absorption and current drive have been extended, including e.g. fully relativistic calculation of the absorption coefficient, momentum conservation in electron-electron collisions and the contribution of more than one harmonic to current drive. The code can also be run for reflectometry applications, with relativistic corrections for the electron mass. Formulas that provide the coupling between the reflected beam and the receiver have been developed. Accelerated versions of the code are available, with the reduced physics goal of inferring the location of maximum absorption (with or without the total driven current) for a given setting of the launcher mirrors. Optionally, plasma volumes within given flux surfaces and corresponding values of minimum and maximum magnetic field can be provided externally to speed up the calculation of full driven-current profiles. These can be employed in real-time control algorithms or for fast data analysis.

  6. Analysis of Anderson Acceleration on a Simplified Neutronics/Thermal Hydraulics System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toth, Alex; Kelley, C. T.; Slattery, Stuart R

    A standard method for solving coupled multiphysics problems in light water reactors is Picard iteration, which sequentially alternates between solving single-physics applications. This solution approach is appealing due to its simplicity of implementation and the ability to leverage existing software packages to accurately solve single-physics applications. However, there are several drawbacks in the convergence behavior of this method, namely slow convergence and the necessity of heuristically chosen damping factors to achieve convergence in many cases. Anderson acceleration is a method that has been seen to be more robust and faster converging than Picard iteration for many problems, without significantly higher cost per iteration or complexity of implementation, though its effectiveness in the context of multiphysics coupling is not well explored. In this work, we develop a one-dimensional model simulating the coupling between the neutron distribution and fuel and coolant properties in a single fuel pin. We show that this model generally captures the convergence issues noted in Picard iterations that couple high-fidelity physics codes. We then use this model to gauge potential improvements with regard to rate of convergence and robustness from utilizing Anderson acceleration as an alternative to Picard iteration.
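    The contrast between the two iterations is easy to reproduce on a scalar fixed-point problem. A minimal Python sketch of windowed Anderson acceleration (not the authors' implementation) against plain Picard iteration:

```python
import numpy as np

def picard(g, x0, tol=1e-10, max_it=500):
    """Plain Picard (fixed-point) iteration: x_{k+1} = g(x_k)."""
    x = x0
    for k in range(max_it):
        x_new = g(x)
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_it

def anderson(g, x0, m=3, tol=1e-10, max_it=500):
    """Anderson acceleration with history window m (least-squares form)."""
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    Gs, Fs = [], []                       # histories of g(x_k) and residuals
    for k in range(max_it):
        gx = np.atleast_1d(g(x))
        f = gx - x                        # residual f_k = g(x_k) - x_k
        if np.linalg.norm(f) < tol:
            return x, k + 1
        Gs.append(gx); Fs.append(f)
        if len(Fs) > m + 1:               # keep at most m+1 history entries
            Gs.pop(0); Fs.pop(0)
        if len(Fs) == 1:
            x = gx                        # first step is plain Picard
        else:
            dF = np.column_stack([Fs[i + 1] - Fs[i] for i in range(len(Fs) - 1)])
            dG = np.column_stack([Gs[i + 1] - Gs[i] for i in range(len(Gs) - 1)])
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = gx - dG @ gamma           # mix history to cancel the residual
    return x, max_it

# Fixed point of g(x) = cos(x): Picard converges slowly, Anderson quickly
x_p, it_p = picard(np.cos, np.array([1.0]))
x_a, it_a = anderson(np.cos, np.array([1.0]))
print(it_p, it_a)
```

    Anderson typically reaches the fixed point of cos in well under a fifth of the Picard iteration count, with the same per-iteration cost plus a tiny least-squares solve, which is the trade-off the paper examines for neutronics/thermal-hydraulics coupling.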

  7. LIGHT SOURCE: Physical design of a 10 MeV LINAC for polymer radiation processing

    NASA Astrophysics Data System (ADS)

    Feng, Guang-Yao; Pei, Yuan-Ji; Wang, Lin; Zhang, Shan-Cai; Wu, Cong-Feng; Jin, Kai; Li, Wei-Min

    2009-06-01

    In China, polymer radiation processing has become one of the most important processing industries. The radiation processing source may be an electron beam accelerator or a radioactive source. The physical design of an electron beam facility for radiation crosslinking is introduced in this paper because of its much higher dose rate and efficiency. The main part of this facility is a 10 MeV travelling-wave electron linac with a constant-impedance accelerating structure. A start-to-end simulation of the linac is reported in this paper. The codes Opera-3d, Poisson-superfish and Parmela are used to describe the electromagnetic elements of the accelerator and track the particle distribution from the cathode to the end of the linac. After beam dynamics optimization, the wave phase velocities in the structure have been chosen to be 0.56, 0.9 and 0.999 respectively. Physical parameters of the main elements, such as the DC electron gun, the iris-loaded periodic structure, and the solenoids, are presented. Simulation results prove that it can satisfy the industrial requirements. The linac is under construction. Some components have been finished, and measurements show that they are in good agreement with the design values.

  8. Efficient Modeling of Laser-Plasma Accelerators with INF and RNO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benedetti, C.; Schroeder, C. B.; Esarey, E.

    2010-11-04

    The numerical modeling code INF and RNO (INtegrated Fluid and paRticle simulatioN cOde, pronounced 'inferno') is presented. INF and RNO is an efficient 2D cylindrical code to model the interaction of a short laser pulse with an underdense plasma. The code is based on an envelope model for the laser, while either a PIC or a fluid description can be used for the plasma. The effect of the laser pulse on the plasma is modeled with the time-averaged ponderomotive force. These and other features allow for a speedup of 2-4 orders of magnitude compared to standard full PIC simulations while still retaining physical fidelity. The code has been benchmarked against analytical solutions and 3D PIC simulations, and a set of validation tests, together with a discussion of performance, is presented here.

  9. Radiation protection and environmental management at the relativistic heavy ion collider.

    PubMed

    Musolino, S V; Briggs, S L; Stevens, A J

    2001-01-01

    The Relativistic Heavy Ion Collider (RHIC) is a high energy hadron accelerator built to study basic nuclear physics. It consists of two counter-rotating beams of fully stripped gold ions that are accelerated in two rings to an energy of 100 GeV/nucleon, or protons at 250 GeV/c. The beams can be stored for a period of five to ten hours and brought into collision for experiments during that time. The first major physics objective is to recreate a state of matter, the quark-gluon plasma, that is predicted to have existed a short time after the creation of the universe. Because there are only a few other high energy particle accelerators like RHIC in the world, neither the rules promulgated in the US Code of Federal Regulations under the Atomic Energy Act, nor State regulations, nor international guidance documents cover prompt radiation from accelerators in a way that directly governs the design and operation of a superconducting collider. Special design criteria for prompt radiation were developed to provide guidance for the design of radiation shielding. Environmental management at RHIC is accomplished through the ISO 14001 Environmental Management System. The applicability, benefits, and implementation of ISO 14001 within the framework of a large research accelerator complex are discussed in the paper.

  10. A Model of RHIC Using the Unified Accelerator Libraries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pilat, F.; Tepikian, S.; Trahern, C. G.

    1998-01-01

    The Unified Accelerator Library (UAL) is an object-oriented and modular software environment for accelerator physics which comprises an accelerator object model for the description of the machine (SMF, for Standard Machine Format), a collection of physics libraries, and a Perl interface that provides a homogeneous shell for integrating and managing these components. Currently available physics libraries include TEAPOT++, a collection of C++ physics modules conceptually derived from TEAPOT, and DNZLIB, a differential algebra package for map generation. This software environment has been used to build a flat model of RHIC which retains the hierarchical lattice description while assigning specific characteristics to individual elements, such as measured field harmonics. A first application of the model and of the simulation capabilities of UAL has been the study of RHIC stability in the presence of Siberian snakes and spin rotators. The building blocks of RHIC snakes and rotators are helical dipoles, unconventional devices that cannot be modeled by traditional accelerator physics codes and have been implemented in UAL as Taylor maps. Section 2 describes the RHIC data stores, Section 3 the RHIC SMF format, and Section 4 the RHIC-specific Perl interface (RHIC Shell). Section 5 explains how the RHIC SMF and UAL have been used to study the RHIC dynamic behavior and presents detuning and dynamic aperture results. If the reader is not familiar with the motivation and characteristics of UAL, we include a useful overview paper in the Appendix. An example of a complete set of Perl scripts for RHIC simulation can also be found in the Appendix.

  11. Microphysics of Waves and Instabilities in the Solar Wind and Their Macro Manifestations in the Corona and Interplanetary Space

    NASA Technical Reports Server (NTRS)

    Habbal, Shadia Rifai

    2005-01-01

    Investigations of the physical processes responsible for coronal heating and the acceleration of the solar wind were pursued with the use of our recently developed 2D MHD solar wind code and our 1D multifluid code. In particular, we explored: (1) the role of proton temperature anisotropy in the expansion of the solar wind; (2) the role of plasma parameters at the coronal base in the formation of high-speed solar wind streams at mid-latitudes; (3) a three-fluid model of the slow solar wind; (4) the heating of coronal loops; and (5) a newly developed hybrid code for the study of ion cyclotron resonance in the solar wind.

  12. Modeling and Simulation of Explosively Driven Electromechanical Devices

    NASA Astrophysics Data System (ADS)

    Demmie, Paul N.

    2002-07-01

    Components that store electrical energy in ferroelectric materials and produce currents when their permittivity is explosively reduced are used in a variety of applications. The modeling and simulation of such devices is a challenging problem, since one has to represent the coupled physics of detonation, shock propagation, and electromagnetic field generation. The high-fidelity modeling and simulation of complicated electromechanical devices was not feasible prior to having the Accelerated Strategic Computing Initiative (ASCI) computers and the ASCI-developed codes at Sandia National Laboratories (SNL). The EMMA computer code is used to model such devices and simulate their operation. In this paper, I discuss the capabilities of the EMMA code for the modeling and simulation of one such electromechanical device, a slim-loop ferroelectric (SFE) firing set.

  13. Efficient Modeling of Laser-Plasma Accelerators with INF&RNO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benedetti, C.; Schroeder, C. B.; Esarey, E.

    2010-06-01

    The numerical modeling code INF&RNO (INtegrated Fluid & paRticle simulatioN cOde, pronounced "inferno") is presented. INF&RNO is an efficient 2D cylindrical code to model the interaction of a short laser pulse with an underdense plasma. The code is based on an envelope model for the laser, while either a PIC or a fluid description can be used for the plasma. The effect of the laser pulse on the plasma is modeled with the time-averaged ponderomotive force. These and other features allow for a speedup of 2-4 orders of magnitude compared to standard full PIC simulations while still retaining physical fidelity. The code has been benchmarked against analytical solutions and 3D PIC simulations, and a set of validation tests, together with a discussion of performance, is presented here.

  14. The Particle Accelerator Simulation Code PyORBIT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorlov, Timofey V; Holmes, Jeffrey A; Cousineau, Sarah M

    2015-01-01

The particle accelerator simulation code PyORBIT is presented. The structure, implementation, history, parallel and simulation capabilities, and future development of the code are discussed. The PyORBIT code is a new implementation and extension of the algorithms of the original ORBIT code that was developed for the Spallation Neutron Source accelerator at Oak Ridge National Laboratory. The PyORBIT code has a two-level structure. The upper level uses the Python programming language to control the flow of intensive calculations performed by the lower-level code implemented in C++. The parallel capabilities are based on MPI communications. PyORBIT is an open-source code accessible to the public through the Google Open Source Projects Hosting service.

  15. MAPA: Implementation of the Standard Interchange Format and use for analyzing lattices

    NASA Astrophysics Data System (ADS)

    Shasharina, Svetlana G.; Cary, John R.

    1997-05-01

MAPA (Modular Accelerator Physics Analysis) is an object-oriented application for accelerator design and analysis with a Motif-based graphical user interface. MAPA has been ported to AIX, Linux, HPUX, Solaris, and IRIX. MAPA provides an intuitive environment for accelerator study and design. The user can bring up windows for fully nonlinear analysis of accelerator lattices in any number of dimensions. The current graphical analysis methods of lifetime plots and surfaces of section have been used to analyze the improved lattice designs of Wan, Cary, and Shasharina (this conference). MAPA can now read and write Standard Interchange Format (MAD) accelerator description files, and it has a general graphical user interface for adding, changing, and deleting elements. MAPA's consistency checks prevent deletion of used elements and prevent creation of recursive beam lines. Plans include development of a richer set of modeling tools and the ability to invoke existing modeling codes through the MAPA interface. MAPA will be demonstrated on a Pentium 150 laptop running Linux.

  16. Study of an External Neutron Source for an Accelerator-Driven System using the PHITS Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sugawara, Takanori; Iwasaki, Tomohiko; Chiba, Takashi

A code system for the Accelerator-Driven System (ADS) has been under development for analyzing the dynamic behavior of a subcritical core coupled with an accelerator. This code system, named DSE (Dynamics calculation code system for a Subcritical system with an External neutron source), consists of an accelerator part and a reactor part. The accelerator part employs a database, calculated using PHITS, for investigating accelerator-related effects such as changes of beam energy, beam diameter, void generation, and target level. This analysis method using the database may introduce some errors into the dynamics calculations, since the neutron source data derived from the database carry some errors from the fitting or interpolation procedures. In this study, the effects of various events are investigated to confirm that the method based on the database is appropriate.

  17. Further Studies of the NRL Collective Particle Accelerator VIA Numerical Modeling with the MAGIC Code.

    DTIC Science & Technology

    1984-08-01

Further Studies of the NRL Collective Particle Accelerator via Numerical Modeling with the MAGIC Code. Robert J. Barker. August 1984. Final report for the period 1 April 1984 - 30 September 1984. Performing organization report number: MRC/WDC-R.

  18. TCP throughput adaptation in WiMax networks using replicator dynamics.

    PubMed

    Anastasopoulos, Markos P; Petraki, Dionysia K; Kannan, Rajgopal; Vasilakos, Athanasios V

    2010-06-01

    The high-frequency segment (10-66 GHz) of the IEEE 802.16 standard seems promising for the implementation of wireless backhaul networks carrying large volumes of Internet traffic. In contrast to wireline backbone networks, where channel errors seldom occur, the TCP protocol in IEEE 802.16 Worldwide Interoperability for Microwave Access networks is conditioned exclusively by wireless channel impairments rather than by congestion. This renders a cross-layer design approach between the transport and physical layers more appropriate during fading periods. In this paper, an adaptive coding and modulation (ACM) scheme for TCP throughput maximization is presented. In the current approach, Internet traffic is modulated and coded employing an adaptive scheme that is mathematically equivalent to the replicator dynamics model. The stability of the proposed ACM scheme is proven, and the dependence of the speed of convergence on various physical-layer parameters is investigated. It is also shown that convergence to the strategy that maximizes TCP throughput may be further accelerated by increasing the amount of information from the physical layer.
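The replicator-dynamics model underlying the ACM scheme above has a standard discrete form: the population share of each strategy (here, a modulation/coding scheme) grows in proportion to how far its payoff exceeds the population average. A minimal sketch of that update rule follows; the three strategies and their fixed payoff values are illustrative assumptions, not numbers from the paper:

```python
def replicator_step(shares, payoffs, dt=0.1):
    """One Euler step of replicator dynamics: dx_i/dt = x_i * (f_i - f_avg),
    where f_avg is the share-weighted average payoff."""
    avg = sum(x * f for x, f in zip(shares, payoffs))
    return [x + dt * x * (f - avg) for x, f in zip(shares, payoffs)]

# Three hypothetical ACM strategies with fixed payoffs (e.g., goodput in Mb/s).
payoffs = [1.0, 2.0, 4.0]
shares = [1 / 3, 1 / 3, 1 / 3]
for _ in range(500):
    shares = replicator_step(shares, payoffs)

print([round(x, 3) for x in shares])
```

Because the share-weighted average payoff appears in every term, the sum of the shares is conserved at each step, and the population converges to the payoff-maximizing strategy, which mirrors the convergence behavior the abstract describes.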

  19. Utilizing GPUs to Accelerate Turbomachinery CFD Codes

    NASA Technical Reports Server (NTRS)

    MacCalla, Weylin; Kulkarni, Sameer

    2016-01-01

    GPU computing has established itself as a way to accelerate parallel codes in the high performance computing world. This work focuses on speeding up APNASA, a legacy CFD code used at NASA Glenn Research Center, while also drawing conclusions about the nature of GPU computing and the requirements to make GPGPU worthwhile on legacy codes. Rewriting and restructuring of the source code was avoided to limit the introduction of new bugs. The code was profiled and investigated for parallelization potential, then OpenACC directives were used to indicate parallel parts of the code. The use of OpenACC directives was not able to reduce the runtime of APNASA on either the NVIDIA Tesla discrete graphics card, or the AMD accelerated processing unit. Additionally, it was found that in order to justify the use of GPGPU, the amount of parallel work being done within a kernel would have to greatly exceed the work being done by any one portion of the APNASA code. It was determined that in order for an application like APNASA to be accelerated on the GPU, it should not be modular in nature, and the parallel portions of the code must contain a large portion of the code's computation time.

  20. Enhanced quasi-static particle-in-cell simulation of electron cloud instabilities in circular accelerators

    NASA Astrophysics Data System (ADS)

    Feng, Bing

Electron cloud instabilities have been observed in many circular accelerators around the world and have raised concerns for future accelerators and possible upgrades. In this thesis, the electron cloud instabilities are studied with the quasi-static particle-in-cell (PIC) code QuickPIC. Modeling in three dimensions the long-timescale propagation of a beam through electron clouds in circular accelerators requires faster and more efficient simulation codes. Thousands of processors are easily available for parallel computation. However, it is not straightforward to increase the effective speed of the simulation by running the same problem size on an increasing number of processors, because there is a limit to the domain size in the decomposition of the two-dimensional part of the code. A pipelining algorithm applied to the fully parallelized particle-in-cell code QuickPIC is implemented to overcome this limit. The pipelining algorithm uses multiple groups of processors and optimizes the job allocation on the processors in parallel computing. With this novel algorithm, it is possible to use on the order of 10^2 processors, and to expand the scale and the speed of the simulation with QuickPIC by a similar factor. In addition to the efficiency improvement from the pipelining algorithm, the fidelity of QuickPIC is enhanced by adding two physics models: the beam space-charge effect and the dispersion effect. Simulation of two specific circular machines is performed with the enhanced QuickPIC. First, the proposed upgrade to the Fermilab Main Injector is studied with an eye toward guiding the design of the upgrade and code validation. Moderate emittance growth is observed for the upgrade of increasing the bunch population by 5 times, but the simulation also shows that increasing the beam energy from 8 GeV to 20 GeV or above can effectively limit the emittance growth.
Then the enhanced QuickPIC is used to simulate the electron cloud effect on the electron beam in the Cornell Energy Recovery Linac (ERL), motivated by the extremely small emittance and high peak currents anticipated in that machine. A tune shift is observed in the simulation; however, emittance growth of the electron beam in the electron cloud is not observed for ERL parameters.
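The payoff of the pipelining algorithm described above can be seen with a back-of-the-envelope timing model: if G processor groups work on successive 2-D slabs of the decomposition, they overlap fully once a fill-up of G - 1 stages has passed. The model below is an idealized illustration (uniform per-slab cost, no communication overhead), not QuickPIC's actual scheduler:

```python
def pipeline_time(n_slabs, n_groups, t_slab=1.0):
    """Idealized wall-clock time for processing n_slabs 2-D slabs with
    n_groups pipelined processor groups: steady-state throughput of
    n_groups slabs per stage, plus (n_groups - 1) stages of pipeline fill."""
    return (n_slabs / n_groups + (n_groups - 1)) * t_slab

serial = pipeline_time(1024, 1)    # single group: 1024 stages
piped = pipeline_time(1024, 64)    # 64 groups: 16 + 63 = 79 stages
print(serial / piped)              # speedup well below 64x: fill-up cost bites
```

Even in this toy model, the speedup from 64 groups saturates near 13x rather than 64x, which is why job allocation across groups matters in the real algorithm.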

  1. Mirror symmetric optics design for charge-stripping section in Rare Isotope Science Project

    NASA Astrophysics Data System (ADS)

    Kim, Hye-Jin; Kim, Hyung-Jin; Jeon, Dong-O.; Hwang, Ji-Gwang; Kim, Eun-San

    2013-12-01

The main aim of the Rare Isotope Science Project is to construct a high-power heavy-ion accelerator based on a superconducting linear accelerator (SCL). The heavy-ion accelerator is a key research facility that will allow ground-breaking research into numerous facets of basic science, such as nuclear physics, astrophysics, atomic physics, life science, medicine and materials science. The machine will provide a beam power of 400 kW with a 238U79+ beam of 8 pμA at 200 MeV/u. One of the critical components in the SCL is the charge stripper between the two segments, SCL1 and SCL2, of the SCL. The charge stripper removes electrons from the ion beams to enhance the acceleration efficiency in the subsequent SCL2. To improve the efficiency of acceleration and the power in SCL2, the optimal energy of stripped ions in a solid carbon foil stripper was estimated using the code LISE++. The thickness of the solid carbon foil was 300 μg/cm2. The charge stripping efficiency of the solid carbon stripper in the present study was approximately 87%. For charge selection from the ions produced by the solid carbon stripper, a dispersive section is needed downstream of the foil. The designed optics for the dispersive section is based on mirror-symmetric optics to minimize the effect of high-order aberrations.

  2. Simulation of orientational coherent effects via Geant4

    NASA Astrophysics Data System (ADS)

    Bagli, E.; Asai, M.; Brandt, D.; Dotti, A.; Guidi, V.; Verderi, M.; Wright, D.

    2017-10-01

Beam manipulation of high- and very-high-energy particle beams is a hot topic in accelerator physics. Coherent effects of ultra-relativistic particles in bent crystals allow the steering of particle trajectories thanks to the strong electric field generated between atomic planes. Recently, a collimation experiment with bent crystals was carried out at the CERN-LHC, paving the way to the usage of such technology in current and future accelerators. Geant4 is a widely used object-oriented toolkit for the Monte Carlo simulation of the interaction of particles with matter in high-energy physics. Moreover, its areas of application also include nuclear and accelerator physics, as well as studies in medical and space science. We present the first Geant4 extension for the simulation of orientational effects in straight and bent crystals for high-energy charged particles. The model allows the manipulation of particle trajectories by means of straight and bent crystals and the scaling of the cross sections of hadronic and electromagnetic processes for channeled particles. Based on such a model, an extension of the Geant4 toolkit has been developed. The code and the model have been validated by comparison with published experimental data regarding the deflection efficiency via channeling and the variation of the rate of inelastic nuclear interactions.

  3. "SMART": A Compact and Handy FORTRAN Code for the Physics of Stellar Atmospheres

    NASA Astrophysics Data System (ADS)

    Sapar, A.; Poolamäe, R.

    2003-01-01

A new computer code SMART (Spectra from Model Atmospheres by Radiative Transfer) for computing stellar spectra forming in plane-parallel atmospheres has been compiled by us and A. Aret. To guarantee wide compatibility of the code with the shell environment, we chose FORTRAN-77 as the programming language and tried to confine ourselves to the common part of its numerous versions in both WINDOWS and LINUX. SMART can be used for studies of several processes in stellar atmospheres. The current version of the programme is undergoing rapid changes due to our goal of elaborating a simple, handy and compact code. Instead of linearisation (a mathematical method of recurrent approximations), we propose to use physical evolutionary changes; in other words, the relaxation of quantum-state populations from LTE to NLTE has been studied using a small number of NLTE states. This computational scheme is essentially simpler and more compact than linearisation. The relaxation scheme enables using, instead of the Λ-iteration procedure, a physically changing emissivity (or source function) which incorporates changing Menzel coefficients for the NLTE quantum-state populations. However, light scattering on free electrons is, in terms of Feynman graphs, a real second-order quantum process and cannot be reduced to consecutive processes of absorption and emission, as in the case of radiative transfer in spectral lines. With duly chosen input parameters, the code SMART enables computing the radiative acceleration of the matter of a stellar atmosphere in turbulence clumps. This also enables connecting the model atmosphere in more detail with the problem of stellar wind triggering. Another problem incorporated into the computer code SMART is the diffusion of chemical elements and their isotopes in the atmospheres of chemically peculiar (CP) stars, due to the usual radiative acceleration and the essential additional acceleration generated by the light-induced drift. 
As a special case, using duly chosen pixels on the stellar disk, the spectrum of a rotating star can be computed. No instrumental broadening has been incorporated in the code of SMART. To facilitate the study of stellar spectra, a GUI (Graphical User Interface) with selection of labels by ions has been compiled to study the spectral lines of different elements and ions in the computed emergent flux. An amazing feature of SMART is that its code is very short: it occupies only 4 two-sided two-column A4 sheets in landscape format. In addition, if well commented, it is quite easily readable and understandable. We have used the tactic of writing the comments on the right-side margin (columns starting from 73). Such a short code was achieved by widely using unified input physics (for example, the ionisation cross-sections for bound-free transitions and the electron and ion collision rates). A current restriction on the application area of the present version of SMART is that molecules are so far ignored; thus, it can be used only for lukewarm and hot stellar atmospheres. In the computer code we have tried to avoid bulky, often over-optimised methods primarily meant to spare computation time. For instance, we compute the continuous absorption coefficient at every wavelength. Nevertheless, within an hour on the personal computer at our disposal (AMD Athlon XP 1700+, 512 MB DDRAM), a stellar spectrum with spectral resolution λ/dλ = 100,000 for the spectral interval 700 -- 30,000 Å is computed. The model input data and the line data used by us are both computed and compiled by R. Kurucz. In order to follow the presence and representability of quantum states, and to enumerate them for NLTE studies, a C++ code transforming the needed data to LATEX has been compiled. Thus we have composed a quantum-state list for all neutrals and ions in the Kurucz file 'gfhyperall.dat'. 
The list enables a more adequate composition of the concept of super-states, including partly correlating super-states. We are grateful to R. Kurucz for making available, via CD-ROMs and the Internet, his computer codes ATLAS and SYNTHE, used by us as a starting point in composing the new computer code. We are also grateful to the Estonian Science Foundation for grant ESF-4701.

  4. Applicability of a Bonner sphere technique for pulsed neutrons in a 120-GeV proton facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanami, T.; Hagiwara, M.; Iwase, H.

    2008-02-01

The data on neutron spectra and intensity behind shielding are important for the radiation safety design of high-energy accelerators, since neutrons are capable of penetrating thick shielding and activating materials. Corresponding particle transport codes--which involve physics models of neutron and other particle production, transportation, and interaction--have been developed and used world-wide [1-8]. The results of these codes have been validated through numerous comparisons with experimental results taken in simple geometries. For neutron generation and transport, several related experiments have been performed to measure neutron spectra, attenuation lengths and reaction rates behind shielding walls of various thicknesses and materials in the energy range up to several hundred MeV [9-11]. The data have been used to benchmark--and modify if needed--the simulation models and parameters in the codes, and also serve as reference data for radiation safety design. To obtain such data above several hundred MeV, a Japan-Fermi National Accelerator Laboratory (FNAL) collaboration for shielding experiments was started in 2007, based on a suggestion from the specialist meeting on shielding, Shielding Aspects of Accelerators, Targets and Irradiation Facilities (SATIF), because very limited data are available in the high-energy region (see, for example, [12]). As a part of this shielding experiment, a set of Bonner spheres (BS) was tested at the antiproton production target facility (pbar target station) at FNAL to obtain neutron spectra induced by a 120-GeV proton beam in concrete and iron shielding. Generally, utilization of an active detector around high-energy accelerators requires an improvement of its readout to overcome bursts of secondary radiation, since the accelerator delivers an intense beam to a target in a short period after a relatively long acceleration period. 
In this paper, we employ BS for a spectrum measurement of neutrons that penetrate the shielding wall of the pbar target station at FNAL.

  5. Prediction of scaling physics laws for proton acceleration with extended parameter space of the NIF ARC

    NASA Astrophysics Data System (ADS)

    Bhutwala, Krish; Beg, Farhat; Mariscal, Derek; Wilks, Scott; Ma, Tammy

    2017-10-01

The Advanced Radiographic Capability (ARC) laser at the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory is the world's most energetic short-pulse laser. It comprises four beamlets, each of substantial energy (~1.5 kJ), extended short-pulse duration (10-30 ps), and large focal spot (>=50% of energy in a 150 µm spot). This allows ARC to achieve proton and light-ion acceleration via the Target Normal Sheath Acceleration (TNSA) mechanism, but it is not yet known how proton beam characteristics scale with ARC-regime laser parameters. As theory has also not yet been validated for laser-generated protons at ARC-regime laser parameters, we attempt to formulate the scaling physics of proton beam characteristics as a function of laser energy, intensity, focal spot size, pulse length, target geometry, etc., through a review of relevant proton acceleration experiments from laser facilities across the world. These predicted scaling laws should then guide target design and future diagnostics for desired proton beam experiments on the NIF ARC. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and funded by the LLNL LDRD program under tracking code 17-ERD-039.

  6. NESSY: NLTE spectral synthesis code for solar and stellar atmospheres

    NASA Astrophysics Data System (ADS)

    Tagirov, R. V.; Shapiro, A. I.; Schmutz, W.

    2017-07-01

Context. Physics-based models of solar and stellar magnetically-driven variability are based on the calculation of synthetic spectra for various surface magnetic features, as well as quiet regions, as a function of their position on the solar or stellar disc. Such calculations are performed with radiative transfer codes tailored for modeling broad spectral intervals. Aims: We aim to present the NLTE Spectral SYnthesis code (NESSY), which can be used for modeling the entire (UV-visible-IR and radio) spectra of solar and stellar magnetic features and quiet regions. Methods: NESSY is a further development of the COde for Solar Irradiance (COSI), in which we have implemented an accelerated Λ-iteration (ALI) scheme for co-moving frame (CMF) line radiation transfer based on a new estimate of the local approximate Λ-operator. Results: We show that the new version of the code performs substantially faster than the previous one and yields a reliable calculation of the entire solar spectrum, in good agreement with the available observations.
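The accelerated Λ-iteration mentioned in the Methods can be illustrated on a toy problem. For a two-level-atom-like source function S = (1 - ε) Λ[S] + ε B, plain Λ-iteration converges slowly when ε is small; ALI instead divides each fixed-point correction by (1 - (1 - ε) Λ*), where Λ* is a cheap local approximation to Λ (here, its diagonal). The operator, grid size, and parameters below are made-up illustrations, not NESSY's actual CMF operator:

```python
import numpy as np

def ali_solve(Lam, B, eps, n_iter=200):
    """Accelerated Lambda iteration for S = (1 - eps) * Lam @ S + eps * B,
    using the diagonal of Lam as the local approximate Lambda operator."""
    S = np.zeros_like(B)              # deliberately poor starting guess
    lam_star = np.diag(Lam)           # local (diagonal) approximate operator
    for _ in range(n_iter):
        S_fs = (1 - eps) * (Lam @ S) + eps * B           # formal solution step
        S = S + (S_fs - S) / (1 - (1 - eps) * lam_star)  # ALI-amplified update
    return S

# Toy row-stochastic 'Lambda operator': half local, half global averaging.
n = 50
Lam = 0.5 * np.eye(n) + 0.5 * np.full((n, n), 1.0 / n)
B = np.ones(n)                        # uniform Planck function
S = ali_solve(Lam, B, eps=0.1)
print(np.max(np.abs(S - 1.0)))        # exact solution of this toy problem is S = 1
```

Because Λ here is row-stochastic and B is uniform, the exact solution is S = 1, so the printed residual directly measures convergence; the amplification factor 1 / (1 - (1 - ε) Λ*) is what distinguishes ALI from plain Λ-iteration.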

  7. Production Level CFD Code Acceleration for Hybrid Many-Core Architectures

    NASA Technical Reports Server (NTRS)

    Duffy, Austen C.; Hammond, Dana P.; Nielsen, Eric J.

    2012-01-01

    In this work, a novel graphics processing unit (GPU) distributed sharing model for hybrid many-core architectures is introduced and employed in the acceleration of a production-level computational fluid dynamics (CFD) code. The latest generation graphics hardware allows multiple processor cores to simultaneously share a single GPU through concurrent kernel execution. This feature has allowed the NASA FUN3D code to be accelerated in parallel with up to four processor cores sharing a single GPU. For codes to scale and fully use resources on these and the next generation machines, codes will need to employ some type of GPU sharing model, as presented in this work. Findings include the effects of GPU sharing on overall performance. A discussion of the inherent challenges that parallel unstructured CFD codes face in accelerator-based computing environments is included, with considerations for future generation architectures. This work was completed by the author in August 2010, and reflects the analysis and results of the time.

  8. HACC: Simulating sky surveys on state-of-the-art supercomputing architectures

    NASA Astrophysics Data System (ADS)

    Habib, Salman; Pope, Adrian; Finkel, Hal; Frontiere, Nicholas; Heitmann, Katrin; Daniel, David; Fasel, Patricia; Morozov, Vitali; Zagaris, George; Peterka, Tom; Vishwanath, Venkatram; Lukić, Zarija; Sehrish, Saba; Liao, Wei-keng

    2016-01-01

    Current and future surveys of large-scale cosmic structure are associated with a massive and complex datastream to study, characterize, and ultimately understand the physics behind the two major components of the 'Dark Universe', dark energy and dark matter. In addition, the surveys also probe primordial perturbations and carry out fundamental measurements, such as determining the sum of neutrino masses. Large-scale simulations of structure formation in the Universe play a critical role in the interpretation of the data and extraction of the physics of interest. Just as survey instruments continue to grow in size and complexity, so do the supercomputers that enable these simulations. Here we report on HACC (Hardware/Hybrid Accelerated Cosmology Code), a recently developed and evolving cosmology N-body code framework, designed to run efficiently on diverse computing architectures and to scale to millions of cores and beyond. HACC can run on all current supercomputer architectures and supports a variety of programming models and algorithms. It has been demonstrated at scale on Cell- and GPU-accelerated systems, standard multi-core node clusters, and Blue Gene systems. HACC's design allows for ease of portability, and at the same time, high levels of sustained performance on the fastest supercomputers available. We present a description of the design philosophy of HACC, the underlying algorithms and code structure, and outline implementation details for several specific architectures. We show selected accuracy and performance results from some of the largest high resolution cosmological simulations so far performed, including benchmarks evolving more than 3.6 trillion particles.

  9. HACC: Simulating sky surveys on state-of-the-art supercomputing architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habib, Salman; Pope, Adrian; Finkel, Hal

    2016-01-01

Current and future surveys of large-scale cosmic structure are associated with a massive and complex datastream to study, characterize, and ultimately understand the physics behind the two major components of the ‘Dark Universe’, dark energy and dark matter. In addition, the surveys also probe primordial perturbations and carry out fundamental measurements, such as determining the sum of neutrino masses. Large-scale simulations of structure formation in the Universe play a critical role in the interpretation of the data and extraction of the physics of interest. Just as survey instruments continue to grow in size and complexity, so do the supercomputers that enable these simulations. Here we report on HACC (Hardware/Hybrid Accelerated Cosmology Code), a recently developed and evolving cosmology N-body code framework, designed to run efficiently on diverse computing architectures and to scale to millions of cores and beyond. HACC can run on all current supercomputer architectures and supports a variety of programming models and algorithms. It has been demonstrated at scale on Cell- and GPU-accelerated systems, standard multi-core node clusters, and Blue Gene systems. HACC’s design allows for ease of portability, and at the same time, high levels of sustained performance on the fastest supercomputers available. We present a description of the design philosophy of HACC, the underlying algorithms and code structure, and outline implementation details for several specific architectures. We show selected accuracy and performance results from some of the largest high resolution cosmological simulations so far performed, including benchmarks evolving more than 3.6 trillion particles.

  10. Evolution of beams in a plasma channel due to beam break up

    NASA Astrophysics Data System (ADS)

    Penn, Gregory; Lehe, Remi; Vay, Jean-Luc; Schroeder, Carl; Esarey, Eric

    2016-10-01

We study the dynamics of beam break-up (BBU) of an accelerated electron beam in a plasma channel. Particle-in-cell simulations using the codes WARP and FBPIC are presented and interpreted in terms of theoretical calculations for the plasma-induced fields and the evolution of the instability. We focus on cylindrical channels for simplicity; other geometries are considered to better understand the impact of BBU on electron beams undergoing laser-plasma wakefield acceleration. We compare our findings with other published results. This work was supported by the Director, Office of Science, Office of High Energy Physics, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.

  11. Nuclear physics in particle therapy: a review

    NASA Astrophysics Data System (ADS)

    Durante, Marco; Paganetti, Harald

    2016-09-01

    Charged particle therapy has been largely driven and influenced by nuclear physics. The increase in energy deposition density along the ion path in the body allows reducing the dose to normal tissues during radiotherapy compared to photons. Clinical results of particle therapy support the physical rationale for this treatment, but the method remains controversial because of the high cost and of the lack of comparative clinical trials proving the benefit compared to x-rays. Research in applied nuclear physics, including nuclear interactions, dosimetry, image guidance, range verification, novel accelerators and beam delivery technologies, can significantly improve the clinical outcome in particle therapy. Measurements of fragmentation cross-sections, including those for the production of positron-emitting fragments, and attenuation curves are needed for tuning Monte Carlo codes, whose use in clinical environments is rapidly increasing thanks to fast calculation methods. Existing cross sections and codes are indeed not very accurate in the energy and target regions of interest for particle therapy. These measurements are especially urgent for new ions to be used in therapy, such as helium. Furthermore, nuclear physics hardware developments are frequently finding applications in ion therapy due to similar requirements concerning sensors and real-time data processing. In this review we will briefly describe the physics bases, and concentrate on the open issues.

  12. Nuclear physics in particle therapy: a review.

    PubMed

    Durante, Marco; Paganetti, Harald

    2016-09-01

    Charged particle therapy has been largely driven and influenced by nuclear physics. The increase in energy deposition density along the ion path in the body allows reducing the dose to normal tissues during radiotherapy compared to photons. Clinical results of particle therapy support the physical rationale for this treatment, but the method remains controversial because of the high cost and of the lack of comparative clinical trials proving the benefit compared to x-rays. Research in applied nuclear physics, including nuclear interactions, dosimetry, image guidance, range verification, novel accelerators and beam delivery technologies, can significantly improve the clinical outcome in particle therapy. Measurements of fragmentation cross-sections, including those for the production of positron-emitting fragments, and attenuation curves are needed for tuning Monte Carlo codes, whose use in clinical environments is rapidly increasing thanks to fast calculation methods. Existing cross sections and codes are indeed not very accurate in the energy and target regions of interest for particle therapy. These measurements are especially urgent for new ions to be used in therapy, such as helium. Furthermore, nuclear physics hardware developments are frequently finding applications in ion therapy due to similar requirements concerning sensors and real-time data processing. In this review we will briefly describe the physics bases, and concentrate on the open issues.

  13. On the physics of waves in the solar atmosphere: Wave heating and wind acceleration

    NASA Technical Reports Server (NTRS)

    Musielak, Z. E.

    1992-01-01

In the area of solar physics, new calculations of the acoustic wave energy fluxes generated in the solar convective zone were performed. The originally developed theory was corrected by including a new frequency factor describing temporal variations of the turbulent energy spectrum. We have modified the original Stein code by including this new frequency factor, and tested the code extensively. Another possible source of the mechanical energy generated in the solar convective zone is the excitation of magnetic flux-tube waves, which can carry energy along the tubes far away from the region. The problem of how efficiently those waves are generated in the Sun was recently solved. The propagation of nonlinear magnetic tube waves in the solar atmosphere was calculated, and mode coupling, shock formation, and heating of the local medium were studied. The wave-trapping problems and the evaluation of critical frequencies for wave reflection in the solar atmosphere were studied. It was shown that the role played by Alfven waves in wind acceleration and coronal-hole heating is dominant. Presently, we are performing calculations of wave energy fluxes generated in late-type dwarf stars and studying the physical processes responsible for the heating of stellar chromospheres and coronae. In the area of the physics of waves, a new analytical approach for studying linear Alfven waves in smoothly nonuniform media was recently developed. This approach is presently being extended to study the propagation of linear and nonlinear magnetohydrodynamic (MHD) waves in the stratified, nonisothermal solar atmosphere. The Lighthill theory of sound generation was extended to nonisothermal media (with a special temperature distribution). Energy cascade by nonlinear MHD waves, and possible chaos driven by these waves, are presently being considered.

  14. Opportunities for Undergraduate Research in Nuclear Physics

    DOE PAGES

    Hicks, S. F.; Nguyen, T. D.; Jackson, D. T.; ...

    2017-10-26

    University of Dallas (UD) physics majors are offered a variety of undergraduate research opportunities in nuclear physics through an established program at the University of Kentucky Accelerator Laboratory (UKAL). The 7-MV Model CN Van de Graaff accelerator and the neutron production and detection facilities located there are used by UD students to investigate how neutrons scatter from materials that are important in nuclear energy production and for our basic understanding of how neutrons interact with matter. Recent student projects include modeling of the laboratory using the neutron transport code MCNP to investigate the effectiveness of laboratory shielding, testing the long-term gain stability of C6D6 liquid scintillation detectors, and deducing neutron elastic and inelastic scattering cross sections for 12C. Finally, results of these student projects are presented that indicate the pit below the scattering area reduces background by as much as 30%; the detectors show no significant gain instabilities; and new insights into existing 12C neutron inelastic scattering cross-section discrepancies near a neutron energy of 6.0 MeV are obtained.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Candel, Arno; Li, Z.; Ng, C.

    The Compact Linear Collider (CLIC) provides a path to a multi-TeV accelerator to explore the energy frontier of High Energy Physics. Its novel two-beam accelerator concept envisions rf power transfer to the accelerating structures from a separate high-current decelerator beam line consisting of power extraction and transfer structures (PETS). It is critical to numerically verify the fundamental and higher-order mode properties in and between the two beam lines with high accuracy and confidence. To solve these large-scale problems, SLAC's parallel finite element electromagnetic code suite ACE3P is employed. Using curvilinear conformal meshes and higher-order finite element vector basis functions, unprecedented accuracy and computational efficiency are achieved, enabling high-fidelity modeling of complex detuned structures such as the CLIC TD24 accelerating structure. In this paper, time-domain simulations of wakefield coupling effects in the combined system of PETS and the TD24 structures are presented. The results will help to identify potential issues and provide new insights on the design, leading to further improvements on the novel CLIC two-beam accelerator scheme.

  17. MO-E-18C-04: Advanced Computer Simulation and Visualization Tools for Enhanced Understanding of Core Medical Physics Concepts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naqvi, S

    2014-06-15

    Purpose: Most medical physics programs emphasize proficiency in routine clinical calculations and QA. The formulaic aspect of these calculations and the prescriptive nature of measurement protocols obviate the need to frequently apply basic physical principles, which therefore gradually decay away from memory. For example, few students appreciate the role of electron transport in photon dose, making it difficult to understand key concepts such as dose buildup, electronic disequilibrium effects and Bragg-Gray theory. These conceptual deficiencies manifest when the physicist encounters a new system requiring knowledge beyond routine activities. Methods: Two interactive computer simulation tools are developed to facilitate deeper learning of physical principles. One is a Monte Carlo code written with a strong educational aspect. The code can “label” regions and interactions to highlight specific aspects of the physics, e.g., certain regions can be designated as “starters” or “crossers,” and any interaction type can be turned on and off. Full 3D tracks with specific portions highlighted further enhance the visualization of radiation transport problems. The second code calculates and displays trajectories of a collection of electrons under an arbitrary space- and time-dependent Lorentz force using relativistic kinematics. Results: Using the Monte Carlo code, the student can interactively study photon and electron transport through visualization of dose components, particle tracks, and interaction types. The code can, for instance, be used to study the kerma-dose relationship, explore electronic disequilibrium near interfaces, or visualize kernels by using interaction forcing. The electromagnetic simulator enables the student to explore accelerating mechanisms and particle optics in devices such as cyclotrons and linacs. Conclusion: The proposed tools are designed to enhance understanding of abstract concepts by highlighting various aspects of the physics. The simulations serve as virtual experiments that give deeper and long-lasting understanding of core principles. The student can then make sound judgements in novel situations encountered beyond routine clinical activities.
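    The educational Monte Carlo features described here (switchable interaction types, region labels such as "starters" and "crossers") can be sketched as a toy 1D photon transport loop. All cross sections, the two-region slab geometry, and the function names below are invented for illustration; the actual teaching code is far richer.

```python
import math
import random

# Hypothetical per-channel attenuation coefficients (1/cm), invented values.
MU = {"photoelectric": 0.02, "compton": 0.08}

def run_history(rng, slab_cm=10.0, boundary_cm=5.0,
                enabled=("photoelectric", "compton")):
    """Track one photon; return (fate, region_label) for tallying."""
    mu_total = sum(MU[k] for k in enabled)
    # Sample a free path; 1 - random() avoids log(0).
    x = -math.log(1.0 - rng.random()) / mu_total
    if x > slab_cm:
        return "crossed", None
    # Label the history by the region of its first interaction.
    region = "starter" if x < boundary_cm else "crosser"
    # Choose the interaction channel among those turned on.
    r = rng.random() * mu_total
    for kind in enabled:
        r -= MU[kind]
        if r <= 0.0:
            return kind, region
    return enabled[-1], region   # floating-point guard

def tally(n=20000, enabled=("photoelectric", "compton"), seed=1):
    rng = random.Random(seed)
    counts = {}
    for _ in range(n):
        key = run_history(rng, enabled=enabled)
        counts[key] = counts.get(key, 0) + 1
    return counts
```

    Turning a channel off simply removes it from `enabled`, which changes both the total attenuation and the channel selection, mirroring the "any interaction type can be turned on and off" feature.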

  18. The procedure execution manager and its application to Advanced Photon Source operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borland, M.

    1997-06-01

    The Procedure Execution Manager (PEM) combines a complete scripting environment for coding accelerator operation procedures with a manager application for executing and monitoring the procedures. PEM is based on Tcl/Tk, a supporting widget library, and the dp-tcl extension for distributed processing. The scripting environment provides support for distributed, parallel execution of procedures along with join and abort operations. Nesting of procedures is supported, permitting the same code to run as a top-level procedure under operator control or as a subroutine under control of another procedure. The manager application allows an operator to execute one or more procedures in automatic, semi-automatic, or manual modes. It also provides a standard way for operators to interact with procedures. A number of successful applications of PEM to accelerator operations have been made to date. These include start-up, shutdown, and other control of the positron accumulator ring (PAR), low-energy transport (LET) lines, and the booster rf systems. The PAR/LET procedures make nested use of PEM's ability to run parallel procedures. There are also a number of procedures to guide and assist tune-up operations, to make accelerator physics measurements, and to diagnose equipment. Because of the success of the existing procedures, expanded use of PEM is planned.
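    The execution pattern described (procedures that nest, run in parallel, and support join/abort) can be sketched compactly. PEM itself is implemented in Tcl/Tk with dp-tcl; the Python class and procedure names below are purely illustrative, not PEM's API.

```python
import threading

class Procedure:
    """A named procedure that can run top-level or nested inside another."""
    def __init__(self, name, body):
        self.name, self.body = name, body
        self.aborted = threading.Event()   # abort flag checked before the body

    def run(self, log):
        log.append(f"start {self.name}")
        if not self.aborted.is_set():
            self.body(log)
        log.append(f"end {self.name}")

def run_parallel(procs, log):
    """Launch child procedures in parallel threads, then join them all."""
    threads = [threading.Thread(target=p.run, args=(log,)) for p in procs]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

# Nested use: a top-level procedure launching two children in parallel,
# loosely modeled on a start-up sequence (names invented).
def startup(log):
    children = [Procedure("rf_on", lambda l: l.append("rf ramped")),
                Procedure("magnets_on", lambda l: l.append("magnets set"))]
    run_parallel(children, log)

log = []
Procedure("ring_startup", startup).run(log)
```

    The same `Procedure` object works as a top-level entry or as a child of another procedure, which is the nesting property the abstract highlights.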

  19. Status and future of MUSE

    NASA Astrophysics Data System (ADS)

    Harfst, S.; Portegies Zwart, S.; McMillan, S.

    2008-12-01

    We present MUSE, a software framework for combining existing computational tools from different astrophysical domains into a single multi-physics, multi-scale application. MUSE facilitates the coupling of existing codes written in different languages by providing inter-language tools and by specifying an interface between each module and the framework that represents a balance between generality and computational efficiency. This approach allows scientists to use combinations of codes to solve highly-coupled problems without the need to write new codes for other domains or significantly alter their existing codes. MUSE currently incorporates the domains of stellar dynamics, stellar evolution and stellar hydrodynamics for studying generalized stellar systems. We have now reached a "Noah's Ark" milestone, with (at least) two available numerical solvers for each domain. MUSE can treat multi-scale and multi-physics systems in which the time- and size-scales are well separated, like simulating the evolution of planetary systems, small stellar associations, dense stellar clusters, galaxies and galactic nuclei. In this paper we describe two examples calculated using MUSE: the merger of two galaxies and an N-body simulation with live stellar evolution. In addition, we demonstrate an implementation of MUSE on a distributed computer which may also include special-purpose hardware, such as GRAPEs or GPUs, to accelerate computations. The current MUSE code base is publicly available as open source at http://muse.li.
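    The coupling idea can be sketched in miniature: each domain module exposes the same small interface, so a driver can advance solvers from different domains in lockstep, or swap between two solvers of the same domain, without changing the driver. The interface, class names, and toy physics below are invented; MUSE's real interfaces are far richer.

```python
class StellarEvolution:
    """Toy stellar-evolution module: tracks age and wind mass loss."""
    def __init__(self, mass):
        self.mass, self.age = mass, 0.0
    def evolve(self, dt):
        self.age += dt
        self.mass *= (1.0 - 1e-4 * dt)   # invented mass-loss rate

class NBodyDynamics:
    """Toy stellar-dynamics module: drift-only 1D integrator."""
    def __init__(self, positions, velocities):
        self.x, self.v = positions, velocities
    def evolve(self, dt):
        self.x = [xi + vi * dt for xi, vi in zip(self.x, self.v)]

def couple(modules, t_end, dt):
    """Driver: advance all domain modules in lockstep to t_end."""
    t = 0.0
    while t < t_end:
        for m in modules:
            m.evolve(dt)
        t += dt

star = StellarEvolution(mass=1.0)
dyn = NBodyDynamics([0.0, 1.0], [0.1, -0.1])
couple([star, dyn], t_end=10.0, dt=1.0)
```

    Because the driver only knows the shared `evolve(dt)` interface, a second solver for either domain could be dropped in without touching `couple`, which is the "Noah's Ark" property in the abstract.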

  20. Electron-Beam Dynamics for an Advanced Flash-Radiography Accelerator

    DOE PAGES

    Ekdahl, Carl

    2015-11-17

    Beam dynamics issues were assessed for a new linear induction electron accelerator being designed for multipulse flash radiography of large explosively driven hydrodynamic experiments. Special attention was paid to equilibrium beam transport, possible emittance growth, and beam stability. Especially problematic would be high-frequency beam instabilities that could blur individual radiographic source spots, low-frequency beam motion that could cause pulse-to-pulse spot displacement, and emittance growth that could enlarge the source spots. Furthermore, beam physics issues were examined through theoretical analysis and computer simulations, including particle-in-cell codes. Beam instabilities investigated included beam breakup, image displacement, diocotron, parametric envelope, ion hose, and the resistive wall instability. The beam corkscrew motion and emittance growth from beam mismatch were also studied. It was concluded that a beam with radiographic quality equivalent to the present accelerators at Los Alamos National Laboratory will result if the same engineering standards and construction details are upheld.

  1. Electron-beam dynamics for an advanced flash-radiography accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ekdahl, Carl August Jr.

    2015-06-22

    Beam dynamics issues were assessed for a new linear induction electron accelerator. Special attention was paid to equilibrium beam transport, possible emittance growth, and beam stability. Especially problematic would be high-frequency beam instabilities that could blur individual radiographic source spots, low-frequency beam motion that could cause pulse-to-pulse spot displacement, and emittance growth that could enlarge the source spots. Beam physics issues were examined through theoretical analysis and computer simulations, including particle-in-cell (PIC) codes. Beam instabilities investigated included beam breakup (BBU), image displacement, diocotron, parametric envelope, ion hose, and the resistive wall instability. Beam corkscrew motion and emittance growth from beam mismatch were also studied. It was concluded that a beam with radiographic quality equivalent to the present accelerators at Los Alamos will result if the same engineering standards and construction details are upheld.

  2. Hybrid parallel code acceleration methods in full-core reactor physics calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Courau, T.; Plagne, L.; Ponicot, A.

    2012-07-01

    When dealing with nuclear reactor calculation schemes, the need for three-dimensional (3D) transport-based reference solutions is essential for both validation and optimization purposes. Considering a benchmark problem, this work investigates the potential of discrete ordinates (Sn) transport methods applied to 3D pressurized water reactor (PWR) full-core calculations. First, the benchmark problem is described. It involves a pin-by-pin description of a 3D PWR first core, and uses an 8-group cross-section library prepared with the DRAGON cell code. Then, a convergence analysis is performed using the PENTRAN parallel Sn Cartesian code. It discusses the spatial refinement and the associated angular quadrature required to properly describe the problem physics. It also shows that initializing the Sn solution with the EDF SPN solver COCAGNE reduces the number of iterations required to converge by nearly a factor of 6. Using a best estimate model, PENTRAN results are then compared to multigroup Monte Carlo results obtained with the MCNP5 code. Good consistency is observed between the two methods (Sn and Monte Carlo), with discrepancies that are less than 25 pcm for the k-eff, and less than 2.1% and 1.6% for the flux at the pin-cell level and for the pin-power distribution, respectively. (authors)
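    The warm-start effect the abstract reports (seeding the Sn solve with an SPN solution cut the iteration count by nearly a factor of 6) can be illustrated on a generic fixed-point iteration of the source-iteration kind, x ← c·x + s. The numbers below are invented; the point is only that a cheap approximate initial guess reaches tolerance in far fewer iterations than a cold start.

```python
def fixed_point(x0, c=0.9, s=1.0, tol=1e-8):
    """Iterate x <- c*x + s until successive iterates agree to tol.
    Returns the converged value and the iteration count."""
    x, n = x0, 0
    while True:
        x_new = c * x + s
        n += 1
        if abs(x_new - x) < tol:
            return x_new, n
        x = x_new

# Fixed point is s / (1 - c) = 10.
cold, n_cold = fixed_point(0.0)   # cold start
warm, n_warm = fixed_point(9.9)   # e.g. a cheap low-order estimate
```

    The convergence rate (set by c, the analogue of the scattering ratio) is unchanged; only the initial error shrinks, which is exactly why a low-order SPN solution makes a good initial guess for the high-order Sn sweep.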

  3. An accurate and efficient laser-envelope solver for the modeling of laser-plasma accelerators

    DOE PAGES

    Benedetti, C.; Schroeder, C. B.; Geddes, C. G. R.; ...

    2017-10-17

    Detailed and reliable numerical modeling of laser-plasma accelerators (LPAs), where a short and intense laser pulse interacts with an underdense plasma over distances of up to a meter, is a formidably challenging task. This is due to the great disparity among the length scales involved in the modeling, ranging from the micron scale of the laser wavelength to the meter scale of the total laser-plasma interaction length. The use of the time-averaged ponderomotive force approximation, where the laser pulse is described by means of its envelope, enables efficient modeling of LPAs by removing the need to model the details of electron motion at the laser wavelength scale. Furthermore, it allows simulations in cylindrical geometry which captures relevant 3D physics at 2D computational cost. A key element of any code based on the time-averaged ponderomotive force approximation is the laser envelope solver. In this paper we present the accurate and efficient envelope solver used in the code INF&RNO (INtegrated Fluid & paRticle simulatioN cOde). The features of the INF&RNO laser solver enable an accurate description of the laser pulse evolution deep into depletion even at a reasonably low resolution, resulting in significant computational speed-ups.

  5. An accurate and efficient laser-envelope solver for the modeling of laser-plasma accelerators

    NASA Astrophysics Data System (ADS)

    Benedetti, C.; Schroeder, C. B.; Geddes, C. G. R.; Esarey, E.; Leemans, W. P.

    2018-01-01

    Detailed and reliable numerical modeling of laser-plasma accelerators (LPAs), where a short and intense laser pulse interacts with an underdense plasma over distances of up to a meter, is a formidably challenging task. This is due to the great disparity among the length scales involved in the modeling, ranging from the micron scale of the laser wavelength to the meter scale of the total laser-plasma interaction length. The use of the time-averaged ponderomotive force approximation, where the laser pulse is described by means of its envelope, enables efficient modeling of LPAs by removing the need to model the details of electron motion at the laser wavelength scale. Furthermore, it allows simulations in cylindrical geometry which captures relevant 3D physics at 2D computational cost. A key element of any code based on the time-averaged ponderomotive force approximation is the laser envelope solver. In this paper we present the accurate and efficient envelope solver used in the code INF&RNO (INtegrated Fluid & paRticle simulatioN cOde). The features of the INF&RNO laser solver enable an accurate description of the laser pulse evolution deep into depletion even at a reasonably low resolution, resulting in significant computational speed-ups.
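    The payoff of the time-averaged ponderomotive approximation can be shown in one dimension: the fast laser oscillation is replaced by the slow force F_p = -d/dx [E0(x)²/(4ω²)] (with q = m = 1), so a much larger time step suffices. The toy model below (Gaussian field profile, all parameters invented, nonrelativistic) checks that integrating the full oscillating force with a tiny step and the averaged force with a 10× larger step give nearly the same secular drift; it is a sketch of the principle, not of INF&RNO's solver.

```python
import math

W, A, SIG = 50.0, 5.0, 1.0   # carrier frequency, field amplitude, profile width

def e0(x):
    """Slowly varying field amplitude E0(x), a Gaussian profile."""
    return A * math.exp(-x * x / (2.0 * SIG * SIG))

def f_pond(x):
    """Ponderomotive force -d/dx [E0(x)^2 / (4 W^2)]."""
    return (A * A * x / (2.0 * W * W * SIG * SIG)) * math.exp(-x * x / (SIG * SIG))

def integrate(force, dt, t_end):
    """Leapfrog (kick-drift-kick) a particle from rest at x = 0.5."""
    x, v, t = 0.5, 0.0, 0.0
    while t < t_end:
        v += 0.5 * dt * force(x, t)
        x += dt * v
        t += dt
        v += 0.5 * dt * force(x, t)
    return x

# Full model must resolve the carrier (W*dt << 1); envelope model need not.
x_full = integrate(lambda x, t: e0(x) * math.cos(W * t), 1e-3, 20.0)
x_env = integrate(lambda x, t: f_pond(x), 1e-2, 20.0)
```

    Both runs show the particle expelled from the high-intensity region at the same averaged rate; in a real LPA code the same averaging removes the need to resolve the laser wavelength, which is the source of the reported speed-ups.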

  6. GPU COMPUTING FOR PARTICLE TRACKING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nishimura, Hiroshi; Song, Kai; Muriki, Krishna

    2011-03-25

    This is a feasibility study of using a modern Graphics Processing Unit (GPU) to parallelize an accelerator particle tracking code. To demonstrate the massive parallelization features provided by GPU computing, a simplified TracyGPU program is developed for dynamic aperture calculation. Performance, issues, and challenges of introducing GPUs are also discussed. General Purpose computation on Graphics Processing Units (GPGPU) brings massive parallel computing capability to numerical calculation. However, the unique architecture of the GPU requires a comprehensive understanding of the hardware and the programming model in order to optimize existing applications well. In the field of accelerator physics, the dynamic aperture calculation of a storage ring, which is often the most time-consuming part of accelerator modeling and simulation, can benefit from the GPU because it is embarrassingly parallel, which fits well with the GPU programming model. In this paper, we use the Tesla C2050 GPU, which consists of 14 multiprocessors (MPs) with 32 cores each, for a total of 448 cores, to host thousands of threads dynamically. A thread is a logical execution unit of the program on the GPU. In the GPU programming model, threads are grouped into a collection of blocks. Within each block, multiple threads share the same code and up to 48 KB of shared memory. Multiple thread blocks form a grid, which is executed as a GPU kernel. A simplified code that is a subset of Tracy++ [2] is developed to demonstrate the possibility of using the GPU to speed up the dynamic aperture calculation by having each thread track a particle.
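    The "one thread per particle" parallelism can be emulated on the CPU with array operations: every particle follows the same one-turn map independently, so a whole array of initial amplitudes is advanced in lockstep. The sketch below uses the quadratic Hénon map as a standard toy one-turn map (it is not Tracy++'s lattice model, and the tune is an arbitrary choice) to scan amplitudes for a crude dynamic aperture.

```python
import numpy as np

def track(x0, px0, turns=1000, nu=0.205):
    """Track an array of particles through the Henon map; return survival flags."""
    c, s = np.cos(2 * np.pi * nu), np.sin(2 * np.pi * nu)
    x, px = x0.copy(), px0.copy()
    alive = np.ones_like(x, dtype=bool)
    for _ in range(turns):
        # Sextupole-like kick x**2 followed by a linear rotation.
        kicked = px + x * x
        x, px = c * x + s * kicked, -s * x + c * kicked
        alive &= np.abs(x) < 10.0          # flag particles past the aperture
        x[~alive], px[~alive] = 0.0, 0.0   # park lost particles
    return alive

amps = np.linspace(0.0, 2.0, 201)          # scan of launch amplitudes
alive = track(amps, np.zeros_like(amps))
aperture = amps[alive].max()               # largest surviving amplitude
```

    Each array slot plays the role of one GPU thread; on a real GPU the same map body would be the kernel and the array index the thread index.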

  7. TADSim: Discrete Event-based Performance Prediction for Temperature Accelerated Dynamics

    DOE PAGES

    Mniszewski, Susan M.; Junghans, Christoph; Voter, Arthur F.; ...

    2015-04-16

    Next-generation high-performance computing will require more scalable and flexible performance prediction tools to evaluate software-hardware co-design choices relevant to scientific applications and hardware architectures. Here, we present a new class of tools called application simulators: parameterized fast-running proxies of large-scale scientific applications using parallel discrete event simulation. Parameterized choices for the algorithmic method and hardware options provide a rich space for design exploration and allow us to quickly find well-performing software-hardware combinations. We demonstrate our approach with a TADSim simulator that models the temperature-accelerated dynamics (TAD) method, an algorithmically complex and parameter-rich member of the accelerated molecular dynamics (AMD) family of molecular dynamics methods. The essence of the TAD application is captured without the computational expense and resource usage of the full code. We accomplish this by identifying the time-intensive elements, quantifying algorithm steps in terms of those elements, abstracting them out, and replacing them by the passage of time. We use TADSim to quickly characterize the runtime performance and algorithmic behavior for the otherwise long-running simulation code. We extend TADSim to model algorithm extensions, such as speculative spawning of the compute-bound stages, and predict performance improvements without having to implement such a method. Validation against the actual TAD code shows close agreement for the evolution of an example physical system, a silver surface. Finally, focused parameter scans have allowed us to study algorithm parameter choices over far more scenarios than would be possible with the actual simulation. This has led to interesting performance-related insights and suggested extensions.
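    The core trick, replacing compute-bound stages "by the passage of time", can be sketched with a tiny discrete-event loop: instead of doing the work, each event advances a simulated clock by that stage's measured cost. The two-stage pipeline, its costs, and the speculative-overlap model below are all invented for illustration.

```python
import heapq

def simulate(n_iters, t_stage_a, t_stage_b, speculative=False):
    """Predict total runtime of n_iters iterations of a two-stage pipeline.
    Each iteration is an event whose completion time advances the clock."""
    clock, events = 0.0, []
    for i in range(n_iters):
        if speculative:
            # Stages overlap (speculative spawning): cost is max(a, b).
            heapq.heappush(events, (clock + max(t_stage_a, t_stage_b), i))
        else:
            # Stages run back to back: cost is a + b.
            heapq.heappush(events, (clock + t_stage_a + t_stage_b, i))
        clock, _ = heapq.heappop(events)   # advance to event completion
    return clock

serial = simulate(100, t_stage_a=2.0, t_stage_b=1.0)
overlapped = simulate(100, t_stage_a=2.0, t_stage_b=1.0, speculative=True)
```

    This is the proxy idea in miniature: the predicted benefit of speculative overlap falls out of the event timing alone, with no stage ever actually computed.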

  8. GPU-accelerated atmospheric chemical kinetics in the ECHAM/MESSy (EMAC) Earth system model (version 2.52)

    NASA Astrophysics Data System (ADS)

    Alvanos, Michail; Christoudias, Theodoros

    2017-10-01

    This paper presents an application of GPU accelerators in Earth system modeling. We focus on atmospheric chemical kinetics, one of the most computationally intensive tasks in climate-chemistry model simulations. We developed a software package that automatically generates CUDA kernels to numerically integrate atmospheric chemical kinetics in the global climate model ECHAM/MESSy Atmospheric Chemistry (EMAC), used to study climate change and air quality scenarios. A source-to-source compiler outputs a CUDA-compatible kernel by parsing the FORTRAN code generated by the Kinetic PreProcessor (KPP) general analysis tool. All Rosenbrock methods that are available in the KPP numerical library are supported. Performance evaluation, using Fermi and Pascal CUDA-enabled GPU accelerators, shows achieved speed-ups of 4.5× and 20.4×, respectively, of the kernel execution time. A node-to-node real-world production performance comparison shows a 1.75× speed-up over the non-accelerated application using the KPP three-stage Rosenbrock solver. We provide a detailed description of the code optimizations used to improve the performance, including memory optimizations, control code simplification, and reduction of idle time. The accuracy and correctness of the accelerated implementation are evaluated by comparing its output to that of the CPU-only code of the application. The median relative difference is found to be less than 0.000000001%. The approach followed, including the computational workload division, and the developed GPU solver code can potentially be used as the basis for hardware acceleration of numerous geoscientific models that rely on KPP for atmospheric chemical kinetics applications.
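    KPP's Rosenbrock methods are linearly implicit schemes: each stage needs only a linear solve with (I - γ·dt·J), not a Newton iteration, which is what makes them attractive for stiff chemistry and for porting to GPU kernels. The sketch below shows the simplest one-stage, first-order member of the family on a scalar stiff decay y' = -k·y (rate constant invented); KPP's production solvers are higher-order multi-stage variants.

```python
import math

def rosenbrock1_step(y, k, dt):
    """One linearly implicit (Rosenbrock-Euler) step for y' = f(y) = -k*y.
    Stage equation: (1 - dt*J) * dy = dt * f(y), with Jacobian J = -k."""
    f = -k * y
    dy = dt * f / (1.0 + dt * k)
    return y + dy

def integrate(y0, k, dt, t_end):
    y, t = y0, 0.0
    while t < t_end - 1e-12:
        y = rosenbrock1_step(y, k, dt)
        t += dt
    return y

# Stable even with dt >> 1/k, where explicit Euler would blow up.
y_stiff = integrate(1.0, k=1.0e3, dt=0.1, t_end=1.0)
```

    Because every stage is a fixed sequence of linear algebra with no data-dependent iteration count, all grid cells can run the same instruction stream, which suits the one-kernel-per-cell CUDA mapping described above.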

  9. Modeling Drift Compression in an Integrated Beam Experiment for Heavy-Ion-Fusion

    NASA Astrophysics Data System (ADS)

    Sharp, W. M.; Barnard, J. J.; Friedman, A.; Grote, D. P.; Celata, C. M.; Yu, S. S.

    2003-10-01

    The Integrated Beam Experiment (IBX) is an induction accelerator being designed to further develop the science base for heavy-ion fusion. The experiment is being developed jointly by Lawrence Berkeley National Laboratory, Lawrence Livermore National Laboratory, and Princeton Plasma Physics Laboratory. One conceptual approach would first accelerate a 0.5-1 A beam of singly charged potassium ions to 5 MeV, impose a head-to-tail velocity tilt to compress the beam longitudinally, and finally focus the beam radially using a series of quadrupole lenses. The lengthwise compression is a critical step because the radial size must be controlled as the current increases, and the beam emittance must be kept minimal. The work reported here first uses the moment-based model HERMES to design the drift-compression beam line and to assess the sensitivity of the final beam profile to beam and lattice errors. The particle-in-cell code WARP is then used to validate the physics design, study the phase-space evolution, and quantify the emittance growth.
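    The head-to-tail velocity tilt at the heart of drift compression can be shown in a toy 1D model: the tail moves faster than the head, so the bunch shortens ballistically and reaches a waist when the tilt has "focused". For a perfectly linear tilt v = v0 - g·z on a cold beam, the waist is at t = 1/g. All numbers below are invented; HERMES and WARP of course model space charge and errors that this sketch ignores.

```python
import math
import random

def rms(xs):
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

rng = random.Random(0)
g, v0 = 0.5, 1.0                                   # tilt gradient, mean velocity
z = [rng.uniform(-0.5, 0.5) for _ in range(2000)]  # initial slice positions
v = [v0 - g * zi for zi in z]                      # head (large z) slower than tail

def length_at(t):
    """Rms bunch length after ballistic drift for time t."""
    return rms([zi + vi * t for zi, vi in zip(z, v)])

l0 = length_at(0.0)
l_waist = length_at(1.0 / g)   # linear tilt focuses all slices here
```

    A real beam has velocity spread and space charge, so the waist length is finite and the tilt must be shaped, which is exactly the sensitivity study the abstract describes.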

  10. Pure Niobium as a Pressure Vessel Material

    NASA Astrophysics Data System (ADS)

    Peterson, T. J.; Carter, H. F.; Foley, M. H.; Klebaner, A. L.; Nicol, T. H.; Page, T. M.; Theilacker, J. C.; Wands, R. H.; Wong-Squires, M. L.; Wu, G.

    2010-04-01

    Physics laboratories around the world are developing niobium superconducting radio frequency (SRF) cavities for use in particle accelerators. These SRF cavities are typically cooled to low temperatures by direct contact with a liquid helium bath, resulting in at least part of the helium container being made from pure niobium. In the U.S., the Code of Federal Regulations allows national laboratories to follow national consensus pressure vessel rules or to use alternative rules that provide a level of safety greater than or equal to that afforded by the ASME Boiler and Pressure Vessel Code. Thus, while used for its superconducting properties, niobium ends up also being treated as a material for pressure vessels. This report summarizes what we have learned about the use of niobium as a pressure vessel material, with a focus on issues for compliance with pressure vessel codes. We present results of a literature search for mechanical properties and tests results, as well as a review of ASME pressure vessel code requirements and issues.

  11. Accelerating Monte Carlo simulations of photon transport in a voxelized geometry using a massively parallel graphics processing unit.

    PubMed

    Badal, Andreu; Badano, Aldo

    2009-11-01

    Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: the use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA programming model (NVIDIA Corporation, Santa Clara, CA). An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speed-up factor was obtained using a GPU compared to a single-core CPU. The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.
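    The speed-up comes from running many photon histories in lockstep, one per GPU thread. The NumPy sketch below captures the same data-parallel style on the CPU: one array slot per history, all histories advanced together, here for the simplest possible case of free-path sampling through a slab (geometry and attenuation coefficient invented; the real code samples full PENELOPE interaction physics).

```python
import numpy as np

def transmitted_fraction(n, mu, thickness, seed=0):
    """Fraction of n photons crossing a slab, all histories in parallel.
    Free paths are sampled as -ln(xi)/mu, one array slot per history."""
    rng = np.random.default_rng(seed)
    path = -np.log(1.0 - rng.random(n)) / mu
    return np.count_nonzero(path > thickness) / n

# Analytic answer for comparison: exp(-mu * thickness) = exp(-1).
f = transmitted_fraction(200_000, mu=0.2, thickness=5.0)
```

    On a GPU the array index becomes the thread index and the same sampling expression becomes the kernel body; the divergence-free arithmetic is what keeps all threads busy.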

  12. Generation of bright attosecond x-ray pulse trains via Thomson scattering from laser-plasma accelerators.

    PubMed

    Luo, W; Yu, T P; Chen, M; Song, Y M; Zhu, Z C; Ma, Y Y; Zhuo, H B

    2014-12-29

    Generation of attosecond x-ray pulses is attracting increasing attention within the advanced light source user community due to their potentially wide applications. Here we propose an all-optical scheme to generate bright, attosecond hard x-ray pulse trains by Thomson backscattering of similarly structured electron beams produced in a vacuum channel by a tightly focused laser pulse. Design parameters for a proof-of-concept experiment are presented and demonstrated by using a particle-in-cell code and a four-dimensional laser-Compton scattering simulation code to model both the laser-based electron acceleration and Thomson scattering processes. Trains of 200-attosecond-duration hard x-ray pulses holding stable longitudinal spacing, with photon energies approaching 50 keV and maximum achievable peak brightness up to 10^20 photons/s/mm^2/mrad^2/0.1%BW for each micro-bunch, are observed. The suggested physical scheme for attosecond x-ray pulse-train generation may directly access the fastest time scales relevant to electron dynamics in atoms, molecules and materials.
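    The ~50 keV photon energy follows from the standard Thomson backscattering upshift: for a head-on collision with a relativistic electron (and 4γE_laser ≪ m_e c²), the scattered photon energy is roughly 4γ² times the laser photon energy. The quick estimate below assumes an 800 nm (1.55 eV) drive laser, which is an illustrative choice not stated in the abstract.

```python
import math

E_LASER_EV = 1.55       # assumed 800 nm Ti:sapphire photon energy (eV)
E_X_EV = 50.0e3         # target x-ray photon energy from the abstract (eV)

# Invert E_x = 4 * gamma**2 * E_laser for the required Lorentz factor.
gamma = math.sqrt(E_X_EV / (4.0 * E_LASER_EV))
electron_energy_mev = gamma * 0.511   # electron rest energy 0.511 MeV
```

    The result, a beam of a few tens of MeV, is well within reach of the laser-driven acceleration stage the scheme relies on, which is why the all-optical arrangement closes.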

  13. Roos and NACP-02 ion chamber perturbations and water-air stopping-power ratios for clinical electron beams for energies from 4 to 22 MeV

    NASA Astrophysics Data System (ADS)

    Bailey, M.; Shipley, D. R.; Manning, J. W.

    2015-02-01

    Empirical fits are developed for depth-compensated wall- and cavity-replacement perturbations in the PTW Roos 34001 and IBA/Scanditronix NACP-02 parallel-plate ionisation chambers, for electron beam qualities from 4 to 22 MeV for depths up to approximately 1.1 × R50,D. These are based on calculations using the Monte Carlo radiation transport code EGSnrc and its user codes with a full simulation of the linac treatment head modelled using BEAMnrc. These fits are used with calculated restricted stopping-power ratios between air and water to match measured depth-dose distributions in water from an Elekta Synergy clinical linear accelerator at the UK National Physical Laboratory. Results compare well with those from recent publications and from the IPEM 2003 electron beam radiotherapy Code of Practice.

  14. Hybrid petacomputing meets cosmology: The Roadrunner Universe project

    NASA Astrophysics Data System (ADS)

    Habib, Salman; Pope, Adrian; Lukić, Zarija; Daniel, David; Fasel, Patricia; Desai, Nehal; Heitmann, Katrin; Hsu, Chung-Hsing; Ankeny, Lee; Mark, Graham; Bhattacharya, Suman; Ahrens, James

    2009-07-01

    The target of the Roadrunner Universe project at Los Alamos National Laboratory is a set of very large cosmological N-body simulation runs on the hybrid supercomputer Roadrunner, the world's first petaflop platform. Roadrunner's architecture presents opportunities and difficulties characteristic of next-generation supercomputing. We describe a new code designed to optimize performance and scalability by explicitly matching the underlying algorithms to the machine architecture, and by using the physics of the problem as an essential aid in this process. While applications will differ in specific exploits, we believe that such a design process will become increasingly important in the future. The Roadrunner Universe project code, MC3 (Mesh-based Cosmology Code on the Cell), uses grid and direct particle methods to balance the capabilities of Roadrunner's conventional (Opteron) and accelerator (Cell BE) layers. Mirrored particle caches and spectral techniques are used to overcome communication bandwidth limitations and possible difficulties with complicated particle-grid interaction templates.

  15. EDITORIAL: Laser and plasma accelerators Laser and plasma accelerators

    NASA Astrophysics Data System (ADS)

    Bingham, Robert

    2009-02-01

    This special issue on laser and plasma accelerators illustrates the rapid advancement and diverse applications of laser and plasma accelerators. Plasma is an attractive medium for particle acceleration because of the high electric field it can sustain, with studies of acceleration processes remaining one of the most important areas of research in both laboratory and astrophysical plasmas. The rapid advance in laser and accelerator technology has led to the development of terawatt and petawatt laser systems with ultra-high intensities and short sub-picosecond pulses, which are used to generate wakefields in plasma. Recent successes include the demonstration by several groups in 2004 of quasi-monoenergetic electron beams by wakefields in the bubble regime with the GeV energy barrier being reached in 2006, and the energy doubling of the SLAC high-energy electron beam from 42 to 85 GeV. The electron beams generated by the laser plasma driven wakefields have good spatial quality with energies ranging from MeV to GeV. A unique feature is that they are ultra-short bunches with simulations showing that they can be as short as a few femtoseconds with low-energy spread, making these beams ideal for a variety of applications ranging from novel high-brightness radiation sources for medicine, material science and ultrafast time-resolved radiobiology or chemistry. Laser driven ion acceleration experiments have also made significant advances over the last few years with applications in laser fusion, nuclear physics and medicine. Attention is focused on the possibility of producing quasi-mono-energetic ions with energies ranging from hundreds of MeV to GeV per nucleon. New acceleration mechanisms are being studied, including ion acceleration from ultra-thin foils and direct laser acceleration. 
The application of wakefields or beat waves in other areas of science, such as astrophysics and particle physics, is beginning to take off; one example is the study of cosmic accelerators considered by Chen et al, where the driver, instead of being a laser, is a whistler wave, an approach known as the magnetowave plasma accelerator. The application to electron-positron plasmas, such as those found around pulsars, is studied in the paper by Shukla, and to muon acceleration by Peano et al. Electron wakefield experiments are now concentrating on control and optimisation of high-quality beams that can be used as drivers for novel radiation sources. Studies by Thomas et al show that filamentation has a deleterious effect on the production of high-quality mono-energetic electron beams and is caused by a non-optimal choice of focusing geometry and/or electron density. It is crucial to match the focusing with the right plasma parameters, and new types of plasma channels are being developed, such as the magnetically controlled plasma waveguide reported by Froula et al. The magnetic field provides a pressure profile shaping the channel to match the guiding conditions of the incident laser, resulting in predicted electron energies of 3 GeV. In the forced laser-wakefield experiment, Fang et al show that pump depletion reduces or inhibits the acceleration of electrons. One of the earlier laser acceleration concepts, known as the beat wave, may be revived owing to the work by Kalmykov et al, who report on all-optical control of nonlinear focusing of laser beams, allowing for stable propagation over several Rayleigh lengths with pre-injected electrons accelerated beyond 100 MeV. With the increasing number of petawatt lasers, attention is being focused on different acceleration regimes such as stochastic acceleration by counterpropagating laser pulses, the relativistic mirror, and the snow-plough effect leading to single-step acceleration reported by Mendonca.
During wakefield acceleration the leading edge of the pulse undergoes frequency downshifting and head erosion as the laser energy is transferred to the wake, while the trailing edge of the laser pulse undergoes a frequency up-shift. This is commonly known as photon deceleration and acceleration and is the result of a modulational instability. Simulations reported by Trines et al using a photon-in-cell, or wave kinetic, code agree extremely well with experimental observation. Ion acceleration is actively studied; for example, the papers by Robinson, Macchi, Marita and Tripathi all discuss different acceleration mechanisms, from direct laser acceleration to Coulombic explosion and double layers. Ion acceleration is an exciting development that may hold great promise in oncology. A surprising application is in muon acceleration, demonstrated by Peano et al, who show that counterpropagating laser beams with variable frequencies drive a beat structure with variable phase velocity, leading to particle trapping and acceleration, with possible application to a future muon collider and neutrino factory. Laser and plasma accelerators remain one of the most exciting areas of plasma physics, with applications in many areas of science ranging from laser fusion and novel high-brightness radiation sources to particle physics and medicine. The guest editor would like to thank all authors and referees for their invaluable contributions to this special issue.

  16. Design of Linear Accelerator (LINAC) tanks for proton therapy via Particle Swarm Optimization (PSO) and Genetic Algorithm (GA) approaches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castellano, T.; De Palma, L.; Laneve, D.

    2015-07-01

    A homemade computer code for designing a Side-Coupled Linear Accelerator (SCL) has been written. It integrates a simplified model of SCL tanks with the Particle Swarm Optimization (PSO) algorithm. The code's main aim is to obtain useful guidelines for the design of Linear Accelerator (LINAC) resonant cavities. The design procedure, assisted by this approach, seems very promising, allowing future improvements towards the optimization of actual accelerating geometries. (authors)
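    A minimal sketch of the PSO loop such a design code builds on: each particle remembers its personal best and is pulled toward the swarm's global best. The objective here is a made-up stand-in for a cavity figure of merit (the target value, bounds, and coefficients are illustrative assumptions, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_minimize(f, bounds, n_particles=30, n_iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer over a box-constrained search space."""
    lo, hi = bounds
    dim = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # particle positions
    v = np.zeros_like(x)                               # particle velocities
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()             # global best position
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Inertia + attraction to personal best + attraction to global best
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

# Hypothetical figure of merit: squared detuning of p[0]*p[1] from a target of 3.0
best_x, best_val = pso_minimize(lambda p: (p[0] * p[1] - 3.0) ** 2,
                                (np.array([0.5, 0.5]), np.array([3.0, 3.0])))
```

    In a real cavity design, the objective function would be replaced by the simplified SCL tank model's error against the desired resonant behavior.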

  17. Kinetic Modeling of Radiative Turbulence in Relativistic Astrophysical Plasmas: Particle Acceleration and High-Energy Flares

    NASA Astrophysics Data System (ADS)

    Uzdensky, Dmitri

    Relativistic astrophysical plasma environments routinely produce intense high-energy emission, which is often observed to be nonthermal and rapidly flaring. The recently discovered gamma-ray (> 100 MeV) flares in the Crab Pulsar Wind Nebula (PWN) provide a quintessential illustration of this, but other notable examples include relativistic active galactic nuclei (AGN) jets, including blazars, and Gamma-ray Bursts (GRBs). Understanding the processes responsible for the very efficient and rapid relativistic particle acceleration and subsequent emission that occurs in these sources poses a strong challenge to modern high-energy astrophysics, especially in light of the necessity to overcome radiation reaction during the acceleration process. Magnetic reconnection and collisionless shocks have been invoked as possible mechanisms. However, the inferred extreme particle acceleration requires the presence of coherent electric-field structures. How such large-scale accelerating structures (such as reconnecting current sheets) can spontaneously arise in turbulent astrophysical environments remains a mystery. The proposed project will conduct a first-principles computational and theoretical study of kinetic turbulence in relativistic collisionless plasmas, with a special focus on nonthermal particle acceleration and radiation emission. The main computational tool employed in this study will be the relativistic radiative particle-in-cell (PIC) code Zeltron, developed by the team members at the Univ. of Colorado. This code has a unique capability to self-consistently include the synchrotron and inverse-Compton radiation reaction force on the relativistic particles, while simultaneously computing the resulting observable radiative signatures.
This proposal envisions performing massively parallel, large-scale three-dimensional simulations of driven and decaying kinetic turbulence in physical regimes relevant to real astrophysical systems (such as the Crab PWN), including radiation reaction effects. In addition to measuring the general fluid-level statistical properties of kinetic turbulence (e.g., the turbulent spectrum in the inertial and sub-inertial ranges), as well as the overall energy dissipation and particle acceleration, the proposed study will also investigate their intermittency and time variability, yielding direction- and time-resolved emitted photon spectra and direction- and energy-resolved light curves, which can then be compared with observations. To gain deeper physical insight into the intermittent particle acceleration processes in turbulent astrophysical environments, the project will also identify and statistically analyze the current sheets, shocks, and other relevant localized particle-acceleration structures found in the simulations. In particular, it will assess whether relativistic kinetic turbulence in PWNe can self-consistently generate such structures that are long and strong enough to accelerate large numbers of particles to the PeV energies required to explain the Crab gamma-ray flares, and where and under what conditions such acceleration can occur. The results of this research will also advance our understanding of the origin of ultra-rapid TeV flares in blazar jets and will have important implications for GRB prompt emission, as well as AGN radio lobes and radiatively inefficient accretion flows, such as the flow onto the supermassive black hole at our Galactic Center.

  18. Overview of Particle and Heavy Ion Transport Code System PHITS

    NASA Astrophysics Data System (ADS)

    Sato, Tatsuhiko; Niita, Koji; Matsuda, Norihiro; Hashimoto, Shintaro; Iwamoto, Yosuke; Furuta, Takuya; Noda, Shusaku; Ogawa, Tatsuhiko; Iwase, Hiroshi; Nakashima, Hiroshi; Fukahori, Tokio; Okumura, Keisuke; Kai, Tetsuya; Chiba, Satoshi; Sihver, Lembit

    2014-06-01

    A general-purpose Monte Carlo Particle and Heavy Ion Transport code System, PHITS, is being developed through the collaboration of several institutes in Japan and Europe. The Japan Atomic Energy Agency is responsible for managing the entire project. PHITS can deal with the transport of nearly all particles, including neutrons, protons, heavy ions, photons, and electrons, over wide energy ranges using various nuclear reaction models and data libraries. It is written in Fortran and can be executed on almost all computers. All components of PHITS, such as its source, executable and data-library files, are assembled in one package and then distributed to many countries via the Research Organization for Information Science and Technology, the Data Bank of the Organization for Economic Co-operation and Development's Nuclear Energy Agency, and the Radiation Safety Information Computational Center. More than 1,000 researchers have been registered as PHITS users, and they apply the code to various research and development fields such as nuclear technology, accelerator design, medical physics, and cosmic-ray research. This paper briefly summarizes the physics models implemented in PHITS and introduces some important functions useful for specific applications, such as an event generator mode and beam transport functions.

  19. Prediction of high-energy radiation belt electron fluxes using a combined VERB-NARMAX model

    NASA Astrophysics Data System (ADS)

    Pakhotin, I. P.; Balikhin, M. A.; Shprits, Y.; Subbotin, D.; Boynton, R.

    2013-12-01

    This study is concerned with the modelling and forecasting of energetic electron fluxes that endanger satellites in space. By combining data-driven predictions from the NARMAX methodology with the physics-based VERB code, it becomes possible to predict electron fluxes with a high level of accuracy across radial distances from inside the local acceleration region to beyond geosynchronous orbit. The model coupling also makes it possible to avoid explicitly accounting for seed electron variations at the outer boundary. Furthermore, combining a convection code with the VERB and NARMAX models has the potential to provide even greater forecasting accuracy that is not limited to geostationary orbit but extends across the entire outer radiation belt region.

  20. Computational study of hot electron generation and energy transport in intense laser produced hot dense matter

    NASA Astrophysics Data System (ADS)

    Mishra, Rohini

    Present ultra-high-power lasers are capable of producing high energy density (HED) plasmas, in a controlled way, with densities greater than solid density and high temperatures of order keV (1 keV ≈ 11,600,000 K). Matter in such extreme states is particularly interesting for HED physics, including laboratory studies of planetary and stellar astrophysics, laser fusion research, and pulsed neutron sources. To date, however, the physics of HED plasmas, especially the energy transport that is crucial for realizing these applications, has not been well understood. Intense laser-produced plasmas are complex systems involving two widely separated temperature distributions and are difficult to model with a single approach; both kinetic and collisional processes are equally important for understanding the entire laser-solid interaction. Implementing atomic physics models, such as collisions, ionization, and radiation damping, self-consistently in the state-of-the-art particle-in-cell code PICLS has made it possible to explore the physics involved in HED plasmas. Laser absorption, hot electron transport, and isochoric heating physics in laser-produced hot dense plasmas are studied with the help of PICLS simulations. In particular, a novel mode of electron acceleration, namely DC-ponderomotive acceleration, is identified in the super-intense laser regime, which plays an important role in the coupling of laser energy to a dense plasma. Geometric effects on hot electron transport and target heating processes are examined in reduced-mass-target experiments. Further, pertinent to fast ignition, the divergence and transport of laser-accelerated fast electrons in experiments using warm dense matter (low-temperature plasma) are characterized and explained.

  1. Unified Models of Turbulence and Nonlinear Wave Evolution in the Extended Solar Corona and Solar Wind

    NASA Technical Reports Server (NTRS)

    Cranmer, Steven R.; Wagner, William (Technical Monitor)

    2003-01-01

    The PI (Cranmer) and Co-I (A. van Ballegooijen) made significant progress toward the goal of building a "unified model" of the dominant physical processes responsible for the acceleration of the solar wind. The approach outlined in the original proposal comprised two complementary pieces: (1) to further investigate individual physical processes under realistic coronal and solar wind conditions, and (2) to extract the dominant physical effects from simulations and apply them to a one-dimensional and time-independent model of plasma heating and acceleration. The accomplishments in the report period are thus divided into these two categories: 1a. Focused Study of Kinetic MHD Turbulence. We have developed a model of magnetohydrodynamic (MHD) turbulence in the extended solar corona that contains the effects of collisionless dissipation and anisotropic particle heating. A turbulent cascade is one possible way of generating small-scale fluctuations (easy to dissipate/heat) from a pre-existing population of low-frequency Alfven waves (difficult to dissipate/heat). We modeled the cascade as a combination of advection and diffusion in wavenumber space. The dominant spectral transfer occurs in the direction perpendicular to the background magnetic field. As expected from earlier models, this leads to a highly anisotropic fluctuation spectrum with a rapidly decaying tail in the parallel wavenumber direction. The wave power that decays to high enough frequencies to become ion cyclotron resonant depends on the relative strengths of advection and diffusion in the cascade. For the most realistic values of these parameters, though, there is insufficient power to heat protons and heavy ions. The dominant oblique waves undergo Landau damping, which implies strong parallel electron heating. 
We thus investigated the nonlinear evolution of the electron velocity distributions (VDFs) into parallel beams and discrete phase-space holes (similar to those seen in the terrestrial magnetosphere), which are an alternate means of heating protons via stochastic interactions similar to particle-particle collisions. 1b. Focused Study of the Multi-Mode Detailed Balance Formalism. The PI began to explore the feasibility of using the "weak turbulence," or detailed-balance, theory of Tsytovich, Melrose, and others to encompass the relevant physics of the solar wind. This study did not go far, however, because if the "strong" MHD turbulence discussed above is a dominant player in the wind's acceleration region, this formalism is inherently not applicable to the corona. We will continue to study the various published approaches to the weak turbulence formalism, especially with an eye on ways to parameterize nonlinear wave reflection rates. 2. Building the Unified Model Code Architecture. We have begun developing the computational model of a time-steady open flux tube in the extended corona. The model will be "unified" in the sense that it will include (simultaneously for the first time) as many of the various proposed physical processes as possible, all on an equal footing. To retain this generality, we have formulated the problem in two interconnected parts: a completely kinetic model for the particles, using the Monte Carlo approach, and a finite-difference approach for the self-consistent fluctuation spectra. The two codes are run sequentially and iteratively until complete consistency is achieved. The current version of the Monte Carlo code incorporates gravity, the zero-current electric field, magnetic mirroring, and collisions. The fluctuation code incorporates WKB wave action conservation and the cascade/dissipation processes discussed above. The codes are being run for various test problems with known solutions.
Planned additions to the codes include prescriptions for nonlinear wave steepening, kinetic velocity-space diffusion, and multi-mode coupling (including reflection and refraction).
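    The cascade model described above, advection plus diffusion of fluctuation energy in wavenumber space, can be sketched with a toy 1-D explicit scheme (constant coefficients and the grid parameters are illustrative assumptions, not the report's model):

```python
import numpy as np

def cascade_step(E, dk, adv, diff, dt):
    """One explicit Euler step of dE/dt = -adv * dE/dk + diff * d2E/dk2:
    advection carries fluctuation energy toward higher wavenumber,
    diffusion spreads it in k-space."""
    dEdk = np.gradient(E, dk)
    d2Edk2 = np.gradient(dEdk, dk)
    return E + dt * (-adv * dEdk + diff * d2Edk2)

k = np.linspace(0.0, 10.0, 201)
dk = k[1] - k[0]
E = np.exp(-((k - 2.0) / 0.5) ** 2)          # energy injected at low wavenumber
centroid0 = (k * E).sum() / E.sum()
for _ in range(400):                          # integrate to t = 4
    E = cascade_step(E, dk, adv=1.0, diff=0.05, dt=0.01)
centroid1 = (k * E).sum() / E.sum()           # spectrum centroid moves to higher k
```

    The relative sizes of `adv` and `diff` control how much power reaches the high-wavenumber (cyclotron-resonant) end of the spectrum, which is the balance the report identifies as decisive for proton versus electron heating.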

  2. Dynamic Monte Carlo simulations of radiatively accelerated GRB fireballs

    NASA Astrophysics Data System (ADS)

    Chhotray, Atul; Lazzati, Davide

    2018-05-01

    We present a novel Dynamic Monte Carlo code (DynaMo code) that self-consistently simulates the Compton-scattering-driven dynamic evolution of a plasma. We use the DynaMo code to investigate the time-dependent expansion and acceleration of dissipationless gamma-ray burst fireballs by varying their initial opacities and baryonic content. We study the opacity and energy density evolution of an initially optically thick, radiation-dominated fireball across its entire phase space, in particular during the Rph < Rsat regime. Our results reveal new phases of fireball evolution: a transition phase with a radial extent of several orders of magnitude, during which the fireball transitions from Γ ∝ R to Γ ∝ R^0; a post-photospheric acceleration phase, in which fireballs accelerate beyond the photosphere; and a Thomson-dominated acceleration phase, characterized by slow acceleration of optically thick, matter-dominated fireballs due to Thomson scattering. We quantify the new phases by providing analytical expressions for the Lorentz-factor evolution, which will be useful for deriving jet parameters.

  3. ORBIT: A Code for Collective Beam Dynamics in High-Intensity Rings

    NASA Astrophysics Data System (ADS)

    Holmes, J. A.; Danilov, V.; Galambos, J.; Shishlo, A.; Cousineau, S.; Chou, W.; Michelotti, L.; Ostiguy, J.-F.; Wei, J.

    2002-12-01

    We are developing a computer code, ORBIT, specifically for beam dynamics calculations in high-intensity rings. Our approach allows detailed simulation of realistic accelerator problems. ORBIT is a particle-in-cell tracking code that transports bunches of interacting particles through a series of nodes representing elements, effects, or diagnostics that occur in the accelerator lattice. At present, ORBIT contains detailed models for strip-foil injection, including painting and foil scattering; rf focusing and acceleration; transport through various magnetic elements; longitudinal and transverse impedances; longitudinal, transverse, and three-dimensional space charge forces; collimation and limiting apertures; and the calculation of many useful diagnostic quantities. ORBIT is an object-oriented code, written in C++ and utilizing a scripting interface for the convenience of the user. Ongoing improvements include the addition of a library of accelerator maps, BEAMLINE/MXYZPTLK; the introduction of a treatment of magnet errors and fringe fields; the conversion of the scripting interface to the standard scripting language, Python; and the parallelization of the computations using MPI. The ORBIT code is an open source, powerful, and convenient tool for studying beam dynamics in high-intensity rings.
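    The node-based design described above, a bunch passed through a sequence of element objects, can be sketched in a few lines (the `Drift` and `Quad` classes and their thin-lens kicks are illustrative assumptions, not ORBIT's actual C++ interfaces):

```python
import numpy as np

class Drift:
    """Node that advances transverse position through a field-free drift."""
    def __init__(self, length):
        self.length = length
    def track(self, bunch):
        bunch["x"] += self.length * bunch["xp"]

class Quad:
    """Thin-lens quadrupole node: an angular kick proportional to displacement."""
    def __init__(self, strength):
        self.strength = strength
    def track(self, bunch):
        bunch["xp"] -= self.strength * bunch["x"]

# A bunch of interacting particles is just shared coordinate arrays here
bunch = {"x": np.array([1e-3, -2e-3]), "xp": np.zeros(2)}
lattice = [Drift(1.0), Quad(0.5), Drift(1.0)]   # hypothetical mini-lattice
for node in lattice:
    node.track(bunch)
```

    Effects such as space charge, impedances, or diagnostics fit the same pattern: each is another node with a `track`-style method, which is what makes the node-list architecture convenient to extend.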

  4. Decoupling correction system in RHIC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trbojevic, D.; Tepikian, S.; Peggs, S.

    A global linear decoupling in the Relativistic Heavy Ion Collider (RHIC) is going to be performed with three families of skew quadrupoles. The operating horizontal and vertical betatron tunes in RHIC will be separated by one unit: ν_x = 28.19 and ν_y = 29.18. The linear coupling is corrected by minimizing the tune splitting Δν, i.e. the off-diagonal matrix m (defined by Edwards and Teng). The skew quadrupole correction system is located close to each of the six interaction regions. A detailed study of the system is presented using the TEAPOT accelerator physics code. © 1994 American Institute of Physics

  5. Convergence acceleration of the Proteus computer code with multigrid methods

    NASA Technical Reports Server (NTRS)

    Demuren, A. O.; Ibraheem, S. O.

    1992-01-01

    Presented here is the first part of a study to implement convergence acceleration techniques based on the multigrid concept in the Proteus computer code. A review is given of previous studies on the implementation of multigrid methods in computer codes for compressible flow analysis. Also presented is a detailed stability analysis of upwind and central-difference based numerical schemes for solving the Euler and Navier-Stokes equations. Results are given of a convergence study of the Proteus code on computational grids of different sizes. The results presented here form the foundation for the implementation of multigrid methods in the Proteus code.
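    As background for the multigrid concept the study implements, here is a minimal recursive V-cycle for a 1-D Poisson model problem (illustrative only; Proteus applies the idea to the Euler and Navier-Stokes equations, not to this toy operator):

```python
import numpy as np

def jacobi(u, f, h, iters):
    """Weighted-Jacobi smoothing sweeps for -u'' = f with zero boundary values."""
    for _ in range(iters):
        u[1:-1] = (2.0 / 3.0) * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) \
                  + (1.0 / 3.0) * u[1:-1]
    return u

def vcycle(u, f, h):
    """One recursive V-cycle: smooth, restrict the residual, solve the coarse
    correction, prolongate it back, smooth again."""
    u = jacobi(u, f, h, 3)
    if len(u) <= 3:
        return u
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    rc = r[::2].copy()                                   # restrict to coarse grid
    ec = vcycle(np.zeros_like(rc), rc, 2 * h)            # coarse-grid correction
    u += np.interp(np.arange(len(u)),
                   np.arange(len(u))[::2], ec)           # prolongate and correct
    return jacobi(u, f, h, 3)

n = 65
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi ** 2 * np.sin(np.pi * x)                       # exact solution: sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = vcycle(u, f, h)
err = np.max(np.abs(u - np.sin(np.pi * x)))
```

    The point of the method, which motivates its use as a convergence accelerator, is that each V-cycle damps all error wavelengths at once, so a few cycles reduce the algebraic error to the discretization level.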

  6. An Experimental Study of a Pulsed Electromagnetic Plasma Accelerator

    NASA Technical Reports Server (NTRS)

    Thio, Y. C. Francis; Eskridge, Richard; Lee, Mike; Smith, James; Martin, Adam; Markusic, Tom E.; Cassibry, Jason T.; Rodgers, Stephen L. (Technical Monitor)

    2002-01-01

    Experiments are being performed on the NASA Marshall Space Flight Center (MSFC) pulsed electromagnetic plasma accelerator (PEPA-0). Data produced from the experiments provide an opportunity to further understand the plasma dynamics in these thrusters via detailed computational modeling. The detailed and accurate understanding of the plasma dynamics in these devices holds the key towards extending their capabilities in a number of applications, including their applications as high power (greater than 1 MW) thrusters, and their use for producing high-velocity, uniform plasma jets for experimental purposes. For this study, the 2-D MHD modeling code, MACH2, is used to provide detailed interpretation of the experimental data. At the same time, a 0-D physics model of the plasma initial phase is developed to guide our 2-D modeling studies.

  7. Monte Carlo simulation of electron beams from an accelerator head using PENELOPE.

    PubMed

    Sempau, J; Sánchez-Reyes, A; Salvat, F; ben Tahar, H O; Jiang, S B; Fernández-Varea, J M

    2001-04-01

    The Monte Carlo code PENELOPE has been used to simulate electron beams from a Siemens Mevatron KDS linac with nominal energies of 6, 12 and 18 MeV. Owing to its accuracy, which stems from that of the underlying physical interaction models, PENELOPE is suitable for simulating problems of interest to the medical physics community. It includes a geometry package that allows the definition of complex quadric geometries, such as those of irradiation instruments, in a straightforward manner. Dose distributions in water simulated with PENELOPE agree well with experimental measurements using a silicon detector and a monitoring ionization chamber. Insertion of a lead slab in the incident beam at the surface of the water phantom produces sharp variations in the dose distributions, which are correctly reproduced by the simulation code. Results from PENELOPE are also compared with those of equivalent simulations with the EGS4-based user codes BEAM and DOSXYZ. Angular and energy distributions of electrons and photons in the phase-space plane (at the downstream end of the applicator) obtained from both simulation codes are similar, although significant differences do appear in some cases. These differences, however, are shown to have a negligible effect on the calculated dose distributions. Various practical aspects of the simulations, such as the calculation of statistical uncertainties and the effect of the 'latent' variance in the phase-space file, are discussed in detail.
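    One common way to compute the statistical uncertainties mentioned above is the batch method: run several independent batches and take the spread of the batch means. The sketch below uses a toy exponential "dose" score as a stand-in for a full transport run (the scoring function is an assumption for illustration, not PENELOPE's estimator):

```python
import numpy as np

rng = np.random.default_rng(42)

def score_batch(n_histories):
    """Toy scoring: each history deposits an exponentially distributed dose.
    Stands in for transporting one batch of particle histories."""
    return rng.exponential(1.0, n_histories).mean()

# Batch method: N independent batches; the standard error of the mean dose
# follows from the sample standard deviation of the batch means.
batches = np.array([score_batch(10_000) for _ in range(30)])
dose = batches.mean()
sigma = batches.std(ddof=1) / np.sqrt(len(batches))
```

    Because the batch means are independent and approximately normal, `sigma` is an honest uncertainty estimate even when individual history scores are far from Gaussian.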

  8. WARP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bergmann, Ryan M.; Rowland, Kelly L.

    2017-04-12

    WARP, which can stand for ``Weaving All the Random Particles,'' is a three-dimensional (3D) continuous energy Monte Carlo neutron transport code developed at UC Berkeley to efficiently execute on NVIDIA graphics processing unit (GPU) platforms. WARP accelerates Monte Carlo simulations while preserving the benefits of using the Monte Carlo method, namely, that very few physical and geometrical simplifications are applied. WARP is able to calculate multiplication factors, neutron flux distributions (in both space and energy), and fission source distributions for time-independent neutron transport problems. It can run in both criticality or fixed source modes, but fixed source mode is currentlymore » not robust, optimized, or maintained in the newest version. WARP can transport neutrons in unrestricted arrangements of parallelepipeds, hexagonal prisms, cylinders, and spheres. The goal of developing WARP is to investigate algorithms that can grow into a full-featured, continuous energy, Monte Carlo neutron transport code that is accelerated by running on GPUs. The crux of the effort is to make Monte Carlo calculations faster while producing accurate results. Modern supercomputers are commonly being built with GPU coprocessor cards in their nodes to increase their computational efficiency and performance. GPUs execute efficiently on data-parallel problems, but most CPU codes, including those for Monte Carlo neutral particle transport, are predominantly task-parallel. WARP uses a data-parallel neutron transport algorithm to take advantage of the computing power GPUs offer.« less
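    The data-parallel idea can be illustrated with a vectorized toy slab-transport loop, where every live neutron takes its flight sample and collision test in the same array operation, mirroring how a GPU applies one instruction across many particles (this 1-D monoenergetic toy is an assumption for illustration, not WARP's algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)

def transport(n_neutrons, sigma_t, absorb_frac, slab_thickness):
    """Toy data-parallel transport: the whole neutron population is advanced
    with whole-array operations instead of one history at a time."""
    x = np.zeros(n_neutrons)
    mu = np.ones(n_neutrons)                   # all start moving in +x
    alive = np.ones(n_neutrons, dtype=bool)
    leaked = 0
    while alive.any():
        # Sample flight distances for every live neutron at once
        x[alive] += mu[alive] * rng.exponential(1.0 / sigma_t, alive.sum())
        out = alive & (x > slab_thickness)     # leaked through the far face
        leaked += out.sum()
        alive &= ~out & (x >= 0)               # backward escapes are lost
        # Collision outcome, again as one vectorized test per neutron
        absorbed = alive & (rng.random(n_neutrons) < absorb_frac)
        alive &= ~absorbed
        mu[alive] = rng.uniform(-1.0, 1.0, alive.sum())   # isotropic scatter
    return leaked / n_neutrons

leak_prob = transport(100_000, sigma_t=1.0, absorb_frac=0.3, slab_thickness=2.0)
```

    The task-parallel CPU version of the same calculation would loop over histories one by one; the array formulation is what maps naturally onto GPU hardware.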

  9. Analysis of GEANT4 Physics List Properties in the 12 GeV MOLLER Simulation Framework

    NASA Astrophysics Data System (ADS)

    Haufe, Christopher; Moller Collaboration

    2013-10-01

    To determine the validity of new physics beyond the scope of the electroweak theory, nuclear physicists across the globe have been collaborating on future endeavors that will provide the precision needed to confirm these speculations. One of these is the MOLLER experiment, a low-energy particle experiment that will utilize the 12 GeV upgrade of Jefferson Lab's CEBAF accelerator. The motivation of this experiment is to measure the parity-violating asymmetry of polarized electrons scattered off unpolarized electrons in a liquid hydrogen target. This measurement would allow for a more precise determination of the electron's weak charge and weak mixing angle. While still in its planning stages, the MOLLER experiment requires a detailed simulation framework in order to determine how the project should be run in the future. The simulation framework for MOLLER, called ``remoll'', is written with GEANT4 code. As a result, the simulation can utilize a number of GEANT4 physics lists that constrain particle interactions according to different particle-physics models. By comparing these lists with one another using the data-analysis application ROOT, the most suitable physics list for the MOLLER simulation can be determined and implemented. This material is based upon work supported by the National Science Foundation under Grant No. 714001.

  10. Factors Contributing to Corrosion of Steel Pilings in Duluth-Superior Harbor

    DTIC Science & Technology

    2009-11-01

    ...Great Lakes. Accelerated corrosion of CS pilings in estuarine and marine harbors is a global phenomenon. The term "accelerated low water corrosion

  11. Status report on the 'Merging' of the Electron-Cloud Code POSINST with the 3-D Accelerator PIC CODE WARP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vay, J.-L.; Furman, M.A.; Azevedo, A.W.

    2004-04-19

    We have integrated the electron-cloud code POSINST [1] with WARP [2]--a 3-D parallel Particle-In-Cell accelerator code developed for Heavy Ion Inertial Fusion--so that the two can interoperate. Both codes are run in the same process, communicate through a Python interpreter (already used in WARP), and share certain key arrays (so far, particle positions and velocities). Currently, POSINST provides primary and secondary sources of electrons, beam bunch kicks, a particle mover, and diagnostics. WARP provides the field solvers and diagnostics. Secondary emission routines are provided by the Tech-X package CMEE.
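    The key mechanism described above, two codes in one process sharing arrays through a Python layer, can be sketched as follows. The class names, the uniform-field push, and the array layout are illustrative assumptions, not the POSINST or WARP interfaces:

```python
import numpy as np

# Shared state: both "codes" hold references to the same arrays, so an update
# made by one is immediately visible to the other without any copying.
positions = np.zeros((4, 3))
velocities = np.zeros((4, 3))

class FieldSolver:
    """Stands in for the field-solver role: pushes velocities with a
    (hypothetical) uniform field."""
    def __init__(self, pos, vel):
        self.pos, self.vel = pos, vel
    def push(self, efield, dt):
        self.vel += efield * dt            # writes into the shared array

class ElectronSource:
    """Stands in for the electron-cloud role: moves particles it shares."""
    def __init__(self, pos, vel):
        self.pos, self.vel = pos, vel
    def move(self, dt):
        self.pos += self.vel * dt          # sees the solver's update at once

solver = FieldSolver(positions, velocities)
source = ElectronSource(positions, velocities)
solver.push(np.array([0.0, 0.0, 2.0]), dt=0.5)   # one code writes velocities...
source.move(dt=0.5)                               # ...the other reads them
```

    Sharing references rather than exchanging files or messages is what lets the two codes interoperate step by step inside a single interpreter session.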

  12. Testing fundamental physics with distant star clusters: theoretical models for pressure-supported stellar systems

    NASA Astrophysics Data System (ADS)

    Haghi, Hosein; Baumgardt, Holger; Kroupa, Pavel; Grebel, Eva K.; Hilker, Michael; Jordi, Katrin

    2009-05-01

    We investigate the mean velocity dispersion and the velocity dispersion profile of stellar systems in modified Newtonian dynamics (MOND), using the N-body code N-MODY, which is a particle-mesh-based code with a numerical MOND potential solver developed by Ciotti, Londrillo & Nipoti. We have calculated mean velocity dispersions for stellar systems following Plummer density distributions with masses in the range of 10^4 to 10^9 Msolar, which are either isolated or immersed in an external field. Our integrations reproduce previous analytic estimates for stellar velocities in systems in the deep MOND regime (a_i, a_e << a_0), where the motion of stars is dominated either by internal accelerations (a_i >> a_e) or by constant external accelerations (a_e >> a_i). In addition, we derive for the first time analytic formulae for the line-of-sight velocity dispersion in the intermediate regime (a_i ~ a_e ~ a_0). This allows for a much-improved comparison of MOND with observed velocity dispersions of stellar systems. We finally derive the velocity dispersion of the globular cluster Pal 14, one of the outer Milky Way halo globular clusters that have recently been proposed as a differentiator between Newtonian and MONDian dynamics.
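    For the isolated deep-MOND limit mentioned above, a frequently quoted analytic estimate (Milgrom's mass-dispersion relation, written here for the line-of-sight dispersion) is sigma_los = (4 G M a0 / 81)^(1/4). The sketch below evaluates it for an illustrative cluster mass; the numbers are not the paper's Pal 14 result, which also accounts for the external field:

```python
# Physical constants (SI)
G = 6.674e-11          # m^3 kg^-1 s^-2
A0 = 1.2e-10           # m s^-2, MOND acceleration scale
MSUN = 1.989e30        # kg

def sigma_los_deep_mond(mass_msun):
    """Line-of-sight velocity dispersion (km/s) of an isolated system deep in
    the MOND regime, where internal accelerations are far below a0.
    Uses the commonly quoted relation sigma^4 = (4/81) G M a0."""
    m = mass_msun * MSUN
    return (4.0 * G * m * A0 / 81.0) ** 0.25 / 1e3

sigma = sigma_los_deep_mond(1e4)   # illustrative 10^4 Msun cluster
```

    Note the characteristic quarter-power scaling: the predicted dispersion depends only weakly on mass, which is why low-mass, low-acceleration outer-halo clusters are such sensitive MOND tests.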

  13. Accelerating Monte Carlo simulations of photon transport in a voxelized geometry using a massively parallel graphics processing unit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Badal, Andreu; Badano, Aldo

    Purpose: It is a known fact that Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: the use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). Methods: A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA programming model (NVIDIA Corporation, Santa Clara, CA). Results: An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speed-up factor was obtained using a GPU compared to a single-core CPU. Conclusions: The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.

  14. Sirepo - Warp

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nagler, Robert; Moeller, Paul

    Sirepo is an open source framework for cloud computing. The graphical user interface (GUI) for Sirepo, also known as the client, executes in any HTML5 compliant web browser on any computing platform, including tablets. The client is built in JavaScript, making use of the following open source libraries: Bootstrap, which is fundamental for cross-platform web applications; AngularJS, which provides a model–view–controller (MVC) architecture and GUI components; and D3.js, which provides interactive plots and data-driven transformations. The Sirepo server is built on the following Python technologies: Flask, which is a lightweight framework for web development; Jinja, which is a secure and widely used templating language; and Werkzeug, a utility library that is compliant with the WSGI standard. We use Nginx as the HTTP server and proxy, which provides a scalable event-driven architecture. The physics codes supported by Sirepo execute inside a Docker container. One of the codes supported by Sirepo is Warp. Warp is a particle-in-cell (PIC) code designed to simulate high-intensity charged particle beams and plasmas in both the electrostatic and electromagnetic regimes, with a wide variety of integrated physics models and diagnostics. At present, Sirepo supports a small subset of Warp's capabilities. Warp is open source and is part of the Berkeley Lab Accelerator Simulation Toolkit.

  15. EASY-II Renaissance: n, p, d, α, γ-induced Inventory Code System

    NASA Astrophysics Data System (ADS)

    Sublet, J.-Ch.; Eastwood, J. W.; Morgan, J. G.

    2014-04-01

    The European Activation SYstem has been re-engineered and re-written in modern programming languages so as to answer today's and tomorrow's needs in terms of activation, transmutation, depletion, decay and processing of radioactive materials. The new FISPACT-II inventory code development project has allowed us to embed many more features in terms of energy range: up to GeV; incident particles: alpha, gamma, proton, deuteron and neutron; and neutron physics: self-shielding effects, temperature dependence and covariance, so as to cover all anticipated application needs: nuclear fission and fusion, accelerator physics, isotope production, stockpile and fuel cycle stewardship, materials characterization and life, and storage cycle management. In parallel, the maturity of modern, truly general purpose libraries encompassing thousands of target isotopes such as TENDL-2012, the evolution of the ENDF-6 format and the capabilities of the latest generation of processing codes PREPRO, NJOY and CALENDF have allowed the activation code to be fed with more robust, complete and appropriate data: cross sections with covariance, probability tables in the resonance ranges, kerma, dpa, gas and radionuclide production and 24 decay types. All such data for the five most important incident particles (n, p, d, α, γ), are placed in evaluated data files up to an incident energy of 200 MeV. The resulting code system, EASY-II is designed as a functional replacement for the previous European Activation System, EASY-2010. It includes many new features and enhancements, but also benefits already from the feedback from extensive validation and verification activities performed with its predecessor.
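    The inventory problem at the heart of such an activation code is the solution of coupled decay/transmutation equations. A sketch of the simplest case, the analytic Bateman solution for a two-member decay chain, with hypothetical decay constants chosen only for illustration (FISPACT-II itself solves far larger coupled systems with reaction as well as decay terms):

```python
import math

def bateman_two(n0, lam_a, lam_b, t):
    """Analytic Bateman solution for a two-member decay chain A -> B -> (stable):
    returns (N_A(t), N_B(t)) for initial inventory N_A(0) = n0, N_B(0) = 0."""
    na = n0 * math.exp(-lam_a * t)
    nb = n0 * lam_a / (lam_b - lam_a) * (math.exp(-lam_a * t) - math.exp(-lam_b * t))
    return na, nb

# Hypothetical decay constants (1/s), for illustration only.
lam_a, lam_b = 0.01, 0.05
na, nb = bateman_two(1e6, lam_a, lam_b, t=100.0)
```

    Real inventory codes generalize this to thousands of nuclides by integrating the full coupled system numerically, with production terms driven by the particle flux and cross-section data.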

  16. Background gas density and beam losses in NIO1 beam source

    NASA Astrophysics Data System (ADS)

    Sartori, E.; Veltri, P.; Cavenago, M.; Serianni, G.

    2016-02-01

    NIO1 (Negative Ion Optimization 1) is a versatile ion source designed to study the physics of production and acceleration of H- beams up to 60 keV. In ion sources, the gas is steadily injected in the plasma source to sustain the discharge, while high vacuum is maintained by a dedicated pumping system located in the vessel. In this paper, the three dimensional gas flow in NIO1 is studied in the molecular flow regime by the Avocado code. The analysis of the gas density profile along the accelerator considers the influence of effective gas temperature in the source, of the gas temperature accommodation by collisions at walls, and of the gas particle mass. The calculated source and vessel pressures are compared with experimental measurements in NIO1 during steady gas injection.

  17. LEGO - A Class Library for Accelerator Design and Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, Yunhai

    1998-11-19

    An object-oriented class library for accelerator design and simulation is designed and implemented in a simple and modular fashion. All physics of single-particle dynamics is implemented based on the Hamiltonian in the local frame of the component. Symplectic integrators are used to approximate the integration of the Hamiltonian. A differential algebra class is introduced to extract a Taylor map up to arbitrary order. Analysis of optics is done in the same way for both the linear and nonlinear cases. Recently, Monte Carlo simulation of synchrotron radiation has been added to the library. The code is used to design and simulate the lattices of PEP-II and SPEAR3, and for the commissioning of PEP-II. Some examples of how to use the library are given.
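    The appeal of symplectic integrators, as used above, is that they preserve the phase-space structure of Hamiltonian flow, so the energy error stays bounded over arbitrarily many turns instead of drifting. A minimal sketch with a second-order leapfrog (velocity-Verlet) step on a toy harmonic Hamiltonian, not LEGO's actual integrators:

```python
def leapfrog_step(q, p, dt, dV):
    """One second-order symplectic (leapfrog) step for H(q, p) = p**2/2 + V(q),
    where dV is the gradient of the potential."""
    p_half = p - 0.5 * dt * dV(q)
    q_new = q + dt * p_half
    p_new = p_half - 0.5 * dt * dV(q_new)
    return q_new, p_new

dV = lambda q: q          # V(q) = q**2 / 2, a linear restoring force
q, p, dt = 1.0, 0.0, 0.1
energies = []
for _ in range(10_000):   # roughly 160 oscillation periods
    q, p = leapfrog_step(q, p, dt, dV)
    energies.append(0.5 * p * p + 0.5 * q * q)
drift = max(energies) - min(energies)   # stays O(dt**2), does not grow
```

    A non-symplectic scheme such as explicit Euler would show secular energy growth over the same number of steps, which is fatal for long-term tracking in storage rings.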

  18. Classical Mechanics Experiments using Wiimotes

    NASA Astrophysics Data System (ADS)

    Lopez, Alexander; Ochoa, Romulo

    2010-02-01

    The Wii, a video game console, is a very popular device. Although computationally it is not a powerful machine by today's standards, to a physics educator the controllers are its most important components. The Wiimote (or remote) controller contains a three-axis accelerometer, an infrared detector, and Bluetooth connectivity at a relatively low price. Thanks to available open source code, such as GlovePie, any PC or laptop with Bluetooth capability can detect the information sent out by the Wiimote. We present experiments that use two or three Wiimotes simultaneously to measure the variable accelerations in two-mass systems interacting via springs. Normal modes are determined from the data obtained. Masses and spring constants are varied to analyze their impact on the accelerations of the systems. We present the results of our experiments and compare them with those predicted using Lagrangian mechanics.
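    The normal-mode frequencies that such accelerometer data would be compared against follow from the eigenvalues of the mass-normalized stiffness matrix. A sketch for the symmetric wall-spring-mass-spring-mass-spring-wall configuration, with illustrative values of m and k (not the values used in the experiments):

```python
import numpy as np

# Two equal masses m coupled by three identical springs k.
# Equations of motion: m x'' = -K x, so the squared normal-mode
# frequencies are the eigenvalues of K / m.
m, k = 0.25, 10.0                     # illustrative values (kg, N/m)
K = np.array([[2 * k, -k],
              [-k, 2 * k]])
omega_sq = np.linalg.eigvalsh(K / m)  # ascending eigenvalues
freqs = np.sqrt(omega_sq)             # rad/s
# Analytic result for this symmetric case: sqrt(k/m) (in-phase mode)
# and sqrt(3k/m) (out-of-phase mode).
```

    Varying m and k here mirrors the experimental parameter scan described above, and the in-phase/out-of-phase eigenvectors of K identify which mode each measured acceleration trace belongs to.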

  19. First experience of vectorizing electromagnetic physics models for detector simulation

    NASA Astrophysics Data System (ADS)

    Amadio, G.; Apostolakis, J.; Bandieramonte, M.; Bianchini, C.; Bitzes, G.; Brun, R.; Canal, P.; Carminati, F.; de Fine Licht, J.; Duhem, L.; Elvira, D.; Gheata, A.; Jun, S. Y.; Lima, G.; Novak, M.; Presbyterian, M.; Shadura, O.; Seghal, R.; Wenzel, S.

    2015-12-01

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. The GeantV vector prototype for detector simulations has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth, parallelization needed to achieve optimal performance or memory access latency and speed. An additional challenge is to avoid the code duplication often inherent to supporting heterogeneous platforms. In this paper we present the first experience of vectorizing electromagnetic physics models developed for the GeantV project.
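    The essence of the vectorization effort above is restructuring per-track scalar loops into kernels that process a whole basket of tracks at once. A toy illustration of the transformation in Python/NumPy, with an invented stopping-power function standing in for a real electromagnetic physics model:

```python
import numpy as np

def step_scalar(energies, dedx, ds):
    """Scalar flow: process one track at a time (the traditional loop)."""
    out = []
    for e in energies:
        out.append(max(e - dedx(e) * ds, 0.0))
    return out

def step_vector(energies, dedx_vec, ds):
    """Vectorized flow: one SIMD-friendly kernel call for a basket of tracks."""
    return np.maximum(energies - dedx_vec(energies) * ds, 0.0)

# Toy stopping power, not a real physics model: dE/dx proportional to 1/E.
dedx = lambda e: 2.0 / max(e, 0.1)
dedx_vec = lambda e: 2.0 / np.maximum(e, 0.1)

e0 = np.linspace(0.5, 10.0, 1000)
scalar = step_scalar(e0, dedx, 0.05)
vector = step_vector(e0, dedx_vec, 0.05)
# Both flows must agree exactly; only the execution model changes.
```

    Keeping the scalar and vector kernels numerically identical, as checked here, is exactly the validation burden the paper describes when supporting CPUs, GPUs, and Xeon Phi from one code base.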

  20. Simulations of Coherent Synchrotron Radiation Effects in Electron Machines

    NASA Astrophysics Data System (ADS)

    Migliorati, M.; Schiavi, A.; Dattoli, G.

    2007-09-01

    Coherent synchrotron radiation (CSR) generated by high intensity electron beams can be a source of undesirable effects limiting the performance of storage rings. The complexity of the physical mechanisms underlying the interplay between the electron beam and the CSR demands reliable simulation codes. In the past, codes based on Lie algebraic techniques have been very efficient for treating transport problems in accelerators. The extension of these methods to the nonlinear case is ideally suited to treat the wakefield-beam interaction. In this paper we report on the development of a numerical code, based on the solution of the Vlasov equation, which includes the nonlinear contribution due to wakefields. The proposed solution method exploits an algebraic technique that uses exponential operators. We show that, in the case of CSR wakefields, the integration procedure is capable of reproducing the onset of an instability which leads to microbunching of the beam, thus increasing the CSR at short wavelengths. In addition, considerations on the threshold of the instability for Gaussian bunches are also reported.
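    The exponential-operator technique mentioned above rests on splitting the evolution operator exp((A+B)t) into products of the individually solvable pieces exp(At) and exp(Bt). A small numerical sketch of the splitting error with 2x2 non-commuting matrices standing in for the transport and wakefield generators (illustrative matrices, not the physics of the paper):

```python
import numpy as np

def expm(M, terms=30):
    """Matrix exponential by truncated Taylor series (adequate for small norms)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for n in range(1, terms):
        term = term @ M / n
        out = out + term
    return out

# Non-commuting stand-ins for the free-transport and wakefield generators.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])

dt = 0.01
exact = expm((A + B) * dt)
lie = expm(A * dt) @ expm(B * dt)                              # first-order splitting
strang = expm(A * dt / 2) @ expm(B * dt) @ expm(A * dt / 2)    # second-order splitting

err_lie = np.abs(lie - exact).max()        # O(dt**2), set by the commutator [A, B]
err_strang = np.abs(strang - exact).max()  # O(dt**3)
```

    The point of the exercise: because the two generators do not commute, naive splitting incurs an error per step, and symmetric (Strang-type) composition buys an order of accuracy at negligible cost.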

  2. Total reaction cross sections in CEM and MCNP6 at intermediate energies

    DOE PAGES

    Kerby, Leslie M.; Mashnik, Stepan G.

    2015-05-14

    Accurate total reaction cross section models are important to achieving reliable predictions from spallation and transport codes. The latest version of the Cascade Exciton Model (CEM) as incorporated in the code CEM03.03, and the Monte Carlo N-Particle transport code (MCNP6), both developed at Los Alamos National Laboratory (LANL), each use such cross sections. Having accurate total reaction cross section models in the intermediate energy region (50 MeV to 5 GeV) is very important for different applications, including analysis of space environments, use in medical physics, and accelerator design, to name just a few. The current inverse cross sections used in the preequilibrium and evaporation stages of CEM are based on the Dostrovsky et al. model, published in 1959. Better cross section models are now available. Implementing better cross section models in CEM and MCNP6 should yield improved predictions for particle spectra and total production cross sections, among other results.

  3. User's guide for ALEX: uncertainty propagation from raw data to final results for ORELA transmission measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larson, N.M.

    1984-02-01

    This report describes a computer code (ALEX) developed to assist in AnaLysis of EXperimental data at the Oak Ridge Electron Linear Accelerator (ORELA). Reduction of data from raw numbers (counts per channel) to physically meaningful quantities (such as cross sections) is in itself a complicated procedure; propagation of experimental uncertainties through that reduction procedure has in the past been viewed as even more difficult - if not impossible. The purpose of the code ALEX is to correctly propagate all experimental uncertainties through the entire reduction procedure, yielding the complete covariance matrix for the reduced data, while requiring little additional input from the experimentalist beyond that which is required for the data reduction itself. This report describes ALEX in detail, with special attention given to the case of transmission measurements (the code itself is applicable, with few changes, to any type of data). Application to the natural iron measurements of D.C. Larson et al. is described in some detail.
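    The mechanics of such uncertainty propagation is the first-order "sandwich" rule: the covariance of the reduced quantities is J C J^T, with J the Jacobian of the reduction map. A sketch for a simplified transmission reduction (sample counts over open-beam counts, then a total cross section), with invented numbers and Poisson counting statistics; ALEX's actual reduction chain has many more steps and correlated terms:

```python
import numpy as np

def propagate_covariance(jacobian, cov_in):
    """First-order (sandwich) propagation through a reduction map:
    C_out = J C_in J^T."""
    return jacobian @ cov_in @ jacobian.T

# Illustrative reduction: transmission T = c_s / c_o, then total cross
# section sigma = -ln(T) / N with areal density N (atoms/barn).
c_s, c_o, N = 8.0e4, 1.0e5, 0.05
T = c_s / c_o
sigma = -np.log(T) / N

# Uncorrelated counting statistics: var(counts) = counts (Poisson).
cov_counts = np.diag([c_s, c_o])
# Jacobian of sigma with respect to (c_s, c_o):
J = np.array([[-1.0 / (N * c_s), 1.0 / (N * c_o)]])
cov_sigma = propagate_covariance(J, cov_counts)
sigma_err = np.sqrt(cov_sigma[0, 0])
```

    Chaining this rule through every reduction step, rather than treating each step's errors as independent, is what yields the complete covariance matrix the report emphasizes.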

  5. GeauxDock: Accelerating Structure-Based Virtual Screening with Heterogeneous Computing

    PubMed Central

    Fang, Ye; Ding, Yun; Feinstein, Wei P.; Koppelman, David M.; Moreno, Juana; Jarrell, Mark; Ramanujam, J.; Brylinski, Michal

    2016-01-01

    Computational modeling of drug binding to proteins is an integral component of direct drug design. Particularly, structure-based virtual screening is often used to perform large-scale modeling of putative associations between small organic molecules and their pharmacologically relevant protein targets. Because of a large number of drug candidates to be evaluated, an accurate and fast docking engine is a critical element of virtual screening. Consequently, highly optimized docking codes are of paramount importance for the effectiveness of virtual screening methods. In this communication, we describe the implementation, tuning and performance characteristics of GeauxDock, a recently developed molecular docking program. GeauxDock is built upon the Monte Carlo algorithm and features a novel scoring function combining physics-based energy terms with statistical and knowledge-based potentials. Developed specifically for heterogeneous computing platforms, the current version of GeauxDock can be deployed on modern, multi-core Central Processing Units (CPUs) as well as massively parallel accelerators, Intel Xeon Phi and NVIDIA Graphics Processing Unit (GPU). First, we carried out a thorough performance tuning of the high-level framework and the docking kernel to produce a fast serial code, which was then ported to shared-memory multi-core CPUs yielding a near-ideal scaling. Further, using Xeon Phi gives 1.9× performance improvement over a dual 10-core Xeon CPU, whereas the best GPU accelerator, GeForce GTX 980, achieves a speedup as high as 3.5×. On that account, GeauxDock can take advantage of modern heterogeneous architectures to considerably accelerate structure-based virtual screening applications. GeauxDock is open-sourced and publicly available at www.brylinski.org/geauxdock and https://figshare.com/articles/geauxdock_tar_gz/3205249. PMID:27420300
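    The Metropolis accept/reject rule at the core of Monte Carlo docking engines like the one described can be sketched in a few lines. This is a generic 1-D toy energy landscape, not GeauxDock's scoring function or move set:

```python
import math
import random

def metropolis_chain(energy, x0, n_steps, step, beta, rng):
    """Generic Metropolis Monte Carlo sampler: accept downhill moves always,
    uphill moves with probability exp(-beta * dE)."""
    x, e = x0, energy(x0)
    accepted = 0
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        e_new = energy(x_new)
        if e_new <= e or rng.random() < math.exp(-beta * (e_new - e)):
            x, e, accepted = x_new, e_new, accepted + 1
    return x, e, accepted / n_steps

energy = lambda x: (x - 2.0) ** 2     # toy landscape with minimum at x = 2
rng = random.Random(7)
x, e, acc_rate = metropolis_chain(energy, x0=-5.0, n_steps=20_000,
                                  step=0.5, beta=5.0, rng=rng)
```

    In a docking code the "move" perturbs ligand position, orientation, and torsions, and the energy is the composite scoring function; the performance work described above is about evaluating that energy as fast as possible on each architecture.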

  7. FPGA acceleration of rigid-molecule docking codes

    PubMed Central

    Sukhwani, B.; Herbordt, M.C.

    2011-01-01

    Modelling the interactions of biological molecules, or docking, is critical both to understanding basic life processes and to designing new drugs. The field programmable gate array (FPGA) based acceleration of a recently developed, complex, production docking code is described. The authors found that it is necessary to extend their previous three-dimensional (3D) correlation structure in several ways, most significantly to support simultaneous computation of several correlation functions. The result for small-molecule docking is a 100-fold speed-up of a section of the code that represents over 95% of the original run-time. An additional 2% is accelerated through a previously described method, yielding a total acceleration of 36× over a single core and 10× over a quad-core. This approach is found to be an ideal complement to graphics processing unit (GPU) based docking, which excels in the protein–protein domain. PMID:21857870

  8. Highly-Damped Spectral Acceleration as a Ground Motion Intensity Measure for Estimating Collapse Vulnerability of Buildings

    NASA Astrophysics Data System (ADS)

    Buyco, K.; Heaton, T. H.

    2016-12-01

    Current U.S. seismic code and performance-based design recommendations quantify ground motion intensity using 5%-damped spectral acceleration when estimating the collapse vulnerability of buildings. This intensity measure works well for predicting inter-story drift due to moderate shaking, but other measures have been shown to be better for estimating collapse risk. We propose using highly-damped (>10%) spectral acceleration to assess collapse vulnerability. As damping is increased, the spectral acceleration at a given period T begins to behave like a weighted average of the corresponding lightly-damped (i.e., 5%) spectrum over a range of periods. Weights for periods longer than T increase as damping increases. Using high damping is physically intuitive for two reasons. Firstly, ductile buildings dissipate a large amount of hysteretic energy before collapse and thus behave more like highly-damped systems. Secondly, heavily damaged buildings experience period-lengthening, giving further credence to the weighted-averaging property of highly-damped spectral acceleration. To determine the optimal damping value(s) for this ground motion intensity measure, we conduct incremental dynamic analysis for a suite of ground motions on several different mid-rise steel buildings and select the damping value yielding the lowest dispersion of intensity at the collapse threshold. Spectral acceleration calculated with damping as high as 70% has been shown to be a better indicator of collapse than that with 5% damping.
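    Spectral acceleration at a given damping is obtained by integrating a damped single-degree-of-freedom oscillator through the ground motion and taking the peak response. A sketch with a simple semi-implicit Euler integrator and a resonant sine standing in for a recorded accelerogram (illustrative only; production response-spectrum codes use more accurate integrators and real records):

```python
import math

def peak_pseudo_accel(omega, zeta, accel_ground, dt, duration):
    """Pseudo-spectral acceleration Sa = omega^2 * max|x| for the SDOF system
    x'' + 2*zeta*omega*x' + omega^2*x = -a_g(t), from rest, via
    semi-implicit Euler integration."""
    x, v, peak, t = 0.0, 0.0, 0.0, 0.0
    while t < duration:
        a = -accel_ground(t) - 2.0 * zeta * omega * v - omega * omega * x
        v += a * dt
        x += v * dt
        peak = max(peak, abs(x))
        t += dt
    return omega * omega * peak

omega = 2.0 * math.pi                     # 1 Hz oscillator
ag = lambda t: math.sin(omega * t)        # resonant ground acceleration
sa_5 = peak_pseudo_accel(omega, 0.05, ag, dt=0.001, duration=20.0)
sa_70 = peak_pseudo_accel(omega, 0.70, ag, dt=0.001, duration=20.0)
# At resonance the steady amplitude scales like 1/(2*zeta), so heavy
# damping strongly suppresses the narrow resonant peak, which is the
# smoothing/averaging behaviour the abstract describes.
```

    Sweeping the damping ratio in this loop over a ground-motion suite is essentially how the dispersion comparison in the study is set up.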

  9. Establishing the Common Community Physics Package by Transitioning the GFS Physics to a Collaborative Software Framework

    NASA Astrophysics Data System (ADS)

    Xue, L.; Firl, G.; Zhang, M.; Jimenez, P. A.; Gill, D.; Carson, L.; Bernardet, L.; Brown, T.; Dudhia, J.; Nance, L. B.; Stark, D. R.

    2017-12-01

    The Global Model Test Bed (GMTB) has been established to support the evolution of atmospheric physical parameterizations in NCEP global modeling applications. To accelerate the transition to the Next Generation Global Prediction System (NGGPS), a collaborative model development framework known as the Common Community Physics Package (CCPP) was created within the GMTB to facilitate engagement from the broad community on physics experimentation and development. A key component of this research-to-operations (R2O) software framework is the Interoperable Physics Driver (IPD), which connects the physics parameterizations on one end to the dynamical cores on the other with minimal implementation effort. To initiate the CCPP, scientists and engineers from the GMTB separated and refactored the GFS physics. This exercise demonstrated the process of creating IPD-compliant code and can serve as an example for other physics schemes to do the same and be considered for inclusion in the CCPP. Further benefits of this process include run-time physics suite configuration and considerably reduced effort for testing modifications to physics suites through GMTB's physics test harness. The implementation will be described and the preliminary results will be presented at the conference.
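    The driver pattern described, where every scheme conforms to one call signature so suites can be composed at run time, can be sketched with a small registry. The scheme names and the state layout here are hypothetical, not the IPD's actual interface:

```python
from typing import Callable, Dict

# Every physics scheme implements the same signature, so the driver can
# assemble a run-time-configured suite without per-scheme glue code.
State = Dict[str, float]
Scheme = Callable[[State, float], State]

REGISTRY: Dict[str, Scheme] = {}

def register(name: str):
    """Decorator that makes a scheme discoverable by name."""
    def wrap(fn: Scheme) -> Scheme:
        REGISTRY[name] = fn
        return fn
    return wrap

@register("radiation")
def radiation(state: State, dt: float) -> State:
    state["temperature"] -= 0.1 * dt     # toy cooling tendency
    return state

@register("surface_flux")
def surface_flux(state: State, dt: float) -> State:
    state["temperature"] += 0.15 * dt    # toy heating tendency
    return state

def run_suite(suite, state: State, dt: float) -> State:
    """The driver: apply each configured scheme in order."""
    for name in suite:
        state = REGISTRY[name](state, dt)
    return state

out = run_suite(["radiation", "surface_flux"], {"temperature": 300.0}, dt=1.0)
```

    Reordering or swapping entries in the suite list is the run-time configuration benefit the abstract highlights; no scheme needs to know which dynamical core, or which other schemes, it runs with.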

  10. MARS15

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mokhov, Nikolai

    MARS is a Monte Carlo code for inclusive and exclusive simulation of three-dimensional hadronic and electromagnetic cascades, muon, heavy-ion and low-energy neutron transport in accelerator, detector, spacecraft and shielding components in the energy range from a fraction of an electronvolt up to 100 TeV. Recent developments in the MARS15 physical models of hadron, heavy-ion and lepton interactions with nuclei and atoms include a new nuclear cross section library, a model for soft pion production, the cascade-exciton model, the quark gluon string models, deuteron-nucleus and neutrino-nucleus interaction models, detailed description of negative hadron and muon absorption and a unified treatment of muon, charged hadron and heavy-ion electromagnetic interactions with matter. New algorithms are implemented into the code and thoroughly benchmarked against experimental data. The code capabilities to simulate cascades and generate a variety of results in complex media have been also enhanced. Other changes in the current version concern the improved photo- and electro-production of hadrons and muons, improved algorithms for the 3-body decays, particle tracking in magnetic fields, synchrotron radiation by electrons and muons, significantly extended histograming capabilities and material description, and improved computational performance. In addition to direct energy deposition calculations, a new set of fluence-to-dose conversion factors for all particles including neutrino are built into the code. The code includes new modules for calculation of Displacement-per-Atom and nuclide inventory. The powerful ROOT geometry and visualization model implemented in MARS15 provides a large set of geometrical elements with a possibility of producing composite shapes and assemblies and their 3D visualization along with a possible import/export of geometry descriptions created by other codes (via the GDML format) and CAD systems (via the STEP format).
    The built-in MARS-MAD Beamline Builder (MMBLB) was redesigned for use with the ROOT geometry package that allows a very efficient and highly-accurate description, modeling and visualization of beam loss induced effects in arbitrary beamlines and accelerator lattices. The MARS15 code includes links to the MCNP-family codes for neutron and photon production and transport below 20 MeV, to the ANSYS code for thermal and stress analyses and to the STRUCT code for multi-turn particle tracking in large synchrotrons and collider rings.

  11. CHOLLA: A New Massively Parallel Hydrodynamics Code for Astrophysical Simulation

    NASA Astrophysics Data System (ADS)

    Schneider, Evan E.; Robertson, Brant E.

    2015-04-01

    We present Computational Hydrodynamics On ParaLLel Architectures (Cholla), a new three-dimensional hydrodynamics code that harnesses the power of graphics processing units (GPUs) to accelerate astrophysical simulations. Cholla models the Euler equations on a static mesh using state-of-the-art techniques, including the unsplit Corner Transport Upwind algorithm, a variety of exact and approximate Riemann solvers, and multiple spatial reconstruction techniques including the piecewise parabolic method (PPM). Using GPUs, Cholla evolves the fluid properties of thousands of cells simultaneously and can update over 10 million cells per GPU-second while using an exact Riemann solver and PPM reconstruction. Owing to the massively parallel architecture of GPUs and the design of the Cholla code, astrophysical simulations with physically interesting grid resolutions (≳256³) can easily be computed on a single device. We use the Message Passing Interface library to extend calculations onto multiple devices and demonstrate nearly ideal scaling beyond 64 GPUs. A suite of test problems highlights the physical accuracy of our modeling and provides a useful comparison to other codes. We then use Cholla to simulate the interaction of a shock wave with a gas cloud in the interstellar medium, showing that the evolution of the cloud is highly dependent on its density structure. We reconcile the computed mixing time of a turbulent cloud with a realistic density distribution destroyed by a strong shock with the existing analytic theory for spherical cloud destruction by describing the system in terms of its median gas density.
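    The per-cell update pattern that maps so well onto GPUs is the conservative finite-volume step: each cell's new value depends only on fluxes at its two (or six, in 3D) interfaces. A deliberately minimal stand-in, first-order upwind for scalar advection rather than Cholla's CTU/Riemann-solver machinery, showing the conservative structure:

```python
import numpy as np

def upwind_step(u, c, dx, dt):
    """One conservative upwind finite-volume step for u_t + c*u_x = 0 on a
    periodic grid. For linear advection the exact Riemann solution at each
    interface simply selects the upwind state."""
    assert c > 0 and c * dt / dx <= 1.0          # CFL stability condition
    return u - (c * dt / dx) * (u - np.roll(u, 1))

n, c = 200, 1.0
dx = 1.0 / n
dt = 0.5 * dx
x = (np.arange(n) + 0.5) * dx
u = np.exp(-200.0 * (x - 0.5) ** 2)              # Gaussian pulse
mass0 = u.sum() * dx
for _ in range(400):                             # advect for t = 1 (one period)
    u = upwind_step(u, c, dx, dt)
mass1 = u.sum() * dx                             # conserved to roundoff
```

    Total "mass" is preserved exactly by construction, while the first-order scheme visibly diffuses the pulse; higher-order reconstruction such as PPM exists precisely to reduce that diffusion without sacrificing conservation.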

  12. Implementation of a hybrid particle code with a PIC description in r–z and a gridless description in ϕ into OSIRIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davidson, A., E-mail: davidsoa@physics.ucla.edu; Tableman, A., E-mail: Tableman@physics.ucla.edu; An, W., E-mail: anweiming@ucla.edu

    2015-01-15

    For many plasma physics problems, three-dimensional and kinetic effects are very important. However, such simulations are very computationally intensive. Fortunately, there is a class of problems for which there is nearly azimuthal symmetry and the dominant three-dimensional physics is captured by the inclusion of only a few azimuthal harmonics. Recently, it was proposed [1] to model one such problem, laser wakefield acceleration, by expanding the fields and currents in azimuthal harmonics and truncating the expansion. The complex amplitudes of the fundamental and first harmonic for the fields were solved on an r–z grid, and a procedure for calculating the complex current amplitudes for each particle based on its motion in Cartesian geometry was presented using Marder's correction to maintain the validity of Gauss's law. In this paper, we describe an implementation of this algorithm into OSIRIS using a rigorous charge conserving current deposition method to maintain the validity of Gauss's law. We show that this algorithm is a hybrid method which uses a particle-in-cell description in r–z and a gridless description in ϕ. We include the ability to keep an arbitrary number of harmonics and higher order particle shapes. Examples for laser wakefield acceleration, plasma wakefield acceleration, and beam loading are also presented and directions for future work are discussed.
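    The payoff of the azimuthal-harmonic expansion is easy to see on a single ring of field samples: a nearly axisymmetric field is represented exactly by just the m = 0 and m = 1 amplitudes. A sketch using an FFT to extract and truncate the harmonics (the quasi-3D algorithm itself evolves these complex amplitudes on the r–z grid rather than FFT-ing each step):

```python
import numpy as np

# Field sampled on a ring in phi, containing only m = 0 and m = 1 content,
# as is approximately the case for a linearly polarized laser driver.
n_phi = 64
phi = 2.0 * np.pi * np.arange(n_phi) / n_phi
field = 1.5 + 0.8 * np.cos(phi) + 0.3 * np.sin(phi)

coeffs = np.fft.rfft(field) / n_phi        # complex amplitude per harmonic m
truncated = coeffs.copy()
truncated[2:] = 0.0                        # keep only m = 0 and m = 1
reconstructed = np.fft.irfft(truncated * n_phi, n=n_phi)
# Two complex amplitudes reproduce the full ring: the 3D problem reduces
# to a handful of 2D (r-z) problems.
```

    When the physics deviates from this near-symmetry, additional harmonics can be retained, which is exactly the "arbitrary number of harmonics" capability the implementation above provides.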

  13. Modeling dynamic plasmas driven by ultraintense nano-focused x-ray laser pulses in solid iron targets

    NASA Astrophysics Data System (ADS)

    Royle, Ryan; Sentoku, Yasuhiko; Mancini, Roberto

    2017-10-01

    The hard x-ray free electron laser has proven to be a valuable tool for high energy density (HED) physics, as it is able to produce well-characterized samples of HED matter at exactly solid density and homogeneous temperatures. However, if the x-ray pulses are focused to sub-micron spot sizes, where peak intensities can exceed 1020 W/cm2, the plasmas driven by sources of non-thermal photoelectrons and Auger electrons can be highly dynamic and so cannot be modeled by atomic kinetics or fluid codes. We apply the 2D/3D particle-in-cell code PICLS, which has been extended with numerous physics models to enable the simulation of XFEL-driven plasmas, to the modeling of such dynamic plasmas driven by nano-focused XFEL pulses in solid iron targets. In the case of the smallest focal spot investigated, just 100 nm in diameter, keV plasmas induce strong radial E-fields that accelerate keV ions radially, as well as sheath fields that accelerate surface ions to hundreds of keV. The heated spot, which is initially larger than the laser spot due to the kinetic nature of the fast Auger electrons, expands as ion and electron waves propagate radially, leaving a low density region along the laser axis. This research was supported by the US DOE-OFES under Grant No. DE-SC0008827, the DOE-NNSA under Grant No. DE-NA0002075, and the JSPS KAKENHI under Grant No. JP15K21767.

  14. Application of Plasma Waveguides to High Energy Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Milchberg, Howard M

    2013-03-30

    The eventual success of laser-plasma based acceleration schemes for high-energy particle physics will require the focusing and stable guiding of short intense laser pulses in reproducible plasma channels. For this goal to be realized, many scientific issues need to be addressed. These issues include an understanding of the basic physics of, and an exploration of various schemes for, plasma channel formation. In addition, the coupling of intense laser pulses to these channels and the stable propagation of pulses in the channels require study. Finally, new theoretical and computational tools need to be developed to aid in the design and analysis of experiments and future accelerators. Here we propose a 3-year renewal of our combined theoretical and experimental program on the applications of plasma waveguides to high-energy accelerators. During the past grant period we have made a number of significant advances in the science of laser-plasma based acceleration. We pioneered the development of clustered gases as a new highly efficient medium for plasma channel formation. Our contributions here include theoretical and experimental studies of the physics of cluster ionization, heating, explosion, and channel formation. We have demonstrated for the first time the generation of and guiding in a corrugated plasma waveguide. The fine structure demonstrated in these guides is only possible with cluster jet heating by lasers. The corrugated guide is a slow wave structure operable at arbitrarily high laser intensities, allowing direct laser acceleration, a process we have explored in detail with simulations. The development of these guides opens the possibility of direct laser acceleration, a true miniature analogue of the SLAC RF-based accelerator.
    Our theoretical studies during this period have also contributed to the further development of the simulation codes, Wake and QuickPIC, which can be used for both laser driven and beam driven plasma based acceleration schemes. We will continue our development of advanced simulation tools by modifying the QuickPIC algorithm to allow for the simulation of plasma particle pick-up by the wake fields. We have also performed extensive simulations of plasma slow wave structures for efficient THz generation by guided laser beams or accelerated electron beams. We will pursue experimental studies of direct laser acceleration, and THz generation by two methods: ponderomotive-induced THz polarization, and THz radiation by laser accelerated electron beams. We also plan to study both conventional and corrugated plasma channels using our new 30 TW laser in our new lab facilities. We will investigate production of very long hydrogen plasma waveguides (5 cm). We will study guiding at increasing power levels through the onset of laser-induced cavitation (bubble regime) to assess the role played by the preformed channel. Experiments in direct acceleration will be performed, using laser plasma wakefields as the electron injector. Finally, we will use 2-colour ionization of gases as a high frequency THz source (<60 THz) to enable femtosecond measurements of low plasma densities in waveguides and beams.

  15. Computationally efficient methods for modelling laser wakefield acceleration in the blowout regime

    NASA Astrophysics Data System (ADS)

    Cowan, B. M.; Kalmykov, S. Y.; Beck, A.; Davoine, X.; Bunkers, K.; Lifschitz, A. F.; Lefebvre, E.; Bruhwiler, D. L.; Shadwick, B. A.; Umstadter, D. P.

    2012-08-01

    Electron self-injection and acceleration until dephasing in the blowout regime is studied for a set of initial conditions typical of recent experiments with 100-terawatt-class lasers. Two different approaches to computationally efficient, fully explicit, 3D particle-in-cell modelling are examined. First, the Cartesian code vorpal (Nieter, C. and Cary, J. R. 2004 VORPAL: a versatile plasma simulation code. J. Comput. Phys. 196, 538) using a perfect-dispersion electromagnetic solver precisely describes the laser pulse and bubble dynamics, taking advantage of coarser resolution in the propagation direction, with a proportionally larger time step. Using third-order splines for macroparticles helps suppress the sampling noise while keeping the usage of computational resources modest. The second way to reduce the simulation load is using reduced-geometry codes. In our case, the quasi-cylindrical code calder-circ (Lifschitz, A. F. et al. 2009 Particle-in-cell modelling of laser-plasma interaction using Fourier decomposition. J. Comput. Phys. 228(5), 1803-1814) uses decomposition of fields and currents into a set of poloidal modes, while the macroparticles move in the Cartesian 3D space. Cylindrical symmetry of the interaction allows using just two modes, reducing the computational load to roughly that of a planar Cartesian simulation while preserving the 3D nature of the interaction. This significant economy of resources allows using fine resolution in the direction of propagation and a small time step, making numerical dispersion vanishingly small, together with a large number of particles per cell, enabling good particle statistics. Quantitative agreement of two simulations indicates that these are free of numerical artefacts. Both approaches thus retrieve the physically correct evolution of the plasma bubble, recovering the intrinsic connection of electron self-injection to the nonlinear optical evolution of the driver.
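The azimuthal-mode decomposition that makes calder-circ economical can be illustrated with a minimal sketch (NumPy; the grid sizes and test field are illustrative, not from the code itself): a field F(r, θ) is expanded in azimuthal harmonics, and for a linearly polarized laser in a cylindrically symmetric plasma only the m = 0 and m = 1 modes carry content, so two modes reconstruct the field essentially exactly.

```python
import numpy as np

# Sketch of the azimuthal (poloidal) mode decomposition used by
# quasi-cylindrical PIC codes such as calder-circ:
#   F(r, theta) = Re( sum_m F_m(r) exp(-i m theta) ),
# keeping only the lowest modes. Grid sizes and the test field are
# illustrative assumptions, not taken from the code.

nr, ntheta = 64, 128
r = np.linspace(0.0, 1.0, nr)
theta = np.linspace(0.0, 2 * np.pi, ntheta, endpoint=False)
R, T = np.meshgrid(r, theta, indexing="ij")

# A test field with only m = 0 and m = 1 content, as produced by a
# linearly polarized driver in a cylindrically symmetric plasma.
field = R**2 + 0.5 * R * np.cos(T)

# Azimuthal FFT gives the mode amplitudes F_m(r).
modes = np.fft.fft(field, axis=1) / ntheta

# Reconstruct keeping only m = 0 and m = +/-1.
kept = np.zeros_like(modes)
kept[:, 0] = modes[:, 0]
kept[:, 1] = modes[:, 1]
kept[:, -1] = modes[:, -1]
reconstructed = np.fft.ifft(kept * ntheta, axis=1).real

# Two modes suffice: the reconstruction error is at round-off level.
print(np.max(np.abs(reconstructed - field)))
```

The computational saving is then roughly the ratio of the full azimuthal resolution to the number of retained modes, while the macroparticles still move in 3D Cartesian space.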

  16. The Scanning Electron Microscope As An Accelerator For The Undergraduate Advanced Physics Laboratory

    NASA Astrophysics Data System (ADS)

    Peterson, Randolph S.; Berggren, Karl K.; Mondol, Mark

    2011-06-01

    Few universities or colleges have an accelerator for use with advanced physics laboratories, but many of these institutions have a scanning electron microscope (SEM) on site, often in the biology department. As an accelerator for the undergraduate, advanced physics laboratory, the SEM is an excellent substitute for an ion accelerator. Although there are no nuclear physics experiments that can be performed with a typical 30 kV SEM, there is an opportunity for experimental work on accelerator physics, atomic physics, electron-solid interactions, and the basics of modern e-beam lithography.

  17. Hardware accelerated high performance neutron transport computation based on AGENT methodology

    NASA Astrophysics Data System (ADS)

    Xiao, Shanjie

    The spatial heterogeneity of the next generation Gen-IV nuclear reactor core designs brings challenges to the neutron transport analysis. The Arbitrary Geometry Neutron Transport (AGENT) code is a three-dimensional neutron transport analysis code being developed at the Laboratory for Neutronics and Geometry Computation (NEGE) at Purdue University. It can accurately describe the spatial heterogeneity in a hierarchical structure through the R-function solid modeler. The previous version of AGENT coupled the 2D transport MOC solver and the 1D diffusion NEM solver to solve the three-dimensional Boltzmann transport equation. In this research, the 2D/1D coupling methodology was expanded to couple two transport solvers, the radial 2D MOC solver and the axial 1D MOC solver, for better accuracy. The expansion was benchmarked with the widely applied C5G7 benchmark models and two fast breeder reactor models, and showed good agreement with the reference Monte Carlo results. In practice, accurate neutron transport analysis of a full reactor core is still time-consuming, which limits its application. Therefore, the second part of this research focused on designing dedicated hardware, based on the reconfigurable computing technique, to accelerate AGENT computations. This is the first time such an approach has been applied to reactor physics and neutron transport for reactor design. The most time-consuming part of the AGENT algorithm was identified, and the architecture of the AGENT acceleration system was designed based on this analysis. Through parallel computation on the specially designed, highly efficient architecture, the FPGA-based design achieves high performance at a much lower clock frequency than CPUs. Whole-design simulations show that the acceleration design would be able to speed up large-scale AGENT computations by about 20 times. 
The high-performance AGENT acceleration system will drastically shorten the computation time for 3D full-core neutron transport analysis, making the AGENT methodology unique and advantageous, and opening the possibility of extending the application range of neutron transport analysis in both industrial engineering and academic research.
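The overall speedup achievable by accelerating one kernel depends on how much of the runtime that kernel occupies; Amdahl's law makes this concrete. A minimal sketch with purely illustrative numbers (the fractions below are assumptions, not figures from the AGENT study):

```python
def amdahl_speedup(accelerated_fraction: float, kernel_speedup: float) -> float:
    """Overall speedup when a fraction of runtime is accelerated (Amdahl's law)."""
    serial = 1.0 - accelerated_fraction
    return 1.0 / (serial + accelerated_fraction / kernel_speedup)

# Illustrative numbers only: if 97% of the runtime sits in the
# accelerated transport sweep and the hardware makes that part 50x
# faster, the whole code runs roughly 20x faster, on the order of
# the whole-design speedup quoted for AGENT.
print(amdahl_speedup(0.97, 50.0))
```

The formula also shows why the remaining serial fraction dominates: even an infinitely fast kernel caps the overall speedup at 1/(1 - accelerated_fraction).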

  18. Emission of energetic protons from relativistic intensity laser interaction with a cone-wire target.

    PubMed

    Paradkar, B S; Yabuuchi, T; Sawada, H; Higginson, D P; Link, A; Wei, M S; Stephens, R B; Krasheninnikov, S I; Beg, F N

    2012-11-01

    Emission of energetic protons (maximum energy ∼18 MeV) from the interaction of a relativistic-intensity laser with a cone-wire target is experimentally measured and numerically simulated with the hybrid particle-in-cell code lsp [D. R. Welch et al., Phys. Plasmas 13, 063105 (2006)]. The protons originate from the wire attached to the cone after the OMEGA EP laser (670 J, 10 ps, 5 × 10^{18} W/cm^{2}) deposits its energy inside the cone. These protons are accelerated from the contaminant layer on the wire surface, and are measured in the radial direction, i.e., in a direction transverse to the wire length. Simulations show that the radial electric field, responsible for the proton acceleration, is excited by three factors, viz., (i) transverse momentum of the relativistic fast-electron beam entering the wire, (ii) scattering of electrons inside the wire, and (iii) refluxing of escaped electrons by the "fountain effect" at the end of the wire. The underlying physics of the radial electric field and the acceleration of protons is discussed.

  19. FERMILAB ACCELERATOR R&D PROGRAM TOWARDS INTENSITY FRONTIER ACCELERATORS : STATUS AND PROGRESS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shiltsev, Vladimir

    2016-11-15

    The 2014 P5 report identified accelerator-based neutrino and rare-decay physics research as a centrepiece of the US domestic HEP program at Fermilab. Operation, upgrade and development of the accelerators for the near-term and longer-term particle physics program at the Intensity Frontier face formidable challenges. Here we discuss key elements of the accelerator physics and technology R&D program toward future multi-MW proton accelerators and present its status and progress.

  20. SimTrack: A compact c++ library for particle orbit and spin tracking in accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Yun

    2015-06-24

    SimTrack is a compact C++ library for 6-d symplectic element-by-element particle tracking in accelerators, originally designed for head-on beam-beam compensation simulation studies in the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory. It provides 6-d symplectic orbit tracking with 4th-order symplectic integration for magnet elements and the 6-d symplectic synchro-beam map for beam-beam interaction. Since its inception in 2009, SimTrack has been used intensively for dynamic aperture calculations with beam-beam interaction for RHIC. Recently, proton spin tracking and electron energy loss due to synchrotron radiation were added. In this article, I present the code architecture, physics models, and selected examples of its applications to RHIC and a future electron-ion collider design, eRHIC.
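The 4th-order symplectic integration mentioned above can be sketched with the standard Forest-Ruth drift-kick splitting. This is a generic illustration of the technique, not SimTrack's API: here the "magnet" is a simple linear focusing force, and the payoff of symplecticity is that the energy error stays bounded over long tracking runs.

```python
# Sketch of 4th-order symplectic (Forest-Ruth) drift-kick splitting,
# the kind of integrator element-by-element trackers use for magnets.
# The linear focusing "element" and all names are illustrative.

THETA = 2.0 ** (1.0 / 3.0)
C = [1 / (2 * (2 - THETA)), (1 - THETA) / (2 * (2 - THETA)),
     (1 - THETA) / (2 * (2 - THETA)), 1 / (2 * (2 - THETA))]
D = [1 / (2 - THETA), -THETA / (2 - THETA), 1 / (2 - THETA), 0.0]

def fr4_step(x, p, h, k=1.0):
    """One 4th-order symplectic step for H = p^2/2 + k x^2/2."""
    for c, d in zip(C, D):
        x += c * h * p          # drift
        p += -d * h * k * x     # kick from the focusing force F = -k x
    return x, p

x, p, h = 1.0, 0.0, 0.05
e0 = 0.5 * p * p + 0.5 * x * x
for _ in range(10000):
    x, p = fr4_step(x, p, h)
e1 = 0.5 * p * p + 0.5 * x * x
# Symplectic integration: the energy error remains bounded and small
# over many turns, instead of drifting secularly.
print(abs(e1 - e0))
```

A non-symplectic scheme of the same order would show a secular energy drift over the same number of steps, which is why symplectic maps are preferred for long-term dynamic aperture studies.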

  1. Transit Time and Normal Orientation of ICME-driven Shocks

    NASA Astrophysics Data System (ADS)

    Case, A. W.; Spence, H.; Owens, M.; Riley, P.; Linker, J.; Odstrcil, D.

    2006-12-01

    Interplanetary Coronal Mass Ejections (ICMEs) can drive shocks that accelerate particles to great energies. It is important to understand the acceleration, transport, and spectra of these particles in order to quantify this fundamental physical process operating throughout the cosmos. This understanding also helps to better protect astronauts and spacecraft in upcoming missions. We show that the ambient solar wind is crucial in determining characteristics of ICME-driven shocks, which in turn affect energetic particle production. We use a coupled 3-D MHD code of the corona and heliosphere to simulate ICME propagation from 30 solar radii to 1 AU. ICMEs of different velocities are injected into a realistic solar wind to determine how the initial speed affects the shape and deceleration of the ICME-driven shock. We use shock transit time and shock normal orientation to quantify these dependencies. We also inject identical ICMEs into different ambient solar winds to quantify the effective drag force on an ICME.

  2. Implementing Molecular Dynamics for Hybrid High Performance Computers - 1. Short Range Forces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, W Michael; Wang, Peng; Plimpton, Steven J

    The use of accelerators such as general-purpose graphics processing units (GPGPUs) has become popular in scientific computing applications due to their low cost, impressive floating-point capabilities, high memory bandwidth, and low electrical power requirements. Hybrid high performance computers, machines with more than one type of floating-point processor, are becoming more prevalent due to these advantages. In this work, we discuss several important issues in porting a large molecular dynamics code for use on parallel hybrid machines: (1) choosing a hybrid parallel decomposition that works on central processing units (CPUs) with distributed memory and accelerator cores with shared memory, (2) minimizing the amount of code that must be ported for efficient acceleration, (3) utilizing the available processing power from both many-core CPUs and accelerators, and (4) choosing a programming model for acceleration. We present our solution to each of these issues for short-range force calculation in the molecular dynamics package LAMMPS. We describe algorithms for efficient short-range force calculation on hybrid high performance machines. We describe a new approach for dynamic load balancing of work between CPU and accelerator cores. We describe the Geryon library that allows a single code to compile with both CUDA and OpenCL for use on a variety of accelerators. Finally, we present results on a parallel test cluster containing 32 Fermi GPGPUs and 180 CPU cores.
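The dynamic load-balancing idea, splitting work between CPU and accelerator cores and adjusting the split from measured timings, can be sketched as follows. The timing model and the update rule here are illustrative assumptions, not the scheme LAMMPS actually implements:

```python
# Sketch of dynamic CPU/accelerator load balancing: each step, the
# fraction of work offloaded to the GPU is nudged toward the split
# that equalizes the measured CPU and GPU times. The synthetic
# "hardware" speeds and the damped update rule are assumptions for
# illustration only.

def rebalance(f, t_gpu, t_cpu, damping=0.5):
    """Estimate per-unit processing rates from the last step's timings
    and move the GPU work fraction toward the balanced split."""
    rate_gpu = f / t_gpu            # work units per second on the GPU
    rate_cpu = (1.0 - f) / t_cpu    # work units per second on the CPU
    f_star = rate_gpu / (rate_gpu + rate_cpu)
    return f + damping * (f_star - f)

# Synthetic hardware: the GPU is 4x faster per work unit than the CPU.
GPU_SPEED, CPU_SPEED = 4.0, 1.0
f = 0.5
for _ in range(20):
    t_gpu = f / GPU_SPEED
    t_cpu = (1.0 - f) / CPU_SPEED
    f = rebalance(f, t_gpu, t_cpu)
print(round(f, 3))  # converges to 0.8 = 4 / (4 + 1)
```

The damping factor trades convergence speed against sensitivity to timing noise, which matters in practice because per-step timings on real hardware fluctuate.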

  3. Development of an efficient multigrid method for the NEM form of the multigroup neutron diffusion equation

    NASA Astrophysics Data System (ADS)

    Al-Chalabi, Rifat M. Khalil

    1997-09-01

    Development of an improvement to the computational efficiency of the existing nested iterative solution strategy of the Nodal Expansion Method (NEM) nodal based neutron diffusion code NESTLE is presented. The improvement in the solution strategy is the result of developing a multilevel acceleration scheme that does not suffer from the numerical stalling associated with a number of iterative solution methods. The acceleration scheme is based on the multigrid method, which is specifically adapted for incorporation into the NEM nonlinear iterative strategy. This scheme optimizes the computational interplay between the spatial discretization and the NEM nonlinear iterative solution process through the use of the multigrid method. The combination of the NEM nodal method, calculation of the homogenized neutron nodal balance coefficients (i.e. restriction operator), efficient underlying smoothing algorithm (power method of NESTLE), and the finer mesh reconstruction algorithm (i.e. prolongation operator), all operating on a sequence of coarser spatial nodes, constitutes the multilevel acceleration scheme employed in this research. Two implementations of the multigrid method into the NESTLE code were examined: the Imbedded NEM Strategy and the Imbedded CMFD Strategy. The main difference in implementation between the two methods is that in the Imbedded NEM Strategy, the NEM solution is required at every MG level. Numerical tests have shown that the Imbedded NEM Strategy suffers from divergence at coarse-grid levels, hence all the results for the different benchmarks presented here were obtained using the Imbedded CMFD Strategy. The novelties in the developed MG method are as follows: the formulation of the restriction and prolongation operators, and the selection of the relaxation method. The restriction operator utilizes a variation of the reactor-physics-consistent homogenization technique. 
The prolongation operator is based upon a variant of the pin power reconstruction methodology. The relaxation method, which is the power method, utilizes a constant coefficient matrix within the NEM non-linear iterative strategy. The choice of the MG nesting within the nested iterative strategy enables the incorporation of other non-linear effects with no additional coding effort. In addition, if an eigenvalue problem is being solved, it remains an eigenvalue problem at all grid levels, simplifying coding implementation. The merit of the developed MG method was tested by incorporating it into the NESTLE iterative solver, and employing it to solve four different benchmark problems. In addition to the base cases, three different sensitivity studies are performed, examining the effects of number of MG levels, homogenized coupling coefficients correction (i.e. restriction operator), and fine-mesh reconstruction algorithm (i.e. prolongation operator). The multilevel acceleration scheme developed in this research provides the foundation for developing adaptive multilevel acceleration methods for steady-state and transient NEM nodal neutron diffusion equations. (Abstract shortened by UMI.)
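The restriction/prolongation/relaxation machinery described above can be illustrated with one two-grid correction cycle (pre-smooth, restrict the residual, solve the coarse problem, prolong the correction, post-smooth) for a model 1D diffusion problem. This is a generic textbook sketch, not NESTLE's operators, whose nodal homogenization and pin-power reconstruction are far more elaborate:

```python
import numpy as np

# Two-grid correction cycle for -u'' = f on [0,1], u(0) = u(1) = 0,
# with a weighted-Jacobi smoother, full-weighting restriction, an
# exact coarse solve, and linear-interpolation prolongation.
# All names and parameters are illustrative.

def jacobi(u, f, h, sweeps, w=2.0 / 3.0):
    """Weighted-Jacobi smoother for the standard 3-point stencil."""
    for _ in range(sweeps):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def two_grid(u, f, h):
    u = jacobi(u, f, h, 3)                      # pre-smooth
    r = residual(u, f, h)
    nc = (u.size + 1) // 2                      # coarse-grid size
    rc = np.zeros(nc)                           # full-weighting restriction
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    hc = 2 * h                                  # exact coarse solve
    A = (np.diag(2 * np.ones(nc - 2)) - np.diag(np.ones(nc - 3), 1)
         - np.diag(np.ones(nc - 3), -1)) / (hc * hc)
    ec = np.zeros(nc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    e = np.interp(np.arange(u.size), np.arange(0, u.size, 2), ec)  # prolong
    return jacobi(u + e, f, h, 3)               # post-smooth

n = 65
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)                # exact solution sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = two_grid(u, f, h)
print(np.max(np.abs(u - np.sin(np.pi * x))))    # ~ discretization error
```

The essential point, which carries over to the nodal setting, is that the smoother kills high-frequency error while the coarse-grid correction removes the smooth components the smoother stalls on.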

  4. Warp-X: A new exascale computing platform for beam–plasma simulations

    DOE PAGES

    Vay, J. -L.; Almgren, A.; Bell, J.; ...

    2018-01-31

    Turning the current experimental plasma accelerator state-of-the-art from a promising technology into mainstream scientific tools depends critically on high-performance, high-fidelity modeling of complex processes that develop over a wide range of space and time scales. As part of the U.S. Department of Energy's Exascale Computing Project, a team from Lawrence Berkeley National Laboratory, in collaboration with teams from SLAC National Accelerator Laboratory and Lawrence Livermore National Laboratory, is developing a new plasma accelerator simulation tool that will harness the power of future exascale supercomputers for high-performance modeling of plasma accelerators. We present the various components of the codes, such as the new Particle-In-Cell Scalable Application Resource (PICSAR) and the redesigned adaptive mesh refinement library AMReX, which are combined with redesigned elements of the Warp code in the new WarpX software. Lastly, the code structure, status, early examples of applications and plans are discussed.

  5. GPU accelerated manifold correction method for spinning compact binaries

    NASA Astrophysics Data System (ADS)

    Ran, Chong-xi; Liu, Song; Zhong, Shuang-ying

    2018-04-01

    The graphics processing unit (GPU) acceleration of the manifold correction algorithm, based on the compute unified device architecture (CUDA) technology, is designed to simulate the dynamic evolution of the post-Newtonian (PN) Hamiltonian formulation of spinning compact binaries. The feasibility and efficiency of parallel computation on the GPU have been confirmed by various numerical experiments. The numerical comparisons show that the accuracy of the manifold correction method executed on the GPU agrees well with that of the CPU-only implementation. The speedup can be increased substantially through shared-memory and register optimization techniques without additional hardware costs: the speedup is nearly 13 times compared with the codes executed on the CPU for a phase-space scan (including 314 × 314 orbits). In addition, the GPU-accelerated manifold correction method is used to numerically study how the dynamics are affected by the spin-induced quadrupole-monopole interaction for black hole binary systems.
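The manifold-correction idea itself can be shown in miniature: after each step of a non-symplectic integrator, the momenta are rescaled so the conserved energy is pulled back onto its initial value. The Kepler Hamiltonian below is a stand-in for the PN Hamiltonian of the paper, and the simple velocity-scaling rule and all names are illustrative:

```python
import math

# Velocity-scaling manifold correction sketch for H = |p|^2/2 - 1/|q|
# (Kepler), standing in for the PN Hamiltonian. After each crude
# integration step, p is rescaled so the energy returns exactly to
# its initial value e0. Purely illustrative, not the paper's scheme.

def energy(q, p):
    return 0.5 * (p[0]**2 + p[1]**2) - 1.0 / math.hypot(q[0], q[1])

def euler_step(q, p, h):
    """Semi-implicit Euler step (kick, then drift with the new p)."""
    r3 = math.hypot(q[0], q[1]) ** 3
    p = [p[0] - h * q[0] / r3, p[1] - h * q[1] / r3]
    q = [q[0] + h * p[0], q[1] + h * p[1]]
    return q, p

def correct(q, p, e0):
    """Rescale p so the kinetic energy satisfies T = e0 + 1/|q|."""
    t_target = e0 + 1.0 / math.hypot(q[0], q[1])
    t_now = 0.5 * (p[0]**2 + p[1]**2)
    s = math.sqrt(t_target / t_now)
    return [s * p[0], s * p[1]]

q, p = [1.0, 0.0], [0.0, 1.0]   # circular orbit, energy -0.5
e0 = energy(q, p)
for _ in range(5000):
    q, p = euler_step(q, p, 1e-3)
    p = correct(q, p, e0)
print(abs(energy(q, p) - e0))   # held at round-off level by construction
```

Real manifold-correction schemes exploit several integrals at once (energy, angular momentum, Carter-like constants), but the mechanism, projecting the numerical solution back onto the integral manifold, is the same.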

  6. Warp-X: A new exascale computing platform for beam–plasma simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vay, J. -L.; Almgren, A.; Bell, J.

    Turning the current experimental plasma accelerator state-of-the-art from a promising technology into mainstream scientific tools depends critically on high-performance, high-fidelity modeling of complex processes that develop over a wide range of space and time scales. As part of the U.S. Department of Energy's Exascale Computing Project, a team from Lawrence Berkeley National Laboratory, in collaboration with teams from SLAC National Accelerator Laboratory and Lawrence Livermore National Laboratory, is developing a new plasma accelerator simulation tool that will harness the power of future exascale supercomputers for high-performance modeling of plasma accelerators. We present the various components of the codes, such as the new Particle-In-Cell Scalable Application Resource (PICSAR) and the redesigned adaptive mesh refinement library AMReX, which are combined with redesigned elements of the Warp code in the new WarpX software. Lastly, the code structure, status, early examples of applications and plans are discussed.

  7. Transport calculations and accelerator experiments needed for radiation risk assessment in space.

    PubMed

    Sihver, Lembit

    2008-01-01

    The major uncertainties in space radiation risk estimates for humans are associated with the poor knowledge of the biological effects of low- and high-LET radiation, with a smaller contribution coming from the characterization of the space radiation field and its primary interactions with the shielding and the human body. However, to decrease the uncertainties in the biological effects and increase the accuracy of the risk coefficients for charged-particle radiation, the initial charged-particle spectra from the Galactic Cosmic Rays (GCRs) and the Solar Particle Events (SPEs), and the radiation transport through the shielding material of the space vehicle and the human body, must be better estimated. Since it is practically impossible to measure all primary and secondary particles from all possible position-projectile-target-energy combinations needed for a correct risk assessment in space, accurate particle and heavy-ion transport codes must be used. These codes are also needed when estimating the risk of radiation-induced failures in advanced microelectronics, such as single-event effects, and the efficiency of different shielding materials. It is therefore important that the models and transport codes be carefully benchmarked and validated to make sure they fulfill preset accuracy criteria, e.g. to be able to predict particle fluence, dose and energy distributions within a certain accuracy. When validating the accuracy of the transport codes, both space- and ground-based accelerator experiments are needed. The efficiency of passive shielding and protection of electronic devices should also be tested in accelerator experiments and compared to simulations using different transport codes. In this paper, different multipurpose particle and heavy-ion transport codes are presented, and different concepts of shielding and protection are discussed, as well as future accelerator experiments needed for testing and validating codes and shielding materials.

  8. Extraordinary Tools for Extraordinary Science: The Impact of SciDAC on Accelerator Science & Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryne, Robert D.

    2006-08-10

    Particle accelerators are among the most complex and versatile instruments of scientific exploration. They have enabled remarkable scientific discoveries and important technological advances that span all programs within the DOE Office of Science (DOE/SC). The importance of accelerators to the DOE/SC mission is evident from an examination of the DOE document, ''Facilities for the Future of Science: A Twenty-Year Outlook''. Of the 28 facilities listed, 13 involve accelerators. Thanks to SciDAC, a powerful suite of parallel simulation tools has been developed that represent a paradigm shift in computational accelerator science. Simulations that used to take weeks or more now take hours, and simulations that were once thought impossible are now performed routinely. These codes have been applied to many important projects of DOE/SC including existing facilities (the Tevatron complex, the Relativistic Heavy Ion Collider), facilities under construction (the Large Hadron Collider, the Spallation Neutron Source, the Linac Coherent Light Source), and to future facilities (the International Linear Collider, the Rare Isotope Accelerator). The new codes have also been used to explore innovative approaches to charged particle acceleration. These approaches, based on the extremely intense fields that can be present in lasers and plasmas, may one day provide a path to the outermost reaches of the energy frontier. Furthermore, they could lead to compact, high-gradient accelerators that would have huge consequences for US science and technology, industry, and medicine. In this talk I will describe the new accelerator modeling capabilities developed under SciDAC, the essential role of multi-disciplinary collaboration with applied mathematicians, computer scientists, and other IT experts in developing these capabilities, and provide examples of how the codes have been used to support DOE/SC accelerator projects.

  9. Extraordinary tools for extraordinary science: the impact of SciDAC on accelerator science and technology

    NASA Astrophysics Data System (ADS)

    Ryne, Robert D.

    2006-09-01

    Particle accelerators are among the most complex and versatile instruments of scientific exploration. They have enabled remarkable scientific discoveries and important technological advances that span all programs within the DOE Office of Science (DOE/SC). The importance of accelerators to the DOE/SC mission is evident from an examination of the DOE document, ''Facilities for the Future of Science: A Twenty-Year Outlook.'' Of the 28 facilities listed, 13 involve accelerators. Thanks to SciDAC, a powerful suite of parallel simulation tools has been developed that represent a paradigm shift in computational accelerator science. Simulations that used to take weeks or more now take hours, and simulations that were once thought impossible are now performed routinely. These codes have been applied to many important projects of DOE/SC including existing facilities (the Tevatron complex, the Relativistic Heavy Ion Collider), facilities under construction (the Large Hadron Collider, the Spallation Neutron Source, the Linac Coherent Light Source), and to future facilities (the International Linear Collider, the Rare Isotope Accelerator). The new codes have also been used to explore innovative approaches to charged particle acceleration. These approaches, based on the extremely intense fields that can be present in lasers and plasmas, may one day provide a path to the outermost reaches of the energy frontier. Furthermore, they could lead to compact, high-gradient accelerators that would have huge consequences for US science and technology, industry, and medicine. In this talk I will describe the new accelerator modeling capabilities developed under SciDAC, the essential role of multi-disciplinary collaboration with applied mathematicians, computer scientists, and other IT experts in developing these capabilities, and provide examples of how the codes have been used to support DOE/SC accelerator projects.

  10. openPSTD: The open source pseudospectral time-domain method for acoustic propagation

    NASA Astrophysics Data System (ADS)

    Hornikx, Maarten; Krijnen, Thomas; van Harten, Louis

    2016-06-01

    An open source implementation of the Fourier pseudospectral time-domain (PSTD) method for computing the propagation of sound is presented, geared towards applications in the built environment. Being a wave-based method, PSTD captures phenomena like diffraction, but maintains efficiency in processing time and memory usage because it allows spatial sampling close to the Nyquist criterion, keeping both the required spatial and temporal resolution coarse. In the implementation, the physical geometry is modelled as a composition of rectangular two-dimensional subdomains, initially restricting the implementation to orthogonal and two-dimensional situations. The strategy of using subdomains divides the problem domain into local subsets, which enables the simulation software to be built according to object-oriented programming best practices and leaves room for further computational parallelization. The software is built using the open source components Blender, Numpy and Python, and has itself been published under an open source license. An option has been included to accelerate the calculations through a partial implementation of the code on the graphics processing unit (GPU), which increases the throughput by up to fifteen times. The details of the implementation are reported, as well as the accuracy of the code.
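The core of the Fourier pseudospectral idea, and the reason the grid can be sampled close to the Nyquist limit, is that spatial derivatives are evaluated by multiplying by ik in wavenumber space, which is exact for band-limited fields. A minimal 1D sketch (openPSTD's staggered update and subdomain coupling are more involved than this):

```python
import numpy as np

# Fourier pseudospectral spatial derivative: transform, multiply by
# i*k, transform back. Exact (to round-off) for band-limited fields,
# unlike finite differences, which need many points per wavelength.
# Grid size and test field are illustrative.

n, L = 64, 2 * np.pi
x = np.linspace(0.0, L, n, endpoint=False)
k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi   # angular wavenumbers

u = np.sin(3 * x)
dudx = np.fft.ifft(1j * k * np.fft.fft(u)).real

# Spectral accuracy: matches the analytic derivative 3*cos(3x)
# at round-off level.
print(np.max(np.abs(dudx - 3 * np.cos(3 * x))))
```

A second-order finite-difference stencil on the same 64-point grid would leave an error several orders of magnitude larger, which is exactly the efficiency argument the abstract makes.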

  11. Textbook presentations of weight: Conceptual difficulties and language ambiguities

    NASA Astrophysics Data System (ADS)

    Taibu, Rex; Rudge, David; Schuster, David

    2015-06-01

    The term "weight" has multiple related meanings in both scientific and everyday usage. Even among experts and in textbooks, weight is ambiguously defined as either the gravitational force on an object or operationally as the magnitude of the force an object exerts on a measuring scale. This poses both conceptual and language difficulties for learners, especially for accelerating objects where the scale reading is different from the gravitational force. But while the underlying physical constructs behind the two referents for the term weight (and their relation to each other) are well understood scientifically, it is unclear how the concept of weight should be introduced to students and how the language ambiguities should be dealt with. We investigated treatments of weight in a sample of twenty introductory college physics textbooks, analyzing and coding their content based on the definition adopted, how the distinct constructs were dealt with in various situations, terminologies used, and whether and how language issues were handled. Results indicate that language-related issues, such as different, inconsistent, or ambiguous uses of the terms weight, "apparent weight," and "weightlessness," were prevalent both across and within textbooks. The physics of the related constructs was not always clearly presented, particularly for accelerating bodies such as astronauts in spaceships, and the language issue was rarely addressed. Our analysis of both literature and textbooks leads us to an instructional position which focuses on the physics constructs before introducing the term weight, and which explicitly discusses the associated language issues.

  12. The FLUKA code for space applications: recent developments

    NASA Technical Reports Server (NTRS)

    Andersen, V.; Ballarini, F.; Battistoni, G.; Campanella, M.; Carboni, M.; Cerutti, F.; Empl, A.; Fasso, A.; Ferrari, A.; Gadioli, E.; et al.

    2004-01-01

    The FLUKA Monte Carlo transport code is widely used for fundamental research, radioprotection and dosimetry, hybrid nuclear energy systems and cosmic ray calculations. The validity of its physical models has been benchmarked against a variety of experimental data over a wide range of energies, ranging from accelerator data to cosmic ray showers in the Earth's atmosphere. The code is presently undergoing several developments in order to better fit the needs of space applications. The generation of particle spectra according to up-to-date cosmic ray data as well as the effect of the solar and geomagnetic modulation have been implemented and already successfully applied to a variety of problems. The implementation of suitable models for heavy ion nuclear interactions has reached an operational stage. At medium/high energy FLUKA is using the DPMJET model. The major task of incorporating heavy ion interactions from a few GeV/n down to the threshold for inelastic collisions is also progressing and promising results have been obtained using a modified version of the RQMD-2.4 code. This interim solution is now fully operational, while waiting for the development of new models based on the FLUKA hadron-nucleus interaction code, a newly developed QMD code, and the implementation of the Boltzmann master equation theory for low energy ion interactions. ©2004 COSPAR. Published by Elsevier Ltd. All rights reserved.

  13. Seismic site coefficients and acceleration design response spectra based on conditions in South Carolina : final report.

    DOT National Transportation Integrated Search

    2014-11-15

    The simplified procedure in design codes for determining earthquake response spectra involves : estimating site coefficients to adjust available rock accelerations to site accelerations. Several : investigators have noted concerns with the site coeff...

  14. Physics and biophysics experiments needed for improved risk assessment in space

    NASA Astrophysics Data System (ADS)

    Sihver, L.

    To improve the risk assessment of radiation carcinogenesis, late degenerative tissue effects, acute syndromes, synergistic effects of radiation and microgravity or other spacecraft factors, and hereditary effects, on future LEO and interplanetary space missions, the radiobiological effects of cosmic radiation before and after shielding must be well understood. However, cosmic radiation is very complex and includes low and high LET components of many different neutral and charged particles. The understanding of the radiobiology of the heavy ions, from GCRs and SPEs, is still a subject of great concern due to the complicated dependence of their biological effects on the type of ion and energy, and its interaction with various targets both outside and within the spacecraft and the human body. In order to estimate the biological effects of cosmic radiation, accurate knowledge of the physics of the interactions of both charged and non-charged high-LET particles is necessary. Since it is practically impossible to measure all primary and secondary particles from all projectile-target-energy combinations needed for a correct risk assessment in space, accurate particle and heavy ion transport codes might be a helpful instrument to overcome those difficulties. These codes have to be carefully validated to make sure they fulfill preset accuracy criteria, e.g. to be able to predict particle fluence and energy distributions within a certain accuracy. When validating the accuracy of the transport codes, both space and ground-based accelerator experiments are needed. In this paper current and future physics and biophysics experiments needed for improved risk assessment in space will be discussed. 
The cyclotron HIRFL (heavy ion research facility in Lanzhou) and the new synchrotron CSR (cooling storage ring), which can be used to provide ion beams for space related experiments at the Institute of Modern Physics, Chinese Academy of Sciences (IMP-CAS), will be presented together with the physical and biomedical research performed at IMP-CAS.

  15. Beam Induced Hydrodynamic Tunneling in the Future Circular Collider Components

    NASA Astrophysics Data System (ADS)

    Tahir, N. A.; Burkart, F.; Schmidt, R.; Shutov, A.; Wollmann, D.; Piriz, A. R.

    2016-08-01

    A future circular collider (FCC) has been proposed as a post-Large Hadron Collider accelerator, to explore particle physics in unprecedented energy ranges. The FCC is a circular collider in a tunnel with a circumference of 80-100 km. The FCC study puts an emphasis on proton-proton high-energy and electron-positron high-intensity frontier machines. A proton-electron interaction scenario is also examined. According to the nominal FCC parameters, each of the 50 TeV proton beams will carry 8.5 GJ of energy, equivalent to the kinetic energy of an Airbus A380 (560 t) at a typical speed of 850 km/h. Safety of operation with such extremely energetic beams is an important issue, as off-nominal beam loss can cause serious damage to the accelerator and detector components with a severe impact on the accelerator environment. In order to estimate the consequences of an accident with the full beam accidentally deflected into equipment, we have carried out numerical simulations of the interaction of an FCC beam with a solid copper target, using an energy-deposition code (fluka) and a 2D hydrodynamic code (big2) iteratively. These simulations show that, although the penetration length of a single FCC proton and its shower in solid copper is about 1.5 m, the full FCC beam will penetrate up to about 350 m into the target because of the "hydrodynamic tunneling." They also show that a significant part of the target is converted into high-energy-density matter; this interesting aspect of the study is discussed as well.

  16. Resource Letter AFHEP-1: Accelerators for the Future of High-Energy Physics

    NASA Astrophysics Data System (ADS)

    Barletta, William A.

    2012-02-01

    This Resource Letter provides a guide to literature concerning the development of accelerators for the future of high-energy physics. Research articles, books, and Internet resources are cited for the following topics: motivation for future accelerators, present accelerators for high-energy physics, possible future machines, and laboratory and collaboration websites.

  17. Numerical studies of acceleration of thorium ions by a laser pulse of ultra-relativistic intensity

    NASA Astrophysics Data System (ADS)

    Domanski, Jaroslaw; Badziak, Jan

    2018-01-01

    One of the key scientific projects of ELI-Nuclear Physics is to study the production of extremely neutron-rich nuclides by a new reaction mechanism called fission-fusion, using laser-accelerated thorium (232Th) ions. This research is of crucial importance for understanding the nature of the creation of heavy elements in the Universe; however, it requires Th ion beams of very high fluence and intensity, which are inaccessible in conventional accelerators. This contribution is a first attempt to investigate the possibility of generating intense Th ion beams with a fs laser pulse of ultra-relativistic intensity. The investigation was performed with a fully electromagnetic relativistic particle-in-cell code. A sub-μm thorium target was irradiated by a circularly polarized 20-fs laser pulse of intensity up to 10²³ W/cm², predicted to be attainable at ELI-NP. At a laser intensity of 10²³ W/cm² and an optimum target thickness, the maximum energies of Th ions approach 9.3 GeV, the ion beam intensity is > 10²⁰ W/cm², and the total ion fluence reaches 10¹⁹ ions/cm². The last two values are much higher than attainable in conventional accelerators and are fairly promising for the planned ELI-NP experiment.

  18. COLAcode: COmoving Lagrangian Acceleration code

    NASA Astrophysics Data System (ADS)

    Tassev, Svetlin V.

    2016-02-01

    COLAcode is a serial particle-mesh N-body code illustrating the COLA (COmoving Lagrangian Acceleration) method; it solves for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). It differs from standard N-body codes by trading accuracy at small scales for computational speed, without sacrificing accuracy at large scales. This is useful for generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing; such catalogs are needed to perform detailed error analysis for ongoing and future surveys of LSS.
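The core of the COLA idea, moving particles along analytic LPT trajectories, can be illustrated with the first-order (Zel'dovich) displacement in one dimension. This is a toy NumPy sketch, not COLAcode itself; the grid size and density amplitude are arbitrary illustration values:

```python
import numpy as np

def zeldovich_positions(q, delta, boxsize, growth=1.0):
    """Move particles from Lagrangian grid positions q along first-order
    LPT (Zel'dovich) displacements: x = q + D(a)*psi(q), with the 1D
    displacement field satisfying d(psi)/dx = -delta, i.e. in Fourier
    space psi_k = 1j * delta_k / k for k != 0."""
    n = len(q)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
    delta_k = np.fft.fft(delta)
    psi_k = np.zeros_like(delta_k)
    nz = k != 0.0
    psi_k[nz] = 1j * delta_k[nz] / k[nz]
    psi = np.fft.ifft(psi_k).real
    return (q + growth * psi) % boxsize

# toy box: a single sine-wave density mode of amplitude A
n, L, A = 64, 1.0, 0.1
q = np.arange(n) * L / n
delta = A * np.sin(2.0 * np.pi * q / L)
x = zeldovich_positions(q, delta, L)
disp = (x - q + L / 2) % L - L / 2   # periodic-safe displacement
```

For this single mode the displacement is analytic, psi = (A*L/2pi)*cos(2pi q/L), so the numerical maximum displacement can be checked against A*L/(2pi); particles converge onto the overdensity, as expected.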

  19. GAPD: a GPU-accelerated atom-based polychromatic diffraction simulation code.

    PubMed

    E, J C; Wang, L; Chen, S; Zhang, Y Y; Luo, S N

    2018-03-01

    GAPD, a graphics-processing-unit (GPU)-accelerated atom-based polychromatic diffraction simulation code for direct, kinematics-based simulations of X-ray/electron diffraction of large-scale atomic systems with mono-/polychromatic beams and arbitrary plane detector geometries, is presented. This code implements GPU parallel computation via both real- and reciprocal-space decompositions. With GAPD, direct simulations of the reciprocal lattice nodes of ultralarge systems (∼5 billion atoms) and of diffraction patterns of single-crystal and polycrystalline configurations with mono- and polychromatic X-ray beams (including synchrotron undulator sources) are performed, and validation, benchmark and application cases are presented.
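The kinematic (single-scattering) diffraction sum that GAPD parallelizes is compact enough to sketch directly. Below is an illustrative NumPy version of the basic formula I(q) = |Σ_j f_j exp(iq·r_j)|² with a constant scattering factor; the real code decomposes this sum across GPU threads and handles polychromatic beams and detector geometry:

```python
import numpy as np

def kinematic_intensity(positions, q_vectors, f=1.0):
    """Direct kinematic diffraction: I(q) = |sum_j f_j exp(i q.r_j)|^2.
    positions : (N, 3) atomic coordinates
    q_vectors : (M, 3) scattering vectors
    f         : atomic scattering factor (constant here for simplicity)"""
    phase = q_vectors @ positions.T                # (M, N) array of q.r
    amp = (f * np.exp(1j * phase)).sum(axis=1)     # coherent sum over atoms
    return np.abs(amp) ** 2

# toy crystal: 4x4x4 simple cubic lattice, spacing a = 1
a = 1.0
grid = np.arange(4) * a
atoms = np.array([[x, y, z] for x in grid for y in grid for z in grid],
                 dtype=float)
q_bragg = np.array([[2.0 * np.pi / a, 0.0, 0.0]])  # satisfies Bragg condition
q_off = np.array([[np.pi / a, 0.0, 0.0]])          # destructive interference
I_bragg = kinematic_intensity(atoms, q_bragg)[0]   # coherent peak, ~N^2
I_off = kinematic_intensity(atoms, q_off)[0]
```

At the Bragg vector every atomic phase is a multiple of 2π, so the 64 unit amplitudes add coherently (intensity 64²), while the off-Bragg vector sums to zero.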

  20. Study of negative ion transport phenomena in a plasma source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riz, D.; Pamela, J.

    1996-07-01

    NIETZSCHE (Negative Ions Extraction and Transport ZSimulation Code for HydrogEn species) is a negative ion (NI) transport code developed at Cadarache. This code calculates NI trajectories using a 3D Monte-Carlo technique, taking into account the main destruction processes, as well as elastic collisions (H⁻/H⁺) and charge exchanges (H⁻/H⁰). It determines the extraction probability of a NI created at a given position. According to the simulations, we have seen that in the case of volume production, only NI produced close to the plasma grid (PG) can be extracted. Concerning the surface production, we have studied how NI produced on the PG and accelerated by the plasma sheath backward into the source could be extracted. We demonstrate that elastic collisions and charge exchanges play an important role, which in some conditions dominates the magnetic filter effect, which acts as a magnetic mirror. NI transport in various conditions will be discussed: volume/surface production, high/low plasma density, tent filter/transverse filter. © 1996 American Institute of Physics.
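The extraction-probability estimate described above can be caricatured by a 1D Monte-Carlo random walk with a destruction mean free path: ions born near the plasma grid survive long enough to reach it, ions born deep in the source rarely do. All names and parameter values below are illustrative, not NIETZSCHE's actual physics or geometry:

```python
import math, random

def extraction_probability(start_x, grid_x=1.0, mfp=0.5,
                           step=0.01, n_ions=20000, seed=1):
    """Toy 1D Monte-Carlo estimate of the probability that a negative
    ion born at start_x random-walks to the plasma grid at grid_x
    before being destroyed.  Destruction per step follows the survival
    law exp(-step/mfp); collisions are mimicked by re-sampling the
    walk direction (+/- step) at every step."""
    rng = random.Random(seed)
    p_destroy = 1.0 - math.exp(-step / mfp)
    extracted = 0
    for _ in range(n_ions):
        x, alive = start_x, True
        while 0.0 <= x < grid_x and alive:
            if rng.random() < p_destroy:
                alive = False                      # ion destroyed in flight
            else:
                x += step if rng.random() < 0.5 else -step
        if alive and x >= grid_x:
            extracted += 1                         # reached the plasma grid
    return extracted / n_ions

# ions born close to the grid are far more likely to be extracted
near = extraction_probability(start_x=0.9)
far = extraction_probability(start_x=0.1)
```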

  1. Adapting smart phone applications about physics education to blind students

    NASA Astrophysics Data System (ADS)

    Bülbül, M. Ş.; Yiğit, N.; Garip, B.

    2016-04-01

    Today, most of the necessary equipment of a physics laboratory is available to smartphone users via applications. Physics teachers can measure everything from acceleration to sound volume with a phone's internal sensors; these sensors collect data, and smartphone applications make the raw data visible. Teachers who do not have well-equipped laboratories at their schools thus have an opportunity to conduct experiments with the help of smartphones. In this study, we analyzed available open-source physics education applications from the perspective of blind users in inclusive learning environments. All apps are categorized as fully, partially, or non-supported. The roles of a blind learner's friend during use of an application are categorized as reader, describer, or user. The apps mentioned in the study are also compared on additional criteria such as size and download counts. Beyond running apps, smartphones can provide weather information via the internet and other supplementary data for different experiments in the physics lab. QR-code reading and augmented reality are two further opportunities smartphones offer users in physics labs. We also summarize blind learners' smartphone experiences from the literature and list some suggestions for application designers concerning concepts in physics.

  2. Accelerators, Beams And Physical Review Special Topics - Accelerators And Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siemann, R.H.; /SLAC

    Accelerator science and technology have evolved as accelerators became larger and important to a broad range of science. Physical Review Special Topics - Accelerators and Beams was established to serve the accelerator community as a timely, widely circulated, international journal covering the full breadth of accelerators and beams. The history of the journal and the innovations associated with it are reviewed.

  3. Development of a Space Radiation Monte-Carlo Computer Simulation Based on the FLUKA and ROOT Codes

    NASA Technical Reports Server (NTRS)

    Pinsky, L. S.; Wilson, T. L.; Ferrari, A.; Sala, Paola; Carminati, F.; Brun, R.

    2001-01-01

    The radiation environment in space is a complex problem to model. Trying to extrapolate the projections of that environment into all areas of the internal spacecraft geometry is even more daunting. With the support of our CERN colleagues, our research group in Houston is embarking on a project to develop a radiation transport tool that is tailored to the problem of taking the external radiation flux incident on any particular spacecraft and simulating the evolution of that flux through a geometrically accurate model of the spacecraft material. The output will be a prediction of the detailed nature of the resulting internal radiation environment within the spacecraft as well as its secondary albedo. Beyond doing the physics transport of the incident flux, the software tool we are developing will provide a self-contained stand-alone object-oriented analysis and visualization infrastructure. It will also include a graphical user interface and a set of input tools to facilitate the simulation of space missions in terms of nominal radiation models and mission trajectory profiles. The goal of this project is to produce a code that is considerably more accurate and user-friendly than existing Monte-Carlo-based tools for the evaluation of the space radiation environment. Furthermore, the code will be an essential complement to the currently existing analytic codes in the BRYNTRN/HZETRN family for the evaluation of radiation shielding. The code will be directly applicable to the simulation of environments in low earth orbit, on the lunar surface, on planetary surfaces (including the Earth) and in the interplanetary medium such as on a transit to Mars (and even in the interstellar medium). The software will include modules whose underlying physics base can continue to be enhanced and updated for physics content, as future data become available beyond the timeframe of the initial development now foreseen. 
This future maintenance will be available from the authors of FLUKA as part of their continuing efforts to support the users of the FLUKA code within the particle physics community. In keeping with the spirit of developing an evolving physics code, we are planning, as part of this project, to participate in the efforts to validate the core FLUKA physics in ground-based accelerator test runs. The emphasis of these test runs will be the physics of greatest interest in the simulation of the space radiation environment. Such a tool will be of great value to planners, designers and operators of future space missions, as well as for the design of the vehicles and habitats to be used on such missions. It will also be of aid to future experiments of various kinds that may be affected at some level by the ambient radiation environment, or in the analysis of hybrid experiment designs that have been discussed for space-based astronomy and astrophysics. The tool will be of value to the Life Sciences personnel involved in the prediction and measurement of radiation doses experienced by the crewmembers on such missions. In addition, the tool will be of great use to the planners of experiments to measure and evaluate the space radiation environment itself. It can likewise be useful in the analysis of safe havens, hazard mitigation plans, and NASA's call for new research in composites, and to NASA engineers modeling the radiation exposure of electronic circuits. This code will provide an important complementary check on the predictions of analytic codes such as BRYNTRN/HZETRN that are presently used for many similar applications, and which have shortcomings that are more easily overcome with Monte Carlo type simulations. Finally, it is acknowledged that there are similar efforts based around the use of the GEANT4 Monte-Carlo transport code currently under development at CERN. 
It is our intention to make our software modular and sufficiently flexible to allow the parallel use of either FLUKA or GEANT4 as the physics transport engine.

  4. FPPAC94: A two-dimensional multispecies nonlinear Fokker-Planck package for UNIX systems

    NASA Astrophysics Data System (ADS)

    Mirin, A. A.; McCoy, M. G.; Tomaschke, G. P.; Killeen, J.

    1994-07-01

    FPPAC94 solves the complete nonlinear multispecies Fokker-Planck collision operator for a plasma in two-dimensional velocity space. The operator is expressed in terms of spherical coordinates (speed and pitch angle) under the assumption of azimuthal symmetry. Provision is made for additional physics contributions (e.g. rf heating, electric field acceleration). The charged species, referred to as general species, are assumed to be in the presence of an arbitrary number of fixed Maxwellian species. The electrons may be treated either as one of these Maxwellian species or as a general species. Coulomb interactions among all charged species are considered. This program is a new version of FPPAC, which was last published in Computer Physics Communications in 1988. The new version is identical in scope to the previous one, but is written in standard Fortran 77 and is able to execute on a variety of Unix systems. The code has been tested on the Cray-C90, HP-755 and Sun Sparc-1, and the answers agree on all platforms where the code has been tested. The test problems are the same as those provided in 1988. This version also corrects a bug in the 1988 version.
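As a flavor of what such a package integrates, here is a minimal explicit step of the pitch-angle (Lorentz) scattering operator, a standard building block of Fokker-Planck collision operators, written in conservative flux form so that particle number is preserved. It is a 1D toy with an illustrative grid and time step, not FPPAC94's full 2D multispecies operator:

```python
import numpy as np

def lorentz_scatter_step(f, mu, dt):
    """One explicit step of df/dt = d/dmu[(1 - mu^2) df/dmu] on a
    uniform pitch-angle grid.  Flux form with zero-flux boundaries
    conserves sum(f) to machine precision."""
    dmu = mu[1] - mu[0]
    mu_edge = 0.5 * (mu[:-1] + mu[1:])          # cell-edge positions
    flux = (1.0 - mu_edge ** 2) * np.diff(f) / dmu
    df = np.empty_like(f)
    df[1:-1] = (flux[1:] - flux[:-1]) / dmu
    df[0] = flux[0] / dmu                       # zero flux through mu = -1
    df[-1] = -flux[-1] / dmu                    # zero flux through mu = +1
    return f + dt * df

mu = np.linspace(-1.0, 1.0, 81)
f = np.exp(-((mu - 0.8) / 0.1) ** 2)            # beam-like anisotropy
n0, std0 = f.sum(), f.std()
for _ in range(2000):
    f = lorentz_scatter_step(f, mu, dt=2e-4)    # dt below dmu^2/2 for stability
```

Scattering isotropizes the distribution (its spread across the grid shrinks) while the particle count is conserved exactly by the flux form.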

  5. Neptune: An astrophysical smooth particle hydrodynamics code for massively parallel computer architectures

    NASA Astrophysics Data System (ADS)

    Sandalski, Stou

    Smooth particle hydrodynamics is an efficient method for modeling the dynamics of fluids. It is commonly used to simulate astrophysical processes such as binary mergers. We present a newly developed GPU-accelerated smooth particle hydrodynamics code for astrophysical simulations. The code is named neptune after the Roman god of water. It is written in OpenMP-parallelized C++ and OpenCL and includes octree-based hydrodynamic and gravitational acceleration. The design relies on object-oriented methodologies in order to provide a flexible and modular framework that can be easily extended and modified by the user. Several pre-built scenarios for simulating collisions of polytropes and black-hole accretion are provided. The code is released under the MIT Open Source license and publicly available at http://code.google.com/p/neptune-sph/.
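The heart of any SPH code is the kernel-weighted density sum ρ_i = Σ_j m_j W(r_ij, h). A minimal 1D sketch with the standard cubic-spline kernel follows; the particle spacing and smoothing length are illustrative and unrelated to neptune's internals:

```python
import numpy as np

def cubic_spline_W(r, h):
    """Standard 1D cubic-spline SPH kernel (normalization 2/(3h)),
    with compact support |r| < 2h."""
    q = np.abs(r) / h
    w = np.where(q < 1.0, 1.0 - 1.5 * q ** 2 + 0.75 * q ** 3,
        np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))
    return 2.0 / (3.0 * h) * w

# SPH density estimate rho_i = sum_j m_j W(x_i - x_j, h) on a uniform line
dx, h, rho0 = 0.1, 0.15, 1.0
x = np.arange(0.0, 10.0, dx)          # evenly spaced particles
m = rho0 * dx                         # particle mass for target density rho0
rho = np.array([(m * cubic_spline_W(x - xi, h)).sum() for xi in x])
```

Away from the ends of the line, the summed density recovers the target value to within a couple of percent, which is the usual SPH discretization accuracy for this kernel and spacing.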

  6. The Influence of Accelerator Science on Physics Research

    NASA Astrophysics Data System (ADS)

    Haussecker, Enzo F.; Chao, Alexander W.

    2011-06-01

    We evaluate accelerator science in the context of its contributions to the physics community. We address the problem of quantifying these contributions and present a scheme for a numerical evaluation of them. Using a statistical sample of important developments in modern physics, we show that accelerator science has influenced 28% of post-1938 physicists and also 28% of post-1938 physics research. We also examine how the influence of accelerator science has evolved over time, and show that on average it has contributed to Nobel Prize-winning physics research every 2.9 years.

  7. A test harness for accelerating physics parameterization advancements into operations

    NASA Astrophysics Data System (ADS)

    Firl, G. J.; Bernardet, L.; Harrold, M.; Henderson, J.; Wolff, J.; Zhang, M.

    2017-12-01

    The process of transitioning advances in parameterization of sub-grid scale processes from initial idea to implementation is often much quicker than the transition from implementation to use in an operational setting. After all, considerable work must be undertaken by operational centers to fully test, evaluate, and implement new physics. The process is complicated by the scarcity of like-to-like comparisons, availability of HPC resources, and the "tuning problem" whereby advances in physics schemes are difficult to properly evaluate without first undertaking the expensive and time-consuming process of tuning to other schemes within a suite. To address this process shortcoming, the Global Model TestBed (GMTB), supported by the NWS NGGPS project and undertaken by the Developmental Testbed Center, has developed a physics test harness. It implements the concept of hierarchical testing, where the same code can be tested in model configurations of varying complexity from single column models (SCM) to fully coupled, cycled global simulations. Developers and users may choose at which level of complexity to engage. Several components of the physics test harness have been implemented, including an SCM and an end-to-end workflow that expands upon the one used at NOAA/EMC to run the GFS operationally, although the testbed components will necessarily morph to coincide with changes to the operational configuration (FV3-GFS). A standard, relatively user-friendly interface known as the Interoperable Physics Driver (IPD) is available for physics developers to connect their codes. This prerequisite exercise allows access to the testbed tools and removes a technical hurdle for potential inclusion into the Common Community Physics Package (CCPP). The testbed offers users the opportunity to conduct like-to-like comparisons between the operational physics suite and new development as well as among multiple developments. 
GMTB staff have demonstrated use of the testbed through a comparison between the 2017 operational GFS suite and one containing the Grell-Freitas convective parameterization. An overview of the physics test harness and its early use will be presented.

  8. Searching for Physics Beyond the Standard Model: Strongly-Coupled Field Theories at the Intensity and Energy Frontiers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brower, Richard C.

    This proposal is to develop the software and algorithmic infrastructure needed for the numerical study of quantum chromodynamics (QCD), and of theories that have been proposed to describe physics beyond the Standard Model (BSM) of high energy physics, on current and future computers. This infrastructure will enable users (1) to improve the accuracy of QCD calculations to the point where they no longer limit what can be learned from high-precision experiments that seek to test the Standard Model, and (2) to determine the predictions of BSM theories in order to understand which of them are consistent with the data that will soon be available from the LHC. Work will include the extension and optimizations of community codes for the next generation of leadership class computers, the IBM Blue Gene/Q and the Cray XE/XK, and for the dedicated hardware funded for our field by the Department of Energy. Members of our collaboration at Brookhaven National Laboratory and Columbia University worked on the design of the Blue Gene/Q, and have begun to develop software for it. Under this grant we will build upon their experience to produce high-efficiency production codes for this machine. Cray XE/XK computers with many thousands of GPU accelerators will soon be available, and the dedicated commodity clusters we obtain with DOE funding include growing numbers of GPUs. We will work with our partners in NVIDIA's Emerging Technology group to scale our existing software to thousands of GPUs, and to produce highly efficient production codes for these machines. Work under this grant will also include the development of new algorithms for the effective use of heterogeneous computers, and their integration into our codes. It will include improvements of Krylov solvers and the development of new multigrid methods in collaboration with members of the FASTMath SciDAC Institute, using their HYPRE framework, as well as work on improved symplectic integrators.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shiltsev, Vladimir

    The 2014 P5 report identified accelerator-based neutrino and rare-decay physics research as a centerpiece of the US domestic HEP program. Operation, upgrade and development of the accelerators for the near-term and longer-term particle physics program at the Intensity Frontier face formidable challenges. Here we discuss key elements of the accelerator physics and technology R&D program toward future multi-MW proton accelerators.

  10. Corkscrew Motion of an Electron Beam due to Coherent Variations in Accelerating Potentials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ekdahl, Carl August

    2016-09-13

    Corkscrew motion results from the interaction of fluctuations of beam electron energy with accidental magnetic dipoles caused by misalignment of the beam transport solenoids. Corkscrew is a serious concern for high-current linear induction accelerators (LIAs). A simple scaling law for corkscrew amplitude, derived from a theory based on a constant-energy beam coasting through a uniform magnetic field, has often been used to assess LIA vulnerability to this effect. We use a beam dynamics code to verify that this scaling also holds for an accelerated beam in a non-uniform magnetic field, as in a real accelerator. Results of simulations with this code are strikingly similar to measurements on one of the LIAs at Los Alamos National Laboratory.

  11. Acceleration of Monte Carlo simulation of photon migration in complex heterogeneous media using Intel many-integrated core architecture.

    PubMed

    Gorshkov, Anton V; Kirillin, Mikhail Yu

    2015-08-01

    Over two decades, the Monte Carlo technique has become a gold standard in simulation of light propagation in turbid media, including biotissues. Technological solutions provide further advances of this technique. The Intel Xeon Phi coprocessor is a new type of accelerator for highly parallel general purpose computing, which allows execution of a wide range of applications without substantial code modification. We present a technical approach of porting our previously developed Monte Carlo (MC) code for simulation of light transport in tissues to the Intel Xeon Phi coprocessor. We show that employing the accelerator allows reducing computational time of MC simulation and obtaining simulation speed-up comparable to GPU. We demonstrate the performance of the developed code for simulation of light transport in the human head and determination of the measurement volume in near-infrared spectroscopy brain sensing.
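The Monte Carlo technique the authors port to the coprocessor can be sketched in miniature: sample exponential free paths, scatter isotropically, and carry absorption as a photon weight. The slab geometry, optical coefficients, and cutoffs below are illustrative only, not the authors' tissue model:

```python
import math, random

def slab_transmittance(mu_a, mu_s, thickness, n_photons=20000, seed=7):
    """Minimal Monte-Carlo photon migration through a 1D slab:
    free paths ~ exp(-mu_t * s) with mu_t = mu_a + mu_s, isotropic
    scattering, absorption handled by multiplying the photon weight
    by the single-scattering albedo at each event.  Returns the
    fraction of launched weight exiting the far side."""
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t
    out = 0.0
    for _ in range(n_photons):
        z, cz, w = 0.0, 1.0, 1.0
        while w > 1e-3:                 # terminate low-weight photons
            z += cz * (-math.log(1.0 - rng.random()) / mu_t)
            if z >= thickness:
                out += w                # transmitted through far face
                break
            if z < 0.0:
                break                   # escaped back through entry face
            w *= albedo                 # partial absorption at this event
            cz = 2.0 * rng.random() - 1.0   # isotropic new direction
    return out / n_photons

# absorption-only limit reproduces Beer-Lambert, T ~ exp(-mu_a * L)
t_abs = slab_transmittance(mu_a=1.0, mu_s=0.0, thickness=1.0)
```

With scattering switched on, transmittance falls with slab thickness, which makes a convenient sanity check for any such port.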

  12. Study of shock-induced combustion using an implicit TVD scheme

    NASA Technical Reports Server (NTRS)

    Yungster, Shayne

    1992-01-01

    The supersonic combustion flowfields associated with various hypersonic propulsion systems, such as the ram accelerator, the oblique detonation wave engine, and the scramjet, are being investigated using a new computational fluid dynamics (CFD) code. The code solves the fully coupled Reynolds-averaged Navier-Stokes equations and species continuity equations in an efficient manner. It employs an iterative method and a second order differencing scheme to improve computational efficiency. The code is currently being applied to study shock wave/boundary layer interactions in premixed combustible gases, and to investigate the ram accelerator concept. Results obtained for a ram accelerator configuration indicate a new combustion mechanism in which a shock wave induces combustion in the boundary layer, which then propagates outward and downstream. The combustion process creates a high pressure region over the back of the projectile resulting in a net positive thrust forward.
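The TVD (total variation diminishing) property mentioned above can be demonstrated on the simplest possible case, linear advection with a minmod slope limiter. This generic explicit sketch is not the paper's implicit coupled Navier-Stokes scheme; it only illustrates why limited second-order schemes avoid spurious oscillations near discontinuities:

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: picks the smaller-magnitude slope when both agree
    in sign, zero at extrema, which keeps the scheme TVD."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def tvd_advect(u, c):
    """One step of slope-limited 2nd-order upwind advection with
    periodic boundaries, for Courant number 0 < c <= 1."""
    slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
    uface = u + 0.5 * (1.0 - c) * slope        # reconstructed right-face value
    return u - c * (uface - np.roll(uface, 1)) # conservative flux update

n = 100
u = np.where((np.arange(n) > 20) & (np.arange(n) < 40), 1.0, 0.0)  # square wave
mass0 = u.sum()
for _ in range(50):
    u = tvd_advect(u, c=0.5)
```

The advected square wave stays within its initial bounds (no over/undershoot), mass is conserved by the flux form, and the total variation never grows above its initial value of 2.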

  13. Induced radioactivity studies of the shielding and beamline equipment of the high intensity proton accelerator facility at PSI

    NASA Astrophysics Data System (ADS)

    Otiougova, Polina; Bergmann, Ryan; Kiselev, Daniela; Talanov, Vadim; Wohlmuther, Michael

    2017-09-01

    The Paul Scherrer Institute (PSI) is the largest national research center in Switzerland. Its multidisciplinary research is dedicated to a wide field of natural science and technology as well as particle physics. The High Intensity Proton Accelerator Facility (HIPA) has been in operation at PSI since 1974. It includes an 870 keV Cockcroft-Walton pre-accelerator, a 72 MeV injector cyclotron as well as a 590 MeV ring cyclotron. The experimental facilities, the meson production graphite targets, Target E and Target M, and the spallation target stations (SINQ and UCN), are used for material research and particle physics. To fulfill the request of the regulatory authorities, the expected radioactive waste and nuclide inventory after an anticipated final shutdown in the far future has to be estimated and reported to the regulators. In this contribution, calculations for the 20 m long beamline between Target E and the 590 MeV beam dump of HIPA are presented. The first step in the calculations was determining spectra and spatial particle distributions around the beamlines using the Monte-Carlo particle transport code MCNPX2.7.0 [1]. To perform the analysis of the MCNPX output and to determine the radionuclide inventory as well as the specific activity of the nuclides, an activation script [2] using the FISPACT10 code with the cross sections from the European Activation File (EAF2010) [3] was applied. The specific activity values were compared to the currently existing Swiss exemption limits (LE) [4] as well as to the Swiss liberation limits (LL) [5], becoming effective in the near future. The obtained results were used to estimate the total volume of the radioactive waste produced at HIPA, which has to be reported to the Swiss regulatory authorities. The comparison of the performed calculations to measurements is discussed as well. Note to the reader: the pdf file has been changed on September 22, 2017.

  14. Fifty years of accelerator based physics at Chalk River

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKay, John W.

    1999-04-26

    The Chalk River Laboratories of Atomic Energy of Canada Ltd. was a major centre for accelerator-based physics for the last fifty years. As early as 1946, nuclear structure studies were started on Cockcroft-Walton accelerators. A series of accelerators followed, including the world's first tandem, the MP Tandem, and the Tandem Accelerator Superconducting Cyclotron (TASCC) facility that was opened in 1986. The nuclear physics program was shut down in 1996. This paper will describe some of the highlights of the accelerators and the research of the laboratory.

  15. PHITS Overview

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niita, K.; Matsuda, N.; Iwamoto, Y.

    The paper presents a brief description of the models incorporated in PHITS and the present status of the code, showing some benchmarking tests of the PHITS code for accelerator facilities and space radiation.

  16. Beam-dynamics codes used at DARHT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ekdahl, Jr., Carl August

    Several beam simulation codes are used to help gain a better understanding of beam dynamics in the DARHT LIAs. The most notable of these fall into the following categories: for beam production, the TriComp Trak orbit-tracking code and the LSP particle-in-cell (PIC) code; for beam transport and acceleration, the XTR static envelope and centroid code, the LAMDA time-resolved envelope and centroid code, and the LSP-Slice PIC code; and for coasting-beam transport to target, the LAMDA time-resolved envelope code and the LSP-Slice PIC code. These codes are also being used to inform the design of Scorpius.
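What an envelope code of the kind listed above integrates can be shown in cartoon form: the rms envelope equation with linear focusing, space-charge, and emittance terms. The symbols and parameter values here are illustrative sketches, not DARHT or Scorpius settings:

```python
import math

def envelope_rhs(R, k2, K, eps2):
    """RMS envelope equation R'' = -k2*R + K/R + eps2/R**3:
    linear focusing (k2), space charge (generalized perveance K),
    and emittance pressure (eps2 = emittance squared)."""
    return -k2 * R + K / R + eps2 / R ** 3

def track(R0, Rp0, k2, K, eps2, ds=1e-3, n=5000):
    """Leapfrog (kick-drift-kick) integration of the envelope along s."""
    R, Rp = R0, Rp0
    for _ in range(n):
        Rp += 0.5 * ds * envelope_rhs(R, k2, K, eps2)
        R += ds * Rp
        Rp += 0.5 * ds * envelope_rhs(R, k2, K, eps2)
    return R, Rp

# the matched radius solves k2*R = K/R + eps2/R^3 (quadratic in R^2)
k2, K, eps2 = 1.0, 0.5, 0.1
R0 = math.sqrt((K + math.sqrt(K * K + 4.0 * k2 * eps2)) / (2.0 * k2))
Rf, Rpf = track(R0, 0.0, k2, K, eps2)        # matched beam stays constant
Rm, _ = track(1.1 * R0, 0.0, k2, K, eps2)    # mismatch gives bounded oscillation
```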

  17. Seismic Safety Of Simple Masonry Buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guadagnuolo, Mariateresa; Faella, Giuseppe

    2008-07-08

    Several masonry buildings comply with the rules for simple buildings provided by seismic codes. For these buildings explicit safety verifications are not compulsory if specific code rules are fulfilled. In fact it is assumed that their fulfilment ensures a suitable seismic behaviour of buildings and thus adequate safety under earthquakes. Italian and European seismic codes differ in the requirements for simple masonry buildings, mostly concerning the building typology, the building geometry and the acceleration at site. Obviously, a wide percentage of the buildings assumed simple by codes should satisfy the numerical safety verification, so that the code rules do not create confusion or uncertainty for the designers who must use them. This paper aims at evaluating the seismic response of some simple unreinforced masonry buildings that comply with the provisions of the new Italian seismic code. Two-story buildings having different geometry are analysed, and results from nonlinear static analyses performed by varying the acceleration at site are presented and discussed. Indications on the congruence between code rules and results of numerical analyses performed according to the code itself are supplied; in this context, the obtained results can provide a contribution for improving the seismic code requirements.

  18. GPU Optimizations for a Production Molecular Docking Code*

    PubMed Central

    Landaverde, Raphael; Herbordt, Martin C.

    2015-01-01

    Modeling molecular docking is critical to both understanding life processes and designing new drugs. In previous work we created the first published GPU-accelerated docking code (PIPER) which achieved a roughly 5× speed-up over a contemporaneous 4 core CPU. Advances in GPU architecture and in the CPU code, however, have since reduced this relative performance by a factor of 10. In this paper we describe the upgrade of GPU PIPER. This required an entire rewrite, including algorithm changes and moving most remaining non-accelerated CPU code onto the GPU. The result is a 7× improvement in GPU performance and a 3.3× speedup over the CPU-only code. We find that this difference in time is almost entirely due to the difference in run times of the 3D FFT library functions on CPU (MKL) and GPU (cuFFT), respectively. The GPU code has been integrated into the ClusPro docking server which has over 4000 active users. PMID:26594667
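The 3D FFT dominates the run time because FFT docking scores all relative translations of the two molecules at once via the correlation theorem. A minimal NumPy sketch of that core trick follows; PIPER's actual scoring combines several weighted grids and runs on cuFFT/MKL, so this is only the structural idea:

```python
import numpy as np

def fft_translational_scores(receptor, ligand):
    """Score every circular translation of two 3D occupancy grids at
    once using the correlation theorem:
        corr(t) = sum_x R(x) * L(x + t) = IFFT( conj(FFT(R)) * FFT(L) ).
    One forward FFT per grid plus one inverse FFT replaces an O(N^2)
    direct sum over all translations."""
    F_r = np.fft.fftn(receptor)
    F_l = np.fft.fftn(ligand)
    return np.fft.ifftn(np.conj(F_r) * F_l).real

# toy grids: one "atom" each; the score peaks at the translation
# that superimposes the ligand atom on the receptor atom
R = np.zeros((8, 8, 8)); R[2, 2, 2] = 1.0
L = np.zeros((8, 8, 8)); L[0, 0, 0] = 1.0
scores = fft_translational_scores(R, L)
```

With this convention the best translation is circular, so for the toy grids above the peak sits at t = (-2, -2, -2) mod 8 = (6, 6, 6).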

  19. GPU Optimizations for a Production Molecular Docking Code.

    PubMed

    Landaverde, Raphael; Herbordt, Martin C

    2014-09-01

    Modeling molecular docking is critical to both understanding life processes and designing new drugs. In previous work we created the first published GPU-accelerated docking code (PIPER) which achieved a roughly 5× speed-up over a contemporaneous 4 core CPU. Advances in GPU architecture and in the CPU code, however, have since reduced this relative performance by a factor of 10. In this paper we describe the upgrade of GPU PIPER. This required an entire rewrite, including algorithm changes and moving most remaining non-accelerated CPU code onto the GPU. The result is a 7× improvement in GPU performance and a 3.3× speedup over the CPU-only code. We find that this difference in time is almost entirely due to the difference in run times of the 3D FFT library functions on CPU (MKL) and GPU (cuFFT), respectively. The GPU code has been integrated into the ClusPro docking server which has over 4000 active users.

  20. Bayesian analysis of caustic-crossing microlensing events

    NASA Astrophysics Data System (ADS)

    Cassan, A.; Horne, K.; Kains, N.; Tsapras, Y.; Browne, P.

    2010-06-01

    Aims: Caustic-crossing binary-lens microlensing events are important anomalous events because they are capable of detecting an extrasolar planet companion orbiting the lens star. Fast and robust modelling methods are thus of prime interest in helping to decide whether a planet is detected by an event. Cassan introduced a new set of parameters to model binary-lens events, which are closely related to properties of the light curve. In this work, we explain how Bayesian priors can be added to this framework, and investigate interesting options. Methods: We develop a mathematical formulation that allows us to compute analytically the priors on the new parameters, given some previous knowledge about other physical quantities. We explicitly compute the priors for a number of interesting cases, and show how this can be implemented in a fully Bayesian, Markov chain Monte Carlo algorithm. Results: Using Bayesian priors can accelerate microlens fitting codes by reducing the time spent considering physically implausible models, and helps us to discriminate between alternative models based on the physical plausibility of their parameters.
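
    The role a prior plays in such a fitting code can be illustrated with a toy one-dimensional Metropolis sampler (a hypothetical example, not the binary-lens parameterization of the paper): the log-prior is simply added to the log-likelihood, so the chain spends less time in parameter regions the prior deems physically implausible.

```python
import math
import random

def metropolis(logpost, x0, step, n, rng):
    """Random-walk Metropolis sampler over a 1-D parameter."""
    x, lp = x0, logpost(x0)
    chain = []
    for _ in range(n):
        xp = x + rng.gauss(0.0, step)                    # propose
        lpp = logpost(xp)
        if math.log(rng.random() + 1e-300) < lpp - lp:   # accept/reject
            x, lp = xp, lpp
        chain.append(x)
    return chain

# Toy posterior: unit-variance Gaussian likelihood centred at 1.0
# times a unit-variance Gaussian prior centred at 0.0; the prior
# pulls the posterior mean to 0.5.
loglike = lambda x: -0.5 * (x - 1.0) ** 2
logprior = lambda x: -0.5 * x ** 2
chain = metropolis(lambda x: loglike(x) + logprior(x),
                   x0=0.0, step=1.0, n=20000, rng=random.Random(0))
mean = sum(chain[5000:]) / len(chain[5000:])   # discard burn-in
```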

  1. Current-Voltage Characteristic of Nanosecond - Duration Relativistic Electron Beam

    NASA Astrophysics Data System (ADS)

    Andreev, Andrey

    2005-10-01

    The pulsed electron-beam accelerator SINUS-6 was used to measure the current-voltage characteristic of a nanosecond-duration thin annular relativistic electron beam accelerated in vacuum along the axis of a smooth uniform metal tube immersed in a strong axial magnetic field. Results of these measurements, as well as results of computer simulations performed using the 3D MAGIC code, show that the dependence of the electron-beam current on the accelerating voltage at the front of the nanosecond-duration pulse differs from the analogous dependence at the flat part of the pulse. In the steady-state (flat) part of the pulse, the measured electron-beam current is close to the Fedosov current [1], which is governed by the conservation law of electron momentum flow for any constant voltage. In the non-steady-state part (front) of the pulse, the electron-beam current is higher than the corresponding steady-state (Fedosov) current for a given voltage. [1] A. I. Fedosov, E. A. Litvinov, S. Ya. Belomytsev, and S. P. Bugaev, ``Characteristics of electron beam formed in diodes with magnetic insulation,'' Soviet Physics Journal (A translation of Izvestiya VUZ. Fizika), vol. 20, no. 10, October 1977 (April 20, 1978), pp. 1367-1368.

  2. New features in the design code Tlie

    NASA Astrophysics Data System (ADS)

    van Zeijts, Johannes

    1993-12-01

    We present features recently installed in the arbitrary-order accelerator design code Tlie. The code uses the MAD input language and implements programmable extensions modeled after the C language that make it a powerful tool in a wide range of applications: from basic beamline design to high-precision, high-order design and even control-room applications. The basic quantities important in accelerator design are easily accessible from inside the control language. Entities such as element parameters (strength, current), transfer maps (in either Taylor series or Lie algebraic form), lines, and beams (either as sets of particles or as distributions) are among the types of variables available. These variables can be set, used as arguments in subroutines, or simply typed out. The code is easily extensible with new datatypes.

  3. Empirical evidence for site coefficients in building code provisions

    USGS Publications Warehouse

    Borcherdt, R.D.

    2002-01-01

    Site-response coefficients, Fa and Fv, used in U.S. building code provisions are based on empirical data for motions up to 0.1 g. For larger motions they are based on theoretical and laboratory results. The Northridge earthquake of 17 January 1994 provided a significant new set of empirical data up to 0.5 g. These data together with recent site characterizations based on shear-wave velocity measurements provide empirical estimates of the site coefficients at base accelerations up to 0.5 g for Site Classes C and D. These empirical estimates of Fa and Fv, as well as their decrease with increasing base acceleration level, are consistent at the 95 percent confidence level with those in present building code provisions, with the exception of estimates for Fa at levels of 0.1 and 0.2 g, which are less than the lower confidence bound by amounts up to 13 percent. The site-coefficient estimates are consistent at the 95 percent confidence level with those of several other investigators for base accelerations greater than 0.3 g. These consistencies and present code procedures indicate that changes in the site coefficients are not warranted. Empirical results for base accelerations greater than 0.2 g confirm the need for both a short- and a mid- or long-period site coefficient to characterize site response for purposes of estimating site-specific design spectra.

  4. Advanced Computing Tools and Models for Accelerator Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryne, Robert; Ryne, Robert D.

    2008-06-11

    This paper is based on a transcript of my EPAC'08 presentation on advanced computing tools for accelerator physics. Following an introduction I present several examples, provide a history of the development of beam dynamics capabilities, and conclude with thoughts on the future of large scale computing in accelerator physics.

  5. Fast Acceleration of 2D Wave Propagation Simulations Using Modern Computational Accelerators

    PubMed Central

    Wang, Wei; Xu, Lifan; Cavazos, John; Huang, Howie H.; Kay, Matthew

    2014-01-01

    Recent developments in modern computational accelerators like Graphics Processing Units (GPUs) and coprocessors provide great opportunities for making scientific applications run faster than ever before. However, efficient parallelization of scientific code using new programming tools like CUDA requires a high level of expertise that is not available to many scientists. This, plus the fact that parallelized code is usually not portable to different architectures, creates major challenges for exploiting the full capabilities of modern computational accelerators. In this work, we sought to overcome these challenges by studying how to achieve both automated parallelization using OpenACC and enhanced portability using OpenCL. We applied our parallelization schemes using GPUs as well as the Intel Many Integrated Core (MIC) coprocessor to reduce the run time of wave propagation simulations. We used a well-established 2D cardiac action potential model as a specific case study. To the best of our knowledge, we are the first to study auto-parallelization of 2D cardiac wave propagation simulations using OpenACC. Our results identify several approaches that provide substantial speedups. The OpenACC-generated GPU code achieved a substantial speedup over the sequential implementation and required the addition of only a few OpenACC pragmas to the code. An OpenCL implementation provided speedups on GPUs over both the sequential implementation and a parallelized OpenMP implementation. An OpenMP implementation on the Intel MIC coprocessor provided speedups with only a few code changes to the sequential implementation. We highlight that OpenACC provides an automatic, efficient, and portable approach to achieve parallelization of 2D cardiac wave simulations on GPUs. Our approach of using OpenACC, OpenCL, and OpenMP to parallelize this particular model on modern computational accelerators should be applicable to other computational models of wave propagation in multi-dimensional media. PMID:24497950
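
    The kernels such studies parallelize are regular stencil updates. As an illustrative sketch (a plain 2D wave-equation leapfrog step in NumPy, not the cardiac action potential model itself), the loop nest that OpenACC pragmas would offload looks like:

```python
import numpy as np

def wave_step(u, u_prev, c2dt2):
    """One leapfrog update of the 2-D wave equation on a periodic
    grid; c2dt2 = (c*dt/dx)**2 must satisfy the CFL stability limit.
    The five-point Laplacian is the kind of regular, data-parallel
    loop nest that OpenACC, OpenCL, or OpenMP can offload."""
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
    return 2.0 * u - u_prev + c2dt2 * lap

# A point disturbance spreading over a 64x64 periodic grid.
u = np.zeros((64, 64))
u[32, 32] = 1.0
u_prev = u.copy()
for _ in range(10):
    u, u_prev = wave_step(u, u_prev, 0.1), u
```

    In an OpenACC version the same update would sit inside a `parallel loop` region; the point of the paper is that such pragmas are the only change needed.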

  6. Synergia: an accelerator modeling tool with 3-D space charge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amundson, James F.; Spentzouris, P.; /Fermilab

    2004-07-01

    High precision modeling of space-charge effects, together with accurate treatment of single-particle dynamics, is essential for designing future accelerators as well as optimizing the performance of existing machines. We describe Synergia, a high-fidelity parallel beam dynamics simulation package with fully three dimensional space-charge capabilities and a higher order optics implementation. We describe the computational techniques, the advanced human interface, and the parallel performance obtained using large numbers of macroparticles. We also perform code benchmarks comparing to semi-analytic results and other codes. Finally, we present initial results on particle tune spread, beam halo creation, and emittance growth in the Fermilab Booster accelerator.

  7. Symplectic multi-particle tracking on GPUs

    NASA Astrophysics Data System (ADS)

    Liu, Zhicong; Qiang, Ji

    2018-05-01

    A symplectic multi-particle tracking model is implemented on Graphics Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) language. The symplectic tracking model preserves phase-space structure and reduces non-physical effects in long-term simulation, which is important for evaluating beam properties in particle accelerators. Though this model is computationally expensive, it is very suitable for parallelization and can be accelerated significantly by using GPUs. In this paper, we optimized the implementation of the symplectic tracking model on both a single GPU and multiple GPUs. Using a single GPU processor, the code achieves a factor of 2-10 speedup for a range of problem sizes compared with the time on a single state-of-the-art Central Processing Unit (CPU) node with similar power consumption and semiconductor technology. It also shows good scalability on a multi-GPU cluster at the Oak Ridge Leadership Computing Facility. In an application to beam dynamics simulation, the GPU implementation saves more than a factor of two in total computing time in comparison to the CPU implementation.
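
    The defining property of a symplectic tracking map can be shown with a minimal drift-kick integrator for a linear focusing channel (an illustrative sketch, not the code of the paper): each substep is a shear in phase space, so the composed map has unit Jacobian, and the energy error stays bounded over very long tracking runs instead of drifting secularly.

```python
def drift_kick_drift(x, p, k, dt):
    """Second-order symplectic (leapfrog) map for x'' = -k*x.
    Each substep is a shear in (x, p), so the composed map has unit
    Jacobian and preserves phase-space area exactly."""
    x += 0.5 * dt * p      # half drift
    p -= dt * k * x        # kick
    x += 0.5 * dt * p      # half drift
    return x, p

# Long-term tracking: the oscillator energy stays bounded, the
# property that matters for long accelerator simulations.
x, p, k, dt = 1.0, 0.0, 1.0, 0.1
h0 = 0.5 * (p * p + k * x * x)
for _ in range(100_000):
    x, p = drift_kick_drift(x, p, k, dt)
h = 0.5 * (p * p + k * x * x)  # stays close to h0 after 10^5 steps
```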

  8. Calibration of imaging plate detectors to mono-energetic protons in the range 1-200 MeV

    NASA Astrophysics Data System (ADS)

    Rabhi, N.; Batani, D.; Boutoux, G.; Ducret, J.-E.; Jakubowska, K.; Lantuejoul-Thfoin, I.; Nauraye, C.; Patriarca, A.; Saïd, A.; Semsoum, A.; Serani, L.; Thomas, B.; Vauzour, B.

    2017-11-01

    Responses of Fuji Imaging Plates (IPs) to protons have been measured in the range 1-200 MeV. Mono-energetic protons were produced with the 15 MV ALTO-Tandem accelerator of the Institute of Nuclear Physics (Orsay, France) and, at higher energies, with the 200-MeV isochronous cyclotron of the Institut Curie—Centre de Protonthérapie d'Orsay (Orsay, France). The experimental setups are described and the measured photo-stimulated luminescence responses for MS, SR, and TR IPs are presented and compared to existing data. For the interpretation of the results, a sensitivity model based on the Monte Carlo GEANT4 code has been developed. It enables the calculation of the response functions in a large energy range, from 0.1 to 200 MeV. Finally, we show that our model accurately reproduces the response of more complex detectors, i.e., stacks of high-Z filters and IPs, which could be of great interest for diagnostics of Petawatt laser accelerated particles.

  9. Effects of energy chirp on bunch length measurement in linear accelerator beams

    NASA Astrophysics Data System (ADS)

    Sabato, L.; Arpaia, P.; Giribono, A.; Liccardo, A.; Mostacci, A.; Palumbo, L.; Vaccarezza, C.; Variola, A.

    2017-08-01

    The effects of assumptions about bunch properties on the accuracy of the measurement method of the bunch length based on radio frequency deflectors (RFDs) in electron linear accelerators (LINACs) are investigated. In particular, when the electron bunch at the RFD has a non-negligible energy chirp (i.e. a correlation between the longitudinal positions and energies of the particles), the measurement is affected by a deterministic intrinsic error, which is directly related to the RFD phase offset. A case study on this effect in the electron LINAC of a gamma beam source at the Extreme Light Infrastructure-Nuclear Physics (ELI-NP) is reported. The relative error is estimated by using the electron generation and tracking (ELEGANT) code to define the reference measurements of the bunch length. The relative error is shown to increase linearly with the RFD phase offset. In particular, for an offset of 7°, corresponding to a vertical centroid offset at the screen of about 1 mm, the relative error is 4.5%.

  10. Elastic and inelastic scattering of neutrons from 56Fe

    NASA Astrophysics Data System (ADS)

    Ramirez, Anthony Paul; McEllistrem, M. T.; Liu, S. H.; Mukhopadhyay, S.; Peters, E. E.; Yates, S. W.; Vanhoy, J. R.; Harrison, T. D.; Rice, B. G.; Thompson, B. K.; Hicks, S. F.; Howard, T. J.; Jackson, D. T.; Lenzen, P. D.; Nguyen, T. D.; Pecha, R. L.

    2015-10-01

    The differential cross sections for elastic and inelastic scattered neutrons from 56Fe have been measured at the University of Kentucky Accelerator Laboratory (www.pa.uky.edu/accelerator) for incident neutron energies between 2.0 and 8.0 MeV and for the angular range 30° to 150°. Time-of-flight techniques and pulse-shape discrimination were employed for enhancing the neutron energy spectra and for reducing background. An overview of the experimental procedures and data analysis for the conversion of neutron yields to differential cross sections will be presented. These include the determination of the energy-dependent detection efficiencies, the normalization of the measured differential cross sections, and the attenuation and multiple scattering corrections. Our results will also be compared to evaluated cross section databases and reaction model calculations using the TALYS code. This work is supported by grants from the U.S. Department of Energy-Nuclear Energy Universities Program: NU-12-KY-UK-0201-05, and the Donald A. Cowan Physics Institute at the University of Dallas.

  11. The interaction of radio frequency electromagnetic fields with atmospheric water droplets and applications to aircraft ice prevention. Thesis

    NASA Technical Reports Server (NTRS)

    Hansman, R. J., Jr.

    1982-01-01

    The feasibility of computerized simulation of the physics of advanced microwave anti-icing systems, which preheat impinging supercooled water droplets prior to impact, was investigated. Theoretical and experimental work performed to create a physically realistic simulation is described. The behavior of the absorption cross section for melting ice particles was measured by a resonant cavity technique and found to agree with theoretical predictions. Values of the dielectric parameters of supercooled water were measured by a similar technique at lambda = 2.82 cm down to -17 C. The hydrodynamic behavior of accelerated water droplets was studied photographically in a wind tunnel. Droplets were found to initially deform as oblate spheroids and to eventually become unstable and break up in Bessel function modes for large values of acceleration or droplet size. This confirms the theory as to the maximum stable droplet size in the atmosphere. A computer code which predicts droplet trajectories in an arbitrary flow field was written and confirmed experimentally. The results were consolidated into a simulation to study the heating by electromagnetic fields of droplets impinging onto an object such as an airfoil. It was determined that there is sufficient time to heat droplets prior to impact for typical parameter values. Design curves for such a system are presented.

  12. Analytical investigation of the dynamics of tethered constellations in earth orbit

    NASA Technical Reports Server (NTRS)

    Lorenzini, Enrico C.; Gullahorn, Gordon E.; Estes, Robert D.

    1988-01-01

    This Quarterly Report on Tethering in Earth Orbit deals with three topics: (1) Investigation of the propagation of longitudinal and transverse waves along the upper tether. Specifically, the upper tether is modeled as three massive platforms connected by two perfectly elastic continua (tether segments). The tether attachment point to the station is assumed to vibrate both longitudinally and transversely at a given frequency. Longitudinal and transverse waves propagate along the tethers affecting the acceleration levels at the elevator and at the upper platform. The displacement and acceleration frequency-response functions at the elevator and at the upper platform are computed for both longitudinal and transverse waves. An analysis to optimize the damping time of the longitudinal dampers is also carried out in order to select optimal parameters. The analytical evaluation of the performance of tuned vs. detuned longitudinal dampers is also part of this analysis. (2) The use of the Shuttle primary Reaction Control System (RCS) thrusters for blowing away a recoiling broken tether is discussed. A microcomputer system was set up to support this operation. (3) Most of the effort in the tether plasma physics study was devoted to software development. A particle simulation code has been integrated into the Macintosh II computer system and will be utilized for studying the physics of hollow cathodes.

  13. Heating and Acceleration of Charged Particles by Weakly Compressible Magnetohydrodynamic Turbulence

    NASA Astrophysics Data System (ADS)

    Lynn, Jacob William

    We investigate the interaction between low-frequency magnetohydrodynamic (MHD) turbulence and a distribution of charged particles. Understanding this physics is central to understanding the heating of the solar wind, as well as the heating and acceleration of other collisionless plasmas. Our central method is to simulate weakly compressible MHD turbulence using the Athena code, along with a distribution of test particles which feel the electromagnetic fields of the turbulence. We also construct analytic models of transit-time damping (TTD), which results from the mirror force caused by compressible (fast or slow) MHD waves. Standard linear-theory models in the literature require an exact resonance between particle and wave velocities to accelerate particles. The models developed in this thesis go beyond standard linear theory to account for the fact that wave-particle interactions decorrelate over a short time, which allows particles with velocities off resonance to undergo acceleration and velocity diffusion. We use the test particle simulation results to calibrate and distinguish between different models for this velocity diffusion. Test particle heating is larger than the linear theory prediction, due to continued acceleration of particles with velocities off-resonance. We also include an artificial pitch-angle scattering to the test particle motion, representing the effect of high-frequency waves or velocity-space instabilities. For low scattering rates, we find that the scattering enforces isotropy and enhances heating by a modest factor. For much higher scattering rates, the acceleration is instead due to a non-resonant effect, as particles "frozen" into the fluid adiabatically gain and lose energy as eddies expand and contract. Lastly, we generalize our calculations to allow for relativistic test particles. 
Linear theory predicts that relativistic particles with velocities much higher than the speed of waves comprising the turbulence would undergo no acceleration; resonance-broadening modifies this conclusion and allows for a continued Fermi-like acceleration process. This may affect the observed spectra of black hole accretion disks by accelerating relativistic particles into a quasi-powerlaw tail.

  14. A boundary-Fitted Coordinate Code for General Two-Dimensional Regions with Obstacles and Boundary Intrusions.

    DTIC Science & Technology

    1983-03-01

    values of these functions on the two sides of the slits. The acceleration parameters for the iteration at each point are in the field array WACC(I,J) ... the code will calculate a locally optimum value at each point in the field, these values being placed in the field array WACC. This calculation is ... changes in x and y, are calculated by calling subroutine ERROR. The acceleration parameter is placed in the field array WACC. The addition to the ...

  15. Green's function methods in heavy ion shielding

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Costen, Robert C.; Shinn, Judy L.; Badavi, Francis F.

    1993-01-01

    An analytic solution to the heavy ion transport in terms of Green's function is used to generate a highly efficient computer code for space applications. The efficiency of the computer code is accomplished by a nonperturbative technique extending Green's function over the solution domain. The computer code can also be applied to accelerator boundary conditions to allow code validation in laboratory experiments.
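
    The Green's-function idea can be illustrated in its simplest transport setting, a hypothetical 1D attenuation problem rather than the heavy-ion formalism of the paper: the solution is built by superposing the analytic kernel over the source distribution across the whole solution domain, with no iterative perturbation series.

```python
import math

def transport_green(source, sigma, dx):
    """Solve du/dx + sigma*u = s(x) with u(0) = 0 by superposing the
    Green's function G(x, x') = exp(-sigma*(x - x')) over the domain:
    u(x_i) ~= sum_{j<=i} G(x_i, x_j) * s(x_j) * dx."""
    n = len(source)
    u = [0.0] * n
    for i in range(n):
        acc = 0.0
        for j in range(i + 1):
            acc += math.exp(-sigma * (i - j) * dx) * source[j] * dx
        u[i] = acc
    return u

# Constant unit source: exact solution is (1 - exp(-sigma*x)) / sigma,
# so the discrete superposition can be checked against it directly.
sigma, dx, n = 2.0, 0.001, 1000
u = transport_green([1.0] * n, sigma, dx)
x_end = (n - 1) * dx
exact = (1.0 - math.exp(-sigma * x_end)) / sigma
```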

  16. VizieR Online Data Catalog: Radiative forces for stellar envelopes (Seaton, 1997)

    NASA Astrophysics Data System (ADS)

    Seaton, M. J.; Yan, Y.; Mihalas, D.; Pradhan, A. K.

    2000-02-01

    (1) Primary data files, stages.zz These files give data for the calculation of radiative accelerations, GRAD, for elements with nuclear charge zz. Data are available for zz=06, 07, 08, 10, 11, 12, 13, 14, 16, 18, 20, 24, 25, 26 and 28. Calculations are made using data from the Opacity Project (see papers SYMP and IXZ). The data are given for each ionisation stage, j. They are tabulated on a mesh of (T, Ne, CHI) where T is temperature, Ne electron density and CHI is abundance multiplier. The files include data for ionisation fractions, for each (T, Ne). The file contents are described in the paper ACC and as comments in the code add.f. (2) Code add.f This reads a file stages.zz and creates a file acc.zz giving radiative accelerations averaged over ionisation stages. The code prompts for names of input and output files. The code, as provided, gives equal weights (as defined in the paper ACC) to all stages. The weights are set in SUBROUTINE WEIGHTS, which could be changed to give any weights preferred by the user. The dependence of diffusion coefficients on ionisation stage is given by a function ZET, which is defined in SUBROUTINE ZETA. The expressions used for ZET are as given in the paper. The user can change that subroutine if other expressions are preferred. The output file contains values, ZETBAR, of ZET, averaged over ionisation stages. (3) Files acc.zz Radiative accelerations computed using add.f as provided. The user will need to run the code add.f only if it is required to change the subroutines WEIGHTS or ZETA. The contents of the files acc.zz are described in the paper ACC and in comments contained in the code add.f. (4) Code accfit.f This code gives radiative accelerations, and some related data, for a stellar model. Methods used to interpolate data to the values of (T, RHO) for the stellar model are based on those used in the code opfit.for (see the paper OPF). The executable file accfit.com runs accfit.f.
It uses a list of files given in accfit.files (see that file for further description). The mesh used for the abundance-multiplier CHI on the output file will generally be finer than that used in the input files acc.zz. The mesh to be used is specified on a file chi.dat. For a test run, the stellar model used is given in the file 10000_4.2 (Teff=10000 K, LOG10(g)=4.2) The output file from that test run is acc100004.2. The contents of the output file are described in the paper ACC and as comments in the code accfit.f. (5) The code diff.f This code reads the output file (e.g. acc1000004.2) created by accfit.f. For any specified depth point in the model and value of CHI, it gives values of radiative accelerations, the quantity ZETBAR required for calculation of diffusion coefficients, and Rosseland-mean opacities. The code prompts for input data. It creates a file recording all data calculated. The code diff.f is intended for incorporation, as a set of subroutines, in codes for diffusion calculations. (1 data file).

  17. Specification of the near-Earth space environment with SHIELDS

    DOE PAGES

    Jordanova, Vania Koleva; Delzanno, Gian Luca; Henderson, Michael Gerard; ...

    2017-11-26

    Here, predicting variations in the near-Earth space environment that can lead to spacecraft damage and failure is one example of “space weather” and a big space physics challenge. A project recently funded through the Los Alamos National Laboratory (LANL) Directed Research and Development (LDRD) program aims at developing a new capability to understand, model, and predict Space Hazards Induced near Earth by Large Dynamic Storms, the SHIELDS framework. The project goals are to understand the dynamics of the surface charging environment (SCE), the hot (keV) electrons representing the source and seed populations for the radiation belts, on both macro- and micro-scale. Important physics questions related to particle injection and acceleration associated with magnetospheric storms and substorms, as well as plasma waves, are investigated. These challenging problems are addressed using a team of world-class experts in the fields of space science and computational plasma physics, and state-of-the-art models and computational facilities. A full two-way coupling of physics-based models across multiple scales, including a global MHD (BATS-R-US) embedding a particle-in-cell (iPIC3D) and an inner magnetosphere (RAM-SCB) codes, is achieved. New data assimilation techniques employing in situ satellite data are developed; these provide an order of magnitude improvement in the accuracy in the simulation of the SCE. SHIELDS also includes a post-processing tool designed to calculate the surface charging for specific spacecraft geometry using the Curvilinear Particle-In-Cell (CPIC) code that can be used for reanalysis of satellite failures or for satellite design.

  18. Specification of the near-Earth space environment with SHIELDS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jordanova, Vania Koleva; Delzanno, Gian Luca; Henderson, Michael Gerard

    Here, predicting variations in the near-Earth space environment that can lead to spacecraft damage and failure is one example of “space weather” and a big space physics challenge. A project recently funded through the Los Alamos National Laboratory (LANL) Directed Research and Development (LDRD) program aims at developing a new capability to understand, model, and predict Space Hazards Induced near Earth by Large Dynamic Storms, the SHIELDS framework. The project goals are to understand the dynamics of the surface charging environment (SCE), the hot (keV) electrons representing the source and seed populations for the radiation belts, on both macro- and micro-scale. Important physics questions related to particle injection and acceleration associated with magnetospheric storms and substorms, as well as plasma waves, are investigated. These challenging problems are addressed using a team of world-class experts in the fields of space science and computational plasma physics, and state-of-the-art models and computational facilities. A full two-way coupling of physics-based models across multiple scales, including a global MHD (BATS-R-US) embedding a particle-in-cell (iPIC3D) and an inner magnetosphere (RAM-SCB) codes, is achieved. New data assimilation techniques employing in situ satellite data are developed; these provide an order of magnitude improvement in the accuracy in the simulation of the SCE. SHIELDS also includes a post-processing tool designed to calculate the surface charging for specific spacecraft geometry using the Curvilinear Particle-In-Cell (CPIC) code that can be used for reanalysis of satellite failures or for satellite design.

  19. Physical activities to enhance an understanding of acceleration

    NASA Astrophysics Data System (ADS)

    Lee, S. A.

    2006-03-01

    On the basis of their everyday experiences, students have developed an understanding of many of the concepts of mechanics by the time they take their first physics course. However, an accurate understanding of acceleration remains elusive. Many students have difficulties distinguishing between velocity and acceleration. In this report, a set of physical activities to highlight the differences between acceleration and velocity are described. These activities involve running and walking on sand (such as an outdoor volleyball court).

  20. Acceleration of Semiempirical QM/MM Methods through Message Passage Interface (MPI), Hybrid MPI/Open Multiprocessing, and Self-Consistent Field Accelerator Implementations.

    PubMed

    Ojeda-May, Pedro; Nam, Kwangho

    2017-08-08

    The strategy and implementation of scalable and efficient semiempirical (SE) QM/MM methods in CHARMM are described. The serial version of the code was first profiled to identify routines that required parallelization. Afterward, the code was parallelized and accelerated with three approaches. The first approach was the parallelization of the entire QM/MM routines, including the Fock matrix diagonalization routines, using the CHARMM message passing interface (MPI) machinery. In the second approach, two different self-consistent field (SCF) energy convergence accelerators were implemented using density and Fock matrices as targets for their extrapolations in the SCF procedure. In the third approach, the entire QM/MM and MM energy routines were accelerated by implementing the hybrid MPI/open multiprocessing (OpenMP) model in which both the task- and loop-level parallelization strategies were adopted to balance loads between different OpenMP threads. The present implementation was tested on two solvated enzyme systems (including <100 QM atoms) and an SN2 symmetric reaction in water. The MPI version exceeded existing SE QM methods in CHARMM, which include the SCC-DFTB and SQUANTUM methods, by at least 4-fold. The use of SCF convergence accelerators further accelerated the code by ∼12-35% depending on the size of the QM region and the number of CPU cores used. Although the MPI version displayed good scalability, the performance was diminished for large numbers of MPI processes due to the overhead associated with MPI communications between nodes. This issue was partially overcome by the hybrid MPI/OpenMP approach which displayed a better scalability for a larger number of CPU cores (up to 64 CPUs in the tested systems).
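
    The simplest form of an SCF convergence accelerator is linear mixing of successive iterates. The sketch below (a toy scalar fixed-point problem, not the CHARMM density/Fock-matrix extrapolation) shows how damped mixing cuts the iteration count when the bare iteration oscillates:

```python
def scf_iterate(update, x0, mix, tol=1e-10, max_iter=500):
    """Iterate x = update(x) with linear mixing:
    x_{n+1} = (1 - mix)*x_n + mix*update(x_n).
    mix < 1 damps the oscillations of the bare iteration, the
    simplest form of an SCF convergence accelerator."""
    x = x0
    for n in range(max_iter):
        x_new = (1.0 - mix) * x + mix * update(x)
        if abs(x_new - x) < tol:
            return x_new, n + 1
        x = x_new
    raise RuntimeError("did not converge")

# Toy self-consistency condition x = 1 - 0.9*x: the bare iteration
# oscillates around the solution with slowly shrinking amplitude,
# while 50% mixing contracts rapidly to the same fixed point.
f = lambda x: 1.0 - 0.9 * x
x_plain, n_plain = scf_iterate(f, 0.0, mix=1.0)
x_mixed, n_mixed = scf_iterate(f, 0.0, mix=0.5)
```

    Production accelerators such as DIIS extrapolate over several previous matrices rather than just damping, but the goal is the same: fewer Fock builds and diagonalizations per converged SCF.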

  1. AMBER: a PIC slice code for DARHT

    NASA Astrophysics Data System (ADS)

    Vay, Jean-Luc; Fawley, William

    1999-11-01

    The accelerator for the second axis of the Dual Axis Radiographic Hydrodynamic Test (DARHT) facility will produce a 4-kA, 20-MeV, 2-μs output electron beam with a design goal of less than 1000 π mm-mrad normalized transverse emittance and less than 0.5-mm beam centroid motion. In order to study the beam dynamics throughout the accelerator, we have developed a slice Particle-In-Cell code named AMBER, in which the beam is modeled as a time-steady flow, subject to self, as well as external, electrostatic and magnetostatic fields. The code follows the evolution of a slice of the beam as it propagates through the DARHT accelerator lattice, modeled as an assembly of pipes, solenoids and gaps. In particular, we have paid careful attention to non-paraxial phenomena that can contribute to nonlinear forces and possible emittance growth. We will present the model and the numerical techniques implemented, as well as some test cases and some preliminary results obtained when studying emittance growth during the beam propagation.

  2. Spectral-element Seismic Wave Propagation on CUDA/OpenCL Hardware Accelerators

    NASA Astrophysics Data System (ADS)

    Peter, D. B.; Videau, B.; Pouget, K.; Komatitsch, D.

    2015-12-01

    Seismic wave propagation codes are essential tools to investigate a variety of wave phenomena in the Earth. Furthermore, they can now be used for seismic full-waveform inversions in regional- and global-scale adjoint tomography. Although these seismic wave propagation solvers are crucial ingredients to improve the resolution of tomographic images to answer important questions about the nature of Earth's internal processes and subsurface structure, their practical application is often limited due to high computational costs. They thus need high-performance computing (HPC) facilities to improve the current state of knowledge. At present, numerous large HPC systems embed many-core architectures such as graphics processing units (GPUs) to enhance numerical performance. Such hardware accelerators can be programmed using either the CUDA programming environment or the OpenCL language standard. CUDA software development targets NVIDIA graphics cards while OpenCL was adopted by additional hardware accelerators, such as AMD graphics cards, ARM-based processors, and Intel Xeon Phi coprocessors. For seismic wave propagation simulations using the open-source spectral-element code package SPECFEM3D_GLOBE, we incorporated an automatic source-to-source code generation tool (BOAST) which allows us to use meta-programming of all computational kernels for forward and adjoint runs. Using our BOAST kernels, we generate optimized source code for both CUDA and OpenCL languages within the source code package. Thus, seismic wave simulations are now able to fully utilize CUDA and OpenCL hardware accelerators. We show benchmarks of forward seismic wave propagation simulations using SPECFEM3D_GLOBE on CUDA/OpenCL GPUs, validating results and comparing performances for different simulations and hardware usages.
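    The core idea behind a source-to-source generator like BOAST is to describe each kernel once and emit backend-specific source text for each target language. BOAST itself is a Ruby-based tool; the fragment below is a minimal Python illustration of the same single-source idea, emitting one toy vector-update kernel (a hypothetical `axpy`, not a SPECFEM3D_GLOBE kernel) for either CUDA or OpenCL.

    ```python
    def generate_axpy_kernel(backend):
        """Emit the same vector-update kernel body for either GPU programming model."""
        body = "int i = {idx}; if (i < n) y[i] += a * x[i];"
        if backend == "cuda":
            sig = "__global__ void axpy(int n, float a, const float *x, float *y)"
            idx = "blockIdx.x * blockDim.x + threadIdx.x"
        elif backend == "opencl":
            sig = ("__kernel void axpy(int n, float a, "
                   "__global const float *x, __global float *y)")
            idx = "get_global_id(0)"
        else:
            raise ValueError("unknown backend: " + backend)
        return sig + " { " + body.format(idx=idx) + " }"
    ```

    Only the qualifiers and the thread-index expression differ between the two backends; the numerical body is written once, which is what makes maintaining dozens of forward and adjoint kernels for two languages tractable.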

  3. Modeling multi-GeV class laser-plasma accelerators with INF&RNO

    NASA Astrophysics Data System (ADS)

    Benedetti, Carlo; Schroeder, Carl; Bulanov, Stepan; Geddes, Cameron; Esarey, Eric; Leemans, Wim

    2016-10-01

    Laser plasma accelerators (LPAs) can produce accelerating gradients on the order of tens to hundreds of GV/m, making them attractive as compact particle accelerators for radiation production or as drivers for future high-energy colliders. Understanding and optimizing the performance of LPAs requires detailed numerical modeling of the nonlinear laser-plasma interaction. We present simulation results, obtained with the computationally efficient, PIC/fluid code INF&RNO (INtegrated Fluid & paRticle simulatioN cOde), concerning present (multi-GeV stages) and future (10 GeV stages) LPA experiments performed with the BELLA PW laser system at LBNL. In particular, we will illustrate the issues related to the guiding of a high-intensity, short-pulse, laser when a realistic description for both the laser driver and the background plasma is adopted. Work Supported by the U.S. Department of Energy under contract No. DE-AC02-05CH11231.

  4. Linear microbunching analysis for recirculation machines

    DOE PAGES

    Tsai, C. -Y.; Douglas, D.; Li, R.; ...

    2016-11-28

    Microbunching instability (MBI) has been one of the most challenging issues in designs of magnetic chicanes for short-wavelength free-electron lasers or linear colliders, as well as those of transport lines for recirculating or energy-recovery-linac machines. To quantify MBI for a recirculating machine and for more systematic analyses, we have recently developed a linear Vlasov solver and incorporated relevant collective effects into the code, including the longitudinal space charge, coherent synchrotron radiation, and linac geometric impedances, with extension of the existing formulation to include beam acceleration. In our code, we semianalytically solve the linearized Vlasov equation for microbunching amplification factor for an arbitrary linear lattice. In this study we apply our code to beam line lattices of two comparative isochronous recirculation arcs and one arc lattice preceded by a linac section. The resultant microbunching gain functions and spectral responses are presented, with some results compared to particle tracking simulation by elegant (M. Borland, APS Light Source Note No. LS-287, 2002). These results demonstrate clearly the impact of arc lattice design on the microbunching development. Lastly, the underlying physics with inclusion of those collective effects is elucidated and the limitation of the existing formulation is also discussed.

  6. Modeling Solar Wind Flow with the Multi-Scale Fluid-Kinetic Simulation Suite

    DOE PAGES

    Pogorelov, N.V.; Borovikov, S. N.; Bedford, M. C.; ...

    2013-04-01

    Multi-Scale Fluid-Kinetic Simulation Suite (MS-FLUKSS) is a package of numerical codes capable of performing adaptive mesh refinement simulations of complex plasma flows in the presence of discontinuities and charge exchange between ions and neutral atoms. The flow of the ionized component is described with the ideal MHD equations, while the transport of atoms is governed either by the Boltzmann equation or multiple Euler gas dynamics equations. We have enhanced the code with additional physical treatments for the transport of turbulence and acceleration of pickup ions in the interplanetary space and at the termination shock. In this article, we present the results of our numerical simulation of the solar wind (SW) interaction with the local interstellar medium (LISM) in different time-dependent and stationary formulations. Numerical results are compared with the Ulysses, Voyager, and OMNI observations. Finally, the SW boundary conditions are derived from in-situ spacecraft measurements and remote observations.

  7. Simulating a transmon implementation of the surface code, Part I

    NASA Astrophysics Data System (ADS)

    Tarasinski, Brian; O'Brien, Thomas; Rol, Adriaan; Bultink, Niels; Dicarlo, Leo

    Current experimental efforts aim to realize Surface-17, a distance-3 surface-code logical qubit, using transmon qubits in a circuit QED architecture. Following experimental proposals for this device, and currently achieved fidelities on physical qubits, we define a detailed error model that takes experimentally relevant error sources into account, such as amplitude and phase damping, imperfect gate pulses, and coherent errors due to low-frequency flux noise. Using the GPU-accelerated software package 'quantumsim', we simulate the density matrix evolution of the logical qubit under this error model. Combining the simulation results with a minimum-weight matching decoder, we obtain predictions for the error rate of the resulting logical qubit when used as a quantum memory, and estimate the contribution of different error sources to the logical error budget. Research funded by the Foundation for Fundamental Research on Matter (FOM), the Netherlands Organization for Scientific Research (NWO/OCW), IARPA, an ERC Synergy Grant, the China Scholarship Council, and Intel Corporation.
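    The decoding step described above maps measured syndromes to the most likely physical error. A minimum-weight matching decoder on Surface-17 is far richer than can be shown here; the toy below illustrates the same syndrome-to-correction idea on a distance-3 repetition code, where two parity checks between neighbouring bits single out any one-bit error. The code and names are illustrative, not taken from quantumsim.

    ```python
    def measure_syndrome(bits):
        """Parity checks between neighbouring data bits of a 3-bit repetition code."""
        return (bits[0] ^ bits[1], bits[1] ^ bits[2])

    # Lookup decoder: each syndrome pattern points at the single most likely flip.
    CORRECTION = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

    def decode(bits):
        """Apply the correction indicated by the measured syndrome, in place."""
        flip = CORRECTION[measure_syndrome(bits)]
        if flip is not None:
            bits[flip] ^= 1
        return bits

    corrupted = [0, 1, 0]          # single bit-flip error on the middle bit
    recovered = decode(corrupted)  # any weight-1 error is corrected
    ```

    For distance 3 any single error is corrected, while two errors defeat the decoder; estimating how often that happens under a realistic error model is exactly what the density-matrix simulations plus matching decoder quantify for the logical qubit.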

  8. High Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

    DOE PAGES

    Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

    2014-07-28

    The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space, and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing are essential for accurately modeling them. In the past decade, the US Department of Energy's SciDAC program has produced accelerator-modeling tools that have been employed to tackle some of the most difficult accelerator science problems. The authors discuss the Synergia framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. The authors present Synergia's design principles and its performance on HPC platforms.

  9. Probabilistic Seismic Hazard Assessment for Iraq

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Onur, Tuna; Gok, Rengin; Abdulnaby, Wathiq

    Probabilistic Seismic Hazard Assessments (PSHA) form the basis for most contemporary seismic provisions in building codes around the world. The current building code of Iraq was published in 1997. An update to this edition is in the process of being released. However, there are no national PSHA studies in Iraq for the new building code to refer to for seismic loading in terms of spectral accelerations. As an interim solution, the new draft building code considered referring to PSHA results produced in the late 1990s as part of the Global Seismic Hazard Assessment Program (GSHAP; Giardini et al., 1999). However, these results are: a) more than 15 years outdated, b) PGA-based only, necessitating rough conversion factors to calculate spectral accelerations at 0.3s and 1.0s for seismic design, and c) at a probability level of 10% chance of exceedance in 50 years, not the 2% that the building code requires. Hence there is a pressing need for a new, updated PSHA for Iraq.
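    The mismatch between the 10% and 2% in 50 years probability levels is a mismatch of return periods. Under the standard Poisson assumption used in PSHA, P = 1 − exp(−λT), so each exceedance probability maps to an annual rate λ and return period 1/λ, as this short sketch shows:

    ```python
    import math

    def annual_rate(p_exceed, years):
        """Poisson assumption: P = 1 - exp(-lambda * T)  =>  lambda = -ln(1 - P) / T."""
        return -math.log(1.0 - p_exceed) / years

    rp_gshap = 1.0 / annual_rate(0.10, 50)   # 10% in 50 yr  -> ~475-year return period
    rp_code = 1.0 / annual_rate(0.02, 50)    # 2% in 50 yr   -> ~2475-year return period
    ```

    The building-code level thus corresponds to ground motions roughly five times rarer than the GSHAP results provide, which is why the old maps cannot simply be rescaled for the new code.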

  10. Shock Spectrum Calculation from Acceleration Time Histories

    DTIC Science & Technology

    1980-09-01


  11. The MARS15-based FermiCORD code system for calculation of the accelerator-induced residual dose

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grebe, A.; Leveling, A.; Lu, T.

    The FermiCORD code system, a set of codes based on MARS15 that calculates the accelerator-induced residual doses at experimental facilities of arbitrary configurations, has been developed. FermiCORD is written in C++ as an add-on to Fortran-based MARS15. The FermiCORD algorithm consists of two stages: 1) simulation of residual doses on contact with the surfaces surrounding the studied location and of radionuclide inventories in the structures surrounding those locations using MARS15, and 2) simulation of the emission of the nuclear decay gamma-quanta by the residuals in the activated structures and scoring the prompt doses of these gamma-quanta at arbitrary distances from those structures. The FermiCORD code system has been benchmarked against similar algorithms based on other code systems and showed good agreement. The code system has been applied for calculation of the residual dose of the target station for the Mu2e experiment and the results have been compared to approximate dosimetric approaches.

  12. The MARS15-based FermiCORD code system for calculation of the accelerator-induced residual dose

    NASA Astrophysics Data System (ADS)

    Grebe, A.; Leveling, A.; Lu, T.; Mokhov, N.; Pronskikh, V.

    2018-01-01

    The FermiCORD code system, a set of codes based on MARS15 that calculates the accelerator-induced residual doses at experimental facilities of arbitrary configurations, has been developed. FermiCORD is written in C++ as an add-on to Fortran-based MARS15. The FermiCORD algorithm consists of two stages: 1) simulation of residual doses on contact with the surfaces surrounding the studied location and of radionuclide inventories in the structures surrounding those locations using MARS15, and 2) simulation of the emission of the nuclear decay γ-quanta by the residuals in the activated structures and scoring the prompt doses of these γ-quanta at arbitrary distances from those structures. The FermiCORD code system has been benchmarked against similar algorithms based on other code systems and against experimental data from the CERF facility at CERN, and FermiCORD showed reasonable agreement with these. The code system has been applied for calculation of the residual dose of the target station for the Mu2e experiment and the results have been compared to approximate dosimetric approaches.

  13. Heavy ion linear accelerator for radiation damage studies of materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kutsaev, Sergey V.; Mustapha, Brahim; Ostroumov, Peter N.

    A new eXtreme MATerial (XMAT) research facility is being proposed at Argonne National Laboratory to enable rapid in situ mesoscale bulk analysis of ion radiation damage in advanced materials and nuclear fuels. This facility combines a new heavy-ion accelerator with the existing high-energy X-ray analysis capability of the Argonne Advanced Photon Source. The heavy-ion accelerator and target complex will enable experimenters to emulate the environment of a nuclear reactor making possible the study of fission fragment damage in materials. Material scientists will be able to use the measured material parameters to validate computer simulation codes and extrapolate the response of the material in a nuclear reactor environment. Utilizing a new heavy-ion accelerator will provide the appropriate energies and intensities to study these effects with beam intensities which allow experiments to run over hours or days instead of years. The XMAT facility will use a CW heavy-ion accelerator capable of providing beams of any stable isotope with adjustable energy up to 1.2 MeV/u for 238U50+ and 1.7 MeV for protons. This energy is crucial to the design since it well mimics fission fragments that provide the major portion of the damage in nuclear fuels. The energy also allows damage to be created far from the surface of the material allowing bulk radiation damage effects to be investigated. The XMAT ion linac includes an electron cyclotron resonance ion source, a normal-conducting radio-frequency quadrupole and four normal-conducting multi-gap quarter-wave resonators operating at 60.625 MHz. This paper presents the 3D multi-physics design and analysis of the accelerating structures and beam dynamics studies of the linac.

  14. Heavy ion linear accelerator for radiation damage studies of materials

    NASA Astrophysics Data System (ADS)

    Kutsaev, Sergey V.; Mustapha, Brahim; Ostroumov, Peter N.; Nolen, Jerry; Barcikowski, Albert; Pellin, Michael; Yacout, Abdellatif

    2017-03-01

    A new eXtreme MATerial (XMAT) research facility is being proposed at Argonne National Laboratory to enable rapid in situ mesoscale bulk analysis of ion radiation damage in advanced materials and nuclear fuels. This facility combines a new heavy-ion accelerator with the existing high-energy X-ray analysis capability of the Argonne Advanced Photon Source. The heavy-ion accelerator and target complex will enable experimenters to emulate the environment of a nuclear reactor making possible the study of fission fragment damage in materials. Material scientists will be able to use the measured material parameters to validate computer simulation codes and extrapolate the response of the material in a nuclear reactor environment. Utilizing a new heavy-ion accelerator will provide the appropriate energies and intensities to study these effects with beam intensities which allow experiments to run over hours or days instead of years. The XMAT facility will use a CW heavy-ion accelerator capable of providing beams of any stable isotope with adjustable energy up to 1.2 MeV/u for 238U50+ and 1.7 MeV for protons. This energy is crucial to the design since it well mimics fission fragments that provide the major portion of the damage in nuclear fuels. The energy also allows damage to be created far from the surface of the material allowing bulk radiation damage effects to be investigated. The XMAT ion linac includes an electron cyclotron resonance ion source, a normal-conducting radio-frequency quadrupole and four normal-conducting multi-gap quarter-wave resonators operating at 60.625 MHz. This paper presents the 3D multi-physics design and analysis of the accelerating structures and beam dynamics studies of the linac.

  15. Scientific Discovery through Advanced Computing in Plasma Science

    NASA Astrophysics Data System (ADS)

    Tang, William

    2005-03-01

    Advanced computing is generally recognized to be an increasingly vital tool for accelerating progress in scientific research during the 21st Century. For example, the Department of Energy's "Scientific Discovery through Advanced Computing" (SciDAC) Program was motivated in large measure by the fact that formidable scientific challenges in its research portfolio could best be addressed by utilizing the combination of the rapid advances in supercomputing technology together with the emergence of effective new algorithms and computational methodologies. The imperative is to translate such progress into corresponding increases in the performance of the scientific codes used to model complex physical systems such as those encountered in high temperature plasma research. If properly validated against experimental measurements and analytic benchmarks, these codes can provide reliable predictive capability for the behavior of a broad range of complex natural and engineered systems. This talk reviews recent progress and future directions for advanced simulations with some illustrative examples taken from the plasma science applications area. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by the combination of access to powerful new computational resources together with innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning a huge range in time and space scales. In particular, the plasma science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPPs).
A good example is the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPP's to produce three-dimensional, general geometry, nonlinear particle simulations which have accelerated progress in understanding the nature of plasma turbulence in magnetically-confined high temperature plasmas. These calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In general, results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. The associated scientific excitement should serve to stimulate improved cross-cutting collaborations with other fields and also to help attract bright young talent to the computational science area.

  16. Study on induced radioactivity of China Spallation Neutron Source

    NASA Astrophysics Data System (ADS)

    Wu, Qing-Biao; Wang, Qing-Bin; Wu, Jing-Min; Ma, Zhong-Jian

    2011-06-01

    China Spallation Neutron Source (CSNS) is the first High Energy Intense Proton Accelerator planned to be constructed in China during the State Eleventh Five-Year Plan period, whose induced radioactivity is very important for occupational disease hazard assessment and environmental impact assessment. Adopting the FLUKA code, the authors have constructed a cylinder-tunnel geometric model and a line-source sampling physical model, deduced proper formulas to calculate air activation, and analyzed various issues with regard to the activation of different tunnel parts. The results show that the environmental impact resulting from induced activation is negligible, whereas the residual radiation in the tunnels has a great influence on maintenance personnel, so strict measures should be adopted.

  17. Electron linear accelerator system for natural rubber vulcanization

    NASA Astrophysics Data System (ADS)

    Rimjaem, S.; Kongmon, E.; Rhodes, M. W.; Saisut, J.; Thongbai, C.

    2017-09-01

    Development of an electron accelerator system, beam diagnostic instruments, an irradiation apparatus and electron beam processing methodology for natural rubber vulcanization is underway at the Plasma and Beam Physics Research Facility, Chiang Mai University, Thailand. The project is carried out with the aim of improving the quality of natural rubber products. The system consists of a DC thermionic electron gun, a 5-cell standing-wave radio-frequency (RF) linear accelerator (linac) with side-coupling cavities and an electron beam irradiation apparatus. This system is used to produce electron beams with an adjustable energy between 0.5 and 4 MeV and a pulse current of 10-100 mA at a pulse repetition rate of 20-400 Hz. An average absorbed dose between 160 and 640 Gy is expected to be achieved for the 4 MeV electron beam when the accelerator is operated at 400 Hz. The research activities focus first on assembly of the accelerator system, studies of the accelerator properties, and electron beam dynamics simulations. The resonant frequency of the RF linac in π/2 operating mode is 2996.82 MHz for the operating temperature of 35 °C. The beam dynamics simulations were conducted by using the code ASTRA. Simulation results suggest that electron beams with an average energy of 4.002 MeV can be obtained when the linac accelerating gradient is 41.7 MV/m. The rms transverse beam size and normalized rms transverse emittance at the linac exit are 0.91 mm and 10.48 π mm·mrad, respectively. This information can then be used as the input data for Monte Carlo simulations to estimate the electron beam penetration depth and dose distribution in the natural rubber latex. The study results from this research will be used to define optimal conditions for natural rubber vulcanization with different electron beam energies and doses. This is very useful for development of future practical industrial accelerator units.
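    Before running full Monte Carlo simulations, the electron penetration depth can be bracketed with the empirical Katz-Penfold range relation. This is a textbook rule of thumb, not a result from the paper; it gives the practical range in g/cm², which in a unit-density medium such as latex is numerically the depth in cm.

    ```python
    import math

    def practical_range_katz_penfold(E_MeV):
        """Empirical electron practical range in g/cm^2 (Katz-Penfold relation):
        R = 0.412 * E^(1.265 - 0.0954 ln E)  for 0.01 <= E <= 2.5 MeV,
        R = 0.530 * E - 0.106                for E > 2.5 MeV."""
        if E_MeV <= 2.5:
            return 0.412 * E_MeV ** (1.265 - 0.0954 * math.log(E_MeV))
        return 0.530 * E_MeV - 0.106

    range_4MeV = practical_range_katz_penfold(4.0)   # roughly 2 g/cm^2
    ```

    A 4 MeV beam thus penetrates on the order of 2 cm of latex, which sets the useful product thickness for the vulcanization process; the Monte Carlo simulations refine the actual depth-dose profile.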

  18. Simon van der Meer (1925-2011):. A Modest Genius of Accelerator Science

    NASA Astrophysics Data System (ADS)

    Chohan, Vinod C.

    2011-02-01

    Simon van der Meer was a brilliant scientist and a true giant of accelerator science. His seminal contributions to accelerator science have been essential to this day in our quest for satisfying the demands of modern particle physics. Whether we talk of long base-line neutrino physics or antiproton-proton physics at Fermilab or proton-proton physics at LHC, his techniques and inventions have been a vital part of the modern day successes. Simon van der Meer and Carlo Rubbia were the first CERN scientists to become Nobel laureates in Physics, in 1984. Van der Meer's lesser-known contributions spanned a whole range of subjects in accelerator science, from magnet design to power supply design, beam measurements, slow beam extraction, sophisticated programs and controls.

  19. Proceedings of the 1982 DPF summer study on elementary particle physics and future facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donaldson, R.; Gustafson, R.; Paige, F.

    1982-01-01

    This book presents the papers given at a conference on high energy physics. Topics considered at the conference included synchrotron radiation, testing the standard model, beyond the standard model, exploring the limits of accelerator technology, novel detector ideas, lepton-lepton colliders, lepton-hadron colliders, hadron-hadron colliders, fixed-target accelerators, non-accelerator physics, and sociology.

  20. Inner Radiation Belt Representation of the Energetic Electron Environment: Model and Data Synthesis Using the Salammbo Radiation Belt Transport Code and Los Alamos Geosynchronous and GPS Energetic Particle Data

    NASA Technical Reports Server (NTRS)

    Friedel, R. H. W.; Bourdarie, S.; Fennell, J.; Kanekal, S.; Cayton, T. E.

    2004-01-01

    The highly energetic electron environment in the inner magnetosphere (GEO inward) has received a lot of research attention in recent years, as the dynamics of relativistic electron acceleration and transport are not yet fully understood. These electrons can cause deep dielectric charging in any space hardware in the MEO to GEO region. We use a novel approach to obtain a global representation of the inner magnetospheric energetic electron environment, which can reproduce the absolute environment (flux) for any spacecraft orbit in that region to within a factor of 2 for the energy range of 100 keV to 5 MeV electrons, for any levels of magnetospheric activity. We combine the extensive set of inner magnetospheric energetic electron observations available at Los Alamos with the physics-based Salammbo transport code, using the data assimilation technique of "nudging". This in effect inputs in-situ data into the code and allows the diffusion mechanisms in the code to interpolate the data into regions and times of no data availability. We present here details of the methods used, both in the data assimilation process and in the necessary inter-calibration of the input data used. We will present sample runs of the model/data code and compare the results to test spacecraft data not used in the data assimilation process.
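    "Nudging" adds a relaxation term that pulls the model state toward observations wherever data exist, while the model's own physics (here, diffusion) fills in the unobserved regions. The sketch below is a generic one-dimensional illustration of the technique, not the Salammbo code: a single hypothetical in-situ measurement anchors one grid cell, and diffusion propagates that information outward.

    ```python
    def nudged_diffusion_step(state, obs, g=0.2, d=0.1):
        """One explicit diffusion step plus relaxation ('nudging') toward
        observations wherever they exist (obs[i] is None where no data)."""
        n = len(state)
        new = state[:]
        for i in range(n):
            left = state[i - 1] if i > 0 else state[i]      # zero-flux boundaries
            right = state[i + 1] if i < n - 1 else state[i]
            new[i] = state[i] + d * (left - 2 * state[i] + right)
            if obs[i] is not None:
                new[i] += g * (obs[i] - new[i])   # pull the model toward the data
        return new

    state = [0.0] * 5
    obs = [None, None, 3.0, None, None]   # a single hypothetical in-situ measurement
    for _ in range(2000):
        state = nudged_diffusion_step(state, obs)
    ```

    After enough steps the whole profile relaxes toward the observed value: the data constrain the model at one point, and the diffusion physics interpolates everywhere else, which is the essence of the model/data synthesis described above.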

  1. UCLA Final Technical Report for the "Community Petascale Project for Accelerator Science and Simulation”.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mori, Warren

    The UCLA Plasma Simulation Group is a major partner of the “Community Petascale Project for Accelerator Science and Simulation”. This is the final technical report. We include an overall summary, a list of publications, progress for the most recent year, and individual progress reports for each year. We have made tremendous progress during the three years. SciDAC funds have contributed to the development of a large number of skeleton codes that illustrate how to write PIC codes with a hierarchy of parallelism. These codes cover 2D and 3D as well as electrostatic solvers (which are used in beam dynamics codes and quasi-static codes) and electromagnetic solvers (which are used in plasma-based accelerator codes). We also used these ideas to develop a GPU-enabled version of OSIRIS. SciDAC funds also contributed to the development of strategies to eliminate the Numerical Cerenkov Instability (NCI), which is an issue when carrying out laser wakefield accelerator (LWFA) simulations in a boosted frame and when quantifying the emittance and energy spread of self-injected electron beams. This work included the development of a new code called UPIC-EMMA, which is an FFT-based electromagnetic PIC code, and of new hybrid algorithms in OSIRIS. A new hybrid (PIC in r-z and gridless in φ) algorithm was implemented into OSIRIS. In this algorithm the fields and current are expanded into azimuthal harmonics and the complex amplitude for each harmonic is calculated separately. The contributions from each harmonic are summed and then used to push the particles. This algorithm permits modeling plasma-based acceleration with some 3D effects but with the computational load of a 2D r-z PIC code. We developed a rigorously charge-conserving current deposit for this algorithm. Very recently, we made progress in combining the speedup from the quasi-3D algorithm with that from the Lorentz boosted frame.
    SciDAC funds also contributed to the improvement and speedup of the quasi-static PIC code QuickPIC. We have also used our suite of PIC codes to make scientific discoveries. Highlights include supporting FACET experiments, which achieved the milestones of showing high beam loading and energy transfer efficiency from a drive electron beam to a witness electron beam and the discovery of a self-loading regime for high-gradient acceleration of a positron beam. Both of these experimental milestones were published in Nature together with supporting QuickPIC simulation results. Simulation results from QuickPIC were used on the cover of Nature in one case. We are also making progress on using highly resolved QuickPIC simulations to show that ion motion may not lead to catastrophic emittance growth for tightly focused electron bunches loaded into nonlinear wakefields. This could mean that fully self-consistent beam loading scenarios are possible. This work remains in progress. OSIRIS simulations were used to discover how 200 MeV electron rings are formed in LWFA experiments, how to generate electrons that have a series of bunches on the nanometer scale, and how to transport electron beams from (into) plasma sections into (from) conventional beam optics sections.
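    The quasi-3D algorithm described above rests on expanding the fields on each r-z grid ring into azimuthal harmonics, a_m = (1/N) Σ_k f(θ_k) e^{-imθ_k}, and evolving each complex amplitude separately. The fragment below is a minimal sketch of that decomposition for field samples on one ring of a uniform azimuthal grid; it is an illustration of the expansion, not OSIRIS code.

    ```python
    import cmath
    import math

    def azimuthal_amplitude(samples, m):
        """Complex amplitude of azimuthal harmonic m from field values sampled
        at N uniformly spaced angles: a_m = (1/N) * sum_k f(theta_k) e^{-i m theta_k}."""
        N = len(samples)
        return sum(f * cmath.exp(-1j * m * 2.0 * math.pi * k / N)
                   for k, f in enumerate(samples)) / N

    N = 16
    # a field pattern that is a pure m = 2 azimuthal mode
    field = [math.cos(2 * 2.0 * math.pi * k / N) for k in range(N)]
    a2 = azimuthal_amplitude(field, 2)   # cos(2*theta) = (e^{2i t} + e^{-2i t}) / 2
    a1 = azimuthal_amplitude(field, 1)   # orthogonal mode: zero amplitude
    ```

    Because a laser-driven wake is dominated by a few low-m modes, truncating this expansion at small m captures key 3D effects at nearly the cost of a 2D r-z simulation, which is the payoff the report describes.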

  2. Programming (Tips) for Physicists & Engineers

    ScienceCinema

    Ozcan, Erkcan

    2018-02-19

    Programming for today's physicists and engineers. Work environment: today's astroparticle, accelerator experiments and information industry rely on large collaborations. Need more than ever: code sharing/reuse, code building--framework integration, documentation and good visualization, working remotely, not reinventing the wheel.

  3. Programming (Tips) for Physicists & Engineers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ozcan, Erkcan

    2010-07-13

    Programming for today's physicists and engineers. Work environment: today's astroparticle, accelerator experiments and information industry rely on large collaborations. Need more than ever: code sharing/reuse, code building--framework integration, documentation and good visualization, working remotely, not reinventing the wheel.

  4. Computational electronics and electromagnetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shang, C. C.

    The Computational Electronics and Electromagnetics thrust area at Lawrence Livermore National Laboratory serves as the focal point for engineering R&D activities for developing computer-based tools for design, analysis, and theory. Key representative applications include design of particle accelerator cells and beamline components; engineering analysis and design of high-power components, photonics, and optoelectronics circuit design; EMI susceptibility analysis; and antenna synthesis. The FY-96 technology-base effort focused code development on (1) accelerator design codes; (2) 3-D massively parallel, object-oriented time-domain EM codes; (3) material models; (4) coupling and application of engineering tools for analysis and design of high-power components; (5) 3-D spectral-domain CEM tools; and (6) enhancement of laser drilling codes. Joint efforts with the Power Conversion Technologies thrust area include development of antenna systems for compact, high-performance radar, in addition to novel, compact Marx generators. 18 refs., 25 figs., 1 tab.

  5. A comparison of models for supernova remnants including cosmic rays

    NASA Astrophysics Data System (ADS)

    Kang, Hyesung; Drury, L. O'C.

    1992-11-01

    A simplified model which can follow the dynamical evolution of a supernova remnant including the acceleration of cosmic rays without carrying out full numerical simulations has been proposed by Drury, Markiewicz, & Voelk in 1989. To explore the accuracy and the merits of using such a model, we have recalculated with the simplified code the evolution of the supernova remnants considered in Jones & Kang, in which more detailed and accurate numerical simulations were done using a full hydrodynamic code based on the two-fluid approximation. For the total energy transferred to cosmic rays the two codes are in good agreement, the acceleration efficiency being the same within a factor of 2 or so. The dependence of the results of the two codes on the closure parameters for the two-fluid approximation is also qualitatively similar. The agreement is somewhat degraded in those cases where the shock is smoothed out by the cosmic rays.

  6. The history and future of accelerator radiological protection.

    PubMed

    Thomas, R H

    2001-01-01

The development of accelerator radiological protection from the mid-1930s, just after the invention of the cyclotron, to the present day is described. Three major themes--physics, personalities and politics--are developed. The sections on physics describe the development of shielding design through measurement, radiation transport calculations, the impact of accelerators on the environment, and dosimetry in accelerator radiation fields. The discussion is limited to high-energy, high-intensity electron and proton accelerators. The impact of notable personalities on the development of both the basic science and the accelerator health physics profession itself is described, as is the important role played by scholars and teachers. The final section, which discusses the future of accelerator radiological protection, gives some emphasis to the social and political aspects that must be faced in the years ahead.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yokosawa, A.

Spin physics activities at medium and high energies became significantly active when polarized targets and polarized beams became accessible for hadron-hadron scattering experiments. My overview of spin physics will be inclined to the study of the strong interaction using facilities at Argonne ZGS, Brookhaven AGS (including RHIC), CERN, Fermilab, LAMPF, and SATURNE. In 1960 accelerator physicists had already been convinced that the ZGS could be unique in accelerating a polarized beam; polarized beams were being accelerated through linear accelerators elsewhere at that time. However, there was much concern about going ahead with the construction of a polarized beam because (i) the source intensity was not high enough for acceleration in the accelerator, (ii) the use of the accelerator would be limited to only polarized-beam physics, that is, proton-proton interactions, and (iii) p-p elastic scattering was not the most popular topic in high-energy physics. In fact, within spin physics, [pi]-nucleon physics looked attractive, since the determination of the spin and parity of possible [pi]p resonances attracted much attention. To proceed we needed more data besides total cross sections and elastic differential cross sections; measurements of polarization and other parameters were urgently needed. Polarization measurements had traditionally been performed by analyzing the spin of recoil protons. The drawbacks of this technique are: (i) it involves double scattering, resulting in poor accuracy of the data, and (ii) a carbon analyzer can only be used for a limited region of energy.

  8. Neutrino Factory Targets and the MICE Beam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walaron, Kenneth Andrew

    2007-01-01

The future of particle physics in the next 30 years must include detailed study of neutrinos. The first proof of physics beyond the Standard Model of particle physics is evident in results from recent neutrino experiments, which imply that neutrinos have mass and flavour mixing. The Neutrino Factory is the leading contender to measure precisely the neutrino mixing parameters and to probe beyond-Standard-Model physics. Significantly, one must look to measure the mixing angle θ13 and investigate the possibility of leptonic CP violation. If found, this may provide a key insight into the origins of the matter/anti-matter asymmetry seen in the universe, through the mechanism of leptogenesis. The Neutrino Factory will be a large international multi-billion dollar experiment combining novel new accelerator and long-baseline detector technology. Arguably the most important and costly features of this facility are the proton driver and cooling channel. This thesis presents simulation work focused on determining the optimal proton driver energy to maximise pion production, together with simulation of the transport of this pion flux through some candidate transport lattices. Benchmarking of pion cross-sections calculated by the MARS and GEANT4 codes against measured data from the HARP experiment is also presented. The cooling channel aims to reduce the phase-space volume of the decayed muon beam to a level that can be efficiently injected into the accelerator system. The Muon Ionisation Cooling Experiment (MICE), hosted by the Rutherford Appleton Laboratory, UK, is a proof-of-principle experiment aimed at measuring ionisation cooling. The experiment will run parasitically to the ISIS accelerator and will produce muons from pion decay. The MICE beamline provides muon beams of variable emittance and momentum to the MICE experiment to enable measurement of cooling over a wide range of beam conditions. Simulation work in the design of this beamline is presented in this thesis, as are results from an experiment to estimate the flux from the target into the beamline acceptance.

  9. Study on radiation production in the charge stripping section of the RISP linear accelerator

    NASA Astrophysics Data System (ADS)

    Oh, Joo-Hee; Oranj, Leila Mokhtari; Lee, Hee-Seock; Ko, Seung-Kook

    2015-02-01

The linear accelerator of the Rare Isotope Science Project (RISP) accelerates 200 MeV/nucleon 238U ions in multiple charge states. Many kinds of radiation are generated while the primary beam is transported along the beam line. The stripping process using a thin carbon foil leads to complicated radiation environments at the 90-degree bending section. The charge distribution of 238U ions after the carbon charge stripper was calculated by using the LISE++ program. The estimates of the radiation environments were carried out by using the well-proven Monte Carlo codes PHITS and FLUKA. The tracks of 238U ions in various charge states were identified using the magnetic field subroutine of the PHITS code. The dose distribution caused by U beam losses along those tracks was obtained over the accelerator tunnel. A modified calculation was applied for tracking the multi-charged U beams because PHITS and FLUKA were fundamentally designed to transport fully-ionized ion beams. In this study, the beam loss pattern after the stripping section was observed, and the radiation production by heavy ions was studied. Finally, the performance of the PHITS and FLUKA codes for estimating the radiation production at the stripping section was validated by applying a modified method.
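The charge-state dependence that makes this tracking difficult can be seen from a back-of-the-envelope magnetic rigidity calculation. The sketch below is illustrative only: the charge states are hypothetical examples, not the LISE++ distribution from the study.

```python
# Illustrative magnetic-rigidity calculation for 238U ions at 200 MeV/nucleon.
# The charge states below are hypothetical examples; the actual post-stripper
# distribution comes from a LISE++ calculation, as described in the abstract.
import math

AMU_MEV = 931.494          # atomic mass unit in MeV/c^2
A, T_PER_U = 238, 200.0    # mass number, kinetic energy per nucleon (MeV/u)

def rigidity_tm(q):
    """Magnetic rigidity B*rho (T*m) of a 238U ion with charge state q."""
    gamma = 1.0 + T_PER_U / AMU_MEV
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    pc_total_mev = gamma * beta * AMU_MEV * A   # total momentum times c, in MeV
    # B*rho [T*m] = pc [GeV] / (0.29979 * q)
    return (pc_total_mev / 1000.0) / (0.29979 * q)

for q in (73, 79, 86):     # example charge states around a stripper distribution
    print(q, round(rigidity_tm(q), 3))
```

Ions with different q follow different tracks through the 90-degree dipole, which is why the downstream beam-loss and dose pattern depends on the whole charge distribution.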

  10. 3D Reconnection and SEP Considerations in the CME-Flare Problem

    NASA Astrophysics Data System (ADS)

    Moschou, S. P.; Cohen, O.; Drake, J. J.; Sokolov, I.; Borovikov, D.; Alvarado Gomez, J. D.; Garraffo, C.

    2017-12-01

Reconnection is known to play a major role in particle acceleration in both solar and astrophysical regimes, yet little is known about its connection with the global scales and its comparative contribution to the generation of SEPs, with respect to other acceleration mechanisms such as the shock at a fast CME front, in the presence of a global structure such as a CME. Coupling efforts, combining both particle and global scales, are necessary to answer questions about the fundamentals of the energetic processes involved. We present such a coupled modeling effort that examines particle acceleration through reconnection in a self-consistent CME-flare model in both particle and fluid regimes. Of special interest is the supra-thermal component of the reconnection-driven acceleration, which will later collide with the denser chromospheric layer of the solar atmosphere and radiate in hard X-rays and γ-rays for supra-thermal electrons and protons, respectively. Two cutting-edge computational codes are used to capture the global CME and flare dynamics: a two-fluid MHD code and a 3D PIC code for the flare scales. Finally, we connect the simulations with current observations in different wavelengths in an effort to shed light on the unified CME-flare picture.

  11. Advanced propeller noise prediction in the time domain

    NASA Technical Reports Server (NTRS)

    Farassat, F.; Dunn, M. H.; Spence, P. L.

    1992-01-01

The time domain code ASSPIN gives acousticians a powerful technique for advanced propeller noise prediction. Except for nonlinear effects, the code uses exact solutions of the Ffowcs Williams-Hawkings equation with exact blade geometry and kinematics. The inclusion of nonaxial inflow, periodic loading noise, and adaptive time steps to accelerate computer execution completes the development of this code.

  12. GARLIC - A general purpose atmospheric radiative transfer line-by-line infrared-microwave code: Implementation and evaluation

    NASA Astrophysics Data System (ADS)

    Schreier, Franz; Gimeno García, Sebastián; Hedelt, Pascal; Hess, Michael; Mendrok, Jana; Vasquez, Mayte; Xu, Jian

    2014-04-01

    A suite of programs for high resolution infrared-microwave atmospheric radiative transfer modeling has been developed with emphasis on efficient and reliable numerical algorithms and a modular approach appropriate for simulation and/or retrieval in a variety of applications. The Generic Atmospheric Radiation Line-by-line Infrared Code - GARLIC - is suitable for arbitrary observation geometry, instrumental field-of-view, and line shape. The core of GARLIC's subroutines constitutes the basis of forward models used to implement inversion codes to retrieve atmospheric state parameters from limb and nadir sounding instruments. This paper briefly introduces the physical and mathematical basics of GARLIC and its descendants and continues with an in-depth presentation of various implementation aspects: An optimized Voigt function algorithm combined with a two-grid approach is used to accelerate the line-by-line modeling of molecular cross sections; various quadrature methods are implemented to evaluate the Schwarzschild and Beer integrals; and Jacobians, i.e. derivatives with respect to the unknowns of the atmospheric inverse problem, are implemented by means of automatic differentiation. For an assessment of GARLIC's performance, a comparison of the quadrature methods for solution of the path integral is provided. Verification and validation are demonstrated using intercomparisons with other line-by-line codes and comparisons of synthetic spectra with spectra observed on Earth and from Venus.
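As a minimal illustration of the Beer integral that such a line-by-line code evaluates along each path, the sketch below applies trapezoidal quadrature to the optical depth and checks it against the analytic answer for a homogeneous absorber. The function name and the test profile are invented for this example; they are not GARLIC's API.

```python
# Minimal sketch of the Beer integral evaluated by a line-by-line code:
# transmission T = exp(-integral of k(s) ds) along the line of sight.
# The absorption profile here is a made-up constant, chosen so that the
# quadrature can be checked against the closed-form answer exp(-k0*s).
import math

def transmission(k, s, n=1000):
    """Trapezoidal quadrature of the optical depth for an absorption
    coefficient k(s) over a path of length s, then Beer's law."""
    ds = s / n
    tau = 0.0
    for i in range(n):
        tau += 0.5 * (k(i * ds) + k((i + 1) * ds)) * ds
    return math.exp(-tau)

k0, s = 2.0e-3, 500.0                       # homogeneous path, tau = 1
numeric = transmission(lambda _: k0, s)
analytic = math.exp(-k0 * s)
print(numeric, analytic)                    # agree to quadrature accuracy
```

For an inhomogeneous atmosphere the same quadrature runs over layer-dependent cross sections; the choice of quadrature rule is exactly the implementation aspect the paper compares.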

  13. Physical Processes for Driving Ionospheric Outflows in Global Simulations

    NASA Technical Reports Server (NTRS)

    Moore, Thomas Earle; Strangeway, Robert J.

    2009-01-01

We review and assess the importance of processes thought to drive ionospheric outflows, linking them as appropriate to the solar wind and interplanetary magnetic field, and to the spatial and temporal distribution of their magnetospheric internal responses. These begin with the diffuse effects of photoionization and thermal equilibrium of the ionospheric topside, enhancing Jeans' escape, with ambipolar diffusion and acceleration. Auroral outflows begin with dayside reconnection and the resultant field-aligned currents and driven convection. These produce plasmaspheric plumes, collisional heating and wave-particle interactions, centrifugal acceleration, and auroral acceleration by parallel electric fields, including enhanced ambipolar fields from electron heating by precipitating particles. Observations and simulations show that solar wind energy dissipation into the atmosphere is concentrated by the geomagnetic field into auroral regions with an amplification factor of 10-100, enhancing the escape of heavy-species plasma and gas from gravity, and providing more current-carrying capacity. Internal plasmas thus enable electromagnetic driving via coupling to the plasma and neutral gas and, by extension, the entire body. We assess the importance of each of these processes in terms of local escape flux production as well as global outflow, and suggest methods for their implementation within multispecies global simulation codes. We complete the survey with an assessment of outstanding obstacles to this objective.

  14. Toward GPGPU accelerated human electromechanical cardiac simulations

    PubMed Central

    Vigueras, Guillermo; Roy, Ishani; Cookson, Andrew; Lee, Jack; Smith, Nicolas; Nordsletten, David

    2014-01-01

In this paper, we look at the acceleration of weakly coupled electromechanics using the graphics processing unit (GPU). Specifically, we port to the GPU a number of components of Heart—a CPU-based finite element code developed for simulating multi-physics problems. On the basis of a criterion of computational cost, we implemented on the GPU the ODE and PDE solution steps for the electrophysiology problem and the Jacobian and residual evaluation for the mechanics problem. Performance of the GPU implementation is then compared with single-core CPU (SC) execution as well as multi-core CPU (MC) computations with equivalent theoretical performance. Results show that for a human-scale left ventricle mesh, GPU acceleration of the electrophysiology problem provided speedups of 164× compared with SC and 5.5× compared with MC for the solution of the ODE model. Speedups of up to 72× compared with SC and 2.6× compared with MC were also observed for the PDE solve. Using the same human geometry, the GPU implementation of the mechanics residual/Jacobian computation provided speedups of up to 44× compared with SC and 2.0× compared with MC. © 2013 The Authors. International Journal for Numerical Methods in Biomedical Engineering published by John Wiley & Sons, Ltd. PMID:24115492

  15. An introduction to the physics of high energy accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Edwards, D.A.; Syphers, M.J.

    1993-01-01

This book is an outgrowth of a course given by the authors at various universities and particle accelerator schools. It starts from the basic physics principles governing particle motion inside an accelerator, and leads to a full description of the complicated phenomena and analytical tools encountered in the design and operation of a working accelerator. The book covers acceleration and longitudinal beam dynamics, transverse motion and nonlinear perturbations, intensity dependent effects, emittance preservation methods and synchrotron radiation. These subjects encompass the core concerns of a high energy synchrotron. The authors apparently do not assume the reader has much previous knowledge about accelerator physics. Hence, they take great care to introduce the physical phenomena encountered and the concepts used to describe them. The mathematical formulae and derivations are deliberately kept at a level suitable for beginners. After mastering this course, any interested reader will not find it difficult to follow subjects of more current interest. Useful homework problems are provided at the end of each chapter. Many of the problems are based on actual activities associated with the design and operation of existing accelerators.

  16. Path Toward a Unified Geometry for Radiation Transport

    NASA Astrophysics Data System (ADS)

    Lee, Kerry

    The Direct Accelerated Geometry for Radiation Analysis and Design (DAGRAD) element of the RadWorks Project under Advanced Exploration Systems (AES) within the Space Technology Mission Directorate (STMD) of NASA will enable new designs and concepts of operation for radiation risk assessment, mitigation and protection. This element is designed to produce a solution that will allow NASA to calculate the transport of space radiation through complex CAD models using the state-of-the-art analytic and Monte Carlo radiation transport codes. Due to the inherent hazard of astronaut and spacecraft exposure to ionizing radiation in low-Earth orbit (LEO) or in deep space, risk analyses must be performed for all crew vehicles and habitats. Incorporating these analyses into the design process can minimize the mass needed solely for radiation protection. Transport of the radiation fields as they pass through shielding and body materials can be simulated using Monte Carlo techniques or described by the Boltzmann equation, which is obtained by balancing changes in particle fluxes as they traverse a small volume of material with the gains and losses caused by atomic and nuclear collisions. Deterministic codes that solve the Boltzmann transport equation, such as HZETRN (high charge and energy transport code developed by NASA LaRC), are generally computationally faster than Monte Carlo codes such as FLUKA, GEANT4, MCNP(X) or PHITS; however, they are currently limited to transport in one dimension, which poorly represents the secondary light ion and neutron radiation fields. NASA currently uses HZETRN space radiation transport software, both because it is computationally efficient and because proven methods have been developed for using this software to analyze complex geometries. 
Although Monte Carlo codes describe the relevant physics in a fully three-dimensional manner, their computational costs have thus far prevented their widespread use for analysis of complex CAD models, leading to the creation and maintenance of toolkit-specific, simplified geometry models. The work presented here builds on the Direct Accelerated Geometry Monte Carlo (DAGMC) toolkit developed for use with the Monte Carlo N-Particle (MCNP) transport code. The workflow for doing radiation transport on CAD models using MCNP and FLUKA has been demonstrated, and the results of analyses on realistic spacecraft/habitats will be presented. Future work is planned that will further automate this process and enable the use of multiple radiation transport codes on identical geometry models imported from CAD. This effort will enhance the modeling tools used by NASA to accurately evaluate astronaut space radiation risk and to determine the protection provided by as-designed exploration mission vehicles and habitats.
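To illustrate the deterministic-versus-Monte-Carlo comparison drawn above in the simplest possible setting, the sketch below treats monoenergetic particles crossing a purely absorbing slab, where the Boltzmann solution reduces to exponential attenuation and a Monte Carlo estimate samples exponential path lengths. The cross section and thickness are made-up numbers, not values from any of the codes mentioned.

```python
# Toy comparison of the two transport approaches: for a purely absorbing
# slab, the deterministic (Boltzmann) answer is exp(-sigma * thickness),
# while Monte Carlo samples a free path per particle and counts survivors.
# All numbers are invented for illustration.
import math
import random

sigma, thickness = 0.2, 5.0          # macroscopic cross section (1/cm), slab (cm)
deterministic = math.exp(-sigma * thickness)

random.seed(1)
n = 200_000
transmitted = sum(1 for _ in range(n)
                  if random.expovariate(sigma) > thickness)
monte_carlo = transmitted / n

print(deterministic, monte_carlo)    # the two estimates agree statistically
```

Once scattering, secondaries, and 3-D geometry enter, the Monte Carlo side of this comparison is what DAGMC makes tractable on CAD models, at the computational cost discussed above.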

  17. Electric dipole moment planning with a resurrected BNL Alternating Gradient Synchrotron electron analog ring

    NASA Astrophysics Data System (ADS)

    Talman, Richard M.; Talman, John D.

    2015-07-01

    There has been much recent interest in directly measuring the electric dipole moments (EDM) of the proton and the electron, because of their possible importance in the present day observed matter/antimatter imbalance in the Universe. Such a measurement will require storing a polarized beam of "frozen spin" particles, 15 MeV electrons or 230 MeV protons, in an all-electric storage ring. Only one such relativistic electric accelerator has ever been built—the 10 MeV "electron analog" ring at Brookhaven National Laboratory in 1954; it can also be referred to as the "AGS analog" ring to make clear it was a prototype for the Alternating Gradient Synchrotron (AGS) proton ring under construction at that time at BNL. (Its purpose was to investigate nonlinear resonances as well as passage through "transition" with the newly invented alternating gradient proton ring design.) By chance this electron ring, long since dismantled and its engineering drawings disappeared, would have been appropriate both for measuring the electron EDM and to serve as an inexpensive prototype for the arguably more promising, but 10 times more expensive, proton EDM measurement. Today it is cheaper yet to "resurrect" the electron analog ring by simulating its performance computationally. This is one purpose for the present paper. Most existing accelerator simulation codes cannot be used for this purpose because they implicitly assume magnetic bending. The new ual/eteapot code, described in detail in an accompanying paper, has been developed for modeling storage ring performance, including spin evolution, in electric rings. Illustrating its use, comparing its predictions with the old observations, and describing new expectations concerning spin evolution and code performance, are other goals of the paper. 
To set up some of these calculations has required a kind of "archeological physics" to reconstitute the detailed electron analog lattice design from a 1991 retrospective report by Plotkin as well as unpublished notes of Courant describing machine studies performed in 1954-1955. This paper describes the practical application of the eteapot code and provides sample results, with emphasis on emulating lattice optics in the AGS analog ring for comparison with the historical machine studies and to predict the electron spin evolution they would have measured if they had polarized electrons and electron polarimetry. Of greater present day interest is the performance to be expected for a proton storage ring experiment. To exhibit the eteapot code performance and confirm its symplecticity, results are also given for 30 million turn proton spin tracking in an all-electric lattice that would be appropriate for a present day measurement of the proton EDM. The accompanying paper "Symplectic orbit and spin tracking code for all-electric storage rings" documents in detail the theoretical formulation implemented in eteapot, which is a new module in the Unified Accelerator Libraries (ual) environment.
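The specific beam energies quoted above follow from the frozen-spin ("magic") condition for an all-electric ring, a = 1/(γ² − 1), where a = (g−2)/2 is the magnetic anomaly. A quick check with standard particle-data values reproduces both numbers:

```python
# Check of the "frozen spin" energies quoted above. In an all-electric
# ring the spin stays aligned with the momentum when the magnetic
# anomaly a = (g-2)/2 satisfies a = 1/(gamma**2 - 1).
import math

def magic_kinetic_energy(a, mc2):
    """Kinetic energy (MeV) at the frozen-spin gamma for anomaly a."""
    gamma = math.sqrt(1.0 + 1.0 / a)
    return (gamma - 1.0) * mc2

ke_e = magic_kinetic_energy(0.00115965, 0.51100)   # electron
ke_p = magic_kinetic_energy(1.79284735, 938.272)   # proton
print(round(ke_e, 1), round(ke_p, 1))   # ~14.5 MeV and ~232.8 MeV
```

Both values round to the 15 MeV electron and 230 MeV proton figures quoted in the abstract, which is why the 10 MeV electron analog ring comes so close to a frozen-spin electron EDM machine.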

  18. Modeling Cooperative Threads to Project GPU Performance for Adaptive Parallelism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meng, Jiayuan; Uram, Thomas; Morozov, Vitali A.

Most accelerators, such as graphics processing units (GPUs) and vector processors, are particularly suitable for accelerating massively parallel workloads. On the other hand, conventional workloads are developed for multi-core parallelism, which often scales to only a few dozen OpenMP threads. When hardware threads significantly outnumber the degree of parallelism in the outer loop, programmers are challenged with efficient hardware utilization. A common solution is to further exploit the parallelism hidden deep in the code structure. Such parallelism is less structured: parallel and sequential loops may be imperfectly nested within each other, and neighboring inner loops may exhibit different concurrency patterns (e.g., Reduction vs. Forall), yet have to be parallelized in the same parallel section. Many input-dependent transformations have to be explored. A programmer often employs a larger group of hardware threads to cooperatively walk through a smaller outer loop partition and adaptively exploit any encountered parallelism. This process is time-consuming and error-prone, yet the risk of gaining little or no performance remains high for such workloads. To reduce risk and guide implementation, we propose a technique to model workloads with limited parallelism that can automatically explore and evaluate transformations involving cooperative threads. Ultimately, our framework projects the best achievable performance and the most promising transformations without implementing GPU code or using physical hardware. We envision our technique being integrated into future compilers or optimization frameworks for autotuning.

  19. High Energy Density Physics and Exotic Acceleration Schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cowan, T.; /General Atomics, San Diego; Colby, E.

    2005-09-27

The High Energy Density and Exotic Acceleration working group took as its goal to reach beyond the community of plasma accelerator research, with its applications to high energy physics, and to promote exchange with other disciplines that are challenged by related and demanding beam physics issues. The scope of the group was to cover particle acceleration and beam transport that, unlike in other groups at AAC, are not mediated by plasmas or by electromagnetic structures. At this Workshop, we saw an impressive advancement from years past in the area of Vacuum Acceleration, for example with the LEAP experiment at Stanford. And we saw an influx of exciting new beam physics topics involving particle propagation inside solid-density plasmas or at extremely high charge density, particularly in the areas of laser acceleration of ions and extreme beams for fusion energy research, including Heavy-Ion Inertial Fusion beam physics. One example of the importance and extreme nature of beam physics in HED research is the requirement in the Fast Ignitor scheme of inertial fusion to heat a compressed DT fusion pellet to keV temperatures by injection of laser-driven electron or ion beams of giga-amp current. Even in modest experiments presently being performed on the laser acceleration of ions from solids, mega-amp currents of MeV electrons must be transported through solid foils, requiring almost complete return current neutralization and giving rise to a wide variety of beam-plasma instabilities. As keynote talks, our group promoted Ion Acceleration (plenary talk by A. MacKinnon), which historically has grown out of inertial fusion research, and HIF Accelerator Research (invited talk by A. Friedman), which will require impressive advancements in space-charge-limited ion beam physics and in understanding the generation and transport of neutralized ion beams.
A unifying aspect of High Energy Density applications was the physics of particle beams inside solids, which is proving to be a very important field for diverse applications such as muon cooling, fusion energy research, and ultra-bright particle and radiation generation with high intensity lasers. We had several talks on these and other subjects, and many joint sessions with the Computational group, the EM Structures group, and the Beam Generation group. We summarize our group's work in the following categories: vacuum acceleration schemes; ion acceleration; particle transport in solids; and applications to high energy density phenomena.

  20. Rate heterogeneity in six protein-coding genes from the holoparasite Balanophora (Balanophoraceae) and other taxa of Santalales

    PubMed Central

    Su, Huei-Jiun; Hu, Jer-Ming

    2012-01-01

    Background and Aims The holoparasitic flowering plant Balanophora displays extreme floral reduction and was previously found to have enormous rate acceleration in the nuclear 18S rDNA region. So far, it remains unclear whether non-ribosomal, protein-coding genes of Balanophora also evolve in an accelerated fashion and whether the genes with high substitution rates retain their functionality. To tackle these issues, six different genes were sequenced from two Balanophora species and their rate variation and expression patterns were examined. Methods Sequences including nuclear PI, euAP3, TM6, LFY and RPB2 and mitochondrial matR were determined from two Balanophora spp. and compared with selected hemiparasitic species of Santalales and autotrophic core eudicots. Gene expression was detected for the six protein-coding genes and the expression patterns of the three B-class genes (PI, AP3 and TM6) were further examined across different organs of B. laxiflora using RT-PCR. Key Results Balanophora mitochondrial matR is highly accelerated in both nonsynonymous (dN) and synonymous (dS) substitution rates, whereas the rate variation of nuclear genes LFY, PI, euAP3, TM6 and RPB2 are less dramatic. Significant dS increases were detected in Balanophora PI, TM6, RPB2 and dN accelerations in euAP3. All of the protein-coding genes are expressed in inflorescences, indicative of their functionality. PI is restrictively expressed in tepals, synandria and floral bracts, whereas AP3 and TM6 are widely expressed in both male and female inflorescences. Conclusions Despite the observation that rates of sequence evolution are generally higher in Balanophora than in hemiparasitic species of Santalales and autotrophic core eudicots, the five nuclear protein-coding genes are functional and are evolving at a much slower rate than 18S rDNA. 
The mechanism or mechanisms responsible for rapid sequence evolution and concomitant rate acceleration for 18S rDNA and matR are currently not well understood and require further study in Balanophora and other holoparasites. PMID:23041381

  1. Simulation of Combustion Systems with Realistic g-jitter

    NASA Technical Reports Server (NTRS)

    Mell, William E.; McGrattan, Kevin B.; Baum, Howard R.

    2003-01-01

    In this project a transient, fully three-dimensional computer simulation code was developed to simulate the effects of realistic g-jitter on a number of combustion systems. The simulation code is capable of simulating flame spread on a solid and nonpremixed or premixed gaseous combustion in nonturbulent flow with simple combustion models. Simple combustion models were used to preserve computational efficiency since this is meant to be an engineering code. Also, the use of sophisticated turbulence models was not pursued (a simple Smagorinsky type model can be implemented if deemed appropriate) because if flow velocities are large enough for turbulence to develop in a reduced gravity combustion scenario it is unlikely that g-jitter disturbances (in NASA's reduced gravity facilities) will play an important role in the flame dynamics. Acceleration disturbances of realistic orientation, magnitude, and time dependence can be easily included in the simulation. The simulation algorithm was based on techniques used in an existing large eddy simulation code which has successfully simulated fire dynamics in complex domains. A series of simulations with measured and predicted acceleration disturbances on the International Space Station (ISS) are presented. The results of this series of simulations suggested a passive isolation system and appropriate scheduling of crew activity would provide a sufficiently "quiet" acceleration environment for spherical diffusion flames.

  2. GOTHIC: Gravitational oct-tree code accelerated by hierarchical time step controlling

    NASA Astrophysics Data System (ADS)

    Miki, Yohei; Umemura, Masayuki

    2017-04-01

The tree method is a widely implemented algorithm for collisionless N-body simulations in astrophysics, and is well suited for GPUs. Adopting hierarchical time stepping can accelerate N-body simulations; however, it is infrequently implemented and its potential remains untested in GPU implementations. We have developed a Gravitational Oct-Tree code accelerated by HIerarchical time step Controlling, named GOTHIC, which adopts both the tree method and the hierarchical time step. The code adopts some adaptive optimizations by monitoring the execution time of each function on-the-fly and minimizes the time-to-solution by balancing the measured time of multiple functions. Results of performance measurements with a realistic particle distribution, performed on the NVIDIA Tesla M2090, K20X, and GeForce GTX TITAN X, which are representative GPUs of the Fermi, Kepler, and Maxwell generations, show that the hierarchical time step achieves a speedup by a factor of around 3-5 compared to the shared time step. The measured elapsed time per step of GOTHIC is 0.30 s or 0.44 s on GTX TITAN X when the particle distribution represents the Andromeda galaxy or the NFW sphere, respectively, with 2^24 = 16,777,216 particles. The averaged performance of the code corresponds to 10-30% of the theoretical single-precision peak performance of the GPU.
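A minimal sketch of the block (hierarchical) time-stepping schedule can make the bookkeeping concrete. The rung assignment below is arbitrary; a real code such as GOTHIC derives each particle's rung from an accuracy criterion:

```python
# Sketch of hierarchical (block) time stepping: bodies on rung r advance
# with dt / 2**r, and on substep k only the rungs whose step divides k
# are integrated. Rungs here are fixed by hand; a real N-body code
# assigns them from an acceleration-based accuracy criterion.

def active_rungs(substep, max_rung):
    """Rungs updated on a given substep of the 2**max_rung substeps."""
    return [r for r in range(max_rung + 1)
            if substep % (2 ** (max_rung - r)) == 0]

max_rung = 3                      # finest step is dt/8
updates = {r: 0 for r in range(max_rung + 1)}
for k in range(2 ** max_rung):    # one full macro step dt
    for r in active_rungs(k, max_rung):
        updates[r] += 1           # stand-in for a kick/drift of rung-r bodies

print(updates)   # {0: 1, 1: 2, 2: 4, 3: 8}: rung r is updated 2**r times
```

A shared time step would charge every body 8 updates per macro step; charging rung r only 2^r updates is the source of the factor 3-5 speedup the measurements report.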

  3. Cosmic Rays and Their Radiative Processes in Numerical Cosmology

    NASA Technical Reports Server (NTRS)

    Ryu, Dongsu; Miniati, Francesco; Jones, Tom W.; Kang, Hyesung

    2000-01-01

    A cosmological hydrodynamic code is described, which includes a routine to compute cosmic ray acceleration and transport in a simplified way. The routine was designed to follow explicitly diffusive acceleration at shocks, and second-order Fermi acceleration and adiabatic loss in smooth flows. Synchrotron cooling of the electron population can also be followed. The updated code is intended to be used to study the properties of nonthermal synchrotron emission and inverse Compton scattering from electron cosmic rays in clusters of galaxies, in addition to the properties of thermal bremsstrahlung emission from hot gas. The results of a test simulation using a grid of 128^3 cells are presented, where cosmic rays and magnetic field have been treated passively and synchrotron cooling of cosmic ray electrons has not been included.

  4. Cosmic Rays and Their Radiative Processes in Numerical Cosmology

    NASA Astrophysics Data System (ADS)

    Ryu, D.; Miniati, F.; Jones, T. W.; Kang, H.

    2000-05-01

    A cosmological hydrodynamic code is described, which includes a routine to compute cosmic ray acceleration and transport in a simplified way. The routine was designed to follow explicitly diffusive acceleration at shocks, and second-order Fermi acceleration and adiabatic loss in smooth flows. Synchrotron cooling of the electron population can also be followed. The updated code is intended to be used to study the properties of nonthermal synchrotron emission and inverse Compton scattering from electron cosmic rays in clusters of galaxies, in addition to the properties of thermal bremsstrahlung emission from hot gas. The results of a test simulation using a grid of 128^3 cells are presented, where cosmic rays and magnetic field have been treated passively and synchrotron cooling of cosmic ray electrons has not been included.
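
    For context on the "diffusive acceleration at shocks" followed by the routine: in the test-particle limit, diffusive shock acceleration yields a power-law momentum distribution f(p) ∝ p^-q whose index depends only on the shock compression ratio. A quick check of that standard result:

    ```python
    def dsa_spectral_index(compression_ratio):
        """Test-particle diffusive shock acceleration: the downstream
        momentum distribution is f(p) ~ p**-q with q = 3r / (r - 1),
        where r is the shock compression ratio."""
        r = compression_ratio
        return 3.0 * r / (r - 1.0)

    # A strong shock in a gamma = 5/3 gas has r = 4 and hence q = 4,
    # i.e. the familiar E**-2 energy spectrum for relativistic particles.
    ```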

  5. A Modular Environment for Geophysical Inversion and Run-time Autotuning using Heterogeneous Computing Systems

    NASA Astrophysics Data System (ADS)

    Myre, Joseph M.

    Heterogeneous computing systems have recently come to the forefront of the High-Performance Computing (HPC) community's interest. HPC computer systems that incorporate special purpose accelerators, such as Graphics Processing Units (GPUs), are said to be heterogeneous. Large scale heterogeneous computing systems have consistently ranked highly on the Top500 list since the beginning of the heterogeneous computing trend. By using heterogeneous computing systems that consist of both general purpose processors and special-purpose accelerators, the speed and problem size of many simulations could be dramatically increased. Ultimately this results in enhanced simulation capabilities that allow, in some cases for the first time, the execution of parameter space and uncertainty analyses, model optimizations, and other inverse modeling techniques that are critical for scientific discovery and engineering analysis. However, simplifying the usage and optimization of codes for heterogeneous computing systems remains a challenge. This is particularly true for scientists and engineers for whom understanding HPC architectures and undertaking performance analysis may not be primary research objectives. To enable scientists and engineers to remain focused on their primary research objectives, a modular environment for geophysical inversion and run-time autotuning on heterogeneous computing systems is presented. This environment is composed of three major components: 1) CUSH---a framework for reducing the complexity of programming heterogeneous computer systems, 2) geophysical inversion routines which can be used to characterize physical systems, and 3) run-time autotuning routines designed to determine configurations of heterogeneous computing systems in an attempt to maximize the performance of scientific and engineering codes. 
Using three case studies, a lattice-Boltzmann method, a non-negative least squares inversion, and a finite-difference fluid flow method, it is shown that this environment provides scientists and engineers with means to reduce the programmatic complexity of their applications, to perform geophysical inversions for characterizing physical systems, and to determine high-performing run-time configurations of heterogeneous computing systems using a run-time autotuner.
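
    The run-time autotuning component described above can be reduced to its essence: time each candidate configuration and keep the fastest. A toy sketch of that loop (the configuration space and kernel are hypothetical, not CUSH's actual interface):

    ```python
    import time

    def timed_run(kernel, cfg):
        """Wall-clock a single invocation of kernel(cfg)."""
        start = time.perf_counter()
        kernel(cfg)
        return time.perf_counter() - start

    def autotune(kernel, config_space, repeats=3):
        """Time every candidate configuration and return the fastest one --
        the simplest possible empirical run-time autotuner."""
        best_cfg, best_t = None, float("inf")
        for cfg in config_space:
            t = min(timed_run(kernel, cfg) for _ in range(repeats))
            if t < best_t:
                best_cfg, best_t = cfg, t
        return best_cfg, best_t

    # Hypothetical search space: block sizes a GPU kernel might be launched with.
    space = [{"block": b} for b in (32, 64, 128, 256)]
    ```

    Real autotuners prune the search space and cache results across runs, but the measure-and-select core is the same.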

  6. Frontier applications of electrostatic accelerators

    NASA Astrophysics Data System (ADS)

    Liu, Ke-Xin; Wang, Yu-Gang; Fan, Tie-Shuan; Zhang, Guo-Hui; Chen, Jia-Er

    2013-10-01

    The electrostatic accelerator is a powerful tool in many research fields, such as nuclear physics, radiation biology, materials science, archaeology, and earth sciences. Two electrostatic accelerators, a single-stage Van de Graaff with a terminal voltage of 4.5 MV and an EN tandem with a terminal voltage of 6 MV, were installed in the 1980s at the Institute of Heavy Ion Physics and have been in operation since the early 1990s. Many applications have been carried out since then. These two accelerators are described, and summaries of the most important applications in neutron physics and technology, radiation biology and materials science, as well as accelerator mass spectrometry (AMS), are presented.

  7. Using Kokkos for Performant Cross-Platform Acceleration of Liquid Rocket Simulations

    DTIC Science & Technology

    2017-05-08

    Briefing charts, 5 April 2017 - 8 May 2017, on using Kokkos for performant cross-platform acceleration of liquid rocket combustion simulations (ERC Incorporated, AFRL-West), including a SPACE simulation of a rotating detonation engine (courtesy of Dr. Christopher Lietz). DISTRIBUTION A: Approved for public release.

  8. Simulating cosmic ray physics on a moving mesh

    NASA Astrophysics Data System (ADS)

    Pfrommer, C.; Pakmor, R.; Schaal, K.; Simpson, C. M.; Springel, V.

    2017-03-01

    We discuss new methods to integrate the cosmic ray (CR) evolution equations coupled to magnetohydrodynamics on an unstructured moving mesh, as realized in the massively parallel AREPO code for cosmological simulations. We account for diffusive shock acceleration of CRs at resolved shocks and at supernova remnants in the interstellar medium (ISM) and follow the advective CR transport within the magnetized plasma, as well as anisotropic diffusive transport of CRs along the local magnetic field. CR losses are included in terms of Coulomb and hadronic interactions with the thermal plasma. We demonstrate the accuracy of our formalism for CR acceleration at shocks through simulations of plane-parallel shock tubes that are compared to newly derived exact solutions of the Riemann shock-tube problem with CR acceleration. We find that the increased compressibility of the post-shock plasma due to the produced CRs decreases the shock speed. However, CR acceleration at spherically expanding blast waves does not significantly break the self-similarity of the Sedov-Taylor solution; the resulting modifications can be approximated by a suitably adjusted, but constant adiabatic index. In first applications of the new CR formalism to simulations of isolated galaxies and cosmic structure formation, we find that CRs add an important pressure component to the ISM that increases the vertical scale height of disc galaxies and thus reduces the star formation rate. Strong external structure formation shocks inject CRs into the gas, but the relative pressure of this component decreases towards halo centres as adiabatic compression favours the thermal over the CR pressure.

  9. Computer modeling of test particle acceleration at oblique shocks

    NASA Technical Reports Server (NTRS)

    Decker, Robert B.

    1988-01-01

    This evaluation of the basic techniques and illustrative results of numerical codes for modeling charged-particle acceleration at oblique, fast-mode collisionless shocks emphasizes the treatment of ions as test particles, calculating particle dynamics through numerical integration along exact phase-space orbits. Attention is given to the acceleration of particles at planar, infinitesimally thin shocks, as well as to plasma simulations in which low-energy ions are injected and accelerated at quasi-perpendicular shocks with internal structure.

  10. Physical Interpretation of the Schott Energy of An Accelerating Point Charge and the Question of Whether a Uniformly Accelerating Charge Radiates

    ERIC Educational Resources Information Center

    Rowland, David R.

    2010-01-01

    A core topic in graduate courses in electrodynamics is the description of radiation from an accelerated charge and the associated radiation reaction. However, contemporary papers still express a diversity of views on the question of whether or not a uniformly accelerating charge radiates suggesting that a complete "physical" understanding of the…

  11. Measurement of Coriolis Acceleration with a Smartphone

    ERIC Educational Resources Information Center

    Shaku, Asif; Kraft, Jakob

    2016-01-01

    Undergraduate physics laboratories seldom have experiments that measure the Coriolis acceleration. This has traditionally been the case owing to the inherent complexities of making such measurements. Articles on the experimental determination of the Coriolis acceleration are few and far between in the physics literature. However, because modern…

  12. GeNN: a code generation framework for accelerated brain simulations

    NASA Astrophysics Data System (ADS)

    Yavuz, Esin; Turner, James; Nowotny, Thomas

    2016-01-01

    Large-scale numerical simulations of detailed brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. An ongoing challenge for simulating realistic models is, however, computational speed. In this paper, we present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale neuronal networks to address this challenge. GeNN is an open source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs, through a flexible and extensible interface, which does not require in-depth technical knowledge from the users. We present performance benchmarks showing that 200-fold speedup compared to a single core of a CPU can be achieved for a network of one million conductance based Hodgkin-Huxley neurons but that for other models the speedup can differ. GeNN is available for Linux, Mac OS X and Windows platforms. The source code, user manual, tutorials, Wiki, in-depth example projects and all other related information can be found on the project website http://genn-team.github.io/genn/.
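
    The code-generation idea behind GeNN can be illustrated with a heavily simplified generator that turns a neuron-model description into CUDA-like kernel source. The function name, snippet syntax, and model below are all illustrative inventions, not GeNN's actual API, which is considerably richer:

    ```python
    def generate_update_kernel(model_name, state_vars, update_code):
        """Emit CUDA-like source for a per-neuron state update from a tiny
        model description: load state, splice in user update code, store state."""
        params = ", ".join(f"float *d_{v}" for v in state_vars)
        loads = "\n    ".join(f"float {v} = d_{v}[i];" for v in state_vars)
        stores = "\n    ".join(f"d_{v}[i] = {v};" for v in state_vars)
        return (
            f"__global__ void update_{model_name}(int n, {params}) {{\n"
            f"    int i = blockIdx.x * blockDim.x + threadIdx.x;\n"
            f"    if (i >= n) return;\n"
            f"    {loads}\n"
            f"    {update_code}\n"
            f"    {stores}\n"
            f"}}\n"
        )

    # Hypothetical leaky-integrator model with a single state variable v.
    src = generate_update_kernel("lif", ["v"], "v += 0.1f * (20.0f - v);")
    ```

    Generating the kernel from a model description is what lets such frameworks specialize memory layout and loop structure per model without requiring users to write CUDA themselves.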

  13. GeNN: a code generation framework for accelerated brain simulations.

    PubMed

    Yavuz, Esin; Turner, James; Nowotny, Thomas

    2016-01-07

    Large-scale numerical simulations of detailed brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. An ongoing challenge for simulating realistic models is, however, computational speed. In this paper, we present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale neuronal networks to address this challenge. GeNN is an open source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs, through a flexible and extensible interface, which does not require in-depth technical knowledge from the users. We present performance benchmarks showing that 200-fold speedup compared to a single core of a CPU can be achieved for a network of one million conductance based Hodgkin-Huxley neurons but that for other models the speedup can differ. GeNN is available for Linux, Mac OS X and Windows platforms. The source code, user manual, tutorials, Wiki, in-depth example projects and all other related information can be found on the project website http://genn-team.github.io/genn/.

  14. GeNN: a code generation framework for accelerated brain simulations

    PubMed Central

    Yavuz, Esin; Turner, James; Nowotny, Thomas

    2016-01-01

    Large-scale numerical simulations of detailed brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. An ongoing challenge for simulating realistic models is, however, computational speed. In this paper, we present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale neuronal networks to address this challenge. GeNN is an open source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs, through a flexible and extensible interface, which does not require in-depth technical knowledge from the users. We present performance benchmarks showing that 200-fold speedup compared to a single core of a CPU can be achieved for a network of one million conductance based Hodgkin-Huxley neurons but that for other models the speedup can differ. GeNN is available for Linux, Mac OS X and Windows platforms. The source code, user manual, tutorials, Wiki, in-depth example projects and all other related information can be found on the project website http://genn-team.github.io/genn/. PMID:26740369

  15. Electron Accelerator Shielding Design of KIPT Neutron Source Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhong, Zhaopeng; Gohar, Yousry

    The Argonne National Laboratory of the United States and the Kharkov Institute of Physics and Technology of the Ukraine have been collaborating on the design, development and construction of a neutron source facility at Kharkov Institute of Physics and Technology utilizing an electron-accelerator-driven subcritical assembly. The electron beam power is 100 kW using 100-MeV electrons. The facility was designed to perform basic and applied nuclear research, produce medical isotopes, and train nuclear specialists. The biological shield of the accelerator building was designed to reduce the biological dose to less than 5.0e-03 mSv/h during operation. The main source of the biological dose for the accelerator building is the photons and neutrons generated from different interactions of leaked electrons from the electron gun and the accelerator sections with the surrounding components and materials. The Monte Carlo N-particle extended code (MCNPX) was used for the shielding calculations because of its capability to perform electron-, photon-, and neutron-coupled transport simulations. The photon dose was tallied using the MCNPX calculation, starting with the leaked electrons. However, it is difficult to accurately tally the neutron dose directly from the leaked electrons. The neutron yield per electron from the interactions with the surrounding components is very small, about 0.01 neutrons per 100-MeV electron and even smaller for lower-energy electrons. This causes difficulties for the Monte Carlo analyses and consumes tremendous computation resources for tallying the neutron dose outside the shield boundary with an acceptable accuracy. To avoid these difficulties, the SOURCE and TALLYX user subroutines of MCNPX were utilized for this study. The generated neutrons were banked, together with all related parameters, for a subsequent MCNPX calculation to obtain the neutron dose. 
    The weight windows variance reduction technique was also utilized for both neutron and photon dose calculations. Two shielding materials, heavy concrete and ordinary concrete, were considered for the shield design. The main goal is to maintain the total dose outside the shield boundary below 5.0e-03 mSv/h during operation. The shield configuration and parameters of the accelerator building were determined and are presented in this paper. Copyright (C) 2016, Published by Elsevier Korea LLC on behalf of Korean Nuclear Society.
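
    The two-stage scheme described above, banking rarely produced neutrons in one run and replaying them as the source of a second, can be sketched as follows. The yield, sampling, and tally here are illustrative stand-ins, not MCNPX's physics or API:

    ```python
    import random

    def stage_one(n_electrons, neutron_yield=1e-2, rng=None):
        """Stage 1: follow electron histories; each (rare) photoneutron is
        banked with its phase-space parameters instead of being tracked now."""
        rng = rng or random.Random(1)
        bank = []
        for _ in range(n_electrons):
            if rng.random() < neutron_yield:   # ~0.01 neutrons per electron
                bank.append({"energy_MeV": rng.uniform(0.1, 10.0),
                             "position": (rng.random(), rng.random(), rng.random())})
        return bank

    def stage_two(bank):
        """Stage 2: rerun the transport with the banked neutrons as the
        source, so every neutron history contributes to the dose tally
        (a placeholder 'tally' here)."""
        return sum(n["energy_MeV"] for n in bank)

    bank = stage_one(100_000)   # every banked neutron is reused in stage 2
    ```

    Splitting the problem this way means the expensive electron run is paid once, and the neutron statistics are no longer throttled by the tiny neutron yield per electron.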

  16. Transport, Acceleration and Spatial Access of Solar Energetic Particles

    NASA Astrophysics Data System (ADS)

    Borovikov, D.; Sokolov, I.; Effenberger, F.; Jin, M.; Gombosi, T. I.

    2017-12-01

    Solar Energetic Particles (SEPs) are a major branch of space weather. Often driven by Coronal Mass Ejections (CMEs), SEPs have a very high destructive potential, which includes but is not limited to disrupting communication systems on Earth, inflicting harmful and potentially fatal radiation doses on crew members onboard spacecraft and, in extreme cases, on people aboard high-altitude flights. However, the research community currently lacks efficient tools to predict such hazardous SEP events. Such a tool would serve as the first step towards improving humanity's preparedness for SEP events and ultimately its ability to mitigate their effects. The main goal of the presented research is to develop a computational tool that provides these capabilities and meets the community's demand. Our model has forecasting capability and can be the basis for an operational system that provides live information on the current potential threats posed by SEPs based on observations of the Sun. The tool comprises several numerical models, which are designed to simulate different physical aspects of SEPs. The background conditions in the interplanetary medium, in particular the Coronal Mass Ejection driving the particle acceleration, play a defining role and are simulated with the state-of-the-art MHD solver, Block-Adaptive-Tree Solar-wind Roe-type Upwind Scheme (BATS-R-US). The newly developed particle code, Multiple-Field-Line-Advection Model for Particle Acceleration (M-FLAMPA), simulates the actual transport and acceleration of SEPs and is coupled to the MHD code. The special property of SEPs, the tendency to follow magnetic lines of force, is fully exploited in the computational model, which replaces a complicated 3-D model with a multitude of 1-D models. This approach significantly simplifies computations and improves the time performance of the overall model. 
    It also plays the important role of mapping the affected region by connecting it with the origin of SEPs at the solar surface. Our model incorporates the effects of near-Sun field line meandering, which affects the perpendicular transport of SEPs and can explain the large longitudinal spread observed even in the early phases of such events.
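
    M-FLAMPA's substitution of many 1-D problems for one 3-D problem rests on SEPs following field lines, so each line can be advanced independently (and in parallel). A minimal sketch of that decomposition, with pure advection standing in for the full transport equation the code actually solves:

    ```python
    def advect_along_lines(lines, u, dt):
        """Advance the field-aligned coordinate s of every particle on every
        magnetic field line independently (ds/dt = u): one cheap 1-D problem
        per line instead of one expensive 3-D problem."""
        return [[s + u * dt for s in line] for line in lines]

    # Particle positions (in some arc-length unit) on two extracted field lines.
    lines = [[0.0, 1.0], [2.0]]
    moved = advect_along_lines(lines, u=0.5, dt=2.0)
    ```

    Because the lines never exchange information in this picture, the ensemble is embarrassingly parallel, which is the source of the time-performance gain claimed in the abstract.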

  17. Novel 3D/VR interactive environment for MD simulations, visualization and analysis.

    PubMed

    Doblack, Benjamin N; Allis, Tim; Dávila, Lilian P

    2014-12-18

    The increasing development of computing (hardware and software) in the last decades has impacted scientific research in many fields including materials science, biology, chemistry and physics among many others. A new computational system for the accurate and fast simulation and 3D/VR visualization of nanostructures is presented here, using the open-source molecular dynamics (MD) computer program LAMMPS. This alternative computational method uses modern graphics processors, NVIDIA CUDA technology and specialized scientific codes to overcome processing speed barriers common to traditional computing methods. In conjunction with a virtual reality system used to model materials, this enhancement allows the addition of accelerated MD simulation capability. The motivation is to provide a novel research environment which simultaneously allows visualization, simulation, modeling and analysis. The research goal is to investigate the structure and properties of inorganic nanostructures (e.g., silica glass nanosprings) under different conditions using this innovative computational system. The work presented outlines a description of the 3D/VR Visualization System and basic components, an overview of important considerations such as the physical environment, details on the setup and use of the novel system, a general procedure for the accelerated MD enhancement, technical information, and relevant remarks. The impact of this work is the creation of a unique computational system combining nanoscale materials simulation, visualization and interactivity in a virtual environment, which is both a research and teaching instrument at UC Merced.

  18. Novel 3D/VR Interactive Environment for MD Simulations, Visualization and Analysis

    PubMed Central

    Doblack, Benjamin N.; Allis, Tim; Dávila, Lilian P.

    2014-01-01

    The increasing development of computing (hardware and software) in the last decades has impacted scientific research in many fields including materials science, biology, chemistry and physics among many others. A new computational system for the accurate and fast simulation and 3D/VR visualization of nanostructures is presented here, using the open-source molecular dynamics (MD) computer program LAMMPS. This alternative computational method uses modern graphics processors, NVIDIA CUDA technology and specialized scientific codes to overcome processing speed barriers common to traditional computing methods. In conjunction with a virtual reality system used to model materials, this enhancement allows the addition of accelerated MD simulation capability. The motivation is to provide a novel research environment which simultaneously allows visualization, simulation, modeling and analysis. The research goal is to investigate the structure and properties of inorganic nanostructures (e.g., silica glass nanosprings) under different conditions using this innovative computational system. The work presented outlines a description of the 3D/VR Visualization System and basic components, an overview of important considerations such as the physical environment, details on the setup and use of the novel system, a general procedure for the accelerated MD enhancement, technical information, and relevant remarks. The impact of this work is the creation of a unique computational system combining nanoscale materials simulation, visualization and interactivity in a virtual environment, which is both a research and teaching instrument at UC Merced. PMID:25549300

  19. Activation assessment of the soil around the ESS accelerator tunnel

    NASA Astrophysics Data System (ADS)

    Rakhno, I. L.; Mokhov, N. V.; Tropin, I. S.; Ene, D.

    2018-06-01

    Activation of the soil surrounding the ESS accelerator tunnel, calculated with the MARS15 code, is presented. A detailed composition of the soil, which comprises about 30 chemical elements, is considered. Spatial distributions of the produced activity are provided in both the transverse and longitudinal directions. A realistic irradiation profile for the entire planned lifetime of the facility is used. The nuclear transmutation and decay of the produced radionuclides are calculated with the DeTra code, a built-in tool of the MARS15 code. Radionuclide production by low-energy neutrons is calculated using the ENDF/B-VII evaluated nuclear data library. In order to estimate the quality of this activation assessment, a comparison between calculated and measured activation of various foils in a similar radiation environment is presented.
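
    Transmutation-and-decay bookkeeping of the kind DeTra performs reduces, for a two-member chain, to the classic Bateman solution. A sketch of that generic formula (not DeTra's implementation):

    ```python
    import math

    def daughter_atoms(n0, lam1, lam2, t):
        """Bateman solution for a chain 1 -> 2 -> (stable): number of
        daughter atoms at time t, starting from n0 parent atoms and no
        daughters.  lam1 and lam2 are decay constants; lam1 != lam2 assumed."""
        return n0 * lam1 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))
    ```

    Longer chains generalize to sums of exponentials; a realistic irradiation profile is handled by superposing such solutions over the irradiation and cooling intervals.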

  20. Atomic-scale Modeling of the Structure and Dynamics of Dislocations in Complex Alloys at High Temperatures

    NASA Technical Reports Server (NTRS)

    Daw, Murray S.; Mills, Michael J.

    2003-01-01

    We report on the progress made during the first year of the project. Most of the progress at this point has been on the theoretical and computational side. Here are the highlights: (1) A new code, tailored for high-end desktop computing, now combines modern Accelerated Dynamics (AD) with the well-tested Embedded Atom Method (EAM); (2) The new Accelerated Dynamics allows the study of relatively slow, thermally-activated processes, such as diffusion, which are much too slow for traditional Molecular Dynamics; (3) We have benchmarked the new AD code on a rather simple and well-known process: vacancy diffusion in copper; and (4) We have begun application of the AD code to the diffusion of vacancies in ordered intermetallics.
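
    The slow, thermally-activated processes targeted by Accelerated Dynamics are governed by Arrhenius rates, which is why plain Molecular Dynamics cannot reach them: the expected waiting time 1/k dwarfs any affordable trajectory. A sketch with illustrative textbook-scale numbers for a vacancy hop (not values from the project's EAM calculations):

    ```python
    import math

    K_B_EV = 8.617333262e-5   # Boltzmann constant in eV/K

    def arrhenius_rate(prefactor_hz, barrier_ev, temperature_k):
        """Rate of a thermally-activated event, k = nu0 * exp(-Ea / (kB * T))."""
        return prefactor_hz * math.exp(-barrier_ev / (K_B_EV * temperature_k))

    # Attempt frequency nu0 ~ 1e13 Hz and barrier Ea ~ 0.7 eV: at 300 K the
    # mean waiting time 1/k is ~0.06 s, hopeless for femtosecond-step MD.
    rate_300 = arrhenius_rate(1e13, 0.7, 300.0)
    ```

    Accelerated Dynamics methods exploit exactly this exponential structure, either by raising the temperature, biasing the potential, or running many replicas, and then rescaling the observed event times back to the target conditions.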

  1. UFO: A THREE-DIMENSIONAL NEUTRON DIFFUSION CODE FOR THE IBM 704

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Auerbach, E.H.; Jewett, J.P.; Ketchum, M.A.

    A description of UFO, a code for the solution of the few-group neutron diffusion equation in three-dimensional Cartesian coordinates on the IBM 704, is given. An accelerated Liebmann flux iteration scheme is used, and optimum parameters can be calculated by the code whenever they are required. The theory and operation of the program are discussed. (auth)
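
    The "accelerated Liebmann" iteration of the abstract is what is now called successive over-relaxation (SOR). A 1-D model problem shows the scheme and the kind of "optimum parameter" such codes compute; the three-dimensional few-group solver is analogous, and this sketch is of course not the UFO code itself:

    ```python
    import math

    def sor_poisson(n, source, omega=None, tol=1e-8, max_iter=10_000):
        """Accelerated Liebmann (SOR) iteration for -u'' = source on [0, 1]
        with u(0) = u(1) = 0 -- a 1-D stand-in for the flux iteration.
        For this model problem the optimum parameter is
        omega = 2 / (1 + sin(pi / n))."""
        h = 1.0 / n
        if omega is None:
            omega = 2.0 / (1.0 + math.sin(math.pi / n))
        u = [0.0] * (n + 1)
        for _ in range(max_iter):
            diff = 0.0
            for i in range(1, n):
                gauss_seidel = 0.5 * (u[i - 1] + u[i + 1] + h * h * source)
                new = (1.0 - omega) * u[i] + omega * gauss_seidel
                diff = max(diff, abs(new - u[i]))
                u[i] = new
            if diff < tol:
                break
        return u

    u = sor_poisson(32, source=1.0)   # exact solution is u(x) = x(1 - x)/2
    ```

    With omega = 1 this is plain Liebmann (Gauss-Seidel); the over-relaxation factor is what turns an O(n^2)-iteration scheme into an O(n) one, a decisive saving on a machine like the IBM 704.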

  2. Centripetal Acceleration: Often Forgotten or Misinterpreted

    ERIC Educational Resources Information Center

    Singh, Chandralekha

    2009-01-01

    Acceleration is a fundamental concept in physics which is taught in mechanics at all levels. Here, we discuss some challenges in teaching this concept effectively when the path along which the object is moving has a curvature and centripetal acceleration is present. We discuss examples illustrating that both physics teachers and students have…

  3. 29 CFR 1910.144 - Safety color code for marking physical hazards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 5 2010-07-01 2010-07-01 false Safety color code for marking physical hazards. 1910.144... § 1910.144 Safety color code for marking physical hazards. (a) Color identification—(1) Red. Red shall be... basic color for designating caution and for marking physical hazards such as: Striking against...

  4. Ultra-High Gradient Channeling Acceleration in Nanostructures: Design/Progress of Proof-of-Concept (POC) Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shin, Young Min; Green, A.; Lumpkin, A. H.

    2016-09-16

    A short bunch of relativistic particles or a short-pulse laser perturbs the density state of conduction electrons in a solid crystal and excites wakefields along atomic lattices in a crystal. Under a coupling condition the wakes, if excited, can accelerate channeling particles with TeV/m acceleration gradients, in principle, since the density of charge carriers (conduction electrons) in solids, n0 ≈ 10^20 - 10^23 cm^-3, is significantly higher than what can be obtained in gaseous plasma. Nanostructures have some advantages over crystals for channeling applications of high power beams. The dechanneling rate can be reduced and the beam acceptance increased by the large size of the channels. For beam-driven acceleration, a bunch length with a sufficient charge density would need to be in the range of the plasma wavelength to properly excite plasma wakefields, and channeled particle acceleration with the wakefields must occur before the ions in the lattices move beyond the restoring threshold. In the case of the excitation by short laser pulses, the dephasing length is appreciably increased with the larger channel, which enables channeled particles to gain sufficient amounts of energy. This paper describes simulation analyses on beam- and laser (X-ray)-driven accelerations in effective nanotube models obtained from Vsim and EPOCH codes. Experimental setups to detect wakefields are also outlined with accelerator facilities at Fermilab and NIU. In the FAST facility, the electron beamline was successfully commissioned at 50 MeV and it is being upgraded toward higher energies for electron accelerator R&D. The 50 MeV injector beamline of the facility is used for X-ray crystal-channeling radiation with a diamond target. It has been proposed to utilize the same diamond crystal for a channeling acceleration POC test. Another POC experiment is also designed for the NIU accelerator lab with time-resolved electron diffraction. 
    Recently, a stable generation of single-cycle laser pulses with tens of Petawatt power based on the thin film compression (TFC) technique has been investigated for target normal sheath acceleration (TNSA) and radiation pressure acceleration (RPA). The experimental plan with a nanometer foil is discussed with an available test facility such as Extreme Light Infrastructure - Nuclear Physics (ELI-NP).

  5. Ultra-high gradient channeling acceleration in nanostructures: Design/progress of proof-of-concept (POC) experiments

    NASA Astrophysics Data System (ADS)

    Shin, Y. M.; Green, A.; Lumpkin, A. H.; Thurman-Keup, R. M.; Shiltsev, V.; Zhang, X.; Farinella, D. M.-A.; Taborek, P.; Tajima, T.; Wheeler, J. A.; Mourou, G.

    2017-03-01

    A short bunch of relativistic particles, or a short-pulse laser, perturbs the density state of conduction electrons in a solid crystal and excites wakefields along atomic lattices in a crystal. Under a coupling condition between a driver and plasma, the wakes, if excited, can accelerate channeling particles with TeV/m acceleration gradients [1], in principle, since the density of charge carriers (conduction electrons) in solids, n0 = 10^20 - 10^23 cm^-3, is significantly higher than what can be obtained in gaseous plasma. Nanostructures have some advantages over crystals for channeling applications of high power beams. The de-channeling rate can be reduced and the beam acceptance increased by the large size of the channels. For beam-driven acceleration, a bunch length with a sufficient charge density would need to be in the range of the plasma wavelength to properly excite plasma wakefields, and channeled particle acceleration with the wakefields must occur before the ions in the lattices move beyond the restoring threshold. In the case of the excitation by short laser pulses, the dephasing length is appreciably increased with the larger channel, which enables channeled particles to gain sufficient amounts of energy. This paper describes simulation analyses on beam- and laser (X-ray)-driven accelerations in effective nanotube models obtained from the Vsim and EPOCH codes. Experimental setups to detect wakefields are also outlined with accelerator facilities at Fermilab and Northern Illinois University (NIU). In the FAST facility, the electron beamline was successfully commissioned at 50 MeV, and it is being upgraded toward higher energies for electron accelerator R&D. The 50 MeV injector beamline of the facility is used for X-ray crystal-channeling radiation with a diamond target. It has been proposed to utilize the same diamond crystal for a channeling acceleration proof-of-concept (POC) test. 
Another POC experiment is also designed for the NIU accelerator lab with time-resolved electron diffraction. Recently, a stable generation of single-cycle laser pulses with tens of Petawatt power based on the thin film compression (TFC) technique has been investigated for target normal sheath acceleration (TNSA) and radiation pressure acceleration (RPA). The experimental plan with a nanometer foil is discussed with an available test facility such as Extreme Light Infrastructure - Nuclear Physics (ELI-NP).
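
    The requirement that the bunch length be "in the range of the plasma wavelength" can be quantified: at solid-state electron densities the plasma wavelength is of order a micron, far shorter than in gaseous plasma. A quick estimate using the standard cold-plasma formula (the sample density is illustrative):

    ```python
    import math

    E_CHARGE = 1.602176634e-19    # elementary charge, C
    EPS0 = 8.8541878128e-12       # vacuum permittivity, F/m
    M_E = 9.1093837015e-31        # electron mass, kg
    C_LIGHT = 2.99792458e8        # speed of light, m/s

    def plasma_wavelength_m(n_e_per_cm3):
        """lambda_p = 2*pi*c / omega_p with omega_p = sqrt(n e^2 / (eps0 m_e))."""
        n = n_e_per_cm3 * 1e6     # convert cm^-3 -> m^-3
        omega_p = math.sqrt(n * E_CHARGE ** 2 / (EPS0 * M_E))
        return 2.0 * math.pi * C_LIGHT / omega_p

    # Illustrative solid-state density: n ~ 1e21 cm^-3 gives lambda_p ~ 1 micron,
    # so the drive bunch or pulse must be of micron length or shorter.
    lam = plasma_wavelength_m(1e21)
    ```

    The gradient scales with sqrt(n), which is why the 10^20-10^23 cm^-3 carrier densities quoted above translate into TeV/m-scale fields in principle.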

  6. Corrigendum to “Accelerated materials evaluation for nuclear applications” [J. Nucl. Mater. 488 (2017) 46–62]

    DOE PAGES

    Griffiths, Malcolm; Walters, L.; Greenwood, L. R.; ...

    2017-09-21

    The original article addresses the opportunities and complexities of using materials test reactors with high neutron fluxes to perform accelerated studies of material aging in power reactors operating at lower neutron fluxes and with different neutron flux spectra. Radiation damage and gas production in different reactors have been compared using the SPECTER code. This code provides a common standard from which to compare neutron damage data generated by different research groups using a variety of reactors. This Corrigendum identifies a few typographical errors. Tables 2 and 3 are included in revised form.

  7. The Mystery Behind the Code: Differentiated Instruction with Quick Response Codes in Secondary Physical Education

    ERIC Educational Resources Information Center

    Adkins, Megan; Wajciechowski, Misti R.; Scantling, Ed

    2013-01-01

    Quick response codes, better known as QR codes, are small barcodes scanned to receive information about a specific topic. This article explains QR code technology and the utility of QR codes in the delivery of physical education instruction. Consideration is given to how QR codes can be used to accommodate learners of varying ability levels as…

  8. Future HEP Accelerators: The US Perspective

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhat, Pushpalatha; Shiltsev, Vladimir

    2015-11-02

    Accelerator technology has advanced tremendously since the introduction of accelerators in the 1930s, and particle accelerators have become indispensable instruments in high energy physics (HEP) research to probe Nature at smaller and smaller distances. At present, accelerator facilities can be classified into Energy Frontier colliders that enable direct discoveries and studies of high mass scale particles and Intensity Frontier accelerators for exploration of extremely rare processes, usually at relatively low energies. The near term strategies of the global energy frontier particle physics community are centered on fully exploiting the physics potential of the Large Hadron Collider (LHC) at CERN through its high-luminosity upgrade (HL-LHC), while the intensity frontier HEP research is focused on studies of neutrinos at the MW-scale beam power accelerator facilities, such as Fermilab Main Injector with the planned PIP-II SRF linac project. A number of next generation accelerator facilities have been proposed and are currently under consideration for the medium- and long-term future programs of accelerator-based HEP research. In this paper, we briefly review the post-LHC energy frontier options, both for lepton and hadron colliders in various regions of the world, as well as possible future intensity frontier accelerator facilities.

  9. Application of Intel Many Integrated Core (MIC) accelerators to the Pleim-Xiu land surface scheme

    NASA Astrophysics Data System (ADS)

    Huang, Melin; Huang, Bormin; Huang, Allen H.

    2015-10-01

    The land-surface model (LSM) is one of the physics processes in the Weather Research and Forecasting (WRF) model. The LSM takes atmospheric information from the surface layer scheme, radiative forcing from the radiation scheme, and precipitation forcing from the microphysics and convective schemes, together with internal information on the land's state variables and land-surface properties. The LSM provides heat and moisture fluxes over land and sea-ice points. The Pleim-Xiu (PX) scheme is one such LSM. The PX LSM features three pathways for moisture fluxes: evapotranspiration, soil evaporation, and evaporation from wet canopies. To accelerate this scheme, we employ the Intel Xeon Phi Many Integrated Core (MIC) architecture, a many-core coprocessor designed for efficient parallelization and vectorization. Our results show that the MIC-based optimization of this scheme running on a Xeon Phi coprocessor 7120P improves performance by 2.3x and 11.7x compared to the original code running on one CPU socket (eight cores) and on one CPU core, respectively, of an Intel Xeon E5-2670.
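    The vectorization payoff reported above comes from expressing per-grid-point physics as whole-array operations. A toy, hypothetical bulk-evaporation flux (not the actual PX formulation; all names are invented) evaluated over an entire grid at once illustrates the style:

```python
import numpy as np

def moisture_flux(beta, q_sat, q_air, r_a):
    """Toy bulk evaporation flux E = beta * (q_sat - q_air) / r_a,
    applied elementwise to whole model grids at once rather than
    point-by-point in a scalar loop."""
    return beta * (q_sat - q_air) / r_a

# One call processes every grid cell; a compiler (or NumPy here)
# can map this directly onto wide SIMD lanes.
grid = (4, 4)
flux = moisture_flux(np.full(grid, 0.5),    # soil moisture factor
                     np.full(grid, 0.02),   # saturation humidity, kg/kg
                     np.full(grid, 0.01),   # air humidity, kg/kg
                     np.full(grid, 50.0))   # aerodynamic resistance, s/m
```
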

  10. CORESAFE: A Formal Approach against Code Replacement Attacks on Cyber Physical Systems

    DTIC Science & Technology

    2018-04-19

    AFRL-AFOSR-JP-TR-2018-0035. CORESAFE: A Formal Approach against Code Replacement Attacks on Cyber Physical Systems. Sandeep Shukla, Indian Institute of Technology Kanpur, India. Final report for AOARD Grant FA2386-16-1-4099.

  11. A comparison of native GPU computing versus OpenACC for implementing flow-routing algorithms in hydrological applications

    NASA Astrophysics Data System (ADS)

    Rueda, Antonio J.; Noguera, José M.; Luque, Adrián

    2016-02-01

    In recent years GPU computing has gained wide acceptance as a simple, low-cost solution for speeding up computationally expensive processing in many scientific and engineering applications. However, in most cases accelerating a traditional CPU implementation on a GPU is a non-trivial task that requires a thorough refactoring of the code and specific optimizations that depend on the architecture of the device. OpenACC is a promising technology that aims to reduce the effort required to accelerate C/C++/Fortran code on an attached multicore device: in principle, the CPU code only has to be augmented with a few compiler directives that identify the regions to be accelerated and the way data is moved between the CPU and GPU. Its potential benefits are multiple: better code readability, less development time, lower risk of errors, and less dependency on the underlying architecture and the future evolution of GPU technology. Our aim in this work is to evaluate the pros and cons of OpenACC against native GPU implementations in computationally expensive hydrological applications, using the classic D8 algorithm of O'Callaghan and Mark for river-network extraction as a case study. We implemented the flow accumulation step of this algorithm on the CPU, with OpenACC, and in two different CUDA versions, comparing the length and complexity of the code and its performance on different datasets. We find that although OpenACC cannot match the performance of an optimized CUDA implementation (about 3.5x slower on average), it provides a significant performance improvement over a CPU implementation (2-6x) with by far simpler code and less implementation effort.
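    For readers unfamiliar with the case study, the D8 flow-accumulation step can be sketched as a short serial reference (a sketch under assumed conventions: each cell drains to its steepest-descent neighbor among the eight adjacent cells, and a cell's accumulation counts itself plus everything upstream; the parallel versions compared in the paper compute the same quantity):

```python
import numpy as np

def d8_flow_accumulation(dem):
    """Serial D8 flow accumulation over a digital elevation model."""
    rows, cols = dem.shape
    nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
            (0, 1), (1, -1), (1, 0), (1, 1)]
    # Flow direction: each cell points at its steepest-descent neighbor
    # (None for pits and flat cells, which drain nowhere).
    target = {}
    for i in range(rows):
        for j in range(cols):
            best, steepest = None, 0.0
            for di, dj in nbrs:
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols:
                    dist = (di * di + dj * dj) ** 0.5
                    slope = (dem[i, j] - dem[ni, nj]) / dist
                    if slope > steepest:
                        best, steepest = (ni, nj), slope
            target[(i, j)] = best
    # Accumulation: visit cells from high to low elevation, so every
    # upstream contribution is complete before it is passed downstream.
    acc = np.ones((rows, cols))
    for cell in sorted(target, key=lambda c: -dem[c]):
        t = target[cell]
        if t is not None:
            acc[t] += acc[cell]
    return acc
```

In the OpenACC variant evaluated by the authors, loops like the ones above are annotated with compiler directives instead of being rewritten as CUDA kernels, which is the source of the code-simplicity advantage they report.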

  12. Turbulent Heating and Wave Pressure in Solar Wind Acceleration Modeling: New Insights to Empirical Forecasting of the Solar Wind

    NASA Astrophysics Data System (ADS)

    Woolsey, L. N.; Cranmer, S. R.

    2013-12-01

    The study of solar wind acceleration has made several important advances recently due to improvements in modeling techniques. Existing codes and simulations test the competing theories for coronal heating, which include reconnection/loop-opening (RLO) models and wave/turbulence-driven (WTD) models. In order to compare and contrast the validity of these theories, we need flexible tools that predict the emergent solar wind properties from a wide range of coronal magnetic field structures such as coronal holes, pseudostreamers, and helmet streamers. ZEPHYR (Cranmer et al. 2007) is a one-dimensional magnetohydrodynamics code that includes Alfven wave generation and reflection and the resulting turbulent heating to accelerate solar wind in open flux tubes. We present ZEPHYR output for a wide range of magnetic field geometries to show the effect of the magnetic field profiles on wind properties. We also investigate the competing acceleration mechanisms in ZEPHYR to determine the relative importance of increased gas pressure from turbulent heating and the separate pressure source from the Alfven waves. To do so, we developed a code, TEMPEST, that will become publicly available for solar wind prediction. TEMPEST provides an outflow solution based on only one input: the magnetic field strength as a function of height above the photosphere. It uses correlations found in ZEPHYR between the magnetic field strength at the source surface and the temperature profile of the outflow solution to compute the wind speed profile from the increased gas pressure due to turbulent heating. With this initial solution, TEMPEST then adds the Alfven wave pressure term to the modified Parker equation and iterates to find a stable solution for the wind speed. This code can therefore predict the wind speeds that will be observed at 1 AU based on extrapolations from magnetogram data, providing a useful tool for empirical forecasting of the solar wind.
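    The abstract does not give TEMPEST's equations, but the classic isothermal Parker wind equation that it modifies can be solved transonically in a few lines (a sketch only: c_s is the isothermal sound speed, rc the critical radius, and the bisection brackets are assumptions):

```python
import math

def parker_wind_speed(r_over_rc, c_s=1.0):
    """Solve the isothermal Parker wind relation
        (v/c_s)^2 - ln (v/c_s)^2 = 4 ln(r/rc) + 4 rc/r - 3
    for v on the transonic branch (subsonic below rc, supersonic above),
    by bisection in x = (v/c_s)^2."""
    rhs = 4.0 * math.log(r_over_rc) + 4.0 / r_over_rc - 3.0
    g = lambda x: x - math.log(x) - rhs
    # g(x) = x - ln x has its minimum at x = 1 (the sonic point), so the
    # supersonic root lies in [1, inf) and the subsonic root in (0, 1].
    lo, hi = (1.0, 1e6) if r_over_rc >= 1.0 else (1e-12, 1.0)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return c_s * math.sqrt(0.5 * (lo + hi))
```

TEMPEST's iteration adds a wave-pressure term to this balance; the sketch above shows only the underlying Parker structure (v = c_s exactly at the critical radius, accelerating outward beyond it).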

  13. Virtual Observation System for Earth System Model: An Application to ACME Land Model Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Dali; Yuan, Fengming; Hernandez, Benjamin

    Investigating and evaluating physical-chemical-biological processes within an Earth system model (ESM) can be very challenging due to the complexity of both model design and software implementation. A virtual observation system (VOS) is presented to enable interactive observation of these processes during system simulation. Based on advanced computing technologies, such as compiler-based software analysis, automatic code instrumentation, and high-performance data transport, the VOS provides run-time observation capability, in-situ data analytics for Earth system model simulation, and model behavior adjustment opportunities through simulation steering. A VOS for a terrestrial land model simulation within the Accelerated Climate Modeling for Energy model is also presented to demonstrate the implementation details and system innovations.

  14. Virtual Observation System for Earth System Model: An Application to ACME Land Model Simulations

    DOE PAGES

    Wang, Dali; Yuan, Fengming; Hernandez, Benjamin; ...

    2017-01-01

    Investigating and evaluating physical-chemical-biological processes within an Earth system model (ESM) can be very challenging due to the complexity of both model design and software implementation. A virtual observation system (VOS) is presented to enable interactive observation of these processes during system simulation. Based on advanced computing technologies, such as compiler-based software analysis, automatic code instrumentation, and high-performance data transport, the VOS provides run-time observation capability, in-situ data analytics for Earth system model simulation, and model behavior adjustment opportunities through simulation steering. A VOS for a terrestrial land model simulation within the Accelerated Climate Modeling for Energy model is also presented to demonstrate the implementation details and system innovations.

  15. Transcoding method from H.264/AVC to high efficiency video coding based on similarity of intraprediction, interprediction, and motion vector

    NASA Astrophysics Data System (ADS)

    Liu, Mei-Feng; Zhong, Guo-Yun; He, Xiao-Hai; Qing, Lin-Bo

    2016-09-01

    Currently, most video resources online are encoded in the H.264/AVC format. More fluent video transmission can be obtained if these resources are encoded in the newest international video coding standard: high efficiency video coding (HEVC). In order to improve video transmission and storage online, a transcoding method from H.264/AVC to HEVC is proposed. In this transcoding algorithm, the coding information of intraprediction, interprediction, and motion vectors (MVs) in the H.264/AVC video stream is used to accelerate the coding in HEVC. It is found through experiments that the regions interpredicted in HEVC overlap those interpredicted in H.264/AVC. Therefore, intraprediction can be skipped for HEVC regions that were interpredicted in H.264/AVC, reducing coding complexity. Several macroblocks in H.264/AVC are combined into one prediction unit (PU) in HEVC when the MV difference between two of the macroblocks is lower than a threshold; the method then selects only one coding unit depth and one PU mode, further reducing coding complexity. An MV interpolation method for the combined PU in HEVC is proposed based on the areas and distances between the center of each macroblock in H.264/AVC and that of the PU in HEVC. The predicted MV accelerates motion estimation in HEVC coding. Simulation results show that the proposed algorithm achieves significant coding-time reduction with little rate-distortion loss, compared to existing transcoding algorithms and normal HEVC coding.
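    The area- and distance-based MV interpolation for a combined PU amounts to a weighted average of the constituent macroblock MVs (a hypothetical formulation; the paper's exact weights derived from overlap areas and center distances are not given in the abstract, so a generic weight parameter stands in for them):

```python
def interpolate_mv(mb_mvs):
    """Weighted average of candidate macroblock motion vectors.

    mb_mvs: list of ((mvx, mvy), weight) pairs, where each weight would
    encode the macroblock's overlap area with the PU and the distance
    between their centers, as described in the abstract.
    """
    wsum = sum(w for _, w in mb_mvs)
    mvx = sum(mv[0] * w for mv, w in mb_mvs) / wsum
    mvy = sum(mv[1] * w for mv, w in mb_mvs) / wsum
    return (mvx, mvy)
```

The interpolated MV then seeds the HEVC motion search, which is where the reported coding-time reduction comes from.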

  16. Sensitivity analysis of tall buildings in Semarang, Indonesia due to fault earthquakes with maximum 7 Mw

    NASA Astrophysics Data System (ADS)

    Partono, Windu; Pardoyo, Bambang; Atmanto, Indrastono Dwi; Azizah, Lisa; Chintami, Rouli Dian

    2017-11-01

    Faults are among the most dangerous earthquake sources and can cause building failure. Many buildings collapsed in the Yogyakarta (2006) and Pidie (2016) fault-source earthquakes, which had maximum magnitudes of 6.4 Mw. Following the research conducted by the Team for Revision of Seismic Hazard Maps of Indonesia in 2010 and 2016, the Lasem, Demak, and Semarang faults are the three earthquake sources closest to Semarang, and the ground motion from these three sources should be taken into account for structural design and evaluation. Most tall buildings in Semarang, of minimum 40 m height, were designed and constructed following the 2002 and 2012 Indonesian Seismic Codes. This paper presents the results of a sensitivity analysis focused on predicting the deformation and inter-story drift of existing tall buildings within the city under fault earthquakes. The analysis was performed by conducting dynamic structural analyses of eight tall buildings using modified acceleration time histories, calculated for the three fault earthquakes with magnitudes from 6 Mw to 7 Mw; modified time histories were used because recorded time-history data for these three faults are inadequate. The sensitivity of a building to an earthquake can be assessed by comparing the surface response spectrum calculated using the seismic code with the surface response spectrum calculated from the acceleration time histories of a specific earthquake event: if the code spectrum is greater, the structure should be stable enough to resist the earthquake force.

  17. Elementary particle physics

    NASA Technical Reports Server (NTRS)

    Perkins, D. H.

    1986-01-01

    Elementary particle physics is discussed. Status of the Standard Model of electroweak and strong interactions; phenomena beyond the Standard Model; new accelerator projects; and possible contributions from non-accelerator experiments are examined.

  18. CFD Code Survey for Thrust Chamber Application

    NASA Technical Reports Server (NTRS)

    Gross, Klaus W.

    1990-01-01

    In the quest to find analytical reference codes, responses from a questionnaire are presented that portray the current computational fluid dynamics (CFD) program status and capability at various organizations for characterizing liquid rocket thrust chamber flow fields. Sample cases are identified to examine the capability, operational conditions, and accuracy of the codes. Evaluation criteria are proposed for selecting the programs best suited for accelerated improvement.

  19. High fidelity 3-dimensional models of beam-electron cloud interactions in circular accelerators

    NASA Astrophysics Data System (ADS)

    Feiz Zarrin Ghalam, Ali

    Electron cloud is a low-density electron profile created inside the vacuum chamber of circular machines operating with positively charged beams. The electron cloud limits the peak current of the beam and degrades beam quality through luminosity degradation, emittance growth, and head-tail or bunch-to-bunch instabilities. The adverse effects of the electron cloud on long-term beam dynamics become more important as beams reach higher and higher energies. This problem has become a major concern in the design of many future circular machines, such as the Large Hadron Collider (LHC) under construction at the European Center for Nuclear Research (CERN). Because of the importance of the problem, several simulation models have been developed to model the long-term beam-electron cloud interaction. These models are based on the "single kick approximation", in which the electron cloud is assumed to be concentrated in one thin slab around the ring. While this model is computationally efficient, it does not reflect the real physical situation, because the forces from the electron cloud on the beam are nonlinear, contrary to the model's assumption. To address this limitation of existing codes, this thesis develops a new model that continuously follows the beam-electron cloud interaction. The code is derived from a 3-D parallel Particle-In-Cell (PIC) model (QuickPIC) originally used for plasma wakefield acceleration research. To fit the original model to the circular-machine environment, the betatron and synchrotron equations of motion were added to the code, and the effects of chromaticity and the lattice structure were included. QuickPIC is then benchmarked against one of the codes based on the single kick approximation (HEAD-TAIL) for the transverse spot size of the beam in the CERN LHC. The growth predicted by QuickPIC is less than that predicted by HEAD-TAIL.
The code is then used to investigate the effect of electron cloud image charges on long-term beam dynamics, particularly on the transverse tune shift of the beam in the CERN Super Proton Synchrotron (SPS) ring. The force from the electron cloud image charges on the beam cancels the force due to the cloud compression formed on the beam axis, and therefore the tune shift is mainly due to the uniform electron cloud density. (Abstract shortened by UMI.)

  20. Jerome Lewis Duggan: A Nuclear Physicist and a Well-Known, Six-Decade Accelerator Application Conference (CAARI) Organizer

    NASA Astrophysics Data System (ADS)

    Del McDaniel, Floyd; Doyle, Barney L.

    Jerry Duggan was an experimental MeV-accelerator-based nuclear and atomic physicist who, over the past few decades, played a key role in the important transition of this field from basic to applied physics. His fascination for and application of particle accelerators spanned almost 60 years, and led to important discoveries in the following fields: accelerator-based analysis (accelerator mass spectrometry, ion beam techniques, nuclear-based analysis, nuclear microprobes, neutron techniques); accelerator facilities, stewardship, and technology development; accelerator applications (industrial, medical, security and defense, and teaching with accelerators); applied research with accelerators (advanced synthesis and modification, radiation effects, nanosciences and technology); physics research (atomic and molecular physics, and nuclear physics); and many other areas and applications. Here we describe Jerry’s physics education at the University of North Texas (B. S. and M. S.) and Louisiana State University (Ph.D.). We also discuss his research at UNT, LSU, and Oak Ridge National Laboratory, his involvement with the industrial aspects of accelerators, and his impact on many graduate students, colleagues at UNT and other universities, national laboratories, and industry and acquaintances around the world. Along the way, we found it hard not to also talk about his love of family, sports, fishing, and other recreational activities. While these were significant accomplishments in his life, Jerry will be most remembered for his insight in starting and his industry in maintaining and growing what became one of the most diverse accelerator conferences in the world — the International Conference on the Application of Accelerators in Research and Industry, or what we all know as CAARI. 
Through this conference, which he ran almost single-handed for decades, Jerry came to know, and became well known by, literally thousands of atomic and nuclear physicists, accelerator engineers and vendors, medical doctors, cultural heritage experts... the list goes on and on. While thousands of his acquaintances already miss Jerry, this is being felt most by his family and us (B.D. and F.D.M).

  1. Jerome Lewis Duggan: A Nuclear Physicist and a Well-Known, Six-Decade Accelerator Application Conference (CAARI) Organizer

    NASA Astrophysics Data System (ADS)

    Del McDaniel, Floyd; Doyle, Barney L.

    Jerry Duggan was an experimental MeV-accelerator-based nuclear and atomic physicist who, over the past few decades, played a key role in the important transition of this field from basic to applied physics. His fascination for and application of particle accelerators spanned almost 60 years, and led to important discoveries in the following fields: accelerator-based analysis (accelerator mass spectrometry, ion beam techniques, nuclear-based analysis, nuclear microprobes, neutron techniques); accelerator facilities, stewardship, and technology development; accelerator applications (industrial, medical, security and defense, and teaching with accelerators); applied research with accelerators (advanced synthesis and modification, radiation effects, nanosciences and technology); physics research (atomic and molecular physics, and nuclear physics); and many other areas and applications. Here we describe Jerry's physics education at the University of North Texas (B. S. and M. S.) and Louisiana State University (Ph.D.). We also discuss his research at UNT, LSU, and Oak Ridge National Laboratory, his involvement with the industrial aspects of accelerators, and his impact on many graduate students, colleagues at UNT and other universities, national laboratories, and industry and acquaintances around the world. Along the way, we found it hard not to also talk about his love of family, sports, fishing, and other recreational activities. While these were significant accomplishments in his life, Jerry will be most remembered for his insight in starting and his industry in maintaining and growing what became one of the most diverse accelerator conferences in the world — the International Conference on the Application of Accelerators in Research and Industry, or what we all know as CAARI. 
Through this conference, which he ran almost single-handed for decades, Jerry came to know, and became well known by, literally thousands of atomic and nuclear physicists, accelerator engineers and vendors, medical doctors, cultural heritage experts... the list goes on and on. While thousands of his acquaintances already miss Jerry, this is being felt most by his family and us (B.D. and F.D.M).

  2. Microwave and Electron Beam Computer Programs

    DTIC Science & Technology

    1988-06-01

    Research (ONR). SCRIBE was adapted by MRC from the Stanford Linear Accelerator Center beam trajectory program, EGUN. ... achieved with SCRIBE. It is a version of the Stanford Linear Accelerator (SLAC) code EGUN (Ref. 8), extensively modified by MRC for research on

  3. High energy density physics issues related to Future Circular Collider

    NASA Astrophysics Data System (ADS)

    Tahir, N. A.; Burkart, F.; Schmidt, R.; Shutov, A.; Wollmann, D.; Piriz, A. R.

    2017-07-01

    A design study for a post-Large Hadron Collider accelerator named the Future Circular Collider (FCC) is being carried out by the international scientific community. A complete design report is expected to be ready by spring 2018. The FCC will accelerate two counter-rotating beams of 50 TeV protons in a tunnel with a length (circumference) of 100 km. Each beam will comprise 10,600 proton bunches, each with an intensity of 10^11 protons. The bunch length is 0.5 ns, and neighboring bunches are separated by 25 ns. Although there is an option for 5 ns bunch separation as well, in the present studies we consider only the former case. The total energy stored in each FCC beam is about 8.5 GJ, which is equivalent to the kinetic energy of an Airbus A380 (560 t) flying at a speed of 850 km/h. Machine protection is a very important issue when operating with such powerful beams, and it is important to estimate the damage caused to equipment and accelerator components by the accidental release of part or all of a beam at a given point. For this purpose, we carried out numerical simulations of the full impact of one FCC beam on an extended solid copper target. These simulations were done by iteratively employing an energy deposition code, FLUKA, and a two-dimensional hydrodynamic code, BIG2. The study shows that although the static range of a single FCC proton and its shower is about 1.5 m in solid copper, the entire beam will penetrate around 350 m into the target. This substantial increase in range is due to hydrodynamic tunneling of the beam. Our calculations also show that a large part of the target will be converted into high energy density matter, including warm dense matter and strongly coupled plasmas.
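    The quoted 8.5 GJ of stored energy per beam follows directly from the bunch parameters above (a quick arithmetic check, not taken from the paper):

```python
E_PROTON_EV       = 50e12            # proton energy: 50 TeV
PROTONS_PER_BUNCH = 1e11             # bunch intensity
BUNCHES           = 10_600           # bunches per beam
EV_TO_J           = 1.602176634e-19  # joules per electron-volt

# Total stored energy per beam: energy/proton * protons/bunch * bunches
stored_energy_J = E_PROTON_EV * EV_TO_J * PROTONS_PER_BUNCH * BUNCHES
```

This evaluates to about 8.49e9 J, i.e. roughly 8.5 GJ per beam, as stated in the abstract.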

  4. Runtime and Architecture Support for Efficient Data Exchange in Multi-Accelerator Applications.

    PubMed

    Cabezas, Javier; Gelado, Isaac; Stone, John E; Navarro, Nacho; Kirk, David B; Hwu, Wen-Mei

    2015-05-01

    Heterogeneous parallel computing applications often process large data sets that require multiple GPUs to jointly meet their needs for physical memory capacity and compute throughput. However, the lack of high-level abstractions in previous heterogeneous parallel programming models forces programmers to resort to multiple code versions, complex data copy steps, and synchronization schemes when exchanging data between multiple GPU devices, which results in high software development cost, poor maintainability, and even poor performance. This paper describes the HPE runtime system, and the associated architecture support, which enables a simple, efficient programming interface for exchanging data between multiple GPUs through either interconnects or cross-node network interfaces. The runtime and architecture support presented in this paper can also be used to support other types of accelerators. We show that the simplified programming interface reduces programming complexity. The research presented in this paper started in 2009. It has been implemented and tested extensively in several generations of HPE runtime systems and has been adopted into the NVIDIA GPU hardware and drivers for CUDA 4.0 and beyond since 2011. The availability of real hardware that supports key HPE features gives rise to a rare opportunity for studying the effectiveness of the hardware support by running important benchmarks on real runtime and hardware. Experimental results show that in an exemplar heterogeneous system, peer DMA, double-buffering, pinned buffers, and software techniques can improve the inter-accelerator data communication bandwidth by 2×. They can also improve the execution speed by 1.6× for a 3D finite difference, 2.5× for 1D FFT, and 1.6× for merge sort, all measured on real hardware.
The proposed architecture support enables the HPE runtime to transparently deploy these optimizations under simple portable user code, allowing system designers to freely employ devices of different capabilities. We further argue that simple interfaces such as HPE are needed for most applications to benefit from advanced hardware features in practice.
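    The double-buffering idea credited with part of the bandwidth gain is not CUDA-specific: staging data through a two-slot buffer lets the transfer of chunk i+1 overlap with computation on chunk i. A generic illustration (the function names are invented for this sketch and are not HPE's API):

```python
import threading
import queue

def double_buffered_sum(chunks, transfer, compute):
    """Pipeline 'transfer' and 'compute' over a stream of chunks.

    A 2-slot queue plays the role of the double buffer: the producer
    thread stages (transfers) the next chunk while the main thread
    computes on the previous one.
    """
    buffers = queue.Queue(maxsize=2)   # the two buffer slots

    def producer():
        for chunk in chunks:
            buffers.put(transfer(chunk))   # "device transfer" of next chunk
        buffers.put(None)                  # end-of-stream sentinel

    t = threading.Thread(target=producer)
    t.start()
    total = 0
    while (buf := buffers.get()) is not None:
        total += compute(buf)              # overlaps with the next transfer
    t.join()
    return total
```

With `transfer=list` (a copy standing in for a DMA) and `compute=sum`, the pipeline produces the same result as a sequential loop while keeping both stages busy.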

  5. Runtime and Architecture Support for Efficient Data Exchange in Multi-Accelerator Applications

    PubMed Central

    Cabezas, Javier; Gelado, Isaac; Stone, John E.; Navarro, Nacho; Kirk, David B.; Hwu, Wen-mei

    2014-01-01

    Heterogeneous parallel computing applications often process large data sets that require multiple GPUs to jointly meet their needs for physical memory capacity and compute throughput. However, the lack of high-level abstractions in previous heterogeneous parallel programming models forces programmers to resort to multiple code versions, complex data copy steps, and synchronization schemes when exchanging data between multiple GPU devices, which results in high software development cost, poor maintainability, and even poor performance. This paper describes the HPE runtime system, and the associated architecture support, which enables a simple, efficient programming interface for exchanging data between multiple GPUs through either interconnects or cross-node network interfaces. The runtime and architecture support presented in this paper can also be used to support other types of accelerators. We show that the simplified programming interface reduces programming complexity. The research presented in this paper started in 2009. It has been implemented and tested extensively in several generations of HPE runtime systems and has been adopted into the NVIDIA GPU hardware and drivers for CUDA 4.0 and beyond since 2011. The availability of real hardware that supports key HPE features gives rise to a rare opportunity for studying the effectiveness of the hardware support by running important benchmarks on real runtime and hardware. Experimental results show that in an exemplar heterogeneous system, peer DMA, double-buffering, pinned buffers, and software techniques can improve the inter-accelerator data communication bandwidth by 2×. They can also improve the execution speed by 1.6× for a 3D finite difference, 2.5× for 1D FFT, and 1.6× for merge sort, all measured on real hardware.
The proposed architecture support enables the HPE runtime to transparently deploy these optimizations under simple portable user code, allowing system designers to freely employ devices of different capabilities. We further argue that simple interfaces such as HPE are needed for most applications to benefit from advanced hardware features in practice. PMID:26180487

  6. Accelerated GPU based SPECT Monte Carlo simulations.

    PubMed

    Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris

    2016-06-07

    Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: 99mTc, 111In, and 131I, using a low-energy high-resolution (LEHR) collimator, a medium-energy general-purpose (MEGP) collimator, and a high-energy general-purpose (HEGP) collimator, respectively. Point source, uniform source, cylindrical phantom, and anthropomorphic phantom acquisitions were simulated using a model of the GE Infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between the GATE and GGEMS platforms, derived from the same cylindrical phantom acquisition, was between 18 and 27 for the different radioisotopes. Moreover, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor of up to 71.
The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational efficiency of SPECT imaging simulations.

  7. The development of a thermal hydraulic feedback mechanism with a quasi-fixed point iteration scheme for control rod position modeling for the TRIGSIMS-TH application

    NASA Astrophysics Data System (ADS)

    Karriem, Veronica V.

    Nuclear reactor design incorporates the study and application of nuclear physics, nuclear thermal hydraulics and nuclear safety. Theoretical models and numerical methods implemented in computer programs are utilized to analyze and design nuclear reactors. The focus of this PhD study is the development of an advanced high-fidelity multi-physics code system to perform reactor core analysis for design and safety evaluations of research TRIGA-type reactors. The fuel management and design code system TRIGSIMS was further developed to fulfill the function of a reactor design and analysis code system for the Pennsylvania State Breazeale Reactor (PSBR). TRIGSIMS, which is currently in use at the PSBR, is a fuel management tool, which incorporates the depletion code ORIGEN-S (part of the SCALE system) and the Monte Carlo neutronics solver MCNP. The diffusion theory code ADMARC-H is used within TRIGSIMS to accelerate the MCNP calculations. It manages the data and fuel isotopic content and stores it for future burnup calculations. The contribution of this work is the development of an improved version of TRIGSIMS, named TRIGSIMS-TH. TRIGSIMS-TH incorporates a thermal hydraulic module based on the advanced sub-channel code COBRA-TF (CTF). CTF provides the temperature feedback needed in the multi-physics calculations as well as the thermal hydraulics modeling capability of the reactor core. The temperature feedback model uses the CTF-provided local moderator and fuel temperatures for the cross-section modeling in the ADMARC-H and MCNP calculations. To perform efficient critical control rod calculations, a methodology for applying a control rod position was implemented in TRIGSIMS-TH, making this code system a modeling and design tool for future core loadings. The new TRIGSIMS-TH is a computer program that interlinks various other functional reactor analysis tools. It consists of MCNP5, ADMARC-H, ORIGEN-S, and CTF.
    CTF was coupled with both MCNP and ADMARC-H to provide the heterogeneous temperature distribution throughout the core. Each of these codes is written in its own computer language, performs its own function, and outputs a set of data. TRIGSIMS-TH provides effective data manipulation and transfer between the different codes. With the implementation of the feedback and control-rod-position modeling methodologies, the TRIGSIMS-TH calculations are more accurate and in better agreement with measured data. The PSBR is unique in many ways and there are no "off-the-shelf" codes which can model this design in its entirety. In particular, the PSBR has an open core design, which is cooled by natural convection. Combining several codes into a unique system brings many challenges. It also requires substantial knowledge of both the operation and the core design of the PSBR. This reactor has been in operation for decades, and there is a fair amount of prior study and development in both PSBR thermal hydraulics and neutronics. Measured data is also available for various core loadings and can be used for validation activities. The previous studies and developments in PSBR modeling also serve as a guide for assessing the findings of the work herein. In order to incorporate new methods and codes into the existing TRIGSIMS, a re-evaluation of various components of the code was performed to assure the accuracy and efficiency of the existing CTF/MCNP5/ADMARC-H multi-physics coupling. A new set of ADMARC-H diffusion coefficients and cross sections was generated using the SERPENT code. This was needed because the previous data had not been generated with thermal hydraulic feedback and the ARO position had been used as the critical rod position. The B4C was re-evaluated for this update. The data exchange between ADMARC-H and MCNP5 was modified. The basic core model was given flexibility to allow for various changes within the core model, and this feature was implemented in TRIGSIMS-TH.
    The PSBR core in the new code model can be expanded and changed. This allows the new code to be used as a modeling tool for the design and analysis of future core loadings.
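The quasi-fixed point iteration named in the title can be sketched generically as an under-relaxed fixed-point loop between a neutronics solve and a thermal hydraulic (TH) update. The toy Doppler-like feedback and power-to-temperature models below are hypothetical placeholders for the MCNP/ADMARC-H and CTF modules; all coefficients are arbitrary.

```python
# Schematic fixed-point iteration between a neutronics solve and a TH
# feedback model. Both physics models are illustrative stand-ins, not the
# TRIGSIMS-TH implementation; units and coefficients are arbitrary.

def neutronics_power(fuel_temp):
    """Toy neutronics: power drops as fuel temperature rises (Doppler-like)."""
    return 100.0 / (1.0 + 0.002 * (fuel_temp - 300.0))

def th_fuel_temp(power):
    """Toy TH model: fuel temperature rises linearly with power."""
    return 300.0 + 5.0 * power

def coupled_solve(tol=1e-8, relax=0.5, max_iter=200):
    temp = 300.0                        # initial fuel temperature guess (K)
    for it in range(max_iter):
        power = neutronics_power(temp)  # neutronics with current feedback
        new_temp = th_fuel_temp(power)  # TH update from the new power
        if abs(new_temp - temp) < tol:
            return power, new_temp, it
        temp += relax * (new_temp - temp)   # under-relaxed fixed-point step
    raise RuntimeError("coupled iteration did not converge")

power, temp, iters = coupled_solve()
print(f"converged in {iters} iterations: P = {power:.2f}, T = {temp:.1f} K")
```

The under-relaxation factor damps the oscillation that a naive Picard iteration exhibits when the feedback is strongly negative.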

  8. Operating experience with a VMEbus multiprocessor system for data acquisition and reduction in nuclear physics

    NASA Astrophysics Data System (ADS)

    Kutt, P. H.; Balamuth, D. P.

    1989-10-01

    Summary form only given, as follows. A multiprocessor system based on commercially available VMEbus components has been developed for the acquisition and reduction of event-mode data in nuclear physics experiments. The system contains seven 68000 CPUs and 14 Mbyte of memory. A minimal operating system handles data transfer and task allocation, and a compiler for a specially designed event analysis language produces code for the processors. The system has been in operation for four years at the University of Pennsylvania Tandem Accelerator Laboratory. Computation rates over three times that of a MicroVAX II have been achieved at a fraction of the cost. The use of WORM optical disks for event recording allows the processing of gigabyte data sets without operator intervention. A more powerful system is being planned which will make use of recently developed RISC (reduced instruction set computer) processors to obtain an order of magnitude increase in computing power per node.

  9. Development of MCNPX-ESUT computer code for simulation of neutron/gamma pulse height distribution

    NASA Astrophysics Data System (ADS)

    Abolfazl Hosseini, Seyed; Vosoughi, Naser; Zangian, Mehdi

    2015-05-01

    In this paper, the development of the MCNPX-ESUT (MCNPX-Energy Engineering of Sharif University of Technology) computer code for simulation of neutron/gamma pulse height distributions is reported. Since liquid organic scintillators like NE-213 are well suited and routinely used for spectrometry in mixed neutron/gamma fields, this type of detector was selected for simulation in the present study. The proposed algorithm for simulation includes four main steps. The first step is the modeling of the neutron/gamma particle transport and their interactions with the materials in the environment and detector volume. In the second step, the number of scintillation photons due to charged particles such as electrons, alphas, protons and carbon nuclei in the scintillator material is calculated. In the third step, the transport of scintillation photons in the scintillator and light guide is simulated. Finally, the resolution corresponding to the experiment is applied in the last step of the simulation. Unlike similar computer codes such as SCINFUL, NRESP7 and PHRESP, the developed computer code is applicable to both neutron and gamma sources. Hence, the discrimination of neutrons and gammas in mixed fields may be performed using the MCNPX-ESUT computer code. The main feature of the MCNPX-ESUT computer code is that the neutron/gamma pulse height simulation may be performed without any post-processing. In the present study, the pulse height distributions due to a monoenergetic neutron/gamma source in an NE-213 detector are simulated using the MCNPX-ESUT computer code. The simulated neutron pulse height distributions are validated through comparison with experimental data (Gohil et al., Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 664 (2012) 304-309) and the results obtained from similar computer codes like SCINFUL, NRESP7 and Geant4.
The simulated gamma pulse height distribution for a 137Cs source is also compared with the experimental data.
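The final step of the four-step algorithm above amounts to smearing each ideal light-output value with an energy-dependent Gaussian. A minimal sketch; the FWHM parameterization and its coefficients are illustrative, not the values used in MCNPX-ESUT or measured for NE-213.

```python
import math
import random

# Energy-dependent Gaussian resolution broadening, mimicking the last step
# of a pulse-height simulation chain. The coefficients (a, b, c) below are
# illustrative placeholders, not fitted detector parameters.

def fwhm(E, a=0.05, b=0.10, c=0.01):
    """FWHM(E) = a + b*sqrt(E + c*E^2), a common scintillator model (MeVee)."""
    return a + b * math.sqrt(E + c * E * E)

FWHM_TO_SIGMA = 2.0 * math.sqrt(2.0 * math.log(2.0))  # FWHM / sigma for a Gaussian

def broaden(events, seed=42):
    """Smear each ideal light-output value with a Gaussian of width FWHM(E)."""
    rng = random.Random(seed)
    return [max(0.0, rng.gauss(E, fwhm(E) / FWHM_TO_SIGMA)) for E in events]

ideal = [1.0] * 10000            # monoenergetic 1 MeVee deposits
smeared = broaden(ideal)
mean = sum(smeared) / len(smeared)
print(f"mean after broadening: {mean:.3f} MeVee")
```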

  10. Convergence acceleration of the Proteus computer code with multigrid methods

    NASA Technical Reports Server (NTRS)

    Demuren, A. O.; Ibraheem, S. O.

    1995-01-01

    This report presents the results of a study to implement convergence acceleration techniques based on the multigrid concept in the two-dimensional and three-dimensional versions of the Proteus computer code. The first section presents a review of the relevant literature on the implementation of the multigrid methods in computer codes for compressible flow analysis. The next two sections present detailed stability analysis of numerical schemes for solving the Euler and Navier-Stokes equations, based on conventional von Neumann analysis and the bi-grid analysis, respectively. The next section presents details of the computational method used in the Proteus computer code. Finally, the multigrid implementation and applications to several two-dimensional and three-dimensional test problems are presented. The results of the present study show that the multigrid method always leads to a reduction in the number of iterations (or time steps) required for convergence. However, there is an overhead associated with the use of multigrid acceleration. The overhead is higher in 2-D problems than in 3-D problems, thus overall multigrid savings in CPU time are in general better in the latter. Savings of about 40-50 percent are typical in 3-D problems, but they are about 20-30 percent in large 2-D problems. The present multigrid method is applicable to steady-state problems and is therefore ineffective in problems with inherently unstable solutions.
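The multigrid idea evaluated in this study, smoothing on the fine grid and solving a coarse-grid correction equation, can be illustrated with a two-grid cycle for the 1D Poisson equation. This is a generic textbook sketch, not the Proteus implementation or its compressible-flow equations.

```python
import numpy as np

# Two-grid correction scheme for -u'' = f on (0, 1) with zero Dirichlet
# boundaries: a minimal illustration of multigrid convergence acceleration.

def jacobi(u, f, h, sweeps, w=2.0 / 3.0):
    """Weighted Jacobi sweeps for (2u_i - u_{i-1} - u_{i+1})/h^2 = f_i."""
    for _ in range(sweeps):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def two_grid(u, f, h):
    u = jacobi(u, f, h, sweeps=3)                      # pre-smoothing
    r = residual(u, f, h)
    rc = np.zeros((r.size + 1) // 2)                   # full-weighting restriction
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    nc, hc = rc.size, 2 * h
    A = (np.diag(2.0 * np.ones(nc - 2)) - np.diag(np.ones(nc - 3), 1)
         - np.diag(np.ones(nc - 3), -1)) / (hc * hc)   # coarse-grid operator
    ec = np.zeros(nc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])            # exact coarse solve
    e = np.zeros_like(u)                               # linear interpolation
    e[::2] = ec
    e[1:-1:2] = 0.5 * (ec[:-1] + ec[1:])
    return jacobi(u + e, f, h, sweeps=3)               # post-smoothing

n = 65
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi ** 2 * np.sin(np.pi * x)                     # exact solution sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = two_grid(u, f, h)
err = np.max(np.abs(u - np.sin(np.pi * x)))
print(f"max error after 10 two-grid cycles: {err:.2e}")
```

The smoother damps the oscillatory error components; the coarse-grid solve removes the smooth ones the smoother handles poorly, which is the mechanism behind the iteration-count reductions reported above.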

  11. GPU Linear Algebra Libraries and GPGPU Programming for Accelerating MOPAC Semiempirical Quantum Chemistry Calculations.

    PubMed

    Maia, Julio Daniel Carvalho; Urquiza Carvalho, Gabriel Aires; Mangueira, Carlos Peixoto; Santana, Sidney Ramos; Cabral, Lucidio Anjos Formiga; Rocha, Gerd B

    2012-09-11

    In this study, we present some modifications in the semiempirical quantum chemistry MOPAC2009 code that accelerate single-point energy calculations (1SCF) of medium-size (up to 2500 atoms) molecular systems using GPU coprocessors and multithreaded shared-memory CPUs. Our modifications consisted of using a combination of highly optimized linear algebra libraries for both CPU (LAPACK and BLAS from Intel MKL) and GPU (MAGMA and CUBLAS) to hasten time-consuming parts of MOPAC such as the pseudodiagonalization, full diagonalization, and density matrix assembling. We have shown that it is possible to obtain large speedups just by using CPU serial linear algebra libraries in the MOPAC code. As a special case, we show a speedup of up to 14 times for a methanol simulation box containing 2400 atoms and 4800 basis functions, with even greater gains in performance when using multithreaded CPUs (2.1 times in relation to the single-threaded CPU code using linear algebra libraries) and GPUs (3.8 times). This degree of acceleration opens new perspectives for modeling larger structures which appear in inorganic chemistry (such as zeolites and MOFs), biochemistry (such as polysaccharides, small proteins, and DNA fragments), and materials science (such as nanotubes and fullerenes). In addition, we believe that this parallel (GPU-GPU) MOPAC code will make it feasible to use semiempirical methods in lengthy molecular simulations using both hybrid QM/MM and QM/QM potentials.
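The hot spots named above, full diagonalization and density-matrix assembly, are dense linear algebra calls, which is why swapping in optimized libraries pays off. A minimal sketch through NumPy's LAPACK/BLAS bindings (a GPU build would route the same operations to MAGMA/cuBLAS); the random symmetric "Fock" matrix and the electron count are placeholders, not chemistry.

```python
import numpy as np

# Dense linear algebra core of an SCF-style step: diagonalize a symmetric
# Fock-like matrix, then assemble the closed-shell density matrix. The
# matrix here is random and illustrative only.

rng = np.random.default_rng(0)
n, n_occ = 200, 50                  # basis size, doubly occupied orbitals
F = rng.standard_normal((n, n))
F = 0.5 * (F + F.T)                 # symmetrize the toy Fock matrix

eps, C = np.linalg.eigh(F)          # full diagonalization (LAPACK, ascending)
C_occ = C[:, :n_occ]                # occupied orbitals: lowest eigenvalues
P = 2.0 * C_occ @ C_occ.T           # closed-shell density matrix (BLAS gemm)

print(f"electrons: {np.trace(P):.1f}")
```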

  12. Activation Assessment of the Soil Around the ESS Accelerator Tunnel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rakhno, I. L.; Mokhov, N. V.; Tropin, I. S.

    Activation of the soil surrounding the ESS accelerator tunnel, calculated with the MARS15 code, is presented. A detailed composition of the soil, comprising about 30 different chemical elements, is considered. Spatial distributions of the produced activity are provided in both the transverse and longitudinal directions. A realistic irradiation profile for the entire planned lifetime of the facility is used. The nuclear transmutation and decay of the produced radionuclides are calculated with the DeTra code, which is a built-in tool of the MARS15 code. Radionuclide production by low-energy neutrons is calculated using the ENDF/B-VII evaluated nuclear data library. In order to estimate the quality of this activation assessment, a comparison between calculated and measured activation of various foils in a similar radiation environment is presented.

  13. USPAS | U.S. Particle Accelerator School

    Science.gov Websites

    U.S. Particle Accelerator School: Education in Beam Physics and Accelerator Technology. Home | About | University Credits | Joint International Accelerator School | University-Style Programs | Symposium-Style Programs

  14. Fact Sheets and Brochures | News

    Science.gov Websites

    Illinois Accelerator Research Center | Economic Impact | Particle Physics: Benefits to Society | The Fermilab Saturday Morning Physics | What are neutrinos? | What are neutrinos? (large format) | What is a Higgs boson? | U.S. Public Outreach | America's particle physics and accelerator laboratory | LBNF/DUNE - An international mega

  15. Physics of the inner heliosphere 1-10 R⊙: plasma diagnostics and models

    NASA Technical Reports Server (NTRS)

    Withbroe, G. L.

    1984-01-01

    The physics of solar wind flow in the acceleration region and impulsive phenomena in the solar corona is studied. The study of magnetohydrodynamic wave propagation in the corona and the solutions for steady state and time dependent solar wind equations gives insights concerning the physics of the solar wind acceleration region, plasma heating and plasma acceleration processes and the formation of shocks. Also studied is the development of techniques for placing constraints on the mechanisms responsible for coronal heating.

  16. The International Committee for Future Accelerators (ICFA): 1976 to the present

    DOE PAGES

    Rubinstein, Roy

    2016-12-14

    The International Committee for Future Accelerators (ICFA) has been in existence now for four decades. It plays an important role in allowing discussions by the world particle physics community on the status and future of very large particle accelerators and the particle physics and related fields associated with them. This paper gives some indication of what ICFA is and does, and also describes its involvement in some of the more important developments in the particle physics field since its founding.

  17. A Comprehensive Comparison of Relativistic Particle Integrators

    NASA Astrophysics Data System (ADS)

    Ripperda, B.; Bacchini, F.; Teunissen, J.; Xia, C.; Porth, O.; Sironi, L.; Lapenta, G.; Keppens, R.

    2018-03-01

    We compare relativistic particle integrators commonly used in plasma physics, showing several test cases relevant for astrophysics. Three explicit particle pushers are considered, namely, the Boris, Vay, and Higuera–Cary schemes. We also present a new relativistic fully implicit particle integrator that is energy conserving. Furthermore, a method based on the relativistic guiding center approximation is included. The algorithms are described such that they can be readily implemented in magnetohydrodynamics codes or Particle-in-Cell codes. Our comparison focuses on the strengths and key features of the particle integrators. We test the conservation of invariants of motion and the accuracy of particle drift dynamics in highly relativistic, mildly relativistic, and non-relativistic settings. The methods are compared in idealized test cases, i.e., without considering feedback onto the electrodynamic fields, collisions, pair creation, or radiation. The test cases include uniform electric and magnetic fields, E × B fields, force-free fields, and setups relevant for high-energy astrophysics, e.g., a magnetic mirror, a magnetic dipole, and a magnetic null. These tests have direct relevance for particle acceleration in shocks and in magnetic reconnection.
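Of the explicit pushers compared above, the Boris scheme is the most widely used: two electric half-kicks bracket an exact-norm magnetic rotation, which is what preserves energy in a pure magnetic field. A generic textbook sketch (q = m = 1, non-relativistic for brevity), not code from the paper.

```python
# Minimal Boris pusher: half electric kick, magnetic rotation, half kick,
# then a position update. Units chosen so that q = m = 1.

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def boris_step(x, v, E, B, dt):
    vm = [v[i] + 0.5 * dt * E[i] for i in range(3)]    # half electric kick
    t = [0.5 * dt * B[i] for i in range(3)]            # rotation vectors
    t2 = sum(c * c for c in t)
    s = [2.0 * c / (1.0 + t2) for c in t]
    vp = [vm[i] + cross(vm, t)[i] for i in range(3)]
    vplus = [vm[i] + cross(vp, s)[i] for i in range(3)]
    vnew = [vplus[i] + 0.5 * dt * E[i] for i in range(3)]  # second half kick
    xnew = [x[i] + dt * vnew[i] for i in range(3)]
    return xnew, vnew

# gyration in a uniform magnetic field: the rotation preserves |v| exactly
x, v = [0.0, 0.0, 0.0], [1.0, 0.0, 0.0]
E, B = [0.0, 0.0, 0.0], [0.0, 0.0, 1.0]
for _ in range(1000):
    x, v = boris_step(x, v, E, B, dt=0.1)
speed = sum(c * c for c in v) ** 0.5
print(f"|v| after 1000 steps: {speed:.12f}")
```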

  18. High-order space charge effects using automatic differentiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reusch, M.F.; Bruhwiler, D.L.

    1997-02-01

    The Northrop Grumman TOPKARK code has been upgraded to Fortran 90, making use of operator overloading, so the same code can be used either to track an array of particles or to construct a Taylor map representation of the accelerator lattice. We review beam optics and beam dynamics simulations conducted with TOPKARK in the past, and we present a new method for modeling space charge forces to high order with automatic differentiation. This method generates an accurate, high-order, 6-D Taylor map of the phase space variable trajectories for a bunched, high-current beam. The spatial distribution is modeled as the product of a Taylor series and a Gaussian. The variables in the argument of the Gaussian are normalized to the respective second moments of the distribution. This form allows for accurate representation of a wide range of realistic distributions, including any asymmetries, and allows for rapid calculation of the space charge fields with free space boundary conditions. An example problem is presented to illustrate our approach. © 1997 American Institute of Physics.
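The operator-overloading approach described above can be illustrated with first-order dual numbers: the same tracking function runs unchanged on plain floats or on derivative-carrying values. This is a generic forward-mode AD sketch with a hypothetical thin-lens-plus-drift map, not the TOPKARK code, which propagates full high-order Taylor maps rather than single derivatives.

```python
# Forward-mode automatic differentiation via operator overloading.
# Dual(a, b) represents a + b*eps with eps**2 = 0; der carries d(value)/d(input).

class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def _lift(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = self._lift(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __sub__(self, o):
        o = self._lift(o)
        return Dual(self.val - o.val, self.der - o.der)
    def __mul__(self, o):
        o = self._lift(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def lens_drift(x, xp, f=2.0, L=1.0):
    """Hypothetical map: thin lens of focal length f, then a drift of length L."""
    xp2 = xp - (1.0 / f) * x
    return x + L * xp2, xp2

# seed x with derivative 1 to obtain d(outputs)/d(x_in) alongside the values
x_out, xp_out = lens_drift(Dual(0.001, 1.0), Dual(0.0))
print(x_out.val, x_out.der)    # x_out.der = 1 - L/f = 0.5
```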

  19. Matlab Based LOCO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Portmann, Greg; /LBL, Berkeley; Safranek, James

    The LOCO algorithm has been used by many accelerators around the world. Although the uses for LOCO vary, the most common use has been to find calibration errors and correct the optics functions. The light source community in particular has made extensive use of the LOCO algorithms to tightly control the beta function and coupling. Maintaining high quality beam parameters requires constant attention, so a relatively large effort was put into software development for the LOCO application. The LOCO code was originally written in FORTRAN. This code worked fine but it was somewhat awkward to use. For instance, the FORTRAN code itself did not calculate the model response matrix; it required a separate modeling code such as MAD to calculate the model matrix, after which one manually loaded the data into the LOCO code. As the number of people interested in LOCO grew, it became necessary to make the code easier to use. The decision to port LOCO to Matlab was relatively easy: it is best to use a matrix programming language with good graphics capability; Matlab was also being used for high level machine control; and the accelerator modeling code AT [5] was already developed for Matlab. Since LOCO requires collecting and processing a relatively large amount of data, it is very helpful to have the LOCO code compatible with the high level machine control [3]. A number of new features were added while porting the code from FORTRAN, and new methods continue to evolve [7][9]. Although Matlab LOCO was written with AT as the underlying tracking code, a mechanism to connect to other modeling codes has been provided.
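At its core, LOCO fits calibration parameters by minimizing the difference between the measured and model orbit response matrices. A generic Gauss-Newton sketch with a toy two-parameter response model; the model and its "gain" parameters are illustrative stand-ins for a real lattice model such as AT.

```python
import numpy as np

# Gauss-Newton fit of calibration parameters to a measured response matrix.
# The 2x2 toy response model below is illustrative only.

def model_matrix(p):
    g1, g2 = p
    return np.array([[1.0 * g1, 0.3 * g1 * g2],
                     [0.3 * g2, 1.0 * g1 * g2]])

def fit(measured, p0, iters=20, eps=1e-6):
    p = np.array(p0, dtype=float)
    for _ in range(iters):
        r = (measured - model_matrix(p)).ravel()       # residual vector
        J = np.empty((r.size, p.size))                 # finite-difference Jacobian
        for j in range(p.size):
            dp = p.copy()
            dp[j] += eps
            J[:, j] = ((model_matrix(dp) - model_matrix(p)) / eps).ravel()
        p += np.linalg.lstsq(J, r, rcond=None)[0]      # Gauss-Newton update
    return p

true_p = np.array([1.05, 0.97])
measured = model_matrix(true_p)                        # synthetic "measurement"
p_fit = fit(measured, p0=[1.0, 1.0])
print(f"fitted parameters: {p_fit}")
```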

  20. CICART Center For Integrated Computation And Analysis Of Reconnection And Turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhattacharjee, Amitava

    CICART is a partnership between the University of New Hampshire (UNH) and Dartmouth College. CICART addresses two important science needs of the DoE: the basic understanding of magnetic reconnection and turbulence that strongly impacts the performance of fusion plasmas, and the development of new mathematical and computational tools that enable the modeling and control of these phenomena. The principal participants of CICART constitute an interdisciplinary group, drawn from the communities of applied mathematics, astrophysics, computational physics, fluid dynamics, and fusion physics. It is a main premise of CICART that fundamental aspects of magnetic reconnection and turbulence in fusion devices, smaller-scale laboratory experiments, and space and astrophysical plasmas can be viewed from a common perspective, and that progress in understanding in any of these interconnected fields is likely to lead to progress in others. The establishment of CICART has strongly impacted the education and research mission of a new Program in Integrated Applied Mathematics in the College of Engineering and Applied Sciences at UNH by enabling the recruitment of a tenure-track faculty member, supported equally by UNH and CICART, and the establishment of an IBM-UNH Computing Alliance. The proposed areas of research in magnetic reconnection and turbulence in astrophysical, space, and laboratory plasmas include the following topics: (A) Reconnection and secondary instabilities in large high-Lundquist-number plasmas, (B) Particle acceleration in the presence of multiple magnetic islands, (C) Gyrokinetic reconnection: comparison with fluid and particle-in-cell models, (D) Imbalanced turbulence, (E) Ion heating, and (F) Turbulence in laboratory (including fusion-relevant) experiments.
These theoretical studies make active use of three high-performance computer simulation codes: (1) the Magnetic Reconnection Code, based on extended two-fluid (or Hall MHD) equations, in an Adaptive Mesh Refinement (AMR) framework, (2) the Particle Simulation Code, a fully electromagnetic 3D Particle-In-Cell (PIC) code that includes a collision operator, and (3) GS2, an Eulerian, electromagnetic, kinetic code that is widely used in the fusion program and simulates the nonlinear gyrokinetic equations together with a self-consistent set of Maxwell’s equations.

  1. Special issue on compact x-ray sources

    NASA Astrophysics Data System (ADS)

    Hooker, Simon; Midorikawa, Katsumi; Rosenzweig, James

    2014-04-01

    Journal of Physics B: Atomic, Molecular and Optical Physics is delighted to announce a forthcoming special issue on compact x-ray sources, to appear in the winter of 2014, and invites you to submit a paper. The potential for high-brilliance x- and gamma-ray sources driven by advanced, compact accelerators has gained increasing attention in recent years. These novel sources—sometimes dubbed 'fifth generation sources'—will build on the revolutionary advance of the x-ray free-electron laser (FEL). New radiation sources of this type have widespread applications, including in ultra-fast imaging, diagnostic and therapeutic medicine, and studies of matter under extreme conditions. Rapid advances in compact accelerators and in FEL techniques make this an opportune moment to consider the opportunities which could be realized by bringing these two fields together. Further, the successful development of compact radiation sources driven by compact accelerators will be a significant milestone on the road to the development of high-gradient colliders able to operate at the frontiers of particle physics. Thus the time is right to publish a peer-reviewed collection of contributions concerning the state-of-the-art in: advanced and novel acceleration techniques; sophisticated physics at the frontier of FELs; and the underlying and enabling techniques of high brightness electron beam physics. Interdisciplinary research connecting two or more of these fields is also increasingly represented, as exemplified by entirely new concepts such as plasma based electron beam sources, and coherent imaging with fs-class electron beams. We hope that in producing this special edition of Journal of Physics B: Atomic, Molecular and Optical Physics (iopscience.iop.org/0953-4075/) we may help further a challenging mission and ongoing intellectual adventure: the harnessing of newly emergent, compact advanced accelerators to the creation of new, agile light sources with unprecedented capabilities. 
New schemes for compact accelerators: laser- and beam-driven plasma accelerators; dielectric laser accelerators; THz accelerators. Latest results for compact accelerators. Target design and staging of advanced accelerators. Advanced injection and phase space manipulation techniques. Novel diagnostics: single-shot measurement of sub-fs bunch duration; measurement of ultra-low emittance. Generation and characterization of incoherent radiation: betatron and undulator radiation; Thomson/Compton scattering sources, novel THz sources. Generation and characterization of coherent radiation. Novel FEL simulation techniques. Advances in simulations of novel accelerators: simulations of injection and acceleration processes; simulations of coherent and incoherent radiation sources; start-to-end simulations of fifth generation light sources. Novel undulator schemes. Novel laser drivers for laser-driven accelerators: high-repetition rate laser systems; high wall-plug efficiency systems. Applications of compact accelerators: imaging; radiography; medical applications; electron diffraction and microscopy. Please submit your article by 15 May 2014 (expected web publication: winter 2014); submissions received after this date will be considered for the journal, but may not be included in the special issue.

  2. The physics of sub-critical lattices in accelerator driven hybrid systems: The MUSE experiments in the MASURCA facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chauvin, J. P.; Lebrat, J. F.; Soule, R.

    Since 1991, the CEA has studied the physics of hybrid systems, involving a sub-critical reactor coupled with an accelerator. These studies have provided information on the potential of hybrid systems to transmute actinides and long-lived fission products. The potential of such a system remains to be proven, specifically in terms of the physical understanding of the different phenomena involved and their modelling, as well as in terms of experimental validation of the coupled system, sub-critical environment/accelerator. This validation must be achieved through mock-up studies of the sub-critical environments coupled to a source of external neutrons. The MUSE-4 mock-up experiment is planned at the MASURCA facility and will use an accelerator coupled to a tritium target. The great step from the generator used in the past to this accelerator will make it possible to improve the understanding of hybrid physics and to decrease the experimental biases and the measurement uncertainties.

  3. PHAZR: A phenomenological code for holeboring in air

    NASA Astrophysics Data System (ADS)

    Picone, J. M.; Boris, J. P.; Lampe, M.; Kailasanath, K.

    1985-09-01

    This report describes a new code for studying holeboring by a charged particle beam, laser, or electric discharge in a gas. The coordinates which parameterize the channel are radial displacement (r) from the channel axis and distance (z) along the channel axis from the energy source. The code is primarily phenomenological; that is, we use closed solutions of simple models in order to represent many of the effects which are important in holeboring. The numerical simplicity which we gain from the use of these solutions enables us to estimate the structure of the channel over long propagation distances while using a minimum of computer time. This feature makes PHAZR a useful code for those studying and designing future systems. Of particular interest is the design and implementation of the subgrid turbulence model required to compute the enhanced channel cooling caused by asymmetry-driven turbulence. The approximate equations of Boris and Picone form the basis of the model, which includes the effects of turbulent diffusion and fluid transport on the turbulent field itself as well as on the channel parameters. The primary emphasis here is on charged particle beams, and as an example, we present typical results for an ETA-like beam propagating in air. These calculations demonstrate how PHAZR may be used to investigate the accelerator parameter space and to isolate the important physical parameters which determine the holeboring properties of a given system. The comparison with two-dimensional calculations provides a calibration of the subgrid turbulence model.

  4. Radiation Protection Studies for Medical Particle Accelerators using Fluka Monte Carlo Code.

    PubMed

    Infantino, Angelo; Cicoria, Gianfranco; Lucconi, Giulia; Pancaldi, Davide; Vichi, Sara; Zagni, Federico; Mostacci, Domiziano; Marengo, Mario

    2017-04-01

    Radiation protection (RP) in the use of medical cyclotrons involves many aspects both in the routine use and for the decommissioning of a site. Guidelines for site planning and installation, as well as for RP assessment, are given in international documents; however, the latter typically offer analytic methods of calculation of shielding and materials activation, in approximate or idealised geometry set-ups. The availability of Monte Carlo (MC) codes with accurate up-to-date libraries for transport and interaction of neutrons and charged particles at energies below 250 MeV, together with the continuously increasing power of modern computers, makes the systematic use of simulations with realistic geometries possible, yielding equipment and site-specific evaluation of the source terms, shielding requirements and all quantities relevant to RP at the same time. In this work, the well-known FLUKA MC code was used to simulate different aspects of RP in the use of biomedical accelerators, particularly for the production of medical radioisotopes. In the context of the Young Professionals Award, held at the IRPA 14 conference, only a part of the complete work is presented. In particular, the simulation of the GE PETtrace cyclotron (16.5 MeV) installed at S. Orsola-Malpighi University Hospital evaluated the effective dose distribution around the equipment; the effective number of neutrons produced per incident proton and their spectral distribution; the activation of the structure of the cyclotron and the vault walls; the activation of the ambient air, in particular the production of 41Ar. The simulations were validated, in terms of physical and transport parameters to be used at the energy range of interest, through an extensive measurement campaign of the neutron environmental dose equivalent using a rem-counter and TLD dosemeters. The validated model was then used in the design and the licensing request of a new Positron Emission Tomography facility. 
© The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  5. PHYSICS OF OUR DAYS Physical conditions in potential accelerators of ultra-high-energy cosmic rays: updated Hillas plot and radiation-loss constraints

    NASA Astrophysics Data System (ADS)

    Ptitsyna, Kseniya V.; Troitsky, Sergei V.

    2010-10-01

    We review basic constraints on the acceleration of ultra-high-energy (UHE) cosmic rays (CRs) in astrophysical sources, namely, the geometric (Hillas) criterion and the restrictions from radiation losses in different acceleration regimes. Using the latest available astrophysical data, we redraw the Hillas plot and find potential UHECR accelerators. For the acceleration in the central engines of active galactic nuclei, we constrain the maximal UHECR energy for a given black hole mass. Among active galaxies, only the most powerful ones, radio galaxies and blazars, are able to accelerate protons to UHE, although acceleration of heavier nuclei is possible in much more abundant lower-power Seyfert galaxies.
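The geometric (Hillas) criterion invoked above requires that the particle gyroradius not exceed the source size, giving E_max ~ Z e beta c B R. A quick numerical evaluation in SI units; the example source parameters (B, R) are illustrative round numbers, not values from the paper.

```python
# Hillas criterion: E_max ~ Z * e * beta * c * B * R, evaluated in SI units
# and converted to eV. Example parameters are illustrative.

E_CHARGE = 1.602176634e-19    # elementary charge, C
C_LIGHT = 2.99792458e8        # speed of light, m/s
KPC = 3.0857e19               # kiloparsec, m
MICROGAUSS = 1.0e-10          # one microgauss in tesla

def hillas_emax_eV(Z, B_uG, R_kpc, beta=1.0):
    """Maximum energy (eV) for charge Z in a field of B_uG microgauss over R_kpc."""
    E_joule = Z * E_CHARGE * beta * C_LIGHT * (B_uG * MICROGAUSS) * (R_kpc * KPC)
    return E_joule / E_CHARGE

# e.g. a ~microgauss field coherent over ~100 kpc (radio-galaxy-lobe scale)
emax = hillas_emax_eV(Z=1, B_uG=1.0, R_kpc=100.0)
print(f"E_max ~ {emax:.2e} eV")
```

This reproduces the familiar rule of thumb E_max ≈ 0.9 Z beta (B / microgauss)(R / kpc) EeV.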

  6. 29 CFR 1915.90 - Safety color code for marking physical hazards.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 7 2013-07-01 2013-07-01 false Safety color code for marking physical hazards. 1915.90 Section 1915.90 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH... General Working Conditions § 1915.90 Safety color code for marking physical hazards. The requirements...

  7. 29 CFR 1915.90 - Safety color code for marking physical hazards.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 7 2014-07-01 2014-07-01 false Safety color code for marking physical hazards. 1915.90 Section 1915.90 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH... General Working Conditions § 1915.90 Safety color code for marking physical hazards. The requirements...

  8. 29 CFR 1915.90 - Safety color code for marking physical hazards.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 7 2012-07-01 2012-07-01 false Safety color code for marking physical hazards. 1915.90 Section 1915.90 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH... General Working Conditions § 1915.90 Safety color code for marking physical hazards. The requirements...

  9. Wakefield Computations for the CLIC PETS using the Parallel Finite Element Time-Domain Code T3P

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Candel, A.; Kabel, A.; Lee, L.

    In recent years, SLAC's Advanced Computations Department (ACD) has developed the high-performance parallel 3D electromagnetic time-domain code, T3P, for simulations of wakefields and transients in complex accelerator structures. T3P is based on advanced higher-order Finite Element methods on unstructured grids with quadratic surface approximation. Optimized for large-scale parallel processing on leadership supercomputing facilities, T3P allows simulations of realistic 3D structures with unprecedented accuracy, aiding the design of the next generation of accelerator facilities. Applications to the Compact Linear Collider (CLIC) Power Extraction and Transfer Structure (PETS) are presented.

  10. Reliability enhancement of Navier-Stokes codes through convergence enhancement

    NASA Technical Reports Server (NTRS)

    Choi, K.-Y.; Dulikravich, G. S.

    1993-01-01

    Reduction of the total computing time required by an iterative algorithm for solving the Navier-Stokes equations is an important aspect of making existing and future analysis codes more cost effective. Several attempts have been made to accelerate the convergence of an explicit Runge-Kutta time-stepping algorithm. These acceleration methods are based on local time stepping, implicit residual smoothing, enthalpy damping, and multigrid techniques. In addition, an extrapolation procedure based on the power method and the Minimal Residual Method (MRM) was applied to Jameson's multigrid algorithm. The MRM uses the same optimal weights for the corrections to every equation in a system and has not been shown to accelerate the scheme without multigriding. Our Distributed Minimal Residual (DMR) method, based on our General Nonlinear Minimal Residual (GNLMR) method, allows each component of the solution vector in a system of equations to have its own convergence speed. The DMR method was found capable of reducing the computation time by 10-75 percent, depending on the test case and grid used. Recently, we have developed and tested a new method, termed Sensitivity Based DMR (SBMR), that is easier to implement in different codes and is even more robust and computationally efficient than the DMR method.
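
    A minimal-residual extrapolation of the kind described can be sketched as follows: run a few fixed-point sweeps, collect the correction vectors, and choose the weight combination that minimizes the linear-system residual (a small least-squares problem). This is an illustrative sketch in the spirit of MRM, not the authors' DMR/SBMR code:

```python
import numpy as np

# Build a strictly diagonally dominant system so plain Jacobi converges.
rng = np.random.default_rng(0)
n = 30
A = 0.1 * rng.standard_normal((n, n))
A = 0.5 * (A + A.T)
np.fill_diagonal(A, 1.0 + np.abs(A).sum(axis=1))
b = rng.standard_normal(n)
d_inv = 1.0 / np.diag(A)

def jacobi_step(x):
    return x + d_inv * (b - A @ x)

x = np.zeros(n)
for _ in range(5):          # a few plain Jacobi sweeps to start
    x = jacobi_step(x)

# Collect correction vectors from four further sweeps.
cols, y = [], x.copy()
for _ in range(4):
    y_next = jacobi_step(y)
    cols.append(y_next - y)
    y = y_next
C = np.column_stack(cols)

# Weights minimizing || b - A (x + C w) ||_2; plain Jacobi corresponds
# to the particular (non-optimal) choice w = (1, 1, 1, 1).
w, *_ = np.linalg.lstsq(A @ C, b - A @ x, rcond=None)
x_acc = x + C @ w

r_plain = np.linalg.norm(b - A @ y)      # after 4 more plain sweeps
r_acc = np.linalg.norm(b - A @ x_acc)    # same sweeps + extrapolation
print(f"plain residual {r_plain:.3e}, accelerated {r_acc:.3e}")
```

Because the plain iterate lies inside the affine space searched by the least-squares step, the extrapolated residual can never be worse; the DMR idea goes further by giving each solution component its own weights.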

  11. Reliability enhancement of Navier-Stokes codes through convergence enhancement

    NASA Astrophysics Data System (ADS)

    Choi, K.-Y.; Dulikravich, G. S.

    1993-11-01

    Reduction of the total computing time required by an iterative algorithm for solving the Navier-Stokes equations is an important aspect of making existing and future analysis codes more cost effective. Several attempts have been made to accelerate the convergence of an explicit Runge-Kutta time-stepping algorithm. These acceleration methods are based on local time stepping, implicit residual smoothing, enthalpy damping, and multigrid techniques. In addition, an extrapolation procedure based on the power method and the Minimal Residual Method (MRM) was applied to Jameson's multigrid algorithm. The MRM uses the same optimal weights for the corrections to every equation in a system and has not been shown to accelerate the scheme without multigriding. Our Distributed Minimal Residual (DMR) method, based on our General Nonlinear Minimal Residual (GNLMR) method, allows each component of the solution vector in a system of equations to have its own convergence speed. The DMR method was found capable of reducing the computation time by 10-75 percent, depending on the test case and grid used. Recently, we have developed and tested a new method, termed Sensitivity Based DMR (SBMR), that is easier to implement in different codes and is even more robust and computationally efficient than the DMR method.

  12. Accelerated event-by-event Monte Carlo microdosimetric calculations of electrons and protons tracks on a multi-core CPU and a CUDA-enabled GPU.

    PubMed

    Kalantzis, Georgios; Tachibana, Hidenobu

    2014-01-01

    For microdosimetric calculations event-by-event Monte Carlo (MC) methods are considered the most accurate. The main shortcoming of those methods is the extensive requirement for computational time. In this work we present an event-by-event MC code of low projectile energy electron and proton tracks for accelerated microdosimetric MC simulations on a graphic processing unit (GPU). Additionally, a hybrid implementation scheme was realized by employing OpenMP and CUDA in such a way that both GPU and multi-core CPU were utilized simultaneously. The two implementation schemes have been tested and compared with the sequential single threaded MC code on the CPU. Performance comparison was established on the speed-up for a set of benchmarking cases of electron and proton tracks. A maximum speedup of 67.2 was achieved for the GPU-based MC code, while a further improvement of the speedup up to 20% was achieved for the hybrid approach. The results indicate the capability of our CPU-GPU implementation for accelerated MC microdosimetric calculations of both electron and proton tracks without loss of accuracy. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
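
    The per-history workload that the paper parallelizes across GPU threads and CPU cores can be pictured with a serial toy: each interaction of a track is sampled individually, event by event. Everything here, including the free-path and energy-loss constants, is an illustrative assumption, not the paper's physics:

```python
import random

# Toy event-by-event track Monte Carlo: sample an exponential free
# flight and an exponential energy deposit per event until the particle
# falls below a cutoff. Each history is independent, which is what makes
# the method embarrassingly parallel across threads.
MEAN_FREE_PATH = 0.05   # assumed mean distance between events [um]
MEAN_LOSS_EV = 40.0     # assumed mean energy loss per event [eV]
CUTOFF_EV = 10.0

def simulate_track(e0_ev, rng):
    depth, n_events, e = 0.0, 0, e0_ev
    while e > CUTOFF_EV:
        depth += rng.expovariate(1.0 / MEAN_FREE_PATH)  # free flight
        e -= rng.expovariate(1.0 / MEAN_LOSS_EV)        # energy deposit
        n_events += 1
    return depth, n_events

rng = random.Random(42)
tracks = [simulate_track(5000.0, rng) for _ in range(2000)]
mean_events = sum(n for _, n in tracks) / len(tracks)
print(f"mean interactions per 5 keV track: {mean_events:.1f}")
```

Since histories share no state, a GPU port assigns one history (or one batch) per thread; the hybrid scheme in the paper additionally keeps the CPU cores busy with their own share of histories.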

  13. Use of color-coded sleeve shutters accelerates oscillograph channel selection

    NASA Technical Reports Server (NTRS)

    Bouchlas, T.; Bowden, F. W.

    1967-01-01

    Sleeve-type shutters mechanically adjust individual galvanometer light beams onto or away from selected channels on oscillograph papers. In complex test setups, the sleeve-type shutters are color coded to separately identify each oscillograph channel. This technique could be used on any equipment using tubular galvanometer light sources.

  14. Accelerating Innovation: How Nuclear Physics Benefits Us All

    DOE R&D Accomplishments Database

    2011-01-01

    Nuclear physics has accelerated innovation in areas including improving our health; making the world safer; electricity, the environment, and archaeology; better computers; contributions to industry; and training the next generation of innovators.

  15. Proceedings of the 1995 Particle Accelerator Conference and international Conference on High-Energy Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    1996-01-01

    Papers from the sixteenth biennial Particle Accelerator Conference, an international forum on accelerator science and technology held May 1–5, 1995, in Dallas, Texas, organized by Los Alamos National Laboratory (LANL) and Stanford Linear Accelerator Center (SLAC), jointly sponsored by the Institute of Electrical and Electronics Engineers (IEEE) Nuclear and Plasma Sciences Society (NPSS), the American Physical Society (APS) Division of Particles and Beams (DPB), and the International Union of Pure and Applied Physics (IUPAP), and conducted with support from the US Department of Energy, the National Science Foundation, and the Office of Naval Research.

  16. Accelerator science in medical physics.

    PubMed

    Peach, K; Wilson, P; Jones, B

    2011-12-01

    The use of cyclotrons and synchrotrons to accelerate charged particles in hospital settings for the purpose of cancer therapy is increasing. Consequently, there is a growing demand from medical physicists, radiographers, physicians and oncologists for articles that explain the basic physical concepts of these technologies. There are unique advantages and disadvantages to all methods of acceleration. Several promising alternative methods of accelerating particles also have to be considered since they will become increasingly available with time; however, there are still many technical problems with these that require solving. This article serves as an introduction to this complex area of physics, and will be of benefit to those engaged in cancer therapy, or who intend to acquire such technologies in the future.

  17. Gymnastics in Phase Space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chao, Alexander Wu; /SLAC

    2012-03-01

    As accelerator technology advances, the requirements on accelerator beam quality become increasingly demanding. Facing these new demands, the topic of phase space gymnastics is becoming a new focus of accelerator physics R&D. In phase space gymnastics, the beam's phase space distribution is manipulated and precision tailored to meet the required beam qualities. On the other hand, any realization of such gymnastics will have to obey accelerator physics principles as well as technological limitations. Recent examples of phase space gymnastics include emittance exchanges, phase space exchanges, emittance partitioning, seeded FELs, and microbunched beams. The emittance-related topics of this list are reviewed in this report. The accelerator physics basis, the optics design principles that provide these phase space manipulations, and the possible applications of these gymnastics are discussed. This fascinating new field promises to be a powerful tool of the future.

  18. Kinetic Modeling of Next-Generation High-Energy, High-Intensity Laser-Ion Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albright, Brian James; Yin, Lin; Stark, David James

    One of the long-standing problems in the community is the question of how we can model “next-generation” laser-ion acceleration in a computationally tractable way. A new particle tracking capability in the LANL VPIC kinetic plasma modeling code has enabled us to solve this problem.

  19. Electron acceleration in the Solar corona - 3D PiC code simulations of guide field reconnection

    NASA Astrophysics Data System (ADS)

    Alejandro Munoz Sepulveda, Patricio

    2017-04-01

    The efficient electron acceleration in the solar corona detected by means of hard X-ray emission is still not well understood. Magnetic reconnection through current sheets is one of the proposed production mechanisms of non-thermal electrons in solar flares. Previous works in this direction were based mostly on test-particle calculations or 2D fully kinetic PiC simulations. We have now studied the consequences of self-generated current-aligned instabilities on the electron acceleration mechanisms in 3D magnetic reconnection. To this end, we carried out 3D Particle-in-Cell (PiC) numerical simulations of force-free reconnecting current sheets, appropriate for the description of solar coronal plasmas. We find efficient electron energization, evidenced by the formation of a non-thermal power-law tail with a hard spectral index smaller than -2 in the electron energy distribution function. We discuss and compare the influence of the parallel electric field versus the curvature and gradient drifts in the guiding-center approximation on the overall acceleration, and their dependence on different plasma parameters.
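
    The spectral index of such a non-thermal tail is typically estimated from the sampled particle energies; a standard approach for a pure power law f(E) ∝ E^(−α) above E_min is the maximum-likelihood estimator α̂ = 1 + N / Σ ln(E_i/E_min). A self-contained sketch with synthetic data (the numbers are illustrative, not simulation output):

```python
import math
import random

# Draw energies from a power law f(E) ~ E**(-ALPHA_TRUE) above E_MIN via
# inverse-CDF sampling, then recover the index with the MLE.
rng = random.Random(1)
ALPHA_TRUE, E_MIN = 2.5, 1.0

energies = [E_MIN * (1.0 - rng.random()) ** (-1.0 / (ALPHA_TRUE - 1.0))
            for _ in range(20000)]

# Maximum-likelihood estimate of the spectral index.
alpha_hat = 1.0 + len(energies) / sum(math.log(e / E_MIN) for e in energies)
print(f"fitted spectral index: -{alpha_hat:.2f}")
```

The statistical error of the estimator scales as (α − 1)/√N, so with 2×10^4 tail particles the index is pinned to about ±0.01, well inside the "harder or softer than −2" distinction discussed above.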

  20. MAPA: an interactive accelerator design code with GUI

    NASA Astrophysics Data System (ADS)

    Bruhwiler, David L.; Cary, John R.; Shasharina, Svetlana G.

    1999-06-01

    The MAPA code is an interactive accelerator modeling and design tool with an X/Motif GUI. MAPA has been developed in C++ and makes full use of object-oriented features. We present an overview of its features and describe how users can independently extend the capabilities of the entire application, including the GUI. For example, a user can define a new model for a focusing or accelerating element. If the appropriate form is followed, and the new element is "registered" with a single line in the specified file, then the GUI will fully support this user-defined element type after it has been compiled and then linked to the existing application. In particular, the GUI will bring up windows for modifying any relevant parameters of the new element type. At present, one can use the GUI for phase space tracking, finding fixed points and generating line plots for the Twiss parameters, the dispersion and the accelerator geometry. The user can define new types of simulations which the GUI will automatically support by providing a menu option to execute the simulation and subsequently rendering line plots of the resulting data.
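
    The one-line registration mechanism described can be sketched with a simple class registry; the names and API below are hypothetical illustrations of the pattern, not MAPA's actual C++ interface:

```python
# Element classes register themselves in a table; generic tooling (for
# example, a GUI parameter dialog) discovers the available types and
# their editable parameters at run time, with no per-type GUI code.
ELEMENT_REGISTRY = {}

def register_element(cls):
    """The one-line registration hook for a new element type."""
    ELEMENT_REGISTRY[cls.__name__] = cls
    return cls

@register_element
class Quadrupole:
    parameters = ("length", "gradient")
    def __init__(self, length, gradient):
        self.length, self.gradient = length, gradient

@register_element
class RFCavity:
    parameters = ("length", "voltage", "frequency")
    def __init__(self, length, voltage, frequency):
        self.length, self.voltage, self.frequency = length, voltage, frequency

# A generic front end only needs the registry to build parameter editors:
for name, cls in ELEMENT_REGISTRY.items():
    print(name, "->", ", ".join(cls.parameters))
```

The design choice is the same as in the abstract: user code gains full tool support (here, the editor loop) by conforming to a small protocol plus one registration line, without modifying the application itself.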

  1. Accelerator Technology Division annual report, FY 1989

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1990-06-01

    This paper discusses: accelerator physics and special projects; experiments and injectors; magnetic optics and beam diagnostics; accelerator design and engineering; radio-frequency technology; accelerator theory and simulation; free-electron laser technology; accelerator controls and automation; and high power microwave sources and effects.

  2. Development of Safety Analysis Code System of Beam Transport and Core for Accelerator Driven System

    NASA Astrophysics Data System (ADS)

    Aizawa, Naoto; Iwasaki, Tomohiko

    2014-06-01

    A safety analysis code system of beam transport and core for accelerator driven systems (ADS) has been developed for the analyses of beam transients such as changes in the shape and position of the incident beam. The code system consists of a beam transport analysis part and a core analysis part. TRACE 3-D is employed in the beam transport analysis part, and the shape and incident position of the beam at the target are calculated. In the core analysis part, the neutronics, thermo-hydraulics and cladding failure analyses are performed by the ADS dynamic calculation code ADSE, on the basis of the external source database calculated by PHITS and the cross-section database calculated by SRAC, together with programs for the cladding failure analysis for thermoelastic and creep behavior. Using the code system, beam transient analyses were performed for the ADS proposed by the Japan Atomic Energy Agency. As a result, the cladding temperature rises rapidly and plastic deformation occurs within several seconds; in addition, the cladding is evaluated to fail by creep within a hundred seconds. These results show that beam transients can cause cladding failure.

  3. Particle acceleration, transport and turbulence in cosmic and heliospheric physics

    NASA Technical Reports Server (NTRS)

    Matthaeus, W.

    1992-01-01

    In this progress report, the long term goals, recent scientific progress, and organizational activities are described. The scientific focus of this annual report is in three areas: first, the physics of particle acceleration and transport, including heliospheric modulation and transport, shock acceleration and galactic propagation and reacceleration of cosmic rays; second, the development of theories of the interaction of turbulence and large scale plasma and magnetic field structures, as in winds and shocks; third, the elucidation of the nature of magnetohydrodynamic turbulence processes and the role such turbulence processes might play in heliospheric, galactic, cosmic ray physics, and other space physics applications.

  4. Laboratory laser acceleration and high energy astrophysics: {gamma}-ray bursts and cosmic rays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tajima, T.; Takahashi, Y.

    1998-08-20

    Recent experimental progress in laser acceleration of charged particles (electrons) and its associated processes has shown that intense electromagnetic pulses can promptly accelerate charged particles to high energies and that their energy spectrum is quite hard. On the other hand, some of the high energy astrophysical phenomena such as extremely high energy cosmic rays and energetic components of {gamma}-ray bursts cry for new physical mechanisms for promptly accelerating particles to high energies. The authors suggest that the basic physics involved in laser acceleration experiments sheds light on some of the underlying mechanisms and the energy spectral characteristics of the promptly accelerated particles in these high energy astrophysical phenomena.

  5. Analysis of secondary particle behavior in multiaperture, multigrid accelerator for the ITER neutral beam injector.

    PubMed

    Mizuno, T; Taniguchi, M; Kashiwagi, M; Umeda, N; Tobari, H; Watanabe, K; Dairaku, M; Sakamoto, K; Inoue, T

    2010-02-01

    Heat load on the acceleration grids from secondary particles such as electrons, neutrals, and positive ions is a key issue for long pulse acceleration of negative ion beams. The complicated behaviors of the secondary particles in a multiaperture, multigrid (MAMuG) accelerator have been analyzed using an electrostatic accelerator Monte Carlo code. The analytical result is compared to the experimental one obtained in a long pulse operation of a MeV accelerator, whose second acceleration grid (A2G) was removed for simplification of the structure. The analytical results show a relatively high heat load on the third acceleration grid (A3G), since stripped electrons are deposited mainly on A3G. This heat load on the A3G can be suppressed by installing the A2G. Thus, the capability of the MAMuG accelerator to suppress the heat load due to secondary particles by means of the intermediate grids is demonstrated.

  6. 29 CFR 1910.144 - Safety color code for marking physical hazards.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 5 2013-07-01 2013-07-01 false Safety color code for marking physical hazards. 1910.144... § 1910.144 Safety color code for marking physical hazards. (a) Color identification—(1) Red. Red shall be the basic color for the identification of: (i) Fire protection equipment and apparatus. [Reserved] (ii...

  7. 29 CFR 1910.144 - Safety color code for marking physical hazards.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 5 2014-07-01 2014-07-01 false Safety color code for marking physical hazards. 1910.144... § 1910.144 Safety color code for marking physical hazards. (a) Color identification—(1) Red. Red shall be the basic color for the identification of: (i) Fire protection equipment and apparatus. [Reserved] (ii...

  8. 29 CFR 1910.144 - Safety color code for marking physical hazards.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 5 2012-07-01 2012-07-01 false Safety color code for marking physical hazards. 1910.144... § 1910.144 Safety color code for marking physical hazards. (a) Color identification—(1) Red. Red shall be the basic color for the identification of: (i) Fire protection equipment and apparatus. [Reserved] (ii...

  9. NASA's Microgravity Fluid Physics Program: Tolerability to Residual Accelerations

    NASA Technical Reports Server (NTRS)

    Skarda, J. Raymond

    1998-01-01

    An overview of the NASA microgravity fluid physics program is presented. The necessary quality of a reduced-gravity environment in terms of tolerable residual acceleration or g levels is a concern that is inevitably raised for each new microgravity experiment. Methodologies have been reported in the literature that provide guidance in obtaining reasonable estimates of residual acceleration sensitivity for a broad range of fluid physics phenomena. Furthermore, a relatively large and growing database of microgravity experiments that have successfully been performed in terrestrial reduced gravity facilities and orbiting platforms exists. Similarity of experimental conditions and hardware, in some cases, leads to new experiments adopting prior experiments' g-requirements. Rationale applied to other experiments can, in principle, be a valuable guide to assist new Principal Investigators (PIs) in determining the residual acceleration tolerability of their flight experiments. The availability of g-requirements rationale from prior μg experiments is discussed. An example of establishing g tolerability requirements is demonstrated, using a current microgravity fluid physics flight experiment. The Fluids and Combustion Facility (FCF), which is currently manifested on the US Laboratory of the International Space Station (ISS), will provide opportunities for fluid physics and combustion experiments throughout the life of the ISS. Although the FCF is not intended to accommodate all fluid physics experiments, it is expected to meet the science requirements of approximately 80% of the new PIs that enter the microgravity fluid physics program. The residual acceleration requirements for the FCF fluid physics experiments are based on a set of fourteen reference fluid physics experiments which are discussed.

  10. Path Toward a Unified Geometry for Radiation Transport

    NASA Technical Reports Server (NTRS)

    Lee, Kerry; Barzilla, Janet; Davis, Andrew; Zachmann

    2014-01-01

    The Direct Accelerated Geometry for Radiation Analysis and Design (DAGRAD) element of the RadWorks Project under Advanced Exploration Systems (AES) within the Space Technology Mission Directorate (STMD) of NASA will enable new designs and concepts of operation for radiation risk assessment, mitigation and protection. This element is designed to produce a solution that will allow NASA to calculate the transport of space radiation through complex computer-aided design (CAD) models using the state-of-the-art analytic and Monte Carlo radiation transport codes. Due to the inherent hazard of astronaut and spacecraft exposure to ionizing radiation in low-Earth orbit (LEO) or in deep space, risk analyses must be performed for all crew vehicles and habitats. Incorporating these analyses into the design process can minimize the mass needed solely for radiation protection. Transport of the radiation fields as they pass through shielding and body materials can be simulated using Monte Carlo techniques or described by the Boltzmann equation, which is obtained by balancing changes in particle fluxes as they traverse a small volume of material with the gains and losses caused by atomic and nuclear collisions. Deterministic codes that solve the Boltzmann transport equation, such as HZETRN [high charge and energy transport code developed by NASA Langley Research Center (LaRC)], are generally computationally faster than Monte Carlo codes such as FLUKA, GEANT4, MCNP(X) or PHITS; however, they are currently limited to transport in one dimension, which poorly represents the secondary light ion and neutron radiation fields. NASA currently uses HZETRN space radiation transport software, both because it is computationally efficient and because proven methods have been developed for using this software to analyze complex geometries. 
Although Monte Carlo codes describe the relevant physics in a fully three-dimensional manner, their computational costs have thus far prevented their widespread use for analysis of complex CAD models, leading to the creation and maintenance of toolkit-specific simplistic geometry models. The work presented here builds on the Direct Accelerated Geometry Monte Carlo (DAGMC) toolkit developed for use with the Monte Carlo N-Particle (MCNP) transport code. The workflow for achieving radiation transport on CAD models using MCNP and FLUKA has been demonstrated, and the results of analyses on realistic spacecraft/habitats will be presented. Future work is planned that will further automate this process and enable the use of multiple radiation transport codes on identical geometry models imported from CAD. This effort will enhance the modeling tools used by NASA to accurately evaluate the astronaut space radiation risk and to determine the protection provided by as-designed exploration mission vehicles and habitats.
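
    The particle-balance picture described above reduces, in the simplest one-group, purely absorbing case, to exponential attenuation, which a Monte Carlo estimate of sampled free paths must reproduce. A toy check (constants assumed; vastly simpler than HZETRN- or FLUKA-class codes):

```python
import math
import random

# One-group transport through a purely absorbing slab: the deterministic
# (Boltzmann) answer is T = exp(-mu * t); the Monte Carlo answer counts
# particles whose sampled exponential free path exceeds the thickness.
MU = 0.8         # assumed macroscopic absorption coefficient [1/cm]
THICKNESS = 2.0  # slab thickness [cm]

rng = random.Random(7)
n = 100_000
transmitted = sum(rng.expovariate(MU) > THICKNESS for _ in range(n))
t_mc = transmitted / n
t_det = math.exp(-MU * THICKNESS)
print(f"MC transmission {t_mc:.4f} vs deterministic {t_det:.4f}")
```

With scattering, secondaries, and 3D CAD geometry the deterministic solution no longer has a closed form, which is exactly the trade-off between HZETRN-style deterministic solvers and the Monte Carlo codes named in the abstract.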

  11. Assessing the Effects of Data Compression in Simulations Using Physically Motivated Metrics

    DOE PAGES

    Laney, Daniel; Langer, Steven; Weber, Christopher; ...

    2014-01-01

    This paper examines whether lossy compression can be used effectively in physics simulations as a possible strategy to combat the expected data-movement bottleneck in future high performance computing architectures. We show that, for the codes and simulations we tested, compression levels of 3–5X can be applied without causing significant changes to important physical quantities. Rather than applying signal processing error metrics, we utilize physics-based metrics appropriate for each code to assess the impact of compression. We evaluate three different simulation codes: a Lagrangian shock-hydrodynamics code, an Eulerian higher-order hydrodynamics turbulence modeling code, and an Eulerian coupled laser-plasma interaction code. We compress relevant quantities after each time-step to approximate the effects of tightly coupled compression and study the compression rates to estimate memory and disk-bandwidth reduction. We find that the error characteristics of compression algorithms must be carefully considered in the context of the underlying physics being modeled.
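
    The procedure can be mimicked with a toy: lossily compress a field, then judge the damage with a physics-motivated metric, here the relative change of a quadratic "kinetic energy", rather than a generic signal-error norm. The quantizer and the metric are illustrative stand-ins, not the paper's compressors or codes:

```python
import numpy as np

# A synthetic velocity field standing in for one simulation quantity.
rng = np.random.default_rng(3)
velocity = rng.normal(0.0, 1.0, size=100_000)

def quantize(field, bits):
    """Uniform scalar quantizer: a crude stand-in for a real lossy
    compressor (more bits = lower compression, lower error)."""
    lo, hi = field.min(), field.max()
    levels = 2 ** bits - 1
    q = np.round((field - lo) / (hi - lo) * levels)
    return lo + q / levels * (hi - lo)

# Physics-based metric: relative change in total kinetic energy.
energy = 0.5 * np.sum(velocity ** 2)
errs = {}
for bits in (12, 8, 6):
    e_c = 0.5 * np.sum(quantize(velocity, bits) ** 2)
    errs[bits] = abs(e_c - energy) / energy
    print(f"{bits}-bit quantization: relative energy error {errs[bits]:.2e}")
```

The point the paper makes carries over even to this toy: the acceptable bit budget is set by how much a conserved or diagnostic quantity may drift, not by a signal-to-noise figure in isolation.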

  12. Lebedev acceleration and comparison of different photometric models in the inversion of lightcurves for asteroids

    NASA Astrophysics Data System (ADS)

    Lu, Xiao-Ping; Huang, Xiang-Jie; Ip, Wing-Huen; Hsia, Chi-Hao

    2018-04-01

    In the lightcurve inversion process, where an asteroid's physical parameters such as rotational period, pole orientation and overall shape are searched for, the numerical calculations of the synthetic photometric brightness based on different shape models are frequently implemented. Lebedev quadrature is an efficient method to numerically calculate a surface integral on the unit sphere. By transforming the surface integral on the Cellinoid shape model to one on the unit sphere, the lightcurve inversion process based on the Cellinoid shape model can be remarkably accelerated. Furthermore, Matlab codes of the lightcurve inversion process based on the Cellinoid shape model are available on Github for free downloading. The photometric models, i.e., the scattering laws, also play an important role in the lightcurve inversion process, although the shape variations of asteroids dominate the morphologies of the lightcurves. Derived from radiative transfer theory, the Hapke model can describe light reflectance behavior from the viewpoint of physics, while there are also many empirical models in numerical applications. Numerical simulations are implemented for the comparison of the Hapke model with three other numerical models: the Lommel-Seeliger, Minnaert, and Kaasalainen models. The results show that the numerical models with simple function expressions can fit well with the synthetic lightcurves generated based on the Hapke model; this good fit implies that they can be adopted in the lightcurve inversion process for asteroids to improve the numerical efficiency and derive similar results to those of the Hapke model.
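
    The Lebedev idea, exactness on the unit sphere with few symmetry-adapted nodes, can be shown with the smallest member of the family: the 6-point, degree-3 rule with equal weights 1/6 at the octahedron vertices. This toy is ours for illustration, not the paper's Matlab implementation:

```python
import math

# Degree-3 Lebedev rule: six nodes on the coordinate axes, equal weights.
# It integrates any polynomial of degree <= 3 exactly via
#   integral over S^2 of f dS  ~=  4*pi * sum_i w_i f(x_i).
POINTS = [( 1, 0, 0), (-1, 0, 0),
          ( 0, 1, 0), ( 0, -1, 0),
          ( 0, 0, 1), ( 0, 0, -1)]
WEIGHTS = [1.0 / 6.0] * 6

def sphere_integral(f):
    return 4.0 * math.pi * sum(w * f(*p) for w, p in zip(WEIGHTS, POINTS))

# The integral of x^2 over the unit sphere is 4*pi/3, reproduced exactly:
approx = sphere_integral(lambda x, y, z: x * x)
print(approx, "vs", 4.0 * math.pi / 3.0)
```

Higher-order Lebedev rules follow the same pattern with more octahedrally symmetric node orbits; the efficiency gain the paper exploits is that far fewer nodes are needed than for a product latitude-longitude grid of the same accuracy.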

  13. The Impact of IBM Cell Technology on the Programming Paradigm in the Context of Computer Systems for Climate and Weather Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Shujia; Duffy, Daniel; Clune, Thomas

    The call for ever-increasing model resolutions and physical processes in climate and weather models demands a continual increase in computing power. The IBM Cell processor's order-of-magnitude peak performance increase over conventional processors makes it very attractive to fulfill this requirement. However, the Cell's characteristics, 256KB local memory per SPE and the new low-level communication mechanism, make it very challenging to port an application. As a trial, we selected the solar radiation component of the NASA GEOS-5 climate model, which: (1) is representative of column physics components (half the total computational time), (2) has an extremely high computational intensity: the ratio of computational load to main memory transfers, and (3) exhibits embarrassingly parallel column computations. In this paper, we converted the baseline code (single-precision Fortran) to C and ported it to an IBM BladeCenter QS20. For performance, we manually SIMDize four independent columns and include several unrolling optimizations. Our results show that when compared with the baseline implementation running on one core of Intel's Xeon Woodcrest, Dempsey, and Itanium2, the Cell is approximately 8.8x, 11.6x, and 12.8x faster, respectively. Our preliminary analysis shows that the Cell can also accelerate the dynamics component (~25% of the total computational time). We believe these dramatic performance improvements make the Cell processor very competitive as an accelerator.

  14. Implementation of an accelerated physical examination course in a doctor of pharmacy program.

    PubMed

    Ho, Jackie; Bidwal, Monica K; Lopes, Ingrid C; Shah, Bijal M; Ip, Eric J

    2014-12-15

    To describe the implementation of a 1-day accelerated physical examination course for a doctor of pharmacy program and to evaluate pharmacy students' knowledge, attitudes, and confidence in performing physical examination. Using a flipped teaching approach, course coordinators collaborated with a physician faculty member to design and develop the objectives of the course. Knowledge, attitude, and confidence survey questions were administered before and after the practical laboratory. Following the practical laboratory, knowledge improved by 8.3% (p<0.0001). Students' perceived ability and confidence to perform a physical examination significantly improved (p<0.0001). A majority of students responded that reviewing the training video (81.3%) and reading material (67.4%) prior to the practical laboratory was helpful in learning the physical examination. An accelerated physical examination course using a flipped teaching approach was successful in improving students' knowledge of, attitudes about, and confidence in using physical examination skills in pharmacy practice.

  15. Benchmark of neutron production cross sections with Monte Carlo codes

    NASA Astrophysics Data System (ADS)

    Tsai, Pi-En; Lai, Bo-Lun; Heilbronn, Lawrence H.; Sheu, Rong-Jiun

    2018-02-01

    Aiming to provide critical information in the fields of heavy ion therapy, radiation shielding in space, and facility design for heavy-ion research accelerators, the physics models in three Monte Carlo simulation codes (PHITS, FLUKA, and MCNP6) were systematically benchmarked with comparisons to fifteen sets of experimental data for neutron production cross sections, which include various combinations of 12C, 20Ne, 40Ar, 84Kr and 132Xe projectiles and natLi, natC, natAl, natCu, and natPb target nuclides at incident energies between 135 MeV/nucleon and 600 MeV/nucleon. For neutron energies above 60% of the specific projectile energy per nucleon, LAQGSM03.03 in MCNP6, JQMD/JQMD-2.0 in PHITS, and RQMD-2.4 in FLUKA all show better agreement with data in heavy-projectile systems than in light-projectile systems, suggesting that the collective properties of projectile nuclei and nucleon interactions in the nucleus should be considered for light projectiles. For intermediate-energy neutrons, whose energies are below 60% of the projectile energy per nucleon and above 20 MeV, FLUKA is likely to overestimate the secondary neutron production, while MCNP6 tends towards underestimation. PHITS with JQMD shows a mild tendency for underestimation, but the JQMD-2.0 model, with a modified physics description for central collisions, generally improves the agreement between data and calculations. For low-energy neutrons (below 20 MeV), which are dominated by the evaporation mechanism, PHITS (which uses GEM linked with JQMD and JQMD-2.0) and FLUKA both tend to overestimate the production cross section, whereas MCNP6 underestimates for more systems than it overestimates. For total neutron production cross sections, the trends of the benchmark results over the entire energy range are similar to those seen in the dominant energy region.
Also, the comparison of GEM coupled with either JQMD or JQMD-2.0 in the PHITS code indicates that the model used to describe the first stage of a nucleus-nucleus collision also affects the low-energy neutron production. Thus, in this case, a proper combination of two physics models is desired to reproduce the measured results. In addition, code users should be aware that certain models consistently produce secondary neutrons within a constant fraction of another model in certain energy regions, which might be correlated with different physics treatments in the different models.
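The over-/underestimation verdicts above come down to comparing calculated and measured cross sections bin by bin within an energy region. A minimal sketch of that comparison (the spectra below are illustrative placeholders, not data from the benchmark):

```python
# Hypothetical sketch of a benchmark check: the mean calculated-to-measured
# ratio over matching energy bins classifies a model as over- or
# underestimating in that energy region. Numbers are illustrative only.

def mean_ratio(calc, meas):
    """Mean calculated-to-measured ratio over matching energy bins."""
    return sum(c / m for c, m in zip(calc, meas)) / len(calc)

# Illustrative spectra (e.g. mb/sr/MeV) in three bins of one energy region.
measured   = [12.0, 8.0, 3.0]
calculated = [14.4, 9.6, 3.6]   # a model overestimating by ~20%

r = mean_ratio(calculated, measured)
if r > 1.1:
    verdict = "overestimates"
elif r < 0.9:
    verdict = "underestimates"
else:
    verdict = "agrees within 10%"
```

A real benchmark would of course weight by experimental uncertainties and interpolate bins; this only shows the shape of the comparison.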

  16. Preoperative predictors of returning to work following primary total knee arthroplasty.

    PubMed

    Styron, Joseph F; Barsoum, Wael K; Smyth, Kathleen A; Singer, Mendel E

    2011-01-05

    There is little in the literature to guide clinicians in advising patients regarding their return to work following a primary total knee arthroplasty. In this study, we aimed to identify which factors are important in estimating a patient's time to return to work following primary total knee arthroplasty, how long patients can anticipate being off from work, and the types of jobs to which patients are able to return following primary total knee arthroplasty. A prospective cohort study was performed in which patients scheduled for a primary total knee arthroplasty completed a validated questionnaire preoperatively and at four to six weeks, three months, and six months postoperatively. The questionnaire assessed the patient's occupational physical demands, ability to perform job responsibilities, physical status, and motivation to return to work as well as factors that may impact his or her recovery and other workplace characteristics. Two survival analysis models were constructed to evaluate the time to return to work either at least part-time or full-time. Acceleration factors were calculated to indicate the relative percentage of time until the patient returned to work. The median time to return to work was 8.9 weeks. Patients who reported a sense of urgency about returning to work were found to return in half the time taken by other employees (acceleration factor = 0.468; p < 0.001). Other preoperative factors associated with a faster return to work included being female (acceleration factor = 0.783), self-employment (acceleration factor = 0.792), higher mental health scores (acceleration factor = 0.891), higher physical function scores (acceleration factor = 0.809), higher Functional Comorbidity Index scores (acceleration factor = 0.914), and a handicap accessible workplace (acceleration factor = 0.736). 
A slower return to work was associated with having less pain preoperatively (acceleration factor = 1.132), having a more physically demanding job (acceleration factor = 1.116), and receiving Workers' Compensation (acceleration factor = 4.360). Although the physical demands of a patient's job have a moderate influence on the patient's ability to return to work following a primary total knee arthroplasty, the patient's characteristics, particularly motivation, play a more important role.
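In an accelerated failure time model, an acceleration factor below 1 shortens the predicted time scale and a factor above 1 lengthens it. A simplified sketch of that interpretation, using the median and factors reported above (this treats factors as directly multiplying the median, which is a simplification of the paper's models):

```python
# Sketch: interpreting acceleration factors from an accelerated failure time
# (AFT) model. Simplification: each factor multiplies the median
# return-to-work time directly; the study's fitted models are more involved.

MEDIAN_WEEKS = 8.9  # reported median time to return to work

def predicted_weeks(median, factors):
    t = median
    for f in factors:
        t *= f
    return t

# A patient with a sense of urgency about returning (factor 0.468):
urgent = predicted_weeks(MEDIAN_WEEKS, [0.468])            # roughly 4.2 weeks
# The same patient also receiving Workers' Compensation (factor 4.360):
urgent_wc = predicted_weeks(MEDIAN_WEEKS, [0.468, 4.360])  # roughly 18 weeks
```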

  17. The conversion of CESR to operate as the Test Accelerator, CesrTA. Part 3: Electron cloud diagnostics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Billing, M. G.; Conway, J. V.; Crittenden, J. A.

Cornell's electron/positron storage ring (CESR) was modified over a series of accelerator shutdowns beginning in May 2008, substantially improving its capability for research and development for particle accelerators. CESR's energy span from 1.8 to 5.6 GeV with both electrons and positrons makes it ideal for the study of a wide spectrum of accelerator physics issues and instrumentation related to present light sources and future lepton damping rings. Additionally, a number of these are also relevant for the beam physics of proton accelerators. This paper is the third in a series of four describing the conversion of CESR to the test accelerator, CESRTA. The first two papers discuss the overall plan for the conversion of the storage ring to an instrument capable of studying advanced accelerator physics issues [1] and the details of the vacuum system upgrades [2]. This paper focuses on the necessary development of new instrumentation, situated in four dedicated experimental regions, capable of studying such phenomena as electron clouds (ECs) and methods to mitigate EC effects. The fourth paper in this series describes the vacuum system modifications of the superconducting wigglers to accommodate the diagnostic instrumentation for the study of EC behavior within wigglers. Lastly, while the initial studies of CESRTA focused on questions related to the International Linear Collider damping ring design, CESRTA is a very versatile storage ring, capable of studying a wide range of accelerator physics and instrumentation questions.

  18. The conversion of CESR to operate as the Test Accelerator, CesrTA. Part 3: Electron cloud diagnostics

    DOE PAGES

    Billing, M. G.; Conway, J. V.; Crittenden, J. A.; ...

    2016-04-28

Cornell's electron/positron storage ring (CESR) was modified over a series of accelerator shutdowns beginning in May 2008, substantially improving its capability for research and development for particle accelerators. CESR's energy span from 1.8 to 5.6 GeV with both electrons and positrons makes it ideal for the study of a wide spectrum of accelerator physics issues and instrumentation related to present light sources and future lepton damping rings. Additionally, a number of these are also relevant for the beam physics of proton accelerators. This paper is the third in a series of four describing the conversion of CESR to the test accelerator, CESRTA. The first two papers discuss the overall plan for the conversion of the storage ring to an instrument capable of studying advanced accelerator physics issues [1] and the details of the vacuum system upgrades [2]. This paper focuses on the necessary development of new instrumentation, situated in four dedicated experimental regions, capable of studying such phenomena as electron clouds (ECs) and methods to mitigate EC effects. The fourth paper in this series describes the vacuum system modifications of the superconducting wigglers to accommodate the diagnostic instrumentation for the study of EC behavior within wigglers. Lastly, while the initial studies of CESRTA focused on questions related to the International Linear Collider damping ring design, CESRTA is a very versatile storage ring, capable of studying a wide range of accelerator physics and instrumentation questions.

  19. SKIRT: The design of a suite of input models for Monte Carlo radiative transfer simulations

    NASA Astrophysics Data System (ADS)

    Baes, M.; Camps, P.

    2015-09-01

The Monte Carlo method is the most popular technique to perform radiative transfer simulations in a general 3D geometry. The algorithms behind and acceleration techniques for Monte Carlo radiative transfer are discussed extensively in the literature, and many different Monte Carlo codes are publicly available. By contrast, the design of a suite of components that can be used for the distribution of sources and sinks in radiative transfer codes has received very little attention. The availability of such models, with different degrees of complexity, has many benefits. For example, they can serve as toy models to test new physical ingredients, or as parameterised models for inverse radiative transfer fitting. For 3D Monte Carlo codes, this requires algorithms to efficiently generate random positions from 3D density distributions. We describe the design of a flexible suite of components for the Monte Carlo radiative transfer code SKIRT. The design is based on a combination of basic building blocks (which can be either analytical toy models or numerical models defined on grids or a set of particles) and the extensive use of decorators that combine and alter these building blocks into more complex structures. For a number of decorators, e.g. those that add spiral structure or clumpiness, we provide a detailed description of the algorithms that can be used to generate random positions. Advantages of this decorator-based design include code transparency, the avoidance of code duplication, and an increase in code maintainability. Moreover, since decorators can be chained without problems, very complex models can easily be constructed out of simple building blocks. Finally, based on a number of test simulations, we demonstrate that our design using customised random position generators is superior to a simpler design based on a generic black-box random position generator.
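The decorator idea above can be sketched compactly: a base geometry supplies a density and a random-position generator, and a decorator wraps any geometry to alter both. This is a Python stand-in (SKIRT itself is C++), and all class names here are illustrative, not SKIRT's actual API:

```python
# Sketch of a decorator-based geometry suite. A geometry exposes density()
# and random_position(); decorators wrap a geometry and can be chained.
import math, random

class Plummer:
    """Analytical toy geometry: a Plummer sphere with scale length a."""
    def __init__(self, a=1.0):
        self.a = a

    def density(self, x, y, z):
        r2 = x * x + y * y + z * z
        return 3.0 / (4.0 * math.pi * self.a ** 3) * (1.0 + r2 / self.a ** 2) ** -2.5

    def random_position(self):
        # Inverse-transform sampling of the enclosed-mass profile
        # M(<r)/M = r^3 / (r^2 + a^2)^(3/2), with isotropic angles.
        u = min(max(random.random(), 1e-12), 1.0 - 1e-12)
        r = self.a / math.sqrt(u ** (-2.0 / 3.0) - 1.0)
        cos_t = random.uniform(-1.0, 1.0)
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        phi = random.uniform(0.0, 2.0 * math.pi)
        return (r * sin_t * math.cos(phi), r * sin_t * math.sin(phi), r * cos_t)

class ClumpyDecorator:
    """Decorator: reassigns a fraction of the mass to small Gaussian clumps."""
    def __init__(self, base, centers, fraction=0.3, sigma=0.05):
        self.base, self.centers = base, centers
        self.fraction, self.sigma = fraction, sigma

    def random_position(self):
        if random.random() < self.fraction:
            cx, cy, cz = random.choice(self.centers)
            g = random.gauss
            return (g(cx, self.sigma), g(cy, self.sigma), g(cz, self.sigma))
        return self.base.random_position()

# Decorators chain freely: here, a clumpy Plummer sphere.
model = ClumpyDecorator(Plummer(a=1.0), centers=[(1.0, 0.0, 0.0)])
x, y, z = model.random_position()
```

Because each decorator only needs the wrapped component's interface, arbitrarily deep chains (clumpy spiral toruses, etc.) cost no extra design work, which is the maintainability argument the paper makes.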

  20. Collective Temperature Anisotropy Instabilities in Intense Charged Particle Beams

    NASA Astrophysics Data System (ADS)

    Startsev, Edward

    2006-10-01

Periodic focusing accelerators, transport systems and storage rings have a wide range of applications, from basic scientific research in high energy and nuclear physics to ion-beam-driven high energy density physics and fusion, and spallation neutron sources. Of particular importance at the high beam currents and charge densities of practical interest are the effects of the intense self-fields produced by the beam space charge and current on the detailed equilibrium, stability and transport properties. Charged particle beams confined by external focusing fields are an example of a nonneutral plasma. A characteristic feature of such plasmas is the non-uniformity of the equilibrium density profiles and the nonlinearity of the self-fields, which makes detailed analytical investigation very difficult. The development and application of advanced numerical tools such as eigenmode codes [1] and Monte Carlo particle simulation methods [2] are often the only tractable approach to understanding the underlying physics of instabilities, familiar from electrically neutral plasmas, which may cause a degradation in beam quality. Two such instabilities are the electrostatic Harris instability [2] and the electromagnetic Weibel instability [1], both driven by the large temperature anisotropy which develops naturally in accelerators. Beam acceleration causes a large reduction in the longitudinal temperature and provides the free energy to drive collective temperature anisotropy instabilities. Such instabilities may lead to an increase in the longitudinal velocity spread, which will make focusing the beam difficult, and may impose a limit on the beam luminosity and the minimum spot size achievable in focusing experiments. This paper reviews recent advances in the theory and simulation of collective instabilities in intense charged particle beams caused by temperature anisotropy.
We also describe new simulation tools that have been developed to study these instabilities. The results of investigations that identify the instability growth rates, saturation levels, and conditions for quiescent beam propagation are also discussed. [1] E.A. Startsev and R.C. Davidson, Phys. Plasmas 10, 4829 (2003). [2] E.A. Startsev, R.C. Davidson and H. Qin, Phys. Rev. ST Accel. Beams 8, 124201 (2005).

  1. An ion accelerator for undergraduate research and teaching

    NASA Astrophysics Data System (ADS)

    Monce, Michael

    1997-04-01

We have recently upgraded our 400 kV, single-beam-line ion accelerator to a 1 MV, multiple-beam-line machine. This upgrade has greatly expanded the opportunities for student involvement in the laboratory. We will describe four areas of work in which students now participate. The first is the continuing research being conducted in excitations produced in ion-molecule collisions, which recently involved the use of digital imaging. The second area of research now opened up by the new accelerator involves PIXE. We are currently beginning a cross disciplinary study of archaeological specimens using PIXE and involving students from both anthropology and physics. Finally, two beam lines from the accelerator will be used for basic work in nuclear physics: Rutherford scattering and nuclear resonances. These two nuclear physics experiments will be integrated into our sophomore-junior level, year-long course in experimental physics.

  2. plasmaFoam: An OpenFOAM framework for computational plasma physics and chemistry

    NASA Astrophysics Data System (ADS)

    Venkattraman, Ayyaswamy; Verma, Abhishek Kumar

    2016-09-01

As emphasized in the 2012 Roadmap for low temperature plasmas (LTP), scientific computing has emerged as an essential tool for the investigation and prediction of the fundamental physical and chemical processes associated with these systems. While several in-house and commercial codes exist, with each having its own advantages and disadvantages, a common framework that can be developed by researchers from all over the world will likely accelerate the impact of computational studies on advances in low-temperature plasma physics and chemistry. In this regard, we present a finite volume computational toolbox to perform high-fidelity simulations of LTP systems. This framework, primarily based on the OpenFOAM solver suite, allows us to enhance our understanding of multiscale plasma phenomena by performing massively parallel, three-dimensional simulations on unstructured meshes using well-established high performance computing tools that are widely used in the computational fluid dynamics community. In this talk, we will present preliminary results obtained using the OpenFOAM-based solver suite with benchmark three-dimensional simulations of microplasma devices including both dielectric and plasma regions. We will also discuss the future outlook for the solver suite.

  3. Migration of Hazardous Substances through Soil. Part 4. Development of a Serial Batch Extraction Method and Application to the Accelerated Testing of Seven Industrial Wastes

    DTIC Science & Technology

    1987-09-01


  4. New estimation method of neutron skyshine for a high-energy particle accelerator

    NASA Astrophysics Data System (ADS)

    Oh, Joo-Hee; Jung, Nam-Suk; Lee, Hee-Seock; Ko, Seung-Kook

    2016-09-01

Skyshine is the dominant component of the prompt radiation dose at off-site locations. Several experimental studies have been performed to estimate the neutron skyshine at a few accelerator facilities. In this work, neutron transport from the source to off-site locations was simulated using the Monte Carlo codes FLUKA and PHITS. The transport paths were classified as skyshine, direct (transport), groundshine and multiple-shine in order to understand the contribution of each path and to develop a general evaluation method. The effect of each path was estimated in terms of the dose at distant locations. The neutron dose was calculated using the neutron energy spectra obtained from each detector, placed up to a maximum of 1 km from the accelerator. The highest altitude of the sky region in this simulation was set to 2 km above the floor of the accelerator facility. The initial model of this study was the 10 GeV electron accelerator PAL-XFEL. Different compositions and densities of air, soil and ordinary concrete were applied in the calculation, and their dependences were reviewed. The estimation method used in this study was compared with the well-known methods suggested by Rindi, Stevenson and Stapleton, and also with the simple code SHINE3. The results obtained using this method agreed well with those using Rindi's formula.
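Empirical skyshine expressions of the Rindi type are often written as an inverse-square falloff with an exponential attenuation term in the source-to-detector distance. A sketch of that functional form (the parameter values are placeholders, not values from this paper):

```python
# Sketch of a Rindi-type empirical skyshine estimate:
#   H(r) ~ a * Q * exp(-r / lam) / r**2
# with source strength Q, distance r, effective attenuation length lam, and
# fitted constant a. The default a and lam below are placeholders only.
import math

def skyshine_dose(r, Q, a=1.0, lam=330.0):
    """Dose estimate at distance r (m) from a source of strength Q."""
    return a * Q * math.exp(-r / lam) / r ** 2
```

Fitting a and lam to Monte Carlo spectra at detectors out to 1 km, as in the study's setup, would be how such a formula is calibrated in practice.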

  5. Accelerator science and technology in Europe: EuCARD 2012

    NASA Astrophysics Data System (ADS)

    Romaniuk, Ryszard S.

    2012-05-01

Accelerator science and technology is one of the key enablers of developments in particle physics, photon physics, and applications in medicine and industry. The paper presents a digest of the research results in the domain of accelerator science and technology in Europe shown during the third annual meeting of EuCARD - the European Coordination of Accelerator Research and Development. The conference concerns the building of research infrastructure, including advanced photonic and electronic systems for servicing large high-energy physics experiments. A few basic groups of such systems are debated: measurement-control networks of large geometrical extent, multichannel systems for the acquisition of large amounts of metrological data, and precision photonic networks for the distribution of reference time, frequency, and phase.

  6. Measurement of Coriolis Acceleration with a Smartphone

    NASA Astrophysics Data System (ADS)

    Shakur, Asif; Kraft, Jakob

    2016-05-01

Undergraduate physics laboratories seldom have experiments that measure the Coriolis acceleration. This has traditionally been the case owing to the inherent complexities of making such measurements. Articles on the experimental determination of the Coriolis acceleration are few and far between in the physics literature. However, because modern smartphones come with a raft of built-in sensors, we have a unique opportunity to experimentally determine the Coriolis acceleration conveniently in a pedagogically enlightening environment at modest cost by using student-owned smartphones. Here we employ the gyroscope and accelerometer in a smartphone to verify the dependence of Coriolis acceleration on the angular velocity of a rotating track and the speed of the sliding smartphone.
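The dependence being verified is the standard relation for a phone sliding radially on a turntable: with the rotation axis perpendicular to the slide velocity, the Coriolis acceleration magnitude is a_C = 2 ω v. A worked example with illustrative numbers (not the article's data):

```python
# Coriolis acceleration for a smartphone sliding at speed v on a track
# rotating at angular velocity omega (axis perpendicular to v): a_C = 2*omega*v.
# The gyroscope supplies omega; the accelerometer measures a_C directly.

def coriolis(omega, v):
    """Coriolis acceleration magnitude (m/s^2) for omega (rad/s), v (m/s)."""
    return 2.0 * omega * v

a_c = coriolis(3.0, 0.5)   # illustrative: 3 rad/s, 0.5 m/s -> 3.0 m/s^2
```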

  7. Status and Prospects of Hirfl Experiments on Nuclear Physics

    NASA Astrophysics Data System (ADS)

    Xu, H. S.; Zheng, C.; Xiao, G. Q.; Zhan, W. L.; Zhou, X. H.; Zhang, Y. H.; Sun, Z. Y.; Wang, J. S.; Gan, Z. G.; Huang, W. X.; Ma, X. W.

HIRFL is an accelerator complex consisting of 3 accelerators, 2 radioactive beam lines, 1 storage ring and a number of experimental setups. The research activities at HIRFL cover the fields of radio-biology, material science, atomic physics, and nuclear physics. This report mainly concentrates on nuclear physics experiments with the existing and planned experimental setups such as SHANS, RIBLL1, ETF, CSRe, PISA and HPLUS at HIRFL.

  8. Calculations of beam dynamics in Sandia linear electron accelerators, 1984

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poukey, J.W.; Coleman, P.D.

    1985-03-01

A number of code and analytic studies were made during 1984 which pertain to the Sandia linear accelerators MABE and RADLAC. In this report the authors summarize the important results of the calculations. New results include a better understanding of gap-induced radial oscillations, leakage currents in a typical MABE gap, emittance growth in a beam passing through a series of gaps, some new diocotron results, and the latest diode simulations for both accelerators. 23 references, 30 figures, 1 table.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedman, A; Kwan, J

Earlier this year, the U.S. Department of Energy Office of Fusion Energy Sciences approved the NDCX-II project, a second-generation Neutralized Drift Compression eXperiment. NDCX-II is a collaborative effort of scientists and engineers from Lawrence Berkeley National Laboratory (LBNL), Lawrence Livermore National Laboratory (LLNL), and the Princeton Plasma Physics Laboratory (PPPL), in a formal collaboration known as the Virtual National Laboratory for Heavy Ion Fusion Science (HIFS-VNL). Supported by $11 M of funding from the American Recovery and Reinvestment Act, construction at LBNL commenced in July of 2009, with completion anticipated in March of 2012. Applications of this facility will include studies of: the basic physics of the poorly understood 'warm dense matter' regime of temperatures around 1 eV and densities near solid, using uniform, volumetric ion heating of thin foil targets; ion energy coupling into an ablating plasma (such as that which occurs in an inertial fusion target) using beams with time-varying kinetic energy; space-charge-dominated ion beam dynamics; and beam focusing and pulse compression in neutralizing plasma. The machine will complement facilities at GSI in Darmstadt, Germany, but will employ lower ion kinetic energies and commensurately shorter stopping ranges in matter. Much of this research will contribute directly toward the collaboration's ultimate goal of electric power production via heavy-ion beam-driven inertial confinement fusion ('Heavy-Ion Fusion', or HIF). In inertial fusion, a target containing fusion fuel is heated by energetic 'driver' beams, and undergoes a miniature thermonuclear explosion. Currently the largest U.S. research program in inertial confinement is at Livermore's National Ignition Facility (NIF), a multibillion-dollar, stadium-sized laser facility optimized for studying physics issues relevant to nuclear stockpile stewardship.
Nonetheless, NIF is expected to establish the fundamental feasibility of fusion ignition on the laboratory scale, and thus advance this approach to fusion energy. Heavy ion accelerators have a number of attributes (such as efficiency, longevity, and use of magnetic fields for final focusing) that make them attractive candidates as Inertial Fusion Energy (IFE) drivers. As with LBNL's existing NDCX-I, the new machine will produce short ion pulses using the technique of neutralized drift compression. A head-to-tail velocity gradient is imparted to the beam, which then shortens as it drifts in neutralizing plasma that suppresses space-charge forces. NDCX-II will make extensive use of induction cells and other hardware from the decommissioned ATA facility at LLNL. Figure 1 shows the layout of the facility, to be sited in LBNL's Building 58 alongside the existing NDCX-I apparatus. This second-generation facility represents a significant upgrade from the existing NDCX-I. It will be extensible and reconfigurable; in the configuration that has received the most emphasis, each NDCX-II pulse will deliver 30 nC of ions at 3 MeV into a mm-scale spot onto a thin-foil target. Pulse compression to approximately 1 ns occurs in the accelerator as well as in the drift compression line; the beam is manipulated using suitably tailored voltage waveforms in the accelerating gaps. NDCX-II employs novel beam dynamics. To use the 200 kV Blumlein power supplies from ATA (blue cylinders in the figure), the pulse duration must first be reduced to less than 70 ns. This shortening is accomplished in an initial stage of non-neutral drift compression, downstream of the injector and the first few induction cells. The compression is sufficiently rapid that fewer than ten long-pulse waveform generators are needed, with Blumleins powering the rest of the acceleration.
Extensive simulation studies have enabled an attractive physics design; these employ both a new 1-D code (ASP) and the VNL's workhorse 2-D/3-D code Warp. Snapshots from a simulation movie (available online) appear in Fig. 2. Studies on a dedicated test stand are quantifying the performance of the ATA hardware and of pulsed solenoids that will provide transverse beam confinement (ions require much stronger fields than the electrons accelerated by ATA). For more information, see the recent article in the Berkeley Lab News and references therein. Joe Kwan is the NDCX-II project manager and Alex Friedman is the leader for the physics design.
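The drift-compression mechanism described above can be sketched with a minimal 1-D ballistic model: given a head-to-tail velocity tilt and fully neutralized space charge, each particle streams freely and the bunch shortens until the tilt is used up. All numbers below are illustrative, not NDCX-II parameters:

```python
# Minimal 1-D sketch of neutralized drift compression: a head-to-tail
# velocity tilt shortens the bunch as it drifts (space charge assumed fully
# neutralized, so particles stream freely). Numbers are illustrative.

def bunch_length(z0, v, t):
    """Bunch length after drifting for time t with fixed velocities."""
    z = [zi + vi * t for zi, vi in zip(z0, v)]
    return max(z) - min(z)

n = 11
L0 = 1.0        # initial bunch length (m)
v_mid = 1.0e6   # mean longitudinal velocity (m/s)
tilt = 0.05     # 5% head-to-tail fractional velocity spread

z0 = [L0 * i / (n - 1) for i in range(n)]
# Tail (z = 0) moves fastest, head slowest: a linear velocity tilt.
v = [v_mid * (1.0 + tilt * (0.5 - i / (n - 1))) for i in range(n)]

# Focal drift time at which the tilt has removed the full length L0.
t_focus = L0 / (v_mid * tilt)
L_min = bunch_length(z0, v, t_focus)   # ~0 for a perfectly linear tilt
```

In a real machine, longitudinal emittance and residual space charge set a finite minimum length; the tailored gap waveforms mentioned above shape the tilt so the focus lands on the target.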

  10. Test Particle Simulations of Electron Injection by the Bursty Bulk Flows (BBFs) using High Resolution Lyon-Fedder-Mobarry (LFM) Code

    NASA Astrophysics Data System (ADS)

    Eshetu, W. W.; Lyon, J.; Wiltberger, M. J.; Hudson, M. K.

    2017-12-01

Test particle simulations of electron injection by the bursty bulk flows (BBFs) have been done using a test particle tracer code [1] and the output fields of the Lyon-Fedder-Mobarry (LFM) global magnetohydrodynamics (MHD) code [2]. The MHD code was run with high resolution (oct resolution) and with specified solar wind conditions so as to reproduce the observed qualitative picture of the BBFs [3]. Test particles were injected so that they interact with earthward propagating BBFs. The result of the simulation shows that electrons are pushed ahead of the BBFs and accelerated into the inner magnetosphere. Once electrons are in the inner magnetosphere they are further energized by drift resonance with the azimuthal electric field. In addition, pitch angle scattering of electrons resulting in the violation of the conservation of the first adiabatic invariant has been observed. The violation of the first adiabatic invariant occurs as electrons cross a weak magnetic field region with a strong gradient of the field perturbed by the BBFs. References: 1. Kress, B. T., Hudson, M. K., Looper, M. D., Albert, J., Lyon, J. G., and Goodrich, C. C. (2007), Global MHD test particle simulations of >10 MeV radiation belt electrons during storm sudden commencement, J. Geophys. Res., 112, A09215, doi:10.1029/2006JA012218. 2. Lyon, J. G., Fedder, J. A., and Mobarry, C. M. (2004), The Lyon-Fedder-Mobarry (LFM) Global MHD Magnetospheric Simulation Code, J. Atmos. Solar-Terr. Phys., 66, 1333-1350, doi:10.1016/j.jastp. 3. Wiltberger, M., Merkin, V., Lyon, J. G., and Ohtani, S. (2015), High-resolution global magnetohydrodynamic simulation of bursty bulk flows, J. Geophys. Res. Space Physics, 120, 4555-4566, doi:10.1002/2015JA021080.
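The core of any such test-particle tracer is a particle pusher advanced through the MHD fields. The abstract does not specify the integrator, but the Boris scheme is a common choice for this kind of work; a self-contained sketch in normalized units with a uniform magnetic field standing in for interpolated LFM fields:

```python
# Sketch of a test-particle pusher using the Boris scheme (normalized units,
# non-relativistic). A real tracer would interpolate E and B from the MHD
# grid at each particle position instead of using uniform fields.
import math

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def boris_step(x, v, E, B, qm, dt):
    """One Boris step: half E-kick, B-rotation, half E-kick, then drift."""
    vm = [v[i] + 0.5 * qm * dt * E[i] for i in range(3)]
    t = [0.5 * qm * dt * B[i] for i in range(3)]
    t2 = sum(c * c for c in t)
    s = [2.0 * c / (1.0 + t2) for c in t]
    vp = [vm[i] + c for i, c in enumerate(cross(vm, t))]
    vpl = [vm[i] + c for i, c in enumerate(cross(vp, s))]
    vn = [vpl[i] + 0.5 * qm * dt * E[i] for i in range(3)]
    xn = [x[i] + vn[i] * dt for i in range(3)]
    return xn, vn

# Demo: gyration in a uniform field B = z-hat with E = 0.
x, v = [0.0, 0.0, 0.0], [1.0, 0.0, 0.0]
for _ in range(1000):
    x, v = boris_step(x, v, [0.0, 0.0, 0.0], [0.0, 0.0, 1.0], 1.0, 0.01)
speed = math.sqrt(sum(c * c for c in v))  # the B-rotation conserves |v|
```

The scheme's exact energy conservation in a pure magnetic field is why it is favored for long radiation-belt integrations like those in reference [1].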

  11. Using Intel Xeon Phi to accelerate the WRF TEMF planetary boundary layer scheme

    NASA Astrophysics Data System (ADS)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen

    2014-05-01

The Weather Research and Forecasting (WRF) model is designed for numerical weather prediction and atmospheric research. The WRF software infrastructure consists of several components such as dynamic solvers and physics schemes. Numerical models are used to resolve the large-scale flow, while subgrid-scale parameterizations estimate small-scale properties (e.g., boundary layer turbulence and convection, clouds, radiation). These have a significant influence on the resolved scale due to the complex nonlinear nature of the atmosphere. For the cloudy planetary boundary layer (PBL), it is fundamental to parameterize vertical turbulent fluxes and subgrid-scale condensation in a realistic manner. A parameterization based on the Total Energy - Mass Flux (TEMF) approach, which unifies turbulence and moist convection components, produces a better result than the other PBL schemes. For that reason, the TEMF scheme is chosen as the PBL scheme we optimized for Intel Many Integrated Core (MIC), which ushers in a new era of supercomputing speed, performance, and compatibility. It allows developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our optimization results for the TEMF planetary boundary layer scheme. The optimizations that were performed were quite generic in nature. They included vectorization of the code to utilize vector units inside each CPU. Furthermore, memory access was improved by scalarizing some of the intermediate arrays. The results show that the optimization improved MIC performance by 14.8x. Furthermore, the optimizations increased CPU performance by 2.6x compared to the original multi-threaded code on a quad core Intel Xeon E5-2603 running at 1.8 GHz. Compared to the optimized code running on a single CPU socket, the optimized MIC code is 6.2x faster.
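The "scalarizing intermediate arrays" optimization named above is generic enough to illustrate outside WRF. A hypothetical kernel (not TEMF code) showing the transformation: the per-element temporary buffer is replaced by a scalar, which reduces memory traffic and helps compilers keep the loop in vector registers:

```python
# Illustration (hypothetical kernel, not WRF/TEMF source) of scalarizing an
# intermediate array: the temporary buffer tmp[] becomes a scalar t inside
# the loop. Both versions compute out[i] = a[i]*b[i] + a[i].

def kernel_naive(a, b):
    tmp = [0.0] * len(a)            # intermediate array written then re-read
    out = [0.0] * len(a)
    for i in range(len(a)):
        tmp[i] = a[i] * b[i]
        out[i] = tmp[i] + a[i]
    return out

def kernel_scalarized(a, b):
    out = [0.0] * len(a)
    for i in range(len(a)):
        t = a[i] * b[i]             # scalar replaces tmp[i]; no extra array
        out[i] = t + a[i]
    return out
```

In Fortran or C, the scalarized form also removes a potential aliasing hazard between the temporary and the output, which is often what unlocks auto-vectorization.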

  12. Modern gyrokinetic particle-in-cell simulation of fusion plasmas on top supercomputers

    DOE PAGES

    Wang, Bei; Ethier, Stephane; Tang, William; ...

    2017-06-29

The Gyrokinetic Toroidal Code at Princeton (GTC-P) is a highly scalable and portable particle-in-cell (PIC) code. It solves the 5D Vlasov-Poisson equation featuring efficient utilization of modern parallel computer architectures at the petascale and beyond. Motivated by the goal of developing a modern code capable of dealing with the physics challenge of increasing problem size with sufficient resolution, new thread-level optimizations have been introduced as well as a key additional domain decomposition. GTC-P's multiple levels of parallelism, including inter-node 2D domain decomposition and particle decomposition, as well as intra-node shared memory partition and vectorization have enabled pushing the scalability of the PIC method to extreme computational scales. In this paper, we describe the methods developed to build a highly parallelized PIC code across a broad range of supercomputer designs. This particularly includes implementations on heterogeneous systems using NVIDIA GPU accelerators and Intel Xeon Phi (MIC) co-processors and performance comparisons with state-of-the-art homogeneous HPC systems such as Blue Gene/Q. New discovery science capabilities in the magnetic fusion energy application domain are enabled, including investigations of Ion-Temperature-Gradient (ITG) driven turbulence simulations with unprecedented spatial resolution and long temporal duration. Performance studies with realistic fusion experimental parameters are carried out on multiple supercomputing systems spanning a wide range of cache capacities, cache-sharing configurations, memory bandwidth, interconnects and network topologies. These performance comparisons using a realistic discovery-science-capable domain application code provide valuable insights on optimization techniques across one of the broadest sets of current high-end computing platforms worldwide.
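The layered parallelism described above (a 2D spatial domain decomposition plus an additional particle decomposition within each domain) amounts to a mapping from decomposition indices to MPI ranks. An illustrative sketch of that mapping (not GTC-P source code; the index order is an assumption):

```python
# Illustrative sketch of layered PIC parallelism: a 2-D domain decomposition
# over (toroidal, radial) subdomains, with an additional particle
# decomposition sharing each spatial subdomain among several ranks.
# This is not GTC-P's actual rank layout.

def rank_of(tor, rad, pcopy, n_tor, n_rad, n_pcopy):
    """Flatten (toroidal, radial, particle-copy) indices to a linear rank."""
    assert 0 <= tor < n_tor and 0 <= rad < n_rad and 0 <= pcopy < n_pcopy
    return (tor * n_rad + rad) * n_pcopy + pcopy

# 16 toroidal x 8 radial subdomains, 4-way particle decomposition: 512 ranks.
n_ranks = 16 * 8 * 4
r = rank_of(tor=3, rad=2, pcopy=1, n_tor=16, n_rad=8, n_pcopy=4)
```

The particle-decomposition ranks sharing a subdomain hold disjoint subsets of its particles but replicate its grid, so grid reductions happen within that small group while the 2D decomposition carries the bulk of the scaling.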

  13. Modern gyrokinetic particle-in-cell simulation of fusion plasmas on top supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Bei; Ethier, Stephane; Tang, William

The Gyrokinetic Toroidal Code at Princeton (GTC-P) is a highly scalable and portable particle-in-cell (PIC) code. It solves the 5D Vlasov-Poisson equation featuring efficient utilization of modern parallel computer architectures at the petascale and beyond. Motivated by the goal of developing a modern code capable of dealing with the physics challenge of increasing problem size with sufficient resolution, new thread-level optimizations have been introduced as well as a key additional domain decomposition. GTC-P's multiple levels of parallelism, including inter-node 2D domain decomposition and particle decomposition, as well as intra-node shared memory partition and vectorization have enabled pushing the scalability of the PIC method to extreme computational scales. In this paper, we describe the methods developed to build a highly parallelized PIC code across a broad range of supercomputer designs. This particularly includes implementations on heterogeneous systems using NVIDIA GPU accelerators and Intel Xeon Phi (MIC) co-processors and performance comparisons with state-of-the-art homogeneous HPC systems such as Blue Gene/Q. New discovery science capabilities in the magnetic fusion energy application domain are enabled, including investigations of Ion-Temperature-Gradient (ITG) driven turbulence simulations with unprecedented spatial resolution and long temporal duration. Performance studies with realistic fusion experimental parameters are carried out on multiple supercomputing systems spanning a wide range of cache capacities, cache-sharing configurations, memory bandwidth, interconnects and network topologies. These performance comparisons using a realistic discovery-science-capable domain application code provide valuable insights on optimization techniques across one of the broadest sets of current high-end computing platforms worldwide.

  14. Proposal for an Accelerator R&D User Facility at Fermilab's Advanced Superconducting Test Accelerator (ASTA)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Church, M.; Edwards, H.; Harms, E.

    2013-10-01

    Fermilab is the nation’s particle physics laboratory, supported by the DOE Office of High Energy Physics (OHEP). Fermilab is a world leader in accelerators, with a demonstrated track record, spanning four decades, of excellence in accelerator science and technology. We describe the significant opportunity to complete, in a highly leveraged manner, a unique accelerator research facility that supports the broad strategic goals in accelerator science and technology within the OHEP. While the US accelerator-based HEP program is oriented toward the Intensity Frontier, which requires modern superconducting linear accelerators and advanced high-intensity storage rings, there are no accelerator test facilities that support the accelerator science of the Intensity Frontier. Further, nearly all proposed future accelerators for Discovery Science will rely on superconducting radiofrequency (SRF) acceleration, yet there are no dedicated test facilities to study SRF capabilities for beam acceleration and manipulation in prototypic conditions. Finally, there is a wide range of experiments and research programs beyond particle physics that require the unique beam parameters that will only be available at Fermilab’s Advanced Superconducting Test Accelerator (ASTA). To address these needs we submit this proposal for an Accelerator R&D User Facility at ASTA. The ASTA program is based on the capability provided by an SRF linac (which provides electron beams from 50 MeV to nearly 1 GeV) and a small storage ring (with the ability to store either electrons or protons) to enable a broad range of beam-based experiments to study fundamental limitations to beam intensity and to develop transformative approaches to particle-beam generation, acceleration and manipulation which cannot be done elsewhere. It will also establish a unique resource for R&D towards Energy Frontier facilities and a test-bed for SRF accelerators and high brightness beam applications in support of the OHEP mission of Accelerator Stewardship.

  15. Symplectic orbit and spin tracking code for all-electric storage rings

    NASA Astrophysics Data System (ADS)

    Talman, Richard M.; Talman, John D.

    2015-07-01

    Proposed methods for measuring the electric dipole moment (EDM) of the proton use an intense, polarized proton beam stored in an all-electric storage ring "trap." At the "magic" kinetic energy of 232.792 MeV, proton spins are "frozen," i.e., always parallel to the instantaneous particle momentum. Energy deviation from the magic value causes in-plane precession of the spin relative to the momentum. Any nonzero EDM value will cause out-of-plane precession; measuring this precession is the basis for the EDM determination. A proposed implementation of this measurement shows that a proton EDM value of 10^-29 e·cm or greater will produce a statistically significant, measurable precession after multiply repeated runs, assuming small beam depolarization during 1000 s runs, with high enough precision to test models of the early universe developed to account for the present-day particle/antiparticle population imbalance. This paper describes an accelerator simulation code, eteapot, a new component of the Unified Accelerator Libraries (ual), to be used for long-term tracking of particle orbits and spins in electric bend accelerators, in order to simulate EDM storage ring experiments. Though qualitatively much like magnetic rings, the nonconstant particle velocity in electric rings gives them significantly different properties, especially in weak focusing rings. Like the earlier code teapot (for magnetic ring simulation), this code performs exact tracking in an idealized (approximate) lattice rather than the more conventional approach, which is approximate tracking in a more nearly exact lattice. The Bargmann-Michel-Telegdi (BMT) equation describing the evolution of spin vectors through idealized bend elements is also solved exactly, a result original to this paper. Furthermore, the idealization permits the code to be exactly symplectic (with no artificial "symplectification"). Any residual spurious damping or antidamping is sufficiently small to permit reliable tracking for long times, such as the 1000 s assumed in estimating the achievable EDM precision. This paper documents in detail the theoretical formulation implemented in eteapot. An accompanying paper describes the practical application of the eteapot code in the Unified Accelerator Libraries (ual) environment to "resurrect," or reverse engineer, the "AGS-analog" all-electric ring built at Brookhaven National Laboratory in 1954. Of the (very few) all-electric rings ever commissioned, the AGS-analog ring is the only relativistic one and is the closest to what is needed for measuring proton (or, even more so, electron) EDMs. The companion paper also describes preliminary lattice studies for the planned proton EDM storage rings as well as testing the code for long-time orbit and spin tracking.
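
    The quoted "magic" kinetic energy follows from the frozen-spin condition for an all-electric ring, gamma_magic = sqrt(1 + 1/a), where a = (g-2)/2 is the proton's anomalous magnetic moment. A short sketch (the constant values and function name are illustrative, not taken from eteapot):

```python
import math

M_P_MEV = 938.27208816   # proton rest energy [MeV] (CODATA value)
A_P = 1.79284734463      # proton magnetic moment anomaly (g-2)/2

def magic_kinetic_energy_mev():
    """Kinetic energy at which proton spins are 'frozen' in an
    all-electric storage ring: gamma_magic = sqrt(1 + 1/a)."""
    gamma = math.sqrt(1.0 + 1.0 / A_P)
    return (gamma - 1.0) * M_P_MEV
```

Evaluating this reproduces the 232.792 MeV figure quoted in the abstract.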

  16. Can Accelerators Accelerate Learning?

    NASA Astrophysics Data System (ADS)

    Santos, A. C. F.; Fonseca, P.; Coelho, L. F. S.

    2009-03-01

    The 'Young Talented' education program developed by the Brazilian State Funding Agency (FAPERJ) [1] makes it possible for students from public high schools to perform activities in scientific laboratories. In the Atomic and Molecular Physics Laboratory at the Federal University of Rio de Janeiro (UFRJ), the students are confronted with modern research tools like the 1.7 MV ion accelerator. Being a user-friendly machine, the accelerator is easily manageable by the students, who can perform simple hands-on activities, stimulating interest in physics and bringing them close to modern laboratory techniques.

  17. Software Tools for Stochastic Simulations of Turbulence

    DTIC Science & Technology

    2015-08-28

    client interface to FTI. Specific client programs using this interface include the weather forecasting code WRF; the high energy physics code FLASH; and two locally constructed fluid codes.

  18. Seismo-Acoustic Numerical Investigation of Land Impacts, Water Impacts, or Air Bursts of Asteroids

    NASA Astrophysics Data System (ADS)

    Ezzedine, S. M.; Miller, P. L.; Dearborn, D. S.

    2016-12-01

    The annual probability of an asteroid impact is low, but over time, such catastrophic events are inevitable. Interest in assessing the impact consequences has led us to develop a physics-based framework to seamlessly simulate the event from entry to impact, including air, water and ground shock propagation and wave generation. The non-linear effects are simulated using the hydrodynamics code GEODYN. As effects propagate outward, they become a wave source for the linear-elastic-wave propagation codes SAW or SWWP, depending on whether the asteroid impacts the land or the ocean, respectively. The GEODYN-SAW-SWWP coupling is based on the structured adaptive-mesh-refinement infrastructure, SAMRAI, and has been used in FEMA table-top exercises conducted in 2013 and 2014, and more recently, the 2015 Planetary Defense Conference exercise. Moreover, during atmospheric entry, asteroids create an acoustic trace that could be used to infer several physical characteristics of the asteroid itself. Using SAW we explore the physical parameter space in order to rank the most important characteristics. Results from these simulations provide an estimate of onshore and offshore effects and can inform more sophisticated inundation and structural models. The capabilities of this methodology are illustrated by providing results for different impact locations, and an exploration of the effect of asteroid size on the waves arriving at the shoreline of area cities. We constructed the maximum and minimum envelopes of water-wave heights or acceleration spectra given the size of the asteroid and the location of the impact along the risk corridor. Such profiles can inform emergency response and disaster-mitigation efforts. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  19. Seismo-Acoustic Numerical Investigation of Land Impacts, Water Impacts, or Air Bursts of Asteroids

    NASA Astrophysics Data System (ADS)

    Ezzedine, S. M.; Dearborn, D. S.; Miller, P. L.

    2017-12-01

    The annual probability of an asteroid impact is low, but over time, such catastrophic events are inevitable. Interest in assessing the impact consequences has led us to develop a physics-based framework to seamlessly simulate the event from entry to impact, including air, water and ground shock propagation and wave generation. The non-linear effects are simulated using the hydrodynamics code GEODYN. As effects propagate outward, they become a wave source for the linear-elastic-wave propagation codes SAW or SWWP, depending on whether the asteroid impacts the land or the ocean, respectively. The GEODYN-SAW-SWWP coupling is based on the structured adaptive-mesh-refinement infrastructure, SAMRAI, and has been used in FEMA table-top exercises conducted in 2013 and 2014, and more recently, the 2015 Planetary Defense Conference exercise. Moreover, during atmospheric entry, asteroids create an acoustic trace that could be used to infer several physical characteristics of the asteroid itself. Using SAW we explore the physical parameter space in order to rank the most important characteristics. Results from these simulations provide an estimate of onshore and offshore effects and can inform more sophisticated inundation and structural models. The capabilities of this methodology are illustrated by providing results for different impact locations, and an exploration of the effect of asteroid size on the waves arriving at the shoreline of area cities. We constructed the maximum and minimum envelopes of water-wave heights or acceleration spectra given the size of the asteroid and the location of the impact along the risk corridor. Such profiles can inform emergency response and disaster-mitigation efforts. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  20. Steady-State Ion Beam Modeling with MICHELLE

    NASA Astrophysics Data System (ADS)

    Petillo, John

    2003-10-01

    There is a need to efficiently model ion beam physics for ion implantation, chemical vapor deposition, and ion thrusters. Common to all is the need for three-dimensional (3D) simulation of volumetric ion sources, ion acceleration, and optics, with the ability to model charge exchange of the ion beam with a background neutral gas. Two pieces of physics stand out as significant: the modeling of the volumetric source, and charge exchange. In the MICHELLE code, the method for modeling the plasma sheath in ion sources assumes that the electron distribution function is a Maxwellian function of electrostatic potential over electron temperature. Charge exchange is the process by which a neutral background gas atom exchanges an electron with a "fast" charged particle streaming through the gas. An efficient method for capturing this is essential, and the model presented is based on semi-empirical collision cross section functions. This appears to be the first steady-state 3D algorithm of its type to contain multiple generations of charge exchange, work with multiple species and multiple charge state beam/source particles simultaneously, take into account self-consistent space charge effects, and track the subsequent fast neutral particles. The solution used by MICHELLE is to combine finite element analysis with particle-in-cell (PIC) methods. The basic physics model is the equilibrium steady-state application of the electrostatic PIC approximation employing a conformal computational mesh. The foundation stems from the same basic model introduced in codes such as EGUN. Here, Poisson's equation is used to self-consistently include the effects of space charge on the fields, and the relativistic Lorentz equation is used to integrate the particle trajectories through those fields. The presentation will consider the complexity of modeling ion thrusters.
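
    As a hedged illustration of the cross-section-based charge-exchange model described above (this is not MICHELLE's routine; the exponential survival law and the parameter values are generic assumptions), each macroparticle can be converted to a fast neutral with probability 1 - exp(-n·sigma·dx) per transport step:

```python
import math
import random

def charge_exchange_events(sigma_cm2, n_cm3, step_cm, n_macro, seed=3):
    """Count macroparticles undergoing charge exchange in one transport
    step: survival over a path dx is exp(-n*sigma*dx), so each particle
    converts with probability 1 - exp(-n*sigma*dx)."""
    rng = random.Random(seed)
    p_exchange = 1.0 - math.exp(-n_cm3 * sigma_cm2 * step_cm)
    return sum(1 for _ in range(n_macro) if rng.random() < p_exchange)
```

For example, with sigma = 1e-15 cm^2, n = 1e13 cm^-3 and a 10 cm step, about 9.5% of macroparticles convert per step; a steady-state code would then track these fast neutrals separately.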

  1. Processing Motion: Using Code to Teach Newtonian Physics

    NASA Astrophysics Data System (ADS)

    Massey, M. Ryan

    Prior to instruction, students often possess a common-sense view of motion, which is inconsistent with Newtonian physics. Effective physics lessons therefore involve conceptual change. To provide a theoretical explanation for concepts and how they change, the triangulation model brings together key attributes of prototypes, exemplars, theories, Bayesian learning, ontological categories, and the causal model theory. The triangulation model provides a theoretical rationale for why coding is a viable method for physics instruction. As an experiment, thirty-two adolescent students participated in summer coding academies to learn how to design Newtonian simulations. Conceptual and attitudinal data were collected using the Force Concept Inventory and the Colorado Learning Attitudes about Science Survey. Results suggest that coding is an effective means for teaching Newtonian physics.
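
    A typical student exercise of the kind such a curriculum might include: a few lines of semi-implicit Euler integration make Newton's second law concrete. The function and parameters below are illustrative assumptions, not material from the study itself.

```python
def simulate_projectile(v0x, v0y, dt=0.001, g=9.81):
    """Semi-implicit Euler integration of projectile motion: apply the
    constant gravitational force, then advance the position."""
    x = y = 0.0
    vx, vy = v0x, v0y
    while True:
        vy -= g * dt          # F = ma with constant downward force
        x += vx * dt
        y += vy * dt
        if y <= 0.0:
            return x          # horizontal range at landing
```

Students can check the simulation against the analytic range 2*v0x*v0y/g, which is a useful lesson in discretization error in its own right.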

  2. Advanced Multi-Physics (AMP)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Philip, Bobby

    2012-06-01

    The Advanced Multi-Physics (AMP) code, in its present form, will allow a user to build a multi-physics application code for existing mechanics and diffusion operators and extend them with user-defined material models and new physics operators. There are examples that demonstrate mechanics, thermo-mechanics, coupled diffusion, and mechanical contact. The AMP code is designed to leverage a variety of mathematical solvers (PETSc, Trilinos, SUNDIALS, and AMP solvers) and mesh databases (LibMesh and AMP) in a consistent interchangeable approach.

  3. Automated and Assistive Tools for Accelerated Code migration of Scientific Computing on to Heterogeneous MultiCore Systems

    DTIC Science & Technology

    2017-04-13

    Applications ported to OmpSs include a basic image processing algorithm, a mini application representative of an ocean modelling code, a parallel benchmark, and a communication avoiding version of the QR algorithm. Further, several improvements to the OmpSs model were made, along with a port of the dynamic load balancing library to OmpSs. Finally, several updates to the tools infrastructure were accomplished.

  4. SU-E-T-493: Accelerated Monte Carlo Methods for Photon Dosimetry Using a Dual-GPU System and CUDA.

    PubMed

    Liu, T; Ding, A; Xu, X

    2012-06-01

    To develop a Graphics Processing Unit (GPU) based Monte Carlo (MC) code that accelerates dose calculations on a dual-GPU system. We simulated a clinical case of prostate cancer treatment. A voxelized abdomen phantom derived from 120 CT slices was used, containing 218×126×60 voxels, and a GE LightSpeed 16-MDCT scanner was modeled. A CPU version of the MC code was first developed in C++ and tested on an Intel Xeon X5660 2.8 GHz CPU; it was then translated into a GPU version using CUDA C 4.1 and run on a dual Tesla M2090 GPU system. The code featured automatic assignment of simulation tasks to multiple GPUs, as well as accurate calculation of energy- and material-dependent cross-sections. Double-precision floating point format was used for accuracy. Doses to the rectum, prostate, bladder and femoral heads were calculated. When running on a single GPU, the MC GPU code was found to be 19 times faster than the CPU code and 42 times faster than MCNPX. These speedup factors were doubled on the dual-GPU system. The dose results were benchmarked against MCNPX, and a maximum difference of 1% was observed when the relative error was kept below 0.1%. A GPU-based MC code was developed for dose calculations using detailed patient and CT scanner models. Efficiency and accuracy were both guaranteed in this code. Scalability of the code was confirmed on the dual-GPU system. © 2012 American Association of Physicists in Medicine.
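
    The core sampling loop that such a GPU port parallelizes, one photon history per thread, can be sketched on the CPU. This is a generic Beer-Lambert survival estimate, not the authors' code; the attenuation coefficient and slab geometry are illustrative assumptions.

```python
import math
import random

def transmitted_fraction(mu_per_cm, thickness_cm, n=100000, seed=7):
    """Monte Carlo estimate of uncollided photon transmission through a
    slab: free path lengths are sampled from s = -ln(u)/mu, and a photon
    is transmitted if its first interaction lies beyond the slab."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if -math.log(rng.random()) / mu_per_cm > thickness_cm)
    return hits / n
```

The estimate converges to exp(-mu*t); each history is independent, which is why mapping one history per GPU thread scales so well.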

  5. Acceleration of neutrons in a scheme of a tautochronous mathematical pendulum (physical principles)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rivlin, Lev A

    We consider the physical principles of neutron acceleration through a multiple synchronous interaction with a gradient rf magnetic field in a scheme of a tautochronous mathematical pendulum. (laser applications and other aspects of quantum electronics)

  6. Emulating RRTMG Radiation with Deep Neural Networks for the Accelerated Model for Climate and Energy

    NASA Astrophysics Data System (ADS)

    Pal, A.; Norman, M. R.

    2017-12-01

    The RRTMG radiation scheme in the Accelerated Model for Climate and Energy Multi-scale Model Framework (ACME-MMF) is a bottleneck and consumes approximately 50% of the computational time. Simulating a case with the RRTMG radiation scheme in ACME-MMF at high throughput and high resolution will therefore require a speed-up of this calculation while retaining physical fidelity. In this study, RRTMG radiation is emulated with Deep Neural Networks (DNNs). The first step towards this goal is to run a case with ACME-MMF and generate input data sets for the DNNs. A principal component analysis of these input data sets is carried out. Artificial data sets are then created from the original ones to cover a wider input space, and are fed to a standalone RRTMG radiation scheme to generate outputs in a cost-effective manner. These input-output pairs are used to train DNNs of multiple architectures (DNN 1). Another DNN (DNN 2) is trained on the inputs to predict the emulation error, and a reverse emulation is trained to map the outputs back to the inputs. An error-controlled code is developed from the two DNNs and will determine when, or if, the original parameterization needs to be used.
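
    The principal component analysis step mentioned above can be sketched as an SVD of the centered input matrix. This is a generic sketch; the variable names and the use of NumPy are assumptions, not details from the study.

```python
import numpy as np

def principal_components(X, k):
    """PCA of a samples-by-features matrix via SVD, of the kind used to
    characterize an input space before generating surrogate training
    data. Returns the top-k components and their explained-variance
    ratios."""
    Xc = X - X.mean(axis=0)              # center each feature
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = s**2 / (len(X) - 1)            # variance along each component
    return Vt[:k], var[:k] / var.sum()
```

If a few components explain most of the variance, artificial samples for surrogate training can be drawn in that reduced space rather than over all raw inputs.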

  7. Beam focal spot position determination for an Elekta linac with the Agility® head; practical guide with a ready-to-go procedure.

    PubMed

    Chojnowski, Jacek M; Taylor, Lee M; Sykes, Jonathan R; Thwaites, David I

    2018-05-14

    A novel phantomless, EPID-based method of measuring the beam focal spot offset of a linear accelerator was proposed and validated for Varian machines. In this method, one set of jaws and the MLC were utilized to form a symmetric field, and then a 180° collimator rotation was utilized to determine the radiation isocenter defined by the jaws and the MLC, respectively. The difference between these two isocenters is directly correlated with the beam focal spot offset of the linear accelerator. In the current work, the method has been considered for Elekta linacs. An Elekta linac with the Agility® head does not have two sets of jaws; therefore, a modified method is presented making use of one set of diaphragms, the MLC and a full 360° collimator rotation. The modified method has been tested on two Elekta Synergy® linacs with Agility® heads and independently validated. A practical guide with instructions and a MATLAB® code is attached for easy implementation. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  8. Long-term activity recognition from wristwatch accelerometer data.

    PubMed

    Garcia-Ceja, Enrique; Brena, Ramon F; Carrasco-Jimenez, Jose C; Garrido, Leonardo

    2014-11-27

    With the development of wearable devices that have several embedded sensors, it is possible to collect data that can be analyzed in order to understand the user's needs and provide personalized services. Examples of these types of devices are smartphones, fitness bracelets, and smartwatches, to mention a few. In recent years, several works have used these devices to recognize simple activities like running, walking, sleeping, and other physical activities. There has also been research on recognizing complex activities like cooking, playing sports, and taking medication, but these generally require the installation of external sensors that may become obtrusive to the user. In this work we used acceleration data from a wristwatch in order to identify long-term activities. We compare the use of Hidden Markov Models and Conditional Random Fields for the segmentation task. We also added prior knowledge about activity durations into the models by coding it as constraints, and sequence patterns were added in the form of feature functions. We also performed subclassing in order to deal with the problem of intra-class fragmentation, which arises when the same label is applied to activities that are conceptually the same but very different from the acceleration point of view.
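
    Hidden Markov Model segmentation of the kind compared here rests on Viterbi decoding: recovering the most probable activity sequence given the observations. A minimal sketch follows; the interface and any toy probabilities used with it are illustrative, not the paper's models or data.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state sequence for a discrete HMM.
    V[t][s] stores (best probability of reaching s at time t, best
    predecessor state), enabling backtracking at the end."""
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for o in obs[1:]:
        V.append({s: max(((V[-1][p][0] * trans_p[p][s] * emit_p[s][o], p)
                          for p in states), key=lambda t: t[0])
                  for s in states})
    # backtrack from the best final state
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for layer in reversed(V[1:]):
        path.append(layer[path[-1]][1])
    return path[::-1]
```

In practice log-probabilities are used to avoid underflow over long sequences, and duration constraints (as in the paper) restrict which transitions are allowed.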

  9. A methodology for the rigorous verification of plasma simulation codes

    NASA Astrophysics Data System (ADS)

    Riva, Fabio

    2016-10-01

    The methodology used to assess the reliability of numerical simulation codes constitutes the Verification and Validation (V&V) procedure. V&V comprises two separate tasks: verification, a mathematical exercise intended to assess that the physical model is correctly solved, and validation, which determines the consistency of the code results, and therefore of the physical model, with experimental data. In the present talk we focus our attention on verification, which in turn comprises code verification, intended to assess that a physical model is correctly implemented in a simulation code, and solution verification, which quantifies the numerical error affecting a simulation. Bridging the gap between plasma physics and other scientific domains, we introduced for the first time in our domain a rigorous methodology for code verification, based on the method of manufactured solutions, as well as a solution verification based on the Richardson extrapolation. This methodology was applied to GBS, a three-dimensional fluid code based on a finite difference scheme, used to investigate plasma turbulence in basic plasma physics experiments and in the tokamak scrape-off layer. Overcoming the difficulty of dealing with a numerical method intrinsically affected by statistical noise, we have now generalized the rigorous verification methodology to simulation codes based on the particle-in-cell algorithm, which are employed to solve the Vlasov equation in the investigation of a number of plasma physics phenomena.
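
    The method of manufactured solutions can be demonstrated on a toy problem: pick u(x) = sin(x), derive the forcing term analytically, solve with a second-order scheme on two grids, and confirm the observed convergence order. The following is a self-contained sketch under those assumptions, unrelated to the GBS code itself.

```python
import math

def solve_poisson_error(n):
    """Second-order FD solve of -u'' = f on [0, pi] with u(0)=u(pi)=0,
    where f is manufactured from the exact solution u(x) = sin(x)
    (so f = sin(x)). Returns the max-norm error on n interior points."""
    h = math.pi / (n + 1)
    x = [(i + 1) * h for i in range(n)]
    f = [math.sin(xi) for xi in x]
    # Thomas algorithm for the tridiagonal system (-1, 2, -1) u = h^2 f
    a, b, c = -1.0, 2.0, -1.0
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c / b, f[0] * h * h / b
    for i in range(1, n):
        m = b - a * cp[i - 1]
        cp[i] = c / m
        dp[i] = (f[i] * h * h - a * dp[i - 1]) / m
    u = [0.0] * n
    u[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    return max(abs(ui - math.sin(xi)) for ui, xi in zip(u, x))

def observed_order(n1=39, n2=79):
    """Convergence order from two grids whose spacing differs by 2x
    (h = pi/40 vs pi/80), in the spirit of Richardson extrapolation."""
    return math.log(solve_poisson_error(n1)
                    / solve_poisson_error(n2)) / math.log(2.0)
```

For a correctly implemented second-order scheme the observed order is close to 2; a lower order flags an implementation error, which is the point of code verification.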

  10. Reliability enhancement of Navier-Stokes codes through convergence acceleration

    NASA Technical Reports Server (NTRS)

    Merkle, Charles L.; Dulikravich, George S.

    1995-01-01

    Methods for enhancing the reliability of Navier-Stokes computer codes through improving convergence characteristics are presented. The improving of these characteristics decreases the likelihood of code unreliability and user interventions in a design environment. The problem referred to as 'stiffness' in the governing equations for propulsion-related flowfields is investigated, particularly in regard to common sources of equation stiffness that lead to convergence degradation of CFD algorithms. Von Neumann stability theory is employed as a tool to study the convergence difficulties involved. Based on the stability results, improved algorithms are devised to ensure efficient convergence in different situations. A number of test cases are considered to confirm a correlation between stability theory and numerical convergence. Examples of turbulent and reacting flows are presented, and a generalized form of the preconditioning matrix is derived to handle these problems, i.e., the problems involving additional differential equations for describing the transport of turbulent kinetic energy, dissipation rate and chemical species. Algorithms for unsteady computations are considered. The extension of the preconditioning techniques and algorithms derived for Navier-Stokes computations to three-dimensional flow problems is discussed. New methods to accelerate the convergence of iterative schemes for the numerical integration of systems of partial differential equations are developed, with a special emphasis on the acceleration of convergence on highly clustered grids.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pang, Xiaoying; Rybarcyk, Larry

    HPSim is a GPU-accelerated online multi-particle beam dynamics simulation tool for ion linacs. It was originally developed for use on the Los Alamos 800-MeV proton linac. It is a "z-code" that contains typical linac beam transport elements. The linac RF-gap transformation utilizes transit-time factors to calculate the beam acceleration therein. The space-charge effects are computed using the 2D SCHEFF (Space CHarge EFFect) algorithm, which calculates the radial and longitudinal space charge forces for cylindrically symmetric beam distributions. Other space-charge routines to be incorporated include the 3D PICNIC and a 3D Poisson solver. HPSim can simulate beam dynamics in drift tube linacs (DTLs) and coupled cavity linacs (CCLs). Elliptical superconducting cavity (SC) structures will also be incorporated into the code. The computational core of the code is written in C++ and accelerated using NVIDIA CUDA technology. Users access the core code, which is wrapped in Python/C APIs, via Python scripts that enable ease-of-use and automation of the simulations. The overall linac description including the EPICS PV machine control parameters is kept in an SQLite database that also contains calibration and conversion factors required to transform the machine set points into model values used in the simulation.
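
    The transit-time-factor treatment of an RF gap mentioned above reduces, in the simplest uniform-field approximation, to the textbook formula T = sin(pi*g/(beta*lambda)) / (pi*g/(beta*lambda)). The sketch below uses that formula, not HPSim's actual implementation, and the example frequency and beta in the usage note are illustrative.

```python
import math

C_M_PER_S = 299792458.0  # speed of light

def transit_time_factor(gap_m, beta, freq_hz):
    """Transit-time factor for a uniform-field RF gap: the effective
    voltage seen by a particle crossing the gap is T times the peak gap
    voltage, because the field oscillates during the crossing."""
    lam = C_M_PER_S / freq_hz
    x = math.pi * gap_m / (beta * lam)
    return math.sin(x) / x
```

For example, a 2 cm gap at 201.25 MHz seen by a beta = 0.43 proton gives T just below unity; T always lies between 0 and 1 and tends to 1 as the gap shrinks.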

  12. Transform coding for hardware-accelerated volume rendering.

    PubMed

    Fout, Nathaniel; Ma, Kwan-Liu

    2007-01-01

    Hardware-accelerated volume rendering using the GPU is now the standard approach for real-time volume rendering, although limited graphics memory can present a problem when rendering large volume data sets. Volumetric compression in which the decompression is coupled to rendering has been shown to be an effective solution to this problem; however, most existing techniques were developed in the context of software volume rendering, and all but the simplest approaches are prohibitive in a real-time hardware-accelerated volume rendering context. In this paper we present a novel block-based transform coding scheme designed specifically with real-time volume rendering in mind, such that the decompression is fast without sacrificing compression quality. This is made possible by consolidating the inverse transform with dequantization in such a way as to allow most of the reprojection to be precomputed. Furthermore, we take advantage of the freedom afforded by off-line compression in order to optimize the encoding as much as possible while hiding this complexity from the decoder. In this context we develop a new block classification scheme which allows us to preserve perceptually important features in the compression. The result of this work is an asymmetric transform coding scheme that allows very large volumes to be compressed and then decompressed in real-time while rendering on the GPU.

  13. Standard interface files and procedures for reactor physics codes, version III

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carmichael, B.M.

    Standards and procedures for promoting the exchange of reactor physics codes are updated to Version-III status. Standards covering program structure, interface files, file handling subroutines, and card input format are included. The implementation status of the standards in codes and the extension of the standards to new code areas are summarized.

  14. Accelerator-based techniques for the support of senior-level undergraduate physics laboratories

    NASA Astrophysics Data System (ADS)

    Williams, J. R.; Clark, J. C.; Isaacs-Smith, T.

    2001-07-01

    Approximately three years ago, Auburn University replaced its aging Dynamitron accelerator with a new 2MV tandem machine (Pelletron) manufactured by the National Electrostatics Corporation (NEC). This new machine is maintained and operated for the University by Physics Department personnel, and the accelerator supports a wide variety of materials modification/analysis studies. Computer software is available that allows the NEC Pelletron to be operated from a remote location, and an Internet link has been established between the Accelerator Laboratory and the Upper-Level Undergraduate Teaching Laboratory in the Physics Department. Additional software supplied by Canberra Industries has also been used to create a second Internet link that allows live-time data acquisition in the Teaching Laboratory. Our senior-level undergraduates and first-year graduate students perform a number of experiments related to radiation detection and measurement as well as several standard accelerator-based experiments that have been added recently. These laboratory exercises will be described, and the procedures used to establish the Internet links between our Teaching Laboratory and the Accelerator Laboratory will be discussed.

  15. ALE3D: An Arbitrary Lagrangian-Eulerian Multi-Physics Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noble, Charles R.; Anderson, Andrew T.; Barton, Nathan R.

    ALE3D is a multi-physics numerical simulation software tool utilizing arbitrary-Lagrangian- Eulerian (ALE) techniques. The code is written to address both two-dimensional (2D plane and axisymmetric) and three-dimensional (3D) physics and engineering problems using a hybrid finite element and finite volume formulation to model fluid and elastic-plastic response of materials on an unstructured grid. As shown in Figure 1, ALE3D is a single code that integrates many physical phenomena.

  16. Chirped pulse inverse free-electron laser vacuum accelerator

    DOEpatents

    Hartemann, Frederic V.; Baldis, Hector A.; Landahl, Eric C.

    2002-01-01

    A chirped pulse inverse free-electron laser (IFEL) vacuum accelerator for high gradient laser acceleration in vacuum. By the use of an ultrashort (femtosecond), ultrahigh intensity chirped laser pulse, both the IFEL interaction bandwidth and accelerating gradient are increased, thus yielding large gains in a compact system. In addition, the IFEL resonance condition can be maintained throughout the interaction region by using a chirped drive laser wave. Furthermore, diffraction can be alleviated by taking advantage of the laser optical bandwidth with negative dispersion focusing optics to produce a chromatic line focus. The combination of these features results in a compact, efficient vacuum laser accelerator with many applications, including high energy physics, compact table-top laser accelerators for medical imaging and therapy, materials science, and basic physics.

  17. Physics Verification Overview

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doebling, Scott William

    The purpose of the verification project is to establish, through rigorous convergence analysis, that each ASC computational physics code correctly implements a set of physics models and algorithms (code verification); to evaluate and analyze the uncertainties of code outputs associated with the choice of temporal and spatial discretization (solution or calculation verification); and to develop and maintain the capability to expand and update these analyses on demand. This presentation describes project milestones.

  18. A preliminary design of the collinear dielectric wakefield accelerator

    NASA Astrophysics Data System (ADS)

    Zholents, A.; Gai, W.; Doran, S.; Lindberg, R.; Power, J. G.; Strelnikov, N.; Sun, Y.; Trakhtenberg, E.; Vasserman, I.; Jing, C.; Kanareykin, A.; Li, Y.; Gao, Q.; Shchegolkov, D. Y.; Simakov, E. I.

    2016-09-01

    A preliminary design of a multi-meter long collinear dielectric wakefield accelerator that achieves a highly efficient transfer of the drive bunch energy to the wakefields and to the witness bunch is considered. It is made from 0.5 m long accelerator modules containing a vacuum chamber with dielectric-lined walls, a quadrupole wiggler, an rf coupler, and a BPM assembly. The single bunch breakup instability is a major limiting factor for accelerator efficiency, and BNS damping is applied to obtain stable multi-meter long propagation of a drive bunch. Numerical simulations using a 6D particle tracking computer code are performed and tolerances to various errors are defined.

  19. New Concepts and Fermilab Facilities for Antimatter Research

    NASA Astrophysics Data System (ADS)

    Jackson, Gerald

    2008-04-01

    There has long been significant interest in continuing antimatter research at the Fermi National Accelerator Laboratory. Beam kinetic energies ranging from 10 GeV all the way down to the eV scale and below are of interest. There are three physics missions currently being developed: the continuation of charmonium physics utilizing an internal target; atomic physics with in-flight generated antihydrogen atoms; and deceleration to thermal energies and passage of antiprotons through a grating system to determine their gravitational acceleration. Non-physics missions include the study of medical applications, tests of deep-space propulsion concepts, low-risk testing of nuclear fuel elements, and active interrogation for smuggled nuclear materials in support of homeland security. This paper reviews recent beam physics and accelerator technology innovations in the development of methods and new Fermilab facilities for the above missions.

  20. Implementation of an Accelerated Physical Examination Course in a Doctor of Pharmacy Program

    PubMed Central

    Ho, Jackie; Lopes, Ingrid C.; Shah, Bijal M.; Ip, Eric J.

    2014-01-01

    Objective. To describe the implementation of a 1-day accelerated physical examination course for a doctor of pharmacy program and to evaluate pharmacy students’ knowledge, attitudes, and confidence in performing physical examination. Design. Using a flipped teaching approach, course coordinators collaborated with a physician faculty member to design and develop the objectives of the course. Knowledge, attitude, and confidence survey questions were administered before and after the practical laboratory. Assessment. Following the practical laboratory, knowledge improved by 8.3% (p<0.0001). Students’ perceived ability and confidence to perform a physical examination significantly improved (p<0.0001). A majority of students responded that reviewing the training video (81.3%) and reading material (67.4%) prior to the practical laboratory was helpful in learning the physical examination. Conclusion. An accelerated physical examination course using a flipped teaching approach was successful in improving students’ knowledge of, attitudes about, and confidence in using physical examination skills in pharmacy practice. PMID:25657369

  1. Direct measurement of the image displacement instability in a linear induction accelerator

    NASA Astrophysics Data System (ADS)

    Burris-Mog, T. J.; Ekdahl, C. A.; Moir, D. C.

    2017-06-01

    The image displacement instability (IDI) has been measured on the 20 MeV Axis I of the dual axis radiographic hydrodynamic test facility and compared to theory. A 0.23 kA electron beam was accelerated across 64 gaps in a low solenoid focusing field, and the position of the beam centroid was measured out to 34.3 meters downstream from the cathode. One beam dynamics code was used to model the IDI from first principles, while another code characterized the effects of the resistive wall instability and the beam break-up (BBU) instability. Although the BBU instability was not found to influence the IDI, it appears that the IDI influences the BBU. Because the BBU theory does not fully account for the dependence on beam position for coupling to cavity transverse magnetic modes, the effect of the IDI is missing from the BBU theory. This becomes of particular concern to users of linear induction accelerators operating at or near low magnetic guide field tunes.

  2. Laser beam coupling with capillary discharge plasma for laser wakefield acceleration applications

    NASA Astrophysics Data System (ADS)

    Bagdasarov, G. A.; Sasorov, P. V.; Gasilov, V. A.; Boldarev, A. S.; Olkhovskaya, O. G.; Benedetti, C.; Bulanov, S. S.; Gonsalves, A.; Mao, H.-S.; Schroeder, C. B.; van Tilborg, J.; Esarey, E.; Leemans, W. P.; Levato, T.; Margarone, D.; Korn, G.

    2017-08-01

    One of the most robust methods demonstrated to date for accelerating electron beams with laser-plasma sources is the use of plasma channels generated by capillary discharges. Although the spatial structure of the installation is simple in principle, there may be important effects caused by the open ends of the capillary, by the supply channels, etc., which require detailed 3D modeling of the processes. In the present work, such simulations are performed using the code MARPLE. First, the filling of the capillary with cold hydrogen through the side supply channels, before the discharge is fired, is simulated. Second, the capillary discharge is simulated to obtain a time-dependent spatial distribution of the electron density both near the open ends of the capillary and inside it. Finally, to evaluate the effectiveness of beam coupling with the channeling plasma waveguide and of the electron acceleration, the laser-plasma interaction was modeled with the code INF&RNO.

  3. Direct measurement of the image displacement instability in a linear induction accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burris-Mog, T. J.; Ekdahl, C. A.; Moir, D. C.

    The image displacement instability (IDI) has been measured on the 20 MeV Axis I of the dual axis radiographic hydrodynamic test facility and compared to theory. A 0.23 kA electron beam was accelerated across 64 gaps in a low solenoid focusing field, and the position of the beam centroid was measured out to 34.3 meters downstream from the cathode. One beam dynamics code was used to model the IDI from first principles, while another code characterized the effects of the resistive wall instability and the beam break-up (BBU) instability. Although the BBU instability was not found to influence the IDI, it appears that the IDI influences the BBU. Because the BBU theory does not fully account for the dependence on beam position for coupling to cavity transverse magnetic modes, the effect of the IDI is missing from the BBU theory. Finally, this becomes of particular concern to users of linear induction accelerators operating at or near low magnetic guide field tunes.

  4. Direct measurement of the image displacement instability in a linear induction accelerator

    DOE PAGES

    Burris-Mog, T. J.; Ekdahl, C. A.; Moir, D. C.

    2017-06-19

    The image displacement instability (IDI) has been measured on the 20 MeV Axis I of the dual axis radiographic hydrodynamic test facility and compared to theory. A 0.23 kA electron beam was accelerated across 64 gaps in a low solenoid focusing field, and the position of the beam centroid was measured out to 34.3 meters downstream from the cathode. One beam dynamics code was used to model the IDI from first principles, while another code characterized the effects of the resistive wall instability and the beam break-up (BBU) instability. Although the BBU instability was not found to influence the IDI, it appears that the IDI influences the BBU. Because the BBU theory does not fully account for the dependence on beam position for coupling to cavity transverse magnetic modes, the effect of the IDI is missing from the BBU theory. Finally, this becomes of particular concern to users of linear induction accelerators operating at or near low magnetic guide field tunes.

  5. GPU-accelerated phase-field simulation of dendritic solidification in a binary alloy

    NASA Astrophysics Data System (ADS)

    Yamanaka, Akinori; Aoki, Takayuki; Ogawa, Satoi; Takaki, Tomohiro

    2011-03-01

    The phase-field simulation of dendritic solidification in a binary alloy has been accelerated by using a graphics processing unit (GPU). To perform the phase-field simulation of alloy solidification on a GPU, a program code was developed with the Compute Unified Device Architecture (CUDA). In this paper, the implementation technique of the phase-field model on the GPU is presented. We also evaluated the acceleration performance of the three-dimensional solidification simulation by using a single NVIDIA TESLA C1060 GPU and the developed program code. The results showed that the GPU calculation for 576^3 computational grid points achieved a performance of 170 GFLOPS by utilizing the shared memory as a software-managed cache. Furthermore, the GPU computation was demonstrated to be 100 times faster than that with a single CPU core. From the obtained results, we confirmed the feasibility of a real-time, fully three-dimensional phase-field simulation of microstructure evolution on a personal desktop computer.
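
    The explicit stencil update at the heart of such phase-field codes is what maps so naturally onto GPU threads: every grid point is updated independently from its neighbors' previous values. As a minimal illustration (not the authors' CUDA code, and using a simpler 2D Allen-Cahn model rather than the full binary-alloy dendrite model; the function name is ours), a NumPy sketch of one such update:

```python
import numpy as np

def phase_field_step(phi, dt=1e-3, dx=1.0, eps2=1.0):
    """One explicit Euler step of a simple Allen-Cahn phase-field model,
    d(phi)/dt = eps2 * laplacian(phi) - W'(phi), with W'(phi) = phi**3 - phi.
    Each cell's new value depends only on old values, so in a CUDA version
    one thread per cell can apply this update in parallel."""
    lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
           np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4.0 * phi) / dx**2
    return phi + dt * (eps2 * lap - (phi**3 - phi))

# relax a small random perturbation toward the stable phases phi = +/-1
rng = np.random.default_rng(0)
phi = 0.1 * rng.standard_normal((64, 64))
for _ in range(2000):
    phi = phase_field_step(phi)
print(phi.shape)  # (64, 64)
```

In the GPU version described in the paper, the neighbor loads are the expensive part, which is why staging tiles of the grid in shared memory (a software-managed cache) pays off.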

  6. Development of Maximum Considered Earthquake Ground Motion Maps

    USGS Publications Warehouse

    Leyendecker, E.V.; Hunt, R.J.; Frankel, A.D.; Rukstales, K.S.

    2000-01-01

    The 1997 NEHRP Recommended Provisions for Seismic Regulations for New Buildings use a design procedure that is based on spectral response acceleration rather than the traditional peak ground acceleration, peak ground velocity, or zone factors. The spectral response accelerations are obtained from maps prepared following the recommendations of the Building Seismic Safety Council's (BSSC) Seismic Design Procedures Group (SDPG). The SDPG-recommended maps, the Maximum Considered Earthquake (MCE) Ground Motion Maps, are based on the U.S. Geological Survey (USGS) probabilistic hazard maps with additional modifications incorporating deterministic ground motions in selected areas and the application of engineering judgement. The MCE ground motion maps included with the 1997 NEHRP Provisions also serve as the basis for the ground motion maps used in the seismic design portions of the 2000 International Building Code and the 2000 International Residential Code. Additionally the design maps prepared for the 1997 NEHRP Provisions, combined with selected USGS probabilistic maps, are used with the 1997 NEHRP Guidelines for the Seismic Rehabilitation of Buildings.

  7. Breaking the Code: The Creative Use of QR Codes to Market Extension Events

    ERIC Educational Resources Information Center

    Hill, Paul; Mills, Rebecca; Peterson, GaeLynn; Smith, Janet

    2013-01-01

    The use of smartphones has drastically increased in recent years, heralding an explosion in the use of QR codes. The black and white square barcodes that link the physical and digital world are everywhere. These simple codes can provide many opportunities to connect people in the physical world with many of Extension online resources. The…

  8. Numerical Simulation of MIG for 42 GHz, 200 kW Gyrotron

    NASA Astrophysics Data System (ADS)

    Singh, Udaybir; Bera, Anirban; Kumar, Narendra; Purohit, L. P.; Sinha, Ashok K.

    2010-06-01

    A triode-type magnetron injection gun (MIG) for a 42 GHz, 200 kW gyrotron for an Indian tokamak system has been designed using the commercially available code EGUN. The operating voltages of the modulating anode and the accelerating anode are 29 kV and 65 kV, respectively. The operating mode of the gyrotron is TE03, operated at the fundamental harmonic. The simulated MIG results obtained with the EGUN code are validated with another trajectory code, TRAK.

  9. Aquarius Project: Research in the System Architecture of Accelerators for the High Performance Execution of Logic Programs.

    DTIC Science & Technology

    1991-05-31

    [Fragments of the report's table of contents and front matter survive.] Appendix F lists the source code of the C and Prolog benchmarks; Appendix G lists the source code of the Aquarius Prolog compiler. Chapter 1 is the introduction.

  10. Negative ion beam development at Cadarache (invited)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simonin, A.; Bucalossi, J.; Desgranges, C.

    1996-03-01

    Neutral beam injection (NBI) is one of the candidates for plasma heating and current drive in the new generation of large magnetic fusion devices (ITER). In order to produce the required deuterium atom beams with energies of 1 MeV and powers of tens of MW, negative D⁻ ion beams are required. For this purpose, multiampere D⁻ beam production and 1 MeV electrostatic acceleration are being studied at Cadarache. The SINGAP experiment, a 1 MeV, 0.1 A D⁻ multisecond beam accelerator facility, has recently started operation. It is equipped with a Pagoda ion source, a multiaperture 60 keV preaccelerator and a 1 MV, 120 mA power supply. The particular feature of SINGAP is that the postaccelerator merges the 60 keV beamlets, aiming at accelerating the whole beam to 1 MeV in a single gap. The 1 MV level was obtained in less than 2 weeks, with an accumulated voltage on-time of approximately 22 min. A second test bed, MANTIS, is devoted to the development of multiampere D⁻ sources. It is capable of driving discharges with currents up to 2500 A at arc voltages up to 150 V. A large multicusp source has been tested in pure volume and cesiated operation. With cesium seeding, an accelerated D⁻ beam current density of up to 5.2 mA/cm² (2 A of D⁻) was obtained. A modification of the extractor is underway in order to improve this performance. A 3D Monte Carlo code has been developed to simulate negative ion transport in magnetized plasma sources and to optimize the magnetic field configuration of the large area D⁻ sources. © 1996 American Institute of Physics.

  11. Alternative modeling methods for plasma-based Rf ion sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veitzer, Seth A., E-mail: veitzer@txcorp.com; Kundrapu, Madhusudhan, E-mail: madhusnk@txcorp.com; Stoltz, Peter H., E-mail: phstoltz@txcorp.com

    Rf-driven ion sources for accelerators and many industrial applications benefit from detailed numerical modeling and simulation of plasma characteristics. For instance, modeling of the Spallation Neutron Source (SNS) internal antenna H⁻ source has indicated that a large plasma velocity is induced near bends in the antenna, where structural failures are often observed. This could lead to improved designs and ion source performance based on simulation and modeling. However, there are significant separations of time and spatial scales inherent to Rf-driven plasma ion sources, which makes it difficult to model ion sources with explicit, kinetic Particle-In-Cell (PIC) simulation codes. In particular, if both electron and ion motions are to be explicitly modeled, then the simulation time step must be very small, and total simulation times must be large enough to capture the evolution of the plasma ions, as well as extending over many Rf periods. Additional physics processes such as plasma chemistry and surface effects such as secondary electron emission increase the computational requirements in such a way that even fully parallel explicit PIC models cannot be used. One alternative is to develop fluid-based codes coupled with electromagnetics in order to model ion sources. Time-domain fluid models can simulate plasma evolution, plasma chemistry, and surface physics with reasonable computational resources by not explicitly resolving electron motions, which leads to an increase in the time step. This is achieved by solving fluid motions coupled with electromagnetics using reduced-physics models, such as single-temperature magnetohydrodynamics (MHD); extended, gas dynamic, and Hall MHD; and two-fluid MHD models. We show recent results on modeling the internal antenna H⁻ ion source for the SNS at Oak Ridge National Laboratory using the fluid plasma modeling code USim. We demonstrate plasma temperature equilibration in two-temperature MHD models for the SNS source and present simulation results demonstrating plasma evolution over many Rf periods for different plasma temperatures. The calculations are performed in parallel, on unstructured meshes, using finite-volume solvers in order to obtain results in reasonable time.

  12. Monte Carlo Techniques for Nuclear Systems - Theory Lectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear engineering review and Monte Carlo; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I-III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; Doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; and fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state of the art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate-level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. The class includes lectures and hands-on computer use for a variety of Monte Carlo calculations. Beginning MCNP users are encouraged to review LA-UR-09-00380, "Criticality Calculations with MCNP: A Primer (3rd Edition)" (available at http://mcnp.lanl.gov under "Reference Collection") prior to the class. No Monte Carlo class can be complete without having students write their own simple Monte Carlo routines for basic random sampling, use of the random number generator, and simplified particle transport simulation.
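
    The "just simulate the particle behavior" idea can be made concrete with a toy example of the kind students are asked to write. The sketch below (not from the lecture notes; the function name is illustrative) estimates uncollided transmission through a slab by inverse-CDF sampling of exponential free paths, the most basic of the random-sampling techniques the first lectures cover:

```python
import math
import random

def slab_transmission(sigma_t, thickness, n_particles=200_000, seed=1):
    """Estimate the uncollided transmission probability through a slab:
    sample each particle's free path from p(s) = sigma_t * exp(-sigma_t * s)
    via the inverse CDF, and count particles whose first collision site lies
    beyond the slab. The analytic answer is exp(-sigma_t * thickness)."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_particles):
        # inverse-CDF sampling; 1 - u is in (0, 1], so the log is safe
        s = -math.log(1.0 - rng.random()) / sigma_t
        if s > thickness:
            transmitted += 1
    return transmitted / n_particles

est = slab_transmission(sigma_t=1.0, thickness=2.0)
print(abs(est - math.exp(-2.0)) < 0.01)  # True: close to exp(-2) ~ 0.135
```

The statistical error shrinks as 1/sqrt(N), which is precisely why the variance-reduction and convergence-analysis lectures matter for realistic problems.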

  13. Alternative modeling methods for plasma-based Rf ion sources.

    PubMed

    Veitzer, Seth A; Kundrapu, Madhusudhan; Stoltz, Peter H; Beckwith, Kristian R C

    2016-02-01

    Rf-driven ion sources for accelerators and many industrial applications benefit from detailed numerical modeling and simulation of plasma characteristics. For instance, modeling of the Spallation Neutron Source (SNS) internal antenna H(-) source has indicated that a large plasma velocity is induced near bends in the antenna where structural failures are often observed. This could lead to improved designs and ion source performance based on simulation and modeling. However, there are significant separations of time and spatial scales inherent to Rf-driven plasma ion sources, which makes it difficult to model ion sources with explicit, kinetic Particle-In-Cell (PIC) simulation codes. In particular, if both electron and ion motions are to be explicitly modeled, then the simulation time step must be very small, and total simulation times must be large enough to capture the evolution of the plasma ions, as well as extending over many Rf periods. Additional physics processes such as plasma chemistry and surface effects such as secondary electron emission increase the computational requirements in such a way that even fully parallel explicit PIC models cannot be used. One alternative method is to develop fluid-based codes coupled with electromagnetics in order to model ion sources. Time-domain fluid models can simulate plasma evolution, plasma chemistry, and surface physics models with reasonable computational resources by not explicitly resolving electron motions, which thereby leads to an increase in the time step. This is achieved by solving fluid motions coupled with electromagnetics using reduced-physics models, such as single-temperature magnetohydrodynamics (MHD), extended, gas dynamic, and Hall MHD, and two-fluid MHD models. We show recent results on modeling the internal antenna H(-) ion source for the SNS at Oak Ridge National Laboratory using the fluid plasma modeling code USim. 
We demonstrate plasma temperature equilibration in two-temperature MHD models for the SNS source and present simulation results demonstrating plasma evolution over many Rf periods for different plasma temperatures. We perform the calculations in parallel, on unstructured meshes, using finite-volume solvers in order to obtain results in reasonable time.

  14. VINE-A NUMERICAL CODE FOR SIMULATING ASTROPHYSICAL SYSTEMS USING PARTICLES. I. DESCRIPTION OF THE PHYSICS AND THE NUMERICAL METHODS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wetzstein, M.; Nelson, Andrew F.; Naab, T.

    2009-10-01

    We present a numerical code for simulating the evolution of astrophysical systems using particles to represent the underlying fluid flow. The code is written in Fortran 95 and is designed to be versatile, flexible, and extensible, with modular options that can be selected either at the time the code is compiled or at run time through a text input file. We include a number of general purpose modules describing a variety of physical processes commonly required in the astrophysical community and we expect that the effort required to integrate additional or alternate modules into the code will be small. In its simplest form the code can evolve the dynamical trajectories of a set of particles in two or three dimensions using a module which implements either a Leapfrog or Runge-Kutta-Fehlberg integrator, selected by the user at compile time. The user may choose to allow the integrator to evolve the system using individual time steps for each particle or with a single, global time step for all. Particles may interact gravitationally as N-body particles, and all or any subset may also interact hydrodynamically, using the smoothed particle hydrodynamic (SPH) method by selecting the SPH module. A third particle species can be included with a module to model massive point particles which may accrete nearby SPH or N-body particles. Such particles may be used to model, e.g., stars in a molecular cloud. Free boundary conditions are implemented by default, and a module may be selected to include periodic boundary conditions. We use a binary 'Press' tree to organize particles for rapid access in gravity and SPH calculations. Modules implementing an interface with special purpose 'GRAPE' hardware may also be selected to accelerate the gravity calculations. If available, forces obtained from the GRAPE coprocessors may be transparently substituted for those obtained from the tree, or both tree and GRAPE may be used as a combination GRAPE/tree code. The code may be run without modification on single processors or in parallel using OpenMP compiler directives on large-scale, shared memory parallel machines. We present simulations of several test problems, including a merger simulation of two elliptical galaxies with 800,000 particles. In comparison to the Gadget-2 code of Springel, the gravitational force calculation, which is the most costly part of any simulation including self-gravity, is ~4.6-4.9 times faster with VINE when tested on different snapshots of the elliptical galaxy merger simulation when run on an Itanium 2 processor in an SGI Altix. A full simulation of the same setup with eight processors is a factor of 2.91 faster with VINE. The code is available to the public under the terms of the GNU General Public License.

  15. Vine—A Numerical Code for Simulating Astrophysical Systems Using Particles. I. Description of the Physics and the Numerical Methods

    NASA Astrophysics Data System (ADS)

    Wetzstein, M.; Nelson, Andrew F.; Naab, T.; Burkert, A.

    2009-10-01

    We present a numerical code for simulating the evolution of astrophysical systems using particles to represent the underlying fluid flow. The code is written in Fortran 95 and is designed to be versatile, flexible, and extensible, with modular options that can be selected either at the time the code is compiled or at run time through a text input file. We include a number of general purpose modules describing a variety of physical processes commonly required in the astrophysical community and we expect that the effort required to integrate additional or alternate modules into the code will be small. In its simplest form the code can evolve the dynamical trajectories of a set of particles in two or three dimensions using a module which implements either a Leapfrog or Runge-Kutta-Fehlberg integrator, selected by the user at compile time. The user may choose to allow the integrator to evolve the system using individual time steps for each particle or with a single, global time step for all. Particles may interact gravitationally as N-body particles, and all or any subset may also interact hydrodynamically, using the smoothed particle hydrodynamic (SPH) method by selecting the SPH module. A third particle species can be included with a module to model massive point particles which may accrete nearby SPH or N-body particles. Such particles may be used to model, e.g., stars in a molecular cloud. Free boundary conditions are implemented by default, and a module may be selected to include periodic boundary conditions. We use a binary "Press" tree to organize particles for rapid access in gravity and SPH calculations. Modules implementing an interface with special purpose "GRAPE" hardware may also be selected to accelerate the gravity calculations. If available, forces obtained from the GRAPE coprocessors may be transparently substituted for those obtained from the tree, or both tree and GRAPE may be used as a combination GRAPE/tree code. 
The code may be run without modification on single processors or in parallel using OpenMP compiler directives on large-scale, shared memory parallel machines. We present simulations of several test problems, including a merger simulation of two elliptical galaxies with 800,000 particles. In comparison to the Gadget-2 code of Springel, the gravitational force calculation, which is the most costly part of any simulation including self-gravity, is ~4.6-4.9 times faster with VINE when tested on different snapshots of the elliptical galaxy merger simulation when run on an Itanium 2 processor in an SGI Altix. A full simulation of the same setup with eight processors is a factor of 2.91 faster with VINE. The code is available to the public under the terms of the GNU General Public License.
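
    For reference, the kick-drift-kick leapfrog scheme that VINE offers as its simpler integrator option can be sketched in a few lines (an illustrative Python sketch under our own naming, not VINE's Fortran 95 implementation):

```python
import math

def leapfrog(x, v, accel, dt, n_steps):
    """Kick-drift-kick leapfrog (velocity Verlet). Second-order accurate
    and symplectic, so the energy error stays bounded over long runs
    instead of drifting, which is why it suits N-body dynamics."""
    for _ in range(n_steps):
        v += 0.5 * dt * accel(x)   # half kick
        x += dt * v                # full drift
        v += 0.5 * dt * accel(x)   # half kick
    return x, v

# test particle in a harmonic potential, a(x) = -x, period 2*pi
x, v = leapfrog(1.0, 0.0, lambda x: -x, dt=0.01, n_steps=628)
energy = 0.5 * v**2 + 0.5 * x**2
print(abs(energy - 0.5) < 1e-4)  # True: initial energy 0.5 is conserved
```

With individual per-particle time steps, as VINE allows, the same kick-drift-kick structure is kept but each particle advances with its own dt.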

  16. Will there be energy frontier colliders after LHC?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shiltsev, Vladimir

    2016-09-15

    High energy particle colliders have been in the forefront of particle physics for more than three decades. At present the near term US, European and international strategies of the particle physics community are centered on full exploitation of the physics potential of the Large Hadron Collider (LHC) through its high-luminosity upgrade (HL-LHC). The future of the world-wide HEP community critically depends on the feasibility of possible post-LHC colliders. The concept of feasibility is complex and includes at least three factors: feasibility of energy, feasibility of luminosity and feasibility of cost. Here we overview all current options for post-LHC colliders from this perspective (ILC, CLIC, Muon Collider, plasma colliders, CEPC, FCC, HE-LHC) and discuss major challenges and accelerator R&D required to demonstrate feasibility of an energy frontier accelerator facility following the LHC. We conclude by taking a look into ultimate energy reach accelerators based on plasmas and crystals, and a discussion of the perspectives for the far future of accelerator-based particle physics.

  17. Compensation Techniques in Accelerator Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sayed, Hisham Kamal

    2011-05-01

    Accelerator physics is one of the most diverse multidisciplinary fields of physics, wherein the dynamics of particle beams is studied. It takes more than an understanding of basic electromagnetic interactions to be able to predict beam dynamics and to develop new techniques to produce, maintain, and deliver high quality beams for different applications. In this work, some basic theory regarding particle beam dynamics in accelerators is presented. This basic theory, along with state-of-the-art techniques in beam dynamics, is used in this dissertation to study and solve accelerator physics problems. Two problems involving compensation are studied in the context of the MEIC (Medium Energy Electron Ion Collider) project at Jefferson Laboratory. Several chromaticity (the energy dependence of the particle tune) compensation methods are evaluated numerically and deployed in a figure-eight ring designed for the electrons in the collider. Furthermore, transverse coupling optics have been developed to compensate the coupling introduced by the spin rotators in the MEIC electron ring design.

  18. Simulations of an accelerator-based shielding experiment using the particle and heavy-ion transport code system PHITS.

    PubMed

    Sato, T; Sihver, L; Iwase, H; Nakashima, H; Niita, K

    2005-01-01

    In order to estimate the biological effects of HZE particles, an accurate knowledge of the physics of interaction of HZE particles is necessary. Since the heavy ion transport problem is a complex one, there is a need for both experimental and theoretical studies to develop accurate transport models. RIST and JAERI (Japan), GSI (Germany) and Chalmers (Sweden) are therefore currently developing and benchmarking the General-Purpose Particle and Heavy-Ion Transport code System (PHITS), which is based on the NMTC and MCNP codes for nucleon/meson and neutron transport, respectively, and the JAM hadron cascade model. PHITS uses JAERI Quantum Molecular Dynamics (JQMD) and the Generalized Evaporation Model (GEM) for calculations of fission and evaporation processes, a model developed at NASA Langley for calculation of total reaction cross sections, and the SPAR model for stopping power calculations. Future development of PHITS includes better parameterization in the JQMD model used for nucleus-nucleus reactions, improvement of the models used for calculating total reaction cross sections, addition of routines for calculating elastic scattering of heavy ions, and inclusion of radioactivity and burn-up processes. As part of an extensive benchmarking of PHITS, we have compared energy spectra of secondary neutrons created by reactions of HZE particles with different targets, with thicknesses ranging from <1 to 200 cm. We have also compared simulated and measured spatial, fluence and depth-dose distributions from different high energy heavy ion reactions. In this paper, we report simulations of an accelerator-based shielding experiment, in which a beam of 1 GeV/n Fe-ions passed through thin slabs of polyethylene, Al, and Pb at an acceptance angle of up to 4 degrees. © 2005 Published by Elsevier Ltd on behalf of COSPAR.

  19. A Particle Module for the PLUTO Code. I. An Implementation of the MHD–PIC Equations

    NASA Astrophysics Data System (ADS)

    Mignone, A.; Bodo, G.; Vaidya, B.; Mattia, G.

    2018-05-01

    We describe an implementation of a particle physics module available for the PLUTO code appropriate for the dynamical evolution of a plasma consisting of a thermal fluid and a nonthermal component represented by relativistic charged particles or cosmic rays (CRs). While the fluid is approached using standard numerical schemes for magnetohydrodynamics, CR particles are treated kinetically using conventional Particle-In-Cell (PIC) techniques. The module can be used either to describe test-particle motion in the fluid electromagnetic field or to solve the fully coupled magnetohydrodynamics (MHD)–PIC system of equations with particle backreaction on the fluid as originally introduced by Bai et al. Particle backreaction on the fluid is included in the form of momentum–energy feedback and by introducing the CR-induced Hall term in Ohm’s law. The hybrid MHD–PIC module can be employed to study CR kinetic effects on scales larger than the (ion) skin depth provided that the Larmor gyration scale is properly resolved. When applicable, this formulation avoids resolving microscopic scales, offering substantial computational savings with respect to PIC simulations. We present a fully conservative formulation that is second-order accurate in time and space, and extends to either the Runge–Kutta (RK) or the corner transport upwind time-stepping schemes (for the fluid), while a standard Boris integrator is employed for the particles. For highly energetic relativistic CRs and in order to overcome the time-step restriction, a novel subcycling strategy that retains second-order accuracy in time is presented. Numerical benchmarks and applications including Bell instability, diffusive shock acceleration, and test-particle acceleration in reconnecting layers are discussed.
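
    The standard Boris integrator mentioned above advances particle velocities with a half electric kick, a magnetic rotation, and a second half kick. A minimal non-relativistic sketch of that scheme (not the PLUTO implementation; fields and units are illustrative):

    ```python
    import numpy as np

    def boris_push(x, v, E, B, q_over_m, dt):
        """One step of the (non-relativistic) Boris particle push:
        half electric kick, magnetic rotation, half electric kick."""
        v_minus = v + 0.5 * q_over_m * E * dt      # first half acceleration
        t = 0.5 * q_over_m * B * dt                # rotation vector
        s = 2.0 * t / (1.0 + np.dot(t, t))
        v_prime = v_minus + np.cross(v_minus, t)
        v_plus = v_minus + np.cross(v_prime, s)    # rotated velocity
        v_new = v_plus + 0.5 * q_over_m * E * dt   # second half acceleration
        x_new = x + v_new * dt                     # drift
        return x_new, v_new
    ```

    The rotation step preserves the speed exactly in a pure magnetic field, which is why the scheme conserves particle energy during gyration regardless of the time step.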

  20. GPU acceleration of Runge Kutta-Fehlberg and its comparison with Dormand-Prince method

    NASA Astrophysics Data System (ADS)

    Seen, Wo Mei; Gobithaasan, R. U.; Miura, Kenjiro T.

    2014-07-01

    The emergence of Graphics Processing Units (GPUs) has brought a significant reduction in processing time and a speedup of performance in computer graphics. GPUs have been developed to surpass the Central Processing Unit (CPU) in terms of performance and processing speed. This evolution has opened up a new area in computing and research where highly parallel GPUs are used for non-graphical algorithms. Physical or phenomenal simulations and modelling can be accelerated through General Purpose Graphics Processing Unit (GPGPU) and Compute Unified Device Architecture (CUDA) implementations. These phenomena can be represented with mathematical models in the form of Ordinary Differential Equations (ODEs), which capture the rate of change of dependent variables with respect to independent variables. ODEs are numerically integrated over time in order to simulate these behaviours. The classical Runge-Kutta (RK) scheme is the common method used to numerically solve ODEs. The Runge-Kutta-Fehlberg (RKF) scheme has been specially developed to provide an estimate of the principal local truncation error at each step, known as the embedded estimate technique. This paper delves into the implementation of the RKF scheme for GPU devices and compares its results with the Dormand-Prince method. Pseudocode is developed to show the implementation in detail. Hence, practitioners will be able to understand the data allocation in the GPU, the formation of RKF kernels, and the flow of data to/from the GPU and CPU upon RKF kernel evaluation. The pseudocode is then written in the C language, and two ODE models are executed to show the achievable speedup as compared to a CPU implementation. The accuracy and efficiency of the proposed implementation method are discussed in the final section of this paper.
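
    The embedded-estimate idea behind RKF can be illustrated with the smallest possible embedded pair, Heun's method with a first-order Euler embedding; the full RKF45 tableau works the same way with six stages. This is a sketch of the principle, not the paper's pseudocode:

    ```python
    def embedded_step(f, t, y, h):
        """One step of an embedded Runge-Kutta pair (Heun's method with an
        Euler embedding): the difference between the 2nd- and 1st-order
        solutions estimates the local truncation error, as in RKF45."""
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y_high = y + 0.5 * h * (k1 + k2)   # 2nd-order (Heun) solution
        y_low = y + h * k1                 # 1st-order (Euler) embedding
        err = abs(y_high - y_low)          # local error estimate, almost free
        return y_high, err
    ```

    Because both solutions reuse the same stage evaluations, the error estimate costs no extra function calls, which is the property that makes embedded pairs attractive on GPUs.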

  1. Propagation of Reactions in Thermally-damaged PBX-9501

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tringe, J W; Glascoe, E A; Kercher, J R

    A thermally initiated explosion in PBX-9501 (octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine) is observed in situ by flash x-ray imaging, and modeled with the LLNL multi-physics arbitrary-Lagrangian-Eulerian code ALE3D. The containment vessel deformation provides a useful estimate of the reaction pressure at the time of the explosion, which we calculate to be in the range 0.8-1.4 GPa. Closely coupled ALE3D simulations of these experiments, utilizing the multi-phase convective burn model, provide detailed predictions of the reacted mass fraction and deflagration front acceleration. During the pre-initiation heating phase of these experiments, the solid HMX portion of the PBX-9501 undergoes a β-phase to δ-phase transition which damages the explosive and induces porosity. The multi-phase convective burn model results demonstrate that damaged particle size and pressure are critical for predicting reaction speed and violence. In the model, energetic parameters are taken from LLNL's thermochemical-kinetics code Cheetah and burn rate parameters from Son et al. (2000). Model predictions of an accelerating deflagration front are in qualitative agreement with the experimental images assuming a mode particle diameter in the range 300-400 µm. There is uncertainty in the initial porosity caused by thermal damage of PBX-9501 and, thus, in the effective surface area for burning. To better understand these structures, we employ x-ray computed tomography (XRCT) to examine the microstructure of PBX-9501 before and after thermal damage. Although lack of contrast between grains and binder prevents determination of the full grain size distribution in this material, there are many domains visible in thermally damaged PBX-9501 with diameters in the 300-400 µm range.

  2. Physical and digital simulations for IVA robotics

    NASA Technical Reports Server (NTRS)

    Hinman, Elaine; Workman, Gary L.

    1992-01-01

    Space-based materials processing experiments can be enhanced through the use of IVA robotic systems. A program to determine requirements for the implementation of robotic systems in a microgravity environment and to develop some preliminary concepts for acceleration control of small, lightweight arms has been initiated with the development of physical and digital simulation capabilities. The physical simulation facilities incorporate a robotic workcell containing a Zymark Zymate II robot instrumented for acceleration measurements, which is able to perform materials transfer functions while flying on NASA's KC-135 aircraft during parabolic maneuvers to simulate reduced gravity. Measurements of accelerations occurring during the reduced-gravity periods will be used to characterize the impacts of robotic accelerations in a microgravity environment in space. Digital simulations are being performed with TREETOPS, a NASA-developed software package which is used for the dynamic analysis of systems with a tree topology. Extensive use of both simulation tools will enable the design of robotic systems with enhanced acceleration control for use in the space manufacturing environment.

  3. Fermilab | Tritium at Fermilab | Frequently asked questions

    Science.gov Websites


  4. Noncoherent Physical-Layer Network Coding with FSK Modulation: Relay Receiver Design Issues

    DTIC Science & Technology

    2011-03-01

    IEEE Transactions on Communications, vol. 59, no. 9, September 2011, p. 2595. Index terms: noncoherent reception, channel estimation. In the two-way relay channel (TWRC), a pair of source terminals exchange information...

  5. Development and Benchmarking of a Hybrid PIC Code For Dense Plasmas and Fast Ignition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Witherspoon, F. Douglas; Welch, Dale R.; Thompson, John R.

    Radiation processes play an important role in the study of both fast ignition and other inertial confinement schemes, such as plasma-jet-driven magneto-inertial fusion, both in their effect on energy balance and in generating diagnostic signals. In the latter case, warm and hot dense matter may be produced by the convergence of a plasma shell formed by the merging of an assembly of high-Mach-number plasma jets. This innovative approach has the potential advantage of creating matter of high energy density in voluminous amounts compared with high-power lasers or particle beams. An important application of this technology is as a plasma liner for the flux compression of magnetized plasma to create ultra-high magnetic fields and burning plasmas. HyperV Technologies Corp. has been developing plasma jet accelerator technology in both coaxial and linear railgun geometries to produce plasma jets of sufficient mass, density, and velocity to create such imploding plasma liners. An enabling tool for the development of this technology is the ability to model the plasma dynamics, not only in the accelerators themselves, but also in the resulting magnetized target plasma and within the merging/interacting plasma jets during transport to the target. Welch pioneered numerical modeling of such plasmas (including for fast ignition) using the LSP simulation code. LSP is an electromagnetic, parallelized plasma simulation code under development since 1995. It has a number of innovative features making it uniquely suitable for modeling high-energy-density plasmas, including a hybrid fluid model for electrons that allows electrons in dense plasmas to be modeled with a kinetic or fluid treatment as appropriate. In addition to in-house use at Voss Scientific, several groups carrying out research in fast ignition (LLNL, SNL, UCSD, AWE (UK), and Imperial College (UK)) also use LSP.
    A collaborative team consisting of HyperV Technologies Corp., Voss Scientific LLC, FAR-TECH, Inc., Prism Computational Sciences, Inc. and Advanced Energy Systems Inc. joined efforts to develop new physics and numerical models for LSP in several key areas to enhance the ability of LSP to model high energy density plasmas (HEDP). This final report details those efforts. Areas addressed in this research effort include: adding radiation transport to LSP, first in 2D and then fully 3D, extending the EMHD model to 3D, implementing more advanced radiation and electrode plasma boundary conditions, and installing more efficient implicit numerical algorithms to speed complex 2-D and 3-D computations. The new capabilities allow modeling of the dominant processes in high energy density plasmas, and further assist the development and optimization of plasma jet accelerators, with particular attention to MHD instabilities and plasma/wall interaction (based on physical models for ion drag friction and ablation/erosion of the electrodes). In the first funding cycle we implemented a solver for the radiation diffusion equation. To solve this equation in 2-D, we used finite differencing and applied the parallelized sparse-matrix solvers in the PETSc library (Argonne National Laboratory) to the resulting system of equations. A database of the necessary coefficients for materials of interest was assembled using the PROPACEOS and ATBASE codes from Prism. The model was benchmarked against Prism's 1-D radiation hydrodynamics code HELIOS, and against experimental data obtained from HyperV's separately funded plasma jet accelerator development program. Work in the second funding cycle focused on extending the radiation diffusion model to full 3-D, continuing development of the EMHD model, optimizing the direct-implicit model to speed up calculations, adding multiply ionized atoms, and improving the way boundary conditions are handled in LSP.
    These new LSP capabilities were then used, along with analytic calculations and Mach2 runs, to investigate plasma jet merging, plasma detachment and transport, restrike, and advanced jet accelerator design. In addition, a strong linkage to diagnostic measurements was made by modeling plasma jet experiments on PLX to support benchmarking of the code. A large number of upgrades and improvements advancing hybrid PIC algorithms were implemented in LSP during the second funding cycle. These include development of fully 3D radiation transport algorithms, new boundary conditions for plasma-electrode interactions, and a charge-conserving equation of state that permits multiply ionized high-Z ions. The final funding cycle focused on 1) mitigating the effects of a slow-growing grid instability which is most pronounced in plasma jet frame expansion problems using the two-fluid Eulerian remap algorithm, 2) extension of the Eulerian Smoothing Algorithm to allow EOS/Radiation modeling, 3) simulations of collisionless shocks formed by jet merging, 4) simulations of merging jets using high-Z gases, 5) generation of PROPACEOS EOS/Opacity databases, 6) simulations of plasma jet transport experiments, 7) simulations of plasma jet penetration through transverse magnetic fields, and 8) GPU PIC code development. The tools developed during this project are applicable not only to the study of plasma jets, but also to a wide variety of HEDP plasmas of interest to DOE, including plasmas created in short-pulse laser experiments performed to study fast ignition concepts for inertial confinement fusion.
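
    The radiation-diffusion solver described above finite-differences the diffusion equation and hands the resulting sparse system to a parallel solver (PETSc). A 1-D sketch of the same structure, using SciPy's sparse solver in place of PETSc and illustrative parameters:

    ```python
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def implicit_diffusion_step(u, D, dx, dt):
        """Backward-Euler step of du/dt = D d2u/dx2 with zero-flux boundaries,
        solved as a sparse linear system (the same structure a PETSc-based
        2-D radiation-diffusion solver assembles)."""
        n = len(u)
        r = D * dt / dx**2
        main = np.full(n, 1 + 2 * r)
        main[0] = main[-1] = 1 + r      # Neumann (zero-flux) boundaries
        A = sp.diags([-r * np.ones(n - 1), main, -r * np.ones(n - 1)],
                     offsets=[-1, 0, 1], format="csc")
        return spla.spsolve(A, u)       # implicit solve: unconditionally stable
    ```

    The implicit formulation conserves the total radiation energy exactly with these boundary rows and remains stable for any time step, which is why a sparse-matrix solve is preferred over explicit stepping for stiff diffusion problems.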

  6. Large calculation of the flow over a hypersonic vehicle using a GPU

    NASA Astrophysics Data System (ADS)

    Elsen, Erich; LeGresley, Patrick; Darve, Eric

    2008-12-01

    Graphics processing units are capable of impressive computing performance, up to 518 Gflops peak. Various groups have been using these processors for general-purpose computing; most efforts have focused on demonstrating relatively basic calculations, e.g. numerical linear algebra, or physical simulations for visualization purposes with limited accuracy. This paper describes the simulation of a hypersonic vehicle configuration with detailed geometry and accurate boundary conditions using the compressible Euler equations. To the authors' knowledge, this is the most sophisticated calculation of this kind in terms of the complexity of the geometry, the physical model, the numerical methods employed, and the accuracy of the solution. The Navier-Stokes Stanford University Solver (NSSUS) was used for this purpose. NSSUS is a multi-block structured code with a provably stable and accurate numerical discretization which uses a vertex-based finite-difference method. A multigrid scheme is used to accelerate the solution of the system. Based on a comparison of the Intel Core 2 Duo and the NVIDIA 8800GTX, speed-ups of over 40× were demonstrated for simple test geometries and 20× for complex geometries.

  7. 3D Hybrid Simulations of Interactions of High-Velocity Plasmoids with Obstacles

    NASA Astrophysics Data System (ADS)

    Omelchenko, Y. A.; Weber, T. E.; Smith, R. J.

    2015-11-01

    Interactions of fast plasma streams and objects with magnetic obstacles (dipoles, mirrors, etc.) lie at the core of many space and laboratory plasma phenomena, ranging from magnetoshells and solar wind interactions with planetary magnetospheres to compact fusion plasmas (spheromaks and FRCs) to astrophysics-in-the-lab experiments. Properly modeling ion kinetic, finite-Larmor-radius, and Hall effects is essential for describing large-scale plasma dynamics, turbulence, and heating in complex magnetic field geometries. Using an asynchronous parallel hybrid code, HYPERS, we conduct 3D hybrid (particle-in-cell ion, fluid electron) simulations of such interactions under realistic conditions that include magnetic flux coils, ion-ion collisions, and the Chodura resistivity. HYPERS does not step simulation variables synchronously in time but instead performs time integration by executing asynchronous discrete events: updates of particles and fields carried out as frequently as dictated by local physical time scales. Simulations are compared with data from the MSX experiment, which studies the physics of magnetized collisionless shocks through the acceleration and subsequent stagnation of FRC plasmoids against a strong magnetic mirror and flux-conserving boundary.
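
    The asynchronous discrete-event time integration described above can be sketched with a priority queue in which each simulation element advances on its own local time scale. The cell data and rates below are hypothetical, not the HYPERS scheduler:

    ```python
    import heapq

    def async_advance(cells, t_end):
        """Event-driven time integration sketch: each cell advances with its
        own local step (dt = 1/rate), instead of one global synchronous step.
        cells: dict name -> {"t": local time, "rate": local rate, "updates": 0}."""
        heap = [(c["t"], name) for name, c in cells.items()]
        heapq.heapify(heap)
        while heap:
            t, name = heapq.heappop(heap)   # next cell due for an update
            c = cells[name]
            dt = 1.0 / c["rate"]            # local physical time scale
            if t + dt > t_end:
                continue                    # this cell is done
            c["t"] = t + dt
            c["updates"] += 1
            heapq.heappush(heap, (c["t"], name))
        return {name: c["updates"] for name, c in cells.items()}
    ```

    A cell with a fast local time scale is updated many times while a slow cell is touched only rarely, which is the source of the efficiency gain over globally synchronous stepping.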

  8. Generation of low-emittance electron beams in electrostatic accelerators for FEL applications

    NASA Astrophysics Data System (ADS)

    Chen, Teng; Elias, Luis R.

    1995-02-01

    This paper reports results of transverse emittance studies and beam propagation in electrostatic accelerators for free-electron laser applications. In particular, we discuss emittance growth analysis of a low-current electron beam system consisting of a miniature thermionic electron gun and a National Electrostatics Corporation (NEC) accelerator tube. The emittance growth phenomenon is discussed in terms of thermal effects in the electron gun cathode and aberrations produced by field gradient changes occurring inside the electron gun and throughout the accelerator tube. A method of reducing aberrations using a solenoidal magnetic field is described. Analysis of electron beam emittance was done with the EGUN code. Beam propagation along the accelerator tube was studied using a cylindrically symmetric beam envelope equation that included beam self-fields and the external accelerator fields, which were derived from POISSON simulations.
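
    A cylindrically symmetric beam envelope equation of the kind used in this study balances external focusing against space charge and emittance. Below is a sketch with a simple symplectic-Euler integrator and illustrative constant parameters (the actual study coupled POISSON field maps, not a constant focusing function):

    ```python
    def envelope_trace(R0, Rp0, k, K, eps, dz, n_steps):
        """Integrate the cylindrically symmetric beam envelope equation
            R'' = -k R + K/R + eps**2 / R**3
        (external focusing k, space-charge perveance K, emittance eps)
        with a symplectic-Euler scheme. All parameters are illustrative."""
        R, Rp = R0, Rp0
        out = [R]
        for _ in range(n_steps):
            Rpp = -k * R + K / R + eps**2 / R**3   # envelope "force"
            Rp += Rpp * dz                         # update slope first
            R += Rp * dz                           # then radius
            out.append(R)
        return out
    ```

    For a matched beam the three terms cancel at the initial radius, so the envelope stays flat; mismatch produces envelope oscillations, which is the behavior the full self-field analysis diagnoses.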

  9. Assessment of the prevailing physics codes: LEOPARD, LASER, and EPRI-CELL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lan, J.S.

    1981-01-01

    In order to analyze core performance and fuel management, it is necessary to verify reactor physics codes in great detail. This kind of work not only serves the purpose of understanding and controlling the characteristics of each code, but also ensures their reliability as the codes continually change due to modifications and machine transfers. This paper presents the results of a comprehensive verification of three code packages: LEOPARD, LASER, and EPRI-CELL.

  10. Simulations of High Current NuMI Magnetic Horn Striplines at FNAL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sipahi, Taylan; Biedron, Sandra; Hylen, James

    2016-06-01

    Both the NuMI (Neutrinos and the Main Injector) beam line, which has been providing intense neutrino beams for several Fermilab experiments (MINOS, MINERvA, NOvA), and the newly proposed LBNF (Long-Baseline Neutrino Facility) beam line, which plans to produce the highest-power neutrino beam in the world for DUNE (the Deep Underground Neutrino Experiment), need pulsed magnetic horns to focus the mesons whose decays produce the neutrinos. The high-current horn and stripline design has been evolving as NuMI reconfigures for higher beam power and to meet the needs of the LBNF design. The CSU particle accelerator group has aided the neutrino physics experiments at Fermilab by producing EM simulations of magnetic horns and the required high-current striplines. In this paper, we present calculations, using the Poisson and ANSYS Maxwell 3D codes, of the EM interaction of the stripline plates of the NuMI horns at critical stress points. In addition, we give electrical simulation results using the ANSYS Electric code. These results are being used to support the development of evolving horn stripline designs to handle increased electrical current and higher beam power for NuMI upgrades and for LBNF.

  11. Cryogenic distribution box for Fermi National Accelerator Laboratory

    NASA Astrophysics Data System (ADS)

    Svehla, M. R.; Bonnema, E. C.; Cunningham, E. K.

    2017-12-01

    Meyer Tool & Mfg., Inc. (Meyer Tool) of Oak Lawn, Illinois is manufacturing a cryogenic distribution box for Fermi National Accelerator Laboratory (FNAL). The distribution box will be used for the Muon-to-electron conversion (Mu2e) experiment. The box includes twenty-seven cryogenic valves, two heat exchangers, a thermal shield, and an internal nitrogen separator vessel, all contained within a six-foot-diameter ASME-code vacuum vessel. This paper discusses the design and manufacturing processes that were implemented to meet the unique fabrication requirements of this distribution box. Design and manufacturing features discussed include: 1) thermal strap design and fabrication, 2) evolution of piping connections to heat exchangers, 3) nitrogen phase separator design, 4) ASME code design of the vacuum vessel, and 5) cryogenic valve installation.

  12. RMG An Open Source Electronic Structure Code for Multi-Petaflops Calculations

    NASA Astrophysics Data System (ADS)

    Briggs, Emil; Lu, Wenchang; Hodak, Miroslav; Bernholc, Jerzy

    RMG (Real-space Multigrid) is an open source density functional theory code for quantum simulations of materials. It solves the Kohn-Sham equations on real-space grids, which allows for natural parallelization via domain decomposition. Either subspace or Davidson diagonalization, coupled with multigrid methods, is used to accelerate convergence. RMG is a cross-platform open source package which has been used in the study of a wide range of systems, including semiconductors, biomolecules, and nanoscale electronic devices. It can optionally use GPU accelerators to improve performance on systems where they are available. The recently released versions (>2.0) support multiple GPUs per compute node and have improved performance and scalability, enhanced accuracy, and support for additional hardware platforms. New versions of the code are regularly released at http://www.rmgdft.org. The releases include binaries for Linux, Windows, and Macintosh systems, automated builds for clusters using cmake, as well as versions adapted to the major supercomputing installations and platforms. Several recent, large-scale applications of RMG will be discussed.

  13. Implementation of metal-friendly EAM/FS-type semi-empirical potentials in HOOMD-blue: A GPU-accelerated molecular dynamics software

    NASA Astrophysics Data System (ADS)

    Yang, Lin; Zhang, Feng; Wang, Cai-Zhuang; Ho, Kai-Ming; Travesset, Alex

    2018-04-01

    We present an implementation of the EAM and FS interatomic potentials, which are widely used in simulating metallic systems, in HOOMD-blue, a software package designed to perform classical molecular dynamics simulations using GPU acceleration. We first discuss the details of our implementation and then report extensive benchmark tests. We demonstrate that single-precision floating point operations efficiently implemented on GPUs can produce sufficient accuracy when compared against double-precision codes, as demonstrated in calculations of the glass-transition temperature of Cu64.5Zr35.5 and the pair correlation function g(r) of liquid Ni3Al. Our code scales well with the size of the simulated system on NVIDIA Tesla M40 and P100 GPUs. Compared with the popular LAMMPS code running on 32 cores of AMD Opteron 6220 processors, the GPU/CPU performance ratio can reach as high as 4.6. The source code can be accessed through the HOOMD-blue web page for free by any interested user.
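
    In the EAM/FS form implemented here, the total energy is a pair term plus an embedding functional of the host electron density at each atom. A minimal CPU reference sketch with user-supplied callables (the HOOMD-blue implementation uses tabulated potentials and GPU kernels; the functional forms in the test are toy choices):

    ```python
    import numpy as np

    def eam_energy(positions, F, rho, phi, cutoff):
        """Total energy in the EAM form: pairwise term phi(r) plus an
        embedding functional F of the host electron density rho at each atom."""
        n = len(positions)
        energy = 0.0
        density = np.zeros(n)
        for i in range(n):
            for j in range(i + 1, n):
                r = np.linalg.norm(positions[i] - positions[j])
                if r < cutoff:
                    energy += phi(r)          # pair interaction (counted once)
                    density[i] += rho(r)      # host density contributions
                    density[j] += rho(r)
        energy += sum(F(d) for d in density)  # embedding energy
        return energy
    ```

    The many-body character enters only through F, which is why EAM forces map well onto the same neighbor-list machinery GPUs already use for pair potentials.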

  14. ΛCDM is Consistent with the SPARC Radial Acceleration Relation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keller, B. W.; Wadsley, J. W., E-mail: kellerbw@mcmaster.ca

    2017-01-20

    Recent analysis of the Spitzer Photometry and Accurate Rotation Curve (SPARC) galaxy sample found a surprisingly tight relation between the radial acceleration inferred from the rotation curves and the acceleration due to the baryonic components of the disk. It has been suggested that this relation may be evidence for new physics, beyond ΛCDM. In this Letter, we show that 32 galaxies from the MUGS2 cosmological simulations match the SPARC acceleration relation. These simulations of star-forming, rotationally supported disks used a WMAP3 ΛCDM cosmology, and they match the SPARC acceleration relation with less scatter than the observational data. These results show that this acceleration relation is a consequence of dissipative collapse of baryons, rather than being evidence for exotic dark-sector physics or new dynamical laws.
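
    The SPARC radial acceleration relation is commonly summarized by the fitting function of McGaugh et al., g_obs = g_bar / (1 - exp(-sqrt(g_bar/g†))) with g† ≈ 1.2e-10 m/s². A sketch of that function (the Letter compares simulated galaxies against the relation rather than evaluating it this way):

    ```python
    import math

    G_DAGGER = 1.2e-10  # m/s^2, acceleration scale fitted by McGaugh et al.

    def g_obs(g_bar):
        """Radial acceleration relation fitting function: observed
        acceleration as a function of the baryonic (Newtonian) one."""
        return g_bar / (1.0 - math.exp(-math.sqrt(g_bar / G_DAGGER)))
    ```

    At high accelerations the function reduces to g_obs ≈ g_bar (Newtonian regime), while at low accelerations it approaches sqrt(g_bar · g†), the deep "missing mass" regime where the tight observed scatter is most surprising.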

  15. Better physical activity classification using smartphone acceleration sensor.

    PubMed

    Arif, Muhammad; Bilal, Mohsin; Kattan, Ahmed; Ahamed, S Iqbal

    2014-09-01

    Obesity is becoming one of the most serious health problems for the worldwide population. Social interactions on mobile phones and computers via the internet through social e-networks are among the major causes of a lack of physical activity. For the health specialist, it is important to track the record of physical activities of obese or overweight patients to supervise weight-loss control. In this study, the acceleration sensor present in a smartphone is used to monitor the physical activity of the user. Physical activities including walking, jogging, sitting, standing, walking upstairs, and walking downstairs are classified. Time-domain features are extracted from the acceleration data recorded by the smartphone during different physical activities. The time and space complexity of the whole framework is reduced by optimal feature subset selection and pruning of instances. Classification results for the six physical activities are reported in this paper. Using simple time-domain features, 99% classification accuracy is achieved. Furthermore, attribute subset selection is used to remove redundant features and to minimize the time complexity of the algorithm. A subset of 30 features produced more than 98% classification accuracy for the six physical activities.
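
    Time-domain features of the kind used here can be computed per window of triaxial accelerometer samples; the specific feature set below is illustrative, not the paper's exact 30-feature subset:

    ```python
    import numpy as np

    def time_domain_features(window):
        """Simple time-domain features from one window of triaxial
        acceleration samples (shape: n_samples x 3), of the kind fed to an
        activity classifier. Feature names are illustrative."""
        mag = np.linalg.norm(window, axis=1)   # orientation-free magnitude
        return {
            "mean": mag.mean(),
            "std": mag.std(),
            "min": mag.min(),
            "max": mag.max(),
            # mean-crossings distinguish rhythmic activities (walking, jogging)
            # from static ones (sitting, standing)
            "zero_cross": int(np.sum(np.diff(np.sign(mag - mag.mean())) != 0)),
        }
    ```

    Working on the acceleration magnitude rather than raw axes makes the features insensitive to how the phone sits in the pocket, a common design choice in smartphone activity recognition.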

  16. The Richtmyer-Meshkov Instability on a Circular Interface in Magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Black, Wolfgang; Maxon, W. Curtis; Denissen, Nicholas; McFarland, Jacob

    2017-11-01

    Hydrodynamic instabilities (HI) are ubiquitous in high-energy-density (HED) applications such as astrophysics, thermonuclear weapons, and inertial fusion. In these systems, fluid mixing is encouraged by the HI, which can reduce the energy yield and eventually drive the system to equilibrium. The Richtmyer-Meshkov (RM) instability is one such HI and is created when a perturbed interface between fluids of different densities is impulsively accelerated. The physics can be complicated one step further by the inclusion of magnetohydrodynamics (MHD), where HED systems experience the effects of magnetic and electric fields. These systems provide unique challenges and as such can be used to validate hydrodynamic codes capable of predicting HI. The work presented here will outline efforts to study the RMI in MHD for a circular interface utilizing the hydrocode FLAG, developed at Los Alamos National Laboratory.
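
    For a single-mode planar interface, the initial RM growth rate is often estimated with Richtmyer's impulsive model, da/dt = k a₀ A Δu. A sketch with illustrative inputs (the circular-interface FLAG study resolves far more physics than this linear estimate):

    ```python
    def atwood(rho1, rho2):
        """Atwood number across a density interface."""
        return (rho2 - rho1) / (rho2 + rho1)

    def rm_growth_rate(wavenumber, a0, delta_u, rho1, rho2):
        """Richtmyer's impulsive-model estimate of the initial growth rate of
        a shock-accelerated single-mode interface: da/dt = k * a0 * A * du,
        with perturbation wavenumber k, post-shock amplitude a0, interface
        velocity jump du, and Atwood number A."""
        return wavenumber * a0 * atwood(rho1, rho2) * delta_u
    ```

    The linear estimate is useful mainly as a code-validation baseline: simulated early-time growth should recover it before nonlinear saturation and, in MHD, before magnetic tension suppresses the interface roll-up.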

  17. Assessment of computational issues associated with analysis of high-lift systems

    NASA Technical Reports Server (NTRS)

    Balasubramanian, R.; Jones, Kenneth M.; Waggoner, Edgar G.

    1992-01-01

    Thin-layer Navier-Stokes calculations for wing-fuselage configurations from subsonic to hypersonic flow regimes are now possible. However, efficient, accurate solutions using these codes for two- and three-dimensional high-lift systems have yet to be realized. A brief overview of salient experimental and computational research is presented. An assessment of the state of the art relative to high-lift system analysis and identification of issues related to grid generation and flow physics, which are crucial for computational success in this area, are also provided. Research in support of the high-lift elements of NASA's High Speed Research and Advanced Subsonic Transport Programs which addresses some of the computational issues is presented. Finally, fruitful areas of concentrated research are identified to accelerate overall progress for high-lift system analysis and design.

  18. Uncertainty propagation from raw data to final results. [ALEX]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larson, N.M.

    1985-01-01

    Reduction of data from raw numbers (counts per channel) to physically meaningful quantities (such as cross sections) is in itself a complicated procedure. Propagation of experimental uncertainties through that reduction process has sometimes been perceived as even more difficult, if not impossible. At the Oak Ridge Electron Linear Accelerator, a computer code, ALEX, has been developed to assist in the propagation process. The purpose of ALEX is to carefully and correctly propagate all experimental uncertainties through the entire reduction procedure, yielding the complete covariance matrix for the reduced data, while requiring little additional input from the experimentalist beyond that which is needed for the data reduction itself. The theoretical method used in ALEX is described, with emphasis on transmission measurements. Application to the natural iron and natural nickel measurements of D.C. Larson is shown.
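
    The core operation in uncertainty propagation of this kind is the first-order (sandwich) rule applied at each reduction step q = f(d): C_q = J C_d Jᵀ, with J the Jacobian of the reduction. A sketch of that single step (ALEX itself chains this through the full transmission-reduction procedure):

    ```python
    import numpy as np

    def propagate_covariance(jacobian, cov_in):
        """First-order propagation of a covariance matrix through one
        data-reduction step q = f(d): C_q = J C_d J^T."""
        J = np.asarray(jacobian)
        return J @ np.asarray(cov_in) @ J.T
    ```

    Chaining this rule step by step yields the complete covariance matrix of the reduced data, including the off-diagonal correlations that a simple quadrature sum of errors would miss.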

  19. Adapting hierarchical bidirectional inter prediction on a GPU-based platform for 2D and 3D H.264 video coding

    NASA Astrophysics Data System (ADS)

    Rodríguez-Sánchez, Rafael; Martínez, José Luis; Cock, Jan De; Fernández-Escribano, Gerardo; Pieters, Bart; Sánchez, José L.; Claver, José M.; de Walle, Rik Van

    2013-12-01

    The H.264/AVC video coding standard introduces some improved tools in order to increase compression efficiency. Moreover, the multi-view extension of H.264/AVC, called H.264/MVC, adopts many of them. Among the new features, variable block-size motion estimation is one which contributes to high coding efficiency. Furthermore, it defines a different prediction structure that includes hierarchical bidirectional pictures, outperforming traditional Group of Pictures patterns in both scenarios: single-view and multi-view. However, these video coding techniques have high computational complexity. Several techniques have been proposed in the literature over the last few years which are aimed at accelerating the inter prediction process, but there are no works focusing on bidirectional prediction or hierarchical prediction. In this article, with the emergence of many-core processors or accelerators, a step forward is taken towards an implementation of an H.264/AVC and H.264/MVC inter prediction algorithm on a graphics processing unit. The results show a negligible rate distortion drop with a time reduction of up to 98% for the complete H.264/AVC encoder.

  20. The conversion of CESR to operate as the Test Accelerator, CesrTA. Part 1: overview

    NASA Astrophysics Data System (ADS)

    Billing, M. G.

    2015-07-01

    Cornell's electron/positron storage ring (CESR) was modified over a series of accelerator shutdowns beginning in May 2008, which substantially improved its capability for research and development for particle accelerators. CESR's energy span from 1.8 to 5.6 GeV with both electrons and positrons makes it ideal for the study of a wide spectrum of accelerator physics issues and instrumentation related to present light sources and future lepton damping rings. Additionally, a number of these are also relevant to the beam physics of proton accelerators. This paper outlines the motivation for, and the design and conversion of, CESR into a test accelerator, CesrTA, enhanced to study such subjects as low-emittance tuning methods, electron cloud (EC) effects, intra-beam scattering, and fast ion instabilities, as well as general improvements to beam instrumentation. While the initial studies of CesrTA focused on questions related to the International Linear Collider (ILC) damping ring design, CesrTA is a very flexible storage ring, capable of studying a wide range of accelerator physics and instrumentation questions. This paper contains the outline of and the basis for a set of papers documenting the reconfiguration of the storage ring and the associated instrumentation required for the studies described above. Further details may be found in those papers.

  1. Graduate Student Program in Materials and Engineering Research and Development for Future Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spentzouris, Linda

The objective of the proposal was to develop graduate student training in materials and engineering research relevant to the development of particle accelerators. Many components used in today's accelerators or storage rings are at the limit of performance. The path forward in many cases requires the development of new materials or fabrication techniques, or a novel engineering approach. Often, accelerator-based laboratories find it difficult to recruit top-level engineers or materials experts with the motivation to work on these problems. The three years of funding provided by this grant were used to support development of accelerator components through a multidisciplinary approach that cut across the disciplinary boundaries of accelerator physics, materials science, and surface chemistry. The following results were achieved: (1) significant scientific results on fabrication of novel photocathodes, (2) application of surface science and superconducting materials expertise to accelerator problems through faculty involvement, (3) development of instrumentation for fabrication and characterization of materials for accelerator components, (4) student involvement with problems at the interface of materials science and accelerator physics.

  2. Calculations of skyshine from an intense portable electron linac

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Estes, G.P.; Hughes, H.G.; Fry, D.A.

    1994-12-31

The MCNP Monte Carlo code has been used at Los Alamos to calculate skyshine and terrain albedo effects from an intense portable electron linear accelerator that is to be used by the Russian Federation to radiograph nuclear weapons that may have been damaged in accidents. Relative dose rate profiles have been calculated. The design of the accelerator, along with a diagram, is presented.

  3. Plasma Wakefield Acceleration and FACET - Facilities for Accelerator Science and Experimental Test Beams at SLAC

    ScienceCinema

    Seryi, Andrei

    2017-12-22

    Plasma wakefield acceleration is one of the most promising approaches to advancing accelerator technology. This approach offers a potential 1,000-fold or more increase in acceleration over a given distance, compared to existing accelerators.  FACET, enabled by the Recovery Act funds, will study plasma acceleration, using short, intense pulses of electrons and positrons. In this lecture, the physics of plasma acceleration and features of FACET will be presented.  

  4. Diet and Physical Activity Intervention Strategies for College Students

    PubMed Central

    Martinez, Yannica Theda S.; Harmon, Brook E.; Bantum, Erin O.; Strayhorn, Shaila

    2016-01-01

    Objectives To understand perceived barriers of a diverse sample of college students and their suggestions for interventions aimed at healthy eating, cooking, and physical activity. Methods Forty students (33% Asian American, 30% mixed ethnicity) were recruited. Six focus groups were audio-recorded, transcribed, and coded. Coding began with a priori codes, but allowed for additional codes to emerge. Analysis of questionnaires on participants’ dietary and physical activity practices and behaviors provided context for qualitative findings. Results Barriers included time, cost, facility quality, and intimidation. Tailoring towards a college student’s lifestyle, inclusion of hands-on skill building, and online support and resources were suggested strategies. Conclusions Findings provide direction for diet and physical activity interventions and policies aimed at college students. PMID:28480225

  5. Summary of the Physics Opportunities Working Group

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Pisin; McDonald, K.T.

    1992-12-01

The Physics Opportunities Working Group was convened with the rather general mandate to explore physics opportunities that may arise as new accelerator technologies and facilities come into play. Five topics were considered during the workshop: QED at critical field strength, novel positron sources, crystal accelerators, suppression of beamstrahlung, and muon colliders. Of particular interest was the sense that a high-energy muon collider might be technically feasible and certainly deserves serious study.

  7. Which Accelerates Faster--A Falling Ball or a Porsche?

    ERIC Educational Resources Information Center

    Rall, James D.; Abdul-Razzaq, Wathiq

    2012-01-01

    An introductory physics experiment has been developed to address the issues seen in conventional physics lab classes including assumption verification, technological dependencies, and real world motivation for the experiment. The experiment has little technology dependence and compares the acceleration due to gravity by using position versus time…

  8. YALINA facility a sub-critical Accelerator- Driven System (ADS) for nuclear energy research facility description and an overview of the research program (1997-2008).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gohar, Y.; Smith, D. L.; Nuclear Engineering Division

    2010-04-28

The YALINA facility is a zero-power, sub-critical assembly driven by a conventional neutron generator. It was conceived, constructed, and put into operation at the Radiation Physics and Chemistry Problems Institute of the National Academy of Sciences of Belarus, located in Minsk-Sosny, Belarus. This facility was conceived for the purpose of investigating the static and dynamic neutronics properties of accelerator-driven sub-critical systems, and to serve as a neutron source for investigating the properties of nuclear reactions, in particular transmutation reactions involving minor-actinide nuclei. This report provides a detailed description of this facility and documents the progress of research carried out there during a period of approximately a decade, from the time the facility was conceived and built until the end of 2008. During its history of development and operation to date (1997-2008), the YALINA facility has hosted several foreign groups that worked with the resident staff as collaborators. The participation of Argonne National Laboratory in the YALINA research programs commenced in 2005. For obvious reasons, special emphasis is placed in this report on the work at the YALINA facility that has involved Argonne's participation. Attention is given here to the experimental program at the YALINA facility as well as to analytical investigations aimed at validating codes and computational procedures and at providing a better understanding of the physics and operational behavior of the YALINA facility in particular, and of ADS systems in general, during the period 1997-2008.

  9. Essay: In Memory of Robert Siemann

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chao, Alexander W.; /SLAC

Bob Siemann came to SLAC from Cornell in 1991. With the support of Burton Richter, then Director of SLAC, he took on a leadership role in formulating an academic program in accelerator physics at SLAC and developing its accelerator faculty. Throughout his career he championed accelerator physics as an independent academic discipline, a vision that he fought hard for and never retreated from. He convinced Stanford University and SLAC to create a line of tenured accelerator physics faculty, and over the years he also regularly taught classes at Stanford and the U.S. Particle Accelerator School. After the shutdown of the SSC Laboratory, I returned to SLAC in 1993 to join the accelerator faculty he was forming. He had always visualized the need for a professional academic journal for the accelerator field, and played a pivotal role in creating the journal Physical Review Special Topics - Accelerators and Beams, now the community standard for accelerator physics after nine years of his editorship. Today, Bob's legacy of accelerator physics as an independent academic discipline continues at SLAC as well as in the community, from which we all benefit. Bob was a great experimentalist. He specialized in experimental techniques and instrumentation, but what he wanted to learn was physics. If he had to learn theory - heaven forbid - to reach that goal, he would not hesitate one second to do so. In fact, he wrote several theoretical papers as a result of these efforts. Now this is what I call a true experimentalist! Ultimately, however, I think it was experimental instruments that he loved most. His eyes widened when he talked about his instruments. Prompted by a question, he would proceed to a nearby blackboard, with a satisfied grin, carefully draw his experimental device, then describe his experiment and educate the questioner with some insightful physics. These moments were most enjoyable, to him and the questioner alike.
When I think of Bob today, it is these moments that first come to mind, and it is these moments I will miss the most. I should like to mention another curious thing about Bob, namely that he had a special talent for finding persuasive arguments that went his way. It was difficult to argue with Bob because it was so difficult to win. Generally quiet otherwise, he was too good and too methodical a debater. I never saw him lose a debate on a policy issue or in a committee setting. However, when it came to physics, his soft spot, he occasionally let a weakness show. When he did, he would lose the debate, but his grin revealed that the loss was more than compensated by the physics he gained together with his debater. It is hard to believe that the office around the corner is now empty. The dear colleague we have come to know, to talk to, and to seek advice from, together with the feet-on-the-desk posture and the familiar grin, is no longer there. I wonder who will occupy that office next, and who will continue to carry on Bob Siemann's legacy. Many of us are waiting.

  10. ALICE: A non-LTE plasma atomic physics, kinetics and lineshape package

    NASA Astrophysics Data System (ADS)

    Hill, E. G.; Pérez-Callejo, G.; Rose, S. J.

    2018-03-01

    All three parts of an atomic physics, atomic kinetics and lineshape code, ALICE, are described. Examples of the code being used to model the emissivity and opacity of plasmas are discussed and interesting features of the code which build on the existing corpus of models are shown throughout.

  11. Propulsion Physics Under the Changing Density Field Model

    NASA Technical Reports Server (NTRS)

    Robertson, Glen A.

    2011-01-01

To grow as a spacefaring race, future spaceflight systems will require new propulsion physics: specifically, a propulsion model that does not require mass ejection yet still provides the high thrust necessary to accelerate within or beyond our solar system and return within a normal work period or lifetime. In 2004 Khoury and Weltman produced a density-dependent cosmology theory they called Chameleon Cosmology because, by its nature, it is hidden within known physics. This theory represents a scalar field within and about an object, even in vacuum. These scalar fields can be viewed as vacuum energy fields with definable densities that permeate all matter, with implications for dark matter/energy and the accelerating expansion of the universe, and they suggest a new force mechanism for propulsion physics. Using Chameleon Cosmology, the author has developed a new propulsion physics model, called the Changing Density Field (CDF) Model. In this model, changes in the density of the surrounding fields are related to the acceleration of matter within an object, and these density changes in turn change how the object couples to the surrounding fields. Thrust is achieved by creating a differential in the coupling to these density fields about an object. Since the model indicates that the field density in an object can be changed by internal mass acceleration, even without exhausting mass, the CDF model implies a propellant-less propulsion physics model.

  12. Symplectic orbit and spin tracking code for all-electric storage rings

    DOE PAGES

    Talman, Richard M.; Talman, John D.

    2015-07-22

Proposed methods for measuring the electric dipole moment (EDM) of the proton use an intense, polarized proton beam stored in an all-electric storage ring “trap.” At the “magic” kinetic energy of 232.792 MeV, proton spins are “frozen,” i.e., always parallel to the instantaneous particle momentum. Energy deviation from the magic value causes in-plane precession of the spin relative to the momentum. Any nonzero EDM value will cause out-of-plane precession; measuring this precession is the basis for the EDM determination. A proposed implementation of this measurement shows that a proton EDM value of 10⁻²⁹ e·cm or greater will produce a statistically significant, measurable precession after multiply repeated runs, assuming small beam depolarization during 1000 s runs, with high enough precision to test models of the early universe developed to account for the present-day particle/antiparticle population imbalance. This paper describes an accelerator simulation code, eteapot, a new component of the Unified Accelerator Libraries (ual), to be used for long-term tracking of particle orbits and spins in electric bend accelerators, in order to simulate EDM storage ring experiments. Though electric rings are qualitatively much like magnetic rings, the nonconstant particle velocity gives them significantly different properties, especially in weak focusing rings. Like the earlier code teapot (for magnetic ring simulation), this code performs exact tracking in an idealized (approximate) lattice rather than taking the more conventional approach of approximate tracking in a more nearly exact lattice. The Bargmann-Michel-Telegdi (BMT) equation describing the evolution of spin vectors through idealized bend elements is also solved exactly, a result original to this paper. Furthermore, the idealization permits the code to be exactly symplectic (with no artificial “symplectification”).
Any residual spurious damping or antidamping is sufficiently small to permit reliable tracking for long times, such as the 1000 s assumed in estimating the achievable EDM precision. This paper documents in detail the theoretical formulation implemented in eteapot. An accompanying paper describes the practical application of the eteapot code in the Unified Accelerator Libraries (ual) environment to “resurrect,” or reverse engineer, the “AGS-analog” all-electric ring built at Brookhaven National Laboratory in 1954. Of the (very few) all-electric rings ever commissioned, the AGS-analog ring is the only relativistic one and is the closest to what is needed for measuring proton (or, even more so, electron) EDMs. As a result, the companion paper also describes preliminary lattice studies for the planned proton EDM storage rings, as well as testing of the code for long-time orbit and spin tracking.

  13. Accurate and efficient spin integration for particle accelerators

    DOE PAGES

    Abell, Dan T.; Meiser, Dominic; Ranjbar, Vahid H.; ...

    2015-02-01

Accurate spin tracking is a valuable tool for understanding spin dynamics in particle accelerators and can help improve the performance of an accelerator. In this paper, we present a detailed discussion of the integrators in the spin tracking code GPUSPINTRACK. We have implemented orbital integrators based on drift-kick, bend-kick, and matrix-kick splits. On top of the orbital integrators, we have implemented various integrators for the spin motion. These integrators use quaternions and Romberg quadratures to accelerate both the computation and the convergence of spin rotations. We evaluate their performance and accuracy in quantitative detail for individual elements as well as for the entire RHIC lattice. We exploit the inherently data-parallel nature of spin tracking to accelerate our algorithms on graphics processing units.
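The quaternion representation this abstract mentions can be illustrated with a short sketch. This is not GPUSPINTRACK's actual integrator; the axis, rotation angle, and step count below are made up to show why composing many small spin rotations as quaternion products is attractive.

```python
# Sketch: represent each element's spin rotation as a unit quaternion,
# compose rotations by quaternion multiplication, and apply the result
# to a spin vector. 100 steps of 2*pi/100 about z compose to a full
# turn, so the spin returns to its starting direction.

import math

def quat_from_axis_angle(axis, angle):
    ax, ay, az = axis
    n = math.sqrt(ax * ax + ay * ay + az * az)
    s = math.sin(angle / 2) / n
    return (math.cos(angle / 2), ax * s, ay * s, az * s)

def quat_mul(q, r):
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotate(q, v):
    # v' = q v q*  (v embedded as a pure quaternion)
    w, x, y, z = q
    qv = quat_mul(quat_mul(q, (0.0,) + tuple(v)), (w, -x, -y, -z))
    return qv[1:]

q = (1.0, 0.0, 0.0, 0.0)                          # identity rotation
step = quat_from_axis_angle((0, 0, 1), 2 * math.pi / 100)
for _ in range(100):
    q = quat_mul(step, q)                         # compose one step

spin = rotate(q, (1.0, 0.0, 0.0))
print([round(c, 9) for c in spin])  # ≈ [1.0, 0.0, 0.0]
```

Four numbers per rotation (versus nine for a 3x3 matrix) and cheap renormalization are also what make quaternions a good fit for GPU registers.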

  14. Graphics Processing Unit Acceleration of Gyrokinetic Turbulence Simulations

    NASA Astrophysics Data System (ADS)

    Hause, Benjamin; Parker, Scott

    2012-10-01

We find a substantial increase in on-node performance using Graphics Processing Unit (GPU) acceleration in gyrokinetic delta-f particle-in-cell simulation. Optimization is performed on a two-dimensional slab gyrokinetic particle simulation using the Portland Group Fortran compiler with the GPU accelerator compiler directives. We have implemented the GPU acceleration on a Core i7 gaming PC with an NVIDIA GTX 580 GPU. We find comparable, or better, acceleration relative to the NERSC DIRAC cluster with the NVIDIA Tesla C2050 computing processor. The Tesla C2050 is about 2.6 times more expensive than the GTX 580 gaming GPU. Optimization strategies and comparisons between DIRAC and the gaming PC will be presented. We will also discuss progress on optimizing the comprehensive three-dimensional general geometry GEM code.

  15. Sheath field dynamics from time-dependent acceleration of laser-generated positrons

    NASA Astrophysics Data System (ADS)

    Kerr, Shaun; Fedosejevs, Robert; Link, Anthony; Williams, Jackson; Park, Jaebum; Chen, Hui

    2017-10-01

    Positrons produced in ultraintense laser-matter interactions are accelerated by the sheath fields established by fast electrons, typically resulting in quasi-monoenergetic beams. Experimental results from OMEGA EP show higher order features developing in the positron spectra when the laser energy exceeds one kilojoule. 2D PIC simulations using the LSP code were performed to give insight into these spectral features. They suggest that for high laser energies multiple, distinct phases of acceleration can occur due to time-dependent sheath field acceleration. The detailed dynamics of positron acceleration will be discussed. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344, and funded by LDRD 17-ERD-010.

  16. Interactions Between Energetic Electrons and Realistic Whistler Mode Waves in the Jovian Magnetosphere

    NASA Astrophysics Data System (ADS)

    de Soria-Santacruz Pich, M.; Drozdov, A.; Menietti, J. D.; Garrett, H. B.; Kellerman, A. C.; Shprits, Y. Y.

    2016-12-01

    The radiation belts of Jupiter are the most intense of all the planets in the solar system. Their source is not well understood but they are believed to be the result of inward radial transport beyond the orbit of Io. In the case of Earth, the radiation belts are the result of local acceleration and radial diffusion from whistler waves, and it has been suggested that this type of acceleration may also be significant in the magnetosphere of Jupiter. Multiple diffusion codes have been developed to study the dynamics of the Earth's magnetosphere and characterize the interaction between relativistic electrons and whistler waves; in the present paper we adapt one of these codes, the two-dimensional version of the Versatile Electron Radiation Belt (VERB) computer code, to the case of the Jovian magnetosphere. We use realistic parameters to determine the importance of whistler emissions in the acceleration and loss of electrons in the Jovian magnetosphere. More specifically, we use an extensive wave survey from the Galileo spacecraft and initial conditions derived from the Galileo Interim Radiation Electron Model version 2 (GIRE2) to estimate the pitch angle and energy diffusion of the electron population due to lower and upper band whistlers as a function of latitude and radial distance from the planet, and we calculate the decay rates that result from this interaction.
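The pitch-angle diffusion calculation described above can be sketched in miniature. This is not the VERB code, which solves the full bounce-averaged diffusion equation with wave-derived coefficients; the sketch below assumes a constant diffusion coefficient and a simple absorbing boundary standing in for the loss cone, with all numbers chosen only for stability of the explicit scheme.

```python
# Minimal 1D pitch-angle diffusion sketch: evolve f(alpha) under
#   df/dt = d/dalpha ( D df/dalpha )
# with an explicit finite-difference scheme. The absorbing boundary at
# alpha = 0 plays the role of the loss cone: particles scattered there
# precipitate and are removed, so the total content decays.

N = 50
dx = 1.0 / N
D = 0.1
dt = 0.4 * dx * dx / D          # satisfies the explicit stability limit
f = [1.0] * (N + 1)             # initially isotropic distribution
f[0] = 0.0                      # absorbing "loss cone" boundary

for _ in range(2000):
    new = f[:]
    for i in range(1, N):
        new[i] = f[i] + D * dt / (dx * dx) * (f[i+1] - 2*f[i] + f[i-1])
    new[0] = 0.0                # particles reaching the loss cone are lost
    new[N] = new[N-1]           # reflecting boundary at 90 degrees
    f = new

# Content has decayed below its initial value of N*dx = 1.0:
print(sum(f) * dx < 1.0)  # → True
```

Production codes like VERB replace the constant D with pitch-angle- and energy-dependent coefficients computed from the observed wave spectra, and use implicit solvers to avoid the explicit time-step limit.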

  17. Computational tools and lattice design for the PEP-II B-Factory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai Yunhai; Irwin, John; Nosochkov, Yuri

    1997-02-01

    Several accelerator codes were used to design the PEP-II lattices, ranging from matrix-based codes, such as MAD and DIMAD, to symplectic-integrator codes, such as TRACY and DESPOT. In addition to element-by-element tracking, we constructed maps to determine aberration strengths. Furthermore, we have developed a fast and reliable method (nPB tracking) to track particles with a one-turn map. This new technique allows us to evaluate performance of the lattices on the entire tune-plane. Recently, we designed and implemented an object-oriented code in C++ called LEGO which integrates and expands upon TRACY and DESPOT.
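The one-turn-map idea behind nPB tracking can be illustrated with the simplest possible case, a linear 2x2 map. This is only a conceptual sketch (the method in the abstract uses nonlinear maps built from the lattice); the tune and starting amplitude here are arbitrary.

```python
# One-turn-map tracking sketch: in normalized coordinates one turn of a
# linear lattice is a phase-space rotation by 2*pi*Q, so tracking N
# turns is N applications of a single 2x2 matrix instead of
# element-by-element tracking through the whole ring.

import math

Q = 0.31                                  # illustrative betatron tune
mu = 2 * math.pi * Q
M = ((math.cos(mu), math.sin(mu)),        # one-turn matrix (beta = 1)
     (-math.sin(mu), math.cos(mu)))

x, xp = 1e-3, 0.0                         # initial offset, zero slope
for _ in range(1000):                     # 1000 turns = 1000 map applications
    x, xp = (M[0][0] * x + M[0][1] * xp,
             M[1][0] * x + M[1][1] * xp)

# The symplectic map preserves the invariant x^2 + xp^2 = (1e-3)^2:
print(round(x * x + xp * xp, 12))  # → 1e-06
```

Scanning the whole tune plane, as the abstract describes, amounts to rebuilding the map for each (Qx, Qy) working point and repeating this cheap iteration.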

  18. Combined Modeling of Acceleration, Transport, and Hydrodynamic Response in Solar Flares. 1; The Numerical Model

    NASA Technical Reports Server (NTRS)

    Liu, Wei; Petrosian, Vahe; Mariska, John T.

    2009-01-01

Acceleration and transport of high-energy particles and the fluid dynamics of atmospheric plasma are interrelated aspects of solar flares, but for convenience and simplicity they were artificially separated in the past. We present here self-consistently combined Fokker-Planck modeling of particles and hydrodynamic simulation of flare plasma. Energetic electrons are modeled with the Stanford unified code of acceleration, transport, and radiation, while plasma is modeled with the Naval Research Laboratory flux tube code. We calculated the collisional heating rate directly from the particle transport code, which is more accurate than those in previous studies based on approximate analytical solutions. We repeated the simulation of Mariska et al. with an injection of power-law, downward-beamed electrons using the new heating rate. For this case, a ~10% difference was found from their old result. We also used a more realistic spectrum of injected electrons provided by the stochastic acceleration model, which has a smooth transition from a quasi-thermal background at low energies to a nonthermal tail at high energies. The inclusion of low-energy electrons results in relatively more heating in the corona (versus the chromosphere) and thus a larger downward heat conduction flux. The interplay of electron heating, conduction, and radiative loss leads to stronger chromospheric evaporation than obtained in previous studies, which had a deficit in low-energy electrons due to an arbitrarily assumed low-energy cutoff. The energy and spatial distributions of energetic electrons and bremsstrahlung photons bear signatures of the changing density distribution caused by chromospheric evaporation. In particular, the density jump at the evaporation front gives rise to enhanced emission, which, in principle, can be imaged by X-ray telescopes. This model can be applied to investigate a variety of high-energy processes in solar, space, and astrophysical plasmas.

  19. Accelerator shield design of KIPT neutron source facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhong, Z.; Gohar, Y.

Argonne National Laboratory (ANL) of the United States and the Kharkov Institute of Physics and Technology (KIPT) of Ukraine have been collaborating on the design development of a neutron source facility at KIPT utilizing an electron-accelerator-driven subcritical assembly. The electron beam power is 100 kW, using 100 MeV electrons. The facility is designed to perform basic and applied nuclear research, produce medical isotopes, and train young nuclear specialists. The biological shield of the accelerator building is designed to reduce the biological dose to less than 0.5 mrem/hr during operation. The main source of the biological dose is the photons and neutrons generated by interactions of electrons leaked from the electron gun and accelerator sections with the surrounding concrete and accelerator materials. The Monte Carlo code MCNPX serves as the calculation tool for the shield design, due to its capability to solve coupled electron, photon, and neutron transport problems. The direct photon dose can be tallied by an MCNPX calculation, starting with the leaked electrons. However, it is difficult to accurately tally the neutron dose directly from the leaked electrons: the neutron yield from interactions with the surrounding components is less than 0.01 neutron per electron. This causes difficulties for Monte Carlo analyses and consumes tremendous computation time to tally the neutron dose outside the shield boundary with acceptable statistics. To avoid these difficulties, the SOURCE and TALLYX user subroutines of MCNPX were developed for the study. The generated neutrons are banked, together with all related parameters, for a subsequent MCNPX calculation to obtain the neutron and secondary photon doses. The weight-windows variance reduction technique is utilized for both neutron and photon dose calculations. Two shielding materials, i.e., heavy concrete and ordinary concrete, were considered for the shield design.
The main goal is to maintain the total dose outside the shield boundary at less than 0.5 mrem/hr. The shield configuration and parameters of the accelerator building have been determined and are presented in this paper. (authors)

  20. Dependency graph for code analysis on emerging architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shashkov, Mikhail Jurievich; Lipnikov, Konstantin

The directed acyclic graph (DAG) of dependencies is becoming the standard for modern multi-physics codes. The ideal DAG is the true block scheme of a multi-physics code. Therefore, it is a convenient object for in situ analysis of the cost of computations and of algorithmic bottlenecks related to statistically frequent data motion and the dynamical machine state.
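A toy version of the cost analysis such a DAG enables might look like the following. The kernel names, costs, and dependencies are entirely made up; the point is only that a topological order lets one accumulate start/finish times and read off the critical path, i.e., the algorithmic bottleneck.

```python
# Sketch: model one multi-physics time step as a DAG of kernels with
# costs, then compute the critical-path length, the minimum time step
# duration no amount of added parallelism can beat.

from graphlib import TopologicalSorter

cost = {"hydro": 5, "eos": 1, "transport": 4, "coupling": 2, "io": 1}
deps = {                        # kernel -> kernels it depends on
    "eos": {"hydro"},
    "transport": {"hydro"},
    "coupling": {"eos", "transport"},
    "io": {"coupling"},
}

order = list(TopologicalSorter(deps).static_order())
finish = {}
for node in order:
    start = max((finish[d] for d in deps.get(node, ())), default=0)
    finish[node] = start + cost[node]

# hydro -> transport -> coupling -> io dominates: 5 + 4 + 2 + 1
print(max(finish.values()))  # → 12
```

An in situ analyzer would build the same graph from the running code's task system and attach measured (rather than assumed) costs to each node.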

  1. Development of the US3D Code for Advanced Compressible and Reacting Flow Simulations

    NASA Technical Reports Server (NTRS)

    Candler, Graham V.; Johnson, Heath B.; Nompelis, Ioannis; Subbareddy, Pramod K.; Drayna, Travis W.; Gidzak, Vladimyr; Barnhardt, Michael D.

    2015-01-01

Aerothermodynamics and hypersonic flows involve complex multi-disciplinary physics, including finite-rate gas-phase kinetics, finite-rate internal energy relaxation, gas-surface interactions with finite-rate oxidation and sublimation, transition to turbulence, large-scale unsteadiness, shock-boundary layer interactions, fluid-structure interactions, and thermal protection system ablation and thermal response. Many of the flows have a large range of length and time scales, requiring large computational grids, implicit time integration, and large solution run times. The University of Minnesota NASA US3D code was designed for the simulation of these complex, highly-coupled flows. It has many of the features of the well-established DPLR code, but uses unstructured grids and has many advanced numerical capabilities and physical models for multi-physics problems. The main capabilities of the code are described, the physical modeling approaches are discussed, the different types of numerical flux functions and time integration approaches are outlined, and the parallelization strategy is overviewed. Comparisons between US3D and the NASA DPLR code are presented, and several advanced simulations are presented to illustrate some of the novel features of the code.

  2. Dynamics of Superfluid Helium in Low-Gravity

    NASA Technical Reports Server (NTRS)

    Frank, David J.

    1997-01-01

This report summarizes the work performed under a contract entitled 'Dynamics of Superfluid Helium in Low Gravity'. This project performed verification tests, over a wide range of accelerations, of two Computational Fluid Dynamics (CFD) codes, one of which incorporates the two-fluid model of superfluid helium (SFHe). Helium was first liquefied in 1908, and not until the 1930s were the properties of helium below 2.2 K observed sufficiently to realize that it did not obey the laws of physics as applied to ordinary liquids. The term superfluidity became associated with these unique observations. The low temperature of SFHe and its temperature uniformity have made it a significant cryogenic coolant for space applications in astronomical observations with infrared sensors and in low-temperature physics. Superfluid helium has been used in instruments such as the Shuttle Infrared Astronomy Telescope (IRT), the Infrared Astronomy Satellite (IRAS), the Cosmic Background Explorer (COBE), and the Infrared Space Observatory (ISO). It is also used in the Space Infrared Telescope Facility (SIRTF), the Relativity Mission satellite, formally called Gravity Probe B (GP-B), and the Satellite Test of the Equivalence Principle (STEP), presently under development. For GP-B and STEP, SFHe is used to cool Superconducting Quantum Interference Devices (SQUIDs), among other parts of the instruments. The Superfluid Helium On-Orbit Transfer (SHOOT) experiment flown on the Shuttle studied the behavior of SFHe. This experiment attempted to get low-gravity slosh data; however, the main emphasis was to study the low-gravity transfer of SFHe from tank to tank. These instruments carried tanks of SFHe from a few hundred liters to 2500 liters. The capability of modeling the behavior of SFHe is important to spacecraft control engineers, who must design systems that can overcome disturbances created by the movement of the fluid.
In addition, instruments such as GP-B and STEP are very sensitive to quasi-steady changes in the mass distribution of the liquid. The CFD codes were used to model the fluid's dynamic motion. Tests in one g were performed with the main emphasis on being able to compute the actual damping of the fluid. A series of flights on the NASA Lewis reduced-gravity DC-9 aircraft was performed with the Jet Propulsion Laboratory (JPL) Low Temperature Flight Facility and a superfluid Test Cell. The data at approximately 0.04g, 1g, and 2g were used to determine whether correct fundamental frequencies can be predicted based on the acceleration field. Tests in zero gravity were performed to evaluate zero-gravity motion.
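The acceleration dependence those 0.04g, 1g, and 2g flights probe can be sketched with the standard lateral-slosh frequency formula for a cylindrical tank; this is a generic textbook relation, not the project's CFD model, and the tank radius and fill height below are illustrative, not the actual test-cell values.

```python
# Sketch: the fundamental lateral slosh frequency of a cylindrical tank,
#   f = (1/2*pi) * sqrt( xi * (g/R) * tanh(xi * h / R) ),
# where xi ~ 1.8412 is the first root of the Bessel derivative J1'.
# Because f scales as sqrt(g), the frequencies measured at 0.04g, 1g,
# and 2g directly test whether a code captures the acceleration field.

import math

def slosh_frequency(g, R=0.2, h=0.3, xi=1.8412):
    """Fundamental slosh frequency [Hz] for acceleration g [m/s^2]."""
    return math.sqrt(xi * g / R * math.tanh(xi * h / R)) / (2 * math.pi)

for g_level in (0.04, 1.0, 2.0):
    print(g_level, round(slosh_frequency(g_level * 9.81), 3))

# The 2g / 0.04g frequency ratio is sqrt(2 / 0.04) = sqrt(50):
print(round(slosh_frequency(2 * 9.81) / slosh_frequency(0.04 * 9.81), 3))  # → 7.071
```

For SFHe the two-fluid behavior modifies the damping, which is why the project compared measured damping against the CFD codes rather than relying on such closed-form estimates.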

  3. Analytical investigation of the dynamics of tethered constellations in Earth orbit, phase 2

    NASA Technical Reports Server (NTRS)

    Lorenzini, E.

    1985-01-01

This Quarterly Report deals with the deployment maneuver of a single-axis, vertical constellation with three masses. A new, easy-to-handle computer code that simulates the two-dimensional dynamics of the constellation has been implemented. This computer code is used for designing control laws for the deployment maneuver that minimize the acceleration level of the low-g platform during the maneuver.

  4. pycola: N-body COLA method code

    NASA Astrophysics Data System (ADS)

Tassev, Svetlin; Eisenstein, Daniel J.; Wandelt, Benjamin D.; Zaldarriaga, Matias

    2015-09-01

pycola is a multithreaded Python/Cython N-body code implementing the Comoving Lagrangian Acceleration (COLA) method in the temporal and spatial domains, which trades accuracy at small scales to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating the large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing. The COLA method achieves its speed by calculating the large-scale dynamics exactly using Lagrangian Perturbation Theory (LPT) while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos.

  5. Simulations of the plasma dynamics in high-current ion diodes

    NASA Astrophysics Data System (ADS)

    Boine-Frankenheim, O.; Pointon, T. D.; Mehlhorn, T. A.

Our time-implicit fluid/particle-in-cell (PIC) code DYNAID [1] is applied to problems relevant to applied-B ion diode operation. We present simulations of the laser ion source, which will soon be employed on the SABRE accelerator at SNL, and of the dynamics of the anode source plasma in the applied electric and magnetic fields. DYNAID is still a test bed for a higher-dimensional simulation code. Nevertheless, the code can already give new theoretical insight into the dynamics of plasmas in pulsed power devices.

  6. NSEG, a segmented mission analysis program for low and high speed aircraft. Volume 1: Theoretical development

    NASA Technical Reports Server (NTRS)

    Hague, D. S.; Rozendaal, H. L.

    1977-01-01

A rapid mission analysis code based on approximate flight-path equations of motion is presented. The equation form varies with segment type, for example accelerations, climbs, cruises, descents, and decelerations. Realistic and detailed vehicle characteristics are specified in tabular form. The code also contains extensive flight-envelope performance-mapping capabilities. Approximate takeoff and landing analyses are performed, and centrifugal lift effects are accounted for at high speeds. Extensive turbojet and ramjet engine scaling procedures are incorporated in the code.
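The segment-by-segment structure described above can be sketched as a dispatch over segment types. This is a hedged, minimal illustration of the idea, not NSEG's actual equations; the function names and the constant-acceleration/constant-speed kinematics are our own simplification.

```python
def accel_segment(v0, v1, a):
    """Constant-acceleration leg: return (time, distance)."""
    t = (v1 - v0) / a
    return t, v0 * t + 0.5 * a * t * t

def cruise_segment(v, d):
    """Constant-speed leg: return (time, distance)."""
    return d / v, d

def fly_mission(legs):
    """Sum time and distance over a list of (kind, args) segments,
    choosing the equation form by segment type."""
    handlers = {"accel": accel_segment, "cruise": cruise_segment}
    total_t = total_d = 0.0
    for kind, args in legs:
        t, d = handlers[kind](*args)
        total_t += t
        total_d += d
    return total_t, total_d

# Accelerate 100->200 m/s at 2 m/s^2, cruise 50 km, decelerate 200->150 m/s.
mission = [("accel", (100.0, 200.0, 2.0)),
           ("cruise", (200.0, 50000.0)),
           ("accel", (200.0, 150.0, -2.0))]
t_total, d_total = fly_mission(mission)
```

A real segmented-mission code would add climbs, descents, fuel burn from tabulated engine data, and flight-envelope constraints per leg; the dispatch structure stays the same.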

  7. Figuring the Acceleration of the Simple Pendulum

    ERIC Educational Resources Information Center

    Lieberherr, Martin

    2011-01-01

The centripetal acceleration has been known since the time of Huygens (1659) and Newton (1684). The physics needed to calculate the acceleration of a simple pendulum has been around for more than 300 years, and a fairly complete treatise has been given by C. Schwarz in this journal. But sentences like "the acceleration is always directed towards the…
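The standard result behind this discussion is easy to sketch: for a pendulum released from rest at amplitude theta0, the tangential acceleration is g·sin(theta) and energy conservation gives the centripetal component a_c = v²/L = 2g·(cos(theta) − cos(theta0)). A minimal check, independent of the pendulum length:

```python
import math

def pendulum_acceleration(theta, theta0, g=9.81):
    """Magnitude of the total acceleration at angle theta for a simple
    pendulum released from rest at theta0. Tangential: g*sin(theta);
    centripetal: v^2/L = 2g*(cos(theta) - cos(theta0))."""
    a_t = g * math.sin(theta)
    a_c = 2.0 * g * (math.cos(theta) - math.cos(theta0))
    return math.hypot(a_t, a_c)

# At the turning point (theta = theta0) the acceleration is purely
# tangential, g*sin(theta0); at the lowest point (theta = 0) it is
# purely centripetal, 2g*(1 - cos(theta0)).
```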

  8. GPU accelerated cell-based adaptive mesh refinement on unstructured quadrilateral grid

    NASA Astrophysics Data System (ADS)

    Luo, Xisheng; Wang, Luying; Ran, Wei; Qin, Fenghua

    2016-10-01

A GPU-accelerated inviscid flow solver is developed on an unstructured quadrilateral grid in the present work. For the first time, cell-based adaptive mesh refinement (AMR) is fully implemented on the GPU for an unstructured quadrilateral grid, which greatly reduces the frequency of data exchange between GPU and CPU. Specifically, the AMR is processed with atomic operations to parallelize the list operations, and a null memory recycling scheme is realized to improve the efficiency of memory utilization. Results obtained on GPUs agree very well with the exact or experimental results in the literature. An acceleration ratio of 4 is obtained between the parallel code running on the older GT9800 GPU and the serial code running on an E3-1230 V2 CPU. With the optimizations of configuring a larger L1 cache and adopting shared-memory-based atomic operations on the newer C2050 GPU, an acceleration ratio of 20 is achieved. The parallelized cell-based AMR process achieves a 2x speedup on the GT9800 and 18x on the Tesla C2050, which demonstrates that running the cell-based AMR method in parallel on a GPU is feasible and efficient. Our results also indicate that new developments in GPU architecture benefit fluid dynamics computing significantly.
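The atomic-operation pattern for parallel list building mentioned above can be illustrated on the CPU. This is a hedged sketch of the general technique (each worker reserves a slot in a shared array with an atomic fetch-and-add, then writes without races), not the paper's GPU code; the lock-based counter stands in for a hardware atomicAdd.

```python
import threading

class AtomicCounter:
    """Emulates an atomic fetch-and-add (atomicAdd on a GPU)."""
    def __init__(self):
        self._n = 0
        self._lock = threading.Lock()

    def fetch_add(self, k=1):
        with self._lock:
            slot = self._n
            self._n += k
            return slot

def build_refine_list(flags, nthreads=4):
    """Compact the indices of flagged cells into a list in parallel:
    each thread reserves a unique output slot via fetch_add."""
    out = [None] * sum(flags)
    counter = AtomicCounter()

    def worker(tid):
        for cell in range(tid, len(flags), nthreads):
            if flags[cell]:
                out[counter.fetch_add()] = cell  # reserved slot, no races

    threads = [threading.Thread(target=worker, args=(t,)) for t in range(nthreads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return out

flags = [c % 3 == 0 for c in range(20)]   # cells flagged for refinement
refine = build_refine_list(flags)
```

The output order is nondeterministic, but every flagged cell appears exactly once, which is all AMR needs before processing the refinement list.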

  9. Accelerated test plan for nickel cadmium spacecraft batteries

    NASA Technical Reports Server (NTRS)

    Hennigan, T. J.

    1973-01-01

An accelerated test matrix is outlined that includes acceptance, baseline, and post-cycling tests; chemical and physical analyses; and the data analysis procedures to be used in determining the feasibility of an accelerated test for sealed nickel cadmium cells.

  10. Recent improvements of reactor physics codes in MHI

    NASA Astrophysics Data System (ADS)

    Kosaka, Shinya; Yamaji, Kazuya; Kirimura, Kazuki; Kamiyama, Yohei; Matsumoto, Hideki

    2015-12-01

This paper introduces recent improvements to the reactor physics codes at Mitsubishi Heavy Industries, Ltd. (MHI). MHI has developed a new neutronics design code system, Galaxy/Cosmo-S (GCS), for PWR core analysis. After TEPCO's Fukushima Daiichi accident, designers are required to consider design extension conditions that were not covered explicitly by earlier safety licensing analyses. Under these circumstances, MHI made several improvements to the GCS code system. A new resonance calculation model for the lattice physics code and a homogeneous cross-section representation model for the core simulator have been developed to cover a wider range of core conditions corresponding to severe accident states, such as anticipated transient without scram (ATWS) analysis and criticality evaluation of a dried-up spent fuel pit. As a result of these improvements, the GCS code system is applicable to a very wide range of core conditions with good accuracy, as long as the fuel is not damaged. In this paper, the outline of the GCS code system is described briefly and recent development activities are presented.

  11. Recent improvements of reactor physics codes in MHI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kosaka, Shinya, E-mail: shinya-kosaka@mhi.co.jp; Yamaji, Kazuya; Kirimura, Kazuki

    2015-12-31

This paper introduces recent improvements to the reactor physics codes at Mitsubishi Heavy Industries, Ltd. (MHI). MHI has developed a new neutronics design code system, Galaxy/Cosmo-S (GCS), for PWR core analysis. After TEPCO's Fukushima Daiichi accident, designers are required to consider design extension conditions that were not covered explicitly by earlier safety licensing analyses. Under these circumstances, MHI made several improvements to the GCS code system. A new resonance calculation model for the lattice physics code and a homogeneous cross-section representation model for the core simulator have been developed to cover a wider range of core conditions corresponding to severe accident states, such as anticipated transient without scram (ATWS) analysis and criticality evaluation of a dried-up spent fuel pit. As a result of these improvements, the GCS code system is applicable to a very wide range of core conditions with good accuracy, as long as the fuel is not damaged. In this paper, the outline of the GCS code system is described briefly and recent development activities are presented.

  12. Estimation of dose delivered to accelerator devices from stripping of 18.5 MeV/n 238U ions using the FLUKA code

    NASA Astrophysics Data System (ADS)

    Oranj, Leila Mokhtari; Lee, Hee-Seock; Leitner, Mario Santana

    2017-12-01

In Korea, a heavy-ion accelerator facility (RAON) has been designed for the production of rare isotopes. The 90° bending section of this accelerator includes a 1.3-μm carbon stripper followed by two dipole magnets and other devices. The incident beam consists of 18.5 MeV/n 238U33+,34+ ions passing through the carbon stripper at the beginning of the section. The two dipoles are tuned to transport 238U ions with the specific charge states 77+, 78+, 79+, 80+, and 81+; other ions are deflected at the bends and cause beam losses. These beam losses are a concern for the devices of the transport/beam line. The absorbed dose in the devices and the prompt dose in the tunnel were calculated using the FLUKA code in order to estimate the radiation damage to the devices located in the 90° bending section and for radiation protection purposes. A novel method to transport a multi-charge-state 238U ion beam was applied in the FLUKA code, using the charge distribution of 238U ions after the stripper obtained from the LISE++ code. The calculated results showed that the absorbed dose in the devices is influenced by the geometrical arrangement. The maximum dose was observed at the coils of the first, second, fourth, and fifth quadrupoles placed after the first dipole magnet. The integrated doses for 30 years of operation with 9.5 pμA of 238U ions were about 2 MGy for those quadrupoles. In conclusion, protection of the devices, particularly the quadrupoles, would be necessary to reduce the damage. Moreover, the results showed that the prompt radiation penetrated within the first 60-120 cm of concrete.
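The multi-charge-state source setup can be sketched as weighted sampling over a post-stripper charge distribution. The fractions below are purely illustrative stand-ins for the LISE++ output, and the accepted-charge window follows the 77+ to 81+ dipole tune described above:

```python
import random

# Hypothetical post-stripper charge-state fractions for 238U; the real
# distribution comes from LISE++, as in the paper. Values are illustrative.
charge_states = [76, 77, 78, 79, 80, 81, 82]
fractions     = [0.05, 0.15, 0.22, 0.24, 0.20, 0.10, 0.04]

def sample_charges(n, seed=0):
    """Draw n ion charge states from the (assumed) stripper distribution."""
    rng = random.Random(seed)
    return rng.choices(charge_states, weights=fractions, k=n)

# Ions outside the 77+..81+ acceptance of the dipoles are the lost beam
# that deposits dose in the bending-section devices.
accepted = set(range(77, 82))
sample = sample_charges(100000)
loss_fraction = sum(q not in accepted for q in sample) / len(sample)
```

With these assumed fractions, about 9% of the beam falls outside the acceptance; in the actual study the loss fraction and loss locations follow from the LISE++ distribution and the magnet optics.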

  13. Numerical investigation on the effects of acceleration reversal times in Rayleigh-Taylor Instability with multiple reversals

    NASA Astrophysics Data System (ADS)

    Farley, Zachary; Aslangil, Denis; Banerjee, Arindam; Lawrie, Andrew G. W.

    2017-11-01

An implicit large eddy simulation (ILES) code, MOBILE, is used to explore the growth rate of the mixing-layer width of the acceleration-driven Rayleigh-Taylor instability (RTI) under variable acceleration histories. The computations consist of a series of accel-decel-accel (ADA) cases in addition to baseline constant-acceleration and accel-decel (AD) cases. The ADA cases vary the time of the second acceleration reversal (t2) and show drastic differences in growth rates. During the deceleration phase, the kinetic energy of the flow is shifted into internal wave-like patterns; these waves are evidenced by the differences in growth rate during the second acceleration phase across the ADA cases. Here, we investigate global parameters including the mixing width, growth rates, and the anisotropy tensor of the kinetic energy to better understand the growth behavior during the re-acceleration period. The authors acknowledge financial support from DOE-SSAA (DE-NA0003195) and NSF CAREER (#1453056) awards.

  14. Multilevel acceleration of scattering-source iterations with application to electron transport

    DOE PAGES

    Drumm, Clif; Fan, Wesley

    2017-08-18

Acceleration/preconditioning strategies available in the SCEPTRE radiation transport code are described. A flexible transport synthetic acceleration (TSA) algorithm that uses a low-order discrete-ordinates (SN) or spherical-harmonics (PN) solve to accelerate convergence of a high-order SN source-iteration (SI) solve is described. Convergence of the low-order solves can be further accelerated by applying off-the-shelf incomplete-factorization or algebraic-multigrid methods. Also available is an algorithm that uses a generalized minimum residual (GMRES) iterative method rather than SI for convergence, using a parallel sweep-based solver to build up a Krylov subspace; TSA has been applied as a preconditioner to accelerate the convergence of the GMRES iterations. The methods are applied to several problems involving electron transport and to problems with artificial cross sections with large scattering ratios, and are compared and evaluated with respect to material discontinuities and scattering anisotropy. The observed accelerations are highly problem dependent, but speedup factors of around 10 have been observed in typical applications.
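Why large scattering ratios slow source iteration can be seen in a zero-dimensional toy: the SI update φ ← cφ + q converges geometrically with ratio c (the scattering ratio), so the iteration count grows like 1/(1 − c). The sketch below only illustrates the slowdown that TSA- and GMRES-type acceleration addresses; it is not SCEPTRE itself.

```python
def source_iteration(c, q=1.0, tol=1e-8):
    """Fixed-point iteration phi <- c*phi + q for the zero-dimensional
    balance phi = c*phi + q (exact solution q/(1-c)); returns the
    number of iterations needed to reach relative error tol."""
    exact = q / (1.0 - c)
    phi, n = 0.0, 0
    while abs(phi - exact) > tol * exact:
        phi = c * phi + q
        n += 1
    return n

iters_moderate = source_iteration(0.5)   # converges in a few dozen sweeps
iters_high = source_iteration(0.99)      # well over a thousand sweeps
```

Since the error shrinks by a factor c per sweep, the count scales as log(tol)/log(c); an acceleration scheme (TSA, diffusion synthetic acceleration, or a Krylov method such as GMRES) removes exactly this near-unity-c stagnation.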

  15. SHARP User Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Y. Q.; Shemon, E. R.; Thomas, J. W.

SHARP is an advanced modeling and simulation toolkit for the analysis of nuclear reactors. It comprises several components, including physical modeling tools, tools to integrate the physics codes for multi-physics analyses, and a set of tools to couple the codes within the MOAB framework. Physics modules currently include the neutronics code PROTEUS, the thermal-hydraulics code Nek5000, and the structural mechanics code Diablo. This manual focuses on performing multi-physics calculations with the SHARP toolkit. Manuals for the three individual physics modules are available with the SHARP distribution, so that the user can either carry out a basic multi-physics calculation or perform advanced development requiring in-depth knowledge of these codes. This manual provides step-by-step instructions on employing SHARP: how to download and install the code, build the drivers for a test case, perform a calculation, and visualize the results. Since SHARP has specific library and environment dependencies, users are strongly encouraged to read this manual before installing SHARP. Verification test cases are included to check proper installation of each module. A new user should first follow the step-by-step instructions for a test problem in this manual to understand the basic procedure before using SHARP for his or her own analysis. Reference output and scripts are provided with the test cases in order to verify correct installation and execution of the SHARP package. At the end of this manual, detailed instructions are provided on how to create a new test case so that users can perform novel multi-physics calculations with SHARP. A list of frequently asked questions at the end of the manual helps the user troubleshoot issues.

  16. TU-AB-BRC-10: Modeling of Radiotherapy Linac Source Terms Using ARCHER Monte Carlo Code: Performance Comparison of GPU and MIC Computing Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, T; Lin, H; Xu, X

Purpose: (1) To perform phase-space (PS) based source modeling for Tomotherapy and Varian TrueBeam 6 MV linacs, (2) to examine the accuracy and performance of the ARCHER Monte Carlo code on a heterogeneous computing platform with Many Integrated Core coprocessors (MIC, aka Xeon Phi) and GPUs, and (3) to explore software micro-optimization methods. Methods: The patient-specific source of the Tomotherapy and Varian TrueBeam linacs was modeled using the PS approach. For the helical Tomotherapy case, the PS data were calculated in our previous study (Su et al. 2014, Medical Physics 41(7)). For the single-view Varian TrueBeam case, we analytically derived them from the raw patient-independent PS data in IAEA's database, partial geometry information of the jaw and MLC, and the fluence map. The phantom was generated from DICOM images. The Monte Carlo simulation was performed by the ARCHER MIC and GPU codes, which were benchmarked against a modified parallel DPM code. Software micro-optimization was systematically conducted, focusing on SIMD vectorization of tight for-loops and data prefetch, with the ultimate goal of increasing 512-bit register utilization and reducing memory access latency. Results: Dose calculations were performed for two clinical cases, a Tomotherapy-based prostate cancer treatment and a TrueBeam-based left breast treatment. ARCHER was verified against the DPM code; the statistical uncertainty of the dose to the PTV was less than 1%. Using double precision, the total wall time of the multithreaded CPU code on an X5650 CPU was 339 seconds for the Tomotherapy case and 131 seconds for the TrueBeam case, while on three 5110P MICs it was reduced to 79 and 59 seconds, respectively. The single-precision GPU code on a K40 GPU took 45 seconds for the Tomotherapy dose calculation. Conclusion: We have extended ARCHER, the MIC- and GPU-based Monte Carlo dose engine, to Tomotherapy and TrueBeam dose calculations.

  17. Distant star clusters of the Milky Way in MOND

    NASA Astrophysics Data System (ADS)

    Haghi, H.; Baumgardt, H.; Kroupa, P.

    2011-03-01

We determine the mean velocity dispersion of six Galactic outer-halo globular clusters, AM 1, Eridanus, Pal 3, Pal 4, Pal 15, and Arp 2, in the weak-acceleration regime to test classical Newtonian dynamics against modified Newtonian dynamics (MOND). Owing to the nonlinearity of MOND's Poisson equation, the internal dynamics of the clusters is affected, beyond tidal effects, by the external field in which they are immersed. For the studied clusters, particle accelerations are much lower than the critical acceleration a0 of MOND, but the motion of stars is dominated neither by the internal accelerations (ai ≫ ae) nor by the external ones (ae ≫ ai). We use the N-body code N-MODY in our analysis, a particle-mesh code with a numerical MOND potential solver developed by Ciotti et al. (2006, ApJ, 640, 741), to derive the line-of-sight velocity dispersion including the external field effect. We show that Newtonian dynamics predicts a low velocity dispersion for each cluster, while in modified Newtonian dynamics the velocity dispersion is much higher. We calculate the minimum number of measured stars necessary to distinguish between Newtonian gravity and MOND with the Kolmogorov-Smirnov test, and show that for most clusters the velocities of between 30 and 80 stars must be measured to distinguish the two cases. The observational measurement of the line-of-sight velocity dispersion of these clusters will therefore provide a test of MOND.
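As a back-of-the-envelope illustration of why MOND predicts the higher dispersion, one can compare a common Newtonian half-mass estimator, σ² ≈ 0.4·GM/r_h, with Milgrom's deep-MOND relation for an isolated isothermal system, σ⁴ = (4/81)·GMa₀. This is only a sketch: the cluster mass and radius below are illustrative, and the external field effect that the paper stresses is ignored here.

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
A0 = 1.2e-10         # MOND critical acceleration, m s^-2
MSUN = 1.989e30      # solar mass, kg
PC = 3.086e16        # parsec, m

def sigma_newton(mass_kg, r_half_m):
    """Rough Newtonian dispersion estimate, sigma^2 ~ 0.4*G*M/r_h."""
    return (0.4 * G * mass_kg / r_half_m) ** 0.5

def sigma_mond_isolated(mass_kg):
    """Milgrom's deep-MOND isothermal relation, sigma^4 = (4/81)*G*M*a0."""
    return (4.0 / 81.0 * G * mass_kg * A0) ** 0.25

# Illustrative sparse outer-halo cluster (not one of the six studied).
m, r_h = 1e4 * MSUN, 20.0 * PC
sigma_n = sigma_newton(m, r_h)       # Newtonian estimate, m/s
sigma_m = sigma_mond_isolated(m)     # deep-MOND estimate, m/s; larger
```

For such low-density systems the internal accelerations sit far below a₀, so the deep-MOND value exceeds the Newtonian one; distinguishing the two observationally is exactly the test the paper proposes.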

  18. Physical Activity and Influenza-Coded Outpatient Visits, a Population-Based Cohort Study

    PubMed Central

    Siu, Eric; Campitelli, Michael A.; Kwong, Jeffrey C.

    2012-01-01

    Background Although the benefits of physical activity in preventing chronic medical conditions are well established, its impacts on infectious diseases, and seasonal influenza in particular, are less clearly defined. We examined the association between physical activity and influenza-coded outpatient visits, as a proxy for influenza infection. Methodology/Principal Findings We conducted a cohort study of Ontario respondents to Statistics Canada’s population health surveys over 12 influenza seasons. We assessed physical activity levels through survey responses, and influenza-coded physician office and emergency department visits through physician billing claims. We used logistic regression to estimate the risk of influenza-coded outpatient visits during influenza seasons. The cohort comprised 114,364 survey respondents who contributed 357,466 person-influenza seasons of observation. Compared to inactive individuals, moderately active (OR 0.83; 95% CI 0.74–0.94) and active (OR 0.87; 95% CI 0.77–0.98) individuals were less likely to experience an influenza-coded visit. Stratifying by age, the protective effect of physical activity remained significant for individuals <65 years (active OR 0.86; 95% CI 0.75–0.98, moderately active: OR 0.85; 95% CI 0.74–0.97) but not for individuals ≥65 years. The main limitations of this study were the use of influenza-coded outpatient visits rather than laboratory-confirmed influenza as the outcome measure, the reliance on self-report for assessing physical activity and various covariates, and the observational study design. Conclusion/Significance Moderate to high amounts of physical activity may be associated with reduced risk of influenza for individuals <65 years. Future research should use laboratory-confirmed influenza outcomes to confirm the association between physical activity and influenza. PMID:22737242
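For readers unfamiliar with the odds-ratio (OR) language used above, a minimal sketch of how an OR and its 95% confidence interval are computed from a 2x2 table follows. The counts are invented for illustration, and the study itself used logistic regression with covariate adjustment rather than a raw 2x2 table.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR with a Wald 95% CI from a 2x2 table:
    a, b = outcome yes/no among the exposed (e.g. active);
    c, d = outcome yes/no among the unexposed (e.g. inactive)."""
    log_or = math.log((a * d) / (b * c))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    return (math.exp(log_or),
            math.exp(log_or - z * se),
            math.exp(log_or + z * se))

# Invented counts: 120/1000 visits among active, 150/1000 among inactive.
or_, lo, hi = odds_ratio_ci(120, 880, 150, 850)
```

An OR below 1 with a CI excluding 1, as for the moderately active group in the study (OR 0.83; 95% CI 0.74-0.94), indicates a statistically significant protective association.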

  19. Physics at the SPS.

    PubMed

    Gatignon, L

    2018-05-01

The CERN Super Proton Synchrotron (SPS) has delivered a variety of beams to a vigorous fixed target physics program since 1978. In this paper, we restrict ourselves to the description of a few illustrative examples in the ongoing physics program at the SPS. We will outline the physics aims of the COmmon Muon Proton Apparatus for Structure and Spectroscopy (COMPASS), north area 64 (NA64), north area 62 (NA62), north area 61 (NA61), and advanced proton driven plasma wakefield acceleration experiment (AWAKE). COMPASS studies the structure of the proton and more specifically of its spin. NA64 searches for the dark photon A', which is the messenger for interactions between normal and dark matter. The NA62 experiment aims at a 10% precision measurement of the very rare decay K+ → π+νν. As this decay mode can be calculated very precisely in the Standard Model, it offers a very good opportunity to look for new physics beyond the Standard Model. The NA61/SHINE experiment studies the phase transition to Quark Gluon Plasma, a state in which the quarks and gluons that form the proton and the neutron are de-confined. Finally, AWAKE investigates proton-driven wake field acceleration: a promising technique to accelerate electrons with very high accelerating gradients. The Physics Beyond Colliders study at CERN is paving the way for a significant and diversified continuation of this already rich and compelling physics program that is complementary to the one at the big colliders like the Large Hadron Collider.

  20. Physics at the SPS

    NASA Astrophysics Data System (ADS)

    Gatignon, L.

    2018-05-01

    The CERN Super Proton Synchrotron (SPS) has delivered a variety of beams to a vigorous fixed target physics program since 1978. In this paper, we restrict ourselves to the description of a few illustrative examples in the ongoing physics program at the SPS. We will outline the physics aims of the COmmon Muon Proton Apparatus for Structure and Spectroscopy (COMPASS), north area 64 (NA64), north area 62 (NA62), north area 61 (NA61), and advanced proton driven plasma wakefield acceleration experiment (AWAKE). COMPASS studies the structure of the proton and more specifically of its spin. NA64 searches for the dark photon A', which is the messenger for interactions between normal and dark matter. The NA62 experiment aims at a 10% precision measurement of the very rare decay K+ → π+νν. As this decay mode can be calculated very precisely in the Standard Model, it offers a very good opportunity to look for new physics beyond the Standard Model. The NA61/SHINE experiment studies the phase transition to Quark Gluon Plasma, a state in which the quarks and gluons that form the proton and the neutron are de-confined. Finally, AWAKE investigates proton-driven wake field acceleration: a promising technique to accelerate electrons with very high accelerating gradients. The Physics Beyond Colliders study at CERN is paving the way for a significant and diversified continuation of this already rich and compelling physics program that is complementary to the one at the big colliders like the Large Hadron Collider.
